Safety, Security and Privacy for Cyber-Physical Systems (Lecture Notes in Control and Information Sciences, 486). ISBN 3030650472, 9783030650476

This book presents an in-depth overview of recent work related to the safety, security, and privacy of cyber-physical systems.


English · Pages: 400 [392] · Year: 2021


Table of contents:
Contents
1 Introduction to the Book
1.1 Motivation and Objectives
1.2 Safety, Security, and Privacy: A Taxonomy
1.2.1 The Security Triad
1.2.2 The Attack Triad
1.2.3 The Mitigation Triad
1.3 Structure and Content
References
2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment
2.1 Introduction to Fault-Tolerant Networked Control Systems
2.1.1 Networked Control Systems
2.1.2 Contribution to Fault-Tolerant Networked Control Systems
2.1.3 Notation
2.2 Flexible Task Assignment Problem
2.2.1 Description of the Subsystems
2.2.2 Description of the Controllers
2.2.3 Description of the Cooperative Task
2.3 Method for Fault-Tolerant Task Assignment
2.3.1 Consistent Subtasks
2.3.2 Satisfiable Functions
2.3.3 Autonomy of the Subsystems
2.3.4 Summary: Design Steps for Fault-Tolerant Task Assignment
2.4 Example: Transportation System
2.4.1 Cooperative Task
2.4.2 Active and Passive Subsystems
2.4.3 Scenario 1: Fault-Free Subsystems
2.4.4 Scenario 2: Faulty Subsystem
2.5 Conclusions
References
3 Resilient Control Under Denial-of-Service: Results and Research Directions
3.1 Introduction
3.2 Stability Under Denial-of-Service
3.2.1 Basic Framework
3.2.2 Input-to-State Stability Under DoS
3.2.3 Research Directions: Scheduling Design and Min–Max Problems
3.3 Robust Control Design
3.3.1 Control Schemes Based on Finite-Time Observers
3.3.2 Performant Observers and Packetized Control
3.4 Distributed Systems
3.4.1 DoS-Resilient Distributed Consensus
3.4.2 Complex Network Systems and Critical Links
3.5 Conclusions
References
4 Stealthy False Data Injection Attacks in Feedback Systems Revisited
4.1 Introduction
4.2 Modeling and Assumptions
4.2.1 System Model
4.2.2 Operator and Attacker Models
4.3 False Data Injection Attacks
4.3.1 Sensor Attacks
4.3.2 Actuator Attacks
4.4 Case Study
4.4.1 Exponential Sensor Attack
4.4.2 Sum-of-Exponentials Sensor Attack
4.4.3 Sinusoidal Actuator Attack
4.5 Defenses
4.6 Conclusion
References
5 Detection of Attacks in Cyber-Physical Systems: Theory and Applications
5.1 Introduction
5.2 Problem Formulation
5.3 Stealthiness in Stochastic Systems
5.4 Fundamental Performance Limitations
5.4.1 Converse
5.4.2 Achievability for Right-Invertible Systems
5.4.3 Achievability for Non-Right-Invertible Systems
5.5 Numerical Results
5.6 Conclusion
References
6 Security Metrics for Control Systems
6.1 Introduction
6.2 Closed-Loop System Under Cyber- and Physical Attacks
6.2.1 Attack Scenario and Adversary Model
6.2.2 Toward Metrics for Security Analysis
6.3 Classical Metrics in Robust Control and Fault Detection
6.3.1 The H∞ Norm
6.3.2 The H− Index
6.3.3 Mixing H∞ and H−
6.4 A Security Metric for Analysis and Design: The Output-to-Output Gain
6.4.1 Security Analysis with the Output-to-Output Gain
6.4.2 Security Metrics-Based Design of Controller and Observer
6.5 Conclusions
References
7 The Secure State Estimation Problem
7.1 Introduction
7.2 The Secure State Estimation Problem
7.2.1 Notation
7.2.2 Threat Model and Attack Assumptions
7.2.3 Attack Detection and Secure State Estimation Problems
7.3 The s-Sparse Observability Condition
7.3.1 Sufficient and Necessary Conditions for Linear Time-Invariant Systems
7.3.2 Extension to Nonlinear Systems—A Coding-Theoretic Interpretation
7.4 Algorithms for Attack Detection and Secure State Estimation
7.4.1 Attack Detection Algorithm
7.4.2 Secure State Estimator: Brute Force Search
7.4.3 Secure State Estimator: Satisfiability Modulo Convex Programming
7.4.4 Numerical Evaluation
7.5 Special Cases for Polynomial-Time Secure State Estimation
7.6 Conclusions and Future Work
References
8 Active Detection Against Replay Attack: A Survey on Watermark Design for Cyber-Physical Systems
8.1 Introduction
8.2 Problem Setup
8.2.1 System Description
8.2.2 Attack Model
8.3 Physical Watermark Scheme
8.3.1 LQG Performance Loss
8.3.2 Detection Performance
8.3.3 The Trade-Off Between Control and Detection Performance
8.4 Extensions of Physical Watermark Scheme
8.4.1 A Non-IID Watermarking Design Approach
8.4.2 An Online Design Approach
8.4.3 A Multiplicative Watermarking Design
8.5 Conclusion and Future Work
References
9 Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme
9.1 Introduction
9.2 Problem Formulation
9.2.1 False Data Injection Attacks
9.3 Multiplicative Watermarking Scheme
9.3.1 Watermarking Scheme: A Hybrid System Approach
9.3.2 Watermarking Scheme Design Principles
9.3.3 Stability Analysis
9.3.4 An Application Example
9.4 Detection of Stealthy False Data Injection Attacks
9.5 Numerical Study
9.6 Conclusions
References
10 Differentially Private Anomaly Detection for Interconnected Systems
10.1 Introduction
10.1.1 Contributions
10.1.2 Related Work
10.2 Problem Formulation
10.2.1 Differential Privacy
10.2.2 The Case of an Isolated System
10.2.3 The Case of Interconnected Systems
10.3 Diagnosis in Absence of Privacy Constraint
10.3.1 Model-Based Residual Generation
10.3.2 A Probabilistic Detection Threshold
10.3.3 Detectability Analysis
10.4 Privacy and Its Cost
10.4.1 Privacy Mechanism
10.4.2 Residual and Threshold Generation Under Privacy
10.4.3 Numerical Study
10.5 Conclusions
References
11 Remote State Estimation in the Presence of an Eavesdropper
11.1 Introduction
11.2 System Model
11.3 Covariance-Based Measure of Security
11.3.1 Eavesdropper Error Covariance Known at Remote Estimator
11.3.2 Eavesdropper Error Covariance Unknown at Remote Estimator
11.3.3 Infinite Horizon
11.4 Information-Based Measure of Security
11.4.1 Eavesdropper Error Covariance Known at Remote Estimator
11.4.2 Eavesdropper Error Covariance Unknown at Remote Estimator
11.4.3 Infinite Horizon
11.5 Numerical Studies
11.5.1 Finite Horizon
11.5.2 Infinite Horizon
11.6 Conclusion
References
12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption
12.1 Introduction
12.2 Preliminaries and Notations
12.3 Background on Paillier Encryption
12.3.1 Fixed-Point Operations
12.3.2 Paillier Encryption
12.4 NCS Architecture and Problem Statement
12.4.1 Static Output Feedback
12.4.2 Combination of Basis Functions
12.4.3 Two-Server Structure with State Measurement Only
12.5 Main Result
12.5.1 Robust Stabilization
12.5.2 Disturbance Free Case
12.5.3 Security Enhancement
12.6 Homogeneous Control Systems
12.7 An Illustrative Example
12.8 Conclusions and Future Work
12.9 Proof of Theorem 12.1
12.10 Proof of Theorem 12.2
References
13 Deception-as-Defense Framework for Cyber-Physical Systems
13.1 Introduction
13.2 Deception Theory in Literature
13.2.1 Economics Literature
13.2.2 Engineering Literature
13.3 Deception-as-Defense Framework
13.4 Game Formulation
13.5 Quadratic Costs and Information of Interest
13.5.1 Gaussian Information of Interest
13.6 Communication Systems
13.7 Control Systems
13.8 Uncertainty in the Uninformed Agent's Objective
13.9 Partial or Noisy Measurements
13.10 Conclusion
References
14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems
14.1 Introduction
14.2 Cyber-Attacks Against ITS and CPS
14.2.1 Attacks Against ITS
14.2.2 Attacks Against CPS
14.3 Cyber-Risks: Byproduct of the IT Revolution
14.3.1 Vulnerabilities
14.3.2 Threats
14.3.3 Impact
14.3.4 Comparing Cyber-Risks with Other Risks
14.4 Cyber-Risks on CPS
14.4.1 Obstacles to Securing CPSs
14.5 Facing the Risks
14.5.1 Risk Assessment/Metrics
14.6 Risk Treatment
14.6.1 Diversification
14.6.2 Decisions Under Uncertainty
14.7 Evaluating Risks
14.7.1 Experimental Studies
14.7.2 Observational Studies
14.7.3 Measuring the Impact of Risk Treatment
14.7.4 Estimating Risks: Detecting and Correcting Biased Estimates
14.7.5 Combining Risk Reduction with Risk Transfer via Insurance
14.8 Conclusions
References
15 Cyber-Insurance
15.1 Introduction
15.2 Cyber-Insurance
15.2.1 Insurance of Material Versus Cyber-Assets
15.2.2 Cyber-Insurance Ecosystem
15.2.3 Limitations of Cyber-Insurance Ecosystem
15.3 Introduction to Insurance
15.3.1 Types of Insurance
15.3.2 Model of Risk Pooling
15.3.3 Premium Calculation Principles
15.3.4 Insurance Markets and Reinsurance
15.4 Insurance in Practice
15.4.1 Agent Preferences and Insurance Instruments
15.4.2 Imperfections of Insurance Markets
15.4.3 Regulation
15.5 Extreme Events
15.5.1 Modeling Extreme Events
15.5.2 Managing Extreme Risks
15.5.3 Case-Study: Effects of Insurance on Security
15.6 Concluding Remarks
References
16 Concluding Remarks and Future Outlook
16.1 Looking Back: The Contributions of This Book
16.1.1 Deception Attacks and Loss of Integrity
16.1.2 Disclosure Attacks and Loss of Confidentiality
16.1.3 Disruption Attacks and Loss of Availability
16.1.4 General Contributions
16.2 Looking Forward: Future Outlook
16.2.1 Specific Areas to Explore
16.2.2 General Advancements
References
Index

Lecture Notes in Control and Information Sciences 486

Riccardo M. G. Ferrari · André M. H. Teixeira, Editors

Safety, Security and Privacy for Cyber-Physical Systems

Lecture Notes in Control and Information Sciences Volume 486

Series Editors:
Frank Allgöwer, Institute for Systems Theory and Automatic Control, Universität Stuttgart, Stuttgart, Germany
Manfred Morari, Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA

Advisory Editors:
P. Fleming, University of Sheffield, UK
P. Kokotovic, University of California, Santa Barbara, CA, USA
A. B. Kurzhanski, Moscow State University, Moscow, Russia
H. Kwakernaak, University of Twente, Enschede, The Netherlands
A. Rantzer, Lund Institute of Technology, Lund, Sweden
J. N. Tsitsiklis, MIT, Cambridge, MA, USA

This series reports new developments in the fields of control and information sciences—quickly, informally and at a high level. The type of material considered for publication includes:

1. Preliminary drafts of monographs and advanced textbooks
2. Lectures on a new field, or presenting a new angle on a classical field
3. Research reports
4. Reports of meetings, provided they are (a) of exceptional interest and (b) devoted to a specific topic. The timeliness of subject material is very important.

Indexed by EI-Compendex, SCOPUS, Ulrich’s, MathSciNet, Current Index to Statistics, Current Mathematical Publications, Mathematical Reviews, IngentaConnect, MetaPress and Springerlink.

More information about this series at http://www.springer.com/series/642

Riccardo M. G. Ferrari · André M. H. Teixeira

Editors

Safety, Security and Privacy for Cyber-Physical Systems


Editors
Riccardo M. G. Ferrari, Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
André M. H. Teixeira, Department of Electrical Engineering, Uppsala University, Uppsala, Sweden

ISSN 0170-8643 · ISSN 1610-7411 (electronic)
Lecture Notes in Control and Information Sciences
ISBN 978-3-030-65047-6 · ISBN 978-3-030-65048-3 (eBook)
https://doi.org/10.1007/978-3-030-65048-3
Mathematics Subject Classification: 37N35, 47N70, 58E25, 93A14, 93C83, 93E10, 93E12, 94A60, 94A13, 94A62, 94C12, 14G50
© Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Contents

1 Introduction to the Book (Riccardo M. G. Ferrari and André M. H. Teixeira), p. 1
2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment (Kai Schenk and Jan Lunze), p. 9
3 Resilient Control Under Denial-of-Service: Results and Research Directions (Claudio De Persis and Pietro Tesi), p. 41
4 Stealthy False Data Injection Attacks in Feedback Systems Revisited (Henrik Sandberg), p. 61
5 Detection of Attacks in Cyber-Physical Systems: Theory and Applications (Vaibhav Katewa, Cheng-Zong Bai, Vijay Gupta, and Fabio Pasqualetti), p. 79
6 Security Metrics for Control Systems (André M. H. Teixeira), p. 99
7 The Secure State Estimation Problem (Yasser Shoukry and Paulo Tabuada), p. 123
8 Active Detection Against Replay Attack: A Survey on Watermark Design for Cyber-Physical Systems (Hanxiao Liu, Yilin Mo, and Karl Henrik Johansson), p. 145
9 Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme (Riccardo M. G. Ferrari and André M. H. Teixeira), p. 173
10 Differentially Private Anomaly Detection for Interconnected Systems (Riccardo M. G. Ferrari, Kwassi H. Degue, and Jerome Le Ny), p. 203
11 Remote State Estimation in the Presence of an Eavesdropper (Alex S. Leong, Daniel E. Quevedo, Daniel Dolz, and Subhrakanti Dey), p. 231
12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption (Yankai Lin, Farhad Farokhi, Iman Shames, and Dragan Nešić), p. 257
13 Deception-as-Defense Framework for Cyber-Physical Systems (Muhammed O. Sayin and Tamer Başar), p. 287
14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems (Carlos Barreto, Galina Schwartz, and Alvaro A. Cardenas), p. 319
15 Cyber-Insurance (Carlos Barreto, Galina Schwartz, and Alvaro A. Cardenas), p. 347
16 Concluding Remarks and Future Outlook (Riccardo M. G. Ferrari), p. 377
Index, p. 391

Chapter 1

Introduction to the Book
Riccardo M. G. Ferrari and André M. H. Teixeira

Abstract In this introductory chapter, we illustrate the book’s motivation and objective. In particular, the book takes its raison d’être from the need for protecting Cyber-Physical Systems (CPSs) against threats originating either in the cyber or in the physical domain. Exploring the concepts of safety, security, and privacy for CPSs thus emerged as the natural goal to reach. In order to better support this objective and to help the reader to navigate the book contents, a taxonomy of the above-mentioned concepts is introduced, based on a set of three triads, including the well-known Confidentiality, Integrity, and Availability triad which was introduced in the Information Technology security literature.

1.1 Motivation and Objectives Cyber-physical systems (CPSs) represent a class of networked control systems with vast and promising applications. This class includes, for instance, smart cities, intelligent transportation systems based on fleets of cooperative and autonomous vehicles, and distributed sensing and control solutions that leverage Internet-of-Things (IoT) devices. As a common trait, these systems are expected to provide important functionalities that may positively influence our life and society. However, these positive outcomes may be hindered by novel threats to the safety of CPSs, such as malicious cyber-attacks that could negatively affect the physical domain. Furthermore, the sheer amount of data gathered, exchanged, and processed by these architectures is going to pose fundamental societal questions regarding privacy, confidentiality, and the fair use of such data. The intended focus of this
edited book is on novel approaches to the analysis and design of safety, security, and privacy in CPSs. One aspect of this work relates to security and safety enhancements for CPSs, which may be achieved by implementing methods for monitoring and controlling the aforementioned systems in order to detect, identify, and possibly counteract malicious cyber-attacks. Another topic relates to analysis methods, since deciding which safety and security measures to deploy requires a thorough analysis of the detectability of cyber-attacks and of their impact on the closed-loop performance of CPSs under different system designs. The third aspect that will be addressed, as anticipated, is the preservation of privacy and confidentiality in CPSs. Indeed, monitoring or controlling a large-scale CPS would require the acquisition and processing of large amounts of data that can expose user habits or preferences, as has already been shown for the electrical consumption of households equipped with smart meters. Likewise, in the case of competing operators in a given market, the acquisition and exchange of data for monitoring purposes may be hampered by the operators' desire to keep their industrial and commercial practices secret. As a first step toward delivering this book's promises, we will introduce definitions of the fundamental concepts of safety, security, and privacy. Based on these, we will derive a taxonomy that we intend to be useful for presenting and systematically positioning the subsequent chapters.

1.2 Safety, Security, and Privacy: A Taxonomy Safety, security, and privacy are central terms in the technical and scientific literature dealing with CPSs, as the following chapters will hopefully prove. It can thus come as a surprise to notice that the terms “safety” and “security” are often used interchangeably. Such a blurry distinction can be to some extent justified by a linguistic deficit, as explained in [7]. Indeed, most languages use only a single term to represent both (such as Sicherheit in German or sicurezza in Italian), while English has separate words, as does, for instance, French (sûreté and sécurité). The term “privacy”, instead, gained prominence in recent years due to the public debate about the social and ethical implications of the use of personal data by online and mobile services, and especially by social networks. Still, most such discussions are underpinned by a rather intuitive and implicit definition of privacy, rather than a formal one. Such a situation may prompt us to refresh the definitions of the very words appearing in the title of the present book. In doing so, we will also propose a taxonomy under which to classify the contributions of this book, which we feel will be beneficial for presenting the next chapters systematically and, later, for drawing concluding remarks and discussing the future outlook of this field of research. We thus propose to make use of the following definitions, inspired by the literature:

Safety A system is safe if the risk of intolerable effects on the environment, caused by accidental events originating inside the system, is below an acceptable level.
Security A system is secure if the risk of intolerable effects on the system, caused by malicious events originating in the environment, is below an acceptable level.
Privacy The privacy of an individual is guaranteed if no elaboration of a dataset including the individual's data will lead to acquiring more knowledge about the individual than what was available before.
In the definitions of the first two terms, inspired by [5, 7] and other works in the literature, we introduced two of the dualities highlighted by them: the system-environment one, and the accidental-malicious one. Although a plethora of variations on such definitions exist in the literature, we believe that these are actually best positioned to support us in the following discussion. According to these, for instance, a fault, being an accidental and internal event, is a safety issue, while a cyber-attack, being malicious and carried out from outside, must be classified as a security one. The definition of privacy, instead, is based on the pioneering work of [4] and is best suited for situations where privacy-sensitive data is contained in a larger data set. For instance, in the present field, we can consider as an example the case of a smart grid, where data from a large population of smart meters is used both for monitoring and control. At first sight, safety is the property we want to achieve as control systems engineers. Security and privacy may initially not be of concern, provided that no malicious event from outside a system, or no leak of private information, can lead to a loss of safety. A system capable of tolerating arbitrarily high amounts of stress and damage caused by internal or external events is termed resilient, and as such a resilient system will automatically guarantee safety. However, assuming unbounded resiliency is not a credible path when dealing with CPSs, as such unboundedness will come at an unreasonable cost. We thus need to acknowledge that in practice CPSs, being an inseparable composition of sensing and actuation (physical) and of computation and communication (cyber) capabilities [1], are vulnerable to security breaches on their cyber side causing loss of safety in their physical counterpart. It should thus come as no surprise that a huge body of literature has been devoted to defining, analyzing, and guaranteeing the (cyber) security of CPSs, from early works such as the mentioned [1] up to very recent ones cited in surveys and tutorials such as [2, 3, 6].
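Since the privacy definition above is formalized in Chap. 10 through Differential Privacy [4], a minimal sketch may help fix ideas. The snippet below is an illustration, not material from the book: it releases a noisy total of hypothetical smart-meter readings using a Laplace mechanism, and the readings, the sensitivity bound, and the value of epsilon are assumptions made purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smart-meter readings (kWh) for a small neighbourhood.
readings = np.array([1.2, 0.8, 3.4, 2.1, 0.5, 4.0])

def private_total(data, epsilon, sensitivity):
    """Return the total consumption plus Laplace noise scaled to
    sensitivity/epsilon, so that adding or removing one household
    changes the answer's distribution only by a bounded amount."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.sum() + noise

# Assume no single household consumes more than 5 kWh in the window,
# so the query's sensitivity to one individual's data is at most 5.
print(private_total(readings, epsilon=0.5, sensitivity=5.0))
```

A smaller epsilon injects more noise and hence gives a stronger (but less accurate) privacy guarantee; the formal treatment is deferred to Chap. 10.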

1.2.1 The Security Triad The problem of cyber-security has received a tremendous amount of attention from the Computer Science (CS) and Information Technology (IT) fields, and its solution rests on guaranteeing that the classic triad of Confidentiality, Integrity, and Availability (CIA) properties holds for the data of a given system [2]. Confidentiality requires that data, whether stored, transmitted, or processed, is concealed and can only be accessed by authorized entities.

Integrity means that data is trustworthy, as it cannot be altered in an unauthorized way. Availability is concerned with a system providing data in a timely way when it is requested to do so. It is interesting to address how the concepts originating from the CIA triad can be transferred to the CPS world, or the Operational Technology (OT) world, as it is called in the CS field. As aptly noted by [1] and other researchers in the control systems community, CPSs possess some unique peculiarities, due to the fact that their operational goals include real-time and resource constraints that are not present, or not relevant, in their IT counterparts. Thus, for a CPS, confidentiality is not only about concealing data, that is information in its cyber part, but also about keeping details of the physical process hidden. In particular, a particularly valuable piece of information to keep away from a malicious third party is the system's dynamical model, because such knowledge can be used to build sophisticated active attacks, both in the physical and in the cyber domains. Integrity refers not only to data in the cyber part, such as commands sent over a field bus on the factory floor, but also to the genuineness and the health status of the physical components making up the Industrial Control System and the process itself. This means that physical faults or deliberate sabotage induced by an external actor are also a security concern in a CPS, which in turn can lead to safety issues. Availability, finally, should refer not only to data and services provided by the cyber part, but to those provided by the physical one as well. Most importantly, in a CPS, as the physical domain evolves dynamically in time, data and services provided by the cyber part must satisfy timing constraints. For instance, measurements from a sensor or control commands computed by an industrial controller must be updated in real time, in order to guarantee stability of the resulting closed-loop system. Lack of stability, as we can easily imagine, could quickly lead to a safety incident with possibly catastrophic consequences. While the above remarks may have convinced us of the closely intertwined relationship between security and safety in a CPS, we may also ask ourselves what the connection with privacy is, if any. By looking at the CIA triad, we can ascribe privacy to the concept of confidentiality. However, there is an important clarification that needs to be made. Privacy, as a property, is weaker than confidentiality as it does not require all the data of a system to be concealed. When considering a CPS, it only requires that upon an authorized release of information from a system, no additional knowledge about an individual user, or about a system component, can be gained. Such knowledge may include the presence itself of such individual or component, or the exact value of a given attribute linked to them. A widely used, probabilistic definition of privacy is rooted in the concept of Differential Privacy (DP), pioneered by computer scientist Cynthia Dwork (see Chap. 10). While DP was introduced to preserve the privacy of individuals whose sensitive information was stored in a database, such as a medical one, privacy is still an important issue in a CPS whenever the CPS behavior, and the data generated within it, depends on individuals. Indeed, a lack of privacy would lead to an unfavorable competitive advantage for a
malicious third party, which can use such information for its own goals. For instance, we could consider the case of a smart grid in which individual homes are equipped with a smart meter. In this case, the smart grid may acquire and process detailed, finely grained electric consumption data that can be linked to personal habits. Even worse, other measured CPS outputs can include information uniquely traceable to a single individual's behavior, even if data from their smart meter is never acquired or released externally, due to the physical dependence of such outputs on the user's consumption. Such data, as said, could allow a malicious external actor to carry out attacks (for a paradigmatic case where privacy breaches led to physical thefts, see footnote 1).

1.2.2 The Attack Triad The tripartite subdivision that we described so far, following the CIA triad, leads us naturally to mirror it when proposing a taxonomy of attacks on CPSs, as suggested by previous authors [1–3]. Such a classification goes by the acronym DDD, from the initials of the words Disclosure, Deception, and Disruption.
Disclosure refers to a lack of confidentiality due to unauthorized access to data that was concealed.
Deception involves the loss of integrity due to an unauthorized modification or replacement of data, when it is either stored, transmitted or processed.
Disruption leads to the lack of availability, through one or more actions that prevent data or a service from being made available in a timely manner when requested.
While it is outside the scope of this work to provide an exhaustive list of attacks according to the DDD taxonomy, we can name a few significant examples. For instance, eavesdropping is a typical example of a disclosure attack, which is extensively treated in Chap. 11, while Man-in-the-Middle attacks such as data replay or data injection (see Chaps. 4, 5, 8 and 9) are representative of the deception class of attacks. Finally, Denial-of-Service (DoS) attacks are paradigmatic disruptive attacks (see Chap. 3). Lack of privacy should indeed be treated as a disclosure, that is, a breach of confidentiality. A subtle difference nevertheless needs to be noted: as suggested previously, a lack of privacy may occur normally if the system is built in a way that third parties, even in good faith, can observe it and deduce information about individual components present in the system, or about individual users. Eavesdropping, instead, always involves a malicious action from an external party. Ultimately, privacy is a fundamental property that must be guaranteed in a CPS, precisely to prevent possibly malicious external actors from gaining inside information about a system's design, composition, mode of operation, and users, which could later be leveraged to implement an attack.
1 According to several news outlets, thieves used information obtained from fitness tracking apps to pinpoint owners of expensive bikes and steal them: https://www.yellowjersey.co.uk/the-draft/stravabike-theft/.
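To make the deception class a little more concrete, the following toy sketch (an illustration, not material from the book) injects a constant bias into the sensor channel of a scalar discrete-time system and flags it with a naive residual test against a one-step model prediction; the plant parameters, the bias, the observer gain, and the threshold are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

a, c = 0.9, 1.0                 # scalar plant x+ = a*x + w, measurement y = c*x + v
sigma_w, sigma_v = 0.05, 0.05
threshold = 4 * sigma_v         # crude residual threshold (assumed)

x, x_hat = 0.0, 0.0
for k in range(60):
    x = a * x + rng.normal(0.0, sigma_w)
    y = c * x + rng.normal(0.0, sigma_v)
    if k >= 30:                 # deception attack: the sensor data is biased after k = 30
        y += 0.8
    y_pred = c * a * x_hat      # one-step model-based prediction of the measurement
    residual = y - y_pred
    x_hat = a * x_hat + 0.5 * residual   # naive observer fed the (possibly falsified) data
    if abs(residual) > threshold:
        print(f"k={k}: residual {residual:+.2f} exceeds threshold, possible deception attack")
```

With these arbitrary numbers the alarm fires at the attack onset; the stealthiness of more carefully crafted injections is exactly the subject of Chaps. 4 and 5.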

1.2.3 The Mitigation Triad In the previous sections, we have restated, hopefully in a clear way, the relationship between safety, security, and privacy, and we have introduced a first part of our proposed taxonomy by means of the CIA and DDD triads. We can now complete such a taxonomy by introducing the classes of mitigation mechanisms that can be used as countermeasures against the categories of attacks presented earlier. In order to do so, we will borrow yet another triad from [3], which will be termed PRD from the words Prevention, Resilience, and Detection:
Prevention refers to techniques that can avoid or postpone the onset of an attack.
Resilience involves containing the effects of an attack to allow the system to continue operation as close as possible to normal.
Detection includes all kinds of techniques that can detect, classify, and then actively mitigate an attack.
While the CIA and DDD triads are matched to each other, that is, only one of the two would be needed to build a taxonomy, the PRD triad is independent of the former two. That is, any of the P, R, and D mechanisms could be used to guarantee any of the C, I, or A properties (or equivalently to mitigate one of the three attack categories). This would lead to nine possible combinations of attacks and mitigations. In order to visualize this, we will introduce a circular diagram such as the one presented in Fig. 1.1. There, three concentric circles are drawn, one for each mechanism class in the PRD triad, and each circle is further divided into three slices, one for each attack or CIA category. In the next section, we make use of this diagram to guide the reader through the contents of this book.
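The nine combinations mentioned above can be enumerated mechanically; the short sketch below (illustrative only, not from the book) pairs each CIA property and its matched DDD attack class with each PRD mitigation class, mirroring the sectors of Fig. 1.1.

```python
from itertools import product

# CIA properties and the matched DDD attack classes (Sects. 1.2.1 and 1.2.2).
cia_ddd = {
    "confidentiality": "disclosure",
    "integrity": "deception",
    "availability": "disruption",
}
# PRD mitigation classes (Sect. 1.2.3), independent of the above.
prd = ["prevention", "resilience", "detection"]

sectors = [
    (prop, attack, mitigation)
    for (prop, attack), mitigation in product(cia_ddd.items(), prd)
]
for prop, attack, mitigation in sectors:
    print(f"{mitigation:<10} against {attack:<10} (protects {prop})")
print(len(sectors), "sectors in total")
```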

1.3 Structure and Content This edited book provides an in-depth overview of recent work related to safety, security, and privacy of CPSs. As we can see from the lower part of Fig. 1.1, the initial chapters address the problem of guaranteeing resiliency against disruptions. In particular, Chap. 2 (“Fault Tolerance in Networked Control Systems by Flexible Task Assignment”) uses fault-tolerant control approaches to overcome failures in networked, multi-agent control systems. Chap. 3 (“Resilient Control Under Denial-of-Service: Results and Research Directions”), instead, discusses the mitigation of Denial-of-Service attacks. From Chaps. 4 to 9, several aspects of resiliency and detection of deception attacks are addressed. In particular, detectability properties of false-data injection attacks are discussed in Chaps. 4, 5, and 6. Chapter 4 (“Stealthy False Data Injection Attacks in Feedback Systems Revisited”) analyzes the sensitivity of plants modeled as linear dynamical systems to such kinds of attacks. Basic control principles regarding robustness and sensitivity of control loops are used to analyze when attacks may

Fig. 1.1 A pictorial representation of the contributions of the present book, according to the taxonomy introduced in this chapter. The PRD triad is depicted via concentric circles, while the CIA and DDD triads are represented via slices of such circles. Each resulting sector corresponds to a possible combination of one security breach (CIA and DDD triads) and one possible mitigation measure (PRD triad). Each chapter of this book is thus represented as one or more orange dots accompanied by its number, positioned in one or more of such sectors according to its contribution

be hard to detect, while fundamental detectability limitations are derived and discussed in Chap. 5 (“Detection of Attacks in Cyber-Physical Systems: Theory and Applications”). Sensitivity metrics that jointly consider the impact and detectability of attacks are introduced in Chap. 6 (“Security Metrics for Control Systems”), by making use of classical metrics such as the largest and smallest gains of dynamical systems. Chapter 7 (“The Secure State Estimation Problem”) addresses estimation under compromised sensors, while the subsequent two chapters leverage the concept of watermarking. Chapter 8 (“Active Detection Against Replay Attack: A Survey on Watermark Design for Cyber-Physical Systems”) describes several watermarking approaches including a physical authentication method that can be exploited to detect replay and integrity attacks. Chapter 9 (“Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme”), instead, presents a multiplicative sensor watermarking scheme that allows the detection of stealthy false data injection attacks while preserving the original closed-loop control performances.

Then, Chaps. 10 to 13 consider the problem of guaranteeing prevention and resilience against disclosure attacks and, in some cases, against deception attacks as well. Chapter 10 (“Differentially Private Anomaly Detection for Interconnected Systems”) tackles the problem of preventing the disclosure of private information in the context of distributed anomaly detection. Chapter 11 (“Remote State Estimation in the Presence of an Eavesdropper”) reports schemes for ensuring confidentiality against eavesdroppers in remote state estimation across wireless links. Chapter 12 (“Secure Networked Control Systems Design Using Semi-homomorphic Encryption”) discusses instead a secure and private computation approach with applications to control systems. Chapter 13 (“Deception-as-Defense Framework for Cyber-Physical Systems”) proposes a deception-based scheme as a defense mechanism against different classes of adversaries. Finally, Chaps. 14 (“Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems”) and 15 (“Cyber-Insurance”) discuss the unique characteristics and challenges that cyber-physical systems pose to risk management and cyber-insurance, as opposed to conventional information systems. Final remarks and an outlook on future research directions in Chap. 16 conclude the book.
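As a rough, machine-readable summary of the chapter positioning just described (a coarse reading of this section rather than an exact reproduction of Fig. 1.1), the grouping could be tabulated as follows.

```python
# Coarse positioning of the technical chapters in the taxonomy of Sect. 1.2,
# as read from Sect. 1.3 (Fig. 1.1 places individual chapters more precisely).
book_map = {
    "disruption": {"chapters": [2, 3],             "mitigations": {"resilience"}},
    "deception":  {"chapters": [4, 5, 6, 7, 8, 9], "mitigations": {"resilience", "detection"}},
    "disclosure": {"chapters": [10, 11, 12, 13],   "mitigations": {"prevention", "resilience"}},
}

def where_is(chapter):
    """Return the attack class(es) under which a chapter is discussed in Sect. 1.3."""
    return [attack for attack, info in book_map.items() if chapter in info["chapters"]]

print("Chapter 9 is discussed under:", where_is(9))
print("Mitigations covered for disclosure attacks:", sorted(book_map["disclosure"]["mitigations"]))
```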

References
1. Cardenas, A.A., Amin, S., Sastry, S.: Secure control: towards survivable cyber-physical systems. In: 2008 The 28th International Conference on Distributed Computing Systems Workshops, IEEE, Washington, DC, pp. 495–500 (2008)
2. Chong, M.S., Sandberg, H., Teixeira, A.M.: A tutorial introduction to security and privacy for cyber-physical systems. In: 2019 18th European Control Conference (ECC), IEEE, Naples, pp. 968–978 (2019)
3. Dibaji, S.M., Pirani, M., Flamholz, D.B., Annaswamy, A.M., Johansson, K.H., Chakrabortty, A.: A systems and control perspective of CPS security. Annual Rev. Control 47, 394–411 (2019)
4. Dwork, C., Roth, A., et al.: The Algorithmic Foundations of Differential Privacy. Foundations and Trends® in Theoretical Computer Science, vol. 9, issues 3–4, pp. 211–407 (2014)
5. Line, M.B., Nordland, O., Røstad, L., Tøndel, I.A.: Safety vs security? In: PSAM Conference, New Orleans, USA (2006)
6. Lun, Y.Z., D’Innocenzo, A., Smarra, F., Malavolta, I., Di Benedetto, M.D.: State of the art of cyber-physical systems security: an automatic control perspective. J. Syst. Softw. 149, 174–216 (2019)
7. Piètre-Cambacédès, L., Chaudet, C.: The SEMA referential framework: avoiding ambiguities in the terms “security” and “safety”. Int. J. Crit. Infrastruct. Protect. 3(2), 55–66 (2010)

Chapter 2

Fault Tolerance in Networked Control Systems by Flexible Task Assignment
Kai Schenk and Jan Lunze

Abstract This chapter deals with networked control systems in which all subsystems have to fulfill a common task. Based on the capabilities of each subsystem, a coordination unit evaluates how much a particular subsystem can contribute to the cooperative task. In case of a fault, the contribution of the faulty subsystem is re-evaluated and adjusted if necessary. To counterbalance this change, other subsystems have to increase their contribution. As a result, the proposed method ensures the satisfaction of the cooperative task even in case of faults. An example illustrates the effectiveness of this method for a transportation system.

2.1 Introduction to Fault-Tolerant Networked Control Systems The fault-tolerant control of technical systems is an important means for avoiding safety-critical situations and for reducing machine and plant malfunctions due to faults. Active fault tolerance (see Fig. 2.1) is created by extending the execution layer, consisting of the plant P and the controller C*, by a monitoring layer, in which a diagnostic unit D is responsible for detecting and identifying faults and a reconfiguration unit R adapts the nominal controller to the fault situation [1]. One feature of a fault-tolerant system should be its capability to perform its nominal task even if components have failed partially or completely. In systems that are composed of coupled subsystems, the complexity of the fault scenarios increases with the number of components, but simultaneously the freedom in selecting counteractive measures is enhanced by the communication network. This chapter investigates this situation.

Fig. 2.1 Structure of a fault-tolerant system: The diagnostic unit D detects a fault in the plant P and triggers the reconfiguration unit R to adapt the nominal controller C ∗ to the fault scenario

2.1.1 Networked Control Systems Networked control systems denote a class of subsystems that are connected via some fixed physical network and via some flexible communication network (Fig. 2.2). Such a structure occurs naturally if the complexity, for example, in terms of the number of system states, in terms of a privacy policy or in terms of geographical distribution exceeds a certain level. Questions that are of central relevance in the context of networked control systems and that will be discussed in detail in this chapter are:
Are there decisions that the subsystems can make locally and, if so, what information is required for this?
Are there any tasks that require global knowledge and can therefore only be performed by a coordinator?
Cooperative task versus subtasks. This chapter is motivated by the observation that in many applications networked control systems have to exhibit a certain kind of cooperative behavior [2, 3]. Power plants jointly cover the energy requirements of a city, trucks drive together in a convoy, and tugboats maneuver together large oil transporters in ports. Usually, the common behavior of the overall system cannot be evaluated by a single subsystem, making direct control of this variable impossible.

Cooperative task versus subtasks. This chapter is motivated by the observation that in many applications networked control systems have to exhibit a certain kind of cooperative behavior [2, 3]. Power plants jointly cover the energy requirements of a city, trucks drive together in a convoy, and tugboats maneuver together large oil transporters in ports. Usually, the common behavior of the overall system cannot be evaluated by a single subsystem, making direct control of this variable impossible.

Fig. 2.2 Structure of networked control systems: The subsystems Pi are equipped with local controllers Ci and form individual control loops. On the plant side, the subsystems are physically coupled, while on the controller side, a communication network allows to share information among the local controllers

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

11

Instead, the idea of this chapter is to decompose the cooperative task into suitable subtasks that are then assigned to each subsystem. Remark 2.1 The term cooperative task is used in this chapter as a synonym for common behaviour or common task. In this context, the term task describes the function of a system and characterizes a desired behavior. Flexible task assignment. In the context of fault-tolerant control, cooperative tasks allow achieving fault tolerance of the entire system by redistributing subtasks after the occurrence of a fault, which is the main idea for ensuring fault tolerance in networked systems presented in this chapter. The monitoring layer does not restore the functionality of the faulty subsystem, but ensures the accomplishment of the cooperative behavior by assigning new subtasks to the subsystems [4–6]. This new method for achieving fault tolerance in coupled systems significantly distinguishes the approach presented here from the conventional approach in the literature. This more holistic approach is motivated by several reasons: • The utilization of the methods for fault-tolerant control proposed in literature [7–9] requires that any faulty subsystem has redundant components to compensate local faults. However, many applications show that redundancy is more likely to occur within the other healthy subsystems. • It seems unreasonable to use the faulty but reconfigured subsystem as long as the reason that caused the fault is present, even if the nominal performance is initially restored by a suitable redesign of the control system. • A redistribution of tasks is more intuitive compared to an adjustment of the controller because the task has a direct relation to the application [6] and, hence the redistribution process is understandable for the operators. The following examples illustrate the differences between a cooperative task and local tasks. Furthermore, they show that faults can be compensated by means of systematic cooperation between the subsystems. • In power supply, numerous power plants jointly cover the electricity demand (cooperative task), whereby each power plant has to deliver a particular amount of energy (subtask). If a power plant is no longer able to deliver the required energy, other power plants increase their power output (flexible task assignment) to match the demand. • In shipping, several tugboats collectively maneuver large ships in the port (cooperative task). Each tugboat has to pull on the large ship in a very specific way (subtask). However, if one tugboat loses the connection to the ship, other tugboats compensate for the missing force (flexible task assignment). • In traffic, multiple trucks form a convoy (cooperative task) in order to save fuel. In case that a truck has to decrease its speed due to some engine problem (faulty subsystem), the other trucks adjust their speeds to maintain the convoy (flexible task assignment).

12

K. Schenk and J. Lunze

Fig. 2.3 Construction of the transportation system: The linear actuators are placed beneath a conveyor belt to steer the ball along the sx -axis

• Another example taken from [10] is the following: In a steel strip mill, the steel strips pass through a number of rolling stands and re-heating furnaces with the aim of obtaining steel of specified strength and quality (cooperative task). If, for example, a rolling stand does no longer operate properly (faulty subsystem), some of the remaining rolling stands can compensate for this failure by changing their pressure (flexible task assignment) accordingly. The following running example is used throughout this chapter to illustrate the proposed method for flexible task assignment. Example 2.1 (Transportation system) Fig. 2.3 shows a part of a transportation system in which linear actuators have the cooperative task to steer the ball along some prescribed trajectory sx∗ (t). The control problem requires to find the specific motion for each of the linear actuators that pushes the ball from the left to the right. Thereby, it must be considered that the actuators can perform the required behavior. Particularly in the fault case, the actuators have very limited capabilities.

2.1.2 Contribution to Fault-Tolerant Networked Control Systems The aim of this chapter is to develop a method for the cooperative control of networked systems in which fault tolerance is achieved by redistributing subtasks from faulty to non-faulty subsystems. The cooperative task refers to a performance output T  yp (t) = Q(σ (t)) · y1 (t) y2 (t) · · · y N (t)

(2.1)

in which the matrix Q(σ ) depends on a switching state σ (t). The cooperative task of all subsystems is to steer yp (t) along a prescribed reference trajectory y∗p (t): yp (t) = y∗p (t). Example 2.2 (Transportation system (cont.)) The output yi (t) of each subsystem represents the vertical offset of a piston (Fig. 2.3). The acceleration of the ball is the performance output yp (t) = s¨x (t) of the overall system. As only actuators that are in the neighborhood of the ball contribute to yp (t), the rolling ball changes the operation mode σ (t) of the system, which determines the indices of the outputs that are relevant

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

13

to yp (t). For N = 10 subsystems, the ball can be located in between N − 1 pairs of pistons and, hence, N − 1 switching states σ ∈ {1, 2, . . . , 9} can occur. The global control problem is to find control signals u 1 (t), u 2 (t), …, u N (t) that steer the output of the subsystems y1 (t), y2 (t), …, y N (t) in a way such that the performance output (2.1) follows the reference y∗p (t). This chapter proposes a twostage process to solve that control problem: 1. Task assignment: The reference y∗p (t) is decomposed into suitable local references y1∗ (t), y2∗ (t), …, y N∗ (t) taking into account the switching character of the overall system. This step is investigated in detail in this chapter. 2. Trajectory tracking: Based on the local references y1∗ (t), y2∗ (t), …, y N∗ (t), networked controllers C1∗ , C2∗ , …, C N∗ are designed that generate control signals u ∗1 (t), u ∗2 (t), …, u ∗N (t) that steer each output along the corresponding reference. The design of such controllers is not explained in detail in this chapter, but is merely repeated as far as necessary to deal with the step of task assignment. The interested reader is referred to the publications [1] and [2] instead. Problem 2.1 Given the reference performance output y∗p (t), find suitable reference trajectories yi∗ (t) for all i = 1, 2, . . . , N that ensure the fulfillment of the cooperative task yp (t) = y∗p (t), even in the case of faults. The specific structure used in this chapter to solve Problem 2.1 is shown in Fig. 2.4, which is basically a combination of the general fault-tolerant control loop shown in Fig. 2.1 and the networked control systems shown in Fig. 2.2. A coordination unit divides the common task into suitable subtasks. Thereby, it is considered that the subsystems are subject to certain restrictions and, hence, cannot fulfill arbitrary subtasks. In particular, faulty subsystems can fulfill tasks only to a limited extent. Each subtask defines the reference trajectory y˜i∗ (t) of the associated subsystem for a specific time interval, which takes the fact into account that the subsystems do not have a permanent influence on the performance output due to the switching character of the system. Each reconfiguration unit Ri , (i ∈ {1, 2, . . . , N }), contains a unit for trajectory generation that complements the partially defined reference in order to provide the local controllers with a reference defined on the complete time interval. After considering the previous discussion and with respect to the existing literature, the current chapter presents a method for fault tolerance that is able to satisfy the following three properties: 1. The coordination unit can decompose the cooperative task into suitable subtasks. 2. Each subsystem is able to satisfy its associated subtask. 3. The cooperative task is performed jointly by all subsystems at all times, even if subsystems are defective or temporarily have no influence on the overall objective. Classification of task assignment in active fault-tolerant control. Faults have a serious impact on the subsystems in which they occur as well as on the entire system as they can cause performance degradation, loss of tracking ability or even instability and the shutdown of a process. The monitoring layers, shown in Fig. 2.4, are designed to compensate for the fault and its consequences in the following three stages:

14

K. Schenk and J. Lunze

Fig. 2.4 Task assignment in networked systems: The coordination unit decomposes the cooperative task into partial subtasks y˜i∗ , which are used by the reconfiguration units Ri to provide the local controller with a reference yi∗ (t). As a consequence, the performance output yp (t) tracks the desired reference y∗p (t)

1. The fault in subsystem P f is detected, isolated, and identified by the diagnostic unit D f . 2. The reconfiguration units Ri , (i ∈ {1, 2, . . . , N }), adapt the nominal controllers to guarantee stability of the entire system. This step will generally not recover the nominal performance in terms of the ability to follow a certain reference trajectory. 3. The coordination unit has to redistribute the subtasks to guarantee the fulfillment of the cooperative task. Since this chapter focuses on the third aspect, the redistribution process, the following assumption is made. Assumption 2.1 Each subsystem has a diagnostic unit (see [11–13]) that immediately provides a perfect diagnostic result at fault time t = tf . The overall system remains stable in any fault case (see [9, 14, 15]). Structure. The aim of Sect. 2.2 is to formalize the flexible task assignment problem. This includes a description of the faulty scenario that is investigated here and its consequences for the task assignment. A method of how the coordination unit can decompose the cooperative task into suitable subtasks is elaborated in Sect. 2.3, which is the theoretical main part of this chapter. Beside a mathematical analysis of the conditions that each subtask has to certify, this section explains one possible implementation of the developed flexible task assignment method. In Sect. 2.4, a transportation system is used in a simulation study to illustrate the single steps and their results of the task assignment.

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

15

2.1.3 Notation Vectors are written as bold lower case letters (a), while matrices are written as bold upper case letters ( A). All other Latin letters are scalars (a). The set of real numbers is denoted R and the set of integers is denoted N. A matrix of dimension n × m is written in compact form as A = (ai j ) ∈ Rn×m with the element ai j in the ith row and jth column. Matrices are written solely in square brackets. The Moore- Penrose pseudoinverse of a matrix A is denoted by A+ . I n denotes the identity matrix of dimension n × n while 0n×m is a matrix with n rows and m columns that has only zero elements. If clear from the context the dimension of a matrix is not explicitly indicated. The k−th order derivative of a signal a(t) with respect to time t is written as a(k) (t). A function that is k-times continuously differentiable on the domain D is written as C k (D). The cardinality (number of elements) of set A is written as |A |. The floor function a defines the greatest integer that is less than or equal to a.

2.2 Flexible Task Assignment Problem The way in which a coordinator decomposes the cooperative task into suitable subtasks must consider the capabilities of the subsystems. This step requires a precise description of the system class under consideration and the controller structures used and, on the other hand, a characterization of the possible faults and their effects on the behavior of the entire system.

2.2.1 Description of the Subsystems The overall plant P, shown in Fig. 2.2, consists of N interconnected subsystems  Pi :

x˙ i (t) = Ai x i (t) + bi u i (t) + g i (t), x i (t0 ) = x i0 yi (t) = ciT x i (t)

(2.2)

with i ∈ N := {1, 2, . . . , N }, the local state x i (t) ∈ Rni , the local control input u i (t) ∈ R, the local measured output yi (t) ∈ R and matrices of appropriate dimensions. The physical network that couples the subsystems with each other is represented by  Ai j x j (t) ∈ Rni , i ∈N. g i (t) = j∈N

The matrix Ai j ∈ Rni ×n j describes the physical influence of subsystem P j onto subsystem Pi . Without the loss of generality, Aii = 0 is assumed. Two subsystems Pi

16

K. Schenk and J. Lunze

and P j are said to be neighbors if one of them influence the other, i.e., if Ai j = 0 or A ji = 0. Definition 2.1 (Subtask) For a given function yi∗ (t), the output of the subsystem Pi∗ should satisfy the requirement !

Ti : yi (t) = yi∗ (t),

(2.3)

which is said to be the subtask Ti of the subsystem. The subtask Ti can be interpreted as the local control task that needs to be satisfied by the subsystem Pi∗ . This formal definition connects the colloquial use of a function of a system or aim of a system with a precise mathematical requirement (2.3). During the decomposition process, the coordination unit has to take the ability of each subsystem into account: • The reference trajectory complies with present state constraints, input constraints, and output constraints and, hence, lies within an area that the subsystem Pi∗ can reach. • The subsystem satisfies certain properties (e.g., controllability, observability, etc.) that are required for the calculation of the control law. • The reference trajectory matches the structural properties of the subsystem Pi∗ in terms of continuity and differentiability. For example, the output of a subsystem with a relative degree identical to one can follow a function that can be continuously differentiated at least once. Definition 2.2 (Satisfiable subtask) A subtask Ti is said to be satisfiable if the controlled subsystem Pi in (2.2) can achieve it with an appropriate control input: ∃u i (t) : yi (t) = yi∗ (t), ∀t ≥ 0. Description of faulty subsystems. In this chapter, the faults are assumed to occur instantaneously and to remain persistent. At time tf , the subsystem P f with f ∈ N is affected by a fault that changes its behavior instantaneously to  Pf :

x˙ f (t) = 0, x f (tf ) = x f 0 y f (t) = cTf x f (t).

(2.4)

Although the subsystem P f is faulty, it nevertheless contributes to the cooperative task. The type of faults considered in (2.4) does not allow any further movement of the faulty subsystem P f . Such faults occur, for example, in mechanical systems in which a subsystem is jammed in a certain position and can no longer be moved. The following lemma results as a direct consequence of this particular class of faults: Lemma 2.1 The constant trajectory y ∗f (t) = y f (tf ) is the only satisfiable subtask of any faulty subsystem P f for the time t ≥ tf .

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

17

2.2.2 Description of the Controllers Each subsystem Pi is equipped with an extended two-degrees of freedom controller Ci∗ that has to ensure that the subtask Ti is satisfied (Fig. 2.2). This section briefly describes the working principle of these controllers as far as it is needed for the task assignment, while the complete design process is described in [5, 16]. Each controller Ci∗ generates the signal u i (t) = u di (t) + u fi (t) + u ci (t), which consists of three components (Fig. 2.5): 1. A decoupling unit Di generates the signal u di (t) to compensate the coupling input g i (t) to treat the subsystems as decoupled from each other. The decoupling unit’s input g˜ i∗ describes the coupling input under the assumption that the neighboring subsystems fulfill their subtasks, i.e., for y j (t) = y ∗j (t), ( j ∈ N ). The neighbors determine the trajectories g˜ i∗j in the design phase of the controller and send them over the digital network toward their neighbors where it is stored in the memory unit Mi . This step is executed only once at the design time of the controller and has to be repeated only in case of a modification of the references yi∗ (t) or in case that disturbances cannot be compensated by the local feedback controllers Ci . 2. The signal u fi (t) achieves perfect tracking for the decoupled subsystem Pi with proper initial condition x i0 and is generated by the feedforward controller Fi . Every time the coordination unit modifies the reference yi∗ (t), as it is done in case of a fault affecting a subsystem, the feedforward controller Fi has to be redesigned. Example: Let the reference be yi∗ (t) = sin(t) for the time intervall 0 s ≤ t < 100 s and yi∗ (t) = sin(2t) for 100 s ≤ t < 200 s. Then, Fi is designed at time t = 0 s for the reference yi∗ (t) = sin(t) and re-designed at t = 100 s for yi∗ (t) = sin(2t). 3. The signal u ci (t) is generated by the feedback controller Ci . Generally, feedback makes the feedforward control more robust against model uncertainties and disturbances. Furthermore, it compensates for improper initial conditions of the subsystems. If one subsystem is affected by disturbances that cannot be compensated locally, it will not be able to track its local reference. In that case, there is a discrepancy with the information concerning the coupling that the subsystem has sent to its neighbors and the true coupling effect. If the feedback controllers of the neighboring subsystems are not able to correct this discrepancy, this tracking error can spread through the physical network. In such a case, the communication network must be used continuously by the subsystems to communicate their current coupling effect. The interested reader is referred to [17] where this aspect is discussed for the problem of merging two-vehicle streams into one. For the design of the decoupling units, the feedforward units, and the feedback units, several conditions have to be satisfied by the subsystems as, for example, controllability [16]. These conditions are assumed to be satisfied. However, whether

18

K. Schenk and J. Lunze

Fig. 2.5 Structure of the local controller Ci∗ : The controller is an extended two degrees of freedom controller with a feedforward unit Fi , a feedback controller Ci , and a decoupling unit Di receiving information from a memory unit Mi

or not the resulting controller Ci∗ will steer the output yi (t) along the reference yi∗ (t) depends solely on the shape of yi∗ (t). Lemma 2.2 (Smooth reference) The controllers Ci∗ , (i ∈ N ), satisfy all subtasks Ti = (yi∗ (t)) if and only if the reference trajectories yi∗ (t), (i ∈ N ), satisfy the following requirements: yi∗(k) (t0 ) = yi(k) (t0 ), ∀k ∈ {0, . . . , ri − 1} yi∗ (t)

∈C

ri −1

(R),

(2.5) (2.6)

where ri is the relative degree of the subsystem Pi , which is the smallest positive integer such that ciT Ari i −1 bi = 0. Proof For yi∗ (t) ∈ C ∞ (T ), [16] proves that condition (2.5) is necessary and sufficient to guarantee the tracking (2.3). In contrast to [16], the additional condition (2.6) occurs because no assumption is made in this chapter regarding the continuity properties of yi∗ (t). To obtain (2.6), the proof of Proposition 5 in [16] has to be repeated  without assuming yi∗ (t) ∈ C ∞ (T ). From the perspective of task assignment, it is sufficient to focus on the requirements (2.5) and (2.6) because they are related to the description of subtasks. Corollary 2.1 Any trajectory yi∗ (t) that satisfies the requirements (2.5) and (2.6) defines a satisfiable subtask for the subsystem Pi∗ .

2.2.3 Description of the Cooperative Task The function of the overall system requires that all subsystems Pi∗ , (i ∈ N ), together have to guarantee a certain behavior, which is called the cooperative task. In order to evaluate the current function of the entire system at a given point in time, the performance output yp (t) was introduced by Eq. (2.1) and can be written in a more compact form:

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

yp (t) = Q(σ ) y(t),

19

(2.7)

  where yT (t) = y1 (t) y2 (t) . . . y N (t) denotes the collection of all local outputs. The matrix Q(σ ) ∈ R p×N depends upon the switching signal σ that is the output of the following dynamical system:  σ :

x˙ σ (t) = f σ (x σ (t), yp (t)), x σ (0) = x σ 0 σ (t) = h σ (x σ (t), yp (t))

(2.8)

with an integer-valued output σ ∈ N, an output function h σ : Rn σ × R p → N, a state x σ ∈ Rn σ and a function f σ : Rn σ × R p → Rn σ . In general, the performance output yp (t) cannot be measured by any single subsystem Pi∗ . Definition 2.3 (Cooperative task) For a given function y∗p (t), the performance output yp (t) described by (2.7) has to satisfy the requirement !

T : yp (t) = y∗p (t),

(2.9)

which is said to be the cooperative task T of the overall system. The trajectory y∗p (t) is given on the time interval T = [ta , tb ] ⊂ R and assumed to be sufficiently often differentiable, i.e., y∗p (t) ∈ C ∞ (T ). As it will be shown in Sect. 2.3, the smoothness property of y∗p (t) does not necessarily apply to the subtasks yi∗ (t). Rather the coordination unit and the subsystems have to guarantee during the task assignment the satisfaction of the properties required by Lemma 2.2. Example 2.3 (Transportation system (cont.)) The performance matrices Q(σ ) have to represent the difference of the pistons of two neighboring linear actuators,   Q(σ ) = 01×σ −1 1 −1 01×9−σ

with

σ ∈ {1, . . . , 9}.

(2.10)

The switching state σ depends on the ball’s position, which in turn depends on the performance output:







⎪ ⎨ x˙ σ (t) = 0 1 x σ (t) + 0 y (t), x σ (0) = 0 00 9.81 p 0 : ⎪ ⎩ σ (t) = h (x (t), y (t)) σ

σ

(2.11)

p

with the output function  

1 0 x σ (t) h σ (x σ (t), yp (t)) = 1 + . 0.15 The constant 0.15 in h σ (·) represents the horizontal offset of 0.15 m between two linear actuators and the first element of x σ (t) is identical to the position of the ball.

20

K. Schenk and J. Lunze

Remark 2.2 Note that the definition of the performance output is not unique but depends upon the given application. In [10], for example, a finite time integral over the state and control signal of the system is used as the performance output. From the perspective of fault tolerance, the outputs yi (t) can deviate from their nominal trajectories as long as the performance output satisfies the requirement (2.9). This fact is used by the coordination unit in Fig. 2.4 to re-distribute the subtasks in case of faults. The subtasks have to be consistent, which is defined as follows: Definition 2.4 All subtasks Ti , (i ∈ N ), are said to be consistent if their satisfaction guarantees the fulfillment of the cooperative task T:   ∀i ∈ N : yi (t) = yi∗ (t)



yp (t) = y∗p (t).

In addition to the consistency of the subtasks, the decomposition process must consider that the subsystems can satisfy their corresponding subtasks (cf. Lemma 2.1 and Corollary 2.1). In comparison to the previous problem description (cf. Problem 2.1), a formally correct and complete characterization of the problem of cooperative faulttolerant control can now be provided. Problem 2.2 Given the cooperative task T = ( y∗p , Q(σ ), σ ). Find consistent and satisfiable subtasks Ti = (yi∗ ), (i ∈ N ) taking into account that subsystems can be faulty as described by (2.4). Communication network. It is evident that the network and thus the ability to exchange information is an essential part of cooperative fault-tolerant control. In total, the network is used for three purposes: 1. The coordination unit has to inform each subsystem on their subtask Ti = (yi∗ ), (i ∈ N ). Therefore, the entire trajectory yi∗ (·) must be sent over the network. As the trajectories are typically described by a combination of basis functions (e.g., polynomials), it is sufficient in this step to send the respective parameters that parametrize the basis functions. 2. The controllers Ci∗ , (i ∈ N ), exchange information with their neighbors to satisfy their subtasks (see Sect. 2.2.2). The information that is exchanged over the network during this step is basically the trajectory that describes the coupling effect to other subsystems, which means the exchange of parameters that describe the trajectory. As the design of the tracking controllers is not within the scope of this chapter, the interested reader is referred to [5, 16, 17] for the details. 3. The faulty subsystem informs the coordination unit about the fault. This step requires only a binary decision that must be sent over the network which is “subsystem Pi is faulty.” To some extent, any imperfections of the communication network as described in [18–20] have a negative effect on the overall performance. Considering the above-mentioned purposes of the network, the fault-tolerant cooperative controller described in this chapter does more rely on the ability to share information and not

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

21

on a certain performance of the network. The reason is that the exchange of data occurs rather sporadically: Once at the design time and then each time, a subsystem is affected by a fault. Assumption 2.2 The communication network is ideal. Data sent from one subscriber to another is neither delayed nor quantized or otherwise modified.

2.3 Method for Fault-Tolerant Task Assignment This section solves Problem 2.2 and can be divided into three parts. First, the cooperative task T is analyzed from a global perspective to derive consistent subtasks (Sect. 2.3.1). Second, the subtasks are further restricted to get consistent and satisfiable tasks (Sect. 2.3.2). Third, the question is answered which subsystems are active at which time in order to maximize the autonomy of each subsystem (Sect. 2.3.3).

2.3.1 Consistent Subtasks The first step toward a flexible task assignment is to characterize consistent subtasks. The references yi∗ (t), (i ∈ N ), can be lumped together to the vector T  y∗ (t) = y1∗ (t) y2∗ (t) · · · y N∗ (t) . Then, it follows from the description of the performance output (2.7) together with Definition 2.4 that following relation has to be satisfied by the references y∗ (t) because consistency must be fulfilled: !

y∗p (t) = Q(σ ∗ (t)) y∗ (t)

∀t ∈ T .

(2.12)

The signal σ ∗ (t) is the time sequence of the switching state σ for the case that the cooperative task is fulfilled. It can be determined by feeding the reference performance output y∗p (t) into the switching system (2.8):  σ∗

:

x˙ ∗σ (t) = f σ (x ∗σ (t), y∗p (t)), x σ (0) = x σ 0 σ ∗ (t) = h σ (x ∗σ (t), y∗p (t))

(2.13)

For a given but fixed point in time, the requirement (2.12) is treated as a linear system of equations with the known signal y∗p (t) and the known matrix Q(σ ∗ (t)), whereas y∗ (t) is unknown. In order to solve (2.12), recall the following result on the solution of linear equations:

22

K. Schenk and J. Lunze

Lemma 2.3 ([21]) The linear equation M 1 X = M 2 with M 1 ∈ Rm×n and M 2 ∈ + all solutions are of the Rm×k has a solution if and only if M  1 M 1 M 2 = M 2 . Then, n×k + + form X = M 1 M 2 + In − M 1 M 1 H for arbitrary H ∈ R . Lemma 2.3 is used with M 1 = Q(σ ∗ (t)) and M 2 = y∗p (t) to describe all possible solutions of (2.12) and, hence, specify consistent subtasks: Lemma 2.4 (Consistent subtasks [22]) The set of all consistent subtasks Ti is given by the references y∗ (t) = Q + (σ ∗ (t)) y∗p (t) + δ Q(σ ∗ (t))δ y∗ (t),

(2.14)

  with the matrix δ Q(σ ∗ (t)) = I N − Q + (σ ∗ (t)) Q(σ ∗ (t)) ∈ R N ×N and an arbitrary vector function δ y∗ (t) ∈ R N ×1 . Interpretation. The characterization (2.14) could be used to decompose the cooperative task into subtasks whereby these subtasks are not necessarily satisfiable: 1. The first term in (2.14) is one particular solution of (2.12) and depends on the switching state σ ∗ (t). Remember that σ ∗ (t) changes its value instantaneously if the entire system comes into a new switching state. This causes trajectories y∗ (t) that jump at the time of the switching. Such trajectories are not satisfiable because they are discontinuous and, hence, do not belong to the class of C ri −1 (T ) functions as required by Corollary 2.1. This aspect will be analyzed in detail in Sect. 2.3.2. 2. The elements of the vector y∗ (t) are the reference trajectories yi∗ (t) of the subsystems Pi , (i ∈ N ). Depending on the structure of the matrix δ Q(σ ∗ (t)), the vector function δ y∗ (t) influences these references yi∗ (t). It will be shown in Sect. 2.3.2 that δ y∗ (t) is a key element to solve the satisfiability issue because it is used to obtain continuously differentiable functions yi∗ (t), (i ∈ N ).

2.3.2 Satisfiable Functions This section derives conditions that must be fulfilled in order to get satisfiable subtasks. As it will be shown, the main reason for the existence of non-satisfiable subtasks is a change of the switching state σ .

2.3.2.1

Fault-Free Subsystems

The reference for a particular subsystem yi∗ (t) is the ith y∗ (t)   element of the vector T in (2.14) and can be written by using the vector ei = 01×i−1 1 01×N −i as yi∗ (t) = q iT (σ ∗ (t)) y∗p (t) + δq iT (σ ∗ (t)) · δ y∗ (t)

(2.15)

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

23

Fig. 2.6 Determine a satisfiable subtask: Top: Consistency analysis yields a particular solution that is discontinuous. Middle: Satisfiability condition provides supporting point (black circles) and piecewise continuous functions that connect these points. Bottom: Finally, the superposition of both functions defines a smooth reference for the subsystem

with the vectors q iT (σ ∗ (t)) = eiT Q + (σ ∗ (t)) and δq iT (σ ∗ (t)) = eiT δ Q(σ ∗ (t)). If the switching state σ ∗ (t) changes its value at time t = ts from σ to σ  , the left-hand limit and the right-hand limit do not coincide σ (ts ) =: lim σ ∗ (t) = lim σ ∗ (t) := σ  (ts ). tts

tts

(2.16)

Switching as characterized by (2.16) occurs multiple times and the set Ts (σ (t)) = {t ∈ T | σ (t) = σ  (t)}

(2.17)

contains all switching times. The time interval in which a particular switching state σ is constant is denoted by T (ts ) := (ts , ts+1 ) as an open set and T¯ (ts ) := [ts , ts+1 ] as the corresponding closed set, where ts , ts+1 ∈ Ts (·) are two consecutive switching times. At the switching time, the first term of the local reference (2.15) jumps at t = ts as well lim q iT (σ ∗ (t)) · y∗p (t) = lim q iT (σ ∗ (t)) · y∗p (t),

tts

tts

(2.18)

which is illustrated in Fig. 2.6(top). Remember, that for the reference (2.15) to be satisfiable, it must be (ri − 1)-times continuously differentiable as stated in Corollary 2.1. In terms of limits, this continuity property is equivalent to require for all τ ∈ T the following condition is satisfied: ∗( j)

lim yi tτ

!

∗( j)

(t) = lim yi tτ

(t)

∀ j ∈ {0, . . . , ri − 1}.

(2.19)

24

K. Schenk and J. Lunze

The problem is now that the jumping part (2.18) contradicts the desired property (2.19). To remove this contradiction, the function δ y∗ (t) in (2.15) is used to fulfill the continuity property (2.19). Lemma 2.5 (Satisfiable subtasks in fault-free case [22]) Consider the fault-free subsystems Pi∗ , (i ∈ N ), and define the integer r¯ := max{r1 , r2 . . . , r N }. The consistent references (2.14) describe satisfiable subtasks Ti , (i ∈ N ), if and only if the following two conditions are satisfied: 1. δ y∗ (t) is continuously differentiable between any two consecutive switching times ts , ts+1 ∈ Ts : δ y∗ (t) ∈ C r¯ −1 (T (ts ))

with

T (ts ) = (ts , ts+1 ) ⊆ R

(2.20)

2. δ y∗ (t) satisfies the condition j) ∗( j) (t) Q + (σ (ts )) · y∗( p (ts ) + δ Q(σ (ts )) · lim δ y tts

+



= Q (σ (ts )) ·

j) y∗( p (ts )



+ δ Q(σ (ts )) · lim δ y∗( j) (t)

(2.21)

tts

for all switching times ts ∈ Ts and for all j ∈ {0, 1, . . . , r¯ − 1}. Figure 2.6 illustrates why any function δ y∗ (t) satisfying the relation (2.21) yields satisfiable trajectories yi∗ (t), (i ∈ N ). For a smooth reference y∗p (t), the diagram at the top shows the jumping part (2.18). A function δ y∗ (t) that satisfies Lemma 2.5 is shown as the diagram in the middle. The superposition is the desired reference for a particular subsystem and is shown at the bottom. It can be seen that no discontinuities are present. Note that the requirement (2.21) must be formulated in terms of the matrices Q + (·) and δ Q(·) because the jumping (2.18) can affect multiple subsystems. It will be shown in Sect. 2.3.3 that for the satisfaction of the requirements (2.20) and (2.21) in fact it is sufficient to analyze only some of the subsystems. Implementation. The formulation of Lemma 2.5 is not constructive and, hence, does not provide a direct computation method for δ y∗ (t). Indeed, there is no unique solution to find a function δ y∗ (t) that satisfies the conditions (2.20) and (2.21). Depending on the application, other boundary conditions may have to be considered. Subsequently, one way is presented on how to determine δ y∗ (t). 1. The requirement (2.21) defines only supporting points of the vector function δ y∗ (t) for all switching times ts ∈ Ts a− (ts , j) := lim δ y∗( j) (t) tts

and

a+ (ts , j) := lim δ y∗( j) (t). tts

(2.22)

The two variables a+ (ts , j) ∈ R N and a− (ts , j) ∈ R N are the right-hand limit and the left-hand limit of the jth order derivative of the vector function δ y∗( j) (t) for the particular switching time ts .

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

25

2. Equation (2.21) can be written in matrix-vector form as 

 j) Q + (σ (ts )) − Q + (σ  (ts )) · y∗( p (ts ) =

 a+ (ts , j)   δ Q(σ (ts )) −δ Q(σ (ts )) . a− (ts , j)

(2.23)

Since (2.23) defines a linear equation with the supporting points as the unknowns, it can be directly solved by using Lemma 2.3:

 +   ∗( j) a+ (ts , j) = δ Q(σ  (ts )) −δ Q(σ (ts )) · Q + (σ (ts )) − Q + (σ  (ts )) yp (ts ). a− (ts , j)

3. Next, it will be guaranteed that the function δ y∗ (t) is continuously differentiable within the time interval (ts , ts+1 ) as required by (2.20). Therefore, δ y∗ (t) is expressed as a spline [23], in which for each switching state σ a polynomial of order p = 2 · r¯ − 1 is used to describe the function: δ y∗ (t) = a0 + a1 t + · · · + a p t p =

p 

ai t i

∀t ∈ T¯ (ts ) = [ts , ts+1 ], (2.24)

i=0

with the coefficient vector ai ∈ R N , (i ∈ {0, 1, . . . , p}). The coefficients ai are chosen such that the polynomial (2.24) connects the supporting points ⎡ ⎢ ⎢ a+ (ts ) := ⎢ ⎣

a+ (ts , 0) a+ (ts , 1) .. .

⎤ ⎥ ⎥ ⎥ ⎦

⎡ and

⎢ ⎢ a− (ts+1 ) := ⎢ ⎣

a+ (ts , r¯ − 1)

a− (ts+1 , 0) a− (ts+1 , 1) .. .

⎤ ⎥ ⎥ ⎥ (2.25) ⎦

a− (ts+1 , r¯ − 1)

in the time interval [ts , ts+1 ]. A system of equations is built with p + 1 unknowns (a0 , a1 , . . . , a p ) and p + 1 knowns (a+ (ts ) and a− (ts )). The jth derivative of the polynomial (2.24) is needed and can be determined as δ y∗( j) (t) =

p  i= j

ai b j,i (t)

with

b j,i (t) =

 j−1 

 (i − k) · t i− j ,

(2.26)

k=0

 j−1 with k=0 (i − k) = 1 for j − 1 < 0. The desired system of equations to determine the coefficients in (2.24) consists of the supporting points defined by (2.22) and (2.25) as well as (2.26) and can be written in the more compact form

26

K. Schenk and J. Lunze

Fig. 2.7 Spline interpolation: The cubic polynomial f (t) connects the supporting points a+ and a− as requested and is continuously differentiable in each of the time intervals [0, 1], [1, 2], and [2, 3]

⎤ a0 ⎢ ⎥ V (ts ) · ⎣ ... ⎦ = a+ (ts ) ⎡

⎤ a0 ⎢ ⎥ V (ts+1 ) · ⎣ ... ⎦ = a− (ts+1 ) ⎡

and

ap

(2.27)

ap

with the matrix V (t) characterized by the coefficients b j,i (t): ( p+1)N ×( p+1)N

V (t) = (v j,i (t)) ∈ R

 b j−1,i−1 (t) i ≥ j with v j,i (t) = 0 else.

4. Finally, the coefficients of the polynomial (2.24) follow from (2.27): ⎤ a0

−1

V (ts ) a+ (ts ) ⎢ .. ⎥ . · ⎣ . ⎦ = V (t ) a− (ts+1 ) s+1 ap ⎡

(2.28)

Lemma 2.6 For any two consecutive switching times ts , ts+1 ∈ Ts , the polynomial (2.24) with coefficients (2.28) satisfies the conditions stated in Lemma 2.5. Example 2.4 Given are three switching states σ1 , σ2 , and σ3 with the corresponding switching times ts1 = 0, ts2 = 1, ts3 = 2, and ts4 = 3. The supporting points are specified as follows:



0 0.5 a+ (ts2 ) = −1 0



1 1 a− (ts3 ) = a− (ts2 ) = −1 0

a+ (ts1 ) =

0.5 −1

0 a− (ts4 ) = . 1

a+ (ts3 ) =

With 4 supporting points for each time interval, the spline interpolation is based on a cubic polynomial f (t) = a0 + a1 t + a2 t 2 + a3 t 3 . Figure 2.7 shows the function f (t) with the parameter determined by Eq. (2.28).

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

2.3.2.2

27

Faulty Subsystem

The method described in the preceding section can be applied after a fault has occurred to adapt the task assignment to the fault situation. The faulty subsystem P f is described by the state-space model (2.4). The subtask T f = (y ∗f (t)) that was assigned to P f in the fault-free case is no longer satisfiable in the fault case (cf. Lemma 2.1). Therefore, the cooperative task has to re-distributed. The consistent subtasks (2.14) remain consistent in the fault case since the consistency does not depend on the ability of each subsystem. The only change that has to be done by the coordinator is to adapt the vector function δ y∗ (t) to the fault case. If during this process the coordinator complies with the criteria of the subsequent lemma, the subsystems fulfill the cooperative task despite the fault. Lemma 2.7 (Satisfiable subtasks in faulty case [22]) Consider a faulty subsystem Pf described by (2.4) that can satisfy the subtask specified by Lemma 2.1. If, in addition to the requirements (2.20) and (2.21), the vector function δ y∗ (t) satisfies the condition y f (tf ) = q Tf (σ ∗ (t)) y∗p (t) + δq Tf (σ ∗ (t))δ y∗ (t),

∀t ∈ T ,

(2.29)

then the consistent references (2.14) describe satisfiable subtasks for the faulty overall system. Interpretation. The inclusion of the faulty subsystem is straightforward. The previous choice (2.20) and (2.21) of the vector δ y∗ (t) ensures a reference for the faulty subsystem, which is continuously differentiable. The purpose of the additional condition (2.29) is to ensure that the faulty subsystem receives the only subtask it can fulfill.

2.3.3 Autonomy of the Subsystems In many applications, the subsystems should have the greatest possible autonomy. This means in the scope of this chapter that the coordination unit should determine the subtask for a particular subsystem only for the time the subsystem influences the cooperative task. For a given switching state σ , only a part of all subsystems contributes to the cooperative task. Definition 2.5 (Active and passive subsystems) For a given switching state σ , the subsystem Pi∗ is said to a be passive subsystem if and only if the ith column of Q(σ ) is the zero vector. Otherwise, Pi∗ is said to be an active subsystem. As a consequence of Definition 2.5, two index sets can be introduced. The set NP (σ ) ⊆ N contains all indices i of subsystems Pi∗ that belong to the class of

28

K. Schenk and J. Lunze

passive subsystems, while the set NA (σ ) ⊆ N contains the indices corresponding to active subsystems: Q(σ ) · eiT = 0



i ∈ NP (σ )

Q(σ ) ·



i ∈ NA (σ )

eiT

= 0

Example 2.5 (Transportation system (cont.)) For σ = 1, the ball is located inbetween P1 and P2 for which the performance matrix is given by the row vector   Q(1) = 1 −1 0 0 0 0 0 0 0 0 . The two non-zero elements show that only P1 and P2 influences the ball (i.e., NA (1) = {1, 2}) while all other subsystems do not (i.e., NP (1) = {3, 4, . . . , 10}). For a given trajectory of the switching state σ (t), the moments in time in which a particular subsystem Pi∗ belongs to the set of passive systems is given by TP (i, σ (t)) = {t ∈ T | i ∈ NP (σ (t))}, while the set TA (i, σ (t)) = {t ∈ T | i ∈ NA (σ (t))}

(2.30)

encloses all time intervals in which Pi∗ belongs to the set of active systems. For the subsequent analysis, the matrix Q(σ ) is rearranged to separate the nonzero columns from the zero columns. Therefore, a permutation matrix T (σ ) ∈ R N ×N is introduced that yields the separation 

 M(σ ) 0 p×|N P (σ )| (σ ) = Q(σ )T −1 (σ ),

(2.31)

The matrix T (σ ) decomposes the references T (σ ) y∗ (t) =



y∗A (t) y∗P (t)

and

T δ y∗ (t) =

∗ δ yA (t) δ y∗P (t)

in accordance with the partitioning of the matrices in (2.31). Note that T (σ ) y∗ (t) contains the same elements as y∗ (t) but in a different order. The elements of the vector y∗A (t) are, for a given switching state, the references yi∗ (t), (i ∈ NA (σ )), for the active subsystems, while the elements of y∗p (t) define the subtasks for the passive subsystems. Hence, the dimensions of the signals coincides with the cardinality of the corresponding index sets: y∗A (t), δ y∗A (t) ∈ R|N A |

and

y∗P (t), δ y∗P (t) ∈ R|N P | .

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

29

The separation (2.31) considerably simplifies the analysis of consistent subtasks and the analysis of satisfiable subtasks. Consistency analysis. The structure of the matrices in (2.14) can be refined by the separation (2.31). Therefore, Eq. (2.14) is multiplied from the left-hand side with T (σ ). Then, the consistent references (2.14) can be written by using Lemma 2.10 given in the appendix:



+

y∗A (t) i − M + (σ )M(σ ) 0 δ y∗A (t) M (σ ) ∗ = . yp (t) + y∗P (t) 0 0 i δ y∗P (t)

(2.32)

The special structure of the matrices in (2.32) shows that the subtasks of all passive subsystems can be chosen completely arbitrarily (e.g., yi∗ (t) = 0 for i ∈ NP (σ ) and t ∈ T (σ )) since y∗P (t) = δ y∗P (t) holds, while at the same time, δ y∗P (t) is not influencing the references y∗A (t) of the active systems. Lemma 2.8 (Reference of passive subsystems) Consider some time interval T (ts ) = (ts , ts+1 ) with ts ∈ Ts in which the overall system is in the switching state σ . Any change of the references yk∗ (t) of passive subsystems, i.e., k ∈ Np (σ (ts < t < ts+1 )) does not change the consistency of all subtasks if and only if the following conditions are satisfied: 1. The subtasks Ti , (i ∈ N ), were consistent before the change of yk∗ (t). 2. The references yk∗ (t) do not change their values on the boundaries: ∗( j)

lim yi

tts

(t) and

∗( j)

lim yi

tts+1

(t)

∀ j ∈ {0, 1, . . . , r¯ − 1}.

Satisfiability analysis. The separation between active subsystems and passive subsystems can be additionally be used to decompose the satisfiability analysis of subtasks that was done in Sect. 2.3.2 from a global perspective. In order to explore this subject, the structure of the permutation matrix T must be slightly refined. In addition to the separation (2.31), the matrix T is chosen such that it arranges all subsystems to the right-hand side that remain passive in two consecutive switching states σ (ts ) and σ  (ts ). These are those subsystems Pi∗ which index i is contained in the set Np (σ, σ  ) := NP (σ ) ∩ NP (σ  ). In other words, the ith column of the performance matrix Q(·) remains zero Q(σ (ts )) · eiT = 0



Q(σ  (ts )) · eiT = 0.

The result is a new permutation matrix T¯ (σ, σ  ) that can be used in the same way as before (cf. (2.31)) to rearrange the performance matrix 

 ¯ ) 0 p×|N P (σ,σ  )| (σ ) = Q(σ ) T¯ −1 (σ ), M(σ

(2.33)

30

K. Schenk and J. Lunze

but the zero matrix in (2.33) will typically have less columns compared to (2.31). The main strength of the particular structure of T¯ (·) from the perspective of satisfiability is that it allows to transform (2.21) into a new form in which the matrices on both sides have a particular structure, which is used in the appendix to prove the next lemma. Lemma 2.9 Any subsystem Pi∗ , (i ∈ NP (σ, σ  )), that remains a passive system for the switching σ (ts ) → σ  (ts ) at time ts ∈ Ts , can chose the supporting points ∗( j)

lim δyi

tts

(t)

and

∗( j)

lim δyi

tts

(t)

on its own as long as the result is a satisfiable reference yi∗ (t). The combination of Lemmas 2.8 and 2.9 allows to equip the subsystems with the autonomy illustrated by Fig. 2.8. At the bottom of the figure, the graph shows the time intervals in which subsystem Pi∗ belongs to a set of active systems or passive systems. At the top of the figure, one example of a reference yi∗ (t) is shown that results from the method described in this chapter. • A solid blue line is a part of the reference that is defined by the coordinator from a global perspective. This is always the case for those intervals in which the subsystem belongs to a set of active systems (here σ2 and σ5 ). • A dashed blue line indicates that the subsystems have chosen the shape of the reference on their own according to Lemma 2.8. The shape must be continuously differentiable and has to consider the supporting points at the boundaries shown as circles. • The filled black circles are provided by the coordinator to ensure the satisfiability (2.21). As a consequence of Lemma 2.9, the coordinator has to determine such supporting points for the switching time ts ∈ {t0 , t1 , . . . , t5 , . . .} whenever the subsystem is an active system either before or after the switching (or both). • Whenever the subsystem remains passive (here from σ3 to σ4 ), the supporting points must not be provided by the coordinator, as stated by Lemma 2.9, but can be determined by the local subsystem on its own. The white circles show such supporting points based on local decisions. Motivated by the above discussion, it is now possible to define the trajectory  y˜i∗ (t, σ (t)) =

yi∗ (t) if t ∈ NA (i, σ (t)) undefined else

(2.34)

that is sent by the coordination unit to the individual subsystems. The function y˜i∗ (t, σ (t)) is identical to the reference yi∗ (t) if the corresponding subsystem is active, while it is undefined everywhere else. It means, the dashed blue line in Fig. 2.8 shows y˜i∗ (t, σ (t)).

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

31

Fig. 2.8 Illustration of the autonomy of subsystems: The reference yi∗ (t) or subtask, respectively, for an active subsystem is defined by the coordinator (solid line) because global constraints must be satisfied. Otherwise, the subsystem can shape its reference based on local decisions, which is important to obtain autonomy

2.3.4 Summary: Design Steps for Fault-Tolerant Task Assignment In summary, the following steps need to be done to get satisfiable and consistent tasks Ti and, hence, to solve Problem 2.2. The coordination unit in Fig. 2.4 knows the cooperative task T = ( y∗p (t), Q(σ ), σ ) and executes Algorithm 2.1. The result is a set of reference trajectories y˜i∗ (t, σ (t)) for each subsystem Pi∗ , (i ∈ N ), that defines the desired behavior of the subsystems whenever Pi∗ is active (cf. Figure 2.8). After the coordination unit has finished the execution of Algorithm 2.1, each subsystem has received y˜i∗ (t, σ (t)) and executes Algorithm 2.2 to determine the subtask Ti = (yi∗ (t)). Therefore, the local reconfiguration unit Ri complements the received trajectory y˜i∗ (t) by a spline interpolation (dashed black lines in Fig. 2.8) to build the local reference yi∗ (t). The following theorem is the main result of this chapter and states that the execution of Algorithms 2.1 and 2.2 solves the flexible task assignment problem that was specified as Problems 2.1 and 2.2. Theorem 2.1 If a coordination unit executes Algorithm 2.1 and the subsystems Pi∗ , (i ∈ N ), execute Algorithm 2.2 to determine subtasks Ti , the cooperative task T is solved.

2.4 Example: Transportation System 2.4.1 Cooperative Task This section continues the running example of the transportation system and shows the subtasks assigned to the linear actuators (see Fig. 2.3). Therefore, the coordination unit executes Algorithm 2.1 and the subsystems execute Algorithm 2.2. As a

32

K. Schenk and J. Lunze

Algorithm 2.1 Flexible task assignment (coordination unit) Given: cooperative task T = ( y∗p (t), Q(σ ), σ ), information about the fault 1: determine the reference switching signal σ ∗ (t) with (2.13) 2: use (2.17) to determine the set Ts (σ ∗ (t)) of all switching times 3: if some subsystem is affected by a fault then 4: determine a function δ y∗ (t) that satisfies the conditions stated in Lemma 2.5 and Lemma 2.7 5: else 6: determine a function δ y∗ (t) that satisfies only the conditions stated in Lemma 2.5 7: end if 8: determine the set of consistent references with Lemma 2.4 9: for all subsystem Pi∗ , (i ∈ N ) do 10: use (2.30) to determine the time NA (i, σ ∗ (t)) in which Pi∗ is active 11: determine partially defined reference y˜i∗ (t) with (2.34) ∗( j) 12: send y˜i∗ (t) to the subsystem including the supporting points y˜i (ts ) for ts ∈ Ts 13: end for Result: references y˜i∗ (t), (i ∈ N ), for each subsystem

Algorithm 2.2 Subtasks (local subsystem) Given: reference y˜i∗ (t) defined in set NA (i, σ ∗ (t)) 1: for all time intervals [ts , ts+1 ] ∈ Np (i, σ ∗ (t)) in which Pi∗ is passive do 2: determine reference yi∗ (t) that considers the conditions stated in Lemma 2.8 and Lemma 2.9 3: end for Result: consistent and satisfiable subtask Ti = (yi∗ (t))

reminder, the aim of all linear actuators is to steer the ball in the horizontal coordinate sx along the trajectory   sx∗ (t) = 10−3 · −0.0027 · t 7 + 0.095 · t 6 − 1.1 · t 5 + 4.7 · t 4 , for t ∈ T = [0, 10 s] which is shown in Fig. 2.9 (left). Based on this desired trajectory sx∗ (t), the mass of the ball, and the geometry of the transportation system, the reference acceleration of the ball as the reference performance output   y∗p (t) = 10−3 · −0.012 · t 5 + 0.3 · t 4 − 2.4 · t 3 + 6.1 · t 2 ∼ s¨x∗ (t)

(2.35)

is determined. In summary, the cooperative task T = ( y∗p (t), Q(σ ), σ ) for the transportation system is to ensure the reference acceleration (2.35) while taking into account the performance matrices (2.10) that depends on the switching state (2.11).

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

33

Fig. 2.9 Desired behavior: The ball should be steered along the reference sx∗ (t) in [m] (left) that is ensured by the choice of a proper performance output yp∗ (t) in [mm/s2 ] (right)

Fig. 2.10 Switching times: The subsystems Pi∗ , (i ∈ {1, . . . , 10}), are not active during the complete transportation process but only for some short period of time (left). Every time the ball is located in between two other linear actuators, the switching state changes (right)

2.4.2 Active and Passive Subsystems As part of executing Algorithm 2.1, the coordination unit has to determine which subsystem is active or passive in which time interval. The results are shown in Fig. 2.10. On the left-hand side, each bar represents the time in which a subsystem Pi∗ belongs to the set of active subsystems, while the right-hand side shows the switching instances. As expected from the transportation system, only two subsystems are active at the ∗ ) are much same time. Furthermore, the pairs of subsystems (P1∗ , P2∗ ) and (P9∗ , P10 longer active compared to other subsystems. The reason is that the ball’s velocity is very slow at the beginning, where the ball is between P1∗ and P2∗ , and at the end, ∗ . where the ball is between P9∗ and P10 Note that the results shown in Fig. 2.10 are the same in the fault-free case and in the fault case because the desired movement of the ball has not changed in the fault case and, hence, the ball passes the same subsystems in the same time interval as in the fault-free case.

34

K. Schenk and J. Lunze

2.4.3 Scenario 1: Fault-Free Subsystems Figure 2.11 shows the local references yi∗ (t), (i ∈ {1, . . . , 10}), for each subsystem in the fault-free case. Each trajectory is divided into different parts shown as by the different colors of a line, which is used to illustrate the different intervals in which it belongs to the set of active subsystems or the set of passive subsystems. Every time instance in which the switching state changes (see Fig. 2.10 (right)), the color changes. It can be seen that all references are smooth over the complete time interval as required by Corollary 2.1. The linear actuators have a relative degree of 2 that requires each trajectory to be at least continuously differentiable up to the order 1. When a subsystem is passive, it locally shapes its reference. For the simulation, it always uses a trajectory to move in the initial position yi (0) = 0. As an example, the procedure described in Sect. 2.3.4 is illustrated from the local perspective of subsystem P3∗ : 1. P3∗ belongs to the set of active subsystem for t ∈ [2.88 s, 4.22 s]. 2. P3∗ receives the reference y˜3∗ (t) for t ∈ [2.88 s, 4.22 s] from the coordination unit. 3. For the remaining time t ∈ [0, 2.88 s] and t ∈ [4.22 s, 10 s], subsystem P3∗ receives no reference information because it is part of the passive subsystems. 4. According to Algorithm 2.2, subsystem P3∗ complements y˜3∗ (t) to the complete interval [0, 10 s], which yields y3∗ (t) as shown in Fig. 2.11.

2.4.4 Scenario 2: Faulty Subsystem In the second scenario, it is assumed that subsystem P5 is affected by a fault as described by (2.4) with  P5 :

x˙ 5 (t) = 0, x 5 (0) = 0 y5 (t) = cT5 x 5 (t).

The output y5 (t) of the faulty subsystem is blocked at y5 (tf = 0 s) = 0 and can no longer be manipulated, i.e., y5 (t ≥ tf ) = 0. The way the proposed method handles the fault is by redistributing the subtasks. Figure 2.12 shows the new local references (solid red line). It can be seen that the reference of subsystem P5∗ is y5∗ (t) = 0, which takes into account that the subsystem is faulty. The neighbors also receive new subtasks from the coordination unit and, hence, generate new local references as well. Comparing the references of the fault-free case (dashed black line) with the references of the faulty-case (solid red line) in Fig. 2.12 shows that only y3∗ (t), y4∗ (t), y5∗ (t), y6∗ (t), and y7∗ (t) are changed in the fault case. This means, the fault does not propagate through the overall system. The coordination unit only has to take into

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

35

Fig. 2.11 Fault-free: Local reference trajectories yi∗ (t), (i ∈ {1, . . . , 10}), in mm for the fault-free case. All functions are continuously differentiable and, hence, can be satisfied by the subsystems

account the faulty subsystem and its neighbors to a certain extent. Furthermore, even those subsystems that receive new subtasks, receive them only for a short period of time. For example, subsystem P3∗ receives a new reference only for the time interval [2.88 s, 4.22 s].

2.5 Conclusions This chapter has investigated fault-tolerant control of networked systems. A method for flexible task assignment was presented that enables networked control systems to perform a cooperative task even if a subsystem is affected by a fault. A coordination unit uses global criteria to evaluate which subsystem has to perform which task. Furthermore, the coordinator considers that the subsystems may not be contin-

36

K. Schenk and J. Lunze

Fig. 2.12 Fault case: Reference trajectories yi∗ (t), (i ∈ {1, . . . , 10}), in mm for the fault case. The faulty subsystem P5∗ remains in its initial position since it cannot be moved. To counterbalance this behavior, subsystems P3∗ , P4∗ , P6∗ , and P7∗ move in a different way compared to the fault-free case

uously involved in the cooperative task and does not assign any tasks to these passive subsystems for these time intervals. In addition, the fault case was considered in this chapter in which single subsystems block completely. In this case, the coordinator must re-evaluate the cooperative task and assign new tasks to the subsystems. An important observation here is that the fault does not propagate completely through the network. Instead, only some subsystems receive new tasks and only for a part of the whole time interval. Acknowledgements This work was financially supported by the Deutsche Forschungsgemeinschaft (DFG) under the grant number LU 462/43-1.

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

37

Appendix For the proof of Lemma 2.9, the following lemma used states the necessary results concerning the structure of the inverse of matrices. Lemma 2.10 Given a permutation matrix T ∈ R N ×N , a matrix Q ∈ R p×N and the ˜ = QT −1 . Then, the following relations are true. matrix Q 1. T −1 = T T  ˜ = M 0 p×n it holds that: 2. For Q ˜+= Q



M+ 0n× p

and



+ I N −n − M + M 0 ˜ ˜ IN − Q Q = 0 In

˜ + = T · Q+ 3. Q ˜ ˜ +Q 4. T (i − Q + Q)T −1 = i − Q Proof Note that 1 is used to prove 3 and that 3 is used to prove 4. 1. Basic property of a permutation matrix. Can be found, for example, in [24].  + 2. The results follow from Q + = Q T Q Q T , which can be found, e.g., in [21], and the four Moore- Penrose Axioms of a pseudoinverse. 3. Follows from the definition of a pseudoinverse via the limit value:  T −1 T   ˜ = lim Q ˜ + δ2 i ˜ = lim T Q T QT −1 + δ 2 T T −1 −1 T Q T ˜ Q Q Q δ→0 δ→0  T  T 2 −1 = T lim Q Q + δ i Q = T Q+ δ→0

˜ ˜ +Q 4. T (i − Q + Q)T −1 = i − T Q + QT −1 = i − ( Q · T −1 )+ ( QT −1 ) = i − Q

Proof of Lemma 2.9 The considered switching time ts is arbitrary but fixed. Hence, T¯ = T¯ (σ (ts ), σ  (ts )), σ = σ (ts ), and σ  = σ  (ts ) are used for better readability. Multiply Eq. (2.23) from the left-hand side with T¯ :

 a+ (ts , j)  ∗( j)   + +   ¯ ¯ T Q (σ ) − Q (σ ) yp (ts ) = T δ Q(σ ) −δ Q(σ ) a− (ts , j) −1 Next, the right-hand side gets an extra T¯ · T¯ = i, which does not change the equation in any way:

38

K. Schenk and J. Lunze

 T¯ a (t , j)    −1 + s j)  ¯ −1 ¯ ¯ ¯ . (t ) = T¯ Q + (σ ) − Q + (σ  ) y∗( T δ Q(σ ) T − T δ Q(σ ) T s p T¯ a− (ts , j) As a consequence of the particular choice of the permutation matrix T¯ together with the results of Lemma 2.10 the matrices on both side have the following structure:

+  +  ∗( j) ¯ + (σ  ) ¯ (σ ) − M M +  ¯ T Q (σ ) − Q (σ ) yp (ts ) = 0|N P (σ,σ  )|× p and 

(2.36)

 −1 −1 = T¯ δ Q(σ  ) T¯ − T¯ δ Q(σ ) T¯

¯ + (σ  ) M(σ ¯ ) ¯ + (σ ) M(σ ¯ )) i− M 0 −(i − M 0 . 0 −i|N P (σ,σ  )|× p 0 i|N P (σ,σ  )|× p (2.37)

It must be noted that deriving (2.36) and (2.37) is only possible due to the special structure of the permutation matrix T¯ as it is defined by (2.33). The reason is that the zero matrices in (2.36) and the identity matrices in (2.37) are of the same dimensions, which is only possible because T¯ combines subsystems that remain passive in two consecutive switching states. The matrix T¯ separates the supporting points as well

act a+ (ts , j) ¯ T a+ (ts , j) = pas a+ (ts , j)



act a− (ts , j) ¯ T a− (ts , j) = pas a− (ts , j)

(2.38)

Now, the structure of the matrices in (2.36) and (2.37) together with the separation (2.38) allows finally to write the satisfiability condition (2.36) as two independent equations: j) ¯ + (σ ) − M ¯ + (σ  )) y∗( ¯ + (σ  ) M(σ ¯  ))aact (M p (ts ) = (i − M + (ts , j) + ¯ ¯ −(i − M (σ ) M(σ ))aact − (ts , j) pas pas 0 = a+ (ts , j) − a− (ts , j)

By the definition of the permutations matrix T¯ , the coefficients a+ (ts , j) and pas a− (ts , j) are the supporting points of those subsystems that remain passive between  two consecutive switching states σ and σ  , which completes the proof. pas

2 Fault Tolerance in Networked Control Systems by Flexible Task Assignment

39

References 1. Blanke, M., Kinnaert, M., Lunze, J., Staroswiecki, M.: Diagnosis and Fault-Tolerant Control. Springer, Berlin (2016) 2. Wang, L., Markdahl, J., Liu, Z., Hu, X.: Automatica 96, 121 (2018) 3. Zou, Y., Meng, Z.: Automatica 99, 33 (2019) 4. Schenk, K., Lunze, J.: In: Proceedings of 2016 3rd Conference on Control and Fault Tolerant Systems, Barcelona, pp. 723–729 (2016) 5. Schenk, K., Gülbitti, B., Lunze, J.: In: Proceedings of 10th Symposium on Fault Detection, Supervision and Safety for Technical Processes, Warsaw, pp. 570–577 (2018) 6. Kamel, M.A., Yu, X., Zhang, Y.: IEEE Trans. Control Syst. Technol. 26(2), 756 (2018) 7. Qin, J., Ma, Q., Gao, H., Zheng, W.X.: IEEE Trans. Mechatron. 23(1), 342 (2018) 8. Gallehdari, Z., Meskin, N., Khorasani, K.: Automatica 84, 101 (2017) 9. Yang, H., Jiang, B., Staroswiecki, M., Zhang, Y.: Automatica 54, 49 (2015) 10. Patton, R.J., Kambhampati, C., Casavola, A., Zhang, P., Ding, S., Sauter, D.: Eur. J. Control 13(2–3), 280 (2007) 11. Roychoudhury, I., Biswas, G., Koutsoukos, X.: IEEE Trans. Autom. Sci. Eng. 6(2), 277 (2009) 12. Ferdowsi, H., Raja, D.L., Jagannathan, S.: In: The 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, pp. 1–7 (2012) 13. Reppa, V., Polycarpou, M.M., Panayiotou, C.G.: IEEE Trans. Autom. Control 60(6), 1582 (2015) 14. Panagi, P., Polycarpou, M.M.: IEEE Trans. Autom. Control 56(1), 178 (2010) 15. Vey, D., Hügging, S., Bodenburg, S., Lunze, J.: In: Proceedings of the 9th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, Paris, pp. 360–367 (2015) 16. Schenk, K., Lunze, J.: In: Proceedings of 7th IFAC Workshop on Distributed Estimation and Control in Networked Systems, Groningen, pp. 40–45 (2018) 17. Schwab, A., Schenk, K., Lunze, J.: In: Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, Chigago, pp. 1–6 (2019) 18. Ge, X., Yang, F., Han, Q.L.: Inf. Sci. 380, 117 (2017) 19. Zhang, D., Shi, P., Wang, Q.G., Yu, L.: ISA Trans. 66, 376 (2017) 20. Zhang, X.M., Han, Q.L.: IEEE Trans. Ind. Inf. 12(5), 1740 (2016) 21. Laub, A.J. (ed.): Matrix Analysis for Scientists and Engineers. SIAM, Philadelphia (2005) 22. Schenk, K., Lunze, J.: In: Proceedings of 2019 4th Conference on Control and Fault Tolerant Systems, Casablanca, pp. 257–263 (2019) 23. Süli, E., Mayers, D.F.: An Introduction to Numerical Analysis. Cambridge University Press, Cambridge (2003) 24. Hogben, L. (ed.): Handbook of Linear Algebra. Chapman and Hall/CRC, Boca Raton (2007)

Chapter 3

Resilient Control Under Denial-of-Service: Results and Research Directions Claudio De Persis and Pietro Tesi

Abstract The question of security is becoming central for the current generation of engineering systems which more and more rely on networks to support monitoring and control tasks. This chapter addresses the question of designing network control systems that are resilient to Denial-of-Service, that is to phenomena which render a communication network unavailable to use. We review recent results in this area and discuss some of the research challenges.

3.1 Introduction Security is becoming central for modern engineering systems which more and more rely on networks to support monitoring and control tasks [1]. The main concern is that networks, especially wireless networks, can exhibit unreliable behavior as well as security vulnerabilities, and their malfunctioning can severely affect the systems which our society crucially relies on [2]. Denial-of-Service (DoS) is one of the most common, yet severe, malfunctions that a network can exhibit. By DoS, one usually refers to the phenomenon by which a communication network becomes unavailable to use, meaning that data exchange cannot take place. It is a general term incorporating different types of malfunctions (all causing network unavailability) such as congestion, devices de-authentication and jamming interference [3, 4], and it can be generated by unintentional or intentional sources in which case, the latter, one often uses the term DoS attacks. Due to its disruptive effects and common occurrence, DoS has become a central research theme in the context of networked control systems [5].

C. De Persis University of Groningen, Nijenborgh 4, 9747 AG Groningen, Groningen, Netherlands e-mail: [email protected] P. Tesi (B) University of Florence, Via di Santa Marta 3, 50139 Florence, Italy e-mail: [email protected] © Springer Nature Switzerland AG 2021 R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_3

41

42

C. De Persis and P. Tesi

This chapter addresses the question of designing DoS-resilient networked control systems. The literature on this topic is vast and diverse, and covers linear [6–9], nonlinear [10–12] and distributed systems [13–15]. In this chapter, we will review some of the results in this area, building on the framework developed in [8]. The objective here is not to provide a comprehensive literature review, which is an almost impossible task. Rather, the objective is to discuss, for three macro areas, basic results, and research challenges. In Sect. 3.2, we consider a centralized framework where controller and plant exchange data through a network which can undergo DoS. For a given controller, we characterize frequency and duration of DoS under which closed-loop stability is preserved. Related to this problem, we review other DoS models considered in the literature, and discuss two open problems in this area: optimality and the role of transmission scheduling. In Sect. 3.3, we focus the attention on the problem of designing control systems that maximize robustness against DoS. We show that his problem has clear connections with the problem of designing finite time observers, and discuss challenges that arise when the control unit is placed remotely from the plant actuators. In Sect. 3.4, we finally consider distributed systems, the area which currently poses most of the research challenges. We first discuss DoS-resilient consensus, which is as a prototypical distributed control problem [16]. Subsequently, we discuss some of the challenges that arise when dealing with networks having more complex dynamics, as well as the problem of identifying critical links in networks with peer-to-peer architecture. The chapter ends with some concluding remarks in Sect. 3.5. The main focus of this chapter is on linear systems. In fact, while nonlinear systems have their own peculiarities [10–12], most of the issues arising with DoS are shared by linear systems as well. In this chapter, we will mostly consider control problems. Yet, a large and fruitful research line has been developed also for remote estimation problems, that is problems in which the objective is to reconstruct the process state through network measurements [17, 18]. In this chapter, the focus is on methods to achieve resilience against DoS. In the context of DoS attacks, research has been carried out also to determine optimal attack scheduling [19, 20]. Albeit not central to our discussion, we will further elaborate on this point in Sect. 3.2 when discussing optimality issues. We finally point out that DoS is only one of the aspects that affect the security of networked control systems. In the last years, a large amount of research has been carried out on this topic, mostly in connection with security against attacks, which include for instance, bias injection, zero dynamics, and replay attacks. We refer the interested reader to [2, 5, 21] for a general overview of security issues in networked cyber-physical systems.

3 Resilient Control Under Denial-of-Service: Results and Research Directions

43

Fig. 3.1 Schematic representation of the networked control system

3.2 Stability Under Denial-of-Service 3.2.1 Basic Framework Consider a dynamical system given by x(t) ˙ = Ax(t) + Bu(t) + w(t),

(3.1)

where t ≥ 0 is the time; x ∈ Rn x is the state, u ∈ Rn u is the control signal, and w ∈ Rn x is a disturbance; A and B are matrices with (A, B) is stabilizable. The control action is implemented over a communication network, which renders the overall control system a networked control system. Let K be a controller designed in such a way that Φ := A + B K is Hurwitz (all the eigenvalues of Φ have negative real part), and let Δ > 0 be a constant specifying the desired update rate for the control signal. Ideally, the control signal is then u ∗ (t) := K x(kΔ) for all t ∈ [kΔ, (k + 1)Δ) with k = 0, 1, . . ., as in classic sampled-data control. Throughout this chapter, the sequence {kΔ}k=0,1,... will be referred to as the sequence of transmission times or more simply transmissions, that is t is a transmission time if and only if t = kΔ for some k. Due to the presence of a communication network some of the transmissions can fail, that is u(t) = u ∗ (t). Whenever transmissions fail, we say that the network is under Denial-of-Service (DoS). In general, DoS can have a genuine or malicious nature, in which case we refer to DoS attacks. A schematic representation of the networked control system is reported in Fig. 3.1. Remark 3.1 Throughout this chapter, we do not distinguish whether transmissions fail because communication is not possible (for instance, when transmission devices are disconnected from the network) of because data are corrupted (and discarded) due to interference signals [3, 4]. In fact, from a control perspective it is sufficient to interpret DoS as a mechanism inducing packet losses (cf. Sect. 3.2.2.1). 

44

3.2.1.1

C. De Persis and P. Tesi

Stability in DoS-Free Networks

Even in the ideal situation in which the network is DoS-free, the transmission times must be carefully chosen. The following result addresses this point and is key for the results of Sect. 3.2.2. Given any positive definite matrix Q = Q  , let P be the unique solution to the Lyapunov equation Φ  P + P Φ + Q = 0,

(3.2)

Let α1 and α2 be the smallest and largest eigenvalue of P, respectively. Let γ1 be the smallest eigenvalue of Q and let γ2 := 2P B K . Given a square matrix M, let μ M be its logarithmic norm, that is μ M := max{λ| λ ∈ spec{(M + M  )/2}}, and let ⎧  σ 1 ⎪ ⎪ , ⎨ 1 + σ max{Φ,  1}  Δ := 1 1 σ ⎪ ⎪ μ log + 1 , ⎩ A μA 1 + σ max{Φ, 1}

μA ≤ 0 (3.3) μA > 0

where σ ∈ (0, γ1 /γ2 ). Definition 3.1 (cf. [22]) Consider a dynamical system x˙ = f (x, w), and let L∞ denote the set of measurable locally essentially bounded functions. We say that the system is input-to-state stable (ISS) if there exist a K L -function β and a K∞ function γ such that, for all x(0) and w ∈ L∞ , x(t) ≤ β(x(0), t) + γ (w∞ )

(3.4)

for all t ≥ 0, where w∞ := sups≥0 w(s). If (3.4) holds when w ≡ 0, then the system is said to be globally asymptotically stable (GAS).  Lemma 3.1 ([8]) Consider the system Σ given by (3.1) with u(t) = K x(kΔ) for all t ∈ [kΔ, (k + 1)Δ), k = 0, 1, . . ., and where K is such that Φ = A + B K is Hurwitz. Suppose that the network is DoS-free. Then, Σ is ISS with respect to w for  every Δ ≤ Δ. According to Lemma 3.1, Δ should be then interpreted as an upper bound on the transmission times under which ISS is guaranteed.

3.2.2 Input-to-State Stability Under DoS We now turn the attention to the question of stability in the presence of DoS. Let {h n } with n = 0, 1, . . . and h 0 ≥ 0 be the sequence of DoS off/on transitions, that is the time instants at which DoS changes from zero (transmissions succeed)

3 Resilient Control Under Denial-of-Service: Results and Research Directions

45

to one (transmissions fail). Then Hn := {h n } ∪ [h n , h n + τn ) represents the nth DoS time interval, of duration τn ≥ 0, during which all the transmissions fail. Given nonnegative reals τ and t with t ≥ τ , the symbol Ξ (τ, t) :=



Hn



[τ, t]

(3.5)

n

represents the subset of the interval [τ, t], where transmissions fail. Accordingly, Θ(τ, t) := [τ, t] \ Ξ (τ, t) represents the subset of [τ, t], where transmissions succeed. Let now {sr } with r = 0, 1, . . . denote the sequence of successful transmissions, that is t = sr for some r if and only if t = kΔ for some k and t ∈ Θ(0, t). Then the control signal is given by ⎧ ⎨ u(t) = K z(t) z˙ (t) = 0, ⎩ z(t) = x(t),

t = sr t = sr

(3.6)

with z(0⁻) = 0. In simple terms, the control signal behaves in a sample-and-hold fashion according to the last successful transmission. Here, the notation z(0⁻) = 0 implies that the controller has zero initial conditions if no data is received at t = 0, that is z(0) = 0 if s0 > 0.

The main question now is to determine the amount of DoS that the control system can tolerate before undergoing instability. Such an amount is obviously not arbitrary (the extreme case is when the network is constantly under DoS and no transmission can succeed). The result which follows stipulates that, in order to get stability, both DoS frequency and duration should be sufficiently small. Given τ, t ∈ R≥0 with t ≥ τ, let υ(τ, t) be the number of DoS off/on transitions occurring on the interval [τ, t).

Assumption 3.1 (DoS frequency) There exist constants η ≥ 0 and τD > 0 such that

υ(τ, t) ≤ η + (t − τ)/τD    (3.7)

for all τ and t with t ≥ τ.

Assumption 3.2 (DoS duration) There exist κ ≥ 0 and T > 0 such that

|Ξ(τ, t)| ≤ κ + (t − τ)/T    (3.8)

for all τ and t with t ≥ τ.

Theorem 3.1 ([8]) Consider the system Σ given by (3.1) with control signal (3.6), where K is such that Φ = A + B K is Hurwitz. Let the inter-transmission time Δ be


chosen as in Lemma 3.1. Then, Σ is ISS for every DoS pattern satisfying Assumptions 3.1 and 3.2 with arbitrary η and κ, and with τD and T such that

Δ/τD + 1/T < ω1/(ω1 + ω2)    (3.9)

where ω1 := (γ1 − γ2σ)/(2α2) and ω2 := 2γ2/α1, and where α1, α2, γ1 and γ2 are as in Lemma 3.1.

Limiting the DoS frequency and duration is necessary in order to render stability a feasible task. We note in particular that (3.9) requires T > 1, since otherwise the network would always be in a DoS status. Limiting the duration of DoS is not sufficient, since stability may be destroyed also by DoS patterns with short duration but high frequency. In this respect, the condition τD > Δ in (3.9) captures the fact that DoS cannot occur at the same rate as the transmission rate. We will further elaborate on the role of the transmission rate in Sect. 3.2.3.1.
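The following sketch illustrates how this setting can be exercised numerically: it simulates the sample-and-hold law (3.6) under a prescribed DoS pattern and reports the resulting state norm. The system matrices, gain, sampling period, and DoS intervals are illustrative assumptions; the sufficient condition (3.9) would be checked separately with the constants computed as in the previous sketch.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative system, gain, and DoS pattern (assumptions, not from the chapter).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -3.0]])

dt, Delta, T_end = 1e-3, 0.05, 10.0               # integration step, sampling period, horizon
Ad = expm(A * dt)                                  # exact propagation over one step
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B        # valid here because A is invertible
n_per = int(round(Delta / dt))                     # integration steps per transmission attempt

dos = [(1.0, 0.4), (3.0, 0.6), (6.0, 0.8)]         # DoS intervals [h_n, h_n + tau_n)
under_dos = lambda t: any(h <= t < h + tau for h, tau in dos)

x = np.array([[1.0], [-1.0]])
z = np.zeros((2, 1))                               # last successfully received sample, z(0-) = 0
for k in range(int(T_end / dt)):
    t = k * dt
    if k % n_per == 0 and not under_dos(t):        # transmission attempt every Delta seconds
        z = x.copy()                               # success: update the held sample
    x = Ad @ x + Bd @ (K @ z)                      # control law (3.6): u(t) = K z(t)

print("final ||x|| =", float(np.linalg.norm(x)))
```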

3.2.2.1

Models of DoS

Other DoS models have been proposed in the literature, mostly in connection with discrete-time formulations. While these models sometimes originate from different approaches, they all stipulate that, in order to get stability, the DoS action must be constrained in time. In a discrete-time setting, a natural counterpart of Assumptions 3.1 and 3.2 is to require that there exist constants c ≥ 0 and λ > 0 such that

Σ_{k=k0}^{k1−1} (1 − θk) ≤ c + (k1 − k0)/λ    (3.10)

for all integers k0 and k1 with k1 > k0, where θk = 0 when there is DoS at time k and θk = 1 otherwise. Similar to (3.10), other formulations focusing on finite-horizon control problems [6] consider constraints of the type

Σ_{k=0}^{T} (1 − θk) ≤ c    (3.11)

where T is the control horizon of interest, while probabilistic variants of (3.10) have been proposed in [23, 24]. All these models are high-level models in the sense that they abstract away the rule according to which the network undergoes DoS. This approach is useful when there is little knowledge regarding the type of DoS and the network characteristics. When more information is available, other models can be used. For instance, [17] considers DoS in wireless networks caused by jamming signals. For this setting, a transmission at time k is successful with probability

1 − 2Q(√(α pk/(ωk + σ²))),    (3.12)

where Q(x) = (1/√(2π)) ∫_x^∞ e^{−η²/2} dη and α is a parameter. This model dictates that the probability that a transmission at time k is successful depends on the ratio between the transmission power pk and the interference power ωk (the source of DoS), which is added to the noise power σ² of the channel. Constraints on DoS are expressed in a similar way as (3.11), for example, by imposing Σ_{k=0}^{T} ωk ≤ c.

A detailed comparison among these and several other models has been recently reported in [25, 26], to which the interested reader is referred. In this respect, it is worth noting that the majority of the DoS models considered in the literature differ from the packet-loss models, for instance Bernoulli models, considered in the classic literature on networked control [27]. The latter, in fact, are more effective in characterizing the quality of the network in normal operating conditions, while DoS models account for abnormal situations such as prolonged periods of time where no transmission can succeed.
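As a small illustration of these models, the sketch below checks whether a given binary DoS sequence respects the budget (3.10) and evaluates the SINR-type success probability (3.12); the sequence and all parameter values are hypothetical.

```python
import numpy as np
from math import erf, sqrt

def satisfies_dos_budget(theta, c, lam):
    """Check (3.10): for all k0 < k1, sum_{k=k0}^{k1-1} (1 - theta_k) <= c + (k1 - k0)/lam."""
    drops = np.cumsum(np.concatenate(([0], 1 - np.asarray(theta))))
    n = len(theta)
    return all(drops[k1] - drops[k0] <= c + (k1 - k0) / lam
               for k0 in range(n) for k1 in range(k0 + 1, n + 1))

def sinr_success_probability(p_k, w_k, sigma2, alpha):
    """Success probability (3.12): 1 - 2 Q(sqrt(alpha * p_k / (w_k + sigma2)))."""
    Q = lambda x: 0.5 * (1 - erf(x / sqrt(2)))     # Gaussian tail function
    return 1 - 2 * Q(np.sqrt(alpha * p_k / (w_k + sigma2)))

theta = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]             # hypothetical sequence (1 = success, 0 = DoS)
print(satisfies_dos_budget(theta, c=2, lam=3))
print(sinr_success_probability(p_k=1.0, w_k=0.5, sigma2=0.1, alpha=4.0))
```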

3.2.3 Research Directions: Scheduling Design and Min–Max Problems

Theorem 3.1 is a prototypical result which shows that networked control systems with suitably designed transmission rates enjoy some level of robustness against DoS. The result discussed here has been extended in several directions, which include for instance nonlinear systems and output feedback [10–12, 29], as well as robustness to transmission delay [28] and quantization [29]. While much remains to be done also in these areas, especially for nonlinear systems, in the sequel we will focus on other aspects which we perceive as much less explored.

3.2.3.1 Transmission Scheduling

The preceding analysis rests on the assumption that the transmission rate is constant. This assumption can be easily relaxed by replacing the constraint Δ ≤ Δ̄ with the constraint Δk ≤ Δ̄ for all k ≥ 0. In this case, Theorem 3.1 continues to hold provided that in (3.9) we replace Δ with supk Δk [8]. This opens the way to the use of more sophisticated transmission policies, for instance event-triggered policies [30]. The event-triggered paradigm in particular advocates the idea that transmissions should take place only when strictly needed, and this can play an important role in the context of DoS. In fact, especially for distributed systems (cf. Sect. 3.4), aggressive transmission policies can exacerbate DoS by inducing congestion phenomena. Limiting the amount of transmissions can therefore help to maintain a satisfactory network throughput.


It might be argued that low-rate transmissions render stability much more fragile, in the sense that with low-rate transmissions stability can be destroyed by low-rate DoS. This fact is captured in Theorem 3.1, where (3.9) becomes more difficult to satisfy as Δ increases. Yet, at least in the context of DoS attacks, the implication “low-rate transmissions ⟹ more vulnerability” need not apply. In fact, unless an attacker can access the sensor logic, event-triggered logics can make it difficult for an attacker to predict when transmissions will take place, and thus to learn the transmission policy. The idea of rendering the transmissions less predictable has been explored in [31], where the transmission times are randomized, but not in the context of event-triggered control. Understanding the amount of information needed to predict the transmission times associated with an event-triggered logic could lead to the development of control schemes that ensure low-rate transmissions along with low predictability of the transmission times.

3.2.3.2 Optimality

The preceding analysis does not take DoS into account at the stage of designing the controller. In the next section, we will focus on the question of robustness. Hereafter, we make some considerations on the design of optimal control laws.

The design of control laws that are optimal in the presence of DoS is certainly one of the most challenging problems. A simple instance of this problem is as follows. Consider a finite-horizon optimal control problem where the goal is to minimize the desired cost function, say Σ_{k=0}^{T} f(x(k), u(k)). In the presence of DoS, a classic minimization problem of this type turns out to be a “min–max” problem in which the objective is to minimize the cost function over all possible DoS patterns within a certain class C (for instance, all DoS patterns such that Σ_{k=0}^{T} (1 − θk) ≤ c, where θk = 0 when there is DoS at time k and θk = 1 otherwise), that is

min_u max_C Σ_{k=0}^{T} f(x(k), u(k))

Only a few papers have addressed this or similar problems; see for instance [6, 17, 32, 33] for DoS-resilient state estimation. Problems of this type are naturally cast in a game-theoretic framework. The main difficulty is that, depending on the objective function, pure Nash equilibria may not exist.
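To give a flavor of the inner maximization, the sketch below fixes a hypothetical linear feedback gain and searches, by enumeration, for the worst-case DoS pattern within the budget Σ(1 − θk) ≤ c. This only evaluates the max for one fixed policy, not the full min–max problem, and all numerical data are assumptions.

```python
import numpy as np
from itertools import combinations

# Hypothetical data: the input is applied only when theta_k = 1 and set to zero under DoS.
A, B = np.array([[1.1, 0.2], [0.0, 0.9]]), np.array([[0.0], [1.0]])
K = np.array([[-0.8, -1.2]])
Qc, Rc = np.eye(2), np.array([[0.1]])
x0, T, c = np.array([[1.0], [1.0]]), 8, 2          # horizon T and DoS budget c

def cost_under_pattern(drop_steps):
    """Quadratic cost sum_{k=0}^{T} f(x(k), u(k)) for a given set of DoS instants."""
    x, J = x0.copy(), 0.0
    for k in range(T + 1):
        u = np.zeros((1, 1)) if k in drop_steps else K @ x
        J += float(x.T @ Qc @ x + u.T @ Rc @ u)
        x = A @ x + B @ u
    return J

# Enumerate all patterns with at most c drops and keep the worst one.
worst = max(cost_under_pattern(set(s))
            for r in range(c + 1) for s in combinations(range(T + 1), r))
print("worst-case cost for this fixed gain:", worst)
```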

3.3 Robust Control Design

In the preceding section, we considered a basic formulation in which the problem is to determine the amount of DoS that a given control system can tolerate. In this section, we consider the question of designing the control system so as to maximize


robustness against DoS. Later in Sect. 3.3.1, we will discuss some open problems in this research area.

In connection with the model of DoS considered in Sect. 3.2.2, the question of designing robust control systems amounts to searching for control laws that ensure stability for all DoS patterns satisfying

Δ/τD + 1/T < α    (3.13)

with α as close as possible to 1. We notice that α = 1 is the best possible bound, since for α > 1 there would exist DoS patterns that satisfy (3.13) but for which no control system can guarantee stability. As an example, for α > 1 the DoS pattern characterized by (τD, T) = (∞, 1) satisfies (3.13) but causes the network to be always in a DoS status; as another example, for α > 1 the DoS pattern characterized by (τD, T) = (Δ, ∞) with hn = nΔ, n = 0, 1, . . ., satisfies condition (3.13) but destroys all the transmissions, since the occurrence of DoS is exactly synchronized with the transmission times tk = kΔ, k = 0, 1, . . ..

3.3.1 Control Schemes Based on Finite-Time Observers

A natural way for increasing robustness against DoS is to equip the controller with a “copy” of the system dynamics so as to compensate for the lack of data, that is, to use observer-based controllers. To fix the ideas, suppose that the whole state of the system is available for measurements, and consider the following controller (recall that {sr} denotes the sequence of successful transmissions):

u(t) = K z(t)
ż(t) = A z(t) + B u(t),     t ≠ sr
z(t) = x(t),                t = sr    (3.14)

with z(0⁻) = 0. In simple terms, this controller runs a copy of the system dynamics, and its state is reset whenever a new measurement becomes available. Intuitively, in the ideal case where there are no process disturbances, a single measurement x(sr) is sufficient to get stability since, starting from sr, one has z(t) ≡ x(t). In the sequel, we consider the general case where one only has partial state measurements, assuming that the system to control is disturbance-free. The case of disturbances is discussed later in Sect. 3.3.2. Consider a stabilizable and observable system

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)    (3.15)


where y ∈ R^{ny} is the output (the measurement signal). Let S denote the set of successful transmissions, and let {vm}, m = 0, 1, . . ., be the sequence of successful transmissions preceded by μ − 1 consecutive successful transmissions, that is, such that

{vm, vm − Δ, . . . , vm − (μ − 1)Δ} ⊂ S    (3.16)

where μ is a positive integer. The following result establishes an important property related to the frequency at which consecutive successful transmissions occur, and is independent of the system and the controller.

Lemma 3.2 ([28]) Consider any DoS pattern satisfying Assumptions 3.1 and 3.2 with

Δ/τD + 1/T < 1 − (μ − 1)Δ/τD,    (3.17)

where Δ is the inter-transmission time and μ is an arbitrary positive integer. Then, v0 ≤ Q + (μ − 1)Δ and v_{m+1} − vm ≤ Q + Δ for all m, where

Q := (κ + μηΔ)(1 − 1/T − μΔ/τD)^{−1},    (3.18)

with η and κ as in Assumptions 3.1 and 3.2.



Lemma 3.2 essentially says that, for any positive integer μ, if (3.17) holds then μ consecutive successful transmissions keep occurring, with the inter-burst times v_{m+1} − vm uniformly bounded. The idea then is to equip the controller with a finite-time observer which is able to reconstruct the state of the system in μ steps; in turn, condition (3.17) ensures that the process state will be reconstructed in finite time, enabling the control unit to apply correct control signals even if the network subsequently undergoes large periods of DoS.

We will now formalize these considerations. Let μ denote the observability index of (C, e^{AΔ}) (note that if (C, A) is observable then also (C, e^{AΔ}) is observable for generic choices of Δ), and consider the following controller:

u(t) = K z(t)
ż(t) = A z(t) + B u(t),     t ≠ vm
z(t) = ζ(t),                t = vm    (3.19)

where

ζ̇(t) = A ζ(t) + B u(t),                   t ≠ sr
ζ(t) = ζ(t⁻) + M(y(t) − C ζ(t⁻)),          t = sr    (3.20)

with z(0⁻) = ζ(0⁻) = 0, and where M is selected in such a way that R^μ = 0 with R := (I − MC)e^{AΔ}.¹

¹ Note that M always exists if (C, A) is observable. In fact, R^μ = 0 amounts to requiring that

rank [C e^{AΔ}; C e^{2AΔ}; . . . ; C e^{μAΔ}] = nx.    (3.21)

Since e^{AΔ} is regular, this is equivalent to the fact that (C, e^{AΔ}) is μ-steps observable. The detailed procedure for constructing M can be found for instance in [34, Sect. 5].


The functioning of (3.19)–(3.20) is as follows. System (3.19) runs a copy of (3.15), and its state is reset to ζ whenever μ consecutive successful transmissions take place. In turn, (3.20) gives a finite-time estimate of x, which is correct after μ consecutive successful transmissions. In case of full state measurements, μ = 1, {vm} = {sr} and the controller (3.19)–(3.20) reduces to (3.14).

Theorem 3.2 ([28]) Consider a stabilizable and observable system as in (3.15) with the controller (3.19)–(3.20). Let Δ be the inter-transmission time. Then, the closed-loop system is GAS for any DoS pattern satisfying Assumptions 3.1 and 3.2 with arbitrary η and κ, and with τD and T satisfying

Δ/τD + 1/T < 1 − (μ − 1)Δ/τD    (3.22)

where μ is the observability index of (C, e^{AΔ}).

When μ = 1, which holds in case of full state measurements and can be enforced if C is a design parameter, (3.22) reduces to the ideal bound 1/T + Δ/τ D < 1. Notice that in this case, by Lemma 3.2 at least one successful transmission is guaranteed to occur. On the other hand, for any given μ > 1 (C is a problem constraint), one can get close to 1/T + Δ/τ D < 1 by decreasing Δ, and the limit is only dictated by the maximum transmission rate allowed by the network.
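The sketch below shows one way (an assumption of this sketch, not the construction of [34]) to compute a gain M making R = (I − MC)e^{AΔ} nilpotent: Ackermann's deadbeat observer formula applied to the pair (e^{AΔ}, C e^{AΔ}). This guarantees R^{n_x} = 0; obtaining R^μ = 0 with μ < n_x requires the procedure referenced in the footnote.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.5]])           # illustrative observable pair (C, A)
C = np.array([[1.0, 0.0]])
Delta = 0.2
Ad = expm(A * Delta)
Cd = C @ Ad                                         # R = (I - M C) Ad = Ad - M Cd
n = A.shape[0]

# Ackermann's formula with desired characteristic polynomial s^n (deadbeat):
# M = Ad^n * O^{-1} * e_n, where O is the observability matrix of (Cd, Ad).
O = np.vstack([Cd @ np.linalg.matrix_power(Ad, i) for i in range(n)])
e_n = np.zeros((n, 1)); e_n[-1, 0] = 1.0
M = np.linalg.matrix_power(Ad, n) @ np.linalg.solve(O, e_n)

R = (np.eye(n) - M @ C) @ Ad
print("||R^n|| =", np.linalg.norm(np.linalg.matrix_power(R, n)))   # numerically ~ 0
```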

3.3.2 Performant Observers and Packetized Control

The control scheme (3.19)–(3.20) relies on the use of a finite-time observer with state resetting. In this section, we discuss two peculiarities of this control scheme which deserve special attention.

3.3.2.1 Robustness of Finite-Time Observers

Theorem 3.2 relies on a finite-time observer which ensures fast state reconstruction. Interestingly, to the best of our knowledge, it is not possible to obtain similar results by means of asymptotic observers. This suggests that, as far as stability is concerned, estimation speed is the primary factor.


In the presence of disturbance or measurement noise, however, using finite-time observers can negatively affect the control system performance. In order to illustrate this point, consider a variant of system (3.15) given by 

ẋ(t) = A x(t) + B u(t) + d(t)
y(t) = C x(t) + n(t)    (3.23)

where d and n represent disturbance and measurement noise, respectively. One can extend the analysis also to such a situation [28, Theorem 1], but the estimation error e(vm) = z(vm) − x(vm) at the times vm turns out to be

e(vm) = Σ_{k=0}^{μ−1} R^k M n(vm − kΔ) + Σ_{k=0}^{μ−2} R^k v(vm − kΔ),

where

v(t) := −(I − MC) ∫_{t−Δ}^{t} e^{A(t−s)} d(s) ds.

Although stability in an ISS sense is preserved, this implies that one can have a large noise amplification, which is a well-known fact for deadbeat observers. An important investigation in this area concerns the development of observers that guarantee robustness to noise and disturbance while preserving the properties of finite-time observers in the ideal case where noise and disturbance are zero. This research line has recently attracted renewed, independent interest in the context of hybrid systems [35]. Achievements in this area could contribute not only to control problems but also to estimation problems, another extremely active research area in the context of DoS [17, 18, 35].

3.3.2.2 Robustness in Remote Control Architectures

Another peculiarity of the control system (3.19)–(3.20) is that it requires the control unit to be co-located with the process actuators, which is needed to continuously update the control signal (Fig. 3.2). In case the control unit is instead placed remotely, the situation is inevitably more complex. For remote systems, a possible approach is to emulate co-located architectures through buffering/packetized control [36, 37]. In simple terms, the basic idea is that at the transmission times the control unit should transmit not only the current control update but also predictions of future control updates, to be stored at the process side and used during the periods of DoS. In [38], for the case of full state measurements, it was shown that the ideal bound 1/T + Δ/τD < 1 achievable through co-location becomes more conservative, with a correction term involving ω2(κ + ηΔ).

[…]

where ε > 0 is a sensitivity parameter which is used at the design stage to trade off the frequency of the control updates against the accuracy of the consensus region; the function ave_i : R^n → R is given by

ave_i(t) := Σ_{j∈Ni} (xj(t) − xi(t))    (3.27)

and represents the local average that node i forms with its neighbors; finally, the function fi : R^n → R>0 is given by

fi(x(t)) := |ave_i(t)|/(4di)   if |ave_i(t)| ≥ ε,
fi(x(t)) := ε/(4di)            otherwise.

In simple terms, at each update time node i polls its neighbors. If the transmission succeeds (there is no DoS), then node i computes its own local average, which provides information on the nodes' disagreement, and the control law is updated accordingly; otherwise, the control signal is set to zero, meaning that node i remains at its current value. At the same time, node i computes, through the function fi, the next time instant at which an update will occur. For this reason, the control logic is referred to as self-triggered. The use of this logic for consensus in DoS-free networks was first proposed in [44].
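A minimal simulation sketch of this polling logic is given below. Since the update law (3.25)–(3.26) itself is not reproduced in this excerpt, a ternary (sign-type) update in the spirit of [44] is used as a stand-in, and the graph, DoS interval, retry rule under DoS, and all parameters are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                     # ring graph (illustrative)
nbrs = {i: [b for a, b in edges if a == i] + [a for a, b in edges if b == i] for i in range(4)}
deg = {i: len(nbrs[i]) for i in range(4)}
eps, dt, T_end = 0.2, 1e-3, 20.0

x = rng.uniform(-2, 2, 4)                                    # integrator states
u = np.zeros(4)
next_poll = np.zeros(4)                                      # each node keeps its own clock
under_dos = lambda t: 5.0 <= t < 7.0                         # illustrative DoS interval

for k in range(int(T_end / dt)):
    t = k * dt
    for i in range(4):
        if t >= next_poll[i]:
            if under_dos(t):
                u[i] = 0.0                                   # polling failed: node stands still
                next_poll[i] = t + eps / (4 * deg[i])        # retry interval (assumption)
            else:
                ave = sum(x[j] - x[i] for j in nbrs[i])      # local average (3.27)
                u[i] = np.sign(ave) if abs(ave) >= eps else 0.0   # stand-in for (3.25)
                next_poll[i] = t + max(abs(ave), eps) / (4 * deg[i])   # dwell time f_i
    x = x + dt * u                                           # integrator dynamics

print("final disagreement:", max(abs(sum(x[j] - x[i] for j in nbrs[i])) for i in range(4)))
```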

3.4.1.2 Resilience Against DoS

The result which follows characterizes the robustness properties of (3.25)–(3.26) in the presence of DoS. Let

E := {x ∈ R^n : |Σ_{j∈Ni} (xj − xi)| < ε  ∀ i ∈ I}    (3.28)

Theorem 3.3 ([13]) Consider a connected undirected graph where the nodes follow the logic (3.25)–(3.26). Consider any DoS pattern satisfying Assumptions 3.1 and 3.2 with η and κ arbitrary, and with τ D and T satisfying


Δ/τD + 1/T < 1    (3.29)

where Δ := ε/(4dmin) with dmin := min_{i∈I} di. Then, for every initial condition, the nodes converge in finite time to a point belonging to the set E in (3.28).

This consensus algorithm has some interesting features that make it appealing in distributed control: (i) it is fully distributed, also with respect to the clocks of the nodes, which do not have to be synchronized; (ii) it achieves finite-time convergence, where the accuracy of consensus depends on a design parameter ε which can be used to trade off frequency of the control updates and consensus accuracy; (iii) it relies on a transmission logic in which the updates take place only when strictly needed. Like in event-based control, this feature helps to reduce the communication burden, which is especially important in distributed settings. Other features of the algorithm are discussed in the section which follows.

3.4.2 Complex Network Systems and Critical Links

Developing DoS-resilient distributed control algorithms is probably the topic where most of the research challenges are concentrated. In the sequel, we will discuss two important topics where results are lacking.

3.4.2.1 Networks with Complex Dynamics

As mentioned at the beginning of Sect. 3.4, most of the research in this area has been developed for consensus-like problems [13–15, 40–43]. Problems of this type are somehow “manageable” in the sense that they involve systems with stable or neutrally stable dynamics (like integrators in the context of consensus), which considerably simplifies analysis and design. For instance, in the consensus algorithm discussed in the previous section, one takes advantage of the fact that the dynamics are integrators. This makes it possible to “stop” the state evolution in the presence of DoS, which is instrumental to prevent the nodes from drifting away, simply by zeroing the control input. This is in general not possible with more complex network dynamics.

Networks having more complex (even linear time-invariant) dynamics arise in many other distributed control problems, for instance, in the context of distributed stabilization of large-scale systems where the dynamics are those associated to the physical systems to control (rather than deriving from the control algorithm). As an example, consider a network of physically coupled systems with dynamics

ẋi(t) = Ai xi(t) + Bi ui(t) + Σ_{j∈Ni} Hij xj(t),


where xi is the state of subsystem i, ui is its local control signal, and where Hij defines how subsystem i physically interacts with neighboring processes. A communication network is used to enable the design of distributed control laws

ui(t) = Ki xi(t_k^i) + Σ_{j∈Ni} Lij xj(t_k^j)

that should regulate the state of each subsystem to zero, where t_k^i is the kth update time of the control law for subsystem i. The problem of designing DoS-resilient control schemes for this class of systems has been preliminarily studied in [45]. Compared with consensus problems, however, the analysis becomes more complex and the stability conditions more conservative, requiring the subsystems to satisfy suitable small-gain properties on their couplings. The reason is that during DoS it is in general not possible to “stop” the evolution of xi (as done in consensus), since the evolution of xi does not depend solely on ui but also on the various xj. As a consequence, one needs strong conditions on the coupling matrices Hij in order to ensure that the subsystems do not get far from the origin during DoS. This is an example of distributed control problems where even analysis tools are largely lacking.

3.4.2.2 Determining Critical Links

The consensus problem considered in Sect. 3.4.1 assumes that DoS simultaneously affects all the network links. This assumption is reasonable for networks where the data exchange is carried out through a single access point. For peer-to-peer networks, this assumption need not be realistic. Concerning the consensus problem, it is possible to show that a result analogous to Theorem 3.3 holds provided that condition (3.29) is replaced with

δ^{ij} := Δ/τD^{ij} + 1/T^{ij} < 1    (3.30)

where τD^{ij} and T^{ij} characterize the DoS frequency and duration affecting the link (i, j). This result was proven in [13]. Even more, the same conclusions continue to hold even when δ^{ij} ≥ 1 for some network links (meaning that communication over the link (i, j) is never possible). This happens whenever removing such links does not cause the graph to be disconnected. Specifically, if X is any set of links such that G_X := (I, E\X) remains connected, then consensus is preserved whenever δ^{ij} < 1 for all (i, j) ∈ E\X; see [13].

For consensus-like networks, one can introduce a simple notion of “critical” links as the links (or the minimum number of links) causing the network to disconnect, and one can identify such links by using classic tools like the Stoer–Wagner min-cut algorithm [46]. For networks involving more complex dynamics such as the one


discussed in the previous subsection, the situation is instead much more involved. In particular, the loss of a link may render the network unstable even if the underlying graph remains connected. Developing efficient methods to identify and minimize the number of critical links through topology and control design is another key aspect to achieve DoS resilience.
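For the consensus-like case, identifying such a minimal set of disconnecting links is a global minimum-cut computation; the sketch below does this with the Stoer–Wagner algorithm [46] as implemented in networkx (assuming that library is available) on an illustrative graph.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 6), (6, 4)], weight=1)

# Global min cut: with unit weights its value equals the minimum number of links
# whose removal disconnects the communication graph.
cut_value, (side_a, side_b) = nx.stoer_wagner(G)
critical = [(u, v) for u, v in G.edges if (u in side_a) != (v in side_a)]
print("minimum number of links to disconnect:", cut_value)
print("one set of critical links:", critical)
```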

3.5 Conclusions

In this chapter, we reviewed some recent results on DoS-resilient networked control. While much has been done in this area, there remain several problems of paramount importance which are not yet fully understood. We mention in particular the design of DoS-resilient optimal control laws, the design of robust control laws for remote control architectures, and the design of DoS-resilient distributed control algorithms, the latter being the area where most of the results are lacking. The present discussion is by no means exhaustive. We refer the interested reader to [2, 5, 21] for additional references on this topic, as well as for a more general overview of security issues in networked systems.

References

1. Lee, E.: Cyber physical systems: design challenges. In: IEEE International Symposium on Object Oriented Real-Time Distributed Computing (ISORC) (2008)
2. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: A secure control framework for resource-limited adversaries. Automatica 51, 135–148 (2015)
3. Bicakci, K., Tavli, B.: Denial-of-service attacks and counter-measures in IEEE 802.11 wireless networks. Comput. Stand. Interf. 31, 931–941 (2009)
4. Pelechrinis, K., Iliofotou, M., Krishnamurthy, S.: Denial of service attacks in wireless networks: the case of jammers. IEEE Commun. Surv. Tutor. 13, 245–257 (2011)
5. Lun, Y., D'Innocenzo, A., Smarra, F., Malavolta, I., Di Benedetto, M.: State of the art of cyber-physical systems security: an automatic control perspective. J. Syst. Softw. 149, 174–216 (2019)
6. Amin, S., Cárdenas, A., Sastry, S.: Safe and secure networked control systems under denial-of-service attacks. In: Hybrid Systems: Computation and Control, pp. 31–45 (2009)
7. Shisheh Foroush, H., Martínez, S.: On event-triggered control of linear systems under periodic Denial-of-Service attacks. In: IEEE Conference on Decision and Control, Maui, HI, USA (2012)
8. De Persis, C., Tesi, P.: Input-to-state stabilizing control under denial-of-service. IEEE Trans. Autom. Control 60, 2930–2944 (2015)
9. Lu, A., Yang, G.: Input-to-state stabilizing control for cyber-physical systems with multiple transmission channels under denial-of-service. IEEE Trans. Autom. Control 63, 1813–1820 (2018)
10. De Persis, C., Tesi, P.: Networked control of nonlinear systems under denial-of-service. Syst. Control Lett. 96, 124–131 (2016)
11. Dolk, V., Tesi, P., De Persis, C., Heemels, W.: Event-triggered control systems under denial-of-service attacks. IEEE Trans. Control Netw. Syst. 4, 93–105 (2016)


12. Kato, R., Cetinkaya, A., Ishii, H.: Stabilization of nonlinear networked control systems under denial-of-service attacks: a linearization approach. In: 2019 American Control Conference, Philadelphia, PA, USA (2019)
13. Senejohnny, D., Tesi, P., De Persis, C.: A jamming-resilient algorithm for self-triggered network coordination. IEEE Trans. Control Netw. Syst. 5, 981–990 (2017)
14. Lu, A., Yang, G.: Distributed consensus control for multi-agent systems under denial-of-service. Inf. Sci. 439, 95–107 (2018)
15. Yang, H., Li, Y., Dai, L., Xia, Y.: MPC-based defense strategy for distributed networked control systems under DoS attacks. Syst. Control Lett. 128, 9–18 (2019)
16. Nowzari, C., Garcia, E., Cortés, J.: Event-triggered communication and control of networked systems for multi-agent consensus. Automatica 105, 1–27 (2019)
17. Li, Y., Shi, L., Cheng, P., Chen, J., Quevedo, D.: Jamming attacks on remote state estimation in cyber-physical systems: a game-theoretic approach. IEEE Trans. Autom. Control 60, 2831–2836 (2015)
18. Li, Y., Quevedo, D., Dey, S., Shi, L.: SINR-based DoS attack on remote state estimation: a game-theoretic approach. IEEE Trans. Control Netw. Syst. 4, 632–642 (2017)
19. Zhang, H., Cheng, P., Shi, L., Chen, J.: Optimal DoS attack scheduling in wireless networked control system. IEEE Trans. Control Syst. Technol. 24, 843–852 (2016)
20. Zhang, H., Cheng, P., Shi, L., Chen, J.: Optimal denial-of-service attack scheduling with energy constraint. IEEE Trans. Autom. Control 60, 3023–3028 (2015)
21. Mehran Dibaji, S., Pirani, M., Bezalel Flamholz, D., Annaswamy, A., Johansson, K., Chakrabortty, A.: A systems and control perspective of CPS security. Ann. Rev. Control 47, 394–411 (2019)
22. Sontag, E.: Input to state stability: basic concepts and results. In: Nonlinear and Optimal Control Theory. Lecture Notes in Mathematics, vol. 1932, pp. 163–220. Springer (2008)
23. Cetinkaya, A., Ishii, H., Hayakawa, T.: Event-triggered control over unreliable networks subject to jamming attacks. In: IEEE Conference on Decision and Control, Osaka, Japan (2015)
24. Cetinkaya, A., Ishii, H., Hayakawa, T.: A probabilistic characterization of random and malicious communication failures in multi-hop networked control. SIAM J. Control Optim. 56, 3320–3350 (2018)
25. De Persis, C., Tesi, P.: A comparison among deterministic packet-dropouts models in networked control systems. IEEE Control Syst. Lett. 2, 109–114 (2017)
26. Cetinkaya, A., Ishii, H., Hayakawa, T.: An overview on denial-of-service attacks in control systems: attack models and security analyses. Entropy 21(210), 1–29 (2019)
27. Hespanha, J., Naghshtabrizi, P., Xu, Y.: A survey of recent results in networked control systems. Proc. IEEE 95, 138–162 (2007)
28. Feng, S., Tesi, P.: Resilient control under denial-of-service: robust design. Automatica 79, 42–51 (2017)
29. Wakaiki, M., Cetinkaya, A., Ishii, H.: Quantized output feedback stabilization under DoS attacks. In: 2018 American Control Conference, Milwaukee, WI, USA (2018)
30. Tabuada, P.: Event-triggered real-time scheduling of stabilizing control tasks. IEEE Trans. Autom. Control 52, 1680–1685 (2007)
31. Cetinkaya, A., Kikuchi, K., Hayakawa, T., Ishii, H.: Randomized transmission protocols for protection against jamming attacks in multi-agent consensus. arXiv:1802.01281
32. Gupta, A., Langbort, C., Başar, T.: Optimal control in the presence of an intelligent jammer with limited actions. In: IEEE Conference on Decision and Control, Atlanta, GA, USA (2010)
33. Befekadu, G., Gupta, V., Antsaklis, P.: Risk-sensitive control under a class of denial-of-service attack models. In: American Control Conference, San Francisco, CA, USA (2011)
34. O'Reilly, J.: Observers for Linear Systems. Mathematics in Science & Engineering. Academic Press, London (1983)
35. Li, Y., Sanfelice, R.: A finite-time convergent observer with robustness to piecewise-constant measurement noise. Automatica 57, 222–230 (2015)
36. Chaillet, A., Bicchi, A.: Delay compensation in packet-switching networked controlled systems. In: IEEE Conference on Decision and Control, Cancun, Mexico (2008)


37. Quevedo, D., Østergaard, J., Nešić, D.: Packetized predictive control of stochastic systems over bit-rate limited channels with packet loss. IEEE Trans. Autom. Control 56, 2854–2868 (2011)
38. Feng, S., Tesi, P.: Networked control systems under denial-of-service: co-located versus remote architectures. Syst. Control Lett. 108, 40–47 (2017)
39. Heemels, W., Johansson, K., Tabuada, P.: An introduction to event-triggered and self-triggered control. In: IEEE Conference on Decision and Control, Maui, Hawaii, USA (2012)
40. Feng, Z., Hu, G.: Distributed secure average consensus for linear multi-agent systems under DoS attacks. In: American Control Conference, Seattle, WA, USA (2017)
41. Kikuchi, K., Cetinkaya, A., Hayakawa, T., Ishii, H.: Stochastic communication protocols for multi-agent consensus under jamming attacks. In: IEEE Conference on Decision and Control, Melbourne, Australia (2017)
42. Senejohnny, D., Tesi, P., De Persis, C.: Resilient self-triggered network synchronization. In: Tarbouriech, S., Girard, A., Hetel, L. (eds.) Control Subject to Computational and Communication Constraints. Lecture Notes in Control and Information Sciences, vol. 475. Springer, Cham (2018)
43. Amini, A., Azarbahram, A., Mohammadi, A., Asif, A.: Resilient event-triggered average consensus under denial of service attack and uncertain network. In: 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France (2019)
44. De Persis, C., Frasca, P.: Robust self-triggered coordination with ternary controllers. IEEE Trans. Autom. Control 58, 3024–3038 (2013)
45. Feng, S., Tesi, P., De Persis, C.: Towards stabilization of distributed systems under denial-of-service. In: IEEE Conference on Decision and Control, Melbourne, Australia (2017)
46. Stoer, M., Wagner, F.: A simple min-cut algorithm. J. ACM 44, 585–591 (1997)

Chapter 4

Stealthy False Data Injection Attacks in Feedback Systems Revisited

Henrik Sandberg

Abstract In this chapter, we consider false data injection (FDI) attacks targeting the actuator or sensor in feedback control systems. It is well known that such attacks may cause significant physical damage. We employ basic principles of linear feedback systems to characterize when the attacks have a physical impact and simultaneously are stealthy. A conclusion is that unstable plants are always sensitive to sensor attacks, and non-minimum phase plants are always sensitive to actuator attacks. Benign stable and minimum-phase plants may also be vulnerable if the control system bandwidth is high. We derive some general guidelines for determining system vulnerability and illustrate their use on a case study. Finally, we discuss some defense mechanisms based on watermarking, coding, and loop shifts.

4.1 Introduction

Control systems are essential elements in many critical infrastructures and govern the interaction of the cyber- and physical components. Modern control systems contain application programs running on host computers or embedded devices, and as such constitute important instances of cyber-physical systems (CPS). Historically, cyber-security has not been a concern for these systems. However, recent incidents have clearly illustrated that malicious corruption of signals in control and monitoring systems is a legitimate, serious attack vector. Several malware programs dedicated to attacking control and monitoring systems in nuclear facilities and power systems, for example, have been discovered [6]. Security has therefore in the last decade become an important topic on the research agenda of control system designers [3]. Unlike traditional security research, the development of security-enhancing mechanisms that leverage the control system itself is key. These mechanisms do not eliminate the need for traditional security technologies, such as encryption or authentication, but should


rather be seen as complements that are able to mitigate attack vectors that are hard, or even impossible, to detect in the lower protocol layers.

A special class of attacks on control systems is false data injection (FDI) attacks. In these data integrity attacks, sensor measurements or actuation commands are corrupted by a cyber-attacker to cause physical impact, and they can sometimes remain hidden from the system operator. An early study is [7], where it was shown that coordinated sensor attacks can fool static bad-data detection systems of state estimators in power grids. In [4], FDI attacks targeting a process control system were considered, and the importance of dynamical models was emphasized. These works illustrated the feasibility and severeness of FDI attacks, and several more theoretically oriented studies followed. In [9], a framework for the detectability and isolability of FDI attacks was established. In [13], a general attack modeling framework and definitions of stealthiness were introduced. This is just a short sample of works treating FDI attacks, but these serve as a good introduction to the class.

In this chapter, we approach the problem of FDI attacks using basic principles of feedback system design. This yields new insights concerning the impact and stealthiness of the attacks. We see that if the negative of the real part of any pole (zero) of the plant is small relative to the control system bandwidth, the feedback system is vulnerable to sensor (actuator) attacks. The analysis leads up to a novel sum-of-exponentials attack. Since the reasoning used is similar to that in classical loop shaping, the insights are not as precise as other available results in the area (cf. [9, 13]). Nonetheless, we believe they are still of value and may appeal to a control systems designer who needs to decide whether a particular FDI attack is a serious threat, or not.

The organization of the chapter is as follows. In Sect. 4.2, we introduce the system, operator, and attacker models. Importantly, we state the goals and abilities of the involved parties. In Sect. 4.3, we separately provide analysis of FDI attacks on the sensor and actuator channels. This yields some simple guidelines useful for assessing the severity of different attack scenarios. In Sect. 4.4, we illustrate the use of the analysis on a relatively benign feedback system with a stable minimum-phase plant. This is in contrast to other studies, which often consider unstable or non-minimum phase systems where it is known FDI attacks may be devastating (cf. [9, 13]). In Sect. 4.5, we provide an overview of recently proposed control system-oriented defense mechanisms and relate them to the previous analysis. Finally, in Sect. 4.6, we summarize the results and provide some conclusions.

4.2 Modeling and Assumptions

We here state the basic assumptions made about the feedback system, the attacker, and the operator. In Fig. 4.1, the block diagram of the considered (networked) control system is shown. Networked control systems are special instances of CPS, containing strongly interacting components in both the cyber- and physical domains. The signals u and y can be thought of as physical quantities, where the physical effects of attacks will be realized and seen. These signals are communicated over a computer network, possibly under (partial) control of a malicious attacker. In reality, signals communicated over networks are quantized and sampled, but for simplicity, all signals will here be assumed to be in continuous time and real or complex-valued. The signals ud and yd are outputs and inputs of the feedback controller C, respectively. Note that in the presence of an attacker in the network, the cyber-quantities yd and ud may differ from their physical counterparts u and y. It is commonly assumed that an anomaly detector is co-located with the controller, although the detector may run on a different platform. The role of the detector is to generate alarms to alert the operator of unusual system behavior, caused by faults or attacks, for instance. Often the detectors are model-based fault detection-type filters (see [2], for instance), which can sometimes also detect malicious cyber-attacks.

Fig. 4.1 Block diagram of the considered feedback system. Through the communication network, the attacker may be able to intercept and corrupt the communication between the controller (C) and the physical plant (P) using signals a_u, a_y

4.2.1 System Model

From Fig. 4.1, we derive the impact equations

y = −(PC/(1 + PC)) a_y + (P/(1 + PC)) a_u,
u = −(C/(1 + PC)) a_y + (1/(1 + PC)) a_u,

which relate the data injection attacks a_y, a_u to the physical signals y, u using the plant and controller transfer functions P(s) and C(s), respectively. All signals are here assumed to be scalar-valued and of exponential form, F(s)e^{st} for some F(s), where the complex frequency is s ∈ C and the time variable is t ∈ R. Similarly, we


derive the detection equations

yd = (1/(1 + PC)) a_y + (P/(1 + PC)) a_u,
ud = −(C/(1 + PC)) a_y − (PC/(1 + PC)) a_u,

which characterize the corresponding response in the signals available to the controller and anomaly detector. We recognize that only four transfer functions appear (the “Gang of Four” [1]), namely

S := 1/(1 + PC),     PS := P/(1 + PC),
CS := C/(1 + PC),    T := PC/(1 + PC).

In particular, S is the sensitivity function and T is the complementary sensitivity function. A standing assumption is that the feedback system is internally stable, and these transfer functions have their poles in Re(s) < 0. We also assume the transfer functions can be realized as

[ S(s)   PS(s) ]      [ D11  D12 ]   [ C1 ]
[ CS(s)  T(s)  ]  =:  [ D21  D22 ] + [ C2 ] (sI − A)^{−1} [ B1  B2 ],

and the closed-loop system state vector is x(t) ∈ R^n.

Remark 4.1 The feedback system in Fig. 4.1 is an idealization in many ways. In particular, we have not included natural sensor noise and plant disturbances. Such signals' influence is also modeled by the Gang of Four, see [1], and would only make it easier for an attacker to remain stealthy.
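The Gang of Four is easy to evaluate numerically. The sketch below does so for the case-study plant and the fast PI tuning of Sect. 4.4; the use of numpy's poly1d arithmetic and the frequency grid are implementation choices, not part of the chapter.

```python
import numpy as np

K, T1, T2 = 10.0, 10.0, 1.0                      # plant P = K/((1+sT1)(1+sT2))
kp, Ti = 1.05, 4.0                               # fast PI tuning, C = kp(1 + 1/(s Ti))

Pn, Pd = np.poly1d([K]), np.poly1d([T1 * T2, T1 + T2, 1.0])
Cn, Cd = np.poly1d([kp * Ti, kp]), np.poly1d([Ti, 0.0])
den = Pd * Cd + Pn * Cn                          # 1 + PC, with fractions cleared

gang_of_four = {
    "S":  Pd * Cd,                               # 1/(1+PC)
    "PS": Pn * Cd,                               # P/(1+PC)
    "CS": Cn * Pd,                               # C/(1+PC)
    "T":  Pn * Cn,                               # PC/(1+PC)
}

w = np.logspace(-2, 2, 400)
mag = {name: np.abs(num(1j * w) / den(1j * w)) for name, num in gang_of_four.items()}
print("Ms = max|S| ~", mag["S"].max())
print("max|T| ~", mag["T"].max())
```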

4.2.2 Operator and Attacker Models

The control system operator has access to an anomaly detector that is monitoring the detection signals yd, ud, and set-point r in Fig. 4.1. An important standing assumption in this chapter is that the closed-loop system state x is at equilibrium (ẋ = 0) when the attack starts (w.l.o.g. we set x(0) = 0 below). This means that there will be perturbations in yd, ud away from equilibrium for any sensor or actuator FDI attack (but see Remark 4.2). Hence, the undetectable attack scenario in [9], where an initial state unknown to the operator covers the attack, is not applicable. It is true that carefully crafted attacks can still mask themselves as natural disturbances [10]. Though in our operator model, breaking the equilibrium is considered an anomaly if the perturbation is large and of sufficient duration, even if it could be caused by natural events.


The attacker's goal is to conduct a sensor or actuator FDI attack that simultaneously has physical impact and is stealthy. We consider such attacks especially dangerous, since non-stealthy attacks may be blocked before serious damage is caused. We measure impact by the perturbation caused in y, u interacting directly with the plant (“the physical signals”). Stealthiness is measured by the perturbation caused in yd, ud (“the detector signals”). Since the system is linear, all signals can be scaled, and so the absolute values are not necessarily of interest. Instead, we consider that an attack has impact and is stealthy if the perturbations in the physical signals are larger and of longer duration relative to the perturbations in the detector signals. We do not provide, nor need, a more rigorous definition of stealthy attacks here (for that, see [13]).

Remark 4.2 We do not consider coordinated sensor and actuator attacks. A coordinated attack such as a_u = a and a_y = −Pa, with a an arbitrary signal, will be completely undetectable (no change in yd, ud) and have arbitrary impact. For such covert attacks, we refer to [12]. We also do not consider attacks that rely on online measurements, such as the surge attacks in [4]. Hence, the attacks should be possible to compute offline. Removing this assumption is an interesting topic for future work.

4.3 False Data Injection Attacks

The attacker will base his injections on exponential signals. This is both for simplicity and because this class contains some powerful attacks. As an example, the complete measurement response to a sensor attack a_y(t) = a0 e^{st} when x(0) = 0 is

yd(t) = yd,tr(t) + yd,ss(t),   t ≥ 0
yd,tr(t) = −C1 e^{At}(sI − A)^{−1} B1 a0    (4.1)
yd,ss(t) = [C1(sI − A)^{−1} B1 + D11] a0 e^{st} = S(s) a0 e^{st}.

The signal has been decomposed into its transient and steady-state components. The transient is similar to that incurred by a step input if |s| is not much larger than 0. Since the closed-loop system is internally stable, all transients decay exponentially to zero. Assuming the closed loop is also well damped, we can estimate the duration of the transients of S and T by the rise time Tr. The rise time–bandwidth product [1] says that Tr ωb ≈ 1–3, where ωb is the closed-loop bandwidth. We suppose henceforth transients have mostly disappeared after the detection time

Tdet := 2/ωb.    (4.2)

Two examples with different closed-loop bandwidths are shown in Fig. 4.2 (see Sect. 4.4 for details on the examples). Both examples have controllers with integral action (S(0) = 0), and hence yd converges to zero as a step in a y is applied. It


Fig. 4.2 The sensitivity function S and complementary sensitivity function T for the two example systems considered in Sect. 4.4, along with their responses in yd to steps in a y . The detection time Tdet := 2/ωb estimates the duration of the peak in the detection signal transient

can be seen that Tdet gives an estimate of the time it takes for the detection signal transient to go from its maximum to around zero. The example with a faster tuning has less damping than the slower example, and hence the detection signal oscillates for some time beyond Tdet . Transients of C S and P S could also last longer than indicated by (4.2) if there are slow poles being canceled, or almost canceled, in the loop gain PC (see Remark 4.3). Thus (4.2) should be taken as a lower bound on the duration of transients in the detection signals. In the next sections, we investigate the complete response in the detection and impact signals following exponential attacks on first the sensor and then the actuator channel.


4.3.1 Sensor Attacks

Let us consider a sensor attack, a_y(t) = a0 e^{pt}, t ≥ 0 and x(0) = 0, where the frequency of the attack is matched to a plant pole, P(p) = ∞. The complete responses in the detection and impact equations are¹

yd(t) = yd,tr(t) + S(p) a0 e^{pt} = yd,tr(t)
ud(t) = ud,tr(t) − CS(p) a0 e^{pt} = ud,tr(t),    (4.3)

and

y(t) = ytr(t) − T(p) a0 e^{pt} = ytr(t) − a0 e^{pt}
u(t) = utr(t) − CS(p) a0 e^{pt} = utr(t).    (4.4)

This is because S(p) = 0, CS(p) = 0, and T(p) = 1, if we assume there are no pole-zero cancellations in the product P(p)C(p) (see Remark 4.3). The attack thus only causes transients in the detection signals, of estimated duration Tdet. After this, only the physical steady-state signal y(t) = −a0 e^{pt} remains. We consider the attack to have impact if this signal persists for longer than the detection time Tdet by some margin. Clearly, this is the case if Re(p) ≥ 0. Hence, marginally stable or unstable plants are definitely sensitive to sensor attacks.

The more common case is when Re(p) < 0. We define the impact time Timp as the time it takes for |e^{pt}| to decrease from the level 0.9 to 0.1 (similar to the definition of rise time), which gives

Timp := ln 9 / Re(−p),   Re(p) < 0.    (4.5)

Hence, we say the sensor attack is potentially stealthy and has impact if

Timp > c Tdet  ⇔  ωb > c (2/ln 9) Re(−p) ≈ c Re(−p),    (4.6)

for some user-specified margin c > 0. Based on the complementary analysis in Sect. 4.4.2, we propose that c ≳ 3 is reasonable. That is, these sensor attacks have impact and are stealthy only if there is a plant pole with negated real part a factor three smaller than the bandwidth. If Re(p) ≥ 0 we define Timp = +∞, and (4.6) holds, as it should, independently of ωb and c. The sensor attacks are illustrated in Sects. 4.4.1 and 4.4.2.

Remark 4.3 Since the closed-loop system is internally stable, there can be no unstable or marginally stable pole-zero cancellations in P(s)C(s). However, stable pole-zero cancellations in P(s)C(s) are not uncommon in control design (see the Lambda method [8], for example).

¹ The exact forms of the transients can be worked out as in (4.1).


As a potential defense strategy, we may consider the following approach: to cancel a simple plant pole in s = p < 0, we may use a controller C(s) = (s − p)² C̃(s), with C̃(p) finite. If we find such a C̃(s) that meets all control specifications, note that we have eliminated the steady-state impact of the sensor attack e^{pt} (T(p) = 0, CS(p) = 0), which is simultaneously perfectly detectable (S(p) = 1). Effectively, we have reversed the impact and detectability effects. Nevertheless, if there are real disturbances acting on the plant input exciting frequencies around s = p, the controller will completely ignore those, leading to poor control performance. Further defense mechanisms are discussed in Sect. 4.5.

Remark 4.4 It should be clear that any sensor attack e^{st} at a frequency Re s ≳ −ωb/c, where |P(s)| is very large, will have large steady-state impact (|T(s)| ≈ 1) and low detectability (|S(s)| ≈ 0, |CS(s)| ≈ 0). We may call such s approximate plant poles. Nevertheless, exactly how large |P(s)| needs to be depends on C(s). As an example, if the plant has a large steady-state gain P(0), constant biases on the measurements (a_y(t) = a0) have large lasting impact and are hard to detect.

Remark 4.5 In case the anomaly detector is only monitoring the plant output yd, and not the control signal ud, note that controller poles p (C(p) = ∞) also generate viable sensor attacks of impact and low detectability even if CS(p) = 0. In particular, controllers often have integral action (p = 0), and constant bias sensor attacks now become a serious threat.
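The guideline (4.6) is easy to automate. The helper below (a hypothetical function name) flags the plant poles for which an exponential sensor attack is potentially stealthy and impactful, using the case-study poles and the two bandwidths reported in Sect. 4.4.

```python
import numpy as np

def sensor_attack_vulnerable(poles, omega_b, c=3.0):
    """Return the poles p for which, per (4.6), omega_b > c*Re(-p);
    poles with Re(p) >= 0 are always flagged."""
    return [p for p in poles if p.real >= 0 or omega_b > c * (-p.real)]

poles = np.array([-0.1, -1.0], dtype=complex)      # case-study plant: -1/T1 and -1/T2
for name, wb in [("slow tuning", 0.226), ("fast tuning", 1.37)]:
    print(name, "-> vulnerable poles:", sensor_attack_vulnerable(poles, wb))
```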

4.3.2 Actuator Attacks

Let us consider an actuator attack, a_u(t) = a0 e^{zt}, t ≥ 0 and x(0) = 0, where the frequency of the attack is matched to a plant zero, P(z) = 0. The complete responses in the detection and impact signals are

yd(t) = yd,tr(t) + PS(z) a0 e^{zt} = yd,tr(t)
ud(t) = ud,tr(t) − T(z) a0 e^{zt} = ud,tr(t),

and

y(t) = ytr(t) + PS(z) a0 e^{zt} = ytr(t)
u(t) = utr(t) + S(z) a0 e^{zt} = utr(t) + a0 e^{zt},

since S(z) = 1, PS(z) = 0, and T(z) = 0, if we assume there are no pole-zero cancellations in the product P(z)C(z) (cf. Remark 4.3). The situation is now analogous to the sensor case, and the attack only causes transients in the detection signals; after the detection time Tdet, mainly only the physical steady-state signal u(t) = a0 e^{zt} remains. This means the actuator may possibly execute harmful commands while not being much seen in the output. If Re(z) ≥ 0, the attack certainly has lasting impact. In the case Re(z) < 0, we can define the impact time as in (4.5) but using Re(z) in place of Re(p). Hence, we say the actuator attack


is potentially stealthy and has impact if

Timp > c Tdet  ⇔  ωb > c (2/ln 9) Re(−z) ≈ c Re(−z),    (4.7)

for some user-specified margin c > 0. Condition (4.7) may seem restrictive and potentially easy for the operator to break by lowering the bandwidth ωb (performance constraints permitting). However, essentially all transfer functions P(s) of physical systems are strictly proper, which implies limω→∞ P(iω) = 0. Hence, they have zeros at infinity, and approximate zeros on the imaginary axis (cf. Remark 4.4). Non-decaying attacks au (t) = a0 sin(ωt) for large enough frequency ω are therefore stealthy and have impact on u for essentially all plants. In particular, applying high-frequency commands u to an actuator for a long time may wear it down, eventually resulting in component failure. We discuss possible defense against this attack in Sect. 4.5. An actuator attack is illustrated in Sect. 4.4.3.

4.4 Case Study

In this section, we conduct a series of simulations to illustrate the previous analysis. We use a linear second-order plant with a transfer function

P(s) = K / ((1 + sT1)(1 + sT2)),

with parameters K = 10, T1 = 10 s, and T2 = 1 s. We can think of the plant as a fluid tank system, where the slow pole models the fluid-level dynamics, and the fast pole models the dynamics of a pump. This is a relatively benign plant, which neither has unstable poles nor unstable zeros (i.e., it is stable and minimum phase). Hence, it is not immediately clear that it is sensitive to FDI attacks. We use a regular PI-controller

C(s) = kp (1 + 1/(s Ti)),

with parameters kp and Ti. For its tuning, we employ the SIMC rule [11]: the plant is approximated with a first-order and time delay system

K̄ e^{−sL̄} / (1 + sT̄),

with K̄ = K = 10, T̄ = T1 + T2/2 = 10.5 s, and L̄ = T2/2 = 0.5 s. We then choose


kp = T̄ / (K̄ (L̄ + Tcl)),   Ti = min{T̄, 4(Tcl + L̄)},

where Tcl is the remaining tuning parameter. Skogestad [11] suggests that Tcl = L̄ gives a good trade-off between fast response and good robustness. We will call this the fast tuning. For comparison, we will use the slow tuning, where Tcl = 10 L̄. The parameter Tcl can be thought of as the approximate dominant time constant of the closed-loop dynamics. To summarize, the following controller tunings (and their resulting closed-loop bandwidths and peak sensitivities Ms := supω |S(jω)|) are used in the simulations:

Slow tuning: Tcl = 5 s, kp = 0.191, Ti = 10.5 s, ωb = 0.226 rad/s, Ms = 1.12.
Fast tuning: Tcl = 0.5 s, kp = 1.05, Ti = 4.00 s, ωb = 1.37 rad/s, Ms = 1.69.
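The tunings above can be reproduced with a few lines of code. The sketch below applies the SIMC formulas and then estimates ωb and Ms from the frequency responses; the bandwidth is taken here as the first frequency where |T| drops below 1/√2, which is one common convention and may differ slightly from the values reported above.

```python
import numpy as np

K, T1, T2 = 10.0, 10.0, 1.0
Kbar, Tbar, Lbar = K, T1 + T2 / 2, T2 / 2          # first-order plus delay approximation

def simc(Tcl):
    kp = Tbar / (Kbar * (Lbar + Tcl))
    Ti = min(Tbar, 4 * (Tcl + Lbar))
    return kp, Ti

def closed_loop_metrics(kp, Ti):
    w = np.logspace(-3, 2, 5000)
    s = 1j * w
    P = K / ((1 + s * T1) * (1 + s * T2))
    C = kp * (1 + 1 / (s * Ti))
    S, T = 1 / (1 + P * C), P * C / (1 + P * C)
    Ms = np.abs(S).max()
    wb = w[np.argmax(np.abs(T) < 1 / np.sqrt(2))]  # first -3 dB crossing of |T|
    return wb, Ms

for name, Tcl in [("slow", 5.0), ("fast", 0.5)]:
    kp, Ti = simc(Tcl)
    wb, Ms = closed_loop_metrics(kp, Ti)
    print(f"{name}: kp={kp:.3f}, Ti={Ti:.2f}, omega_b~{wb:.3f} rad/s, Ms~{Ms:.2f}")
```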

4.4.1 Exponential Sensor Attack

Based on the analysis of Sect. 4.3.1, a sensor attack worth considering is

a_y(t) = a0 e^{p(t−τ0)} θ(t − τ0),    (4.8)

with tunable parameters a0 (amplitude) and τ0 ≥ 0 (time delay), where θ (t) is the Heaviside step function (θ (t) = 1, t ≥ 0, and otherwise zero). We use p = −1/T1 = −0.1 s−1 since this is the slowest plant pole, and thus the slowest decaying sensor attack resulting in zero steady-state component in the detection signals. The discussion following (4.6) suggests that if the bandwidth ωb is larger than approximately 3/T1 = 0.3 rad/s, the attack has impact and is stealthy. The system with fast tuning satisfies this condition, whereas the slow tuning does not (with a small margin). We therefore simulate the attack in both cases to check whether the guideline appears reasonable. We choose a0 = 1 and τ0 = 20 s next, but because of time invariance and linearity of the system these choices are not critical. In Fig. 4.3 (fast tuning), it can be seen that the perturbations in the detection signals yd , u d caused by the attack are completely eliminated in about 5 s. The physical output y has a noticeable perturbation for about 25 s, however. Thus, the impact lasts about five times longer than the detection signal. In this case, impact time is Timp = ln(9)/0.1 ≈ 22 s, which agrees well with the figure. The detection time is Tdet = 2/1.37 ≈ 1.5 s. This is approximately the time it takes for the detection signals to go from the largest initial perturbation to the first crossing of the equilibrium levels. Using the fast tuning, the closed-loop system is not very well damped (Ms = 1.69), and this explains the undershoot in yd (overshoot in u d ) and that there is a visible detection signal for about 5 s. Thus, Tdet can underestimate the duration of the


Fig. 4.3 An exponential sensor attack using the fast controller tuning. The control system eliminates the transient in the detection signal quickly, whereas the impact in the actual plant output remains much longer

detection signal in such cases, but is still a simple and meaningful measure. Based on the simulation, it is fair to say that the attack has impact and is potentially stealthy (with a0 adapted to detection thresholds). In Fig. 4.4 (slow tuning), it can be seen that the perturbations in the detection signals yd , u d caused by the attack are eliminated in about 30 s. This means that the transient components in (4.3)–(4.4) last for this long. This should be compared to the steady-state component in y, which lasts for about Timp = ln(9)/0.1 ≈ 22 s (same as before). The physical output y has a noticeable perturbation for about 30 s, though more attenuated than in Fig. 4.3. In this case, the impact is dominated by the transient component. The detection time is Tdet = 2/0.226 ≈ 8.8 s, which approximately is the time it takes for the detection signals to go from the largest initial perturbation to the first crossing of the equilibrium levels. The closed-loop system is well damped in this case (Ms = 1.12) and the detection signals remain close to equilibrium after this time.


Fig. 4.4 An exponential sensor attack using the slow controller tuning. There is not a significant difference between the attack’s influence on the detection and physical signals, so the attack may be considered less dangerous as compared to the one in Fig. 4.3

Based on the simulation, it is fair to say that the attack is not simultaneously stealthy and of impact, since detection and physical signals are of comparable duration and magnitude.

We see in both cases that most of the reaction in the detection signals occurs within time Tdet, and the steady-state component lasts for about Timp. Only in the first case is the steady-state component clearly visible in y, though, because the transient dominates in the second case due to the low bandwidth. The simulation confirms that the variables used in condition (4.6) characterize important aspects of the attacked systems, and with proper choice of c, we can distinguish between them.

A valid criticism against the attack (4.8) is that the impact disappears with time and that a0 probably needs to be small to avoid detection. Since S(∞) = 1, the exponential signal will create an instantaneous jump in yd of magnitude a0. Hence, a0 should not be larger than any detection threshold employed by the operator. It is therefore natural to study repeated attacks. This is the topic of the next example.


4.4.2 Sum-of-Exponentials Sensor Attack

Consider the sum-of-exponentials sensor attack

ay(t) = Σ_{k=0}^{∞} ak e^{p(t − τk)} θ(t − τk),    (4.9)

with p being the slowest plant pole (assumed negative and real here, for simplicity), and {ak} and {τk} sequences of amplitudes and increasing delays such that the series is well defined for all t. We know from the previous example that most of the effect of an exponential attack on the detection signals is gone after time Tdet. To avoid having the detection signals add up when (4.9) is applied, it is reasonable to require that τk+1 − τk ≥ Tdet in order to remain stealthy. If we make the choices τk = kTdet = 2k/ωb and ak = a0, we have

lim sup_{t→∞} ay(t) = a0 / (1 − e^{2p/ωb}).    (4.10)

Hence, exponential signals of magnitude a0 are amplified by a factor 1/(1 − e^{2p/ωb}). As in (4.6), the ratio p/ωb plays an important role in determining the effect of a sensor FDI attack. Note that ay in steady state is equal to the negative bias on the physical output y, and (4.10) is a measure of attack impact. To avoid detection, a0 needs to be small, and to have impact, the amplification should be significant. Requiring an amplification of at least d leads to the condition

a0 / (1 − e^{2p/ωb}) ≥ d a0  ⇔  p/ωb ≥ (1/2) ln(1 − 1/d) ≈ −1/(2d),   p < 0.    (4.11)

A modest requirement is d ≥ 2, which gives ωb ≥ 2.88(−p), and led us to recommend c ≈ 3 for (4.6). In the case study, we have p = −0.1 s⁻¹, and the amplification is about 7.4 for the fast tuning and 1.7 for the slow tuning. Hence, we only simulate the attack for the fast tuning, and the result is shown in Fig. 4.5 using τk = 20 + 2k/ωb s and ak = a0 = 0.05. Note that the amplitude of the exponentials is 20 times smaller than in Sect. 4.4.1, but the attack here introduces a steady bias of about 7.4 · 0.05 = 0.37 in y. Indeed, the attack can be seen as a pulse-width modulated constant bias attack on the sensor, where the pulse shape is chosen for stealthiness. Since the pulse height a0 is so small, the perturbations in the detection signals are barely noticeable. The attack clearly has more impact and is more stealthy than the previous exponential attack.
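The attack signal (4.9) and the amplification factor in (4.10)-(4.11) are straightforward to reproduce numerically. The following sketch uses the case-study values; the amplitude, delay sequence, and time grid are illustrative choices only, not prescribed by the text.

import numpy as np

p, a0 = -0.1, 0.05
for name, wb in [("fast", 1.37), ("slow", 0.226)]:
    gain = 1.0 / (1.0 - np.exp(2.0 * p / wb))      # amplification factor (4.10)
    print(f"{name} tuning: amplification = {gain:.1f}")

# Attack signal ay(t) with tau_k = 20 + 2k/wb and a_k = a0 (fast tuning)
wb = 1.37
t = np.arange(0.0, 200.0, 0.01)
a_y = np.zeros_like(t)
for k in range(200):                               # truncate the infinite sum in (4.9)
    tau_k = 20.0 + 2.0 * k / wb
    a_y += a0 * np.exp(p * (t - tau_k)) * (t >= tau_k)
print("steady bias approx.:", a_y[-1])             # approaches a0/(1 - exp(2p/wb)) ~ 0.37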


Fig. 4.5 Sum-of-exponentials sensor attack using the fast controller tuning. The attack is barely noticeable in the detection signals, but introduces a significant bias in the physical output

It is easy to come up with variants of (4.9) that are more stealthy by making the sequences {ak} and {τk} less regular. For example, one could choose them randomly from distributions such that E[τk+1] − E[τk] ≥ Tdet and E[ak] is small.

4.4.3 Sinusoidal Actuator Attack

The plant transfer function P(s) has two zeros at infinity, which means that s = iω for large ω are approximate plant zeros. In general, PS(iω) ≈ P(iω) for ω ≫ ωb. Hence, if the attacker approximately knows the plant model P(s), the bandwidth ωb, and the detection thresholds for yd(t), he can find the smallest high-frequency actuator attack that has impact and is stealthy. For instance, |P(i10)| ≈ 0.01, and

au(t) = sin(10t) θ(t − 20)


Fig. 4.6 High-frequency actuator attack targeting the system using the fast controller tuning. The actuator is potentially subject to large wear, while the detection signals are negligible

will only result in a perturbation of magnitude around 0.01 in yd . This is barely visible in the detector signals in Fig. 4.6, where we have used the fast tuning. Note that the applied control signal u is highly perturbed, however. This may result in large wear of the actuator, and is potentially very harmful.
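The attacker's reasoning above can be mimicked in a few lines: evaluate |P(iω)| at a candidate attack frequency to estimate the resulting perturbation in yd, since PS(iω) ≈ P(iω) well above the bandwidth. The plant in the sketch below is a hypothetical second-order example chosen by us for illustration; it is not the plant of the case study.

import numpy as np

num = np.array([1.0])                        # hypothetical P(s) = 1/((10s + 1)(s + 1))
den = np.polymul([10.0, 1.0], [1.0, 1.0])
omega = 10.0                                 # candidate attack frequency [rad/s]
P_jw = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)
print("|P(i*omega)| =", abs(P_jw))           # approximate size of the perturbation in yd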

4.5 Defenses

In this section, we review some methods for defending against the sensor and actuator FDI attacks. Based on our analysis of detection and impact time, one simple approach is to lower the system bandwidth if all poles and zeros are stable. This makes the conditions (4.6), (4.7), and (4.11) more restrictive. On the other hand, it also decreases the control system's performance; for instance, its ability to quickly reject unknown external disturbances diminishes. Also, general guidelines for control


Fig. 4.7 Loop transformations (A) and (B) that can be used to move plant poles and zeros and render them unattractive for FDI attacks

system design anyway suggest picking as low a bandwidth as performance permits [1]. Another idea is to cancel dangerous plant poles or zeros using C(s), but this may also result in poor control performance in the absence of attacks, see Remark 4.3. A defense mechanism that does not impair control performance is so-called multiplicative watermarking [14]. The sensor output y, for example, is fed into a watermark generator (unknown to the attacker) before transmission over the network. The watermark is then removed on the controller side, and attacks ay that are otherwise stealthy become visible at the anomaly detector. It is shown in [14] that this scheme does not affect the closed-loop performance in the absence of attacks. A related idea involving signal transformations was proposed in [5], where particular loop shifts ("two-way coding schemes") are applied to the control system. Two simple examples are illustrated in Fig. 4.7. As with watermarking, the main idea is to introduce a difference between the plant as seen from the attacker's and the controller's perspective, respectively. Indeed, for any constants k1 and k2, the controller C interacts with the plant P in Fig. 4.7 exactly as before: the shifts cancel, and there is no loss of performance in the absence of attacks. The attacker, on the other hand, sees a plant that is under negative feedback gain k1 in (A), and that has a feedforward gain k2 in (B). Note that this is the case even if the attacker knows the values of k1 or k2. Now, with a suitable choice of k1, we may be able to shift the attacker's plant poles so that a sensor attack ay no longer satisfies (4.6). Indeed, it makes a lot of sense to stabilize an unstable plant pole using local feedback k1 instead of relying solely on a flawless communication network. By adding a direct feedthrough term k2, the attacker's plant zeros move. In particular, the zeros at infinity move, and a high-frequency signal au is directly visible in yd. In [5], more general schemes based on the scattering transformation are proposed. It is also shown there that multiplicative watermarking can be seen as a one-way coding scheme.
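The effect of loop transformation (A) on the attacker's view of the plant is easy to illustrate numerically: for P = N/D, the attacker effectively faces P/(1 + k1 P), whose poles are the roots of D + k1 N. The plant below is a hypothetical unstable first-order example of our choosing, used only to show how a local feedback k1 moves an unstable pole.

import numpy as np

N = np.array([1.0])                  # hypothetical P(s) = 1/(s - 1), unstable pole at +1
D = np.array([1.0, -1.0])
for k1 in [0.0, 2.0]:
    closed = np.polyadd(D, k1 * N)   # denominator of the attacker-seen plant P/(1 + k1*P)
    print(f"k1 = {k1}: attacker-seen poles = {np.roots(closed)}")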


4.6 Conclusion

We have revisited the problem of stealthy FDI attacks by applying basic principles of feedback systems. We derived guidelines that can be used to determine whether a particular control system is sensitive to sensor or actuator attacks. It turns out that the location of poles and zeros and the closed-loop system bandwidth are key parameters that determine the vulnerability. Plants with unstable poles are always sensitive to sensor attacks, and plants with unstable zeros are always sensitive to actuator attacks. If the plant is stable and minimum phase, a low bandwidth makes it hard for the attacker to inject data attacks that are simultaneously stealthy and have a physical impact. We also examined a novel sum-of-exponentials attack signal. In many ways, the data attacks are contrary to the expected behavior of natural disturbances: sensor noise is typically assumed to be of high frequency, and so a dangerous sensor attack is of low frequency; load disturbances are generally assumed to be of low frequency, and so a dangerous actuator attack is of high frequency. Finally, we reviewed some defense mechanisms based on watermarking, coding, and loop shifts. Studies of FDI attacks have already inspired much work related to the security of control systems. We hope this study has shed some new light on the problem and can inspire similar treatments of other attack and defense scenarios.

Acknowledgements This work was supported in part by the Swedish Research Council (grant 201600861), the Swedish Foundation for Strategic Research (project CLAS), and the Swedish Civil Contingencies Agency (project CERCES).

References

1. Åström, K.J., Murray, R.M.: Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, Princeton, NJ, USA (2008)
2. Blanke, M., Kinnaert, M., Lunze, J., Staroswiecki, M.: Diagnosis and Fault-Tolerant Control, 3rd edn. Springer Publishing Company, Incorporated, Berlin (2015)
3. Cárdenas, A.A.: Cyber-physical systems security. Knowledge area report, CyBOK (2019)
4. Cárdenas, A.A., Amin, S., Lin, Z.S., Huang, Y.L., Huang, C.Y., Sastry, S.: Attacks against process control systems: risk assessment, detection, and response. In: Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, ASIACCS '11, pp. 355–366. ACM, New York, NY, USA (2011). https://doi.org/10.1145/1966913.1966959
5. Fang, S., Johansson, K.H., Skoglund, M., Sandberg, H., Ishii, H.: Two-way coding in control systems under injection attacks: from attack detection to attack correction. In: Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS '19, pp. 141–150. ACM, New York, NY, USA (2019). https://doi.org/10.1145/3302509.3311047
6. Hemsley, K.E., Fisher, R.E.: History of industrial control system cyber incidents. Technical Report, INL/CON-18-44411-Rev002, Idaho National Laboratory (INL), Idaho Falls, ID (United States) (2018)
7. Liu, Y., Ning, P., Reiter, M.K.: False data injection attacks against state estimation in electric power grids. In: Proceedings of the 16th ACM Conference on Computer and Communications Security, CCS '09, pp. 21–32. ACM, New York, NY, USA (2009). https://doi.org/10.1145/1653662.1653666


8. Panagopoulos, H., Hägglund, T., Åström, K.J.: The Lambda Method for Tuning PI Controllers. Technical Report TFRT-7564. Department of Automatic Control, Lund Institute of Technology (LTH) (1997)
9. Pasqualetti, F., Dörfler, F., Bullo, F.: Attack detection and identification in cyber-physical systems. IEEE Trans. Autom. Control 58(11), 2715–2729 (2013). https://doi.org/10.1109/TAC.2013.2266831
10. Sandberg, H., Teixeira, A.M.H.: From control system security indices to attack identifiability. In: 2016 Science of Security for Cyber-Physical Systems Workshop (SOSCYPS), pp. 1–6 (2016). https://doi.org/10.1109/SOSCYPS.2016.7580001
11. Skogestad, S.: Simple analytic rules for model reduction and PID controller tuning. J. Process Control 13, 291–309 (2004). https://doi.org/10.1016/S0959-1524(02)00062-8
12. Smith, R.S.: A decoupled feedback structure for covertly appropriating networked control systems. IFAC Proc. Vol. 44(1), 90–95 (2011). https://doi.org/10.3182/20110828-6-IT-1002.01721. 18th IFAC World Congress
13. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: A secure control framework for resource-limited adversaries. Automatica 51(1), 135–148 (2015). https://doi.org/10.1016/j.automatica.2014.10.067
14. Teixeira, A.M.H., Ferrari, R.M.G.: Detection of sensor data injection attacks with multiplicative watermarking. In: 2018 European Control Conference (ECC), pp. 338–343 (2018). https://doi.org/10.23919/ECC.2018.8550114

Chapter 5

Detection of Attacks in Cyber-Physical Systems: Theory and Applications

Vaibhav Katewa, Cheng-Zong Bai, Vijay Gupta, and Fabio Pasqualetti

Abstract In this chapter, we characterize and illustrate fundamental limitations and trade-offs for the detection of attacks in stochastic systems with linear dynamics. Focusing on attacks that alter the control signals (actuator attacks), we propose metrics to measure the stealthiness level of an attack, which are independent of the specifics of the detection algorithm being used and thus lead to fundamental detectability bounds. Further, we characterize attacks that induce the largest performance degradation, as measured by the error covariance at a state estimator, and illustrate our results via simple examples and more involved power system models.

5.1 Introduction

Using communication channels to inject malicious data that degrades the performance of a cyber-physical system has now been demonstrated both theoretically and practically [1–6]. Intuitively, there is a trade-off between the performance degradation an attacker can induce and how easy it is to detect the attack [7]. Quantifying this trade-off is of great interest for operating and designing secure Cyber-Physical Systems (CPS).

Text in this chapter is reproduced from Bai et al. (Automatica 82:251–260, 2017), Copyright 2017, with permission from Elsevier.


As explained in more detail later, for noiseless systems, zero dynamics provide a fundamental notion of stealthiness of an attacker, which characterizes the ability of an attacker to stay undetected even if the controller can perform arbitrary tests on the data it receives. However, similar notions for stochastic systems have been lacking. In this work, we consider stochastic cyber-physical systems, propose a graded stealthiness notion, and characterize the performance degradation that an attacker with a given level of stealthiness can induce. The proposed notion is fundamental in the sense that we do not constraint the detection test that the controller can employ to detect the presence of an attack. Related work Security of cyber-physical systems is a growing research area. Classic works in this area focus on the detection of sensor and actuator failures in control systems [8], whereas more recent approaches consider the possibility of intentional attacks at different system layers, e.g., see [9]. Both simple attacks, such as jamming of communication channels [10], and more sophisticated attacks, such as replay and data injection attacks, have been considered [11, 12]. One way to organize the literature in this area is based on the properties of the considered cyber-physical systems. While initial studies focused on static systems [13–17], later works exploited the dynamics of the system either to design attacks or to improve the performance of the detector that a controller can employ to detect if an attack is present [18–23]. For noiseless cyber-physical systems, the concept of stealthiness of an attack is closely related to the control-theoretic notion of zero dynamics [24, Sect. 4]. In particular, an attack is undetectable in noiseless systems if and only if it excites only the zero dynamics of an appropriately defined input–output system describing the system dynamics, the measurements available to the security monitor, and the variables compromised by the attacker [4, 25]. For cyber-physical systems driven by noise, instead, the presence of process and measurements noise offers the attacker an additional possibility to tamper with sensor measurements and control inputs within acceptable uncertainty levels, thereby making the detection task more difficult. Detectability of attacks in stochastic systems remains an open problem. Most works in this area consider detectability of attacks with respect to specific detection schemes employed by the controller, such as the classic bad data detection algorithm [11, 26]. The trade-off between stealthiness and performance degradation induced by an attacker has also been characterized only for specific systems and detection mechanisms [3, 27–29], and a thorough analysis of resilience of stochastic control systems to arbitrary attacks is still missing. While convenient for analysis, the restriction to a specific class of detectors prevents the characterization of fundamental detection limitations. In our previous work [30], we proposed the notion of ε-marginal stealthiness to quantify the stealthiness level in an estimation problem with respect to the class of ergodic detectors. In this work, we remove the assumption of ergodicity and introduce a notion of stealthiness for stochastic control systems that is independent of the attack detection algorithm, and thus provides a fundamental measure of the stealthiness of attacks in stochastic control systems. 
Further, we also characterize the performance degradation that such a stealthy attack can induce.


We limit our analysis to linear, time-invariant plants with a controller based on the output of an asymptotic Kalman filter, and to injection attacks against the actuation channel only. Our choice of using controllers based on Kalman filters is not restrictive. In fact, while this is typically the case in practice, our results and analysis are valid for arbitrary control schemes. Our choice of focusing on attacks against the actuation channel only, instead, is motivated by two main reasons. First, actuation and measurements channels are equally likely to be compromised, especially in networked control systems where communication between sensors, actuators, plant, and controller takes place over wireless channels. Second, this case has received considerably less attention in the literature—perhaps due to its enhanced difficulty—where most works focus on attacks against the measurement channel only, e.g., see [17, 25]. We remark also that our framework can be extended to the case of attacks against the measurement channel, as we show in [30] for scalar systems and a different notion of stealthiness. Finally, some recent literature builds on our framework and uses a notion of attack detectability that is similar to what we propose in [30, 31] and in this chapter. For instance, [32] extends the notion of ε-stealthiness of [31] to higher order systems, and shows how the performance of the attacker may differ in the scalar and vector cases (in this chapter we further extend the setup in [32] by leveraging the notion of right-invertibility of a system to consider input and output matrices of arbitrary dimensions). In [33], the authors extend the setup in [31] to vector and not necessarily stationary systems, but consider a finite horizon problem. In [34], the degradation of remote state estimation is studied, for the case of an attacker that compromises the system measurements based on a linear strategy. Two other relevant recent works are [35] that uses the notion of Kullback–Leibler divergence as a causal measure of information flow to quantify the effect of attacks on the system output, while [36] characterizes optimal attack strategies with respect to a linear quadratic cost that combines attackers control and undetectability goals. Chapter organization Section 5.2 contains the mathematical formulation of the problems considered in this chapter. In Sect. 5.3, we propose a metric to quantify the stealthiness level of an attacker, and we characterize how this metric relates to the information-theoretic notion of Kullback–Leibler divergence. Section 5.4 contains the main results of this chapter, including a characterization of the largest performance degradation caused by an ε-stealthy attack, a closed-form expression of optimal ε-stealthy attacks for right-invertible systems, and a sub-optimal class of attacks for not right-invertible systems. Section 5.5 presents illustrative examples and numerical results. Finally, Sect. 5.6 concludes the chapter.

5.2 Problem Formulation

Notation: The sequence {x_n}_{n=i}^{j} is denoted by x_i^j (when clear from the context, the notation x_i^j may also denote the corresponding vector obtained by stacking the appropriate entries in the sequence). This notation allows us to denote the probability density function of a stochastic sequence x_i^j by f_{x_i^j}, and to define its differential entropy h(x_i^j) as [37, Sect. 8.1]

h(x_i^j) ≜ ∫_{−∞}^{∞} − f_{x_i^j}(t_i^j) log f_{x_i^j}(t_i^j) dt_i^j.

Let x_1^k and y_1^k be two random sequences with probability density functions (pdf) f_{x_1^k} and f_{y_1^k}, respectively. The Kullback–Leibler divergence (KLD) [37, Sect. 8.5] between x_1^k and y_1^k is defined as

D(x_1^k ‖ y_1^k) ≜ ∫_{−∞}^{∞} log( f_{x_1^k}(t_1^k) / f_{y_1^k}(t_1^k) ) f_{x_1^k}(t_1^k) dt_1^k.    (5.1)

Fig. 5.1 Problem setup considered in the chapter (process, actuator, sensor, communication channel carrying u_k and subject to the attack Δu_k, and controller receiving ỹ_k). Reprinted from Ref. [38], Copyright 2017, with permission from Elsevier

The KLD is a non-negative quantity that measures the dissimilarity between two probability density functions, with D(x_1^k ‖ y_1^k) = 0 if f_{x_1^k} = f_{y_1^k}. Also, the KLD is generally not symmetric, that is, D(x_1^k ‖ y_1^k) ≠ D(y_1^k ‖ x_1^k). A Gaussian random vector x with mean μ_x and covariance matrix Σ_x is denoted by x ∼ N(μ_x, Σ_x). We let I and O be the identity and zero matrices, respectively, with their dimensions clear from the context. We also let S^n_+ and S^n_{++} denote the sets of n × n positive semidefinite and positive definite matrices, respectively. For a square matrix M, tr(M) and det(M) denote the trace and the determinant of M, respectively. We consider the setup shown in Fig. 5.1 as explained below.

Process: The process is described by the following Linear Time-Invariant (LTI) state-space representation:

x_{k+1} = A x_k + B u_k + w_k,
y_k = C x_k + v_k,    (5.2)

where x_k ∈ R^{N_x} is the process state, u_k ∈ R^{N_u} is the control input, y_k ∈ R^{N_y} is the output measured by the sensor, and the sequences w_1^∞ and v_1^∞ are process and measurement noises, respectively. We make the following assumptions for our setup:


Assumption 5.1 The noise random processes are independent and identically distributed (i.i.d.) sequences of Gaussian random vectors with w_k ∼ N(0, Σ_w), v_k ∼ N(0, Σ_v), Σ_w ∈ S^{N_x}_{++}, and Σ_v ∈ S^{N_y}_{++}.

Assumption 5.2 The state-space realization (A, B, C) has no invariant zeros [24, Sect. 4.4]. In particular, this assumption implies that the system (A, B, C) is both controllable and observable.

Assumption 5.3 The controller uses a Kalman filter to estimate and monitor the process state. Note that the control input itself may be calculated using an arbitrary control law.

The Kalman filter, which calculates the Minimum-Mean-Squared-Error (MMSE) estimate x̂_k of x_k from the measurements y_1^{k−1}, is described as

x̂_{k+1} = A x̂_k + K_k (y_k − C x̂_k) + B u_k,    (5.3)

where the Kalman gain K_k and the error covariance matrix P_{k+1} ≜ E[(x̂_{k+1} − x_{k+1})(x̂_{k+1} − x_{k+1})^T] are calculated through the recursions K_k = A P_k C^T (C P_k C^T + Σ_v)^{-1} and P_{k+1} = A P_k A^T − A P_k C^T (C P_k C^T + Σ_v)^{-1} C P_k A^T + Σ_w, with initial conditions x̂_1 = E[x_1] = 0 and P_1 = E[x_1 x_1^T].

Assumption 5.4 Given Assumption 5.2, lim_{k→∞} P_k = P, where P is the unique solution of a discrete-time algebraic Riccati equation. For ease of presentation, we assume that P_1 = P, although the results can be generalized at the expense of more involved notation. Accordingly, we drop the time index and let K_k = K and P_k = P at every time step k. Notice that this assumption also implies that the innovation sequence z_1^∞, calculated as z_k ≜ y_k − C x̂_k, is an i.i.d. Gaussian process with z_k ∼ N(0, Σ_z), where Σ_z = C P C^T + Σ_v ∈ S^{N_y}_{++}.

Let G(z) denote the N_y × N_u matrix transfer function of the system (A, B, C). We say that the system (A, B, C) is right invertible if there exists an N_u × N_y matrix transfer function G_RI(z) such that G(z) G_RI(z) = I_{N_y}.

Attack model: An attacker can replace the input sequence u_1^∞ with an arbitrary sequence ũ_1^∞. Thus, in the presence of an attack, the system dynamics are given by

x̃_{k+1} = A x̃_k + B ũ_k + w_k,
ỹ_k = C x̃_k + v_k.    (5.4)

Note that the sequence ỹ_1^∞ generated by the sensor in the presence of an attack ũ_1^∞ is different from the nominal measurement sequence y_1^∞. We assume that the attacker knows the system parameters, including the matrices A, B, C, Σ_w, and Σ_v. The attack input ũ_1^∞ is constructed based on the system parameters and the information pattern I_k of the attacker. We make the following assumptions on the attacker's information pattern:


Assumption 5.5 The attacker knows the control input u_k; thus u_k ∈ I_k at all times k. Additionally, the attacker does not know the noise vectors at any time.

Assumption 5.6 The attacker has perfect memory; thus, I_k ⊆ I_{k+1} at all times k.

Assumption 5.7 The attacker has causal information; in particular, I_k is independent of w_k^∞ and v_{k+1}^∞ for all k.

Example 5.1 (Attack scenarios) Attack scenarios satisfying Assumptions 5.5–5.7 include the cases when (i) the attacker knows the control input exactly, that is, I_k = {u_1^k}; (ii) the attacker knows the control input and the state, that is, I_k = {u_1^k, x_1^k}; (iii) the attacker knows the control input and delayed measurements from the sensor, that is, I_k = {u_1^k, ỹ_1^{k−d}} for some d ≥ 1.

Stealthiness of an attacker: The attacker is constrained in the input ũ_1^∞ it injects, since it seeks to be stealthy, that is, undetected by the controller. If the controller is aware that an attacker has replaced the correct control sequence u_1^∞ by a different sequence ũ_1^∞, it can presumably switch to a safer mode of operation. Notions of stealthiness have been proposed in the literature before. As an example, for noiseless systems, [4] showed that stealthiness of an attacker is equivalent to the existence of zero dynamics for the system driven by the attack. Similar to [4], we seek to define the notion of stealthiness without placing any restrictions on the attacker or the controller behavior. However, we need to define a similar notion for stochastic systems, where zero dynamics may not exist. To this end, we pose the problem of detecting an attacker by the controller as a (sequential) hypothesis testing problem. Specifically, the controller relies on the received measurements to decide the following binary hypothesis testing problem:

H0: No attack is in progress (the controller receives y_1^k);
H1: Attack is in progress (the controller receives ỹ_1^k).

For a given detector employed at the controller to select one of the two hypotheses, denote the probability of false alarm (i.e., the probability of deciding H1 when H0 is true) at time k by p_k^F, and the probability of correct detection (i.e., the probability of deciding H1 when H1 is true) at time k by p_k^D. One may envisage that stealthiness of an attacker implies p_k^D = 0. However, as is standard in detection theory, we need to consider both p_k^F and p_k^D simultaneously. For instance, a detector that always declares H1 to be true will achieve p_k^D = 1. However, it will not be a good detector because p_k^F = 1. Intuitively, an attack is harder to detect if the performance of any detector is independent of the received measurements. In other words, we define an attacker to be stealthy if there exists no detector that can perform better (in the sense of simultaneously achieving higher p_k^D and lower p_k^F) than a detector that makes a decision by ignoring all the measurements and making a random guess to decide between the hypotheses. We formalize this intuition in the following definition.


Definition 5.1 (Stealthy attacks) Consider the problem formulation stated in Sect. 5.2. An attack ũ_1^∞ is
1. strictly stealthy, if there exists no detector such that p_k^F < p_k^D for any k > 0;
2. ε-stealthy, if, given ε > 0 and for any 0 < δ < 1, for any detector for which 0 < 1 − p_k^D ≤ δ for all times k, it holds that lim sup_{k→∞} −(1/k) log p_k^F ≤ ε.

Intuitively, an attack is strictly stealthy if no detector can perform better than a random guess in deciding whether an attack is in progress. Further, an attack is ε-stealthy if there exists no detector such that 0 < 1 − p_k^D ≤ δ for all times k and p_k^F converges to zero exponentially fast with rate greater than ε as k → ∞.

Performance metric: The requirement to stay stealthy clearly curtails the performance degradation that an attacker can cause. The central problem that we consider is to characterize the worst performance degradation that an attacker can achieve for a specified level of stealthiness. In the presence of an attack (and if the controller is unaware of the attack), it uses the corrupted measurements ỹ_1^∞ in the Kalman filter. Let x̂̃_1^∞ be the estimate of the Kalman filter (5.3) in the presence of the attack ũ_1^∞, which is obtained from the recursion x̂̃_{k+1} = A x̂̃_k + K z̃_k + B u_k, where the innovation is z̃_k ≜ ỹ_k − C x̂̃_k. Note that the estimate x̂̃_{k+1} is a sub-optimal MMSE estimate of the state x_k, since it is obtained by assuming the nominal control input u_k, whereas the system is driven by the attack input ũ_k. Also, note that the random sequence z̃_1^∞ need be neither zero mean, nor white, nor Gaussian. Since the Kalman filter estimate depends on the measurement sequence received, as a performance metric we consider the covariance of the error in the predicted measurement ŷ̃_k as compared to the true value y_k. Further, to normalize the relative impact of the degradation induced by the attacker among different components of this error vector, we weight each component by an amount corresponding to how accurate the estimate of that component was without attacks. Thus, we consider the performance index

E[(ŷ̃_k − y_k)^T Σ_z^{-1} (ŷ̃_k − y_k)] = tr(P̃_k W),

where P̃_k is the error covariance matrix in the presence of an attack, P̃_k = E[(x̂̃_k − x_k)(x̂̃_k − x_k)^T], and W = C^T Σ_z^{-1} C. To obtain a metric independent of time and focus on the long-term effect of the attack, we consider the limit superior of the arithmetic mean of {tr(P̃_k W)}_{k=1}^∞ and define P̃_W ≜ lim sup_{k→∞} (1/k) Σ_{n=1}^{k} tr(P̃_n W). If {tr(P̃_k W)}_{k=1}^∞ is convergent, then lim_{k→∞} tr(P̃_k W) = P̃_W, which equals the Cesàro mean of P̃_k W.

Problems considered in the chapter: We assume that the attacker is interested in staying stealthy, or undetected, for as long as possible while maximizing the error covariance P̃_W. We consider two problems: (i) What is a suitable metric for stealthiness of an attacker in stochastic systems where Assumption 5.2 holds? We consider this problem in Sect. 5.3. (ii) For a specified level of stealthiness, what is the worst performance degradation that an attacker can achieve? We consider this problem in Sect. 5.4.
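The steady-state quantities from Assumptions 5.3-5.4 (the prediction error covariance P, the Kalman gain K, and the innovation covariance Σ_z) can be computed numerically. The sketch below is ours, assuming SciPy's solve_discrete_are is available; matrices are placeholders supplied by the caller.

import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_filter(A, C, Sw, Sv):
    # DARE for the prediction error covariance P of the predictor (5.3)
    # (duality: pass A^T and C^T to the controller-form solver)
    P = solve_discrete_are(A.T, C.T, Sw, Sv)
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Sv)   # steady-state Kalman gain
    Sz = C @ P @ C.T + Sv                               # innovation covariance Sigma_z
    return P, K, Sz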


Before we present our analysis, we illustrate the trade-off between stealthiness and performance (measured by the error) via a numerical example.

Example 5.2 (Stealthiness-performance trade-off) Consider the following system:

A = β [2 0 0 0; 0 −1 0 0; 1 0 1 0; 0 0 0 2],   B = [1 0; 1 0; 0 2; 0 1],   C = [0 0 2 0; 0 1 0 1],   Σ_w = 0.5 I,   Σ_v = I,

with β = 0.4, nominal control input u_k = 0, and attacked input ũ_k = α[1, 1, . . . , 1]^T with α ∈ R. We implement a bad data detector [11] that uses the innovation sequence z_1^∞ for attack detection. Specifically, let

g_k = Σ_{i=k−N+1}^{k} z_i^T Σ_z^{-1} z_i,

which measures the normalized energy of the innovation sequence over a time window of N. Then, the detection test is given by g_k ≷ τ (decide H1 if g_k > τ, and H0 otherwise), where τ > 0 is an appropriate threshold. In the absence of an attack, g_k has a Chi-squared (χ²) distribution with N N_y degrees of freedom [11]. Thus, the false alarm probability of the test is

p_k^F = Pr(g_k > τ | H0) = 1 − F_{χ²}(τ; N N_y),    (5.5)

where F_{χ²}(x; m) denotes the CDF of a χ² random variable with m degrees of freedom. We fix p_k^F for the test and use (5.5) to compute the threshold τ. The performance is measured by the time-averaged error

e_k = (1/N) Σ_{i=k−N+1}^{k} (ŷ̃_i − y_i)^T Σ_z^{-1} (ŷ̃_i − y_i).    (5.6)

Figure 5.2 illustrates the stealthiness and the performance degradation induced by the attacker. It can be observed that a more stealthy attack (with less pkD ) induces a lower estimation error than a less stealthy attack. This shows that there exists a trade-off between stealthiness and performance. Our objective is to characterize a fundamental trade-off that is valid for any detector with respect to the stealthiness and performance defined above.
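The detector of Example 5.2 is straightforward to implement. The minimal sketch below is ours (assuming SciPy), with dummy i.i.d. innovation samples standing in for an actual closed-loop simulation: the threshold is obtained by inverting (5.5), and the statistic follows the windowed sum defining g_k.

import numpy as np
from scipy.stats import chi2

def detector_threshold(p_false_alarm, N, Ny):
    return chi2.ppf(1.0 - p_false_alarm, df=N * Ny)       # invert (5.5) for tau

def test_statistic(z_window, Sz_inv):
    # g_k = sum over the window of z_i^T Sigma_z^{-1} z_i
    return sum(z.T @ Sz_inv @ z for z in z_window)

# Usage with dummy N(0, I) innovations (no attack): g_k is chi^2 with N*Ny d.o.f.
N, Ny = 100, 2
tau = detector_threshold(0.05, N, Ny)
z_window = [np.random.randn(Ny) for _ in range(N)]
print(test_statistic(z_window, np.eye(Ny)), "vs threshold", tau)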

5.3 Stealthiness in Stochastic Systems

In this section, we relate our stealthiness Definition 5.1, stated in terms of detection and false alarm probabilities, to the KL-divergence between ỹ_1^∞ and y_1^∞. This result provides conditions that can be used to verify whether an attack is stealthy or not.


Fig. 5.2 Attack stealthiness and error corresponding to the attack in Example 5.2 with α = 0.1 (attack 1) and α = 0.2 (attack 2), and time window N = 100. Subfigure a shows the test statistics g_k and the threshold τ: attack 1 is more stealthy (g_k exceeds τ at fewer time instants) than attack 2. Conversely, subfigure b shows the error e_k: the error induced by attack 1 is less than that of attack 2

Theorem 5.1 (KLD and stealthy attacks) Consider the problem formulation in Sect. 5.2. An attack ũ_1^∞ is
(i) strictly stealthy if and only if D(ỹ_1^k ‖ y_1^k) = 0 for all k > 0;
(ii) ε-stealthy if the corresponding observation sequence ỹ_1^∞ is ergodic and satisfies

lim_{k→∞} (1/k) D(ỹ_1^k ‖ y_1^k) ≤ ε;    (5.7)

(iii) ε-stealthy only if the corresponding observation sequence ỹ_1^∞ satisfies (5.7).

Proof See [38].

The following result provides a characterization of D(ỹ_1^k ‖ y_1^k) that contains additional insight into the meaning of stealthiness of an attacker.

Proposition 5.1 (KLD and differential entropy) The quantity D(ỹ_1^k ‖ y_1^k) can be calculated as

(1/k) D(ỹ_1^k ‖ y_1^k) = (1/k) Σ_{n=1}^{k} [ I(z̃_1^{n−1}; z̃_n) + D(z̃_n ‖ z_n) ],    (5.8)

where I(z̃_1^{n−1}; z̃_n) denotes the mutual information between z̃_1^{n−1} and z̃_n [37, Sect. 8.5].

Proof See [38].

Intuitively, the mutual information I(z̃_1^{n−1}; z̃_n) measures how much information about z̃_n can be obtained from z̃_1^{n−1}, that is, it characterizes the memory of the sequence z̃_1^∞. Similarly, the Kullback–Leibler divergence D(z̃_n ‖ z_n) measures the dissimilarity between the marginal distributions of z̃_n and z_n. Proposition 5.1 thus states that the stealthiness level of an ergodic attacker can be degraded in two ways: (i) if the sequence z̃_1^∞ becomes autocorrelated, and (ii) if the marginal distributions of the random variables z̃_k in the sequence z̃_1^∞ deviate from N(0, Σ_z).


5.4 Fundamental Performance Limitations

We are interested in the maximal performance degradation P̃_W that an ε-stealthy attacker may induce. We begin by proving a converse statement that gives an upper bound for P̃_W induced by an ε-stealthy attacker in Sect. 5.4.1. In Sect. 5.4.2, we prove a tight achievability result that provides an attack that achieves the upper bound when the system (A, B, C) is right-invertible. In Sect. 5.4.3 we prove a looser achievability result that gives a lower bound on the performance degradation for non-right-invertible systems. We will use a series of preliminary technical results to present the main results of the chapter. The following result is immediate.

Lemma 5.1 Define the function δ̄ : [0, ∞) → [1, ∞) implicitly by δ̄(x) = 2x + 1 + log δ̄(x). Then, for any γ > 0, δ̄(γ) = arg max_{x∈R} x, subject to (1/2)x − γ − (1/2) ≤ (1/2) log x.

Lemma 5.2 Consider the problem setup above. We have

(1/(2k)) Σ_{n=1}^{k} tr(E[z̃_n z̃_n^T] Σ_z^{-1}) ≤ N_y/2 + (1/k) D(z̃_1^k ‖ z_1^k) + (N_y/2) log( (1/(N_y k)) Σ_{n=1}^{k} tr(E[z̃_n z̃_n^T] Σ_z^{-1}) ).    (5.9)

Further, if the sequence z̃_1^∞ is a sequence of independent and identically distributed (i.i.d.) Gaussian random variables z̃_k, each with mean zero and covariance matrix E[z̃_k z̃_k^T] = α Σ_z for some scalar α, then (5.9) is satisfied with equality.

Proof See [38].

Combining Lemmas 5.1 and 5.2 leads to the following result.

Lemma 5.3 Consider the problem setup above. We have

(1/(N_y k)) Σ_{n=1}^{k} tr(E[z̃_n z̃_n^T] Σ_z^{-1}) ≤ δ̄( (1/(N_y k)) D(z̃_1^k ‖ z_1^k) ),    (5.10)

where δ̄(·) is as defined in Lemma 5.1.

The following result relates the covariance of the innovation and the observation sequence.

Lemma 5.4 Consider the problem setup above. We have

C P_k C^T = E[z_k z_k^T] − Σ_v,    (5.11)
C P̃_k C^T = E[z̃_k z̃_k^T] − Σ_v.    (5.12)

Proof By definition, z_k = y_k − C x̂_k = C(x_k − x̂_k) + v_k, and similarly z̃_k = C(x̃_k − x̂̃_k) + v_k. Since (x_k − x̂_k) and (x̃_k − x̂̃_k) are independent of the measurement noise v_k due to Assumptions 5.1 and 5.7, the result follows.
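Since δ̄ is only defined implicitly, it is convenient to evaluate it numerically: by Lemma 5.1, δ̄(γ) is the largest x with x − log x ≤ 2γ + 1, i.e., the root of x − log x − (2γ + 1) on [1, ∞). The sketch below is ours and assumes SciPy's brentq root finder.

import numpy as np
from scipy.optimize import brentq

def delta_bar(gamma):
    f = lambda x: x - np.log(x) - (2.0 * gamma + 1.0)
    hi = 2.0 * gamma + 2.0
    while f(hi) < 0.0:              # expand the bracket until f changes sign
        hi *= 2.0
    return brentq(f, 1.0, hi) if gamma > 0 else 1.0

print(delta_bar(0.0), delta_bar(1.0))   # delta_bar(0) = 1; delta_bar is increasing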


5.4.1 Converse

We now present an upper bound on the weighted MSE induced by an ε-stealthy attack.

Theorem 5.2 (Converse) Consider the problem setup above. For any ε-stealthy attack ũ_1^∞ generated by an information pattern I_1^∞ that satisfies Assumptions 5.5–5.7,

P̃_W ≤ tr(P W) + ( δ̄(ε/N_y) − 1 ) N_y,    (5.13)

where N_y is the number of outputs of the system, the function δ̄ is defined in Lemma 5.1, and tr(P W) is the weighted MSE in the absence of the attacker.

Proof See [38].

Remark 5.1 (Stealthiness versus induced error) Theorem 5.2 provides an upper bound for the performance degradation P̃_W for ε-stealthy attacks. Since δ̄(ε/N_y) is a monotonically increasing function of ε, the upper bound (5.13) characterizes a trade-off between the induced error and the stealthiness level of an attack. To further understand this result, we consider two extreme cases, namely, ε = 0, which implies strict stealthiness, and ε → ∞, that is, no stealthiness.

Corollary 5.1 A strictly stealthy attacker cannot induce any performance degradation. Further, for an ε-stealthy attacker, the upper bound in (5.13) increases linearly with ε as ε → ∞.

Proof A strictly stealthy attacker corresponds to ε = 0. Using the fact that δ̄(0) = 1 in Theorem 5.2 yields that tr(P̃ W) ≤ tr(P W). The second statement follows by noting that the first-order derivative of the function δ̄(x) tends to 2 from the right as x tends to infinity.

5.4.2 Achievability for Right-Invertible Systems

We now show that the bound presented in Theorem 5.2 is achievable if the system (A, B, C) is right invertible. We begin with the following preliminary result.

Lemma 5.5 Let the system (A, B, C) be right invertible. Then, the system (A − KC, B, C) is also right invertible.

Let G_RI be the right inverse of the system (A − KC, B, C). We consider the following attack.


Attack A1: The attack sequence is generated in three steps. In the first step, a sequence ζ_1^∞ is generated, such that each vector ζ_k is independent and identically distributed and independent of the information pattern I_k of the attacker, with probability density function ζ_k ∼ N(0, (δ̄(ε/N_y) − 1) Σ_z). In the second step, the sequence φ_1^∞ is generated as the output of the system G_RI with ζ_1^∞ as the input sequence. Finally, the attack sequence ũ_1^∞ is generated as ũ_k = u_k + φ_k.

Remark 5.2 (Information pattern of attack A1) The attack A1 can be generated by an attacker with any information pattern satisfying Assumptions 5.5–5.7.

We note the following property of the attack A1.

Lemma 5.6 Consider the attack A1. With this attack, the innovation sequence z̃_1^∞ as calculated at the controller is a sequence of independent and identically distributed Gaussian random vectors with mean zero and covariance matrix E[z̃_k z̃_k^T] = δ̄(ε/N_y) Σ_z.

Proof See [38].

Theorem 5.3 (Achievability for right-invertible systems) Suppose that the LTI system (A, B, C) is right invertible. The attack A1 is ε-stealthy and achieves

P̃_W = tr(P W) + N_y ( δ̄(ε/N_y) − 1 ),

where W = C^T Σ_z^{-1} C.

Proof See [38].

Remark 5.3 (Attacker information pattern) Intuitively, we may expect that the more information about the state variables an attacker has, the larger the performance degradation it can induce. However, Theorems 5.2 and 5.3 imply that the only critical piece of information for the attacker to launch an optimal attack is the nominal control input u_1^∞.
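A sketch of how attack A1 could be generated is given below. It is our illustration, not part of the original text: it assumes that a state-space realization (Ar, Br, Cr, Dr) of the right inverse G_RI is available and that delta_bar is evaluated as sketched earlier.

import numpy as np

def attack_A1(u_nominal, Sigma_z, eps, Ny, Ar, Br, Cr, Dr, delta_bar, rng=None):
    rng = rng or np.random.default_rng()
    cov = (delta_bar(eps / Ny) - 1.0) * Sigma_z           # covariance of zeta_k
    x = np.zeros(Ar.shape[0])                             # state of the G_RI realization
    u_attacked = []
    for u_k in u_nominal:
        zeta_k = rng.multivariate_normal(np.zeros(Ny), cov)   # step 1: draw zeta_k
        phi_k = Cr @ x + Dr @ zeta_k                          # step 2: phi = G_RI * zeta
        x = Ar @ x + Br @ zeta_k
        u_attacked.append(u_k + phi_k)                        # step 3: u_tilde = u + phi
    return u_attacked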

5.4.3 Achievability for Non-Right-Invertible Systems

If the system is not right invertible, the converse result in Theorem 5.2 may not be achieved. We now construct a heuristic attack A2 that allows us to derive a lower bound for the performance degradation P̃_W induced by ε-stealthy attacks against such systems. Consider an auxiliary Kalman filter that is implemented as the recursion

x̂^a_{k+1} = A x̂^a_k + K z^a_k + B ũ_k,    (5.14)

with the initial condition x̂^a_1 = 0 and the innovation z^a_k = ỹ_k − C x̂^a_k. The innovation sequence is independent and identically distributed with each z^a_k ∼ N(0, Σ_z). Now, we express z̃_k = z^a_k − C ẽ_k, where ẽ_k ≜ x̂̃_k − x̂^a_k. Further, ẽ_k evolves according to the recursion

ẽ_{k+1} = (A x̂̃_k + K z̃_k + B u_k) − (A x̂^a_k + K z^a_k + B ũ_k) = (A − KC) ẽ_k − B φ_k,    (5.15)

with the initial condition ẽ_1 = 0. Together, z̃_k and (5.15) define a system of the form

ẽ_{k+1} = (A − KC) ẽ_k + B(−φ_k),
z^a_k − z̃_k = C ẽ_k.    (5.16)

Attack A2: The attack sequence is generated as ũ_k = u_k + L ẽ_k − ζ_k, where ẽ_k = x̂̃_k − x̂^a_k as in (5.16), and the sequence ζ_1^∞ is generated such that each vector ζ_k is independent and identically distributed with probability density function ζ_k ∼ N(0, Σ_ζ) and independent of the information pattern I_k of the attacker. The feedback matrix L and the covariance matrix Σ_ζ are determined in three steps, which are detailed next.

Step 1 (Limiting the memory of the innovation sequence z̃_1^∞): Notice that with the attack A2 and the notation in (5.14), the dynamics of ẽ_k and z̃_k are given by

ẽ_{k+1} = (A − KC − BL) ẽ_k + B ζ_k,
z̃_k = C ẽ_k + z^a_k.    (5.17)

The feedback matrix L should be selected to eliminate the memory of the innovation sequence computed at the controller. One way to achieve this aim is to set A − KC − BL = 0. In other words, if A − KC − BL = 0, then z̃_1^∞ is independent and identically distributed. It may not be possible to select L to achieve this aim exactly. Thus, we propose the following heuristic. Note that if A − KC − BL = 0, then the cost function lim_{k→∞} (1/k) Σ_{n=1}^{k} tr(E[ẽ_n ẽ_n^T] W) is minimized, with W = C^T Σ_z^{-1} C. Since Σ_{n=1}^{k} tr(E[ẽ_n ẽ_n^T] W) = E[ Σ_{n=1}^{k} ẽ_n^T W ẽ_n ], selecting L to satisfy the constraint A − KC − BL = 0 is equivalent to selecting L to solve a cheap Linear Quadratic Gaussian (LQG) problem [39, Section VI]. Thus, heuristically, we select the attack matrix L as the solution to this cheap LQG problem and, specifically, as

L = lim_{η→0} (B^T T_η B + ηI)^{-1} B^T T_η (A − KC),

where T_η is the solution to the discrete algebraic Riccati equation

T_η = (A − KC)^T [ T_η − T_η B (B^T T_η B + ηI)^{-1} B^T T_η ] (A − KC) + W.    (5.18)


Step 2 (Selection of the covariance matrix Σ_ζ): Notice that the selection of the feedback matrix L in Step 1 is independent of the covariance matrix Σ_ζ. As the second step, we select the covariance matrix Σ_ζ such that C Σ_ẽ C^T is close to a scalar multiple of Σ_z, say α² Σ_z. From (5.17), notice that lim_{k→∞} E[z̃_k z̃_k^T] = C Σ_ẽ C^T + Σ_z, where Σ_ẽ ∈ S^{N_x}_+ is the positive semi-definite solution to the equation

Σ_ẽ = (A − KC − BL) Σ_ẽ (A − KC − BL)^T + B Σ_ζ B^T.    (5.19)

We derive an expression for Σ_ζ from (5.19) by using the pseudoinverse matrices of B and C, i.e.,

Σ_ζ = α² B^† [ C^† Σ_z (C^T)^† − (A − KC − BL) C^† Σ_z (C^T)^† (A − KC − BL)^T ] (B^T)^†,    (5.20)

where † denotes the pseudoinverse operation. It should be noted that the right-hand side of (5.20) may not be positive semi-definite. Many choices are possible to construct a positive semi-definite Σ_ζ. We propose that, if the right-hand side is indefinite, we set its negative eigenvalues to zero without altering its eigenvectors.

Step 3 (Enforcing the stealthiness level): The covariance matrix Σ_ζ obtained in Step 2 depends on the parameter α. We now select α so as to make the attack A2 ε-stealthy. To this aim, we first compute an explicit expression for the stealthiness level and the error induced by A2. For the entropy rate of z̃_1^∞, since z̃_1^∞ is Gaussian, we obtain

lim_{k→∞} (1/k) h(z̃_1^k) = lim_{k→∞} h(z̃_{k+1} | z̃_1^k)    (5.21)
 = lim_{k→∞} (1/2) log( (2πe)^{N_y} det( E[(z̃_{k+1} − g_k(z̃_1^k))(z̃_{k+1} − g_k(z̃_1^k))^T] ) )    (5.22)
 = (1/2) log( (2πe)^{N_y} det(C S C^T + Σ_z) ),    (5.23)

where g_k(z̃_1^k) is the minimum mean square estimate of z̃_{k+1} from z̃_1^k, which can be obtained from Kalman filtering, and S ∈ S^{N_x}_+ is the positive semi-definite solution to the following discrete algebraic Riccati equation

S = (A − KC − BL) [ S − S C^T (C S C^T + Σ_z)^{-1} C S ] (A − KC − BL)^T + B Σ_ζ B^T.    (5.24)

Note that the equality (5.21) is due to [37, Theorem 4.2.1]; (5.22) is a consequence of the maximum differential entropy lemma [40, Sect. 2.2]; the positive semi-definite matrix S that solves (5.24) represents the steady-state error covariance matrix of the Kalman filter that estimates z̃_{k+1} from z̃_1^k. Thus, the level of stealthiness for the attack A2 is

lim_{k→∞} (1/k) D(z̃_1^k ‖ z_1^k) = ε
 = −(1/2) log( (2πe)^{N_y} det(C S C^T + Σ_z) ) + (1/2) log( (2π)^{N_y} det(Σ_z) ) + (1/2) tr( (C Σ_ẽ C^T + Σ_z) Σ_z^{-1} )
 = −(1/2) log det(I + S W) + (1/2) tr(Σ_ẽ W) + (1/2) N_y,    (5.25)

where W = C^T Σ_z^{-1} C. To conclude our design of the attack A2, we use (5.25) to solve for the desired value of α, and compute the error induced by A2 as

P̃_W = lim_{k→∞} (1/k) Σ_{n=1}^{k} tr(E[z̃_n z̃_n^T] Σ_z^{-1}) − tr(Σ_v Σ_z^{-1}) = tr(P W) + tr(Σ_ẽ W) − N_y,    (5.26)

where Σ_ẽ is the solution to the Lyapunov equation (5.19).
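Step 1 above reduces to solving a standard discrete-time Riccati equation. The following sketch is ours: it assumes SciPy's solve_discrete_are, approximates the limit η → 0 by a small fixed η, and takes A, B, C, K, and W as the matrices defined in the text.

import numpy as np
from scipy.linalg import solve_discrete_are

def cheap_lqg_gain(A, B, C, K, W, eta=1e-8):
    Abar = A - K @ C
    R = eta * np.eye(B.shape[1])
    T = solve_discrete_are(Abar, B, W, R)                 # Riccati equation (5.18)
    L = np.linalg.solve(B.T @ T @ B + R, B.T @ T @ Abar)  # L = (B'TB + eta*I)^{-1} B'T(A - KC)
    return L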

5.5 Numerical Results

Example 5.2 (Continued) Consider the system in Example 5.2 with the scaling factor of A set to β = 1. Figure 5.3 plots the upper bound (5.13) on the performance degradation achievable by an attacker versus the attacker's stealthiness level ε. From Theorem 5.3, the upper bound can be achieved by a suitably designed ε-stealthy attack. Thus, Fig. 5.3 represents a fundamental limitation for the performance degradation that can be induced by any ε-stealthy attack. Observe that the plot is approximately linear as ε becomes large, as predicted by Corollary 5.1.

Fig. 5.3 The converse and achievability for the right-invertible system (P̃_W / tr(P W) versus ε), where the weighted MSE P̃_W is the upper bound in (5.13) and the weight matrix is W = C^T Σ_z^{-1} C. Reprinted from Ref. [38], Copyright 2017, with permission from Elsevier


Fig. 5.4 The converse and achievability for the right non-invertible system with the weight matrix W = C T Σz−1 C. The converse is obtained from (5.13) and the achievability is the weighted MSE P˜W induced by the heuristic algorithm A2 . Reprinted from Ref. [38], Copyright 2017, with permission from Elsevier

Example 5.3 (Non-right-invertible system) Consider the system (A, B, C) with

A = [2 −1 0 0 0; 1 −3 0 0 0; 0 0 −2 0 0; 0 0 0 −1 0; 0 0 0 0 3],
B = [2 0; 1 0; 0 1; 0 1; 1 1],
C = [1 −1 2 0 0; −1 2 0 3 0; 2 1 0 0 4],

which fails to be right invertible. Let Σ_w = 0.5I and Σ_v = I. In Fig. 5.4, we plot the upper bound on the value of P̃_W that an ε-stealthy attacker can induce, as calculated using Theorem 5.2. The value of P̃_W achieved by the heuristic attack A2 is also plotted. The bound is fairly tight compared to the performance degradation achieved by the heuristic attack; nonetheless, there remains a gap between the two plots.

Example 5.4 (Power system) Consider a power system network consisting of n synchronous generators interconnected by transmission lines. We assume that the line resistances are negligible. Each generator in the network is modeled according to the following second-order swing dynamics [41]:

M_i θ̈_i + D_i θ̇_i = P_i − Σ_{k=1}^{n} (E_i E_k / X_ik) sin(θ_i − θ_k),    (5.27)

where θi , Mi , Di , E i , and Pi denote the rotor angle, moment of inertia, damping coefficient, internal voltage and mechanical power input of the ith generator, respectively. Further, X i j denotes the reactance of the transmission line connecting generators i and j (X i j = ∞ if they are not connected). We linearize (5.27) around an equilibrium point to obtain the following collective small-signal model:

5 Detection of Attacks in Cyber-Physical Systems: Theory and Applications

   dθ d θ˙ 0 I 0 = + u(t), −M −1 L −M −1 D d θ˙ M −1 B1 d θ¨            

95



x(t)

Ac

x(t)

(5.28)

Bc

dθ denotes a small deviation of θ = [θ_1 θ_2 · · · θ_n]^T from the equilibrium value, M = diag(M_1, M_2, . . . , M_n), D = diag(D_1, D_2, . . . , D_n), and L is a symmetric Laplacian matrix given by

L_ij = −(E_i E_j / X_ij) cos(θ_i − θ_j)   for i ≠ j,    and    L_ii = −Σ_{j=1, j≠i}^{n} L_ij.    (5.29)

Further, u ∈ R^q models a small deviation in the mechanical power input of generators 1, 2, . . . , q, with B_1 = [e_1 e_2 · · · e_q], where e_j denotes the jth canonical vector. We assume that the nominal input is u = 0 and the attacker can maliciously alter this to a non-zero value. The rotor angle and the angular velocity of a subset of generators are measured using Phasor Measurement Units (PMUs). The measurements are modeled as y = Cx, with an appropriate output matrix C. We sample the continuous-time system (5.28) (assuming a zero-order hold input) to obtain a discrete-time system similar to (5.2), with A = e^{A_c T_s} and B = ∫_{τ=0}^{T_s} e^{A_c τ} dτ B_c, where T_s is the sampling time period. Finally, we assume that the discrete-time dynamics are affected by process and measurement noises according to (5.2), and that they satisfy Assumption 5.1.

We consider a power network similar to the IEEE 39-bus test case [42] consisting of 10 generators. The generator voltage and angle values are obtained from [42]. We fix the damping coefficient of each generator as 10, and the moment of inertia values are chosen as M = [70, 10, 40, 30, 70, 30, 90, 80, 40, 50]. The reactance matrix X is generated randomly, where each entry of X is distributed independently according to N(0, 0.01). The sampling time is T_s = 1. We assume that the attacker is capable of manipulating generators 1 to 5, that is, B_1 = [e_1 e_2 · · · e_5]. Further, two PMUs are located on the buses connected to generators 1 and 2, and therefore the output matrix is C = [e_1 e_6 e_2 e_7]^T. The noise covariances are Σ_w = 0.5I and Σ_v = I. A stable right inverse of the transfer function matrix G(z) = C(zI − A)^{-1}B is computed using the DSTOOLS toolbox in MATLAB [43]. For a given stealthiness level ε, we generate the optimal attack A1 as described after Lemma 5.5. Let the time-averaged energy of the optimal attack inputs be given by

Ũ_k = (1/N) Σ_{i=k−N+1}^{k} ũ_i^T ũ_i,    (5.30)
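The construction and discretization of the small-signal model (5.28) can be sketched as follows. This is our illustration with placeholder arguments; the zero-order-hold step is delegated to scipy.signal.cont2discrete, which computes A = e^{A_c T_s} and B = ∫_0^{T_s} e^{A_c τ} dτ B_c.

import numpy as np
from scipy.signal import cont2discrete

def swing_model(M, D, L, B1, C, Ts):
    # M, D: lists of inertias and damping coefficients; L: Laplacian as in (5.29)
    n = len(M)
    Minv = np.diag(1.0 / np.asarray(M, dtype=float))
    Ac = np.block([[np.zeros((n, n)), np.eye(n)],
                   [-Minv @ L, -Minv @ np.diag(D)]])
    Bc = np.vstack([np.zeros((n, B1.shape[1])), Minv @ B1])
    Dmat = np.zeros((C.shape[0], B1.shape[1]))
    A, B, _, _, _ = cont2discrete((Ac, Bc, C, Dmat), Ts)   # zero-order-hold sampling
    return A, B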

where N is the time window. Figure 5.5 shows the system performance and attack input energy for two stealthiness levels. It can be observed that the less stealthy


Fig. 5.5 Optimal attack energy in (5.30) and the error in (5.6) (with N = 100) corresponding to stealthiness levels ε = 0.62 (attack 1) and ε = 1.82 (attack 2) for the power system in Example 5.4. Subfigure a shows that the inputs corresponding to attack 1 have lower energy than those of attack 2. Subfigure b shows that the error induced by attack 1 is less than that of attack 2

attack inputs have larger energy and, therefore, result in a larger error when compared to more stealthy attacks. This illustrates the trade-off between stealthiness and performance.

5.6 Conclusion

This work characterizes fundamental limitations and achievability results for the performance degradation induced by an attacker in a stochastic control system. The attacker is assumed to know the system parameters and noise statistics, and is able to hijack and replace the nominal control input. We propose a notion of ε-stealthiness to quantify the difficulty of detecting an attack from the measurements, and we characterize the largest degradation of Kalman filtering induced by an ε-stealthy attack. For right-invertible systems, our study reveals that the nominal control input is the only critical piece of information needed to induce the largest performance degradation. For systems that are not right invertible, we provide an achievability result that lower bounds the performance degradation that an optimal ε-stealthy attack can achieve.

References 1. Farwell, J.P., Rohozinski, R.: Stuxnet and the future of cyber war. Survival 53(1), 23–40 (2011) 2. Kuvshinkova, S.: SQL Slammer worm lessons learned for consideration by the electricity sector. North American Electric Reliability Council (2003) 3. Mo, Y., Chabukswar, R., Sinopoli, B.: Detecting integrity attacks on SCADA systems. IEEE Trans. Control Syst. Technol. 22(4), 1396–1407 (2014) 4. Pasqualetti, F., Dörfler, F., Bullo, F.: Attack detection and identification in cyber-physical systems. IEEE Trans. Autom. Control 58(11), 2715–2729 (2013)


5. Richards, G.: Hackers vs slackers. Eng. Technol. 3(19), 40–43 (2008) 6. Slay, J., Miller, M.: Lessons learned from the Maroochy water breach. Crit. Infrastruct. Prot. 253, 73–82 (2007) 7. Teixeira, A., Pérez, D., Sandberg, H., Johansson, K.H.: Attack models and scenarios for networked control systems. In: Proceedings of the 1st International Conference on High Confidence Networked Systems, pp. 55–64. ACM (2012) 8. Patton, R., Frank, P., Clark, R.: Fault Diagnosis in Dynamic Systems: Theory and Applications. Prentice Hall, Upper Saddle River (1989) 9. Pasqualetti, F., Dörfler, F., Bullo, F.: Control-theoretic methods for cyberphysical security: geometric principles for optimal cross-layer resilient control systems. IEEE Control Syst. Mag. 35(1), 110–127 (2015) 10. Foroush, H.S., Martínez, S.: On multi-input controllable linear systems under unknown periodic DoS jamming attacks. In: SIAM Conference on Control and Its Applications, pp. 222–229. SIAM (2013) 11. Mo, Y., Sinopoli, B.: Secure control against replay attacks. In: Allerton Conference on Communications, Control and Computing, Monticello, IL, USA, September, pp. 911–918 (2010) 12. Smith, R.: A decoupled feedback structure for covertly appropriating network control systems. In: IFAC World Congress, Milan, Italy, August, pp. 90–95 (2011) 13. Dan, G., Sandberg, H.: Stealth attacks and protection schemes for state estimators in power systems. In: IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, October, pp. 214–219 (2010) 14. Giani, A., Bitar, E., Garcia, M., McQueen, M., Khargonekar, P., Poolla, K.: Smart grid data integrity attacks: characterizations and countermeasures. In: IEEE International Conference on Smart Grid Communications, Brussels, Belgium, pp. 232–237 (2011) 15. Liu, Y., Reiter, M.K., Ning, P.: False data injection attacks against state estimation in electric power grids. In: ACM Conference on Computer and Communications Security, Chicago, IL, USA, November, pp. 21–32 (2009) 16. Mohsenian-Rad, A.-H., Leon-Garcia, A.: Distributed internet-based load altering attacks against smart power grids. IEEE Trans. Smart Grid 2(4), 667–674 (2011) 17. Teixeira, A., Amin, S., Sandberg, H., Johansson, K.H., Sastry, S.S.: Cyber security analysis of state estimators in electric power systems. In: IEEE Conference on Decision and Control, Atlanta, GA, USA, December, pp. 5991–5998 (2010) 18. Bhattacharya, S., Ba¸sar, T.: Differential game-theoretic approach to a spatial jamming problem. In: Advances in Dynamic Games, pp. 245–268. Springer, Berlin (2013) 19. Hamza, F., Tabuada, P., Diggavi, S.: Secure state-estimation for dynamical systems under active adversaries. In: Allerton Conference on Communications, Control and Computing, September, pp. 337–344 (2011) 20. Maharjan, S., Zhu, Q., Zhang, Y., Gjessing, S., Ba¸sar, T.: Dependable demand response management in the smart grid: a Stackelberg game approach. IEEE Trans. Smart Grid 4(1), 120–132 (2013) 21. Manshaei, M., Zhu, Q., Alpcan, T., Ba¸sar, T., Hubaux, J.-P.: Game theory meets network security and privacy. ACM Comput. Surv. 45(3), 1–39 (2011) 22. Zhu, M., Martínez, S.: Stackelberg-game analysis of correlated attacks in cyber-physical systems. In: American Control Conference, San Francisco, CA, USA, July, pp. 4063–4068 (2011) 23. Zhu, Q., Tembine, H., Ba¸sar, T.: Hybrid learning in stochastic games and its application in network security. In: Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, pp. 303–329 (2013) 24. 
Basile, G., Marro, G.: Controlled and Conditioned Invariants in Linear System Theory. Prentice Hall, Upper Saddle River (1991) 25. Fawzi, H., Tabuada, P., Diggavi, S.: Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans. Autom. Control 59(6), 1454–1467 (2014) 26. Cui, S., Han, Z., Kar, S., Kim, T.T., Poor, H.V., Tajer, A.: Coordinated data-injection attack and detection in the smart grid: a detailed look at enriching detection solutions. IEEE Signal Process. Mag. 29(5), 106–115 (2012)

98

V. Katewa et al.

27. Kosut, O., Jia, L., Thomas, R.J., Tong, L.: Malicious data attacks on the smart grid. IEEE Trans. Smart Grid 2(4), 645–658 (2011) 28. Kwon, C., Liu, W., Hwang, I.: Security analysis for cyber-physical systems against stealthy deception attacks. In: American Control Conference, Washington, DC, USA, pp. 3344–3349. IEEE (2013) 29. Liu, Y., Ning, P., Reiter, M.K.: False data injection attacks against state estimation in electric power grids. ACM Trans. Inf. Syst. Secur. 14(1), 13 (2011) 30. Bai, C.-Z., Gupta, V.: On Kalman filtering in the presence of a compromised sensor: fundamental performance bounds. In: American Control Conference, Portland, OR, June, pp. 3029–3034 (2014) 31. Bai, C.-Z., Pasqualetti, F., Gupta, V.: Security in stochastic control systems: fundamental limitations and performance bounds. In: American Control Conference, Chicago, IL, USA, July, pp. 195–200 (2015) 32. Kung, E., Dey, S., Shi, L.: The performance and limitations of ε-stealthy attacks on higher order systems. IEEE Trans. Autom. Control 62(2), 941–947 (2017) 33. Zhang, R., Venkitasubramaniam, P.: Stealthy control signal attacks in vector LQG systems. In: American Control Conference, Boston, MA, USA, pp. 1179–1184 (2016) 34. Guo, Z., Shi, D., Johansson, K.H., Shi, L.: Optimal linear cyber-attack on remote state estimation. IEEE Trans. Control Netw. Syst. 4(1), 4–13 (2017) 35. Weerakkody, S., Sinopoli, B., Kar, S., Datta, A.: Information flow for security in control systems. IEEE Conference on Decision and Control, Las Vegas, NV, USA, pp. 5065–5072 (2016) 36. Chen, Y., Kar, S., Moura, J.M.F.: Optimal attack strategies subject to detection constraints against cyber-physical systems. IEEE Trans. Control Netw. Syst. 5(3), 1157–1168 (2018) 37. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006) 38. Bai, C.-Z., Pasqualetti, F., Gupta, V.: Data-injection attacks in stochastic control systems: detectability and performance tradeoffs. Automatica 82, 251–260 (2017) 39. Hespanha, J.P.: Linear Systems Theory. Princeton University Press, Princeton (2009) 40. El Gamal, A., Kim, Y.-H.: Network Information Theory. Cambridge University Press, Cambridge (2011) 41. Kundur, P.: Power System Stability and Control. McGraw-Hill Education, New York (1994) 42. Athay, T., Podmore, R., Virmani, S.: A practical method for the direct analysis of transient stability. IEEE Trans. Power Appar. Syst. (PAS) 98(2), 573–584 (1979) 43. Varga, A.: Descriptor system tools (DSTOOLS) user’s guide (2018). ArXiv eprint arXiv:1707.07140

Chapter 6

Security Metrics for Control Systems

André M. H. Teixeira

Abstract In this chapter, we consider stealthy cyber- and physical attacks against control systems, where malicious adversaries aim at maximizing the impact on control performance, while simultaneously remaining undetected. As an initial goal, we develop security-related metrics to quantify the impact of stealthy attacks on the system. The key novelty of these metrics is that they jointly consider impact and detectability of attacks, unlike classical sensitivity metrics in robust control and fault detection. The final objective of this work is to use such metrics to guide the design of optimal resilient controllers and detectors against stealthy attacks, akin to the classical design of optimal robust controllers. We report preliminary investigations on the design of resilient observer-based controllers and detectors, which are supported and illustrated through numerical examples.

6.1 Introduction

Cyber-security of control systems has been receiving increasing attention in recent years. Overviews of existing cyber-threats and vulnerabilities in networked control systems have been presented by different authors [2, 16]. Adversaries endowed with rationality and intent are highlighted as one of the key items in security for control systems, as opposed to natural faults and disturbances. These intelligent adversaries may exploit existing vulnerabilities and limitations in anomaly detection mechanisms and remain undetected. In fact, [10] uses tools from geometric control theory to study such fundamental limitations and characterizes a set of stealthy attack policies for networked systems modeled by differential-algebraic equations. Related stealthy attack policies were also considered in [13, 16], while [3] characterizes the number of corrupted sensor channels that cannot be detected during a finite-time interval. A common thread within these approaches is that stealthy attacks are constrained to be entirely decoupled from the anomaly detector's output. Classes of attacks that are in theory detectable, but hard to detect in practice, have not received as much attention.


Another important direction is to analyze the potential damage of stealthy attacks. Recently, [1] investigated the detectability limitations and performance degradation of data injection attacks in stochastic control systems. The impact of stealthy data injection attacks on sensors is also investigated in [8], which characterized the set of states reachable by a stealthy adversary. The work in [17] formulated the impact of data injection attacks in a finite-time horizon as a generalized eigenvalue problem, whereas [21] considered an alternative formulation that allowed the impact to be characterized as the solution to a convex optimization problem. A similar approach was considered in [12] for impulsive attacks. While this set of results is useful to assess the impact of stealthy cyber-attacks on control systems, it cannot be used to directly design more resilient controllers, since the optimization problems have a complex non-convex dependence on the design parameters.

The impact and detectability of data injection attacks on discrete-time systems have also been jointly considered in the author's previous work [15]. The impact of stealthy attacks is characterized as the solution to a convex problem that has a remarkable similarity with existing optimization-based techniques to design optimal H∞ robust detectors and controllers [11, 22].

As main contributions of this chapter, we revisit the control system security problem in [15], but now for continuous-time systems. The goal is to investigate possible metrics with which to analyze the security of control systems against malicious adversaries that aim at maximizing impact while simultaneously minimizing detection. Classical metrics in robust control and fault detection are revisited, namely the H∞ norm [24] and the H− index [22]. Their suitability for security analysis is discussed, from which we conclude that they have limited applicability to security, as they consider impact and detection separately. Then, a first heuristic approach to integrate these metrics together is taken and further examined, which also shares some of the limitations of the classical metrics. The continuous-time version of the security metric developed in [15] is then presented: the output-to-output gain. Results characterizing the metric are given, based on which an efficient computation approach is proposed. Furthermore, fundamental limitations of this metric are also characterized, which are aligned with well-known fundamental limitations in the detectability of attacks [1, 10]. Finally, a first attempt to use this security metric in the design of optimal resilient controllers and detectors against stealthy attacks is investigated. A heuristic method to search for sub-optimal solutions is presented, based on alternating minimization. The discussion and results in this chapter are illustrated and supported by numerical experiments on a single closed-loop system, with small examples distributed through the different sections of the chapter.


Fig. 6.1 Networked control system under cyber-physical attacks

6.2 Closed-Loop System Under Cyber- and Physical Attacks

This section details the attack scenario and the closed-loop system structure under study, following a similar modeling framework as in [16]. Consider a control system comprising a physical plant (P) controlled over a communication network, and an integrated observer-based feedback controller and anomaly detector (F). The closed-loop system under attacks is depicted in Fig. 6.1, and its dynamics are described by the following equations:

$$
P:\ \begin{cases}
\dot{x}_p(t) = A_p x_p(t) + B_p \tilde{u}(t) + E_p f(t)\\
y_c(t) = C_p x_p(t) + D_p \tilde{u}(t)\\
y_m(t) = C_m x_p(t)
\end{cases}
\qquad
F:\ \begin{cases}
\dot{\hat{x}}_p(t) = A_p \hat{x}_p(t) + B_p u(t) + K y_r(t)\\
\hat{y}_m(t) = C_m \hat{x}_p(t)\\
u(t) = L \hat{x}_p(t)\\
y_r(t) = \tilde{y}_m(t) - \hat{y}_m(t),
\end{cases}
\qquad (6.1)
$$

where $x_p \in \mathbb{R}^{n_x}$ denotes the state of the plant, $y_m(t) \in \mathbb{R}^{n_y}$ is the measurement signal transmitted by the sensors from the plant to the observer, $u \in \mathbb{R}^{n_u}$ is the control signal computed by the observer-based controller and then transmitted to the actuator, and $f(t) \in \mathbb{R}^{n_f}$ is a physical fault signal, possibly inserted by a malicious adversary.

The level of performance of the closed-loop system over a given time-horizon $[0, T]$ is measured by the quadratic control cost

$$
J(x(t), \tilde{u}(t))_{[0,T]} \triangleq \int_0^T \|y_c(t)\|_2^2\, dt = \|y_c\|^2_{\mathcal{L}_2[0,T]}. \qquad (6.2)
$$

The signal yc (t) ∈ Rn c is thus denoted as the performance output. The communication network may be subject to malicious cyber-attacks, which are able to hijack, read, and re-write the data packets flowing through the network.


The possibly compromised measurement and actuator signals at the corresponding receiver are denoted by $\tilde{y}_m(t)$ and $\tilde{u}(t)$, respectively.

The observer produces an estimate of the plant's state, $\hat{x}(t)$, which has a dual role. On the one hand, the estimate is the basis for computing the control input $u(t)$ that steers the system. On the other hand, the observer's estimate is also used to generate a so-called residual $y_r(t)$, which is evaluated to detect the presence of anomalies. Thus, the residual is also called the detection output. In particular, we suppose that an anomaly is detected if the energy of the residual signal over a given time-horizon $[0, T]$ exceeds a certain threshold, i.e.,

$$
\|y_r\|^2_{\mathcal{L}_2[0,T]} > \tau_r^2. \qquad (6.3)
$$

In the remainder of the chapter, without loss of generality, we let τr = 1.

6.2.1 Attack Scenario and Adversary Model

In this chapter, we consider the class of false-data injection attacks on data exchanged with sensors and actuators, possibly combined with a physical attack $f(t)$. The cyber-attack component can be modeled as additive corruptions of the sensor and actuator signals, described by

$$
\begin{aligned}
\tilde{u}(t) &= u(t) + \Delta u(t), & \Delta u(t) &= \Gamma_u a_u(t),\\
\tilde{y}_m(t) &= y_m(t) + \Delta y_m(t), & \Delta y_m(t) &= \Gamma_y a_y(t),
\end{aligned}
\qquad (6.4)
$$

where $a_u(t) \in \mathbb{R}^{m_u}$ and $a_y(t) \in \mathbb{R}^{m_y}$ are the data corruptions added to the actuator and sensor signals, respectively, and $\Gamma_u \in \mathbb{B}^{n_u \times m_u}$ and $\Gamma_y \in \mathbb{B}^{n_y \times m_y}$ are binary-valued matrices indicating which $m_u$ and $m_y$ channels can be corrupted by the adversary. Additionally, we consider that the adversary can also stage a physical attack through the fault signal $f(t)$, possibly in coordination with the cyber-attack on sensor and actuator data.

In terms of the adversary model, we consider the worst-case scenario where the adversary has perfect knowledge of the closed-loop system model in (6.1). Moreover, concerning the intent of the malicious adversary, we suppose that the attacker has two objectives. First, the adversary aims at corrupting the sensor and actuator data so that the closed-loop system performance is deteriorated as much as possible. This means that the adversary aims at maximizing the control cost (6.2). Second, the attacker wishes to minimize the detection alarms (6.3), and therefore avoid being detected.

Example 6.1 Throughout the chapter, the quadruple water tank system [5] will be used as a recurring example. The plant is depicted in Fig. 6.2, which illustrates one of the attack scenarios considered in this chapter.


Fig. 6.2 The quadruple tank process, controlled through an LQG controller, under a physical attack. The physical attack may be modeled with Γu = Γ y = ∅ and E p = e1

The plant consists of four water tanks, with four states associated with the water level of each tank, two measurement signals corresponding to the water level in two of the tanks, and two actuators corresponding to two water pumps. Each water pump delivers water to two tanks, as depicted in Fig. 6.2. The flow ratio from each pump to the respective tanks is determined by a valve. The valves are configured so that the plant possesses one unstable transmission zero from $\tilde{u}(t)$ to $y_m(t)$. The observer-based controller is implemented as an LQG controller, by means of a Kalman filter that feeds its state estimate to an LQR controller. The LQG controller is designed to minimize the quadratic cost (6.2). The closed-loop system dynamics are described by (6.1), with the data:

$$
A_p = \begin{bmatrix} -0.1068 & 0 & 0.0275 & 0\\ 0 & -0.0903 & 0 & 0.0258\\ 0 & 0 & -0.0275 & 0\\ 0 & 0 & 0 & -0.0258\end{bmatrix},
\quad
B_p = \begin{bmatrix} 0.0802 & 0\\ 0 & 0.0807\\ 0 & 0.1345\\ 0.1337 & 0\end{bmatrix},
\quad
C_m = \begin{bmatrix} 0.2000 & 0 & 0 & 0\\ 0 & 0.2000 & 0 & 0\end{bmatrix},
$$

$$
C_p = \begin{bmatrix} 6.3246 & 0 & 0 & 0\\ 0 & 6.3246 & 0 & 0\\ 0 & 0 & 4.4721 & 0\\ 0 & 0 & 0 & 4.4721\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{bmatrix},
\quad
D_p = \begin{bmatrix} 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ 3.1623 & 0\\ 0 & 3.1623\end{bmatrix},
$$

$$
L = \begin{bmatrix} -0.5997 & -0.2224 & 0.1003 & -1.2153\\ -0.1916 & -0.6606 & -1.1847 & 0.0816\end{bmatrix},
\quad
K = \begin{bmatrix} 0.0345 & 0.0150\\ 0.0150 & 0.0414\\ 0.0456 & 0.0633\\ 0.0561 & 0.0515\end{bmatrix}.
$$

The closed-loop system is under attack on a subset of the sensors and actuators, possibly complemented with a physical attack that can affect each water tank independently. In particular, throughout the chapter we shall discuss different attack scenarios where the adversary corrupts at most two communication channels and physically attacks at most two water tanks. The former is modeled through $\Gamma_u$ and $\Gamma_y$, while the latter is captured by having $E_p = [e_i\ \ e_k] \in \mathbb{R}^{4\times 2}$, where $e_i$ is the $i$th column of the identity matrix.

As a first example, suppose that the adversary corrupts both pumps, which is modeled by (6.4) with $\Gamma_u = I$ and $\Gamma_y = \emptyset$. In this case, it has been previously shown in the literature [14, 16] that an attack on the actuators mimicking the unstable transmission zero dynamics will have an arbitrarily large impact on the plant's states, while remaining undetectable. For our specific system, the attack signal could be designed as $a(t) = \delta e^{0.0178 t}\,[-0.2243\ \ 0.2200]^\top$, for a sufficiently small scalar $\delta$. Such an attack will have a negligible effect on the detection output, while driving the states, and thus the control cost, toward infinity.
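The zero-dynamics claim above can be checked numerically. The following Python sketch (not part of the original chapter) evaluates the plant transfer matrix from the pump inputs to the measured levels at the reported zero $s = 0.0178$ and verifies that the reported direction is close to a right null vector; since the matrices are the rounded values printed in Example 6.1, the result is only approximately zero.

```python
import numpy as np

# Plant data from Example 6.1 (rounded values as printed in the chapter).
Ap = np.array([[-0.1068, 0.0,     0.0275, 0.0],
               [0.0,    -0.0903,  0.0,    0.0258],
               [0.0,     0.0,    -0.0275, 0.0],
               [0.0,     0.0,     0.0,   -0.0258]])
Bp = np.array([[0.0802, 0.0],
               [0.0,    0.0807],
               [0.0,    0.1345],
               [0.1337, 0.0]])
Cm = np.array([[0.2, 0.0, 0.0, 0.0],
               [0.0, 0.2, 0.0, 0.0]])

s0 = 0.0178                            # unstable transmission zero reported in the text
a_dir = np.array([-0.2243, 0.2200])    # zero input direction reported in the text

# Transfer matrix from the pump inputs u to the measured levels y_m, evaluated at s0.
Gm = Cm @ np.linalg.inv(s0 * np.eye(4) - Ap) @ Bp

# If (s0, a_dir) is a transmission zero with input direction a_dir, both quantities
# below should be close to zero (only approximately, given the rounded data).
print("smallest singular value of G_m(s0):", np.linalg.svd(Gm, compute_uv=False).min())
print("||G_m(s0) a_dir||_2 =", np.linalg.norm(Gm @ a_dir))
```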

6.2.2 Toward Metrics for Security Analysis

Given the control system and attack scenario previously described, the next sections look into possible metrics for characterizing the worst-case attack, its impact on performance, and its level of detectability. For simplicity, we will re-write the dynamics of the closed-loop system under attack in a more compact form. To this end, let us define the estimation error as $e(t) \triangleq x_p(t) - \hat{x}_p(t)$, the augmented state of the plant and observer as $x(t) \triangleq [x_p(t)^\top\ \ e(t)^\top]^\top$, and the augmented attack signal as $a(t) \triangleq [a_u(t)^\top\ \ a_y(t)^\top\ \ f(t)^\top]^\top \in \mathbb{R}^{n_a}$. The closed-loop dynamics (6.1) under the considered attack (6.4) are compactly described by

$$
\begin{aligned}
\dot{x}(t) &= A x(t) + B a(t)\\
y_c(t) &= C_c x(t) + D_c a(t)\\
y_r(t) &= C_r x(t) + D_r a(t),
\end{aligned}
\qquad (6.5)
$$

where the matrices are given by

$$
\begin{aligned}
A &= \begin{bmatrix} A_p + B_p L & -B_p L\\ 0 & A_p + K C_m\end{bmatrix}, &
B &= \begin{bmatrix} B_p \Gamma_u & 0 & E_p\\ B_p \Gamma_u & K \Gamma_y & E_p\end{bmatrix},\\
C_c &= \begin{bmatrix} C_p + D_p L & -D_p L\end{bmatrix}, &
D_c &= \begin{bmatrix} D_p \Gamma_u & 0 & 0\end{bmatrix},\\
C_r &= \begin{bmatrix} 0 & C_m\end{bmatrix}, &
D_r &= \begin{bmatrix} 0 & \Gamma_y & 0\end{bmatrix}.
\end{aligned}
\qquad (6.6)
$$

Furthermore, we shall denote $\Sigma_c \triangleq (A, B, C_c, D_c)$ and $\Sigma_r \triangleq (A, B, C_r, D_r)$ as the realizations of the closed-loop system as seen from the attack $a(t)$ to the outputs $y_c(t)$ and $y_r(t)$, respectively.

Note that the adversary model places no explicit constraint on the attack signal $a(t)$. Therefore, we will consider generic attack signals that lie in the so-called extended $\mathcal{L}_2$ space, denoted as $\mathcal{L}_{2e}$. More specifically, $\mathcal{L}_{2e}$ is the space of signals that have finite energy over all finite-time horizons, but do not necessarily have finite energy in the infinite horizon. Formally, this signal space can be defined as $\mathcal{L}_{2e} \triangleq \{a : \mathbb{R}_+ \to \mathbb{R}^n \mid \|a\|_{\mathcal{L}_2[0,T]} < \infty,\ \forall T < \infty\}$.

Given the above setup, as the start of our discussion, the classical metrics in robust control and fault detection are first examined, and we discuss to what extent they can or cannot capture the attack scenario described in the previous section.
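For readers who want to reproduce the examples numerically, the following Python/NumPy sketch assembles the closed-loop realization (6.6) from the plant, controller, and attack-structure data. It is a minimal helper written for this summary, not the author's code; the sign conventions follow (6.6) as printed, and the selection matrices $\Gamma_u$, $\Gamma_y$, $E_p$ are passed as (possibly empty) column-selection matrices.

```python
import numpy as np

def closed_loop_realization(Ap, Bp, Cp, Dp, Cm, K, L, Gamma_u, Gamma_y, Ep):
    """Assemble (A, B, Cc, Dc, Cr, Dr) of the attacked closed loop, following (6.6).

    Gamma_u, Gamma_y select the corrupted actuator/sensor channels and Ep the
    physically attacked states; an empty selection is passed as an array with
    zero columns, e.g. np.zeros((n_u, 0)).
    """
    nx = Ap.shape[0]
    A = np.block([[Ap + Bp @ L, -Bp @ L],
                  [np.zeros((nx, nx)), Ap + K @ Cm]])
    B_top = np.hstack([Bp @ Gamma_u, np.zeros((nx, Gamma_y.shape[1])), Ep])
    B_bot = np.hstack([Bp @ Gamma_u, K @ Gamma_y, Ep])
    B = np.vstack([B_top, B_bot])
    Cc = np.hstack([Cp + Dp @ L, -Dp @ L])
    Dc = np.hstack([Dp @ Gamma_u,
                    np.zeros((Cp.shape[0], Gamma_y.shape[1] + Ep.shape[1]))])
    Cr = np.hstack([np.zeros_like(Cm), Cm])
    Dr = np.hstack([np.zeros((Cm.shape[0], Gamma_u.shape[1])), Gamma_y,
                    np.zeros((Cm.shape[0], Ep.shape[1]))])
    return A, B, Cc, Dc, Cr, Dr

# Example (cyber-attack on pump 1 only, cf. Example 6.2), using the data of Example 6.1:
#   Gamma_u = np.array([[1.0], [0.0]]); Gamma_y = np.zeros((2, 0)); Ep = np.zeros((4, 0))
#   A, B, Cc, Dc, Cr, Dr = closed_loop_realization(Ap, Bp, Cp, Dp, Cm, K, L,
#                                                  Gamma_u, Gamma_y, Ep)
```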

6.3 Classical Metrics in Robust Control and Fault Detection

Typical worst-case metrics in control are the largest and smallest gains of a dynamical system. The largest gain is commonly used to capture the maximum amplification that the input can have on the output. On the contrary, the smallest gain captures the least amplification that the input has on the output. When the amplification is measured in terms of energy (i.e., the $\mathcal{L}_2$ signal norm), the largest and smallest gains are, respectively, the H∞ norm and the H− index.

6.3.1 The H∞ Norm

The H∞ norm is a classical metric in robust control, capturing the largest energy amplification from input to output. In the context of our attack scenario, the adversary is interested in maximizing the energy of the performance output, $y_c$. Hence, we are interested in the worst-case (largest) amplification from the attack signal to the performance output, which is captured by the H∞ norm of the system (6.5) from $a(t)$ to $y_c(t)$, namely $\|\Sigma_c\|_{\mathcal{H}_\infty}$.

There are multiple equivalent characterizations of the H∞ norm. An appealing one for our purposes is the formulation as the following optimal control problem:

$$
\|\Sigma_c\|^2_{\mathcal{H}_\infty} \triangleq \sup_{a \in \mathcal{L}_{2e},\, x(0)=0} \|y_c\|^2_{\mathcal{L}_2} \quad \text{s.t.}\ \|a\|^2_{\mathcal{L}_2} \le 1. \qquad (6.7)
$$

Yet another useful interpretation of the H∞ norm is that of the maximum $\mathcal{L}_2$ amplification of the system, from input to output, i.e., $\|\Sigma_c\|^2_{\mathcal{H}_\infty} = \gamma \ge 0$ implies

$$
\|y_c\|^2_{\mathcal{L}_2} \le \gamma\, \|a\|^2_{\mathcal{L}_2}, \quad x(0) = 0. \qquad (6.8)
$$

Finally, defining $G_c(s) = C_c(sI - A)^{-1}B + D_c$, the H∞ norm can also be related to the singular values of the system $\Sigma_c$:

$$
\bar{\sigma}_c(\omega)^2 \le \sup_{\omega > 0} \bar{\sigma}_c(\omega)^2 = \gamma, \qquad (6.9)
$$

where $\bar{\sigma}_c(\omega)^2 \triangleq \max_{a \in \mathbb{C}^{n_a}} \dfrac{\|G_c(j\omega)a\|_2^2}{\|a\|_2^2}$.

The H∞ norm has well-known properties, for instance that $\|\Sigma_c\|_{\mathcal{H}_\infty}$ is unbounded if and only if the system $\Sigma_c$ is unstable. Moreover, note that the constraint in (6.7) essentially restricts the attack signal to have finite energy over infinite horizons. This in turn implies that the worst-case amplification does not consider attack signals with possibly infinite energy, namely non-vanishing signals, such as strictly increasing exponential signals and ramps.

Remark 6.1 Recalling Example 6.1 with the quadruple tank under attack on both actuators, we observe that the exponentially increasing, undetectable attack described in Example 6.1 is not considered by the H∞ norm. Such a potentially harmful attack is, therefore, not included in analyses based on the H∞ norm.

Furthermore, the H∞ norm does not give any guarantees on the detectability of the attack signal. Hence, the worst-case attack signal may turn out to be easily detectable, despite resulting in the worst amplification of the performance output (with respect to input signals with finite energy).

Example 6.2 Consider the quadruple tank system of Example 6.1, but now under attack on the first actuator (pump 1, corresponding to the first entry in the vector $u(t)$). Computing the H∞ norm for such a scenario, and in particular examining the largest singular value $\bar{\sigma}_c(\omega)$, indicates that the worst-case frequency is $\omega = 0$. Hence, constant attacks will result in the worst-case amplification of the cost. For a constant attack of the form $a(t) = 1$, the worst-case in the H∞ sense, the RMS of both the performance output and the detection output are shown in Fig. 6.3 (blue dotted line). Another attack signal at a higher frequency, but with the same level of impact on performance at steady state, is also depicted (red dash-dotted line). As can be observed from the lower plot, the new attack has a much lower level of detectability. From a security perspective, such an attack is thus of higher concern than the worst-case attack in the H∞ sense.

As illustrated by the above discussions and the latter example, we need a metric that also accounts for detectability of the attack signal.
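As an illustration of how (6.9) is used in Example 6.2, the sketch below sweeps a frequency grid and evaluates the largest singular value of $G_c(j\omega)$; the peak of the curve approximates the H∞ norm and its location identifies the worst-case frequency. This is a plain grid search written for illustration, under the assumption that the closed-loop matrices have been assembled as in Sect. 6.2.2; it is not a substitute for dedicated H∞-norm algorithms.

```python
import numpy as np

def largest_gain_profile(A, B, Cc, Dc, omegas):
    """Largest singular value of G_c(jw) = Cc (jw I - A)^{-1} B + Dc on a frequency grid.

    The supremum of the returned curve approximates the H-infinity norm in (6.9), and
    the location of its peak indicates the worst-case attack frequency.
    """
    n = A.shape[0]
    profile = []
    for w in omegas:
        Gc = Cc @ np.linalg.solve(1j * w * np.eye(n) - A, B) + Dc
        profile.append(np.linalg.svd(Gc, compute_uv=False).max())
    return np.array(profile)

# Sketch of use for the scenario of Example 6.2:
#   omegas = np.logspace(-4, 2, 400)
#   curve = largest_gain_profile(A, B, Cc, Dc, omegas)
#   w_worst = omegas[curve.argmax()]   # expected near w = 0 for the attack on pump 1
```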

Fig. 6.3 The RMS values over time of the performance and detection outputs, yc and yr, for different attack signals on actuator 1. The worst-case attack, in the H∞ sense, is depicted in dotted blue line. For the same level of impact on performance, another attack signal at a higher frequency (red dash-dotted line) is shown to have a much lower level of detectability

6.3.2 The H− Index

As opposed to the H∞ norm, the H− index is a classical metric in fault detection, capturing the smallest energy amplification from input to output. In the context of our attack scenario, the adversary is interested in minimizing the energy of the detection output, $y_r$. Therefore, we are interested in the worst-case (smallest) amplification from the attack signal to the detection output, which is captured by the H− index of the system (6.5) from $a(t)$ to $y_r(t)$, namely $\|\Sigma_r\|_{\mathcal{H}_-}$.

Similarly as for the H∞ norm, there are multiple possible characterizations of the H− index. First, we have again the formulation as the following optimal control problem:

$$
\|\Sigma_r\|^2_{\mathcal{H}_-} \triangleq \inf_{a \in \mathcal{L}_{2e},\, x(0)=0} \|y_r\|^2_{\mathcal{L}_2} \quad \text{s.t.}\ \|a\|^2_{\mathcal{L}_2} \ge 1. \qquad (6.10)
$$

Second, a useful interpretation of the H− index is that of the minimum $\mathcal{L}_2$ amplification of the system, i.e., $\|\Sigma_r\|^2_{\mathcal{H}_-} = \gamma \ge 0$ implies

$$
\|y_r\|^2_{\mathcal{L}_2} \ge \gamma\, \|a\|^2_{\mathcal{L}_2}, \quad x(0) = 0. \qquad (6.11)
$$

Finally, defining $G_r(s) = C_r(sI - A)^{-1}B + D_r$, the H− index can also be related to the singular values of the system $\Sigma_r$:

$$
\underline{\sigma}_r(\omega)^2 \ge \inf_{\omega > 0} \underline{\sigma}_r(\omega)^2 = \gamma, \qquad (6.12)
$$

where $\underline{\sigma}_r(\omega)^2 \triangleq \min_{a \in \mathbb{C}^{n_a}} \dfrac{\|G_r(j\omega)a\|_2^2}{\|a\|_2^2}$.

Remark 6.2 The original definition of the H− index in the fault detection literature [6] was based on the singular values (6.12). This definition is actually more conservative than the new formulations based on optimal control (6.10) and energy amplification (6.11). Specifically, the formulation in (6.12) implicitly constrains the input signal to have finite energy, and thus lie in $\mathcal{L}_2$. On the other hand, (6.10) and (6.11) allow the input signal to lie in $\mathcal{L}_{2e}$ and thus have infinite energy in infinite time horizons.

The original H− index defined in (6.12) is well known for its limitation in strictly proper systems, in which case $\underline{\sigma}_r(\omega)$ decreases towards 0 for high frequencies, and thus H− = 0. Moreover, as per Remark 6.2, it again restricts the attack signal to have finite energy over infinite horizons. Hence, attacks that are typically dangerous but hard to detect, such as incipient signals, are not considered by the original H− index. The new formulations of the H− index, namely (6.10) and (6.11), address some of the conservatism of (6.12) and consider, for instance, the presence of unstable zeros in the system that will render H− = 0, which was neglected in (6.12).

However, the H− index does not give any measure of the impact of the attack signal on the closed-loop system performance. Hence, the worst-case attack signal may turn out to be hard to detect, but at the same time result in negligible impact on performance.

Example 6.3 Consider the quadruple tank system of Example 6.1, but now under attack on the second actuator (pump 2, corresponding to the second entry in the vector $u(t)$). Computing the H− index for such a scenario, and in particular examining the smallest singular value $\underline{\sigma}_r(\omega)$, indicates that the detectability deteriorates monotonically with increasing frequencies. Thus the worst-case frequency is $\omega = \infty$, in which case $\|\Sigma_r\|_{\mathcal{H}_-} = 0$. Hence, attack signals with very high frequency will result in the worst-case detectability at the detection output. The RMS of the performance and detection outputs for different attacks are illustrated in Fig. 6.4, where the attack magnitudes have been adjusted so that all attacks have the same level of detectability at steady state (i.e., the same RMS value for the detection outputs). As seen from the magnitude of the different attacks, a large magnitude is required for obtaining the same level of detectability, which confirms that the detectability in the H− sense decreases with frequency. However, worse detectability alone does not imply a larger impact on performance. In fact, the attack with the least impact on performance (depicted by dash-dotted red plots) is less detectable than the constant attack (dotted blue plots). Moreover, the impact on performance is not monotonic with frequency. Indeed, we observe that, for the same level of detectability, the most and the least detectable attacks (in dotted and dashed lines, respectively) yield the same RMS value for the performance output, and thus have the same level of impact.


Fig. 6.4 The RMS values over time of the performance and detection outputs, yc and yr , for different attack signals on actuator 2. For the same level of detectability, the impact on performance does not decrease with the frequency

6.3.3 Mixing H∞ and H−

As illustrated in the previous subsection, the H∞ norm and the H− index cannot simultaneously capture both of the aspects that define malicious attacks: the aim to maximize impact on performance, while simultaneously minimizing detection. Since H∞ addresses the impact on performance, while H− encodes detectability, a natural first approach would be to attempt to combine both metrics into one single quantity. However, as seen in Example 6.4, the worst-case frequency differs between the two metrics, which means that these metrics look into distinct worst-case input signals.

As a first attempt to combine the essence of these metrics, and develop a security metric, we consider instead the ratio between the singular values, $\mu(\omega)$, and define a first heuristic security metric as $\bar{\mu}$, with

$$
\mu(\omega) \triangleq \frac{\bar{\sigma}_c(\omega)^2}{\underline{\sigma}_r(\omega)^2} \le \sup_{\omega > 0} \frac{\bar{\sigma}_c(\omega)^2}{\underline{\sigma}_r(\omega)^2} = \bar{\mu}. \qquad (6.13)
$$

There are, however, two main flaws in this approach. The first is that it constrains the attack signal to have finite energy, i.e., to lie in $\mathcal{L}_2$, and hence only considers vanishing signals. For instance, as for the H− index, the presence of unstable zeros would not be considered, while it is well known that attacks replicating the unstable zero dynamics are not detected and still have a dramatic impact on the system [10, 15]. The second is that, for multiple-input systems (e.g., when multiple sensors and actuators are corrupted), $\mu(\omega)$ does not consider the spatial coordination of the different attack channels to achieve a large impact and worse detection. In other words, the spatial coordination between attack channels is encoded separately in the definitions of $\bar{\sigma}_c(\omega)$ and $\underline{\sigma}_r(\omega)$, and thus each singular value will consider a different worst-case attack direction (or singular vector).

Fig. 6.5 The singular values σc(ω) and σr(ω) (left) and their ratio μ(ω) (right) for a cyber-attack on actuator 2

Example 6.4 Consider the scenario in Example 6.3, where the quadruple tank system is under a cyber-attack on the second actuator. The singular values $\bar{\sigma}_c(\omega)$ and $\underline{\sigma}_r(\omega)$ are depicted in Fig. 6.5, as well as their ratio $\mu(\omega)$. As discussed in Example 6.3 and observed from $\underline{\sigma}_r(\omega)$ in the figure, the detectability deteriorates monotonically with the frequency as $\|\Sigma_r\|_{\mathcal{H}_-}$ converges to 0. Hence, attack signals with very high frequency will result in the worst-case detectability at the detection output. This behavior is a natural consequence of the plant dynamics: in the absence of a direct feed-through term from $u(t)$ to $y_m(t)$, actuation signals with very high frequency are naturally blocked by the system dynamics, and will not appear in the measured output. On the other hand, by observing $\bar{\sigma}_c(\omega)$, we conclude that the worst-case impact occurs at low frequencies. However, note that $\bar{\sigma}_c(\omega)$ converges to a non-zero constant as the frequency increases. This occurs due to the non-zero feed-through term $D_c \neq 0$, which means that the attack on the actuator has a direct impact on the control cost (6.2). Connecting the observations from the two classical metrics, we conclude that attack signals with high frequencies will be hard to detect, while having a non-zero impact on the system. The worst case would indeed be to have an infinitely high frequency, in which case the attack would be completely undetectable by $y_r$, while still having a non-zero impact on $y_c$. This is in fact correctly captured by the mixed metric $\mu(\omega)$, which tends to infinity as the frequency increases.

The above example illustrates that the heuristic metric $\mu(\omega)$ can, for certain scenarios, correctly capture the worst-case malicious attacks. In particular, as only one signal was attacked in Example 6.4, there was no spatial coordination between attack signals to be considered. Next we briefly discuss an example where such spatial coordination is crucial.

Fig. 6.6 The singular values σc(ω) and σr(ω) (left) and their ratio μ(ω) (right) for a physical attack with additional pumps, such that Ep = Bp

Example 6.5 Consider the scenario where $E_p = B_p$, that is, the adversary is able to stage a physical attack through some additional pumps other than the plant's actuators, but with identical effect on the system's states. The singular values $\bar{\sigma}_c(\omega)$ and $\underline{\sigma}_r(\omega)$, as well as their ratio $\mu(\omega)$, are depicted in Fig. 6.6. Since the attack does not affect the actual actuators, there is no direct feed-through term from the attack to any of the outputs (i.e., $D_c = D_r = 0$). Therefore, both the impact and the detectability decay to zero with the frequency. Moreover, it turns out that they decay at the same slope, which leads to a constant high-frequency asymptote in $\mu(\omega)$. By observing $\mu(\omega)$, one would be led to conclude that the worst-case attack with maximum impact and minimum detectability happens at low frequencies.


On the contrary, there exists an attack that is undetectable and still leads to arbitrarily large impact. As discussed in Example 6.1, the plant has one unstable zero from the actuators to the measurement output, which can thus be exploited by the physical attack, since $E_p = B_p$. Such an attack is non-vanishing and requires the coordination of both water pumps, two aspects that are not captured by the heuristic metric $\mu(\omega)$.

As motivated by the above discussions and examples, the heuristic metric $\mu(\omega)$ correctly captures the impact of worst-case stealthy attacks in some scenarios (Example 6.4), but fails in other cases where spatial coordination is exploited by the adversary (Example 6.5). Thus, there is a need for an over-arching security metric that correctly tackles strategic stealthy attacks in general scenarios. One such security metric is described in the next section.
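The heuristic ratio (6.13) can be evaluated on the same frequency grid as before; the sketch below does so, returning $\mu(\omega)$ point by point (closed-loop matrices assumed assembled as in Sect. 6.2.2). Note that the numerator and denominator use different worst-case directions, which is precisely the limitation exposed in Example 6.5, so the curve should be read as a heuristic only.

```python
import numpy as np

def mu_profile(A, B, Cc, Dc, Cr, Dr, omegas, tol=1e-12):
    """Heuristic ratio (6.13) on a frequency grid: largest singular value of G_c(jw)
    squared, divided by the smallest singular value of G_r(jw) squared."""
    n = A.shape[0]
    values = []
    for w in omegas:
        X = np.linalg.solve(1j * w * np.eye(n) - A, B)
        Gc = Cc @ X + Dc
        Gr = Cr @ X + Dr
        sc = np.linalg.svd(Gc, compute_uv=False).max()
        sr = np.linalg.svd(Gr, compute_uv=False).min()
        # A numerically zero denominator means some attack direction is invisible
        # in the residual at this frequency; report an infinite ratio.
        values.append(np.inf if sr < tol else (sc / sr) ** 2)
    return np.array(values)
```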

6.4 A Security Metric for Analysis and Design: The Output-to-Output Gain

The previous section described the classical metrics in control and fault detection, and illustrated their shortcomings in capturing simultaneously the impact and the detectability of malicious attacks, which also carried over to the heuristic metric $\mu(\omega)$. In this section, a new security metric that can successfully consider these two aspects is discussed and characterized. Moreover, we show that such a metric can be the basis for designing controllers and detectors for increased resiliency against adversaries.

6.4.1 Security Analysis with the Output-to-Output Gain

A security metric simultaneously integrating the impact on performance and the detectability of attacks was proposed in [15]: the output-to-output gain (OOG). This metric is tailored to the malicious adversary objectives of maximizing impact while remaining undetected, being defined as the optimal control problem

$$
\|\Sigma\|^2_{y_c \leftarrow y_r} \triangleq \sup_{a \in \mathcal{L}_{2e},\, x(0)=0} \|y_c\|^2_{\mathcal{L}_2} \quad \text{s.t.}\ \|y_r\|^2_{\mathcal{L}_2} \le 1. \qquad (6.14)
$$

As before, a useful interpretation of the OOG is that of the maximum $\mathcal{L}_2$ amplification of the system, but now in terms of the two outputs $\|y_c\|^2_{\mathcal{L}_2}$ and $\|y_r\|^2_{\mathcal{L}_2}$, i.e., $\|\Sigma\|^2_{y_c \leftarrow y_r} = \gamma \ge 0$ implies

$$
\|y_c\|^2_{\mathcal{L}_2} \le \gamma\, \|y_r\|^2_{\mathcal{L}_2}, \quad x(0) = 0. \qquad (6.15)
$$


A few remarks are in order. First, note that the attack signal is not a priori constrained to be a vanishing signal, as opposed to the H∞ norm and the original H− index. Thus, the OOG can consider unstable zeros, as well as exponentially increasing signals and ramps. Second, the gain characterization is rather insightful. Recalling that no detection alarm is triggered while $\|y_r\|^2_{\mathcal{L}_2} \le 1$, the condition (6.15) implies a guaranteed bound on the impact on performance for any attack that is not detected, namely, $\|y_c\|^2_{\mathcal{L}_2} \le \gamma \|y_r\|^2_{\mathcal{L}_2} \le \gamma$. As such, the OOG seamlessly characterizes the worst-case impact on performance of undetected, possibly non-vanishing attacks. Finally, we refer to a complete characterization of the OOG in terms of dissipative systems theory [20], as summarized in the following statement.

Proposition 6.1 Consider the continuous-time system $\Sigma \triangleq \left(A, B, \begin{bmatrix} C_c\\ C_r\end{bmatrix}, \begin{bmatrix} D_c\\ D_r\end{bmatrix}\right)$, as described by (6.5), which is assumed to be controllable. Define $G_r(s) = C_r(sI - A)^{-1}B + D_r$ and $G_c(s) = C_c(sI - A)^{-1}B + D_c$. The following statements are equivalent:

1. The OOG of the system satisfies the bound $\|\Sigma\|^2_{y_c \leftarrow y_r} \le \gamma$;
2. The system $\Sigma$ is dissipative w.r.t. the supply rate $s(x(t), a(t)) = \gamma \|y_r(t)\|_2^2 - \|y_c(t)\|_2^2$;
3. For all trajectories of the system with $T > 0$ and $x(0) = 0$, we have
$$
\int_0^T s(x(t), a(t))\, dt \ge 0;
$$
4. There exists a $P \succeq 0$ such that the following Linear Matrix Inequality holds:
$$
R(\Sigma, P, \gamma) \triangleq \begin{bmatrix} A^\top P + P A & P B\\ B^\top P & 0\end{bmatrix} - \gamma \begin{bmatrix} C_r^\top\\ D_r^\top\end{bmatrix}\begin{bmatrix} C_r & D_r\end{bmatrix} + \begin{bmatrix} C_c^\top\\ D_c^\top\end{bmatrix}\begin{bmatrix} C_c & D_c\end{bmatrix} \preceq 0. \qquad (6.16)
$$

Additionally, a necessary condition for the previous statements to hold is that the following frequency-domain condition is satisfied:

$$
\gamma\, G_r(\bar{s})^\top G_r(s) - G_c(\bar{s})^\top G_c(s) \succeq 0, \quad \forall s \in \mathbb{C} \text{ with } s \notin \lambda(A),\ \operatorname{Re}(s) \ge 0. \qquad (6.17)
$$

The above result follows directly from solving the optimal control problem (6.14) using dissipative systems theory [23], akin to the discrete-time case investigated in [15], and recalling key results in dissipative system theory for linear systems with quadratic supply rates [20]. Note that statement 3 in Proposition 6.1 is precisely equivalent to the gain condition presented in (6.15). Moreover, and most importantly, the LMI in statement 4 leads to a computationally efficient approach to compute the OOG of a given system. In fact, the OOG can be obtained by solving the following convex optimization problem

$$
\|\Sigma\|^2_{y_c \leftarrow y_r} = \min_{P \succeq 0,\ \gamma > 0}\ \gamma \quad \text{s.t.}\ R(\Sigma, P, \gamma) \preceq 0. \qquad (6.18)
$$

Finally, the inequality in (6.17) provides a necessary frequency-domain condition for the OOG to be bounded. This is in contrast to similar frequency-domain conditions for the classical metrics in the previous subsections, which are both necessary and sufficient, and also significantly less involved than (6.17). A necessary and sufficient frequency-domain condition was first derived in [9]. Unfortunately, this condition involves an infinite-dimensional inequality on the complex plane whose evaluation is not tractable [18]. In pursuit of tractable conditions, under certain regularity assumptions, it was shown that (6.17) was also sufficient, see [4, Sect. 2.3]. Under milder regularity assumptions, a frequency-domain condition based on Pick matrices was also proposed in [19]. Unfortunately, while the classical sensitivity metrics do satisfy the regularity assumptions investigated in the literature, the general formulation of the OOG does not. For instance, the case with $D_c = D_r = 0$ does not enjoy such regularity properties.

Nonetheless, the necessary condition (6.17) points to useful structural results characterizing degenerate cases of the OOG, where the gain is unbounded regardless of the choice of controller and anomaly detector. Furthermore, it also provides a lower bound to the OOG, and it has a tight connection to the heuristic metric $\mu(\omega)$ defined in (6.13). These two aspects are further explored in the following.
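A minimal CVXPY sketch of the SDP (6.18), built directly from the LMI in item 4 of Proposition 6.1, is given below. It is an illustrative reimplementation (not the author's reference code): it assumes an SDP-capable solver such as SCS, which ships with CVXPY, and replaces the conditions $P \succeq 0$, $\gamma > 0$ with a small numerical margin.

```python
import numpy as np
import cvxpy as cp

def output_to_output_gain_sq(A, B, Cc, Dc, Cr, Dr, margin=1e-7):
    """Squared output-to-output gain via the SDP (6.18), built from the LMI (6.16).

    Returns np.inf when the SDP is infeasible, i.e. when the OOG is unbounded.
    """
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)
    gamma = cp.Variable(nonneg=True)

    # Constant output terms [C D]^T [C D] for the detection and performance outputs.
    Nr = np.hstack([Cr, Dr]).T @ np.hstack([Cr, Dr])
    Nc = np.hstack([Cc, Dc]).T @ np.hstack([Cc, Dc])

    # Storage-function part of R(Sigma, P, gamma), linear in P.
    M = cp.bmat([[A.T @ P + P @ A, P @ B],
                 [B.T @ P, np.zeros((m, m))]])

    constraints = [P >> margin * np.eye(n),
                   M - gamma * Nr + Nc << 0]
    problem = cp.Problem(cp.Minimize(gamma), constraints)
    problem.solve()
    if problem.status in ("optimal", "optimal_inaccurate"):
        return gamma.value
    return np.inf
```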

6.4.1.1 Structural Results on the Output-to-Output Gain

The frequency-domain condition (6.17) is centered on the notion of unstable invariant zeros of a transfer function $G(s)$, that is, values $s \in \mathbb{C}$ (possibly at infinity) with $\operatorname{Re}(s) \ge 0$ such that there exists $a \in \mathbb{C}^{n_a}$ for which $G(s)a = 0$. The precise result is as follows.

Proposition 6.2 Consider the continuous-time system $\Sigma \triangleq \left(A, B, \begin{bmatrix} C_c\\ C_r\end{bmatrix}, \begin{bmatrix} D_c\\ D_r\end{bmatrix}\right)$, as described by (6.5), which is assumed to be controllable. Define $G_r(s) = C_r(sI - A)^{-1}B + D_r$ and $G_c(s) = C_c(sI - A)^{-1}B + D_c$. The OOG of the system, $\|\Sigma\|^2_{y_c \leftarrow y_r}$, is bounded if, and only if, all the unstable invariant zeros of $G_r(s)$ (including zeros at infinity and their multiplicity) are also zeros of $G_c(s)$.

A variation of this result first appeared in [15], for discrete-time systems. These conditions point to fundamental limitations in security under the presence of unstable zeros (including zeros at infinity), as observed in similar contexts in the literature [7, 10, 14, 16]. Such limitations are examined in the following example.


Example 6.6 Consider the scenarios in Examples 6.4 and 6.5. Computing the OOG for both scenarios, by solving the optimization problem (6.18), would yield an unbounded value for the gain. This result is in line with Proposition 6.2, as discussed below. In Example 6.4, the multiplicity of the zeros at infinity of G r (s) is higher than that of G c (s), which leads to the increasing high-frequency asymptote of μ(w), and results in an unbounded OOG. As for Example 6.5, the system G r (s) has one unstable zero that is not a zero of G c (s) (since the direct term Dc is non-zero). Since invariant zeros are not changed with output-feedback, one may conclude that the inherent zero structure of the open-loop system plays a crucial role in the sensitivity to stealthy attacks, regardless of the control and monitoring algorithms.

6.4.1.2 A Lower Bound on the Output-to-Output Gain

Recalling the frequency-domain inequality (6.17), and inspired by (6.18), one can define the following variable:

$$
\hat{\gamma} \triangleq \min_{\gamma > 0}\ \gamma \quad \text{s.t.}\ \gamma\, G_r(\bar{s})^\top G_r(s) - G_c(\bar{s})^\top G_c(s) \succeq 0, \quad \forall s \notin \lambda(A),\ \operatorname{Re}(s) \ge 0. \qquad (6.19)
$$

In the case where the frequency-domain inequality (6.17) is both necessary and sufficient, the constraints in (6.17) and the LMIs in statement 4 would be equivalent. Such an equivalence would transfer also to (6.18) and (6.19), which means that the OOG would be characterized as $\|\Sigma\|^2_{y_c \leftarrow y_r} = \hat{\gamma}$. However, as previously discussed, the frequency-domain inequality (6.17) is only necessary in general, and thus $\hat{\gamma}$ is generally only a lower bound of the OOG, i.e., $\|\Sigma\|^2_{y_c \leftarrow y_r} \ge \hat{\gamma}$.

Not surprisingly, the lower bound $\hat{\gamma}$ has a close connection to the singular values of the system. In fact, $\hat{\gamma}$ can be computed as

$$
\hat{\gamma} = \sup_{s \in \mathbb{S}} \gamma(s),
$$

where $\mathbb{S} \triangleq \{s \in \mathbb{C} : s \notin \lambda(A),\ \operatorname{Re}(s) \ge 0\}$ and $\gamma(s)$ is defined as

$$
\gamma(s) \triangleq \min_{\gamma \ge 0}\ \gamma \quad \text{s.t.}\ \gamma\, G_r(\bar{s})^\top G_r(s) - G_c(\bar{s})^\top G_c(s) \succeq 0. \qquad (6.20)
$$

Note that $\gamma(s)$ essentially corresponds to the maximum generalized eigenvalue of the matrix pencil $\left(G_c(\bar{s})^\top G_c(s),\ G_r(\bar{s})^\top G_r(s)\right)$, which may be interpreted as a generalized singular value of the system.

Finally, we highlight one interesting relation between the lower bound $\hat{\gamma}$ ($\gamma(s)$) and the heuristic $\bar{\mu}$ ($\mu(\omega)$). Consider the class of single-input systems, in which case $G_r(s)$ and $G_c(s)$ are complex-valued vector functions, denoted as $g_r(s) \in \mathbb{C}^{n_r}$ and $g_c(s) \in \mathbb{C}^{n_c}$, respectively. In such a case, the function $\gamma(s)$ can be rewritten as $\gamma(s) = \dfrac{\|g_c(s)\|_2^2}{\|g_r(s)\|_2^2}$. Observing that $\|g(s)\|_2^2 = \bar{\sigma}(s)^2 = \underline{\sigma}(s)^2$, one can re-write $\gamma(s)$ as $\gamma(s) = \dfrac{\bar{\sigma}_c(s)^2}{\underline{\sigma}_r(s)^2}$, from which it follows that $\hat{\gamma}$ is bounded from below by $\bar{\mu}$, since

$$
\hat{\gamma} = \sup_{s \in \mathbb{S}} \gamma(s) \ \ge \sup_{\substack{s \in \mathbb{S}\\ \operatorname{Re}(s) = 0}} \gamma(s) = \sup_{\omega \notin \lambda(A)} \mu(\omega) = \bar{\mu}.
$$

Hence, we conclude that for single-input systems (i.e., where the adversary corrupts only one resource, such that n a = 1), the heuristic metric μ(w) constructed in Sect. 6.3.3 can provide a lower bound to Σ2yc ←yr .
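The generalized-eigenvalue view of $\gamma(s)$ suggests a direct grid-based evaluation of the lower bound $\hat{\gamma}$: sample points $s$ in the closed right half-plane (typically the imaginary axis plus, if desired, a few real points) and take the supremum of $\gamma(s)$. The SciPy sketch below does exactly that; a non-finite generalized eigenvalue, which appears when $G_r(s)$ loses rank, is treated conservatively as an unbounded $\gamma(s)$. As with the earlier sketches, the closed-loop matrices are assumed to be supplied by the reader.

```python
import numpy as np
from scipy import linalg

def gamma_lower_bound(A, B, Cc, Dc, Cr, Dr, s_grid):
    """Evaluate gamma(s) of (6.20) on a grid of points in the closed right half-plane.

    gamma(s) is the largest generalized eigenvalue of the Hermitian pencil
    (G_c(s)^H G_c(s), G_r(s)^H G_r(s)); the supremum over the grid is a grid-based
    estimate of the lower bound gamma_hat in (6.19).
    """
    n = A.shape[0]
    values = []
    for s in s_grid:
        X = np.linalg.solve(s * np.eye(n) - A, B)
        Gc = Cc @ X + Dc
        Gr = Cr @ X + Dr
        Nc = Gc.conj().T @ Gc
        Nr = Gr.conj().T @ Gr
        eig = linalg.eigvals(Nc, Nr)              # generalized eigenvalues of (Nc, Nr)
        finite = eig[np.isfinite(eig)]
        if finite.size < eig.size:                # singular G_r(s)^H G_r(s): treat
            values.append(np.inf)                 # gamma(s) conservatively as unbounded
        else:
            values.append(float(finite.real.max()))
    return np.array(values)

# Sketch of use: sample the imaginary axis (and, optionally, a few real s > 0):
#   s_grid = 1j * np.logspace(-4, 2, 300)
#   gamma_hat_est = gamma_lower_bound(A, B, Cc, Dc, Cr, Dr, s_grid).max()
```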

6.4.2 Security Metrics-Based Design of Controller and Observer

The security metric described in the previous section, the output-to-output gain, bears strong similarities to the classical H∞ norm and the H− index. This points to the possibility of using the OOG to design controllers and anomaly detectors, as it happens with the classical metrics. In this section, we make a first exploration of a possible design for continuous-time systems, and discuss some of its properties.

Recall the closed-loop system under attack $\Sigma$ described by (6.5). Naturally, $\Sigma$ depends on the actual choices of the observer and feedback gain matrices $K$ and $L$, respectively. To highlight this dependency in this section, we use the notation $\Sigma(K, L)$. From a design perspective, we wish to choose the matrices $K$ and $L$ that minimize the worst-case impact of attacks that are not detected. In summary, we look into approaches for choosing $K$ and $L$ such that the OOG of the corresponding system is minimized. Formally, the optimal $K$ and $L$ can be characterized as the optimal solutions to the following (non-convex) optimization problem

$$
\min_{P \succeq 0,\ \gamma > 0,\ K,\ L}\ \gamma \quad \text{s.t.}\ R(\Sigma(K, L), P, \gamma) \preceq 0, \qquad (6.21)
$$

which follows directly from items 3 and 4 in Proposition 6.1. Due to the products $A^\top P$ and $C_c^\top C_c$ in $R(\Sigma(K, L), P, \gamma)$ (cf. statement 4 in Proposition 6.1), the constraint in (6.21) is a Bilinear Matrix Inequality, which renders the optimization problem non-convex. Applying the Schur complement lemma allows us to remove the quadratic term $C_c^\top C_c$, obtaining instead the following problem

$$
\min_{P \succeq 0,\ \beta > 0,\ K,\ L}\ \beta \quad \text{s.t.}\
\begin{bmatrix} A^\top P + P A & P B & C_c^\top\\ B^\top P & 0 & D_c^\top\\ C_c & D_c & -\beta I\end{bmatrix}
- \beta \begin{bmatrix} C_r^\top\\ D_r^\top\\ 0\end{bmatrix}\begin{bmatrix} C_r & D_r & 0\end{bmatrix} \preceq 0, \qquad (6.22)
$$

where the optimal OOG is given by $\|\Sigma(K, L)\|_{y_c \leftarrow y_r} = \beta = \sqrt{\gamma}$. Given that $C_r$ and $D_r$ do not depend on $K$ and $L$, the only cross terms between decision variables are now in $A^\top P + P A$ and $B^\top P$. Although the constraint is still a BMI, we can now search for a sub-optimal solution through one of the different approaches to handle BMIs. For instance, in the following we propose a simple algorithm using the alternating minimization approach, where the decision variable tuples $\{K, L\}$ and $\{P\}$ are solved in alternating steps, with the other tuple fixed during each step.

Algorithm 6.1 Integrated OOG-based design of observer and feedback gain matrices.
Input: The data matrices describing (6.5): the system matrices $A_p$, $B_p$, $C_m$, $C_p$, $D_p$, and the adversary matrices $\Gamma_u$, $\Gamma_y$, $E_p$.
Auxiliary variables: $A_k$, $B_k$, $C_{c,k}$, $D_{c,k}$
Output: $K^*$, $L^*$, $\beta^*$, $P^*$
1: Set $k = 0$, $P_{-1} = \infty$, $P_0 = 0$.
2: Find stabilizing $K_0$ and $L_0$.
3: while $\|P_k - P_{k-1}\| \ge \epsilon$ do
4: $A_k = A(K_k, L_k)$, $B_k = B(K_k, L_k)$, $C_{c,k} = C_c(K_k, L_k)$, and $D_{c,k} = D_c(K_k, L_k)$.
5: $(P_{k+1}, \cdot) = \arg\min_{P \succeq 0,\ \beta > 0}\ \beta$
$$
\text{s.t.}\ \begin{bmatrix} A_k^\top P + P A_k & P B_k & C_{c,k}^\top\\ B_k^\top P & 0 & D_{c,k}^\top\\ C_{c,k} & D_{c,k} & -\beta I\end{bmatrix}
- \beta \begin{bmatrix} C_r^\top\\ D_r^\top\\ 0\end{bmatrix}\begin{bmatrix} C_r & D_r & 0\end{bmatrix} \preceq 0.
$$
6: $(K_{k+1}, L_{k+1}, \beta_{k+1}) = \arg\min_{K,\ L,\ \beta > 0}\ \beta$
$$
\text{s.t.}\ \begin{bmatrix} A^\top P_{k+1} + P_{k+1} A & P_{k+1} B & C_c^\top\\ B^\top P_{k+1} & 0 & D_c^\top\\ C_c & D_c & -\beta I\end{bmatrix}
- \beta \begin{bmatrix} C_r^\top\\ D_r^\top\\ 0\end{bmatrix}\begin{bmatrix} C_r & D_r & 0\end{bmatrix} \preceq 0.
$$
{The matrices $A$, $B$, $C_c$, and $D_c$ depend on $K$ and $L$, as detailed in (6.6).}
7: $k = k + 1$.
8: end while
9: return $(K^*, L^*, \beta^*, P^*) = (K_k, L_k, \beta_k, P_k)$.
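A compact CVXPY sketch of step 5 of Algorithm 6.1 is shown below. It is a schematic reimplementation under the same assumptions as the earlier OOG computation (an SDP solver is available, and strict positivity is replaced by a small numerical margin), not the author's reference implementation.

```python
import numpy as np
import cvxpy as cp

def p_step(Ak, Bk, Cck, Dck, Cr, Dr, margin=1e-7):
    """Step 5 of Algorithm 6.1: with the gains (and hence Ak, Bk, Cck, Dck) fixed,
    minimize beta over P subject to the LMI of (6.22).  Returns (P, beta)."""
    n, m = Bk.shape
    pc = Cck.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    beta = cp.Variable(nonneg=True)

    lhs = cp.bmat([[Ak.T @ P + P @ Ak, P @ Bk,           Cck.T],
                   [Bk.T @ P,          np.zeros((m, m)), Dck.T],
                   [Cck,               Dck,              -beta * np.eye(pc)]])
    row_r = np.hstack([Cr, Dr, np.zeros((Cr.shape[0], pc))])

    constraints = [P >> margin * np.eye(n),
                   lhs - beta * (row_r.T @ row_r) << 0]
    problem = cp.Problem(cp.Minimize(beta), constraints)
    problem.solve()
    return P.value, beta.value

# The K,L-step (step 6 of Algorithm 6.1) mirrors this function: P is fixed to the value
# returned above, while A, B, Cc, Dc are rebuilt as affine CVXPY expressions of the
# variables K and L through (6.6), so the constraint is again an LMI.  Alternating the
# two steps until ||P_k - P_{k-1}|| falls below a tolerance gives the sub-optimal design.
```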

Next we discuss an example where Algorithm 6.1 is used to re-design the closed-loop system.

Example 6.7 Consider a scenario where the quadruple tank system is subject to a physical attack directly on the first tank (i.e., the first entry of the state vector $x_p(t)$).


Fig. 6.7 Original closed-loop system. The singular values σ c (w) and σ r (w) (left) and their ratio μ(w) (right) for a physical attack on water tank 1

This attack scenario can be modeled as (6.5) with $\Gamma_u = \Gamma_y = \emptyset$ and $E_p = e_1$, which leads to a single-input system with $n_a = 1$. Therefore, the results from Sect. 6.4.1.2 hold, and the heuristic $\mu(\omega)$ provides a lower bound to the OOG, $\|\Sigma\|_{y_c \leftarrow y_r}$. For illustration purposes, the singular values $\bar{\sigma}_c(\omega)$ and $\underline{\sigma}_r(\omega)$, and their ratio $\mu(\omega)$, are depicted in Fig. 6.7. The OOG of the system, computed through (6.18), is $\|\Sigma\|_{y_c \leftarrow y_r} = 36.4$. This value is in accordance with the peak of $\mu(\omega)$, that is, the OOG and $\bar{\mu} = \sup \mu(\omega)$ coincide, which indicates that the lower bound results from Sect. 6.4.1.2 are tight in this case.

Next, we leverage Algorithm 6.1 to design an improved observer-based controller and detector. Applying the algorithm to the closed-loop system under attack yields the following gain matrices

$$
K = \begin{bmatrix} -0.2945 & 0.3364\\ -0.4852 & -1.1088\\ 0.0586 & 0.1256\\ 0.0218 & -0.2697\end{bmatrix},
\quad
L = \begin{bmatrix} -0.5130 & 0.0280 & 0.1759 & -0.5569\\ 0.0183 & -0.3563 & -0.8769 & 0.1434\end{bmatrix}.
$$

The resulting singular values σ c (w) and σ r (w), and their ratio μ(w), are shown in Fig. 6.8. As before, the OOG of the system computed through (6.18) and sup μ(w) coincide and are equal to Σ yc ←yr = 31.6.


Fig. 6.8 Closed-loop system re-designed through Algorithm 6.1. The singular values σ c (w) and σ r (w) (left) and their ratio μ(w) (right) for a physical attack on water tank 1

Several remarks are in order. First, note that the shape of the ratio $\mu(\omega)$ flipped after the design, indicating that the worst-case inputs moved from low frequencies (before the re-design) to high frequencies (after the re-design). Second, we observe that the performance singular value $\bar{\sigma}_c(\omega)$ became larger at low frequencies after the re-design. Such an increase means that the H∞ norm of the system has increased after re-design, which may initially be counter-intuitive. A third observation points to the possible usefulness of increasing the system's H∞ norm: it has allowed the detectability singular value to increase substantially at low frequencies. This in turn implies that low-frequency attacks became much more detectable (by a factor of 10). Therefore, although the impact has slightly increased at low frequencies, the detectability has greatly increased as well. This effect is clearly visible by comparing the low-frequency asymptotes of $\mu(\omega)$ before and after the re-design. It is noteworthy that such trade-offs between impact and detectability, naturally imbued in the design procedure, are currently unavailable through existing techniques in robust control and fault detection. In particular, existing H−-based approaches would be unlikely to change low-frequency detectability, since the worst-case detection in the H− index sense occurs at very large frequencies.


6.5 Conclusions

In this chapter, we have considered the security of control systems, in scenarios where malicious adversaries aim at maximizing the impact on control performance, while simultaneously remaining undetected. The objectives of the chapter were to investigate possible metrics to analyze, and re-design, the closed-loop system from a security perspective.

Classical metrics from robust control and fault detection were revisited under the context of malicious attacks. The conclusion was that these metrics consider impact of attacks and detection separately, and are thus inadequate for security analysis. An initial attempt to merge these metrics was taken, by formulating a new metric consisting of the ratio between performance and detection singular values. Unfortunately, such an approach kept the limitations of the classical metrics, and did not fully capture some of the known potentially dangerous attacks that have an arbitrarily large impact and low detectability.

A recently proposed security metric, the output-to-output gain (OOG), was then introduced and characterized. Borrowing results from dissipative systems theory for linear systems with quadratic supply rates, the OOG was entirely characterized. This in turn led to results enabling its efficient computation through convex optimization problems. Necessary and sufficient conditions describing fundamental limitations of the OOG were also established. Additionally, a first step was taken to use the OOG as a basis for controller and detector design. The OOG-based design problem was cast as a non-convex optimization problem with BMI constraints. Using the heuristic of alternating minimization to address the BMI constraints, an algorithm was proposed that results in a sub-optimal closed-loop system minimizing the OOG.

The results and insights contained in the chapter were supported and illustrated through several numerical examples on a common closed-loop system, which were presented continuously throughout the chapter.

Acknowledgements This work is financed by the Swedish Foundation for Strategic Research, and by the Swedish Research Council under the grant 2018-04396.

References

1. Bai, C., Pasqualetti, F., Gupta, V.: Data-injection attacks in stochastic control systems: detectability and performance tradeoffs. Automatica 82, 251–260 (2017)
2. Cárdenas, A.A., Amin, S., Sastry, S.S.: Secure control: towards survivable cyber-physical systems. In: 1st International Workshop on Cyber-Physical Systems (2008)
3. Fawzi, H., Tabuada, P., Diggavi, S.: Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans. Autom. Control 59(6), 1454–1467 (2014)
4. Gannot, O.: Frequency criteria for exponential stability (2019).
5. Johansson, K.H., Horch, A., Wijk, O., Hansson, A.: Teaching multivariable control using the quadruple-tank process. In: 38th IEEE Conference on Decision and Control, pp. 807–812 (1999)
6. Liu, J., Wang, J.L., Yang, G.H.: An LMI approach to minimum sensitivity analysis with application to fault detection. Automatica 41(11), 1995–2004 (2005)
7. Mo, Y., Sinopoli, B.: Integrity attacks on cyber-physical systems. In: 1st International Conference on High Confidence Networked Systems, CPSWeek 2012 (2012)
8. Mo, Y., Sinopoli, B.: On the performance degradation of cyber-physical systems under stealthy integrity attacks. IEEE Trans. Autom. Control 61(9), 2618–2624 (2016)
9. Molinari, B.: Conditions for nonpositive solutions of the linear matrix inequality. IEEE Trans. Autom. Control 20(6), 804–806 (1975)
10. Pasqualetti, F., Dorfler, F., Bullo, F.: Attack detection and identification in cyber-physical systems. IEEE Trans. Autom. Control 58(11), 2715–2729 (2013)
11. Scherer, C., Weiland, S.: Linear matrix inequalities in control. In: Levine, W.S. (ed.) The Control Systems Handbook: Control System Advanced Methods. CRC Press, Boca Raton (2010)
12. Shames, I., Farokhi, F., Summers, T.H.: Security analysis of cyber-physical systems using H2 norm. IET Control Theory Appl. 11(11), 1749–1755 (2017)
13. Smith, R.S.: Covert misappropriation of networked control systems: presenting a feedback structure. IEEE Control Syst. 35(1), 82–92 (2015)
14. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: Revealing stealthy attacks in control systems. In: 50th Annual Allerton Conference on Communication, Control, and Computing (2012)
15. Teixeira, A., Sandberg, H., Johansson, K.H.: Strategic stealthy attacks: the output-to-output l2-gain. In: 54th IEEE Conference on Decision and Control, pp. 2582–2587 (2015)
16. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: A secure control framework for resource-limited adversaries. Automatica 51(1), 135–148 (2015)
17. Teixeira, A., Sou, K., Sandberg, H., Johansson, K.: Secure control systems: a quantitative risk management approach. IEEE Control Syst. Mag. 35(1), 24–45 (2015)
18. Trentelman, H.L.: When does the algebraic Riccati equation have a negative semi-definite solution? In: Blondel, V., Sontag, E.D., Vidyasagar, M., Willems, J.C. (eds.) Open Problems in Mathematical Systems and Control Theory. Communications and Control Engineering, pp. 229–237. Springer, London (1999)
19. Trentelman, H.L., Rapisarda, P.: Pick matrix conditions for sign-definite solutions of the algebraic Riccati equation. SIAM J. Control Optim. 40(3), 969–991 (2001)
20. Trentelman, H.L., Willems, J.C.C.: The dissipation inequality and the algebraic Riccati equation. In: Bittanti, S., Laub, A.J., Willems, J.C. (eds.) The Riccati Equation. Communications and Control Engineering Series, pp. 197–242. Springer, Berlin (1991)
21. Umsonst, D., Sandberg, H., Cardenas, A.A.: Security analysis of control system anomaly detectors. In: 2017 American Control Conference, pp. 5500–5506. IEEE (2017)
22. Wang, J.L., Yang, G.H., Liu, J.: An LMI approach to H− index and mixed H−/H∞ fault detection observer design. Automatica 43(9), 1656–1665 (2007)
23. Willems, J.C.: Dissipative dynamical systems part II: linear systems with quadratic supply rates. Arch. Ration. Mech. Anal. 45(5), 352–393 (1972)
24. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice-Hall Inc, Upper Saddle River (1996)

Chapter 7

The Secure State Estimation Problem

Yasser Shoukry and Paulo Tabuada

Abstract Sensors are the means by which cyber-physical systems perceive their own state as well as the state of their environment. Any attack on sensor measurements, or their transmission, has the potential to lead to catastrophic consequences since control actions would be based on an incorrect state estimate. In this chapter, we introduce the secure state estimation problem, discuss under which conditions it can be solved, and review existing algorithms.

7.1 Introduction

With billions of sensors already deployed to realize the vision of the Internet of Things, sensors have captured the attention of hackers as a viable attack vector, since most of these smart devices were not designed to withstand malicious attacks. Such sensor-related attacks can be launched by means of pure cyber-attacks (e.g., software viruses and attacks on communication channels) or via physical attacks that tamper with the sensor hardware or environment. An example of cyber-attacks is the infamous Stuxnet malware, which exploits vulnerabilities in the operating system running over SCADA (supervisory control and data acquisition) devices [10] with the final aim of corrupting sensor measurements collected by the SCADA system. Physical attacks, on the other hand, exploit a vulnerability in the physics of a sensor to manipulate its measurements and report a maliciously selected output. Examples of physical attacks on sensors include spoofing magnetic sensors used for anti-lock braking systems in automobiles [16], spoofing gyroscopes used to stabilize drones during flights [21], and spoofing LiDAR sensors used in autonomous driving [6].

Indeed, successful attacks on the information collected from sensors in a feedback control system can have more damaging consequences, compared to open-loop systems. This is due to the active property of control systems, where the data collected from sensors are used to decide the next actions to be taken. Motivated by these concerns, several researchers have pointed out the importance of studying these attacks from a control theory point of view [7], characterizing their stealth abilities [3, 20], and providing control-theoretic countermeasures to such attacks [14].

In this chapter, we study the problem of estimating the state of the underlying physical system from corrupted measurements. Such a "securely" estimated state can then be used by the controller. We call this problem the secure state estimation problem. In Sect. 7.2, we present the threat model under consideration and introduce the secure state estimation problem for linear dynamical systems. We discuss fundamental limits on the ability to estimate the state despite the existence of an attack in Sect. 7.3. While we present these fundamental limits for the case when the underlying model of the system is linear, we extend these results to nonlinear systems using an information-theoretic interpretation of these fundamental limits. In Sect. 7.4, we study different algorithms to solve the secure state estimation problem. Since the problem is NP-hard, we utilize the framework of Satisfiability Modulo Convex Programming to develop an efficient algorithm that harnesses the combinatorial complexity of the secure state estimation problem. Finally, in Sect. 7.5, we study more carefully the complexity of the secure state estimation problem and point to special cases in which the problem is solvable in polynomial time.

7.2 The Secure State Estimation Problem

Before we start our technical discussion, it is necessary to define the notation used in this chapter.

7.2.1 Notation

The symbols N, Q, R, and B denote the sets of natural, rational, real, and Boolean numbers, respectively. The symbols ∧ and ¬ denote the logical AND and logical NOT operators, respectively. The support of a vector x ∈ R^n, denoted by supp(x), is the set of indices of the non-zero elements of x. Similarly, the complement of the support of a vector x is denoted by supp(x)^c = {1, . . . , n} \ supp(x). If S is a set, |S| is its cardinality. We call a vector x ∈ R^n s-sparse if x has at most s non-zero elements, i.e., if |supp(x)| ≤ s.

Given p vectors of the same dimension x_1, . . . , x_p ∈ R^n, we call x = (x_1, x_2, . . . , x_p) ∈ R^{pn} a block vector and each component x_i a block. To emphasize that a vector x is a block vector, we write it as an element of R^{pn}, where the exponent pn is written as the juxtaposition of the number of blocks p and the size of the individual blocks n. With some abuse of notation, for the block vector x = (x_1, x_2, . . . , x_p) ∈ R^{pn}, we denote by supp(x) the indices of the blocks on which x is supported. In other words, an index i ∈ {1, . . . , p} belongs to the set supp(x) ⊆ {1, . . . , p} whenever the ith block x_i is non-zero, i.e.,
\[
i \in \mathrm{supp}(x) \;\Leftrightarrow\; x_i \neq 0, \qquad i \in \{1, \dots, p\}.
\]

Similarly, a block matrix M ∈ R^{pn×m} is defined as the vertical concatenation of the matrices M_1, . . . , M_p ∈ R^{n×m}. In this case, a block is the matrix M_i ∈ R^{n×m}, and hence the matrix M can be written as M = [M_1^T . . . M_p^T]^T. As for vectors, the row dimension of the block matrix M ∈ R^{pn×m} is written as the juxtaposition of the number of blocks p and the size of the individual blocks n.

For a vector x ∈ R^n, we denote by ‖x‖_2 the 2-norm of x and by ‖M‖_2 the induced 2-norm of a matrix M ∈ R^{m×n}. We also denote by M_i ∈ R^{1×n} the ith row of M. For a set Γ ⊆ {1, . . . , m}, we denote by M_Γ ∈ R^{|Γ|×n} the matrix obtained from M by removing all the rows except those indexed by Γ. Then, M_{Γ^c} ∈ R^{(m−|Γ|)×n} is the matrix obtained from M by removing the rows indexed by Γ, where Γ^c denotes the complement of Γ in {1, . . . , m}. For example, if m = 4 and Γ = {1, 2}, we have
\[
M_\Gamma = \begin{bmatrix} M_1 \\ M_2 \end{bmatrix}, \qquad
M_{\Gamma^c} = \begin{bmatrix} M_3 \\ M_4 \end{bmatrix}.
\]

By the same abuse of notation, for a block matrix M ∈ R^{pn×m}, we denote by M_Γ ∈ R^{|Γ|n×m} the block matrix obtained by removing all blocks except those indexed by Γ. We define M_{Γ^c} similarly.
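To make this notation concrete, the following short Python sketch (an illustration added here, not part of the original chapter) implements the support, its complement, the s-sparsity test, and the row restriction M_Γ. Indices are zero-based, unlike the one-based convention used in the text.

```python
import numpy as np

def supp(x, tol=0.0):
    """Indices of the non-zero entries of a vector x."""
    return {i for i, xi in enumerate(x) if abs(xi) > tol}

def supp_complement(x, tol=0.0):
    """Complement of the support within {0, ..., len(x)-1}."""
    return set(range(len(x))) - supp(x, tol)

def is_s_sparse(x, s, tol=0.0):
    """True if x has at most s non-zero entries."""
    return len(supp(x, tol)) <= s

def block_supp(x, p, n, tol=0.0):
    """Support of a block vector x in R^{pn}: indices of the non-zero blocks."""
    blocks = np.asarray(x).reshape(p, n)
    return {i for i in range(p) if np.linalg.norm(blocks[i]) > tol}

def restrict_rows(M, gamma):
    """M_Gamma: keep only the rows of M indexed by the set gamma."""
    return np.asarray(M)[sorted(gamma), :]

# Example: x = (x1, x2, x3) with x2 = 0 is supported on blocks {0, 2}.
x = np.array([1.0, 0.5, 0.0, 0.0, 2.0, -1.0])
print(block_supp(x, p=3, n=2))   # {0, 2}
```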

7.2.2 Threat Model and Attack Assumptions

In this chapter, we consider a discrete-time, linear, time-invariant system Σ of the form
\[
\Sigma:\quad
\begin{cases}
x^{(t+1)} = A x^{(t)} + B u^{(t)},\\
y^{(t)} = C x^{(t)},
\end{cases}
\tag{7.1}
\]
where x^{(t)} ∈ R^n is the system state at time t ∈ N, u^{(t)} ∈ R^m is the system input, and y^{(t)} ∈ R^p is the observed output. The matrices A, B, and C represent the system dynamics and have appropriate dimensions.

We are interested in mathematically capturing the behavior of malicious attackers who are capable of changing the values of sensor readings to arbitrary values without explicitly revealing their presence. Such an attack can be implemented as a physical attack on the sensors, by intercepting communications between the sensors and the controller, or by a software virus targeting the sensor software. Regardless of the mechanism by which the sensor measurements are manipulated, we can mathematically model the system under sensor attacks as
\[
\Sigma_a:\quad
\begin{cases}
x^{(t+1)} = A x^{(t)} + B u^{(t)},\\
y^{(t)} = C x^{(t)} + a^{(t)},
\end{cases}
\tag{7.2}
\]
where the attack vector a^{(t)} ∈ R^p models how an attacker changes the sensor measurements at time t: if sensor i ∈ {1, . . . , p} is attacked, then the ith element of a^{(t)} is non-zero; otherwise the ith sensor is not attacked.

While this model is general enough to capture any attack signal, we impose only one limitation on the power of the malicious agent, namely, that it can attack at most s ≤ p sensors. This in turn implies that the attack vector a^{(t)} is s-sparse. Apart from being s-sparse, we make no other assumptions on the vector a^{(t)}. In particular, we do not assume bounds, statistical properties, or restrictions on the time evolution of its elements. The actual number of attacked sensors is not assumed to be known either; we only assume knowledge of the upper bound s. The attacker therefore has access to a subset of sensors of cardinality at most s, and whether a specific sensor in this subset is attacked or not may change with time. As shown in the next section, the maximum number of attacked sensors that can be detected is a characteristic of the system and depends on the pair (A, C).
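As a rough illustration of the attack model (7.2), the sketch below simulates a linear system whose output is corrupted on a fixed subset of at most s sensors. The matrices, the attacked subset, and the attack waveform are hypothetical choices made here for illustration only; the model places no restriction on how the attacked entries evolve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: n = 3 states, m = 1 input, p = 5 sensors.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
B = np.array([[1.0], [0.0], [0.5]])
C = rng.standard_normal((5, 3))

s = 2                     # assumed upper bound on the number of attacked sensors
attacked = {1, 4}         # attacker controls a subset of at most s sensors

def attacked_output(x, t):
    """y(t) = C x(t) + a(t), with a(t) s-sparse but otherwise arbitrary."""
    a = np.zeros(C.shape[0])
    for i in attacked:
        a[i] = 10.0 * np.sin(0.3 * t) + rng.standard_normal()  # arbitrary signal
    return C @ x + a

x = np.zeros(3)
for t in range(5):
    u = np.array([1.0])
    y = attacked_output(x, t)          # what the operator receives
    x = A @ x + B @ u
```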

7.2.3 Attack Detection and Secure State Estimation Problems

We are interested in two problems known as (i) attack detection and (ii) secure state estimation. Informally, the attack detection problem asks whether we can process the information collected from all sensors to detect that a subset of these sensors is under attack, without necessarily knowing which sensors are under attack. The secure state estimation problem, on the other hand, asks for the state x^{(t)} of the system Σ_a to be reconstructed despite the attack.

To formulate these two problems, we start by collecting a set of τ measurements (τ ∈ N), where τ ≤ n is selected to guarantee that the system observability matrix O, as defined below, has full rank. We can then arrange the outputs from the ith sensor at different time instants as
\[
\tilde{Y}_i^{(t)} = O_i x^{(t-\tau+1)} + E_i^{(t)} + F_i U^{(t)},
\]
where
\[
\tilde{Y}_i^{(t)} = \begin{bmatrix} y_i^{(t-\tau+1)} \\ y_i^{(t-\tau+2)} \\ \vdots \\ y_i^{(t)} \end{bmatrix},\quad
E_i^{(t)} = \begin{bmatrix} a_i^{(t-\tau+1)} \\ a_i^{(t-\tau+2)} \\ \vdots \\ a_i^{(t)} \end{bmatrix},\quad
U^{(t)} = \begin{bmatrix} u^{(t-\tau+1)} \\ u^{(t-\tau+2)} \\ \vdots \\ u^{(t)} \end{bmatrix},
\]
\[
F_i = \begin{bmatrix}
0 & 0 & \dots & 0 & 0\\
C_i B & 0 & \dots & 0 & 0\\
\vdots & & \ddots & & \vdots\\
C_i A^{\tau-2} B & C_i A^{\tau-3} B & \dots & C_i B & 0
\end{bmatrix},\qquad
O_i = \begin{bmatrix} C_i \\ C_i A \\ \vdots \\ C_i A^{\tau-1} \end{bmatrix}.
\]
Since all the inputs in U^{(t)} are known, we can further simplify the output equation as
\[
Y_i^{(t)} = O_i x^{(t-\tau+1)} + E_i^{(t)}, \tag{7.3}
\]
where Y_i^{(t)} = \tilde{Y}_i^{(t)} − F_i U^{(t)}. For simplicity of notation, we drop the time superscripts and rewrite (7.3) as
\[
Y_i =
\begin{cases}
O_i x & \text{if the } i\text{th sensor is attack-free},\\
O_i x + E_i & \text{if the } i\text{th sensor is under attack}.
\end{cases}
\]
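The following sketch (an illustration added here) shows one way to build the per-sensor matrices O_i and F_i and the input-compensated measurements Y_i from data; the function and variable names are hypothetical, and C_i is passed as a one-dimensional row of C.

```python
import numpy as np

def observability_block(A, C_i, tau):
    """O_i = [C_i; C_i A; ...; C_i A^{tau-1}] for a single sensor row C_i (1-D array)."""
    rows, Ak = [], np.eye(A.shape[0])
    for _ in range(tau):
        rows.append(C_i @ Ak)
        Ak = A @ Ak
    return np.vstack(rows)

def input_block(A, B, C_i, tau):
    """F_i: lower-triangular Toeplitz block mapping the stacked inputs U to the outputs."""
    n, m = A.shape[0], B.shape[1]
    F = np.zeros((tau, tau * m))
    for r in range(1, tau):                      # first row is all zeros
        Ak = np.eye(n)
        for c in range(r - 1, -1, -1):           # entry C_i A^{r-1-c} B in column block c
            F[r, c * m:(c + 1) * m] = C_i @ Ak @ B
            Ak = A @ Ak
    return F

def clean_output(Ytilde_i, F_i, U):
    """Y_i = Ytilde_i - F_i U: removes the effect of the known inputs."""
    return Ytilde_i - F_i @ U
```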

We also define the block vectors Y ∈ R^{pτ} and E ∈ R^{pτ} and the block matrix O ∈ R^{pτ×n} as
\[
Y = \begin{bmatrix} Y_1 \\ \vdots \\ Y_p \end{bmatrix}, \qquad
E = \begin{bmatrix} E_1 \\ \vdots \\ E_p \end{bmatrix}, \qquad
O = \begin{bmatrix} O_1 \\ \vdots \\ O_p \end{bmatrix}, \tag{7.4}
\]
to denote, respectively, the vector of outputs, attacks, and observability matrices related to all sensors over the same time window of length τ. Using the notation above, we can define the attack detection problem as follows.

Problem 7.1 (Attack Detection Problem) For the linear dynamical system under attack Σ_a defined in (7.2), construct the attack detection flag d_attack defined by
\[
d_{\text{attack}} =
\begin{cases}
0 & \text{if } E_i = 0 \text{ for all } i \in \{1, \dots, p\},\\
1 & \text{otherwise}.
\end{cases}
\tag{7.5}
\]

In other words, Problem 7.1 asks for an attack detection flag dattack that is set to zero if and only if all sensors are attack-free. Similarly, we can define the secure state estimation problem as follows.

Problem 7.2 (Secure State Estimation Problem) For the linear dynamical system under attack Σ_a defined in (7.2), correctly estimate the state of the system despite attacks on at most s sensors.

7.3 The s-Sparse Observability Condition

In the absence of attacks, observability plays a key role in determining whether the state of the system can be inferred from the sensor measurements. In this section, we extend the classical notion of observability to reason about the sufficient and necessary conditions that govern the ability to reconstruct the state of the system from maliciously corrupted sensor measurements.

7.3.1 Sufficient and Necessary Conditions for Linear Time-Invariant Systems

We start by recalling the classical definition of observability.

Definition 7.1 (Observable System) The linear control system Σ, defined by (7.1), is said to be observable if the observability matrix O has a trivial kernel.

In the presence of an attack on a subset of s sensors, a natural question is how the observability of the system changes if one decides to ignore the measurements collected by s of the sensors. Since we do not know a priori which sensors are under attack, one needs to check the observability of the system for all different combinations of p − s sensors. This is captured by the following observability notion [15].

Definition 7.2 (s-Sparse Observable System) The linear control system under attack Σ_a, defined by (7.2), is said to be s-sparse observable if for every set Γ ⊆ {1, . . . , p} with |Γ| = p − s, the system Σ_Γ defined as
\[
\Sigma_\Gamma:\quad
\begin{cases}
x^{(t+1)} = A x^{(t)} + B u^{(t)},\\
y^{(t)} = C_\Gamma x^{(t)},
\end{cases}
\tag{7.6}
\]
is observable.

Informally, a system is s-sparse observable if it remains observable after eliminating any choice of s sensors. Using this observability notion, we can formalize the conditions under which the attacks can be detected and the state of the system can be estimated in spite of sensor attacks as follows [15].

Theorem 7.1 Problem 7.1 admits a solution if and only if the dynamical system under s-sparse sensor attack Σ_a, defined by (7.2), is s-sparse observable.

Theorem 7.2 Problem 7.2 admits a solution if and only if the dynamical system under s-sparse sensor attack Σ_a, defined by (7.2), is 2s-sparse observable.

Remark 7.1 As stated in Theorem 7.2, the state of a dynamical system under an s-sparse sensor attack Σ_a, defined by (7.2), can be uniquely determined when the system is 2s-sparse observable. This condition seems expensive to check because of its combinatorial nature: we have to check observability of all possible pairs (A, C_Γ). Yet, the 2s-sparse observability condition clearly illustrates a fundamental limitation for secure state estimation: it is impossible to correctly reconstruct the state whenever a number of sensors larger than or equal to p/2 is attacked, since there exist different states producing the same observations under the effect of attacks. Indeed, suppose that we have an even number of sensors p and s = p/2 sensors are attacked. Then, Theorem 7.2 requires the system to still be observable after removing 2s = p rows from the map C. However, this is impossible, since no sensor rows remain.

Remark 7.2 Based on Theorem 7.2, the state of the system can be uniquely identified despite the existence of attacks if and only if the system is 2s-sparse observable, s being an upper bound on the number of attacked sensors. If such a bound s is not known a priori, we can use Theorem 7.2 to determine it by removing all combinations of 2s sensors for s = 1, . . . , p/2 and checking the observability of the resulting system, until we find the maximum possible number of sensors that can be removed while still being able to reconstruct the system state. We observe that such a bound is an intrinsic characteristic of the system, since it only depends on the pair (A, C).
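The combinatorial check described in Remark 7.2 can be prototyped directly. The sketch below (illustrative code, not from the chapter) tests s-sparse observability by removing every choice of s sensors, and searches for the largest s for which the pair (A, C) is 2s-sparse observable; it scales exponentially in p and is only meant for small examples.

```python
import numpy as np
from itertools import combinations

def observability_matrix(A, C):
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(n):
        blocks.append(C @ Ak)
        Ak = A @ Ak
    return np.vstack(blocks)

def is_observable(A, C, tol=1e-9):
    return np.linalg.matrix_rank(observability_matrix(A, C), tol) == A.shape[0]

def is_s_sparse_observable(A, C, s):
    """Definition 7.2 checked by brute force: remove every choice of s sensors."""
    p = C.shape[0]
    for removed in combinations(range(p), s):
        keep = [i for i in range(p) if i not in removed]
        if not is_observable(A, C[keep, :]):
            return False
    return True

def max_correctable_attacks(A, C):
    """Largest s such that (A, C) is 2s-sparse observable (cf. Remark 7.2)."""
    p, s = C.shape[0], 0
    while 2 * (s + 1) <= p and is_s_sparse_observable(A, C, 2 * (s + 1)):
        s += 1
    return s
```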

7.3.2 Extension to Nonlinear Systems: A Coding-Theoretic Interpretation

In this subsection, we argue that the sufficient and necessary conditions stated in Theorem 7.2 extend naturally to nonlinear systems. To that end, we revisit the 2s-sparse observability condition and give a coding-theoretic interpretation of it. We first describe this interpretation for a linear system and then discuss how it can be generalized to nonlinear systems.

Consider the linear dynamical system under an s-sparse sensor attack Σ_a, defined in (7.2). Assume that the system Σ_a is θ-sparse observable for some θ ∈ {0, 1, . . . , p − 1}. We would like to construct a coding-theoretic argument that relates θ to the maximum number of sensors under attack s. If the system's initial state is x^{(0)} ∈ R^n, then clearly, in the absence of sensor attacks, by observing the outputs of any p − θ sensors for n time instants (t = 0, 1, . . . , n − 1), we can exactly recover x^{(0)}, and hence exactly estimate the state of the plant.

A coding-theoretic view of this can be given as follows. Consider the outputs of sensor d ∈ {1, 2, . . . , p} over n time instants as a symbol Y_d ∈ R^n. Thus, in the (symbol) observation vector Y = (Y_1, Y_2, . . . , Y_p), due to θ-sparse observability, any p − θ symbols are sufficient (in the absence of attacks) to recover the initial state x^{(0)}. Now, let us consider the case of an s-sparse adversary, which can arbitrarily corrupt any s sensors. In the coding-theoretic view, this corresponds to arbitrarily corrupting any s (out of p) symbols in the observation vector Y. Intuitively, based on the relationship between error-correcting codes and the Hamming distance between codewords in classical coding theory [5], one can expect the recovery of the initial state despite such corruptions to depend on the (symbol) Hamming distance between the observation vectors corresponding to two distinct initial states (say x_1^{(0)} and x_2^{(0)} with x_1^{(0)} ≠ x_2^{(0)}). In this context, the following lemma relates θ-sparse observability to the minimum Hamming distance between observation vectors in the absence of attacks [12].

Lemma 7.1 For a θ-sparse observable system, the minimum (symbol) Hamming distance between observation vectors corresponding to distinct initial states is θ + 1.

Since, for a θ-sparse observable system, the minimum Hamming distance between the observation vectors corresponding to distinct initial states is θ + 1, it follows from the fundamentals of error detection and error correction codes that we can
(i) detect up to s ≤ θ sensor corruptions, and
(ii) correct up to s < (θ + 1)/2 sensor corruptions.
Note that (i) above is equivalent to the condition in Theorem 7.1, which states that we can detect the existence of an s-sparse attack if and only if the system is s-sparse observable. Similarly, (ii) above is equivalent to 2s ≤ θ, which is exactly the sparse observability condition for secure state estimation in Theorem 7.2. It should be noted that an s-adversary can attack any set of s (out of p) sensors, and the condition s < (θ + 1)/2 is both necessary and sufficient for exact state estimation despite such attacks. When s ≥ (θ + 1)/2, it is straightforward to construct a scenario in which the observation vector (after attacks) can be explained by multiple initial states, and hence exact state estimation is not possible. For (noiseless) nonlinear systems, by analogously defining s-sparse observability, the same coding-theoretic interpretation holds. This leads to the necessary and sufficient conditions for attack detection and secure state estimation in any noiseless nonlinear dynamical system with sensor attacks [12].

7.4 Algorithms for Attack Detection and Secure State Estimation

In the previous section, we discussed sufficient and necessary conditions under which Problems 7.1 and 7.2 can be solved. In this section, we discuss how to design algorithms that use the insights of Theorems 7.1 and 7.2 to solve the aforementioned problems.

Algorithm 7.1 Attack-Detector
Input: O, Y
Output: d_attack
1: x̂ := argmin_{x ∈ R^n} ‖Y − O x‖₂²;
2: residue := ‖Y − O x̂‖₂²;
3: if residue == 0 then
4:   d_attack = 0;
5: else
6:   d_attack = 1;
7: end if
8: return d_attack;

7.4.1 Attack Detection Algorithm

We start by presenting an algorithm to solve Problem 7.1. This algorithm forms the nucleus of all the other algorithms in this section. The attack detector builds on the observation that the residue min_{x ∈ R^n} ‖Y − Ox‖₂² is equal to zero if and only if all the sensor measurements are attack-free [12, 17]. This is a direct consequence of the s-sparse observability condition. This, in turn, suggests the following attack detector:
\[
d_{\text{attack}} =
\begin{cases}
0 & \text{if } \min_{x \in \mathbb{R}^n} \| Y - O x \|_2^2 = 0,\\
1 & \text{otherwise},
\end{cases}
\]
which is summarized in Algorithm 7.1; its correctness is assessed by the following result [12, 17].

Theorem 7.3 Let the linear dynamical system under an s-sparse attack Σ_a, defined by (7.2), be s-sparse observable. Then, Algorithm 7.1 provides a solution to Problem 7.1.
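A minimal numerical sketch of Algorithm 7.1 is shown below (added for illustration). In floating point, and especially in the presence of noise, the exact zero test has to be replaced by a tolerance or a statistical test; the tolerance used here is an assumption, not part of the chapter.

```python
import numpy as np

def attack_detector(O, Y, tol=1e-8):
    """Algorithm 7.1 sketch: flag an attack iff the least-squares residue is non-zero."""
    x_hat, *_ = np.linalg.lstsq(O, Y, rcond=None)   # argmin ||Y - O x||_2^2
    residue = float(np.linalg.norm(Y - O @ x_hat) ** 2)
    d_attack = 0 if residue <= tol else 1
    return d_attack, x_hat, residue
```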

7.4.2 Secure State Estimator: Brute Force Search

Before we outline the solution to the secure state estimation problem, it is crucial to mention that the secure state estimation problem is NP-hard [11]. While in Sect. 7.5 we discuss some special cases for which the problem can be solved in polynomial time, Theorem 7.8 in Sect. 7.5 establishes that the secure state estimation problem is inherently combinatorial in nature. A natural solution therefore follows directly from the attack detector presented in Algorithm 7.1.

First, recall that whenever the system is s-sparse observable, Algorithm 7.1 can detect the presence (or absence) of an attack when it is used to process the information collected from all p sensors. This, in turn, means that whenever the system is 2s-sparse observable, Algorithm 7.1 can detect the presence (or absence) of an attack when it is used to process the information collected from any p − s sensors. Using this simple observation, we can solve Problem 7.2 by means of brute force search as follows. The first step is to exhaustively enumerate all $\binom{p}{p-s}$ sensor subsets of size p − s. Since the attacker is restricted to attacking only s sensors, we conclude that at least one of these $\binom{p}{p-s}$ sensor subsets is guaranteed to be attack-free. Finding an attack-free subset of sensors can then be done by applying the attack detector (Algorithm 7.1) to each of the $\binom{p}{p-s}$ sensor subsets until one is declared attack-free. This process is summarized in Algorithm 7.2, whose correctness is captured by the following result [18].

Theorem 7.4 Let the linear dynamical system under an s-sparse attack Σ_a, defined by (7.2), be 2s-sparse observable. Then, Algorithm 7.2 is a solution to Problem 7.2.

Algorithm 7.2 Secure-State-Estimator
Input: O, Y, s
Output: x̂, I
1: Enumerate all sets I ∈ S of sensor indices such that S = {I | I ⊂ {1, 2, . . . , p}, |I| = p − s};
2: for all subsets of sensors I ∈ S do
3:   d_attack,I := Attack-Detector(O_I, Y_I);
4:   if d_attack,I == 0 then
5:     x̂ := argmin_{x̂ ∈ R^n} ‖Y_I − O_I x̂‖₂²;
6:     return (x̂, I);
7:   end if
8: end for
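A direct Python sketch of this brute-force search is given below (illustrative, not the chapter's implementation). The per-sensor blocks O_i and Y_i are assumed to be available, for example from the construction sketched after (7.3).

```python
import numpy as np
from itertools import combinations

def secure_state_estimate_brute_force(O_blocks, Y_blocks, s, tol=1e-8):
    """Algorithm 7.2 sketch: try every subset of p - s sensors until one is attack-free."""
    p = len(O_blocks)
    for keep in combinations(range(p), p - s):
        O_I = np.vstack([O_blocks[i] for i in keep])
        Y_I = np.concatenate([Y_blocks[i] for i in keep])
        x_hat, *_ = np.linalg.lstsq(O_I, Y_I, rcond=None)
        if np.linalg.norm(Y_I - O_I @ x_hat) ** 2 <= tol:
            return x_hat, set(keep)       # declared attack-free subset
    return None, None                     # unreachable under 2s-sparse observability
```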

7.4.3 Secure State Estimator: Satisfiability Modulo Convex Programming

While a solution of the secure state estimation problem has to be combinatorial in nature (the problem being NP-hard, Theorem 7.8 in Sect. 7.5), we argue in this subsection that one can harness this combinatorial growth. In other words, instead of searching over all subsets of p − s sensors, one can drastically reduce the search space while guaranteeing the soundness and correctness of the algorithm. This subsection discusses a framework named "Satisfiability Modulo Convex Programming" (or SMC for short) [19], which is designed to achieve this goal. The SMC framework combines ideas from Boolean satisfiability solvers (which have been successful in tackling large combinatorial search problems for the design and verification of hardware and software systems) and convex programming (which is a powerful solution engine for various problems in control, communications, signal processing, data analysis and modeling, and machine learning) toward a scalable framework for reasoning about the combination of discrete and continuous variables.

To encode the secure state estimation problem as an SMC problem, we start by defining a binary indicator variable b_i ∈ B such that b_i = 1 when the ith sensor is under attack and b_i = 0 otherwise. Our goal is then to find an estimate x̂ of the system state and an assignment of the binary indicator variables satisfying the following set of constraints:
\[
\phi := \bigwedge_{i=1}^{p} \Bigl( \lnot b_i \Rightarrow \bigl\| Y_i - O_i \hat{x} \bigr\|_2 = 0 \Bigr)
\;\wedge\; \Bigl( \sum_{i=1}^{p} b_i \le s \Bigr). \tag{7.7}
\]
In other words, the formula φ asks for an estimate of the state x̂ that matches the measurements obtained from the set of attack-free sensors. The second conjunct enforces the cardinality constraint on the number of attacked sensors: by imposing it, the formula in (7.7) insists on marking at most s sensors as attacked, and hence at least p − s sensors as attack-free (recall that s is the maximum number of sensors that can be under attack). The formula φ in (7.7) is an example of a class of formulas known as monotone satisfiability modulo convex programming formulas, which enjoy mathematical properties that SMC solvers can exploit [19]. In the remainder of this subsection, we discuss how to design an SMC-based algorithm tailored to the secure state estimation problem. For a comprehensive treatment of solving general satisfiability modulo convex programming formulas, the reader is directed to [19].

Under the 2s-sparse observability condition (Theorem 7.2), it is straightforward to establish the following result [17].

Theorem 7.5 The formula φ in (7.7) admits a unique solution if and only if the dynamical system under attack Σ_a, defined by (7.2), is 2s-sparse observable.

Since the actual state of the system satisfies φ, it follows from the uniqueness of solutions in the previous theorem that a solution to φ is always a solution to Problem 7.2. Hence, we focus in this subsection on efficient algorithms that can find a satisfying solution of the formula φ.

7.4.3.1 SMC Overall Architecture

To decide whether a combination of Boolean and convex constraints is satisfiable, the SMC-based detection algorithm uses the lazy Satisfiability Modulo Theories (SMT) paradigm [4]. The SMC-based decision procedure combines a Boolean satisfiability solver (SAT-Solve) and a theory solver (T-Solve) for convex constraints on real numbers. The Boolean satisfiability solver efficiently reasons about combinations of Boolean and pseudo-Boolean constraints, using the Davis–Putnam–Logemann–Loveland (DPLL) algorithm [13], to suggest possible assignments for the convex constraints. The theory solver checks the consistency of the given assignments and provides the reason for a conflict, a certificate or counterexample, whenever inconsistencies are found. Each certificate results in learning new constraints, which are used by the Boolean satisfiability solver to prune the search space. The complex detection and mitigation decision task is thus broken into two simpler tasks, respectively, over the Boolean and convex domains [17]. We denote the approach as lazy because it checks and learns about the consistency of the convex constraints only when necessary, as detailed below.

In particular, as illustrated in Algorithm 7.3, we start by mapping each convex constraint to an auxiliary Boolean variable c_i to obtain the following (pseudo-)Boolean satisfiability problem:
\[
\phi_B := \bigwedge_{i \in \{1,\dots,p\}} \bigl( \lnot b_i \Rightarrow c_i \bigr)
\;\wedge\; \Bigl( \sum_{i \in \{1,\dots,p\}} b_i \le s \Bigr),
\]
where c_i = 1 if ‖Y_i − O_i x̂‖₂ = 0 is satisfied, and zero otherwise. By relying only on the Boolean structure of the problem, SAT-Solve returns an assignment of the variables b_i and c_i (for i = 1, . . . , p), thus hypothesizing which sensors are attack-free, and hence which convex constraints should be jointly satisfied.

This Boolean assignment is then used by T-Solve to determine whether there exists a state estimate x̂ ∈ R^n that satisfies all the convex constraints related to the hypothesized attack-free sensors, i.e., ‖Y_i − O_i x̂‖₂ = 0 for i ∈ supp(b)^c. If such an x̂ is found, the SMC-based algorithm terminates with SAT and provides the solution (x̂, b). Otherwise, an UNSAT certificate φ_cert is generated in terms of new Boolean constraints, explaining which sensor measurements are conflicting and may be under attack. A very naïve certificate can always be provided in the form of
\[
\phi_{\text{UNSAT-cert}} = \sum_{i \in \mathrm{supp}(b)^{c}} b_i \ge 1,
\]
which encodes the fact that at least one of the sensors in the set supp(b)^c (i.e., for which b_i = 0) is actually under attack. The augmented Boolean problem consisting of the original formula φ_B and the generated certificate φ_UNSAT-cert is then fed back to SAT-Solve to produce a new assignment. The sequence of Boolean satisfiability queries is repeated until T-Solve terminates with SAT. By the 2s-sparse observability condition (Theorem 7.5), there always exists a unique solution to φ, and hence Algorithm 7.3 always terminates. However, to help the Boolean satisfiability solver converge quickly toward the correct assignment, a central problem in lazy SMT solving is to generate succinct explanations whenever conjunctions of convex constraints are infeasible, ideally highlighting the minimum set of conflicting assignments. The rest of this section therefore focuses on the implementation of the two main tasks of T-Solve, namely, (i) checking the satisfiability of a given assignment (T-Solve.Check) and (ii) generating succinct UNSAT certificates (T-Solve.Certificate).

Algorithm 7.3 SMC-Based Secure State Estimator
Input: A, B, C, Y, U, s
Output: η = (x̂, b)
1: status := UNSAT;
2: φ_B := ⋀_{i∈{1,…,p}} (¬b_i ⇒ c_i) ∧ (∑_{i∈{1,…,p}} b_i ≤ s);
3: while status == UNSAT do
4:   (b, c) := SAT-Solve(φ_B);
5:   (status, x̂) := T-Solve.Check(supp(b)^c);
6:   if status == UNSAT then
7:     φ_cert := T-Solve.Certificate(supp(b)^c, x̂);
8:     φ_B := φ_B ∧ φ_cert;
9:   end if
10: end while
11: return η = (x̂, b);
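To convey the structure of this lazy loop, the toy sketch below (added for illustration) replaces the SAT solver by a plain enumerator of candidate attack-free sets and uses learned certificates, i.e., sets of which at least one sensor must be attacked, to skip later candidates. With the naive certificate the pruning only excludes repeated candidates, which is exactly why the compact certificates discussed next matter.

```python
import numpy as np
from itertools import combinations

def check(O_blocks, Y_blocks, sensors, tol=1e-8):
    """T-Solve.Check sketch: least-squares fit restricted to `sensors`."""
    O_I = np.vstack([O_blocks[i] for i in sensors])
    Y_I = np.concatenate([Y_blocks[i] for i in sensors])
    x_hat, *_ = np.linalg.lstsq(O_I, Y_I, rcond=None)
    return np.linalg.norm(Y_I - O_I @ x_hat) ** 2 <= tol, x_hat

def smc_style_estimator(O_blocks, Y_blocks, s, tol=1e-8):
    """Toy stand-in for Algorithm 7.3: enumerate candidates, learn conflict sets."""
    p = len(O_blocks)
    certificates = []                        # sets that cannot all be attack-free
    for candidate in combinations(range(p), p - s):
        attack_free = set(candidate)
        if any(cert <= attack_free for cert in certificates):
            continue                         # pruned by a learned certificate
        sat, x_hat = check(O_blocks, Y_blocks, sorted(attack_free), tol)
        if sat:
            return x_hat, attack_free
        certificates.append(attack_free)     # naive certificate: the whole candidate
    return None, None
```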

7.4.3.2 Satisfiability Checking

Given an assignment of the Boolean variables b computed by the Boolean satisfiability solver, with |supp(b)| ≤ s, we know from the analysis of the attack detection algorithm (Algorithm 7.1) that the condition
\[
\min_{\hat{x} \in \mathbb{R}^n} \bigl\| Y_{\mathrm{supp}(b)^{c}} - O_{\mathrm{supp}(b)^{c}}\, \hat{x} \bigr\|_2^2 = 0 \tag{7.8}
\]
holds if and only if x̂ = x and all the sensors indexed by the set supp(b)^c are attack-free. This is a direct consequence of the 2s-sparse observability property. The preceding unconstrained least-squares optimization problem can be solved very efficiently, leading to Algorithm 7.4.

Algorithm 7.4 T-Solve.Check(I)
1: Solve: x̂ := argmin_{x̂ ∈ R^n} ‖Y_I − O_I x̂‖₂²;
2: if ‖Y_I − O_I x̂‖₂² = 0 then
3:   status = SAT;
4: else
5:   status = UNSAT;
6: end if
7: return (status, x̂);

7.4.3.3 Generating Compact UNSAT Certificates

Whenever T-Solve.Check returns UNSAT, a naïve certificate can always be generated as mentioned above:

\[
\phi_{\text{triv-cert}} = \sum_{i \in \mathrm{supp}(b)^{c}} b_i \ge 1, \tag{7.9}
\]
indicating that at least one of the sensors that was initially assumed to be attack-free (i.e., for which b_i = 0) is actually under attack; one of these b_i variables should then be set to one in the next assignment of the Boolean satisfiability solver. However, such a trivial certificate φ_triv-cert does not provide much information, since it only excludes the current assignment from the search space. In other words, the generated UNSAT certificates heavily affect the overall execution time of Algorithm 7.3: the smaller the certificate, the more information is learned and the faster the Boolean satisfiability solver converges to the correct assignment. For example, a certificate involving a single variable b_i would identify exactly one attacked sensor at each step, a substantial improvement with respect to the exponential worst-case complexity of the plain Boolean satisfiability problem, which is NP-complete. This intuition is illustrated in Fig. 7.1, where the effect of generating two certificates of different sizes is shown.

Fig. 7.1 Pictorial example illustrating the effect of generating smaller conflicting certificates (© [2017] IEEE. Reprinted, with permission, from [17])

Hence, we focus on designing algorithms that lead to more compact certificates, and thus reduce the execution time of the SMC-based algorithm, by exploiting the specific structure of the secure state estimation problem. To do so, we first observe that the measurements of each sensor, Y_i = O_i x, define an affine subspace H_i ⊆ R^n as
\[
H_i = \{ x \in \mathbb{R}^n \mid Y_i - O_i x = 0 \}.
\]
The dimension of H_i is given by the dimension of the null space of the matrix O_i, i.e., dim(H_i) = dim(ker O_i). Satisfiability checking in Algorithm 7.4 can then be reformulated as follows. Let r_i be the residual of the state estimate x̂ with respect to the affine subspace H_i, defined as r_i(x̂) = ‖Y_i − O_i x̂‖₂². The optimization problem in Algorithm 7.4 is equivalent to searching for a point x̂ that minimizes the sum of the individual residuals with respect to all the affine subspaces H_i for i ∈ I, i.e.,
\[
\min_{\hat{x} \in \mathbb{R}^n} \bigl\| Y_{\mathcal{I}} - O_{\mathcal{I}}\, \hat{x} \bigr\|_2^2
= \min_{\hat{x} \in \mathbb{R}^n} \sum_{i \in \mathcal{I}} \bigl\| Y_i - O_i \hat{x} \bigr\|_2^2
= \min_{\hat{x} \in \mathbb{R}^n} \sum_{i \in \mathcal{I}} r_i(\hat{x}).
\]

Fig. 7.2 Pictorial examples illustrating the geometrical intuitions behind Algorithm 7.6 (© [2017] IEEE. Reprinted, with permission, from [17])

Based on the formulation above, it is straightforward to show the following result.

Proposition 7.1 Let the linear dynamical system under attack Σ_a, defined by (7.2), be 2s-sparse observable. Then, for any set of indices I ⊆ {1, . . . , p}, the following statements are equivalent:
• T-Solve.Check(I) returns UNSAT,
• min_{x̂ ∈ R^n} ∑_{i∈I} r_i(x̂) > 0,
• ⋂_{i∈I} H_i = ∅.

To generate a compact Boolean constraint that explains a conflict, we aim to find a small set of sensors that cannot all be attack-free. A key result is that such a set exists and can be computed in time that is linear in the size of the problem. This is captured by the following proposition, whose proof exploits the geometric interpretation provided by the affine subspaces H_i [17].

Proposition 7.2 Let the linear dynamical system under attack Σ_a, defined by (7.2), be 2s-sparse observable. If T-Solve.Check(I) is UNSAT for a set I with |I| > p − 2s, then there exists a subset I_temp ⊂ I with |I_temp| ≤ p − 2s + 1 such that T-Solve.Check(I_temp) is also UNSAT. Moreover, the time complexity of finding I_temp is linear in both p and s.

Using Proposition 7.2, our objective is to find a small set of affine subspaces that fail to intersect. Our algorithm works as follows. First, we construct a set of indices I′ by picking any random set of p − 2s sensors. We then search for one additional sensor i that leads to a conflict with the sensors indexed by I′. To do this, we call T-Solve.Check with the set I_temp := I′ ∪ {i} as an argument. If the check returns SAT, we label these sensors as "non-conflicting" and repeat the same process, replacing the sensor indexed by i with another sensor, until we reach a conflicting set of affine subspaces. Termination of this process is guaranteed by Proposition 7.2, thus revealing a set of p − 2s + 1 conflicting affine subspaces. Once the set is discovered, we stop and generate the following, more compact, certificate:
\[
\phi_{\text{conf-cert}} := \sum_{i \in \mathcal{I}_{\text{temp}}} b_i \ge 1.
\]
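The linear search just described can be sketched as follows (illustrative code, not the chapter's implementation). It reuses a least-squares consistency check such as the one sketched earlier; the base set here is chosen arbitrarily rather than by the residual-ordering heuristic of Algorithm 7.6.

```python
def find_conflicting_subset(O_blocks, Y_blocks, candidate, p, s, check, tol=1e-8):
    """Given a set `candidate` for which check(...) is UNSAT and |candidate| > p - 2s,
    return a conflicting subset of at most p - 2s + 1 sensors (cf. Proposition 7.2)."""
    candidate = sorted(candidate)
    base = candidate[:p - 2 * s]                 # any p - 2s sensors from the candidate
    for extra in candidate[p - 2 * s:]:
        trial = base + [extra]
        sat, _ = check(O_blocks, Y_blocks, trial, tol)
        if not sat:
            return set(trial)                    # compact conflict: sum of b_i over it >= 1
    return set(candidate)                        # fallback; not reached under Prop. 7.2
```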

These steps are summarized in Algorithm 7.5. While Algorithm 7.5 is guaranteed to terminate regardless of the initial random set I′ or the order in which the sensor i is selected, the execution time may vary. In Algorithm 7.6, we show the heuristics used to implement the two steps of Algorithm 7.5, namely, the selection of the initial set I′ and the subsequent addition of sensor indices, which further exploit the geometry of our problem. Our conjecture is that the p − 2s affine subspaces with the lowest (normalized) residuals are most likely to have a common intersection point, which can then be used as a candidate intersection point to test against the affine subspaces with higher (normalized) residuals, one by one, until a conflict is detected. A pictorial illustration of this intuition is given in Fig. 7.2a. Based on this intuition, we first compute the (normalized) residuals r_i for all i ∈ I and sort them in ascending order. We then pick the p − 2s minimum (normalized) residuals, indexed by I_min_r, and search for one more affine subspace that leads to a conflict with the affine subspaces indexed by I_min_r. To do this, we start by solving the same optimization problem as in Algorithm 7.4, but on the reduced set of affine subspaces indexed by I_temp = I_min_r ∪ I_max_r, where I_max_r is the index associated with the affine subspace having the maximal (normalized) residual. If this set of affine subspaces intersects in one point, its members are labeled as "non-conflicting", and we repeat the same process, replacing the affine subspace indexed by I_max_r with the affine subspace associated with the second largest (normalized) residual from the sorted list, until we reach a conflicting set of affine subspaces. Once the set is discovered, we stop and generate the compact certificate using the sensors indexed in I_temp. A sample execution of Algorithm 7.6 is illustrated in Fig. 7.2b.

Finally, as a post-processing step, we can further reduce the cardinality of I_temp by exploiting the dimension of the affine subspaces corresponding to the index list. Intuitively, the lower the dimension, the more information is provided by the corresponding sensor. For example, a sensor i with dim(H_i) = dim(ker O_i) = 0 can be used to uniquely reconstruct the state. This restricts the search space to a unique point and makes it easier to generate a conflict. Therefore, to converge faster toward a conflict, we iterate through the indices in I_temp and remove at each step the one corresponding to the affine subspace with the highest dimension, until we are left with a reduced index set that is still conflicting.

Algorithm 7.5 T-Solve.Certificate-Conflict-Orig(I, x̂)
1: Step 1: Pick a set I′ ⊂ I of p − 2s sensors;
2: Step 2: Conduct a linear search for the UNSAT certificate:
3: status = SAT;
4: Pick a sensor index i ∈ I \ I′;
5: I_temp := I′ ∪ {i};
6: while status == SAT do
7:   (status, x̂) := T-Solve.Check(I_temp);
8:   if status == UNSAT then
9:     φ_conf-cert := ∑_{i∈I_temp} b_i ≥ 1;
10:  else
11:    Pick another sensor index i ∈ I \ I′;
12:    I_temp := I′ ∪ {i};
13:  end if
14: end while
15: return φ_conf-cert;

Algorithm 7.6 T-Solve.Certificate-Conflict(I, x̂)
1: Compute the normalized residuals
2: r := ⋃_{i∈I} {r_i}, with r_i := ‖Y_i − O_i x̂‖₂² / ‖O_i‖₂², i ∈ I;
3: Sort the residual variables
4: r_sorted := sortAscendingly(r);
5: Pick the indices corresponding to the largest and smallest residuals
6: I_max_r := Index(r_sorted_{|I|, |I|−1, …, p−2s+1});
7: I_min_r := Index(r_sorted_{1, …, p−2s});
8: Search linearly for the UNSAT certificate
9: status = SAT; counter = 1;
10: I_temp := I_min_r ∪ I_max_r_counter;
11: while status == SAT do
12:   (status, x̂) := T-Solve.Check(I_temp);
13:   if status == UNSAT then
14:     φ_conf-cert := ∑_{i∈I_temp} b_i ≥ 1;
15:   else
16:     counter := counter + 1;
17:     I_temp := I_min_r ∪ I_max_r_counter;
18:   end if
19: end while
20: [Optional] Sort the rest according to dim(ker O)
21: I_temp2 := sortAscendingly(dim(ker O_{I_temp}));
22: status := UNSAT; counter2 := |I_temp2| − 1;
23: I_temp2 := I_temp2_{1, …, counter2};
24: while status == UNSAT do
25:   (status, x̂) := T-Solve.Check(I_temp2);
26:   if status == SAT then
27:     φ_conf-cert := ∑_{i∈I_temp2_{1,…,counter2+1}} b_i ≥ 1;
28:   else
29:     counter2 := counter2 − 1;
30:     I_temp2 := I_temp2_{1, …, counter2};
31:   end if
32: end while
33: return φ_conf-cert;

The following result establishes the correctness of the proposed SMC-based algorithm [17].

Fig. 7.3 Simulation results showing the number of iterations with respect to the number of states and the number of sensors (© [2017] IEEE. Reprinted, with permission, from [17])

Theorem 7.6 Let the linear dynamical system under attack Σ_a, defined by (7.2), be 2s-sparse observable. Then, Algorithm 7.3 using the conflicting UNSAT certificate φ_conf-cert of Algorithm 7.6 is a solution to Problem 7.2.

7.4.4 Numerical Evaluation

To assess the effectiveness of the SMC-based secure state estimator, we use the prototype SMC-based solver SatEX [2], which uses Z3 [8] as the Boolean satisfiability solver and CPLEX [1] as the convex optimization solver. All the experiments were executed on an Intel Core i7 2.5 GHz processor with 16 GB of memory.

Figure 7.3 shows the number of iterations of the SMC-based algorithm (Algorithm 7.3) when the trivial certificate φ_triv-cert (which is exactly equivalent to applying the brute force search of Algorithm 7.2) and the conflicting certificate φ_conf-cert are used. In each test case, a random support set for the attack vector, a random attack signal, and random initial conditions are generated. All reported results are averaged over 20 runs of the same experiment. Although no statistical significance is claimed, the results reported in this section are representative of the several simulations performed [17]. In the first experiment (left), the number of sensors actually under attack is increased for a fixed bound s = 20 (n = 25, p = 60). In the second experiment (right), both n and p are increased simultaneously. In both cases, the system is constructed with the dimensions of the kernels of O_i ranging between n − 1 and n − 2, meaning that the state is "poorly" observable from individual sensors. An average 50× reduction in iterations is observed when φ_conf-cert is used compared to φ_triv-cert.

To better assess the scalability of the SMC-based estimator, its execution time is compared against state-of-the-art off-the-shelf Mixed-Integer Programming (MIP) solvers when MIP is used to solve a relaxation of the formula φ. The mixed-integer program is solved using the commercial solver CPLEX [1]. Figure 7.4 compares the performance of SatEX with one of the CPLEX mixed-integer programming solvers (running on multiple cores). SatEX outperforms the mixed-integer programming solver by up to 1–2 orders of magnitude as the number of sensors (and hence the number of Boolean variables and constraints) increases, resulting in an efficient solution technique for this combinatorial problem.

Fig. 7.4 Simulation results showing the execution time with respect to the number of states and the number of sensors (© [2017] IEEE. Reprinted, with permission, from [19])

7.5 Special Cases for Polynomial-Time Secure State Estimation

The problems discussed in this chapter are inherently combinatorial. The notion of s-sparse observability requires that we check all the pairs (A, C_Γ) for observability, and this requires that we search over all the matrices C_Γ obtained from C by removing each choice of s rows. Similarly, solving Problems 7.1 and 7.2 implies that we check all subsets of sensors of cardinality s to find those under attack. In this section, we recall the complexity of solving these problems and then focus on a special case that can be solved in polynomial time.

We begin by considering the problem of finding a set Γ of cardinality s whose removal makes the system unobservable [11]. This problem is the complement of checking s-sparse observability. Recall that a problem is said to be coNP-complete when its complement is NP-complete.

Theorem 7.7 Given (A, C) ∈ Q^{n×n} × Q^{p×n}, finding a set Γ of cardinality s such that removing the rows of C indexed by Γ makes the resulting pair unobservable is NP-complete. Hence, checking whether the pair (A, C) is s-sparse observable is coNP-complete.

Given the previous result, it is not surprising that Problem 7.2 is also NP-hard [11].

Theorem 7.8 Given (A, C) ∈ Q^{n×n} × Q^{p×n}, Problem 7.2 is NP-hard.

On the positive side, we now show that for a large class of pairs (A, C), checking s-sparse observability and solving Problem 7.2 can be done in polynomial time [11].

Theorem 7.9 Consider the pair (A, C) ∈ Q^{n×n} × Q^{p×n} and assume that every eigenvalue of A has geometric multiplicity one. Then, s-sparse observability can be checked in polynomial time and Problem 7.2 can be solved in polynomial time.

Note that every matrix A with distinct eigenvalues satisfies the assumption of Theorem 7.9, which shows its applicability to many problems of interest. It had already been shown in [9] that when A is diagonalizable with eigenvalues of different magnitudes, s-sparse observability can be checked in polynomial time using an eigenvalue test. The techniques used in [11] to establish the existence of polynomial-time solutions are also based on a decomposition of the state space R^n into the eigenspaces of A, so as to reduce the problems to one-dimensional versions. In particular, the scalar version of Problem 7.2 can be solved by computing a state estimate from each sensor and then performing majority voting. Although this explains why polynomial-time solutions are possible, the full algorithmic consequences of the unit geometric multiplicity assumption have not yet been investigated.
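The scalar majority-voting idea mentioned above can be illustrated with a tiny noiseless example (a sketch added here, not from the chapter): each sensor produces its own estimate of the scalar state, and with fewer than p/2 attacked sensors the most frequent value is the true one.

```python
from collections import Counter

def scalar_secure_estimate(C_rows, y, decimals=6):
    """Noiseless scalar case (n = 1): estimate x from each sensor as y_i / c_i,
    then take the majority value; correct whenever fewer than p/2 sensors are attacked."""
    estimates = [y_i / c_i for c_i, y_i in zip(C_rows, y) if abs(c_i) > 0]
    rounded = [round(e, decimals) for e in estimates]   # group numerically equal values
    value, _ = Counter(rounded).most_common(1)[0]
    return value

# Hypothetical example: true x = 2.0, sensors 1 and 3 attacked (p = 5, s = 2 < p/2).
C_rows = [1.0, 2.0, -1.0, 0.5, 3.0]
y = [c * 2.0 for c in C_rows]
y[1] += 7.3
y[3] -= 4.1
print(scalar_secure_estimate(C_rows, y))   # 2.0
```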

7.6 Conclusions and Future Work

In this chapter, we introduced the secure state estimation problem, which calls for augmenting cyber-physical systems with a layer of resilience against adversarial attacks on sensors. By using an accurate mathematical model of the underlying physics, one can attribute any discrepancy between the measured sensor data and the measurements expected according to the model to the existence of an adversarial attack. Once the malicious sensors are detected and isolated, one can estimate the state of the underlying physical system using the data collected from the attack-free sensors. Future work is needed to address the case when a precise model of the system is unknown a priori or changes over time.

References

1. IBM ILOG CPLEX Optimizer (2012). www.ibm.com/software/integration/optimization/cplex-optimizer/
2. SatEX Solver: A satisfiability modulo convex programming solver (2018). https://yshoukry.bitbucket.io/SatEX/
3. Amin, S., Litrico, X., Sastry, S.S., Bayen, A.M.: Stealthy deception attacks on water SCADA systems. In: Proceedings of the 13th ACM International Conference on Hybrid Systems: Computation and Control, pp. 161–170. ACM (2010)
4. Barrett, C., Sebastiani, R., Seshia, S.A., Tinelli, C.: Satisfiability modulo theories. In: Handbook of Satisfiability. IOS Press, Amsterdam (2009)

5. Blahut, R.: Algebraic Codes for Data Transmission. Cambridge University Press, Cambridge (2003)
6. Cao, Y., Xiao, C., Cyr, B., Zhou, Y., Park, W., Rampazzi, S., Chen, Q.A., Fu, K., Mao, Z.M.: Adversarial sensor attack on LiDAR-based perception in autonomous driving. In: Proceedings of the 26th ACM Conference on Computer and Communications Security (CCS'19), London, UK (2019)
7. Cárdenas, A.A., Amin, S., Sastry, S.: Research challenges for the security of control systems. In: Proceedings of the 3rd Conference on Hot Topics in Security, HOTSEC'08, pp. 6:1–6:6 (2008)
8. De Moura, L., Björner, N.: Z3: an efficient SMT solver. In: Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 337–340 (2008)
9. Fawzi, H., Tabuada, P., Diggavi, S.: Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans. Autom. Control 59(6), 1454–1467 (2014)
10. Langner, R.: Stuxnet: dissecting a cyberwarfare weapon. IEEE Secur. Priv. Mag. 9(3), 49–51 (2011)
11. Mao, Y., Mitra, A., Sundaram, S., Tabuada, P.: When is the secure state reconstruction problem hard? In: IEEE Conference on Decision and Control (CDC) (2019, to appear)
12. Mishra, S., Shoukry, Y., Karamchandani, N., Diggavi, S.N., Tabuada, P.: Secure state estimation against sensor attacks in the presence of noise. IEEE Trans. Control Netw. Syst. 4(1), 49–59 (2017)
13. Nieuwenhuis, R., Oliveras, A., Tinelli, C.: Solving SAT and SAT modulo theories: from an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T). J. ACM 53(6), 937–977 (2006)
14. Pasqualetti, F., Dörfler, F., Bullo, F.: Cyber-physical security via geometric control: distributed monitoring and malicious attacks. In: 51st IEEE Conference on Decision and Control, pp. 3418–3425. IEEE (2012)
15. Shoukry, Y., Tabuada, P.: Event-triggered state observers for sparse sensor noise/attacks. IEEE Trans. Autom. Control 61(8), 2079–2091 (2016)
16. Shoukry, Y., Martin, P.D., Tabuada, P., Srivastava, M.B.: Non-invasive spoofing attacks for anti-lock braking systems. In: Proceedings of the 15th International Conference on Cryptographic Hardware and Embedded Systems, CHES'13, pp. 55–72 (2013)
17. Shoukry, Y., Nuzzo, P., Puggelli, A., Sangiovanni-Vincentelli, A.L., Seshia, S.A., Tabuada, P.: Secure state estimation for cyber-physical systems under sensor attacks: a satisfiability modulo theory approach. IEEE Trans. Autom. Control 62(10), 4917–4932 (2017)
18. Shoukry, Y., Chong, M., Wakaiki, M., Nuzzo, P., Sangiovanni-Vincentelli, A., Seshia, S.A., Hespanha, J.P., Tabuada, P.: SMT-based observer design for cyber-physical systems under sensor attacks. ACM Trans. Cyber-Phys. Syst. 2(1), 5:1–5:27 (2018)
19. Shoukry, Y., Nuzzo, P., Sangiovanni-Vincentelli, A.L., Seshia, S.A., Pappas, G.J., Tabuada, P.: SMC: satisfiability modulo convex programming. Proc. IEEE 106(9), 1655–1679 (2018)
20. Smith, R.S.: Covert misappropriation of networked control systems: presenting a feedback structure. IEEE Control Syst. Mag. 35(1), 82–92 (2015)
21. Son, Y., Shin, H., Kim, D., Park, Y., Noh, J., Choi, K., Choi, J., Kim, Y.: Rocking drones with intentional sound noise on gyroscopic sensors. In: 24th USENIX Security Symposium, pp. 881–896 (2015)

Chapter 8

Active Detection Against Replay Attack: A Survey on Watermark Design for Cyber-Physical Systems Hanxiao Liu, Yilin Mo, and Karl Henrik Johansson

Abstract Watermarking is a technique that embeds digital information, “watermark”, in a carrier signal to identify ownership of the signal or verify the authenticity or integrity of the carrier signal. It has been widely employed in the fields of image and signal processing. In this chapter, we survey some recent physical watermark design approaches for Cyber-Physical Systems (CPS). We focus on how to design physical watermarking to actively detect cyber-attacks, especially replay attacks, thereby securing the CPS. First, the system and the attack model are introduced. A basic physical watermarking scheme, which leverages a random noise as a watermark to detect the attack, is discussed. The optimal watermark signal is designed to achieve a trade-off between control performance and intrusion detection. Based on this scheme, several extensions are also presented, such as watermarks generated by a hidden Markov model and online data-based watermark generation. These schemes all use an additive watermarking signal. A multiplicative watermark scheme is also presented. The chapter is concluded with a discussion on some open problems on watermark design.


8.1 Introduction

Cyber-Physical Systems (CPS) closely integrate computational elements and physical processes. They play an increasingly critical role in a large variety of infrastructures, such as transportation, the power grid, defense, and the environment, most of which are of great importance to the operation of society. Any successful cyber-physical attack may cause huge damage to critical infrastructure, human lives, and property, and even threaten national security. The Maroochy water breach in 2000 [1], the Stuxnet malware in 2010 [2], the Ukraine power outage in 2015 [3], the Venezuela blackouts in 2019 [4], and other security incidents motivate us to pay more attention to CPS security.

Recent years have witnessed more and more research on how to design watermarking signals to secure CPS. Watermarking is a technique that embeds digital information, a watermark, in a carrier signal to identify ownership of the signal or to verify the authenticity or integrity of the carrier signal. It has been widely employed in the fields of image and signal processing. One important application of this technique is tracing illegally copied movies, where a watermark is used to determine the owner of the original movie [5, 6]. In [7, 8], a physical watermarking scheme is proposed for control systems. In this scheme, if the system is operating normally, then the effect of the carefully designed watermark signal is present in the sensor measurements; if the system is under attack, its effect cannot be detected. It can thus be considered an active defense scheme. Mo and Sinopoli [7] investigate the problem of detecting replay attacks and first propose the technique of introducing an authentication signal, later called a physical watermark signal. This approach enables the detection of replay attacks in which an adversary can read and modify all sensor data as well as inject a malicious input into the system. Different from false data injection attacks, this type of attack does not need knowledge of the system model to generate stealthy outputs; it only replays the recorded sensor measurements to the operator. As a result, the replayed data and the real data share exactly the same statistics, and replay attacks cannot be detected efficiently by passive means. By injecting a random control signal, the watermark signal, into the control system, it becomes possible to secure the system. The authors of [8, 9] further extend the results of [7] by providing a more general physical authentication scheme to detect replay attacks. However, the watermark signal may deteriorate the control performance, and therefore it is important to find the optimal trade-off between control performance and detection efficiency, which can be cast as an optimization problem. Furthermore, Mo et al. [8] also characterize the relationship among the control performance loss, the detection rate, and the strength of the Gaussian authentication input.

The term physical watermarking is first proposed in [5] to authenticate the correct operation of CPS. As a generalization of [7–9], the optimal watermark signal is designed to maximize the expected Kullback–Leibler divergence between the distributions of the compromised and the healthy residue signals, while guaranteeing a certain maximal control performance loss. The optimization problem is separated into two steps: the optimal direction of the signal for each frequency is computed first, and then all possible frequencies are considered to find the optimal watermark signal. The watermarking approach proposed in [10] is based on an additive watermark signal generated by a dynamical system. Conditions on the parameters of the watermark signal are obtained which ensure that the residue signal of the system under attack is unstable, so that the attack can be detected. An optimization problem is proposed to obtain a loss-effective watermark signal with a prescribed detection rate by adjusting the design parameters. A similar problem is studied for multi-agent systems in [11]. The problem of physical watermark design under packet drops at the control input is analyzed in [12]. Interestingly, Bernoulli packet drops can yield better detection performance compared with a purely Gaussian watermarking signal. Consequently, a Bernoulli–Gaussian watermark, which incorporates both an additive Gaussian input and a Bernoulli drop process, is jointly designed to achieve the trade-off between detection performance and control performance, and the effect of the proposed watermark on both is analyzed. Satchidanandan and Kumar [13] provide a comprehensive procedure for dynamic watermarking, in which private excitation signals are superimposed on the control input and can be traced through the system to enable the detection of attacks. Such an active defense technique is used to secure CPS including Single-Input Single-Output (SISO) systems with Gaussian noise, SISO auto-regressive systems with exogenous Gaussian noise, SISO auto-regressive-moving-average systems with exogenous terms, SISO systems with partial observations, and multi-input multi-output systems with Gaussian noise, with an extension to non-Gaussian systems. In [14], they propose necessary and sufficient conditions that the statistics of the watermark need to satisfy in order to guarantee security.

It is worth noticing that in all the research discussed above, precise knowledge of the system parameters is required in order to design the watermark signal and the detector. However, acquiring these parameters may be troublesome and costly in practice. Motivated by this, [15] proposes an algorithm that simultaneously generates the watermarking signal and infers the system parameters, enabling the detection of attacks with unknown system parameters; the proposed algorithm is proved to converge to the optimal one almost surely. In [16, 17], Rubio-Hernán et al. define cyber-adversaries and cyber-physical adversaries and point out that the detection schemes proposed by Mo and Sinopoli [7] and Mo et al. [5] fail to detect an attack from the latter. A multi-watermark-based detection scheme is proposed to overcome this limitation. Furthermore, in [18], a periodic and intermittent event-triggered control watermark detector is presented. The new detector strategy integrates local controllers with a remote controller, and it is proved that the new scheme can detect the three adversary models defined in their work. Although the introduction of watermark signals provably enables the detection of certain replay attacks, it degrades the control performance since the control

8.2 Problem Setup

In this section, we set up the problem by introducing a system model as well as an attack model.

8.2.1 System Description

Let us consider a Linear Time-Invariant (LTI) system described by the following equations:

8 Active Detection Against Replay Attack: A Survey …

149

xk+1 = Axk + Bu k + wk ,

(8.1)

yk = C xk + vk ,

(8.2)

where xk ∈ Rn and yk ∈ Rm are the state vector and the sensor’s measurement, respectively, wk ∈ Rn and vk ∈ Rn are process and measurement noise, respectively. It is assumed that the initial states x0 , wk and vk are independent Gaussian random variables, and x0 ∼ N (x¯0 , ), wk ∼ N (0, Q), vk ∼ N (0, R). It is also assumed that (A, B) is stabilizable and (A, C) is detectable. Here, we assume that the objective of the system operator is to derive an optimal solution to minimize the following Linear-Quadratic Gaussian (LQG) cost: 1 J  lim E T →∞ T

T −1 

xkT W xk

+

u kT V u k

  ,

(8.3)

k=0

where W, V are positive definite matrices and u k is measurable with respect to previous observations. Due to the separation principle, the optimal solution of (8.3) combines Kalman filter and LQG controller. The optimal state estimate xˆk is given by Kalman filter as follows: xˆ0|−1 = x¯0 , P0|−1 = , xˆk+1|k = A xˆk|k + Bu k , Pk+1|k = A Pk|k A T + Q, K k = Pk|k−1 C T (C Pk|k−1 C T + R)−1 , xˆk|k = xˆk|k−1 + K k (yk − C xˆk|k−1 ), Pk|k = Pk|k−1 − K k C Pk|k−1 . It is well known that the gain K k converges to a fixed gain since the system is detectable. Hence, define P  lim Pk|k−1 , K  PC T (C PC T + R)−1 . k→∞

Since control systems usually run for a long time, we can assume that the system is already in steady state. The covariance of the initial state is assumed  = P. Hence, the Kalman filter can be rewritten as follows: xˆ0|−1 = x¯0 , xˆk+1|k = A xˆk|k + Bu k , xˆk|k = xˆk|k−1 + K (yk − C xˆk|k−1 ). Based on the optimal state estimate xˆk , the optimal control input u ∗k is provided by the LQG controller: u ∗k = −(B T S B + V )−1 B T S A xˆk|k , where S satisfies the following Riccati equation:

150

H. Liu et al.

S = A T S A + W − A T S B(B T S B + V )−1 B T S A. Define L  −(B T S B + V )−1 B T S A, then u ∗k = L xˆk|k .

(8.4)

The objective function given by the optimal estimator and controller in our case is   J = tr(S Q) + tr (A T S A + W − S)(P − K C P) .

(8.5)

The χ 2 detector [23] is widely used to detect anomalies in control systems. It takes the following form at time k: gk =

k 

H0

(yi − C xˆi|i−1 )T P −1 (yi − C xˆi|i−1 ) ≶ η, H1

i=k−T +1

(8.6)

where T is the window size of detection and P = (C PC T + R) and η is the threshold which is related with the false alarm rate. When the system is under normal operation, the left of the above equation is χ 2 distributed with mT degrees of freedom. Furthermore, H0 denotes the system is under normal operation while H1 denotes a triggered alarm. Here, define the probability of false alarm αk and the probability of detection rate βk as αk  P(gk > η|H0 ),

βk  P(gk > η|H1 ).

8.2.2 Attack Model In this section, we introduce a replay attack model and analyze the feasibility of this kind of attacks on the control system. The adversary is assumed to have the following capabilities and resources: 1. The attacker has access to all the real-time sensor measurements. In other words, it knows true y0 , . . . , yk at time k. 2. The attacker can modify the true sensory data yk to arbitrary signals yk by adding malicious data yka to the sensor measurement. 3. The attacker can inject attack u ak to the control input. Under the above attack, the system dynamics changes to the following form: xk+1 = Axk + Bu k + B a u ak + wk ,

yk = C xk + D a yka + vk ,

where u ak and dka are the harmful input and output.

(8.7)

8 Active Detection Against Replay Attack: A Survey …

151

Given these capabilities, the adversary can launch multiple types of attacks, such as zero-dynamics attack [24], covert attack [25], false data injection attack [26, 27] and replay attack [5, 7, 8]. In this chapter, we mainly focus on replay attacks. Without loss of generality, we assume that attack starts at time 0. During the replay attack, the following attack strategies are employed: 1. The attacker records a sequence of sensor measurements yk s from time k1 to k1 + T p , where T p is large enough to guarantee that the attacker can replay the sequence for an extended period of time during the attack. 2. The attacker manipulates the sensor measurements yk starting from time 0 to the recorded signals, i.e., D a yka = yk − C xk − vk = yk−Δk − C xk − vk , ∀ 0 ≤ k ≤ T p , where Δk = −k1 . 3. The attacker injects the malicious input B a u ak . Here, considering the above system, detector and the attack strategies, the stability of A  (A + B L)(I − K C) implies that the detection rate βk converges to the false alarm rate αk . If A is unstable, the detection rate βk goes to one. For a more detailed discussion on the detectability of replay attack, please refer to [7]. Since the classical passive detection scheme, where the detector passively observes the sensory data, is incapable of detecting a replay attack in some CPS, an active detection scheme is needed to solve the problem. In the following section, we will develop a physical watermark scheme by which the detector can better detect such attacks.

8.3 Physical Watermark Scheme The main idea of physical watermark is to inject a random noise, which is called the watermark signal, into the system (8.1) to excite the system and check whether the system responds to the watermark signal in accordance to the dynamical model of the system. In order to detect replay attack, the controller is redesigned as u k = u ∗k + Δu k ,

(8.8)

where u ∗k is the optimal LQG control signal and Δu k is drawn from an IID Gaussian distribution with zero mean and covariance Q, and the watermark signal sequence is chosen to be also independent of u ∗k .

152

H. Liu et al.

8.3.1 LQG Performance Loss Δu k is added as an authentication signal. It is chosen to be zero mean because we do not wish to introduce any bias to xk . It is clear that when there is no attack, the controller is not optimal in the LQG sense anymore, which means that in order to detect the attack, we need to sacrifice control performance. The following theorem characterizes the loss of LQG performance when we inject Δu k into the system. Theorem 8.1 ([7]) The LQG performance after adding Δu k is given by J = J + tr[(V + B T S B)Q] .



(8.9)

ΔJ

8.3.2 Detection Performance Consider the χ 2 detector after adding the watermarking signal. The following theorem shows the effectiveness of the detector under the modified control scheme. Theorem 8.2 ([7]) In the absence of an attack, E[(yk − C xk|k−1 )T P −1 (yk − C xk|k−1 )] = m.

(8.10)

Under attack lim E[(yk − C xk|k−1 )T P −1 (yk − C xk|k−1 )] = m + 2 tr(C T P −1 CU ),

k→∞

where U is the solution of the following Lyapunov equation: U − BQ B T = A U A T . Corollary 8.1 ([7]) In the absence of an attack, E[(yk − C xk|k−1 )T P −1 (yk − C xk|k−1 )] = mT .

(8.11)

Under attack lim E[(yk − C xk|k−1 )T P −1 (yk − C xk|k−1 )] = mT + 2 tr(C T P −1 CU )T .

k→∞

8 Active Detection Against Replay Attack: A Survey …

153

8.3.3 The Trade-Off Between Control and Detection Performance The authentication signal Δu k can be optimized such to maximize the detection performance while minimizing the effect on controller performance. As the authentication signal has to be zero mean, the design hinges on the covariance matrix Q. Let the optimal value of Q, based on the design requirements, be denoted by Q ∗ . The optimization problem can be set up in two ways. Initially, the LQG performance loss (ΔJ ) can be constrained to be less than some design parameters Θ, and the increase (Δgk ) in the expected value of the quadratic residues in case of an attack maximized. In this case, the optimal Q ∗ is the solution to the following optimization problem: arg max Q

subject to

tr(C T P −1 CU ) U − BQ B T = A U A T Q≥0 tr[(V + B T S B)Q] ≤ Θ.

(8.12)

Remark 8.1 It can be observed from Theorems 8.1 and 8.2 that the increase (ΔJ ) in LQG cost and increase (Δgk ) in the expectation of the quadratic residues are linear functions of the noise covariance matrix Q. Thus, the optimization problem is a semi-definite programming problem, and hence can be solved efficiently. Theorem 8.3 ([8]) There exists an optimal Q ∗ for (8.12) of the following form: Q ∗ = αωω T , where α > 0 is a scalar and ω is a vector such that ω T ω = 1. Another way of optimizing is to constrain the increase (Δgk ) in the expected values of the quadratic residues to be above a fixed value Γ , thereby guaranteeing a certain rate of detection, and the performance loss (ΔJ ) can be minimized. The optimal Q is now the solution to the optimization problem arg max Q

subject to

tr[(V + B T S B)Q] U − BQ B T = A U A T Q≥0 tr(C T P −1 CU ) ≥ Γ.

(8.13)

154

H. Liu et al.

Remark 8.2 The solutions of the two optimization problems given in (8.12) and (8.13) will be scalar multiples of each other, thus solving either optimization problem guarantees same performance.

8.4 Extensions of Physical Watermark Scheme 8.4.1 A Non-IID Watermarking Design Approach In this subsection, we further investigate the problem of designing the watermarking signal to achieve the optimal trade-off between control performance and detection performance. The following technique generalizes the results in [7, 8] and considers non-independent and identically distributed Gaussian process. For the sake of simplicity, we define ζk  Δu k , where Δu k is defined in (8.8). Correspondingly, (8.8) is rewritten as u k = u ∗k + ζk .

(8.14)

Here, the auto-covariance function is defined as Γ (d)  Cov(ζ0 , ζk ) = Eζ0 ζdT , and the watermarking signal is generated by a Hidden Markov Model (HMM) ξk+1 = Ah ξk + ϕk ,

ζk = C h ξk ,

(8.15)

where ϕk ∈ Rn h , k ∈ Z is a sequence of IID zero-mean Gaussian random variables with covariance , and ξk ∈ Rn h is the hidden state. To make ζk be a stationary process, the covariance of ξ0 is assumed to be the solution of the following Lyapunov equation: Cov(ξ0 ) = Ah Cov(ξ0 )AhT + , where Ah is strictly stable. It is assumed that the watermark signal is chosen from a HMM with ρ(Ah ) < ρ, where ρ < 1 is a design parameter. A value of ρ close to 1 gives the system operator more freedom to design the watermark signal, while a value of ρ close to 0 improves the freshness of the watermark signal by reducing the correlation of ϕk at different time steps. To simplify notations, define the feasible set G (ρ) as G (ρ) = {Γ : Γ is generated by an HMM 8.15 with ρ(Ah ) < ρ}.

8 Active Detection Against Replay Attack: A Survey …

8.4.1.1

155

LQG Performance

Similarly, the injection of the watermarking signal ζk degrades the

  LQG performance. T 1 T T The LQG cost is J = lim E 2T +1 k=−T (x k W x k + u k V u k ) . The following theorem characterizes the performance loss incurred by the additional watermark. Theorem 8.4 ([5]) The LQG performance of the system described by (8.1) (8.2) and (8.14) is characterized as follows: J = J ∗ + ΔJ, where J ∗ is the optimal LQG cost without the watermark signal and ΔJ = tr

⎧ ⎨ ⎩

⎡ V Γ (0) + 2V sym ⎣ L

∞ 

⎤⎫ ⎬

(A + B L)d BΓ (1 + d)⎦



d=0



+ tr (W + L T V L)Θ1 ,

(8.16) where Θ1  2

∞ 

  sym (A + B L)d L1 (Γ (d)) − L1 (Γ (0)),

d=0

and L1 : C p× p → Cn×n is a linear operator defined as ∞ 

L1 (X ) =

(A + B L)i B X B T ((A + B L)i )T = (A + B L)L1 (X )(A + B L)T + B X B T .

i=0

8.4.1.2

Detection Performance

In the absence of the attack, since the real-time authentication signal ζk and the residue z k are available to the detector of the system, the residue z k follows a Gaussian distribution with mean zero and covariance P = C PC T + R [23]. In the presence of the attack, the residue z k converges to a Gaussian with mean μk−1 and covariance (P + ) [5], where μk  −C

k  i=0

A k−i Bζi and  = 2

∞ 

C sym[A d L2 (Γ (d))]C T − L2 (Γ (0))]C T ,

d=0

(8.17) where L2 : C p× p → Cn×n is a linear operator on the space of p × p, which is defined as

156

H. Liu et al.

L2 (X ) 

∞ 

A i B X B T (A i )T = A i L2 (X )(A i )T + B X B T .

i=0

To detect the replay attack, we need a detector to differentiate the distribution of yk under the following two hypotheses: N0 : z k ∼ N0 (0, P);

N1 : z k ∼ N1 (μk−1 , P + ).

By the Neyman–Pearson lemma [28], the optimal detector is given by the Neyman– Pearson detector as discussed in the following theorem. Theorem 8.5 ([5]) The optimal Neyman–Pearson detector rejects N0 in favor of N1 if g N P (z k , ζk−1 , ζk−2 , . . .) = z kT P −1 z k − (z k − μk−1 )T (P + )−1 (z k − μk−1 ) ≥ η. (8.18) Otherwise, hypothesis H0 is accepted. Since the detection rate and expected time to detection involve integrating a Gaussian distribution, which usually does not have an analytical solution, the Kullback– Leibler (KL) divergence is used to characterize the detection performance. The following theorem quantifies the detection performance from the perspective of the expected KL divergence between N0 and N1 : Theorem 8.6 ([5]) The expected KL divergence of distribution N1 and N0 is   1   E D K L (N1 N0 ) = tr P −1 − log det I + P −1 . 2

(8.19)

Furthermore, the expected KL divergence satisfies the inequality   1     1  tr P −1 ≤ E D K L (N1 N0 ) ≤ tr P −1 − log 1 + tr P −1 , 2 2 (8.20) where the upper bound is tight if C is of rank 1.

8.4.1.3

The Optimal Watermarking Signal

In order to achieve the optimal trade-off between the control performance and detection performance, the optimization problem is formulated as follows: arg max Γ (d)∈G (ρ)

subject to

E D K L (N1 N0 ) , ΔJ ≤ δ,

(8.21)

8 Active Detection Against Replay Attack: A Survey …

157

where δ > 0 is a design parameter depending on how much control performance loss is tolerable. It is worth noticing that directly maximizing the detection performance is   computationally difficult. Notice that the expected KL divergence is relaxed to tr P −1 , using the upper and lower bounds derived in Theorem 8.6. One can transform the above optimization problem to following problem:   tr P −1 ,

arg max Γ (d)∈G (ρ)

ΔJ ≤ δ.

subject to

(8.22)

Although  and J are linear functions of Γ , convex optimization techniques cannot be directly applied to solve (8.22), since Γ is in an infinite-dimensional space. Therefore, (8.22) is transformed into the frequency domain. Before continuing on, the following definition is needed. Definition 8.1 ([5]) ν is a positive Hermitian measure of size p × p on the interval (−0.5, 0.5] if for a Borel set S B ⊆ (−0.5, 0.5], ν(S B ) is a positive semi-definite Hermitian matrix with size p × p. The following theorem establishes the existence of a frequency domain representation for Γ (d). Theorem 8.7 (Bochner’s Theorem [29, 30]) Γ (d) is the auto-covariance function of a stationary Gaussian process ζk if and only if there exists a unique positive Hermitian measure ν of size p × p, such that  Γ (d) =

1 2

exp(2π jdω)dν(ω).

(8.23)

− 21

By the fact that Γ (d) is real, the Hermitian measure ν satisfies the following property, which can be applied to the Fourier transform of the real-valued signals. Proposition 8.1 ([5]) Γ (d) is real if and only if for all Borel-measurable sets S B ⊆ (−0.5, 0.5], ν(S B ) = ν(−S B ).

(8.24)

By (8.24), (8.23) can be simplified as  Γ (d) = 2

1 2

 exp(2π jdω)dν(ω) .

0

Theorem 8.8 ([5]) The optimal solution (not necessarily unique) of (8.22) is Γ∗ (d) = 2ρ |d|  (exp(2π jdω∗ )H∗ ) ,

(8.25)

158

H. Liu et al.

where ω∗ and H∗ are the solution of the ensuing optimization problem. arg max

tr[F2 (ω, H )C T P −1 C],

subject to

F1 (ω, H ) ≤ δ, 0 ≤ ω ≤ 0.5, H Hermitian and Positive Semi-definite,

ω,H

(8.26)

where the functions F1 and F2 are defined as F1 (ω, H )  tr [V Θ2 ] + tr[(W + L T V L)Θ3 ], −1

F2 (ω, H )  2{2 sym[(I − sρA ) L2 (H )] − L2 (H )},

(8.27) (8.28)

where Θ2  2{2 sym(sρ L[I − sρ(A + B L)]−1 B H ) + H }, Θ3  2{2 sym[(I − sρ(A + B L))−1 L1 (H )] − L1 (H )}, s  exp(2π jω). Furthermore, one optimal H∗ of optimization problem (8.26) is of the form H∗ = hh H , where h ∈ C p . The corresponding HMM is given by ξk+1 = ρ

  cos 2π ω∗ − sin 2π ω∗ ξk + ψk , sin 2π ω∗ cos 2π ω∗

ζk =

√ √  2h r 2h i ξk ,

(8.29)

where h r , h i ∈ R p are the real and imaginary parts of h, respectively, and  = Cov(ψk ) = (1 − ρ 2 )I .

8.4.2 An Online Design Approach It is worth noticing that in order to design the optimal watermark signal, precise knowledge of the system parameters is needed. However, acquiring the parameters may be troublesome and costly. Furthermore, there may be unforeseen changes in the model of the system, such as topological changes in power systems. As a result, the identified system model may change during the system operation. Therefore, it is beneficial for the system to “learn” the parameters and design the detector and watermark signal in real time, which is our focus in this section. Based on the physical watermark scheme, we develop an approach to infer the system parameters based only on the system input data φk and output data yk and design the marked parameters in Fig. 8.1: the covariance Uk of the watermark signal φk and the optimal detector based on the estimated parameters.

8 Active Detection Against Replay Attack: A Survey …

159

Fig. 8.1 The system diagram

To simplify notations, in this subsection, we consider a stable open-loop system. The LTI system described by (8.1) (8.2) is rewritten as follows: xk = Axk−1 + Bφk + wk , yk = C xk + vk ,

(8.30) (8.31)

where φk ∈ R p is the watermark signal and its covariance is denoted as U.

8.4.2.1

Physical Watermark for Systems with Known Parameters

The analyses on the control performance, detection performance, and optimal problem are similar to that in the above section. Due to the space constraints, we only provide the outline. Please refer to [15] for more details. In the absence of the attack, yk can be represented as follows: yk = ϕk + ϑk ,

(8.32)

where ϕk 

k 

Hτ φk−τ

and ϑk 

τ =0

k 

C At wk−t + vk + C Ak+1 x−1 ,

t=0

Gaussian whose where Hτ = C Aτ B. It is easy to know that  ϕk is a zero-mean T covariance converges to U , where U  ∞ τ =0 Hτ U Hτ . Similarly, ϑk is a zeromean Gaussian noise whose covariance is W = C PC T + R. Under the replay attack, the replayed yk can be written as yk = yk−Δk = ϕk−Δk + ϑk−Δk . Since Δk is unknown to the system operator, we shall treat ϕk−Δk as a zero-mean Gaussian random variable with covariance U . As a result, yk is a zero-mean Gaussian random variable with covariance U + W . Here, we provide the following two hypotheses on the distribution of yk : H0 : yk ∼ N0 (ϕk , W ),

H1 : yk ∼ N1 (0, U + W ).

160

H. Liu et al.

The Neyman–Pearson detector [28] is employed to differentiate two distributions and the KL divergence is used to characterize the detection performance. Hence, we aim to maximize tr(U W −1 ) to maximize the detection performance. Correspondingly, the following LQG metric is used to quantify the performance loss:  T −1     1  yk T y J = lim E X k (8.33) = tr(X yy W ) + tr(X S), φ φ T →+∞ T k=0 k k where 

U H0 U S= U H0T U



 X yy X yφ >0 X= X φy X φφ 

and

is the weight matrix for the LQG control, which is chosen by the system operator. Therefore, in order the achieve the optimal trade-off between the control and detection performance, the optimization problem is formulated as follows: tr(U W −1 )

U∗ = arg max U ≥0

tr(X S) ≤ δ,

subject to

(8.34)

where δ is a design parameter. An important property of the optimization problem (8.34) is that the optimal solution is usually a rank-1 matrix, which is formalized by the following theorem: Theorem 8.9 ([15]) The optimization problem (8.34) is equivalent to U∗ = arg max

tr(U P)

U ≥0

tr(U X ) ≤ δ,

subject to

(8.35)

where P=

∞  τ =0

HτT W −1 Hτ

and X =

∞ 

 HτT

X yy Hτ

+ H0T X yφ + X φy H0 + X φφ .

τ =0

The optimal solution to (8.35) is U∗ = zz T , where z is the eigenvector corresponding to the maximum eigenvalue of the matrix X −1 P and z T X z = δ. Furthermore, the solution is unique if X −1 P has only one maximum eigenvalue. Then we will develop an online “learning” procedure to infer the system parameters based on which we show how to design watermark signals and the optimal detector and prove that the physical watermark and the detector asymptotically converge to the optimal ones. Throughout this subsection, we make the following assumptions:

8 Active Detection Against Replay Attack: A Survey …

161

Assumption 8.1 ([15]) 1. 2. 3. 4. 5.

A is diagonalizable. The maximum eigenvalue of X −1 P is unique. The system is not under attack during the learning phase. The number of distinct eigenvalues of A, which is denoted as n, ˜ is known. The LQG weight matrix X and the largest tolerable LQG loss δ are known.

8.4.2.2

An Online Algorithm

In this subsection, we will present the complete algorithm in a pseudocode form. After that, the online “learning” scheme will be introduced in detail. Algorithm 8.1 describes our proposed online watermarking algorithm. The notations are described later. A pseudocode form for Algorithm 8.1 is as follows: Algorithm 8.1 Online Watermarking Design Require: P−1 ← I, X−1 ← X φφ , k ← 0 Ensure: 1: while true do 2: Uk,∗ ← arg maxU ≥0, tr(U X k−1 )≤δ tr(U Pk−1 ) 3: Uk ← Uk,∗ + (k + 1)−υ δ I 4: Generate random variable ζk ∼ N (0, I ) 1/2 5: Apply watermark signal φk ← Uk ζk 6: Collect sensory data yk  T U −1 7: Hk,τ ← k−τ1+1 kt=τ yt φt−τ t−τ 8: Compute the coefficient of pk (x) by solving (8.40) 9: if pk (x) is Schur stable then 10: Update Pk , Xk from (8.41)–(8.46) 11: end if 12: Update gˆ k from (8.47) 13: k ←k+1 14: end while

Then we will introduce this algorithm in detail. Generation of the Watermark Signal φ k Let us design Uk , which can be considered as an approximation for the optimal covariance of the watermark signal U as Uk = Uk,∗ +

δ I, (k + 1)υ

(8.36)

where 0 < υ < 1, δ is the maximum tolerable LQG loss, and Uk,∗ is the solution of the following optimization problem:

162

H. Liu et al.

Uk,∗ = arg max

tr(U Pk−1 ),

U ≥0

tr(U Xk−1 ) ≤ δ,

subject to

(8.37)

and Pk−1 and Xk−1 are the estimate of P and X matrices, respectively, based on y0 , . . . , yk−1 , φ0 , . . . , φk−1 , both of which are initialized as P−1 = I, X−1 = X φφ . The inference procedure of Pk and Xk for k ≥ 0 will be provided in the further subsections. 1/2 At each time k, the watermark signal is chosen to be φk = Uk ζk , where ζk s are IID Gaussian random vectors with covariance I . Inference on H τ Define the following quantity Hk,τ , where 0 ≤ τ ≤ 3n˜ − 2, as  1 T −1 yt φt−τ Ut−τ k − τ + 1 t=τ k

Hk,τ 

= Hk−1,τ +

  1 −1 T yk φk−τ Uk−τ − Hk−1,τ , k−τ +1

(8.38)

where Hk,τ can be interpreted as an estimate of Hτ . It is worth noticing that the calculation of the matrices U , W , P, and X requires Hτ for all τ ≥ 0. Next we shall show that in fact only finitely many Hτ s are needed to compute those matrices, which requires one intermediate result. Lemma 8.1 Assuming the matrix A is diagonalizable with λ1 , . . . , λn˜ being n˜ itsτdisλi Ωi . tinct eigenvalues, then there exist unique Ω1 , . . . , Ωn˜ , such that Hτ = i=1 n˜

i=1 (x

− λi ) = x n˜ +

Hi+n−1 + · · · + α0 Hi = C Ai p(A)B = 0. Hi+n˜ + αn−1 ˜ ˜

(8.39)

Since A satisfies its own minimal polynomial p(x) = n−1 ˜ + · · · + α0 , we know that for any i ≥ 0: αn−1 ˜ x

to estimate both λi s and Ωi s Leveraging (8.39), we could use H0 , H1 , . . . , H3n−2 ˜ and thus Hτ for any τ . To this end, let us define: ⎡

⎤ ⎡ ⎤ T tr(Hk,0 Hk,n˜ ) αk,0 ⎢ .. ⎥ ⎥ .. −1 ⎢ ⎣ . ⎦  −k ⎣ ⎦, . T tr(Hk,n−1 αk,n−1 ˜ ˜ Hk,n˜ ) where

(8.40)

8 Active Detection Against Replay Attack: A Survey …



⎤ T T Hk,0 ) · · · tr(Hk,0 Hk,n−1 tr(Hk,0 ˜ ) ⎢ ⎥ .. .. .. k  ⎣ ⎦ . . . T T tr(Hk,n−1 ˜ ) ˜ Hk,0 ) · · · tr(Hk,n−1 ˜ Hk,n−1

163

⎡ ⎢ ⎢ and Hk,i  ⎢ ⎣



Hk,i Hk,i+1 .. .

⎥ ⎥ ⎥. ⎦

Hk,i+2n−2 ˜

n−1 ˜ Let us denote the roots of the polynomial pk (x) = x n˜ + αk,n−1 + · · · + αk,0 ˜ x to be λk,1 , . . . , λk,n˜ . Define a Vandermonde like matrix Vk to be



1 λk,1 .. .

⎢ ⎢ Vk  ⎢ ⎣ 3n−2 ˜ λk,1

⎤ 1 ··· 1 λk,2 · · · λk,n˜ ⎥ ⎥ .. . . . ⎥, . .. ⎦ . 3n−2 ˜ 3n−2 ˜ λk,2 · · · λk,n

and we shall estimate Ωi as ⎡

⎤ ⎡ ⎤ Ωk,1 Hk,0 ⎢ .. ⎥ + ⎣ . ⎦ = (Vk ⊗ Im ) ⎣ · · · ⎦ . Hk,3n−2 ˜ Ωk,n˜

(8.41)

Inference on ϕ k , ϑ k , and W Define ϕˆ k 

n˜ 

ϕˆk,i ,

(8.42)

i=1

with ϕˆk,i = λk,i ϕˆk−1,i + Ωk,i φk , and ϕˆ−1,i = 0. As a result, we can estimate ϑk as ϑˆ k  yk − ϕˆ k .

(8.43)

The covariance of ϑk can be estimated as 1  ϑˆ t ϑˆ tT . k + 1 t=0 k

Wk 

(8.44)

Inference on P, X, U, and g k Finally, we can derive an estimation of the P and X matrices, which are required to compute the optimal covariance U of the watermark signal, given by

164

H. Liu et al.

Pk =

 n˜ ∞   τ =0

=

T Wk−1

λτk,i Ωk,i

 n˜ 

i=1

n˜  n˜  i=1 j=1

 λτk,i Ωk,i

i=1

1 Ω T W −1 Ωk, j , 1 − λk,i λk, j k,i k

(8.45)

and Xk =

∞  τ =0

=

⎛ ⎞T ⎛ ⎞ n˜ n˜ n˜ n˜     T X ⎝ λτk,i Ωk,i ⎠ X yy ⎝ λτk,i Ωk,i ⎠ + Ωk,i Ωk,i + X φφ yφ + X φy i=1

n˜  n˜  i=1 j=1

i=1

1 Ω T X yy Ωk, j + 1 − λk,i λk, j k,i

i=1 n˜ 

i=1

T X Ωk,i yφ + X φy

i=1

n˜ 

Ωk,i + X φφ .

i=1

(8.46) The Neyman–Pearson detection statistics gk can be approximated by T    gˆ k = yk − ϕˆ k Wk−1 yk − ϕˆk − ykT (Wk + Uk )−1 yk ,

(8.47)

where Uk =

 n˜ ∞   τ =0

=

i=1

n˜  n˜  i=1 j=1

8.4.2.3

 λτk,i Ωk,i

Uk,∗

 n˜ 

T λτk,i Ωk,i

i=1

1 Ωk,i Uk,∗ Ωk,T j . 1 − λk,i λk, j

(8.48)

Algorithm Properties

The following theorem establishes the convergence of Uk,∗ and gk , the proof can be found in [15].

8 Active Detection Against Replay Attack: A Survey …

165

Theorem 8.10 Assuming that A is strictly stable and Assumption 2 holds. If 0 < υ < 1, then for any  > 0, the following limits hold almost surely: lim

k→∞

Uk,∗ − U∗ gˆ k − gk = 0, lim −γ + = 0, k→∞ k k −γ +

(8.49)

where γ = (1 − υ)/2 > 0. In particular, Uk,∗ and gˆ k almost surely converge to U∗ and gk , respectively.

8.4.2.4

Simulation Result

In this section, the performance of the proposed algorithm is evaluated. We will apply the proposed online “learning” approach to a numerical example. First, we choose m = 3, n = 5, p = 2 and A, B, C are all randomly generated, with A being stable. It is assumed that X in (8.33), the covariance matrices Q and R are all identity matrices with proper dimensions. We assume that δ in (8.35) is equal to 10% of optimal LQG cost J0 . Figure 8.2 shows relative error Uk,∗ − U∗  F /U∗  F of the estimated Uk,∗ versus time k for different υs. From Fig. 8.2, one can see that the estimator error converges to 0 as time k goes to infinity and the convergence approximately follows a power law. From Theorem 8.10, we know that Uk,∗ − U∗ ∼ O(k −γ + ), where γ = (1 − υ)/2. However, from Fig. 8.2, it seems that the convergence speed of the error for different υ is comparable. Notice that Theorem 8.10 only provides an upper bound for the convergence rate. As a result, it would be interesting to quantify the exact impact of υ on the convergence rate, which we shall leave as a future research direction. Now we consider the detection performance of our online watermark signal design, after an initial inference period, where no attack is present. It is assumed

Fig. 8.2 Relative error of Uk,∗ for different υ. The black solid line denotes the relative error of Uk,∗ when υ = 0. The gray solid line is the relative error of Uk,∗ when υ = 1/3

166

H. Liu et al.

Fig. 8.3 The detection statistics versus time. The black solid line with circle markers is the true Neyman–Pearson statistics gk , assuming full system knowledge. The gray dashed line with cross markers denotes our estimated gˆ k

that the attacker records the sensor readings from time 104 + 1 to 104 + 100 and replays them to the system from time 104 + 101 to 104 + 200. Figure 8.3 shows the trajectory of the Neyman–Pearson statistic gk and our estimate gˆ k of gk for one simulation. Notice that gˆ k can track gk with high accuracy. Furthermore, both gˆ k and gk are significantly larger when the system is under replay attack (after time 104 + 101). Hence, one can conclude that even without parameter knowledge, we can successfully estimate gk and detect the presence of the replay attack.

8.4.3 A Multiplicative Watermarking Design Different from the additive physical watermark scheme, where the watermarking signal is injected to the control input in the above work, Riccardo M. G. Ferrari and André M. H. Teixeira proposed a multiplicative sensor watermarking scheme. It has been applied to detect several types of attacks including replay attacks [20], routing attacks [21], and false data injection attacks [22]. In this subsection, we mainly introduce the multiplicative watermarking scheme proposed in [20], in which each sensor’s output is separately watermarked. Correspondingly, the equalizing filters are equipped to reconstruct the real output signal from the watermarked data. About the proofs of theorems in this subsection, please refer to [20]. Consider the following system model as follows:

8 Active Detection Against Replay Attack: A Survey …

& P: & C : & R:

167

x p (k + 1) = A p x p (k) + B p u(k) + η(k) y p (k) = C p x p (k) + ξ(k) xcr (k + 1) = Ac xc (k) + Bc y˜ p (k) u(k) = Cc xc (k) + Dc y˜ p (k) xr (k + 1) = Ar xr (k) + Br u(k) + K r y˜ p (k) yr (k) = Cr xr (k) + Dr u(k) + Er y˜ p (k)

(8.50) ,

where all notations’ meaning and relative assumptions could be found in [20] and we omit them due to the space constraints. Here, define xc,r (k) = [xc (k)T xr (k)T ]T , the controller and detector dynamics can be represented as ⎧ ⎪ ⎨ xcr (k + 1) = Acr xcr (k) + Bcr y˜ p (k) yr (k) = Ccr xcr (k) + Dcr y˜ p (k) . Fcr : ⎪ ⎩ u(k) = Cu xcr (k) + Du y˜ p (k) 8.4.3.1

(8.51)

Multiplicative Watermarking and Equalizing Scheme

The main idea of a multiplicative sensor watermarking scheme is to pre-process the measurements through a filter parameterized by θ before transmitting them, denoted as sensor watermarking, and then to pre-process the received watermarked data through an equalizer filter parameterized by the very same θ before feeding them to the controller and anomaly detector, denoted as equalization [22]. Here, θ (k) is designed as a piecewise constant variable θ (k)  θ j ∈ Θ, for k j ≤ k < k j+1 , where Kθ  {k1 , . . . , k j , . . .} denotes the set of switching times and Θ  {θ1 , . . . , θ M } is the set of possible parameters [20]. For the watermarking step, the corresponding filters are denoted as W (θ ) and the watermarked measurements are denoted as y pw (k). For the equalization step, the equalizing filters are denoted as Q(θ ): & xw (k + 1) = Aw (θ )xw (k) + Bw (θ )y p (k) , W : y pw (k) = Cw (θ )xw (k) + Dw (θ )y p (k) & (8.52) xq (k + 1) = Aq (θ )xq (k) + Bq (θ ) y˜ pw (k) Q: , y pq (k) = Cq (θ )xq (k) + Dq (θ ) y˜ pw (k) where y pw (k) and y˜ pw (k) are employed to differentiate the watermarked data and the data received by the controller and anomaly detector. Then we will introduce how to design the parameters in this scheme. For the sake of simplicity and without loss of generality, we suppose that there is only one sensor. The watermark generator is represented as follows:

168

H. Liu et al.

y pw (k) =

N 

w A,(n) y pw (k − n) +

n=1

N 

w B,(n) y p (k − n),

(8.53)

n=0

where w A = [w A,(1) , . . . , w A,(N ) ]T ∈ R N and w B = [w B,(0) , . . . , w B,(N ) ]T ∈ R N +1 are the filter parameters. Consider that the objective of equalizing filters is to reconstruct the sensor measurement y(k), an intuitive approach is to derive the inverse of the respective filter, i.e., y pq (k) =

1 w B,(0)

 −

N 

w B,(n) y pq (k − n) + y˜ pw (k) −

n=0

N 

 w A,(n) y˜ pw (k − n) ..

n=1

(8.54) By using controllable canonical form, the corresponding parameters in (8.52) are designed as follows:  0 N −1,1 I N −1 , Aw (θ ) = w TA 

  0 N −1,1 , Bw = 1

Cw (θ ) = [· · · w B,(n) + w B,(0) w A,(n) · · · ], for n = 1, . . . , N ,     0 N −1,1 I N −1 0 N −1,1 Aq (θ ) = , Bq = , −1 1 T w w B,(0) B w B,(0) Cq (θ ) = [· · · − w A,(n) −

w B,(n) · · · ], w B,(0)

for n = 1, . . . , N ,

Dw (θ ) = w B,(0) ,

Dq (θ ) =

1 w B,(0)

.

The following theorem characterizes the performance of the system with the multiplicative scheme under no replay attacks. Theorem 8.11 ([20]) Consider the closed-loop system with watermarked sensors described by (8.50) and (8.52). Assume that theta(k) is updated at times k ∈ Kθ . The performance of the closed-loop system equipped with sensor watermarking filters and equalizing filters is same as the performance of the nominal closed-loop system (8.52) if and only if the states of Q(θ ) and W (θ )are such that xq (k) = xw (k) for all k ∈ Kθ with no replay attacks. We now present the main result of this section regarding the detectability of replay attacks under the proposed watermarking scheme. Theorem 8.12 ([20]) Consider a replay attack that has recorded data from time kr = k0 − T to k f = k0 − T f , and let θ (k) = θ for kr ≤ k ≤ k f . Suppose the recorded data is replayed from time k0 and let θ (k) = θ for k ≥ k0 . During the replay attack, yr converges asymptotically to yr for y p if and only if θ = θ . From Theorem 8.12, one can obtain that when θ = θ , the undetectability of the replay attack is not guaranteed a priori, since it depends on the exogenous input y p .

8 Active Detection Against Replay Attack: A Survey …

8.4.3.2

169

Detection and Isolation of Replay Attacks

In this subsection, through the multiplicative watermarking scheme, an anomaly detector and a corresponding threshold will be derived. For more details about the isolation and identification of relay attacks, please refer to [20]. It is assumed that there is no replay attacks for 0 ≤ k < k0 , where k0 is the start attack time. Furthermore, the variables x p , x pw , and u remain bounded before being attacked. Here, (A p , C p ) is assumed as a detectable pair [20]. The detector is designed in the following form [31]: & xˆ p (k + 1) = A p xˆ p (k) + B p u(k) + K (y pq (k) − yˆ p (k) (8.55) yˆ p (k) = C p xˆ p (k), where xˆ p and yˆ p are estimates of x p and y p and the gain matrix K is chosen to satisfy that Ar = A p − K C p is Schur. Set xr = xˆ p and the estimation error   x p − xˆ p , under the scenario with attacks, the detection residual dynamics are as follows: &

(k + 1) = Ar (k) − K ξ(k) + η(k) yr (k) = C p (k) + ξ(k),

(8.56)

and the detection threshold ith component is computed as   k−1  y¯r (k)  α i (δ i )k−1−i (η(h) ¯ + K ξ¯ (h)) + (δ i )k x¯r (0) + ξ¯ (k), h=0

where α i and δ i are two constants such that C p,(i) (Ar )k  ≤ α i (δ i )k ≤ C p,(i)  · ¯ x¯r (0), and ξ¯ are (Ar )k  with C p,(i) being the ith row of matrix C p . Furthermore, η, upper bounds on the norms of, respectively, η, xr (0), and ξ [20]. Theorem 8.13 ([20]) If there exists a time index kd > k0 and a component i ∈ 1, . . . , n y such that during a cyber-replay attack the following inequality holds: ( ( ⎤ ⎡ ( ( k d −1 ( ( kd −1−h (B Δu(h) − K Δy (h))⎦ + Δy (k)( (C p,(i) ⎣ (A ) r p p p ( ( ( ( h=k0 >2α i

−1−h kd 

(δ i )kd −1−h (η(h) ¯ + K ξ¯ (h)) + (δ i )kd −k0 (α i x¯r (k0 ) + y¯r,(i) (k0 )) + 2ξ¯ (kd ),

h=0

where y¯r,(i) (k0 )  maxx p ∈S x p |yr,(i) (k0 )| and Δu  u − u is the difference between delayed and actual input, then the attack will be detected at the time instant kd .

170

H. Liu et al.

8.5 Conclusion and Future Work In this chapter, we introduced a basic physical watermarking scheme where a random noise is injected into the system to excite the system and check whether the system responds to the watermark signal in accordance to the dynamical model of the system. The optimal watermark is derived via solving an optimization problem which aims to achieve the optimal trade-off between control performance and detection performance. Then three interesting extensions about the watermark design were presented in detail. For future works, it is worth noticing how to apply the watermark scheme to more complicated systems. Designing more efficient algorithms regarding the watermarking signal against more intelligent attackers is also interesting. Also, it is of great interest to test the proposed algorithms in CPS to verify their performance in a real scenario.

References 1. Slay, J., Miller, M.: Lessons learned from the maroochy water breach. In: International Conference on Critical Infrastructure Protection, pp. 73–82. Springer (2007) 2. Karnouskos, S.: Stuxnet worm impact on industrial cyber-physical system security. In: IECON 2011-37th Annual Conference of the IEEE Industrial Electronics Society, pp. 4490–4494. IEEE (2011) 3. Whitehead, D.E., Owens, K., Gammel, D., Smith, J.: Ukraine cyber-induced power outage: Analysis and practical mitigation strategies. In: 2017 70th Annual Conference for Protective Relay Engineers (CPRE), pp. 1–8. IEEE (2017) 4. Wikipedia contributors.: 2019 venezuelan blackouts — Wikipedia, the free encyclopedia (2019). https://en.wikipedia.org/w/index.php?title=2019_Venezuelan_blackouts& oldid=908146648. Accessed 21 Aug 2019 5. Mo, Y., Weerakkody, S., Sinopoli, B.: Physical authentication of control systems: designing watermarked control inputs to detect counterfeit sensor outputs. IEEE Control Syst. Mag. 35(1), 93–109 (2015) 6. Wikipedia contributors.: Digital watermarking — Wikipedia, the free encyclopedia (2019). https://en.wikipedia.org/w/index.php?title=Digital_watermarking&oldid=910119309. Accessed 22 Aug 2019 7. Mo, Y., Sinopoli, B.: Secure control against replay attacks. In: 2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 911–918. IEEE (2009) 8. Mo, Y., Chabukswar, R., Sinopoli, B.: Detecting integrity attacks on scada systems. IEEE Trans. Control Syst. Technol. 22(4), 1396–1407 (2013) 9. Chabukswar, R., Mo, Y., Sinopoli, B.: Detecting integrity attacks on scada systems. IFAC Proc. Vol. 44(1), 11, 239–11, 244 (2011) 10. Khazraei, A., Kebriaei, H., Salmasi, F.R.: A new watermarking approach for replay attack detection in lqg systems. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 5143–5148. IEEE (2017) 11. Khazraei, A., Kebriaei, H., Salmasi, F.R.: Replay attack detection in a multi agent system using stability analysis and loss effective watermarking. In: 2017 American Control Conference (ACC), pp. 4778–4783. IEEE (2017)

8 Active Detection Against Replay Attack: A Survey …

171

12. Weerakkody, S., Ozel, O., Sinopoli, B.: A bernoulli-gaussian physical watermark for detecting integrity attacks in control systems. In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 966–973. IEEE (2017) 13. Satchidanandan, B., Kumar, P.R.: Dynamic watermarking: active defense of networked cyberphysical systems. Proc. IEEE 105(2), 219–240 (2016) 14. Satchidanandan, B., Kumar, P.: On the design of security-guaranteeing dynamic watermarks. IEEE Control Syst. Lett. 4(2), 307–312 (2019) 15. Liu, H., Yan, J., Mo, Y., Johansson, K.H.: An on-line design of physical watermarks. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 440–445. IEEE (2018) 16. Rubio-Hernán, J., De Cicco, L., Garcia-Alfaro, J.: Revisiting a watermark-based detection scheme to handle cyber-physical attacks. In: 2016 11th International Conference on Availability, Reliability and Security (ARES), pp. 21–28. IEEE (2016) 17. Rubio-Hernan, J., De Cicco, L., Garcia-Alfaro, J.: On the use of watermark-based schemes to detect cyber-physical attacks. EURASIP J. Inf. Secur. 2017(1), 8 (2017) 18. Rubio-Hernan, J., De Cicco, L., Garcia-Alfaro, J.: Event-triggered watermarking control to handle cyber-physical integrity attacks. In: Nordic Conference on Secure IT Systems, pp. 3– 19. Springer (2016) 19. Fang, C., Qi, Y., Cheng, P., Zheng, W.X.: Cost-effective watermark based detector for replay attacks on cyber-physical systems. In: 2017 11th Asian Control Conference (ASCC), pp. 940– 945. IEEE (2017) 20. Ferrari, R.M., Teixeira, A.M.: Detection and isolation of replay attacks through sensor watermarking. IFAC-PapersOnLine 50(1), 7363–7368 (2017) 21. Ferrari, R.M., Teixeira, A.M.: Detection and isolation of routing attacks through sensor watermarking. In: 2017 American Control Conference (ACC), pp. 5436–5442. IEEE (2017) 22. Teixeira, A.M., Ferrari, R.M.: Detection of sensor data injection attacks with multiplicative watermarking. In: 2018 European Control Conference (ECC), pp. 338–343. IEEE (2018) 23. Mehra, R.K., Peschon, J.: An innovations approach to fault detection and diagnosis in dynamic systems. Automatica 7(5), 637–640 (1971) 24. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: Revealing stealthy attacks in control systems. In: 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1806–1813. IEEE (2012) 25. Hoehn, A., Zhang, P.: Detection of covert attacks and zero dynamics attacks in cyber-physical systems. In: 2016 American Control Conference (ACC), pp. 302–307. IEEE (2016) 26. Mo, Y., Sinopoli, B.: False data injection attacks in control systems. In: Preprints of the 1st Workshop on Secure Control Systems, pp. 1–6 (2010) 27. Liu, Y., Ning, P., Reiter, M.K.: False data injection attacks against state estimation in electric power grids. ACM Trans. Inf. Syst. Secur. (TISSEC) 14(1), 13 (2011) 28. Scharf, L.L.: Statistical Signal Processing, vol. 98. Addison-Wesley, Reading (1991) 29. Chonavel, T.: Statistical Signal Processing: Modelling and Estimation. Springer Science & Business Media (2002) 30. Delsarte, P., Genin, Y., Kamp, Y.: Orthogonal polynomial matrices on the unit circle. IEEE Trans. Circuits Syst. 25(3), 149–160 (1978) 31. Ferrari, R.M., Parisini, T., Polycarpou, M.M.: A robust fault detection and isolation scheme for a class of uncertain input-output discrete-time nonlinear systems. In: 2008 American Control Conference, pp. 2804–2809. IEEE (2008)

Chapter 9

Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme Riccardo M. G. Ferrari and André M. H. Teixeira

Abstract This chapter addresses the problem of detecting stealthy data injection attacks on sensor measurements in a networked control system. A multiplicative watermarking scheme is proposed, where the data from each sensor is post-processed by a time-varying filter called watermark generator. At the controller’s side, the watermark is removed from each channel by another filter, called the watermark remover, thus reconstructing the original signal. The parameters of each remover are matched to those of the corresponding generator, and are supposed to be a shared secret not known by the attacker. The rationale for time-varying watermarks is to allow model-based schemes to detect otherwise stealthy attacks by constantly introducing mismatches between the actual and the nominal dynamics used by the detector. A specific model-based diagnosis algorithm is designed to this end. Under the proposed watermarking scheme, the robustness and the detectability properties of the model-based detector are analyzed and guidelines for designing the watermarking filters are derived. Distinctive features of the proposed approach, with respect to other solutions like end-to-end encryption, are that the scheme is lightweight enough to be applied also to legacy control systems, the absence of side-effects such as delays, and the possibility of utilizing a robust controller to operate the closed-loop system in the event of the transmitter and receiver losing synchronization of their watermarking filters. The results are illustrated through numerical examples.

9.1 Introduction The penetration of Information Technologies (IT) hardware and software in current networked Industrial Control Systems (ICS) has grown significantly in recent times. This has led ICS to being vulnerable to a steadily increasing number of cyber-threats, R. M. G. Ferrari (B) Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands e-mail: [email protected] A. M. H. Teixeira Department of Electrical Engineering,Uppsala University, Uppsala, Sweden e-mail: [email protected] © Springer Nature Switzerland AG 2021 R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_9

173

174

R. M. G. Ferrari and A. M. H. Teixeira

as discussed in [12, 17, 25]. Thus, it must not come as a surprise that, in recent years, the control systems community became more and more attentive to the topic of cybersecurity, in addition to the established focus on safety [2, 3, 24]. A keystone in such endeavor is the introduction of rational adversary models for describing cyber-attack policies, thus differentiating knowledgeable and malicious adversaries with respect to faults. Such adversaries aim at exploiting existing vulnerabilities and limitations in traditional anomaly detection mechanisms, while remaining undetected. The concept of stealthy attacks has been investigated in [18] and [21, 24], among others. Among the proposed approaches to detecting stealthy attacks, [23] has shown how they can be detected by taking advantage of mismatches between the system’s and the attack policy’s initial conditions. Another stream of research considered instead active modifications to the system dynamics that could expose such otherwise stealthy attacks. For instance, [15] proposed a static multiple-sensors output coding scheme. Nonetheless, both approaches bear some limitations, such as the unrealistic requirement of controlling the plant’s initial condition or the control performances drop caused by active modifications. Other related approaches found in the literature have been inspired in the concept of watermarking. Watermarking, a classic approach for guaranteeing authenticity in the multimedia industry [19], has been recently proposed as a way to overcome such drawbacks while making stealthy attacks detectable by existing model-based anomaly detectors. An additive watermarking scheme has been introduced by [16] to detect replay attacks, where colored noise of known covariance is purposely injected in the actuators. A similar but distributed approach for interconnected microgrids was instead presented in [10]. However, the injection of an additive watermark in the actuators leads to decreased control performances, and does not guarantee the detection of additive stealthy attacks. As a way to tackle such limitations, in this chapter, we further extend the modular multiplicative watermarking scheme proposed in [5]. Such an approach is based on each sensor output being independently pre-processed via a time-varying singleinput single-output (SISO) watermark generator before transmission over the control network. A bank of matched watermark removers is included on the controller side, where the original sensors’ signals are reconstructed, thus preventing any control performances loss (Fig. 9.1). The approach is independent from the plant’s initial condition and does not require extra communication or coordination between multiple sensors during its operation. The proposed solution resembles a channel encryption scheme; indeed, watermarking can be interpreted as a lightweight mechanism enforcing authentication of the data and its source, albeit with weaker cryptographic guarantees than strong encryption schemes [20]. For the case of networked control systems, this weakness often translates into a strength. As watermarking requires lighter computational power, it is better suited to meet critical real-time constraints. Furthermore, as authentication and data integrity are in this scenario more important than data confidentiality, the use of strong cryptographic methods may be unwarranted. Additionally, as investigated in this chapter, a robust controller may still be able to stabilize the system

9 Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme The watermark parameters change over time Watermark generator

ypw a Attacker

y˜pw

+

False Data Attack

NETWORK

Watermark remover

The watermark is removed here

yp

175

PLANT

ypq Controller

u

Attack Detector Anomaly

Residual

yr

Detector

The data corruption is detected here

Fig. 9.1 Scheme of the proposed watermarking scheme under measurement false data injection attack

when the transmitter and receiver lose synchronization, which is not the case when standard cryptographic schemes are used. The rationale behind the proposed watermarking scheme is to make stealthy manin-the-middle attacks detectable, by having them cause an imperfect reconstruction of the sensors’ measurements. Such condition would cause a detection by a suitable anomaly detector [5, 6, 8]. In particular, in this chapter, we introduce novel watermark generators and removers, implemented as hybrid switching SISO systems with piecewise linear dynamics. The design of such switching filters is addressed and it is shown how they can guarantee perfect reconstruction of the plant outputs. Furthermore, their time-varying properties are linked to conditions on the detectability of otherwise stealthy attacks. Stability of the closed-loop system with the proposed watermarking scheme is also analyzed, including the case of constant but mismatched parameter filters at the generator and remover. The outline of the chapter is as follows. In Sect. 9.2, we describe the problem formulation as well as define stealthy data injection attacks that are undetectable without watermarking. The design of the switching sensor watermarking scheme is addressed in Sect. 9.3, where design guidelines for the watermarking scheme and its synchronization protocol are provided. An application example is provided as well, to illustrate the proposed approach. Detectability properties are investigated in Sect. 9.4, while numerical results illustrating the effectiveness of the proposed solutions are reported in Sect. 9.5. The chapter concludes with final remarks and future work directions in Sect. 9.6.

9.2 Problem Formulation Following the modeling framework for secure control systems presented in [24], in this section, the control system under attack is described, together with the adversary

176

R. M. G. Ferrari and A. M. H. Teixeira

Fig. 9.2 Scheme of the networked control system under measurement false data injection attack, without the proposed watermarking and detection architecture

yp

PLANT

PLANT

a Attacker

+

False Data Attack

NETWORK

y˜y˜pw p

Controller

u

Attack Detector Anomaly Detector

yr

model and known limitations to its detection. The conceptual structure of the closedloop system under attack is presented in Fig. 9.2. The control system is composed by a physical plant (P), a feedback controller (C), and an anomaly detector (R). The physical plant, controller, and anomaly detector are modeled as discrete-time linear systems:  P:  C:  R:

x p [k + 1] = A p x p [k] + B p u[k] + η p [k] y p [k] = C p x p [k] + ξ p [k] xc [k + 1] = Ac xc [k] + Bc y˜ p [k] u[k] = Cc xc [k] + Dc y˜ p [k]

,

(9.1)

xr [k + 1] = Ar xr [k] + Br u[k] + K r y˜ p [k] , yr [k] = Cr xr [k] + Dr u[k] + Er y˜ p [k]

where x p [k] ∈ Rn p , xc [k] ∈ Rn c , and xr [k] ∈ Rnr are the state variables, u[k] ∈ Rn u is the vector of control actions applied to the process, y p [k] ∈ Rn y is the vector of plant outputs transmitted by the sensors, y˜ p ∈ Rn y is the data received by the detector and controller, and yr [k] ∈ Rn y is the residual vector that is evaluated for detecting anomalies. The variables η p [k] and ξ p [k], finally, denote the unknown process and measurement disturbances. Assumption 9.1 The uncertainties represented by η p [k] and ξ p [k] are unknown, but their norms are upper bounded by some known and bounded sequences η¯ p [k] and ξ¯ p [k]. The sensor measurements are exchanged through a communication network, and can thus be targeted by cyber-attacks that manipulate the data arriving at the receiver. At the plant side, the data transmitted by the sensors is denoted as y p [k] ∈ Rn y , while the received sensor data at the detector’s side is denoted as y˜ p [k] ∈ Rn y . The operation of the closed-loop system is monitored by the anomaly detector, based only on the closed-loop models and the available input and output data u[k] and y˜ p [k]. In particular, given the residue signal yr , an alarm is triggered if, for at least one time instant k, the following condition holds:

9 Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme

yr  p,[k,k+Nr )

 k+Nr −1   p  yr [ j] p ≥ y¯r [k],

177

(9.2)

j=k

where y¯r [k] ∈ R+ is a robust detection threshold and 1 ≤ p < +∞ and Nr ≥ 1 are design parameters. The main focus of this chapter is to investigate the detection of so-called false data injection cyber-attacks on sensors. This attack scenario, as well as fundamental limitations in its detectability, is described next.

9.2.1 False Data Injection Attacks In the present false data injection attack scenario, derived from [8], we consider a malicious adversary that is able to access and corrupt the measurements sent to the controller. This attack policy may be modeled as y˜ p [k] = y p [k] + a[k],

(9.3)

where a[k] is the malicious data corruption added to the measurements. Note that such a scenario may be revised to also include replay attacks as in [5], which is modeled as y˜ p [k] = y p [k − T ], where previously recorded data is forwarded again to the controller. Similarly, even routing attacks can be considered as in [7], which are modeled as y˜ p [k] = Ry p [k], where R is a routing matrix. In this chapter, the false data injection attacks are examined in more detail, while brief remarks are given for the case of replay attacks. The attack is designed by the adversary according to a set of attack goals and constraints, attack resources, and system knowledge [24]. These aspects are further described below. Attack goals and constraints: The adversary aims at corrupting the sensor data so that the system’s operation is disrupted, while remaining undetected by the anomaly detector. Disruption and disclosure resources: The false data injection attack on the communication channels requires that the attacker is able to both read the transmitted data and corrupt it. Therefore, we assume that the attacker has the required disclosure resources to eavesdrop on the transmitted data, as well as the disruption resources to corrupt the measurement data received by the controller and anomaly detector. Model knowledge: Taking a worst-case perspective, the adversary is assumed to have access to the detailed nominal model of the plant, (A p , B p , C p ). Fundamental limitations to detectability: As it is well-known in the literature [18, 24], an attacker with detailed knowledge of the plant may be able to inject false data that mimics the behavior of the plant, and therefore bypass the detection of a linear time-invariant detector. In particular, this chapter discusses the detectability of attacks according to the following definition.

178

R. M. G. Ferrari and A. M. H. Teixeira

Definition 9.1 Suppose that the closed-loop system is at equilibrium such that yr [−1] = 0, and that there are no unknown disturbances, i.e., η p [k] = 0 and ξ p [k] = 0 for all k. An anomaly occurring at k = ka ≥ 0 is said to be ε-stealthy if yr  p,[k,k+Nr ) ≤ ε for all k ≥ ka . In particular, an ε-stealthy anomaly is termed as simply stealthy, whereas a 0-stealthy anomaly is named undetectable. More specifically, we focus on undetectable attacks that are able to produce no visible change to the residual generated by the anomaly detector. To characterize such a class of attacks, the following definition is required. Definition 9.2 Consider the system Σ = (A, B, C, D) with input a[k] and output ¯ g) ∈ C × Rn x × Rn u is a y[k], where B ∈ Rn x ×n u and C ∈ Rn y ×n x . A tuple (λ, x, zero dynamics of Σ if it satisfies      0 λIn x − A −B x¯ = , x¯ = 0. C D g 0

(9.4)

Moreover, the input a[k] = λk−k0 g is called an output-zeroing input that, for ¯ yields y[k] = 0 for all k ≥ k0 . x[k0 ] = x, Next we apply the previous definition to the closed-loop system under sensor false data injection attack (see (9.1) and (9.3)), and characterize a specific class of undetectable attacks that complies with the previously described adversary model. Consider the plant under a sensor data attack, which begins at time k = ka . The respective dynamics and the data received by the controller and anomaly detector are described as  x p [k + 1] = A p x p [k] + B p u[k] (9.5) y˜ p [k] = C p x p [k] + a[k]. Based on (9.1), the trajectories of the closed-loop system under attack, with η p [k] = ζ p [k] = 0, are described by 

x[k + 1] = Ax[k] + Ba[k] yr [k] = C x[k] + Da[k]

, ∀ k ≥ ka ,



where x = x is the augmented state of the closed-loop system, and the p x c xr matrices A, B, C, D are defined appropriately from (9.1). Observing that y˜ p [k] serves as input to both the controller and the anomaly detector, we conclude that an output-zeroing attack with respect to y˜ p [k] would lead to no change on the controller and anomaly detector, and would thus be undetectable. Motivated by this observation, we consider an output-zeroing attack based only on the plant dynamics (9.5), computed while assuming u[k] = 0 and thus captured by the dynamical system Σ = (A p , 0, C p , In y ). From Definition 9.2, a zero-dynamics tuple (λ, x¯a , g) of Σ satisfies

$$\begin{bmatrix} \lambda I_{n_x} - A_p & 0 \\ C_p & I_{n_y} \end{bmatrix} \begin{bmatrix} -\bar{x}_a \\ g \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \tag{9.6}$$

from which we conclude that $\bar{x}_a$ is an eigenvector of $A_p$ associated with $\lambda$, $g = C_p \bar{x}_a$, and the corresponding attack signal is $a[k] = \lambda^{k-k_a} C_p \bar{x}_a$. In fact, for the case of sensor attacks, a generic attack signal can be generated by the autonomous system

$$\begin{cases} x_p^a[k+1] = A_p x_p^a[k] \\ a[k] = C_p x_p^a[k] \end{cases} \quad \forall\, k \geq k_a, \tag{9.7}$$

with an arbitrary initial condition $x_p^a[k_a]$ chosen by the adversary.

Let us now look at the effect of such an attack on the closed-loop system (9.1). Combining (9.1) with (9.7), we observe that the compromised measurement output for $k \geq k_a$ is described by

$$\tilde{y}_p[k] = C_p A_p^{k-k_a}\left(x_p[k_a] + x_p^a[k_a]\right) + \sum_{i=k_a}^{k-1} C_p A_p^{k-1-i} B_p u[i].$$

Thus, from the output's perspective, the false data injection attack effectively induces a trajectory identical to that of an impulsive jump of the plant's state at $k = k_a$, from $x_p[k_a]$ to $x_p[k_a] + x_p^a[k_a]$. Given that $\tilde{y}_p[k]$ is the input to the detector and controller, these components will react precisely as if the system experienced the aforementioned impulsive jump. Therefore, the smaller the jump (i.e., $x_p^a[k_a]$), the harder it will be to detect the attack.

As an example, and without loss of generality, let the plant be initialized at the origin, $x_p[k_a] = 0$. In this case, the impulsive jump essentially corresponds to a nonzero initial condition. Hence, if the closed-loop system is stable, then the impulsive jump induced by the attack will result in an asymptotically vanishing transient response, akin to the cases in [23].

As discussed above, (9.6) characterizes a set of undetectable attacks on sensors that essentially mimic a possible trajectory of the system, and thus the anomaly detector cannot distinguish between the attack and a normal transient trajectory. Next we describe a multiplicative watermarking scheme which extends the work in [5, 8] and that enables the detection of such attacks, while not affecting the performance of the closed-loop system in normal conditions.
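Before moving on, the following Python sketch (illustrative only, not taken from the chapter) shows how such a zero-dynamics attack signal of (9.7) can be built from an eigenpair of the plant matrix. The plant matrices anticipate the numerical example of Sect. 9.3.4, while the attack start time and the scaling of the initial condition are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch: build the sensor attack a[k] = C_p A_p^(k-ka) x_a of Eq. (9.7),
# with x_a an eigenvector of A_p (here the one with the largest-magnitude eigenvalue).
# A_p, C_p follow the example of Sect. 9.3.4; the scaling and k_a are illustrative.
A_p = np.array([[1.9173, -0.9172],
                [1.0000,  0.0000]])
C_p = np.array([[0.0389, 0.0378]])

eigvals, eigvecs = np.linalg.eig(A_p)
idx = np.argmax(np.abs(eigvals))          # pick the dominant (here unstable, real) mode
lam, x_a = eigvals[idx].real, eigvecs[:, idx].real
x_a = 1e-4 * x_a / np.linalg.norm(x_a)    # small initial condition -> slow, stealthy growth

def attack_signal(k, k_a=300):
    """Zero-dynamics attack sample at discrete time k (zero before the attack starts)."""
    if k < k_a:
        return np.zeros(C_p.shape[0])
    return (C_p @ (lam ** (k - k_a) * x_a)).ravel()

a = np.array([attack_signal(k) for k in range(25000)])
```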

9.3 Multiplicative Watermarking Scheme

To detect the presence of man-in-the-middle attacks, we consider the watermarking scheme illustrated in Fig. 9.1, where the following elements are added to the control system: a Watermark Generator W and a Watermark Remover Q. To secure the system against adversaries, the watermark generator and remover should share a private key unknown to the adversary, similar to cryptographic schemes. Furthermore, for increased security, the private key must be updated over time. Essentially, the presence of the private watermarking filter introduces an asymmetry between the adversary's knowledge and the watermarked plant, as illustrated in Fig. 9.3.


Fig. 9.3 The role of multiplicative watermarking in attack detection. The attacker assumes the data being transmitted over the network is produced by the plant, of which he/she knows a model. Instead, it is produced by the cascade of the plant and of the watermark generator. Such asymmetry has a key role in making the attack detectable

This asymmetry is the key to enabling the attack's detection. These aspects will guide the design of the multiplicative watermarking scheme, as described in the remainder of this section.

9.3.1 Watermarking Scheme: A Hybrid System Approach

The watermark generator W and remover Q are designed as synchronized hybrid discrete-time linear systems, which both experience discrete jumps at the time indexes contained in the sequence $\mathcal{T} \triangleq \{k^1, \ldots, k^N\}$. As anticipated, and as will be detailed later, this switching behavior enables the detection of stealthy data injection attacks. Between switches, that is, for $k^i \leq k < k^{i+1}$, the dynamics of W and Q are described by the following state-space equations:

$$\mathcal{W}: \begin{cases} x_w[k+1] = A_w(\theta_w[k])\, x_w[k] + B_w(\theta_w[k])\, y_p[k] \\ y_w[k] = C_w(\theta_w[k])\, x_w[k] + D_w(\theta_w[k])\, y_p[k] \end{cases}$$
$$\mathcal{Q}: \begin{cases} x_q[k+1] = A_q(\theta_q[k])\, x_q[k] + B_q(\theta_q[k])\, y_w[k] \\ y_q[k] = C_q(\theta_q[k])\, x_q[k] + D_q(\theta_q[k])\, y_w[k], \end{cases} \tag{9.8}$$

where the vectors $x_w, x_q \in \mathbb{R}^{n_w}$ and $y_w, y_q \in \mathbb{R}^{n_y}$ represent, respectively, the states of the watermark generator W and of the watermark remover Q and their outputs. Each component, or channel, of the output $y_p$ shall be watermarked independently, thus leading the matrices $A_w, A_q \in \mathbb{R}^{n_w \times n_w}$; $B_w, B_q \in \mathbb{R}^{n_w \times n_y}$; $C_w, C_q \in \mathbb{R}^{n_y \times n_w}$; and $D_w, D_q \in \mathbb{R}^{n_y \times n_y}$ to have a block-diagonal structure. This will be denoted as $A_w = \mathrm{blkdiag}(A_w^1, \ldots, A_w^{n_y})$, where the blocks $A_w^i$ have suitable sizes, and similarly for the other matrices in Eq. (9.8).

The vectors $\theta_w, \theta_q \in \mathbb{R}^{n_\theta}$ denote piecewise constant parameters affecting the dynamics and constitute the private key used by W and Q to generate and remove the watermark. These parameters are updated at switching times, and their values can be defined via two sequences $\Theta_W \triangleq \{\theta_w[k^1], \ldots, \theta_w[k^N]\}$ and $\Theta_Q \triangleq \{\theta_q[k^1], \ldots, \theta_q[k^N]\}$, respectively. Moreover, the internal states of the watermark generator and remover are also affected by discrete jumps at switching times. In particular, their values at a switching time $k = k^i$ will not be determined by propagating Eq. (9.8) one time step forward from $k = k^i - 1$, but will be defined via two further sequences: $X_W^+ \triangleq \{x_w^+[k^1], \ldots, x_w^+[k^N]\}$ and $X_Q^+ \triangleq \{x_q^+[k^1], \ldots, x_q^+[k^N]\}$, respectively. The notation $x_w^+[k^i]$ is introduced by drawing on the hybrid systems literature [11, 22] to stress that we denote the value to which the variable $x_w$ is reset after a switch, rather than the value obtained by propagating Eq. (9.8) forward.

Remark 9.1 The sequences $\mathcal{T}$, $\Theta_W$, $\Theta_Q$, $X_W^+$, and $X_Q^+$, which define the switches of W and Q, can either be assumed to be defined offline a priori, or can be independently computed online by W and Q in a way that guarantees synchronicity of the two. Both approaches are acceptable in practice, and are equivalent with respect to the scope and goals of the present work.

9.3.2 Watermarking Scheme Design Principles

In the following, we consider the watermarking scheme's effect on the closed-loop system in the absence of attacks. As opposed to existing additive watermarking approaches such as [16], our aim is to design the watermark generator and remover so that there is no performance degradation in the absence of attacks. In order to meet this goal, the watermark generator and remover must essentially act as an encoder and a decoder, respectively, where one is the inverse of the other. Hence, we impose the following design rules.

Assumption 9.2 The sequences of parameter vectors $\Theta_W$ and $\Theta_Q$, and the dependence of the matrices $A_w, B_w, C_w, D_w$ on $\theta_w$ and of the matrices $A_q, B_q, C_q, D_q$ on $\theta_q$, are such that, for every instant $k$:
1. W and Q are stable and invertible;
2. the inverses of W and Q are stable;
3. $\theta_w = \theta_q$ implies that Q is the inverse of W.

Naturally, these conditions are necessary ones, as they ensure that $y_q[k]$ converges to $y_p[k]$ asymptotically. Moreover, these design rules may be met by the following choice of state-space representation:

$$D_q C_w + C_q = 0, \quad A_q + B_q C_w = A_w, \quad B_q D_w = B_w, \quad D_q D_w = I_{n_y}, \tag{9.9}$$

as concluded by examining the cascade of the generator and remover and its state-space representation. However, the above conditions are not sufficient, as they only provide an asymptotic result. An additional condition must be imposed on the initialization of the internal states at each switching time, so that no transient behavior is induced by the watermarking scheme.
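As an illustration of these design rules, the following sketch (not part of the original chapter) constructs the remover Q as the exact state-space inverse of a generator W with invertible direct feed-through $D_w$, and numerically checks the relations in (9.9); the matrices used are random placeholders.

```python
import numpy as np

# Sketch: given a watermark generator (A_w, B_w, C_w, D_w) with invertible D_w,
# build the remover Q as its exact state-space inverse and verify the algebraic
# relations of (9.9). The generator matrices below are random placeholders.
rng = np.random.default_rng(1)
n_w, n_y = 3, 1
A_w = 0.3 * rng.standard_normal((n_w, n_w))
B_w = rng.standard_normal((n_w, n_y))
C_w = rng.standard_normal((n_y, n_w))
D_w = np.eye(n_y)                      # FIR-like generator: unit direct feed-through

D_q = np.linalg.inv(D_w)
B_q = B_w @ D_q
C_q = -D_q @ C_w
A_q = A_w - B_q @ C_w                  # inverse system: x_q[k+1] = A_q x_q + B_q y_w, y_q = C_q x_q + D_q y_w

assert np.allclose(D_q @ C_w + C_q, 0)
assert np.allclose(A_q + B_q @ C_w, A_w)
assert np.allclose(B_q @ D_w, B_w)
assert np.allclose(D_q @ D_w, np.eye(n_y))
```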


Assumption 9.3 The switching times and the corresponding jump updates are designed such that, for every switching instant $k^i \in \mathcal{T}$:
1. $\theta_w[k^i] = \theta_q[k^i]$;
2. $x_w^+[k^i] = x_q^+[k^i]$.

The first condition ensures that the generator and remover are synchronous and simultaneously update their parameters to the same value. The second condition, together with the state-space relations in (9.9), guarantees that no transient mismatch occurs between the internal states of the generator and the remover. Consequently, examining the cascade system QW under these conditions, one concludes that the relation $y_p[k] = y_q[k]$ holds for all time instants, which in turn implies that the multiplicative watermarking scheme is transparent under no-attack conditions and does not affect the closed-loop system operation. For a detailed formal proof, the reader is invited to refer to [8].

9.3.3 Stability Analysis

In this section, we investigate the stability of the closed-loop system with the proposed watermarking scheme when Assumption 9.3 is not satisfied, and in the absence of attacks. Since the controller design is oblivious to the mismatch between the filters, determining stability of the closed-loop system with mismatched filter parameters is a robust stability problem with multiplicative model uncertainty, where the uncertainty is in fact a hybrid system. In the following, we restrict our attention to the inter-switching times (i.e., with constant mismatched parameters), during which the uncertainty behaves as a linear time-invariant system.

We start by formulating the nominal system and the uncertainty under analysis. The key steps in our stability analysis are, first, to rewrite the closed-loop system with mismatched filters as the nominal closed-loop system (without filters) connected in feedback with a system composed of the mismatched filters, and, second, to apply classical robust stability results to the feedback interconnection, in terms of the $\mathcal{H}_\infty$ norm of each subsystem. The first step is accomplished by rewriting $\tilde{y}_p[k] = y_q[k]$ as $\tilde{y}_p[k] = y_p[k] + \Delta y_q[k]$, where $\Delta y_q[k]$ is described by

$$\mathcal{D}(\theta_w, \theta_q): \begin{cases} \begin{bmatrix} x_w[k+1] \\ x_q[k+1] \end{bmatrix} = \begin{bmatrix} A_w(\theta_w) & 0 \\ B_q(\theta_q) C_w(\theta_w) & A_q(\theta_q) \end{bmatrix} \begin{bmatrix} x_w[k] \\ x_q[k] \end{bmatrix} + \begin{bmatrix} B_w(\theta_w) \\ B_q(\theta_q) D_w(\theta_w) \end{bmatrix} y_p[k] \\[4pt] \Delta y_q[k] = \begin{bmatrix} D_q(\theta_q) C_w(\theta_w) & C_q(\theta_q) \end{bmatrix} \begin{bmatrix} x_w[k] \\ x_q[k] \end{bmatrix} + \left( D_q(\theta_q) D_w(\theta_w) - I_{n_y} \right) y_p[k]. \end{cases} \tag{9.10}$$


Note that the system $\mathcal{D}(\theta_w, \theta_q)$ has $y_p[k]$ as its input and $\Delta y_q[k]$ as its output. Furthermore, observe that, under Assumption 9.3 and the relations in (9.9), we have $\Delta y_q[k] = 0$ for all $k$, which corroborates the statement in Sect. 9.3.2 that matched watermarking filters do not affect the performance of the closed-loop system.

Next, recalling that $\tilde{y}_p[k] = y_p[k] + \Delta y_q[k]$, we consider the nominal closed-loop system (9.1) as seen from the input $\Delta y_q[k]$ to the output $y_p[k]$, which is denoted as $S_{\Delta y_q, y_p}$. Given the above definitions of $\mathcal{D}(\theta_w, \theta_q)$ and $S_{\Delta y_q, y_p}$, we are now in a position to carry out the second step of the robust stability analysis. In fact, note that the perturbed closed-loop system can be described as the nominal closed-loop system, $S_{\Delta y_q, y_p}$, interconnected in feedback with $\mathcal{D}(\theta_w, \theta_q)$. Defining $\gamma(\Sigma)$ as the $\mathcal{H}_\infty$ norm of a linear system $\Sigma$, the following stability result directly follows from classical results on robust stability [26].

Theorem 9.1 ([9]) Let the generator W and the remover Q be non-synchronized at a switching time instant $k^i$, and assume no future switching occurs. Then the closed-loop system and watermarking filters are robustly asymptotically stable if

$$\gamma\!\left(S_{\Delta y_q, y_p}\right)\, \gamma\!\left(\mathcal{D}(\theta_w[k^i], \theta_q[k^i])\right) \leq 1.$$

Although Theorem 9.1 gives only a sufficient condition, it allows for a simpler design of the filter parameters, by imposing two $\mathcal{H}_\infty$-norm constraints for each pair of filter parameters. The next result formalizes this statement.

Corollary 9.1 ([9]) Let the generator W and the remover Q be non-synchronized at a switching time instant $k^i$, and assume no future switching occurs. Then the closed-loop system and watermarking filters are robustly asymptotically stable if $W(z; \theta_i)$, $W^{-1}(z; \theta_i)$, $W(z; \theta_j)$, and $W^{-1}(z; \theta_j)$ are stable for every choice of filter parameters $\theta_i, \theta_j \in \Theta$, and, for all $\theta_i, \theta_j \in \Theta$ with $\theta_j \neq \theta_i$, the following frequency-domain constraint is satisfied for all $z \in \mathbb{C}$ on the unit circle:

$$\left| W(z; \theta_i) - W(z; \theta_j) \right| \leq \gamma\!\left(S_{\Delta y_q, y_p}\right)^{-1} \left| W(z; \theta_j) \right|. \tag{9.11}$$

Note that these frequency domain inequalities ensuring robust stability could be enforced by requiring different parameters θi and θ j to be sufficiently close, depending on the H∞ -norm of the nominal closed-loop system. On the other hand, to enable the detection of the mismatch and replay attacks, one desires that the filter parameters are as different as possible. Therefore, one must trade off robust stability and detectability of filter mismatches.
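The frequency-domain constraint (9.11) can be checked numerically on a grid of points of the unit circle. The sketch below is one possible implementation for FIR watermark filters; the value of $\gamma(S_{\Delta y_q, y_p})$ (here `gamma_cl`) and the filter parameters are placeholders that would have to come from the actual closed-loop design.

```python
import numpy as np

# Sketch of a numerical check of the frequency-domain condition (9.11) for a pair of
# FIR watermark filters. gamma_cl stands for the H-infinity norm of the nominal
# closed loop S_{Delta y_q, y_p} and is a placeholder value here.
def W_of_z(w_B, z):
    return sum(w * z ** (-k) for k, w in enumerate(w_B))

def satisfies_9_11(w_i, w_j, gamma_cl, n_grid=2000):
    omega = np.linspace(0.0, np.pi, n_grid)
    z = np.exp(1j * omega)                      # points on the unit circle
    lhs = np.abs(W_of_z(w_i, z) - W_of_z(w_j, z))
    rhs = np.abs(W_of_z(w_j, z)) / gamma_cl
    return np.all(lhs <= rhs)

theta_i = [1.0,  0.04, -0.02,  0.03]            # illustrative filter parameters
theta_j = [1.0, -0.03,  0.05, -0.01]
print(satisfies_9_11(theta_i, theta_j, gamma_cl=4.0))
```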

9.3.4 An Application Example In order to illustrate the application of the proposed watermarking technique, we will now introduce an example. We will make use of an unstable LTI plant from the database maintained by [1] and, in particular, of the fluidized bed system discussed


in [14]. The plant is described there by the following unstable second-order transfer function:

$$G_p(s) = \frac{1}{(s + 0.8695)(s - 0.0056)}, \tag{9.12}$$

from which, after discretization with a sampling time $T_s = 0.1$ s, the following state-space realization can be obtained:

$$A_p = \begin{bmatrix} 1.9173 & -0.9172 \\ 1.0000 & 0 \end{bmatrix}, \quad B_p = \begin{bmatrix} 0.125 \\ 0 \end{bmatrix}, \quad C_p = \begin{bmatrix} 0.0389 & 0.0378 \end{bmatrix}. \tag{9.13}$$

For stabilization and reference tracking, we designed a one-degree-of-freedom LQG servo-control law with integral action, leading to a controller with the following state-space matrices:

$$A_c = \begin{bmatrix} 1.4898 & -1.2937 & 0.001 \\ 0.6800 & -0.3109 & 0 \\ 0 & 0 & 1.000 \end{bmatrix}, \quad B_c = \begin{bmatrix} -10.4646 \\ -8.2313 \\ 0.1000 \end{bmatrix}, \quad C_c = \begin{bmatrix} -0.1657 & 0.1504 & 0.0078 \end{bmatrix}. \tag{9.14}$$
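For readers who wish to reproduce the discretization step, the following SciPy sketch (assuming a zero-order hold, which is not stated explicitly in the chapter) discretizes $G_p(s)$ with $T_s = 0.1$ s and recovers the companion-form coefficients of (9.13) together with the marginally unstable discrete pole $\lambda \approx 1.0006$ used later in the numerical study.

```python
import numpy as np
from scipy import signal

# Hypothetical check of the discretization step: G_p(s) = 1/((s + 0.8695)(s - 0.0056)),
# sampled with Ts = 0.1 s under a zero-order-hold assumption.
num = [1.0]
den = np.polymul([1.0, 0.8695], [1.0, -0.0056])
Ts = 0.1

numd, dend, _ = signal.cont2discrete((num, den), Ts, method='zoh')
print(np.round(dend, 4))     # ~ [1, -1.9173, 0.9172]: matches the companion-form entries of A_p in (9.13)
print(np.roots(dend))        # discrete poles ~ [1.0006, 0.9167]; 1.0006 is the unstable mode
```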

The controller is fed the input $e = \tilde{y}_p - r$, with $r$ being a square-wave reference signal switching between the values 0.5 and 1.5 with a period of 500 s and a duty cycle of 50%. Finally, the uncertainties $\eta_p$ and $\xi_p$ were set to zero-mean, truncated Gaussian random variables whose components' absolute values were capped at, respectively, $\bar{\eta}_p = 0.3$ and $\bar{\xi}_p = 0.15$.

First of all, we will show the behavior of the plant when no watermarking mechanism and no attacks are present. The reference signal with the corresponding plant output, the plant input, and the tracking error are presented, respectively, in Figs. 9.4, 9.5, and 9.6. From these figures, we observe that the controller is following the reference reasonably well, considering the non-negligible uncertainties in the model and measurements. The tracking error, as expected, shows positive and negative peaks corresponding to the reference rising and falling edges, and the control input has a similar behavior.

We will now show the effects of the presence of the watermark, still in the absence of an attack. To do this, first we will produce a sequence of $N = 7$ random watermark generators and removers, whose parameters will make up the sequences $\Theta_W$ and $\Theta_Q$. In particular, the watermark generators will be chosen to be Finite Impulse Response (FIR) filters of order 3, and each filter transfer function will be defined as

$$W(z) = w_{B,(1)} + w_{B,(2)} z^{-1} + w_{B,(3)} z^{-2} + w_{B,(4)} z^{-3}, \tag{9.15}$$

where $z^{-1}$ denotes the unit delay, $w_{B,(1)} = 1$, and $w_{B,(j)}$ with $j \in \{2, \ldots, 4\}$ are random numbers drawn from the interval $[-w_M, w_M]$, different for each filter in the sequence. The scalar $w_M \in \mathbb{R}$ will be termed the watermark magnitude, and each filter parameter $\theta_w$ in the sequence $\Theta_W$ can be interpreted as $\theta_w = w_B \in \mathbb{R}^4$.
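A possible way of generating such a family of FIR watermark generators and of their exact (all-pole) removers is sketched below; the random seed stands in for the shared private key, and the code is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy import signal

# Minimal sketch: build N = 7 random FIR watermark generators
# W(z) = 1 + w2 z^-1 + w3 z^-2 + w4 z^-3 and their exact inverses Q(z) = 1/W(z),
# as required by Assumption 9.2. The seed plays the role of the shared private key.
rng = np.random.default_rng(seed=0)
N, w_M = 7, 0.05                      # watermark magnitude w_M = 5%

theta_W = [np.r_[1.0, rng.uniform(-w_M, w_M, size=3)] for _ in range(N)]

def watermark_pair(w_B):
    """Return (W, Q) as scipy dlti objects: W is FIR, Q is its all-pole inverse."""
    W = signal.dlti(w_B, [1.0, 0.0, 0.0, 0.0], dt=0.1)   # y_w = W(z) y_p
    Q = signal.dlti([1.0, 0.0, 0.0, 0.0], w_B, dt=0.1)   # y_q = Q(z) y_w = y_p
    return W, Q

# Sanity check: the cascade Q(W(y_p)) reproduces y_p when the parameters match.
W0, Q0 = watermark_pair(theta_W[0])
y_p = rng.standard_normal(200)
_, y_w = signal.dlsim(W0, y_p)
_, y_q = signal.dlsim(Q0, y_w.ravel())
assert np.allclose(y_q.ravel(), y_p, atol=1e-8)
```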


Fig. 9.4 The reference signal and the plant output from the example considered throughout this chapter, when no watermark and no attack are present. The second plot shows a zoom in of the first one, during the last repetition of the periodic reference signal


Fig. 9.5 The plant input from the example considered throughout this chapter, when no watermark and no attack are present


Fig. 9.6 The tracking error from the example considered throughout this chapter, when no watermark and no attack are present


Fig. 9.7 The reference signal, the true plant output, and the watermarked one from the example considered throughout this chapter, when no attack is present and a watermark of 5% amplitude is present. The second plot shows a zoom in of the first one, during the last repetition of the periodic reference signal


Fig. 9.8 The true plant output and the reconstructed one from the example considered throughout this chapter, when no attack is present and a watermark of 5% amplitude is present. The second plot shows a zoom in of the first one, during the last repetition of the periodic reference signal


We are thus ready to present the results of applying such watermarks to the example, following the closed-loop scheme with watermark generation and removal presented in Fig. 9.1. In the present example, as well as in the following simulations throughout this chapter, the watermark parameters are changed every 10 s, unless specified otherwise. This means that the switching time instants will be k 1 = 100, k 2 = 200, and so on, as the sampling time is equal to 0.1 s. After every N = 7 switching instants, the parameter sequence will cycle back and use again the first watermark parameter, such that during the entire simulation the watermark parameters will keep being switched. In Figs. 9.7 and 9.8, we can see that the watermark’s presence is barely noticeable when setting its amplitude to w M = 5%. Most of all, the watermark remover does properly recover the true output such that the control performances will stay exactly the same as in the non-watermarked case. If we increase the watermark amplitude to w M = 20% its presence, as well as the switching times, becomes clearly apparent (see Fig. 9.9). Still, the reconstructed


Fig. 9.9 The reference signal, the true plant output, and the watermarked one from the example considered throughout this chapter, when no attack is present and a watermark of 20% amplitude is present. The second plot shows a zoom in of the first one, during the last repetition of the periodic reference signal


output y pq produced by the watermark remover continues to match exactly the true output y p , and again the control performances will be unaffected (see Fig. 9.10). In the next sections, we derive the conditions under which the attacks are detectable thanks to the multiplicative watermarking scheme. Then, we identify cases where fundamental limitations still exist, and propose an alternative approach to enforce detection, thus providing guidelines for our watermark scheme design.

9.4 Detection of Stealthy False Data Injection Attacks Having defined all the elements illustrated in Fig. 9.1, and characterized the essential design rules in normal conditions, the behavior of the watermarking scheme under attack is now examined. Proofs of results are omitted for the sake of brevity, but can be found in [8].

Fig. 9.10 The true plant output and the reconstructed one from the example considered throughout this chapter, when no attack is present and a watermark of 20% amplitude is present. The second plot shows a zoom in of the first one, during the last repetition of the periodic reference signal


As a first step, we describe the full dynamics of the closed-loop system with watermarking, by writing the following equations at the plant's side:

$$\mathcal{P}: \begin{cases} x_p[k+1] = A_p x_p[k] + B_p u[k] + \eta_p[k] \\ y_p[k] = C_p x_p[k] + \xi_p[k] \end{cases}$$
$$\mathcal{W}(\theta_w): \begin{cases} x_w[k+1] = A_w(\theta_w)\, x_w[k] + B_w(\theta_w)\, y_p[k] \\ y_w[k] = C_w(\theta_w)\, x_w[k] + D_w(\theta_w)\, y_p[k]. \end{cases} \tag{9.16}$$

The sensors transmit over a network the watermarked data $y_w[k]$, which may be corrupted en route by an adversary and replaced by $\tilde{y}_w[k]$. At the controller side of the network, the residual and control inputs are computed from the received data $\tilde{y}_w[k]$ as

$$\mathcal{Q}(\theta_q): \begin{cases} x_q[k+1] = A_q(\theta_q)\, x_q[k] + B_q(\theta_q)\, \tilde{y}_w[k] \\ y_q[k] = C_q(\theta_q)\, x_q[k] + D_q(\theta_q)\, \tilde{y}_w[k] \end{cases}$$
$$\mathcal{F}_{cr}: \begin{cases} x_{cr}[k+1] = A_{cr} x_{cr}[k] + B_{cr} y_q[k] \\ y_r[k] = C_{cr} x_{cr}[k] + D_{cr} y_q[k] \\ u[k] = C_u x_{cr}[k] + D_u y_q[k], \end{cases} \tag{9.17}$$

where $x_{cr}[k] = [x_c[k]^\top\ x_r[k]^\top]^\top$, and the matrices $A_{cr}, B_{cr}, C_{cr}, D_{cr}, C_u$, and $D_u$ are derived from (9.1). As explained earlier, a key assumption in the present work is that the watermark parameters $\theta_w = \theta_q$ are unknown to the attacker. Thus, we investigate the detectability of the false data injection attack $a[k]$ computed according to (9.7), based on the attacker knowing only the plant dynamics.

The core of the analysis can be explained as follows. We start by recalling that the adversary knows the plant model, and stages an undetectable attack by mimicking a possible behavior of the plant. However, under the multiplicative watermarking scheme, the plant is augmented with the watermark generator, as described in (9.16). Similarly, as detailed in (9.17), the anomaly detector and the controller are augmented with the watermark remover at their input. Consequently, we can conclude that while the man-in-the-middle attack mimics the plant behavior without watermarking, the anomaly detector instead expects a behavior that is affected by the watermark generator. This mismatch is what allows for the detection of (previously) undetectable attacks. We will formalize this intuitive explanation in the remaining part of this chapter.

The main result of this section is the following, where we use the notion of the support set of a vector $x \in \mathbb{R}^n$, defined as $\mathrm{supp}(x) \triangleq \{i : x_{(i)} \neq 0\}$.

Theorem 9.2 ([8]) Consider the plant with sensor watermarking described in (9.16), with initial condition $x_{pwq}[0] = [\bar{x}_p^\top\ \bar{x}_w^\top\ \bar{x}_q^\top]^\top$. Suppose the system is under a false data injection attack on the watermarked measurements, $\tilde{y}_w[k] = y_w[k] + a[k]$, where $a[k]$ is characterized by (9.7) with $\bar{x}_a$ being an eigenvector of $A_p$ associated with the eigenvalue $\lambda \in \mathbb{C}$. Define the channel transfer functions $\mathcal{Q}^i(z) \triangleq C_q^i (z I - A_q^i)^{-1} B_q^i + D_q^i$ for all $i = 1, \ldots, n_y$. There exist $\bar{x}_p$ and $\bar{x}_{wq} = \bar{x}_w - \bar{x}_q$ such that the false data injection attack is 0-stealthy with respect to $y_q[k]$ if, and only if,

$$\mathcal{Q}^i(\lambda) = \mathcal{Q}^j(\lambda), \quad \forall\, i, j \in \mathrm{supp}(C_p \bar{x}_a). \tag{9.18}$$

The latter result characterizes under what conditions data injection attacks, computed based on $(A_p, C_p)$, are 0-stealthy despite the presence of the watermarking filters. This result thus points to design guidelines that enable detection, namely ensuring $\mathcal{Q}^i(\lambda) \neq \mathcal{Q}^j(\lambda)$ for all $i, j \in \mathrm{supp}(C_p \bar{x}_a)$ and for all $\lambda \in \mathbb{C}$ in the spectrum of $A_p$, where $\bar{x}_a$ is the eigenvector of $A_p$ associated with $\lambda$. There are, however, fundamental limitations for single-output systems, as well as for the case of multiple outputs with homogeneous watermarks for all sensors, as formalized next.


Corollary 9.2 ([8]) For single-output systems and for multiple-output systems with homogeneous watermark filters, i.e., $A_w^i = A_w^j$, $B_w^i = B_w^j$, $C_w^i = C_w^j$, and $D_w^i = D_w^j$ for all $i \neq j$, there exist $\bar{x}_p$ and $\bar{x}_{wq} = \bar{x}_w - \bar{x}_q$ such that the false data injection attack is 0-stealthy with respect to $y_q[k]$.

Despite such limitations, there is another degree of freedom that may be leveraged to make the attack $\varepsilon$-stealthy, and therefore detectable, even when (9.18) is satisfied, such as in the cases of Corollary 9.2. In fact, note that 0-stealthy attacks also require specific initial conditions of the plant and the watermarking filters, $\bar{x}_p$ and $\bar{x}_{wq}$, respectively. Although $\bar{x}_p$ cannot be directly controlled, $\bar{x}_w$ and $\bar{x}_q$, and thus $\bar{x}_{wq}$, can, as the filters are implemented in digital computers. In particular, as follows from Theorem 2 in [5], resetting $\bar{x}_w$ and $\bar{x}_q$ to the same value, such that $\bar{x}_{wq} = 0$, would have no adverse impact on the closed-loop performance.

Theorem 9.3 ([8]) Consider the plant with sensor watermarking described in (9.16), with initial condition $x_{pwq}[0] = [\bar{x}_p^\top\ \bar{x}_w^\top\ \bar{x}_q^\top]^\top$. Suppose the system is under a sensor false data injection attack on the watermarked measurements, $\tilde{y}_w[k] = y_w[k] + a[k]$, where $a[k]$ is characterized by (9.7) with $\bar{x}_a$ being an eigenvector of $A_p$ associated with the eigenvalue $\lambda \in \mathbb{C}$. Furthermore, suppose that $\bar{x}_p = \alpha \bar{x}_a$ and $\mathcal{Q}^i(\lambda) = \alpha$ for all $i \in \mathrm{supp}(C_p \bar{x}_a)$, for some $\alpha \neq 0$, and define $\bar{x}^a_{wq}$ such that $(\alpha \bar{x}_a, \bar{x}^a_{wq}, \bar{x}_a)$ satisfies the PBH unobservability test from [26]. The output $y_q[k]$ under the measurement false data injection attack is described by the autonomous system

$$\begin{cases} \Delta x_{wq}[k+1] = A_q\, \Delta x_{wq}[k] \\ y_q[k] = D_q C_w\, \Delta x_{wq}[k] \end{cases} \tag{9.19}$$

with $\Delta x_{wq}[0] = \bar{x}_w - \bar{x}_q - \bar{x}^a_{wq}$. Furthermore, for $\bar{x}_w - \bar{x}_q \neq \bar{x}^a_{wq}$, the false data injection attack is $\varepsilon$-stealthy with respect to the output $y_q[k]$, for a finite $\varepsilon > 0$.
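The design guideline that follows from Theorem 9.2 and Corollary 9.2 can be checked numerically: for every eigenvalue of $A_p$, the per-channel remover gains $\mathcal{Q}^i(\lambda)$ should differ on the support of $C_p \bar{x}_a$. The sketch below is an illustrative implementation for per-channel FIR watermarks; the second sensor and the filter parameters are hypothetical.

```python
import numpy as np

# Sketch of the design check suggested by Theorem 9.2: for every eigenvalue lambda of A_p,
# the per-channel remover transfer functions Q^i(lambda) should differ across the channels
# in supp(C_p x_a); otherwise a 0-stealthy attack along that eigenvector remains possible.
def Q_channel(w_B, z):
    """Q^i(z) = 1 / W^i(z) for an FIR generator W^i(z) = w1 + w2 z^-1 + w3 z^-2 + w4 z^-3."""
    W = sum(w * z ** (-k) for k, w in enumerate(w_B))
    return 1.0 / W

def undetectable_modes(A_p, C_p, filters, tol=1e-6):
    modes = []
    eigvals, eigvecs = np.linalg.eig(A_p)
    for lam, x_a in zip(eigvals, eigvecs.T):
        support = np.flatnonzero(np.abs(C_p @ x_a) > tol)
        q_vals = [Q_channel(filters[i], lam) for i in support]
        if len(support) <= 1 or all(abs(q - q_vals[0]) < tol for q in q_vals):
            modes.append(lam)            # condition (9.18) holds: a 0-stealthy attack on this mode exists
    return modes

# Example with two output channels and deliberately different FIR watermarks per channel
# (the second sensor row of C_p and the filter coefficients are hypothetical):
A_p = np.array([[1.9173, -0.9172], [1.0, 0.0]])
C_p = np.array([[0.0389, 0.0378], [0.0500, 0.0100]])
filters = [[1.0, 0.04, -0.02, 0.03], [1.0, -0.03, 0.05, -0.01]]
print(undetectable_modes(A_p, C_p, filters))     # empty list: no 0-stealthy mode left
```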

Once the sensor data attack is made detectable through multiplicative watermarking, the compromised sensors can be isolated through conventional FDI techniques [13] or approaches tailored to detect sparse sensor attacks [4]. For instance, in [8], the following estimator is introduced for attack detection:

$$\hat{\mathcal{P}}: \begin{cases} \hat{x}_p[k+1] = A_p \hat{x}_p[k] + B_p u[k] + K\left(y_q[k] - \hat{y}_p[k]\right) \\ \hat{y}_p[k] = C_p \hat{x}_p[k], \end{cases} \tag{9.20}$$

where $\hat{x}_p \in \mathbb{R}^{n_p}$ and $\hat{y}_p \in \mathbb{R}^{n_y}$ are the estimates of $x_p$ and $y_p$, and $K$ is chosen such that $A_r \triangleq A_p - K C_p$ is Schur. By defining $x_r = \hat{x}_p$ and the estimation error $\epsilon \triangleq x_p - \hat{x}_p$, in no-attack conditions the detection residual $y_r \triangleq y_q - \hat{y}_p$ can be written as the solution to the following dynamical system:

$$\begin{cases} \epsilon[k+1] = A_r \epsilon[k] - K \xi_p[k] + \eta_p[k] \\ y_r[k] = C_p \epsilon[k] + \xi_p[k]. \end{cases} \tag{9.21}$$
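A minimal simulation of the detection estimator (9.20) and of the residual it generates could look as follows; the observer gain $K$ and the constant threshold are hypothetical placeholders (the chapter derives the threshold from the uncertainty bounds in (9.21) rather than fixing a constant).

```python
import numpy as np

# Sketch of the detection estimator (9.20) and residual (9.21); K and the threshold
# logic are illustrative placeholders, not the values used by the authors.
A_p = np.array([[1.9173, -0.9172], [1.0, 0.0]])
B_p = np.array([[0.125], [0.0]])
C_p = np.array([[0.0389, 0.0378]])
K = np.array([[8.0], [6.0]])            # chosen (hypothetically) so that A_p - K C_p is Schur

def run_detector(u_seq, yq_seq, threshold):
    """Propagate (9.20), return the residual y_r[k] = y_q[k] - C_p x_hat[k] and alarm flags."""
    x_hat = np.zeros((2, 1))
    residuals, alarms = [], []
    for u, yq in zip(u_seq, yq_seq):
        y_hat = C_p @ x_hat
        r = yq - y_hat.item()
        residuals.append(r)
        alarms.append(abs(r) > threshold)          # compare with a bound derived from (9.21)
        x_hat = A_p @ x_hat + B_p * u + K * r      # x_hat[k+1] = A_p x_hat + B_p u + K (y_q - y_hat)
    return np.array(residuals), np.array(alarms)
```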


Equation (9.21), thanks to the assumed knowledge of the upper bounds of the uncertainties, can be used to compute the detection threshold $\bar{y}_r$. When, instead, an attack is present, the detector will receive and use the attacked signal $\tilde{y}_q$ for implementing the detection estimator in Eq. (9.20). This will lead to the residual being instead the solution of the system

$$\begin{cases} \tilde{\epsilon}[k+1] = A_r \tilde{\epsilon}[k] - K\left(\xi_p[k] + \delta_a[k]\right) + \eta_p[k] \\ y_r[k] = C_p \tilde{\epsilon}[k] + \xi_p[k] + \delta_a[k], \end{cases} \tag{9.22}$$

where the term $\delta_a$, called the attack mismatch, can be obtained from the following.

Lemma 9.1 ([8]) Define $k^* \triangleq \max_i \{k^i \mid k^i \leq k,\ i \in \mathbb{N}\}$ as the last watermark switching instant before the current time $k$, and suppose that $k^* \geq k_a$. The term $\delta_a[k]$ can be written as the output of the following autonomous system:

$$\begin{cases} \begin{bmatrix} x_q[k+1] \\ x_a[k+1] \end{bmatrix} = \begin{bmatrix} A_q & B_q C_p \\ 0 & A_p \end{bmatrix} \begin{bmatrix} x_q[k] \\ x_a[k] \end{bmatrix} \\[4pt] \delta_a[k] = \begin{bmatrix} C_q & (D_q - I) C_p \end{bmatrix} \begin{bmatrix} x_q[k] \\ x_a[k] \end{bmatrix} \end{cases} \tag{9.23}$$

for all $k \geq k^*$, with $x_q[k^*] = 0$ and $x_a[k^*] = \lambda^{k^* - k_a} \bar{x}_a$ being the values to which $x_q$ and $x_a$ have been reset at the last watermark switch.

The importance of the term $\delta_a$ is that it can drive the residual to larger values than those it would have because of the presence of the uncertainties $\xi_p$ and $\eta_p$ alone, thus possibly allowing for detection. It has been shown in [8], furthermore, that frequently switching the watermark parameters helps detection by continuously resetting the $\delta_a$ dynamics. In the following section, a numerical study is presented, which illustrates the problem of detecting an attack in a single-output system and shows how the use of a switched watermark can solve this challenge.

9.5 Numerical Study

The numerical study uses the same example introduced in Sect. 9.3.4 to show the effects of a stealthy false data injection attack, and how the combined use of switching watermarks and of the detection observer introduced in the previous section can lead to a successful detection.

The attack is defined as $a[k] = C_p A_p^{k-k_a} \bar{x}_a = \lambda^{k-k_a} C_p \bar{x}_a$, where $\lambda = 1.0006$ is the plant's unstable eigenvalue and $\bar{x}_a = -10^{-4} \times [0.7073\ \ 0.7069]^\top$ is an initial condition aligned with the corresponding eigenvector. The attack starts at time $T_a = k_a \cdot T_s = 30$ s and, as can be seen from Fig. 9.11, its magnitude becomes comparable to the reference signal at about 2000 s.


Fig. 9.11 The attack signal used in the numerical study


Fig. 9.12 The true plant output and the received one during an attack, when no watermark is in place


When no watermarking is used, the exponentially increasing attack signal causes the true plant output $y_p$ to diverge, while the received output $\tilde{y}_p$ appears to follow the square-wave reference faithfully (see Fig. 9.12). The input signal does not show any anomalous behavior (see Fig. 9.13), and the residual, too, does not reveal any sign of the attack, as it is well below the threshold (see Fig. 9.14). Indeed, when only the detection observer introduced in the previous section is used and no watermarking is present, this attack is 0-stealthy.

The addition of a watermark with amplitude $w_M = 5\%$, but with constant parameters that are not switched, does not lead to detection, as shown in Figs. 9.15 and 9.16. This corresponds to the case encompassed by Corollary 9.2, but it can fortunately be avoided by introducing switching parameters. By choosing the reset states of W and Q according to Theorem 9.3, the attack can be made only $\varepsilon$-stealthy and, as such, detectable.


Fig. 9.13 The plant input during an attack, when no watermark is in place. The input signal in this case is indistinguishable from the case when no attack is present (Fig. 9.5)


Fig. 9.14 The detection residual and threshold during an attack, when no watermark is in place. The residual shows no sign of the presence of the attack, and is always well below the threshold


Indeed, by looking at Figs. 9.17 and 9.18, we can see that the watermark switching introduces significant peaks in both the reconstructed output $y_q$ and the residual $y_r$, whose amplitudes increase with the attack magnitude, ultimately leading to detection. If the watermark amplitude is raised to $w_M = 20\%$, the peaks in both the reconstructed output $y_q$ and the residual $y_r$ are even larger than in the previous case, leading to detection at an earlier time instant (Figs. 9.19 and 9.20).

Finally, the detection capabilities of the proposed watermarking scheme for the different cases presented here will be quantitatively presented in Table 9.1. In particular, the detection time, the ratio between the residual and the threshold at detection, and the attack amplitude at detection will be used as indexes for defining the scheme


Fig. 9.15 The true plant output and the one reconstructed by Q during an attack, when a watermark with magnitude w M = 5% is in place, but the watermark parameters are kept constant. The reconstructed output y pq is indistinguishable from the attacked output y˜ p received in the case where no watermark is present (Fig. 9.12), and from the reconstructed output in case no attack is present (Fig. 9.8). As y pq is the signal used by the attack detector, it is by no surprise that no detection is possible in this case either Fig. 9.16 The detection residual and threshold during an attack, when a watermark with magnitude w M = 5% is in place, but the watermark parameters are kept constant. The residual behaves as in the case without watermark (Fig. 9.14) and is always well below the threshold


performance. From such results, it can be concluded that a switching watermark with large amplitude will lead to better detection performances, although the large amplitude will cause the watermark to be apparent to an adversary that is eavesdropping the signal yw .


Fig. 9.17 (a) The true plant output and the one reconstructed by Q during an attack, when a watermark with magnitude w M = 5% is in place and the watermark parameters are switched every 10 s. (b) At a closer look, after about 2000 s the reconstructed output y pq shows some noticeable differences from the non-attacked case. In particular, peaks in correspondence to the watermark switches, whose amplitude increases along the amplitude of the attack

9.6 Conclusions

Inspired by authentication techniques with weak cryptographic guarantees, we have proposed a multiplicative watermarking scheme for networked control systems. In this scheme, each sensor's output is individually fed to a switching SISO watermark generator, which produces the watermarked data that is transmitted through the pos-


Fig. 9.18 a The detection residual and threshold during an attack, when a watermark with magnitude w M = 5% is in place and the watermark parameters are switched every 10 s. b As we could have expected, after about 2000 s the residual is experiencing peaks of increasing magnitude synchronized with the watermark switches, which ultimately lead to detection

sibly unsecured communication network. At the controller’s side, the watermark remover reconstructs the original measurement data. This approach, combined with a model-based anomaly detector, is shown to lead to detection of otherwise stealthy false data injection attacks. In particular, the periodic switching of the watermark generator and remover parameters is key to a successful detection. An application example as well as theoretical results guaranteeing the absence of control performance losses and characterizing the scheme’s detectability condition are provided. Finally, simulation results illustrate the proposed approach and give insight into

Fig. 9.19 (a) The true plant output and the one reconstructed by Q during an attack, when a watermark with magnitude w M = 20% is in place and the watermark parameters are switched every 10 s. (b) With respect to the case with amplitude w M = 5%, the reconstructed output y pq now shows even larger differences from the non-attacked case


the correlation between the watermark magnitude and the attack detection performances. In the future, an extension to the case of nonlinear plant dynamics, as well as nonlinear watermarks, could significantly augment the scheme applicability as well as its resilience against advanced adversaries that may try to reverse-engineer the watermarking scheme.

Fig. 9.20 (a) The detection residual and threshold during an attack, when a watermark with magnitude w M = 20% is in place and the watermark parameters are switched every 10 s. (b) In this case, the residual shows even larger peaks with respect to the case with w M = 5%, which largely surpass the threshold already at 2000 s, leading to an earlier detection


Table 9.1 Performance of different watermarking strategies

Index                        None    Small, sw.   Small, no sw.   Large, sw.   Large, no sw.
k_d · T_s                    N/A     2010 s       N/A             1870 s       N/A
|y_r[k_d]| / ȳ_r[k_d]        N/A     1.01         N/A             1.31         N/A
|a[k_d]| / |y_p[k_d]|        N/A     0.43         N/A             0.30         N/A

200

R. M. G. Ferrari and A. M. H. Teixeira

Acknowledgements This work is financed by the Swedish Foundation for Strategic Research, the Swedish Research Council under the grant 2018-04396, the European Union Seventh Framework Programme (FP7/2007–2013) under grant no. 608224, and by EU H2020 Programme under grant no. 707546 (SURE).

References 1. Gazdoš, F., et al.: Database of unstable systems (2012) [Online], Available: http://www. unstable-systems.cz, 19 Mar 2020 2. Cárdenas, A.A., Amin, S., Sastry, S.S.: Secure control: towards survivable cyber-physical systems. In: First International Workshop on Cyber-Physical System (2008) 3. Cárdenas, A.A., Amin, S., Sinopoli, B., Giani, A., Perrig, A., Sastry, S.S.: Challenges for securing cyber physical systems. In: Workshop on Future Directions in Cyber-Physical Systems Security, U.S. DHS (2009) 4. Fawzi, H., Tabuada, P., Diggavi, S.: Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans. Autom. Control. 59(6), 1454–1467 (2014) 5. Ferrari, R.M.G., Teixeira, A.M.H.: Detection and isolation of replay attacks through sensor watermarking. In: Proceedings of 20th IFAC World Congress, Toulouse (France), 9–14 July 2017, IFAC (2017) 6. Ferrari, R.M.G., Teixeira, A.M.H.: Detection and isolation of routing attacks through sensor watermarking. In: 2017 American Control Conference (ACC), pp. 5436–5442 (2017) 7. Ferrari, R.M.G., Teixeira, A.M.H.: Detection and isolation of routing attacks through sensor watermarking. In: Proceedings of American Control Conference, pp 5436–5442. IEEE (2017). 8. Ferrari, R.M.G., Teixeira, A.M.H.: Detection of sensor data injection attacks with multiplicative watermarking. In: Proceedings of European Control Conference (ECC 2018), Limassol (Cyprus), 12–15 June 2018 9. Ferrari, R.M.G., Teixeira, A.M.H.: A switching multiplicative watermarking scheme for detection of stealthy cyber-attacks. IEEE Trans. Autom. Control (2020) 10. Gallo, A.J., Turan, M.S., Boem, F., Ferrari-Trecate, G., Parisini, T.: Distributed watermarking for secure control of microgrids under replay attacks. IFAC-PapersOnLine 51(23), 182–187 (2018); In: 7th IFAC Workshop on Distributed Estimation and Control in Networked Systems NECSYS 2018 11. Goebel, R., Sanfelice, R.G., Teel, A.R.: Hybrid dynamical systems. IEEE Control Sys. 29(2), 28–93 (2009) 12. Gorenc, B., Sands, F.: The state of SCADA HMI vulnerabilities (2018). https://documents. trendmicro.com/assets/wp/wp-hacker-machine-interface.pdf 13. Hwang, I., Kim, S., Kim, Y., Eng, C.: A survey of fault detection, isolation, and reconfiguration methods. IEEE Trans. Control Syst. Technol. 18(3), 18 (2010). 14. Kendi, T.A., Doyle, F.J.: Nonlinear control of a fluidized bed reactor using approximate feedback linearization. Ind. Eng. Chem. Res. 35(3), 746–757 (1996) 15. Miao, F., Zhu, Q., Pajic, M., Pappas, G.J.: Coding schemes for securing cyber-physical systems against stealthy data injection attacks. IEEE Trans. Control Netw. Syst.4(1), (2017) 16. Mo, Y., Weerakkody, S., Sinopoli, B.: Physical authentication of control systems: designing watermarked control inputs to detect counterfeit sensor outputs. IEEE Control Syst. Mag. 35(1), 93–109 (2015) 17. NCCIC, ICS-CERT: ICS-CERT year in review (2016). https://ics-cert.us-cert.gov/sites/ default/files/Annual_Reports/Year_in_Review_FY2016_Final_S508C.pdf 18. Pasqualetti, F., Dorfler, F., Bullo, F.: Attack detection and identification in cyber-physical systems. IEEE Trans. Autom. Control 58(11), 2715–2729 (2013) 19. Pérez-Freire, L., Comesaña, P., Troncoso-Pastoriza, J.R., Pérez-González, F.: Watermarking security: a survey. Transactions on Data Hiding and Multimedia Security I. Springer, Berlin (2006)

9 Detection of Cyber-Attacks: A Multiplicative Watermarking Scheme

201

20. Sandberg, H., Amin, S., Johansson, K.H.: Cyberphysical security in networked control systems: an introduction to the issue. IEEE Control Syst. Mag. 35(1), 20–23 (2015) 21. Smith, R.S.: A decoupled feedback structure for covertly appropriating networked control systems. IFAC Proc. 18, 90–95 (2011) 22. Teel, A.R., Poveda, J.I.: A hybrid systems approach to global synchronization and coordination of multi-agent sampled-data systems. IFAC-PapersOnLine 48(27), 123–128 (2015) 23. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: Revealing stealthy attacks in control systems. In: 50th Annual Allerton Conference on Communication, Control, and Computing (2012). 24. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: A secure control framework for resource-limited adversaries. Automatica 51(1), 135–148 (2015) 25. Trend Micro: Unseen threats, imminent losses 2018 midyear security roundup (2018). https://documents.trendmicro.com/assets/rpt/rpt-2018-Midyear-Security-Roundupunseen-threats-imminent-losses.pdf 26. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice-Hall Inc, Upper Saddle River (1996)

Chapter 10

Differentially Private Anomaly Detection for Interconnected Systems Riccardo M. G. Ferrari, Kwassi H. Degue, and Jerome Le Ny

Abstract Detecting anomalies in large-scale distributed systems, such as cyberattacks launched against intelligent transportation systems and other critical infrastructures, or epidemics spreading in human populations, requires the collection and processing of privacy-sensitive data from individuals, such as location traces or medical records. Differential privacy is a powerful theoretical tool that by applying so-called randomized mechanisms to individual data, allows to do meaningful computations at the population level whose results are insensitive, in a probabilistic sense, to the data of any given individual. So far, differential privacy has been applied to several control problems, such as distributed optimization and estimation, filtering and anomaly detection. Still, several issues are open, regarding the balance between the accuracy of the computation results and the guaranteed privacy level for the individuals, as well as the dependence of this balance on the type of randomized mechanism used and on where, in the data acquisition and processing pipeline, the noise is applied. In this chapter, we explore the possibility of using differentially private mechanisms to develop fault-detection algorithms with privacy guarantees and discuss the resulting trade-offs between detection performance and privacy level.

10.1 Introduction The motivation for the present work is to allow distributed anomaly detection architectures, such as the ones developed in [2, 20], to be implemented in a way that protects the privacy of individual subsystems. For the sake of simplicity and withR. M. G. Ferrari (B) Delft University of Technology, Delft, The Netherlands e-mail: [email protected] K. H. Degue · J. Le Ny Polytechnique Montreal and GERAD, Montreal, QC H3T1J4, Canada e-mail: [email protected] J. Le Ny e-mail: [email protected] © Springer Nature Switzerland AG 2021 R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_10

203

204

R. M. G. Ferrari et al. alarm

alarm

Fig. 10.1 The minimal setup considered for the distributed anomaly detection problem. The two subsystems S and SN influence each other through the physical interconnection variables z and z N . For each system, a local agent L and LN collects local input and output measurement and possibly communicate with neighboring agents. Thick white lines represent physical interactions, while thin black lines represent measurements or communications

out loss of generality, we will consider a minimal topology consisting of only two interconnected subsystems. They will be called the ego system S and the neighbor system SN , and are depicted in Fig. 10.1. Both S and SN are driven by a local input u and u N and produce a physical output z and z N , respectively. Such outputs act as interconnection variables [20] through which both subsystems influence each other. For instance, if S and SN were parts of a water distribution network, z and z N could represent pressures at nodes which determine the flow of water through pipes linking the two systems. Similarly, in the case of a power grid example z and z N could determine electrical power flow, or in a multi-body system they could cause reaction forces between different objects. In order to implement a distributed anomaly detection architecture [20], the subsystems S and SN will be monitored by the local agents L and LN . Each agent will need to acquire local measurements of the input and output of its subsystem, as well as to communicate with each other in order to reach a diagnosis. We will assume that the inputs u and u N can be acquired exactly, while the outputs are only available as noisy measurements y = z + w and yN = z N + wN , with w and wN being such noises. The task demanded from L (mutatis mutandis, from LN ) is, thus, to use the signals u and y and a dynamical model of S to compute in real time a diagnosis for it, and raise an alarm in case a fault or another anomaly is detected. From the point of view of S it is legitimate to ask whether it should trust providing other parties information that could be deemed private, even if this allows to get as a benefit the diagnosis results. Moreover, apart from L, which directly accesses u and y, the other parties SN and LN are influenced by these signals as well. The first is physically influenced by z, while the second can communicate with L or access the measurement yN , which depends on u and z via the mentioned physical interconnection and the dynamics of SN . Although S has no reason to believe that L, LN or SN would behave in an adversarial way during the diagnosis process, still its safest assumption is to consider them

10 Differentially Private Anomaly Detection for Interconnected Systems

205

as semi-honest, or equivalently honest-but-curious [40] third parties. For instance, this means that L may retain internally past values of u and y and use them to try to infer private information belonging to S. In the present setting, this could amount to reverse-engineer the dynamical behavior of S and identify a model that is more accurate than what S provided for diagnostic purposes, or infer the control policy or external influence that produces the signal u. In particular, in the present work, we will consider the local input u to be the quantity that S wants to keep private. Similarly, SN will aim at keeping u N private from S, L, and LN . As a practical, motivating example we may consider the case where S is a local smart grid, the components of u represent the power consumed or generated by a large number of individual users, and z represents the total power delivered or absorbed by S at some interconnection point with the rest of the grid. The user energy patterns, and how S makes use of different available energy sources to keep its power balance, would be regarded as private data, as it can expose users habits from their consumption patterns, as well as pricing policies driving the choices of S when sourcing power [27, 50].

10.1.1 Contributions To allow S and SN to protect their privacy we will resort to the concept of Differential Privacy (DP), introduced by [15, 16]. DP “addresses the paradox of learning nothing about an individual while learning useful information about a population,” and was initially developed to protect the privacy of human individuals, for instance, when personal health data is collected and used in medical studies. DP allows to make the disclosure of information from a population indistinguishable, in a statistical sense, from the same disclosure occurring from an almost identical population. The latter differs from the former by only one individual and is thus termed adjacent. This would hide the presence, or absence, of the individual or of a specific trait of it to any third party that is able to observe only the disclosed data. To enforce DP, one party can avoid to directly disclose information to a third party by having such information pre-processed through a so-called privacy–preserving mechanism. Such mechanisms, which are randomized, make indeed difficult in a statistical sense for the adversary to detect whether the mechanism’s output was produced by a specific instance of the private content or an adjacent one. In particular, a privacy-preserving mechanism can be implemented by adding a random perturbation to such information, whose covariance depends on the sensitivity of such information on the quantity to keep private. A larger sensitivity will require a higher perturbation covariance to hide it from the adversary. In this paper, we apply the so-called Gaussian mechanism [14] to enforce DP for the subsystems S and SN , by perturbing the communication transmitted by each agent to third parties, as well as the physical variables through which they interact. After presenting our fault detection problem formulation in Sect. 10.2, we introduce in Sect. 10.3.3 two probabilistic definitions of fault detectability. In Sect. 10.4.1, we

206

R. M. G. Ferrari et al.

characterize the level of privacy-preserving noise needed to provide formal DP guarantees. We then study in Sects. 10.4.2 and 10.4.3 the “cost of privacy” for our architecture, i.e., the degradation in detection performance as the privacy level increases.

10.1.2 Related Work DP was introduced in a seminal paper by Cynthia Dwork [6, 15]. Much work in computer science has been concerned with the privacy-preserving analysis of databases in the past decade and is surveyed in [16]. In systems and control, a number of classical problems have been revisited in recent years with the goal of enforcing DP constraints, see [7, 34] for a survey. Examples include filtering problems [9, 33, 35, 36], consensus and distributed optimization [25, 28, 30, 46], distributed control [57], LQG control [11, 26], and system identification [3, 37, 44]. Differentially private hypothesis testing, fault diagnosis and anomaly detection problems are discussed, for example, in [5, 8, 10, 17, 21, 48, 49], among other papers. Privacy-preserving data analysis largely predates the introduction of differential privacy, see [13] for example. Various approaches are applicable depending on the specific problem formulation, including k-anonymity and its variations [29, 55], lower bounding achievable estimation performance [43] or entropy [51], and cryptography-based methods [1, 4, 19, 22, 32, 41, 56]. Finally, the core topic of this paper is motivated by distributed fault diagnosis problems, see [20, 23, 45, 47, 58, 59] for a sample of recent work in this area.

10.2 Problem Formulation 10.2.1 Differential Privacy In this paper, we rely on a formally defined notion of privacy called Differential Privacy [15, 16, 34], which can be used in general to release, in a privacy-preserving manner, certain statistics of interest computed from sensitive data. Differentially private mechanisms operate by releasing only noisy versions of these statistics, with a level of noise appropriately chosen depending on the query, in such a way that the probability distribution over released outputs does not depend too much on certain variations in the data. The definition of these variations is application dependent [38] and, intuitively, the goal is to make such variations hard to detect by an adversary having only access to the sensitive dataset through the differentially private outputs. More formally, consider a space D of possible datasets (e.g., a certain collection of signals), and define a symmetric binary relation on D, called adjacency and denoted with Adj, in order to define the variations among datasets that we want to protect. A mechanism is a randomized map M from the space D of datasets to some output

10 Differentially Private Anomaly Detection for Interconnected Systems

207

space R of published results. That is, once we fix some probability space (Ω, F , P), a mechanism M is a measurable map from D × Ω to R. In particular, for d ∈ D, M(d, ·) is a measurable map from the sample space Ω to R, i.e., a random variable, denoted simply M(d). Differentially private mechanisms are those that satisfy the following property. Definition 10.1 Let D be a space equipped with a symmetric binary relation denoted Adj, and let (R, R) be a measurable space. Let ε, δ ≥ 0. A mechanism M : D × Ω → R is (ε, δ)-differentially private for Adj (and the σ -algebra R) if for all d, d  ∈ D such that Adj(d, d  ), we have, for all sets S in R, P(M(d) ∈ S) ≤ eε P(M(d  ) ∈ S) + δ.

(10.1)

Definition 10.1 controls the allowed variations in distribution of differentially private outputs, for pairs of adjacent datasets. The parameters ε, δ set the level of privacy, with smaller values corresponding to a higher level of privacy. Here we always assume R to be a metric space, for which we can then take R to be the σ -algebra of Borel sets. Definition 10.1 characterizes the differentially private property of a mechanism but does not offer means to implement it. One standard technique is to perturb answers using Gaussian noise, leading to the Gaussian mechanism introduced first by [14]. First, define the notation Sd = (Rd )N , i.e., the set of sequences {xk }k≥0 , with xk ∈ Rd . For such sequences, we define the  p norm x p :=

∞ 

1/ p |xk | pp

,

k=0

provided the sum converges, with | · | p the p-norm on Rd , i.e., |v| p =

 d 

1/ p |vi | p

, for v ∈ Rd .

i=1

The following notion of sensitivity of a query, i.e., a function defined on a space of datasets, is useful to set the level of privacy-preserving noise. Definition 10.2 Let D be a space equipped with a binary symmetric adjacency relation Adj. Let V be a vector space equipped with a norm  · . The sensitivity of a query q : U → V is defined as Δq = supAdj(u,u  ) q(u) − q(u  ). In particular, if V is Rd , for some positive integer d, equipped with the p-norm, or if V = S d equipped with the  p -norm, we call Δq the p- or  p -sensitivity, respectively, and we use the notation Δ p q. In the following, we write X ∼ N (0, Σ) to denote that the random vector X follows a d-dimensional normal distribution with zero mean and covariance matrix ∞ 2 Σ. Define F (x) := √12π x e−u /2 du, and we let

208

R. M. G. Ferrari et al.

κδ,ε =

 1 (F −1 (δ) + (F −1 (δ))2 + 2ε), 2ε

(10.2)

for ε > 0, 1 > δ > 0. We then have the following theorem [14, 38]. Theorem 10.1 (Gaussian mechanisms) Let ε > 0, 1 > δ > 0, and fix an adjacency relation Adj on some set D. Let q : D → Rd be a vector valued query   with 2-sensitivity Δ2 q. The mechanism M(u) = q(u) + w, with w ∼ N 0, σ 2 Id , where σ ≥ κδ,ε Δ2 q and κδ,ε is defined in (10.2), is (ε, δ)-differentially private for Adj. Next, let q : D → Sd be a signal valued query, with 2 -sensitivity Δ2 q. The mechanism M(u) = q(u) + w, with w a white Gaussian noise signal (sequence  of iidzeromean Gaussian random vectors) such that, for each sample, wk ∼ N 0, σ 2 Id with σ ≥ κδ,ε Δ2 q, is (ε, δ)-differentially private for Adj. Finally, an important property of differential privacy is that it is resilient to postprocessing, i.e., one cannot invalidate this property simply by further processing a differentially private output, without re-accessing the sensitive input data. A proof of this result can be found, for example, in [34]. This is obviously an important property that a reasonable notion of privacy should possess. But it is also often directly useful to design differentially private mechanisms with better performance than the basic Gaussian mechanism of Theorem 10.1, by smoothing out the privacy-preserving noise before releasing the result [34].

10.2.2 The Case of an Isolated System

In order to show how DP can be used to implement a privacy-preserving fault or anomaly detection scheme for interconnected systems, we will start by presenting a simplified situation, which is depicted in Fig. 10.2. Here we assume that S is an isolated system and the only third party it is in contact with is its local diagnosis agent L. With the notation of the figure, we assume that

• S wants to keep the signal u differentially private;
• the adjacency relation we are concerned with is

Adj(u, u′) ⇔ ‖u − u′‖_{ℓ_2} ≤ ρ,    (10.3)

where u, u′ are signals and ρ is some given positive number;
• we do not want to add physical noise to the system input u, in order to preserve the operating performance of S.

There are two paths from S to agent L that may leak private information on u, the variable that S wants to keep private. The first one, which is manifest, is through the communication of u itself. The other, less prominent, is through the communication of y: indeed, as the value of y depends on u through the dynamics

Fig. 10.2 A depiction of the further simplified setting used in this section. This corresponds to only one of the subsystems of the complete setting of Fig. 10.1, thus without physical interactions with other subsystems

Fig. 10.3 Differentially private version of the system of Fig. 10.2


of S, it holds that a difference in u may lead to a noticeable difference in y. Thus, the system S should implement two mechanisms M_u and M_y, producing, respectively, the privatized signals u′ = u + υ and y′ = y + ζ, which are then communicated to the agent L, see Fig. 10.3. Both mechanisms are randomized and add the numerical perturbations υ and ζ to, respectively, u and y. One may ask how L could use the signals u′ and y′, which are "corrupted" by the privacy-preserving noise introduced by M_u and M_y, to successfully perform a diagnosis. The key point, in this respect, is that the privacy noise is no different from the normally occurring measurement noise that already affects the output of any plant. The effect of measurement noise on the detectability properties of model-based fault detectors is well studied in the literature, for instance, in [60] and in related work. The noise on u can be taken into account similarly. The remaining open point is that, to guarantee the privacy of u, a possibly large noise may have to be added by M_u; how large depends on the specific system being monitored.

10.2.3 The Case of Interconnected Systems We can now move on to the interconnected scenario we introduced in Sect. 10.1 and which motivates the current chapter. Looking back at Fig. 10.1, we recall that, as anticipated, the private information of system S can now be leaked via the physical interconnection z as well, and this cannot be prevented by just applying a numerical


perturbation to y as the mechanism M_y did in the previous, simplified case. Now, the system S must implement a physical privacy mechanism, where the value of z is physically altered. In this way, the dependence of y_N on z, and thus on u, is perturbed, thus preventing, in a DP sense, an indirect leakage of private information on u. To distinguish the notation from the previous case, we will denote such a mechanism as z′ = M̃_z = z + ζ, where z′ is the physical variable obtained by altering z (see Fig. 10.4). In practice, such a mechanism could be implemented by letting S have an extra actuator that directly influences z without being correlated with u. Such an actuator can thus generate a physical privacy noise. For example, in the water distribution network mentioned earlier, this would entail having an additional reservoir from which an extra water flow can be pumped into or out of the pipe connecting S to S_N. Similarly, for smart grids, previous works, for example by [39] and by [18], have suggested using batteries to hide private physical power consumption patterns. Note that since the privatized physical value z′ already meets the privacy requirements of S, its measurements can then be directly communicated to L, thus removing the need for the previous privacy mechanism M_y. As explained before, the same reasoning holds for S_N, which would thus need to implement the equivalent mechanisms M_{u_N} and M̃_{z_N} (Fig. 10.4). Let us now introduce the equations used to model the dynamics of the two subsystems and the relevant assumptions. In healthy conditions, that is, in the absence of a fault, the ego system dynamics will be modeled as the Linear Time Invariant (LTI) discrete-time system

S:
    x_{k+1} = A x_k + B u_k + G z′_{N,k} + v_k
    z′_k = C x_k + ζ_k
    y_k = z′_k + w_k
    u′_k = u_k + υ_k     (10.4)

Fig. 10.4 Depiction of a complete, privacy-preserving distributed anomaly detection architecture

where x and v ∈ R^n are the state and the process noise vectors, respectively. Furthermore, z′, ζ, w, and y ∈ R^m are, respectively, the physical value of the output, the output physical privacy perturbation, the output measurement noise, and the output measured value. The variables u and υ ∈ R^p denote the input and its numerical privacy perturbation. The way ζ and υ are determined will be discussed in Sect. 10.4.1. Finally, z′_N is the neighbor's physically privatized output, and A, B, C, and G are constant real matrices of suitable size describing the dynamics of S. The neighbor system is similarly modeled by

S_N:
    x_{N,k+1} = A_N x_{N,k} + B_N u_{N,k} + G_N z′_k + v_{N,k}
    z′_{N,k} = C_N x_{N,k} + ζ_{N,k}
    y_{N,k} = z′_{N,k} + w_{N,k}
    u′_{N,k} = u_{N,k} + υ_{N,k}     (10.5)

In order to make the current problem tractable, we require the following assumptions.

Assumption 10.1 The matrices A, B, and C, as well as A_N, B_N, C_N, and G_N, are known.

Assumption 10.2 The process uncertainties v and v_N, the measurement uncertainties w and w_N, as well as the privacy perturbations ζ and υ, are independent white Gaussian noise processes with zero mean and known covariance matrices Σ_v, Σ_{v_N}, Σ_w, Σ_{w_N}, Σ_ζ, Σ_{ζ_N}, Σ_υ, and Σ_{υ_N}, respectively. The initial conditions x_0, x_{N,0} are Gaussian random vectors, independent of the noise processes, with known means x̄_0, x̄_{N,0} and covariances Σ̄_0, Σ̄_{N,0}.

Note that, as explained in Theorem 10.1 and in Sect. 10.4.1, it is indeed possible to enforce differential privacy using white Gaussian noise for ζ, ζ_N, υ, and υ_N, so that Assumption 10.2 can be satisfied.
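As an illustration of the model just introduced, the following Python sketch simulates the coupled dynamics (10.4)-(10.5) under the physical and numerical privacy perturbations; all matrices (here scalars A, B, C, G, ...), noise levels, and inputs are hypothetical values chosen only for the example, not parameters taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar parameters for S and S_N (illustration only).
A, B, C, G = 0.9, 1.0, 1.0, 0.2
AN, BN, CN, GN = 0.8, 1.0, 1.0, 0.2
sig_v, sig_w, sig_priv = 0.05, 0.05, 0.1   # process, measurement and privacy noise std

x, xN = 0.0, 0.0
log = []
for k in range(300):
    u, uN = 1.0, 0.5                                   # known inputs (arbitrary)
    # Outputs at time k, Eqs. (10.4)-(10.5): physically privatized outputs z', z'_N.
    zp = C * x + rng.normal(0.0, sig_priv)             # z'_k = C x_k + zeta_k
    zNp = CN * xN + rng.normal(0.0, sig_priv)          # z'_{N,k}
    y = zp + rng.normal(0.0, sig_w)                    # measured outputs
    yN = zNp + rng.normal(0.0, sig_w)
    up = u + rng.normal(0.0, sig_priv)                 # numerically privatized inputs
    uNp = uN + rng.normal(0.0, sig_priv)
    # State updates, coupled through the privatized physical outputs.
    x = A * x + B * u + G * zNp + rng.normal(0.0, sig_v)
    xN = AN * xN + BN * uN + GN * zp + rng.normal(0.0, sig_v)
    log.append((y, up, yN, uNp))
```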

10.3 Diagnosis in Absence of Privacy Constraint

We start by analyzing the case where the system S is not employing any privacy-preserving scheme, akin to the situations depicted in Figs. 10.1 and 10.2. We introduce a model-based residual generator and a probabilistic threshold, similar to what has been introduced in [10, 49], which shall be implemented by the agent L, and analyze its robustness and its detectability properties. In the next section, we will introduce a privacy-preserving mechanism and characterize the cost-of-privacy, or privacy-utility trade-off, in terms of the degradation in robustness and detectability.

10.3.1 Model-Based Residual Generation

In the absence of privacy, the agent L of the ego system will run the following anomaly detection observer


L:
    x̂_{k+1} = A x̂_k + B u_k + G y_{N,k} + L(ŷ_k − y_k)
    ŷ_k = C x̂_k     (10.6)

where x̂_0 = x̄_0 and L has been chosen such that A⁰ ≜ A + LC is stable, which is always possible if the pair (A, C) is observable. By taking the difference between (10.4) and (10.6), the state estimation error dynamics in healthy conditions can be written as

x̃⁰_{k+1} = A⁰ x̃⁰_k + ξ_k ,  with  ξ_k ≜ v_k + G w_{N,k} + L w_k ,    (10.7)

where we introduced x̃⁰ ≜ x − x̂ (under the healthy-condition hypothesis) and the total uncertainty term ξ_k. The output estimation error ỹ⁰_k = C x̃⁰_k can thus be computed from

ỹ⁰_k = C ( Σ_{h=0}^{k−1} (A⁰)^{k−1−h} ξ_h + (A⁰)^k x̃_0 ) + w_k .    (10.8)

Due to ξ_k being a linear combination of independent random variables, the estimation error at time instant k + 1 is a random variable as well. For this reason, for the purpose of anomaly detection we will define a probabilistic threshold, rather than a deterministic one. In order to do so, we first introduce a residual r equal to the squared Mahalanobis distance d_M of ỹ:

r_k ≜ (d_M(ỹ_k))² = (ỹ_k − μ_{ỹ⁰_k})ᵀ Σ_{ỹ⁰_k}^{−1} (ỹ_k − μ_{ỹ⁰_k}) ,    (10.9)

where μ_{ỹ⁰_k} is the mean of ỹ⁰_k and Σ_{ỹ⁰_k} is its covariance matrix. Due to the linearity of the ego system dynamics and observer and to Assumption 10.2, it follows that ỹ⁰ is a white Gaussian noise and μ_{ỹ⁰_k} = 0. For this reason, μ_{ỹ⁰} will be omitted in the subsequent equations. Furthermore, the following formulas can be used for propagating forward in time the covariance of ỹ⁰:

Σ_{x̃⁰_{k+1}} = A⁰ Σ_{x̃⁰_k} (A⁰)ᵀ + Σ_ξ ,
Σ_{ỹ⁰_k} = C Σ_{x̃⁰_k} Cᵀ + Σ_w ,    (10.10)

where Σ_ξ = Σ_v + G Σ_{w_N} Gᵀ + L Σ_w Lᵀ and Σ_{x̃⁰_k} = Σ̄_0 at time k = 0. As ỹ⁰ is a zero-mean Gaussian variable, it follows that r⁰ is the sum of the squares of m i.i.d. N(0, 1) random variables, and so it is a χ²_m random variable with m degrees of freedom, as shown in [31].
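A small Python sketch of the residual generation just described is given below: it propagates the covariances as in (10.10) and evaluates the residual (10.9) with zero mean. The function and variable names are illustrative, and the matrices must be supplied by the user.

```python
import numpy as np

def propagate_covariances(A0, C, Sigma_xi, Sigma_w, Sigma0, T):
    """Forward propagation of Eq. (10.10): returns Sigma_{y~0_k} for k = 0, ..., T-1."""
    Sx = Sigma0.copy()
    out = []
    for _ in range(T):
        out.append(C @ Sx @ C.T + Sigma_w)   # Sigma_{y~0_k}
        Sx = A0 @ Sx @ A0.T + Sigma_xi       # Sigma_{x~0_{k+1}}
    return out

def residual(y_tilde, Sigma_y0):
    """Squared Mahalanobis distance of Eq. (10.9), using the zero mean mu_{y~0_k} = 0."""
    return float(y_tilde @ np.linalg.solve(Sigma_y0, y_tilde))
```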


10.3.2 A Probabilistic Detection Threshold

We can thus define a threshold that in healthy conditions guarantees a given level of robustness against uncertainties. We can characterize such robustness via the probability of false alarms, also called Type I errors, which we will denote as P_I. In the Fault Diagnosis literature, P_I is also referred to as the expected False Alarm Rate (FAR), see for instance [12]. Let us introduce the Tail Distribution (TD) F_χ²(r; m) ≜ 1 − P_χ²(r; m), where P_χ²(r; m) is the Cumulative Distribution Function (CDF) of a χ²_m variable computed at r. Then, given a user-defined value 0 ≤ α ≤ 1 for P_I, we can introduce the following threshold

r̄ ≜ F_χ²^{−1}(α; m) ,    (10.11)

where F_χ²^{−1} is the inverse function of the TD and is defined on the interval (0, 1]. By construction, this threshold guarantees that under healthy conditions it holds that

P(r⁰_k > r̄) = α ,    (10.12)

where r⁰_k denotes the residual in healthy conditions at time k. In other words, if we choose α = 1%, then at each period there is a 1% probability that during healthy conditions the threshold will be crossed, leading to a false alarm.¹
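The threshold (10.11) and the false alarm property (10.12) can be reproduced with a few lines of Python, shown below as a sketch; the values of m and α are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import chi2

m, alpha = 2, 0.01
r_bar = chi2.isf(alpha, df=m)    # threshold of Eq. (10.11): inverse tail of chi^2_m
print(f"threshold r_bar = {r_bar:.3f}")

# Empirical check of Eq. (10.12): healthy residuals are chi^2_m distributed,
# so the threshold should be crossed with probability approximately alpha.
rng = np.random.default_rng(0)
r_healthy = rng.chisquare(df=m, size=100_000)
print(f"empirical false alarm rate = {np.mean(r_healthy > r_bar):.4f} (target {alpha})")
```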

10.3.3 Detectability Analysis

In order to later evaluate the cost-of-privacy of the proposed approach in terms of detectability loss, let us first assess detectability in the absence of privacy. During an anomaly, that is, for k ≥ k_f, where k_f is the fault start time, and when no privacy mechanism is present, the ego system dynamics (10.4) become

S:
    x_{k+1} = A x_k + B u_k + G z_{N,k} + φ_k + v_k
    y_k = C x_k + w_k     (10.13)

where φk is a signal representing the change in the dynamics due to a fault, or an anomaly. Let us denote with x˜ and y˜ the state and output estimation errors of L in faulty conditions. Then the solution to the output estimation error can be written as

ỹ_k = C ( Σ_{h=0}^{k−1} (A⁰)^{k−1−h} (ξ_h + φ_h) + (A⁰)^k x̃_0 ) + w_k .    (10.14)

¹ We stress that while α is the desired value of the probability of Type I errors, which is used in the design of the threshold, P_I denotes the actual value. The two coincide as long as the agent L's knowledge of the model of S and of the mean and covariance of the uncertainty terms appearing in it is correct, as stated in Assumptions 10.1 and 10.2.


Let us now assume, purely for analysis purposes, that the signal ỹ⁰_k introduced earlier for healthy conditions can be computed in faulty conditions as well. In this case, we can write

ỹ_k = ỹ⁰_k + C Σ_{h=0}^{k−1} (A⁰)^{k−1−h} φ_h ≜ ỹ⁰_k + Φ_k ,    (10.15)

where we introduced the term Φ_k to account for the response of the anomaly detection observer to the fault function φ. Asking under which conditions a fault will be detected is then equivalent to asking when the term Φ_k will cause r to cross r̄. To answer this, we will assume in the following, for analysis purposes only, that we know φ and can actually compute Φ_k and its effect on r. Of course, during operation the agent L is not assumed to know anything about φ. Since ỹ_k is the sum of a Gaussian variable, ỹ⁰_k, and a deterministic one, Φ_k, its covariance is the same as in healthy conditions, Σ_{ỹ_k} = Σ_{ỹ⁰_k}, and we know how to compute it from (10.10). The mean, instead, will be equal to

μ_{ỹ_k} = μ_{ỹ⁰_k} + Φ_k = Φ_k ,    (10.16)

and hence the distribution of the residual r_k under faulty conditions is altered. From (10.9) and (10.15), we have

r_k = ỹ_kᵀ Σ_{ỹ⁰_k}^{−1} ỹ_k = (ỹ⁰_k + μ_{ỹ_k})ᵀ Σ_{ỹ⁰_k}^{−1} (ỹ⁰_k + μ_{ỹ_k}) = ‖Σ_{ỹ⁰_k}^{−1/2} (ỹ⁰_k + μ_{ỹ_k})‖² ,    (10.17)

where it shall be noted that, by construction, L always uses the covariance matrix Σ_{ỹ⁰_k} and the mean μ_{ỹ⁰_k} = 0 to compute the Mahalanobis distance of ỹ_k. As the true mean μ_{ỹ_k} is not known to L, it cannot be subtracted from ỹ_k, and so it affects r in (10.17). From the fact that the components of Σ_{ỹ⁰_k}^{−1/2} ỹ⁰_k follow a N(0, I) distribution, we can conclude that r_k is distributed as a non-central χ² with m degrees of freedom and non-centrality parameter

λ_k ≜ ‖Σ_{ỹ⁰_k}^{−1/2} μ_{ỹ_k}‖² = ‖Σ_{ỹ⁰_k}^{−1/2} Φ_k‖² .    (10.18)

Since λ is an increasing function of the effect of the fault and a decreasing one of the effect of the uncertainties, we will refer to λ as the Signal-to-Noise Ratio (SNR). In the following, we propose two different definitions of detectability that we can use later to analyze the cost of privacy.
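As a sketch of how λ_k can be evaluated in practice, the following Python fragment computes the fault response Φ_k of (10.15) by simulating the forced response of the error dynamics and then forms the SNR of (10.18). The helper names, and the assumption that the fault sequence φ is known, are purely for illustration, mirroring the analysis-only assumption made above.

```python
import numpy as np
from scipy.linalg import sqrtm

def fault_response(A0, C, phi):
    """Phi_k of Eq. (10.15): forced response of the error dynamics to the fault sequence phi."""
    state = np.zeros(A0.shape[0])
    Phi = []
    for phi_k in phi:              # phi_0, phi_1, ... (known for analysis purposes only)
        Phi.append(C @ state)      # Phi_k depends on phi_0, ..., phi_{k-1}
        state = A0 @ state + phi_k
    return np.array(Phi)

def snr(Phi_k, Sigma_y0):
    """Non-centrality parameter lambda_k of Eq. (10.18)."""
    whitener = np.linalg.inv(sqrtm(Sigma_y0).real)     # Sigma^{-1/2}
    return float(np.linalg.norm(whitener @ Phi_k) ** 2)
```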

10.3.3.1 Detectability on Average

As r_k during a fault is distributed like a non-central χ²_{m,λ_k}, we can compute its expected value. From E[χ²_{m,λ_k}] = m + λ_k [31] it follows that

E(r_k) = m + ‖Σ_{ỹ⁰_k}^{−1/2} Φ_k‖² .    (10.19)

Definition 10.3 A fault φ is detectable on average if E(r_k) > r̄ holds for at least one time instant k > k_f.

It is important to notice that the larger (the norm of) φ, the more it contributes to satisfying the definition and being detectable. However, the larger the covariance of ỹ⁰, the farther we get from satisfying it, as the residual is weighted by the inverse of the covariance. This reflects the intuitive fact, well known in the Fault Diagnosis literature, that a fault, to be detectable, must be large with respect to the uncertainty (see, for instance, [60, Theorem 3.1]). From Definition 10.3 and (10.19) we immediately conclude the following.

Proposition 10.1 A fault φ is detectable on average if ‖Σ_{ỹ⁰_k}^{−1/2} Φ_k‖² > r̄ − m holds for at least one time instant k > k_f.

However, saying that the expected value of r is bigger than r̄ does not tell us how likely it is that a given realization of r will indeed cross r̄. This depends on the actual distribution of r. As we know this distribution to be a non-central χ²_{m,λ}, we will use this fact in the following section to compute the probability of detection.

10.3.3.2 Detectability in Probability

Let us thus introduce the notion of detectability in probability for faults.

Definition 10.4 A fault φ is detectable in probability with probability of detection β if β_k ≜ P(r_k > r̄) ≥ β holds for at least one time instant k > k_f, where β_k is the probability of detection at time k.

The probability of detection β for a given fault φ can be computed as a function of the desired probability of false alarm α, and this will allow us to characterize the Receiver Operating Characteristic (ROC) of the proposed detection scheme.² Let us define F(·; m, λ) as the tail distribution of the non-central chi-squared distribution with m degrees of freedom and non-centrality parameter λ [31, Chap. 2]. Let F^{−1}(·; m, λ) denote the inverse of F(·; m, λ), defined on (0, 1].

² In the literature, the quantity 1 − β is also termed the probability of Type II errors and denoted as P_II, or equivalently the expected Missed Detection Rate (MDR, see [12]).


Proposition 10.2 Given a desired probability of false alarm α, the corresponding probability of detection β_k at time k is

β_k = F( F_χ²^{−1}(α; m); m, λ_k ) ,    (10.20)

where λ_k is provided by (10.18).

Proof The proof follows immediately from the definition of the threshold r̄ in Eq. (10.11) and from Definition 10.4.

While the relationship (10.20) is exact, its consequences for the ROC of the proposed detector are not particularly intuitive. We would now like to develop an approximate but simpler bound on the detection probability β for a given expected false alarm rate α.

Corollary 10.1 Given the residual r defined in Eq. (10.9), an expected false alarm rate α, and the corresponding threshold r̄ defined in Eq. (10.11), the following bound holds:

1 − β ≥ e^{−λ/2} (1 − α) .    (10.21)

Proof From Eqs. (10.11) and (10.12) and from Definition 10.4 we can write

β = P(r > r̄) ⟹ β = 1 − P_χ²(r̄; m, λ) ⟹ 1 − β = P_χ²(r̄; m, λ) .    (10.22)

By expressing P_χ²(·; m, λ) in series form [31] we can write

1 − β = P_χ²(r̄; m, λ) = e^{−λ/2} Σ_{j=0}^∞ [ (λ/2)^j / j! ] P_χ²(r̄; m + 2j)    (10.23)
      = e^{−λ/2} [ P_χ²( P_χ²^{−1}(1 − α; m); m ) + H.O.T. ] = e^{−λ/2} [ 1 − α + H.O.T. ] ,    (10.24)

where the Higher Order Terms (H.O.T.) are

H.O.T. ≜ Σ_{j=1}^∞ [ (λ/2)^j / j! ] P_χ²(r̄; m + 2j) ≥ 0 .

From this and (10.22) the thesis follows.

The result in this corollary can be used to investigate, to a first approximation, the effects of the fault magnitude φ and of the robustness level α on the detectability β. In particular, we can make the following remarks:


• A larger magnitude of the fault φ, or a smaller magnitude of the uncertainty covariance, will cause λ to increase and the lower bound on 1 − β, that is, on the missed detection probability P_II, to decrease. This leads to better detectability, as we would have expected.
• A lower false alarm probability α will cause the robustness 1 − α to increase. From (10.21) we can see that this will cause the lower bound on the missed detection probability 1 − β to increase, which means worse detectability β.

Both remarks confirm what we would have expected from the FAR/MDR trade-off [12]. A depiction of the upper bound 1 − e^{−λ/2}(1 − α) ≥ β resulting from Corollary 10.1 and of the exact value of β from Proposition 10.2 is provided in Fig. 10.5, for m = 2. Furthermore, Fig. 10.6 illustrates the ROC that results from Proposition 10.2 in the same situation. Both the probability β_k and the ROC will be analyzed again in the remaining part of this chapter in order to quantify the privacy cost, that is, the drop in detectability that results from the introduction of the privacy mechanisms.
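The exact probability of detection (10.20) and the bound of Corollary 10.1 can be checked numerically; the short Python sketch below does so with scipy's central and non-central chi-squared distributions, for a few arbitrary values of λ and α.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def detection_probability(alpha, m, lam):
    """Exact beta_k of Proposition 10.2: non-central chi^2 tail evaluated at the threshold."""
    r_bar = chi2.isf(alpha, df=m)
    return ncx2.sf(r_bar, df=m, nc=lam)

def beta_upper_bound(alpha, lam):
    """Upper bound on beta implied by Corollary 10.1: beta <= 1 - exp(-lambda/2)(1 - alpha)."""
    return 1.0 - np.exp(-lam / 2.0) * (1.0 - alpha)

for lam in (1.0, 5.0, 20.0):
    beta = detection_probability(alpha=0.05, m=2, lam=lam)
    print(f"lambda = {lam:5.1f}: beta = {beta:.3f}, bound = {beta_upper_bound(0.05, lam):.3f}")
```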

10.4 Privacy and Its Cost

10.4.1 Privacy Mechanism

In this section, we return to the analysis of the anomaly detection mechanism, but now under the differential privacy constraint described at the beginning of Sect. 10.2.2, using in particular the adjacency relation (10.3). Recall that the privatized values transmitted to L are

y_k = C x_k + ζ_k + w_k ,
u′_k = u_k + υ_k ,    (10.25)

where ζ and υ are, respectively, physical and numerical privacy-preserving white Gaussian noise signals introduced by the mechanisms M̃_y and M_u. To determine variances for the privacy-preserving noise signals, note that the private signal u is directly transmitted to L but also influences the signal y, which is also provided to L, creating a second path for information about u to leak. A first possible approach to protect u, called input perturbation, would be to add physical noise directly to u, so that the perturbed signal entering S is already private. As a result of the resilience to post-processing property, the resulting output y would then already be differentially private and hence would not require additional noise before being sent to L. Physical input perturbation is ruled out, however, by our specification that u not be physically altered, so that the operation of S is not impacted by the privacy requirement. In that case, one must consider the system (Id, S), with input u and output the pair of signals (u, y), which must be perturbed to protect u. Note that Id denotes the identity system. To evaluate the level of noise to add, following Theorem



Fig. 10.5 The upper bound on β resulting from Corollary 10.1 is plotted along with the true value from Proposition 10.2, as a function of λ and for two different values of α. As expected, higher values of both will lead to a better probability of detection



Fig. 10.6 ROC curves computed following Proposition 10.2, for different values of λ

10.1, we compute the ℓ_2-sensitivity of the system (Id, S):

‖(Id, S)(u − u′)‖_{ℓ_2} ≤ ‖(Id, S)‖_∞ ‖u − u′‖_{ℓ_2} ≤ ρ ‖(Id, S)‖_∞ ,

where ‖·‖_∞ denotes the H_∞ norm of a system. As a result, we should add noise with standard deviation σ_priv := κ_{δ,ε} ρ ‖(Id, S)‖_∞ to both u and y. Note in particular the higher level of noise on u compared to input perturbation (for which the standard deviation would be just κ_{δ,ε} ρ), to account for the additional leakage of information through y. In conclusion, we take ζ_k ∼ N(0, σ²_priv I_m) and υ_k ∼ N(0, σ²_priv I_p).
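The noise calibration above requires the H_∞ norm of the stacked system (Id, S). The following Python sketch approximates it by a frequency sweep of the maximum singular value (a lower bound that tightens as the grid is refined) and then forms σ_priv; the state-space matrices, ρ, ε, and δ are hypothetical numbers, and the interconnection and noise channels of S are ignored for simplicity.

```python
import numpy as np
from scipy.stats import norm

def hinf_norm_stacked(A, B, C, n_grid=2000):
    """Frequency-sweep approximation of the H-infinity norm of the stacked map
    u -> (u, y) for the discrete-time system (A, B, C)."""
    n = A.shape[0]
    best = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        Gw = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B)   # S(e^{jw})
        M = np.vstack([np.eye(Gw.shape[1]), Gw])                      # stacked (Id, S)
        best = max(best, np.linalg.svd(M, compute_uv=False)[0])
    return best

# Hypothetical numbers: state-space data, adjacency radius rho, privacy parameters.
A = np.array([[0.9]]); B = np.array([[1.0]]); C = np.array([[1.0]])
rho, eps, delta = 1.0, 1.0, 0.05
k_delta = norm.isf(delta)
kappa = (k_delta + np.sqrt(k_delta ** 2 + 2.0 * eps)) / (2.0 * eps)   # Eq. (10.2)
sigma_priv = kappa * rho * hinf_norm_stacked(A, B, C)
print(f"sigma_priv = {sigma_priv:.3f}")
```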

10.4.2 Residual and Threshold Generation Under Privacy Similarly to the scheme described in the no-privacy case in Sect. 10.3, the agent L for the ego system will implement the following observer for fault detection

L:
    x̂_{k+1} = A x̂_k + B u′_k + G y_{N,k} + L(ŷ_k − y_k)
    ŷ_k = C x̂_k     (10.26)

It is important to notice that here the ego observer computes the input term as B u′, where u′ is the privatized version of u communicated by the system S when


the privacy mechanisms are in place. Similarly, the privatized output measurements y and y_N are used, the latter being communicated to L by L_N (see Fig. 10.4). The presence of the numerical privacy noise υ affecting u and of the physical privacy noises ζ and ζ_N affecting z and z_N will lead to the estimation error dynamics in healthy conditions becoming

x̃′⁰_{k+1} = A⁰ x̃′⁰_k + ξ′_k ,  with  ξ′_k ≜ v_k + L(w_k + ζ_k) − B υ_k − G(ζ_{N,k} + w_{N,k}) ,    (10.27)

while the solution for the output estimation error becomes

ỹ′⁰_k = C ( Σ_{h=0}^{k−1} (A⁰)^{k−1−h} ξ′_h + (A⁰)^k x̃_0 ) + w_k + ζ_k .    (10.28)

In the above equations we introduced the notation x̃′⁰, ξ′_k, and ỹ′⁰ for the estimation errors and the total uncertainty term under privacy conditions, to highlight the effects of privacy on them. Indeed, by comparing (10.27) to (10.7) and (10.28) to (10.8), we can notice that the effect on ξ′_k is as if the model uncertainty became v_k − B υ_k − G w_{N,k}, the local measurement uncertainty of S became w_k + ζ_k, and that of S_N became w_{N,k} + G ζ_{N,k}. This affects the covariance of ξ′, which is needed for computing the residual, according to the formula

Σ_{ξ′} = Σ_ξ + L Σ_ζ Lᵀ + B Σ_υ Bᵀ + G Σ_{ζ_N} Gᵀ = Σ_ξ + σ²_priv (L Lᵀ + B Bᵀ) + σ²_priv,N G Gᵀ ,    (10.29)

where σ_priv,N := κ_{δ,ε} ρ_N ‖(Id, S_N)‖_∞, and ρ_N is the parameter of the adjacency relation (10.3) assumed for system S_N. However, due to Assumption 10.2, the mean μ_{ỹ′⁰} will still be equal to zero. When L has to compute the residual r and the threshold r̄, we consider two possibilities. First, L might be informed by S of the presence of the privacy mechanisms and of the new value of the total uncertainty covariance. Alternatively, L might not be aware of the presence of the privacy-preserving mechanisms. Both options are considered in the following sections. For simplicity, and without loss of generality, we will assume that S and S_N have the same dynamics and thus σ_priv,N = σ_priv.

10.4.2.1 L Knows the Privacy Noise Covariances Σ_ζ, Σ_{ζ_N}, and Σ_υ

If L knows the privacy noise covariances Σζ , ΣζN , and Συ , then it can update the covariances of the estimation errors similarly to what was done in (10.10):


Σ_{x̃′⁰_{k+1}} = A⁰ Σ_{x̃′⁰_k} (A⁰)ᵀ + Σ_{ξ′} ,
Σ_{ỹ′⁰_k} = C Σ_{x̃′⁰_k} Cᵀ + Σ_w + Σ_ζ = C Σ_{x̃′⁰_k} Cᵀ + Σ_w + σ²_priv I_m .    (10.30)

This will allow L to compute the residual still as r = ỹᵀ Σ_{ỹ′⁰}^{−1} ỹ, which will lead to r continuing to be distributed as a χ²_m variable with m degrees of freedom. This permits keeping the same threshold r̄ = F_χ²^{−1}(α; m) and obtaining the same robustness against false alarms.

Proposition 10.3 If the agent L knows the values of Σ_ζ, Σ_{ζ_N}, and Σ_υ, then the actual probability of false alarms P_I is equal to the design parameter α, which represents the desired probability, as in the non-privacy case.

Regarding detectability, results will be worse when the privacy mechanisms are in place, as we could have expected from Proposition 10.2 and Corollary 10.1. Indeed, the privacy noise acts as an additional source of uncertainty from the detection point of view. By recalling those results and Eq. (10.18), we can now write

λ′_k = ‖Σ_{ỹ′⁰_k}^{−1/2} Φ_k‖² ,

where we introduced the notation λ′_k for the residual non-centrality parameter under faulty conditions when the privacy mechanisms are present. We see that the norm of the factor Σ_{ỹ′⁰_k}^{−1/2} will now be smaller, as the norm of the covariance of ỹ⁰_k in privacy conditions is larger than in non-privacy conditions, due to the contributions of ξ′ and ζ. Following this reasoning, λ′ will be smaller than λ, and this will affect both the detectability on average and the probability of detection, as detailed in the following propositions.

Proposition 10.4 Under privacy conditions, the detectability on average at each period k is reduced by at least a factor ς ≜ ‖Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1/2}‖₂².

Proof Under privacy conditions, the left-hand term of the inequality in Proposition 10.1 can be written as

‖Σ_{ỹ′⁰_k}^{−1/2} Φ_k‖² = ‖Σ_{ỹ′⁰_k}^{−1/2} Σ_{ỹ⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1/2} Φ_k‖² ≤ ‖Σ_{ỹ′⁰_k}^{−1/2} Σ_{ỹ⁰_k}^{1/2}‖₂² ‖Σ_{ỹ⁰_k}^{−1/2} Φ_k‖² ,

where we used the Cauchy–Schwarz inequality to derive the inequality. As the last factor in the expression above is equal to the left-hand term of the inequality in Proposition 10.1 when no privacy is present, the proposition is proved.
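The factor ς of Proposition 10.4 can be evaluated directly from the two output-error covariances, as in the following Python sketch; the covariance matrices used here are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import sqrtm

def detectability_loss_factor(Sigma_priv, Sigma_nopriv):
    """The factor varsigma of Proposition 10.4: the squared spectral norm of
    Sigma'^{1/2} Sigma^{-1/2}, built from the output-error covariances with and
    without the privacy noise."""
    M = sqrtm(Sigma_priv).real @ np.linalg.inv(sqrtm(Sigma_nopriv).real)
    return np.linalg.norm(M, 2) ** 2

# Toy example with assumed numbers: privacy inflates the covariance by an extra term.
Sigma = np.array([[1.0, 0.2], [0.2, 1.5]])
Sigma_priv = Sigma + 0.5 * np.eye(2)
print(f"varsigma = {detectability_loss_factor(Sigma_priv, Sigma):.3f}")
```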


Proposition 10.5 Under privacy conditions, the probability of detection β_k is reduced by a factor Q_{m/2}(√λ′, √r̄) / Q_{m/2}(√λ, √r̄), where Q is the Marcum Q-function.

Proof Under privacy conditions, the probability of detection β′_k can be written as

β′_k = F(r̄; m, λ′_k) = Q_{m/2}(√λ′_k, √r̄) ,    (10.31)

where Q is the Marcum Q-function, which is equal to the tail of the non-central χ² variable [53]. Since Q is a monotonically increasing function of λ [54], it follows that, as λ′_k ≤ λ_k with all other conditions equal, then β′_k ≤ β_k. The ratio between the two probabilities can be obtained in a straightforward way from Proposition 10.2 and the equation above.

Furthermore, the bound on the probability of detection is worse. From Corollary 10.1 we have, under privacy conditions,

1 − β′ ≥ e^{−λ′/2} (1 − α) ≥ 1 − β ,

which means that the probability of missed detection under privacy conditions is larger than under non-privacy ones.

10.4.2.2 L Does Not Know the Privacy Noise Covariances Σ_ζ, Σ_{ζ_N}, and Σ_υ

In this case, L will continue to implement the observer (10.26), whose estimation error dynamics are still given by Eq. (10.27); the solution for the output estimation error will thus still be as in Eq. (10.28) and have zero mean. What will be different is that the residual will still be computed using the quadratic form r_k = ỹ_kᵀ Σ_{ỹ⁰_k}^{−1} ỹ_k, but the covariance of ỹ_k under privacy is Σ_{ỹ′⁰_k}, not Σ_{ỹ⁰_k} as the agent L believes. This has important implications, as it makes r_k no longer a χ² variable, neither in healthy nor in faulty conditions. Indeed, the following proposition can be proved, which is relevant for the probability of false detection under healthy conditions and for the detectability under faulty ones.

Proposition 10.6 Define the symmetric matrix M_k := Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} and consider the eigenvalue decomposition M_k = P_k Ω_k P_kᵀ, where P_k ∈ R^{m×m} is an orthogonal matrix and Ω_k = diag(ω_{1,k}, …, ω_{m,k}) contains the eigenvalues of M_k. Under privacy conditions, when L does not know about the presence of the privacy mechanisms, the residual r_k is distributed as the following weighted sum of non-central χ² variables with one degree of freedom:

r_k ∼ Σ_{i=1}^m ω_{i,k} χ²_{1,μ_{k,i}} ,

where the non-centrality parameter μ_{k,i} is the i-th component of the vector

μ_k ≜ P_kᵀ Σ_{ỹ′⁰_k}^{−1/2} Φ_k .

Proof The proof is based on the principal axis theorem in [52] and on results from [24, 42]. We start by recalling that ỹ_k is a normal variable with mean equal to Φ_k and covariance equal to Σ_{ỹ′⁰_k}. We can thus rewrite the quadratic form defining the residual computed by L as

r_k = ỹ_kᵀ Σ_{ỹ⁰_k}^{−1} ỹ_k = ỹ_kᵀ Σ_{ỹ′⁰_k}^{−1/2} ( Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} ) Σ_{ỹ′⁰_k}^{−1/2} ỹ_k = γ_kᵀ Ω_k γ_k ,

where γ_k ≜ P_kᵀ Σ_{ỹ′⁰_k}^{−1/2} ỹ_k. The spectral decomposition Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} = P_k Ω_k P_kᵀ, with P_k orthogonal, is always possible, as the matrices on the left-hand side are covariance matrices and as such symmetric and positive semi-definite. From [52], we further know that γ_k ∼ N(μ_k, I_m). Then, from [24, 42], the result follows.

It is important to note that r_k, although it is a weighted sum of χ² variables, is not itself a χ² variable. Its pdf and tail distribution, unfortunately, cannot be expressed in closed form, although approximations and series expansions are available [42]. For the present goal, we will denote the tail distribution of r_k as F′(r; 0) in healthy conditions and F′(r; μ_k) in faulty ones. The probability of false alarms P′_I would thus be equal to α′ ≜ F′(r̄; 0) = F′(F_χ²^{−1}(α; m); 0), which in general would be different from the user-defined α and should be evaluated numerically. Similarly, the probability of detection would be β′_k = F′(r̄; μ_k). Regarding the detectability on average, instead, a closed-form result is available.

Proposition 10.7 When L does not know about the presence of the mechanisms, the residual mean value becomes

E(r_k) = tr( Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k} ) + Φ_kᵀ Σ_{ỹ⁰_k}^{−1} Φ_k .    (10.32)

Proof The proof follows immediately from known results on quadratic forms in [24, 42]. Indeed, we can write ỹ_k = Φ_k + Σ_{ỹ′⁰_k}^{1/2} q_k, with q_k ∼ N(0, I_m). Then

E(r_k) = E[ (Φ_k + Σ_{ỹ′⁰_k}^{1/2} q_k)ᵀ Σ_{ỹ⁰_k}^{−1} (Φ_k + Σ_{ỹ′⁰_k}^{1/2} q_k) ]
       = Φ_kᵀ Σ_{ỹ⁰_k}^{−1} Φ_k + E[ q_kᵀ Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} q_k ] ,  since E(q_k) = 0,
       = Φ_kᵀ Σ_{ỹ⁰_k}^{−1} Φ_k + E[ tr( Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} q_k q_kᵀ ) ]
       = Φ_kᵀ Σ_{ỹ⁰_k}^{−1} Φ_k + tr( Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1} Σ_{ỹ′⁰_k}^{1/2} ) ,  since E(q_k q_kᵀ) = I_m ,

which ends the proof.


10.4.3 Numerical Study

In order to illustrate the effects of the proposed distributed privacy mechanisms on the fault diagnosis performance of agent L, the results of a numerical study will now be presented. The dimension of the output of S was assumed to be m = 2 and, to normalize results and avoid making them dependent on a specific dynamical model or its trajectory, the estimation error covariance in the non-privacy case was set to the identity matrix, Σ_{ỹ⁰_k} = I_m. The effect of the privacy mechanisms on the estimation error and on the residual used for detection will be characterized via the term ς = ‖Σ_{ỹ′⁰_k}^{1/2} Σ_{ỹ⁰_k}^{−1/2}‖₂² (see Proposition 10.4), while the effect of the fault will be associated with the term λ as computed in the non-privacy situation. As a first result, we will analyze the actual probability of false alarms P′_I in the case where the agent L does not know about the existence of the privacy mechanisms. As we know from Proposition 10.6, the probability of the residual r crossing the threshold can no longer be computed in closed form, and because of this we relied on drawing 2^16 samples and used them to compute sample probabilities. The dependence of P′_I on the privacy effect magnitude ς, for different desired probabilities of false alarms α, is presented in Fig. 10.7. It can be seen that, as expected, a higher ς will cause a higher P′_I and thus worse performance, with P′_I being equal to the desired α only in the non-privacy case ς = 1. As a next step, we present in Fig. 10.8 the behavior of the actual probability of detection β′ in the case that L knows about the privacy mechanisms, as a function of increasing values of the SNR λ and for different magnitudes of the privacy effect ς.
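The sample-based evaluation of P′_I described above can be reproduced, in the simple isotropic case Σ_{ỹ′⁰_k} = ς I_m used in the study, with the Monte Carlo sketch below. The code draws 2^16 healthy residuals normalised with the identity covariance that L believes in; it is only a rough reconstruction of the experiment, not the authors' original script.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
m, n_samples = 2, 2 ** 16

for alpha in (0.01, 0.05):
    r_bar = chi2.isf(alpha, df=m)          # threshold designed for Sigma_{y~0_k} = I_m
    for varsigma in (1.0, 2.0, 5.0, 20.0):
        # Healthy residuals when the true covariance is varsigma * I_m but L still
        # normalises with the identity: r = ||y_tilde||^2, y_tilde ~ N(0, varsigma I_m).
        y_tilde = rng.normal(0.0, np.sqrt(varsigma), size=(n_samples, m))
        r = np.sum(y_tilde ** 2, axis=1)
        print(f"alpha = {alpha:.2f}, varsigma = {varsigma:4.1f}: "
              f"empirical P_I' = {np.mean(r > r_bar):.3f}")
```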


Fig. 10.7 The effect of privacy on the actual probability of false alarms PI , when the agent L does not know about the existence of the privacy mechanisms. The probability PI is plotted as a function of ς, for different values of the desired probability α



Fig. 10.8 The effect of privacy on the actual probability of detection β , when the agent L knows about the existence of the privacy mechanisms. The probability β is plotted as a function of λ, for different values of ς

The behavior for ς = 1 is the same as the non-privacy one already seen in Fig. 10.5, while for increasing ς we see that the curve for β′ keeps its qualitative shape and moves steadily to the right. This is compatible with Proposition 10.5, which states that detectability will monotonically deteriorate for increasing magnitudes of the privacy effect. The case where L does not know about the existence of the privacy mechanisms is qualitatively different, as can be seen from Fig. 10.9. Here we see that the curve for β′, instead of shifting, changes shape and flattens toward the value β′ = 1 for increasing values of ς. This could initially be mistaken for a good sign, as it would signify an increase in detection performance. Alas, from what was seen previously we know that an increase in ς is accompanied by an increase in the probability of false detection P′_I, so we may suspect that the net effect on detection performance is not necessarily positive. To investigate this, the ROC in the cases of known and, respectively, unknown privacy is presented in Figs. 10.10 and 10.11. From these plots we see, somewhat surprisingly, that the ROC curves remain unchanged in the two cases. However, a caveat must immediately be brought to the reader's attention. In both figures, the horizontal axis shows the actual probability of false alarms P′_I, and not the desired value α. To clarify the importance of this, in both plots we marked with the symbol "o" the points corresponding to a specific value of α, in particular α = 5%. In the case where the privacy is known, as expected, those points are vertically aligned and correspond to an actual P′_I of 5%. In the case of unknown privacy, P′_I is equal to 5% only for the non-privacy case ς = 1. In the other cases, we see that the 5% point moves toward the top right corner of the ROC, close to the point (1, 1), which corresponds



Fig. 10.9 The effect of privacy on the actual probability of detection β , when the agent L does not know about the existence of the privacy mechanisms. The probability β is plotted as a function of λ, for different values of ς


Fig. 10.10 The effect of privacy on the actual ROC, when the agent L knows about the existence of the privacy mechanisms. The plot shows the effect of different values of ς, for a fixed λ. The operating points corresponding to a desired probability of false alarms α = 5% are shown on each line with a circle marker



Fig. 10.11 The effect of privacy on the actual ROC, when the agent L does not know about the existence of the privacy mechanisms. The plot shows the effect of different values of ς, for a fixed λ. The operating points corresponding to a desired probability of false alarms α = 5% are shown on each line with a circle marker

to a critically bad detection performance, equivalent to that of a detector that always classifies a residual as anomalous, independently of its actual value. In the end, we can conclude that the presence of an unknown privacy mechanism will make the operating point of L slide over the ROC curve, thus leading to performance much different from that assumed during detection threshold design. In particular, in order to keep a reasonably good operating point, a much smaller value of α should be used at design time.

10.5 Conclusions

This chapter has discussed an anomaly detection scenario for interconnected systems. Here, a local fault monitor attempts to diagnose a system which is simultaneously trying to enforce a differential privacy guarantee protecting its input signal by perturbing the digital and physical signals sent to third parties. Measures of anomaly detectability were introduced, and we characterized, analytically and numerically, their degradation when the privacy requirement is added. In general, privacy-utility trade-offs should be established for a given application, in order to set the acceptable level of privacy through the parameters ε and δ. As future work, it would be interesting to explore alternative definitions of privacy, such as the one based on the Fisher information introduced by [18], and to extend the current results to nonlinear systems.


References 1. Alexandru, A.B., Darup, M.S., Pappas, G.J.: Encrypted cooperative control revisited. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 7196–7202. IEEE (2019) 2. Boem, F., Ferrari, R.M., Keliris, C., Parisini, T., Polycarpou, M.M.: A distributed networked approach for fault detection of large-scale systems. IEEE Trans. Autom. Control 62(1), 18–33 (2017) 3. Bottegal, G., Farokhi, F., Shames, I.: Preserving privacy of finite impulse response systems. IEEE Control Syst. Lett. 1(1), 128–133 (2017) 4. Brickell, J., Porter, D.E., Shmatikov, V., Witchel, E.: Privacy-preserving remote diagnostics. In: Proceedings of the 14th ACM conference on Computer and communications security, pp. 498–507 (2007) 5. Cárdenas, A.A., Amin, S., Schwartz, G., Dong, R., Sastry, S.: A game theory model for electricity theft detection and privacy-aware control in ami systems. In: 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1830–1837. IEEE (2012) 6. Chawla, S., Dwork, C., McSherry, F., Smith, A., Wee, H.: Toward privacy in public databases. In: Theory of Cryptography Conference, pp. 363–385. Springer (2005) 7. Cortés, J., Dullerud, G.E., Han, S., Le Ny, J., Mitra, S., Pappas, G.J.: Differential privacy in control and network systems. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 4252–4272. IEEE (2016) 8. Cummings, R., Krehbiel, S., Mei, Y., Tuo, R., Zhang, W.: Differentially private change-point detection. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (2018) 9. Degue, K.H., Le Ny, J.: On differentially private Kalman filtering. In: Proceedings of the 5th IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, Canada (2017) 10. Degue, K.H., Le Ny, J.: On differentially private gaussian hypothesis testing. In: 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 842– 847. IEEE (2018) 11. Degue, K.H., Le Ny, J.: (2020) A two-stage architecture for differentially private Kalman filtering and LQG control. https://arxiv.org/abs/1707.08919 12. Ding, S.X.: (2008) Model-based fault diagnosis techniques: design schemes, algorithms, and tools. Springer Science & Business Media 13. Duncan, G., Lambert, D.: Disclosure-limited data dissemination. J. Am. Stat. Assoc. 81(393), 10–28 (1986) 14. Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., Naor, M.: Our data, ourselves: Privacy via distributed noise generation. In: Proceedings of the 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), St. Petersburg, Russia, pp. 486–503 (2006) 15. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Theory of Cryptography Conference, pp. 265–284. Springer (2006) 16. Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends® Theor. Comput. Sci. 9(3–4), 211–407 (2014) 17. Fan, L., Xiong, L.: Differentially private anomaly detection with a case study on epidemic outbreak detection. In: 2013 IEEE 13th International Conference on Data Mining Workshops, pp. 833–840. IEEE (2013) 18. Farokhi, F., Sandberg, H.: Fisher information as a measure of privacy: preserving privacy of households with smart meters using batteries. IEEE Trans. Smart Grid 9(5), 4726–4734 (2017) 19. Farokhi, F., Shames, I., Batterham, N.: Secure and private control using semi-homomorphic encryption. Control Eng. Pract. 
67, 13–20 (2017) 20. Ferrari, R.M., Parisini, T., Polycarpou, M.M.: Distributed fault detection and isolation of large-scale discrete-time nonlinear systems: an adaptive approximation approach. IEEE Trans. Autom. Control 57(2), 275–290 (2012)


21. Gaboardi, M., Lim, H.W., Rogers, R., Vadhan, S.P.: Differentially private chi-squared hypothesis testing: goodness of fit and independence testing. In: Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, JMLR.org, ICML’16, pp. 2111–2120 (2016) 22. Garcia, F.D., Jacobs, B.: Privacy-friendly energy-metering via homomorphic encryption. In: International Workshop on Security and Trust Management, pp. 226–238. Springer (2010) 23. Ge, X., Han, Q.: Distributed fault detection over sensor networks with Markovian switching topologies. Int. J. Gen. Syst. 43(3–4), 305–318 (2014) 24. Graybill, F.A.: Theory and Application of the Linear Model, vol. 183. Duxbury Press, North Scituate (1976) 25. Hale, M., Egerstedt, M.: Differentially private cloud-based multi-agent optimization with constraints. In: 2015 American Control Conference (ACC), pp. 1235–1240. IEEE (2015) 26. Hale, M., Jones, A., Leahy, K.: Privacy in feedback: the differentially private LQG. In: 2018 Annual American Control Conference (ACC), pp. 3386–3391. IEEE (2018) 27. Han, S., Topcu, U., Pappas, G.J.: Differentially private distributed protocol for electric vehicle charging. In: Conference on Communication, Control, and Computing, pp. 242–249. IEEE (2014) 28. Han, S., Topcu, U., Pappas, G.J.: Differentially private distributed constrained optimization. IEEE Trans. Autom. Control 62(1), 50–64 (2017) 29. Hoh, B., Gruteser, M., Herring, R., Ban, J., Work, D., Herrera, J.C., Bayen, A.M., Annavaram, M., Jacobson, Q.: Virtual trip lines for distributed privacy-preserving traffic monitoring. In: Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pp. 15–28 (2008) 30. Huang, Z., Mitra, S., Dullerud, G.: Differentially private iterative synchronous consensus. In: Proceedings of the CCS Workshop on Privacy in the Electronic Society (WPES), Raleigh, North Carolina (2012) 31. Kay, S.M.: Fundamentals of Statistical Processing, Volume 2: Detection Theory. Prentice Hall Signal Processing Series. Prentice-Hall PTR, Upper Saddle River (2009) 32. Kogiso, K., Fujita, T.: Cyber-security enhancement of networked control systems using homomorphic encryption. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 6836–6843. IEEE (2015) 33. Le Ny, J.: Differentially private nonlinear observer design using contraction analysis. Int. J. Robust Nonlinear Control (2018) 34. Le Ny, J.: Differential Privacy for Dynamic Data. Springer Briefs in Electrical Engineering. Springer, Berlin (2020) 35. Le Ny, J., Mohammady, M.: Differentially private MIMO filtering for event streams. IEEE Trans. Autom. Control 63(1) (2018) 36. Le Ny, J., Pappas, G.J.: Differentially private Kalman filtering. In: Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing (2012) 37. Le Ny, J., Pappas, G.J.: Privacy-preserving release of aggregate dynamic models. In: Proceedings of the 2nd ACM International Conference on High Confidence Networked Systems (HiCoNS), Philadelphia, PA (2013) 38. Le Ny, J., Pappas, G.J.: Differentially private filtering. IEEE Trans. Autom. Control 59(2), 341–354 (2014) 39. Li, S., Khisti, A., Mahajan, A.: Information-theoretic privacy for smart metering systems with a rechargeable battery. IEEE Trans. Inf. Theory 64(5), 3679–3695 (2018) 40. Lindell, Y.: Secure multiparty computation for privacy preserving data mining. In: Encyclopedia of Data Warehousing and Mining, IGI Global, pp. 1005–1009 (2005) 41. 
Lu, Y., Zhu, M.: Privacy preserving distributed optimization using homomorphic encryption. Automatica 96, 314–325 (2018) 42. Mathai, A.M., Provost, S.B.: Quadratic Forms in Random Variables: Theory and Applications. Dekker (1992) 43. Mo, Y., Murray, R.M.: Privacy preserving average consensus. IEEE Trans. Autom. Control 62(2), 753–765 (2016)


44. Nandakumar, L., Ferrari, R., Keviczky, T.: Privacy-preserving of system model with perturbed state trajectories using differential privacy: with application to a supply chain network. IFACPapersOnLine 52(20), 309–314 (2019) 45. Noursadeghi, E., Raptis, I.: Reduced-order distributed fault diagnosis for large-scale nonlinear stochastic systems. J. Dyn. Syst., Meas., Control (2017) 46. Nozari, E., Tallapragada, P., Cortés, J.: Differentially private average consensus: obstructions, trade-offs, and optimal algorithm design. Automatica 81, 221–231 (2017) 47. Riverso, S., Boem, F., Ferrari-Trecate, G., Parisini, T.: Plug-and-play fault detection and controlreconfiguration for a class of nonlinear large-scale constrained systems. IEEE Trans. Autom. Control 61(12), 3963–3978 (2016) 48. Rogers, R., Kifer, D.: A new class of private chi-square hypothesis tests. In: Artificial Intelligence and Statistics, pp. 991–1000 (2017) 49. Rostampour, V., Ferrari, R.M.G., Teixeira, A.H., Keviczky, T.: Differentially-Private distributed fault diagnosis for large-scale nonlinear uncertain systems. In: Proceedings of 10th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS 2018), Warsaw (Poland) 29–31 Aug 2018, IFAC (2018) 50. Sankar, L., Kar, S., Tandon, R., Poor, H.V.: Competitive privacy in the smart grid: An information-theoretic approach. In: Conference on Smart Grid Communications (SmartGridComm), pp. 220–225. IEEE (2011) 51. Sankar, L., Rajagopalan, S.R., Poor, H.V.: Utility-privacy tradeoffs in databases: an information-theoretic approach. IEEE Trans Inf. Forensics Secur. 8(6), 838–852 (2013) 52. Scheffé, H.: The Analysis of Variance [1959]. Wiley, Hoboken (1999) 53. Simon, M.K.: Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers and Scientists. Springer Science & Business Media (2007) 54. Sun, Y., Baricz, Á., Zhou, S.: On the monotonicity, log-concavity, and tight bounds of the generalized marcum and nuttall q-functions. IEEE Trans. Inf. Theory 56(3), 1166–1186 (2010) 55. Sweeney, L.: k-anonymity: a model for protecting privacy. Int. J. Uncertain., Fuzziness Knowl.Based Syst. 10(05), 557–570 (2002) 56. Tjell, K., Wisniewski, R.: Privacy preservation in distributed optimization via dual decomposition and admm. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 7203–7208. IEEE (2019) 57. Wang, Y., Huang, Z., Mitra, S., Dullerud, G.E.: Differential privacy in linear distributed control systems: entropy minimizing mechanisms and performance tradeoffs. IEEE Trans. Control Netw. Syst. 4(1), 118–130 (2017) 58. Zhang, D., Wang, Q., Yu, L., Song, H.: Fuzzy-model-based fault detection for a class of nonlinear systems with networked measurements. IEEE Trans. Instrum. Meas. 62(12), 3148– 3159 (2013) 59. Zhang, Q., Zhang, X.: Distributed sensor fault diagnosis in a class of interconnected nonlinear uncertain systems. Annu. Rev. Control 37(1), 170–179 (2013) 60. Zhang, X., Polycarpou, M.M., Parisini, T.: A robust detection and isolation scheme for abrupt and incipient faults in nonlinear systems. IEEE Trans. Autom. Control 47(4), 576–593 (2002)

Chapter 11

Remote State Estimation in the Presence of an Eavesdropper Alex S. Leong, Daniel E. Quevedo, Daniel Dolz, and Subhrakanti Dey

Abstract In this chapter, we study a remote state estimation problem in the presence of an eavesdropper. A sensor transmits local state estimates over a packet dropping link to a remote estimator, while an eavesdropper can successfully overhear each sensor transmission with a certain probability. The objective is to determine when the sensor should transmit, in order to minimize the estimation error covariance at the remote estimator, while trying to keep the eavesdropper error covariance above a certain level. Structural results on the optimal transmission policy are derived, and shown to exhibit thresholding behavior in the estimation error covariances. In the infinite horizon situation, it is shown that with unstable systems one can keep the expected estimation error covariance bounded while the expected eavesdropper error covariance becomes unbounded, for all eavesdropping probabilities strictly less than one. An alternative measure of security, constraining the amount of information revealed to the eavesdropper, is also considered, and similar structural results on the optimal transmission policy are derived. In the infinite horizon situation with unstable systems, it is now shown that for any transmission policy (within the class of stationary deterministic policies where the sensor at each time step can either transmit or not) which keeps the expected estimation error covariance bounded, the expected amount of information revealed to the eavesdropper is always lower bounded away from zero.

©[2019] IEEE. Parts of Sections 1, 2, 3 reprinted, with permission, from [1]. Parts of Sections 1, 4, 5 reprinted, with permission, from [2].

A. S. Leong, Paderborn University, Paderborn, Germany, e-mail: [email protected]
D. E. Quevedo, Queensland University of Technology, Brisbane, Australia, e-mail: [email protected]
D. Dolz, Procter & Gamble, Euskirchen, Germany, e-mail: [email protected]
S. Dey (B), Maynooth University, Maynooth, Ireland, e-mail: [email protected]

© Springer Nature Switzerland AG 2021
R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_11

11.1 Introduction In information security, the two main types of attacks are generally regarded as: (1) passive attacks from eavesdroppers and (2) active attacks such as Byzantine attacks, denial-of-service attacks, and replay attacks. Estimation and control in the presence of different types of attacks are treated in the various chapters of this book. The current chapter is concerned with state estimation in the presence of passive attacks from eavesdroppers. Over the last decade, the amount of data transmitted wirelessly has increased dramatically, owing to the popularity of 3G/4G smartphones, and other devices communicating via Wi-Fi and Bluetooth such as laptops and tablets. Due to the broadcast nature of the wireless medium, other agents in the vicinity can often overhear what is being transmitted [3]. It is therefore important to protect transmissions from eavesdroppers, which has traditionally been achieved using cryptography. However, due to (1) the often limited computational power available at the transmitters to implement strong encryption, (2) the increased computational power available to malicious agents, and (3) poorly implemented security in some Internet of Things devices, achieving security using solely cryptographic means may not necessarily be guaranteed. Alternative and complementary ways to implement security using physical layer and information-theoretic techniques have thus received significant recent interest [4]. In communications theory, the basic ideas of information-theoretic security date back to the work of Claude Shannon in the 1940s [5]. Roughly speaking, a communication system is regarded as secure in the information-theoretic sense if the mutual information between the original message and what is received at the eavesdropper is either zero or becomes vanishingly small as the block length of the codewords increases [6]. The term “physical layer security” refers to approaches to implement information-theoretic security using physical layer characteristics of the wireless channel such as fading, interference, and noise, see, e.g., [3, 7]. Motivated in part by these ideas, the consideration of security issues in signal processing systems has been recently studied. For a survey on works in detection and estimation in the presence of eavesdroppers, focusing particularly on detection, see [8]. In estimation problems with eavesdroppers, studies include [9–13]. The objective is to minimize the average mean squared error at the legitimate receiver, while trying to keep the mean squared error at the eavesdropper above a certain level, by using techniques such as stochastic bit flipping [9], transmit filter design [10], and power control and addition of artificial noise [11, 12]. The works above deal with estimation of either constants or i.i.d. sources. In this chapter, we wish to consider the more general problem of state estimation of dynami-


Fig. 11.1 Remote state estimation with an eavesdropper. ©[2019] IEEE. Reprinted, with permission, from [2]

cal systems when there is an eavesdropper, where we try to achieve security by adaptively scheduling the transmissions. For unstable systems, it has recently been shown that when using uncertain wiretap channels, one can keep the estimation error of the legitimate receiver bounded while the estimation error of the eavesdropper becomes unbounded for sufficiently large coding block length [14]. In this chapter, we do not assume coding, which may introduce large delays. In a similar setup to the current work, but transmitting measurements and without using feedback acknowledgements, mechanisms were proposed in [15] for keeping the expected error covariance bounded while driving the expected eavesdropper covariance unbounded, provided the reception probability is greater than the eavesdropping probability. By allowing for feedback and clever scheduling of the transmissions, here we show that the same behavior can be achieved for all eavesdropping probabilities strictly less than one. In this chapter, a sensor makes noisy measurements of a linear dynamical process. The sensor transmits local state estimates to the remote estimator over a packet dropping link. At the same time, an eavesdropper can successfully eavesdrop on the sensor transmission with a certain probability, see Fig. 11.1. Within this setup, we consider the problem of dynamic transmission scheduling, by designing stationary deterministic policies which decide at each instant whether the sensor should transmit or not. A detailed system model is described in Sect. 11.2. We first consider a covariance-based measure of security (Sect. 11.3), where one wants to keep the estimation error covariance at the eavesdropper large. We seek to minimize a linear combination of the expected error covariance at the remote estimator and the negative of the expected error covariance at the eavesdropper. Structural results on the optimal transmission policy will be derived. In the case where knowledge of the eavesdropper’s error covariances are available at the remote estimator, our results show that (1) for a fixed value of the eavesdropper’s error covariance, the optimal policy has a threshold structure: the sensor should transmit if and only if the remote estimator’s error covariance exceeds a certain threshold and (2) for a fixed value of the remote estimator’s error covariance, the sensor should transmit if and only if the eavesdropper’s error covariance is below a certain threshold. A similar result can be derived in the case where knowledge of the eavesdropper’s


Furthermore, for unstable systems, it is shown that in the infinite horizon situation there exist transmission policies which can keep the expected estimation error covariance bounded while the expected eavesdropper error covariance is unbounded. This behavior can be achieved for all eavesdropping probabilities strictly less than one.

In the second part of this chapter (Sect. 11.4), an alternative measure of security is considered, from the viewpoint of restricting the amount of information (in the information-theoretic sense) revealed to the eavesdropper. Mutual information is a measure of the additional information gained by receiving a measurement. In particular, we will consider the sum of conditional mutual informations (also known as the directed information [16]) revealed to the eavesdropper. In addition to the wide use of mutual information in information-theoretic security measures, directed information has also been justified as a measure of privacy/secrecy for control applications in [17]. Other works in control system design using the directed information measure include [18, 19]. We seek to minimize a linear combination of the expected error covariance at the remote estimator and the expected information revealed to the eavesdropper, and similar structural results on the optimal transmission policy will be derived. In the infinite horizon situation, our results show that the expected information revealed to the eavesdropper is both upper and lower bounded. In particular, the lower bound says that information leakage is unavoidable for any transmission policy which keeps the expected error covariance at the legitimate receiver bounded.

Notation: Given a square matrix $X$, we use $|\lambda_{\max}(X)|$ to denote the spectral radius of $X$. For a symmetric matrix $X$, we say that $X > 0$ if it is positive definite, and $X \ge 0$ if it is positive semi-definite. Given two symmetric matrices $X$ and $Y$, we say that $X \le Y$ if $Y - X$ is positive semi-definite, and $X < Y$ if $Y - X$ is positive definite.

11.2 System Model

A diagram of the system model is shown in Fig. 11.1. Consider a discrete-time process
$$x_{k+1} = A x_k + w_k, \qquad (11.1)$$
where $x_k \in \mathbb{R}^{n_x}$ and $w_k$ is i.i.d. Gaussian with zero mean and covariance $Q > 0$. The sensor has measurements
$$y_k = C x_k + v_k, \qquad (11.2)$$
where $y_k \in \mathbb{R}^{n_y}$ and $v_k$ is i.i.d. Gaussian with zero mean and covariance $R > 0$. The noise processes $\{w_k\}$ and $\{v_k\}$ are assumed to be mutually independent, and independent of the initial state $x_0$.

The sensor forms and transmits local state estimates $\hat{x}^s_{k|k}$ [20] to the remote estimator. The local state estimates and estimation error covariances


$$\hat{x}^s_{k|k-1} \triangleq E[x_k \mid y_0,\dots,y_{k-1}], \qquad \hat{x}^s_{k|k} \triangleq E[x_k \mid y_0,\dots,y_k],$$
$$P^s_{k|k-1} \triangleq E[(x_k - \hat{x}^s_{k|k-1})(x_k - \hat{x}^s_{k|k-1})^T \mid y_0,\dots,y_{k-1}], \qquad P^s_{k|k} \triangleq E[(x_k - \hat{x}^s_{k|k})(x_k - \hat{x}^s_{k|k})^T \mid y_0,\dots,y_k]$$

can be computed at the sensor using the Kalman filtering equations:
$$\hat{x}^s_{k|k-1} = A \hat{x}^s_{k-1|k-1}, \qquad \hat{x}^s_{k|k} = \hat{x}^s_{k|k-1} + K_k (y_k - C \hat{x}^s_{k|k-1}),$$
$$P^s_{k|k-1} = A P^s_{k-1|k-1} A^T + Q, \qquad P^s_{k|k} = P^s_{k|k-1} - P^s_{k|k-1} C^T (C P^s_{k|k-1} C^T + R)^{-1} C P^s_{k|k-1},$$
$$K_k = P^s_{k|k-1} C^T (C P^s_{k|k-1} C^T + R)^{-1}.$$
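For readers who wish to reproduce the local filter numerically, the following Python sketch (our own illustrative code, not part of the original chapter; it assumes NumPy and 2-D arrays for $A$, $C$, $Q$, $R$) implements one step of the above recursion and iterates the covariance update to obtain the steady-state value $\bar{P}$ used throughout the chapter.

```python
import numpy as np

def kalman_step(x_prev, P_prev, y, A, C, Q, R):
    """One iteration of the sensor's local Kalman filter (prediction + update)."""
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = P_pred - K @ C @ P_pred
    return x_new, P_new

def steady_state_covariance(A, C, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate the Riccati recursion for P^s_{k|k} until it converges to P_bar."""
    P = Q.copy()
    for _ in range(max_iter):
        P_pred = A @ P @ A.T + Q
        S = C @ P_pred @ C.T + R
        P_next = P_pred - P_pred @ C.T @ np.linalg.inv(S) @ C @ P_pred
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    return P
```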

We will assume that the pair $(A, C)$ is detectable and the pair $(A, Q^{1/2})$ is stabilizable. Let $\bar{P}$ be the steady-state value of $P^s_{k|k}$ as $k \to \infty$, which exists due to the detectability assumption. To simplify the presentation, we will assume that this local Kalman filter is operating in the steady-state regime, so that $P^s_{k|k} = \bar{P}, \forall k$. We note that, in general, convergence to steady state occurs at an exponential rate [21].

Let $\nu_k \in \{0,1\}$ be decision variables such that $\nu_k = 1$ if and only if $\hat{x}^s_{k|k}$ is to be transmitted at time $k$. The decision variables $\nu_k$ are determined at the remote estimator, which is assumed to have more computational capabilities than the sensor, using information available at time $k-1$, and then fed back without error to the sensor before transmission at time $k$. (The case of imperfect feedback links can also be handled, see Sect. II-C of [22] for details.)

At time instances when $\nu_k = 1$, the sensor transmits its local state estimate $\hat{x}^s_{k|k}$ over a packet dropping channel to the remote estimator. Let $\gamma_k$ be random variables such that $\gamma_k = 1$ if the sensor transmission at time $k$ is successfully received by the remote estimator, and $\gamma_k = 0$ otherwise. We will assume that $\{\gamma_k\}$ is i.i.d. Bernoulli [23] with $P(\gamma_k = 1) = p \in (0,1)$.

The sensor transmissions can be overheard by an eavesdropper over another packet dropping channel. Let $\gamma_{e,k}$ be random variables such that $\gamma_{e,k} = 1$ if the sensor transmission at time $k$ is overheard by the eavesdropper, and $\gamma_{e,k} = 0$ otherwise. We will assume that $\{\gamma_{e,k}\}$ is i.i.d. Bernoulli with $P(\gamma_{e,k} = 1) = p_e \in (0,1)$. The processes $\{\gamma_k\}$ and $\{\gamma_{e,k}\}$ are assumed to be mutually independent. (This assumption can be justified from wireless communication experiments, where it has been shown that channel fading becomes approximately independent for receivers separated by distances greater than half a wavelength of the transmitted signal [24, p. 71]. For the transmission frequencies currently in use in 3G/4G mobiles and Wi-Fi, such wavelengths are on the order of centimeters.)


At instances where $\nu_k = 1$, it is assumed that the remote estimator knows whether the transmission was successful or not, i.e., the remote estimator knows the value $\gamma_k$, with dropped packets discarded. Define the information set available to the remote estimator at time $k$ as
$$\mathcal{I}_k \triangleq \{\nu_0,\dots,\nu_k,\ \nu_0\gamma_0,\dots,\nu_k\gamma_k,\ \nu_0\gamma_0\hat{x}^s_{0|0},\dots,\nu_k\gamma_k\hat{x}^s_{k|k}\}.$$
Denote the state estimates and error covariances at the remote estimator by
$$\hat{x}_{k|k-1} \triangleq E[x_k \mid \mathcal{I}_{k-1}], \qquad \hat{x}_{k|k} \triangleq E[x_k \mid \mathcal{I}_k],$$
$$P_{k|k-1} \triangleq E[(x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^T \mid \mathcal{I}_{k-1}], \qquad P_{k|k} \triangleq E[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T \mid \mathcal{I}_k]. \qquad (11.3)$$

Similarly, the eavesdropper knows if it has eavesdropped successfully. Define the information set available to the eavesdropper at time $k$ as
$$\mathcal{I}_{e,k} \triangleq \{\nu_0,\dots,\nu_k,\ \nu_0\gamma_{e,0},\dots,\nu_k\gamma_{e,k},\ \nu_0\gamma_{e,0}\hat{x}^s_{0|0},\dots,\nu_k\gamma_{e,k}\hat{x}^s_{k|k}\},$$
and the state estimates and error covariances at the eavesdropper by
$$\hat{x}_{e,k|k-1} \triangleq E[x_k \mid \mathcal{I}_{e,k-1}], \qquad \hat{x}_{e,k|k} \triangleq E[x_k \mid \mathcal{I}_{e,k}],$$
$$P_{e,k|k-1} \triangleq E[(x_k - \hat{x}_{e,k|k-1})(x_k - \hat{x}_{e,k|k-1})^T \mid \mathcal{I}_{e,k-1}], \qquad P_{e,k|k} \triangleq E[(x_k - \hat{x}_{e,k|k})(x_k - \hat{x}_{e,k|k})^T \mid \mathcal{I}_{e,k}]. \qquad (11.4)$$

(We make the assumption that the eavesdropper knows the system parameters $A, C, Q, R$, which gives a bound on the best performance that the eavesdropper can achieve. Such an assumption is similar in spirit to Kerckhoff's principle in cryptography [25], where a cryptosystem should be secure even if the enemy knows everything about the system except the secret key, and also Shannon's maxim that "the enemy knows the system being used" [5]. The same principle has been applied toward the assumption that the eavesdropper knows the scheduling decisions $\{\nu_k\}$, although, strictly speaking, the eavesdropper does not make use of this information directly in our subsequent treatment.)

We will assume for simplicity of presentation that the initial covariances $P_{0|0} = \bar{P}$ and $P_{e,0|0} = \bar{P}$.

The decision variables $\nu_k$ are determined at the remote estimator and fed back to the sensor. Denote by $\mathcal{V}$ the class of stationary deterministic policies where the sensor at each step can either transmit its local estimate or not. We will consider both the case where $\nu_k$ depends on both $P_{k-1|k-1}$ and $P_{e,k-1|k-1}$, and the case where $\nu_k$ depends only on $P_{k-1|k-1}$ and the remote estimator's belief of $P_{e,k-1|k-1}$ constructed


from knowledge of previous $\nu_k$'s. In either case, the decisions do not depend on the state $x_k$ (or the noisy measurement $y_k$). Thus, the optimal remote estimator can be shown to have the form
$$\hat{x}_{k|k} = \begin{cases} A\hat{x}_{k-1|k-1}, & \nu_k\gamma_k = 0 \\ \hat{x}^s_{k|k}, & \nu_k\gamma_k = 1 \end{cases}, \qquad P_{k|k} = \begin{cases} f(P_{k-1|k-1}), & \nu_k\gamma_k = 0 \\ \bar{P}, & \nu_k\gamma_k = 1, \end{cases} \qquad (11.5)$$
where
$$f(X) \triangleq AXA^T + Q. \qquad (11.6)$$

Similarly, at the eavesdropper the optimal estimator has the form
$$\hat{x}_{e,k|k} = \begin{cases} A\hat{x}_{e,k-1|k-1}, & \nu_k\gamma_{e,k} = 0 \\ \hat{x}^s_{k|k}, & \nu_k\gamma_{e,k} = 1 \end{cases}, \qquad P_{e,k|k} = \begin{cases} f(P_{e,k-1|k-1}), & \nu_k\gamma_{e,k} = 0 \\ \bar{P}, & \nu_k\gamma_{e,k} = 1. \end{cases}$$

Define the countable set of matrices
$$\mathcal{S} \triangleq \{\bar{P}, f(\bar{P}), f^2(\bar{P}), \dots\}, \qquad (11.7)$$
where $f^n(\cdot)$ is the $n$-fold composition of $f(\cdot)$, with the convention that $f^0(X) = X$. The set $\mathcal{S}$ consists of all possible values of $P_{k|k}$ at the remote estimator, as well as all possible values of $P_{e,k|k}$ at the eavesdropper. There is a total ordering on the elements of $\mathcal{S}$ given by (see, e.g., [26])
$$\bar{P} \le f(\bar{P}) \le f^2(\bar{P}) \le \dots \qquad (11.8)$$
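The recursions (11.5)–(11.6) and their eavesdropper analogue are straightforward to simulate. The sketch below (illustrative only; the function names and the `policy` callback are our own, not from the chapter) propagates both error covariances over one Monte Carlo run for an arbitrary transmission policy.

```python
import numpy as np

def f(X, A, Q):
    """The map f(X) = A X A^T + Q from (11.6)."""
    return A @ X @ A.T + Q

def simulate_covariances(A, Q, P_bar, policy, p, p_e, K, rng):
    """Propagate the remote estimator and eavesdropper error covariances (11.5)
    under a transmission policy nu_k = policy(P, P_e) over one run of length K."""
    P, P_e = P_bar.copy(), P_bar.copy()
    tr, tr_e = [], []
    for _ in range(K):
        nu = policy(P, P_e)             # transmission decision nu_k
        gamma = rng.random() < p        # remote estimator channel outcome
        gamma_e = rng.random() < p_e    # eavesdropper channel outcome
        P = P_bar if (nu and gamma) else f(P, A, Q)
        P_e = P_bar if (nu and gamma_e) else f(P_e, A, Q)
        tr.append(np.trace(P))
        tr_e.append(np.trace(P_e))
    return np.mean(tr), np.mean(tr_e)

# Example: a threshold rule that transmits whenever the estimator covariance is
# at least f^2(P_bar); since the elements of S are totally ordered, comparing
# traces is equivalent to comparing the matrices themselves within S.
# rng = np.random.default_rng(0)
# threshold = f(f(P_bar, A, Q), A, Q)
# policy = lambda P, P_e: np.trace(P) >= np.trace(threshold)
```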

11.3 Covariance-Based Measure of Security

We first look at a covariance-based measure of security, where the aim is to keep the estimation error covariance at the eavesdropper large. Similar notions have been used in [9–13], which studied the estimation of constant parameters or i.i.d. sources in the presence of an eavesdropper. An alternative measure of security, in terms of the information revealed to the eavesdropper, is considered in Sect. 11.4.


11.3.1 Eavesdropper Error Covariance Known at Remote Estimator

In this subsection, we consider the case where the transmission decisions $\nu_k$ can depend on the error covariances of both the remote estimator, $P_{k-1|k-1}$, and the eavesdropper, $P_{e,k-1|k-1}$. The situation where $P_{e,k-1|k-1}$ is not known at the remote estimator will be considered in Sect. 11.3.2.

11.3.1.1 Optimal Transmission Scheduling

We wish to minimize the expected error covariance at the remote estimator, while trying to keep the expected error covariance at the eavesdropper above a certain level. To accomplish this, we will formulate a problem that minimizes a linear combination of the expected estimation error covariance and the negative of the expected eavesdropper error covariance. The problem can be written as the finite horizon problem:
$$\min_{\{\nu_k\}} \sum_{k=1}^{K} E[\beta\,\mathrm{tr}P_{k|k} - (1-\beta)\,\mathrm{tr}P_{e,k|k}], \qquad (11.9)$$
where $\beta \in (0,1)$ is a design parameter. (One can also consider the equivalent problem $\min_{\{\nu_k\}} \sum_{k=1}^{K} E[\mathrm{tr}P_{k|k} - \alpha\,\mathrm{tr}P_{e,k|k}]$ for some $\alpha > 0$, with $\alpha$ being a Lagrange multiplier.) The parameter $\beta$ controls the Pareto trade-off between estimation performance at the remote estimator and at the eavesdropper, with a larger $\beta$ placing more importance on keeping $E[P_{k|k}]$ small, and a smaller $\beta$ placing more importance on keeping $E[P_{e,k|k}]$ large (or $-E[P_{e,k|k}]$ small). We can rewrite (11.9) as
$$\begin{aligned}
&\min_{\{\nu_k\}} \sum_{k=1}^{K} E\big[E[\beta\,\mathrm{tr}P_{k|k} - (1-\beta)\,\mathrm{tr}P_{e,k|k} \mid P_{0|0}, P_{e,0|0}, \mathcal{I}_{k-1}, \mathcal{I}_{e,k-1}, \nu_k]\big] \\
&= \min_{\{\nu_k\}} \sum_{k=1}^{K} E\big[E[\beta\,\mathrm{tr}P_{k|k} - (1-\beta)\,\mathrm{tr}P_{e,k|k} \mid P_{k-1|k-1}, P_{e,k-1|k-1}, \nu_k]\big] \\
&= \min_{\{\nu_k\}} \sum_{k=1}^{K} E\big[\beta(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})) - (1-\beta)(\nu_k p_e\,\mathrm{tr}\bar{P} + (1-\nu_k p_e)\,\mathrm{tr}f(P_{e,k-1|k-1}))\big].
\end{aligned} \qquad (11.10)$$
The first equality in (11.10) holds since $P_{k-1|k-1}$ (similarly for $P_{e,k-1|k-1}$) is a deterministic function of $P_{0|0}$ and $\mathcal{I}_{k-1}$, and $P_{k|k}$ is a function of $P_{k-1|k-1}$, $\nu_k$, and $\gamma_k$. The second equality follows from computing the conditional expectations $E[P_{k|k} \mid P_{k-1|k-1}, \nu_k]$ and $E[P_{e,k|k} \mid P_{e,k-1|k-1}, \nu_k]$.


Problem (11.9) can be solved numerically using dynamic programming. First define the functions $J_k(\cdot,\cdot): \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ recursively by
$$\begin{aligned}
J_{K+1}(P, P_e) &= 0, \\
J_k(P, P_e) &= \min_{\nu\in\{0,1\}} \Big\{ \beta(\nu p\,\mathrm{tr}\bar{P} + (1-\nu p)\,\mathrm{tr}f(P)) - (1-\beta)(\nu p_e\,\mathrm{tr}\bar{P} + (1-\nu p_e)\,\mathrm{tr}f(P_e)) \\
&\quad + \nu p p_e J_{k+1}(\bar{P}, \bar{P}) + \nu p(1-p_e) J_{k+1}(\bar{P}, f(P_e)) + \nu(1-p) p_e J_{k+1}(f(P), \bar{P}) \\
&\quad + \big(\nu(1-p)(1-p_e) + 1 - \nu\big) J_{k+1}(f(P), f(P_e)) \Big\} \qquad (11.11)
\end{aligned}$$
for $k = K, \dots, 1$. Then problem (11.9) is solved by computing $J_k(P_{k-1|k-1}, P_{e,k-1|k-1})$ for $k = K, K-1, \dots, 1$ [27]. We note that problem (11.9) can be solved exactly numerically since, for any horizon $K$, the possible values of $(P_{k|k}, P_{e,k|k})$ will lie in the finite set $\{\bar{P}, f(\bar{P}), \dots, f^K(\bar{P})\} \times \{\bar{P}, f(\bar{P}), \dots, f^K(\bar{P})\}$, which has finite cardinality $(K+1)^2$.
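A direct implementation of the backward recursion (11.11) is sketched below (our own illustrative code under the chapter's assumptions). States are indexed by the exponent $n$ in $f^n(\bar P)$; capping the index at $K$ is harmless because, starting from $\bar P$, the exponent cannot exceed the horizon.

```python
import numpy as np
from itertools import product

def finite_horizon_dp(A, Q, P_bar, p, p_e, beta, K):
    """Backward recursion (11.11) over the grid {P_bar, f(P_bar), ..., f^K(P_bar)}^2.
    Returns the value functions J[k, n, n_e] and optimal decisions nu_opt[k, n, n_e]."""
    S = [P_bar]
    for _ in range(K):
        S.append(A @ S[-1] @ A.T + Q)             # f^n(P_bar)
    tr = np.array([np.trace(X) for X in S])       # tr f^n(P_bar)
    nxt = lambda n: min(n + 1, K)                 # index of f(f^n(P_bar)), capped at K
    J = np.zeros((K + 2, K + 1, K + 1))           # J[K+1, :, :] = 0
    nu_opt = np.zeros((K + 1, K + 1, K + 1), dtype=int)
    for k in range(K, 0, -1):
        for n, n_e in product(range(K + 1), repeat=2):
            costs = []
            for nu in (0, 1):
                stage = (beta * (nu * p * tr[0] + (1 - nu * p) * tr[nxt(n)])
                         - (1 - beta) * (nu * p_e * tr[0] + (1 - nu * p_e) * tr[nxt(n_e)]))
                future = (nu * p * p_e * J[k + 1, 0, 0]
                          + nu * p * (1 - p_e) * J[k + 1, 0, nxt(n_e)]
                          + nu * (1 - p) * p_e * J[k + 1, nxt(n), 0]
                          + (nu * (1 - p) * (1 - p_e) + 1 - nu) * J[k + 1, nxt(n), nxt(n_e)])
                costs.append(stage + future)
            J[k, n, n_e] = min(costs)
            nu_opt[k, n, n_e] = int(np.argmin(costs))
    return J, nu_opt
```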

11.3.1.2 Structural Properties of Optimal Transmission Schedule

We will next derive some structural properties of the optimal solution to problem (11.9). In particular, we will show that (1) for a fixed $P_{e,k-1|k-1}$, the optimal policy is to transmit if and only if $P_{k-1|k-1}$ exceeds a threshold (which, in general, depends on $k$ and on $P_{e,k-1|k-1}$), and (2) for a fixed $P_{k-1|k-1}$, the optimal policy is to transmit if and only if $P_{e,k-1|k-1}$ is below a threshold (which depends on $k$ and $P_{k-1|k-1}$). Knowing that the optimal policies are of threshold-type gives insight into the form of the optimal solution, with characteristics of event-triggered estimation, and can also provide computational savings when solving problem (11.9) via finding the thresholds numerically, see, e.g., Remark 4.4 of [28].

Theorem 11.1 (i) For fixed $P_{e,k-1|k-1}$, the optimal solution to problem (11.9) is a threshold policy on $P_{k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, P_{e,k-1|k-1}) = \begin{cases} 0, & \text{if } P_{k-1|k-1} \le P_k^*, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_k^* \in \mathcal{S}$ depends on $k$ and $P_{e,k-1|k-1}$.
(ii) For fixed $P_{k-1|k-1}$, the optimal solution to problem (11.9) is a threshold policy on $P_{e,k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, P_{e,k-1|k-1}) = \begin{cases} 0, & \text{if } P_{e,k-1|k-1} \ge P_{e,k}^*, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_{e,k}^* \in \mathcal{S}$ depends on $k$ and $P_{k-1|k-1}$.


Proof See Theorem III.3 of [1].

Part (i) of Theorem 11.1 is quite intuitive, and similar to threshold-based scheduling policies in event-triggered estimation [22, 29]. Part (ii) is perhaps less intuitive, and one should think of it as saying that it is better to not transmit when the eavesdropper covariance is high, in order to further increase the eavesdropper covariance at the next time step. By combining parts (i) and (ii) of Theorem 11.1, we have that at each time $k$, the values of $(P_{k-1|k-1}, P_{e,k-1|k-1})$ can be divided into a "transmit" and a "don't transmit" region separated by a staircase-like threshold, see Fig. 11.2 in Sect. 11.5.

11.3.2 Eavesdropper Error Covariance Unknown at Remote Estimator

In order to construct $P_{e,k|k}$ at the remote estimator as per Sect. 11.3.1, the process $\{\gamma_{e,k}\}$ for the eavesdropper's channel needs to be known, which in practice may be difficult to achieve. We consider in this subsection the situation where the remote estimator knows only the probability of successful eavesdropping $p_e$ (see also Remark 11.2) but not the actual realizations $\gamma_{e,k}$. Thus, the transmit decisions $\nu_k$ can now only depend on $P_{k-1|k-1}$ and our beliefs of $P_{e,k-1|k-1}$ constructed from knowledge of the previous $\{\nu_j\}_{j\le k}$. We will first derive the recursion for the conditional distribution of the eavesdropper's error covariance maintained at the remote estimator (i.e., the "belief states" [27, p. 258]), and then consider the optimal transmission scheduling problem.

11.3.2.1 Conditional Distribution of Error Covariances at Eavesdropper

Define the belief vector
$$\pi_{e,k} = \begin{bmatrix} \pi_{e,k}^{(0)} \\ \pi_{e,k}^{(1)} \\ \vdots \\ \pi_{e,k}^{(K)} \end{bmatrix} \triangleq \begin{bmatrix} P\big(P_{e,k|k} = \bar{P} \mid \nu_0,\dots,\nu_k\big) \\ P\big(P_{e,k|k} = f(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \\ \vdots \\ P\big(P_{e,k|k} = f^K(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \end{bmatrix}. \qquad (11.12)$$
We note that by our assumption of $P_{e,0|0} = \bar{P}$, we have $\pi_{e,k}^{(K)} = 0$ for $k < K$. Denote the set of all possible $\pi_{e,k}$'s by $\Pi_e \subseteq \mathbb{R}^{K+1}$. The vector $\pi_{e,k}$ represents our beliefs on $P_{e,k|k}$ given the transmission decisions $\nu_0,\dots,\nu_k$. In order to formulate the transmission scheduling problem as a partially observed problem, we will first derive a recursive relationship between $\pi_{e,k+1}$ and $\pi_{e,k}$ given the next transmission decision $\nu_{k+1}$.


Now when $\nu_{k+1} = 0$, we have $P_{e,k+1|k+1} = f(P_{e,k|k})$ with probability one, and thus $\pi_{e,k+1} = \big[0\ \ \pi_{e,k}^{(0)}\ \dots\ \pi_{e,k}^{(K-1)}\big]^T$. When $\nu_{k+1} = 1$, then $P_{e,k+1|k+1} = \bar{P}$ with probability $p_e$ and $P_{e,k+1|k+1} = f(P_{e,k|k})$ with probability $1 - p_e$, and thus $\pi_{e,k+1} = \big[p_e\ \ (1-p_e)\pi_{e,k}^{(0)}\ \dots\ (1-p_e)\pi_{e,k}^{(K-1)}\big]^T$. Hence, by defining
$$\Phi(\pi_e, \nu) \triangleq \begin{cases} \big[0\ \ \pi_e^{(0)}\ \dots\ \pi_e^{(K-1)}\big]^T, & \nu = 0, \\ \big[p_e\ \ (1-p_e)\pi_e^{(0)}\ \dots\ (1-p_e)\pi_e^{(K-1)}\big]^T, & \nu = 1, \end{cases}$$
we obtain the recursive relationship $\pi_{e,k+1} = \Phi(\pi_{e,k}, \nu_{k+1})$.
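The update $\Phi$ amounts to shifting the probability mass one step along $\mathcal{S}$ and, when a transmission occurs, moving a fraction $p_e$ of it back to $\bar P$. A minimal sketch (our own helper, assuming a NumPy probability vector of length $K+1$):

```python
import numpy as np

def belief_update(pi_e, nu, p_e):
    """One step of pi_{e,k+1} = Phi(pi_{e,k}, nu_{k+1}).
    pi_e is a length-(K+1) probability vector over {P_bar, f(P_bar), ..., f^K(P_bar)}."""
    shifted = np.concatenate(([0.0], pi_e[:-1]))   # covariance grows one step: f^i -> f^{i+1}
    if nu == 0:
        return shifted                             # no transmission: growth with probability one
    out = (1.0 - p_e) * shifted                    # transmission missed by the eavesdropper ...
    out[0] += p_e                                  # ... or overheard, resetting the covariance to P_bar
    return out
```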

11.3.2.2 Optimal Transmission Scheduling

We again wish to minimize a linear combination of the expected error covariance at the remote estimator and the negative of the expected error covariance at the eavesdropper. Since $P_{e,k-1|k-1}$ is now not available, the optimization problem will be formulated as a partially observed one with $\nu_k$ dependent on $(P_{k-1|k-1}, \pi_{e,k-1})$. We then have the following problem (cf. (11.9)):
$$\min_{\{\nu_k\}} \sum_{k=1}^{K} E\bigg[\beta\big(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})\big) - (1-\beta)\Big(\nu_k p_e\,\mathrm{tr}\bar{P} + (1-\nu_k p_e)\sum_{i=0}^{K}\mathrm{tr}\big(f^{i+1}(\bar{P})\big)\pi_{e,k-1}^{(i)}\Big)\bigg]. \qquad (11.13)$$
Problem (11.13) can be solved by using the dynamic programming algorithm for partially observed problems [27, p. 256]. Let the functions $J_k(\cdot,\cdot): \mathcal{S} \times \Pi_e \to \mathbb{R}$ be defined recursively as
$$\begin{aligned}
J_{K+1}(P, \pi_e) &= 0, \\
J_k(P, \pi_e) &= \min_{\nu\in\{0,1\}} \Big\{ \beta(\nu p\,\mathrm{tr}\bar{P} + (1-\nu p)\,\mathrm{tr}f(P)) - (1-\beta)\Big(\nu p_e\,\mathrm{tr}\bar{P} + (1-\nu p_e)\sum_{i=0}^{K}\mathrm{tr}\big(f^{i+1}(\bar{P})\big)\pi_e^{(i)}\Big) \\
&\quad + \nu p J_{k+1}\big(\bar{P}, \Phi(\pi_e,1)\big) + \nu(1-p) J_{k+1}\big(f(P), \Phi(\pi_e,1)\big) + (1-\nu) J_{k+1}\big(f(P), \Phi(\pi_e,0)\big) \Big\} \qquad (11.14)
\end{aligned}$$


for $k = K, \dots, 1$. Problem (11.13) is then solved numerically by computing $J_k(P_{k-1|k-1}, \pi_{e,k-1})$ for $k = K, K-1, \dots, 1$. The number of possible values of $(P_{k|k}, \pi_{e,k})$ is again finite, but now of cardinality $(K+1) \times (1 + 2 + \dots + 2^K) = (K+1)(2^{K+1}-1)$. This is exponential in $K$, and may be very large when $K$ is large. To reduce the complexity, one could consider instead probability distributions
$$\begin{bmatrix} \pi_{e,k}^{(0)} \\ \pi_{e,k}^{(1)} \\ \vdots \\ \pi_{e,k}^{(N-1)} \\ \pi_{e,k}^{(N)} \end{bmatrix} \triangleq \begin{bmatrix} P\big(P_{e,k|k} = \bar{P} \mid \nu_0,\dots,\nu_k\big) \\ P\big(P_{e,k|k} = f(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \\ \vdots \\ P\big(P_{e,k|k} = f^{N-1}(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \\ P\big(P_{e,k|k} \ge f^{N}(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \end{bmatrix}$$
for some $N < K$, and update the beliefs via
$$\Phi_N(\pi_e, \nu) \triangleq \begin{cases} \big[0\ \ \pi_e^{(0)}\ \dots\ \pi_e^{(N-2)}\ \ \pi_e^{(N-1)}+\pi_e^{(N)}\big]^T, & \nu = 0, \\ \big[p_e\ \ (1-p_e)\pi_e^{(0)}\ \dots\ (1-p_e)\pi_e^{(N-2)}\ \ (1-p_e)(\pi_e^{(N-1)}+\pi_e^{(N)})\big]^T, & \nu = 1. \end{cases}$$
Discretizing the space of $\pi_{e,k}$ to include the cases with up to $N-1$ successive packet drops or non-transmissions, with the remaining cases grouped into the single component $\pi_{e,k}^{(N)}$, will then give a state space of cardinality $(K+1)(2^{N+1}-1)$.

11.3.2.3 Structural Properties of Optimal Transmission Schedule

We have the following.

Theorem 11.2 For fixed $\pi_{e,k-1}$, the optimal solution to problem (11.13) is a threshold policy on $P_{k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, \pi_{e,k-1}) = \begin{cases} 0, & \text{if } P_{k-1|k-1} \le P_k^*, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_k^*$ depends on $k$ and $\pi_{e,k-1}$.

Proof See Theorem IV.2 of [1].

The above results establish threshold-type structural properties for the optimal transmission scheduling policies in the finite horizon case, whether the instantaneous eavesdropper error covariance is known or unknown at the remote estimator. However, this threshold can be time-varying, as illustrated in Figs. 11.2 and 11.3 in Sect. 11.5. In the next section, we consider the infinite horizon long-term average cost formulation and show that one can find a transmission scheduling policy which drives the average estimation error covariance at the eavesdropper unbounded, while keeping


the average remote estimation error covariance bounded, by cleverly choosing the threshold for transmission scheduling. We also show that this threshold depends only on the remote estimator error covariance matrix.

11.3.3 Infinite Horizon

We now consider the infinite horizon situation. Let us first provide a condition on when $E[P_{k|k}]$ will be bounded. If $A$ is stable, this is always the case. In the case where $A$ is unstable, consider the policy with $\nu_k = 1, \forall k$, which transmits at every time instant, and is similar to the situation where local state estimates are transmitted over packet dropping links [20, 30]. From the results of [20, 30], we have that $E[P_{k|k}]$ is bounded if and only if
$$p > 1 - \frac{1}{|\lambda_{\max}(A)|^2}. \qquad (11.15)$$
Thus, condition (11.15) will ensure the existence of policies which keep $E[P_{k|k}]$ bounded. In Theorem 11.3, we will show that for unstable systems, in the infinite horizon situation, there exist transmission policies which can drive the expected eavesdropper error covariance unbounded while keeping the expected estimator error covariance bounded. This behavior can be achieved for all probabilities of successful eavesdropping $p_e$ strictly less than one. First we present a preliminary result.

Lemma 11.1 Suppose that $A$ is unstable, and that $p > 1 - \frac{1}{|\lambda_{\max}(A)|^2}$. Consider the threshold policy which transmits at time $k$ if and only if $P_{k-1|k-1} \ge f^t(\bar{P})$ for some $t \in \mathbb{N}$, where $f(\cdot)$ is defined in (11.6). Then $\lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{k|k}] < \infty$ for all finite $t \in \mathbb{N}$, and can be computed as
$$\lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{k|k}] = \sum_{j=0}^{t} \frac{p}{pt+1}\,\mathrm{tr}\big(f^j(\bar{P})\big) + \sum_{j=t+1}^{\infty} \frac{(1-p)^{j-t}\,p}{pt+1}\,\mathrm{tr}\big(f^j(\bar{P})\big). \qquad (11.16)$$

Proof This can be shown using results from Sect. IV-C of [22].

Theorem 11.3 Suppose that $A$ is unstable, and that $p > 1 - \frac{1}{|\lambda_{\max}(A)|^2}$. Then for any $p_e < 1$, there exist transmission policies in the infinite horizon situation such that $\limsup_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{k|k}]$ is bounded and $\liminf_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{e,k|k}]$ is unbounded.

Proof See Theorem III.6 of [1].
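The right-hand side of (11.16) is easy to evaluate numerically by truncating the infinite sum; the tail decays geometrically with ratio $(1-p)|\lambda_{\max}(A)|^2$, which is less than one whenever condition (11.15) holds. A short sketch (illustrative only, hypothetical function name):

```python
import numpy as np

def average_trace_estimator(A, Q, P_bar, p, t, n_terms=200):
    """Numerically evaluate the right-hand side of (11.16) for the threshold f^t(P_bar),
    truncating the infinite sum after n_terms tail terms."""
    tr_f = []
    X = P_bar.copy()
    for _ in range(t + n_terms + 1):
        tr_f.append(np.trace(X))
        X = A @ X @ A.T + Q                       # advance to the next f^j(P_bar)
    c = p / (p * t + 1.0)
    head = c * sum(tr_f[j] for j in range(t + 1))
    tail = sum(((1 - p) ** (j - t)) * c * tr_f[j] for j in range(t + 1, t + n_terms + 1))
    return head + tail
```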


In particular, it turns out that the simple threshold policy of Lemma 11.1, with $t$ sufficiently large that the condition
$$p_e < 1 - \frac{1}{p\,|\lambda_{\max}(A)|^{2(t+1)}} \qquad (11.17)$$
is satisfied, will have the required properties of Theorem 11.3. We note that the threshold policy depends on the estimation error covariance at the remote estimator, but not on the error covariance at the eavesdropper. In a similar setup but transmitting measurements and without using feedback acknowledgements, mechanisms were derived in [15] for making the expected eavesdropper error covariance unbounded while keeping the expected estimation error covariance bounded, under the more restrictive condition that $p_e < p$. In a slightly different context with coding over uncertain wiretap channels, it was shown in [14] that for unstable systems one can keep the estimation error at the legitimate receiver bounded while the eavesdropper estimation error becomes unbounded for a sufficiently large coding block length.

Remark 11.1 It is perhaps instructive to give an intuitive explanation for why Theorem 11.3 holds, even in cases where $p_e > p$. Firstly, the threshold policy constructed in the proof of Theorem 11.3 will transmit whenever the error covariance at the remote estimator is above a threshold, thus intuitively such a policy should keep the expected estimation error covariance bounded no matter how large the threshold is set (provided condition (11.15) is satisfied). On the other hand, from the eavesdropper's viewpoint, by independence of the estimator and eavesdropper channels, and since the threshold policy doesn't depend on the eavesdropper covariances, the times at which these transmissions occur look "random" to the eavesdropper. By increasing the threshold, these "random" times of transmission will occur less and less often, until eventually the expected eavesdropper covariance becomes unbounded, and this will happen no matter how large $p_e$ is (so long as $p_e < 1$).

Remark 11.2 Condition (11.17) for determining a sufficiently large threshold depends on the value of $p_e$. However, the result in Theorem 11.3 can still apply even without exact knowledge of $p_e$. For instance, suppose we only know an upper bound on $p_e$, so that $p_e \le p_{e,\max}$. (If we regard $p_e$ as a decreasing function of the distance from the sensor to the eavesdropper, upper bounding $p_e$ corresponds to there being no eavesdropper within a certain radius of the sensor.) Let $t^*$ be the smallest $t$ satisfying condition (11.17) for the true value $p_e$, and $t^+$ be the smallest $t$ satisfying condition (11.17) for $p_{e,\max}$. It is easy to see that $t^+ \ge t^*$. By using $f^{t^+}(\bar{P})$ as the threshold, one will still have $\lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{k|k}]$ being bounded by Lemma 11.1, and $\lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{e,k|k}]$ being unbounded, since
$$|\lambda_{\max}(A)|\,\big(p(1-p_e)\big)^{1/(2(t^{+}+1))} \ge |\lambda_{\max}(A)|\,\big(p(1-p_e)\big)^{1/(2(t^{*}+1))} > 1.$$
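In practice one can simply search for the smallest $t$ satisfying (11.17). The following sketch (our own helper, not from [1]) does so; for the example system of Sect. 11.5 with $p = 0.6$ it returns $t = 2$ for $p_e = 0.6$ and $t = 3$ for $p_e = 0.8$, consistent with the values reported there.

```python
import numpy as np

def smallest_threshold_exponent(A, p, p_e, t_max=1000):
    """Return the smallest t such that condition (11.17) holds, i.e.
    p_e < 1 - 1/(p * |lambda_max(A)|^(2(t+1))); returns None if not found below t_max."""
    lam = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius |lambda_max(A)|
    for t in range(t_max + 1):
        if p_e < 1.0 - 1.0 / (p * lam ** (2 * (t + 1))):
            return t
    return None
```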


11.4 Information-Based Measure of Security

We now turn to an alternative measure of security based on the information revealed to the eavesdropper. Here we aim to keep the information leakage to the eavesdropper low. As mentioned before, the use of mutual information in information-theoretic security is standard, while the use of directed information as a measure of privacy/secrecy for control applications has also been justified in [17].

Let $z_{e,k} \triangleq (\nu_k, \nu_k\gamma_{e,k}, \nu_k\gamma_{e,k}\hat{x}^s_{k|k})$ be received by the eavesdropper at time $k$. The conditional mutual information $I_{e,k} \triangleq I(x_k; z_{e,k} \mid z_{e,0},\dots,z_{e,k-1})$ between $x_k$ and $z_{e,k}$ has the following expression (see [19, 31]):
$$I_{e,k} = \frac{1}{2}\log\det P_{e,k|k-1} - \frac{1}{2}\log\det P_{e,k|k} = \frac{1}{2}\log\det f(P_{e,k-1|k-1}) - \frac{1}{2}\log\det P_{e,k|k}. \qquad (11.18)$$
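Given the eavesdropper's covariances, (11.18) can be evaluated directly with a log-determinant; a minimal sketch (illustrative only; the value is in nats):

```python
import numpy as np

def information_leakage(P_e_prev, P_e_curr, A, Q):
    """Conditional mutual information I_{e,k} from (11.18), computed from the
    eavesdropper's error covariances at times k-1 and k."""
    P_pred = A @ P_e_prev @ A.T + Q              # f(P_{e,k-1|k-1}) = P_{e,k|k-1}
    _, logdet_pred = np.linalg.slogdet(P_pred)
    _, logdet_curr = np.linalg.slogdet(P_e_curr)
    return 0.5 * (logdet_pred - logdet_curr)
```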

11.4.1 Eavesdropper Error Covariance Known at Remote Estimator

We wish to minimize a linear combination of the expected error covariance at the remote estimator and the information revealed to the eavesdropper. We consider the finite horizon problem:
$$\min_{\{\nu_k\}} \sum_{k=1}^{K} E[\beta\,\mathrm{tr}P_{k|k} + (1-\beta)I_{e,k}] \qquad (11.19)$$
for some $\beta \in (0,1)$. The design parameter $\beta$ in problem (11.19) now controls the trade-off between estimation performance at the remote estimator and the amount of information revealed to the eavesdropper, with a larger $\beta$ placing more importance on keeping $E[P_{k|k}]$ small, and a smaller $\beta$ placing more importance on keeping $E[I_{e,k}]$ small. Problem (11.19) can be rewritten as
$$\begin{aligned}
&\min_{\{\nu_k\}} \sum_{k=1}^{K} E\big[E[\beta\,\mathrm{tr}P_{k|k} + (1-\beta)I_{e,k} \mid P_{k-1|k-1}, P_{e,k-1|k-1}, \nu_k]\big] \\
&= \min_{\{\nu_k\}} \sum_{k=1}^{K} E\Big[\beta\big(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})\big) + (1-\beta)\nu_k p_e\Big(\frac{1}{2}\log\det f(P_{e,k-1|k-1}) - \frac{1}{2}\log\det\bar{P}\Big)\Big],
\end{aligned} \qquad (11.20)$$


noting that in the computation of $E[I_{e,k} \mid P_{k-1|k-1}, P_{e,k-1|k-1}, \nu_k]$, $P_{e,k|k} = \bar{P}$ when $\nu_k = 1$ and $\gamma_{e,k} = 1$.

Define $J_k(\cdot,\cdot): \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ by
$$\begin{aligned}
J_{K+1}(P, P_e) &= 0, \\
J_k(P, P_e) &= \min_{\nu\in\{0,1\}} \Big\{ \beta(\nu p\,\mathrm{tr}\bar{P} + (1-\nu p)\,\mathrm{tr}f(P)) + (1-\beta)\nu p_e\Big(\frac{1}{2}\log\det f(P_e) - \frac{1}{2}\log\det\bar{P}\Big) \\
&\quad + \nu p p_e J_{k+1}(\bar{P}, \bar{P}) + \nu p(1-p_e) J_{k+1}(\bar{P}, f(P_e)) + \nu(1-p) p_e J_{k+1}(f(P), \bar{P}) \\
&\quad + \big(\nu(1-p)(1-p_e) + 1 - \nu\big) J_{k+1}(f(P), f(P_e)) \Big\}
\end{aligned}$$
for $k = K, \dots, 1$. Problem (11.19) can be solved numerically by computing $J_k(P_{k-1|k-1}, P_{e,k-1|k-1})$ for $k = K, K-1, \dots, 1$. We have the following structural results:

Theorem 11.4 (i) For fixed $P_{e,k-1|k-1}$, the optimal solution to problem (11.19) is a threshold policy on $P_{k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, P_{e,k-1|k-1}) = \begin{cases} 0, & \text{if } P_{k-1|k-1} \le P_k^t, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_k^t \in \mathcal{S}$ depends on $k$ and $P_{e,k-1|k-1}$.
(ii) For fixed $P_{k-1|k-1}$, the optimal solution to problem (11.19) is a threshold policy on $P_{e,k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, P_{e,k-1|k-1}) = \begin{cases} 0, & \text{if } P_{e,k-1|k-1} \ge P_{e,k}^t, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_{e,k}^t \in \mathcal{S}$ depends on $k$ and $P_{k-1|k-1}$.

Proof See Theorem V.1 of [32].

The above theorem is essentially an analogue of Theorem 11.1, which was derived for the eavesdropper error covariance-based security metric (as illustrated in Fig. 11.2), in that the optimal transmission scheduling is a threshold-based policy where the threshold is time-varying and is also a function of both the remote estimator and the eavesdropper error covariance matrices. In the next section, we consider the case where the eavesdropper error covariance is unknown at the remote estimator.


11.4.2 Eavesdropper Error Covariance Unknown at Remote Estimator

Recall the belief vector (11.12). In the case where the eavesdropper error covariances are unknown to the remote estimator, we have the problem:
$$\min_{\{\nu_k\}} \sum_{k=1}^{K} E\bigg[\beta\big(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})\big) + (1-\beta)\nu_k p_e\Big(\sum_{i=0}^{K}\frac{1}{2}\big(\log\det f^{i+1}(\bar{P})\big)\pi_{e,k-1}^{(i)} - \frac{1}{2}\log\det\bar{P}\Big)\bigg]. \qquad (11.21)$$

Similar techniques as in Sect. 11.3.2 can be used to numerically solve this problem. Additionally, we have the following structural result.

Theorem 11.5 For fixed $\pi_{e,k-1}$, the optimal solution to problem (11.21) is a threshold policy on $P_{k-1|k-1}$ of the form
$$\nu_k^*(P_{k-1|k-1}, \pi_{e,k-1}) = \begin{cases} 0, & \text{if } P_{k-1|k-1} \le P_k^t, \\ 1, & \text{otherwise}, \end{cases}$$
where the threshold $P_k^t$ depends on $k$ and $\pi_{e,k-1}$.

Proof Similar to the proof of Theorem IV.2 of [1].

11.4.3 Infinite Horizon

In this subsection, we will show that the expected amount of information revealed to the eavesdropper is both upper and lower bounded, and then briefly present optimal transmission scheduling.

11.4.3.1 Upper and Lower Bounds on the Revealed Information

Recall that $\mathcal{V}$ is the class of stationary deterministic policies where the sensor at each step can either transmit its local estimate or not. We have:

Theorem 11.6 For any transmission policy in $\mathcal{V}$ and all $p_e$,
$$\limsup_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} E[I_{e,k}] < \frac{1}{2}\log\det f(\bar{P}) - \frac{1}{2}\log\det\bar{P} \triangleq \Delta_U < \infty.$$

Proof See Theorem 1 of [2].


Theorem 11.7 Let $A$ be an unstable matrix, and assume that $p_e > 0$. Then, for any transmission policy in $\mathcal{V}$ satisfying
$$\limsup_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} \mathrm{tr}\,E[P_{k|k}] < \infty, \qquad (11.22)$$
one must have
$$\liminf_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} E[I_{e,k}] > p_e \log|\lambda_{\max}(A)| \triangleq \tilde{\Delta}_L > 0.$$

Proof See Theorem 2 of [2].

In Theorem 11.3, we showed that for unstable systems, one could always find transmission policies which keep the expected estimation error covariance bounded, while the expected eavesdropper covariance becomes unbounded. By contrast, Theorem 11.7 shows that there are no transmission policies in $\mathcal{V}$ which can drive the expected information revealed to the eavesdropper to zero. The two measures of security, namely, $E[P_{k|k}]$ investigated in Sect. 11.3 and $E[I_{e,k}]$ considered in the current section, therefore appear to be fundamentally different. An intuitive explanation for this is that any amount of information transmission from the sensor to the remote estimator, however small (but necessary to keep the average estimation error at the remote estimator bounded), is going to reveal a non-zero amount of information to the eavesdropper. Therefore, there are no transmission policies that can drive the average directed mutual information at the eavesdropper to zero while keeping the average estimator error at the remote estimator bounded.

11.4.3.2 Optimal Transmission Scheduling

Using the information-based measure of security, one can also consider optimal transmission scheduling in the infinite horizon. We briefly discuss this below.

Eavesdropper Error Covariance Known at Remote Estimator

The problem we wish to solve is the infinite horizon problem:
$$\begin{aligned}
\min_{\{\nu_k\}} \lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} E[\beta\,\mathrm{tr}P_{k|k} + (1-\beta)I_{e,k}]
&= \min_{\{\nu_k\}} \lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} E\Big[\beta\big(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})\big) \\
&\quad + (1-\beta)\nu_k p_e\Big(\frac{1}{2}\log\det f(P_{e,k-1|k-1}) - \frac{1}{2}\log\det\bar{P}\Big)\Big],
\end{aligned} \qquad (11.23)$$


where the decisions $\nu_k$ can depend on both $P_{k-1|k-1}$ and $P_{e,k-1|k-1}$. When $A$ is unstable, $E[P_{k|k}]$ can be kept bounded if and only if condition (11.15) is satisfied. Furthermore, by Theorem 11.6, the expected information revealed is upper bounded for all $p_e$. Thus, under condition (11.15) there exist policies for problem (11.23) with bounded cost, and the problem is therefore well-posed.

Numerical solutions to problem (11.23) can be found using, e.g., the relative value iteration algorithm [27, p. 431]. Note that the number of possible values of $(P_{k|k}, P_{e,k|k})$ is infinite. Thus, in practice, the state space will need to be truncated for numerical solution. Define the finite set $\mathcal{S}_N \subseteq \mathcal{S}$ by
$$\mathcal{S}_N \triangleq \{\bar{P}, f(\bar{P}), \dots, f^N(\bar{P})\}, \qquad (11.24)$$
which includes the values of all error covariances with up to $N$ successive packet drops or non-transmissions. Then one can run the relative value iteration algorithm over the finite state space $\mathcal{S}_N \times \mathcal{S}_N$ (of cardinality $(N+1)^2$), and compare the solutions obtained as $N$ increases to determine an appropriate value for $N$ [33].

Eavesdropper Error Covariance Unknown at Remote Estimator

Consider the belief vector
$$\pi_{e,k} = \begin{bmatrix} \pi_{e,k}^{(0)} \\ \pi_{e,k}^{(1)} \\ \pi_{e,k}^{(2)} \\ \vdots \end{bmatrix} \triangleq \begin{bmatrix} P\big(P_{e,k|k} = \bar{P} \mid \nu_0,\dots,\nu_k\big) \\ P\big(P_{e,k|k} = f(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \\ P\big(P_{e,k|k} = f^2(\bar{P}) \mid \nu_0,\dots,\nu_k\big) \\ \vdots \end{bmatrix}. \qquad (11.25)$$
By defining
$$\Phi(\pi_e, \nu) \triangleq \begin{cases} \big[0\ \ \pi_e^{(0)}\ \ \pi_e^{(1)}\ \dots\big]^T, & \nu = 0, \\ \big[p_e\ \ (1-p_e)\pi_e^{(0)}\ \ (1-p_e)\pi_e^{(1)}\ \dots\big]^T, & \nu = 1, \end{cases}$$
we can obtain the recursive relationship $\pi_{e,k+1} = \Phi(\pi_{e,k}, \nu_{k+1})$. This leads to the transmission scheduling problem:
$$\min_{\{\nu_k\}} \lim_{K\to\infty} \frac{1}{K}\sum_{k=1}^{K} E\bigg[\beta\big(\nu_k p\,\mathrm{tr}\bar{P} + (1-\nu_k p)\,\mathrm{tr}f(P_{k-1|k-1})\big) + (1-\beta)\nu_k p_e\Big(\sum_{i=0}^{\infty}\frac{1}{2}\big(\log\det f^{i+1}(\bar{P})\big)\pi_{e,k-1}^{(i)} - \frac{1}{2}\log\det\bar{P}\Big)\bigg]. \qquad (11.26)$$


Problem (11.26) can be solved by using the relative value iteration algorithm on the reformulation of this partially observed problem into a fully observed one (a technique described in, e.g., [27, p. 252]), together with truncation of the belief vector (11.25).
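As a rough illustration of the numerical procedure described above, the following sketch (our own code under the stated truncation, not the authors' implementation) applies relative value iteration to the simpler known-covariance problem (11.23) over the truncated grid $\mathcal{S}_N \times \mathcal{S}_N$, again indexing states by the exponents $(n, n_e)$; the constant subtracted at the reference state converges to an approximation of the optimal average cost.

```python
import numpy as np
from itertools import product

def relative_value_iteration(A, Q, P_bar, p, p_e, beta, N, iters=2000, tol=1e-9):
    """Relative value iteration for problem (11.23) over S_N x S_N, cf. (11.24)."""
    S = [P_bar]
    for _ in range(N):
        S.append(A @ S[-1] @ A.T + Q)
    tr = np.array([np.trace(X) for X in S])
    logdet = np.array([np.linalg.slogdet(X)[1] for X in S])
    nxt = lambda n: min(n + 1, N)                 # truncation: f^{N+1} is mapped back to f^N
    h = np.zeros((N + 1, N + 1))
    avg_cost = 0.0
    for _ in range(iters):
        h_new = np.empty_like(h)
        for n, n_e in product(range(N + 1), repeat=2):
            costs = []
            for nu in (0, 1):
                stage = (beta * (nu * p * tr[0] + (1 - nu * p) * tr[nxt(n)])
                         + (1 - beta) * nu * p_e * 0.5 * (logdet[nxt(n_e)] - logdet[0]))
                future = (nu * p * p_e * h[0, 0]
                          + nu * p * (1 - p_e) * h[0, nxt(n_e)]
                          + nu * (1 - p) * p_e * h[nxt(n), 0]
                          + (nu * (1 - p) * (1 - p_e) + 1 - nu) * h[nxt(n), nxt(n_e)])
                costs.append(stage + future)
            h_new[n, n_e] = min(costs)
        avg_cost = h_new[0, 0]                    # value at reference state before normalization
        h_new -= avg_cost                         # keep relative values bounded
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return avg_cost, h
```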

11.5 Numerical Studies

In this section, we illustrate the above theoretical results through a number of numerical experiments, both for the finite horizon and infinite horizon scenarios, as well as for both the error covariance-based and the information-based secrecy metrics. To this end, we consider an unstable (but satisfying the assumed stabilizability and observability criteria) discrete-time linear state-space system with the following parameters:
$$A = \begin{bmatrix} 1.2 & 0.2 \\ 0.3 & 0.8 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad Q = I, \quad R = 1.$$
The steady-state estimation error covariance $\bar{P}$ is easily computed as
$$\bar{P} = \begin{bmatrix} 1.3411 & -0.8244 \\ -0.8244 & 1.0919 \end{bmatrix}.$$
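The quoted value of $\bar P$ can be reproduced by iterating the Riccati recursion of Sect. 11.2 for the parameters above; a minimal script (illustrative only):

```python
import numpy as np

A = np.array([[1.2, 0.2],
              [0.3, 0.8]])
C = np.array([[1.0, 1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Iterate the Riccati recursion for P^s_{k|k} until convergence to P_bar.
P = Q.copy()
for _ in range(10_000):
    P_pred = A @ P @ A.T + Q
    K_gain = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    P_new = P_pred - K_gain @ C @ P_pred
    converged = np.max(np.abs(P_new - P)) < 1e-12
    P = P_new
    if converged:
        break

print(np.round(P, 4))   # close to [[1.3411, -0.8244], [-0.8244, 1.0919]] as reported above
```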

11.5.1 Finite Horizon

We will first solve the finite horizon problem (11.9) with $K = 10$. The packet reception probability is chosen to be $p = 0.6$, and the eavesdropping probability $p_e = 0.6$. We use the design parameter $\beta = 0.7$. Figure 11.2 plots $\nu_k^*$ for different values of $P_{k-1|k-1} = f^n(\bar{P})$ and $P_{e,k-1|k-1} = f^{n_e}(\bar{P})$, at the time step $k = 4$, and Fig. 11.3 plots $\nu_k^*$ at the time step $k = 6$. We observe a threshold behavior in both $P_{k-1|k-1}$ and $P_{e,k-1|k-1}$, with the thresholds also dependent on the time $k$, in agreement with Theorem 11.1.

Next, we consider the performance as $\beta$ is varied, for the problems (11.9) and (11.13) using the covariance measure of security, and the problems (11.19) and (11.21) using the information measure of security. Figure 11.4 plots the trace of the expected error covariance at the estimator $\mathrm{tr}E[P_{k|k}]$ versus the trace of the expected error covariance at the eavesdropper $\mathrm{tr}E[P_{e,k|k}]$, while Fig. 11.5 plots the trace of the expected error covariance $\mathrm{tr}E[P_{k|k}]$ versus the expected information $E[I_{e,k}]$ revealed to the eavesdropper, where $I_{e,k}$ is given by (11.18). Each point is obtained by averaging over $10^5$ Monte Carlo runs. We see that by varying $\beta$ we obtain trade-offs between $\mathrm{tr}E[P_{k|k}]$ and $\mathrm{tr}E[P_{e,k|k}]$, and between $\mathrm{tr}E[P_{k|k}]$ and $E[I_{e,k}]$, with the trade-offs being better when the eavesdropper error covariance is known. We observe that the solutions to problems (11.19) and (11.21) give worse performance than problems (11.9) and (11.13) in terms of the trade-off between $\mathrm{tr}E[P_{k|k}]$ and $\mathrm{tr}E[P_{e,k|k}]$, but better performance in terms of the trade-off between $\mathrm{tr}E[P_{k|k}]$ and $E[I_{e,k}]$, since they directly optimize this trade-off.

Fig. 11.2 $\nu_k^*$ for different values of $P_{k-1|k-1} = f^n(\bar{P})$ and $P_{e,k-1|k-1} = f^{n_e}(\bar{P})$, at time $k = 4$

Fig. 11.3 $\nu_k^*$ for different values of $P_{k-1|k-1} = f^n(\bar{P})$ and $P_{e,k-1|k-1} = f^{n_e}(\bar{P})$, at time $k = 6$

11.5.2 Infinite Horizon

We next present results for the infinite horizon situation. Figure 11.6 plots some values of $\mathrm{tr}E[P_{k|k}]$ and $\mathrm{tr}E[P_{e,k|k}]$, using the threshold policy in the proof of Theorem 11.3, which transmits at time $k$ if and only if $P_{k-1|k-1} \ge f^t(\bar{P})$. Values of $\mathrm{tr}E[P_{k|k}]$ are computed using the analytic expression (11.16), while values of $\mathrm{tr}E[P_{e,k|k}]$ are each obtained by taking the time average of a Monte Carlo run of length $10^6$.

Fig. 11.4 Expected error covariance at estimator versus expected error covariance at eavesdropper. Finite horizon

Fig. 11.5 Expected error covariance at estimator versus expected information revealed to eavesdropper. Finite horizon

Fig. 11.6 Expected error covariance at estimator versus expected error covariance at eavesdropper. Infinite horizon

Fig. 11.7 Expected error covariance at estimator versus expected information revealed to eavesdropper. ©[2019] IEEE. Reprinted, with permission, from [2]

In the case $p = 0.6$, $p_e = 0.6$, condition (11.17) for unboundedness of the expected eavesdropper covariance is satisfied when $t \ge 2$, and in the case $p = 0.6$, $p_e = 0.8$ (where the eavesdropping probability is higher than the packet reception probability), condition (11.17) is satisfied for $t \ge 3$. We see that in both cases, by using a sufficiently large $t$, one can make the expected error covariance of the eavesdropper very large, while keeping the expected error covariance at the estimator bounded.

Finally, we consider the performance obtained by solving the infinite horizon problems (11.23) and (11.26) as $\beta$ is varied, both when the eavesdropper error covariance is known and unknown. We use $p = 0.6$ and $p_e = 0.8$. In numerical solutions, we use the truncated set $\mathcal{S}_N$ from (11.24) with $N = 10$. Figure 11.7 plots the trace of the expected error covariance $\mathrm{tr}E[P_{k|k}]$ versus the expected information $E[I_{e,k}]$ revealed to the eavesdropper, with $\mathrm{tr}E[P_{k|k}]$ and $E[I_{e,k}]$ obtained by taking the time average of a single Monte Carlo run of length $10^6$. The upper and lower bounds $\Delta_U$ and $\tilde{\Delta}_L$ from Theorems 11.6 and 11.7 are also shown. We observe a trade-off between $\mathrm{tr}E[P_{k|k}]$ and $E[I_{e,k}]$. Furthermore, as predicted by our result in Theorem 11.7, the expected information revealed to the eavesdropper is always lower bounded away from zero.

11.6 Conclusion

This chapter has studied the problem of remote state estimation in the presence of an eavesdropper. We have considered the scheduling of sensor transmissions, where each transmission can be overheard by an eavesdropper with a certain probability. The scheduling is done by solving an optimization problem that minimizes a combination of the expected error covariance at the remote estimator and the negative of the expected error covariance at the eavesdropper. We have derived structural results on the optimal transmission scheduling which show a thresholding behavior in the optimal policies. In the infinite horizon situation, we have shown that with unstable systems one can keep the expected estimation error covariance bounded while the expected eavesdropper error covariance becomes unbounded. An alternative measure of security in terms of the information revealed to the eavesdropper has also been considered, and optimal transmission scheduling problems were analyzed. In the infinite horizon situation, our results show that for the class of policies considered, if one wishes to keep the expected error covariance bounded, then one must unavoidably reveal a non-zero expected amount of information to the eavesdropper.

Notes: This chapter is mostly based on the works of [1, 2]. The reader is referred to these works for proofs and further detail. Additional alternative approaches and mechanisms for implementing security in the presence of eavesdroppers include [34–37].


References

1. Leong, A.S., Quevedo, D.E., Dolz, D., Dey, S.: Transmission scheduling for remote state estimation over packet dropping links in the presence of an eavesdropper. IEEE Trans. Autom. Control 64(9), 3732–3739 (2019)
2. Leong, A.S., Quevedo, D.E., Dolz, D., Dey, S.: Information bounds for state estimation in the presence of an eavesdropper. IEEE Control Syst. Lett. 3(3), 547–552 (2019)
3. Liang, Y., Poor, H.V., Shamai, S.: Secure communication over fading channels. IEEE Trans. Inf. Theory 54(6), 2470–2492 (2008)
4. Regalia, P.A., Khisti, A., Liang, Y., Tomasin, S. (eds.): Special issue on secure communications via physical-layer and information-theoretic techniques. Proc. IEEE 103(10) (2015)
5. Shannon, C.E.: Communication theory of secrecy systems. Bell Syst. Tech. J. 28(4), 656–715 (1949)
6. Wyner, A.D.: The wire-tap channel. Bell Syst. Tech. J. 54(8), 1355–1387 (1975)
7. Zhou, X., Song, L., Zhang, Y. (eds.): Physical Layer Security in Wireless Communications. CRC Press, Boca Raton (2014)
8. Kailkhura, B., Nadendla, V.S.S., Varshney, P.K.: Distributed inference in the presence of eavesdroppers: a survey. IEEE Commun. Mag. 53(6), 40–46 (2015)
9. Aysal, T.C., Barner, K.E.: Sensor data cryptography in wireless sensor networks. IEEE Trans. Inf. Forensics Secur. 3(2), 273–289 (2008)
10. Reboredo, H., Xavier, J., Rodrigues, M.R.D.: Filter design with secrecy constraints: the MIMO Gaussian wiretap channel. IEEE Trans. Signal Process. 61(15), 3799–3814 (2013)
11. Guo, X., Leong, A.S., Dey, S.: Estimation in wireless sensor networks with security constraints. IEEE Trans. Aerosp. Electron. Syst. 53(2), 544–561 (2017)
12. Guo, X., Leong, A.S., Dey, S.: Distortion outage minimization in distributed estimation with estimation secrecy outage constraints. IEEE Trans. Signal Inf. Process. Netw. 3(1), 12–28 (2017)
13. Goken, C., Gezici, S.: ECRB based optimal parameter encoding under secrecy constraints. IEEE Trans. Signal Process. 66(13), 3556–3570 (2018)
14. Wiese, M., Oechtering, T.J., Johansson, K.H., Papadimitratos, P., Sandberg, H., Skoglund, M.: Secure estimation and zero-error secrecy capacity. IEEE Trans. Autom. Control 64(3), 1047–1062 (2019)
15. Tsiamis, A., Gatsis, K., Pappas, G.J.: State estimation with secrecy against eavesdroppers. In: Proceedings of IFAC World Congress, Toulouse, France, July, pp. 8715–8722 (2017)
16. Massey, J.L.: Causality, feedback and directed information. In: Proceedings of ISITA, Waikiki, HI, November, pp. 303–305 (1990)
17. Tanaka, T., Skoglund, M., Sandberg, H., Johansson, K.H.: Directed information and privacy loss in cloud-based control. In: Proceedings of ACC, Seattle, USA, May, pp. 1666–1672 (2017)
18. Silva, E.I., Derpich, M.S., Østergaard, J.: A framework for control system design subject to average data-rate constraints. IEEE Trans. Autom. Control 56(8), 1886–1899 (2011)
19. Tanaka, T., Sandberg, H.: SDP-based joint sensor and controller design for information-regularized optimal LQG control. In: Proceedings of IEEE Conference on Decision and Control, Osaka, Japan, December, pp. 4486–4491 (2015)
20. Xu, Y., Hespanha, J.P.: Estimation under uncontrolled and controlled communications in networked control systems. In: Proceedings of IEEE Conference on Decision and Control, Seville, Spain, December, pp. 842–847 (2005)
21. Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice Hall, New Jersey (1979)
22. Leong, A.S., Dey, S., Quevedo, D.E.: Sensor scheduling in variance based event triggered estimation with packet drops. IEEE Trans. Autom. Control 62(4), 1880–1895 (2017)
23. Schenato, L., Sinopoli, B., Franceschetti, M., Poolla, K., Sastry, S.S.: Foundations of control and estimation over lossy networks. Proc. IEEE 95(1), 163–187 (2007)
24. Tse, D.N.C., Viswanath, P.: Fundamentals of Wireless Communication. Cambridge University Press, Cambridge (2005)
25. Paar, C., Pelzl, J.: Understanding Cryptography. Springer, Heidelberg (2010)
26. Shi, L., Zhang, H.: Scheduling two Gauss-Markov systems: an optimal solution for remote state estimation under bandwidth constraint. IEEE Trans. Signal Process. 60(4), 2038–2042 (2012)
27. Bertsekas, D.P.: Dynamic Programming and Optimal Control, vol. I, 3rd edn. Athena Scientific, Massachusetts (2005)
28. Leong, A.S., Dey, S., Quevedo, D.E.: Transmission scheduling for remote state estimation and control with an energy harvesting sensor. Automatica 91, 54–60 (2018)
29. Trimpe, S., D’Andrea, R.: Event-based state estimation with variance-based triggering. IEEE Trans. Autom. Control 59(12), 3266–3281 (2014)
30. Schenato, L.: Optimal estimation in networked control systems subject to random delay and packet drop. IEEE Trans. Autom. Control 53(5), 1311–1317 (2008)
31. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley-Interscience, New Jersey (2006)
32. Leong, A.S., Quevedo, D.E., Dolz, D., Dey, S.: Remote state estimation over packet dropping links in the presence of an eavesdropper (2017). Available at arXiv:1702.02785
33. Sennott, L.I.: Stochastic Dynamic Programming and the Control of Queueing Systems. Wiley-Interscience, New York (1999)
34. Tsiamis, A., Gatsis, K., Pappas, G.J.: State-secrecy codes for networked linear systems. IEEE Trans. Autom. Control (2019). To be published
35. Leong, A.S., Redder, A., Quevedo, D.E., Dey, S.: On the use of artificial noise for secure state estimation in the presence of eavesdroppers. In: Proceedings of European Control Conference, Limassol, Cyprus, June (2018)
36. Tsiamis, A., Gatsis, K., Pappas, G.J.: An information matrix approach for state secrecy. In: Proceedings of IEEE Conference on Decision and Control, Miami, FL, December, pp. 2062–2067 (2018)
37. Özgen, S., Kohn, S., Noack, B., Hanebeck, U.D.: State estimation with model-mismatch-based secrecy against eavesdroppers. In: Proceedings of IEEE ICPS, Taipei, Taiwan, May, pp. 808–812 (2019)

Chapter 12

Secure Networked Control Systems Design Using Semi-homomorphic Encryption

Yankai Lin, Farhad Farokhi, Iman Shames, and Dragan Nešić

Abstract A secure and private nonlinear networked control systems (NCSs) design using semi-homomorphic encryption is studied. Static feedback controllers are used and network architectures are provided to enable control signal computation using encrypted signals directly. As a result, the security of the NCSs is further enhanced by preserving the privacy of information flowing through the whole network; in traditional encryption techniques, by contrast, encrypted signals are decrypted before control computation and are encrypted again after computation for transmission. While computing on encrypted signals is highly desirable from a privacy point of view, it induces additional technical difficulties in the design and analysis of NCSs compared to standard NCSs. In this chapter, we provide sufficient conditions on the encryption parameters that guarantee robust stability of the NCS in the presence of disturbances in a semi-global practical sense, and discuss the trade-offs between the required computational resources, security guarantees, and the closed-loop performance. The proof technique is based on Lyapunov methods.

Y. Lin · F. Farokhi · D. Nešić
Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, VIC, Australia

I. Shames
School of Engineering, The Australian National University, Acton, ACT 0200, Australia

© Springer Nature Switzerland AG 2021
R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_12


12.1 Introduction

Networked control systems (NCSs) are an emerging technology that differs from traditional control systems: shared communication channels are employed to transmit the sensor and actuation data instead of traditional point-to-point connections [5]. This leads to many advantages, including easier installation and maintenance as well as lower cost, weight, and volume. Moreover, it allows the control of large-scale and distributed systems such as power systems and irrigation networks. On the other hand, some undesirable phenomena are also induced by the use of a shared network, including quantization, scheduling, packet dropouts, and so on. Dealing with these issues results in extra technical difficulties in the analysis of the system. Hybrid-system-based approaches [11] are widely used in the design and control of NCSs. Such methodologies allow controller synthesis while taking non-uniform sampling intervals and other network-induced behavior into account in a unified manner. Emulation-based design techniques are used in [13]. Alternatively, a framework for controller design for NCSs based on approximate discrete-time models is proposed in [43]. Similar approaches can also be found in many other works summarized in [14].

Network security and privacy, an important issue for NCSs, is not fully discussed in [14]. In the work of [40], the authors categorize different cyber-security attack scenarios based on the resources the adversary needs. Eavesdropping attacks are an important class of those attacks, in which a malicious attacker tries to monitor the data going through the control loop to extract valuable information about the system. Thus, the confidentiality of data flowing through the network is compromised. Furthermore, eavesdropping provides tools for the adversary to potentially launch more complex and harmful attacks such as replay attacks [27]. One possible solution to this problem is using encryption to ensure that information going through the communication channel is protected. As a result, the adversary cannot acquire any information about the system by listening to the communication channels, provided the encryption is implemented properly. Figure 12.1 illustrates a typical control loop that uses such an encryption-decryption scheme. Due to the prevalence of wireless NCSs, data flowing through the network is the most vulnerable information that adversaries can easily access. It can be seen that, by using encryption-decryption techniques, it is difficult for eavesdroppers to access data flowing through the network; however, if the attacker has access to the controller side, the encryption scheme fails to guarantee the security and privacy of the NCS.

Fig. 12.1 NCS consisting of a plant P, controller C, and network N, with encryption-decryption units


Fig. 12.2 NCS consisting of a plant P, controller C, and network N, with semi-homomorphic encryption-decryption units

Following this motivation, we focus on employing homomorphic encryption schemes for NCSs configured in the form of Fig. 12.2, which allows implementing controllers using encrypted messages directly, therefore preventing a curious adversary from obtaining sensitive information about the system. Homomorphic encryption is a form of encryption that allows computation on cipher-texts (encrypted data), generating an encrypted result which, when decrypted, matches the result of the operations performed on the plain-texts (unencrypted data). The precise definition of homomorphic encryption will be given in Sect. 12.3. An example of such encryption methods is given in [10], where addition and multiplication of two unencrypted integers are implemented via appropriate operations on their corresponding encrypted versions. Semi-homomorphic encryption, on the other hand, only allows either summation or multiplication to be performed on cipher-texts, but is easier to implement, which is crucial for enabling real-time computation. In the earlier work by Kogiso and Fujita [20], the authors propose a method to encrypt the controller using RSA [37] and El Gamal [9] encryption, which are schemes semi-homomorphic under multiplication of cipher-texts. Stability and performance guarantees of the closed-loop system using encryption, however, are not studied thoroughly in [20]. In [8], the authors employ Paillier encryption and provide semi-global practical stability guarantees for linear time-invariant (LTI) systems without disturbances, and asymptotic stability is achieved in [19] by introducing a dynamic quantizer. In [26], privacy-preserving distributed projected gradient-based algorithms are proposed, where both private- and public-key algorithms are developed, but an affine-gradient assumption is required for the public-key algorithms. In [3], a private cloud-based quadratic optimization algorithm is studied and the trade-off between communication complexity and privacy guarantees is discussed. Similar ideas are also applied to private distributed averaging [12, 38] and localization problems [2]. However, most of the works mentioned above focus on linear dynamics, with the exception of [19] where feedback linearization is applied to achieve linear dynamics. Moreover, robust stabilization problems with modeling uncertainties and disturbances are not studied.

In this chapter, we consider the problem of stabilizing a nonlinear NCS subject to disturbances using a static controller, employing the Paillier and RSA cryptosystems, which are semi-homomorphic. The Paillier cryptosystem is a probabilistic public-key cryptography scheme that allows addition of two cipher-texts and multiplication of a cipher-text by a plain-text, while RSA encryption allows multiplication of multiple cipher-texts. This allows us to use Paillier encryption for a class of nonlinear control systems, with the help of RSA encryption if necessary.


In this work, we use a discrete-time model to describe the dynamics of the NCS for simplicity, as the main theme of this work is the privacy and security of the NCS. The designer follows an emulation-based approach to first design a controller for the discrete-time plant without the use of encryption, and then, based on the performance and stability requirements, the designer chooses the parameters of the encryption scheme. The approach can be generalized to cover systems modeled differently, for instance, using the hybrid modeling framework as shown in [13]; this will be the focus of future work.

To summarize, the main contribution of the chapter is that we design privacy-preserving static feedback controllers using semi-homomorphic encryption which robustly stabilize a class of NCSs with disturbances modeled by difference equations. Different ways of implementing an encryption-based control system in practice are presented. The stability properties are analyzed in a unified framework by providing sufficient conditions on the encryption parameters for stabilization of the nonlinear closed-loop systems. In particular, semi-global practical input-to-state stability (ISS) of the NCS under the proposed encryption schemes is established. Under additional sets of assumptions, asymptotic or exponential stability of the aforementioned systems is demonstrated. The stability analysis is Lyapunov based and is different from methods typically used in the quantized control literature [23, 24]. This is because, in addition to measurement quantization, an additional source of quantization, i.e., gain matrix quantization, is also considered. This work extends our previous results [25] to the case where the system is subject to external disturbances. In addition, by imposing a homogeneity assumption on the closed-loop system, we provide less conservative sufficient conditions for the same problem. This also allows the use of dynamic quantizers under additional assumptions to provide stronger stability guarantees. Lastly, we show that the homogeneous systems results recover the linear systems results as a special case, and in this manner the results in [8] are recovered.

The rest of the chapter is organized as follows. Preliminaries are given in Sect. 12.2. Background material about Paillier encryption is presented in Sect. 12.3. The NCS model and the problem formulation are presented in Sect. 12.4. Controller design for stabilization of the system is given in Sect. 12.5. Results for the homogeneous case are presented in Sect. 12.6, which cover the results for linear systems as a special case. We illustrate the results through an example of rigid body control in Sect. 12.7. Finally, concluding remarks are given in Sect. 12.8.

12.2 Preliminaries and Notations

Let R := (−∞, ∞), R≥0 := [0, ∞), Z≥0 := {0, 1, 2, 3, ...}, Z>0 := {1, 2, 3, ...}, and Z_n := {0, 1, 2, ..., n − 1}. A function α : R≥0 → R≥0 is of class K if it is continuous, zero at zero and strictly increasing, and it is of class K∞ if, in addition, it is unbounded. A continuous function β : R≥0 × R≥0 → R≥0 is of class KL if for each fixed t ≥ 0, β(·, t) is of class K, and for each fixed s ≥ 0, β(s, ·) is non-increasing and satisfies lim_{t→∞} β(s, t) = 0. We let Id denote the identity function from R≥0 to R≥0, and we use γ_1 ◦ γ_2 to denote the composition of two functions γ_1 and γ_2 which are from R≥0 to R≥0. The Euclidean norm of a vector x ∈ R^n is denoted by |x| = (Σ_{i=1}^n x_i^2)^{1/2} and its ∞-norm is denoted by |x|_∞ = max_{1≤i≤n} |x_i|. For any function φ : Z≥0 → R^n, we denote ||φ|| = sup{|φ(k)| : k ∈ Z≥0} ≤ ∞. In the case when φ is bounded, this is the standard l∞ norm. For a matrix A ∈ R^{n×m}, ||A||_F denotes its Frobenius norm and |A| := sup_{x≠0} |Ax|/|x| denotes its induced 2-norm. The scalar a_{ji} denotes the element of A in the j-th row and i-th column for j ∈ {1, 2, ..., n} and i ∈ {1, 2, ..., m}.

We consider a class of systems given by

x⁺ = F(x, u),    (12.1)

where x ∈ R^{n_x} is the state of the system, F : R^{n_x} × R^{n_u} → R^{n_x} is continuous and the inputs u(·) are functions from Z≥0 to R^{n_u}. We assume that F(0, 0) = 0, so the origin is a fixed point of the system without inputs. We use φ(·, x_0, u) to denote the trajectory of system (12.1) with initial state x(0) = x_0 and input signal u. For simplicity, we only consider the time-invariant case, but the results can be easily generalized to time-varying systems.

Definition 12.1 ([17]) System (12.1) is (globally) input-to-state stable if there exist β ∈ KL and σ ∈ K such that for each input u ∈ l∞ and each x_0 ∈ R^{n_x}, it holds that

|φ(k, x_0, u)| ≤ β(|x_0|, k) + σ(||u||),    (12.2)

for each k ∈ Z≥0. If it is the case that u(k) = 0, ∀k ∈ Z≥0, and |φ(k, x_0)| ≤ β(|x_0|, k), the origin of the system is uniformly globally asymptotically stable (UGAS). The system is uniformly globally exponentially stable (UGES) if the above holds with β(s, k) = C s e^{−ρk} for some C > 0 and ρ > 0.

It is worth mentioning that, by causality, the previous definition will not change if (12.2) is replaced by

|φ(k, x_0, u)| ≤ β(|x_0|, k) + σ(||u_{[k−1]}||),    (12.3)

where u_{[k−1]} denotes the truncation of u at k − 1.

12.3 Background on Paillier Encryption

In this section, we formally introduce Paillier encryption and the necessary background knowledge of fixed-point operations that will enable us to apply Paillier encryption to the controller design.


12.3.1 Fixed-Point Operations

The main aim of this subsection is to provide the necessary background for signed fixed-point rational numbers represented in base 2 and to introduce the mapping that transfers them to integers so that they can be encrypted via the method of Paillier. For non-negative integers n ≥ m with n + m > 0, denote the set of signed rational numbers in base 2 as Q(n, m). Precisely,

Q(n, m) = {b ∈ Q : b = −b_n 2^{n−m−1} + Σ_{i=1}^{n−1} b_i 2^{i−m−1}, b_i ∈ {0, 1}, ∀i ∈ {1, ..., n}}.

It can be verified that this set contains all rational numbers between −2^{n−m−1} and 2^{n−m−1} − 2^{−m}, separated from each other by the resolution 2^{−m}. For a digital processor to use these rational numbers, it is desirable to transform them to integers. To do so, we define the mapping χ_{n,m} : Q(n, m) → Z_{2^n} by χ_{n,m}(b) = (2^m b) mod 2^n. Moreover, the inverse mapping χ_{n,m}^{−1} : Z_{2^n} → Q(n, m) is defined as χ_{n,m}^{−1}(a) = (a − 2^n I_{a≥2^{n−1}})/2^m, where I_p is the characteristic function that is 1 if the statement p is true and 0 otherwise. Using these functions, we state some results that enable us to perform certain operations. The complete proofs of these results can be found in [8].

Proposition 12.1 The following statements are true: (i) χ_{n,m}^{−1}(χ_{n,m}(b)) = b, ∀b ∈ Q(n, m); (ii) χ_{n,m}(χ_{n,m}^{−1}(a)) = a, ∀a ∈ Z_{2^n}.

This proposition shows that every operation performed on the set of signed fixed-point rational numbers can be transformed into an operation performed on the set of integers modulo 2^n and vice versa. The next proposition provides more detailed relationships between these operations. Whenever appropriate, a succinct notation with respect to m and n is used, e.g., χ_{n,m} and χ_{n,m}^{−1} are written as χ and χ^{−1}. The following operations on Z_{2^n} are defined:

Definition 12.2 For a, a* ∈ Z_{2^n}: (i) a ⊕^n a* := (a + a*) mod 2^n; (ii) a ⊗^n a* := a a* mod 2^n.

Finally, the following result is particularly useful for cases where fractional bits exist.

Proposition 12.2 The following statement is true for all b, b* ∈ Q(n, m) such that b b* ∈ Q(n, m):

χ_{n+2m,0}(2^{2m} b b*) = χ_{n+2m,0}(2^m b) ⊗^{n+2m} χ_{n+2m,0}(2^m b*).
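As a concrete illustration, the following minimal Python sketch (not taken from the chapter; the helper names chi and chi_inv are ours) implements the mapping χ_{n,m} and its inverse and checks Propositions 12.1 and 12.2 on a small example.

```python
# Minimal sketch of the fixed-point lifting chi_{n,m} and its inverse;
# names and numbers are illustrative, not from the chapter.

def chi(b, n, m):
    """chi_{n,m}(b) = (2^m * b) mod 2^n, mapping Q(n, m) into Z_{2^n}."""
    return int(round(b * 2**m)) % 2**n

def chi_inv(a, n, m):
    """chi_{n,m}^{-1}(a) = (a - 2^n * I_{a >= 2^(n-1)}) / 2^m."""
    return (a - 2**n * (a >= 2**(n - 1))) / 2**m

if __name__ == "__main__":
    n, m = 8, 3                          # 8 total bits, 3 fractional bits
    b, b_star = 1.625, -2.5              # both lie in Q(8, 3)
    assert chi_inv(chi(b, n, m), n, m) == b           # Proposition 12.1 (i)

    # Proposition 12.2: the product b*b_star via integers modulo 2^(n+2m)
    big = n + 2 * m
    lhs = chi(2**(2 * m) * b * b_star, big, 0)
    rhs = (chi(2**m * b, big, 0) * chi(2**m * b_star, big, 0)) % 2**big
    assert lhs == rhs
```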



12.3.2 Paillier Encryption

In this subsection, we introduce the steps and properties of Paillier encryption. The security guarantees of the Paillier encryption rely on a standard cryptographic assumption named Decisional Composite Residuosity (DCR) [32]. The steps to implement a Paillier encryption scheme are given in Algorithm 12.1.

Algorithm 12.1 Paillier encryption
Key generation:
1: Select large prime numbers p and q randomly and independently of each other such that gcd(pq, (1 − p)(1 − q)) = 1, where gcd(a, b) refers to the greatest common divisor of a and b
2: Compute public-key N = pq
3: Calculate private-key λ = lcm(p − 1, q − 1) and μ = λ^{−1} mod N, where lcm(a, b) refers to the least common multiple of a and b
Encryption:
1: Select random r ∈ Z*_N := {x ∈ Z_N | gcd(x, N) = 1}
2: Construct the cipher-text of a message t ∈ Z_N as E(t; r) = (N + 1)^t r^N mod N^2
Decryption:
1: For any cipher-text c ∈ Z_{N^2}, the plain-text is given by D(c) = L(c^λ mod N^2) μ mod N, where L(x) = (x − 1)/N
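The toy Python sketch below mirrors Algorithm 12.1 with tiny, hard-coded primes purely for illustration; a real deployment must generate large random primes, and the helper names (keygen, encrypt, decrypt) are our own, not from the chapter. The modular inverse uses Python 3.8+ pow(·, −1, ·).

```python
# Toy rendering of Algorithm 12.1 (Paillier) with small primes; illustration only.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=61, q=53):                      # assumed toy primes, not secure
    assert gcd(p * q, (p - 1) * (q - 1)) == 1
    N = p * q                                # public key
    lam = lcm(p - 1, q - 1)                  # private key part: lambda
    mu = pow(lam, -1, N)                     # private key part: mu = lambda^{-1} mod N
    return N, (lam, mu)

def encrypt(t, N, r=None):
    if r is None:                            # random r in Z_N^*
        while True:
            r = random.randrange(2, N)
            if gcd(r, N) == 1:
                break
    return (pow(N + 1, t, N * N) * pow(r, N, N * N)) % (N * N)

def decrypt(c, N, key):
    lam, mu = key
    L = (pow(c, lam, N * N) - 1) // N        # L(x) = (x - 1) / N
    return (L * mu) % N

if __name__ == "__main__":
    N, sk = keygen()
    a, b = 123, 456
    # Additive homomorphism (cf. Proposition 12.3 below): the product of two
    # cipher-texts decrypts to the sum of the plain-texts.
    c = (encrypt(a, N) * encrypt(b, N)) % (N * N)
    assert decrypt(c, N, sk) == (a + b) % N
```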

In Paillier encryption, N is the public key which is shared with all parties and is used for encryption. The pair (λ, μ) is the private key, which is accessible only to the entity that needs to decrypt the message. It is shown in [32] that

D(E(t; r)) = t, ∀r ∈ Z*_N, ∀t ∈ Z_N.    (12.4)

This gives the invertible relationship between cipher-texts and plain-texts.

Remark 12.1 It can be seen that carrying out the aforementioned encryption scheme and conducting computation on encrypted data involves quantization of the signals and calculations using large integers. Moreover, truncating extra digits is not allowed. This is due to the fact that, for two cipher-texts E(a) and E(b), a sufficiently small |E(a) − E(b)| in general does not imply a small |a − b|. This also reveals that the proposed encryption scheme prevents eavesdropping attacks at the cost of losing the information of the plain-texts available to the designer.

Before stating results on the homomorphic properties of Paillier encryption, we first state the following definition:

Definition 12.3 A homomorphic encryption scheme is a public-key encryption scheme for which D(E(a) ⊗ E(b)) = a ⊕ b, where ⊗, ⊕ are some group operations on the cipher-text and plain-text space, respectively.

If, for instance, ⊕ represents addition, then the encryption scheme is additively homomorphic. The following proposition shows that under certain conditions Paillier encryption is additively homomorphic.

Proposition 12.3 The following relationships of encrypted data hold:


(i) ∀r, r* ∈ Z*_N and ∀t, t* ∈ Z_N such that t + t* ∈ Z_N, E(t; r)E(t*; r*) mod N^2 = E(t + t*; r r*);
(ii) ∀r ∈ Z*_N and ∀t, t* ∈ Z_N such that t t* ∈ Z_N, E(t; r)^{t*} mod N^2 = E(t t*; r^{t*}).

These two results show that it is possible to do some calculations directly on the cipher-texts and then decrypt. However, since it is impossible to check the sign of a cipher-text, it is more difficult to implement multiplication. The following proposition shows that implementing multiplication of an integer and a cipher-text is possible using the operator defined in Definition 12.2.

Proposition 12.4 Assume that N > 2^n and a ⊗^n a* ∈ Z_{2^n}. Then, for any r ∈ Z*_N and a, a* ∈ Z_{2^n}, we have D(E(a; r)^{a*} mod N^2) mod 2^n = a ⊗^n a*.
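Reusing the toy chi/keygen/encrypt/decrypt helpers sketched above, the snippet below numerically checks Proposition 12.4 on one pair of lifted numbers; the values are chosen so that the proposition's no-overflow requirement is met (cf. Remark 12.2 below).

```python
# Quick check of Proposition 12.4 with the toy helpers above: exponentiating a
# cipher-text by a plain integer, then reducing modulo 2^n, reproduces a (x) a*.
n = 8                                     # here the toy key N = 3233 > 2^8
a = chi(-3.0, n, 0)                       # lifted plain multiplier (253 represents -3)
a_star = chi(5.0, n, 0)                   # lifted value to be encrypted (5)

N, sk = keygen()
c = pow(encrypt(a, N), a_star, N * N)     # E(a; r)^{a*} mod N^2
assert decrypt(c, N, sk) % 2**n == (a * a_star) % 2**n
assert chi_inv((a * a_star) % 2**n, n, 0) == -15.0   # i.e., (-3) * 5, no overflow
```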



Remark 12.2 A necessary condition required in Proposition 12.4 is that the outcome of the multiplication does not overflow, namely, it remains in Z_{2^n}. Note also that multiplication can only be done between two fixed-point rational numbers with given integer and fractional bits; thus, only a subset of real control gains are available for use, depending on the number of bits. Since checking overflows using only cipher-texts is impossible, the designer must carefully choose the relevant parameters to ensure that all algebraic computations are closed with respect to the chosen set of fixed-point rational numbers, otherwise the decrypted signals may not be the same as the original signals.

In view of Remark 12.2 and Proposition 12.2, to be able to implement the controller, the gain matrix must be restricted to the set Q(n_1, m_1)^{n_u×n_y} for some appropriately chosen non-negative integers n_1 ≥ m_1 and n_1 + m_1 > 0. The output from the sensor also needs to be quantized, i.e., it has to be projected to the set Q(n_2, m_2)^{n_y} for non-negative integers n_2 ≥ m_2 and n_2 + m_2 > 0. To simplify the notation for multiplication of matrices of encrypted elements, the following matrix product is defined:

Definition 12.4 For a matrix A ∈ R^{n×m}, a column vector v ∈ R^m and a positive integer N, the matrix product c = A ∗^{N^2} v is defined as

c_j = (∏_{i=1}^{m} v_i^{A_{ji}}) mod N^2,    (12.5)

for j ∈ {1, 2, ..., n}.
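As a small illustration of Definition 12.4 (and of Step 9 of Algorithm 12.2 later on), the sketch below again reuses the toy keygen/encrypt/decrypt helpers; enc_matvec is our own name. For readability the entries are small non-negative integers, with signs and fractions handled by the χ mapping of Sect. 12.3.1 in the full scheme.

```python
# Encrypted matrix-vector product in the sense of Definition 12.4.
def enc_matvec(A, z, N):
    """c_j = (prod_i z_i^{A_ji}) mod N^2: plain-text matrix A acting on encrypted z."""
    N2 = N * N
    out = []
    for row in A:
        c = 1
        for a_ji, z_i in zip(row, z):
            c = (c * pow(z_i, a_ji, N2)) % N2   # Proposition 12.3 (ii), then (i)
        out.append(c)
    return out

if __name__ == "__main__":
    N, sk = keygen()
    A = [[2, 1], [0, 3]]                  # plain-text integer gain matrix
    v = [5, 7]                            # plain-text vector
    z = [encrypt(vi, N) for vi in v]      # encrypted vector
    l = enc_matvec(A, z, N)
    assert [decrypt(lj, N, sk) for lj in l] == [2*5 + 1*7, 3*7]
```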



With this background knowledge, we present the architecture of the NCS in the next section.


12.4 NCS Architecture and Problem Statement

In this section we propose the mathematical model of the NCS considered in this chapter and three different implementations of the NCS that can be described and analyzed in a unified manner: static output feedback control, combination of basis functions, and two-server-based state feedback control. We show that they fit in our general framework for stability analysis. The plant of the NCS is given by the discrete-time system

x⁺ = f(x, u, w),    (12.6)

where x ∈ R^{n_x} is the state of the system, u ∈ R^{n_u} is the control input and w ∈ R^{n_w} is a vector representing the unknown disturbances and modeling uncertainties of the plant, respectively. The mapping f : R^{n_x} × R^{n_u} × R^{n_w} → R^{n_x} is in general nonlinear. In addition, f(0, 0, 0) = 0, so that the origin is a fixed point of the system when the disturbance is absent. Since only addition and multiplication (of cipher-texts and plain-texts) can be done using encrypted data, the following static feedback controller is considered in this chapter:

u = K g(x),    (12.7)

where K ∈ R^{n_u×n_y} is the gain matrix to be designed and g : R^{n_x} → R^{n_y} is in general a continuous nonlinear function with g(0) = 0.

Remark 12.3 The structure of the controller may be restrictive, as dynamic controllers such as PID controllers, typically seen in industry, are not included. There are some non-trivial challenges in dealing with dynamic or observer-based controllers that are beyond the scope of this chapter, and they are left for future work. We refer interested readers to the recent works [7, 12] on this issue, which require certain operations to take place over integers only, and [28], which uses reset controllers.

By substituting (12.7) in (12.6), we arrive at the following expression of the closed-loop system:

x⁺ = f(x, K g(x), w).    (12.8)

The main objective of this study is to implement the controller of the form (12.7) using Paillier encryption to achieve some desired closed-loop stability properties that will be specified later. In view of Remark 12.1, we make the following standing assumption throughout the chapter:

Assumption 12.1 The control computation (12.7) is error free.



Following the discussion in Remark 12.1, errors in cipher-texts may possibly result in unpredictable errors in the corresponding plain-texts. This issue should be


addressed by properly designing the communication networks to ensure high-quality transmission, via parity codes, for instance. If such an error occurs, it can be regarded as a disturbance signal of short duration acting on the system. Provided that the duration is not too long, one would expect the desired properties of the closed-loop system to be preserved. For simplicity of the stability analysis, we make the above assumption to eliminate the effect of such errors.

As mentioned in the previous part of the chapter, Paillier encryption can only be applied to non-negative integers. As a result, for a real-valued signal, we first truncate it to a fixed-point rational number and then lift it to a non-negative integer via χ defined in Sect. 12.3, so that it can be encrypted. Meanwhile, we must also ensure that overflow and underflow do not occur in the algorithmic operations by choosing the public-key length sufficiently large. We follow an emulation-based approach where the designer first designs a gain matrix K ∈ R^{n_u×n_y} that globally stabilizes the system (12.8) and then adjusts the gain matrix to get K̄ ∈ R^{n_u×n_y} that has fixed-point rational elements only. To do so, we introduce non-negative integers m_1, n_1, m_2, and n_2 and perform the following operations on K and g(x):

K̄ = arg min_{T ∈ Q(n_1,m_1)^{n_u×n_y}} ||T − K||_F,    (12.9)

ḡ(x) = arg min_{z ∈ Q(n_2,m_2)^{n_y}} |z − g(x)|.    (12.10)
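For illustration, when the entries already lie within the representable range, the projections (12.9) and (12.10) amount to entrywise rounding to the grid of resolution 2^{−m}, followed by saturation to the range of Q(n, m). The sketch below (the helper name to_Q and the sample numbers are assumptions, not from the chapter) shows one way to realize this.

```python
# Sketch of the projections (12.9)-(12.10): entrywise rounding to resolution 2^-m,
# saturated to the range of Q(n, m). Illustrative only.
import numpy as np

def to_Q(v, n, m):
    """Nearest element of Q(n, m), applied entrywise."""
    lo, hi = -2.0**(n - m - 1), 2.0**(n - m - 1) - 2.0**(-m)
    return np.clip(np.round(np.asarray(v, dtype=float) * 2**m) / 2**m, lo, hi)

K_bar = to_Q([[0.37, -2.6], [0.24, -1.1]], n=12, m=8)   # gain quantization, cf. (12.9)
g_bar = to_Q([0.333, -1.05], n=10, m=6)                 # measurement quantization, cf. (12.10)
```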

Thus, with some abuse of notation, the closed-loop system using encrypted signals can be written in the following form:

x⁺ = f(x, χ_2(D(χ_1(K̄) ∗^{N^2} E(χ_1(ḡ(x))))), w),    (12.11)

where the operators E and D here work element-wise. The function χ_1 converts fixed-point rational numbers to integers while χ_2 does the opposite, and the operation ∗^{N^2} is defined in Definition 12.4.

The following assumption is important to characterize the errors induced by the truncation/quantization process, defined by e_1 = ||K̄ − K||_F and e_2 = |ḡ(x) − g(x)|, and the compact set B_Δ̄ = {x ∈ R^{n_x} : |x| ≤ Δ̄} in which the system is supposed to operate.

Assumption 12.2 For any ϵ_1, ϵ_2, Δ̄ > 0, there exist n_1, m_1, n_2 and m_2 sufficiently large, such that e_1 ≤ ϵ_1, e_2 ≤ ϵ_2, K̄ ∈ Q(n_1, m_1)^{n_u×n_y}, and ḡ(x) ∈ Q(n_2, m_2)^{n_y}, ∀x ∈ B_Δ̄.

In the following, three examples that satisfy the above assumption are given.

12.4.1 Static Output Feedback

This is the simplest structure that fits the model (12.6) and (12.7), where the function g(x) represents the output function and a proportional-type controller is used to


stabilize the plant. If g(x) = x, then it becomes the standard state feedback controller. In this situation, the truncation process takes place on the signal g(x) directly. Then we have the following lemmas:

Lemma 12.1 For any ϵ_1, ϵ_2 > 0, if m_1, m_2, n_u and n_y are such that m_1 ≥ −log_2(ϵ_1/√(n_u n_y)) − 1 and m_2 ≥ −log_2(ϵ_2/√(n_y)) − 1, then e_1 ≤ ϵ_1 and e_2 ≤ ϵ_2.

Proof It can be verified that 2^{−m_1−1} ≤ ϵ_1/√(n_u n_y). Thus, we have

e_1 = (Σ_{i=1,j=1}^{n_u,n_y} (K̄_{ij} − K_{ij})^2)^{1/2} ≤ (Σ_{i=1,j=1}^{n_u,n_y} 2^{−2m_1−2})^{1/2} ≤ ϵ_1.

By following similar steps one can also show that e_2 ≤ ϵ_2 holds under the given assumption.

Lemma 12.2 For any Δ̄ > 0, if m_1, m_2, n_1, n_2, n_u and n_y are such that n_1 ≥ m_1 + 1 + log_2(max_{i,j} |K_{ij}|) and n_2 ≥ m_2 + 1 + log_2(max_{0≤|x|≤Δ̄} |g(x)|_∞), then K̄ ∈ Q(n_1, m_1)^{n_u×n_y} and ḡ(x) ∈ Q(n_2, m_2)^{n_y}, ∀x ∈ B_Δ̄.

Proof It can be proved by noting that 2^{n_1−m_1−1} ≥ max_{i,j} |K_{ij}| and 2^{n_2−m_2−1} ≥ max_{0≤|x|≤Δ̄} |g(x)|_∞.
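A small helper in the spirit of Lemmas 12.1 and 12.2 could pick the bit counts as sketched below; the function names and the numbers are illustrative assumptions, and the ceilings simply ensure the stated inequalities hold.

```python
# Choosing fractional and total bit counts following Lemmas 12.1 and 12.2 (sketch).
from math import ceil, log2

def frac_bits(eps, dim_product):
    """Smallest integer m with m >= -log2(eps / sqrt(dim_product)) - 1."""
    return max(0, ceil(-log2(eps / dim_product**0.5) - 1))

def total_bits(m, max_abs):
    """Smallest integer n with n >= m + 1 + log2(max_abs)."""
    return m + 1 + max(0, ceil(log2(max_abs)))

m1 = frac_bits(eps=1e-3, dim_product=2 * 2)   # e_1 <= 1e-3 for an assumed 2 x 2 gain
n1 = total_bits(m1, max_abs=2.6)              # assuming max |K_ij| = 2.6
m2 = frac_bits(eps=1e-3, dim_product=2)       # e_2 <= 1e-3 for n_y = 2
n2 = total_bits(m2, max_abs=10.0)             # assuming max |g(x)|_inf = 10 on the operating set
```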

12.4.2 Combination of Basis Functions In this subsection, we consider the structure shown in Fig. 12.3. At the plant side, we have sensors capable of performing operations directly to generate functions which we call basis functions. One typical example for this situation is optimization algorithms over the cloud where the agent (plant) sends gradient information rather than the state x over the network. Another example is where multiple elementary functions are used as basis functions to approximate a desired nonlinear function that may

Fig. 12.3 NCS loop with multiple channels


be hard to realize due to computational constraints. This is common in engineering designs, for instance, when the principles of reinforcement learning are applied to adaptive and optimal control problems [15, 22]. A set of linearly independent smooth basis functions are normally used to approximate the cost function and the control policy in policy iteration of an optimal control problem. Problems in distributed averaging can also be put in this form where the basis functions are the states of the agents and the linear combination depends on the choice of the weights over the connection graph. A particular interesting problem in this set up is the potential information leakage between nodes/plants after decryption. This happens when the control input is decrypted at one particular node. If the plant knows the control law, it can infer the state of another node based on its own state information. Detailed discussion of this issue is presented in a recent paper [38]. The proposed encryption-based control law, however, can still be captured by (12.7). It is not hard to see that Lemmas 12.1 and 12.2 also hold in this setting.

12.4.3 Two-Server Structure with State Measurement Only

The most general case considered in this chapter is shown in Fig. 12.4. At the plant side, sensors are only capable of providing state measurements x, but are not able to perform any computations. Since under Paillier encryption the most we can do is addition and multiplication, we introduce two independent non-colluding servers that make use of the continuity of g(x) and another multiplicatively homomorphic cryptosystem to realize the controller (12.7). In this setup, we use a polynomial function g̃(x) to approximate the function g(x) and then apply RSA encryption to perform the polynomial calculations.1 The sensor at the plant side first sends the state measurement x, encrypted using RSA, to Server 1; Server 1 calculates powers of x using cipher-texts directly and then returns them to the plant side. Then Paillier encryption is applied to extract g̃(x) as an approximation of g(x) in the same way as done in Sect. 12.4.1, and the operations on cipher-texts are executed by Server 2. For x = [x_1 x_2 ... x_{n_x}]^T ∈ R^{n_x}, let a = [a_1 a_2 ... a_{n_x}]^T be an n_x-tuple of non-negative integers and define x^a := x_1^{a_1} · x_2^{a_2} ··· x_{n_x}^{a_{n_x}}. From the Weierstrass approximation theorem [35], for any Δ̄, ε_1 > 0, there exists M > 0 such that Σ_{i=1}^{n_x} a_i ≤ M for all a and

|g̃(x) − g(x)| ≤ ε_1, ∀x ∈ B_Δ̄,    (12.12)

where each element of g̃(x) is given by a sum over a finite number of n_x-tuples, Σ_a c_a x^a, where the c_a are real coefficients that will be embedded in the matrix K.

Z N while El Gamal encryption works only on a subset of Z N [20], for simplicity we consider RSA encryption here only, but the analysis can be extended to cover the case where El Gamal encryption is used.

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

269

Fig. 12.4 NCS loop with two servers

In addition to the use of polynomial basis function as a way of approximation, polynomial feedback is also used in other situations due to the development in Sum Of Squares (SOS) programs [34]. Since then, there have been many results in the literature on controller synthesis for nonlinear systems based on SOS approaches. See, for example, [4, 6, 16, 36, 44] on the applications of polynomial feedback controllers for different problems. In order to apply RSA encryption, the state measurement x has to go through the same truncation-lifting process to get a fixed-point rational number x. ˜ Due to uniform ˜ − g( ˜ x)| ˜ ≤ ε2 can be satisfied if continuity of g(x) ˜ over BΔ , for any ε2 > 0, |g(x) the truncation error |x − x| ˜ is small enough. If it is the case that ε1 + ε2 ≤ 2 , it can be verified that Assumption 12.2 is satisfied since the argument regarding 1 in Lemma 12.1 and Lemma 12.2 still hold in this setting. This two-server architecture covers the case of feedback linearization discussed in [19].

12.5 Main Result In this section, we state the main result of this work providing sufficient conditions on the parameters including m 1 , n 1 , m 2 , and n 2 to guarantee certain stability properties. We also show that large m 1 and m 2 lead to better closed-loop performance but requires more computational resources, similarly large n 2 results in a larger region of attraction of the origin.

12.5.1 Robust Stabilization In this part we will show that if the controller (12.7) renders the closed-loop system (12.8) ISS with respect to w, then the controller described by Algorithm 12.2 also inherits a robustness property weaker than ISS. In particular, if (12.8) is ISS with

270

Y. Lin et al.

respect to w, the state remains bounded for any bounded w, however, the controller in Algorithm 12.2 only gives bounded state when the l∞ norm of the disturbance is bounded by a known constant. Moreover, it will be shown that the more computational resources we have, the larger the upper bound of the disturbance can be made. To state the main result, we make the following assumptions: Assumption 12.3 There exists an ISS Lyapunov function V : Rn x → R≥0 for system (12.8), such that the following inequalities hold for any C > 0 and some α1 , α2 , α3 ∈ K∞ , γ ∈ K: (12.13) α1 (|x|) ≤ V (x) ≤ α2 (|x|), V ( f (x, K g(x), w)) − V (x) ≤ −α3 (|x|) + γ(|w|), for any x ∈ Rn x ,

|V (x1 ) − V (x2 )| ≤ L v |x1 − x2 |,

(12.14)

(12.15)

for any max{|x1 |, |x2 |} ≤ C, where L v is the Lipschitz constant of V that may depend on C.  If a globally Lipschitz V can be found, then L v is independent of C. Assumption 12.4 The function f is continuous on Rn x × Rn u × Rn w . Moreover, for every compact set S ⊂ Rn x × Rn u × Rn w , there exists L f > 0 such that | f (x, u 1 , w)  − f (x, u 2 , w)| ≤ L f |u 1 − u 2 |, for all (x, u 1 , w) and (x, u 2 , w) ∈ S. Remark 12.4 Assumption 12.3 is a necessary and sufficient condition for the closedloop system (12.8) to be ISS with respect to w if f is continuous, as shown by Theorem 1 of [17]. Besides, in [17], the existence of a smooth ISS Lyapunov function is also proved and the ISS-gain function σ(s) in Definition 12.1 is given by α1−1 ◦ α2 ◦ α3−1 ◦ (I d + ρ) ◦ γ(s), where ρ is any K∞ function such that I d − ρ ∈ K∞ . This is a stronger property than the Lipschitzness assumed in Assumption 12.3. However, such a Lyapunov function may be very hard to find since it typically requires the solution of the corresponding difference equation whereas finding a non-smooth but Lipschitz Lyapunov function may be easier. In fact, a continuous Lyapunov function implies certain robustness of the closed-loop system which is discussed in detail in [18]. If the above conditions only hold locally or a compact set rather than the origin is to be stabilized, one can obtain a similar result using the same approach.  The control designer can follow the steps given by Algorithm 12.2 to implement the control law using the uniform quantizations in (12.9) and (12.10), which is the main factor that may adversely impact the control performance. As mentioned in the introduction part, this is slightly different from the scenario considered in the quantized control literature [23, 24]. Namely, the above papers only consider quantized measurement or quantized input. However, in our case, in addition to measurement quantization, the gain matrix K also needs to be quantized. This is the reason why we take a different approach compared to [23, 24]. In the presence of disturbances, we have the following result for the closed-loop system (12.11) using the controller described in Algorithm 12.2:

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

271

Algorithm 12.2 Secure and private implementation of the static controller with encrypted measurements Require: n 1 , m 1 , n 2 , m 2 , K¯ , g¯ (x), p, q Ensure: u 1: Set n ← n y + n 1 + n 2 − 1, m ← m 1 + m 2 and y¯ ← g¯ (x) 2: # Control Designer 3: Construct Γ ji ← χn+2m,0 (2m K¯ ji ) 4: # Sensors 5: for i = 1, ..., n y do 6: Transmit z i ← E(χn+2m,0 (2m y¯i ); r ) to the controller 7: end for 8: # Controller N2

9: Construct l = Γ ∗ z 10: Transmit l j to the actuators 11: # Actuators 12: for j = 1, ..., n u do 13: Implement u j ← D(l j )mod2n+2m /22m 14: end for

Theorem 12.1 Suppose there exists K ∈ Rn u ×n y such that Assumptions 12.1–12.4 hold. Then, there exists β ∈ KL such that for any δ, Δ > 0, disturbances upper bounded by the constant Δw > 0 and design parameters 0 < μ1 < 1 and 0 < μ2 < 1, one μ1 (1−μ2 )α3 (δx ) 1 )α3 (δx ) ¯ ,

= and Δ = can choose N > 2n y +n 1 +n 2 −1 , 1 = (1−μ 2 M1 L V L f L L | K¯ | V

f

−1 −1 max{Δ, α1−1 ◦ α2 ◦ α3−1 ◦ ρ−1 (η + γ(Δw )), α1−1 ◦ α2 ◦ α3−1 ◦ μ−1 1 μ2 ρ (γ(Δw ))} to guarantee that any solution φ(·, x, w) to (12.11) satisfies

|φ(k, x0 , w)| ≤ β(|x0 |, k) + σ(||w||) + δ,

(12.16)

for any |x0 | ≤ Δ, where • δx is chosen to satisfy δ = α1−1 ◦ α2 ◦ α3−1 ◦ ρ−1 ( M2 (1−μM11)α3 (δx ) + μ1 (1 − μ2 ) α3 (δx )), • ρ is any K∞ function such that I d − ρ ∈ K∞ , −1 • σ(s) = max{σ1 , σ2 } with σ1 (s) = α1−1 ◦ α2 ◦ α3−1 ◦ μ−1 1 μ2 (I d + ρ) ◦ γ(s), −1 −1 σ2 (s) = α1 ◦ α2 ◦ α3 ◦ (I d + ρ) ◦ γ(2s). The functions α1 , α2 , α3 come from Assumption 12.3, M1 = maxδx ≤|x|≤Δ¯ |g(x)|,  M2 = max0≤|x|≤δx |g(x)|. Proof The proof is given in the Appendix. Remark 12.5 Assumption 12.2 requires the two errors to be small and can be met by selecting the design parameters appropriately based on Lemmas 12.1 and 12.2 and the discussion in Sect. 12.4.3. 

272

Y. Lin et al.

The above theorem shows that the closed-loop system exhibits semi-global inputto-state practical stability, namely, the upper bound of the disturbance can be arbitrary, the norm of the initial state can be from an arbitrarily large set and δ can be chosen arbitrarily small. This is very similar to the regional input-to-state practical stability discussed in [21], and the bound on the disturbance is used to establish the existence of robustly positively invariant set as shown in Definition 2.1 of [21]. One can follow the conditions listed in Theorem 12.1 to select large enough integers such that Assumption 12.2 holds to implement Algorithm 12.2. The tuning parameters μ1 and μ2 characterize the approximation errors of the gain matrix and measurement, respectively, larger (smaller) values lead to more (less) accurate approximations. Remark 12.6 In order to break the Paillier encryption scheme, the potential attacker needs to figure out the z i and l j in Algorithm 12.2. However, this is numerically intractable under Decisional Composite Residuosity assumption if the key length is chosen to be large enough. See [32] and [26] for more details on semantic security analysis of the Paillier encryption scheme. However, RSA encryption is not semantically secure. Consequently, if polynomial feedback is used, Server 1 in Fig. 12.4 must operate in a secure way. Alternatively, El Gamal encryption can be used to perform multiplications.  Remark 12.7 There is a trade-off between the performance of the system and the required resources. Large m and N are required to ensure a large domain of attraction and a small neighborhood around the origin to which the state converges to, when the system is disturbance free. In addition, when the system is subject to disturbances, larger m leads to larger μ1 and μ2 , therefore, smaller ISS gain function from w to x and smaller δ. This further results in a positively larger bound on the disturbance while still guaranteeing boundedness of the closed-loop trajectory of (12.11). On the other hand, larger n and N improves the bound of disturbances by ensuring a larger Δ. However, this also increases the computational cost of the encryption and decryption algorithm. Note that each multiplication in Algorithm 12.2 costs O(N 2 ) operations and each exponentiation costs O(N 3 ) operations. If n u and n y are independent of N , the overall computational complexity scales with O(N 3 ). Moreover, instead of sending packets of length O(n), in Algorithm 12.2, the communication involves sending packets of length O(N ), and N grows exponentially as n increases. This exponential growth of computational and communication burden may induce delays that may not be ignored in the modeling of the system and requires high data rate. This can be interpreted as the cost of achieving security and privacy of the NCS. 

12.5.2 Disturbance Free Case In this case, w ≡ 0 and the closed-loop NCS (12.8) can be written as x + = f (x, K g(x), 0).

(12.17)

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

273

Like the previous case, the control designer also follows the steps given by Algo˜ rithm 12.2 to implement the control law. For simplicity, we use the symbols E˜ and D to include the fixed-point rational number and integer conversions done in Algorithm 12.2 and the corresponding closed-loop system is given by 2

˜ K¯ N∗ E( ˜ g(x))), ¯ 0). x + = f (x, D(

(12.18)

The following result is the special case of Theorem 12.1 when w ≡ 0 and the main result of [25]. Corollary 12.1 Suppose there exists K ∈ Rn u ×n y such that Assumptions 12.1–12.4 hold with w = 0. Then, there exists β ∈ KL such that for any δ, Δ > 0 and design parameters 0 < μ1 < 1 and 0 < μ2 < 1, one can choose N > 2n y +n 1 +n 2 −1 , 1 = (1−μ1 )α3 (δx ) 2 )α3 (δx ) , 2 = μ1 (1−μ and Δ¯ = Δ to guarantee that any solution φ(·, x, w) M1 L V L f L V L f | K¯ | to (12.18) satisfies |φ(k, x0 )| ≤ β(|x0 |, k) + δ, (12.19) for any |x0 | ≤ Δ, where • δx is chosen to satisfy δ = α1−1 ◦ α2 ◦ α3−1 ◦ ρ−1 ( M2 (1−μM11)α3 (δx ) + μ1 (1 − μ2 ) α3 (δx )), • ρ is any K∞ function such that I d − ρ ∈ K∞ . The functions α1 , α2 , α3 come from Assumption 12.3, M1 = maxδx ≤|x|≤Δ¯ |g(x)|,  M2 = max0≤|x|≤δx |g(x)|. Proof The proof is a special case of the proof of Theorem 12.1 and is omitted. This result shows semi-global practical stability of the closed-loop system (12.17) which means that for any set of initial conditions of the form {x0 ∈ Rn x : |x0 | ≤ Δ} where Δ > 0 can be arbitrarily large and for any arbitrarily small δ, one can select large enough integers according to Assumption 12.2 and Corollary 12.1 to implement Algorithm 12.2 and ensure that (12.19) holds.

12.5.3 Security Enhancement In this subsection, we provide a brief analysis on the resilience of the proposed encryption-based control strategy against replay attacks. Replay attack [27] refers to the situation where an attacker wishes to disrupt the operation of a control system in steady state without being detected. The attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out the attack. In [27], the authors propose a watermarking-based scheme for discrete-time LTI systems controlled by an infinite horizon Linear Quadratic Gaussian (LQG) controller. The idea is to add a zero-mean Gaussian noise to the

274

Y. Lin et al.

original input from LQG design. As a result, even when the system is operated in steady state, the control input at different time instances will be different with probability one. This makes it hard for the attacker to remain stealthy at the cost of losing some performance in the LQG sense. Recall the detailed steps of Paillier encryption given in Sect. 12.3.2, for an appropriately chosen N based on Theorem 12.1, at each time instance a random r ∈ Z∗N must be selected and this randomness is useful in replay attack detection as shown in the following proposition. For notational convenience, a replay attack in which the number of consecutive recordings and repeats is no more than M is called an M-step replay attack. In particular, we restrict our attention to replay attacks that only record and replay the data without any other attempts to remain stealthy. Similarly, an M-step detector refers to the detector that collects and keeps the received (encrypted) measurements for M consecutive steps. The defender is using the following criterion to perform replay attack detection: 1. The M-step detector collects M consecutive measurements, when a new measurement arrives the oldest one is discarded. 2. The detector records the number of repeated measurements in the M collected ones. If a sequence of measurements with length more than 1 is repeated, an alarm is triggered to indicate the presence of the replay attack. Proposition 12.5 Given the public-key N , assume at each time instance, r is drawn from elements of Z∗N with equal probability independently from other time instances. Then an Ma -step replay attack can be detected by an Md -step detector with a false  alarm rate no larger than (N 1−3) if Md > Ma + 1. Proof By the definition of Z N and Z∗N , it can be seen that 1, p, and q are the only 3 elements in Z N that are not in Z∗N . As a result, r is drawn from elements of Z∗N with equal probability P = N 1−3 . It is shown by Lemma 12.3 in [32] that the integer-valued encryption function E(t; r ) from Z N × Z∗N to Z∗N 2 is bijective. As a result, for the same unencrypted measurement t ∈ Z N , the corresponding cipher-text will be different if r is chosen differently and each possible cipher-text is generated with the same probability P = N 1−3 . Moreover, the fact that Md > Ma + 1 allows the detector to collect at least 2 consecutive repeated measurements replayed by the attacker together with the measurements collected by the attacker to launch the attack. Due to independent selection at each time instance, if the system is at steady state and there is no disturbance, the probability of receiving a sequence 1 of repeated measurements of length M without a replay attack is given by (N −3) M 1 which makes the worst case false alarm rate (N −3) (corresponding to the case of 2 consecutive identical measurements). If this is not the case the probability can only be smaller than (N 1−3) . To see this, for t1 = t2 ∈ Z N , if there exist r1 = r2 ∈ Z∗N such that E(t1 ; r1 ) = E(t2 ; r2 ), E(t1 ; r1 ) = E(t2 ; r2 ) can only happen with probability (N 1−3) since the cardinality of Z∗N remains the same. If for t1 = t2 ∈ Z N , there exist no r1 = r2 ∈ Z∗N such that E(t1 ; r1 ) = E(t2 ; r2 ), the false alarm rate becomes 0. Thus the proof is complete.

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

275

Remark 12.8 In fact, the requirement of selecting r from Z∗N 2 with equal probability is not necessary. However, if this is the case, the false alarm rate will be given by Pmax , where Pmax denotes the largest probability of one particular element getting selected which is no smaller than N 1−3 . In other words, the selection rule in Proposition 12.5 gives the detector with the smallest upper bound of the false alarm rate estimate, provided that the detector has more memory than the attacker according to Md > Ma + 1. Moreover, it is also possible to relax the condition Md > Ma + 1 to Md > Ma by checking whether there is any element appearing more than once. But this will result in a false alarm rate that grows to 1 as Ma increases. In applications N is often chosen to be very large, the probability of receiving repeated measurements is extremely low. Thus, it is possible to detect potential replay attacks by checking the consistency of the encrypted measurements received with very low false alarm rate. However, the result above is stated for replay attacks that record and replay only. If the attacker is more intelligent and has access to the public-key N the attacker can encrypt 0 and multiply it by the recorded data (equivalent to adding 0 to the plain-text in view of Proposition 12.3) to generate a different random cipher-text and remain stealthy. In this case, encryption may not give this resilience for free and other results [27] should be used to prevent these more intelligent replay attacks.  For other attacks that require data injection to be stealthy, it is shown in [40] that, it is necessary to properly design the attack policy. However, this cannot be done without breaking the Paillier encryption scheme which makes our proposed scheme resilient against these type of attacks.

12.6 Homogeneous Control Systems In this section, we show that if more information about the system dynamics is available to the designer, the results in Theorem 12.1 and Corollary 12.1 can be improved and made less conservative. Namely, we show that if the controller can render the closed-loop system homogeneous with degree zero, the expressions for n 1 , n 2 , m 1 , and m 2 will be simplified. If in addition, the quantizers have infinite ranges, then global practical stability can be established. Consider the following autonomous discrete-time nonlinear system: x + = G(x),

(12.20)

where G : Rn x → Rn x is the jump map that satisfies G(0) = 0. A homogeneous system is defined as follows: Definition 12.5 System (12.20) is homogeneous (with degree zero) if given any real  number λ > 0, we have G(λx) = λG(x) for all x ∈ Rn x . Remark 12.9 The definition of homogeneity considered in this chapter is a simplified version compared to the work [42], where the authors consider a much more

276

Y. Lin et al.

generalized version of dilations and cover Definition 12.5 as a special case. Based on the homogeneity assumption and regularity on the jump map function of the difference inclusion, the existence of a Lyapunov function that has certain nice properties is guaranteed and local stability is proved to imply global stability.  The following lemma states equivalent properties for homogeneous systems: Lemma 12.3 Assume G is continuous on Rn x and the system (12.20) is homogeneous (of degree zero) in the sense of Definition 12.5 and the origin of (12.20) is (uniformly) locally asymptotically stable, then (i) The origin of (12.20) is (uniformly) globally exponentially stable. (ii) There exists a continuous function V : Rn x → R≥0 that is smooth on Rn x \{0} and positive constants c1 , c2 , c3 and L such that c1 |x| ≤ V (x) ≤ c2 |x|,

(12.21)

V (G(x)) − V (x) ≤ −c3 |x|,

(12.22)

|V (x + v) − V (x)| ≤ L|v|,

(12.23)

∀x, v such that x = 0 and x + v = 0.  Proof This is a special case of Theorem 1 of [31] and is thus omitted. By imposing another assumption on the closed-loop system (12.8), it is possible to state stability results when the closed-loop system is affected by disturbances: Assumption 12.5 The function f is continuous and satisfies | f (x, u 1 , w1 ) −  f (x, u 2 , w2 )| ≤ L f |u 1 − u 2 | + L w |w1 − w2 | globally on Rn x × Rn u × Rn w . Theorem 12.2 If Assumption 12.2 and Assumption 12.5 hold. And suppose there exists K ∈ Rn u ×n y such that Assumption 12.4 holds globally for the closed-loop system (12.17) and |g(x)| is upper bounded by κ|x| for some κ > 0. Moreover, if the controller (12.7) renders the disturbance free closed-loop system (12.17) homogeneous and asymptotically stabilizes the origin of (12.17), then for any δ, Δ > 0, disturbances upper bounded by the constant Δw > 0 and design parameters 0 < μ1 < 1 (1−μ1 ) 2 )c3 δx , 2 = μ1 (1−μ and 0 < μ2 < 1, one can choose N > 2n y +n 1 +n 2 −1 , 1 = c3κL Lf L L f | K¯ |

L w Δw ) c2 L L w Δw and Δ¯ = max Δ, c2 (η+L , c1 c3 μ1 μ2 ρ to guarantee that any solution φ(·, x, w) c1 c3 ρ to (12.11) satisfies |φ(k, x0 , w)| ≤

c2 μ1 μ2 c3 k |x0 | 1 − + σ||w|| + δ, c1 c2

(12.24)

2 )δx , ρ is any constant for any |x0 | ≤ Δ, where δx is chosen to satisfy δ = c2 μ1 (1−μ c1 ρ c2 L L w such that 0 < ρ < 1 and σ = c1 c3 μ1 μ2 . The positive constants c1 , c2 and c3 come from Lemma 12.3. 

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

277

Proof The proof can be found in the Appendix. The results in the previous two propositions show that if the controller renders the closed-loop homogeneous and g(x) is linearly bounded, then for any gain matrix K , there exists K¯ sufficiently close to K such that the desired stability properties of the quantization free system are preserved under gain matrix quantization. This also covers the result in [8, 25] as a special case, where a static state feedback controller is used to control a discrete-time LTI system . An immediate conclusion is that if all of the assumptions mentioned in Theorem 12.2 hold and the quantizer of the function g(x) has infinite range, then system (12.18) can be globally practically stabilized. Even if this is not the case, the choice of n 1 and m 1 will no longer depend on δ. Moreover, using the Lipschitzness of the Lyapunov function, it is possible to show that the closed-loop system is ISS with respect to actuator errors. Thus, this problem reduces to the discrete-time counterpart of the cases shown in [23, 24]. However, what is slightly different from the problem setting in [23, 24] is that, in [23, 24] it is required that saturation can be detected from quantized measurements. However, as discussed in Remark 12.2, it is impossible to satisfy the saturation detection assumption in [23, 24]. As a result, in order to use dynamic quantizer to achieve ISS (UGAS) of the closed-loop system when there are (no) disturbances, it is required that, prior to the encryption of the state measurements, saturation of the quantizer can be detected and zoom-out2 is triggered if saturation is detected. Such implementation is proposed in [1], where the rule to update the zoom variable is done in a closed-loop fashion: the information of the measurement is used to ensure that saturation is avoided before transmission. Since the technical details of dynamic quantization overlap a lot with the continuous version presented in [1, 24], we choose not to present them in this work and refer the interested readers to [1, 24].

12.7 An Illustrative Example In this part, we consider the following example of attitude control of a rigid body using three inputs from [36]. The complete attitude dynamics are given by ω˙ = J −1 S(ω)J ω + J −1 u ψ˙ = H (ψ)w

(12.25)

with ω ∈ R3 the angular velocity vector in a body-fixed frame, ψ ∈ R3 the Rodrigues parameter vector and u ∈ R3 the control input. The positive-definite inertia matrix J is equal to diag(4, 2, 1) in this example while S(ω) and H (ψ) are given by

2 See

[23] for an explanation of zoom-out .

278

Y. Lin et al.



⎤ 0 ω3 −ω2 S(ω) = ⎣−ω3 0 ω1 ⎦ ω2 −ω1 0 H (ψ) =

1 (I − S(ψ) + ψψ T ), 2

where I denotes the 3 × 3 identity matrix. And the designed controller in [36] is given by u 1 (ω, ψ) = − 0.49ψ13 − 0.86ω13 − 1.2ω1 ψ12 − 1.5ω1 ψ22 − 1.1ω1 ψ32 + 0.37ω12 ψ1 − 2.6ω1 − 0.77ψ1 + 0.035ω2 ψ1 ψ2 u 2 (ω, ψ) = − 0.28ψ23 − 0.29ω23 − 0.27ω2 ψ12 + 0.17ω22 ψ2 − 0.37ψ12 ψ2 − 0.69ω2 ψ22 − 1.1ω2 ψ32 − 0.45ψ2 ψ32 − 1.1ω12 ω2 − 0.44ω1 ψ1 ψ2 − 0.46ψ2 − 1.1ω2 + 0.24ω1 ω2 ψ1 u 3 (ω, ψ) = − 0.14ψ33 − 0.18ω33 − 0.44ω12 ω3 − 0.34ω22 ω3 − 0.55ω3 ψ22 + 0.11ω12 ψ3 + 0.052ω32 ψ3 − 0.18ψ12 ψ3 − 0.039ψ22 ψ3 − 0.2ω22 ψ3 − 0.38ω3 ψ32 + 0.4ω2 ω3 ψ2 + 0.37ω1 ω3 ψ1 + 0.43ω2 ψ2 ψ3 − 0.69ω3 − 0.35ψ3 . This controller fits into our framework and we study stability properties of the Euler discretization to the continuous-time system with a sampling period T = 0.01s. However, the controller is designed in [36] using density functions which do not necessarily guarantee UGAS of the closed-loop system. By using SOSTOOLS [33], we obtain the following Lyapunov function that guarantees that the closed-loop system is UGAS: V (ω, ψ) = 4.117ω12 − 1.569 × 10−11 ω1 ω2 + 9.63 × 10−12 ω1 ω3 + 2.031ω1 ψ1 − 3.877 × 10−12 ω1 ψ2 + 8.338 × 10−13 ω1 ψ3 + 3.678ω22 − 3.274 × 10−12 ω2 ω3 − 6.11 × 10−12 ω2 ψ1 + 2.323ω2 ψ2 + 2.051 × 10−12 ω2 ψ3 + 3.284ω32 + 8.196 × 10−12 ω3 ψ1 + 9.284 × 10−12 ω3 ψ2 + 1.709ω3 ψ3 + 1.4ψ12 − 4.147 × 10−12 ψ1 ψ2 + 3.635 × 10−12 ψ1 ψ3 + 2.118ψ22 + 1.065 × 10−11 ψ2 ψ3 + 1.669ψ32 . Therefore, semi-global practical asymptotic stability of the Euler discretized model can be concluded due to Theorem 2 in [39]. This is enough to apply our results on discretetime systems since the polynomial structure of the closed-loop dynamics ensure that Assumption 12.3 and 12.4 are always satisfied over compact sets. While this is the case, stability of the Euler discretized model in general does not imply stability of the exact discrete-time model and we refer readers to [29, 30] for more details. Additionally, it should be noted that the practical stability in this example is contributed by both the sampling time T and the encryption.

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

279

Table 12.1 Key lengths and computation times Key length (bits) Computation time (ms) 10 120 240 480 600

0.207 6.096 31.398 268.411 633.732

In this example, we set n 1 − m 1 − 1 = 1 and m 1 = 40. The precision levels of the quantizers used for state measurement is 2−60 (m 2 = 60). The initial state is fixed at [ω ψ]T = [−2 1 0 − 1 2 − 3]T and n y = 6 leading to n 2 − m 2 − 1 = 2, which gives the key length of at least 109 bits. Following the steps in Algorithm 12.2, we simulate the behavior of the closed-loop system with encrypted measurements. Additionally, a trajectory with key length being 13 bits is also generated for comparison. It is worth mentioning that a key length of 13 is often too short for practical use of Paillier encryption since it does not take too long for the attacker to break the encryption. It is used here just for illustrative purposes. The total computational times for systems using encryption with different key lengths are shown below in Table 12.1. The computation is done with Python programming language on Windows 10 over a laptop with Intel(R) i7-7500 CPU at 2.70GHz and 16GB of RAM. This also justifies the choice of T = 0.01s. However, the data shown in Table 12.1 indicate that a 1

0.5

1 Key Length=109 Bits Key Length=13 Bits

0.8 0

0.4

0.6

0.2

3

1

2

-0.5

0

-1

-0.2 -1.5

-2

0

200

400

600

800

0

-0.6 -0.8

1000

0

200

400

k

600

800

-0.2

1000

0

200

400

k

0

600

800

1000

k

2.5

0.2

0.5 Key Length=109 Bits Key Length=13 Bits

2

0 -0.5

-0.2

1.5

-1 3

1

2

-0.4 1

-0.6

-1.5 -2

0.5

-0.8

-2.5 0

Key Length=109 Bits Key Length=13 Bits

-1 -1.2

0.4 0.2

-0.4

Key Length=109 Bits Key Length=13 Bits

Key Length=109 Bits Key Length=13 Bits

0.8

0.6

0

200

400

600

800

1000

-0.5

Key Length=109 Bits Key Length=13 Bits

-3 0

200

400

k

Fig. 12.5 Trajectory of the controlled rigid body

600

k

800

1000

-3.5

0

200

400

600

k

800

1000

280

Y. Lin et al.

key length of 600 bits may not be suitable for real-time computations. For readers interested in real-time computations, we refer to [41], where FPGA-based systems are used to get 1 ms delays for a key length of 512 bits and 10 ms for a key length of 1024 bits. From Fig. 12.5, it can be seen that the trajectory of the closed-loop system using encrypted control signals with a 109-bit key length is almost identical to the plot in [36], showing that increasing the key length does help recover the behavior of the emulated controller. However, longer key lengths require longer computation times, which in turn require longer sampling periods to ensure that the necessary computations can be finished on time; this may lead to further issues, since a large sampling period may result in the loss of stability of the original sampled-data model.

12.8 Conclusions and Future Work We have investigated the problem of using Paillier encryption to ensure the security and privacy of a NCS consisting of a discrete-time static controller and a discretetime plant connected via a network. Assuming that the corresponding closed-loop system satisfies a robust asymptotic stability property when no encryption is used, we have provided sufficient conditions on the encryption parameters to guarantee a given ultimate bound and region of attraction of the state. Additionally, we have shown that if the disturbance is bounded by a sufficiently small number, it is possible to robustly stabilize the plant. Moreover, by imposing homogeneity assumptions on the closed-loop system, we have provided sufficient conditions to ensure that under gain matrix quantization it is possible to fully inherit the stability properties of the emulated controller enabling the use of dynamic quantizers to achieve ISS (UGAS) when there are (no) disturbances. The results have been applied to linear time-invariant systems to recover the main result in [8]. Future work will focus on applying these results to NCSs modeled by hybrid systems [11] and to look at the ways of implementing dynamic controllers.

12.9 Proof of Theorem 12.1

Proof First, note that Step 9 of Algorithm 12.2 represents the standard multiplication between matrices on encrypted numbers. Since each addition may lead to at most one more integer bit, for an n_y-dimensional vector ḡ(x), n_y − 1 additions are required. Thus the condition N > 2^{n_y+n_1+n_2−1} ensures that all encrypted data can be decrypted to get the desired control inputs without overflow and underflow. For simplicity, we will use the equivalent dynamics f(x, K̄ḡ(x), w) to represent the closed-loop system. We write f(x, K̄g(x), w) and f(x, Kg(x), w) as f_1 and f, respectively, for simplicity. Similarly, we write f(x, K̄ḡ(x), w) as f_2. We prove the theorem in two steps. First, we show that under certain conditions the closed-loop system with gain matrix quantization (f_1) is ISS in a proper sense; then, based on this result, we derive sufficient conditions to allow the extra quantization of the measurement signal (f_2) without losing ISS.

For the same Lyapunov function given in Assumption 12.3, we have for all x ∈ R^{n_x}:

V(f_1) − V(x) = V(f_1) − V(f) + V(f) − V(x)
             ≤ V(f_1) − V(f) − α_3(|x|) + γ(|w|).    (12.26)

For any Δ_x = Δ̄ > 0, on the ball of radius Δ_x, B_{Δ_x} = {x ∈ R^{n_x} : |x| ≤ Δ_x}, there exists a positive constant L_V such that |V(x) − V(y)| ≤ L_V|x − y| for all x, y ∈ B_{Δ_x}. Thus (12.26) implies that for all x ∈ B_{Δ_x}:

V(f_1) − V(x) ≤ V(f_1) − V(f) − α_3(|x|) + γ(|w|)
             ≤ |V(f_1) − V(f)| − α_3(|x|) + γ(|w|)
             ≤ L_V|f_1 − f| − α_3(|x|) + γ(|w|)
             ≤ L_V L_f e_1|g(x)| − α_3(|x|) + γ(|w|),    (12.27)

where the second step follows from the fact that a continuous mapping maps a compact set to another compact set, thus the existence of the Lipschitz constant L_V can be guaranteed. Since g(x) is continuous on R^{n_x}, for δ_x ≤ |x| ≤ Δ_x, |g(x)| attains its maximum M_1 > 0. Thus, if e_1 ≤ ϵ_1 = (1 − μ_1)α_3(δ_x)/(M_1 L_V L_f) with 0 < μ_1 < 1, then V(f_1) − V(x) ≤ −μ_1α_3(|x|) + γ(|w|). For all δ_x ≤ |x| ≤ Δ_x the following holds:

V(f_2) − V(x) = V(f_2) − V(f_1) + V(f_1) − V(x)
             ≤ V(f_2) − V(f_1) − μ_1α_3(|x|) + γ(|w|)
             ≤ L_V|f_2 − f_1| − μ_1α_3(|x|) + γ(|w|)
             ≤ L_V L_f|K̄|e_2 − μ_1α_3(|x|) + γ(|w|).    (12.28)

Thus, if e_2 ≤ ϵ_2 = μ_1(1 − μ_2)α_3(δ_x)/(L_V L_f|K̄|) with 0 < μ_2 < 1, we have V(f(x, K̄ḡ(x), w)) − V(x) ≤ −μ_1μ_2α_3(|x|) + γ(|w|). In contrast to the continuous-time case, this condition is insufficient for concluding that the discrete-time closed-loop system is semi-globally practically ISS, as discussed in Sect. III of [29]. For the region B_{δ_x} = {x ∈ R^{n_x} : |x| ≤ δ_x}, (12.28) does not hold in general; instead, from (12.27), we have for all x ∈ B_{δ_x}:

V(f_2) − V(x) ≤ V(f_2) − V(f_1) + V(f_1) − V(x)
             ≤ L_V L_f(e_1|g(x)| + e_2|K̄|) − α_3(|x|) + γ(|w|)
             ≤ L_V L_f(ϵ_1|g(x)| + ϵ_2|K̄|) − α_3(|x|) + γ(|w|)
             ≤ −α_3(|x|) + γ(|w|) + η,    (12.29)

where η = M_2(1 − μ_1)α_3(δ_x)/M_1 + μ_1(1 − μ_2)α_3(δ_x) and M_2 = max_{x∈B_{δ_x}} |g(x)|. By Lemma 3.5 in [17], the set {x ∈ R^{n_x} : V(x) ≤ b_1}, where b_1 = α_2 ◦ α_3^{-1} ◦ ρ^{-1}(η + γ(||w||)) and ρ is any K∞ function such that Id − ρ ∈ K∞, is forward invariant. Define b_{1max} = α_2 ◦ α_3^{-1} ◦ ρ^{-1}(η + γ(Δ_w)). Since the upper bound of the disturbance satisfies α_1^{-1}(b_{1max}) < Δ_x, the state cannot escape from the set B_{Δ_x} starting from the set B_{δ_x}, even in the presence of disturbances. As a result, following similar arguments, the set {x ∈ R^{n_x} : V(x) ≤ b_2} is also forward invariant, where b_2 = α_2 ◦ α_3^{-1} ◦ μ_1^{-1}μ_2^{-1}ρ^{-1}(γ(||w||)). Define b_{2max} = α_2 ◦ α_3^{-1} ◦ μ_1^{-1}μ_2^{-1}ρ^{-1}(γ(Δ_w)). Since we have α_1^{-1}(b_{2max}) < Δ_x, the set B_{Δ_x} is forward invariant. Then, following similar arguments to Sect. 3.5 in [17], we deduce that there exists β ∈ KL such that φ(k, x_0, w), the solution of (12.11) starting at x_0 with |x_0| ≤ Δ_x, satisfies at time k

|φ(k, x_0, w)| ≤ β(|x_0|, k) + σ(||w||) + δ,    (12.30)

where δ = α_1^{-1}((M_2(1 − μ_1)/M_1 + μ_1(1 − μ_2))α_3(δ_x) + α_2(δ_x)) and σ(s) = max{σ_1(s), σ_2(s)}, with σ_1(s) = α_1^{-1} ◦ α_2 ◦ α_3^{-1} ◦ μ_1^{-1}μ_2^{-1}(Id + ρ) ◦ γ(s) and σ_2(s) = α_1^{-1} ◦ α_2 ◦ α_3^{-1} ◦ (Id + ρ) ◦ γ(2s).

12.10 Proof of Theorem 12.2

Proof We only present a sketch of the proof, since it overlaps a lot with the proof of Theorem 12.1. Based on the stated assumptions and Lemma 12.3, (12.21), (12.22) and (12.23) are guaranteed to hold. Thus, by Lemma 12.3, for all x ∈ R^{n_x} we have

V(f) − V(x) = V(f) − V(f(x, Kg(x), 0)) + V(f(x, Kg(x), 0)) − V(x)
            ≤ −c_3|x| + L L_w|w|,    (12.31)

V(f_1) − V(x) = V(f_1) − V(f) + V(f) − V(x)
             ≤ V(f_1) − V(f) − c_3|x| + L L_w|w|
             ≤ |V(f_1) − V(f)| − c_3|x| + L L_w|w|
             ≤ L|f_1 − f| − c_3|x| + L L_w|w|
             ≤ L L_f e_1κ|x| − c_3|x| + L L_w|w|.    (12.32)

Then, if e_1 ≤ ϵ_1 = c_3(1 − μ_1)/(L L_f κ), where 0 < μ_1 < 1, it holds true that V(f_1) − V(x) ≤ −μ_1c_3|x| + L L_w|w| for all x ∈ R^{n_x}. Since the following holds,

V(f_2) − V(x) = V(f_2) − V(f_1) + V(f_1) − V(x)
             ≤ V(f_2) − V(f_1) − μ_1c_3|x| + L L_w|w|
             ≤ L|f_2 − f_1| − μ_1c_3|x| + L L_w|w|
             ≤ L L_f|K̄|e_2 − μ_1c_3|x| + L L_w|w|,    (12.33)

for any 0 < δ_x < Δ_x < ∞, if e_2 ≤ ϵ_2 = μ_1(1 − μ_2)c_3δ_x/(L L_f|K̄|), with 0 < μ_2 < 1, we have V(f(x, K̄ḡ(x), w)) − V(x) ≤ −μ_1μ_2c_3|x| + L L_w|w| ≤ −(μ_1μ_2c_3/c_2)V(x) + L L_w|w|. For the region B_{δ_x} = {x ∈ R^{n_x} : |x| ≤ δ_x}, we have

V(f_2) − V(x) ≤ V(f_2) − V(f_1) + V(f_1) − V(x)
             ≤ μ_1(1 − μ_2)c_3δ_x − μ_1c_3|x| + L L_w|w|
             ≤ −μ_1c_3|x| + L L_w|w| + η
             ≤ −(μ_1c_3/c_2)V(x) + L L_w|w| + η,    (12.34)

where η = μ_1(1 − μ_2)c_3δ_x. By Lemma 3.5 in [17], the set {x ∈ R^{n_x} : V(x) ≤ b_1}, where b_1 = c_2(η + L L_w||w||)/(c_3ρ) and ρ is any constant such that 0 < ρ < 1, is forward invariant. Define b_{1max} = c_2(η + L L_wΔ_w)/(c_3ρ). Since the upper bound of the disturbance satisfies b_{1max}/c_1 < Δ_x, the state cannot escape from the set B_{Δ_x} starting from the set B_{δ_x}, even in the presence of disturbances. Following similar arguments, the set {x ∈ R^{n_x} : V(x) ≤ b_2} is also forward invariant, where b_2 = c_2 L L_w||w||/(c_3μ_1μ_2ρ). Define b_{2max} = c_2 L L_wΔ_w/(c_3μ_1μ_2ρ). Since we have b_{2max}/c_1 < Δ_x, the set B_{Δ_x} is forward invariant. Then φ(k, x_0) with |x_0| ≤ Δ_x satisfies

|φ(k, x_0)| ≤ (c_2/c_1)|x_0|(1 − μ_1μ_2c_3/c_2)^k + σ||w|| + δ,    (12.35)

where δ = c_2η/(c_1c_3ρ), σ = max{σ_1, σ_2}, σ_1 = c_2 L L_w/(c_1c_3μ_1μ_2) and σ_2 = c_2 L L_w/(c_1c_3μ_1). Since we have 0 < μ_2 < 1, σ = σ_1 = c_2 L L_w/(c_1c_3μ_1μ_2).

References

1. Abdelrahim, M., Dolk, V.S., Heemels, W.P.M.H.: Input-to-state stabilizing event-triggered control for linear systems with output quantization. In: Proceedings of the 55th IEEE Conference on Decision and Control, pp. 483–488 (2016) 2. Alanwar, A., Shoukry, Y., Chakraborty, S., Martin, P., Tabuada, P., Srivastava, M.: Proloc: resilient localization with private observers using partial homomorphic encryption: demo abstract. In: Proceedings of the 16th ACM/IEEE International Conference on Information Processing in Sensor Networks, pp. 41–52 (2017) 3. Alexandru, A.B., Gatsis, K., Shoukry, Y., Seshia, S.A., Tabuada, P., Pappas, G.J.: Cloud-based quadratic optimization with partially homomorphic encryption (2018). arXiv:1809.02267


4. Anderson, J., Papachristodoulou, A.: Advances in computational lyapunov analysis using sumof-squares programming. Discret. Contin. Dyn. Syst. Ser. B 20(8), 2361–2381 (2015) 5. Antsaklis, P., Baillieul, J.: Special issue on technology of networked control systems. Proc. IEEE 95(1), 5–8 (2007) 6. Chen, Z., Huang, J.: Global robust stabilization of cascaded polynomial systems. Syst. Control Lett. 47(5), 445–453 (2002) 7. Cheon, J.H., Han, K., Kim, H., Kim, J., Shim, H.: Need for controllers having integer coefficients in homomorphically encrypted dynamic system. In: Proceedings of the 57th IEEE Conference on Decision and Control, pp. 5020–5025 (2018) 8. Farokhi, F., Shames, I., Batterham, N.: Secure and private control using semi-homomorphic encryption. Control Eng. Pract. 67, 13–20 (2017) 9. Gamal, T.E.: A public key cryptosystem and a signature scheme based on discrete logarithms. In: Proceedings of CRYPTO ’84, vol. 196, pp.10–18 (1984) 10. Gentry, C.: A fully homomorphic encryption scheme. Ph.D. thesis, Stanford University (2009) 11. Goebel, R., Sanfelice, R.G., Teel, A.R.: Hybrid Dynamical Systems: Modelling. Princeton University Press, Stability and Robustness (2012) 12. Hadjicostis, C.N.: Privary preserving distributed average consensus via homomorphic encryption. In: Proceedings of the 57th IEEE Conference on Decision and Control, pp. 1258–1263 (2018) 13. Heemels, W.P.M.H., Teel, A.R., van de Wouw, N., Neši´c, D.: Networked control systems with communication constraints: tradeoffs between transmission intervals, delays and performance. IEEE Trans. Autom. Control 55(8), 1781–1796 (2010) 14. Hespanha, J.P., Naghshtabrizi, P., Xu, Y.: A survey of recent results in networked control systems. Proc. IEEE 95(1), 138–162 (2007) 15. Jiang, Y., Jiang, Z.P.: Robust adaptive dynamic programming and feedback stabilization of nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 25(5), 882–893 (2014) 16. Jiang, Y., Jiang, Z.P.: Global adaptive dynamic programming for continuous-time nonlinear systems. IEEE Trans. Autom. Control 60(11), 2917–2929 (2015) 17. Jiang, Z.P., Wang, Y.: Input-to-state stability for discrete-time nonlinear systems. Automatica 37(6), 857–869 (2001) 18. Kellett, C.M., Teel, A.R.: On the robustness of KL-stability for difference inclusions: smooth discrete-time lyapunov functions. SIAM J. Control Optim. 44(3), 777–800 (2005) 19. Kishida, M.: Encrypted control system with quantiser. IET Control Theory Appl. 13(1), 146– 151 (2019) 20. Kosigo, K., Fujita, T.: Cyber-security enhancement of networked control systems using homomorphic encryption. In: Proceedings of the 54th IEEE Conference on Decision and Control, pp. 6836–6843 (2015) 21. Lazar, M., Muñoz de la Peña, D., Heemels, W.P.M.H., Alamo, T.: On input-to-state stability of min-max nonlinear model predictive control. Systems & Control Letters 57(1), 39–48 (2008) 22. Lewis, F.L., Vrabie, D., Vamvoudakis, K.G.: Reinforcement learning and feedback control: using natural decision methods to design optimal adaptive controllers. IEEE Control Syst. Mag. 32(6), 76–105 (2012) 23. Liberzon, D.: Hybrid feedback stabilization of systems with quantized signals. Automatica 39(9), 1543–1554 (2003) 24. Liberzon, D., Neši´c, D.: Input-to-state stabilization of linear systems with quantized state measurements. IEEE Trans. Autom. Control 52(5), 767–781 (2007) 25. Lin, Y., Farokhi, F., Shames, I., Neši´c, D.: Secure control of nonlinear systems using semihomomorphic encryption. 
In: Proceedings of the 57th IEEE Conference on Decision and Control, pp. 5002–5007 (2018) 26. Lu, Y., Zhu, M.: Privacy preserving distributed optimization using homomorphic encryption. Automatica 96, 314–325 (2018) 27. Mo, Y., Sinopoli, B.: Secure control against replay attacks. In: Proceedings of the 47th annual Allerton conference on communication, control, and computing, pp. 911–918 (2009)

12 Secure Networked Control Systems Design Using Semi-homomorphic Encryption

285

28. Murguia, C., Farokhi, F., Shames, I.: Secure and private implementation of dynamic controllers using semi-homomorphic encryption (2018). arXiv:1812.04168 29. Neši´c, D., Teel, A.R.: A framework for stabilisation of nonlinear sampled-data systems based on their approximate discrete-time models. IEEE Trans. Autom. Control 49(7), 1103–1122 (2004) 30. Neši´c, D., Teel, A.R., Kokotovi´c, P.V.: Sufficient conditions for stabilization of sampled-data nonlinear systems via discrete-time approximations. Syst. Control Lett. 38(4–5), 259–270 (1999) 31. Neši´c, D., Teel, A.R., Valmorbida, G., Zaccarian, L.: Finite-gain L p stability for hybrid dynamical systems. Automatica 49(8), 2384–2396 (2013) 32. Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Proceedings of the 17th International Conference on Theory and Application of Cryptographic Techniques (EUROCRYPT’99), pp. 223–238 (1999) 33. Papachristodoulou, A., Anderson, J., Valmorbida, G., Prajna, S., Seiler, P., Parrilo, P.A.: SOSTOOLS: sum of squares optimization toolbox for MATLAB (2013). http://www.eng.ox. ac.uk/control/sostools, http://www.cds.caltech.edu/sostools and http://www.mit.edu/~parrilo/ sostools 34. Parrilo, P.A.: Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. Ph.D. thesis, California Institute of Technology (2000) 35. Powell, M.J.D.: Approximation Theory and Methods. Cambridge University Press, Cambridge (1981) 36. Prajna, S., Parrilo, P.A., Rantzer, A.: Nonlinear control synthesis by convex optimization. IEEE Trans. Autom. Control 49(2), 310–314 (2004) 37. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystem. Commun. ACM 21(2), 120–126 (1978) 38. Ruan, M., Gao, H., Wang, Y.: Secure and privacy-preserving consensus. IEEE Trans. Autom. Control (To appear) 39. Teel, A.R.: Lyapunov methods in nonsmooth optimization, part I: Quasi-Newton algorithms for Lipschitz, regular functions. In: Proceedings of the 39th IEEE Conference on Decision and Control, pp. 112–117 (2000) 40. Teixeira, A., Shames, I., Sandberg, H., Johansson, K.H.: A secure control framework for resource-limited adversaries. Automatica 51, 135–148 (2015) 41. Tran, J., Farokhi, F., Cantoni, M., Shames, I.: Implementing homomorphic encryption based secure feedback control. Control Eng. Pract. 97, 104350 (2020) 42. Tuna, S.E., Teel, A.R.: Discrete-time homogeneous Lyapunov functions for homogeneous difference inclusions. In: Proceedings of the 43rd IEEE Conference on Decision and Control, pp. 1606–1610 (2004) 43. van de Wouw, N., Neši´c, D., Heemels, W.P.M.H.: A discrete-time framework for stability analysis of nonlinear networked control systems. Automatica 48(6), 1144–1153 (2012) 44. Xu, J., Xie, L., Wang, Y.: Simultaneous stabilization and robust control of polynomial nonlinear systems using sos techniques. IEEE Trans. Autom. Control 54(8), 1892–1897 (2009)

Chapter 13

Deception-as-Defense Framework for Cyber-Physical Systems
Muhammed O. Sayin and Tamer Başar

Abstract We introduce a deceptive signaling framework as a new defense measure against advanced adversaries in cyber-physical systems. In general, adversaries look for system-related information, e.g., the underlying state of the system, in order to learn the system dynamics and to receive useful feedback regarding the success or failure of their actions so as to carry out their malicious task. To this end, we craft the information that is accessible to adversaries strategically, in order to control their actions in a way that benefits the system, indirectly and without any explicit enforcement. When the information of interest is Gaussian and both sides have quadratic cost functions, we arrive at a semi-definite programming problem that is equivalent to the infinite-dimensional optimization problem faced by the defender. The equivalence result also holds for scenarios where the defender has only partial or noisy measurements or where the objective of the adversary is not known. Under the solution concept of Stackelberg equilibrium, we show the optimality of linear signaling rules within the general class of measurable policies in communication scenarios and also compute the optimal linear signaling rule in control scenarios.

13.1 Introduction

All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near. - Sun Tzu, The Art of War [27]


As quoted above, even the earliest known work on military strategy and war, The Art of War, emphasizes the importance of deception in security. Deception can be used as a defense strategy by making the opponent/adversary perceive certain information of interest in an engineered way. Indeed, deception is not limited to hostile environments. In any non-cooperative multi-agent environment, as long as there is asymmetry of information and one agent is informed about the information of interest while the other is not, the informed agent has power over the uninformed one to manipulate his/her decisions or perceptions by sharing that information strategically.

Especially with the introduction of cyber-connectedness in physical systems, certain communication and control systems can be viewed as multi-agent environments, where each agent makes rational decisions to fulfill certain objectives. As an example, we can view transmitters (or sensors) and receivers (or controllers) as individual agents in communication (or control) systems. Classical communication and control theory is based on the cooperation between these agents to meet certain challenges together, such as mitigating the impact of a noisy channel in communication or stabilizing the underlying state of a system around an equilibrium through feedback in control. However, cyber-connectedness makes these multi-agent environments vulnerable to adversarial interventions, and there is an inherent asymmetry of information as the information flows from transmitters (or sensors) to receivers (or controllers).1 Therefore, if these agents are not cooperating, e.g., due to adversarial intervention, then the informed agents, i.e., transmitters or sensors, could seek to deceive the uninformed ones, i.e., receivers or controllers, so that they perceive the underlying information of interest in the way the deceiver desires, and correspondingly take the manipulated actions.

Our goal here is to craft the information that could be available to an adversary in order to control his/her perception of the underlying state of the system, as a defensive measure. The malicious objective and the normal operation of the system may not be complete opposites of each other, as they would be in a zero-sum game; this implies that there is a part of the malicious objective that is benign, and the adversary would be acting in line with the system's interest with respect to that aligned part of the objectives. If we can somehow restrain the adversarial actions to fulfill only the aligned part, then the adversarial actions, i.e., the attack, could inadvertently end up helping the system toward its goal. Since a rational adversary makes decisions based on the information available to him, the strategic crafting of the signal that is shared with the adversary, or that the adversary can access, can be effective in that respect. Therefore, our goal is to design the information flowing from the informed agents, e.g., sensors, to the uninformed ones, e.g., controllers, in view of the possibility of adversarial intervention, so as to control the perception of the adversaries about the underlying system, and correspondingly to persuade them

1 In control systems, we can also view the control input as information that flows implicitly from the controllers to the sensors, since it impacts the underlying state and, correspondingly, the sensors' measurements.


(without any explicit enforcement) to fulfill the aligned parts of the objectives as much as possible without fulfilling the misaligned parts.

In this chapter, we provide an overview of the recent results [21, 24, 25] addressing certain aspects of this challenge in non-cooperative communication and control settings. For a discrete-time Gauss–Markov process, and when the sender and the receiver in a non-cooperative communication setting have misaligned quadratic objectives, in [25], we have shown the optimality of linear signaling rules2 within the general class of measurable policies and provided an algorithm to compute the optimal policies numerically. Also in [25], we have formulated the optimal linear signaling rule in a non-cooperative linear-quadratic-Gaussian (LQG) control setting when the sensor and the controller have known misaligned control objectives. In [21], we have introduced a secure sensor design framework, where we have addressed the optimal linear signaling rule again in a non-cooperative LQG setting when the sensor and a private-type controller have misaligned control objectives in a Bayesian setting, i.e., the distribution over the private type of the controller is known. In [24], we have addressed optimal linear robust signaling in a non-Bayesian setting, where the distribution over the private type of the controller is not known, and provided a comprehensive formulation by considering also the cases where the sensor could have partial or noisy information on the signal of interest and relevance. We elaborate further on these results in some detail throughout the chapter.

In Sect. 13.2, we review the related literature in economics and engineering. In Sects. 13.3 and 13.4, we introduce the framework and formulate the deception-as-defense game, respectively. In Sect. 13.5, we elaborate on Gaussian information of interest in detail. In Sects. 13.6 and 13.7, we address the optimal signaling rules in non-cooperative communication and control systems. In Sect. 13.8, we provide the optimal signaling rule against the worst possible distribution over the private types of the uninformed agent. In Sect. 13.9, we extend the results to partial or noisy measurements of the underlying information of interest. Finally, we conclude the chapter in Sect. 13.10 with several remarks and possible research directions.

Notation: Random variables are denoted by bold lower case letters, e.g., x. For a random vector x, cov{x} denotes the corresponding covariance matrix. For an ordered set of parameters, e.g., x_1, ..., x_κ, we use the notation x_{k:l} = x_l, ..., x_k, where 1 ≤ l ≤ k ≤ κ. N(0, ·) denotes the multivariate Gaussian distribution with zero mean and designated covariance. For a vector x and a matrix A, x′ and A′ denote their transposes, and ‖x‖ denotes the Euclidean (ℓ2) norm of the vector x. For a matrix A, Tr{A} denotes its trace. We denote the identity and zero matrices of the associated dimensions by I and O, respectively. S^m denotes the set of m-by-m symmetric matrices. For positive semi-definite matrices A and B, A ⪰ B means that A − B is also positive semi-definite.

2 We use the terms "strategy", "signaling/decision rule", and "policy" interchangeably.


13.2 Deception Theory in Literature

There are various definitions of deception. Depending on the specific definition at hand, the analysis and the related applications vary. Commonly, in signaling-based definitions of deception, there is an information of interest private to an informed agent, whereas an uninformed agent may benefit from that information to make a certain decision. If the informed and uninformed agents act in a strategic way while, respectively, sharing information and making a decision, then the interaction can turn into a game where the agents select their strategies according to their own objectives, while taking into account the fact that the other agent would also have selected his/her strategy according to his/her different objective. Correspondingly, such an interaction between the informed and uninformed agents can be analyzed under a game-theoretic solution concept. Note that there is a main distinction between an incentive compatible deception model and a deception model with policy commitment.

Definition 13.1 We say that a deception model is incentive compatible if neither the informed nor the uninformed agent has an incentive to deviate from his/her strategy unilaterally.

The associated solution concept here is Nash equilibrium [2]. Existence of a Nash equilibrium is not guaranteed in general. Furthermore, even if it exists, there may be multiple Nash equilibria. Without certain commitments, none of the equilibria may be realized, and it is not certain beforehand which of them would be realized, since different equilibria could be favorable for different players.

Definition 13.2 We say that in a deception model there is policy commitment if either the informed or the uninformed agent commits to play a certain strategy beforehand and the other agent reacts being aware of the committed strategy.

The associated solution concept is Stackelberg equilibrium, where one of the players leads the game by announcing his/her committed strategy [2]. Existence of a Stackelberg equilibrium is not guaranteed in general over unbounded strategy spaces. However, if it exists, all the equilibria lead to the same game outcome for the leader of the game, since the leader could always have selected the most favorable one among them. We also note that if there is a favorable outcome for the leader in the incentive compatible model, the leader has the freedom to commit to that policy in the latter model. Correspondingly, the leader gains an advantage by acting first and committing to play according to a certain strategy, even though the result may not be incentive compatible.

Game-theoretic analysis of deception has attracted substantial interest in various disciplines, including economics and engineering. In the following subsections, we review the literature in these disciplines with respect to models involving incentive compatibility and policy commitment.


13.2.1 Economics Literature

The scheme of the type introduced above, called strategic information transmission, was introduced in a seminal paper by V. Crawford and J. Sobel [9]. It has attracted significant attention in the economics literature due to the wide range of relevant applications, from advertising to expert advice. In the model adopted in [9], the informed agent's objective function includes a commonly known bias term that differs from the uninformed agent's objective. That bias term can be viewed as the misalignment factor between the two objectives. For the incentive compatible model, the authors have shown that all equilibria are partition equilibria, where the informed agent controls the resolution of the shared information via certain quantization schemes, under certain assumptions on the objective functions (satisfied by quadratic objectives) and the assumption that the information of interest is drawn from a bounded support.

Following the introduction of the strategic information transmission framework, also called cheap talk due to the costless communication over an ideal channel, different settings have been studied extensively, such as

• single sender and multiple receivers [11, 12],
• multiple senders and single receiver [13, 17],
• repeated games [18];

however, all have considered scenarios where the underlying information is one-dimensional, e.g., a real number. Multidimensional information can lead to interesting results, such as full revelation of the information even when the misalignment between the objectives is arbitrarily large, provided there are multiple senders with different bias terms, i.e., misalignment factors [3]. Furthermore, if there is only one sender yet multidimensional information, there can be full revelation of information along certain dimensions while, along the other dimensions, the sender signals partially in a partition equilibrium depending on the misalignment between the objectives [3].

The ensuing studies [3, 11–13, 17, 18] on cheap talk [9] have analyzed the incentive compatibility of the players. More recently, in [16], the authors have proposed to use a deception model with policy commitment. They call it "sender-preferred subgame perfect equilibrium" since the sender cannot distort or conceal information once the signal realization is known, which can be viewed as the sender revealing and committing to the signaling rule in addition to the corresponding signal realization. For information of interest drawn from a compact metric space, the authors have provided necessary and sufficient conditions for the existence of a strategic signal that can benefit the informed agent, and characterized the corresponding optimal signaling rule. Furthermore, in [28], the author has shown the optimality of linear signaling rules for multivariate Gaussian information of interest and quadratic objective functions.


13.2.2 Engineering Literature

There exist various engineering applications depending on the definition of deception. Reference [19] provides a taxonomy of these studies with a specific focus on security. Obfuscation techniques that hide valuable information, e.g., via externally introduced noise [8, 15, 30], can also be viewed as deception-based defense. As an example, in [15], the authors have provided a browser extension that can obfuscate a user's real queries by including automatically fabricated queries to preserve privacy. Here, however, we specifically focus on signaling-based deception applications, in which we craft the information available to adversaries to control their perception rather than corrupting it. In line with the browser extension example, our goal is to persuade the query trackers to perceive the user behavior in a certain fabricated way rather than limiting their ability to learn the actual user behavior.

In computer security, various (heuristic) deception techniques, e.g., honeypots and honeynets, are prevalent for making the adversary perceive a honey-system as the real one or a real system as a honey-one [26]. Several studies, e.g., [7], have analyzed honeypots within the framework of binary signaling games by abstracting the complexity of crafting a real system to be perceived as a honeypot (or crafting a honeypot to be perceived as a real system) into binary signals. Here, however, our goal is to address the optimal way to craft underlying information of interest with a continuum support, e.g., a Gaussian state.

The recent study [20] addresses strategic information transmission of multivariate Gaussian information over an additive Gaussian noise channel for quadratic misaligned cost functions and identifies the conditions under which the signaling rule attaining a Nash equilibrium can be a linear function. Recall that for the scalar case, when there is no noisy channel in between, all the equilibria are partition equilibria, implying that all the signaling rules attaining a Nash equilibrium are nonlinear except for the babbling equilibrium, where the informed agent discloses no information [9]. Two other recent studies [1, 10] address strategic information transmission for scenarios where the bias term is not common knowledge of the players and the solution concept is Stackelberg equilibrium rather than Nash equilibrium. They have shown that the Stackelberg equilibrium could be attained by linear signaling rules under certain conditions, different from the partition equilibria of the incentive compatible cheap talk model [9]. In [10], the authors have studied strategic sensor networks for multivariate Gaussian information of interest with myopic quadratic objective functions in dynamic environments, restricting the receiver's strategies to affine functions. In [1], for jointly Gaussian scalar private information and bias variable, the authors have shown that optimal sender strategies are linear functions within the general class of measurable policies for misaligned quadratic cost functions when there is an additive Gaussian noise channel and a hard power constraint on the signal, i.e., when it is no longer cheap talk.


13.3 Deception-as-Defense Framework

Consider a multi-agent environment with asymmetry of information, where each agent is a selfish decision-maker taking an action or actions to fulfill his/her own objective only, while the actions of any agent could impact the objectives of the others. As an example, Fig. 13.1 illustrates a scenario with two agents: Sender (S) and Receiver (R), where S has access to a (possibly partial or noisy) version of certain information valuable to R, and S sends a signal or signals related to the information of interest to R.

Definition 13.3 We say that an informed agent (or the signal the agent crafts) is deceptive if he/she shapes the information of interest private to him/her strategically in order to control the perception of the uninformed agent by removing, changing, or adding contents.

Deceptive signaling can play a key role in multi-agent non-cooperative environments as well as in cooperative ones where certain (uninformed) agents could have been compromised by adversaries. In such scenarios, informed agents can signal strategically to the uninformed ones in case the latter have been compromised. Deceiving an adversary into acting on, or attacking, the system in a way aligned with the system's goals may seem overly optimistic, given the very definition of an adversary. However, an adversary can also be viewed as a selfish decision-maker seeking to satisfy a certain malicious objective, which may not necessarily be completely conflicting with the system's objective. This leads to the following notion of "deception-as-defense."

Definition 13.4 We say that an informed agent engages in a deception-as-defense mode of operation if he/she crafts the information of interest strategically to persuade the uninformed malicious agent (without any explicit enforcement) to act in line with the aligned part of the objective as much as possible, without taking into account the misaligned part.

We re-emphasize that this approach differs from approaches that seek to raise suspicion about the information of interest in order to sabotage the adversaries' malicious objectives. Sabotaging the adversaries' malicious objectives may not necessarily be the best option for the informed agent unless the objectives are complete opposites of each other. In that case, the deception-as-defense framework indeed ends up seeking to sabotage the adversaries' malicious objectives.

Fig. 13.1 Strategic information disclosure


We also note that this approach differs from lying, i.e., the scenario where the informed agent provides totally different information (correlated or not) as if it were the information of interest. Lying could be effective, as expected, as long as the uninformed agent trusts the legitimacy of the provided information. However, in non-cooperative environments, this could turn into a game where the uninformed agent becomes aware of the possibility of lying. This correspondingly raises suspicion about the legitimacy of the shared information and could end up sabotaging the adversaries' malicious objectives rather than controlling their perception of the information of interest. Once a defense mechanism has been widely deployed, advanced adversaries can learn the defense policy over time. Correspondingly, the solution concept of the policy commitment model can address this possibility in the deception-as-defense framework in a robust way, provided the defender commits to a policy that takes into account the best reaction of adversaries that are aware of the policy. Furthermore, the transparency of the signal sent via the committed policy generates a trust-based relationship between S and R, which is powerful enough to persuade R to make certain decisions inadvertently, without any explicit enforcement by S.

13.4 Game Formulation

The information of interest is considered to be a realization of a known, continuous random variable in static settings, or a known (discrete-time) random process in dynamic settings. Since the static setting is a special case of the dynamic setting, we formulate the game in a dynamic, i.e., multi-stage, environment. We denote the information of interest by {x_k ∈ X}, where X ⊂ R^m denotes its support. Let {x_k} have zero mean and (finite) second-order moment Σ_k := cov{x_k} ∈ S^m. We consider scenarios where each agent has perfect recall, i.e., each agent has an infinite memory of past observations and constructs his/her strategy accordingly.

S has access to a possibly partial or noisy version of the information of interest x_k. We denote the noisy measurement of x_k by y_k ∈ Y, where Y ⊂ R^m denotes its support. For each instance of the information of interest, S selects his/her signal as a second-order random variable

$$ s_k = \eta_k(y_{1:k}), \tag{13.1} $$

correlated with y_{1:k}, but not necessarily determined through a deterministic transformation on y_{1:k} (i.e., η_k(·) is in general a random mapping). Let us denote the set of all signaling rules by Υ_k. As we will show later, when we allow for such randomness in the signaling rule, under certain conditions the solution turns out to be a linear function of the underlying information y_{1:k} plus an additive independent noise term.

Due to the policy commitment by S, at each instant, with perfect recall, R selects a Borel measurable decision rule γ_k : S^k → U, where U ⊂ R^r, from a certain policy space Γ_k in order to make a decision


$$ u_k = \gamma_k(s_{1:k}), \tag{13.2} $$

knowing the signaling rules {η_k} and observing the signals sent, s_{1:k}. Let κ denote the length of the horizon. We consider that the agents have cost functions to minimize, instead of utility functions to maximize. Clearly, the framework could also be formulated for utility maximization rather straightforwardly. Furthermore, we specifically consider that the agents have quadratic cost functions, denoted by U_S(η_{1:κ}, γ_{1:κ}) and U_R(η_{1:κ}, γ_{1:κ}).

Example 13.1 In a non-cooperative communication system over a finite horizon of length κ, consider the scenarios where S seeks to minimize over $\eta_{1:\kappa} \in \Upsilon := \prod_{k=1}^{\kappa} \Upsilon_k$

$$ U_S(\eta_{1:\kappa}, \gamma_{1:\kappa}) = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| Q_S x_k - R_S \gamma_k(\eta_1(y_1), \ldots, \eta_k(y_{1:k})) \|^2 \right\} = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| Q_S x_k - R_S u_k \|^2 \right\}, \tag{13.3} $$

by taking into account that R seeks to minimize over $\gamma_{1:\kappa} \in \Gamma := \prod_{k=1}^{\kappa} \Gamma_k$

$$ U_R(\eta_{1:\kappa}, \gamma_{1:\kappa}) = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| Q_R x_k - R_R \gamma_k(\eta_1(y_1), \ldots, \eta_k(y_{1:k})) \|^2 \right\} = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| Q_R x_k - R_R u_k \|^2 \right\}, \tag{13.4} $$

where the weight matrices are arbitrary (but fixed). The following special case illustrates the applicability of this general structure of misaligned objectives (13.3) and (13.4). Suppose that the information of interest consists of two separate processes {z_k} and {t_k}, e.g., $x_k := [z_k' \;\; t_k']'$. Then (13.3) and (13.4) cover the scenarios where R seeks to estimate z_k by minimizing

$$ \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| z_k - u_k \|^2 \right\}, \tag{13.5} $$

whereas S wants R to perceive z_k as t_k, and hence to end up minimizing

$$ \mathbb{E}\left\{ \sum_{k=1}^{\kappa} \| t_k - u_k \|^2 \right\}. \tag{13.6} $$
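For concreteness, one particular choice of weight matrices that yields (13.5) and (13.6) from (13.3) and (13.4) is the following (this instantiation is our own illustration and is not spelled out in the text):

$$ Q_R = \begin{bmatrix} I & O \end{bmatrix}, \quad R_R = I \;\;\Rightarrow\;\; Q_R x_k - R_R u_k = z_k - u_k, $$
$$ Q_S = \begin{bmatrix} O & I \end{bmatrix}, \quad R_S = I \;\;\Rightarrow\;\; Q_S x_k - R_S u_k = t_k - u_k, $$

so that R's cost (13.4) reduces to the estimation error (13.5), while S's cost (13.3) reduces to (13.6).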


Example 13.2 In a non-cooperative control system, consider a controlled Markov process, e.g.,

$$ x_{k+1} = A x_k + B u_k + w_k, \tag{13.7} $$

where $w_k \sim \mathrm{N}(0, \Sigma_w)$ is a white Gaussian noise process. S seeks to minimize over $\eta_{1:\kappa} \in \Upsilon$

$$ U_S(\eta_{1:\kappa}, \gamma_{1:\kappa}) = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} x_{k+1}' Q_S x_{k+1} + u_k' R_S u_k \right\}, \tag{13.8} $$

by taking into account that R seeks to minimize over $\gamma_{1:\kappa} \in \Gamma$

$$ U_R(\eta_{1:\kappa}, \gamma_{1:\kappa}) = \mathbb{E}\left\{ \sum_{k=1}^{\kappa} x_{k+1}' Q_R x_{k+1} + u_k' R_R u_k \right\}, \tag{13.9} $$

with arbitrary (but fixed) positive semi-definite matrices $Q_S$ and $Q_R$, and positive-definite matrices $R_S$ and $R_R$. Similar to the example in communication systems, this general structure of misaligned objectives (13.8) and (13.9) can bring in interesting applications. Suppose the information of interest consists of two separate processes {z_k} and {t_k}, e.g., $x_k := [z_k' \;\; t_k']'$, where {t_k} is an exogenous process that does not depend on R's decision u_k. For certain weight matrices, (13.8) and (13.9) cover the scenarios where R seeks to regulate {z_k} around the zero vector by minimizing

$$ \mathbb{E}\left\{ \sum_{k=1}^{\kappa} z_{k+1}' z_{k+1} + u_k' u_k \right\}, \tag{13.10} $$

whereas S seeks to have R regulate {z_k} around the exogenous process {t_k} by minimizing

$$ \mathbb{E}\left\{ \sum_{k=1}^{\kappa} (z_{k+1} - t_{k+1})'(z_{k+1} - t_{k+1}) + u_k' u_k \right\}. \tag{13.11} $$

We define the deception-as-defense game as follows:

Definition 13.5 The deception-as-defense game

$$ \mathcal{G} := (\Upsilon, \Gamma, \{x_k\}, \{y_k\}, U_S, U_R) $$

is a Stackelberg game between S and R, where

• Υ and Γ denote S's and R's strategy spaces, respectively,
• {x_k} denotes the information of interest,
• {y_k} denotes S's (possibly noisy) measurements of the information of interest,
• U_S and U_R are the objective functions of S and R, defined, respectively, by (13.3) and (13.4), or (13.8) and (13.9).

Under the deception model with policy commitment, S is the leader, who announces (and commits to) his strategies beforehand, while R is the follower, reacting to the leader’s announced strategies. Since R is the follower and takes actions knowing S’s


strategy $\eta_{1:\kappa} \in \Upsilon$, we let $B(\eta_{1:\kappa}) \subset \Gamma$ be R's best reaction set to S's strategy $\eta_{1:\kappa} \in \Upsilon$. Then, the strategy and best reaction pair $(\eta_{1:\kappa}^*, B(\eta_{1:\kappa}^*))$ attains the Stackelberg equilibrium provided that

$$ \eta_{1:\kappa}^* \in \underset{\eta_{1:\kappa} \in \Upsilon}{\operatorname{argmin}} \; \max_{\gamma_{1:\kappa} \in B(\eta_{1:\kappa})} U_S(\eta_{1:\kappa}, \gamma_{1:\kappa}), \tag{13.12} $$
$$ B(\eta_{1:\kappa}) = \underset{\gamma_{1:\kappa} \in \Gamma}{\operatorname{argmin}} \; U_R(\eta_{1:\kappa}, \gamma_{1:\kappa}). \tag{13.13} $$

13.5 Quadratic Costs and Information of Interest

Misaligned quadratic cost functions, in addition to their various applications, play an essential role in the analysis of the game G. One advantage is that a quadratic cost function can be written as a linear function of the covariance of the posterior estimate of the underlying information of interest. Furthermore, when the information of interest is Gaussian, we can formulate a necessary and sufficient condition on the covariance of the posterior estimate, which turns out to be just semi-definite matrix inequalities. This leads to an equivalent semi-definite programming (SDP) problem over a finite-dimensional space instead of a search for the best signaling rule over an infinite-dimensional policy space. In the following, we elaborate on these observations in further detail.

Due to the policy commitment, S needs to anticipate R's reaction to the selected signaling rule $\eta_{1:\kappa} \in \Upsilon$. Here, we focus on the non-cooperative communication system; later, in Sect. 13.7, we will show how a non-cooperative control setting can be transformed into a non-cooperative communication setting under certain conditions. Since the information flow is in only one direction, R faces a least mean square error problem for given $\eta_{1:\kappa} \in \Upsilon$. Suppose that $R_R' R_R$ is invertible. Then, the best reaction by R is given by

$$ \gamma_k^*(s_{1:k}) = (R_R' R_R)^{-1} R_R' Q_R \, \mathbb{E}\{x_k \mid s_{1:k}\} \tag{13.14} $$

almost everywhere over $\mathbb{R}^r$. Note that the best reaction set $B(\eta_{1:\kappa})$ is a singleton and the best reaction is linear in the posterior estimate $\mathbb{E}\{x_k \mid s_{1:k}\}$, i.e., the conditional expectation of $x_k$ with respect to the random variables $s_{1:k}$. When we substitute the best reaction by R into S's cost function, we obtain

$$ \sum_{k=1}^{\kappa} \mathbb{E}\, \| Q_S x_k - M_S \mathbb{E}\{x_k \mid s_{1:k}\} \|^2, \tag{13.15} $$

where $M_S := R_S (R_R' R_R)^{-1} R_R' Q_R$. Since for arbitrary random variables $a$ and $b$,

$$ \mathbb{E}\{a\, \mathbb{E}\{a \mid b\}'\} = \mathbb{E}\{\mathbb{E}\{a \mid b\}\, \mathbb{E}\{a \mid b\}'\}, \tag{13.16} $$


the objective function to be minimized by S, (13.15), can be written as

$$ \sum_{k=1}^{\kappa} \mathbb{E}\, \| Q_S x_k - M_S \mathbb{E}\{x_k \mid s_{1:k}\} \|^2 = \sum_{k=1}^{\kappa} \operatorname{Tr}\{H_k V\} + c, \tag{13.17} $$

where $H_k := \operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k}\}\}$ denotes the covariance of the posterior estimate,

$$ V := M_S' M_S - M_S' Q_S - Q_S' M_S, \tag{13.18} $$

and the constant c is given by

$$ c := \sum_{k=1}^{\kappa} \operatorname{Tr}\{Q_S' Q_S \Sigma_k\}. \tag{13.19} $$
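As a sanity check of (13.17)–(13.19), the one-stage expansion (added here for completeness) goes as follows. Using (13.16), $\mathbb{E}\{x_k \hat{x}_k'\} = \mathbb{E}\{\hat{x}_k \hat{x}_k'\} = H_k$ with $\hat{x}_k := \mathbb{E}\{x_k \mid s_{1:k}\}$, so that

$$ \mathbb{E}\|Q_S x_k - M_S \hat{x}_k\|^2 = \operatorname{Tr}\{Q_S' Q_S \Sigma_k\} - 2\operatorname{Tr}\{Q_S' M_S H_k\} + \operatorname{Tr}\{M_S' M_S H_k\} = \operatorname{Tr}\{Q_S' Q_S \Sigma_k\} + \operatorname{Tr}\{H_k V\}, $$

which, summed over $k$, gives exactly (13.17) with the constant (13.19).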

We emphasize that $H_k \in \mathbb{S}^m$ is not the posterior covariance, i.e., $\operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k}\}\} \neq \operatorname{cov}\{x_k \mid s_{1:k}\}$ in general. The cost function depends on the signaling rule $\eta_{1:\kappa} \in \Upsilon$ only through the covariance matrices $H_{1:\kappa}$, and the cost is an affine function of $H_{1:\kappa}$. By formulating this relation, we can obtain an equivalent finite-dimensional optimization problem over the space of symmetric matrices as an alternative to the infinite-dimensional problem over the policy space $\Upsilon$. Next, we seek to address the following question: What is the relation between the signaling rule $\eta_{1:\kappa} \in \Upsilon$ and the covariance of the posterior estimate $H_{1:\kappa}$?

Here, we only consider the scenario where S has access to the underlying information of interest perfectly. We will address the scenarios with partial or noisy measurements in Sect. 13.9 by transforming that setting into the setting of perfect measurements. There are two extreme cases for the shared information: either sharing the information fully without any crafting or sharing no information. The former implies that the covariance of the posterior estimate would be $\Sigma_k$, whereas the latter implies that it would be $\operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k-1}\}\}$, since R has perfect memory. On the other hand, for an arbitrary signaling strategy that is not necessarily one of these two extremes, the positive semi-definite matrix $\operatorname{cov}\{x_k - \mathbb{E}\{x_k \mid s_{1:k}\}\}$ can be written as

$$ \operatorname{cov}\{x_k - \mathbb{E}\{x_k \mid s_{1:k}\}\} = \Sigma_k - H_k, \tag{13.20} $$

which follows from (13.16). Furthermore, if we consider the positive semi-definite matrix $\operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k}\} - \mathbb{E}\{x_k \mid s_{1:k-1}\}\}$, by (13.16) we obtain

$$ \operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k}\} - \mathbb{E}\{x_k \mid s_{1:k-1}\}\} = H_k - \operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k-1}\}\}. \tag{13.21} $$


Therefore, based on (13.20) and (13.21), we obtain the necessary condition

$$ \Sigma_k \succeq H_k \succeq \operatorname{cov}\{\mathbb{E}\{x_k \mid s_{1:k-1}\}\}, \tag{13.22} $$

which holds for any signaling strategy, independently of the distribution of the underlying information. It follows that an affirmative answer to the following question would also settle the main question above: Is the necessary condition (13.22) on $H_k \in \mathbb{S}^m$ sufficient? A tractable sufficient condition for arbitrary continuous distributions is an open problem. However, in the following subsection, we show that when the information of interest is Gaussian, we can address this challenge and the necessary condition turns out to be sufficient.

13.5.1 Gaussian Information of Interest

In addition to its use in modeling various uncertain phenomena based on the central limit theorem, the Gaussian distribution has special characteristics that make it versatile in various engineering applications, e.g., in communication and control. The deception-as-defense framework is not an exception to the versatility of the Gaussian distribution. As an example, if the information of interest is Gaussian, the optimal signaling rule turns out to be a linear function within the general class of measurable policies, as will be shown in different settings throughout this chapter.

Let us first focus on the single-stage setting, where the necessary condition (13.22) is given as

$$ \Sigma_1 \succeq H_1 \succeq O_m. \tag{13.23} $$

The convention here is that for arbitrary symmetric matrices $A, B \in \mathbb{S}^m$, $A \succeq B$ means that $A - B \succeq O$, i.e., positive semi-definite. We further note that the space of positive semi-definite matrices is a semi-cone [29]. Correspondingly, Fig. 13.2 provides a figurative illustration of (13.23), where $H_1 \in \mathbb{S}^m$ is bounded from both below and above by certain semi-cones in the space of symmetric matrices. With a certain linear transformation bijective over (13.23), denoted by $\mathcal{L}_1 : \mathbb{S}^m \to \mathbb{S}^n$, where $n \in \mathbb{Z}$ is not necessarily the same as $m \in \mathbb{Z}$, the necessary condition (13.23) can be written as

$$ I_n \succeq \mathcal{L}_1(H_1) \succeq O_n. \tag{13.24} $$

As an example of such a linear mapping when $\Sigma_1 \in \mathbb{S}^m$ is invertible, we can consider $\mathcal{L}_1(H_1) = \Sigma_1^{-1/2} H_1 \Sigma_1^{-1/2}$ and $n = m$. If $\Sigma_1$ is singular, then the following lemma from [23] plays an important role in computing such a linear mapping.

Fig. 13.2 A figurative illustration that the covariance of the posterior estimate $H_1$ is bounded from above and below by semi-cones in the space of symmetric matrices, i.e., $\Sigma_1 \succeq H_1 \succeq O_m$. Furthermore, we can transform the space to the form on the right through a certain linear mapping $\mathcal{L}_1 : \mathbb{S}^m \to \mathbb{S}^n$, where $n \in \mathbb{Z}$ may be different from $m \in \mathbb{Z}$

Lemma 13.1 Provided that a given positive semi-definite matrix can be partitioned into blocks such that a block on the diagonal is a zero matrix, then the corresponding off-diagonal blocks must also be zero matrices, i.e.,

$$ \begin{bmatrix} A & B \\ B' & O \end{bmatrix} \succeq O \;\Leftrightarrow\; A \succeq O \text{ and } B = O. \tag{13.25} $$

Let the singular $\Sigma_1 \in \mathbb{S}^m$ with rank $n < m$ have the eigen-decomposition

$$ \Sigma_1 = U_1 \begin{bmatrix} \Lambda_1 & O \\ O & O \end{bmatrix} U_1', \tag{13.26} $$

where $\Lambda_1 \succ O_n$. Then, (13.23) can be written as

$$ \begin{bmatrix} \Lambda_1 & O \\ O & O \end{bmatrix} - \begin{bmatrix} N_{1,1} & N_{1,2} \\ N_{1,2}' & N_{2,2} \end{bmatrix} = \begin{bmatrix} \Lambda_1 - N_{1,1} & -N_{1,2} \\ -N_{1,2}' & -N_{2,2} \end{bmatrix} \succeq O, \tag{13.27} $$

where we let

$$ U_1' H_1 U_1 = \begin{bmatrix} N_{1,1} & N_{1,2} \\ N_{1,2}' & N_{2,2} \end{bmatrix} \tag{13.28} $$

be the corresponding partitioning, i.e., $N_{1,1} \in \mathbb{S}^n$. Since $U_1' H_1 U_1 \succeq O_m$, the diagonal block $N_{2,2} \in \mathbb{S}^{m-n}$ must be positive semi-definite [14]. Further, (13.27) yields that $-N_{2,2} \succeq O_{m-n}$, which implies that $N_{2,2} = O_{m-n}$. Invoking Lemma 13.1, we obtain $N_{1,2} = O_{n \times (m-n)}$. Therefore, a linear mapping bijective over (13.23) is given by

$$ \mathcal{L}_1(H_1) = \begin{bmatrix} \Lambda_1^{-1/2} & O_{n \times (m-n)} \end{bmatrix} U_1' H_1 U_1 \begin{bmatrix} \Lambda_1^{-1/2} \\ O_{(m-n) \times n} \end{bmatrix}, \tag{13.29} $$

where the unitary matrix $U_1 \in \mathbb{R}^{m \times m}$ and the diagonal matrix $\Lambda_1 \in \mathbb{S}^n$ are as defined in (13.26).


With the linear mapping (13.29) that is bijective over (13.23), the necessary condition on $H_1 \in \mathbb{S}^m$ can be written as

$$ \Sigma_1 \succeq H_1 \succeq O_m \;\Leftrightarrow\; I_n \succeq \mathcal{L}_1(H_1) \succeq O_n \;\Rightarrow\; \text{the eigenvalues of } \mathcal{L}_1(H_1) \text{ lie in } [0, 1], $$

since the eigenvalues of $I_n$ weakly majorize the eigenvalues of the positive semi-definite $\mathcal{L}_1(H_1)$ from below [14]. Up to this point, the specific distribution of the information of interest did not play any role. However, for the sufficiency of the condition (13.23), Gaussianity of the information of interest plays a crucial role, as shown in the following theorem [24].

Theorem 13.1 Consider m-variate Gaussian information of interest $x_1 \sim \mathrm{N}(0, \Sigma_1)$. Then the necessary condition (13.23) on the covariance matrix of the posterior estimate is also a sufficient condition, and the associated signaling rule is linear in $x_1$, as described later in (13.31).

Proof Given any covariance matrix $H_1 \in \mathbb{S}^m$ satisfying

$$ \Sigma_1 \succeq H_1 \succeq O_m, \tag{13.30} $$

let $\mathcal{L}_1(H_1) \in \mathbb{S}^n$ have the eigen-decomposition $\mathcal{L}_1(H_1) = \bar{U}_1 \bar{\Lambda}_1 \bar{U}_1'$, with $\bar{\Lambda}_1 = \operatorname{diag}\{\bar{\lambda}_{1,1}, \ldots, \bar{\lambda}_{1,n}\}$. Next consider a probabilistic linear-in-$x_1$ signaling rule given by

$$ \eta_1(x_1) = L_1 x_1 + n_1, \tag{13.31} $$

where $L_1 \in \mathbb{R}^{m \times m}$ and $n_1 \sim \mathrm{N}(0, \Sigma_1^o)$ is an independent m-variate Gaussian random variable. Since the information of interest and the signal are jointly Gaussian, we obtain

$$ \operatorname{cov}\{\mathbb{E}\{x_1 \mid L_1 x_1 + n_1\}\} = \Sigma_1 L_1' (L_1 \Sigma_1 L_1' + \Sigma_1^o)^{\dagger} L_1 \Sigma_1. \tag{13.32} $$

Suppose that the gain matrix $L_1 \in \mathbb{R}^{m \times m}$ and the covariance matrix $\Sigma_1^o \succeq O_m$ satisfy

$$ L_1 := U_1 \begin{bmatrix} I_n \\ O \end{bmatrix} \Lambda_1^{-1/2} \bar{U}_1 \Lambda_1^o \begin{bmatrix} I_n & O \end{bmatrix}, \tag{13.33} $$

where the unitary matrix $U_1 \in \mathbb{R}^{m \times m}$ and the diagonal matrix $\Lambda_1 \in \mathbb{S}^n$ are as defined in (13.26), $\Lambda_1^o := \operatorname{diag}\{\lambda_{1,1}^o, \ldots, \lambda_{1,n}^o\}$, $\Sigma_1^o = \operatorname{diag}\{(\sigma_{1,1}^o)^2, \ldots, (\sigma_{1,n}^o)^2, 0, \ldots, 0\}$, and

$$ \frac{(\lambda_{1,i}^o)^2}{(\lambda_{1,i}^o)^2 + (\sigma_{1,i}^o)^2} = \bar{\lambda}_{1,i} \in [0, 1], \quad \forall\, i = 1, \ldots, n. \tag{13.34} $$
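One admissible choice of the pair in (13.34), which we add here for concreteness (it is not prescribed in the text), is $\lambda_{1,i}^o = \bar{\lambda}_{1,i}$ and $(\sigma_{1,i}^o)^2 = \bar{\lambda}_{1,i}(1 - \bar{\lambda}_{1,i})$, since then

$$ \frac{(\lambda_{1,i}^o)^2}{(\lambda_{1,i}^o)^2 + (\sigma_{1,i}^o)^2} = \frac{\bar{\lambda}_{1,i}^2}{\bar{\lambda}_{1,i}^2 + \bar{\lambda}_{1,i}(1 - \bar{\lambda}_{1,i})} = \bar{\lambda}_{1,i} $$

whenever $\bar{\lambda}_{1,i} > 0$, while $\bar{\lambda}_{1,i} = 0$ is attained by simply setting $\lambda_{1,i}^o = 0$. In particular, when every $\bar{\lambda}_{1,i} \in \{0, 1\}$, no additive noise is needed at all, which anticipates the noiseless optimal rules of Sect. 13.6.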


After some algebra, (13.32) yields that

$$ \operatorname{cov}\{\mathbb{E}\{x_1 \mid L_1 x_1 + n_1\}\} = H_1, \tag{13.35} $$

which completes the proof.

Without any need to solve a functional optimization problem to compute the optimal signaling rule, Theorem 13.1 shows the optimality of the "linear plus a random variable" signaling rule within the general class of stochastic kernels when the information of interest is Gaussian. As shown in the following corollary to Theorem 13.1, this enables us to transform the infinite-dimensional optimization problem into an equivalent and computationally tractable finite-dimensional optimization problem.

Corollary 13.1 If the underlying information of interest is Gaussian, instead of the functional optimization problem

$$ \min_{\eta_1 \in \Upsilon} \mathbb{E}\, \| Q_S x_1 - K_S \mathbb{E}\{x_1 \mid \eta_1(x_1)\} \|^2, \tag{13.36} $$

we can consider the equivalent finite-dimensional problem

$$ \min_{S \in \mathbb{S}^m} \operatorname{Tr}\{S V\}, \quad \text{subject to } \Sigma_1 \succeq S \succeq O. \tag{13.37} $$

Then, we can compute the optimal signaling rule $\eta_1^*$ corresponding to the solution of (13.37) via (13.31)–(13.34).

Remark 13.1 We note that a linear signaling rule would still be optimal even if we introduced additional constraints on the covariance of the posterior estimate. Recall that the distribution of the underlying information plays a role only in proving the sufficiency of the necessary condition. Therefore, in general, based only on the necessary condition, we have

$$ \min_{\eta_1 \in \Upsilon} \mathbb{E}\, \| Q_S x_1 - K_S \mathbb{E}\{x_1 \mid \eta_1(x_1)\} \|^2 \;\geq\; \min_{S \in \mathbb{S}^m} \operatorname{Tr}\{S V\}, \quad \text{subject to } \Sigma_1 \succeq S \succeq O. \tag{13.38} $$

The equality holds when the information of interest is Gaussian.

Remark 13.2 For fixed covariance $\Sigma_1 \in \mathbb{S}^m$, the Gaussian distribution is the best one for S to persuade R in accordance with his/her deceptive objective, since it yields total freedom to attain any covariance of the posterior estimate in between the two extremes $\Sigma_1 \succeq H_1 \succeq O$. The following counterexample shows that the sufficiency of the necessary condition (13.23) holds only in the case of the Gaussian distribution.


Example 13.3 For a clear demonstration, suppose that $m = 2$ and $\Sigma_1 = I_2$, and correspondingly $x_1 = [x_{1,1} \;\; x_{1,2}]'$. The covariance matrix $H := \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ satisfies the necessary condition (13.23) since

$$ I_2 \succeq H \succeq O_2, \tag{13.39} $$

which implies that the signal $s_1$ must be fully informative about $x_{1,1}$ without giving any information about $x_{1,2}$. Note that $\Sigma_1 = I_2$ only implies that $x_{1,1}$ and $x_{1,2}$ are uncorrelated, yet not necessarily independent for arbitrary distributions. Therefore, if $x_{1,1}$ and $x_{1,2}$ are uncorrelated but dependent, then no signaling rule can attain that covariance of the posterior estimate even though it satisfies the necessary condition.

Let us now consider a Gauss–Markov process, which follows the first-order auto-regressive recursion

$$ x_{k+1} = A x_k + w_k, \tag{13.40} $$

where $A \in \mathbb{R}^{m \times m}$ and $w_k \sim \mathrm{N}(0, \Sigma_w)$. For this model, the necessary condition (13.22) is given by

$$ \Sigma_k \succeq H_k \succeq A H_{k-1} A', \tag{13.41} $$

for $k = 2, \ldots, \kappa$. Given $H_{1:k-1}$, let $\Sigma_k - A H_{k-1} A'$ have the eigen-decomposition

$$ \Sigma_k - A H_{k-1} A' = U_k \begin{bmatrix} \Lambda_k & O \\ O & O \end{bmatrix} U_k', \tag{13.42} $$

where $\Lambda_k \succ O_{n_k}$, i.e., $\Sigma_k - A H_{k-1} A'$ has rank $n_k$. The linear transformation $\mathcal{L}_k : \prod_{i=1}^{k} \mathbb{S}^m \to \mathbb{S}^{n_k}$ given by

$$ \mathcal{L}_k(H_{1:k}) = \begin{bmatrix} \Lambda_k^{-1/2} & O_{n_k \times (m-n_k)} \end{bmatrix} U_k' (H_k - A H_{k-1} A') U_k \begin{bmatrix} \Lambda_k^{-1/2} \\ O_{(m-n_k) \times n_k} \end{bmatrix} \tag{13.43} $$

is bijective over (13.41). With the linear mapping (13.43), the necessary condition on $H_{1:\kappa} \in \prod_{i=1}^{\kappa} \mathbb{S}^m$ can be written as

$$ \Sigma_k \succeq H_k \succeq A H_{k-1} A' \;\Leftrightarrow\; I_{n_k} \succeq \mathcal{L}_k(H_{1:k}) \succeq O_{n_k}, \tag{13.44} $$

which correspondingly yields that $\mathcal{L}_k(H_{1:k}) \in \mathbb{S}^{n_k}$ has eigenvalues in the closed interval $[0, 1]$. The following theorem extends the equivalence result from the single-stage setting to multi-stage ones [24].

Theorem 13.2 Consider the m-variate Gauss–Markov process $\{x_k \sim \mathrm{N}(0, \Sigma_k)\}$ following the state recursion (13.40). Then the necessary condition (13.41) on the covariance matrix of the posterior estimate at each stage is also a sufficient condition, and the associated signaling rule is linear in $x_k$, i.e., memoryless, as described later in (13.46).

Proof Given any covariance matrices $H_{1:\kappa} \in \prod_{i=1}^{\kappa} \mathbb{S}^m$ satisfying

$$ \Sigma_k \succeq H_k \succeq A H_{k-1} A', \tag{13.45} $$

where $H_0 = O_m$. Furthermore, given $H_{1:k-1}$, let $\mathcal{L}_k(H_{1:k}) \in \mathbb{S}^{n_k}$ have the eigen-decomposition $\mathcal{L}_k(H_{1:k}) = \bar{U}_k \bar{\Lambda}_k \bar{U}_k'$, with $\bar{\Lambda}_k = \operatorname{diag}\{\bar{\lambda}_{k,1}, \ldots, \bar{\lambda}_{k,n_k}\}$. Next consider a probabilistic linear-in-$x_k$ signaling rule

$$ \eta_k(x_{1:k}) = L_k x_k + n_k, \tag{13.46} $$

where $L_k \in \mathbb{R}^{m \times m}$ and $\{n_k \sim \mathrm{N}(0, \Sigma_k^o)\}$ is an independently distributed m-variate Gaussian process. Since the information of interest and the signals are all jointly Gaussian, we obtain that $S_k := \operatorname{cov}\{\mathbb{E}\{x_k \mid \eta_1(x_1), \ldots, \eta_k(x_{1:k})\}\}$ for $k > 0$ satisfies

$$ S_k = (\Sigma_k - A S_{k-1} A') L_k' \big( L_k (\Sigma_k - A S_{k-1} A') L_k' + \Sigma_k^o \big)^{\dagger} L_k (\Sigma_k - A S_{k-1} A'). \tag{13.47} $$

Suppose that the gain matrix $L_k \in \mathbb{R}^{m \times m}$ and the covariance matrix $\Sigma_k^o \succeq O_m$, for each $k > 0$, satisfy

$$ L_k := U_k \begin{bmatrix} I_{n_k} \\ O \end{bmatrix} \Lambda_k^{-1/2} \bar{U}_k \Lambda_k^o \begin{bmatrix} I_{n_k} & O \end{bmatrix}, \tag{13.48} $$

where the unitary matrix $U_k \in \mathbb{R}^{m \times m}$ and the diagonal matrix $\Lambda_k \in \mathbb{S}^{n_k}$ are as defined in (13.42), $\Lambda_k^o := \operatorname{diag}\{\lambda_{k,1}^o, \ldots, \lambda_{k,n_k}^o\}$, $\Sigma_k^o = \operatorname{diag}\{(\sigma_{k,1}^o)^2, \ldots, (\sigma_{k,n_k}^o)^2, 0, \ldots, 0\}$, and

$$ \frac{(\lambda_{k,i}^o)^2}{(\lambda_{k,i}^o)^2 + (\sigma_{k,i}^o)^2} = \bar{\lambda}_{k,i} \in [0, 1], \quad \forall\, i = 1, \ldots, n_k. \tag{13.49} $$

After some algebra, (13.47) yields that $S_k = H_k$ for all $k > 0$, which completes the proof.

Without any need to solve a functional optimization problem to compute optimal signaling rules, Theorem 13.2 shows the optimality of the "linear plus a random variable" signaling rule within the general class of stochastic kernels also in dynamic environments, when the information of interest is Gaussian. As shown in the following corollary to Theorem 13.2, this enables us to transform the functional optimization problem into a semi-definite program.

Corollary 13.2 If the underlying information of interest is Gauss–Markov, instead of the functional optimization problem


$$ \min_{\eta_{1:\kappa} \in \Upsilon} \sum_{k=1}^{\kappa} \mathbb{E}\, \| Q_S x_k - K_S \mathbb{E}\{x_k \mid \eta_1(x_1), \ldots, \eta_k(x_{1:k})\} \|^2, \tag{13.50} $$

we can consider the equivalent finite-dimensional problem

$$ \min_{\{S_k \in \mathbb{S}^m\}_k} \sum_{k=1}^{\kappa} \operatorname{Tr}\{S_k V\}, \quad \text{subject to } \Sigma_k \succeq S_k \succeq A S_{k-1} A', \; k = 1, \ldots, \kappa. \tag{13.51} $$

Then, we can compute the optimal signaling rule ηk∗ corresponding to the solution of (13.51) via (13.46)–(13.49). In the following sections, we provide applications of these results in communication and control systems.

13.6 Communication Systems

In this section, we elaborate further on the deception-as-defense framework in non-cooperative communication systems, with a specific focus on Gaussian information of interest. We first note that in this case the optimal signaling rule turns out to be a linear deterministic signaling rule, where S does not need to introduce additional independent noise into the signal sent. Furthermore, the optimal signaling rule can be computed analytically for the single-stage game [28]. We also extend the result on the optimality of linear signaling rules to multi-stage settings [25].

In the single-stage setting, by Theorem 13.1, the SDP problem equivalent to the problem (13.15) faced by S is given by

$$ \min_{S \in \mathbb{S}^m} \operatorname{Tr}\{S V\} \quad \text{subject to } \Sigma_1 \succeq S \succeq O_m. \tag{13.52} $$

We can obtain a closed-form solution for the equivalent SDP problem (13.52) [28]. If $\Sigma_1 \in \mathbb{S}^m$ has rank $n$, then a change of variables with the linear mapping $\mathcal{L}_1 : \mathbb{S}^m \to \mathbb{S}^n$ in (13.29), e.g., $T := \mathcal{L}_1(S)$, yields that (13.52) can be written as

$$ \min_{T \in \mathbb{S}^n} \operatorname{Tr}\{T W\} \quad \text{subject to } I_n \succeq T \succeq O_n, \tag{13.53} $$

where

$$ W := \begin{bmatrix} \Lambda_1^{1/2} & O_{n \times (m-n)} \end{bmatrix} U_1' V U_1 \begin{bmatrix} \Lambda_1^{1/2} \\ O_{(m-n) \times n} \end{bmatrix}. \tag{13.54} $$

If we multiply each side of the inequalities in the constraint set of (13.53) from the left and right by unitary matrices such that the resulting matrices are still symmetric, the semi-definiteness inequalities still hold. Therefore, let the symmetric matrix $W \in \mathbb{S}^n$ have the eigen-decomposition

$$ W = \begin{bmatrix} U_+ & U_- \end{bmatrix} \begin{bmatrix} \Lambda_+ & O \\ O & -\Lambda_- \end{bmatrix} \begin{bmatrix} U_+' \\ U_-' \end{bmatrix}, \tag{13.55} $$

where $\Lambda_+$ and $\Lambda_-$ are positive semi-definite matrices with dimensions $n_+$ and $n_-$. Then (13.53) can be written as

$$ \min_{T_+ \in \mathbb{S}^{n_+},\, T_- \in \mathbb{S}^{n_-}} \operatorname{Tr}\{T_+ \Lambda_+\} - \operatorname{Tr}\{T_- \Lambda_-\} \quad \text{subject to } I_{n_+} \succeq T_+ \succeq O_{n_+}, \; I_{n_-} \succeq T_- \succeq O_{n_-}, \tag{13.56} $$

and there exists a $T_r \in \mathbb{R}^{n_+ \times n_-}$ such that

$$ T = \begin{bmatrix} U_+ & U_- \end{bmatrix} \begin{bmatrix} T_+ & T_r \\ T_r' & T_- \end{bmatrix} \begin{bmatrix} U_+' \\ U_-' \end{bmatrix} \tag{13.57} $$

satisfies the constraint in (13.53). The following lemma then shows that an optimal solution for (13.56) is given by $T_+^* = O_{n_+}$, $T_r^* = O_{n_+ \times n_-}$, and $T_-^* = I_{n_-}$. Therefore, in (13.56), the second (negative semi-definite) term $-\operatorname{Tr}\{T_- \Lambda_-\}$ can be viewed as the aligned part of the objectives, whereas the remaining first (positive semi-definite) term $\operatorname{Tr}\{T_+ \Lambda_+\}$ is the misaligned part.

Lemma 13.2 For arbitrary $I_n \succeq A = [a_{i,j}] \succeq O_n$ and diagonal positive semi-definite $B = \operatorname{diag}\{b_1, \ldots, b_n\} \succeq O_n$, we have

$$ 0 \leq \operatorname{Tr}\{A B\} = \sum_{i=1}^{n} a_{i,i} b_i \leq \operatorname{Tr}\{B\} = \sum_{i=1}^{n} b_i. \tag{13.58} $$

Proof The left inequality follows since $\operatorname{Tr}\{A B\} = \operatorname{Tr}\{A^{1/2} B A^{1/2}\}$ while $A^{1/2} B A^{1/2}$ is positive semi-definite. Given two vectors $x, y \in \mathbb{R}^n$, we say that $x$ majorizes $y$ if

$$ \max_{1 \leq i_1 \leq \ldots \leq i_k \leq n} \sum_{j=1}^{k} x_{i_j} \;\geq\; \max_{1 \leq i_1 \leq \ldots \leq i_k \leq n} \sum_{j=1}^{k} y_{i_j}, \quad \forall\, k, \tag{13.59} $$

with equality if $k = n$. Then the right inequality follows since the vector of eigenvalues of $A$ majorizes its vector of main diagonal entries $[a_{i,i}]_{i=1}^{n}$ by the Schur Theorem [14, Theorem 4.3.45], while the eigenvalues of $A$ are in $[0, 1]$ since $I_n \succeq A \succeq 0$, which yields that $a_{i,i} \leq 1$ for all $i = 1, \ldots, n$.
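A quick numerical check of Lemma 13.2 (our own illustration): take $A = \operatorname{diag}\{0.3, 0.8\}$, which satisfies $I_2 \succeq A \succeq O_2$, and $B = \operatorname{diag}\{2, 5\}$. Then

$$ \operatorname{Tr}\{A B\} = 0.3 \cdot 2 + 0.8 \cdot 5 = 4.6, $$

which indeed lies between $0$ and $\operatorname{Tr}\{B\} = 7$, as the lemma asserts.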



On + On + ×n − On − ×n + In −

   U+ . U−

(13.60)

13 Deception-as-Defense Framework for Cyber-Physical Systems

307

By invoking Theorem 13.1 and (13.33), we obtain the following theorem to compute the optimal signaling rule analytically in single-stage game G (a version of the theorem can be found in [28]). Theorem 13.3 Consider a single-stage deception-as-defense game G, where S and R have the cost functions (13.3) and (13.4), respectively. Then, an optimal signaling rule is given by η1∗ (xx 1 )

 =



In O(m−n)×n

  On + ×n −1/2  In On×(m−n) U1 x 1 , 1 U−

(13.61)

almost everywhere over Rm . The matrices U1 ∈ Rm×m , 1 ∈ Sn are as defined in (13.26), and U− ∈ Rn×n − is as defined in (13.55). Note that the optimal signaling rule (13.61) does not include any additional noise term. The following corollary shows that the optimal signaling rule does not include additional noise when κ > 1 as well (versions of this theorem can be found in [23, 25]). Corollary 13.3 Consider a deception-as-defense game G, where the exogenous Gaussian information of interest follows the first-order auto-regressive model (13.40), and the players S and R have theŚ cost functions (13.3) and (13.4), respectively. Then, ∗ ∗ ∈ κk=1 Sm of the equivalent problem, Pk := Lk (S1:k ) for the optimal solution S1:κ nk is a symmetric idempotent matrix, which implies that the eigenvalues of Pk ∈ S are either 0 or 1. Let n k,1 ∈ Z denote the rank of Pk , and Pk have the eigen-decomposition   On k −n k,1  U U Pk = k,0 k,1

   Uk,0 .  In k,1 Uk,1

(13.62)

Then, the optimal signaling rule is given by ηk∗ (xx 1:k ) =



In k



O(m−n k )×n k

  O(n k −n k,1 )×n k −1/2  In k On k ×(m−n k ) Uk x k , k  Uk,1

(13.63)

almost everywhere over Rm , for k = 1, . . . , κ. The unitary matrix Uk ∈ Rm×m and the diagonal matrix k ∈ Sn k are defined in (13.42).

13.7 Control Systems The deception-as-defense framework also covers the non-cooperative control settings including a sensor observing the state of the system and a controller driving the system based on the sensor outputs according to certain quadratic control objectives, e.g., (13.9). Under the general game setting where the players can select any measurable policy, the control setting cannot be transformed into a communication setting

308

M. O. Sayin and T. Ba¸sar

straightforwardly since the problem features non-classical information due to the asymmetry of information between the players and the dynamic interaction through closed-loop feedback signals, which leads to two-way information flow rather than one-way flow as in the communication setting in Sect. 13.6. However, the control setting can be transformed into a non-cooperative communication setting under certain conditions, e.g., when signaling rules are restricted to be linear plus a random term. Consider a controlled Gauss–Markov process following the recursion (13.7), and with players S and R seeking to minimize the quadratic control objectives (13.8) and (13.9), respectively. Then, by completing to squares, the cost functions (13.8) and (13.9) can be written as  E

κ 

 x k+1 Q j x k

k=1

+ u k R j u k

=

κ 

EK j,k x k + u k 2 j,k + δ j,0 ,

(13.64)

k=1

where j = S, R, and  ˜ K j,k = −1 j,k B Q j,k+1 A

(13.65)

j,k = B  Q˜ j,k+1 B + R j κ  Tr{ Q˜ j,k+1 w } δ j,0 = Tr{Q j 1 } +

(13.66) (13.67)

k=1

and { Q˜ j,k } follows the discrete-time dynamic Riccati equation:  ˜ Q˜ j,k = Q j + A ( Q˜ j,k+1 − Q˜ j,k+1 B −1 j,k B Q j,k+1 )A,

(13.68)

and Q˜ j,κ+1 = Q j . On the right-hand side of (13.64), the state depends on the control input u j,k , for j = S, R, however, a routine change of variables yields that κ 

EK j,k x k + u k 2 j,k =

k=1

κ 

EK j,k x ok + u ok 2 j,k ,

(13.69)

k=1

where we have introduced the control-free, i.e., exogenous, process {xx ok } following the first-order auto-regressive model x ok+1 = Axx ok + w k , k = 1, . . . , κ, and x o1 = x 1 ,

(13.70)

and a linearly transformed control input u ok = u k + K j,k Buu k−1 + . . . + K j,k Ak−2 Buu 1 .

(13.71)

13 Deception-as-Defense Framework for Cyber-Physical Systems

309

Remark 13.3 The right-hand side of (13.69) resembles the cost functions in the communication setting, which may imply separability over the horizon and for EK j,k x ok + u ok 2 j,k ,

(13.72)

the optimal transformed control input is given by u ok = −K j,k E{xx ok |ss 1:k } and the corresponding optimal control input could be computed by reversing the transformation (13.71). However, here, the control rule constructs the control input based on the sensor outputs, which are chosen strategically by the non-cooperating S while S constructs the sensor outputs based on the actual state, which is driven by the control input, rather than the control-free state. Therefore, R can have impact on the sensor outputs by having an impact on the actual state. Therefore, the game G under the general setting features a non-classical information scheme. However, if S’s strategies are restricted to linear policies ηk ∈ ϒk ⊂ ϒk , given by ηk (xx 1:k ) = L k,k x k + . . . + L k,1 x 1 + n k ,

(13.73)

then we have E{xx ok |L k,k x k + . . . +L k,1 x 1 + n k , . . . , L 1,1 x 1 + n 1 } (13.74) = E{xx ok |L k,k x ok + . . . + L k,1 x o1 + n k , . . . , L 1,1 x o1 + n 1 } (13.75)

since by (13.7) and (13.70), the signal s i for i = 1, . . . , k can be written as     x io + . . . + L 1,1 x o1 + n i + L i,i Buu i−1 + . . . + (L i,i Ai−2 + . . . + L i,2 )Buu 1 . s i = L i,i  σ −ss 1:i−1 measurable

(13.76) Therefore, for a given “linear plus noise” signaling rule, the optimal transformed control input is given by u ok = −K j,k E{xx ok |ss 1:k }. In order to reverse the transformation on the control input and to provide a compact representation, we introduce ⎡

I K j,κ B K j,κ AB ⎢ I K j,κ−1 B ⎢ ⎢ I  j := ⎢ ⎢ ⎣

⎤ · · · K j,κ Aκ−2 B · · · K j,κ−1 Aκ−3 B ⎥ ⎥ · · · K j,κ−2 Aκ−4 B ⎥ ⎥, ⎥ .. .. ⎦ . .

(13.77)

I and block diagonal matrices K j := diag{K j,κ , . . . , K j,1 } and j := diag{ j,κ , . . . , j,1 }. Then, (13.69) can be written as

(13.78)

310

M. O. Sayin and T. Ba¸sar κ 

EK j,k x ok + u ok 2 j,k = EK j x o +  j u 2 j ,

(13.79)

k=1

  where we have introduced the augmented vectors u = u κ · · · u 1 and x o =  o    (xx κ ) · · · (xx o1 ) . To recap, S and R seek to minimize, respectively, the following cost functions  , γ1:κ ) = EK S x o + Su 2 S + δS,0 , US (η1:κ

(13.80)

 UR (η1:κ , γ1:κ )

(13.81)

= EK R x + o

Ru 2 R

+ δR,0 .

We note the resemblance to the communication setting. Therefore following the same lines, S faces the following problem: min

 η1:κ ∈ϒ 

κ 

Tr{cov{E{xx ok |ss 1:k }}Vk } + vo ,

(13.82)

k=1

where vo := Tr{cov{xx o (xx o )}K S S K S } + δS,0 and Vk = k,k +

κ 

k,i Ai−k + (Ai−k ) i,k ,

(13.83)

i=k+1

where k,i ∈ Rm×m is an m × m block of  ∈ Rmκ×mκ , with indexing starting from the right-bottom to the left-top, and  := MS S MS − MS S K S − K S S MS ,

(13.84)

where MS := S −1 R K R . The optimal linear signaling rule in control systems can be computed according to Corollary 13.3 based on (13.82).

13.8 Uncertainty in the Uninformed Agent’s Objective In the deception-as-defense game G, the objectives of the players are common knowledge. However, there might be scenarios where the objective of the uninformed attacker may not be known precisely by the informed defender. In this section, our goal is to extend the results in the previous sections for such scenarios with uncertainties. To this end, we consider that R has a private-type ω ∈  governing his/her cost function and  is a finite set of types. For a known type of R, e.g., ω ∈ , as shown in both communication and control settings, the problem faced by the informed agent S can be written in an equivalent form as

13 Deception-as-Defense Framework for Cyber-Physical Systems

min

κ 

η1:κ ∈ϒ

Tr{Hk Vω,k } + vo ,

311

(13.85)

k=1

for certain symmetric matrices Vω,k ∈ Sm , which depend on R’s objective and correspondingly his/her type. If the distribution governing the type of R, e.g., { pω }ω∈ , where pω denotes the probability of type ω ∈ , were known, then the equivalence result would still hold straightforwardly when we consider Vk :=



pω Vω,k

(13.86)

ω∈

since (13.85) is linear in Vω,k ∈ Sm . For the scenarios where the distribution governing the type of R is not known, we can defend against the worst possible distribution over the types in a robust way. In the following, we define the corresponding robust deception-as-defense game. Definition 13.6 The robust deception-as-defense game G r := (ϒ, , , {xx k }, {yy k }, USr , URω )

(13.87)

is a Stackelberg game [2] between S and R, where • • • • •

ϒ and  denotes S’s and R’s strategy spaces  denotes the type set of R, {xx k } denotes the information of interest, {yy k } denotes S’s (possibly noisy) measurements of the information of interest, USr and URω are the objective functions of S and R, derived based on (13.3) and (13.4), or (13.8) and (13.9).

In this hierarchical setting, S is the leader, who announces (and commits to) his strategies beforehand, while R stands for followers of different types, reacting to the ω ∈ leader’s announced strategy. Players type-ω R and S select the strategies γ1:κ ω ω and η1:κ ∈ ϒ to minimize the cost functions UR (η1:κ , γ1:κ ) and ω }ω∈ ) = max USr (η1:κ , {γ1:κ

p∈ ||



ω pω US (η1:κ , γ1:κ ).

(13.88)

ω∈

Type-ω R selects his/her strategy knowing S’s strategy η1:κ ∈ ϒ. Let B ω (η1:κ ) ⊂  be type-ω R’s best reaction set to S’s strategy η1:κ ∈ ϒ. Then, the strategy and best ∗ ∗ , {B ω (η1:κ )}ω∈ ) attains the Stackelberg equilibrium provided that reactions pair (η1:κ ∗ ∈ argminη1:κ ∈ϒ η1:κ

ω max USr (η1:κ , {γ1:κ }ω∈ ), ω γ1:κ ∈B ω (η1:κ ), ω∈ ω argminγ1:κ ∈ URω (η1:κ , γ1:κ ).

(13.89)

B ω (η1:κ ) =

(13.90)

312

M. O. Sayin and T. Ba¸sar

Suppose S has access to the perfect measurement of the state. Then, in the robust deception-as-defense game G r , the equivalence result in Theorem 13.2 yields that the problem faced by S can be written as  min max Tr S S∈ p∈ ||



 pω Vω + vo ,

(13.91)

ω∈

where we have introduced the block diagonal matrices S := diag{Sκ , . . . , S1 } and Vω := diag{Vω,κ , . . . , Vω,1 }, and  ⊂ Smκ denotes the constraint set at this new high-dimensional space corresponding to the necessary and sufficient condition on the covariance of the posterior estimate. The following theorem from [24] provides an algorithm to compute the optimal signaling rules within the general class of measurable policies for the communication setting, and the optimal “linear plus noise” signaling rules for the control setting. Theorem 13.4 The value of the Stackelberg equilibrium (13.89), i.e., (13.91), is given by ϑ = minω∈ ϑω , where ϑω := min Tr{SVω } + vo , subject to Tr{(Vω − Vωo )S} ≥ 0 ∀ ωo = ω. S∈

(13.92)

Furthermore, let ω∗ ∈ argminω∈ ϑω and S ∗ ∈ argmin S∈ Tr{SVω∗ } + vo , subject to Tr{(Vω∗ − Vωo )S} ≥ 0 ∀ ωo = ω∗ . (13.93) Then, given S ∗ ∈ , we can compute the optimal signaling rule according to the equivalence result in Theorem 13.2. Proof There exists a solution for the equivalent problem (13.91) since the constraint sets are decoupled and compact while the objective function is continuous in the optimization arguments. Let (S ∗ , p ∗ ) be a solution of (13.91). Then, p ∗ ∈ || is given by p∗ ∈



 p ∈ || | pω = 0 if Tr{Vω S ∗ } < max Tr{Vωo S ∗ } , ωo =ω

(13.94)

since the objective in (13.91) is linear in p ∈ || . Since p ∗ ∈ || , i.e., a point over the simplex || , there exists at least one type with positive weight, e.g., pω∗ > 0. Then, (13.94) yields (13.95) Tr{Vω S ∗ } ≥ Tr{Vωo S ∗ }, ∀ ωo ∈  and furthermore

Tr{Vω S ∗ } =

 ωo ∈

pωo Tr{Vωo S ∗ },

(13.96)

13 Deception-as-Defense Framework for Cyber-Physical Systems

313

since for all ωo ∈  such that pωo > 0, we have Tr{Vωo S ∗ } = Tr{Vω S ∗ }. Therefore, given the knowledge that in the solution pω∗ > 0, we can write (13.91) as ⎧ ⎨ min max Tr S∈ p∈ ||



S

⎫ ⎬



pωo Vωo

ωo ∈



+ vo = min Tr{Vω S} S∈

s.t. Tr{(Vω − Vωo )S} ≥ 0 ∀ ωo ∈ . (13.97)

To mitigate the necessity pω∗ > 0 in the solution of the left-hand side, we can search over the finite set  since in the solution at least one type must have positive weight, which completes the proof. Remark 13.4 The optimization objective in (13.91) is given by  max Tr S

p∈ ||



 pω Vω + vo ,

(13.98)

ω∈

which is convex in S ∈  since the maximum of any family of linear functions is a convex function [6]. Therefore, the solution S ∗ ∈  may be a non-extreme point of the constraint set , which implies that in the optimal signaling rule S introduces independent noise. Note that Blackwell’s irrelevant information theorem [4, 5] implies that there must also be some other (nonlinear) signaling rule within the general class of measurable policies that can attain the equilibrium without introducing any independent noise.

13.9 Partial or Noisy Measurements Up to now, we have considered the scenario where S has perfect access to the underlying information of interest, but had mentioned at the beginning that results are extendable also to partial or noisy measurements, e.g., y k = Cxx k + v k ,

(13.99)

where C ∈ Rm×m and v k ∼ N(0, v ) is Gaussian measurement noise independent of all the other parameters. In this section, we discuss these extensions, which hold under certain restrictions on S’s strategy space. More precisely, for “linear plus noise” signaling rules ηk ∈ ϒk , k = 1, . . . , κ, the equivalence results in Theorems 13.1 and 13.2 hold in terms of the covariance of the posterior estimate of all the previous measurements,3 denoted by Yk := cov{E{yy 1:k |ss 1:k }}, rather than the covariance of the posterior estimate of the underlying state Hk = cov{E{xx k |ss 1:k }}. Particularly, the following lemma from [22] shows that there exists a linear relation between the 3 With

  some abuse of notation, we denote the vector y κ · · · y 1 by y 1:k ∈ Rmk .

314

M. O. Sayin and T. Ba¸sar

covariance matrices Hk ∈ Sm and Yk ∈ Smk since x k → y 1:k → s 1:k forms a Markov chain in that order. Lemma 13.3 Consider zero-mean jointly Gaussian random vectors x , y , s that form a Markov chain, e.g., x → y → s in this order. Then, the conditional expectations of x and y given s satisfy the following linear relation: E{xx |ss } = E{xx y  }E{yy y  }† E{yy |ss }.

(13.100)

Note that s 1:k is jointly Gaussian with x k and y 1:k since ηi ∈ ϒi , for i = 1, . . . , k. Based on Lemma 13.3, the covariance matrices Hk ∈ Sm and Yk ∈ Smk satisfy Hk = Dk Yk Dk ,

(13.101)

where Dk := E{xx k y 1:k }E{yy 1:k y 1:k }† ∈ Rm×mk . Furthermore, y 1:k ∈ Rmk follows the first-order auto-regressive recursion: y 1:k

    E{yy k y 1:k−1 }E{yy 1:k−1 y 1:k−1 }† y k − E{yy k |yy 1:k−1 } . (13.102) = y 1:k−1 + Im(k−1) 0m(k−1)  y

=:Ak

Therefore, the optimization problem faced by S can be viewed as belonging to the non-cooperative communication setting with perfect measurements for the Gauss– Markov process {yy 1:k } following the recursion (13.102), and it can be written as min

 η1:κ ∈ϒ 

κ 

Tr{Yk Wk } + vo ,

(13.103)

k=1

where Wk := Dk Vk Dk . Remark 13.5 Without loss of generality, we can suppose that the signal s k sent by S is mk dimensional so that S can disclose y 1:k . To distinguish the introduced auxiliary signaling rule from the actual signaling rule ηk , we denote it by η˜ k ∈ ϒ˜ k and the policy space ϒ˜ k is defined accordingly. When the information of interest is  , we can always set the ith optimal signaling rule Gaussian, for a given optimal η˜ 1:i  ηi (·) in the original signal space ϒi as ηi (yy 1:i ) = E{xx i |η˜ 1 (yy 1 ), . . . , η˜ i (yy 1:i )},

(13.104)

almost everywhere over Rm , and the right-hand side is the conditional expectation of x i with respect to the random variables η˜ 1 (yy 1 ), . . . , η˜ i (yy 1:i ). Then, for k = 1, . . . , κ, we would obtain E{xx k |η1 (yy 1 ), . . . , ηk (yy 1:k )} = E{xx k |η˜ 1 (yy 1 ), . . . , η˜ k (yy 1:k )},

(13.105)

13 Deception-as-Defense Framework for Cyber-Physical Systems

315

 almost everywhere over Rm , since for η1:κ ∈ ϒ  selected according to (13.104), all the  y   (yy 1:k−1 )} previously sent signals {η1 (y 1 ), . . . , ηk−1 (yy 1:k−1 )} are σ -{η˜ 1 (yy 1 ), . . . , η˜ k−1 measurable.

Based on this observation, for partial or noisy measurements, we have the equivalent problem min Śκ

Y1:κ ∈

k=1

κ  Smk

Tr{Yk Wk }, subject to cov{yy 1:k }  Yk  Ak Yk−1 (Ak ) , (13.106) y

y

k=1

∗ , we can compute the corresponding signaling where Y0 = 0. Given the solution Y1:κ  rules η˜ 1:κ according to Theorem 13.2 and then the actual optimal signaling rule  ∈ ϒ  can be computed by (13.104). η1:κ

13.10 Conclusion In this chapter, we have introduced the deception-as-defense framework for cyberphysical systems. A rational adversary takes certain actions to carry out a malicious task based on the available information. By crafting the information available to the adversary, our goal was to control him/her to take actions inadvertently in line with the system’s interest. Especially, when the malicious and benign objectives are not completely opposite of each other, as in a zero-sum game framework, we have sought to restrain the adversary to take actions, or attack the system, carrying out only the aligned part of the objectives as much as possible without meeting the goals of the misaligned part. To this end, we have adopted the solution concept of game theoretical hierarchical equilibrium for robust formulation against the possibility that advanced adversaries can learn the defense policy in the course of time once it has been widely deployed. We have shown that the problem faced by the defender can be written as a linear function of the covariance of the posterior estimate of the underlying state. For arbitrary distributions over the underlying state, we have formulated a necessary condition on the covariance of the posterior estimate. Then, for Gaussian state, we have shown the sufficiency of that condition since for any given symmetric matrix satisfying the necessary condition, there exists a “linear plus noise” signaling rule yielding that covariance of the posterior estimate. Based on that, we have formulated an SDP problem over the space of symmetric matrices equivalent to the problem faced by the defender over the space of signaling rules. We have first focused on the communication setting. This equivalence result has implied the optimality of linear signaling rules within the general class of stochastic kernels. We have provided the optimal signaling rule for single-stage settings analytically and provided an algorithm to compute the optimal signaling rules for dynamic settings numerically. Then, we have extended the results to control settings, where the adversary has a long-term control objective, by transforming the problem into a communication setting by

316

M. O. Sayin and T. Ba¸sar

restricting the space of signaling rules to linear policies plus a random term. We have also addressed the scenarios where the objective of the adversary is not known and the defender can have partial or noisy measurements of the state. Some future directions of research include formulation of the deception-asdefense framework for • robust control of systems, • communication or control systems with quadratic objectives over infinite horizon, • networked control systems, where there are multiple informed and uninformed agents, • scenarios where the uninformed adversary can have side-information, • applications in sensor selection. Acknowledgements This research was supported by the U.S. Office of Naval Research (ONR) MURI grant N00014-16-1-2710.

References 1. Akyol, E., Langbort, C., Ba¸sar, T.: Information-theoretic approach to strategic communication as a hierarchical game. Proc. IEEE 105(2), 205–218 (2017) 2. Ba¸sar, T., Olsder, G.J.: Dynamic Noncooperative Game Theory. Society for Industrial and Applied Mathematics (SIAM) Series in Classics in Applied Mathematics (1999) 3. Battaglini, M.: Multiple referrals and multidimensional cheap talk. Econometrica 70(4), 1379– 1401 (2002) 4. Blackwell, D.: Memoryless strategies in finite-stage dynamic programming. Ann. Math. Stat. 35, 863–865 (1963) 5. Blackwell, D., Ryll-Nardzewski, C.: Non-existence of everywhere proper conditional distributions. Ann. Math. Stat. 34, 223–225 (1962) 6. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004) 7. Carroll, T.E., Grosu, D.: A game theoretic investigation of deception in network security. Secur. Commun. Nets 4(10) (2011) 8. Clark, A., Zhu, Q., Poovendran, R., Ba¸sar, T.: Deceptive routing in relay networks. In: Grossklags, J., Warland, J., (eds) Proceedings of International Conference on Decision and Game Theory for Security on Lecture Notes in Computer Science. Springer, Berlin (2012) 9. Crawford, V., Sobel, J.: Strategic information transmission. Econometrica 50(6), 1431–1451 (1982) 10. Farokhi, F., Teixeira, A., Langbort, C.: Estimation with strategic sensors. IEEE Trans. Autom. Control 62(2), 724–739 (2017) 11. Farrell, J., Gibbons, R.: Cheap talk with two audiences. Am. Econ. Rev. 79, 1214–1223 (1986) 12. Farrell, J., Rabin, M.: Cheap talk. J. Econ. Pers 10(3), 103–118 (1996) 13. Gilligan, T.W., Krehbiel, K.: Collective decision-making and standing committees: An informational rational for restrictive amendments procedures. J. Law, Econ. Organ. 3, 287–335 (1989) 14. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985) 15. Howe, D.G., Nissenbaum, H.: TrackMeNot: resisting surveillance in web search. In: Kerr, I., Lucock, C., Steeves, V. (eds.) On the Identity Trail: Privacy, Anonymity and Identity in a Networked Society. Oxford University Press, Oxford (2009) 16. Kamenica, E., Gentzkow, M.: Bayesian persuasion. Am. Econ. Rev. 101, 25090–2615 (2011)

13 Deception-as-Defense Framework for Cyber-Physical Systems

317

17. Krishna, V., Morgan, J.: A model of expertise. Quart. J. Econ. 116, 747–775 (2000) 18. Morris, S.: Political correctness. J. Polit. Econ. 109, 231–265 (2001) 19. Pawlick, J., Colbert, E., Zhu, Q.: A game-theoretic taxonomy and survey of defensive deception for cybersecurity and privacy (2017). arXiv:171205441 20. Sarıta¸s, S., Yüksel, S., Gezici, S.: Quadratic multi-dimensional signaling games and affine equilibria. IEEE Trans. Autom. Control 62(2), 605–619 (2017) 21. Sayin, M.O., Ba¸sar, T.: Secure sensor design for cyber-physical systems against advanced persistent threats. In: Rass, S., An, B., Kiekintveld, C., Fang, F., Schauder, S., (eds) Proceedings of International Conference on Decision and Game Theory for Security on Lecture Notes in Computer Science, vol. 10575, pp. 91–111. Springer, Vienna (2017) 22. Sayin, M.O., Ba¸sar, T.: Deceptive multi-dimensional information disclosure over a Gaussian channel. In: Proceedings of the American Control Conference (ACC), pp. 6545–6552 (2018) 23. Sayin, M.O., Ba¸sar, T.: Dynamic information disclosure for deception. In: Proceedings of the 57th IEEE Conference on Decision and Control (CDC) , pp. 1110–1117 (2018) 24. Sayin, M.O., Ba¸sar, T.: Robust sensor design against multiple attackers with misaligned control objectives (2019). arXiv:190110618 25. Sayin, M.O., Akyol, E., Ba¸sar, T.: Hierarchical multi-stage Gaussian signaling games in noncooperative communication and control systems. Automatica 107, 9–20 (2019) 26. Spitzner, L.: Honeypots: Tracking Hackers. Addison-Wesley Professional (2002) 27. Sunzi, Wee, C.H.: Sun Zi Art of War: An Illustrated Translation with Asian Perspectives and Insights. Pearson Prentice Hall, Upper Saddle River (2003) 28. Tamura, W.: A theory of multidimensional information disclosure. Working paper, available at SSRN 1987877 (2014) 29. Wolkowicz, H., Saigal, R., Vandenberghe, L.: Handbook of Semidefinite Programming. Springer Science+Business, Berlin (2000) 30. Zhu, Q., Clark, A., Poovendran, R., Ba¸sar, T.: Deceptive routing games. In: Proceedings of IEEE Conference on Decision and Control, pp. 2704–2711 (2012)

Chapter 14

Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems Carlos Barreto, Galina Schwartz, and Alvaro A. Cardenas

Abstract In this chapter, we define cyber-risks, summarize their properties, and discuss the tools of cyber-risk management. We provide comparative analysis of cyber-risks and other (natural disasters, terrorism, etc.) risks. Importantly, we divide networked systems into two domains: traditional Information Technology Systems (ITSs) and Cyber-Physical Systems (CPSs), and compare and contrast their cyberrisks. We demonstrate that these domains have distinct cyber-risk features and bottlenecks of risk management. We suggest that this dichotomy is a useful tool for cyber-risk analysis and management. In the companion chapter on Cyber-insurance, we apply this classification to simplify the exposition of Cyber-insurance for CPS.

14.1 Introduction While investing in cyber-security protections is challenging for everyone, there is a difference between operating conventional Information Technology Systems (ITS) and Cyber-Physical Systems (CPS) such as Industrial Control Systems (ICS). Companies with classical ITS (e.g., web-presence or handling financial transactions) are constantly targeted by profit-motivated criminal groups. For the ITSs, attacks are so common, so that companies constantly upgrade and improve security of their systems to minimize losses. Also, data protection laws enacted in most countries require companies to disclose breaches if they pertain to user personal data. Thus,

C. Barreto (B) The University of Texas at Dallas, Richardson, TX, USA e-mail: [email protected] G. Schwartz Cyber Blocks Inc., Detroit, MI, USA e-mail: [email protected] A. A. Cardenas University of California, Santa Cruz, CA, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_14

319

320

C. Barreto et al.

the public tends to know about ITS security breaches. Clearly, disclosure laws factor in cyber-risk management, especially for ITS companies. In contrast, CPS have traditionally relied on security by obscurity because (i) few people understand the architecture/operations of CPS; (ii) there used to be a separation of corporate ITS from operations of the systems controlling the physical world. That is, CPS used to operate in isolated environments with trusted communications: only authorized personnel had access to the system. Then, in the past decade, the need to incorporate operational data into business processes led to the integration of corporate and CPS networks. This increased CPS vulnerability posture. Industries in the CPS domain have rarely experienced attacks sabotaging their physical process. Indeed, it is hard for criminals to monetize CPS attacks. Also, attacks on CPS are largely unreported, and meager data precludes actuaries from making accurate risk estimates. Still, attackers have found various ways to affect CPS, such as the power grid, auto vehicles, and medical devices [1–6]. This makes it urgent to improve CPS risk assessment models. Such models will facilitate risk management via better targeting of security investments (risk reduction), and allow to advance cyber-insurance ecosystem (risk transfer). This work is motivated by a growing interest in cyber-insurance. The field is rapidly expanding, but even the usage of basic terms remains inconsistent, especially in CPS domain. We hope to harmonize and standardize the terminology of risks and insurance, and lower the learning barriers of entering into the field. Further, this chapter is organized as follows. In Sect. 14.2, we divide cyber-risks into two distinct, albeit overlapping domains: ITS and CPS risks. In Sect. 14.3, we address cyber-risks features and compare with other risks, such as natural disasters and financial risks. Section 14.4 focuses on CPS risks. Section 14.5 provides a short introduction to risk managements. Section 14.6 presents risk management tools, Sect. 14.7 addresses problems of risk evaluation, and Sect. 14.8 concludes.

14.2 Cyber-Attacks Against ITS and CPS In this section, we illustrate some cyber-risk properties using real incidents. We separate CPS and ITS risks, because they have different risk characteristics. Although the adversaries use similar tools to compromise both systems, the attackers pursue distinct goals and strategies. Profit-driven adversaries target mainly ITS. Only ideology-driven adversaries (e.g., nation states) have both incentives and capabilities (resources and expertise) to target CPS and disrupt physical processes. The properties of CPSs and ITSs risks are summarized in Table 14.1.

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems Table 14.1 CPS versus ITS risks CPS Threats Vulnerabilities

Damages

Risk estimates

Ideological-driven (nation states, terrorists) Poor code quality Poor patching management Poor network design/configuration Process safety and reliability Equipment damage Service interruption Very low precision due to • Extremely rare major attacks • Nodata about minor attacks • Rare events, i.e., low frequency and high impact • Must account for fat tails

321

ITS Profit-driven (cyber-criminals) Malware infection Phishing/social engineering Data losses Business interruptions Loss of reputation Low precision due to • Rapid tech. changes data is obsoleted fast • Rare events may occur Higher precision than for CPS due to • Frequent well-publicized attacks data is available

14.2.1 Attacks Against ITS Adversaries can profit extorting companies. For example, [7] reports that the owner of a company providing DDoS protection also co-authored the Mirai malware to launch DDoS attacks. This company thrives by manufacturing both threats and protections. Another extortion modality uses ransomware, which encrypts computer file systems and then demands a payment to restore the data. Such malware often attacks computers indiscriminately. For example, in 2017, the NotPetya ransomware selfpropagated and spread rapidly in hospitals, airports, and other companies in different sectors, such as energy (electricity and oil) and transportation (Maersk, FedEx), causing losses estimated at $10 bln [8]. Cyber-attacks create intangible losses, such as devaluation of brand names, as the experiences of Yahoo and Uber attest. In 2016, Verizon was negotiating the acquisition of Yahoo’s core Internet business, but the trade price dropped 350 million after Yahoo disclosed a data breach. The incident occurred in 2013 and compromised nearly 3 bln user accounts (leaking passwords and personal information) [9]. Also, it is believed that Uber’s valuation (2016) dropped from 68 bln to 48 bln due to a data breach that exposed personal information of 50 mln users and 7 mln drivers [10]. Some adversaries have targeted suppliers to access well-protected systems. In 2013, attackers accessed Target network [11] with credentials stolen from a subcontractor. The adversaries compromised credit and debit card information of about 40 mln consumers and personally identifiable information (PII) of another 70 mln. Supply chain attacks can affect multiple organizations. In 2017, attackers infiltrated the CCleaner’s network and replaced the original software with a malicious one. Presumably, the adversaries tried to infiltrate organizations that used CCleaner, such as Google, Microsoft, Intel, Samsung, and Cisco [12]. In 2019, Kaspersky reported the

322

C. Barreto et al.

operation ShadowHammer, which implemented a supply-chain attack targeting the ASUS Live Update Utility. The adversaries tampered ASUS software by injecting their malicious code, which was hosted on and distributed from the official ASUS update servers. The attackers didn’t compromise users indiscriminately: the malicious code attacked only devices with specific MAC addresses [13]. Security experts believe that some attack campaigns that target defense organizations are organized by state-sponsored groups. For example, in 2011, RSA Security suffered a cyber-attack resulting in information theft of its authentication products [14]. It is believed that this attack constituted a strategic step to target RSA Security’s customers (military, financial, and defense contractors). In particular, Lockheed Martin discovered an intruder that used legitimate credentials (including one of the RSA Security’s authentication tokens) [15]. In some cases, the data breaches occur due to accidents rather than attacks. For example, in 2019, First American Financial Corp. leaked millions of documents related to mortgage deals on its website [16]. Similarly, Inmediata, a health care company, allowed search engines to index internal pages that contained patients data [17]. Likewise, Instagram exposed private information of users when a marketing company left the data unprotected [18].

14.2.2 Attacks Against CPS In the last few years, we have witnessed the development of sophisticated attacks that target CPS [19]. For example, Stuxnet, the first known computer worm designed to harm a physical process, sabotaged centrifuges at the Natanz uranium enrichment plant in Iran. The attack deceived the operators making them believe that the process was operating normally [6] while at the same time sabotaging the enrichment of Uranium. Similarly, four years later, another cyber-attack disrupted the control systems of a steel mill in Germany [20]. This attack prevented the proper shut down of a blast furnace, causing massive damage to the system. In 2015, the first confirmed cyber-attack on a power grid was launched on the electricity system of Ukraine [21]. The attackers used the operator’s workstations to open breakers and interrupt the power flow of around 60 substations. They also overwrote the firmware of control devices, leaving them unresponsive to remote commands. On top of that the attack was designed to delay the report of incidents and to impede remote actions of operators [22]. Ukraine suffered a second and more sophisticated attack in 2016, taking a page from Stuxnet, the attackers created malware specifically designed to operate industrial equipment. The malware automatically located equipment, sent commands to switch the power flow on and off, and destroyed all files on infected systems to cover its tracks [1, 23]. While the result of the 2016 was an outage that was corrected within hours, a more damaging attack could target vital equipment for the operation of the power grid, such as generators or large-scale transformers, causing long-term damages.

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems

323

In 2017, another sophisticated malware targeted an industrial plant. The malware, called Triton, targeted safety controllers of industrial systems [24]. Although both the identity and purpose of the attack remain publicly unknown, it is clear that an attack on safety controllers can damage equipment, and lead to life-threatening accidents, including environmental disasters. Moreover, the sophistication of the malware (which required substantial resources, skills, and time to develop) suggests that the author was a nation-state. Other attacks on CPS have focused on the nuclear energy sector, but no physical damage has been reported. For example, a group of hackers targeted nuclear facilities from the Wolf Creek Nuclear Corporation in the U.S. [25]. Although the attackers didn’t cause harm in the system (they seemed interested in collecting intelligence about nuclear facilities), damage in the system could cause explosions, fires, and spills of dangerous material. Another example was showcased in 2018, where researchers at McAfee detected an espionage campaign (named Sharpshooter) targeting defense contractors and the nuclear energy sector [26]. Experts believe that nation-states undertake most of the attacks against critical CPS infrastructures. Reference [27] found 97 campaigns against critical infrastructures (energy, transportation, and aerospace); but most of these campaigns have no large losses. Other critical infrastructures, such as water [28], transportation, and healthcare also have vulnerabilities that can cause devastating damages. The attack against the Maroochy Water Services in 2000 was a precursor of cyber-attacks to CPS [29, 30]. In that attack, an insider caused spills of raw sewage into parks and rivers. Another potential targets are autonomous vehicles and transportation infrastructure. Researchers have shown vulnerabilities in traffic signals, which can generate congestion and traffic advantage for the attacker [31, 32]. Also, autonomous vehicles are at risk of cyber-attacks [33, 34]. Medical systems can also suffer attacks which threaten patient health [35]. For example, the ransomware WannaCry caused cancellation of appointments and surgeries, and reduced the capacity of emergency rooms [36]. In addition, attacks against medical devices can harm their users [4, 5, 37]. Concretely, some researchers have shown that vulnerabilities in insulin pumps can trigger lethal overdoses [38, 39]. Attacks against critical infrastructures often exploit their particular vulnerabilities; however, conventional attacks (not designed for CPS) can damage accidentally critical infrastructures. For example, in 2003, the Slammer worm disabled the safety monitoring system of a nuclear plant in Ohio Davis–Besse. The worm accessed the plant network through the corporate network and infected a server (even though a patch for the vulnerability exploited by Slammer was released 6 months before the incident) [40]. Similarly, in 2016, operators suspended operations in Gundremmingen, a nuclear plant from Germany, after they found a malware in the ITS network. The infection occurred presumably by accident through a USB thumb drive, since the network wasn’t connected to the Internet [41]. In 2018, in a year after the initial epidemic, WannaCry attacked several ICS: the Taiwanese Semiconductor company (TSMC) suffered an infection after installing compromised software in the industrial company’s network without any security

324

C. Barreto et al.

scan. Then, the production stopped for 3 days. In 2019, the aluminium producer Norsk Hydro suffered the LockerGoga ransomware attack. This attack affected 22,000 computers and disrupted business and industrial processes, forcing the system into manual operation. Experts believe that the infection spread due to improper network segmentation. The incident led to $35–$41 mln in losses [42].

14.3 Cyber-Risks: Byproduct of the IT Revolution The concept of risk is used to address decision-making in the uncertain world. Humans face various risks: from becoming sick with the flu or diagnosed with lifedebilitating cancer, to being robbed at a gunpoint, or perishing during a natural (or human-induced) disaster. The National Institute of Standards and Technology (NIST) defines risk as “... a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization” [43]. In other words, accidents (undesired events) occur due to threats (e.g., natural disasters) that exploit vulnerabilities or weaknesses and cause losses (e.g., damage to assets). We can understand cyber-risks as a new class of risks brought by the information technology revolution. Following [44], we define a cyber-risk as any malfunction of a cyber-system that causes harm (losses). In the next section, we describe these risks.

14.3.1 Vulnerabilities While security problems tend to result in higher costs for industries than random faults, the prevention of security breaches is also more expensive, not only for the technical issues involved, but because of the strategic nature of attackers who adapt their strategies to the current security protections in place. The overall costs of producing secure software and securing corporate networks (in particular, from CPS systems) has left the CPS industry open to attacks. In order to lock-in the users, manufacturers strive to ship products faster than competitors. This leads to the commercialization of hasty (and vulnerable) products or systems; this is particularly evident in several Internet of Things (IoT) devices. Our taxonomy of cyber-risks is divided by the type of threat and the catalysts that facilitate these risks, see Fig. 14.1.

14.3.2 Threats Cyber-risks occur due to fortuitous events (such as hardware or software glitches) or planned actions of a malicious agent. We classify the attackers according to their motivation as profit-driven and ideology-driven. Indeed, the majority of the attackers

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems

325

Industrial espionage Profit-driven

Extortion Fraud War / Terrorism

Threats Ideology-driven

Employee retaliation Hacktivism / Sabotage Low risk of punishment

Cyber-risks Cheap to attack

Cheap / free tools Catalysts Software vulnerabilities Costly to defend Hardware vulnerabilities

Fig. 14.1 Cyber-risks originate from both accidents and intentional actions

have financial reasons for engaging in criminal cyber-activities. Such profit-driven attackers steal information [45], perform industrial espionage, or extort individuals in several ways; for example, encrypting their data using a ransomware [46] or disabling their systems/services through distributed denial of service (DDoS) attacks [47]. Other attackers do not expect a financial remuneration from their actions. Ideology-driven attackers pursue different goals, which include personal vendettas (e.g., employee retaliation), hacktivism, sabotage, terrorism, and war [48]. The survey in [27] analyzed nearly 490 cyber-campaigns and identified 66 attack groups (49% state-sponsored, 26% hacktivist, 20% cyber-criminals, 5% terrorists). State-sponsored groups target government institutions, or industries affiliated with governments (e.g., defense contractors). Hacktivists often use DDoS attacks and leak sensitive information to denounce activities they deem intolerable. Cyber-criminals target services that are easy to monetize, e.g., e-commerce, credit card infrastructure, and financial system. Cyber-terrorists focus on recruiting sympathizers and destroying the data or the infrastructure of their rivals. The survey also found that the campaigns reflect geopolitical or economic tensions; also, countries with the highest Gross Domestic Product (GDP) are some of the top targets. Cyber-crime has thrived in recent years due to its high reward and low costs, i.e., attackers have a low risk of being captured and punished. Indeed, attackers tend to remain anonymous, and/or reside in countries where prosecution is problematic [49]. Moreover, several cyber-attackers do not need advanced training, in part because they can find information and tools in hacker forums [45, 50]. As a consequence, the number of information breaches has increased, as well as the number of records compromised in each breach [51].

326

C. Barreto et al.

14.3.3 Impact Security breaches can lead to identity theft, such as credit/debit card frauds and unauthorized tax claims. In addition, firms incur in both well-known tangible costs and uncertain long-term intangible costs [52]. Tangible costs include the expenses from technical investigations, notification of users, and identity recovery, among others. Nonetheless, one of the biggest concerns for firms come from intangible costs, such as revenue losses, long-term cost of losing customers, and devaluation of the firm’s brand name [53, 54]. According to [55], more than 90% of the total costs of cyber-attacks come from intangible factors. In contrast to ITS, cyber-incidents on CPS affect both information and physical systems, therefore attacks to CPS can lead not only to information leaks, but to physical damages to equipment, nature, and even humans. Some of the worst attack scenarios include widespread power blackouts, explosions in oil refineries, and the contamination of drinking water, among others. To make things worse, when the attacks damage physical elements, restoring these systems can take several months. For example, the production, delivery, and replacement of large transformers used in the transmission system of a power grid takes around two years [3]. These devices are not kept in backup inventories because their cost ranges from $3 to $10 mln and they have unique specifications, i.e., they are not interchangeable. The Cambridge centre for risk studied the potential effects of terrorist attacks on power systems, concluding that such attacks would cause total losses from $243 bn to $1 trn in the US [56] and $15 to $110 bn in the UK1 [57]. The estimated cost of these attacks is comparable with some of the most devastating natural disasters to date, such as Hurricane Katrina, which caused losses of $172 bn [58].

14.3.4 Comparing Cyber-Risks with Other Risks Cyber-risks differ from other risks in several aspects (Table 14.2). Traditional risks usually come from random events, such as natural disasters, and car accidents. Here, the threat source doesn’t choose a target purposefully. On the contrary, terror or cyber-attacks come from intelligent adversaries who choose their actions to achieve a particular goal strategically (e.g., steal information or damage critical components). Unlike other risks, terror and cyber-risks evolve, that is, the adversaries adapt their strategies to circumvent protection efforts. Then, data about past events is insufficient to estimate future incidents. Hence, it is difficult to quantify such risks. Cyber-attacks can damage both digital and physical assets, while other risks affect mainly physical assets. In addition, cyber-attacks on critical infrastructure have the potential to affect large geographical (and geopolitical) areas, causing damage comparable to natural disasters and terrorist attacks. Cyber-adversaries have lower marginal costs than terrorists, i.e., their tools are often inexpensive and they rarely receive punishments. 1 The

losses include the potential impact on other industries affected by blackouts.

Management

Damage scope

Threat motive

Fire

Accidental negligence Geographical area Geographical area buildings Reactive (Gov. Physical bailout) protections Insurance

Accidental

Natural disasters

Table 14.2 Properties of different types of risks

Financial instruments (options/futures)

Accidental financial Market

Capital markets

Mandatory insurance

Accidental negligence Local area

Car incidents

Gov. backed Insurance

Geopolitical area

Ideological

Terrorism

CPS

Digital and physical assets Cyber-protections Phys. protections insurance insurance (cyber-coverage)

Digital assets

Accidental financial ideological

Cyber-risks ITS

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems 327

328

C. Barreto et al.

Cyber-risks have both frequent and infrequent events, unlike other threats, such as terrorism. Frequent events have low cost (e.g., DDoS attacks and data breaches), while infrequent (or rare) events can have a large impact (e.g., a blackout in the power grid). Difficulties estimating risks limit the use of risk management strategies, such as insurance. For example, home owners that perceive low risks (e.g., low probability of occurrence) usually do not manage their risk of natural disasters, such as earthquakes or hurricanes. As a consequence, they remain exposed to extreme natural events and the government may have to provide financial relief. For this reason, insurance is mandatory for some risks (e.g., car accidents) or supported by the government (e.g., terrorism). Cyber-risks have limited insurance support due to the scarce data about incidents, which makes it difficult to estimate (quantify) risks.

14.4 Cyber-Risks on CPS CPS prioritize their operational requirements (reliability and safety) over security objectives (confidentiality), because malfunctions can cause irremediable physical damage to components, environment, and people. Also, existing CPS risk models focus on random events, not on targeted attacks. CPS providers focus design and operations on guaranteeing availability and integrity over confidentiality of information (in this order, according to the standard ISA/IEC 62443). This has several consequences (see a summary of vulnerabilities in Table 14.3) [59–63]: 1. CPS hardware and software are not updated as frequently as standard ITS components. Partly, because CPS updates must be thoroughly tested to be certified. Also, taking down the system for an update is not practical for CPS. CPS administrators cannot always fix the vulnerabilities, because older program versions may be unsupported by the vendors (e.g., Windows XP for embedded systems reached the end of support in 2019). In addition, ICS have less flexibility to apply patches, because they tend to run continuously. 2. CPS had focused on physical security (facility access control and monitoring), not on information security. Hence, field devices (PLC, RTU) have limited security (e.g., short passwords/no encryption). Secure system development was not a priority; hence, CPS software may have bugs that allow injection attacks. 3. ITS are robust to assessments, but audits can induce failures in CPS equipment (old PLCs and computers). 4. Incident response and forensics in IT are common, but in CPS they are focused on restoring the system, rather than monitoring continuously for intrusions. 5. CPS depend greatly on data availability, accuracy, and integrity; thus, CPS are more vulnerable to SQL injection and DDoS attacks than general IT databases.

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems Table 14.3 Examples of CPS Weakness Poor code quality

Description

The software design does not follow secure development (e.g., use safe functions, validate inputs validations) Cryptography Unencrypted sensitive data (user IDs, passwords, control commands) or use weak cryptographic algorithms Access controls Excessive sharing of sensitive information, no authentication for critical functions, excessive program execution privileges Insufficient verification of data Missing data integrity checks, authenticity e.g., download software updates without integrity checks Credentials management Insecure storage or transport of credentials (clear-text passwords between controller and communication software) Weak passwords Applications unprotected or have default/weak passwords. Legacy devices with fixed (hard-coded) credentials Poor patch management ICS have unpatched or old versions of programs (databases, web servers, OS) Bad security configuration Security not used or enabled, e.g., weak or missing firewalls Lack of port security Physical access to an unsecured port behind the firewall can allow network access Poor network segmentation ICS connected to corporate networks without firewall or IDS and direct connection to the Internet Lack of audits or assessments Lack or poor logging practices of events to identify security issues

329

Consequences The program can be crashed or taken over by the attacker

Attacker obtains unauthorized data or authentication credentials Attacker gets access to critical systems

Adversaries can send false information to controllers, install malicious software, modify PLC’s code A leak of credentials

Compromise of devices

System vulnerable to known vulnerabilities Degrade the efficacy of the defenses External devices can open remote connections

Allow remote connections to critical devices

Without constant network monitoring, hard to detect unauthorized usage/stop attacks

330

C. Barreto et al.

14.4.1 Obstacles to Securing CPSs Critical infrastructures are known to be insecure [60]. The Ponemon institute survey of critical infrastructure security found that most companies had a data breach in the last 12 months [64]. However, few infrastructure companies view cyber-risk protection as an important objective. For example, in some cases (16%), the management is aware of the vulnerabilities of the ICS/SCADA systems and most of the respondents confer that their company is unlikely to update legacy devices in a near future. Below we provide some reasons for the weak cyber-security of critical infrastructures. First, cyber-risks are difficult to estimate, and to make things worse, they evolve in time. Infrastructure companies use standard risk management approaches, and without reasonable risk estimates, they may disregard or accept their cyber-risks. In some cases, managers believe that the cost of protecting the system over-weights the consequences of (seemingly) unlikely events [65]. Second, CPS, unlike ITS, operate continuously and have little flexibility to incorporate updates timely (e.g., days or weeks), because the patches must be tested by the vendor and the asset owner to prevent crashes. For example, some SCADA systems couldn’t patch vulnerabilities used by the Blaster worm, because the patch modified the login configuration [66]. Moreover, some patches require system reboot, which cause service interruptions. For example, [67] finds that 50% of the devices used in ICS are patched within 60 days of the vulnerability’s disclosure. In contrast, ITS often patch vulnerabilities within 30 days of their release (software providers like Microsoft and Adobe release security patches monthly). Third, asset owners have meager incentives to protect CPS against cyber-threats. Reference [65] states that “... risk management is not a technical approach but a business’decision.2 Hence, firms and/or profit-maximizing technology providers would choose low investments in cyber-security, in part because the society bears part of cyber-losses (this situation is called a negative externality). In addition to externalities, which are similar for CPS and ITS, critical infrastructure providers are heavily regulated (comp. to ITSs), which distort incentives for security investments. Fourth, some asset owners believe that traditional insurance covers cyber-risks. The traditional insurance specializes in risks of tangible property, which often excludes damage to information. Disagreements in the interpretation of insurance policies concerning cyber-incidents have led to several legal disputes. See [51, 69] for examples of lawsuits in which the court sided with insurers who refuse to cover losses resulting from information breaches. Despite the ICS vulnerabilities, and numerous skillful and motivated attackers, only two groups have created infrastructure failures: the Equation group, author of the Stuxnet malware, and the Sandworm team, responsible for Ukrainian blackouts [70]. Attacking CPS is challenging because

2 For

example, managers under pressure to meet earnings benchmarks are likely to reduce safety, see [68].

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems

331

• Attacks against ICS require large expert teams and resources (e.g., experts estimate that attacks against Ukraine took months of planning by dozens of people). • CPS, like power grids, are resilient to failures [71]; thus, attack impact is limited (the attacks against Ukraine lasted 6 h, far from the worst-case scenarios). • Adversaries need system expertise to prepare large impact attacks

14.5 Facing the Risks Uncertainty is present in nearly all environments. Agents (such as individuals, corporations, government servants) must make decisions under uncertainty, i.e., before the outcome is realized. Consider an individual making a decision whether to install an antivirus. Antivirus protects one’s computer from malware (benefit), but even a free antivirus reduces system performance (cost). Antivirus should be installed when the benefit exceeds the cost. This is an example of cost–benefit analysis. When expected benefit is high (i.e., high chance of malware exposure or high expected loss from infection), antivirus would be installed despite the cost (of reduced performance). Risk management consists of three phases (see Fig. 14.2): (1) determine risks (risk assessment), (2) implement mechanisms to address the risks (risk treatment), and (3) verify that the risk is within an acceptable level, which we here call (evaluation). Risk assessment involves risk identification (e.g., identify threats and vulnerabilities) and risk analysis, which requires measurements of the risk (e.g., calculate the expected value of losses). Next, risk treatment follows. It consists of the selecting tools to handle the risks, i.e., to reduce them to an acceptable level, or even fully eliminate them. In many situations, the latter is infeasible or prohibitively costly. [43, 72]. According to NIST [43], risk treatment should “address the greatest risk and strive for sufficient risk mitigation at lowest cost, with minimal impact on other mission capabilities.” The last step of the risk management process consists in evaluating whether the risks remain within the tolerated levels.

Fig. 14.2 Risk management process

Identify risks Risk assessment

Security Reliability Probability of loss

Evaluate risks

Amount of loss

Acceptance Risk treatment

Reduction Transfer

Evaluation

Avoidance Mitigation Cyber-insurance

332

C. Barreto et al.

14.5.1 Risk Assessment/Metrics The risk reflects the (quantifiable) randomness of an outcome; however, risk measures differ with the field of study. For example, insurers are concerned primary with losses, while financial investors are interested in both losses and gains. Unless otherwise stated, we focus on the insurer viewpoint. The quantification of risk requires accounting for the impact of future events. In many applications, it is assumed that the past is a good proxy of the future. That is, the occurrence of an event in the future will follow the same pattern as in the past [73]. Then, the impact of future events can be predicted by a probability distribution of losses that fit past observations, e.g., the likelihood that an event with certain impact occurs. Thus, we model a threat with a random variable X , which defines the losses that occur with some probability distribution f . Below we discuss some risk metrics and how they can underestimate the real risk. Figure 14.3 illustrates their differences for a random variable with heavy tail.

14.5.1.1

Expected Value and Extreme Events

We can measure the risk estimating the average impact of future events, that is, calculating the expected losses E[X ]. This quantification of risk has the following properties: (i) the predictions are more accurate on the aggregate, not individual, level; (ii) as with other statistical methods, it relies heavily on historical data; and (iii) expected value might underestimate the risk of extreme events (events with low probability and catastrophic consequences) due to both scarce data and low weight assigned by this risk measure [74]. Failure to account for extreme events can be catastrophic. For example, the 2011 accident in the Fukushima Daiichi nuclear reactor occurred because the cooling system was designed for a max wave height of 5.7 m, but failed due to a 14 m tsunami. Such tsunami occurred more than 1000 years ago; however, that information was not used in cooling system design [75].

14.5.1.2

Value at Risk (VaR)

VaR is a metric used extensively to estimate the risk of portfolios in financial markets [76]. Informally, VaR measures the maximum possible loss (during a certain time period) with confidence α; VaR excludes the tail of the distribution (i.e., large losses occurring with probability 1 − α). Let VaR with parameter α be denoted V a Rα : V a Rα (X ) = inf {x | P[X ≤ x] > α} .

(14.1)

The accuracy of VaR depends on assumptions about markets, and may fail in practice. First, calculations of VaR frequently assume a normal distribution of events, discarding so-called extreme events, i.e., rare events with high impact. Second, VaR

14 Cyber-Risk: Cyber-Physical Systems Versus Information Technology Systems

333

estimates correlations using past. But correlations are time-dependent, e.g., in the markets, the correlations are higher in a crisis [76]. Finally, individuals can use VaR to estimate risks and make decisions, which ultimately changes the risk distribution. Thus, VaR may result in inconsistent risk estimates. Some authors argue that extensive use of VaR led to 2007–2008 financial crisis [77, 78]. In 2004, regulators allowed Wall Street brokers/dealers to control their own risk, via imposing capital requirements based on VaR. Since capital requirements increase with the VaR, Wall Street brokers–dealers had incentives to trade the assets with good recent behavior (low VaR), regardless of their true risk. Traders accumulated highrisk assets, creating a bubble which led to 2007–2008 crisis [76, 77]. In a perfect world, individual agents have minor effect on the market. With a large number of agents, the aggregate effect is random. Regulations have a coordinating effect, because traders align to reduce a single metric (VaR), instead of reducing the actual risk. Hence, traders accumulate high risk. When the crisis started, they rushed to acquire secure assets which reduced liquidity. To sum up, an imperfect risk metric and coordinating regulatory effect can undermine risk management [76, 77].

14.5.1.3

Tail Value at Risk

The measure Tail conditional expectation (TCE) or tail value at risk (TailVaR): T ailV a Rα (X ) = E [X | X ≥ V a Rα (X )] =

1 1−α





x P[X = x] d x

(14.2)

V a Rα

is the average of the worst losses occurring with prob. 1 − α. From [79], TCE behaves better than VaR.3

14.5.1.4

Risk Metrics: A Comparison

On Fig. 14.3, we depict different risk metrics introduced above of a random variable X with a Fréchet distribution, which has a heavy tail. The expected value E[X ] estimates the average losses; however, although the distribution resembles a bell curve, it has a tail; hence, large losses away from the mean occur more often than in a Gaussian distribution. The VaR, on the other hand, estimates the typical loss in the worst case. For example, V a R0.99 measures the worst loss with a confidence of 99%; however, 1% of the losses can exceed this value. VaR has been used to estimate, with high confidence, the capital necessary to ensure that an entity remains solvent after accidents; however, rare events can surpass the expected losses, causing bankruptcy. T ailV a R calculates the average of the worst events, that is, the average loss that exceeds a threshold, reflecting better behavior of the tail.

3 See

[79] for details on “coherency conditions”, which VaR may violate, but TCE always satisfies.

334

C. Barreto et al.

Fig. 14.3 Three risk measures (expected value, V a Rα , and T ailV a Rα with α = 0.9) of a random variable X with a Fréchet distribution, which has a heavy tail

14.6 Risk Treatment Among the strategies to manage risk we find acceptance, avoidance, mitigation, and transference [80], which transform the random variable of losses X into X˜ . Firms can decide to accept the risk, that is, they can prepare to endure the potential losses. Hence, the risk does not change, thus, ρ( X˜ ) = ρ(X ), where ρ(·) is a measure of risk. This strategy is worthwhile if the cost for managing the risk exceeds its potential impact. The opposite to acceptance is avoidance, which consists of avoiding the activities that involve risk, at the expenses of giving up potential gains of such activities. In this case, the risk is zero, that is, ρ( X˜ ) = 0. Mitigation involves actions intended to prevent risks; for instance, self-protection (firewalls, antivirus, and encryption) and self-insurance (data backups) mitigate the risk by reducing the probability of accidents and their impact, respectively [81]. With mitigation a firm can reduce its risk (to some extent), that is, 0 < ρ( X˜ ) ≤ ρ(X ). Lastly, the risk can be (partially) transferred to another party in exchange of a payment, thus, 0 ≤ ρ( X˜ ) ≤ ρ(X ) ( X˜ denotes the risk not transferred). Indeed, insurers protect agents against losses in exchange of collecting premiums from them. Still, some high risk are not insurable, e.g., extreme earthquakes [82].


14.6.1 Diversification

This is a strategy used in finance to mitigate the risk of a portfolio (a group of financial assets). Diversification does not improve the average portfolio return, but it reduces the variance of the return (or risk⁴) if the assets are not perfectly correlated [80, 83]. Intuitively, a portfolio with diverse assets has less risk because the profits of some assets compensate for the losses of others. Diversification can mitigate specific risks (also called idiosyncratic risks), which affect only particular assets or industries, i.e., risks uncorrelated with other industries. However, diversification fails to handle systematic risks (or aggregate risks), which affect the whole market or economy. Catastrophic events, such as health epidemics or extreme natural disasters, can have systematic risk properties, i.e., they affect too many economic agents [74].
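As a quick numerical check (our own sketch, with illustrative parameters): the variance of an equally weighted portfolio of n assets with common variance σ² and pairwise correlation ρ is σ²(1/n + ρ(n − 1)/n), so idiosyncratic risk vanishes as n grows while perfectly correlated (systematic) risk does not.

```python
# Variance of an equally weighted portfolio as a function of size and correlation.
def portfolio_variance(n, sigma2=1.0, rho=0.0):
    """Variance of the average of n assets with variance sigma2 and pairwise correlation rho."""
    return sigma2 * (1.0 / n + rho * (n - 1) / n)

for rho in (0.0, 0.3, 1.0):     # uncorrelated, partially, and perfectly correlated assets
    print(rho, [round(portfolio_variance(n, rho=rho), 3) for n in (1, 2, 10, 100)])
# With rho = 0 the variance shrinks toward zero as n grows (idiosyncratic risk is
# diversified away); with rho = 1 it stays at sigma2 (systematic risk remains).
```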

14.6.2 Decisions Under Uncertainty

Consider an uncertain situation in which an individual with an initial endowment W0 receives either A or B with probability p and 1 − p, respectively. Then, the probability distribution (p, 1 − p) over the outcomes A and B represents a lottery, i.e., the risk of the individual. If the risk changes with the agent's actions (e.g., protections that reduce the risk of cyber-intrusions), we assume that the individual's decisions allow him to choose among a set of lotteries, say {(p_i, 1 − p_i)}_{i=1}^{k} with p_i ∈ [0, 1].⁵
Decision theory provides the tools to model preferences under uncertainty, e.g., when an agent (human, corporation, government servant) needs to make a decision before the outcome is realized. In general, utility functions reflect the satisfaction from possessing wealth [84]. Thus, the satisfaction from uncertain outcomes can be calculated as the expected utility of these outcomes [85]. Decision theory hypothesizes that a rational decision-maker chooses to maximize his expected utility (calculated over his total wealth) [85]. A rational agent with utility function U(·) prefers the lottery i* that maximizes his expected utility:

i* ∈ arg max_i  p_i U(W0 + A) + (1 − p_i) U(W0 + B).    (14.3)

Thus, the utility from a lottery is linear in the probabilities (but not in the preferences over outcomes). The shape of the utility function U(·) determines the agent's risk attitude. For example, some agents prefer deterministic outcomes over random ones, even with equal expected payoffs [86]. Consider two random outcomes with equal expected value:

⁴ The variance is related to the risk: high volatility increases the chance of high gains/losses.
⁵ The protections can change A and B; to simplify, we assume that only the event probabilities change.

W1 = { A with prob. p;  B with prob. (1 − p) }   and   W2 = { E[W1] with prob. 1;  0 with prob. 0 }.    (14.4)

An individual is called risk-averse if he prefers the certain outcome, that is, if E[U(W1)] < E[U(W2)], which occurs when the utility function is concave. Moreover, individuals who prefer gambles with more randomness, E[U(W1)] > E[U(W2)], are called risk-seeking (or risk-affine) and have a convex utility function. Finally, risk neutrality indicates indifference among lotteries with the same expected value, E[U(W1)] = E[U(W2)], which implies a linear utility function.
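A small numerical sketch (our illustration; the endowment and lottery values are arbitrary) of the comparison in Eq. (14.4): a concave utility prefers the certain outcome, a convex one prefers the lottery, and a linear one is indifferent.

```python
# Expected utility of a lottery versus its certainty equivalent under three utilities.
import math

W0, A, B, p = 10.0, 8.0, 2.0, 0.5             # endowment and lottery outcomes (illustrative)

def expected_utility(u, outcomes_probs):
    """E[U(W0 + outcome)] for a list of (outcome, probability) pairs."""
    return sum(q * u(W0 + w) for w, q in outcomes_probs)

lottery = [(A, p), (B, 1 - p)]                # W1
certain = [(p * A + (1 - p) * B, 1.0)]        # W2 = E[W1] with probability 1

for name, u in [("risk-averse (sqrt)", math.sqrt),
                ("risk-neutral (linear)", lambda w: w),
                ("risk-seeking (square)", lambda w: w * w)]:
    print(f"{name:22s} EU(W1)={expected_utility(u, lottery):.2f}  "
          f"EU(W2)={expected_utility(u, certain):.2f}")
```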

14.7 Evaluating Risks

Effective risk management is important, because inadequate methods can overestimate or underestimate⁶ the risk, which in turn can lead to biased decisions. The problem of evaluating interventions exists in other fields. For example, medical researchers assess how medical interventions improve patient health [87, 88]. Also, firms' decisions on products and their features are based on experimental performance measures [89, 90]. We can measure the effectiveness of interventions through either experimental studies or observational studies, which we describe below.

14.7.1 Experimental Studies

Experimental studies consist of carrying out controlled experiments to determine causal relationships between an intervention (e.g., a drug) and some feature (e.g., the health of a person). The experiments usually compare how an experimental group, formed by individuals who receive the intervention, behaves in comparison with a control group, which receives a placebo (e.g., a drug without therapeutic value). By randomly assigning the agents to each group, one can reduce the influence of unknown risk factors, such as seasonal effects, which may affect both groups. If the experiment is designed correctly, then the only difference between the groups will be caused by the intervention of interest [87]. Reference [87] describes three elements that impact the experiments:
• Choice of measuring instruments, which quantify the parameter of interest (e.g., health or death rate of patients).
• Analytic measures of outcomes, which estimate risk changes after the intervention. Some trials, however, are too small and their follow-ups too short to detect negative effects.
• Methods for extrapolation. The intervention can achieve the same results in other populations if the conditions remain the same (e.g., the characteristics of the individuals).

⁶ From [87], “trial designers have strong motive to err on the side of over-estimating rather than underestimating effectiveness—this is a contingent sociological point based on pressure to publish among academic scientists and pressure to develop profitable products among corporate scientists.”


However, from [87], “In principle almost any difference between an experimental group of subjects and a target patient may diminish the treatment response. In practice there are always such differences.” One may conduct ICS security audits to assess cyber-security. Such audits are measuring instruments; statistics about the number of vulnerabilities or their severity serve as a proxy for the security level. Experiments are the most reliable tool to assess the effectiveness of interventions, but their execution may face obstacles. For example, it might be unethical to use a placebo when a disease has high mortality. Also, management may refuse to participate in experiments due to the high costs of deploying cyber-protections. Then, observational studies can be a tool to measure the effectiveness of interventions.

14.7.2 Observational Studies

In these studies the investigator does not intervene directly; instead, he observes individuals who follow some treatment plan (e.g., based on clinical decisions) and tracks the parameters of interest. Such studies are easier and cheaper to conduct; however, in some cases they overestimate or underestimate the treatment effects. For example, observational studies can suffer from selection bias, which occurs when the study subjects are not representative of the population. In particular, the researchers may observe individuals who had already shown symptoms of the feature of interest. Hence, the conclusions of the study cannot be extrapolated to other individuals. Bias also arises when the outcome changes not only due to the intervention, but due to other factors, called confounding factors. For example, a drug treatment might create a particular medical condition, whose treatment in turn increases the risk of some outcome. A confounding variable is (see Fig. 14.4) [91]:
• a risk factor that affects the outcome;
• associated with the exposure, i.e., it unequally affects the experimental and control groups;
• not an effect of the exposure.
Confounding can be avoided through stratified analysis, that is, separating the population into groups with the same potential confounding factor, as the toy example below illustrates. In this way, the effect of the confounding variable is mitigated; however, in practice some of the confounding factors cannot be completely identified [92].
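The following toy example (made-up counts, not data from any study) illustrates the idea: the crude risk ratio suggests an effect of the exposure, while the stratum-specific ratios show that the apparent effect is entirely due to the confounder.

```python
# Crude versus stratified risk ratios with a confounder (made-up counts).
def rr(a, n1, b, n0):
    """Risk ratio between exposed (a events out of n1) and unexposed (b out of n0)."""
    return (a / n1) / (b / n0)

# (exposed events, exposed total, unexposed events, unexposed total) per stratum
strata = {"confounder absent":  (10, 100, 40, 400),
          "confounder present": (160, 400, 40, 100)}

a = sum(s[0] for s in strata.values())
n1 = sum(s[1] for s in strata.values())
b = sum(s[2] for s in strata.values())
n0 = sum(s[3] for s in strata.values())
print("crude RR:", rr(a, n1, b, n0))            # ~2.1: a spurious effect of the exposure

for name, counts in strata.items():
    print(f"{name:19s} RR: {rr(*counts)}")      # 1.0 in both strata: no real effect
```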

Fig. 14.4 Properties of a confounding variable: the confounder is associated with the exposure and is a risk factor with a causal relationship to the outcome


14.7.3 Measuring the Impact of Risk Treatment

The literature typically considers relative and absolute risk measures to estimate how the risk of an outcome changes due to a treatment [91, 92]. Let us illustrate these measures with the following example. Consider the experimental results for two groups (experimental (E) and control (C)) that can have a feature, with frequencies summarized in Table 14.4. The probability that each group has the feature is

P[Y | E] = A / (A + C)   and   P[Y | C] = B / (B + D).    (14.5)

The risk metrics for this experiment are given in Table 14.5. Studies usually report the relative measures; however, [87, 93] show that individuals overestimate the effectiveness of medical interventions when they observe only the relative measures, but make more accurate estimations when they observe the absolute measures.
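A minimal sketch (with placeholder counts A, B, C, D, not data from the chapter) that computes the measures of Table 14.5 from the 2×2 counts of Table 14.4:

```python
# Relative and absolute risk measures from a 2x2 frequency table.
A, C = 30, 70      # experimental group: with feature Y / without feature Y
B, D = 20, 80      # control group: with feature Y / without feature Y

p_e = A / (A + C)              # P[Y | E], Eq. (14.5)
p_c = B / (B + D)              # P[Y | C]

RR = p_e / p_c                 # risk ratio (relative)
RRR = (p_e - p_c) / p_c        # relative risk ratio
RD = p_e - p_c                 # risk difference (absolute)
NNT = 1 / RD                   # number needed to treat

print(f"RR={RR:.2f}, RRR={RRR:.2f}, RD={RD:.2f}, NNT={NNT:.1f}")
```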

14.7.4 Estimating Risks: Detecting and Correcting Biased Estimates

An important question is to understand when a specific risk estimation procedure gives an unbiased risk estimate, when biases are likely, and in what direction (under- or overestimating the risks).

Table 14.4 Frequency of experiments

Group              Feature Y    No feature Y    Total N
Experimental (E)   A            C               A + C
Control (C)        B            D               B + D

Table 14.5 Measures of the effectiveness of some treatment

Measure                      Definition
Relative
  Risk ratio                 RR = P[Y|E] / P[Y|C]
  Relative risk ratio        RRR = (P[Y|E] − P[Y|C]) / P[Y|C]
Absolute
  Risk difference            RD = P[Y|E] − P[Y|C]
  Number needed to treat     NNT = 1 / (P[Y|E] − P[Y|C])

Table 14.6 Corrections when common assumptions fail

Assumption          Problem                                      Correction                                      Example
Gaussian dist.      Downward biased risk if rare events          Use a dist. that accounts for fat tails         Natural disasters
                    are important
Causality models    Biased risk estimates                        Mitigate the effect of confounding variables    Assess the impact of an action
No selection bias   Biased risk estimates                        Verify that study subjects represent            Extrapolate results from a study
                                                                 the population

One can arrive at biased risk estimates when one or more standard modeling assumptions fail (see Table 14.6). For example, it is common practice to approximate the distribution of events with a Gaussian function. In practice, the Gaussian assumption is restrictive and can bias risk assessments when rare events with high losses are present: the Gaussian assumption then results in an underestimation of the risk. Many authors warn of such effects (see, for example, [94]). This problem can be resolved by using a more appropriate distribution to model the tails, as the sketch below illustrates. Another problem arises if the estimates of risk reductions are endogenous; then, the model can lead to wrong interpretations of causality. The presence of endogeneity can lead to biased (over- or underestimated) effectiveness of the proposed interventions. Endogeneity can occur when the risk model omits so-called confounding variables (risk factors that affect the outcome). The bias introduced by unknown confounding variables can be reduced through stratified analysis. Another common assumption is that events are i.i.d. But in cyber-security, events tend to be correlated, which invalidates this assumption and likely biases risk estimates.
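A short sketch (illustrative parameters only) of the first row of Table 14.6: reading a high quantile off a Gaussian fitted to heavy-tailed losses underestimates the empirical VaR of the actual data.

```python
# Gaussian fit versus empirical quantile for heavy-tailed (Pareto) losses.
import numpy as np

rng = np.random.default_rng(2)
# Pareto losses with tail index 2.5 via inverse CDF: X = (1 - U)^(-1/2.5)
losses = (1.0 - rng.uniform(size=500_000)) ** (-1.0 / 2.5)

mu, sigma = losses.mean(), losses.std()
gaussian_var = mu + 3.090 * sigma              # 99.9% quantile of the fitted normal
empirical_var = np.quantile(losses, 0.999)     # empirical VaR_0.999 of the data

print(f"Gaussian-fit VaR_0.999 = {gaussian_var:.2f}")
print(f"Empirical    VaR_0.999 = {empirical_var:.2f}")   # noticeably larger
```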

14.7.5 Combining Risk Reduction with Risk Transfer via Insurance

Here we combine everything that we have learned to illustrate how cyber-insurance would affect CPS risk management. This section is the segue between this risk-focused chapter and our companion chapter focusing on cyber-insurance. Insurers combine risk management tools and use them simultaneously. Risk transfer complements mitigation strategies by managing the so-called residual risk, i.e., the risk remaining after mitigation. A seminal paper [81] analyzes the relationship between insurance and two mitigation schemes, which these authors named "self-protection" and "self-insurance." They find that, under some assumptions, insurance and self-insurance are substitutes, while insurance and self-protection are complements if the investment in protection reduces the insurance premium, and substitutes otherwise. From [81], when investment in self-protection lowers premiums, insurance could


boost investments in self-protection and thus reduce the risk. Reference [95] demonstrates that hurricane mitigation measures can reduce losses significantly (from 61 to 31%) and lead to premium reductions. Also, CNA Hardy (a Lloyd's underwriter in Asia) offers discounts to firms that deploy security products from Waterfall Security Solutions [96]. During the 2000s, this reasoning became popular among cyber-insurance researchers, especially those with an engineering background. Numerous papers attempted to cast cyber-insurance as a tool to reduce cyber-insecurity. But after an initial euphoria, both practical data and game-theoretic models led to a shift in consensus. The cyber-insurance market has taken far longer to develop than originally expected. Nowadays, it is conventional to explain the sluggish market development by high information asymmetries, causing a so-called missing market. Despite the expansion of the cyber-insurance market, information deficiencies persist. The precision of risk estimates remains low, and coverage remains pricey and limited in practice. Cyber-insurance is unlikely to reduce the aggregate level of cyber-risk via the mechanism suggested by [81]. Indeed, both theoretically and in practice, being insured leads to riskier choices. In short, cyber-insurance is no panacea. It is now a widely held view that with cyber-insurance aggregate welfare is higher, despite lower cyber-security.

14.8 Conclusions

This chapter is an introduction to cyber-risk management, which highlights the differences between ITS and CPS risks. This creates a baseline for the discussion in our companion chapter in this volume, where we focus on cyber-insurance. It is well known that cyber-security investments are socially sub-optimal due to the attendant externalities. In regulated environments typical for infrastructures, and especially critical infrastructures, investment incentives are sub-optimal as well. Thus, incentives to manage CPS security risks are especially misaligned. These particulars of CPS cyber-risks necessitate the development of risk management strategies that differ from those for ITSs. In this chapter, we emphasize that cyber-risks predominantly originate from intelligent adversaries, who keenly adjust their strategies to bypass protections. To stay undetected, cyber-attackers mask their presence by mimicking the system behavior that occurs when the system malfunctions as a result of its unreliability, i.e., hardware faults and/or software errors. Cyber-attacks on CPSs have the potential to affect large geographical areas and severely disrupt the lives of the humans therein. Thus, CPS risks are similar to natural disasters, and the difficulties of insuring against them are well known. Due to the low precision of risk estimates, insuring CPS is even harder than insuring natural disasters. It is challenging to understand and model CPS risks. Our companion chapter discusses CPS cyber-insurance as a tool of CPS risk management.

Acknowledgements We are grateful to Anna Phuzhnikov and Vlad Zbarsky for their valuable comments on our draft. This work was partially supported by NSF CNS-1931573.


References 1. Cherepanov, A.: Win32/industroyer: a new threat for industrial control systems. White paper, ESET (2017) 2. Greenberg, A.: Hackers remotely kill a jeep on the highway–with me in it. https://www.wired. com/2015/07/hackers-remotely-kill-jeep-highway/ (2015). Accessed 24 Jan 2018 3. Koppel, T.: Lights out: a cyberattack, a nation unprepared, surviving the aftermath. Broadway Books (2016) 4. Leverett, E., Clayton, R., Anderson, R.: Standardisation and certification of the ‘internet of things’. In: the Annual Workshop on the Economics of Information Security (WEIS) (2017) 5. Newman, L.H.: Medical devices are the next security nightmare. https://www.wired.com/2017/ 03/medical-devices-next-security-nightmare/ (2015). Accessed 24 Jan 2018 6. Zetter, K.: An unprecedented look at stuxnet, the world’s first digital weapon. http://www. wired.com/2014/11/countdown-to-zero-day-stuxnet/ (2014). Accessed 29 June 2018 7. Krebs, B.: Who is anna-senpai, the mirai worm author? https://krebsonsecurity.com/2017/01/ who-is-anna-senpai-the-mirai-worm-author/ (2017). Accessed 19 May 2017 8. Greenberg, A.: The untold story of notpetya, the most devastating cyberattack in history. https:// www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/ (2018). Accessed 24 Sept 2019 9. Stempel, J., Finkle, J.: Yahoo says all three billion accounts hacked in 2013 data theft. https:// reut.rs/2yogbAQ (2017) 10. Somerville, H., Baker, L.B.: Softbank offers to buy uber shares at 30 percent discount. https:// www.reuters.com/article/us-uber-softbank-idUSKBN1DS03W (2017) 11. Krebs, B.: Target hackers broke in via hvac company. https://krebsonsecurity.com/2014/02/ target-hackers-broke-in-via-hvac-company (2014). Accessed 18 May 2017 12. Newman, L.H.: Inside the unnerving supply chain attack that corrupted ccleaner. https://www. wired.com/story/inside-the-unnerving-supply-chain-attack-that-corrupted-ccleaner/ (2018). Accessed 29 June 2018 13. Karpesky Lab: Operation shadowhammer: new supply chain attack threatens hundreds of thousands of users worldwide. https://www.kaspersky.com/about/press-releases/2019_operationshadowhammer-new-supply-chain-attack (2019). Accessed 9 April 2020 14. Richmond, R.: The rsa hack: How they did it. https://bits.blogs.nytimes.com/2011/04/02/thersa-hack-how-they-did-it/ (2011) 15. Schwartz, M.J.: Lockheed martin suffers massive cyberattack. https://www.darkreading.com/ risk-management/lockheed-martin-suffers-massive-cyberattack/d/d-id/1098013 (2011) 16. Krebs, B.: First American financial corp. leaked hundreds of millions of title insurance records. https://krebsonsecurity.com/2019/05/first-american-financial-corp-leakedhundreds-of-millions-of-title-insurance-records/ (2019) 17. Turner, S.: 2019 data breaches - the worst so far. https://www.identityforce.com/blog/2019data-breaches (2019) 18. Whittaker, Z.: Millions of instagram influencers had their contact data scraped and exposed. https://techcrunch.com/2019/05/20/instagram-influencer-celebrity-accounts-scraped/ (2019) 19. Anton, S.D., Fraunholz, D., Lipps, C., Pohl, F., Zimmermann, M., Schotten, H.D.: Two decades of scada exploitation: A brief history. In: 2017 IEEE Conference on Application, Information and Network Security (AINS), pp. 98–104. IEEE (2017) 20. Zetter, K.: A cyberattack has caused confirmed physical damage for the second time ever. http:// www.wired.com/2015/01/german-steel-mill-hack-destruction/ (2015). Accessed 16 Oct 2017 21. 
Zetter, K.: Inside the cunning, unprecedented hack of ukraine‘s power grid. http://www.wired. com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/ (2016). Accessed 16 Oct 2017 22. Cherepanov, A.: Blackenergy by the sshbeardoor: attacks against ukrainian news media and electric industry. We Live Security 3 (2016) 23. Greenberg, A.: ‘crash override’: The malware that took down a power grid. https://www.wired. com/story/crash-override-malware/ (2017). Accessed 30 Sept 2019


24. Finkle, J.: Hackers halt plant operations in watershed cyber attack. https://www.reuters.com/ article/us-cyber-infrastructure-attack/hackers-halt-plant-operations-in-watershed-cyberattack-idUSKBN1E8271 (2017). Accessed 16 April 2018 25. Perlroth, N.: Hackers Are Targeting Nuclear Facilities, Homeland Security Department and F.B.I. Say. https://www.nytimes.com/2017/07/06/technology/nuclear-plant-hack-report.html (2017). Accessed 16 Oct 2017 26. Threat landscape for industrial automation systems, h2 2018. https://ics-cert.kaspersky.com/ reports/2019/03/27/threat-landscape-for-industrial-automation-systems-h2-2018/ (2018) 27. The cyberthreat handbook. Technical Report, Verint - Thales (2019) 28. Amin, S., Litrico, X., Sastry, S., Bayen, A.M.: Cyber security of water scada systems-part i: analysis and experimentation of stealthy deception attacks. IEEE Trans. Control Syst. Technol. 21(5), 1963–1970 (2013). https://doi.org/10.1109/TCST.2012.2211873 29. Abrams, M., Weiss, J.: Malicious control system cyber security attack case study-maroochy water services, australia. Technical Report, The MITRE Corporation (2008) 30. Sayfayn, N., Madnick, S.: Cybersafety analysis of the maroochy shire sewage spill, working paper cisl# 2017-09. Cybersecurity Interdisciplinary Systems Laboratory (CISL), Sloan School of Management, Massachusetts Institute of Technology pp. 2017–09 (2017) 31. Ghena, B., Beyer, W., Hillaker, A., Pevarnek, J., Halderman, J.A.: Green lights forever: analyzing the security of traffic infrastructure. In: 8th USENIX Workshop on Offensive Technologies (WOOT 14). USENIX Association, San Diego, CA (2014) 32. Laszka, A., Potteiger, B., Vorobeychik, Y., Amin, S., Koutsoukos, X.: Vulnerability of transportation networks to traffic-signal tampering. In: Proceedings of the 7th International Conference on Cyber-Physical Systems, ICCPS ’16, pp. 16:1–16:10. IEEE Press, Piscataway, NJ, USA (2016) 33. Greenberg, A.: Cars that talk to each other are much easier to spy on. https://www.wired.com/ 2015/10/cars-that-talk-to-each-other-are-much-easier-to-spy-on/ (2015). Accessed 26 April 2018 34. Harris, M.: Researcher hacks self-driving car sensors. https://spectrum.ieee.org/cars-thatthink/transportation/self-driving/researcher-hacks-selfdriving-car-sensors (2015). Accessed 26 April 2018 35. Choi, S.J., Johnson, M.E., Lehmann, C.U.: Data breach remediation efforts and their implications for hospital quality. Health Serv. Res. 54(5), 971–980 (2019) 36. Office, N.A.: Investigation: wannacry cyber attack and the nhs (2018) 37. Security, T.: Medjack.4: Medical device hijacking. Technical Report, TrapX Security (2018) 38. Newman, L.H.: These hackers made an app that kills to prove a point. https://www.wired.com/ story/medtronic-insulin-pump-hack-app/ (2019) 39. Sadler, D.: Zero-day vulnerability prompts med company to recall wireless insulin pumps. https://cybersecuritymag.com/vulnerability-insulin-pumps/ (2019) 40. Poulsen, K.: Slammer worm crashed ohio nuke plant network (203) 41. Cimpanu, C.: Malware shuts down german nuclear power plant on chernobyl’s 30th anniversary. https://news.softpedia.com/news/on-chernobyl-s-30th-anniversary-malware-shutsdown-german-nuclear-power-plant-503429.shtml (2016) 42. Threat landscape for industrial automation systems, h1 2019. https://ics-cert.kaspersky.com/ reports/2019/09/30/threat-landscape-for-industrial-automation-systems-h1-2019/ (2019) 43. Stoneburner, G., Goguen, A.Y., Feringa, A.: Sp 800-30. risk management guide for information technology systems. 
Technical Report, National Institute of Standards and Technology, Gaithersburg, MD, United States (2002) 44. Loukas, G.: Cyber-physical attacks: a growing invisible threat. Butterworth-Heinemann (2015) 45. Karpesky Lab: Cybercrime, inc.: how profitable is the business? https://www.kaspersky.com/ blog/cybercrime-inc-how-profitable-is-the-business/15034/ (2014). Accessed 7 Oct 2016 46. Smith, B.: The need for urgent collective action to keep people safe online: Lessons from last week’s cyberattack. https://blogs.microsoft.com/on-the-issues/2017/05/14/need-urgentcollective-action-keep-people-safe-online-lessons-last-weeks-cyberattack (2017). Accessed 18 May 2017


47. Krebs, B.: Who Makes the IoT Things Under Attack? https://krebsonsecurity.com/2016/10/ who-makes-the-iot-things-under-attack/ (2016). Accessed 29 June 2018 48. Murray, G.R., Albert, C.D., Davies, K., Griffith, C., Heslen, J.J., Hunter, L.Y., Jilani-Hyler, N., Ratan, S.: Toward creating a new research tool: Operationally defining cyberterrorism 49. Greene, K.: Catching cyber criminals. https://www.technologyreview.com/s/405467/catchingcyber-criminals/ (2006). Accessed 19 May 2017 50. Ablon, L., Libicki, M.C., Golay, A.A.: Markets for cybercrime tools and stolen data: Hackers’ bazaar. Technical Report, Rand Corporation (2014) 51. Latham & Watkins: Cyber insurance: A last line of defense when technology fails. Technical Report, Latham & Watkins (2014) 52. Partnering for cyber resilience towards the quantification of cyber threats. https://www. weforum.org/reports/partnering-cyber-resilience-towards-quantification-cyber-threats (2015) 53. Institute, Ponemon: 2016 cost of data breach study: global analysis. Technical Report, Ponemon Institute (2016) 54. Verizon: 2017 data breach investigations report. Technical Report, Verizon (2017) 55. Mossburg, E., Gelinne, J., Calzada, H.: Beneath the surface of a cyberattack: A deeper look at business impacts. Technical Report, Deloitte (2016) 56. Cambridge Centre for Risk Studies: Business blackout: The insurance implications of a cyber attack on the us power grid. http://www.lloyds.com/news-and-insight/risk-insight/library/ society-and-security/business-blackout (2015) 57. Kelly, S., Leverett, E., Oughton, E.J., Copic, J., Thacker, S., Pant, R., Pryor, L., Kassara, G., Evan, T., Ruffle, S.J., Tuveson, M., Coburn, A.W., Ralph, D., Hall, J.W.: Integrated infrastructure: cyber resiliency in society, mapping the consequences of an interconnected digital economy. Technical Report, Centre for Risk Studies, University of Cambridge (2016) 58. Swiss Re: Sigma explorer. http://www.sigma-explorer.com/ (2014). Accessed 25 May 2017 59. Common cybersecurity vulnerabilities in industrial control systems. https://www.us-cert.gov/ sites/default/files/recommended_practices/DHS_Common_Cybersecurity_Vulnerabilities_ ICS_2010.pdf (2011) 60. Ics-cert annual assessment report. https://www.us-cert.gov/sites/default/files/Annual_ Reports/FY2016_Industrial_Control_Systems_Assessment_Summary_Report_S508C.pdf (2016) 61. Permann, M., Lee, K., Hammer, J., Rhode, K.: Mitigations for security vulnerabilities found in control systems networks. In: Proceedings of the 16th Annual Joint ISA POWID/EPRI Controls and Instrumentation Conference (2006) 62. Schlichting, A.D.: Assessment of operational energy system cybersecurity vulnerabilities (2018) 63. Zhu, B., Joseph, A., Sastry, S.: A taxonomy of cyber attacks on scada systems. In: 2011 International Conference on Internet of Things and 4th International Conference on Cyber, Physical and Social Computing, pp. 380–388 (2011). https://doi.org/10.1109/iThings/CPSCom.2011. 34 64. Institute, Ponemon: Critical infrastructure: Security preparedness and maturity. Technical Report, Ponemon Institute (2014) 65. Langner, R., Pederson, P.: Bound to fail: Why cyber security risk cannot simply be “managed” away. Technical Report, Brookings (2013) 66. Maynor, D., Graham, R.: Scada security and terrorism: we’re not crying wolf. In: Black Hat Federal Conference (2006) 67. Wang, B., Li, X., de Aguiar, L.P., Menasche, D.S., Shafiq, Z.: Characterizing and modeling patching practices of industrial control systems. 
In: Proceedings of the ACM on Measurement and Analysis of Computing Systems, vol. 1, no. 1, p. 18 (2017) 68. Caskey, J., Ozel, N.B.: Earnings expectations and employee safety. J. Account. Econ. 63(1), 121–141 (2017) 69. Romanosky, S., Ablon, L., Kuehn, A., Jones, T.: Content analysis of cyber insurance policies: How do carriers write policies and price cyber risk? Working paper, RAND Corporation (2017)


70. Greenberg, A.: How power grid hacks work, and when you should panic. https://www.wired. com/story/hacking-a-power-grid-in-three-not-so-easy-steps/ (2017). Accessed 30 Sept 2019 71. Huang, B., Cardenas, A.A., Baldick, R.: Not everything is dark and gloomy: power grid protections against iot demand attacks. In: 28th USENIX Security Symposium (USENIX Security 19), pp. 1115–1132. USENIX Association, Santa Clara, CA (2019) 72. Marotta, A., Martinelli, F., Nanni, S., Yautsiukhin, A.: A survey on cyber-insurance. Computer Science Review (2015) 73. Bernstein, P.L.: Against the Gods: The Remarkable Story of Risk. Wiley, New York (1996) 74. Taleb, N.N.: The black swan: the impact of the highly improbable, vol. 2. Random house (2007) 75. Paté-Cornell, E.: On “black swans” and “perfect storms”: risk analysis and management when statistics are not enough. Risk Anal. 32(11), 1823–1833 (2012) 76. Danielsson, J.: The emperor has no clothes: Limits to risk modelling. J. Bank. & Financ. 26(7), 1273–1296 (2002) 77. Danielsson, J.: Blame the models. J. Finan. Stabil. 4(4), 321–328 (2008) 78. Triana, P.: The Number that Killed Us: A Story of Modern Banking, Flawed Mathematics, and a Big Financial Crisis. Wiley, New York (2011) 79. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Financ. 9(3), 203–228 (1999) 80. Loubergé, H.: Developments in risk and insurance economics: the past 40 years. In: Handbook of Insurance, pp. 1–40. Springer, Berlin (2013) 81. Ehrlich, I., Becker, G.S.: Market insurance, self-insurance, and self-protection. J. polit. Econ. 80(4), 623–648 (1972) 82. Gottlieb, D.: Prospect theory, life insurance, and annuities. The Wharton School Research Paper No. 44 (2012) 83. Markowitz, H.: Portfolio selection. J. Financ. 7(1), 77–91 (1952) 84. Bernoulli, D.: Exposition of a new theory on the measurement of risk. Econometrica: J. Econom. Soci. 23–36 (1954) 85. Neumann, J.v., Morgenstern, O.: Theory of games and economic behavior, vol. 60. Princeton University Press, Princeton (1944) 86. Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, vol. 1. Oxford University Press, New York (1995) 87. Stegenga, J.: Measuring effectiveness. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences vol. 54, pp. 62–71 (2015) 88. Morris, J.A., Gardner, M.J.: Statistics in medicine: calculating confidence intervals for relative risks (odds ratios) and standardised ratios and rates. Br. Med. J. (Clinical research ed.) 296(6632), 1313 (1988) 89. Dmitriev, P., Gupta, S., Kim, D.W., Vaz, G.: A dirty dozen: Twelve common metric interpretation pitfalls in online controlled experiments. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1427–1436. ACM (2017) 90. Dmitriev, P., Wu, X.: Measuring metrics. In: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 429–437. ACM (2016) 91. Jager, K., Zoccali, C., Macleod, A., Dekker, F.: Confounding: what it is and how to deal with it. Kidney Int. 73(3), 256–260 (2008) 92. Psaty, B.M., Koepsell, T.D., Lin, D., Weiss, N.S., Siscovick, D.S., Rosendaal, F.R., Pahor, M., Furberg, C.D.: Assessment and control for confounding by indication in observational studies. J. Amer. Geriatr. Soc. 47(6), 749–754 (1999) 93. Naylor, C.D., Chen, E., Strauss, B.: Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness? Ann. 
Int. Med. 117(11), 916–921 (1992) 94. Andriani, P., McKelvey, B.: Beyond gaussian averages: redirecting international business and management research toward extreme events and power laws. J. Int. Bus. Studies 38(7), 1212– 1230 (2007)


95. Kunreuther, H., Michel-Kerjan, E.: Managing catastrophic risks through redesigned insurance: challenges and opportunities. In: Handbook of insurance, pp. 517–546. Springer, Berlin (2013) 96. Waterfall: Industrial cyber insurance comes of age. Technical Report, Waterfall (2018)

Chapter 15

Cyber-Insurance

Carlos Barreto, Galina Schwartz, and Alvaro A. Cardenas

Abstract This chapter focuses on cyber-insurance for Cyber-Physical Systems (CPS). We identify two types of networked systems: traditional Information Technology Systems (ITS) and CPS. They differ with respect to cyber-security: the security challenges of CPSs are driven by their particulars (physical features and capabilities, regulations, and attacker motivations). We discuss the factors complicating the advancement of the CPS cyber-insurance ecosystem, including extreme information scarcity and risk assessment problems exacerbated by the growing complexity of CPS and the intricacies of risk propagation. We conclude that without new government policies improving security information, the cyber-insurance market for CPS may stall.

15.1 Introduction

The objective of this chapter is to present the principles of insurance and to apply them in the context of cyber-insurance for CPS. A brief introduction to CPS risk and risk management can be found in our companion chapter in this volume. In the past decades, the power grid, vehicles, medical devices, buildings, and other systems have been modernized with embedded computers. Nowadays, such systems are called Cyber-Physical Systems (CPS). They are defined as a network of physically distributed sensors and actuators with computing and communications capabilities to monitor and control physical elements [39, 70, 78].

C. Barreto (B): The University of Texas at Dallas, Richardson, TX, USA, e-mail: [email protected]
G. Schwartz: Cyber Blocks Inc., Detroit, MI, USA, e-mail: [email protected]
A. A. Cardenas: University of California, Santa Cruz, CA, USA, e-mail: [email protected]
© Springer Nature Switzerland AG 2021. R. M. G. Ferrari et al. (eds.), Safety, Security and Privacy for Cyber-Physical Systems, Lecture Notes in Control and Information Sciences 486, https://doi.org/10.1007/978-3-030-65048-3_15


CPS provide new societal benefits (such as improved adaptability, autonomy, safety, and efficiency). But they create a new attack vector in physical infrastructures due to their susceptibility to cyber-attacks. A cyber-risk of CPS is defined as "a security breach in cyber-space that adversely affects physical world," see [53]. Indeed, cyber-attacks against CPS can cause blackouts and vehicle accidents, and can harm users of medical devices [40, 47, 52, 60, 74].

Nowadays, due to ubiquitous networks, all industries are subject to cyber-attacks. Still, the primary security challenges (and thus the specific cyber-risks) differ across industries due to their varied network features, threats, and capabilities. For Information Technology Systems (ITS), the primary cyber-risks come from adversaries that exploit vulnerabilities in web services for profit. ITS are subjected to increasingly sophisticated, well-organized attacks; hence, multiple attacks occur daily and successful attacks are regularly reported in the news [41]. With cyber-attacks being an ever-present concern, security personnel are dedicated to reducing cyber-risks via frequent upgrades and extensive system testing. ITS cyber-risks can be estimated using data from numerous attacks.

In contrast, CPS lack a similar amount of visible cyber-attacks: attacks on physical processes exist, but they happen rarely. Examples of such cyber-attacks are Stuxnet [85], the attacks on the Ukrainian power grid [86], and the Triton malware that targeted safety systems in the Middle East [35]. Due to national security considerations, reports about attacks on CPS are not widely disclosed, and minor attacks go unreported. Most companies with CPS-like exposure have never experienced attacks on their physical processes; thus, there is little data about attacks. Such companies lack a clear business case to invest in cyber-security [50]. In addition, cyber-risk assessment for CPS is exacerbated by the fact that individual companies take into account only direct expected damages. The potential indirect damages (also called negative externalities) to the larger society are ignored. For example, an electric blackout in New York City disrupted water treatment plants, causing a spill of 490 million gallons of sewage [28]. With indirect damages present, security spending is biased: it is too low from a societal standpoint.

To sum up, extreme data scarcity and potentially sizable negative externalities are unique CPS characteristics that make it particularly hard to quantify (and thus insure) their cyber-risks. Still, the increased sophistication of attacks against CPS brings interest in cyber-insurance [84], but the success of this risk management strategy ultimately depends on the insurer's ability to estimate the risks. In the remainder of this chapter we explain the cyber-insurance ecosystem depicted in Fig. 15.1. In Sect. 15.2, we continue with the details of cyber-insurance. In Sect. 15.3, we outline the theoretical principles of insurance; we discuss practical considerations and the role of regulation in Sect. 15.4. We devote Sect. 15.5 to the modeling and management of extreme events. In Sect. 15.6, we summarize the challenges, outline future directions, and conclude.

Fig. 15.1 Cyber-insurance ecosystem: infrastructures negotiate risk transfer with insurers (premium, deductible, coverage) and insurers pool the risks; cyber-security consultants, in partnership with insurers, assess and measure risks/damages and suggest protections to decrease premiums; infrastructures manage and reduce their risks; the government acts as reinsurer of last resort

15.2 Cyber-Insurance

Maintaining the integrity of critical infrastructures is an important government goal. Cyber-insurance is one of the tools intended to help [20, 24, 42, 80]. Policy-makers and researchers became interested in cyber-insurance because it allows them to deal with residual cyber-risk, i.e., the risk left over after technical tools were applied to reduce the risk. Historically, the technical community held a widespread belief that insurance premiums should reflect risks and would motivate decision-makers to increase security investments. Indeed, the community expected that insurers would offer CPS management effective protection schemes (i.e., technical tools of risk reduction). The seminal paper [26] challenges the standard economists' view (see [4]) that with market insurance, decision-makers invest less in system security. Indeed, [26] demonstrates that market insurance (i.e., usual insurance) and self-protection (i.e., the reduction of the probability of loss) can be complements. Then, cyber-insurance appears as an ideal tool for securing critical infrastructures.

The problem with this optimistic worldview is threefold. First, numerous papers have established that cyber-insurers lack both the data and the technical expertise to assess cyber-risks [23]. Second, the results from [26] hold only if investments to reduce the probability of losses lead to a sufficient reduction in premiums (i.e., premium savings must justify extra security spending). This is restrictive; likely, it fails for real-life CPS scenarios. Third, [26] supports the conventional wisdom that the presence of market insurance crowds out spending on self-insurance (investment in loss reduction). The hopes of early papers that cyber-insurance will lead to the reduction of aggregate (societal) cyber-risk never materialized [15]. The cyber-insurance market remains minor compared with the commercial insurance market [43, 58].


15.2.1 Insurance of Material Versus Cyber-Assets

The Ponemon Institute [69] surveys the practices of insuring material (physical) assets and cyber-assets (customer/employee records, source code, etc.). Despite roughly identical maximal losses for both asset types, the damages for cyber-assets turned out to cost about twice as much due to the associated business disruptions. Firms insure almost the same fractions of both asset types but employ different instruments. For material versus cyber-assets, firms buy insurance (51% vs 12%) and self-insure (28% vs 58%), i.e., 79% versus 70% of risks are covered. Firms avoid acquiring cyber-coverage due to expensive premiums and coverage limitations, such as exclusions and restrictions.

15.2.2 Cyber-Insurance Ecosystem

Following [72], below we summarize the features of cyber-insurance policies.

15.2.2.1 Coverage

Cyber-policies offer coverage against first-party losses and third-party liabilities. First-party coverage refers to losses suffered directly by the insured, such as personal data compromise. Third-party coverage provides liability protection for lawsuit costs (e.g., defense and settlement costs) against the insured by parties external to the insurance contract. Figure 15.2 classifies the coverage by the type of damage.

Fig. 15.2 Types of coverage; for CPS, required cyber-coverage is marked by †.
Costs of first-party losses: personal data compromise (PII of customers and/or employees; identity recovery); computer attack† (unauthorized access; malware or DoS attacks); cyber extortion† (investigative costs; the amount of ransom).
Costs of third-party liability: data compromise (personal data compromise); network security† (breach of business information; propagation of malware); electronic media (violation of privacy; infringement of copyright)

Fig. 15.3 Common exclusions (such coverage is crucial for infrastructure CPS): cost to substitute systems; costs to recover from attacks; extortion; terrorism; collateral damage; war; criminal acts; physical damage; business interruptions; overtime salaries; propagation of malware; DDoS; electric or mechanical failure; fire, smoke; energy; telecommunications

15.2.2.2 Exclusions

Exclusions in cyber-insurance are more diverse than in conventional insurance, see Fig. 15.3. Few cyber-policies cover the costs resulting from extortion, terrorism, criminal acts, violations of law, patent infringement, or disclosure of confidential information. Typical cyber-policies do not cover certain recovery expenses, such as the costs of substitute systems or of overtime personnel for such activities; the costs caused by infrastructure failures (e.g., interruptions in power and telecommunications), except when the insured controls the infrastructure [37]; the costs of physical harm (electric or mechanical failures, fire); and collateral damage, such as malware propagation/forwarding, DoS attacks, or losses to systems not controlled by the insured.

15.2.2.3 Risk Assessment

Typically, to assess the risk, cyber-insurers use questionnaires that inquire about four aspects, namely, the organization, technical components, policies and procedures, and regulatory compliance (see Fig. 15.4). With respect to the organization, insurers inquire about the type and size of the business (e.g., type of industry, revenues, and assets) and the sensitive information stored (e.g., SSN, credit/debit card numbers, driver license, email address, IP address, financial and banking information, medical records). Insurers request the history of incidents experienced by the insured and its third-party service providers (e.g., lawsuits; data, security, and privacy breaches). The technical part of the questionnaire assesses the technology and computing infrastructures. It includes basic questions about the attack surface (e.g., number of computing devices, IP addresses, URLs), protections (e.g., antivirus, IDS/IPS, encryption), and access control schemes to restrict unauthorized seizure of sensitive information. However, few insurers pose specific technical questions. This suggests a lack of expertise in using such information in risk estimates. For the same reason insurers do not accept security certifications as a measure of the security level [31].

Fig. 15.4 Insurer questionnaire to assess cyber-risks:
• Organization: general (type of industry; revenues/assets; size of contracts; previous coverage); information assets/storage (PII; confidential information; intellectual property); outsourcing/third-party services (security assessment; history of security breaches; contracts and liability); incident history (data, security, or privacy breaches; past incidents; lawsuits; extortion).
• Technical components: number of devices, IPs, URL addresses; development of critical software; segregation of IT systems storing PII; security measures (antivirus; IDS/IPS; encryption; firewalls).
• Policies and procedures: data management (data retention/destruction policies; number of records stored; data commercialization; third-party data processing); privacy and network security (software development and password policies; implementation and testing of policies for privacy, information, and network security); organization's security (incident response plans; security procedures training; backups; penetration testing; vulnerability scans).
• Legal and compliance: compliance with regulations and standards (HIPAA; PCI/DSS; GLBA)

The policies and procedures part of the questionnaire aims to assess cyber-risk management. Insurers ask about data management (e.g., storage/sale/destruction of records), incident prevention (backups, software development), and response plans. Insurers evaluate the client's protections (penetration testing, vulnerability scanning) and training in security procedures. Only one questionnaire asked about the budget for prevention, detection, and response to security incidents. Lastly, insurers assess compliance with standards and regulations, such as HIPAA, PCI/DSS, and GLBA. Most of the questions seek to assess the amount and the type of stored data.

15.2.2.4 Cyber-Insurer Approaches to Data Limitations

Cyber-insurers tend to seek outside assistance. For premiums, they use competitor rates and data from other insurance lines. Oftentimes cyber-insurers appear to guess, and/or rely on industry, academic, or government reports [72].

15.2.2.5 Types of Cyber-Premiums

Fig. 15.5 Cyber-insurance premiums: flat rate; base rate (annual revenues/assets); information security pricing (claim history, industry, security infrastructure, payment card control, media control, computer system interruption loss)

Figure 15.5 depicts three types of premiums: flat rate, base rate, and premiums based on information security pricing, which account for 50%, 17%, and 33% of policies, respectively [72]. With a flat rate, the premiums are calculated using a profit load (with a ratio between 25% and 35%). With a base rate, the premiums are a function of annual revenue, adjusted using other variables (e.g., claim history, industry). Premiums based on information security pricing rely on a qualitative security evaluation based on the security infrastructure, payment card control, media controls, and losses from system interruptions. However, it is unclear how underwriters assess the risk.
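As a rough illustration of these three premium styles, the sketch below uses placeholder numbers and functional forms; it is one possible reading of the categories described in [72], not the carriers' actual rating formulas.

```python
# Illustrative premium calculations for the three styles in Fig. 15.5.
# All parameters and formulas below are hypothetical placeholders.
def flat_rate(expected_claims, profit_load=0.30):
    """Flat rate: expected claims marked up by a profit load (ratio assumed in 25-35%)."""
    return expected_claims * (1.0 + profit_load)

def base_rate(annual_revenue, rate=0.001, adjustment=1.2):
    """Base rate: a function of annual revenue, adjusted e.g. for claim history or industry."""
    return annual_revenue * rate * adjustment

def security_pricing(base_premium, security_score):
    """Qualitative adjustment: a stronger security posture (score in [0, 1]) earns a discount."""
    return base_premium * (1.5 - 0.5 * security_score)

print(flat_rate(100_000))                                            # -> 130000.0
print(base_rate(annual_revenue=50_000_000))                          # -> 60000.0
print(security_pricing(base_premium=120_000, security_score=0.8))    # -> 132000.0
```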

15.2.2.6 Gaps in Cyber-Coverage

Firms routinely purchase commercial general liability (CGL) insurance, which protects them against liabilities for bodily injury, property damage, and personal injury. CGL insurance excludes injuries/damage "arising out of" damage to or destruction of electronic data [77]. Such insurance fails to cover the usual cyber-attack losses caused by damaged digital assets. Since data loss is not regarded as property damage, CGL insurance excludes the coverage of losses caused by data breaches. Still, cyber-insurers do not fill this gap (e.g., by covering third-party claims of bodily injury or property damage), see Fig. 15.6. A second coverage gap is driven by the problem of assessing responsibility (first- vs third-party). For example, Sony's CGL didn't cover information theft because the policy holder (the first party, i.e., Sony) didn't release the information, but a third party did. Third, few insurers offer coverage against social engineering attacks, because the breaches occur due to human errors, not due to unauthorized access. Lastly, exclusions of terrorism could apply to cyber-attacks arising from acts "committed for political, religious, ideological, or similar purposes, including the intention to influence any government and/or to put the public, or any section of the public, in fear" [38]. Uncertainties (gaps) in coverage and problems identifying the origin of an attack allow insurers to find reasons (excuses) to deny coverage, especially for large claims [75, 77].

Fig. 15.6 Gaps in the coverage († protection against physical damage is crucial for CPS): data loss is not considered property damage; difficulty to assess responsibility (first- vs third-party liability); coverage against social engineering is rare; protection against physical damage†

15.2.3 Limitations of the Cyber-Insurance Ecosystem

Despite the initial enthusiasm, cyber-insurance hasn't thrived as many researchers expected [14, 31, 51, 81]. Oftentimes, this is explained by problems of cyber-risk assessment, which result in limited coverage but high premiums [14, 81]. Also, loss assessment is hard, because security breaches can cause externalities, i.e., indirect losses to other individuals (e.g., in supply chain attacks). This leads to correlations, which violate the assumption of event independence. Insurers can cover correlated risks only if they are transferable via reinsurance or government help [27].

Firms forgo cyber-insurance adoption for several reasons. First, they may ignore cyber-risk and/or the benefits of insuring against it [31]. Second, uncertain coverage due to exclusions deters firms from buying coverage. Third, by submitting a claim, the firm effectively acknowledges a security breach, which can lead to reputation loss. Management may decide against filing such claims, which discourages insurance purchase. Also, the lack of coverage for intangible losses reduces the appeal of insurance [9].

Cyber-insurance is plagued by highly asymmetric information. For example, insurers do not directly account in the premiums for vulnerabilities of the software used by firms, because the insurers lack the information and expertise to translate the available information into risk assessments. Similarly, it is nearly impossible for the insurers to estimate the effectiveness of protection schemes. Importantly, it is too costly for the insurers to monitor whether the required protections always remain in place. This limits the insurers' capability to offer premium reductions to customers who invest in protections. This results in pooling, i.e., due to the insurers' inability to distinguish clients with low and high risk, they are offered the same contracts.

To sum up, the advancement of the cyber-insurance ecosystem is hindered by (i) technical difficulties of risk assessment (and thus, of setting the premium size); (ii) highly asymmetric information, which causes two-sided moral hazard: the insurers cannot verify whether the insured comply with the required protections, and the insureds lack certainty that in the case of a loss their insurers would not deny coverage based on technicalities; (iii) data scarcity, which is especially severe for CPS; and, in addition, (iv) the complexity of the cyber-risk distribution, such as correlations and fat tails (due to the presence of extreme events).


Table 15.1 Insurance tools

Risk pooling       Reduces the risk via creating a pool with a large number of i.i.d. risks
Social insurance   Uses taxes or mandatory insurance to cover non-diversifiable risks
Risk transfer      Transfers risk from low- to high-risk-tolerance agents

15.3 Introduction to Insurance

Insurance is a contract between two actors, namely, the insured (the demand side) and the insurer (the supply side). The insured faces some risk that is transferred to the insurer in exchange for a premium (payment) P. Thus, the insurer covers some (or all) future losses L of the insured [71]. This risk transfer allows the insured to reduce the randomness of the outcome; however, having insurance does not imply lower losses. Insurance contracts specify coverage and exclusions. The coverage might be for the first party (damage to the insured) and/or a third party (damage to others caused by the insured) [51]. The contract also specifies an indemnity I, i.e., the payment that the insured receives in case of a loss. A deductible D is the part of the loss paid by the insured. Then, the indemnity is lower than the loss: L = I + D [54].

15.3.1 Types of Insurance

There are three main insurance tools: risk pooling, risk spreading, and risk transfer (see Table 15.1) [6]. We use risk pooling to illustrate the concept of insurance.

15.3.1.1 Risk Pooling

This tool relies on the independence of events and the law of large numbers [57]. The insurers collect a large number of similar (yet independent) risks in pools; this allows accurate prediction of the average claim size and premium calculation. Risk pooling can fail with interdependent (correlated) risks, which cause multiple simultaneous claims; e.g., natural disasters can cause losses for policies in a specific area.

15.3.1.2 Social Insurance and Risk Spreading

Terrorism, natural catastrophes, and nuclear accidents have correlated risks, i.e., a single event affects numerous agents. Hence, insurers won't accept these risks, since risk pooling fails in such cases. Then, governments intervene by sponsoring insurance (often mandating it) to keep the costs of catastrophic events manageable [48]. Some examples are compulsory insurance for hurricanes, earthquakes, floods,


or volcanic eruptions, among others. Government interventions can take other forms. For example, when catastrophic events occur, the government may transfer resources to the affected agents. In other words, governments address non-diversifiable risks by spreading the losses widely. This mechanism is known as risk spreading, which can be implemented as a form of social insurance, with compulsory participation financed through taxes. If the social welfare function is concave, it is preferable to distribute the risks across the entire population. Nonetheless, this solution might not be Pareto optimal (beneficial for everybody), because individuals without losses must contribute to cover the losses of others.

15.3.1.3 Risk Transfer

Another mechanism to deal with systemic risk is to transfer it to a more risk-tolerant agent. If the perceived cost of risk decreases with wealth, then wealthier agents will perceive a lower risk than less wealthy agents. Hence, less wealthy agents, which are more vulnerable to risks, would pay the wealthier agents to accept their risk. This method can lead to Pareto optimal outcomes if the transfer of risk benefits both parties. This mechanism, unlike risk pooling, relies on the insurer's wealth (and hence risk tolerance): the risk is not reduced through the law of large numbers, it is just transferred to another agent. This scheme was used by Lloyd's (London), which gathered the wealth of British nobility and gentry to create a risk-tolerant agent [10]. In this way, Lloyd's insured against large non-diversifiable risks, such as satellite launches and oil tanker transport, among others.

15.3.2 Model of Risk Pooling

We will model the insureds (users/buyers) and the insurers (providers/sellers), and find theoretical premiums that both parties would accept. However, these models are benchmarks; real premiums are higher due to the insurer's operating costs.

15.3.2.1 Insured Side

Let W0 denote the initial wealth of the firm and L its random loss. Then, the respective wealth W and Ŵ, without and with insurance, are the following random variables:

W = W0 − L   and   Ŵ = W0 − L − P + I,    (15.1)

where P is the premium and I the indemnity (the payment to the insured if the loss occurs). To simplify, consider full coverage, I = L, and a risk-averse firm (i.e., its utility function satisfies U′ > 0 and U′′ < 0) that maximizes its expected utility U. Then, the expected utility without and with insurance is, respectively,


E[U(W)] ≤ U(W0 − E[L])   and   E[U(Ŵ)] = U(W0 − P).    (15.2)

The firm buys insurance if

E[U(Ŵ)] ≥ E[U(W)],    (15.3)

i.e., agents accept policies with premiums no higher than the expected indemnity (P ≤ E[L]); P = E[L] is called an actuarially fair premium. One can show that risk-averse firms buy some insurance even if premiums exceed the actuarially fair level. In our example, the insurance successfully removed the insured's risk. However, with a fair premium the insurer does not change the expected wealth of the insured: on average, the premium equals the expected loss. Here insurance reduces only the variance of losses. Below we introduce the collective risk model, which is useful to determine the premium that an insurer would charge [16, 25, 57].

Remark 15.1 Insurance policies may have exclusions, i.e., pre-specified losses (e.g., a terrorist attack) that are not covered. Exclusions discourage insurance adoption. From [82], agents prefer a lower coverage to default risk (i.e., the risk of no coverage).
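A minimal numerical sketch (our own, with an illustrative bounded loss distribution and log utility, not the chapter's model) of the insured's decision in Eqs. (15.1)–(15.3): the certainty-equivalent premium, i.e., the most a risk-averse firm would pay for full coverage, exceeds the fair premium E[L].

```python
# Maximum acceptable premium for full coverage under a concave (log) utility.
import numpy as np

rng = np.random.default_rng(3)
W0 = 100.0
L = rng.uniform(0.0, 40.0, size=1_000_000)      # random losses bounded below W0, E[L] = 20

eu_no_insurance = np.mean(np.log(W0 - L))        # E[U(W0 - L)], utility U = log

# Certainty equivalent: the premium P* such that U(W0 - P*) equals the expected
# utility without insurance, i.e., the most the firm is willing to pay.
P_max = W0 - np.exp(eu_no_insurance)

print(f"fair premium E[L]         = {L.mean():.2f}")
print(f"max acceptable premium P* = {P_max:.2f}  (> E[L] for a risk-averse firm)")
```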

15.3.2.2

Insurer Side

Let the indemnity I denote the aggregate claims received from a portfolio (insurance contract) in one time period. Then, I depends on the number of claims N and the amount X_i of the ith claim, where N and X_i are random variables:

$$I = \sum_{i=1}^{N} X_i. \qquad (15.4)$$

For example, N may be the number of security breaches, and X_i the amount paid for each breach claim. With full insurance, X_i equals the loss; otherwise, it could be lower. Let N be independent from the amount of each claim X_i, and let {X_i} be independent and identically distributed (i.i.d.) random variables. We want to find the distribution g(x) of the indemnity I: g(x) = P[I = x]. If N follows a Poisson distribution with parameter λ, and X_i has mean μ and variance σ², the portfolio's distribution g(·) is a compound Poisson distribution with the following moments [16]:

$$E[I] = \lambda\mu \quad\text{and}\quad \mathrm{Var}[I] = \lambda(\mu^2 + \sigma^2). \qquad (15.5)$$

Consider a pool of i.i.d. insurance policies with indemnities {I_1, I_2, ...}. The compound Poisson distribution satisfies the law of large numbers [57], which gives

$$\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} I_k = E[I] = \lambda\mu, \qquad (15.6)$$


Table 15.2 Statistical principles used in risk pooling to calculate premiums
Law of large numbers: the average of a large number of i.i.d. claims converges to the expected value of the claims as the portfolio grows.
Central limit theorem: the sum of a large number of i.i.d. claims (with finite variance) converges to a normal distribution as the portfolio grows.

i.e., the average claim of n identical portfolios converges to the expected value as n increases. Then, the insurer can charge premiums equal to the expected claims E[I]. By insuring a large number of risks, the pool reduces the variance, and thus the insurer's risk of bankruptcy (the risk that claims exceed the premiums):

$$\mathrm{Var}\left[\frac{1}{n}\sum_{k=1}^{n} I_k\right] = \frac{1}{n^2}\sum_{k=1}^{n}\mathrm{Var}[I_k] = \frac{\mathrm{Var}[I]}{n}. \qquad (15.7)$$
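The moment formulas in Eq. (15.5) and the variance reduction in Eq. (15.7) are easy to check by simulation. The sketch below is our own illustration; the lognormal claim severity and all numeric parameters are assumptions.

```python
# Sketch: compound Poisson portfolio, checking Eqs. (15.5)-(15.7) by simulation.
import numpy as np

rng = np.random.default_rng(1)
lam = 10.0                                    # Poisson claim-count parameter (assumed)

def severity(size):
    """Assumed claim-size distribution X_i (lognormal, purely illustrative)."""
    return rng.lognormal(mean=1.0, sigma=0.5, size=size)

def portfolio_claims(n_portfolios):
    """Aggregate indemnity I = sum_{i=1}^N X_i for each of n_portfolios portfolios."""
    counts = rng.poisson(lam, size=n_portfolios)
    return np.array([severity(k).sum() for k in counts])

mu = severity(1_000_000).mean()
var = severity(1_000_000).var()
I = portfolio_claims(50_000)
print("E[I]  : simulated %8.2f  vs  lambda*mu            %8.2f" % (I.mean(), lam * mu))
print("Var[I]: simulated %8.2f  vs  lambda*(mu^2 + var)  %8.2f" % (I.var(), lam * (mu**2 + var)))

# The variance of the pooled average shrinks like Var[I]/n, cf. Eq. (15.7).
for n in (1, 10, 100):
    avg = portfolio_claims(5_000 * n).reshape(-1, n).mean(axis=1)
    print(f"n={n:>3}: Var of pooled average = {avg.var():8.2f}   (Var[I]/n = {I.var()/n:8.2f})")
```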

In summary, with i.i.d. risks, the law of large numbers implies that insurers can deal with risks by pooling them together: the larger the pool, the greater the insurer's resources to absorb the losses [76]. By the central limit theorem, the distribution of claims in the pool can be approximated by a normal distribution with the same mean and variance as g(·) [57]. Next, we show that both the law of large numbers and the central limit theorem (Table 15.2) are useful to determine the premiums.

15.3.3 Premium Calculation Principles

To simplify, assume a risk-neutral insurer. Let Π denote the expected insurer profit, which is equal to the total premiums net of the expected indemnities:

$$\Pi = \sum_{i} \left(P_i - E[I_i]\right) \quad\text{and}\quad P_i \ge E[I_i], \qquad (15.8)$$

where i denotes the insurer's clients, and the expected profit is non-negative for every i. A fair insurance satisfies the profit condition with equality, which implies that the insurer receives zero expected profit. However, this theoretical construct can lead to insurer bankruptcy, because in practice the insurer cannot completely eliminate the risk (the variance of the claims in Eq. (15.7)) due to estimation errors, interdependent events, small pool size, etc. The principles of premium estimation can be found in Table 15.3 [57]. The equivalence principle states that the premium is actuarially fair:

$$P_{net} = E[I]. \qquad (15.9)$$

It can be used if there is no risk, e.g., if the insurer sells enough i.i.d. policies. Otherwise, the losses can exceed the premiums and bankrupt the insurer (unless it borrows). The expected value principle adds a loading (safety) factor α:

$$P_{EV} = (1+\alpha)E[I], \quad \alpha > 0. \qquad (15.10)$$

Table 15.3 Premium calculation principles
Equivalence principle: P_net = E[I]
Expected value principle: P_EV = (1 + α)E[I]
Variance principle: P_Var = E[I] + α Var[I]
Standard deviation principle: P_SD = E[I] + α √Var[I]
Profit load: P_PL = E[I]/(1 − γ)

With α = 0, the expected value principle is actuarially fair (PE V = Pnet ). A higher α, lowers insurer bankruptcy risk, but reduces competitiveness. The variance principle adds the variance to a loading factor: PV ar = E[I ] + αVar[I ].

(15.11)

The standard deviation principle is also based on the equivalence principle, with an added loading factor proportional to the standard deviation:

$$P_{SD} = E[I] + \alpha\sqrt{\mathrm{Var}[I]}. \qquad (15.12)$$

Cyber insurers largely use the profit load method to determine the premiums [72]:

$$P_{PL} = \frac{E[I]}{1-\gamma}, \quad \gamma > 0, \qquad (15.13)$$

where γ is the profit load. If α = γ/(1 − γ), the expected value principle equals the profit load method.
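Since the premium principles of Table 15.3 are closed-form functions of the first two moments of the aggregate claims, they can be coded in a few lines. This is a sketch of ours; the loading factors α and γ below are arbitrary illustrative values.

```python
# Sketch: premium calculation principles of Table 15.3 for a sample of aggregate claims I.
import numpy as np

def premiums(I, alpha=0.1, gamma=0.1):
    """Return the premiums of Table 15.3 from simulated (or historical) claims I."""
    EI, VI = np.mean(I), np.var(I)
    return {
        "equivalence (net)":  EI,                        # P_net = E[I]
        "expected value":     (1 + alpha) * EI,          # P_EV  = (1 + alpha) E[I]
        "variance":           EI + alpha * VI,           # P_Var = E[I] + alpha Var[I]
        "standard deviation": EI + alpha * np.sqrt(VI),  # P_SD  = E[I] + alpha sqrt(Var[I])
        "profit load":        EI / (1 - gamma),          # P_PL  = E[I] / (1 - gamma)
    }

rng = np.random.default_rng(2)
I = np.array([rng.lognormal(1.0, 0.5, rng.poisson(10)).sum() for _ in range(20_000)])
for name, p in premiums(I).items():
    print(f"{name:>18}: {p:7.2f}")
```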

Example 15.1 (Risk Pooling) This example is based on [25]. Consider a risk whose number of claims follows a Poisson distribution with parameter λ = 10 (λ is the average number of events per time interval), and where X_i follows a lognormal distribution with mean μ = 1 and variance σ² = 1.5. From the law of large numbers, the average claim of such a pool converges to E[I] as n increases (Fig. 15.7). A single portfolio with λ = 100 is equivalent to a pool of n = 10 portfolios with λ = 10 (due to the properties of the Poisson distribution). Thus, a higher λ is equivalent to a higher number of portfolios. Figure 15.8 shows the distribution of the aggregate claims g(x) and its normal approximation ĝ(x) for λ = 10 to 100 (we calculate the compound Poisson distribution g(x) using the Panjer recursion [63]). With a higher number of policies (higher λ), g(x) gets closer to its normal approximation ĝ(x). However, the tail of the distribution is not represented accurately. We use the normal distribution (which underestimates the tail) for premium estimation and ensure a positive insurer surplus with a 95% confidence level:


Fig. 15.7 Average claims as a function of the number of portfolios n

Fig. 15.8 Distribution of the total claims g(·) and its normal approximation

$$P[P - I \ge 0] \ge 0.95. \qquad (15.14)$$

From the standard normal distribution table, the previous expression is satisfied at P = μ + 1.65σ, where μ = 56.63 and σ = 30.47 are the mean and the standard deviation of the approximating ĝ (for λ = 10). Figure 15.9 depicts the histogram of the normalized insurer surplus (P − I)/P for a portfolio with 10^4 samples of claims. From Eq. (15.14), at most 5% of the claims should exceed the premium; but in the experiments, such claims account for 8.4%. Here, the normal distribution poorly fits the real one, in particular its tail. The premium based on the normal distribution is biased against large (but rare) claims.


Fig. 15.9 Histogram of the insurer’s surplus ((P − I )/P) with λ = 10

In Sect. 15.5, we introduce special tools for modeling fat tails.
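A Monte Carlo re-implementation of Example 15.1 is sketched below (we sample the compound Poisson directly instead of using the Panjer recursion of the chapter). The parameterization of the lognormal severity, log-scale mean 1 and variance 1.5, is our reading of the example, so the resulting numbers will only roughly track μ = 56.63 and σ = 30.47; the point is that the empirical frequency of claims exceeding the normal-approximation premium typically lands above the nominal 5%.

```python
# Sketch of Example 15.1: normal-approximation premium vs. empirical ruin frequency.
import numpy as np

rng = np.random.default_rng(3)
lam = 10                        # Poisson parameter
mu_log, sig2_log = 1.0, 1.5     # lognormal parameters (log-scale; an assumption)

def aggregate_claims(n):
    counts = rng.poisson(lam, size=n)
    return np.array([rng.lognormal(mu_log, np.sqrt(sig2_log), k).sum() for k in counts])

I = aggregate_claims(10_000)

# Premium from the normal approximation of g(.), as in Eq. (15.14): P = mu + 1.65 sigma.
P = I.mean() + 1.65 * I.std()
shortfall = np.mean(I > P)      # fraction of periods where claims exceed the premium

print(f"normal-approx premium P         = {P:.2f}")
print(f"target P[I > P]                 <= 0.05")
print(f"empirical P[I > P] (heavy tail) =  {shortfall:.3f}")
```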

15.3.4 Insurance Markets and Reinsurance

Insurance markets strongly influence insurance contracts; for instance, they determine whether premiums are fair or unfair. On the one hand, competitive markets have a large number of insurers, none of which can propose a better contract. Theoretically, this implies that premiums are fair (that is, P = E[I]) and the insurer has zero profit. However, in practice premiums exceed the fair premium to cover both claims and operating costs and to maintain capital reserves [46]. On the other hand, in a monopolistic market the insurer uses its power to maximize its profit [55].

Insurers diversify risks via reinsurance. This enables insurers to cover otherwise unacceptable risks (e.g., correlated risks). Insurers reinsure to redistribute their acquired risk to other companies. Reinsurance is purchased through a reinsurance market, a competitive market in which companies exchange risks. Insurers can purchase reinsurance for either individual or aggregated risks. Reinsurance agreements come in two main types, namely proportional and non-proportional coverage. With proportional coverage the parties share premiums and losses by a predefined ratio, while with non-proportional coverage the reinsurer covers losses that exceed a threshold.

According to [17], a Pareto optimal set of reinsurance treaties is equivalent to a pool where members agree on how to pay the claims against the pool. This is a negotiation problem.


Ideally, the market can reach an optimal risk allocation if price discrimination in the pool is banned. In practice, this optimum is infeasible. In theory, reinsurance is not needed if (i) the events are i.i.d., (ii) the insurers have perfect information, (iii) the number of claims is large (to reduce the variance), and (iv) insurers have no borrowing constraints. In practice, insurer information is imperfect, and insurers bear the risk of unexpected events (with larger than expected claims).

15.4 Insurance in Practice

15.4.1 Agent Preferences and Insurance Instruments

Individuals and institutions overreact to extreme events [45]. Indeed, people tend to purchase insurance after suffering an accident (when the premium is more expensive), but cancel their policy after some time if no other accidents occur. Insurers tend to overprice, restrict coverage, or even leave the market because they focus on worst-case scenarios, sometimes without considering the events' likelihood [48]. These attitudes hinder the success of insurance as a risk management tool. As a result, governments often subsidize the losses, acting as insurers of last resort [48].

15.4.2 Imperfections of Insurance Markets

Some insurance markets fail due to market imperfections, mainly high transaction costs or information asymmetries. High transaction costs occur if the market size is insufficient to cover operation costs (e.g., the rent of the insurer's office). Information asymmetries occur because insurers are less informed about an agent's risk than the agent herself [2, 5, 66]. Below we describe two main information asymmetries.

15.4.2.1 Adverse Selection

When insurers cannot observe an agent's risk or protection level, they must offer the same policy to everyone, with a premium that reflects the average risk. Such policies may be too expensive for low-risk agents, who may choose to remain uninsured, while higher risk agents remain in the pool, resulting in excessive insurer risk. One solution is to force all agents to buy insurance [22]. Then, low-risk agents subsidize high-risk ones. This is socially desirable if the benefit for high-risk agents surpasses the burden on low-risk agents. Another solution is to offer policies with partial/full coverage for low/high risks, respectively; then, an agent's choice of policy reveals their actual risk [49]. Lastly, the insurer can screen the agents (e.g., via a security audit) to assess their risk, which would increase the premium.

15.4.2.2 Moral Hazard

Another form of information asymmetry occurs when the insurer cannot observe the insured's risk after the contract is signed. As a consequence, individuals with insurance might take more risks. When moral hazard is present, making liability insurance compulsory does not resolve the problem [34]. Insurers can prevent moral hazard by continuously auditing their clients. Also, insurers can condition the payment of indemnities on the fulfillment of predefined security practices. For example, insurers may exclude coverage of data breaches if the client's systems were unpatched.

15.4.2.3 Failure of Reinsurance

Reinsurance can fail when the risk remains in the same market. For example, an insurance spiral happens when reinsurers purchase reinsurance against extreme events and, at the same time, write reinsurance policies for other reinsurers affected by the same risk. A reinsurer making claims to its reinsurers (retrocessionaires) does not pay for additional claims; but insurance spirals create additional claims, because a reinsurer may receive claims from reinsurance policies that it wrote for retrocessionaires. Thus, insurance spirals concentrate, rather than dissipate, the losses [7]. In 1993, the Lloyd's crisis was driven by the multiple re-insuring of the same risk, a practice that profits both the underwriter (the specialist who uses statistical methods to assess the risk of a policy and calculate its premium, and who claims the premium) and the broker (a person or firm that negotiates insurance policies in exchange for some compensation, and who gets a commission). This practice became frequent due to the underestimated possibility of claims that would require indemnifications from the reinsurer. But such an event did occur in 1988, when the Piper Alpha oil rig exploded. The asset was insured for 700 million, but resulted in 15 billion in claims, because the reinsurers failed to diversify.

15.4.3 Regulation

Governments tend to intervene if regulations improve social welfare; for example, when they improve the competition within the market (a competitive market has many sellers and buyers, no entry/exit barriers, and perfect information) [46]. Market imperfections, such as information asymmetries, lead to undesirable outcomes, e.g., insurers either bear large financial risks or manipulate insurance supply and prices [46, 49]. Financial regulations could aim to prevent insurer insolvency (bankruptcy) by mandating that insurers hold capital reserves. Price regulations might prevent the insurer from modifying the premium; for example, an insurer may take higher risks by lowering the premiums to attract more customers, betting on low claims.

Regulators also intervene when markets reach undesirable outcomes, e.g., high premiums, exclusions, or low adoption. This can occur due to the characteristics of the risks, not market imperfections. For instance, in California, homeowners tend to skip purchasing earthquake insurance because they believe it is too expensive (for its coverage) [46]. Then, the regulator might intervene to increase the adoption of insurance, and thus reduce the risk of catastrophes (the government is the insurer of last resort: it faces losses when insurers fail, and thus must strive to promote insurance, as it reduces government exposure to catastrophic risk). Another form of intervention is government aid during catastrophes. Such aid tends to have adverse effects on insurance adoption and other risk management strategies, because people tend to count on government bailouts. If the premiums do not reflect the risks (due to the subsidization), the insureds have biased incentives for risk management and may take excessive risks.

15.5 Extreme Events

Natural disasters, e.g., hurricanes or earthquakes, cause multiple concurrent claims. Still, when such events are restricted to a specific geographical area, they can be diversified via the global reinsurance market. Indeed, such events are correlated locally but are independent globally [73]. Still, extreme events (rare, high-impact events) can exceed the capabilities of the (re)insurer, causing insolvency [44]. Extreme events include natural catastrophes (e.g., earthquakes, tsunamis); man-made catastrophes (e.g., nuclear plant disasters, terrorist acts); and financial events (e.g., massive shocks to the stock market aggravated by cascades) [18]. Events like the Fukushima Daiichi disaster (2011) or terrorist attacks (9/11) classify as extreme, due to their repercussions for the global economy [64]. The Lloyd's crisis of the 1990s was due to a long tail risk (a long tail risk occurs in multi-year contracts if the insurer lacks the resources to cover the claims), which exploded due to the asbestos liability acquired by US reinsurers. The claims were due to health risks caused by exposure to asbestos fibers.

CPS cyber-attacks may cause extreme events, e.g., a massive power blackout. Thus, CPS risk analysis must address extreme events. Extreme risks require special risk management tools, because (i) they cannot be evaluated from actuarial data and (ii) insurers have to address large capital requirements. Next, we show how to model and manage extreme risks. We discuss terror risks, because they share many properties with CPS risks. We also present an illustrative example of the effects of cyber-insurance on cyber-security investments.

15.5.1 Modeling Extreme Events

Some extreme events have probability distributions that follow power laws [3], which usually do not have finite variance (e.g., it is difficult to assess the maximum impact of extreme events, also called black swans [79]). Thus, risk analysis that assumes Gaussian distributions can lead to biased risk estimates. Extreme value theory (EVT) was developed to analyze extreme and infrequent events. It focuses on the extremes (i.e., the tails) of the distribution. EVT must deal with scarce samples, but it has a strong theoretical background [30, 56]. The most important results in EVT predict that the distribution of extreme events converges to a family of functions (similar to the central limit theorems, see Table 15.4). Below we outline the main results of EVT and refer interested readers to [29, 73].

15.5.1.1 Generalized Extreme Value (GEV) Distribution

Consider a sequence of random variables I_1, I_2, ... with an unknown cdf G(x) = P[I_i ≤ x]. Here I_i may represent insurance claims, as in the collective risk model in Sect. 15.3.2.2. We denote the maximum of the first n observations as

$$M_n = \max\{I_1, \ldots, I_n\}. \qquad (15.15)$$

We can normalize the maximum as (M_n − b_n)/a_n, where b_n and a_n are, respectively, the location and the scale. According to the Fisher–Tippett Theorem [36], if the distribution of the normalized maximum converges, then it converges to a distribution H_ξ:

$$P\!\left[\frac{M_n - b_n}{a_n} \le x\right] = G_{\max}(a_n x + b_n) \to H_\xi(x), \quad \text{as } n \to \infty. \qquad (15.16)$$

Here H_ξ is the generalized extreme value distribution with shape parameter ξ ∈ ℝ, defined as

$$H_\xi(x) = \begin{cases} \exp\!\left(-(1+\xi x)^{-1/\xi}\right) & \text{if } \xi \ne 0,\\ \exp\!\left(-e^{-x}\right) & \text{if } \xi = 0. \end{cases} \qquad (15.17)$$

The generalized extreme value distributions can be classified into three subfamilies (Fig. 15.10 shows some examples of each family with b_n = μ = 80 and a_n = σ = 20):
1. The Gumbel family (ξ = 0) has medium tails with an unlimited domain (x ∈ (−∞, ∞)).
2. The Fréchet family (ξ > 0) has heavy tails (resembling a power law) and a lower limit (x ∈ (x_min, ∞)).
3. The Weibull family (ξ < 0) has short tails with an upper limit (x ∈ (−∞, x_max)).

From the Fisher–Tippett Theorem, one can approximate the distribution of M_n (Eq. (15.15)) by an extreme value distribution with location μ and scale σ: G_max(x) = H_ξ((x − μ)/σ). When fitting the distribution H_ξ to the data, one has to observe the losses and their time of occurrence to identify clusters, trends, or seasonal effects, which signal event correlations (recall that the results in EVT assume that the events are i.i.d.). An important parameter of the GEV distribution that determines the number of samples n is the period over which the maximum is estimated.
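The three subfamilies of Eq. (15.17) can be evaluated with scipy.stats.genextreme, whose shape parameter c equals −ξ. The sketch below is ours; it only mirrors the μ = 80, σ = 20 setup of Fig. 15.10 and is not the code used to generate that figure.

```python
# Sketch: evaluating H_xi of Eq. (15.17) for the Gumbel, Frechet, and Weibull families.
from scipy.stats import genextreme

mu, sigma = 80.0, 20.0          # location and scale, as in Fig. 15.10
families = {"Gumbel (xi=0)": 0.0, "Frechet (xi=0.5)": 0.5, "Weibull (xi=-0.5)": -0.5}

for name, xi in families.items():
    dist = genextreme(c=-xi, loc=mu, scale=sigma)   # SciPy uses c = -xi
    q95, q999 = dist.ppf(0.95), dist.ppf(0.999)
    print(f"{name:>18}: 95% quantile = {q95:7.1f}, 99.9% quantile = {q999:8.1f}")
```

The heavy-tailed Fréchet case produces far larger upper quantiles than the other two families, which is exactly the behavior that matters for extreme losses.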


Table 15.4 Analogues of central limit theorems used in EVT. © [2018] IEEE. Reprinted, with permission, from Ref. [11]
Fisher–Tippett Theorem: if the distribution exists, it converges to the extreme value distribution.
Pickands–Balkema–de Haan Theorem: the tail of a distribution converges to the generalized Pareto distribution.

Fig. 15.10 Examples of the families of extreme value distributions. © [2018] IEEE. Reprinted, with permission, from Ref. [11]

Also, one might fit the distribution only to the data of the extremes (large claims), rather than to all the data. The parameters can be estimated using the maximum likelihood method, while Q-Q plots allow one to examine visually whether a candidate distribution fits the empirical data [56, 73].
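Maximum-likelihood fitting of the GEV to block maxima can be sketched as follows; the synthetic daily losses and the block length are assumptions that merely make the example self-contained (again, SciPy reports c = −ξ).

```python
# Sketch: maximum-likelihood GEV fit to block maxima of synthetic losses.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Assumed raw data: 50 "years" of 250 daily heavy-tailed losses each.
daily = rng.pareto(2.5, size=(50, 250)) * 10.0
block_maxima = daily.max(axis=1)                 # M_n per block, cf. Eq. (15.15)

c_hat, loc_hat, scale_hat = genextreme.fit(block_maxima)
xi_hat = -c_hat                                  # convert SciPy's c to the xi of Eq. (15.17)
print(f"estimated shape xi = {xi_hat:.2f}, location = {loc_hat:.1f}, scale = {scale_hat:.1f}")

# A 100-block return level, i.e., the maximum exceeded once every 100 blocks on average.
print(f"100-block return level = {genextreme.ppf(0.99, c_hat, loc_hat, scale_hat):.1f}")
```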

15.5.1.2 Generalized Pareto Distribution (GPD)

Unlike the GEV distribution, the GPD describes the tail of a distribution, i.e., the distribution of observations exceeding a threshold u. For example, the cdf of insurance claims I_i that surpass u is

$$G_u(x) = P\left[I_i - u \le x \mid I_i > u\right] = \frac{G(x+u) - G(u)}{1 - G(u)}. \qquad (15.18)$$

The Pickands–Balkema–de Haan (PBH) Theorem [8, 68] states that the distribution's tail G_u converges to a GPD as u increases, that is,

$$G_u(x) \to G_{\xi,u,\sigma}(x), \quad \text{as } u \to \infty. \qquad (15.19)$$

The GPD has three parameters, namely the threshold u, the shape ξ, and the scale σ:

$$G_{\xi,u,\sigma}(x) = \begin{cases} 1 - \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi} & \text{if } \xi \ne 0,\\ 1 - \exp\!\left(-\frac{x}{\sigma}\right) & \text{if } \xi = 0. \end{cases} \qquad (15.20)$$

From the PBH Theorem, we can use a GPD to approximate the distribution's tail. Thus, the distribution G for y > u becomes (see Eq. (15.18))

$$\hat{G}(y) = \left(1 - G_n(u)\right) G_{\xi,u,\sigma}(y) + G_n(u), \qquad (15.21)$$

where G_n(u) is the empirical distribution function evaluated at u (i.e., G_n(u) ≈ G(u)). To fit the GPD to the data that exceed the threshold u, one can use methods such as maximum likelihood estimation or probability-weighted moments. There is a trade-off between the quality of the approximation and its bias: with a larger threshold u (and less data to estimate G_{ξ,u,σ}), the approximation is less biased [73]. The lack of real data about extreme events makes it difficult to construct accurate probabilistic models. For this reason, researchers enrich the data with hypothetical losses, as we discuss next.
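A peaks-over-threshold sketch of Eqs. (15.18)-(15.21) is shown below. It is our own illustration: the claim data, the 95% threshold, and the evaluation point are assumptions, and the GPD is fitted with its location fixed at zero so that the fitted shape and scale play the roles of ξ and σ in Eq. (15.20).

```python
# Sketch: peaks-over-threshold tail estimate, Eqs. (15.18)-(15.21).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
claims = rng.lognormal(1.0, 1.2, size=20_000)            # assumed claim data

u = np.quantile(claims, 0.95)                            # threshold: empirical 95% quantile
excesses = claims[claims > u] - u                        # I_i - u for I_i > u

xi_hat, _, sigma_hat = genpareto.fit(excesses, floc=0)   # GPD fit with location fixed at 0
Gn_u = np.mean(claims <= u)                              # empirical G_n(u)

def G_hat(y):
    """Tail approximation in the spirit of Eq. (15.21), for y > u."""
    return Gn_u + (1 - Gn_u) * genpareto.cdf(y - u, xi_hat, loc=0, scale=sigma_hat)

y = np.quantile(claims, 0.999)
print(f"threshold u = {u:.1f}, fitted xi = {xi_hat:.2f}, sigma = {sigma_hat:.2f}")
print(f"empirical P[I <= y] = {np.mean(claims <= y):.4f}, GPD tail estimate = {G_hat(y):.4f}")
```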

15.5.1.3 Catastrophe (CAT) Models

Catastrophe models are used to estimate the cost of catastrophes (such as earthquakes or hurricanes). These models incorporate measures of natural hazards, statistics of past events, geographic information, and data about buildings and their contents to simulate the impact of events [83]. The adoption of CAT models increased after hurricane Andrew in 1992, because the available actuarial data was insufficient to assess the exposure to risks [61]. After Andrew, the spread and growing sophistication of CAT models led to improved underwriting practices that seek to characterize individual risks (e.g., the quality and age of homes). Despite these efforts, the simulations tend to underestimate the losses of extreme events, since standard procedures assume normal distributions for some parameters [73]. Still, CAT models are an important tool to overcome the lack of actuarial data.

15.5.2 Managing Extreme Risks

Although insurers have managed many kinds of risks, it is unclear whether they can manage extreme events [67]. Insurers have used the international reinsurance markets (half of US catastrophic losses are covered by reinsurance, a de facto tool for managing them) and capital markets to diversify catastrophic risks, such as hurricanes or earthquakes [33, 62, 71]. However, some authors believe that highly correlated catastrophic losses (e.g., terrorism) may exceed the market capacity [18]. Hence, extreme events should be handled via social insurance, which requires government help. This argument is supported by the theory of the world risk society, which argues that modern risks (e.g., global warming or terrorism) are products of human activities [12]. These risks are radically uncertain, because their consequences are nearly impossible to assess statistically. This makes them uninsurable [13]. This theory is appealing due to the increasing frequency and impact of catastrophes, e.g., cyber-risks. Its weakness is the lack of empirical support: insurers do insure against extreme events, such as terrorism (which shares many features with CPS risks) [32, 59]. Below we outline risk management for extreme events.

15.5.2.1 Government-Backed Insurance

Some extreme events, such as terrorist attacks, have led to the development of government-backed insurance mechanisms. For example, the bombing of Bishopsgate (in London's financial district) in 1993 caused significant insurer losses. Since it is difficult to assess the frequency or the impact of attacks, reinsurers decided to exclude terrorism from their contracts, which in turn would expose insurers [19]. To avoid this, insurers and the British government created the Pool Reinsurance Company (Pool Re), which supports insurers whose losses exceed £75 million. The government's involvement allows losses to be spread widely. Likewise, the attacks of September 11, 2001 led to the Terrorism Risk Insurance Act (TRIA), which covers insurance claims from terrorism [21, 46]. Table 15.5 compares Pool Re and TRIA. Similar anti-terror strategies were adopted in other countries.

15.5.2.2 Risk-Linked Securities

Catastrophe (CAT) bonds are financial derivatives used to deal with specific catastrophic risks. The insurer transfers its risk to other parties by selling bonds on the capital market. These bonds are investments with the following rules: if the insurer receives no claims from a specific risk, then the bond owners get a return (interest) on their investment; however, if a catastrophic event occurs, then the insurer can keep the capital to cover claims. This mechanism both guarantees the solvency of the insurer and offers an attractive investment opportunity to diversify portfolios [44].
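The payoff rule just described reduces to a few lines of code. The sketch below is a hypothetical, total-loss-trigger CAT bond seen from the investor's side (real contracts often use partial or parametric triggers), and all numbers are illustrative.

```python
# Sketch: cash flow of a simple (hypothetical) CAT bond from the investor's viewpoint.
def cat_bond_payoff(principal: float, coupon_rate: float, catastrophe_occurred: bool) -> float:
    """Return what the bond owner receives at maturity.

    No catastrophe: principal plus coupon.  Catastrophe: the insurer keeps the
    principal to pay claims, so the investor receives nothing (total-loss trigger).
    """
    if catastrophe_occurred:
        return 0.0
    return principal * (1.0 + coupon_rate)

print(cat_bond_payoff(100.0, 0.08, catastrophe_occurred=False))  # 108.0
print(cat_bond_payoff(100.0, 0.08, catastrophe_occurred=True))   # 0.0
```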


Table 15.5 Government reinsurance for terrorism. © [2018] IEEE. Reprinted, with permission, from Ref. [11]
Membership: Pool Re optional, TRIA mandatory.
Collect funds: Pool Re ex-ante, TRIA ex-post.
Premium: Pool Re risk-based, TRIA based on total claims.

15.5.2.3 Proactive Risk Management

Although extreme events seem unpredictable or unimaginable, it is possible to prevent them by recognizing and acting on warning signals. For example, simulations with updated threat estimates warned about the tsunami risk at the Fukushima Daiichi nuclear plant before the accident [1]. Also, the Challenger disaster in 1986 occurred due to a problem that had been known for nearly 15 years. Moreover, although the 9/11 attack seemed unavoidable, previous incidents warned about the risk. Concretely, the World Trade Center was the target of an attack in 1993, and an Air France flight was hijacked in 1994 (to be blown up over the Eiffel Tower). Moreover, FBI agents detected suspicious flight training in the months preceding the attack [65]. These examples show the importance of reevaluating risks based on the evolution of both threats and best practices.

15.5.3 Case Study: Effects of Insurance on Security

The following example illustrates how cyber-insurance affects investments in security protections [11]. Let a firm's utility be

$$U(x) = 1 - e^{-ax}, \quad x \in \mathbb{R}, \ a = 0.01. \qquad (15.22)$$

We assume an initial wealth w_0 = 200 and a cost of system protection C(z) = k_c z^η, where z ∈ [0, 1] is the protection level, k_c = 50, and η = 1.5. Let the losses L have a GEV distribution with parameters μ = 80, σ = 20, and ξ ∈ [0, 1] (the GEV describes the distribution of the maximum event within a time period; here we assume that all events, not only the maximum one, follow the GEV):

$$P[L \le x \mid \xi(z)] = G(x, \xi(z)) = H_{\xi(z)}\!\left(\frac{x - \mu}{\sigma}\right). \qquad (15.23)$$

Let ξ be linear in z: ξ(z) = 1 − z, z ∈ [0, 1]. Then, higher protection reduces the tail of the loss distribution.


Fig. 15.11 Expected utility and social cost; the firm ignores tails of its risk distribution. © [2018] IEEE. Reprinted, with permission, from Ref. [11]

We assume that the firm considers a maximum loss Q_α in its risk analysis. Thus, the expected utility is

$$E[U(w_0 - C(z) - L)] = \int_{0}^{Q_\alpha} U(w_0 - C(z) - x)\, dG(x, \xi(z)). \qquad (15.24)$$

If the firm uses value-at-risk (VaR) with precision α to estimate its risk, then the maximum loss Q_α is the α% quantile of the loss distribution:

$$P[L \ge Q_\alpha \mid \xi(z)] = 1 - \alpha. \qquad (15.25)$$

The social cost L_s is the expected loss not paid by the firm but by third parties, such as shareholders, customers, or the government:

$$L_s(z) = \int_{Q_\alpha}^{\infty} (x - Q_\alpha)\, dG(x, \xi(z)). \qquad (15.26)$$

Figure 15.11a and b depict the expected utility and the social cost as functions of the investment in protection, respectively. When the firm ignores the tails of its risk (lower α%-VaR; with 90%-VaR or 95%-VaR, events occurring once in 10 or 20 years, respectively, are ignored), the firm invests less in protection at the expense of a higher social cost. This captures observed CPS behavior and identifies some factors behind security underinvestment.
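The qualitative behavior behind Fig. 15.11 can be reproduced numerically from Eqs. (15.22)-(15.26). The sketch below is our own approximation rather than the authors' code: it uses Monte Carlo sampling of the GEV losses, fixes the VaR level at α = 0.95, keeps ξ(z) slightly away from zero for numerical convenience, and truncates the utility integral at Q_α as in our reconstruction of Eq. (15.24).

```python
# Sketch: expected utility (Eq. (15.24)) and social cost (Eq. (15.26)) vs. protection z.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
w0, a, kc, eta = 200.0, 0.01, 50.0, 1.5      # parameters of Eq. (15.22) and C(z)
mu, sigma, alpha = 80.0, 20.0, 0.95          # GEV location/scale; VaR level (assumed)
U = lambda x: 1.0 - np.exp(-a * x)
C = lambda z: kc * z ** eta

def loss_dist(z):
    xi = np.clip(1.0 - z, 1e-3, 1.0)         # xi(z) = 1 - z, kept off 0 for convenience
    return genextreme(c=-xi, loc=mu, scale=sigma)   # SciPy shape c = -xi

for z in (0.0, 0.25, 0.5, 0.75, 1.0):
    dist = loss_dist(z)
    Q = dist.ppf(alpha)                      # Q_alpha, the alpha-quantile of Eq. (15.25)
    L = dist.rvs(size=200_000, random_state=rng)
    inside = (L >= 0) & (L <= Q)             # losses the firm actually accounts for
    EU = np.mean(U(w0 - C(z) - L) * inside)  # Monte Carlo of the truncated integral (15.24)
    Ls = np.mean(np.maximum(L - Q, 0.0))     # Monte Carlo of the social cost (15.26)
    print(f"z={z:4.2f}: Q_alpha={Q:7.1f}  E[U]={EU:6.3f}  social cost={Ls:6.2f}")
```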

15.5.3.1 Effects of Insurance

Consider the effects of insurance covering at most Q_α (an insurer will not accept the risk of extreme events unless she is prepared for them). We assume actuarially fair premiums: P(z) = ∫_0^{Q_α} x dG(x, ξ(z)). From Fig. 15.12, full liability improves investment in security. Moreover, with insurance, profits are higher but investments in protection are lower (regardless of the liability level). Here insurance fails to improve security.


Fig. 15.12 Expected utility without and with insurance (limited and full liability). © [2018] IEEE. Reprinted, with permission, from Ref. [11]

15.6 Concluding Remarks

The dangers of large-scale CPS failures, especially in critical infrastructures, combined with the difficulty of cyber-risk assessment, make us question the insurability of CPS. We stress that advancing cyber-insurance for CPS requires improving the precision of cyber-risk assessment and the capabilities of cyber-forensics, as also argued in the cyber-insurance review [23] targeting the engineering community. We conclude that, in the absence of new policies, cyber-insurance for CPS may stall due to limited insurance instruments. Market advancement requires overcoming data scarcity and the lack of standardization. We call for further research in CPS risk management, and specifically the design and evaluation of novel technical tools and government policies that improve the incentives of CPS decision-makers to collect and share security-related data.

Acknowledgements We are grateful to Anna Phuzhnikov and Vlad Zbarsky for valuable comments on our draft. This chapter was partially supported by NSF CNS-1931573.

References

1. Acton, J.M., Hibbs, M.: Why Fukushima was preventable. http://carnegieendowment.org/files/fukushima.pdf (2012) 2. Akerlof, G.A.: The market for “lemons”: Quality uncertainty and the market mechanism. Q. J. Econ. 488–500 (1970) 3. Andriani, P., McKelvey, B.: Beyond gaussian averages: redirecting international business and management research toward extreme events and power laws. J. Int. Bus. Stud. 38(7), 1212–1230 (2007) 4. Arrow, K.J.: Economic welfare and the allocation of resources for invention. In: N.B. of economic research (ed.) The Rate and Direction of Inventive Activity: Economic and Social Factors, pp. 609–626. Princeton University Press, Princeton (1962) 5. Arrow, K.J.: Uncertainty and the welfare economics of medical care. Amer. Econ. Rev. 53(5), 941–973 (1963) 6. Autor, D.: Adverse selection, risk aversion and insurance markets. https://dspace.mit.edu/bitstream/handle/1721.1/71009/14-03-fall-2004/contents/lecture-notes/lecture15.pdf (2010). Accessed 24 Sept 2019 7. Bain, A.: Insurance spirals and the Lloyd’s market (1997) 8. Balkema, A.A., De Haan, L.: Residual life time at great age. Ann. Probab. 792–804 (1974)


9. Bandyopadhyay, T., Mookerjee, V.S., Rao, R.C.: Why it managers don‘t go for cyber-insurance products. Commun. ACM 52(11), 68–73 (2009) 10. Barnes, J.: The deficit millionaires. http://www.newyorker.com/magazine/1993/09/20/thedeficit-millionaires (1993). Accessed 13 Jan 2017 11. Barreto, C., Cardenas, A.A., Schwartz, G.: Cyber-insurance for cyber-physical systems. In: 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 1704–1711 (2018) 12. Beck, U.: Risk society: towards a new modernity. Sage 17 (1992) 13. Beck, U., Willms, J.: Conversations with Ulrich Beck. Wiley, New York (2014) 14. Biener, C., Eling, M., Wirfs, J.H.: Insurability of cyber risk: an empirical analysis. The Geneva Papers on Risk and Insurance Issues and Practice 40(1), 131–158 (2015) 15. Böhme, R., Schwartz, G.: Modeling cyber-insurance: towards a unifying framework. In: The Annual Workshop on the Economics of Information Security (WEIS) (2010) 16. Boland, P.J.: Statistical and Probabilistic Methods in Actuarial Science. CRC Press, Boca Raton (2007) 17. Borch, K.: Equilibrium in a reinsurance market. Econometrica 30(3), 424–444 (1962) 18. Bougen, P.D.: Catastrophe risk. Econ. Soc. 32(2), 253–274 (2003) 19. Brice, W.B.: British government reinsurance and acts of terrorism: the problems of Pool Re. U. Pa. J. Int’l Bus. L. 15, 441 (1994) 20. Commission, E.: Protection of critical infrastructure. https://ec.europa.eu/energy/en/topics/ infrastructure/protection-critical-infrastructure (2014). Accessed 29 June 2018 21. Cummins, J.D., Lewis, C.M.: Catastrophic events, parameter uncertainty and the breakdown of implicit long-term contracting: the case of terrorism insurance. In: The Risks of Terrorism, pp. 55–80. Springer, Berlin (2003) 22. Dahlby, B.G.: Adverse selection and pareto improvements through compulsory insurance. Public Choice 37(3), 547–558 (1981) 23. Dambra, S., Bilge, L., Balzarotti, D.: SoK: Cyber insurance–technical challenges and a system security roadmap. In: 2020 IEEE Symposium on Security and Privacy (SP), pp. 293–309 (2020) 24. Daniel, M.: Incentives to support adoption of the cybersecurity framework. https://www. dhs.gov/blog/2013/08/06/incentives-support-adoption-cybersecurity-framework (2013). Accessed 29 June 2018 25. Dickson, D.C.: Insurance Risk and Ruin. Cambridge University Press, Cambridge (2005) 26. Ehrlich, I., Becker, G.S.: Market insurance, self-insurance, and self-protection. J. Polit. Econ. 80(4), 623–648 (1972) 27. Eling, M., Wirfs, J.H.: Cyber risk: Too big to insure? Risk transfer options for a mercurial risk class. Technical Report, Institute of Insurance Economics (2016) 28. Elliott, A.: Sewage spill during the blackout exposed a lingering city problem. https://www. nytimes.com/2003/08/28/nyregion/sewage-spill-during-the-blackout-exposed-a-lingeringcity-problem.html (2003) 29. Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events: For Insurance and Finance, vol. 33. Springer Science & Business Media, Berlin (2013) 30. Embrechts, P., Resnick, S.I., Samorodnitsky, G.: Extreme value theory as a risk management tool. North Amer. Actuar. J. 3(2), 30–41 (1999) 31. ENISA: Cyber insurance: Recent advances, good practices and challenges. Technical Report, European Union Agency For Network and Information Security (2016) 32. Ericson, R., Doyle, A.: Catastrophe risk, insurance and terrorism. Econ. Soc. 33(2), 135–173 (2004) 33. Ericson, R.V., Doyle, A.: Uncertain Business: Risk, Insurance, and the Limits of Knowledge. 
University of Toronto Press, Toronto (2004) 34. Faure, M.G.: Economic criteria for compulsory insurance. The Geneva Papers on Risk and Insurance - Issues and Practice 31(1), 149–168 (2006). https://doi.org/10.1057/palgrave.gpp. 2510063 35. Finkle, J.: Hackers halt plant operations in watershed cyber attack. https://www.reuters.com/ article/us-cyber-infrastructure-attack/hackers-halt-plant-operations-in-watershed-cyberattack-idUSKBN1E8271 (2017). Accessed 16 April 2018


36. Fisher, R.A., Tippett, L.H.C.: Limiting forms of the frequency distribution of the largest or smallest member of a sample. Math. Proceed. Cambr. Philos. Soc. 24, 180–190 (1928) 37. Franke, U.: Cyber insurance against electronic payment service outages. In: International Workshop on Security and Trust Management, pp. 73–84. Springer, Berlin (2018) 38. Gibson, J.P.: Terrorism insurance coverage for commercial property-a status report. https:// www.irmi.com/articles/expert-commentary/terrorism-insurance-coverage-for-commercialproperty-a-status-report/ (2002). Accessed 14 Nov 2017 39. Gill, H.: From vision to reality: cyber-physical systems. In: HCSS National Workshop on New Research Directions for High Confidence Transportation CPS: Automotive, Aviation, and Rail (2008) 40. Greenberg, A.: Hackers remotely kill a jeep on the highway – with me in it. https://www.wired. com/2015/07/hackers-remotely-kill-jeep-highway/ (2015). Accessed 24 Jan 2018 41. Grundy, C.: All data breaches in 2019 – an alarming timeline. https://selfkey.org/data-breachesin-2019/ (2019). Accessed 30 Sept 2019 42. Department of Homeland Security: Executive order 13636: Improving critical infrastructure cybersecurity. https://www.dhs.gov/sites/default/files/publications/dhs-eo13636-analyticreport-cybersecurity-incentives-study.pdf (2013). Accessed 18 May 2017 43. Institute, I.I.: Insurance industry at a glance. https://www.iii.org/publications/insurancehandbook/introduction/insurance-industry-at-a-glance (2015). Accessed 14 Nov 2017 44. Jaffee, D., Russell, T.: Markets under stress: The case of extreme event insurance. Economics for an Imperfect World: Essays in Honor of Joseph E. Stiglitz pp. 35–52 (2003) 45. Kahneman, D.: Thinking, Fast and Slow. Macmillan, New York (2011) 46. Klein, R.W.: Insurance market regulation: Catastrophe risk, competition, and systemic risk. In: Handbook of Insurance, pp. 909–939. Springer, Berlin (2013) 47. Koppel, T.: Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath. Broadway Books, Portland (2016) 48. Kunreuther, H.: The role of insurance in reducing losses from extreme events: The need for public-private partnerships. The Geneva Papers on Risk and Insurance Issues and Practice 40(4), 741–762 (2015) 49. Laffont, J.J., Martimort, D.: The Theory of Incentives: The Principal-Agent Model. Princeton University Press, Princeton (2009) 50. Langner, R., Pederson, P.: Bound to fail: Why cyber security risk cannot simply be “managed” away. Technical Report, Brookings (2013) 51. Latham & Watkins: Cyber insurance: a last line of defense when technology fails. Technical Report, Latham & Watkins (2014) 52. Leverett, E., Clayton, R., Anderson, R.: Standardisation and certification of the äòinternet of things. In: The Annual Workshop on the Economics of Information Security (WEIS) (2017) 53. Loukas, G.: Cyber-Physical Attacks: A Growing Invisible Threat. Butterworth-Heinemann, Oxford (2015) 54. Marotta, A., Martinelli, F., Nanni, S., Yautsiukhin, A.: A survey on cyber-insurance. Comput. Sci. Rev. (2015) 55. Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, vol. 1. Oxford University Press, New York (1995) 56. McNeil, A.J.: Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bull. 27(01), 117–137 (1997) 57. Mikosch, T.: Non-life Insurance Mathematics: An Introduction with Stochastic Processes. Universitext. Springer, Berlin (2006) 58. Morgan, S.: Cybersecurity market reaches $75 billion in 2015, expected to reach $170 billion by 2020. 
https://www.forbes.com/sites/stevemorgan (2015). Accessed 29 June 2018 59. Mythen, G., Walklate, S.: Beyond the Risk Society: Critical Reflections on Risk and Human Security. McGraw-Hill Education, Maidenhead (2006) 60. Newman, L.H.: Medical devices are the next security nightmare. https://www.wired.com/2017/ 03/medical-devices-next-security-nightmare/ (2015). Accessed 24 Jan 2018


61. O’Connor, A.: 25 years later: How Florida’s insurance industry has changed since hurricane Andrew. https://www.insurancejournal.com/news/southeast/2017/08/24/462204.htm (2017). Accessed 30 Sept 2019 62. O’malley, P.: Governable catastrophes: a comment on Bougen. Econ. Soc. 32(2), 275–279 (2003) 63. Panjer, H.H.: Recursive evaluation of a family of compound distributions. Astin Bull. 12(01), 22–26 (1981) 64. Paté-Cornell, E.: On “black swans” and “perfect storms”: risk analysis and management when statistics are not enough. Risk Anal. 32(11), 1823–1833 (2012) 65. Paté-Cornell, E., Cox Jr., L.A.: Improving risk management: from lame excuses to principled practice. Risk Anal. 34(7), 1228–1239 (2014) 66. Pauly, M.V.: Overinsurance and public provision of insurance: the roles of moral hazard and adverse selection. Q. J. Econ. 88(1), 44–62 (1974) 67. Petersen, K.L.: Terrorism: when risk meets security. Alternatives 33(2), 173–190 (2008) 68. Pickands III, J.: Statistical inference using extreme order statistics. Ann. Stat. 119–131 (1975) 69. Institute, Ponemon: 2015 global cyber impact report. Technical Report, Ponemon Institute (2015) 70. Rajkumar, R., Lee, I., Sha, L., Stankovic, J.: Cyber-physical systems: the next computing revolution. In: Design Automation Conference, pp. 731–736 (2010). https://doi.org/10.1145/ 1837274.1837461 71. Swiss Re: The benefit of global diversification: how reinsurers create value and manage risk (2016). https://www.swissre.com/institute/research/topics-and-risk-dialogues/economy-andinsurance-outlook/The-benefit-of-global-diversification--how-reinsurers-create-value-andmanage-risk.html 72. Romanosky, S., Ablon, L., Kuehn, A., Jones, T.: Content analysis of cyber insurance policies: how do carriers write policies and price cyber risk? Working paper, RAND Corporation (2017) 73. Sanders, D.E.A.: The modelling of extreme events. Br. Actuar. J. 11(3), 519–572 (2005) 74. Security, T.: MEDJACK.4: Medical device hijacking. Technical Report, TrapX Security (2018) 75. Simmons, L.D., Konia, A., Davey, J.: Insurance coverage for lost profits arising from cyber attacks on the u.s. power grid. https://www.passwordprotectedlaw.com/2017/01/insurancecoverage-cyber-attacks-on-power-grid/ (2017). Accessed 14 Nov 2017 76. Smith, M.L., Kane, S.A.: The Law of Large Numbers and the Strength of Insurance, pp. 1–27. Springer Netherlands, Dordrecht (1994). https://doi.org/10.1007/978-94-011-1378-6_1 77. Stockman, P.K.: Cyber risk “IRL”: insurance issues arising from cyber-related property damage and bodily injury claims. https://www.passwordprotectedlaw.com/2016/09/cyber-riskinsurance-issues/ (2016). Accessed 14 Nov 2017 78. Sztipanovits, J.: Composition of cyber-physical systems. In: 14th Annual IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, 2007. ECBS’07, pp. 3–6. IEEE (2007) 79. Taleb, N.N.: The Black Swan: The Impact of the Highly Improbable, vol. 2. Random House, New York (2007) 80. The White House: Presidential executive order on strengthening the cybersecurity of federal networks and critical infrastructure. https://www.whitehouse.gov/the-press-office/2017/ 05/11/presidential-executive-order-strengthening-cybersecurity-federal (2017). Accessed 18 May 2017 81. Tøndel, I.A., Meland, P.H., Omerovic, A., Gjære, E.A., Solhaug, B.: Using cyber-insurance as a risk management strategy: knowledge gaps and recommendations for further research. Technical Report, SINTEF (2015) 82. Wakker, P., Thaler, R., Tversky, A.: Probabilistic insurance. J. 
Risk Uncertainty 15(1), 7–28 (1997) 83. Walker, G.R.: Earthquake engineering and insurance: past, present and future. Aon Re Australia (2000) 84. Waterfall: Industrial cyber insurance comes of age. Technical Report, Waterfall (2018)


85. Zetter, K.: An unprecedented look at Stuxnet, the world’s first digital weapon. http://www. wired.com/2014/11/countdown-to-zero-day-stuxnet/ (2014). Accessed 29 June 2018 86. Zetter, K.: Inside the cunning, unprecedented hack of Ukraine‘s power grid. http://www.wired. com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/ (2016). Accessed 16 Oct 2017

Chapter 16
Concluding Remarks and Future Outlook

Riccardo M. G. Ferrari

Abstract In this final chapter, we summarize the results and conclusions from each individual contribution in the book, and take the opportunity to provide our and other authors’ insights into possible future developments in the fields of safety, security, and privacy for cyber-physical systems. In doing so, we make use of the taxonomy presented in Chap. 1, and use its pictorial representation to both quantify the breadth of current results and to identify unexplored and promising research avenues.

16.1 Looking Back: The Contributions of This Book

The present section gives an overview of the solutions and the results provided by each chapter, supported by the taxonomy that we proposed in Chap. 1, whose pictorial representation is repeated for the reader's convenience in Fig. 16.1. In the figure, chapters are represented with round colored dots, making it easy to identify at a glance which of the possible combinations of attack types and mitigation measures received more attention. At first sight, indeed, we can see that most chapters deal with the detection of deception attacks, a preference that mirrors the distribution of papers on the security of CPS from the control systems community. The chapters that propose resilient approaches, instead, are quite equally distributed between the disclosure, deception, and disruption kinds of attacks. Regarding prevention mechanisms, only the case of disclosure attacks is considered here. Finally, Chaps. 14 and 15 are positioned right in the center of the diagram. Being devoted to risk assessment and risk insurance (i.e., the transfer of risk to other entities), we would classify them as preventive measures, but they are not specific to any given class of attacks. In the following, we provide a brief summary, for each attack and mitigation mechanism category, of the chapters that addressed them.



Fig. 16.1 We repeat here, for the reader’s convenience, the pictorial representation of the taxonomy introduced in Chap. 1. The Prevention, Resilience, and Detection (PRD) triad is depicted via concentric circles, while the Confidentiality, Integrity, and Availability (CIA) and the Disclosure, Deception, and Disruption (DDD) triads are represented via slices of such circles. Each resulting sector corresponds to a possible combination of one security breach (CIA and DDD triads) and one possible mitigation measure (PRD triad). Each chapter of this book is thus represented as one or more orange dots accompanied by its number, positioned in one or more of such sectors according to its contribution

16.1.1 Deception Attacks and Loss of Integrity

As anticipated, this is the kind of attack that received the most attention, especially with regard to mechanisms employing detection techniques, which are dealt with in Chaps. 4 through 10.

Chapter 4 analyzed the sensitivity of plants modeled as linear dynamical systems to false data injection attacks. One of the key results is that plants with unstable poles are always sensitive to sensor attacks, while plants with unstable zeros are sensitive to actuator attacks. Furthermore, a frequency domain characterization of the danger associated with sensor and actuator attacks is provided. The chapter thus builds a valuable foundation for better understanding the importance of the attack detection techniques described in Chap. 5, and in particular of approaches such as watermarking and coding, which are the focus of Chaps. 8 and 9.

The capability of a class of attacks to cause damage while staying undetected is investigated in Chap. 5. In particular, the fundamental concept of ε-stealthiness, which is widely used in several works including Chaps. 6 and 9, is defined in a novel way. The importance of this concept is further underlined by linking it to a bound on the


amount of degradation that an ε-stealthy attack can cause to the estimate produced by a Kalman filter.

The characterization of the impact that an attack can have on a closed-loop control system is the focus of Chap. 6 as well. Here, a novel twist is added by taking an approach inspired by robust control. In particular, classical metrics from robust control are revisited in the context of malicious, knowledgeable attacks. As a result, a new class of metrics based on dissipativity theory, which takes into account both detectability and degradation capability, is introduced. Such a new metric can be used as a guideline for the design of controllers and detectors with better robustness against malicious attacks.

Chapter 7 extends the analysis presented previously on the problem of detecting sensor attacks in linear dynamical systems. A further piece is added, namely the problem of safely reconstructing the true sensor output in the presence of an attack. Structural conditions on detectability and reconstructability are provided, and the extension to nonlinear systems is discussed as well. Furthermore, a solution to the secure state estimation problem based on a framework called Satisfiability Modulo Convex Programming is presented. Since such problems are inherently combinatorial, polynomial-time characterizations are given for special cases.

Watermarking techniques, which take inspiration from authentication methods with weak cryptographic guarantees, have received significant attention recently. Chapter 8 delves into such an approach by presenting a survey of the watermarking literature applied to CPSs. Furthermore, it proposes a so-called physical watermarking technique, where a random additive signal of known covariance is injected into the plant input. Extensions with respect to prior works include the use of a Hidden Markov Model for generating the watermark signal, while the control performance is characterized via an LQG cost function. A Neyman–Pearson detector is proposed, and the detection performance of the scheme is characterized via the Kullback–Leibler divergence. This leads to designing the optimal watermark with the best trade-off between detectability and control performance. Online learning for uncertain or time-varying systems is furthermore considered, and the case of multiplicative watermarking is described. Future work may include addressing more intelligent kinds of attacks and testing in a real scenario.

In Chap. 9, a multiplicative watermarking scheme is proposed instead. The scheme is based on applying a watermark to each sensor's output before it is transmitted on a network where a Man-in-the-Middle attacker may be present. Upon reception of the watermarked signal at the controller's side, the watermark presence is checked for, and the watermark is removed before computing the controller action. Such an approach makes it possible to detect an attack that corrupted the sensor data while in transit. Furthermore, during normal operation, it does not incur any control performance penalty, thanks to the watermark being removed. The potential of the scheme is demonstrated against a stealthy false data injection attack that would otherwise have been undetectable.

Finally, Chap. 10 considers the case of implementing model-based anomaly detection schemes for distributed systems while protecting the privacy of each individual subsystem. In particular, the local input of each subsystem is considered as private


data, as it can leak information about the users of the subsystem. The proposed approach is based on Differential Privacy, and theoretical results on detection performance in terms of false alarms and missed detections are provided. Such results address both the case where the presence of privacy-preserving mechanisms is known, and the case where this information is itself kept private by the subsystems.

Among the solutions that instead provide resilience against deception attacks, we can list the above-mentioned Chaps. 5, 6, and 7 and, in addition, Chap. 13. As a final defense against adversaries, Chap. 13 develops the concept of deception-as-defense. The problem formulation in this case considers two agents, one of which possesses information of use to the other, engaged in a noncooperative communication and control setting. The solution concept utilized is that of the Stackelberg equilibrium, and deception-as-defense is defined in a game-theoretic framework. Such a framework is shown to also cover the non-cooperative control problem where a sensor observes the state of a system and a controller drives it according to a quadratic control objective. Ultimately, the goal of the defender is to craft the information available to the adversary so as to induce it to act in a way that is advantageous for the defender itself, and this is made possible by the fact that the defender possesses an information advantage.

16.1.2 Disclosure Attacks and Loss of Confidentiality

This kind of attack was the center of attention of Chaps. 11 and 12 and, in part, of the previously described Chaps. 10 and 13. In particular, Chap. 10 considered what can be seen as a resilience measure against disclosure, Differential Privacy (DP). DP indeed aims to make it difficult, in a probabilistic sense, to acquire knowledge about an individual even when data is disclosed.

While most of the other chapters consider active attacks, that is, attacks that can alter data transmitted from sensors or sent to actuators, Chap. 11 addresses resilience against passive, eavesdropping attacks. An event-triggered communication schedule from a sensor to a remote estimator is devised that can guarantee boundedness of the estimation error covariance at the estimator side. In particular, a bound can be proven also for unstable systems, while at the same time making the covariance of the estimation error at the eavesdropper unbounded. Similar results are obtained when an information-theoretic measure is introduced to formulate such an optimal transmission problem, although in that case it turns out that the eavesdropper will necessarily obtain a non-zero expected amount of information.

Chapter 12 illustrates that a preventive approach to guaranteeing the confidentiality of the data exchanged by sensors, actuators, and controllers over a network consists of employing encryption. One disadvantage of applying encryption to control systems is the computational and time overhead necessary for encrypting data at the originating node, such as a sensor, decrypting it at the controller for computing the control action, encrypting the result for sending it to actuators, and, finally, decrypting it again before applying it. Partial homomorphic encryption denotes a family of


algorithms that allow performing a subset of mathematical operations directly on ciphertexts. Their use can indeed avoid the need for intermediate decryption and encryption steps at the controller node. This chapter explores the application of partial homomorphic encryption to the networked control of nonlinear systems, and provides conditions for recovering the original stability results of non-encrypted controllers. Future work will include applications to hybrid systems and the implementation of dynamic controllers.

16.1.3 Disruption Attacks and Loss of Availability

Two chapters are dedicated to availability in general and to Denial-of-Service (DoS) attacks. In Chap. 2 the authors considered a distributed system consisting of interconnected units that cooperate toward a common goal. For instance, this could include an electric smart grid where several power plants cooperate, or a fleet of transportation vessels. A coordinator is designed in order to assign tasks to units in a flexible and adaptive way, taking into consideration the availability of units and faults affecting the network. In the case of blocking faults, the supervisor can also re-evaluate a new feasible common task. While such a scheme is not specifically designed for cyber-attacks, it is general enough to be applied to the case where one or more units are unavailable due to an attack. It thus shows how a networked system can be made robust by adding a supervision layer implementing the task assignment protocol.

Chapter 3 addressed the question of designing DoS-resilient networked control systems. Results on the stability of linear systems under DoS are presented, which depend on attack duration and frequency and on the system eigenvalues. Two open problems are discussed, namely optimality and the role of transmission scheduling. Regarding the latter, event-triggered control is proposed as a way to increase robustness against DoS: in this case it is more difficult for an attacker to figure out the sensor transmission logic and thus block transmissions. This is, in particular, relevant under the common, realistic assumption that the attacker has a limited energy budget and as such cannot block communication at all times. Then, finite-time observers are investigated as another way of combating DoS, by allowing a controller to estimate the plant state through an attack, until the next new measurement is received. Finally, the robustness against DoS of a specific consensus algorithm in a distributed system is investigated, as well as the role of more complex dynamics and of critical links.

16.1.4 General Contributions

As anticipated, Chaps. 14 and 15 constitute a category of their own: they are not directed at any attack in particular, but instead propose risk management and outsourcing techniques that can benefit every CPS where cyber-security is a concern.

While the previous chapters investigated theoretically the possibility of preventing, resisting or detecting, and mitigating the effects of attacks on CPSs, Chap. 14 and its companion chapter dive into the economic side of CPS security. Readying cyber-security defenses requires monetary and human investments and faces practical difficulties that are harder to overcome than in conventional IT systems. To justify such investments, the risks associated with cyber-attacks must be assessed, using, for instance, methodologies borrowed from the finance and investment sectors, such as the Value at Risk (VaR). Once risks are quantified, it is possible to decide whether to simply accept them, implement measures to prevent or mitigate them, or transfer their economic consequences to a third party via a cyber-insurance. A standing issue, however, is the lack of sufficient data on cyber-attacks with which to compute cyber-risks with high enough confidence. Chapter 15 recalls that the practice of insuring against unpredictable losses, such as a shipwreck crippling a merchant fleet, dates back thousands of years. The application of insurance to cyber-risks is not only a young field, but also poses significant challenges to insurers due to the scarce availability of data on which to calculate risks and, thus, premiums. Furthermore, cyber-risks may include the occurrence of extreme events, that is, events with a very low probability but catastrophic impact. Examples of extreme events include terrorist attacks, earthquakes, and, in the cyber case, the effect of a large-scale attack on a national critical infrastructure. Theoretical tools such as the Generalized Extreme Value Distribution, the Generalized Pareto Distribution, and Catastrophe Models are proposed to fill this gap.
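
A minimal Monte Carlo sketch of such a VaR-style assessment is given below, with all figures invented for illustration: incident counts are Poisson-distributed and severities follow a heavy-tailed Pareto law, so that the rare, catastrophic events discussed in Chap. 15 dominate the tail quantiles.

```python
# Hypothetical Monte Carlo sketch of a Value at Risk (VaR) computation for
# annual cyber-loss. All parameters are invented: Poisson incident counts and
# heavy-tailed (Pareto) severities stand in for rare, catastrophic events.
import numpy as np

rng = np.random.default_rng(0)

def simulate_annual_losses(years=100_000, rate=3.0, scale=50e3, alpha=1.8):
    counts = rng.poisson(rate, size=years)                  # incidents per year
    return np.array([scale * (1.0 + rng.pareto(alpha, n)).sum() for n in counts])

losses = simulate_annual_losses()
var95, var99 = np.quantile(losses, [0.95, 0.99])            # Value at Risk
es99 = losses[losses >= var99].mean()                       # expected shortfall
print(f"VaR 95%: {var95:,.0f}  VaR 99%: {var99:,.0f}  ES 99%: {es99:,.0f}")
```

With a heavier tail (smaller alpha), the high quantiles and the expected shortfall grow disproportionately, which is exactly the regime where the extreme-value tools of Chap. 15 become necessary.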

16.2 Looking Forward: Future Outlook

This section proposes our outlook on possible future developments in the field of safety, security, and privacy for CPSs. Two points of view are taken. The first focuses on specific areas that could be explored, based on the analysis presented in Fig. 16.1. The second looks at general, mostly theoretical advancements that could benefit the entire field, based on insights from the chapters of this book and on the editor's own standpoint.

16.2.1 Specific Areas to Explore

Looking again at Fig. 16.1, we can spot areas that are under-represented or not addressed by the chapters of the present book. While the book does not aim at, nor pretend to, completeness, the distribution seen in Fig. 16.1 is representative of current trends in the control systems community, as testified, for instance, by recent surveys such as [14, 17]. In the following, we thus identify specific new avenues for research in the least visited intersections of the CIA and PRD triads.

16.2.1.1 Detection of and Resilience to Disclosures

By looking at the sector corresponding to the disclosure class of attacks in Fig. 16.1, we can notice that no chapter addressed their detection, and few focused on resilience against them. As a possible contribution to the detection of disclosures, we suggest the use of watermarking, taking inspiration from what is done in the entertainment industry to track the illicit distribution of copyrighted material such as live television broadcasts [52]. While Chap. 9 employed watermarking to make some classes of stealthy deception attacks detectable, its use for disclosure detection has not yet been explored in the control systems community. The idea of tracking unauthorized data copying via watermarking is instead central to reference works from the signal processing literature such as [8, 27, 51] and to general works on watermarking and steganography such as [16, 46]. In the case of a CPS, the data to be protected consist of historical time sequences of sensor measurements or actuator commands, which are usually regarded as confidential information because they can be linked to the specific know-how of a CPS operator. A possible research goal would thus be to guarantee the detectability of a disclosure while minimizing the loss of control performance caused by the presence of the watermark in the CPS data. A further interesting extension would be the application of Zero-Knowledge Proof (ZKP) techniques to the CPS context: ZKP allows a legitimate party to prove ownership of a watermarked piece of data without revealing its own secret watermark [1].

Another approach worth investigating is to purposely release fake or dummy data and to check whether that data is later used by an unsuspecting eavesdropper. On the one hand, such an approach would be another example of deception-as-a-defense, akin to Chap. 13. On the other hand, it can be regarded as an extreme case of watermarking, as the data would not just be altered by a watermark but replaced altogether at selected time instants. It is reasonable to assume that the CPS component consuming such data would know at which times dummy data is transmitted, and could ignore it, while an attacker would not possess this knowledge. This problem has been considered in the communication systems literature, especially for wireless sensor networks (WSNs). In order to protect the location of transmitting nodes, or of events detected by such nodes (see the Panda Hunter problem of [29]), traffic decorrelation, redirection, or dummy transmissions are employed by [11, 33, 41]. In the field of control systems, Chaps. 11 and 13 did consider the problem of sensor transmission scheduling and deceptive alteration in order to impair an eavesdropper's knowledge of the true sensor measurement. An apparently unexplored direction is the use of traffic decorrelation and dummy transmissions in our field, applied in particular to event-triggered monitoring or control.

Resilience to disclosure is important when preventing disclosure is not an acceptable option. As examples, consider all the situations where one party agrees to disclose its own data in order to receive a benefit. This includes, for instance, distributed and cloud-based monitoring and control architectures, where some form of multi-party computation is needed. Still, the releasing party would like to avoid its own data being used in an unauthorized and malicious way, which could lead to harmful actions by an adversary. A natural way to formulate this conflicting goal is to rely on the concept of privacy and thus find an acceptable privacy-utility trade-off. Here, the CPS literature is mainly divided into two approaches. On one side, we have the family of techniques based on Differential Privacy, on which Chap. 10 focused; possible developments in this field are discussed later in this section. On the other side, we have approaches based on secure multi-party computation, which lead to enforcing a concept called cryptographic multi-party privacy [3, 4]. The theoretical foundations of this family of works are built on the concepts of homomorphic encryption (as in Chap. 12) and secret sharing [45]. While homomorphic encryption is a powerful theoretical tool, practical limitations mean that only partially homomorphic schemes are implementable in real time with currently available computational resources. The quest for fully homomorphic solutions applicable to real-time control problems, as well as for solutions that only require partially homomorphic schemes, is a highly relevant research problem, and we expect many results from our community in the near future.
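
As a toy illustration of the watermark-based disclosure detection idea sketched above (a plain spread-spectrum construction, not a scheme proposed in this book), the operator adds a small secret pseudo-random sequence to a sensor log before sharing it; a leaked copy can later be attributed by correlating it against that secret.

```python
# Toy sketch of watermark-based disclosure detection on a sensor log.
# Signal, amplitude and threshold are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
y = np.sin(0.05 * np.arange(N)) + 0.1 * rng.standard_normal(N)   # sensor log

secret = rng.choice([-1.0, 1.0], size=N)        # operator's secret spreading key
eps = 0.05                                      # watermark amplitude, small vs. signal
y_shared = y + eps * secret                     # copy handed to the third party

def carries_watermark(data, key, thresh=4.0):
    """z-score of the correlation with the secret key; ~N(0,1) if unmarked."""
    z = float(data @ key) / (data.std() * np.sqrt(key.size))
    return z > thresh, round(z, 1)

print(carries_watermark(y_shared, secret))      # (True,  large z)  leaked copy detected
print(carries_watermark(y, secret))             # (False, near 0)   original is clean
```

The research question raised above is visible even in this toy: the amplitude eps trades off detection reliability against the perturbation the watermark introduces into the data used for control.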

16.2.1.2 Detection of Disruptions

It should not come as too much of a surprise that the detection of disruptions, i.e., of the loss of availability, has not received much attention from the control systems community. Indeed, most control systems use data sampled at regular, equally spaced time instants, a case that includes networked control systems with time-periodic triggering. Under this assumption, detecting a communication disruption only requires checking that a new transmission has been received before the next sampling instant. Consequently, most papers dealing with resilience against disruptions safely assume that the presence of the attack can be detected and is therefore known. Other papers propose robust approaches instead, such as Chap. 3 in the present book, which only requires knowledge of the average attack length and frequency of occurrence. Still, disruption attacks do not necessarily require a transmission to be blocked. For instance, as proposed in [31], they can be implemented by preventing the normal update of measured values in an Industrial Control System (ICS). In that work, a bug in a Programmable Logic Controller (PLC) is exploited, causing a sensor reading to be held constant indefinitely without raising an error. This kind of situation can anyway be handled with methods from the fault diagnosis literature developed for stuck-at-value sensor faults, as exemplified by [5, 42].

The case of DoS attacks on event-triggered communication lends itself to a more interesting analysis. A receiving party cannot, in general, know whether the lack of a new message is due to the event-triggering condition not being met at the sender side or to a DoS. A possible solution is for the receiving party to implement an estimator that predicts whether the sender's triggering condition should have fired. If it should have, and yet no transmission is received, the recipient should conclude that a disruption occurred. This idea is presented in [48] and, for the case of deception attacks, in [30]. Still, further exploration of this problem is needed, for instance, by characterizing the detection accuracy in terms of the uncertainty in the knowledge of the recipient, which, of course, has no access to the same information as the sender.
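
A minimal sketch of this predictor-based check is given below for a send-on-delta trigger and a scalar nominal model; the model, thresholds, and margins are illustrative placeholders, not the methods of [48] or [30]. The test is conservative by construction: it only raises an alarm when, given the assumed disturbance bound, a transmission must already have been triggered.

```python
# Receiver-side DoS detection for send-on-delta transmission: the receiver
# predicts, from a shared nominal model, whether the sender's trigger must
# already have fired; silence beyond that point is attributed to a disruption.
import random

random.seed(1)
a, wbar, delta = 1.05, 0.01, 0.3            # nominal model, disturbance bound, trigger

x = 1.0
last_tx = last_rx = x                       # sender-side / receiver-side references
m = 0                                       # receiver: steps since last reception
for k in range(1, 41):
    dos = 15 <= k < 30                      # DoS burst blocks all deliveries
    x = a * x + random.uniform(-wbar, wbar)

    # --- sender: send-on-delta trigger ---
    delivered = False
    if abs(x - last_tx) > delta:
        last_tx = x
        if not dos:                         # delivered only outside the DoS burst
            last_rx, m, delivered = x, 0, True
    if not delivered:
        m += 1

    # --- receiver: has the trigger *provably* fired while we heard nothing? ---
    drift = abs(a ** m * last_rx - last_rx)             # nominal predicted deviation
    margin = wbar * (a ** m - 1) / (a - 1)              # worst-case disturbance effect
    if drift - margin > delta:
        print(f"k={k}: no packet although the trigger must have fired -> DoS alarm")
        break
```

Note the one-sidedness mentioned in the text: if the state happens to stay close to the last received value, the detector stays silent even under DoS, because silence is then indistinguishable from a legitimately untriggered sender.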

16.2.2 General Advancements

Finally, we would like to present some remarks on promising topics that could give new impetus to the development of the field of CPS safety, security, and privacy as a whole.

16.2.2.1 A Conceptual Perspective on the Integration of Safety and Security

As early as 1999, [20] called for the integration of safety and security requirements. More recently, such integration has been the focus of several research efforts. For instance, in [53] a strategic, top-down vulnerability control approach is proposed, where safety goals drive security efforts; this is in contrast with tactical, bottom-up techniques, which are threat-driven. An approach called System Theoretic Process Analysis (STPA) is set as the foundation of these developments. In [12], an extensive survey of safety and security integration results is presented, addressing, for instance, the Fault and Vulnerabilities Modes and Effects Analysis (FMVEA, see [44]), a tool evolved from the classical FMEA used in the fault-tolerant control community. We believe that, ultimately, safety and security measures for CPSs should lead to risk-adaptive schemes. By this term, we mean that the deployed measures should adapt to the currently estimated level of risk. Indeed, safety and security measures are costly and often compete for the same resources. For this reason, they should be developed in an integrated way and their enforcement optimized, such that resource usage is minimized while keeping the risk under control. From this point of view, the direction set by [53], where vulnerability control is proposed instead of absolute security, is promising. The paper [6], instead, introduces a measure of security based on the determination of minimum-effort attack strategies: intuitively, the existence of one or more low minimum-effort attacks corresponds to a low level of security.
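
As a toy sketch of what risk-adaptive enforcement could look like (all measure names, costs, and mitigation factors are invented for illustration), the snippet below selects the cheapest subset of measures whose combined effect keeps the residual risk below a target, so that a higher estimated risk automatically activates more measures.

```python
# Toy risk-adaptive deployment: enable the cheapest subset of measures whose
# combined mitigation keeps residual risk under a target. Illustrative only.
from itertools import combinations

measures = {                      # cost (arbitrary units), residual-risk factor
    "encrypted links":   (4.0, 0.70),
    "anomaly detector":  (3.0, 0.50),
    "redundant sensors": (6.0, 0.60),
    "network hardening": (5.0, 0.55),
}

def cheapest_deployment(current_risk, target_risk):
    best = None
    names = list(measures)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            residual, cost = current_risk, 0.0
            for m in subset:
                c, factor = measures[m]
                cost += c
                residual *= factor
            if residual <= target_risk and (best is None or cost < best[0]):
                best = (cost, subset, residual)
    return best

print(cheapest_deployment(current_risk=10.0, target_risk=5.0))   # one measure suffices
print(cheapest_deployment(current_risk=40.0, target_risk=5.0))   # all four are enabled
```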

16.2.2.2 Adaptation, Learning, and Nonlinear Systems

Several of the mitigation schemes available in the literature, as well as in the present book, rely on knowledge of a model of the plant to be protected. However, developing a good model is costly and, ironically, a sophisticated attacker is more likely to spend resources on obtaining such a model than the plant operator itself.

For this reason, we believe there will be a need for schemes that are adaptive and that employ some form of learning. This would be useful for acquiring, and keeping up to date, both the model of the plant and a model of the attacker's behavior. For instance, one could borrow from the fault diagnosis literature, which has produced several adaptive and learning fault diagnosis schemes such as [23, 39, 40]. In the computer science field, instead, there is an extensive literature on using machine learning approaches for attack detection and classification, as exemplified by [9, 47] and the references therein. Apart from adaptation, another needed theoretical development concerns nonlinearity. Indeed, most contributions in the CPS literature on security, safety, and privacy consider linear models only, which may limit their applicability to real systems, which usually exhibit complex, nonlinear behavior. Some works have already provided contributions in this direction. In the present book, Chap. 7 includes extensions to nonlinear systems, and Chap. 12 considers the networked control of nonlinear systems in its problem formulation. In the recent literature, for instance, [7] use a modified χ² detector for detecting attacks against nonlinear systems.
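
For reference, the snippet below sketches the standard residual-based chi-squared detector that such works build on, applied to a scalar linear model with a Kalman filter and a sensor-bias attack; it is a common baseline, not the modified detector of [7] for nonlinear stochastic systems.

```python
# Baseline chi-squared attack detector on a scalar linear model: a Kalman
# filter produces innovations, and their normalized square is compared with
# the 99% chi-squared quantile (1 degree of freedom). Parameters illustrative.
import random

random.seed(3)
a, c, q, r = 0.95, 1.0, 0.01, 0.04        # plant x+ = a x + w,  measurement y = c x + v
chi2_99 = 6.63                            # 99% quantile of chi-squared with 1 dof

x, xhat, P, alarms = 0.0, 0.0, 1.0, []
for k in range(200):
    x = a * x + random.gauss(0.0, q ** 0.5)
    y = c * x + random.gauss(0.0, r ** 0.5)
    if k >= 100:
        y += 0.8                          # false-data injection: constant sensor bias

    xhat, P = a * xhat, a * P * a + q     # Kalman prediction
    S = c * P * c + r                     # innovation covariance
    nu = y - c * xhat                     # innovation (residual)
    if nu * nu / S > chi2_99:             # chi-squared test on the normalized residual
        alarms.append(k)
    K = P * c / S                         # Kalman update
    xhat, P = xhat + K * nu, (1 - K * c) * P

# Isolated false alarms at the designed 1% rate are expected; a cluster of
# alarms appears right after the attack starts at k = 100, before the filter
# absorbs the bias into its state estimate.
print(alarms)
```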

16.2.2.3 New Definitions of Privacy for Dynamical Systems

Differential Privacy (DP) was developed for static databases [10, 18, 19]. In [15, 26], it was shown how DP can be used for monitoring, control, and optimization in the field of dynamical systems. Still, limitations of the original concept are being highlighted by researchers, and other approaches that can lead to better privacy/utility trade-offs are being proposed. For instance, [22] proposed minimizing the Fisher information as a proxy for maximizing a measure of privacy against a third party. Furthermore, the addition of physical privacy noise instead of numerical noise was proposed by the same authors in [21]. We need to continue developing new privacy mechanisms specifically for dynamical systems, and in particular for CPSs. Such mechanisms, while still sharing the fundamental probabilistic privacy concept of DP, should allow better performance in terms of privacy/utility trade-off and be defined from the ground up for dynamical systems and signals.
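
For context, the baseline static recipe that these dynamical-system mechanisms refine is the Laplace mechanism: each released sample is perturbed with noise whose scale is the sensitivity divided by the privacy parameter ε. The sketch below (with an invented household-load signal and parameters) makes the utility cost explicit; repeatedly releasing correlated samples also degrades the overall guarantee through composition, which is one reason mechanisms designed from the ground up for signals are sought.

```python
# Baseline Laplace mechanism on a released signal; all numbers illustrative.
import numpy as np

rng = np.random.default_rng(7)

def dp_publish(signal, sensitivity, epsilon):
    """Per-sample Laplace mechanism: noise scale = sensitivity / epsilon."""
    return signal + rng.laplace(scale=sensitivity / epsilon, size=signal.shape)

t = np.arange(96)                                       # one day of 15-min samples
consumption = 1.0 + 0.5 * np.sin(2 * np.pi * t / 96)    # toy household load (kW)
released = dp_publish(consumption, sensitivity=0.5, epsilon=1.0)

rms_error = float(np.sqrt(np.mean((released - consumption) ** 2)))
print("utility loss (RMS error, kW):", round(rms_error, 2))   # smaller epsilon -> more noise
```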

16.2.2.4 Benchmarks and Performance Metrics

The prolific scientific production of the control systems community on the topic of CPS security is remarkable and, from a theoretical point of view, undeniably important. Still, in view of a broader acceptance and industrial adoption of these results, we feel there is a strong need to prove the effectiveness of the approaches proposed by our community. Considering the reluctance of individual industrial players to publicly share information on security matters, this goal should be pursued by introducing performance metrics and benchmarks that are widely recognized and adopted.

It is thus comforting to notice that some realistic experimental testbeds and datasets are already available for research on cyber-attack detection and resilience in CPSs. For instance, [25] implemented a testbed following an approach we would call hybrid: they emulated the higher automation levels [28], used real Level 1 automation hardware and software, and simulated the physical-level devices and the plant in a hardware-in-the-loop fashion. A similar approach is taken by [38], with emulation of network components, virtualization of SCADA servers and workstations, real physical automation hardware such as PLCs, and a simulated physical process. In [34], instead, a laboratory-scale two-tank system featuring industrial automation components such as PLCs and SCADA is presented, and a comparison is drawn between a model-based fault detection algorithm and a network anomaly detector. [32] describes the largest available laboratory-scale testbed for industrial control systems security: called SWaT, it features a physical water treatment plant with a complete SCADA and ICS. The same research group presented in [2] a water distribution testbed for research on secure CPSs. Finally, [50] presents a single-tank system with a PLC that uses the Modbus/TCP protocol.

Datasets are available as well, but most of them address network traffic anomaly detection at the protocol level, that is, without taking into consideration the specific physical dynamics of a CPS. They are usually developed for intrusion detection by researchers in the IT and OT security fields of computer science, rather than by control systems engineers. For instance, [37] present results on anomaly detection methods applied to a power distribution system benchmark, while [35] describe a dataset of cyber-attacks against two laboratory-scale industrial control systems: a gas pipeline and a water storage tank. Datasets that include attacks on an ICS using the S7 communication protocol are presented in [43], where the sequential control of a mining refinery is simulated. [13], instead, provide a useful comparison of available datasets from ICS testbeds. Finally, a very recent dataset was released by [24], providing Modbus/TCP traffic from a laboratory testbed.

Apart from testbeds and datasets, it is of paramount importance to define standardized test attacks, as well as metrics to compare different approaches on equal grounds. Drawing a parallel with the fault-tolerant control literature, we may wish for a benchmark and a related challenge such as the one defined by [36] for wind turbines. A promising contribution in this sense is the Battle of the Attack Detection Algorithms, a contest that uses a water distribution network as a test system [49]. Furthermore, Chap. 6 in the present book proposed a set of metrics to evaluate both the detectability and the degradation capabilities of attacks on CPSs.

As a final remark, we could say that benchmarks do exist, but they are fragmented and polarized between two points of view. The first, from the control systems community, includes few cases, which are usually simplified and make use of theoretical attacker models that have not been documented in real operating conditions. The second, typical of the IT and OT security communities, includes many cases that are highly detailed with respect to the hardware and software components and the networking protocols, and that consider realistic attacks, but they almost always ignore the physical dimension of a CPS.

Very few papers in the control community use such testbeds and datasets to prove the effectiveness of their approaches to attack mitigation. Furthermore, there are no universally accepted metrics for evaluating and comparing results. In order to significantly advance the state of the art of safety, security and privacy solutions for CPSs, all of these current research and practical issues should be addressed and solved.
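
As an example of the kind of shared evaluation metrics argued for here, the sketch below computes a false-alarm rate, a missed-detection rate, and a detection delay from ground-truth attack labels and detector alarms on a common timeline; the two sequences are hypothetical stand-ins for a benchmark run.

```python
# Simple, commonly used detector-evaluation metrics computed from hypothetical
# ground-truth labels and detector alarms on a shared timeline.
import numpy as np

truth = np.zeros(300, dtype=bool);  truth[120:180] = True       # attack window
alarms = np.zeros(300, dtype=bool); alarms[[40, 131, 132, 150, 250]] = True

def detection_metrics(truth, alarms):
    far = alarms[~truth].mean()                  # false alarms per attack-free sample
    mdr = (~alarms[truth]).mean()                # attacked samples with no alarm
    hits = np.flatnonzero(truth & alarms)
    delay = int(hits[0] - np.flatnonzero(truth)[0]) if hits.size else None
    return {"FAR": round(float(far), 3), "MDR": round(float(mdr), 3), "delay": delay}

print(detection_metrics(truth, alarms))          # {'FAR': 0.008, 'MDR': 0.95, 'delay': 11}
```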

References

1. Adelsbach, A., Sadeghi, A.R.: Zero-knowledge watermark detection and proof of ownership. In: International Workshop on Information Hiding, pp. 273–288. Springer (2001)
2. Ahmed, C.M., Palleti, V.R., Mathur, A.P.: WADI: a water distribution testbed for research in the design of secure cyber physical systems. In: Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks, pp. 25–28 (2017)
3. Alexandru, A.B., Pappas, G.J.: Secure multi-party computation for cloud-based control. In: Privacy in Dynamical Systems, pp. 179–207. Springer (2020)
4. Alexandru, A.B., Darup, M.S., Pappas, G.J.: Encrypted cooperative control revisited. In: 2019 IEEE 58th Conference on Decision and Control (CDC), IEEE, pp. 7196–7202 (2019)
5. Balaban, E., Saxena, A., Bansal, P., Goebel, K.F., Curran, S.: Modeling, detection, and disambiguation of sensor faults for aerospace applications. IEEE Sens. J. 9(12), 1907–1917 (2009)
6. Barrère, M., Hankin, C., Nicolaou, N., Eliades, D.G., Parisini, T.: Measuring cyber-physical security in industrial control systems via minimum-effort attack strategies. J. Inf. Secur. Appl. 52, 102471 (2020)
7. Bhowmick, C., Jagannathan, S.: Detection and mitigation of attacks in nonlinear stochastic system using modified χ² detector. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 139–144 (2019)
8. Brassil, J.T., Low, S., Maxemchuk, N.F., O'Gorman, L.: Electronic marking and identification techniques to discourage document copying. IEEE J. Select. Areas Commun. 13(8), 1495–1504 (1995)
9. Buczak, A.L., Guven, E.: A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surv. & Tutor. 18(2), 1153–1176 (2015)
10. Chawla, S., Dwork, C., McSherry, F., Smith, A., Wee, H.: Toward privacy in public databases. In: Theory of Cryptography Conference, pp. 363–385. Springer (2005)
11. Chen, H., Lou, W.: On protecting end-to-end location privacy against local eavesdropper in wireless sensor networks. Pervasive Mob. Comput. 16, 36–50 (2015)
12. Chockalingam, S., Hadžiosmanović, D., Pieters, W., Teixeira, A., van Gelder, P.: Integrated safety and security risk assessment methods: a survey of key characteristics and applications. In: International Conference on Critical Information Infrastructures Security, pp. 50–62. Springer (2016)
13. Choi, S., Yun, J.H., Kim, S.K.: A comparison of ICS datasets for security research based on attack paths. In: International Conference on Critical Information Infrastructures Security, pp. 154–166. Springer (2018)
14. Chong, M.S., Sandberg, H., Teixeira, A.M.: A tutorial introduction to security and privacy for cyber-physical systems. In: 2019 18th European Control Conference (ECC), IEEE, pp. 968–978 (2019)
15. Cortés, J., Dullerud, G.E., Han, S., Le Ny, J., Mitra, S., Pappas, G.J.: Differential privacy in control and network systems. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 4252–4272 (2016)
16. Cox, I., Miller, M., Bloom, J., Fridrich, J., Kalker, T.: Digital Watermarking and Steganography. Morgan Kaufmann, Burlington (2007)
17. Dibaji, S.M., Pirani, M., Flamholz, D.B., Annaswamy, A.M., Johansson, K.H., Chakrabortty, A.: A systems and control perspective of CPS security. Ann. Rev. Control (2019)
18. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Theory of Cryptography Conference, pp. 265–284. Springer (2006)
19. Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends® Theor. Comput. Sci. 9(3–4), 211–407 (2014)
20. Eames, D.P., Moffett, J.: The integration of safety and security requirements. In: International Conference on Computer Safety, Reliability, and Security, pp. 468–480. Springer (1999)
21. Farokhi, F., Sandberg, H.: Fisher information as a measure of privacy: preserving privacy of households with smart meters using batteries. IEEE Trans. Smart Grid 9(5), 4726–4734 (2017)
22. Farokhi, F., Sandberg, H.: Ensuring privacy with constrained additive noise by minimizing Fisher information. Automatica 99, 275–288 (2019)
23. Ferrari, R.M., Parisini, T., Polycarpou, M.M.: Distributed fault detection and isolation of large-scale discrete-time nonlinear systems: an adaptive approximation approach. IEEE Trans. Autom. Control 57(2), 275–290 (2011)
24. Frazão, I., Abreu, P., Cruz, T., Araújo, H., Simões, P.: Cyber-security Modbus ICS dataset (2019). https://doi.org/10.21227/pjff-1a03
25. Gao, H., Peng, Y., Dai, Z., Wang, T., Han, X., Li, H.: An industrial control system testbed based on emulation, physical devices and simulation. In: International Conference on Critical Infrastructure Protection, pp. 79–91. Springer (2014)
26. Han, S., Pappas, G.J.: Privacy in control and dynamical systems. Ann. Rev. Control Robot. Auton. Syst. 1(1), 309–332 (2018)
27. Hartung, F., Kutter, M.: Multimedia watermarking techniques. Proceed. IEEE 87(7), 1079–1107 (1999)
28. International Electrotechnical Commission, et al.: IEC 62264-1 Enterprise-control system integration – Part 1: Models and terminology (2003)
29. Kamat, P., Zhang, Y., Trappe, W., Ozturk, C.: Enhancing source-location privacy in sensor network routing. In: 25th IEEE International Conference on Distributed Computing Systems (ICDCS'05), IEEE, pp. 599–608 (2005)
30. Keijzer, T., Ferrari, R.M.G.: A sliding mode observer approach for attack detection and estimation in autonomous vehicle platoons using event triggered communication. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 5742–5747 (2019)
31. Krotofil, M., Cardenas, A., Larsen, J., Gollmann, D.: Vulnerabilities of cyber-physical systems to stale data-determining the optimal time to launch attacks. Int. J. Crit. Infrastruct. Prot. 7(4), 213–232 (2014)
32. Mathur, A.P., Tippenhauer, N.O.: SWaT: a water treatment testbed for research and training on ICS security. In: 2016 International Workshop on Cyber-Physical Systems for Smart Water Networks (CySWater), pp. 31–36 (2016)
33. Mehta, K., Liu, D., Wright, M.: Protecting location privacy in sensor networks against a global eavesdropper. IEEE Trans. Mob. Comput. 11(2), 320–336 (2011)
34. Miciolino, E.E., Setola, R., Bernieri, G., Panzieri, S., Pascucci, F., Polycarpou, M.M.: Fault diagnosis and network anomaly detection in water infrastructures. IEEE Design & Test 34(4), 44–51 (2017)
35. Morris, T., Gao, W.: Industrial control system traffic data sets for intrusion detection research. In: International Conference on Critical Infrastructure Protection, pp. 65–78. Springer (2014)
36. Odgaard, P.F., Stoustrup, J., Kinnaert, M.: Fault-tolerant control of wind turbines: a benchmark model. IEEE Trans. Control Syst. Technol. 21(4), 1168–1182 (2013)
37. Pan, S., Morris, T., Adhikari, U.: Developing a hybrid intrusion detection system using data mining for power systems. IEEE Trans. Smart Grid 6(6), 3104–3113 (2015)
38. Pfrang, S., Kippe, J., Meier, D., Haas, C.: Design and architecture of an industrial IT security lab. In: International Conference on Testbeds and Research Infrastructures, pp. 114–123. Springer (2016)
39. Polycarpou, M.M.: Fault accommodation of a class of multivariable nonlinear dynamical systems using a learning approach. IEEE Trans. Autom. Control 46(5), 736–742 (2001)
40. Polycarpou, M.M., Helmicki, A.J.: Automated fault detection and accommodation: a learning systems approach. IEEE Trans. Syst. Man Cybern. 25(11), 1447–1458 (1995)
41. Proano, A., Lazos, L., Krunz, M.: Traffic decorrelation techniques for countering a global eavesdropper in WSNs. IEEE Trans. Mob. Comput. 16(3), 857–871 (2016)
42. Reppa, V., Polycarpou, M.M., Panayiotou, C.G.: Sensor fault diagnosis. Found. Trends® Syst. Control 3(1–2), 1–248 (2016)
43. Rodofile, N.R., Schmidt, T., Sherry, S.T., Djamaludin, C., Radke, K., Foo, E.: Process control cyber-attacks and labelled datasets on S7Comm critical infrastructure. In: Pieprzyk, J., Suriadi, S. (eds.) Information Security and Privacy, pp. 452–459. Springer International Publishing (2017)
44. Schmittner, C., Ma, Z., Smith, P.: FMVEA for safety and security analysis of intelligent and cooperative vehicles. In: International Conference on Computer Safety, Reliability, and Security, pp. 282–288. Springer (2014)
45. Shamir, A.: How to share a secret. Commun. ACM 22(11), 612–613 (1979)
46. Shih, F.Y.: Digital Watermarking and Steganography: Fundamentals and Techniques. CRC Press, New York (2017)
47. Shon, T., Moon, J.: A hybrid machine learning approach to network anomaly detection. Inf. Sci. 177(18), 3799–3821 (2007)
48. Sid, M.A., Aberkane, S., Maquin, D., Sauter, D.: Fault detection of event based control system. In: 22nd Mediterranean Conference on Control and Automation, IEEE, pp. 452–458 (2014)
49. Taormina, R., Galelli, S., Tippenhauer, N.O., Salomons, E., Ostfeld, A., Eliades, D.G., Aghashahi, M., Sundararajan, R., Pourahmadi, M., Banks, M.K., et al.: Battle of the attack detection algorithms: disclosing cyber attacks on water distribution networks. J. Water Resour. Plann. Manag. 144(8), 04018048 (2018)
50. Teixeira, M.A., Salman, T., Zolanvari, M., Jain, R., Meskin, N., Samaka, M.: SCADA system testbed for cybersecurity research using machine learning approach. Future Int. 10(8), 76 (2018)
51. Tirkel, A.Z., Rankin, G., Van Schyndel, R., Ho, W., Mee, N., Osborne, C.F.: Electronic watermark. In: Digital Image Computing, Technology and Applications (DICTA '93), IEEE, pp. 666–673 (1993)
52. Van Schyndel, R.G., Tirkel, A.Z., Osborne, C.F.: A digital watermark. In: Proceedings of 1st International Conference on Image Processing, IEEE, vol. 2, pp. 86–90 (1994)
53. Young, W., Leveson, N.G.: An integrated approach to safety and security based on systems theory. Commun. ACM 57(2), 31–35 (2014)
