Zhen Li • Sen Li • Tyrone Fernando • Xi Chen
Event-Trigger Dynamic State Estimation for Practical WAMS Applications in Smart Grid
Zhen Li Beijing Institute of Technology Beijing, China
Sen Li Space Engineering University Beijing, China
Tyrone Fernando The University of Western Australia Crawley, WA, Australia
Xi Chen GEIRI North America San Jose, CA, USA
ISBN 978-3-030-45657-3
ISBN 978-3-030-45658-0 (eBook)
https://doi.org/10.1007/978-3-030-45658-0
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
To Fiona and Little Lvbao, lights of my life
Preface
In the past decade, with the wide deployment of distributed energy resources (DERs), the power system is envisioned as a Smart Grid, expected to be planned and operated in a smart response to a variety of stochastic and intermittent characteristics at multiple time scales. Because applications are the prime concern, particular planning and control applications have always gained widespread attention long before monitoring, which provides the necessary information for such post-processing, has been well developed. However, monitoring for the conventional power system still relies heavily on the steady-state model of the system, a condition that rarely holds in reality with the deep penetration of DERs, since there is a variety of stochastic behavior on both the demand side and the generation side. As a result, the SCADA systems in energy management systems (EMS), which depend on the steady-state assumption and carry no timestamps, cannot accurately capture the dynamics and thus fail the system planning, operation, and control on some complex occasions. To address this issue, recent phasor measurement units (PMUs), with sampling rates up to 150 Hz, facilitate the dynamic state estimation (DSE) application for the wide-area measurement system (WAMS). DSE can therefore help with the detection of oscillation events, monitoring under extreme events, enhancement of hierarchical decentralized control for DERs, and improvement of fault detection without any a priori protection relay actions. Whether the dynamic state information is used locally for decentralized control or remotely for centralized control, the DSE application in WAMS relies on the advanced communication infrastructure of the power system. However, with over 2000 production-grade PMUs now installed across the USA and Canada, streaming data and providing almost 100% visibility into the bulk power system, the dynamics recording functionality of PMUs substantially increases the data transmitted for local control and protection actions or to the remote center over the communication infrastructure, finally causing network congestion as the smart grid grows in size. Power system engineers cannot always broaden the communication bandwidth to meet this endless demand.
Therefore, it is important to take the limited communication bandwidth into consideration in advance through the event triggered technique. Besides, as PMUs and communication infrastructure gain maturity, the quest for better design, functionality, and reliability of the DSE application has made it necessary for engineers to design and thoroughly analyze an accurate and robust DSE under all practical communication environments. For example, packet dropout, a phenomenon in which a measurement fails to reach the remote center via the transmission network, can mislead the remote center when combined with event triggered filtering, which has no explicit knowledge of whether a measurement was withheld or lost. Therefore, knowing how (in what design) and when (under what conditions) an event triggered filter for the DSE application works accurately and stably is of fundamental importance. Such knowledge, however, requires an appropriate design and an in-depth analysis of its numerical stability.

This book is concerned with the development and design oriented analysis of event triggered dynamic state estimation for practical WAMS applications. The objective is to provide a systematic treatment of communication reduction, filtering design, and stability analysis for the DSE application in WAMS. The essential techniques for filtering with event triggered sampling strategies are given step by step and proved, along with practical application examples describing the key procedures for integrating the strategy implementation into the filtering design. The target audience includes graduate students, academic researchers, and engineers in industry who work in the field of DSE for WAMS applications and need to reduce the communication burden while guaranteeing DSE accuracy. Furthermore, in presenting the various DSE filtering designs, a conscientious effort has been made to emphasize practical implementation using pseudo-code rather than only mathematical abstraction. We hope this book can serve as a design guideline for graduate students and academic researchers who wish to grasp the essentials of designing the DSE application for practical WAMS with reduced communication burden, as well as a readable reference for engineers in industry who wish to implement such a practical DSE.

We begin in Chap. 1 with an overview of the WAMS constitution and its DSE application, and an outline of some practical concerns for the application. In Chap. 2, in order to build the fundamental knowledge of event triggered DSE, various event triggered sampling strategies and their advantageous features are presented for linear filtering design. As a performance reference, the intermittent Kalman filter, which also targets communication reduction, is designed. Furthermore, the simplest nonlinear Kalman variant, the extended Kalman filter, is combined with the event triggered strategy for a brief demonstration of DSE implementation. Its performance, however, greatly suffers from the practical nonlinearity of the power grid, which initiates the subsequent research journey of this book toward practical application.
In Chap. 3, an event triggered cubature Kalman filter (ETCKF) is proposed to reduce the amount of data transmission while ensuring the estimation accuracy. The ETCKF uses an
innovation based event triggered sampling strategy at the sensor node to reduce the data transmission. Based on the developed nonlinear event triggered strategy, the cubature Kalman filter (CKF), using the third-degree spherical-radial cubature rule, is adopted to further ensure the estimation accuracy. The stochastic stability of the ETCKF is then analyzed: using the stochastic Lyapunov stability lemma, the ETCKF is proven to be stochastically stable if a sufficient condition composed of offline parameters is satisfied. Moreover, the average communication rate of the ETCKF is derived, which is related only to the design parameters of the innovation condition. To satisfy the arrival rate required by a limited channel capacity, an event triggered particle filter (ETPF) is designed in Chap. 4. An arrival rate guaranteed event triggered strategy is established by utilizing the Monte Carlo method to approximate the prior conditional distribution of the observations. Moreover, an ETPF filtering algorithm is proposed that makes full use of the information from the event triggered strategy to enhance the estimation performance. Under constraints on both the communication and the computation power at sensor nodes, an event triggered heterogeneous nonlinear Kalman filter (ET-HNF) is designed in Chap. 5. The ET-HNF unifies unscented transformation based filtering with particle filter theory so that both the accuracy and the relief of the communication burden can be guaranteed. An unscented transformation based event triggered UKF (ET-UKF) is first designed to supply the event triggered strategy. Furthermore, a Monte Carlo based filtering algorithm is designed at the estimation center to provide accurate filtering results. To deal with non-Gaussian or unknown noises, Chap. 6 designs the stochastic event triggered robust cubature Kalman filter (SETRCKF). First, to make up for the deficiency of the ETCKF, the stochastic event triggered cubature Kalman filter (SETCKF) is proposed using the stochastic innovation based event triggered sampling strategy, which maintains the Gaussian property of the conditional distribution of the system state. Based on the SETCKF, the SETRCKF is further designed by using a moving-window estimation method and an adaptive method to estimate the measurement and process noise covariance matrices, and by using the Huber function to make the SETCKF more robust. Moreover, the stochastic stabilities of the two proposed filters are analyzed by deriving sufficient conditions for the stochastic stability of the filtering error. To tackle packet dropout when using the stochastic innovation based event triggered sampling strategy, Chap. 7 proposes the stochastic event triggered cubature suboptimal filter (SETCF). First, by modeling the packet dropout as a Bernoulli process and taking inspiration from the linear suboptimal filter, the cubature suboptimal filter (CF) is designed for the periodically sampled system. Based on the CF and the stochastic innovation based event triggered sampling strategy, the SETCF is proposed. Moreover, the stochastic stability of the two proposed filters is analyzed using the Lyapunov stability lemma. Considering that the CPS is vulnerable to cyber attacks and has limited bandwidth, event triggered cubature Kalman filters under two typical attack types, the data tampering attack and the deviation control command forgery attack, are established in Chap. 8.
For the data tampering attack, an anomalous data detector is designed using the projection statistics method. After an attack is discovered, a weight matrix is constructed from the detection result to correct the measurement value and ensure the filtering accuracy, which completes the filter design.
For the deviation control command forgery attack, the problem is first transformed into one of a system with unknown input. Then, the Bayesian inference method is used to derive the event triggered cubature Kalman filtering algorithm. The feasibility and performance of all the developed filters are verified on the IEEE 39-bus system.

For the successful completion of this book, I am grateful to a number of people, institutions, and organizations. First of all, in the course of my research in this field, I have constantly collaborated with and been inspired by my former students, among whom I am indebted to Dr. Sen Li, Dr. Luyu Li, and Bin Liu for their diligent work on the theoretical derivations and practical implementations, as well as for their critical challenges to my knowledge, which made me pay attention to some easily overlooked problems that ultimately led to valuable findings. I also wish to give my sincere appreciation to Dr. Xi Chen, Prof. Tyrone Fernando, and Prof. Xiangdong Liu for their help and experienced advice throughout this research. Furthermore, Dr. Junbo Zhao deserves my grateful thanks for our discussions on DSE research. I would also like to thank the staff of Springer for their professional and constant support of this project. Last but not least, I must thank the Grant Council of the Beijing Natural Science Foundation for funding my research work under Grant No. 3182034.

Beijing, China
Zhen Li
Contents

1 Introduction
  1.1 WAMS System
    1.1.1 Basic Synchronization Principles of WAMS
    1.1.2 Phasor Measurement Unit
    1.1.3 WAMS Control Center
    1.1.4 Network Communication System
  1.2 State Estimation of WAMS
    1.2.1 State Estimation Under Packet Dropout
    1.2.2 State Estimation Under Network Attack
  1.3 Development of the Event Triggered Filter
    1.3.1 Design of Event Triggered Sampling Strategy
    1.3.2 Design of Event Triggered Filtering
  1.4 Development of Nonlinear Filtering
    1.4.1 Nonlinear Filtering Based on Taylor Expansion Approximation
    1.4.2 Nonlinear Filtering Based on Deterministic Sampling
    1.4.3 Nonlinear Filtering Based on Stochastic Sampling

2 Event Triggered Sampling Strategies
  2.1 Event Triggered Sampling Strategy
    2.1.1 Deterministic Event Triggered Sampling Strategy
    2.1.2 Stochastic Event Triggered Sampling Strategy
    2.1.3 Comparisons of Event Triggered Sampling Strategy
  2.2 Event Triggered Linear Filter
    2.2.1 Innovation Based Event Triggered Kalman Filter
    2.2.2 Stochastic Innovational Event Triggered Kalman Filter
    2.2.3 Intermittent Kalman Filter
  2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
    2.3.1 ET-EKF Design
    2.3.2 IEEE 39 Buses Simulation Platform
    2.3.3 Simulation Verification for ET-EKF
  2.4 Conclusions

3 Event Triggered CKF Using Innovation Based Condition
  3.1 Introduction
  3.2 Cubature Kalman Filter
    3.2.1 Gaussian Filter
    3.2.2 Spherical-Radial Cubature Integral
    3.2.3 Cubature Kalman Filter
  3.3 CKF Using Innovation Based Condition
    3.3.1 Design of Event Triggered Sampling Strategy
    3.3.2 Design of Remote Filter
  3.4 Stochastic Stability Analysis of ETCKF
    3.4.1 Boundedness of the ETCKF Posterior Variance Matrix
    3.4.2 Error Boundedness Analysis for the ETCKF
  3.5 Simulation and Verification
    3.5.1 Comparison of Filtering Dynamic Performance with Different δ
    3.5.2 Comparisons of Estimation Error under Different δ Values
    3.5.3 Verification on Stability and Arrival Rate
    3.5.4 Simulation on Communication Delay
  3.6 Conclusions

4 Event Triggered Particle Filter Using Innovation Based Condition with Guaranteed Arrival Rate
  4.1 Introduction
  4.2 Monte Carlo Method and Its Application in Nonlinear Filtering
    4.2.1 Monte Carlo Method
    4.2.2 Application of Monte Carlo Method in Nonlinear Filtering
  4.3 Design of Event Triggered Particle Filter with a Guaranteed Arrival Rate Based on Monte Carlo Method
    4.3.1 Design of Triggered Strategy with a Guaranteed Arrival Rate
    4.3.2 Filtering Design Using Triggered Information
  4.4 Simulation of ET-PF for DSE in WAMS
  4.5 Conclusions

5 Event Triggered Heterogeneous Nonlinear Filter Considering Nodal Computation Capability
  5.1 Introduction
  5.2 System Design of Heterogeneous Event Triggered Nonlinear Filter
  5.3 Design of Slave Filter Based on Event Triggered UKF
    5.3.1 UT Transformation
    5.3.2 Event Triggered Sampling Strategy
    5.3.3 Design of Event Triggered UKF
  5.4 Master Filter and Its Cooperation with the Slave Filter
    5.4.1 Monte Carlo Based Master Filter Design
    5.4.2 Cooperation Between Master and Slave Filters at Control Center
  5.5 Simulation and Verification on ET-HNF
  5.6 Conclusions

6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational Condition
  6.1 Introduction
  6.2 Design of Event Triggered Cubature Filter Using Stochastic Innovational Condition
    6.2.1 Design of Event Triggered CKF Using Stochastic Innovational Condition
    6.2.2 Filtering Design at the Control Center
  6.3 Design of Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational Condition
    6.3.1 Event Triggered Strategy Design for SETRCKF
    6.3.2 Filtering Design of SETRCKF
  6.4 Analysis of Stochastic Stability of Event Triggered Filtering Using Stochastic Innovational Condition
    6.4.1 Stochastic Stability Analysis for SETCKF
    6.4.2 Stochastic Stability Analysis for SETRCKF
  6.5 Simulation and Verification
    6.5.1 Verification on SETCKF Filtering Performance
    6.5.2 Verification on SETRCKF
  6.6 Conclusions

7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
  7.1 Introduction
  7.2 Design of Suboptimal CKF Under Channel Packet Dropout
    7.2.1 Suboptimal Kalman Filter Under Channel Packet Dropout
    7.2.2 Design of Suboptimal CKF Under Channel Packet Dropout
  7.3 Design of Event Triggered Suboptimal Cubature Kalman Under Channel Packet Dropout
    7.3.1 Design of Event Triggered Sampling Strategy
    7.3.2 Design of Filtering at Center
  7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of Channel Packet Dropout
    7.4.1 Analysis of Stochastic Stability for Suboptimal CKF
    7.4.2 Analysis of Stochastic Stability of Event Triggered Suboptimal CKF
  7.5 Simulation and Verification
    7.5.1 Verification on CF Performance
    7.5.2 Verification on the SETCF Filtering Performance
  7.6 Conclusions

8 Event Triggered Cubature Kalman Filter Subject to Network Attacks
  8.1 Introduction
  8.2 Design of Event Triggered Cubature Kalman Filter Subject to Data Tampering Attacks
    8.2.1 Design of Abnormal Data Detection Algorithm
    8.2.2 Design of Event Triggered CKF Filtering Subject to Data Tampering Attacks
  8.3 Design of Event Triggered Cubature Kalman Filter Subject to Control Command Forging Attacks
  8.4 Simulation and Verification
    8.4.1 Verification on Event-Triggered CKF Subject to Data Tampering Attacks
    8.4.2 Verification on Performances of Event-Triggered CKF Subject to Control Command Forging Attacks
  8.5 Conclusions

9 Conclusion

References

Index
Nomenclature

≡  Identically equal to
≜  Defined as
∀  For all
∃  Exists
→  Approach to
∈  Belongs to
∉  Does not belong to
⊂  Subset
∪  Union
Σ  Sum
exp(x)  Exponential function of x
max  Maximum
min  Minimum
∞  Infinity
‖A‖F  The Frobenius norm of real matrix A
A > 0  The positive definite matrix A
A ≥ 0  The positive semi-definite matrix A
A > B  The matrix (A − B) is positive definite
A ≥ B  The matrix (A − B) is positive semi-definite
A ⊕ B  The block diagonal matrix whose diagonal elements are A, B
diag[x1, . . . , xn]  The diagonal matrix whose diagonal elements are {x1, . . . , xn}
A^(1/2)  The lower triangular matrix of the Cholesky decomposition of A
N(x, P)  The Gaussian distribution with mean x and variance P
1(p)  The p-dimensional vector with all ones
0(p)  The p-dimensional vector with all zeros
Im  The identity matrix with dimension m
δ(x)  Dirac function
x^i ⇐ p(x)  The particle point x^i following the distribution p(x)
R^n  n-dimensional column vector set
R^(m×n)  m × n real matrices
E(x)  The expectation of x
Cov(x)  The covariance of x
x^T  The transposition of vector x
‖x‖  The Euclidean norm of real vector x
‖A‖  The spectral norm of real matrix A
X^(i) ⇐ N(x, P)  The sigma point X^(i) following the distribution N(x, P)
x ∼ p(x)  The random variable x follows the distribution p(x)
p(x)  The distribution of random variable x
p(x|y)  The conditional probability distribution of random variable x given y
E(x|y)  The conditional expectation of random variable x given y
dim(A)  The dimension of square matrix A
rank(A)  The rank of A
Chapter 1
Introduction
1.1 WAMS System

The development of the power system is continuously accompanied by the emergence of new technologies, ranging from the electrical infrastructure to informatics. The smart grid is envisioned to offer intelligent, automated, and widespread distributed generation (DG) through a two-way flow of electricity generation/consumption and information exchange. Metering is fundamental to all the functional applications in the power system, such as control, analysis, and planning. Therefore, the wide-area measurement system (WAMS) has a great impact on the reliable operation of the power system. As shown in Fig. 1.1, the WAMS consists of the control center, time synchronization, network communication system, and phasor measurement units (PMUs). The measurement data are collected from stations distributed in different areas, which are equipped with PMUs. In general, the information about the power system can be extracted from the raw data measured by PMUs or other data resources by computer aided tools known as WAMS applications. All the applications in the area control center (for a large scale system) or central control center (for a small scale system) acquire system data from PMUs or other data resources for dispatching and control purposes via wired or wireless communication links, as shown in Fig. 1.2, and most of them have strict real-time requirements. With the widespread use of PMUs, however, a huge amount of data needs to be transmitted even within regions, which may result in communication link congestion and increasing communication latency. Therefore, the state estimation should be specifically designed to deal with the communication constraint. With the assistance of accurate state estimation from WAMS, a scheduling and control center can master all operational data of the grid in a real-time and online manner: it can calculate the real-time operating parameters, acquire the online state of the system through state estimation, and simulate the operational states of the grid based on that estimation.
Fig. 1.1 System structure of wide-area measurement system (WAMS) using PMU through network communication media

Fig. 1.2 Communication structure of WAMS, whose hierarchical structure is from the area control center to the remote node
It can further conduct online stability analysis of the system, optimize grid scheduling, and finally combine local control and grid scheduling to ensure the reliability and accuracy of the whole system. WAMS also allows the analysis of local faults through synchronized phasor measurement data and the location of the system's weak points. By doing so, systematic investigation, analysis, and improvement can be carried out to avoid the recurrence of similar accidents.
1.1.1 Basic Synchronization Principles of WAMS

The WAMS was initially invented to enhance the reliability of the domestic grid of the USA and to adapt to its electricity market. In 1989, WAMS was jointly developed by the Bonneville Power Administration (BPA) and the Western Area Power Administration of the Department of Energy (DOE) and first applied in the northwestern power grid. Its main scope of development is to establish the flexible AC transmission system (FACTS) as the core element for real-time operation monitoring and optimization of the grid, and to develop an effective online software package in response to voltage and dynamic security evaluation issues.

For synchronization, WAMS must unify its time coordinate so as to establish a time synchronization system. The time coordinate system comprises the beginning timestamp and the scale unit of time (the second). The measurement data can be synchronized only when the clock of the synchronous measurement unit in WAMS conforms to coordinated universal time (UTC). The time synchronization technologies primarily include long- and short-wave time transfer, Internet based time synchronization, satellite time service, synchronous digital hierarchy (SDH) network time synchronization, telephone dialing time synchronization, and other technologies. They are effectively applied in different fields of time synchronization and have their respective weaknesses.

1. Long- and short-wave time transfer synchronization. Long- and short-wave time service technology is commonly used for military and navigational purposes and is based on the transmission of radio signals. Its advantages include simple transmitting and receiving devices, extensive signal coverage, low cost, and real-time calibration of the local clock. Its weakness is that it is seldom used in civilian applications.
2. Internet time synchronization. This technology allows the remote calibration of computer clocks via the Internet at an accuracy of 1–50 ms. However, it is based on computers and the Internet, so system safety cannot be guaranteed in a complicated network environment.
3. Synchronous digital hierarchy (SDH) network time synchronization. This technology exploits the synchronization between the time code and the clock so as to add time code signals into the unused bytes of the SDH or SONET-STM-N multiplex section overhead (MSOH). The coded signals satisfy the frame structure required by ITU-T G.708 with a length setting of 5 bits. This technology enables long-distance and highly accurate transmission on the scale of 100 ns. Its weakness is the frequent hardware maintenance, which hinders its adoption in the power system.
4. Telephone dialing time synchronization. Telephone dialing time synchronization is not a sophisticated technology and can be completed via a computer or synchronization software with the help of a telephone wire, modem, or other common communication devices. However, this technology offers very poor real-time performance. Even if the transmission delay caused by part of the telephone wires is alleviated with the help of feedback technologies, it still cannot satisfy the practical requirements of the complex power system.
5. Satellite time synchronization. The well developed worldwide GPS system extensively applies satellite time synchronization technology. GPS satellites send out synchronization signals, and the user can receive the signals transmitted by the 24 satellites in orbit and several standby satellites at any place on the earth. These satellites are all equipped with accurate atomic clocks. If the user's clock and the satellite's atomic clock are synchronized, the propagation time from the satellite to the user can be acquired, and their distance and the current time coordinate can be inferred as well. The advantage of this technology is that the signals synchronized by GPS satellites can be received on a real-time basis with guaranteed reliability and high time accuracy. However, some situations hinder the development of GPS technology. Firstly, the US army controls the time codes of higher accuracy and opens them only to authorized users. Secondly, the high GPS operational requirements prevent the wide installation of GPS synchronizing equipment at power plants with comparatively complex ambient environments.

Table 1.1 lists the time accuracies of the different synchronization technologies.

Table 1.1 Time accuracy for various synchronization techniques
  Long-and-short-wave: 1–10 ms
  Internet: 1–50 ms
  SDH: 100 ns
  Telephone: 100 ms
  GPS: 5–100 ns

Considering the advantages of the different synchronization technologies discussed above, satellite synchronization is the most appropriate for the implementation of an accurate and affordable WAMS.
1.1.2 Phasor Measurement Unit

The PMU facilitates the widespread application of WAMS. Its main function is first to receive the synchronized measurement data and then to transmit these synchronized signals to the scheduling and control center. In a UTC system, the operator on duty analyzes and calculates all the system measurement data collected by PMUs to acquire the synchronized phasor measurement information. A PMU can record transient data through a triggering mode and dynamic data through continuous recording. PMU deployment can further be divided into centralized and distributed modes according to the installation. If there is only one control center to collect the measurement data, the centralized installation of PMUs is preferred. On the other hand, the distributed installation of PMUs is recommended for extensively distributed stations
(e.g., power plants). Finally, the data collector packages the distributed PMU data and transmits them to the master center. The synchronous phasor measurements of WAMS can link the massive state observations together within a structurally complex power system based on a UTC system.
1.1.3 WAMS Control Center

The hardware system of the WAMS control center includes the communication server, data analysis server, historical data server, report server, scheduler workstation, and other equipment. The functions of the WAMS control center include the real-time monitoring and management of PMUs and the reception of real-time measurement data, recording files, event logs, and waveform records. The data received from the stations are pre-processed and then sent to the real-time or historical database for real-time or offline analysis. The software system includes data collection and pre-processing, the real-time monitoring system, the database system (including real-time and historical data), advanced applications, and the Internet interface. Figure 1.1 shows the main WAMS applications in the control center. Some modern WAMS applications are as follows [1].

Wide-Area Dynamic Monitoring and Analysis This application provides various basic tools to observe the power grid and form a global, dynamic view of the system.

Instability Prediction Conventionally, the stability analysis of power systems is done offline, but owing to the high sampling rate of the PMU, real-time stability analysis and instability prediction become possible.

Generator Operation Status Monitoring This function provides close supervision of generators, which is very useful in system operation and stability assessment because it supplies real-time information about the generators.

State Estimation This is the most important WAMS application and is considered the kernel because it extracts credible data, which serve as the input to the other applications, by eliminating the effect of bad measurements.

The WAMS control center and the conventional SCADA/EMS are interconnected through network communication via the Internet; the latter provides its state estimation to the WAMS main station, while the WAMS offers the dynamic information of the grid to the SCADA/EMS. Moreover, the WAMS control center can also extract the transient data of the grid and the relay protection information from the fault information management system of the SCADA/EMS.
1.1.4 Network Communication System

The network communication system aims to provide the channel for data exchange between the WAMS subsystems to ensure reliable operation, and it needs to guarantee real-time data transmission, particularly in a large scale interconnected network. The principles of the physical structure of a network communication system include the following:

1. It should be layered and divided into zones according to the monitored area, i.e., the regional, provincial, and national scheduling and monitoring systems, whose levels increase successively.
2. The various systems within WAMS maintain bidirectional data exchange through a tree structure.
3. Different WAMS control centers and stations can engage in the bidirectional transmission of real-time online data and historical data stored in the database.
4. There is no specific restriction on the control center, which can exchange data directly with the PMUs of lower levels.
1.2 State Estimation of WAMS

WAMS is typically a cyber-physical system (CPS). As shown in Fig. 1.3, the measurement data need to be transmitted to the decision-making layer for state estimation via a network, which in WAMS is mostly a wireless sensor network (WSN). Therefore, it is necessary to consider the influence of an unstable transmission network on the state estimation. Specifically, with the deep penetration of renewable energy into the conventional power system, much of which is installed in rural areas, the network communication media can easily be exposed to interference from the environment. Such influences include time delay, packet dropout, channel attenuation, network attack, etc. This section provides a general description of state estimation in the presence of packet dropout and network attack.
1.2.1 State Estimation Under Packet Dropout

Packet dropout occurs when the filter at the decision-making level fails to receive a measurement transmitted by the sensor via the transmission network. The causes of packet loss include physical faults, nodal hardware faults, network congestion, routing errors, data packet collisions, etc. Packet dropout cannot be avoided in a communication network, particularly a radio transmission network, where it may occur due to electromagnetic interference in the environment. If the state estimation is not specifically designed for packet dropout, the estimation accuracy will decline sharply and divergence may even occur.
Fig. 1.3 The system diagram of classical cyber-physical system, whose hierarchical structure consists of decision-maker layer, transport layer, and physical layer

Fig. 1.4 The system diagram of intermittent filtering, the performance of which is mainly affected by the communication channel
Therefore, it is valuable in practice to investigate state estimation under packet dropout and to analyze its accuracy and stability. Generally, filtering under packet dropout is referred to as intermittent filtering; its architecture is shown in Fig. 1.4. The research basis for intermittent filtering is the modeling of packet loss. The most commonly used model is the i.i.d. (independent and identically distributed) Bernoulli process, whose assumption is that the packet loss in the channel at the current time does not depend on previous time instants, so that the probability of packet loss at each time is identical. Based on this packet dropout model, the stochastic stability of the linear intermittent Kalman filter was investigated, and the stability of the intermittent linear filter was defined as the mean of the prior variance matrix being stochastically bounded [2].
Furthermore, it was proved that the intermittent Kalman filter is stochastically stable when the packet loss rate is lower than a critical probability, although the analytical solution of this critical probability was not given; it was also demonstrated that the analytical solution can be obtained only when the system's observation matrix is nonsingular. The solvability condition for the critical probability was later relaxed so that the solution can be obtained when the observable subspace corresponding to the system observation matrix has full rank [3, 4]. Besides the Bernoulli process, the other common packet loss model is the homogeneous Markov model, under which the packet loss at each time instant is no longer independent but depends on whether packet loss occurred at the previous time instant. Compared with the Bernoulli process, this model can represent a wider range of channel conditions and better fits actual conditions. Based on this model, the stochastic stability of the intermittent Kalman filter was studied in [5], where the mean peak of the prior variance matrix was selected as the evaluation index of stochastic stability and a sufficient condition for stochastic stability was given for the filter. The results showed that the peak stability of the Kalman filter for a scalar system is related only to the probability of recovery after packet loss and that there exists a critical recovery probability. Based on this, a stochastic Riccati equation was utilized to infer a more stringent sufficient condition of filter stability [6] than that in [5]. In [7], it was proved that the mean peak stability of the prior variance matrix is equivalent to the mean stability proposed in [2] under certain conditions. Apart from peak stability, the mean of the posterior variance matrix was proposed in [8] as an evaluation index of stability, and more stringent sufficient conditions of stability were derived. An evaluation index based on the weak convergence of the filter under the Markov process was designed in [9], which proved, by comparing the sufficient conditions of filter stability using the various evaluation indexes under the homogeneous Markov process, that the sufficient conditions considering weak convergence are the most relaxed. The aforementioned intermittent filters collect data from a single sensor. However, in CPS it is common that data are collected from distributed sensors and transmitted to the decision-making layer via various channels. For such a scenario, the situation where two sensors are used for data collection and both transmission channels are described as i.i.d. Bernoulli processes was studied in [10], and the stochastic stability of the filter was analyzed accordingly. The stability of the intermittent filter under the homogeneous Markov model with two sensors was investigated in [11]. However, the previous two methods are inapplicable to the case of multiple sensors. The linear matrix inequality (LMI) approach was utilized in [12] to solve for the critical arrival rate of the various channels, so that the filtering stability could be analyzed for the multi-sensor filtering system under packet loss. The previous research results are all based on linear systems.
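To make the intermittent filtering model concrete, the following is a minimal sketch (not taken from the references above) of a linear Kalman filter operating over a lossy channel: the time update always runs, while the measurement update runs only when the i.i.d. Bernoulli arrival indicator equals one. The system matrices and the arrival probability are illustrative placeholders.

```python
import numpy as np

def intermittent_kf_step(x, P, y, received, A, C, Q, R):
    """One step of a Kalman filter over a lossy channel.

    The time update always runs; the measurement update runs only when
    `received` is True (Bernoulli arrival indicator gamma_k = 1).
    """
    # Time update (prediction)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    if not received:                        # packet lost: keep the prediction
        return x_pred, P_pred

    # Measurement update (correction) with the received packet
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = P_pred - K @ C @ P_pred
    return x_new, P_new

# Illustrative two-state system with an assumed arrival probability of 0.8
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), 0.04 * np.eye(1)
x_est, P_est = np.zeros(2), np.eye(2)
x_true = np.array([1.0, 0.0])
for k in range(50):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), R)
    gamma = rng.random() < 0.8              # i.i.d. Bernoulli arrival
    x_est, P_est = intermittent_kf_step(x_est, P_est, y, gamma, A, C, Q, R)
```

The stability results cited above concern exactly this recursion: when the arrival indicator is zero too often, the prediction covariance can grow without bound.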
Many researchers have also studied intermittent filters for nonlinear systems. The LMI approach was utilized to reduce the nonlinear filtering to multiple linear filterings so as to
analyze the nonlinear intermittent filter [13–15]. The stability of the intermittent EKF was studied in [16], and the results showed that the stability of a nonlinear intermittent filter must use the mean error as the evaluation index; however, the resulting sufficient conditions of stability contain online parameters. Sufficient conditions of filtering stability containing only offline parameters were given for the intermittent suboptimal EKF in [17]. More relaxed sufficient conditions of filtering stability, which can be expressed completely by offline parameters, were proposed in [18] based on [16]. The stability analysis of the intermittent UKF under the Bernoulli process and the homogeneous Markov process was carried out in [19, 20]. Moreover, some researchers have investigated state estimation under packet dropout combined with other channel conditions: state estimation under packet dropout and time delay was studied in [21, 22], and state estimation under packet dropout and quantization was considered in [23, 24]. However, the previous research focused on intermittent filtering under the periodic sampling strategy, and there is almost no work on intermittent filtering under the event triggered sampling strategy. Although the event triggered sampling strategy can mitigate packet dropout to a certain extent owing to the reduced data transmission, packet dropout has various causes, as mentioned above, and is not limited to small channel bandwidth and heavy data transmission load. Therefore, it is necessary to design intermittent filtering under the event triggered sampling strategy and analyze its filtering stability.
1.2.2 State Estimation Under Network Attack

A network attack refers to damage to the measurement data caused by a malicious attack on the transmission channel, and it includes the DoS attack [25], replay attack [26], data injection attack [27], wormhole attack [28], etc. Because the transmission relies heavily on the communication media, the CPS is vulnerable to DoS and data injection attacks. Once a network attack occurs, the accuracy of the state estimation is severely affected, and divergence may even occur. Therefore, many researchers have studied state estimation under network attack. Normally, a network attack may be blocked and intercepted by the protection; in other words, the network attack normally occurs on a stochastic basis. The state estimation system under network attack is shown in Fig. 1.5. The state estimation of a linear system under data injection attack was carried out in [29], where an attack set was established that would not be discovered by the detector. The data injection attack was studied for smart grid state estimation in [30], which showed that the attacker could change the state estimation results at will. Two safety indexes were set in [31] to evaluate the difficulty of launching a data injection attack on specific measurements, while it was pointed out in [32] that the defense against data injection attacks could be fulfilled through the encryption of a certain number of sensors. Furthermore, the defense mechanism against data injection attacks on state estimation was investigated in [33], which proposed the minimum number of nodes to be attacked as the safety index and designed a designation algorithm for sensor encryption to realize the attack defense.
Fig. 1.5 The system diagram of dynamic state estimation under the data alteration attack through the communication channel
Moreover, a resilient estimator was designed to address the issue of state estimation of a stochastic system under hidden attack [34]. The estimation of the initial value of the system state when some measurements are corrupted was investigated in [35], which showed that there is a relationship between the number of recoverable attacked sensors and the existence of a decoder. An attack-resilient estimator for the linear time-varying system was designed and its robustness was analyzed in [36]. A resilient self-adaptive filter was proposed in [37] for network attack defense and verified on a ground robot platform, demonstrating that this algorithm could provide better estimation results than an attack-resilient estimator. However, the previous research on state estimation under network attack focuses only on linear systems, whereas the practical CPS is mostly nonlinear and suffers from heavy data transmission and inadequate bandwidth. Therefore, it is necessary to investigate nonlinear state estimation with the event triggered sampling strategy when the channel is subject to network attack.
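As background for the detectors mentioned above, the following is a minimal sketch of the classical residual-based chi-square test used with weighted least squares state estimation; stealthy injections of the form a = Hc discussed in [29, 30] are constructed precisely so that they leave this residual unchanged. The measurement matrix, noise covariance, and significance level are illustrative assumptions, not the detector designed later in this book.

```python
import numpy as np
from scipy.stats import chi2

def residual_chi2_detector(y, H, x_hat, R, alpha=0.01):
    """Flag a measurement vector whose weighted residual is too large.

    Assumes more measurements than states (m > n) so that the residual
    has positive redundancy. A stealthy injection a = H @ c shifts the
    estimate but leaves the residual, and hence this test, unchanged.
    """
    r = y - H @ x_hat                      # measurement residual
    J = float(r @ np.linalg.inv(R) @ r)    # weighted residual norm J(x_hat)
    dof = H.shape[0] - H.shape[1]          # measurement redundancy m - n
    threshold = chi2.ppf(1.0 - alpha, dof)
    return J > threshold, J, threshold
```

The weakness this static test illustrates is the one exploited by the attack sets in [29]; Chap. 8 of this book instead combines projection statistics with an event triggered CKF for detection and correction.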
1.3 Development of the Event Triggered Filter

The concept of event triggered sampling was initially proposed by Ho et al. for discrete event systems in the early 1980s [38]. Different from traditional periodic sampling, in event triggered sampling the sensor samples the data but transmits them to the remote data center only when a specific condition, called the event, is reached, which can effectively reduce the data transmission and the energy consumption of the sensor nodes. This novel sampling mode attracted attention from both academia and industry when Åström and Arzén creatively applied event triggered sampling to dynamic systems with continuous state space [39, 40]. In [39], Åström compared event based sampling with periodic sampling for a stochastic system and proved that event based sampling could provide better performance, with smaller output variance, at the same average sampling rate.
In [40], Arzén designed a PID controller based on event triggered sampling and showed that the control performance did not decline even when the CPU utilization was substantially reduced, which promoted the continuous development of signal processing technology based on event triggered sampling. This section summarizes the research on event triggered filtering from two perspectives: the design of the event triggered sampling strategy and the design of the filter based on event triggered sampling.
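To make the sampling mode concrete before the survey, here is a minimal sketch of a sensor-side send-on-delta style trigger: a sample is placed on the channel only when it deviates from the last transmitted value by more than a threshold. The threshold and the signal are illustrative assumptions and do not correspond to any particular strategy cited below.

```python
import numpy as np

class SendOnDeltaSensor:
    """Transmit a sample only when it has changed 'enough' (the event)."""

    def __init__(self, delta):
        self.delta = delta            # trigger threshold
        self.last_sent = None         # last transmitted measurement

    def step(self, y):
        """Return the measurement if the event fires, otherwise None."""
        if self.last_sent is None or np.linalg.norm(y - self.last_sent) > self.delta:
            self.last_sent = y        # event: send to the remote estimator
            return y
        return None                   # no event: nothing enters the channel

# Illustrative use: a slowly varying signal is transmitted only occasionally
sensor = SendOnDeltaSensor(delta=0.5)
sent = sum(sensor.step(np.array([np.sin(0.05 * k)])) is not None for k in range(100))
print(f"transmitted {sent} of 100 samples")
```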
1.3.1 Design of Event Triggered Sampling Strategy

In [39], Åström proved that the performance can be improved for both continuous time and discrete time systems as long as a reasonable event triggering strategy is designed. Inspired by this, the early research primarily focused on event triggered sampling strategy design. In general, the design of an event triggered sampling strategy has multiple conflicting objectives, among which the conflict between communication rate and filtering accuracy dominates the design of the event triggered filter. To address this problem and achieve multiple objectives, the design is generally turned into a constrained optimization problem in which some objectives enter the objective function and others become constraints [41–46]. Based on this, the transmission and scheduling of the event based optimal finite-time sensor were studied for continuous time and discrete time scalar linear systems [41, 42]. By relaxing the zero-mean initial conditions and considering the measurement noise, the previous results were further extended to the linear vector system [43]. An adaptive sampling method was developed for continuous time linear system state estimation [46]. The relationship between sampling performance and mean sampling rate was analyzed, and a suboptimal event triggered sampling strategy guaranteeing a minimum mean sampling rate was further proposed [47]. Another method of addressing multi-objective conflicts is to include differently weighted terms in the objective function of the optimization [48, 49]. Following this concept, an event triggered sampling strategy was proposed to guarantee the boundedness of the error covariance by balancing the estimated error covariance against the communication rate [48, 49]. Distributed event triggered estimation was studied in [50], where a global event triggered communication strategy was developed for state estimation by minimizing a weighted function of network energy consumption and communication cost subject to estimation performance constraints. The joint design of the event triggered sampling strategy and the estimator was considered for a first-order stochastic system with noise of arbitrary distribution [51], where a game theoretic framework was utilized to analyze the optimal tradeoff between the mean square estimation error and the corresponding expected transmission rate. In addition to the previous achievements, some researchers have also proposed other methods to design an event triggered sampling strategy. For example, periodic sampling and event triggered sampling were combined in [42], and a mixed data sampling method was proposed to reduce the computational complexity.
Fig. 1.6 The system diagram of event triggered dynamic state estimation. The data sampled by the sensor is transmitted only when the event triggered schedule at the sensor node decides to send it
For example, periodic sampling and event triggered sampling were combined in [42], and a mixed data sampling method was proposed to reduce the computational complexity. The research above designed multiple event based sampling strategies from various perspectives and thus provided an important theoretical basis for the further design of event triggered filters.
1.3.2 Design of Event Triggered Filtering

State estimation not only plays an important role in the design of feedback controllers but is also essential to the performance monitoring and fault detection of complicated dynamic systems. Therefore, besides the research on event triggered sampling strategy design, designing the optimal filter for a specific event triggered sampling strategy is also an active research topic. The state estimation system based on event triggered sampling is shown in Fig. 1.6. In such a system, the sensor still samples the physical system at every sampling time, but the measurement is transmitted to the remote filter only when the pre-designed event is satisfied. Therefore, to achieve the optimal estimation performance, the filter must handle a measurement set containing both "point values" and "set values" at each time instance. When the remote filter receives a measurement, it uses the point value for the update. When it does not, the filter still knows that the current measurement satisfies the event triggering condition, i.e., that the measurement lies in the set defined by the condition, and it performs the update according to this set value. Based on this idea, many researchers have studied different practical applications. Under the Gaussian assumption on the conditional distribution of the system state, a minimum mean square error filter was designed for the innovation based event triggered sampling strategy [52], the analytical expression of the communication rate was derived, and the relationship between estimation performance and communication rate was analyzed. These results were extended to the
Send-on-Delta event triggered sampling strategy in [53]. By using a finite number of Gaussian distributions and an approximate uniform distribution, a filter with a mixed updating mechanism was proposed for the event triggered sampling strategy [54]. A stochastic event triggered sampling strategy was proposed in [45], and the minimum mean square error filter was derived for the closed loop stochastic event triggered sampling strategy without introducing extra approximations. Sufficient conditions were provided in [55] for the existence of a filter with the expected performance for a certain nonlinear stochastic system with time delay. A Markov chain approximation algorithm was proposed to address event triggered filtering of nonlinear systems [56]. For the estimation of a hidden Markov model under an event triggered sampling strategy, the update of the system state was derived for both reliable channels and channels with packet loss [57]. The performance of the linear filter was compared under periodic sampling and event triggered sampling, and a quantitative comparison of first-order and second-order systems was performed [58]. An event triggered sampling strategy based on the error covariance matrix of the state estimate was proposed, the convergence of the triggering strategy was rigorously proved, and it was shown that the triggering strategy can be designed offline [59]. In the research above, the Gaussian assumption is adopted to directly address the non-Gaussian problem caused by the event triggered sampling strategy. However, many researchers have also applied other methods to address this problem indirectly. The noise and the event triggered sampling strategy were treated as a stochastic and a non-stochastic uncertainty, respectively, and the filter based on event triggered sampling was derived by minimizing the mean square error in the worst case. Along this line, set-valued filtering was used to address the design of filters based on event triggered sampling [44]. For the event triggered sampling strategy proposed in [52], constrained optimization was used to obtain the maximum likelihood estimate under this strategy [60], and the calculation of upper and lower bounds of the communication rate under this strategy was discussed [61]. Distributed filtering based on event triggered sampling was studied and the stability of the filter was further analyzed considering the unconditional distribution [62]. Meanwhile, there is also extensive research on filters for optimal event triggered sampling with respect to different indexes, among which the robust H∞ filtering method has received the greatest attention. The robust H∞ filtering method was initially developed based on game theory and the Riccati equation for periodically sampled linear systems. With the help of linear matrix inequalities (LMIs), it has been successfully extended to event triggered sampling strategies [63–65]. In this method, the performance of the filter based on event triggered sampling is normally quantified by the stability of the filtering error dynamics and the L2 gain from the disturbance to the estimation error. Although this method can only guarantee the filtering performance in the worst case, it facilitates the joint design of the filter and the event sampling strategy and can handle packet dropout and time delay [66, 67].
Based on this idea, H∞ filtering based on the event triggered sampling strategy was studied for
a communication network with delay, and a sufficient condition ensuring the prescribed performance index and stability was obtained by using the Lyapunov–Krasovskii functional [68]. A general event triggering framework for discrete time-varying systems with a fading (attenuated) communication channel was developed, and a recursive LMI method was used to design the filter gain [69]. An LMI-based tool was developed for the H∞ filter based on event triggered sampling [70]. Distributed H∞ consensus filtering based on event triggered sampling was studied, a coordinated design algorithm was proposed to obtain the filter gain, and the threshold parameter of the event triggered sampling strategy was obtained based on the bounded real lemma [71]. The research on event triggered filtering reviewed above is all carried out for linear systems rather than nonlinear systems. Furthermore, most works only provide the filter design without a rigorous proof of filter performance and stability [72, 73]. Because the physical models in a CPS are mostly nonlinear, it is necessary to design nonlinear filters based on event triggered sampling and to prove their stability.
1.4 Development of Nonlinear Filtering

Generally, high accuracy state estimation can effectively improve the accuracy and effectiveness of a control system. As an important branch of control theory, state estimation theory has received extensive attention from academia. In the early nineteenth century, Gauss proposed the least squares estimation algorithm, which is regarded as the earliest optimal estimation method and is still applied today. Provided that the relevant Gramian matrix is nonsingular, this method only minimizes the squared error of the observation constraint equations and yields the unique solution of the unknown variables without requiring the statistical characteristics of the observations; it is therefore easy to realize from an engineering perspective. However, the least squares method suffers from inadequate estimation accuracy [74]. In the 1940s, Wiener and Kolmogorov successively proposed the Wiener filter for continuous time and discrete time systems, which laid the theoretical foundation of estimation, including prediction, filtering, and smoothing [75, 76]. The Wiener filter fully exploits the statistical characteristics of the measurements and input signals. Together with linear system theory, it gives the optimal way, in the least mean square error sense, to filter out disturbances with known statistical characteristics for a wide-sense stationary process. Since the Wiener filter estimates the state at the current time instance from all past observations of the system, it is only applicable to stationary stochastic processes and requires solving the Wiener–Hopf equation, which demands large storage and high computational complexity and is unsuitable for vector problems. It therefore has significant restrictions and cannot be extensively applied in practice. In 1960, in order to address the high accuracy state estimation problem of NASA's moon landing initiative, Kalman combined
the concepts of minimum mean square error, probability theory, and stochastic systems, introduced the state variable into the Wiener filtering framework, and proposed the epoch-making Kalman filter for linear state space models in the time domain [77]. The Kalman filter realizes the estimation of time-varying, non-stationary stochastic processes and multi-dimensional signals through iteration in the time domain, overcoming the constraints of the Wiener filter and making it easy to implement on a computer. Although it was later proved from multiple perspectives that the Kalman filter is the unbiased optimal filter in the sense of both mean square error and maximum likelihood [78–80], the Kalman filter is only applicable to linear systems, whereas nonlinear systems prevail in engineering. Therefore, the optimal nonlinear filter was derived based on the previous work, and the critical step in the iterative computation of the optimal nonlinear filter in the mean square error sense is to solve a second-order nonlinear partial differential equation [81–83]. Since this equation generally has no analytical solution and deriving a numerical solution requires huge computation, it is almost impossible to realize the iterative computation of the optimal nonlinear filter. Therefore, researchers began to seek suboptimal nonlinear filter designs with adequate accuracy and high filtering stability, the idea being to approximate the optimal nonlinear filter according to certain approximation principles. The approximation methods can be categorized into nonlinear filtering based on Taylor expansion approximation, filtering based on deterministic sampling, and filtering based on stochastic sampling.
1.4.1 Nonlinear Filtering Based on Taylor Expansion Approximation

To address the state estimation of nonlinear systems in the Apollo Program, Schmidt et al. proposed the extended Kalman filter (EKF) based on the Kalman filter [84, 85]. The EKF performs a Taylor series expansion of the nonlinear state and observation equations around the current estimate, retains only the first-order terms, and replaces the original nonlinear functions with their Jacobian matrices; the linear Kalman filter is then used for the iterated estimation. Since this algorithm is easy to implement, the EKF has been extensively applied in engineering over the last several decades. However, the EKF is only applicable to nonlinear systems that can be locally linearized, and a truncation error is introduced during the linearization. Consequently, the accuracy of the filter declines sharply, and the filter may even diverge, when the system is strongly nonlinear. To reduce the influence of the linearization error and increase the filtering accuracy of the EKF, various EKF variants have been developed. A nonlinear filter was proposed that used the estimation results as the
condition for the system linearization (i.e., the statistically linearized EKF, SL-EKF) [86]; however, this method has hardly been applied in engineering. The iterated extended Kalman filter (IEKF) was proposed to perform multiple linearizations of the measurement equation at the estimated system state [87]; it is similar to the Newton–Raphson method because it uses the new estimate in an iterated solution to reduce the linearization error and thus increase accuracy, but it becomes unstable when the system is not completely observable. In [88], the truncation error generated during linearization was merged into the system and measurement noise to form a virtual noise, the iterated estimation was conducted with the set-membership filter of the linear system, and the extended set-membership filter (ESMF) was proposed, which found application in the case of unknown but bounded noise. Besides, the EKF was regarded as an observer for the nonlinear system and sufficient conditions guaranteeing its stability were analyzed [89]. Based on this, LMIs were further used to provide more relaxed sufficient conditions for the EKF observer to guarantee system stability [90]. Sufficient conditions for the stochastic stability of the EKF were established and a stochastic stability index for nonlinear filters was defined [91]. Moreover, slack variables were added via the Riccati equation and a robust EKF was proposed to increase the filtering stability of the EKF [92]; although this filter increased the filtering stability, its accuracy declined due to the introduction of the slack variables. To address sudden disturbances, strong tracking filters based on the EKF were designed [93–95], which use the orthogonality of the observation information to recompute the filtering gains so that the filter is capable of dynamically tracking the physical system. Although the works above improved on the EKF, they still cannot inherently eliminate the linearization error caused by truncating the higher order terms of the Taylor series expansion. Therefore, the EKF and its variants are inapplicable to strongly nonlinear systems.
1.4.2 Nonlinear Filtering Based on Deterministic Sampling

With the rapid increase of computing power and the development of statistical theory, the local linearization technique was gradually abandoned in nonlinear filter design. The most representative alternatives are nonlinear filtering based on deterministic sampling and on stochastic sampling. Different from the local linearization of the EKF, nonlinear filtering based on deterministic sampling selects a certain number of deterministic sampling points according to certain rules. These sampling points are used to approximate the prior probability distribution of the system state and are then transformed by the nonlinear function. Finally, the transformed results are combined by a weighted sum to approximate the posterior probability distribution of the system state, so that the solution of the high order partial differential equation can be approximated. These
selected deterministic sampling points are called Sigma points, and the nonlinear filtering algorithms based on deterministic sampling are referred to as Sigma-point Kalman filters (SPKF) [96]. Owing to the different approximation concept, the SPKF has the following advantages over the EKF. On the one hand, there is no need to compute Jacobian matrices, so it is applicable to systems with non-differentiable nonlinear functions. On the other hand, approximating the probability distribution of the nonlinear system state is normally easier than directly approximating the nonlinear function. Moreover, filtering based on deterministic sampling can provide the first and second moments of the estimate with a higher approximation accuracy [97]. Considering these advantages, various selections of Sigma points have been proposed based on different numerical differentiation or integration methods. Based on numerical differentiation, the covariance matrix was approximated via the central difference method, leading to the central difference filter (CDF) [97]. The Stirling interpolation polynomial was used to linearize the nonlinear function, and a divided difference filter (DDF) was accordingly proposed [98]. This kind of method does not require partial derivatives of the function, is easy to implement, and offers higher filtering accuracy than the EKF at an equivalent computational cost. As another branch of nonlinear filtering based on deterministic sampling, filtering based on numerical integration plays the dominant role over filtering based on numerical differentiation and is widely applied in engineering [99]. Among these methods, the most representative one is the unscented Kalman filter (UKF). Based on the idea that "approximating the probability density distribution of the nonlinear state is generally much easier than approximating the nonlinear function," Julier and Uhlmann et al. proposed the unscented transformation (UT) in 1995 and, based on this transformation, the UKF [100–102]. The UT uses the current estimate and its error covariance matrix to select 2n + 1 Sigma points (where n is the dimension of the system state) and assigns weights that guarantee that these sampling points reproduce the first and second moments of the estimate; the approximate posterior probability distribution is then obtained. They also pointed out that the approximation accuracy of the mean and covariance matrix obtained with the first-order Taylor approximation in the EKF is of second order and fourth order, respectively, whereas the UKF provides fourth order accuracy for both and has a smaller error at higher orders. Like the CDF and DDF, the UKF has essentially the same computational cost as the EKF since there is no need to compute Jacobian matrices. Recently, more effort has been devoted to improving the UKF. The scaled UT was proposed to allow freely distributed sampling points [103]. A simple UT was developed in which only n + 1 asymmetric Sigma points are selected [104, 105]; despite the lower accuracy of this type of UT, it substantially reduces the computation. The filtering accuracy of the UKF with different sampling schemes was analyzed and compared, and the results showed that symmetric sampling gives the highest UKF accuracy [106].
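As a concrete illustration of the deterministic sampling idea summarized above, the following sketch generates the standard 2n + 1 symmetric Sigma points of the UT and propagates them through a nonlinear function. It is only an illustrative outline: the scaling parameter kappa, the helper names, and the toy map are assumptions, not part of the original text.

```python
import numpy as np

def unscented_sigma_points(x_mean, P, kappa=1.0):
    """Generate the 2n+1 symmetric Sigma points and weights of the UT."""
    n = x_mean.size
    S = np.linalg.cholesky((n + kappa) * P)      # S @ S.T = (n + kappa) * P
    sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)                   # central point carries weight kappa/(n+kappa)
    return sigma, w

def unscented_transform(f, x_mean, P, kappa=1.0):
    """Approximate the mean and covariance of f(x) for x ~ N(x_mean, P)."""
    sigma, w = unscented_sigma_points(x_mean, P, kappa)
    y = np.array([f(s) for s in sigma])          # propagate each Sigma point
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Toy usage: push a 2-D Gaussian through a mildly nonlinear polar-style map
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, C = unscented_transform(f, np.array([1.0, 0.1]), 0.01 * np.eye(2))
```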
Ito and Xiong et al. used the Gauss–Hermite numerical integration rule under the Gaussian assumption to select the Sigma points and determine the corresponding weights, proposed the Gauss–Hermite filter (GHF), and pointed out that the UKF is an order-2 approximation of the GHF [107]. A Gaussian mixture model was further used to approximate non-Gaussian distributions and, based on the GHF, a Gaussian mixture filter was proposed to address nonlinear system noise with a non-Gaussian distribution. Similarly, the Gauss–Hermite integration rule was used to design other nonlinear filters [108, 109]. Although the GHF can reach a higher approximation order through the selection of Sigma points, the number of selected Sigma points grows exponentially with the state dimension, causing a heavy computational burden, i.e., the curse of dimensionality [110]. To address this issue, the Smolyak principle was used to design the sparse grid quadrature filter (SGQF) [97], which uses a small number of Sigma points to achieve accuracy close to the GHF. To further reduce the number of Sigma points of the SGQF, an anisotropic SGQF was designed according to the uncertainty and importance of the various dimensions of the system state [111], in which the Sigma points are distributed differently along the various dimensions and their distribution and accuracy can be adjusted by a vector describing the importance of each dimension; sufficient conditions were also given for an approximation level better than the UKF and SGQF. Although the UKF and GHF can achieve high estimation accuracy, the rules above always produce a central Sigma point with a comparatively high weight. As a result, the posterior covariance matrix of the system state, which is positive definite in theory, may become non-positive definite during numerical calculation; this is the numerical stability problem. To address this issue, Arasaratnam et al. designed the cubature Kalman filter (CKF) [112, 113]. Although the CKF also chooses Sigma points via a numerical integration rule like the UKF and GHF, it adopts the spherical-radial cubature rule, which transforms the multiple integral in Cartesian coordinates into an integral in spherical coordinates and further splits the original multiple integral into two parts, i.e., the integral over the surface of the sphere and the integral along the radial axis. When the third-order spherical-radial cubature rule is adopted, the number of Sigma points is 2n. The CKF may be regarded as the special case of the UKF in which the weight of the central Sigma point is 0 [114]; the CKF therefore overcomes the poor numerical stability for high dimensional systems. It was further pointed out that the distance between the Sigma points selected by the CKF or UKF and the estimated value is related to the dimension of the system state, and it was therefore proposed to process the selected Sigma points with an orthogonal transformation so that this distance becomes unrelated to the dimension of the system state. Considering that the accuracy of the CKF is lower than that of the GHF, a spherical integral of arbitrary order based on the Silvester integration rule was used to select the Sigma points and a high order CKF was accordingly designed [115]. Theoretically, this high order CKF can reach an approximation of any order and can achieve filtering accuracy close to the GHF with comparatively low computation.
Considering that increasing the order of the spherical integral causes the number of required Sigma points to grow polynomially, whereas increasing the order of the radial integral only causes it to grow linearly, a self-
adaptive CKF was further designed [116], in which, according to the accuracy requirement, only the order of the radial integral is adjusted to avoid a substantial growth in the number of Sigma points.
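To make the spherical-radial rule of the CKF concrete, the sketch below (an illustration under the third-degree rule, with made-up test values) generates the 2n equally weighted cubature points and uses them to approximate a Gaussian-weighted expectation.

```python
import numpy as np

def cubature_points(x_mean, P):
    """Third-degree spherical-radial cubature: 2n points, all with weight 1/(2n)."""
    n = x_mean.size
    L = np.linalg.cholesky(P)                                   # P = L @ L.T
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])      # columns +/- sqrt(n) * e_i
    pts = x_mean[:, None] + L @ unit
    w = np.full(2 * n, 1.0 / (2.0 * n))
    return pts.T, w

def gaussian_expectation(f, x_mean, P):
    """Approximate E[f(x)] for x ~ N(x_mean, P) with the cubature rule."""
    pts, w = cubature_points(x_mean, P)
    return sum(wi * f(p) for wi, p in zip(w, pts))

# Toy usage: E[||x||^2] for x ~ N(0, I_3) is 3, and the rule reproduces it exactly
val = gaussian_expectation(lambda x: x @ x, np.zeros(3), np.eye(3))
```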
1.4.3 Nonlinear Filtering Based on Stochastic Sampling

The aforementioned filters are all based on the Gaussian assumption. However, for a nonlinear system, even if the system and observation noises are both Gaussian white noise, the probability distribution of the system state does not exactly follow a Gaussian distribution. To obtain the optimal solution for the nonlinear model, the particle filter (PF), which performs a global approximation of the optimal Bayesian filter based on stochastic sampling, was developed [117]. Although this method also uses numerical techniques to solve the multiple integrals of nonlinear filtering, it differs from nonlinear filtering based on deterministic sampling because the PF uses the Monte Carlo method to stochastically select the sampling points (particles), and the statistical information of the selected particles is then used to approximate the probability distribution of the state. As the PF places no specific requirement on the noise, it is applicable to any noise with a known distribution, and it has therefore received widespread research attention. In the 1950s, Hammersley and Morton et al. proposed the sequential importance sampling (SIS) method, which is regarded as the earliest particle filtering architecture [118]. This algorithm selects a group of weighted stochastic particles to approximate the posterior probability distribution of the system state and then uses these particles and their weights to obtain the state estimate. However, the weights of most of the particles selected by this method gradually approach zero after several iterations, a phenomenon known as particle degeneracy. Moreover, the filtering accuracy of this method is positively related to the number of particles, while an excessive number of particles sharply raises the computation. To overcome particle degeneracy, Gordon proposed the sampling importance resampling (SIR) method in 1993. The key of this method lies in resampling the particles after the weight update, which is realized by abandoning the particles with small weights and duplicating those with large weights so that all particles end up with equal weights [119]. Based on this technique, the bootstrap filter was designed, laying the foundation for subsequent research on particle filtering. Various resampling strategies were further proposed to improve its performance [120–122]. However, due to the resampling mechanism, many particles may become identical copies of the same particle when a sudden disturbance occurs, which results in particle impoverishment. To address this problem, an interpolation method was used to fit the discrete distribution formed by the particles and weights before resampling [123], based on which the regularized particle filter was designed. The Markov chain Monte Carlo method
was used after resampling to weaken the correlation between particles and effectively inhibit particle impoverishment [124]. Moreover, it was pointed out that the convergence rate of the PF is positively related to how well the importance sampling function approximates the actual distribution of the system [125]. Therefore, the convergence rate and filtering accuracy of the PF can be improved by selecting an importance sampling function closer to the actual distribution without increasing the number of sampled particles. Based on this idea, and combining the local linearization of the EKF, the estimated posterior distribution was chosen as the importance sampling function and the extended Kalman particle filter was designed, which improved the accuracy and convergence rate without increasing the number of particles [126]. Further work includes the unscented particle filter [127], the Gauss–Hermite particle filter [128], the Gaussian particle filter [129], and the auxiliary particle filter [130], among others. Although the PF is applicable to any noise with a known distribution, its implementation is complicated and the resulting computational burden is heavy, leading to poor real-time performance; this is the bottleneck in the development and engineering application of the PF. In the rest of this book, we present a detailed design of event triggered dynamic state estimation for WAMS applications with various practical concerns. We begin in Chap. 2 with some fundamentals of event triggered DSE: the event triggered sampling strategies and their design are demonstrated for both linear and nonlinear filters. In Chap. 3 we proceed with the essential design of the event triggered cubature Kalman filter (ETCKF) as a design reference, which not only reduces the amount of data transmission but also ensures the estimation accuracy, in order to address the congestion encountered in WAMS; the stochastic stability of the ETCKF is given to facilitate the design of its critical parameters. From Chap. 4 through the end of this book, we examine the practical concerns of WAMS, such as a guaranteed arrival rate for designing the communication medium, balancing the heavy computation when the PF is used to handle non-Gaussian noise, lightweight computation that remains robust to non-Gaussian noise, the packet dropout frequently encountered in practical WAMS communication media, and the cyber attacks widely present in the modern power grid, with emphasis on the implementation of DSE. In designing the various event triggered filters for WAMS, we try to illustrate the DSE structure design and implementation approach that we have found effective in addressing the practical concerns in WAMS applications.
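The weight update and resampling step discussed earlier in this section can be illustrated by the short sketch below. It is a generic SIR pass under simplified assumptions (scalar state, direct observation); the function names and noise values are illustrative, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights):
    """Draw particle indices with systematic resampling."""
    n = len(weights)
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                   # guard against rounding
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(cumulative, positions)

def sir_step(particles, weights, y, h, meas_std):
    """One SIR update: reweight by the likelihood of y, then resample."""
    lik = np.exp(-0.5 * ((y - h(particles)) / meas_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()
    idx = systematic_resample(weights)                     # duplicate heavy, drop light particles
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: scalar state observed directly in Gaussian noise
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
particles, weights = sir_step(particles, weights, y=0.8, h=lambda x: x, meas_std=0.5)
```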
Chapter 2
Event Triggered Sampling Strategies
This chapter introduces the commonly used event triggered sampling strategies for linear systems. Event triggered sampling strategies applicable to nonlinear systems will then be selected and designed by comparing the basic sampling principles of their linear counterparts. The design procedure of the linear filters will further serve as the foundation for designing the event triggered nonlinear filters.
2.1 Event Triggered Sampling Strategy

Considering the state estimation of remote PMUs in the smart grid, the event triggered sampling strategy is adopted at the PMUs to determine whether the measurements need to be transmitted to the remote filter, which can effectively reduce the data transmission and relieve the burden on the bandwidth. The basic principle of the event triggered sampling strategy is as follows. At each time instance k, after the sensor samples the measurement y_k, the event triggered sampling strategy determines through the logic value \gamma_k whether y_k satisfies the event triggered condition. When \gamma_k = 1, the sensor transmits the measurement at this time instance to the remote filter via the transmission network; otherwise, the measurement is not transmitted. The logic variable \gamma_k is formulated in (2.1):

\gamma_k = \begin{cases} 1, & y_k \text{ is transmitted}, \\ 0, & y_k \text{ is not transmitted}. \end{cases} \quad (2.1)
It is obvious that the key of the event triggered sampling strategy is the design of its condition. According to the inherent feature of the event triggered condition, the event triggered sampling strategies can be categorized into deterministic and stochastic event triggered sampling strategies. Some representative strategies of these two major types are introduced in this section; they can mainly be referred to [59, 131–134]. Before introducing the event triggered conditions of these representative strategies, some necessary definitions used subsequently are given first. Y_k denotes the information received by the remote filter at time instance k: when \gamma_k = 1, Y_k is the measurement y_k; when \gamma_k = 0, Y_k is the measurement information contained in the event triggered condition. Although the remote filter does not receive the measurement y_k in the latter case, Y_k still forms a joint Gaussian distribution with the system state. Based on this, the information set received by the remote filter at time instance k is defined as follows:

\mathcal{F}_k = \{Y_1, Y_2, \ldots, Y_k, \{\gamma_1, \gamma_2, \ldots, \gamma_k\}\}. \quad (2.2)
According to this definition, the remote filter knows \gamma_k at each time instance in addition to the event triggered condition, i.e., it knows at each time instance whether the measurement is transmitted or not. Moreover, for convenience, \hat{x}_k = E(x_k \mid \mathcal{F}_k) is defined as the state estimate, \hat{x}_{k|k-1} = E(x_k \mid \mathcal{F}_{k-1}) is the one-step prediction of the system state, P_k = \mathrm{Cov}(x_k - \hat{x}_k \mid \mathcal{F}_k) is the error covariance matrix of the state estimate, P_{k|k-1} = \mathrm{Cov}(x_k - \hat{x}_{k|k-1} \mid \mathcal{F}_{k-1}) is the error covariance matrix of the one-step prediction of the system state, \hat{y}_{k|k-1} = E(y_k \mid \mathcal{F}_{k-1}) is the one-step prediction of the measurement, and P_{y_k} = \mathrm{Cov}(y_k - \hat{y}_{k|k-1} \mid \mathcal{F}_{k-1}) is the error covariance matrix of the one-step prediction of the measurement.
2.1.1 Deterministic Event Triggered Sampling Strategy

For a deterministic event triggered sampling strategy, \gamma_k at each time instance is determined by whether or not the measurement belongs to a specific deterministic set, i.e.,

\gamma_k = \begin{cases} 0, & y_k \in \Omega_k, \\ 1, & \text{otherwise}, \end{cases} \quad (2.3)

where \Omega_k denotes the event triggered condition (the non-triggered set) at time instance k. This condition may be time varying or fixed according to the estimation performance requirement. The commonly used deterministic event triggered conditions are given as follows.
2.1.1.1 Send-on-Delta
The Send-on-Delta condition is the most commonly used event triggered condition. Under this condition, the sensor sends the measurement to the remote filter via the transmission network only when there is a substantial difference between
the current measurement and the last transmitted measurement; otherwise, no data is transmitted. According to this concept, the Send-on-Delta condition can be formulated as in (2.4):

\Omega_k = \{ y_k \in \mathbb{R}^m \mid \mathrm{dist}(y_k, y_{\tau_k}) \le \delta \}, \quad (2.4)

where \mathrm{dist}(\cdot) denotes a distance metric on \mathbb{R}^m, \tau_k is the time instance at which the last data was transmitted, and \delta is a design parameter adjustable according to the system performance requirement. Specifically, when the (squared) 2-norm on \mathbb{R}^m is used as the distance metric, this event triggered condition is written as follows:

\Omega_k = \{ y_k \in \mathbb{R}^m \mid (y_k - y_{\tau_k})^T (y_k - y_{\tau_k}) \le \delta \}. \quad (2.5)
The set \Omega_k above becomes an ellipsoidal set whose size is determined by the design parameter \delta. The advantage of the Send-on-Delta condition is that it only involves the sensor measurements, so no data feedback from the remote filter is needed. As long as the parameter \delta is reasonably selected, the communication rate can be substantially reduced. However, when the system state fluctuates frequently and substantially, \delta becomes hard to design because the measurements also change frequently and substantially. If \delta is too small, the event triggered condition is violated frequently and the measurements are transmitted almost all the time, so the communication rate reduction becomes meaningless. If \delta is too large, most measurements satisfy the event triggered condition and are never transmitted; the remote filter can then only use the information contained in the event triggered condition for the state estimation, the filtering performance declines, and the high accuracy requirement of dynamic state estimation cannot be met.
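A minimal sketch of the Send-on-Delta logic of (2.4) and (2.5) is given below; it assumes the plain squared 2-norm as the distance measure, and the class name and threshold are illustrative.

```python
import numpy as np

class SendOnDelta:
    """Transmit y_k only when it differs from the last transmitted measurement
    y_{tau_k} by more than delta in squared 2-norm, as in (2.5)."""
    def __init__(self, delta):
        self.delta = delta
        self.last_sent = None

    def gamma(self, y):
        if self.last_sent is None:          # always transmit the very first sample
            self.last_sent = y
            return 1
        d = y - self.last_sent
        if d @ d <= self.delta:             # inside the non-triggered set: do not transmit
            return 0
        self.last_sent = y
        return 1

# Toy usage: only the third sample moves far enough to be transmitted
trigger = SendOnDelta(delta=0.05)
gammas = [trigger.gamma(np.array([0.0, v])) for v in (0.0, 0.01, 0.5, 0.51)]
```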
2.1.1.2 Innovation Based Condition
To address the weakness of the Send-on-Delta condition, the innovation based condition was proposed [52]. In state estimation, the innovation refers to the difference between the current measurement and the one-step prediction of the measurement, indicating how much new information the current measurement contains. Therefore, it is reasonable to use the innovation to design the event triggered condition. The mathematical description of the innovation based condition is given in (2.6):

\Omega_k = \{ y_k \in \mathbb{R}^m \mid \| F_k^T (y_k - \hat{y}_{k|k-1}) \|_\infty \le \delta \}, \quad (2.6)
where F_k is obtained from the Cholesky decomposition of the inverse of the error covariance matrix of the one-step measurement prediction P_{y_k}, satisfying F_k^T F_k = P_{y_k}^{-1}, and \| \cdot \|_\infty is the infinity norm. Like Send-on-Delta, \delta is a design parameter chosen by the designer. Equation (2.6) indicates that only the measurements containing considerable innovation are transmitted to the remote filter. According to the Kalman update process, a measurement containing little innovation has little impact on the state update. Therefore, the innovation based event triggered sampling strategy not only substantially reduces the communication rate but also guarantees sufficient filtering accuracy. However, the innovation based condition still has inherent weaknesses. Firstly, computing the logic variable \gamma_k, which determines whether data transmission is needed, requires P_{y_k} and \hat{y}_{k|k-1} to be fed back from the remote filter during the estimation process, which conflicts with the intention of reducing communication (this problem can be addressed by equipping the sensor with a local filter, which will be specified subsequently). Secondly, (2.6) shows that the innovation based event triggered sampling strategy results in a truncated Gaussian probability distribution of the measurements instead of a Gaussian distribution, introducing difficulty into the design of filters, particularly under the Gaussian assumption.
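The innovation based test of (2.6) can be evaluated as in the sketch below, assuming the remote filter has already fed back the one-step prediction and its covariance; the factor F_k is obtained here from a Cholesky factor of the inverse covariance, and all numbers are toy values.

```python
import numpy as np

def innovation_trigger(y, y_pred, P_y, delta):
    """gamma_k = 0 (no transmission) when ||F^T (y - y_pred)||_inf <= delta,
    where F satisfies F^T F = P_y^{-1}."""
    L = np.linalg.cholesky(np.linalg.inv(P_y))   # L @ L.T = P_y^{-1}
    F = L.T                                      # hence F.T @ F = P_y^{-1}
    scaled_innovation = F.T @ (y - y_pred)
    return 0 if np.max(np.abs(scaled_innovation)) <= delta else 1

# Toy usage: a small innovation stays below the threshold, so nothing is sent
P_y = np.array([[0.20, 0.05],
                [0.05, 0.10]])
gamma = innovation_trigger(np.array([1.02, 0.49]), np.array([1.00, 0.50]), P_y, delta=1.0)
```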
2.1.1.3 Covariance Based Condition
The main objective of designing a state estimator for a linear Gaussian system is to minimize the error covariance. Therefore, the covariance based condition, which directly uses the error covariance matrix, has attracted considerable attention. Under this condition, the current measurement is transmitted to the remote filter when the difference between the one-step prediction error covariance and the minimum realizable error covariance of the estimate exceeds a certain threshold, formulated as follows:

\Omega_k = \{ y_k \in \mathbb{R}^m \mid \| C [P_{k|k-1} - \bar{P}] C^T \|_\infty \le \delta \}, \quad (2.7)

where C is the observation matrix of the linear system and \bar{P} is the minimum realizable error covariance of the estimate obtained under periodic sampling, whose value can be acquired by solving the discrete time algebraic Riccati equation (DARE). From (2.7), it can be observed that this condition does not depend on the current measurement but relies on the one-step prediction error covariance at each time instance. Moreover, the covariance based event triggered sampling strategy only utilizes the measurements received by the filter, which means that no measurement update is performed in the absence of measurement transmission. It is noteworthy that the covariance based condition is only applicable to linear systems, because for nonlinear systems the minimum realizable error covariance of the estimate is difficult to acquire.
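The minimum realizable covariance in (2.7) can be obtained, for example, by iterating the filtering Riccati recursion to its fixed point, which converges to the DARE solution; the sketch below is a hedged illustration with toy system matrices.

```python
import numpy as np

def steady_state_covariance(A, C, Q, R, iters=500):
    """Iterate the filtering Riccati recursion until it settles at the DARE fixed point."""
    P = Q.copy()
    for _ in range(iters):
        P_pred = A @ P @ A.T + Q
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        P = P_pred - K @ C @ P_pred
    return P

def covariance_trigger(P_pred, P_bar, C, delta):
    """Event (transmit) when ||C (P_pred - P_bar) C^T||_inf exceeds delta, cf. (2.7)."""
    M = C @ (P_pred - P_bar) @ C.T
    return 0 if np.max(np.abs(M)) <= delta else 1

# Toy system
A = np.array([[1.0, 0.1], [0.0, 0.98]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.04]])
P_bar = steady_state_covariance(A, C, Q, R)
```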
2.1.2 Stochastic Event Triggered Sampling Strategy

Unlike a deterministic event triggered sampling strategy, a stochastic event triggered sampling strategy does not decide with probability 0 or 1 at each time instance whether to transmit the measurement to the remote filter. Instead, the decision is made with a probability determined by a data dependent probability function. The probability function can be designed in any form, and its argument can be the one-step prediction of the system state or the last transmitted measurement; normally, the current measurement is selected. The probability function should guarantee that the measurements generated by this event triggered sampling strategy keep the Gaussian property instead of becoming truncated Gaussian. In this way, the nonlinear problem introduced by the truncated Gaussian distribution in the deterministic event triggered sampling strategy turns into a linear one, which is convenient for the filter design. The stochastic event triggered sampling strategy works as follows. Firstly, a parameter \eta_k is randomly generated according to the uniform distribution over [0, 1]. Secondly, \eta_k is compared with the probability computed by the probability function to determine whether to transmit the current measurement to the remote filter. This method is mathematically described by the following formula:

\gamma_k = \begin{cases} 0, & \eta_k \le \varphi(y_k), \\ 1, & \text{otherwise}, \end{cases} \quad (2.8)

where \varphi(\cdot) denotes the probability function. As shown by (2.3) and (2.8), the deterministic event triggered sampling strategy can be considered as a special case of its stochastic counterpart. The key of the stochastic event triggered sampling strategy is to design a reasonable probability function \varphi(\cdot). Two commonly used probability functions are introduced as follows.
2.1.2.1 Stochastic Open Loop Condition
The stochastic open loop condition designs the probability function as follows:

\varphi(y_k) = \exp\!\left(-\tfrac{1}{2}\, y_k^T Y y_k\right), \quad (2.9)
where Y is a nonsingular positive definite weighting matrix. It is a design parameter that determines the shape of the Gaussian distribution. As shown in (2.9), this event triggered condition only involves the measurement generated by the local sensor and does not require any data feedback from the remote filter. Moreover, it is mainly applicable to systems whose state is stable and close to 0, e.g., error based tracking systems.
2.1.2.2 Stochastic Innovational Condition
The stochastic innovational condition is also called the stochastic closed loop condition. It corresponds to the innovation based condition within the stochastic framework. Its probability function is written as follows:

\varphi(y_k) = \exp\!\left(-\tfrac{1}{2}\,(y_k - \hat{y}_{k|k-1})^T Y (y_k - \hat{y}_{k|k-1})\right). \quad (2.10)
Similarly, Y is a nonsingular positive definite weighting matrix acting as the design parameter. As shown in (2.10), this stochastic condition aims at keeping the estimation error within tolerance. Its obvious disadvantage is that it requires the one-step prediction of the measurement to be returned from the remote filter to the sensor; the communication burden is therefore doubled compared with the stochastic open loop condition. It is noteworthy that the motivation for choosing (2.9) and (2.10) as probability functions can be explained as follows. (1) With the stochastic conditions (2.9) and (2.10), the sensor is very likely not to transmit the measurement to the remote filter when y_k or y_k - \hat{y}_{k|k-1} is very small, and very likely to transmit it otherwise. Therefore, even if the remote filter does not receive the measurement, it can still perform a measurement update, because with high probability the missing measurement carries little innovation. This is a strong advantage of the stochastic strategies over event triggered sampling strategies designed offline, e.g., the covariance based condition, since an offline designed strategy performs no measurement update when no measurement is transmitted. (2) Both (2.9) and (2.10) are highly similar to the probability density function of a Gaussian distribution, which plays a critical role in the filter design (the filter design using the stochastic innovational condition will be given in detail). The probability function design and the uniformly distributed stochastic variable \eta_k work together to avoid the nonlinearity that would otherwise arise from a truncated Gaussian prior distribution of the system state. (3) The design parameter Y is adjustable and introduces one more degree of design freedom to balance the communication rate and the filtering performance.
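The stochastic rule of (2.8) with the probability functions (2.9) and (2.10) can be sketched as follows; the weighting matrix Y and the test vectors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_gamma(value, Y):
    """gamma_k per (2.8): draw eta ~ U[0,1] and keep the data at the sensor (gamma = 0)
    only if eta <= exp(-0.5 * value^T Y value). 'value' is y_k for the open loop
    condition (2.9) or the innovation y_k - yhat_{k|k-1} for (2.10)."""
    phi = np.exp(-0.5 * value @ Y @ value)
    return 0 if rng.random() <= phi else 1

Y = 4.0 * np.eye(2)
g_open = stochastic_gamma(np.array([0.05, -0.02]), Y)      # small y_k: likely gamma = 0
innovation = np.array([1.0, 0.8]) - np.array([0.2, 0.1])
g_closed = stochastic_gamma(innovation, Y)                  # large innovation: likely gamma = 1
```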
Table 2.1 Comparison of event triggered sampling strategies

Strategy                | Computation complexity | System requirement                   | Data return? | Applicable for nonlinear system?
Send-on-Delta           | Easy                   | System cannot fluctuate frequently   | No           | Yes
Innovation based        | Medium                 | No                                   | Yes          | Yes
Covariance              | Complex but offline    | No                                   | No           | No
Stochastic open loop    | Easy                   | Stable system state close to zero    | No           | Yes
Stochastic innovational | Complex                | No                                   | Yes          | Yes
2.1.3 Comparisons of Event Triggered Sampling Strategy

This section gives a detailed comparison among the event triggered sampling strategies mentioned above from the perspectives of application, computation complexity, and feasibility for nonlinear systems, which serves as the criterion for the subsequent selection of the event triggered sampling strategy used herein. The comparison results are shown in Table 2.1. As Table 2.1 shows, all triggered conditions are applicable to nonlinear systems except the covariance based condition, because the minimum realizable steady state error covariance of the estimate cannot be calculated for a nonlinear system. However, the Send-on-Delta condition and the stochastic open loop condition impose stringent requirements on the system and therefore cannot satisfy the high accuracy requirement of DSE. Both the innovation based condition and the stochastic innovational condition need data to be returned from the remote filter to the sensor, which increases the communication burden compared with the other conditions; however, as mentioned above, this problem can be overcome by equipping the sensor with a local filter. Considering these aspects, the innovation based condition and the stochastic innovational condition are selected to design the event triggered sampling strategy and the corresponding remote filter for the nonlinear system.
2.2 Event Triggered Linear Filter

This section introduces the design of the event triggered linear filter based on the innovation based condition and the stochastic innovational condition.
The linear system is considered as below: xk+1 = Axk + νk , yk = Cxk + ωk ,
(2.11)
where x_k \in \mathbb{R}^n is the system state of dimension n, A \in \mathbb{R}^{n \times n} is the system state matrix, y_k \in \mathbb{R}^m is the system measurement of dimension m, and C \in \mathbb{R}^{m \times n} is the system observation matrix. \nu_k \in \mathbb{R}^n and \omega_k \in \mathbb{R}^m are the system noise and observation noise, respectively, which follow zero-mean Gaussian distributions with covariance matrices Q > 0 and R > 0. Moreover, it is assumed that the initial system state x_0 follows the Gaussian distribution \mathcal{N}(\bar{x}_0, P_0) and that the initial state, system noise, and observation noise are mutually independent. For the linear system, the Kalman filter is the unbiased optimal filter in the sense of mean square error and maximum likelihood [78]; therefore, the event triggered linear filters are designed based on the Kalman filter. According to the basic principle of the event triggered sampling strategy, the remote filter cannot receive the measurement from the sensor at every time instance, and the filtering accuracy would be severely degraded if the filter were updated only upon receipt of a measurement. The event triggered condition and the logic variable \gamma_k, which decides whether the data is transmitted at each time instance, are known to the remote filter. Therefore, the key of the event triggered filter design is how the remote filter uses the measurement information contained in the event triggered condition for the update when \gamma_k = 0, i.e., in the absence of data transmission. Since both the innovation based condition and the stochastic innovational condition need intermediate quantities to be returned from the remote filter to the sensor in order to calculate the logic value \gamma_k, the structure of the state estimation based on these two triggered conditions is depicted in Fig. 2.1. The event triggered Kalman filters based on the innovation based condition and the stochastic innovational condition are introduced in detail as follows.
Fig. 2.1 Block diagram of the event-triggered filtering system, in which a data return from the remote filter is required to evaluate the event triggered condition
2.2.1 Innovation Based Event Triggered Kalman Filter

Like the traditional Kalman filter, it is assumed that the system state follows a Gaussian distribution, i.e.,

x_k \sim \mathcal{N}(\hat{x}_k, P_k). \quad (2.12)
Based on this assumption, the time update process is firstly carried out like Kalman filter (i.e., one-step prediction) as shown below xˆk|k−1 = Axˆk−1 , Pk|k−1 = APk−1 AT + Q.
(2.13)
Moreover, the event triggered sampling strategy requires intermediate quantities from the estimation process to calculate the logic variable \gamma_k. Therefore, it is necessary to calculate the following one-step prediction of the measurement and its error covariance matrix during the update process:

\hat{y}_{k|k-1} = CA\hat{x}_{k-1}, \qquad P_{y_k} = C\left(AP_{k-1}A^T + Q\right)C^T + R. \quad (2.14)
After that, the remote filter transmits these two variables back to the sensor, and F_k is calculated through the Cholesky decomposition. The logic variable \gamma_k is then calculated according to (2.3) and (2.6) to determine whether the current measurement y_k needs to be transmitted or not. Different from the traditional Kalman filter, which directly conducts the measurement update after the time update, the event triggered filter needs to implement different measurement updates according to the logic value \gamma_k. When \gamma_k = 1, i.e., the measurement is transmitted to the remote filter, the measurement update is identical to that of the traditional Kalman filter:

\hat{x}_k = \hat{x}_{k|k-1} + K_k (y_k - \hat{y}_{k|k-1}), \qquad P_k = P_{k|k-1} - K_k C P_{k|k-1}, \quad (2.15)

where K_k is the filtering gain computed as follows:

K_k = P_{k|k-1} C^T \left( C P_{k|k-1} C^T + R \right)^{-1}. \quad (2.16)
When \gamma_k = 0, i.e., there is no data transmission, the filter does not know the measurement but does know that the measurement at the current time instance is contained in the innovation based condition set. Therefore, the filter can update according to this information. This process is written as follows:
\hat{x}_k = \hat{x}_{k|k-1}, \qquad P_k = P_{k|k-1} - \lambda(\delta) K_k P_{y_k} K_k^T, \quad (2.17)

where

\lambda(\delta) = \frac{2}{\sqrt{2\pi}}\, \delta \exp\!\left(-\frac{\delta^2}{2}\right) \left[1 - 2D(\delta)\right]^{-1}, \quad (2.18)

and D(\cdot) is the Q-function of the standard Gaussian distribution, i.e., the right tail probability. Equation (2.17) shows that, despite the lack of a measurement, the state estimate is simply replaced by the one-step prediction of the system state, while the error covariance matrix of the system state is corrected by introducing the variable \lambda(\delta), which depends on the design parameter \delta of the innovation based condition. Since Kalman filtering is an iterative process, although the corrected error covariance matrix has no influence on the current state estimate, it affects the filtering update at the next time instance; in this way, the influence of the missing measurement can be compensated. This section only introduces the design of the innovation based event triggered filter; the derivation can be found in [52]. Moreover, Chap. 3 will introduce the design and derivation of the innovation based event triggered nonlinear filter suitable for smart grid applications.
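Putting (2.13) to (2.18) together, one step of the innovation based event triggered Kalman filter can be sketched in code as below. This is an illustrative rendering only; the helper names are assumptions and the gamma flag is supplied by the sensor-side trigger.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def gauss_q(x):
    """Right-tail probability D(x) = P(Z > x) of the standard Gaussian."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def lam(delta):
    """lambda(delta) as in (2.18)."""
    return (2.0 / sqrt(2.0 * pi)) * delta * exp(-delta ** 2 / 2.0) / (1.0 - 2.0 * gauss_q(delta))

def et_kf_step(x_hat, P, gamma, y, A, C, Q, R, delta):
    """One iteration of the innovation based event triggered Kalman filter."""
    x_pred = A @ x_hat                               # time update (2.13)
    P_pred = A @ P @ A.T + Q
    P_y = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(P_y)            # gain (2.16)
    if gamma == 1:                                   # measurement received: (2.15)
        x_hat = x_pred + K @ (y - C @ x_pred)
        P = P_pred - K @ C @ P_pred
    else:                                            # only set information: (2.17)
        x_hat = x_pred
        P = P_pred - lam(delta) * K @ P_y @ K.T
    return x_hat, P
```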
2.2.2 Stochastic Innovational Event Triggered Kalman Filter

The time update is the same as for the innovation based event triggered Kalman filter, i.e., (2.13), but under this triggering strategy the remote filter only needs to transmit the one-step prediction of the measurement back to the sensor, i.e., the first formula in (2.14). The triggering probability is calculated according to (2.10) and compared with the stochastic value \eta_k generated from the uniform distribution over [0, 1] to obtain the logic variable \gamma_k. When \gamma_k = 1, the measurement update is the same as that of the innovation based event triggered filter, i.e., (2.15). However, when \gamma_k = 0, the remote filter does not receive the measurement and updates according to the measurement information contained in (2.10). The update is as follows:

\hat{x}_k = \hat{x}_{k|k-1}, \qquad P_k = P_{k|k-1} - K_k C P_{k|k-1}, \quad (2.19)

where

K_k = P_{k|k-1} C^T \left( C P_{k|k-1} C^T + R + Y^{-1} \right)^{-1}. \quad (2.20)
As (2.19) shows, like the innovation based event triggered Kalman filter, the state estimate is replaced by the one-step prediction of the system state in the absence of a measurement. Regarding the update of the error covariance matrix, although it has the same form as in the traditional Kalman filter, the design parameter Y of the triggering condition is introduced into the filtering gain K_k as a correction, so that the degradation of the filtering accuracy caused by the missing measurement is relieved. Moreover, it is noteworthy that the specific design of the stochastic innovational condition strictly preserves the Gaussian property of the posterior/prior distributions of the system state, whereas the innovation based condition introduces nonlinearity through the truncated Gaussian prior distribution. Therefore, the estimation accuracy of the stochastic innovational event triggered Kalman filter is higher than that of the innovation based one. However, this conclusion only applies to the linear system; the accuracy of event triggered filters using these two conditions for nonlinear systems is still unknown. Therefore, the detailed design of an event triggered nonlinear filter using the stochastic innovational condition will be given in a subsequent chapter, and the estimation accuracy of the event triggered nonlinear filters using these two conditions will be compared. Like the previous section, this section only introduces the design of the stochastic innovational event triggered Kalman filter; its derivation can be found in [45].
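For comparison, the gamma_k = 0 branch of the stochastic innovational filter differs from the previous listing only through the modified gain of (2.20); a short sketch, under the same illustrative assumptions, is given below.

```python
import numpy as np

def et_kf_stochastic_no_measurement(x_hat_prev, P_prev, A, C, Q, R, Y):
    """Update (2.19)-(2.20) when no measurement is received (gamma_k = 0)."""
    x_pred = A @ x_hat_prev
    P_pred = A @ P_prev @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R + np.linalg.inv(Y))  # gain (2.20)
    return x_pred, P_pred - K @ C @ P_pred                                     # (2.19)
```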
2.2.3 Intermittent Kalman Filter

Prior to the event triggered sampling strategy, modeling the packet dropout as a Bernoulli process was commonly used to deal with inadequate bandwidth; the corresponding Kalman based filter is called the intermittent Kalman filter. As the nonlinear intermittent filter will be adopted as a comparison to the event triggered nonlinear filter in subsequent chapters, it is briefly introduced here. The time update of the intermittent Kalman filter is identical to that of the previous two event triggered filters, i.e., (2.13). When the remote filter receives the measurement, its update is given by (2.15). Unlike the event triggered filter, since the probability of measurement transmission is stochastic and identical at each time instance, the remote filter has no knowledge about the measurement when it is not received. Therefore, the filter does not carry out a measurement update, and the one-step prediction and the one-step prediction error covariance matrix of the state are used as the final results as follows:

\hat{x}_k = \hat{x}_{k|k-1}, \qquad P_k = P_{k|k-1}. \quad (2.21)
The comparison among (2.15), (2.17), (2.19), and (2.21) shows that the estimation accuracy of the event triggered Kalman filters is higher than that of the intermittent Kalman filter, for two reasons. On the one hand, the event triggered sampling strategy transmits the measurements containing innovation to the remote filter and avoids transmitting measurements carrying little innovation, so as to reduce the communication rate; by contrast, with a manually set packet dropout rate, measurements are abandoned stochastically without considering their individual importance. On the other hand, the event triggered filter is designed to use the measurement information contained in the triggering condition for the measurement update even when no measurement is received, whereas the intermittent filter has no such mechanism.
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System

2.3.1 ET-EKF Design

Although the event triggered filter is effective for linear systems, the power system, consisting of generators, transformers, etc., is typically nonlinear. Therefore, the practical application of event triggered filtering must be specifically considered for nonlinear systems. This section applies the innovation based condition to the simplest nonlinear variant of the Kalman filter, i.e., the extended Kalman filter (EKF). The design of the event triggered EKF (ET-EKF) lays the foundation for the event triggered nonlinear filters in the subsequent chapters. The general nonlinear model is considered as

x_{k+1} = f(x_k) + \omega_k, \qquad y_k = h(x_k) + \nu_k,
(2.22)
where x_k \in \mathbb{R}^n is the system state of dimension n and y_k \in \mathbb{R}^p is the measurement output of dimension p. The functions f(x) and h(x) are continuously differentiable for all x. The process noise \omega_k \in \mathbb{R}^n and measurement noise \nu_k \in \mathbb{R}^p are zero-mean white Gaussian noises with covariance matrices Q > 0 and R > 0, respectively. The initial state x_0 follows a Gaussian distribution with zero mean and covariance matrix P_0. Moreover, x_0, \omega_k, and \nu_k are assumed to be mutually independent. The ET-EKF is developed based on the two-step iteration of the EKF, where the Taylor expansion of the system function f(x) and the observation function h(x) is carried out at \hat{x}_k and \hat{x}_{k|k-1}, respectively, which can be expressed by
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
f (xk ) = f (xˆk ) + Ak (xk − xˆk ) + φ(xt , xˆk ), h(xk ) = h(xˆk|k−1 ) + Ck (xk − xˆk|k−1 ) + χ (xk , xˆk|k−1 ),
33
(2.23)
where Ak , Ck are the Jacobi matrices of f (x) and h(x) at xˆk and xˆk|k−1 , respectively, i.e., Ak =
∂f ∂x x=xˆk
Ck =
∂h . ∂x x=xˆk|k−1
(2.24)
The ET-EKF design consists of two parts, which are the event triggered sampling strategy design at the sensor node and the corresponding event triggered filtering at the remote center. Event Triggered Sampling Strategy at the Sensor Node When xˆk and Pk at the previous time instance are already acquired, the one-step prediction identical to that of EKF is carried out as xˆk+1|k = f (xˆk ) Pk+1|k = Ak Pk ATk + Q.
(2.25)
The development of event triggered sampling strategy is inspired from the innovation based condition for linear system so that it is necessary to further derive the one-step observation as follows: yˆk+1|k = h(xˆk+1|k ),
(2.26)
and its corresponding variance matrix and correction matrix are T Py,k+1|k = Ck+1 Pk+1|k Ck+1 + R, −1/2
Fk+1|k = Py,k+1|k .
(2.27)
Therefore, the non-triggered set in the ET-EKF can be designed as follows: T (yk+1 − yˆk+1|k )∞ ≤ ξ }. k+1 = {yk+1 |Fk+1
(2.28)
Correspondingly, based on the definition of the logic variable in (2.3), the event triggered sampling strategy of ET-EKF can be designed by γk =
1 if yk+1 ∈ / k+1 0 if yk+1 ∈ k+1 .
(2.29)
As a result, the pseudo-code of the event triggered strategy of ET-EKF at the sensor node can be implemented as Algorithm 2.1.
34
2 Event Triggered Sampling Strategies
Algorithm 2.1 Event triggered strategy of ET-EKF at the sensor node Input: Observation {yk }, the preset arrival rate ξ Output: triggering logic variable {γk } 1: Initialization: 2: xˆ0|0 = x0 , P0 = R0 3: Iteration process: 4: while k ≥ 1 do 5: local filter output xˆk+1|k Pk+1|k ∂h 6: Ck+1 = ∂x x=xˆk+1|k
7: yˆk+1|k = h(xˆk+1|k ) T 8: Py,k+1|k = Ck+1 Pk+1|k Ck+1 +R −1 9: [Fk+1|k , Hk+1|k ] = Cholesky Decomposition (Py,k+1|k ) T (y 10: if Fk+1 − y ˆ ) ≤ ξ then k+1 k+1|k ∞ 11: return γk+1 = 0 12: else 13: return γk+1 = 1 14: end if 15: end while
Filtering Design at the Remote Center The update process of ET-EKF at the remote center, corresponding to the event triggered sampling strategy designed above, is dependent on whether the observation is received or not at the remote. When γk = 1, the observation yk is received at the remote center so that the update process of ET-EKF is same as that of EKF as xˆk+1 = xˆk+1|k + Kk (yk+1 − h(xˆk+1|k )), Pk+1 = Pk+1|k − Kk Ck+1 Pk+1|k ,
(2.30)
where T (Ck+1 Pk+1|k Ck+1 + R)−1 . Kk = Pk+1|k Ck+1
(2.31)
When γk = 0, yk is not received at the remote center so that (2.30) cannot be executed. The innovational information yk ∈ k can be still acquired at the remote center, by which the update process is designed by xˆk+1 = xˆk+1|k , ξ2 2 Pk+1 = Pk+1|k − √ ξ e− 2 [1 − 2D(ξ )]−1 Kk Ck+1 Pk+1|k . 2π
As a result, the filtering algorithm can be designed as in Algorithm 2.2.
(2.32)
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
35
Algorithm 2.2 Filtering of ET-EKF at the remote center Input: The condition of information received {γk }, observation information containing the triggered information {Yk }, preset arrival rate Output: Filtering result xˆk 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17:
δ2
Initialization: xˆ0|0 = x0 , P0 = R0 , β(ξ ) = √2 δe− 2 [1 − 2D(δ)]−1 2π Iteration process: while k ≥ 1 do Ak−1 = ∂f ∂x (xˆ k−1 ) xˆk|k−1 = f (xˆk−1 ) Pk|k−1 = Ak−1 Pk−1 ATk−1 + Q ∂h Ck = ∂x (xˆk|k−1 ) Kk = Pk|k−1 CkT (Ck Pk|k−1 Ck + R)−1 if γk = 1 then xˆk = xˆk|k−1 + Kk (yk − h(xˆk|k−1 )) Pk = Pk|k−1 − Kk Ck Pk|k−1 else xˆk = xˆk|k−1 Pk = Pk|k−1 − β(ξ )Pk|k−1 CkT (Ck Pk|k−1 Ck + R)−1 Ck Pk|k−1 end if return xˆk end while
2.3.2 IEEE 39 Buses Simulation Platform Nowadays, the renewable energy, such as the wind energy, solar energy, and so on, plays an irreplaceable role as the green replacement for fossil fuel. Accordingly, the stability and sustainability of power grid has been challenged due to its penetration. In conventional power grid, the energy measurement system (EMS) is the key to these challenges. However, due to the slow update rate of supervisory control and data acquisition systems (SCADA), the conventional power grid can only sustain the state estimation (SE) for the steady state but fail to capture the dynamics of power grid. Although the SE is important from the viewpoint of power system, it is still inadequate for system monitoring specially after the deep penetration of renewable energy. The dynamic state estimation becomes feasible due to the development of the state-of-the-art PMUs. The IEEE 39 buses 10 generator system is shown in Fig. 2.2, where G1 −G10 represents 10 synchronous generators and the downward arrow indicates the loading. The PMU is installed at the point of common coupling (PCC) to provide the necessary voltage and current information of generator. According to the generator of exciting voltage Ef d , 10 generators can be divided into three categories. Type I includes G7 –G10 , whose exciting voltages are set constant. Type II includes G1 –G5 , whose exciting voltages are adjusted by IEEE DC1A automatic Voltage regulator
36
2 Event Triggered Sampling Strategies
Fig. 2.2 IEEE 39 buses 10 generators system
(AVR). Type III includes G6 , whose exciting voltage is adjusted by AVR and power system Stabilizer (PSS). Based on [139], its dynamic model for Type I generator can be given as
t − Ed (k) + Xq − Xq iq (k) + Ed (k), Tqo
t Eq (k + 1) = − Eq (k) − Xd − Xd id (k) + Ef d (k) + Eq (k), Tdo σ (k + 1) = t ω(k) − ωs ωb + σ (k),
Ed (k + 1) =
ω(k + 1) =
t ωs Tm − Ed (k)id (k) − Eq (k)iq (k) 2H
− Xq − Xd id (k)iq (k) − D(ω(k) − ωs ) + ω(k),
(2.33)
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
37
where [Ed , Eq , σ, ω] is the system state, representing the emf on d − q axis, rotor , T electrical angle, and rotor speed. The others are the system parameters. Tqo do are the transient time constant on d − q axis, respectively. t is the sampling period, and Xq , Xd are the synchronous reactance on d − q axis, and ωs , ωb are the nominal rotation speed and base speed. H is the inertial constant and D represents the damping constant of rotor. For the Type II generator, except (2.33), the dynamics modeling further includes the following formula: t − KE + Ax eBx Ef d (k) Ef d (k) + VR (k) + Ef d (k), TE t KF Rf (k + 1) = − Rf (k) + Ef d (k) + Rf (k), TF TF
KA K F t VR (k + 1) = − Ef d (k) − VR (k) + KA Rf (k) + KA vref − v(k) TA TF
Ef d (k + 1) =
+VR (k),
(2.34)
where TE , TA , and TF are the time constants of AVR exciter, regulator, and filter. KE , KA , and KF are the gains of AVR exciter, regulator, and filter. vref is the voltage reference of AVR. Therefore, the state variables are [Ed , Eq , σ, ω, Ef d , Rf , VR ], where Rf and VR are the AVR filter voltage and AVR regulator voltage, respectively. Only two types of generators are used for verification. Therefore, the dynamic modeling of Type II is not covered in this part. The detailed parameters are listed in Tables 2.2 and 2.3 [140]. Besides, the measurements of all generators are the outputs of PMU, [id , iq , vd , vq ], representing the voltage and current on d − q axis. The measurement equation is given as follows: 1 E (k) − v (k) , q q Xd 1 iq (k) = Ed (k) − vd (k) , Xq
id (k) =
vd (k) = vsin(σ (k) − θ ), vq (k) = vcos(σ (k) − θ ).
(2.35)
38
2 Event Triggered Sampling Strategies
Table 2.2 Generator parameters I No. of generator
H
D
Xd
Xd
Xq
Xq
Tdo
Tqo
1 2 3 4 5 7 8 9 10
42 30.3 35.8 28.6 26 26.4 24.3 34.5 248
4 9.75 10 10 3 8 9 14 33
0.1 0.295 0.2495 0.262 0.67 0.296 0.29 0.2106 0.296
0.031 0.0697 0.0531 0.0436 1.132 0.0491 0.057 0.057 0.006
0.069 0.282 0.237 0.258 0.62 0.292 0.280 0.205 0.0286
0.028 0.17 0.0876 0.166 0.166 0.186 0.0911 0.0587 0.005
10.2 6.65 5.7 5.69 5.4 5.66 6.7 4.79 5.9
1.5 1.5 1.5 1.5 0.44 1.5 0.41 1.96 1.5
vref 1.087 1.097 1.069 1.074 1.369
Ax 0.07 0.07 0.07 0.07 0.07
Bx 0.91 0.91 0.91 0.91 0.91
Table 2.3 Generator parameters II No. of generator 1 2 3 4 5
KA 40 40 40 40 40
TA 0.2 0.2 0.2 0.2 0.2
KE 1 1 1 1 1
TE 0.785 0.785 0.785 0.785 0.785
KF 0.063 0.063 0.063 0.063 0.063
TF 0.35 0.35 0.356 0.35 0.35
2.3.3 Simulation Verification for ET-EKF The IEEE 39 buses standard system previously introduced is used in this DSE research. The ET-EKF designed in this section will be verified and its performance is compared with the intermittent counterpart, i.e., EKF-I, at the same arrival rate. The generator No. 2 in the IEEE 39 buses standard system is selected as an example to verify the filtering accuracy. The simulation is to verify the DSE of this generator within 15 s after the sudden disconnecting fault on bus 14 and bus 15. The sampling period is t = 0.02s. Therefore, there are totally 750 samples for one simulation. The system state variables are selected as x [ Ed , Eq , δ, η, Ef d , Rf , η, VR ]T with the dimension of 7. As a result, the system function consists of (2.33) and (2.34). By taking v(k), θ (k) as the input variables, the observation variables can be set as y [id (k), iq (k) ]T so that the observation function consists of (2.35). In this simulation, the system noise is set as the Gaussian white noise, whose variance matrix is diag[10−17 , 10−17 , 10−8 , 10−8 , 10−8 , 10−8 , 10−17 ]. Besides, the observation noise is also set as the Gaussian white noise, whose variance matrix is diag[10−3 , 10−3 ]. Since the measurement needs to be transmitted from the sensor node to the remote center through the communication media, the measurement yk cannot be fully acquired due to the limited bandwidth of communication media. Both the
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
39
ET-EKF and EKF-I can deal with this situation, where the measurement cannot be entirely received. Therefore, the performance is compared between ET-EKF and EKF-I at the same arrival rate. As mentioned previously, the EKF-I can also reduce the amount of data communication. The time update process of filtering is the same as the conventional EKF, and its measurement update is modified as follows: xˆk+1 = xˆk+1|k + γkI Kk (yk+1 − h(xˆk+1|k )),
(2.36)
Pk+1 = Pk+1|k − γkI Kk Ck+1 Pk+1|k , where γkI is the logic variable satisfying γkI
=
1, The remote filter receives the measurements 0, The remote filter doesn’t receive the measurements
,
(2.37)
and Pr(γkI = 1) is preset as constant, following a Bernoulli process, which means that the probability whether the measurement data at each time instance is transmitted remains the same. The simulation of ET-EKF consists of three parts. The first part runs the simulation of dynamic system state described in Sect. 2.3.2 together with power flow analysis. The second one executes the event triggered sampling strategy based on Algorithm 2.1 to generate the triggering signal γk . The third part carries out the ET-EKF at the remote center based on Algorithm 2.2. Note that the filtering at the center can use the measurement yk only when γk = 1. The EKF-I is used here for comparison purpose. γkI , following the Bernoulli process, is generated at the same arrival rate as in ET-EKF determined in advance, based on which the corresponding EKF-I filtering is conducted. Also note that the measurement yk can be used only when γkI = 1. Totally, five preset arrival rates are applied, i.e., 90%, 70%, 50%, 30%, 10%. Two sets of simulations will be given as follows. The first set provides the filtering performance on the dynamic system state and the corresponding triggered information and the second set reflects the mean of one-step posterior estimation error ek|k−1 = xk − xˆk|k−1 2 , which is averaged over 20 times simulations. The first set shows the filtering performance comparison between ET-EKF and EKF-I at three typical arrival rates 10%, 30%, 70%, where the triggered condition of measurement yk is demonstrated at the top sub-figure for each arrival rate setting. The simulation results for λ = 10% are presented in Fig. 2.3. There exists a critical arrival rate for the EKF-I under the Bernoulli process γkI . When the arrival rate is lower than this critical rate, the EKF-I may be divergent. It can be demonstrated by Fig. 2.3, where the estimation results of EKF-I for the generator’s state Ef d , Rf VR gradually deviate from the actual system state and have the divergent tendency. However, the ET-EKF can still provide the stable estimation
40
2 Event Triggered Sampling Strategies
γ of ET−EKF
1 0.8 0.6
k
0.4 0.2 0
0
5
10
15
10
15
Time (s) (a)
0.8 0.6 0.4
k
γ of EKF−I
1
0.2 0
0
5
Time (s) (b) System ETD−EKF EKF−I
Ed (p.u.)
0.4 0.3 0.2 0.1 0 −0.1 −0.2
0
5
10
15
Time (s) (c) Fig. 2.3 DSE performance for ET-EKF and EKF-I and the corresponding triggered time instance when the arrival rate λ = 10%, where the system state is drawn in black and the red dotted dash line represents ET-EKF and the blue dash line denotes the EKF-I result. (a) The triggering condition of ET-EKF. (b) The triggering condition of EKF-I. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
2.3 Event Triggered Extended Kalman Filter Design for Nonlinear System
41
1.5
Eq (p.u.)
1 0.5 0 −0.5 −1
0
5
10
15
10
15
10
15
Time (s) (d) 2000
δ (rad)
1500 1000 500 0
0
5
Time (s) (e) 1.8
η (p.u.)
1.6 1.4 1.2 1
0
5
Time (s) (f) Fig. 2.3 (continued)
42
2 Event Triggered Sampling Strategies 5
Efdl (p.u.)
4 3 2 1 0
0
5
10
15
10
15
10
15
Time (s) (g)
1
Rfl (p.u.)
0.8 0.6 0.4 0.2 0 −0.2
0
5
Time (s) (h)
VRl (p.u.)
15 10 5 0 −5
0
5
Time (s) (i) Fig. 2.3 (continued)
2.4 Conclusions
43
under the arrival rate λ = 10%, which can follow the dynamic system state with the effective accuracy. The simulation results for λ = 30% are shown in Fig. 2.4. Under this arrival rate, both the ET-EKF and EKF-I can guarantee the convergent filtering results. However, it should be noted that the large error can be observed for Eq and Rf state estimation results of EKF-I. In comparison, the ET-EKF can still tightly follow the dynamic system state. The simulation results for λ = 30% are demonstrated in Fig. 2.5. Under this high arrival rate, both the ET-EKF and EKF-I can provide the accurate estimation results. Through the comparisons among Figs. 2.3, 2.4, and 2.5, the estimation accuracy will be improved with the increasing arrival rate. In order to compare the accuracy more explicitly, the second set of simulations demonstrates the average of one-step posterior estimation error and various arrival rates are used to reflect its influence on the accuracy. On the other hand, from Figs. 2.3, 2.4, and 2.5, it can be observed that unlike EKF-I, whose triggering is distributed evenly, the distribution of ET-EKF’s triggering is more concentrated, causing the congestion in the communication media during the triggering time slot on the contrary. This issue will be discussed considering the guaranteed arrival rate in the next chapter. Through 20 times simulation, the averages of one-step posterior estimation error ek|k−1 = xk − xˆk|k−1 2 are given in Fig. 2.6 under the arrival rates 90%, 70%, 50%, 30%, 10%. As shown in Fig. 2.6a, the accuracy of ET-EKF and EKF-I is very closed under the high arrival rate. By the further comparison from Fig. 2.6a to e, the performance of ET-EKF becomes more superior with the decreasing arrival rate. Until the arrival rate as low as λ = 10%, the EKF-I becomes divergent, while the ET-EKF can still provide the accurate enough estimation results. Besides, the effect of arrival rate on the ET-EKF is explicitly demonstrated in Fig. 2.7, indicating that the ET-EKF is more accurate with the increasing arrival rate. However, the EKF-I will become divergent under low arrival rate although its accuracy is similarly degraded with the decreasing arrival rate. Therefore, the DSE design should balance the communication rate and estimation accuracy. The arrival rate should be set as low as possible avoiding the congestion when the estimation accuracy can be guaranteed (Fig. 2.8).
2.4 Conclusions This chapter introduces five commonly used event triggered sampling strategies as well as the corresponding design principles and then compares them in computation complexity, application limitation, and applicability for the nonlinear system. The pros and cons are pointed out so that the innovation based condition and stochastic innovational condition are selected as the event triggered sampling strategy for subsequent design. The linear filters using these two event triggered sampling strategies are further introduced to build the theoretical fundamental for the subsequent design
44
2 Event Triggered Sampling Strategies
γ of ET−EKF
1 0.8 0.6
k
0.4 0.2 0
0
5
10
15
10
15
Time (s) (a)
0.8 0.6 0.4
k
γ of EKF−I
1
0.2 0
0
5
Time (s) (b) System ETD−EKF EKF−I
Ed (p.u.)
0.4 0.3 0.2 0.1 0 −0.1 −0.2
0
5
10
15
Time (s) (c) Fig. 2.4 DSE performance for ET-EKF and EKF-I and the corresponding triggered time instance when the arrival rate λ = 30%, where the system state is drawn in black and the red dotted dash line represents ET-EKF and the blue dash line denotes the EKF-I result. (a) The triggering condition of ET-EKF. (b) The triggering condition of EKF-I. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
2.4 Conclusions
45
1.2
E (p.u.) q
1 0.8 0.6 0.4 0.2
0
5
10
15
10
15
10
15
Time (s) (d) 2000
δ (rad)
1500 1000 500 0
0
5
Time (s) (e) 1.8
η (p.u.)
1.6 1.4 1.2 1
0
5
Time (s) (f) Fig. 2.4 (continued)
46
2 Event Triggered Sampling Strategies 3.5
(p.u.)
3
E
fdl
2.5 2 1.5
0
5
10
15
10
15
10
15
Time (s) (g)
0.45
fl
R (p.u.)
0.5
0.4
0
5
Time (s) (h) 10
VRl (p.u.)
8 6 4 2 0 −2
0
5
Time (s) (i) Fig. 2.4 (continued)
2.4 Conclusions
47
γ of ET−EKF
1 0.8 0.6
k
0.4 0.2 0
0
5
10
15
10
15
Time (s) (a)
0.8 0.6 0.4
k
γ of EKF−I
1
0.2 0
0
5
Time (s) (b) System ETD−EKF EKF−I
Ed (p.u.)
0.4 0.3 0.2 0.1 0 −0.1 −0.2
0
5
10
15
Time (s) (c) Fig. 2.5 DSE performance for ET-EKF and EKF-I and the corresponding triggered time instance when the arrival rate λ = 70%, where the system state is drawn in black and the red dotted dash line represents ET-EKF and the blue dash line denotes the EKF-I result. (a) The triggering condition of ET-EKF. (b) The triggering condition of EKF-I. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
48
2 Event Triggered Sampling Strategies
E (p.u.) q
1
0.8
0.6
0.4
0
5
10
15
10
15
10
15
Time (s) (d) 2000
δ (rad)
1500 1000 500 0
0
5
Time (s) (e) 1.8
η (p.u.)
1.6 1.4 1.2 1
0
5
Time (s) (f) Fig. 2.5 (continued)
2.4 Conclusions
49
3.5
(p.u.)
3
E
fdl
2.5 2 1.5
0
5
10
15
10
15
10
15
Time (s) (g)
R (p.u.)
0.5
fl
0.45
0.4
0
5
Time (s) (h) 10
l
VR (p.u.)
8 6 4 2 0 −2
0
5
Time (s) (i) Fig. 2.5 (continued)
50
2 Event Triggered Sampling Strategies
0.05
ET−EKF EKF−I
e
k|k−1
0.04 0.03 0.02 0.01 0
0
5
10
15
10
15
10
15
Time (s) (a) 0.08
0.04
e
k|k−1
0.06
0.02 0
0
5
Time (s) (b) 0.2
0.1
e
k|k−1
0.15
0.05 0
0
5
Time (s) (c) Fig. 2.6 Comparisons of estimation error between ET-EKF and EKF-I under various arrival rates, where the ET-EKF and EKF-I are drawn in red and blue, respectively. (a) The estimation error between ET-EKF and EKF-I when λ = 90%. (b) The estimation error between ET-EKF and EKFI when λ = 70%. (c) The estimation error between ET-EKF and EKF-I when λ = 50%. (d) The estimation error between ET-EKF and EKF-I when λ = 30%. (e) The estimation error between ET-EKF and EKF-I when λ = 10%
2.4 Conclusions
51
0.25
e
k|k−1
0.2 0.15 0.1 0.05 0
0
5
10
15
10
15
Time (s) (d)
ek|k−1
1.5 1 0.5 0
0
5
Time (s) (e) Fig. 2.6 (continued)
52
2 Event Triggered Sampling Strategies 0.2 λ = 0.9 λ = 0.7 λ = 0.5 λ = 0.3 λ = 0.1
0.18 0.16
e
k|k−1
0.14 0.12 0.1 0.08 0.06 0.04 0.02 0
0
5
10
15
10
15
Time (s) Fig. 2.7 The effect of arrival rates on the accuracy of ET-EKF 1.5 λ = 0.9 λ = 0.7 λ = 0.5 λ = 0.3 λ = 0.1
ek|k−1
1
0.5
0
0
5
Time (s) Fig. 2.8 The effect of arrival rates on the accuracy of EKF-I
of the event triggered nonlinear filter. Finally, the simplest nonlinear filter, i.e., extended Kalman filter (EKF), using the innovation based condition, is designed as a fundamental on the practical power system. However, the accuracy of ET-EKF is greatly affected by nonlinearity of power grid and the condition of noise. Therefore, it is always improper for WAMS DSE application.
Chapter 3
Event Triggered CKF Using Innovation Based Condition
3.1 Introduction High precision real-time state estimation is essential to guarantee the reliable and efficient operation of smart grid. Therefore, the widespread installment of PMUs is accompanied with the large amount of measurement data for DSE to be transmitted to the decision-making layer in WAMS control center via a wire or wireless medium. With limited capacity, a wireless sensor will finally become exhausted and thus affect the system’s stability and perhaps even disable the entire system. Moreover, there always has limited bandwidth on the data communication network so that the congestion occurs due to the large amount of data transmission, causing delay, which Therefore, it is necessary to reduce the data transmission bandwidth requirement while guaranteeing the filtering accuracy. In general, a simple method for data communication bandwidth reduction is to lower the sampling frequency of sensor or manually set the packet dropout rate (as introduced in Sect. 2.2.3). However, both methods aim at reducing the measurement data transmission simply from the viewpoint of the data acquisition without considering the filtering accuracy. Therefore, these methods cannot satisfy the real-time state estimation requirement of smart grid. In the early 1980s, the event triggered sampling strategy was proposed by Ho et al. as a new approach to the aforementioned accuracy issue. In the 1990s, Åström et al. applied the event triggered sampling strategy to address the signal processing of the dynamic system with continuous state space and further proved that the event triggered sampling strategy could achieve better performances with a smaller output variance in the same average sampling rate compared with the periodic sampling [39]. After that, many researchers contributed to the event triggered strategies and related state estimation with significant achievements. By using the event triggered sampling at the sensor, the amount of measurement transmission can be effectively reduced. The corresponding event triggered filtering design © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Z. Li et al., Event-Trigger Dynamic State Estimation for Practical WAMS Applications in Smart Grid, https://doi.org/10.1007/978-3-030-45658-0_3
53
54
3 Event Triggered CKF Using Innovation Based Condition
at the remote filter ensures that the measurement can be updated according to the event triggered condition even without the receipt of measurement data, thus guaranteeing filtering accuracy. Therefore, the event triggered state estimation can satisfy accuracy requirement on DSE in smart grid while significantly reducing the communication bandwidth. However, the present research on the event triggered state estimation is primarily focused on linear system, but there are few research outputs for nonlinear system. In summary, it is necessary to develop the event triggered nonlinear state estimation to address the heavy burden on data transmission of WAMS in smart grid while guaranteeing the high accuracy requirement on DSE. This chapter will take advantage of the concept for the existing event triggered Kalman filtering and further develop the event triggered cubature Kalman filter (ETCKF) which uses the innovation based event triggered condition for nonlinear system. This chapter is arranged as follows: Sect. 3.2 analyzes the basic principle of cubature Kalman filter and reasonably explains the advantage of cubature Kalman filter in addressing the nonlinearity. Section 3.3 designs the event triggered CKF using the innovation based condition. Section 3.4 analyzes the stochastic stability of ETCKF and gives the analytical form of communication arrival rate so that the parameter can be designed according to the requirement by the user in industry. Section 3.5 uses IEEE 39 buses to verify the effectiveness of the ETCKF.
3.2 Cubature Kalman Filter As previously stated, the Kalman filter is an unbiased optimal filter in both mean square error and maximum likelihood for linear system. However, for nonlinear system, Bucy et al. inferred the iterative formula of the optimal filtering based on the Kalman filter, but such iterative algorithm was required to solve the secondorder nonlinear partial differential equation, which was almost impossible for the current computation facilities [82, 83]. Therefore, the researchers began to design nonlinear suboptimal filters, among which the EKF is representative. However, the EKF cannot eliminate the linearization error introduced by rounding high order terms during Taylor series expansion so that it is inapplicable for the system of strong nonlinearity. To overcome this issue, theSigma-point Kalman filters were successively proposed based on the concept that the approximation to the probability density distribution of nonlinear state is much easier than the direct approximation to nonlinear function. The cubature Kalman filter (CKF) is one of the representatives among the Sigma-point filters. Since all Sigma-point filters are based on the Gaussian filter (GF), the GF will be firstly introduced in this chapter, and then the spherical-radial cubature rule of CKF will be detailed before the iteration algorithm of CKF is finally provided.
3.2 Cubature Kalman Filter
55
The following state and observation equation was formulated for the discrete nonlinear system: xk+1 = f (xk ) + νk , yk = h(xk ) + ωk ,
(3.1)
where f (·) and h(·) are both known nonlinear equations and continuously differentiable everywhere within the domain. xk ∈ Rn is the vector of the system state in dimension of n, and yk ∈ Rm is the vector of the observation in dimension of m. νk ∈ Rn and ωk ∈ Rm represent the system and observation noise, respectively, which conforms to zero-mean Gaussian distribution. Their variance matrix is Q > 0 and R > 0, respectively. Moreover, it is assumed that the initial system state x0 is subject to Gaussian distribution N(x0 |x0 , P0 ) and the initial value, system noise, and observation noise are independent of each other.
3.2.1 Gaussian Filter Regarding the nonlinear system shown in (3.1), Maybeck and Ito et al. designed the Gaussian filter based on the Gaussian assumption [107, 135]. According to the Gaussian assumption, the priori estimation of system state and the system measurement shall obey the Gaussian distribution as follows:
p xk |Y¯ k−1 = N(xk |xˆk|k−1 , Pk|k−1 ), (3.2)
p yk |Y¯ k = N(yk |yˆk|k−1 , Pyk ), where p(·) refers to the probability density function, and Y¯ k = {y1 , y2 , . . . , yk } is the information set of measurement till the time instance k. From this hypothesis, the system state vector also obeys the following Gaussian distribution:
p xk |Y¯ = N(xk |xˆk , Pk ). (3.3) Based on the assumption and after the acquisition of the estimation xˆk−1 at the last time instance and the error covariance matrix Pk|−1 , the time update of Gaussian filter is conducted as follows:
(3.4) xˆk|k−1 = E xk |Y¯ k−1 , where E(·) is the expectation of the stochastic variable. (3.1) and (3.3) are substituted into the previous formula to derive xˆk|k−1 = E f (xk−1 ) + νk−1 |Y¯ k−1
= f (xk−1 )p xk−1 |Y¯ k−1 dxk−1 =
f (xk−1 )N (xk−1 |xˆk−1 , Pk−1 )dxk−1 .
(3.5)
56
3 Event Triggered CKF Using Innovation Based Condition
The previous formula holds because νk is zero-mean Gaussian distribution and has independence of prior measurements. Similarly, the one-step prediction of error covariance matrix of system state can be got as follows: Pk|k−1 = E (xk − xˆk|k−1 )(xk − xˆk|k−1 )T |Y¯ k−1 = (f (xk−1 ) − xˆk|k−1 )(f (xk−1 ) − xˆk|k−1 )T ×N(xk−1 |xˆk−1 , Pk−1 )dxk−1 + Qk−1 .
(3.6)
Based on this, the one-step prediction of measurement and its one-step prediction of error covariance are updated in the following process:
yˆk|k−1 = E yk |Y¯ k−1 = E h(xk ) + ωk |Y¯ k−1 = h(xk )N (xk |xˆk|k−1 , Pk|k−1 )dxk Pyk = E (yk − yˆk|k−1 )(yk − yˆk|k−1 )T |Y¯ k−1 = (h(xk ) − yˆk|k−1 )(h(xk ) − yˆk|k−1 )T ×N(xk |xˆk|k−1 , Pk|k−1 )dxk + Rk .
(3.7)
Similar to the Kalman filtering, the measurement update of the Gaussian filter is derived by Kk = Pxk ,yk Py−1 k
¯ xˆk = E xk |Yk = xˆk|k−1 + Kk (yk − yˆk|k−1 ) Pk = E (xk − xˆk| )T (xk − xˆk )T |Y¯ k = Pk|k−1 − Kk Pyk KkT ,
(3.8)
where Pxk ,yk is the cross covariance matrix between the system state vector and system measurement vector and the computation can be got as follows: Pxk ,yk = E (xk − xˆk|k−1 )(yk − yˆk|k−1 )T |Y¯ k−1 = (xk − xˆk|k−1 )(h(xk ) − yˆk|k−1 )T N(xk |xˆk|k−1 , Pk|k−1 )dxk . (3.9)
3.2 Cubature Kalman Filter
57
3.2.2 Spherical-Radial Cubature Integral According to the time update and measurement update of Gaussian filtering, the key of the iteration is to solve the Gauss integral as follows: g(x)N (x|x, ˆ P )dx =
1 n 2
(2π ) |P |
1
ˆ g(x)e[− 2 (x−x)
1 2
T P −1 (x−x)] ˆ
dx, (3.10)
where g(·) is the arbitrary function. This Gauss integral almost cannot be directly acquired. Therefore, various numerical integration methods are studied to approximate this integral and different filterings based on Gaussian filter were designed according to various numerical integration methods. The basic concept of these numerical integration methods is to choose a series of sampling points according to one certain rule and then acquire the corresponding weights of these points. The following formula is used to approximate the solution of (3.10): g(x)N (x|x, ˆ P )dx ≈
ωi g(xi ).
(3.11)
According to the previous concept, Arasaratnam proposed the CKF filter, which acquired the sampling points and determined the corresponding weights using the spherical-radial cubature rule and the Gaussian integral was further approximated using the formula above [112]. Therefore, the calculation of CKF can be written as follows. Firstly, 2n Sigma points are selected according to the spherical-radial cubature rule as follows: √ nei , i = 1, . . . , n, (i) ξ = (3.12) √ − nei−n , i = n + 1, . . . , 2n, where ei denotes a unit vector in the direction of the coordinate axis i. Next, the following formula is used to approximate the Gaussian integral shown in (3.10):
√
1 g xˆ + P ξ (i) , 2n 2n
g(x)N (x|x, ˆ P )dx ≈
(3.13)
i=1
√ 1 where xˆ + P ξ (i) and 2n correspond to xi and ωi in (3.11), respectively, i.e., the selected sampling points (Sigma points) and the corresponding weights.
58
3 Event Triggered CKF Using Innovation Based Condition
3.2.3 Cubature Kalman Filter The CKF filtering can be obtained by applying the spherical cubature integral in Sect. 3.2.2 to the Gaussian Kalman filter in Sect. 3.2.1 as follows: • Time update Firstly, the following Sigma points are generated according to the state estimation at the last time instance and error covariance matrix. (i) Xk−1 = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n. (3.14) After that, the Sigma points are transited via the system state function f (·), (i) (i) Xˆk = f Xk−1 , i = 1, . . . , 2n. (3.15) The one-step prediction of system state and its error covariance matrix are obtained according to (3.5), (3.6), and (3.13), i.e., 1 ˆ (i) Xk 2n 2n
xˆk|k−1 =
i=1
(i) T 1 ˆ (i) Xk − xˆk|k−1 Xˆk − xˆk|k−1 + Qk−1 . = 2n 2n
Pk|k−1
(3.16)
i=1
• Measurement update The following Sigma points are established according to the one-step prediction of system state and its corresponding error covariance matrix, (i) Xk|k−1 = xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n. (3.17) Next, the Sigma points derived above are transited via the system observation function h(·), (i) (i) (3.18) Yˆ k = h Xk|k−1 , i = 1, . . . , 2n. Finally, the estimation of system state and its error covariance matrix is calculated as follows according to the measurement update of Gaussian Kalman filtering and (3.13): 1 ˆ (i) Yk 2n 2n
yˆk|k−1 =
i=1
T 1 ˆ (i) Yk − yˆk|k−1 Yˆ k(i) − yˆk|k−1 + Rk = 2n 2n
Pyk
i=1
3.3 CKF Using Innovation Based Condition
59
T 1 (i) Xk|k−1 − xˆk|k−1 Yˆ k(i) − yˆk|k−1 2n 2n
Pxk yk =
i=1
Kk = Pxk yk Py−1 k xˆk = xˆk|k−1 + Kk [yk − yˆk|k−1 ] Pk = Pk|k−1 − Kk Pyk KkT .
(3.19)
It should be noted that there are other methods proposed to approximate the Gaussian integral (3.10) and other filterings have been developed except CKF introduced here. For instance, the Gauss–Hermite integral was utilized to choose Sigma points and design Gauss–Hermite Kalman filter [107]. The unscented transformation was used to choose the Sigma points and design UKF [101], while the Monte Carlo method was utilized to stochastically choose the sampling points (also called particle points) and design the particle filter [119]. The CKF is adopted in this research to design the event triggered nonlinear filter primarily for the following viewpoints. Firstly, although the Gauss–Hermite Kalman filter and particle filter can provide more accurate state estimation, they require much greater computation capability than CKF so that it is hard to satisfy the real-time DSE requirement. Secondly, the UKF is similar to CKF in accuracy and computing. But the Sigma points selected by UKF based on the unscented transformation have central one with high weight so that the posterior variance matrix that has should remained positive definite during the computation may become nonpositive definite, disabling the iteration. But such a problem is not encountered for the CKF. Based on these two reasons above, the CKF is selected to address the nonlinearity issue of DSE in WAMS.
3.3 CKF Using Innovation Based Condition This section will design the event triggered nonlinear filter to deal with the large amount of data transmission in the remote state estimation for nonlinear smart grid. This design will use innovation based conditions as the event triggered sampling strategy to reduce the transmission of measurement. Based on this strategy, the CKF will be used to fulfill the nonlinear filtering, guaranteeing the filtering accuracy, so that the final event triggered cubature Kalman filter (ETCKF) can satisfy DSE’s accuracy requirement while reducing the communication bandwidth. The ETCKF design comprises two parts. Firstly, the event triggered sampling strategy is designed at the sensor. Secondly, the iteration of filtering at the remote filter is designed. This section will provide the detailed description of the design and the corresponding principle.
60
3 Event Triggered CKF Using Innovation Based Condition
Fig. 3.1 System diagram of the ETCKF, where the data return path is excluded by the specific design
3.3.1 Design of Event Triggered Sampling Strategy As mentioned in 2.1, when the innovation based condition is adopted as the event triggered sampling strategy, the remote filter needs to return the medium quantity during the iteration in order to calculate the logic variable γk , which increases the communication on the contrary. Therefore, it is necessary to consider the reduction and even elimination of the data return in order to further lower the communication bandwidth. To address this problem, the sensor will be equipped with a local filter. This filter implements the identical algorithm to the remote filter and provides the sensor with the medium quantity prerequisite to calculate the logic judgment value γk . The architecture of the whole DSE is shown in Fig. 3.1, which requires the sensor a certainly high computation capability. The state-of-the-art intelligent sensor such as PMU in smart grid can fully sustain the computation requirement supported by its microprocessor [136]. Therefore, this design is feasible. The key to event triggered sampling strategy is to calculate the logic variable γk based on the current measurement to determine whether the current measurement needs to be transmitted to the remote filter via the transmission network. Regarding the nonlinear system (3.1), when the innovation based condition in Sect. 2.1.1.2 is used as the event triggered sampling strategy, the whole design can be reasonably referred to the event triggered Kalman filter using this condition and the CKF algorithm. After the estimation of system state at the last time instance is obtained at the local filter, the update is conducted according to the conventional CKF as (3.14)– (3.16). Next, the one-step prediction of measurement and its error covariance matrix are calculated to obtain the logic variable γk as in (3.17)–(3.19). Accordingly, the innovation based condition (3.20) is calculated. Finally, γk is determined according to (2.3). −1/2
Fk = Pyk , k = yk ∈ Rm |FkT (yk − yˆk|k−1 )∞ ≤ δ . The pseudo-code of the ETCKF algorithm is shown in Algorithm (3.1).
(3.20)
3.3 CKF Using Innovation Based Condition
61
Algorithm 3.1 Event triggered strategy for the ETCKF Input: the current measurementyk , the design parameter δ Output: the logic variable γk 1: Initialization: system initial valuex0 , error covariance matrix P0 2: Iteration: 3: While k ≥ 1 do √ nei , i = 1, . . . , n 4: ξ (i) = √ − nei−n , i = n + 1, . . . , 2n √ (i) Xk−1 = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n 5: (i) (i) 6: Xˆk = f (X ), i = 1, . . . , 2n 7: 8: 9:
xˆk|k−1 =
(i)
Xˆk
i=1 2n
(i)
(i)
(i) Yˆ k
1 2n
i=1
(i)
11:
= h(Xk|k−1 ), 2n 1 ˆ (i) yˆk|k−1 = 2n Yk
12:
Py k =
10:
(i)
(Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 = xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n
Pk|k−1 = Xk|k−1
k−1 2n
1 2n
i = 1, . . . , 2n
i=1
1 2n
2n
(i) (i) (Yˆ k − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T + Rk
i=1 −1/2 F k = Py k ξk = FkT (yk
13: 14: − yˆk|k−1 )∞ 15: if ξk < δ then 16: return γk = 0 17: else 18: return γk = 1 19: end if 20: end while
3.3.2 Design of Remote Filter When the aforementioned event triggered sampling strategy is adopted, like the event triggered Kalman filter, the key to the event triggered CKF design is, when γ = 0, the measurement information contained in the event triggered condition should be taken full advantage of for measurement update so that the system state estimation accuracy can be assured in the absence of measurement transmission. Inspired from the design principle of the event triggered Kalman for linear system, the measurement update of the ETCKF is given according to the following theorems.
62
3 Event Triggered CKF Using Innovation Based Condition
Theorem 3.1 As for the remote state estimation system model (3.1) illustrated in Fig. 3.1, if the event triggered sampling strategy of Algorithm (3.1) is adopted at the sensor, the system state vector xk will obey the following Gaussian distribution p(xk |Fk ) = N (xˆk , Pk ). xˆk = xˆk|k−1 + γk Kk (yk − yˆk|k−1 ) Pk|k−1 − Kk Pyk KkT , γk = 1 Pk = , T Pk|k−1 − λ(δ)Kk Pyk Kk , γk = 0
(3.21)
where Kk = Pxk yk Py−1 is the filtering gain, Pxk yk is the cross covariance matrix k between the system state and measurement. δ2 2 [1 − 2D(δ)]−1 , λ(δ) = √ δ exp − 2 2π
(3.22)
where D(·) is the Q function of standard Gaussian distribution, i.e., right tail function. Before proving this theorem, the error of system state’s one-step prediction, state estimation error, and innovational measurement are firstly defined as follows: x˜k|k−1 = xk − xˆk|k−1 x˜k = xk − xˆk y˜k = yk − yˆk|k−1 .
(3.23)
The following lemma is introduced at first. Lemma 3.1 ([137]) If the stochastic variable x ∈ Rn and y ∈ Rm obey the united Gaussian distribution, its expectation and variance are as follows: x¯ y¯ xx xy . = yx yy m=
(3.24)
The conditional probability of x under the condition y also obeys the Gaussian distribution, and the expectation and variance are −1 μ = x¯ + xy yy (y − y) ¯
x|y = xx − xy yy −1 yx .
(3.25)
3.3 CKF Using Innovation Based Condition
63
Proof The time update of CKF at the remote filter and the conventional CKF are shown in (3.14)–(3.16). Their measurement update depends on the logic variable γk , determining whether the measurement is received. When γk = 1, the filter will receive the measurement from the sensor, and then the ETCKF performs the measurement update according to conventional CKF as shown in (3.17)–(3.19). When γk = 0, the filter cannot receive the measurement and only knows that the measurement information is contained in the information set determined by the innovational condition. To increase the filtering accuracy, the measurement update can only take advantage of this implicit information. According to the definition of system state estimation, it can be got by xˆk = E(xk |Fk−1 , yk ∈ k ) = xk p(xk |Fk−1 , yk ∈ k )dxk .
(3.26)
xk
For the convenience, k = FkT (yk − yˆk|k−1 ) and = {k ∈ Rm : k ∞ ≤ δ} are defined. There exists k ∈ ⇐⇒ yk ∈ k . The Bayes formula can be further applied, and the posterior conditional probability of system state can be expressed as p(xk |Fk−1 , yk ∈ k ) = =
p(xk , k |Fk−1 ) p(k |Fk−1 )dk
p(xk |Fk−1 , k )p(k |Fk−1 ) . p(k |Fk−1 )dk
(3.27)
By substituting into (3.26), it is derived by xˆk = E(xk |Fk−1 , k ∈ ) E(xk |Fk−1 , k )p(k |Fk−1 )dk = . p(k |Fk−1 )dk
(3.28)
According to Lemma (3.1) and the following formula:
(i)
T 1 ˆ (i) Yk − yˆk|k−1 Yˆ k − yˆk|k−1 + Rk 2n 2n
Pyk =
i=1
T 1 (i) Xk|k−1 − xˆk|k−1 Yˆ k(i) − yˆk|k−1 . = 2n 2n
Pxk yk
(3.29)
i=1
E(xk |Fk−1 , k ) in (3.28) can be calculated as follows: −1 E(xk |Fk−1 , k ) = E xk |Fk−1 , y˜k = FkT k = xˆk|k−1 + Pxk yk Py−1 FkT k = xˆk|k−1 + Kk FkT
−1
k .
−1
k (3.30)
64
3 Event Triggered CKF Using Innovation Based Condition
By substituting into (3.28), it is derived by T −1 p( |F k k k−1 )dk xˆ k|k−1 + Kk Fk xˆk = p(k |Fk−1 )dk −1 Kk F T k p(k |Fk−1 )dk = xˆk|k−1 + k . p(k |Fk−1 )dk
(3.31)
According to the definition of k , it obeys the standard Gaussian distribution. Therefore, p(k |Fk−1 ) above is an even function. The definition of shows that it −1 is a symmetric space to the origin. Therefore, Kk FkT k p(k |Fk−1 )dk = 0, and it can be simplified as follows: xˆk = xˆk|k−1 .
(3.32)
The following equation can be obtained according to the posteriori variance matrix of system state: Pk = E (xk − xˆk )(xk − xˆk )T |Fk−1 , yk ∈ k .
(3.33)
Furthermore, (3.23) and (3.32) are substituted into the equation above and transformed to derive T Pk = E x˜k|k−1 x˜k|k−1 |Fk−1 , yk ∈ k = E{[(x˜k|k−1 − Kk y˜k ) + Kk y˜k ][(x˜k|k−1 − Kk y˜k ) + Kk y˜k ]T |Fk−1 , yk ∈ k }, which is extended to Pk = E (x˜k|k−1 − Kk y˜k )(x˜k|k−1 − Kk y˜k )T |Fk−1 , yk ∈ k +2E (x˜k|k−1 − Kk y˜k )y˜kT KkT |Fk−1 , yk ∈ k +E Kk y˜k y˜kT KkT |Fk−1 , yk ∈ k .
(3.34)
(3.35)
Firstly, the first term in the right of the equation, due to x˜k|k−1 − Kk y˜k = xk − (xˆk|k−1 +Kk y˜k ), can be modified as E[(x˜k|k−1 −Kk y˜k )(x˜k|k−1 −Kk y˜k )T |Fk−1 , yk ∈ k ] = E(x˜k x˜kT |Fk−1 , yk ∈ k ), which is the error covariance matrix of system state in CKF.
3.4 Stochastic Stability Analysis of ETCKF
65
E (x˜k|k−1 − Kk y˜k )(x˜k|k−1 − Kk y˜k )T |Fk−1 , yk ∈ k = Pk|k−1 − Kk Pyk KkT . (3.36) To solve the second term in (3.35), the following equation is analyzed as 1 −1
E xk |Fk−1 , y˜k = FkT k p(k |Fk−1 )dk (3.37) −xˆk|k−1 kT Fk−1 p(k |Fk−1 )dk .
E x˜k|k−1 y˜kT |Fk−1 , yk ∈ k =
By substituting (3.30) into the equation above, it is derived by −1 Kk FkT k kT p(k |Fk−1 )dk Fk−1 p( |F )d k k−1 k
(3.38) = Kk E y˜k y˜kT |Fk−1 , yk ∈ k .
E x˜k|k−1 y˜kT |Fk−1 , yk ∈ k =
Through its substitution into the second term of (3.35), it can be got by E (x˜k|k−1 − Kk y˜k )y˜kT KkT |Fk−1 , yk ∈ k = 0.
(3.39)
From the third term in (3.35), it can be observed that E(y˜k y˜kT |Fk−1 , yk ∈ k ) is the variance matrix of truncated Gauss. It can be computed as
E y˜k y˜kT |Fk−1 , yk ∈ k = [1 − λ(δ)]Pyk .
(3.40)
By putting (3.35), (3.36), (3.38), and (3.40) together, it comes up with Pk = Pk|k−1 − λ(δ)Kk Pyk KkT .
(3.41)
In sum, the measurement update related to γk = 1 and γk = 0 can be combined to derive the ETCKF measurement update of (3.21). According to Theorem 3.1, the ETCKF at the remote filter is shown in Algorithm (3.2).
3.4 Stochastic Stability Analysis of ETCKF Except the filtering accuracy, the stochastic stability is also an important evaluation for the state estimation. The stochastic stability of the Kalman filter for linear system was proven in [138], which pointed out that when the system state is observable, the error covariance matrix of filtering state gradually converges to a fixed bounded
66
3 Event Triggered CKF Using Innovation Based Condition
Algorithm 3.2 Filtering algorithm of the ETCKF Step 1: generate 2n Sigma Points √
ξ
(i)
(i)
Xk−1
nei , i = 1, . . . , n √ − nei−n , i = n + 1, . . . , 2n = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n =
Step 2: time update Xˆk
(i)
(i) = f Xk−1 , i = 1, . . . , 2n
xˆk|k−1 =
2n 1 (i) Xˆk 2n i=1
Pk|k−1 =
2n (i) T 1 (i) Xˆk − xˆk|k−1 Xˆk − xˆk|k−1 + Qk−1 2n i=1
(i) Xk|k−1
Yˆ k
(i)
= xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n (i) i = 1, . . . , 2n = h Xk|k−1 ,
yˆk|k−1 =
2n 1 (i) Yˆ k 2n i=1
Py k =
2n T 1 (i) (i) Yˆ k − yˆk|k−1 Yˆ k − yˆk|k−1 + Rk 2n i=1
Px k y k =
2n T 1 (i) (i) Xk|k−1 − xˆk|k−1 Yˆ k − yˆk|k−1 2n i=1
Step 3: measure update Kk = Pxk yk Py−1 k xˆk = xˆk|k−1 + γk Kk (yk − yˆk|k−1 ) Pk|k−1 − Kk Pyk KkT , γk = 1 Pk = Pk|k−1 − λ(δ)Kk Pyk KkT , γk = 0 Step 4: execute Step 1 to Step 3 in loop at each time instance k
matrix. However, for nonlinear filter, the index to evaluate the stochastic stability is no longer the error covariance matrix of state but the expectation of one-step priori error estimate E(x˜k|k−1 2 ) [91]. This section will present the analysis of stochastic stability of the ETCKF according to the previous evaluation index and
3.4 Stochastic Stability Analysis of ETCKF
67
give the sufficient condition of stochastic stability. Moreover, this section also gives the analytical expression of the average communication arrival rate of the ETCKF.
3.4.1 Boundedness of the ETCKF Posterior Variance Matrix This section will linearize the ETCKF using the pseudo-linearization in [19] and then give the sufficient condition of boundedness of the ETCKF posterior variance matrix. The equation of (3.23) can be modified as follows according to (3.1): x˜k|k−1 = f (xk−1 ) − f (xˆk−1 ) + νk−1 y˜k = h(xk ) − h(xˆk|k−1 ) + ωk .
(3.42)
The pseudo-linearization is further used to transform the equation above to x˜k|k−1 = αk−1 Fk−1 x˜k−1 + νk−1 y˜k = βk Hk x˜k|k−1 + ωk ,
(3.43)
where αk−1 = diag[α1,k−1 , . . . , αn,k−1 ] and βk = diag[β1,k , . . . , βm,k ] are used to represent the error caused by the first-order linearization. Fk−1 = [∂f (xk−1 )/∂xk−1 ]|xk−1 =xˆk−1 and Hk = [∂h(xk )/∂xk ]|xk =xˆk|k−1 are the first-order terms of Taylor expansion. Based on these, the priori variance matrix of system state estimation, one-step prediction error variance matrix of system measurement, and cross covariance matrix between the system state vector and measurement vector can be linearized as follows: T Pk|k−1 = αk−1 Fk−1 Pk−1 Fk−1 αk−1 + Qk−1 ,
Pxk yk = Pk|k−1 HkT βk , Pyk =
βk Hk Pk|k−1 HkT βk
(3.44) (3.45)
+ Rk .
(3.46)
The following assumption of the system is made before inferring the sufficient condition of the ETCKF posterior variance matrix boundedness. ¯ h, q, ¯ β, h, Assumption 3.1 P1|0 > 0 and there exist the constants α, ¯ α, f¯, f , β, ¯ q, r¯ , r > 0 so that the following inequality holds ¯ α ≤ αk ≤ α, ¯ h ≤ Hk ≤ h,
f ≤ Fk ≤ f¯,
qIn ≤ Qk ≤ qI ¯ n,
Besides, the following lemma is used.
¯ β ≤ βk ≤ β, rIm ≤ Rk ≤ r¯ Im .
(3.47)
68
3 Event Triggered CKF Using Innovation Based Condition
Lemma 3.2 ([2]) If both A, B ∈ Rn×n are the symmetric positively definite matrix, the following inequality holds (A + B)−1 > A−1 − A−1 BA−1 .
(3.48)
Therefore, the following theorem can be derived based on Assumption 3.1. Theorem 3.2 Considering the nonlinear system of (3.1) and the innovation based condition of (3.20) as the event triggered sampling strategy, if the system satisfies Assumption 3.1 and Hk is always invertible at any time instance k, there exist the bounded constant p, p¯ > 0 when λ(δ) > 1 − (α¯ f¯)−2 so that ¯ n, pIn ≤ Pk < pI
(3.49)
γ = 1 − [1 − 2D(δ)] ,
(3.50)
m
where In ∈ Rn×n is a unit matrix, and n is the number of dimension, and γ is the average communication arrival rate and D(·) is Q function of standard Gaussian distribution, i.e., right tail function. Proof Firstly, in order to acquire the lower bound of Pk , Pk is linearized according to (3.45) and (3.46).
−1 Pk = Pk|k−1 − γk Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1
−1 − (1 − γk )λ(δ)Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1 . (3.51) According to the definition of λ(δ), 0 < λ(δ) ≤ 1. Therefore,
−1 βk Hk Pk|k−1 Pk ≥ Pk|k−1 − γk Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk
−1 −(1 − γk )Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1
−1 = Pk|k−1 − Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1 . (3.52) Pk is defined to represent the right-hand side of inequality (3.52), i.e., the lower bound of Pk ,
−1 βk Hk Pk|k−1 . (3.53) Pk = Pk|k−1 − Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk
3.4 Stochastic Stability Analysis of ETCKF
69
By taking inversion on both sides of (3.53) and using the matrix inversion lemma, it can be got by −1 + HkT βk Rk−1 βk Hk . Pk −1 = Pk|k−1
(3.54)
By taking inversion on both sides of (3.44) and substituting them into (3.54), it is derived by
−1 T αk−1 + Qk−1 + HkT βk Rk−1 βk Hk . Pk −1 = αk−1 Fk−1 Pk−1 Fk−1
(3.55)
The following is obtained according to Assumption 3.1 and (3.55):
−1 β¯ 2 h¯ 2 + In . Pk −1 ≤ α 2 f 2 Pk−1 + q r
(3.56)
The following function is defined by
−1 β¯ 2 h¯ 2 + In , S(X) = α 2 f 2 X−1 + q r
(3.57)
which is monotonically increasing. Therefore, the following iteration is obtained:
Pk −1 ≤ S k P0−1 ,
(3.58)
where S k (·) = S(S(. . . S (·))). k
By taking the inversion on both sides of (3.58), the lower bound of Pk can be derived as below
−1 . Pk ≥ Pk ≥ S k P0−1
(3.59)
To obtain the upper bound of Pk , according to 0 < λ(δ) ≤ 1 and (3.51), the following inequality can be got by:
−1 βk Hk Pk|k−1 Pk ≤ Pk|k−1 − γk λ(δ)Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk
−1 − (1 − γk )λ(δ)Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1
−1 ≤ Pk|k−1 − λ(δ)Pk|k−1 HkT βk βk Hk Pk|k−1 HkT βk + Rk βk Hk Pk|k−1 . (3.60) According to (3.51), it is obvious that Pk ≤ Pk|k−1 . Therefore, the upper bound of Pk can be obtained by solving the upper bound of Pk|k−1 . To acquire the upper bound of Pk|k−1 , (3.60) is substituted into (3.44) and Lemma 3.2 is used, it can be got by
70
3 Event Triggered CKF Using Innovation Based Condition
T Pk|k−1 < [1 − λ(δ)]αk−1 Fk−1 Pk−1|k−2 Fk−1 αk−1
−1 T T + λ(δ)αk−1 Fk−1 (βk−1 Hk−1 )−1 Rk−1 × Hk−1 βk−1 Fk−1 αk−1 + Qk−1
< [1 − λ(δ)](α¯ f¯)2 Pk−1|k−2 + λ(δ)
(α¯ f¯)2 r¯ In + qI ¯ n. (βh)2
(3.61)
Furthermore, the mathematical induction is used to derive the upper bound of Pk|k−1 as follows: Pk|k−1 < p
k−1 [(1 − λ(δ))(α¯ f¯)2 ]j In ,
(3.62)
j =0
where p = max(P1|0 , λ(δ)¯r (α¯ f¯)2 (βh)−2 + q). ¯ It should be noted that
k j =0
[(1 −
λ(δ))(α¯ f¯)2 ]j is convergent only when (1−λ(δ))(α¯ f¯)2 < 1, i.e., λ(δ) > 1−(α¯ f¯)−2 . In summary, the following inequality can be obtained according to (3.59) and (3.62): k−1
[(1 − λ(δ))(α¯ f¯)2 ]j In . [S P0−1 ]−1 ≤ Pk < Pk|k−1 < p
(3.63)
j =0
Therefore, the bound of (3.49) is proven. Secondly, in order to prove (3.50), the average communication arrival rate is firstly defined as follows: 1 γ = lim sup E(γk |Fk−1 ). T →∞ T + 1 T
(3.64)
k=0
According to the definition (2.3) of γk , it is within the set {0, 1}. Therefore, the equation above can be calculated via the following one: 1 E(γk |Fk−1 ) T +1 T
γ = lim sup T →∞
k=0
1 P r(γk = 1|Fk−1 ) T +1 T
= lim sup T →∞
k=0
= Pr(γk = 1|Fk−1 ) = 1 − Pr(k ∞ ≤ δ|Fk−1 ),
(3.65)
3.4 Stochastic Stability Analysis of ETCKF
71
where Pr(·) is the probability of the stochastic variable. According to the definition of infinite norm, k ∞ ≤ δ above is equivalent to that the absolute value of each dimension in k is smaller than δ, i.e., |ki | ≤ δ holds for 1 ≤ i ≤ m. As k obeys the m-dimension standard normal distribution, it can be got by Pr(k ∞ ≤ δ|Fk−1 ) =
m
Pr |ki | ≤ δ|Fk−1
i=1
= [1 − 2D(δ)]m .
(3.66)
By substituting (3.66) into (3.65), it is derived by γ = 1 − [1 − 2D(δ)]m .
(3.67)
In summary, Lemma 3.2 is proven.
3.4.2 Error Boundedness Analysis for the ETCKF This section will give the analysis on the error boundedness of the ETCKF filtering based on the previous section. For the convenience, the function Vk (x˜k|k−1 ) = −1 T x˜k|k−1 Pk|k−1 x˜k|k−1 is firstly defined. Next, the following lemma is applied. Lemma 3.3 ([91]) Assume that there exist a stochastic process Vn (ψn ) and the ¯ φ, κ and τ ≤ 1 satisfying positive real numbers φ, ¯ n 2 φψn 2 ≤ Vn (ψn ) ≤ φψ E[Vn+1 (ψn+1 )|ψn ] ≤ κ + (1 − τ )Vn (ψn ), and then ψn satisfies the following inequality:
E ψn
2
n−1
φ¯ κ 2 n ≤ E ψ0 (1 − τ ) + (1 − τ )i . φ φ i=0
The sufficient condition of error boundedness for the ETCKF is given as follows. Theorem 3.3 Considering the nonlinear system of (3.1) and the innovation based condition of (3.20) as the event triggered sampling strategy, if the system satisfies Assumption 3.1 and Hk is invertible at any time time instance k, 0 < E(x˜1|0 2 ) ≤ σ,
72
3 Event Triggered CKF Using Innovation Based Condition
E(x˜k|k−1 2 ) ≤
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(3.68)
i=0
where p¯ and p are the upper and lower bounds of Pk of the previous section. Other parameters will be given in the proof. Proof Firstly, the following inequality can be obtained according to the definition of Vk (x˜k|k−1 ) and (3.63). 1 1 x˜k|k−1 2 ≤ Vk (x˜k|k−1 ) ≤ x˜k|k−1 2 . p¯ p
(3.69)
One-step prediction error of system state can be modified as follows according to (3.21), (3.23), and (3.43): x˜k|k−1
= αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 − γk−1 Kk−1 ωk−1 + νk−1 . (3.70)
Therefore, Vk (x˜k|k−1 ) can be written as follows: T −1 Vk (x˜k|k−1 ) = αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 Pk|k−1 × αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 −1 + (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T Pk|k−1
× (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 ) T −1 + αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 Pk|k−1 × (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 ) + (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T −1 αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 . Pk|k−1
(3.71)
By taking the conditional expectation on both sides of (3.71) over x˜k−1|k−2 , since ωk−1 , νk−1 and x˜k−1|k−2 are independent of each other, the conditional expectation on the last two terms of (3.71) over x˜k−1|k−2 is 0. According to Lemma 6.1 and Lemma 6.2 in [16], the first item on the right-hand side of (3.71) can be transformed as follows: T −1 αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 Pk|k−1 × αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 < (1 − τ )Vk−1 (x˜k−1|k−2 ),
(3.72)
3.4 Stochastic Stability Analysis of ETCKF
73
where τ = q/(α¯ 2 f¯2 /p + q). According to the matrix trace, the expectation on the second term on the righthand side of (3.71) can be calculated as:
T −1 E (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 Pk|k−1 (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )
T −1 = E tr νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 sPk|k−1
× νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1
−1 T T . (3.73) Fk−1 αk−1 + Qk−1 Pk|k−1 = tr αk−1 Fk−1 γk−1 Kk−1 Rk−1 Kk−1 According to (3.45) and (3.46) and Assumption 3.1, the upper bound of filtering ¯ Therefore, the upper bound of (3.73) is gain Kk−1 is Kk−1 ≤ β¯ h¯ p/r ¯ = k. −1 (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 ) E (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T Pk|k−1 ≤ (α¯ 2 f¯2 k¯ 2 r¯ + q)/p. ¯
(3.74)
The following expectation is obtained according to (3.72) and (3.74), E[Vk (x˜k|k−1 )|x˜k−1|k−2 ] T −1 = E αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 Pk|k−1
× αk−1 Fk−1 (In − γk−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 |x˜k−1|k−2 −1 + E (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T Pk|k−1 × (νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )|x˜k−1|k−2
< (1 − τ )Vk−1 (x˜k−1|k−2 ) + α¯ 2 f¯2 k¯ 2 r¯ + q¯ /p. (3.75)
The following inequality is derived according to (3.69) and (3.75) and Lemma 3.3: k−1
p¯ κ E x˜k|k−1 2 ≤ σ (1 − τ )k−1 + (1 − τ )i , p p
(3.76)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p. ¯
Remark 3.1 This section analyzes the stochastic stability of the ETCKF, and provides the sufficient condition of the ETCKF and the analytical form of average communication arrival rate. Through the sufficient condition of stability and the arrival rate, the average communication rate declines with the increase of the
74
3 Event Triggered CKF Using Innovation Based Condition
design parameter δ so that Pk may not converge because λ(δ) > 1 − (α¯ f¯)−2 is dissatisfied, causing the divergence of E(x˜k|k−1 2 ), i.e., the failure of stochastic stability of the ETCKF. Otherwise, with the increase of the average communication rate, the ETCKF filtering accuracy and stochastic stability will be improved. Therefore, the appropriate design parameter δ can be selectively used to balance data communication and filtering accuracy.
3.5 Simulation and Verification This section uses the practical dynamic state estimation of the smart grid to verify the ETCKF developed in this chapter. The effectiveness of the ETCKF as well as Theorems 3.2 and 3.3 is also verified. This part takes the No. 10 generator of IEEE 39 buses as in the previous part as an example for the verification. Its system equation is as shown in (2.33), and the measurement is formulated in (2.35). The simulation is to verify the DSE of the generator within 15 s after the sudden disconnecting fault on bus 14 and bus 15. The sampling period is t = 0.02s. To better demonstrate the advantages of the ETCKF, three methods are used in this simulation for the performance comparison of DSE, i.e., conventional CKF, CKF with intermittent observations (CKF-I), and the ETCKF, where the conventional CKF is expected as the template to provide the optimal filtering performance with full data communication. Its algorithm is shown in Sect. 3.2.3. As mentioned above, CKF-I can also reduce the amount of data communication. The time update of filtering is the same as the conventional CKF, and its measurement update is formulated as follows: xˆk = xˆk|k−1 + γkI Kk (yk − yˆk|k−1 ), Pk = Pk|k−1 − γkI Kk Pyk KkT ,
(3.77)
where γkI is the logic variable satisfying γkI =
1,
The remote filter receives the measurements
0,
The remote filter does not receive the measurements
(3.78)
And Pr(γkI = 1) is the pre-set constant, following a Bernoulli process, which means that the probability whether the measurement data at each time instance is transmitted remains the same. For the aforementioned three filtering, the noise variance is Q = diag[10−3 , 10−3 , 10−7 , 10−7 ] and the measurement noise variance is R = diag[10−2 , 10−2 , 10−2 , 10−2 ].
3.5 Simulation and Verification
75
3.5.1 Comparison of Filtering Dynamic Performance with Different δ This section compares the dynamic performance of the ETCKF under δ = 1.05 and δ = 1.61. According to Theorem 3.2, the theoretical average arrival rate according to these two δ is 50% and 20%, respectively. For the convenience of comparison, CKF-I’s arrival rate is set at Pr(γkI = 1) = 50% and Pr(γkI = 1) = 20%. The dynamic performance of these three filtering for two δ settings is drawn in Figs. 3.2 and 3.3. The last two sub-figures indicate the logic variable γk and γkI for the ETCKF and CKF-I at each time instance (for the convenience of illustration, they are marked as γE and γI , respectively). As observed in these four sub-figures, it is shown that the ETCKF arrival rate can reach 49% and 20.91%, respectively, for each case, while the CKF-I arrival rate is 54.19% and 21.15%, respectively. From Fig. 3.2, it is shown that the estimation performance of the ETCKF is almost the same as the CKF with full communication case. It is also true even if the communication rate of the ETCKF is reduced by 50%, meaning that half of the measurements are not transmitted to the remote filter. Although the estimation performance of CKF-I is acceptable, its accuracy is still degraded at some specific time instances. When the communication rate is further reduced to 20%, i.e., δ = 1.61, the ETCKF can still provide estimation with acceptable accuracy as shown in Fig. 3.3, while the estimation of CKF-I already greatly deviates from the actual state. On the other hand, the comparison between Figs. 3.2 and 3.3 shows that the estimation performance of the ETCKF will decline with the increase in design parameter δ. According to (3.50) of Theorem 3.2, the system’s average arrival rate will decline with the increase of δ, meaning that there are fewer measurements transmitted.
3.5.2 Comparisons of Estimation Error under Different δ Values To further demonstrate the influence of δ on the estimation accuracy of the ETCKF and better compare the estimation accuracy between the ETCKF and CKF-I at the same arrival rate, this part applies the Monte Carlo simulation method, meaning that the simulation with the same parameter δ is continuously run for 500 times. Besides, the following estimation error is defined to represent the estimation accuracy of the filtering. 1 xk − xˆki 2 , 500 500
xk − xˆk 2 =
i=1
(3.79)
76
3 Event Triggered CKF Using Innovation Based Condition
0.8
0.4
d
E (p.u.)
0.6
System CKF ETCKF CKF−I
0.2 0 0
5
10
15
10
15
10
15
Time (s) (a)
1
q
E (p.u.)
1.5
0.5
0 0
5
Time (s) (b)
σ (rad)
2
1.5
1
0.5 0
5
Time (s) (c) Fig. 3.2 System states and estimation results for δ = 1.05, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the ETCKF and the cyan line represents the CKF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ETCKF. (f) The triggering condition of CKF-I
3.5 Simulation and Verification
77
ω (p.u.)
1.15
1.1
1.05
1 0
5
10
15
Time (s) (d) 1
γ
E
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (e) 1
γ
I
0.8 0.6 0.4 0.2 0
100
200
300
400
k (f) Fig. 3.2 (continued)
78
3 Event Triggered CKF Using Innovation Based Condition
E (p.u.) d
0.8
System CKF ETCKF CKF-I
0.6 0.4 0.2 0 0
5
10
15
Time (s) (a)
d axis.
E (p.u.) q
1.5
1
0.5
0 0
5
10
15
Time (s) (b)
q axis.
2
σ (rad)
1.5 1 0.5 0 0
5
10
15
Time (s) (c) Fig. 3.3 System states and estimation results for δ = 1.61, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the ETCKF and the cyan line represents the CKF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ETCKF. (f) The triggering condition of CKF-I
3.5 Simulation and Verification
79
1.3
ω (p.u.)
1.2 1.1 1 0.9 0
5
10
15
Time (s) (d) 1
γ
E
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (e) 1
γ
I
0.8 0.6 0.4 0.2 0
100
200
300
400
k (f) Fig. 3.3 (continued)
80
3 Event Triggered CKF Using Innovation Based Condition
where · 2 is the 2-norm, and xˆki indicates the i-th state estimation result at time instance k. The simulation results are drawn in Fig. 3.4. As shown in Fig. 3.4, when the communication rate is 90%, ETCKF and CKFI basically have the same estimation accuracy. With the continuous decrease in communication rate, their difference on accuracy becomes more significant and the ETCKF has obvious advantage in estimation accuracy due to the following reasons. The comparison of the measurement update of the ETCKF and CKF-I shows that when the remote filter does not receive the measurement, ETCKF fully utilizes the implicit measurement information contained in the event triggered condition to correct the posteriori variance matrix, while the CKF-I simply takes the system priori variance matrix as the posteriori matrix. Therefore, the ETCKF can have a higher accuracy during the iteration at the next time instance although both take one-step prediction of system state as the final estimation result. When the arrival rate is lower, the error becomes more remarkable. For the ETCKF alone, Fig. 3.4 also gives the same conclusion as the previous part, i.e., the higher δ causing the lower communication rate and lower estimation accuracy.
3.5.3 Verification on Stability and Arrival Rate To verify the correctness of Theorem 3.2, the 2-norm Pk of the posterior matrix is adopted here to reflect its relationship with the upper and lower bounds since the first-order term of Taylor expansion of measurement process Hk at xˆk|k−1 is full rank in the simulation example, satisfying the assumption in Theorem 3.2. Figure 3.5 demonstrates the relationship between Pk and its upper and lower bounds when the communication rate is 50%, i.e., δ = 1.05. The posterior variance matrix is within the upper and lower bounds at each time instance k. Moreover, to verify (3.50) in Theorem 3.2 and further demonstrate the relationship between the estimation accuracy of the ETCKF and δ, Monte Carlo simulation is adopted for 500 times. The average estimation error and communication rate are defined as follows. 1 xk − xˆk 2 750 750
Average Estimation Error =
k=1
1 i γ , 500 500
Communication Rate =
(3.80)
i=1
where the definition of xk − xˆk 2 is formulated in (3.79), and γ i is the actual arrival rate of the i-th simulation. The simulation results are shown in Fig. 3.6. The theoretical arrival rate can be calculated in (3.50).
3.5 Simulation and Verification
81
Estimation error
0.05
CKF ETCKF CKF-I
0.04 0.03 0.02 0.01 0
5
10
15
10
15
10
15
Time(s) (a)
Estimation error
0.05 0.04 0.03 0.02 0.01 0
5
Time(s) (b)
Estimation error
0.06
0.04
0.02
0 0
5
Time(s) (c) Fig. 3.4 The estimation error for different δ for CKF, ETCKF, and CKF-I, where blue line represents the CKF, and the red line denotes the ETCKF and the green one stands for the CKF-I. (a) δ = 0.41, i.e., the estimation error for the communication rate is 90%. (b) δ = 0.75, i.e., the estimation error for the communication rate is 75%. (c) δ = 1.05, i.e., the estimation error for the communication rate is 50%. (d) δ = 1.39, i.e., the estimation error for the communication rate is 30%
82
3 Event Triggered CKF Using Innovation Based Condition
Estimation error
0.08 0.06 0.04 0.02 0 0
5
10
15
Time(s) (d) Fig. 3.4 (continued)
Pk and its bounds
0.01 0.008 ||P || k
0.006
Upper Bound Lower Bound
0.004 0.002 0 0
5
10
15
Time (s) Fig. 3.5 The relationship between Pk and the upper/lower bound when the communication rate is 50%, where the red line represents the 2-norm of the posterior matrix, and the blue dotted line and the green dash line denote the upper and lower bound, respectively
As Fig. 3.6 shows, the actual and theoretical arrival rate are almost identical with different δ, proving the correctness of (3.50). It is obvious that both the system arrival rate and the ETCKF filtering accuracy will decline with the increasing δ. Therefore, the appropriate δ should be selectively designed according to actual performance requirement to balance the data communication and filtering accuracy.
3.5 Simulation and Verification
83
0.25
1
0.8 0.7
0.15
0.6 0.5
0.1
0.4 0.3
0.05
0.2
Communication Rate
Estimation Error
0.9
Estimation Error Actual Communication Rate Theoretical Communication Rate
0.2
0.1 0
0
0.5
1
1.5
δ
2
2.5
3
0
Fig. 3.6 The average error with different δ as well as the actual and theoretical arrival rate, where the estimation error is drawn in blue, and the theoretical communication rate is marked in red and compared with the actual communication rate from simulations in green
3.5.4 Simulation on Communication Delay The communication delay comprises the transmission delay, spreading delay, processing delay, and queuing delay, which is an important factor that affects the real time and accuracy performance of the DSE. The ETCKF can reduce communication delay by decreasing the amount of data communication. This part verifies the reduction on communication delay via simulation. Firstly, the following assumption are made according to [141]: 1. The user datagram protocol (UDP) is used as the communication protocol. 2. It is assumed the system has only one control center which is located at the substation with the most communication links. 3. It is assumed the bandwidth of the link between the substation and the control center is 50 Mbps. 4. The routing is always the shortest under a default steady state. 5. The PMU data are transmitted via the data frame in C37.118 format. 6. The system works properly and only the data frames are transmitted. 7. PMU has a sampling rate of 50 samples/second. 8. It is assumed the processing delay is zero. 9. To calculate the spreading delay, the network reactance is transformed into mileage and the spreading delay equals to the distance divided by the communication rate. 10. The transmission delay equals the amount of data transmitted (in kb) divided by the bandwidth of link. 11. The PMU is installed at the generator node.
84
3 Event Triggered CKF Using Innovation Based Condition
Based on the previous assumption, the communication delay can be calculated as follows according to the method in [1]: τ = τf + τp +
L , R
(3.81)
where τf is the processing delay, i.e., 0. τp is the spreading delay which can be calculated according to Assumption 9. L is the total amount of data transmitted and R is the bandwidth of link. See Table 3.1 for the simulation results. As Table 3.1 shows, the ETCKF can effectively reduce communication delay, which is smaller with the decline in communication rate.
3.6 Conclusions This chapter develops the event triggered nonlinear filtering based on innovation based condition and CKF to address the excessive data communication and guaranteed estimation accuracy. The scope of this chapter is summarized as follows: 1. Inspired by the concept of event-trigger linear filter, the innovation based event triggered cubature Kalman filtering is designed and the detailed inference for innovation based condition and the related cubature Kalman filter are provided. 2. The stochastic stability of the ETCKF is studied. By means of the stochastic Lyapunov method, the stochastic stability of the ETCKF is proved and the sufficient condition of stochastic stability of the ETCKF is provided, which is only determined by offline parameters. Moreover, the theoretical arrival rate of the ETCKF is inferred so as to provide the theoretical reference for the selection of design parameters for the event triggered sampling strategy. 3. The standard IEEE39 buses 10 generator system is modeled to provide the verification platform for the ETCKF in this chapter and subsequent chapters. Based on the simulation platform, the verification is performed for the ETCKF. The simulation results show that the ETCKF can provide higher estimation accuracy than the CKF-I at the same arrival rate since it fully utilizes the measurement information implicitly contained in innovation based condition when no measurement is received. Moreover, it can effectively reduce the data communication, while the estimation accuracy is still similar to the CKF with full data communication. Various simulations are also carried out to describe the relationship between the design parameter and estimation accuracy and arrival rate, which provide the reference for the selection of the design parameter from the viewpoint of engineering.
a The
= 0.41) = 0.75) = 1.05) = 1.39)
Communication rate (%) 100% 90 70 50 30 Max. delay (ms) 3.83 3.74 3.65 3.56 2.48
Min. delay (ms) 2.05 1.98 1.90 1.84 1.42
amount of transmission data corresponds to only one PMU generated in 15 s simulation
CKF ETCKF(δ ETCKF(δ ETCKF(δ ETCKF(δ
Data amounta (Bytes) 12,390 11,452 8765 6286 3685
Table 3.1 Communication delay between the PMU and control center Avg. delay (ms) 2.56 2.42 2.23 1.92 1.65
% of delay reduction 0% 5.4% 12.8% 28.5% 35.5%
3.6 Conclusions 85
Chapter 4
Event Triggered Particle Filter Using Innovation Based Condition with Guaranteed Arrival Rate
4.1 Introduction The DSE of WAMS in smart grid faces many challenges. It is required to provide accurate state estimation when the system is highly nonlinear, and tightly track system state when the system noise is non-Gaussian or corrupted by sudden disturbance, and reduce the transmission of observations while guaranteeing the filtering accuracy under limited network bandwidth, and guarantee the stable data transmission within a given time while avoiding network congestion aroused from fluctuations in the amount of data transmission. The event triggered EKF can reduce the observations transmitted, which solve the limitation on communication bandwidth of WAMS, while it has higher accuracy than the EKF with intermittent observations at the same arrival rate. However, there are many disadvantages to apply event triggered EKF in the DSE of WAMS. Firstly, the event triggered EKF cannot provide an accurate arrival rate calculation like the event triggered linear filter. Therefore, the arrival rate of event triggered EKF can only be acquired by simulation and even practical experiment. Secondly, it is observed that the triggered observation is highly centralized for event triggered EKF so as to cause enormous fluctuations on the data amount to be transmitted and data congestion in the worst case. Thirdly, the error due to the local linearization by the event triggered EKF is augmented with the increasing nonlinearity of the system. When the nonlinearity of system is strong, the event triggered EKF suffers from the poor accuracy. Fourthly, the event triggered EKF requires both the system and observation noise Gaussian, which can hardly be satisfied in practice. In case of the sudden disturbance, it takes a long recovery time on the reliable accuracy for the event triggered EKF. In summary, it is necessary to design a high accuracy nonlinear event triggered filter, which is applicable for various kinds of noise and guarantee a stable arrival rate in order to address the challenges of DSE in WAMS. This chapter designs © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Z. Li et al., Event-Trigger Dynamic State Estimation for Practical WAMS Applications in Smart Grid, https://doi.org/10.1007/978-3-030-45658-0_4
87
88
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
a Monte Carlo based event triggered particle filter with a guaranteed arrival rate. The Monte Carlo method is used to design the event triggered sampling strategy to ensure a guaranteed arrival rate by predicting the distribution of observations at the next time instance. The observation information implicitly contained in the triggered strategy is used in filtering iteration to improve the filtering accuracy. The remaining contents of this chapter are arranged as follows. Section 4.2 introduces the Monte Carlo method and the particle filter based on Monte Carlo method. Section 4.3 designs the event triggered particle filter (ET-PF) with a constant arrival rate by using the Monte Carlo method to approximate the probability. The ETPF includes the design of the event triggered sampling strategy with a guaranteed arrival rate in Sect. 4.3.1 and the design of a high accuracy filtering that uses the innovational information contained in triggering strategy in Sect. 4.3.2. In Sect. 4.4, the IEEE 39 buses system is used to verify the filtering accuracy and arrival rate of ET-PF.
4.2 Monte Carlo Method and Its Application in Nonlinear Filtering 4.2.1 Monte Carlo Method The Monte Carlo method is one kind of statistic methods. Based on the law of large numbers, the sufficiently large number of particle points are used to approximate the probability of distribution. The fundamental of Monte Carlo method is to generate one set of M particle points according to the probability p(x), i.e., {x i }, i = 1, . . . , M, to approximate the distribution p(x), i.e., p(x) ≈
M
δ x − xi .
(4.1)
i=1
Lemma 4.1 If the cumulative distribution function (CDF) corresponding to the distribution p(x) is F (x) and U is stochastically generated on the uniform distribution over [0, 1], x = min{x|F (x) ≥ U } obeys the distribution p(x). The generation of particle points {x i } is simply expressed as follows: x i ⇐ p(x).
(4.2)
It is hard for some p(x) to acquire the formal inverse function x = min{x|F (x) ≥ U } of the CDF. The generation of particle points becomes difficult by applying Lemma 4.1. Therefore, an importance sampling (IS) method is developed. The sampling of p(x) can be indirectly realized via the test distribution π(x).
4.2 Monte Carlo Method and Its Application in Nonlinear Filtering
89
Lemma 4.2 If Lemma 4.1 is applied to generate the particle point {x i } according to the distribution π(x) and the weight is calculated by
ωˆ i = g x i /π x i . After the normalization ωi = ωˆ i / approximated by p(x) ≈
M
M
ˆ i=1 ω
i,
(4.3) the probability p(x) can be
ωi δ x − x i .
(4.4)
i=1
The generation of particle points and its weight {x i , ωi } is simply expressed as follows:
x i , ωi ⇐ p(x).
(4.5)
For the generation of particle points in multi-dimension or multi-moment data, the sequential Monte Carlo (SMC) method can be adopted for iteration by times to realize the generation of particle points. To generate multi-moment joint distribution g(x1 , . . . , xd ), the joint distribution may be decomposed by the conditional probability at first. g(x1 , . . . , xd ) = g1 (x1 )g2 (x2 |x1 ) · · · gd (xd |x1 , . . . , xd−1 ).
(4.6)
Lemma 4.3 After the initial particle point x1i ⇐ g1 (x1 ) is generated, each x1i , ∀i can be taken as the first element in the i particle of the joint distribution to i generate x2i , . . . , xd−1 of the i particle. Through the iteration, x1i is substituted into g2 (x2 |x1 ) to establish the iteration probability g2 (x2 |x1i ) and the particle point x2i ⇐ g2 (x2 |x1i ) is generated at the 2nd time instance. By fulfilling d − 1 iteration, the generated particle points xi = [x1i , . . . , xdi ] satisfy
g(x1 , . . . , xd ) ≈
M
δ x − xi .
(4.7)
i=1
By combining IS and SMC methods, the sequential importance sampling (SIS) can be founded. Because the joint distribution g(x1 , . . . , xd ) can hardly be decomposed via (4.6) or it is inconvenient for the conditional probability to generate the particles after decomposition. The joint test distribution π(x1 , . . . , xd ) can be introduced and decomposed as follows according to the conditional probability. π(x1 , . . . , xd ) = π1 (x1 )π2 (x2 |x1 ) · · · πd (xd |x1 , . . . , xd−1 ).
(4.8)
90
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
Lemma 4.4 By the IS method, the SIS is used to initialize the particle point generation x1i ⇐ π1 (x1 ) and initial weight calculation ω1i = g1 (x1i )/π1 (x i ). The subsequent iteration can be realized by generating the particle points xti ⇐ i ) and updating the weights (4.9) for any t = 2, . . . , d. πt (xt |x1i , . . . , xt−1
ωti
=
i ωt−1
gi x1i , . . . , xti
i i gt−1 x1i , . . . , xt−1 πt xti |x1i , . . . , xt−1
(4.9)
.
4.2.2 Application of Monte Carlo Method in Nonlinear Filtering A general system is considered by xk = f (xk−1 , μk ), yk = h(xk , νk ),
(4.10)
where xk ∈ Rn is the system state with the dimension n, and yk ∈ Rp is the system observation vector with the dimension p. The system noise μk ∼ pk (μ) and observation noise νk ∼ pk (ν) satisfy any given stochastic distribution. The initial system state is observed as a stochastic variable x0 obeying the distribution p0 (x). It is shown that the system (4.10) is a more general form of system description and can better describe the stochasticity of the physical system. According to the filtering theory, the optimal state estimation xˆk at the time instance k is xˆk =
xk p(xk |y1 , . . . , yk )dxk .
(4.11)
The Monte Carlo method in Sect. 4.2.1 can be applied to generate one set of M particle points and their weights {xki , ωki } ⇐ p(xk |y1 , . . . , yk ). Therefore, xˆk can be approximately calculated according to (4.12).
xˆk =
M
ωki xki .
(4.12)
i=1
Because it is hard to directly acquire the posterior conditional probability p(xk |y1 , . . . , yk ), it is necessary to infer its iteration form as (4.13).
4.2 Monte Carlo Method and Its Application in Nonlinear Filtering
91
p(x1:k |y1 , . . . , yk ) =
p(x1:k , yk |y1 , . . . , yk−1 ) p(yk |y1 , . . . , yk−1 )
p(yk |xk )p(xk |xk−1 )p(x1:k−1 |y1 , . . . , yk−1 ) = p(yk |y1 , . . . , yk−1 )
(4.13)
∝ p(yk |xk )p(xk |xk−1 )p(x1:k−1 |y1 , . . . , yk−1 ). It should be noted that the last term p(x1:k−1 |y1 , . . . , yk−1 ) in (4.13) is the previous term of iteration for p(x1:k |y1 , . . . , yk ). The generated initial particle points {x0i , ω0i } ⇐ p0 (x) can initialize the iteration. According to SIS Lemma 4.3, i , ωi after the particle points {xk−1 k−1 } are acquired to describe the distribution p(x1:k−1 |y1 , . . . , yk−1 ) via iteration, the particle points can be collected at the new iteration moment by the test distribution.
i , y1 , . . . , yk−1 . xki ⇐ π xk |xk−1
(4.14)
According to the updating probability of p(yk |xk )p(xk |xk−1 ) in (4.13), the weight can be updated as follows according to SIS. ωˆ ki
=
i ωk−1
i p yk |xki p xki |xk−1
. i ,y ,...,y π xki |xk−1 1 k−1
(4.15)
As the last item of (4.13) is a direct ratio, the weight must be normalized, ωki = ωˆ ki
M
ωˆ ki .
(4.16)
i=1
The {xki , ωki } acquired above can be used to approximate the posterior probability, p(xk |y1 , . . . , yk ) ≈
M
ωki δ xk − xki .
(4.17)
i=1
The state estimation at time instance k can be completed according to (4.12), and it can be taken as the beginning of iteration at k + 1 moment. It should be noted that there exists the severe numerical issue “particle impoverishment” in the realization of SIS iteration, which means that the weight ωki of most particle points will decline along the iteration process until it becomes negligible. A very small quantity of particle points occupies almost all the weight so that the posteriori probability becomes inaccurate by using a very small number of particle points. Therefore, it is necessary to introduce the “resampling” in the iteration to abandon the particle points with extremely low weights and duplicate
92
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
the particle points with large weights in order to avoid “particle impoverishment.” It is unnecessary to execute “resampling” at each iteration moment, which is only executed to resume the effectiveness of particle points when the “particle impoverishment” reaches one certain level. For this purpose, the number of valid samples at the time instance k is defined by Meff,k = 1/
M
ωki
2
! .
(4.18)
i=1
Based on this concept, one threshold ∈ [0, 1] can be selected according to the practical requirement. The resampling is conducted only when the number of valid examples Meff,k < M. The resampling generates the new particle points {xki∗ } through the current particle points and weights {xki , ωki } as follows: xki∗ ⇐
M
ωki δ xk − xki .
(4.19)
i=1
The following weight is chosen for the newly generated particles. ωki∗ = 1/M.
(4.20)
All the new particles and weights {xki∗ , ωki∗ } are used to replace the particle points before the resampling,
xki , ωki = xki∗ , ωki∗ .
(4.21)
The SIS with resampling is called as sampling importance resampling (SIR), and the aforementioned filter realized by SIR is the particle filter. The particle filters vary for different types of test distribution (4.14). According to i ,y ,...,y SIR theory, the particle filter has better performance when π(xki |xk−1 1 k−1 ) is closer to p(xk |y1 , . . . , yk ). However, it is obvious that p(xk |y1 , . . . , yk ) cannot be acquired. Therefore, various types of particle filters are proposed according to i ,y ,...,y different selections of π(xki |xk−1 1 k−1 ). The simplest bootstrap filter selects
i i . π xki |xk−1 , y1 , . . . , yk−1 = p xki |xk−1
(4.22)
By using this selection, for the system (4.10), the particle points at time instance k can be generated through μik according to the system noise distribution.
i , μik . xki = f xk−1
(4.23)
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . .
93
The weight updating can be simplified by
i ωˆ ki = ωk−1 p yk |xki .
(4.24)
The bootstrap filter is simple and convenient for implementation, and the computation burden is comparatively light among particle filters. Therefore, this chapter designs the nonlinear event triggered particle filter (ET-PF) with a guaranteed arrival rate with reference to the test distribution selection of the Bootstrap filter.
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate Based on Monte Carlo Method 4.3.1 Design of Triggered Strategy with A Guaranteed Arrival Rate As shown in Fig. 4.1, the event triggered filtering is realized far away from the data ˜ k in this center. To avoid the misunderstanding, the non-triggered set is defined as chapter and the logic variable γk representing the triggering at nodes and the receipt of observation at the center is defined as follows: ˜ k, 1, yk ∈ / (4.25) γk = ˜ k. 0, yk ∈ The observation acquired at the data center at the time instance k is indicated by Yk =
{yk }
if γk = 1,
˜k yk ∈
if γk = 0.
Fig. 4.1 System structure of event-trigger system
(4.26)
94
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
Based on this, the observation set acquired till the time instance k is defined as follows: Fk {Y1 , Y2 , . . . , Yk }.
(4.27)
Based on the particle points and weights {xki , ωki } that satisfy (4.28) at time ˜ k+1 to make sure that the instance k, this section designs the non-triggered set triggered strategy can guarantee the arrival rate satisfying (4.28) at the time instance k + 1. p(xk |Fk ) ≈
M
ωki δ(xk − xki ).
(4.28)
i=1
˜ k+1 . λ ≈ Pr yk+1 ∈ /
(4.29)
Firstly, the Monte Carlo method is adopted to approximate the distribution p(yk+1 |Fk ) of observation prediction at k + 1 moment, meaning that the additional i , ωi } are generated to satisfy particle points {yk+1 k p(yk+1 |Fk ) ≈
M
i . ωki δ yk+1 − yk+1
(4.30)
i=1
It should be noted that p(yk+1 |Fk ) =
p(yk+1 |xx , Fk )p(xk |Fk )dxk
xk
=
(4.31) p(yk+1 |xx )p(xk |Fk )dxk .
xk
The term p(xk |Fk ) in (4.31) has been approximated via the particle point and weight {xki , ωki } according to (4.28). Meanwhile, the noise particle points can be generated through the distribution of system noise and observation noise, i.e., μik ⇐ pk (μ), νki ⇐ pk (ν).
(4.32)
The influence of distribution p(yk+1 |xx ) on (4.31) can be reflected by (4.33).
i = f xki , μik , xk+1
i i yk+1 = h xk+1 , νki .
(4.33)
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . .
95
i , ωi } satisfies the distribution Therefore, the observation additional particle {yk+1 k approximation to one-step prediction of observation as described in (4.30), which can be further used to compute the expectation of one-step prediction of observation by
yˆk+1|k = E(yk+1 |Fk ) =
M
i ωki yk+1 .
(4.34)
i=1
Based on the definition of innovation in the innovation based event triggered filtering, this chapter defines the filtering innovation ey,k+1 of ET-PF at k+1 moment as follows: ey,k+1 = (yk+1 − yˆk+1|k )2 .
(4.35)
i , ωi } is transformed into Furthermore, the observation additional particle {yk+1 k the innovation based additional particle. i i = yk+1 − yˆk+1 2 . ey,k+1
(4.36)
Similarly, it satisfies p(ey,k+1 |Fk ) =
M
i . ωki δ ey,k+1 − ey,k+1
(4.37)
i=1 i } are re-arranged in ascending order, together As shown in Fig. 4.2, {ey,k+1 i with which ωk are accordingly adjusted, to form a new orderly innovation based i i , ω˜ k+1 }, which satisfies additional particle and weight set {e˜y,k+1 j
i e˜y,k+1 ≤ e˜y,k+1 ,
Based on this setting,
mx i=1
i < j.
(4.38)
i ω˜ k+1 can be used to approximate the CDF of e˜y,k+1 .
mx
mx i ω˜ k+1 . p yk+1 − yˆk+1|k 2 ≤ e˜y,k+1 |Fk =
(4.39)
i=1
By setting
0 i=0
defined as
i ω˜ k+1 = 0, and it is noted that
M i=0
i ω˜ k+1 = 1, the array can be
96
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
yk1
1 1 k
yk2 1 yk3 1 yk4 1 ˆ 4 yk 3 2 k
1|k
k
k
yk5 1
yk6 1
5 k
3 2 5 4 e1y ,k 1 e y ,k 1e y ,k 1e y ,k 1 e y ,k
1
6 k
e6y ,k
(e4y ,k 1 ) e1y ,k (e5y ,k 1 ) e y2,k
1 1
... (e (e
1 k 2 k
( (
7 k
e7y ,k
1
Gate to decide trigger or not, e.g., (e˜ y5,k Rearrange particals in ascending i order of e y , k 1
yk7 1
1
1
e˜y6,k 1 ) / 2.
Calculate the gate by the corresponding intege using equation (32) 0% 4 k 5 k
1
) )
... i 1
...
5
i 1
1 y ,k 1 7 y ,k 1
)e )e
6 y ,k 1 7 y ,k 1
3 k 4 k
( (
1 k 7 k
) )
6 i 1
5 i 1
i k
6
1 i 1
i k
i k
i k
100% i k
Calculate the accumulate of new weights, and find a corresponding integer by equation (31), e.g., m1=5. Fig. 4.2 Design of event triggered sampling strategy with the guaranteed arrival rate
ϒCDF (n)
n
i ω˜ k+1 .
(4.40)
i=0
It is known that the array 0 ≤ ϒCDF (n) ≤ 1 and is a monotonic increasing. For the predefined arrival rate λ ∈ [0, 1], the positive integer m1 ∈ {0, 1, . . . , M − 1} will certainly be found, which satisfies
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . . m1
i ω˜ k+1 ≤1−λ≤
i=0
m 1 +1
i ω˜ k+1 .
97
(4.41)
i=0
According to (4.41), m1 is selected. It is obvious that m1
m1 i p yk+1 − yˆk+1|k 2 ≤ e˜y,k+1 |Fk = ω˜ k+1 < 1 − λ, i=1 m1+1
m1+1 i p yk+1 − yˆk+1|k 2 ≤ e˜y,k+1 |Fk = ω˜ k+1 ≥ 1 − λ.
(4.42)
i=1
The dynamic threshold ξ˜k+1 that satisfies (4.43) certainly exists in the range of (4.44).
p yk+1 − yˆk+1|k 2 ≤ ξ˜k+1 |Fk = 1 − λ.
(4.43)
m1 m1 +1 . ξ˜k+1 ∈ e˜y,k+1 , e˜y,k+1
(4.44)
When (4.44) is satisfied, the threshold can be set by ξ˜k+1 =
m1 m1 +1 + e˜y,k+1 e˜y,k+1
2
.
(4.45)
The triggered strategy with a guaranteed arrival rate in ET-PF can be designed as γk =
1, 0,
˜ k+1 , if yk ∈ / ˜ k+1 . if yk ∈
(4.46)
˜ k+1 is where the non-triggered observed set ˜ k+1 = yk+1 |(yk+1 − yˆk+1|k )2 ≤ ϑ + ξ˜k+1 .
(4.47)
Remark 4.1 According to the aforementioned inference, (4.47) shall be designed as ˜ k+1 = yk+1 |(yk+1 − yˆk+1|k )2 ≤ ξ˜k+1 .
(4.48)
It should be noted that p(e˜y,k+1 |Fk ) is approximated by the Monte Carlo method so that the errors cannot be avoided. When (4.48) is adopted to compute the ˜ k+1 , it varies according to the sign of threshold of non-triggered observation set error. When the error is negative, the threshold is small so that the arrival rate is lower than expected, and the filtering accuracy will decline slightly. When the error
98
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
is positive, the threshold is large so that the arrival rate is higher than expected, aggravating the communication burden. In general engineering application, the cost to be paid is different in view of either the slight decline in filtering accuracy or the increasing communication burden. Slight sacrifice on accuracy can be made in DSE in order to keep the real-time requirement on data communication within the scope of design. Therefore, one protection parameter ϑ is manually introduced to keep the transmitted data within the preset during the design of the triggered strategy. As Monte Carlo method provides the high accuracy, ϑ is selected as a very small positive constant. In this chapter, ϑ = 0.01. In Algorithm 4.1, the pseudo-code of the ET-PF with a guaranteed arrival rate is provided in detail.
4.3.2 Filtering Design Using Triggered Information Section 4.3.1 designs the event triggered sampling strategy with a guaranteed arrival rate based on the particles and weights {xki , ωki } of (4.28) that describes the posteriori probability at the time instance k. This section will design the filtering algorithm of ET-PF based on the event triggered sampling strategy in Sect. 4.3.1, which can fully utilize the triggered information at the data center. The initialization of ET-PF filtering generates the initial particles as x0i ⇐ p0 (x).
(4.49)
The initial average weight is set as ω0i = 1/M,
∀i.
(4.50)
To design the ET-PF filtering based on this, it is necessary to generate the corresponding particle and weight {xki , ωki } according to the receipt or non-receipt of observations based on the particles p(xk−1 |Fk−1 ) acquired to approximate the i , ωi distribution {xk−1 k−1 } so as to satisfy the approximate posteriori probability in (4.28). The filtering at time instance k can be calculated as xˆk =
M
ωki xki ≈ E(xk |Fk ).
(4.51)
i=1
When the Observation is Received, i.e., γk = 1 The data center receives the observation yk for filtering at the time instance k, (4.51) is transformed as xˆk =
M i=1
ωki xki ≈ E(xk |Fk−1 , yk )
(4.52)
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . .
99
Algorithm 4.1 The event triggered sampling strategy with the guaranteed arrival rate Input: Observations {yk } Output: Logic variable {γk } while k ≥ 1 do Local filter performs {xki , ωki } y¯k+1|k = 0 for i = 1 to M do i μ˜ ik ⇐ p(νk+1 ) ν˜ k+1 ⇐ p(νk+1 ) i i i i
i i , ν˜ k+1 xk+1 = f xk , μk yk+1|k = h xk+1 i y¯k+1|k = y¯k+1|k + yk+1|k end for for i = 1 to M do
i i = y˜k+1 − y¯k+1|k 2 ey,k+1 end for ω˜ sum = 0 for i = 1 to M do i i e˜y,k+1 = ey,k+1 for j = i + 1 to M do j i if e˜y,k+1 < ey,k+1 then j
j
i i e˜y,k+1 = ey,k+1 , ω˜ k+1 = ωk end if end for j j i ey,k+1 = ey,k+1 , ωk = ωki i ω˜ sum + = ω˜ k+1 if ω˜ sum > 1 − λ then e˜i−1 +e˜i
ξ˜k+1 = y,k+1 2 y,k+1 if yk+1 − yˆk+1|k 2 < ϑ + ξ˜k+1 then return γk+1 = 0 else return γk+1 = 1 end if end if end for end while
Therefore, {xki , ωki } shall be generated according to (4.53).
xki , ωki ⇐ p(xk |Fk−1 , yk ).
(4.53)
100
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
Note that the posterior probability can be inferred by p(x1:k |yk , Fk−1 ) =
p(x1:k , yk |Fk−1 ) p(yk |Fk−1 )
=
p(yk |xk )p(xk |xk−1 )p(x1:k−1 |Fk−1 ) p(yk |Fk−1 )
∝ p(yk |xk )p(xk |xk−1 )p(x1:k−1 |Fk−1 ).
(4.54)
According to SIS, if the particles are generated by
i . xki ⇐ p xk |xk−1
(4.55)
i , μi ), the weight can By generating μik ⇐ pk (μk ) and calculating xki = f (xk−1 k be updated by
i ωˆ ki = ωk−1 p yk |xki .
(4.56)
The particle points {xki , ωki } acquired through the normalization of ωki = M ωˆ ki ωˆ ki satisfy (4.53) and filtering result at the time instance k can be computed i=1
according to (4.52). If the Observation is Not Received, i.e., γk = 0 In this case, the observation yk is not transmitted to the data center. Therefore, p(yk |xki ) in the weight update (4.56) cannot be calculated. Other consideration should be specifically taken to generate the corresponding particle point and weight {xki , ωki }. One straightforward method to acquire the particle and weight is to directly replace the posteriori probability by priori probability.
i xki = f xk−1 , μik , i ωki = ωk−1 .
(4.57)
The particle {xki , ωki } is generated by (4.57) to approximate the priori probability (4.58). p(xk |Fk−1 ) ≈
M
ωki δ xk − xki .
(4.58)
i=1
Based on {xki , ωki } generated from (4.57) and according to Lemma 4.3, the filtering can be derived by
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . .
xˇk =
M
ωki xki ≈ E(xk |Fk−1 ).
101
(4.59)
i=1
Compared with the conventional Kalman filtering with intermittent observations, the ET KF improves the filtering performance from two facets. On the one hand, the observation with significant contribution to filtering is selected to be transmitted by the innovation based strategy. On the other hand, by means of the triggering innovation yk ∈ k , the priori estimation E(xk |Fk−1 ) is replaced by E(xk |Fk−1 , yk ∈ k ) as the filtering result when no observation is received. Based on the principle of ET KF, it is necessary to fully use the triggering ˜ k of Sect. 4.3.1 to calculate the ET-PF when no observations innovation yk ∈ are received. ∞
˜ k dxk . xˆk = xk p xk |Fk−1 , yk ∈ (4.60) −∞
The posteriori conditional probability can be inferred as ˜ k , Fk−1 ) p(x1:k |yk ∈ =
˜ k |Fk−1 ) p(x1:k , yk ∈ p(yk |Fk−1 )
=
˜ k |xk )p(xk |xk−1 )p(x1:k−1 |Fk−1 ) p(yk ∈ ˜ k |Fk−1 ) p(yk ∈
˜ k |xk )p(xk |xk−1 )p(x1:k−1 |Fk−1 ). ∝ p(yk ∈
(4.61)
By generating the noise particle μik ⇐ pk (μk ) and calculating xki i , μi ), the generation of particle satisfies f (xk−1 k
i . xki ⇐ p xk |xk−1
=
(4.62)
Combined with the weight update, it can be got by
i ˜ k |xki . p yk ∈ ωˆ ki = ωk−1 The particle {xki , ωki } acquired by the normalization ωki = ωˆ ki
(4.63) M i=1
ωˆ ki can be used
to compute the filtering result. ˜ k |x i ) in (4.63), which is solved by the However, it is difficult to solve p(yk ∈ k integration as
102
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
˜ k |xki ) = p(yk ∈
˜k yk ∈
p yk |xki dyk .
(4.64)
As the integral function p(yk |xki ) is determined by the online parameter xki and ˜ k , it is hard for realthe range of integration depends on the other online parameter time ET-PF to calculate the analytical solution to the right-hand side term of (4.64). One feasible method is to acquire the integral in (4.64) by the numerical method. However, since the corresponding numeric method is executed to solve the integral for each particle point xki , the computation is too heavy to sustain. Therefore, it is prerequisite to consider other methods to generate the particles and weights {xki , ωki } in order to approximate the posteriori conditional probability ˜ k , Fk−1 ). Based on this consideration, the constrained Bayesian state p(xk |yk ∈ estimation theory is introduced [142, 143]. Lemma 4.5 The “Constraint system” indicates that not only the system satisfies the system and observation equations (4.10) but also the system state satisfies the constraints, φ(xk ) ∈ k .
(4.65)
If the posteriori probability of “constraint system” at the last time instance can be approximated as p(xk−1 |Fk−1 ) =
M
i i , ωk−1 δ xk−1 − xk−1
(4.66)
i=1
and the observation at the time instance k is yk . Therefore, the particle points can i ) and the corresponding weight is updated as be generated by xki ⇐ p(xk |xk−1
i L xki p yk |xki , ωˆ ki = ωk−1
(4.67)
where L(xki ) is the constraint evaluation logic variable and satisfies L(xki )
=
1,
if φ(xki ) ∈ k ,
0,
otherwise.
The particle {xki , ωki } normalized by ωki = ωˆ ki
M i=1
(4.68)
ωˆ ki can be used to describe the
posteriori conditional probability p(x ˜ k |Fk−1 , yk , φ(xki ) ∈ k ) of the “constraint system.” Remark 4.2 When the constraint φ(xk ) ∈ k is relaxed, and only a small number of particle points satisfy L(xki ) = 0, the filtering accuracy of constrained Bayesian
4.3 Design of Event Triggered Particle Filter with A Guaranteed Arrival Rate. . .
103
state estimation is not affected. When the constraint φ(xk ) ∈ k is too strict, most of the constraint evaluation logic variables L(xki ) = 0, which is called as “constraint particle impoverishment,” causing a decline in the filtering accuracy of constrained Bayesian state estimation. The method to overcome “constraint particle impoverishment” is to abandon the particles corresponding to L(xki ) = 0, i ) is executed again to generate new particles, and the weight is xki ⇐ p(xk |xk−1 further re-calculated according to (4.67) until the adequate generation of particles with L(xki ) = 1. In order to apply the constrained Bayesian state estimation theory for the ˜ k ) when γk = 0, the general posteriori conditional probability p(xk |Fk−1 , yk ∈ extension of the constraint is performed at first. φ ∗ (xk−1 , μk , νk ) ∈ k .
(4.69)
For the system (4.10), the constraint φ(xk ) ∈ k of Lemma 4.5 is extended into a special form of (4.69) through (4.70). φ(xk ) = φ(f (xk−1 , μk )) φ ∗ (xk−1 , μk , νk ).
(4.70)
Naturally, Lemma 4.5 can be extended to the following Lemma 4.1. Theorem 4.1 The “extended constraint system” not only satisfies the system and observation equation (4.10) and but also the system state satisfies the constraint (4.69). Based on the acquisition of the particle and weight {xki , ωki } approximate to the subsequent conditions at the time instance k−1, the corresponding noise particles are generated according to the system and observation noise distribution: μik ⇐ pk (μ),
νki ⇐ pk (ν).
(4.71)
The particle point is generated at the time instance k by
i , μik . xki = f xk−1
(4.72)
The corresponding weight is updated as i
i L∗ xk−1 , μik , νki p yk |xki , ωˆ ki = ωk−1
(4.73)
where L∗ (xki ) is the extended constraint evaluation logic variable which satisfies
L xki = ∗
1,
i
when φ ∗ xk−1 , μik , νki ∈ k ,
0,
otherwise
(4.74)
104
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
The particle {xki , ωki } normalized by ωki = ωˆ ki the posteriori conditional probability “extended constraint system.”
M
ωˆ ki can be used to describe i=1 p(xk |Fk−1 , yk , φ ∗ (xk−1 , μk , νk ) ∈ k ) of the
˜ k ), it can be derived from the definition. To acquire p(xk |Fk−1 , yk ∈ φ1∗ (xk−1 , ωk , νk ) h(f (xk−1 , μk ), νk ) = yk .
(4.75)
˜ k is taken as the system state constraint The observation triggering yk ∈ ˜ k . After that, the filter can be considered in the absence φ1∗ (xk−1 , μk , νk ) ∈ of observations. The particle and weight can be updated based on the acquisition i , ωi of {xk−1 k−1 } at the time instance k − 1, which is transformed to constrained Bayesian state estimation with the constraint of no observations yk As for the particle filtering without observations yk , the priori conditional probability p(xk |Fk−1 ) can be approximated by (4.57), and the influence of system state constraint on PF can be dealt with by the constraint Bayesian state estimation. Therefore, by combining (4.57) and Lemma 4.1, the particle xki at the time instance k and observation estimation particle yki can be generated through obtaining the noise particles μik ⇐ pk (μ) and νki ⇐ pk (ν).
i , μik , xki = f xk−1
yki = h xki , νki .
(4.76)
i , ωi , ν i ), the extended constraint evaluation logic Because of yki = φ1∗ (xk−1 k k variable can be calculated as ˜ k, i
i
1, when yki ∈ ∗ ∗ i i (4.77) L˜ 1 yk L1 xk−1 , μk , νk = 0, otherwise.
Based on this and (4.57), the weight can be updated and calculated as ωˆ ki
i = L˜ ∗1 yki ωk−1 =
i ˜ k, ωk−1 , if yki ∈ i ˜ k. 0, if y ∈ /
(4.78)
k
According to Lemma 4.1, xki obtained from (4.76) and xki derived from (4.78) are M normalized by ωki = ωˆ ki ωˆ ki to acquire {xki , ωki }, which satisfies i=1
˜ k) ≈ p(xk |Fk−1 , yk ∈
M i=1
ωki δ xk − xki
(4.79)
4.4 Simulation of ET-PF for DSE in WAMS
105
Accordingly, the filtering xˆk at the time instance k is xˆk =
M
˜k . ωki xki = E xk |Fk−1 , yk ∈
(4.80)
i=1
Remark 4.3 According to the triggered strategy set by Sect. 4.3.1, the extended constraint evaluation logic variable satisfies p(L˜ ∗1 (yki ) = 0) = λ. Therefore, the number of valid particle points is about λ·M after the execution of (4.78). Generally, the number of valid particle points λ · M is within the acceptable range of PF in engineering application so that there is no need to consider the influence of “constraint particle impoverishment.” Except the SIR resampling of ET-PF, there is no extra need to design an additional recovery for valid constraint Bayesian state estimation particle. In summary, both the iteration of ET-PF and the acquisition of observation are derived regardless of γk , i.e., the receipt of observations. As the Monte Carlo method similar to SIS is adopted for ET-PF, it is necessary to add the resampling in ET-PF iteration in order to address “particle impoverishment.” The algorithm of Sect. 4.2.2 is adopted for ET-PF algorithm, and the number of valid examples is defined as M Meff,k = 1/( (ωki )2 ) and the threshold ∈ [0, 1]. When Meff,k < M i=1 M i i at the time instance k, the resampling xki∗ ⇐ i=1 ωk δ(xk − xk ) is executed and the weight is averaged by ωki∗ = 1/M. The particle and weight {xki∗ , ωki∗ } generated from the resampling are taken as the beginning of particle update at the next moment. The pseudo-code, which uses the triggered innovation information for filtering, is designed in Algorithm 4.2.
4.4 Simulation of ET-PF for DSE in WAMS The IEEE 39 buses 10 generator system is adopted as the example of the grid system. In this section, the ET-PF filter designed in Sect. 4.3 is applied for DSE of the IEEE 39 buses. The performances of ET-PF are verified through the comparison with PT with intermittent observation (PF-I) at the same arrival rate. The generator G3 in the IEEE 39 buses as shown in Fig. 2.2 is selected to verify the filtering performance of ET-PF and PF-I. In the simulation, the DSE on G3 is tested under a sudden disconnecting fault between bus 14 and bus 15. The sampling period of simulation is t = 0.02 s, within which there are totally 750 sampling points. The system state variable is set as x [ Ed , Eq , δ, η, Ef d , Rf , η, VR ]T with the dimension of 7. As the PF has better performances than EKF, the variance matrix of the additive zero-mean Gaussian white noise of the system in simulation is selected to be much higher than diag[10−3 , 10−3 , 10−7 , 10−7 , 10−7 , 10−7 , 10−7 ].
106
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
Algorithm 4.2 ET-PF filtering algorithm Input: {Yk } Output: xˆk Initialization: xki ⇐ p0 (x0 ), ω0i = 1/M Loop: while k ≥ 1 do ωˆ k,sum = 0, xˆk = 0 for i = 1 to M do
i μik ⇐ pk (μ), xki = f xk−1 , μik if γk = 1 then
i ωˆ ki = ωk−1 p yk |xki else
νki ⇐ pk (ν), yki = h xki , νki ˜ k then if yki ∈ i ωˆ ki = ωk−1 else ωˆ ki = 0 end if end if ωˆ k,sum = ωˆ k,sum + ωˆ ki end for xˆk = 0 for i = 1 to M do ωki = ωˆ ki /ωˆ k,sum , xˆk = xˆk + ωki xki end for Output: xˆk 0 τ ⇐ (0, 1) uniform distribution,ωsum =0 for i = 1 to M do i i−1 + ωi ωsum = ωsum k i if ωsum > τ then j xki = xk , break, end if end for ωki = 1/M end while
Besides, the variance matrix of the additive zero-mean Gaussian white noise of the observation is also selected to be much higher than diag[10−1 , 10−1 ]. The performances of three filters are tested in this simulation for comparison, i.e., PF, ET-PF, and PF-I. The PF in Sect. 4.2.2 dissatisfies the limited communication capability of the WAMS in smart grid. All the observation data yk must be provided to the filter for DSE application. The PF based on the Monte Carlo method is used here simply to provide the optimal DSE performance of the filtering on IEEE 39 buses with all observations transmitted. Due to constraints on the communication capability of WAMS, the ET-PF and PF-I cannot acquire the observation yk all the time. According to Cramer–Rao lower bounds theory, both performances are certainly lower than PF with full observations. Through the comparison on the filtering performance between ET-PF and PF-I at the same arrival rate, the
4.4 Simulation of ET-PF for DSE in WAMS
107
effectiveness of ET-PF can be verified. The simulation on ET-PF comprises two parts. The first part generates the triggering logic variable γk at the physical node by the innovation based event triggered sampling strategy Algorithm 4.1. The second part executes the ET-PF filtering algorithm at the control center according to Algorithm 4.2. Note that the filtering at the control center can use the observation yk only when γk = 1 from the simulation in first part. As a comparison, the PF-I is also conducted, and its triggering logic variable γk is generated by the independent Bernoulli stochastic process. When γk = 1, the particle and weight iteration is realized according to (4.55) and (4.56). Furthermore, when γk = 0, the particle and weight iteration is realized according to (4.57) together with the resampling. Three groups of simulation results are analyzed as below. The first group of simulation results reflects the filtering on dynamic state at the physical node and the triggering of observation. The second group of simulation results reflects the mean of the filter’s one-step priori estimation error ek|k−1 = xk − xˆk|k−1 2 . Through 100 simulation, the average over these simulation results is used to acquire the expectation of a one-step priori estimation error. The third group of simulation results is used to evaluate the event triggered sampling strategy of ET-PF. The performance of guaranteed arrival rate is tested by the box plot commonly used in statistics. The first group of simulation results provides the filtering results of ET-PF and PF-I on the synchronous generator dynamic state at the physical node under two typical arrival rates, i.e., λ = 10% and λ = 70% and the triggering conditions of observation yk are given in the lowermost sub-figure. See Fig. 4.4 for the simulation results at an arrival rate of λ = 70%. Thanks to the high accuracy of particle filtering, the results of ET-PF and PF-I are a close fit for the system state at a high arrival rate, and then filtering results are accurate. In Fig. 4.3, the filtering results are given at the arrival rate of λ = 10%. At this low arrival rate, the PF-I cannot sustain the reliable observation for the states Ef d , Eq , and η at all the time. However, the ET-PF can still provide highly accurate DSE at the low arrival rate of λ = 10%. Therefore, the filtering accuracy is far higher than that of PF-I. Moreover, γk in Figs. 4.3 and 4.4 reflects that the triggering for PF-I and ET-PF has a similar distribution. The detailed evaluation of the event triggered sampling strategy will be given in the box plot in the third group of simulations. The second group of figures shows the mean of one-step priori estimation error for ET-PF and PF-I in order to compare the filtering accuracy. The filtering accuracy is compared under various arrival rates to reflect the influence of arrival rate on ETPF and PF-I. Figure 4.5 reflects the average of estimation error ek = xk − xˆk 2 of ET-PF and PF-I at each time instance through 100 simulations under the arrival rates of 90%, 70%, 50%, 30%, 10%. The Fig. 4.5a and b show that both ET-PF and PF-I can provide extremely high filtering accuracy close to PF with full observation at the arrival rate λ ≥ 70%. It shows that the non-transmission of a small number of observations will not affect the performance of the Monte Carlo filter. When the arrival rate declines to λ = 50%,
108
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
System PF ET-PF PF-I
0.6
E (p.u.)
0.5
d
0.4 0.3 0.2 0.1 0
0
5
10
15
10
15
10
15
Time (s) (a) 1.5
Eq (p.u.)
1 0.5 0 −0.5
0
5
Time (s) (b) 2.5
δ (rad)
2 1.5 1 0.5
0
5
Time (s) (c) Fig. 4.3 Under the arrival rate λ = 10%, the DSE performance of ET-PF and PF-I and the corresponding triggering conditions, where the black line represents the actual system state, and the green dotted line denotes the PF, and the red dotted dash line stands for the ET-PF, and the blue dash line represents the PF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ET-PF. (f) The triggering condition of PF-I
4.4 Simulation of ET-PF for DSE in WAMS
109
1.25
η (p.u.)
1.2 1.15 1.1 1.05 1
0
5
10
15
10
15
10
15
Time (s) (d)
γk of ET−PF
1 0.8 0.6 0.4 0.2 0
0
5
Time (s) (e)
γk of PF−I
1 0.8 0.6 0.4 0.2 0
0
5
Time (s) (f) Fig. 4.3 (continued)
110
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
System PF ET-PF PF-I
Ed (p.u.)
0.6 0.5 0.4 0.3 0.2 0.1 0
0
5
10
15
10
15
10
15
Time (s) (a) 1.3
Eq (p.u.)
1.2 1.1 1 0.9 0.8 0.7 0
5
Time (s) (b) 1.4
δ (rad)
1.2 1 0.8 0.6 0.4
0
5
Time (s) (c) Fig. 4.4 Under the arrival rate λ = 70%, the DSE performance of ET-PF and PF-I and the corresponding triggering conditions, where the black line represents the actual system state, and the green dotted line denotes the PF, and the red dotted dash line stands for the ET-PF, and the blue dash line represents the PF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ET-PF. (f) The triggering condition of PF-I
4.4 Simulation of ET-PF for DSE in WAMS
111
η (p.u.)
1.1
1.05
1
0.95
0
5
10
15
10
15
10
15
Time (s) (d)
γ of ET−PF k
1 0.8 0.6 0.4 0.2 0
0
5
Time (s) (e)
γ of PF−I k
1 0.8 0.6 0.4 0.2 0
0
5
Time (s) (f) Fig. 4.4 (continued)
112
4 Event Triggered Particle Filter Using Innovation Based Condition with. . .
meaning that half of the observation will be lost, the accuracy of PF-I is degraded to one certain low extent due to the obvious reduction of observation. The ET-PF can still provide the filtering accuracy close to PF with full observation at this arrival rate because the event triggered sampling strategy chooses the important observation to be transmitted and the filtering algorithm fully utilizes the innovational information. Therefore, the observation information used by ET-PF are far more than those of PF-I. When the arrival rate is as low as λ = 10%, the ET-PF can still provide comparably high filtering accuracy. Therefore, it is superior to PF-I in performance. For comparison of filtering performance at different arrival rates, the mean of errors over all simulation time at the same arrival is calculated by 1 xk − xˆk 2 . 750 750
(4.81)
k=1
Figure 4.6 is drawn for the mean of 100 simulations. As can be seen, the filtering accuracy of both ET-PF and PF-I declines as the arrival rate decreases, and the advantage of ET-PF over PF-I grows as the arrival rate declines. When the arrival rate is very low, at λ = 10%, the filtering error of PF-I is very high, whereas ET-PF can still provide very high filtering accuracy.

The ET-PF is designed to reduce the communication burden of the transmission, which places two requirements on the event triggered sampling strategy. On the one hand, the event triggered sampling strategy should guarantee that the triggering rate satisfies the design requirement, i.e., Pr(γ = 1) = λ. On the other hand, the variance of the event triggered sampling strategy needs to be adequately low to avoid a heavy transmission load within a short time due to a concentrated triggering distribution. After 200 simulations at each designed arrival rate, the number of observations transmitted in each ET-PF simulation is denoted $N_{\lambda,T}$ and the number transmitted in each PF-I simulation is denoted $N_{\lambda,I}$. Furthermore, $\tilde{\lambda}_{\lambda,T} = N_{\lambda,T}/750$ is calculated as the practical arrival rate of ET-PF for each simulation, and $\tilde{\lambda}_{\lambda,I} = N_{\lambda,I}/750$ as the practical arrival rate of PF-I. The practical arrival rates are summarized against the designed arrival rates from 90% down to 10%, with 200 values of $\tilde{\lambda}_{\lambda,T}$ and $\tilde{\lambda}_{\lambda,I}$ obtained at each arrival rate. All these practical arrival rates are taken as data points, and the box plot in Fig. 4.7 is used to analyze the statistical properties of the event triggering of ET-PF and PF-I.

The box plot is a statistical diagram that displays the distribution of a set of data; its elements include the box, the median line of the box, the whiskers, and the abnormal values. The median line is the horizontal line inside the box and represents the median of the statistics. The lower and upper edges of the box are the lower and upper quartiles (Q1 and Q3), and the interquartile range is IQR = Q3 − Q1. The whiskers extend beyond the box edges; generally, the upper whisker edge is selected as Q3 + 1.5·IQR and the lower edge is chosen symmetrically as Q1 − 1.5·IQR.
Fig. 4.5 The comparison of estimation error between ET-PF and PF-I under various arrival rates, where the ET-PF is drawn in red, the PF-I is marked in blue, and both are compared with the PF in green. (a) The estimation error of ET-PF and PF-I under λ = 90%. (b) The estimation error of ET-PF and PF-I under λ = 70%. (c) The estimation error of ET-PF and PF-I under λ = 50%. (d) The estimation error of ET-PF and PF-I under λ = 30%. (e) The estimation error of ET-PF and PF-I under λ = 10%
Fig. 4.5 (continued)
The statistics outside the range of the whiskers are called abnormal values and are indicated as crosses. The width of the box and the range of the whiskers reflect the variance of the statistics to a certain extent: the smaller the width and range, the more concentrated the statistics, representing a smaller variance. An abnormal value will result in behavior beyond the design during system operation.

The comparison of Fig. 4.7a and b shows that the statistical characteristics of ET-PF and PF-I are highly close to each other. Note that the event triggered condition of PF-I is an independent identically distributed Bernoulli stochastic process, and its variance can be theoretically calculated as
$$\frac{\lambda(1-\lambda)}{750}. \tag{4.82}$$
Fig. 4.6 The influence of various arrival rates on the estimation performance of ET-PF and PF-I, where the ET-PF is drawn in red, the PF-I is marked in blue, and both are compared with the PF in green
As λ ∈ [0, 1], the variance of PF-I is on the order of $10^{-4}$. Figure 4.7a and b reflect that the box length of ET-PF is no more than twice that of PF-I. Therefore, the variance of ET-PF must also be on the order of $10^{-4}$, which guarantees an even distribution of the triggered observations.
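The Python sketch below illustrates how the practical arrival rates and the theoretical Bernoulli variance of (4.82) could be compared for a PF-I-style i.i.d. triggering sequence; the variable names and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def practical_arrival_rates(gamma_runs):
    """gamma_runs: array of shape (runs, T) of 0/1 triggering decisions.
    Returns the practical arrival rate N/T for every run (cf. lambda_tilde)."""
    return gamma_runs.mean(axis=1)

rng = np.random.default_rng(1)
lam, T, runs = 0.10, 750, 200
# PF-I style triggering: i.i.d. Bernoulli(lambda) decisions.
gamma = rng.random((runs, T)) < lam
rates = practical_arrival_rates(gamma)

print("empirical mean rate:", rates.mean())
print("empirical variance:", rates.var(ddof=1))
print("theoretical variance lam*(1-lam)/T:", lam * (1 - lam) / T)  # Eq. (4.82)
```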
4.5 Conclusions

To satisfy the requirements of stable and reduced data transmission, this chapter designs the ET-PF, which reduces the data transmission while keeping the arrival rate at its designed value. The ET-PF combines the advantages of event triggered filtering with the highly accurate Monte Carlo statistical method, so that the required communication bandwidth can be reduced with guaranteed filtering accuracy. The design of ET-PF includes the event triggered sampling strategy with a guaranteed arrival rate and the particle filtering algorithm that uses the innovational information. For the design of the event triggered strategy, the Monte Carlo statistical method is adopted to describe the probability property using particles. Through the generation and sorting of additional innovation based particles, the triggering threshold is dynamically selected to ensure the designed arrival rate.
Fig. 4.7 The comparison on the performance of the event triggered sampling strategy between ET-PF and PF-I. (a) Stochastical analysis of PF-I. (b) Stochastical analysis of ET-PF
The simulation shows that the event triggered sampling strategy not only has a triggering mean satisfying the designed arrival rate but also has a very small variance, so that congestion due to concentrated triggering within a short period will not occur. By transforming the event triggered information into a system state constraint, the filtering algorithm of ET-PF is designed based on constrained Bayesian state estimation theory, which exploits the event triggered information and thus improves the filtering accuracy. The simulation results show that the ET-PF achieves better accuracy than the PF with intermittent observation at the same arrival rate. Moreover, the advantage of ET-PF over the PF with intermittent observation becomes more remarkable as the arrival rate decreases.
Chapter 5
Event Triggered Heterogeneous Nonlinear Filter Considering Nodal Computation Capability
5.1 Introduction

For the design of a filter for DSE in WAMS, it is necessary to consider the computation capability at each node in addition to the constraint on communication. With the rapid development of computer technology, cloud computing, and parallel computing, the computation capability at the control center in WAMS becomes stronger and stronger, supporting more and more complex algorithms. With the development of state-of-the-art smart sensor technologies, a physical node also has a certain level of computation capability for collecting and processing signals.

The ET-PF filter architecture in Chap. 4 can decrease the transmission delay by reducing the data communication, which addresses the constraint on communication bandwidth. However, the execution of the event triggered sampling strategy of ET-PF at a physical node requires the filtering information from the control center. Transmitting the filter data from the control center to the physical node would increase the communication burden and is infeasible. Therefore, a local filter is designed at the physical node to duplicate the filtering process at the control center and provide the filter data locally. However, the ET-PF filter is realized based on the Monte Carlo method, and its computation requirements cannot be satisfied by the smart sensor at the physical node at the current stage.

Considering the challenges arising from the constraints on the capacity of the transmission network in WAMS and the limited computation capability of a physical node, this chapter develops an event triggered heterogeneous nonlinear Kalman filter (ET-HNF), which utilizes the event triggered sampling strategy to reduce the transmission of observations, designs a slave filter with a lower computation requirement to reduce the computation burden of the triggering strategy at the physical node, and develops a high accuracy master filter at the control center to provide highly accurate filtering results.
The slave filter is fundamental to realizing the event triggered sampling strategy, and its algorithm needs to be executed both at the control center and at the physical node. To adapt to the limited computation capability at the physical node, the slave filter is designed as an event triggered UKF based on the UT transformation. Different from the PF, which uses a large number of particles to approximate the posteriori conditional distribution, the UKF relies only on a small number of specific Sigma points to obtain the first-order origin moment and second-order central moment of the stochastic variables. For the nonlinear filtering problem in which both the system and observation noise are Gaussian, the UKF provides higher approximation accuracy than the EKF without an excessive increase in computation burden.

The master filter is executed only at the control center, which has a high computation capability, and provides the dynamic state estimation results. Therefore, the master filter only needs to consider the filtering accuracy and the ability to track the system state in the presence of non-Gaussian noise and sudden disturbances, without much consideration of the computation burden. Considering the weakness of the slave filter, i.e., the difficulty in overcoming the error and tracking sudden disturbances when it uses a Gaussian distribution to approximate the posteriori probability distribution of a nonlinear system state during the UKF iteration, the master filter is designed based on the Monte Carlo method for high accuracy. Moreover, it is also necessary to consider the cooperation between the master and slave filters while designing the master filter.

The remaining parts of this chapter are organized as follows. Section 5.2 introduces the structure of the heterogeneous event triggered nonlinear filter. Section 5.3 designs the event triggered UKF as the slave filter and designs the event triggered sampling strategy based on the event triggered UKF. Section 5.4 designs the heterogeneous event triggered nonlinear filter, including the design of the master filter based on the Monte Carlo method and the cooperation between the master and the slave filter. Section 5.5 verifies the DSE performance on the IEEE 39-bus system and compares the ET-UKF and the heterogeneous event triggered nonlinear filter under a similar computation burden at the physical node.

Please note that since the heterogeneous filter comprises the ET-UKF and the master filter, to avoid misunderstanding, the Sigma points and weights of the ET-UKF are indicated as $\mathcal{X}^{(i)}$ and $W^{(i)}$, respectively, while the particles and weights of the Monte Carlo based master filter are indicated as $x_k^i$ and $\omega_k^i$, respectively.
Except for the particles and weights, the intermediate filtering variables related to the ET-UKF will be indicated with a hat (e.g., $\hat{x}$), and the variables related to the master filter will be indicated with a tilde (e.g., $\tilde{x}$).
5.2 System Design of Heterogeneous Event Triggered Nonlinear Filter

Consider a general nonlinear system
$$x_k = f(x_{k-1}, \mu_k), \qquad y_k = h(x_k, \nu_k), \tag{5.1}$$
where $x_k \in \mathbb{R}^n$ is the system state with dimension $n$, and $y_k \in \mathbb{R}^p$ is the system observation with dimension $p$. The system noise $\mu_k \in \mathbb{R}^n$ and the observation noise $\nu_k \in \mathbb{R}^p$ may follow any given stochastic distribution.

To relieve the computation burden at the physical node, this section develops the heterogeneous event triggered filter, as shown in Fig. 5.1. The heterogeneous event triggered filter includes two parts, i.e., the design of the event triggered sampling strategy at the physical node and the design of the DSE algorithm for the control center. The physical node, where the PMU is installed, realizes the acquisition of observations and determines whether an observation shall be transmitted to the control center according to the event triggered sampling strategy. The physical node comprises the sensor, the trigger, and the local slave filter, where the sensor samples the system observations. The trigger determines whether the observation shall be triggered, while the local slave filter provides the trigger with the information necessary for the event triggered sampling strategy. In order to transmit only the significant observations, the event triggered sampling strategy in the trigger determines whether the observation at the current moment shall be transmitted according to the innovational information.
Fig. 5.1 System structure of ET-HNF
As the innovational information is determined by the deviation between the observation and the one-step priori estimation of the observation, the trigger needs to acquire, besides the current observation $y_k$ from the sensor, the intermediate filtering variables, including the one-step priori estimation of the observation $\hat{y}_{k|k-1}$ and its variance $\hat{P}_{y,k|k-1}$. The local slave filter provides the same intermediate filtering variables as the center slave filter to the event triggered sampling strategy; otherwise, the intermediate filtering variables necessary for the event triggered sampling strategy would need to be transmitted from the control center to the physical node, causing extra communication. The local slave filter and the center slave filter must be identical in order to provide the same intermediate filtering variables; therefore, both are referred to as the slave filter in this chapter. As the computation capability at the physical node is limited, the slave filter is realized by the UT-transformation based event triggered UKF, which uses a small number of Sigma points for probability fitting to approximate the system state expectation and variance, so as to provide the intermediate filtering variables necessary for the event triggered sampling strategy. Moreover, there is no need to solve the Jacobian matrix for the event triggered UKF. Therefore, it is not only more suitable for a system with strong nonlinearity, but it also has a computation requirement similar to the ET-EKF and far smaller than the ET-PF designed in Chap. 4. As a result, its computing requirement can easily be satisfied by the physical node.

This chapter uses the logic variable $\gamma_k$ to indicate whether the observation $y_k$ is transmitted to the control center at the time instance $k$; it is assumed that the channel is not subject to other interferences, so that a transmitted observation is certainly received by the control center:
$$\gamma_k = \begin{cases} 1, & y_k \ \text{is transmitted},\\ 0, & y_k \ \text{is not transmitted}. \end{cases} \tag{5.2}$$
Similar to Chap. 4, the observations acquired by the control center under the event triggered sampling strategy are indicated as $Y_k$, and the observation information set acquired up to the time instance $k$ is defined as $\mathcal{F}_k$. As the control center cannot acquire the observation $y_k$ when $\gamma_k = 0$, the state estimation at the control center can only use the observations $y_k$ with $\gamma_k = 1$, although the local slave filter at the physical node can acquire the observation all the time.

The control center fulfills the DSE by receiving the observations from the physical node and comprises a trigger detector, a center slave filter, a master filter, and a comparator. The trigger detector judges whether an observation has been received. The center slave filter provides the master filter with the intermediate filtering variables when no observation is received, including the one-step priori estimation $\hat{y}_{k|k-1}$ and its variance $\hat{P}_{y,k|k-1}$. The master filter provides the DSE results based on the Monte Carlo method; it utilizes the features of particle filtering to deliver high accuracy and to perform DSE upon the occurrence of sudden disturbances. When no observation is received, the master filter can determine the range of the observation $y_k$ through the intermediate filtering variables received from the slave filter and the event triggered sampling strategy.
The range of the observation $y_k$ is also called the event triggered information. The master filter uses the event triggered information for filtering, which further improves the filtering accuracy while reducing the communication burden. The comparator coordinates the operation of the master and slave filters. The event triggered information inferred from the intermediate variables of the slave filter can substantially improve the filtering accuracy of the master filter only when the filtering results provided by the master and the slave filters are close to each other. Therefore, when the filtering result of the slave filter deviates badly from that of the master filter and a stability issue occurs, the comparator utilizes the master filter to correct the slave filter and thereby improves the performance of the heterogeneous event triggered filter.
5.3 Design of Slave Filter Based on Event Triggered UKF

5.3.1 UT Transformation

It is hard to acquire the optimal solution to a nonlinear filtering problem; therefore, current nonlinear filtering methods mainly rely on approximation. The basic concept of a probability-fitting type of filter is that approximating the probability distribution of a stochastic variable is better than approximating a nonlinear function. Different from the PF, which uses a large number of particles to approximate the posteriori conditional distribution, the UT transformation relies on a small number of Sigma points to approximate the critical indexes of a stochastic variable, e.g., the first-order origin moment and the second-order central moment, and then uses the Gaussian distribution with the same critical indexes to approximate the probability distribution of the stochastic variable. As the Gaussian distribution is the most common form of noise in nature, it is appropriate to use the UT transformation to approximate the probability distribution after a nonlinear transformation.

The UT transformation aims at approximating the distribution of the stochastic variable $y$ after the transformation, when the stochastic variable $x$ is Gaussian, $x \sim N(\bar{x}, P)$:
$$y = f(x) + \mu, \tag{5.3}$$
where $\mu$ also satisfies a Gaussian distribution, $\mu \sim N(0, Q)$, and $x$ and $\mu$ are independent of each other.

Lemma 5.1 If the variable satisfies
$$z = \int f(x)\, N(x; \bar{x}, P)\, dx, \tag{5.4}$$
when the stochastic variable x has n dimensions, UT transformation defines 2n + 1 Sigma points and their weights as follows:
$$\begin{aligned}
\mathcal{X}^{(0)} &= \bar{x}, & W^{(0)} &= \kappa/(n+\kappa),\\
\mathcal{X}^{(i)} &= \bar{x} + \sqrt{(n+\kappa)P}\,\xi_i, & W^{(i)} &= 0.5/(n+\kappa),\\
\mathcal{X}^{(i+n)} &= \bar{x} - \sqrt{(n+\kappa)P}\,\xi_i, & W^{(i+n)} &= 0.5/(n+\kappa),
\end{aligned} \tag{5.5}$$
where $\kappa$ is a constant selected by the designer (normally $n + \kappa = 3$) and $\xi_i$ is the unit vector
$$\xi_i = [\underbrace{0, 0, \ldots, 0}_{i-1}, 1, \ldots, 0]^T_{\,n \times 1}. \tag{5.6}$$
Therefore, the first-order origin moment and second-order central moment of $z$ can be approximated as
$$E(z) = \sum_{i=0}^{2n} W^{(i)} f\!\left(\mathcal{X}^{(i)}\right), \qquad
\operatorname{Cov}(z) = \sum_{i=0}^{2n} W^{(i)} \left[f\!\left(\mathcal{X}^{(i)}\right) - E(z)\right]\left[f\!\left(\mathcal{X}^{(i)}\right) - E(z)\right]^T. \tag{5.7}$$
For the sake of convenience, the generation of the Sigma points and their weights from the distribution $x \sim N(\bar{x}, P)$ is indicated as
$$\left\{\mathcal{X}^{(i)}, W^{(i)}\right\} \Leftarrow N(\bar{x}, P). \tag{5.8}$$
According to Lemma 5.1 and the definition of $\mu$ as zero-mean and independent of $x$, it is concluded that
$$E(y) = E(z) = \sum_{i=0}^{2n} W^{(i)} f\!\left(\mathcal{X}^{(i)}\right), \qquad
\operatorname{Cov}(y) = \operatorname{Cov}(z) + Q = \sum_{i=0}^{2n} W^{(i)} \left[f\!\left(\mathcal{X}^{(i)}\right) - E(z)\right]\left[f\!\left(\mathcal{X}^{(i)}\right) - E(z)\right]^T + Q. \tag{5.9}$$
It is assumed that $y$ obeys a Gaussian distribution. As a Gaussian distribution can be completely described by its first-order origin moment and second-order central moment alone, the distribution of the stochastic variable $y$ can be acquired via (5.9). For a more general system,
$$y = f(x, \mu). \tag{5.10}$$
where $x$ and $\mu$ have the same definitions. The UT transformation can still be used for (5.10); it is only necessary to define the extended stochastic variable
$$X = [x, \mu]. \tag{5.11}$$
Obviously, the stochastic variable $X$ obeys a Gaussian distribution, and its mean and variance matrix satisfy
$$E(X) = [\bar{x}, 0], \qquad \operatorname{Cov}(X) = \begin{bmatrix} P & 0 \\ 0 & Q \end{bmatrix}. \tag{5.12}$$
The system (5.10) is converted into acquiring the stochastic variable y = f (X), which fits into the form of (5.7). For the sake of convenience, only the stochastic variable in the form of (5.3) is investigated, and it is equivalent to study the stochastic variable in the general form of (5.10).
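A minimal Python sketch of Lemma 5.1 is given below: it generates the 2n+1 Sigma points of (5.5) and approximates the moments in (5.7) and (5.9). The choice n + κ = 3 follows the text; taking the Cholesky factor as the matrix square root, as well as the function and variable names, are assumptions made for illustration.

```python
import numpy as np

def sigma_points(x_bar, P, kappa=None):
    """Generate the 2n+1 UT Sigma points and weights of (5.5)."""
    n = x_bar.size
    if kappa is None:
        kappa = 3 - n                        # so that n + kappa = 3
    S = np.linalg.cholesky((n + kappa) * P)  # one choice of matrix square root
    X, W = [x_bar], [kappa / (n + kappa)]
    for i in range(n):
        X.append(x_bar + S[:, i]); W.append(0.5 / (n + kappa))
    for i in range(n):
        X.append(x_bar - S[:, i]); W.append(0.5 / (n + kappa))
    return np.array(X), np.array(W)

def ut_moments(f, x_bar, P, Q):
    """Approximate E(y) and Cov(y) for y = f(x) + mu, as in (5.7) and (5.9)."""
    X, W = sigma_points(x_bar, P)
    Y = np.array([f(x) for x in X])
    mean = W @ Y
    diff = Y - mean
    cov = (W[:, None, None] * np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0) + Q
    return mean, cov

# Illustrative usage on a scalar nonlinearity.
m, c = ut_moments(lambda x: np.sin(x), np.array([0.3]), np.eye(1) * 0.04, np.eye(1) * 1e-3)
print(m, c)
```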
5.3.2 Event Triggered Sampling Strategy

For the sake of convenience, the event triggered UKF is abbreviated as ET-UKF in the sequel. Based on the analysis of Sect. 5.3.1, it is only necessary to develop the filtering of a nonlinear system with additive noise,
$$x_{k+1} = f(x_k) + \mu_k, \qquad y_k = h(x_k) + \nu_k, \tag{5.13}$$
where $x_k \in \mathbb{R}^n$ is the system state with dimension $n$, and $y_k \in \mathbb{R}^p$ is the system observation with dimension $p$. The system noise $\mu_k \in \mathbb{R}^n$ is a Gaussian white noise with mean $0$ and variance matrix $Q > 0$. Meanwhile, the observation noise $\nu_k \in \mathbb{R}^p$ is also a Gaussian white noise with mean $0$ and variance matrix $R > 0$. The initial system state is observed as $x_0$, which carries a zero-mean Gaussian noise with variance matrix $R_0$. The system noise $\mu_k$, the observation noise $\nu_k$, and the initial noise on $x_0$ are all independent of each other.

The ET-UKF is designed based on the UT transformation and includes the event triggered sampling strategy and the filtering algorithm, where the event triggered sampling strategy is to design the non-triggered set $\hat{\Omega}_k$ at the time instance $k$ based on the filtering result (5.14) at the time instance $k-1$:
$$\hat{x}_{k-1|k-1} = E(x_{k-1} \mid \mathcal{F}_{k-1}), \qquad \hat{P}_{k-1|k-1} = \operatorname{Cov}(x_{k-1} \mid \mathcal{F}_{k-1}). \tag{5.14}$$
The estimates $\hat{x}_{k-1|k-1}$ and $\hat{P}_{k-1|k-1}$ of the ET-UKF are acquired through the UT transformation. Therefore, as in the UKF algorithm, the posteriori conditional probability can be approximated as Gaussian,
$$x_{k-1} \sim N\!\left(\hat{x}_{k-1|k-1}, \hat{P}_{k-1|k-1}\right). \tag{5.15}$$
For the system (5.13), the iteration is
$$y_k = h\!\left(f(x_{k-1}) + \mu_k\right) + \nu_k. \tag{5.16}$$
Therefore, define the augmented state vector
$$X_k \triangleq [X_{k,n1}, X_{k,n2}]^T \triangleq [x_{k-1}, \mu_k]^T. \tag{5.17}$$
The iteration (5.16) can then be transformed as
$$y_k = h(X_k) + \nu_k = h\!\left(f(X_{k,n1}) + X_{k,n2}\right) + \nu_k. \tag{5.18}$$
According to the definition of Xk and the nature of xk−1 and μk , it is easily known that Xk satisfies
$$X_k \sim N\!\left([\hat{x}_{k-1|k-1}, 0]^T,\; \begin{bmatrix} \hat{P}_{k-1|k-1} & 0 \\ 0 & Q \end{bmatrix}\right). \tag{5.19}$$
According to the distribution (5.19) of Xk , the corresponding Sigma points can be generated by
$$\left\{\mathcal{X}_k^{(i)}, W_k^{(i)}\right\} \Leftarrow N\!\left([\hat{x}_{k-1|k-1}, 0]^T,\; \begin{bmatrix} \hat{P}_{k-1|k-1} & 0 \\ 0 & Q \end{bmatrix}\right). \tag{5.20}$$
Corresponding to the definition of $X_k$, each Sigma point can be divided as
$$\mathcal{X}_k^{(i)} = \left[\mathcal{X}_{k,n1}^{(i)}, \mathcal{X}_{k,n2}^{(i)}\right]^T. \tag{5.21}$$
Based on Lemma 5.1 of the UT transformation, it is possible to get
$$\hat{y}_{k|k-1} = \sum_{i=0}^{2n} W_{k-1|k-1}^{(i)}\, h\!\left(f\!\left(\mathcal{X}_{k,n1}^{(i)}\right) + \mathcal{X}_{k,n2}^{(i)}\right), \tag{5.22}$$
$$\hat{P}_{y,k|k-1} = \sum_{i=0}^{2n} W_{k-1|k-1}^{(i)} \left[h\!\left(f\!\left(\mathcal{X}_{k,n1}^{(i)}\right) + \mathcal{X}_{k,n2}^{(i)}\right) - \hat{y}_{k|k-1}\right]\left[h\!\left(f\!\left(\mathcal{X}_{k,n1}^{(i)}\right) + \mathcal{X}_{k,n2}^{(i)}\right) - \hat{y}_{k|k-1}\right]^T + R, \tag{5.23}$$
where $\hat{y}_{k|k-1}$ is the first-order origin moment of $y_k$ over the observation set $\mathcal{F}_{k-1}$, i.e., $\hat{y}_{k|k-1} = E(y_k \mid \mathcal{F}_{k-1})$, and $\hat{P}_{y,k|k-1}$ is the corresponding second-order central moment, $\hat{P}_{y,k|k-1} = \operatorname{Cov}(y_k - \hat{y}_{k|k-1} \mid \mathcal{F}_{k-1})$.

Based on the innovation based event triggered sampling strategy, the non-triggered set $\hat{\Omega}_k$ can be designed as
$$\hat{\Omega}_k = \left\{ y_k : \left\| \hat{F}_{y_k}^T \left( y_k - \hat{y}_{k|k-1} \right) \right\|_\infty \le \psi \right\}, \tag{5.24}$$
γk =
1, 0,
ˆk yk ∈ / , ˆk yk ∈
(5.25)
where γk = 1 represents the observation yk at the time instance k. Correspondingly, γk = 0 represents untriggered observation at the time instance k. The innovational information k = FkT (yk − yˆk|k−1 ) is close to the standard Gaussian function distribution. Based on this, the arrival rate of ET-UKF can be calculated as ˆ k |Fk−1 ) Pr(γk = 1|Fk−1 ) = 1 − Pr(yk ∈ = 1 − [1 − 2D(ψ)]p .
(5.26)
5.3.3 Design of Event Triggered UKF Under the event triggered sampling strategy of Sect. 5.3.2, the physical node determines whether the observation shall be triggered according to the value of ˆ k , yk will be sent to the control observation yk at the time instance k. If yk ∈ / ˆ k , the observation will not be transmitted, and the control center center. If yk ∈ ˆ k . When it is known that the only knows the event triggered information of yk ∈ filtering results xˆk−1 |k − 1 and Pˆk−1 |k − 1 at the time instance k − 1 satisfy (5.14), the filtering result xˆk|k and Pˆk|k at the time instance k can be designed through iteration. The ET-UKF’s iteration process includes two processes, i.e., one-step prediction and filtering update by observations. The one-step prediction is expected to get the estimation xˆk|k−1 and variance Pˆk|k−1 at the time instance k under the observation Fk−1 at the time instance k − 1.
128
5 Event Triggered Heterogeneous Nonlinear Filter Considering Nodal. . .
xˆk|k−1 = E(xk |Fk−1 ) = f (xk−1 )N (xˆk−1|k−1 , Pˆk−1|k−1 )dxk−1 Pˆk|k−1 = Cov(xk − xˆk|k−1 |Fk−1 ) = (f (xk−1 ) − xˆk|k−1 )(f (xk−1 ) − xˆk|k−1 )T N (xk−1 |xˆk−1 , Pk−1 )dxk−1 + Qk−1 .
(5.27)
Firstly, the Gaussian property of (5.15) is used to generate the Sigma points,
(i) (i) Xk−1 N (xˆk−1|k−1 , Pˆk−1|k−1 ). , Wk−1
(5.28)
The estimation and variance in (5.27) can be approximated by UT transformation (i) (i) Lemma 5.1 and Sigma points {Xk−1 , Wk−1 }. xˆk|k−1 =
2n
(i)
(i) Wk−1 f Xk−1
(5.29)
i=0
Pˆk|k−1 =
2n
T (i) (i)
(i)
Wk−1 f Xk|k−1 −xˆk|k−1 f Xk|k−1 −xˆk|k−1 +Q. (5.30)
i=0
The filtering update is determined by whether the observation is received. Upon the receipt of the observation $y_k$, it is equivalent to a conventional UKF, and the posteriori estimate $\hat{x}_{k|k}$ and variance $\hat{P}_{k|k}$ at the time instance $k$ can be derived as
$$\hat{x}_{k|k} = E(x_k \mid y_k, \mathcal{F}_{k-1}), \qquad \hat{P}_{k|k} = \operatorname{Cov}(x_k - \hat{x}_{k|k} \mid y_k, \mathcal{F}_{k-1}). \tag{5.31}$$
For this purpose, the innovation $y_k - \hat{y}_{k|k-1}$ is first obtained from the observation $y_k$. After the priori is acquired via the one-step prediction (5.27), (5.29), and (5.30), the priori Sigma points can be generated,
$$\left\{\mathcal{X}_{k|k-1}^{(i)}, W_{k|k-1}^{(i)}\right\} \Leftarrow N\!\left(\hat{x}_{k|k-1}, \hat{P}_{k|k-1}\right). \tag{5.32}$$
The corresponding observation Sigma points for $\hat{y}_{k|k-1}$ can be calculated as
$$\mathcal{Y}_k^{(i)} = h\!\left(\mathcal{X}_{k|k-1}^{(i)}\right), \tag{5.33}$$
and the required moments can then be approximated by using the UT transformation:
$$\begin{aligned}
\hat{y}_{k|k-1} &= \int h(x_k)\, N\!\left(x_k; \hat{x}_{k|k-1}, \hat{P}_{k|k-1}\right) dx_k = \sum_{i=1}^{2n} W_{k|k-1}^{(i)} \mathcal{Y}_k^{(i)},\\
\hat{P}_{y_k} &= \int \left(h(x_k) - \hat{y}_{k|k-1}\right)\left(h(x_k) - \hat{y}_{k|k-1}\right)^T N\!\left(x_k; \hat{x}_{k|k-1}, \hat{P}_{k|k-1}\right) dx_k + R_k\\
&= \sum_{i=1}^{2n} W_{k|k-1}^{(i)} \left(\hat{y}_{k|k-1} - \mathcal{Y}_k^{(i)}\right)\left(\hat{y}_{k|k-1} - \mathcal{Y}_k^{(i)}\right)^T,\\
\hat{P}_{x_k y_k} &= \int \left(x_k - \hat{x}_{k|k-1}\right)\left(h(x_k) - \hat{y}_{k|k-1}\right)^T N\!\left(x_k; \hat{x}_{k|k-1}, \hat{P}_{k|k-1}\right) dx_k\\
&= \sum_{i=1}^{2n} W_{k|k-1}^{(i)} \left(\mathcal{X}_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)\left(\mathcal{Y}_k^{(i)} - \hat{y}_{k|k-1}\right)^T.
\end{aligned} \tag{5.34}$$
Based on this, the filtering update can be realized by using the innovation according to the orthogonal projection principle of filtering:
$$\begin{aligned}
K_k &= \hat{P}_{x_k y_k} \hat{P}_{y_k}^{-1},\\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\left[y_k - \hat{y}_{k|k-1}\right],\\
\hat{P}_{k|k} &= \hat{P}_{k|k-1} - K_k \hat{P}_{y_k} K_k^T.
\end{aligned} \tag{5.35}$$
When no observation $y_k$ is received, it is necessary to fully utilize the event triggered information $y_k \in \hat{\Omega}_k$ to get the posteriori estimate $\hat{x}_{k|k}$ and variance $\hat{P}_{k|k}$ at the time instance $k$:
$$\hat{x}_{k|k} = E(x_k \mid y_k \in \hat{\Omega}_k, \mathcal{F}_{k-1}), \qquad \hat{P}_{k|k} = \operatorname{Cov}(x_k - \hat{x}_{k|k} \mid y_k \in \hat{\Omega}_k, \mathcal{F}_{k-1}). \tag{5.36}$$
The event triggered sampling strategy designed in Sect. 5.3.2 guarantees that the innovation $e_{y,k} \triangleq y_k - \hat{y}_{k|k-1}$ is symmetric, i.e., for all $a$,
$$p(y_k - \hat{y}_{k|k-1} = a \mid \mathcal{F}_{k-1}) = p(y_k - \hat{y}_{k|k-1} = -a \mid \mathcal{F}_{k-1}). \tag{5.37}$$
Therefore, it can be obtained that
$$\begin{aligned}
\hat{x}_{k|k} &= \int x_k\, p\!\left(x_k \mid y_k \in \hat{\Omega}_k, \mathcal{F}_{k-1}\right) dx_k\\
&= \int \int_{y_k \in \hat{\Omega}_k} x_k\, p(x_k \mid y_k, \mathcal{F}_{k-1})\, p(y_k \mid \mathcal{F}_{k-1})\, dy_k\, dx_k\\
&= \int_{y_k \in \hat{\Omega}_k} p(y_k \mid \mathcal{F}_{k-1}) \int x_k\, p(x_k \mid y_k, \mathcal{F}_{k-1})\, dx_k\, dy_k\\
&= \int_{y_k \in \hat{\Omega}_k} p(y_k \mid \mathcal{F}_{k-1}) \left[\hat{x}_{k|k-1} + K_k\left(y_k - \hat{y}_{k|k-1}\right)\right] dy_k\\
&= \hat{x}_{k|k-1}.
\end{aligned} \tag{5.38}$$
Therefore, the posteriori filtering variance is
$$\hat{P}_{k|k} = \hat{P}_{k|k-1} - \beta(\psi) K_k \hat{P}_{y_k} K_k^T, \tag{5.39}$$
where $K_k$ and $\hat{P}_{y_k}$ have the same definitions as in (5.34) and (5.35), and $\beta(\psi)$ is
$$\beta(\psi) = \frac{2}{\sqrt{2\pi}}\, \psi\, e^{-\frac{\psi^2}{2}} \left[1 - 2D(\psi)\right]^{-1}. \tag{5.40}$$
By combining (5.35), (5.38), and (5.39), the logic variable $\gamma_k$ is utilized to unify the filtering update of the ET-UKF as
$$\begin{aligned}
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + \gamma_k K_k \left[y_k - \hat{y}_{k|k-1}\right],\\
\hat{P}_{k|k} &= \hat{P}_{k|k-1} - \left[\beta(\psi) + \gamma_k\left(1 - \beta(\psi)\right)\right] K_k \hat{P}_{y_k} K_k^T.
\end{aligned} \tag{5.41}$$
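A sketch of the unified ET-UKF measurement update (5.41), with β(ψ) from (5.40), is given below. Reading D(·) as the standard Gaussian upper-tail probability is our assumption about the notation introduced earlier in the book, and the function names are likewise illustrative.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def D(psi):
    """Standard Gaussian upper-tail probability; assumed meaning of D(.)."""
    return 0.5 * (1.0 - erf(psi / sqrt(2.0)))

def beta(psi):
    """Variance-reduction factor of (5.40) for a non-triggered observation."""
    return (2.0 / sqrt(2.0 * pi)) * psi * exp(-psi**2 / 2.0) / (1.0 - 2.0 * D(psi))

def et_ukf_update(x_pred, P_pred, y_pred, P_y, P_xy, psi, gamma, y=None):
    """Unified ET-UKF measurement update of (5.41)."""
    K = P_xy @ np.linalg.inv(P_y)
    if gamma == 1:                      # observation received
        x_post = x_pred + K @ (y - y_pred)
        P_post = P_pred - K @ P_y @ K.T
    else:                               # only the event-triggered information
        x_post = x_pred
        P_post = P_pred - beta(psi) * K @ P_y @ K.T
    return x_post, P_post

# Illustrative usage with a 1-D state and observation, no observation received.
x, P = et_ukf_update(np.array([1.0]), np.eye(1) * 0.2, np.array([0.9]),
                     np.eye(1) * 0.3, np.eye(1) * 0.1, psi=1.0, gamma=0)
print(x, P)
```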
5.4 Master Filter and Its Cooperation with the Slave Filter

5.4.1 Monte Carlo Based Master Filter Design

The master filter is realized using the SIR of Sect. 4.2 based on the Monte Carlo method. The iteration algorithm is designed to obtain the particles and weights $\{x_k^i, \omega_k^i\}$ that approximate the posteriori conditional probability $p(x_k \mid \mathcal{F}_k)$ at the time instance $k$, according to $\gamma_k$ indicating whether the observation is received, based on the known particles and weights $\{x_{k-1}^i, \omega_{k-1}^i\}$ approximating the posteriori conditional probability at the time instance $k-1$. When $\gamma_k = 1$, the observation $y_k$ is received by the control center for filtering, and the particle and weight iteration can be realized by the SIR of Sect. 4.2.2:
$$\begin{aligned}
\mu_k^i &\Leftarrow p_k(\mu),\\
x_k^i &= f\!\left(x_{k-1}^i\right) + \mu_k^i,\\
\tilde{\omega}_k^i &= \omega_{k-1}^i\, p\!\left(y_k \mid x_k^i\right),\\
\omega_k^i &= \tilde{\omega}_k^i \Big/ \sum_{i=1}^{M} \tilde{\omega}_k^i.
\end{aligned} \tag{5.42}$$
The corresponding filtering result and variance are calculated through $\{x_k^i, \omega_k^i\}$:
$$\begin{aligned}
\tilde{x}_k &= \int x_k\, p(x_k \mid \mathcal{F}_{k-1}, y_k)\, dx_k = \sum_{i=1}^{M} \omega_k^i x_k^i,\\
\tilde{P}_k &= \int (x_k - \hat{x})(x_k - \hat{x})^T p(x_k \mid \mathcal{F}_{k-1}, y_k)\, dx_k = \sum_{i=1}^{M} \omega_k^i \left(x_k^i - \hat{x}\right)\left(x_k^i - \hat{x}\right)^T.
\end{aligned} \tag{5.43}$$
When $\gamma_k = 0$, the observation $y_k$ is not received by the control center. The master filter acquires the event triggered information $y_k \in \hat{\Omega}_k$ from the center slave filter, where the non-triggered set $\hat{\Omega}_k$ is given by (5.24). The particles and weights $\{x_k^i, \omega_k^i\}$ generated by the iteration shall approximate the posteriori conditional probability
$$\sum_{i=1}^{M} \omega_k^i\, \delta\!\left(x_k - x_k^i\right) \approx p\!\left(x_k \mid \mathcal{F}_{k-1}, y_k \in \hat{\Omega}_k\right) = p\!\left(x_k \mid \mathcal{F}_{k-1}, \left\|\hat{F}_{y_k}^T\!\left(y_k - \hat{y}_k\right)\right\|_\infty \le \psi\right), \tag{5.44}$$
where $\hat{y}_k$ and $\hat{F}_{y_k}^T$ are acquired from the slave filter. By definition,
$$y_k = h\!\left(f(x_{k-1}, \mu_k), \nu_k\right) \triangleq \varphi_2^*(x_{k-1}, \mu_k, \nu_k). \tag{5.45}$$
The event triggered information $y_k \in \hat{\Omega}_k$ can be transformed into the system state constraint
$$\varphi_2^*(x_{k-1}, \mu_k, \nu_k) = y_k \in \hat{\Omega}_k. \tag{5.46}$$
By (5.46), the particles and weights approximating the posteriori conditional distribution with the triggered information, $p(x_k \mid \mathcal{F}_{k-1}, y_k \in \hat{\Omega}_k)$, can be transformed
into the priori conditional distribution $p_2(x_k \mid \mathcal{F}_{k-1})$ of the "constraint system" that satisfies the state and observation equations and the state constraint. According to Lemma 4.5 of the constrained Bayesian state estimation theory, the particles $\{x_k^i\}$ approximating the priori conditional distribution of the "constraint system" are generated via
$$\mu_k^i \Leftarrow p_k(\mu), \qquad x_k^i = f\!\left(x_{k-1}^i\right) + \mu_k^i. \tag{5.47}$$
Based on this, the additional particles are generated as
$$\nu_k^i \Leftarrow p_k(\nu), \qquad y_k^i = h\!\left(x_k^i\right) + \nu_k^i. \tag{5.48}$$
The particle weights are updated as follows, depending on whether the additional particle satisfies the constraint condition (5.46):
$$\tilde{\omega}_k^i = \begin{cases} \omega_{k-1}^i, & y_k^i \in \hat{\Omega}_k,\\ 0, & y_k^i \notin \hat{\Omega}_k. \end{cases} \tag{5.49}$$
The heterogeneous filter cannot guarantee that an adequate number of additional particles satisfy the constraint $y_k^i \in \hat{\Omega}_k$. Therefore, it may occur that most particles have their weights cleared, i.e., $\tilde{\omega}_k^i = 0$, leading to the "constraint particle impoverishment" effect and reducing the filtering accuracy. It is therefore necessary to introduce a particle activity judgment process, and the particle activity parameter $M_\varepsilon$ is defined as
$$M_\varepsilon \triangleq \sum_{i=1}^{M} \mathbf{1}\!\left(y_k^i \in \hat{\Omega}_k\right), \tag{5.50}$$
where $\mathbf{1}(\cdot)$ is the indicator function: when the condition inside the brackets holds, $\mathbf{1}(\cdot) = 1$; otherwise, $\mathbf{1}(\cdot) = 0$. When the activity parameter does not exceed the designed threshold, i.e., $M_\varepsilon \le M_G$, all the particles $x_k^i$ whose additional particles violate the constraint $y_k^i \notin \hat{\Omega}_k$ are abandoned, and (5.47) is re-executed to regenerate them from the particles $x_{k-1}^i$ of the previous time instance. Equation (5.48) is then executed to generate new additional particles, and the particle weights are updated according to (5.49). The activity parameter $M_\varepsilon$ is recalculated until the designed threshold is exceeded, i.e., $M_\varepsilon \ge M_G$. In this case, the particles whose additional particles still do not satisfy the constraint are directly abandoned, and the corresponding weights are set as $\tilde{\omega}_k^i = 0$.
Through the normalization,
$$\omega_k^i = \tilde{\omega}_k^i \Big/ \sum_{i=1}^{M} \tilde{\omega}_k^i. \tag{5.51}$$
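The constrained particle update of (5.47)-(5.51), including the activity check of (5.50) and the regeneration of particles that violate the constraint, could be sketched in Python as follows; the function signature, the fallback to the prior weights, and the bound max_tries are assumptions made for illustration.

```python
import numpy as np

def constrained_particle_update(x_prev, w_prev, f, h, sample_mu, sample_nu,
                                in_omega, M_G, max_tries=5, rng=None):
    """Constrained particle/weight update of (5.47)-(5.51) when gamma_k = 0.

    in_omega(y) should return True when the additional particle y lies in the
    non-triggered set of (5.24). Particles whose additional particle violates
    the constraint are regenerated until the activity count exceeds M_G or
    max_tries is reached; remaining violators have their weights cleared.
    """
    rng = rng or np.random.default_rng()
    x = np.array([f(xp) + sample_mu(rng) for xp in x_prev])          # (5.47)
    ok = np.array([in_omega(h(xi) + sample_nu(rng)) for xi in x])    # (5.48)
    for _ in range(max_tries):
        if ok.sum() >= M_G:                                          # activity check (5.50)
            break
        for i in np.flatnonzero(~ok):
            x[i] = f(x_prev[i]) + sample_mu(rng)                     # regenerate particle
            ok[i] = in_omega(h(x[i]) + sample_nu(rng))
    w = np.where(ok, w_prev, 0.0)                                    # (5.49)
    if w.sum() == 0.0:                                               # degrade to prior weights
        w = np.asarray(w_prev, dtype=float)
    return x, w / w.sum()                                            # normalization (5.51)
```

Bounding the number of regeneration attempts mirrors the repeated execution of (5.47)-(5.49) described above while keeping the update compatible with real-time operation.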
The particles and weights satisfy the approximation of the posteriori probability in (5.44). Therefore, the filtering value at the time instance $k$ and the corresponding filtering variance are
$$\begin{aligned}
\tilde{x}_k &= \int x_k\, p\!\left(x_k \mid \mathcal{F}_{k-1}, y_k \in \hat{\Omega}_k\right) dx_k = \sum_{i=1}^{M} \omega_k^i x_k^i,\\
\tilde{P}_k &= \int (x_k - \hat{x})(x_k - \hat{x})^T p\!\left(x_k \mid \mathcal{F}_{k-1}, y_k \in \hat{\Omega}_k\right) dx_k = \sum_{i=1}^{M} \omega_k^i \left(x_k^i - \hat{x}\right)\left(x_k^i - \hat{x}\right)^T.
\end{aligned} \tag{5.52}$$
Remark 5.1 Although both aim at avoiding "particle impoverishment," the regeneration of the particles that violate the constraints in constrained Bayesian state estimation is different from the resampling in SIR. The cause of the "constraint particle impoverishment" effect is that only a very small number of particles satisfy the constraints; therefore, (5.50) is used to define the particle activity in order to acquire an adequate number of particles for the filtering. The cause of "particle impoverishment" is that the increasing weight of a small number of particles during the SIS weight iteration drives most particle weights toward zero; therefore, (4.18) is used to evaluate whether the weights are even, so that the weights can be rebalanced by resampling and duplicating the heavy-weight particles when some particles carry excessive weight. The "particle impoverishment" effect can always occur during the iteration, no matter whether the particles and weights $\{x_k^i, \omega_k^i\}$ are generated by (5.42) upon receipt of the observation $y_k$ or by (5.47) and (5.49) upon non-receipt of the observation. For that reason, the resampling method is introduced in the ET-HNF, as in Sect. 4.2.2, after the generation of $\{x_k^i, \omega_k^i\}$. The number of valid samples is defined as $M_{\mathrm{eff},k} = 1 \big/ \sum_{i=1}^{M} (\omega_k^i)^2$, with a threshold ratio in $[0, 1]$. When $M_{\mathrm{eff},k}$ falls below this fraction of $M$ at the time instance $k$, the resampling $x_k^{i*} \Leftarrow \sum_{i=1}^{M} \omega_k^i \delta(x_k - x_k^i)$ is executed and the weights are set as $\omega_k^{i*} = 1/M$. The particles and weights $\{x_k^{i*}, \omega_k^{i*}\}$ generated during resampling are taken as the starting point of the particle update at the next time instance.
5.4.2 Cooperation Between Master and Slave Filters at Control Center

Cooperation must be realized between the master and slave filters of the heterogeneous event triggered nonlinear filter in order to establish stable and highly accurate filtering. The master filter in Sect. 5.4.1 utilizes the information provided by the slave filter to recover data when no observation is received, so as to improve the filtering accuracy. Besides, the master filter shall also use its highly accurate filtering information to correct the slave filter.

The non-triggered set $\hat{\Omega}_k$ is the bridge between the master filter and the slave filter: $\hat{\Omega}_k$ is generated by the ET-UKF in the slave filter and utilized by the particle based filtering algorithm in the master filter. Under normal circumstances, each filter operates independently. As completely different filtering algorithms are adopted, their estimates of the state distribution at the same moment may be highly close to, or deviate far from, each other. As reflected in Fig. 5.2, because the Gaussian assumption is made for the ET-UKF, the red Gaussian curve represents the estimated distribution of the ET-UKF for the state of any dimension. The dynamic state distribution of the physical node under system (5.13) is normally a unimodal function; therefore, the blue unimodal curve represents the estimated distribution approximated by the particles for the state of any dimension. The set $\hat{\Omega}_k$ of the event triggered sampling strategy designed by (5.24) is the middle part of the ET-UKF's Gaussian estimate, symmetric about its mean, which corresponds to the horizontal extent of the black rectangle. The integral of the master filter's estimated distribution over this horizontal interval is the green area; it reflects the probability with which the additional particles satisfy $y_k^i \in \hat{\Omega}_k$ and represents the amount of information provided by the event triggered information $\hat{\Omega}_k$ of the slave filter to the master filter.

As Fig. 5.2a shows, when the filtering value $\tilde{x}_k$ of the master filter is close to the filtering value of the slave filter, the distribution estimates of the master and slave filters largely overlap, and thus the event triggered information $\hat{\Omega}_k$ provides rich data to the master filter. On the contrary, in Fig. 5.2b, the filtering value $\tilde{x}_k$ of the master filter is highly deviated from that of the slave filter, so their distribution estimates have a limited overlapping area and the green area reflecting the event triggered information is very small. When the difference between the master and slave filters further increases, the green area approaches zero, the "constraint particle impoverishment" effect intensifies, and the additional particles can hardly satisfy the constraint $y_k^i \in \hat{\Omega}_k$. When the abandon-and-regenerate processes (5.47), (5.48), and (5.49) are continuously repeated but valid particles still cannot be acquired, the last resort is to degrade the filter into a PF-I to satisfy the real-time requirement, in which case the particles and weights are propagated as in (5.53).
Fig. 5.2 Relationship between distribution of TIU-PF and ET-UKF. (a) Distribution of TIU-PF and ET-UKF is close to each other. (b) Distribution of TIU-PF and ET-UKF is separate from each other
$$x_k^i = f\!\left(x_{k-1}^i, \mu_k^i\right), \qquad \omega_k^i = \omega_{k-1}^i. \tag{5.53}$$
Based on the knowledge in Sect. 4.3, the particles and weights acquired in this way are equivalent to approximating the priori probability distribution,
$$\sum_{i=1}^{M} \omega_k^i\, \delta\!\left(x_k - x_k^i\right) \approx p(x_k \mid \mathcal{F}_{k-1}). \tag{5.54}$$
The loss of posteriori information will deteriorate the filtering accuracy. Therefore, it is necessary to correct the slave filter by using the filtering information from the master filter when the deviation between the filtering results of the master and slave filters is
significant. The correction process shall be designed based on the non-triggered set $\hat{\Omega}_k$. Based on the design of $\hat{\Omega}_k$ in (5.24), the data transmission efficiency between the master and the slave filters is defined as
$$\tau = \left\|\tilde{F}_{x_k}\left(\tilde{x}_k - \hat{x}_k\right)\right\|_\infty, \tag{5.55}$$
where $\tilde{F}_{x_k}$ is the upper triangular matrix of the Cholesky decomposition of $\tilde{P}_{k|k}$. Obviously, $\tau \in [0, 1]$. The transmission threshold is set as $\tau_G \in (0, 1)$. When $\tau \ge \tau_G$, the slave filter receives the filtering result from the master filter to reset the filtering parameters of the ET-UKF,
$$\hat{x}_{k|k} = \tilde{x}_k, \qquad \hat{P}_{k|k} = \tilde{P}_k. \tag{5.56}$$
It is impossible to accurately calculate the resetting ratio. However, since the master filter based on particle filtering has a high filtering accuracy, which guarantees that $\tilde{x}_k$ is close to the real system state $x_k$, the resetting ratio can be approximated according to the principle of (5.26) as
$$\Pr(\tau \ge \tau_G) = \left[1 - 2D(\tau_G)\right]^n. \tag{5.57}$$
Equation (5.57) allows choosing an appropriate threshold $\tau_G \in (0, 1)$ within the capacity of the transmission system so as to increase the accuracy of the ET-HNF.
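The master-to-slave correction of (5.55)-(5.56) can be sketched in Python as follows; taking the Cholesky factor of the master covariance for the normalizing matrix follows our reading of (5.55), and the function name and example values are assumptions.

```python
import numpy as np

def maybe_reset_slave(x_master, P_master, x_slave, tau_G):
    """Master-to-slave correction of (5.55)-(5.56), as we read it.

    tau = ||F (x_master - x_slave)||_inf, with F taken here as the upper
    triangular Cholesky factor of the master covariance; when tau >= tau_G
    the slave filter is reset to the master estimate.
    """
    L = np.linalg.cholesky(P_master)      # lower triangular, P = L L^T
    F = L.T                               # upper triangular factor
    tau = np.linalg.norm(F @ (x_master - x_slave), ord=np.inf)
    if tau >= tau_G:
        return x_master.copy(), P_master.copy(), True   # (5.56): reset slave filter
    return x_slave, None, False

# Illustrative usage.
x_m, P_m = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
x_s = np.array([0.2, 0.5])
print(maybe_reset_slave(x_m, P_m, x_s, tau_G=0.1))
```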
5.5 Simulation and Verification on ET-HNF

The numerical simulation of this section again uses the IEEE 39-bus system as an example. This section applies the ET-HNF of Sect. 5.4 to the IEEE 39-bus standard system and compares its filtering performance with that of the ET-UKF, which has a similar computation burden, at the same arrival rate. The generator G3 in Fig. 2.2 is selected to verify the filtering accuracy, and the DSE on the generator G3 is demonstrated with a sudden break of the line between bus 14 and bus 15 in the figure. The sampling period of the simulation is $\Delta t = 0.02$ s, which yields 750 sampling points. The system state variable is $x \triangleq [E_d, E_q, \delta, \eta, E_{fd}, R_f, V_R]^T$ with dimension 7, $v(k)$ and $\theta(k)$ are taken as the input variables, and the observation variable is defined as $y \triangleq [i_d(k), i_q(k)]^T$. The initial state of the generator at node 3 is set as $[0.4535, 1.008, 0.6, 1, 2.234, 0.402, 3.43]$.

This section includes two simulations to test the influence of non-Gaussian noise and sudden disturbance on the DSE in WAMS. For each simulation, the comparison is carried out between the ET-HNF and the ET-UKF at the arrival rate of λ = 10%. The ET-UKF uses the design in Sect. 5.3, and the algorithm of Sect. 5.3.2
is used as the event triggered sampling strategy at the node, with the filtering algorithm of Sect. 5.3.3. The ET-HNF has the heterogeneous architecture designed in Sect. 5.2; the algorithm of Sect. 5.3.2 is used as its event triggered sampling strategy at the node, the filtering algorithm is given in Sect. 5.4.1, and the ET-UKF is reset when necessary by using the master and slave filter cooperation of Sect. 5.4.2. Note that the ET-UKF and the ET-HNF adopt the same event triggered sampling strategy; therefore, they impose the same requirements on the computation capability of the physical node.

In the first set of simulations, the system noise at the physical node is a zero-mean Rayleigh noise, which obeys the distribution
$$p(x) = \frac{x - \delta\sqrt{\pi/2}}{\delta^2}\, e^{-\left(x - \delta\sqrt{\pi/2}\right)^2 / \left(2\delta^2\right)}. \tag{5.58}$$
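A small Python sketch for generating zero-mean Rayleigh system noise with a prescribed variance, consistent with the zero-mean description around (5.58), is given below; the helper name and the de-meaning step are assumptions based on the stated zero-mean property.

```python
import numpy as np

def zero_mean_rayleigh(variance, size, rng=None):
    """Draw zero-mean Rayleigh samples with a prescribed variance.

    A Rayleigh variable with scale delta has mean delta*sqrt(pi/2) and
    variance (4 - pi)*delta^2/2 (cf. the text following (5.58)); delta is
    solved from the requested variance and the mean is subtracted so that
    the resulting noise is zero-mean, matching the simulation description.
    """
    rng = rng or np.random.default_rng()
    delta = np.sqrt(2.0 * variance / (4.0 - np.pi))
    return rng.rayleigh(scale=delta, size=size) - delta * np.sqrt(np.pi / 2.0)

# Illustrative check of the first two moments.
samples = zero_mean_rayleigh(1e-5, size=200_000, rng=np.random.default_rng(2))
print(samples.mean(), samples.var())   # approximately 0 and 1e-5
```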
The variance $(4-\pi)\delta^2/2$ of the Rayleigh noise corresponding to the seven states is selected as $[10^{9}, 10^{11}, 10^{5}, 10^{5}, 10^{5}, 10^{5}, 10^{9}]$. The variance matrix of the additive zero-mean Gaussian white noise of the observation process is chosen as $\mathrm{diag}[10^{-3}, 10^{-3}]$. The initial observation noise of the system is additive zero-mean Gaussian white noise whose variance matrix is ten times the system noise. In total, 64 simulations of the ET-HNF and the ET-UKF are performed separately, and the real system state together with the 64 simulation results is drawn in Figs. 5.3 and 5.4, respectively. The wider the colored band of the filtering results, the wider the estimation range over the 64 simulations, indicating a greater filtering error. Only the filtering results of the generator states $E_d$, $E_q$, $\delta$, and $\eta$ are compared in this section. The comparison between Figs. 5.3 and 5.4 shows that the colored area of the ET-UKF filtering results is wider than that of the ET-HNF; therefore, the filtering error of the ET-HNF is far smaller than that of the ET-UKF. A specific comparison of the rotor angle $\delta$, which has an accumulation effect, shows that the ET-HNF provides accurate filtering results while the error of the ET-UKF grows with time. The mean of the filtering error $e_k = \|x_k - \tilde{x}_k\|_2$ over the 64 simulations is compared in Fig. 5.5, and the results show that the ET-HNF is far better than the ET-UKF in filtering accuracy.

In the second set of simulations, the variance matrix of the additive zero-mean Gaussian white noise of the system is chosen as $\mathrm{diag}[10^{-10}, 10^{-10}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-10}]$. In addition to the additive white noise, the system is subject to disturbances from the sudden deceleration of the generator at the simulation times of 0.04 s, 0.2 s, and 2 s, specifically $[10^{1}, 10^{3}, 10^{3}, 10^{-5}, 10^{6}, 10^{6}, 10^{6}]$. Meanwhile, the variance matrix of the additive zero-mean Gaussian white noise of the observation is chosen as $\mathrm{diag}[10^{-3}, 10^{-3}]$. The simulation is again performed separately on the ET-UKF and the ET-HNF 64 times, and the real state of the system and the results of the 64 simulations are drawn in Figs. 5.6 and 5.7, respectively. It should be noted that the sudden disturbance has a much greater influence on the system state $\eta$ than on the other states. Figure 5.6 shows that the ET-UKF needs a longer time to resume tracking the state $\eta$ upon the occurrence of a sudden disturbance, whereas Fig. 5.7 shows that the ET-HNF can
Fig. 5.3 Estimation from the ET-HNF of 64 sets with non-Gaussian noise, where the actual system state is drawn in black and the ET-HNF results from 64 sets are in red. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed
Fig. 5.3 (continued)
tightly track the state $\eta$ upon the occurrence of the same sudden disturbance. In the simulation, the ET-HNF delivers very high filtering accuracy, and its filtering results almost coincide with the real system state. Figure 5.8 shows the mean of the filtering errors $e_k = \|x_k - \tilde{x}_k\|_2$ at each moment over the 64 simulations of the ET-HNF and the ET-UKF; it shows that the ET-HNF has much better filtering accuracy than the ET-UKF, particularly upon the occurrence of the stochastic disturbance.

The average simulation times of the ET-UKF, the ET-HNF, and the ET-PF under the same computation capability can be used to analyze their requirements on the computation capability of the physical node and the control center. The simulation code of this section is implemented in MATLAB (m language), and the parallel processing of MATLAB is used on a 4-core i7 processor. The codes of the three filters are timed separately at the physical node and at the control center, and the average simulation times are shown in Fig. 5.9. The figure shows that the ET-UKF only needs to calculate a small number of Sigma points to realize the filtering iteration, so its computation burden is small. Both the event triggered sampling strategy of the corresponding ET-PF at the physical node and its filtering algorithm at the control center are based on the Monte Carlo method, which acquires the statistics of the system state through a large number of particles; therefore, the computation burden at both the physical node and the control center is very heavy. Through its heterogeneous architecture, the ET-HNF combines the advantageous features of the ET-UKF and the ET-PF simultaneously: an algorithm similar to the ET-UKF is adopted as the event triggered sampling strategy of the ET-HNF at the physical node, so the computation burden there is light, while the filtering algorithm of the ET-HNF at the control center performs the computation of both Sigma points and particles, so its computation burden is similar to that of the ET-PF.
Fig. 5.4 Estimation from the ET-UKF of 64 sets with non-Gaussian noise, where the actual system state is drawn in black and the ET-UKF results from 64 sets are in cyan. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed
Fig. 5.4 (continued)
Fig. 5.5 Average of estimation error from 0 s to 15 s for ET-UKF and ET-HNF with Non-Gaussian noise, where the ET-UKF is in cyan and the counterpart ET-HNF is in red
In summary, the ET-HNF takes full advantage of the additional computation capability at the control center, which helps to reduce the computation burden at the physical node. As particle filtering can be realized via parallel processing of the individual particles, the control center can substantially reduce the operation time of the Monte Carlo based ET-HNF through advanced parallel computing technology [142, 144]. The computation requirement of the ET-HNF at the physical node can be substantially reduced, which enables the ET-HNF to be realized on the current infrastructure of the smart grid without additional cost at the computing node.
Fig. 5.6 Estimation from the ET-UKF of 64 sets with intense disturbance, where the actual system state is drawn in black and the ET-UKF results from 64 sets are in cyan. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed
Fig. 5.6 (continued)
5.6 Conclusions

Under the premise of reducing the data communication while guaranteeing the arrival rate, this chapter designs a heterogeneous event triggered nonlinear filtering structure and the corresponding algorithm, considering the limited computation capability in the WAMS. The heterogeneous event triggered nonlinear DSE utilizes the ET-UKF with a small computation burden as the slave filter and the particle filter as the high accuracy master filter with a high computation burden; the two cooperate to realize the event triggered innovational information transmission strategy and the high accuracy filtering algorithm. The same ET-UKF slave filter is utilized at the physical node and at the control center to provide the event triggered sampling strategy for the node and the event triggered innovational information for the control center, respectively. With the high computation capability at the control center, the master filter uses a Monte Carlo based method that can fully utilize the event triggered information from the slave filter to increase the filtering accuracy of the system. Meanwhile, the cooperation method between the master and slave filters is designed to transmit the event triggered information from the slave filter to the master one and to use the master filter to correct the slave filter based on the distance between their estimation results, so that both filters cooperate to deliver appropriately accurate filtering. The simulation shows that the heterogeneous event triggered filter has much better filtering accuracy than the event triggered UKF under the same limitations on the communication bandwidth and the computation capability of the physical node. In particular, the event triggered UKF needs a longer time to recover the filtering accuracy upon the occurrence of a sudden disturbance at the node, while the heterogeneous event triggered filter can rapidly resume tracking the dynamic system state.
Fig. 5.7 Estimation from the ET-HNF of 64 sets with intense disturbance, where the actual system state is drawn in black and the ET-HNF results from 64 sets are in red. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed
Fig. 5.7 (continued)
Fig. 5.8 Average of estimation error from 0 s to 15 s for ET-UKF and ET-HNF with intense disturbance, where the ET-UKF is in cyan and the counterpart ET-HNF is in red
Fig. 5.9 Average of elapsed computation time in generator node and estimation center of ET-UKF, ET-HNF, and ET-PF
Algorithm 5.1 The slave filtering algorithm and event triggered sampling strategy of heterogeneous event triggered filtering

Input: $\{y_k\}$, possibly $(\tilde{x}_k, \tilde{P}_k)$
Output: $\{\gamma_k\}$
Initialization: $\kappa = 3 - n$, $\hat{x}_{1|0} = f(x_0)$, $P_{1|0} = P_0$
while sampling $y_k$ or receiving the correction $\tilde{x}_k, \tilde{P}_k$ from the control center do
    if $\tilde{x}_k, \tilde{P}_k$ received then
        $\hat{x}_{k|k} = \tilde{x}_k$, $\hat{P}_{k|k} = \tilde{P}_k$
        return $\gamma_k = 1$
    else
        if $\|\hat{F}_{y_k}(y_k - \hat{y}_{k|k-1})\|_\infty < \psi$ then
            $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k [y_k - \hat{y}_{k|k-1}]$
            $\hat{P}_{k|k} = \hat{P}_{k|k-1} - \beta K_k \hat{P}_{y_k} K_k^T$
            return $\gamma_k = 0$
        else
            $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k [y_k - \hat{y}_{k|k-1}]$
            $\hat{P}_{k|k} = \hat{P}_{k|k-1} - K_k \hat{P}_{y_k} K_k^T$
            return $\gamma_k = 1$
        end if
    end if
    $\{\mathcal{X}_{k+1|k}^{(i)}\} \Leftarrow N(\hat{x}_{k|k}, \hat{P}_{k|k})$
    $\hat{x}_{k+1|k} = \sum_{i=0}^{2n} W_{k+1|k}^{(i)} f(\mathcal{X}_{k+1|k}^{(i)})$
    $\hat{P}_{k+1|k} = \sum_{i=0}^{2n} W_{k+1|k}^{(i)} [f(\mathcal{X}_{k+1|k}^{(i)}) - \hat{x}_{k+1|k}][f(\mathcal{X}_{k+1|k}^{(i)}) - \hat{x}_{k+1|k}]^T$
    $\{\mathcal{X}_{k+1|k+1}^{(i)}\} \Leftarrow N(\hat{x}_{k+1|k}, \hat{P}_{k+1|k})$
    $\mathcal{Y}_{k+1}^{(i)} = h(\mathcal{X}_{k+1|k+1}^{(i)})$
    $\hat{y}_{k+1|k} = \sum_{i=1}^{2n} W^{(i)} \mathcal{Y}_{k+1}^{(i)}$
    $\hat{P}_{y_k} = \sum_{i=1}^{2n} W^{(i)} (\hat{y}_{k+1|k} - \mathcal{Y}_{k+1}^{(i)})(\hat{y}_{k+1|k} - \mathcal{Y}_{k+1}^{(i)})^T$
    $\hat{P}_{x_k y_k} = \sum_{i=1}^{2n} W^{(i)} (\mathcal{X}_{k+1|k+1}^{(i)} - \hat{x}_{k+1|k})(\mathcal{Y}_{k+1}^{(i)} - \hat{y}_{k+1|k})^T$
    $K_{k+1} = \hat{P}_{x_k y_k} \hat{P}_{y_{k+1}}^{-1}$
    use the Cholesky decomposition $\hat{F}_{y_{k+1}} = \mathrm{chol}(\hat{P}_{y_{k+1}}^{-1})$
end while
Algorithm 5.2 The master filtering algorithm of heterogeneous event triggered filtering

Input: observation $Y_k$; the slave filter provides $\hat{x}_{k|k}$, $\hat{y}_{k|k-1}$, $\hat{F}_{y_{k+1}}$
Output: $\tilde{x}_k$
Initialization: $x_k^i \Leftarrow p_0(x)$, $\omega^i = 1/M$, $\forall i$
while $k \ge 1$ do
    $\omega_{\mathrm{sum}} = 0$
    if $\gamma_k = 1$ then
        for $i = 1$ to $M$ do
            $\mu_k^i \Leftarrow p_k(\mu)$, $x_k^i = f(x_{k-1}^i) + \mu_k^i$
            $\tilde{\omega}_k^i = \omega_{k-1}^i p(y_k \mid x_k^i)$, $\omega_{\mathrm{sum}} = \omega_{\mathrm{sum}} + \tilde{\omega}_k^i$
        end for
    else
        $M_\varepsilon = 0$, $M_F^i = 0$, $Flag = 0$
        for $n = 1$ to $T_\varepsilon$ do
            for $i = 1$ to $M$ do
                if $M_F^i = 0$ then
                    $\mu_k^i \Leftarrow p_k(\mu)$, $x_k^i = f(x_{k-1}^i) + \mu_k^i$, $\nu_k^i \Leftarrow p_k(\nu)$, $y_k^i = h(x_k^i, \nu_k^i)$
                    if $\|\hat{F}_{y_k}(y^i - \hat{y}_{k|k-1})\| < \xi$ then
                        $\tilde{\omega}^i = \omega_{k-1}^i$, $M_F^i = 1$, $M_\varepsilon{+}{+}$, $\omega_{\mathrm{sum}} = \omega_{\mathrm{sum}} + \tilde{\omega}_k^i$
                    else
                        $\tilde{\omega}^i = 0$, $M_F^i = 0$
                    end if
                end if
            end for
            if $M_\varepsilon \ge M_{G,\varepsilon}$ then
                $Flag = 1$, break
            end if
        end for
        if $M_F^i = 0$ then
            for $i = 1$ to $M$ do
                $\tilde{\omega}_k^i = \omega_{k-1}^i$, $\omega_{\mathrm{sum}} = \omega_{\mathrm{sum}} + \tilde{\omega}_k^i$
            end for
        end if
    end if
    $\tilde{x}_{k+1} = 0$
    for $i = 1$ to $M$ do
        $\omega_k^i = \tilde{\omega}^i / \omega_{\mathrm{sum}}$, $\tilde{x}_k = \tilde{x}_k + \omega_k^i x_k^i$
    end for
    for $i = 1$ to $M$ do
        $\tilde{P}_k = \omega_k^i (\tilde{x}_k - x_k^i)(\tilde{x}_k - x_k^i)^T$
    end for
    $\tilde{F}_{x_k} = \mathrm{chol}(\tilde{P}_k)$
    if $\|\tilde{F}_{x_k}(\tilde{x}_k - \hat{x}_k)\| > \tau_G$ then
        send $\tilde{x}_k$ and $\tilde{P}_k$ to the ET-UKF
    end if
    return $\tilde{x}_{k+1}$
end while
Chapter 6
Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational Condition
6.1 Introduction

The event triggered cubature Kalman filter based on the innovation based condition utilizes the event triggered sampling strategy with the innovation based condition at the sensor so as to effectively reduce the transmission of measurement data. At the center, an event triggered CKF is further designed, which can still run the update according to the measurement information implicitly contained in the innovation based condition when no measurement data is received, thus guaranteeing the estimation accuracy of the system state; it is also applicable to nonlinear systems. However, this event triggered CKF still has the following weaknesses.

Firstly, although the event triggered sampling strategy using the innovation based condition at the sensor node reduces the data transmission, the innovation based condition causes the statistical characteristics of the measurements to follow a truncated Gaussian distribution instead of a pure Gaussian distribution, considering the design concept of the innovation based condition. This obviously conflicts with the design premise of the CKF, i.e., the assumption that the priori statistical characteristics of the system state and measurement obey Gaussian distributions. Therefore, the filtering accuracy of the remote event triggered CKF will be degraded. Secondly, although the event triggered CKF at the center extends the event triggered estimation from linear systems to nonlinear systems, it is only applicable when the system and measurement noise are known Gaussian white noise. However, in a practical WAMS of the smart grid, the system noise and measurement noise are mostly non-Gaussian and unknown. Therefore, the application scope of the event triggered CKF is limited to a certain extent.

To address the first disadvantage, a stochastic event triggered sampling strategy was proposed using the innovation based condition, and a linear Kalman filter was designed based on such a sampling strategy [45], which showed that the stochastic innovational condition could guarantee the Gaussian characteristics of
149
150
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
measurements. Therefore, the Kalman filter based on this stochastic condition could acquire a higher filtering accuracy than the event triggered Kalman filter using the innovation based condition. As mentioned above, the WAMS are mostly nonlinear so that the event triggered Kalman filter using stochastic innovational condition obviously cannot address the DSE for nonlinear system. To address the second weakness, although the particle filter and nonlinear H∞ filter can effectively address the unknown system noise or non-Gaussian system noise, these algorithms run in a complicated iteration process and cannot satisfy the real-time DSE requirement on WAMS. In summary, in order to address the disadvantages of ETCKF in Chapter 3, will firstly design the stochastic event triggered cubature Kalman filter (SETCKF) using the stochastic innovational condition and based on this strategy, the stochastic event triggered robust cubature Kalman filter (SETRCKF) is designed regarding the unknown noise or non-Gaussian noise considering the computation complexity. Finally, the stochastic stability of SETRCKF is analyzed. This chapter is organized as follows: Section 6.2 designs the SETCKF and utilizes the IEEE-39 bus to compare the filtering accuracy between SETCKF and ETCKF. Section 6.3 further provides the design and inference process of SETRCKF algorithm based on SETCKF. Section 6.4 analyzes the stochastic stability of SETRCKF and provides the sufficient condition for its stochastic stability. Finally, Sect. 6.5 utilizes the IEEE-39 bus to verify the effectiveness of SETRCKF algorithm.
6.2 Design of Event Triggered Cubature Filter Using Stochastic Innovational Condition In terms of the innovation based condition ruining the Gaussian characteristics of measurement, this section develops the stochastic event triggered CKF filtering algorithm using the stochastic innovational condition. Like the design of ETCKF, the SETCKF design also comprises two parts: design of event triggered sampling strategy at the sensor and design of filtering algorithm at the control center.
6.2.1 Design of Event Triggered CKF Using Stochastic Innovational Condition Like the event triggered sampling strategy using the innovation based condition, the calculation of logic variable γk also requires the feedback of the medium value of filtering from the center when the stochastic innovational condition is taken as the event triggered sampling strategy. To remove the data return and further get communication burden relieved of, communication burden, the sensor is equipped
6.2 Design of Event Triggered Cubature Filter Using Stochastic Innovational. . .
151
Fig. 6.1 System diagram of DSE using SETCKF
with one local filter to provide the logic value with the median value of filtering, which is used to determine whether the current measurement shall be transmitted. Therefore, the architecture of SETCKF is shown in Fig. 6.1. For the nonlinear system of (3.1), when the stochastic innovational condition is taken as the stochastic event triggered sampling strategy, the time update will be conducted according to the conventional CKF algorithm after the system state estimation xˆk−1 and Pk−1 of the last time instance are received at the local filter. √ ξ (i) =
(i)
Xk−1 = xˆk−1 +
nei , √ − nei−n ,
i = n + 1, . . . , 2n
i = 1, . . . , 2n
Pk−1 ξ (i) ,
(i) (i) Xˆk = f (Xk−1 ),
i = 1, . . . , n
i = 1, . . . , 2n
1 ˆ (i) Xk 2n 2n
xˆk|k−1 =
(6.1)
i=1
1 ˆ (i) (i) (Xk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 . 2n 2n
Pk|k−1 =
i=1
Next, the one-step prediction of measurement is acquired to calculate the logic variable γk . (i)
Xk|k−1 = xˆk|k−1 +
Pk|k−1 ξ (i) ,
(i) (i) Yˆ k = h(Xk|k−1 ),
yˆk|k−1
2n 1 ˆ (i) Yk . = 2n i=1
i = 1, . . . , 2n
i = 1, . . . , 2n
(6.2)
152
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Furthermore, the stochastic innovational condition is calculated according to the one-step prediction and the designed parameter Y , 1
ϕ(yk ) = e− 2 (yk −yˆk|k−1 )
T Y (y
k −yˆk|k−1 )
.
(6.3)
Finally, the value of γk is determined by comparing the stochastic innovational condition and the stochastic value ηk obeying the uniform distribution over [0, 1]. The pseudo-code of event triggered sampling strategy in the SETCKF is given in Algorithm 6.1. Algorithm 6.1 Event triggered strategy of SETCKF Input: the current measurement yk and design parameter Y Output: the logic variable γk 1: Initialization: the initial system state x0 and covariance matrix P0 2: Iteration process: 3: While k ≥ 1 do √ nei , i = 1, . . . , n (i) 4: ξ = √ − nei−n , i = n + 1, . . . , 2n √ (i) Xk−1 = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n 5: (i) (i) 6: Xˆk = f (X ), i = 1, . . . , 2n 7: 8: 9: 10: 11:
xˆk|k−1 =
(i)
Xˆk
i=1 2n
(i)
(i)
(i)
(Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 = xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n
Pk|k−1 = Xk|k−1
k−1 2n
1 2n 1 2n
i=1
(i) Yˆ k
(i) = h(Xk|k−1 ), 2n (i) 1 yˆk|k−1 = 2n Yˆ k
i = 1, . . . , 2n
i=1
1
12: ϕ(yk ) = e− 2 (yk −yˆk|k−1 ) Y (yk −yˆk|k−1 ) 13: ηk is randomly generated obeying the uniform distribution over [0, 1] 14: if ϕ(yk ) < ηk then 15: return γk = 1 16: else 17: return γk = 0 18: end if 19: end while T
6.2.2 Filtering Design at the Control Center Based on the stochastic event triggered sampling strategy designed in the previous section, the event triggered Kalman filter was designed to address the issue of remote
6.2 Design of Event Triggered Cubature Filter Using Stochastic Innovational. . .
153
state estimation for linear system [45]. This section provides a detailed design to extend the stochastic event triggered Kalman filter to the nonlinear system. Like ETCKF in this chapter, the key of SETCKF design is to fully utilize the measurement information contained in the stochastic innovational condition for measurement update when the center does not receive the measurement, i.e., γ = 0. Inspired by the design process of stochastic event triggered Kalman filter, the measurement update of SETCKF is provided according to the following theorem. Theorem 6.1 In terms of the system model (3.1) for the remote state estimation shown in Fig. 6.1. It is assumed that the event triggered sampling strategy as in Algorithm 6.1 is used at the sensor. The system state vector xk obeys the Gaussian distribution p(xk |Fk ) = N(xˆk , Pk ) in the following conditions, and xˆk = xˆk|k−1 + γk Kk (yk − yˆk|k−1 ) Pk = Pk|k−1 − Kk PxTk yk Kk = Pxk yk [Pyk + (1 − γk )Y −1 ]−1 , where Pxk yk is the cross-covariance matrix between the system state and measurement, and Pyk is the one-step prediction of covariance for the measurement. Before proving the theorem, it is necessary to introduce the following lemma, Lemma 6.1 ([45]) For the following dividable matrix "
# xx xy = , Txy yy where xx ∈ Rn×n ,xy ∈ Rn×n ,m×m ∈ Rm × m. If it satisfies the following equation: −1 + where Y ∈ Rm×m , equation:
00 = 0Y
−1
,
is a dividable matrix and can be calculated in the following " =
# xx T xy
xy yy
xx
= xx − xy (yy + Y −1 )−1 Txy
xy
= xy (I + yy Y )−1
yy
−1 = (−1 yy + Y ) .
154
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Proof The time update of the SETCKF filtering is identical to the conventional CKF. The measurement update comprises two different cases according to logic variable γk . When γk = 1, the center receives the measurement and SETCKF updates the measurement according to conventional CKF. However, when γk = 0, the center does not receive the measurement and has to rely on the measurement information contained in the stochastic innovational condition for measurement update in order to increase the filtering accuracy. To acquire the system state estimation and the error covariance matrix, the joint distribution probability density function between xk and yk is firstly defined over the set Fk as p(xk , yk |Fk ). The following the Bayesian equation can be acquired by p(xk , yk |Fk ) = p(xk , yk |γk = 0, Fk−1 ) =
Pr(γk = 0|xk , yk , Fk−1 )p(xk , yk |Fk−1 ) Pr(γk = 0|Fk−1 )
=
Pr(γk = 0|yk )p(xk , yk |Fk−1 ) , Pr(γk = 0|Fk−1 )
(6.4)
where the third equation holds due to the independence of γk from xk and Fk−1 . According to the definition of stochastic innovational condition, the first term in the numerator of equation above can be calculated by the following equation: Pr(γk = 0|yk ) = Pr(η ≤ ϕ(yk )|yk ) 1
= e− 2 (y−yˆk|k−1 )
T Y (y−yˆ
k|k−1 )
(6.5)
.
To acquire the second term in the numerator, the variance of joint distribution between xk and yk over the set Fk−1 is defined as ! so that " # Pk|k−1 Pxk yk != , PxTk yk Pyk where Pxk yk =
1 2n
2n
(i) (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k − yˆk|k−1 )T ,Pyk =
i=1 yˆk|k−1 )(Yˆ k(i) − yˆk|k−1 )T + Rk .
(6.6)
1 2n
2n i=1
(Yˆ k − (i)
Based on this definition, the second term of the numerator in (6.4) can be calculated as 1 T −1 1 p(xk , yk |Fk−1 ) = √ e− 2 E ! E , m+n det(!)(2π )
(6.7)
xk − xˆk|k−1 is the one-step prediction error between the system state yk − xˆk|k−1 and measurement, and det(·) is the determinant computation. where E =
6.2 Design of Event Triggered Cubature Filter Using Stochastic Innovational. . .
155
The equations of (6.5) and (6.7) are introduced into (6.4) to derive p (xk , yk |Fk ) =
1 1 T −1 T e− 2 [E ! E+(y−yˆk|k−1 ) Y (y−yˆk|k−1 )] . √ Pr(γk = 0|Fk−1 ) det(!)(2π )m+n
(6.8) Here, " is defined as the degree of exponential term, and it can be got by " = E T ! −1 E + (y − yˆk|k−1 )T Y (y − yˆk|k−1 ) T −1 T 0 0 = E ! E+E E. 0Y According to Lemma 6.1, it can be acquired by " = ET " xk yk
xk yk yk
Rn Rm
, wherein
yk
xk
As
(6.9)
E,
# xk T xk yk
=
where
−1
= Pk|k−1 − Pxk yk (Pyk + Y −1 )−1 PxTk yk
(6.10)
= Pxk yk (I + Pyk Y )−1
(6.11)
= (Py−1 + Y )−1 . k
(6.12)
p(xk , yk |Fk )dxk dyk = 1,
1 1 =√ . √ m+n Pr(γk = 0|Fk−1 ) det(!)(2π ) det( )(2π )m+n
(6.13)
The equation above and (6.9) are introduced into (6.8) to get
1 p(xk , yk |Fk ) = √ e det( )(2π )m+n
− 12 E T
−1 E
.
(6.14)
Equation (6.14) shows that (xk , yk ) obeys the joint Gaussian distribution over the set Fk , and xˆk = xˆk|k−1 Pk =
xk
= Pk|k−1 − Pxk yk (Pyk + Y −1 )−1 PxTk yk .
(6.15)
156
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Algorithm 6.2 Filtering algorithm of SETCKF Step 1: generate 2n Sigma points √
ξ
(i)
(i)
Xk−1
nei , i = 1, . . . , n √ − nei−n , i = n + 1, . . . , 2n = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n =
Step 2: time update Xˆk
(i)
(i) = f (Xk−1 ), i = 1, . . . , 2n
xˆk|k−1 =
2n 1 (i) Xˆk 2n i=1
Pk|k−1 =
2n 1 (i) (i) (Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 2n i=1
(i) Xk|k−1
Yˆ k
(i)
= xˆk|k−1 +
Pk|k−1 ξ (i) ,
(i)
= h(Xk|k−1 ),
yˆk|k−1 =
i = 1, . . . , 2n
i = 1, . . . , 2n
2n 1 (i) Yˆ k 2n i=1
Py k =
2n 1 (i) (i) (Yˆ k − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T + Rk 2n i=1
Px k y k =
2n 1 (i) (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n i=1
Step 3: measurement update Kk = Pxk yk [Pyk + (1 − γk )Y −1 ]−1 xˆk = xˆk|k−1 + γk Kk (yk − yˆk|k−1 ) Pk = Pk|k−1 − Kk PxTk yk Step 4: execute the loop process from Step 1 to Step 3 at each time instance k.
By combining Eq. (6.15) and the measurement update upon γ = 1, which is equivalent to the measurement update of conventional CKF, the SETCKF filtering can be obtained as in Theorem 6.1. According to Theorem 6.1, the filtering of SETCKF at the center is shown in Algorithm 6.2.
6.3 Design of Event Triggered Robust Cubature Kalman Filter Using. . .
157
6.3 Design of Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational Condition In the practical WAMS, the system and measurement noise generally do not obey the Gaussian distribution or the variance matrix is unknown even if they obey Gaussian distribution. The use of ETCKF or SETCKF based on CKF with the Gauss assumption will certainly reduce the accuracy of system state estimation and even lead to filtering divergence. The widely used filtering applicable for complex noise, such as particle filter and H∞ nonlinear filter, needs the huge amount of computation so that the real-time requirement cannot be realized for DSE in WAMS. Therefore, this section will develop the stochastic event triggered robust cubature Kalman filter (SETRCKF) using the stochastic innovational condition to address the issue of complicated noise with low computation complexity. Similarly, the design of SETRCKF also comprises two parts, i.e., event triggered sampling strategy design and the filtering design at the center. Before providing the SETRCKF algorithm, the system state and observation equations for the following discrete nonlinear system is firstly formulated by xk+1 = f (xk ) + νk yk = h(xk ) + ωk ,
(6.16)
where νk indicates that the system noise obeys the zero-mean Gaussian distribution, but the variance matrix remains unknown. The measurement noise ωk is a nonGaussian noise and obeys the following distribution, i.e., p(vk ) ∼ (1 − ε)p(1 ) + εp(2 ),
(6.17)
where ε is the contamination degree, and 1 ∼ N (0, 1 ) and 2 ∼ N (0, 2 ).
6.3.1 Event Triggered Strategy Design for SETRCKF As the SETRCKF still uses the stochastic innovational condition, its system architecture is as same as shown in Fig. 6.1. After the local filter obtains the state estimation and error covariance matrix of the previous moment, the local filter needs to perform the time update at first to provide the medium value of filtering necessary for the calculation of logic variable γk . However, there exists the major difference from SETCKF that the system noise variance matrix is unknown in this case so that the covariance matrix of one-step prediction error of system state cannot be obtained. Therefore, the top priority of SETRCKF event triggered sampling strategy design is the estimation for the variance of process noise.
158
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
To estimate the process noise, the adaptive factor τk is introduced. The process noise variance can be indicated as ˆ k = τk Q ˆ k−1 , Q
(6.18)
ˆ 0 is the process noise ˆ k is the estimation of process noise variance, while Q where Q variance designed according to the experience. In order to calculate τk , the following filtering residual error is firstly defined by e˜k−1 = y(k − 1) − h(xˆk−1 ).
(6.19)
After that, the statistic linearization in [145] is used to linearize the previous equation −1 (xk−1 − xˆk−1 ), e˜k−1 = ωk−1 + PxTk−1 yk−1 P¯k−1|k−2
(6.20)
∗ ˆ k−2 , and P ∗ where P¯k−1|k−2 = Pk−1|k−2 +Q k−1|k−2 is the part including the covariance matrix of one-step prediction error of system state minus the process noise covariance. After that, the variance matrix e˜k−1 is calculated to get −1 −1 Pk−1 P¯k−1|k−2 Pxk−1 yk−1 . Pe˜k−1 = Rˆ k−1 − PxTk−1 yk−1 P¯k−1|k−2
(6.21)
The system state’s posteriori variance matrix is introduced into the both side of (6.21) and the trace operation is taken as −1 −1 ∗ Pk−1|k−2 Pxk−1 yk−1 ) tr(Pe˜k−1 ) = tr(Rˆ k−1 ) − tr(PxTk−1 yk−1 P¯k−1|k−2 P¯k−1|k−2 −1 ˆ k−2 P¯ −1 Q −τk−1 tr(PxTk−1 yk−1 P¯k−1|k−2 k−1|k−2 Pxk−1 yk−1 ) −1 −1 T +tr(PxTk−1 yk−1 P¯k−1|k−2 Kk−1 Pyk−1 Kk−1 Pxk−1 yk−1 ), (6.22) P¯k−1|k−2
where Rˆ k−1 is the estimation of the measurement error variance matrix, and Kk−1 is the filtering gain of SETRCKF, whose detailed computation method will be provided in the next section. Based on the research in [146], the change on Q T ) ≥ tr(P matrix will cause tr(e˜k−1 e˜k−1 e˜ k−1 ). Therefore, in order to calculate the T ) = tr(P adaptive factor, it is made that tr(e˜k−1 e˜k−1 e˜ k−1 ). According to the matrix T T trace property tr(e˜k−1 e˜k−1 ) = e˜k−1 e˜k−1 , τk−1 can be calculated in the following equation: −1 −1 T ∗ e˜k−1 = tr(Rˆ k−1 ) − tr(PxTk−1 yk−1 P¯k−1|k−2 Pk−1|k−2 Pxk−1 yk−1 ) e˜k−1 P¯k−1|k−2 −1 ˆ k−2 P¯ −1 Q −τk−1 tr(PxTk−1 yk−1 P¯k−1|k−2 k−1|k−2 Pxk−1 yk−1 ) −1 −1 T +tr(PxTk−1 yk−1 P¯k−1|k−2 Kk−1 Pyk−1 Kk−1 Pxk−1 yk−1 ). (6.23) P¯k−1|k−2
6.3 Design of Event Triggered Robust Cubature Kalman Filter Using. . .
159
Algorithm 6.3 Event triggered strategy of SETRCKF Input: the current measurement yk , design parameter Y , the estimation of system noise variance ˆ k−2 at last moment Q Output: logic variable γk 1: Initialization: initial system state x0 , initial covariance matrix P0 2: Iteration process: 3: While k ≥ 1 do 4: Use the equation (6.25) and calculate ak−1 bk−1 5: τk−1 = abk−1 k−1 ˆ k−2 ˆ k−1 = τk−1 Q 6: Q √ nei , i = 1, . . . , n 7: ξ (i) = √ − nei−n , i = n + 1, . . . , 2n √ (i) Xk−1 = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n 8: (i) (i) 9: Xˆk = f (X ), i = 1, . . . , 2n 10: 11: 12: 13: 14:
xˆk|k−1 =
(i)
(i) Yˆ k
i=1 2n
Xˆk
(i)
(i)
(i)
ˆ k−1 (Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Q = xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n
Pk|k−1 = Xk|k−1
k−1 2n
1 2n 1 2n
i=1
(i)
= h(Xk|k−1 ), 2n 1 ˆ (i) yˆk|k−1 = 2n Yk
i = 1, . . . , 2n
i=1
1
15: ϕ(yk ) = e− 2 (yk −yˆk|k−1 ) Y (yk −yˆk|k−1 ) 16: Ramdomly generate the parameter ηk obeying the uniform distribution over [0, 1]. 17: if ϕ(yk ) < ηk then 18: return γk = 1 19: else 20: return γk = 0 21: end if 22: end while T
From the previous equation, it can be obtained by τk−1 =
ak−1 , bk−1
(6.24)
where −1 −1 ∗ ak−1 = tr(Rˆ k−1 ) − tr(PxTk−1 yk−1 P¯k−1|k−2 Pk−1|k−2 Pxk−1 yk−1 ) P¯k−1|k−2 −1 −1 T T +tr(PxTk−1 yk−1 P¯k−1|k−2 Kk−1 Pyk−1 Kk−1 Pxk−1 yk−1 ) − e˜k−1 e˜k−1 P¯k−1|k−2 −1 ˆ k−2 P¯ −1 bk−1 = tr(PxTk−1 yk−1 P¯k−1|k−2 Q k−1|k−2 Pxk−1 yk−1 ).
(6.25)
In general, for a system whose noise variance is unknown, the system noise variance can be estimated by the previous iterative. After the system noise variance
160
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
ˆ k is used to replace Qk in (6.1) so that the event matrix is acquired, this estimation Q triggered sampling strategy of SETRCKF can be obtained as in Algorithm 6.3.
6.3.2 Filtering Design of SETRCKF When the nonlinear system’s measurement noise in (6.17) is the non-Gaussian, the following Huber formula is used to enhance the robustness of SETCKF so as to obtain the SETRCKF according to [147]. ψ(vk ) =
|vk | ≤ L
vk ,
L · sgn(vk ), |vk | > L
,
(6.26)
−1/2
where vk = Pyk (yk − yˆk ) and L is the constant determined by ε. When γk = 1, based on the robust Kalman filtering proposed in [147], the SETRCKF performs the following measurement update after the time update specified in the previous section Py∗k =
1 ˆ (i) (i) (Yk − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n
Pyk =
Py∗k
2n
i=1
+ Rk
1 (i) (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n 2n
Pxk yk =
i=1
−T /2
Kk = Pxk yk Pyk
xˆk = xˆk|k−1 + Kk !(vk ) ˙ k )KkT , Pk = Pk|k−1 − Kk !(v
(6.27)
where !(vk ) = [ψ(νk1 ) · · · ψ(νkm )]T ⎤T ⎡ ˙ 1) · · · 0 ψ(ν k . .. ⎥ . ˙ k) = ⎢ !(v ⎣ .. . . . . ⎦ 0
...
(6.28)
˙ m) ψ(ν k
From (6.27), it is shown that the computation of this algorithm needs the measurement noise variance matrix Rk . As measurement noise is a non-Gaussian subject to (6.17), its variance matrix is unknown. Therefore, it is necessary to firstly estimate Rk before the measurement update.
6.3 Design of Event Triggered Robust Cubature Kalman Filter Using. . .
161
In order to estimate Rk , it is necessary to introduce an adaptive factor Sk to satisfy Rˆ k# = Sk Rk ,
(6.29)
where Rˆ k# is the estimation of measurement noise variance. After that, the historical innovational information is utilized to calculate Sk . If the innovational information is expressed by ek = yk − yˆk|k−1 , the system measurement’s covariance matrix of one-step prediction error can be approximated in the following estimation method: Pyk
N 1 T = ek−j ek−j , N
(6.30)
j =0
where N is the length of the time window. From the previous equation and Pyk in the filtering algorithm, it can be got by Py∗k + Sk Rk =
N 1 T ek−j ek−j . N
(6.31)
j =0
Therefore, it can be derived by ⎛
⎞ N 1 T Sk = ⎝ ek−j ek−j − Py∗k ⎠ Rk−1 . N
(6.32)
j =0
Equation (6.32) is introduced into (6.29) to obtain Rˆ k# =
1 N
N j =0
T −P ∗ . To ek−j ek−j yk
guarantee the symmetry and positive definite of the measurement noise error matrix, it is made that Rˆ k = diag{|Rˆ k# (1)|, |Rˆ k# (2)|, . . . , |Rˆ k# (m)|},
(6.33)
where Rˆ k# (m) is the mth diagonal element of Rˆ k# . Rˆ k is used to replace Rk in (6.27) so that the measurement update of SETRCKF at γk = 1 can be derived. When γk = 0, according to the filtering algorithm of SETCKF, the measurement update of SETRCKF can be obtained simply by replacing Pyk in (6.27) by the following equation: Pyk = Py∗k + Rˆ k + Y −1 .
(6.34)
In summary, the pseudo-code of filtering algorithm for SETRCKF is given in Algorithm 6.4.
162
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Algorithm 6.4 Filtering algorithm for SETRCKF Step 1: generate 2n Sigma points √
ξ
(i)
(i)
Xk−1
nei , i = 1, . . . , n √ − nei−n , i = n + 1, . . . , 2n = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n =
Step 2: time update Xˆk
(i)
(i) = f (Xk−1 ), i = 1, . . . , 2n
xˆk|k−1 =
2n 1 (i) Xˆk 2n i=1
Pk|k−1 =
2n 1 (i) (i) ˆ k−1 (Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Q 2n i=1
(i) Xk|k−1
Yˆ k
(i)
= xˆk|k−1 +
Pk|k−1 ξ (i) ,
(i)
= h(Xk|k−1 ),
yˆk|k−1 =
i = 1, . . . , 2n
i = 1, . . . , 2n
2n 1 (i) Yˆ k 2n i=1
Step 3: measurement update Py∗k =
2n 1 (i) (i) (Yˆ k − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n i=1
N 1 T Rˆ k# = ek−j ek−j − Py∗k N j =0
Rˆ k = diag{|Rˆ k# (1)|, |Rˆ k# (2)|, . . . , |Rˆ k# (m)|} Pyk = Py∗k + Rˆ k Px k y k =
2n 1 (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k(i) − yˆk|k−1 )T 2n i=1
Kk = Pxk yk [Pyk + (1 − γk )Y −1 ]−T /2 xˆk = xˆk|k−1 + γk Kk !(vk ) ˙ k )KkT Pk = Pk|k−1 − Kk !(v Step 4: execute the loop process from Step 1 to Step 3 at each moment.
6.4 Analysis of Stochastic Stability of Event Triggered Filtering Using. . .
163
6.4 Analysis of Stochastic Stability of Event Triggered Filtering Using Stochastic Innovational Condition This section carries out the stochastic stability analysis for two event triggered filter using stochastic innovational condition proposed in this chapter. Like Chap. 3, the index to evaluate the stochastic stability is still the expectation of one-step prediction error, i.e., E(x˜k|k−1 2 ), which is stochastically bounded. This section will firstly give the sufficient condition of boundedness for the posteriori variance of the system state regarding SETCKF and SETRCKF. Based on this, the sufficient condition of the stochastic stability is given for these two filters. Moreover, this section further analyzes the average communication arrival rate of these two filters and provides the upper and lower bounds of average communication arrival rate.
6.4.1 Stochastic Stability Analysis for SETCKF Like the stochastic stability analysis for ETCKF, the system is linearized at first. The one-step prediction error of the system state and the estimation error of system measurement can be linearized in the following way x˜k|k−1 = αk−1 Fk−1 x˜k−1 + νk−1 y˜k = βk Hk x˜k|k−1 + ωk ,
(6.35)
where αk−1 = diag[α1,k−1 , . . . , αn,k−1 ] and βk = diag[β1,k , . . . , βm,k ] are used to simulate the error aroused from the first-order linearization. Fk−1 = [∂f (xk−1 )/∂xk−1 ]|xk−1 =xˆk−1 and Hk = [∂h(xk )/∂xk ]|xk =xˆk|k−1 is the first-order term of Taylor expansion. Before the stochastic stability analysis for SETCKF, the following assumption is made by ¯ ¯ β, h, Assumption 6.1 P1|0 > 0, and there are the fixed constants α, ¯ α, f¯, f , β, h, q, ¯ q, r¯ , r > 0, so that the following inequality holds. α ≤ αk ≤ α, ¯ ¯ h ≤ Hk ≤ h,
f ≤ Fk ≤ f¯,
qIn ≤ Qk ≤ qI ¯ n,
β ≤ βk ≤ β¯
rIm ≤ Rk ≤ r¯ Im .
(6.36)
Based on such an assumption, the sufficient condition of stochastic stability for SETCKF is given by the following theorem. Theorem 6.2 By considering the nonlinear system of (3.1) and using the stochastic innovational condition of (2.10) as the event triggered sampling strategy, if the system satisfies Assumption 6.1 and Hk is invertible at any time instance k, there
164
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
exist the bounded constants p, p¯ > 0 when Y ≥ yIm and 0 < E(x˜1|0 2 ) ≤ σ , so that ¯ n pIn ≤ Pk < pI E(x˜k|k−1 2 ) ≤
(6.37)
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(6.38)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, ¯ k¯ = β¯ h¯ p/r, ¯ τ = q/(α¯ 2 f¯2 /p + q). Moreover, the average communication arrival rate satisfies 1−
1 det(Im + Y )
≤γ ≤1−
1 det(Im + Y¯ )
(6.39)
,
where Y = (βhp + r)Y , Y¯ = (β¯ h¯ p¯ + r¯ )Y Proof Firstly, as Pxk yk and Pyk can be separately linearized as Pxk yk = Pk|k−1 HkT βk
(6.40)
Pyk = βk Hk Pk|k−1 HkT βk + Rk .
(6.41)
The previous two equations are introduced into the posteriori variance in SETCKF algorithm to derive Pk = Pk|k−1 − γk Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 −(1 − γk )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk
+ Rk + Y
−1 −1
)
(6.42)
βk Hk Pk|k−1 .
The previous equation can be expanded to get Pk ≥ Pk|k−1 − γk Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 −(1 − γk )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 = Pk|k−1 − Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 . (6.43) Similar to the proving process for the lower bound of posteriori variance matrix in Theorem 3.2, it can be got by Pk ≥ [S k (P0−1 )]−1 , where S k (·) = S(S(. . . S (·))), and S(X) = (α 2 f 2 X−1 + q)−1 +
(6.44) β¯ 2 h¯ 2 r In .
k
To obtain the upper bound of Pk , (6.43) can be expanded as follows according to Lemma 3.2:
6.4 Analysis of Stochastic Stability of Event Triggered Filtering Using. . .
165
Pk ≤ γk (βk Hk )−1 Rk (HkT βk )−1 + (1 − γk )(βk Hk )−1 (Rk + Y −1 )(HkT βk )−1 ≤ (βk Hk )−1 (Rk + Y −1 )(HkT βk )−1 .
(6.45)
Based on the previous equation and Assumption 6.1, it can be got by Pk ≤
r¯ + y −1 β 2 h2
(6.46)
In .
By combining (6.44) and (6.46), (6.37) can be derived. The proving process of (6.38) is similar to Theorem 3.3, and the details will not be provided here. To prove (6.39), the average communication arrival rate is firstly defined as 1 E(γk |Fk−1 ). T +1 T
γ = lim sup T →∞
(6.47)
k=0
According to the definition (2.8) of γk , the value is within the range of set {0, 1}. Therefore, the equation above can be calculated from the following equation: 1 E(γk |Fk−1 ) T +1 T
γ = lim sup T →∞
k=0
1 P r(γk = 1|Fk−1 ) T +1 T
= lim sup T →∞
k=0
= Pr(γk = 1|Fk−1 ) 1 T = 1 − Pr e− 2 y˜k Y y˜k ≥ ηk |Fk−1 ,
(6.48)
where Pr(·) refers to the probability of stochastic variable y˜k = yk − yˆk|k−1 . As p(yk |Fk−1 ) N(yk |yˆk|k−1 , Pyk ), 1 T 1 T Pr e− 2 y˜k Y y˜k ≥ ηk |Fk−1 = E(e− 2 y˜k Y y˜k ≥ ηk |Fk−1 ) =
− 12 y˜kT (Py−1 +Y )y˜k
Rm
=
e
det(Pyk )(2π )m
dyk
1 det(I + Pyk Y )(2π )m
×
k
− 1 y˜ T (P −1 +Y )y˜
Rm
k e 2 k yk . dyk , −1 )(2π )m det((Py−1 + Y ) k
166
where
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Rm
− 21 y˜kT (Py−1 +Y )y˜k k
. e dyk det((Py−1 +Y )−1 )(2π )m k
= 1. Therefore,
1 T 1 Pr e− 2 y˜k Y y˜k ≥ ηk |Fk−1 = . det(I + Pyk Y )(2π )m
(6.49)
From (6.41), (6.37), and Assumption 6.1, it can be got by βhp + r ≤ Pyk ≤ ¯ ¯ β hp¯ + r¯ . Therefore, it comes to 1 1 1− . . (6.50) ≤γ ≤1− det(Im + (β¯ h¯ p¯ + r¯ )Y ) det(Im + (βhp + r)Y ) In sum, Theorem 6.2 is proven. Remark 6.1 According to Theorem 6.2, with the increase of design parameter Y , the estimation of system’s average communication arrival rate or the communication load will increase. In this case, the system state estimation accuracy and the stochastic stability will increase. On the contrary, the lower the average communication arrival rate is, the higher Pk will be, and Pk maybe even become divergent. Therefore, Theorem 6.2 provides the design guideline of parameter Y through the analysis of stochastic stability for SETCKF and average communication arrival rate to better balance the data communication and system state estimation accuracy.
6.4.2 Stochastic Stability Analysis for SETRCKF This section will conduct the stochastic stability analysis for the SETRCKF design in Sect. 6.3. Firstly, the calculation of the posteriori variance matrix in this algorithm may be modified as follows according to [147]: Pk = Pk|k−1 − Kk KkT E,
(6.51)
˙ k )] = (1 − ε)(2(L) − 1), (L) = where E = E[ψ(v the following assumptions can be made as
1 2π
L 0
x2
e− 2 dx. After that,
¯ h, ¯ β, h, Assumption 6.2 P1|0 > 0, and there are the fixed constants α, ¯ α, f¯, f , β, q, ¯ q, r¯ , r > 0, so that the following inequality is established ¯ α ≤ αk ≤ α, ¯ h ≤ Hk ≤ h,
f ≤ Fk ≤ f¯,
ˆ k ≤ qI qIn ≤ Q ¯ n,
β ≤ βk ≤ β¯
rIm ≤ Rˆ k ≤ r¯ Im .
(6.52)
Based on this assumption, the sufficient condition of the stochastic stability for SETRCKF.
6.4 Analysis of Stochastic Stability of Event Triggered Filtering Using. . .
167
Theorem 6.3 By considering the nonlinear system of (6.16) and using the stochastic innovational condition of (2.10) as the event triggered sampling strategy, if the system satisfies Assumption 6.2 and Hk is invertible at any time instance k, when Y ≥ yIm , E > 1 − (α¯ f¯)−2 and 0 < E(x˜1|0 2 ) ≤ σ , bounded constant p, p¯ > 0 so that ¯ n pIn ≤ Pk < pI E(x˜k|k−1 2 ) ≤
(6.53)
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(6.54)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, ¯ k¯ = β¯ h¯ p/r, ¯ τ = q/(α¯ 2 f¯2 /p + q). Moreover, the average communication arrival rate satisfies 1−
1 det(Im + Y )
≤γ ≤1−
1 det(Im + Y¯ )
(6.55)
,
where Y = (βhp + r)Y , Y¯ = (β¯ h¯ p¯ + r¯ )Y . −T /2
Proof Firstly, the filtering gain of SETRCKF Kk = Pxk yk Pyk into (6.51) to derive
is introduced
Pk = Pk|k−1 − Pxk yk [Pyk + (1 − γk )Y−1 ]−1 PxTk yk E.
(6.56)
Equations (6.40) and (6.41) are introduced into the previous equation to get Pk = Pk|k−1 − γk Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rˆ k )−1 βk Hk Pk|k−1 E −(1 − γk )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rˆ k + Y −1 )−1 ×βk Hk Pk|k−1 E.
(6.57)
According to the definition of E, E ≤ 1. Therefore, the previous equation can be expanded as Pk ≥ Pk|k−1 − γk Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rˆ k )−1 βk Hk Pk|k−1 −(1 − γk )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rˆ k )−1 βk Hk Pk|k−1 = Pk|k−1 − Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rˆ k )−1 βk Hk Pk|k−1 . (6.58) Similar to the proving process for the lower bound of the posteriori variance matrix in Theorem 6.2, it can be got Pk ≥ [S k (P0−1 )]−1 , where S k (·) = S(S(. . . S (·))), and S(X) = (α 2 f 2 X−1 + q)−1 + k
(6.59) β¯ 2 h¯ 2 r In .
168
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
According to (6.57), it is obvious that Pk ≤ Pk|k−1 . Therefore, the acquisition of upper bound of Pk can be transformed into the acquisition of the upper bound of Pk|k−1 . Equation (6.57) is introduced into the linear form of Pk|k−1 , i.e., (3.44), and Lemma 3.2 is used to get T αk−1 Pk|k−1 < (1 − E)αk−1 Fk−1 Pk−1|k−2 Fk−1
+γk−1 Eαk−1 Fk−1 (βk−1 Hk−1 )−1 Rˆ k−1 T T ×(Hk−1 βk−1 )−1 Fk−1 αk−1 + (1 − γk−1 )Eαk−1 Fk−1 (βk−1 Hk−1 )−1 T T ˆ k−1 ×(Rˆ k−1 + Y −1 )(Hk−1 βk−1 )−1 Fk−1 αk−1 + Q T < (1 − E)αk−1 Fk−1 Pk−1|k−2 Fk−1 αk−1 + Eαk−1 Fk−1 (βk−1 Hk−1 )−1 T T ˆ k−1 . ×(Rˆ k−1 + Y −1 )(Hk−1 βk−1 )−1 Fk−1 αk−1 + Q
(6.60)
According to Assumption 6.1, the mathematical induction method is used to get Pk|k−1 < (1 − E)(α¯ f¯)2 Pk−1|k−2 + E
(α¯ f¯)2 (¯r + (y)−1 ) (βh)2
k−1
1 − (α¯ f¯)−2 , [(1 − E)(α¯ f¯)2 ]j will j =0
converge. By combining (6.59) and the previous equation, (6.53) can be proved. To prove (6.54), according to the definition of !(vk ), the system state estimation in SETRCKF algorithm can be transformed as follows: xˆk = xˆk|k−1 + γk K˜ k (yk − yˆkk −1 ),
(6.62)
where K˜ k = Pxk yk [Pyk + (1 − γk )Y −1 ]−1 . After that, the proving process of (6.54) is similar to that of Theorem 3.3, so that no more detail will be provided here. The proof of (6.55) can be obtained simply by replacing the measurement noise variance matrix Rk in the proving process of (6.39) by Rˆ k . In sum, Theorem 6.3 is proven.
6.5 Simulation and Verification
169
6.5 Simulation and Verification This section will utilize IEEE 39-bus 10 generator system to verify the filtering performance and stochastic stability of the two estimation algorithms of SETCKF and SETRCKF in this chapter.
6.5.1 Verification on SETCKF Filtering Performance This section will utilize the generator G3 in the IEEE 39-bus 10 generator standard system in Sect. 3.5.1 to verify the estimation performance of SETCKF. Therefore, the system state vector and measurement vectors are shown as follows with the dimension of 7 and 4, respectively. The state equation is shown in (2.33) and (2.34). The measurement equation is shown in (2.35). x = [Ed y = [id
Eq iq
σ vd
ω
Ef d
Rf
VR ]T
vq ]T .
(6.63)
The simulation scenario is the DSE for the generator G3 for 15 s after the sudden break between bus 14 and bus 15. The sampling period t = 0.02 s. Similarly, this simulation will make the comparison of the filtering performances between the SETCKF and CKF to verify that SETCKF is capable of effectively reducing the communication rate at a similar estimation accuracy. The comparison between SETCKF and CKF-I is also given to prove that SETCKF can deliver higher filtering accuracy under the same communication rate. Moreover, this section will also compare the estimation accuracy between SETCKF and ETCKF in Chap. 3. For these four filterings, the system noise variance is set as Q = diag[10−3 , 10−3 , 10−7 , 10−7 , 10−7 , 10−7 , 10−7 ] and the measurement noise variance is R = diag[10−2 , 10−2 , 10−2 , 10−2 ].
6.5.1.1
Comparison of Dynamic Performance with Different Y
This section will compare the dynamic filtering performances of SETCKF and CKF-I at different arrival rates. The CKF filtering will provide the template of filtering performance with full communication. As SETCKF and ETCKF have similar dynamic filtering performance, it is hard to distinguish them in the figure. Therefore, the estimation accuracy of both will be compared in the estimation error comparison in the next section. This section will conduct two simulations at Y = 1.9I and Y = 0.2I , and the simulation results are shown in Figs. 6.2 and 6.3. The first two graphs in Figs. 6.2 and 6.3 indicate the logic variable γk and γkI for SETCKF and CKF-I algorithms at each moment, respectively, where the
170
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5 System CKF SETCKF CKF−I
0
−0.5 0
5
10
15
Time (s) (c) Fig. 6.2 System state and filtering results for three kinds of filters with Y = 1.9I , where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCKF and the cyan dash line represents the CKF-I. (a) The triggering condition of SETCKF. (b) The triggering condition of CKF-I. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
6.5 Simulation and Verification
171
Eq (p.u.)
1.5 1 0.5 0 −0.5 0
5
10
15
10
15
10
15
Time (s) (d)
4
σ (rad)
3 2 1 0 0
5
Time (s) (e)
1.2
ω (p.u.)
1.15 1.1 1.05 1 0
5
Time (s) (f) Fig. 6.2 (continued)
172
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
3.5
E (p.u.)
3
fd
2.5 2 1.5 0
5
10
15
10
15
10
15
Time (s) (g)
Rf (p.u.)
0.55 0.5 0.45 0.4 0.35 0
5
Time (s) (h)
10
R
V (p.u.)
8 6 4 2 0 0
5
Time (s) (i) Fig. 6.2 (continued)
6.5 Simulation and Verification
173
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
500
600
700
k (a)
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
700
k (b)
Ed (p.u.)
1 0.5 System CKF SETCKF CKF−I
0 −0.5 −1 0
5
10
15
Time (s) (c) Fig. 6.3 System state and filtering results for three kinds of filters with Y = 0.2I , where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCKF and the cyan dash line represents the CKF-I. (a) The triggering condition of SETCKF. (b) The triggering condition of CKF-I. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
174
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
1.5
Eq (p.u.)
1 0.5 0 −0.5 0
5
10
15
10
15
10
15
Time (s) (d)
2
σ (rad)
1.5 1 0.5 0 0
5
Time (s) (e)
ω (p.u.)
1.15
1.1
1.05
1 0
5
Time (s) (f) Fig. 6.3 (continued)
6.5 Simulation and Verification
175
Efd (p.u.)
3.5 3 2.5 2 1.5 0
5
10
15
10
15
10
15
Time (s) (g)
0.55
R (p.u.)
0.5
f
0.45 0.4 0.35 0
5
Time (s) (h)
10
R
V (p.u.)
8 6 4 2 0 0
5
Time (s) (i) Fig. 6.3 (continued)
176
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
value of γk is determined by Algorithm 6.1 and the value of γkI is generated stochastically and satisfies Pr(γkI = 1) = γ . Here, γ is the average communication rate. These four graphs show that the SETCKF’s arrival rate in two simulations is 50.7% and 18.3%, respectively. The CKF-I’s arrival rate is 50.3% and 21.6%. As shown in Fig. 6.2, although the SETCKF has data transmission load reduced by half when the communication rate is close to 50%, the filtering performance of SETCKF and CKF are basically the same. Although the CKF-I can deliver the satisfactory filtering results, its filtering performance is still inferior to the SETCKF at some time instances. When the communication rate further reduces to nearly 20%, Fig. 6.3 shows that the accuracy of CKF-I is severely poorer than SETCKF although its arrival rate is slightly higher than SETCKF. The SETCKF can still deliver acceptable filtering performances at this low communication rate. The comparison between Figs. 6.2 and 6.3 shows that the SETCKF is better than CKF-I in filtering performances at both communicate rates. With the decline in design parameter Y , the SETCKF’s communication rate and the corresponding filtering performance will decline accordingly.
6.5.1.2
Comparison of Filtering Error with Different Y
This section will test the filtering error of the SETCKF at different Y to describe the influence of parameter Y on SETCKF filtering accuracy. Moreover, it will also compare the filtering error between SETCKF and CKF-I filtering at different arrival rate to further verify the effectiveness of SETCKF algorithm. The definition of filtering error is the same in this section as Sect. 3.5, i.e., (3.79). Similarly, the simulation results of the Monte Carlo method are shown in Fig. 6.4. As shown in Fig. 6.4, when Y ≥ 4.5I , i.e., the communication rate is at least 70%, the SETCKF and CKF basically have similar filtering errors. When the communication rate is further reduced, the difference between SETCKF and CKF in filtering error will gradually become significant within the acceptable range, even if the communication rate is as low as 30%. However, the filtering error of CKF-I is always larger than that of SETCKF at each arrival rate setting. When the arrival rate declines to 30% or even lower, the filtering accuracy will decrease substantially and even become unreliable. Moreover, the comparison on the filtering error and arrival rate of SETCKF under different Y can lead to the same conclusions as the previous section, verifying that the arrival rate and filtering accuracy will decline with the decrease of Y . Moreover, in order to compare the filtering performances between SETCKF and ETCKF developed in Chap. 3 under the same arrival rate, the Monte Carlo method is adopted for simulation of these two filters under the arrival rate of 50%, whose filtering errors are shown in Fig. 6.5. It shows that the ETCKF and SETCKF’s filtering performance are both better than CKF-I under the same arrival rate since the former two filters can fully utilize the measurements implicitly contained in the event triggered condition to correct the posteriori variance when no measurement is
6.5 Simulation and Verification
177
(a)
Estimation error
0.08 0.06 0.04 0.02 0 0
5
10
15
10
15
Time(s) (b)
Estimation error
0.1 0.08 0.06 0.04 0.02 0 0
5
Time(s) (c)
Fig. 6.4 Filtering error for CKF, SETCKF, and CKF-I under different Y , where the CKF is drawn in blue, and the SETCKF is marked in red and compared with the CKF-I in cyan. (a) Y = 20I , the estimation error for three filterings under the communication rate is 90%. (b) Y = 4.9I , the estimation error for three filterings under the communication rate is 70%. (c) Y = 1.9I , the estimation error for three filterings under the communication rate is 50%. (d) Y = 0.55I , the estimation error for three filterings under the communication rate is 30%
178
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Estimation error
0.2 0.15 0.1 0.05 0 0
5
10
15
Time(s) (d)
Fig. 6.4 (continued)
Fig. 6.5 The estimation error for four filters under the communication rate 50%, where the CKF is drawn in blue, and the SETCKF is marked in red and the ETCKF is shown in green and compared with the CKF-I in cyan
received. Moreover, it is further shown that the SETCKF has slightly better filtering accuracy than ETCKF because CKF is established based on Gaussian assumption, but the ETCKF’s innovation based condition ruins the Gaussian property of the measurement innovational information so that it is subject to the truncated Gaussian distribution, while the stochastic innovational condition of SETCKF can still ensure the Gaussian property of the measurement. Finally, the same method in Sect. 3.5.2 is adopted for the simulation of communication delay to come up with the results shown in Table 6.1. The table shows that the SETCKF, like ETCKF, is also capable of reducing communication delay due to its ability to reduce data transmission load.
a The
Communication rate (%) 100% 90% 70% 50% 30% Max. delay (ms) 4.02 3.64 3.59 3.43 2.65
Min. delay (ms) 2.14 1.98 1.92 1.76 1.53
amount of transmission data corresponds to only one PMU generated in the 15 s simulation
CKF SETCKF (Y SETCKF (Y SETCKF (Y SETCKF (Y
Amount of dataa (Bytes) 108,502 = 20I ) 96,953 = 4.9I ) 75,730 = 1.9I ) 54,260 = 0.55I ) 32,460
Table 6.1 Communication delay between PMU and control center Avg. delay (ms) 2.75 2.57 2.28 1.93 1.72
% of delay reduction 0% 6.5% 17.1% 29.8% 37.5%
6.5 Simulation and Verification 179
180
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
6.5.2 Verification on SETRCKF This section utilizes the generator G7 in the IEEE 39-bus 10 generator standard system to verify the estimation performance of SETRCKF. Therefore, the system state vector and measurement vector are both in the dimension of 4, as formulated in the following equation. The state equation is shown in (2.33), and the measurement equation is formulated in (2.35). x = [Ed y = [id
Eq iq
σ vd
ω]T vq ]T .
(6.64)
Similarly, this section investigates the DSE performance of the generator G7 within 15 s after the sudden break between bus 14 and bus 15. The sampling period t = 0.02 s. This simulation will compare the filtering performance of three filtering algorithms, i.e., SETRCKF, RCKF, and RCKF-I, to prove that the SETRCKF is capable of reducing data communication while guaranteeing the filtering accuracy. The filtering performance of the two filters, i.e., SETRCKF and SETCKF, is compared to verify that the SETRCKF is capable of handling nonGaussian noise. For all aforementioned four filtering algorithms, the process noise is set as Gaussian white noise with an unknown variance, and the measurement noise is also non-Gaussian and subject to the distribution as (6.17), where ε = 0.3, 1 = diag[10−2 , 10−2 , 10−2 , 10−2 ], 2 = diag[10−3 , 10−3 , 10−2 , 10−3 ].
6.5.2.1
Comparison of Dynamic Performance at Different Y
This section compares the dynamic filtering performance of SETRCKF and RCKFI at different arrival rates. The RCKF filtering will provide the template of performance with full communication. This section conducts two simulations at Y = 1.7I and Y = 0.15I , and the simulation results are shown in Figs. 6.6 and 6.7. The last two graphs in Figs. 6.6 and 6.7 indicate the logic variable γk and γkI of SETRCKF and RCKF-I at each moment, respectively, where the value of γk is determined by Algorithm 6.3, and the value of γkI is stochastically generated and satisfies Pr(γkI = 1) = γ , where γ is the average communication rate. The four graphs show that the SETRCKF’s arrival rates are 49.6% and 20.3% for these two simulations, respectively, and the CKF-I’s arrival rates are 50.5% and 20.7%, respectively. As shown in Fig. 6.6, the SERTCKF and RCKF still have almost the same filtering performance when the communication rate is close to 50% although the data transmission is reduced by half for SETCKF. However, the CKF-I is inferior to the previous two filters in filtering performance. When the communication rate
6.5 Simulation and Verification
181
Ed (p.u.)
1
0.5 System RCKF SETRCKF RCKF−I
0
−0.5 0
5
10
15
10
15
10
15
Time (s) (a)
Eq (p.u.)
1.5 1 0.5 0 −0.5 0
5
Time (s) (b)
4
σ (rad)
3 2 1 0 0
5
Time (s) (c) Fig. 6.6 System state and estimation results for three filtering when Y = 1.7I , where the black line represents the actual system state, and the green dotted line denotes the RCKF, and the red dotted dash line stands for the SETRCKF and the cyan dash line represents the RCKF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ET-PF. (f) The triggering condition of PF-I
182
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
1.4
ω (p.u.)
1.3 1.2 1.1 1 0
5
10
15
Time (s) (d)
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (e)
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
k (f) Fig. 6.6 (continued)
6.5 Simulation and Verification
183
(a)
E (p.u.) q
1.5
1
0.5
0 0
5
10
15
10
15
Time (s) (b)
2
σ (rad)
1.5 1 0.5 0 0
5
Time (s) (c) Fig. 6.7 System state and estimation results for three filtering when Y = 0.15I , where the black line represents the actual system state, and the green dotted line denotes the RCKF, and the red dotted dash line stands for the SETRCKF and the cyan dash line represents the RCKF-I. (a) DSE performance of the emf on d axis. (b) DSE performance of the emf on q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of ET-PF. (f) The triggering condition of PF-I
184
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
1.3
ω (p.u.)
1.2 1.1 1 0.9 0
5
10
15
Time (s) (d)
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (e)
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
k (f) Fig. 6.7 (continued)
6.5 Simulation and Verification
185
is further reduced to nearly 20%, Fig. 6.7 shows that the SETRCKF can still deliver the satisfactory filtering accuracy, but the filtering results of RCKF-I are far deviated from the system state. The comparison in Figs. 6.6 and 6.7 shows that the SETRCKF is better than RCKF-I in filtering performances at both communicate rates. With the decline in design parameter Y , the SETRCKF’s communication rate and the corresponding filtering performance will decline accordingly. It will be verified more straightforward in the next section.
6.5.2.2
Comparison of Filtering Error at Different Y
This section investigates the filtering error of the SETRCKF at different Y in order to demonstrate the influence of design parameter Y on the filtering accuracy of SETRCKF in a more straightforward way. Moreover, it will compare the filtering error of SETRCKF and RCKF-I at different arrival rates to further verify the effectiveness of SETRCKF. Finally, it will compare the filtering error of SETRCKF and RCKF-I to verify the robustness of SETRCKF to non-Gaussian noise. The definition of filtering error is the same in this section as Sect. 3.5, as (3.79) shows. Similarly, the simulation results of the Monte Carlo method are shown in Fig. 6.8. As shown in Fig. 6.4, when Y ≥ 4.5I , i.e., the communication rate is at least 70%, the SETRCKF and RCKF basically have similar filtering errors. However, the SETRCKF can effectively reduce the data communication. When the communication rate is further reduced, the difference between SETRCKF and RCKF in filtering error will be gradually significant but within the acceptable range, even if the communication rate is as low as 30%. However, the filtering error of RCKF-I is always larger than that of SETRCKF at each arrival rate setting. When the arrival rate declines to 30% or even lower, the filtering accuracy will decrease substantially and become unreliable. As the SETCKF is inapplicable to non-Gaussian noise, its filtering error is significantly bigger than that of the robust SETRCKF, which verifies the effectiveness the SETRCKF under non-Gaussian noise. Moreover, it is noteworthy that the first-order term Hk of Taylor expansion of the measurement equation at xˆk|k−1 in the current simulation example is a full-rank matrix and satisfies the assumption of Theorem 6.3. Therefore, the induced 2-norm Pk using the posteriori matrix reflects its relationship with the upper and lower bounds. Figure 6.9 provides the relationship between Pk and its upper and lower bounds when the communication rate is 30%. As shown in the figure, the posteriori variance matrix Pk is within the upper and lower bounds at each time instance. Finally, in order to reflect the influence of design parameter Y on the filtering accuracy and system arrival rate of SETRCKF in a more straightforward way, the same method as in Sect. 3.5.2 is adopted to simulate the SETRCKF filtering error and system arrival under ten different Y , and the results are shown in Fig. 6.10. The figure shows that the system arrival rate and the filtering accuracy of SETCKF will increase with the increase of the design parameter Y .
186
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Fig. 6.8 The estimation error for RCKF, SETRCKF, RCKF-I and SETCKF with different Y , where the RCKF is drawn in blue, and the SETRCKF is marked in red and the RCKF-I is shown in green and compared with the SETCKF-I in cyan. (a) Y = 19I , the estimation error for four filterings when the communication rate is about 90%. (b) Y = 5I , the estimation error for four filterings when the communication rate is about 70%. (c) Y = 1.7I , the estimation error for four filterings when the communication rate is about 50%. (d) Y = 0.45I , the estimation error for four filterings when the communication rate is about 30%
6.6 Conclusions
187
Estimation error
0.3 0.25 0.2 0.15 0.1 0.05 0
0
5
10
15
Time(s) (d)
Fig. 6.8 (continued)
Pk and its bounds
0.01 0.008 ||P || k
0.006
Upper Bound Lower Bound
0.004 0.002 0 0
5
10
15
Time (s) Fig. 6.9 The relationship between Pk and lower/upper bound when the communication rate is 30%, where the red line represents the 2-norm of the posterior matrix, and the blue dotted line and the green dash line denote the upper and lower bound, respectively
6.6 Conclusions This chapter develops the event triggered robust nonlinear filtering method based on stochastic innovational condition and robust CKF to address the issue of DSE in WAMS under non-Gaussian noise. The results of this chapter are summarized as follows: 1. The stochastic event triggered sampling strategy and stochastic event triggered cubature Kalman filtering at the control center are designed based on the stochastic innovational condition and cubature Kalman filter, respectively. The inference process is provided theoretically.
188
6 Event Triggered Robust Cubature Kalman Filter Using Stochastic Innovational. . .
Fig. 6.10 The filtering error and arrival rate of SETRCKF under different Y
2. Firstly, the adaptive method is used to estimate the process noise for the issue of unknown system noise. Secondly, the sliding window estimation method is used to estimate the measurement noise for the issue of non-Gaussian measurement noise. Finally, the Huber equation is used for the robustness design of SETCKF so as to finalize the design of SETRCKF. 3. The stochastic stability of SETCKF and SETRCKF are analyzed in this chapter. The Lyapunov method is used to prove the stochastic stability of SETCKF and SETRCKF and the sufficient conditions of stochastic stability for these filters are provided as determined by offline parameters. 4. Verification are conducted on IEEE 39-bus to test these two event triggered algorithms in this chapter. The simulation results show that the SETCKF can deliver higher estimation accuracy than the ETCKF in Chap. 3 since it can guarantee the Gaussian characteristics of measurement innovational information. It also proves that SETCKF is capable of reducing the communication delay. Moreover, it shows the SETRCKF is applicable for the non-Gaussian noise and proves the relationship between the design parameter with the estimation accuracy of filtering and the arrival rate through simulations. It offers the design guideline for the selection of design parameter in engineering.
Chapter 7
Event Triggered Suboptimal CKF Upon Channel Packet Dropout
7.1 Introduction The channel packet dropout is a common encountered and widely existent in the communication network of WAMS. The causes of packet dropout include the circuit fault, nodal hardware fault, network congestion, electromagnetic interference, etc. The occurrence of channel packet dropout will inevitably lead to a decline and even divergence of the DSE’s accuracy. Therefore, the design and filtering stability of the DSE is widely investigated considering the channel packet dropout. Among various research, a representative research is the intermittent observation filter. For a linear system, the independent Bernoulli process of the same distribution was utilized to modeling the channel packet dropout. Based on this, the intermittent Kalman filter was designed and its stochastic stability was analyzed [2], which pointed out that the intermittent Kalman filter is the optimal filter in the Bernoulli packet dropout model and proposed that the filter’s stochastic stability could be judged by the uniform and bounded mean of filter variance matrix. Furthermore, it provided the sufficient condition of the stochastically bounded filter. For a nonlinear system, the intermittent extended Kalman filter was designed in [16] under the Bernoulli packet dropout model and its stochastic stability was analyzed based on [2], which pointed out that, unlike a linear filter, the filtering variance matrix can only be taken as the performance index of a nonlinear filter instead of criteria of stochastic stability. Therefore, it was necessary to use bounded filtering error mean as the criteria index for nonlinear filter’s stochastic stability. Based on this, the sufficient condition was provided for the stochastic stability of intermittent extended Kalman filter. However, this sufficient condition uses the online parameters in the iterative process so that it is only possible to evaluate the filter performance but impossible to provide the design guideline. To address this issue, a suboptimal Kalman filter was designed under the Bernoulli packet dropout model for the linear system and points out that the © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Z. Li et al., Event-Trigger Dynamic State Estimation for Practical WAMS Applications in Smart Grid, https://doi.org/10.1007/978-3-030-45658-0_7
189
190
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
sufficient condition of a filter’s stochastic stability could be provided that completely comprises offline parameters although the filtering accuracy was sacrificed in part [148]. Based on this, the previous results were extended to the nonlinear system, and the extended suboptimal filter was designed under the Bernoulli packet dropout model and its stochastic stability was proven [17]. However, the extended suboptimal filters, like EKF, use Taylor series expansion for local linearization of the nonlinear system so that it can hardly be applied in a strong nonlinear system. Moreover, although the event triggered sampling strategy can effectively address the issue of network congestion so as to reduce the occurrence of channel packet dropout to one certain extent, there are various causes of channel packet dropout as mentioned above. Therefore, it is also necessary to consider the influence of channel packet dropout on filtering accuracy, even if the event triggered sampling strategy is adopted. However, there is almost no relevant research on event triggered nonlinear filter considering channel packet dropout. In summary, to address the issue of channel packet dropout in the transmission network for a nonlinear DSE using event triggered sampling strategy, it is necessary to investigate the design of event triggered nonlinear filtering considering the channel packet dropout. This chapter will firstly design the suboptimal CKF according to the concept of suboptimal Kalman filter in case of channel packet dropout. The design of event triggered suboptimal will be further studied in case of channel packet dropout. Finally, the stochastic stability of both filters will be analyzed. This chapter is arranged as follows: Section 7.2 provides a detailed description of the design process of suboptimal CKF in case of channel packet dropout. Section 7.3 illustrates the design of event triggered suboptimal CKF in case of channel packet dropout. Section 7.4 analyzes the stochastic stability of these two filters and provides the sufficient condition of stochastic stability that completely comprises offline parameters. In section 5.5, the IEEE 39 buses is used to verify the effectiveness of the algorithm.
7.2 Design of Suboptimal CKF Under Channel Packet Dropout This section firstly analyzes the design principles of the suboptimal CKF under the Bernoulli packet dropout model as proposed in [148] and then uses the spherical cubature integration principle to extend the previous filter to variant for nonlinear system and thus obtain the suboptimal CKF under the Bernoulli packet dropout model. Before design, the channel packet dropout model is firstly described as follows: xk+1 = f (xk ) + νk , yk = γ˜k h(xk ) + ωk ,
(7.1) (7.2)
7.2 Design of Suboptimal CKF Under Channel Packet Dropout
191
where f (·) and h(·) are both known nonlinear equations and continuously differentiable within domain. xk ∈ Rn is the system state vector of n-dimension, and yk ∈ Rm is the system measurement vector of m-dimension. νk ∈ Rn and ωk ∈ Rm are the system and observation noises, respectively, both of which are subject to zero-mean Gaussian distribution. Their variance matrix are Q > 0 and R > 0, respectively. γ˜k ∈ {0, 1} is the logic variable and indicates whether the packet dropout occurs during the transmission of measurement data. Its mathematical expression is expressed by γ˜k =
1,
Packet dropout does not occur,
0,
Packet dropout occurs.
(7.3)
The channel packet dropout process is modeled via the independent Bernoulli process of the same distribution. The model’s feature is that packet dropout at the current moment has no relationship to the packet dropout of the previous moment and the probability is the same for channel packet dropout at each moment, i.e., for each time instance k, γ˜k satisfies Pr(γ˜k = 1) = λ, Pr(γ˜k = 0) = 1 − λ,
(7.4)
where λ ∈ [0, 1] is the known arrival rate of measurement. Moreover, it is assumed that the system state’s initial value x0 obeys the Gaussian distribution N(x0 |x0 , P0 ) and γ˜k , νk , ωk , and x0 are independent of each other. Based on the previous packet dropout description, the system diagram of the DSE under channel packet dropout is shown in Fig. 7.1.
7.2.1 Suboptimal Kalman Filter Under Channel Packet Dropout For the linear system corresponding to the nonlinear system of (7.1) xk+1 = Axk + νk , yk = γ˜k Cxk + ωk ,
Fig. 7.1 The system diagram of DSE under packet dropout
(7.5)
192
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
where A ∈ Rn×n is the system state matrix, and C ∈ Rm×m is the system observation matrix. The measurement is assumed to be transmitted with a time stamp [148]. Therefore, the value of γ˜k , i.e., if the channel packet dropout occurs, is known to the remote filter. Based on this assumption, the suboptimal linear filter was designed under channel packet dropout to minimize E[(xk − xˆk|k−1 )(xk − xˆk|k−1 )T ]. To achieve this purpose, after acquiring the system state estimation xˆk−1 at the previous time instance and its variance matrix Pk−1 , the time update is carried out as follows firstly referred to the conventional Kalman filtering. xˆk|k−1 = Axˆk−1 Pk|k−1 = APk−1 AT + Q.
(7.6)
After that, xk is updated according to the following equation: xˆk = xˆk|k−1 + Kk (yk − γ˜k yˆk|k−1 ),
(7.7)
where Kk is the filtering gain. The issue is thus transformed into how to choose an appropriate Kk to minimize E[(xk − xˆk|k−1 )(xk − xˆk|k−1 )T ]. To acquire Kk , x˜k|k−1 = xk − xˆk|k−1 is defined. The equations (7.4) and (7.7) are introduced into the definition to obtain x˜k|k−1 = Ax˜k−1|k−2 − AKk−1 γ˜k−1 C x˜k−1|k−2 + νk−1 − AKk−1 ωk−1 .
(7.8)
T ) can be calculated as Therefore, E(x˜k|k−1 x˜k|k−1 T T T ) = APk−1|k−2 AT +Q+AKk−1 Rk−1 Kk−1 AT −λAPxk−1 yk−1 Kk−1 AT E(x˜k|k−1 x˜k|k−1
−AKk−1 PxTk−1 yk−1 AT ,
(7.9)
where Pxk−1 yk−1 = Pk−1|k−2 C T . ∗ = λPxk−1 yk−1 (λCPk−1|k−2 C T + R)−1 , (7.9) can be modified as follows: If Kk−1 T E(x˜k|k−1 x˜k|k−1 ) =APk−1|k−2 AT +Q−Aλ2 Pxk−1 yk−1 (λCPk−1|k−2 C T +R)−1 PxTk−1 yk−1 ∗ )(λCP T ∗ T ×AT +(Kk−1 −Kk−1 k−1|k−2 C +R)(Kk−1 −Kk−1 ) .
(7.10)
∗ , K ∗ According to the previous equation, when Kk−1 = Kk−1 k−1 = Kk−1 is T −1 minimal. Therefore, Kk = λPxk yk (λC × Pk|k−1 C + R) . T T According to the definition of E(x˜k|k−1 x˜k|k−1 ), it is known that E(x˜k|k−1 x˜k|k−1 ). By combining the second equation (7.6) and (7.10), it can be obtained by T . Pk−1 = Pk−1|k−2 − Kk−1 (λCPk−1|k−2 C T + R)Kk−1
(7.11)
7.2 Design of Suboptimal CKF Under Channel Packet Dropout
193
Therefore, the measurement update process of suboptimal Kalman filter under the Bernoulli packet dropout model is as follows: Kk = λPxk yk (λCPk|k−1 C T + R)−1 , xˆk = xˆk|k−1 + Kk (yk − γ˜k yˆk ), Pk = Pk|k−1 − Kk (λCPk|k−1 C T + R)KkT .
(7.12)
7.2.2 Design of Suboptimal CKF Under Channel Packet Dropout Targeted at the nonlinear system under channel packet dropout described in the independent Bernoulli stochastic process of the same distribution as in (7.1), the spherical cubature integration principle is used to extend the suboptimal Kalman filter under packet dropout to nonlinear system application. This section designs the suboptimal CKF in case of channel packet dropout based on the previous section. Firstly, after acquiring the system state estimation xˆk−1 of the previous moment and its variance matrix Pk−1 , the time update is performed according to the conventional CKF as shown in the following equation: √ ξ
(i)
=
(i)
Xk−1 = xˆk−1 +
nei , i = 1, . . . , n, √ − nei−n , i = n + 1, . . . , 2n,
Pk−1 ξ (i) , i = 1, . . . , 2n,
(i) (i) Xˆk = f (Xk−1 ), i = 1, . . . , 2n,
1 ˆ (i) Xk , 2n 2n
xˆk|k−1 =
i=1
1 ˆ (i) (i) (Xk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 . 2n 2n
Pk|k−1 =
(7.13)
i=1
According to the inference process of suboptimal linear filter, the system state estimation of the suboptimal CKF under the Bernoulli packet dropout model can be calculated in the following equation: xˆk = xˆk|k−1 + Kk (yk − γ˜k yˆk|k−1 ), where Kk is the filtering gain, whose calculation process is
(7.14)
194
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout (i)
Xk|k−1 = xˆk|k−1 +
.
Pˆk|k−1 ξ (i) ,
(i) Yˆ k(i) = h(Xk|k−1 ),
i = 1, . . . , 2n,
i = 1, . . . , 2n,
1 ˆ (i) Yk , = 2n 2n
yˆk|k−1
i=1
Py∗k =
1 ˆ (i) (i) (Yk − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T , 2n
Pyk =
λPy∗k
2n
i=1
+ Rk ,
1 (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k(i) − yˆk|k−1 )T , 2n 2n
Pxk yk =
i=1
. Kk = λPxk yk Py−1 k
(7.15)
Therefore, the posterior variance matrix can be computed as Pk = Pk|k−1 − Kk Pyk KkT .
(7.16)
It is noteworthy that, according to the comparison of the suboptimal CKF under the Bernoulli packet dropout model and the intermittent CKF under the packet dropout model of Sect. 3.5, the calculation of posteriori variance in the intermittent CKF is independent of γ˜k and therefore only the mean of posteriori variance matrix can be inferred, i.e., the upper and lower bounds of E(Pk|k ), while the upper bound of Pk is a stochastic value related to γ˜k . But the premise for the boundedness of stochastic stability index E(xk − xˆk|k−1 2 ) for nonlinear filter is the boundedness of Pk . Therefore, the sufficient condition for the stochastic stability of an intermittent CKF can only be determined in an online manner. On the contrary, the suboptimal CKF’s posteriori variance matrix is related to the expectation λ of γ˜k so that the upper bound of Pk is definite (to be proven in Sect. 7.5), and the stochastic stability condition of suboptimal CKF can be determined offline, which is beneficial to the prior design in engineering.
7.3 Design of Event Triggered Suboptimal Cubature Kalman Under Channel Packet Dropout For the nonlinear system of (7.1), this section designs the event triggered suboptimal CKF under channel packet dropout when the stochastic innovational condition is selected as the event triggered sampling strategy and there exists the channel packet dropout described by the independent Bernoulli stochastic process of the same
7.3 Design of Event Triggered Suboptimal Cubature Kalman Under Channel. . .
195
distribution based on the suboptimal CKF. This design also has two parts, i.e., the design of event triggered sampling strategy at the sensor and the design of the filtering algorithm at the control center.
7.3.1 Design of Event Triggered Sampling Strategy Before designing the event triggered sampling strategy, it is firstly assumed whether the packet dropout occurs during transmission, i.e., the value of γ˜k is known to the sensor and the control center. If the TCP/IP protocol is adopted for the transmission network, the receiver will know whether the data transmitted currently are lost due to the protocol’s handshake mechanism. Therefore, the previous assumption is reasonable. As the stochastic innovational condition is used as the event triggered sampling strategy, one local filter is equipped at the sensor node to eliminate data feedback and further reduces the data communication load like the SETCKF. The difference from the SETCKF is that the local filter needs to acquire the value of γ˜k to determine the subsequent implementation of the measurement update. According to the previous filter structure, the event triggered sampling strategy’s design process is as follows. When the local filter acquires the system state estimation xˆk−1 of the previous moment and its variance matrix Pk−1 , the time update is performed according to (7.13). After that, the one-step estimation of the system measurement is calculated so as to compute the stochastic innovational information condition according to the aforementioned estimation and the designed parameter Y . It can be written as (i)
Xk|k−1 = xˆk|k−1 +
Pk|k−1 ξ (i) ,
(i) Yˆ k(i) = h(Xk|k−1 ),
i = 1, . . . , 2n,
i = 1, . . . , 2n,
1 ˆ (i) Yk , 2n 2n
yˆk|k−1 =
i=1
− 12 (yk −yˆk|k−1 )T Y (yk −yˆk|k−1 )
ϕ(yk ) = e
.
(7.17)
Finally, ϕ(yk ) and the stochastic variable ηk of uniform distribution over [0, 1] to ensure the γk to determine whether it is necessary to transmit the measurement of the current moment to the control center. Therefore, the previous event triggered sampling strategy and its pseudo-code are shown in Algorithm 7.1.
196
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Algorithm 7.1 Event triggered sampling strategy Input: the current measuremet yk , design parameter Y Output: logic variableγk 1: Initialization: the initial system state x0 , the covariance matrix 2: Iteration process: 3: While k ≥ 1 do √ nei , i = 1, . . . , n 4: ξ (i) = √ − nei−n , i = n + 1, . . . , 2n √ (i) Xk−1 = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n 5: (i) (i) 6: Xˆk = f (X ), i = 1, . . . , 2n 7: 8: 9: 10: 11:
xˆk|k−1 =
(i)
(i) Yˆ k
Xˆk
i=1 2n
(i)
(i)
(i)
(Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 = xˆk|k−1 + Pk|k−1 ξ (i) , i = 1, . . . , 2n
Pk|k−1 = Xk|k−1
k−1 2n
1 2n 1 2n
i=1
(i)
= h(Xk|k−1 ), 2n 1 ˆ (i) yˆk|k−1 = 2n Yk
i = 1, . . . , 2n
i=1
1
12: ϕ(yk ) = e− 2 (yk −yˆk|k−1 ) Y (yk −yˆk|k−1 ) 13: stochastically generate ηk over [0, 1] of uniform distribution. 14: if ϕ(yk ) < ηk then 15: return γk = 1 16: else 17: return γk = 0 18: end if 19: end while T
7.3.2 Design of Filtering at Center Considering the channel packet dropout, the measurement Yk acquired by the filter in control center at time instance k will become more complicated and can be indicated by the following equation:
Yk =
⎧ ⎪ ⎪ ⎨yk ∈ [yk |ηk ≤ ϕ(yk )] , γk = 0, yk ⎪ ⎪ ⎩0
, γk = 1 γ˜k = 1,
(7.18)
, γk = 1 γ˜k = 0.
According to the assumption of the previous section, it is known to the center whether any channel packet dropout occurs during the transmission. Therefore, the measurement information set received by the remote filter in center at time instance k is Fk = {Y1 , Y2 , . . . , Yk , {γ1 , γ2 , . . . γk }, {γ˜1 , γ˜2 , . . . , γ˜k }}.
(7.19)
7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of. . .
197
According to the definition of Yk , when γk = 0, i.e., when the sensor chooses not to transmit the measurement as the triggering conditions are not satisfied, the value of γ˜k does not prevent the remote filter from acquiring the measurement hidden in the stochastic new information. Therefore, the filtering is the same as the one applicable when the SETCKF γk = 0. When γk = 1, i.e., the sensor determines to send the measurement to the remote filter, the remote filter may not receive the measurement and has no information relating to the measurement due to the presence of channel packet dropout. In this case, the suboptimal CKF of Sect. 7.2 may be adapted to execute the filtering iteration process. Therefore, the remote filtering algorithm is as shown in Algorithm 7.2.
7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of Channel Packet Dropout This section presents the stochastic stability analysis for suboptimal CKF and event triggered suboptimal CKF under channel packet dropout of this chapter. The evaluation indexes of stochastic stability is still the boundedness of the expectation of one-step prediction estimation error of system state E(x˜k|k−1 2 ). Through the analysis, the sufficient condition of stochastic stability for these two filters is provided based on the sufficient condition of boundedness of the posteriori variance matrix for these two filters.
7.4.1 Analysis of Stochastic Stability for Suboptimal CKF Before analyzing the stochastic stability for the suboptimal CKF, the pseudolinearization method is firstly utilized to linearize the system (7.5). The one-step prediction estimation error of the system state and the one-step prediction estimation error of measurement may be linearized as follows: x˜k|k−1 = αk−1 Fk−1 x˜k−1 + νk−1 , y˜k = βk Hk x˜k|k−1 + ωk ,
(7.20)
where αk−1 = diag[α1,k−1 , . . . , αn,k−1 ] and βk = diag[β1,k , . . . , βm,k ] are used to approximate the linearization error aroused from the first-order linearization. Fk−1 = [∂f (xk−1 )/∂xk−1 ]|xk−1 =xˆk−1 and Hk = [∂h(xk )/∂xk ]|xk =xˆk|k−1 are the firstorder term of Taylor expansion. According to the previous equation, the error variance matrix of one-step prediction of the system state, and the error covariance matrix of one-step prediction of measurement, and the covariance matrix of system state and measurement are linearized as follows:
198
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Algorithm 7.2 Filtering design of event triggered suboptimal CKF Step 1: generate 2n Sigma points √
nei , i = 1, . . . , n √ − nei−n , i = n + 1, . . . , 2n = xˆk−1 + Pk−1 ξ (i) , i = 1, . . . , 2n
ξ (i) = (i)
Xk−1
Steo 2: time update Xˆk
(i)
(i)
= f (Xk−1 ), i = 1, . . . , 2n
xˆk|k−1 =
2n 1 (i) Xˆk 2n i=1
Pk|k−1 =
2n 1 (i) (i) (Xˆk − xˆk|k−1 )(Xˆk − xˆk|k−1 )T + Qk−1 2n i=1
(i) Xk|k−1 (i) Yˆ k
= xˆk|k−1 + =
yˆk|k−1 =
Pk|k−1 ξ (i) ,
(i) h(Xk|k−1 ),
i = 1, . . . , 2n
i = 1, . . . , 2n
2n 1 (i) Yˆ k 2n i=1
Step 3: measurement update when γk = 0 Py k =
2n 1 (i) (Yˆ k − yˆk|k−1 )(Yˆ k(i) − yˆk|k−1 )T + Rk 2n i=1
Px k y k =
2n 1 (i) (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n i=1
Kk = Pxk yk [Pyk + Y −1 ]−1 xˆk = xˆk|k−1 Pk = Pk|k−1 − Kk PxTk yk when γk = 1
Py∗k =
2n 1 (i) (i) (Yˆ k − yˆk|k−1 )(Yˆ k − yˆk|k−1 )T 2n i=1
Pyk = λPy∗k + Rk Px k y k = λ
2n 1 (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k(i) − yˆk|k−1 )T 2n i=1
Kk = Pxk yk Py−1 k xˆk = xˆk|k−1 + Kk (yk − γ˜k yˆk|k−1 ) Pk = Pk|k−1 − Kk Pyk KkT Step 4: execute the loop from Step 1 to Step 3.
7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of. . . T Pk|k−1 = αk−1 Fk−1 Pk−1 Fk−1 αk−1 + Qk−1 ,
Pxk yk = λPk|k−1 HkT βk ,
199
(7.21) (7.22)
Pyk = λβk Hk Pk|k−1 HkT βk + Rk .
(7.23)
Next, the following assumption is given by ¯ β, Assumption 7.1 P1|0 > 0, and there are the fixed constants α, ¯ α, f¯, f , β, ¯ h, q, ¯ q, r¯ , r > 0, so that the following inequality is established: h, α ≤ αk ≤ α, ¯ ¯ h ≤ Hk ≤ h,
f ≤ Fk ≤ f¯,
qIn ≤ Qk ≤ qI ¯ n,
¯ β ≤ βk ≤ β, rIm ≤ Rk ≤ r¯ Im .
(7.24)
Based on the previous assumption, the sufficient condition of the stochastic stability of suboptimal CKF is given by the following theorem. Theorem 7.1 Considering the nonlinear system of (7.2) and there exists the channel packet dropout described in the stochastic Bernoulli process expressed in (7.2) and (7.2). If the system satisfies Assumption 7.1 and Hk is invertible at any time instance k, when λ > 1 − (α¯ f¯)−2 and 0 < E(x˜1|0 2 ) ≤ σ , there exist the bounded constants p, p¯ > 0 so that ¯ n, pIn ≤ Pk < Pk|k−1 < pI
(7.25)
k−1 p¯ κ k−1 + (1 − τ )i , E(x˜k|k−1 ) ≤ σ (1 − τ ) p p 2
(7.26)
i=0
where, In ∈ Rn×n is a unit matrix of n order, κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, ¯ k¯ = β¯ h¯ p/r, ¯ 2 2 ¯ τ = q/(α¯ f /p + q). Proof To prove (7.25), the system posteriori variance matrix Pk can be linearized as follows according to (3.45) and (3.46): Pk =Pk|k−1 −λ2 Pk|k−1 HkT βk (λβk Hk Pk|k−1 HkT βk +Rk )−1 βk Hk Pk|k−1 . (7.27) According to the definition of λ, it is known 0 < λ ≤ 1. Therefore, Pk ≥ Pk|k−1 − Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 . (7.28) According to Assumption 7.1 and the proving process of Theorem 3.2 of Chap. 3, the lower bound on the right-hand side of the equation above is [S k (P0−1 )]−1 , where S k (·) = S(S(. . . S (·))), k
200
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
S(X) = (α 2 f 2 X−1 + q)−1 +
β¯ 2 h¯ 2 In . r
(7.29)
Therefore, the lower bound of Pk is as follows: Pk ≥ Pk ≥ [S k (P0−1 )]−1 .
(7.30)
To obtain the upper bound of Pk , it is obvious that Pk < Pk|k−1 according to (7.16). Therefore, the upper bound of Pk can be obtained by acquiring the upper bound of Pk|k−1 . To acquire the upper bound of Pk|k−1 , (7.16) of Pk is firstly introduced into (7.22) to derive T βk−1 (λβk−1 Hk−1 Pk|k−1 = αk−1 Fk−1 [Pk−1|k−2 −λ2 Pk−1|k−2 Hk−1 T T Pk−1|k−2 Hk−1 βk−1 + Rk−1 )−1 βk−1 Hk−1 Pk−1|k−2 ]Fk−1 αk−1 + Qk−1 ,
(7.31) T β where make A = λβk−1 Hk−1 Pk−1|k−2 Hk−1 k−1 , B = Rk−1 . Lemma 3.2 of Chap. 3 is used to obtain T αk−1 + αk−1 Fk−1 (βk−1 Hk−1 )−1 Rk−1 Pk|k−1 < (1 − λ)αk−1 Fk−1 Pk−1|k−2 Fk−1 T T ×(Hk−1 βk−1 )−1 Fk−1 αk−1 + Qk−1 .
(7.32)
According to the upper and lower bounds of various parameters in Assumption 7.1, it is obtained that ¯ n. Pk|k−1 < (1 − λ)(α¯ f¯)2 Pk−1|k−2 + [¯r (α¯ f¯)2 (βh)−2 + q]I
(7.33)
According to the previous equation, the mathematical induction is used to obtain the upper bound of Pk|k−1 as follows: Pk|k−1 < p
k−1 [(1 − λ)(α¯ f¯)2 ]j In ,
(7.34)
j =0
where p = max(P1|0 , λ¯r (α¯ f¯)2 (βh)−2 + q). ¯ It is noteworthy that
k j =0
[(1 −
λ(δ))(α¯ f¯)2 ]j will converge when and only when (1 − λ)(α¯ f¯)2 < 1, i.e., λ > 1−(α¯ f¯)−2 . The minimum measurement arrival rate that guarantees the upper bound of Pk can be obtained. In general, the following can be acquired according to the equation (7.30) and (7.33).
7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of. . .
[S(P0−1 )]−1 ≤ Pk < Pk|k−1 < Pk|k−1 < p
k−1
[(1 − λ)(α¯ f¯)2 ]j In ..
201
(7.35)
j =0
Therefore, (7.25) is proven. −1 T To prove (7.26), Vk (x˜k|k−1 ) = x˜k|k−1 Pk|k−1 x˜k|k−1 is firstly defined. According to its definition and (7.35), it is obtained by 1 1 x˜k|k−1 2 ≤ Vk (x˜k|k−1 ) ≤ x˜k|k−1 2 . p¯ p
(7.36)
The one-step prediction error of system state can be modified as follows according to (7.14) and (7.20): x˜k|k−1 = αk−1 Fk−1 (In −γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 −αk−1 Fk−1 Kk−1 ωk−1 +νk−1 . (7.37) The previous equation is introduced into Vk (x˜k|k−1 ) to derive −1 Vk (x˜k|k−1 ) = [αk−1 Fk−1 (In − γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ]T Pk|k−1
×[αk−1 Fk−1 (In − γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ] −1 +(νk−1 −αk−1 Fk−1 Kk−1 ωk−1 )T Pk|k−1 (νk−1 −αk−1 Fk−1 Kk−1 ωk−1 ) −1 +[αk−1 Fk−1 (In − γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ]T Pk|k−1
×(νk−1 − αk−1 Fk−1 Kk−1 ωk−1 ) + (νk−1 − αk−1 Fk−1 Kk−1 ωk−1 )T −1 Pk|k−1 [αk−1 Fk−1 (In − γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ].
(7.38)
According to the proving process of Theorem 3.3 in Chap. 3, the expectation is conducted over x˜k−1|k−2 on both sides of the previous equation and Assumption 7.1 is considered to derive ¯ (7.39) E[Vk (x˜k|k−1 )|x˜k−1|k−2 ] < (1 − τ )Vk−1 (x˜k−1|k−2 ) + (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, where τ = q/(α¯ 2 f¯2 /p + q), k¯ = β¯ h¯ p/r ¯ ≥ Kk−1 . The following inequality is acquired from the equation above, and (7.36) and Lemma 3.3 of Chap. 3. E(x˜k|k−1 2 ) ≤
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(7.40)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p. ¯ In sum, Theorem 7.1 is proven.
202
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
7.4.2 Analysis of Stochastic Stability of Event Triggered Suboptimal CKF This section analyzes the stochastic stability of the event triggered suboptimal CKF under the channel packet dropout designed in Sect. 7.3 based on Theorem 7.1. Similarly, based on Assumption 7.1, the sufficient condition of the stochastic stability for the event triggered suboptimal CKF is given by the following theorem. Theorem 7.2 Considering the nonlinear system of (7.1) and using the stochastic innovational condition of (2.10) as the event triggered sampling strategy, it is assumed that there exists the channel packet dropout in system described by the stochastic Bernoulli process by (7.3) and (7.5). If the system satisfies Assumption 6.2 and Hk is invertible at any time instance k, when Y ≥ yIm , λ > 1 − (α¯ f¯)−2 , and 0 < E(x˜1|0 2 ) ≤ σ , there exists the bounded constant p, p¯ > 0 so that ¯ n, pIn ≤ Pk < pI E(x˜k|k−1 2 ) ≤
(7.41)
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(7.42)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, ¯ k¯ = λβ¯ h¯ p/r, ¯ τ = q/(α¯ 2 f¯2 /p + q). Proof According to the execution process of event triggered suboptimal CKF, the posteriori variance matrix of system state for this filter can be written as follows: Pk = Pk|k−1 − γk λ2 Px∗k yk (λPy∗k + R)−1 Px∗T − (1 − γk )Px∗k yk (Py∗k + R + Y −1 )−1 k yk ×Px∗T , k yk where Px∗k yk =
(7.43) 1 2n
2n i=1
(i) (i) (Xk|k−1 − xˆk|k−1 )(Yˆ k − yˆk|k−1 )T . According to (7.22)
and (7.23), the previous equation can be linearized as follows: Pk = Pk|k−1 − γ˜k λ2 Pk|k−1 HkT βk (λβk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 −(1 − γ˜ )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk + Y −1 )−1 ×βk Hk Pk|k−1 .
(7.44)
Because 0 ≤ λ ≤ 1, the previous equation can be extended as follows: Pk ≥ Pk|k−1 − γ˜k Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 −(1 − γ˜k )Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 = Pk|k−1 − Pk|k−1 HkT βk (βk Hk Pk|k−1 HkT βk + Rk )−1 βk Hk Pk|k−1 . (7.45)
7.4 Analysis of Stochastic Stability of Suboptimal Kalman Filter in Case of. . .
203
It can be acquired as follows according to the proving process of the lower bound of the posteriori variance matrix in Theorem 7.1, Pk ≥ [S k (P0−1 )]−1 ,
(7.46)
where S k (·) = S(S(. . . S (·))), S(X) = (α 2 f 2 X−1 + q)−1 +
β¯ 2 h¯ 2 r In .
k
In order to acquire the upper bound of Pk , it should be noted at first that 0 ≤ λ ≤ 1, Pk can be extended as follows: Pk ≤ Pk|k−1 − γ˜k Pk|k−1 HkT βk λ2 (λβk Hk Pk|k−1 HkT βk + Rk + Y −1 )−1 βk Hk Pk|k−1 −(1 − γ˜k )Pk|k−1 HkT βk λ2 (λβk Hk Pk|k−1 HkT βk + Rk + Y −1 )−1 βk Hk Pk|k−1 = Pk|k−1 − λ2 Pk|k−1 HkT βk (λβk Hk Pk|k−1 HkT βk + Rk + Y −1 )−1 βk Hk Pk|k−1 . (7.47) Similarly, because Pk ≤ Pk|k−1 , the derivation of upper bound of Pk can be transformed into the derivation of upper bound of Pk|k−1 . The right part of the previous equation is introduced into the linearized form of Pk|k−1 to derive T T Pk|k−1≤αk−1 Fk−1 Pk−1|k−2 Fk−1 αk−1 −αk−1 Fk−1 λ2 Pk−1|k−2 Hk−1 βk−1 (λβk−1 Hk−1 T T ×Pk−1|k−2 Hk−1 βk−1 +Rk−1 +Y −1 )−1 βk−1 Hk−1 Pk−1|k−2 Fk−1 αk−1
+Qk−1 .
(7.48)
T β −1 , After that, by assuming that A = λβk−1 Hk−1 Pk−1|k−2 Hk−1 k−1 , B = R + Y and using Lemma 3.2, the previous equation can be transformed into T αk−1 + αk−1 Fk−1 (βk−1 Hk−1 )−1 Pk|k−1 < (1 − λ)αk−1 Fk−1 Pk−1|k−2 Fk−1 T T ×(Rk−1 + Y −1 )(Hk−1 βk−1 )−1 Fk−1 αk−1 + Qk−1 .
(7.49)
According to Assumption 7.1 and the mathematic induction method, it can be obtained as Pk|k−1 < (1 − λ)(α¯ f¯)2 Pk−1|k−2 +
1 − (α¯ f¯)−2 .
204
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
By combining (7.46) and the previous equation, (7.41) can be proven. To prove (7.42), the one-step prediction error of the state of the event triggered suboptimal CKF can be linearized as follows according to the measurement update of event triggered suboptimal CKF and (7.20): x˜k|k−1 = αk−1 Fk−1 (In − γk−1 γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 −γk−1 αk−1 Fk−1 Kk−1 ωk−1 + νk−1 .
(7.51)
After that, like the proving process of Theorem 7.1, the function Vk (x˜k|k−1 ) = is defined. According to (7.41),
−1 T x˜k|k−1 Pk|k−1 x˜k|k−1
1 1 x˜k|k−1 2 ≤ Vk (x˜k|k−1 ) ≤ x˜k|k−1 2 . p¯ p
(7.52)
Similarly, (7.51) is introduced into Vk (x˜k|k−1 ) to obtain −1 Vk (x˜k|k−1 ) = [αk−1 Fk−1 (In − γk−1 γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ]T Pk|k−1
×[αk−1 Fk−1 (In − γk−1 γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ] −1 +(νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T Pk|k−1
(νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 ) −1 +[αk−1 Fk−1 (In − γk−1 γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ]T Pk|k−1
×(νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 ) +(νk−1 − αk−1 Fk−1 γk−1 Kk−1 ωk−1 )T −1 Pk|k−1 [αk−1 Fk−1 (In − γk−1 γ˜k−1 Kk−1 βk−1 Hk−1 )x˜k−1|k−2 ].
(7.53) It is assumed that ϒk−1 = γk−1 γ˜k−1 . According to the proving process of Theorem 3.3 of Chap. 3, it can be derived by ¯ (7.54) E[Vk (x˜k|k−1 )|x˜k−1|k−2 ] < (1 − τ )Vk−1 (x˜k−1|k−2 ) + (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, where τ = q/(α¯ 2 f¯2 /p + q). The variable k¯ is the upper bound of filtering gain, which is acquired as follows. According to (7.22) and (7.23), Kk can be linearized by Kk = λPk|k−1 HkT βk (λβk Hk Pk|k−1 HkT βk + Rk )−1 .
(7.55)
Because λβk Hk Pk|k−1 HkT βk ≥ 0, and according to Assumption 7.1, it can be got by
7.5 Simulation and Verification
205
Kk ≤
λp¯ h¯ β¯ ¯ = k. r
(7.56)
Finally, according to (7.52), (7.54), and Lemma 3.3 of Chap. 3, E(x˜k|k−1 2 ) ≤
k−1 p¯ κ σ (1 − τ )k−1 + (1 − τ )i , p p
(7.57)
i=0
where κ = (α¯ 2 f¯2 k¯ 2 r¯ + q)/p, ¯ k¯ = λβ¯ h¯ p/r, ¯ τ = q/(α¯ 2 f¯2 /p + q). In sum, Theorem 7.2 is proven.
7.5 Simulation and Verification This section uses IEEE 39 buses to verify these two suboptimal filterings under channel packet dropout designed in this chapter, i.e., the suboptimal CKF (CF) using periodic sampling strategy and the stochastic event triggered suboptimal cubature filter (SETCF) using stochastic innovational condition.
7.5.1 Verification on CF Performance This section applies the CF filtering algorithm of Sect. 7.2 for DSE on the generator G4 in the IEEE 39 buses 10 generator standard system to verify the effectiveness of the filtering. Therefore, the system state vector and measurement vector is formulated same as in (6.63) of Sect. 6.5, and the system state equation is same as in (2.33) and (2.34), the measurement equation is identical to (2.35). The simulation scenarios is still the DSE for the generator G4 for 15 s after the sudden break between bus 14 and bus 15. The sampling period is 0.02 s. The section compares the filtering performance between CF and CKF-I under packet dropout designed in this chapter.
7.5.1.1
Comparison of Dynamic Filtering Performance at Different Packet Dropout Rates
This section tests the DSE of a generator at three different packet dropout rates by comparing the dynamic performance of CF and CKF-I filterings. In three simulations, λ is set at 70%, 30%, and 10% and the corresponding packet dropout rate is 30%, 70%, and 90%, respectively. The results of three sets of simulations are drawn in Figs. 7.2, 7.3, and 7.4. The first two graphs in each set shows the
206
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γCF
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5 System CKF CKF−I CF
0
−0.5 0
5
10
15
Time (s) (c) Fig. 7.2 The estimation results for three filtering when λ = 0.7, i.e., the packet dropout 30%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the CKF-I and the cyan dash line represents the CF. (a) The triggering condition of CKF-I. (b) The triggering condition of CF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
7.5 Simulation and Verification
207
1
q
E (p.u.)
1.5
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
3
2
1
0 0
5
Time (s) (e)
1.2
ω (p.u.)
1.15 1.1 1.05 1 0
5
Time (s) (f) Fig. 7.2 (continued)
208
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Efd (p.u.)
3.5 3 2.5 2 1.5 0
5
10
15
10
15
10
15
Time (s) (g)
0.55
R (p.u.)
0.5
f
0.45 0.4 0.35 0
5
Time (s) (h)
10
R
V (p.u.)
8 6 4 2 0 0
5
Time (s) (i) Fig. 7.2 (continued)
7.5 Simulation and Verification
209
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γCF
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
E (p.u.) d
0.8 0.6 0.4
System CKF CKF−I CF
0.2 0 0
5
10
15
Time (s) (c) Fig. 7.3 The estimation results for three filtering when λ = 0.3, i.e., the packet dropout 70%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the CKF-I and the cyan dash line represents the CF. (a) The triggering condition of CKF-I. (b) The triggering condition of CF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
210
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
1
q
E (p.u.)
1.5
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
3
2
1
0 0
5
Time (s) (e)
1.2
ω (p.u.)
1.15 1.1 1.05 1 0
5
Time (s) (f) Fig. 7.3 (continued)
7.5 Simulation and Verification
211
Efd (p.u.)
3.5 3 2.5 2 1.5 0
5
10
15
10
15
10
15
Time (s) (g)
0.55
R (p.u.)
0.5
f
0.45 0.4 0.35 0
5
Time (s) (h)
10
R
V (p.u.)
8 6 4 2 0 0
5
Time (s) (i) Fig. 7.3 (continued)
212
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
logic variables of packet dropout for CKF-I and CF at each time instance, which are acquired via (7.3) and (7.5). From Fig. 7.2, it is shown that CKF-I and CF have almost the same dynamic filtering performance at the packet dropout rate of 30% as the CKF with full communication although it does not keep tight tracking on the real system state at specific time instances. From Fig. 7.3, it is further shown that CKF-I and CF have a similar dynamic filtering performance at the packet dropout rate of 70% although it is obviously inferior to that of the CKF with full communication in both cases. When the packet dropout rate continues increasing to 90%, it is shown in Fig. 7.4 that CKF-I and CF can hardly track the real system state as the measurements the control center receives are too limited so that the confidence of their filtering results is substantially reduced. Through the comparison of these three figures, the conclusion can be come up with that, with the decline of λ, i.e., the increase of packet dropout rate, the dynamic filtering performances of CKF-I and CF worsen gradually due to fewer measurements received. Through the comparison among these three figures, the conclusion can be come up with that, with the decline of λ, i.e., the increase of the packet dropout rate, the dynamic filtering performances of CKF-I and CF become worsen gradually due to fewer measurements received.
7.5.1.2
Comparison of Filtering Error at Different Packet Dropout Rates
To further illustrate the influence of packet dropout rate on the filtering accuracy of CF filtering and compare the filtering performances of CKF-I and CF at the same packet dropout rate, this section uses the Monte Carlo method to conduct the filtering process for 500 times under 4 different packet dropout rates and the evaluation index of filtering accuracy is obtained by the average of filtering errors for the 500 simulations (Fig. 7.5). The simulation results are shown in Fig. 7.6. As the figure shows, when the packet dropout rate is lower than 50%, CKF-I, CF, and CKF have almost the same filtering accuracy. When the packet dropout rate further increases, the difference between CKF-I and CF from CKF in filtering accuracy will become significant. When the packet dropout rate reaches 90%, the filtering accuracy of CKF-I and CF is almost three time worse than that of CKF, and the difference tends to increase because CF can be stochastically stable only when λ is higher than one given value according to Theorem 7.1. Moreover, it is further shown that CF’s filtering accuracy is slightly lower than that of CKF-I at the same packet dropout rate. As mentioned above, although CF is inferior to CKF-I in filtering accuracy, the sufficient condition can be obtained for the filter’s stochastic stability, which completely comprises offline parameters (as specified in Theorem 7.1), so that it can be used for design guideline of filter in engineering. This is a unique advantage of CF, which cannot be provided from CKF-I.
7.5 Simulation and Verification
213
1
γI
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γCF
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5 System CKF CKF−I CF
0
−0.5 0
5
10
15
Time (s) (c) Fig. 7.4 The estimation results for three filtering when λ = 0.1, i.e., the packet dropout 90%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the CKF-I and the cyan dash line represents the CF. (a) The triggering condition of CKF-I. (b) The triggering condition of CF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed. (g) DSE performance of the excitation voltage. (h) DSE performance of the AVR filter voltage. (i) DSE performance of the AVR regulator voltage
214
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
E (p.u.)
1.5
q
1
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
2
σ (rad)
1.5 1 0.5 0 0
5
Time (s) (e)
1.2
ω (p.u.)
1.15 1.1 1.05 1 0
5
Time (s) (f) Fig. 7.4 (continued)
7.5 Simulation and Verification
215
Efd (p.u.)
3.5 3 2.5 2 1.5 0
5
10
15
10
15
10
15
Time (s) (g)
0.55
R (p.u.)
0.5
f
0.45 0.4 0.35 0
5
Time (s) (h)
10
R
V (p.u.)
8 6 4 2 0 0
5
Time (s) (i) Fig. 7.4 (continued)
216
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Estimation error
0.08 CKF CKF−I CF
0.06
0.04
0.02 0
5
10
15
10
15
10
15
Time(s) (a)
Estimation error
0.08
0.06
0.04
0.02 0
5
Time(s) (b)
Estimation error
0.1 0.08 0.06 0.04 0.02 0
5
Time(s) (c) Fig. 7.5 The estimation error for CKF, CKF-I, and CF under different dropout rate, where the CKF is drawn in blue, and the CKF-I is marked in red and compared with the CF in cyan. (a) The estimation error for three filterings when λ = 0.7, i.e., the dropout rate 30%. (b) The estimation error for three filterings when λ = 0.7, i.e., the dropout rate 30%. (c) The estimation error for three filterings when λ = 0.5, i.e., the dropout rate 50%. (d) The estimation error for three filterings when λ = 0.1, i.e., the dropout rate 90%
7.5 Simulation and Verification
217
Estimation error
0.2 0.15 0.1 0.05 0 0
5
10
15
Time(s) (d) Fig. 7.5 (continued)
7.5.2 Verification on the SETCF Filtering Performance This section uses the generator G8 for DSE to verify the filtering performance of the SETCF. Therefore, the system state vector and measurement vector are in the dimension of 4 as shown as follows. The state equation is same as in (2.33), and the measurement equation is same as in (2.35), x = [Ed y = [id
Eq iq
σ vd
ω]T vq ]T .
(7.58)
Moreover, the total simulation time and the sampling period are the same as in Sect. 7.5.1. The system noise variance is set as Q = diag[10−3 , 10−3 , 10−7 , 10−7 ] and the measurement noise variance is set as R = diag[10−2 , 10−2 , 10−2 , 10−2 ]. The simulation comprises two parts. In the first part, with a fixed design parameter Y = 4I , four sets of different packet dropout rates are selected for simulation to verify the influence of arrival rates on the filtering performance of the SETCF. In the second part, with a fixed packet dropout rate of 1 − λ = 0.7, four sets of different design parameter Y are selected to verify the influence of design parameter Y on the filtering performance of the SETCF. In both simulations, CKF with full communication without data packet dropout is used for the performance reference.
7.5.2.1
Comparison of SETCF Filtering Performance Under Different Packet Dropout Rates Under Y = 4I
This section firstly compares the DSE performance of the SETCF algorithm at the packet dropout rate of 1 − λ = 10%, 50%, and 90% under Y = 4I . The simulation
218
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
results are shown in Figs. 7.6, 7.7, and 7.8. The first two graphs in all sets of simulations indicate the logic variable γk at each moment and the logic variable γ to determine whether the control center receives the measurement, where γk can be obtained by (2.8), while γ is 1 only when γk = 1 and γ˜k = 1. From the first two graphs in Fig. 7.6, it is shown that the event triggered condition are triggered by 517 times. Considering the packet dropout, the control center only receives the measurements by 458 times. In this case, the DSE performance of the SETCF is almost the same as that of CKF with full communication without the packet dropout. When the packet dropout rate is 50%, the first two graphs in Fig. 7.7 show that the event triggered conditions are triggered by 564 times. However, the control center only receives the measurements for 273 times. Although the SETCF can still track the system state globally, its filtering results deviate from the system state to one certain extent at some specific moments. When the packet dropout rate increases to 90%, the first two graphs in Fig. 7.8 show that the event triggered condition are triggered by 643 times. However, the control center only receives the measurements for 62 times. In this case, the filtering performance of the SETCF have been severely worsened and significantly deviated from the system state. From the comparison of these three sets of figures, the conclusions can be made as follows. When the design parameter Y is fixed, the increase of packet dropout rate leads to the increasing triggered times of event triggered condition but decline in the DSE performance of the SETCF. This phenomenon is caused because, with the increase of data packet dropout rate, the number of measurements received by the control center will decline and the filtering accuracy of the SETCF will surely decrease, which results in the decrease on the accuracy of one-step prediction of yk upon time update and finally the increase on deviation in yk − yˆk|k−1 , so that the event triggered condition is more easily satisfied. To further describe the influence of data packet dropout rate on the SETCF filtering accuracy, the Monte Carlo method is used for simulation on 4 sets of different data packet ratios by 500 times. The filtering error defined by (3.79) can be derived for these four sets of conditions. The simulation results are drawn in Fig. 7.9. As the figure shows, when the packet dropout rate is lower than 30%, the filtering accuracy of SETCF is almost same as that of CKF with full communication. When the packet dropout rate further increases, the difference between them becomes significant. In particular, when the packet dropout rate is 90%, the filtering accuracy of the SETCF has severely worsened and even tends to be divergent because the stochastic stability of the SETCF cannot be guaranteed when the packet dropout rate exceeds one given limit according to Theorem 7.2.
7.5.2.2
Comparison of the SETCF Filtering Performance at Different Y Under a Fixed Packet Dropout Rate 70%
This section firstly compares the DSE performance of the SETCF under three conditions of Y = 19I, 1.5I , and 0.035I at a fixed packet dropout rate of 70%. The simulation results are shown in Figs. 7.10, 7.11, and 7.12. Similarly, the first two
7.5 Simulation and Verification
219
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γ
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5 System CKF SETCF
0
−0.5 0
5
10
15
Time (s) (c) Fig. 7.6 The system state and estimation error for two filterings when λ = 0.9, i.e., the dropout rate 10%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
220
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
1.5
Eq (p.u.)
1 0.5 0 −0.5 0
5
10
15
10
15
10
15
Time (s) (d)
3
σ (rad)
2 1 0 −1 0
5
Time (s) (e)
1.4
ω (p.u.)
1.3 1.2 1.1 1 0
5
Time (s) (f) Fig. 7.6 (continued)
7.5 Simulation and Verification
221
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γ
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5
0
−0.5 0
System CKF SETCF 5
10
15
Time (s) (c) Fig. 7.7 The system state and estimation error for two filterings when λ = 0.5, i.e., the dropout rate 50%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
222
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
E (p.u.) q
1.5
1
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
4
2
0
−2 0
5
Time (s) (e)
1.3
ω (p.u.)
1.2 1.1 1 0.9 0
5
Time (s) (f) Fig. 7.7 (continued)
7.5 Simulation and Verification
223
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
k (a)
1
γ
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
k (b)
Ed (p.u.)
1
0.5 System CKF SETCF
0
−0.5 0
5
10
15
Time (s) (c) Fig. 7.8 The system state and estimation error for two filterings when λ = 0.1, i.e., the dropout rate 90%, where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
224
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
1
q
E (p.u.)
1.5
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
3
2
1
0 0
5
Time (s) (e)
ω (p.u.)
1.15
1.1
1.05
1 0
5
Time (s) (f) Fig. 7.8 (continued)
7.5 Simulation and Verification
225
Estimation error
0.08 CKF SETCF 0.06
0.04
0.02 0
5
10
15
10
15
10
15
Time(s) (a)
Estimation error
0.08
0.06
0.04
0.02 0
5
Time(s) (b)
Estimation error
0.2 0.15 0.1 0.05 0 0
5
Time(s) (c) Fig. 7.9 The estimation error of CKF and the SETCF under different dropout rate, where the CKF is drawn in blue, and the SETCKF is marked in red. (a) The estimation error of two filterings when λ = 0.9, i.e., the dropout rate 10%. (b) The estimation error of two filterings when λ = 0.7, i.e., the dropout rate 30%. (c) The estimation error of two filterings when λ = 0.3, i.e., the dropout rate 70%. (d) The estimation error of two filterings when λ = 0.1, i.e., the dropout rate 90%
226
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Estimation error
0.2 0.15 0.1 0.05 0 0
5
10
15
Time(s) (d) Fig. 7.9 (continued)
figures in each set of simulation show the logic variable γk at each moment and the logic variable γ to determine whether the control center receives the measurement. From the first two graphs of Fig. 7.10, it is shown that the event triggered condition is triggered by 628 times. Considering the packet dropout, the control center only receives the measurements by 380 times. In this case, the DSE performance of the SETCF is almost the same as that of CKF with full communication without packet dropout. When Y = 1.5I , the first two graphs in Fig. 7.11 show that the event triggered condition is triggered by 380 times. However, the control center only receives the measurements for 259 times. In this case, the DSE performance of the SETCF is similar to that of CKF, although they are different at some specific time instances. When Y continues decreasing to 0.035I , the first two graphs in Fig. 7.12 show that the event triggered condition is triggered by 73 times. However, the control center only receives the measurements by 54 times. In this case, the filtering results of the SETCF have been severely worsened and deviated significantly from the system state. By comparison on the three sets of simulations, the following conclusions can be made. When the packet dropout rate remains the same, with the continuous decline in design parameter Y , the triggered times of event triggered condition accordingly decreases so that the DSE performance of the SETCF is degraded due to the decreasing measurements received. To further illustrate the influence of design parameter Y on the filtering accuracy of the SETCF, this section conducts the Monte Carlo simulation method using four different sets of Y for 500 times each and the corresponding filtering errors are obtained for these four conditions as shown in Fig. 7.13. As the figure shows, when Y > 4I , the filtering accuracy of the SETCF is almost the same as that of CKF with full communication. When Y further reduces, their difference becomes increasingly significant. In particular, when Y ≤ 0.035I , the filtering accuracy of the SETCF has been severely worsened and even tends to divergence because the stochastic stability of the SETCF cannot be guaranteed when Y is lower than one given value according to Theorem 7.2.
7.5 Simulation and Verification
227
Fig. 7.10 The system state and estimation error for two filterings with Y = 19I , where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (e) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
228
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
1.5
Eq (p.u.)
1 0.5 0 −0.5 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
3
2
1
0 0
5
Time (s) (e)
1.4
ω (p.u.)
1.3 1.2 1.1 1 0
5
Time (s) (f) Fig. 7.10 (continued)
7.5 Simulation and Verification
229
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γ
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
E (p.u.) d
0.8 0.6
System CKF SETCF
0.4 0.2 0 0
5
10
15
Time (s) (c) Fig. 7.11 The estimation error for two filterings when Y = 1.5I , where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
230
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
E (p.u.) q
1.5
1
0.5
0 0
5
10
15
10
15
10
15
Time (s) (d)
4
σ (rad)
3 2 1 0 0
5
Time (s) (e)
1.3
ω (p.u.)
1.2 1.1 1 0.9 0
5
Time (s) (f) Fig. 7.11 (continued)
7.5 Simulation and Verification
231
1
γE
0.8 0.6 0.4 0.2 0
100
200
300
400
500
600
700
500
600
700
k (a)
1
γ
0.8 0.6 0.4 0.2 0
100
200
300
400
k (b)
Ed (p.u.)
1
0.5
0
−0.5 0
System CKF SETCF 5
10
15
Time (s) (c) Fig. 7.12 The system state and estimation error of two filterings under Y = 0.035I , where the black line represents the actual system state, and the green dotted line denotes the CKF, and the red dotted dash line stands for the SETCF. (a) The triggering condition of SETCF. (b) The triggering condition of CKF. (c) DSE performance of the emf on d axis. (d) DSE performance of the emf on q axis. (e) DSE performance of the rotor electrical angle. (f) DSE performance of the rotor speed
232
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
E (p.u.) q
2 1.5 1 0.5 0 0
5
10
15
10
15
10
15
Time (s) (d)
σ (rad)
1.5
1
0.5
0 0
5
Time (s) (e)
1.3
ω (p.u.)
1.2 1.1 1 0.9 0
5
Time (s) (f) Fig. 7.12 (continued)
7.5 Simulation and Verification
233
Estimation error
0.08 CKF SETCF
0.06 0.04 0.02 0 0
5
10
15
10
15
10
15
Time(s) (a)
Estimation error
0.08 0.06 0.04 0.02 0 0
5
Time(s) (b)
Estimation error
0.1 0.08 0.06 0.04 0.02 0 0
5
Time(s) (c) Fig. 7.13 The estimation error of CKF and the SETCF under different Y , where the CKF is drawn in blue, and the SETCKF is marked in red. (a) The estimation error for Y = 19I . (b) The estimation error for Y = 4I . (c) The estimation error for Y = 0.4I . (d) The estimation error for Y = 0.035I
234
7 Event Triggered Suboptimal CKF Upon Channel Packet Dropout
Estimation error
0.2 0.15 0.1 0.05 0 0
5
10
15
Time(s) (d) Fig. 7.13 (continued)
7.6 Conclusions This chapter develops the event triggered suboptimal CKF considering the packet dropout based on stochastic innovational condition to address the problem of data packet dropout in the transmission network of WAMS. The main results are summarized as follows: 1. Based on the design concept of the suboptimal linear filter under packet dropout, the suboptimal CKF is designed considering the packet dropout for nonlinear system. 2. Based on suboptimal CKF and the stochastic innovational condition, the stochastic event triggered sampling strategy and event triggered suboptimal CKF at the control center are designed to address the problem of packet dropout in the communication channel when the DSE in WAMS adopts the event triggered sampling strategy. 3. The stochastic stability of the suboptimal CKF and event triggered suboptimal CKF is analyzed in this chapter. The stochastic Lyapunov method is used to prove the stochastic stability for these two filters and the sufficient condition of the stochastic stability for the filter is provided, which is only determined by offline parameters. Such feature is applicable for filter design guideline under packet dropout in engineering. 4. The verification on IEEE 39 buses are used for these two suboptimal filtering algorithms designed in this chapter. The simulation results show that suboptimal CKF is capable of addressing the packet dropout in communication channel although its filtering accuracy is slightly lower than the intermittent CKF because its accuracy is partially sacrificed to guarantee its sufficient condition of the stochastic stability for the filter, which is completely determined by offline parameters. Moreover, the simulation further verifies that the event triggered
7.6 Conclusions
235
suboptimal CKF can still deliver the high filtering accuracy in case of packet dropout. Simulations are also conducted to verify the influence of design parameters and packet dropout rate on the filtering accuracy.
Chapter 8
Event Triggered Cubature Kalman Filter Subject to Network Attacks
8.1 Introduction Network attack is defined as any action of exposing, altering, prohibition of use, damaging, stealing, unauthorized access to or unauthorized use of system resources [149]. The WAMS is vulnerable to network attacks due to the following two reasons. Firstly, the WAMS generally uses the generic operating system and heavily relies on network communication. Secondly, the ability of attacker to identify system bugs and attacking methods keep improving. Diversified network attacks certainly and severely affect the accuracy of DSE application in WAMS and even lead to its divergence, becoming the potential risk to the instability and low efficiency of DSE. Therefore, the in-depth research has been conducted on the design of DSE subject to network attacks. The network attacks on the cyber-physical system comprise the attack on perception layer represented by data tampering, attack on network transmission layer represented by denial-of-service (DoS), and attack on application control layer represented by control command forging. Among them, DoS refers to a mode of attack by which the attacker makes the target system stop services through the consumption of network bandwidth, which brings the consequences such as the channel packet dropout. The design of event triggered nonlinear filter under channel packet dropout has been already fulfilled in this chapter so that this case will not be considered again in this chapter. Data tampering means that the attacker tampers the measurements intercepted and sends the tampered measurements to the receiver. In terms of this attack, the measurement innovational information was used to design an abnormal data tester, and after detecting the abnormal data, one-step prediction of measurement was used to replace the measurement for measurement update [150, 151]. However, this method was only applicable for attacks with limited data tampering. Control command forging indicates that the attacker forges the control commands in the application control layer to maliciously utilize the © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Z. Li et al., Event-Trigger Dynamic State Estimation for Practical WAMS Applications in Smart Grid, https://doi.org/10.1007/978-3-030-45658-0_8
237
238
8 Event Triggered Cubature Kalman Filter Subject to Network Attacks
system or damage the system. In terms of such attack, the attack issue was turned into the problem of system with unknown input and then the KF was used to design the linear filter to encounter this attack [152]. Based on this, the previous result was extended to the linear system based on event triggered sampling strategy and the event triggered linear filtering was designed to encounter this attack [153]. However, considering that most DSE in WAMS are nonlinear, it is necessary to develop the event triggered nonlinear filtering under the attack of control command forging. In summary, it is necessary to develop the event triggered nonlinear DSE subject to network attacks in order to address the problem of massive data transmission in WAMS and vulnerability to network attacks. This chapter uses projection statistics (PS) method at first to design the abnormal data identification for data tampering attacks. Furthermore, the historical measurement is used to correct the current measurement to acquire the satisfactory estimation accuracy. Based on the design concept in [152], the event triggered cubature Kalman filter is designed to deal with control command forging attacks. This chapter is organized as follows: Section 8.2 designs an abnormal data detection algorithm. Based on this detection, the event triggered cubature Kalman filter subject to data tampering attacks is designed. Section 8.3 provides the detailed design and inference process of the event triggered cubature Kalman filter subject to control command forging attacks. Section 8.4 utilizes the IEEE 39 buses 10 generators system to verify the effectiveness of the algorithm.
8.2 Design of Event Triggered Cubature Kalman Filter Subject to Data Tampering Attacks This section designs the event triggered cubature Kalman filter subject to data tampering attacks when the event triggered sampling strategy is adopted to reduce the data communication for DSE of nonlinear system. This design adopts stochastic innovational condition as the event triggered sampling strategy to reduce the amount of data transmission of system measurement in WAMS communication network. Based on this, in order to reduce the influence of data tampering attacks on DSE accuracy, the projection statistic method is used at first to design an abnormal data detector and determine whether the measurement received is subject to data tampering attacks. When the data tampering on measurement is detected, the historical measurement is further used to correct the measurement so as to fulfill the design of the filter. Therefore, the design of event triggered CKF subject to data tampering attacks includes two parts as follows. The first part is the design of an abnormal data detector, and the other part is the design of event triggered CKF tampering algorithm. Before design, the following system model subject to data tampering attack is provided at first by xk+1 = f (xk ) + νk
8.2 Design of Event Triggered Cubature Kalman Filter Subject to Data. . .
239
Fig. 8.1 The block diagram of DSE under data tampering attacks
yk = (1 − τk )[h(xk ) + ωk ] + τk ξk ,
(8.1)
where f (·) and h(·) are both known nonlinear equations and continuously differentiable within their domains. sxk ∈ Rn is the system state vector with the dimension of n, and yk ∈ Rm is the system measurement vector with the dimension of m. νk ∈ Rn and ωk ∈ Rm are the system and observation noise, respectively, both of which obey the zero-mean Gaussian distribution. Their variance matrices are Q > 0 and R > 0, respectively. ξk is the unknown value injected by the attacker. τk ∈ [0, 1] is the logic variable, indicating that the system is subject to data tampering attacks when τk = 1. Moreover, it is assumed that the initial value of system state x0 obeys the Gaussian distribution N(x0 |x0 , P0 ) and the initial system value, system noise, and observation noise are independent of each other. Based on this model, the block diagram of the DSE subject to data tampering attacks is shown in Fig. 8.1.
8.2.1 Design of Abnormal Data Detection Algorithm

When the system may be subject to data tampering attacks as shown in (8.1), the filter at the control center must first determine, upon receipt of a measurement, whether the current measurement is abnormal false data injected by the attacker, i.e., it must determine the value of $\tau_k$. Therefore, based on the method in [154], this section uses the projection statistics (PS) method to design an abnormal data detector that determines whether the system is subject to a data tampering attack. Because the innovation sequence of the system is sampled in time order and is strongly correlated in time, its statistical characteristics must change once a data tampering attack occurs. It is therefore possible to test whether the system is under a data tampering attack by checking the statistical characteristics of consecutive correlated samples of the innovation. To this end, the following matrix containing consecutive correlated innovation samples is first defined:
$$\tilde{Y}_k = [\,y_{k-1} - \hat{y}_{k-1|k-2},\; y_k - \hat{y}_{k|k-1}\,]. \qquad (8.2)$$
Algorithm 8.1 Algorithm of abnormal data detection
Input: the current measurement $y_k$
Output: logic variable $\tau_k$
1: $M = \{\mathrm{med}_{j=1,\dots,m}(l_{j1}),\; \mathrm{med}_{j=1,\dots,m}(l_{j2})\}$
2: while $1 \le i \le m$ do
3:   $u_i = l_i - M$
4:   $L_i = u_i/\|u_i\| = u_i/\sqrt{u_{i1}^2 + u_{i2}^2}$
5:   $\zeta_{1i} = l_1^T L_i,\; \zeta_{2i} = l_2^T L_i,\; \dots,\; \zeta_{mi} = l_m^T L_i$
6:   $\zeta_i = \mathrm{med}_{j=1,2,\dots,m}(\zeta_{ji})$
7:   $\mathrm{MAD}_i = 1.4826\,\mathrm{med}_{j=1,2,\dots,m}(|\zeta_{ji} - \zeta_i|)$
8:   while $1 \le j \le m$ do
9:     $P_{ji} = |\zeta_{ji} - \zeta_i|/\mathrm{MAD}_i$
10:    end while
11: end while
12: $\mathrm{PS}_j = \max[P_{j1}, P_{j2}, \dots, P_{jm}]$
13: if $\forall j \in \{1, 2, \dots, m\}$, $\mathrm{PS}_j^2 < \chi^2_{2,0.975}$ then
14:   return $\tau_k = 0$
15: else
16:   return $\tau_k = 1$
17: end if
It is noteworthy that, according to the research in [155], the statistical characteristics of the matrix above already provide enough information to acquire the desired testing results, although more correlated samples could be introduced into the statistical test of whether an attack has occurred. The PS method is then applied to the matrix $\tilde{Y}_k$ to derive the following variable:
$$\mathrm{PS}_j = \max_{\|L\|=1} \frac{\bigl|l_j^T L - \mathrm{med}_i(l_i^T L)\bigr|}{1.4826\,\mathrm{med}_\kappa\bigl|l_\kappa^T L - \mathrm{med}_i(l_i^T L)\bigr|}, \qquad (8.3)$$
where $i, j, \kappa = 1, 2, \dots, m$, and $l_i^T$, $l_j^T$, $l_\kappa^T$ denote the $i$th, $j$th, and $\kappa$th row vectors of the matrix $\tilde{Y}_k$, respectively. $L$ is the normalized projection direction vector, and $\mathrm{med}(\cdot)$ is the median operator. The calculation process of the previous equation is shown in Algorithm 8.1. According to the research results in [156], $\mathrm{PS}_j$ is subject to a chi-square distribution with two degrees of freedom. Therefore, when $\mathrm{PS}_j$ satisfies the following inequality, it can be concluded, at a confidence level of 97.5%, that the $j$th element of the measurement is abnormal data:
$$\mathrm{PS}_j^2 > \chi^2_{2,0.975}. \qquad (8.4)$$
In summary, the pseudo-code of the abnormal data detection algorithm is given in Algorithm 8.1.
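As a rough illustration of Algorithm 8.1 and the test (8.3)-(8.4), the sketch below computes projection statistics for the two-column innovation matrix (8.2) in Python; the chi-square quantile is taken from scipy, and the guards against zero norms and zero MAD are added safeguards not spelled out in the algorithm.

import numpy as np
from scipy.stats import chi2

def projection_statistics(Y_tilde):
    # projection statistics of the rows l_j of Y_tilde (m x 2), following Algorithm 8.1
    m = Y_tilde.shape[0]
    M = np.median(Y_tilde, axis=0)            # coordinate-wise median of the rows
    PS = np.zeros(m)
    for i in range(m):
        u = Y_tilde[i] - M
        norm = np.linalg.norm(u)
        if norm < 1e-12:                      # skip degenerate directions (added safeguard)
            continue
        L = u / norm
        zeta = Y_tilde @ L                    # projections l_j^T L
        med = np.median(zeta)
        mad = 1.4826 * np.median(np.abs(zeta - med))
        if mad < 1e-12:
            continue
        PS = np.maximum(PS, np.abs(zeta - med) / mad)
    return PS

def detect_tampering(y_prev, yhat_prev, y_cur, yhat_cur, conf=0.975):
    # return tau_k = 1 if any element of the current measurement looks tampered, cf. (8.2)-(8.4)
    Y_tilde = np.column_stack([y_prev - yhat_prev, y_cur - yhat_cur])
    PS = projection_statistics(Y_tilde)
    threshold = chi2.ppf(conf, df=2)
    return int(np.any(PS ** 2 > threshold)), PS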
Fig. 8.2 The block diagram of event triggered CKF subject to data tampering
8.2.2 Design of Event Triggered CKF Filtering Subject to Data Tampering Attacks

When the stochastic innovational condition is used as the event triggered sampling strategy to reduce data transmission for the nonlinear system in (8.1) that is vulnerable to data tampering attacks, the design follows the SETCKF in Sect. 6.2. A local filter is equipped at the sensor to provide the intermediate filtering quantities needed to calculate the logic variable γk. Moreover, the abnormal data detector designed in the previous section is equipped at the control center to determine whether the current measurement is subject to data tampering attacks. The block diagram of the event triggered CKF subject to data tampering attacks is therefore shown in Fig. 8.2. As shown in Fig. 8.2, unlike the SETCKF in Sect. 6.2, if a data tampering attack is detected at the current moment, i.e., τk = 1, the filter at the control center needs to transmit its state estimation results back to the local filter to correct the DSE results of the local filter, because the local filter is not equipped with the abnormal data detector. Without this feedback, the local filtering results could not be corrected and kept aligned with those at the control center once a data tampering attack occurs, and the decision on whether the data needs to be transmitted would be biased at the next moment. The pseudo-code of the event triggered sampling strategy for the event triggered CKF subject to data tampering attacks is shown in Algorithm 8.2.
Algorithm 8.2 Event triggered strategy
Input: the current measurement $y_k$, design parameter $Y$, attack logic variable at the last moment $\tau_{k-1}$
Output: logic variable $\gamma_k$
1: Initialization: the initial system state $x_0$, covariance matrix $P_0$
2: Iteration:
3: while $k \ge 1$ do
4:   $\xi^{(i)} = \sqrt{n}\,e_i$, $i = 1,\dots,n$; $\xi^{(i)} = -\sqrt{n}\,e_{i-n}$, $i = n+1,\dots,2n$
5:   if $\tau_{k-1} = 1$ then
6:     $X_{k-1}^{(i)} = \hat{x}_{k-1}^{\mathrm{Remote}} + \sqrt{P_{k-1}^{\mathrm{Remote}}}\,\xi^{(i)}$, $i = 1,\dots,2n$
7:   else
8:     $X_{k-1}^{(i)} = \hat{x}_{k-1} + \sqrt{P_{k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
9:   end if
10:  $\hat{X}_k^{(i)} = f(X_{k-1}^{(i)})$, $i = 1,\dots,2n$
11:  $\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{X}_k^{(i)}$
12:  $P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{X}_k^{(i)} - \hat{x}_{k|k-1})(\hat{X}_k^{(i)} - \hat{x}_{k|k-1})^T + Q_{k-1}$
13:  $X_{k|k-1}^{(i)} = \hat{x}_{k|k-1} + \sqrt{P_{k|k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
14:  $\hat{Y}_k^{(i)} = h(X_{k|k-1}^{(i)})$, $i = 1,\dots,2n$; $\hat{y}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{Y}_k^{(i)}$
15:  $\varphi(y_k) = e^{-\frac{1}{2}(y_k - \hat{y}_{k|k-1})^T Y (y_k - \hat{y}_{k|k-1})}$
16:  randomly generate the parameter $\eta_k$ obeying the uniform distribution over $[0, 1]$
17:  if $\varphi(y_k) < \eta_k$ then
18:    return $\gamma_k = 1$
19:  else
20:    return $\gamma_k = 0$
21:  end if
22: end while
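The transmission decision in lines 15-21 of Algorithm 8.2 can be sketched as follows; the innovation spread used in the check is an assumed stand-in, so the empirical rate printed here will not match the roughly 70% arrival rate reported in Sect. 8.4, although the value of the design parameter Y reuses the one from that section.

import numpy as np

def stochastic_trigger(y, y_pred, Y_mat, rng):
    # stochastic innovation-based trigger of Algorithm 8.2 (lines 15-21):
    # transmit (gamma_k = 1) with probability 1 - exp(-0.5 e^T Y e)
    e = y - y_pred
    phi = np.exp(-0.5 * e @ Y_mat @ e)
    eta = rng.uniform(0.0, 1.0)
    return 1 if phi < eta else 0

# a small check of the average transmission rate under an assumed innovation spread
rng = np.random.default_rng(1)
Y_mat = 4.5 * np.eye(4)                  # the design parameter Y used in Sect. 8.4
rate = np.mean([stochastic_trigger(rng.normal(0, 0.1, 4), np.zeros(4), Y_mat, rng)
                for _ in range(10000)])
print(f"empirical transmission rate: {rate:.2f}")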
When the system adopts the aforementioned event triggered sampling strategy, the design of the filter at the control center, as in the SETCKF, comprises two scenarios, i.e., γk = 0 and γk = 1. When γk = 0, the measurement is not transmitted because the triggering condition is not satisfied; the only measurement information available to the filter at the control center is that implied by the triggering condition, which has no relationship with τk. Therefore, the filter at the control center uses the same algorithm as the SETCKF.
When γk = 1, meaning that the filter at the control center receives the measurement from the sensor, the abnormal data detector must determine whether the received measurement has been tampered with, i.e., determine the value of τk. When τk = 0, the system is not subject to a data tampering attack, so the received measurement can be used directly for the measurement update in the SETCKF algorithm for γk = 1. When τk = 1, at least one element of the received measurement has been tampered with by the attacker. To reduce the influence of the tampered data on the measurement update, following the method in [157], the following weight matrix is defined to correct the measurement:
$$\Omega_k = \mathrm{diag}[\,\bar{\omega}_1, \bar{\omega}_2, \dots, \bar{\omega}_m\,], \qquad (8.5)$$
where $\bar{\omega}_j = \min[\,1,\; \chi^2_{2,0.975}/\mathrm{PS}_j^2\,]$, $j \in \{1, 2, \dots, m\}$. After that, $\Omega_k y_k$ is used in place of $y_k$ for the measurement update. The pseudo-code of the estimation algorithm of the event triggered CKF subject to data tampering attacks at the control center is shown in Algorithm 8.3.
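A minimal sketch of the correction (8.5) follows, using the weight-matrix notation $\Omega_k$ introduced above; the small floor on $\mathrm{PS}_j^2$ is an added numerical safeguard.

import numpy as np
from scipy.stats import chi2

def correct_measurement(y, PS, conf=0.975):
    # down-weight suspicious elements of y via (8.5): w_j = min(1, chi2_{2,conf} / PS_j^2)
    threshold = chi2.ppf(conf, df=2)
    w = np.minimum(1.0, threshold / np.maximum(PS ** 2, 1e-12))
    Omega = np.diag(w)
    return Omega @ y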
8.3 Design of Event Triggered Cubature Kalman Filter Subject to Control Command Forging Attacks

The control command forging attack maliciously changes the system state via forged control commands, causing the DSE accuracy to degrade substantially, preventing effective control of the system, and possibly even leading to its failure. For this type of attack, the network attack problem was recast as an estimation problem for a system with an unknown input, and an event triggered linear filter was designed on this basis in [153]. Inspired by the design concept in [153], the stochastic innovational condition is used here as the event triggered sampling strategy to address the limited bandwidth of WAMS, and the cubature Kalman filter is used to handle the DSE of the nonlinear system, so that the event triggered linear filtering subject to control command forging attacks can be extended to nonlinear systems. Before the design, the following system model subject to control command forging attacks is provided:
$$x_{k+1} = f(x_k) + G_k d_k + \nu_k$$
$$y_k = h(x_k) + \omega_k, \qquad (8.6)$$
where $f(\cdot)$ and $h(\cdot)$ are both known nonlinear functions, continuously differentiable within their domains. $x_k \in \mathbb{R}^n$ is the system state vector of dimension $n$, and $y_k \in \mathbb{R}^m$ is the system measurement vector of dimension $m$. $\nu_k \in \mathbb{R}^n$ and $\omega_k \in \mathbb{R}^m$ are the process and observation noise, respectively, both of which obey zero-mean Gaussian distributions with variance matrices $Q > 0$ and $R > 0$. $d_k \in \mathbb{R}^p$ is the attack injection series, and $G_k \in \mathbb{R}^{n\times p}$ is the attack matrix.
Algorithm 8.3 The event triggered CKF algorithm at the control center
Step 1: generate 2n Sigma points
  $\xi^{(i)} = \sqrt{n}\,e_i$, $i = 1,\dots,n$; $\xi^{(i)} = -\sqrt{n}\,e_{i-n}$, $i = n+1,\dots,2n$
  $X_{k-1}^{(i)} = \hat{x}_{k-1} + \sqrt{P_{k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
Step 2: time update
  $\hat{X}_k^{(i)} = f(X_{k-1}^{(i)})$, $i = 1,\dots,2n$
  $\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{X}_k^{(i)}$
  $P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{X}_k^{(i)} - \hat{x}_{k|k-1})(\hat{X}_k^{(i)} - \hat{x}_{k|k-1})^T + Q_{k-1}$
  $X_{k|k-1}^{(i)} = \hat{x}_{k|k-1} + \sqrt{P_{k|k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
  $\hat{Y}_k^{(i)} = h(X_{k|k-1}^{(i)})$, $i = 1,\dots,2n$
  $\hat{y}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{Y}_k^{(i)}$
Step 3: measurement update
  when $\gamma_k = 0$:
    $P_{y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T + R_k$
    $P_{x_k y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (X_{k|k-1}^{(i)} - \hat{x}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T$
    $K_k = P_{x_k y_k}\,[P_{y_k} + Y^{-1}]^{-1}$
    $\hat{x}_k = \hat{x}_{k|k-1}$
    $P_k = P_{k|k-1} - K_k P_{x_k y_k}^T$
  when $\gamma_k = 1$:
    $P_{y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T + R_k$
    $P_{x_k y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (X_{k|k-1}^{(i)} - \hat{x}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T$
    $K_k = P_{x_k y_k} P_{y_k}^{-1}$
    $\hat{x}_k = \hat{x}_{k|k-1} + K_k (y_k - \hat{y}_{k|k-1})$
    $P_k = P_{k|k-1} - K_k P_{y_k} K_k^T$
Step 4: execute the loop from Step 1 to Step 3 at each time instance k
Fig. 8.3 The block diagram of DSE under the control command forging attack
It is assumed that $\mathrm{rank}(G_k) = p < n$. Moreover, it is also assumed that the initial system state $x_0$ obeys the Gaussian distribution $N(x_0 \mid \hat{x}_0, P_0)$ and that the initial state, process noise, observation noise, and attack series are independent of each other. Control command forging attacks can be categorized according to the attack series they inject. In this section, it is assumed that the system is subject to deviated control command forging attacks, meaning that a small bias is injected at each time instance. Such an attack changes the operational state of the system and, through continuous accumulation, severely degrades the DSE accuracy; attacks of this kind can be identified, for example, through sequential hypothesis testing. Based on this model, the block diagram of the DSE subject to control command forging attacks is shown in Fig. 8.3. Regarding the nonlinear system in (8.6) subject to deviated control command forging attacks, since the distribution of the attack injection series $d_k$ is unknown and no value is a priori more likely than any other, its probability density function is taken as the non-informative prior
$$f(d_k) \propto 1. \qquad (8.7)$$
The matrix $G_k^{\perp}$ is defined to satisfy $\mathrm{rank}[\,G_k \;\; G_k^{\perp}\,] = n$ and $(G_k^{\perp})^T G_k = 0$. It is assumed that $T_k = [\,G_k \;\; G_k^{\perp}\,]^{-1}$ and $L_k = [\,0 \;\; I_{n-p}\,]\,T_k$. Based on the previous definitions and (8.7), the event triggered CKF filtering algorithm at the control center is given by the following theorem.
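The construction of $G_k^{\perp}$, $T_k$, and $L_k$ can be sketched as below, assuming $G_k$ has full column rank; scipy's null_space routine is used here to obtain the orthogonal complement, and the example reuses the attack matrix of Sect. 8.4.2 (the first two columns of the 4x4 identity).

import numpy as np
from scipy.linalg import null_space

def decoupling_matrices(G):
    # build G_perp, T_k and L_k used in Sect. 8.3 from an n x p attack matrix G (rank p < n)
    n, p = G.shape
    G_perp = null_space(G.T)                 # columns orthogonal to the columns of G
    T = np.linalg.inv(np.hstack([G, G_perp]))
    L = np.hstack([np.zeros((n - p, p)), np.eye(n - p)]) @ T
    return G_perp, T, L

G = np.eye(4)[:, :2]                         # example: first two columns of the 4x4 identity
G_perp, T, L = decoupling_matrices(G)
print(L @ G)                                 # L_k G_k = 0, so the attack direction is decoupled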
Fig. 8.4 The block diagram of event triggered filtering subject to control command forging attacks

Theorem 8.1 Regarding the DSE at the control center as shown in Fig. 8.4 and the system model in (8.6), it is assumed that the stochastic innovational condition is used at the sensor as the event triggered sampling strategy and that the control command forging attack has been identified. If the system measurement equation satisfies
$$\mathrm{rank}\,[\,H_k^T \;\; L_{k-1}^T\,]^T = n, \qquad (8.8)$$
where $H_k$ is the linearized matrix of the system measurement equation (its calculation is given in the proof), then the system state vector $x_k$ is subject to the following conditional Gaussian distribution $p(x_k \mid \mathcal{F}_k) = N(\hat{x}_k, P_k)$:
$$\hat{x}_k = \hat{x}_{k|k-1} + \gamma_k P_k H_k^T (R_k + \tilde{R}_k)^{-1}(y_k - \hat{y}_{k|k-1})$$
$$P_k = \bigl\{H_k^T [R_k + \tilde{R}_k + (1-\gamma_k)Y^{-1}]^{-1} H_k + L_{k-1}^T (L_{k-1} P_{k|k-1} L_{k-1}^T)^{-1} L_{k-1}\bigr\}^{-1}, \qquad (8.9)$$
where the value of $\tilde{R}_k$ will be given in the proof, together with the filtering gain. To prove the previous theorem, the following lemma is introduced.

Lemma 8.1 ([153]) For matrices $A \in \mathbb{R}^{p\times n}$, $X \in \mathbb{R}^{p\times p}$, $C \in \mathbb{R}^{q\times n}$, $Y \in \mathbb{R}^{q\times q}$ and vectors $x \in \mathbb{R}^n$, $b \in \mathbb{R}^p$ and $d \in \mathbb{R}^q$, if $\mathrm{rank}[A^T X A + C^T Y C] = n$, then
$$(Ax+b)^T X (Ax+b) + (Cx+d)^T Y (Cx+d) = \bigl[x + (A^T X A + C^T Y C)^{-1}(A^T X b + C^T Y d)\bigr]^T (A^T X A + C^T Y C)\bigl[x + (A^T X A + C^T Y C)^{-1}(A^T X b + C^T Y d)\bigr] + U^*, \qquad (8.10)$$
where $U^*$ is a constant irrelevant to $x$. Moreover, if neither $X$ nor $Y$ is singular, then
$$x^T X x + (x-b)^T Y (x-b) = \bigl[x - (X+Y)^{-1} Y b\bigr]^T (X+Y)\bigl[x - (X+Y)^{-1} Y b\bigr] + b^T (X^{-1} + Y^{-1})^{-1} b. \qquad (8.11)$$
Proof Regarding the nonlinear system in (8.6) subject to the control command forging attack, when the stochastic innovational event triggered sampling strategy is adopted, the time update of the CKF at the control center is the same as that of the conventional CKF, as shown in (3.14), (3.15), and (3.16). Its measurement update depends on the logic variable $\gamma_k$, i.e., on whether the measurement is received.
Before providing the measurement update iteration, the system equation is first transformed as follows for convenience of the proof. Letting $z_k = T_{k-1} x_k$, the following equation is obtained:
$$z_k = T_{k-1} f(x_{k-1}) + D_{k-1} d_{k-1} + \tilde{\nu}_k, \qquad (8.12)$$
where $D_{k-1} = T_{k-1} G_{k-1} = [\,I_p \;\; 0\,]^T$ and $\tilde{\nu}_k = T_{k-1}\nu_k$. Next, the system measurement equation is linearized as follows according to [158]:
$$y_k = H_k x_k + h(\hat{x}_{k|k-1}) - H_k \hat{x}_{k|k-1} + \tilde{\omega}_k, \qquad (8.13)$$
where $H_k = (P_{x_k y_k})^T (P_{k|k-1})^{-1}$ and
$$P_{x_k y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (X_{k|k-1}^{(i)} - \hat{x}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T$$
$$P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{X}_k^{(i)} - \hat{x}_{k|k-1})(\hat{X}_k^{(i)} - \hat{x}_{k|k-1})^T + Q_{k-1}, \qquad (8.14)$$
and where $\tilde{\omega}_k = \omega_k + \varepsilon_k$, with $\varepsilon_k$ the linearization error and $\tilde{\omega}_k \sim N(\tilde{\omega}_k \mid 0, R_k + \tilde{R}_k)$. In the previous equation, $\tilde{R}_k = P_{y_k} - (P_{x_k y_k})^T P_{k|k-1}^{-1} P_{x_k y_k}$, where $P_{y_k} = \frac{1}{2n}\sum_{i=1}^{2n}(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T + R_k$.

As in the proof of the SETCKF filtering iteration, the proof of the iteration for the event triggered CKF subject to control command forging attacks also comprises two parts, i.e., $\gamma_k = 0$ and $\gamma_k = 1$. When $\gamma_k = 0$, the measurement is not transmitted since it fails to satisfy the event triggered condition. According to the Bayesian law,
$$f(z_k \mid \mathcal{F}_k) = f(z_k \mid \mathcal{F}_{k-1}, \gamma_k = 0) \propto P(\gamma_k = 0 \mid z_k, \mathcal{F}_{k-1})\, f(z_k \mid \mathcal{F}_{k-1})$$
$$= f(z_k \mid \mathcal{F}_{k-1}) \int_{\mathbb{R}^m} P(\gamma_k = 0 \mid y_k, \mathcal{F}_{k-1})\, f(y_k \mid z_k, \mathcal{F}_{k-1})\,\mathrm{d}y_k$$
$$= f(z_k \mid \mathcal{F}_{k-1}) \int_{\mathbb{R}^m} P(\gamma_k = 0 \mid y_k, \mathcal{F}_{k-1})\, f(y_k \mid x_k, \mathcal{F}_{k-1})\,\mathrm{d}y_k, \qquad (8.15)$$
where the first of the last two equalities above holds because the value of $\gamma_k$ has no relationship with the current system state, i.e., no relationship with $z_k$; the second holds because $T_{k-1}$ is a nonsingular matrix by definition, so $y_k$ has the same conditional probability density function given $[z_k, \mathcal{F}_{k-1}]$ as given $[x_k, \mathcal{F}_{k-1}]$.
The following is an analysis of the two terms on the right-hand side of the last equation above. Firstly, the following is obtained according to the stochastic innovational condition (2.9) and (8.13):
$$\int_{\mathbb{R}^m} P(\gamma_k = 0 \mid y_k, \mathcal{F}_{k-1})\, f(y_k \mid x_k, \mathcal{F}_{k-1})\,\mathrm{d}y_k$$
$$\propto \int_{\mathbb{R}^m} \exp\Bigl\{-\tfrac{1}{2}(y_k - \hat{y}_{k|k-1})^T Y (y_k - \hat{y}_{k|k-1})\Bigr\} \exp\Bigl\{-\tfrac{1}{2}\bigl[y_k - (H_k x_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1})\bigr]^T (R_k + \tilde{R}_k)^{-1}\bigl[y_k - (H_k x_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1})\bigr]\Bigr\}\,\mathrm{d}y_k. \qquad (8.16)$$
It is assumed that $M_k = H_k x_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1}$. The following transformation is made on the first exponential term in the previous equation:
$$-\tfrac{1}{2}(y_k - \hat{y}_{k|k-1})^T Y (y_k - \hat{y}_{k|k-1}) = -\tfrac{1}{2}\bigl[(y_k - M_k) + (M_k - \hat{y}_{k|k-1})\bigr]^T Y \bigl[(y_k - M_k) + (M_k - \hat{y}_{k|k-1})\bigr]. \qquad (8.17)$$
It is assumed that $x = y_k - M_k$, $X = (R_k + \tilde{R}_k)^{-1}$, $b = M_k - \hat{y}_{k|k-1}$, and $Y = Y$. According to (8.11) of Lemma 8.1, the following is obtained:
$$\int_{\mathbb{R}^m} P(\gamma_k = 0 \mid y_k, \mathcal{F}_{k-1})\, f(y_k \mid x_k, \mathcal{F}_{k-1})\,\mathrm{d}y_k \propto \exp\Bigl\{-\tfrac{1}{2}\bigl[H_k x_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1} - \hat{y}_{k|k-1}\bigr]^T (R_k + \tilde{R}_k + Y^{-1})^{-1}\bigl[H_k x_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1} - \hat{y}_{k|k-1}\bigr]\Bigr\}$$
$$= \exp\Bigl[-\tfrac{1}{2}(H_k x_k - H_k\hat{x}_{k|k-1})^T (R_k + \tilde{R}_k + Y^{-1})^{-1}(H_k x_k - H_k\hat{x}_{k|k-1})\Bigr]. \qquad (8.18)$$
According to (8.7) and (8.13), it is derived that
$$f(z_k \mid \mathcal{F}_{k-1}) = \int f(z_k \mid \mathcal{F}_{k-1}, d_{k-1})\, f(d_{k-1})\,\mathrm{d}d_{k-1}$$
$$\propto \int \exp\Bigl\{-\tfrac{1}{2}\bigl[z_k - (T_{k-1}\hat{x}_{k|k-1} + D_{k-1}d_{k-1})\bigr]^T (P^z_{k|k-1})^{-1}\bigl[z_k - (T_{k-1}\hat{x}_{k|k-1} + D_{k-1}d_{k-1})\bigr]\Bigr\}\,\mathrm{d}d_{k-1}. \qquad (8.19)$$
Firstly, it should be noted that $D_{k-1}d_{k-1} = [\,d_{k-1}^T \;\; 0\,]^T$ since $D_{k-1} = [\,I_p \;\; 0\,]^T$. For the convenience of analysis, assume $\tilde{z}_k = z_k - T_{k-1}\hat{x}_{k|k-1}$ and $\bar{D}_{k-1} = [\,0 \;\; I_{n-p}\,]$. Therefore, $\tilde{z}_k$ is partitioned as
$$\tilde{z}_k = \begin{bmatrix} D_{k-1}^T \tilde{z}_k \\ \bar{D}_{k-1}\tilde{z}_k \end{bmatrix} = \begin{bmatrix} \tilde{z}_{k,1} \\ \tilde{z}_{k,2} \end{bmatrix}, \qquad (8.20)$$
where $\tilde{z}_{k,1} \in \mathbb{R}^p$ is the vector comprising the first $p$ elements of $\tilde{z}_k$ and $\tilde{z}_{k,2} \in \mathbb{R}^{n-p}$ comprises the last $n-p$ elements of $\tilde{z}_k$. Similarly, $P^z_{k|k-1}$ in (8.19) can be rewritten as
$$P^z_{k|k-1} = \begin{bmatrix} D_{k-1}^T \\ \bar{D}_{k-1} \end{bmatrix} P^z_{k|k-1} \begin{bmatrix} D_{k-1} & \bar{D}_{k-1}^T \end{bmatrix} = \begin{bmatrix} D_{k-1}^T P^z_{k|k-1} D_{k-1} & D_{k-1}^T P^z_{k|k-1}\bar{D}_{k-1}^T \\ \bar{D}_{k-1} P^z_{k|k-1} D_{k-1} & \bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T \end{bmatrix}. \qquad (8.21)$$
Equations (8.20) and (8.21) are introduced into (8.19) to get
$$f(z_k \mid \mathcal{F}_{k-1}) \propto \int \exp\Bigl\{-\tfrac{1}{2}\begin{bmatrix} d_{k-1}-\tilde{z}_{k,1} \\ -\tilde{z}_{k,2}\end{bmatrix}^T \begin{bmatrix} D_{k-1}^T P^z_{k|k-1} D_{k-1} & D_{k-1}^T P^z_{k|k-1}\bar{D}_{k-1}^T \\ \bar{D}_{k-1} P^z_{k|k-1} D_{k-1} & \bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T\end{bmatrix}^{-1}\begin{bmatrix} d_{k-1}-\tilde{z}_{k,1} \\ -\tilde{z}_{k,2}\end{bmatrix}\Bigr\}\,\mathrm{d}d_{k-1}. \qquad (8.22)$$
According to the marginal distribution of a multi-dimensional Gaussian stochastic variable, the previous equation can be further written as
$$f(z_k \mid \mathcal{F}_{k-1}) \propto \exp\Bigl\{-\tfrac{1}{2}(z_k - T_{k-1}\hat{x}_{k|k-1})^T \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}(z_k - T_{k-1}\hat{x}_{k|k-1})\Bigr\}. \qquad (8.23)$$
Equation (8.18) and the previous formula are introduced into (8.15) and, letting $C_k = H_k T_{k-1}^{-1}$, it follows that
$$f(z_k \mid \mathcal{F}_k) \propto \exp\Bigl\{-\tfrac{1}{2}(C_k z_k - H_k\hat{x}_{k|k-1})^T (R_k + \tilde{R}_k + Y^{-1})^{-1}(C_k z_k - H_k\hat{x}_{k|k-1})\Bigr\} \exp\Bigl\{-\tfrac{1}{2}(z_k - T_{k-1}\hat{x}_{k|k-1})^T \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}(z_k - T_{k-1}\hat{x}_{k|k-1})\Bigr\}. \qquad (8.24)$$
According to the definitions of $C_k$ and $L_{k-1}$, we have
$$\begin{bmatrix} C_k \\ \bar{D}_{k-1} \end{bmatrix} = \begin{bmatrix} H_k \\ L_{k-1} \end{bmatrix} T_{k-1}^{-1}. \qquad (8.25)$$
According to the previous equation and (8.8), it is known that $\mathrm{rank}[\,C_k^T \;\; \bar{D}_{k-1}^T\,]^T = n$ and thus
$$\mathrm{rank}\left(\begin{bmatrix} C_k \\ \bar{D}_{k-1} \end{bmatrix}^T \begin{bmatrix} (R_k + \tilde{R}_k + Y^{-1})^{-1} & 0 \\ 0 & (\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1} \end{bmatrix} \begin{bmatrix} C_k \\ \bar{D}_{k-1} \end{bmatrix}\right) = n. \qquad (8.26)$$
Based on the previous equation, the following is acquired according to (8.10) in Lemma 8.1:
$$\begin{aligned}
f(z_k \mid \mathcal{F}_k) \propto \exp\Bigl\{-\tfrac{1}{2}\bigl[z_k &- \bigl(C_k^T(R_k+\tilde{R}_k+Y^{-1})^{-1}C_k + \bar{D}_{k-1}^T(\bar{D}_{k-1}P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}\bigr)^{-1}\\
&\quad\times\bigl(C_k^T(R_k+\tilde{R}_k+Y^{-1})^{-1}H_k\hat{x}_{k|k-1} + \bar{D}_{k-1}^T(\bar{D}_{k-1}P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}T_{k-1}\hat{x}_{k|k-1}\bigr)\bigr]^T\\
&\times\bigl[C_k^T(R_k+\tilde{R}_k+Y^{-1})^{-1}C_k + \bar{D}_{k-1}^T(\bar{D}_{k-1}P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}\bigr]\\
&\times\bigl[z_k - \bigl(C_k^T(R_k+\tilde{R}_k+Y^{-1})^{-1}C_k + \bar{D}_{k-1}^T(\bar{D}_{k-1}P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}\bigr)^{-1}\\
&\quad\times\bigl(C_k^T(R_k+\tilde{R}_k+Y^{-1})^{-1}H_k\hat{x}_{k|k-1} + \bar{D}_{k-1}^T(\bar{D}_{k-1}P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}T_{k-1}\hat{x}_{k|k-1}\bigr)\bigr]\Bigr\}. \qquad(8.27)
\end{aligned}$$
According to the previous equation, it is known that the conditional distribution of $z_k$ given $\mathcal{F}_k$ obeys the Gaussian distribution $N(\hat{z}_k, P_k^z)$, where
$$P_k^z = \bigl[C_k^T(R_k + \tilde{R}_k + Y^{-1})^{-1}C_k + \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}\bigr]^{-1}$$
$$\hat{z}_k = P_k^z\bigl[C_k^T(R_k + \tilde{R}_k + Y^{-1})^{-1}H_k + \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}T_{k-1}\bigr]\hat{x}_{k|k-1}. \qquad (8.28)$$
When $\gamma_k = 1$,
$$f(z_k \mid \mathcal{F}_k) = f(z_k \mid \mathcal{F}_{k-1}, y_k) \propto f(z_k \mid \mathcal{F}_{k-1})\, f(y_k \mid z_k), \qquad (8.29)$$
where $f(z_k \mid \mathcal{F}_{k-1})$ again obeys (8.24), while $f(y_k \mid z_k)$ satisfies
$$f(y_k \mid z_k) \propto \exp\Bigl\{-\tfrac{1}{2}\bigl[y_k - (C_k z_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1})\bigr]^T (R_k + \tilde{R}_k)^{-1}\bigl[y_k - (C_k z_k + h(\hat{x}_{k|k-1}) - H_k\hat{x}_{k|k-1})\bigr]\Bigr\}. \qquad (8.30)$$
The previous equation and (8.24) are introduced into (8.29), and the same inference as for $\gamma_k = 0$ is used to conclude that $z_k$ obeys the Gaussian distribution $N(\hat{z}_k, P_k^z)$, where
$$P_k^z = \bigl[C_k^T(R_k + \tilde{R}_k)^{-1}C_k + \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}\bigr]^{-1}$$
$$\hat{z}_k = P_k^z\bigl[C_k^T(R_k + \tilde{R}_k)^{-1}\bigl(y_k - h(\hat{x}_{k|k-1}) + H_k\hat{x}_{k|k-1}\bigr) + \bar{D}_{k-1}^T(\bar{D}_{k-1} P^z_{k|k-1}\bar{D}_{k-1}^T)^{-1}\bar{D}_{k-1}T_{k-1}\hat{x}_{k|k-1}\bigr]$$
$$= P_k^z\bigl[C_k^T(R_k + \tilde{R}_k)^{-1}\bigl(y_k - h(\hat{x}_{k|k-1}) + H_k\hat{x}_{k|k-1}\bigr) + (P_k^z)^{-1}T_{k-1}\hat{x}_{k|k-1} - C_k^T(R_k + \tilde{R}_k)^{-1}C_k T_{k-1}\hat{x}_{k|k-1}\bigr]$$
$$= T_{k-1}\hat{x}_{k|k-1} + P_k^z C_k^T(R_k + \tilde{R}_k)^{-1}\bigl[y_k - h(\hat{x}_{k|k-1})\bigr]. \qquad (8.31)$$
Finally, the following is derived according to $z_k = T_{k-1} x_k$:
$$P_{k|k-1} = T_{k-1}^{-1} P^z_{k|k-1} (T_{k-1}^{-1})^T$$
$$P_k = T_{k-1}^{-1} P_k^z (T_{k-1}^{-1})^T$$
$$\hat{x}_k = T_{k-1}^{-1}\hat{z}_k. \qquad (8.32)$$
According to the previous equation, the measurement update of the event triggered CKF subject to control command forging attacks is, when $\gamma_k = 0$,
$$P_k = \bigl[H_k^T(R_k + \tilde{R}_k + Y^{-1})^{-1}H_k + L_{k-1}^T(L_{k-1} P_{k|k-1} L_{k-1}^T)^{-1}L_{k-1}\bigr]^{-1}$$
$$\hat{x}_k = P_k\bigl[H_k^T(R_k + \tilde{R}_k + Y^{-1})^{-1}H_k + L_{k-1}^T(L_{k-1} P_{k|k-1} L_{k-1}^T)^{-1}L_{k-1}\bigr]\hat{x}_{k|k-1} = \hat{x}_{k|k-1}, \qquad (8.33)$$
and, when $\gamma_k = 1$,
$$P_k = \bigl[H_k^T(R_k + \tilde{R}_k)^{-1}H_k + L_{k-1}^T(L_{k-1} P_{k|k-1} L_{k-1}^T)^{-1}L_{k-1}\bigr]^{-1}$$
$$\hat{x}_k = \hat{x}_{k|k-1} + P_k H_k^T(R_k + \tilde{R}_k)^{-1}\bigl[y_k - h(\hat{x}_{k|k-1})\bigr]. \qquad (8.34)$$
In sum, Theorem 8.1 is proven.
Remark 8.1 According to the definitions of $H_k$ and $L_k$, it is known that $H_k \in \mathbb{R}^{m\times n}$ and $L_k \in \mathbb{R}^{(n-p)\times n}$, so that $[\,C_k^T \;\; L_k^T\,]^T \in \mathbb{R}^{(m+n-p)\times n}$. It follows that the rank condition (8.8) in Theorem 8.1 can only be satisfied if $m \ge p$. According to Theorem 8.1, the pseudo-code of the event triggered CKF iteration subject to control command forging attacks is shown in Algorithm 8.4.
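A sketch of the measurement update (8.9) (Step 3 of Algorithm 8.4) is given below in Python; the linearized matrix H_k, the linearization covariance R_tilde, and the decoupling matrix L_{k-1} are assumed to have been computed as in the proof, and the function and argument names are illustrative only.

import numpy as np

def ca_setckf_update(x_pred, P_pred, y, y_pred, H, R, R_tilde, L_prev, Y_mat, gamma):
    # measurement update (8.9) of Theorem 8.1
    S = R + R_tilde + (1 - gamma) * np.linalg.inv(Y_mat)   # effective measurement covariance
    info = H.T @ np.linalg.solve(S, H)                      # H^T S^{-1} H
    M = L_prev @ P_pred @ L_prev.T
    info += L_prev.T @ np.linalg.solve(M, L_prev)           # prior information in the attack-free subspace
    P = np.linalg.inv(info)
    x = x_pred + gamma * P @ H.T @ np.linalg.solve(R + R_tilde, y - y_pred)
    return x, P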
8.4 Simulation and Verification

The IEEE 39-bus system is again used in this section to verify the filtering performance of the event-triggered CKF subject to data tampering attacks (DaSETCKF) and to control command forging attacks (CaSETCKF).
8.4.1 Verification of the Event-Triggered CKF Subject to Data Tampering Attacks

This section applies the DaSETCKF filter designed in Sect. 8.2 to the DSE of generator G8 in the IEEE 39-bus 10-generator system to verify the effectiveness of the algorithm. The system state vector and measurement vector are as in (6.64) of Sect. 6.5, the system state equation is as in (2.33), and the measurement equation is as in (2.35). The simulation scenario is again the DSE of generator G8 for 15 s after the sudden break between bus 14 and bus 15, with a sampling period of 0.02 s. A comparison is conducted between the DaSETCKF developed in this chapter and an SETCKF (denoted DaSETCKF-2) designed with the normalized innovation-based detection method proposed in [150]. This method uses the innovation to form the following normalized statistic for each element:
$$\lambda_{k,j} = \frac{y_{k,j} - \hat{y}_{k|k-1,j}}{P_{y_k,j}}, \qquad (8.35)$$
where $y_{k,j}$ and $\hat{y}_{k|k-1,j}$ are the $j$th elements of $y_k$ and $\hat{y}_{k|k-1}$, respectively, and $P_{y_k,j}$ is the $j$th diagonal element of $P_{y_k}$. When $\lambda_{k,j} > \lambda_0$, the $j$th element of the measurement is declared tampered, where the threshold $\lambda_0$ is chosen according to the system; $\lambda_0 = 10$ is used in this simulation.
Algorithm 8.4 The event triggered CKF algorithm subject to control command forging attacks
Step 1: generate 2n Sigma points
  $\xi^{(i)} = \sqrt{n}\,e_i$, $i = 1,\dots,n$; $\xi^{(i)} = -\sqrt{n}\,e_{i-n}$, $i = n+1,\dots,2n$
  $X_{k-1}^{(i)} = \hat{x}_{k-1} + \sqrt{P_{k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
Step 2: time update
  $\hat{X}_k^{(i)} = f(X_{k-1}^{(i)})$, $i = 1,\dots,2n$
  $\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{X}_k^{(i)}$
  $P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{X}_k^{(i)} - \hat{x}_{k|k-1})(\hat{X}_k^{(i)} - \hat{x}_{k|k-1})^T + Q_{k-1}$
  $X_{k|k-1}^{(i)} = \hat{x}_{k|k-1} + \sqrt{P_{k|k-1}}\,\xi^{(i)}$, $i = 1,\dots,2n$
  $\hat{Y}_k^{(i)} = h(X_{k|k-1}^{(i)})$, $i = 1,\dots,2n$
  $\hat{y}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} \hat{Y}_k^{(i)}$
  $P_{y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T + R_k$
  $P_{x_k y_k} = \frac{1}{2n}\sum_{i=1}^{2n} (X_{k|k-1}^{(i)} - \hat{x}_{k|k-1})(\hat{Y}_k^{(i)} - \hat{y}_{k|k-1})^T$
Step 3: measurement update
  $H_k = (P_{x_k y_k})^T (P_{k|k-1})^{-1}$
  $\tilde{R}_k = P_{y_k} - (P_{x_k y_k})^T P_{k|k-1}^{-1} P_{x_k y_k}$
  $\hat{x}_k = \hat{x}_{k|k-1} + \gamma_k P_k H_k^T (R_k + \tilde{R}_k)^{-1}(y_k - \hat{y}_{k|k-1})$
  $P_k = \bigl\{H_k^T\bigl[R_k + \tilde{R}_k + (1-\gamma_k)Y^{-1}\bigr]^{-1}H_k + L_{k-1}^T(L_{k-1} P_{k|k-1} L_{k-1}^T)^{-1}L_{k-1}\bigr\}^{-1}$
Step 4: execute the loop from Step 1 to Step 3 at each time instance k
The corresponding filtering algorithm is designed as follows: when a specific element of the measurement is determined to be tampered, that element is replaced by the corresponding element of the one-step measurement prediction for the measurement update. In addition, both filters are compared with the SETCKF of Sect. 6.2 to verify whether they can detect the data tampering attack and correct the filtering results accordingly.

This section includes two groups of simulations to compare the filtering accuracy of the three algorithms subject to data tampering attacks. In both groups, the process noise variance is set as Q = diag[10^-3, 10^-3, 10^-7, 10^-7, 10^-7, 10^-7, 10^-7] and the measurement noise variance as R = diag[10^-2, 10^-2, 10^-2, 10^-2] for all three filtering algorithms.

In the first group of simulations, the measurements are continuously tampered with between 4 s and 5 s: the second element of the measurement, i.e., iq, is tampered to 1.2 times its real value. Moreover, Y = 4.5I. Simulation results are shown in Fig. 8.5. The last three subgraphs of the figure show the value of the logic variable γk for the three filtering algorithms at each moment, and indicate that the communication arrival rate of all three algorithms is about 70%. As shown in Fig. 8.5, the filtering performance of the SETCKF is badly affected by the data tampering attack and it cannot track the system state, while the abnormal data detectors in the DaSETCKF and DaSETCKF-2 algorithms successfully detect the attack and correct the filtering process so as to maintain the filtering accuracy.

To further compare the filtering accuracy of the DaSETCKF and DaSETCKF-2, the Monte Carlo method is used to run the filtering process 500 times, and the filtering errors are obtained as in (3.79). The results are drawn in Fig. 8.6. As shown in the figure, although there is no significant difference in the dynamic performance of the DaSETCKF and DaSETCKF-2, the DaSETCKF has higher filtering accuracy. This is because DaSETCKF-2 sets y_{k,2} = ŷ_{k,2} after it detects the data tampering attack in order to eliminate the influence of the tampered element; this can guarantee accuracy only when the estimation results are sufficiently accurate at every step and the process and observation noise strictly obey zero-mean Gaussian distributions, which is hardly satisfied in practice. Moreover, the filtering accuracy of DaSETCKF-2 worsens as the number of tampered measurement elements increases.

To verify this conclusion, in the second group of simulations all measurements are tampered to 1.2 times their real values between 4 s and 5 s, again with Y = 4.5I. Simulation results are drawn in Fig. 8.7. The last three subgraphs of the figure show the value of the logic variable γk for the three filtering algorithms at each moment; the communication arrival rate of the three algorithms is still about 70%. As shown in the figure, although DaSETCKF-2 can still detect the data tampering attack, its correction method prevents it from achieving satisfactory dynamic filtering performance under the attack, whereas the DaSETCKF designed in this chapter still provides satisfactory filtering results.
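The Monte Carlo comparison described above can be organized along the following lines; since the error definition (3.79) appears earlier in the book and is not reproduced here, an average per-step error norm is assumed, and run_filter stands for any routine that returns the true and estimated state trajectories of one run.

import numpy as np

def monte_carlo_error(run_filter, n_runs=500, seed=0):
    # average estimation error over Monte Carlo runs; a per-step Euclidean error norm is
    # assumed here in place of the book's definition (3.79)
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_runs):
        x_true, x_est = run_filter(rng)              # each run returns (T, n) trajectories
        errors.append(np.linalg.norm(x_true - x_est, axis=1))
    return np.mean(np.asarray(errors), axis=0)       # error curve over time, averaged over runs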
The Monte Carlo simulation method is again used to run the filtering process 500 times, and the resulting filtering errors are shown in Fig. 8.8.
Fig. 8.5 The system state and the three estimation performances when only iq is tampered, where the black line represents the actual system state, the green dotted line denotes the SETCKF, the red dotted-dash line stands for the DaSETCKF, and the cyan dash line represents the DaSETCKF-2. (a) DSE performance of the emf on the d axis. (b) DSE performance of the emf on the q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of DaSETCKF. (f) The triggering condition of SETCKF. (g) The triggering condition of DaSETCKF-2

Fig. 8.6 The estimation error when only iq is tampered, where the SETCKF is drawn in green, and the DaSETCKF is marked in red and compared with the DaSETCKF-2 in cyan
Figure 8.8 shows that the DaSETCKF still provides satisfactory filtering accuracy under the data tampering attack, while the filtering accuracy of DaSETCKF-2 is badly degraded in comparison.
8.4.2 Verification of the Performance of the Event-Triggered CKF Subject to Control Command Forging Attacks

This section uses generator G9 in the IEEE 39-bus 10-generator system to verify the filtering performance of the CaSETCKF filter subject to a deviated control command forging attack. The system state vector, measurement vector, system state equation, and measurement equation are the same as those of the previous section.
Fig. 8.7 The system state and the estimation performance of the three filters when all the measurements are tampered, where the black line represents the actual system state, the green dotted line denotes the SETCKF, the red dotted-dash line stands for the DaSETCKF, and the cyan dash line represents the DaSETCKF-2. (a) DSE performance of the emf on the d axis. (b) DSE performance of the emf on the q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of DaSETCKF. (f) The triggering condition of SETCKF. (g) The triggering condition of DaSETCKF-2

Fig. 8.8 The estimation error of the three filters when all the measurements are tampered, where the SETCKF is drawn in green, and the DaSETCKF is marked in red and compared with the DaSETCKF-2 in cyan
The simulation scenario is again the DSE of generator G9 for 15 s after the sudden break between bus 14 and bus 15, with a sampling period of 0.02 s. The simulation in this section compares the filtering results of the CaSETCKF with the real state of the original system without the attack and with the filtering results of the SETCKF algorithm, to show that the CaSETCKF can effectively overcome the control command forging attack. By further comparing the filtering results of the CaSETCKF for different Y with those of the CKF using the periodic sampling strategy (CaCKF) under the control command forging attack, the influence of the design parameter Y on the filtering accuracy of the CaSETCKF is analyzed. The CaSETCKF algorithm is given in Algorithm 8.4, the SETCKF algorithm in Algorithm 6.2 of Sect. 6.2, while the CaCKF algorithm is Algorithm 8.4 with the γk = 0 branch removed. In this simulation, the attack injection series is set as dk = [dk1 dk2]^T, where both dk1 and dk2 obey the uniform distribution over [0.05, 0.15], and the attack matrix G is formed by the first two columns of the 4th-order identity matrix; therefore p = 2 in this case. According to the system measurement equation, the linearized measurement matrix is Hk ∈ R^{4×4}, i.e., m = 4; thus m ≥ p and, per Remark 8.1, the rank condition of Theorem 8.1 is satisfied in this case.

As shown in Fig. 8.9 for Y = 1.8I, the last two subgraphs of the figure show the logic variable γk for the CaSETCKF and SETCKF algorithms at each time instance. The former has an average arrival rate of about 50%, while the average arrival rate of the latter is about 96.13%. Although both use the same design parameter Y, their arrival rates differ substantially, because the injection of the attack series dk changes the system equation while the SETCKF still iterates according to the original system equation; its filtering accuracy is therefore reduced at every step, the measurement at each moment differs substantially from its one-step prediction, the event triggered condition is satisfied at almost every time instance, and the measurement is transmitted to the control center almost continuously. As also shown in Fig. 8.9, the system state undergoes significant changes due to the injected attack series. Nevertheless, the CaSETCKF can still accurately estimate the real value of the system state at each moment even though the attack series is unknown, while the filtering results of the SETCKF deviate substantially from the real system state. The comparison between the CaSETCKF and CaCKF shows that the CaSETCKF not only reduces the data communication by about 50% but also provides filtering performance close to that of the CaCKF with full communication.

To further compare the filtering accuracy of the three filters, i.e., the CaSETCKF, the SETCKF, and the CaCKF with full communication, and to quantify the influence of the design parameter Y on the filtering accuracy of the CaSETCKF, the Monte Carlo method is adopted with 500 runs for four different values of Y; the filtering error results are drawn in Fig. 8.10. The figure shows that the CaSETCKF and CaCKF have almost the same filtering accuracy when Y ≥ 6I. As Y decreases, the difference in filtering accuracy between these two algorithms gradually increases. Moreover, the filtering accuracy of the SETCKF is always much poorer than that of the other two algorithms.
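For reference, the attack setup of this subsection can be reproduced as follows; the horizon of 750 steps (15 s at a 0.02 s sampling period) matches the scenario above, while the seed is arbitrary.

import numpy as np

rng = np.random.default_rng(3)
T_steps, n, p, m = 750, 4, 2, 4                  # horizon and dimensions used in Sect. 8.4.2

G = np.eye(n)[:, :p]                             # attack matrix: first two columns of I_4
d = rng.uniform(0.05, 0.15, size=(T_steps, p))   # deviated attack series d_k

assert m >= p                                    # Remark 8.1: (8.8) can only hold if m >= p
print(G @ d[0])                                  # attack injected into the first two states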
8.5 Conclusions

The DSE application in WAMS is vulnerable to network attacks. This chapter focuses on the design of the event triggered CKF subject to two common types of network attack with significant impact on DSE accuracy, i.e., the data tampering attack and the control command forging attack. The main research developments are summarized as follows:

1. In terms of data tampering attacks, the projection statistics method is first utilized to design the abnormal data detection algorithm that determines whether the system is attacked, and then the architecture of the stochastic event triggered DSE subject to this attack is designed according to the principle of the event triggered DSE using the stochastic innovational condition and the features of data tampering attacks.
Fig. 8.9 The system state and estimation results when Y = 1.8I, where the black dash line indicates the actual system state without cyber attack, the black line represents the actual system state, the cyan dotted line denotes the CaCKF, the red dotted-dash line stands for the CaSETCKF, and the green dash line represents the SETCKF. (a) DSE performance of the emf on the d axis. (b) DSE performance of the emf on the q axis. (c) DSE performance of the rotor electrical angle. (d) DSE performance of the rotor speed. (e) The triggering condition of CaSETCKF. (f) The triggering condition of SETCKF

Fig. 8.10 The estimation error of the CaSETCKF, SETCKF, and CaCKF under different Y, where the CaCKF is drawn in cyan, and the CaSETCKF is marked in red and compared with the SETCKF in green. (a) The estimation error when Y = 0.6I. (b) The estimation error when Y = 1.8I. (c) The estimation error when Y = 6I. (d) The estimation error when Y = 18I
The attack detection results are then used to establish the weight matrix that corrects the measurement in order to guarantee filtering accuracy. In this way, the stochastic event triggered state estimation scheme subject to data tampering attacks is completed.

2. In terms of control command forging attacks, the deviated control command forging attack model is first established, and the network attack problem is then transformed into the problem of a system with unknown input. The Bayesian inference method is utilized to derive the iteration of the stochastic event triggered DSE subject to control command forging attacks.

3. The IEEE 39-bus 10-generator system is used to verify the two filtering algorithms of this chapter. The simulation results show that the stochastic event triggered CKF is capable of attack detection when it is subject to a data tampering attack; moreover, if the design parameters are properly selected, it still delivers highly accurate estimation and reduced data transmission even under attack. The simulation results of the stochastic event triggered CKF subject to control command forging attacks show that the filter can still accurately estimate the real value of the system state even when the state undergoes significant changes due to an unknown injected attack series, and that its filtering accuracy improves as the design parameter Y increases.
Chapter 9
Conclusion
Dynamic state estimation has become increasingly important as a fundamental and critical factor in guaranteeing the efficient and stable operation of WAMS in the power system. However, with the booming size of the power grid, although PMUs enable the DSE to capture the dynamics of the power system, the communication network is easily congested by the PMU data streamed to the WAMS applications at the control center. Considering the limited bandwidth of the communication network and the frequent occurrence of packet dropouts and network attacks, the DSE in WAMS faces severe challenges. To address these issues, this book has presented a systematic development based on the cubature Kalman filter with event triggered sampling strategies under complicated channel conditions. The book aims to provide design guidelines for DSE solutions that address the practical concerns of accuracy, communication bandwidth, and complex or poor communication environments. The main developments are as follows:

1. To address the problems of WAMS, including the heavy data transmission load, the limited communication bandwidth, and the high requirements on DSE real-time performance and accuracy, this book provides the design of the event triggered CKF. Firstly, through a comparison among different event triggered strategies, the innovation-based condition is selected as the event triggered sampling strategy to reduce the data transmission and alleviate the pressure on the communication bandwidth. The event triggered CKF is designed to provide high filtering accuracy, high numerical stability, and a comparatively small computational requirement. Finally, the stochastic stability of the filter is analyzed and a sufficient condition for its stochastic stability is provided, which consists entirely of offline parameters. Moreover, the average arrival rate of the DSE at the control center is further derived, which provides a theoretical reference for the parameter selection of the event triggered CKF from an engineering viewpoint.
2. To address the non-Gaussian noise widely present in practical WAMS, this book provides the design of the event triggered robust CKF algorithm. Firstly, considering the disadvantage of the innovation-based event triggered condition, the stochastic innovational condition is adopted as the event triggered sampling strategy to guarantee that the statistical features of the measurement information obey the Gaussian distribution, and the stochastic event triggered Kalman filtering algorithm is further derived and designed on this basis. For the unknown system process noise, an adaptive method is adopted to estimate the process noise and a sliding-window estimation method is utilized to estimate the measurement noise, so as to overcome the issue of non-Gaussian noise. Based on this, the Huber function is further used to improve the stochastic event triggered Kalman filtering algorithm and enhance its robustness. Finally, the stochastic stability of the two stochastic event triggered filtering algorithms is analyzed and sufficient conditions for their stochastic stability are derived, which also consist entirely of offline parameters. In addition, upper and lower bounds on the average arrival rate of the two DSE schemes are provided as a theoretical design guideline for the parameter selection in the stochastic innovational condition.

3. To address the problem of packet dropouts in the WAMS transmission network, the periodic sampling strategy and the event triggered sampling strategy are adopted to design the suboptimal CKF and the event triggered suboptimal CKF, respectively. Firstly, the channel packet loss is modeled by independent, identically distributed Bernoulli stochastic processes. Based on the suboptimal Kalman filter, the spherical cubature integration principle is used to handle the system nonlinearity and the suboptimal CKF is designed accordingly. The stochastic innovational condition is further used as the event triggered sampling strategy and the event triggered suboptimal CKF is designed. Finally, the stochastic stability of the two filters is analyzed and sufficient conditions for their stochastic stability are provided, which consist entirely of offline parameters, including the admissible range of the initial estimation error, the maximum packet dropout rate, and other constraints to be satisfied by the system; these serve as a theoretical guideline for the filter design at the control center in the presence of channel packet dropouts.

4. In view of the vulnerability of the WAMS transmission network, the book provides the design of the event triggered filter subject to network attacks. Firstly, models are established for two common types of attacks, i.e., the data tampering attack and the control command forging attack. The projection statistics method is then used to design the abnormal data detection algorithm. Based on this, once an attack is identified, the detection results are used to establish the weight matrix for measurement correction, and the event triggered CKF subject to data tampering attacks is accordingly completed. Finally, to address the deviated control command forging attack, the network attack problem is transformed into the problem of a system with unknown input and the Bayesian inference method is adopted to derive and design the event triggered CKF subject to control command forging attacks.
References
1. Naduvathuparambil B, Valenti MC, Feliachi A (2002) Communication delays in wide area measurement systems. In: Proceedings of the thirty-fourth southeastern symposium on system theory, Huntsville, pp 118–122 2. Sinopoli B, Schenato L, Franceschetti M et al (2004) Kalman filtering with intermittent observations. IEEE Trans Autom Control 49(9):1453–1464 3. Plarre K, Bullo F (2009) On Kalman filtering for detectable systems with intermittent observations. IEEE Trans Autom Control 54(2):386–390 4. Mo Y, Sinopoli B (2008) A characterization of the critical value for Kalman filtering with intermittent observations. In: Proceedings of the 47th IEEE conference on decision and control, Cancun, pp 2692–2697 5. Huang M, Dey S (2007) Stability of Kalman filtering with Markovian packet losses. Automatica 43(4):598–607 6. Xie L, Xie L (2007) Peak covariance stability of a random Riccati equation arising from Kalman filtering with observation losses. J Syst Sci Complex 20(2):262–272 7. Xie L, Xie L (2008) Stability of a random Riccati equation with Markovian binary switching. IEEE Trans Autom Control 53(7):1759–1764 8. You K, Fu M, Xie L (2011) Mean square stability for Kalman filtering with Markovian packet losses. Automatica 47(12):2647–2657 9. Xie L (2012) Stochastic comparison, boundedness, weak convergence, and ergodicity of a random Riccati equation with Markovian binary switching. SIAM J Control Optim 50(1):532–558 10. Liu X, Goldsmith A (2004) Kalman filtering with partial observation losses. In: Proceedings of the 43rd IEEE conference on decision and control (CDC), Atlantis, pp 4180–4186 11. Wang B-F, Guo G (2009) Kalman filtering with partial Markovian packet losses. Int J Autom Comput 6(4):395 12. Deshmukh S, Natarajan B, Pahwa A (2014) State estimation over a lossy network in spatially distributed cyber-physical systems. IEEE Trans Signal Process 62(15):3911–3923 13. Sahebsara M, Chen T, Shah SL (2008) Optimal H∞ filtering in networked control systems with multiple packet dropouts. Syst Control Lett 57(9):696–702 14. Sun S, Xie L, Xiao W et al (2008) Optimal filtering for systems with multiple packet dropouts. IEEE Trans Circuits Syst II Express Briefs 55(7):695–699 15. Sun S, Xie L, Xiao W (2008) Optimal full-order and reduced-order estimators for discretetime systems with multiple packet dropouts. IEEE Trans Signal Process 56(8):4031–4038
16. Kluge S, Reif K, Brokate M (2010) Stochastic stability of the extended Kalman filter with intermittent observations. IEEE Trans Autom Control 55(2):514–518 17. Wang G, Chen J, Sun J (2013) Stochastic stability of extended filtering for non-linear systems with measurement packet losses. IET Control Theory Appl 7(17):2048–2055 18. Liu X, Li L, Li Z et al (2017) Stochastic stability condition for the extended Kalman filter with intermittent observations. IEEE Trans Circuits Syst II Express Briefs 64(3):334–338 19. Li L, Xia Y (2012) Stochastic stability of the unscented Kalman filter with intermittent observations. Automatica 48(5):978–981 20. Li L, Xia Y (2013) Unscented Kalman filter over unreliable communication networks with Markovian packet dropouts. IEEE Trans Autom Control 58(12):3224–3230 21. Yang R, Shi P, Liu GP (2011) Filtering for discrete-time networked nonlinear systems with mixed random delays and packet dropouts. IEEE Trans Autom Control 56(11):2655–2660 22. Dong H, Wang Z, Gao H (2010) Robust H∞ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts. IEEE Trans Signal Process 58(4):1957–1966 23. Wang Z, Dong H, Bo S et al (2013) Finite-horizon H∞ filtering with missing measurements and quantization effects. IEEE Trans Autom Control 58(7):1707–1718 24. Zhang C, Gang F, Gao H et al (2011) H∞ filtering for nonlinear discrete-time systems subject to quantization and packet dropouts. IEEE Trans Fuzzy Syst 19(2):353–365 25. Amin S, Cárdenas AA, Sastry SS (2009) Safe and secure networked control systems under denial-of-service attacks. In: Proceedings of international workshop on hybrid systems: computation and control, pp 31–45 26. Farwell JP, Rohozinski R (2011) Stuxnet and the future of cyber war. Survival 53(1):23–40 27. Yang Q, Yang J, Yu W et al (2014) On false data-injection attacks against power system state estimation: modeling and countermeasures. IEEE Trans Parallel Distrib Syst 25(3):717–729 28. Kim J, Tong L (2013) On topology attack of a smart grid: undetectable attacks and countermeasures. IEEE J Sel Areas Commun 31(7):1294–1305 29. Mo Y, Garone E, Casavola A et al (2010) False data injection attacks against state estimation in wireless sensor networks. In: Proceedings of the 49th IEEE conference on decision and control (CDC), Atlanta, GA, pp 5967–5972 30. Liu Y, Ning P, Reiter MK (2011) False data injection attacks against state estimation in electric power grids. ACM Trans Inf Syst Secur 14(1):13 31. Sandberg H, Teixeira A, Johansson KH (2010) On security indices for state estimators in power networks. In: Proceedings of the first workshop on secure control systems (SCS) 32. Bobba RB, Rogers KM, Wang Q et al (2010) Detecting false data injection attacks on dc state estimation. In: Proceedings of preprints of the first workshop on secure control systems, CPSWEEK 33. Dan G, Sandberg H (2010) Stealth attacks and protection schemes for state estimators in power systems. In: Proceedings of the first IEEE international conference on smart grid communications, Gaithersburg, MD, pp 214–219 34. Weimer J, Bezzo N, Pajic M et al (2014) Attack-resilient minimum mean-squared error estimation. In: 2014 American control conference, Portland, OR, pp 1114–1119 35. Fawzi H, Tabuada P, Diggavi S (2011) Secure state-estimation for dynamical systems under active adversaries. In: 2011 49th annual allerton conference on communication, control, and computing (Allerton), Monticello, pp 337–344 36. 
Pajic M, Weimer J, Bezzo N et al (2014) Robustness of attack-resilient state estimators. In: Proceedings of the 5th international conference on cyber-physical systems, Washington, DC, pp 163–174 37. Bezzo N, Weimer J, Pajic M et al (2014) Attack resilient state estimation for autonomous robotic systems. In: Proceedings of 2014 IEEE/RSJ international conference on intelligent robots and systems, Chicago, IL, pp 3692–3698 38. Ho Y-C, Cassandras C (1983) A new approach to the analysis of discrete event dynamic systems. Automatica 19(2):149–167
39. Åström KJ, Bernhardsson B (1999) Comparison of periodic and event based sampling for first-order stochastic systems. In: Proceedings of the 14th world congress of the international federation of automatic control (IFAC), pp 301–306 40. Arzén K-E (1999) A simple event-based PID controller. In: Proceedings of the 14th world congress of the international federation of automatic control (IFAC), pp 423–428 41. Imer OC, Basar T (2005) Optimal estimation with limited measurements. In: Proceedings of the 44th IEEE conference on decision and control, Seville, pp 1029–1034 42. Rabi M, Moustakides GV, Baras JS (2006) Multiple sampling for estimation on a finite horizon. In: Proceedings of the 45th IEEE conference on decision and control, San Diego, CA, pp 1351–1357 43. Li L, Lemmon M, Wang X (2010) Event-triggered state estimation in vector linear processes. In: Proceedings of the 2010 American control conference (ACC), Baltimore, MD, pp 2138– 2143 44. Shi D, Chen T, Shi L (2015) On set-valued Kalman filtering and its application to event-based state estimation. IEEE Trans Autom Control 60(5):1275–1290 45. Han D, Mo Y, Wu J et al (2015) Stochastic event-triggered sensor schedule for remote state estimation. IEEE Trans Autom Control 60(10):2661–2675 46. Rabi M, Moustakides GV, Baras JS (2012) Adaptive sampling for linear state estimation. SIAM J Control Optim 50(2):672–702 47. Li L, Lemmon M. Performance and average sampling period of sub-optimal triggering event in event triggered state estimation. In: Proceedings of the 50th IEEE conference on decision and control and European control conference (CDC-ECC), Orlando, pp 1656–1661 48. Xu Y, Hespanha JP (2004) Optimal communication logics in networked control systems. In: Proceedings of the 43rd IEEE conference on decision and control (CDC), Nassau, pp 3527– 3532 49. Marck JW, Sijs J (2010) Relevant sampling applied to event-based state-estimation. In: Proceedings of the 4th international conference on sensor technologies and applications, Venice, pp 618–624 50. Weimer J, Araújo J, Johansson KH (2012) Distributed event-triggered estimation in networked systems. In: Proceedings of the 4th IFAC conference on analysis and design of hybrid systems (ADHS’12), Eindhoven, pp 178–185 51. Molin A, Hirche S (2012) An iterative algorithm for optimal event-triggered estimation. In: Proceedings of the 4th IFAC conference on analysis and design of hybrid systems (ADHS’12), Eindhoven, pp 64–69 52. Wu J, Jia Q-S, Johansson KH et al (2013) Event-based sensor data scheduling: trade-off between communication rate and estimation quality. IEEE Trans Autom Control 58(4):1041– 1046 53. Shi D, Chen T, Shi L (2014) An event-triggered approach to state estimation with multiple point-and set-valued measurements. Automatica 50(6):1641–1648 54. Sijs J, Lazar M (2012) Event based state estimation with time synchronous updates. IEEE Trans Autom Control 57(10):2650–2655 55. Zou L, Wang Z, Gao H et al (2015) Event-triggered state estimation for complex networks with mixed time delays via sampled data information: the continuous-time case. IEEE Trans Cybern 45(12):2804–2815 56. Lee S, Liu W, Hwang I (2015) Markov chain approximation algorithm for event-based state estimation. IEEE Trans Control Syst Technol 23(3):1123–1130 57. Shi D, Elliott RJ, Chen T (2016) Event-based state estimation of discrete-state hidden Markov models. Automatica 65:12–26 58. Wang B, Fu M (2014) Comparison of periodic and event-based sampling for linear state estimation. 
In: Proceedings of the 19th world congress of the international federation of automatic control (IFAC), Cape Town, pp 5508–5513 59. Trimpe S, D’Andrea R (2014) Event-based state estimation with variance-based triggering. IEEE Trans Autom Control 59(12):3266–3281
60. Shi D, Chen T, Shi L (2014) Event-triggered maximum likelihood state estimation. Automatica 50(1):247–254
61. Shi D, Chen T, Shi L (2014) Event-based state estimation of linear dynamical systems: communication rate analysis. In: Proceedings of 2014 American control conference, pp 4665–4670
62. Mazo Jr M, Cao M (2014) Asynchronous decentralized event-triggered control. Automatica 50(12):3197–3203
63. Shaked U, Yaesh I (1990) Estimation variance and H∞-error minimization of stationary process with perfect measurements. IEEE Trans Autom Control 35(3):310–314
64. Başar T, Bernhard P (2008) H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer, New York
65. Yaesh I, Shaked U (1992) Game theory approach to state estimation of linear discrete-time processes and its relation to H∞-optimal estimation. Int J Control 55(6):1443–1452
66. Lin Y, Han Q-L, Yang F et al (2013) Event-triggered H∞ filtering for networked systems based on network dynamics. In: Proceedings of the 39th annual conference of the IEEE industrial electronics society (IECON 2013), Vienna, pp 5638–5643
67. Yan S, Yan H, Shi H et al (2014) Event-triggered H∞ filtering for networked control systems with time-varying delay. In: Proceedings of the 33rd Chinese control conference, Nanjing, pp 5869–5874
68. Hu S, Yue D (2012) Event-based H∞ filtering for networked system with communication delay. Signal Process 92(9):2029–2039
69. Zhang Y, Wang Z, Zou L et al (2017) Event-based finite-time filtering for multirate systems with fading measurements. IEEE Trans Aerosp Electron Syst 53(3):1431–1441
70. Zhang X-M, Han Q-L (2015) Event-based H∞ filtering for sampled-data systems. Automatica 51:55–69
71. Ding L, Guo G (2015) Distributed event-triggered H∞ consensus filtering in sensor networks. Signal Process 108:365–375
72. Liu X, Li L, Li Z et al (2018) Event-trigger particle filter for smart grids with limited communication bandwidth infrastructure. IEEE Trans Smart Grid 9(6):6918–6928
73. Wang X, Zhang B (2016) Lebesgue approximation model of continuous-time nonlinear dynamic systems. Automatica 64:234–239
74. Gauss CF (1809) Theoria motus corporum coelestium in sectionibus conicis solem ambientium. Perthes et Besser, Hamburg
75. Wiener N (1949) Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. Technology Press of MIT, Cambridge
76. Kolmogorov AN, Doyle WL, Selin I (1962) Interpolation and extrapolation of stationary random sequences. Rand Corporation, Santa Monica
77. Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82(1):35–45
78. Kuznetsov PI, Stratonovich RL, Tikhonov VI (1965) Conditional Markov processes. In: Stratonovich RL (ed) Non-linear transformations of stochastic processes. Pergamon, New York, pp 427–453
79. Kushner HJ (1964) On the dynamical equations of conditional probability density functions, with applications to optimal stochastic control theory. J Math Anal Appl 8(2):332–344
80. Kushner HJ (1964) On the differential equations satisfied by conditional probability densities of Markov processes, with applications. J Soc Ind Appl Math Ser A Control 2(1):106–119
81. Kushner HJ (1967) Dynamical equations for optimal nonlinear filtering. J Differ Equ 3(2):179–190
82. Bucy RS (1970) Linear and nonlinear filtering. Proc IEEE 58(6):854–864
83. Jazwinski A (2003) Filtering for nonlinear dynamical systems. IEEE Trans Autom Control 11(4):765–766
84. Smith GL, Schmidt SF (1961) The application of statistical filter theory to optimal trajectory determination onboard a circumlunar vehicle. In: Proceedings of AAS meeting
85. Bucy RS, Senne KD (1971) Digital synthesis of non-linear filters. Automatica 7(3):287–298
86. Sunahara Y (1970) An approximate method of state estimation for nonlinear dynamical systems. J Basic Eng 92(2):385–393
87. Bell BM, Cathey FW (1993) The iterated Kalman filter update as a Gauss-Newton method. IEEE Trans Autom Control 38(2):294–297
88. Scholte E, Campbell ME (2003) A nonlinear set-membership filter for on-line applications. Int J Robust Nonlinear Control 13(15):1337–1358
89. Boutayeb M, Rafaralahy H, Darouach M (1997) Convergence analysis of the extended Kalman filter used as an observer for nonlinear deterministic discrete-time systems. IEEE Trans Autom Control 42(4):581–586
90. Boutayeb M, Aubry D (1999) A strong tracking extended Kalman observer for nonlinear discrete-time systems. IEEE Trans Autom Control 44(8):1550–1556
91. Reif K, Gunther S, Yaz E et al (1999) Stochastic stability of the discrete-time extended Kalman filter. IEEE Trans Autom Control 44(4):714–728
92. Einicke GA, White LB (1999) Robust extended Kalman filtering. IEEE Trans Signal Process 47(9):2596–2599
93. Zhou D, Frank P (1996) Strong tracking filtering of nonlinear time-varying stochastic systems with coloured noise: application to parameter estimation and empirical robustness analysis. Int J Control 65(2):295–307
94. Zhou D (1991) A suboptimal multiple fading extended Kalman filter. Acta Autom Sin 17(6):689–695
95. Zhou DH, Su YX, Xi YG et al (1993) Extension of Friedland’s separate-bias estimation to randomly time-varying bias of nonlinear systems. IEEE Trans Autom Control 38(8):1270–1273
96. Van Der Merwe R et al (2004) Sigma-point Kalman filters for probabilistic inference in dynamic state-space models. PhD Thesis, OGI School of Science & Engineering at OHSU
97. Jia B, Xin M, Cheng Y (2012) Sparse-grid quadrature nonlinear filtering. Automatica 48(2):327–341
98. Nørgaard M, Poulsen NK, Ravn O (2000) New developments in state estimation for nonlinear systems. Automatica 36(11):1627–1638
99. Hartikainen J, Solin A, Särkkä S (2011) Optimal filtering with Kalman filters and smoothers. Department of Biomedical Engineering and Computational Sciences, Aalto University School of Science, 16 August 2011
100. Julier SJ, Uhlmann JK, Durrant-Whyte HF (1995) A new approach for filtering nonlinear systems. In: Proceedings of 1995 American control conference, pp 1628–1632
101. Julier SJ, Uhlmann JK (1996) A general method for approximating nonlinear transformations of probability distributions. Technical Report, Robotics Research Group, Department of Engineering Science, University of Oxford, Oxford, 1996
102. Julier S, Uhlmann J, Durrant-Whyte HF (2000) A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans Autom Control 45(3):477–482
103. Julier SJ (2002) The scaled unscented transformation. In: Proceedings of the 2002 American control conference (IEEE Cat. No. CH37301), Anchorage, AK, pp 4555–4559
104. Julier SJ, Uhlmann JK (2002) Reduced sigma point filters for the propagation of means and covariances through nonlinear transformations. In: Proceedings of the 2002 American control conference (IEEE Cat. No. CH37301), Anchorage, AK, pp 887–892
105. Julier SJ (2003) The spherical simplex unscented transformation. In: Proceedings of the 2003 American control conference, Denver, CO, pp 2430–2434
106. Zhang W, Liu M, Zhao Z-G (2009) Accuracy analysis of unscented transformation of several sampling strategies. In: Proceedings of the 10th ACIS international conference on software engineering, artificial intelligences, networking and parallel/distributed computing, Daegu, pp 377–380
107. Ito K, Xiong KQ (2000) Gaussian filters for nonlinear filtering problems. IEEE Trans Autom Control 45(5):910–927
108. Challa S, Bar-Shalom Y, Krishnamurthy V (2000) Nonlinear filtering via generalized Edgeworth series and Gauss-Hermite quadrature. IEEE Trans Signal Process 48(6):1816–1820
109. Kushner HJ, Budhiraja AS (2000) A nonlinear filtering algorithm based on an approximation of the conditional distribution. IEEE Trans Autom Control 45(3):580–585
110. Arasaratnam I, Haykin S, Elliott RJ (2007) Discrete-time nonlinear filtering algorithms using Gauss–Hermite quadrature. Proc IEEE 95(5):953–977
111. Jia B, Xin M, Cheng Y (2012) Anisotropic sparse Gauss-Hermite quadrature filter. J Guid Control Dyn 35(3):1014–1022
112. Arasaratnam I, Haykin S (2009) Cubature Kalman filters. IEEE Trans Autom Control 54(6):1254–1269
113. Arasaratnam I, Haykin S, Hurd TR (2010) Cubature Kalman filtering for continuous-discrete systems: theory and simulations. IEEE Trans Signal Process 58(10):4977–4993
114. Chang L, Hu B, Li A et al (2013) Transformed unscented Kalman filter. IEEE Trans Autom Control 58(1):252–257
115. Jia B, Xin M, Cheng Y (2013) High-degree cubature Kalman filter. Automatica 49(2):510–518
116. Jia B, Xin M (2015) Adaptive radial rule based cubature Kalman filter. In: Proceedings of the 2015 American control conference (ACC), pp 3156–3161
117. Schön TB (2006) Estimation of nonlinear dynamic systems: theory and applications. Dissertation, Institutionen för systemteknik
118. Rosenbluth MN, Rosenbluth AW (1955) Monte Carlo calculation of the average extension of molecular chains. J Chem Phys 23(2):356–359
119. Gordon NJ, Salmond DJ, Smith AF (1993) Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In: IEE proceedings F (radar and signal processing), pp 107–113
120. Kitagawa G (1996) Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J Comput Graph Stat 5(1):1–25
121. Liu JS, Chen R (1998) Sequential Monte Carlo methods for dynamic systems. J Am Stat Assoc 93(443):1032–1044
122. Carpenter J, Clifford P, Fearnhead P (1999) Improved particle filter for nonlinear problems. IEE Proc Radar Sonar Navig 146(1):2–7
123. Musso C, Oudjane N, Le Gland F (2001) Improving regularised particle filters. In: Musso C, Oudjane N, Le Gland F (eds) Sequential Monte Carlo methods in practice. Springer, New York, pp 247–271
124. Andrieu C, Doucet A (2002) Particle filtering for partially observed Gaussian state space models. J R Stat Soc Ser B Stat Methodol 64(4):827–836
125. Crisan D, Doucet A (2002) A survey of convergence results on particle filtering methods for practitioners. IEEE Trans Signal Process 50(3):736–746
126. Eaton M, Ricker N (1995) Extended Kalman filtering for particle size control in a fed-batch emulsion polymerization reactor. In: Proceedings of the 1995 American control conference, Seattle, pp 2697–2701
127. Van Der Merwe R, Doucet A, De Freitas N et al (2001) The unscented particle filter. In: Proceedings of advances in neural information processing systems, pp 584–590
128. Kotecha JH, Djuric PM (2003) Gaussian particle filtering. IEEE Trans Signal Process 51(10):2592–2601
129. Kotecha JH, Djuric PM (2003) Gaussian sum particle filtering. IEEE Trans Signal Process 51(10):2602–2612
130. Pitt MK, Shephard N (1999) Filtering via simulation: auxiliary particle filters. J Am Stat Assoc 94(446):590–599
131. Nguyen V, Suh Y (2007) Improving estimation performance in networked control systems applying the send-on-delta transmission method. Sensors 7(10):2128–2138
132. Miskowicz M (2006) Send-on-delta concept: an event-based data reporting strategy. Sensors 6(1):49–63
133. Trimpe S (2012) Event-based state estimation with switching static-gain observers. In: Proceedings of the 3rd IFAC workshop on distributed estimation and control in networked systems, Santa Barbara, pp 91–96
134. Xu Y, Hespanha JP (2005) Estimation under uncontrolled and controlled communications in networked control systems. In: Proceedings of the 44th IEEE conference on decision and control, Seville, pp 842–847
135. Maybeck PS (1982) Stochastic models, estimation, and control. Academic, New York
136. Chen Y, Henderson TC (2001) S-NETS: smart sensor networks. In: Chen Y, Henderson TC (eds) Experimental robotics VII. Springer, New York, pp 81–90
137. Anderson BD, Moore JB (2012) Optimal filtering. Courier Corporation, North Chelmsford
138. Kalman RE, Bucy RS (1961) New results in linear filtering and prediction theory. J Basic Eng 83(1):95–108
139. Singh AK, Pal BC (2014) Decentralized dynamic state estimation in power systems using unscented transformation. IEEE Trans Power Syst 29(2):794–804
140. Emami K, Fernando T, Iu HHC et al (2014) Particle filter approach to dynamic state estimation of generators in power systems. IEEE Trans Power Syst 30(5):2665–2675
141. Kansal P, Bose A (2011) Smart grid communication requirements for the high voltage power system. In: Proceedings of power and energy society general meeting, San Diego, pp 1–6
142. Lang L, Chen W-S, Bakshi BR et al (2007) Bayesian estimation via sequential Monte Carlo sampling—constrained dynamic systems. Automatica 43(9):1615–1622
143. Shao X, Huang B, Lee JM (2010) Constrained Bayesian state estimation—a comparative study and a new particle filter based approach. J Process Control 20(2):143–157
144. Romano P, Paolone M (2014) Enhanced interpolated-DFT for synchrophasor estimation in FPGAs: theory, implementation, and validation of a PMU prototype. IEEE Trans Instrum Meas 63(12):2824–2836
145. Lefebvre T, Bruyninckx H, De Schutter J (2002) Comment on “A new method for the nonlinear transformation of means and covariances in filters and estimators” [with authors’ reply]. IEEE Trans Autom Control 47(8):1406–1409
146. Hajiyev C, Soken HE (2014) Robust adaptive unscented Kalman filter for attitude estimation of pico satellites. Int J Adapt Control Signal Process 28(2):107–120
147. Masreliez CJ, Martin RD (1977) Robust Bayesian estimation for the linear model and robustifying the Kalman filter. IEEE Trans Autom Control 22(3):361–371
148. Zhang H, Song X, Shi L (2012) Convergence and mean square stability of suboptimal estimator for systems with measurement packet dropping. IEEE Trans Autom Control 57(5):1248–1253
149. Hathaway OA, Crootof R, Levitz P et al (2012) The law of cyber-attack. Calif Law Rev 100:817
150. Valverde G, Terzija V (2011) Unscented Kalman filter for power system dynamic state estimation. IET Gener Transm Distrib 5(1):29–37
151. Wang S, Gao W, Meliopoulos AS (2012) An alternative method for power system dynamic state estimation based on unscented transform. IEEE Trans Power Syst 27(2):942–950
152. Li B (2013) State estimation with partially observed inputs: a unified Kalman filtering approach. Automatica 49(3):816–820
153. Shi D, Chen T, Darouach M (2016) Event-based state estimation of linear dynamic systems with unknown exogenous inputs. Automatica 69:275–288
154. Mili L, Cheniae M, Vichare N et al (1996) Robust state estimation based on projection statistics [of power systems]. IEEE Trans Power Syst 11(2):1118–1127
155. Zhao J, Netto M, Mili L (2017) A robust iterated extended Kalman filter for power system dynamic state estimation. IEEE Trans Power Syst 32(4):3205–3216
156. Rousseeuw PJ, van Zomeren BC (1991) Robust distances: simulations and cutoff values. In: Proceedings of directions in robust statistics and diagnostics, New York, pp 195–203
157. Thomas L, Mili L (2007) A robust GM-estimator for the automated detection of external defects on barked hardwood logs and stems. IEEE Trans Signal Process 55(7):3568–3576
158. Zhao J, Mili L (2017) Robust unscented Kalman filter for power system dynamic state estimation with unknown noise statistics. IEEE Trans Smart Grid 10:1215–1224
Index
A
Adaptive method, 189, 270
Arrival rate guaranteed event triggered strategy
  design, 95–97
  distribution of system and observation noise, 94
  in ET-PF, 97–99
  innovation, definition, 95
  non-triggered observation, 97
  non-triggered set, 93, 94
  observation, 93–95
  system structure, 93
Automatic voltage regulator (AVR), 35–37, 40, 44, 47, 170, 174, 208, 211, 215
AVR filter voltage, 37, 40, 44, 47, 170, 174, 208, 211, 215

B
Bayes formula, 63
Bayesian equation, 154
Bayesian inference method, 270
Bayesian law, 249
Bayesian state estimation theory, 102–104, 117, 132
Bernoulli packet dropout model
  channel packet dropout process, 193
  description, 192
  DSE, 193
  Gaussian distribution, 193
  spherical cubature integration principle, 192
  suboptimal CKF design, 195–196
  suboptimal Kalman filter, 193–195
  variance matrix, 193
Bernoulli process, 31, 39, 73, 191, 201, 204
Bernoulli stochastic process, 8, 107, 115, 195, 196, 270
Bonneville Power Administration (BPA), 3
Bootstrap filter, 19, 92, 93

C
CDF, see Cumulative distribution function (CDF)
Central difference filter (CDF), 17, 88, 95
CF performance verification
  dynamic filtering comparison, 207–214
  filtering algorithm, 207
  filtering error comparison, 214
  IEEE 39 buses, 207, 236
  simulation, 207
Channel packet dropout
  Bernoulli process, 191
  causes, 191
  event triggered (see also Event triggered suboptimal CKF)
    nonlinear filter, 192
    sampling strategy, 192
  Kalman filter, 191, 192
  suboptimal CKF (see Bernoulli packet dropout model)
  transmission network, 192
Chi-square distribution, 242
Cholesky decomposition, 24, 29, 127, 135, 136
CKF, see Cubature Kalman filter (CKF)
CKF with intermittent observations (CKF-I), 73, 74, 76, 78, 80, 81, 84, 170, 173, 174, 177, 179, 181, 211, 214, 215, 218
  and CF filtering accuracy, 207
  and CF packet dropout, 207
  triggering condition, 208, 211, 215
Communication delay, 83–85, 179, 180
Communication rate, 181, 186
Constraint particle impoverishment, 103, 105, 132–134
Constraint system, 102–104, 132
Control center, filtering, 198–199
Control command forging attacks (CaSETCKF), 270
  Bayesian law, 249
  categorization, 247
  conditional distribution, 252–253
  DSE, block diagram, 247, 248
  ET CKF algorithm, 254, 255
  Gaussian distribution, 248, 252–253
  multi-dimensional Gaussian stochastic variable, 251–252
  nonlinear equations and continuously differentiable, 245
  nonlinear system, 248
  probability density function, 247
  sequential assumption identification method, 247
  SETCKF, 249
  stochastic innovational condition, 245, 250–251
  system measurement equation, 247–254
  variance matrices, 241, 245
  verification
    attack injection series, 262–263
    estimation error, 263, 266, 267
    filtering accuracy, 263
    generator G9, 259, 262
    linearization matrix, 263
    logic variable, 263
    periodic sampling strategy (CaCKF), 262
    system state and estimation results, 263
Conventional CKF, 60, 63, 73, 151, 154, 156, 195, 248
Conventional Kalman filtering, 194
CPS, see Cyber-physical system (CPS)
Cramer–Rao lower bounds theory, 106
Cubature Kalman filter (CKF), 18–19, 170
  discrete nonlinear system, 55
  ET (see Event triggered CKF (ETCKF))
  Gauss–Hermite integral, 59
  Gauss–Hermite Kalman filter, 59
  GF, 55–56
  MC method, 59
  measurement update, 58–59
  second-order nonlinear partial differential equation, 54
  Sigma-points, 54, 59
  spherical-radial cubature integral, 57
  time update, 58, 63
  UKF, 59
  unbiased optimal filter, 54
  zero-mean Gaussian distribution, 55
Cubature suboptimal filter (CF), 207, 208, 211, 214, 215, 218
Cumulative distribution function (CDF), 88, 95–96
Cyber-physical system (CPS), 6, 8–10, 14

D
Data communication
  bandwidth, 53
  and filtering accuracy, 73, 82
  and guaranteed estimation accuracy, 84
  network, 53
  SETCKF, 181
  SETRCKF, 181
  and system state estimation accuracy, 166
Data tampering attacks (DaSETCKF), 270
  abnormal data detection algorithm
    block diagram, 243
    chi-square distribution, 242
    pseudo-code, 242–243
    PS method, 241, 242
    statistic characteristics, 241–242
  abnormal data detector, 240
  block diagram, 241
  data transmission, 240
  design
    block diagram, 241, 243
    at control center, 245, 246
    pseudo-code, 243–245
    SETCKF, 243–245
    stochastic innovational condition, 243
    weight matrix, 245
  DSE accuracy, 240
  system model, 240–241
  verification
    estimation error, 256, 259, 262
    filtering accuracy, 256
    filtering performance, 256
    IEEE 39 buses, 254
    normalized vectors of elements, 254
    system state and estimation performance, 256–262
Data transmission
  bandwidth, 53
  filter data, 119
  and inadequate bandwidth, 10
  measurement, 53
  PMU, 269
  real-time, 6
  and sensor nodes, 10
  SETCKF, 173, 243
  WAMS, 54, 240
Denial-of-service (DoS), 9, 239
Deterministic event triggered sampling strategy
  covariance based condition, 24
  innovation based condition, 23–24
  Send-on-Delta condition, 22–23
Discrete nonlinear system, 55, 156
Discrete time algebraic Riccati equation (DARE), 24
Discrimination function, 132
Divided difference filter (DDF), 17
DSE, see Dynamics state estimation (DSE)
Dynamics state estimation (DSE)
  application, 52
  ET-EKF and EKF-I, 40, 47
  ET-PF, 105–115
  measurement data, 53
  packet dropout, 193
  performance, 208, 211, 215
  SETCF, 219, 220, 228
  SETCKF, 151
  in smart grid, 54

E
EKF, see Extended Kalman filter (EKF)
Energy measurement system (EMS), 35
Error boundedness, ETCKF, 70–73
Error covariance matrix (ECM), 13, 17, 22, 24, 29–31, 55, 56, 58, 60, 64–66, 158, 199
ESMF, see Extended set-membership filter (ESMF)
ETCKF, see Event triggered CKF (ETCKF)
ET-EKF, see Event triggered EKF (ET-EKF)
ET-HNF, see Event triggered HNF (ET-HNF)
ET-PF, see Event triggered particle filter (ET-PF)
ET-UKF, see Event triggered UKF (ET-UKF)
Event triggered CKF (ETCKF)
  advantages, 73
  Bernoulli process, 73
  to control command forging attacks, 245–254
  to data tampering attacks
    abnormal data detection algorithm, 241–242
    design, 243–245
    verification, 254–259
  different δ values
    vs. estimation error, 74, 80–82
    vs. estimation performance, 75–80
  disadvantages, 150
  effectiveness, 73
  filtering algorithm, 66
  IEEE 39 buses, 73
  innovation based condition, 59–65
  measurement update, 253–254
  simulation on communication delay, 83–85
  stochastic stability analysis, 65–73
  verification on stability and arrival rate, 80, 82, 84
Event triggered condition, 220, 228
Event triggered EKF (ET-EKF)
  disadvantages, 87
  Gaussian noises, 32
  nonlinear model, 32
  power system, 32
  remote center, 34–35
  sensor node, 33–34
  simulation verification
    Bernoulli process, 39
    dynamic system state, 39
    effect of arrival rates, 43, 52
    and EKF-I, 38–52
    filtering performance comparison, 39
    IEEE 39, 38
    measurement, 39
    system state variables, 38
  Taylor expansion, 32
Event triggered HNF (ET-HNF)
  data transmission, 119
  Sigma points, 120
  simulation and verification
    advantages, 141
    architecture, 139
    average of elapsed computation time, 141, 145
    average of estimation error, 137, 138, 140, 144
    computation requirement, 141
    estimation, 140–142
    filtering algorithm, 137
    IEEE 39 buses, 136
    mean of filtering errors, 139, 145
    non-Gaussian noise, 136
    sudden disturbance, 136, 137, 143, 145
    system state variable, 136
    variance matrix, 137
    zero-mean Gaussian white noise, 137
    zero-mean Rayleigh noise, 137
  slave filter (see Slave filter)
  system design
    DSE algorithm, control center, 122–123
    physical node, 121–122
    structure, 121
Event triggered particle filter (ET-PF)
  architecture, 119
  arrival rate, 117
  average simulation time, 139
  filtering design using triggered information
    Bayesian state estimation theory, 103, 104
    constraint particle impoverishment, 105
    constraint system, 102–103
    at data center, 99
    extended constraint system, 103–105
    initialization, 99
    integral function, 102
    non-receipt of observation, 100–102
    posteriori conditional probability, 101
    pseudo-code, 105
    receipt of observation, 99
    time instance, 99
  MC (see Monte Carlo (MC) method)
  physical node, 119
  simulation, for DSE in WAMS, 105–115
Event triggered sampling strategies
  academia and industry, 10
  comparisons, 27
  covariance based condition, 24
  design
    adaptive sampling method, 11
    arbitrary distribution, 11
    conflicts, 11
    continuous time system, 11
    discrete system, 11
    distributed filtering, 13
    error covariance and communication rate, 11
    error covariance matrix, 13
    feedback controller, 12
    Gaussian hypothesis, 12
    linear filter, 13
    linear vector system, 11
    Lyapunov–Krasovskii function, 14
    Markov chain approximation algorithm, 13
    measurement set, 12
    multi-objective conflicts, 11
    remote filter, 12
    set value filtering, 13
    transmission and scheduling, 11
  designed parameter Y, 152
  innovation, 29–30
  innovation based condition, 150
  intermittent Kalman filter, 31–32
  Kalman filter, 30–31
  linear filter, 27–28
  measurements, 22
  nonlinear system, 151
  one-step prediction, measurement, 151, 152
  pseudo-code, 197, 198
  remote filter, 21, 22
  sensor nodes, 10
  SETCKF, 151, 152, 197
  signal processing technology, 11
  stochastic innovational condition, 197
  system state estimation, 197
  transmission, packet dropout, 197
  variance matrix, 197
Event triggered strategy design, 156–159
Event triggered suboptimal CKF
  Bernoulli stochastic process, 196
  control center, filtering, 198–199
  sampling strategy, 197–198
  stochastic innovational condition, 196, 236
Event triggered UKF (ET-UKF)
  arrival rate, 127
  design
    filtering update, 128–130
    innovational observation, 129–130
    one-step prediction, 127–128
    posterior filter variance, 130
    sampling strategy, 125–127
  simulation and verification
    average of elapsed computation time, 139, 145
    average of estimation error, 137, 141
    estimation, 140–142
    filtering accuracy, 137, 139
    mean of filtering errors, 139
    sudden disturbance, 137
  UT transformation, 123–125
Extended constraint system, 103–105
Extended Kalman filter (EKF), 15
  ET (see Event triggered EKF (ET-EKF))
  higher accuracy, 87
  IE (see Iterated extended Kalman filter (IEKF))
  SL, 16
  statistic linear, 16
Extended set-membership filter (ESMF), 16

F
Filtering accuracy, 188, 214, 220, 236, 237
Filtering algorithm, 198, 199
Filtering design, SETRCKF
  innovational information, 161
  measurement noise, 161
  nonlinear system's measurement noise, 159
  robust Kalman filtering, 159
Filtering error, 173, 177, 181, 187, 188, 220
Filtering gain, 194
Filtering iteration process, 199
First-order linearization, 199
Flexible AC transmission system (FACTS), 3
G
Gauss–Hermite filter (GHF), 18
Gauss–Hermite integral, 59
Gauss–Hermite Kalman filter, 59
Gaussian assumption, 13, 18, 19, 24, 55, 134, 179
Gaussian distribution, 13, 18, 19, 22, 25, 26, 28–30, 32, 55, 56, 62–65, 120, 123–125, 134, 149, 155, 156, 193, 241, 245, 248, 252, 253, 256, 270
Gaussian filter (GF)
  error covariance matrix, 56
  Gaussian assumption, 55
  Gaussian distribution, 55
  stochastic variable, 55
  zero-mean Gaussian distribution, 56
Gaussian property of the conditional distribution of the system state, 179
Gaussian white noise, 38, 105, 125, 149, 179
GF, see Gaussian filter (GF)
GHF, see Gauss–Hermite filter (GHF)

H
Heterogeneous nonlinear Kalman filter (HNF)
  ET (see Event triggered HNF (ET-HNF))
Huber formula, 159, 270

I
IEEE 39 buses, 207, 236
IEEE 39 buses simulation platform
  generator parameters, 37–38
  generator system, 35–36
  power grid, 35
  renewable energy, 35
  Type I generator, 36–37
  Type II generator, 37
IEKF, see Iterated extended Kalman filter (IEKF)
Importance sampling (IS) method, 88–90
Infinite norm
  definition, 70
  Send-on-Delta, 24
Innovation based event triggered sampling strategy
  communication bandwidth, 60
  ETCKF
    pseudo-code, 60–61
    remote filter, 61–65
    sensor, 60
    system diagram, 60
  measurement and error covariance matrix, 60
Intermittent Kalman filter, 7, 8, 31–32, 191
Iterated extended Kalman filter (IEKF), 16

J
Jacobian matrix, 15, 17, 122

K
Kalman filter
  Bernoulli packet dropout model, 191
  CKF (see Cubature Kalman filter (CKF))
  ETCKF (see Event triggered CKF (ETCKF))
  ET-EKF (see Event triggered EKF (ET-EKF))
  ET-HNF (see Event triggered HNF (ET-HNF))
  intermittent, 7, 8, 31–32, 191
  stochastic stability, 191

L
Linear matrix inequality (LMI) equation, 8–9, 13, 14, 16
Linear suboptimal filter, 54
Lyapunov method, 189
Lyapunov stability lemma, 202, 205, 207
M
Master and slave filters
  algorithm, 147
  Cholesky decomposition, 135
  computation capability, 120
  cooperation at control center, 134–136
  data transmission efficiency, 135
  dynamic state estimation, 120
  MC based design, 130–133
  non-Gaussian noise, 120
  sudden disturbance, 120
  TIU-PF and ET-UKF distribution, 134–135
Mathematical induction method, 168, 205
Measurement noise, 11, 16, 32, 73, 149, 156, 159, 161, 169, 173, 189, 219, 256, 270
Measurement noise covariance matrices, 156, 161, 169, 173
Monte Carlo (MC) based filtering
  master filter, 130–133
Monte Carlo (MC) method
  ET-PF
    application in nonlinear filtering, 90–93
    CDF, 88
    with guaranteed arrival rate, 93–105
    IS, 90
    particle points, 89–90
    probability and distribution, 88
    SIS, 89–90
    SMC, 89
  filtering error, 173–179
  PF utilizes, 19
  SETCF, 220, 228
  simulation results, 181
Moving-window estimation method, 270

N
Network attack
  cyber-physical system, 239
  data tampering, 239
  definition, 239
  DoS, 239
  DSE application, 239
  ETCKF (see Event triggered cubature Kalman filter (ETCKF))
  KF, 240
  state estimation, 9–10
Non-Gaussian noise, 120, 187, 270
Nonlinear filtering
  application, MC, 90–93
  control system, 14
  deterministic sampling, 16–19
  Gramian matrix, 14
  Kalman filtering, 15
  least square estimation algorithm, 14
  optimal, 15
  stochastic sampling, 19–20
  Taylor expansion approximation, 15–16
  Wiener filtering, 14
  Wiener–Hopf equation, 14
O
One-step prediction estimation error, 199, 203, 206, 220

P
Packet dropout, 6–9, 191–237
Particle filter (PF), 19, 20, 59, 87–117, 122, 136, 141, 143, 150, 156
Particle impoverishment, 19, 92, 105, 133
PCC, see Point of common coupling (PCC)
PF, see Particle filter (PF)
Phasor measurement units (PMUs), 1, 4–5, 21, 35, 37, 53, 60, 83, 85, 121, 180, 269
PMUs, see Phasor measurement units (PMUs)
Point of common coupling (PCC), 35
Positively definite matrix, 68
Posteriori conditional probability, 101–104, 126, 130, 131
Posteriori probability, 91, 99, 100, 102, 120, 133
Posteriori variance, 196, 201, 204, 205
Posterior probability, 16, 17, 19, 91, 100, 120
Posterior variance matrix, 8, 18, 59, 64, 67–71, 80, 196
Power system, 1–6, 32, 35, 52, 269
Power system stabilizer (PSS), 36
Priori conditional distribution of observations, 132
Priori probability, 100, 135
Process noise covariance matrices, 158, 179, 189, 256, 270
Projection statistics (PS) method, 240–242
Pseudo-linearization, ETCKF, 67–71
PSS, see Power system stabilizer (PSS)

R
Rayleigh noise, 137
RCKF filtering, 181
Remote filter, ETCKF, 61–65
Remote state estimation system model, 62–65
Riccati equation, 16

S
Sampling importance resampling (SIR), 19, 92, 105, 130, 133
SCADA, see Supervisory control and data acquisition systems (SCADA)
SDH, see Service and digital hierarchy (SDH)
SE, see State estimation (SE)
Second-order nonlinear partial differential equation, 54
Sequential importance sampling (SIS), 19, 89–92, 100, 105, 133
Sequential Monte Carlo (SMC) method, 89
Service and digital hierarchy (SDH), 3
SETCF, see Stochastic event triggered cubature suboptimal filter (SETCF)
SETCF filtering performance verification
  arrival rates, 219
  comparison, different packet dropout rates, 220–236
  comparison, fixed packet dropout rates, 219–220
  design parameter, 219
  DSE performance, 219, 221, 223, 225
  generator G8, 219
  measurement noise variance, 219
  system noise variance, 219
  system state and measurement vector, 219
SETCKF, see Stochastic event triggered cubature Kalman filter (SETCKF)
SETCKF filtering performance
  vs. CKF-I at different arrival rates, 173
  communication rate, 180
  DSE, 169
  vs. filtering error and arrival rate at different Y, 177, 178
  IEEE 39-bus 10 generator system, 169
SETRCKF, see Stochastic event triggered robust cubature Kalman filter (SETRCKF)
SETRCKF filtering performance
  DSE, 179
  filtering algorithms, 179
  vs. filtering error at different Y, 181–188
  IEEE 39-bus 10 generator system, 179
  vs. RCKF-I at different arrival rates, 181
SGQF, see Sparse grid quadrature filter (SGQF)
Sigma-point Kalman filter (SPKF), 17
Sigma-points, 54, 57–59, 120, 126
Simulation of ET-PF
  abnormal values, 114
  Bernoulli stochastic process, 107
  communication capability, 106
  Cramer–Rao lower bounds theory, 106
  filtering performance, 112
  IEEE 39 buses, 105
  and PF-I
    arrival rates, 106–112, 115
    average of estimation error, 107
    event triggered sampling strategy, 112
    filtering accuracy, 112
    priori estimation error, 107
    statistical property, 112
    statistic characteristics, 115
    stochastical analysis, 116
  simulation results, 107
  system state variable, 105
SIR, see Sampling importance resampling (SIR)
SIS, see Sequential importance sampling (SIS)
Slave filter
  algorithm, 120, 146
  cooperation at control center, 134–136
  ET-UKF
    arrival rate, 127
    design, 127–130
    sampling strategy, 125–127
    UT transformation, 123–125
  and master filter (see Master and slave filters)
Sparse grid quadrature filter (SGQF), 18
Spherical cubature integration principle, 195
Spherical-radial cubature integral, 57
State estimation (SE)
  CPS, 6
  influence, 6
  packet dropout
    Bernoulli stochastic process model, 7, 8
    decision-making layer, 8
    intermittent filtering, 7, 8
    intermittent linear filter stability, 7–8
    LMI, 8–9
    Markov process, 8
    network attack, 9–10
    nonlinear intermittent filter, 9
    Riccati equation, 8
    stability analysis, 9
    transmission network, 6
    triggered sampling strategy, 9
    WSN, 6
Stochastic event triggered cubature Kalman filter (SETCKF), 197, 199
  control command forging attack, 270
  disadvantage, 149
  event triggered sampling strategy, 149
  filtering accuracy, 269
  filtering algorithm, 157
  filtering performance (see SETCKF filtering performance)
  Gaussian characteristics, measurements, 149
  Gaussian distribution, 149
  innovation based condition, 149, 150
  measurement noise, 149
  nonlinear H∞ filter, 150
  nonlinear system, 149
  particle filter, 150
  sampling strategy, 149
  SETRCKF, 150
  stochastic innovational condition (see Stochastic innovation based event triggered sampling strategy)
  stochastic stability, 162–166
  WAMS, 150
Stochastic event triggered cubature suboptimal filter (SETCF), 207, 219–235
Stochastic event triggered robust cubature Kalman filter (SETRCKF)
  data alteration attack, 270
  event triggered strategy design, 156–159
  filtering algorithm, 163
  filtering design, 159–162
  filtering performance (see SETRCKF filtering performance)
  measurement noise, 156
  nonlinear system, 156
  stochastic innovational condition, 156
  stochastic stability analysis, 167–169
  system noise, 156
  WAMS, 156
Stochastic event triggered sampling strategy
  innovational condition, 26
  open loop condition, 25
Stochastic innovational condition
  SETCKF (see Stochastic event triggered cubature Kalman filter (SETCKF))
  SETRCKF (see Stochastic event triggered robust cubature Kalman filter (SETRCKF))
Stochastic innovation based event triggered sampling strategy
  cubature Kalman filter, 269
  data transmission, 269
  event triggered sampling strategy
    designed parameter Y, 152
    DSE, 151
    nonlinear system, 151
    one-step prediction, measurement, 151, 152
    variables, 150, 152
  filtering design, control center, 152–156
Stochastic Lyapunov stability lemma, 84, 189, 236
Stochastic stability
  ETCKF
    error boundedness, 71–74
    posterior variance matrix, 67–71
    pseudo-linearization, 67–71
  filtering, 269
  one-step prediction error, 162
  SETCKF, 162–166
  SETRCKF, 167–169
  sufficient condition, 162, 269, 270
Stochastic stability analysis
  event triggered suboptimal CKF
    assumptions, 206
    channel packet dropout, 204
    equation transformation, 205
    filtering gain, 206
    innovational condition, 204
    linearized equation, 204
    mathematic induction method, 205
    one-step prediction error, 206
    posteriori variance matrix, 204, 205
    theorem function, 206
    upper bound derivation, 205
  suboptimal CKF
    assumptions, 201, 203
    error variance matrix, 199
    first-order linearization, 109
    Lyapunov stability lemma, 203
    mathematical induction, 202
    minimum measurement arrival rate, 202
    one-step prediction estimation error, 199, 203
    theorem, 201
Suboptimal CKF design
  Bernoulli packet dropout model, 195, 196
  calculation process, 195
  conventional CKF, 195
  independent Bernoulli stochastic process, 195
  posteriori variance matrix, 196
  stochastic stability index, 196
  system state estimation, 195
Suboptimal Kalman filter, 270
  Bernoulli model, 195
  channel packet dropout, 194
  conventional Kalman filtering, 194
  definition, 194
  DSE, 193
  filtering gain, 194
Sufficient condition, 162, 270
Supervisory control and data acquisition systems (SCADA), 35
System state estimation, 11, 31, 61, 63, 154, 156, 166, 169, 194, 195
T
Taylor expansion, 67, 199
TCP/IP protocol, 197
Test distribution, 89, 91–93
The third degree spherical radial cubature rule, 18, 54
Transmission network, 270

U
UKF, see Unscented Kalman filter (UKF)
Unbiased optimal filter, 15, 28, 54
Universal time (UTC), 3–5
Unscented Kalman filter (UKF), 17, 59
Unscented transformation based event triggered UKF (ET-UKF), 123–125
User datagram protocol (UDP), 83
UT transformation, 17, 120, 123–126, 128

V
Variance matrices, 137, 193, 194, 197, 245

W
WAMS, see Wide-area measurement systems (WAMS)
Weight matrix, 245, 267, 270
Wide-area measurement systems (WAMS)
  applications, 1
  basic synchronization principles, 3–4
  communication structure, 1, 2
  computer aided tools, 1
  control center, 5
  DSE, 269
  dynamic state estimation, 269
  measurement data, 1
  network communication media, 1, 2
  network communication system, 6
  non-Gaussian noise, 270
  PMU, 4–5
  power system, 1
  smart grid, 1
  state estimation, 1 (see State estimation)
  transmission network, 270
Wireless sensor network (WSN), 6
WSN, see Wireless sensor network (WSN)

Z
Zero-mean Gaussian distribution, 28, 55, 56, 105, 106, 156, 193, 241, 245, 256