Stability Analysis and State Estimation of Memristive Neural Networks
Hongjian Liu
(https://orcid.org/0000-0001-6471-5089)
Zidong Wang
(https://orcid.org/0000-0002-9576-7401)
Lifeng Ma
(https://orcid.org/0000-0002-1839-6803)
First edition published 2021
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2021 Hongjian Liu, Zidong Wang and Lifeng Ma

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

ISBN: 978-1-032-03710-3 (hbk)
ISBN: 978-1-032-03810-0 (pbk)
ISBN: 978-1-003-18915-2 (ebk)

Typeset in CMR10 by KnowledgeWorks Global Ltd.
To the Wang Dynasty and our families.
Contents

Preface  xi
Acknowledgment  xiii
Authors Biographies  xv
List of Figures  xvii
List of Tables  xix
Symbols  xxi

1  Introduction  1
   1.1  Background on Memristive Neural Networks  2
        1.1.1  Memristor and Its Circuit Realization  4
        1.1.2  Stability Analysis and State Estimation for MNNs  5
        1.1.3  Recent Progress on Several Types of Neural Networks  6
               1.1.3.1  RNNs  7
               1.1.3.2  BAMNNs  8
               1.1.3.3  CMNNs  8
   1.2  MNNs subject to Engineering-Oriented Complexities  9
        1.2.1  Stochasticity  10
        1.2.2  Time-Delays  10
        1.2.3  Network-Induced Incomplete Information  11
               1.2.3.1  Missing Measurements  11
               1.2.3.2  Channel Fading  12
               1.2.3.3  Signal Quantization  12
   1.3  Design Techniques  13
        1.3.1  Event-Triggering Mechanisms  13
        1.3.2  Network Communication Protocols  14
               1.3.2.1  RR Protocol  14
               1.3.2.2  WTOD Protocol  15
               1.3.2.3  SC Protocol  15
        1.3.3  Set-Membership Technique  15
        1.3.4  Non-Fragile Algorithm  16
   1.4  Outline  16

2  H∞ State Estimation for Discrete-Time Memristive Recurrent Neural Networks with Stochastic Time-Delays  19
   2.1  Problem Formulation  20
   2.2  Main Results  23
   2.3  An Illustrative Example  28
   2.4  Summary  31

3  Event-Triggered H∞ State Estimation for Delayed Stochastic Memristive Neural Networks with Missing Measurements: The Discrete Time Case  33
   3.1  Problem Formulation  34
   3.2  Main Results  40
   3.3  An Illustrative Example  49
   3.4  Summary  52

4  H∞ State Estimation for Discrete-Time Stochastic Memristive BAM Neural Networks with Mixed Time-Delays  55
   4.1  Problem Formulation and Preliminaries  56
   4.2  Main Results  63
   4.3  Numerical Example  72
   4.4  Summary  76

5  Stability Analysis for Discrete-Time Stochastic Memristive Neural Networks with Both Leakage and Probabilistic Delays  77
   5.1  Problem Formulation  78
   5.2  Main Results  83
   5.3  Illustrative Examples  92
   5.4  Summary  94

6  Delay-Distribution-Dependent H∞ State Estimation for Discrete-Time Memristive Neural Networks with Mixed Time-Delays and Fading Measurements  95
   6.1  Problem Formulation  96
   6.2  Main Results  102
   6.3  Illustrative Examples  112
   6.4  Summary  115

7  On State Estimation for Discrete Time-Delayed Memristive Neural Networks under the WTOD Protocol: A Resilient Set-Membership Approach  117
   7.1  Problem Formulation  118
        7.1.1  Memristive Neural Network Model  118
        7.1.2  The WTOD Protocol  120
   7.2  Main Results  124
   7.3  An Illustrative Example  130
   7.4  Summary  134

8  On Finite-Horizon H∞ State Estimation for Discrete-Time Delayed Memristive Neural Networks under Stochastic Communication Protocol  135
   8.1  Problem Formulation and Preliminaries  136
   8.2  Main Results  140
   8.3  An Illustrative Example  146
   8.4  Summary  149

9  Resilient H∞ State Estimation for Discrete-Time Stochastic Delayed Memristive Neural Networks: A Dynamic Event-Triggered Mechanism  151
   9.1  Problem Formulation  152
   9.2  Main Results  156
   9.3  An Illustrative Example  164
   9.4  Summary  168

10  H∞ and l2 − l∞ State Estimation for Delayed Memristive Neural Networks on Finite Horizon: The Round-Robin Protocol  169
    10.1  Problem Formulation and Preliminaries  170
    10.2  Main Results  173
    10.3  An Illustrative Example  182
    10.4  Summary  190

11  Conclusions and Future Topics  191

Bibliography  193

Index  213
Preface
The rapid development of artificial intelligence (AI) has been profoundly changing our daily lives as well as human society. Nowadays, the relevant disciplines of AI technology are being advanced from many directions, including but not limited to theoretical modeling, technological innovation, and software and hardware upgrades. To date, AI technology has triggered a chain of breakthroughs and has been transforming various sectors of society from networked and digital to intelligent. Since the announcement from HP Labs of the experimental prototyping of the memristor, memristors and memristive devices have gained wide research attention for their prospective applications in nonvolatile memories, logic devices, neuromorphic devices, and neuromorphic self-organized computation and learning. In the context of neural networks, synapses are essential elements for computation and information storage: a synapse needs to remember its past dynamical history, store a continuous set of states, and be "plastic" according to synaptic neuronal activity. None of this can be accomplished by a resistor in traditional recurrent neural networks (RNNs). When the resistors are replaced by memristors, the resulting memristive neural networks (MNNs) can largely overcome these limitations. Meanwhile, implemented MNNs can be more efficient than traditional RNNs when applied to brain emulation, combinatorial optimization, knowledge acquisition, and pattern recognition. As such, dynamics analysis problems, such as the stability and synchronization of MNNs, have recently received considerable research attention, and a rich body of relevant literature is now available for different kinds of MNNs. It should be mentioned that almost all results obtained so far are exclusively for continuous-time MNNs, while the corresponding results on discrete-time memristive neural networks (DMNNs) are much scarcer.
On the other hand, in real-world applications, especially in networked settings, certain frequently occurring engineering-related issues, such as time-delays, parameter uncertainties, random disturbances, limited communication bandwidth and incomplete information, have proved to be major sources of system instability and performance deterioration, and they impose fundamentally new challenges on the study of various types of neural networks. When discussing stability analysis and estimator design problems, these engineering-oriented phenomena cannot be neglected. Instead, they must be considered simultaneously with the neural network
dynamics under a unified framework so as to achieve a satisfactory level of performance. In this book, faced with various network-induced phenomena, we discuss the stability analysis and estimator design problems for discrete-time MNNs subject to time-delays. By drawing on a variety of theories and methodologies, such as Lyapunov stability theory, delay-dependent techniques, graph theory and certain convex optimization algorithms, the study of stability analysis and state estimation is approached from different perspectives, including systems science, control theory, signal processing and optimization. Specifically, in each chapter the analysis problems are considered first, where stability, synchronization and other performance indices (e.g., reliability, robustness, disturbance attenuation level) are investigated within a unified theoretical framework. At this stage, some novel notions are put forward to reflect engineering practice in a more realistic yet comprehensive way. Then, the estimator design issues are discussed, where sufficient conditions are derived to ensure the existence of the desired estimators with guaranteed performance. Finally, the theories and techniques developed in the earlier parts are applied to several emerging research areas. This book is a research monograph whose intended audience is graduate and postgraduate students as well as researchers.

Hongjian Liu
Wuhu, China

Zidong Wang
London, UK

Lifeng Ma
Nanjing, China
Acknowledgment
The authors would like to express their deep appreciation to those who have been directly involved in various aspects of the research leading to this book. Special thanks go to Professor Bo Shen from Donghua University, Shanghai, China, Professor Fuad E. Alsaadi from King Abdulaziz University, Jeddah, Saudi Arabia, Professor Xiaohui Liu from Brunel University London, London, U.K., Professor Abdullah M. Dobaie from King Abdulaziz University, Jeddah, Saudi Arabia, Professor Tingwen Huang from Texas A&M University at Qatar, Doha, Qatar, Professor Hongli Dong from Northeast Petroleum University, Daqing, China, Professor Weiyin Fei from Anhui Polytechnic University, Wuhu, China and Professor Yurong Liu from Yangzhou University, Yangzhou, China. The writing of this book was supported in part by the National Natural Science Foundation of China under Grants 61773209, 61773017, 61873148, 61933007 and 61973163, the AHPU Youth Top-notch Talent Support Program, the Natural Science Foundation of Universities in Anhui Province under Grants gxyqZD2019053 and KJ2019A0160, the Natural Science Foundation of Jiangsu Province under Grant BK20190021, the Six Talent Peaks Project in Jiangsu Province under Grant XYDXX-033, the Heilongjiang Postdoctoral Sustentation Fund under Grant LBH-Z19048, the Engineering and Physical Sciences Research Council (EPSRC) of the UK, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany. The support of these organizations is gratefully acknowledged.
Authors Biographies
Hongjian Liu received the B.Sc. degree in applied mathematics from Anhui University, Hefei, China, in 2003, the M.Sc. degree in detection technology and automation equipment from Anhui Polytechnic University, Wuhu, China, in 2009, and the Ph.D. degree in control science and engineering from Donghua University, Shanghai, China, in 2018. In 2016, he was a Research Assistant with the Department of Mathematics, Texas A&M University at Qatar, Doha, Qatar, for two months. From March 2017 to March 2018, he was a Visiting Scholar in the Department of Information Systems and Computing, Brunel University London, UK. He is currently a Professor in the School of Mathematics and Physics, Anhui Polytechnic University, Wuhu, China. His current research interests include filtering theory, memristive neural networks and network communication systems. He is a very active reviewer for many international journals.

Zidong Wang was born in Jiangsu, China, in 1966. He received the B.Sc. degree in mathematics from Suzhou University, Suzhou, China, in 1986, and the M.Sc. degree in applied mathematics in 1990 and the Ph.D. degree in electrical engineering in 1994, both from Nanjing University of Science and Technology, Nanjing, China. He is currently Professor of Dynamical Systems and Computing in the Department of Computer Science, Brunel University London, U.K. From 1990 to 2002, he held teaching and research appointments at universities in China, Germany and the UK. Prof. Wang's research interests include dynamical systems, signal processing, bioinformatics, control theory and applications. He has published more than 600 papers in international journals. He is a holder of the Alexander von Humboldt Research Fellowship of Germany, the JSPS Research Fellowship of Japan, and the William Mong Visiting Research Fellowship of Hong Kong.

Prof. Wang serves (or has served) as the Editor-in-Chief of International Journal of Systems Science, the Editor-in-Chief of Neurocomputing, and an Associate Editor for 12 international journals, including IEEE Transactions on Automatic Control, IEEE Transactions on Control Systems Technology, IEEE Transactions on Neural Networks, IEEE Transactions on Signal Processing, and IEEE Transactions on Systems, Man, and Cybernetics-Part C. He is a Member of the Academia Europaea, a Fellow of the IEEE, a Fellow of the Royal Statistical Society and a member of the program committee for many international conferences.
Lifeng Ma received the B.Sc. degree in Automation from Jiangsu University, Zhenjiang, China, in 2004 and the Ph.D. degree in Control Science and Engineering from Nanjing University of Science and Technology, Nanjing, China, in 2010. From August 2008 to February 2009, he was a Visiting Ph.D. Student in the Department of Information Systems and Computing, Brunel University London, U.K. From January 2010 to April 2010 and from May 2011 to September 2011, he was a Research Associate in the Department of Mechanical Engineering, the University of Hong Kong. From March 2015 to February 2017, he was a Visiting Research Fellow at King's College London, U.K. He is currently a Professor in the School of Automation, Nanjing University of Science and Technology, Nanjing, China. His current research interests include control and signal processing, machine learning and deep learning. He has published more than 50 papers in refereed international journals. He serves as an editor for Neurocomputing and International Journal of Systems Science.
List of Figures
1.1   Framework of the survey.  2
1.2   Physical model of memristor element [145].  4
1.3   Memristor mimicking human brain neuron synapses [56].  5
2.1   Output estimation error z̃(k).  30
2.2   The state and its estimate of node 1.  30
2.3   The state and its estimate of node 2.  31
3.1   The state and its estimate of node 1.  51
3.2   The state and its estimate of node 2.  51
3.3   Estimation error of node 1 and node 2.  52
3.4   Event-based release instants and release interval.  52
4.1   Trajectories of state xi(k) and its estimate x̂i(k), i = 1, 2.  74
4.2   Trajectories of state x̃i(k) and its estimate, i = 1, 2.  75
4.3   Estimation error xi(k) − x̂i(k), i = 1, 2.  75
4.4   Estimation error of x̃i(k), i = 1, 2.  76
5.1   State trajectory of the DSMNN in the example.  93
6.1   The framework of H∞ state estimation.  100
6.2   Ideal measurements yk and received signals ỹk of the estimator.  114
6.3   zk and its estimate ẑk.  115
6.4   Estimation error ek.  115
7.1   SEP for an MNN under the WTOD protocol.  120
7.2   The selected nodes of MNNs.  131
7.3   The actual and estimated trajectories of z1(t) and z2(t).  132
7.4   The estimation errors of z1(t) and z2(t).  132
7.5   The actual and estimated values of ȳ1(t) and ȳ2(t).  133
7.6   The estimation errors of ȳ1(t) and ȳ2(t).  133
8.1   State estimation with stochastic communication protocols.  138
8.2   z1(s) and ẑ1(s).  148
8.3   z2(s) and ẑ2(s).  148
8.4   z3(s) and ẑ3(s).  149
8.5   z̄(s) and its estimate.  149
9.1   Remote state estimation under the dynamic ETM.  154
9.2   The state z1(s) and its estimate ẑ1(s).  165
9.3   The state z2(s) and its estimate ẑ2(s).  165
9.4   The state z̄(s) and its estimate.  166
9.5   The estimation error of z̄(s).  166
9.6   Event-based release instants and release interval: dynamic case.  167
9.7   Event-based release instants and release interval: static case.  167
10.1  Remote state estimation under the RRP.  171
10.2  z1(s) and ẑ1(s) with RRP.  185
10.3  z2(s) and ẑ2(s) with RRP.  186
10.4  z̄(s) and its estimate with RRP.  186
10.5  The estimation error of z̄(s) with RRP.  187
10.6  z1(s) and ẑ1(s) without RRP.  187
10.7  z2(s) and ẑ2(s) without RRP.  188
10.8  z̄(k) and its estimate without RRP.  188
10.9  The estimation error of z̄(k) without RRP.  189
10.10 The scheduling of the nodes with RRP.  189
List of Tables
5.1   Allowable maximum τM for different δ.  94
5.2   Allowable maximum τM for different ℓ.  94
10.1  Filter Gains.  185
Symbols
R^n                   The n-dimensional Euclidean space.
R^{n×m}               The set of all n × m real matrices.
‖·‖                   The Euclidean norm in R^n.
Sym{A}                The symmetric matrix A + A^T.
l2([0, ∞); R^m)       The space of square-summable m-dimensional vector functions over [0, ∞).
co{u, v}              The closure of the convex hull generated by real numbers u and v.
⊗                     The Kronecker product of matrices.
N                     The set of all nonnegative integers.
N^+                   The set of all positive integers.
λmin(A)               The smallest eigenvalue of A.
λmax(A)               The largest eigenvalue of A.
diag_n{A}             The block-diagonal matrix with n diagonal blocks A.
col_n{xi}             The vector [x1 x2 ... xn]^T.
(Ω, F, P)             The complete probability space.
δ(·) ∈ {0, 1}         The Dirac delta function.
mod(a, b)             The unique nonnegative remainder on division of a by b.
E{x}                  The expectation of the stochastic variable x.
P{x}                  The probability of the stochastic variable x.
I                     The identity matrix of compatible dimension.
X > Y                 X − Y is positive definite, where X and Y are symmetric matrices.
X ≥ Y                 X − Y is positive semi-definite, where X and Y are symmetric matrices.
M^T                   The transpose matrix of M.
diag{M1, ..., Mn}     The block-diagonal matrix with diagonal blocks M1, ..., Mn.
*                     The ellipsis for terms induced by symmetry, in symmetric block matrices.
1 Introduction
The rapid development of artificial intelligence (AI) has been profoundly changing our daily lives as well as human society. Nowadays, the relevant disciplines of AI technology are being advanced from many directions, including but not limited to theoretical modeling, technological innovation, and software and hardware upgrades. To date, AI technology has triggered a chain of breakthroughs and has been transforming various sectors of society from networked and digital to intelligent. The artificial neural network (ANN) is one of the key cornerstones of the development of AI technology and has once again attracted widespread attention all over the world. An artificial neural network, also referred to simply as a neural network, is a mathematical or computational model that mimics the structure and function of biological, especially human, brain neural networks. This idea of mimicking the function of the biological brain has directly shaped the development of AI technology. It is of great significance for the development of AI to better realize the "intelligence-like-brain", thereby reproducing the decision-making, procedural and reflexive behaviors of the human brain. In view of this, researchers have paid considerable attention to the key link through which ANNs realize human-brain bionics, namely the "bionic synapse". It is widely known that the synapse of a human brain neuron is not only a transmission channel for information, but also the basic unit by which the human brain learns and stores information [105, 110, 149].
On the other hand, with the appearance of deep neural network (DNN) algorithms, the promotion of parallel computing on graphics processors and the emergence of big data, it has been found that the traditional Complementary Metal Oxide Semiconductor (CMOS) transistor can hardly meet current requirements of mass data computation due to physical limitations such as large size, high energy consumption and the inability to support multi-level storage. These defects pose many difficulties for the realization of the "bionic synapse" and thus largely hinder the theory and technology from being applied. The theory of the memristor and its physical realization have brought a new dawn for conquering the aforementioned bottlenecks in the development of ANNs. A memristor-based neural network, like the biological brain, has the ability to handle multiple tasks at the same time. Most importantly, the memristor-based neural network does not require repeated data movement when processing large amounts of data, which makes it particularly suitable for machine learning systems. Therefore, the memristor is a better choice for the implementation of neural
networks in comparison to the traditional computer system structure. What is even more exciting is that a neural network chip has been created by Dmitri Strukov and his colleagues from the University of California and the State University of New York using only memristors [62], which is an important step toward creating larger-scale neural networks.

In this chapter, we review recent advances on memristive neural networks with emphasis on the issues of stability analysis and state estimation. The rest of the chapter is organized as follows. Section 1.1 introduces the background of memristive neural networks. Section 1.2 provides an overview of recent progress on the study of MNNs under certain well-studied engineering-oriented complexities. Several frequently utilized design techniques for synthesis issues relevant to MNNs are presented in Section 1.3. Section 1.4 gives the overall structure of this book. The framework of the overview is shown in Fig. 1.1.

FIGURE 1.1: Framework of the survey.

1.1 Background on Memristive Neural Networks
ANNs have now been widely utilized in image processing [12], pattern recognition [107], and dynamic optimization [185], and accordingly, the analysis and synthesis issues of ANNs have also become research hotspots in the field of neural networks. During the past few decades, researchers have devoted a great deal of effort to this area, resulting in a series of results published in the
literature; see, e.g., [61, 69, 80–82, 84, 85, 112, 117, 127, 140, 143, 168, 201, 208]. It should be pointed out that the memristive neural network (MNN) is a special type of ANN based on the "memristor bionic synapse", which replaces the traditional resistors with memristors in the circuit realization process. Nowadays, thanks to the fast development of computer science, on one hand digital technology is widely applied in modern industrial control systems; on the other hand, for the purpose of obtaining better control performance or simplifying the design procedure, continuous systems or processes are usually discretized before system design. The mathematical models of MNNs in the continuous and discrete cases can be described, respectively, by dynamical differential and difference equations whose parameters are state-dependent. Based on these models, the analysis and application issues of MNNs have attracted widespread attention from both academia and industry. A multitude of research works has been reported in the literature, most of which, however, concern continuous-time MNNs, while the corresponding discrete case is largely neglected despite its stronger engineering relevance. This is mainly due to the lack of appropriate paradigms capable of dealing with state-dependent parameters, and the resulting unpredictable nonlinearities, in the context of discrete memristive neural networks (DMNNs). In general, the DMNN has not yet been adequately investigated, and its stability analysis and estimator design problems, topics that have recently become very popular in the communities of control science and signal processing, have rarely been discussed. In addition, owing to the fast growth of network communication technology in recent years, more and more systems have been implemented over networks, where computer communication networks are adopted for the connection between estimators and remote sensors.
The signal is then transmitted through the connected communication network, which not only avoids laying point-to-point dedicated lines, but also enables effective remote operation, thereby enhancing the system's flexibility and reliability. Up to now, a large number of results have been reported on the stability of various types of neural networks, including MNNs. On the other hand, in recent years the state estimation problem for networked systems has become a hotspot in the area of control theory and engineering, and has shown promising application prospects in various fields such as the coordinated control of communication-network-based underwater robots and unmanned vehicles, as well as the advanced control of complex industrial systems. Notice that current research on systems in the network environment mainly focuses on relatively simple cases. For instance, most results have been obtained under rather strict assumptions, e.g., that the communication capability of the network is unconstrained or that the target system is linear and time-invariant. Obviously, these results are no longer applicable to an MNN with state-dependent parameters, which is in fact a time-varying system with strong nonlinearity. Consequently, it is a new challenge to investigate the state estimation issue for MNNs while taking full consideration
of the impacts of the network communication capability and the system's nonlinear dynamics on the estimation performance. In summary, it is of profound significance, for both theoretical research and engineering practice, to study the stability analysis as well as state estimation problems of MNNs.
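To make the state-dependent modeling concrete, the discrete-time case can be written as a difference equation whose connection weights switch with the neuron states. The following Python sketch is purely illustrative: the two-level switching rule, the tanh activation, the scalar self-feedback `A` and all numerical values are assumptions for demonstration, not a model taken from any particular chapter.

```python
import numpy as np

def memristive_weights(x, W_low, W_high, threshold=1.0):
    """State-dependent connection weights: row i switches between two
    levels according to |x_i| relative to a threshold, mimicking the
    memristor's dependence on the neuron's past state."""
    return np.where(np.abs(x)[:, None] <= threshold, W_low, W_high)

def dmnn_step(x, W_low, W_high, A=0.5):
    """One step of a discrete-time memristive neural network:
    x(k+1) = A x(k) + W(x(k)) f(x(k)), with activation f = tanh."""
    W = memristive_weights(x, W_low, W_high)
    return A * x + W @ np.tanh(x)

# Tiny two-neuron example with made-up weight levels
x = np.array([0.2, -1.5])
W_low = np.array([[0.1, -0.3], [0.2, 0.1]])
W_high = np.array([[0.4, -0.1], [0.3, 0.2]])
x_next = dmnn_step(x, W_low, W_high)
```

Every evaluation of the right-hand side re-selects the weights from the current state; it is precisely this switching that makes DMNN analysis harder than classical RNN analysis.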
FIGURE 1.2: Physical model of memristor element [145].

1.1.1 Memristor and Its Circuit Realization
The memristor, also known as the memory resistor, is recognized as the fourth passive electronic component after resistors, capacitors and inductors. It was first proposed in 1971 by Professor Leon O. Chua of the University of California, Berkeley [18]. The physical realization of the memristor, however, saw no breakthrough for decades because of the limitations of the processing technology of the time. It was not until 2008 that the first physical memristor was developed by HP Labs, consisting of a double-layered TiO2 film [145], as shown in Figure 1.2. In 2010, Professor Wei Lu and his colleagues at the University of Michigan filled a memristor made of a mixture of Ag and Si into the intersection of two metal electrodes, and proposed the first electronic neural circuit using a memristor synapse [56], as shown in Figure 1.3. It was also verified in [56] that the memristor is not only small in size (nanoscale) and low in energy consumption, but also has a unique memory of the internal state of the system. It is this memory characteristic that enables the memristor to realize the mechanism of synaptic plasticity, making it the electronic component most similar in function to the synapse of the human brain and thereby paving the way for research on MNNs. In the era of nanoscale high-density integrated circuits, the memory characteristics and brain-like synaptic learning functions of memristors make neural networks comprised of memristors much closer to the human brain, while at the same time removing the weight constraints in the circuit realization of traditional neural network designs. Since the first memristor was invented by Hewlett-Packard Laboratories in the United States, research on memristors and memristor-based nonlinear systems has received extensive attention, especially the theoretical study of MNNs against the background of circuit systems. Recently, it has become one of
FIGURE 1.3: Memristor mimicking human brain neuron synapses [56].
the hotspots in the field of ANNs and other relevant areas, see [1,10,11,58,111] and the references therein. Nowadays, the work on memristors mainly focuses on two aspects: 1) following HP Labs, researchers aim to discover more economical materials; 2) based on the results of Professor Leon O. Chua, investigations are carried out on the establishment of system models as well as the analysis and application of dynamical behaviors. The study of system dynamical behavior mainly concentrates on three directions: the first is the analysis of the stability and attractiveness of MNNs; the second is the analysis of periodic solutions and homoclinic orbital behavior; and the third is chaotic phenomena and synchronization control. It should be emphasized that memristor-based systems, especially MNNs with rich dynamical behaviors, not only play a key role in real-world industrial engineering applications, but also provide technical support for improving or manufacturing new memristors. As such, it is of significance to study the stability issue for MNNs and establish a unified theoretical framework. Note that, because of the memory characteristics of memristors, memristor-based neural networks can generally be described by dynamical differential equations with state-dependent parameters [175]. It is usually assumed in the theoretical analysis that the synaptic weight of an MNN takes its largest absolute value, and the dynamical behavior is then discussed by utilizing different approaches and theories such as set-valued mapping, differential inclusion, nonsmooth analysis, Filippov theory, the M-matrix method, Lyapunov functionals, the ω-measure and linear matrix inequalities (LMIs), see, e.g. [159,174,177,191,192]. For instance, in [177], a detailed mathematical derivation has been presented for delayed memristive RNNs, after which research on MNN systems gradually gathered pace.
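To make the idea of a differential equation with state-dependent parameters concrete, the following toy sketch (our own scalar construction, not a model from the cited references; all names and numbers — `d`, `a_small`, `a_big`, `thr` — are illustrative assumptions) simulates one memristive neuron whose synaptic weight switches with the state:

```python
import math

# Illustrative scalar memristive neuron: dx/dt = -d*x + a(x)*tanh(x) + u,
# where the synaptic weight a(x) jumps between two values depending on |x|.
# This state-dependent weight is what makes the right-hand side discontinuous.

def weight(x, a_small=0.4, a_big=0.9, thr=1.0):
    """State-dependent synaptic weight: a simple memristive switching rule."""
    return a_big if abs(x) <= thr else a_small

def simulate(x0, u=0.0, d=1.0, dt=1e-3, steps=5000):
    """Forward-Euler integration of the scalar memristive neuron."""
    x = x0
    for _ in range(steps):
        x += dt * (-d * x + weight(x) * math.tanh(x) + u)
    return x
```

With `d = 1` and both weight levels below 1, the leakage term dominates and trajectories decay toward the origin regardless of which weight is active; the analysis approaches listed above (differential inclusion, Filippov theory) exist precisely to make such arguments rigorous despite the discontinuous weight.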
1.1.2
Stability Analysis and State Estimation for MNNs
So far, most of the existing results concerning MNNs mainly address the analysis of dynamical behaviors such as stability, synchronization or chaos in the continuous case; the corresponding investigations have been much fewer for their discrete counterparts. On the other hand, the problem of state
estimation has always been popular in the field of complex networks [20,126]. Moreover, in recent years, the corresponding state estimation issue for neural networks has stirred tremendous research interest, mainly because it is usually impractical, if not impossible, to directly access the states of neurons. Consequently, on these occasions, before exploiting neural networks for classification, approximation and optimization, the states of the neurons should first be estimated by utilizing the available network measurements, see [126, 165] for examples. More specifically, in [54], the authors have considered the state estimation problem for a class of neural networks with both stochastic and uncertain dynamics. In addition, for a sort of coupled stochastic networks, the data packet dropout phenomenon has been studied in [73], based on which a robust estimator has been designed with guaranteed performance. When it comes to the estimation issue for MNNs, some preliminary results have been obtained in [40, 118, 123] by utilizing passivity theory. Moreover, the framework of Filippov differential inclusions has been adopted in [6,171] to design state estimators for continuous MNNs with stochasticity and time-delays, respectively. In [118], an estimator has been proposed for a type of continuous MNNs with time-varying delays by resorting to dissipation theory. Again, it is worth noting that there have so far been relatively few results on the stability analysis and estimator design for DMNNs. For DMNN-related dynamics analysis as well as state estimation, a challenge is that the nonsmooth analysis methods and differential inclusion theory established for continuous MNNs cannot be directly extended to the discrete counterpart. Thus, a novel and appropriate paradigm should be developed to deal with the analysis and design issues for DMNNs.
However, up to now, the corresponding research has been far from adequate, although some limited work has appeared. For instance, in [27, 75], the authors have first described MNNs with state-dependent parameters by a state-switching model, and then the state estimation issue has been handled by virtue of Lyapunov theory and robust analysis.
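The state-switching description mentioned above can be sketched in a toy discrete-time form (our own illustration, not the model of [27, 75]; the weight levels and threshold are assumed):

```python
import math

# Toy discrete-time state-switching model: x(k+1) = a(x(k)) * tanh(x(k)),
# where the weight a(.) jumps between two admissible levels with the state.
# In the analysis style described above, one bounds the weight by its largest
# absolute value a_max; since |tanh(x)| <= |x|, this gives
# |x(k+1)| <= a_max * |x(k)|, so a_max < 1 certifies stability.

A_LEVELS = (0.3, 0.8)            # two admissible weight values (assumed)

def a_of(x):
    """State-dependent weight selection rule (illustrative threshold)."""
    return A_LEVELS[1] if abs(x) <= 1.0 else A_LEVELS[0]

def run(x0, k=50):
    x = x0
    for _ in range(k):
        x = a_of(x) * math.tanh(x)
    return x
```

Here `a_max = 0.8 < 1`, so every trajectory contracts geometrically to the origin, which is the kind of conclusion the Lyapunov-based arguments formalize.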
1.1.3
Recent Progress on Several Types of Neural Networks
Before reviewing the latest advances on DMNNs, this subsection introduces the recent research on several widely studied classes of neural networks relevant to MNNs, including recurrent neural networks (RNNs), bidirectional associative memory neural networks (BAMNNs) and continuous memristive neural networks (CMNNs), all of which have been playing a pivotal role in both theoretical research and industrial engineering.
1.1.3.1
RNNs
For decades, it has been generally acknowledged that RNNs possess many remarkable capabilities (e.g. fault tolerance, self-organization, nonlinear function approximation and self-learning). Consequently, RNNs have attracted more and more research attention and found wide applications in many branches of systems science and signal processing, such as pattern recognition, image processing and combinatorial optimization [79, 109, 161, 205], to name but a few. It is noteworthy that the aforementioned applications depend largely on the dynamic behavior of RNNs, which has subsequently stimulated extensive research on dynamics analysis including, but not limited to, stability, state estimation and synchronization. For some representative work, we refer to [61, 82, 84, 127, 140, 168, 201, 208] and the references therein. The stability of RNNs is one of the most studied dynamic behaviors, and many analysis paradigms have been proposed, such as the inequality analysis approach, the Lyapunov functional scheme, the M-matrix technique, the topological degree strategy and the linear matrix inequality method [43, 47, 137, 167, 176, 212]. To be specific, by resorting to the LMI approach, the asymptotic stability of a class of delayed neural networks has been analyzed in [47, 137]. In particular, global stability is undoubtedly the most desirable dynamical property for RNNs, playing a pivotal role in practical engineering, and has thus gained tremendous research interest [204]. Moreover, note that in an actual neural network, owing to certain probabilistic causes such as random fluctuations in the release of neurotransmitters, synaptic transmission may be confronted with stochastic noise [44]. If not appropriately dealt with, such random noise can be one of the main causes of performance deterioration in real-world practice.
Therefore, stochastic RNNs have started to attract researchers' attention, leading to many results in the literature, see [75] and the references therein. In practical engineering, an RNN can be implemented by a circuit. As an essential element for computation and information storage, the neuron's synapse is often realized by a resistor which, however, has certain limitations. On the one hand, the resistor is a volatile device, meaning that the state information of the neural circuit disappears once the voltage is removed. On the other hand, a neuron often corresponds to multiple synapses, and the large size of the resistors inevitably leads to a significant reduction in the integration density of RNNs, which may exceed the acceptable range in some cases. In view of the above-mentioned problems, it is crucial to develop a new kind of efficient component that reduces the size of circuits while implementing the functions of both processing and information storage, which gives rise to the so-called memristors and memristive neural networks.
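As a concrete flavor of the stability conditions mentioned above, the following well-known contraction-type check (a textbook-style sufficient condition in the infinity norm, not a result from this book; the matrices are illustrative) certifies global exponential stability of a discrete RNN x(k+1) = A x(k) + B f(x(k)) when f(0) = 0 and f is componentwise Lipschitz with constant L:

```python
# Sufficient condition: ||A||_inf + L * ||B||_inf < 1 makes the update map
# a contraction in the infinity norm, so the origin is globally
# exponentially stable (assuming f(0) = 0, e.g. f = tanh with L = 1).

def inf_norm(M):
    """Induced infinity norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in M)

def is_contraction(A, B, L=1.0):
    return inf_norm(A) + L * inf_norm(B) < 1.0

A = [[0.2, 0.1], [0.0, 0.3]]     # illustrative weight matrices
B = [[0.1, 0.1], [0.2, 0.1]]
# inf_norm(A) = 0.3, inf_norm(B) = 0.3, so the test passes with L = 1.
```

Conditions of this kind are conservative compared with LMI-based criteria, but they illustrate the norm-bound reasoning underlying many of the cited stability results.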
1.1.3.2
BAMNNs
In the late 1980s, Kosko first proposed the concept of the bidirectional associative memory neural network, which comprises fully interconnected neurons on two layers [59, 60]. During the past 20 years, researchers have devoted special efforts to the theoretical investigation of BAMNNs with an emphasis on dynamics analysis. Moreover, BAMNNs have so far found wide and successful applications in various industrial areas such as combinatorial optimization, associative memory, image processing and pattern recognition. It should be pointed out that dynamic behaviors (e.g. stability, chaos, synchronization) play a paramount role in the practical applications of BAMNNs. As a consequence, several techniques have been developed for the dynamics analysis of BAMNNs [85, 112, 117, 143], with special attention paid to stability. In practical engineering, exponential stability is always among the most desirable properties of BAMNNs and has thus captured special interest over the last decades. Specifically, the exponential stability of neural networks has been revealed to be an indication of the ability to retrieve stored patterns in many applications such as associative memory [200]. Some of the latest progress regarding the exponential stability of BAMNNs can be found in [66, 186].
1.1.3.3
CMNNs
It is now well known that in neural networks, synapses are the most essential units for computing and storing information. They possess the capability to record the dynamic history and to store a series of continuous states, and most importantly, they should be "plastic" according to the activity of synaptic neurons. Unfortunately, these functions cannot be realized by a resistor in traditional recurrent and other sorts of neural networks, which gives rise to the so-called MNNs. MNNs are developed by replacing resistors with memristors and are able to overcome the aforementioned difficulties. Furthermore, as shown in many applications [111], MNNs exhibit other advantages over traditional neural networks such as higher efficiency. Nevertheless, in spite of these merits, memristors and the resulting MNNs did not receive enough attention for nearly four decades after the concept was proposed. The key feature of MNNs is that the connection weights coupling the neurons depend heavily on the neuron states. As a consequence, MNNs show much richer dynamic behaviors than conventional RNNs. In recent years, the dynamics analysis issues for MNNs (such as stability and synchronization), especially in the continuous case, have received considerable attention, and plenty of research results have been published on various sorts of MNNs such as memristive recurrent neural networks [164], fractional-order MNNs [14], cellular MNNs [33], memristive Hopfield networks [182], chaotic MNNs [202] and memristive complex-valued neural networks [157], to name but a few.
Note that MNNs can be mathematically modeled by a series of coupled differential/difference equations and characterized as state-dependent nonlinear systems [176] whose dynamic behaviors are rather rich and complicated. As such, in parallel with the development and deployment of MNNs, much effort has been devoted to investigating the stability of MNNs by borrowing ideas from systems and control theory, and quite a few elegant results have been obtained from different facets, see [108, 156, 164, 179, 181]. For example, in [156], the authors have obtained a set of sufficient conditions ensuring the exponential stability of MNNs subject to impulse effects. The Lagrange stability has been discussed in [179] for MNNs whose dynamics are affected by mixed time-delays, and the addressed problem has been solved by virtue of a non-smooth analysis approach. It should be mentioned that most of the obtained results, including those mentioned above, have mainly focused on CMNNs. However, in today's digital world, most information sequences are inherently discrete or should be discretized before implementation in engineering applications. This motivates the initial investigations on discrete-time neural networks because of their powerful capability in handling sequence-based issues [43]. Unfortunately, despite their remarkable advantages, DMNNs have not been adequately investigated, due mainly to the lack of suitable paradigms in a discrete-time setting to depict and deal with the state-dependent behaviors.
1.2
MNNs subject to Engineering-Oriented Complexities
In real-world applications, especially in networked situations, certain frequently occurring engineering-related issues, such as time-delays, parameter uncertainties, random disturbances and incomplete information, have proved to be the main sources of system instability as well as performance deterioration, and have further imposed fundamentally new challenges on the study of various types of neural networks. When discussing the stability analysis and estimator design problems, these engineering-oriented phenomena cannot be neglected. Instead, they must be taken into simultaneous consideration with the neural network dynamics under a unified framework so as to achieve a satisfactory level of performance. In this section, several extensively studied engineering-oriented phenomena that make the dynamics of neural networks more complex will be introduced, together with certain recently developed techniques for analysis and synthesis.
1.2.1
Stochasticity
It is well known that during the last decades, stochastic systems have exerted a tremendous fascination on many researchers from both academia and industry because of the unavoidable random phenomena frequently seen in practical engineering. Apparently, in the modeling of this kind of system, the routine way of neglecting the uncertain stochastic effects to obtain a conventional deterministic model for analysis and synthesis is no longer appropriate. It is now widely acknowledged that a host of complex practical systems, such as engineering systems, social systems, as well as many different types of neural-network-based systems, are more suitably described by stochastic differential/difference equations [87,95,97]. Consequently, the stability analysis and state estimation issues for dynamical systems subject to random fluctuations or uncertain disturbances have garnered extensive research interest, see [4] for reference. As for the application of a neural network, the signal transmission among the neurons of an actual MNN is always confronted with random interference or noisy fluctuations stemming from the release of neurotransmitters or other probabilistic perturbations. These kinds of probabilistic disturbances make the dynamics of the resulting MNNs, usually referred to as stochastic MNNs, uncertain and random. As mentioned previously, the framework established on inclusion theory and non-smooth analysis for CMNNs cannot be directly utilized to deal with their discrete counterparts. If not taken into account, these random effects would become a main source of degradation of system performance. As such, much effort has been dedicated to exploring stochastic MNNs and, to date, the stability analysis has aroused some preliminary research attention, with a few scattered pioneering results, see, e.g. [72, 76, 123] and the references therein.
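The notion of mean-square stability invoked in this context can be illustrated on a toy scalar recursion with multiplicative noise (our own construction, not a model from the cited works; all parameter names are assumptions):

```python
import random

# Toy stochastic recursion: x(k+1) = a*x(k) + sigma*x(k)*w(k), w ~ N(0,1),
# modeling multiplicative synaptic noise. For this scalar case, the exact
# second moment satisfies E[x(k)^2] = (a^2 + sigma^2)^k * x0^2, so
# mean-square stability holds iff a^2 + sigma^2 < 1. We estimate
# E[x(K)^2] empirically by Monte Carlo.

def ms_energy(a, sigma, x0=1.0, K=40, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = x0
        for _ in range(K):
            x = a * x + sigma * x * rng.gauss(0.0, 1.0)
        total += x * x
    return total / trials
```

With `a = sigma = 0.5` we have `a**2 + sigma**2 = 0.5 < 1`, so the empirical second moment after 40 steps is vanishingly small; with `sigma = 0.0` the recursion is deterministic and the estimator returns the exact value.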
1.2.2
Time-Delays
Time-delay is now widely recognized as an inherent characteristic of neural networks due to physical hardware limitations, which might bring about certain complicated dynamic behaviors (e.g. oscillation, divergence) and sometimes even lead to instability [113,115,129,133]. Based on the occurrence mode, time-delays can generally be classified into two types, namely, discrete time-delays and distributed time-delays [15, 17, 86, 100]. Note that real-world engineering systems are often confronted with the simultaneous presence of different types of time-delays [53, 82, 96, 99]. Therefore, ever since their introduction in [103], researchers and engineers have devoted much attention to delayed neural networks and developed quite a few schemes for stability analysis, such as the integral inequality approach [61], the matrix inequality method [142, 211] and the descriptor model transformation technique [36]. Along with the research on stability, state estimation has gradually become a hotspot in both theoretical research and practical engineering for
delayed neural networks, and some work has been done for traditional RNNs [73, 165] and MNNs [75]. For CMNNs, time-delays such as constant delays, time-varying delays and distributed delays have been adopted to characterize the network latency during data transmission under certain physical constraints. The impacts imposed on the dynamical behaviors by time-delays have been extensively studied [42,179]; for instance, the global exponential synchronization of a class of MNNs subject to multiple time-delays has been examined in [42] by means of the LMI approach. Recently, various kinds of time-delays have been taken into account when dealing with estimation issues for different neural networks. The time-varying delay has been considered in [55] for static neural networks, while in [57], the addressed discrete neural networks have been assumed to be affected by mixed time-delays. In [5], random delays have been examined for discrete stochastic neural networks, where the problem is solved by a delay-distribution-dependent method. Employing a Lyapunov functional in combination with the LMI approach, time-varying delays have been discussed in [165] for estimator design. It should be emphasized that two time-delays usually occurring in neural networks (namely, leakage and probabilistic delays) have started to stir particular interest yet still remain underexplored, see [16,198] and the references therein. Moreover, a wealth of work has been done on RNNs with probabilistic time-varying delays, see, for instance, [5, 137, 144, 198]. Nevertheless, the corresponding study of DMNNs has been relatively scattered despite certain limited results such as [75], and there still exists a huge gap in the investigation of estimator design for different classes of DMNNs, let alone the case where various engineering-oriented complexities like multiple time-delays are also involved.
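A discrete time-delay can be sketched concretely on a toy scalar network (our own illustration; parameters `a`, `b`, `tau` are assumptions) whose update depends on a delayed state held in a buffer:

```python
import math
from collections import deque

# Toy delayed network: x(k+1) = a*x(k) + b*tanh(x(k - tau)).
# A standard delay-independent sufficient condition |a| + |b| < 1 gives
# stability for any fixed delay; the deque stores x(k-tau), ..., x(k).

def simulate_delayed(a, b, tau, x0, k=200):
    hist = deque([x0] * (tau + 1), maxlen=tau + 1)  # constant initial history
    for _ in range(k):
        x_new = a * hist[-1] + b * math.tanh(hist[0])  # hist[0] = x(k - tau)
        hist.append(x_new)                              # maxlen drops the oldest
    return hist[-1]
```

With `a = 0.5`, `b = 0.4`, the bound `|a| + |b| = 0.9 < 1` forces the maximum over each delay window to contract, so trajectories decay regardless of `tau` — the delay-independent flavor of stability; LMI-based criteria of the kind cited above are typically delay-dependent and less conservative.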
1.2.3
Network-Induced Incomplete Information
In the network environment, stability analysis and state estimation often suffer from the phenomenon of network-induced incomplete information, which leads to performance deterioration. In this subsection, we shall discuss several types of network-induced incomplete information that are extensively encountered in engineering practice, namely missing measurements, channel fading and signal quantization.
1.2.3.1
Missing Measurements
Missing measurements may be caused by many factors in real-world engineering practice, such as network congestion, instantaneous sensor failures and limited channel bandwidth. In this case, the measurement data of the system will be totally or partially lost. In recent years, researchers have been working on state estimation problems subject to missing measurements, see, e.g. [30, 48, 52, 106, 120, 124, 141, 160, 170, 173], among which [141] has introduced a stochastic variable obeying a Bernoulli distribution to describe the missing measurements. In [139], such a phenomenon has been expressed by a Markovian jump process. A more general approach capable of characterizing both totally and partially missing measurements has been developed in [170], where a random variable obeying an arbitrary distribution with known probability is utilized. This approach can cope with multiple-sensor situations where the missing probabilities of different sensors differ, in which case a diagonal matrix can be constructed whose diagonal entries indicate the missing probabilities of the corresponding sensors [170]. The consecutive data packet dropout case has been discussed in [120] by resorting to Bernoulli-distributed random parameters. Based on these established models, much work has then been devoted to the state estimation problem subject to different performance specifications. For instance, in [146], a class of optimal filters has been designed by means of the Riccati equation approach when multiple consecutive measurements are missing. With the help of the linear matrix inequality technique, the problem has been solved in [121] for systems with multiple consecutive packet losses, whereas the corresponding problem has been examined in [122].
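The sensor-wise Bernoulli model can be sketched as follows (a hedged illustration of the modeling idea, not the formulation of [170]; names and probabilities are assumptions):

```python
import random

# Each sensor i delivers y_i(k) = gamma_i(k) * (C x)_i + v_i(k), where
# gamma_i(k) is Bernoulli with a sensor-specific arrival probability --
# the "diagonal matrix of missing probabilities" idea. Here noise is set
# to 0 so an exact 0.0 reading marks a dropped measurement.

def measure(Cx, arrival_probs, rng, noise=0.0):
    """One measurement vector with sensor-wise Bernoulli missing data."""
    return [
        (Cx[i] if rng.random() < arrival_probs[i] else 0.0)
        + rng.gauss(0.0, noise)
        for i in range(len(Cx))
    ]

rng = random.Random(0)
ys = [measure([1.0, 1.0], [0.9, 0.3], rng) for _ in range(5000)]
arrival_rate = [sum(1 for y in ys if y[i] != 0.0) / len(ys) for i in range(2)]
# the empirical arrival rates approximate the assumed (0.9, 0.3)
```

An estimator designed under this model weighs each channel by its arrival probability, which is exactly what the diagonal-matrix construction encodes.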
1.2.3.2
Channel Fading
In the context of digital data transmission over networks, the phenomenon of channel fading inevitably occurs, mainly because the communication channel has certain physical constraints such as limited bandwidth. Moreover, such fading measurements always happen in a probabilistic way due to random variations in the network environment [49]. For recurrent neural networks, it is often the case that the measurement output suffers signal fading when being transmitted via unreliable communication channels for further processing [24, 25, 29]. Although there exist some initial publications concerning the state estimation of RNNs using fading measurements (e.g. [47, 137]), the relevant results on DMNNs have been much fewer, due probably to the level of mathematical difficulty involved.
1.2.3.3
Signal Quantization
Due to the limited capacity of transmission channels, systems communicating via networks are always confronted with signal quantization of the transmitted data [51, 94, 130, 206, 213]. It should be emphasized that in real-world applications, especially in the context of networked systems, signal quantization has proved to be one of the main sources of system instability as well as performance deterioration, and has further imposed fundamentally new challenges on the study of networked systems. When discussing the analysis, design and application problems for neural networks, such a network-induced quantization effect cannot be neglected. As a result, the relevant research has recently been garnering interest, see, e.g. [119, 135] for some recent results, among which two main techniques have been utilized to deal with the quantization effects. One is to convert the quantization effect into parameter uncertainties, whereas
the other is to transform the quantization function into nonlinearities constrained within a known sector bound.
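The sector-bound technique can be made concrete with a standard logarithmic quantizer (a generic textbook construction, not one taken from the cited works; the density value is an illustrative assumption):

```python
import math

# Logarithmic quantizer with density RHO: levels are RHO**i, and the
# relative error obeys the sector bound |q(v) - v| <= DELTA * |v| with
# DELTA = (1 - RHO)/(1 + RHO), i.e. q(v) = (1 + Delta(v)) * v,
# |Delta(v)| <= DELTA -- the "nonlinearity in a known sector" form.

RHO = 0.5
DELTA = (1 - RHO) / (1 + RHO)            # sector bound (= 1/3 here)

def log_quantize(v):
    """Quantize v onto the logarithmic grid {±RHO**i} ∪ {0}."""
    if v == 0.0:
        return 0.0
    s = 1.0 if v > 0 else -1.0
    # level RHO**i covers [RHO**i*(1+RHO)/2, RHO**(i-1)*(1+RHO)/2)
    i = math.ceil(math.log(2 * abs(v) / (1 + RHO)) / math.log(RHO))
    return s * RHO ** i
```

Writing the quantizer output as `(1 + Delta) * v` with `|Delta| <= DELTA` is precisely what lets the quantization effect be absorbed into a sector-bounded nonlinearity during analysis.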
1.3
Design Techniques
The complexity of neural network dynamics makes it extremely challenging to establish a systematic framework for estimator design with desired performances and required specifications. Apparently, it is impractical, if not impossible, to develop one particular procedure capable of tackling all kinds of neural networks; therefore, a host of tools and mechanisms have been proposed for stability analysis and estimator design for specific types of neural networks. In the sequel, we will introduce several techniques that have been widely applied in the estimator design of neural networks to meet different design demands.
1.3.1
Event-Triggering Mechanisms
In most of the existing literature, it is always assumed that the state estimator is time-triggered rather than event-triggered. However, in the network environment, it is more attractive to design the state estimator with an event-triggering strategy so as to save network resources and thereby reduce the heavy communication burden [104]. In addition, the event-triggering scheme can largely reduce the activation frequency of electronic components in the digital control circuit and help extend the working life of the circuit. It has been shown that the event-triggered scheduling scheme is well suited to the state estimation of MNNs in the network environment. In view of this, particular interest has been devoted to state estimation with event-triggering mechanisms, and quite a few results have been published, see, e.g. [35, 63, 64, 69, 81, 125, 138, 148, 162, 195, 214]. For example, the event-triggering approach has been used in [162] for the estimator design of delayed RNNs. It should be pointed out that the most significant feature of the event-triggering mechanism is that the input of the desired estimator (i.e. the measurement output) is not updated until a specific event occurs, and the transmitted signal remains unchanged between consecutive updating instants. The desired update instant is unknown a priori and should be determined according to a pre-specified triggering condition that usually depends on time, the system state and the sampled data [35]. The design of the triggering condition plays a pivotal role in the estimator synthesis and the subsequent analysis of the resulting impacts on stability and other performances. It is noteworthy that most of the event-triggering mechanisms employed in the existing literature are static, that is, the threshold parameters in the
triggering condition are fixed [76, 150]. Recently, in order to further conserve energy, dynamic event-triggering mechanisms have been applied to many systems [39, 70] by adjusting the threshold parameters dynamically at each checking instant. Intuitively, using a dynamic event-triggering mechanism decreases the total number of released events compared with a static one. From this point of view, we can conclude that employing static event-triggering mechanisms possibly results in unnecessary data transmissions and yields a conservative solution. It is worth mentioning that even the static type of event-triggering mechanism has not been applied extensively in state estimation for MNNs, let alone dynamic triggering schemes.
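The static and dynamic rules can be contrasted in a minimal scalar replay (our own simplified notation; `theta`, `mu`, `lam` and the test signal are assumptions, not parameters from the cited works):

```python
import math

# static rule:  transmit when (y - y_last)**2 > theta * y**2;
# dynamic rule: an auxiliary variable eta(k) >= 0 adds slack eta/lam to
#               the threshold and evolves as eta <- mu*eta + theta*y**2 - e**2,
# so each individual dynamic test is harder to pass than the static one.

def run_trigger(ys, theta=0.2, dynamic=False, mu=0.8, lam=5.0, eta0=1.0):
    sent, y_last, eta = 0, None, eta0
    for y in ys:
        e2 = (y - y_last) ** 2 if y_last is not None else float("inf")
        slack = eta / lam if dynamic else 0.0
        if e2 > theta * y * y + slack:
            y_last, sent, e2 = y, sent + 1, 0.0   # transmit and reset error
        if dynamic:
            eta = max(0.0, mu * eta + theta * y * y - e2)
    return sent

ys = [math.cos(0.05 * k) for k in range(100)]   # slowly varying test signal
```

With `theta = 0` every change of the signal triggers a transmission; a positive threshold suppresses transmissions while the signal drifts only slightly, and the dynamic slack typically suppresses more.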
1.3.2
Network Communication Protocols
In most existing results relevant to state estimation in the network environment, an assumption has usually been made that all the network nodes are simultaneously allowed to transmit signals via the communication channel. This is, however, basically impractical for network-based systems, since real-world networks inevitably face limited bandwidth, which would probably lead to network congestion as well as data collisions in situations of simultaneous multiple access. Accordingly, communication protocols are adopted to regulate the data transmission sequence of different nodes [7, 188, 210]. Some of the extensively used communication protocols in industrial applications include the Round-Robin (RR) protocol [37, 151], the Weighted Try-Once-Discard (WTOD) protocol [32, 152] and the stochastic communication (SC) protocol [136, 147], to name just a few. To date, although some pioneering results have been reported regarding the state estimation of MNNs, see e.g. [75, 76], little effort has been devoted to the constrained communication between the estimator and the network output. In fact, due to the large size of MNNs and the complexity of the tasks to be accomplished, the amount of measurement data from the network output tends to be huge, which has placed a great burden on transmission networks of limited capacity [93]. Due primarily to the limited bandwidth of the network, data collisions may well happen. For the purpose of avoiding this phenomenon and coordinating the network traffic, an effective approach widely employed in practical engineering is to adopt communication protocols like the aforementioned ones, whose mechanisms are introduced as follows.
1.3.2.1
RR Protocol
As a periodic scheduling scheme, the so-called RR protocol has been widely used in communication, load balancing and server resource allocation [93]. The RR protocol is also known as the time-division multiple access (TDMA) protocol or the Token Ring protocol. Under the regulation of the RR protocol,
the allowable transmission steps of all the nodes are determined by a fixed circular order. Up to now, many researchers have introduced the RR protocol to govern the data transmission when dealing with network-based communication problems and designing remote estimators with an equal resource allocation scheme, thereby avoiding data congestion and saving communication energy [67, 132, 155, 187, 216]. Recently, the RR protocol has been extensively utilized in estimation problems for neural networks, such as the ANNs in [21, 93], the Markovian jump neural networks in [67] and the MNN in [193].
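The fixed circular order can be sketched in a few lines (our own illustrative construction, with a zero-order-hold receiver as an assumed convention for untransmitted nodes):

```python
# With n nodes, only node (k mod n) transmits at step k; the receiving
# side holds the last received value of every other node (zero-order hold).

def rr_transmit(outputs_over_time, n_nodes):
    held = [0.0] * n_nodes                 # zero-order-hold estimator input
    received = []
    for k, outputs in enumerate(outputs_over_time):
        j = k % n_nodes                    # fixed circular (Round-Robin) order
        held[j] = outputs[j]               # only the scheduled node transmits
        received.append(list(held))
    return received

seq = [[1.0, 10.0, 100.0], [2.0, 20.0, 200.0], [3.0, 30.0, 300.0]]
out = rr_transmit(seq, 3)
# step 0 updates node 0, step 1 node 1, step 2 node 2:
# out == [[1.0, 0.0, 0.0], [1.0, 20.0, 0.0], [1.0, 20.0, 300.0]]
```

Every node is guaranteed the channel once per period, which is why the RR protocol is regarded as an equal (static) allocation scheme.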
1.3.2.2
WTOD Protocol
The so-called Weighted Try-Once-Discard protocol is categorized as a quadratic protocol. Under the WTOD protocol, the allowable signal transmission steps of the nodes are assigned by some predetermined quadratic selection principle, which differs from the "periodic assignment" behavior of the RR protocol. The WTOD protocol has been widely applied to govern the data propagation over the channels between neural networks and their remote state estimators. Note that the WTOD protocol is a classical dynamic regulating scheme associated with the importance of different missions and is obviously more suitable for tasks such as resource allocation than static scheduling schemes, see e.g. [32, 151, 217]. As such, it is necessary to study how the WTOD protocol can be employed to schedule the possibly huge data communication between an MNN and its state estimator, which still remains open and challenging.
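The quadratic selection principle can be sketched as follows (our own illustration; the weights `Q` and the data are assumptions):

```python
# WTOD-style selection: at each step the channel is granted to the node
# whose weighted output deviation from its last transmitted value is
# largest, i.e. argmax_i Q[i] * (y[i] - y_held[i])**2.

def wtod_select(y, y_held, Q):
    scores = [Q[i] * (y[i] - y_held[i]) ** 2 for i in range(len(y))]
    return max(range(len(y)), key=lambda i: scores[i])

def wtod_run(ys, Q):
    held = [0.0] * len(Q)                  # zero-order hold at the receiver
    order = []
    for y in ys:
        j = wtod_select(y, held, Q)
        held[j] = y[j]                     # only the winning node transmits
        order.append(j)
    return order, held
```

For instance, with constant outputs `[1.0, 2.0]` and unit weights, the larger deviation wins the first slot and the remaining node wins the second — the dynamic, deviation-driven behavior that distinguishes WTOD from the periodic RR assignment.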
1.3.2.3
SC Protocol
The so-called SC protocol was first investigated in [147] for continuous-time systems and in [31] for discrete-time ones. As a matter of fact, the SC protocol belongs to the class of CSMA/CA protocols, which are implemented according to the rule of "listen before talk" or "sense before transmit". It is worth noting that the CSMA/CA protocol may be considered a modified version of the CSMA protocol. In practical applications, such a protocol is able to prevent network congestion and data collisions by employing acknowledgment signals from the communication network to mark each transmission. Up to now, the SC protocol has started to attract interest in both theoretical research and engineering applications, see, e.g. [2,153,215] for some of the latest publications.
1.3.3
Set-Membership Technique
In reality, the MNN can be realized by very large scale integration (VLSI) circuits, and the connection weights are implemented by the memristors [176]. Unfortunately, it is often the case that the VLSI circuits are subject to both the device noises (e.g. thermal noise, shot noise, flicker noise) and the external
noises (e.g. substrate noise, power/ground bounce, crosstalk). In particular, these external noises, if not appropriately handled, might largely deteriorate the performance and reliability of integrated circuits [46]. From the practical point of view, in many cases these external noises are not stochastic but deterministic, unknown yet bounded within certain sets. As such, the set-membership state estimation algorithm is introduced to confine the state estimate to certain desired regions centered at the true state of the targeted neuron. As one of the most popular state estimation approaches, set-membership state estimation has been well investigated, and a rich body of results is available in the literature (see e.g. [8, 68, 98, 217]). Nevertheless, the corresponding results for neural networks have been much more limited, see e.g. [193], and the study of set-membership state estimation for MNNs has been far from adequate at the moment.
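The unknown-but-bounded reasoning behind set-membership estimation can be sketched with intervals instead of the ellipsoids common in the literature (a deliberately simple scalar illustration of ours; `a`, `wbar`, `vbar` are assumed bounds):

```python
# For x(k+1) = a*x(k) + w(k), y(k) = x(k) + v(k), with |w| <= wbar and
# |v| <= vbar, propagate an interval guaranteed to contain the true state
# and intersect it with the measurement-consistent set [y - vbar, y + vbar].

def sm_update(lo, hi, y, a, wbar, vbar):
    # predict: image of [lo, hi] under x -> a*x, inflated by the noise bound
    plo = min(a * lo, a * hi) - wbar
    phi = max(a * lo, a * hi) + wbar
    # correct: intersect with what the bounded-noise measurement allows
    return max(plo, y - vbar), min(phi, y + vbar)
```

As long as the noise bounds truly hold, the interval never loses the true state, and after each correction its width is at most `2 * vbar` — the guaranteed-containment property that distinguishes set-membership estimation from stochastic filtering.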
1.3.4
Non-Fragile Algorithm
In the course of estimator execution, due to various reasons (e.g. digital rounding errors, finite computer word length, and inaccuracy of analog-to-digital conversion), the actual execution parameters of the estimator often differ slightly from the expected ones. It is generally acknowledged that even tiny drifts/variations of the estimator parameters could result in great damage to the overall system dynamics [65, 154, 196]. Accordingly, over the past several years, the so-called non-fragile estimation problem, which aims to construct an effective estimator that is insensitive to the admissible estimator execution errors, has become a research focus. For example, in [65], a non-fragile finite-time l2−l∞ state estimator has been designed for a type of Markov jump neural networks with unreliable communication links. Nevertheless, in contrast to the rich literature regarding traditional RNNs, the available results on the non-fragile state estimation problem for MNNs have been very few, not to mention MNNs with various engineering-oriented complexities.
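A tiny numerical experiment illustrates the fragility issue. For hypothetical error dynamics e(k+1) = (D − KC)e(k) (all matrices below are made-up 2×2 examples, not from any later chapter), a small drift in one entry of the gain K visibly degrades the decay rate, i.e. the spectral radius of the closed-loop matrix:

```python
import math

def spectral_radius_2x2(m):
    """Spectral radius of a real 2x2 matrix via the quadratic formula."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex-conjugate pair: |lambda| = sqrt(det)

# hypothetical error dynamics e(k+1) = (D - K*C) e(k)
D = [[0.9, 0.0], [0.0, 0.9]]
C = [[1.0, 0.0], [0.0, 1.0]]

def closed_loop(K):
    return [[D[i][j] - sum(K[i][q] * C[q][j] for q in range(2))
             for j in range(2)] for i in range(2)]

rho_nom = spectral_radius_2x2(closed_loop([[0.85, 0.0], [0.0, 0.85]]))
rho_pert = spectral_radius_2x2(closed_loop([[0.75, 0.0], [0.0, 0.85]]))  # drifted gain
```

Here a 0.1 drift in one gain entry triples the spectral radius (0.05 to 0.15), slowing the error decay considerably; a non-fragile design would certify performance over the whole admissible drift set rather than only at the nominal gain.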
1.4
Outline
The framework of this book is as follows:
• In Chapter 1, the research background is first introduced, where the stability analysis and state estimation problems for MNNs are comprehensively reviewed. It is pointed out that most of the existing literature regarding the stability analysis problem for MNNs has focused on CMNNs. The corresponding results for the discrete-time case have been much fewer, especially in the context of networked systems where certain engineering-oriented complexities such as network-induced phenomena should be taken into consideration. After presenting the motivation, the outline of this book is then listed.
• In Chapter 2, the robust H∞ state estimation problem is studied for a class of discrete-time memristive recurrent neural networks with stochastic time delays. The stochastic time delays under consideration are assumed to switch randomly between two values according to the Bernoulli distribution. By employing the difference inclusion theory and the stochastic analysis technique, a sufficient condition is obtained to ensure both the exponential mean-square stability of the output estimation error dynamics and the prescribed H∞ performance requirement. Based on the derived sufficient condition, the explicit expression of the desired estimator gain is given.
• Chapter 3 investigates the event-triggered state estimation problem for a class of discrete-time stochastic memristive neural networks with time-varying delays and randomly occurring missing measurements. Based on the state-dependent feature of memristive neural networks, by utilizing a Lyapunov-Krasovskii functional and stochastic analysis techniques, an event-triggered state estimator is designed and sufficient conditions are given to ensure both the exponential mean-square stability of the output estimation error dynamics and the prescribed H∞ performance requirement. Based on the derived sufficient conditions, the explicit expression of the desired estimator gain is given.
• Chapter 4 studies the H∞ state estimation problem for discrete-time stochastic memristive bidirectional associative memory neural networks with mixed time-delays. Different from the description model used in the continuous-time setting, a novel yet succinct switching model is employed to quantify and tackle the state-dependent connection weights of such networks in the discrete-time case. Through available output measurements, we design a state estimator for estimating the neuron states, and propose some sufficient conditions in terms of LMIs such that the estimation error system is exponentially stable in the mean square and the prescribed H∞ performance constraint is satisfied.
• In Chapter 5, we deal with the exponential stability analysis problem for a class of discrete-time stochastic memristive neural networks with both leakage and probabilistic delays. By utilizing a Bernoulli distributed random sequence, the probabilistic time-varying delays are characterized according to certain probability distributions. By choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are derived under which the addressed discrete-time stochastic memristive neural network is globally exponentially stable.
• In Chapter 6, the H∞ state estimation problem is investigated for discrete-time memristive neural networks subject to both randomly occurring mixed time-delays and degraded measurements. By introducing a set of new switching functions, the addressed MNN is converted into a system
involving uncertain interval parameters. Based on this uncertain system, the robust analysis theory and Lyapunov stability theory are utilized, and some conditions are established under which the estimation error system is stochastically stable and the prescribed H∞ performance is satisfied. Subsequently, the desired estimator gain matrices are obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages.
• Chapter 7 solves the set-membership state estimation problem for DMNNs with hybrid time-delays under the WTOD protocol. The WTOD protocol is utilized to mitigate the unnecessary network congestion occurring in the channel between the DMNNs and the estimator. A resilient set-membership estimator is devised to achieve the estimate of the DMNN subject to the unknown-but-bounded noises and the WTOD protocol. By means of recursive matrix inequalities, sufficient conditions are pinned down to ensure the existence of the desired resilient set-membership estimator. Then, an optimization problem is formulated by minimizing a certain ellipsoidal region under the WTOD protocol.
• In Chapter 8, the finite-time H∞ state estimation problem is investigated for delayed MNNs under the stochastic communication protocol. First, a theoretical framework is established for the addressed MNNs to analyze the finite-time H∞ performance. Within such a framework, sufficient conditions are obtained for the existence of the desired remote estimator. Subsequently, the required estimator gains are obtained by resorting to solving certain recursive linear matrix inequalities.
• In Chapter 9, we cope with the dynamic event-triggered resilient state estimation problem for MNNs with time-delays and stochastic disturbances. Combining a Lyapunov-Krasovskii functional and the stochastic analysis technique, a dynamic event-triggered criterion is presented to guarantee both the exponential mean-square stability of the error dynamics and the prescribed H∞ constraint. Furthermore, a desired estimator insensitive to parameter variations/fluctuations is devised via solving an optimization problem.
• Chapter 10 is concerned with the finite-horizon H∞ and l2−l∞ state estimation problem for MNNs with time-delays under the Round-Robin protocol. The Round-Robin protocol is utilized to mitigate the unnecessary network congestion occurring in the channel between the MNNs and the remote estimator. By using recursive linear matrix inequalities and Lyapunov-Krasovskii functional methods, a protocol-based criterion is presented to guarantee not only the H∞ and l2−l∞ constraints, but also the existence of the desired estimator.
• In Chapter 11, the conclusions and some potential topics for future work are given.
2 H∞ State Estimation for Discrete-Time Memristive Recurrent Neural Networks with Stochastic Time-Delays
The past few decades have witnessed constant research interest in various aspects of recurrent neural networks (RNNs), due primarily to their wide applications in many fields such as image processing, pattern recognition and dynamic optimization. In neural networks, owing to the limited communication capacity, time delays often occur in the signal transmission among neurons. It is well known that time delays are a main source of system oscillation and instability. Therefore, time-delayed neural networks have received increasing research attention, and a great number of results are available in the literature. The time-delays under consideration include constant/time-varying delays, distributed delays and mixed delays. For RNNs with such types of delays, many results have been obtained on the dynamical behavior analysis of the solution, such as global asymptotic stability, global exponential stability, robust stability, and delay-dependent multistability. Recently, a new kind of time-delay, called stochastic time-delays, has received increasing research attention. As is well known, in the theoretical modeling of traditional neural circuits, the system parameters are determined by electric components such as capacitance and resistance. Recently, the memristor has received increasing research attention due to its advantages over the resistor, such as small size, low energy consumption and storage capability. With the rapid development of the memristor, memristive recurrent neural networks have stirred a great deal of research interest, and considerable research effort has been devoted to the dynamical behavior analysis of memristive recurrent neural networks, including stability and synchronization. It should be pointed out that, in the existing literature, almost all the memristive recurrent neural networks concerned are continuous-time ones. Actually, discrete-time neural networks can be more suitable for modeling digitally transmitted signals in a dynamical way. Besides, the H∞ performance index of a state estimator can be used to ensure that the energy gain from the noise inputs to the estimation error is less than a certain level. However, to the best of the authors' knowledge, the state estimation problem for discrete-time memristive recurrent neural networks (DMRNNs) with stochastic time-delays has not been adequately addressed in the literature yet, not to mention the case where the H∞ performance index is imposed
simultaneously. It is, therefore, the purpose of this chapter to shorten such a gap. This chapter deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to an LMI. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
2.1
Problem Formulation
Consider the following class of DMRNNs consisting of n neurons:
\[
\begin{cases}
x(k+1) = \breve{D}(x(k))x(k) + \breve{A}(x(k))f(x(k)) + \delta(k)\breve{B}(x(k))g(x(k-\tau_1)) \\
\qquad\qquad\ \ + (1-\delta(k))\breve{B}(x(k))g(x(k-\tau_2)) + L\varsigma(k), \\
x(k) = \Psi(k), \quad k \in \Gamma
\end{cases} \tag{2.1}
\]
where x(k) = [x₁(k) x₂(k) ··· xₙ(k)]ᵀ is the neural state vector; D̆(x(k)) = diag{d₁(x₁(k)), d₂(x₂(k)), ··· , dₙ(xₙ(k))} is the self-feedback matrix; Ă(x(k)) = [a_{ij}(x_i(k))]_{n×n} and B̆(x(k)) = [b_{ij}(x_i(k))]_{n×n} are the connection and the delayed connection weight matrices, respectively; ς(k) = [ς₁(k) ς₂(k) ··· ςₙ(k)]ᵀ is the external disturbance input vector belonging to l₂([0, ∞); Rⁿ); L = diag{l₁, l₂, ··· , lₙ} is the disturbance intensity; f(x(k)) = [f₁(x₁(k)) f₂(x₂(k)) ··· fₙ(xₙ(k))]ᵀ and g(x(k)) = [g₁(x₁(k)) g₂(x₂(k)) ··· gₙ(xₙ(k))]ᵀ are the nonlinear functions standing for the neuron activation functions; τ₁ and τ₂ are two positive scalars denoting the transmission delays, and δ(k) is a stochastic variable accounting for the randomly occurring transmission delay. Ψ(k) = [ψ₁(k) ψ₂(k) ··· ψₙ(k)]ᵀ describes the initial condition, and ψᵢ(k) is defined on Γ := {−max{τ₁, τ₂}, −max{τ₁, τ₂}+1, ··· , 0}.
The neuron activation functions f(x(k)) and g(x(k)) are assumed to be continuous and to satisfy the following conditions [82]:
\[
\begin{cases}
[f(x)-f(y)-\rho_1^f(x-y)]^T [f(x)-f(y)-\rho_2^f(x-y)] \le 0, \\
[g(x)-g(y)-\rho_1^g(x-y)]^T [g(x)-g(y)-\rho_2^g(x-y)] \le 0
\end{cases} \tag{2.2}
\]
for all x, y ∈ Rⁿ (x ≠ y), where ρ₁^f, ρ₂^f, ρ₁^g and ρ₂^g are real matrices. The stochastic variable δ(k) is a Bernoulli-distributed white sequence taking values of 0 or 1 with the probabilities
\[
\operatorname{Prob}\{\delta(k)=1\} = \bar{\delta}, \qquad \operatorname{Prob}\{\delta(k)=0\} = 1-\bar{\delta} \tag{2.3}
\]
where δ̄ ∈ [0, 1] is a known constant.
Along similar lines to [18, 178], dᵢ(xᵢ(·)), a_{ij}(xᵢ(·)) and b_{ij}(xᵢ(·)) are state-dependent functions of the form
\[
d_i(x_i(\cdot)) = \begin{cases} \hat{d}_i, & |x_i(\cdot)| > \kappa_i, \\ \check{d}_i, & |x_i(\cdot)| \le \kappa_i, \end{cases}
\qquad
s_{ij}(x_i(\cdot)) = \begin{cases} \hat{s}_{ij}, & |x_i(\cdot)| > \kappa_i, \\ \check{s}_{ij}, & |x_i(\cdot)| \le \kappa_i
\end{cases}
\]
where s stands for a or b, the switching jumps κᵢ > 0, |d̂ᵢ| < 1, |ďᵢ| < 1, and ŝ_{ij}, š_{ij} are constants. Then, denoting dᵢ⁻ = min{d̂ᵢ, ďᵢ}, dᵢ⁺ = max{d̂ᵢ, ďᵢ}, a_{ij}⁻ = min{â_{ij}, ǎ_{ij}}, a_{ij}⁺ = max{â_{ij}, ǎ_{ij}}, b_{ij}⁻ = min{b̂_{ij}, b̌_{ij}}, b_{ij}⁺ = max{b̂_{ij}, b̌_{ij}}, D⁻ = diag{d₁⁻, d₂⁻, ··· , dₙ⁻}, D⁺ = diag{d₁⁺, d₂⁺, ··· , dₙ⁺}, A⁻ = (a_{ij}⁻)_{n×n}, A⁺ = (a_{ij}⁺)_{n×n}, B⁻ = (b_{ij}⁻)_{n×n} and B⁺ = (b_{ij}⁺)_{n×n}, one has D̆(x(k)) ∈ [D⁻, D⁺], Ă(x(k)) ∈ [A⁻, A⁺] and B̆(x(k)) ∈ [B⁻, B⁺]. Define
\[
\bar{D} := \frac{D^+ + D^-}{2} = \operatorname{diag}\Big\{\frac{d_1^+ + d_1^-}{2}, \frac{d_2^+ + d_2^-}{2}, \cdots, \frac{d_n^+ + d_n^-}{2}\Big\},
\]
\[
\bar{A} := \frac{A^+ + A^-}{2} = \Big(\frac{a_{ij}^+ + a_{ij}^-}{2}\Big)_{n\times n}, \qquad
\bar{B} := \frac{B^+ + B^-}{2} = \Big(\frac{b_{ij}^+ + b_{ij}^-}{2}\Big)_{n\times n}.
\]
The matrices D̆(x(k)), Ă(x(k)) and B̆(x(k)) can then be written as
\[
\breve{D}(x(k)) = \bar{D} + \Delta D(k), \qquad \breve{A}(x(k)) = \bar{A} + \Delta A(k), \qquad \breve{B}(x(k)) = \bar{B} + \Delta B(k) \tag{2.4}
\]
where ΔD(k) = Σᵢ₌₁ⁿ eᵢsᵢ(k)eᵢᵀ, ΔA(k) = Σᵢ,ⱼ₌₁ⁿ eᵢt_{ij}(k)eⱼᵀ and ΔB(k) = Σᵢ,ⱼ₌₁ⁿ eᵢp_{ij}(k)eⱼᵀ; e_k ∈ Rⁿ is the column vector with the kth element being 1 and all others being 0; sᵢ(k), t_{ij}(k) and p_{ij}(k) are unknown scalars satisfying |sᵢ(k)| ≤ d̃ᵢ, |t_{ij}(k)| ≤ ã_{ij} and |p_{ij}(k)| ≤ b̃_{ij} with d̃ⱼ = (dⱼ⁺ − dⱼ⁻)/2, ã_{ij} = (a_{ij}⁺ − a_{ij}⁻)/2 and b̃_{ij} = (b_{ij}⁺ − b_{ij}⁻)/2. ΔD(k), ΔA(k) and ΔB(k) are parameter matrices of the following structure:
\[
\Delta D(k) = H F_1(k) E_1, \qquad \Delta A(k) = H F_2(k) E_2, \qquad \Delta B(k) = H F_3(k) E_3 \tag{2.5}
\]
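The midpoint/radius decomposition (2.4) is elementary to compute from the two switching values of each weight. The sketch below does so for the connection-weight switching values that appear in the illustrative example later in this chapter:

```python
def midpoint_radius(lo, hi):
    """Element-wise midpoint and radius of an interval matrix [lo, hi]:
    A_bar = (A+ + A-)/2 and the radii a_tilde = (a+ - a-)/2 as in (2.4)."""
    n = len(lo)
    mid = [[(lo[i][j] + hi[i][j]) / 2 for j in range(n)] for i in range(n)]
    rad = [[(hi[i][j] - lo[i][j]) / 2 for j in range(n)] for i in range(n)]
    return mid, rad

# the two switching values of each memristive weight give the interval ends
A_hat = [[0.5, 0.2], [0.3, 0.6]]       # values taken for |x_i| > kappa_i
A_chk = [[0.2, -0.3], [0.15, -0.18]]   # values taken for |x_i| <= kappa_i
A_lo = [[min(A_hat[i][j], A_chk[i][j]) for j in range(2)] for i in range(2)]
A_hi = [[max(A_hat[i][j], A_chk[i][j]) for j in range(2)] for i in range(2)]
A_bar, A_til = midpoint_radius(A_lo, A_hi)
```

The midpoints A_bar are what the estimator (2.8) below actually uses, while the radii A_til bound the norm-bounded uncertainty ΔA(k) of (2.5).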
where H = [H₁ᵀ H₂ᵀ ··· Hₙᵀ]ᵀ and Eᵢ = [Eᵢ₁ᵀ Eᵢ₂ᵀ ··· Eᵢₙᵀ]ᵀ (i = 1, 2, 3) are known real constant matrices with Hᵢ = [eᵢ eᵢ ··· eᵢ] (n columns), E₁ⱼ = [e₁ e₂ ··· d̃ⱼeⱼ ··· eₙ]ᵀ, E₂ⱼ = [ã_{j1}e₁ ã_{j2}e₂ ··· ã_{jn}eₙ]ᵀ and E₃ⱼ = [b̃_{j1}e₁ b̃_{j2}e₂ ··· b̃_{jn}eₙ]ᵀ. Fᵢ(k) (i = 1, 2, 3) are unknown time-varying matrices given by
\[
\begin{aligned}
F_i(k) &= \operatorname{diag}\{F_{i1}(k), \cdots, F_{in}(k)\}, \\
F_{1j}(k) &= \operatorname{diag}\{\underbrace{0, \cdots, 0}_{j-1}, s_j(k)\tilde{d}_j^{-1}, \underbrace{0, \cdots, 0}_{n-j}\}, \\
F_{2j}(k) &= \operatorname{diag}\{t_{j1}(k)\tilde{a}_{j1}^{-1}, \cdots, t_{jn}(k)\tilde{a}_{jn}^{-1}\}, \\
F_{3j}(k) &= \operatorname{diag}\{p_{j1}(k)\tilde{b}_{j1}^{-1}, \cdots, p_{jn}(k)\tilde{b}_{jn}^{-1}\}.
\end{aligned}
\]
It is not difficult to verify that the matrices Fᵢ(k) (i = 1, 2, 3) satisfy Fᵢᵀ(k)Fᵢ(k) ≤ I.
Suppose that the measurement output and the output to be estimated of the neural network (2.1) are given as follows:
\[
y(k) = Cx(k) + N\xi(k), \tag{2.6}
\]
\[
z(k) = Mx(k) \tag{2.7}
\]
where y(k) ∈ R^m is the measurement output, z(k) ∈ R^r is the output to be estimated, and ξ(k) ∈ R^l is the disturbance input belonging to l₂([0, ∞); R^l).
In order to estimate the neuron state x(k), we employ the following state estimator:
\[
\begin{cases}
\hat{x}(k+1) = \bar{D}\hat{x}(k) + \bar{A}f(\hat{x}(k)) + \bar{\delta}\bar{B}g(\hat{x}(k-\tau_1)) + (1-\bar{\delta})\bar{B}g(\hat{x}(k-\tau_2)) + K(y(k) - C\hat{x}(k)), \\
\hat{z}(k) = M\hat{x}(k), \\
\hat{x}(k) = 0, \quad k \in \Gamma
\end{cases} \tag{2.8}
\]
where x̂(k) ∈ Rⁿ is the estimate of the neuron state x(k), ẑ(k) ∈ R^r is the estimate of the output z(k), and K ∈ R^{n×m} is the estimator gain to be determined.
From (2.1), (2.6), (2.7) and (2.8), the dynamics of the estimation error can be obtained as follows:
\[
\begin{cases}
e(k+1) = (\bar{D} - KC)e(k) + \Delta D(k)x(k) + \bar{A}\tilde{f}(k) + \Delta A(k)f(x(k)) + \bar{\delta}\bar{B}\tilde{g}^1(k) \\
\qquad\quad + (\delta(k)-\bar{\delta})\breve{B}(x(k))g(x(k-\tau_1)) + \bar{\delta}\Delta B(k)g(x(k-\tau_1)) \\
\qquad\quad + (\bar{\delta}-\delta(k))\breve{B}(x(k))g(x(k-\tau_2)) + (1-\bar{\delta})\bar{B}\tilde{g}^2(k) \\
\qquad\quad + (1-\bar{\delta})\Delta B(k)g(x(k-\tau_2)) + L\varsigma(k) - KN\xi(k), \\
e(k) = \Psi(k), \quad k \in \Gamma
\end{cases} \tag{2.9}
\]
where e(k) = x(k) − x̂(k), f̃(k) = f(x(k)) − f(x̂(k)) and g̃^r(k) = g(x(k−τ_r)) − g(x̂(k−τ_r)) (r = 1, 2).
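Leaving the LMI-based gain design aside for the moment, the estimator recursion (2.8) itself is straightforward to implement. The sketch below (pure Python, with the activation functions passed in as callables, and all concrete matrices supplied by the caller) advances x̂ by one step:

```python
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def estimator_step(x_hat_hist, y, D_bar, A_bar, B_bar, K, C, delta_bar,
                   tau1, tau2, f, g):
    """One step of the estimator (2.8): the memristive weights are replaced
    by their interval midpoints and the Bernoulli delay by its mean delta_bar.
    x_hat_hist[-1] is x_hat(k); the list must reach back max(tau1, tau2) steps."""
    x = x_hat_hist[-1]
    x_t1 = x_hat_hist[-1 - tau1]
    x_t2 = x_hat_hist[-1 - tau2]
    n = len(x)
    pred = mat_vec(D_bar, x)                       # D_bar * x_hat(k)
    fa = mat_vec(A_bar, [f(v) for v in x])         # A_bar * f(x_hat(k))
    g1 = mat_vec(B_bar, [g(v) for v in x_t1])      # B_bar * g(x_hat(k - tau1))
    g2 = mat_vec(B_bar, [g(v) for v in x_t2])      # B_bar * g(x_hat(k - tau2))
    innov = [y[i] - sum(C[i][j] * x[j] for j in range(n)) for i in range(len(y))]
    corr = mat_vec(K, innov)                       # K * (y(k) - C * x_hat(k))
    return [pred[i] + fa[i] + delta_bar * g1[i] + (1 - delta_bar) * g2[i]
            + corr[i] for i in range(n)]
```

Note that the estimator never needs δ(k) itself, only its statistics δ̄, which is precisely why the design can be carried out offline.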
Then, setting η(k) = [xᵀ(k) eᵀ(k)]ᵀ and letting the output estimation error be z̃(k) = z(k) − ẑ(k), we have the following augmented system:
\[
\begin{cases}
\eta(k+1) = \tilde{W}_1\eta(k) + \tilde{W}_2\vec{f}(k) + \bar{\delta}\tilde{W}_3\vec{g}^1(k) + (\delta(k)-\bar{\delta})\tilde{W}_4\vec{g}^1(k) \\
\qquad\qquad\ \ + (\bar{\delta}-\delta(k))\tilde{W}_4\vec{g}^2(k) + (1-\bar{\delta})\tilde{W}_3\vec{g}^2(k) + W_5\zeta(k), \\
\tilde{z}(k) = \vec{M}\eta(k), \\
\eta(k) = [x^T(k)\ \psi^T(k)]^T, \quad k \in \Gamma
\end{cases} \tag{2.10}
\]
where
\[
\begin{aligned}
&\vec{f}(k) = [f^T(x(k))\ \ \tilde{f}^T(k)]^T, \qquad \vec{g}^r(k) = [g^T(x(k-\tau_r))\ \ (\tilde{g}^r(k))^T]^T \ \ (r=1,2), \\
&\vec{M} = [0\ \ M], \qquad \zeta(k) = [\varsigma^T(k)\ \ \xi^T(k)]^T, \\
&\tilde{W}_1 = W_1 + \boldsymbol{\Delta}D(k), \quad \tilde{W}_2 = W_2 + \boldsymbol{\Delta}A(k), \quad \tilde{W}_3 = W_3 + \boldsymbol{\Delta}B(k), \quad \tilde{W}_4 = W_4 + \boldsymbol{\Delta}B(k), \\
&W_1 = \begin{bmatrix} \bar{D} & 0 \\ 0 & \bar{D}-KC \end{bmatrix}, \quad W_2 = \operatorname{diag}\{\bar{A}, \bar{A}\}, \quad W_3 = \operatorname{diag}\{\bar{B}, \bar{B}\}, \\
&W_4 = \begin{bmatrix} \bar{B} & 0 \\ \bar{B} & 0 \end{bmatrix}, \quad W_5 = \begin{bmatrix} L & 0 \\ L & -KN \end{bmatrix}, \\
&\boldsymbol{\Delta}D(k) = \begin{bmatrix} \Delta D(k) & 0 \\ \Delta D(k) & 0 \end{bmatrix}, \quad \boldsymbol{\Delta}A(k) = \begin{bmatrix} \Delta A(k) & 0 \\ \Delta A(k) & 0 \end{bmatrix}, \quad \boldsymbol{\Delta}B(k) = \begin{bmatrix} \Delta B(k) & 0 \\ \Delta B(k) & 0 \end{bmatrix}.
\end{aligned}
\]
Definition 2.1 The augmented system (2.10) with ζ(k) = 0 is said to be exponentially mean-square stable if there exist constants ε > 0 and 0 < λ < 1 such that
\[
\mathbb{E}\{\|\eta(k)\|^2\} \le \varepsilon\lambda^k \max_{i\in\Gamma}\mathbb{E}\{\|\eta(i)\|^2\}, \quad \forall k \in \mathbb{N}.
\]
The aim of this chapter is to design an H∞ state estimator for the DMRNNs with stochastic time-delays given by (2.1). More specifically, we are interested in looking for the gain matrix K such that the following two requirements are met simultaneously:
1) The augmented system (2.10) with ζ(k) = 0 is exponentially mean-square stable;
2) Under zero initial conditions, for a given disturbance attenuation level γ > 0 and all nonzero ζ(k), the output estimation error z̃(k) satisfies
\[
\sum_{k=0}^{\infty}\mathbb{E}\{\|\tilde{z}(k)\|^2\} \le \gamma^2 \sum_{k=0}^{\infty}\|\zeta(k)\|^2. \tag{2.11}
\]

2.2
Main Results
In this section, the stability and the H∞ performance are analyzed for the augmented system (2.10). A sufficient condition is established to guarantee
that the augmented system (2.10) is exponentially mean-square stable and the H∞ performance is achieved. Then, the explicit expression of the desired estimator gain is given in terms of the solution to a certain matrix inequality.

Theorem 2.1 Let the estimator parameter K be given. The augmented system (2.10) with ζ(k) = 0 is exponentially mean-square stable if there exist positive definite matrices P = diag{P₁, P₂}, Qᵢ = diag{Q_{i1}, Q_{i2}} (i = 1, 2) and positive scalars λⱼ (j = 1, 2, 3) satisfying the following inequality:
\[
\tilde{\Phi} = \begin{bmatrix}
\tilde{\Theta}_{11} & 0 & 0 & \tilde{\Theta}_{14} & \Theta_{15} & \Theta_{16} \\
* & \tilde{\Theta}_{22} & 0 & 0 & \tilde{\Theta}_{25} & 0 \\
* & * & \tilde{\Theta}_{33} & 0 & 0 & \tilde{\Theta}_{36} \\
* & * & * & \tilde{\Theta}_{44} & \Theta_{45} & \Theta_{46} \\
* & * & * & * & \tilde{\Theta}_{55} & \Theta_{56} \\
* & * & * & * & * & \tilde{\Theta}_{66}
\end{bmatrix} < 0 \tag{2.12}
\]
where
\[
\begin{aligned}
&\phi_1^f = I \otimes \operatorname{Sym}\{\tfrac{1}{2}\rho_1^{fT}\rho_2^f\}, \qquad \phi_2^f = I \otimes (\rho_1^f+\rho_2^f)/2, \\
&\phi_1^g = I \otimes \operatorname{Sym}\{\tfrac{1}{2}\rho_1^{gT}\rho_2^g\}, \qquad \phi_2^g = I \otimes (\rho_1^g+\rho_2^g)/2, \\
&\tilde{\Theta}_{11} = \tilde{W}_1^T P \tilde{W}_1 - P + Q_1 + Q_2 - \lambda_1\phi_1^f, \qquad \tilde{\Theta}_{14} = \tilde{W}_1^T P \tilde{W}_2 + \lambda_1\phi_2^{fT}, \\
&\Theta_{15} = \bar{\delta}\tilde{W}_1^T P \tilde{W}_3, \qquad \Theta_{16} = (1-\bar{\delta})\tilde{W}_1^T P \tilde{W}_3, \qquad \tilde{\Theta}_{22} = -Q_1 - \lambda_2\phi_1^g, \\
&\tilde{\Theta}_{25} = \lambda_2\phi_2^{gT}, \qquad \tilde{\Theta}_{33} = -Q_2 - \lambda_3\phi_1^g, \qquad \tilde{\Theta}_{36} = \lambda_3\phi_2^{gT}, \\
&\tilde{\Theta}_{44} = \tilde{W}_2^T P \tilde{W}_2 - \lambda_1 I, \qquad \Theta_{45} = \bar{\delta}\tilde{W}_2^T P \tilde{W}_3, \qquad \tilde{\Theta}_{55} = \bar{\delta}^2\tilde{W}_3^T P \tilde{W}_3 - \lambda_2 I, \\
&\Theta_{46} = (1-\bar{\delta})\tilde{W}_2^T P \tilde{W}_3, \qquad \Theta_{56} = \bar{\delta}(1-\bar{\delta})\tilde{W}_3^T P \tilde{W}_3, \qquad \tilde{\Theta}_{66} = (1-\bar{\delta})^2\tilde{W}_3^T P \tilde{W}_3 - \lambda_3 I.
\end{aligned}
\]
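Once candidate matrices P, Qᵢ and scalars λⱼ are available (for instance from an LMI solver), verifying a condition of the form (2.12) amounts to testing whether a symmetric matrix is negative definite. A small dependency-free check, via Cholesky factorization of −Φ̃, is sketched below (the matrices tested are arbitrary illustrations):

```python
import math

def is_negative_definite(M, tol=1e-12):
    """Check M < 0 by attempting a Cholesky factorization of -M:
    a symmetric matrix M is negative definite iff -M is positive definite,
    i.e. iff every Cholesky pivot of -M is strictly positive."""
    n = len(M)
    A = [[-M[i][j] for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= tol:        # non-positive pivot: -M is not pos. def.
                    return False
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return True
```

In practice the theorem's inequality would be assembled block by block from the Θ terms above and then fed to such a test (or, for design, to a dedicated LMI solver).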
Proof Construct the following Lyapunov-Krasovskii functional for system (2.10):
\[
V(\eta(k)) = V_1(\eta(k)) + V_2(\eta(k)) + V_3(\eta(k)) \tag{2.13}
\]
where V₁(η(k)) = ηᵀ(k)Pη(k), V₂(η(k)) = Σ_{i=k−τ₁}^{k−1} ηᵀ(i)Q₁η(i) and V₃(η(k)) = Σ_{i=k−τ₂}^{k−1} ηᵀ(i)Q₂η(i).
In the case of ζ(k) = 0, we calculate the difference of V₁(η(k)) along (2.10) as follows:
\[
\begin{aligned}
\mathbb{E}\{\Delta V_1(\eta(k))\}
= \mathbb{E}\big\{\big[&\eta^T(k)\tilde{W}_1^T P\tilde{W}_1\eta(k) + \vec{f}^T(k)\tilde{W}_2^T P\tilde{W}_2\vec{f}(k) + \bar{\delta}^2\vec{g}^{1T}(k)\tilde{W}_3^T P\tilde{W}_3\vec{g}^1(k) \\
&+ (1-\bar{\delta})^2\vec{g}^{2T}(k)\tilde{W}_3^T P\tilde{W}_3\vec{g}^2(k) + 2\eta^T(k)\tilde{W}_1^T P\tilde{W}_2\vec{f}(k) \\
&+ 2\bar{\delta}\eta^T(k)\tilde{W}_1^T P\tilde{W}_3\vec{g}^1(k) + 2(1-\bar{\delta})\eta^T(k)\tilde{W}_1^T P\tilde{W}_3\vec{g}^2(k) \\
&+ 2\bar{\delta}\vec{f}^T(k)\tilde{W}_2^T P\tilde{W}_3\vec{g}^1(k) + 2(1-\bar{\delta})\vec{f}^T(k)\tilde{W}_2^T P\tilde{W}_3\vec{g}^2(k) \\
&+ 2\bar{\delta}(1-\bar{\delta})\vec{g}^{1T}(k)\tilde{W}_3^T P\tilde{W}_3\vec{g}^2(k)\big] - \eta^T(k)P\eta(k)\big\}.
\end{aligned} \tag{2.14}
\]
Similarly, we obtain
\[
\mathbb{E}\{\Delta V_2(\eta(k))\} = \mathbb{E}\{\eta^T(k)Q_1\eta(k) - \eta^T(k-\tau_1)Q_1\eta(k-\tau_1)\}, \tag{2.15}
\]
\[
\mathbb{E}\{\Delta V_3(\eta(k))\} = \mathbb{E}\{\eta^T(k)Q_2\eta(k) - \eta^T(k-\tau_2)Q_2\eta(k-\tau_2)\}. \tag{2.16}
\]
By setting ϑ(k) = [ηᵀ(k) ηᵀ(k−τ₁) ηᵀ(k−τ₂) f⃗ᵀ(k) g⃗^{1T}(k) g⃗^{2T}(k)]ᵀ, the combination of (2.14)–(2.16) results in
\[
\mathbb{E}\{V(\eta(k+1)) - V(\eta(k))\} = \sum_{i=1}^{3}\mathbb{E}\{\Delta V_i(\eta(k))\} = \mathbb{E}\{\vartheta^T(k)\Phi\vartheta(k)\} \tag{2.17}
\]
where
\[
\Phi = \begin{bmatrix} \Theta_{11} & \Pi_{12} \\ * & \Pi_{22} \end{bmatrix}, \qquad
\Pi_{12} = \begin{bmatrix} 0 & 0 & \Theta_{14} & \Theta_{15} & \Theta_{16} \end{bmatrix}, \qquad
\Pi_{22} = \begin{bmatrix}
\Theta_{22} & 0 & 0 & 0 & 0 \\
* & \Theta_{33} & 0 & 0 & 0 \\
* & * & \Theta_{44} & \Theta_{45} & \Theta_{46} \\
* & * & * & \Theta_{55} & \Theta_{56} \\
* & * & * & * & \Theta_{66}
\end{bmatrix},
\]
\[
\begin{aligned}
&\Theta_{11} = \tilde{W}_1^T P\tilde{W}_1 - P + Q_1 + Q_2, \qquad \Theta_{14} = \tilde{W}_1^T P\tilde{W}_2, \qquad \Theta_{22} = -Q_1, \qquad \Theta_{33} = -Q_2, \\
&\Theta_{44} = \tilde{W}_2^T P\tilde{W}_2, \qquad \Theta_{55} = \bar{\delta}^2\tilde{W}_3^T P\tilde{W}_3, \qquad \Theta_{66} = (1-\bar{\delta})^2\tilde{W}_3^T P\tilde{W}_3
\end{aligned}
\]
and Θ₁₅, Θ₁₆, Θ₄₅, Θ₄₆, Θ₅₆ are defined in Theorem 2.1. Taking (2.2) into consideration, one has
\[
\begin{aligned}
\mathbb{E}\{V(\eta(k&+1)) - V(\eta(k))\} \\
\le{}& \mathbb{E}\Big\{\sum_{i=1}^{3}\Delta V_i(\eta(k)) - \lambda_1[\vec{f}(k)-(I\otimes\rho_1^f)\eta(k)]^T[\vec{f}(k)-(I\otimes\rho_2^f)\eta(k)] \\
&- \sum_{r=1}^{2}\lambda_{r+1}[\vec{g}^r(k)-(I\otimes\rho_1^g)\eta(k-\tau_r)]^T[\vec{g}^r(k)-(I\otimes\rho_2^g)\eta(k-\tau_r)]\Big\} \\
\le{}& \mathbb{E}\{\vartheta^T(k)\tilde{\Phi}\vartheta(k)\}
\end{aligned} \tag{2.18}
\]
where Φ̃ is defined by (2.12). Since Φ̃ < 0, we have E{V(η(k+1)) − V(η(k))} ≤ −εE{‖η(k)‖²} with ε = −λ_max(Φ̃). Then, by following a similar analysis to that in [73], the exponential mean-square stability of the augmented system (2.10) with ζ(k) = 0 can be shown, and hence the proof is complete.

In the following theorem, the H∞ performance analysis is conducted and a sufficient condition is derived to guarantee that the output estimation error z̃(k) satisfies the H∞ performance constraint (2.11).

Theorem 2.2 Let the estimator parameter K be given. The augmented system (2.10) with ζ(k) = 0 is exponentially mean-square stable and the output estimation error z̃(k) satisfies the H∞ performance constraint (2.11) under
the zero initial condition for all nonzero ζ(k) if there exist positive definite matrices P = diag{P₁, P₂}, Qᵢ = diag{Q_{i1}, Q_{i2}} (i = 1, 2) and positive scalars λⱼ (j = 1, 2, 3) satisfying the following inequality:
\[
\hat{\Phi} = \begin{bmatrix}
\hat{\Theta}_{11} & 0 & 0 & \tilde{\Theta}_{14} & \Theta_{15} & \Theta_{16} & \Theta_{17} \\
* & \tilde{\Theta}_{22} & 0 & 0 & \tilde{\Theta}_{25} & 0 & 0 \\
* & * & \tilde{\Theta}_{33} & 0 & 0 & \tilde{\Theta}_{36} & 0 \\
* & * & * & \tilde{\Theta}_{44} & \Theta_{45} & \Theta_{46} & \Theta_{47} \\
* & * & * & * & \tilde{\Theta}_{55} & \Theta_{56} & \Theta_{57} \\
* & * & * & * & * & \tilde{\Theta}_{66} & \Theta_{67} \\
* & * & * & * & * & * & \Theta_{77}
\end{bmatrix} < 0 \tag{2.19}
\]
where Θ̂₁₁ = W̃₁ᵀPW̃₁ − P + Q₁ + Q₂ + M⃗ᵀM⃗ − λ₁φ₁^f, Θ₇₇ = −γ²I + W₅ᵀPW₅, Θ₁₇ = W̃₁ᵀPW₅, Θ₄₇ = W̃₂ᵀPW₅, Θ₅₇ = δ̄W̃₃ᵀPW₅, Θ₆₇ = (1−δ̄)W̃₃ᵀPW₅, and the other parameters are defined in Theorem 2.1.

Proof The proof of the exponential mean-square stability for (2.10) in the case of ζ(k) = 0 follows immediately from Theorem 2.1, since the inequality (2.12) is implied by (2.19). For the H∞ performance analysis, we choose the same Lyapunov-Krasovskii functional and calculate the difference of V(η(k)) along (2.10) as follows:
\[
\mathbb{E}\{V(\eta(k+1)) - V(\eta(k)) + \|\tilde{z}(k)\|^2 - \gamma^2\|\zeta(k)\|^2\} = \mathbb{E}\{\varpi^T(k)\bar{\Phi}\varpi(k)\}
\]
where
\[
\varpi(k) = \big[\eta^T(k)\ \ \eta^T(k-\tau_1)\ \ \eta^T(k-\tau_2)\ \ \vec{f}^T(k)\ \ \vec{g}^{1T}(k)\ \ \vec{g}^{2T}(k)\ \ \zeta^T(k)\big]^T
\]
and
\[
\bar{\Phi} = \begin{bmatrix}
\bar{\Theta}_{11} & \Pi_{12} & \Theta_{17} \\
* & \Pi_{22} & \Pi_{23} \\
* & * & \Theta_{77}
\end{bmatrix}, \qquad
\Pi_{23} = \big[0\ \ 0\ \ \Theta_{47}^T\ \ \Theta_{57}^T\ \ \Theta_{67}^T\big]^T
\]
with Θ̄₁₁ = W̃₁ᵀPW̃₁ − P + Q₁ + Q₂ + M⃗ᵀM⃗ and the other parameters defined in Theorems 2.1 and 2.2. Using (2.2) and (2.19), we have
\[
\mathbb{E}\{V(\eta(k+1)) - V(\eta(k)) + \|\tilde{z}(k)\|^2 - \gamma^2\|\zeta(k)\|^2\} < 0. \tag{2.20}
\]
Under the zero initial condition, summing up (2.20) from 0 to ∞ with respect to k and considering E{V(η(∞))} ≥ 0, we obtain (2.11), which accomplishes the proof of Theorem 2.2.

According to the H∞ performance analysis conducted in Theorem 2.2, a design method of the H∞ state estimator for (2.1) is provided in Theorem 2.3.
Theorem 2.3 Consider the system (2.1) and let the disturbance attenuation level γ > 0 be given. The augmented system (2.10) is exponentially stable in the mean square and the H∞ performance constraint (2.11) is met for all nonzero ζ(k) under the zero initial condition if there exist positive definite matrices P = diag{P₁, P₂}, Qᵢ = diag{Q_{i1}, Q_{i2}} (i = 1, 2), a matrix X and positive scalars ε, λⱼ (j = 1, 2, 3) satisfying the following inequality:
\[
\begin{bmatrix}
\check{\Phi} & \check{\mathcal{H}} & \varepsilon\check{E}^T \\
* & -\varepsilon I & 0 \\
* & * & -\varepsilon I
\end{bmatrix} < 0 \tag{2.21}
\]
where
\[
\check{\Phi} = \begin{bmatrix} \check{\Pi}_{11} & \check{\Pi}_{12} \\ * & \check{\Theta}_{88} \end{bmatrix}, \qquad
\check{\Pi}_{11} = \begin{bmatrix}
\check{\Theta}_{11} & 0 & 0 & \check{\Theta}_{14} & 0 & 0 & 0 \\
* & \tilde{\Theta}_{22} & 0 & 0 & \tilde{\Theta}_{25} & 0 & 0 \\
* & * & \tilde{\Theta}_{33} & 0 & 0 & \tilde{\Theta}_{36} & 0 \\
* & * & * & \check{\Theta}_{44} & 0 & 0 & 0 \\
* & * & * & * & \check{\Theta}_{55} & 0 & 0 \\
* & * & * & * & * & \check{\Theta}_{66} & 0 \\
* & * & * & * & * & * & \check{\Theta}_{77}
\end{bmatrix}, \qquad
\check{\Pi}_{12} = \begin{bmatrix} \check{\Theta}_{18}^T & 0 & 0 & \check{\Theta}_{48}^T & \check{\Theta}_{58}^T & \check{\Theta}_{68}^T & \check{\Theta}_{78}^T \end{bmatrix}^T,
\]
\[
\check{\mathcal{H}}^T = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \mathcal{H}^T\check{P}^T \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \mathcal{H}^T\check{P}^T \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \mathcal{H}^T\check{P}^T
\end{bmatrix}, \qquad
\check{E} = \begin{bmatrix}
S_{18} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & S_{48} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & S_{58} & S_{68} & 0 & 0
\end{bmatrix}, \qquad
\check{P} = \begin{bmatrix} P_1 & 0 \\ 0 & P_2 \end{bmatrix},
\]
\[
\begin{aligned}
&\check{\Theta}_{11} = -P + Q_1 + Q_2 + \vec{M}^T\vec{M} - \lambda_1\phi_1^f, \qquad \check{\Theta}_{14} = \lambda_1\phi_2^{fT}, \qquad \check{\Theta}_{18} = W_1^T P^T, \\
&\check{\Theta}_{44} = -\lambda_1 I, \qquad \check{\Theta}_{48} = W_2^T P^T, \qquad \check{\Theta}_{55} = -\lambda_2 I, \qquad \check{\Theta}_{58} = \bar{\delta}W_3^T P^T, \\
&\check{\Theta}_{66} = -\lambda_3 I, \qquad \check{\Theta}_{68} = (1-\bar{\delta})W_3^T P^T, \qquad \check{\Theta}_{77} = -\gamma^2 I, \qquad \check{\Theta}_{78} = W_5^T P^T, \\
&\check{\Theta}_{88} = -P, \qquad S_{18} = [E_1\ \ 0], \qquad S_{48} = [E_2\ \ 0], \qquad S_{58} = [\bar{\delta}E_3\ \ 0], \qquad S_{68} = [(1-\bar{\delta})E_3\ \ 0] \tag{2.22}
\end{aligned}
\]
and the other parameters are defined in Theorems 2.1 and 2.2. Moreover, if the above inequality is solvable, the state estimator gain can be determined by K = P₂⁻¹X.

Proof In order to eliminate the uncertainties in (2.19), we apply the Schur complement lemma to (2.19) and obtain
\[
\Xi = \begin{bmatrix} \check{\Pi}_{11} & \Xi_{12} \\ * & \check{\Theta}_{88} \end{bmatrix} < 0 \tag{2.23}
\]
where Ξ₁₂ = [Λ₁₈ᵀ 0 0 Λ₄₈ᵀ Λ₅₈ᵀ Λ₆₈ᵀ Θ̌₇₈ᵀ]ᵀ,
\[
\begin{aligned}
&\Lambda_{18} = \check{\Theta}_{18} + S_{18}^T F_1^T(k)\mathcal{H}^T\check{P}^T, \qquad \Lambda_{48} = \check{\Theta}_{48} + S_{48}^T F_2^T(k)\mathcal{H}^T\check{P}^T, \\
&\Lambda_{58} = \check{\Theta}_{58} + S_{58}^T F_3^T(k)\mathcal{H}^T\check{P}^T, \qquad \Lambda_{68} = \check{\Theta}_{68} + S_{68}^T F_3^T(k)\mathcal{H}^T\check{P}^T \tag{2.24}
\end{aligned}
\]
and the other parameters are defined in Theorems 2.2 and 2.3. By considering X = P₂K, it follows from (2.24) that
\[
\Xi = \check{\Phi} + \check{\mathcal{H}}\check{F}(k)\check{E} + (\check{\mathcal{H}}\check{F}(k)\check{E})^T
\]
with F̌(k) = diag{F₁(k), F₂(k), F₃(k)}. Since F̌ᵀ(k)F̌(k) ≤ I, Ξ < 0 holds if there exists a scalar ε > 0 such that Φ̌ + ε⁻¹ȞȞᵀ + εĚᵀĚ < 0, which, by the Schur complement lemma, is equivalent to (2.21). This completes the proof.

2.3
An Illustrative Example

Consider the DMRNN (2.1) with the following parameters:
\[
d_1(x_1(\cdot)) = \begin{cases} \dots, & |x_1(\cdot)| > 1, \\ 0.6, & |x_1(\cdot)| \le 1, \end{cases} \qquad
d_2(x_2(\cdot)) = \begin{cases} 0.6, & |x_2(\cdot)| > 1, \\ 0.4, & |x_2(\cdot)| \le 1, \end{cases}
\]
\[
a_{11}(x_1(\cdot)) = \begin{cases} 0.5, & |x_1(\cdot)| > 1, \\ 0.2, & |x_1(\cdot)| \le 1, \end{cases} \qquad
a_{12}(x_1(\cdot)) = \begin{cases} 0.2, & |x_1(\cdot)| > 1, \\ -0.3, & |x_1(\cdot)| \le 1, \end{cases}
\]
\[
a_{21}(x_2(\cdot)) = \begin{cases} 0.3, & |x_2(\cdot)| > 1, \\ 0.15, & |x_2(\cdot)| \le 1, \end{cases} \qquad
a_{22}(x_2(\cdot)) = \begin{cases} 0.6, & |x_2(\cdot)| > 1, \\ -0.18, & |x_2(\cdot)| \le 1, \end{cases}
\]
\[
b_{11}(x_1(\cdot)) = \begin{cases} 0.2, & |x_1(\cdot)| > 1, \\ 0.5, & |x_1(\cdot)| \le 1, \end{cases} \qquad
b_{12}(x_1(\cdot)) = \begin{cases} 0.3, & |x_1(\cdot)| > 1, \\ 0.2, & |x_1(\cdot)| \le 1, \end{cases}
\]
\[
b_{21}(x_2(\cdot)) = \begin{cases} 0.2, & |x_2(\cdot)| > 1, \\ -0.1, & |x_2(\cdot)| \le 1, \end{cases} \qquad
b_{22}(x_2(\cdot)) = \begin{cases} -0.3, & |x_2(\cdot)| > 1, \\ 0.1, & |x_2(\cdot)| \le 1, \end{cases}
\]
\[
\Delta D(k) = \begin{bmatrix} 0.1\sin(0.6k) & 0 \\ 0 & 0.1\sin(0.6k) \end{bmatrix}, \qquad
\Delta A(k) = \begin{bmatrix} 0.15\sin(0.8k) & 0.3\sin(0.8k) \\ 0.07\sin(0.8k) & 0.39\sin(0.8k) \end{bmatrix},
\]
\[
\Delta B(k) = \begin{bmatrix} 0.15\cos(0.5k) & 0.05\cos(0.5k) \\ 0.15\cos(0.5k) & 0.2\cos(0.5k) \end{bmatrix}, \qquad
C = \begin{bmatrix} 0.1 & 0.2 \\ 0.2 & 0.3 \end{bmatrix}, \qquad
N = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.2 \end{bmatrix},
\]
\[
L = \begin{bmatrix} 0.08 & 0 \\ 0 & 0.15 \end{bmatrix}, \qquad
M = \begin{bmatrix} 0.35 & 0.3 \end{bmatrix}.
\]
The activation functions f(x(k)) and g(x(k)) are chosen as
\[
f(x(k)) = \begin{bmatrix} \tanh(-0.3x_1(k)) \\ \tanh(0.5x_2(k)) \end{bmatrix}, \qquad
g(x(k)) = \begin{bmatrix} \tanh(0.10x_1(k)) \\ 0.02x_2(k) - 0.06\tanh(x_2(k)) \end{bmatrix},
\]
which satisfy the constraint (2.2) with
\[
\rho_1^f = \begin{bmatrix} -0.3 & 0 \\ 0 & 0 \end{bmatrix}, \qquad
\rho_2^f = \begin{bmatrix} 0 & 0 \\ 0 & 0.5 \end{bmatrix}, \qquad
\rho_1^g = \begin{bmatrix} 0 & 0 \\ 0 & -0.04 \end{bmatrix}, \qquad
\rho_2^g = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.02 \end{bmatrix}.
\]
In this example, the probability is taken as δ̄ = 0.86, the stochastic time delays are set as τ₁ = 1 and τ₂ = 3, and the disturbance attenuation level is chosen as γ = 0.95. By solving the LMI (2.21) in Theorem 2.3 with the help of the MATLAB LMI Toolbox, we obtain
\[
P_2 = \begin{bmatrix} 0.1846 & 0.0533 \\ 0.0533 & 0.1436 \end{bmatrix}, \qquad
X = \begin{bmatrix} -0.4065 & 0.4750 \\ 0.3992 & -0.0277 \end{bmatrix}.
\]
Then, according to K = P₂⁻¹X, the desired estimator parameter is designed as
\[
K = \begin{bmatrix} -3.3658 & 2.9443 \\ 4.0281 & -1.2856 \end{bmatrix}.
\]
In the simulation, the external disturbance inputs are taken as ς₁(k) = ς₂(k) = ξ₁(k) = ξ₂(k) = 3exp(−0.3k)cos(0.2k). Simulation results are shown in Figs. 2.1–2.3. The output estimation error z̃(k) is presented in Fig. 2.1. Figs. 2.2 and 2.3 plot the states and their estimates for node 1 and node 2, respectively. The simulation results show that the designed estimator works well.
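The guaranteed bound (2.11) can also be probed empirically: simulate the error dynamics under a concrete disturbance, accumulate the output-error and disturbance energies, and compare the square root of their ratio with γ. The scalar surrogate below (the values a = 0.5 and m = 0.35 are illustrative stand-ins, not the example's matrices) uses the same exponentially decaying disturbance as above:

```python
import math

# toy stable error dynamics e(k+1) = a*e(k) + zeta(k), z_tilde(k) = m*e(k);
# a = 0.5 and m = 0.35 are hypothetical illustration values
a, m, gamma = 0.5, 0.35, 0.95
e, num, den = 0.0, 0.0, 0.0        # zero initial condition, energy accumulators
for k in range(200):
    zeta = 3.0 * math.exp(-0.3 * k) * math.cos(0.2 * k)  # the example's input
    z_tilde = m * e
    num += z_tilde ** 2            # output-error energy
    den += zeta ** 2               # disturbance energy
    e = a * e + zeta
ratio = math.sqrt(num / den)       # empirical l2 gain for this disturbance
```

For any single disturbance the empirical ratio only lower-bounds the true l2 gain, so an observed ratio below γ is necessary, not sufficient, evidence of the H∞ bound; the LMI certificate of Theorem 2.3 is what guarantees it for all inputs.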
FIGURE 2.1: Output estimation error z̃(k).
FIGURE 2.2: The state and its estimate of node 1.
FIGURE 2.3: The state and its estimate of node 2.
2.4
Summary
In this chapter, the robust H∞ state estimation problem has been studied for a class of DMRNNs with stochastic time delays. The stochastic time delays under consideration are assumed to switch randomly between two values according to the Bernoulli distribution. In order to estimate the neuron states, an estimator has been constructed. By employing the difference inclusion theory and the stochastic analysis technique, a sufficient condition has been obtained to ensure that the output estimation error dynamics is exponentially mean-square stable and that the prescribed H∞ performance requirement is met. Based on the derived sufficient condition, the explicit expression of the desired estimator gain has been given. Finally, a numerical example has been provided to show the usefulness and effectiveness of the proposed estimator design method.
3 Event-Triggered H∞ State Estimation for Delayed Stochastic Memristive Neural Networks with Missing Measurements: The Discrete Time Case
It is now well recognized that time-delays are inherent characteristics of the hardware implementation of neural networks, and that they may lead to complex dynamic behaviors such as oscillation, divergence, and even instability in the network systems. For continuous-time MNNs, various time-delays (including constant delays, time-varying delays, distributed delays and mixed time-delays) have so far been introduced to model the lags in signal transmission due to the finite switching speeds of amplifiers, and the impacts of the time delays on the dynamical behaviors of continuous-time MNNs have been thoroughly examined in the literature. Nevertheless, the corresponding results for discrete-time MNNs have been very few, and this constitutes the main motivation of the present research to shorten such a gap. In neural network applications, it is quite common that the neuron states are not fully accessible, probably due to the large scale of the networks and the implementation cost of monitoring the network output, and this makes it significantly difficult to analyze the dynamical behaviors of real-time neural networks. Therefore, in such cases, it becomes a prerequisite to estimate the neuron states through available network measurements before exploiting the merits of RNNs in tasks such as classification, approximation and optimization. Again, as far as the discrete-time MNN is concerned, the state estimation problem subject to missing measurements remains an open problem that deserves further investigation. In the course of implementing large-scale RNNs consisting of a large number of computing units, many resources (e.g. processing, storage, communications) have to be consumed, and the energy-saving issue under resource constraints is becoming an emerging research topic that has started to draw some initial attention.
For state estimation problems of RNNs, it is quite desirable to avoid unnecessary signal transmissions (from the measured network outputs to the estimator) and to reduce the network burden as long as certain estimation performance requirements (accuracy and convergence) are guaranteed. Recently, the event-triggered mechanism (EVM) has received much research attention because of its distinctive merits in saving resources.
Therefore, a seemingly natural idea is to investigate the event-triggered state estimation problem for discrete-time MNNs and see how the efficiency of energy utilization can be improved. It should be pointed out that the event-triggered state estimation problem for RNNs has not received adequate research attention yet, not to mention the case when the RNN is in the discrete-time setting coupled with time-delays as well as deterministic and stochastic disturbances. More importantly, compared with the existing results, the discrete-time stochastic memristive neural networks (DSMNNs) with time-varying delays considered here are more comprehensive and practical than the established ones. In this case, both the stability analysis based on the theory of differential inclusion and the state estimation approach without an event-triggered scheme are no longer applicable. Motivated by the above discussions, in this chapter, we endeavor to study the event-triggered H∞ state estimation problem for delayed stochastic MNNs with missing measurements. The problem addressed is to estimate the neuron states through available output measurements subject to probabilistic missing values under an event-triggered mechanism. By utilizing a Lyapunov-Krasovskii functional and stochastic analysis techniques, both the existence conditions and the explicit expression of the desired state estimators are established, under which the estimation error dynamics is stable and the prescribed H∞ disturbance attenuation level is guaranteed.
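The core of any event-triggered scheme is a simple transmission test. The sketch below implements a generic send-on-delta trigger (the drift signal and threshold are hypothetical, not the triggering condition derived later in this chapter): the measurement is released to the estimator only when it deviates sufficiently from the last transmitted value.

```python
def event_triggered_transmissions(ys, threshold):
    """Send-on-delta event trigger sketch: release y(k) to the estimator
    only when it differs from the last transmitted value by more than
    the threshold (squared-Euclidean-norm test)."""
    sent, last = [], None
    for k, y in enumerate(ys):
        if last is None or sum((a - b) ** 2 for a, b in zip(y, last)) > threshold:
            sent.append(k)       # transmit at instant k
            last = y             # remember the last transmitted value
    return sent

# a slowly drifting measurement triggers only occasionally
ys = [(0.01 * k, 0.0) for k in range(10)]
events = event_triggered_transmissions(ys, threshold=0.0015)
```

Here only 3 of 10 instants lead to a transmission, which is exactly the resource saving that motivates event-triggered state estimation; the analytical task of the chapter is to prove that the estimation performance survives the skipped transmissions.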
3.1 Problem Formulation
Consider the following n-neuron discrete-time stochastic memristive neural network with time delays:

x(k+1) = D(x(k))x(k) + A(x(k))f(x(k)) + B(x(k))g(x(k−τ(k))) + Lς(k) + σ(k, x(k), x(k−τ(k)))w(k)   (3.1)

where x(k) = [x_1(k) x_2(k) ⋯ x_n(k)]^T is the neural state vector; D(x(k)) = diag{d_1(x_1(k)), d_2(x_2(k)), ⋯, d_n(x_n(k))} is the self-feedback matrix with entries |d_i(x_i(k))| < 1; A(x(k)) = (a_ij(x_i(k)))_{n×n} and B(x(k)) = (b_ij(x_i(k)))_{n×n} are the connection and the delayed connection weight matrices, respectively; ς(k) ∈ R^n is the external disturbance input vector belonging to l_2([0, ∞); R^n); L = diag{l_1, l_2, ⋯, l_n} is the intensity matrix of the deterministic disturbance; and τ(k) represents the time-varying transmission delay, which satisfies

τ_m ≤ τ(k) ≤ τ_M,  k = 1, 2, ⋯   (3.2)
where the positive integers τ_m and τ_M are the lower and upper delay bounds, respectively; w(k) is a scalar Wiener process on (Ω, F, P) with

E[w(k)] = 0,  E[w^2(k)] = 1,  E[w(i)w(j)] = 0 (i ≠ j),   (3.3)

and σ : R × R^n × R^n → R^n is the noise intensity function vector (for the stochastic disturbance) satisfying

σ^T(k, u, v)σ(k, u, v) ≤ ρ_1 u^T u + ρ_2 v^T v,  u, v ∈ R^n   (3.4)

where ρ_1 and ρ_2 are known positive constants; f(x(k)) = [f_1(x_1(k)) f_2(x_2(k)) ⋯ f_n(x_n(k))]^T and g(x(k−τ(k))) = [g_1(x_1(k−τ(k))) g_2(x_2(k−τ(k))) ⋯ g_n(x_n(k−τ(k)))]^T are the nonlinear neuron activation functions.

Remark 3.1 Real-world neural networks are often disturbed by environmental noises whose intensity is bounded. Therefore, in this chapter, we assume that the DSMNN is subject to multiplicative stochastic noises whose intensity is described by the function vector σ(·). We also assume that the noise intensity σ(·) depends on the time and the system states and is bounded from above as in (3.4).

For the neuron activation functions, the following assumption is needed.

Assumption 3.1 [82] The neuron activation functions f(·) and g(·) satisfy

[f(x) − f(y) − Λ_1(x − y)]^T [f(x) − f(y) − Λ_2(x − y)] ≤ 0,  x, y ∈ R^n (x ≠ y),   (3.5)

[g(x) − g(y) − Υ_1(x − y)]^T [g(x) − g(y) − Υ_2(x − y)] ≤ 0,  x, y ∈ R^n (x ≠ y)   (3.6)

where Λ_1, Λ_2, Υ_1 and Υ_2 are constant matrices.

Chua [18] stated that a memristor needs to exhibit only two sufficiently distinct equilibrium states, since digital computer applications require only two memory states. On the other hand, along similar lines to [176], the memristor-based neural network (3.1) can be implemented by a large-scale integration circuit. Then, according to the feature of the memristor and its current–voltage characteristic, d_i(x_i(k)), a_ij(x_i(k)) and b_ij(x_i(k)) are state-dependent functions of the form

d_i(x_i(·)) = (1/C_i)[Σ_{j=1}^{n}(1/R_{a_ij} + 1/R_{b_ij}) + 1/R_i] = { d̂_i, |x_i(·)| > κ_i;  ď_i, |x_i(·)| ≤ κ_i },

s_ij(x_i(·)) = sign_ij/(C_i R_{s_ij}) = { ŝ_ij, |x_i(·)| > κ_i;  š_ij, |x_i(·)| ≤ κ_i }   (3.7)
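Each entry of (3.7) is thus a two-level function of the corresponding neuron state: one memristance-determined value above the switching jump and another below it. A minimal sketch of this evaluation (the helper and its argument names are illustrative, not notation from the book):

```python
def memristive_weight(x_i, kappa_i, level_above, level_below):
    """Two-level state-dependent weight of the form (3.7): the value
    switches at the jump |x_i| = kappa_i."""
    return level_above if abs(x_i) > kappa_i else level_below

# Instance using the two levels of a_11 from the numerical example
# later in this chapter (switching jump 0.02).
a_11 = lambda x: memristive_weight(x, 0.02, -0.20, 0.90)
```

Every connection weight in (3.1) is evaluated this way at each step, which is what makes D(x(k)), A(x(k)) and B(x(k)) state-dependent.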
where s stands for a or b, C_i is the capacitor, and R_i is the parallel resistor. R_{a_ij} and R_{b_ij} are, respectively, the non-delayed and the time-varying delayed connection memristors between the feedback f_j(·) and the state x_i(·);

sign_ij = { 1, i ≠ j;  −1, i = j };

the switching jumps satisfy κ_i > 0, |d̂_i| < 1, |ď_i| < 1, and ŝ_ij and š_ij are constants. Denote

d_i^− = min{d̂_i, ď_i},  d_i^+ = max{d̂_i, ď_i},
a_ij^− = min{â_ij, ǎ_ij},  a_ij^+ = max{â_ij, ǎ_ij},
b_ij^− = min{b̂_ij, b̌_ij},  b_ij^+ = max{b̂_ij, b̌_ij},
D^− = diag{d_1^−, d_2^−, ⋯, d_n^−},  D^+ = diag{d_1^+, d_2^+, ⋯, d_n^+},
A^− = (a_ij^−)_{n×n},  A^+ = (a_ij^+)_{n×n},
B^− = (b_ij^−)_{n×n},  B^+ = (b_ij^+)_{n×n}.

It is clear that D(x(k)) ∈ [D^−, D^+], A(x(k)) ∈ [A^−, A^+] and B(x(k)) ∈ [B^−, B^+]. Define

D̄ ≜ (D^+ + D^−)/2 = diag{(d_1^+ + d_1^−)/2, (d_2^+ + d_2^−)/2, ⋯, (d_n^+ + d_n^−)/2},
Ā ≜ (A^+ + A^−)/2 = ((a_ij^+ + a_ij^−)/2)_{n×n},
B̄ ≜ (B^+ + B^−)/2 = ((b_ij^+ + b_ij^−)/2)_{n×n}.

Then, the matrices D(x(k)), A(x(k)) and B(x(k)) can be written as

D(x(k)) = D̄ + ∆D(k),  A(x(k)) = Ā + ∆A(k),  B(x(k)) = B̄ + ∆B(k)

where

∆D(k) = Σ_{i=1}^{n} e_i s_i(k) e_i^T,  ∆A(k) = Σ_{i,j=1}^{n} e_i t_ij(k) e_j^T,  ∆B(k) = Σ_{i,j=1}^{n} e_i p_ij(k) e_j^T.   (3.8)
Here, e_k ∈ R^n is the column vector with the kth element being 1 and the others being 0, and s_i(k), t_ij(k) and p_ij(k) are unknown scalars satisfying |s_i(k)| ≤ d̃_i, |t_ij(k)| ≤ ã_ij and |p_ij(k)| ≤ b̃_ij with

d̃_j = (d_j^+ − d_j^−)/2,  ã_ij = (a_ij^+ − a_ij^−)/2,  b̃_ij = (b_ij^+ − b_ij^−)/2.
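The midpoint/radius split behind (3.8) can be computed directly from the two levels of each weight. A small sketch under the conventions of this section (function name illustrative):

```python
def interval_split(level_above, level_below):
    """Split a two-valued weight into the nominal midpoint and the
    uncertainty radius, so that w(x) = w_bar + delta(x) with
    |delta(x)| <= w_tilde, cf. (3.8) and the bounds above."""
    w_plus, w_minus = max(level_above, level_below), min(level_above, level_below)
    w_bar = (w_plus + w_minus) / 2    # entry of D_bar, A_bar or B_bar
    w_tilde = (w_plus - w_minus) / 2  # bound on s_i(k), t_ij(k) or p_ij(k)
    return w_bar, w_tilde

# Levels of a_11 from the example in Section 3.3.
w_bar, w_tilde = interval_split(-0.20, 0.90)
# Both levels stay within radius w_tilde of the midpoint.
assert abs(-0.20 - w_bar) <= w_tilde + 1e-12
assert abs(0.90 - w_bar) <= w_tilde + 1e-12
```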
The parameter matrices ∆D(k), ∆A(k) and ∆B(k) can be rewritten in the following compact form:

[∆D(k) ∆A(k) ∆B(k)] = HF(k)E   (3.9)

where H = [H H H] and E = diag{E_1, E_2, E_3} are known constant matrices with

H = [H_1 H_2 ⋯ H_n],  H_i = [e_i e_i ⋯ e_i] (n copies of e_i),
E_i = [E_i1^T E_i2^T ⋯ E_in^T]^T  (i = 1, 2, 3),
E_1j = [e_1^T e_2^T ⋯ e_{j−1}^T d̃_j e_j^T e_{j+1}^T ⋯ e_n^T]^T,
E_2j = [ã_j1 e_1^T ã_j2 e_2^T ⋯ ã_jn e_n^T]^T,
E_3j = [b̃_j1 e_1^T b̃_j2 e_2^T ⋯ b̃_jn e_n^T]^T,

and F(k) = diag{F_1(k), F_2(k), F_3(k)} is an unknown time-varying matrix given by

F_i(k) = diag{F_i1(k), ⋯, F_in(k)},
F_1j(k) = diag{0, ⋯, 0 (j−1 zeros), s_j(k) d̃_j^{−1}, 0, ⋯, 0 (n−j zeros)},
F_2j(k) = diag{t_j1(k) ã_j1^{−1}, ⋯, t_jn(k) ã_jn^{−1}},
F_3j(k) = diag{p_j1(k) b̃_j1^{−1}, ⋯, p_jn(k) b̃_jn^{−1}}.
It is not difficult to verify that the matrices F_i(k) (i = 1, 2, 3) satisfy F_i^T(k)F_i(k) ≤ I_{n²}, where I_{n²} denotes the n²-dimensional identity matrix.

Remark 3.2 In most of the existing literature, the norm-bounded condition on the uncertainties is imposed as an assumption. In this chapter, however, the norm-bounded uncertainties are derived from the state-dependent switching, which is itself based on the feature of the memristor and its current–voltage characteristic.

In this chapter, the network output of (3.1) is of the following form:

y(k) = α(k)Cx(k) + Nξ(k),
(3.10)
z(k) =M x(k)
(3.11)
where y(k) ∈ R^m is the measurement output, z(k) ∈ R^r is the output to be estimated and ξ(k) ∈ R^l is the disturbance input belonging to l_2([0, ∞); R^l). The stochastic variable α(k) is a Bernoulli-distributed white sequence taking values of 0 or 1 with

Prob{α(k) = 1} = ᾱ,  Prob{α(k) = 0} = 1 − ᾱ   (3.12)

where ᾱ ∈ [0, 1] is a known constant.

For resource-constrained systems, the event-based mechanism has proven to be capable of reducing the information exchange frequency and therefore improving the efficiency of resource utilization. For the purpose of introducing the event-based scheduling, we first denote the triggering instant sequence by 0 ≤ k_0 ≤ ⋯ ≤ k_ι ≤ ⋯ and then define an event generator function ϕ(·, ·) : R^m × R → R as follows:

ϕ(μ(k), δ) = μ^T(k)μ(k) − δ y^T(k)y(k)
(3.13)
where μ(k) = y(k) − y(k_ι). Here, y(k_ι) is the measurement at the latest event time (triggering instant) and δ > 0 is a given positive scalar. The execution (i.e. the transmission of the measurement output to the estimator) is triggered as long as the condition ϕ(μ(k), δ) > 0
(3.14)
is satisfied. Therefore, the next triggering instant is determined iteratively by kι+1 = inf{k ∈ N|k > kι , ϕ(µ(k), δ) > 0}.
(3.15)
Remark 3.3 The event-triggered scheme is a kind of sampling that generates measurements and transmits the data only after the occurrence of a certain external event. Compared to the conventional time-triggered scheme, the event-triggered scheme shows a significant advantage in reducing the number of sampling instants. In other words, the signals are updated only when necessary and, therefore, unnecessary computation and transmission can be avoided. Moreover, it is not difficult to see that the introduced triggering condition (3.14) reduces the amount of transmitted data, so that an adequate trade-off can be achieved between the efficiency of resource utilization and the estimation performance.

In order to estimate the neuron state x(k) based on the event-triggered scheme (3.14), we employ the following state estimator:

x̂(k+1) = D̄x̂(k) + Āf(x̂(k)) + B̄g(x̂(k−τ(k))) + K(y(k_ι) − Cx̂(k))   (3.16)

for k ∈ [k_ι, k_{ι+1}), where x̂(k) ∈ R^n is the estimate of the neuron state x(k) and K ∈ R^{n×m} is the estimator gain to be determined.
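The triggering logic (3.13)–(3.15) is easy to state operationally: hold the last transmitted measurement y(k_ι), and transmit again only when the deviation energy exceeds the δ-scaled measurement energy. A sketch (function names are illustrative; measurements are plain lists):

```python
def triggered(y_k, y_last, delta):
    """Event generator (3.13)-(3.14): fire when mu^T mu > delta * y^T y,
    where mu = y(k) - y(k_iota) is the deviation from the last
    transmitted measurement."""
    mu = [a - b for a, b in zip(y_k, y_last)]
    return sum(m * m for m in mu) > delta * sum(v * v for v in y_k)

def triggering_instants(measurements, delta):
    """Iterate rule (3.15) over a measurement sequence; instant 0 is
    taken as the first transmission."""
    instants, y_last = [0], measurements[0]
    for k in range(1, len(measurements)):
        if triggered(measurements[k], y_last, delta):
            instants.append(k)
            y_last = measurements[k]  # hold y(k_iota) until the next event
    return instants
```

A larger δ suppresses more transmissions, which is exactly the trade-off discussed in Remark 3.3.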
The dynamics of the estimation error can be obtained from (3.1), (3.10), (3.11) and (3.16) as follows:

e(k+1) = (D̄ − KC)e(k) + ∆D(k)x(k) + Kμ(k) + (1 − ᾱ)KCx(k) + Āf̃(x(k)) + ∆A(k)f(x(k)) + B̄g̃(x(k−τ(k))) + ∆B(k)g(x(k−τ(k))) + Lς(k) − KNξ(k) − (α(k) − ᾱ)KCx(k) + σ(k, x(k), x(k−τ(k)))w(k),
z̃(k) = Me(k),  k ∈ [k_ι, k_{ι+1})   (3.17)

where e(k) ≜ x(k) − x̂(k), f̃(x(k)) ≜ f(x(k)) − f(x̂(k)), g̃(x(k−τ(k))) ≜ g(x(k−τ(k))) − g(x̂(k−τ(k))), and z̃(k) is the output estimation error. Then, by setting η(k) = [x^T(k) e^T(k)]^T, we have the following augmented system:

η(k+1) = W̃_1 η(k) + (ᾱ − α(k))W_2 η(k) + W̃_3 f⃗(k) + W̃_4 g⃗(k−τ(k)) + W_5 ζ(k) + W_6 w(k) + W_7 μ⃗(k),
z̃(k) = Mη(k),  k ∈ [k_ι, k_{ι+1})   (3.18)

where

f⃗(k) = [f^T(x(k)) f̃^T(x(k))]^T,  g⃗(k−τ(k)) = [g^T(x(k−τ(k))) g̃^T(x(k−τ(k)))]^T,
ζ(k) = [ς^T(k) ξ^T(k)]^T,  μ⃗(k) = [0 μ^T(k)]^T,  M = [0 M],
W̃_1 = W_1 + ∆𝐃(k),  W̃_3 = W_3 + ∆𝐀(k),  W̃_4 = W_4 + ∆𝐁(k),
W_1 = [D̄ 0; (1−ᾱ)KC D̄−KC],  W_2 = [0 0; KC 0],
W_3 = diag{Ā, Ā},  W_4 = diag{B̄, B̄},  W_7 = diag{0, K},
W_5 = [L 0; L −KN],
∆𝐃(k) = [∆D(k) 0; ∆D(k) 0],  ∆𝐀(k) = [∆A(k) 0; ∆A(k) 0],  ∆𝐁(k) = [∆B(k) 0; ∆B(k) 0],
W_6 = [σ^T(k, x(k), x(k−τ(k))) σ^T(k, x(k), x(k−τ(k)))]^T.

Our main aim in this chapter is to design a suitable H∞ state estimator for the stochastic memristive neural network (3.1). More specifically, we are interested in finding the gain matrix K such that the following two requirements are met simultaneously:
1) The augmented system (3.18) with ζ(k) = 0 is exponentially mean-square stable;
2) Under zero initial conditions, for a given disturbance attenuation level γ > 0 and all nonzero ζ(k), the output z̃(k) satisfies

Σ_{k=0}^{∞} E{‖z̃(k)‖²} ≤ γ² Σ_{k=0}^{∞} E{‖ζ(k)‖²}.   (3.19)
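On a single simulated path, requirement 2) can be checked empirically by comparing the energy of the output estimation error with the energy of the disturbance. (This replaces the expectations in (3.19) by one sample path, so it is only a sanity check, not a proof.)

```python
def empirical_attenuation(z_err, zeta):
    """Ratio of output-error energy to disturbance energy over a finite
    horizon; the H-infinity requirement (3.19) asks that this ratio not
    exceed gamma^2 (in expectation, over an infinite horizon)."""
    num = sum(sum(v * v for v in z) for z in z_err)
    den = sum(sum(v * v for v in z) for z in zeta)
    return num / den
```

For instance, with γ = 0.95 as in the example of this chapter, the ratio should stay below 0.95² ≈ 0.9025.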
3.2 Main Results
Before proceeding to the stability analysis of system (3.18), we introduce one lemma that will be useful in deriving our results.

Lemma 3.1 [9] Let N = N^T, H and E be real matrices with appropriate dimensions, and let F^T(k)F(k) ≤ I, where I denotes the identity matrix of compatible dimension. Then the inequality N + HFE + (HFE)^T < 0 holds if and only if there exists a positive scalar ε such that N + εHH^T + ε^{−1}E^T E < 0 or, equivalently,

[N εH E^T; εH^T −εI 0; E 0 −εI] < 0.

For the stability of system (3.18), we have the following result.

Theorem 3.1 Let K be a given constant matrix. Then, under Assumption 3.1, the augmented system (3.18) with ζ(k) = 0 is exponentially mean-square stable if there exist positive definite matrices P = diag{P_1, P_2}, Q and positive scalars λ_1^*, λ_2^* and λ_j (j = 1, 2, 3) satisfying the following inequalities:

P < Λ^* ⊗ I_n,   (3.20)

Φ̃ = [ Θ̃_11 0 Θ̃_13 Θ_14 Θ_15 ⋯ ;  ∗ Θ̃_22 0 Θ̃_24 0 ⋯ ;  ∗ ∗ Θ̃_33 Θ_34 Θ_35 ⋯ ;  ∗ ∗ ∗ Θ̃_44 ⋯ ;  ∗ ∗ ∗ ∗ ⋱ ] < 0.
3.3 An Illustrative Example

In this example, the switching jumps are taken as κ_1 = κ_2 = 0.02. The self-feedback terms satisfy d_1(x_1(·)) = 0.65 for |x_1(·)| ≤ 0.02 and d_2(x_2(·)) = 0.25 for |x_2(·)| ≤ 0.02, and the state-dependent connection weights are

a_11(x_1(·)) = { −0.20, |x_1(·)| > 0.02;  0.90, |x_1(·)| ≤ 0.02 },  a_12(x_1(·)) = { −0.40, |x_1(·)| > 0.02;  0.20, |x_1(·)| ≤ 0.02 },
a_21(x_2(·)) = { 0.68, |x_2(·)| > 0.02;  −0.22, |x_2(·)| ≤ 0.02 },  a_22(x_2(·)) = { 0.32, |x_2(·)| > 0.02;  0.05, |x_2(·)| ≤ 0.02 },
b_11(x_1(·)) = { 0.20, |x_1(·)| > 0.02;  0.50, |x_1(·)| ≤ 0.02 },  b_12(x_1(·)) = { 0.60, |x_1(·)| > 0.02;  −0.10, |x_1(·)| ≤ 0.02 },
b_21(x_2(·)) = { 0.50, |x_2(·)| > 0.02;  −0.40, |x_2(·)| ≤ 0.02 },  b_22(x_2(·)) = { −0.30, |x_2(·)| > 0.02;  0.10, |x_2(·)| ≤ 0.02 },
∆D(k) = [0.06 sin(0.6k) 0; 0 0.06 sin(0.6k)],
∆A(k) = [0.09 sin(0.8k) 0.18 sin(0.8k); 0.04 sin(0.8k) 0.22 sin(0.8k)],
∆B(k) = [0.09 cos(0.5k) 0.03 cos(0.5k); 0.09 cos(0.5k) 0.12 cos(0.5k)],
C = [0.10 0.20; 0.20 0.30],  N = [0.10 0; 0 0.20],
L = [0.08 0; 0 0.15],  M = [0.35 0.30].
The activation functions f(x(k)) and g(x(k)) are chosen as

f(x(k)) = [0.10x_1(k) − tanh(0.40x_1(k));  tanh(0.50x_2(k))],
g(x(k)) = [tanh(0.10x_1(k));  0.02x_2(k) − 0.06 tanh(x_2(k))],

which satisfy Assumption 3.1 with

Λ_1 = [−0.30 0; 0 0],  Λ_2 = [0.10 0; 0 0.50],
Υ_1 = [0 0; 0 −0.04],  Υ_2 = [0.10 0; 0 0.02].

In the example, the probability is taken as ᾱ = 0.85, the disturbance attenuation level is chosen as γ = 0.95, the constant scalars are ρ_1 = ρ_2 = 0.25, and the time-varying delay is set as τ(k) = 3 − (sin(kπ))². It can then be verified that the upper and lower bounds of the time-varying delay are τ_M = 4 and τ_m = 2, respectively. By solving the LMI (3.36) in Theorem 3.3 with the help of the MATLAB toolbox, we obtain the matrices P_2 and X as

P_2 = [2.2358 0.0739; 0.0739 0.9423],  X = [0.1291 0.3168; 0.5607 0.6733]

and then, according to K = P_2^{−1}X, the desired estimator parameter is designed as

K = [0.0382 0.1184; 0.5920 0.7052].

In the simulation, the external disturbance inputs are assumed to be ς_1(k) = ς_2(k) = ξ_1(k) = ξ_2(k) = 3 exp(−0.30k) cos(0.20k). Simulation results are shown in Figs. 3.1–3.4. Figs. 3.1 and 3.2 plot the state and its estimate for node 1 and node 2, respectively. The estimation errors for node 1
and node 2 are presented in Fig. 3.3. The event-based release instants and release intervals of the proposed event-triggered scheme are displayed in Fig. 3.4. The simulation results have confirmed the effectiveness of the estimation scheme presented in this chapter.

FIGURE 3.1: The state and its estimate of node 1.
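The final step K = P_2^{−1}X of the example involves no LMI machinery and can be reproduced with an explicit 2×2 inverse; the check below uses the P_2 and X reported in this section:

```python
def solve_2x2(P, X):
    """Compute K = P^{-1} X for 2x2 matrices via the explicit inverse."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    P_inv = [[ P[1][1] / det, -P[0][1] / det],
             [-P[1][0] / det,  P[0][0] / det]]
    return [[sum(P_inv[i][k] * X[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P2 = [[2.2358, 0.0739], [0.0739, 0.9423]]
X = [[0.1291, 0.3168], [0.5607, 0.6733]]
K = solve_2x2(P2, X)  # agrees with the reported gain to four decimals
```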
FIGURE 3.2: The state and its estimate of node 2.
FIGURE 3.3: Estimation errors of node 1 and node 2.
FIGURE 3.4: Event-based release instants and release intervals.
3.4 Summary
In this chapter, we have investigated the event-triggered state estimation problem for a class of discrete-time stochastic memristive neural networks with time-varying delays and randomly occurring missing measurements. In the measurement output model, a stochastic variable obeying the Bernoulli distribution has been introduced to characterize the randomly occurring missing measurements. Accounting for the state-dependent feature of memristive neural networks, and by utilizing a Lyapunov–Krasovskii functional and stochastic analysis techniques, an event-triggered state estimator has been designed and sufficient conditions have been given to ensure both the exponential mean-square stability of the output estimation error dynamics and the prescribed H∞ performance requirement. Based on the derived sufficient conditions, the explicit expression of the desired estimator gain has been given. Finally, a numerical example has been provided to show the usefulness and effectiveness of the proposed estimator design method. Further research topics include the extension of the main results to more complex systems with more complicated network-induced phenomena, see e.g. [13, 22, 26, 74, 78, 91, 92, 172, 199, 209].
4 H∞ State Estimation for Discrete-Time Stochastic Memristive BAM Neural Networks with Mixed Time-Delays
Bidirectional associative memory (BAM) neural networks, which were initially proposed by Kosko [59, 60], comprise fully interconnected neurons on two layers. Over the past two decades, the dynamics analysis of BAM neural networks has been attracting persistent research interest, with successful applications to a variety of practical areas such as associative memories, pattern recognition, image processing and combinatorial optimization. For some recent results on the exponential stability analysis of BAM neural networks, we refer the readers to [66, 186] and the references therein. It is noteworthy that most research effort made so far has been dedicated to continuous-time yet deterministic MNNs under the framework of differential inclusions and Filippov solutions. Nevertheless, thanks to the rapid development of computer technologies, the popularity of digitalization in today's world has made it practically important and theoretically significant to investigate discrete-time MNNs, especially in the digital implementation stage. On the other hand, it is often the case that the signal transmission in an actual MNN is subject to noisy perturbations resulting from random fluctuations in the release of neurotransmitters and other probabilistic causes. Unfortunately, when it comes to discrete-time stochastic memristive BAM (DSMBAM) neural networks, the stability analysis issue has not been explored yet, and this constitutes one of the motivations for the current study. It is widely acknowledged that discrete-time MNNs pose several inherent challenges for the corresponding dynamics analysis. One of the main challenges stems from the switching (between two states) in each connection weight of an MNN. Another challenge is the level of complexity brought by the two-layer structure of the BAMs coupled with the switching behaviors of the addressed MNNs.
It is, therefore, the main aim of this chapter to overcome the identified challenges and launch a major study on the state estimation problem for DSMBAM neural networks. Motivated by the above discussion, in this chapter, we endeavor to investigate the H∞ state estimation problem for DSMBAM neural networks with mixed time-delays. A distinctive characteristic of MNNs is that the network parameters are dependent on the neuron states. As such, we first develop a
novel switching model to account for such a state-dependent characteristic for the network parameters in the discrete-time setting. Based on the proposed switching functions, we then propose an appropriate Lyapunov-Krasovskii functional in order to derive sufficient conditions guaranteeing the existence of the desired state estimator gains. Explicit expressions for the estimator gains are parameterized by utilizing the semi-definite programming method. Finally, a simulation example is employed to demonstrate the usefulness and effectiveness of the proposed theoretical results.
4.1 Problem Formulation and Preliminaries
In this section, we consider the following DSMBAM neural network with mixed time delays:

x_i(k+1) = d_i(x_i(k))x_i(k) + Σ_{j=1}^{n} a_ij(x_i(k)) f_j(x̃_j(k)) + Σ_{j=1}^{n} b_ij(x_i(k)) f_j(x̃_j(k−τ(k))) + Σ_{j=1}^{n} c_ij(x_i(k)) Σ_{ν=1}^{ς} f_j(x̃_j(k−ν)) + σ_i(k, x_i(k), x̃_j(k−τ(k)))w(k) + l_i ξ(k),  i ∈ N,

x̃_j(k+1) = d̃_j(x̃_j(k))x̃_j(k) + Σ_{i=1}^{n} ã_ij(x̃_j(k)) g_i(x_i(k)) + Σ_{i=1}^{n} b̃_ij(x̃_j(k)) g_i(x_i(k−ι(k))) + Σ_{i=1}^{n} c̃_ij(x̃_j(k)) Σ_{ν=1}^{ς̃} g_i(x_i(k−ν)) + σ̃_j(k, x̃_j(k), x_i(k−ι(k)))w(k) + l̃_j ξ̃(k),  j ∈ N   (4.1)

where, for k = 1, 2, …, x_i(k) and x̃_j(k) describe the states of neurons i and j at time k; ξ(k) and ξ̃(k) ∈ l_2([0, ∞); R) are the exogenous disturbance inputs; f_j(x̃_j(k)), f_j(x̃_j(k−τ(k))), g_i(x_i(k)) and g_i(x_i(k−ι(k))) denote the neuron activation functions; ς and ς̃ represent the upper bounds of the distributed time-varying delays; and τ(k) and ι(k) are the discrete time-varying delays satisfying

τ_m ≤ τ(k) ≤ τ_M,  ι_m ≤ ι(k) ≤ ι_M   (4.2)

where τ_m, ι_m, τ_M and ι_M are known positive integers. Here, the white noise w(k) is a scalar Wiener process (Brownian motion) defined on (Ω, F, P) with

E[w(k)] = 0,  E[w^2(k)] = 1,  E[w(p)w(q)] = 0 (p ≠ q);   (4.3)

furthermore, σ_i(·, ·, ·) : R × R × R → R and σ̃_j(·, ·, ·) : R × R × R → R are the noise intensity functions; d_i(x_i(k)), a_ij(x_i(k)), b_ij(x_i(k)),
c_ij(x_i(k)), d̃_j(x̃_j(k)), ã_ij(x̃_j(k)), b̃_ij(x̃_j(k)) and c̃_ij(x̃_j(k)) represent the memristive connection weights, and

d_i(x_i(k)) ≜ (1/C_i)[Σ_{j=1}^{n}(1/R_{a_ij} + 1/R_{b_ij} + 1/R_{c_ij}) × sign(i, j) + 1/R_i],
a_ij(x_i(k)) ≜ sign(i, j)/(C_i R_{a_ij}),  b_ij(x_i(k)) ≜ sign(i, j)/(C_i R_{b_ij}),  c_ij(x_i(k)) ≜ sign(i, j)/(C_i R_{c_ij}),
ã_ij(x̃_j(k)) ≜ sign(i, j)/(C̃_j R̃_{ã_ij}),  b̃_ij(x̃_j(k)) ≜ sign(i, j)/(C̃_j R̃_{b̃_ij}),  c̃_ij(x̃_j(k)) ≜ sign(i, j)/(C̃_j R̃_{c̃_ij}),
d̃_j(x̃_j(k)) ≜ (1/C̃_j)[Σ_{i=1}^{n}(1/R̃_{ã_ij} + 1/R̃_{b̃_ij} + 1/R̃_{c̃_ij}) × sign(i, j) + 1/R̃_j]

where sign(i, j) = 1 if i ≠ j holds, and −1 otherwise; R_i and R̃_j denote the parallel resistors corresponding to the capacitors C_i and C̃_j, respectively; and R_{κ_ij} and R̃_{κ̃_ij} stand for the memristors between the neuron states and the neuron activation functions, where κ stands for a, b or c.

Remark 4.1 Notice that the state-dependent characteristic of the memristive connection weights is well reflected in the function sign(i, j) and, as such, discrete-time MNNs can generally be regarded as a class of state-dependent switching systems. Traditional neural networks, however, do not possess such a switching behavior (see e.g. [73, 85, 88]) and their dynamical behaviors are much easier to analyze than in the MNN case.

In view of the need for digital computation and the state-dependent behavior of memristors, it has been affirmed in [19] that only two equilibrium states are required for a memristor. In this chapter, in order to represent the memristive weights in the MNN (4.1), based on Chua's analysis [19], a convenient rule is constructed as follows:

d_i(x_i(k)) ≜ { d_i^+, |x_i(k)| > ℓ_i;  d_i^−, |x_i(k)| ≤ ℓ_i },  d̃_j(x̃_j(k)) ≜ { d̃_j^+, |x̃_j(k)| > ℓ̃_j;  d̃_j^−, |x̃_j(k)| ≤ ℓ̃_j },
a_ij(x_i(k)) ≜ { a_ij^+, |x_i(k)| > ℓ_i;  a_ij^−, |x_i(k)| ≤ ℓ_i },  ã_ij(x̃_j(k)) ≜ { ã_ij^+, |x̃_j(k)| > ℓ̃_j;  ã_ij^−, |x̃_j(k)| ≤ ℓ̃_j },
b_ij(x_i(k)) ≜ { b_ij^+, |x_i(k)| > ℓ_i;  b_ij^−, |x_i(k)| ≤ ℓ_i },  b̃_ij(x̃_j(k)) ≜ { b̃_ij^+, |x̃_j(k)| > ℓ̃_j;  b̃_ij^−, |x̃_j(k)| ≤ ℓ̃_j },
c_ij(x_i(k)) ≜ { c_ij^+, |x_i(k)| > ℓ_i;  c_ij^−, |x_i(k)| ≤ ℓ_i },  c̃_ij(x̃_j(k)) ≜ { c̃_ij^+, |x̃_j(k)| > ℓ̃_j;  c̃_ij^−, |x̃_j(k)| ≤ ℓ̃_j }   (4.4)
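Setting the noise and exogenous inputs to zero, one step of the two-layer recursion (4.1) for a single neuron per layer can be sketched as follows (the weight values, history layout and tanh activations are illustrative assumptions, not parameters from the book):

```python
import math

def bam_step(x, xt, x_hist, xt_hist, tau_k, iota_k, p):
    """One deterministic step of the BAM recursion (4.1) for n = 1:
    x is the first-layer state, xt the second-layer state; x_hist[m]
    and xt_hist[m] hold the states m steps in the past; p collects the
    scalar connection weights and distributed-delay bounds."""
    f = math.tanh  # second-layer activation (assumed sector-bounded)
    g = math.tanh  # first-layer activation
    x_next = (p['d'] * x + p['a'] * f(xt) + p['b'] * f(xt_hist[tau_k])
              + p['c'] * sum(f(xt_hist[nu]) for nu in range(1, p['vs'] + 1)))
    xt_next = (p['d_t'] * xt + p['a_t'] * g(x) + p['b_t'] * g(x_hist[iota_k])
               + p['c_t'] * sum(g(x_hist[nu]) for nu in range(1, p['vs_t'] + 1)))
    return x_next, xt_next
```

In the full model the weights d, a, b, c themselves switch with the state according to (4.4), and the noise terms σ_i w(k), σ̃_j w(k) are added on top.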
where ℓ_i > 0 and ℓ̃_j > 0 are known constants denoting the switching jumps; 0 < d_i^+ < 1, 0 < d_i^− < 1, 0 < d̃_j^+ < 1, 0 < d̃_j^− < 1; and a_ij^+, a_ij^−, b_ij^+, b_ij^−, c_ij^+, c_ij^−, ã_ij^+, ã_ij^−, b̃_ij^+, b̃_ij^−, c̃_ij^+ and c̃_ij^− are known constants.

Considering the switching behaviors of the memristive connection weights (4.4), we define the switching functions as follows: for i ≠ j, ϑ^d_ij(x_i(k)) ≜ 0 and ϑ̃^d_ij(x̃_j(k)) ≜ 0, and for the other cases,

ϑ^d_ii(x_i(k)) ≜ { 1, |x_i(k)| > ℓ_i;  −1, |x_i(k)| ≤ ℓ_i },  ϑ̃^d_ii(x̃_j(k)) ≜ { 1, |x̃_j(k)| > ℓ̃_j;  −1, |x̃_j(k)| ≤ ℓ̃_j },
ϑ^a_ij(x_i(k)) ≜ { 1, |x_i(k)| > ℓ_i;  −1, |x_i(k)| ≤ ℓ_i },  ϑ̃^a_ij(x̃_j(k)) ≜ { 1, |x̃_j(k)| > ℓ̃_j;  −1, |x̃_j(k)| ≤ ℓ̃_j },
ϑ^b_ij(x_i(k)) ≜ { 1, |x_i(k)| > ℓ_i;  −1, |x_i(k)| ≤ ℓ_i },  ϑ̃^b_ij(x̃_j(k)) ≜ { 1, |x̃_j(k)| > ℓ̃_j;  −1, |x̃_j(k)| ≤ ℓ̃_j },
ϑ^c_ij(x_i(k)) ≜ { 1, |x_i(k)| > ℓ_i;  −1, |x_i(k)| ≤ ℓ_i },  ϑ̃^c_ij(x̃_j(k)) ≜ { 1, |x̃_j(k)| > ℓ̃_j;  −1, |x̃_j(k)| ≤ ℓ̃_j }   (4.5)

Then, it can easily be seen that d_i(x_i(k)), a_ij(x_i(k)), b_ij(x_i(k)), c_ij(x_i(k)), d̃_j(x̃_j(k)), ã_ij(x̃_j(k)), b̃_ij(x̃_j(k)) and c̃_ij(x̃_j(k)) satisfy

d_i(x_i(k)) ≜ d̄_i + δ^d_i,  d̃_j(x̃_j(k)) ≜ d̄̃_j + δ^d̃_j,
a_ij(x_i(k)) ≜ ā_ij + δ^a_ij,  ã_ij(x̃_j(k)) ≜ ā̃_ij + δ^ã_ij,
b_ij(x_i(k)) ≜ b̄_ij + δ^b_ij,  b̃_ij(x̃_j(k)) ≜ b̄̃_ij + δ^b̃_ij,
c_ij(x_i(k)) ≜ c̄_ij + δ^c_ij,  c̃_ij(x̃_j(k)) ≜ c̄̃_ij + δ^c̃_ij   (4.6)

where d̄_i = (d_i^+ + d_i^−)/2, ā_ij = (a_ij^+ + a_ij^−)/2, b̄_ij = (b_ij^+ + b_ij^−)/2, c̄_ij = (c_ij^+ + c_ij^−)/2, d̄̃_j = (d̃_j^+ + d̃_j^−)/2, ā̃_ij = (ã_ij^+ + ã_ij^−)/2, b̄̃_ij = (b̃_ij^+ + b̃_ij^−)/2, c̄̃_ij = (c̃_ij^+ + c̃_ij^−)/2, and δ^d_i = ϑ^d_ii(x_i(k))(t^d_ii)², δ^a_ij = ϑ^a_ij(x_i(k))(t^a_ij)², δ^b_ij = ϑ^b_ij(x_i(k))(t^b_ij)², δ^c_ij = ϑ^c_ij(x_i(k))(t^c_ij)², δ^d̃_j = ϑ̃^d_jj(x̃_j(k))(t̃^d_jj)², δ^ã_ij = ϑ̃^a_ij(x̃_j(k))(t̃^a_ij)², δ^b̃_ij = ϑ̃^b_ij(x̃_j(k))(t̃^b_ij)², δ^c̃_ij = ϑ̃^c_ij(x̃_j(k))(t̃^c_ij)². Here, t^d_ii = (|d_i^+ − d_i^−|/2)^{1/2}, t̃^d_jj = (|d̃_j^+ − d̃_j^−|/2)^{1/2}, t^d_ij = 0 and t̃^d_ij = 0 (i ≠ j), t^a_ij = (|a_ij^+ − a_ij^−|/2)^{1/2}, t̃^a_ij = (|ã_ij^+ − ã_ij^−|/2)^{1/2}, t^b_ij = (|b_ij^+ − b_ij^−|/2)^{1/2}, t̃^b_ij = (|b̃_ij^+ − b̃_ij^−|/2)^{1/2}, t^c_ij = (|c_ij^+ − c_ij^−|/2)^{1/2} and t̃^c_ij = (|c̃_ij^+ − c̃_ij^−|/2)^{1/2}.
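The decomposition (4.6) expresses every switched weight as its midpoint plus a signed deviation ϑ·t². A quick numerical confirmation for a single weight (the values are illustrative; the identity holds whenever the '+' level is not smaller than the '−' level):

```python
def switched_weight(x, ell, w_plus, w_minus):
    """Two-level weight following rule (4.4)."""
    return w_plus if abs(x) > ell else w_minus

def decomposed_weight(x, ell, w_plus, w_minus):
    """Same weight via (4.5)-(4.6): midpoint plus theta * t^2, with
    theta the switching function and t^2 = |w_plus - w_minus| / 2.
    Coincides with switched_weight when w_plus >= w_minus."""
    theta = 1 if abs(x) > ell else -1
    w_bar = (w_plus + w_minus) / 2
    t_sq = abs(w_plus - w_minus) / 2
    return w_bar + theta * t_sq
```

This is precisely the recasting that turns the switched MNN into a system with norm-bounded parameter uncertainty, as discussed in Remark 4.3.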
Remark 4.2 For the sake of computer-based simulation, experimentation or computation, it is usually the case that continuous-time networks need to be discretized when they are implemented [88]. It is worth mentioning that the available results on the dynamics analysis of discrete-time MNNs have been really scattered. On the other hand, due to the state-dependent behavior of the memristive connection weights, it is mathematically difficult to discuss the dynamic behaviors of MNNs, and traditional approaches cannot be directly adopted to study the stability and state estimation problems for discrete-time MNNs. By introducing the novel switching functions (4.5), the MNN (4.1) can be transformed into a class of systems with parameter uncertainties, to which the existing robust analysis methods can be ideally applied.

For notational simplicity, we denote

x(k) = col_n{x_i(k)},  x̃(k) = col_n{x̃_j(k)},
f̃(k) = col_n{f_j(x̃_j(k))},  f̃_τ(k) = col_n{f_j(x̃_j(k−τ(k)))},
g(k) = col_n{g_i(x_i(k))},  g_ι(k) = col_n{g_i(x_i(k−ι(k)))},
A(x(k)) = (a_ij(x_i(k)))_{n×n},  Ã(x̃(k)) = (ã_ij(x̃_j(k)))_{n×n},
B(x(k)) = (b_ij(x_i(k)))_{n×n},  B̃(x̃(k)) = (b̃_ij(x̃_j(k)))_{n×n},
C(x(k)) = (c_ij(x_i(k)))_{n×n},  C̃(x̃(k)) = (c̃_ij(x̃_j(k)))_{n×n},
D(x(k)) = diag_n{d_i(x_i(k))},  D̃(x̃(k)) = diag_n{d̃_j(x̃_j(k))},
σ(k, x(k), x̃(k−τ(k))) = col_n{σ_i(k, x_i(k), x̃_j(k−τ(k)))},
σ̃(k, x̃(k), x(k−ι(k))) = col_n{σ̃_j(k, x̃_j(k), x_i(k−ι(k)))},
L = col_n{l_i},  L̃ = col_n{l̃_j}.
With the above notation, for k ∈ N⁺, we rewrite the DSMBAM neural network (4.1) in the following more compact form:

x(k+1) = D(x(k))x(k) + A(x(k))f̃(k) + B(x(k))f̃_τ(k) + C(x(k)) Σ_{ν=1}^{ς} f̃(k−ν) + σ(k, x(k), x̃(k−τ(k)))w(k) + Lξ(k),
x̃(k+1) = D̃(x̃(k))x̃(k) + Ã(x̃(k))g(k) + B̃(x̃(k))g_ι(k) + C̃(x̃(k)) Σ_{ν=1}^{ς̃} g(k−ν) + σ̃(k, x̃(k), x(k−ι(k)))w(k) + L̃ξ̃(k)   (4.7)

where the state-dependent matrices D(x(k)), A(x(k)), B(x(k)), C(x(k)), D̃(x̃(k)), Ã(x̃(k)), B̃(x̃(k)) and C̃(x̃(k)) are described by

D(x(k)) ≜ D_0 + ∆D(x(k)),  D̃(x̃(k)) ≜ D̃_0 + ∆D̃(x̃(k)),
A(x(k)) ≜ A_0 + ∆A(x(k)),  Ã(x̃(k)) ≜ Ã_0 + ∆Ã(x̃(k)),
B(x(k)) ≜ B_0 + ∆B(x(k)),  B̃(x̃(k)) ≜ B̃_0 + ∆B̃(x̃(k)),
C(x(k)) ≜ C_0 + ∆C(x(k)),  C̃(x̃(k)) ≜ C̃_0 + ∆C̃(x̃(k)).   (4.8)
Here, D_0 = diag_n{d̄_i}, A_0 = (ā_ij)_{n×n}, B_0 = (b̄_ij)_{n×n}, C_0 = (c̄_ij)_{n×n}, D̃_0 = diag_n{d̄̃_j}, Ã_0 = (ā̃_ij)_{n×n}, B̃_0 = (b̄̃_ij)_{n×n}, C̃_0 = (c̄̃_ij)_{n×n}, and ∆K(x(k)) = H_κ ϑ^κ(x(k)) E_κ and ∆K̃(x̃(k)) = H̃_κ ϑ̃^κ(x̃(k)) Ẽ_κ, in which K stands for D, A, B or C and, correspondingly, κ stands for d, a, b or c, with

H_κ = [t^κ_11 e_1 ⋯ t^κ_1n e_1 ⋯ t^κ_n1 e_n ⋯ t^κ_nn e_n],
ϑ^κ(x(k)) = diag{ϑ^κ_11(x_1(k)), ⋯, ϑ^κ_1n(x_1(k)), ⋯, ϑ^κ_n1(x_n(k)), ⋯, ϑ^κ_nn(x_n(k))},
E_κ^T = [t^κ_11 e_1 ⋯ t^κ_1n e_n ⋯ t^κ_n1 e_1 ⋯ t^κ_nn e_n],
H̃_κ = [t̃^κ_11 e_1 ⋯ t̃^κ_1n e_1 ⋯ t̃^κ_n1 e_n ⋯ t̃^κ_nn e_n],
ϑ̃^κ(x̃(k)) = diag{ϑ̃^κ_11(x̃_1(k)), ⋯, ϑ̃^κ_1n(x̃_1(k)), ⋯, ϑ̃^κ_n1(x̃_n(k)), ⋯, ϑ̃^κ_nn(x̃_n(k))},
Ẽ_κ^T = [t̃^κ_11 e_1 ⋯ t̃^κ_1n e_n ⋯ t̃^κ_n1 e_1 ⋯ t̃^κ_nn e_n],

where e_i ∈ R^n is the column vector with the ith element being 1 and the others being 0. It is clear that the matrices ϑ^κ(x(k)) and ϑ̃^κ(x̃(k)) satisfy ϑ^{κT}(x(k))ϑ^κ(x(k)) ≤ I and ϑ̃^{κT}(x̃(k))ϑ̃^κ(x̃(k)) ≤ I, respectively.

Remark 4.3 With the help of the concise switching functions (4.5), the state-dependent connection weights of the MNN (4.1) have been recast into the model (4.6), which is convenient for dynamics analysis. Based on this analysis-tractable model, the MNN (4.1) is reconstructed as the system (4.7), which can be regarded as a system with norm-bounded uncertainties. As such, in this chapter, the state estimation problem for the MNN (4.7) can be investigated via a robust analysis approach. Different from the maximal-absolute-value-operation-based method (MAVOM) and the characteristic-function-based method (CFM), the robust analysis approach to be developed accounts for the excitatory and inhibitory effects of the connection weights, utilizes the semi-definite programming method, and achieves a trade-off between conservatism and complexity.

In this chapter, the measurement outputs and the outputs to be estimated of the DSMBAM neural network (4.7) are, respectively, given as

y(k) = Ex(k) + Mυ(k),  ỹ(k) = Ẽx̃(k) + M̃υ̃(k)   (4.9)

and
z(k) = Nx(k),  z̃(k) = Ñx̃(k)
(4.10)
where y(k) ∈ R^m and ỹ(k) ∈ R^m are the measurement outputs; z(k) ∈ R^r and z̃(k) ∈ R^r are the outputs to be estimated; υ(k) ∈ l_2([0, ∞); R) and υ̃(k) ∈ l_2([0, ∞); R) are the disturbance inputs; and E, M, N, Ẽ, M̃ and Ñ are known constant matrices with compatible dimensions.

In order to estimate the states of the MNN (4.7), we construct the following state estimator:

x̂(k+1) = D_0 x̂(k) + A_0 f̂̃(k) + B_0 f̂̃_τ(k) + C_0 Σ_{ν=1}^{ς} f̂̃(k−ν) + K(y(k) − Ex̂(k)),
x̂̃(k+1) = D̃_0 x̂̃(k) + Ã_0 ĝ(k) + B̃_0 ĝ_ι(k) + C̃_0 Σ_{ν=1}^{ς̃} ĝ(k−ν) + K̃(ỹ(k) − Ẽx̂̃(k)),   (4.11)
where, for k ∈ N⁺, x̂(k) ∈ R^n and x̂̃(k) ∈ R^n are the estimates of the neuron states x(k) and x̃(k), respectively; f̂̃(k) = col_n{f_j(x̂̃_j(k))}, f̂̃_τ(k) = col_n{f_j(x̂̃_j(k−τ(k)))}, ĝ(k) = col_n{g_i(x̂_i(k))}, ĝ_ι(k) = col_n{g_i(x̂_i(k−ι(k)))}; and K and K̃ are the estimator gain matrices to be determined.

From (4.7)–(4.11), the dynamic system of the estimation error is obtained as follows:

e_x(k+1) = (D_0 − KE)e_x(k) + ∆D(x(k))x(k) + A_0 f⃗(k) + ∆A(x(k))f̃(k) + B_0 f⃗_τ(k) + ∆B(x(k))f̃_τ(k) + C_0 Σ_{ν=1}^{ς} f⃗(k−ν) + ∆C(x(k)) Σ_{ν=1}^{ς} f̃(k−ν) + Lξ(k) − KMυ(k) + σ(k, x(k), x̃(k−τ(k)))w(k),

e_x̃(k+1) = (D̃_0 − K̃Ẽ)e_x̃(k) + ∆D̃(x̃(k))x̃(k) + Ã_0 g⃗(k) + ∆Ã(x̃(k))g(k) + B̃_0 g⃗_ι(k) + ∆B̃(x̃(k))g_ι(k) + C̃_0 Σ_{ν=1}^{ς̃} g⃗(k−ν) + ∆C̃(x̃(k)) Σ_{ν=1}^{ς̃} g(k−ν) + L̃ξ̃(k) − K̃M̃υ̃(k) + σ̃(k, x̃(k), x(k−ι(k)))w(k),   (4.12)

where, for k ∈ N⁺, e_x(k) = x(k) − x̂(k), e_x̃(k) = x̃(k) − x̂̃(k), f⃗(k) = f̃(k) − f̂̃(k), f⃗_τ(k) = f̃_τ(k) − f̂̃_τ(k), g⃗(k) = g(k) − ĝ(k) and g⃗_ι(k) = g_ι(k) − ĝ_ι(k).

Setting η(k) = [x^T(k) e_x^T(k)]^T and η̃(k) = [x̃^T(k) e_x̃^T(k)]^T, we have the following augmented system:

η(k+1) = D(k)η(k) + A(k)F(k) + B(k)F(k−τ(k)) + C(k) Σ_{ν=1}^{ς} F(k−ν) + Lζ(k) + σ⃗(k)w(k),
η̃(k+1) = D̃(k)η̃(k) + Ã(k)G(k) + B̃(k)G(k−ι(k)) + C̃(k) Σ_{ν=1}^{ς̃} G(k−ν) + L̃ζ̃(k) + σ̃⃗(k)w(k)   (4.13)

and z⃗(k) = Nη(k), z̃⃗(k) = Ñη̃(k), where

F(k) = [f̃^T(k) f⃗^T(k)]^T,  F(k−τ(k)) = [f̃_τ^T(k) f⃗_τ^T(k)]^T,
G(k) = [g^T(k) g⃗^T(k)]^T,  G(k−ι(k)) = [g_ι^T(k) g⃗_ι^T(k)]^T,
σ⃗(k) = diag{σ(k, x(k), x̃(k−τ(k))), σ(k, x(k), x̃(k−τ(k)))},
σ̃⃗(k) = diag{σ̃(k, x̃(k), x(k−ι(k))), σ̃(k, x̃(k), x(k−ι(k)))},
ζ(k) = [ξ^T(k) υ^T(k)]^T,  ζ̃(k) = [ξ̃^T(k) υ̃^T(k)]^T,
D(k) = D + K + ∆D(k),  A(k) = A + ∆A(k),  B(k) = B + ∆B(k),  C(k) = C + ∆C(k),
D̃(k) = D̃ + K̃ + ∆D̃(k),  Ã(k) = Ã + ∆Ã(k),  B̃(k) = B̃ + ∆B̃(k),  C̃(k) = C̃ + ∆C̃(k),
D = diag{D_0, D_0},  A = diag{A_0, A_0},  B = diag{B_0, B_0},  C = diag{C_0, C_0},  K = diag{0, −KE},
D̃ = diag{D̃_0, D̃_0},  Ã = diag{Ã_0, Ã_0},  B̃ = diag{B̃_0, B̃_0},  C̃ = diag{C̃_0, C̃_0},  K̃ = diag{0, −K̃Ẽ},
N = [N 0],  Ñ = [Ñ 0],
L = [L 0; L −KM],  L̃ = [L̃ 0; L̃ −K̃M̃],
∆D(k) = [∆D(x(k)) 0; ∆D(x(k)) 0],  ∆D̃(k) = [∆D̃(x̃(k)) 0; ∆D̃(x̃(k)) 0],
∆A(k) = [∆A(x(k)) 0; ∆A(x(k)) 0],  ∆Ã(k) = [∆Ã(x̃(k)) 0; ∆Ã(x̃(k)) 0],
∆B(k) = [∆B(x(k)) 0; ∆B(x(k)) 0],  ∆B̃(k) = [∆B̃(x̃(k)) 0; ∆B̃(x̃(k)) 0],
∆C(k) = [∆C(x(k)) 0; ∆C(x(k)) 0],  ∆C̃(k) = [∆C̃(x̃(k)) 0; ∆C̃(x̃(k)) 0].
Throughout the chapter, we introduce the following assumptions and definition.

Assumption 4.1 The neuron activation functions f̃(·) and g(·) are continuous and satisfy f̃(0) = 0 and g(0) = 0, as well as the following conditions:

[f̃(x) − f̃(y) − Λ_1(x − y)]^T [f̃(x) − f̃(y) − Λ_2(x − y)] ≤ 0,
[g(x) − g(y) − Λ_3(x − y)]^T [g(x) − g(y) − Λ_4(x − y)] ≤ 0

for all x, y ∈ R^n, where Λ_1, Λ_2, Λ_3 and Λ_4 are known constant matrices with appropriate dimensions.

Assumption 4.2 The noise intensity vector functions σ(·, ·, ·) : R × R^n × R^n → R^n and σ̃(·, ·, ·) : R × R^n × R^n → R^n satisfy

σ^T(k, x, y)σ(k, x, y) ≤ ρ_1 x^T x + ρ_2 y^T y,  x, y ∈ R^n,
σ̃^T(k, x, y)σ̃(k, x, y) ≤ ρ_3 x^T x + ρ_4 y^T y,  x, y ∈ R^n

for k ∈ N⁺, where ρ_1, ρ_2, ρ_3 and ρ_4 are known positive constants.

Remark 4.4 The inequalities in Assumption 4.2 serve to a) limit how fast the noise intensity vector functions σ(k, x(k), x̃(k−τ(k))) and σ̃(k, x̃(k), x(k−ι(k))) can grow and b) ensure their boundedness. In fact, the class of nonlinear functions satisfying such an assumption is general and includes a large class of nonlinearities occurring in practical engineering.
Definition 4.1 The augmented system (4.13) with $\zeta(k)=\tilde\zeta(k)=0$ is said to be exponentially mean-square stable if there exist constants $\alpha>0$ and $0<\beta<1$ such that

\[
\mathbb{E}\big\{\|\eta(k)\|^2 + \|\tilde\eta(k)\|^2\big\} \le \alpha\beta^k \max_{-H \le i \le 0} \mathbb{E}\big\{\|\eta(i)\|^2 + \|\tilde\eta(i)\|^2\big\}
\tag{4.14}
\]

for all $k \in \mathbb{N}^+$, where $H = \max\{\tau_M, \iota_M, \varsigma, \tilde\varsigma\}$.

The aim of this chapter is to design an H∞ state estimator for DSBAM neural networks with mixed time delays given by (4.7) such that the following two requirements are met simultaneously:

1) The augmented system (4.13) with $\zeta(k)=\tilde\zeta(k)=0$ is exponentially mean-square stable.

2) Under zero initial conditions, for a given disturbance attenuation level $\gamma>0$ and all nonzero $\zeta(k)$ and $\tilde\zeta(k)$, the output estimation errors $\vec{z}(k)$ and $\vec{\tilde z}(k)$ satisfy

\[
\sum_{k=0}^{\infty}\mathbb{E}\big\{\|\vec{z}(k)\|^2 + \|\vec{\tilde z}(k)\|^2\big\} \le \gamma^2 \sum_{k=0}^{\infty}\big\{\|\zeta(k)\|^2 + \|\tilde\zeta(k)\|^2\big\}.
\tag{4.15}
\]
4.2 Main Results
In this section, a sufficient condition is first derived to guarantee the exponential stability and the H∞ disturbance attenuation level of the augmented system (4.13). Then, based on the obtained analysis results, the desired H∞ estimator is designed for the addressed MNN (4.7). The following basic lemmas will be used in deriving the main results of this chapter.

Lemma 4.1 (Discretized Jensen inequality) For any matrix $\Omega>0$, integers $\gamma_1$ and $\gamma_2$ satisfying $\gamma_2>\gamma_1$, and vector function $\omega: \{\gamma_1, \gamma_1+1, \cdots, \gamma_2\} \to \mathbb{R}^n$ such that the sums concerned are well defined, we have

\[
\Big(\sum_{i=\gamma_1}^{\gamma_2}\omega(i)\Big)^T \Omega \Big(\sum_{i=\gamma_1}^{\gamma_2}\omega(i)\Big) \le (\gamma_2-\gamma_1+1)\sum_{i=\gamma_1}^{\gamma_2}\omega^T(i)\,\Omega\,\omega(i).
\]
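The inequality is easy to verify numerically. The sketch below draws a random positive-definite $\Omega$ and random summands; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def jensen_sides(omega, w):
    """LHS and RHS of the discretized Jensen inequality for the samples w[i]."""
    s = w.sum(axis=0)                                    # sum_i omega(i)
    lhs = float(s @ omega @ s)                           # (sum w)^T Omega (sum w)
    rhs = len(w) * sum(float(x @ omega @ x) for x in w)  # (gamma2-gamma1+1) * sum w^T Omega w
    return lhs, rhs

M = rng.standard_normal((3, 3))
omega = M @ M.T + 3.0 * np.eye(3)   # random positive-definite Omega
w = rng.standard_normal((5, 3))     # omega(gamma1), ..., omega(gamma2)
lhs, rhs = jensen_sides(omega, w)
assert lhs <= rhs + 1e-9
```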
Lemma 4.2 (Schur complement) Given constant matrices $\Omega_1$, $\Omega_2$, $\Omega_3$ where $\Omega_1=\Omega_1^T$ and $\Omega_2>0$, then $\Omega_1+\Omega_3^T\Omega_2^{-1}\Omega_3<0$ if and only if

\[
\begin{bmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0.
\]
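The equivalence in Lemma 4.2 can likewise be checked numerically by comparing the extreme eigenvalues of both forms. A small sketch with illustrative names:

```python
import numpy as np

def schur_tests(omega1, omega2, omega3):
    """Negativity of Omega1 + Omega3^T Omega2^{-1} Omega3 vs. the block form."""
    direct = omega1 + omega3.T @ np.linalg.solve(omega2, omega3)
    block = np.block([[omega1, omega3.T], [omega3, -omega2]])
    neg = lambda m: bool(np.linalg.eigvalsh(m).max() < 0)
    return neg(direct), neg(block)

rng = np.random.default_rng(1)
omega1 = -5.0 * np.eye(2)
omega2 = 2.0 * np.eye(2)
omega3 = 0.5 * rng.standard_normal((2, 2))
d, b = schur_tests(omega1, omega2, omega3)
assert d == b   # the two criteria agree, as the lemma asserts
```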
Lemma 4.3 (S-procedure) Let $N=N^T$, $H$ and $E$ be real matrices with appropriate dimensions, and $F^TF\le I$. Then the inequality $N+HFE+(HFE)^T<0$ holds if and only if there exists a scalar $\mu>0$ such that $N+\mu HH^T+\mu^{-1}E^TE<0$ or, equivalently,

\[
\begin{bmatrix} N & \mu H & E^T \\ * & -\mu I & 0 \\ * & * & -\mu I \end{bmatrix} < 0.
\]
For notational simplicity, hereafter we denote

\[
\begin{aligned}
& \tilde\Lambda_1 = I \otimes \big[\mathrm{Sym}(\Lambda_1^T\Lambda_2)/2\big], \qquad \tilde\Lambda_2 = I \otimes \big[(\Lambda_1+\Lambda_2)/2\big], \\
& \tilde\Lambda_3 = I \otimes \big[\mathrm{Sym}(\Lambda_3^T\Lambda_4)/2\big], \qquad \tilde\Lambda_4 = I \otimes \big[(\Lambda_3+\Lambda_4)/2\big], \\
& \Upsilon(\varsigma) = \sum_{\nu=1}^{\varsigma} F(k-\nu), \qquad \tilde\Upsilon(\tilde\varsigma) = \sum_{\nu=1}^{\tilde\varsigma} G(k-\nu), \\
& \Gamma_1 = \mathrm{diag}\{\rho_1 I, 0\}, \quad \Gamma_2 = \mathrm{diag}\{\rho_2 I, 0\}, \quad \Gamma_3 = \mathrm{diag}\{\rho_3 I, 0\}, \quad \Gamma_4 = \mathrm{diag}\{\rho_4 I, 0\}.
\end{aligned}
\]
In the following theorem, a sufficient condition is derived under which the augmented system is exponentially stable in the mean square and the H∞ performance constraint is met.

Theorem 4.1 Let the estimator parameters $K$ and $\tilde K$ be given. Under Assumptions 4.1 and 4.2, the augmented system (4.13) with $\zeta(k)=0$ and $\tilde\zeta(k)=0$ is globally exponentially mean-square stable if there exist positive scalars $\lambda_i$ ($i=1,2,3,4,5,6$), $\mu_1$ and $\mu_2$, symmetric positive-definite matrices $P_1=\mathrm{diag}\{P_{11},P_{12}\}$, $P_2=\mathrm{diag}\{P_{21},P_{22}\}$, $Q_1$, $Q_2$, $R_1$ and $R_2$ such that $P_1<\mu_1 I$, $P_2<\mu_2 I$, and the following inequalities hold:

\[
\Psi_1 = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ * & -P_1 \end{bmatrix} < 0,
\tag{4.16}
\]
\[
\tilde\Psi_1 = \begin{bmatrix} \tilde\Phi_{11} & \tilde\Phi_{12} \\ * & -P_2 \end{bmatrix} < 0
\tag{4.17}
\]

where

\[
\Phi_{11} = \begin{bmatrix}
\hat\Theta_{11} & 0 & \hat\Theta_{13} & 0 & 0 \\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 \\
* & * & \hat\Theta_{33} & 0 & 0 \\
* & * & * & \hat\Theta_{44} & 0 \\
* & * & * & * & \hat\Theta_{55}
\end{bmatrix},\qquad
\tilde\Phi_{11} = \begin{bmatrix}
\hat{\tilde\Theta}_{11} & 0 & \hat{\tilde\Theta}_{13} & 0 & 0 \\
* & \bar{\tilde\Theta}_{22} & 0 & \bar{\tilde\Theta}_{24} & 0 \\
* & * & \hat{\tilde\Theta}_{33} & 0 & 0 \\
* & * & * & \hat{\tilde\Theta}_{44} & 0 \\
* & * & * & * & \hat{\tilde\Theta}_{55}
\end{bmatrix},
\]
\[
\Phi_{12} = \begin{bmatrix} P_1\mathcal{D}(k) & 0 & P_1\mathcal{A}(k) & P_1\mathcal{B}(k) & P_1\mathcal{C}(k) \end{bmatrix}^T,\qquad
\tilde\Phi_{12} = \begin{bmatrix} P_2\tilde{\mathcal{D}}(k) & 0 & P_2\tilde{\mathcal{A}}(k) & P_2\tilde{\mathcal{B}}(k) & P_2\tilde{\mathcal{C}}(k) \end{bmatrix}^T,
\]
\[
\begin{aligned}
& \hat\Theta_{11} = (\iota_M-\iota_m+1)Q_1 + \lambda_1\Gamma_1 - \lambda_3\tilde\Lambda_1 - P_1, \qquad \bar\Theta_{22} = \lambda_1\Gamma_2 - \lambda_5\tilde\Lambda_1 - Q_2, \\
& \hat\Theta_{13} = \lambda_3\tilde\Lambda_2, \quad \bar\Theta_{24} = \lambda_5\tilde\Lambda_2, \quad \hat\Theta_{33} = R_2 - \lambda_3 I, \quad \hat\Theta_{44} = -\lambda_5 I, \quad \hat\Theta_{55} = -\tfrac{1}{\varsigma}R_2, \\
& \hat{\tilde\Theta}_{11} = (\tau_M-\tau_m+1)Q_2 + \lambda_2\Gamma_3 - \lambda_4\tilde\Lambda_3 - P_2, \qquad \bar{\tilde\Theta}_{22} = \lambda_2\Gamma_4 - \lambda_6\tilde\Lambda_3 - Q_1, \\
& \hat{\tilde\Theta}_{13} = \lambda_4\tilde\Lambda_4, \quad \bar{\tilde\Theta}_{24} = \lambda_6\tilde\Lambda_4, \quad \hat{\tilde\Theta}_{33} = R_1 - \lambda_4 I, \quad \hat{\tilde\Theta}_{44} = -\lambda_6 I, \quad \hat{\tilde\Theta}_{55} = -\tfrac{1}{\tilde\varsigma}R_1.
\end{aligned}
\]
Proof In order to establish the stability conditions, we define $\vartheta(k)=\eta(k+1)-\eta(k)$, $\tilde\vartheta(k)=\tilde\eta(k+1)-\tilde\eta(k)$, and consider the following Lyapunov–Krasovskii functional candidate for the augmented system (4.13):

\[ V(k) = V_1(k) + V_2(k) + V_3(k) + V_4(k) \tag{4.18} \]

where

\[
\begin{aligned}
V_1(k) ={}& \eta^T(k)P_1\eta(k) + \tilde\eta^T(k)P_2\tilde\eta(k), \\
V_2(k) ={}& \sum_{i=k-\iota(k)}^{k-1}\eta^T(i)Q_1\eta(i) + \sum_{i=k-\tau(k)}^{k-1}\tilde\eta^T(i)Q_2\tilde\eta(i), \\
V_3(k) ={}& \sum_{j=k-\iota_M+1}^{k-\iota_m}\sum_{i=j}^{k-1}\eta^T(i)Q_1\eta(i) + \sum_{j=k-\tau_M+1}^{k-\tau_m}\sum_{i=j}^{k-1}\tilde\eta^T(i)Q_2\tilde\eta(i), \\
V_4(k) ={}& \sum_{\nu=1}^{\tilde\varsigma}\sum_{i=k-\nu}^{k-1}G^T(i)R_1G(i) + \sum_{\nu=1}^{\varsigma}\sum_{i=k-\nu}^{k-1}F^T(i)R_2F(i).
\end{aligned}
\]

Now, calculating the difference of $V(k)$ along the system (4.13) and taking the mathematical expectation, we have

\[ \mathbb{E}\{\Delta V(k)\} = \mathbb{E}\Big\{\sum_{i=1}^{4}\Delta V_i(k)\Big\} \tag{4.19} \]

where

\[
\begin{aligned}
\mathbb{E}\{\Delta V_1(k)\} ={}& \mathbb{E}\{V_1(k+1)-V_1(k)\} \\
={}& \mathbb{E}\big\{ \eta^T(k)\big(\mathcal{D}^T(k)P_1\mathcal{D}(k)-P_1\big)\eta(k) + F^T(k)\mathcal{A}^T(k)P_1\mathcal{A}(k)F(k) \\
& + F^T(k-\tau(k))\mathcal{B}^T(k)P_1\mathcal{B}(k)F(k-\tau(k)) + \vec\sigma^T(k)P_1\vec\sigma(k) \\
& + \tilde\eta^T(k)\big(\tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{D}}(k)-P_2\big)\tilde\eta(k) + G^T(k)\tilde{\mathcal{A}}^T(k)P_2\tilde{\mathcal{A}}(k)G(k) \\
& + G^T(k-\iota(k))\tilde{\mathcal{B}}^T(k)P_2\tilde{\mathcal{B}}(k)G(k-\iota(k)) + \vec{\tilde\sigma}^T(k)P_2\vec{\tilde\sigma}(k) \\
& + 2\eta^T(k)\mathcal{D}^T(k)P_1\big(\mathcal{A}(k)F(k) + \mathcal{B}(k)F(k-\tau(k)) + \mathcal{C}(k)\Upsilon(\varsigma)\big) \\
& + 2F^T(k)\mathcal{A}^T(k)P_1\big(\mathcal{B}(k)F(k-\tau(k)) + \mathcal{C}(k)\Upsilon(\varsigma)\big) \\
& + 2F^T(k-\tau(k))\mathcal{B}^T(k)P_1\mathcal{C}(k)\Upsilon(\varsigma) \\
& + 2\tilde\eta^T(k)\tilde{\mathcal{D}}^T(k)P_2\big(\tilde{\mathcal{A}}(k)G(k) + \tilde{\mathcal{B}}(k)G(k-\iota(k)) + \tilde{\mathcal{C}}(k)\tilde\Upsilon(\tilde\varsigma)\big) \\
& + 2G^T(k)\tilde{\mathcal{A}}^T(k)P_2\big(\tilde{\mathcal{B}}(k)G(k-\iota(k)) + \tilde{\mathcal{C}}(k)\tilde\Upsilon(\tilde\varsigma)\big) \\
& + 2G^T(k-\iota(k))\tilde{\mathcal{B}}^T(k)P_2\tilde{\mathcal{C}}(k)\tilde\Upsilon(\tilde\varsigma) \\
& + \Upsilon^T(\varsigma)\mathcal{C}^T(k)P_1\mathcal{C}(k)\Upsilon(\varsigma) + \tilde\Upsilon^T(\tilde\varsigma)\tilde{\mathcal{C}}^T(k)P_2\tilde{\mathcal{C}}(k)\tilde\Upsilon(\tilde\varsigma)\big\},
\end{aligned}
\tag{4.20}
\]

\[
\begin{aligned}
\mathbb{E}\{\Delta V_2(k)\} \le{}& \mathbb{E}\Big\{\eta^T(k)Q_1\eta(k) - \eta^T(k-\iota(k))Q_1\eta(k-\iota(k)) \\
& + \tilde\eta^T(k)Q_2\tilde\eta(k) - \tilde\eta^T(k-\tau(k))Q_2\tilde\eta(k-\tau(k)) \\
& + \sum_{j=k+1-\iota_M}^{k-\iota_m}\eta^T(j)Q_1\eta(j) + \sum_{j=k+1-\tau_M}^{k-\tau_m}\tilde\eta^T(j)Q_2\tilde\eta(j)\Big\},
\end{aligned}
\tag{4.21}
\]

\[
\begin{aligned}
\mathbb{E}\{\Delta V_3(k)\} ={}& \mathbb{E}\Big\{(\iota_M-\iota_m)\eta^T(k)Q_1\eta(k) + (\tau_M-\tau_m)\tilde\eta^T(k)Q_2\tilde\eta(k) \\
& - \sum_{j=k+1-\iota_M}^{k-\iota_m}\eta^T(j)Q_1\eta(j) - \sum_{j=k+1-\tau_M}^{k-\tau_m}\tilde\eta^T(j)Q_2\tilde\eta(j)\Big\},
\end{aligned}
\tag{4.22}
\]

and, by Lemma 4.1,

\[
\begin{aligned}
\mathbb{E}\{\Delta V_4(k)\} ={}& G^T(k)R_1G(k) + F^T(k)R_2F(k) \\
& - \sum_{\nu=1}^{\tilde\varsigma}G^T(k-\nu)R_1G(k-\nu) - \sum_{\nu=1}^{\varsigma}F^T(k-\nu)R_2F(k-\nu) \\
\le{}& G^T(k)R_1G(k) + F^T(k)R_2F(k) - \frac{1}{\tilde\varsigma}\tilde\Upsilon^T(\tilde\varsigma)R_1\tilde\Upsilon(\tilde\varsigma) - \frac{1}{\varsigma}\Upsilon^T(\varsigma)R_2\Upsilon(\varsigma).
\end{aligned}
\tag{4.23}
\]

Noticing Assumption 4.2, it is easily seen that

\[
\begin{aligned}
\vec\sigma^T(k)P_1\vec\sigma(k) + \vec{\tilde\sigma}^T(k)P_2\vec{\tilde\sigma}(k)
\le{}& \lambda_{\max}(P_1)\vec\sigma^T(k)\vec\sigma(k) + \lambda_{\max}(P_2)\vec{\tilde\sigma}^T(k)\vec{\tilde\sigma}(k) \\
\le{}& \lambda_1\big(\eta^T(k)\Gamma_1\eta(k) + \tilde\eta^T(k-\tau(k))\Gamma_2\tilde\eta(k-\tau(k))\big) \\
& + \lambda_2\big(\tilde\eta^T(k)\Gamma_3\tilde\eta(k) + \eta^T(k-\iota(k))\Gamma_4\eta(k-\iota(k))\big).
\end{aligned}
\tag{4.24}
\]
Therefore, by denoting

\[
\begin{aligned}
\varpi(k) ={}& \big[\eta^T(k)\;\; \tilde\eta^T(k-\tau(k))\;\; F^T(k)\;\; F^T(k-\tau(k))\;\; \Upsilon^T(\varsigma)\big]^T, \\
\tilde\varpi(k) ={}& \big[\tilde\eta^T(k)\;\; \eta^T(k-\iota(k))\;\; G^T(k)\;\; G^T(k-\iota(k))\;\; \tilde\Upsilon^T(\tilde\varsigma)\big]^T
\end{aligned}
\]

and combining (4.18)–(4.24), we immediately have

\[
\mathbb{E}\{\Delta V(k)\} \le \mathbb{E}\big\{\varpi^T(k)\Omega_1\varpi(k) + \tilde\varpi^T(k)\tilde\Omega_1\tilde\varpi(k)\big\}
\tag{4.25}
\]

where

\[
\Omega_1 = \begin{bmatrix}
\Theta_{11} & 0 & \Theta_{13} & \Theta_{14} & \Theta_{15} \\
* & \Theta_{22} & 0 & 0 & 0 \\
* & * & \Theta_{33} & \Theta_{34} & \Theta_{35} \\
* & * & * & \Theta_{44} & \Theta_{45} \\
* & * & * & * & \Theta_{55}
\end{bmatrix},\qquad
\tilde\Omega_1 = \begin{bmatrix}
\tilde\Theta_{11} & 0 & \tilde\Theta_{13} & \tilde\Theta_{14} & \tilde\Theta_{15} \\
* & \tilde\Theta_{22} & 0 & 0 & 0 \\
* & * & \tilde\Theta_{33} & \tilde\Theta_{34} & \tilde\Theta_{35} \\
* & * & * & \tilde\Theta_{44} & \tilde\Theta_{45} \\
* & * & * & * & \tilde\Theta_{55}
\end{bmatrix},
\]
\[
\begin{aligned}
& \Theta_{11} = \mathcal{D}^T(k)P_1\mathcal{D}(k) + (\iota_M-\iota_m+1)Q_1 + \lambda_1\Gamma_1 - P_1, \quad \Theta_{13} = \mathcal{D}^T(k)P_1\mathcal{A}(k), \\
& \Theta_{14} = \mathcal{D}^T(k)P_1\mathcal{B}(k), \quad \Theta_{15} = \mathcal{D}^T(k)P_1\mathcal{C}(k), \quad \Theta_{22} = \lambda_1\Gamma_2 - Q_2, \\
& \Theta_{33} = \mathcal{A}^T(k)P_1\mathcal{A}(k) + R_2, \quad \Theta_{34} = \mathcal{A}^T(k)P_1\mathcal{B}(k), \quad \Theta_{35} = \mathcal{A}^T(k)P_1\mathcal{C}(k), \\
& \Theta_{44} = \mathcal{B}^T(k)P_1\mathcal{B}(k), \quad \Theta_{45} = \mathcal{B}^T(k)P_1\mathcal{C}(k), \quad \Theta_{55} = \mathcal{C}^T(k)P_1\mathcal{C}(k) - \tfrac{1}{\varsigma}R_2, \\
& \tilde\Theta_{11} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{D}}(k) + (\tau_M-\tau_m+1)Q_2 + \lambda_2\Gamma_3 - P_2, \quad \tilde\Theta_{13} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{A}}(k), \\
& \tilde\Theta_{14} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{B}}(k), \quad \tilde\Theta_{15} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{C}}(k), \quad \tilde\Theta_{22} = \lambda_2\Gamma_4 - Q_1, \\
& \tilde\Theta_{33} = \tilde{\mathcal{A}}^T(k)P_2\tilde{\mathcal{A}}(k) + R_1, \quad \tilde\Theta_{34} = \tilde{\mathcal{A}}^T(k)P_2\tilde{\mathcal{B}}(k), \quad \tilde\Theta_{35} = \tilde{\mathcal{A}}^T(k)P_2\tilde{\mathcal{C}}(k), \\
& \tilde\Theta_{44} = \tilde{\mathcal{B}}^T(k)P_2\tilde{\mathcal{B}}(k), \quad \tilde\Theta_{45} = \tilde{\mathcal{B}}^T(k)P_2\tilde{\mathcal{C}}(k), \quad \tilde\Theta_{55} = \tilde{\mathcal{C}}^T(k)P_2\tilde{\mathcal{C}}(k) - \tfrac{1}{\tilde\varsigma}R_1.
\end{aligned}
\]

Furthermore, from Assumption 4.1, it can be obtained that

\[
\begin{aligned}
& [F(k)-(I\otimes\Lambda_1)\eta(k)]^T[F(k)-(I\otimes\Lambda_2)\eta(k)] \le 0, \\
& [G(k)-(I\otimes\Lambda_3)\tilde\eta(k)]^T[G(k)-(I\otimes\Lambda_4)\tilde\eta(k)] \le 0, \\
& [F(k-\tau(k))-(I\otimes\Lambda_1)\tilde\eta(k-\tau(k))]^T[F(k-\tau(k))-(I\otimes\Lambda_2)\tilde\eta(k-\tau(k))] \le 0, \\
& [G(k-\iota(k))-(I\otimes\Lambda_3)\eta(k-\iota(k))]^T[G(k-\iota(k))-(I\otimes\Lambda_4)\eta(k-\iota(k))] \le 0.
\end{aligned}
\tag{4.26}
\]

Substituting (4.26) into (4.25), we have

\[
\begin{aligned}
\mathbb{E}\{\Delta V(k)\} \le{}& \mathbb{E}\big\{\varpi^T(k)\Omega_1\varpi(k) + \tilde\varpi^T(k)\tilde\Omega_1\tilde\varpi(k) \\
& - \lambda_3[F(k)-(I\otimes\Lambda_1)\eta(k)]^T[F(k)-(I\otimes\Lambda_2)\eta(k)] \\
& - \lambda_4[G(k)-(I\otimes\Lambda_3)\tilde\eta(k)]^T[G(k)-(I\otimes\Lambda_4)\tilde\eta(k)] \\
& - \lambda_5[F(k-\tau(k))-(I\otimes\Lambda_1)\tilde\eta(k-\tau(k))]^T[F(k-\tau(k))-(I\otimes\Lambda_2)\tilde\eta(k-\tau(k))] \\
& - \lambda_6[G(k-\iota(k))-(I\otimes\Lambda_3)\eta(k-\iota(k))]^T[G(k-\iota(k))-(I\otimes\Lambda_4)\eta(k-\iota(k))]\big\} \\
\le{}& \mathbb{E}\big\{\varpi^T(k)\Omega_2\varpi(k) + \tilde\varpi^T(k)\tilde\Omega_2\tilde\varpi(k)\big\}
\end{aligned}
\tag{4.27}
\]

where

\[
\Omega_2 = \begin{bmatrix}
\bar\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15} \\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 \\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35} \\
* & * & * & \bar\Theta_{44} & \Theta_{45} \\
* & * & * & * & \Theta_{55}
\end{bmatrix},\qquad
\tilde\Omega_2 = \begin{bmatrix}
\bar{\tilde\Theta}_{11} & 0 & \bar{\tilde\Theta}_{13} & \tilde\Theta_{14} & \tilde\Theta_{15} \\
* & \bar{\tilde\Theta}_{22} & 0 & \bar{\tilde\Theta}_{24} & 0 \\
* & * & \bar{\tilde\Theta}_{33} & \tilde\Theta_{34} & \tilde\Theta_{35} \\
* & * & * & \bar{\tilde\Theta}_{44} & \tilde\Theta_{45} \\
* & * & * & * & \tilde\Theta_{55}
\end{bmatrix},
\]
\[
\begin{aligned}
& \bar\Theta_{11} = \mathcal{D}^T(k)P_1\mathcal{D}(k) + (\iota_M-\iota_m+1)Q_1 + \lambda_1\Gamma_1 - \lambda_3\tilde\Lambda_1 - P_1, \\
& \bar\Theta_{13} = \mathcal{D}^T(k)P_1\mathcal{A}(k) + \lambda_3\tilde\Lambda_2, \quad \bar\Theta_{22} = \lambda_1\Gamma_2 - \lambda_5\tilde\Lambda_1 - Q_2, \quad \bar\Theta_{24} = \lambda_5\tilde\Lambda_2, \\
& \bar\Theta_{33} = \mathcal{A}^T(k)P_1\mathcal{A}(k) + R_2 - \lambda_3 I, \quad \bar\Theta_{44} = \mathcal{B}^T(k)P_1\mathcal{B}(k) - \lambda_5 I, \\
& \bar{\tilde\Theta}_{11} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{D}}(k) + (\tau_M-\tau_m+1)Q_2 + \lambda_2\Gamma_3 - \lambda_4\tilde\Lambda_3 - P_2, \\
& \bar{\tilde\Theta}_{13} = \tilde{\mathcal{D}}^T(k)P_2\tilde{\mathcal{A}}(k) + \lambda_4\tilde\Lambda_4, \quad \bar{\tilde\Theta}_{22} = \lambda_2\Gamma_4 - \lambda_6\tilde\Lambda_3 - Q_1, \quad \bar{\tilde\Theta}_{24} = \lambda_6\tilde\Lambda_4, \\
& \bar{\tilde\Theta}_{33} = \tilde{\mathcal{A}}^T(k)P_2\tilde{\mathcal{A}}(k) + R_1 - \lambda_4 I, \quad \bar{\tilde\Theta}_{44} = \tilde{\mathcal{B}}^T(k)P_2\tilde{\mathcal{B}}(k) - \lambda_6 I.
\end{aligned}
\]

Applying Lemma 4.2 to (4.16) and (4.17), we know that $\Omega_2<0$ and $\tilde\Omega_2<0$. Therefore, it can be concluded that $\mathbb{E}\{\Delta V(k)\}\le 0$ and

\[
\mathbb{E}\{\Delta V(k)\} \le -\varepsilon\,\mathbb{E}\big\{\|\eta(k)\|^2 + \|\tilde\eta(k)\|^2\big\}
\tag{4.28}
\]

where $\varepsilon = \min\{-\lambda_{\max}(\Omega_2), -\lambda_{\max}(\tilde\Omega_2)\}$. Finally, in the light of the analysis in [85], the exponential mean-square stability of the augmented system (4.13) with $\zeta(k)=\tilde\zeta(k)=0$ is guaranteed, and the proof is complete.

Now, let us consider the H∞ performance of the augmented system (4.13). In the following theorem, a sufficient condition is obtained that guarantees both the exponential mean-square stability and the H∞ performance for the augmented system (4.13).
Theorem 4.2 Let the estimator parameters $K$ and $\tilde K$ as well as a prescribed disturbance attenuation level $\gamma>0$ be given. Under Assumptions 4.1 and 4.2, the augmented system (4.13) is exponentially mean-square stable and also satisfies the pre-specified H∞ performance constraint (4.15) if there exist positive scalars $\lambda_i$ ($i=1,2,3,4,5,6$), $\mu_1$ and $\mu_2$, symmetric positive-definite matrices $P_1$, $P_2$, $Q_1$, $Q_2$, $R_1$ and $R_2$ such that $P_1<\mu_1 I$, $P_2<\mu_2 I$, and the following inequalities hold:

\[
\Psi_2 = \begin{bmatrix} \bar\Phi_{11} & \bar\Phi_{12} \\ * & -P_1 \end{bmatrix} < 0,
\tag{4.29}
\]
\[
\tilde\Psi_2 = \begin{bmatrix} \bar{\tilde\Phi}_{11} & \bar{\tilde\Phi}_{12} \\ * & -P_2 \end{bmatrix} < 0.
\tag{4.30}
\]

Let the disturbance attenuation level $\gamma>0$ be given. Under Assumptions 4.1 and 4.2, the augmented system (4.13) is exponentially mean-square stable and also satisfies the pre-specified H∞ performance constraint (4.15) if there exist positive scalars $\lambda_i$ ($i=1,2,3,4,5,6$) and $\mu_i$ ($i=1,2,3,4$), symmetric positive-definite matrices $P_1=\mathrm{diag}\{P_{11},P_{12}\}$, $P_2=\mathrm{diag}\{P_{21},P_{22}\}$, $Q_1$, $Q_2$, $R_1$ and $R_2$ such that $P_1<\mu_1 I$, $P_2<\mu_2 I$,
and the following linear matrix inequalities (LMIs) (4.33) and (4.34) hold:

\[
\begin{bmatrix} \Psi_3 & H & \mu_3E^T \\ * & -\mu_3I & 0 \\ * & * & -\mu_3I \end{bmatrix} < 0,
\tag{4.33}
\]
\[
\begin{bmatrix} \tilde\Psi_3 & \tilde H & \mu_4\tilde E^T \\ * & -\mu_4I & 0 \\ * & * & -\mu_4I \end{bmatrix} < 0.
\tag{4.34}
\]

The system (6.10) with $\zeta_k=0$ is said to be stochastically stable if, for any $\varepsilon>0$, there exists a positive constant $\rho$ such that

\[ \mathbb{E}\{\|\eta_k\|\} < \varepsilon \tag{6.13} \]

whenever $k\in\mathbb{N}$ and $\|\eta_0\|<\rho$. In this chapter, under RoM and FMs, we are interested in designing an estimator parameter $K$ such that 1) the system (6.10) with $\zeta_k=0$ is stochastically stable; and 2) the output $\tilde z_k$ with zero initial value satisfies

\[
\mathcal{J}(+\infty) \triangleq \sum_{k=0}^{+\infty}\mathbb{E}\big\{\|\tilde z_k\|^2 - \gamma^2\|\zeta_k\|^2\big\} \le 0
\tag{6.14}
\]

for all nonzero $\zeta_k$, where $\gamma>0$ is a given disturbance attenuation level.
6.2
Main Results
In this section, by utilizing the robust analysis approach, a sufficient condition is first provided to guarantee the stochastic stability and H∞ performance of the augmented system (6.10). Then, based on the established analysis result, the desired estimator is designed for the MNN (6.1). To start with, we give the following useful lemma that will be needed for the subsequent derivation of our main results in this chapter.
Lemma 6.1 (Jensen inequality in the discrete case) For a positive semi-definite matrix $W\in\mathbb{R}^{n\times n}$, vectors $x_i\in\mathbb{R}^n$ and nonnegative constants $a_i$ ($i=1,2,\cdots$), if the series concerned are convergent, then we have

\[
\Big(\sum_{i=1}^{+\infty}a_i x_i\Big)^T W \Big(\sum_{i=1}^{+\infty}a_i x_i\Big) \le \Big(\sum_{i=1}^{+\infty}a_i\Big)\Big(\sum_{i=1}^{+\infty}a_i\, x_i^T W x_i\Big).
\]
Lemma 6.2 (Schur complement) Given constant matrices $\Omega_1$, $\Omega_2$, $\Omega_3$ where $\Omega_1=\Omega_1^T$ and $\Omega_2>0$, then $\Omega_1+\Omega_3^T\Omega_2^{-1}\Omega_3<0$ if and only if

\[
\begin{bmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0.
\]
Lemma 6.3 (S-procedure) Let $N=N^T$, $H$ and $E$ be real matrices with appropriate dimensions, and $F^TF\le I$. Then, the inequality $N+HFE+(HFE)^T<0$ holds if and only if there exists a scalar $\mu>0$ such that $N+\mu HH^T+\mu^{-1}E^TE<0$ or, equivalently,

\[
\begin{bmatrix} N & \mu H & E^T \\ * & -\mu I & 0 \\ * & * & -\mu I \end{bmatrix} < 0.
\]

The following theorem establishes sufficient conditions under which the augmented system (6.10) is stochastically stable and satisfies the H∞ performance constraint (6.14).

Theorem 6.1 Let the estimator gain $K$ and the disturbance attenuation level $\gamma>0$ be given. Assume that there exist matrices $P>0$, $Q>0$, $R>0$, $S_i>0$ ($i=1,2,\cdots,\hbar$) and scalars $\lambda_1>0$, $\lambda_2>0$ and $\lambda_3>0$ such that

\[
\begin{bmatrix}
\bar\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \bar\Theta_{15} & \Theta_{16} & \Theta_{17} & \Theta_{18} \\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 & 0 & 0 & 0 \\
* & * & \bar\Theta_{33} & \Theta_{34} & 0 & \Theta_{36} & \Theta_{37} & \Theta_{38} \\
* & * & * & \bar\Theta_{44} & 0 & \Theta_{46} & \Theta_{47} & \Theta_{48} \\
* & * & * & * & \ddots & \vdots & \vdots & \vdots
\end{bmatrix} < 0.
\]

Let the disturbance attenuation level $\gamma>0$ be given. Assume that there exist matrices $P>0$, $Q>0$, $R>0$, $S_i>0$ ($i=1,2,\cdots,\hbar$), a matrix $Y$, and scalars $\lambda_1>0$, $\lambda_2>0$ and $\lambda_3>0$ such that

\[
\Psi_2 = \begin{bmatrix} \tilde\Phi & H & \kappa\tilde E^T \\ * & -\kappa I & 0 \\ * & * & -\kappa I \end{bmatrix} < 0.
\tag{6.30}
\]

6.3 An Illustrative Example

Consider the MNN (6.1) with the following parameters: $d_1(x_{1k})$ switches between 0.098 ($|x_{1k}|>0.01$) and 0.092 ($|x_{1k}|\le 0.01$), $d_2(x_{2k})$ takes the value 0.080 for $|x_{2k}|\le 0.01$, and

\[
\begin{aligned}
& a_{12}(x_{1k}) = \begin{cases} 0.020, & |x_{1k}|>0.01, \\ 0.120, & |x_{1k}|\le 0.01, \end{cases}
&& a_{11}(x_{1k}) = \begin{cases} -0.800, & |x_{1k}|>0.01, \\ 0.716, & |x_{1k}|\le 0.01, \end{cases} \\
& a_{22}(x_{2k}) = \begin{cases} 0.500, & |x_{2k}|>0.01, \\ -0.548, & |x_{2k}|\le 0.01, \end{cases}
&& a_{21}(x_{2k}) = \begin{cases} 0.040, & |x_{2k}|>0.01, \\ 0.017, & |x_{2k}|\le 0.01, \end{cases} \\
& b_{11}(x_{1k}) = \begin{cases} -0.800, & |x_{1k}|>0.01, \\ -1.600, & |x_{1k}|\le 0.01, \end{cases}
&& b_{12}(x_{1k}) = \begin{cases} 0.600, & |x_{1k}|>0.01, \\ 1.000, & |x_{1k}|\le 0.01, \end{cases} \\
& b_{21}(x_{2k}) = \begin{cases} -0.200, & |x_{2k}|>0.01, \\ -0.800, & |x_{2k}|\le 0.01, \end{cases}
&& b_{22}(x_{2k}) = \begin{cases} 0.600, & |x_{2k}|>0.01, \\ 1.600, & |x_{2k}|\le 0.01, \end{cases} \\
& c_{11}(x_{1k}) = \begin{cases} 1.800, & |x_{1k}|>0.01, \\ 0.900, & |x_{1k}|\le 0.01, \end{cases}
&& c_{12}(x_{1k}) = \begin{cases} 0.300, & |x_{1k}|>0.01, \\ 1.200, & |x_{1k}|\le 0.01, \end{cases} \\
& c_{21}(x_{2k}) = \begin{cases} 0.500, & |x_{2k}|>0.01, \\ 0.900, & |x_{2k}|\le 0.01, \end{cases}
&& c_{22}(x_{2k}) = \begin{cases} 0.800, & |x_{2k}|>0.01, \\ 1.600, & |x_{2k}|\le 0.01. \end{cases}
\end{aligned}
\]
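Each connection weight above is a two-valued function of its own neuron state. A minimal sketch of such a memristive weight (the constructor is ours, not the book's):

```python
def memristive_weight(upper, lower, threshold=0.01):
    """Two-valued (memristive) connection weight switching on |state|."""
    return lambda x: upper if abs(x) > threshold else lower

d1 = memristive_weight(0.098, 0.092)   # d1(x1k) from the parameter list above
assert d1(0.5) == 0.098 and d1(0.005) == 0.092
```

This state-dependent switching is exactly what the interval-uncertainty rewriting of the chapter absorbs into a norm-bounded perturbation.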
The activation functions $f(x_k)$ and $g(x_k)$ are selected as

\[
f(x_k) = \begin{bmatrix} -\tanh(0.4x_{1k}) \\ \tanh(0.6x_{2k}) \end{bmatrix}, \qquad
g(x_k) = \begin{bmatrix} \tanh(0.8x_{1k}) \\ 0.3x_{2k} - \tanh(0.7x_{2k}) \end{bmatrix},
\]

which satisfy the constraints (6.11)–(6.12) with

\[
\Lambda_1 = \begin{bmatrix} -0.4 & 0 \\ 0 & 0 \end{bmatrix}, \quad
\Lambda_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0.6 \end{bmatrix}, \quad
\Upsilon_1 = \begin{bmatrix} 0 & 0 \\ 0 & -0.4 \end{bmatrix}, \quad
\Upsilon_2 = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.3 \end{bmatrix}.
\]

In the simulation, the occurrence probabilities of the discrete and distributed delays are chosen as $\bar\alpha_1=0.10$ and $\bar\alpha_2=0.15$, respectively. The discrete delay is set as $\tau_k = 3+\cos(k\pi)$, from which we have $\underline\tau=2$ and $\bar\tau=4$. For the distributed delays, the constant sequence is selected as $\mu_k=2^{-(k+3)}$, with $\bar\mu=2^{-4}$, which satisfies
Delay-Distribution-Dependent H∞ State Estimation
the convergence condition (6.3). Furthermore, the order of the Rice fading model is taken as $\hbar=2$, and the statistical characteristics of the random variables $\sigma_{s,k}$ ($s=0,1,2$) are assumed to be $\bar\sigma_0=0.90$, $\bar\sigma_1=0.40$, $\bar\sigma_2=0.11$, $\check\sigma_0=0.01$, $\check\sigma_1=0.05$ and $\check\sigma_2=0.01$. By utilizing the Matlab YALMIP 3.0 Toolbox, we solve LMI (6.30) and obtain the matrices $P$ and $Y$ as follows:

\[
P = \begin{bmatrix} 27.2196 & -1.3368 \\ -1.3368 & 50.8195 \end{bmatrix}, \qquad
Y = \begin{bmatrix} 0.3762 \\ -0.3010 \end{bmatrix}.
\]

Then, according to $K=P^{-1}Y$, the parameter of the desired state estimator is derived as

\[
K = \begin{bmatrix} 0.0135 \\ -0.0055 \end{bmatrix}.
\]

Choose the exogenous disturbances $\nu_k=\cos(k-1)/(2k+1)$, $\omega_k=e^{-0.5k}\sin(k)$ and $\xi_k=e^{-0.2k}/(3k+1)$. The initial values of the neuron state are drawn randomly from the uniform distribution over $[-1,1]$. Simulation results are shown in Figs. 6.2–6.4. Fig. 6.2 plots the ideal measurements and the received signals of the estimator. Figs. 6.3 and 6.4 show the trajectories of $z_k$, its estimate $\hat z_k$ and the corresponding estimation error $e_k$; they indicate that the proposed estimator performs quite well.
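The final gain computation $K=P^{-1}Y$ can be reproduced directly from the reported $P$ and $Y$; solving the linear system is numerically preferable to forming $P^{-1}$ explicitly:

```python
import numpy as np

# P and Y as returned by the YALMIP solution of LMI (6.30) in the example
P = np.array([[27.2196, -1.3368],
              [-1.3368, 50.8195]])
Y = np.array([[0.3762],
              [-0.3010]])

# K = P^{-1} Y, computed without explicitly inverting P
K = np.linalg.solve(P, Y)
assert np.allclose(K.ravel(), [0.0135, -0.0055], atol=1e-3)
```

The result matches the gain reported in the chapter to the printed number of digits.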
FIGURE 6.2: Ideal measurements $y_k$ and received signals $\tilde y_k$ of the estimator.
FIGURE 6.3: $z_k$ and its estimate $\hat z_k$.
FIGURE 6.4: Estimation error $e_k$.
6.4
Summary
In this chapter, the H∞ state estimation problem has been investigated for discrete-time memristive neural networks subject to both RoM and degraded
measurements. The RoM occurs in a random manner governed by known statistical characteristics. The degraded measurements reflect signal degradation obeying certain prescribed probabilistic distributions according to the Rice fading model. By introducing a set of new switching functions, the addressed MNN (6.1) has been converted into a system involving uncertain interval parameters. Based on this uncertain system, robust analysis theory and Lyapunov stability theory have been utilized to establish conditions under which the estimation error system is stochastically stable and the prescribed H∞ performance is satisfied. Subsequently, the desired estimator gain matrices have been obtained in terms of the solutions to a certain set of inequality constraints, which can be solved effectively via available software packages. Finally, a simulation example has been provided to show the effectiveness of the main results. It is worth mentioning that there are still some potential research directions, for example, the extension of the present results to more complicated systems and the application of the main results to robotic control problems.
7 On State Estimation for Discrete Time-Delayed Memristive Neural Networks under the WTOD Protocol: A Resilient Set-Membership Approach
In practice, MNNs can be realized via very large scale integration circuits (VLSICs), in which the connection weights are implemented via memristors. VLSICs are susceptible to both device noises (e.g. the flicker, shot and thermal noises) and ambient noises (e.g. crosstalk, power and/or ground bounce, and substrate noises). In particular, the ambient noises, if not adequately addressed, could significantly degrade VLSIC performance and reliability. From a practical point of view, we assume the ambient noises to be deterministic and unknown-but-bounded within certain ellipsoidal regions (CERs). In this case, the set-membership filtering method becomes especially suitable for constraining the neuron states to CERs so as to assure satisfactory state estimation performance. Although a vast body of set-membership state estimation work has been presented, relevant results on MNNs remain scattered, let alone for the discrete-time setting. It should be pointed out that, even though some attempts have been initiated on the state estimation issues for MNNs, little attention has been paid to the issue of limited communication under resource constraints between the estimator and the network output. Because of the large size of MNNs and the high complexity of the tasks to be performed, the volume of the network output data could become considerably high, which poses great challenges (e.g. data collisions and communication congestion) on transmission networks of limited capacity. To handle such network-induced challenges, an effective measure favorably taken by industry is to leverage communication scheduling protocols to regulate the network traffic.
Among the various protocols that have been in use so far, the Weighted Try-Once-Discard (WTOD) protocol has proven to be a particularly attractive scheduling strategy for allocating network resources because of its dynamic scheduling behavior based on the significance of different missions. As such, it is of practical significance to explore how the WTODP can be leveraged to coordinate the considerable data transmission between the MNN and the estimator, and this constitutes one of the motivations of the present investigation.
On State Estimation
When realizing filter/controller algorithms in traditional system design, an implicit assumption is that the filter/controller gains are exactly implemented. This assumption, unfortunately, goes against engineering practice, as the actual execution of the filter/controller gains might undergo parameter variations induced by many factors, including analogue-to-digital conversion, round-off errors, and the finite resolution of instruments. Such gain variations/drifts, though possibly small, would undoubtedly impair the corresponding controller/filter performance. As such, a growing body of work has been reported on the design of resilient state estimation algorithms that are insensitive to estimator gain variations. Nevertheless, to our best knowledge, very few results have been acquired so far on the resilient set-membership state estimation problems (SEPs) for DMNNs, not to mention the case where hybrid time-delays (HTDs) (consisting of both discrete and distributed time-delays), unknown-but-bounded noises (UBBNs) and the WTOD protocol (WTODP) are all involved. This constitutes another motivation of the research presented in this chapter.
7.1 Problem Formulation

7.1.1 Memristive Neural Network Model
Consider a typical DMNN with HTDs:

\[
\begin{cases}
z(t+1) = D(z(t))z(t) + A(z(t))f(z(t)) + B(z(t))g(z(t-\tau_1)) + C(z(t))\displaystyle\sum_{\iota=1}^{\tau_2}\mu_\iota h(z(t-\iota)) + L_1\upsilon(t), \\
y(t) = Mz(t) + L_2\upsilon(t), \\
z(\varsigma) = \phi_0(\varsigma), \quad \varsigma=-\tau,-\tau+1,\cdots,-1,0
\end{cases}
\tag{7.1}
\]

where $z(t)=[z_1(t)\;z_2(t)\;\cdots\;z_n(t)]^T$, $y(t)=[y_1(t)\;y_2(t)\;\cdots\;y_m(t)]^T$, $\upsilon(t)=[\upsilon_1(t)\;\upsilon_2(t)\;\cdots\;\upsilon_q(t)]^T$ and $D(z(t))=\mathrm{diag}\{d_1(z_1(t)),d_2(z_2(t)),\cdots,d_n(z_n(t))\}$ are the neuron state vector, the measurement output vector, the exogenous disturbance, and the self-feedback matrix, respectively; $A(z(t))=(a_{ij}(z_i(t)))_{n\times n}$, $B(z(t))=(b_{ij}(z_i(t)))_{n\times n}$ and $C(z(t))=(c_{ij}(z_i(t)))_{n\times n}$ are
the connection weights (CWs); and

\[
f(z(t)) = [f_1(z_1(t))\;\cdots\;f_n(z_n(t))]^T,\quad
g(z(t)) = [g_1(z_1(t))\;\cdots\;g_n(z_n(t))]^T,\quad
h(z(t)) = [h_1(z_1(t))\;\cdots\;h_n(z_n(t))]^T
\]

are the nonlinear neuron activation functions (AFs); $\tau_1$ and $\tau_2$ are the constant discrete and distributed time-delays, respectively; $\phi_0(\varsigma)$ ($\varsigma=-\tau,-\tau+1,\cdots,-1,0$) are the initial conditions with $\tau\triangleq\max\{\tau_1,\tau_2\}$; and $L_1$, $L_2$ and $M$ are known matrices of compatible dimensions. Similar to [75], the state-dependent functions $d_i(z_i(t))$, $a_{ij}(z_i(t))$, $b_{ij}(z_i(t))$ and $c_{ij}(z_i(t))$ are

\[
d_i(z_i(\cdot)) = \begin{cases} \hat d_i, & |z_i(\cdot)|>\ell_i, \\ \check d_i, & |z_i(\cdot)|\le\ell_i, \end{cases}
\qquad
\kappa_{ij}(z_i(\cdot)) = \begin{cases} \hat\kappa_{ij}, & |z_i(\cdot)|>\ell_i, \\ \check\kappa_{ij}, & |z_i(\cdot)|\le\ell_i, \end{cases}
\tag{7.2}
\]

where $\kappa\in\{a,b,c\}$, the switching jumps satisfy $\ell_i>0$, $|\hat d_i|<1$, $|\check d_i|<1$, and $\hat\kappa_{ij}$ and $\check\kappa_{ij}$ are known constants. Based on (7.2), we first define the following switching functions (SFs):

\[
\vartheta^d_{ii}(z_i(\cdot)) \triangleq \begin{cases} 1, & |z_i(\cdot)|>\ell_i, \\ -1, & |z_i(\cdot)|\le\ell_i, \end{cases}
\qquad
\vartheta^\kappa_{ij}(z_i(\cdot)) \triangleq \begin{cases} 1, & |z_i(\cdot)|>\ell_i, \\ -1, & |z_i(\cdot)|\le\ell_i, \end{cases}
\tag{7.3}
\]

and $\vartheta^d_{ij}(z_i(t))\triangleq 0$ for $i\ne j$. Denote $\underline d_i\triangleq\min\{\hat d_i,\check d_i\}$, $\bar d_i\triangleq\max\{\hat d_i,\check d_i\}$, $\underline\kappa_{ij}\triangleq\min\{\hat\kappa_{ij},\check\kappa_{ij}\}$ and $\bar\kappa_{ij}\triangleq\max\{\hat\kappa_{ij},\check\kappa_{ij}\}$. In addition, we introduce the following matrices:

\[
\begin{aligned}
& \underline D \triangleq \mathrm{diag}\{\underline d_1,\underline d_2,\cdots,\underline d_n\}, \quad \bar D \triangleq \mathrm{diag}\{\bar d_1,\bar d_2,\cdots,\bar d_n\}, \quad D_0 \triangleq (\underline D+\bar D)/2, \\
& \underline S \triangleq (\underline\kappa_{ij})_{n\times n}, \quad \bar S \triangleq (\bar\kappa_{ij})_{n\times n}, \quad S_0 \triangleq (\underline S+\bar S)/2, \quad S\in\{A,B,C\}.
\end{aligned}
\]

Thus, $D(z(t))$, $A(z(t))$, $B(z(t))$ and $C(z(t))$ are rewritten as

\[
D(z(t)) = D_0 + \Delta D(t), \quad A(z(t)) = A_0 + \Delta A(t), \quad B(z(t)) = B_0 + \Delta B(t), \quad C(z(t)) = C_0 + \Delta C(t)
\tag{7.4}
\]

with

\[
\Delta D(z(t)) \triangleq H_d\vartheta^d(z(t))E_d, \quad \Delta A(z(t)) \triangleq H_a\vartheta^a(z(t))E_a, \quad \Delta B(z(t)) \triangleq H_b\vartheta^b(z(t))E_b, \quad \Delta C(z(t)) \triangleq H_c\vartheta^c(z(t))E_c
\]
where

\[
\begin{aligned}
& H_r \triangleq \big[t^r_{11}e_1\;\cdots\;t^r_{1n}e_1\;\cdots\;t^r_{n1}e_n\;\cdots\;t^r_{nn}e_n\big], \\
& E^T_r \triangleq \big[t^r_{11}e_1\;\cdots\;t^r_{1n}e_n\;\cdots\;t^r_{n1}e_1\;\cdots\;t^r_{nn}e_n\big], \\
& \vartheta^r(z(t)) \triangleq \mathrm{diag}\{\vartheta^r_{11}(z_1(t)),\cdots,\vartheta^r_{1n}(z_1(t)),\cdots,\vartheta^r_{n1}(z_n(t)),\cdots,\vartheta^r_{nn}(z_n(t))\}, \\
& t^r_{ij} \triangleq \big(|\bar r_{ij}-\underline r_{ij}|/2\big)^{1/2},
\end{aligned}
\]

with $r\in\{d,a,b,c\}$, and $e_i\in\mathbb{R}^n$ is the column vector with 1 as its $i$th entry and 0 elsewhere. Apparently, $\vartheta^r(z(t))$ satisfies $\vartheta^{rT}(z(t))\vartheta^r(z(t))\le I$.

Remark 7.1 For computer-based calculations, experiments and simulations, continuous-time networks often need to be discretized in the implementation process, which merits the study of DMNNs. Note that available results on the dynamics analysis of DMNNs are still scattered. Additionally, owing to the state-dependent feature of the memristive CWs, it is mathematically difficult to analyze the dynamics of MNNs, and conventional methods cannot be directly employed to investigate the stability and estimation issues for DMNNs. Thanks to the novel SFs (7.3), we are able to convert the MNN (7.1) into an equivalent system with parametric uncertainties, to which traditional robust analysis approaches can be readily applied.
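The decomposition (7.4) can be sanity-checked on a single weight: with $t=(|\bar d-\underline d|/2)^{1/2}$ and $\vartheta\in\{+1,-1\}$, the scalar term $d_0+t\vartheta t$ reproduces the two memristive values exactly. A sketch with a function name of our own choosing:

```python
import math

def interval_decomposition(d_hat, d_check):
    """Scalar version of (7.4): d(z) = d0 + t*theta*t with theta in {+1, -1}."""
    d_lo, d_hi = sorted((d_hat, d_check))
    d0 = (d_lo + d_hi) / 2.0            # midpoint, the entry of D0
    t = math.sqrt((d_hi - d_lo) / 2.0)  # the entry t_ij shared by H_r and E_r
    return {+1: d0 + t * 1 * t, -1: d0 + t * (-1) * t}

recon = interval_decomposition(0.990, 0.930)  # d1 from the example in Section 7.3
assert abs(recon[+1] - 0.990) < 1e-9 and abs(recon[-1] - 0.930) < 1e-9
```

This is why the bound $\vartheta^{rT}\vartheta^r\le I$ suffices: the switching nonlinearity is exactly covered by a norm-bounded uncertainty.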
7.1.2 The WTOD Protocol

FIGURE 7.1: SEP for an MNN under the WTOD protocol: the measurement output of the memristive neural network is transmitted to the remote state estimator over a communication channel scheduled by the Weighted Try-Once-Discard protocol.
As shown in Fig. 7.1, the measurements $y(t)$ (the output of the MNN) are sent to the remote estimator via constrained transmission channels. To prevent data collisions and maximize the efficiency of data utilization, the WTODP is applied to schedule the data transmission between the MNN and the estimator. Under the WTODP, each entry of $y(t)$ has a priority that is directly proportional to the norm of the error between the current measurement and the last stored measurement. At each time instant, only the entry with the highest priority is allowed to be updated. If more than one entry is assigned the highest priority, an arbitrary one of them is chosen for updating.
Define $\bar y(t) \triangleq [\bar y_1(t)\;\bar y_2(t)\;\cdots\;\bar y_m(t)]^T$ as the measurement output received by the estimator, where $\bar y_s(t)$ ($s=1,2,\ldots,m$) is the $s$th entry of $\bar y(t)$. Let $\sigma_s$ be the known weight of the $s$th component, and let $\hbar(t)\in\{1,2,\ldots,m\}$ be the entry activated at time $t$. Then, $\hbar(t)$ can be determined by

\[
\hbar(t) = \arg\max_{1\le s\le m}\,(y(t)-\bar y(t-1))^T\bar Q_s(y(t)-\bar y(t-1))
\tag{7.5}
\]

where $\bar Q_s \triangleq Q\Psi_s$, $Q \triangleq \mathrm{diag}\{\sigma_1,\sigma_2,\ldots,\sigma_m\}$ and $\Psi_s \triangleq \mathrm{diag}\{\delta(s-1),\delta(s-2),\ldots,\delta(s-m)\}$. Denote

\[
\begin{aligned}
& \tilde z(t) \triangleq \big[z^T(t)\;\;\bar y^T(t-1)\big]^T, \quad \tilde f(\tilde z(t)) \triangleq \big[f^T(z(t))\;\;0\big]^T, \quad \tilde g(\tilde z(t-\tau_1)) \triangleq \big[g^T(z(t-\tau_1))\;\;0\big]^T, \\
& \tilde h(\tilde z(t-\iota)) \triangleq \big[h^T(z(t-\iota))\;\;0\big]^T, \quad \phi_2(\varsigma) \triangleq \big[\phi_0^T(\varsigma)\;\;\phi_1^T\big]^T, \quad \tilde v(t) \triangleq \big[v^T(t)\;\;v^T(t)\big]^T.
\end{aligned}
\]
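Since $\bar Q_s = Q\Psi_s$ singles out one component, the selection rule (7.5) reduces to picking the index with the largest $\sigma_s$-weighted squared innovation; combined with a zero-order hold on the other entries, one scheduling step can be sketched as follows (names are illustrative):

```python
import numpy as np

def wtod_update(y, y_bar_prev, sigma):
    """One step of the WTOD protocol: pick the entry with the largest
    weighted innovation and update only that entry of the transmitted vector."""
    innovation = y - y_bar_prev
    h = int(np.argmax(sigma * innovation**2))  # sigma_s * (y_s - ybar_s)^2
    y_bar = y_bar_prev.copy()
    y_bar[h] = y[h]                            # zero-order hold elsewhere
    return h, y_bar

y = np.array([1.0, 0.2, -0.5])
y_prev = np.zeros(3)
sigma = np.array([1.0, 4.0, 1.0])
h, y_bar = wtod_update(y, y_prev, sigma)
assert h == 0 and np.allclose(y_bar, [1.0, 0.0, 0.0])
```

Note how the weights $\sigma_s$ let the designer prioritize measurement channels of different importance.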
(7.6)
and y¯(t) = φ1 for t < 0 where φ1 is known. On the basis of (7.5), the DMNN in (7.1) is rewritten as: z˜(t + 1) =D(t)˜ z (t) + A(t)f˜(˜ z (t)) + B(t)˜ g (˜ z (t − τ1 )) τ 2 X ˜ z (t − ι)) + L1 v˜(t), µι h(˜ + C(t) (7.7) ι=1 y¯(t) =M˜ z (t) + L2 v(t), z˜(ς) =φ2 (ς), ς = −τ, −τ + 1, . . . , −1, 0 where D(z(t)) 0 D(t) , , Ψ~(t) M Im − Ψ~(t)
A(t) , diag{A(z(t)), 0},
B(t) , diag{B(z(t)), 0}, L1 , diag{L1 , L2 },
C(t) , diag{C(z(t)), 0}, L2 , Ψ~(t) L2 , M , Ψ~(t) M
Im − Ψ~(t) .
Based on (7.7), the following resilient set-membership estimator (RSME) is constructed: ¯ zˆ˜(t) + A¯f˜(zˆ˜(t)) zˆ ˜(t + 1) =D τ2 X ˜ zˆ˜(t − ι)) ¯g (zˆ˜(t − τ1 )) + C¯ + B˜ µι h( (7.8) ι=1 + (K(t) + ∆K(t))(¯ y (t) − Mzˆ˜(t)), zˆ ˜(ς) =0, ς = −τ, −τ + 1, . . . , −1, 0
122
On State Estimation
where zˆ(t) D0 0 ¯, zˆ ˜(t) , ˆ , D , y¯(t − 1) Ψ~(t) M Im − Ψ~(t) A¯ , diag{A0 , 0}, B¯ , diag{B0 , 0}, C¯ , diag{C0 , 0}, K(t) , [ KzT (t) KyT (t) ]T , ∆K(t) , [ ∆KzT (t)
∆KyT (t) ]T .
Here, zˆ ˜(t) ∈ Rn+m is the estimate of z˜(t), Kz (t) ∈ Rn×m and Ky (t) ∈ Rm×m are parameters to be determined. ∆Kz (t) and ∆Ky (t) are parameter variations satisfying ∆Kz (t) = Hz Fz (t)Tz , ∆Ky (t) = Hy Fy (t)Ty where Hz , Hy , Tz and Ty are known matrices, and the unknown matrices Fz and Fy satisfy FzT (t)Fz (t) ≤ I and FyT (t)Fy (t) ≤ I. Let T e(t) , z˜(t) − zˆ˜(t), ξ(t) , v˜T (t) v T (t) , F (e(t)) , f˜(˜ z (t)) − f˜(z˜ˆ(t)), G(e(t − τ1 )) , g˜(˜ z (t − τ1 )) − g˜(zˆ˜(t − τ1 )), ˜ z (t − ι)) − h( ˜ zˆ˜(t − ι)). H(e(t − ι)) , h(˜ Then, the estimation error system with respect to (7.7) and (7.8) is e(t + 1) ¯ − K(t)M)e(t) ˜ ¯ (e(t)) = (D + ∆D(t)˜ z (t) + AF ¯ + ∆A(t)f˜(˜ z (t)) + BG(e(t − τ1 )) + ∆B(t)˜ g (˜ z (t − τ1 )) +
τ2 X
¯ µι CH(e(t − ι))
ι=1
+
τ2 X
˜ z (t − ι)) + Lξ(t) µι ∆C(t)h(˜
ι=1
where ∆D(z(t)) 0 ∆B(z(t)) ∆B(t) , 0 ˜ L , L1 −K(t)L 2 ,
∆D(t) ,
0 ∆A(z(t)) 0 , ∆A(t) , , 0 0 0 0 ∆C(z(t)) 0 , ∆C(t) , , 0 0 0 ˜ , K(t) + ∆K(t). K(t)
(7.9)
Problem Formulation
123
In what follows, we define vectors T ˜ , v˜T (t) ξ T (t) T , γ(t) , z˜T (t) eT (t) , ξ(t) T F˜ (γ(t)) , f˜T (˜ z (t)) F T (e(t)) , T ˜ z (t − τ1 )) GT (e(t − τ1 )) , G(γ(t − τ1 )) , g˜T (˜ T T ˜ ˜ (˜ H(γ(t − ι)) , h z (t − ι)) H T (e(t − ι)) , and then have the augmented system ˜ ˜ F˜ (γ(t)) + B(t) ˜ G(γ(t ˜ γ(t + 1) = D(t)γ(t) + A(t) − τ1 )) τ 2 X ˜ ˜ ˜ + C(t) µι H(γ(t − ι)) + L˜ξ(t)
(7.10)
ι=1
where ˜ ,D ¯ γ + ∆Dγ (t), A(t) ˜ , A¯γ + ∆Aγ (t), B(t) ˜ , B¯γ + ∆Bγ (t), D(t) ˜ , C¯γ + ∆Cγ (t), L˜ , L¯γ + ∆Lγ , D ¯ γ , diag{D, ¯ D ¯ − K(t)M}, C(t) ¯ A}, ¯ ¯ B}, ¯ ¯ C}, ¯ A¯γ , diag{A, B¯γ , diag{B, C¯γ , diag{C, ¯ ¯ L¯ , L1 −K(t)L2 , L¯γ , diag{L1 , L}, ∆Lγ , diag{0, ∆L}, ∆D(t) 0 ∆L¯ , 0 −∆K(t)L2 , ∆Dγ (t) , , ∆D(t) −∆K(t)M ∆C(t) 0 ∆A(t) 0 ∆B(t) 0 ∆Cγ (t) , , ∆Aγ (t) , , ∆Bγ (t) , . ∆C(t) 0 ∆A(t) 0 ∆B(t) 0 For presentation convenience, we now introduce the following assumptions and definitions. Assumption 7.1 φ2 (ς) (ς = −τ, −τ + 1, · · · , −1, 0) satisfy φT2 (ς)P −1 (ς)φ2 (ς) ≤ 1
(7.11)
where matrices P (ς) (ς = −τ, −τ + 1, · · · , −1, 0) are known and positivedefinite. Assumption 7.2 The external stochastic disturbance v(t) satisfies the following condition (7.12) v T (t)Γ−1 v (t)v(t) ≤ 1 where matrix Γv (t) is known and positive-definite. Assumption 7.3 The neuron AFs f (·), g(·) and h(·) satisfy f (0) = g(0) = h(0) = 0 and following Lipschitz conditions: kf (s) − f (t)k ≤ kΓ1 (s − t)k, kg(s) − g(t)k ≤ kΓ2 (s − t)k, kh(s) − h(t)k ≤ kΓ3 (s − t)k for all s, t ∈ Rn , where Γ1 , Γ2 and Γ3 are known constant matrices.
(7.13)
124
On State Estimation
Definition 7.1 Let the matrix (ellipsoid matrices) sequence P (t) ∈ R(n+m)×(n+m) (t ∈ N+ ) be given. System (7.10) is said to meet the P (t)dependent constraint if R(t) , γ T (t)P −1 (t)γ(t) ≤ 1
(7.14)
holds for t ∈ N+ . In this chapter, we are set to design an RSME that is capable of confining the estimates of the DMNN (7.1) to a CER under the UNBBNs. Such an aim is accomplished in two steps. First, for the given matrix sequence {P (t)}t∈N+ , we like to find the sufficient condition that ensures that the RSME exists and, subsequently, (7.10) meets the P (t)-dependent constraint (7.14). Second, we like to minimize the trace of P (t) through appropriately selecting K(t) that satisfies the aforementioned sufficient condition.
7.2
Main Results
This section aims at establishing sufficient conditions that guarantee that the P (t)-dependent constraint (7.14) is satisfied by system (7.10). Then, a recursive algorithm is proposed to determine K(t) under the WTODP. To start with, we present the following useful lemmas to benefit the subsequent derivation. Lemma 7.1 [9] Let ϕ0 (·), ϕ1 (·), · · · , ϕp (·) be quadratic functions of s ∈ Rn : ϕj (s) , sT Zi s (i = 0, 1, · · · , p) and Zj = ZjT . If there exist %1 ≥ 0, %2 ≥ Pp 0, · · · , %p ≥ 0 such that Z0 − i=1 %i Zi ≤ 0, then ϕ1 (s) ≤ 0, ϕ2 (s) ≤ 0, · · · , ϕp (s) ≤ 0 → ϕ0 (s) ≤ 0. Lemma 7.2 [9] Given matrices Ω1 , Ω2 , Ω3 , Ω1 = ΩT1 and Ω2 > 0, then Ω1 + ΩT3 Ω−1 2 Ω3 < 0 iff Ω1 Ω3
ΩT3 < 0. −Ω2
Lemma 7.3 [163] Given matrices N = N T , H, E and F T F ≤ I, then N + HFE + (HFE)T < 0 holds iff there exists a scalar µ > 0 such that N + µHHT + µ−1 E T E < 0
Main Results
125
or
N ∗ ∗
µH −µI ∗
ET 0 < 0. −µI
For the convenience of presentation, we denote ˜ r , HrT 0 T , E ˜r , Er 0 , P (t) , L(t)LT (t). H The following theorem gives a sufficient condition for the solvability of the concerned SEP. Theorem 7.1 Consider the DMNN (7.1) with given estimator (7.8) and matrices sequences {P (t) > 0}t∈N+ . If there exist K(t), ε(t), λi (t) (i ∈ {1, 2, 3, 4}) and s (t) (s ∈ {1, 2, · · · , m}, t ∈ N+ ) satisfying ˜ Ω(t) ∗ ∗ ˜ T −ε(t)I ε(t)H (7.15) ∗ 0}t∈N+ in Algorithm 7.1 as follows. Algorithm 7.1 Computational Algorithm for {K(t) > 0}t∈N+ . Step 1. Initialization: Set t = 0 and given N and P (ς). Step 2. Calculate L(ς) based on P (ς) = L(ς)LT (ς) for ς = t−τ1 and t−τ2 ≤ ς ≤ t. Step 3. Solve (7.34) s. t. (7.15). Then, K(t) and P (t + 1) are obtained based on the solution of (7.34). Step 4. Set t = t + 1. If t > N , exit. Otherwise, go to Step 2.
130
On State Estimation
Remark 7.2 In this chapter, the set-membership SEP for DMNNs has been effectively dealt with under HTDs and the WTODP. One can observe from Theorem 7.1 and Algorithm 7.1 that, in the pursuit of the RSME, all significant factors (including the state-dependent parameters, HTDs, noise information, estimation accuracy and the WTODP) are fully reflected in the above analysis, and a comprehensive framework is formulated under which the desired estimator gains are derived by taking into account the factors that complicate the concerned model.

Remark 7.3 The state estimation problem for artificial neural networks has received a great deal of research attention, and a large body of results is available in the literature. In comparison with the existing literature, the main results of this chapter exhibit the following distinctive features: 1) the set-membership state estimation problem discussed in this chapter is new in the sense that the random gain variation and the WTODP are taken into careful consideration; 2) a novel yet unified estimation scheme is developed to tackle the mathematical complexities stemming from the state-dependent switching behaviors and hybrid time-delays; and 3) the design algorithm of the desired filters is recursive and can be realized online.
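The WTODP grants channel access to a single node per transmission instant. Its precise selection rule is introduced earlier in the book; the sketch below reconstructs the usual weighted try-once-discard choice (the node with the largest weighted difference between its current output and its last transmitted value wins), using the weights of Section 7.3 and made-up measurement values:

```python
import numpy as np

# Illustrative WTOD-style scheduling decision (a reconstruction, not the
# book's exact definition): the node whose weighted innovation is largest is
# granted access; only its held value is refreshed.
sigma = np.array([1.0, 0.8])          # WTODP weights sigma_1, sigma_2 (Sec. 7.3)
y = np.array([0.6, -0.9])             # current measurements (made up)
y_last = np.array([0.5, -0.2])        # last transmitted values (made up)

innovation = y - y_last
winner = int(np.argmax(sigma * innovation**2))   # weighted try-once-discard
y_last[winner] = y[winner]                       # only the winner transmits
print(winner, y_last)                            # node 1 wins here
```

Note that a node with a smaller weight needs a proportionally larger innovation to win the channel, which is exactly how the protocol prioritizes nodes.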
7.3
An Illustrative Example
This section provides a demonstrative example to validate the proposed estimation approach. Consider (7.1) with the parameters
\[
d_1(z_1(\cdot))=\begin{cases}0.990,&|z_1(\cdot)|>0.3\\0.930,&|z_1(\cdot)|\le 0.3\end{cases}\qquad
d_2(z_2(\cdot))=\begin{cases}0.420,&|z_2(\cdot)|>0.3\\0.860,&|z_2(\cdot)|\le 0.3\end{cases}
\]
\[
a_{11}(z_1(\cdot))=\begin{cases}0.060,&|z_1(\cdot)|>0.3\\0.050,&|z_1(\cdot)|\le 0.3\end{cases}\qquad
a_{12}(z_1(\cdot))=\begin{cases}-0.330,&|z_1(\cdot)|>0.3\\-0.350,&|z_1(\cdot)|\le 0.3\end{cases}
\]
\[
a_{21}(z_2(\cdot))=\begin{cases}-0.230,&|z_2(\cdot)|>0.3\\-0.200,&|z_2(\cdot)|\le 0.3\end{cases}\qquad
a_{22}(z_2(\cdot))=\begin{cases}0.060,&|z_2(\cdot)|>0.3\\0.040,&|z_2(\cdot)|\le 0.3\end{cases}
\]
\[
b_{11}(z_1(\cdot))=\begin{cases}-0.300,&|z_1(\cdot)|>0.3\\0.240,&|z_1(\cdot)|\le 0.3\end{cases}\qquad
b_{12}(z_1(\cdot))=\begin{cases}0.005,&|z_1(\cdot)|>0.3\\0.015,&|z_1(\cdot)|\le 0.3\end{cases}
\]
\[
b_{21}(z_2(\cdot))=\begin{cases}-0.310,&|z_2(\cdot)|>0.3\\0.270,&|z_2(\cdot)|\le 0.3\end{cases}\qquad
b_{22}(z_2(\cdot))=\begin{cases}0.020,&|z_2(\cdot)|>0.3\\0.060,&|z_2(\cdot)|\le 0.3\end{cases}
\]
\[
c_{11}(z_1(\cdot))=\begin{cases}0.050,&|z_1(\cdot)|>0.3\\0.010,&|z_1(\cdot)|\le 0.3\end{cases}\qquad
c_{12}(z_1(\cdot))=\begin{cases}0.015,&|z_1(\cdot)|>0.3\\0.045,&|z_1(\cdot)|\le 0.3\end{cases}
\]
\[
c_{21}(z_2(\cdot))=\begin{cases}0.260,&|z_2(\cdot)|>0.3\\-0.300,&|z_2(\cdot)|\le 0.3\end{cases}\qquad
c_{22}(z_2(\cdot))=\begin{cases}0.015,&|z_2(\cdot)|>0.3\\0.035,&|z_2(\cdot)|\le 0.3\end{cases}
\]
\[
\Gamma_1=\Gamma_2=\Gamma_3=\begin{bmatrix}0.25&0\\0&0.36\end{bmatrix},\qquad \tau_1=1,\qquad \tau_2=3.
\]
The weight values in the WTODP are σ1 = 1.0 and σ2 = 0.8. The bounded noise is set as v(t) = √0.02 sin(t), and thus we have Γv (t) = 0.02I for t ∈ N+ . Furthermore, φ2 (ς) = [3 3 1 1]T and P (ς) = diag{9, 9, 1, 1, 9, 9, 1, 1} (ς = −τ, −τ + 1, . . . , 0). Using the Matlab YALMIP 3.0 Toolbox, the optimization problem (7.34) is solved subject to (7.15), and the corresponding demonstration results are given in Figs. 7.2–7.6. Fig. 7.2 reveals the impact of the neural states on the selected neural nodes, while Fig. 7.3 and Fig. 7.4 plot the actual and estimated trajectories of z1 (t), z2 (t) and their estimation errors, respectively. The actual and estimated values of ȳ1 (t), ȳ2 (t) and their estimation errors are displayed in Fig. 7.5 and Fig. 7.6, respectively. The obtained results demonstrate that the proposed estimation scheme is indeed effective.
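The state-dependent parameters above can be encoded directly as threshold functions of |z_i|. A minimal Python sketch using two of the entries listed above:

```python
# Memristive parameters of the Section 7.3 example: each entry takes one
# constant when |z_i| exceeds the switching threshold and another otherwise.
def switched(val_above, val_below, z, threshold=0.3):
    """Piecewise-constant memristive parameter as a function of the state z."""
    return val_above if abs(z) > threshold else val_below

def d1(z1):
    return switched(0.990, 0.930, z1)

def a11(z1):
    return switched(0.060, 0.050, z1)

print(d1(0.5), d1(0.1), a11(0.5), a11(0.1))
```

This state dependence is exactly what makes the connection weights "memristive" and is what the norm-bounded uncertainty reformulation later absorbs.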
FIGURE 7.2: The selected nodes of MNNs.
FIGURE 7.3: The actual and estimated trajectories of z1 (t) and z2 (t).
FIGURE 7.4: The estimation errors of z1 (t) and z2 (t).
FIGURE 7.5: The actual and estimated values of y¯1 (t) and y¯2 (t).
FIGURE 7.6: The estimation errors of y¯1 (t) and y¯2 (t).
7.4
Summary
The set-membership SEP has been solved in this chapter for DMNNs with HTDs under the WTODP. The WTODP has been utilized to mitigate the unnecessary network congestion occurring in the channel between the DMNNs and the estimator. To cater for possible execution errors, the estimator gains are allowed to be contaminated by bounded parameter fluctuations. An RSME has then been devised to achieve the estimate of the DMNN subject to the UBBNs and the WTODP. By means of RMIs, sufficient conditions have been established to ensure the existence of the desired RSME. Then, an optimization problem has been formulated by minimizing the CER (with respect to the estimation errors) under the WTODP. Finally, simulation results have illustrated the feasibility of our RSME. Note that the developed event-based resilient strategy might be applied to systems with higher complexities such as genetic regulatory networks and sensor networks.
8 On Finite-Horizon H∞ State Estimation for Discrete-Time Delayed Memristive Neural Networks under Stochastic Communication Protocol
The last decades have witnessed persistent research enthusiasm for the study of steady-state behaviors (e.g. asymptotic/exponential stability) of MNNs. It is notable that, in reality, along with ensured steady-state behaviors, systems are also required to possess satisfactory transient performance (e.g. finite-time convergence) over a finite horizon. Unfortunately, achieving finite-time stability is usually difficult since the system dynamics is required to converge to the equilibrium in a specified yet limited time. As a matter of fact, in comparison with absolute convergence, the idea of keeping the system states below a given level appears much more realistic over a finite horizon in the presence of exogenous disturbances. In other words, when considering transient performance, it might be more favorable to constrain the state evolution to a desirable level in accordance with engineering practice. As such, much research attention has recently been drawn to state estimation problems for MNNs under energy-bounded disturbances with transient-behavior requirements. As mentioned in Chapter 7, with the fast development of network technology, the state estimation algorithm of MNNs in practical engineering is nowadays sometimes required to be realized at a remote location in a networked environment, which gives rise to the remote state estimation issue. In such a case, the actual measurement outputs are often transmitted to a remote estimator through a communication medium (e.g. wireless/distributed networks). Because of the large scale of MNNs and the high intricacy of the tasks to be performed, the data volume of the network output can become considerably high, thereby posing great challenges (e.g. fading measurements and communication delays) to transmission networks of limited capacity. To handle these network-induced challenges, an effective measure is to leverage communication protocols that help regulate the data transmission.
Nevertheless, to the best of our knowledge, very few results have been reported so far on the finite-time H∞ state estimation problem for delayed MNNs under the regulation of the stochastic communication protocol (SCP), not to mention the case where energy-bounded disturbances are involved.
In response to the above discussion, in this chapter we aim at developing a finite-time H∞ estimator for delayed MNNs with energy-bounded disturbances under the stochastic communication protocol. Our objective is to construct an H∞ estimator ensuring a prescribed disturbance attenuation level over a finite time-horizon for the delayed MNNs. By virtue of a Lyapunov-Krasovskii functional in combination with stochastic analysis methods, delay-dependent criteria are established that guarantee the existence of the desired H∞ estimator. Subsequently, the estimator gains are computed by solving a bank of convex optimization problems. Finally, the validity of the designed H∞ estimator is demonstrated via a numerical example.
8.1
Problem Formulation and Preliminaries
Consider an MNN with the following structure:
\[
\begin{cases}
z(s+1) = D(z(s))z(s) + A(z(s))f(z(s)) + B(z(s))g(z(s-\tau)) + L(s)v(s)\\
y(s) = C(s)z(s) + M(s)v(s)\\
\bar z(s) = N(s)z(s)
\end{cases} \tag{8.1}
\]
where z(s) = [z1 (s) z2 (s) · · · zn (s)]T , y(s) = [y1 (s) y2 (s) · · · ym (s)]T and D(z(s)) = diag{d1 (z1 (s)), d2 (z2 (s)), · · · , dn (zn (s))} with di (zi (s)) > 0 (i = 1, 2, . . . , n) are the neuron state vector, the ideal measurement output and the self-feedback matrix, respectively; z̄(s) ∈ Rp is the neural state to be estimated; v(s) ∈ l2 [0, N − 1] is the disturbance vector; A(z(s)) = [aij (zi (s))]n×n and B(z(s)) = [bij (zi (s))]n×n are the connection weight matrices with no delays and with discrete delays, respectively; the matrices C(s), L(s), M (s) and N (s) are known and of compatible dimensions. The nonlinear neuron activation functions (NNAFs) f (z(s)) and g(z(s − τ )) have the forms
\[
f(z(s)) = \begin{bmatrix}f_1(z_1(s)) & f_2(z_2(s)) & \cdots & f_n(z_n(s))\end{bmatrix}^T,\quad
g(z(s-\tau)) = \begin{bmatrix}g_1(z_1(s-\tau)) & \cdots & g_n(z_n(s-\tau))\end{bmatrix}^T,
\]
where τ ∈ Z+ is a constant delay and the initial condition is φ(s) = [φ1 (s) φ2 (s) · · · φn (s)]T for s ∈ [−τ, 0].
Assumption 8.1 The NNAFs f (z(s)) and g(z(s)) are assumed to be continuous and satisfy
\[
\|f(z(s))\|^2 \le \vartheta_f(s)\|\Upsilon_f(s)z(s)\|^2, \tag{8.2}
\]
\[
\|g(z(s))\|^2 \le \vartheta_g(s)\|\Upsilon_g(s)z(s)\|^2 \tag{8.3}
\]
for all s ∈ [0, N], where ϑf (s) and ϑg (s) are known positive scalars, Υf (s) and Υg (s) are known matrices, and f (0) = g(0) = 0.

By using the technique employed in [76], the MNN (8.1) is rewritten as
\[
\begin{cases}
z(s+1) = (\bar D + \Delta D(s))z(s) + (\bar A + \Delta A(s))f(z(s)) + (\bar B + \Delta B(s))g(z(s-\tau)) + L(s)v(s)\\
y(s) = C(s)z(s) + M(s)v(s)\\
\bar z(s) = N(s)z(s)
\end{cases} \tag{8.4}
\]
where
\[
\bar D \triangleq \mathrm{diag}\{d_1, d_2, \ldots, d_n\},\quad \bar A \triangleq [a_{ij}]_{n\times n},\quad \bar B \triangleq [b_{ij}]_{n\times n},
\]
\[
\Delta D(s) \triangleq HF_1(s)E_1,\quad \Delta A(s) \triangleq HF_2(s)E_2,\quad \Delta B(s) \triangleq HF_3(s)E_3.
\]
Here, H = [H1 H2 · · · Hn ] and Ei = [E Ti1 E Ti2 · · · E Tin ]T (i = 1, 2, 3) are known matrices, dj , aij and bij are known positive scalars, and Fi (s) (i = 1, 2, 3) satisfies FiT (s)Fi (s) ≤ I.

Remark 8.1 In most of the existing literature, norm-bounded conditions are enforced to facilitate the handling of parameter uncertainties. It is worth mentioning that, in the context of MNNs, it is exactly the time-varying uncertain terms ∆D(s), ∆A(s) and ∆B(s) that reflect the influence of the memristors. It should be emphasized that, in the current work, our main focus is on examining the effect of the state-dependent switching (converted into norm-bounded uncertainties) that relies on the characteristics of the memristor and the current-voltage relationship.

For the purpose of reducing data collisions and mitigating network burdens, we adopt the SCP to schedule the data transmission. Let ~yi (s) be the actually received data from the ith (i = 1, 2, . . . , m) node. Under the SCP, ~yi (s) updates as
\[
\vec y_i(s) = \begin{cases} y_i(s), & \text{if } i = \xi(s)\\ \vec y_i(s-1), & \text{otherwise} \end{cases} \tag{8.5}
\]
where ξ(s) ∈ {1, 2, . . . , m} denotes the sensor selected for data transmission at time s. ξ(s) is regulated by a Markov chain with the transition probability matrix P = [pij ]m×m , where
\[
p_{ij} \triangleq \mathrm{Prob}\{\xi(s+1) = j \mid \xi(s) = i\},\qquad i, j = 1, 2, \ldots, m. \tag{8.6}
\]
Here pij ≥ 0 and Σm
j=1 pij = 1. Consequently, this description leads to the following data exchange model between the transmitter and the receiver:
\[
\vec y(s) = \Psi_{\xi(s)} y(s) + (I - \Psi_{\xi(s)})\vec y(s-1) \tag{8.7}
\]
where Ψξ(s) = diag{δ(ξ(s) − 1), δ(ξ(s) − 2), . . . , δ(ξ(s) − m)} is the update matrix, and δ(·) is the Kronecker delta function.
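The SCP update (8.5)–(8.7) is straightforward to simulate: a Markov chain picks one sensor per step, that sensor's held value is refreshed, and the rest keep their zero-order-hold values. The sketch below uses an illustrative transition matrix and stand-in measurements (sensors are indexed from 0 rather than 1):

```python
import numpy as np

# Sketch of the SCP data-exchange model (8.7) with the Markov scheduling (8.6).
# The transition matrix and measurement signal are illustrative, not from the
# chapter's example.
rng = np.random.default_rng(1)
P_trans = np.array([[0.7, 0.3],
                    [0.4, 0.6]])      # p_ij = Prob{xi(s+1)=j | xi(s)=i}

m = 2
xi = 0                                # initially scheduled sensor
y_held = np.zeros(m)                  # \vec{y}(s-1), zero-order-hold storage
for s in range(10):
    y = np.sin(np.arange(m) + 0.1 * s)            # stand-in measurements
    Psi = np.diag([1.0 if i == xi else 0.0 for i in range(m)])
    y_held = Psi @ y + (np.eye(m) - Psi) @ y_held  # update model (8.7)
    xi = int(rng.choice(m, p=P_trans[xi]))         # Markov transition (8.6)
print(y_held)
```

At every step exactly one entry of `y_held` is refreshed, which is precisely the "one sensor per transmission" constraint that the protocol enforces.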
FIGURE 8.1: State estimation with stochastic communication protocols.
Remark 8.2 In general, communication protocols can be classified into two categories, i.e. static and dynamic protocols. Specifically, the Round-Robin protocol is an equal-allocation scheduling mechanism that belongs to the category of static protocols, whereas the SCP belongs to the dynamic protocols, in which Markov chains are utilized to model the regulation procedure. Regardless of the protocol category, at each time step s, only one sensor is permitted to access the network for data transmission. To make full use of the data, a zero-order-holder mechanism is implemented to store the information that is not transmitted.

Based on (8.1) and (8.5), the desired estimator is built as
\[
\begin{cases}
\hat z(s+1) = \bar D\hat z(s) + \bar A f(\hat z(s)) + \bar B g(\hat z(s-\tau)) + K(s)(\vec y(s) - C(s)\hat z(s))\\
\hat{\bar z}(s) = N(s)\hat z(s)
\end{cases} \tag{8.8}
\]
where ẑ(s) and ẑ̄(s) are the estimates of z(s) and z̄(s), respectively, and K(s) is the parameter to be designed. See Fig. 8.1 for the schematic structure of the state estimation problem with the stochastic communication protocol. By defining
\[
e(s) \triangleq z(s) - \hat z(s),\quad \tilde{\bar z}(s) \triangleq \bar z(s) - \hat{\bar z}(s),\quad \tilde f(s) \triangleq f(z(s)) - f(\hat z(s)),\quad \tilde g(s-\tau) \triangleq g(z(s-\tau)) - g(\hat z(s-\tau)),
\]
we acquire the estimation error dynamics from (8.4), (8.7) and (8.8) as follows:
\[
\begin{aligned}
e(s+1) ={}& (\bar D - K(s)\Psi_{\xi(s)}C(s))e(s) + (\Delta D(s) - K(s)(\Psi_{\xi(s)} - I)C(s))z(s)\\
& + \bar A\tilde f(s) + \Delta A(s)f(z(s)) + \bar B\tilde g(s-\tau) + \Delta B(s)g(z(s-\tau))\\
& - K(s)(I - \Psi_{\xi(s)})\vec y(s-1) + (L(s) - K(s)\Psi_{\xi(s)}M(s))v(s)\\
\tilde{\bar z}(s) ={}& N(s)e(s).
\end{aligned}\tag{8.9}
\]
By denoting
\[
\eta(s) \triangleq \begin{bmatrix}z^T(s) & e^T(s) & \vec y^T(s-1)\end{bmatrix}^T,\quad
\vec f(\eta(s)) \triangleq \begin{bmatrix}f^T(z(s)) & \tilde f^T(s)\end{bmatrix}^T,\quad
\vec g(\eta(s-\tau)) \triangleq \begin{bmatrix}g^T(z(s-\tau)) & \tilde g^T(s-\tau)\end{bmatrix}^T,
\]
and taking into account (8.1), (8.7) and (8.9), we further have the following augmented system:
\[
\begin{cases}
\eta(s+1) = \mathcal D(s)\eta(s) + \mathcal A(s)\vec f(\eta(s)) + \mathcal B(s)\vec g(\eta(s-\tau)) + \mathcal L(s)v(s)\\
\tilde{\bar z}(s) = \mathcal N(s)\eta(s)
\end{cases} \tag{8.10}
\]
where
\[
\mathcal D(s) \triangleq \bar{\mathcal D} + \Delta\mathcal D(s),\quad \mathcal A(s) \triangleq \bar{\mathcal A} + \Delta\mathcal A(s),\quad \mathcal B(s) \triangleq \bar{\mathcal B} + \Delta\mathcal B(s),
\]
\[
\bar{\mathcal D} \triangleq \begin{bmatrix}\bar D & 0 & 0\\ \bar{\mathcal D}_{21} & \bar{\mathcal D}_{22} & \bar{\mathcal D}_{23}\\ \Psi_{\xi(s)}C(s) & 0 & I-\Psi_{\xi(s)}\end{bmatrix},\quad
\Delta\mathcal D(s) \triangleq \begin{bmatrix}\Delta D(s) & 0 & 0\\ \Delta D(s) & 0 & 0\\ 0 & 0 & 0\end{bmatrix},
\]
\[
\bar{\mathcal A} \triangleq \begin{bmatrix}\bar A & 0\\ 0 & \bar A\\ 0 & 0\end{bmatrix},\quad
\Delta\mathcal A(s) \triangleq \begin{bmatrix}\Delta A(s) & 0\\ \Delta A(s) & 0\\ 0 & 0\end{bmatrix},\quad
\bar{\mathcal B} \triangleq \begin{bmatrix}\bar B & 0\\ 0 & \bar B\\ 0 & 0\end{bmatrix},\quad
\Delta\mathcal B(s) \triangleq \begin{bmatrix}\Delta B(s) & 0\\ \Delta B(s) & 0\\ 0 & 0\end{bmatrix},
\]
\[
\mathcal L(s) \triangleq \begin{bmatrix}L(s)\\ L(s) - K(s)\Psi_{\xi(s)}M(s)\\ \Psi_{\xi(s)}M(s)\end{bmatrix},\quad
\mathcal N(s) \triangleq \begin{bmatrix}0 & N(s) & 0\end{bmatrix},
\]
\[
\bar{\mathcal D}_{21} \triangleq K(s)(I-\Psi_{\xi(s)})C(s),\quad
\bar{\mathcal D}_{22} \triangleq \bar D - K(s)\Psi_{\xi(s)}C(s),\quad
\bar{\mathcal D}_{23} \triangleq -K(s)(I-\Psi_{\xi(s)}).
\]
In this chapter, it is our purpose to design an estimator of form (8.8) such that
\[
\mathbb E\Big\{\sum_{s=0}^{N-1}\|\tilde{\bar z}(s)\|^2\Big\} \le \gamma^2\,\mathbb E\Big\{\sum_{s=0}^{N-1}\|v(s)\|^2 + \sum_{s=-\tau}^{0}\eta^T(s)S(s)\eta(s)\Big\} \tag{8.11}
\]
where γ > 0 is a prescribed disturbance attenuation level and {S(s)}−τ ≤s≤0 are known positive-definite matrices.
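The meaning of a disturbance attenuation level like the one in (8.11) can be illustrated on a toy scalar system (not the chapter's MNN): for e(s+1) = 0.5 e(s) + v(s) with output 0.3 e(s) and zero initial state, the worst-case energy gain is 0.3/(1 − 0.5) = 0.6, so the output energy never exceeds γ² times the input energy for γ = 0.6:

```python
import numpy as np

# Toy illustration of an H-infinity energy bound (the system here is made up
# for illustration; it is not the augmented system (8.10)).
gamma = 0.6
rng = np.random.default_rng(2)
v = rng.standard_normal(50)           # arbitrary energy-bounded disturbance

e = 0.0
out_energy = 0.0
for s in range(50):
    out_energy += (0.3 * e) ** 2      # output ztilde(s) = 0.3 e(s)
    e = 0.5 * e + v[s]                # state update with disturbance
in_energy = float(np.sum(v ** 2))
print(out_energy <= gamma ** 2 * in_energy)   # True for any disturbance
```

The inequality holds for every disturbance sequence because 0.6 is the H∞ norm of the transfer function 0.3/(z − 0.5); the finite-horizon sum in (8.11) is bounded in exactly the same energy-ratio sense.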
8.2
Main Results
Before presenting our main results, we first introduce the following lemmas, which are helpful in the subsequent derivations.

Lemma 8.1 (Schur Complement Equivalence [9]) Given constant matrices S1 , S2 and S3 , where S1 = ST1 and 0 < S2 = ST2 , then S1 + ST3 S−1_2 S3 < 0 if and only if
\[
\begin{bmatrix}-S_2 & S_3\\ S_3^T & S_1\end{bmatrix} < 0 \quad\text{or}\quad \begin{bmatrix}S_1 & S_3^T\\ S_3 & -S_2\end{bmatrix} < 0. \tag{8.12}
\]

Lemma 8.2 [9] Let M = MT , H and E be real matrices of appropriate dimensions, and let ∆ satisfy k∆k ≤ 1. Then
\[
\mathcal M + \mathcal H\Delta\mathcal E + \mathcal E^T\Delta^T\mathcal H^T \le 0 \tag{8.13}
\]
if and only if there exists a positive scalar ε such that
\[
\mathcal M + \varepsilon\mathcal H\mathcal H^T + \varepsilon^{-1}\mathcal E^T\mathcal E \le 0. \tag{8.14}
\]
We are now in a position to present the sufficient condition under which the required H∞ performance is guaranteed.

Theorem 8.1 Let the attenuation level γ > 0, the matrix sequence {S(s)}−τ ≤s≤0 and the parameter K(s) be given. The estimation error z̃̄(s) satisfies the H∞ constraint (8.11) if there exist matrix sequences {P (s)}1≤s≤N and {Q(s)}−τ +1≤s≤N+1 and scalar sequences {λ1 (s)}0≤s≤N−1 and {λ2 (s)}0≤s≤N−1 satisfying
\[
\mathbb E\Big\{\eta^T(0)P(0)\eta(0) + \sum_{s=-\tau}^{0}\eta^T(s)Q(s+1)\eta(s)\Big\} \le \gamma^2\,\mathbb E\Big\{\sum_{s=-\tau}^{0}\eta^T(s)S(s)\eta(s)\Big\} \tag{8.15}
\]
and
\[
\Omega(s) = \begin{bmatrix}
\Theta_{11} & 0 & \Theta_{13} & \Theta_{14} & \Theta_{15}\\
* & \Theta_{22} & 0 & 0 & 0\\
* & * & \Theta_{33} & \Theta_{34} & \Theta_{35}\\
* & * & * & \Theta_{44} & 0\\
* & * & * & * & \Theta_{55}
\end{bmatrix} < 0.
\]

Theorem 8.2 Let the attenuation level γ > 0 and {S(s)}−τ ≤s≤0 be given. The concerned H∞ state estimation issue of MNN (8.1) is solved if there exist matrix sequences {P (s) > 0}1≤s≤N (P (s) = diag{P1 (s), P2 (s), P3 (s)}), {Q(s) > 0}−τ +1≤s≤N+1 and {Y (s)}0≤s≤N−1 , and scalar sequences {λ1 (s)}0≤s≤N−1 , {λ2 (s)}0≤s≤N−1 and {κ(s)}0≤s≤N−1 satisfying (8.15) and the following RLMIs:
\[
\breve\Omega(s) = \begin{bmatrix}
\tilde\Omega(s) & \tilde{\mathcal H} & \kappa(s)\tilde{\mathcal E}^T\\
* & -\kappa(s)I & 0\\
* & * & -\kappa(s)I
\end{bmatrix} < 0. \tag{8.26}
\]

8.3

An Illustrative Example

This section gives a numerical example to demonstrate the validity of the proposed estimation scheme. Consider the MNN (8.1) with the following parameters:
\[
d_1(z_1(\cdot)) = 0.48,\quad d_2(z_2(\cdot)) = 0.64,\quad d_3(z_3(\cdot)) = 0.56,
\]
\[
b_{11}(z_1(\cdot)) = -0.15,\quad b_{12}(z_1(\cdot)) = 0.05,\quad b_{13}(z_1(\cdot)) = 0.10,
\]
\[
b_{21}(z_2(\cdot)) = -0.01,\quad b_{22}(z_2(\cdot)) = 0.20,\quad b_{23}(z_2(\cdot)) = 0.15,
\]
\[
b_{31}(z_3(\cdot)) = -0.01,\quad b_{32}(z_3(\cdot)) = 0.065,\quad b_{33}(z_3(\cdot)) = 0.075,
\]
each of which takes the same value for |zi (·)| > 0.01 and |zi (·)| ≤ 0.01, together with the state-dependent entries
\[
a_{11}(z_1(\cdot))=\begin{cases}0.12,&|z_1(\cdot)|>0.01\\0.10,&|z_1(\cdot)|\le 0.01\end{cases}\quad
a_{12}(z_1(\cdot))=\begin{cases}-0.66,&|z_1(\cdot)|>0.01\\-0.70,&|z_1(\cdot)|\le 0.01\end{cases}\quad
a_{13}(z_1(\cdot))=\begin{cases}0.10,&|z_1(\cdot)|>0.01\\0.07,&|z_1(\cdot)|\le 0.01\end{cases}
\]
\[
a_{21}(z_2(\cdot))=\begin{cases}-0.40,&|z_2(\cdot)|>0.01\\-0.46,&|z_2(\cdot)|\le 0.01\end{cases}\quad
a_{22}(z_2(\cdot))=\begin{cases}0.12,&|z_2(\cdot)|>0.01\\0.08,&|z_2(\cdot)|\le 0.01\end{cases}\quad
a_{23}(z_2(\cdot))=\begin{cases}0.11,&|z_2(\cdot)|>0.01\\0.09,&|z_2(\cdot)|\le 0.01\end{cases}
\]
\[
a_{31}(z_3(\cdot))=\begin{cases}0.10,&|z_3(\cdot)|>0.01\\0.08,&|z_3(\cdot)|\le 0.01\end{cases}\quad
a_{32}(z_3(\cdot))=\begin{cases}-0.36,&|z_3(\cdot)|>0.01\\-0.16,&|z_3(\cdot)|\le 0.01\end{cases}\quad
a_{33}(z_3(\cdot))=\begin{cases}0.12,&|z_3(\cdot)|>0.01\\0.10,&|z_3(\cdot)|\le 0.01\end{cases}
\]
Other parameters of system (8.1) are given by
\[
C(s) = \begin{bmatrix}0.30\sin(2s) & 0.30 & 0.25\\ -0.20 & 0.25 & 0.10\end{bmatrix},\quad
L(s) = \begin{bmatrix}0.01 & -0.01\sin(s) & 0\end{bmatrix}^T,
\]
\[
M(s) = \begin{bmatrix}-0.01 & 0.02\sin(3s)\end{bmatrix}^T,\quad
N(s) = \begin{bmatrix}0.10\sin(s) & -0.10 & 0.10\end{bmatrix}.
\]
The NNAFs are chosen as
\[
f(z(s)) = (1.2 + 0.12\sin(s))^{\frac12}\begin{bmatrix}\tanh(0.5z_1(s))\\ \tanh(0.7z_2(s))\\ \tanh(0.6z_3(s))\end{bmatrix},\qquad
g(z(s)) = (1.2 - 0.10\sin(s))^{\frac12}\begin{bmatrix}\tanh(0.5z_1(s))\\ \tanh(0.7z_2(s))\\ \tanh(0.6z_3(s))\end{bmatrix},
\]
which satisfy the constraints (8.2) and (8.3) with ϑf (s) = (1.2 + 0.12 sin(s))1/2 , ϑg (s) = (1.2 − 0.10 sin(s))1/2 , and
\[
\Upsilon_f(s) = \Upsilon_g(s) = \mathrm{diag}\{0.25,\ 0.49,\ 0.36\}.
\]
In the example, the time-delay is chosen as τ = 3, the attenuation level γ = 1.05, z̄(0) = [−0.7 0.5 0.4]T , ẑ̄(0) = [−0.5 0.3 0]T , and the simulation run length is set as N = 21.
Based on the proposed protocol-based H∞ algorithm, the RLMIs in Theorem 8.2 are solved recursively under the prescribed initial conditions, and the corresponding demonstration results are given in Figs. 8.2–8.5. Specifically, Figs. 8.2–8.4 show the neuron states, their estimates and the estimation errors ei (s) (i = 1, 2, 3), respectively, while Fig. 8.5 plots the output estimation errors z̃̄(s). From the simulation results, we see clearly that the proposed estimation scheme is indeed effective.
FIGURE 8.2: z1 (s) and ẑ1 (s).
FIGURE 8.3: z2 (s) and ẑ2 (s).
FIGURE 8.4: z3 (s) and zˆ3 (s).
FIGURE 8.5: z¯(s) and zˆ¯(s).
8.4
Summary
The finite-time H∞ state estimation problem has been investigated in this chapter for delayed MNNs under the SCP. To reduce communication
burdens in the case of large-scale data exchange in the sensor-estimator channel, the SCP (modeled as a Markov chain) has been used to regulate the data transmission process. First, a theoretical framework has been established for the addressed MNNs to analyze the finite-time H∞ performance. Within such a framework, sufficient conditions have been obtained for the existence of the desired remote estimator. Subsequently, the required estimator gains have been obtained by solving certain RLMIs. Finally, an illustrative example has been provided to verify the validity of our estimation scheme. Moreover, it is of interest to apply the developed algorithm in the near future to MNNs under other protocols, for instance, the RRP [134] and the TODP [101]. Also, the methodology proposed in this chapter can be utilized to deal with more complicated systems with more comprehensive network-induced phenomena [77, 86, 89, 114, 116, 131–133].
9 Resilient H∞ State Estimation for Discrete-Time Stochastic Delayed Memristive Neural Networks: A Dynamic Event-Triggered Mechanism
The circuit implementation of large-scale MNNs often faces the consumption of a huge amount of various resources (e.g. processing, storage and communication), and thus the resource-saving issue has emerged as a hot topic when addressing state estimation problems for MNNs. To reduce unnecessary resource consumption, the event-triggered mechanism (ETM) serves as an efficient means of implementing the sensor-to-estimator communication, where the current data is sent only if the prescribed triggering conditions (TCs) are satisfied. It is worth mentioning that most available ETMs are static, namely, the thresholds of the TCs are fixed (rather than adaptive or dynamic). To further reduce resource expenses, so-called dynamic ETMs have been established, in which the threshold parameters are dynamically adjusted at each checking time. In comparison with its static counterpart, the dynamic ETM is capable of decreasing the frequency of event releasing, thereby avoiding needless data transmissions and conserving resources. As such, it is of practical significance to explore how the dynamic ETM can be employed to coordinate the considerable data transmission between the MNN and the remote estimator, and this constitutes the main motivation of the present investigation. In this chapter, a resilient H∞ approach is put forward to deal with the state estimation problem for a type of discrete-time delayed memristive neural networks (MNNs) subject to stochastic disturbances and a dynamic event-triggered mechanism. The dynamic ETM is utilized to mitigate unnecessary resource consumption occurring in the sensor-to-estimator communication channel. To guarantee resilience against possible realization errors, the estimator gain is permitted to undergo some norm-bounded parameter drifts.
For the delayed MNNs, our aim is to devise an event-based resilient H∞ estimator that not only resists gain variations and stochastic disturbances, but also ensures the exponential mean-square stability of the resulting estimation error system with a guaranteed disturbance attenuation level. By resorting to the stochastic analysis technique, sufficient conditions are acquired for the expected estimator and, subsequently, the estimator gains are obtained via
solving a convex optimization problem. The validity of the H∞ estimator is finally shown via a numerical example.
9.1
Problem Formulation
Consider a class of stochastic delayed MNNs of the following form:
\[
\begin{cases}
z(s+1) = D(z(s))z(s) + A(z(s))f(z(s)) + B(z(s))f(z(s-\tau(s))) + L(s)v(s) + \sigma(s, z(s), z(s-\tau(s)))w(s)\\
y(s) = Cz(s) + Mv(s)\\
\bar z(s) = N(s)z(s)
\end{cases} \tag{9.1}
\]
where z(s) ≜ [z1 (s) z2 (s) · · · zn (s)]T , y(s) ≜ [y1 (s) y2 (s) · · · ym (s)]T and D(z(s)) ≜ diag{d1 (z1 (s)), d2 (z2 (s)), · · · , dn (zn (s))} are the neuron state vector, the ideal measurement output and the self-feedback matrix, respectively. z̄(s) ∈ Rp is the neuron state (NS) to be estimated, and di (zi (s)) > 0. A(z(s)) = [aij (zi (s))]n×n and B(z(s)) = [bij (zi (s))]n×n are the connection weights (CWs) with no delays and with discrete delays, respectively.
\[
f(z(s)) = \begin{bmatrix}f_1(z_1(s)) & f_2(z_2(s)) & \cdots & f_n(z_n(s))\end{bmatrix}^T
\]
is the nonlinear neuron activation function (NAF). v(s) ∈ l2 [0, +∞) is the disturbance vector, and w(s) is a scalar Wiener process (Brownian motion) defined on (Ω, F, P) with
\[
\mathbb E\{w(s)\} = 0,\quad \mathbb E\{w^2(s)\} = 1,\quad \mathbb E\{w(s)w(t)\} = 0\ (s \ne t), \tag{9.2}
\]
and σ : R × Rn × Rn → Rn is the noise intensity function satisfying
\[
\sigma^T(s, a, b)\sigma(s, a, b) \le \rho_1 a^T a + \rho_2 b^T b,\qquad \forall a, b \in \mathbb R^n, \tag{9.3}
\]
where ρ1 and ρ2 are known positive constants. The positive integer τ (s) is the time-delay satisfying
\[
\tau_m \le \tau(s) \le \tau_M,\qquad s \in \mathbb N^+, \tag{9.4}
\]
where τm and τM are known constants satisfying 0 ≤ τm ≤ τM . C, L, M and N are known matrices. The initial condition of (9.1) has the form φ(s) = [φ1 (s) φ2 (s) · · · φn (s)]T for s ∈ [−τM , 0].
Assumption 9.1 The NAF f (z(s)) is continuous and satisfies
\[
[f(x) - f(y) - \Lambda_1(x-y)]^T[f(x) - f(y) - \Lambda_2(x-y)] \le 0,\qquad \forall x, y \in \mathbb R^n\ (x \ne y), \tag{9.5}
\]
where Λ1 and Λ2 are constant matrices and f (0) = 0.

By using techniques similar to those employed in [76], the MNN (9.1) can be rewritten in the following form:
\[
\begin{cases}
z(s+1) = (\bar D + \Delta D(s))z(s) + (\bar A + \Delta A(s))f(z(s)) + (\bar B + \Delta B(s))f(z(s-\tau(s))) + Lv(s) + \sigma(s, z(s), z(s-\tau(s)))w(s)\\
y(s) = Cz(s) + Mv(s)\\
\bar z(s) = Nz(s)
\end{cases} \tag{9.6}
\]
where
\[
\bar D \triangleq \mathrm{diag}\{d_1, d_2, \ldots, d_n\},\quad \bar A \triangleq [a_{ij}]_{n\times n},\quad \bar B \triangleq [b_{ij}]_{n\times n},
\]
\[
\Delta D(s) \triangleq HF_1(s)E_1,\quad \Delta A(s) \triangleq HF_2(s)E_2,\quad \Delta B(s) \triangleq HF_3(s)E_3.
\]
Here, dj , aij and bij are known positive scalars, H = [H1 H2 · · · Hn ] and Ei = [E Ti1 E Ti2 · · · E Tin ]T (i = 1, 2, 3) are known matrices, and the time-varying matrix Fi (s) (i = 1, 2, 3) satisfies
\[
F_i^T(s)F_i(s) \le I. \tag{9.7}
\]
Remark 9.1 Compared with traditional RNNs, a remarkable feature of MNNs is the dependence of their memristive CWs on the neuron states (NSs), which makes it mathematically difficult to analyze the dynamics of the MNNs. Thanks to the state-dependent switching (converted into norm-bounded uncertainties) that relies on the characteristics of the memristors and the current-voltage relationship, we are able to convert the MNN (9.1) with state-dependent uncertainties into (9.6) with norm-bounded uncertainties contained in the terms ∆D(s), ∆A(s) and ∆B(s).

In this chapter, in order to reduce resource consumption in the sensor-to-estimator channel, a dynamic ETM is employed to decide whether the current data should be sent or not (see Fig. 9.1). Denote the triggering instant sequence as 0 ≤ s0 < s1 < · · · < sℓ < · · · , where sℓ+1 is determined by
\[
s_{\ell+1} = \min\Big\{ s \in [0, N] \,\Big|\, s > s_\ell,\ \frac{\varsigma(s)}{\vartheta} + \varrho\, y^T(s)y(s) - \epsilon^T(s)\epsilon(s) \le 0 \Big\}, \tag{9.8}
\]
where ϑ and ϱ are known positive scalars, ϵ(s) ≜ y(s) − y(sℓ ), y(sℓ ) is the latest transmitted measurement, and the dynamical variable ς(s) satisfies
\[
\varsigma(s+1) = \hbar\varsigma(s) + \varrho\, y^T(s)y(s) - \epsilon^T(s)\epsilon(s),\qquad \varsigma(0) = \varsigma_0, \tag{9.9}
\]
where 0 < ħ < 1 is a known positive scalar.

FIGURE 9.1: Remote state estimation under the dynamic ETM.
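The trigger (9.8) and the internal dynamics (9.9) are simple to simulate for a scalar measurement. The sketch below uses illustrative parameter values satisfying ħϑ ≥ 1, under which the internal variable ς(s) stays nonnegative (this is the content of Lemma 9.1 below):

```python
import numpy as np

# Sketch of the dynamic event-triggered mechanism (9.8)-(9.9) for a scalar
# measurement. Parameter values and the measurement signal are illustrative.
hbar, vartheta, rho = 0.5, 2.0, 0.1   # hbar * vartheta = 1 (Lemma 9.1 holds)
varsigma = 1.0                         # varsigma(0) >= 0
y_sent = 0.0                           # latest transmitted measurement y(s_l)
events = []

for s in range(30):
    y = np.sin(0.3 * s)                # stand-in measurement signal
    eps = y - y_sent                   # event error epsilon(s)
    if varsigma / vartheta + rho * y * y - eps * eps <= 0:   # condition (9.8)
        y_sent = y                     # transmit; the event error resets
        eps = 0.0
        events.append(s)
    varsigma = hbar * varsigma + rho * y * y - eps * eps     # dynamics (9.9)
    assert varsigma >= 0               # nonnegativity (Lemma 9.1)
print(len(events))                     # far fewer transmissions than 30 steps
```

Because ς(s)/ϑ is added to the left-hand side of the triggering test, transient violations of the static condition ϱy²(s) ≥ ϵ²(s) no longer force a transmission, which is exactly how the dynamic ETM lowers the triggering frequency.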
Remark 9.2 From the triggering condition (9.8), it is straightforward to see that (9.8) reduces to a traditional static ETM as ϑ approaches +∞. That is to say, the dynamic ETM (9.8) includes its static counterpart as a special case.

Based on the dynamic ETM, the desired estimator is built as
\[
\begin{cases}
\hat z(s+1) = \bar D\hat z(s) + \bar A f(\hat z(s)) + \bar B f(\hat z(s-\tau(s))) + (K(s) + \Delta K(s))(y(s_\ell) - C\hat z(s))\\
\hat{\bar z}(s) = N\hat z(s)
\end{cases} \tag{9.10}
\]
where ẑ(s) and ẑ̄(s) are the estimates of z(s) and z̄(s), respectively, K(s) is the parameter to be designed, and the gain perturbation ∆K(s) satisfies
\[
\Delta K(s) \triangleq H_K F_K(s) E_K \tag{9.11}
\]
where HK and EK are known matrices, and FK (s) satisfies F TK (s)FK (s) ≤ I. Let
\[
e(s) \triangleq z(s) - \hat z(s),\quad \tilde{\bar z}(s) \triangleq \bar z(s) - \hat{\bar z}(s),\quad \tilde f(s) \triangleq f(z(s)) - f(\hat z(s)),\quad \tilde f(s-\tau(s)) \triangleq f(z(s-\tau(s))) - f(\hat z(s-\tau(s))).
\]
estimation error:
\[
\begin{aligned}
e(s+1) ={}& (\bar D - K(s)C)e(s) - \Delta K(s)Ce(s) + \Delta D(s)z(s) + \bar A\tilde f(s) + \Delta A(s)f(z(s))\\
& + \bar B\tilde f(s-\tau(s)) + \Delta B(s)f(z(s-\tau(s))) + (L(s) - K(s)M)v(s) + K(s)\epsilon(s)\\
& - \Delta K(s)Mv(s) + \Delta K(s)\epsilon(s) + \sigma(s, z(s), z(s-\tau(s)))w(s)\\
\tilde{\bar z}(s) ={}& Ne(s).
\end{aligned}\tag{9.12}
\]
Furthermore, we now have the augmented system
\[
\begin{cases}
\eta(s+1) = \mathcal D(s)\eta(s) + \mathcal A(s)\vec f(\eta(s)) + \mathcal B(s)\vec f(\eta(s-\tau(s))) + \mathcal K(s)\zeta(s) + \mathcal L(s)v(s) + \vec\sigma(s)w(s)\\
\tilde{\bar z}(s) = \mathcal N(s)\eta(s)
\end{cases} \tag{9.13}
\]
where
\[
\eta(s) \triangleq \begin{bmatrix}z^T(s) & e^T(s)\end{bmatrix}^T,\quad
\zeta(s) \triangleq \begin{bmatrix}0 & \epsilon^T(s)\end{bmatrix}^T,\quad
\vec f(\eta(s)) \triangleq \begin{bmatrix}f^T(z(s)) & \tilde f^T(s)\end{bmatrix}^T,\quad
\vec f(\eta(s-\tau(s))) \triangleq \begin{bmatrix}f^T(z(s-\tau(s))) & \tilde f^T(s-\tau(s))\end{bmatrix}^T,
\]
\[
\mathcal D(s) \triangleq \bar{\mathcal D}(s) + \Delta\mathcal D(s),\quad \Delta\mathcal D(s) \triangleq \Delta\mathcal D_1(s) + \Delta\mathcal D_2(s),\quad \mathcal A(s) \triangleq \bar{\mathcal A} + \Delta\mathcal A(s),\quad \mathcal B(s) \triangleq \bar{\mathcal B} + \Delta\mathcal B(s),
\]
\[
\mathcal K(s) \triangleq \bar{\mathcal K}(s) + \Delta\mathcal K(s),\quad \mathcal L(s) \triangleq \bar{\mathcal L}(s) + \Delta\mathcal L(s),\quad \mathcal N(s) \triangleq \begin{bmatrix}0 & N\end{bmatrix},\quad \bar{\mathcal A} \triangleq \mathrm{diag}\{\bar A, \bar A\},\quad \bar{\mathcal B} \triangleq \mathrm{diag}\{\bar B, \bar B\},
\]
\[
\bar{\mathcal D}(s) \triangleq \begin{bmatrix}\bar D & 0\\ 0 & \bar D - K(s)C\end{bmatrix},\quad
\Delta\mathcal D_1(s) \triangleq \begin{bmatrix}\Delta D(s) & 0\\ \Delta D(s) & 0\end{bmatrix},\quad
\Delta\mathcal D_2(s) \triangleq \begin{bmatrix}0 & 0\\ 0 & -\Delta K(s)C\end{bmatrix},
\]
\[
\Delta\mathcal A(s) \triangleq \begin{bmatrix}\Delta A(s) & 0\\ \Delta A(s) & 0\end{bmatrix},\quad
\Delta\mathcal B(s) \triangleq \begin{bmatrix}\Delta B(s) & 0\\ \Delta B(s) & 0\end{bmatrix},\quad
\bar{\mathcal K}(s) \triangleq \begin{bmatrix}0 & 0\\ 0 & K(s)\end{bmatrix},\quad
\Delta\mathcal K(s) \triangleq \begin{bmatrix}0 & 0\\ 0 & \Delta K(s)\end{bmatrix},
\]
\[
\bar{\mathcal L}(s) \triangleq \begin{bmatrix}L(s)\\ L(s) - K(s)M\end{bmatrix},\quad
\Delta\mathcal L(s) \triangleq \begin{bmatrix}0\\ -\Delta K(s)M\end{bmatrix},\quad
\vec\sigma(s) \triangleq \begin{bmatrix}\sigma(s, z(s), z(s-\tau(s)))\\ \sigma(s, z(s), z(s-\tau(s)))\end{bmatrix}.
\]
Our main purpose is to design the estimator (9.10) such that
a) system (9.13) with v(s) = 0 is exponentially stable in the mean square;
b) for nonzero v(s),
\[
\mathbb E\Big\{\sum_{s=0}^{+\infty}\|\tilde{\bar z}(s)\|^2\Big\} \le \gamma^2 \sum_{s=0}^{+\infty}\|v(s)\|^2 \tag{9.14}
\]
where γ is a prescribed disturbance attenuation level.
9.2
Main Results
Lemma 9.1 [70] Given the dynamic ETM (9.8)–(9.9) with ς0 ≥ 0, if the parameters ħ (0 < ħ < 1) and ϑ (ϑ > 0) satisfy ħϑ ≥ 1, then ς(s) ≥ 0 for all s ∈ [0, N].

Remark 9.3 Lemma 9.1 shows that the dynamic variable ς(s) is nonnegative for s ∈ [0, N]. Unlike the static case, negative values of ϱy T (s)y(s) − ϵT (s)ϵ(s) are now acceptable. As a result, the triggering frequency is greatly reduced, thereby substantially alleviating the resource consumption in the sensor-to-estimator channel.

Theorem 9.1 Let the estimator gain K and the attenuation level γ > 0 be given. Then, the exponential mean-square stability (EMSS) of system (9.13) with v(s) = 0 is guaranteed if there exist a positive-definite matrix P and scalars λ1 , λ2 , λ3 and λ∗ such that
\[
\begin{bmatrix}
\bar\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15} & 0\\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 & 0\\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35} & 0\\
* & * & * & \bar\Theta_{44} & \Theta_{45} & 0\\
* & * & * & * & \bar\Theta_{55} & 0\\
* & * & * & * & * & \bar\Theta_{66}
\end{bmatrix} < 0. \tag{9.15}
\]

Theorem 9.2 Let the attenuation level γ > 0 be given. Then, system (9.13) is EMSS and satisfies the H∞ constraint (9.14) if there exist a positive-definite matrix P and scalars λ1 , λ2 , λ3 and λ∗ such that
\[
\tilde\Omega = \begin{bmatrix}
\tilde\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15} & 0 & \Theta_{17}\\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 & 0 & 0\\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35} & 0 & \Theta_{37}\\
* & * & * & \bar\Theta_{44} & \Theta_{45} & 0 & \Theta_{47}\\
* & * & * & * & \bar\Theta_{55} & 0 & \Theta_{57}\\
* & * & * & * & * & \bar\Theta_{66} & 0\\
* & * & * & * & * & * & \tilde\Theta_{77}
\end{bmatrix} < 0 \tag{9.24}
\]
where
\[
\tilde\Theta_{11} \triangleq \bar\Theta_{11} + \mathcal N^T\mathcal N,\quad
\Theta_{17} \triangleq \mathcal D^T(s)P\mathcal L + \frac{\varrho}{\vartheta}\tilde C^T M + \lambda_3\varrho\,\tilde C^T M,
\]
\[
\Theta_{37} \triangleq \mathcal A^T(s)P\mathcal L,\quad \Theta_{47} \triangleq \mathcal B^T(s)P\mathcal L,\quad \Theta_{57} \triangleq \mathcal K^T P\mathcal L,
\]
\[
\tilde\Theta_{77} \triangleq \Theta_{77} - \gamma^2 I,\quad \Theta_{77} \triangleq \lambda_3\varrho\,M^T M + \mathcal L^T P\mathcal L + \frac{\varrho}{\vartheta}M^T M.
\]
Proof Inequality (9.15) follows from (9.24) and the EMSS of system (9.13) is implied by Theorem 9.1.
For the analysis of the expected H∞ performance, by resorting to (9.17), we have
\[
\begin{aligned}
\mathbb E\{\Delta V(s)\} ={}& \mathbb E\{V(s+1) - V(s)\}\\
={}& \mathbb E\Big\{\eta^T(s)\mathcal D^T(s)P\mathcal D(s)\eta(s) + \vec f^T(\eta(s))\mathcal A^T(s)P\mathcal A(s)\vec f(\eta(s))\\
& + \vec f^T(\eta(s-\tau(s)))\mathcal B^T(s)P\mathcal B(s)\vec f(\eta(s-\tau(s))) + \zeta^T(s)\mathcal K^T P\mathcal K\zeta(s)\\
& + v^T(s)\mathcal L^T P\mathcal L v(s) + \vec\sigma^T(s)P\vec\sigma(s) + 2\eta^T(s)\mathcal D^T(s)P\mathcal A(s)\vec f(\eta(s))\\
& + 2\eta^T(s)\mathcal D^T(s)P\mathcal B(s)\vec f(\eta(s-\tau(s))) + 2\eta^T(s)\mathcal D^T(s)P\mathcal K\zeta(s)\\
& + 2\eta^T(s)\mathcal D^T(s)P\mathcal L v(s) + 2\vec f^T(\eta(s))\mathcal A^T(s)P\mathcal B(s)\vec f(\eta(s-\tau(s)))\\
& + 2\vec f^T(\eta(s))\mathcal A^T(s)P\mathcal K\zeta(s) + 2\vec f^T(\eta(s))\mathcal A^T(s)P\mathcal L v(s)\\
& + 2\vec f^T(\eta(s-\tau(s)))\mathcal B^T(s)P\mathcal K\zeta(s) + 2\vec f^T(\eta(s-\tau(s)))\mathcal B^T(s)P\mathcal L v(s)\\
& - \eta^T(s)P\eta(s) + \frac{\hbar-1}{\vartheta}\varsigma(s) + \frac{\varrho}{\vartheta}\eta^T(s)\tilde C^T\tilde C\eta(s)\\
& + \frac{\varrho}{\vartheta}v^T(s)M^T M v(s) + 2\frac{\varrho}{\vartheta}\eta^T(s)\tilde C^T M v(s) - \frac{1}{\vartheta}\zeta^T(s)I_1^T I_1\zeta(s)\Big\}.
\end{aligned}\tag{9.25}
\]
Similar to the proof of Theorem 9.1, we obtain
\[
\mathbb E\{\Delta V(s)\} \le \mathbb E\{\tilde\xi^T(s)\Omega_2\tilde\xi(s)\} \tag{9.26}
\]
where ξ̃(s) ≜ [ξ T (s) v T (s)]T and
\[
\Omega_2 \triangleq \begin{bmatrix}
\bar\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15} & 0 & \Theta_{17}\\
* & \bar\Theta_{22} & 0 & \bar\Theta_{24} & 0 & 0 & 0\\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35} & 0 & \Theta_{37}\\
* & * & * & \bar\Theta_{44} & \Theta_{45} & 0 & \Theta_{47}\\
* & * & * & * & \bar\Theta_{55} & 0 & \Theta_{57}\\
* & * & * & * & * & \bar\Theta_{66} & 0\\
* & * & * & * & * & * & \Theta_{77}
\end{bmatrix}.
\]
Next, adding the zero term E{z̃̄T (s)z̃̄(s) − γ 2 v T (s)v(s) − z̃̄T (s)z̃̄(s) + γ 2 v T (s)v(s)} to (9.26) yields
\[
\begin{aligned}
\mathbb E\{\Delta V(s)\} ={}& \mathbb E\big\{V(s+1) - V(s) + \tilde{\bar z}^T(s)\tilde{\bar z}(s) - \gamma^2 v^T(s)v(s) - \tilde{\bar z}^T(s)\tilde{\bar z}(s) + \gamma^2 v^T(s)v(s)\big\}\\
\le{}& \mathbb E\big\{\tilde\xi^T(s)\Omega_2\tilde\xi(s) + \tilde{\bar z}^T(s)\tilde{\bar z}(s) - \gamma^2 v^T(s)v(s) - \tilde{\bar z}^T(s)\tilde{\bar z}(s) + \gamma^2 v^T(s)v(s)\big\}\\
={}& \mathbb E\big\{\tilde\xi^T(s)\tilde\Omega\tilde\xi(s)\big\} - \mathbb E\big\{\tilde{\bar z}^T(s)\tilde{\bar z}(s) - \gamma^2 v^T(s)v(s)\big\}.
\end{aligned}\tag{9.27}
\]
Summing both sides of (9.27) from 0 to T with respect to s gives
\[
\sum_{s=0}^{T}\mathbb E\{\Delta V(s)\} \le \sum_{s=0}^{T}\mathbb E\{\tilde\xi^T(s)\tilde\Omega\tilde\xi(s)\} - \sum_{s=0}^{T}\mathbb E\{\tilde{\bar z}^T(s)\tilde{\bar z}(s)\} + \sum_{s=0}^{T}\mathbb E\{\gamma^2 v^T(s)v(s)\} \tag{9.28}
\]
and hence
\[
\sum_{s=0}^{T}\mathbb E\{\tilde{\bar z}^T(s)\tilde{\bar z}(s)\} - \sum_{s=0}^{T}\mathbb E\{\gamma^2 v^T(s)v(s)\}
\le \sum_{s=0}^{T}\mathbb E\{\tilde\xi^T(s)\tilde\Omega\tilde\xi(s)\} - \mathbb E\{V(T+1)\}
\le \sum_{s=0}^{T}\mathbb E\{\tilde\xi^T(s)\tilde\Omega\tilde\xi(s)\}. \tag{9.29}
\]
Noticing (9.24) and letting T → ∞ result in the satisfaction of the H∞ constraint (9.14).

In terms of Theorem 9.2, our desired estimator is given as follows.

Theorem 9.3 The resilient H∞ SEP for MNN (9.1) is solvable if there exist a matrix P ≜ diag{P₁, P₂} and scalars λ₁, λ₂, λ₃, λ∗ and κ such that

\[
\breve\Omega = \begin{bmatrix}
\hat\Omega & \breve H & \kappa\breve E^{T}\\
* & -\kappa I & 0\\
* & * & -\kappa I
\end{bmatrix} < 0. \tag{9.30}
\]

In this case, the estimator gain can be computed as K = P₂⁻¹X.

9.3
An Illustrative Example

Consider MNN (9.1) with the parameters

\[
d_1(z_1(\cdot)) = \begin{cases}\cdots, & |z_1(\cdot)| > 0.01\\ 0.85, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
d_2(z_2(\cdot)) = \begin{cases}\cdots, & |z_2(\cdot)| > 0.01\\ 0.66, & |z_2(\cdot)| \le 0.01\end{cases}
\]
\[
a_{11}(z_1(\cdot)) = \begin{cases}0.10, & |z_1(\cdot)| > 0.01\\ 0.50, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
a_{12}(z_1(\cdot)) = \begin{cases}-0.45, & |z_1(\cdot)| > 0.01\\ -0.25, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
a_{21}(z_2(\cdot)) = \begin{cases}-0.80, & |z_2(\cdot)| > 0.01\\ 1.00, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
a_{22}(z_2(\cdot)) = \begin{cases}0.52, & |z_2(\cdot)| > 0.01\\ -0.12, & |z_2(\cdot)| \le 0.01\end{cases}
\]
\[
b_{11}(z_1(\cdot)) = \begin{cases}0.10, & |z_1(\cdot)| > 0.01\\ 0.60, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
b_{12}(z_1(\cdot)) = \begin{cases}-0.14, & |z_1(\cdot)| > 0.01\\ 0.64, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
b_{21}(z_2(\cdot)) = \begin{cases}-0.70, & |z_2(\cdot)| > 0.01\\ 0.60, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
b_{22}(z_2(\cdot)) = \begin{cases}1.10, & |z_2(\cdot)| > 0.01\\ -1.30, & |z_2(\cdot)| \le 0.01\end{cases}
\]
\[
L = \begin{bmatrix}0.10\\0.15\end{bmatrix},\quad
C = \begin{bmatrix}0.30 & 0.20\end{bmatrix},\quad
M = 0.3,\quad
N = \begin{bmatrix}0.35 & 0.20\end{bmatrix}.
\]

The NAF is chosen as

\[
f(z(s)) = \begin{bmatrix}0.6z_1(s) - \tanh(z_1(s))\\ 0.4z_2(s) - \tanh(z_2(s))\end{bmatrix},
\]

which satisfies the constraints (9.5) with

\[
\Lambda_1 = \begin{bmatrix}-0.6 & 0\\ 0 & -0.4\end{bmatrix},\qquad
\Lambda_2 = \begin{bmatrix}-0.2 & 0\\ 0 & -0.3\end{bmatrix}.
\]

In the example, the time-delay is τ(s) = 1 + sin²(sπ/2), the attenuation level is γ = 0.8, ᾱ = 0.85, and ρ₁ = ρ₂ = 0.125. Accordingly, the upper and lower bounds of the time-delay are τ_M = 2 and τ_m = 1. Solving (9.30), the feasible solutions are (for space considerations, only part of the solutions is listed):

\[
P_2 = \begin{bmatrix}2.6166 & -0.5434\\ -0.5434 & 2.4791\end{bmatrix},\qquad
X = \begin{bmatrix}0.1057\\ 0.0610\end{bmatrix}.
\]

Accordingly, we obtain

\[
K = \begin{bmatrix}0.0477\\ 0.0351\end{bmatrix}.
\]

Simulation results are displayed in Figs. 9.2–9.6. Figs. 9.2 and 9.3 characterize the values of the NSs and the associated estimates. Fig. 9.4 sketches
the measurement outputs and the associated estimates, and Fig. 9.5 plots the estimation errors. The release instants and release intervals of the dynamic ETM are depicted in Fig. 9.6. These figures clearly illustrate that the proposed estimation scheme is effective. More specifically, to verify that the dynamic ETM outperforms the static one in reducing resource consumption, the corresponding release instants and intervals of the static ETM are shown in Fig. 9.7. Comparing the two figures, we conclude that the dynamic ETM reduces the communication-resource consumption more effectively than the traditional static one.
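The behaviour compared in Figs. 9.6 and 9.7 can be reproduced with a small simulation. The sketch below implements a generic discrete-time static trigger and a dynamic trigger with an auxiliary variable, in the spirit of Girard-type mechanisms [39]; the parameters `sigma`, `lam` and `theta` and the reset logic are illustrative assumptions, not the exact ETM of this chapter.

```python
import numpy as np

def run_etm(y_seq, sigma=0.1, lam=0.5, theta=10.0, dynamic=True):
    """Count transmissions of an event-triggered transmission scheme.

    zeta(s) = y(s) - y(last transmitted sample); the static rule transmits
    when ||zeta||^2 > sigma*||y||^2, while the dynamic rule adds an internal
    variable varsigma(s) >= 0 that lets small violations accumulate first.
    """
    varsigma = 0.0
    y_last = y_seq[0]            # y(0) is transmitted at start-up
    count = 1
    for y in y_seq[1:]:
        zeta = y - y_last
        gap = sigma * y**2 - zeta**2
        fire = (varsigma + theta * gap < 0.0) if dynamic else (gap < 0.0)
        if fire:
            y_last = y           # transmit and reset the triggering error
            zeta = 0.0
            gap = sigma * y**2
        # internal dynamics of the auxiliary variable (used by the dynamic rule)
        varsigma = max(lam * varsigma + gap, 0.0)
        count += fire
    return count

signal = [np.exp(-0.05 * s) * np.sin(0.3 * s) for s in range(60)]
n_static = run_etm(signal, dynamic=False)
n_dynamic = run_etm(signal, dynamic=True)
```

On a decaying signal such as this one, the dynamic rule typically fires no more often than the static one, which is the qualitative behaviour reported in Figs. 9.6 and 9.7.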
FIGURE 9.2: The state z1(s) and its estimate ẑ1(s).
FIGURE 9.3: The state z2(s) and its estimate ẑ2(s).
FIGURE 9.4: The state z̄(s) and its estimate ẑ̄(s).
FIGURE 9.5: The estimation error of z̄(s).
FIGURE 9.6: Event-based release instants and release intervals: dynamic case.
FIGURE 9.7: Event-based release instants and release intervals: static case.
9.4
Summary
In this chapter, we have coped with the dynamic event-triggered resilient SEP for MNNs with time-delays and SDs. By combining the Lyapunov-Krasovskii functional with stochastic analysis techniques, a dynamic event-triggered criterion has been presented to guarantee both the EMSS of the error dynamics and the prescribed H∞ constraint. Furthermore, a desired estimator insensitive to parameter variations/fluctuations has been devised by solving an optimization problem. In the end, the validity of our estimator has been verified via a representative example. Note that the developed event-based resilient strategy might be applied to systems with higher complexities such as genetic regulatory networks and sensor networks.
10 H∞ and l2 − l∞ State Estimation for Delayed Memristive Neural Networks on Finite Horizon: The Round-Robin Protocol
As mentioned in previous chapters, in practice, the actual measurement outputs sometimes need to be transmitted to a remote estimator via a networked medium (e.g. wireless/distributed networks). With the increasing scale/complexity of MNNs, the data volume of the network output can rise to a considerably high level, thereby posing challenges (e.g. fading measurements and communication delays) on transmission networks of limited capacity. To handle these network-induced challenges, it is of great significance to leverage communication protocols to regulate the data transmission. As a widely used communication strategy in a variety of domains (e.g. transportation dispatching, computer networks, communication systems and industrial process control), the Round-Robin protocol (RRP) is a periodic scheduling mechanism that orchestrates the data transmission order in a fixed sequence. In this chapter, we aim to solve the H∞ and l2 − l∞ state estimation problem for a class of delayed memristive neural networks under the Round-Robin communication protocol over a finite-time horizon. The network under consideration is subject to time-varying delays and energy-bounded disturbances. In order to effectively alleviate data collisions and save energy, an equal-allocation scheme (i.e. the RRP) is employed in the communication channel between the MNN and the remote state estimator. An update-matrix method is developed to handle the time-varying yet periodic delays caused by the introduction of the protocol. Through a combination of the Lyapunov-Krasovskii functional and inequality manipulations, delay-dependent criteria are established for the existence of the desired state estimator ensuring that the corresponding augmented estimation error system satisfies the H∞ and l2 − l∞ performance requirements. Then, the explicit characterization of the estimator gain is obtained by solving a convex optimization problem.
Finally, a numerical example is carried out to show the effectiveness of the proposed protocol-based estimation scheme.
10.1
Problem Formulation and Preliminaries
Consider a class of stochastic delayed MNNs:

\[
\begin{cases}
z(s+1) = D(z(s))z(s) + A(z(s))f(z(s)) + B(z(s))f(z(s-\tau(s))) + L_1(s)w(s)\\
y(s) = C(s)z(s) + L_2(s)v(s)\\
\bar z(s) = M(s)z(s)
\end{cases} \tag{10.1}
\]

where \(z(s) = [\,z_1(s)\ z_2(s)\ \cdots\ z_n(s)\,]^{T}\), \(y(s) = [\,y_1(s)\ y_2(s)\ \cdots\ y_m(s)\,]^{T}\) and \(D(z(s)) = \mathrm{diag}\{d_1(z_1(s)), d_2(z_2(s)), \ldots, d_n(z_n(s))\}\) are the neuron state vector, the ideal measurement output and the self-feedback matrix, respectively; \(\bar z(s) \in \mathbb{R}^{p}\) is the NS to be estimated; \(d_i(z_i(s)) > 0\); \(A(z(s)) = [a_{ij}(z_i(s))]_{n\times n}\) and \(B(z(s)) = [b_{ij}(z_i(s))]_{n\times n}\) are the connection weight matrices without and with discrete delays, respectively; \(f(z(s)) = [\,f_1(z_1(s))\ f_2(z_2(s))\ \cdots\ f_n(z_n(s))\,]^{T}\) is the nonlinear neuron activation function (NNAF); \(w(s)\) and \(v(s) \in l_2[0, N-1]\) are the disturbance vectors; \(C(s)\), \(L_1(s)\), \(L_2(s)\) and \(M(s)\) are known matrices; and the initial condition is \(\phi(s) = [\,\phi_1(s)\ \phi_2(s)\ \cdots\ \phi_n(s)\,]^{T}\) for \(s \in [-\bar\tau, 0]\). The time-delay \(\tau(s)\) satisfies

\[
\underline\tau \le \tau(s) \le \bar\tau,\qquad s = 1, 2, \ldots, N \tag{10.2}
\]

where \(\underline\tau\) and \(\bar\tau\) are known constants, and the NNAF \(f(z(s))\) is continuous and satisfies

\[
[f(u) - f(v) - \Gamma_1(u-v)]^{T}[f(u) - f(v) - \Gamma_2(u-v)] \le 0,\quad \forall u, v \in \mathbb{R}^{n}\ (u \ne v) \tag{10.3}
\]

where \(\Gamma_1\) and \(\Gamma_2\) are constant matrices and \(f(0) = 0\).

By using techniques similar to those employed in [76], the MNN (10.1) can be rewritten as

\[
\begin{cases}
z(s+1) = (\bar D + \Delta D(s))z(s) + (\bar A + \Delta A(s))f(z(s)) + (\bar B + \Delta B(s))f(z(s-\tau(s))) + L_1(s)w(s)\\
y(s) = C(s)z(s) + L_2(s)v(s)\\
\bar z(s) = M(s)z(s)
\end{cases} \tag{10.4}
\]
where

\[
\bar D \triangleq \mathrm{diag}\{d_1, d_2, \ldots, d_n\},\quad
\bar A \triangleq [a_{ij}]_{n\times n},\quad
\bar B \triangleq [b_{ij}]_{n\times n},
\]
\[
\Delta D(s) \triangleq HF_1(s)E_1,\quad
\Delta A(s) \triangleq HF_2(s)E_2,\quad
\Delta B(s) \triangleq HF_3(s)E_3.
\]

Here, \(d_j\), \(a_{ij}\) and \(b_{ij}\) are known constant scalars, \(H = [\,H_1\ H_2\ \cdots\ H_n\,]\) and \(E_i = [\,E_{i1}^{T}\ E_{i2}^{T}\ \cdots\ E_{in}^{T}\,]^{T}\) (i = 1, 2, 3) are known matrices, and \(F_i(s)\) (i = 1, 2, 3) satisfies

\[
F_i^{T}(s)F_i(s) \le I. \tag{10.5}
\]

For the purpose of reducing data collisions and mitigating the network burden, we adopt the RRP to schedule the data transmission (see Fig. 10.1). Let \(\tilde y_i(s)\) be the datum actually received from the ith node. Under the RRP, \(\tilde y_i(s)\) updates as

\[
\tilde y_i(s) = \begin{cases} y_i(s), & \text{if } \mathrm{mod}(s-i, m) = 0\\ 0, & \text{otherwise.} \end{cases} \tag{10.6}
\]
FIGURE 10.1: Remote state estimation under the RRP.
Defining the update matrix as \(\Phi_i \triangleq \mathrm{diag}\{\delta(i-1), \delta(i-2), \ldots, \delta(i-m)\}\), (10.6) is equivalently written as

\[
\tilde y(s) = \Phi_{\xi(s)}\, y(s) \tag{10.7}
\]

where \(\tilde y(s) = [\,\tilde y_1(s)\ \tilde y_2(s)\ \cdots\ \tilde y_m(s)\,]^{T} \in \mathbb{R}^{m}\), \(\xi(s) = \mathrm{mod}(s-1, m) + 1 \in \{1, 2, \ldots, m\}\) is the index of the selected node that has access to the channel, and \(\tilde y(s) = \varphi\) for \(s < 0\) where \(\varphi\) is known. Based on (10.7), the following estimator is constructed:

\[
\begin{cases}
\hat z(s+1) = \bar D\hat z(s) + \bar A f(\hat z(s)) + \bar B f(\hat z(s-\tau(s))) + K(s)\big(\tilde y(s) - \Phi_{\xi(s)}C(s)\hat z(s)\big)\\
\hat{\bar z}(s) = M(s)\hat z(s)
\end{cases} \tag{10.8}
\]
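The periodic token-ring behaviour of (10.6) and (10.7) can be sketched directly; the helper names below are illustrative.

```python
import numpy as np

def xi(s, m):
    """Index of the node granted channel access at time s: xi(s) = mod(s-1, m) + 1."""
    return (s - 1) % m + 1

def update_matrix(i, m):
    """Phi_i = diag{delta(i-1), ..., delta(i-m)}: selects the i-th measurement entry."""
    return np.diag([1.0 if i == j else 0.0 for j in range(1, m + 1)])

def rrp_transmit(y, s):
    """Received signal under the RRP: y_tilde(s) = Phi_{xi(s)} y(s)."""
    m = len(y)
    return update_matrix(xi(s, m), m) @ y

y = np.array([1.0, 2.0, 3.0])
received = [rrp_transmit(y, s) for s in (1, 2, 3, 4)]
```

At s = 1, 2, 3 the nodes are visited in order, and at s = 4 the cycle restarts with node 1, matching the periodic node scheduling depicted in Fig. 10.10.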
where \(\hat z(s)\) and \(\hat{\bar z}(s)\) are the estimates of \(z(s)\) and \(\bar z(s)\), respectively, and \(K(s)\) is the estimator parameter to be determined.

Let \(e(s) \triangleq z(s) - \hat z(s)\), \(\tilde{\bar z}(s) \triangleq \bar z(s) - \hat{\bar z}(s)\), \(\tilde f_s \triangleq f(z(s)) - f(\hat z(s))\) and \(\tilde f_{s-\tau} \triangleq f(z(s-\tau(s))) - f(\hat z(s-\tau(s)))\). From (10.4), (10.7) and (10.8), the estimation error dynamics is given by

\[
\begin{cases}
e(s+1) = (\bar D - K(s)\Phi_{\xi(s)}C(s))e(s) + \Delta D(s)z(s) + \bar A\tilde f_s + \Delta A(s)f(z(s))\\
\qquad\qquad + \bar B\tilde f_{s-\tau} + \Delta B(s)f(z(s-\tau(s))) + L_1(s)w(s) - K(s)\Phi_{\xi(s)}L_2(s)v(s)\\
\tilde{\bar z}(s) = M(s)e(s).
\end{cases} \tag{10.9}
\]

Setting \(\eta(s) \triangleq [\,z^{T}(s)\ e^{T}(s)\,]^{T}\), \(\vec f_s \triangleq [\,f^{T}(z(s))\ \tilde f_s^{T}\,]^{T}\), \(\vec f_{s-\tau} \triangleq [\,f^{T}(z(s-\tau(s)))\ \tilde f_{s-\tau}^{T}\,]^{T}\) and \(\varpi(s) \triangleq [\,w^{T}(s)\ v^{T}(s)\,]^{T}\), we have the augmented system

\[
\begin{cases}
\eta(s+1) = \mathcal D(s)\eta(s) + \mathcal A(s)\vec f_s + \mathcal B(s)\vec f_{s-\tau} + \mathcal L(s)\varpi(s)\\
\tilde{\bar z}(s) = \mathcal M(s)\eta(s)
\end{cases} \tag{10.10}
\]

where

\[
\mathcal D(s) \triangleq \bar{\mathcal D} + \Delta\mathcal D(s),\quad
\Delta\mathcal D(s) \triangleq \tilde I\,\Delta D(s)\,\mathcal I,\quad
\mathcal I \triangleq [\,I\ \ 0\,],\quad
\tilde I \triangleq [\,I^{T}\ \ I^{T}\,]^{T},
\]
\[
\mathcal A(s) \triangleq \bar{\mathcal A} + \Delta\mathcal A(s),\quad
\bar{\mathcal A} \triangleq \mathrm{diag}\{\bar A, \bar A\},\quad
\Delta\mathcal A(s) \triangleq \tilde I\,\Delta A(s)\,\mathcal I,
\]
\[
\mathcal B(s) \triangleq \bar{\mathcal B} + \Delta\mathcal B(s),\quad
\bar{\mathcal B} \triangleq \mathrm{diag}\{\bar B, \bar B\},\quad
\Delta\mathcal B(s) \triangleq \tilde I\,\Delta B(s)\,\mathcal I,
\]
\[
\bar{\mathcal D} \triangleq \begin{bmatrix}\bar D & 0\\ 0 & \bar D - K(s)\Phi_{\xi(s)}C(s)\end{bmatrix},\quad
\mathcal M(s) \triangleq [\,0\ \ M(s)\,],\quad
\mathcal L(s) \triangleq \begin{bmatrix}L_1(s) & 0\\ L_1(s) & -K(s)\Phi_{\xi(s)}L_2(s)\end{bmatrix}.
\]
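As a sanity check of the error dynamics, with the uncertainties ΔD(s), ΔA(s) and ΔB(s) set to zero, one step of the plant (10.4), the protocol (10.7) and the estimator (10.8) reproduces (10.9) exactly; the numerical data below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
f = np.tanh                          # stand-in activation function

# arbitrary nominal data (Delta D = Delta A = Delta B = 0)
Dbar = np.diag([0.5, 0.4])
Abar = rng.standard_normal((n, n))
Bbar = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
Phi = np.diag([1.0, 0.0])            # node 1 currently holds the channel
L1v = rng.standard_normal(n)         # columns L1(s), L2(s) for scalar noises
L2v = rng.standard_normal(n)
w, v = 0.3, -0.7

z, zd = rng.standard_normal(n), rng.standard_normal(n)    # z(s), z(s - tau(s))
zh, zhd = rng.standard_normal(n), rng.standard_normal(n)  # estimates

# plant (10.4), protocol (10.7) and estimator (10.8), one step
z_next = Dbar @ z + Abar @ f(z) + Bbar @ f(zd) + L1v * w
y_tilde = Phi @ (C @ z + L2v * v)
zh_next = Dbar @ zh + Abar @ f(zh) + Bbar @ f(zhd) + K @ (y_tilde - Phi @ C @ zh)
e_next = z_next - zh_next

# closed form of the error dynamics (10.9)
e = z - zh
rhs = ((Dbar - K @ Phi @ C) @ e + Abar @ (f(z) - f(zh))
       + Bbar @ (f(zd) - f(zhd)) + L1v * w - K @ (Phi @ (L2v * v)))
```

The two one-step computations agree to machine precision, confirming the algebra behind (10.9).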
For \(\gamma_1 > 0\), \(\gamma_2 > 0\), \(S_1 > 0\), \(S_2 > 0\) and for all \((\eta(0), \{\varpi(s)\}) \ne 0\), our main purpose is to design the estimator (10.8) such that the H∞ performance constraint

\[
\mathbb{E}\Big\{\sum_{s=0}^{N-1}\|\tilde{\bar z}(s)\|^{2}\Big\} < \gamma_1^{2}\sum_{s=0}^{N-1}\|\varpi(s)\|^{2} + \gamma_1^{2}\,\eta^{T}(0)S_1\eta(0) \tag{10.11}
\]

and the l2 − l∞ performance constraint

\[
\mathbb{E}\Big\{\sup_{s}\|\tilde{\bar z}(s)\|^{2}\Big\} < \gamma_2^{2}\sum_{s=0}^{N-1}\|\varpi(s)\|^{2} + \gamma_2^{2}\,\eta^{T}(0)S_2\eta(0) \tag{10.12}
\]

are both satisfied.
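The two performance measures in (10.11) and (10.12) can be evaluated on simulated trajectories; the toy signals below are illustrative data, not the closed-loop trajectories of the designed estimator.

```python
import numpy as np

def hinf_sides(zt, w, gamma, S, eta0):
    """Left and right sides of the H-infinity bound (10.11)."""
    lhs = sum(float(e @ e) for e in zt)
    rhs = gamma**2 * sum(float(x @ x) for x in w) + gamma**2 * float(eta0 @ S @ eta0)
    return lhs, rhs

def l2linf_sides(zt, w, gamma, S, eta0):
    """Left and right sides of the l2 - l-infinity bound (10.12)."""
    lhs = max(float(e @ e) for e in zt)
    rhs = gamma**2 * sum(float(x @ x) for x in w) + gamma**2 * float(eta0 @ S @ eta0)
    return lhs, rhs

rng = np.random.default_rng(3)
N = 30
zt = [0.1 * np.exp(-0.2 * s) * rng.standard_normal(2) for s in range(N)]  # decaying error
w = [np.exp(-0.1 * s) * rng.standard_normal(2) for s in range(N)]         # disturbance
eta0, S = np.zeros(4), np.eye(4)
h_lhs, h_rhs = hinf_sides(zt, w, 0.98, S, eta0)
p_lhs, p_rhs = l2linf_sides(zt, w, 0.98, S, eta0)
```

For a well-behaved (decaying) error signal both left sides fall well below the corresponding disturbance-dependent right sides.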
10.2
Main Results
To start with, for a given estimator, a sufficient condition ensuring the H∞ performance is established.

Theorem 10.1 Let the attenuation level \(\gamma_1 > 0\), the matrix \(S_1\), the parameter \(K(s)\) and the initial condition \(P(0) = S_1\) be given. For nonzero \(\varpi(s)\), the H∞ performance constraint (10.11) is guaranteed if there exist matrix families \(\{P(s)\}_{1\le s\le N}\) and scalars \(\{\lambda_1(s)\}_{1\le s\le N-1}\), \(\{\lambda_2(s)\}_{1\le s\le N-1}\) satisfying

\[
\breve\Omega_1(s) = \begin{bmatrix}
\tilde\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15}\\
* & \Theta_{22} & 0 & \Theta_{24} & 0\\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35}\\
* & * & * & \bar\Theta_{44} & \Theta_{45}\\
* & * & * & * & \bar\Theta_{55}
\end{bmatrix} < 0 \tag{10.13}
\]

where

\[
\tilde\Gamma \triangleq I \otimes \mathrm{Sym}\Big(\tfrac12\Gamma_1^{T}\Gamma_2\Big),\qquad
\bar\Gamma \triangleq I \otimes (\Gamma_1 + \Gamma_2)/2,
\]
\[
\bar\Theta_{11} \triangleq \mathcal D^{T}(s)P(s+1)\mathcal D(s) - \lambda_1(s)\tilde\Gamma - P(s),\qquad
\tilde\Theta_{11} \triangleq \bar\Theta_{11} + \mathcal M^{T}(s)\mathcal M(s),
\]
\[
\bar\Theta_{13} \triangleq \Theta_{13} + \lambda_1(s)\bar\Gamma,\quad
\Theta_{13} \triangleq \mathcal D^{T}(s)P(s+1)\mathcal A(s),\quad
\Theta_{14} \triangleq \mathcal D^{T}(s)P(s+1)\mathcal B(s),\quad
\Theta_{15} \triangleq \mathcal D^{T}(s)P(s+1)\mathcal L(s),
\]
\[
\Theta_{22} \triangleq -\lambda_2(s)\tilde\Gamma,\qquad
\Theta_{24} \triangleq \lambda_2(s)\bar\Gamma,
\]
\[
\bar\Theta_{33} \triangleq \Theta_{33} - \lambda_1(s)I,\quad
\Theta_{33} \triangleq \mathcal A^{T}(s)P(s+1)\mathcal A(s),\quad
\Theta_{34} \triangleq \mathcal A^{T}(s)P(s+1)\mathcal B(s),\quad
\Theta_{35} \triangleq \mathcal A^{T}(s)P(s+1)\mathcal L(s),
\]
\[
\bar\Theta_{44} \triangleq \Theta_{44} - \lambda_2(s)I,\quad
\Theta_{44} \triangleq \mathcal B^{T}(s)P(s+1)\mathcal B(s),\quad
\Theta_{45} \triangleq \mathcal B^{T}(s)P(s+1)\mathcal L(s),
\]
\[
\bar\Theta_{55} \triangleq \Theta_{55} - \gamma_1^{2}I,\qquad
\Theta_{55} \triangleq \mathcal L^{T}(s)P(s+1)\mathcal L(s).
\]

Proof Define the Lyapunov-Krasovskii functional

\[
V(s) = \eta^{T}(s)P(s)\eta(s), \tag{10.14}
\]

which generates

\[
\Delta V(s) = V(s+1) - V(s) = \eta^{T}(s+1)P(s+1)\eta(s+1) - \eta^{T}(s)P(s)\eta(s). \tag{10.15}
\]

It follows from (10.10), (10.14) and (10.15) that

\[
\begin{aligned}
\Delta V(s) &= \big(\mathcal D(s)\eta(s) + \mathcal A(s)\vec f_s + \mathcal B(s)\vec f_{s-\tau} + \mathcal L(s)\varpi(s)\big)^{T}P(s+1)\\
&\quad\times\big(\mathcal D(s)\eta(s) + \mathcal A(s)\vec f_s + \mathcal B(s)\vec f_{s-\tau} + \mathcal L(s)\varpi(s)\big) - \eta^{T}(s)P(s)\eta(s)\\
&= \eta^{T}(s)\mathcal D^{T}(s)P(s+1)\mathcal D(s)\eta(s) + \vec f_s^{T}\mathcal A^{T}(s)P(s+1)\mathcal A(s)\vec f_s\\
&\quad+ \vec f_{s-\tau}^{T}\mathcal B^{T}(s)P(s+1)\mathcal B(s)\vec f_{s-\tau} + \varpi^{T}(s)\mathcal L^{T}(s)P(s+1)\mathcal L(s)\varpi(s)\\
&\quad+ 2\eta^{T}(s)\mathcal D^{T}(s)P(s+1)\mathcal A(s)\vec f_s + 2\eta^{T}(s)\mathcal D^{T}(s)P(s+1)\mathcal B(s)\vec f_{s-\tau}\\
&\quad+ 2\eta^{T}(s)\mathcal D^{T}(s)P(s+1)\mathcal L(s)\varpi(s) + 2\vec f_s^{T}\mathcal A^{T}(s)P(s+1)\mathcal B(s)\vec f_{s-\tau}\\
&\quad+ 2\vec f_s^{T}\mathcal A^{T}(s)P(s+1)\mathcal L(s)\varpi(s) + 2\vec f_{s-\tau}^{T}\mathcal B^{T}(s)P(s+1)\mathcal L(s)\varpi(s)\\
&\quad- \eta^{T}(s)P(s)\eta(s)\\
&= \xi^{T}(s)\Omega_1(s)\xi(s) \tag{10.16}
\end{aligned}
\]
where

\[
\xi(s) \triangleq \big[\,\eta^{T}(s)\ \ \eta^{T}(s-\tau(s))\ \ \vec f_s^{T}\ \ \vec f_{s-\tau}^{T}\ \ \varpi^{T}(s)\,\big]^{T},
\]
\[
\Omega_1(s) \triangleq \begin{bmatrix}
\Theta_{11} & 0 & \Theta_{13} & \Theta_{14} & \Theta_{15}\\
* & 0 & 0 & 0 & 0\\
* & * & \Theta_{33} & \Theta_{34} & \Theta_{35}\\
* & * & * & \Theta_{44} & \Theta_{45}\\
* & * & * & * & \Theta_{55}
\end{bmatrix},\qquad
\Theta_{11} \triangleq \mathcal D^{T}(s)P(s+1)\mathcal D(s) - P(s).
\]

Subsequently, according to (10.3), we have

\[
\big[\vec f_s - (I\otimes\Gamma_1)\eta(s)\big]^{T}\big[\vec f_s - (I\otimes\Gamma_2)\eta(s)\big] \le 0, \tag{10.17}
\]
\[
\big[\vec f_{s-\tau} - (I\otimes\Gamma_1)\eta(s-\tau(s))\big]^{T}\big[\vec f_{s-\tau} - (I\otimes\Gamma_2)\eta(s-\tau(s))\big] \le 0. \tag{10.18}
\]
Substituting (10.17) and (10.18) into (10.16) leads to

\[
\begin{aligned}
\Delta V(s) &\le \xi^{T}(s)\Omega_1(s)\xi(s)
- \lambda_1(s)\big[\vec f_s - (I\otimes\Gamma_1)\eta(s)\big]^{T}\big[\vec f_s - (I\otimes\Gamma_2)\eta(s)\big]\\
&\quad- \lambda_2(s)\big[\vec f_{s-\tau} - (I\otimes\Gamma_1)\eta(s-\tau(s))\big]^{T}\big[\vec f_{s-\tau} - (I\otimes\Gamma_2)\eta(s-\tau(s))\big]\\
&\le \xi^{T}(s)\Omega_2(s)\xi(s) \tag{10.19}
\end{aligned}
\]

where

\[
\Omega_2(s) \triangleq \begin{bmatrix}
\bar\Theta_{11} & 0 & \bar\Theta_{13} & \Theta_{14} & \Theta_{15}\\
* & \Theta_{22} & 0 & \Theta_{24} & 0\\
* & * & \bar\Theta_{33} & \Theta_{34} & \Theta_{35}\\
* & * & * & \bar\Theta_{44} & \Theta_{45}\\
* & * & * & * & \Theta_{55}
\end{bmatrix}.
\]

Next, adding the zero term \(\tilde{\bar z}^{T}(s)\tilde{\bar z}(s) - \gamma_1^{2}\varpi^{T}(s)\varpi(s) - \tilde{\bar z}^{T}(s)\tilde{\bar z}(s) + \gamma_1^{2}\varpi^{T}(s)\varpi(s)\)
to (10.19) yields

\[
\begin{aligned}
\Delta V(s) &= V(s+1) - V(s) + \tilde{\bar z}^{T}(s)\tilde{\bar z}(s) - \gamma_1^{2}\varpi^{T}(s)\varpi(s) - \tilde{\bar z}^{T}(s)\tilde{\bar z}(s) + \gamma_1^{2}\varpi^{T}(s)\varpi(s)\\
&\le \xi^{T}(s)\breve\Omega_1(s)\xi(s) - \tilde{\bar z}^{T}(s)\tilde{\bar z}(s) + \gamma_1^{2}\varpi^{T}(s)\varpi(s). \tag{10.20}
\end{aligned}
\]
Summing both sides of (10.20) from s = 0 to s = N − 1, we have

\[
\sum_{s=0}^{N-1}\Delta V(s) = \eta^{T}(N)P(N)\eta(N) - \eta^{T}(0)P(0)\eta(0)
\le \sum_{s=0}^{N-1}\xi^{T}(s)\breve\Omega_1(s)\xi(s) - \sum_{s=0}^{N-1}\tilde{\bar z}^{T}(s)\tilde{\bar z}(s) + \sum_{s=0}^{N-1}\gamma_1^{2}\varpi^{T}(s)\varpi(s) \tag{10.21}
\]
and hence

\[
\begin{aligned}
&\sum_{s=0}^{N-1}\tilde{\bar z}^{T}(s)\tilde{\bar z}(s) - \sum_{s=0}^{N-1}\gamma_1^{2}\varpi^{T}(s)\varpi(s) - \gamma_1^{2}\eta^{T}(0)S_1\eta(0)\\
&\quad\le \sum_{s=0}^{N-1}\xi^{T}(s)\breve\Omega_1(s)\xi(s) - \eta^{T}(N)P(N)\eta(N) + \eta^{T}(0)P(0)\eta(0) - \gamma_1^{2}\eta^{T}(0)S_1\eta(0). \tag{10.22}
\end{aligned}
\]

Noting (10.13), \(P(N) > 0\) and \(P(0) < \gamma_1^{2}S_1\), (10.11) is assured.

Theorem 10.2 Let the parameter \(K(s)\), the attenuation level \(\gamma_2 > 0\), and the matrix \(S_2\) be given. For nonzero \(\varpi(s)\) and the initial condition \(Q(0) = S_2\), the l2 − l∞ performance constraint (10.12) is ensured if there exist matrix families \(\{Q(s)\}_{1\le s\le N}\) and scalars \(\{\lambda_1(s)\}_{1\le s\le N-1}\), \(\{\lambda_2(s)\}_{1\le s\le N-1}\) satisfying

\[
\mathcal M^{T}(s)\mathcal M(s) < \gamma_2^{2}Q(s), \tag{10.23}
\]
\[
\breve\Omega_2(s) = \begin{bmatrix}
\tilde\Xi_{11} & 0 & \tilde\Xi_{13} & \Xi_{14} & \Xi_{15}\\
* & \Xi_{22} & 0 & \Xi_{24} & 0\\
* & * & \tilde\Xi_{33} & \Xi_{34} & \Xi_{35}\\
* & * & * & \tilde\Xi_{44} & \Xi_{45}\\
* & * & * & * & \tilde\Xi_{55}
\end{bmatrix} < 0. \tag{10.24}
\]

Combining Theorems 10.1 and 10.2 gives the following result.

Theorem 10.3 Let the attenuation levels \(\gamma_1 > 0\) and \(\gamma_2 > 0\) and the matrices \(S_1\) and \(S_2\) be given. Then, system (10.10) satisfies both the H∞ and l2 − l∞ performance constraints (10.11) and (10.12) if there exist matrix families \(\{P(s), Q(s)\}_{1\le s\le N}\) and scalars \(\{\lambda_1(s), \lambda_2(s), \lambda_3(s), \lambda_4(s)\}_{0\le s\le N-1}\) satisfying

\[
\breve\Omega_3(s) = \begin{bmatrix}\tilde\Xi_1(s) & \tilde\Xi_2^{T}(s)\\ * & -P^{-1}(s+1)\end{bmatrix} < 0, \tag{10.30}
\]
\[
\breve\Omega_4(s) = \begin{bmatrix}-\gamma_2^{2}Q^{-1}(s) & Q^{-1}(s)\mathcal M^{T}(s)\\ * & -I\end{bmatrix} < 0, \tag{10.31}
\]
\[
\breve\Omega_5(s) = \begin{bmatrix}\breve\Xi_1(s) & \tilde\Xi_2^{T}(s)\\ * & -Q^{-1}(s+1)\end{bmatrix} < 0, \tag{10.32}
\]

with initial conditions
\[
\begin{cases}
P(0) \le \gamma_1^{2}S_1\\
Q(0) = S_2
\end{cases} \tag{10.33}
\]

where

\[
\tilde\Xi_2(s) \triangleq \big[\,\mathcal D(s)\ \ 0\ \ \mathcal A(s)\ \ \mathcal B(s)\ \ \mathcal L(s)\,\big],
\]
\[
\tilde\Xi_{111}(s) \triangleq -\lambda_1(s)\tilde\Gamma + \mathcal M^{T}(s)\mathcal M(s) - P(s),\qquad
\breve\Xi_{111}(s) \triangleq -\lambda_3(s)\tilde\Gamma - Q(s),
\]
\[
\tilde\Xi_1(s) \triangleq \begin{bmatrix}
\tilde\Xi_{111}(s) & 0 & \lambda_1(s)\bar\Gamma & 0 & 0\\
* & -\lambda_2(s)\tilde\Gamma & 0 & \lambda_2(s)\bar\Gamma & 0\\
* & * & -\lambda_1(s)I & 0 & 0\\
* & * & * & -\lambda_2(s)I & 0\\
* & * & * & * & -\gamma_1^{2}I
\end{bmatrix},
\]
\[
\breve\Xi_1(s) \triangleq \begin{bmatrix}
\breve\Xi_{111}(s) & 0 & \lambda_3(s)\bar\Gamma & 0 & 0\\
* & -\lambda_4(s)\tilde\Gamma & 0 & \lambda_4(s)\bar\Gamma & 0\\
* & * & -\lambda_3(s)I & 0 & 0\\
* & * & * & -\lambda_4(s)I & 0\\
* & * & * & * & -I
\end{bmatrix}.
\]

Proof By referring to the Schur Complement Lemma, it is easy to find that i) (10.31) and (10.23) are equivalent; and ii) inequalities (10.30) and (10.32) imply (10.13) and (10.24), respectively. Thus, the performance constraints (10.11) and (10.12) are simultaneously satisfied according to Theorems 10.1 and 10.2.

In terms of Theorems 10.1–10.3, our desired estimator is given as follows.

Theorem 10.4 Let the attenuation levels \(\gamma_1 > 0\) and \(\gamma_2 > 0\), the matrices \(S_1\) and \(S_2\) and the initial conditions

\[
\begin{cases}
P(0) \le \gamma_1^{2}S_1,\\
\mathcal M^{T}(0)\mathcal M(0) < \gamma_2^{2}Q(0),\\
Q(0) = S_2
\end{cases} \tag{10.34}
\]
be given. If there exist matrix families \(\{K(s)\}_{0\le s\le N-1}\), \(\{\tilde P_1(s), \tilde P_2(s), \tilde Q_1(s), \tilde Q_2(s)\}_{1\le s\le N}\) and scalars \(\{\lambda_1(s), \lambda_2(s), \lambda_3(s), \lambda_4(s), \mu_1(s), \mu_2(s)\}_{0\le s\le N-1}\) such that the following RLMIs hold:

\[
\breve\Omega_6(s) = \begin{bmatrix}
\breve\Omega_9(s) & \breve H & \mu_1(s)\breve E^{T}\\
* & -\mu_1(s)I & 0\\
* & * & -\mu_1(s)I
\end{bmatrix} < 0, \tag{10.35}
\]
\[
\breve\Omega_7(s) = \begin{bmatrix}
-\gamma_2^{2}\tilde Q(s) & \tilde Q(s)\mathcal M^{T}(s)\\
* & -I
\end{bmatrix} < 0, \tag{10.36}
\]
\[
\breve\Omega_8(s) = \begin{bmatrix}
\breve\Omega_{10}(s) & \breve H & \mu_2(s)\breve E^{T}\\
* & -\mu_2(s)I & 0\\
* & * & -\mu_2(s)I
\end{bmatrix} < 0, \tag{10.37}
\]

then the H∞ and l2 − l∞ SEP for MNN (10.1) is solvable, and \(K(s)\) is a desired estimator gain.

10.3
An Illustrative Example

In this example, the connection memristor memductances \(W_{a11}(z_1(\cdot))\), \(W_{a12}(z_1(\cdot))\), \(W_{a21}(z_2(\cdot))\) and \(W_{a22}(z_2(\cdot))\) are

\[
W_{a11}(z_1(\cdot)) = \begin{cases}0.150, & |z_1(\cdot)| > 0.01\\ 0.050, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
W_{a12}(z_1(\cdot)) = \begin{cases}0.250, & |z_1(\cdot)| > 0.01\\ 0.150, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
W_{a21}(z_2(\cdot)) = \begin{cases}0.040, & |z_2(\cdot)| > 0.01\\ 0.120, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
W_{a22}(z_2(\cdot)) = \begin{cases}0.160, & |z_2(\cdot)| > 0.01\\ 0.240, & |z_2(\cdot)| \le 0.01\end{cases}
\]

and the time-varying delayed connection memristor memductances \(W_{b11}(z_1(\cdot))\), \(W_{b12}(z_1(\cdot))\), \(W_{b21}(z_2(\cdot))\) and \(W_{b22}(z_2(\cdot))\) are

\[
W_{b11}(z_1(\cdot)) = \begin{cases}0.100, & |z_1(\cdot)| > 0.01\\ 0.040, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
W_{b12}(z_1(\cdot)) = \begin{cases}0.340, & |z_1(\cdot)| > 0.01\\ 0.200, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
W_{b21}(z_2(\cdot)) = \begin{cases}0.020, & |z_2(\cdot)| > 0.01\\ 0.060, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
W_{b22}(z_2(\cdot)) = \begin{cases}0.140, & |z_2(\cdot)| > 0.01\\ 0.230, & |z_2(\cdot)| \le 0.01\end{cases}
\]

As stated in [18], the parameters of the circuit subsystem can be expressed by:

\[
d_1(z_1(\cdot)) = \begin{cases}0.860, & |z_1(\cdot)| > 0.01\\ 0.460, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
d_2(z_2(\cdot)) = \begin{cases}0.385, & |z_2(\cdot)| > 0.01\\ 0.675, & |z_2(\cdot)| \le 0.01\end{cases}
\]
\[
a_{11}(z_1(\cdot)) = \begin{cases}-0.150, & |z_1(\cdot)| > 0.01\\ -0.050, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
a_{12}(z_1(\cdot)) = \begin{cases}0.250, & |z_1(\cdot)| > 0.01\\ 0.150, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
a_{21}(z_2(\cdot)) = \begin{cases}0.040, & |z_2(\cdot)| > 0.01\\ 0.120, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
a_{22}(z_2(\cdot)) = \begin{cases}-0.160, & |z_2(\cdot)| > 0.01\\ -0.240, & |z_2(\cdot)| \le 0.01\end{cases}
\]
\[
b_{11}(z_1(\cdot)) = \begin{cases}-0.100, & |z_1(\cdot)| > 0.01\\ -0.040, & |z_1(\cdot)| \le 0.01\end{cases}\qquad
b_{12}(z_1(\cdot)) = \begin{cases}0.340, & |z_1(\cdot)| > 0.01\\ 0.200, & |z_1(\cdot)| \le 0.01\end{cases}
\]
\[
b_{21}(z_2(\cdot)) = \begin{cases}0.020, & |z_2(\cdot)| > 0.01\\ 0.060, & |z_2(\cdot)| \le 0.01\end{cases}\qquad
b_{22}(z_2(\cdot)) = \begin{cases}-0.140, & |z_2(\cdot)| > 0.01\\ -0.230, & |z_2(\cdot)| \le 0.01\end{cases}
\]
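The Schur Complement Lemma invoked in the proof of Theorem 10.3 is easy to confirm numerically on small blocks: for a symmetric block matrix [[A, B], [Bᵀ, C]] with C negative definite, negativity of the whole matrix is equivalent to A − BC⁻¹Bᵀ < 0. The helper below is an illustrative sketch.

```python
import numpy as np

def is_neg_def(M):
    """True if the symmetric matrix M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

def schur_equivalent(A, B, C):
    """Compare [[A, B], [B.T, C]] < 0 with its Schur complement test (C < 0 assumed)."""
    full = np.block([[A, B], [B.T, C]])
    comp = A - B @ np.linalg.solve(C, B.T)   # A - B C^{-1} B^T
    return is_neg_def(full), is_neg_def(comp)

A = np.array([[-3.0, 0.5], [0.5, -2.0]])
B = np.array([[0.4, 0.0], [0.1, 0.3]])
C = -np.eye(2)
lhs_nd, comp_nd = schur_equivalent(A, B, C)
```

Both tests agree, whether the block matrix is negative definite or not.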
The remaining parameters are

\[
L_1(s) = \begin{bmatrix}0.16 + 0.2\sin(s)\\ -0.65\end{bmatrix},\quad
L_2(s) = \begin{bmatrix}0.45\\ 0.2 + 0.02\sin(s)\end{bmatrix},\quad
M(s) = \begin{bmatrix}-0.35 & 0.2\end{bmatrix},
\]
\[
C(s) = \begin{bmatrix}1.2 & -0.5 + 0.02\sin(s)\\ 0.3 & 0.6\end{bmatrix},\quad
S_1 = \mathrm{diag}\{1, 8, 3, 1.5\},
\]
\[
S_2 = \begin{bmatrix}0.01 & 0 & 0 & 0\\ 0 & 0.01 & 0 & 0\\ 0 & 0 & 0.1325 & -0.07\\ 0 & 0 & -0.07 & 0.05\end{bmatrix},\quad
P(0) = \mathrm{diag}\{0.8, 6.4, 2.4, 1.2\},
\]
\[
\tau(s) = 1 + \sin^{2}\big((s+2)\pi/2\big),\qquad
\gamma_1 = \gamma_2 = 0.98,\qquad
Q(0) = S_2.
\]

The NNAF and the noises are

\[
f(z(s)) = \begin{bmatrix}0.2z_1(s) - 0.1\sin(s+2)z_1(s)\\ 0.25z_2(s) - 0.05\sin(s+2)z_2(s)\end{bmatrix},\qquad
w(s) = v(s) = \sin(s)\exp(-0.1s).
\]

Obviously, \(\Gamma_1\) and \(\Gamma_2\) can be chosen as

\[
\Gamma_1 = \begin{bmatrix}0.3 & 0\\ 0 & 0.3\end{bmatrix},\qquad
\Gamma_2 = \begin{bmatrix}0.1 & 0\\ 0 & 0.2\end{bmatrix}.
\]
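That this choice of Γ1 and Γ2 is admissible can be spot-checked numerically against the sector condition (10.3); the random sampling below is an illustrative check rather than a proof.

```python
import numpy as np

def nnaf(z, s):
    """The example NNAF: component slopes 0.2 - 0.1 sin(s+2) and 0.25 - 0.05 sin(s+2)."""
    return np.array([(0.2 - 0.1 * np.sin(s + 2)) * z[0],
                     (0.25 - 0.05 * np.sin(s + 2)) * z[1]])

G1 = np.diag([0.3, 0.3])   # Gamma_1
G2 = np.diag([0.1, 0.2])   # Gamma_2

def sector_ok(u, v, s):
    """Evaluate [f(u)-f(v)-G1(u-v)]^T [f(u)-f(v)-G2(u-v)] <= 0 from (10.3)."""
    df = nnaf(u, s) - nnaf(v, s)
    d = u - v
    return float((df - G1 @ d) @ (df - G2 @ d)) <= 1e-12

rng = np.random.default_rng(1)
ok = all(sector_ok(rng.standard_normal(2), rng.standard_normal(2), s)
         for s in range(60))
```

Since the component slopes stay inside [0.1, 0.3] and [0.2, 0.3] respectively, the sector condition holds for every sampled pair.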
After solving the RLMIs (10.34)–(10.37), the corresponding values of K(s) are shown in Table 10.1 and the simulation results are given in Figs. 10.2–10.5. Figs. 10.2 and 10.3 plot the true NSs and their estimates; it is clear that the estimates track the neuron states well, which coincides with the theoretical results. Fig. 10.4 sketches the outputs and their estimates, and Fig. 10.5 depicts the estimation errors.
TABLE 10.1: Filter Gains

  s      K(s)
  0      [0.9939  0; 0.4756  0]
  1      [0.8774  0; 0.6433  0]
  2      [1.0665  0; 0.5558  0]
  ···    ···
  59     [0.8942  0; 0.6703  0]
  60     [0.6325  0; −0.1980  0]
To further show the efficiency of the proposed method, we have considered the same system without the RRP, and the corresponding simulation results are depicted in Figs. 10.6–10.9. Compared with Figs. 10.2–10.5, the employment of the RRP has only a slight influence on the estimation performance. Nevertheless, from Fig. 10.10, it is clear that, thanks to the introduction of the RRP, the communication burden has been reduced effectively. Therefore, we conclude that the RRP provides an efficient tradeoff between the consumption of communication resources and the estimation performance.
0.35 0.3 0.25
Value
0.2 0.15 0.1 0.05 0 -0.05 -0.1 0
10
20
30
40
50
Time (s)
FIGURE 10.2:
z1 (s) and zˆ1 (s) with RRP.
60
FIGURE 10.3: z2(s) and ẑ2(s) with RRP.
FIGURE 10.4: z̄(s) and ẑ̄(s) with RRP.
FIGURE 10.5: The estimation error of z̄(s) with RRP.
FIGURE 10.6: z1(s) and ẑ1(s) without RRP.
FIGURE 10.7: z2(s) and ẑ2(s) without RRP.
FIGURE 10.8: z̄(s) and ẑ̄(s) without RRP.
FIGURE 10.9: The estimation error of z̄(s) without RRP.
FIGURE 10.10: The scheduling of the nodes with RRP (node index versus time).
10.4
Summary
In this chapter, we have coped with the finite-horizon H∞ and l2 − l∞ SEP for MNNs with time-delays under the RRP. The RRP has been utilized to mitigate unnecessary network congestion occurring in the channel between the MNNs and the remote estimator. By using the RLMI and Lyapunov-Krasovskii functional methods, a protocol-based criterion has been presented to guarantee not only the H∞ and l2 − l∞ constraints but also the existence of the desired estimator. In the end, the validity of our estimator has been verified via a representative example. Note that the developed H∞ and l2 − l∞ strategy might be applied to MNNs subject to other protocols such as the stochastic communication and try-once-discard protocols [203, 218]. Moreover, in the future, we will attempt to investigate the H∞ and l2 − l∞ state estimation problem with the help of the annealing algorithm [183, 184].
11 Conclusions and Future Topics
In this book, the stability analysis and state estimation problems have been investigated for discrete-time memristive neural networks. The latest results on stability analysis and estimator design have first been surveyed for different types of MNNs subject to various engineering-oriented complexities, especially network-induced phenomena. Then, in each chapter, for the addressed discrete-time MNNs subject to different constraints (e.g. time-delays, bandwidth limitations, non-Gaussian noises, etc.), sufficient conditions have been derived for the systems to achieve certain pre-specified desired performances including stability, the H∞ specification and the l2 − l∞ criterion. Subsequently, the synthesis problems have been discussed, and sufficient conditions have been provided for the existence of the desired state estimators that are capable of satisfying all the pre-specified performance requirements by means of certain convex optimization algorithms. This book has established a unified theoretical framework for stability analysis and estimator synthesis for discrete-time memristive neural networks in the context of networked control systems. Various types of engineering-oriented complexities, especially those induced by the network, have been taken into account in both the stability analysis and the estimator design procedure. However, the obtained results are still quite limited. Some related areas for potential future research are as follows. In practical engineering, there are still more complicated yet important kinds of network-induced phenomena (e.g. data disorder, sensor saturations, etc.) that have not been studied in a unified framework of neural networks. Therefore, the stability analysis and state estimation problems for MNNs taking into account more engineering-oriented complexities remain open and challenging. Moreover, it is necessary to consider performance indices other than the H∞ specification in relevant studies.
These performances include, but are not limited to, accuracy, robustness and reliability, which can be considered simultaneously in a unified framework. Also, the state estimation problems could be considered for DMNNs subject to many other types of delays, such as the neutral delay. Another future research direction is to further study the practical applications of DMNNs in many branches of science and industrial engineering, owing to their remarkable merits and wide implementation in image processing/recognition, machine learning, reinforcement learning and so forth.
Bibliography
[1] A. Afifi, A. Ayatollahi, and F. Raissi. Efficient hybrid CMOS-Nano circuit design for spiking neurons and memristive synapses with STDP. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E93A(9):1670–1677, 2010. [2] F. E. Alsaadi, Y. Luo, Y. Liu, and Z. Wang. State estimation for delayed neural networks with stochastic communication protocol: The finite-time case. Neurocomputing, 281:86–95, 2018. [3] L. Arnold. Stochastic Differential Equations: Theory and Applications. Wiley-Interscience, New York, USA, 1974. [4] K. Astrom. Introduction to Stochastic Control Theory. Academic Press, New York and London, 1970. [5] H. Bao and J. Cao. Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay. Neural Networks, 24(1):19–28, 2011. [6] H. Bao, J. Cao, J. Kurths, A. Alsaedi, and B. Ahmad. H∞ state estimation of stochastic memristor-based neural networks with time-varying delays. Neural Networks, 99:79–91, 2018. [7] N. W. Bauer, M. C. F. Donkers, N. van de Wouw, and W. P. M. H. Heemels. Decentralized observer-based control via networked communication. Automatica, 49(7):2074–2086, 2013. [8] A. Benavolia and D. Piga. A probabilistic interpretation of set-membership filtering: Application to polynomial systems through polytopic bounding. Automatica, 70:158–172, 2016. [9] S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, USA, 1994. [10] K. Cantley, A. Subramaniam, H. Stiegler, R. Chapman, and E. Vogel. Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses. IEEE Transactions on Nanotechnology, 10(5):1066–1073, 2011.
[11] K. Cantley, A. Subramaniam, H. Stiegler, R. Chapman, and E. Vogel. Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Transactions on Neural Networks and Learning Systems, 23(4):565–573, 2012. [12] H. Chen, Y. Hung, C. Chen, T. Li, and C. Chen. Image-processing algorithms realized by discrete-time cellular neural networks and their circuit implementations. Chaos, Solitons and Fractals, 29(5):1100–1108, 2006. [13] H. Chen, J. Liang, and Z. Wang. Pinning controllability of autonomous Boolean control networks. Science China Information Sciences, 59(7):Article ID 070107, 2016. [14] J. Chen, Z. Zeng, and P. Jiang. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Networks, 51:1–8, 2014. [15] Y. Chen, S. Fei, and Y. Li. Robust stabilization for uncertain saturated time-delay systems: a distributed-delay-dependent polytopic approach. IEEE Transactions on Automatic Control, 62(7):3455–3460, 2017. [16] Y. Chen, Z. Fu, Y. Liu, and F. E. Alsaadi. Further results on passivity analysis of delayed neural networks with leakage delay. Neurocomputing, 224:135–141, 2017. [17] Y. Chen, Z. Wang, S. Fei, and Q.-L. Han. Regional stabilization for discrete time-delay systems with actuator saturations via a delay-dependent polytopic approach. IEEE Transactions on Automatic Control, 64(3):1257–1264, 2019. [18] L. Chua. Memristor-the missing circuit element. IEEE Transactions on Circuit Theory, 18(5):507–519, 1971. [19] L. Chua. Resistance switching memories are memristors. Applied Physics A, 102(4):765–783, 2011. [20] D. Ding, Z. Wang, H. Dong, and H. Shu. Distributed H∞ state estimation with stochastic parameters and nonlinearities through sensor networks: the finite-horizon case. Automatica, 48(8):1575–1585, 2012. [21] D. Ding, Z. Wang, Q. Han, and G. Wei. Neural-network-based output-feedback control under round-robin scheduling protocols.
IEEE Transactions on Cybernetics, 49(6):2372–2384, 2019. [22] D. Ding, Z. Wang, D. W. C. Ho, and G. Wei. Distributed recursive filtering for stochastic systems under uniform quantizations and deception attacks through sensor networks. Automatica, 78:231–240, 2017.
[23] D. Ding, Z. Wang, D. W. C. Ho, and G. Wei. Observer-based event-triggering consensus control for multiagent systems with lossy sensors and cyber-attacks. IEEE Transactions on Cybernetics, 47(8):1936–1947, 2017. [24] D. Ding, Z. Wang, B. Shen, and H. Dong. Envelope-constrained H∞ filtering with fading measurements and randomly occurring nonlinearities: The finite horizon case. Automatica, 55:37–45, 2015. [25] D. Ding, Z. Wang, B. Shen, and H. Dong. H∞ state estimation with fading measurements, randomly varying nonlinearities and probabilistic distributed delays. International Journal of Robust and Nonlinear Control, 25:2180–2195, 2015. [26] D. Ding, Z. Wang, G. Wei, and F. E. Alsaadi. Event-based security control for discrete-time stochastic systems. IET Control Theory & Applications, 10(15):1808–1815, 2016. [27] S. Ding, Z. Wang, J. Wang, and H. Zhang. H∞ state estimation for memristive neural networks with time-varying delays: The discrete-time case. Neural Networks, 84:47–56, 2016. [28] S. Ding, Z. Wang, and H. Zhang. Dissipativity analysis for stochastic memristive neural networks with time-varying delays: A discrete-time case. IEEE Transactions on Neural Networks and Learning Systems, 29(3):618–630, 2018. [29] H. Dong, Z. Wang, S. X. Ding, and H. Gao. Event-based filter design for a class of nonlinear time-varying systems with fading channels and multiplicative noises. IEEE Transactions on Signal Processing, 63(13):3387–3395, 2015. [30] H. Dong, Z. Wang, and H. Gao. Robust H∞ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts. IEEE Transactions on Signal Processing, 58(4):1957–1966, 2010. [31] M. Donkers and W. Heemels. Output-based event-triggered control with guaranteed L∞ gain and improved and decentralised event-triggering. IEEE Transactions on Automatic Control, 57(6):1326–1332, 2012. [32] M. C. F. Donkers, W. P. M. H. Heemels, N. van de Wouw, and L. Hetel.
Stability analysis of networked control systems using a switched linear systems approach. IEEE Transactions on Automatic Control, 56(9):2101–2115, 2011.
[33] S. Duan, X. Hu, Z. Dong, L. Wang, and P. Mazumder. Memristor-based cellular nonlinear/neural network: Design, analysis, and applications. IEEE Transactions on Neural Networks and Learning Systems, 26(6):1202–1213, 2015. [34] N. Elia. Remote stabilization over fading channels. Systems and Control Letters, 54(3):237–249, 2005. [35] Y. Fan, G. Feng, Y. Wang, and C. Song. Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica, 49(2):671–675, 2013. [36] E. Fridman. New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral type systems. Systems & Control Letters, 43(4):309–319, 2001. [37] M. Gao, S. Yang, L. Sheng, and D. Zhou. Fault diagnosis for time-varying systems with multiplicative noises over sensor networks subject to Round-Robin protocol. Neurocomputing, 346:65–72, 2019. [38] L. El Ghaoui and G. Calafiore. Robust filtering for discrete-time systems with bounded noise and parametric uncertainty. IEEE Transactions on Automatic Control, 46(7):1302–1313, 2001. [39] A. Girard. Dynamic triggering mechanisms for event-triggered control. IEEE Transactions on Automatic Control, 60(7):1992–1997, 2015. [40] J. Guo, Z. Meng, and Z. Xiang. Passivity analysis of stochastic memristor-based complex-valued recurrent neural networks with mixed time-varying delays. Neural Processing Letters, 47(3):1097–1113, 2018. [41] Z. Guo, J. Wang, and Z. Yan. Passivity and passification of memristor-based recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks and Learning Systems, 25(11):2099–2109, 2014. [42] Z. Guo, S. Yang, and J. Wang. Global exponential synchronization of multiple memristive neural networks with time delay via nonlinear coupling. IEEE Transactions on Neural Networks and Learning Systems, 26(6):1300–1311, 2015. [43] Q. Han, Y. Liu, and F. Yang.
Optimal communication network-based H∞ quantized control with packet dropouts for a class of discrete-time neural networks with distributed time delay. IEEE Transactions on Neural Networks and Learning Systems, 27(2):426–434, 2016. [44] S. Haykin. Neural Networks. Englewood Cliffs, USA:Prentice-Hall, 1994.
[45] Y. He, Q. Wang, M. Wu, and C. Lin. Delay-dependent state estimation for delayed neural networks. IEEE Transactions on Neural Networks, 17(4):1077–1081, 2006.
[46] P. Heydari and M. Pedram. Capacitive coupling noise in high-speed VLSI circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 24(3):478–488, 2005.
[47] N. Hou, H. Dong, Z. Wang, W. Ren, and F. E. Alsaadi. H∞ state estimation for discrete-time neural networks with distributed delays and randomly occurring uncertainties through fading channels. Neural Networks, 89:61–73, 2017.
[48] F. O. Hounkpevi and E. E. Yaz. Robust minimum variance linear state estimators for multiple sensors with different failure rates. Automatica, 43(7):1274–1280, 2007.
[49] J. Hu, Z. Wang, D. Chen, and F. E. Alsaadi. Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects. Information Fusion, 31:65–75, 2016.
[50] J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas. Robust sliding mode control for discrete stochastic systems with mixed time delays, randomly occurring uncertainties, and randomly occurring nonlinearities. IEEE Transactions on Industrial Electronics, 59(7):3008–3015, 2012.
[51] J. Hu, Z. Wang, G.-P. Liu, and H. Zhang. Variance-constrained recursive state estimation for time-varying complex networks with quantized measurements and uncertain inner coupling. IEEE Transactions on Neural Networks and Learning Systems, 31(6):1955–1967, 2020.
[52] J. Hu, Z. Wang, S. Liu, and H. Gao. A variance-constrained approach to recursive state estimation for time-varying complex networks with missing measurements. Automatica, 64:155–162, 2016.
[53] J. Hu, H. Zhang, X. Yu, H. Liu, and D. Chen. Design of sliding-mode-based control for nonlinear systems with mixed-delays and packet losses under uncertain missing probability. IEEE Transactions on Systems, Man, and Cybernetics: Systems, DOI:10.1109/TSMC.2019.2919513, 2019.
[54] H. Huang, G. Feng, and J. Cao. Robust state estimation for uncertain neural networks with time-varying delay. IEEE Transactions on Neural Networks, 19(8):1329–1339, 2008.
[55] H. Huang, G. Feng, and J. Cao. State estimation for static neural networks with time-varying delay. Neural Networks, 23:1202–1207, 2010.
[56] S. Jo, T. Chang, I. Ebong, B. Bhadviya, P. Mazumder, and W. Lu. Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 10(4):1297–1301, 2010.
[57] X. Kan, H. Shu, and Z. Li. Robust state estimation for discrete-time neural networks with mixed time-delays, linear fractional uncertainties and successive packet dropouts. Neurocomputing, 135:130–138, 2014.
[58] H. Kim, M. Sah, C. Yang, T. Roska, and L. Chua. Neural synaptic weighting with a pulse-based memristor circuit. IEEE Transactions on Circuits and Systems I: Regular Papers, 59(1):148–158, 2012.
[59] B. Kosko. Adaptive bidirectional associative memories. Applied Optics, 26(23):4947–4960, 1987.
[60] B. Kosko. Bidirectional associative memories. IEEE Transactions on Systems, Man, and Cybernetics, 18(1):49–60, 1988.
[61] O. M. Kwon, M. J. Park, S. M. Lee, J. H. Park, and E. J. Cha. Stability for neural networks with time-varying delays via some new approaches. IEEE Transactions on Neural Networks and Learning Systems, 24(2):181–193, 2013.
[62] R. Legenstein. Nanoscale connections for brain-like circuits. Nature, 521(7550):37–38, 2015.
[63] B. Li, Z. Wang, and L. Ma. An event-triggered pinning control approach to synchronization of discrete-time stochastic complex dynamical networks. IEEE Transactions on Neural Networks and Learning Systems, 29(12):5812–5822, 2018.
[64] B. Li, Z. Wang, L. Ma, and H. Liu. Observer-based event-triggered control for nonlinear systems with mixed delays and disturbances: the input-to-state stability. IEEE Transactions on Cybernetics, 49(7):2806–2819, 2019.
[65] F. Li, H. Shen, M. Chen, and Q. Kong. Non-fragile finite-time l2 − l∞ state estimation for discrete-time Markov jump neural networks with unreliable communication links. Applied Mathematics and Computation, 271:467–481, 2015.
[66] H. Li, H. Jiang, and C. Hu. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays. Neural Networks, 75:97–109, 2016.
[67] J. Li, H. Dong, Z. Wang, and W. Zhang. Protocol-based state estimation for delayed Markovian jumping neural networks. Neural Networks, 108:355–364, 2018.
[68] Q. Li, B. Shen, Z. Wang, and F. E. Alsaadi. A sampled-data approach to distributed H∞ resilient state estimation for a class of nonlinear time-delay systems over sensor networks. Journal of the Franklin Institute, 354:7137–7157, 2017.
[69] Q. Li, B. Shen, Z. Wang, and F. E. Alsaadi. An event-triggered approach to distributed H∞ state estimation for state-saturated systems with randomly occurring mixed delays. Journal of the Franklin Institute, 355(6):3104–3121, 2018.
[70] Q. Li, B. Shen, Z. Wang, T. Huang, and J. Luo. Synchronization control for a class of discrete time-delay complex dynamical networks: a dynamic event-triggered approach. IEEE Transactions on Cybernetics, 49(5):1979–1986, 2019.
[71] R. Li and J. Cao. Dissipativity analysis of memristive neural networks with time-varying delays and randomly occurring uncertainties. Mathematical Methods in the Applied Sciences, 39(11):2896–2915, 2016.
[72] R. Li, J. Cao, A. Alsaedi, and T. Hayat. Non-fragile state observation for delayed memristive neural networks: Continuous-time case and discrete-time case. Neurocomputing, 245:102–113, 2017.
[73] J. Liang, Z. Wang, and X. Liu. State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: the discrete-time case. IEEE Transactions on Neural Networks, 20(5):781–793, 2009.
[74] D. Liu, Y. Liu, and F. E. Alsaadi. A new framework for output feedback controller design for a class of discrete-time stochastic nonlinear system with quantization and missing measurement. International Journal of General Systems, 45(5):517–531, 2016.
[75] H. Liu, Z. Wang, B. Shen, and F. E. Alsaadi. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays. International Journal of General Systems, 45(5):633–647, 2016.
[76] H. Liu, Z. Wang, B. Shen, and X. Liu. Event-triggered H∞ state estimation for delayed stochastic memristive neural networks with missing measurements: The discrete-time case. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3726–3737, 2018.
[77] Q. Liu and Z. Wang. Moving-horizon estimation for linear dynamic networks with binary encoding schemes. IEEE Transactions on Automatic Control, in press, DOI: 10.1109/TAC.2020.2996579.
[78] S. Liu, G. Wei, Y. Song, and Y. Liu. Extended Kalman filtering for stochastic nonlinear systems with randomly occurring cyber attacks. Neurocomputing, 207:708–716, 2016.
[79] Y. Liu, W. Liu, M. A. Obaid, and I. A. Abbas. Exponential stability of Markovian jumping Cohen-Grossberg neural networks with mixed mode-dependent time-delays. Neurocomputing, 177:409–415, 2016.
[80] Y. Liu, B. Shen, and Q. Li. State estimation for neural networks with Markov-based nonuniform sampling: The partly unknown transition probability case. Neurocomputing, 357:261–270, 2019.
[81] Y. Liu, B. Shen, and H. Shu. Finite-time resilient H∞ state estimation for discrete-time delayed neural networks under dynamic event-triggered mechanism. Neural Networks, 121:356–365, 2020.
[82] Y. Liu, Z. Wang, and X. Liu. Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Networks, 19(5):667–675, 2006.
[83] Y. Liu, Z. Wang, and X. Liu. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis. Neurocomputing, 71:823–833, 2007.
[84] Y. Liu, Z. Wang, and X. Liu. Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing, 71:823–833, 2008.
[85] Y. Liu, Z. Wang, and X. Liu. On global stability of delayed BAM stochastic neural networks with Markovian switching. Neural Processing Letters, 30(1):19–35, 2009.
[86] Y. Liu, Z. Wang, L. Ma, and F. E. Alsaadi. A partial-nodes-based information fusion approach to state estimation for discrete-time delayed stochastic complex networks. Information Fusion, 49:240–248, 2019.
[87] Y. Liu, Z. Wang, L. Ma, Y. Cui, and F. E. Alsaadi. Synchronization of directed switched complex networks with stochastic link perturbations and mixed time-delays. Nonlinear Analysis: Hybrid Systems, 27:213–224, 2018.
[88] Y. Liu, Z. Wang, A. Serrano, and X. Liu. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis. Physics Letters A, 362:480–488, 2007.
[89] Y. Liu, Z. Wang, Y. Yuan, and W. Liu. Event-triggered partial-nodes-based state estimation for delayed complex networks with bounded distributed delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(6):1088–1098, 2019.
[90] C. Lu, X.-M. Zhang, M. Wu, Q.-L. Han, and Y. He. Energy-to-peak state estimation for static neural networks with interval time-varying delays. IEEE Transactions on Cybernetics, 48(10):2823–2835, 2018.
[91] Y. Luo, Z. Wang, J. Liang, G. Wei, and F. E. Alsaadi. H∞ control for 2-D fuzzy systems with interval time-varying delays and missing measurements. IEEE Transactions on Cybernetics, 47(2):365–377, 2017.
[92] Y. Luo, Z. Wang, G. Wei, and F. E. Alsaadi. Robust H∞ filtering for a class of two-dimensional uncertain fuzzy systems with randomly occurring mixed delays. IEEE Transactions on Fuzzy Systems, 25(1):70–83, 2017.
[93] Y. Luo, Z. Wang, G. Wei, F. E. Alsaadi, and T. Hayat. State estimation for a class of artificial neural networks with stochastically corrupted measurements under Round-Robin protocol. Neural Networks, 77:70–79, 2016.
[94] L. Ma, Z. Wang, Q.-L. Han, and H. K. Lam. Envelope-constrained H∞ filtering for nonlinear systems with quantization effects: The finite horizon case. Automatica, 93:527–534, 2018.
[95] L. Ma, Z. Wang, Q.-L. Han, and Y. Liu. Consensus control of stochastic multi-agent systems: a survey. Science China Information Sciences, 60(12):Article ID 120201, 2017.
[96] L. Ma, Z. Wang, Q.-L. Han, and Y. Liu. Dissipative control for nonlinear Markovian jump systems with actuator failures and mixed time-delays. Automatica, 98:358–362, 2018.
[97] L. Ma, Z. Wang, and H.-K. Lam. Mean-square H∞ consensus control for a class of nonlinear time-varying stochastic multiagent systems: the finite-horizon case. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(7):1050–1060, 2017.
[98] L. Ma, Z. Wang, H.-K. Lam, and N. Kyriakoulis. Distributed event-based set-membership filtering for a class of nonlinear systems with sensor saturations over sensor networks. IEEE Transactions on Cybernetics, 47(11):3772–3783, 2017.
[99] L. Ma, Z. Wang, Y. Liu, and F. E. Alsaadi. A note on guaranteed cost control for nonlinear stochastic systems with input saturation and mixed time-delays. International Journal of Robust and Nonlinear Control, 27(8):4443–4456, 2017.
[100] L. Ma, Z. Wang, Y. Liu, and F. E. Alsaadi. Distributed filtering for nonlinear time-delay systems over sensor networks subject to multiplicative link noises and switching topology. International Journal of Robust and Nonlinear Control, 29(10):2941–2959, 2019.
[101] M. H. Mamduhi and S. Hirche. Try-once-discard scheduling for stochastic networked control systems. International Journal of Control, 92(11):2532–2546, 2019.
[102] X. Mao. Stochastic Differential Equations and Applications. New York, USA: Elsevier, 2007.
[103] C. M. Marcus and R. M. Westervelt. Stability of analog neural networks with delay. Physical Review A, 39:347–359, 1989.
[104] X. Meng and T. Chen. Event based agreement protocols for multi-agent networks. Automatica, 49(7):2125–2132, 2013.
[105] T. Mitchell. Machine Learning. New York, USA: The McGraw-Hill Companies, 1997.
[106] M. Moayedi, Y. K. Foo, and Y. C. Soh. Adaptive Kalman filtering in networked systems with random sensor delays, multiple packet dropouts and missing measurements. IEEE Transactions on Signal Processing, 58(3):1577–1588, 2010.
[107] N. Nasrabadi and W. Li. Object recognition by a Hopfield neural network. IEEE Transactions on Systems, Man and Cybernetics, 21(6):1523–1535, 1991.
[108] X. Nie, W. X. Zheng, and J. Cao. Coexistence and local mu-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays. Neural Networks, 84:172–180, 2016.
[109] T. Onomi, Y. Maenami, and K. Nakajima. Superconducting neural network for solving a combinatorial optimization problem. IEEE Transactions on Applied Superconductivity, 21(3):701–704, 2011.
[110] E. Painkras, L. Plana, and J. Garside. SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation. IEEE Journal of Solid-State Circuits, 48(8):1943–1953, 2013.
[111] Y. V. Pershin and M. Di Ventra. Experimental demonstration of associative memory with memristive neural networks. Neural Networks, 23(7):881–886, 2010.
[112] J. Qi, C. Li, and T. Huang. Stability of inertial BAM neural network with time-varying delay via impulsive control. Neurocomputing, 161:162–167, 2015.
[113] W. Qian, Y. Gao, and Y. Yang. Global consensus of multiagent systems with internal delays and communication delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(10):1961–1970, 2019.
[114] W. Qian, Y. Li, Y. Chen, and W. Liu. L2-L∞ filtering for stochastic delayed systems with randomly occurring nonlinearities and sensor saturation. International Journal of Systems Science, 51(13):2360–2377, 2020.
[115] W. Qian, L. Wang, and M. Z. Q. Chen. Local consensus of nonlinear multiagent systems with varying delay coupling. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(12):2462–2469, 2018.
[116] W. Qian, W. Xing, and S. Fei. H∞ state estimation for neural networks with general activation function and mixed time-varying delays. IEEE Transactions on Neural Networks and Learning Systems, 1–10, 2020, DOI: 10.1109/TNNLS.2020.3016120.
[117] R. Raja, U. K. Raja, R. Samidurai, and A. Leelamani. Dynamic analysis of discrete-time BAM neural networks with stochastic perturbations and impulses. International Journal of Machine Learning and Cybernetics, 5(1):39–50, 2014.
[118] R. Rakkiyappan, A. Chandrasekar, S. Lakshmanan, and J. H. Park. State estimation of memristor-based recurrent neural networks with time-varying delays based on passivity theory. Complexity, 19(4):32–43, 2013.
[119] R. Rakkiyappan, K. Maheswari, G. Velmurugan, and J. H. Park. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization. Neural Networks, 105:236–248, 2018.
[120] M. Sahebsara, T. Chen, and S. L. Shah. Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations. International Journal of Control, 80(2):292–301, 2007.
[121] M. Sahebsara, T. Chen, and S. L. Shah. Optimal H2 filtering in networked control systems with multiple packet dropout. IEEE Transactions on Automatic Control, 52(8):1508–1513, 2007.
[122] M. Sahebsara, T. Chen, and S. L. Shah. Optimal H∞ filtering in networked control systems with multiple packet dropouts. Systems & Control Letters, 57(9):696–702, 2008.
[123] R. Sakthivel, R. Anbuvithya, K. Mathiyalagan, and P. Prakash. Combined H∞ and passivity state estimation of memristive neural networks with random gain fluctuations. Neurocomputing, 168:1111–1120, 2015.
[124] M. Sathishkumar, R. Sakthivel, C. Wang, B. Kaviarasan, and S. M. Anthoni. Non-fragile filtering for singular Markovian jump systems with missing measurements. Signal Processing, 142:125–136, 2018.
[125] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson. Event-based broadcasting for multi-agent average consensus. Automatica, 49(1):245–252, 2013.
[126] B. Shen, Z. Wang, D. Ding, and H. Shu. H∞ state estimation for complex networks with uncertain inner coupling and incomplete measurements. IEEE Transactions on Neural Networks and Learning Systems, 24(12):2027–2037, 2013.
[127] B. Shen, Z. Wang, and H. Qiao. Event-triggered state estimation for discrete-time multidelayed neural networks with stochastic parameters and incomplete measurements. IEEE Transactions on Neural Networks and Learning Systems, 28(5):1152–1163, 2017.
[128] B. Shen, Z. Wang, H. Shu, and G. Wei. On nonlinear filtering for discrete-time stochastic systems with missing measurements. IEEE Transactions on Automatic Control, 53(9):2170–2180, 2008.
[129] B. Shen, Z. Wang, and H. Tan. Guaranteed cost control for uncertain nonlinear systems with mixed time-delays: The discrete-time case. European Journal of Control, 40:62–67, 2018.
[130] B. Shen, H. Tan, Z. Wang, and T. Huang. Quantized/Saturated control for sampled-data systems under noisy sampling intervals: A confluent Vandermonde matrix approach. IEEE Transactions on Automatic Control, 62(9):4753–4759, 2017.
[131] B. Shen, Z. Wang, D. Wang, and Q. Li. State-saturated recursive filter design for stochastic time-varying nonlinear complex networks under deception attacks. IEEE Transactions on Neural Networks and Learning Systems, 31(10):3788–3800, 2020.
[132] B. Shen, Z. Wang, D. Wang, and H. Liu. Distributed state-saturated recursive filtering over sensor networks under Round-Robin protocol. IEEE Transactions on Cybernetics, 50(8):3605–3615, 2020.
[133] B. Shen, Z. Wang, D. Wang, J. Luo, H. Pu, and Y. Peng. Finite-horizon filtering for a class of nonlinear time-delayed systems with an energy harvesting sensor. Automatica, 100(2):144–152, 2019.
[134] H. Shen, S. Huo, J. Cao, and T. Huang. Generalized state estimation for Markovian coupled networks under Round-Robin protocol and redundant channels. IEEE Transactions on Cybernetics, 49(4):1292–1301, 2019.
[135] H. Shen, Y. Zhu, L. Zhang, and J. H. Park. Extended dissipative state estimation for Markov jump neural networks with unreliable links. IEEE Transactions on Neural Networks and Learning Systems, 28(2):346–358, 2017.
[136] L. Sheng, Y. Niu, and M. Gao. Distributed resilient filtering for time-varying systems over sensor networks subject to Round-Robin/stochastic protocol. ISA Transactions, 87:55–67, 2019.
[137] L. Sheng, Z. Wang, E. Tian, and F. E. Alsaadi. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x, v)-dependent noises and fading channels. Neural Networks, 84:102–112, 2016.
[138] L. Sheng, Z. Wang, L. Zou, and F. E. Alsaadi. Event-based H∞ state estimation for time-varying stochastic dynamical networks with state- and disturbance-dependent noises. IEEE Transactions on Neural Networks and Learning Systems, 28(10):2382–2394, 2017.
[139] P. Shi, M. Mahmoud, S. K. Nguang, and A. Ismail. Robust filtering for jumping systems with mode-dependent delays. Signal Processing, 86(1):140–152, 2006.
[140] V. Singh. A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks. IEEE Transactions on Neural Networks, 15(1):223–225, 2004.
[141] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. Sastry. Kalman filtering with intermittent observations. IEEE Transactions on Automatic Control, 49(9):1453–1464, 2004.
[142] Q. Song, H. Yan, Z. Zhao, and Y. Liu. Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Networks, 79:108–116, 2016.
[143] Q. Song, Z. Zhao, and Y. Li. Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms. Physics Letters A, 335:213–225, 2005.
[144] Q. Song, Z. Zhao, and Y. Liu. Stability analysis of complex-valued neural networks with probabilistic time-varying delays. Neurocomputing, 159:96–104, 2015.
[145] D. Strukov, G. Snider, G. Stewart, and R. Williams. The missing memristor found. Nature, 453(7191):80–83, 2008.
[146] S. Sun, L. Xie, and W. Xiao. Optimal full-order and reduced-order estimators for discrete-time systems with multiple packet dropouts. IEEE Transactions on Signal Processing, 56(8):4031–4038, 2008.
[147] M. Tabbara and D. Nešić. Input–output stability of networked control systems with stochastic protocols and channels. IEEE Transactions on Automatic Control, 53(5):1160–1175, 2008.
[148] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9):1680–1685, 2007.
[149] D. Tank and J. Hopfield. Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems, 33(5):533–541, 1986.
[150] E. Tian, Z. Wang, L. Zou, and D. Yue. Probability-constrained filtering for a class of nonlinear systems with improved static event-triggered communication. International Journal of Robust and Nonlinear Control, 29(5):1484–1498, 2019.
[151] V. Ugrinovskii and E. Fridman. A Round-Robin type protocol for distributed estimation with H∞ consensus. Systems and Control Letters, 69:103–110, 2014.
[152] G. C. Walsh, H. Ye, and L. G. Bushnell. Stability analysis of networked control systems. IEEE Transactions on Control Systems Technology, 10(3):438–446, 2002.
[153] X. Wan, Z. Wang, Q.-L. Han, and M. Wu. Finite-time H∞ state estimation for discrete time-delayed genetic regulatory networks under stochastic communication protocols. IEEE Transactions on Circuits and Systems I: Regular Papers, 65(10):3481–3491, 2018.
[154] X. Wan, Z. Wang, Q.-L. Han, and M. Wu. A recursive approach to quantized H∞ state estimation for genetic regulatory networks under stochastic communication protocols. IEEE Transactions on Neural Networks and Learning Systems, 30(9):2840–2852, 2019.
[155] X. Wan, Z. Wang, M. Wu, and X. Liu. H∞ state estimation for discrete-time nonlinear singularly perturbed complex networks under the Round-Robin protocol. IEEE Transactions on Neural Networks and Learning Systems, 30(2):415–426, 2019.
[156] H. Wang, S. Duan, T. Huang, C. Li, and L. Wang. Novel stability criteria for impulsive memristive neural networks with time-varying delays. Circuits Systems and Signal Processing, 35(11):3935–3956, 2016.
[157] H. Wang, S. Duan, T. Huang, L. Wang, and C. Li. Exponential stability of complex-valued memristive recurrent neural networks. IEEE Transactions on Neural Networks and Learning Systems, 28(3):766–771, 2017.
[158] L. Wang, Y. Shen, and G. Zhang. Synchronization of a class of switched neural networks with time-varying delays via nonlinear feedback control. IEEE Transactions on Cybernetics, 46(10):2300–2310, 2016.
[159] L. Wang, Y. Shen, Q. Yin, and G. Zhang. Adaptive synchronization of memristor-based neural networks with time-varying delays. IEEE Transactions on Neural Networks and Learning Systems, 26(9):2033–2042, 2015.
[160] L. Wang, Z. Wang, Q.-L. Han, and G. Wei. Event-based variance-constrained H∞ filtering for stochastic parameter systems over sensor networks with successive missing measurements. IEEE Transactions on Cybernetics, 48:1007–1017, 2018.
[161] L. Wang, Z. Wang, T. Huang, and G. Wei. An event-triggered approach to state estimation for a class of complex networks with mixed time delays and nonlinearities. IEEE Transactions on Cybernetics, 46(11):2497–2508, 2016.
[162] L. Wang, Z. Wang, G. Wei, and F. E. Alsaadi. Finite-time state estimation for recurrent delayed neural networks with component-based event-triggering protocol. IEEE Transactions on Neural Networks and Learning Systems, 29(4):1046–1057, 2018.
[163] Y. Wang, L. Xie, and C. E. de Souza. Robust control of a class of uncertain nonlinear systems. Systems and Control Letters, 19(2):139–149, 1992.
[164] Z. Wang, S. Ding, Z. Huang, and H. Zhang. Exponential stability and stabilization of delayed memristive neural networks based on quadratic convex combination method. IEEE Transactions on Neural Networks and Learning Systems, 27(11):2337–2350, 2016.
[165] Z. Wang, D. W. C. Ho, and X. Liu. State estimation for delayed neural networks. IEEE Transactions on Neural Networks, 16(1):279–284, 2005.
[166] Z. Wang, H. Liu, B. Shen, F. E. Alsaadi, and A. M. Dobaie. H∞ state estimation for discrete-time stochastic memristive BAM neural networks with mixed time-delays. International Journal of Machine Learning and Cybernetics, 10(4):771–785, 2019.
[167] Z. Wang, Y. Liu, K. Fraser, and X. Liu. Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Physics Letters A, 354:288–297, 2006.
[168] Z. Wang, Y. Liu, and X. Liu. On global asymptotic stability of neural networks with discrete and distributed delays. Physics Letters A, 345(4–6):299–308, 2005.
[169] Z. Wang, Y. Liu, and X. Liu. Exponential stabilization of a class of stochastic system with Markovian jump parameters and mode-dependent mixed time-delays. IEEE Transactions on Automatic Control, 55(7):1656–1662, 2010.
[170] G. Wei, Z. Wang, and H. Shu. Robust filtering with stochastic nonlinearities and multiple missing measurements. Automatica, 45(3):836–841, 2009.
[171] H. Wei, R. Li, and C. Chen. State estimation for memristor-based neural networks with time-varying delays. International Journal of Machine Learning and Cybernetics, 6(2):213–225, 2015.
[172] C. Wen, Y. Cai, Y. Liu, and C. Wen. A reduced-order approach to filtering for systems with linear equality constraints. Neurocomputing, 193:219–226, 2016.
[173] C. Wen, Z. Wang, J. Hu, Q. Liu, and F. E. Alsaadi. Recursive filtering for state-saturated systems with randomly occurring nonlinearities and missing measurements. International Journal of Robust and Nonlinear Control, 28:1715–1727, 2018.
[174] S. Wen, G. Bao, Z. Zeng, Y. Chen, and T. Huang. Global exponential synchronization of memristor-based recurrent neural networks with time-varying delays. Neural Networks, 48:195–203, 2013.
[175] S. Wen and Z. Zeng. Dynamics analysis of a class of memristor-based recurrent networks with time-varying delays in the presence of strong external stimuli. Neural Processing Letters, 35:47–59, 2012.
[176] S. Wen, Z. Zeng, and T. Huang. Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays. Neurocomputing, 97:233–240, 2012.
[177] A. Wu, S. Wen, and Z. Zeng. Synchronization control of a class of memristor-based recurrent neural networks. Information Sciences, 183(1):106–116, 2012.
[178] A. Wu and Z. Zeng. Anti-synchronization control of a class of memristive recurrent neural networks. Communications in Nonlinear Science and Numerical Simulation, 18:373–385, 2013.
[179] A. Wu and Z. Zeng. Lagrange stability of memristive neural networks with discrete and distributed delays. IEEE Transactions on Neural Networks and Learning Systems, 25(4):690–703, 2014.
[180] A. Wu and Z. Zeng. Lagrange stability of neural networks with memristive synapses and multiple delays. Information Sciences, 280:135–151, 2014.
[181] A. Wu and Z. Zeng. Global Mittag-Leffler stabilization of fractional-order memristive neural networks. IEEE Transactions on Neural Networks and Learning Systems, 28(1):206–217, 2017.
[182] A. Wu, J. Zhang, and Z. Zeng. Dynamics behaviors of a class of memristor-based Hopfield networks. Physics Letters A, 375(11):1661–1665, 2011.
[183] Z. Wu, B. Jiang, and H. R. Karimi. A logarithmic descent direction algorithm for the quadratic knapsack problem. Applied Mathematics and Computation, 369:Article ID 124854, 2020.
[184] Z. Wu, H. R. Karimi, and C. Dang. A deterministic annealing neural network algorithm for the minimum concave cost transportation problem. IEEE Transactions on Neural Networks and Learning Systems, 31(10):4354–4366, 2020.
[185] Y. Xia and J. Wang. A general projection neural network for solving monotone variational inequalities and related optimization problems. IEEE Transactions on Neural Networks, 15(2):318–328, 2004.
[186] J. Xiao, S. Zhong, Y. Li, and F. Xu. Finite-time Mittag-Leffler synchronization of fractional-order memristive BAM neural networks with time delays. Neurocomputing, 219:431–439, 2017.
[187] Y. Xu, R. Lu, P. Shi, H. Li, and S. Xie. Finite-time distributed state estimation over sensor networks with Round-Robin protocol and fading channels. IEEE Transactions on Cybernetics, 48(1):336–345, 2016.
[188] Y. Xu, H. Su, Y. Pan, Z. Wu, and W. Xu. Stability analysis of network control systems with Round-Robin scheduling and packet dropouts. Journal of the Franklin Institute, 350(8):2013–2027, 2013.
[189] C. Yang, K. Huang, H. Cheng, Y. Li, and C.-Y. Su. Haptic identification by ELM-controlled uncertain manipulator. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(8):2398–2409, 2017.
[190] C. Yang, X. Wang, Z. Li, Y. Li, and C.-Y. Su. Teleoperation control based on combination of wave variable and neural networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(8):2125–2136, 2017.
[191] X. Yang, J. Cao, and J. Liang. Exponential synchronization of memristive neural networks with delays: Interval matrix method. IEEE Transactions on Neural Networks and Learning Systems, 28(8):1878–1888, 2017.
[192] X. Yang and D. W. Ho. Synchronization of delayed memristive neural networks: Robust analysis approach. IEEE Transactions on Cybernetics, 46(12):3377–3387, 2016.
[193] Y. Yang, J. Hu, N. Yang, and J. Du. Non-fragile set-membership state estimation for memristive neural networks with incomplete measurements via Round-Robin protocol. In Proceedings of the 2019 Chinese Control and Decision Conference, Nanchang, China, pages 3395–3400, 2019.
[194] Y. Yuan, L. Guo, and Z. Wang. Composite control of linear quadratic games in delta domain with disturbance observers. Journal of the Franklin Institute, 354(4):1673–1695, 2017.
[195] Y. Yuan, Z. Wang, and L. Guo. Event-triggered strategy design for discrete-time nonlinear quadratic games with disturbance compensations: The noncooperative case. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(11):1885–1896, 2017.
[196] Y. Yuan, Z. Wang, P. Zhang, and H. Dong. Nonfragile near-optimal control of stochastic time-varying multiagent systems with control- and state-dependent noises. IEEE Transactions on Cybernetics, 49(7):2605–2617, 2018.
[197] Y. Yuan, H. Yuan, Z. Wang, L. Guo, and H. Yang. Optimal control for networked control systems with disturbances: a delta operator approach. IET Control Theory & Applications, 11(9):1325–1332, 2017.
[198] D. Yue, E. Tian, Y. Zhang, and C. Peng. Delay-distribution-dependent stability and stabilization of T-S fuzzy systems with probabilistic interval delay. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(2):503–516, 2009.
[199] N. Zeng, Z. Wang, and H. Zhang. Inferring nonlinear lateral flow immunoassay state-space models via an unscented Kalman filter. Science China Information Sciences, 59(11):Article ID 112204, 2016.
[200] Z. Zeng, D. Huang, and Z. Wang. Pattern memory analysis based on stability theory of cellular neural networks. Applied Mathematical Modelling, 32(1):112–121, 2008.
[201] C. Zhang, Y. He, L. Jiang, W. Lin, and M. Wu. Delay-dependent stability analysis of neural networks with time-varying delay: A generalized free-weighting-matrix approach. Applied Mathematics and Computation, 294:102–120, 2017.
[202] G. Zhang and Y. Shen. Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control. Neural Networks, 55:1–10, 2014.
[203] H. Zhang, J. Hu, H. Liu, X. Yu, and F. Liu. Recursive state estimation for time-varying complex networks subject to missing measurements and stochastic inner coupling under random access protocol. Neurocomputing, 346:48–57, 2019.
[204] H. Zhang, Z. Wang, and D. Liu. A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Transactions on Neural Networks and Learning Systems, 25(7):1229–1262, 2014.
[205] J. Zhang, L. Ma, and Y. Liu. Passivity analysis for discrete-time neural networks with mixed time-delays and randomly occurring quantization effects. Neurocomputing, 216:657–665, 2016.
[206] J. Zhang, L. Ma, Y. Liu, M. Lyu, F. E. Alsaadi, and Y. Bo. H∞ and l2 − l∞ finite-horizon filtering with randomly occurring gain variations and quantization effects. Applied Mathematics and Computation, 298:171–187, 2017.
[207] L. Zhang, Y. Zhu, and W. Zheng. State estimation of discrete-time switched neural networks with multiple communication channels. IEEE Transactions on Cybernetics, 47(4):1028–1040, 2017.
[208] W. Zhang, Y. Tang, W. Wong, and Q. Miao. Stochastic stability of delayed neural networks with local impulsive effects. IEEE Transactions on Neural Networks and Learning Systems, 26(10):2336–2345, 2015.
[209] W. Zhang, Z. Wang, Y. Liu, D. Ding, and F. E. Alsaadi. Event-based state estimation for a class of complex networks with time-varying delays: a comparison principle approach. Physics Letters A, 381(1):10–18, 2017.
[210] W. Zhang, L. Yu, and G. Feng. Optimal linear estimation for networked systems with communication constraints. Automatica, 47(9):1992–2000, 2011.
[211] X.-M. Zhang and Q.-L. Han. Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach. Neural Networks, 54:57–69, 2014.
[212] X.-M. Zhang, Q.-L. Han, Z. Wang, and B. Zhang. Neuronal state estimation for neural networks with two additive time-varying delay components. IEEE Transactions on Cybernetics, 47(10):3184–3194, 2017.
[213] Y. Zhang, Z. Wang, and L. Ma. Variance-constrained state estimation for networked multi-rate systems with measurement quantization and probabilistic sensor failures. International Journal of Robust and Nonlinear Control, 26(16):3507–3523, 2016.
[214] Y. Zhang, Z. Wang, L. Ma, and F. E. Alsaadi. Annulus-event-based fault detection, isolation and estimation for multirate time-varying systems: applications to a three-tank system. Journal of Process Control, 75:48–58, 2019.
[215] Z. Zhao, Z. Wang, L. Zou, and H. Liu. Finite-horizon H∞ state estimation for artificial neural networks with component-based distributed delays and stochastic protocol. Neurocomputing, 321:169–177, 2018.
[216] L. Zou, Z. Wang, H. Gao, and X. Liu. State estimation for discrete-time dynamical networks with time-varying delays and stochastic disturbances under the Round-Robin protocol. IEEE Transactions on Neural Networks and Learning Systems, 28(5):1139–1151, 2016.
[217] L. Zou, Z. Wang, and H. Gao. Set-membership filtering for time-varying systems with mixed time-delays under Round-Robin and Weighted Try-Once-Discard protocols. Automatica, 74:341–348, 2016.
[218] L. Zou, Z. Wang, J. Hu, and H. Gao. On H∞ finite-horizon filtering under stochastic protocol: dealing with high-rate communication networks. IEEE Transactions on Automatic Control, 62(9):4884–4890, 2017.
Index
H2 filtering, 12
H∞ state estimation, 19
H∞ filtering, 12
H∞ performance, 19, 25
l2 − l∞ state estimation, 169
AI, 1
ANN, 1
artificial intelligence, 1
artificial neural network, 1
asymptotic stability, 7
attractiveness, 5
BAMNNs, 6, 8
Bernoulli distribution, 12
bidirectional associative memory neural networks, 6
big data, 1
cellular MNNs, 8
channel fading, 11, 12, 99
chaotic MNNs, 8
CMNNs, 6, 8
CMOS, 1
communication protocols, 14, 117
communication technology, 3
complex networks, 6
constant delays, 11, 33
continuous memristive neural networks, 6
data packet dropout, 6
deep neural network, 1
delay-distribution-dependent, 11, 95
delayed memristive neural networks, 117
delayed neural networks, 7, 11, 77
delayed stochastic MNNs, 33
discrete memristive neural networks, 3
discrete time-delays, 10, 119
distributed delays, 10, 11, 19, 33, 119
disturbance attenuation level, 23, 40, 63, 139, 156
DMNNs, 3
DMRNNs, 19, 23
DNN, 1
dynamic event-triggered mechanism, 14, 151
dynamical differential equation, 5
ellipsoid matrices, 124
engineering-oriented complexities, 11, 16
estimator design, 3
event-triggered mechanism, 13, 33, 151
exponential stability, 8
exponentially mean-square stable, 23, 40, 63
fading measurements, 95
finite-horizon H∞ state estimation, 135
fractional-order MNNs, 8
global stability, 7
globally exponentially stable, 83
incomplete information, 9
inequality analysis approach, 7
leakage delays, 77
limited bandwidth, 12
linear matrix inequality, 5
LMI, 5
Lyapunov functional, 5, 11
Lyapunov-Krasovskii functional, 20, 24, 41, 65, 85, 141, 174
mathematical induction, 126
memory resistor, 4
memristive BAM neural networks, 55
memristive complex-value neural networks, 8
memristive Hopfield networks, 8
memristive neural network, 3
memristive recurrent neural networks, 8, 19
memristor, 1
missing measurements, 11, 33
mixed time-delays, 9, 11, 19, 33, 96
MNN, 3
multiple time-delays, 11
network-induced incomplete information, 11
networked system, 3
neural network, 1
non-fragile algorithm, 16
non-fragile estimation, 16
nonlinear systems, 4
norm-bounded uncertainties, 37, 153
parameter uncertainties, 9, 12
partially missing measurements, 12
performance specifications, 12
probabilistic delays, 11, 77
random delays, 11
recurrent neural networks, 6
remote sensors, 3
resilient H∞ state estimation, 151
resilient state estimation, 118
Riccati equation approach, 12
RNNs, 6
Round-Robin protocol, 14, 169
RR protocol, 14
SC protocol, 14, 15
set-membership filtering, 117
set-membership state estimation, 16, 117
set-membership technique, 15
signal quantization, 11
stability analysis, 3
state-dependent parameters, 3
state-dependent switching, 37, 153
static neural networks, 11
steady-state behaviors, 135
stochastic communication protocol, 14
stochastic difference equations, 10
stochastic differential equations, 10
stochastic memristive neural networks, 77
stochastic MNNs, 10
stochastic noises, 7
stochastic systems, 10
stochastic time-delays, 19
stochasticity, 10
synchronization, 7
threshold parameters, 13
time-delayed neural networks, 19
time-delays, 33
time-triggered, 13
time-varying delays, 6, 11, 33
time-varying system, 3
transient performances, 135
triggering condition, 13
underwater robots, 3
unknown-but-bounded, 117
unmanned vehicles, 3
Weighted Try-Once-Discard protocol, 14, 117
WTOD protocol, 14, 15, 117