Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays (Studies in Systems, Decision and Control, 514)
ISBN 3031478347, 9783031478345

This monograph studies the synchronization control of Markovian complex neural networks with time-varying delays.


English · 167 pages [162] · 2023


Table of contents :
Acknowledgements
Contents
1 Introduction
1.1 The Complex Neural Networks and Neural Networks
1.2 Synchronization of Complex Networks
1.3 The Multiple Forms of Transition Rates
1.4 Markovian Complex Network
1.5 Organization of the Monograph
2 Stochastic Synchronization of Markovian Coupled Neural Networks with Partially Unknown Transition Rates
2.1 Introduction
2.2 Problem Formulation
2.3 Main Results
2.4 Simulation
2.5 Conclusion
3 Local Synchronization of Markovian Neutral-Type Complex Networks with Partially Unknown Transition Rates
3.1 Introduction
3.2 Problem Statement and Preliminaries
3.3 Main Results
3.4 Simulation
3.5 Conclusion
4 Local Synchronization of Markovian Nonlinearly Coupled Neural Networks with Generally Uncertain Transition Rates
4.1 Introduction
4.2 Preliminaries
4.3 Main Results
4.4 Simulation
4.5 Conclusion
5 Sampled-Data Synchronization of Complex Networks Based on Discontinuous Lyapunov-Krasovskii Functional
5.1 Introduction
5.2 Problem Formulation
5.3 Main Results
5.4 Simulation
5.5 Conclusion
6 Sampled-Data Synchronization of Markovian Coupled Neural Networks with Time-Varying Mode Delays
6.1 Introduction
6.2 Preliminaries
6.3 Sampled-Data Synchronization Criteria
6.4 Illustrative Examples
6.5 Conclusion
7 Synchronization Criteria of Delayed Inertial Neural Networks with Generally Uncertain Transition Rates
7.1 Introduction
7.2 Problem Statement and Preliminaries
7.3 Main Results
7.4 Simulation
7.5 Conclusion
8 Conclusions
References

Studies in Systems, Decision and Control 514

Junyi Wang Jun Fu

Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays

Studies in Systems, Decision and Control Volume 514

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control–quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

Junyi Wang · Jun Fu

Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays

Junyi Wang Faculty of Robot Science and Engineering Northeastern University Shenyang, China

Jun Fu The State Key Laboratory of Synthetical Automation for Process Industries Northeastern University Shenyang, China

ISSN 2198-4182 ISSN 2198-4190 (electronic) Studies in Systems, Decision and Control ISBN 978-3-031-47834-5 ISBN 978-3-031-47835-2 (eBook) https://doi.org/10.1007/978-3-031-47835-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Acknowledgements

There are numerous individuals without whose constructive comments, useful suggestions, and wealth of ideas this monograph could not have been completed. Special thanks go to Prof. Huaguang Zhang at Northeastern University, Prof. Zhanshan Wang at Northeastern University, Prof. Yingchun Wang at Northeastern University, and Prof. Qinggang Meng at Loughborough University. The authors are especially grateful to their families for their encouragement and never-ending support when it was most required. Finally, we would like to thank the editors at Springer for their professional and efficient handling of this project. The monograph was supported in part by the National Natural Science Foundation of China (61903075, 61825301, 61877033, 61627809), in part by the Young and Middle-aged Science and Technology Innovation Talent of Shenyang (RC220485), in part by the Guangdong Basic and Applied Basic Research Foundation (2022A1515140126, 2023A1515011172), and in part by the Project of Liaoning Province Science and Technology Program under Grant 2019-KF-03-02.

Shenyang, China

Junyi Wang Jun Fu


Contents

1 Introduction  1
  1.1 The Complex Neural Networks and Neural Networks  1
  1.2 Synchronization of Complex Networks  2
  1.3 The Multiple Forms of Transition Rates  6
  1.4 Markovian Complex Network  6
  1.5 Organization of the Monograph  7
2 Stochastic Synchronization of Markovian Coupled Neural Networks with Partially Unknown Transition Rates  9
  2.1 Introduction  9
  2.2 Problem Formulation  11
  2.3 Main Results  14
  2.4 Simulation  26
  2.5 Conclusion  29
3 Local Synchronization of Markovian Neutral-Type Complex Networks with Partially Unknown Transition Rates  31
  3.1 Introduction  31
  3.2 Problem Statement and Preliminaries  33
  3.3 Main Results  37
  3.4 Simulation  45
  3.5 Conclusion  51
4 Local Synchronization of Markovian Nonlinearly Coupled Neural Networks with Generally Uncertain Transition Rates  53
  4.1 Introduction  53
  4.2 Preliminaries  55
  4.3 Main Results  59
  4.4 Simulation  70
  4.5 Conclusion  74
5 Sampled-Data Synchronization of Complex Networks Based on Discontinuous Lyapunov-Krasovskii Functional  75
  5.1 Introduction  75
  5.2 Problem Formulation  77
  5.3 Main Results  79
  5.4 Simulation  86
  5.5 Conclusion  90
6 Sampled-Data Synchronization of Markovian Coupled Neural Networks with Time-Varying Mode Delays  93
  6.1 Introduction  93
  6.2 Preliminaries  95
  6.3 Sampled-Data Synchronization Criteria  99
  6.4 Illustrative Examples  112
  6.5 Conclusion  117
7 Synchronization Criteria of Delayed Inertial Neural Networks with Generally Uncertain Transition Rates  119
  7.1 Introduction  119
  7.2 Problem Statement and Preliminaries  121
  7.3 Main Results  126
  7.4 Simulation  138
  7.5 Conclusion  144
8 Conclusions  147
References  149

Chapter 1

Introduction

1.1 The Complex Neural Networks and Neural Networks

Artificial neural network research began in the 1940s with the M-P neuron model, which was first proposed by the biologist McCulloch and the mathematician Pitts [95]. In 1949, Hebb studied adaptive laws in neural systems and proposed Hebb's rule for improving the strength of connections between neurons, breaking new ground in the study of neural networks. In 1982, Hopfield proposed a recurrent neural network model, which could perform optimal computation and associative memory. In 1986, Rumelhart et al. gave an error back-propagation algorithm (i.e., the BP algorithm) for training the weights of multilayer perceptrons. Both the feedforward neural network model and the recurrent neural network model are based on the nonlinear characteristics, adaptability and excitation-inhibition properties of biological neurons, and their main purpose is to design high-performance controllers based on bionic principles [152]. A controller that can be implemented online must be relatively simple in structure and have online adaptive and learning capabilities. Neural controllers are the preferred method to meet this requirement, especially recurrent neural networks: they not only have the universal approximation property of feedforward neural networks, but also the dynamical properties and associative memory capabilities that feedforward neural networks lack.

In early neural network research, neural networks were treated as single individuals solving relatively homogeneous tasks, which reflected the task requirements and scientific problems of the time. The correction of connection weights between neurons was the key concern, which led to research on the design of neural network learning algorithms. With the advent of the network era, complex neural networks (i.e., coupled neural networks) show potential advantages in large-scale pattern recognition and image processing [3, 17, 30, 42, 94, 131, 132, 151, 203]. Complex neural networks, as a special kind of complex network, have many good characteristics that a single neural network does not have. As a result, the dynamic properties of complex neural networks (such as synchronization and chaos) are widely studied. Research on complex neural networks focuses on the topological structure or connection strength between nodes, rather than on the connection relationships between microscopic neurons. Each node in a complex neural network is composed of a neural network with appropriate dimensions, rather than a single neuron. In this way, complex neural networks are endowed with characteristics of complex networks, and they have the following features compared with earlier neural networks.

(1) Traditional neural networks are concerned with the strength of the connections of the intrinsic neurons, whereas complex neural networks are concerned with the topological configuration between neural networks.
(2) Traditional neural networks design weight update laws and learning algorithms for neurons to achieve control and learning tasks at a given performance metric, whereas complex neural networks rely on the interconnection configuration matrices to achieve synchronization and consensus of the entire network.
(3) In early-stage neural network research, there were no restrictions on the connection weights between neurons, which could be given in advance or learned online. The configuration structure of complex neural networks is subject to different restrictions according to different research objectives; for synchronization, the configuration structure needs to satisfy the dissipation condition.
(4) For the stability of recurrent neural networks, the main concerns are the constraint conditions on the neuron connection matrix and the establishment of corresponding stability criteria. For the synchronization of complex neural networks, the main concerns are the constraints between the configuration matrix and the internal parameters of the nodes (isolated neural networks).

1.2 Synchronization of Complex Networks

Many systems in nature and society can be modeled as complex networks, such as biological neural networks, power grids, the Internet, the World Wide Web, metabolic systems, and disease transmission networks [6, 50, 59, 99, 124, 140, 155, 215]. Therefore, the analysis and control of dynamic behavior in complex networks have attracted extensive attention in recent years. A complex network is made up of nodes and edges that connect these nodes. The nodes represent individuals in a real system, which can be parts or subsystems of the system. Edges describe the interrelationships between individuals in the real system: an edge exists if there is a relationship or exchange of information between two nodes; otherwise, the edge does not exist.

A typical group behaviour in complex networks is the synchronization phenomenon, which is widely found in natural and social systems. As a result, the theory of synchronization problems and their applications have been widely studied in the natural and social sciences [27, 45, 50, 66, 90, 126, 128-130, 140, 143, 160, 162, 180, 188, 197, 201, 203, 215]. Synchronization phenomena include the synchronization of two nearby pendulums [148], the synchronous luminescence of fireflies on summer nights, the synchronous chirping of frogs, the synchronization of neural networks in the brain [40, 43, 125], the synchronization of heart and breathing rates, and the gradual synchronization of audience applause in a theatre [97]. The synchronization problem means that two or more nodes (subsystems) exchange information with each other so that all nodes attain the same dynamical behavior. The study of synchronization phenomena has a long history, especially after the discovery of chaos synchronization in the 1990s, which led to a boom in the study of chaos synchronization. The synchronization of large-scale networks has been used in a wide range of applications, such as confidential communication, electronic circuits, oscillators, semiconductor lasers and biological systems [124, 146, 161, 162, 217]. The synchronization of complex networks is the deepening and expansion of chaos synchronization research, and it is an important research topic. There have been many reviews and works on synchronization of complex dynamical networks, including pinning synchronization [27, 90, 126, 128, 188], stochastic synchronization [140, 160, 180], impulsive synchronization [45, 66], adaptive synchronization [203], and distributed synchronization [130], and corresponding synchronization criteria have been proposed. However, synchronization sometimes leads to harmful effects. For example, routers on the Internet periodically send routing information, which can lead to network communication congestion; the synchronization of pedestrians' stepping frequencies generated resonance, which led to the vibration of the Millennium Bridge in London in June 2000. When the states of all individuals in a network change periodically in unison, this synchronization phenomenon can be described in mathematical language. Each individual is a dynamic system, and there are many specific coupling relationships, resulting in complex pattern structures and diverse dynamic behaviors. Therefore, it is crucial to study complex network synchronization deeply in order to better understand its behaviour, so as to exploit beneficial synchronization and avoid harmful synchronization [69, 87, 135, 148, 161].

The unified form of the complex network model was proposed in [145-147]. However, in real networks there are time delays in transmission and response [164, 198] due to limited transmission speed and signal congestion. Hence, research on complex dynamical networks with delays has attracted much attention [12, 26, 71, 88, 119, 140, 153, 159, 170, 186, 189, 196, 204]. The complex network model (1.1) with coupled time delay was presented in [63], and the synchronization problem of this model was investigated:

$$\dot{x}_k(t) = f(x_k(t)) + c \sum_{l=1}^{N} G_{kl}\,\Gamma\, x_l(t-\tau), \quad k = 1, 2, \ldots, N, \tag{1.1}$$

where $\tau$ is the time delay, and the constant $c > 0$ indicates the coupling strength. $\Gamma \in R^{n \times n}$ denotes the inner linking matrix, and $G \in R^{N \times N}$ denotes the scale-free coupling network structure. If node $k$ is connected to node $l$, then $G_{kl} = G_{lk} = 1$; otherwise $G_{kl} = G_{lk} = 0$ $(k \neq l)$. If the degree of node $k$ is $d_k$, then $\sum_{l=1, l\neq k}^{N} G_{kl} = \sum_{l=1, l\neq k}^{N} G_{lk} = d_k$, and the diagonal elements of the coupling matrix are $G_{kk} = -d_k$.

Coupled neural networks, as a special class of complex networks, have complex and unpredictable behaviours, including stable equilibria, periodic oscillation, bifurcation and chaotic attractors. Meanwhile, time delay is also commonly encountered in the implementation of neural networks due to the finite switching speed of amplifiers, and it is usually a source of oscillation in neural networks [4, 193, 198, 203]. Therefore, the synchronization analysis of coupled neural networks with time delays has been developed in recent years. To this end, the coupled delayed neural network model with $N$ nodes was proposed, and the synchronization problem of this class of complex neural networks was investigated in [23, 91]:

$$\frac{dx_k(t)}{dt} = -C x_k(t) + A f(x_k(t)) + B f(x_k(t-\tau)) + U(t) + \sum_{l=1}^{N} G_{kl}\,\Gamma\, x_l(t), \quad k = 1, 2, \ldots, N, \tag{1.2}$$

where $x_k(t)$ is the state vector of the $k$th network; $C = \mathrm{diag}[c_1, c_2, \ldots, c_n]$ is a positive diagonal matrix; and $A$ and $B$ are the connection weight matrix and the delayed connection weight matrix, respectively. The output of node $k$ is expressed as $f(x_k(t)) = [f_1(x_{k1}(t)), f_2(x_{k2}(t)), \ldots, f_n(x_{kn}(t))]^T$, and the external input of the system is $U(t) = [U_1(t), U_2(t), \ldots, U_n(t)]^T$. $\Gamma$ is the internal coupling matrix, indicating the internal coupling between subsystems. $G$ is the coupling structure matrix, which represents the topology of the network and satisfies $G_{kl} \geq 0$ $(k \neq l)$ and $G_{kk} = -\sum_{l=1, l\neq k}^{N} G_{kl}$.

Due to the size and length of various axons, many parallel paths appear in neural networks, which gives neural networks spatial properties. In order to better solve pattern recognition for time-delay-dependent signals, neural networks with distributed delays were designed by researchers. Therefore, it is very necessary to consider complex neural networks with discrete and distributed delay coupling. In [12], the coupled neural network model (1.3) was proposed, which includes both discrete time-delay coupling and distributed time-delay coupling:

$$\frac{dx_k(t)}{dt} = -C x_k(t) + A f(x_k(t)) + B f(x_k(t-\tau)) + U(t) + \sum_{l=1}^{N} G^{(1)}_{kl}\Gamma_1 x_l(t) + \sum_{l=1}^{N} G^{(2)}_{kl}\Gamma_2 x_l(t-\tau) + \sum_{l=1}^{N} G^{(3)}_{kl}\Gamma_3 \int_{t-\tau}^{t} x_l(s)\,ds, \quad k = 1, 2, \ldots, N. \tag{1.3}$$

Here $\Gamma_q$ $(q = 1, 2, 3)$ are the internal coupling matrices, which represent the internal coupling between subsystems. $G^{(q)} = (G^{(q)}_{kl})_{N \times N}$ are the coupling structure matrices, representing the network topology and satisfying $G^{(q)}_{kl} \geq 0$ $(k \neq l)$ and $G^{(q)}_{kk} = -\sum_{l=1, l\neq k}^{N} G^{(q)}_{kl}$ $(q = 1, 2, 3)$. The other parameters in system (1.3) are the same as in system (1.2). The discrete complex network model with distributed delay was given in [81], and the synchronization and state estimation issues were studied.

In the past decade, many researchers have discussed the synchronization of complex networks with time delays from different perspectives, such as impulsive synchronization [68], adaptive synchronization [204], local synchronization [143, 185], cluster synchronization [13, 138], sampled-data synchronization [67, 170], finite-time synchronization [48, 96], robust synchronization [71, 196], and lag synchronization [55]. In fact, complex networks evolve in time and space through the interactions between individual nodes, so nodes differ in degree of relevancy, number of connections, and so on, and different nodes have different levels of importance. A complex network can also achieve synchronization of the whole network by applying control to only some of its nodes; this control method is known as pinning control. Pinning control was first applied to spatiotemporal chaos control in regular networks; in recent years, it has been used to control large-scale dynamic networks. Many scholars have studied the pinning synchronization problem for complex networks with time delay [86] and without time delay [51, 120, 188], and corresponding synchronization criteria have been proposed. Among these synchronization problems, local synchronization and cluster synchronization differ from the other types of synchronization, which require the states of all nodes in the network to become consistent. In a large-scale coupled network, the states of all nodes may not reach synchronization; only the states of nodes with the same character can be synchronized. Based on the properties of the nodes, they can be divided into several groups, each of which has the same properties and can be synchronized internally, thus forming cluster synchronization. When the synchronization of only one group of nodes is studied, without considering the nodes in the other groups, the problem is called local synchronization. In most traditional synchronization and stability studies of complex networks, continuous-time controllers are designed to realize system synchronization. With the development of digital computer technology, continuous-time controllers are being replaced by digital controllers. In sampled-data control, the control signal is updated only at the sampling instants and remains constant during the sampling periods. Up to now, complex network synchronization problems have been roughly divided into three categories. The first category is the eigenvalue (characteristic value) method [146, 147, 162], in which the synchronization criteria are expressed as eigenvalues of the coupling structure matrix satisfying some inequality condition. The second is the M-matrix method [121]. The last is the linear matrix inequality (LMI) method [12, 14, 186]. With the development of computer technology, LMIs are widely used because they are easy to verify and solve, e.g., with the MATLAB LMI Toolbox.
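
As an illustration of how LMI-based criteria of this kind are checked in practice, the following is a minimal sketch (not from the monograph) that tests feasibility of a Lyapunov-type LMI $A^T P + P A < 0$, $P > 0$ in Python with cvxpy; the system matrix `A`, the margin `eps` and the solver choice are assumptions made only for this example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system matrix (assumption for illustration).
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
n = A.shape[0]

# Decision variable: symmetric matrix P.
P = cp.Variable((n, n), symmetric=True)

eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                  # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),   # Lyapunov LMI
]

# Pure feasibility problem: any P satisfying the LMIs certifies stability.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, "\nP =\n", P.value)
```

The synchronization criteria in later chapters are larger LMIs of the same shape: a set of matrix variables, a block inequality, and a feasibility check.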


1.3 The Multiple Forms of Transition Rates

In practical systems, due to the influence of the network environment and sensor node dynamics, a network system may undergo abrupt parameter changes, which can be described by a Markovian chain [32, 38, 93, 171, 175, 201]. Up to now, the transition rates in Markovian jumping systems can be divided into the following forms:

(1) the transition rate is completely known;
(2) the transition rate is partially unknown;
(3) the transition rate is time-varying;
(4) the transition rate is uncertain;
(5) the transition rate is generalized, meaning that it is both uncertain and partially unknown.

In the early study of Markovian systems, the elements of the transition rate matrix were completely known. The stochastic stability of linear jumping systems was studied in [93]; then, the exponential stability of general nonlinear stochastic differential equations with Markovian jumping was studied in [93]. The stability of discrete-time Markovian linear systems with partially unknown transition rates and time-varying delay was investigated in [206]. By proposing the quantitative principle, the stability of generalized Markovian jumping systems with time-varying transition rates was investigated in [133], and the corresponding controllers were proposed. The passivity and $H_\infty$-filtering of semi-Markovian jumping systems with random uncertainties and sensor failures were studied in [117]. The stochastic stability of semi-Markovian linear systems with time-varying transition rates and mode-dependent delays was studied in [64]. The robust stability of Markovian jumping systems with uncertain switching probabilities and transfer probabilities was studied in [175], and a robust state feedback controller was designed. Based on input time-delay methods and the linear matrix inequality (LMI) technique, the sampled-data synchronization problem of Markovian neural networks was researched in [171]. Depending on whether the time delay depends on the Markovian jumping mode, delays can be divided into mode-dependent and mode-independent time delays.
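
To make the role of the transition rate matrix concrete, here is a minimal Python sketch (an illustration, not part of the monograph) that simulates a right-continuous Markovian jumping process $r_t$ from a fully known generator; the specific generator values and seed are assumptions.

```python
import numpy as np

# Hypothetical generator (transition rate matrix) for a 3-mode jump process.
# Off-diagonal entries pi_ij >= 0; each row sums to zero (pi_ii = -sum_j pi_ij).
PI = np.array([[-3.0, 2.0, 1.0],
               [1.0, -1.5, 0.5],
               [0.5, 0.5, -1.0]])

rng = np.random.default_rng(0)

def simulate_modes(PI, r0=0, t_end=10.0):
    """Simulate r_t by drawing exponential holding times in each mode."""
    t, r, path = 0.0, r0, []
    while t < t_end:
        rate = -PI[r, r]                     # total exit rate of current mode
        dwell = rng.exponential(1.0 / rate)  # holding time in mode r
        path.append((t, r))
        probs = PI[r].copy()                 # jump to j != r w.p. pi_rj / rate
        probs[r] = 0.0
        r = rng.choice(len(PI), p=probs / rate)
        t += dwell
    return path

print(simulate_modes(PI)[:5])
```

The partially unknown and generalized cases in forms (2)-(5) correspond to some entries of `PI` being unavailable when the criteria are derived.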

1.4 Markovian Complex Network

Markovian complex networks are a special kind of hybrid system with multiple modes, each mode corresponding to a particular subsystem. In [85], Markovian complex networks with mixed delays were proposed, and synchronization criteria were derived based on the Kronecker product and stochastic analysis tools. Then, the random synchronization problem of a T-S fuzzy discrete-time complex network model with random perturbation was studied in [127], and new synchronization criteria were obtained by applying a new Lyapunov-Krasovskii functional, the Kronecker product and random theory. The bounded $H_\infty$ synchronization and state estimation of discrete-time stochastic complex networks were studied in [114]. The synchronization problem of coupled continuous-time neural networks with Markovian jumping and random variables was studied in [179, 181]. By constructing a suitable stochastic Lyapunov-Krasovskii functional, some finite-time synchronization criteria were proposed for Markovian complex networks with partially unknown transition rates in [179]. After that, the finite-time synchronization problem was investigated for a class of Markovian neutral complex dynamical networks with partly unknown transition rates by utilizing the pinning control technique and constructing an appropriate stochastic Lyapunov-Krasovskii functional in [181]. The synchronization problem of complex networks with random disturbances was studied in [119], and novel delay-dependent synchronization criteria were proposed by utilizing stochastic analysis techniques. Based on the above discussions, many synchronization issues of complex networks with Markovian jumping have been developed in the past few years; some of them, however, have not been successfully solved so far. For example, the local synchronization of Markovian nonlinearly coupled complex neural networks under generally uncertain transition rates is still an open problem. In addition, it is urgent to study the sampled-data synchronization of Markovian complex neural networks with mode-dependent time delays via suitable Lyapunov-Krasovskii functionals.

1.5 Organization of the Monograph

This monograph studies the synchronization control of Markovian complex neural networks with time-varying delays, and the structure of the book is summarized as follows.

This chapter introduces the system description and some background knowledge, and also addresses the motivations of this monograph.

In Chap. 2, the stochastic synchronization issue of Markovian coupled neural networks with partially unknown transition rates and random coupling strengths is investigated. New delay-dependent synchronization criteria are obtained by a new augmented Lyapunov-Krasovskii functional and the reciprocally convex combination technique. The obtained synchronization criteria depend not only on the upper and lower bounds of the delay, but also on the mathematical expectation and variance of the random coupling strengths. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 3, the local synchronization issue of Markovian neutral complex networks with partial information on transition rates is investigated. By designing a new augmented Lyapunov-Krasovskii functional and adopting the reciprocally convex combination technique, new delay-dependent synchronization criteria in terms of LMIs are derived, which depend on the upper and lower bounds of the delays. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 4, the local synchronization issue of Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates is investigated. Less conservative local synchronization criteria containing the bounds of the delay and the delay derivative are obtained based on a novel augmented Lyapunov-Krasovskii functional and a new integral inequality that combines the free-matrix-based integral inequality with a further improved integral inequality. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 5, the sampled-data synchronization issue of delayed complex networks with aperiodic sampling intervals is investigated based on an enhanced input delay approach. By introducing an improved discontinuous Lyapunov-Krasovskii functional, less conservative sampled-data synchronization criteria are obtained by adopting Wirtinger's integral inequality and mixed convex combination, which make full use of the upper bound of the variable sampling interval and the sawtooth structure information of the varying input delay. Furthermore, the sampled-data controllers are obtained by solving a set of LMIs. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 6, the sampled-data synchronization issue of Markovian coupled neural networks with mode-dependent interval time-varying delays and aperiodic sampling intervals is investigated based on an enhanced input delay approach. By applying a mode-dependent augmented Lyapunov-Krasovskii functional and an extended Jensen's integral inequality, new delay-dependent synchronization criteria are obtained, which make full use of the sawtooth structure information of the varying input delay. Furthermore, mode-dependent sampled-data controllers are proposed based on the delay-dependent synchronization criteria. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 7, the synchronization issue of inertial neural networks with time-varying delays and generally Markovian jumping is investigated. Due to the delay of the data transmission channel or the loss of data information, the Markovian process in such systems is uncertain or only partially known, so it is more general and practicable to consider generally Markovian jumping inertial neural networks. Synchronization criteria are obtained by delay-dependent Lyapunov-Krasovskii functionals and a higher-order polynomial-based relaxed inequality. Furthermore, the desired controllers are obtained by solving a set of LMIs. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

In Chap. 8, we conclude the monograph by briefly summarizing the main theoretical findings.

Chapter 2

Stochastic Synchronization of Markovian Coupled Neural Networks with Partially Unknown Transition Rates

In this chapter, we focus on the stochastic synchronization of Markovian coupled neural networks with partial information on transition rates and random coupling strengths. The coupling configuration matrices are not restricted to be symmetric, and the coupling strengths are mutually independent random variables. By designing a novel augmented Lyapunov–Krasovskii functional and adopting the reciprocally convex combination technique and the properties of random variables, new delay-dependent synchronization criteria in terms of LMIs are derived. The obtained criteria depend not only on the upper and lower bounds of the delay but also on the mathematical expectations and variances of the random coupling strengths. Numerical examples are provided to verify the effectiveness of the presented results.

2.1 Introduction

Complex dynamical networks have been widely applied to model many complex systems in the sciences, and have received increasing attention from researchers in many fields in recent years [124, 216]. As the major collective behavior, synchronization is an important issue and has been extensively addressed because it can describe many natural phenomena and has many potential applications in image processing, neuronal synchronization and secure communication [122, 143, 162, 185, 189, 190]. Coupled neural networks, as a special kind of complex network, have been extensively investigated [4, 12, 13, 18, 19, 23, 41, 53, 65, 74-76, 84, 88, 106, 110-112, 130, 140, 153, 164, 167, 168, 170, 171, 179, 180, 184, 186, 193-196, 198, 199, 201, 203, 218]. In the application of complex networks, time delay exists naturally due to the finite speed of switching and signal transmission. It is well known that time delay may result in undesirable dynamic behaviors, such as performance degradation and instability of the systems. As a particular kind of time delay, the continuously distributed delay has received much research attention, since neural networks have a number of parallel pathways with various axon sizes and lengths [12, 13, 143, 196]. Therefore, the study of the synchronization problem for neural networks with distributed delay coupling is attractive in both research and application fields.

Markovian jumping systems have been widely used to model various practical systems that experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs and sudden environmental disturbances [212]. In order to reflect more realistic dynamical behaviors, many researchers have investigated the synchronization problem for stochastic complex networks with Markovian jumping or external stochastic perturbations [82, 83, 130, 179, 213]. In [82], the stability and synchronization problems for discrete-time neural networks with Markovian jumping parameters and mode-dependent mixed time delays were investigated. The synchronization problem for delayed neutral-type neural networks with Markovian jumping parameters was studied in [83]. In [130], the authors studied distributed synchronization for stochastic coupled neural networks via randomly occurring control. In [213], the authors investigated global exponential adaptive synchronization for a class of stochastic complex dynamical networks with neutral-type neural network nodes and stochastic disturbances. In [179], the synchronization problem for coupled neural networks with Markovian jumping and random coupling strengths was investigated. In most cases, the transition rates of Markovian jumping systems are unknown. Therefore, it is necessary to further consider more general jumping systems with partial information on transition rates. In [92], the authors proposed stochastic synchronization criteria for Markovian complex networks with partial information on transition rates. In [212], stabilization of Markovian jumping linear systems with partial information on transition rates was studied. There is still further room to investigate Markovian coupled neural networks with random coupling strengths.

In this chapter, we aim to deal with the stochastic synchronization problem of Markovian coupled neural networks with partial information on transition rates and random coupling strengths. The transition rates are partially unknown, and the coupling strengths are positive random variables. The considered coupled neural networks with random coupling strengths are more practical than most existing models, in which the coupling strengths of the whole complex network are deterministic constants or time-varying variables. In this chapter, new delay-dependent stochastic synchronization criteria are obtained based on the augmented Lyapunov–Krasovskii functional and the reciprocally convex combination technique, and they are given in the form of LMIs. Illustrative examples are given to show the effectiveness of the results. The contributions of this chapter are highlighted as follows:

(1) The coupled neural network model with Markovian jumping parameters and random coupling strengths is proposed. In addition, the coupling configuration matrices are not restricted to be symmetric, and the transition rates of the jumping process are considered to be only partially accessible. Compared with [12, 41, 83, 92, 122, 143, 179, 186], the considered model in this chapter is more suitable for practical applications.
(2) The stochastic synchronization criteria are obtained by designing the augmented Lyapunov–Krasovskii functional and adopting the reciprocally convex combination technique, which introduces more relaxed variables to alleviate the requirements on the positive matrices.
(3) The obtained novel delay-dependent synchronization criteria depend on the mathematical expectations and variances of the random coupling strengths.

The rest of this chapter is organized as follows. In Sect. 2.2, some preliminaries and Markovian coupled neural networks with partially unknown transition rates and random coupling strengths are introduced. In Sect. 2.3, new sufficient synchronization criteria for coupled neural networks with Markovian jumping parameters are derived by designing the augmented Lyapunov–Krasovskii functional and adopting the reciprocally convex combination technique. In Sect. 2.4, numerical simulations are given to demonstrate the effectiveness of the proposed results. Finally, conclusions are drawn in Sect. 2.5.

Notation. In this chapter, $R^n$ denotes the $n$-dimensional Euclidean space. The notation $\|\cdot\|$ refers to the Euclidean vector norm. $R^{m\times n}$ denotes the set of $m \times n$ real matrices. The symbol $\otimes$ represents the Kronecker product. $X^T$ denotes the transpose of matrix $X$. $X \geq 0$ ($X < 0$), where $X \in R^{n\times n}$, means that $X$ is a real symmetric positive semidefinite (negative definite) matrix. $I_n$ represents the $n$-dimensional identity matrix. For a matrix $A \in R^{n\times n}$, $\lambda_{\min}(A)$ denotes the minimum eigenvalue of $A$. $\begin{pmatrix} X & Y \\ * & Z \end{pmatrix}$ stands for $\begin{pmatrix} X & Y \\ Y^T & Z \end{pmatrix}$. $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq 0}, P)$ is a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq 0}$ satisfying the conditions that it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets. $E\{\cdot\}$ stands for the mathematical expectation. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.

2.2 Problem Formulation

We consider the model for Markovian coupled neural networks as follows:

$$\begin{aligned}
\dot{x}_k(t) = {} & -C(r_t)x_k(t) + A(r_t)f(x_k(t)) + B(r_t)f(x_k(t-\tau(t))) + U(t) \\
& + a_{1,r_t}(t)\sum_{l=1}^{N} G^{(1)}_{kl}(r_t)\Gamma_1(r_t)x_l(t) + a_{2,r_t}(t)\sum_{l=1}^{N} G^{(2)}_{kl}(r_t)\Gamma_2(r_t)x_l(t-\tau(t)) \\
& + a_{3,r_t}(t)\sum_{l=1}^{N} G^{(3)}_{kl}(r_t)\Gamma_3(r_t)\int_{t-\tau(t)}^{t} x_l(s)\,ds, \quad k = 1, 2, \ldots, N,
\end{aligned} \tag{2.1}$$

where $x_k(t) = [x_{k1}(t), x_{k2}(t), \ldots, x_{kn}(t)]^T \in R^n$ is the state vector. $C(r_t)$ is a diagonal matrix with positive diagonal entries. $A(r_t) = (a_{ij}(r_t))_{n\times n}$ and $B(r_t) = (b_{ij}(r_t))_{n\times n}$ are the connection weight matrices. $U(t) = [U_1(t), U_2(t), \ldots, U_n(t)]^T \in R^n$ is an external input. The output of node $k$ is expressed as $f(x_k(t)) = [f_1(x_{k1}(t)), f_2(x_{k2}(t)), \ldots, f_n(x_{kn}(t))]^T \in R^n$. $\tau(t)$ is an interval time-varying delay satisfying $0 \leq d_1 \leq \tau(t) \leq d_2$ and $\dot{\tau}(t) \leq h$, where $d_1$, $d_2$ $(d_1 < d_2)$ and $h$ are known constants. $a_{1,r_t}(t)$, $a_{2,r_t}(t)$ and $a_{3,r_t}(t)$ are random variables describing the random coupling strengths; they are mutually independent, and their values are taken in some nonnegative intervals. $\Gamma_1(r_t)$, $\Gamma_2(r_t)$ and $\Gamma_3(r_t) \in R^{n\times n}$ denote the inner coupling matrices, which describe the inner coupling between the subsystems. $G^{(1)}(r_t) = (G^{(1)}_{kl}(r_t))_{N\times N}$, $G^{(2)}(r_t) = (G^{(2)}_{kl}(r_t))_{N\times N}$ and $G^{(3)}(r_t) = (G^{(3)}_{kl}(r_t))_{N\times N}$ are the coupling configuration matrices representing the topological structure of the network, and they satisfy the following conditions:

$$G^{(q)}_{kl}(r_t) \geq 0 \ (k \neq l), \qquad G^{(q)}_{kk}(r_t) = -\sum_{l=1, l\neq k}^{N} G^{(q)}_{kl}(r_t), \quad q = 1, 2, 3.$$
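
As a quick illustration of these dissipativity conditions, the following Python sketch (an illustration, not from the monograph) builds a random coupling configuration matrix with nonnegative off-diagonal entries and zero row sums; the network size, density and seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_coupling_matrix(N, density=0.5):
    """Random G with G[k, l] >= 0 for k != l and zero row sums."""
    G = rng.random((N, N)) * (rng.random((N, N)) < density)
    np.fill_diagonal(G, 0.0)             # clear diagonal first
    np.fill_diagonal(G, -G.sum(axis=1))  # G_kk = -sum_{l != k} G_kl
    return G

G = random_coupling_matrix(5)
assert np.allclose(G.sum(axis=1), 0.0)   # each row sums to zero
print(G.round(2))
```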

Let $\{r_t, t \geq 0\}$ be a right-continuous Markovian process on the probability space, taking values in a finite state space $\mathcal{G} = \{1, 2, \ldots, N\}$ with generator $\Pi = (\pi_{ij})$, $i, j \in \mathcal{G}$, given by

$$\Pr\{r_{t+\Delta t} = j \mid r_t = i\} = \begin{cases} \pi_{ij}\Delta t + o(\Delta t), & i \neq j, \\ 1 + \pi_{ii}\Delta t + o(\Delta t), & i = j, \end{cases}$$

where $\Delta t > 0$ and $\lim_{\Delta t \to 0} o(\Delta t)/\Delta t = 0$. Here $\pi_{ij} \geq 0$ $(i \neq j)$ is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t + \Delta t$, and $\pi_{ii} = -\sum_{j=1, j\neq i}^{N} \pi_{ij}$. In this chapter, the transition rates of the jumping process are considered to be only partially accessible. For instance, the transition rate matrix $\Pi$ for system (2.1) with $N$ operation modes may be expressed as

$$\begin{pmatrix}
\pi_{11} & ? & ? & \cdots & \pi_{1N} \\
? & \pi_{22} & ? & \cdots & ? \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
? & \pi_{N2} & \pi_{N3} & \cdots & \pi_{NN}
\end{pmatrix},$$

where $?$ represents an unknown transition rate. For notational clarity, for every $i \in \mathcal{G}$ we denote $\mathcal{G}^i = \mathcal{G}_k^i \cup \mathcal{G}_{uk}^i$, where $\mathcal{G}_k^i = \{j : \pi_{ij} \text{ is known for } j \in \mathcal{G}\}$ and $\mathcal{G}_{uk}^i = \{j : \pi_{ij} \text{ is unknown for } j \in \mathcal{G}\}$. In the transition rate matrix $\Pi$, the unknown transition rates may occur anywhere; since we do not restrict their positions, there are many possible forms of the matrix $\Pi$. No matter which form is chosen, the corresponding stochastic synchronization criteria can be obtained.
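
A hedged sketch of this bookkeeping: representing each unknown rate $?$ as NaN, the known and unknown index sets $\mathcal{G}_k^i$ and $\mathcal{G}_{uk}^i$ can be extracted as follows (the example matrix values are assumptions).

```python
import numpy as np

# Hypothetical 4-mode transition rate matrix; NaN marks an unknown rate "?".
PI = np.array([[-1.3, np.nan, np.nan, 0.8],
               [np.nan, -0.9, np.nan, np.nan],
               [0.2, np.nan, -1.1, 0.6],
               [np.nan, 0.3, 0.4, np.nan]])

for i in range(len(PI)):
    known = [j for j in range(len(PI)) if not np.isnan(PI[i, j])]
    unknown = [j for j in range(len(PI)) if np.isnan(PI[i, j])]
    print(f"mode {i}: G_k = {known}, G_uk = {unknown}")
```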

The initial condition associated with (2.1) is given as $x_k(s) = \phi_{k0}(s) \in C([-d_2, 0], R^n)$ $(k = 1, 2, \ldots, N)$, where $d_2 = \sup_{t\in R} \tau(t)$ and $C([-d_2, 0], R^n)$ is the set of continuous functions from $[-d_2, 0]$ to $R^n$.

Remark 2.1 In this chapter, the stochastic synchronization problem for the Markovian coupled neural networks (2.1) is studied. The coupling configuration matrices are not assumed to be symmetric, which is less restrictive than in existing results [12, 23, 41, 82, 83, 92, 119, 122, 189, 190, 196].

Remark 2.2 In Refs. [12, 13, 23, 41, 82, 83, 88, 92, 119, 122, 130, 143, 153, 170, 185, 186, 190, 196, 199, 213], the coupling strengths of complex networks are constant or time-varying, and there are few synchronization criteria for complex networks with random coupling strengths. Since complex networks are often affected by external factors, the coupling strength randomly varies around a constant. Hence, the study of Markovian coupled neural networks with random coupling strengths has practical implications.

Remark 2.3 It is notable that if $\mathcal{G}_k^i = \mathcal{G}^i$ and $\mathcal{G}_{uk}^i = \emptyset$, then the transition rates of the Markovian coupled neural networks are completely known, which is the usual setting. If $\mathcal{G}_{uk}^i = \mathcal{G}^i$ and $\mathcal{G}_k^i = \emptyset$, the transition rates are completely unknown, and the system can be regarded as a Markovian system under arbitrary jumping.

Throughout this chapter, the following assumptions are needed.

Assumption 2.1 ([84]) For any $x_1, x_2 \in R$, there exist constants $e_r^-, e_r^+$ such that

$$e_r^- \leq \frac{f_r(x_1) - f_r(x_2)}{x_1 - x_2} \leq e_r^+, \quad r = 1, 2, \ldots, n.$$

We denote

$$E_1 = \mathrm{diag}(e_1^+ e_1^-, \ldots, e_n^+ e_n^-), \qquad E_2 = \mathrm{diag}\left(\frac{e_1^+ + e_1^-}{2}, \ldots, \frac{e_n^+ + e_n^-}{2}\right).$$

Assumption 2.2 ([179]) $E\{a_{q,r_t}(t)\} = a_{q0,i}$ and $E\{(a_{q,r_t}(t) - a_{q0,i})^2\} = b_{q,i}^2$ are the mathematical expectations and variances of $a_{q,r_t}(t)$, respectively, where $a_{q0,i}$ and $b_{q,i}^2$ are nonnegative constants for $q = 1, 2, 3$, $i \in \mathcal{G}$.
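
For instance, under Assumption 2.1 with the common activation $f_r = \tanh$ (an assumed example, for which $e_r^- = 0$ and $e_r^+ = 1$), the matrices $E_1$ and $E_2$ can be formed as in this short sketch.

```python
import numpy as np

# Sector bounds for tanh activations: (tanh(x1)-tanh(x2))/(x1-x2) lies in [0, 1].
n = 3
e_minus = np.zeros(n)  # e_r^-
e_plus = np.ones(n)    # e_r^+

E1 = np.diag(e_plus * e_minus)        # diag(e_r^+ e_r^-)
E2 = np.diag((e_plus + e_minus) / 2)  # diag((e_r^+ + e_r^-)/2)
print(E1, E2, sep="\n")
```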

Remark 2.4 In Assumption 2.1, the constants $e_r^-, e_r^+$ are allowed to be positive, negative or zero, which makes the activation functions more general than nonnegative sigmoidal functions.

Definition 2.1 System (2.1) is said to be globally asymptotically synchronized in mean square if $\lim_{t\to\infty} E\{\|x_k(t) - x_l(t)\|^2\} = 0$, $k, l = 1, 2, \ldots, N$, holds for any initial values.

Definition 2.2 ([162]) Let $\hat{R}$ denote a ring and define $T(\hat{R}, K) = \{$the set of matrices with entries in $\hat{R}$ such that the sum of the entries in each row is equal to $K$ for some $K \in \hat{R}\}$.

Lemma 2.1 ([162]) Let $G$ be an $N \times N$ matrix in the set $T(\hat{R}, K)$. Then the $(N-1) \times (N-1)$ matrix $H$ satisfies $MG = HM$, where $H = MGJ$,

$$M = \begin{pmatrix}
1 & -1 & 0 & 0 & \cdots & 0 \\
0 & 1 & -1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 & -1
\end{pmatrix}_{(N-1)\times N}, \qquad
J = \begin{pmatrix}
1 & 1 & \cdots & 1 & 1 \\
0 & 1 & \cdots & 1 & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 1 \\
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 0 & 0
\end{pmatrix}_{N\times(N-1)},$$

in which $1$ is the multiplicative identity of $\hat{R}$. Moreover, the matrix $H$ can be written explicitly as $H_{ij} = \sum_{k=1}^{j} (G_{ik} - G_{i+1,k})$ for $i, j \in \{1, 2, \ldots, N-1\}$.
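
A small numerical check of Lemma 2.1 (an illustration under the assumption that $G$ has equal row sums, here zero) can be written as follows.

```python
import numpy as np

def M_matrix(N):
    """(N-1) x N difference matrix with rows (..., 1, -1, ...)."""
    return np.eye(N - 1, N) - np.eye(N - 1, N, k=1)

def J_matrix(N):
    """N x (N-1) matrix from Lemma 2.1; its last row is all zeros."""
    return np.triu(np.ones((N, N - 1)))

N = 5
rng = np.random.default_rng(2)
G = rng.random((N, N))
G -= np.diag(G.sum(axis=1))       # force equal (zero) row sums

M, J = M_matrix(N), J_matrix(N)
H = M @ G @ J
assert np.allclose(M @ G, H @ M)  # Lemma 2.1: MG = HM
print("MG = HM verified, H shape:", H.shape)
```

The identity holds because $M$ annihilates the all-ones vector, which is exactly where the equal-row-sum structure of $G$ lives.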

Lemma 2.2 ([44]) Assume that the vector function $\omega : [0, r] \to R^m$ is well defined for the following integrations. For any symmetric positive definite matrix $W \in R^{m\times m}$ and scalar $r > 0$, one has

$$r\int_0^r \omega^T(s) W \omega(s)\,ds \geq \left(\int_0^r \omega(s)\,ds\right)^T W \int_0^r \omega(s)\,ds.$$
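
The inequality can be sanity-checked numerically by discretizing both integrals, as in this sketch (the function $\omega$, the matrix $W$ and the grid are assumptions made for the example).

```python
import numpy as np

r, m = 2.0, 2
s = np.linspace(0.0, r, 2001)
ds = s[1] - s[0]

omega = np.stack([np.sin(3 * s), np.cos(s) + 0.5 * s])  # omega(s), shape (m, len(s))
W = np.array([[2.0, 0.3], [0.3, 1.0]])                   # symmetric positive definite

lhs = r * np.sum(np.einsum('is,ij,js->s', omega, W, omega)) * ds
v = omega.sum(axis=1) * ds                               # integral of omega
rhs = v @ W @ v
assert lhs >= rhs                                        # Jensen's integral inequality
print(lhs, ">=", rhs)
```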

Lemma 2.3 ([25]) The Kronecker product has the following properties: (1) $(\alpha A)\otimes B = A \otimes (\alpha B)$; (2) $(A+B)\otimes C = A\otimes C + B\otimes C$; (3) $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$.
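
These identities are easy to confirm numerically; a minimal sketch with randomly chosen matrices (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C, D = (rng.random((2, 2)) for _ in range(4))
alpha = 0.7

assert np.allclose(np.kron(alpha * A, B), np.kron(A, alpha * B))          # (1)
assert np.allclose(np.kron(A + B, C), np.kron(A, C) + np.kron(B, C))      # (2)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))  # (3)
print("Kronecker identities verified")
```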

2.3 Main Results

In this section, stochastic synchronization criteria for Markovian coupled neural networks with partial information on transition rates are obtained based on the augmented Lyapunov–Krasovskii functional and the reciprocally convex combination technique. For convenience, each possible value of $r_t$ is denoted by $i$, $i \in \mathcal{G}$. Then, system (2.1) can be formulated as

$$\begin{aligned}
\dot{x}_k(t) = {} & -C_i x_k(t) + A_i f(x_k(t)) + B_i f(x_k(t-\tau(t))) + U(t) \\
& + a_{10,i}\sum_{l=1}^{N} G^{(1)}_{kl,i}\Gamma_{1,i} x_l(t) + (a_{1,i}(t) - a_{10,i})\sum_{l=1}^{N} G^{(1)}_{kl,i}\Gamma_{1,i} x_l(t) \\
& + a_{20,i}\sum_{l=1}^{N} G^{(2)}_{kl,i}\Gamma_{2,i} x_l(t-\tau(t)) + (a_{2,i}(t) - a_{20,i})\sum_{l=1}^{N} G^{(2)}_{kl,i}\Gamma_{2,i} x_l(t-\tau(t)) \\
& + a_{30,i}\sum_{l=1}^{N} G^{(3)}_{kl,i}\Gamma_{3,i}\int_{t-\tau(t)}^{t} x_l(s)\,ds + (a_{3,i}(t) - a_{30,i})\sum_{l=1}^{N} G^{(3)}_{kl,i}\Gamma_{3,i}\int_{t-\tau(t)}^{t} x_l(s)\,ds, \\
& k = 1, 2, \ldots, N.
\end{aligned} \tag{2.2}$$

(2.2)

Let .M = M ⊗ In , where . M is defined in Lemma 2.1. ' ' C¯i = I N ⊗ Ci , .C¯i = I N −1 ⊗ Ci , . A¯ i = I N ⊗ Ai , . A¯ i = I N −1 ⊗ Ai , . B¯i = I N ⊗ Bi , ' (1) (2) ¯i = I N −1 ⊗ Bi , .Γ¯1,i =G i ⊗ Γ1,i , .Γ¯2,i = G i ⊗ Γ2,i , .Γ¯3,i = G i(3) ⊗ Γ3,i , .x(t) = .B [x1T (t), x2T (t), . . . , x NT (t)]T , . f (x(t)) = [ f T (x1 (t)), f T (x2 (t)), . . . , f T (x N (t))]T , ¯ (t) = [U T (t), U T (t), . . . , U T (t)]T , . E¯1 = I N −1 ⊗ E 1 , . E¯2 = I N −1 ⊗ E 2 , then the .U complex network Eq. 2.2 can be rewritten as follows: .

.

x(t) ˙ = −C¯i x(t) + A¯ i f (x(t)) + B¯i f (x(t − τ (t))) + U¯ (t) + a10,i Γ¯1,i x(t) + (a1,i (t) − a10,i )Γ¯1,i x(t) + a20,i Γ¯2,i x(t − τ (t)) + (a2,i (t) − a20,i )Γ¯2,i x(t − τ (t)) + a30,i Γ¯3,i + (a3,i (t) − a30,i )Γ¯3,i

t

x(s)ds t−τ (t)

t

x(s)ds.

(2.3)

t−τ (t)

Theorem 2.1 Suppose that Assumptions 2.1 and 2.2 hold. Markovian coupled neural networks Eq. 2.3 with a partially unknown transition rate matrix is globally asymptotically synchronized in mean square if there exist matrices ( (1) (1) ) ( (2) (2) ) ( (3) (3) ) ( (1) (1) ) Q 11 Q 12 Q 11 Q 12 Q 11 Q 12 U11 U12 > 0,. > 0,. > 0,. . Pi > 0,. (2) (3) (1) ∗ Q (1) ∗ Q ∗ Q ∗ U22 ( (2) (2) ) 22 ( (1) (1) ) 22 ( (2) (2) ) 22 U11 U12 V V V V > 0, . 11 12 > 0, . 11 12 > 0, .W1 > 0, .W2 > 0, > 0, . (2) (1) (2) ∗ U22 ∗ V22 ∗ V22 T . W3 > 0, and positive diagonal matrices . R1 , . R2 , . R3 , . R4 , and matrices . Z i = Z i , . S, such that, for any .i ∈ G, the succeeding matrix inequalities are satisfied Φi = Φi(1) Φi(2) < 0,

(2.4)

i P j − Z i ≤ 0, ∀ j ∈ Guk , j /= i,

(2.5)

.

.

16

2 Stochastic Synchronization of Markovian Coupled Neural Networks … .

i P j − Z i ≥ 0, ∀ j ∈ Guk , j = i,

( .

W2 S ∗ W2

)



where

Φi(1)

.

Φi(2)

.

'

≥ 0,

Φ11,i Φ12,i W1 0 Φ15,i ⎜ ∗ Φ22,i Φ23,i Φ24,i Φ25,i ⎜ ⎜ ∗ 0 ∗ Φ33,i S ⎜ ⎜ ∗ 0 ∗ ∗ Φ 44,i ⎜ ⎜ ∗ ∗ ∗ ∗ Φ55,i ⎜ ⎜ ∗ ∗ ∗ ∗ ∗ ⎜ ∗ ∗ ∗ ∗ ∗ =⎜ ⎜ ⎜ ∗ ∗ ∗ ∗ ∗ ⎜ ⎜ ∗ ∗ ∗ ∗ ∗ ⎜ ⎜ ∗ ∗ ∗ ∗ ∗ ⎜ ⎜ ∗ ∗ ∗ ∗ ∗ ⎜ ⎝ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ⎛

Φ19,i ⎜ Φ29,i ⎜ ⎜ 0 ⎜ ⎜ ⎜ 0 ⎜ ⎜ Φ59,i ⎜ ⎜ Φ69,i ⎜ =⎜ ⎜ 0 ⎜ 0 ⎜ ⎜Φ ⎜ 99,i ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎝ ∗ ∗ '

(1) T (U11 ) 0 (1) T ) −(U11 0 (1) T (U12 ) 0 (1) T ) −(U12 0 0 (1) −V11 ∗ ∗ ∗

(2.7) ⎞ Φ16,i 0 0 Φ26,i 0 0 ⎟ ⎟ 0 Φ37,i 0 ⎟ ⎟ 0 0 Φ48,i ⎟ ⎟ Φ56,i 0 0 ⎟ ⎟ 0 ⎟ Φ66,i 0 ⎟ ∗ Φ77,i 0 ⎟ ⎟, ∗ ∗ Φ88,i ⎟ ⎟ ∗ ∗ ∗ ⎟ ⎟ ∗ ∗ ∗ ⎟ ⎟ ∗ ∗ ∗ ⎟ ⎟ ∗ ∗ ∗ ⎠ ∗ ∗ ∗

(1) U12 0 0 0 (2) T (1) (2) (U11 ) −U12 U12 (2) T (2) −(U11 ) 0 −U12 (1) T 0 (U22 ) 0 0 0 0 (2) T (1) T (2) T (U12 ) −(U22 ) (U22 ) (2) T (2) T −(U12 ) 0 −(U22 ) 0 0 0 (1) 0 −V12 0 (2) (2) 0 −V12 −V11 (1) ∗ −V22 0 (2) ∗ ∗ −V22

0 0

Φ11,i = −Pi C¯i − (Pi C¯i )T + a10,i Pi H¯ 1,i + (a10,i Pi H¯ 1,i )T +

.

(2.6)

∑ j∈Gk

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟, ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

πi j (P j − Z i ) +

(2) 2 (1) 2 2 ¯ ' T 2 (2) ¯ ¯' Q (1) 11 + Q 11 + d1 V11 + (d2 − d1 ) V11 − W1 + d2 W3 − R1 E 1 + d1 (C i ) W1 C i − ' ' 2 T W1 H¯ 1,i + d12 d12 × a10,i (C¯i )T W1 H¯ 1,i − (d12 × a10,i (C¯i )T W1 H¯ 1,i )T + d12 × a10,i H¯ 1,i ' ' ' 2 ¯T × b1,i H1,i W1 H¯ 1,i + (d2 − d1 )2 (C¯i )T W2 C¯i − (d2 − d1 )2 × a10,i (C¯i )T W2 H¯ 1,i − ' 2 T W2 H¯ 1,i + (d2 − d1 )2 × ((d2 − d1 )2 × a10,i (C¯i )T W2 H¯ 1,i )T + (d2 − d1 )2 × a10,i H¯ 1,i 2 ¯T b1,i H1,i W2 H¯ 1,i ,

2.3 Main Results

17

' Φ12,i =a20,i Pi H¯ 2,i − d12 × a20,i (C¯i )T W1 H¯ 2,i + d12 × a10,i × a20,i ( H¯ 1,i )T W1 H¯ 2,i ' − (d2 − d1 )2 × a20,i (C¯i )T W2 H¯ 2,i + (d2 − d1 )2 × a10,i × a20,i ( H¯ 1,i )T W2 H¯ 2,i , (2) 2 (1) 2 ¯ ' T 2 (2) ¯ i ' + Q (1) ¯ .Φ15,i = Pi A 12 + Q 12 + d1 V12 + (d2 − d1 ) V12 + R1 E 2 − d1 (C i ) W1 ' ' ' ' ' T T W1 A¯ i −(d2 − d1 )2 (C¯i )T W2 A¯ i +(d2 − d1 )2 × a10,i H¯ 1,i W2 A¯ i , A¯ i + d12 × a10,i H¯ 1,i ' ' ' ' ' ' T ¯i − d12 (C¯i )T W1 B¯i + d12 × a10,i H¯ 1,i .Φ16,i =Pi B W1 B¯i − (d2 − d1 )2 (C¯i )T W2 B¯i ' T + (d2 − d1 )2 × a10,i H¯ 1,i W2 B¯i , 2 ¯ 3,i − d1 × a30,i (C¯i ' )T W1 H¯ 3,i − (d2 − d1 )2 × a30,i (C¯i ' )T W2 H¯ 3,i .Φ19,i = a30,i Pi H 2 T T + d1 × a10,i × a30,i H¯ 1,i W1 H¯ 3,i + (d2 − d1 )2 × a10,i × a30,i H¯ 1,i W2 H¯ 3,i , (1) 2 2 T T ¯ ¯ 2,i .Φ22,i = − (1 − h)Q 11 − W2 + S + S − W2 − R2 E 1 + d1 × a20,i H W1 H¯ 2,i + d12 2 ¯T 2 T 2 T 2 2 × b2,i H2,i W1 H¯ 2,i +(d2 − d1 ) × a20,i H¯ 2,i W2 H¯ 2,i +(d2 − d1 ) × b2,i H¯ 2,i W2 H¯ 2,i , T .Φ23,i = W2 − S , .Φ24,i = −S + W2 , ' ' 2 T T ¯ 2,i .Φ25,i = d1 × a20,i H W1 A¯ i + (d2 − d1 )2 × a20,i H¯ 2,i W2 A¯ i , ' (1) 2 T T ¯ 2,i .Φ26,i = −(1 − h)Q 12 + R2 E¯2 + d1 × a20,i H W1 B¯i + (d2 − d1 )2 × a20,i H¯ 2,i ' W2 B¯i , 2 T T ¯ 2,i .Φ29,i = d1 × a20,i × a30,i H W1 H¯ 3,i + (d2 − d1 )2 × a20,i × a30,i H¯ 2,i W2 H¯ 3,i , (2) (3) (3) (2) ¯ ¯ .Φ33,i = −Q 11 + Q 11 − W1 − W2 − R3 E 1 , .Φ37,i = R3 E 2 + Q 12 − Q 12 , (3) (3) .Φ44,i = −Q 11 − W2 − R4 E¯1 , .Φ48,i = −Q 12 + R4 E¯2 , (1) (2) 2 (1) 2 ¯ ' T 2 (2) ¯' .Φ55,i = Q 22 + Q 22 + d1 V22 + (d2 − d1 ) V22 − R1 + d1 ( A i ) W1 Ai + (d2 − ' ' ' ' ' ' d1 )2 ( A¯ i )T W2 A¯ i , .Φ56,i = d12 ( A¯ i )T W1 B¯i + (d2 − d1 )2 ( A¯ i )T W2 B¯i , 2 ¯ i ' )T W1 H¯ 3,i + (d2 − d1 )2 × a30,i ( A¯ i ' )T W2 H¯ 3,i , .Φ59,i = d1 × a30,i ( A (1) 2 ¯ ' T 2 ¯ ' T ¯' ¯' .Φ66,i = −(1 − h)Q 22 − R2 + d1 ( B i ) W1 Bi + (d2 − d1 ) ( Bi ) W2 Bi , 2 ¯i ' )T W1 H¯ 3,i + (d2 − d1 )2 × a30,i ( B¯i ' )T W2 H¯ 3,i , .Φ69,i = d1 × a30,i ( B (2) (3) (3) .Φ77,i = −Q 22 + Q 22 − R3 , .Φ88,i = −Q 22 − R4 , 2 2 T 2 ¯T 2 ¯ 3,i W1 H¯ 3,i + d12 × b3,i .Φ99,i = −W3 + d1 × a30,i H H3,i W1 H¯ 3,i + (d2 − d1 )2 × a30,i T 2 ¯T W2 H¯ 3,i + (d2 − d1 )2 × b3,i H¯ 3,i H3,i W2 H¯ 3,i , ¯ 1,i = H1,i ⊗ Γ1,i , . H¯ 2,i = H2,i ⊗ Γ2,i , . H¯ 3,i = H3,i ⊗ Γ3,i , . H1,i = M G i(1) J, . H2,i .H = M G i(2) J, . H3,i = M G i(3) J, . M and . J have the same structures as Lemma 2.1. .

Proof Consider the following stochastic Lyapunov–Krasovskii functional for Markovian coupled neural networks Eq. 2.3

.

V (x(t), t, i) =

5 ∑ [Vq (x(t), t, i)],

(2.8)

q=1

where .

V1 (x(t), t, i) =x T (t)MT Pi Mx(t),

(2.9)

18

2 Stochastic Synchronization of Markovian Coupled Neural Networks …

) Q (1) Q (1) Mx(s) 11 12 . V2 (x(t), t, i) = ds (1) M f (x(s)) ∗ Q 22 t−τ (t) ) T ( (2) t Mx(s) Q 11 Q (2) Mx(s) 12 ds + (2) M f (x(s)) M f (x(s)) ∗ Q t−d1 22 ) T ( (3) t−d1 Mx(s) Q 11 Q (3) Mx(s) 12 + ds, (3) M f (x(s)) M f (x(s)) ∗ Q t−d2 22 t

Mx(s) M f (x(s))

T

(

(2.10) ( .

(

(1) (1) ) U12 U11 (1) ∗ U22 )T ( (2) (2) ) Mx(s) U11 U12 ds (2) M f (x(s)) ∗ U22

t

V3 (x(t), t, i) =

Mx(s) ds M f (x(s))

t−d1 t−d1

+

t−d2

)T (

t t−d1 t−d1 t−d2

Mx(s) ds M f (x(s)) Mx(s) ds, M f (x(s)) (2.11)

.

V4 (x(t), t, i) = d1

t

t

t−d1

θ

Mx(s) M f (x(s))

+ (d2 − d1 )× t−d1 t−d2

.

t θ

T

Mx(s) M f (x(s))

V5 (x(t), t, i) = d1

0

t

−d1

t+θ

(

(1) (1) V12 V11 (1) ∗ V22

(2) (2) V12 V11 (2) ∗ V22

)

)

Mx(s) dsdθ M f (x(s))

Mx(s) dsdθ, (2.12) M f (x(s))

T (Mx(s)) ˙ W1 Mx(s)dsdθ ˙ −d1

+ (d2 − d1 ) + d2

(

T

−d2

0

t

−d2

t+θ

t

T (Mx(s)) ˙ W2 Mx(s)dsdθ ˙

t+θ

(Mx(s))T W3 Mx(s)dsdθ.

(2.13)

Let .L be weak infinitesimal operator along system Eq. 2.3. Calculating the time derivative of .V1 (x(t), t, i), we get

2.3 Main Results

19

LV1 (x(t), t, i)

.

= 2x T (t)MT Pi M[−C¯i x(t) + A¯ i f (x(t)) + B¯i f (x(t − τ (t))) + U¯ (t) + a10,i Γ¯1,i x(t) + a20,i Γ¯2,i x(t − τ (t)) + a30,i Γ¯3,i +

N ∑

t

x(s)ds] t−τ (t)

πi j (Mx(t))T P j Mx(t).

j=1 ' ' ' According to the structure of.M, one has.MC¯i = C¯i M,.M A¯ i = A¯ i M,.M B¯i = B¯i M, (q) (q) ¯ (t) = 0,.MΓ¯q,i = (M ⊗ In )(G i ⊗ Γq,i ) = (M G i ) ⊗ Γq,i = (Hq,i M) ⊗ Γq,i .MU = (Hq,i ⊗ Γq,i )(M ⊗ In ) = H¯ q,i M, q = 1, 2, 3. ◻

Therefore, one obtains LV1 (x(t), t, i)

.

'

'

= −2x T (t)MT Pi C¯i Mx(t) + 2x T (t)MT Pi A¯ i M f (x(t)) '

+ 2x T (t)MT Pi B¯i M f (x(t − τ (t))) + 2a10,i x T (t)MT Pi H¯ 1,i Mx(t) + 2a20,i x T (t)MT Pi H¯ 2,i Mx(t − τ (t)) + 2a30,i x T (t)MT Pi H¯ 3,i M +

N ∑

πi j (Mx(t))T P j Mx(t),

t

x(s)ds t−τ (t)

(2.14)

j=1

LV2 (x(t), t, i ) T

(

(1) Q (1) 11 Q 12 ∗ Q (1) 22

)

Mx(t) M f (x(t)) ) T ( (1) Mx(t − τ (t)) Q 11 Q (1) Mx(t − τ (t)) 12 − (1 − τ˙ (t)) (1) M f (x(t − τ (t))) M f (x(t − τ (t))) ∗ Q 22 ( ) T (2) Q (2) Mx(t) Mx(t) 11 Q 12 + M f (x(t)) M f (x(t)) ∗ Q (2) 22 ( ) T (2) Mx(t − d1 ) Q (2) Mx(t − d1 ) 11 Q 12 − M f (x(t − d1 )) M f (x(t − d1 )) ∗ Q (2) 22 ) T ( (3) Q 11 Q (3) Mx(t − d1 ) Mx(t − d1 ) 12 + M f (x(t − d1 )) M f (x(t − d1 )) ∗ Q (3) 22 T ( (3) (3) ) Mx(t − d2 ) Q 11 Q 12 Mx(t − d2 ) − , (2.15) M f (x(t − d2 )) M f (x(t − d2 )) ∗ Q (3) 22 =

Mx(t) M f (x(t))

20

2 Stochastic Synchronization of Markovian Coupled Neural Networks …

.LV3 (x(t), t, i)

=2

( t

+2

Mx(s) ds M f (x(s))

t−d1 t−d1

)T

Mx(s) ds M f (x(s))

t−d2

(1)

(1)

U11 U12 (1) ∗ U22

T

(2)

Mx(t) − Mx(t − d1 ) M f (x(t)) − M f (x(t − d1 ))

(2)

Mx(t − d1 ) − Mx(t − d2 ) , M f (x(t − d1 )) − M f (x(t − d2 ))

U11 U12 (2) ∗ U22

(2.16)

LV4 (x(t), t, i)

.

(1) (1) ) V12 V11 Mx(t) (1) M f (x(t)) ∗ V22 )T ( (1) (1) ) ( t ) ( t Mx(s) V11 V12 Mx(s) ds ds − (1) ∗ V22 t−d1 M f (x(s)) t−d1 M f (x(s)) ( T (2) (2) ) Mx(t) V11 V12 Mx(t) + (d2 − d1 )2 (2) M f (x(t)) M f (x(t)) ∗ V22 )T ( (2) (2) ) ( t−d1 ) ( t−d1 Mx(s) V11 V12 Mx(s) ds ds , (2.17) − (2) M f (x(s)) M f (x(s)) ∗ V22 t−d2 t−d2

≤ d12

Mx(t) M f (x(t))

T

(

LV5 (x(t), t, i)

.

(

T ≤ d12 (Mx(t)) ˙ W1 Mx(t) ˙ −

)T

t

Mx(s)ds ˙

t

t−d1

t−d1

T ˙ W2 Mx(t) ˙ − (d2 − d1 ) + (d2 − d1 )2 (Mx(t))

t−d1

T (Mx(s)) ˙ W2 Mx(s)ds ˙

t−d2

( + d22 (Mx(t))T W3 Mx(t) −

Mx(s)ds ˙

W1

)T

t

Mx(s)ds t−τ (t)

t

W3

Mx(s)ds. t−τ (t)

Based on reciprocally convex combination approach of [102], we have

(d2 − d1 )

t−d1

.

T (Mx(s)) ˙ W2 (Mx(s))ds ˙ ≥

t−d2 t−d1 ˙ t−τ (t) M x(s)ds t−τ (t) ˙ t−d2 M x(s)ds

Therefore, we have

T

(

W2 S ∗ W2

)

t−d1 ˙ t−τ (t) M x(s)ds t−τ (t) ˙ t−d2 M x(s)ds

.

(2.18)

2.3 Main Results

21

LV5 (x(t), t, i)

.

(

T ≤ d12 (Mx(t)) ˙ W1 Mx(t) ˙ −

t

)T Mx(s)ds ˙

t

Mx(s)ds ˙

W1

t−d1

t−d1

T + (d2 − d1 )2 (Mx(t)) ˙ W2 Mx(t) ˙ − [Mx(t − d1 ) − Mx(t − τ (t))]T W2

[Mx(t − d1 ) − Mx(t − τ (t))] − 2[Mx(t − d1 ) − Mx(t − τ (t))]T S [Mx(t − τ (t)) − Mx(t − d2 )] − [Mx(t − τ (t)) − Mx(t − d2 )]T W2 [Mx(t − τ (t)) − Mx(t − d2 )] + d22 (Mx(t))T W3 Mx(t) ( t )T t − Mx(s)ds W3 Mx(s)ds. t−τ (t)

(2.19)

t−τ (t)

From Assumption 2.1, we get

.

.

Mx(t) M f (x(t))

Mx(t − τ (t)) M f (x(t − τ (t)))

T

Mx(t − d1 ) M f (x(t − d1 ))

T

.

T

.

Mx(t − d2 ) M f (x(t − d2 ))

T

(

(

(

(

−R1 E¯ 1 R1 E¯ 2 ∗ −R1

−R2 E¯ 1 R2 E¯ 2 ∗ −R2 −R3 E¯ 1 R3 E¯ 2 ∗ −R3 −R4 E¯ 1 R4 E¯ 2 ∗ −R4

)

)

)

)

Mx(t) ≥ 0, M f (x(t))

(2.20)

Mx(t − τ (t)) ≥ 0, M f (x(t − τ (t)))

(2.21)

Mx(t − d1 ) ≥ 0, M f (x(t − d1 ))

(2.22)

Mx(t − d2 ) ≥ 0. M f (x(t − d2 ))

(2.23)

Because .a1,i (t), a2,i (t) and .a3,i (t) are mutually independent variables, we obtain that T E{(Mx(t)) ˙ W1 Mx(t)} ˙

.

' ' ' ' = (Mx(t))T (−C¯i )T W1 [(−C¯i )Mx(t) + 2 A¯ i M f (x(t)) + 2 B¯i M f (x(t − τ (t)))

+ 2a10,i H¯ 1,i Mx(t) + 2a20,i H¯ 2,i Mx(t − τ (t)) + 2a30,i H¯ 3,i

t

Mx(s)ds] t−τ (t)

T + a10,i (Mx(t))T H¯ 1,i W1 [a10,i H¯ 1,i Mx(t) + 2a20,i H¯ 2,i Mx(t − τ (t))

+ 2a30,i H¯ 3,i

t t−τ (t)

2 T Mx(s)ds] + b1,i (Mx(t))T H¯ 1,i W1 H¯ 1,i Mx(t)

T + a20,i (Mx(t − τ (t)))T H¯ 2,i W1 [a20,i H¯ 2,i Mx(t − τ (t))

22

2 Stochastic Synchronization of Markovian Coupled Neural Networks … t

+ 2a30,i H¯ 3,i

t−τ (t)

2 T Mx(s)ds] + b2,i (Mx(t − τ (t)))T H¯ 2,i W1 H¯ 2,i Mx(t − τ (t)) '

'

'

+ (M f (x(t)))T ( A¯ i )T W1 [ A¯ i M f (x(t)) + 2 B¯i M f (x(t − τ (t))) + 2a10,i H¯ 1,i Mx(t) + 2a20,i H¯ 2,i Mx(t − τ (t)) + 2a30,i H¯ 3,i ' ' + (M f (x(t − τ (t))))T ( B¯i )T W1 [ B¯i M f (x(t − τ (t)))

+ 2a10,i H¯ 1,i Mx(t) + 2a20,i H¯ 2,i Mx(t − τ (t)) + 2a30,i H¯ 3,i ( +

)T

t

Mx(s)ds t−τ (t)

2 ¯ + b3,i H3,i

t t−τ (t)

T 2 W1 a30,i H¯ 3,i H¯ 3,i

t

Mx(s)ds] t−τ (t)

t

Mx(s)ds t−τ (t)

t

Mx(s)ds t−τ (t)

Mx(s)ds .

(2.24)

Taking into account that the information of transition rates is not accessible comN ∑ pletely, the following zero equations hold for .∀ . Z i = Z iT because of . πi j = 0, j=1

.

− (Mx(t))T

(∑ j∈Gki

πi j +



) πi j Z i Mx(t) = 0.

(2.25)

i j∈Guk

Combining Eqs. 2.14–2.25, we get ∑ E{LV (x(t), t, i)} ≤ E{ξ T (t, i)Φi ξ(t, i) + (Mx(t))T πi j (P j − Z i )(Mx(t))},

.

i j∈Guk T

where .ξ(t, i) = {(Mx(t)) , (Mx(t − τ (t))) , (Mx(t − d1 ))T , (Mx(t − d2 ))T , T T T T .(M f (x(t))) , (M f (x(t − τ (t)))) , (M f (x(t − d1 ))) , (M f (x(t − d2 ))) , t t t−d1 t T T T T .( t−τ (t) Mx(s)ds) , ( t−d1 Mx(s)ds) , ( t−d2 Mx(s)ds) , ( t−d1 M f (x(s))ds) , T

t−d

( t−d21 M f (x(s))ds)T }T . So, we obtain .E{LV (x(t), t, i)} ≤ −γE{||Mx(t)||2 }, where .γ = min{λmin (−Φi ), i∈G ∑ λmin (− πi j (P j − Z i ))} > 0.

.

i j∈Guk

Applying It.oˆ ’s formula, we have t

E{V (x(t), t, i)} − E{V (x(0), 0, i 0 )} = E

.

LV (x(s), s, rs )ds .

0

According to [179], we know that there exists positive constant .β such that

(2.26)

2.3 Main Results

23

βE{||Mx(t)||2 } ≤ E{V (x(t), t, i)}

.

t

= E{V (x(0), 0, i 0 )} + E

LV (x(s), s, rs )ds

0 t

≤ E{V (x(0), 0, i 0 )} − γ

E{||Mx(t)||2 }ds.

(2.27)

0

Thus, Markovian coupled neural networks Eq. 2.3 is asymptotically synchronized in mean square. The proof is completed. Remark 2.5 In [83, 179], the stochastic synchronization problem for Markovian coupled neural networks was studied, while the transition rates were completely known. The synchronization criteria don’t apply to the situation that the transition rates are not exactly known. In [92], the synchronization problem for Markovian coupled neural networks with partially unknown transition rates was investigated, while coupling strengths, coupling configuration matrices, and inner coupling matrices were constants. The synchronization criteria don’t apply to the situation that the coupling strengths and the configuration matrices are affected by external factors. In addition, when the coupling configuration matrices are asymmetric, the method in [92] is not applicable. In this chapter, the problem of stochastic synchronization for Markovian coupled neural networks with random coupling strengths and hybrid coupling is studied. The transition rates for Markovian coupled neural networks are partially known. In addition, we have introduced the augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique in the proof, which is different from most methods that simply used general Lyapunov–Krasovskii functional and Jensen’s inequality. Remark 2.6 In this chapter, the synchronization criteria for Eq. 2.3 depend on the mathematical expectations and variances of the random coupling strengths but are not related to their intervals. It is different from the traditional robust synchronization criteria which are related to the intervals of uncertain parameters. Hence, the synchronization criteria are more conducive in application. In Theorem 2.1, the known transition rates are contained in Eq. 2.4, and the unknown transition rates are transformed into Eqs. 2.5 and 2.6 through 2.25. When the positions of unknown transition rates have changed, the corresponding form of stochastic synchronization condition will also change. Hence, the transition rate matrix plays a key role in stochastic synchronization problem. (1) (2) (2) When .a1,rt (t) = a2,rt (t) = 1, .a3,rt (t) = 0, .G (1) kl (r t ) = G kl , .G kl (r t ) = G kl , .Γ1 (r t ) = Γ1 , .Γ2 (r t ) = Γ2 , the system Eq. 2.1 turns out to be the following coupled neural networks: x˙ (t) = −C(rt )xk (t) + A(rt ) f (xk (t)) + B(rt ) f (xk (t − τ (t))) + U (t)

. k

+

N ∑ l=1

G (1) kl Γ1 xl (t) +

N ∑ l=1

G (2) kl Γ2 xl (t − τ (t)), k = 1, 2, . . . , N ,

(2.28)

24

2 Stochastic Synchronization of Markovian Coupled Neural Networks …

where .τ (t) satisfies .0 ≤ τ (t) ≤ d, and .τ˙ (t) ≤ h. Corollary 2.1 Suppose that Assumptions 2.1 is satisfied. Markovian coupled neural networks Eq. 2.28 with a partially unknown transition rate matrix is glob. Pi > 0, ally asymptotically in mean ( (1) (synchronized ( square) if there ( exist matrices ) (1) ) (2) (2) ) Q 11 Q 12 Q 11 Q 12 U11 U12 V V > 0,. > 0,. > 0,. 11 12 > 0,.W1 > 0, . (2) ∗ U ∗ V22 ∗ Q (1) ∗ Q 22 22 22 T . W2 > 0, and positive diagonal matrices . R1 , . R2 , . R3 , and matrices . Z i = Z i , . S, such that, for any .i ∈ G, the following matrix inequalities are satisfied ⎛

Θ11,i Θ12,i S Θ14,i ⎜ ∗ Θ22,i Θ23,i Θ24,i ⎜ ⎜ ∗ ∗ Θ33,i 0 ⎜ ⎜ ∗ ∗ ∗ Θ44,i ⎜ ∗ ∗ ∗ ∗ .Θi = ⎜ ⎜ ⎜ ∗ ∗ ∗ ∗ ⎜ ⎜ ∗ ∗ ∗ ∗ ⎜ ⎝ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

T Θ15,i 0 0 U11 Θ25,i 0 0 0 T 0 Θ36,i 0 −U11 T Θ45,i 0 0 U12 Θ55,i 0 0 0 T ∗ Θ66,i 0 −U12 ∗ ∗ −W2 0 ∗ ∗ ∗ −V11 ∗ ∗ ∗ ∗

⎞ U12 0 ⎟ ⎟ −U12 ⎟ ⎟ T ⎟ U22 ⎟ 0 ⎟ ⎟, T ⎟ −U22 ⎟ 0 ⎟ ⎟ −V12 ⎠ −V22

(2.29)

.

i P j − Z i ≤ 0, ∀ j ∈ Guk , j /= i,

(2.30)

.

i P j − Z i ≥ 0, ∀ j ∈ Guk , j = i,

(2.31)

( .

W1 S ∗ W1

) ≥ 0,

' ' where .Θ11,i = − Pi C¯i −(Pi C¯i )T + Pi H¯ 1 + (Pi H¯ 1 )T +

(2.32) ∑

πi j (P j − Z i ) + Q (1) 11 + j∈Gk ' ' d 2 (C¯i )T W1 H¯ 1 − d 2 ((C¯i )T

2 2 2 ¯ ' T ¯ ¯' Q (2) 11 + d V11 − W1 +d W2 − R1 E 1 +d (C i ) W1 C i − T T 2 W1 H¯ 1 ) + d H¯ 1 W1 H¯ 1 , ¯ 2 + W1 − S − d 2 (C¯i ' )T W1 H¯ 2 + d 2 ( H¯ 1 )T W1 H¯ 2 , .Θ12,i = Pi H (2) 2 2 ¯ ' T 2 ¯T ¯ i ' + Q (1) ¯ ¯' ¯' .Θ14,i = Pi A 12 + Q 12 + d V12 + R1 E 2 − d (C i ) W1 Ai + d H1 W1 Ai , ' ' ' ' T ¯i − d 2 (C¯i )T W1 B¯i + d 2 H¯ 1 W1 B¯i , .Θ15,i = Pi B (1) T 2 ¯T ¯ .Θ22,i = −(1 − h)Q 11 − W1 + S + S − W1 − R2 E¯1 + d H 2 W1 H2 , ' (1) 2 ¯T ¯ i ,.Θ25,i = −(1 − h)Q 12 + R2 E¯2 + d 2 H¯ 2T W1 B¯i ' , .Θ23,i =W1 − S,.Θ24,i =d H2 W1 A (2) (2) .Θ33,i = −Q 11 − W1 − R3 E¯1 , .Θ36,i = R3 E¯2 − Q 12 , ' (1) (2) 2 2 ¯ T 2 ¯ ' T ¯' ¯' .Θ44,i = Q 22 + Q 22 + d V22 − R1 + d ( A i ) W1 Ai , .Θ45,i = d ( Ai ) W1 Bi , (1) (2) 2 ¯ ' T ¯' .Θ55,i = −(1 − h)Q 22 − R2 + d ( B i ) W1 Bi , .Θ66,i = −Q 22 − R3 ,

2.3 Main Results .

25

H¯ 1 = H1 ⊗ Γ1 , . H¯ 2 = H2 ⊗ Γ2 , . H1 = M G (1) J, . H2 = M G (2) J, . M and . J have the same structures as Lemma 2.1.

Proof Consider the following stochastic Lyapunov–Krasovskii functional V (x(t), t, i) =

.

5 ∑ [Vq (x(t), t, i)],

(2.33)

q=1

where .

.

V1 (x(t), t, i) =x T (t)MT Pi Mx(t),

t

V2 (x(t), t, i) =

t−τ (t) t

+

T

Mx(s) M f (x(s))

t−d

(1) ) Q (1) Mx(s) 11 Q 12 ds M f (x(s)) ∗ Q (1) 22 ( (2) (2) ) Q 11 Q 12 Mx(s) ds, M f (x(s)) ∗ Q (2) 22 T

Mx(s) M f (x(s))

(2.34)

(

(2.35) ( .

V3 (x(t), t, i) =

t t−d

Mx(s) ds M f (x(s))

)T (

U11 U12 ∗ U22

)

t t−d

Mx(s) ds, M f (x(s)) (2.36)

.

V4 (x(t), t, i) = d

t t−d

.

t θ

Mx(s) M f (x(s))

V5 (x(t), t, i) = d +d

0

t

−d 0

t+θ t

−d

t+θ

T

(

V11 V12 ∗ V22

)

Mx(s) dsdθ, M f (x(s)) (2.37)

T (Mx(s)) ˙ W1 Mx(s)dsdθ ˙

(Mx(s))T W2 Mx(s)dsdθ.

The remaining proof is similar to that proof of Theorem 2.1 and is omitted.

(2.38) ◻

26

2 Stochastic Synchronization of Markovian Coupled Neural Networks …

2.4 Simulation In order to demonstrate the validity of the proposed method, numerical examples are presented to demonstrate its effectiveness. Let’s consider the following neural networks with interval time-varying delay .

x(t) ˙ = −Ci x(t) + Ai f (x(t)) + Bi f (x(t − τ (t))) + U (t), i = 1, 2, 3,

(2.39)

x (t) tanh(x1 (t)) where .x(t) = 1 , . f (x(t)) = , .τ (t) = 1.2 + 0.6sin(t), x2 (t) tanh(x2 (t)) ( ) ( ) ( ) ( ) 1.0 0 1.2 0 1.1 0 3.1 −0.1 .C 1 = , .C2 = , .C3 = , . A1 = , 0 1.0 0 0.8 0 1.0 2.1 3.0 ( ) ( ) ( ) ( ) 2.8 −0.5 2.9 −0.4 −2.7 −1.1 −2.6 −1.2 . A2 = , . A3 = , . B1 = , . B2 = , 2.0 2.9 2.2 3.1 −0.4 −2.3 −0.3 −2.4 ( ) −2.8 −1.0 0.1 . B3 = , .U (t) = . The trajectories of system Eq. 2.39 with −0.38 −2.2 0.25 0.4 initial values .x(t) = are given in Figs. 2.1, 2.2 and 2.3. 0.6 We consider Markovian coupled neural networks, x˙ (t) = −Ci xk (t) + Ai f (xk (t)) + Bi f (xk (t − τ (t))) + U (t)

. k

+ a1,i (t)

3 ∑

G (1) kl,i Γ1,i xl (t) + a2,i (t)

l=1

+ a3,i (t)

3 ∑ l=1

Fig. 2.1 Trajectory of the mode .i = 1 for neural networks Eq. 2.39

3 ∑

G (2) kl,i Γ2,i xl (t − τ (t))

l=1

G (3) kl,i Γ3,i

t t−τ (t)

xl (s)ds, k = 1, 2, 3,

(2.40)

2.4 Simulation

27

Fig. 2.2 Trajectory of the mode .i = 2 for neural networks Eq. 2.39

Fig. 2.3 Trajectory of the mode .i = 3 for neural networks Eq. 2.39



⎞ ⎛ ⎞ ⎛ ⎞ −3 1 2 −4 3 1 −6 2 4 ⎝ 0 −3 3 ⎠, .G (1) ⎝ 2 −5 3 ⎠, .G (1) ⎝ 3 −4 1 ⎠, where .G (1) 1 = 2 = 3 = 3 2 −5 1 2 −3 5 4 −9 ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ −1 0 1 −2 1 1 −4 2 2 (2) (2) (2) . G 1 = ⎝ 1 −3 2 ⎠, . G 2 = ⎝ 1 −1 0 ⎠, . G 3 = ⎝ 2 −3 1 ⎠, 2 2 −4 2 1 −3 1 1 −2 ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ −5 2 3 −2 1 1 −3 1 2 (3) (3) (3) . G 1 = ⎝ 1 −2 1 ⎠, . G 2 = ⎝ 1 −2 1 ⎠, . G 3 = ⎝ 2 −3 1 ⎠, 1 1 −2 1 1 −2 1 1 −2 ( ) ( ) ( ) ( ) ( ) 30 30 30 10 10 .Γ1,1 = , .Γ1,2 = , .Γ1,3 = , .Γ2,1 = , .Γ2,2 = , 03 03 03 01 01 ( ( ( ( ) ) ) ) 10 0.2 0 0.2 0 0.2 0 .Γ2,3 = , .Γ3,1 = , .Γ3,2 = , .Γ3,3 = . 01 0 0.2 0 0.2 0 0.2 The random variables satisfy the following condition .a1,1 (t) = a1,2 (t) = a1,3 (t) ∈ 2 2 2 = b1,2 = b1,3 = 0.09. .a2,1 (t) = a2,2 (t) = (2, 4), .a10,1 = a10,2 = a10,3 = 3, .b1,1

28

2 Stochastic Synchronization of Markovian Coupled Neural Networks …

Fig. 2.4 Trajectory for random variables .a1,i (t) i = 1, 2, 3

Fig. 2.5 Trajectory for random variables .a2,i (t) i = 1, 2, 3

2 2 2 a2,3 (t) ∈ (1.4, 2.8), .a20,1 = a20,2 = a20,3 = 2, .b2,1 = b2,2 = b2,3 = 0.04. .a3,1 (t) = 2 2 2 = b3,2 = b3,3 = 0.0225. a3,2 (t) = a3,3 (t) ∈ (1, 2), .a30,1 = a30,2 = a30,3 = 1.5, .b3,1 The trajectories of .a1,i (t), a2,i (t), a3,i (t) are described by Figs. 2.4, 2.5 and 2.6. ( ) 00 The activation function satisfies Assumption 2.1, and . E 1 = , . E2 = 00 ( ) 0.5 0 . We obtain .d1 = 0.6, .d2 = 1.8, .h = 0.6. The partially unknown tran0 0.5 ⎛ ⎞ −2.5 ? ? sition rate matrix is given as .Π = ⎝ ? −1.5 ? ⎠ . ? 2 ?

According to Theorem 2.1, the system Eq. 2.40 is globally asymptotically synchronized in mean square. The complete / synchronization error is illustrated by Fig. 3 2 ∑ ∑ 2.7, which is calculated by .e(t) = (x1i − x ji )2 . i=1

j=2

In addition, the admissible delay upper bounds for .d2 are shown in Table 2.1.

2.5 Conclusion

29

Fig. 2.6 Trajectory for random variables .a3,i (t) i = 1, 2, 3

Fig. 2.7 Synchronization error of the Markovian coupled neural networks Eq. 2.40

Table 2.1 Admissible upper bounds .d2 for different .d1 and .h Methods .d1 = 0.6, .h = 0.6 Theorem 2.1

10.3689

.d1

= 0.8, .h = 0.8

7.0285

2.5 Conclusion In this chapter, delay-dependant synchronization criteria are obtained for Markovian coupled interval time-varying neural networks with partially unknown transition rates based the augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique. The coupling strengths are some random variables with known mathematical expectations and variances. The coupling configuration matrices are not restricted to be symmetric, and they are changed by Markovian chain. The derived stochastic synchronization criteria depend on mathematical expectations and the variances of random coupling strengths. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

Chapter 3

Local Synchronization of Markovian Neutral-Type Complex Networks with Partially Unknown Transition Rates

In this chapter, we investigate local stochastic synchronization of Markovian neutraltype complex networks with partially information on transition rates. The coupling configuration matrices are not restricted to be symmetric. By designing a new augmented Lyapunov–Krasovskii functional and adopting reciprocally convex combination technique, the new delay-dependent synchronization criteria in terms of LMIs are derived. The obtained criteria depend on upper and lower bounds of delays. Numerical examples are provided to verify the effectiveness of the presented results.

3.1 Introduction Nowadays, some systems contain information about the derivative of the past state [98, 176]. In this condition, the dynamical system is also called neutral-type system and is dependent on the delays both in state and state derivative. The neutral-type delayed complex networks with time delays have been studied in many cases [80, 83, 197, 213, 214]. Local synchronization, a particular synchronization phenomenon, is defined by the inner synchronization within a group where some nodes in the network can be synchronized while synchronization in the whole network cannot be achieved [185]. This synchronization phenomenon has been highly concerned due to its engineering applications, such as brain science, reputation evaluation and secure communication [123, 185]. In [135], local and global output synchronization problems of delayed complex dynamical networks with output coupling were investigated. In [185], the authors proposed a novel complex network model to evaluate the reputation of virtual organizations and further investigated local synchronization problem of this model. After that, cluster, local and complete synchronization problems of nonlinear coupled neural networks with time delays were studied in [123], and local exponential

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2_3

31

32

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

synchronization problem of delayed complex dynamical networks was investigated in [143]. In addition, another particular synchronization phenomenon, cluster synchronization, is observed when the nodes in networks are divided into several groups. Cluster synchronization is that synchronization occurs in each group, and there is no synchronization among the different groups [8]. Different from cluster synchronization, the local synchronization only requires that all the nodes in the same group can be synchronized without concerning with other nodes in other groups. In [100], less conservative robust stability conditions of Lur’e systems were obtained based on the novel augmented Lyapunov–Krasovskii functional and Wirtinger-based inequality. In [61], stability problem for static neural networks was studied by adopting a new augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique, and improved stability criteria were obtained. However, in most existing results of local synchronization analysis of complex networks, when dealing with the delayed integral terms in Lyapunov–Krasovskii functional, only Jensen’s inequality was used, and much useful information was ignored. Therefore, there are still room for investigation. Markovian jumping systems have been widely studied because information latching phenomenon often occurs in network systems, such as networked control system, manufacturing system and coupled neural networks system. In order to reflect more realistic dynamical behaviors, synchronization problems of stochastic complex networks with Markovian jumping have attracted much attention [37, 83, 92, 181]. In [83], the synchronization problem of neutral-type complex networks with Markovian jumping parameters was studied. While mostly, transition rates of Markovian systems are not exactly known. Whether in theory or in practice, it is essential to further investigate more general Markovian systems with partially information on transition rates [92, 212]. However, the complete synchronization requires all the nodes in complex networks to achieve a unified behavior. In some case, this synchronization problem can not be realized. Among the synchronization criteria of stochastic complex networks, no related results have been obtained for local stochastic synchronization of Markovian neutral-type delayed complex networks with partially information on transition rates. Therefore, the main purpose of this chapter is to fill the gap by making the first attempt to obtain local stochastic synchronization criteria of Markovian neutral-type complex networks with time delays and partially information on transition rates. Motivated by the above discussions, local stochastic synchronization problem of Markovian neutral-type complex networks with partially information on transition rates is investigated. The synchronization criteria are proposed to ensure neutral-type complex networks locally asymptotically synchronized in mean square by applying an augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique. Numerical examples are provided to show the effectiveness of the proposed theoretic results. The rest of this chapter is organized as follows. In Sect. 3.2, some preliminaries and Markovian neutral-type complex dynamical networks are introduced. In Sect. 3.3, the new local stochastic synchronization criteria are obtained. In Sect. 3.4,

3.2 Problem Statement and Preliminaries

33

numerical simulations are given to demonstrate the effectiveness of the proposed method. Finally, conclusion is drawn in Sect. 3.5. Notation Throughout this chapter, . R n and . R n×n denote .n-dimensional Euclidean space and set of all .n × n real matrices, respectively. .|| · || stands for Euclidean vector norm. The symbol .⊗ represents Kronecker product. . X T denotes the transpose of matrix . X . . X ≥ 0 (. X < 0), where . X ∈ R n×n , means that . X is real positive symmetric semidefinite matrix (negative definite matrix). . In represents the .n-dimensional n×n identity For a matrix ( matrix. ) ( . A ∈) R , .λmin (A) denotes the minimum eigenvalue of X Y X Y . A. . stands for . . .∅ stands for empty set. .E{·} stands for the math∗ Z YT Z ematical expectation. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.

3.2 Problem Statement and Preliminaries Let .{rt , t ≥ 0} be a right continuous Markovian process on the probability space and taking values in a finite state space .G = {1, 2, . . . , N } with generator .Π = (πi j ), i, j ∈ G given by { .

Pr {rt+Δt = j | rt = i} =

πi j Δt + o(Δt), i /= j, 1 + πii Δt + o(Δt), i = j,

where .Δt > 0, lim (o(Δt)/Δt) = 0. .πi j ≥ 0 (i /= j) is the transition rate from Δt→0

mode .i at time .t to mode . j at time .t + Δt, and .πii = −

N ∑ j=1, j/=i

πi j . In this chapter,

transition rates of the jumping process are considered to be partially accessible. For instance, transition rate matrix .Π with .N operation modes may be expressed as ⎛

⎞ π11 ? ? · · · π1N ⎜ ? π22 ? · · · ? ⎟ ⎜ ⎟ .⎜ . .. .. . . .. ⎟ , . ⎝ . . . . . ⎠ ? π N 2 ? · · · πN N where .? represents unknown transition rate. For notational clarity, .∀ i ∈ G, we ∪ the i i , where .Gki = { j : πi j is known for . j ∈ G} and .Guk = { j : πi j denote .G i = Gki Guk is unknown for . j ∈ G}. The model of Markovian neutral-type complex networks with time-varying delays can be formulated as

34

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

x˙ (t) − D(rt )x˙k (t − σ(t)) = −C(rt )xk (t) + A(rt ) f (xk (t)) + B(rt ) f (xk (t − τ (t)))

. k

+ Uk (t) +

N ∑

G (1) kl (r t )Γ1 (r t )xl (t)

l=1

+

N ∑

G (2) kl (r t )Γ2 (r t )xl (t − τ (t))

l=1

+

N ∑

G (3) kl (r t )Γ3 (r t )

l=1

{

t t−τ (t)

xl (s)ds,

k = 1, 2, . . . , N .

(3.1)

where .xk (t) = [xk1 (t), xk2 (t), . . . , xkn (t)]T ∈ R n is the state vector of the .kth node. .C(rt ) = diag {c1 (rt ), c2 (rt ), . . . , cn (rt )} is a positive diagonal matrix. . A(rt ) = (ai j (rt ))n×n , . B(rt ) = (bi j (rt ))n×n , and . D(rt ) = (di j (rt ))n×n are the connection weight matrices. .Uk (t) = [Uk1 (t), Uk2 (t), . . . , Ukn (t)]T ∈ R n is external input. T n . f (x k (t)) = [ f 1 (x k1 (t)), f 2 (x k2 (t)), . . . , f n (x kn (t))] ∈ R denotes the output of node .k. Time-varying delays .τ (t) and .σ(t) satisfy .0 ≤ d1 ≤ τ (t) ≤ d2 , .τ˙ (t) ≤ h, .0 ≤ σ(t) < σ1 , .σ(t) ˙ ≤ σ < 1, where .d1 , d2 , .(d1 < d2 ), .h, .σ and .σ1 are known constants. .Γ1 (rt ), .Γ2 (rt ) and .Γ3 (rt ) ∈ R n×n are inner coupling matrices representing the (2) (2) inner coupling between the nodes. .G (1) (rt ) = G (1) kl (r t ) N ×N , .G (r t ) = G kl (r t ) N ×N and .G (3) (rt ) = G (3) kl (r t ) N ×N are coupling configuration matrices representing the topological structure of the network, which may not be identical and satisfy the folN ∑ (q) (q) (q) lowing conditions: .G kl (rt ) ≥ 0 k /= l, .G kk (rt ) = − G kl (rt ), q = 1, 2, 3. l=1,l/=k

The initial condition associated with (3.1) is given as follows: .xk (s) = φk0 (s) ∈ C([−τ , 0], R n ) (k = 1, 2, . . . , N ), .τ = max{d2 , σ1 } and .C([−τ , 0], R n ) is the set of continuous function from .[−τ , 0] to . R n . Remark 3.1 In this chapter, local stochastic synchronization for Markovian neutraltype complex networks (3.1) with time-varying delays and partially information on transition rates is studied. In addition, the coupling configuration matrices are not assumed to be symmetric, which can be less restrictive than most existing results [12, 26, 37, 71, 83, 92, 119, 135, 140, 146, 189, 196, 197, 214]. i Remark 3.2 When .Gki = G i , .Guk = ∅, transition rates of Markovian neutral-type i = G i , .Gki = ∅, the transition complex networks are completely known. When .Guk rates of Markovian neutral-type complex networks are completely unknown. Here, we combine two cases and consider a general form.

Throughout this chapter, the following assumptions are needed. Assumption 3.1 ([84]) For any .x1 , x2 ∈ R, there exist constants .er− , er+ , such that e− ≤

. r

fr (x1 ) − fr (x2 ) ≤ er+ , r = 1, 2, . . . , n. x1 − x2

3.2 Problem Statement and Preliminaries

35

We denote .

E 1 = diag(e1+ e1− , . . . , en+ en− ), E 2 = diag

(

e1+ + e1− e+ + en− ,..., n 2 2

) .

Assumption 3.2 Each possible value of .rt is denoted by .i, .i ∈ G. The coupling (q) configuration matrices .G i (q = 1, 2, 3) satisfy the conditions: 9 (q) .G i

(q)

(q)

=

(q)

(q)

Z 11,i Z 12,i (q) (q) Z 21,i Z 22,i (q)

) ,

(3.2) (q)

where . Z 11,i ∈ R m×m , . Z 22,i ∈ R (N −m)×(N −m) , . Z 12,i ∈ R m×(N −m) , . Z 21,i ∈ R (N −m)×m , (q) (q) and assume that all rows in . Z 12,i and . Z 21,i are the same, respec(q) (q) (q) (q) T (q) (q) (q) (q) (r ) tively. . Z 12,i = [vi , vi , . . . , vi ] , . Z 21,i = [u i , u i , . . . , u i ]T , where .vi = (q) (q) (q) (q) (q) (q) (q) [v1,i , v2,i , . . . , v(N −m),i ]T and .u i = [u 1,i , u 2,i , . . . , u m,i ]T are vectors. .U1 (t) = U2 (t) = · · · = Um (t). Remark 3.3 In Assumption 3.1, .er− and .er+ are allowed to be positive, negative or zero, which makes the activation functions more general than nonnegative sigmoidal functions. Some basic definitions are needed, which can be found in [91, 123, 161, 185]. Definition 3.1 The set .S = {x = (x1 (s), x2 (s), . . . , x N (s)) : xk (s) ∈ C([−τ , 0], R n ), xk (s) = xl (s), k, l = 1, 2, . . . , N } is called the global synchronization manifold. Definition 3.2 The set .S' = {x = (x1 (s), x2 (s), . . . , x N (s)) : xk (s) ∈ C([−τ , 0], R n ), xk (s) = xl (s), k, l = 1, 2, . . . , m} is called the local synchronization manifold. Definition 3.3 The local synchronization manifold .S' is said to be locally asymptotically synchronized if . lim ||xk (t) − xl (t)||2 = 0, .k, l = 1, 2, . . . , m holds for any t→∞ initial value. Definition 3.4 Dynamical system (3.1) is said to be locally asymptotically synchronized in mean square if . lim E{||xk (t) − xl (t)||2 } = 0, .k, l = 1, 2, . . . , m holds for t→∞ any initial value. ˆ K ) ={the set of matriDefinition 3.5 ( [162]). Let . Rˆ denote a ring and define .T ( R, ˆ ces with entries . R such that the sum of the entries in each row is equal to . K for some ˆ }. .K ∈ R ˆ K ). Then Lemma 3.1 ([162]) Let .G be an . N × N matrix in the set .T ( R, (N − 1) × (N − 1) matrix . H satisfies . M G = H M, where . H = M G J ,

.

36

3 Local Synchronization of Markovian Neutral-Type Complex Networks …



−1 1 .. .

1 ⎜0 ⎜ .M = ⎜ . ⎝ ..

0 0 ⎛

1 ⎜0 ⎜ ⎜. .J = ⎜ . ⎜. ⎝0 0

0 ··· 0 ··· .. . . . . 0 ··· 1

0 −1 .. .

0 0 .. .

⎞ ⎟ ⎟ ⎟ ⎠

−1

,

(N −1)×N

⎞ 1 ··· 1 1 ··· 1⎟ ⎟ .. . . .. ⎟ , . . .⎟ ⎟ ⎠ 0 0 ··· 0 1 0 0 · · · 0 0 N ×(N −1) 1 1 .. .

1 1 .. .

ˆ Moreover, the matrix . H can be rewritten in which .1 is the multiplicative identity of . R. j ∑ (G (ik) − G (i+1,k) ) for .i, j ∈ {1, 2, . . . , N − 1}. explicitly as follows: . Hi j = k=1

Lemma 3.2 ([185]) Under Assumption 3.2, the .(m − 1) × (m − 1) matrix . H˜ q (q) ˜ . M˜ is the .(m − 1) × N matrix defined by . H˜ q = M Z 11 J satisfies . M˜ G (q) = H˜ q M. ⎛ ⎞ 1 −1 0 0 · · · 0 0 · · · 0 ⎜ 0 1 −1 0 · · · 0 0 · · · 0 ⎟ ( ) ⎟ ˜ =⎜ .M ⎜ .. .. .. .. . . .. ⎟= M O , ⎝. . . . . . 0 ··· 0⎠ 0 0

0 · · · 1 −1 0 · · · 0

where . M ∈ R (m−1)×m , O ∈ R (m−1)×(N −m) , and each entry of . O is zero. . J˜ is the . N × (m − 1) matrix ⎞ ⎛ 1 1 1 1 ··· 1 ⎜0 1 1 1 ··· 1⎟ ⎟ ⎜ ⎜ .. .. .. .. . . .. ⎟ ⎜. . . . . .⎟ ⎟ ( ⎜ ) ⎜0 0 0 ··· 0 1⎟ J ⎟= , . J˜ = ⎜ ⎜0 0 0 ··· 0 0⎟ OT ⎟ ⎜ ⎜0 0 0 ··· 0 0⎟ ⎟ ⎜ ⎜. . . . . .⎟ ⎝ .. .. .. .. .. .. ⎠ 0 0 0 ··· 0 0

where . J ∈ R m×(m−1) . Moreover, . M and . J are similar as they are defined in Lemma 3.1. Lemma 3.3 ([102]) Let . f 1 , f 2 , . . . , f N : R m |→ R have positive values in an open subset . D of . R m . Then, the reciprocally convex combination of . f i over . D satisfies .

min ∑

{αi |αi >0,

i

∑ ∑ 1 ∑ f i (t) = f i (t) + max gi, j (t) gi, j (t) αi =1} αi i i/= j i

3.3 Main Results

37

subject to { ( ) } f i (t) gi, j (t) m . gi j : R |→ R, g j,i (t) Δ gi, j (t), ≥0 . gi, j (t) f j (t) Lemma 3.4 ([25]) Kronecker product has the following properties: (1) (α A) ⊗ B = A ⊗ (αB), .(2) (A + B) ⊗ C = A ⊗ C + B ⊗ C, .(3) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (B D). .

3.3 Main Results In this section, local stochastic synchronization criteria for neutral-type complex networks (3.1) with time delays and partially information on transition rates are obtained based on an augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique. To be more convenient, each possible value of .rt is denoted by .i, .i ∈ G. Then, system (3.1) can be formulated as the following form x˙ (t) − Di x˙k (t − σ(t)) = −Ci xk (t) + Ai f (xk (t)) + Bi f (xk (t − τ (t))) + Uk (t)

. k

+

N ∑

G (1) kl,i Γ1,i xl (t) +

l=1

+

N ∑

G (3) kl,i Γ3,i

l=1

{

N ∑

G (2) kl,i Γ2,i xl (t − τ (t))

l=1 t

t−τ (t)

xl (s)ds,

k = 1, 2, . . . , N .

(3.3)

Let .M = M˜ ⊗ In , and . M˜ is defined in Lemma 2.2. . D¯ i = I N ⊗ Di , ' ¯ ¯i = I N ⊗ Ci , .C¯i ' = Im−1 ⊗ Ci , . A¯ i = I N ⊗ Ai , . A¯ i ' = . Di = Im−1 ⊗ Di , .C ' Im−1 ⊗ Ai , . B¯i = I N ⊗ Bi , . B¯i = Im−1 ⊗ Bi , .Γ¯1,i = G i(1) ⊗ Γ1,i , .Γ¯2,i = G i(2) ⊗ Γ2,i , .Γ¯3,i = G i(3) ⊗ Γ3,i , .x(t) = [x1T (t), x2T (t), . . . , x NT (t)]T , . f (x(t)) = [ f T (x1 (t)), f T (x2 (t)), . . . , f T (x N (t))]T , .U¯ (t) = [U1T (t), U2T (t), . . . , U NT (t)]T , . E¯1 = Im−1 ⊗ E 1 , . E¯2 = Im−1 ⊗ E 2 , then system (3.3) can be rewritten as follows: .

x(t) ˙ − D¯ i x(t ˙ − σ(t)) = −C¯i x(t) + A¯ i f (x(t)) + B¯i f (x(t − τ (t))) + U¯ (t) { t ¯ ¯ ¯ x(s)ds. (3.4) + Γ1,i x(t) + Γ2,i x(t − τ (t)) + Γ3,i t−τ (t)

Theorem 3.1 Under Assumptions 3.1 and 3.2, system (3.4) is locally asymptotically synchronized in mean square if there exist( positive definite matrices . Pi ∈ q q ) Q Q 11 12 ∈ R 2(m−1)n×2(m−1)n , R (m−1)n×(m−1)n (i = 1, 2, 3, . . . , N ), . Q q = q ∗ Q 22

38

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

p p ) Y11 Y12 ∈ R 2(m−1)n×2(m−1)n , p ∗ Y22 (m−1)n×(m−1)n .S ∈ R , .Wq ∈ R (m−1)n×(m−1)n , .(q = 1, 2, 3; p = 1, 2), and positive diagonal matrices . Rr ∈ R (m−1)n×(m−1)n (r = 1, 2, 3, 4), and matrices (m−1)n×(m−1)n .Zi ∈ R , Z i = Z iT , . J ∈ R (m−1)n×(m−1)n , .T ∈ R (m−1)n×(m−1)n thus that, for any .i ∈ G, the succeeding matrix inequalities are satisfied

(

.

Fp =

p

p

F11 F12 p ∗ F22

)

(

∈ R 2(m−1)n×2(m−1)n ,

.

Yp =

) ( i i Ω(2) < 0, Ω i = Ω(1)

(3.5)

.

i P j − Z i ≤ 0, ∀ j ∈ Guk , j /= i,

(3.6)

.

i P j − Z i ≥ 0, ∀ j ∈ Guk , j = i,

(3.7)

.

( .

W2 J ∗ W2

) > 0,

(3.8)

where ⎛

i Ω(1)

.

i Ω11 ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ =⎜ ⎜ ∗ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎝ ∗ ∗

i Ω12 i Ω22 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

W1 i Ω23 i Ω33 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 i Ω24 J i Ω44 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ω15 0 0 0 i Ω55 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ω16 i Ω26 0 0 0 i Ω66 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 i Ω37 0 0 0 i Ω77 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 i Ω48 0 0 0 i Ω88 ∗ ∗ ∗ ∗ ∗ ∗ ∗

⎞ i i Ω19 Ω1,10 i Ω29 0 ⎟ ⎟ 0 0 ⎟ ⎟ 0 0 ⎟ ⎟ i Ω59 0 ⎟ ⎟ i Ω69 0 ⎟ ⎟ 0 0 ⎟ ⎟ 0 0 ⎟ ⎟, i i ⎟ Ω99 Ω9,10 ⎟ i ⎟ ∗ Ω10,10 ⎟ ∗ ∗ ⎟ ⎟ ∗ ∗ ⎟ ⎟ ∗ ∗ ⎟ ⎟ ∗ ∗ ⎠ ∗ ∗

3.3 Main Results

39



i Ω(2)

.

'

i Ω1,11 ⎜ 0 ⎜ ⎜ 0 ⎜ ⎜ 0 ⎜ ⎜ 0 ⎜ ⎜ 0 ⎜ ⎜ 0 ⎜ =⎜ ⎜ 0i ⎜Ω ⎜ 9,11 ⎜ 0 ⎜ i ⎜Ω ⎜ 11,11 ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎝ ∗ ∗

1 T (F11 ) 0 1 T ) −(F11 0 1 T (F12 ) 0 1 T ) −(F12 0 0 0 0 1 −Y11 ∗ ∗ ∗

1 F12 0 0 0 2 T 1 2 (F11 ) −F12 F12 2 T 2 −(F11 ) 0 −F12 1 T 0 (F22 ) 0 0 0 0 2 T 1 T 2 T (F12 ) −(F22 ) (F22 ) 2 T 2 T −(F12 ) 0 −(F22 ) 0 0 0 0 0 0 0 0 0 1 0 −Y12 0 2 2 0 −Y12 −Y11 1 ∗ −Y22 0 2 ∗ ∗ −Y22

0 0

'

i Ω11 = −Pi C¯i − (Pi C¯i )T + Pi H¯ 1,i + (Pi H¯ 1,i )T +

.

∑ j∈Gk

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟, ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

πi j (P j − Z i ) + Q 111 +

1 2 2 + d12 Y11 − W1 + d22 W3 − R1 E¯1 , Q 211 + d12 Y11 i i 1 2 2 ¯ 2,i , .Ω = Pi A¯ i ' + Q 112 + Q 212 + d12 Y12 .Ω12 = Pi H + d12 Y12 + R1 E¯2 , 15 ' ' i i T T ¯ ¯ ¯ .Ω16 = Pi Bi , .Ω19 = −(T C i ) + (T H1,i ) , i i ¯ i ' , .Ω1,11 .Ω1,10 = Pi D = Pi H¯ 3,i , i 1 T .Ω22 = −(1 − h)Q 11 − W2 + J + J − W2 − R2 E¯1 , i i i i 1 T .Ω23 = W2 − J , .Ω24 = −J + W2 , .Ω26 = −(1 − h)Q 12 + R2 E¯2 , .Ω29 = i i 2 3 3 2 T (T H¯ 2,i ) , .Ω33 = −Q 11 + Q 11 − W1 − W2 − R3 E¯1 , .Ω37 = R3 E¯2 + Q 12 − Q 12 , i i 3 3 .Ω44 = −Q 11 − W2 − R4 E¯1 , .Ω48 = −Q 12 + R4 E¯2 , i i 1 2 2 1 2 2 ¯ i ' )T , .Ω55 = Q 22 + Q 22 + d1 Y22 + d12 Y22 − R1 , .Ω59 = (T A i i i 1 ¯i ' )T , .Ω77 .Ω66 = −(1 − h)Q 22 − R2 , .Ω69 = (T B = −Q 222 + Q 322 − R3 , i i 3 2 2 T .Ω88 = −Q 22 − R4 , .Ω99 = S + d1 W1 + d12 W2 − T − T, ' i i i i ¯ i , .Ω9,11 .Ω9,10 = T D = T H¯ 3,i , .Ω10,10 = −(1 − σ)S, .Ω11,11 = −W3 , ˜ ˜ ˜ 3,i ⊗ Γ3,i , ¯ ¯ ¯ .d12 = d2 − d1 , . H1,i = H1,i ⊗ Γ1,i , . H2,i = H2,i ⊗ Γ2,i , . H3,i = H (1) (2) (3) ˜ 1,i = M Z 11,i J, . H˜ 2,i = M Z 11,i J, . H˜ 3,i = M Z 11,i J, . M and . J have the same .H structures as Lemma 3.2.

Proof Consider the following Lyapunov–Krasovskii functional

.

V (x(t), t, i) =

6 ∑ [Vq (x(t), t, i)],

(3.9)

q=1

where .

V1 (x(t), t, i) =x T (t)MT Pi Mx(t),

(3.10)

40

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

)[ ]T ( 1 ] Mx(s) Q 11 Q 112 Mx(s) . V2 (x(t), t, i) = ds ∗ Q 122 M f (x(s)) t−τ (t) M f (x(s)) )[ ]T ( 2 ] { t [ Mx(s) Q 11 Q 212 Mx(s) ds + ∗ Q 222 M f (x(s)) t−d1 M f (x(s)) )[ ]T ( 3 ] { t−d1 [ Mx(s) Q 11 Q 312 Mx(s) ds, + ∗ Q 322 M f (x(s)) M f (x(s)) t−d2 {

V3 (x(t), t, i) =

.

V4 (x(t), t, i) = d1

t

[

{

t

(3.11)

] )T ( 1 1 ) { t [ ] Mx(s) F11 F12 Mx(s) ds ds 1 ∗ F22 t−d1 M f (x(s)) t−d1 M f (x(s)) ( { t−d1 [ ] )T ( 2 2 ) { t−d1 [ ] Mx(s) F11 F12 Mx(s) + ds ds, 2 ∗ F22 M f (x(s)) M f (x(s)) t−d2 t−d2 (3.12) ({

.

[

t

{ t[

t−d1

θ

]T (

Mx(s) M f (x(s))

1 1 Y12 Y11 1 ∗ Y22

)[

] Mx(s) dsdθ M f (x(s))

+ (d2 − d1 )× ]T ( 2 2 ) [ ] { t−d1 { t [ Mx(s) Y11 Y12 Mx(s) dsdθ, 2 ∗ Y22 M f (x(s)) M f (x(s)) θ t−d2 { .

t

V5 (x(t), t, i) =

T (Mx(s)) ˙ S(Mx(s))ds, ˙

(3.13)

(3.14)

t−σ(t)

{ .

V6 (x(t), t, i) = d1

{

0 −d1

t t+θ

{

+ (d2 − d1 ) { + d2

0

T (Mx(s)) ˙ W1 (Mx(s))dsdθ ˙

{

−d2

−d1 −d2

t

{

t

T (Mx(s)) ˙ W2 (Mx(s))dsdθ ˙

t+θ

(Mx(s))T W3 (Mx(s))dsdθ.

(3.15)

t+θ

Let .L be the weak infinitesimal generator along system (3.4). We obtain LV1 (x(t), t, i) = 2x T (t)MT Pi M[ D¯ i x(t ˙ − σ(t)) − C¯i x(t) + A¯ i f (x(t)) + B¯i f (x(t − τ (t))) + U¯ (t) + Γ¯1,i x(t) + Γ¯2,i x(t − τ (t)) { t N ∑ x(s)ds] + πi j (Mx(t))T P j Mx(t). + Γ¯3,i

.

t−τ (t)

j=1

3.3 Main Results

41

' ' According to the structure of .M, one has .M D¯ i = D¯ i M, .MC¯i = C¯i M, .M A¯ i = ' ' (q) (q) A¯ i M, .M B¯i = B¯i M, .MU¯ (t) = 0, .MΓ¯q,i = ( M˜ ⊗ In )(G i ⊗ Γq,i ) = ( M˜ G i ) ⊗ ˜ ⊗ Γq,i = ( H˜ q,i ⊗ Γq,i )( M˜ ⊗ In ) = H¯ q,i M, q = 1, 2, 3. Γq,i = ( H˜ q,i M) Therefore, one obtains

LV1 (x(t), t, i)

.

'

'

= 2x T (t)MT Pi D¯ i Mx(t ˙ − σ(t)) − 2x T (t)MT Pi C¯i Mx(t) ' ' + 2x T (t)MT Pi A¯ i M f (x(t)) + 2x T (t)MT Pi B¯i M f (x(t − τ (t))) + 2x T (t)MT Pi H¯ 1,i Mx(t) + 2x T (t)MT Pi H¯ 2,i Mx(t − τ (t)) { t N ∑ x(s)ds + πi j (Mx(t))T P j Mx(t), + 2x T (t)MT Pi H¯ 3,i M t−τ (t)

(3.16)

j=1

LV2 (x(t), t, i ) )[ ] [ ]T ( 1 Q 11 Q 112 Mx(t) Mx(t) = ∗ Q 122 M f (x(t)) M f (x(t)) [ ]T ( 1 )[ ] Mx(t − τ (t)) Q 11 Q 112 Mx(t − τ (t)) − (1 − τ˙ (t)) ∗ Q 122 M f (x(t − τ (t))) M f (x(t − τ (t))) [ ]T ( 2 ) [ ] 2 Mx(t) Q 11 Q 12 Mx(t) + ∗ Q 222 M f (x(t)) M f (x(t)) )[ ]T ( 2 ] [ Q 11 Q 212 Mx(t − d1 ) Mx(t − d1 ) − ∗ Q 222 M f (x(t − d1 )) M f (x(t − d1 )) ]T ( 3 ] )[ [ 3 Q 11 Q 12 Mx(t − d1 ) Mx(t − d1 ) + ∗ Q 322 M f (x(t − d1 )) M f (x(t − d1 )) ]T ( 3 ] ) [ [ Q 11 Q 312 Mx(t − d2 ) Mx(t − d2 ) , − (3.17) ∗ Q 322 M f (x(t − d2 )) M f (x(t − d2 ))

.LV3 (x(t), t, i)

] ] )T ( 1 1 ) [ F11 F12 Mx(s) Mx(t) − Mx(t − d1 ) ds 1 M f (x(t)) − M f (x(t − d1 )) ∗ F22 t−d1 M f (x(s)) ( { t−d1 [ ] )T ( 2 2 ) [ ] F11 F12 Mx(s) Mx(t − d1 ) − Mx(t − d2 ) +2 ds , 2 M f (x(s)) M f (x(t − d1 )) − M f (x(t − d2 )) ∗ F22 t−d2 =2

({ t

[

(3.18)

42

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

LV4 (x(t), t, i) [ ]T ( 1 1 ) [ ] Mx(t) Y11 Y12 Mx(t) = d12 1 ∗ Y22 M f (x(t)) M f (x(t)) ]T ( 1 1 ) [ ] { t [ Mx(s) Y11 Y12 Mx(s) − d1 ds 1 ∗ Y22 M f (x(s)) t−d1 M f (x(s)) [ ]T ( 2 2 ) [ ] Mx(t) Y11 Y12 Mx(t) 2 + d12 2 ∗ Y22 M f (x(t)) M f (x(t)) ]T ( 2 2 ) [ ] { t−d1 [ Mx(s) Y11 Y12 Mx(s) − d12 ds 2 ∗ Y22 M f (x(s)) M f (x(s)) t−d2 [ ]T ( 1 1 ) [ ] Mx(t) Y11 Y12 Mx(t) ≤ d12 1 ∗ Y22 M f (x(t)) M f (x(t)) ] )T ( 1 1 ) { t [ ] ({ t [ Mx(s) Y11 Y12 Mx(s) ds ds − 1 ∗ Y22 t−d1 M f (x(s)) t−d1 M f (x(s)) [ ]T ( 2 2 ) [ ] Mx(t) Y11 Y12 Mx(t) 2 + d12 2 ∗ Y22 M f (x(t)) M f (x(t)) ( { t−d1 [ ] )T ( 2 2 ) { t−d1 [ ] Mx(s) Y11 Y12 Mx(s) − ds ds, 2 ∗ Y22 M f (x(s)) M f (x(s)) t−d2 t−d2

(3.19)

LV5 (x(t), t, i)

.

T = (Mx(t)) ˙ SMx(t) ˙ − (1 − σ(t))(M ˙ x(t ˙ − σ(t)))T SMx(t ˙ − σ(t)) T ≤ (Mx(t)) ˙ SMx(t) ˙ − (1 − σ)(Mx(t ˙ − σ(t)))T SMx(t ˙ − σ(t)),

LV6 (x(t), t, i)

.



T d12 (Mx(t)) ˙ W1 Mx(t) ˙

({ −

t

)T Mx(s)ds ˙

t−d1

+ d22 (Mx(t))T W3 Mx(t) −

{

Mx(s)ds ˙

t−d1 t−d1

T (Mx(s)) ˙ W2 Mx(s)ds ˙

t−d2

)T

t

Mx(s)ds t−τ (t)

t

W1

T ˙ W2 Mx(t) ˙ − (d2 − d1 ) + (d2 − d1 )2 (Mx(t))

({

{

(3.20)

W3

{

t

Mx(s)ds. t−τ (t)

Based on reciprocally convex combination technique of Lemma 3.3, we have

(3.21)

3.3 Main Results

{

t−d1

d

. 12

43

{ (Mx(s)) ˙ W2 Mx(s)ds ˙ ≥( T

t−d2

+( +( +(

{

t−d1

t−τ (t) { t−d1

Mx(s)ds) ˙ W2 T

t−τ (t) t−τ (t)

{ T Mx(s)ds) ˙ J

t−τ (t) { t−τ (t) t−d2 { t−τ (t)

t−d1

Mx(s)ds ˙

Mx(s)ds ˙

t−d2

{

T T Mx(s)ds) ˙ J T Mx(s)ds) ˙ W2

t−d1

Mx(s)ds ˙

t−τ (t) { t−τ (t)

t−d2

Mx(s)ds. ˙

t−d2

Therefore, we have LV6 (x(t), t, i)

.

({

T ˙ W1 Mx(t) ˙ − ≤ d12 (Mx(t))

t

)T Mx(s)ds ˙

t−d1

{

t

W1

Mx(s)ds ˙

t−d1

2 T + d12 (Mx(t)) ˙ W2 Mx(t) ˙ − [Mx(t − d1 ) − Mx(t − τ (t))]T W2

× [Mx(t − d1 ) − Mx(t − τ (t))] − 2[Mx(t − d1 ) − Mx(t − τ (t))]T J [Mx(t − τ (t)) − Mx(t − d2 )] − [Mx(t − τ (t)) − Mx(t − d2 )]T W2 [Mx(t − τ (t)) − Mx(t − d2 )] ({ t )T { t 2 T Mx(s)ds W3 Mx(s)ds. + d2 (Mx(t)) W3 Mx(t) − t−τ (t)

(3.22)

t−τ (t)

From system (3.4), the following equation holds for any matrix .T , T 0 = 2(Mx(t)) ˙ T M[−x(t) ˙ + D¯ i x(t ˙ − σ(t)) − C¯i x(t) + A¯ i f (x(t)) { ¯ ¯ ¯ ¯ ¯ + Bi f (x(t − τ (t))) + U (t) + Γ1,i x(t) + Γ2,i x(t − τ (t)) + Γ3,i

.

t

x(s)ds].

t−τ (t)

(3.23) From Assumption 3.1, we get [ .

[ .

Mx(t) M f (x(t))

Mx(t − τ (t)) M f (x(t − τ (t)))

]T (

]T (

−R1 E¯ 1 R1 E¯ 2 ∗ −R1

−R2 E¯ 1 R2 E¯ 2 ∗ −R2

)[

)[

] Mx(t) ≥ 0, M f (x(t))

(3.24)

] Mx(t − τ (t)) ≥ 0, M f (x(t − τ (t)))

(3.25)

44

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

[ .

[ .

Mx(t − d1 ) M f (x(t − d1 )) Mx(t − d2 ) M f (x(t − d2 ))

]T (

]T (

)[

−R3 E¯ 1 R3 E¯ 2 ∗ −R3

)[

−R4 E¯ 1 R4 E¯ 2 ∗ −R4

] Mx(t − d1 ) ≥ 0, M f (x(t − d1 ))

(3.26)

] Mx(t − d2 ) ≥ 0. M f (x(t − d2 ))

(3.27)

Taking into account the situation that the information of transition probabilities is not completely accessible, the following zero equations hold for .∀ . Z i = Z iT because N ∑ of . πi j = 0, j=1

.

− (Mx(t))T (





πi j +

j∈Gki

πi j )Z i Mx(t) = 0.

(3.28)

i j∈Guk

Then, combining (3.9)–(3.28), we get E{LV (x(t), t, i)} ≤ E{η T (t)Ω i η(t) +



.

(Mx(t))T πi j (P j − Z i )Mx(t)},

i j∈Guk

T T T T where .η(t) = {(Mx(t)) , (Mx(t − τ (t))) , (Mx(t − d1 )) , (Mx(t − d2 )) , T T T T (M f (x(t))) , (M f (x(t − τ (t)))) , (M f (x(t − d1 ))) , (M f (x(t − d2 ))) , {t {t { t−d T , (Mx(t ˙ − σ(t)))T , ( t−τ (t) Mx(s)ds)T , ( t−d1 Mx(s)ds)T , ( t−d21 Mx(s) (Mx(t)) ˙ {t { t−d ds)T , ( t−d1 M f (x(s))ds)T , ( t−d21 M f (x(s))ds)T }T .

So, we obtain E{LV (x(t), t, i)} ≤ −γE{||Mx(t)||2 },

.

where γ = min{λmin (−Ω i ), λmin (−



.

i∈G

πi j (P j − Z i ))} > 0.

i j∈Guk

Applying Dynkin’s formula, we have {

t

E{V (x(t), t, i)} − E{V (x(0), 0, i 0 )} = E{

.

LV (x(s), s, rs )ds}.

0

We know that there exists positive constant .β such that

(3.29)

3.4 Simulation

45

βE{||Mx(t)||2 } ≤ E{V (x(t), t, i)}

.

{

t

= E{V (x(0), 0, i 0 )} + E{ LV (x(s), s, rs )ds} 0 { t E{||Mx(t)||2 }ds. ≤ E{V (x(0), 0, i 0 )} − γ

(3.30)

0

Thus, system (3.4) is locally asymptotically synchronized in mean quare. The proof is completed. □ Remark 3.4 The local synchronization is the inner synchronization within one group where all the nodes in the group can be synchronized while synchronization in the whole network can’t be achieved. Because of the engineering applications, such as brain science, reputation evaluation and secure communication [123, 185], this synchronization problem has been studied in recent years. Remark 3.5 When .m = N , local stochastic synchronization becomes complete stochastic synchronization. Therefore, the complete synchronization is a special case in this chapter. Remark 3.6 There are no results for the local stochastic synchronization of Markovian neutral-type complex networks with time delays. In most case, the transition rates of Markovian systems are not known. Hence, we have studied the neutraltype complex networks with Markovian parameters and partially information on transition rates to fill the gap for this field. In addition, in [61, 100], the stability problems for Lur’e systems and static neural networks were studied based on augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique, and less conservative stability conditions were obtained. In this chapter, unlike the methods in [123, 143, 185], we have applied an augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique to study Markovian neutral-type complex networks, and corresponding local stochastic synchronization criteria have been derived. When the coupling configuration matrices are constants, we can get similar results, which are omitted here.

3.4 Simulation In this section, numerical examples are given to demonstrate the effectiveness of the proposed method. Let’s consider the following Markovian neutral-type neural networks with time-varying delays x(t) ˙ − Di x(t ˙ − σ(t)) = −Ci x(t) + Ai f (x(t)) + Bi f (x(t − τ (t))) + U (t), i = 1, 2, 3. (3.31) .

46

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

] ] [ x1 (t) tanh(x1 (t)) , . f (x(t)) = , .τ (t) = 1.6 + 0.2sin(2t), .σ(t) = x2 (t) tanh(x2 (t)) ( ( ( ) ) ) 0.9 0 2.0 −0.1 −3.1 0.1 0.4 + 0.3sin(t), .C1 = , . A1 = ,. B1 = , 0 0.9 −5 4.5 0.1 −1.9 ( ( ( ) ) ) −0.1 0.2 1.1 0 2.1 −0.2 . D1 = , .C 2 = , . A2 = , . B2 = 0.3 −0.1 0 1.0 −4.9 4.3 ( ) ) ) ) ( ( ( −2.9 −0.2 −0.2 0.3 1.0 0 1.9 −0.4 , . D2 = , .C3 = , . A3 = , 0.1 −2.3 0.2 −0.3 0 1.2 −5.1 3.9 ( ( [ ) ) ] −3.0 −0.1 −0.2 0.4 0.3 . B3 = , . D3 = , .U (t) = . The trajectories of −0.2 −3.0 0.3 −0.1 0.15 [ ] 0.4 system (3.31) with initial values .x(t) = are given in Figs. 3.1, 3.2 and 3.3. 0.6 [

where .x(t) =

Fig. 3.1 Chaotic trajectory of the mode .i = 1 for neutral-type neural network system (3.31)

Fig. 3.2 Chaotic trajectory of the mode .i = 2 for neutral-type neural network system (3.31)

3.4 Simulation

47

Fig. 3.3 Chaotic trajectory of the mode .i = 3 for neutral-type neural network system (3.31)

Now, consider Markovian coupled neutral-type neural networks x˙ (t) − Di x˙k (t − σ(t)) = −Ci xk (t) + Ai f (xk (t)) + Bi f (xk (t − τ (t))) + Uk (t)

. k

+

4 ∑

G (1) kl,i Γ1,i xl (t) +

l=1

+

4 ∑ l=1



G (3) kl,i Γ3,i

{

4 ∑

G (2) kl,i Γ2,i xl (t − τ (t))

l=1 t

t−τ (t)

xl (s)ds, k = 1, 2, 3, 4.

(3.32)

⎛ ⎞ ⎞ −15.0 11.6 1.7 1.7 −16.0 13.1 1.45 1.45 ⎜ 11.6 −15.0 1.7 ⎜ 13.1 −16.0 1.45 1.45 ⎟ 1.7 ⎟ (q) (q) ⎜ ⎟ ⎟ .G 1 = ⎜ ⎝ 0.01 0.01 −0.04 0.02 ⎠, .G 2 = ⎝ 0.02 0.02 −0.05 0.01 ⎠, 0.01 0.01 0.02 −0.04 0.02 0.02 0.01 −0.05 ⎛ ⎞ −17.0 12.5 2.25 2.25 ( ) ⎜ 12.5 −17.0 2.25 2.25 ⎟ 1.5 0 (q) ⎜ ⎟ .G 3 = ⎝ 0.01 0.01 −0.03 0.01 ⎠, .q = 1, 2, 3. .Γ1,i = 0 1.5 , 0.01 0.01 0.01 −0.03 ( ( ) ) 0.2 0 0.1 0 .Γ2,i = , .Γ3,i = , .i = 1, 2, 3. 0 0.2 0 0.5 ( ) 00 The activation function satisfies Assumption 3.1 and . E 1 = , . E2 = 00 ( ) 0.5 0 . We obtain .d1 = 1.4, .d2 = 1.8, .h = 0.4, .σ = 0.3. 0 0.5 ⎞ ⎛ −1.5 ? ? The partially unknown transition rate matrix is given as .Π = ⎝ ? ? 0.8 ⎠ . 0.6 ? ?

48

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

Fig. 3.4 Trajectory of the first node of Markovian neutral-type complex networks (3.32)

Fig. 3.5 Trajectory of the second node of Markovian neutral-type complex networks (3.32)

By using Matlab LMI Toolbox, we obtain a set of feasible solutions as follows (we do not list all the obtained matrices for space consideration): ( ) ( ) ( ) 0.6728 0.0579 0.7202 0.0720 0.8143 0.0990 . P1 = , . P2 = , . P3 = , 0.0579 0.2503 0.0720 0.2552 0.0990 0.2637 ( ( ( ) ) ) 1.5366 0.2335 0.6965 0.0649 0.7672 0.0855 .Z1 = , .Z2 = , .Z3 = , 0.2335 0.2964 0.0649 0.2527 0.0855 0.2594 ⎛ ⎞ 0.5626 0.0938 −0.2642 −0.0374 ( ) ⎜ 0.0938 0.0936 −0.0302 −0.0815 ⎟ 0.0127 −0.0006 ⎟ .J = , .F 1 = ⎜ ⎝ −0.2642 −0.0302 0.4259 0.0648 ⎠. −0.0006 0.0040 −0.0374 −0.0815 0.0648 0.0911

3.4 Simulation Fig. 3.6 Trajectory of the third node of Markovian neutral-type complex networks (3.32)

Fig. 3.7 Trajectory of the fourth node of Markovian neutral-type complex networks (3.32)

Fig. 3.8 Synchronization error .e1 (t) of the first two nodes

49

50

3 Local Synchronization of Markovian Neutral-Type Complex Networks …

Fig. 3.9 Complete error .e(t) of Markovian neutral-type complex networks (3.32)

According to Theorem 3.1 and Figs. 3.4, 3.5, 3.6, 3.7, 3.8 and 3.9, we obtain that the first two nodes can be asymptotically synchronized in mean square, while the whole network can not be achieved. This shows that dynamical system (3.32) is locally asymptotically synchronized in mean square. The synchronization error of the first two nodes and the complete error of the complex networks (3.32) are illustrated in Figs. 3.8 and 3.9, which are calculated by e (t) =

. 1

2 √ ∑ (x1i − x2i )2 , i=1

┌ 2 |∑ ∑ | 4 | (x1i − x ji )2 . .e(t) = i=1

j=2

Remark 3.7 It’s known that the complete synchronization requires all the nodes in complex networks to approach a uniform dynamical behavior. Sometimes, the complete synchronization can’t be achieved. In this case, the synchronization criteria in [83, 92] may be inapplicable anymore. Hence, we consider the local synchronization problem of the complex networks. In large-scale complex networks, some nodes in the complex networks have similar properties and can be synchronized in complex networks. In this case, the local synchronization can well describe the dynamics of the complex networks.

3.5 Conclusion

51

3.5 Conclusion In this chapter, the novel delay-dependent local stochastic synchronization criteria are obtained for Markovian neutral-type complex networks with partially information on transition probabilities based on an augmented Lyapunov–Krasovskii functional and reciprocally convex combination technique. The coupling configuration matrices are not restricted to be symmetric and they are changed by Markovian chain. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results.

Chapter 4

Local Synchronization of Markovian Nonlinearly Coupled Neural Networks with Generally Uncertain Transition Rates

In this chapter, we investigate local synchronization of Markovian nonlinear coupled neural networks with uncertain and partially unknown transition rates. Each transition rate in the Markovian coupled neural network model is either uncertain or completely unknown because complete knowledge of the transition rate is difficult and costly. The less conservative local synchronization criteria containing the bounds of delay and delay derivative are obtained based on the novel augmented Lyapunov–Krasovskii functional and a new integral inequality that combines with free-matrix-based integral inequality and further improved integral inequality. Finally, the validity of the results is demonstrated by numerical examples.

4.1 Introduction Time delay is a common problem in biological and artificial neural networks, and its existence is often a source of oscillation and instability [3, 11, 22, 47, 58, 72, 73, 134, 136, 144, 156, 169, 200, 203, 208, 211]. Therefore, the synchronization of coupled neural networks with time delay has been a key topic [12, 71, 88, 89, 163, 187, 196]. In [182], the asymptotic synchronization issues of coupled neural networks with bounded time-varying discrete delay and infinite-time distributed delay were investigated based on quantized intermittent pinning control scheme and weighted double-integral inequalities, and some less conservative sufficient conditions were proposed to ensure that the coupled neural networks asymptotically synchronize to an isolated system. Based on [182], the global exponential synchronization for a class of switched discrete-time neural networks was investigated based on the new Lyapunov–Krasovskii functionals and logarithmic quantization technique [183]. The local synchronization has been highly concerned due to its engineering applications, such as brain science, reputation evaluation and secure communication [123, 185]. Based on [185], the cluster, local and complete synchronization problems of coupled neural networks with mixed delays were studied in [123], and several novel © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2_4

53

54

4 Local Synchronization of Markovian Nonlinearly Coupled …

delay-dependent criteria were developed to achieve cluster, local and complete synchronization by utilizing a special coupling matrix. However, when dealing with the integral terms in the Lyapunov–Krasovskii functional, some conservative integral inequalities, such as Jensen’s integral inequality and reciprocally convex combination, have been used, resulting in the conservative synchronization criteria. Markovian jumping systems have received widespread attention due to the information locking phenomenon which frequently occurs in networked systems [19, 21, 36, 46, 57, 83, 92, 116, 139, 171, 174, 179, 201, 202, 205, 207, 212, 219]. In [79], the cluster synchronization for a class of Markovian complex networks composed of nonidentical nodes with hybrid couplings was addressed by adopting stochastic Lasalle-type invariance theorem, and the designed adaptive controller was proposed. After that, the local synchronization problem for the interconnected Boolean networks with and without stochastic disturbances was concerned in [24], and several necessary and sufficient local synchronization conditions were proposed based on mutually independent two-valued random logical variables. However, in most cases, it is difficult to obtain exact transition rates in Markovian processes due to equipment limitations and uncertainties. Hence, it is necessary to further consider more general Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates. Based on the above discussions, this chapter addresses local synchronization of Markovian nonlinear coupled neural networks. Some sufficient conditions are obtained to achieve the local synchronization. The contributions of this chapter are summarised as follows: 1. For Markovian nonlinearly coupled neural networks of this chapter, each transition rate is completely unknown or only the estimate value is known. Compared with traditional transition rate matrices in [83, 139, 179, 201], the neural networks in this chapter are more practical because the transition rates in Markovian processes are difficult to precisely get due to equipment limitations and uncertainties. 2. A new integral inequality that combines with free-matrix-based integral inequality and further improved integral inequality is proposed to deal with {t T . (Ψ x(s)) ˙ SΨ x(s)ds ˙ by adopting two interval variable parameters, which t−τ could introduce more free matrices and reduce the conservatism. 3. The new local synchronization criteria containing the bounds of delay and delay derivative are proposed based on the new integral inequality, which are less conservative than existing ones. The remainder of this chapter is organized as follows. In Sect. 4.2, some preliminaries and Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates are introduced. In Sect. 4.3, the new local synchronization criteria of Markovian coupled neural networks are derived. In Sect. 4.4, an example is given to demonstrate the validity of the proposed results. Finally, conclusions are drawn in Sect. 4.5.

4.2 Preliminaries

55

Notation: Throughout this chapter, . R n and . R n×n denote .n-dimensional Euclidean space and set of all .n × n real matrices, respectively. .|| · || stands for Euclidean vector norm. The symbol .⊗ represents Kronecker product. . X T denotes the transpose of matrix . X . . X ≥ 0 (. X < 0) means that . X is a real symmetric positive semidefinite matrix (negative definite matrix). .Sym{X } = X + X T . . In represents the .nn×n .A ∈ R , .λmin (A) denotes the minimum dimensional identity For a matrix ( matrix. ( ) ) X Y X Y stands for. ..∅ stands for empty set..E stands for the eigenvalue of. A.. ∗ Z YT Z mathematical expectation. .eγ = [0(m−1)n×(γ−1)(m−1)n , I(m−1)n , 0(m−1)n×(11−γ)(m−1)n ] .(γ = 1, 2, . . . , 11). Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.

4.2 Preliminaries Let .{rt , t ≥ 0} be a right-continuous Markovian process on the probability space and taking values in a finite state space .G = {1, 2, . . . , N } with generator .Π = (πi j ), i, j ∈ G given by { .

Pr {rt+Δt = j | rt = i} =

πi j Δt + o(Δt), i /= j, 1 + πii Δt + o(Δt), i = j,

where .Δt > 0, lim (o(Δt)/Δt) = 0. .πi j ≥ 0 (i /= j) is the transition rate from Δt→0

mode .i at time .t to mode . j at time .t + Δt, and .πii = −

N ∑ j=1, j/=i

πi j . Each .rt is denoted

by .i, .i ∈ G. In this chapter, transition rates of the jumping process are considered to be general uncertain. For example, the transition rate matrix .Π with .N operation modes can be expressed as ⎛

? ? π˜ 13 + Δ13 · · · π˜ 1N + Δ1N ⎜ π˜ 21 + Δ21 π˜ 22 + Δ22 ? ··· ? ⎜ .Π = ⎜ .. .. .. .. . .. ⎝ . . . . ? ? ? π˜ N 3 + ΔN 3 · · ·

⎞ ⎟ ⎟ ⎟, ⎠

(4.1)

where .? represents the complete unknown transition rate. .π˜ i j and .Δi j represent the estimate value and estimate error of the uncertain transition rate .πi j , respectively. .|| Δi j || ≤ ϖi j and .ϖi j ≥ 0. .π ˜ i j , .ϖi j are known. For notational clarity, .∀ i ∈ G, we ∪ i , where.Gki = { j : The estimate value of.πi j is known for. j ∈ G} denote.G i = Gki Guk i and .Guk = { j : The estimate value of .πi j is unknown for . j ∈ G}. If .Gki /= ∅, it is described as .Gki = {Li1 , Li2 , . . . , LiK }, where .LiK represents the bound-known element with the index .LiK in the .ith row of matrix .Π .

56

4 Local Synchronization of Markovian Nonlinearly Coupled …

According to the properties of the transition rates, we assume that the known estimate values of the transition rates are defined as follows: / Gki , then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ Gki ). If .Gki /= G, and .i ∈ i i i ∑If .Gk /= G, and .i ∈ Gk , then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ Gk , j /= i), .π˜ ii + ϖii ≤ 0, and . π˜ i j ≤ 0. j∈Gki

If .Gki = G, then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ G, j /= i), π˜ ii = − ϖii =

.

N ∑ j=1, j/=i

N ∑ j=1, j/=i

π˜ i j ≤ 0, and

ϖi j ≥ 0.

Remark 4.1 In [92, 139, 205, 212], the transition rate matrices are form (4.2) with partially unknown transition rates. ⎛

? π13 π22 ? .. .. . . ? ? πN 3

? ⎜ π21 ⎜ .⎜ . ⎝ ..

⎞ · · · π1N ··· ? ⎟ ⎟ . . . .. ⎟ . . ⎠ ··· ?

(4.2)

In this chapter, we consider more general transition rate matrix (4.1) and make full use of bounds of uncertain values in the transition rates. Consider the following . N Markovian nonlinearly coupled neural networks with time-varying delay: y˙ (t) = −C(rt )yk (t) + A(rt )g(yk (t)) + B(rt )g(yk (t

. k

− τ (t))) + Jk (t) + a1

N ∑

G (1) kl (r t )Γ1 (r t )g(yl (t))

l=1

+ a2

N ∑

G (2) kl (r t )Γ2 (r t )g(yl (t − τ (t)))

l=1

+ a3

N ∑

G (3) kl (r t )Γ3 (r t )

l=1

k = 1, 2, . . . , N ,

{

t t−τ (t)

g(yl (s))ds, (4.3)

where . yk (t) = [yk1 (t), yk2 (t), . . . , ykn (t)]T ∈ R n is the state vector of the .kth node. .C(r t ) is a positive diagonal matrix. . A(r t ) and . B(r t ) are the connection weight matrices. .g(yk (t)) = [g1 (yk1 (t)), g2 (yk2 (t)), . . . , gn (ykn (t))]T ∈ R n denotes the output of node .k. . Jk (t) is an external input vector. .a1 > 0, .a2 > 0 and .a3 > 0 are coupling strengths. Time-varying delay .τ (t) satisfies .0 ≤ τ (t) ≤ τ , .μ1 ≤ τ˙ (t) ≤ μ2 , where .τ , .μ1 , μ2 are constants. .Γα1 (r t ) (α1 = 1, 2, 3) are inner coupling matrices representing 1) the inner coupling between the nodes. .G (α1 ) (rt ) = G (α kl (r t ) N ×N (α1 = 1, 2, 3) are

4.2 Preliminaries

57

coupling configuration matrices representing the topological structure of the network, which may not be identical and satisfy the following conditions: N ∑ (α1 ) (α1 ) 1) . G kl (r t ) ≥ 0 k / = l, . G kk (r t ) = − G (α kl (r t ). l=1,l/=k

The initial condition associated with (4.1) is given as follows: . yk (s) = φk0 (s) ∈ C([−τ , 0], R n ) (k = 1, 2, . . . , N ), and .C([−τ , 0], R n ) is the set of continuous function from .[−τ , 0] to . R n . Throughout this chapter, the following assumptions are needed. Assumption 4.1 ([84]) For any . y1 , y2 ∈ R, there exist constants .er− , er+ , such that e− ≤

. r

gr (y1 ) − gr (y2 ) ≤ er+ , r = 1, 2, . . . , n. y1 − y2

We denote .

) E 1 = diag(e1+ e1− , . . . , en+ en− ,

) e1+ + e1− en+ + en− . . E 2 = diag ,..., 2 2 (

Assumption 4.2 Each possible value of .rt is denoted by .i .(i ∈ G). The coupling configuration matrices .G i(α1 ) (α1 = 1, 2, 3) satisfy the conditions: ( (α1 ) .G i

=

(α1 ) (α1 ) Z 11,i Z 12,i (α1 ) (α1 ) Z 21,i Z 22,i

) ,

(4.4)

(α1 ) (α1 ) (α1 ) (α1 ) where . Z 11,i ∈ R m×m , . Z 22,i ∈ R (N −m)×(N −m) , . Z 12,i ∈ R m×(N −m) , . Z 21,i ∈ (α1 ) (α1 ) (N −m)×m R , .m < N , and assume that all rows in . Z 12,i and . Z 21,i are the same, (α1 ) (r ) respectively. . Z 12,i = [vi(α1 ) , vi(α1 ) , . . . , vi(α1 ) ]T , . Z 21,i = [u i(α1 ) , u i(α1 ) , . . . , u i(α1 ) ]T , (α1 ) (α1 ) (α1 ) (α1 ) (α1 ) (α1 ) T 1) T where .vi(α1 ) = [v1,i , v2,i , . . . , v(N = [u (α 1,i , u 2,i , . . . , u m,i ] are −m),i ] and .u i vectors. . J1 (t) = J2 (t) = · · · = Jm (t).

ˆ K ) ={the set of matrices Definition 4.1 ([162]) Let. Rˆ denote a ring and define.T ( R, with entries . Rˆ such that the sum of the entries in each row is equal to . K for some ˆ }. .K ∈ R Definition 4.2 Dynamical system (4.3) is said to be locally asymptotically synchronized in mean square if . lim E{||yk (t) − yl (t)||2 } = 0, .k, l = 1, 2, . . . , m holds. t→∞

ˆ K ). Then .(N − 1) × Lemma 4.1 ([162]) Let .G be an . N × N matrix in the set .T ( R, (N − 1) matrix . H satisfies . M G = H M, where . H = M G J ,

58

4 Local Synchronization of Markovian Nonlinearly Coupled …



1 ⎜0 ⎜ .M = ⎜ . ⎝ ..

−1 1 .. .

0 0 ⎛

1 ⎜0 ⎜ ⎜. .J = ⎜ . ⎜. ⎝0 0

0 ··· 0 ··· .. . . . . 0 ··· 1

0 −1 .. .

0 0 .. .

⎞ ⎟ ⎟ ⎟ ⎠

−1

,

(N −1)×N

⎞ 1 ··· 1 1 ··· 1⎟ ⎟ .. . . .. ⎟ , . . .⎟ ⎟ ⎠ 0 0 ··· 0 1 0 0 · · · 0 0 N ×(N −1) 1 1 .. .

1 1 .. .

ˆ . Rˆ is defined in Definition 4.1. Moreover, in which .1 is the multiplicative identity of . R. the matrix . H can be rewritten explicitly as follows:

.

Hi j =

j ∑ (G (ik) − G (i+1,k) ) k=1

for .i, j ∈ {1, 2, . . . , N − 1}. Lemma 4.2 ([185]) Under Assumption 4.2, the.(m − 1) × (m − 1) matrix. H˜ defined ˜ . M˜ is the .(m − 1) × N matrix by . H˜ = M Z 11 J satisfies . M˜ G = H˜ M. ⎛ ⎞ 1 −1 0 0 · · · 0 0 · · · 0 ⎜ 0 1 −1 0 · · · 0 0 · · · 0 ⎟ ( ) ⎟ ˜ =⎜ .M ⎜ .. .. .. .. . . .. ⎟= M O , ⎝. . . . . . 0 ··· 0⎠ 0 0

0 · · · 1 −1 0 · · · 0

where . M ∈ R (m−1)×m , O ∈ R (m−1)×(N −m) , and each entry of . O is zero. . J˜ is the . N × (m − 1) matrix ⎞ ⎛ 1 1 1 1 ··· 1 ⎜0 1 1 1 ··· 1⎟ ⎟ ⎜ ⎜ .. .. .. .. . . .. ⎟ ⎜. . . . . .⎟ ⎟ ( ⎜ ) ⎜0 0 0 ··· 0 1⎟ J ⎟= , . J˜ = ⎜ ⎜0 0 0 ··· 0 0⎟ OT ⎟ ⎜ ⎜0 0 0 ··· 0 0⎟ ⎟ ⎜ ⎜. . . . . .⎟ ⎝ .. .. .. .. .. .. ⎠ 0 0 0 ··· 0 0

where . J ∈ R m×(m−1) . Moreover, . M and . J have the same structures as those in Lemma 4.1.

4.3 Main Results

59

Lemma 4.3 ([191]) Let . y be a differentiable function: .[α, β] → R n . For symmetric matrices . S ∈ R n×n and . Z 1 , Z 3 ∈ R 3n×3n , and any matrices . Z 2 ∈ R 3n×3n and 3n×n . N1 N2 ∈ R satisfying ⎛

⎞ Z 1 Z 2 N1 ¯ = ⎝ ∗ Z 3 N2 ⎠ ≥ 0, .Φ ∗ ∗ S the following inequality holds: { .



α

β

¯ y˙ T (s)S y˙ (s)ds ≤ ϖ T Ωϖ

where .Ω¯ = (β − α)(Z 1 + 13 Z 3 ) + Sym{N1 Π¯ 1 + N2 Π¯ 2 }, .Π¯ 1 = e¯1 − e¯2 , Π¯ 2 = 2e¯3 − e¯1 − e¯2 , .e¯1 {= [I 0 0], .e¯2 = [0 I 0], .e¯3 = [0 0 I ], and .ϖ = β T 1 T [y T (β) y T (α) β−α α y (s)ds] . Lemma 4.4 ([60]) For .a < b, . R{= R T > 0, then, the following inequality {b 1 holds: .− a y˙ T (s)R y˙ (s)ds ≤ − b−a Υ0T (a, b)RΥ0 (a, b) + 3Υ1T (a, b)RΥ1 (a, b) + } 5Υ2T (a, b)RΥ2 (a, b) , where .Υ0 (a, b) = y(b) − y(a), .Υ1 (a, b) = y(b) + y(a) − {b {b 12 a+b 2 y(s)ds and .Υ2 (a, b) = y(b) − y(a) − (b−a) 2 a (s − 2 )y(s)ds. b−a a Remark 4.2 From [60, 191], the free-matrix-based integral inequality in Lemma 4.3 and further improved integral inequality in Lemma 4.4 includes the well-known Wirtinger-based integral inequality in [113]. Hence, the integral inequality in Lemmas 4.3 and 4.4 could provide a tighter lower bound than Wirtinger-based integral inequality. Lemma 4.5 ([174]) Given any real number .λ and any square matrix . P, the matrix inequality T 2 −1 T .λ(P + P ) ≤ λ H + P H P holds for any matrix . H .

4.3 Main Results In this section, local synchronization criteria of system (4.3) are obtained based on delay-dependent Lyapunov–Krasovskii functional. To be more convenient, each possible value of .rt is denoted by .i .(i ∈ G). Then, system (4.3) is rewritten as follows

60

4 Local Synchronization of Markovian Nonlinearly Coupled …

y˙ (t) = −Ci yk (t) + Ai g(yk (t)) + Bi g(yk (t − τ (t))) + Jk (t)

. k

+ a1

N ∑

G (1) kl,i Γ1,i g(yl (t)) + a2

l=1

+ a3

N ∑

G (3) kl,i Γ3,i

{

l=1

N ∑

G (2) kl,i Γ2,i g(yl (t − τ (t)))

l=1 t t−τ (t)

g(yl (s))ds, k = 1, 2, . . . , N .

(4.5)

, , Let.Ψ = M˜ ⊗ In ,.C¯i = I N ⊗ Ci ,.C¯i = Im−1 ⊗ Ci ,. A¯ i = I N ⊗ Ai ,. A¯ i = Im−1 ⊗ , (α ) Ai , . B¯i = I N ⊗ Bi , . B¯i = Im−1 ⊗ Bi , .Γ¯α1 ,i = G i 1 ⊗ Γα1 ,i , .(α1 = 1, 2, 3), . y(t) = T T T T .g(y(t)) = [g (y1 (t)), g (y2 (t)), . . . , g (y N (t))] , [x1T (t), y2T (t), . . . , y NT (t)]T , T T T T ¯ ¯ ¯ ˜ . J (t) = [J1 (t), J2 (t), . . . , J N (t)] , . E 1 = Im−1 ⊗ E 1 , . E 2 = Im−1 ⊗ E 2 , and . M is defined in Lemma 4.2. Then, system (4.5) is rewritten as the following form.

.

y˙ (t) = −C¯i y(t) + A¯ i g(y(t)) + B¯i g(x(t − τ (t))) + J¯(t) + a1 Γ¯1,i g(y(t)) + a2 Γ¯2,i g(y(t − τ (t))) { t g(y(s))ds. + a3 Γ¯3,i

(4.6)

t−τ (t)

Theorem 4.1 Under Assumptions 4.1 and 4.2, and for given scalars .0 ≤ ρ1 , ρ2 ≤ 1, (ρ1 + ρ2 = 1), system (4.6) is locally asymptotically synchronized in mean square if there are positive definite matrices .Υi ∈ R (m−1)n×(m−1)n (i = 1, 2, 3, . . . , N ), 2(m−1)n×2(m−1)n . Q α2 ∈ R (α2 = 1, 2), . R ∈ R (m−1)n×(m−1)n , . S ∈ R (m−1)n×(m−1)n , .T ∈ (m−1)n×(m−1)n 2(m−1)n×2(m−1)n , .U1 ∈ R , .U2 ∈ R (m−1)n×(m−1)n , .U3 ∈ R (m−1)n×(m−1)n , R and positive diagonal matrices . Rα1 ∈ R (m−1)n×(m−1)n (α1 = 1, 2, 3), and symmetric matrices . Z α3 ∈ R 3(m−1)n×3(m−1)n , .Yα3 ∈ R 3(m−1)n×3(m−1)n , (α3 = 1, 3), and matrices . X ∈ R 3(m−1)n×3(m−1)n , . Z 2 ∈ R 3(m−1)n×3(m−1)n , .Y2 ∈ R 3(m−1)n×3(m−1)n , . Mα2 ∈ R 3(m−1)n×3(m−1)n , . Nα2 ∈ R 3(m−1)n×(m−1)n , (α2 = 1, 2), such that, for any .i ∈ G, the succeeding matrix inequalities are satisfied. / Gki and .Gki = {Li1 , Li2 , . . . , LiK1 }, there are positive definite matrices . Hi j ∈ If .i ∈ i (m−1)n×(m−1)n R (i ∈ / Gki , j ∈ Gki ) and .Υi − Υ j ≥ 0 ( j ∈ Guk , j /= i), such that ⎛

Ω˜ iδ ΛLi1 ΛLi2 ⎜ ∗ −H i 0 ⎜ iL1 ⎜ δ ∗ ∗ −H ⎜ ˜ iLi2 .Θi = ⎜ . .. .. ⎜ . ⎝ . . . ∗ ∗ ∗

· · · ΛLiK 1 ··· 0 ··· 0 .. .. . . · · · −HiLiK

⎞ ⎟ ⎟ ⎟ ⎟ < 0 (δ = 1, 2, 3, 4), ⎟ ⎟ ⎠

(4.7)

1



⎞ Z 1 Z 2 N1 . ⎝ ∗ Z 3 N2 ⎠ ≥ 0, ∗ ∗ S

(4.8)

4.3 Main Results

61



⎞ Y1 Y2 M1 . ⎝ ∗ Y3 M2 ⎠ ≥ 0, ∗ ∗ S ( .

S¯ X ∗ S¯

(4.9)

) ≥ 0,

(4.10)

where .Ω˜ i1 = Ω˜ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ1 , .Ω˜ i2 = Ω˜ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ2 , ˜ i3 = Ω˜ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ1 , .Ω˜ i4 = Ω˜ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ2 , .Ω ˜ i (τ (t), τ˙ (t)) = Ω1i + Ω˜ 2i + Ω3 (τ (t)) + Ω4 (τ (t), τ˙ (t)) + Ω5 (τ˙ (t)), .Ω T T T T .Ω1i = Sym{e1 Υi Υi Σ1 } + Σ2 (Q 1 + Q 2 )Σ2 − Σ3 Q 2 Σ3 + τ (Υi Σ1 ) SΥi Σ1 + T T Sym{ρ1 Σ4 (N1 (e1 − e2 ) + N2 (−e1 − e2 + 2e8 ))} + Sym{ρ1 Σ6 (M1 (e2 − [ ]T [ ] Σ7 Σ7 + τ e4T T e4 − τ1 e7T T e7 + Ξ¯ e3 ) + M2 (−e2 − e3 + 2e9 ))} − ρτ2 Σ8 Σ8 Sym{e8T U3 e1 − e9T U2 e3 } + Σ2T Ξ1 Σ2 + Σ5T Ξ2 Σ5 + Σ3T Ξ3 Σ3 , ∑ ∑ 2 ˜ 2i = e1T ϖi j Hi j e1 , .Ω π˜ i j (Υ j − Υi )e1 + 41 e1T j∈Gki

j∈Gki 1 T T .Ω3 (τ (t)) = τ (t)Sym{e8 R(e1 − e3 )} + ρ1 τ (t)Σ4 (Z 1 + Z 3 )Σ4 + (τ − 3 1 T T τ (t))Sym{e9 R(e1 − e3 )} + ρ1 (τ − τ (t))Σ6 (Y1 + 3 Y3 )Σ6 , .Ω4 (τ (t), τ ˙ (t)) = Sym[τ (t)e8T , (τ − τ (t))e9T ]U1 [e1T − (1 − τ˙ (t))e2T , (1 − T τ˙ (t))e2 − e3T ]T , .Ω5 (τ ˙ (t)) = τ˙ (t){e8T U3 e8 − e9T U2 e9 + Sym(e9T U2 e9 − e8T U3 e8 )} + (1 − τ˙ (t)){−Σ5T Q 1 Σ5 + Sym(e9T U2 e2 − e8T U3 e2 )}, ¯i , , A¯ i , + a1 H¯ 1,i , B¯i , + a2 H¯ 2,i , a3 H¯ 3,i ], .Υi = [−C T T T T T T T T T T T T T T T .Σ1 = [e1 , e4 , e5 , e7 ] , .Σ2 = [e1 , e4 ] , .Σ3 = [e3 , e6 ] , .Σ4 = [e1 , e2 , e8 ] , T T T T T T T T T T T T T T .Σ5 = [e2 , e5 ] , .Σ6 = [e2 , e3 , e9 ] , .Σ7 = [e2 − e3 , e2 + e3 − 2e9 , e2 − e3 − T T T T T T T T T T T ¯ 12e11 ]( ,.Σ8 )= [e1 − e2[, e1 + e2 − 2e8 ,]e1 − e2 − 12e10 ] ,. S = diag{S, 3S, 5S},

S¯ X −Rα1 E¯ 1 Rα1 E¯ 2 , .Ξα1 = (α1 = 1, 2, 3), ¯ ∗ −Rα1 ∗ S T T .ΛLi = [(ΥLi − Υi ) 0 · · · 0] , .ΛLi = [(ΥLi − Υi ) 0 · · · 0] , .ΛLi = [(ΥLiK − ~ ~~ ~ ~ ~~ ~ 2 2 1 1 K1 1 .

Ξ¯ =

10

10

(α1 ) Υi ) ~0 ·~~ · · 0~]T , . H¯ α1 ,i = H˜ α1 ,i ⊗ Γα1 ,i (α1 = 1, 2, 3), . H˜ α1 ,i = M Z 11,i J (α1 = 10

1, 2, 3), . M and . J are defined in Lemma 4.2. i /= ∅, there are positive definite matrices If .i ∈ Gki (Gki = {Li1 , . . . , LiK2 }) and .Guk i i i (m−1)n×(m−1)n . Vi jQ ∈ R (i, j ∈ Gk , Q ∈ Guk ) and .ΥQ − Υ j ≥ 0(∀ j ∈ Guk ), such that

62

4 Local Synchronization of Markovian Nonlinearly Coupled …

⎛ ⎜ ⎜ ⎜ ˜ δ ˜i = ⎜ .Θ ⎜ ⎜ ⎜ ⎝

Q Ω˜˜ iδ ΛLi

Q

1



Q

ΛLi

· · · ΛLi

2

∗ −ViLi1 Q 0 ∗ ∗ −ViLi2 Q .. .. .. . . . ∗ ∗ ∗

⎟ ⎟ ⎟ ⎟ ⎟ < 0, ⎟ ⎟ ⎠

K2

··· 0 ··· 0 .. .. . . · · · −ViLiK

2

(4.11)

Q



⎞ Z 1 Z 2 N1 . ⎝ ∗ Z 3 N2 ⎠ ≥ 0, ∗ ∗ S

(4.12)



⎞ Y1 Y2 M1 . ⎝ ∗ Y3 M2 ⎠ ≥ 0, ∗ ∗ S ( .

where

.

S¯ = diag{S, 3S, 5S},

S¯ X ∗ S¯

) ≥ 0,

Q

ΛLi = [(ΥLi1 − ΥQ )

.

(4.13)

1

(4.14) 0 · · 0~]T , ~ ·~~

Q ΥQ ) 0~ ·~~ · · 0~]T , .ΛLi = [(ΥLiK − ΥQ ) 0~ ·~~ · · 0~]T , .Ω˜˜ 2i = 2 K2 1 T e 4 1



10

j∈Gki

10

10 e1T



Q

ΛLi = [(ΥLi2 −

.

2

π˜ i j (Υ j − ΥQ )e1 +

j∈Gki

ϖi2j Vi jQ e1 . In matrix .Ω˜˜ iδ (δ = 1, 2, 3, 4), only the element .Ω˜˜ 2i is dif-

ferent from .Ω˜ 2i . The other elements in .Ω˜˜ iδ (δ = 1, 2, 3, 4) are the same as ˜ iδ (δ = 1, 2, 3, 4). .Ω i If .i ∈ Gki and .Guk = ∅, there are positive definite matrices .Wi j ∈ (m−1)n×(m−1)n R (i, j ∈ Gki ), such that ⎛ ˜ Ω˜˜ δ Λ1 ⎜ i ⎜ ∗ −Wi1 ⎜ .. ⎜ .. . . ˜˜ δ ⎜ ⎜ ˜i = ⎜ ∗ .Θ ∗ ⎜ ⎜ ∗ ∗ ⎜ ⎜ . .. ⎝ .. . ∗ ∗

· · · Λi−1 Λi+1 ··· 0 0 .. .. .. . . . · · · −Wi(i−1) 0 ··· ∗ −Wi(i+1) .. .. ··· . . ··· ∗ ∗

· · · ΛN ··· 0 .. ··· . ··· 0 ··· 0 .. .. . . · · · −Wi N

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ < 0, ⎟ ⎟ ⎟ ⎟ ⎠

(4.15)

4.3 Main Results

63



⎞ Z 1 Z 2 N1 . ⎝ ∗ Z 3 N2 ⎠ ≥ 0, ∗ ∗ S

(4.16)



⎞ Y1 Y2 M1 . ⎝ ∗ Y3 M2 ⎠ ≥ 0, ∗ ∗ S (

S¯ X ∗ S¯

.

where

.

S¯ = diag{S, 3S, 5S},

(4.17)

) ≥ 0,

Λ1 = [(Υ1 − Υi )

.

(4.18) · · 0~]T , ~0 ·~~

Λi−1 = [(Υi−1 −

.

10

Υi ) ~0 ·~~ · · 0~]T , .Λi+1 = [(Υi+1 − Υi ) ~0 ·~~ · · 0~]T , .ΛN = [(ΥN − Υi ) ~0 ·~~ · · 0~]T , 10

˜˜ ˜ 2i = e1T .Ω

10

N ∑ j=1, j/=i

π˜ i j (Υ j − Υi )e1 +

10

N ∑

1 T e 4 1

j=1, j/=i

ϖi2j Wi j e1 .

˜ ˜ In matrix.Ω˜˜ iδ (δ = 1, 2, 3, 4), only the element.Ω˜˜ 2i is different from.Ω˜ 2i . The other ˜ elements in .Ω˜˜ iδ (δ = 1, 2, 3, 4) are the same as .Ω˜ iδ (δ = 1, 2, 3, 4). Proof Consider the following Lyapunov–Krasovskii functional for system (4.6)

.

6 ∑

V (e(t), t, i) =

[Vq (e(t), t, i)].

q=1

[{t ] ] Ψ y(s)ds Ψ y(t) (t) Let .η(t) = , .ϑ(s) = { t−τ . t−τ (t) Ψ g(y(t)) Ψ y(s)ds [

t−τ

.

V1 (y(t), t, i) = y T (t)Ψ T Υi Ψ y(t), {

.

V2 (y(t), t, i) = +

t

(4.19)

η T (s)Q 1 η(s)ds

t−τ (t) { t T

η (s)Q 2 η(s)ds,

(4.20)

t−τ

{ .

V3 (y(t), t, i) =

t t−τ

{ (Ψ y(s))T ds R

t

t−τ

Ψ y(s)ds,

(4.21)

64

4 Local Synchronization of Markovian Nonlinearly Coupled …

{ .

V4 (y(t), t, i) = +

.

0

{

t

−τ t+θ { 0{ t −τ

(Ψ y˙ (s))T SΨ y˙ (s)dsdθ,

(Ψ g(y(s)))T T Ψ g(y(s))dsdθ,

V5 (y(t), t, i) = ϑT (s)U1 ϑ(s),

( . V6 (y(t), t, i) = (τ − τ (t)) (

1 τ − τ (t) { t−τ (t)

{

t−τ (t)

(4.23) )T

Ψ y(s)ds

t−τ

) ( 1 Ψ y(s)ds + τ (t) τ (t) t−τ ) )T ( { t { t 1 Ψ y(s)ds , Ψ y(s)ds U3 × τ (t) t−τ (t) t−τ (t)

× U2

(4.22)

t+θ

1 τ − τ (t)

(4.24)

Let .L be the weak infinitesimal operator. Calculating the time derivative of V (y(t), t, i), we have

.

LV1 (y(t), t, i) = 2y T (t)Ψ T Υi Ψ [−C¯i y(t) + A¯ i g(y(t)) + B¯i g(y(t − τ (t))) + J¯(t) + a1 Γ¯1,i g(y(t)) + a2 Γ¯2,i g(y(t − τ (t))) { t N ∑ ¯ g(y(s))ds] + πi j (Ψ y(t))T Υ j Ψ y(t). + a3 Γ3,i

.

t−τ (t)

j=1

, , According to the structure of .Ψ , one has .Ψ C¯i = C¯i Ψ , .Ψ A¯ i = A¯ i Ψ , .Ψ B¯i = , B¯i Ψ , .Ψ J¯(t) = 0, .Ψ Γ¯α1 ,i = ( M˜ ⊗ In )(G i(α1 ) ⊗ Γα1 ,i ) = ( M˜ G i(α1 ) ) ⊗ Γα1 ,i = ˜ ⊗ Γα1 ,i = ( H˜ α1 ,i ⊗ Γα1 ,i )( M˜ ⊗ In ) = H¯ α1 ,i Ψ, α1 = 1, 2, 3. ( H˜ α1 ,i M) Therefore, we have , , LV1 (y(t), t, i) = 2(Ψ y(t))T Υi [−C¯i Ψ y(t) + A¯ i Ψ g(y(t))

.

, + B¯i Ψ g(y(t − τ (t))) + a1 H¯ 1,i Ψ g(y(t)) { t + a2 H¯ 2,i Ψ g(y(t − τ (t))) + a3 H¯ 3,i Ψ

g(y(s))ds]

t−τ (t)

+

N ∑ j=1

πi j (Ψ y(t))T Υ j Ψ y(t),

(4.25)

4.3 Main Results

65

LV2 (y(t), t, i) = η T (t)Q 1 η(t) − (1 − τ˙ (t))η T (t − τ (t))Q 1 η(t − τ (t))

.

+ η T (t)Q 2 η(t) − η T (t − τ )Q 1 η(t − τ ), { LV3 (y(t), t, i) = 2

t

.

(Ψ y(s))T ds R(Ψ y(t) − Ψ y(t − τ )),

(4.26)

(4.27)

t−τ

{ LV4 (y(t), t, i) ≤ τ (Ψ y˙ (t))T SΨ y˙ (t) −

t

(Ψ y˙ (s))T SΨ y˙ (s)ds t−τ { 1 t T (Ψ g(y(s)))T ds + τ (Ψ g(y(t))) T Ψ g(y(t)) − τ t−τ (t) { t ×T Ψ g(y(s))ds, (4.28)

.

t−τ (t)

[

τ (t)e8 .LV5 (y(t), t, i) = 2ξ (t) (τ − τ (t))e9 T

]T

[ U1

] e1 − (1 − τ˙ (t))e2 ξ(t), (1 − τ˙ (t))e2 − e3

(4.29)

LV6 (y(t), t, i) = −τ˙ (t)ξ T (t)e9T U2 e9 ξ(t) + 2ξ T (t)e9T U2 [(1 − τ˙ (t))e2

.

− e3 + τ˙ (t)e9 ]ξ(t) + τ˙ (t)ξ T (t)e8T U3 e8 ξ(t) + 2ξ T (t)e8T U3 [e1 − (1 − τ˙ (t))e2 − τ˙ (t)e8 ]ξ(t).

(4.30)

According to Lemmas 4.3, 4.4, and reciprocally convex approach in [102], we get { .



t

{ (Ψ y˙ (s))T SΨ y˙ (s)ds = −ρ1

t−τ

{ − ρ1 − ρ2

t

t−τ (t) t−τ (t)

(Ψ y˙ (s))T SΨ y˙ (s)ds

(Ψ y˙ (s))T SΨ y˙ (s)ds

t−τ { t

(Ψ y˙ (s))T SΨ y˙ (s)ds

t−τ (t) { t−τ (t)

(Ψ y˙ (s))T SΨ y˙ (s)ds ( ) 1 ≤ ρ1 τ (t)ξ T (t)Σ4T Z 1 + Z 3 Σ4 ξ(t) 3

− ρ2

t−τ

+ 2ρ1 ξ T (t){Σ4T (N1 (e1 − e2 ) + N2 (−e1 − e2 + 2e8 ))}ξ(t) ( ) 1 + ρ1 (τ − τ (t))ξ T (t)Σ6T Y1 + Y3 Σ6 ξ(t) 3 + 2ρ1 ξ T (t){Σ6T (M1 (e2 − e3 ) + M2 (−e2 − e3 + 2e9 ))}ξ(t) − ρ2

1 ¯ 7 ξ(t) − ρ2 1 ξ T (t)Σ8T SΣ ¯ 8 ξ(t) ξ T (t)Σ7T SΣ τ − τ (t) τ (t)

66

4 Local Synchronization of Markovian Nonlinearly Coupled … 1 Z 3 )Σ4 ξ(t) 3 T T + 2ρ1 ξ (t){Σ4 (N1 (e1 − e2 ) + N2 (−e1 − e2 + 2e8 ))}ξ(t) ( ) 1 + ρ1 (τ − τ (t))ξ T (t)Σ6T Y1 + Y3 Σ6 ξ(t) 3 ≤ ρ1 τ (t)ξ T (t)Σ4T (Z 1 +

+ 2ρ1 ξ T (t){Σ6T (M1 (e2 − e3 ) + M2 (−e2 − e3 + 2e9 ))}ξ(t) [ ]T [ ] ρ2 Σ7 Σ7 − ξ T (t) (4.31) ξ(t), Ξ¯ Σ8 Σ8 τ

⎛ ⎞ ⎞ Y1 Y2 M1 Z 1 Z 2 N1 . ⎝ ∗ Z 3 N2 ⎠ ≥ 0, ⎝ ∗ Y3 M2 ⎠ ≥ 0, ∗ ∗ S ∗ ∗ S ⎛

where

.

Ξ¯ ≥ 0.

From Assumption 4.1, we get [ η T (t)

.

] −R1 E¯ 1 R1 E¯ 2 η(t) ≥ 0, ∗ −R1

(4.32)

[

] −R2 E¯ 1 R2 E¯ 2 .η (t − τ (t)) η(t − τ (t)) ≥ 0, ∗ −R2 T

(4.33)

[

] −R3 E¯ 1 R3 E¯ 2 .η (t − τ ) η(t − τ ) ≥ 0. ∗ −R3 T

(4.34)

Then, combining (4.19)–(4.34), we get E{LV (y(t), t, i)} ≤ E{ξ T (t)Ωi (τ (t), τ˙ (t))ξ(t)},

.

T T T where .ξ(t)={(Ψ y(t))T , (Ψ y(t g(y(t − { t − τ (t))) , (Ψ y(tT− τ1 )) {,t (Ψ g(y(t))) , (Ψ T T τ (t)))) , (Ψ g(y(t − τ ))) , ( t−τ (t) Ψ g(y(s))ds) , τ (t) ( t−τ (t) Ψ y(s)ds)T , τ −τ1 (t) { t−τ (t) {t { t−τ (t) 1 1 T ( t−τ Ψ y(s)ds)T , (τ (t)) 2 ( t−τ (t) ((s − t + 0.5τ (t))Ψ y(s))ds) , (τ −τ (t))2 ( t−τ ((s − t + 0.5τ + 0.5τ (t))Ψ y(s))ds)T }T . So, we obtain .E{Ωi (τ (t), τ˙ (t))} < 0, if and only if .E{Ωiδ } < 0 (δ = 1, 2, 3, 4), where .Ωi (τ (t), τ˙ (t)) = Ω¯ i (τ (t), τ˙ (t)) + ¯ i (τ (t), τ˙ (t)) = Ω1i + Ω3 (τ (t)) + Ω4 (τ (t), τ˙ (t)) + Ω5 (τ˙ (t)), .Ω .Ω2i = Ω2i , N ∑ T δ δ πi j Υ j e1 , .Ωi = Ω¯ i + Ω2i , e1 j=1

Ω¯ i1 = Ω¯ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ1 , .Ω¯ i2 = Ω¯ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ2 , ¯ i3 = Ω¯ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ1 , .Ω¯ i4 = Ω¯ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ2 . .Ω Applying Dynkin’s formula, we have .

4.3 Main Results

67

{{

t

E{V (y(t), t, i)} − E{V (y(0), 0, i 0 )} = E

.

} LV (y(s), s, rs )ds .

(4.35)

0

According to [179], there is positive constant .β such that βE{||Ψ y(t)||2 } ≤ E{V (y(t), t, i)}

.

{{

t

= E{V (y(0), 0, i 0 )} + E

} LV (y(s), s, rs )ds

0

{

t

≤ E{V (y(0), 0, i 0 )} − γ

E{||Ψ y(t)||2 }ds.

(4.36)

0

Thus, according to Definition 4.2, the system (4.6) is locally asymptotically synchronized in mean square. Let .Θiδ = e1 Ω¯ iδ e1T (δ = 1, 2, 3, 4). i If .i ∈ / Gki and .Gki /= ∅.∑For any . j ∈ Guk , and . j /= i, there exists .Υi − Υ j ≥ 0, and ∑ . πi j = −πii − πi j . Thus, we have i j∈Guk , j/=i

j∈Gki

e Ωiδ e1T = Θiδ +



. 1



( πi j Υ j + πii Υi +

− πii −

j∈Gki

= Θiδ +

∑ ∑



) πi j Υi

j∈Gki

πi j (Υ j − Υi )

j∈Gki

= Θiδ +

πi j Υ j

i j∈Guk , j/=i

j∈Gki

≤ Θiδ +



πi j Υ j + πii Υi +

π˜ i j (Υ j − Υi ) +

j∈Gki



Δi j (Υ j − Υi ).

(4.37)

j∈Gki

According to Lemma 4.5, one gets ∑ .

Δi j (Υ j − Υi ) ≤

j∈Gki

∑ (( 1 2

j∈Gki



∑ (1 j∈Gki

4

)2 Δi j

Hi j + (Υ j −

Υi )Hi−1 j (Υ j

) − Υi )

) ϖi2j Hi j + (Υ j − Υi )Hi−1 (Υ − Υ ) . j i j

(4.38)

( ∑ ∑ 1 2 δ holds if .Θi + π˜ i j (Υ j − Υi ) + ϖ H + (Υ j − 4 ij ij j∈Gki j∈Gki ) −1 Υi )Hi j (Υ j − Υi ) < 0. From Schur complement, one gets (4.7)–(4.10).

e Ωiδ e1T < 0

. 1

i i i /= ∅. There exist .Q ∈ Guk , and .ΥQ − Υ j ≥ 0 for any . j ∈ Guk . If .i ∈ Gki and .Guk

68

4 Local Synchronization of Markovian Nonlinearly Coupled …

e Ωiδ e1T = Θiδ +



πi j Υ j +

. 1

j∈Gki

≤ Θiδ +



πi j Υ j +



πi j ΥQ

i j∈Guk



πi j Υ j −

j∈Gki

= Θiδ +

πi j Υ j

i j∈Guk

j∈Gki

= Θiδ +





πi j ΥQ

j∈Gki



π˜ i j (Υ j − ΥQ ) +

j∈Gki

≤ Θiδ +





Δi j (Υ j − ΥQ )

j∈Gki

π˜ i j (Υ j − ΥQ ) +

j∈Gki

∑ (1 j∈Gki

)

4

ϖi2j Vi jQ

+ (Υ j − ΥQ Vi−1 jQ (Υ j − ΥQ )).

(4.39)

According to Schur complement, one gets (4.11)–(4.14). i = ∅. If .i ∈ Gki and .Guk e Ωiδ e1T = Θiδ +

N ∑

. 1

πi j Υ j + πii Υi

j=1, j/=i

= Θiδ +

N ∑

πi j (Υ j − Υi )

j=1, j/=i

= Θiδ +

N ∑

(π˜ i j + Δi j )(Υ j − Υi )

j=1, j/=i

≤ Θiδ +

N ∑

π˜ i j (Υ j − Υi ) +

j=1, j/=i

+ (Υ j −

Υi )Wi−1 j (Υ j

) − Υi ) .

( N ∑ 1 2 ϖi j Wi j 4 j=1, j/=i (4.40)

According to Schur complement, one gets (4.15)–(4.18). Remark 4.3 In (4.31), the .ρ1 and .ρ2 are introduced to deal with {t − t−τ (Ψ y˙ (s))T SΨ y˙ (s)ds. When .ρ1 = 1 and .ρ2 = 0, this integral inequality reduces to the free-matrix-based integral inequality [191]. When .ρ1 = 0 and .ρ2 = 1, this integral inequality reduces to the further improved integral inequality [60]. Hence, by using .ρ1 and .ρ2 , the two less conservative integral inequalities are combined together.

.

Remark 4.4 Due to the engineering applications, such as brain science, reputation evaluation and secure communication [123, 185], the local synchronization issues

4.3 Main Results

69

have been studied in recent years. However, there are few local synchronization conditions on Markovian complex networks with uncertain and partially unknown transition rates, which motives our study. In addition, when .m = N , the local synchronization becomes global synchronization. Therefore, the global synchronization is a special case in this chapter. Remark 4.5 Different from the existing approaches in [19, 60, 139, 171, 179, 191, 196, 201], in order to obtain less conservative local synchronization criteria, a novel integral inequality to deal with the integral term { t we propose T .− (Ψ y ˙ (s)) SΨ y ˙ (s)ds, which includes the celebrated free-matrix-based intet−τ gral inequality and further improved integral inequality. Remark 4.6 In [103], the novel auxiliary function-based single integral inequalities and double integral inequalities were proposed, respectively. The auxiliary function-based single integral inequalities contain Wirtinger-based integral inequality of [113] as a special case. When we substitute the auxiliary function-based single integral inequality in Lemma 5.1 { t of [103] for the further improved integral inequality [60] to deal with .− t−τ (Ψ y˙ (s))T SΨ y˙ (s)ds, the corresponding local synchronization criteria can be obtained. In addition, according to [103], in order to obtain the less conservative synchronization cri{0 {0{t teria, .V7 (y(t), t, i) = − −τ ω t+θ (Ψ y˙ (s))T SΨ y˙ (s)dsdθdω and .V8 (y(t), t, i) = {0 {ω {t − −τ −τ t+θ (Ψ y˙ (s))T SΨ y˙ (s)dsdθdω can be added in Lyapunov–Krasovskii functional for system (4.6), and they can be solved by using auxiliary function-based double integral inequalities. Remark 4.7 In many actual engineering and applications, the maximum allowable delay .τ is valuable and meaningful. Hence, we set .δ¯ = 1/τ in Theorem 4.1. The optimal value can be obtained through following optimization procedure: min

ρ , ρ ∈[0,1] . 1 2

s.t.

δ¯ (4.7)−(4.18).

Remark 4.8 In the communication networks, the limited bandwidth of the communication channel inevitably leads to some network-induced phenomena. In [156], the reliable . H∞ filtering was studied for discrete-time piecewise linear systems subject to sensor failures and infinite distributed delays. In [33], the receding horizon filtering problem was addressed for a class of discrete time-varying nonlinear systems with multiple missing measurements, and a novel estimation scheme of state covariance matrices was proposed. In this chapter, when considering network-induced phenomena, such as missing measurements, randomly occurring discrete delay coupling and infinite distributed delay are taken into account, the dimensions of the states and measurements are too high, which gives rise to too much difficulty in mathematical analysis. Motivated by [33, 34, 83, 144, 156], the state estimation problem of Markovian nonlinearly coupled neural networks with network-induced phenomena will be carried out in the near future.

70

4 Local Synchronization of Markovian Nonlinearly Coupled …

4.4 Simulation In this section the effectiveness of the proposed method is verified by taking a simulation example. Let’s consider the following Markovian neural networks with time-varying delays .

y˙ (t) = −Ci y(t) + Ai g(y(t)) + Bi g(y(t − τ (t))) + J (t), i = 1, 2, 3,

(4.41)

] ] [ y1 (t) tanh(y1 (t)) where . y(t) = , .g(y(t)) = , .τ (t) = 0.8 + 0.2sin(t), y (t) tanh(y2 (t)) )2 ) ) ) ( ( ( ( 1.0 0 0.31 −0.2 −4.2 −5.1 1.1 0 .C 1 = , . A1 = , . B1 = , .C2 = , 0 1.0 −0.5 0.31 −2.2 −5.2 0 1.0 ( ( ( ) ) ) 0.31 −0.3 −4.0 −4.8 1.1 0 . A2 = , . B2 = , .C 3 = , . A3 = −0.5 0.31 −2.1 −4.8 0 1.1 ( ) ( [ ) ] 0.3 −0.2 −3.8 −4.9 0.6 , . B3 = , . J (t) = . The trajectories of system −0.4 0.3 −2.2 −5.1 −1.7 [ ] −0.6 (4.41) with initial values . y(t) = are given in Figs. 4.1, 4.2 and 4.3. 0.6 Now, consider Markovian nonlinearly coupled neural networks [

y˙ (t) = −Ci yk (t) + Ai g(yk (t)) + Bi g(yk (t − τ (t))) + Jk (t)

. k

+ 0.5

4 ∑

G (1) kl,i Γ1,i g(yl (t)) + 0.2

l=1

+ 0.3

4 ∑ l=1

Fig. 4.1 State trajectory of the mode .i = 1 for neural network system (4.41)

G (3) kl,i Γ3,i

{

4 ∑

G (2) kl,i Γ2,i g(yl (t − τ (t)))

l=1 t t−τ (t)

g(yl (s))ds, k = 1, 2, 3, 4.

(4.42)

4.4 Simulation

71

Fig. 4.2 State trajectory of the mode .i = 2 for neural network system (4.41)

Fig. 4.3 State trajectory of the mode .i = 3 for neural network system (4.41)



.

1) G (α 1

13.6 −15.2 0.02 0.02

0.8 0.8 −0.05 0.01

⎞ 0.8 0.8 ⎟ ⎟, 0.01 ⎠ −0.05

14.0 −17.2 0.015 0.015

1.6 1.6 −0.06 0.03

⎞ 1.6 1.6 ⎟ ⎟, 0.03 ⎠ −0.06

−16.5 13.5 ⎜ 13.5 −16.5 =⎜ ⎝ 0.01 0.01 0.01 0.01

1.5 1.5 −0.04 0.02

⎞ 1.5 1.5 ⎟ ⎟, 0.02 ⎠ −0.04

−15.2 ⎜ 13.6 =⎜ ⎝ 0.02 0.02 ⎛

.

1) G (α 2

−17.2 ⎜ 14.0 =⎜ ⎝ 0.015 0.015 ⎛

.

1) G (α 3

72

4 Local Synchronization of Markovian Nonlinearly Coupled …

Fig. 4.4 State trajectory of the first node . y1 (t) in system (4.42)

where .α1 = ( 1, 2, 3. ) ( ) ( ) 4.1 0 0.1 0 0.5 0 .Γ1,i = , .Γ2,i = , .Γ3,i = , .i = 1, 2, 3. 0 4.1 0 0.1 0 0.5 ( ) 00 , . E2 = The activation function satisfies Assumption 4.1 and . E 1 = 00 ( ) 0.5 0 . We obtain .τ = 1.0, .μ1 = −0.2, .μ2 = 0.2. 0 0.5 ⎛ ⎞ ? ? 2.5+ Δ13 ⎠ , where ? The transition matrix is given as .Π = ⎝ ? −4.9+ Δ22 ? ? −3.2+ Δ33 .ϖ1i = 0.19, .Δ13 ∈ [−0.19, 0.19], .ϖ2i = 0.18, .Δ22 ∈ [−0.18, 0.18], .ϖ3i = 0.15, .Δ33 ∈ [−0.15, 0.15]. When .ρ1 = 0.7, .ρ2 = 0.3 and .τ = 0.9, by using Matlab LMI Toolbox, a part of feasible ( solutions are obtained ) as follows: ( ) 0.0030 −0.0000 0.0028 −0.0000 .Υ1 = , .Υ2 = , .Υ3 = −0.0000 0.0047 −0.0000 0.0045 ( ) 0.0029 −0.0000 , −0.0000 0.0046 ⎛ ⎞ 0.0005 −0.0004 0.0017 −0.0000 ⎜ −0.0004 0.0006 0.0007 0.0040 ⎟ ⎟ .Q1 = ⎜ ⎝ 0.0017 0.0007 0.0983 0.1051 ⎠ , −0.0000 0.0040 0.1051 0.2283 ⎛ ⎞ 0.0004 −0.0003 0.0004 −0.0003 ⎜ −0.0003 0.0005 0.0002 0.0007 ⎟ ⎟ .Q2 = ⎜ ⎝ 0.0004 0.0002 0.0131 −0.0074 ⎠ . −0.0003 0.0007 −0.0074 0.0145 According to Theorem 4.1 and Figs. 4.4, 4.5, 4.6 and 4.7, we obtain that the first two nodes are asymptotically synchronized in mean square, while the whole network are not achieved. This shows that dynamical system (4.42) is locally asymptotically synchronized in mean square. The synchronization error of the first two nodes and

4.4 Simulation Fig. 4.5 State trajectory of the second node . y2 (t) in system (4.42)

Fig. 4.6 Synchronization error .e1 (t) of the first two nodes for system (4.42)

Fig. 4.7 Global error .e2 (t) for system (4.42)

73

74

4 Local Synchronization of Markovian Nonlinearly Coupled …

the global synchronization error of the complex networks (4.42) are illustrated in Figs. 4.6 and 4.7, which are calculated by 2 √ ∑ .e1 (t) = (y1i − y2i )2 , i=1

┌ 2 |∑ ∑ | 4 | (y1i − y ji )2 . .e2 (t) = i=1

j=2

4.5 Conclusion In this chapter, the less conservative local synchronization criteria are obtained for Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates based on Lyapunov–Krasovskii functional, a new integral inequality containing with free-matrix-based integral inequality and further improved integral inequality. In addition, the coupling configuration matrices are not restricted to be symmetric, and they are changed by Markovian chain. Finally, a simulation example is given to demonstrate the effectiveness of the theoretical results.

Chapter 5

Sampled-Data Synchronization of Complex Networks Based on Discontinuous Lyapunov-Krasovskii Functional

In this chapter, we investigate the sampled-data synchronization problem of delayed complex networks with aperiodic sampling interval based on enhanced input delay approach. By introducing an improved discontinuous Lyapunov-Krasovskii functional, the new delay-dependent synchronization criteria are obtained by adopting Wirtinger’s integral inequality and mixed convex combination, which fully utilizes the upper bound on variable sampling interval and the sawtooth structure information of varying input delay. The derived criteria are less conservative than the existing ones. In addition, the desired sampled-data controllers are obtained by solving a set of LMIs. Finally, numerical examples are provided to demonstrate the feasibility of the proposed method.

5.1 Introduction With the rapid development of computer hardware, the sampled-data control technology has shown more and more superiority over other control approaches [67]. In the sampled-data control, the control signals are updated only at sampling instants and kept constant during the sampling period. Moreover, choosing proper sampling interval in sampled-data control systems is more important for designing suitable controllers. Consequently, sampled-data systems have been investigated extensively [19, 39, 67, 77, 115, 157, 164, 166, 170, 171, 178]. In [39], sampled-data control problem of linear systems was investigated by using input delay method. In [77], the less conservative sampled-data stabilization criteria were obtained by using Wirtinger’s integral inequality. In [164], the synchronization problem of neural networks with time-varying delay under sampled-data control was studied. After that, the sampled-data control was investigated for T-S fuzzy systems with aperiodic sampling interval in [178]. In [67], the sampled-data synchronization control was investigated for complex networks with time-varying coupling delay based on input delay approach. Based on [67], the sampled-data synchronization of complex dynamical © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2_5

75

76

5 Sampled-Data Synchronization of Complex Networks …

networks with time-varying coupling delay was studied in [166], and the less conservative synchronization results were obtained. However, in [67, 166], there is no constraint on the input delay derivative induced by sampling, and only the traditional Lyapunov-Krasovskii functional and Jensen’s inequality are used to obtain the synchronization criteria. In this case, the sawtooth structure and all available information about the actual sampling pattern are overlooked, which inevitably leads to the conservatism of the results. After that, the less conservative sampled-data synchronization criteria were obtained for complex dynamical networks with time-varying delay and uncertain sampling using novel time-dependent Lyapunov-Krasovskii functional in [170]. In this chapter, the discontinuous Lyapunov-Krasovskii functional .Vq (e(t))(q = 5, 6, 7) that contain .t p and .t p+1 are adopted, which could make full use of the sawtooth structure characteristic of sampling input delay. In addition, the mixed convex combination technique is utilized, which could introduce the free matrices. Hence, the less conservative synchronization criteria are obtained in this chapter. Motivated by the above discussions, we study the problem of sampled-data synchronization for complex dynamical networks based on enhanced input delay approach. In addition, the discontinuous Lyapunov-Krasovskii functional, Wirtinger’s integral inequality and mixed convex combination included linear convex combination and reciprocally convex combination are employed in order to fully utilize the information of actual sampling pattern. Hence, the less conservative sampled-data synchronization criteria are obtained. Finally, numerical examples are given to show the effectiveness of the theoretical results. The rest of this chapter is organized as follows. In Sect. 5.2, some preliminaries and complex networks with time-varying are introduced. In Sect. 5.3, the novel sampled-data synchronization criteria for complex networks are obtained. In Sect. 5.4, numerical simulations are given to demonstrate the effectiveness of the proposed results. Finally, conclusions are drawn in Sect. 5.5. Notation Throughout this chapter,. R n and. R n×n denote the.n-dimensional Euclidean space and the set of all .n × n real matrices respectively. The .|| · || stands for the Euclidean vector norm. The symbol .⊗ represents Kronecker product. . X T denotes the transpose of matrix . X . . X ≥ 0 (. X < 0), where . X ∈ R n×n , means that . X is real symmetric positive semidefinite matrix (negative definite matrix). . In×n represents the n×n .n-dimensional identity matrix. For a matrix. A ∈ R ,.λmin (A) denotes the minimum ( ( ) ) X Y X Y stands for . . Matrices, if their dimensions are not eigenvalue of . A. . ∗ Z YT Z explicitly stated, are assumed to be compatible for algebraic operations.

5.2 Problem Formulation

77

5.2 Problem Formulation Consider the following complex dynamical networks consisting of . N dynamical nodes. Each node of the networks is an .n-dimensional dynamical system. y˙ (t) = g(yi (t)) + c

N ∑

. i

G i j Γ y j (t − τ (t)) + u i (t), i = 1, 2, · · · , N ,

(5.1)

j=1

where . yi (t) = [yi1 (t), yi2 (t), . . . , yin (t)]T ∈ R n is the state vector of the .ith node. g(yi (t)) = [g1 (yi1 (t)), g2 (yi2 (t)), . . . , gn (yin (t))]T ∈ R n is continuous vector valued function. .c > 0 is coupling strength. .τ (t) is interval time-varying delay and satisfies .0 ≤ τ1 ≤ τ (t) ≤ τ2 , .τ˙ (t) ≤ h, where .τ1 , τ2 .(τ1 < τ2 ), and .h are constants. .Γ = diag[γ1 , γ2 , · · · , γn ] is the inner coupling matrix with .γi ≥ 0, i = 1, 2, · · · , n. . G is the coupling configuration matrix representing the topological structure of network and satisfies the following condition: if there exists a connection from node . j to node .i (. j /= i), then .G i j > 0; otherwise, .G i j = 0; and the diagonal elements of matrix .G are defined by .

.

G ii = −

N ∑

G i j , i = 1, 2, · · · , N .

j=1, j/=i

In this chapter, consider the following state trajectory of the isolate node s˙ (t) = g(s(t)).

.

(5.2)

By defining the error signal as .ei (t) = yi (t) − s(t), the error dynamic of complex networks are obtained as follows e˙ (t) = h(ei (t)) + c

N ∑

. i

G i j Γ e j (t − τ (t)) + u i (t), i = 1, 2, · · · , N ,

(5.3)

j=1

where .h(ei (t)) = g(yi (t)) − g(s(t)). The control signal is generated by using a zero-order-hold (ZOH) function with a sequence of hold times .0 = t0 < t1 < · · · < t p < · · · . The sampled-data control input may be represented as discrete control signal u (t) = u di (t p ) = u di (t − (t − t p )) = u di (t − d(t)), t ∈ [t p , t p+1 ),

. i

(5.4)

where .i = 1, 2, · · · , N , .d(t) = t − t p . The delay .d(t) is a piecewise linear function ˙ = 1. Sampling interval .d p satisfies the following condition and .d(t) 0 ≤ d(t) = t − t p ≤ d p = t p+1 − t p ≤ d,

.

(5.5)

78

5 Sampled-Data Synchronization of Complex Networks …

where .d > 0 is the largest sampling interval and bounded. The sampled-data state feedback controllers are designed u (t) = L i ei (t p ), t ∈ [t p , t p+1 ), i = 1, 2, · · · , N ,

. i

(5.6)

where . L i (i = 1, 2, · · · , N ) are the feedback controller gain matrices to be determined. In addition, we don’t require the sampling to be periodic. Substituting (5.6) into (5.3), we have the following closed-loop system e˙ (t) = g(ei (t)) + c

N ∑

. i

G i j Γ e j (t − τ (t)) + L i ei (t − d(t)), t ∈ [t p , t p+1 ), (5.7)

j=1

where .i = 1, 2, · · · , N . Assumption 5.1 ([154]) The nonlinear function . f : . R n → R n satisfies [g(x) − g(y) − U (x − y)]T [g(x) − g(y) − V (x − y)] ≤ 0, ∀x, y ∈ R n

.

where .U and .V are constant matrices of appropriate dimensions. dimensional vector, respecLemma 5.1 ([178]) Let .Y > 0 and .ω(s) be appropriate {t tively. Then, we have the following inequality:.− t12 ω T (s)Y ω(s)ds ≤ (t2 − t1 )η T (t) {t F T Y −1 Fη(t) + 2η T (t)F T t12 ω(s)ds, where the matrix . F and the vector .η(t) independent on the integral variable are arbitrary appropriate dimensional ones. Lemma 5.2 ([101, 104]) For matrices . Z i .(i = 1, 2, 3) with proper dimensions, Z 1 + αZ 2 + (1 − α)Z 3 < 0 holds for .∀α ∈ [0, 1] if and only if the following set of inequalities hold . Z 1 + Z 2 < 0, . Z 1 + Z 3 < 0.

.

Lemma 5.3 ([177]) For matrices .T , . R = R T > 0, scalars .d1 ≤ d(t) ≤ d2 , a vector function . y˙ : [−d2 , −d1 ] → R n such that the integration in the following inequality is well defined, then it holds that { (d1 − d2 )



t−d1

.

t−d2

⎤ −R R+T −T y˙ T (s)R y˙ (s)ds ≤ υ T (t) ⎣ ∗ −2R − T − T T R + T ⎦ υ(t), ∗ ∗ −R

[ ] ] −R T [ where .υ T (t) = y T (t − d1 ) y T (t − d(t)) y T (t − d2 ) , . ≤ 0. ∗ −R Lemma 5.4 ([77]) .z(t) ∈ W [a, b) denotes the space of functions .φ : [a, b] → R n , which are absolutely continuous on .[a, b). The functions in z(t) have a finite .lim θ→b− φ(θ) and have square integrable first-order derivatives with the norm {b 2 1/2 ˙ .||φ||W = maxθ∈[a,b] |φ(θ)| + [ . Furthermore, if .z(a) = 0, then for a |φ(s)| ds] any .n × n matrix . R > 0, the following inequality holds: { { 2 b T 2 b T .π a z (s)Rz(s)ds ≤ 4(b − a) a z˙ (s)R z˙ (s)ds.

5.3 Main Results

79

In this chapter, a set of sampled-data controls . L i (i = 1, 2, · · · , N ) are designed such that the error system (5.7) is asymptotically stable, that is, system (5.1) is sampled-data synchronization.

5.3 Main Results In this section, the sampled-data synchronization criteria of complex dynamical networks (5.1) are derived based on a novel discontinuous Lyapunov-Krasovskii functional and mixed convex combination technique. Let .e(t) = [e1T (t), e2T (t), . . . , e TN (t)]T , .g(e(t)) = [g T (e1 (t)), g T (e2 (t)), . . . , T g T (e N (t))]T , .Γ¯ = G ⊗ Γ , . L = diag[L 1 , L 2 , · · · , L N ], .U¯ = (I N ⊗U )2 (I N ⊗V ) + T T (I N ⊗V )T (I N ⊗U ) ¯ N ⊗V ) , .V = − (I N ⊗U ) +(I . Then, error dynamical (5.7) can be rewritten 2 2 as follows e(t) ˙ = g(e(t)) + cΓ¯ e(t − τ (t)) + Le(t p ).

.

(5.8)

Theorem 5.1 Under Assumption 5.1, error dynamical system (5.8) is asymptotically stable if there exist matrices . P > 0, . Q 1 > 0, . Q 2 > 0, . Q 3 > 0, .W1 > 0, .W2 > 0, . Z 1 > 0, . Z 2 > 0, . N1 > 0, . N2 > 0, . N3 > 0, and matrices .Y , . X 1 , . X 2 , . F1 , . F2 , and a scalar .σ > 0, such that the following matrix inequalities are satisfied [ ]T [ ] Π1 = Π + d 0n N ×n N In N ×n N 0n N ×6n N N1 0n N ×n N In N ×n N 0n N ×6n N ]T [ ] [ +2d 0n N ×2n N In N ×n N 0n N ×5n N N2 0n N ×2n N In N ×n N 0n N ×5n N 0, the nonlinear function.g(e(t)) satisfies the following inequality [ σ

.

e(t) g(e(t))

]T (

U¯ V¯ ∗ I

)[

] e(t) ≤ 0. g(e(t))

(5.31)

For any appropriate dimensional matrices . F1 and . F2 , the following equation holds (e T (t)F1 + e˙ T (t)F2 )[−e(t) ˙ + g(e(t)) + cΓ¯ e(t − τ (t)) + Le(t p )] = 0.

.

(5.32)

Then, for .t ∈ [t p , t p+1 ), combining(5.21)–(5.32), we get .

] [ V˙ (e(t)) ≤ ξ T (t) Ξ1 + (d − d(t))Ξ2 + d(t)Y N1−1 Y T ξ(t), d(t) ∈ [0, d], (5.33)

5.3 Main Results

83

] [ whereΞ1 = Ψ + Y8n N ×n N 08n N ×n N −Y8n N ×n N 08n N ×5n N . , ]T [ + Y8n N ×n N 08n N ×n N −Y8n N ×n N 08n N ×5n N ]T [ ] [ Ξ2 = 0n N ×n N In N ×n N 0n N ×6n N N1 0n N ×n N In N ×n N 0n N ×6n N . ]T [ ], [ + 2 0n N ×2n N In N ×n N 0n N ×5n N N2 0n N ×2n N In N ×n N 0n N ×5n N {

}

, .ξ(t) = e T (t), e˙ T (t), e T (t p ), e T (t − d), e T (t − τ1 ), e T (t − τ (t)), e T (t − τ2 ), g T (e(s)) . Let .Φ(d(t)) = Ξ1 + (d − d(t))Ξ2 + d(t)Y N1−1 Y T . According to Lemma 5.2 and .d(t) ∈ [0, d], we have .Φ(d(t)) < 0 if and only if Φ(0) = Ξ1 + dΞ2 < 0,

(5.34)

Φ(d) = Ξ1 + dY N1−1 Y T < 0.

(5.35)

.

.

According to the Schur complement, (5.34) is equivalent to (5.9), and (5.35) is equivalent to (5.10). .

V˙ (e(t)) ≤ −γ{||e(t)||2 }, t ∈ [t p , t p+1 )

where .γ = min{λmin (−Φ(0)), λmin (−Φ(d))} > 0. According to [77] and.V˙ (e(t)) < 0, error dynamical system (5.8) is asymptotically stable. The proof is completed. □ Remark 5.1 It can be found that .V5 (e(t)) is discontinuous at .t p , and . lim− V5 (e(t)) t→t p

≥ 0, .V5 (e(t p )) = 0. Hence, the condition . lim− V (e(t)) ≥ V (e(t p )) holds. From t→t p

Wirtinger’s integral inequality in Lemma 5.4, we obtain . lim− V7 (e(t)) ≥ 0 and t→t p

.

V7 (e(t p )) = 0. The domain of definition of .V (e(t)) is .t ∈ [t p , t p+1 ). .V (e(t)) is absolutely continuous for .t /= t p and satisfies . lim− V (e(t)) ≥ V (e(t p )). t→t p

Remark 5.2 In Lyapunov-Krasovskii functional, .Vq (e(t))(q = 5, 6, 7) are adopted in order to make full use of the sawtooth structure characteristic of sampling input delay. Thus, the conservatism of the synchronization criteria is further reduced. Remark 5.3 The delayed coupling term in [160] involves transmission delay and self-feedback delay, which is feedback with non-identical delay. In system (5.1) of this chapter, the transmission delay and self-feedback delay in the coupling term are feedback with identical delay .τ (t). Inspired by [160], if the coupling term in system (5.1) is feedback with non-identical delays, we have y˙ (t) = g(yi (t)) + c

N ∑

. i

j=1, j/=i

G i j Γ (y j (t − τ (t)) − yi (t − σ(t))) + u i (t),

84

5 Sampled-Data Synchronization of Complex Networks …

where .σ(t) is self-feedback delay, and .0 ≤ σ1 ≤ σ(t) ≤ σ2 , .σ(t) ˙ ≤ h 1 . According N ∑ to .G ii = − G i j , we have j=1, j/=i

y˙ (t) = g(yi (t)) + c

N ∑

. i

G i j Γ y j (t − τ (t)) − cG ii Γ yi (t − τ (t))

j=1

+ cG ii Γ yi (t − σ(t)) + u i (t). Let .G ii = −α and .Γ¯¯ = I N ⊗ Γ . We can have error system e(t) ˙ = g(e(t)) + cΓ¯ e(t − τ (t)) + cαΓ¯¯ e(t − τ (t)) − cαΓ¯¯ e(t − σ(t)) + Le(t p ).

.

In addition, we can obtain the corresponding synchronization criteria when we choose the following Lyapunov-Krasovskii functional .

V˜ (e(t)) = V (e(t)) + V˜1 (e(t)) + V˜2 (e(t)),

where ˜1 (e(t)) = .V

˜2 (e(t)) .V

{ t t−σ(t)

= σ1

e T (s)Q 1 e(s)ds +

{ 0 { t −σ1 t+θ

{ t t−σ1

e T (s)Q 2 e(s)ds +

e˙ T (s)W1 e(s)dsdθ ˙ + (σ2 − σ1 )

{ t−σ1 t−σ2

{ −σ1 { t −σ2

t+θ

e T (s)Q 3 e(s)ds,

e˙ T (s)W2 e(s)dsdθ. ˙

The matrix inequalities in Theorem 5.1 are nonlinear when . L i (i = 1, 2, · · · , N ) are not given. In this case, they cannot be directly solved by Matlab LMI toolbox. Hence, Theorem 5.2 is given to convert nonlinear matrix inequalities into LMIs. At the same time, desired sampled-data controllers are obtained in Theorem 5.2. Theorem 5.2 Under Assumption 5.1 and for given scalar.m, error dynamical system (5.8) is asymptotically stable if there exist matrices. P > 0,. Q 1 > 0,. Q 2 > 0,. Q 3 > 0, . W1 > 0,. W2 > 0,. Z 1 > 0,. Z 2 > 0,. N1 > 0,. N2 > 0,. N3 > 0, and matrices.Y ,. X 1 ,. X 2 , .F = diag{F11 , F22 , · · · , F N N }, .M = diag{M11 , M22 , · · · , M N N }, and a scalar .σ > 0, such that the following matrix inequalities are satisfied [ ]T [ ] Π˜ 1 = Π˜ + d 0n N ×n N In N ×n N 0n N ×6n N N1 0n N ×n N In N ×n N 0n N ×6n N ]T [ ] [ + 2d 0n N ×2n N In N ×n N 0n N ×5n N N2 0n N ×2n N In N ×n N 0n N ×5n N

.

< 0,

(5.36)

5.3 Main Results

85

) ( Π˜ Y Π˜ 2 = < 0, ∗ − d1 N1 ( ) W2 X 1 ≥ 0, ∗ W2 ( ) Z2 X2 ≥ 0, ∗ Z2

(5.37)

.

(5.38) (5.39)

where [ ] Π˜ = Ψ˜ + Y8n N ×n N 08n N ×n N −Y8n N ×n N 08n N ×5n N ]T [ + Y8n N ×n N 08n N ×n N −Y8n N ×n N 08n N ×5n N ,

.



Ψ11 ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎜ ˜ =⎜ ∗ .Ψ ⎜ ∗ ⎜ ⎜ ∗ ⎜ ⎝ ∗ ∗

Ψ˜ 12 Ψ˜ 22 ∗ ∗ ∗ ∗ ∗ ∗

Ψ˜ 13 Ψ˜ 23 Ψ33 ∗ ∗ ∗ ∗ ∗

Ψ14 0 Ψ34 Ψ44 ∗ ∗ ∗ ∗

Ψ15 0 0 0 Ψ55 ∗ ∗ ∗

Ψ˜ 16 Ψ˜ 26 0 0 Ψ56 Ψ66 ∗ ∗

0 0 0 0 Ψ57 Ψ67 Ψ77 ∗

⎞ Ψ˜ 18 Ψ˜ 28 ⎟ ⎟ 0 ⎟ ⎟ 0 ⎟ ⎟, 0 ⎟ ⎟ 0 ⎟ ⎟ 0 ⎠ Ψ88

2 Ψ˜ 12 = −F + P,.Ψ˜ 13 = d1 Z 2 − d1 X 2 + π4 N3 + M,.Ψ˜ 16 = cF Γ¯ ,.Ψ˜ 18 = −σ V¯ + F, ˜ 22 = τ12 W1 + (τ2 − τ1 )2 W2 + d Z 2 + d 2 N3 − mF − mF T , .Ψ ˜ 23 = mM, .Ψ˜ 26 = c × mF Γ¯ , .Ψ˜ 28 = mF. .Ψ Moreover, the desired controller gain matrices in (5.6) can be given as

.

.

L i = Fii−1 Mii , i = 1, 2, · · · , N .

(5.40)

Proof Let . F1 = F, . F2 = mF, and .F L = M. From Theorem 5.1, we know that □ (5.36)–(5.39) hold. The proof is completed. Remark 5.4 In this chapter, the sampled-data synchronization problem of complex networks with fixed coupling has been investigated, and the less conservative results have been obtained. The method in this chapter is not suitable for complex networks with time-varying coupling because the sampled-data controller gain matrices need to be obtained by using system parameters. How to obtain the sampled-data synchronization criteria of complex networks with time-varying coupling is our future research. If the coupling matrix is changed with Markovian parameter, the method in this chapter is also available. The stochastic sampled-data synchronization criteria can be obtained when we choose mode-dependent Lyapunov-Krasovskii functional. Remark 5.5 Compared with [67, 166, 170], mixed convex combination and Wirtinger’s integral inequality are adopted in this chapter, which effectively utilizes the sawtooth structure and all available information about the actual aperiodic

86

5 Sampled-Data Synchronization of Complex Networks …

sampling pattern. Hence, the less conservative sampled-data synchronization criteria can be obtained. Remark 5.6 In this chapter, in order to reduce conservatism, some free matrices are introduced by adopting discontinuous Lyapunov-Krasovskii functional and mixed convex combination technique. As a result, the computation complexity increases because of many decision variables in the obtained results.

5.4 Simulation In order to demonstrate the validity of the proposed method, numerical examples are given to demonstrate the effectiveness. Example 5.1 Consider the following complex networks with time-varying delay borrowed from [67] y˙ (t) = g(yi (t)) + c

N ∑

. i

G i j Γ y j (t − τ (t)) + u i (t), i = 1, 2, 3,

(5.41)

j=1



⎞ ( ) −1 0 1 10 ⎝ ⎠ 0 −1 1 , .Γ = where .τ (t) = 0.2 + 0.05sin(10t), .G = . 01 1 1 −2 ] [ −0.5yi1 + tanh(0.2yi1 ) + 0.2yi2 , The activation function is given as:.g(yi (t))= 0.95yi2 − tanh(0.75y2 ) ( ( ) ) −0.5 0.2 −0.3 0.2 which satisfies Assumption 5.1, and .U = , .V = . From 0 0.95 0 0.2 time-varying delay, we obtain .τ1 = 0.15, .τ2 = 0.25, .h = 0.5. When .c = 0.5, .m = 1 and using Theorem 5.2 in our chapter, we get that the maximum value of sampling interval is .d = 0.9225. The following state feedback controllers are obtained under the maximum sampling interval .d = 0.9225 ( .L 1

=

−0.5193 −0.1604 0.0009 −1.2166

)

( , .L 2 =

−0.5193 −0.1604 0.0009 −1.2166

)

( , .L 3 =

−0.3675 −0.1590 0.0046 −1.0777

) .

In addition, Tables 5.1 and 5.2 provide the maximum sampling interval and the comparisons of complexity, respectively. It can be seen that the synchronization criteria in this chapter are less conservative than the existing ones. From Tables 5.1 and 5.2, the conservatism of this method is reduced on the basis of increasing complexity.

5.4 Simulation

87

Table 5.1 Maximum sampling interval .d for .c = 0.5 Methods Theorem 5.2 [67] .d

0.9225 Improvement rates

0.5409 70.55%

Table 5.2 The complexity of different methods Theorem 5.2 [67] Methods Decision variables The maximum order of LMI The number of LMIs

[166]

[170]

0.5573 65.53%

0.9016 2.32%

[166]

[170]

568

148

148

418

54

42

42

42

4

2

5

5

Fig. 5.1 State trajectories of the error system (5.41)

Fig. 5.2 Responses of the control inputs

88

5 Sampled-Data Synchronization of Complex Networks …

The state trajectories of the error system and control inputs are presented in ( )T ( )T Figs. 5.1 and 5.2 with initial values . y1 (0) = 3 −1 , . y2 (0) = 0 1 , . y3 (0) = ( )T ( )T −6 2 , .s(0) = 2 3 . In discontinuous Lyapunov-Krasovskii functional, .t p and .t p+1 are contained { {t T 2 t T in .(d − t + t p ) t p e˙ T (s)N1 e(s)ds, ˙ .(t p+1 − t)(t − t p )e (t p )N2 e(t p ), and .d t p e˙ (s) { π2 t T N3 e(s)ds ˙ − 4 t p [e(s) − e(t p )] N3 [e(s) − e(t p )]ds, which could make full use of the sawtooth structure characteristic of sampling input delay. In addition,{unlike t [67, 166], we use reciprocally convex combination technique to deal with . t−d e˙ T { t−τ1 T (s)Z 2 e(s)ds ˙ and . t−τ2 e˙ (s)W2 e(s)ds ˙ in the Lyapunov-Krasovskii, which provides tighter lower bound than Jensen’s inequality. Hence, the conservatism of the sampleddata synchronization criteria is further reduced. Table 5.1 provides the comparisons of the maximum sampling interval .d for different methods in this chapter and [67, 166, 170]. It is clear that, the synchronization criteria in this chapter are more effective than existing results. Example 5.2 Consider Chua’s circuit in [166], which is described follows ⎧ ⎨ s˙1 = σ1 (−s1 + s2 − ψ(s1 )) s˙2 = s1 − s2 + s3 . ⎩ s˙3 = −σ2 s2

(5.42)

where .σ1 = 10, .σ2 = 14.87, .ψ(s1 ) = +⎛0.68)(|s1 + 1| − ⎞ |s1 ⎛−0.68s1 + 0.5(−1.27 ⎞ 2.7 10 0 −3.2 10 0 −1 1 ⎠ , .V = ⎝ 1 −1 1 ⎠ . − 1|), .τ (t)=0.03 + 0.01sin(t), .U = ⎝ 1 0 −14.87 0 0 −14.87 0 ⎛ ⎞ ⎛ ⎞ −2 1 1 0.9 0 0 . G = ⎝ 1 −1 0 ⎠, .Γ = ⎝ 0 0.9 0 ⎠. 1 0 −1 0 0 0.9 We get that the maximum sampling interval is.d = 0.1120. In [166], the maximum sampling interval is .d = 0.0711. It can be calculated that the maximum sampling interval .d = 0.1120, which is 57.52% larger than that in [166]. Hence, the method in this chapter is less conservative than existing one. In addition, the state feedback controllers can be obtained under the maximum value of sampling interval .d = 0.1120⎛ ⎞ ⎛ ⎞ −8.8783 −7.7467 0.4037 −9.0658 −7.7945 0.4076 . L 1 = ⎝ 0.2107 −2.9613 −1.3997 ⎠ , . L 2 = ⎝ 0.2124 −3.0817 −1.3947 ⎠ , 4.1609 11.6779 −6.1945 4.4108 11.6875 −6.2911 ⎛ ⎞ −9.0658 −7.7945 0.4076 . L 3 = ⎝ 0.2124 −3.0817 −1.3947 ⎠ . 4.4108 11.6875 −6.2911 According to the state feedback controllers, the corresponding state trajectories of the error system and control inputs are presented in Figs. 5.3 and 5.4 with initial ( )T ( )T ( )T values . y1 (0) = −5 −4 −3 , . y2 (0) = −2 −1 0 , . y3 (0) = 1 2 3 , .s(0) = ( )T 1 −2 5 .

5.4 Simulation

89

Fig. 5.3 State trajectories of the error system

Fig. 5.4 Responses of the control inputs

Example 5.3 Consider the following delayed complex networks with fifteen nodes, y˙ (t) = g(yi (t)) + c

N ∑

. i

j=1

where

G i j Γ y j (t − τ (t)) + u i (t), i = 1, 2, · · · , 15,

(5.43)

90

5 Sampled-Data Synchronization of Complex Networks …



−2 ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 0 ⎜ ⎜ 1 ⎜ ⎜ 1 ⎜ .G = ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 1 ⎜ ⎜ 0 ⎜ ⎝ 1 1

1 −3 0 0 0 0 0 0 1 0 0 0 0 1 0

1 0 −1 0 0 0 0 0 1 0 0 0 0 1 0

0 0 0 −3 0 0 0 0 0 0 0 0 0 0 0

0 1 0 1 −3 0 0 0 0 0 0 0 0 0 0

0 1 0 1 1 −1 0 0 0 0 0 0 0 0 0

0 0 0 0 1 0 −2 0 0 1 0 0 0 0 0

0 0 0 0 1 0 0 −3 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 −5 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 −3 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 −2 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 −1 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 −2 0 0

0 0 0 0 0 0 0 1 1 0 0 0 1 −3 0

⎞ 0 0 ⎟ ⎟ 0 ⎟ ⎟ 0 ⎟ ⎟ 0 ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ ⎟ 1 ⎟ ⎟, 1 ⎟ ⎟ 1 ⎟ ⎟ 1 ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ ⎟ 0 ⎠ −1

g(yi (t)), .τ (t), .Γ , .U , and .V are the same as Example 5.1. When .c = 0.5, .m = 1 and using Theorem 5.2 in our chapter, we get the following state feedback controllers under sampling interval .d = 0.6325

.

(

) ( ) ( ) −0.6110 −0.1819 −0.4918 −0.1219 −0.8455 −0.1832 , .L 2 = , .L 3 = , 0.0015 −1.2133 −0.0078 −0.9889 0.0045 −1.6162 ( ) ( ) ( ) −0.5698 −0.1350 −0.5667 −0.1098 −0.8839 −0.1800 .L 4 = , .L 5 = , .L 6 = , 0.0112 −1.3117 0.0070 −1.0665 0.0088 −1.7040 ( ) ( ) ( ) −0.7256 −0.1468 −0.5582 −0.1001 −0.3085 −0.0768 .L 7 = , .L 8 = , .L 9 = , 0.0097 −1.3704 −0.0060 −1.0356 −0.0204 −0.8184 ( ) ( ) ( ) −0.5821 −0.1309 −0.7307 −0.1542 −0.8709 −0.1717 . L 10 = , . L 11 = , . L 12 = , 0.0052 −1.2936 0.0232 −1.5421 0.0247 −1.7693 ( ) ( ) ( ) −0.7355 −0.1411 −0.5556 −0.1347 −0.8895 −0.1899 . L 13 = , . L 14 = , . L 15 = . 0.0205 −1.4609 −0.0205 −1.2689 0.0037 −1.6716 .L 1

=

The state trajectories of the error system and control inputs are presented in ( )T ( )T Figs. 5.5 and 5.6 with initial values . y1 (0) = 3 2 , . y2 (0) = −1 0 , . y3 (0) = ( )T ( )T ( )T ( )T 1 −2 , . y4 (0) = 2 1 , . y5 (0) = −3 3 , . y6 (0) = 1.5 −1.5 , . y7 (0) = ( )T ( )T ( )T −2.5 2.5 , . y8 (0) = 1.2 −1.2 , . y9 (0) = 4.8 −4.5 , ( )T ( )T ( )T . y10 (0) = −4.8 3.2 , . y11 (0) = −3.4 −3.2 , . y12 (0) = −5 −5.2 , ( )T ( )T ( )T . y13 (0) = 5.2 4.8 , . y14 (0) = 4.6 4.7 , . y15 (0) = −5.5 −5.4 .

5.5 Conclusion In this chapter, the problem of sampled-data synchronization is studied for complex networks under aperiodic sampling intervals via input-delay approach. The novel delay-dependant sampled-data synchronization criteria are obtained by using the

5.5 Conclusion

91

Fig. 5.5 State trajectories of the error system (43)

Fig. 5.6 Responses of the control inputs

discontinuous Lyapunov-Krasovskii functional, Wirtinger’s integral inequality and mixed convex combination technique. At the same time, the desired controllers have been obtained by solving the LMIs. Finally, numerical examples are given to demonstrate the effectiveness of the theoretical results. Further research includes the study of finite-time sampled-data of complex networks with Markovian parameters and stochastic perturbations, and the study of sampled-data synchronization of complex networks with uncertain parameters and time-varying coupling.

Chapter 6

Sampled-Data Synchronization of Markovian Coupled Neural Networks with Time-Varying Mode Delays

In this chapter, we address the sampled-data synchronization of Markovian coupled neural networks with mode-dependent interval time-varying delays and aperiodic sampling intervals based on enhanced input delay method. The mode-dependent augmented Lyapunov-Krasovskii functional including several mode-dependent matrices is adopted in this chapter. The new delay-dependent synchronization conditions are proposed based on extended Jensen’s integral inequality, which fully adopts the upper bound of variable sampling interval and the sawtooth structure information of varying input delay. In addition, the mode-dependent sampled-data controllers are proposed based on the delay-dependent synchronization criteria. Finally, two examples are proposed to show the effectiveness of the results.

6.1 Introduction In several control strategies, such as state feedback control, adaptive control, and impulsive control, the control input is continuous, that is, state variables are got, transmitted, and dealt with by the minute [193]. However, it is difficult to ensure this condition in real-world in most cases. With the development of computer technology, continuous-time controllers tend to be replaced with digital controllers. In this case, the digital computer is used to sample and quantize the continuous time measurement signal to generate the discrete time signal. After that, it generates the discrete time control input signal and is further converted back into the continuous time control input signal via zero-order hold [28, 165, 178, 210]. In sampleddata controllers, the control signals are updated only at sampling instants and are kept at updated constant during the sampling interval [19, 106, 112]. These control signals are in stepwise form and may disturb the stability of systems [109, 110, 112]. Hence, in order to make the system have better performance, the sampleddata systems have been investigated extensively. In [109, 112], the synchronization problems of delayed neural networks and complex dynamical networks with control © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2_6

93

94

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

packet loss were studied based on stochastic sampled-data controllers. After that, sampled-data synchronization control of delayed neural networks was investigated base on complete sampling-interval-dependent discontinuous Lyapunov functional in [173]. Based on [173], the novel sampling-interval-dependent stability condition was proposed based on two-sided sampling-interval-dependent discontinuous Lyapunov functional in [70]. In [67, 166], the problems of synchronization control of sampled-data in time-varying coupling delay complex networks based on the input time delay method was investigated. However, in [67, 166], the sawtooth structure and all available information about the actual sampling pattern were neglected. After that, the sampled-data synchronization of delayed complex networks was studied in [165], which fully adopted the available information about the actual sampling pattern. In recent years, the Markovian jumping neural networks have been widely studied because they are recognized the suitables system to model the phenomenon of information latching, and the abrupt phenomena such as sudden environmental changes, random failures of the components [19, 106, 140, 150, 171, 179, 201, 218]. In [171], two delay-dependent sampled-data criteria were proposed to ensure the stochastic synchronization of the master and slave Markovian neural networks, and the desired mode-independent controllers were designed based on synchronization conditions. In [106], the exponential sampled-data synchronization of Markovian jumping neural networks with time-varying delays and variable sampling control was investigated based on combining the convex technique, and several delaydependent synchronization criteria were proposed to ensure the convergence of the master and slave systems. However, all the modes share the common time delays in [19, 106, 140, 171]. When it comes to Markovian system, different system models are with not only different system parameters but also different time delays. The common time delay for all the modes in Markovian system may lead to unpractical results [179]. Since mode-dependent matrices are more flexible than common ones, more mode-dependent matrices contained in Lyapunov-Krasovskii functional could obtain better synchronization conditions [53]. Hence, in order to get less conservative sampled-data synchronization conditions, it is necessary to establish the appropriate Lyapunov-Krasovskii functional with more mode-dependent matrices. In addition, the.Vq (e(t))(q = 11, 12, 13) containing.t p and.t p+1 are adopted in this chapter, which could make full use of the sawtooth structure characteristic of sampling input delay. Based on the above discussions, the sampled-data synchronization for Markovian coupled neural networks with mode delays is investigated, which includes both mode-dependent time-varying discrete delays and distributed delays. Compared with existing results, the mode-dependent augmented Lyapunov-Krasovskii functional is proposed, which involves triple integrals and sampling instance. In this case, more information of sawtooth structure about the actual aperiodic sampling pattern could be fully adopted. Different from works resorting to free weighting matrices [19, 108] and interval dividing approach [80], an uncorrelated augmented matrix is applied to deal with cross terms in this chapter, which reduces the computational burden to some degree. Based on the Wittinger’s inequality and the extended Jensen’s integral

6.2 Preliminaries

95

inequality, the less conservative sampled-data synchronization criteria are proposed, and mode-dependent sampled-data controllers are obtained by solving LMIs. The rest of this chapter is organized as follows. In Sect. 6.2, some preliminaries and Markovian coupled neural networks are introduced. In Sect. 6.3, the novel sampleddata synchronization criteria are obtained. In Sect. 6.4, two numerical examples are proposed to show the effectiveness of the proposed results. Finally, conclusions are drawn in Sect. 6.5. Notation Throughout this chapter, . R n and . R n×n denote the .n-dimensional Euclidean space and the set of all .n × n real matrices, respectively. The .|| · || stands for the Euclidean vector norm. The symbol .⊗ represents Kronecker product. . X T denotes the transpose of matrix . X . . X ≥ 0 (. X < 0), where . X ∈ R n×n , means that . X is real symmetric positive semidefinite matrix (negative definite n×n matrix). . In represents the .n-dimensional identity( matrix. ( . A ∈)R , ) For a matrix X Y X Y .λmin (A) denotes the minimum eigenvalue of . A. . stands for . . .E{·} ∗ Z YT Z stands for the mathematical expectation. . E˜ i represents block matrices, for instance ˜ i = [0n×n , 0n×n , . . . , 0n×n , In×n , 0n×n , 0n×n , . . . , 0n×n ]. Matrices, if their dimen.E ~~ ~ ~~~~ ~ ~~ ~ ~ i−1

i

16−i

sions are not explicitly stated, are assumed to be compatible for algebraic operations.

6.2 Preliminaries Let .{rt , t ≥ 0} be a right-continuous Markovian process on the probability space and taking values in a finite state space .G = {1, 2, . . . , N } with generator .Π = (πi j ), i, j ∈ G given by { .

Pr {rt+Δt = j | rt = i} =

πi j Δt + o(Δt), i /= j, 1 + πii Δt + o(Δt), i = j,

where .Π ∈ R N ×N . .Δt > 0. .o(Δt) is the higher-order infinitesimal of .Δt, and . lim (o(Δt)/Δt) = 0. .πi j ≥ 0 (i / = j) is the transition rate from mode .i at time .t to Δt→0

mode . j at time .t + Δt, and .πii = −

N ∑ j=1, j/=i

πi j . Each .rt is denoted by .i, .i ∈ G.

Consider the following . N Markovian coupled neural networks with modedependent time-varying delays: y˙ (t) = −C(rt )yk (t) + A(rt )g(yk (t)) + B(rt )g(yk (t

. k

− τ (t, rt ))) + I (t) + a1

N ∑ l=1

G (1) kl (r t )Γ1 (r t )yl (t)

96

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

+ a2

N ∑

G (2) kl (r t )Γ2 (r t )yl (t − τ (t, r t ))

l=1

+ a3

N ∑

G (3) kl (r t )Γ3 (r t )

l=1

{

t t−τ (t,rt )

yl (s)ds + u k (t, rt ),

k = 1, 2, . . . , N ,

(6.1)

where . yk (t) = [yk1 (t), yk2 (t), . . . , ykn (t)]T ∈ R n is the state vector of the .kth node. C(rt ) = diag{c1 (rt ), c2 (rt ), . . . , cn (rt )} is diagonal matrix with positive diagonal entries. . A(rt ) = (ai j (rt ))n×n and . B(rt ) = (bi j (rt ))n×n are the connection weight matrices..u k (t, rt ) is the control input of the node.k..g(yk (t))=[g1 (yk1 (t)), g2 (yk2 (t)), . . . , gn (ykn (t))]T ∈ R n is the neuron activation function. . I (t) is an external input vector. .a1 , .a2 and .a3 are coupling strengths. .τ (t, rt ) is mode-dependent delay and satisfies .0 ≤ τi(1) ≤ τ (t, rt ) ≤ τi(2) , .τ˙ (t, rt ) ≤ h i , where .τi(1) , τi(2) .(τi(1) < τi(2) ), and (1) .h i are constants. For convenience, let .τ = min{τi(1) | i = 1, 2, . . . , N }, .τ (2) = (2) max{τi | i = 1, 2, . . . , N }. .Γq (r t )(q = 1, 2, 3) denote the inner coupling matrices, which describe the inner coupling between the subsystems. .Γq (rt )(q = 1, 2, 3) are diagonal matrices with (q) positive diagonal entries. .G (q) (rt ) = (G kl (rt )) N ×N (q = 1, 2, 3) are the coupling configuration matrices representing the topological structures of network, which may not be identical and satisfy the following conditions: .

.

.

(q)

(q)

G kl (rt ) ≥ 0 k /= l,

G kk (rt ) = −

N ∑

(q)

G kl (rt ).

l=1,l/=k

The initial condition associated with (6.1) is given as follows: . yk (s) = φk0 (s) ∈ C([−τ (2) , 0], R n ) (k = 1, 2, . . . , N ), where.C([−τ (2) , 0], R n ) is the set of continuous functions from .[−τ (2) , 0] to . R n . Remark 6.1 In system (6.1), the constants .aq (q = 1, 2, 3) represent the coupling strength between nodes and satisfy .aq > 0 (q = 1, 2, 3). In this chapter, we consider the state trajectory of the isolate node s˙ (t) = − C(rt )s(t) + A(rt )g(s(t)) + B(rt )g(s(t − τ (t, rt ))) + I (t).

.

(6.2)

6.2 Preliminaries

97

By defining the error signal as .e(t) = y(t) − s(t), the error system is obtained as follows: e˙ (t) = −C(rt )ek (t) + A(rt )g(ek (t)) + B(rt )g(ek (t

. k

− τ (t, rt ))) + a1

N ∑

G (1) kl (r t )Γ1 (r t )el (t)

l=1

+ a2

N ∑

G (2) kl (r t )Γ2 (r t )el (t − τ (t, r t ))

l=1

+ a3

N ∑

G (3) kl (r t )Γ3 (r t )

l=1

{

t t−τ (t,rt )

el (s)ds + u k (t, rt ),

k = 1, 2, . . . , N ,

(6.3)

where .g(ek (t)) = g(yk (t)) − g(s(t)), .g(ek (t − τ (t, rt ))) = g(yk (t − τ (t, rt ))) − g(s(t − τ (t, rt ))). The control signal is generated by using a zero-order-hold function with a sequence of hold times .0 = t0 < t1 < · · · < t p < · · · . The sampled-data control input is represented as discrete control signal u (t, rt ) = u ki (t) = u dki (t p ) = u dki (t − (t − t p ))

. k

= u dki (t − d(t)), t ∈ [t p , t p+1 ),

(6.4)

˙ = 1. where .d(t) = t − t p . The delay .d(t) is a piecewise linear function and .d(t) Sampling interval .d p satisfies the following condition. 0 ≤ d(t) = t − t p ≤ d p = t p+1 − t p ≤ d,

.

(6.5)

where .d > 0 is the largest sampling interval and is bounded. The mode-dependent state feedback controllers are designed as follows: u (t) = L ki ek (t p ), t ∈ [t p , t p+1 ), k = 1, 2, . . . , N ,

. ki

(6.6)

where. L ki (k = 1, 2, . . . , N ) are sampled-data feedback controllers to be determined. The sampling pattern is not required periodic. From (6.3) and (6.6), we have the following closed-loop system

98

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

e˙ (t) = −C(rt )ek (t) + A(rt )g(ek (t)) + B(rt )g(ek (t

. k

− τ (t, rt ))) + a1

N ∑

G (1) kl (r t )Γ1 (r t )el (t)

l=1 N ∑

+ a2

G (2) kl (r t )Γ2 (r t )el (t − τ (t, r t ))

l=1 N ∑

+ a3

G (3) kl (r t )Γ3 (r t )

l=1

{

t t−τ (t,rt )

el (s)ds

+ L ki ek (t − d(t)), t ∈ [t p , t p+1 ), k = 1, 2, . . . , N .

(6.7)

Assumption 6.1 ([84]) For any . y1 , y2 ∈ R, there are constants .er− , er+ , such that e− ≤

. r

gr (y1 ) − gr (y2 ) ≤ er+ , r = 1, 2, . . . , n. y1 − y2

We denote .

.

E 1 = diag(e1+ e1− , . . . , en+ en− ),

E 2 = diag(

e1+ + e1− e+ + en− ,..., n ). 2 2

Definition 6.1 ([25]) If . A = (ai j ) ∈ R m×n and . B = (bi j ) ∈ R p×q , then the Kronecker product . A ⊗ B is the .mp × nq block matrix: ⎞ a11 B · · · a1n B ⎜ .. . . . ⎟ .A ⊗ B = ⎝ . .. ⎠ . . am1 B · · · amn B ⎛

Definition 6.2 ([171]) The Markovian coupled neural networks (6.1) is stochastically synchronous if error system (6.7) is stochastically stable, such that the following condition is satisfied: {{ .

lim E

T →∞

T

} ||e(s)||2 ds < ∞.

(6.8)

0

Lemma 6.1 ([178]) Let .Y > 0{and .ω(s) be an appropriate dimensional vector. t Then, we have the following: .− t12 ω T (s)Y ω(s)ds ≤ (t2 − t1 )ξ T (t)F T Y −1 Fξ(t) + { t 2ξ T (t)F T t12 ω(s)ds, where matrix . F and vector .ξ(t) independent on the integral variable are arbitrary appropriate dimensional ones.

6.3 Sampled-Data Synchronization Criteria

99

Lemma 6.2 ([44]) For any constant matrix . Z = Z T > 0 and scalars .b > a > 0 such that the following integration is well defined, then (1) { .

t−a

− (b − a)

{ y (s)Z y(s)ds ≤ −

{ .

t−b

t−a

− (b − a)

t−a

y (s)ds Z T

t−b

(2)

{

t−a

T

{ y T (s)Z y(s)ds ≤ −

t−b

y(s)ds, t−b

{

t−a

t−a

y T (s)ds Z

t−b

y(s)ds. t−b

Lemma 6.3 ([101, 104]) For any matrices . Z i (.i = 1, 2, 3) with proper dimensions, Z 1 + αZ 2 + (1 − α)Z 3 < 0 holds for .∀α ∈ [0, 1] if and only if the following set of inequalities hold: . Z 1 + Z 2 < 0, Z 1 + Z 3 < 0.

.

Lemma 6.4 ([77, 78]) Let .z(t) ∈ W [a, b) and .z(a) = 0. Then for any .n × n matrix R > 0 the following inequality holds:

.

.

π2 4

{

b

{ z T (s)Rz(s)ds ≤ (b − a)2

a

b

z˙ T (s)R z˙ (s)ds.

a

6.3 Sampled-Data Synchronization Criteria In this section, the sampled-data synchronization conditions for Markovian coupled neural networks (6.1) are proposed based on the mode-dependent LyapunovKrasovskii functional. For convenience, .rt is denoted by .i, and .τ (t, rt ) is denoted by .τi (t) (.i ∈ G). The system (6.7) is rewritten as follows. e˙ (t) = −Ci ek (t) + Ai g(ek (t)) + Bi g(ek (t − τi (t)))

. k

+ a1

N ∑

G (1) kl,i Γ1,i el (t)

l=1

+ a3

N ∑

G (3) kl,i Γ3,i

l=1

k = 1, 2, . . . , N .

{

+ a2

N ∑

G (2) kl,i Γ2,i el (t − τi (t))

l=1 t t−τi (t)

el (s)ds + L ki ek (t p ), (6.9)

100

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

According to the Kronecker product in Definition 6.1, we have .C¯i = (q) ¯ i = I N ⊗ Ai , ¯i = I N ⊗ Bi , .A .B .Γ¯q,i = G i ⊗ Γq,i (q = 1, 2, 3), I N ⊗ Ci , T T T T . L i = diag{L 1i , L 2i , . . . , L N i }, .e(t) = [e1 (t), e2 (t), . . . , e N (t)] , .g(e(t)) = [g T (e1 (t)), g T (e2 (t)), . . . , g T (e N (t))]T , . E¯1 = I N −1 ⊗ E 1 , . E¯2 = I N −1 ⊗ E 2 . The system (6.9) is rewritten as follows: e(t) ˙ = −C¯i e(t) + A¯ i g(e(t)) + B¯i g(e(t − τi (t))) + a1 Γ¯1,i e(t) { t e(s)ds + L i e(t p ). + a2 Γ¯2,i e(t − τi (t)) + a3 Γ¯3,i

.

(6.10)

t−τi (t)

Theorem 6.1 Under Assumption 6.1, error )(6.10) is stochastically )stable ( 11system ( Ni Ni12 N11 N12 > 0, .N = > 0, if there are matrices .Υi > 0, .Ni = ∗ Ni22 ∗ N22 ) ( 11 12 ) ( ( 11 12 ) Si Si S11 S12 Ti Ti > 0, . S = > 0, .Ti = > 0, .T = . Si = ∗ Si22 ∗ S22 ∗ Ti22 ( ) T11 T12 > 0, .Ui > 0, .U > 0, .Wi > 0, .W > 0, . Z 1 > 0, . Z 2 > 0, . N1 > 0, ∗ T22 . N2 > 0, . N3 > 0, and positive diagonal matrices . R1 , . R2i , . R3i , . R4i , and matrices ( 11 12 ) Xi Xi T T T T .Y = [Y1 , Y2 , . . . , Y15 ] , . X i = , . X , . F1 , . F2 , such that, for any .i ∈ G, X i21 X i22 the succeeding matrix inequalities are satisfied Πi(1) =Πi + d E˜ 2T N1 E˜ 2 + 2d E˜ 3T N2 E˜ 3 < 0,

.

Πi(2) =

(

.



Ui ⎜∗ .⎜ ⎝∗ ∗

0 3Ui ∗ ∗ (

.

Πi Y ∗ − d1 N1 X i11 X i21 Ui ∗

Z2 X ∗ Z2

N ∑ .

j=1, j/=i

(6.11)

) < 0,

(6.12)

⎞ X i12 X i22 ⎟ ⎟ ≥ 0, 0 ⎠ 3Ui

(6.13)

) ≥ 0,

πi j W j − W ≤ 0,

(6.14)

(6.15)

6.3 Sampled-Data Synchronization Criteria

101

Πi = Ψi + Y ( E˜ 1 − E˜ 3 ) + ( E˜ 1 − E˜ 3 )T Y T

.

⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ Ψi = ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ .

i Ψ11 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ψ12 i Ψ22 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ψ1,10 0 i Ψ2,10 0 0 0 0 0 i 0 Ψ5,11 0 0 0 0 0 0 0 0 i Ψ10,10 0 i ∗ Ψ11,11 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ψ13 i Ψ23 i Ψ33 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ψ14 0 i Ψ34 i Ψ44 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ55 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

i Ψ16 i Ψ26 0 0 i Ψ56 i Ψ66 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ57 i Ψ67 i Ψ77 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ58 i Ψ68 i Ψ78 i Ψ88 ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ59 i Ψ69 i Ψ79 i Ψ89 i Ψ99 ∗ ∗ ∗ ∗ ∗ ∗ ⎞

i i Ψ1,12 0 Ψ1,14 0 i i Ψ2,12 0 Ψ2,14 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ ⎟ i 0 0 0 ⎟ Ψ6,12 ⎟ i 0 Ψ7,13 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟, 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ i 0 0 0 ⎟ Ψ12,12 ⎟ ⎟ i ∗ Ψ13,13 0 0 ⎟ ⎟ i ∗ ∗ Ψ14,14 0 ⎠ i ∗ ∗ ∗ Ψ15,15

where .W represents .N, . S, .T , .U , .W and .W j represents .N j , . S j , .T j , .U j , .W j (for example, when .W is .N, .W j is .N j ). N ∑ (2) 2 i .Ψ11 = πi j Υ j + Ni11 + τ (2) NN11 + τ (12) S11 + τi(2) Wi + (τ 2 ) W + Z 1 − d1 Z 2 − j=1

N3 − R1 E¯ 1 − F1 C¯i − (F1 C¯i )T + a1 F1 Γ¯1,i + a1 (F1 Γ¯1,i )T , i ¯i )T + a1 (F2 Γ¯1,i )T , .Ψ12 = Υi − F1 − (F2 C 2 1 i Z − d1 X + π4 N3 + F1 L i , .Ψ13 = d 2 1 i i X, .Ψ16 = a2 F1 Γ¯2,i , .Ψ14 = d i i 12 (2) .Ψ1,10 = Ni + τ N12 + τ (12) S12 + R1 E¯ 2 + F1 A¯ i , .Ψ1,12 = F1 B¯ i , i ¯ .Ψ1,14 = a3 F1 Γ3,i , (2) 2 (1) 2 (12) ) i U + d Z 2 + d 2 N3 − F2 − F2T + τi(12) Ui .Ψ22 = τi Ui + (τ ) −(τ 2 (2) 2 (1) 2 (τ ) −(τ ) U, 2 π2 4

+

102 i Ψ23 i .Ψ33 i .Ψ34 i .Ψ55 .

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks … i i i i = F2 L i , .Ψ26 = a2 F2 Γ¯2,i , .Ψ2,10 = F2 A¯ i , .Ψ2,12 = F2 B¯ i , .Ψ2,14 = a3 F2 Γ¯3,i , 2 1 1 π 1 T = − d Z 2 + d (X + X ) − d Z 2 − d N2 − 4 N3 , i = − d1 X + d1 Z 2 , .Ψ44 = −Z 1 − d1 Z 2 , 4 = Si11 − R3i E¯1 − (12) Ui ,

i Ψ56 =

.

i Ψ57 =

1

τi(12)

τi

(−2Ui − X i11 − X i12 − X i21 − X i22 ),

1 (X i11 + X i21 − X i12 − X i22 ), τi(12) 1 6 i i 12 22 .Ψ58 = (12) Ui , .Ψ59 = (12) (2X i + 2X i ), τi τi i 12 .Ψ5,11 = Si + R3i E¯2 , 1 i 11 11 11 T 12 12 T .Ψ66 = −(1 − h i )Ni − R2i E¯1 + (12) (−8Ui + X i + (X i ) + X i + (X i ) τi X i21 − (X i21 )T − X i22 − (X i22 )T ), 1 i 11 12 21 22 .Ψ67 = (12) (−2Ui − X i + X i + X i − X i ), τi 1 i 21 T 22 T .Ψ68 = (12) (6Ui + 2(X i ) + 2(X i ) ), τi 1 i 12 22 .Ψ69 = (12) (6Ui − 2X i + 2X i ), τi i 12 .Ψ6,12 = −(1 − h i )Ni + R2i E¯2 , 4 i 11 ¯ .Ψ77 = −Si − R4i E 1 − (12) Ui , τi 2 i 21 T 22 T .Ψ78 = (12) (−(X i ) + (X i ) ), τi 6 i i 12 .Ψ79 = (12) Ui , .Ξ7,13 = −Si + R4i E¯2 , τi 12 4 12 i i i 22 .Ψ88 = − (12) Ui , .Ξ89 = − (12) X i , .Ξ99 = − (12) Ui , τi τi τi (2) 2 (1) 2 ) i 22 (2) T − R1 , .Ψ10,10 = Ni + τ N22 + τ (12) S22 + τi(12) Ti + (τ ) −(τ 2 i i i 22 22 22 .Ψ11,11 = Si − R3i , .Ψ12,12 = −(1 − h i )Ni − R2i , .Ψ13,13 = −Si − R4i , 1 1 i i .Ψ14,14 = − (2) Wi , .Ψ15,15 = − (12) Ti , τi τi (12) .τ = τ (2) − τ (1) , .τi(12) = τi(2) − τi(1) . .



Proof Consider the following Lyapunov-Krasovskii functional for system (6.10)

.

V (e(t), t, i) =

13 ∑

[Vq (e(t), t, i)] t ∈ [t p , t p+1 ).

q=1

Let [ η(t) =

.

.

] e(t) . g(e(t))

V1 (e(t), t, i) = e T (t)Υi e(t),

(6.16)

6.3 Sampled-Data Synchronization Criteria

{ .

V2 (e(t), t, i) =

t t−τi (t)

{ η T (s)Ni η(s)ds +

{ .

V3 (e(t), t, i) = +

.

V6 (e(t), t, i) =

{ .

V7 (e(t), t, i) =

V8 (e(t), t, i) =

V9 (e(t), t, i) =

{

V10 (e(t), t, i) = +

0

−τi(2)

−τ (2)

0

θ

t

t

{

(6.19)

g T (s)T g(s)dsdαdθ,

(6.20)

e˙ T (s)Ui e(s)dsdθ, ˙

t

(6.21)

e˙ T (s)U e(s)dsdαdθ, ˙

(6.22)

t+α

t

e T (s)Wi e(s)dsdθ,

(6.23)

t+θ

{

t

e T (s)W e(s)dsdαdθ,

(6.24)

t+α

e T (s)Z 1 e(s)ds

t−d { 0{ t −d

g T (s)Ti g(s)dsdθ,

t

{

{

{

(6.18)

t+θ

0

0

η T (s)Sη(s)dsdθ,

t+α

θ

{ .

{

−τi(2)

−τ (2)

{ .

0

−τi(1)

{ .

t

θ

−τ (1)

(6.17)

t+θ

{

−τ (2)

η T (s)Si η(s)ds,

η T (s)Nη(s)dsdθ t

{

−τi(2)

−τ (1)

t−τi(2)

t+θ

−τi(1)

{ .

{

−τ (2)

V4 (e(t), t, i) =

V5 (e(t), t, i) =

t

t−τi(1)

t+θ

−τ (1)

{

{

{

0

−τ (2)

{

.

103

t+θ

e˙ T (s)Z 2 e(s)dsdθ, ˙

(6.25)

104

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

{

t

e˙ T (s)N1 e(s)ds, ˙

(6.26)

V12 (e(t), t, i) = (t p+1 − t)(t − t p )e T (t p )N2 e(t p ),

(6.27)

.

V11 (e(t), t, i) = (d − t + t p )

tp

.

{ .

t

V13 (e(t), t, i) = d 2

e˙ T (s)N3 e(s)ds ˙

tp

π2 − 4

{

t

(e(s) − e(t p ))T N3 (e(s) − e(t p ))ds.

(6.28)

tp

Let .L be the infinitesimal operator. Calculating the time derivative of .V (e(t), t, i), we get LV1 (e(t), t, i) = 2e T (t)Υi e(t) ˙ +

N ∑

.

πi j e T (t)Υ j e(t),

(6.29)

j=1

LV2 (e(t), t, i) ≤ η T (t)Ni η(t) − (1 − h i )η T (t − τi (t))Ni η(t − τi (t)) { t N ∑ πi j η T (s)N j η(s)ds +

.

t−τ j (t)

j=1

+ η T (t − τi(1) )Si η(t − τi(1) ) − η T (t − τi(2) )Si η(t − τi(2) ) { t−τ (1) N ∑ j πi j η T (s)S j η(s)ds, (6.30) + t−τ (2) j

j=1

LV3 (e(t), t, i) = τ

.

{

(2) T

η (t)Nη(t) −

+ τ (12) η T (t)Sη(t) −

t

t−τ (2)

{

t−τ (1) t−τ (2)

LV4 (e(t), t, i) = τi(12) g T (t)Ti g(t) −

.

+

N ∑ j=1

{ πi j

−τ (1) j −τ (2) j

{

η T (s)Nη(s)ds

t t+θ

{

η T (s)Sη(s)ds,

t−τi(1) t−τi(2)

g T (s)Ti g(s)ds

g T (s)T j g(s)dsdθ.

(6.31)

6.3 Sampled-Data Synchronization Criteria

105

According to Lemma 6.2, we get

{ ×(

t−τi(1)

t−τi(2)

{

1

LV4 (e(t), t, i) ≤ τi(12) g T (t)Ti g(t) −

( (12)

.

τi

N ∑

g(s)ds) +

t−τi(1)

t−τi(2)

{ πi j

−τ (1) j −τ (2) j

j=1

g(s)ds)T Ti {

t

g T (s)

t+θ

× T j g(s)dsdθ,

(6.32)

(τ (2) )2 − (τ (1) )2 T g (t)T g(t) 2 { −τ (1) { t g T (s)T g(s)dsdθ, −

LV5 (e(t), t, i) =

.

−τ (2)

LV6 (e(t), t, i) ≤

.

+

τi(12) e˙ T (t)Ui e(t) ˙ N ∑

{ πi j

−τ (1) j

{

−τ (2) j

j=1

(6.33)

t+θ

{ − t

t−τi(1)

t−τi(2)

e˙ T (s)Ui e(s)ds ˙

e˙ T (s)U j e(s)dsdθ, ˙

(6.34)

t+θ

(τ (2) )2 − (τ (1) )2 T e˙ (t)U e(t) ˙ 2 { −τ (1) { t e˙ T (s)U e(s)dsdθ, ˙ −

LV7 (e(t), t, i) =

.

−τ (2)

LV8 (e(t), t, i) ≤ τi(2) e T (t)Wi e(t) −

.

+

N ∑ j=1

LV9 (e(t), t, i) =

.

{ πi j

t

−τ (2) j

{

(6.35)

t+θ

{

1 τi(2)

{

t

t−τi (t)

e T (s)dsWi

t

e(s)ds t−τi (t)

t

e(s)T W j e(s)dsdθ,

(6.36)

t+θ

(τ (2) )2 T e (t)W e(t) − 2

{

0

−τ (2)

{

t

t+θ

e T (s)W e(s)dsdθ,

(6.37)

106

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

LV10 (e(t), t, i) = e T (t)Z 1 e(t) − e T (t − d)Z 1 e(t − d) { t T ˙ − e˙ T (s)Z 2 e(s)ds, ˙ + d e˙ (t)Z 2 e(t)

.

(6.38)

t−d

{ LV11 (e(t), t, i) = (d − d(t))e˙ T (t)N1 e(t) ˙ −

t

e˙ T (s)N1 e(s)ds, ˙

.

(6.39)

t−d(t)

LV12 (e(t), t, i) = (d p − 2d(t))e T (t − d(t))N2 e(t − d(t))

.

≤ (d − 2d(t))e T (t − d(t))N2 e(t − d(t)) = −de(t p )N2 e(t p ) + 2(d − d(t))ξ T (t, i) E˜ 3T N2 E˜ 3 ξ(t, i), (6.40)

LV13 (e(t), t, i) = d 2 e˙ T (t)N3 e(t) ˙ −

.

π2 (e(t) − e(t p ))T N3 (e(t) − e(t p )). 4

(6.41)

From Lemma 6.1, we obtain { .

t



t−d(t)

e˙ T (s)N1 e(s)ds ˙ ≤ d(t)ξ T (t, i)Y N1−1 Y T ξ(t, i) + 2ξ T (t, i)Y ( E˜ 1 − E˜ 3 )ξ(t, i).

(6.42)

According to the extended Jensen’s integral inequality in [113, 201] and reciprocally convex combination technique in [102], we have { .



t−τi(1)

t−τi(2)

e˙ T (s)Ui e(s)ds ˙ ≤−

1 τi(12)

ζ T (t, i)Δi ζ(t, i),

where ]T [ ζ(t, i) = ζ1T (t, i) ζ2T (t, i) ζ3T (t, i) ζ4T (t, i) ,

.

ζ (t, i) = e(t − τi(1) ) − e(t − τi (t)),

. 1

ζ (t, i) = e(t −

. 2

τi(1) )

+ e(t − τi (t)) −

2 τi (t) − τi(1)

{

t−τi(1)

e(s)ds, t−τi (t)

(6.43)

6.3 Sampled-Data Synchronization Criteria

107

ζ (t, i) = e(t − τi (t)) − e(t − τi(2) ),

. 3

2

ζ (t, i) = e(t − τi (t)) + e(t − τi(2) ) −

. 4



Ui ⎜∗ ⎜ .Δi = ⎝∗ ∗

0 3Ui ∗ ∗

X i11 X i21 Ui ∗

τi(2)

− τi (t)

{

t−τi (t) t−τi(2)

⎞ X i12 X i22 ⎟ ⎟ ≥ 0. 0 ⎠ 3Ui

e(s)ds,

(6.44)

⎡ ⎤ ) [ ] I 0 ( 1 Z2 X I −I 0 T T ⎣ ⎦ × ϖ(t), .− e˙ (s)Z 2 e(s)ds ˙ ≤ − ϖ (t) −I I ∗ Z2 0 I −I d t−d 0 −I (6.45) {

t

] [ where .ϖ T (t) = e T (t) e T (t p ) e T (t − d) , ( .

Z2 X ∗ Z2

) ≥ 0.

(6.46)

From Assumption 6.1, we get ( η T (t)

.

−R1 E¯ 1 R1 E¯ 2 ∗ −R1

)

(

−R2i E¯ 1 R2i E¯ 2 .η (t − τi (t)) ∗ −R2i

η(t) ≥ 0, )

T

η T (t − τi(1) )

(

.

η T (t − τi(2) )

.

(

−R3i E¯ 1 R3i E¯ 2 ∗ −R3i −R4i E¯ 1 R4i E¯ 2 ∗ −R4i

(6.47)

)

)

η(t − τi (t)) ≥ 0,

(6.48)

η(t − τi(1) ) ≥ 0,

(6.49)

η(t − τi(2) ) ≥ 0.

(6.50)

According to .πi j ≥ 0 (i /= j), .N j ≥ 0 and (6.15), we have

108

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks … N ∑ .

{ πi j

j=1

t

t−τ j (t)

{

N ∑

η T (s)N j η(s)ds ≤

πi j

t−τ j (t)

j=1, j/=i N ∑



{ πi j

j=1, j/=i t t−τ (2)

t

t−τ (2)

{ ≤

t

η T (s)N j η(s)ds

η T (s)N j η(s)ds

η T (s)Nη(s)ds.

(6.51)

Similarly, from (6.15), we get N ∑ .

{ πi j

t−τ (2) j

j=1

N ∑ .

{ πi j

N ∑

{ πi j

j=1

N ∑ .

j=1

{

−τ (2) j

j=1

.

−τ (1) j

−τ (1) j

πi j

0

t

{ η (s)S j η(s)ds ≤

{

t

{ g T (s)T j g(s)dsdθ ≤

t

{

−τ (2)

t

{

−τ (2)

0

t+θ

−τ (2)

(6.52)

g T (s)T g(s)dsdθ,

(6.53)

e˙ T (s)U e(s)dsdθ, ˙

(6.54)

e T (s)W e(s)dsdθ.

(6.55)

t t+θ

{

t

T

−τ (2) j

η T (s)Sη(s)ds,

t+θ

−τ (1)

{ e (s)W j e(s)dsdθ ≤

t−τ (2)

−τ (1)

{ e˙ T (s)U j e(s)dsdθ ˙ ≤

t+θ

{

t−τ (1)

T

t+θ

−τ (2) j

{

t−τ (1) j

t+θ

For any appropriate dimensional matrices . F1 and . F2 , the following equation holds: (e T (t)F1 + e˙ T (t)F2 )[−e(t) ˙ − C¯i e(t) + A¯ i g(e(t)) + B¯i g(e(t − τi (t))) + a1 Γ¯1,i e(t) { t e(s)ds + L i e(t p )] = 0. (6.56) + a2 Γ¯2,i e(t − τi (t)) + a3 Γ¯3,i

.

t−τi (t)

Then, for .t ∈ [t p , t p+1 ), by combining (6.16)–(6.56), we get [ ] E{LV (e(t), t, i)} ≤ ξ T (t, i) Ξi(1) + (d − d(t))Ξi(2) + d(t)Y N1−1 Y T ξ(t, i),

.

d(t) ∈ [0, d], where

Ξi(1) = Ψi + Y ( E˜ 1 − E˜ 3 ) + ( E˜ 1 − E˜ 3 )T Y T ,

.

(6.57)

6.3 Sampled-Data Synchronization Criteria

109

Ξi(2) = E˜ 2T N1 E˜ 2 + 2 E˜ 3T N2 E˜ 3 ,

.

{ ξ(t, i)= e T (t), e˙ T (t), e T (t p ), e T (t − d), e T (t − τi(1) ), e T (t − τi (t)), e T (t − τi(2) ), and

.

{ t−τi(1)

{ t−τi (t)

e(s)ds)T , g T (e(t)), g T (e(t − τi(1) )), g T }T {t { t−τi(1) (2) T T T (e(t − τi (t))), g (e(t − τi )), ( t−τi (t) e(s)ds) , ( . (2) g(e(s))ds) (

1 τi (t)−τi(1)

e(s)ds)T , (

t−τi (t)

1 τi(2) −τi (t)

t−τi(2)

t−τi

Ξi(1)

d(t))Ξi(2)

Let .Φi (d(t)) = + (d − + d(t)Y N1−1 Y T . According to Lemma 6.3 and .d(t) ∈ [0, d], we have .Φi (d(t)) < 0, if and only if Φi (0) = Ξi(1) + dΞi(2) < 0,

(6.58)

Φi (d) = Ξi(1) + dY N1−1 Y T < 0.

(6.59)

.

.

According to the Schur complement, (6.58) is equivalent to (6.11), and (6.59) is equivalent to (6.12). E{LV (e(t), t, i)} ≤ −γE{||e(t)||2 }, t ∈ [t p , t p+1 )

.

where .γ = min{λmin (−Φi (0)), λmin (−Φi (d))} > 0. Applying Dynkin’s formula, i∈N

we have − − − E{V (y(t p+1 ), t p+1 , e(t p+1 ))} − E{V (y(t p ), t p , e(t p ))} − { t p+1 ≤ −γE{ ||e(s)||2 ds}.

.

(6.60)

tp

From Lyapunov-Krasovskii functional, we know that .V11 (e(t), t, i), .V12 (e(t), t, i), V13 (e(t), t, i) vanish before .t p and after .t p . From (6.60), we get

.

∞ ∑ .

p=0

{ E{

− t p+1

||e(s)||2 ds} ≤ γ −1 E{V (y0 , 0, e(0))}.

(6.61)

tp

Thus, according to Definition 6.2, the system (6.10) is stochastically stable. The proof is completed. In Theorem 6.1, the sampled-data synchronization problem of Markovian coupled neural networks with mode delays is investigated. Since the matrices . F1 , . F2 and . L i (i = 1, 2, . . . , N ) are not given, the matrix inequalities are nonlinear. Hence, the desired sampled-data controllers cannot be solved directly. In Theorem 6.2, the

110

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

nonlinear matrix inequalities have been converted into LMIs. At the same time, the mode-dependent controllers can be obtained in Theorem 6.2. Theorem 6.2 Under Assumption 6.1 and for given scalar .m, error system (6.10) is stochastically stable matrices .Υ)i > 0, ) if there are ( ( 11 12 ( 11 12 ) Ni Ni N11 N12 Si Si > 0, .N = > 0, . Si = > 0, . S = .Ni = ∗ Ni22 ∗ N22 ∗ Si22 ) ) ( ( 11 12 ) ( S11 S12 Ti Ti T11 T12 > 0, > 0, > 0, . Ti = .T = .Ui > 0, ∗ S22 ∗ Ti22 ∗ T22 .U > 0, . Wi > 0, . W > 0, . Z 1 > 0, . Z 2 > 0, . N1 > 0, . N2 > 0, . N3 > 0, . R1 , . R2i , . R3i , . R4i , and matrices .Y = and positive diagonal matrices ( 11 12 ) Xi Xi T T T T , . X , .F = diag{F11 , F22 , . . . , F N N }, [Y1 , Y2 , . . . , Y15 ] , . X i = X i21 X i22 .Mi = diag{M11i , M22i , . . . , M N N i }, such that, for any .i ∈ G, the succeeding matrix inequalities are satisfied ~i(1) = Π ~i + d E˜ 2T N1 E˜ 2 + 2d E˜ 3T N2 E˜ 3 < 0, Π

.

~i(2) = .Π ⎛

Ui ⎜∗ .⎜ ⎝∗ ∗

(

0 3Ui ∗ ∗ (

.

~i Y Π ∗ − d1 N1 X i11 X i21 Ui ∗

Z2 X ∗ Z2

N ∑ .

~i = Ψ ~i + Y ( E˜ 1 − E˜ 3 ) + ( E˜ 1 − E˜ 3 )T Y T Π

.

) < 0,

(6.63)

⎞ X i12 X i22 ⎟ ⎟ ≥ 0, 0 ⎠ 3Ui

(6.64)

) ≥ 0,

πi j W j − W ≤ 0,

j=1, j/=i

(6.62)

(6.65)

(6.66)

6.3 Sampled-Data Synchronization Criteria ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ~ Ψi = ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ .

~i Ψ 11 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

~i Ψ 12 ~i Ψ 22 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

~i Ψ 0 1,10 ~i 0 Ψ 2,10 0 0 0 0 i 0 Ψ5,11 0 0 0 0 0 0 0 0 i Ψ10,10 0 i ∗ Ψ11,11 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

~i Ψ 13 ~i Ψ 23 i Ψ33 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

111 ~i Ψ 14 0 i Ψ34 i Ψ44 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ55 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

~i Ψ 16 ~i Ψ 26 0 0 i Ψ56 i Ψ66 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ57 i Ψ67 i Ψ77 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ58 i Ψ68 i Ψ78 i Ψ88 ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 i Ψ59 i Ψ69 i Ψ79 i Ψ89 i Ψ99 ∗ ∗ ∗ ∗ ∗ ∗ ⎞

~i ~i Ψ 0 Ψ 0 1,12 1,14 i ~ ~i Ψ 0 Ψ 0 ⎟ ⎟ 2,12 2,14 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ ⎟ i 0 0 0 ⎟ Ψ6,12 ⎟ i 0 Ψ7,13 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟, 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ 0 0 0 0 ⎟ ⎟ i Ψ12,12 0 0 0 ⎟ ⎟ ⎟ i 0 0 ⎟ ∗ Ψ13,13 ⎟ i ∗ ∗ Ψ14,14 0 ⎠ i ∗ ∗ ∗ Ψ15,15

where .W represents .N, . S, .T , .U , .W and .W j represents .N j , . S j , .T j , .U j , .W j (for example, when .W is .N, .W j is .N j ). N ∑ (2) 2 i ~11 .Ψ = πi j Υ j + Ni11 + τ (2) N11 + τ (12) S11 + τi(2) Wi + (τ 2 ) W + Z 1 − d1 Z 2 − j=1

N3 − R1 E¯ 1 − F1 C¯i − (F1 C¯i )T + a1 F1 Γ¯1,i + a1 (F1 Γ¯1,i )T , i ¯i )T + a1 (mF1 Γ¯1,i )T , ~ .Ψ12 = Υi − F1 − (mF1 C 2 1 π 1 i i ~16 ~13 = Z 2 − X + N3 + Mi , .Ψ = a2 F1 Γ¯2,i , .Ψ d d 4 i 12 (2) (12) ~ .Ψ1,10 = Ni + τ N12 + τ S12 + R1 E¯ 2 + F1 A¯ i , i i ¯ ~ ~ .Ψ1,12 = F1 Bi , .Ψ1,14 = a3 F1 Γ¯3,i , (2) 2 (1) 2 ) i ~22 U + d Z 2 + d 2 N3 − mF1 − mF1T + τi(12) Ui + .Ψ = τi(12) Ui + (τ ) −(τ 2 (τ (2) )2 −(τ (1) )2 U, 2 i i i i ~ ~26 ~2,10 ~2,12 .Ψ23 = mMi , .Ψ = a2 × mF1 Γ¯2,i , .Ψ = mF1 A¯ i , .Ψ = mF1 B¯ i , (12) (2) i (12) (2) (1) ~2,14 = a3 × mF1 Γ¯3,i , .τ .Ψ = τ − τ , .τi = τi − τi(1) . Moreover, the desired controller gain matrices in (6.6) can be given as π2 4

112

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks … .

−1 L ki = Fkk Mkki .

(6.67)

Proof Let . F1 = F, . F2 = mF, and .F L i = Mi . From Theorem 6.1, we know that (6.62)–(6.67) hold. The proof is completed. In system (6.1), when.aq = 0 (q = 1, 2, 3) or.a3 = 0, the corresponding sampleddata synchronization criteria can be obtain, which are omitted here. Remark 6.2 .V11 (e(t), t, i), .V12 (e(t), t, i) and .V13 (e(t), t, i) make full use of the sawtooth structure characteristic of sampling input delay. They play the main roles to reduce the conservativeness because the piecewise linear varying delay with its derivative .τ˙ (t) = 1 .(t /= tk ) is fully utilized in the derivation. Thus, the conservatism of the synchronization criteria is further reduced. { t−τi(1) T Remark 6.3 When it comes to . e˙ (s)Ui e(s)ds, ˙ an extended Jensen’s integral t−τi(2) inequality is utilized in this chapter, which reduces the conservatism. In addition, the mode-dependent augmented Lyapunov-Krasovskii functional is adopted, which makes the Lyapunov-Krasovskii functional matrices more flexible than common ones. Remark 6.4 Different from most existing methods, the uncorrelated augmented vector .ξ(t) in Theorem 6.1 is adopted, which effectively reduces the computational burden to some degree.

6.4 Illustrative Examples In order to demonstrate the validity of the proposed method, two examples are proposed in this section. Example 6.1 We consider the three-node Markovian coupled neural networks ( ) 1.1 0 , (6.1). The parameters are given as follows: .C1 = 0 2.1 ( ) ( ) ( ) 2.35 0 2.2 −0.21 1.46 −0.5 .C 2 = , . A1 = , . A2 = , 0 1.56 −4.81 4.61 −3.41 3.61 ( ( ) ) −1.5 −0.21 −1.82 −0.1 . B1 = , . B2 = , .τ1 (t) = 0.5 + 0.1sin(4t), −0.2 −3.61 −0.29 −4.2 .τ2 (t) = 0.3 + 0.1sin(3t), .a1 = 0.01, .a2 = 0.02, .a3 = 0.03, ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ −5 2 3 −2 1 1 −1 0 1 (1) (1) (2) . G 1 = ⎝ 0 −2 2 ⎠, . G 2 = ⎝ 2 −3 1 ⎠, . G 1 = ⎝ 2 −3 1 ⎠, 1 2 −3 1 1 −2 2 1 −3 ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ −3 2 1 −1 0 1 −4 2 2 (2) (3) (3) . G 2 = ⎝ 0 −1 1 ⎠, . G 1 = ⎝ 1 −2 1 ⎠, . G 2 = ⎝ 1 −2 1 ⎠, 3 3 −6 1 2 −3 2 1 −3 ( ) ( ) ( ) 0.02 0 0.2 0 1.3 0 .Γ1,i = , .Γ2,i = , .Γ3,i = , .i = 1, 2. 0 0.1 0 0.1 0 1.2

6.4 Illustrative Examples

113

Fig. 6.1 Trajectory of error system (6.7) with .u(t) = 0

[ f (y(t))=

] tanh(y1 (t)) , tanh(y2 (t))

satisfies ( ) 00 , Assumption 6.1. From activation function . f (y (t)), we get . E 1 = 00 ( ) 0.5 0 . E2 = . 0 0.5 We obtain .τ1(1) = 0.4, .τ1(2) = 0.6, .τ2(1) = 0.2, .τ2(2) = 0.4. The transition matrix is given as The

activation

function

is

.

( Π=

.

which

) −0.6 0.6 . 0.3 −0.3

The trajectories of error system (6.7) is given in Fig. 6.1 with initial values ( )T ( )T ( )T . y1 (0) = −1.6 1.0 , . y2 (0) = 0.5 −1.2 , . y3 (0) = 1.1 −0.6 , ( )T .s(0) = 0.4 0.5 . We know that the system (6.1) is non-synchronization under condition .u(t) = 0. When .m = 0.1 and .d = 0.08, from Theorem 6.2, the following state feedback controllers ( can be obtained. ) ( ) −8.1669 0.1392 −7.4182 0.1414 . L 11 = , . L 12 = , 2.4493 −12.8440 1.8731 −12.1414 ( ) ( ) −8.1944 0.1324 −7.4317 0.1366 . L 21 = , . L 22 = , 2.4483 −12.8513 1.8708 −12.1120 ( ) ( ) −8.2086 0.1320 −7.4494 0.1368 . L 31 = , . L 32 = . 2.4482 −12.8604 1.8671 −12.1253 Under the above controllers, the trajectories of error system (6.7) and control input are given in Figs. 6.2 and 6.3. From Fig. 6.2, the sampled-data synchronization of Markovian coupled neural networks with time-varying mode delays (6.1) is achieved based on obtained mode-dependent controllers.

114

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

Fig. 6.2 State responses of error system (6.7) with control input

Fig. 6.3 Control input

Example 6.2 We consider the complex network model of the reputation computation for Virtual Organizations in [16, 185] as follows: { y˙ (t) = −C yk (t) + Ag(yk (t)) + B

. k

+ Ik (t) +

N ∑

t −∞

[ G kl Γ1 yl (t) + Γ2

l=1

k = 1, 2, . . . , N .

{

K (t − s)g(yk (s))ds t

−∞

] K (t − s)yl (s)ds , (6.68)

According to the kernel matrix function . K (·) in [185], if . K k (s) = δ(s − τ ) (.τ > 0), the system (6.68) is changed into the following form

6.4 Illustrative Examples

115

y˙ (t) = −C yk (t) + Ag(yk (t)) + Bg(yk (t − τ ))

. k

+ Ik (t) +

N ∑

G kl [Γ1 yl (t) + Γ2 yl (t − τ )],

(6.69)

l=1

where .δ(·) is the Dirac function. System (6.69) is system (6.13) in [185]. We assume that the delay is time-varying and mode-dependent, and the parameters have two different modes. Under control input, we have the following model y˙ (t) = −C(rt )yk (t) + A(rt )g(yk (t)) + B(rt )g(yk (t

. k

− τ (t, rt ))) + I (t) +

N ∑

G (1) kl (r t )Γ1 (r t )yl (t)

l=1

+

N ∑

G (2) kl (r t )Γ2 (r t )yl (t − τ (t, r t )) + u k (t, r t ), k = 1, 2, 3,

(6.70)

l=1

) ) ) ( ( ( 1.1 0 1.5 0 2.31 −0.22 where .C1 = , .C2 = , . A1 = , 0 1.1 0 1.3 −4.81 3.61 ( ( ( ) ) ) 1.9 −0.3 −1.61 −0.62 −2.1 −0.51 . A2 = , . B1 = , . B2 = . The other −4.6 3.5 −1.1 −3.1 −0.31 −5.1 parameters are same as Example 6.1. The state trajectories of error system (6.70) is given in Fig. 6.4 with ini( )T ( )T ( )T tial values . y1 (0) = −1.3 1.2 , . y2 (0) = 1.3 −1.8 , . y3 (0) = 1.2 1.5 , ( )T .s(0) = 0.3 0.4 . We know that the system (6.70) is non-synchronization under condition .u(t) = 0. When .m = 0.1 and .d = 0.06, we know that system (6.70) can achieve synchronization, and the following state feedback controllers can be obtained.

Fig. 6.4 Trajectory of error system (6.70) with .u(t) = 0

116

6 Sampled-Data Synchronization of Markovian Coupled Neural Networks …

Fig. 6.5 State responses with control input

) ) ( −10.2004 0.2433 −10.0008 0.1677 , . L 12 = , 2.1503 −15.1500 2.2307 −15.5869 ( ( ) ) −10.6682 0.0954 −9.9675 0.1462 . L 21 = , . L 22 = , 1.9957 −15.0756 2.1340 −15.2336 ( ( ) ) −11.2635 0.1256 −11.1720 0.0894 . L 31 = , . L 32 = . 2.1360 −15.6277 2.1693 −15.9370 Under the above controllers, the trajectories of error system (6.70) and control input are given in Figs. 6.5 and 6.6. From Fig. 6.5, we know that system (6.70) is synchronization under the controllers. (

.

L 11 =

Fig. 6.6 Control input

6.5 Conclusion

117

6.5 Conclusion In this chapter, the problem of sampled-data synchronization in Markovian coupled neural networks with mode delays is investigated based on aperiodic sampling intervals. The novel delay-dependent sampled-data synchronization conditions are proposed by adopting the mode-dependent augmented Lyapunov-Krasovskii functional, an extended Jensen’s integral. At the same time, the desired controllers are achieved based on sampled-data synchronization conditions. Finally, the effectiveness of the theoretical results are verified by two examples. Further studies include the analysis of finite-time synchronization of coupled switching neural networks via sampled-data control, and the analysis of sampled-data synchronization of coupled neural networks with uncertain parameters and time-varying coupling.

Chapter 7

Synchronization Criteria of Delayed Inertial Neural Networks with Generally Uncertain Transition Rates

In this chapter, we investigate the synchronization problem of inertial neural networks with time-varying delays and generally uncertain transition rates. The second order differential equations are transformed into the first-order differential equations by utilizing the variable transformation method. The Markovian process in the systems is uncertain or partially known due to the delay of data transmission channel or the loss of data information, which is more general and practicable to consider generally Markovian jumping inertial neural networks. The synchronization criteria can be obtained by using the delay-dependent Lyapunov–Krasovskii functionals and higher order polynomial based relaxed inequality. In addition, the desired controllers are obtained by solving a set of LMIs. Finally, numerical examples are provided to demonstrate the effectiveness of the theoretical results.

7.1 Introduction As a kind of special neural networks, the inertial neural networks described by the second-order differential equations were addressed in [5]. Because of finite switching speeds of the amplifiers as well as traffic congestions in the process of signal transmission, the time delays usually exist in real-world systems [9, 49, 192]. Hence, research on inertial neural networks with delays has become a hot topic [15, 52, 105, 107]. In [15], the single inertial BAM neural network with time-varying delays and external inputs was investigated, and several sufficient conditions of the global exponential stability were proposed. Based on [15], some novel global exponential stability assertions of inertial neural networks were proposed by utilizing non-reduced order method. Owning to the various engineering applications of synchronization, such as secure communication, biological systems, chaos generator design, and harmonic oscillation generation [1, 2, 7, 10, 20, 29, 35, 118, 125, 149], the synchronization criteria of the neural networks have attracted many researchers. In [158], the authors investi© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2_7

119

120

7 Synchronization Criteria of Delayed Inertial Neural Networks …

gated global exponential lag synchronization of switched neural networks with timevarying delays, and the obtained results were applied to the field of image encryption. After that, the authors in [105] concerned with the synchronization criteria of delayed bidirectional inertial neural networks and the applications in secure image communications. Based on novel Lyapunov–Krasovskii functional, synchronization problem for inertial neural networks with heterogeneous time-varying delays was addressed through quantized sampled-data control in [209], and corresponding synchronization criteria were proposed in the terms of LMIs. Due to the fact that many practical systems experience random changes in their parameters or structures, Markovian jumping systems have been developed into an active topic in the control field [57, 64, 107, 137, 141, 142, 175]. In [205], the stability and stabilization problems of Markovian jumping linear systems with partly known transition rates were investigated. Based on [205], the mode jumping transition rates were assumed to be partially unknown, and corresponding state estimation problem of discrete Markovian jumping neural networks was addressed in [172]. After that, stability and synchronization for Markovian jumping neural networks with partly unknown transition rates were investigated in [92]. The semi-Markov jumping systems were investigated to reduce conservativeness caused by constant transition rates. In [107], state estimation problem of semi-Markovian jumping neural networks with event-triggered scheme was investigated, and corresponding event-triggered stabilization criteria were proposed. However, all these Markovian jumping systems are the first order differential equation. In [105], the second order inertial neural networks with Markovian jumping was concerned, and synchronization criteria were proposed. After that, the reachable set estimation for inertial Markovian jumping neural network with partially unknown transition rates was investigated in [56], and stability analysis of inertial Cohen-Grossberg neural networks was addressed in [54]. However, in most cases, the transition rates in Markovian process are difficult to precisely acquire due to the limitations of equipment and the influence of uncertain factors [57, 142]. There are few results for inertial neural networks with uncertain and partially unknown transition rates. Whether in theory or in practice, it is necessary to further consider the synchronization problem of more general inertial neural networks with generally Makrovian jumping and its applications in image encryption. By benefiting from the above discussions, this chapter addresses synchronization issue of inertial neural networks with uncertain and partially unknown transition rates. Moreover, the applications of obtained solutions are utilized in image secure communications. The synchronization criteria are proposed to ensure drive system and response system synchronized in mean square by designing suitable error-feedback control gains. The contributions of this chapter are summarized as follows. 1. For Markovian jumping inertial neural networks, each transition rate can be completely unknown or only the estimate value is known. Compared with traditional transition rates in [54, 56, 105], the inertial neural networks in this chapter is more practical because the transition rates in Markovian process are difficult to precisely acquire due to the limitations of equipment and the influence of uncertain factors.

7.2 Problem Statement and Preliminaries

121

{ t−τ (t) 2. The delay-dependent Lyapunov–Krasovskii functional. τ −τ1 (t) .× t−τ e1T (s)dsU2 { t−τ (t) {t {t 1 T . e1 (s)ds + τ (t) . t−τ t−τ (t) e1 (s)dsU3 . t−τ (t) e1 (s)ds is utilized in the chapter, which makes derivative term of the Lyapunov–Krasovskii functional contain more information about time-varying delay and its derivative. 3. According to delay-dependent Lyapunov–Krasovskii functional and higher order polynomial based relaxed inequality, the synchronization criteria including estimate value and estimate error of the uncertain transition rates are obtained based on three types of transition rate cases and the convex theory. The remainder of this chapter is organized as follows. In Sect. 7.2, some preliminaries and the inertial neural networks with generally Markovian jumping are introduced. In Sect. 7.3, the synchronization criteria and control gains of error inertial neural networks are proposed. In Sect. 7.4, three examples are given to demonstrate the validity of the proposed results. Finally, conclusions are drawn in Sect. 7.5. Notation: Throughout this chapter, .Rn and .Rn×n denote .n dimensional vectors and .n × n real matrices, respectively. .|| · || stands for Euclidean vector norm. The symbol .⊗ represents Kronecker product. . X T denotes the transpose of matrix . X . . X ≥ 0 (. X < 0) means that . X is a real symmetric positive semidefinite matrix (negative X + X T .. In represents the.n dimensional identity matrix. definite ( ( } =) )matrix)..Sym{X X Y X Y . stands for . . .∅ stands for empty set. .E stands for the mathematical ∗ Z YT Z expectation. .eγ = [0n×(γ−1)n , In , 0n×(15−γ)n ](γ = 1, 2, . . . , 15). Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.

7.2 Problem Statement and Preliminaries Let .{rt , t ≥ 0} be a right-continuous Markovian process on the probability space and taking values in a finite state space .G = {1, 2, . . . , N } with generator .Π = (πi j ), i, j ∈ G given by { .

Pr {rt+Δt = j | rt = i} =

πi j Δt + o(Δt), i /= j, 1 + πii Δt + o(Δt), i = j,

where.Δt > 0, lim (o(Δt)/Δt) = 0.πi j ≥ 0 (i /= j) is the transition rate from mode Δt→0

i at time .t to mode . j at time .t + Δt, and .πii = −

.

N ∑ j=1, j/=i

πi j . Each .rt is denoted by

i, .i ∈ G. In this chapter, transition rates of the jumping process are considered to be general uncertain. For instance, transition rate matrix .Π with .N operation modes may be expressed as

.

122

7 Synchronization Criteria of Delayed Inertial Neural Networks …

⎛ ⎜ ⎜ Π =⎜ ⎝

.

? π˜ 21 + Δ21 .. .

? ? .. .

? · · · π˜ 1N + Δ1N ? · · · π˜ 2N + Δ2N .. . . .. . . .

⎞ ⎟ ⎟ ⎟, ⎠

(7.1)

π˜ N 1 + ΔN 1 ? ? · · · π˜ N N + ΔN N where .? represents the completely unknown transition rate. .π˜ i j and .Δi j represent the estimate value and estimate error of the uncertain transition rate .πi j , respectively. .|| Δi j || ≤ ϖi j and .ϖi j ≥ 0. .π ˜ i j , .ϖi j are known. For notational clarity, .∀ i ∈ G, we ∪ i , where.Gki = { j : The estimate value of.πi j is known for. j ∈ G} denote.G i = Gki Guk i and .Guk = { j : The estimate value of .πi j is unknown for . j ∈ G}. If .Gki /= ∅, it is described as .Gki = {|i1 , |i2 , . . . , |iK }, where .|iK represents the bound-known element with the index .|iK in the .ith row of matrix .Π . According to the properties of the transition rates, we assume that the known / Gki , estimate values of the transition rates are defined as follows: If .Gki /= G, and .i ∈ i then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ Gk ). If∑.Gki /= G, and .i ∈ Gki , then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ Gki , j /= i), .π˜ ii + ϖii ≤ 0, and . π˜ i j ≤ 0. j∈Gki

If .Gki = G, then .π˜ i j − ϖi j ≥ 0, (∀ j ∈ G, j /= i), π˜ ii = − N ∑ j=1, j/=i

N ∑ j=1, j/=i

π˜ i j ≤ 0, and .ϖii =

ϖi j ≥ 0.

Remark 7.1 The transition rate matrices in [54, 56, 92, 105] are the following forms with completely known transition rates or partially unknown transition rates. ⎛

π11 π12 ⎜ π21 π22 ⎜ .⎜ . .. ⎝ .. . πN 1 π N 2

· · · π1N · · · π2N . .. . .. · · · πN N





? π12 ⎟ ⎜ π21 π22 ⎟ ⎜ ⎟ , ⎜ .. .. ⎠ ⎝ . . πN 1 ?

⎞ ? · · · π1N ? ··· ? ⎟ ⎟ .. . . . ⎟. . . .. ⎠ ? · · · πN N

Different from above-mentioned transition rate matrices, more general transition rate matrix (7.1) is considered in this chapter, which could make the best use of estimation value and bounds of uncertain elements in the transition rate matrix. Consider the following inertial neural network ∑ d 2 xk (t) d xk (t) 1 − bk (rt )xk (t) + = −ak (rt ) wkl (rt ) fl (xl (t)) 2 dt dt l=1 n

.

+

n ∑ l=1

2 wkl (rt ) fl (xl (t − τ (t))) + Jk , k = 1, 2, . . . , n.

(7.2)

7.2 Problem Statement and Preliminaries

123

The second derivative is known as inertial term of system (7.2). .xk (t) is the state 1 2 (rt ) and .wkl (rt ) of the .kth neuron at time .t. .ak (rt ) > 0, .bk (rt ) > 0 are constants. .wkl are connection weights related to the neurons without delays and with delays, and they satisfy Markovian jumping process. . fl (·) denotes the activation function of .lth neuron at time .t. . Jk (t) is an external input of the .kth neuron at time .t. Time-varying delay .τ (t) satisfies .0 ≤ τ (t) ≤ τ , .μ1 ≤ τ˙ (t) ≤ μ2 , where .τ , .μ1 , μ2 are constants. k (s) = The initial condition associated with (7.2) is given as follows:.xk (s) = φk (s),. d xds ˜ ˜ φk (s), .s ∈ ([−τ , 0]) and .φk (s), φk (s) ∈ C([−τ , 0], R). Assumption 7.1 ([84]) For any .x1 , x2 ∈ R, there exist constants .ek− , ek+ , such that e− ≤

. k

f k (x1 ) − f k (x2 ) ≤ ek+ , k = 1, 2, . . . , n. x1 − x2

We denote .

E 1 = diag[e1+ e1− , . . . , en+ en− ], E 2 = diag



˺ e+ + en− e1+ + e1− ,..., n . 2 2

For constant .ξk , the following transformation is employed

y (t) =

. k

d xk (t) + ξk xk (t), k = 1, 2, . . . , n. dt

(7.3)

After that, the system (7.2) is rewritten as follows .

d xk (t) = −ξk xk (t) + yk (t), dt dyk (t) = −[bk (rt ) + ξk (ξk − ak (rt ))]xk (t) − (ak (rt ) − ξk )yk (t) dt n n ∑ ∑ 1 2 + wkl (rt ) fl (xl (t)) + wkl (rt ) fl (xl (t − τ (t))) + Jk l=1

l=1

= −a˜ k (rt )xk (t) − b˜k (rt )yk (t) +

n ∑

1 wkl (rt ) fl (xl (t))

l=1

+

n ∑

2 wkl (rt ) fl (xl (t − τ (t))) + Jk ,

(7.4)

l=1

where .a˜ k (rt ) = bk (rt ) + ξk (ξk − ak (rt )), .b˜k (rt ) = ak (rt ) − ξk . The initial condition associated with (7.4) is considered as follows .xk (s) = φk (s), . d xdtk (s) = φ˜ k (s), . yk (s) = φ˜ k (s) + ξk φk (s), and .s ∈ ([−τ , 0]). Now, the system (7.4) is rewritten in the following form

124

.

7 Synchronization Criteria of Delayed Inertial Neural Networks …

d x(t) = −Ax(t) + y(t), dt dy(t) = −B(rt )y(t) − C(rt )x(t) + W1 (rt ) f (x(t)) + W2 (rt ) f (x(t − τ (t))) + J, dt (7.5)

where .x(t) = [x1 (t), x2 (t), . . . , xn (t)]T , . y(t) = [y1 (t), y2 (t), . . . , yn (t)]T , . A = diag{ξ1 , . . . , ξn }, . B(rt ) = diag{a1 (rt ) − ξ1 , · · · , an (rt ) − ξn }, .C(rt ) = diag{b1 (rt ) 1 (rt ))n×n , .W2 (rt ) + ξ1 (ξ1 − a1 (rt )), . . . , bn (rt ) + ξn (ξn − an (rt ))}, .W1 (rt ) = (wkl 2 = (wkl (rt ))n×n , . J = diag{J1 , . . . , Jn }. To be more convenient, each possible value of.rt is denoted by.i(i ∈ G). According to (7.5), one gets .

d x(t) = −Ax(t) + y(t), dt dy(t) = −Bi y(t) − Ci x(t) + Wi1 f (x(t)) + Wi2 f (x(t − τ (t))) + J. dt

(7.6)

For the drive system (7.6), the response system is considered as .

d x(t) ˆ = −A x(t) ˆ + yˆ (t) + u 1 (t), dt d yˆ (t) = −Bi yˆ (t) − Ci x(t) ˆ + Wi1 fˆ(x(t)) ˆ + Wi2 fˆ(x(t ˆ − τ (t))) + J + u 2 (t), dt (7.7)

where .x(t) ˆ = [xˆ1 (t), xˆ2 (t), . . . , xˆn (t)]T , . yˆ (t) = [ yˆ1 (t), yˆ2 (t), . . . , yˆn (t)]T are states response of controlled system. .u 1 (t) ∈ Rn , .u 2 (t) ∈ Rn are the control inputs. The ˆ − x(t), and.e2 (t) = yˆ (t) − y(t)..g(e1 (t)) = fˆ(x(t)) ˆ − f (x(t)). error is.e1 (t) = x(t) Then, one gets the error system as follows .

de1 (t) = −Ae1 (t) + e2 (t) + u 1 (t), dt de2 (t) = −Bi e2 (t) − Ci e1 (t) + Wi1 g(e1 (t)) + Wi2 g(e1 (t − τ (t))) + u 2 (t). (7.8) dt

The control inputs are assumed to be error-feedback control terms, which is given as follows u (t) = L 1i e1 (t),

. 1

u 2 (t) = L 2i e2 (t). After that, the system (7.8) is rewritten as follows

(7.9)

7.2 Problem Statement and Preliminaries

.

125

de1 (t) = −(A − L 1i )e1 (t) + e2 (t), dt de2 (t) = −(Bi − L 2i )e2 (t) − Ci e1 (t) + Wi1 g(e1 (t)) + Wi2 g(e1 (t − τ (t))). dt (7.10)

Lemma 7.1 (Higher order polynomial based relaxed inequality [31]) For given matrices. Z l = Z lT > 0, (l = 1, 2), there are matrices. N1 , N2 , the following inequality holds: {

a(t)

a

. 12

{

a1



ω ζ (t) 1 ω2

˺T

┌ Δ=

.

x˙ T (s)Z 2 x(s)ds ˙ ≥

a(t)

˺ ω1 ζ(t), Δ ω2 ┌

T

where

a2

x˙ T (s)Z 1 x(s)ds ˙ + a12

(7.11)

˺ ~ ~1 (1 − γ)N1 + γ N2 Z 1 + (1 − γ)Ξ , ~ ~2 ∗ Z2 + γΞ a(t) − a1 , a12

γ=

.

ζ(t) =

.

[x T (a2 ), x T (a(t)), x T (a1 ), ζ1T (t), ζ2T (t), ζ3T (t), ζ4T (t), ζ5T (t), ζ6T (t)],

.

ζ (t) =

. 1

ζ (t) =

. 2

ζ (t) =

. 3

ζ (t) =

. 5

x(s)ds,

{

6 (a2 − a(t))3

{

a2

x(s)ds, a(t)

a(t)

{

a(t)

x(s)dsdu, a1

2 (a2 − a(t))2

6 (a(t) − a1 )3

ζ (t) =

. 6

{

a(t)

a1

1 a2 − a(t)

2 (a(t) − a1 )2

ζ (t) =

. 4

{

1 a(t) − a1

u

{

a(t)

a2

{

a2

x(s)dsdu, a(t)

{

u

a(t)

{

a(t)

x(s)dsdvdu, a1

v

u

{

a2

{

a2

{

a2

x(s)dsdvdu, a(t)

u

v

126

7 Synchronization Criteria of Delayed Inertial Neural Networks … .

~ Z 2 = diag{Z 2 , 3Z 2 , 5Z 2 , 7Z 2 }, Z 1 = diag{Z 1 , 3Z 1 , 5Z 1 , 7Z 1 }, ~ ~1 = ~ ~2 = ~ Ξ Z 1 − N2T ~ Z 2 − N1T ~ Z 2−1 N2 , Ξ Z 1−1 N1 ,

.

⎤ e3 ~ e2 − ~ ⎥ ⎢ ~ e2 + ~ e3 − 2~ e4 ⎥, .ω1 = ⎢ ⎦ ⎣ ~ e2 − ~ e3 + 6~ e4 − 6~ e6 ~ e2 + ~ e3 − 12~ e4 + 30~ e6 − 20~ e8 ⎡

⎤ e2 ~ e1 − ~ ⎥ ⎢ ~ e1 + ~ e2 − 2~ e5 ⎥, .ω2 = ⎢ ⎦ ⎣ ~ e1 − ~ e2 + 6~ e5 − 6~ e7 ~ e1 + ~ e2 − 12~ e5 + 30~ e7 − 20~ e9 ⎡

~ e = [0n×(k−1)n In×n 0n×(9−k)n ], k = 1, 2, . . . , 9.

. k

Lemma 7.2 ([174]) Given any real number .λ and any square matrix . P, the matrix inequality T 2 −1 T .λ(P + P ) ≤ λ H + P H P holds for any matrix . H .

7.3 Main Results In this section, the synchronization criteria of the error system with generally Markovian jumping are obtained based on delayed-dependent Lyapunov–Krasovskii functional and higher order polynomial based relaxed inequality. Theorem 7.1 Under Assumptions 7.1, generally Markovian jumping drive system (7.2) and response system (7.7) are synchronous if there exist positive definite matrices . Pi ∈ Rn×n , . Z i ∈ Rn×n , . Q α2 ∈ R2n×2n (α2 = 1, 2), . R ∈ Rn×n , . S ∈ Rn×n , n×n .T ∈ R , .U1 ∈ R2n×2n , .U2 ∈ Rn×n , .U3 ∈ Rn×n , and positive diagonal matrices . Rα1 ∈ Rn×n (α1 = 1, 2, 3), and any matrices . X 1 ∈ R4n×4n , X 2 ∈ R4n×4n , . F1 ∈ Rn×n , F2 ∈ Rn×n , such that, for any .i ∈ G, the succeeding matrix inequalities are satisfied. / Gki and.Gki = {|i1 , |i2 , . . . , |iK1 }, there are positive definite matrices. Hi j , H˜ i j ∈ If.i ∈ i Rn×n (i ∈ / Gki , j ∈ Gki ) and . Pi − P j ≥ 0, Z i − Z j ≥ 0 ( j ∈ Guk , j /= i), such that

7.3 Main Results

127



Ω˜ iδ Λ|i1 Λ|i2 ⎜ ∗ −H i 0 ⎜ i|1 ⎜ ∗ ∗ −Hi|i2 ⎜ ⎜ . .. .. ⎜ . ⎜ . . . ⎜ ∗ ∗ ˜ iδ = ⎜ ∗ .Θ ⎜ ⎜ ∗ ∗ ∗ ⎜ ⎜ ∗ ∗ ⎜ ∗ ⎜ . .. .. ⎜ . ⎝ . . . ∗ ∗ ∗

· · · Λ|iK 1 ··· 0 ··· 0 .. .. . . · · · −Hi|iK ··· ··· .. . ···

∗ ∗ .. . ∗

1

Λ˜ |i1 0 0 .. . 0 − H˜ i|i

Λ˜ |i2 0 0 .. . 0

0 ∗ − H˜ i|i2 .. ··· . ∗ ∗ 1



· · · Λ˜ |iK 1 ··· 0 ··· 0 .. .. . . ··· 0 ··· 0 ··· 0 .. .. . . · · · − H˜ i|iK

δ = 1, 2, 3, 4.

⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ < 0, ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ 1

(7.12)

where

Ω˜ i1 =

Ω˜ i2 =

Ω˜ i3 =

Ω˜ i4 =

(

(

(

(

Ω˜ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ1 1 T X Σ τ 2 4

1 T Σ X2 τ 4 − τ1 S˜

Ω˜ i (τ (t), τ˙ (t))τ (t)=0,τ˙ (t)=μ2 1 T X Σ τ 2 4

1 T Σ X2 τ 4 − τ1 S˜

Ω˜ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ1 1 X Σ τ 1 5

1 T T Σ X1 τ 5 − τ1 S˜

Ω˜ i (τ (t), τ˙ (t))τ (t)=τ ,τ˙ (t)=μ2 1 X Σ τ 1 5

1 T T Σ X1 τ 5 − τ1 S˜

) , ) , ) , ) .

Ω˜ i (τ (t), τ˙ (t)) = Ω1i + Ω˜ 2i + .Ω˜ 3i + Ω4 (τ (t)) + Ω5 (τ (t), τ˙ (t))+ .Ω6 (τ˙ (t)), T T Ω1i = Σ2T (Q 1 + Q 2 )Σ2 − Σ3T Q 2 Σ3 +.τ e12 Se12 + Sym{e1T Pi e12 }+.Sym{e11 Z i e13 } + Sym{e7T U3 e1 .−e8T U2 e3 } + Σ2T Ξ1 Σ2 + Σ1T Ξ2 Σ1 + Σ3T Ξ3 Σ3 .−Sym{σ1 e1T F1 A T F1 Ae1 } e1 } + Sym{σ1 e1T M1i e1 } .+Sym{σ1 e1T F1 e11 } − Sym{σ1 e1T F1 e12 } .−Sym{e12 T T T T .−Sym{σ2 e11 F2 Bi e11 } + + Sym{e12 M1i e1 } + Sym{e12 F1 e11 } − Sym{e12 F1 e12 } T T T T M2i e11 } − Sym{σ2 e11 F2 Ci e1 } .+Sym{σ2 e11 F2 Wi1 e4 } + Sym{σ2 e11 F2 Sym{σ2 e11 2 T T T T Wi e5 } .−Sym{σ2 e11 F2 e13 } .−Sym{e13 F2 Bi e11 } + Sym{e13 M2i e11 } − Sym{e13 F ∑ 2 T 1 T 2 T T ˜ Ci e1 } .+Sym{e13 F2 Wi e4 } + Sym{e13 F2 Wi e5 } − Sym{e13 F2 e13 }, .Ω2i = e1 π˜ i j

. .

(P j − Pi )e1 + 14 e1T Ω˜ 3i =

.

T e11

∑ j∈Gki

∑ j∈Gki

j∈Gki

ϖi2j Hi j e1 ,

T π˜ i j (Z j − Z i )e11 + 41 e11

∑ j∈Gki

ϖi2j H˜ i j e11 ,

128

7 Synchronization Criteria of Delayed Inertial Neural Networks …

Ω4 (τ (t)) = τ (t)Sym{e7T R(e1 − e3 )} + (τ − τ (t))Sym{e8T R(e1 − e3 )} − τ1 [

.

Σ4 T ] Σ5

Σ4 ], Σ5 .Ω5 (τ (t), τ ˙ (t)) = Sym{[τ (t)e7T , (τ − τ (t))e8T ]U1 [e1T − (1 − τ˙ (t))e2T , (1 − τ˙ (t))e2T T T − e3 ] }, .Ω6 (τ ˙ (t)) = τ˙ (t){e7T U3 e7 − e8T U2 e8 + Sym(e8T U2 e8 − e7T U3 e7 )} + (1 − τ˙ (t)) {−Σ1T Q 1 Σ1 + Sym(e8T U2 e2 − e7T U3 e2 )}, T T T T T T T T T .Σ1 = [e2 , e5 ] , .Σ2 = [e1 , e4 ] , .Σ3 = [e3 , e6 ] , T T T T T T T T T T T T T .Σ4 = [e1 − e2 , e1 + e2 − 2e7 , e1 − e2 + 6e7 − 6e9 , e1 + e2 − 12e7 + 30e9 T T − 20e14 ] , T T T T T T T T T T T T T .Σ5 = [e2 − e3 , e2 + e3 − 2e8 , e2 − e3 + 6e8 − 6e10 , e2 + e3 − 12e8 + 30e10 T T − 20e15 ] , ˜ = diag{S, 3S, 5S, 7S}, . M1i = F1 L 1i , . M2i = F2 L 2i , . S ˺ ˺ ┌ ┌ −Rα1 E 1 Rα1 E 2 (2 − ∈)~ S (1 − ∈)X 1 + ∈X 2 τ (t) ,.∈ = τ ,.Ξα1 = .Φ2 (∈, 1 − ∈) = ∗ −Rα1 ∗ (1 + ∈)~ S (α1 = 1, 2, 3), T T .Λ|i = [(P|i − Pi ) 0 · · · 0] , .Λ|i = [(P|i − Pi ) 0 · · · 0] , ~ ~~ ~ ~ ~~ ~ 1 1 2 2 Φ2 (∈, 1 − ∈)[

17

Λ|iK = [(P|iK − Pi )

.

1

1

17

0~ ·~~ · · 0~]T ,

· · 0~ Λ˜ |i1 = [0~ ·~~

(Z |i1 − Z i )

.

17

0~ ·~~ · · 0~]T ,

10

7

Λ˜ |i2 = [0~ ·~~ · · 0~ (Z |i2 − Z i ) ~0 ·~~ · · 0~]T , .Λ˜ |iK = [0~ ·~~ · · 0~ (Z |iK − Z i ) ~0 ·~~ · · 0~]T . 1 1

.

10

7

10

7

i /= ∅, there are positive definite matrices If .i ∈ Gki (Gki = {|i1 , · · · , |iK2 }) and .Guk i i i n×n ˜i jQ ∈ R (i, j ∈ Gk , Q ∈ Guk ) and . PQ − P j ≥ 0, Z Q − Z j ≥ 0 (∀ j ∈ Guk . Vi jQ , V ), such that

⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ˜ δ ˜i = ⎜ .Θ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝

Q Ω˜˜ iδ Λ|i

1

Q

Λ|i

2

∗ −Vi|i1 Q 0 ∗ ∗ −Vi|i2 Q .. .. .. . . . ∗ ∗ ∗ ∗ ∗ .. . ∗

∗ ∗ .. . ∗

(δ = 1, 2, 3, 4),

∗ ∗ .. . ∗

Q

· · · Λ|i

K2

··· 0 ··· 0 .. .. . . · · · −Vi|iK ··· ··· .. . ···

∗ ∗ .. . ∗

2

Q Λ˜ |i

Q Λ˜ |i

0 0 .. . 0 −V˜i|i Q

0 0 .. . 0

1

Q

∗ ··· ∗

1

2

0 ˜ −Vi|i2 Q .. . ∗



Q · · · Λ˜ |i

··· ··· .. . ···

⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ < 0, ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

K2

0 0 .. . 0

··· 0 ··· 0 .. .. . . ˜ · · · −Vi|iK

2

Q

(7.13)

7.3 Main Results

129 Q

Q

· · 0~]T , where .Λ|i = [(P|i1 − PQ ) 0~ ·~~ · · 0~]T , .Λ|i = [(P|i2 − PQ ) ~0 ·~~ 1 2 17

Q

Λ|i

.

K2

˜ Qi .Λ |2

17

Q

· · 0~ (Z |i1 − Z Q ) ~0 ·~~ · · 0~]T , = [(P|iK − PQ ) 0~ ·~~ · · 0~]T , .Λ˜ |i = [0~ ·~~ 2 1 17

10

7

Q · · 0~]T , .Λ˜ |i = [0~ ·~~ · · 0~ (Z |iK − Z Q ) 0~ ·~~ · · 0~]T , = [0~ ·~~ · · 0~ (Z |i2 − Z Q ) 0~ ·~~ 2 K2 10

7

∑ ∑ 2 Ω˜˜ 2i = e1T π˜ i j (P j − PQ )e1 + 14 e1T ϖi j Vi jQ e1 ,

10

7

.

j∈Gki

T Ω˜˜ 3i = e11

.

j∈Gki



j∈Gki

T π˜ i j (Z j − Z Q )e11 + 41 e11

∑ j∈Gki

ϖi2j V˜i jQ e11 .

In the matrices $\tilde{\tilde\Omega}_i^\delta$ $(\delta = 1, 2, 3, 4)$, only the elements $\tilde{\tilde\Omega}_{2i}$ and $\tilde{\tilde\Omega}_{3i}$ differ from $\tilde\Omega_{2i}$ and $\tilde\Omega_{3i}$; the other elements of $\tilde{\tilde\Omega}_i^\delta$ are the same as those of $\tilde\Omega_i^\delta$ $(\delta = 1, 2, 3, 4)$.

If $i \in G_k^i$ and $G_{uk}^i = \emptyset$, there are positive definite matrices $W_{ij}, \tilde W_{ij} \in \mathbb{R}^{n\times n}$ ($i, j \in G_k^i$), such that

$$\tilde{\tilde{\tilde\Theta}}_i^\delta = \begin{pmatrix}
\tilde{\tilde{\tilde\Omega}}_i^\delta & \Lambda_1 & \cdots & \Lambda_{i-1} & \Lambda_{i+1} & \cdots & \Lambda_N & \tilde\Lambda_1 & \cdots & \tilde\Lambda_N\\
* & -W_{i1} & \cdots & 0 & 0 & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
* & * & \cdots & -W_{i(i-1)} & 0 & \cdots & 0 & 0 & \cdots & 0\\
* & * & \cdots & * & -W_{i(i+1)} & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
* & * & \cdots & * & * & \cdots & -W_{iN} & 0 & \cdots & 0\\
* & * & \cdots & * & * & \cdots & * & -\tilde W_{i1} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
* & * & \cdots & * & * & \cdots & * & * & \cdots & -\tilde W_{iN}
\end{pmatrix} < 0, \quad (\delta = 1, 2, 3, 4), \qquad (7.14)$$

where $\Lambda_1 = [(P_1 - P_i)\ \underbrace{0 \cdots 0}_{17}]^T$, $\Lambda_{i-1} = [(P_{i-1} - P_i)\ \underbrace{0 \cdots 0}_{17}]^T$, $\Lambda_{i+1} = [(P_{i+1} - P_i)\ \underbrace{0 \cdots 0}_{17}]^T$, $\Lambda_N = [(P_N - P_i)\ \underbrace{0 \cdots 0}_{17}]^T$,

$\tilde{\tilde{\tilde\Omega}}_{2i} = e_1^T \sum_{j=1, j\neq i}^{N} \tilde\pi_{ij}(P_j - P_i)e_1 + \frac{1}{4} e_1^T \sum_{j=1, j\neq i}^{N} \varpi_{ij}^2 W_{ij} e_1$,

$\tilde\Lambda_1 = [\underbrace{0 \cdots 0}_{10}\ (Z_1 - Z_i)\ \underbrace{0 \cdots 0}_{7}]^T$, $\tilde\Lambda_N = [\underbrace{0 \cdots 0}_{10}\ (Z_N - Z_i)\ \underbrace{0 \cdots 0}_{7}]^T$,

$\tilde{\tilde{\tilde\Omega}}_{3i} = e_{11}^T \sum_{j=1, j\neq i}^{N} \tilde\pi_{ij}(Z_j - Z_i)e_{11} + \frac{1}{4} e_{11}^T \sum_{j=1, j\neq i}^{N} \varpi_{ij}^2 \tilde W_{ij} e_{11}$.


In the matrices $\tilde{\tilde{\tilde\Omega}}_i^\delta$ $(\delta = 1, 2, 3, 4)$, only the elements $\tilde{\tilde{\tilde\Omega}}_{2i}$ and $\tilde{\tilde{\tilde\Omega}}_{3i}$ differ from $\tilde\Omega_{2i}$ and $\tilde\Omega_{3i}$; the other elements of $\tilde{\tilde{\tilde\Omega}}_i^\delta$ are the same as those of $\tilde\Omega_i^\delta$ $(\delta = 1, 2, 3, 4)$. Moreover, the controller gain matrices $L_{1i}$ and $L_{2i}$ are given as

$$L_{1i} = F_1^{-1}M_{1i}, \qquad (7.15)$$
$$L_{2i} = F_2^{-1}M_{2i}. \qquad (7.16)$$
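As a brief practical note on (7.15)–(7.16): once the LMIs have been solved numerically, the gains are recovered by a linear solve rather than an explicit inverse. The following is a minimal sketch, assuming $F_1$ and $M_{1i}$ are the numerical solver outputs; the values below are placeholders, not taken from this chapter's examples.

```python
import numpy as np

# Placeholder LMI solutions (illustrative values only, not from Theorem 7.1).
F1 = np.array([[2.0, 0.1],
               [0.1, 2.0]])
M1i = np.array([[-3.7, 0.2],
                [0.1, -3.6]])

# L_{1i} = F_1^{-1} M_{1i}, cf. (7.15); solve() avoids forming the inverse.
L1i = np.linalg.solve(F1, M1i)
# The gain L_{2i} in (7.16) is recovered from F2 and M2i in the same way.
```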

Proof Consider the following delay-dependent Lyapunov–Krasovskii functional for the error system (7.10):
$$V(e_1(t), e_2(t), t, i) = \sum_{q=1}^{6} V_q(t, i).$$
Let $\eta(t) = \begin{bmatrix} e_1(t) \\ g(e_1(t)) \end{bmatrix}$, $\vartheta(s) = \begin{bmatrix} \int_{t-\tau(t)}^{t} e_1(s)ds \\ \int_{t-\tau}^{t-\tau(t)} e_1(s)ds \end{bmatrix}$.

$$V_1(t, i) = e_1^T(t)P_i e_1(t) + e_2^T(t)Z_i e_2(t), \qquad (7.17)$$
$$V_2(t, i) = \int_{t-\tau(t)}^{t}\eta^T(s)Q_1\eta(s)ds + \int_{t-\tau}^{t}\eta^T(s)Q_2\eta(s)ds, \qquad (7.18)$$
$$V_3(t, i) = \int_{t-\tau}^{t}e_1^T(s)ds\, R \int_{t-\tau}^{t}e_1(s)ds, \qquad (7.19)$$
$$V_4(t, i) = \int_{-\tau}^{0}\int_{t+\theta}^{t}\dot e_1^T(s)S\dot e_1(s)ds\, d\theta, \qquad (7.20)$$
$$V_5(t, i) = \vartheta^T(s)U_1\vartheta(s), \qquad (7.21)$$
$$V_6(t, i) = \frac{1}{\tau-\tau(t)}\int_{t-\tau}^{t-\tau(t)}e_1^T(s)ds\, U_2 \int_{t-\tau}^{t-\tau(t)}e_1(s)ds + \frac{1}{\tau(t)}\int_{t-\tau(t)}^{t}e_1^T(s)ds\, U_3 \int_{t-\tau(t)}^{t}e_1(s)ds. \qquad (7.22)$$


Let $\mathcal{L}$ be the weak infinitesimal operator. Calculating the time derivative of $V(t, i)$, one has
$$\mathcal{L}V_1(t, i) = 2e_1^T(t)P_i\dot e_1(t) + 2e_2^T(t)Z_i\dot e_2(t) + \sum_{j=1}^{N}\pi_{ij}e_1^T(t)P_je_1(t) + \sum_{j=1}^{N}\pi_{ij}e_2^T(t)Z_je_2(t) = \xi^T(t)\Big[2e_1^TP_ie_{12} + 2e_{11}^TZ_ie_{13} + \sum_{j=1}^{N}\pi_{ij}e_1^TP_je_1 + \sum_{j=1}^{N}\pi_{ij}e_{11}^TZ_je_{11}\Big]\xi(t), \qquad (7.23)$$
$$\mathcal{L}V_2(t, i) = \eta^T(t)Q_1\eta(t) - (1-\dot\tau(t))\eta^T(t-\tau(t))Q_1\eta(t-\tau(t)) + \eta^T(t)Q_2\eta(t) - \eta^T(t-\tau)Q_2\eta(t-\tau), \qquad (7.24)$$
$$\mathcal{L}V_3(t, i) = 2\int_{t-\tau}^{t}e_1^T(s)ds\, R(e_1(t)-e_1(t-\tau)) = 2\Big[\tau(t)\frac{1}{\tau(t)}\int_{t-\tau(t)}^{t}e_1^T(s)ds + (\tau-\tau(t))\frac{1}{\tau-\tau(t)}\int_{t-\tau}^{t-\tau(t)}e_1^T(s)ds\Big]R(e_1(t)-e_1(t-\tau)) = \xi^T(t)\big[2\tau(t)e_7^TR(e_1-e_3) + 2(\tau-\tau(t))e_8^TR(e_1-e_3)\big]\xi(t), \qquad (7.25)$$
$$\mathcal{L}V_4(t, i) = \tau\dot e_1^T(t)S\dot e_1(t) - \int_{t-\tau}^{t}\dot e_1^T(s)S\dot e_1(s)ds = \xi^T(t)\tau e_{12}^TSe_{12}\xi(t) - \int_{t-\tau}^{t}\dot e_1^T(s)S\dot e_1(s)ds, \qquad (7.26)$$
$$\mathcal{L}V_5(t, i) = 2\xi^T(t)\begin{bmatrix}\tau(t)e_7\\ (\tau-\tau(t))e_8\end{bmatrix}^T U_1 \begin{bmatrix}e_1 - (1-\dot\tau(t))e_2\\ (1-\dot\tau(t))e_2 - e_3\end{bmatrix}\xi(t), \qquad (7.27)$$
$$\mathcal{L}V_6(t, i) = -\dot\tau(t)\xi^T(t)e_8^TU_2e_8\xi(t) + 2\xi^T(t)e_8^TU_2\big[(1-\dot\tau(t))e_2 - e_3 + \dot\tau(t)e_8\big]\xi(t) + \dot\tau(t)\xi^T(t)e_7^TU_3e_7\xi(t) + 2\xi^T(t)e_7^TU_3\big[e_1 - (1-\dot\tau(t))e_2 - \dot\tau(t)e_7\big]\xi(t). \qquad (7.28)$$
According to Lemma 7.1, one can obtain


$$-\int_{t-\tau}^{t}\dot e_1^T(s)S\dot e_1(s)ds = -\int_{t-\tau(t)}^{t}\dot e_1^T(s)S\dot e_1(s)ds - \int_{t-\tau}^{t-\tau(t)}\dot e_1^T(s)S\dot e_1(s)ds \le -\xi^T(t)\frac{1}{\tau}\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix}^T\Phi_1(\epsilon, 1-\epsilon)\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix}\xi(t) = -\xi^T(t)\Big(\frac{1}{\tau}\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix}^T\Phi_2(\epsilon, 1-\epsilon)\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix} - \frac{1-\epsilon}{\tau}\Sigma_4^TX_2\tilde S^{-1}X_2^T\Sigma_4 - \frac{\epsilon}{\tau}\Sigma_5^TX_1^T\tilde S^{-1}X_1\Sigma_5\Big)\xi(t), \qquad (7.29)$$
where $\Phi_1(\epsilon, 1-\epsilon) = \begin{bmatrix}\tilde S + (1-\epsilon)\tilde\Xi_1 & (1-\epsilon)X_1 + \epsilon X_2\\ * & \tilde S + \epsilon\tilde\Xi_2\end{bmatrix}$, $\tilde\Xi_1 = \tilde S - X_2\tilde S^{-1}X_2^T$, $\tilde\Xi_2 = \tilde S - X_1^T\tilde S^{-1}X_1$, and $\epsilon$, $\Sigma_4$, $\Sigma_5$, $\tilde S$, $\Phi_2(\epsilon, 1-\epsilon)$ are given in Theorem 7.1.

According to (7.10), for matrices $F_1$, $F_2$ of appropriate dimensions and scalars $\sigma_1$, $\sigma_2$, the following equations hold:
$$2\big[\sigma_1e_1^T(t)F_1 + \dot e_1^T(t)F_1\big]\big[-(A - L_{1i})e_1(t) + e_2(t) - \dot e_1(t)\big] = 0, \qquad (7.30)$$
$$2\big[\sigma_2e_2^T(t)F_2 + \dot e_2^T(t)F_2\big]\big[-(B_i - L_{2i})e_2(t) - C_ie_1(t) + W_i^1g(e_1(t)) + W_i^2g(e_1(t-\tau(t))) - \dot e_2(t)\big] = 0. \qquad (7.31)$$

From Assumption 7.1, we get
$$\eta^T(t)\begin{bmatrix}-R_1E_1 & R_1E_2\\ * & -R_1\end{bmatrix}\eta(t) \ge 0, \qquad (7.32)$$
$$\eta^T(t-\tau(t))\begin{bmatrix}-R_2E_1 & R_2E_2\\ * & -R_2\end{bmatrix}\eta(t-\tau(t)) \ge 0, \qquad (7.33)$$
$$\eta^T(t-\tau)\begin{bmatrix}-R_3E_1 & R_3E_2\\ * & -R_3\end{bmatrix}\eta(t-\tau) \ge 0. \qquad (7.34)$$

Then, combining (7.17)–(7.34), one gets
$$\mathbb{E}\{\mathcal{L}V(x(t), t, i)\} \le \mathbb{E}\{\xi^T(t)\Omega_i(\tau(t), \dot\tau(t))\xi(t)\} < 0,$$
where $\xi(t) = \big\{e_1^T(t),\ e_1^T(t-\tau(t)),\ e_1^T(t-\tau),\ g^T(e_1(t)),\ g^T(e_1(t-\tau(t))),\ g^T(e_1(t-\tau)),\ \frac{1}{\tau(t)}\int_{t-\tau(t)}^{t}e_1^T(s)ds,\ \frac{1}{\tau-\tau(t)}\int_{t-\tau}^{t-\tau(t)}e_1^T(s)ds,\ \frac{2}{(\tau(t))^2}\int_{t-\tau(t)}^{t}\int_{\theta}^{t}e_1^T(s)ds\,d\theta,\ \frac{2}{(\tau-\tau(t))^2}\int_{t-\tau}^{t-\tau(t)}\int_{\theta}^{t-\tau(t)}e_1^T(s)ds\,d\theta,\ e_2^T(t),\ \dot e_1^T(t),\ \dot e_2^T(t),\ \frac{6}{(\tau(t))^3}\big(\int_{t-\tau(t)}^{t}\int_{\theta}^{t}\int_{\alpha}^{t}e_1(s)ds\,d\alpha\,d\theta\big)^T,\ \frac{6}{(\tau-\tau(t))^3}\big(\int_{t-\tau}^{t-\tau(t)}\int_{\theta}^{t-\tau(t)}\int_{\alpha}^{t-\tau(t)}e_1(s)ds\,d\alpha\,d\theta\big)^T\big\}^T$.

According to convex theory, $\mathbb{E}\{\Omega_i(\tau(t), \dot\tau(t))\} < 0$ is satisfied for all $(\tau(t), \dot\tau(t)) \in [0, \tau]\times[\mu_1, \mu_2]$ if it is satisfied at the vertices of $[0, \tau]\times[\mu_1, \mu_2]$; this suffices because $\Omega_i(\tau(t), \dot\tau(t))$ is affine in $\tau(t)$ and $\dot\tau(t)$, so negativity at the four vertices implies negativity over the whole box. According to the Schur complement, one can obtain $\mathbb{E}\{\Omega_i^\delta\} < 0$ $(\delta = 1, 2, 3, 4)$, where
$\Omega_i(\tau(t), \dot\tau(t)) = \bar\Omega_i(\tau(t), \dot\tau(t)) + \Omega_{2i} + \Omega_{3i} - \frac{1-\epsilon}{\tau}\Sigma_4^TX_2\tilde S^{-1}X_2^T\Sigma_4 - \frac{\epsilon}{\tau}\Sigma_5^TX_1^T\tilde S^{-1}X_1\Sigma_5$,
$\bar\Omega_i(\tau(t), \dot\tau(t)) = \Omega_{1i} + \Omega_4(\tau(t)) + \Omega_5(\tau(t), \dot\tau(t)) + \Omega_6(\dot\tau(t))$,
$\Omega_{2i} = e_1^T\sum_{j=1}^{N}\pi_{ij}P_je_1$, $\Omega_{3i} = e_{11}^T\sum_{j=1}^{N}\pi_{ij}Z_je_{11}$,
$$\Omega_i^1 = \begin{pmatrix}\bar\Omega_i(\tau(t), \dot\tau(t))\big|_{\tau(t)=0, \dot\tau(t)=\mu_1} + \Omega_{2i} + \Omega_{3i} & \frac{1}{\tau}\Sigma_4^TX_2\\ \frac{1}{\tau}X_2^T\Sigma_4 & -\frac{1}{\tau}\tilde S\end{pmatrix}, \quad \Omega_i^2 = \begin{pmatrix}\bar\Omega_i(\tau(t), \dot\tau(t))\big|_{\tau(t)=0, \dot\tau(t)=\mu_2} + \Omega_{2i} + \Omega_{3i} & \frac{1}{\tau}\Sigma_4^TX_2\\ \frac{1}{\tau}X_2^T\Sigma_4 & -\frac{1}{\tau}\tilde S\end{pmatrix},$$
$$\Omega_i^3 = \begin{pmatrix}\bar\Omega_i(\tau(t), \dot\tau(t))\big|_{\tau(t)=\tau, \dot\tau(t)=\mu_1} + \Omega_{2i} + \Omega_{3i} & \frac{1}{\tau}\Sigma_5^TX_1^T\\ \frac{1}{\tau}X_1\Sigma_5 & -\frac{1}{\tau}\tilde S\end{pmatrix}, \quad \Omega_i^4 = \begin{pmatrix}\bar\Omega_i(\tau(t), \dot\tau(t))\big|_{\tau(t)=\tau, \dot\tau(t)=\mu_2} + \Omega_{2i} + \Omega_{3i} & \frac{1}{\tau}\Sigma_5^TX_1^T\\ \frac{1}{\tau}X_1\Sigma_5 & -\frac{1}{\tau}\tilde S\end{pmatrix}.$$
Thus, the error system (7.10) is globally asymptotically stable in the mean square. Hence, the generally Markovian jumping drive system (7.2) and response system (7.7) are synchronous.

Let $\Theta_i^\delta = e_1\bar\Omega_i^\delta e_1^T$ $(\delta = 1, 2, 3, 4)$, $F_1L_{1i} = M_{1i}$, $F_2L_{2i} = M_{2i}$, where $\bar\Omega_i^1 = \bar\Omega_i(\tau(t), \dot\tau(t))|_{\tau(t)=0, \dot\tau(t)=\mu_1}$, $\bar\Omega_i^2 = \bar\Omega_i(\tau(t), \dot\tau(t))|_{\tau(t)=0, \dot\tau(t)=\mu_2}$, $\bar\Omega_i^3 = \bar\Omega_i(\tau(t), \dot\tau(t))|_{\tau(t)=\tau, \dot\tau(t)=\mu_1}$, $\bar\Omega_i^4 = \bar\Omega_i(\tau(t), \dot\tau(t))|_{\tau(t)=\tau, \dot\tau(t)=\mu_2}$, and $\bar{\bar\Omega}_i^\delta = \bar\Omega_i^\delta + \Omega_{2i} + \Omega_{3i}$ $(\delta = 1, 2, 3, 4)$.

If $i \notin G_k^i$ and $G_k^i \neq \emptyset$: for any $j \in G_{uk}^i$ with $j \neq i$, there hold $P_i - P_j \ge 0$, $Z_i - Z_j \ge 0$, and $\sum_{j\in G_{uk}^i, j\neq i}\pi_{ij} = -\pi_{ii} - \sum_{j\in G_k^i}\pi_{ij}$. Thus, we have

$$e_1\bar{\bar\Omega}_i^\delta e_1^T = \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}P_j + \pi_{ii}P_i + \sum_{j\in G_{uk}^i, j\neq i}\pi_{ij}P_j \le \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}P_j + \pi_{ii}P_i + \Big(-\pi_{ii} - \sum_{j\in G_k^i}\pi_{ij}\Big)P_i = \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}(P_j - P_i) = \Theta_i^\delta + \sum_{j\in G_k^i}\tilde\pi_{ij}(P_j - P_i) + \sum_{j\in G_k^i}\Delta_{ij}(P_j - P_i). \qquad (7.35)$$


According to Lemma 7.2, one can obtain
$$\sum_{j\in G_k^i}\Delta_{ij}(P_j - P_i) \le \sum_{j\in G_k^i}\frac{1}{2}\big(\Delta_{ij}^2H_{ij} + (P_j - P_i)H_{ij}^{-1}(P_j - P_i)\big) \le \sum_{j\in G_k^i}\Big(\frac{1}{4}\varpi_{ij}^2H_{ij} + (P_j - P_i)H_{ij}^{-1}(P_j - P_i)\Big). \qquad (7.36)$$
Hence $e_1\bar{\bar\Omega}_i^\delta e_1^T < 0$ holds if $\Theta_i^\delta + \sum_{j\in G_k^i}\tilde\pi_{ij}(P_j - P_i) + \sum_{j\in G_k^i}\big(\frac{1}{4}\varpi_{ij}^2H_{ij} + (P_j - P_i)H_{ij}^{-1}(P_j - P_i)\big) < 0$. Similarly, according to $Z_i - Z_j \ge 0$, we get $-\mathrm{Sym}\{\sigma_2F_2B_i\} + \mathrm{Sym}\{\sigma_2M_{2i}\} + \sum_{j\in G_k^i}\tilde\pi_{ij}(Z_j - Z_i) + \sum_{j\in G_k^i}\big(\frac{1}{4}\varpi_{ij}^2\tilde H_{ij} + (Z_j - Z_i)\tilde H_{ij}^{-1}(Z_j - Z_i)\big) < 0$. From the Schur complement, we get (7.12).

If $i \in G_k^i$ and $G_{uk}^i \neq \emptyset$: there is $Q \in G_{uk}^i$, and $P_Q - P_j \ge 0$, $Z_Q - Z_j \ge 0$ for any $j \in G_{uk}^i$. Then
$$e_1\Omega_i^\delta e_1^T = \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}P_j + \sum_{j\in G_{uk}^i}\pi_{ij}P_j \le \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}P_j + \sum_{j\in G_{uk}^i}\pi_{ij}P_Q = \Theta_i^\delta + \sum_{j\in G_k^i}\pi_{ij}P_j - \sum_{j\in G_k^i}\pi_{ij}P_Q = \Theta_i^\delta + \sum_{j\in G_k^i}\tilde\pi_{ij}(P_j - P_Q) + \sum_{j\in G_k^i}\Delta_{ij}(P_j - P_Q) \le \Theta_i^\delta + \sum_{j\in G_k^i}\tilde\pi_{ij}(P_j - P_Q) + \sum_{j\in G_k^i}\Big(\frac{1}{4}\varpi_{ij}^2V_{ijQ} + (P_j - P_Q)V_{ijQ}^{-1}(P_j - P_Q)\Big). \qquad (7.37)$$
Similarly, according to $Z_Q - Z_j \ge 0$ for any $j \in G_{uk}^i$, we get $-\mathrm{Sym}\{\sigma_2F_2B_i\} + \mathrm{Sym}\{\sigma_2M_{2i}\} + \sum_{j\in G_k^i}\tilde\pi_{ij}(Z_j - Z_Q) + \sum_{j\in G_k^i}\big(\frac{1}{4}\varpi_{ij}^2\tilde V_{ijQ} + (Z_j - Z_Q)\tilde V_{ijQ}^{-1}(Z_j - Z_Q)\big) < 0$. According to the Schur complement, we get (7.13).

If $i \in G_k^i$ and $G_{uk}^i = \emptyset$:


$$e_1\Omega_i^\delta e_1^T = \Theta_i^\delta + \sum_{j=1, j\neq i}^{N}\pi_{ij}P_j + \pi_{ii}P_i = \Theta_i^\delta + \sum_{j=1, j\neq i}^{N}\pi_{ij}(P_j - P_i) = \Theta_i^\delta + \sum_{j=1, j\neq i}^{N}(\tilde\pi_{ij} + \Delta_{ij})(P_j - P_i) \le \Theta_i^\delta + \sum_{j=1, j\neq i}^{N}\tilde\pi_{ij}(P_j - P_i) + \sum_{j=1, j\neq i}^{N}\Big(\frac{1}{4}\varpi_{ij}^2W_{ij} + (P_j - P_i)W_{ij}^{-1}(P_j - P_i)\Big). \qquad (7.38)$$
Similarly, we obtain $-\mathrm{Sym}\{\sigma_2F_2B_i\} + \mathrm{Sym}\{\sigma_2M_{2i}\} + \sum_{j=1, j\neq i}^{N}\tilde\pi_{ij}(Z_j - Z_i) + \sum_{j=1, j\neq i}^{N}\big(\frac{1}{4}\varpi_{ij}^2\tilde W_{ij} + (Z_j - Z_i)\tilde W_{ij}^{-1}(Z_j - Z_i)\big) < 0$. According to the Schur complement,

we obtain (7.14). From (7.30) and (7.31), with $F_1L_{1i} = M_{1i}$ and $F_2L_{2i} = M_{2i}$, the controller gains (7.15) and (7.16) are obtained by solving the LMIs in Theorem 7.1. The proof is completed.

Remark 7.2 In (7.35), there appear the estimated value $\tilde\pi_{ij}$, the estimation error $\Delta_{ij}$, and the completely unknown transition rate (denoted by "?"). Because of these uncertain terms, it is difficult to obtain a feasible solution of the synchronization criteria directly. For this reason, three types of transition-rate cases and the bound $\varpi_{ij}$ are introduced in this chapter, which makes it possible to obtain feasible solutions of the LMI-based synchronization criteria.

Remark 7.3 In [54, 105], the synchronization and stability analysis of inertial neural networks with completely known transition rates were studied, respectively; those results cannot be used in the uncertain-transition-rate situation. In this chapter, generally Markovian jumping inertial neural networks are studied, and synchronization criteria for three different cases are proposed, which can be applied to all types of uncertain and unknown Markovian transition rates.

When the system without Markovian jumping is considered, system (7.6) reduces to the following form:

$$\frac{dx(t)}{dt} = -Ax(t) + y(t), \qquad \frac{dy(t)}{dt} = -By(t) - Cx(t) + W^1f(x(t)) + W^2f(x(t-\tau(t))) + J. \qquad (7.39)$$


For the drive system (7.39), the response system is considered as
$$\frac{d\hat x(t)}{dt} = -A\hat x(t) + \hat y(t) + u_1(t), \qquad \frac{d\hat y(t)}{dt} = -B\hat y(t) - C\hat x(t) + W^1\hat f(\hat x(t)) + W^2\hat f(\hat x(t-\tau(t))) + J + u_2(t), \qquad (7.40)$$
where
$$u_1(t) = L_1e_1(t), \qquad u_2(t) = L_2e_2(t). \qquad (7.41)$$

Corollary 7.1 Under Assumption 7.1, the drive system (7.39) and the response system (7.40) are synchronous if there are positive definite matrices $P \in \mathbb{R}^{n\times n}$, $Z \in \mathbb{R}^{n\times n}$, $Q_{\alpha_2} \in \mathbb{R}^{2n\times 2n}$ $(\alpha_2 = 1, 2)$, $R \in \mathbb{R}^{n\times n}$, $S \in \mathbb{R}^{n\times n}$, $T \in \mathbb{R}^{n\times n}$, $U_1 \in \mathbb{R}^{2n\times 2n}$, $U_2 \in \mathbb{R}^{n\times n}$, $U_3 \in \mathbb{R}^{n\times n}$, positive diagonal matrices $R_{\alpha_1} \in \mathbb{R}^{n\times n}$ $(\alpha_1 = 1, 2, 3)$, and any matrices $X_1 \in \mathbb{R}^{4n\times 4n}$, $X_2 \in \mathbb{R}^{4n\times 4n}$, $F_1 \in \mathbb{R}^{n\times n}$, $F_2 \in \mathbb{R}^{n\times n}$, such that the following matrix inequalities are satisfied:
$$\begin{pmatrix}\hat\Omega(\tau(t), \dot\tau(t))\big|_{\tau(t)=0, \dot\tau(t)=\mu_1} & \frac{1}{\tau}\Sigma_4^TX_2\\ \frac{1}{\tau}X_2^T\Sigma_4 & -\frac{1}{\tau}\tilde S\end{pmatrix} < 0, \qquad (7.42)$$
$$\begin{pmatrix}\hat\Omega(\tau(t), \dot\tau(t))\big|_{\tau(t)=0, \dot\tau(t)=\mu_2} & \frac{1}{\tau}\Sigma_4^TX_2\\ \frac{1}{\tau}X_2^T\Sigma_4 & -\frac{1}{\tau}\tilde S\end{pmatrix} < 0, \qquad (7.43)$$
$$\begin{pmatrix}\hat\Omega(\tau(t), \dot\tau(t))\big|_{\tau(t)=\tau, \dot\tau(t)=\mu_1} & \frac{1}{\tau}\Sigma_5^TX_1^T\\ \frac{1}{\tau}X_1\Sigma_5 & -\frac{1}{\tau}\tilde S\end{pmatrix} < 0, \qquad (7.44)$$
$$\begin{pmatrix}\hat\Omega(\tau(t), \dot\tau(t))\big|_{\tau(t)=\tau, \dot\tau(t)=\mu_2} & \frac{1}{\tau}\Sigma_5^TX_1^T\\ \frac{1}{\tau}X_1\Sigma_5 & -\frac{1}{\tau}\tilde S\end{pmatrix} < 0. \qquad (7.45)$$

$\hat\Omega(\tau(t), \dot\tau(t)) = \Omega_1 + \Omega_4(\tau(t)) + \Omega_5(\tau(t), \dot\tau(t)) + \Omega_6(\dot\tau(t))$,

$\Omega_1 = \Sigma_2^T(Q_1+Q_2)\Sigma_2 - \Sigma_3^TQ_2\Sigma_3 + \tau e_{12}^TSe_{12} + \mathrm{Sym}\{e_1^TPe_{12}\} + \mathrm{Sym}\{e_{11}^TZe_{13}\} + \mathrm{Sym}\{e_7^TU_3e_1 - e_8^TU_2e_3\} + \Sigma_2^T\Xi_1\Sigma_2 + \Sigma_1^T\Xi_2\Sigma_1 + \Sigma_3^T\Xi_3\Sigma_3 - \mathrm{Sym}\{\sigma_1e_1^TF_1Ae_1\} + \mathrm{Sym}\{\sigma_1e_1^TM_1e_1\} + \mathrm{Sym}\{\sigma_1e_1^TF_1e_{11}\} - \mathrm{Sym}\{\sigma_1e_1^TF_1e_{12}\} - \mathrm{Sym}\{e_{12}^TF_1Ae_1\} + \mathrm{Sym}\{e_{12}^TM_1e_1\} + \mathrm{Sym}\{e_{12}^TF_1e_{11}\} - \mathrm{Sym}\{e_{12}^TF_1e_{12}\} - \mathrm{Sym}\{\sigma_2e_{11}^TF_2Be_{11}\} + \mathrm{Sym}\{\sigma_2e_{11}^TM_2e_{11}\} - \mathrm{Sym}\{\sigma_2e_{11}^TF_2Ce_1\} + \mathrm{Sym}\{\sigma_2e_{11}^TF_2W^1e_4\} + \mathrm{Sym}\{\sigma_2e_{11}^TF_2W^2e_5\} - \mathrm{Sym}\{\sigma_2e_{11}^TF_2e_{13}\} - \mathrm{Sym}\{e_{13}^TF_2Be_{11}\} + \mathrm{Sym}\{e_{13}^TM_2e_{11}\} - \mathrm{Sym}\{e_{13}^TF_2Ce_1\} + \mathrm{Sym}\{e_{13}^TF_2W^1e_4\} + \mathrm{Sym}\{e_{13}^TF_2W^2e_5\} - \mathrm{Sym}\{e_{13}^TF_2e_{13}\}$,

$\Omega_4(\tau(t)) = \tau(t)\,\mathrm{Sym}\{e_7^TR(e_1-e_3)\} + (\tau-\tau(t))\,\mathrm{Sym}\{e_8^TR(e_1-e_3)\} - \frac{1}{\tau}\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix}^T\Phi_2(\epsilon, 1-\epsilon)\begin{bmatrix}\Sigma_4\\ \Sigma_5\end{bmatrix}$,

$\Omega_5(\tau(t), \dot\tau(t)) = \mathrm{Sym}\{[\tau(t)e_7^T,\ (\tau-\tau(t))e_8^T]\,U_1\,[e_1^T - (1-\dot\tau(t))e_2^T,\ (1-\dot\tau(t))e_2^T - e_3^T]^T\}$,


$\Omega_6(\dot\tau(t)) = \dot\tau(t)\{e_7^TU_3e_7 - e_8^TU_2e_8 + \mathrm{Sym}(e_8^TU_2e_8 - e_7^TU_3e_7)\} + (1-\dot\tau(t))\{-\Sigma_1^TQ_1\Sigma_1 + \mathrm{Sym}(e_8^TU_2e_2 - e_7^TU_3e_2)\}$,
$\Sigma_1 = [e_2^T, e_5^T]^T$, $\Sigma_2 = [e_1^T, e_4^T]^T$, $\Sigma_3 = [e_3^T, e_6^T]^T$,
$\Sigma_4 = [e_1^T-e_2^T,\ e_1^T+e_2^T-2e_7^T,\ e_1^T-e_2^T+6e_7^T-6e_9^T,\ e_1^T+e_2^T-12e_7^T+30e_9^T-20e_{14}^T]^T$,
$\Sigma_5 = [e_2^T-e_3^T,\ e_2^T+e_3^T-2e_8^T,\ e_2^T-e_3^T+6e_8^T-6e_{10}^T,\ e_2^T+e_3^T-12e_8^T+30e_{10}^T-20e_{15}^T]^T$,
$\tilde S = \mathrm{diag}\{S, 3S, 5S, 7S\}$, $M_1 = F_1L_1$, $M_2 = F_2L_2$,
$\Phi_2(\epsilon, 1-\epsilon) = \begin{bmatrix}(2-\epsilon)\tilde S & (1-\epsilon)X_1 + \epsilon X_2\\ * & (1+\epsilon)\tilde S\end{bmatrix}$, $\epsilon = \frac{\tau(t)}{\tau}$, $\Xi_{\alpha_1} = \begin{bmatrix}-R_{\alpha_1}E_1 & R_{\alpha_1}E_2\\ * & -R_{\alpha_1}\end{bmatrix}$ $(\alpha_1 = 1, 2, 3)$.

Moreover, the controller gain matrices $L_1$ and $L_2$ can be given as
$$L_1 = F_1^{-1}M_1, \qquad (7.46)$$
$$L_2 = F_2^{-1}M_2. \qquad (7.47)$$

Proof Consider the following Lyapunov–Krasovskii functional:
$$V(e_1(t), e_2(t), t) = \sum_{q=1}^{6}V_q(t).$$
$$V_1(t) = e_1^T(t)Pe_1(t) + e_2^T(t)Ze_2(t), \qquad (7.48)$$
$$V_2(t) = \int_{t-\tau(t)}^{t}\eta^T(s)Q_1\eta(s)ds + \int_{t-\tau}^{t}\eta^T(s)Q_2\eta(s)ds, \qquad (7.49)$$
$$V_3(t) = \int_{t-\tau}^{t}e_1^T(s)ds\, R \int_{t-\tau}^{t}e_1(s)ds, \qquad (7.50)$$
$$V_4(t) = \int_{-\tau}^{0}\int_{t+\theta}^{t}\dot e_1^T(s)S\dot e_1(s)ds\, d\theta, \qquad (7.51)$$
$$V_5(t) = \vartheta^T(s)U_1\vartheta(s), \qquad (7.52)$$
$$V_6(t) = \frac{1}{\tau-\tau(t)}\int_{t-\tau}^{t-\tau(t)}e_1^T(s)ds\, U_2 \int_{t-\tau}^{t-\tau(t)}e_1(s)ds \qquad (7.53)$$
$$\qquad\qquad + \frac{1}{\tau(t)}\int_{t-\tau(t)}^{t}e_1^T(s)ds\, U_3 \int_{t-\tau(t)}^{t}e_1(s)ds. \qquad (7.54)$$

In $V(e_1(t), e_2(t), t)$, only $V_1(t)$ is different from that in Theorem 7.1. The remaining proof is similar to the proof of Theorem 7.1 and is omitted.

7.4 Simulation

In this section, three numerical examples are given to demonstrate the effectiveness of the proposed method.

Example 7.1 Let us consider the following Markovian nonlinear system:
$$\frac{d^2x_1(t)}{dt^2} = -a_1(r_t)\frac{dx_1(t)}{dt} - b_1(r_t)x_1(t) + w_{11}^1(r_t)f_1(x_1(t)) + w_{12}^1(r_t)f_2(x_2(t)) + w_{11}^2(r_t)f_1(x_1(t-\tau(t))) + w_{12}^2(r_t)f_2(x_2(t-\tau(t))) + J_1,$$
$$\frac{d^2x_2(t)}{dt^2} = -a_2(r_t)\frac{dx_2(t)}{dt} - b_2(r_t)x_2(t) + w_{21}^1(r_t)f_1(x_1(t)) + w_{22}^1(r_t)f_2(x_2(t)) + w_{21}^2(r_t)f_1(x_1(t-\tau(t))) + w_{22}^2(r_t)f_2(x_2(t-\tau(t))) + J_2. \qquad (7.55)$$

The activation function satisfies Assumption 7.1 with $E_1 = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}$ and $E_2 = \begin{pmatrix}0.5 & 0\\ 0 & 0.5\end{pmatrix}$.

The system (7.55) is considered with the following parameters:
$A = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$, $B_i = \begin{pmatrix}b_1(r_t)+\xi_1(\xi_1-a_1(r_t)) & 0\\ 0 & b_2(r_t)+\xi_2(\xi_2-a_2(r_t))\end{pmatrix}$, $C_i = \begin{pmatrix}a_1(r_t)-\xi_1 & 0\\ 0 & a_2(r_t)-\xi_2\end{pmatrix}$ $(r_t = 1, 2, 3)$,
$B_1 = \begin{pmatrix}1.0 & 0\\ 0 & 1.5\end{pmatrix}$, $B_2 = \begin{pmatrix}1.6 & 0\\ 0 & 1.2\end{pmatrix}$, $B_3 = \begin{pmatrix}1.8 & 0\\ 0 & 1.5\end{pmatrix}$, $C_1 = \begin{pmatrix}-1.1 & 0\\ 0 & -0.9\end{pmatrix}$, $C_2 = \begin{pmatrix}-0.7 & 0\\ 0 & -1.0\end{pmatrix}$, $C_3 = \begin{pmatrix}-0.9 & 0\\ 0 & -1.2\end{pmatrix}$,
$W_1^1 = \begin{pmatrix}0.9 & 0.2\\ -0.5 & 2.7\end{pmatrix}$, $W_2^1 = \begin{pmatrix}-1.0 & 1.7\\ -1.0 & 1.3\end{pmatrix}$, $W_3^1 = \begin{pmatrix}-1.1 & 1.5\\ -1.2 & 1.1\end{pmatrix}$, $W_1^2 = \begin{pmatrix}0.9 & 2.1\\ -3.2 & 0.8\end{pmatrix}$, $W_2^2 = \begin{pmatrix}-1.7 & 0.8\\ -1.2 & 1.3\end{pmatrix}$, $W_3^2 = \begin{pmatrix}-1.5 & 0.9\\ -1.0 & 1.2\end{pmatrix}$, $\xi_1 = \xi_2 = 1$.


Fig. 7.1 State trajectory of error system without controllers



The transition matrix is given as
$$\Pi = \begin{pmatrix}? & ? & 2.32+\Delta_{13}\\ ? & -4.75+\Delta_{22} & ?\\ ? & ? & -2.98+\Delta_{33}\end{pmatrix},$$
where $\varpi_{1i} = 0.13$, $\Delta_{13} \in [-0.13, 0.13]$, $\varpi_{2i} = 0.19$, $\Delta_{22} \in [-0.19, 0.19]$, $\varpi_{3i} = 0.18$, $\Delta_{33} \in [-0.18, 0.18]$, $\tau(t) = 0.8 + 0.2\sin(t)$, $\tau = 1$, $\mu_1 = -0.2$, $\mu_2 = 0.2$. Set the parameters $\sigma_1 = \sigma_2 = 1$. From Theorem 7.1, one can obtain the following feasible matrices:
$L_{11} = \begin{pmatrix}-1.8703 & 0.0832\\ 0.0174 & -1.8173\end{pmatrix}$, $L_{12} = \begin{pmatrix}-2.0902 & 0.0501\\ -0.0087 & -2.1084\end{pmatrix}$, $L_{13} = \begin{pmatrix}-2.0094 & 0.0531\\ -0.0021 & -2.0232\end{pmatrix}$, $L_{21} = \begin{pmatrix}-10.2235 & -3.9984\\ -4.3352 & -17.5184\end{pmatrix}$, $L_{22} = \begin{pmatrix}-9.2343 & -4.4950\\ -5.2879 & -16.6877\end{pmatrix}$, $L_{23} = \begin{pmatrix}-8.8384 & -4.4014\\ -5.1231 & -16.1671\end{pmatrix}$.

The trajectories of the error system with initial values $e_1(t) = \begin{bmatrix}0.2\\ 0.3\end{bmatrix}$, $e_2(t) = \begin{bmatrix}0.15\\ -0.23\end{bmatrix}$ are given in Figs. 7.1 and 7.2. According to Theorem 7.1 and Figs. 7.1 and 7.2, the error system is asymptotically stable, so the generally Markovian jumping drive system and response system are synchronous under the designed controllers.
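Feasible matrices such as those above are obtained by checking the LMI conditions of Theorem 7.1 with a semidefinite-programming solver; the monograph does not specify a particular toolchain. As a hedged, much-simplified stand-in, the following Python/CVXPY sketch checks feasibility of a basic Lyapunov-type LMI of the same family ($A^TP + PA \prec 0$, $P \succ 0$); the matrix `A` and the margin `eps` are illustrative assumptions, not data from this example.

```python
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])          # illustrative stable system matrix
n = A.shape[0]
eps = 1e-6                           # strictness margin for the strict LMIs

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)        # pure feasibility problem
prob.solve(solver=cp.SCS)

if prob.status == cp.OPTIMAL:
    print("feasible, P =\n", P.value)
```

The actual conditions (7.12)–(7.14) are assembled in the same manner, with one matrix variable per $P_i$, $Z_i$, $V_{ijQ}$, etc., and one constraint per vertex $\delta = 1, 2, 3, 4$.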


Fig. 7.2 State trajectory of error system with controllers
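Trajectories such as those in Figs. 7.1–7.6 are produced by integrating the closed-loop error system. The following minimal sketch is an assumption-laden illustration, not the authors' simulation code: it assumes two-dimensional data, a constant-delay approximation of $\tau(t)$, explicit Euler steps, $\tanh$ as the sector-bounded activation error term, and a single frozen Markov mode, following the error dynamics implied by (7.30)–(7.31) with $u_1 = L_1e_1$ and $u_2 = L_2e_2$.

```python
import numpy as np

def simulate_error(A, B, C, W1, W2, L1, L2, tau, T=10.0, h=1e-3):
    """Euler integration of the controlled error system (one frozen mode)."""
    n = A.shape[0]
    steps = int(T / h)
    d = max(int(tau / h), 1)                  # constant-delay approximation
    e1 = np.zeros((steps + 1, n))
    e2 = np.zeros((steps + 1, n))
    e1[0] = [0.2, 0.3]                        # initial values as in the examples
    e2[0] = [0.15, -0.23]
    g = np.tanh                               # sector-bounded activation error
    for k in range(steps):
        e1d = e1[max(k - d, 0)]               # delayed state e1(t - tau)
        de1 = -(A - L1) @ e1[k] + e2[k]       # error dynamics with u1 = L1 e1
        de2 = (-(B - L2) @ e2[k] - C @ e1[k]
               + W1 @ g(e1[k]) + W2 @ g(e1d)) # error dynamics with u2 = L2 e2
        e1[k + 1] = e1[k] + h * de1
        e2[k + 1] = e2[k] + h * de2
    return e1, e2
```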

Example 7.2 Let us consider the inertial neural network system (7.40); the parameters of the system are taken from [62] and are shown below:
$W^1 = \begin{pmatrix}2.78 & 1\\ -1 & 2.5\end{pmatrix}$, $W^2 = \begin{pmatrix}-0.9 & 2.5\\ -1.9 & 1.5\end{pmatrix}$, $A = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$, $B = \begin{pmatrix}-0.9 & 0\\ 0 & -0.8\end{pmatrix}$, $C = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$, $\tau(t) = 0.1\sin(t)$.

According to Corollary 7.1, we obtain the following feasible matrices: $L_1 = \begin{pmatrix}-7.0827 & 0.4260\\ -0.2921 & -6.9091\end{pmatrix}$, $L_2 = \begin{pmatrix}-29.7186 & -9.724\\ -10.4814 & -26.6239\end{pmatrix}$. The trajectories of system (7.40) are given in Figs. 7.3 and 7.4.

Example 7.3 Let us consider the generally Markovian jumping inertial neural networks (7.6). The activation function satisfies Assumption 7.1 with $E_1 = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}$ and $E_2 = \begin{pmatrix}0.5 & 0\\ 0 & 0.5\end{pmatrix}$. The system is considered with the following parameters:
$A = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$, $B_1 = \begin{pmatrix}1.0 & 0\\ 0 & 1.8\end{pmatrix}$, $B_2 = \begin{pmatrix}1.8 & 0\\ 0 & 1.0\end{pmatrix}$, $B_3 = \begin{pmatrix}1.8 & 0\\ 0 & 1.5\end{pmatrix}$, $C_1 = \begin{pmatrix}-0.9 & 0\\ 0 & -0.8\end{pmatrix}$, $C_2 = \begin{pmatrix}-0.8 & 0\\ 0 & -0.9\end{pmatrix}$, $C_3 = \begin{pmatrix}-0.9 & 0\\ 0 & -1.2\end{pmatrix}$, $W_1^1 = \begin{pmatrix}0.78 & 0.1\\ -0.6 & 2.5\end{pmatrix}$, $W_2^1 = \begin{pmatrix}-0.9 & 1.8\\ -0.9 & 1.5\end{pmatrix}$, $W_3^1 = \begin{pmatrix}-1.1 & 1.5\\ -1.2 & 1.1\end{pmatrix}$, $W_1^2 = \begin{pmatrix}0.78 & 2\\ -0.9 & 1.5\end{pmatrix}$, $W_2^2 = \begin{pmatrix}-1.5 & 0.5\\ -1 & 1.5\end{pmatrix}$, $W_3^2 = \begin{pmatrix}-1.5 & 0.9\\ -1.0 & 1.2\end{pmatrix}$, $\xi_1 = \xi_2 = 1$; the matrices $A, B_1, B_2, C_1, C_2, W_1^1, W_2^1, W_1^2, W_2^2$ are taken from [105].


Fig. 7.3 State trajectory of error system without controllers

Fig. 7.4 State trajectory of error system with controllers


Fig. 7.5 State trajectory of error system without controllers



The transition matrix is given as
$$\Pi = \begin{pmatrix}? & ? & 2.32+\Delta_{13}\\ ? & -4.75+\Delta_{22} & ?\\ ? & ? & -2.98+\Delta_{33}\end{pmatrix},$$
where $\varpi_{1i} = 0.13$, $\Delta_{13} \in [-0.13, 0.13]$, $\varpi_{2i} = 0.19$, $\Delta_{22} \in [-0.19, 0.19]$, $\varpi_{3i} = 0.18$, $\Delta_{33} \in [-0.18, 0.18]$, $\tau(t) = 0.8 + 0.2\sin(t)$, $\tau = 1$, $\mu_1 = -0.2$, $\mu_2 = 0.2$. Set the parameters $\sigma_1 = \sigma_2 = 1$. From Theorem 7.1, one can obtain the following feasible matrices:
$L_{11} = \begin{pmatrix}-1.8576 & 0.0860\\ 0.0126 & -1.8224\end{pmatrix}$, $L_{12} = \begin{pmatrix}-2.0979 & 0.0488\\ -0.0119 & -2.1215\end{pmatrix}$, $L_{13} = \begin{pmatrix}-2.0080 & 0.0546\\ -0.0047 & -2.0350\end{pmatrix}$, $L_{21} = \begin{pmatrix}-9.2054 & -3.5516\\ -3.8436 & -15.8586\end{pmatrix}$, $L_{22} = \begin{pmatrix}-8.1839 & -4.0972\\ -4.8260 & -15.7680\end{pmatrix}$, $L_{23} = \begin{pmatrix}-7.9659 & -4.0083\\ -4.7093 & -15.0698\end{pmatrix}$.

The trajectories of the error system with initial values $e_1(t) = \begin{bmatrix}0.2\\ 0.3\end{bmatrix}$, $e_2(t) = \begin{bmatrix}0.15\\ -0.23\end{bmatrix}$ are given in Figs. 7.5 and 7.6.

Based on Example 7.3, the obtained results can be applied in the field of digital signal processing. The encryption procedure of a color image of size $H \times W \times 3$ is given as the following pseudo-algorithm.

Input: the color image $P_{image}$ of size $H \times W \times 3$.
Output: the ciphered image $C_{image}$ of size $H \times W$.
Step 1: Extract the RGB (red, green and blue) components of the color image $P_{image}$. After that, one obtains the pixel series $R(p, q), G(p, q), B(p, q)$ $(p \in \{1, \dots, h\}, q \in \{1, \dots, w\})$.


Fig. 7.6 State trajectory of error system with controllers

Step 2: Set the step size to 0.001 and the initial conditions $x(0)$, $y(0)$, $\hat x(0)$, $\hat y(0)$. The number of iterations is $L > H \times W$. Based on the uncontrolled system (7.6) and the fourth-order Runge–Kutta method with $L$ iterations, one obtains a group of time-series chaotic signals.

Step 3: The master system is continuously iterated $L$ times. From these iterations, four floating-point number sequences $x_1, x_2, y_1, y_2$ of length $L$ are obtained: $x_1 = \{x_1(1), x_1(2), \dots, x_1(L)\}$, $x_2 = \{x_2(1), x_2(2), \dots, x_2(L)\}$, $y_1 = \{y_1(1), y_1(2), \dots, y_1(L)\}$, $y_2 = \{y_2(1), y_2(2), \dots, y_2(L)\}$.

Step 4: From these sequences, $X_R$, $X_G$ and $X_B$ are generated based on $x_1$, $x_2$, $y_1$ and $y_2$:
$x_R(\rho) = (|x_1(\rho)| - \lfloor|x_1(\rho)|\rfloor) \times 10^{14} \bmod 256$,
$x_G(\rho) = (|x_2(\rho)| - \lfloor|x_2(\rho)|\rfloor) \times 10^{14} \bmod 256$,
$x_B(\rho) = (|y_1(\rho) + y_2(\rho)| - \lfloor|y_1(\rho) + y_2(\rho)|\rfloor) \times 10^{14} \bmod 256$,
where $x_R(\rho) \in X_R$, $x_G(\rho) \in X_G$, $x_B(\rho) \in X_B$, $\rho = 1, \dots, h \times w$; $\lfloor X \rfloor$ denotes the largest integer less than or equal to $X$, and $\bmod(X, Y)$ denotes the remainder after division.

Step 5: The RGB components of $P_{image}$ are encrypted using $X_R$, $X_G$ and $X_B$. After that, one obtains the pixel sequences $c_R$, $c_G$ and $c_B$ of the ciphered image $C_{image}$: $c_R(\rho) = p_R(\rho) \oplus x_R(\rho)$, $c_G(\rho) = p_G(\rho) \oplus x_G(\rho)$, $c_B(\rho) = p_B(\rho) \oplus x_B(\rho)$, where $\oplus$ is the bitwise XOR operator and $p_R(\rho)$, $p_G(\rho)$, $p_B(\rho)$ are the pixel sequences of the input image $P_{image}$. Hence, the ciphered image is produced. Figures 7.8, 7.9 and 7.10 show the image encryption process for the color image in Fig. 7.7.
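As a hedged sketch of Steps 2–5 (not the authors' implementation), the following Python fragment generates the keystream bytes and applies the XOR encryption. The stand-in dynamics `f`, the initial state, and the step size are illustrative assumptions replacing the uncontrolled network (7.6); the byte extraction and XOR follow Steps 4 and 5 above.

```python
import numpy as np

def rk4_step(f, s, h):
    """One fourth-order Runge-Kutta step of s' = f(s), as in Step 2."""
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def to_bytes(seq):
    """Step 4: (|x| - floor(|x|)) * 10^14 mod 256, elementwise."""
    frac = np.abs(seq) - np.floor(np.abs(seq))
    return (frac * 1e14).astype(np.uint64) % 256

def encrypt(image, h=0.001):
    """XOR-encrypt an H x W x 3 uint8 RGB image (Steps 3 and 5)."""
    H, W, _ = image.shape
    L = H * W
    # Placeholder nonlinear dynamics standing in for system (7.6).
    f = lambda s: np.array([s[1], -s[0] - 0.5 * s[1] + np.tanh(s[2]),
                            s[3], -s[2] - 0.5 * s[3] + np.tanh(s[0])])
    s = np.array([0.2, 0.3, 0.15, -0.23])     # illustrative initial conditions
    traj = np.empty((L, 4))
    for k in range(L):                         # Step 3: iterate the master system
        s = rk4_step(f, s, h)
        traj[k] = s
    xR = to_bytes(traj[:, 0])                  # keystream from x1
    xG = to_bytes(traj[:, 1])                  # keystream from x2
    xB = to_bytes(traj[:, 2] + traj[:, 3])     # keystream from y1 + y2
    flat = image.reshape(L, 3).astype(np.uint64)
    cipher = np.stack([flat[:, 0] ^ xR,        # Step 5: bitwise XOR per channel
                       flat[:, 1] ^ xG,
                       flat[:, 2] ^ xB], axis=1)
    return cipher.reshape(H, W, 3).astype(np.uint8)
```

Since XOR is an involution, decryption reuses the same keystream: applying the same procedure with identical initial conditions to the ciphered image recovers the original, which is why drive–response synchronization of the chaotic generator matters for secure communication.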


Fig. 7.7 The standard image utilized for encryption process

Fig. 7.8 Histogram of the RGB components of standard image

7.5 Conclusion

In this chapter, the synchronization problem of generally Markovian jumping inertial neural networks with time-varying delay is investigated. By using delay-dependent Lyapunov–Krasovskii functionals and a higher-order polynomial-based relaxed inequality, less conservative conditions are proposed to ensure synchronization of the drive and response systems, fully accounting for the uncertain or partially known transition rates of the Markovian process. Based on the theoretical derivations and numerical solutions, the application of the uncontrolled inertial neural network to image encryption in secure communication is discussed. The pixel values of the considered image are shuffled, and the image is encrypted by the solutions of the inertial chaotic neural networks.


Fig. 7.9 Histogram of the RGB components of shuffled image

Fig. 7.10 Histogram of the RGB components of encrypted image

The obtained results indicate that the proposed method is robust and resistant against security attacks. Finally, three simulation examples are given to demonstrate the effectiveness of the theoretical results.

Chapter 8

Conclusions

In recent years, research on the synchronization of complex neural networks has achieved important results, which help to better understand and explain various dynamical phenomena in the real world. The dynamical behaviour of complex neural networks is of great importance for the development of society and the economy. This monograph mainly studies the stochastic synchronization, local synchronization, and sampled-data synchronization of Markovian complex neural networks with time-varying delays. Based on different Lyapunov–Krasovskii functionals, integral inequalities and generally Markovian jumping, the corresponding synchronization criteria are proposed, respectively. The main results of this monograph are summarized as follows:

In Chap. 2, the stochastic synchronization of Markovian coupled neural networks with partial information on transition rates and random coupling strengths is discussed. The coupling configuration matrices are not restricted to be symmetric, and the coupling strengths are mutually independent random variables. By designing a new augmented Lyapunov–Krasovskii functional, new delay-dependent synchronization criteria are obtained based on the reciprocally convex combination technique and the properties of random variables. The obtained criteria depend not only on the upper and lower bounds of the delay, but also on the mathematical expectation and variance of the random coupling strengths. The validity of the results is demonstrated by numerical examples.

In Chap. 3, the local synchronization of Markovian neutral-type complex networks with partial information on transition rates is studied. The coupling configuration matrix is not restricted to be symmetric. By designing a new augmented Lyapunov–Krasovskii functional and adopting the reciprocally convex combination technique, new delay-dependent synchronization criteria depending on the upper and lower bounds of the delay are derived. The validity of the results is demonstrated by numerical examples.


In Chap. 4, the local synchronization of Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates is studied. Each transition rate in the Markovian coupled neural network model is either uncertain or completely unknown, because complete knowledge of the transition rates is difficult and costly to obtain. By adopting a new integral inequality, which combines the free-matrix-based integral inequality with a further improved integral inequality, less conservative local synchronization criteria involving the bounds of the delay and its derivative are obtained based on a novel augmented Lyapunov–Krasovskii functional. Finally, the validity of the method is demonstrated by numerical examples.

In Chap. 5, the sampled-data synchronization of complex networks with aperiodic sampling intervals is studied based on an enhanced input delay method. By introducing an improved discontinuous Lyapunov–Krasovskii functional, new delay-dependent synchronization criteria are obtained by adopting Wirtinger's integral inequality and a mixed convex combination, which fully utilize the upper bound on the variable sampling interval and the sawtooth structure information of the varying input delay. The derived criteria are less conservative than the existing ones. In addition, the desired sampled-data controllers are obtained by solving a set of LMIs. Finally, numerical examples are provided to demonstrate the feasibility of the proposed method.

In Chap. 6, the sampled-data synchronization problem of Markovian coupled neural networks with mode-dependent interval time-varying delays and aperiodic sampling intervals is investigated based on an enhanced input delay approach. A mode-dependent augmented Lyapunov–Krasovskii functional is utilized, which makes the Lyapunov functional matrices mode-dependent as much as possible. By applying an extended Jensen's integral inequality and Wirtinger's inequality, new delay-dependent synchronization criteria are obtained, which fully utilize the upper bound on the variable sampling interval and the sawtooth structure information of the varying input delay. In addition, the desired stochastic sampled-data controllers are obtained by solving a set of LMIs. Finally, two examples are provided to demonstrate the feasibility of the proposed method.

In Chap. 7, the synchronization of inertial neural networks with time-varying delays and generally Markovian jumping is studied. The second-order differential equations are transformed into first-order differential equations by a variable transformation method. Because of delays in the data transmission channel or the loss of data information, the Markovian process in the system is uncertain or only partially known, so it is more general and practicable to consider generally Markovian jumping inertial neural networks. The synchronization criteria are obtained by adopting delay-dependent Lyapunov–Krasovskii functionals and a higher-order polynomial-based relaxed inequality. Furthermore, the desired controllers are obtained by solving a set of LMIs. Finally, numerical examples are given to demonstrate the validity of the proposed method.

References

1. Alimi, A.M., Aouiti, C., Assali, E.A.: Finite-time and fixed-time synchronization of a class of inertial neural networks with multi-proportional delays and its application to secure communication. Neurocomputing 332, 29–43 (2019) 2. Arenas, A., Díaz-Guilera, A., Kurths, J., Moreno, Y., Zhou, C.: Synchronization in complex networks. Phys. Rep. 469(3), 93–153 (2008) 3. Arik, S.: Stability analysis of delayed neural networks. IEEE Trans. Circuits Syst. I: Fund. Theory Appl. 47(7), 1089–1092 (2000) 4. Arik, S.: New criteria for global robust stability of delayed neural networks with norm-bounded uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 91–106 (2014) 5. Babcock, K.L., Westervelt, R.M.: Stability and dynamics of simple electronic neural networks with added inertia. Phys. D: Nonlinear Phenom. 23(1–3), 464–469 (1986) 6. Barabási, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999) 7. Barahona, M., Pecora, L.M.: Synchronization in small-world systems. Phys. Rev. Lett. 89(5), 054101 (2002) 8. Belykh, V.N., Belykh, I.V., Mosekilde, E.: Cluster synchronization modes in an ensemble of coupled chaotic oscillators. Phys. Rev. E 63(3), 036216 (2001) 9. Blythe, S., Mao, X., Liao, X.: Stability of stochastic delay neural networks. J. Franklin Inst. 338(4), 481–495 (2001) 10. Boccaletti, S., Kurths, J., Osipov, G., Valladares, D.L., Zhou, C.S.: The synchronization of chaotic systems. Phys. Rep. 366(1–2), 1–101 (2002) 11. Cao, J.: Periodic oscillation and exponential stability of delayed CNNs. Phys. Lett. A 270(3), 157–163 (2000) 12. Cao, J., Chen, G., Li, P.: Global synchronization in an array of delayed neural networks with hybrid coupling. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 38(2), 488–498 (2008) 13. Cao, J., Li, L.: Cluster synchronization in an array of hybrid coupled neural networks with delay. Neural Netw. 22(4), 335–342 (2009) 14. Cao, J., Li, P., Wang, W.: Global synchronization in arrays of delayed neural networks with constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006) 15. Cao, J., Wan, Y.: Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw. 53, 165–172 (2014) 16. Cao, J., Wenwu, Yu., Yuzhong, Q.: A new complex network model and convergence dynamics for reputation computation in virtual organizations. Phys. Lett. A 356(6), 414–425 (2006) 17. Cao, J., Zhou, D.: Stability analysis of delayed cellular neural networks. Neural Netw. 11(9), 1601–1605 (1998) © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 J. Wang and J. Fu, Synchronization Control of Markovian Complex Neural Networks with Time-varying Delays, Studies in Systems, Decision and Control 514, https://doi.org/10.1007/978-3-031-47835-2




18. Chandrasekar, A., Rakkiyappan, R., Cao, J., Lakshmanan, S.: Synchronization of memristorbased recurrent neural networks with two delay components based on second-order reciprocally convex approach. Neural Netw. 57, 79–93 (2014) 19. Chandrasekar, A., Rakkiyappan, R., Rihan, F.A., Lakshmanan, S.: Exponential synchronization of Markovian jumping neural networks with partly unknown transition probabilities via stochastic sampled-data control. Neurocomputing 133, 385–398 (2014) 20. Chen, C., Li, L., Peng, H., Yang, Y.: Fixed-time synchronization of inertial memristor-based neural networks with discrete delay. Neural Netw. 109, 81–89 (2019) 21. Chen, C.P., Liu, Y.-J., Wen, G.-X.: Fuzzy neural network-based adaptive control for a class of uncertain nonlinear stochastic systems. IEEE Trans. Cybern. 44(5), 583–593 (2014) 22. Chen, C.P., Wen, G.-X., Liu, Y.-J., Wang, F.-Y.: Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1217–1226 (2014) 23. Chen, G., Zhou, J., Liu, Z.: Global synchronization of coupled delayed neural networks and applications to chaotic CNN models. Int. J. Bifurc. Chaos 14(07), 2229–2240 (2004) 24. Chen, H., Liang, J.: Local synchronization of interconnected boolean networks with stochastic disturbances. IEEE Trans. Neural Netw. Learn. Syst. 31(2), 452–463 (2019) 25. Chen, J.L., Chen, X.H.: Special Matrices. Tsinghua University Press, Beijing (2001) 26. Chen, M.: Synchronization in complex dynamical networks with random sensor delay. IEEE Trans. Circuits Syst. II Express Briefs 57(1), 46–50 (2010) 27. Chen, T., Liu, X., Wenlian, L.: Pinning complex networks by a single controller. IEEE Trans. Circuits Syst. I Regul. Pap. 54(6), 1317–1326 (2007) 28. Chen, T., Francis, B.A.: Optimal Sampled-Data Control Systems. Springer Science & Business Media (2012) 29. Chen, X., Huang, T., Cao, J., Park, J.H., Qiu, J.: Finite-time multi-switching sliding mode synchronisation for multiple uncertain complex chaotic systems with network transmission mode. IET Control Theory Appl. 13(9), 1246–1257 (2019) 30. Cohen, M.A., Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst., Man, Cybern. (5), 815–826 (1983) 31. Datta, R., Dey, R., Bhattacharya, B., Saravanakumar, R., Kwon, O.-M.: Stability and stabilization of T-S fuzzy systems with variable delays via new Bessel-Legendre polynomial based relaxed integral inequality. Inf. Sci. 522, 99–123 (2020) 32. De Farias, D.P., Geromel, J.C., Do Val, J.B.R., Costa, O.L.V.: Output feedback control of Markov jump linear systems in continuous-time. IEEE Trans. Autom. Control 45(5), 944– 949 (2000) 33. Ding, D., Wang, Z., Alsaadi, F.E., Shen, B.: Receding horizon filtering for a class of discrete time-varying nonlinear systems with multiple missing measurements. Int. J. General Syst. 44(2), 198–211 (2015) 34. Ding, D., Wang, Z., Shen, B., Shu, H.: State estimation for discrete-time complex networks with randomly occurring sensor saturations and randomly varying sensor delays. IEEE Trans. Neural Netw. Learn. Syst. 23(5), 725–736 (2012) 35. Ding, S., Wang, Z.: Event-triggered synchronization of discrete-time neural networks: a switching approach. Neural Netw. 125, 31–40 (2020) 36. Baozhu, D., Lam, J., Zou, Y., Shu, Z.: Stability and stabilization for Markovian jump timedelay systems with partially unknown transition rates. IEEE Trans. Circuits Syst. I Regul. Pap. 
60(2), 341–351 (2013) 37. Feng, J., Wang, S., Wang, Z.: Stochastic synchronization in an array of neural networks with hybrid nonlinear coupling. Neurocomputing 74(18), 3808–3815 (2011) 38. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear systems. IEEE Trans. Autom. Control 37(1), 38–53 (1992) 39. Fridman, E.: A refined input delay approach to sampled-data control. Automatica 46(2), 421–427 (2010)


151

40. Glass, L.: Synchronization and rhythmic processes in physiology. Nature 410(6825), 277–284 (2001) 41. Gong, D., Zhang, H., Wang, Z., Huang, B.: Novel synchronization analysis for complex networks with hybrid coupling by handling multitude Kronecker product terms. Neurocomputing 82, 14–20 (2012) 42. Gopalsamy, K.: Stability of artificial neural networks with impulses. Appl. Math. Comput. 154(3), 783–813 (2004) 43. Gray, C.M.: Synchronous oscillations in neuronal systems: mechanisms and functions. J. Comput. Neurosci. 1(1–2), 11–38 (1994) 44. Gu, K., Chen, J., Kharitonov, V.L.: Stability of Time-Delay Systems. Birkhuser Press, Boston (2003) 45. Guan, Z.-H., Liu, Z.-W., Feng, G., Wang, Y.-W.: Synchronization of complex dynamical networks with time-varying delays via impulsive distributed control. IEEE Trans. Circuits Syst. I Regul. Pap. 57(8), 2182–2195 (2010) 46. Guo, Y., Wang, Z.: Stability of Markovian jump systems with generally uncertain transition rates. J. Franklin Inst. 350(9), 2826–2836 (2013) 47. Guo, Z., Wang, J., Yan, Z.: Global exponential synchronization of two memristor-based recurrent neural networks with time delays via static or dynamic coupling. IEEE Trans. Syst., Man, Cybern.: Syst. 45(2), 235–249 (2015) 48. He, P., Ma, S.-H., Fan, T.: Finite-time mixed outer synchronization of complex networks with coupling time-varying delay. Chaos: Interdiscipl. J. Nonlinear Sci. 22(4), 043151 (2012) 49. He, Y., Liu, G.-P., Rees, D., Min, W.: Stability analysis for neural networks with time-varying interval delay. IEEE Trans. Neural Networks 18(6), 1850–1854 (2007) 50. Cheng, H., Juan, Yu., Jiang, H., Teng, Z.: Exponential synchronization of complex networks with finite distributed delays coupling. IEEE Trans. Neural Netw. 22(12), 1999–2010 (2011) 51. Cheng, H., Juan, Yu., Jiang, H., Teng, Z.: Pinning synchronization of weighted complex networks with variable delays and adaptive coupling weights. Nonlinear Dyn. 67, 1373–1385 (2012) 52. Hua, L., Zhong, S., Shi, K., Zhang, X.: Further results on finite-time synchronization of delayed inertial memristive neural networks via a novel analysis method. Neural Netw. 127, 47–57 (2020) 53. Huang, H., Huang, T., Chen, X.: A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays. Neural Netw. 46, 50–61 (2013) 54. Huang, Q., Cao, J.: Stability analysis of inertial Cohen-Grossberg neural networks with Markovian jumping parameters. Neurocomputing 282, 89–97 (2018) 55. Ji, D., Jeong, S., Park, J., Lee, S., Won, S.: Adaptive lag synchronization for uncertain complex dynamical network with delayed coupling. Appl. Math. Comput. 218(9), 4872–4880 (2012) 56. Ji, H., Zhang, H., Senping, T.: Reachable set estimation for inertial Markov jump BAM neural network with partially unknown transition rates and bounded disturbances. J. Franklin Inst. 354(15), 7158–7182 (2017) 57. Kao, Y., Xie, J., Wang, C.: Stabilization of singular Markovian jump systems with generally uncertain transition rates. IEEE Trans. Autom. Control 59(9), 2604–2610 (2014) 58. Karimi, H.R., Gao, H.: New delay-dependent exponential . H∞ synchronization for uncertain neural networks with mixed time delays. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 40(1), 173–185 (2010) 59. Kárn`y, M., Herzallah, R.: Scalable harmonization of complex networks with local adaptive controllers. IEEE Trans. Syst., Man, Cybern.: Syst. 47(3), 394–404 (2017) 60. 
Kim, J.-H.: Further improvement of Jensen inequality and application to stability of timedelayed systems. Automatica 64, 121–125 (2016) 61. Kwon, O.-M., Park, M.-J., Park, J.H., Lee, S.-M., Cha, E.-J.: New and improved results on stability of static neural networks with interval time-varying delays. Appl. Math. Comput. 239, 346–357 (2014)

152


62. Lakshmanan, S., Prakash, M., Lim, C.P., Rakkiyappan, R., Balasubramaniam, P., Nahavandi, S.: Synchronization of an inertial neural network with time-varying delays and its application to secure communication. IEEE Trans. Neural Netw. Learn. Syst. 29(1), 195–207 (2018) 63. Li, C., Chen, G.: Synchronization in general complex dynamical networks with coupling delays. Phys. A 343, 263–278 (2004) 64. Li, F., Ligang, W., Shi, P.: Stochastic stability of semi-Markovian jump systems with modedependent delays. Int. J. Robust Nonlinear Control 24(18), 3317–3330 (2014) 65. Li, H., Gao, H., Shi, P.: New passivity analysis for neural networks with discrete and distributed delays. IEEE Trans. Neural Netw. 21(11), 1842–1847 (2010) 66. Li, K., Lai, C.H.: Adaptive-impulsive synchronization of uncertain complex dynamical networks. Phys. Lett. A 372(10), 1601–1606 (2008) 67. Li, N., Zhang, Y., Jiawen, H., Nie, Z.: Synchronization for general complex dynamical networks with sampled-data. Neurocomputing 74(5), 805–811 (2011) 68. Li, P., Cao, J., Wang, Z.: Robust impulsive synchronization of coupled delayed neural networks with uncertainties. Phys. A 373, 261–272 (2007) 69. Li, Z., Duan, Z., Chen, G., Huang, L.: Consensus of multiagent systems and synchronization of complex networks: a unified viewpoint. IEEE Trans. Circuits Syst. I Regul. Pap. 57(1), 213–224 (2010) 70. Lian, H.-H., Xiao, S.-P., Wang, Z., Zhang, X.-H., Xiao, H.-Q.: Further results on sampled-data synchronization control for chaotic neural networks with actuator saturation. Neurocomputing 346, 30–37 (2019) 71. Liang, J., Wang, Z., Liu, Y., Liu, X.: Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks. IEEE Trans. Neural Netw. 19(11), 1910–1921 (2008) 72. Liao, X., Chen, G., Sanchez, E.N.: Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach. Neural Netw. 15(7), 855–866 (2002) 73. Liao, X., Zhongfu, W., Juebang, Yu.: Stability switches and bifurcation analysis of a neural network with continuously delay. IEEE Trans. Syst., Man, Cybern.-Part A: Syst. Humans 29(6), 692–696 (1999) 74. Liu, D., Li, H., Wang, D.: Neural-network-based zero-sum game for discrete-time nonlinear systems via iterative adaptive dynamic programming algorithm. Neurocomputing 110, 92– 100 (2013) 75. Liu, D., Wang, D., Wang, F.-Y., Li, H., Yang, X.: Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems. IEEE Trans. Cybern. 44(12), 2834–2847 (2014) 76. Liu, D., Wang, D., Yang, X.: An iterative adaptive dynamic programming algorithm for optimal control of unknown discrete-time nonlinear systems with constrained inputs. Inf. Sci. 220, 331–342 (2013) 77. Liu, K., Fridman, E.: Wirtinger’s inequality and Lyapunov-based sampled-data stabilization. Automatica 48(1), 102–108 (2012) 78. Liu, K., Suplin, V., Fridman, E.: Stability of linear systems with general sawtooth delay. IMA J. Math. Control. Inf. 27(4), 419–436 (2010) 79. Liu, Q., Kan, B., Wang, Q., Fang, Y.: Cluster synchronization of markovian switching complex networks with hybrid couplings and stochastic perturbations. Phys. A 526, 120937 (2019) 80. Liu, X., Xi, H.: Synchronization of neutral complex dynamical networks with Markovian switching based on sampled-data controller. Neurocomputing 139, 163–179 (2014) 81. Liu, Y., Wang, Z., Liang, J., Liu, X.: Synchronization and state estimation for discrete-time complex networks with distributed delays. IEEE Trans. 
Syst., Man, Cybern. Part B (Cybern.) 38(5), 1314–1325 (2008) 82. Liu, Y., Wang, Z., Liang, J., Liu, X.: Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Trans. Neural Netw. 20(7), 1102–1116 (2009) 83. Liu, Y., Wang, Z., Liang, J., Liu, X.: Synchronization of coupled neutral-type neural networks with jumping-mode-dependent discrete and unbounded distributed delays. IEEE Trans. Cybern. 43(1), 102–114 (2013)

References

153

84. Liu, Y., Wang, Z., Liu, X.: Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw. 19(5), 667–675 (2006) 85. Liu, Y., Wang, Z., Liu, X.: Exponential synchronization of complex networks with Markovian jump and mixed delays. Phys. Lett. A 372(22), 3986–3998 (2008) 86. Liu, Z., Chen, Z., Yuan, Z.: Pinning control of weighted general complex dynamical networks with time delay. Phys. A 375(1), 345–354 (2007) 87. Jianquan, L., Ho, D.W.C.: Local and global synchronization in general complex dynamical networks with delay coupling. Chaos, Solitons Fractals 37(5), 1497–1510 (2008) 88. Jianquan, L., Ho, D.W.C., Cao, J., Kurths, J.: Exponential synchronization of linearly coupled neural networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011) 89. Jianquan, L., Ho, D.W.C., Wang, Z.: Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers. IEEE Trans. Neural Netw. 20(10), 1617–1629 (2009) 90. Renquan, L., Wenwu, Y., Lü, J., Xue, A.: Synchronization on complex networks of networks. IEEE Trans. Neural Netw. Learn. Syst. 25(11), 2110–2118 (2014) 91. Wenlian, L., Chen, T.: Synchronization of coupled connected neural networks with delays. IEEE Trans. Circuits Syst. I Regul. Pap. 51(12), 2491–2503 (2004) 92. Ma, Q., Shengyuan, X., Zou, Y.: Stability and synchronization for Markovian jump neural networks with partly unknown transition probabilities. Neurocomputing 74(17), 3404–3411 (2011) 93. Mao, X.: Stability of stochastic differential equations with Markovian switching. Stochast. Process. Their Appl. 79(1), 45–67 (1999) 94. Marcus, C.M., Westervelt, R.M.: Stability of analog neural networks with delay. Phys. Rev. A 39(1), 347 (1989) 95. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943) 96. Mei, J., Jiang, M., Wangming, X., Wang, B.: Finite-time synchronization control of complex dynamical networks with time delay. Commun. Nonlinear Sci. Numer. Simul. 18(9), 2462– 2478 (2013) 97. Néda, Z., Ravasz, E., Vicsek, T., Brechet, Y., Barabási, A.-L.: Physics of the rhythmic applause. Phys. Rev. E 61(6), 6987 (2000) 98. Park, J.H.: Synchronization of cellular neural networks of neutral type via dynamic feedback controller. Chaos, Solitons Fractals 42(3), 1299–1304 (2009) 99. Park, J.H., Lee, T.H.: Synchronization of complex dynamical networks with discontinuous coupling signals. Nonlinear Dyn. 79(2), 1353–1362 (2015) 100. Park, M., Kwon, O., Park, J., Lee, S., Cha, E.: Robust stability analysis for Lur’e systems with interval time-varying delays via Wirtinger-based inequality. Adv. Differ. Equ. 2014(1), 1–13 (2014) 101. Park, P., Ko, J.W.: Stability and robust stability for systems with a time-varying delay. Automatica 43(10), 1855–1858 (2007) 102. Park, P., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with timevarying delays. Automatica 47(1), 235–238 (2011) 103. Park, P., Lee, W.I.l., Lee, S.Y.: Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems. J. Franklin Inst. 352(4), 1378–1396 (2015) 104. Peng, C., Han, Q.-L., Yue, D., Tian, E.: Sampled-data robust. H∞ control for T-S fuzzy systems with time delay and uncertainties. Fuzzy Sets Syst. 179(1), 20–33 (2011) 105. 
Prakash, M., Balasubramaniam, P., Lakshmanan, S.: Synchronization of Markovian jumping inertial neural networks and its applications in image encryption. Neural Netw. 83, 86–93 (2016) 106. Rakkiyappan, R., Chandrasekar, A., Park, J.H., Kwon, O.M.: Exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control. Nonlinear Anal.: Hybrid Syst. 14, 16–37 (2014)

154

References

107. Rakkiyappan, R., Maheswari, K., Velmurugan, G., Park, J.H.: Event-triggered . H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization. Neural Netw. 105, 236–248 (2018) 108. Rakkiyappan, R., Sakthivel, N.: Stochastic sampled-data control for exponential synchronization of Markovian jumping complex dynamical networks with mode-dependent time-varying coupling delay. Circuits, Syst. Signal Process. 34, 153–183 (2015) 109. Rakkiyappan, R., Sakthivel, N., Cao, J.: Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Neural Netw. 66, 46–63 (2015) 110. Rakkiyappan, R., Sivasamy, R., Cao, J.: Stochastic sampled-data stabilization of neuralnetwork-based control systems. Nonlinear Dyn. 81, 1823–1839 (2015) 111. Rakkiyappan, R., Chandrasekar, A., Cao, J.: Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 26(9), 2043–2057 (2015) 112. Rakkiyappan, R., Dharani, S., Cao, J.: Synchronization of neural networks with control packet loss and time-varying delay via stochastic sampled-data controller. IEEE Trans. Neural Netw. Learn. Syst. 26(12), 3215–3226 (2015) 113. Seuret, A., Gouaisbaut, F.: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49(9), 2860–2866 (2013) 114. Shen, B., Wang, Z., Liu, X.: Bounded . H∞ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite horizon. IEEE Trans. Neural Netw. 22(1), 145–157 (2011) 115. Shen, B., Wang, Z., Liu, X.: Sampled-data synchronization control of dynamical networks with stochastic sampling. IEEE Trans. Autom. Control 57(10), 2644–2650 (2012) 116. Shen, H., Park, J.H., Zhang, L., Wu, Z.-G.: Robust extended dissipative control for sampleddata Markov jump systems. Int. J. Control 87(8), 1549–1564 (2014) 117. Shen, H., Wu, Z.-G., Park, J.H.: Reliable mixed passive and filtering for semi-Markov jump systems with randomly occurring uncertainties and sensor failures. Int. J. Robust Nonlinear Control 25(17), 3231–3251 (2015) 118. Shi, J., Zeng, Z.: Global exponential stabilization and lag synchronization control of inertial neural networks with time delays. Neural Netw. 126, 11–20 (2020) 119. Song, B., Park, J.H., Wu, Z.-G., Zhang, Y.: Global synchronization of stochastic delayed complex networks. Nonlinear Dyn. 70, 2389–2399 (2012) 120. Song, Q., Cao, J.: On pinning synchronization of directed and undirected complex dynamical networks. IEEE Trans. Circuits Syst. I Regul. Pap. 57(3), 672–680 (2010) 121. Song, Q., Liu, F., Cao, J., Wenwu, Yu.: Pinning-controllability analysis of complex networks: an m-matrix approach. IEEE Trans. Circuits Syst. I Regul. Pap. 59(11), 2692–2701 (2012) 122. Song, Q.: Synchronization analysis of coupled connected neural networks with mixed time delays. Neurocomputing 72(16–18), 3907–3914 (2009) 123. Song, Q., Zhao, Z.: Cluster, local and complete synchronization in coupled neural networks with mixed delays and nonlinear coupling. Neural Comput. Appl. 24(5), 1101–1113 (2014) 124. Strogatz, S.H.: Exploring complex networks. Nature 410(6825), 268–276 (2001) 125. Strogatz, S.H., Stewart, I.: Coupled oscillators and biological synchronization. Sci. Am. 269(6), 102–109 (1993) 126. Su, H., Rong, Z., Chen, M.Z.Q., Wang, X., Chen, G., Wang, H.: Decentralized adaptive pinning control for cluster synchronization of complex dynamical networks. 
IEEE Trans. Cybern. 43(1), 394–399 (2013) 127. Tang, Y., Fang, J., Xia, M., Xiaojing, G.: Synchronization of takagi-sugeno fuzzy stochastic discrete-time complex networks with mixed time-varying delays. Appl. Math. Model. 34(4), 843–855 (2010) 128. Tang, Y., Gao, H., Jianquan, L., Kurths, J.: Pinning distributed synchronization of stochastic dynamical networks: a mixed optimization approach. IEEE Trans. Neural Netw. Learn. Syst. 25(10), 1804–1815 (2014)

References

155

129. Tang, Y., Qian, F., Gao, H., Kurths, J.: Synchronization in complex networks and its application-a survey of recent advances and challenges. Annu. Rev. Control. 38(2), 184–198 (2014) 130. Tang, Y., Wong, W.K.: Distributed synchronization of coupled neural networks via randomly occurring control. IEEE Trans. Neural Netw. Learn. Syst. 24(3), 435–447 (2013) 131. Zhengwen, T., Yang, X., Wang, L., Ding, N.: Stability and stabilization of quaternion-valued neural networks with uncertain time-delayed impulses: direct quaternion method. Phys. A 535, 122358 (2019) 132. Zhengwen, T., Zhao, Y., Ding, N., Feng, Y., Zhang, W.: Stability analysis of quaternion-valued neural networks with both discrete and distributed delays. Appl. Math. Comput. 343, 342–353 (2019) 133. Wang, G., Zhang, Q., Yang, C.: Stabilization of singular Markovian jump systems with timevarying switchings. Inf. Sci. 297, 254–270 (2015) 134. Wang, J., Zhang, X.-M., Han, Q.-L.: Event-triggered generalized dissipativity filtering for neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 27(1), 77–88 (2016) 135. Wang, J.-L., Huai-Ning, W.: Local and global exponential output synchronization of complex delayed dynamical networks. Nonlinear Dyn. 67, 497–504 (2012) 136. Wang, J.-L., Huai-Ning, W., Guo, L.: Stability analysis of reaction-diffusion Cohen-Grossberg neural networks under impulsive control. Neurocomputing 106, 21–30 (2013) 137. Wang, J., Zhang, H., Wang, Z., Gao, D.W.: Finite-time synchronization of coupled hierarchical hybrid neural networks with time-varying delays. IEEE Trans. Cybern. 47(10), 2995–3004 (2017) 138. Wang, J., Zhang, H., Wang, Z., Liang, H.: Cluster exponential synchronization of a class of complex networks with hybrid coupling and time-varying delay. Chin. Phys. B 22(9), 090504 (2013) 139. Wang, J., Zhang, H., Wang, Z., Liang, H.: Local stochastic synchronization for Markovian neutral-type complex networks with partial information on transition probabilities. Neurocomputing 167, 474–487 (2015) 140. Wang, J., Zhang, H., Wang, Z., Liang, H.: Stochastic synchronization for Markovian coupled neural networks with partial information on transition probabilities. Neurocomputing 149, 983–992 (2015) 141. Wang, J., Zhang, H., Wang, Z., Liu, Z.: Sampled-data synchronization of Markovian coupled neural networks with mode delays based on mode-dependent LKF. IEEE Trans. Neural Netw. Learn. Syst. 28(11), 2626–2637 (2016) 142. Wang, J., Zhang, H., Wang, Z., Shan, Q.: Local synchronization criteria of Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates. IEEE Trans. Syst., Man, Cybern.: Syst. 47(8), 1953–1964 (2017) 143. Wang, J., Zhang, H., Wang, Z., Wang, B.: Local exponential synchronization in complex dynamical networks with time-varying delay and hybrid coupling. Appl. Math. Comput. 225, 16–32 (2013) 144. Wang, L., Wei, G., Shu, H.: State estimation for complex networks with randomly occurring coupling delays. Neurocomputing 122, 513–520 (2013) 145. Wang, X.F., Chen, G.: Pinning control of scale-free dynamical networks. Phys. 310(3–4), 521–531 (2002) 146. Wang, X.F., Chen, G.: Synchronization in scale-free dynamical networks: robustness and fragility. IEEE Trans. Circuits Syst. I: Fund. Theory Appl. 49(1), 54–62 (2002) 147. Wang, X.F., Chen, G.: Synchronization in small-world dynamical networks. Int. J. Bifurcat. Chaos 12(01), 187–192 (2002) 148. Wang, X.-F., Li, X., Chen, G.-R., et al.: Complex Network Theory and Its Application. 
Qing Hua University Publication, Beijing (2006) 149. Wang, Y., Jianquan, L., Li, X., Liang, J.: Synchronization of coupled neural networks under mixed impulsive effects: a novel delay inequality approach. Neural Netw. 127, 38–46 (2020)

156

References

150. Wang, Y., Zhang, H., Wang, X., Yang, D.: Networked synchronization control of coupled dynamic networks with time-varying delay. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 40(6), 1468–1479 (2010)
151. Wang, Z., Ding, S., Shan, Q., Zhang, H.: Stability of recurrent neural networks with time-varying delay via flexible terminal method. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2456–2463 (2017)
152. Wang, Z., Liu, Z., Zheng, C.: Qualitative Analysis and Control of Complex Neural Networks with Delays. Springer (2016)
153. Wang, Z., Zhang, H.: Synchronization stability in complex interconnected neural networks with nonsymmetric coupling. Neurocomputing 108, 84–92 (2013)
154. Wang, Z., Liu, Y., Liu, X.: H∞ filtering for uncertain stochastic time-delay systems with sector-bounded nonlinearities. Automatica 44(5), 1268–1277 (2008)
155. Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998)
156. Wei, G., Han, F., Wang, L., Song, Y.: Reliable H∞ filtering for discrete piecewise linear systems with infinite distributed delays. Int. J. Gen. Syst. 43(3–4), 346–358 (2014)
157. Wen, J.W., Liu, F., Nguang, S.K.: Sampled-data predictive control for uncertain jump systems with partly unknown jump rates and time-varying delay. J. Franklin Inst. 349(1), 305–322 (2012)
158. Wen, S., Zeng, Z., Huang, T., Meng, Q., Yao, W.: Lag synchronization of switched neural networks via neural activation function and applications in image encryption. IEEE Trans. Neural Netw. Learn. Syst. 26(7), 1493–1502 (2015)
159. Wen, S., Chen, S., Guo, W.: Adaptive global synchronization of a general complex dynamical network with non-delayed and delayed coupling. Phys. Lett. A 372(42), 6340–6346 (2008)
160. Wong, W.K., Zhang, W., Tang, Y., Wu, X.: Stochastic synchronization of complex networks with mixed impulses. IEEE Trans. Circuits Syst. I Regul. Pap. 60(10), 2657–2667 (2013)
161. Wu, C.W.: Synchronization in arrays of coupled nonlinear systems with delay and nonreciprocal time-varying coupling. IEEE Trans. Circuits Syst. II Express Briefs 52(5), 282–286 (2005)
162. Wu, C.W., Chua, L.O.: Synchronization in an array of linearly coupled dynamical systems. IEEE Trans. Circuits Syst. I Fund. Theory Appl. 42(8), 430–447 (1995)
163. Wu, W., Chen, T.: Global synchronization criteria of linearly coupled neural network systems with time-varying coupling. IEEE Trans. Neural Netw. 19(2), 319–332 (2008)
164. Wu, Z.G., Park, J.H., Su, H., Chu, J.: Discontinuous Lyapunov functional approach to synchronization of time-delay neural networks using sampled-data. Nonlinear Dyn. 69, 2021–2030 (2012)
165. Wu, Z.G., Park, J.H., Su, H., Chu, J.: Non-fragile synchronisation control for complex networks with missing data. Int. J. Control 86(3), 555–566 (2013)
166. Wu, Z.G., Park, J.H., Su, H., Song, B., Chu, J.: Exponential synchronization for complex dynamical networks with sampled-data. J. Franklin Inst. 349(9), 2735–2749 (2012)
167. Wu, Z.G., Shi, P., Su, H., Chu, J.: Delay-dependent stability analysis for switched neural networks with time-varying delay. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 41(6), 1522–1530 (2011)
168. Wu, Z.G., Shi, P., Su, H., Chu, J.: Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays. IEEE Trans. Neural Netw. 22(10), 1566–1575 (2011)
169. Wu, Z.G., Shi, P., Su, H., Chu, J.: Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling. IEEE Trans. Neural Netw. Learn. Syst. 23(9), 1368–1376 (2012)
170. Wu, Z.G., Shi, P., Su, H., Chu, J.: Sampled-data exponential synchronization of complex dynamical networks with time-varying coupling delay. IEEE Trans. Neural Netw. Learn. Syst. 24(8), 1177–1187 (2013)
171. Wu, Z.G., Shi, P., Su, H., Chu, J.: Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
172. Wu, Z.G., Su, H., Chu, J.: State estimation for discrete Markovian jumping neural networks with time delay. Neurocomputing 73(10–12), 2247–2254 (2010)
173. Xiao, S.P., Lian, H.H., Teo, K.L., Zeng, H.B., Zhang, X.H.: A new Lyapunov functional approach to sampled-data synchronization control for delayed neural networks. J. Franklin Inst. 355(17), 8857–8873 (2018)
174. Xiong, J., Lam, J.: Robust H2 control of Markovian jump systems with uncertain switching probabilities. Int. J. Syst. Sci. 40(3), 255–265 (2009)
175. Xiong, J., Lam, J., Gao, H., Ho, D.W.C.: On robust stabilization of Markovian jump systems with uncertain switching probabilities. Automatica 41(5), 897–903 (2005)
176. Xu, S., Lam, J., Ho, D.W.C., Zou, Y.: Delay-dependent exponential stability for a class of neural networks with time delays. J. Comput. Appl. Math. 183(1), 16–28 (2005)
177. Yang, F., Zhang, H., Hui, G., Wang, S.: Mode-independent fuzzy fault-tolerant variable sampling stabilization of nonlinear networked systems with both time-varying and random delays. Fuzzy Sets Syst. 207, 45–63 (2012)
178. Yang, F., Zhang, H., Wang, Y.: An enhanced input-delay approach to sampled-data stabilization of T-S fuzzy systems via mixed convex combination. Nonlinear Dyn. 75, 501–512 (2014)
179. Yang, X., Cao, J., Lu, J.: Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths. IEEE Trans. Neural Netw. Learn. Syst. 23(1), 60–71 (2012)
180. Yang, X., Cao, J., Lu, J.: Synchronization of coupled neural networks with random coupling strengths and mixed probabilistic time-varying delays. Int. J. Robust Nonlinear Control 23(18), 2060–2081 (2013)
181. Yang, X., Cao, J., Lu, J.: Synchronization of randomly coupled neural networks with Markovian jumping and time-delay. IEEE Trans. Circuits Syst. I Regul. Pap. 60(2), 363–376 (2013)
182. Yang, X., Feng, Y., Yiu, K.F.C., Song, Q., Alsaadi, F.E.: Synchronization of coupled neural networks with infinite-time distributed delays via quantized intermittent pinning control. Nonlinear Dyn. 94, 2289–2303 (2018)
183. Yang, X., Wan, X., Cheng, Z., Cao, J., Liu, Y., Rutkowski, L.: Synchronization of switched discrete-time neural networks via quantized output control with actuator fault. IEEE Trans. Neural Netw. Learn. Syst. 32(9), 4191–4201 (2020)
184. Yang, X., Liu, D., Huang, Y.: Neural-network-based online optimal control for uncertain nonlinear continuous-time systems with control constraints. IET Control Theory Appl. 7(17), 2037–2047 (2013)
185. Yu, W., Cao, J., Chen, G., Lu, J., Han, J., Wei, W.: Local synchronization of a complex network model. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 39(1), 230–241 (2009)
186. Yu, W., Cao, J., Lü, J.: Global synchronization of linearly hybrid coupled networks with time-varying delay. SIAM J. Appl. Dyn. Syst. 7(1), 108–133 (2008)
187. Yu, W., Cao, J., Lu, W.: Synchronization control of switched linearly coupled neural networks with delay. Neurocomputing 73(4), 858–866 (2010)
188. Yu, W., Chen, G., Lü, J.: On pinning synchronization of complex dynamical networks. Automatica 45(2), 429–435 (2009)
189. Yu, W., DeLellis, P., Chen, G., Di Bernardo, M., Kurths, J.: Distributed adaptive control of synchronization in complex networks. IEEE Trans. Autom. Control 57(8), 2153–2158 (2012)
190. Yue, D., Li, H.: Synchronization stability of continuous/discrete complex dynamical networks with interval time-varying delays. Neurocomputing 73(4–6), 809–819 (2010)
191. Zeng, H.-B., He, Y., Wu, M., She, J.: Free-matrix-based integral inequality for stability analysis of systems with time-varying delay. IEEE Trans. Autom. Control 60(10), 2768–2772 (2015)
192. Zeng, Z., Wang, J., Liao, X.: Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays. IEEE Trans. Circuits Syst. II Express Briefs 52(3), 168–173 (2005)
193. Zhang, C.-K., He, Y., Wu, M.: Exponential synchronization of neural networks with time-varying mixed delays and sampled-data. Neurocomputing 74(1–3), 265–273 (2010)
194. Zhang, H., Cui, L., Luo, Y.: Near-optimal control for nonzero-sum differential games of continuous-time nonlinear systems using single-network ADP. IEEE Trans. Cybern. 43(1), 206–216 (2012)
195. Zhang, H., Gong, D., Chen, B., Liu, Z.: Synchronization for coupled neural networks with interval delay: a novel augmented Lyapunov-Krasovskii functional method. IEEE Trans. Neural Netw. Learn. Syst. 24(1), 58–70 (2012)
196. Zhang, H., Gong, D., Chen, B., Liu, Z.: Synchronization for coupled neural networks with interval delay: a novel augmented Lyapunov-Krasovskii functional method. IEEE Trans. Neural Netw. Learn. Syst. 24(1), 58–70 (2013)
197. Zhang, H., Gong, D., Wang, Z., Ma, D.: Synchronization criteria for an array of neutral-type neural networks with hybrid coupling: a novel analysis approach. Neural Process. Lett. 35, 29–45 (2012)
198. Zhang, H., Liu, Z., Huang, G.B.: Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 40(6), 1480–1491 (2010)
199. Zhang, H., Ma, T., Huang, G.B., Wang, Z.: Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 40(3), 831–844 (2010)
200. Zhang, H., Shan, Q., Wang, Z.: Stability analysis of neural networks with two delay components based on dynamic delay interval method. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 259–267 (2017)
201. Zhang, H., Wang, J., Wang, Z., Liang, H.: Mode-dependent stochastic synchronization for Markovian coupled neural networks with time-varying mode-delays. IEEE Trans. Neural Netw. Learn. Syst. 26(11), 2621–2634 (2015)
202. Zhang, H., Wang, J., Wang, Z., Liang, H.: Sampled-data synchronization analysis of Markovian neural networks with generally incomplete transition rates. IEEE Trans. Neural Netw. Learn. Syst. 28(3), 740–752 (2017)
203. Zhang, H., Wang, Z., Liu, D.: A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(7), 1229–1262 (2014)
204. Zhang, H., Zhao, M., Wang, Z., Wu, Z.: Adaptive synchronization of an uncertain coupling complex network with time-delay. Nonlinear Dyn. 77, 643–653 (2014)
205. Zhang, L., Boukas, E.-K.: Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities. Automatica 45(2), 463–468 (2009)
206. Zhang, L., Boukas, E.-K., Lam, J.: Analysis and synthesis of Markov jump linear systems with time-varying delays and partially known transition probabilities. IEEE Trans. Autom. Control 53(10), 2458–2464 (2008)
207. Zhang, L., Leng, Y., Colaneri, P.: Stability and stabilization of discrete-time semi-Markov jump linear systems via semi-Markov kernel approach. IEEE Trans. Autom. Control 61(2), 503–508 (2016)
208. Zhang, L., Zhuang, S., Braatz, R.D.: Switched model predictive control of switched linear systems: feasibility, stability and robustness. Automatica 67, 8–21 (2016)
209. Zhang, R., Zeng, D., Park, J.H., Liu, Y., Zhong, S.: Quantized sampled-data control for synchronization of inertial neural networks with heterogeneous time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 29(12), 6385–6395 (2018)
210. Zhang, W., Branicky, M.S., Phillips, S.M.: Stability of networked control systems. IEEE Control Syst. Mag. 21(1), 84–99 (2001)
211. Zhang, X.-M., Han, Q.-L.: Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach. Neural Netw. 54, 57–69 (2014)
212. Zhang, Y., He, Y., Wu, M., Zhang, J.: Stabilization for Markovian jump systems with partial information on transition probability based on free-connection weighting matrices. Automatica 47(1), 79–84 (2011)
213. Zhang, Y., Gu, D.-W., Xu, S.: Global exponential adaptive synchronization of complex dynamical networks with neutral-type neural network nodes and stochastic disturbances. IEEE Trans. Circuits Syst. I Regul. Pap. 60(10), 2709–2718 (2013)
214. Zhang, Y., Xu, S., Chu, Y., Lu, J.: Robust global synchronization of complex networks with neutral-type delayed nodes. Appl. Math. Comput. 216(3), 768–778 (2010)
215. Zhao, J., Hill, D.J., Liu, T.: Synchronization of complex dynamical networks with switching topology: a switched system point of view. Automatica 45(11), 2502–2511 (2009)
216. Zhao, J., Hill, D.J., Liu, T.: Global bounded synchronization of general dynamical networks with nonidentical nodes. IEEE Trans. Autom. Control 57(10), 2656–2662 (2012)
217. Zheleznyak, A.L., Chua, L.O.: Coexistence of low- and high-dimensional spatiotemporal chaos in a chain of dissipatively coupled Chua’s circuits. Int. J. Bifurcat. Chaos 4(3), 639–674 (1994)
218. Zhu, Q., Cao, J.: Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 21(8), 1314–1325 (2010)
219. Zhu, Q., Cao, J.: Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays. IEEE Trans. Syst., Man, Cybern. Part B (Cybern.) 41(2), 341–353 (2011)