Machine Behavior Design And Analysis: A Consensus Perspective (ISBN 9789811532306, 9811532303)

In this book, we present our systematic investigations into consensus in multi-agent systems. We show the design and analysis of machine behaviors from a consensus perspective.



Yinyan Zhang Shuai Li

Machine Behavior Design And Analysis A Consensus Perspective


Yinyan Zhang College of Cyber Security Jinan University Guangzhou Guangdong, China

Shuai Li School of Engineering Swansea University Swansea, UK

ISBN 978-981-15-3230-6
ISBN 978-981-15-3231-3 (eBook)
https://doi.org/10.1007/978-981-15-3231-3

© Springer Nature Singapore Pte Ltd. 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To our ancestors and parents, as always

Preface

Multi-agent systems have attracted considerable research interest in recent decades. Consensus is a fundamental issue in the distributed cooperative control of multi-agent systems and in distributed computing, with broad applications such as formation control. These topics belong to an emerging interdisciplinary field of machine behavior: the scientific study of behavior exhibited by intelligent machines, which was recently discussed in the review article "Machine behaviour" by Rahwan et al., published in Nature. Decision making is an important component of machine behavior. Research on cooperative control and computing has shown that, through cooperation, a team of simple agents can perform complex tasks.

In this book, we show the design and analysis of machine behaviors from a consensus perspective. We discuss second-order and high-order min-consensus. A very interesting topic concerning the link between consensus and path planning is also included. We show that a biased min-consensus protocol can lead to a path planning phenomenon, meaning that the complexity of shortest path planning can emerge from a perturbed min-consensus protocol. As a case study, this may encourage researchers in the field of distributed control to rethink the nature of complexity and the distance between control and intelligence. We also illustrate the design and analysis of consensus protocols for nonlinear multi-agent systems derived from an optimal control formulation, which do not require solving a Hamilton–Jacobi–Bellman (HJB) equation.

Most of the material in this book is derived from the authors' papers published in journals such as IEEE Transactions on Automatic Control. To make the contents clear and easy to follow, each part (and even each chapter) is written in a relatively self-contained manner. In each chapter, we provide the steps of algorithm design, theoretical analysis, and numerical examples.
This book is divided into the following eight chapters.

Chapter 1: In this chapter, we present some background knowledge about the topics discussed in this book.

Chapter 2: In this chapter, a second-order min-consensus protocol is presented with provable convergence. Under certain conditions, the protocol can guarantee


global asymptotic min-consensus, even in the case of jointly connected communication graphs. An illustrative example is also presented.

Chapter 3: In this chapter, two novel distributed protocols are presented to address the consensus of high-order discrete-time multi-agent systems. Under the presented protocols, each agent only needs to transmit the value of a single real-valued variable to its neighbor agents, regardless of the order of the agent dynamics. Theoretical analysis shows that the presented protocols guarantee asymptotic consensus of the agent states in different cases of communication graphs, including jointly connected ones. Various simulation examples for the protocols are also presented.

Chapter 4: In this chapter, a perturbed continuous-time min-consensus protocol is presented. It is shown that the complex behavior of shortest path planning can emerge from the protocol. It is proved that the graph dynamics is convergent and that its equilibrium is equivalent to a corresponding shortest path solution. An illustrative simulation on a small-scale graph shows the convergence of the biased min-consensus dynamics to the shortest path solution over the graph. Applications to path planning in a maze and to robot complete coverage are also presented.

Chapter 5: In this chapter, a discrete-time biased min-consensus protocol is presented, which converges in finite time and is designed by modifying an existing min-consensus protocol. The convergence of the protocol is analyzed under time delay and asynchronous state updates. It is shown that a complex behavior that can address shortest path planning on a graph emerges from this consensus protocol. To show the scalability and efficiency of the presented protocol, it is applied to large-scale maze solving. In addition, we present an application of the presented protocol to the complete coverage problem, which further demonstrates the potential of biased min-consensus in robotic applications.

Chapter 6: In this chapter, a unified neural network scheme based on biased min-consensus is presented for solving the classical shortest path problem and the generalized shortest path problem, both of which are highly nonlinear. The generalized shortest path problem is more complex than the classical one, since it requires finding a shortest path among the paths from a vertex to all feasible destination vertices. Different from existing results, and inspired by the optimality principle of Bellman's dynamic programming, we formulate the two types of shortest path problems as linear programs whose decision variables denote the lengths of possible paths. Biased consensus neural networks are then adopted to solve the corresponding linear programs in an efficient and distributed manner. Theoretical analysis guarantees the performance of the presented scheme, and two illustrative examples validate its efficacy and the theoretical results. Moreover, an application to mobile robot navigation in a maze further substantiates the efficacy of the presented scheme.

Chapter 7: In this chapter, a unified framework is proposed for designing distributed control laws to achieve the distributed consensus of linear and nonlinear multi-agent systems. The consensus problem is formulated as a receding-horizon dynamic optimization problem with an integral-type performance index subject to the dynamics of the considered multi-agent system. Different from conventional

optimal control, which solves the HJB equation numerically in high dimensions, we present a suboptimal solution with analytical expressions by utilizing Taylor expansion for prediction along time, and we give the corresponding distributed control law in an explicit form. Theoretical analysis shows that the proposed control laws guarantee exponential and asymptotic stability of the multi-agent systems. It is also proved that the proposed suboptimal control laws tend to be optimal over time. Illustrative examples are also presented.

Chapter 8: In this chapter, the near-optimal distributed consensus of high-order nonlinear multi-agent systems consisting of heterogeneous agents is investigated, extending the results of Chap. 7. Under the condition that the dynamics of all agents are fully known, a nominal near-optimal protocol is presented by approximating the performance index. For the situation with fully unknown system parameters, sliding-mode auxiliary systems, which are independent for different agents, are built to reconstruct the input-output properties of the agents. Based on the sliding-mode auxiliary systems, an adaptive near-optimal protocol is then presented to control high-order nonlinear multi-agent systems with fully unknown parameters. Theoretical analysis shows that the presented protocols simultaneously guarantee the asymptotic optimality of the performance index and the asymptotic consensus of the multi-agent systems. An illustrative example is given to show the performance.

In summary, this book mainly focuses on min-consensus and related topics. We discuss min-consensus protocols, biased min-consensus and the complex behavior (i.e., path planning) emerging from biased min-consensus protocols, and consensus protocols for nonlinear multi-agent systems via near-optimal adaptive control. This book is written for graduate students and for academic and industrial researchers in the fields of distributed computing, consensus, and distributed control. We hope that this book will benefit the readers.

Guangzhou, China
Swansea, UK
December 2019

Yinyan Zhang
Shuai Li

Acknowledgments

During the writing of this book, we had the pleasure of discussing its various aspects and results with many collaborators and students. We highly appreciate their contributions, which helped improve the quality and presentation of this book.


Contents

1 Background .......... 1
  1.1 Collective Machine Behaviour .......... 1
  1.2 Consensus .......... 1
  1.3 Theoretical Tools .......... 2
  1.4 Summary .......... 3
  References .......... 3
2 Second-Order Min-Consensus .......... 5
  2.1 Introduction .......... 5
  2.2 Problem Formulation .......... 6
  2.3 Min-Consensus Under Switching Topology .......... 6
    2.3.1 Protocol .......... 7
    2.3.2 Theoretical Analysis .......... 8
  2.4 Simulation Example .......... 12
  2.5 Summary .......... 13
  References .......... 14
3 High-Order Discrete-Time Consensus .......... 21
  3.1 Introduction .......... 21
  3.2 Problem Formulation .......... 23
  3.3 Consensus Protocols .......... 24
    3.3.1 Min-Type Protocol .......... 24
    3.3.2 Max-Type Protocol .......... 25
  3.4 Theoretical Analysis .......... 26
    3.4.1 Static Graphs .......... 26
    3.4.2 Time-Varying Graphs .......... 29
  3.5 Computer Simulations .......... 32
    3.5.1 Example 1 .......... 32
    3.5.2 Example 2 .......... 34
    3.5.3 Example 3 .......... 36
  3.6 Summary .......... 38
  References .......... 39
4 Continuous-Time Biased Min-Consensus .......... 45
  4.1 Introduction .......... 45
  4.2 Protocol .......... 47
  4.3 Theoretical Analysis .......... 48
  4.4 Equivalence to Shortest Path Planning .......... 58
  4.5 Simulations and Applications .......... 59
    4.5.1 Illustrative Example .......... 60
    4.5.2 Application to Maze Solving .......... 61
    4.5.3 Application to Complete Coverage .......... 64
  4.6 Summary .......... 65
  References .......... 65
5 Discrete-Time Biased Min-Consensus .......... 73
  5.1 Introduction .......... 73
  5.2 Consensus Protocols .......... 74
    5.2.1 Protocol .......... 75
    5.2.2 Convergence Analysis .......... 76
    5.2.3 Relationship with Shortest Path Planning .......... 81
  5.3 Algorithms .......... 82
    5.3.1 Maze Solving .......... 83
    5.3.2 Complete Coverage .......... 86
  5.4 Computer Simulations .......... 86
  5.5 Applications .......... 88
  5.6 Summary .......... 91
  References .......... 92
6 Biased Consensus Based Distributed Neural Network .......... 97
  6.1 Introduction .......... 97
  6.2 Problem Description .......... 99
  6.3 Unified Method .......... 102
    6.3.1 Classical Shortest Path Problem .......... 102
    6.3.2 Generalized Shortest Path Problem .......... 104
  6.4 Theoretical Analysis .......... 106
  6.5 Computer Simulations .......... 111
    6.5.1 Example 1 .......... 112
    6.5.2 Example 2 .......... 112
  6.6 Robot Navigation Application .......... 115
  6.7 Summary .......... 117
  References .......... 118
7 Near-Optimal Consensus .......... 125
  7.1 Introduction .......... 125
  7.2 Problem Background .......... 127
  7.3 Consensus Protocols .......... 129
    7.3.1 Multiple Double Integrators .......... 129
    7.3.2 Multiple Linear Agents .......... 132
    7.3.3 Multiple Nonlinear Agents .......... 135
  7.4 Theoretical Analysis .......... 138
    7.4.1 Stability .......... 139
    7.4.2 Optimality of Cost Function .......... 142
    7.4.3 Existence of Matrix L .......... 144
  7.5 Simulation Results .......... 146
    7.5.1 Double-Integrator Agents .......... 146
    7.5.2 Second-Order Linear Agents .......... 147
    7.5.3 Second-Order Nonlinear Agents .......... 148
  7.6 Summary .......... 150
  References .......... 151
8 Adaptive Near-Optimal Consensus .......... 157
  8.1 Introduction .......... 157
  8.2 Problem Formulation .......... 159
  8.3 Nominal Design .......... 160
  8.4 Adaptive Design .......... 169
  8.5 Computer Simulations .......... 177
  8.6 Conclusions .......... 181
  References .......... 181

Acronyms

ARX     Auto-regressive exogenous
ASV     Auxiliary state variable
BMC     Biased min-consensus
FLOPs   Floating point operations
HJB     Hamilton–Jacobi–Bellman

Chapter 1

Background

1.1 Collective Machine Behaviour

Machine behaviour has recently been recognized as a research discipline in an article published in Nature [1]. According to the article, "the study of collective machine behaviour focuses on the interactive and system wide behaviours of collections of machine agents". Such studies are often inspired by natural phenomena, such as the flocking of birds or the schooling of fish. Through cooperation, a group of agents can generate far more complex behaviours than individual agents can. In this book, we will show an example (namely, path planning) of how intelligent behaviour can emerge via local interactions.

1.2 Consensus

Consensus, which means reaching an agreement on a certain quantity of interest through local interactions, is a fundamental issue for collective behaviours [2, 3]. Depending on the objective, the field of distributed computing distinguishes max-consensus, min-consensus, and average-consensus, which aim at finding the global maximum, the global minimum, or the global average, respectively, through local interactions of agents. An agent can be a physical agent or a software agent equipped with communication, sensing, computation, and/or actuation capabilities. For example, in a wireless sensor network, each sensor can be viewed as an agent. The interactions here often mean communications. In the field of distributed control or cooperative control, there are state consensus and output consensus, which aim at reaching an agreement on the state variables or output variables of the agents.
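As a minimal sketch (our own illustration, not taken from this book), the three consensus objectives above can be realized with synchronous local updates on an undirected graph; the example graph, step size, and iteration count below are arbitrary choices of ours:

```python
def consensus(x0, edges, kind="max", steps=50, eps=0.2):
    """Run a synchronous consensus iteration from initial values x0.

    kind="max"/"min": x_i <- max/min over {i} and N(i)      (finite-time)
    kind="avg":       x_i <- x_i + eps * sum_j (x_j - x_i)  (asymptotic)

    Each agent uses only its own value and its neighbors' values.
    """
    n = len(x0)
    nbrs = [set() for _ in range(n)]
    for a, b in edges:            # undirected edge list
        nbrs[a].add(b)
        nbrs[b].add(a)
    x = list(x0)
    for _ in range(steps):
        if kind == "max":
            x = [max([x[i]] + [x[j] for j in nbrs[i]]) for i in range(n)]
        elif kind == "min":
            x = [min([x[i]] + [x[j] for j in nbrs[i]]) for i in range(n)]
        else:  # average consensus; eps must be below 1/(max degree)
            x = [x[i] + eps * sum(x[j] - x[i] for j in nbrs[i]) for i in range(n)]
    return x

# Path graph 0-1-2-3 with initial values [3, 1, 4, 2]
edges = [(0, 1), (1, 2), (2, 3)]
print(consensus([3, 1, 4, 2], edges, "max"))  # all agents reach 4
print(consensus([3, 1, 4, 2], edges, "min"))  # all agents reach 1
print(consensus([3, 1, 4, 2], edges, "avg"))  # all agents approach 2.5
```

Note that max- and min-consensus settle exactly after a number of rounds bounded by the graph diameter, whereas the linear averaging update only converges asymptotically.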


A general definition of consensus is χ-consensus, which is stated as follows.

Definition 1.1 (χ-Consensus [4]) Consider a network consisting of n nodes on an undirected connected graph G = (V, E), with the state value of node i denoted by x_i. We say that the nodes asymptotically achieve χ-consensus if lim_{t→+∞} x_i = χ(x(0)), ∀i ∈ V, where x(0) = [x_1(0), x_2(0), ..., x_n(0)]^T ∈ R^n denotes the initial state of all the nodes and χ(x): R^n → R is a function whose value is unique for any argument x.

Consensus can also be classified into leaderless consensus and leader-follower consensus. In leaderless consensus, the agreement state is decided by negotiation among the agents, while in leader-follower consensus it is decided by the leader. The agents are also often referred to as robots, and the consensus problem is then called the cooperation problem of multi-robot systems. Natural questions are how to design the interactions such that consensus can be reached and how to theoretically analyze the performance of the design. In this book, we answer such questions for min-consensus and for the consensus of nonlinear multi-agent systems.

1.3 Theoretical Tools

In the study of consensus problems, graph theory [5] plays an important role. Graph theory is adopted to model the communications among the agents: a node in a graph represents an agent, and a link between two nodes represents the existence of a communication channel. For example, if bidirectional communications are allowed between any two agents, then undirected graphs can be used. Such modeling helps researchers analyze the effects of communication topologies on the consensus performance.

Consider an undirected connected graph G = (V, E) consisting of N nodes, where V = {1, 2, ..., N} and E denote the set of nodes and the set of edges in the graph, respectively. The value of node i in the graph is denoted by x_i. The edge connecting node i ∈ V and another node j ∈ V is denoted by (i, j). The set of neighbors of node i is denoted by N(i) = {j | (i, j) ∈ E}. The weight of edge (i, j) in an undirected graph is denoted by w_ij; specifically, if edge (i, j) exists, then w_ij > 0; otherwise, w_ij = 0. The union of a collection of graphs G_1 = (V, E_1), G_2 = (V, E_2), ..., G_m = (V, E_m), where m is a finite integer, with the same node set V is defined as Ĝ = (V, ∪_{i=1}^{m} E_i). The collection G_1, G_2, ..., G_m is said to be jointly connected if the union graph Ĝ is connected [6, 7].

For the convenience of later illustration, we present the definition of the shortest path problem in graph theory as follows.
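The jointly connected notion can be checked mechanically. The sketch below is our own illustration (assuming nodes labeled 0 to n−1 and undirected edge lists): it forms the union graph Ĝ and tests its connectivity:

```python
def is_connected(n, edges):
    """DFS connectivity check for an undirected graph on nodes 0..n-1."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

def jointly_connected(n, edge_sets):
    """G_1, ..., G_m are jointly connected iff their union graph is connected."""
    union = {tuple(sorted(e)) for es in edge_sets for e in es}
    return is_connected(n, union)

# Three graphs on nodes {0, 1, 2, 3}: none is connected by itself,
# but their union is the path 0-1-2-3, so the collection is jointly connected.
snapshots = [[(0, 1)], [(1, 2)], [(2, 3)]]
print(any(is_connected(4, es) for es in snapshots))  # False
print(jointly_connected(4, snapshots))               # True
```

This is exactly the situation that arises under switching topologies in later chapters: no single communication graph need be connected, as long as the graphs active over a time window are jointly connected.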


Definition 1.2 (Shortest Path Problem) The shortest path problem defined on a graph G = (V, E) is to find a path from a node s ∈ V to another node v ∈ V such that the sum of the weights of its constituent edges is minimized.

Control theory [8] plays an important role in analyzing designs for collective machine behaviours, where stability in the sense of Lyapunov is often adopted. In this regard, readers are expected to have some basic knowledge of the terminology of control theory. A distinctive feature of collective machine behaviours is the local interaction among agents, which does not occur in the traditional control of single agents. Theoretical analysis is often based on the connectivity of the communication graph, i.e., on properties of the Laplacian matrix of the graph [2]; thus, some knowledge of matrix computation is also required. In the last two chapters of this book, the control problem is formulated as an optimal control problem, for which readers are expected to have some basic knowledge of optimization theory. For readers who are not familiar with these theoretical tools, the contents of this book are still readable, as sufficient references for the necessary definitions and background knowledge are provided in each chapter. Besides, important definitions and lemmas are stated directly in the book.
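Definition 1.2 can be made concrete with a classical relaxation scheme. As a preview of the book's central theme (this is our own sketch, not the authors' exact protocol), clamping the source node at zero and letting every other node repeatedly take the biased minimum over its neighbors, x_i ← min over j in N(i) of (x_j + w_ij), drives each value to the shortest-path distance from the source; this is essentially the Bellman–Ford relaxation executed as a local, consensus-style iteration:

```python
import math

def biased_min_iteration(n, wedges, source=0, steps=None):
    """Iterate the biased min update on an undirected weighted graph.

    wedges: list of (i, j, w) edges with weight w > 0, nodes 0..n-1.
    Returns the vector of node values after the iterations; on a connected
    graph, n rounds suffice to reach the shortest-path distances.
    """
    nbrs = {i: [] for i in range(n)}
    for i, j, w in wedges:
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    x = [math.inf] * n
    x[source] = 0.0
    for _ in range(steps or n):
        # Source stays clamped at 0; every other node takes the biased min.
        x = [0.0 if i == source else
             min((x[j] + w for j, w in nbrs[i]), default=math.inf)
             for i in range(n)]
    return x

# Square with one heavy edge: 0-1 (1), 1-2 (1), 2-3 (1), 0-3 (5).
# The direct edge 0-3 costs 5, but the route 0-1-2-3 costs 3.
w = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 5.0)]
print(biased_min_iteration(4, w))  # [0.0, 1.0, 2.0, 3.0]
```

Each node uses only its neighbors' values, yet the globally optimal distances emerge; this local-to-global jump is the "path planning phenomenon" studied in Chaps. 4 and 5.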

1.4 Summary

In this chapter, we have presented some basic knowledge about collective machine behaviour and consensus. We hope that this background helps readers gain a basic understanding of the subjects of this book.

References

1. I. Rahwan et al., Machine behaviour. Nature 568, 477–486 (2019)
2. R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004)
3. H. Rezaee, F. Abdollahi, Average consensus over high-order multiagent systems. IEEE Trans. Autom. Control 60(11), 3047–3052 (2015)
4. R. Olfati-Saber, R.M. Murray, Consensus protocols for networks of dynamic agents, in Proceedings of the American Control Conference (2003), pp. 951–956
5. C. Godsil, G. Royle, Algebraic Graph Theory (Springer, New York, 2001)
6. P. Lin, Y. Jia, Multi-agent consensus with diverse time-delays and jointly-connected topologies. Automatica 47(4), 848–856 (2011)
7. T. Liu, J. Huang, Cooperative output regulation for a class of nonlinear multi-agent systems with unknown control directions subject to switching networks. IEEE Trans. Autom. Control 63(3), 783–790 (2018)
8. H.K. Khalil, Nonlinear Systems, 3rd edn. (Prentice-Hall, Upper Saddle River, 2002)

Chapter 2

Second-Order Min-Consensus

2.1 Introduction Consensus is a fundamental issue in distributed cooperative control of multi-agent systems and distributed computing, with broad applications [1–12]. Depending on the common value at which an agreement is reached, special cases of consensus include average consensus, max-consensus, and min-consensus [13–16]. Different from centralized models [17–82], consensus protocols aim at forcing the agents to reach agreement via only local interactions. Max-consensus and min-consensus protocols were employed in [83] to realize distributed stopping for average consensus in digraphs. Two max-consensus algorithms were proposed in [84] for estimating the maximum value in wireless sensor networks. A soft-max approach was proposed in [85] for reliable computation of the maximum value of local measurements over autonomous sensor networks by max-consensus, which was also extended to deal with the min-consensus problem. Extensive results about how to design consensus protocols for different types of systems under different situations have been reported in recent years. Some protocols were proposed in [86] for achieving various types of consensus. A unified framework was introduced in [87] to address the consensus of multi-agent systems consisting of agents with linear dynamics and the synchronization of complex networks. A max-min consensus algorithm was proposed in [88] and its convergence was proved. Necessary and sufficient conditions were proposed in [89] for consensus in nonlinear monotone networks with unilateral interactions, where a max-consensus protocol for a system with three agents was also proposed. However, the protocols proposed in [88] and [89] are for first-order systems. In [90], the consensus problem of double-integrator multi-agent systems was addressed. In [91], a novel distributed consensus protocol was proposed for second-order linear multi-agent systems with a directed communication topology.
Results have also been reported about consensus protocol design based on optimization approaches [92–94], event-triggered mechanisms [95, 96], etc.

© Springer Nature Singapore Pte Ltd. 2020 Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_2


On the one hand, from the literature review, one research direction in the consensus community is to extend protocols for multi-agent systems with low-order agent dynamics to those with second-order or even higher-order agent dynamics. In other words, the completeness of consensus theory requires the development of such protocols. On the other hand, in practice, a wide class of agents can be described by double-integrator dynamics, such as mobile robots and unmanned aerial vehicles [97]. Consider the altitude alignment problem of multiple unmanned aerial vehicles: when the alignment altitude is required to be the minimal initial altitude, a min-consensus protocol is needed. The min-consensus protocol for double-integrator multi-agent systems may also be helpful for multi-mobile-robot systems in certain rendezvous tasks [98]. It is not trivial to extend the results for first-order min-consensus or max-consensus to higher-order cases due to the inherent nonlinearity of the problem. The problem becomes more complicated in the case with switching topology. It should be noted that max-consensus can be reached via a corresponding min-consensus protocol through a mapping or conversion of the state variables.

2.2 Problem Formulation Consider a group of agents whose communication topology is described by an undirected graph. In this chapter, we are interested in the min-consensus problem of such an N-agent system with identical second-order dynamics described as follows:

ẍi = ui, i = 1, 2, · · · , N,  (2.1)

where xi ∈ R and ui ∈ R are the state and input of the ith agent, respectively, and N denotes the number of agents in the system. Specifically, we aim at designing a distributed protocol such that

lim_{t→+∞} xi(t) = min_{i∈V}{xi(0)}, ∀i ∈ V,  (2.2)

where xi(t) denotes the value of xi at time instant t and V = {1, 2, · · · , N} is the set of nodes. In other words, it is expected to design a fully distributed protocol such that the agents of the multi-agent system achieve asymptotic min-consensus. The distributed property makes the protocol suitable for large-scale systems.

2.3 Min-Consensus Under Switching Topology In practical applications, the communication between two agents in a multi-agent system may fail for various reasons, such as the existence of obstacles, a long distance between the agents taking them out of the communication


region, faults of communication units, etc. [13]. On the other hand, the communication between two agents may be reestablished when, for example, the fault of the communication units has been eliminated. These situations correspond to switching communication topologies or graphs. The graphs may also be unconnected during the process of reaching consensus; in the worst case, the graph is disconnected at every time instant. In this section, we mainly present results on the presented min-consensus protocol in the case with jointly connected graphs.

2.3.1 Protocol Let the communication graph of multi-agent system (2.1) at time instant t be denoted by G(t) = (V, E(t)). By the definition of jointly connected graphs, the communication graph can also be denoted by Gσ(t), where σ(t) is called the switching signal and σ(t) ∈ {1, 2, · · · , m}, with m denoting the number of different graphs. Let the set of neighbors of node i in graph Gσ(t) be denoted by N_{Gσ(t)}(i), ∀i ∈ V. Then, the presented min-consensus protocol for multi-agent system (2.1) under jointly connected graphs Gσ(t) is described as

ui = −2ẋi − xi + min_{j∈{i}∪N_{Gσ(t)}(i)} (ẋj + xj), ∀i ∈ V.  (2.3)

The design of the presented protocol is partially inspired by the existing min-consensus protocol [99] for single-integrator multi-agent systems, in which the state value is used in the min term. As a result, for double-integrator systems, we make an attempt by incorporating the time derivative of the state into the min term.

Remark 2.1 Evidently, protocol (2.3) is of the following form: ui = h(xi, ẋi, χ, ν), i = 1, 2, · · · , N, where χ = {xj | j ∈ N_{Gσ(t)}(i)} and ν = {ẋj | j ∈ N_{Gσ(t)}(i)}, from which we can readily observe that it is fully distributed. In other words, the control action of each agent only depends on its own state and the states of its neighbor agents.
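As a quick numerical check of protocol (2.3) on a static connected graph (a special case of jointly connected graphs), the sketch below integrates the closed-loop dynamics with a forward-Euler scheme; the path graph, initial states, step size, and horizon are illustrative choices, not taken from the book:

```python
def simulate_min_consensus(x0, v0, neighbors, T=30.0, dt=0.01):
    """Forward-Euler simulation of the closed loop under protocol (2.3):
    x''_i = -2 x'_i - x_i + min over j in {i} ∪ N(i) of (x'_j + x_j)."""
    x, v = list(x0), list(v0)
    n = len(x)
    for _ in range(int(T / dt)):
        # control inputs from protocol (2.3), computed from current states
        u = [-2.0 * v[i] - x[i] + min(v[j] + x[j] for j in [i] + neighbors[i])
             for i in range(n)]
        x = [x[i] + dt * v[i] for i in range(n)]
        v = [v[i] + dt * u[i] for i in range(n)]
    return x

# Illustrative path graph of five agents (0-indexed), zero initial velocities
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
xf = simulate_min_consensus([3.0, 1.0, 4.0, 2.0, 5.0], [0.0] * 5, nbrs)
```

With zero initial velocities, all states settle at min_i{xi(0)} = 1, as the theoretical analysis in the next subsection predicts.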


2.3.2 Theoretical Analysis In this subsection, we present a theoretical analysis of the performance of the presented min-consensus protocol.

Theorem 2.1 Given that ẋi(0) = 0, ∀i ∈ V, starting from any initial state xi(0), ∀i ∈ V, the multi-agent system (2.1) defined on a set of jointly connected graphs Gσ(t) = (V, Eσ(t)) asymptotically achieves min-consensus with

lim_{t→+∞} xi(t) = min_{j∈V}{xj(0)}, ∀i ∈ V,

when the min-consensus protocol (2.3) is used.

Proof Substituting protocol (2.3) into system dynamics (2.1) yields the following closed-loop dynamics:

ẍi = −2ẋi − xi + min_{j∈{i}∪N_{Gσ(t)}(i)} (ẋj + xj), i = 1, 2, · · · , N.  (2.4)

Let zi = ẋi + xi. Then, (2.4) can be rewritten as the following cascaded system:

ẋi = −xi + zi,  (2.5a)
żi = −zi + min_{j∈{i}∪N_{Gσ(t)}(i)} {zj},  (2.5b)

where i = 1, 2, · · · , N. For subsystem (2.5b), consider the following function:

e(t) = min_{i∈V}{zi(t)}.

By Clarke's generalized derivative [100], calculating the time derivative of e(t) yields

ė(t) = Σ_{i∈M(t)} λi żi(t),

where

M(t) = {i | zi(t) = min_{j∈V}{zj(t)}}


and

Σ_{i∈M(t)} λi = 1

with 0 < λi ≤ 1. In view of (2.5b), ∀i ∈ M(t),

żi(t) = −zi(t) + min_{j∈{i}∪N_{Gσ(t)}(i)} {zj(t)} = −zi(t) + zi(t) = 0.

It follows that ė(t) = 0. In other words, starting from any initial value zi(0) = ẋi(0) + xi(0), ∀i ∈ V, the minimum value of zi(t) among all the agents is invariant with time. From (2.5b), we have

żi = −zi + min_{j∈{i}∪N_{Gσ(t)}(i)} {zj} ≤ −zi + zi = 0,

∀i ∈ V. Thus, zi is lower bounded and monotonically non-increasing, ∀i ∈ V. It follows that the limit of zi exists, ∀i ∈ V, and the limit z∗i satisfies

z∗i ≥ min_{i∈V}{ẋi(0) + xi(0)},  (2.6)

i.e., there exists a steady state or an equilibrium for the multi-agent system. Suppose that the multi-agent system does not achieve consensus at the steady state. Then, there must exist a node k ∈ V such that

z∗k > min_{i∈V}{ẋi(0) + xi(0)},

where z∗k denotes the value of zk at the steady state. Since the graph is jointly connected, there must exist two neighboring nodes s1 ∈ V and s2 ∈ V satisfying z∗s1 ≠ z∗s2. Suppose that z∗s1 > z∗s2. Then, from (2.5b), at the steady state,

żs1 ≤ −z∗s1 + z∗s2 < −z∗s1 + z∗s1 = 0,

which means that the value of zs1 will decrease, contradicting the assumption that z∗s1 is the equilibrium value of zs1. Similarly, if z∗s1 < z∗s2, then, at the steady state, zs2 will decrease, which contradicts the assumption that


z∗s2 is the equilibrium value of zs2. Summarizing the above analysis, it is concluded that there does not exist a node k ∈ V satisfying

z∗k > min_{i∈V}{ẋi(0) + xi(0)}.

Together with Eq. (2.6), it is further concluded that

z∗i = min_{i∈V}{ẋi(0) + xi(0)},

∀i ∈ V. Given the condition that ẋi(0) = 0, ∀i ∈ V, we further have

lim_{t→+∞} zi(t) = min_{i∈V}{xi(0)}.  (2.7)

Consider subsystem (2.5a), i.e., ẋi = −xi + zi with i = 1, 2, · · · , N. Let

ωi = xi − min_{i∈V}{xi(0)},

∀i ∈ V. Then, for the ith agent, we have

ω̇i = −ωi + zi − min_{i∈V}{xi(0)},  (2.8)

which can be viewed as an exponentially stable linear time-invariant system ω̇i = −ωi with the external input zi − min_{i∈V}{xi(0)}. From (2.7), it is evident that

lim_{t→+∞} (zi(t) − min_{i∈V}{xi(0)}) = 0, ∀i ∈ V.

Then, in view of (2.8), by input-to-state stability [101], the equilibrium

ωi = xi − min_{i∈V}{xi(0)} = 0,

i.e., xi = min_{i∈V}{xi(0)}, ∀i ∈ V, is asymptotically stable. Moreover, since the above result applies to any initial states xi(0), the equilibrium is also globally stable. The proof is complete.

Remark 2.2 The condition ẋi(0) = 0 for all i ∈ V means that, at the initial state, all the agents are static, i.e., the initial speed is zero. Intuitively, this condition guarantees that the changes in the state values of all the agents result only from the initial state values and the local interactions. Without this condition, the agreement value also depends on the initial speeds ẋi(0) of all the agents.
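The two key facts established for subsystem (2.5b) in the proof, namely that min_i zi(t) is invariant and that every zi is non-increasing and converges to that minimum, can be checked numerically; the forward-Euler discretization and the ring graph below are illustrative assumptions, not the book's setup:

```python
def z_dynamics(z0, neighbors, T=25.0, dt=0.01):
    """Forward-Euler integration of subsystem (2.5b):
    z'_i = -z_i + min over j in {i} ∪ N(i) of z_j."""
    z = list(z0)
    n = len(z)
    mins = [min(z)]  # track min_i z_i(t) over time
    for _ in range(int(T / dt)):
        zdot = [-z[i] + min(z[j] for j in [i] + neighbors[i]) for i in range(n)]
        z = [z[i] + dt * zdot[i] for i in range(n)]
        mins.append(min(z))
    return z, mins

# Illustrative ring graph over four nodes; min_i z_i(0) = -1
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
zf, mins = z_dynamics([2.0, -1.0, 3.0, 0.5], nbrs)
```

The recorded minimum stays at −1 throughout, and all zi end near −1, matching the invariance and convergence arguments above.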


Suppose that there is a group of unmanned aerial vehicles, each flying at a constant altitude. When they need to carry out min-consensus with respect to the altitude, the initial altitude speed is zero. Note that this condition is also adopted in Theorem 1.1 of [102]. Meanwhile, the condition also indicates that the presented protocol only applies to the case with zero initial speeds, which limits its applicability.

Remark 2.3 For second-order multi-agent system (2.1) with a time-varying undirected connected communication graph G(t) = (V, E(t)), the presented min-consensus protocol is given as follows:

ui(t) = −2ẋi(t) − xi(t) + min_{j∈{i}∪Nt(i)} (ẋj(t) + xj(t))  (2.9)

with i = 1, 2, · · · , N, where Nt(i) denotes the set of neighbors of agent i in graph G(t) at time instant t. Note that a static undirected connected graph is a special case of time-varying undirected connected graphs, for which G(t) = Gc with Gc denoting a static undirected connected graph. As (2.9) is a special case of (2.3), the conclusion given in Theorem 2.1 also holds for (2.9), which is summarized as the following corollary.

Corollary 2.1 Given that ẋi(0) = 0, ∀i ∈ V, starting from any xi(0), ∀i ∈ V, the multi-agent system (2.1) defined on an undirected connected graph G(t) = (V, E(t)) asymptotically achieves min-consensus with

lim_{t→+∞} xi(t) = min_{j∈V}{xj(0)}, ∀i ∈ V,

when min-consensus protocol (2.9) is used.

For the case that ẋi(0) ≠ 0, ∀i ∈ V, we have the following remark.

Remark 2.4 From the proof of Theorem 2.1, without the condition ẋi(0) = 0, ∀i ∈ V, we have

lim_{t→+∞} xi = min_{j∈V}{xj(0) + ẋj(0)}.

Specifically, if ẋi(0) ≠ 0, ∀i ∈ V, Eq. (2.7) in the proof of Theorem 2.1 becomes

lim_{t→+∞} zi(t) = min_{i∈V}{xi(0) + ẋi(0)}.

Then, by considering

ωi = xi − min_{i∈V}{xi(0) + ẋi(0)}, ∀i ∈ V


and following the rest of the proof of Theorem 2.1, we can prove that the equilibrium

xi = min_{i∈V}{xi(0) + ẋi(0)}, ∀i ∈ V,

is asymptotically stable.
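Remark 2.4 can also be checked numerically. In the sketch below (forward-Euler integration on an arbitrary path graph with arbitrary nonzero initial velocities, all chosen for illustration), the states converge to min_i{xi(0) + ẋi(0)} = 1 rather than to min_i{xi(0)} = 2:

```python
def run_protocol(x0, v0, neighbors, T=30.0, dt=0.01):
    """Forward-Euler simulation of closed-loop dynamics (2.4),
    allowing nonzero initial velocities."""
    x, v = list(x0), list(v0)
    n = len(x)
    for _ in range(int(T / dt)):
        u = [-2.0 * v[i] - x[i] + min(v[j] + x[j] for j in [i] + neighbors[i])
             for i in range(n)]
        x = [x[i] + dt * v[i] for i in range(n)]
        v = [v[i] + dt * u[i] for i in range(n)]
    return x

# Illustrative path graph of three agents; min_i {x_i(0)} = 2.0,
# but min_i {x_i(0) + v_i(0)} = min{4, 1, 3} = 1.0
nbrs = {0: [1], 1: [0, 2], 2: [1]}
xf = run_protocol([2.0, 4.0, 3.0], [2.0, -3.0, 0.0], nbrs)
```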

2.4 Simulation Example In this section, a simulation example regarding jointly connected graphs is given to validate the theoretical results and the effectiveness of the presented protocol. Let the graph at time instant t be denoted by G(t). In this example, G(t) is defined as follows:

G(t) = G1, if sT0 ≤ t < (s + 1/4)T0,
       G2, if (s + 1/4)T0 ≤ t < (s + 2/4)T0,
       G3, if (s + 2/4)T0 ≤ t < (s + 3/4)T0,
       G4, if (s + 3/4)T0 ≤ t < (s + 1)T0,  (2.10)

where s = 0, 1, 2, · · · and T0 = 12. The four graphs, i.e., G1, G2, G3, and G4, are shown in Fig. 2.1.

Fig. 2.1 Jointly connected graphs in the example. (a) G1. (b) G2. (c) G3. (d) G4

Fig. 2.2 Profiles of xi, ẋi, and ui of multi-agent system (2.1) with the jointly connected communication graphs shown in Fig. 2.1 when min-consensus protocol (2.3) is adopted. (a) Profiles of xi. (b) Profiles of ẋi. (c) Profiles of ui

The simulation results of min-consensus protocol (2.3) for multi-agent system (2.1) with the jointly connected communication graphs shown in Fig. 2.1 are presented in Fig. 2.2. As seen from Fig. 2.2, min-consensus of the multi-agent system is successfully achieved, and the behaviors of the state variables xi and ẋi coincide with Theorem 2.1. This example validates the effectiveness of the presented protocol in the case with jointly connected graphs, as well as Theorem 2.1.
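The switching rule (2.10), with T0 = 12, can be sketched as a simple function returning the index of the active graph; this helper is illustrative, not the authors' code:

```python
def sigma(t, T0=12.0):
    """Switching signal of (2.10): each period [sT0, (s+1)T0) is split into
    four quarters, activating G1, G2, G3, G4 in turn."""
    phase = (t % T0) / T0          # position within the current period, in [0, 1)
    return int(phase * 4) + 1      # quarter index, mapped to 1..4
```

For example, during the first quarter of each period the signal equals 1 (graph G1 is active), and it cycles back to 1 at every multiple of T0.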

2.5 Summary In this chapter, a fully distributed continuous-time min-consensus protocol has been presented for second-order multi-agent systems. Theoretical analysis has shown that, via the protocol and under certain conditions, global asymptotic min-consensus can be achieved under different communication topologies. The effectiveness of the presented min-consensus protocol and the corresponding theoretical results


have been validated via an illustrative example. It is worth pointing out that, to the best of the authors' knowledge, this is the first min-consensus protocol for second-order multi-agent systems. In the future, we may extend the presented protocol to the case with a directed communication topology. In addition, relaxing the requirement on the initial condition is also worth investigating.

References 1. Y. Ren, H. Chao, W. Bourgeous, N. Sorensen, Y. Chen, Experimental validation of consensus algorithms for multivehicle cooperative control. IEEE Trans. Control Syst. Technol. 16(4), 745–752 (2008) 2. Y. Xu, W. Liu, J. Gong, Stable multi-agent-based load shedding algorithm for power systems. IEEE Trans. Power Syst. 26(4), 2006–2014 (2011) 3. L. Schenato, F. Fiorentin, Average TimeSynch: a consensus-based protocol for clock synchronization in wireless sensor networks. Automatica 47(9), 1878–1886 (2011) 4. L. Jin, S. Li, Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst. Man Cybern. Syst. 48(5), 693–701 (2018) 5. L. Jin, S. Li, H. M. La, X. Zhang, B. Hu, Dynamic task allocation in multi-robot coordination for moving target tracking: a distributed approach. Automatica 100, 75–81 (2019) 6. L. Jin, S. Li, B. Hu, C. Yi, Dynamic neural networks aided distributed cooperative control of manipulators capable of different performance indices. Neurocomputing 291, 50–58 (2018) 7. L. Jin, S. Li, L. Xiao, R. Lu, B. Liao, Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. 48(10), 1715–1724 (2018) 8. S. Li, M. Zhou, X. Luo, Z. You, Distributed winner-take-all in dynamic networks. IEEE Trans. Autom. Control 62(2), 577–589 (2017) 9. S. Li, J. He, Y. Li, M.U. Rafique, Distributed recurrent neural networks for cooperative control of manipulators: a game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 415–426 (2017) 10. L. Jin, S. Li, X. Luo, M. Shang, Nonlinearly-activated noise-tolerant zeroing neural network for distributed motion planning of multiple robot arms, in Proceedings of the International Joint Conference on Neural Networks (IJCNN) (IEEE, Piscataway, 2017), pp. 4165–4170 11. M.U. Khan, S. Li, Q. Wang, Z. Shao, Distributed multirobot formation and tracking control in cluttered environments. ACM Trans. Auton. Adapt. Syst. 
11(2), 1–22 (2016) 12. S. Li, Z. Wang, Y. Li, Using Laplacian eigenmap as heuristic information to solve nonlinear constraints defined on a graph and its application in distributed range-free localization of wireless sensor networks. Neural Process. Lett. 37(3), 411–424 (2013) 13. R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004) 14. H. Rezaee, F. Abdollahi, Average consensus over high-order multiagent systems. IEEE Trans. Autom. Control 60(11), 3047–3052 (2015) 15. L. Cheng, Z.G. Hou, M. Tan, X. Wang, Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises. IEEE Trans. Autom. Control 56(8), 1958–1963 (2011) 16. L. Macellari, Y. Karayiannidis, D.V. Dimarogonas, Multi-agent second order average consensus with prescribed transient behavior. IEEE Trans. Autom. Control 62(10), 5282–5288 (2017) 17. L. Jin, S. Li, B. Hu, M. Liu, A survey on projection neural networks and their applications. Appl. Soft Comput. 76, 533–544 (2019)


18. B. Liao, Q. Xiang, S. Li, Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 325, 234–241 (2019) 19. P.S. Stanimirovic, V.N. Katsikis, S. Li, Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 329, 129–143 (2019) 20. Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329, 255–266 (2019) 21. L. Xiao, K. Li, Z. Tan, Z. Zhang, B. Liao, K. Chen, L. Jin, S. Li, Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 142, 35–40 (2019) 22. D. Chen, S. Li, Q. Wu, Rejecting chaotic disturbances using a super-exponential-zeroing neurodynamic approach for synchronization of chaotic sensor systems. Sensors 19(1), 74 (2019) 23. Q. Wu, X. Shen, Y. Jin, Z. Chen, S. Li, A.H. Khan, D. Chen, Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 19(8), 1758 (2019) 24. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 25. Z. Zhang, S. Chen, S. Li, Compatible convex-nonconvex constrained QP-based dual neural networks for motion planning of redundant robot manipulators. IEEE Trans. Control Syst. Technol. 27(3), 1250–1258 (2019) 26. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 27. L. Jin, S. Li, B. Hu, M. Liu, J. Yu, A noise-suppressing neural algorithm for solving the timevarying system of linear equations: a control-based approach. IEEE Trans. Ind. Inf. 15(1), 236–246 (2019) 28. Y. Li, S. Li, B. 
Hannaford, A model-based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inf. 15(4), 2054–2063 (2019) 29. L. Xiao, S. Li, F. Lin, Z. Tan, A.H. Khan, Zeroing neural dynamics for control design: comprehensive analysis on stability, robustness, and convergence speed. IEEE Trans. Ind. Inf. 15(5), 2605–2616 (2019) 30. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, X. Liu, Reconfigurable battery systems: a survey on hardware architecture and research challenges. ACM Trans. Des. Autom. Electron. Syst. 24(2), 19:1–19:27 (2019) 31. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 32. L. Jin, S. Li, H. Wang, Z. Zhang, Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl. Soft Comput. 62, 840–850 (2018) 33. M. Liu, S. Li, X. Li, L. Jin, C. Yi, Z. Huang, Intelligent controllers for multirobot competitive and dynamic tracking. Complexity 2018, 4573631:1–4573631:12 (2018) 34. D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 35. L. Jin, S. Li, J. Yu, J. He, Robot manipulator control using neural networks: a survey. Neurocomputing 285, 23–34 (2018) 36. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 37. P.S. Stanimirovic, V.N. Katsikis, S. Li, Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 316, 124–134 (2018) 38. X. Li, J. Yu, S. Li, L. Ni, A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 317, 70–78 (2018)


39. L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 102–113 (2018) 40. L. Xiao, Z. Zhang, Z. Zhang, W. Li, S. Li, Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw. 105, 185–196 (2018) 41. Z. Zhang, Y. Lu, L. Zheng, S. Li, Z. Yu, Y. Li, A new varying-parameter convergentdifferential neural-network for solving time-varying convex QP problem constrained by linear-equality. IEEE Trans. Autom. Control 63(12), 4110–4125 (2018) 42. Z. Zhang, Y. Lin, S. Li, Y. Li, Z. Yu, Y. Luo, Tricriteria optimization-coordination motion of dual-redundant-robot manipulators for complex path planning. IEEE Trans. Control Syst. Technol. 26(4), 1345–1357 (2018) 43. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, Incorporation of efficient secondorder solvers into latent factor models for accurate prediction of missing QoS data. IEEE Trans. Cybern. 48(4), 1216–1228 (2018) 44. L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inf. 14(1), 98–105 (2018) 45. L. Jin, S. Li, B. Hu, RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Ind. Inf. 14(1), 189–199 (2018) 46. X. Luo, M. Zhou, S. Li, M. Shang, An inherently nonnegative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Ind. Inf. 14(5), 2011–2022 (2018) 47. D. Chen, Y. Zhang, S. Li, Tracking control of robot manipulators with unknown models: a Jacobian-matrix-adaption method. IEEE Trans. Ind. Inf. 14(7), 3044–3053 (2018) 48. J. Li, Y. Zhang, S. Li, M. Mao, New discretization-formula-based zeroing dynamics for realtime tracking control of serial and parallel manipulators. 
IEEE Trans. Ind. Inf. 14(8), 3416– 3425 (2018) 49. S. Li, H. Wang, M.U. Rafique, A novel recurrent neural network for manipulator control with improved noise tolerance. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1908–1918 (2018) 50. H. Wang, P.X. Liu, S. Li, D. Wang, Adaptive neural output-feedback control for a class of nonlower triangular nonlinear systems with unmodeled dynamics. IEEE Trans. Neural Netw. Learn. Syst. 29(8), 3658–3668 (2018) 51. S. Li, M. Zhou, X. Luo, Modified primal-dual neural networks for motion control of redundant manipulators with dynamic rejection of harmonic noises. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 4791–4801 (2018) 52. Y. Li, S. Li, B. Hannaford, A novel recurrent neural network for improving redundant manipulator motion planning completeness, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (IEEE, Piscataway, 2018), pp. 2956–2961 53. M.A. Mirza, S. Li, L. Jin, Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 266, 114–122 (2017) 54. L. Jin, S. Li, Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267, 107–113 (2017) 55. L. Jin, S. Li, B. Liao, Z. Zhang, Zeroing neural networks: a survey. Neurocomputing 267, 597–604 (2017) 56. L. Jin, Y. Zhang, S. Li, Y. Zhang, Noise-tolerant ZNN models for solving time-varying zerofinding problems: a control-theoretic approach. IEEE Trans. Autom. Control 62(2), 992–997 (2017) 57. Z. You, M. Zhou, X. Luo, S. Li, Highly efficient framework for predicting interactions between proteins. IEEE Trans. Cybern. 47(3), 731–743 (2017) 58. L. Jin, S. Li, H. M. La, X. Luo, Manipulability optimization of redundant manipulators using dynamic neural networks. IEEE Trans. Ind. Electron. 64(6), 4710–4720 (2017)


59. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, N. Guan, A robust algorithm for state-of-charge estimation with gain optimization. IEEE Trans. Ind. Inf. 13(6), 2983–2994 (2017) 60. X. Luo, J. Sun, Z. Wang, S. Li, M. Shang, Symmetric and nonnegative latent factor models for undirected, high-dimensional, and sparse networks in industrial applications. IEEE Trans. Ind. Inf. 13(6), 3098–3107 (2017) 61. S. Li, Y. Zhang, L. Jin, Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2243–2254 (2017) 62. X. Luo, S. Li, Non-negativity constrained missing data estimation for high-dimensional and sparse matrices, in Proceedings of the 13th IEEE Conference on Automation Science and Engineering (CASE) (IEEE, Piscataway, 2017), pp. 1368–1373 63. Y. Li, S. Li, D. E. Caballero, M. Miyasaka, A. Lewis, B. Hannaford, Improving control precision and motion adaptiveness for surgical robot with recurrent neural network, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, Piscataway, 2017), pp. 3538–3543 64. X. Luo, M. Zhou, M. Shang, S. Li, Y. Xia, A novel approach to extracting non-negative latent factors from non-negative big sparse matrices. IEEE Access 4, 2649–2655 (2016) 65. M. Mao, J. Li, L. Jin, S. Li, Y. Zhang, Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207, 220–230 (2016) 66. Y. Huang, Z. You, X. Li, X. Chen, P. Hu, S. Li, X. Luo, Construction of reliable proteinprotein interaction networks using weighted sparse representation based classifier with pseudo substitution matrix representation features. Neurocomputing 218, 131–138 (2016) 67. X. Luo, M. Zhou, H. Leung, Y. Xia, Q. Zhu, Z. You, S. Li, An incremental-and-staticcombined scheme for matrix-factorization-based collaborative filtering. IEEE Trans. Autom. Sci. Eng. 13(1), 333–343 (2016) 68. S. Li, Z. You, H. Guo, X. Luo, Z. 
Zhao, Inverse-free extreme learning machine with optimal information updating. IEEE Trans. Cybern. 46(5), 1229–1241 (2016) 69. L. Jin, Y. Zhang, S. Li, Y. Zhang, Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 63(11), 6978–6988 (2016) 70. X. Luo, M. Zhou, S. Li, Z. You, Y. Xia, Q. Zhu, A nonnegative latent factor model for largescale sparse matrices in recommender systems via alternating direction method. IEEE Trans. Neural Netw. Learn. Syst. 27(3), 579–592 (2016) 71. L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 27(12), 2615–2627 (2016) 72. X. Luo, M. Shang, S. Li, Efficient extraction of non-negative latent factors from highdimensional and sparse matrices in industrial applications, in Proceedings of the IEEE 16th International Conference on Data Mining (ICDM) (IEEE, Piscataway, 2016), pp. 311–319 73. X. Luo, S. Li, M. Zhou, Regularized extraction of non-negative latent factors from highdimensional sparse matrices, in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC) (IEEE, Piscataway, 2016), pp. 1221–1226 74. X. Luo, Z. Ming, Z. You, S. Li, Y. Xia, H. Leung, Improving network topology-based protein interactome mapping via collaborative filtering. Knowl.-Based Syst. 90, 23–32 (2015) 75. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, An efficient second-order approach to factorize sparse matrices in recommender systems. IEEE Trans. Ind. Inf. 11(4), 946–956 (2015) 76. L. Wong, Z. You, S. Li, Y. Huang, G. Liu, Detection of protein-protein interactions from amino acid sequences using a rotation forest model with a novel PR-LPQ descriptor, in Proceedings of the International Conference on Intelligent Computing (Springer, Cham, 2015), pp. 
713–720


77. Z. You, J. Yu, L. Zhu, S. Li, Z. Wen, A MapReduce based parallel SVM for large-scale predicting protein-protein interactions. Neurocomputing 145, 37–43 (2014)
78. Y. Li, S. Li, Q. Song, H. Liu, M.Q.H. Meng, Fast and robust data association using posterior based approximate joint compatibility test. IEEE Trans. Ind. Inf. 10(1), 331–339 (2014)
79. S. Li, Y. Li, Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44(8), 1397–1407 (2014)
80. Q. Huang, Z. You, S. Li, Z. Zhu, Using Chou's amphiphilic pseudo-amino acid composition and extreme learning machine for prediction of protein-protein interactions, in Proceedings of the International Joint Conference on Neural Networks (IJCNN) (IEEE, Piscataway, 2014), pp. 2952–2956
81. S. Li, Y. Li, Z. Wang, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw. 39, 27–39 (2013)
82. S. Li, B. Liu, Y. Li, Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 24(2), 301–309 (2013)
83. N.E. Manitara, C.N. Hadjicostis, Distributed stopping for average consensus in digraphs. IEEE Trans. Control Netw. Syst. 5(3), 957–967 (2018)
84. F. Iutzeler, P. Ciblat, J. Jakubowicz, Analysis of max-consensus algorithms in wireless channels. IEEE Trans. Signal Process. 60(11), 6103–6107 (2012)
85. S. Zhang, C. Tepedelenlioğlu, M.K. Banavar, A. Spanias, Max-consensus using the soft maximum, in Proceedings of the Asilomar Conference on Signals, Systems and Computers (IEEE, Piscataway, 2013), pp. 433–437
86. J. Cortés, Distributed algorithms for reaching consensus on general functions. Automatica 44(3), 726–737 (2008)
87. Z. Li, Z. Duan, G. Chen, L. Huang, Consensus of multiagent systems and synchronization of complex networks: a unified viewpoint. IEEE Trans. Circuits Syst. I, Regul. Pap. 57(1), 213–224 (2010)
88. G. Shi, W. Xia, K.H. Johansson, Convergence of max-min consensus algorithms. Automatica 62, 11–17 (2015)
89. S. Manfredi, D. Angeli, Necessary and sufficient conditions for consensus in nonlinear monotone networks with unilateral interactions. Automatica 77, 51–60 (2017)
90. A. Abdessameud, A. Tayebi, On consensus algorithms design for double integrator dynamics. Automatica 49(1), 253–260 (2013)
91. N. Huang, Z. Duan, G. Chen, Some necessary and sufficient conditions for consensus of second-order multi-agent systems with sampled position data. Automatica 63, 148–155 (2016)
92. Y. Zhang, S. Li, Predictive suboptimal consensus of multiagent systems with nonlinear dynamics. IEEE Trans. Syst. Man Cybern. Syst. 47(7), 1701–1711 (2017)
93. K.H. Movric, F.L. Lewis, Cooperative optimal control for multi-agent systems on directed graph topologies. IEEE Trans. Autom. Control 59(3), 769–774 (2014)
94. Y. Zhang, S. Li, Adaptive near-optimal consensus of high-order nonlinear multi-agent systems with heterogeneity. Automatica 85, 426–432 (2017)
95. L. Ma, Z. Wang, H.K. Lam, Event-triggered mean-square consensus control for time-varying stochastic multi-agent system with sensor saturations. IEEE Trans. Autom. Control 62(7), 3524–3531 (2017)
96. C. Nowzari, J. Cortés, Distributed event-triggered coordination for average consensus on weight-balanced digraphs. Automatica 68, 237–244 (2016)
97. W. Ren, E. Atkins, Distributed multi-vehicle coordinated control via local information exchange. Int. J. Robust Nonlinear Control 17(10), 1002–1033 (2007)
98. H. Su, X. Wang, G. Chen, Rendezvous of multiple mobile agents with preserved network connectivity. Syst. Control Lett. 59, 313–322 (2010)

99. Y. Zhang, S. Li, Distributed biased min-consensus with applications to shortest path planning. IEEE Trans. Autom. Control 62(10), 5429–5436 (2017)
100. F.H. Clarke, Generalized gradients and applications. Trans. Am. Math. Soc. 205, 247–262 (1975)
101. H.K. Khalil, Nonlinear Systems (Prentice-Hall, Upper Saddle River, 2002)
102. G. Xie, L. Wang, Consensus control for a class of networks of dynamic agents. Int. J. Robust Nonlinear Control 17, 10–25 (2007)

Chapter 3

High-Order Discrete-Time Consensus

3.1 Introduction

The wide application of multi-agent systems makes the design and analysis of distributed consensus protocols a hot research topic in different communities, such as computer science and control theory. Typical applications of multi-agent systems include, but are not limited to, smart grids, formation control, and load shedding [1–15]. Intuitively, a consensus protocol can be viewed as an information transmission rule such that, via local information exchange, the agents, whether physical agents or virtual computing agents, can reach an agreement. Distributed computing or control models are considered to be more robust than centralized ones [16–81].

When we study multi-agent systems, the first step is the modeling of the agent dynamics. Generally, there are two types of agent models, i.e., continuous-time agent dynamics characterized by ordinary differential equations and discrete-time agent dynamics characterized by difference equations [82]. It should be noted that, traditionally, there have been many results on the control of single discrete-time dynamical systems, such as [83–87]. In the existing literature, many results have been reported on the design and analysis of distributed control laws for multi-agent systems with continuous-time agent dynamics, such as [88–92]. However, the corresponding investigations on discrete-time ones appear to be limited. While the consensus study of discrete-time multi-agent systems is important, the results for multi-agent systems with continuous-time agent dynamics cannot be applied to them directly. In recent years, a few results have been reported on the consensus of discrete-time multi-agent systems. For instance, the conditions guaranteeing that consensus can be reached in multi-agent systems consisting of multiple first-order agents with discrete-time dynamics were analyzed in [93].
A very realistic issue to be addressed in the application of consensus protocols is the change of communication topologies, which results from the limited communication ability of the agents; in this case, the communication graphs are called switching graphs [88].

© Springer Nature Singapore Pte Ltd. 2020
Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_3

For the case with switching graphs, in terms of multi-agent systems whose agents are modeled by linear difference equations, consensus protocols were proposed in [94], which can address two kinds of consensus settings, i.e., leaderless consensus and leader-follower consensus. It should be noted that, in practice, the models of the agents may not be the same, in which case the systems are referred to as heterogeneous multi-agent systems. For this case, leader-follower discrete-time consensus protocols were designed and analyzed in [95, 96], where the time-delay issue in communications is also taken into account. In terms of the design and analysis of consensus protocols for multi-agent systems with second-order discrete-time agent dynamics, results were reported in [97–102]. When it comes to multi-agent systems consisting of agents modeled by high-order difference equations, the consensus problem is more difficult. This problem was solved in [103] for the case with an undirected connected communication topology, but the extension to time-varying communication graphs was not addressed.

This chapter is concerned with the leaderless consensus of multi-agent systems consisting of agents with discrete-time dynamics. Typical leaderless consensus types include, but are not limited to, min consensus, average consensus, and max consensus [88, 104–106]. The definition of each type of consensus depends on the relationship between the agreement state and the initial states of the agents involved in the cooperation. For min/max consensus, the final state is the minimum/maximum of the initial values of the agent states, while for average consensus, the final state is the average of the initial values of the agent states.
In view of the limited communication capability of the agents in a multi-agent system, the communication burden, which is determined by the amount of data transmitted in the communications, should be taken into account when designing consensus protocols. Normally, the amount of transmitted information increases as the order of the agent dynamics increases [96, 99–101, 103]. The limited data transmission capability, as stated in [107, 108], thus poses a design difficulty for high-order discrete-time multi-agent systems. It should also be noted that the scalability of consensus protocols is important, and thus we need to design consensus protocols for discrete-time multi-agent systems with order-independent data transmission. In this chapter, we present the design and analysis of distributed consensus protocols for systems of multiple agents that are modeled by high-order difference equations. Based on min-consensus and max-consensus protocols, we discuss two types of nonlinear consensus protocols whose communication burdens are independent of the order of the agent dynamics and which can be applied under different types of communication topologies, such as the case of jointly connected graphs. One interesting property of the two protocols is that, when they are used, the consensus value is proportional to the maximum or the minimum of the initial values of the agent states, which, to some degree, provides an estimate of the global minimum or maximum of the initial agent states. Only mild conditions are needed for the presented distributed consensus protocols to achieve the desired performance, and the consensus performance can be adjusted by setting the parameters of the consensus protocols. In comparison with [103], the two consensus protocols can be applied in more general scenarios.

3.2 Problem Formulation

23

The rest of this chapter is organized as follows. The consensus problems investigated in this chapter are formulated in Sect. 3.2, and the consensus protocols are designed in Sect. 3.3 and theoretically analyzed in Sect. 3.4 under different types of communication topologies. Illustrative numerical simulation results are given in Sect. 3.5 to evaluate the performance of the consensus protocols. We summarize this chapter in Sect. 3.6.

3.2 Problem Formulation

In this section, we formulate the consensus problem discussed in this chapter and provide some necessary graph definitions for later illustrations.

Consider a multi-agent system consisting of N agents with the capability of local communications. Let the state variable and the control input of the ith agent be denoted by x_i ∈ R and u_i ∈ R, respectively, where i ∈ {1, 2, · · · , N}. The agent dynamics is given as follows in the form of a difference equation:

x_i(k + 1) = −Σ_{T=1}^{D} α_T x_i(k + 1 − T) + Σ_{T=0}^{F−1} β_T u_i(k − T),   (3.1)

in which D represents the order of the agent dynamics; F is the order of the control input; α_T and β_T are system coefficients, and, in particular, α_1 = β_0 = 1. In this chapter, we follow the standard assumption about the agent states of the multi-agent system with discrete-time agent dynamics, i.e., x_i(n) = 0 and u_i(n) = 0, ∀n < 0, ∀i ∈ V [96]. Besides, another assumption about the stability of the agent dynamics (3.1) is also adopted, i.e., all the roots of Σ_{T=0}^{F−1} β_T z^{−T} = 0 are within the unit circle centered at the origin of the complex plane [109]. Some discussion of the agent dynamics (3.1) is provided as follows.

Remark 3.1 The agent model described by the difference equation (3.1) is general, and it contains the first-order agent dynamics investigated in [88] as a particular example by setting D = F = 1. Meanwhile, one of the commonly adopted discrete-time system models, the auto-regressive exogenous model

x_i(k + 1) = Σ_{T=0}^{D−1} α_T x_i(k − T) + Σ_{T=0}^{F−1} β_T u_i(k − T) + e_t,

could also be transformed into (3.1) if the noise e_t is sufficiently small. Some application examples include, but are not limited to, shape memory alloy actuators [110] and micro air vehicles [111].
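The recursion (3.1) can be sketched numerically as follows. This is our own illustrative sketch, not code from the book, and the coefficient values used in it are arbitrary placeholders chosen only to exercise the update rule.

```python
# Illustrative sketch (not from the book) of simulating the agent dynamics (3.1)
# under the standard zero pre-history assumption x_i(n) = u_i(n) = 0 for n < 0.
# The coefficient values below are arbitrary placeholders.

def agent_step(x_hist, u_hist, k, alpha, beta):
    """x(k+1) = -sum_{T=1}^{D} alpha[T]*x(k+1-T) + sum_{T=0}^{F-1} beta[T]*u(k-T)."""
    x_next = 0.0
    for T in range(1, len(alpha)):          # alpha is indexed 0..D
        n = k + 1 - T
        x_next -= alpha[T] * (x_hist[n] if n >= 0 else 0.0)
    for T in range(len(beta)):              # beta is indexed 0..F-1, beta[0] = 1
        n = k - T
        x_next += beta[T] * (u_hist[n] if n >= 0 else 0.0)
    return x_next

# A second-order agent (D = 2, F = 1) driven by a constant unit input.
alpha = [1.0, -0.5, 0.1]                    # illustrative coefficients only
beta = [1.0]
x, u = [1.0], [1.0]
for k in range(10):
    x.append(agent_step(x, u, k, alpha, beta))
    u.append(1.0)
```

For instance, the first step gives x(1) = 0.5·x(0) + u(0) = 1.5 with these placeholder coefficients.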

With the above modeling, in this chapter we discuss the problem of how to reach consensus among multiple agents whose dynamics are given by the difference equation (3.1) through local interactions, i.e., how to find a consensus protocol in the form of u_i(k) = p(i, N(i)) for every agent i, with N(i) denoting the set of neighboring agents of agent i, such that the following equation holds:

lim_{k→+∞} x_1(k) = lim_{k→+∞} x_2(k) = · · · = lim_{k→+∞} x_N(k) = x_c,

in which x_c represents the constant consensus value. As mentioned earlier, the design in this chapter aims to keep the communication burden as low as possible.

3.3 Consensus Protocols

In this section, to realize the distributed consensus of multiple locally connected agents modeled as (3.1), two protocols are presented.

3.3.1 Min-Type Protocol

Firstly, through the use of the min function, a consensus protocol for solving the high-order discrete-time consensus problem is designed as follows:

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + min_{j∈{i}∪N(i)} { Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T) },   (3.2)

in which γ_T ∈ R are the design coefficients with γ_0 = 0. As seen from (3.2), the protocol contains the min function, and thus we call it a min-type protocol. Later on, we show how to set the protocol coefficients via theoretical derivations. Besides, according to (3.2), to calculate the input, each agent only needs to know the state information of its neighbors and its own state information.


3.3.2 Max-Type Protocol

The min function is the counterpart of the max function in terms of the calculation rule. Thus, given the min-type consensus protocol (3.2), we can readily obtain the corresponding max-type consensus protocol for realizing consensus among multiple agents with high-order discrete-time agent dynamics. Specifically, replacing the min function with the max function yields the following consensus protocol:

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + max_{j∈{i}∪N(i)} { Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T) }.   (3.3)
Due to the similarity with the min-type consensus protocol, the setting of the coefficients γ_T follows the same requirements. Besides, from (3.3), the calculation of the control input of each agent needs only its own state information and the information of its neighboring agents. Thus, the max-type protocol is also distributed. In the following remark, we discuss the communication burden of the two distributed consensus protocols by analyzing the data transmission.

Remark 3.2 The consensus protocol (3.2) is the same as

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + min_{j∈{i}∪N(i)} {ε_j},   (3.4)

in which ε_j = Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T). Accordingly, the protocol (3.3) is the same as

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + max_{j∈{i}∪N(i)} {ε_j}.   (3.5)

Given that the coefficients in the agent models and in the consensus protocols are known a priori to each agent, every agent can calculate its term ε_i locally from its own state information. From (3.4) and (3.5), no matter which consensus protocol is adopted, consensus can be achieved if every agent in the multi-agent system transmits only the value of ε_i, which is a scalar, to its neighbors. This means that the consensus protocols require a smaller amount of transmitted data than those in [96, 99].

The readers may wonder why we need two types of consensus protocols for the same high-order discrete-time multi-agent systems and what their differences are. Essentially, the later theoretical analysis shows that the agreement state differs depending on which consensus protocol is used. In this regard, in practice, we may choose between them depending on the objective.
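The scalar-exchange reading of Remark 3.2 can be sketched in code. The sketch below is our own, not the book's implementation; the function and variable names are ours, and it assumes each agent stores its full state history.

```python
# Sketch (ours, not the book's code) of protocol (3.4): each agent i forms the
# scalar eps_i = sum_{T=0}^{D} (alpha_T + gamma_T) x_i(k - T) from its own
# history and broadcasts only that single number, whatever the order D.

def eps(x_hist, k, c):
    # c[T] = alpha_T + gamma_T; history before time 0 is zero by assumption
    return sum(cT * (x_hist[k - T] if k - T >= 0 else 0.0)
               for T, cT in enumerate(c))

def min_type_input(i, k, x, u, beta, gamma, c, neighbors):
    """u_i(k) per (3.4): local terms plus the minimum of the received scalars."""
    val = 0.0
    for T in range(1, len(beta)):                   # -sum beta_T u_i(k-T)
        val -= beta[T] * (u[i][k - T] if k - T >= 0 else 0.0)
    for T in range(1, len(gamma)):                  # -sum gamma_T x_i(k+1-T)
        n = k + 1 - T
        val -= gamma[T] * (x[i][n] if n >= 0 else 0.0)
    received = [eps(x[j], k, c) for j in [i] + neighbors[i]]
    return val + min(received)                      # min over {i} ∪ N(i)

# Two first-order agents (D = F = 1) with alpha_1 = 1, gamma_1 = -1, so c = [1, 0]:
u0 = min_type_input(0, 0, [[2.0], [5.0]], [[], []],
                    [1.0], [0.0, -1.0], [1.0, 0.0], [[1], [0]])
print(u0)   # 4.0 = -gamma_1 * x_0(0) + min(eps_0, eps_1) = 2.0 + 2.0
```

Whatever D is, the `received` list holds one scalar per neighbor, which is the order-independence property discussed above.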

3.4 Theoretical Analysis

In this section, we present theoretical analyses of the two consensus protocols under different communication scenarios in terms of the communication topology. While there are two protocols, due to the similarity of their theoretical analyses, the proofs are only given for the min-type protocol.

3.4.1 Static Graphs

In this subsection, theoretical results are given for the case in which the graph is static and undirected connected.

Theorem 3.1 Consider N agents whose models are described by Eq. (3.1), for which the communication topology is static and undirected connected. If the min-type protocol (3.2) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

Proof Substituting the min-type consensus protocol (3.2) into the agent dynamics (3.1) gives the following closed-loop system for each agent i:

x_i(k + 1) = −Σ_{T=1}^{D} (α_T + γ_T) x_i(k + 1 − T) + min_{j∈{i}∪N(i)} { Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T) }.

Adopt the following notation:

y_i(k) = Σ_{T=0}^{D} (α_T + γ_T) x_i(k − T),   (3.6)

for which α_0 = 1 and γ_0 = 0. As a result, the following equation is obtained:

y_i(k + 1) = min_{j∈{i}∪N(i)} {y_j(k)}.   (3.7)

Let us define

y(k) = min_{i∈V} {y_i(k)},
S(k) = {i | y_i(k) = y(k), i ∈ V},
ȳ(k) = max_{i∈V} {y_i(k)},
S̄(k) = {i | y_i(k) = ȳ(k), i ∈ V},
ỹ(k) = ȳ(k) − y(k).

In view of Eq. (3.7), ∀i ∈ S(k), the following equation holds:

y_i(k + 1) = min_{j∈{i}∪N(i)} {y_j(k)} = min{y(k), min_{j∈N(i)} {y_j(k)}} = y(k).

Consequently,

y(k) = y(0) = min_{i∈V} {y_i(0)}, ∀k ≥ 0.

According to (3.1), with x_i(n) = 0, ∀n < 0, one readily has y_i(0) = x_i(0). As a result,

y(k) = min_{i∈V} {x_i(0)}, ∀k ≥ 0.   (3.8)

In view of the notation S̄(k), ∀i ∈ S̄(k), one has

y_i(k) = ȳ(k) = max_{j∈V} {y_j(k)} ≥ y_j(k), ∀j ∈ V.

As a result, for the undirected connected communication graph G, in light of (3.7), ∀i ∈ S̄(k), one can derive

y_i(k + 1) = min_{j∈{i}∪N(i)} {y_j(k)} = min{y_i(k), min_{j∈N(i)} {y_j(k)}} = min{ȳ(k), min_{j∈N(i)} {y_j(k)}} = min_{j∈N(i)} {y_j(k)} ≤ y_i(k).   (3.9)

In other words, the value of ȳ(k) does not increase with k. By Eq. (3.8), the term y(k) is constant, and thus ȳ(k) has the lower bound y(k). This means that ȳ(k) converges as k keeps increasing, and thus ỹ(k) = ȳ(k) − y(k) converges, too. By recalling Eq. (3.9), one can further conclude that, during the state evolution, the minimum value among all y_i propagates from the nodes holding the minimum to the other nodes with larger values. Since the communication graph is static and undirected connected, between every two nodes in the graph there exists a shortest path whose length is smaller than the total number of nodes in the graph. Thus, ȳ(k) converges to y(k) after at most N time instants. In other words, ỹ(k) converges to zero after at most N time instants, and at the steady state we have

y_i(k) = min_{j∈V} {y_j(0)}, ∀i ∈ V, ∀k > N.

With the above analysis and Eqs. (3.6) and (3.8), if k ≥ N, ∀i ∈ V, the following equation holds:

Σ_{T=0}^{D} (α_T + γ_T) x_i(k − T) = c,   (3.10)

where c = min_{i∈V} {x_i(0)}. It is known from [112] that, given that all the roots of the polynomial equation

Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0,   (3.11)

are within the unit circle centered at the origin of the complex plane, the system (3.10) is asymptotically stable and, more importantly, for the system (3.10) the following equation holds:

lim_{k→+∞} x_i(k) = min_{j∈V} {x_j(0)} / Σ_{T=0}^{D} (α_T + γ_T),   (3.12)

which shows that asymptotic consensus is realized.
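The key step of the proof, the reduced min-iteration (3.7), can be checked numerically. The sketch below is our own; the path graph is chosen as a worst case for how fast the minimum propagates.

```python
# Numerical check (our own sketch) of the reduced dynamics (3.7):
# y_i(k+1) = min over {i} ∪ N(i) of y_j(k). On a static undirected connected
# graph, the global minimum reaches every node in at most N - 1 steps.

def min_iteration(y, neighbors):
    return [min(y[j] for j in [i] + neighbors[i]) for i in range(len(y))]

neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]   # path graph 0-1-2-3-4, N = 5
y = [3.0, 7.0, -2.0, 9.0, 5.0]
for _ in range(len(y)):                          # at most N iterations suffice
    y = min_iteration(y, neighbors)
print(y)   # [-2.0, -2.0, -2.0, -2.0, -2.0]
```

On this path graph the minimum sits at the middle node, so it needs only two steps to reach the endpoints; placing it at one end would need the full N − 1 steps.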

With regard to the conditions in Theorem 3.1, we give the comments below on how to select the coefficient values in the min-type protocol presented in this chapter.

Remark 3.3 From our point of view, the condition on the roots of (3.11) for establishing the stability of the closed-loop system equipped with the min-type consensus protocol (3.2) is weak, because we can easily find the corresponding parameters via simple calculation. For example, if we want the roots of the polynomial equation (3.11) to be r_1, r_2, · · · , r_D, then we can get the corresponding values of γ_T by comparing each term of Σ_{T=0}^{D} (α_T + γ_T) z^{D−T} with the corresponding term of (z − r_1)(z − r_2) · · · (z − r_D) after expansion. Besides, because the polynomial equation (3.11) is the characteristic equation of the closed-loop system, its roots can be used to adjust the behavior of the closed-loop system, and thus to shape the consensus behavior of the multi-agent system. In this regard, this condition can even be considered a good property of the consensus protocol.

On the basis of Theorem 3.1, we get the following result on the behavior of the state values of the multi-agent system aided by the consensus protocol (3.3).

Corollary 3.1 Consider N agents whose models are described by Eq. (3.1), for which the communication topology is static and undirected connected. If the max-type protocol (3.3) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

Proof The proof can be completed by following the procedure in the proof of Theorem 3.1 while taking into account the difference between the min function and the max function, and is thus omitted.
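The coefficient-selection recipe of Remark 3.3 can be sketched in a few lines of our own code: expand the product (z − r_1)···(z − r_D) and subtract α_T from each expanded coefficient. With the α values of the dynamics (3.15) and the roots used in Example 1 of this chapter, it recovers the γ values reported there.

```python
# Sketch (ours) of the recipe in Remark 3.3: expand (z - r_1)...(z - r_D) and
# read off gamma_T as the difference between the coefficient of z^{D-T} and
# alpha_T.

def poly_from_roots(roots):
    # coefficients of prod(z - r), highest degree first, leading coefficient 1
    coeffs = [1.0]
    for r in roots:
        coeffs = coeffs + [0.0]
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]
    return coeffs

# alpha_0..alpha_4 of the dynamics (3.15) in Example 1, desired roots 0.4...0.1:
alpha = [1.0, 1.0, 1.4, 0.8, 1.1]
target = poly_from_roots([0.4, 0.3, 0.2, 0.1])
gamma = [t - a for t, a in zip(target, alpha)]
print(gamma)   # approx [0.0, -2.0, -1.05, -0.85, -1.0976], as in Sect. 3.5.1
```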

3.4.2 Time-Varying Graphs

In this subsection, we consider the case in which the graphs are time-varying. As mentioned earlier, the communication graph can change as a result of the limited communication ability of the agents involved in the multi-agent system. We assume that the time-varying graphs always belong to a given set of graphs [113]. Some cases of time-varying communication graphs can be found in [88]. To further demonstrate the validity of the two consensus protocols, theoretical results are given for situations with a time-varying communication graph. Firstly, for ease of illustration, some symbols are defined. Specifically, the communication graph of the multi-agent system at the kth time instant is represented by G_{σ(k)}, where σ(k) ∈ {1, 2, · · · , q}, in which q represents the total number of communication graphs that can appear. Because the communication graph is time-varying, the neighbors of an agent may also be time-varying; at the kth time instant, the set of neighbors of agent i is represented by N_{σ(k)}(i). With the above notations, the min-type protocol (3.2) in the scenario with time-varying communication topologies is written as

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + min_{j∈{i}∪N_{σ(k)}(i)} { Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T) }.   (3.13)

Accordingly, the max-type protocol (3.3) in the scenario with time-varying communication topologies is written as

u_i(k) = −Σ_{T=1}^{F−1} β_T u_i(k − T) − Σ_{T=1}^{D} γ_T x_i(k + 1 − T) + max_{j∈{i}∪N_{σ(k)}(i)} { Σ_{T=0}^{D} (α_T + γ_T) x_j(k − T) }.   (3.14)

For the time-varying communication graphs in multi-agent systems, we are interested in two types: (1) the communication graphs are time-varying but always undirected and connected; (2) the communication graphs may not be connected at some time instants, but their joint graphs are undirected connected; these are called jointly connected graphs. For time-varying undirected connected communication graphs, we have the following result.

Theorem 3.2 Consider N agents whose models are described by Eq. (3.1), for which the communication topology is time-varying and undirected connected. If the min-type protocol (3.13) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

Proof The proof can be completed by following the procedure in the proof of Theorem 3.1 while taking into account the difference in the number of time instants required for the global minimum to propagate throughout the whole graph, and is thus omitted.

In view of Theorem 3.2, one can also get the following result for the max-type consensus protocol (3.14), whose proof is omitted due to similarity.


Corollary 3.2 Consider N agents whose models are described by Eq. (3.1), for which the communication topology is time-varying and undirected connected. If the max-type protocol (3.14) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

According to [114], a collection of graphs is said to be jointly connected if their union over a time period is connected. Obviously, such a condition is much weaker than that for a time-varying connected graph, because a jointly connected time-varying graph may not be connected at certain time instants; this better models, for example, communication faults in practical applications of multi-agent systems. For this situation, we have the following result.

Theorem 3.3 Consider N agents whose models are described by Eq. (3.1), for which the time-varying communication graph is undirected and jointly connected. If the min-type protocol (3.13) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

Proof To prove this theorem, we can follow the procedure in the proof of Theorem 3.1. What should be carefully addressed is the stability of the system (3.7). Due to the features of jointly connected graphs, the time it takes for ȳ(k) to reach y(k) depends on the time it takes to obtain an undirected connected union graph. By the definition of jointly connected graphs mentioned earlier, there must exist two positive integers K_2 > K_1 for which the union graph

Ḡ = ∪_{k=K_1}^{K_2} G_{σ(k)}

is undirected and connected. Because the graph Ḡ is undirected and connected, there always exists a path from the node holding the global minimum to any other node in the same graph. As a result, from (3.9), finitely many time instants still suffice for (3.10) to hold, although the upper bound on the number of time instants needed depends on the switching pattern of the graph.
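The union-graph condition used in the proof can be checked programmatically. The sketch below is our own; the edge sets in it are hypothetical examples chosen so that no single snapshot graph is connected, not the graphs of any example in this chapter.

```python
# Our own sketch of testing joint connectedness: union the edge sets over a
# window [K_1, K_2] and test connectivity of the union graph. The edge sets
# below are hypothetical placeholders.

def is_connected(n, edges):
    # depth-first search over the undirected edge set, starting from node 0
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

window = [[(0, 1)], [(1, 2)], [(2, 3)]]      # three disconnected snapshot graphs
union = {e for g in window for e in g}
print(any(is_connected(4, g) for g in window), is_connected(4, union))   # False True
```

No snapshot connects the four nodes on its own, yet the union over the window does, which is exactly the jointly connected situation covered by Theorem 3.3.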


In view of Theorem 3.3, one can also get the following result for the max-type consensus protocol (3.14) in the situation with jointly connected communication graphs, whose proof is omitted due to similarity.

Corollary 3.3 Consider N agents whose models are described by Eq. (3.1), for which the time-varying communication graph is undirected and jointly connected. If the max-type protocol (3.14) is used, then the agents can asymptotically reach consensus given that, on the complex plane, all roots of the equation Σ_{T=0}^{D} (α_T + γ_T) z^{−T} = 0 are within the unit circle centered at the origin.

On the basis of the above results, in addition to the main advantage of order-independent data transmission, we also summarize other features of the presented distributed consensus protocols for high-order discrete-time multi-agent systems.

Remark 3.4 In light of the above theoretical results, for different types of undirected communication graphs, the performance of the two consensus protocols can be analyzed under the same framework. In other words, the design is systematic and easy to follow. Besides, from the above analysis, the consensus value is clearly defined with respect to the global minimum or maximum. Consequently, a byproduct of the consensus protocols in this chapter is a distributed method to estimate the global minimum and maximum of the initial agent states.

3.5 Computer Simulations

In the above sections, we discussed the design and theoretical analysis of two consensus protocols for the consensus of a group of agents whose dynamics can be modeled by high-order difference equations. To further demonstrate the effectiveness of the two protocols with respect to different cases of communication topologies, we perform computer simulations under different settings.

3.5.1 Example 1

To verify the performance of the two consensus protocols presented in this chapter, a high-order discrete-time multi-agent system consisting of 20 agents is adopted in the simulation, for which the communication topology is set as in Fig. 3.1, and the agent dynamics is identical for all agents and given as

x_i(k + 1) = −x_i(k) − 1.4x_i(k − 1) − 0.8x_i(k − 2) − 1.1x_i(k − 3) + u_i(k) + 0.2u_i(k − 1) + 0.4u_i(k − 2) + 0.1u_i(k − 3).   (3.15)


Fig. 3.1 The undirected connected communication topology of the high-order discrete-time multi-agent system in Example 1

Fig. 3.2 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 1, of which the agent dynamics is given in (3.15), with a static undirected connected communication topology as shown in Fig. 3.1 when the min-type protocol (3.2) is used. (a) x_i(k). (b) u_i(k)

It can be readily found that the roots of the characteristic equation Σ_{T=0}^{F−1} β_T z^{−T} = 0 of the agent dynamics are approximately 0.022 + 0.64i, 0.022 − 0.64i, and −0.24, all of which are within the unit circle centered at the origin of the complex plane. The coefficients of the consensus protocols are set as γ_0 = 0, γ_1 = −2, γ_2 = −1.05, γ_3 = −0.85, and γ_4 = −1.0976, for which we can verify that the roots of Eq. (3.11), i.e., 0.4, 0.3, 0.2, and 0.1, are within the unit circle centered at the origin of the complex plane, satisfying the requirement stated in the theorems and corollaries of this chapter. As seen from Fig. 3.2, when the min-type consensus protocol is employed for the discrete-time high-order multi-agent system with agent dynamics (3.15), asymptotic consensus among the agents is reached, with the state differences being very small after 10 time instants. Under the same setting, we also perform a simulation for the case in which the max-type consensus protocol (3.3) is adopted for the discrete-time multi-agent system with


Fig. 3.3 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 1, of which the agent dynamics is given in (3.15), with a static undirected connected communication topology as shown in Fig. 3.1 when the max-type protocol (3.3) is used. (a) x_i(k). (b) u_i(k)

the agent dynamics described by (3.15). As seen from Fig. 3.3, when the max-type consensus protocol is employed, the consensus of the agents is still achieved in an asymptotic manner. Besides, it can be seen in Figs. 3.2 and 3.3 that the control inputs are bounded. These results show the effectiveness of the two presented consensus protocols for high-order discrete-time multi-agent systems with static undirected connected communication topologies.
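Example 1 can be reproduced in miniature. The sketch below is our own re-implementation, not the book's code: it iterates the closed-loop recursion from the proof of Theorem 3.1 with the coefficients of (3.15) and the γ values above, on a small ring graph standing in for the 20-node topology of Fig. 3.1, and checks the consensus value predicted by Eq. (3.12).

```python
# Our own miniature re-implementation of Example 1 (ring graph assumed instead
# of the 20-node topology of Fig. 3.1): iterate the closed-loop recursion from
# the proof of Theorem 3.1 and compare with the prediction of Eq. (3.12).

alpha = [1.0, 1.0, 1.4, 0.8, 1.1]
gamma = [0.0, -2.0, -1.05, -0.85, -1.0976]
c = [a + g for a, g in zip(alpha, gamma)]          # c_T = alpha_T + gamma_T

N = 8
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
x = [[float(i - 3)] for i in range(N)]             # x_i(0); pre-history is zero

def y_val(hist, k):
    return sum(cT * (hist[k - T] if k - T >= 0 else 0.0) for T, cT in enumerate(c))

for k in range(80):
    y = [y_val(x[i], k) for i in range(N)]         # the scalars exchanged locally
    for i in range(N):
        m = min(y[j] for j in [i] + neighbors[i])  # min-type coupling term
        nxt = m - sum(c[T] * x[i][k + 1 - T]
                      for T in range(1, len(c)) if k + 1 - T >= 0)
        x[i].append(nxt)

xc = min(xi[0] for xi in x) / sum(c)               # predicted consensus value (3.12)
print(all(abs(xi[-1] - xc) < 1e-6 for xi in x))    # True
```

With these coefficients, Σ_T (α_T + γ_T) = (1 − 0.4)(1 − 0.3)(1 − 0.2)(1 − 0.1) = 0.3024, so the agents settle at min_j x_j(0)/0.3024.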

3.5.2 Example 2

In this subsection, as the second example, we use a 9-agent multi-agent system whose agent dynamics is given by (3.15) of Example 1, but with a time-varying undirected connected topology, to evaluate the performance of the two consensus protocols under this kind of communication graph. Specifically, the communication graph is time-varying and follows the rule

G_{σ(k)} = G_1 if mod(k, 4) = 0; G_2 if mod(k, 4) = 1; G_3 if mod(k, 4) = 2; G_4 if mod(k, 4) = 3,   (3.16)

in which G_1, G_2, G_3, and G_4 denote the four possible communication graphs, which are shown in Fig. 3.4. It should be noted that the operation mod(a, b) returns the remainder when a is divided by b.
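The periodic switching rule (3.16) is straightforward to implement. The sketch below is ours; since the graphs G_1–G_4 are only given graphically in Fig. 3.4, the adjacency lists used here are hypothetical stand-ins.

```python
# Sketch (ours) of the switching rule (3.16): the active graph index cycles
# through 1, 2, 3, 4 with period 4. The adjacency lists are hypothetical
# placeholders for the graphs of Fig. 3.4.

def sigma(k):
    return k % 4 + 1          # mod(k, 4) = 0 -> G_1, ..., mod(k, 4) = 3 -> G_4

graphs = {1: {0: [1], 1: [0, 2], 2: [1]},
          2: {0: [2], 1: [2], 2: [0, 1]},
          3: {0: [1, 2], 1: [0], 2: [0]},
          4: {0: [1], 1: [0, 2], 2: [1]}}

def neighbors(i, k):
    return graphs[sigma(k)][i]    # N_{sigma(k)}(i), as used in (3.13) and (3.14)

print([sigma(k) for k in range(8)])   # [1, 2, 3, 4, 1, 2, 3, 4]
```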


Fig. 3.4 Four possible communication graphs for Example 2. (a) G_1. (b) G_2. (c) G_3. (d) G_4

Fig. 3.5 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 2, of which the agent dynamics is given in (3.15), with a time-varying communication topology defined in (3.16) when the min-type protocol (3.13) is used. (a) x_i(k). (b) u_i(k)

In the simulations, we adopt the same coefficient settings for the two protocols as in Example 1. As seen from Fig. 3.5, when the min-type consensus protocol is employed for the discrete-time high-order multi-agent system with agent dynamics (3.15), asymptotic consensus among the agents is reached, with the state differences being very small after 10 time instants. Under the same setting, we also perform a simulation for the case in which the max-type consensus protocol (3.14)



Fig. 3.6 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 2, of which the agent dynamics is given in (3.15), with a time-varying communication topology defined in (3.16) when the max-type protocol (3.14) is used. (a) xi (k). (b) ui (k)

is adopted for the discrete-time multi-agent system with the agent dynamics described by (3.15). As seen from Fig. 3.6, when the max-type consensus protocol is employed, consensus of the agents is still achieved in an asymptotic manner. Besides, it is found in Figs. 3.5 and 3.6 that the control inputs are bounded. These results show the effectiveness of the two presented consensus protocols for high-order discrete-time multi-agent systems with time-varying undirected connected communication topologies.

3.5.3 Example 3

In this subsection, the performance of the two consensus protocols under a time-varying, jointly connected communication topology is evaluated via computer simulations. The high-order multi-agent system used in Example 1 is adopted here with a different communication topology. Specifically, the communication topology switches according to the following definition:

$$
\mathcal{G}_{\sigma(k)} =
\begin{cases}
\mathcal{G}_5, & \text{if } \operatorname{mod}(k, 4) = 0, \\
\mathcal{G}_6, & \text{if } \operatorname{mod}(k, 4) = 1, \\
\mathcal{G}_7, & \text{if } \operatorname{mod}(k, 4) = 2, \\
\mathcal{G}_8, & \text{if } \operatorname{mod}(k, 4) = 3,
\end{cases}
\tag{3.17}
$$

in which G5, G6, G7, and G8 denote the four possible communication graphs during the simulation process, which are depicted in Fig. 3.7. According to Fig. 3.7, one can easily verify that the union graph of G5, G6, G7, and G8 is undirected and connected.
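Joint connectedness can be verified mechanically: merge the edge sets of all switching graphs and test whether the resulting union graph is connected. The edge sets below are illustrative placeholders, not the actual graphs of Fig. 3.7.

```python
# Joint-connectedness check: take the union of the switching graphs'
# edges, then run a BFS from node 0 to see whether all nodes are reached.
from collections import deque

def union_is_connected(n, edge_sets):
    adj = {i: set() for i in range(n)}
    for edges in edge_sets:              # union over all switching graphs
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:                         # BFS from node 0
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == n

# Four placeholder graphs, each disconnected alone, but jointly connected:
G5, G6, G7, G8 = [(0, 1)], [(1, 2)], [(2, 3)], [(3, 4)]
print(union_is_connected(5, [G5, G6, G7, G8]))  # True
```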


Fig. 3.7 The four possible communication topologies for the multi-agent system in Example 3. (a) G5 . (b) G6 . (c) G7 . (d) G8

As seen from Fig. 3.8, when the min-type consensus protocol is employed for the discrete-time high-order multi-agent system with agent dynamics (3.15), asymptotic consensus among the agents can be reached, with the state differences becoming very small after 10 time instants. Under the same setting, we also perform a simulation for the case in which the max-type consensus protocol (3.14) is adopted for the discrete-time multi-agent system with the agent dynamics described by (3.15). As seen from Fig. 3.9, when the max-type consensus protocol is employed, consensus of the agents is still achieved in an asymptotic manner. Besides, it is found in Figs. 3.8 and 3.9 that the control inputs are bounded. These results show the effectiveness of the two presented consensus protocols for high-order discrete-time multi-agent systems with time-varying undirected and jointly connected communication topologies.



Fig. 3.8 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 3, of which the agent dynamics is given in (3.15), with a time-varying communication topology defined in (3.17) when the min-type protocol (3.13) is adopted. (a) xi (k). (b) ui (k)


Fig. 3.9 State evolutions of the agents in the high-order discrete-time multi-agent system given in Example 3, of which the agent dynamics is given in (3.15), with a time-varying communication topology defined in (3.17) when the max-type protocol (3.14) is adopted. (a) xi (k). (b) ui (k)

3.6 Summary

In this chapter, we have presented the design and analysis of two consensus protocols for systems consisting of multiple agents whose dynamics are modeled by high-order discrete-time difference equations. An interesting feature of the protocols is that their communication burden is independent of the order of the agent model, which yields relatively good scalability and makes them a good alternative for realizing the consensus of multi-agent systems with high-order discrete-time dynamics. Another feature of the two protocols is that they can be adopted under different communication topologies, including jointly connected time-varying ones, making the protocols more tolerant to communication faults. Besides, it has been shown in the analysis that the consensus value is well defined


when either consensus protocol is used. It has also been shown that the performance of the two protocols is theoretically guaranteed, and it has been validated via computer simulations in different types of communication topologies.


Chapter 4

Continuous-Time Biased Min-Consensus

4.1 Introduction

Given a graph, searching for a path with the shortest length from one node to another is called the shortest path problem, where the former node is referred to as the source node and the latter as the destination node [1]. In the past decades, many algorithms have been designed and analyzed for solving the shortest path problem, including but not limited to the A∗ algorithm and Dijkstra's algorithm [2–5]. The shortest path finding algorithms in [2–9] are centralized, and those in [8, 9] cannot always find a shortest path for a given graph with a predefined source node and destination node. It is known that centralized algorithms are less robust to faults than distributed ones [10–76]. Another issue with centralized algorithms is scalability: because all the computation is performed on a central unit, the computational capability of that unit must be very high when the scale of the problem is extremely large [77, 78]. From this point of view, for solving large-scale shortest path problems, it would be better to have a distributed algorithm.

Multi-agent systems have received much attention in recent years in both the computer science community and the control community. One of the fundamental problems is how to find a mechanism that drives the agents to reach consensus in a distributed manner, i.e., with only local information exchange between neighboring agents [79–88]. For multi-agent systems, consensus of the agents means that there is no state difference among the participating agents. The rules that define the local communications of the agents, including the variables to be transmitted, are called consensus protocols.
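For reference, a minimal centralized Dijkstra implementation is sketched below; it is the classical baseline against which the chapter's distributed, consensus-based approach is contrasted. The small 3-node graph is assumed purely for illustration.

```python
# Minimal (centralized) Dijkstra: repeatedly settle the closest
# unsettled node using a priority queue of tentative distances.
import heapq

def dijkstra(adj, source):
    """adj: {node: [(neighbor, weight), ...]}; returns distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("s", 1)],
       "b": [("a", 2), ("s", 4)]}
print(dijkstra(adj, "s"))  # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```

Note that every step here reads global state (one shared priority queue), which is exactly the centralization the chapter seeks to avoid.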
Distributed consensus essentially shows how a group of agents can reach a global goal by using only local information exchange, and thus bears good scalability for solving large-scale problems; it finds various applications, such as clock synchronization [10] and formation control [89, 90]. Some recent developments in the design and analysis of consensus protocols are discussed in [91–97]. Some researchers have also tried to cope with the
© Springer Nature Singapore Pte Ltd. 2020
Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_4


realistic issues in the application of consensus protocols, such as communication delays [98–100], measurement noises [101], unknown dynamics of the agents in the multi-agent system [77, 102], and changes of the communication graphs during the process of reaching consensus [103]. Some detailed discussions on how to apply consensus protocols to physical systems were given in [78, 104, 105].

Generally, the consensus of multi-agent systems and the shortest path problem are considered to be unrelated. Intuitively, consensus seems to be a simple cooperation among agents, while path planning is a more complex behavior that should be an ability of intelligent robots. We consider the path planning ability important for realizing the intelligence of robots, especially shortest path planning. It should be noted that some path planning problems have even been proved to be NP-hard [106–108]. Revealing the relationship between the consensus of multi-agent systems and shortest path planning may help us better understand the emergence of intelligence. There are two clear benefits of handling the shortest path problem using advancements in consensus study. On one hand, consensus protocols are distributed and thus bear good scalability and robustness. On the other hand, such an investigation may give us a new perspective on the capability of consensus algorithms.

For consensus protocols, if the data transmission is ideal, then the agent states asymptotically converge to a common state [91]. Based on experience in analyzing the stability of dynamical systems, one might expect that, given a small bias term in the consensus protocol, the states would merely reach an approximate agreement. What is interesting in the results shown in this chapter is that the agent states behave far beyond an approximate agreement and have a clear correspondence to the shortest path problem defined on the graph.
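The intuition behind this correspondence can be previewed with a discrete fixed-point sketch: with a source node pinned at zero, iterating a min update biased by edge weights is exactly the Bellman-Ford relaxation, whose fixed point is the vector of shortest-path distances. This snippet is only an illustrative analogue under an assumed 3-node graph; the continuous-time biased min-consensus protocol itself is developed later in the chapter.

```python
# Discrete illustration of how an edge-weight bias turns a min update
# into shortest-path distances: with the source pinned at 0, iterating
# x_i <- min_j (x_j + w_ij) is exactly Bellman-Ford relaxation.
def biased_min_iteration(w, source, n, iters):
    x = [0.0 if i == source else float("inf") for i in range(n)]
    for _ in range(iters):
        x = [0.0 if i == source else
             min(x[j] + w[i][j] for j in w[i]) for i in range(n)]
    return x

# Weighted undirected graph as nested dict: w[i][j] = length of edge (i, j)
w = {0: {1: 1.0, 2: 4.0}, 1: {0: 1.0, 2: 2.0}, 2: {0: 4.0, 1: 2.0}}
print(biased_min_iteration(w, source=0, n=3, iters=3))  # [0.0, 1.0, 3.0]
```

Each update uses only a node's neighbors, which hints at why a biased min-consensus protocol can plan shortest paths in a fully distributed way.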
Such a finding triggers our thinking about the design of high-level intelligence, such as shortest path planning, through simple state evolutions among cooperative agents. Table 4.1 is provided to give a clear picture of the similarities and differences between the presented biased min-consensus protocol and existing ones.

The organization of the remaining contents of this chapter is as follows. First, in Sect. 4.2, we present existing consensus protocols, after which the biased min-consensus protocol is designed and analyzed. Then, in Sect. 4.4, we show the relationship between the behavior of the multi-agent system equipped with the biased min-consensus protocol and the shortest path planning problem defined on

Table 4.1 Comparisons among different consensus protocols

Consensus                                    | Distributed | Convergence      | Robotic application
Max/min-consensus [109–112]                  | Yes         | Max/min          | NA
Average consensus [90–93, 98, 101, 113–116]  | Yes         | Average          | Formation control
Weighted consensus [117]                     | Yes         | Weighted average | Formation control
Consensus filter [79]                        | Yes         | Input average    | NA
This chapter                                 | Yes         | Shortest path    | Path planning


the same graph. To evaluate the performance of the biased min-consensus protocol for finding shortest paths, we discuss the simulation results in Sect. 4.5, where some application scenarios are also illustrated. The summary of this chapter is given in Sect. 4.6.

4.2 Protocol

In this section, we start by revisiting the consensus protocols in the existing literature. Then, on the basis of them, we present the continuous-time biased min-consensus protocol, together with the corresponding theoretical analysis on the convergence of the multi-agent system using the protocol. Consider a multi-agent system with n agents whose states are denoted by xi (i = 1, 2, · · · , n), of which the communication is defined by an undirected connected graph G = (V, E). Min-consensus means that the agents can asymptotically reach an agreement with

lim_{t→+∞} xi(t) = χ(x) = min{x1(0), x2(0), · · · , xn(0)}, ∀i ∈ {1, 2, · · · , n},

which can be viewed as a special case of the χ consensus discussed in [116]. In leader-follower consensus, the agents in a multi-agent system are divided into two types: leader agents and follower agents. The state evolution of the follower agents is affected by the leader agents, while the state evolution of the leader agents is independent of the follower agents. For the sake of illustration, let the set of leader agents be denoted by S1, and let the set of follower agents be denoted by S2. For a multi-agent system consisting of the two types of agents with static leader agents, a min-consensus protocol was discussed in [109] as follows:

ẋi = 0, i ∈ S1,
ẋi = −xi + min_{j∈N(i)}{xj}, i ∈ S2.    (4.1)

Accordingly, a min-consensus protocol for multi-agent systems without leader agents can be given as

ẋi = −xi + min_{j∈N(i)}{xj}, ∀i ∈ V.

It should be noted that the max-consensus protocol has a similar structure to the min-consensus protocol, for which some recent discussions are available in [110–112] and the references therein. Another commonly encountered consensus type is average consensus, which can also be viewed as a special type of χ consensus [116]. In average consensus,


4 Continuous-Time Biased Min-Consensus

we need the agent states to have a steady state as follows:

lim_{t→+∞} xi(t) = χ(x) = Σ_{i=1}^{n} xi(0)/n, ∀i ∈ V.

To achieve this goal, we can adopt the following consensus protocol:

ẋi = −Σ_{j∈N(i)} wij (xi − xj), ∀i ∈ V.

Average consensus has been extensively studied [113–115]. As seen from (4.1), in the multi-agent system, the state values of the leader agents remain constant, while the states of the follower agents dynamically change according to the state information of neighboring agents received by local communication. We consider a modified version of the min-consensus protocol (4.1), where the state information received by a follower agent i from a neighboring agent j is changed from xj to xj + wij, by which we have the corresponding min-consensus protocol with a biased term wij:

ẋi = 0, i ∈ S1,
εẋi = −xi + min_{j∈N(i)}{xj + wij}, i ∈ S2,    (4.2)

in which ε ∈ R with ε > 0 denotes a coefficient to adjust the convergence rate of the agent dynamics. Based on the model of the continuous-time biased min-consensus protocol (4.2), we have some explanations about its properties.

Remark 4.1 As seen from the continuous-time biased min-consensus protocol (4.2), the evolution of an agent state depends only on the difference between its own state and the minimum of a term involving the neighboring states and the biased term. Thus, the continuous-time biased min-consensus protocol (4.2) is distributed.
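Since protocol (4.2) is an ordinary differential equation, its behavior is easy to probe numerically. The following sketch integrates (4.2) with the forward Euler method on a made-up three-node graph; the graph, weights, initial states, and parameter values are illustrative assumptions, not taken from the chapter's examples:

```python
# Forward-Euler sketch of the biased min-consensus protocol (4.2).
# Node 0 is the single leader (destination); edge weights play the
# role of the bias terms w_ij. All numbers here are made up.
graph = {
    0: {1: 1.0, 2: 4.0},
    1: {0: 1.0, 2: 1.5},
    2: {0: 4.0, 1: 1.5},
}
leaders = {0}

def simulate(graph, leaders, x0, eps=1e-2, dt=1e-3, steps=20000):
    """Integrate (4.2): eps * dx_i/dt = -x_i + min_j {x_j + w_ij}."""
    x = dict(x0)
    for _ in range(steps):
        xdot = {}
        for i, nbrs in graph.items():
            if i in leaders:
                xdot[i] = 0.0  # leader states stay fixed
            else:
                m = min(x[j] + w for j, w in nbrs.items())
                xdot[i] = (m - x[i]) / eps
        for i in x:
            x[i] += dt * xdot[i]
    return x

x = simulate(graph, leaders, {0: 0.0, 1: 5.0, 2: -3.0})
# Steady state: x[1] -> 1.0 and x[2] -> 2.5, regardless of the
# (here arbitrary) initial states of the followers.
```

Note that node 2 settles at 2.5 (via node 1) rather than at its direct edge weight 4.0, which previews the shortest-path interpretation developed in Sect. 4.4.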

4.3 Theoretical Analysis

In this section, we theoretically analyze the convergence property of the continuous-time biased min-consensus protocol (4.2). Different from the widely adopted approaches for the analysis of consensus protocols, we do not use the properties of graph Laplacian matrices to facilitate the theoretical analysis, due to the existence of the biased term wij and the min function in the continuous-time biased min-consensus protocol (4.2). Before moving to the main results, we provide some notations for the sake of readability. Specifically, the right-hand side of the biased min-consensus protocol (4.2) is represented by ei, i.e., ei = 0, ∀i ∈ S1, and

ei = min_{j∈N(i)}{xj + wij} − xi, ∀i ∈ S2.

The upper bound of the term ei is denoted by

ē = max_{i∈V}{ei},    (4.3)

and the set of agents whose ei reach the upper bound is represented by

S̄ = arg max_{i∈V}{ei}.    (4.4)

Accordingly, in terms of the lower bound of ei, we have

e = min_{i∈V}{ei},    (4.5)

and the set of agents whose ei reach the lower bound is represented by

S = arg min_{i∈V}{ei}.    (4.6)

We use the symbol ∅ to denote an empty set that contains no element. We also define the set of parent agents for an agent i defined in the same graph:

P(i) = ∅, i ∈ S1,
P(i) = arg min_{j∈N(i)}{xj + wij}, i ∈ S2.    (4.7)

It is obvious that, in an undirected connected graph, we have P(i) ≠ ∅, ∀i ∈ S2. Now, we are ready to present the theoretical results about the properties of the protocol (4.2).

Lemma 4.1 The upper bound ē of the continuous-time biased min-consensus protocol (4.2), which is mathematically defined in Eq. (4.3), is monotonically non-increasing.


Proof We first revisit the definition of ei as follows:

ei = min_{j∈N(i)}{xj + wij} − xi, ∀i ∈ S2,

and recall the definition of P(i) in Eq. (4.7). In light of the generalized derivative of Clarke [118–120], the time derivative of the right-hand side of the agent dynamics is given as follows:

ėi = Σ_{j∈P(i)} λj ẋj − ẋi, ∀i ∈ S2,

where λj simultaneously satisfies 0 ≤ λj ≤ 1 and Σ_{j∈P(i)} λj = 1. Thus, for every i ∈ S2, the following equation holds:

ėi = Σ_{j∈P(i)} λj ẋj − Σ_{j∈P(i)} λj ẋi = Σ_{j∈P(i)} λj (ẋj − ẋi) = Σ_{j∈P(i)} (λj/ε)(ej − ei).

Meanwhile, from the continuous-time biased min-consensus protocol (4.2), we know that ėi = 0, ∀i ∈ S1. Similarly, for the upper bound ē, the following equation holds:

dē/dt = Σ_{i∈S̄} δi ėi,

where δi simultaneously satisfies the inequality 0 ≤ δi ≤ 1 and the equation Σ_{i∈S̄} δi = 1. Here, we can divide the set S̄ into two subsets in the following manner: S̄ = S̄ ∩ V = S̄ ∩ (S1 ∪ S2) = (S̄ ∩ S1) ∪ (S̄ ∩ S2). Then, we have

dē/dt = Σ_{i∈S̄∩S1} δi ėi + Σ_{i∈S̄∩S2} δi ėi.

It is worth pointing out that Σ_{i∈S̄∩S1} δi ėi = 0 since ėi = 0, ∀i ∈ S1. Thus,

dē/dt = Σ_{i∈S̄∩S2} δi ėi = Σ_{i∈S̄∩S2} Σ_{j∈P(i)} (δi λj/ε)(ej − ei).    (4.8)

Because i ∈ S̄, one has ei = ē ≥ ej, i.e., ej − ei ≤ 0, ∀j ∈ V. Recall the fact that λj ≥ 0, δi ≥ 0 and ε > 0. We further have dē/dt ≤ 0, by which it is concluded that the upper bound ē is monotonically non-increasing.



Lemma 4.2 The lower bound e of the continuous-time biased min-consensus protocol (4.2), which is mathematically defined in Eq. (4.5), is monotonically non-decreasing.

Proof The proof can be completed by following the steps in the proof of Lemma 4.1.


Lemma 4.3 Consider the continuous-time biased min-consensus protocol (4.2). If t → +∞, then we have

P(i) ⊂ S̄, ∀i ∈ S̄,

where P(i) and S̄ are defined in Eqs. (4.7) and (4.4), respectively.

Proof We first revisit the definitions of the upper bound and the lower bound, i.e., ē = max_{i∈V}{ei} and e = min_{i∈V}{ei}. According to Lemma 4.1, we know that dē/dt ≤ 0, by which we have ē(t) ≤ ē(0), ∀t ≥ 0. Meanwhile, according to Lemma 4.2, we have e(t) ≥ e(0), ∀t ≥ 0. Thus, the following inequality holds:

e(0) ≤ e(t) ≤ ē(t) ≤ ē(0),

by which we conclude that the upper bound ē is monotonically non-increasing and lower-bounded. As a result, there exists a limit of the upper bound, i.e.,

lim_{t→+∞} ē(t) = c1,

in which c1 ∈ R denotes a constant that satisfies the inequality e(0) ≤ c1 ≤ ē(0) [121]. According to Lemma 4.1, we also know that

dē/dt = Σ_{i∈S̄∩S2} Σ_{j∈P(i)} (δi λj/ε)(ej − ei) = Σ_{i∈S̄∩S2} Σ_{j∈P(i)} (δi λj/ε)(ej − ē).    (4.9)

Now we move on to find the largest invariant set and use the invariance principle to analyze the steady-state behavior [122]. Consider dē/dt = 0, by which and (4.9) we have

Σ_{i∈S̄∩S2} Σ_{j∈P(i)} δi λj (ej − ē)/ε = 0.

Recalling that ej ≤ ē, ∀j ∈ V, we can further conclude that ej = ē, i.e., j ∈ S̄, ∀j ∈ P(i), ∀i ∈ S̄. According to the invariance principle for system dynamics with non-smooth right-hand sides [122], we can conclude that

lim_{t→+∞} ej(t) = lim_{t→+∞} ē(t) = c1.

As a result,

lim_{t→+∞} (ej(t) − ē(t)) = 0, ∀j ∈ P(i), ∀i ∈ S̄.

By referring to (4.9), we further know that lim_{t→+∞} dē(t)/dt = 0. According to the above analysis, we are able to conclude that P(i) ⊂ S̄, ∀i ∈ S̄, if t → +∞.




Lemma 4.4 Consider the continuous-time biased min-consensus protocol (4.2). If t → +∞, then we have

P(i) ⊂ S, ∀i ∈ S,

where P(i) and S are defined in Eqs. (4.7) and (4.6), respectively.

Proof The proof can be completed by following the steps in the proof of Lemma 4.3.

Lemma 4.5 Consider a multi-agent system defined on an undirected connected graph G with the agent dynamics given by (4.2). The state value xi is upper bounded, ∀t > 0, ∀i ∈ V.

Proof From Lemma 4.1, it is known that dē/dt ≤ 0, i.e.,

ē(t) = max_{i∈V}{−xi(t) + min_{j∈N(i)}{xj(t) + wij}} ≤ ē(0), ∀t > 0.

Thus, we have

xk + wik − xi ≤ ē(0), ∀k ∈ P(i), ∀i ∈ S2.

From Lemma 4.2, we know that de/dt ≥ 0, ∀t > 0. Recalling dē/dt ≤ 0, it is further concluded that

e(0) ≤ ei(t) ≤ ē(0), ∀t > 0.

It should be noted that

ei = −xi + min_{j∈N(i)}{xj + wij} = −xi + xk + wik,

where k ∈ P(i). As a result, e(0) ≤ −xi + xk + wik ≤ ē(0), i.e.,

xi ≤ −e(0) + xk + wik,

where k ∈ P(i). Meanwhile, for the set P(i), the following inequality holds:

xk + wik ≤ xj + wij, ∀j ∈ N(i), ∀k ∈ P(i).

Thus,

xi ≤ −e(0) + xj + wij, ∀j ∈ N(i).    (4.10)

Given that the graph is undirected connected, there exists a path from a node i1 ∈ S1 to any other node iη ∈ S2. Let us assume that there are η (η ≥ 2) nodes, including node i1 and node iη, in the path. Because we have ẋi = 0, ∀i ∈ S1, by referring to (4.10), we further have

xiη ≤ −e(0)(η − 1) + max_{i∈S1}{xi(0)} + (η − 1) max_{(i,j)∈E}{wij}.

As a result, the following inequality is derived:

xi ≤ −e(0)(η − 1) + max_{i∈S1}{xi(0)} + (η − 1) max_{(i,j)∈E}{wij}, ∀i ∈ V,

meaning that the state variable xi is upper bounded.



Lemma 4.6 Consider a multi-agent system defined on an undirected connected graph G with the agent dynamics given as (4.2). If t → +∞, then S ∩ S1 ≠ ∅, where the set S is defined in (4.6) and S1 denotes the set of leader agents.

Proof According to Lemmas 4.1 and 4.2, by referring to the definitions of ē and e, the following inequality holds:

ē(0) ≥ ē(t) ≥ ei(t) ≥ e(t) ≥ e(0), ∀i ∈ V,


and de(t)/dt ≥ 0, ∀t ≥ 0. By following the proof of Lemma 4.3, we have

lim_{t→+∞} de(t)/dt = 0,

meaning that ei(t) = e(t), ∀i ∈ S, as t → +∞. Moreover, since ej = 0, ∀j ∈ S1, the definition of e gives e ≤ 0, and hence ei ≤ 0, ∀i ∈ S. In other words,

−xi + min_{j∈N(i)}{xj + wij} ≤ 0,

if t → +∞. Let us assume S ∩ S1 = ∅ if t → +∞, by which P(i) ≠ ∅, ∀i ∈ S, in view of the definition of P(i) in (4.7) for the multi-agent system defined on an undirected connected graph. Meanwhile, on the basis of Lemma 4.4, P(i) ⊂ S, ∀i ∈ S, if t → +∞. Thus, the following inequality holds:

xi(t) ≥ min_{j∈N(i)}{xj(t) + wij} > xk(t), ∀k ∈ P(i) ⊂ S, ∀i ∈ S,    (4.11)

if t → +∞. Consider

xm = lim_{t→+∞} min_{i∈S}{xi(t)},

for which we have the following inequality:

xm ≤ xk(t), ∀k ∈ S,    (4.12)

if t → +∞. According to (4.11), if t → +∞, then xi(t) > xk(t), ∀k ∈ P(i) ⊂ S, yielding a contradiction with the inequality (4.12). As a result, we conclude that S ∩ S1 ≠ ∅ if t → +∞.



On the basis of the above lemmas about the properties of the continuous-time biased min-consensus protocol (4.2), we show the main result about its convergence in the following theorem.

Theorem 4.1 Consider a multi-agent system of which the communication is defined by the undirected connected graph G and of which the agent dynamics follow the continuous-time biased min-consensus protocol (4.2). The agent states are globally asymptotically convergent to x∗ satisfying the following set of equations:

xi∗ = xi(0), i ∈ S1,
xi∗ = min_{j∈N(i)}{xj∗ + wij}, i ∈ S2.    (4.13)

Proof In view of Lemmas 4.1 and 4.2, one has

lim_{t→+∞} ē(t) = c1 and lim_{t→+∞} e(t) = c2,

where c1 ∈ R and c2 ∈ R are constants. Because the graph is undirected connected, according to Lemma 4.6, we have S ∩ S1 ≠ ∅ if t → +∞. Thus, there exists an agent with label i satisfying i ∈ (S1 ∩ S). Because ej = 0, ∀j ∈ S1, according to the definition of e, i.e., (4.5), one has

lim_{t→+∞} e(t) = 0.

From Eq. (4.3), it is known that ē ≥ 0, since ej = 0, ∀j ∈ S1. Meanwhile, according to the model of the continuous-time biased min-consensus protocol, i.e., (4.2), we can readily derive

εẋi = ei = ē ≥ 0, ∀i ∈ S̄.


It should be noted that the following result has been proven previously:

lim_{t→+∞} e(t) = 0.

Thus, ei ≥ 0, ∀i ∈ V, if t → +∞. Consequently,

ε Σ_{i∈V} ẋi ≥ |S̄| lim_{t→+∞} ē(t),

in which the term |S̄| stands for the number of elements of the set S̄. Obviously, the term Σ_{i∈V} xi will increase and become unbounded if lim_{t→+∞} ē(t) > 0, leading to a contradiction with Lemma 4.5, which says that xi is bounded. As a result, we have

lim_{t→+∞} ē(t) = 0.

Based on the above analysis, it is clear that

lim_{t→+∞} ei(t) = 0, ∀i ∈ V.

Then, we also have

lim_{t→+∞} ẋi(t) = 0, ∀i ∈ V.

Therefore, the agent states are globally asymptotically convergent to x∗ satisfying Eq. (4.13).

4.4 Equivalence to Shortest Path Planning

Consensus is a simple evolutionary process, while shortest path planning is a complicated process that is associated with higher intelligence. Traditionally, there has been a clear gap between the two, which makes their equivalence counter-intuitive. In this part, the relationship between consensus and shortest path planning is established by using the continuous-time biased min-consensus protocol (4.2). The relationship between the consensus of multi-agent systems and the process of finding a shortest path in undirected connected graphs can be explained in the following manner. The state value of an agent is the path length from the agent (represented by a node) on the graph to another one, i.e., the destination node. The state value of each destination node is kept at 0, and the follower nodes are the source nodes of the shortest path problem, i.e., the destination node set corresponds to S1, and the source node set corresponds to S2. In addition, if there exists an edge between two


nodes on the graph, the corresponding agents can communicate with each other. Let the length of the edge between node i and node j be represented by wij. Then, we can adopt the continuous-time biased min-consensus protocol (4.2) to let the agents share state information with their neighboring agents. In the following, we present a theorem about the equivalence between the steady state of the continuous-time biased min-consensus protocol (4.2) and the solution to the corresponding shortest path problem.

Theorem 4.2 Consider the continuous-time biased min-consensus protocol (4.2) over an undirected connected graph. If the initial states of the agents are set to xi(0) = 0, ∀i ∈ S1, then, at the steady state, the state values of the agents constitute the solution to the corresponding shortest path problem.

Proof From Theorem 4.1, the equilibrium x∗ of the agent dynamics is asymptotically stable, which satisfies (4.13). On the basis of the optimality principle of dynamic programming [123], the solution to the shortest path problem satisfies

xi = 0, i ∈ S1,
xi = min_{j∈N(i)}{xj + wij}, i ∈ S2.    (4.14)

Besides, according to [123], if the shortest path problem is solvable, then the uniqueness of the solution of (4.14) is theoretically guaranteed. According to Theorem 4.1, if we set xi(0) = 0, ∀i ∈ S1, then at the steady state, the state values of the agents satisfy the nonlinear equations (4.14), which completes the proof. On the basis of the above two theorems, we have the following remarks.

Remark 4.2 According to Eq. (4.14), given the steady-state variables of the multi-agent system with the continuous-time biased min-consensus protocol (4.2), we can derive a shortest path by repeatedly searching for parent nodes.

Remark 4.3 On the basis of the proofs of Theorems 4.1 and 4.2, the initial states of the agents do not affect the state values of the participating agents at the steady state. Thus, in practice, the initial values of the state variables can be randomly set.
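The parent-following procedure of Remark 4.2 can be sketched as follows; the three-node graph and its steady-state values are made-up illustrations (any values satisfying Eq. (4.14) would do):

```python
# Recover a shortest path from steady-state values satisfying (4.14)
# by repeatedly moving to a parent node, i.e., a member of P(i) in
# Eq. (4.7). The graph and values below are illustrative assumptions.
def extract_path(x, graph, start, leaders):
    path = [start]
    while path[-1] not in leaders:
        i = path[-1]
        # any argmin of x[j] + w_ij is a parent; ties are all optimal
        parent = min(graph[i], key=lambda j: x[j] + graph[i][j])
        path.append(parent)
    return path

graph = {0: {1: 1.0, 2: 4.0}, 1: {0: 1.0, 2: 1.5}, 2: {0: 4.0, 1: 1.5}}
x_star = {0: 0.0, 1: 1.0, 2: 2.5}  # satisfies Eq. (4.14) with S1 = {0}
path = extract_path(x_star, graph, 2, {0})  # [2, 1, 0]
```

Each step moves to a neighbor that attains the minimum in (4.14), so the walk descends the path-length values and terminates at a destination node.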

4.5 Simulations and Applications

To evaluate the performance of the continuous-time biased min-consensus protocol (4.2) over undirected connected graphs, we perform simulations, and the results are discussed in this section. We also present two potential applications, i.e., the navigation of a mobile robot in a maze and complete coverage by mobile robots, the latter being the technical basis of cleaning robots.


4.5.1 Illustrative Example

To evaluate the performance of the continuous-time biased min-consensus protocol (4.2), we first use the graph with ten agents shown in Fig. 4.1, in which the destination node is node 1. The parameter of the continuous-time biased min-consensus protocol (4.2) is set to ε = 10^{-6}, and the initial states of the agents are randomly generated. As seen from Fig. 4.2a, with the continuous-time biased min-consensus protocol (4.2), the state variables of the agents are convergent. Figure 4.2b shows the steady-state values of the agents, which correspond to the lengths of the shortest paths from the corresponding nodes on the graph to the destination node. According to the steady-state result shown in Fig. 4.2b and Remark 4.2, we are able to calculate the corresponding shortest paths in a recursive manner. For instance, for node 10, the shortest paths are 10 → 8 → 3 → 6 → 5 → 1 and 10 → 8 → 9 → 4 → 6 → 5 → 1, which bear the shortest length as calculated directly based on the graph

Fig. 4.1 An undirected connected graph consisting of ten nodes, where node 1 is set as the destination node

Fig. 4.2 Simulation results with respect to the continuous-time biased min-consensus protocol (4.2) over the undirected connected graph shown in Fig. 4.1. (a) Transient values of xi. (b) Steady-state values of xi


shown in Fig. 4.1. The simulation results verify the effectiveness of the continuous-time biased min-consensus protocol (4.2) and are consistent with the theoretical results.

4.5.2 Application to Maze Solving

In this subsection, we are concerned with the potential of the continuous-time biased min-consensus protocol (4.2) for the navigation of a mobile robot in a maze environment. The maze is depicted in Fig. 4.3, where the position with a rectangle denotes the initial position of the mobile robot, and the positions marked by circles represent feasible destination positions. The task of the mobile robot is to go from the initial position to one of the feasible destination positions via a shortest path. The maze map consists of 254 × 254 pixels and is converted into an undirected connected graph of 43,826 nodes. To adopt the continuous-time biased min-consensus protocol (4.2), the feasible destination nodes are viewed as static leader nodes, while the other nodes in the free space are viewed as follower nodes. The parameter of the continuous-time biased min-consensus protocol (4.2) is set to ε = 10^{-4}, and the transient states of the agents are shown in Fig. 4.4.

Fig. 4.3 Initial position of the mobile robot in the maze and the three locations of the feasible destinations



Fig. 4.4 The transient state values of the agents corresponding to the nodes shown in the graph depicted in Fig. 4.3, during the state evolution process when the continuous-time biased minconsensus protocol (4.2) is used. (a) t = 0. (b) t = 0.01. (c) t = 0.02. (d) t = 0.03. (e) t = 0.04. (f) t = 0.05


The transient node/agent state values evolve in the following manner. At the beginning, the state values of the follower agents are randomly generated and the state values of the target agents are 0, so the initial transient state figure looks unordered. Before reaching the steady state, the state values of the nodes far away from the target positions remain almost the same as their initial values, because the information from the leader nodes has not yet been passed to them. Then, with the delivery of state information under the continuous-time biased min-consensus protocol (4.2), the state variables gradually converge and reach the equilibrium state. From this point of view, the continuous-time biased min-consensus protocol (4.2) forces the state variables to establish a gradient of shortest path lengths from any location to the target locations. Intuitively, the shortest path from any position to a target position is the path of steepest descent of this gradient. The shortest path generated by the continuous-time biased min-consensus protocol (4.2) is shown in Fig. 4.5. The simulation results show the potential of the continuous-time biased min-consensus protocol (4.2) for robot navigation tasks in complex environments with a given map.
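The gradient interpretation above can be reproduced on a toy grid. For brevity, the sketch below iterates the steady-state equation (4.14) directly (a simple discrete-time stand-in for integrating protocol (4.2) to convergence) on a made-up maze with unit edge weights, and then follows the steepest descent of the resulting values:

```python
# Build the shortest-path-length "gradient" on a 4-connected grid and
# descend it. The maze layout is a made-up example; '#' is an obstacle.
maze = ["....#",
        ".##.#",
        "....."]
goal = (0, 0)  # single leader node with state fixed at 0

INF = float("inf")
free = {(r, c) for r, row in enumerate(maze)
        for c, ch in enumerate(row) if ch == "."}
x = {p: (0.0 if p == goal else INF) for p in free}

def neighbors(p):
    r, c = p
    return [q for q in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)] if q in free]

# Iterate x_i <- min_j {x_j + 1} until the values settle; len(free)
# sweeps always suffice on a connected grid.
for _ in range(len(free)):
    for p in free:
        if p != goal:
            x[p] = min(x[q] + 1.0 for q in neighbors(p))

# Follow the value gradient from a start cell down to the goal.
path, p = [(2, 4)], (2, 4)
while p != goal:
    p = min(neighbors(p), key=lambda q: x[q])
    path.append(p)
# x[(2, 4)] is the shortest path length from (2, 4) to the goal
```

The descent never gets stuck: by (4.14), every non-goal cell has a neighbor whose value is exactly one less, so each step strictly decreases the value until the goal is reached.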

Fig. 4.5 Robot navigation result by using the continuous-time biased min-consensus protocol (4.2)


4.5.3 Application to Complete Coverage

The complete coverage problem is a fundamental issue in the technique of cleaning robots, in which a mobile robot needs to pass through all the free space in the environment [124]. In light of its significance, in this subsection, we show how the continuous-time biased min-consensus protocol (4.2) could be useful for solving the complete coverage problem. The complete coverage problem is regarded as a variant of the shortest path problem here. Specifically, we can view each pixel in the free space as a node of the corresponding undirected connected graph. In this case, the set S1 corresponds to the set of positions that still need to be visited by the mobile robot, while the set S2 corresponds to the set of visited positions. Let p(t) stand for the position of the mobile robot at time t. The complete coverage problem can be solved by the following steps:

1. Find a shortest path from the current position p(t) of the mobile robot to the positions in set S1 by using the continuous-time biased min-consensus protocol (4.2), and force the mobile robot to move along the path to the end of the path. In the process of movement, the nodes corresponding to the positions that the mobile robot has passed are removed from the set S1;
2. If S1 contains no element, stop; otherwise, go to step 1.

To evaluate the performance of the above method for complete coverage, we consider the example shown in Fig. 4.6. The environment, containing free space and

Fig. 4.6 The application result of the continuous-time biased min-consensus protocol (4.2) for the complete coverage task of a mobile robot, where the starting location of the mobile robot is marked by a rectangle and the areas to be covered by the robot are marked in green. (a) Initial state. (b) Coverage result


obstacles, as well as the starting location of the mobile robot, is depicted in Fig. 4.6a. Specifically, the free space in the environment is marked with small circles, while the starting location of the mobile robot is marked with a rectangle. The coverage result is demonstrated in Fig. 4.6b, from which we can easily find that the task has been finished with relatively good performance, meaning that only a small portion of the free space has been visited by the mobile robot more than once. These results further demonstrate the potential of the continuous-time biased min-consensus protocol (4.2) for the complete coverage task of cleaning robots.
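The two-step loop above can be sketched as follows. Since all edge weights on a pixel grid are identical, a plain breadth-first search stands in here for the consensus-based shortest path planner; the 3 × 3 toy map is a made-up assumption:

```python
# Coverage loop: repeatedly plan a shortest path from the robot's
# position to the nearest not-yet-visited cell, move along it, and
# remove covered cells from S1. BFS plays the planner's role because
# all edge weights are 1. The 3x3 map is a made-up example.
from collections import deque

free = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}

def bfs_path(start, targets, free):
    """Shortest path (unit weights) from start to any cell in targets."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        p = frontier.popleft()
        if p in targets:
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        r, c = p
        for q in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]:
            if q in free and q not in prev:
                prev[q] = p
                frontier.append(q)
    return None

pos = (0, 0)
to_visit = set(free) - {pos}   # the set S1 of positions still to cover
route = [pos]
while to_visit:                # step 2: stop once S1 is empty
    path = bfs_path(pos, to_visit, free)
    for p in path[1:]:         # step 1: move along the planned path
        to_visit.discard(p)
        route.append(p)
    pos = path[-1]
# route now visits every free cell at least once
```

Each iteration removes at least one cell from the to-visit set, so the loop terminates, and the resulting route covers all free cells while revisits occur only when the robot must backtrack through already-covered space.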

4.6 Summary

In this chapter, we have discussed a modified continuous-time min-consensus protocol with a biased term, which is referred to as the continuous-time biased min-consensus protocol. The interesting properties of the protocol show that a simple state evolution of a multi-agent system defined on a graph can lead to complicated behavior, i.e., shortest path finding in the corresponding undirected connected graph, prompting us to rethink the link between low-level and higher-level intelligence. We have evaluated the performance of the continuous-time biased min-consensus protocol via theoretical analysis and computer simulations, where two application scenarios have also been discussed.

References

1. T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein, Introduction to Algorithms (MIT Press, London, 1990)
2. Y. Lu, X. Huo, P. Tsiotras, A beamlet-based graph structure for path planning using multiscale information. IEEE Trans. Autom. Control 57(5), 1166–1178 (2012)
3. S. Ma, K. Feng, J. Li, H. Wang, G. Cong, J. Huai, Proxies for shortest path and distance queries. IEEE Trans. Knowl. Data Eng. 28(7), 1835–1850 (2016)
4. J. Ulen, P. Strandmark, F. Kahl, Shortest paths with higher-order regularization. IEEE Trans. Pattern Anal. Mach. Intell. 37(12), 2588–2600 (2015)
5. T.V. Hoai, P.T. An, N.N. Hai, Multiple shooting approach for computing approximately shortest paths on convex polytopes. J. Comput. Appl. Math. 317, 235–246 (2017)
6. L. Fu, D. Sun, L.R. Rilett, Heuristic shortest path algorithms for transportation applications: state of the art. Comput. Oper. Res. 33(11), 3324–3343 (2006)
7. X. Lu, M. Camitz, Finding the shortest paths by node combination. Appl. Math. Comput. 217(13), 6401–6408 (2011)
8. A.W. Mohemmed, N.C. Sahoo, T.K. Geok, Solving shortest path problem using particle swarm optimization. Appl. Soft Comput. 8(4), 1643–1653 (2008)
9. G.E. Jan, C.C. Sun, W.C. Tsai, T.H. Lin, An O(nlogn) shortest path algorithm based on Delaunay triangulation. IEEE/ASME Trans. Mechatron. 19(2), 660–666 (2014)
10. R. Carli, S. Zampieri, Network clock synchronization based on the second-order linear consensus algorithm. IEEE Trans. Autom. Control 59(2), 409–422 (2014)


11. L. Jin, S. Li, B. Hu, M. Liu, A survey on projection neural networks and their applications. Appl. Soft Comput. 76, 533–544 (2019) 12. B. Liao, Q. Xiang, S. Li, Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 325, 234–241 (2019) 13. P.S. Stanimirovic, V.N. Katsikis, S. Li, Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 329, 129–143 (2019) 14. Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329, 255–266 (2019) 15. L. Xiao, K. Li, Z. Tan, Z. Zhang, B. Liao, K. Chen, L. Jin, S. Li, Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 142, 35–40 (2019) 16. D. Chen, S. Li, Q. Wu, Rejecting chaotic disturbances using a super-exponential-zeroing neurodynamic approach for synchronization of chaotic sensor systems. Sensors 19(1), 74 (2019) 17. Q. Wu, X. Shen, Y. Jin, Z. Chen, S. Li, A.H. Khan, D. Chen, Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 19(8), 1758 (2019) 18. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 19. Z. Zhang, S. Chen, S. Li, Compatible convex-nonconvex constrained QP-based dual neural networks for motion planning of redundant robot manipulators. IEEE Trans. Control Syst. Technol. 27(3), 1250–1258 (2019) 20. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 21. L. Jin, S. Li, B. Hu, M. Liu, J. Yu, A noise-suppressing neural algorithm for solving the timevarying system of linear equations: a control-based approach. IEEE Trans. 
Ind. Inf. 15(1), 236–246 (2019) 22. Y. Li, S. Li, B. Hannaford, A model-based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inf. 15(4), 2054–2063 (2019) 23. L. Xiao, S. Li, F. Lin, Z. Tan, A.H. Khan, Zeroing neural dynamics for control design: comprehensive analysis on stability, robustness, and convergence speed. IEEE Trans. Ind. Inf. 15(5), 2605–2616 (2019) 24. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, X. Liu, Reconfigurable battery systems: a survey on hardware architecture and research challenges. ACM Trans. Des. Autom. Electron. Syst. 24(2), 19:1–19:27 (2019) 25. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 26. L. Jin, S. Li, H. Wang, Z. Zhang, Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl. Soft Comput. 62, 840–850 (2018) 27. M. Liu, S. Li, X. Li, L. Jin, C. Yi, Z. Huang, Intelligent controllers for multirobot competitive and dynamic tracking. Complexity 2018, 4573631:1–4573631:12 (2018) 28. D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 29. L. Jin, S. Li, J. Yu, J. He, Robot manipulator control using neural networks: a survey. Neurocomputing 285, 23–34 (2018) 30. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 31. P.S. Stanimirovic, V.N. Katsikis, S. Li, Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 316, 124–134 (2018)



Chapter 5

Discrete-Time Biased Min-Consensus

5.1 Introduction

Consensus includes but is not limited to min-consensus, max-consensus, and average consensus, and is very useful in applications across different fields [1–11]. Consensus can be categorized into leader-follower consensus and leaderless consensus according to whether there is a leader [12–14]. In light of the widely adopted digital implementation of consensus protocols, in this chapter we are concerned with discrete-time consensus, in which the agent dynamics are modeled by difference equations. In the past decades, many results have been reported on the design and analysis of distributed discrete-time protocols that take into account different factors that may affect the consensus behavior, such as time delay [15], measurable states [16], nonlinearity in the data transmission [17], and the saturation of control inputs [18]. Some typical applications of discrete-time consensus protocols are utility maximization in sensor networks [19], frequency control in microgrids [20], and even the detection of whether average consensus has been reached [21]. While there is a large amount of literature on the design and analysis of discrete-time consensus as well as on application issues [14, 19–21], the relationship between discrete-time consensus and shortest path finding is seldom discussed [22]. In this chapter, we discuss a modified discrete-time min-consensus protocol obtained by adding a biased term, which is referred to as the discrete-time biased min-consensus protocol and is convergent in finite time. Based on this protocol, we obtain a distributed shortest path finding algorithm. A brief comparison of the algorithm with existing algorithms is given in Table 5.1. As mentioned in the previous chapters, distributed algorithms are more robust and more favorable when handling large-scale problems [28]. The presented algorithm is also applied to maze solving [29] and complete coverage [30].
The rest of this chapter is organized as follows. In Sect. 5.2, we briefly review the discrete-time min-consensus protocol, based on which the design

© Springer Nature Singapore Pte Ltd. 2020
Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_5


Table 5.1 Comparisons of different shortest path finding algorithms

Algorithm                  Type         Convergence  Destinations  Path
This chapter               Distributed  ✓            Multiple      Shortest
Dijkstra algorithm [23]    Centralized  ✓            One           Shortest
Genetic algorithm [24]     Centralized  ✓            One           NA
PSO [25]                   Centralized  ✓            One           NA
ID algorithm [26]          Centralized  ✓            One           Near-shortest
CPCNN [27]                 Centralized  ✓            One           NA

and analysis of the discrete-time biased min-consensus protocol are presented. Then, the algorithms for maze solving and complete coverage are illustrated in Sect. 5.3. To evaluate the performance of the discrete-time biased min-consensus protocol, the computer simulation results are discussed in Sect. 5.4, followed by two applications discussed in Sect. 5.5. Then, we summarize this chapter in Sect. 5.6.

5.2 Consensus Protocols

In this section, we discuss the design and analysis of the discrete-time biased min-consensus protocol. We are concerned with an undirected connected graph G = (V, E) consisting of n nodes with V = {1, 2, · · · , n}. Let t_0 < t_1 < t_2 < · · · denote the time instants at which the state update of the nodes in the graph is performed. Let x_i^k denote the state variable of node i at time t_k. As mentioned in the earlier chapter, the goal of min-consensus is to make the following equation hold:

lim_{k→+∞} x_i^k = min_{j∈V} {x_j^0}, ∀i ∈ V,

in which x_i^0 denotes the initial value of the state variable of node i. In this chapter, we also need the definition of finite-time convergence, which, for the discrete-time min-consensus, means that the min-consensus can be reached after a finite number of state updates [31]. Compared with asymptotic convergence, the finite-time one is evidently more favorable. We adopt the framework of leader-follower consensus, in which two types of nodes are involved, i.e., leader nodes and follower nodes. Let the sets of leader nodes and follower nodes be denoted by S and V − S, respectively. Then, a discrete-time min-consensus protocol with a leader is given as follows:

x_i^{k+1} = x_i^k, i ∈ S,
x_i^{k+1} = min_{j∈N(i)} {x_j^k}, i ∈ V − S.        (5.1)
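To make the update rule concrete, here is a minimal simulation sketch of protocol (5.1), assuming the graph is given as a neighbor (adjacency) list; all function and variable names are hypothetical.

```python
# Sketch of the leader-follower min-consensus protocol (5.1).
# Leaders hold their states; each follower takes the minimum neighbor state.

def min_consensus_step(x, neighbors, leaders):
    """One synchronous update of protocol (5.1)."""
    return {i: x[i] if i in leaders
            else min(x[j] for j in neighbors[i])
            for i in x}

# Example: path graph 0-1-2-3 with node 0 as the leader (S = {0}).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = {0: 0.0, 1: 5.0, 2: 7.0, 3: 9.0}
for _ in range(4):
    x = min_consensus_step(x, neighbors, {0})
print(x)  # every follower state reaches the leader's value 0.0
```

Since the leader here also holds the smallest initial value, the followers reach min over the initial states, matching the min-consensus goal stated above.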


5.2.1 Protocol

In this subsection, on the basis of the discrete-time min-consensus protocol (5.1), we present the model of the biased min-consensus protocol. We consider the situation in which the information from a neighbor node j ∈ N(i) received by a follower node i is perturbed as x_j^k + w_ij, where w_ij is a biased term. Then, with synchronous update, adding the perturbation into the discrete-time min-consensus protocol (5.1) yields the synchronous discrete-time biased min-consensus protocol given as follows:

x_i^{k+1} = x_i^k, i ∈ S,
x_i^{k+1} = min_{j∈N(i)} {x_j^k + w_ij}, i ∈ V − S.        (5.2)

As seen from the discrete-time biased min-consensus protocol (5.2), the dynamics of the leader nodes are static, i.e., x_i^k = x_i^0, ∀i ∈ S. The protocol (5.2) is synchronous because the state update of each node is performed simultaneously. For the discrete-time biased min-consensus protocol (5.2), we have the following remark.

Remark 5.1 As seen from the discrete-time biased min-consensus protocol (5.2), for every node in the graph, the state update only depends on its own state and the states of its neighbor nodes. Thus, the protocol is distributed. Owing to the distributed feature, it is more tolerant to faults than centralized computing models [32–97].

Clearly, because there is a biased term w_ij in the protocol (5.2), we would not expect min-consensus to be reached. Owing to the distributed feature, the time complexity of the discrete-time biased min-consensus protocol (5.2) can be analyzed by checking the number of floating point operations (FLOPs) [98, 99] consumed by each node in each state update. One FLOP can be one addition, subtraction, multiplication, or division between two floating-point numbers. From [99], the calculation of the square root of a nonnegative real number costs six FLOPs. It is worth pointing out that the value of the min function for two real numbers a and b can be calculated by the formula min(a, b) = (a + b − √((a − b)²))/2, which consumes in total 11 FLOPs for each evaluation. Now, by referring to the discrete-time biased min-consensus protocol (5.2), we are ready to calculate the FLOPs consumed in each state update by each node. On the one hand, because the state variables of the leader nodes do not change with time, there is no state update, which costs 0 FLOPs. On the other hand, with Nmax denoting the largest number of neighboring nodes of a follower node in the graph, from (5.2), at each state update, each follower node consumes fewer than 11(Nmax − 1) + Nmax = 12Nmax − 11 FLOPs. It should be noted that Nmax is bounded and is less than the number of nodes in the graph.
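The 11-FLOP evaluation of min(a, b) rests on the identity min(a, b) = (a + b − √((a − b)²))/2, where the square root recovers |a − b|. A quick numeric check (the function name is hypothetical):

```python
import math

def flop_min(a, b):
    """min(a, b) via the identity min(a, b) = (a + b - sqrt((a - b)**2)) / 2.
    Counting the square root as 6 FLOPs, one evaluation costs 11 FLOPs:
    two additions/subtractions inside, one multiply (the square), the sqrt,
    one outer subtraction, and one division."""
    return (a + b - math.sqrt((a - b) ** 2)) / 2

print(flop_min(3.0, 7.0))   # 3.0
print(flop_min(-2.5, 1.0))  # -2.5
```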


Time delay during communication is one of the main issues in research on consensus for practical applications. The synchronous discrete-time biased min-consensus protocol (5.2) in the presence of bounded time delay is modeled as

x_i^{k+1} = x_i^k, i ∈ S,
x_i^{k+1} = min_{j∈N(i)} {x_j^{k−τ_ij} + w_ij}, i ∈ V − S,        (5.3)

in which τ_ij ∈ {1, 2, 3, · · · } scales the time delay for the communication between node i and node j. In this chapter, without loss of generality, we assume x_i^0 = x_i^{−1} = x_i^{−2} = · · · , ∀i ∈ V. The basis of synchronous communication is the synchronization of clocks, which may not be available in practice before using the consensus protocols [28, 100]. Considering this limitation, we will also analyze the effects of asynchronous communication on the behavior of the discrete-time biased min-consensus. Let U(k) ⊂ V denote the set of labels of the nodes on graph G that perform a state update at the kth time instant. Due to the asynchronism, the time instant here is only used for the sake of analysis, and there is no centralized clock when the protocol is adopted in this situation. With the above explanations about asynchronous communication, based on (5.2), an asynchronous discrete-time biased min-consensus protocol is modeled as

x_i^{k+1} = x_i^k, i ∈ V − (V − S) ∩ U(k),
x_i^{k+1} = min_{j∈N(i)} {x_j^k + w_ij}, i ∈ (V − S) ∩ U(k).        (5.4)

To guarantee the existence of an information-flow path from any node in the graph to any other node, we need the following assumption.

Assumption 5.1 ([28, 100]) There exists an integer δ ≥ 0 with which the set U(k) satisfies the following equation:

∪_{k=η}^{η+δ} U(k) = V, ∀η ∈ {0, 1, 2, · · · },

where ∪ denotes the union operation of sets.
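As a sanity check, the joint-covering condition in Assumption 5.1 can be verified for a concrete update schedule. The sketch below checks every window of δ + 1 consecutive update sets of a given finite schedule (the assumption itself quantifies over all η, so a finite schedule can only be checked over the windows it contains); all names are hypothetical.

```python
def satisfies_assumption(schedule, V, delta):
    """Check Assumption 5.1 over a finite schedule: every window of
    delta + 1 consecutive update sets U(k) must cover the node set V."""
    return all(set().union(*schedule[eta:eta + delta + 1]) == set(V)
               for eta in range(len(schedule) - delta))

# Round-robin updates over V = {1, 2, 3}: any 3 consecutive instants cover V.
V = {1, 2, 3}
round_robin = [{1}, {2}, {3}] * 4
print(satisfies_assumption(round_robin, V, delta=2))          # True
print(satisfies_assumption([{1}, {1}, {2}] * 4, V, delta=2))  # False: node 3 never updates
```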

5.2.2 Convergence Analysis

In this subsection, the convergence of the discrete-time biased min-consensus protocols is analyzed.


Theorem 5.1 If the discrete-time biased min-consensus protocol (5.2) is applied to an undirected connected graph, then the state variables are convergent to the equilibrium in finite time.

Proof Given the undirected connected graph G, when k ≥ n, for the discrete-time biased min-consensus protocol (5.2), one has

x_i^k = min_{j∈N(i)} {x_j^{k−1} + w_ij}
      = min_{j∈N(i)} {min_{m∈N(j)} {x_m^{k−2} + w_jm} + w_ij}
      = min_{j∈N(i), m∈N(j)} {x_m^{k−2} + w_jm + w_ij}        (5.5)
      = · · ·
      = min{min_{l∈S} {x_l^0 + p_il}, min_{l∈V−S} {x_l^0 + d_il}}, ∀i ∈ V − S,

in which p_il represents the length of a path of not more than k steps from node i to node l in set S, and d_il denotes the length of a walk of k steps from node i to node l in set V − S. Let

w = min_{(i,j)∈E} {w_ij},

by which we have d_il ≥ kw. Let the least initial state value be denoted by

x_0 = min_{j∈V} {x_j^0}.

One has

min_{l∈V−S} {x_l^0 + d_il} ≥ x_0 + kw, ∀i ∈ V − S.        (5.6)

It can be easily verified that there exists a k such that

x_0 + kw > max_{i∈V−S} {min_{l∈S} {x_l^0 + p_il}}.

Then, by further considering (5.6), one has

min_{l∈V−S} {x_l^0 + d_il} > min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S.

Thus, together with Eq. (5.5), the following equation is obtained:

x_i^k = min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S.


Meanwhile, from (5.2), one has x_i^k = x_i^0, ∀i ∈ S, ∀k ≥ 0. As a result, it is concluded that, within finite time instants, the state values of the discrete-time biased min-consensus protocol (5.2) are convergent to the equilibrium given as follows:

x_i^∗ = x_i^0, i ∈ S,
x_i^∗ = min_{l∈S} {x_l^0 + p_il}, i ∈ V − S,        (5.7)

and the maximum number of required time instants is

max{n, (max_{i∈V−S} {min_{l∈S} {x_l^0 + p_il}} − x_0)/w + 1},

which completes the proof. □

Based on Theorem 5.1, we have the following remark.

Remark 5.2 In view of the equilibrium (5.7) of the discrete-time biased min-consensus protocol (5.2), the state values of the leader nodes are kept unchanged, while the state values of the follower nodes depend on the lengths of the paths from the follower nodes to the leader nodes, on which the initial values of the leader nodes also have an effect.

We have the following theorem regarding the performance of the biased discrete-time min-consensus protocol under time delay.

Theorem 5.2 If the discrete-time biased min-consensus protocol (5.3) with bounded time delay is applied to an undirected connected graph, then the state variables are convergent to the equilibrium in finite time.

Proof For the undirected connected graph G = (V, E), with k ≥ n + max{τ_ij}, in view of the discrete-time time-delay biased min-consensus protocol (5.3), one has

x_i^k = min_{j∈N(i)} {x_j^{k−τ_ij−1} + w_ij}
      = min_{j∈N(i)} {min_{m∈N(j)} {x_m^{k−τ_jm−2} + w_jm} + w_ij}
      = min_{j∈N(i), m∈N(j)} {x_m^{k−τ_jm−2} + w_jm + w_ij}
      = · · ·
      = min{min_{l∈S} {x_l^0 + p_il}, min_{l∈V−S} {x_l^0 + d_il}}, ∀i ∈ V − S.


The rest of the proof can be completed by following the steps in the proof of Theorem 5.1. □

We also have the following theorem regarding the performance of the biased discrete-time min-consensus protocol under asynchronous communication.

Theorem 5.3 Given that Assumption 5.1 holds, if the discrete-time biased min-consensus protocol (5.4) with asynchronous communication is applied to an undirected connected graph, then the state variables are convergent to the equilibrium in finite time.

Proof The asynchronous discrete-time biased min-consensus protocol (5.4) is first written as

x_i^{k+1} = x_i^k, ∀i ∈ S,
x_i^{k+1} = x_i^k, ∀i ∈ (V − S) − (V − S) ∩ U(k),        (5.8)
x_i^{k+1} = min_{j∈N(i)} {x_j^k + w_ij}, ∀i ∈ (V − S) ∩ U(k).

Let s_i(k) = q, where q ∈ {0, 1, 2, · · · , k} is the smallest index such that x_i^k = x_i^{k−1} = x_i^{k−2} = · · · = x_i^q. Then, with Assumption 5.1, if k > δ, the asynchronous discrete-time biased min-consensus protocol (5.8) is further written as

x_i^{k+1} = x_i^k, ∀i ∈ S,
x_i^{k+1} = min_{j∈N(i)} {x_j^{s_j(k)} + w_ij}, ∀i ∈ V − S.        (5.9)

Because the graph G is undirected connected, if k ≥ nδ, from (5.9), one has

x_i^k = min_{j∈N(i)} {x_j^{s_j(s_j(k)−1)} + w_ij}
      = min_{j∈N(i)} {min_{m∈N(j)} {x_m^{s_m(s_j(s_j(k)−1)−1)} + w_jm} + w_ij}
      = min_{j∈N(i), m∈N(j)} {x_m^{s_m(s_j(s_j(k)−1)−1)} + w_jm + w_ij}        (5.10)
      = · · ·
      = min{min_{l∈S} {x_l^0 + p_il}, min_{l∈V−S} {x_l^0 + d_il}}, ∀i ∈ V − S,

where d_il in this case consists of at least ϕ steps, with

ϕ = arg max_{x∈{1,2,··· }} {x ≤ k/δ}.


Let

w = min_{(i,j)∈E} {w_ij}.

Then, one further has d_il ≥ ϕw because d_il has at least ϕ steps. As a result, the following inequality holds:

min_{l∈V−S} {x_l^0 + d_il} ≥ x_0 + ϕw,        (5.11)

where x_0 = min_{j∈V} {x_j^0}. If we have

k ≥ max{((max_{i∈V−S} {min_{l∈S} {x_l^0 + p_il}} − x_0)/w + 1)δ, nδ},        (5.12)

then the following inequality holds:

x_0 + (k/δ − 1)w > min_{l∈S} {x_l^0 + p_il},

by which we further have

x_0 + ϕw > min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S.

Recall that d_il ≥ ϕw. Consequently, the following inequality also holds:

x_0 + d_il > min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S.

With the aid of inequality (5.11), it is clear that

min_{l∈V−S} {x_l^0 + d_il} > min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S.


Thus, from (5.10), the following equation holds:

x_i^k = min_{l∈S} {x_l^0 + p_il}, ∀i ∈ V − S,

if k satisfies (5.12). Additionally, by Eq. (5.9), one has x_i^k = x_i^0, ∀i ∈ S, ∀k ≥ 0. By summarizing the above analysis, it is concluded that, within finite time, the state variables of the asynchronous discrete-time biased min-consensus protocol (5.4) are convergent to the equilibrium as below:

x_i^∗ = x_i^0, i ∈ S,
x_i^∗ = min_{l∈S} {x_l^0 + p_il}, i ∈ V − S,        (5.13)

and the lower bound of k is shown in (5.12). □
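The finite-time convergence under asynchronous updates can be illustrated numerically. The following is a minimal sketch of protocol (5.4) on a small weighted path graph, assuming a deterministic update schedule (including the leader, whose updates are no-ops, so that every window of two instants covers V as in Assumption 5.1 with δ = 1); all names and the example weights are hypothetical.

```python
def async_biased_min_consensus(neighbors, w, x0, leaders, schedule):
    """Protocol (5.4): at instant k, only nodes in schedule[k] update;
    leaders and inactive followers keep their states."""
    x = dict(x0)
    for active in schedule:
        prev = dict(x)  # states at the start of the instant
        for i in active:
            if i not in leaders:
                x[i] = min(prev[j] + w[(i, j)] for j in neighbors[i])
    return x

# Path graph 0-1-2 with unit weights; node 0 is the leader with x_0^0 = 0.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
x0 = {0: 0.0, 1: 10.0, 2: 10.0}
schedule = [{0, 1}, {0, 2}] * 5  # alternating single-follower updates
x = async_biased_min_consensus(neighbors, w, x0, {0}, schedule)
print(x)  # {0: 0.0, 1: 1.0, 2: 2.0}: the shortest path lengths to the leader
```

The followers settle at the equilibrium (5.13), i.e., their shortest path lengths to the zero-valued leader, despite no two followers ever updating at the same instant.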

5.2.3 Relationship with Shortest Path Planning

In this subsection, we discuss the relationship between the discrete-time biased min-consensus and the finding of shortest paths over the same graphs, which is given in the following theorem.

Theorem 5.4 For an undirected connected graph G = (V, E), if x_i^0 = 0, ∀i ∈ S, where S denotes the set of destination nodes, then the equilibrium of the synchronous discrete-time biased min-consensus protocol (5.2) constitutes the corresponding shortest paths defined on graph G.

Proof From Theorem 5.1, the stable equilibrium of the synchronous biased min-consensus protocol (5.2) is

x_i^∗ = min_{l∈S} {x_l^0 + p_il}, i ∈ V − S,
x_i^∗ = x_i^0, i ∈ S.

If x_i^0 = 0, ∀i ∈ S, then the stable equilibrium is further reduced to

x_i^∗ = min_{l∈S} {p_il}, i ∈ V − S,
x_i^∗ = 0, i ∈ S,

which gives the lengths of the shortest paths from the follower nodes to the leader nodes. □


In light of Theorem 5.4, we have the following remark about how to extract a shortest path from the equilibrium of the discrete-time biased min-consensus protocol (5.2).

Remark 5.3 According to Theorem 5.4, the equilibrium x_i^∗ of the discrete-time biased min-consensus protocol (5.2) actually records the lengths of the shortest paths from all the follower nodes to the leader nodes in set S, if x_i^0 = 0, ∀i ∈ S. Let F(i) denote the set of parent nodes of node i ∈ V, which is defined as

F(i) = ∅, i ∈ S,
F(i) = arg min_{j∈N(i)} {x_j^∗ + w_ij}, i ∈ V − S.

Then, finding a shortest path from a follower node i to a leader node j reduces to successively finding parent nodes according to the equilibrium of the discrete-time biased min-consensus protocol (5.2) and the definition of the set of parent nodes, i.e., F(i).

Based on Theorem 5.4, we have the following corollaries about the relationship between shortest path finding and the equilibrium of the consensus protocols under time delay and asynchronous communication. The proofs of the following two corollaries are omitted, as they can be derived by following the procedure of the proof of Theorem 5.4.

Corollary 5.1 For an undirected connected graph G = (V, E), if x_i^0 = 0, ∀i ∈ S, where S denotes the set of destination nodes, then the equilibrium of the synchronous discrete-time biased min-consensus protocol (5.3) with bounded time delay τ_ij constitutes the corresponding shortest paths defined on graph G.

Corollary 5.2 For an undirected connected graph G = (V, E), if x_i^0 = 0, ∀i ∈ S, where S denotes the set of destination nodes, then the equilibrium of the asynchronous discrete-time biased min-consensus protocol (5.4) constitutes the corresponding shortest paths defined on graph G.
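The extraction rule of Remark 5.3 can be sketched end to end: run protocol (5.2) to equilibrium with zero-valued destinations, then follow parent nodes until a leader is reached. A minimal example on a four-node weighted graph (all names and the example weights are hypothetical):

```python
def biased_min_consensus(neighbors, w, x0, leaders, steps):
    """Synchronous protocol (5.2): leaders hold; follower i takes
    the minimum of x_j + w_ij over its neighbors j."""
    x = dict(x0)
    for _ in range(steps):
        x = {i: x[i] if i in leaders
             else min(x[j] + w[(i, j)] for j in neighbors[i])
             for i in x}
    return x

def extract_path(i, x, neighbors, w, leaders):
    """Follow parent nodes F(i) = argmin_j {x_j* + w_ij} until a leader."""
    path = [i]
    while path[-1] not in leaders:
        node = path[-1]
        path.append(min(neighbors[node], key=lambda j: x[j] + w[(node, j)]))
    return path

# Four-node weighted graph; node 0 is the destination (leader) with x_0^0 = 0.
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
wgt = {(0, 1): 1, (1, 0): 1, (0, 2): 4, (2, 0): 4, (1, 2): 1, (2, 1): 1,
       (1, 3): 5, (3, 1): 5, (2, 3): 1, (3, 2): 1}
x = biased_min_consensus(neighbors, wgt, {0: 0, 1: 9, 2: 9, 3: 9}, {0}, steps=6)
print(x)                                        # {0: 0, 1: 1, 2: 2, 3: 3}
print(extract_path(3, x, neighbors, wgt, {0}))  # [3, 2, 1, 0]
```

Node 2's equilibrium value 2 (via node 1) beats its direct edge of length 4, and the extracted path 3 → 2 → 1 → 0 is the corresponding shortest path.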

5.3 Algorithms

In this section, on the basis of the relationship between shortest path finding and the equilibrium of the discrete-time biased min-consensus protocols, we present algorithms for solving the maze navigation problem and the complete coverage problem.


5.3.1 Maze Solving

In this subsection, we discuss the design of an algorithm based on the discrete-time biased min-consensus protocol for solving the maze problem. In maze solving, we need to search for a feasible path from a given starting location in the maze to a destination location. The maze contains both free space and obstacle space, and the latter cannot be included in the path. For the maze map given in Fig. 5.1a, if each pixel of free space is considered as a node, then any two neighboring pixels of free space correspond to two connected nodes on the graph. With this idea in mind, we can convert the map into an undirected connected graph G, which is depicted in Fig. 5.1b. The length of each edge in the undirected connected graph is either 1 or √2. Because all the nodes in the undirected graph correspond to the free positions in the maze, each path on the graph contains no obstacle.

Let the number of nodes in the obtained undirected connected graph G be denoted by n. Let xi denote the state variable of node i in the graph G with i = 1, 2, · · · , n. Let M denote the maze map. Let (pi, qi) and (pd, qd) denote the initial and destination positions on the maze map, respectively. Then, on the basis of the discrete-time biased min-consensus protocols, we have the biased min-consensus based (BMC-based) maze solving algorithm, which is given in Algorithm 1. The BMC-based maze solving algorithm consists of mainly four steps. Firstly, we convert the given maze map into an undirected connected graph G and get the labels of the initial position and the destination position. Secondly, for each node in the undirected connected graph G, we employ the discrete-time biased min-consensus protocol (5.2). Then, we constitute the shortest path on the graph from

Fig. 5.1 An example showing the conversion from a map consisting of 10 × 10 pixels, of which the white ones stand for free space and the black ones stand for obstacle area, to an undirected connected graph. (a) Map. (b) Graph
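The map-to-graph conversion described above can be sketched in a few lines: each free pixel becomes a node, horizontally or vertically adjacent free pixels are joined by edges of length 1, and diagonally adjacent free pixels by edges of length √2. The grid encoding and names below are illustrative, not from the text.

```python
# Hedged sketch of converting a binary maze map into an undirected
# weighted graph over its free pixels.
import math

def grid_to_graph(grid):
    """grid[r][c] == 0 for free space, 1 for obstacle.
    Returns adjacency lists and edge weights over free cells (r, c)."""
    rows, cols = len(grid), len(grid[0])
    free = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0}
    neighbors, weights = {p: [] for p in free}, {}
    for (r, c) in free:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) == (0, 0):
                    continue
                q = (r + dr, c + dc)
                if q in free:
                    neighbors[(r, c)].append(q)
                    # axis-aligned step: length 1; diagonal step: length sqrt(2)
                    weights[(r, c), q] = 1.0 if dr == 0 or dc == 0 else math.sqrt(2)
    return neighbors, weights

grid = [[0, 0, 1],
        [1, 0, 0]]
neighbors, weights = grid_to_graph(grid)
print(sorted(neighbors[(0, 1)]))  # [(0, 0), (1, 1), (1, 2)]
```

Obstacle pixels never become nodes, so every path on the resulting graph is obstacle-free, as noted in the text.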


Algorithm 1 BMC-based maze solving algorithm
Input: The maze map M, the initial position (pi, qi) and the destination position (pd, qd) on the map M
Output: A shortest path p without obstacles from the initial position (pi, qi) to the destination position (pd, qd)
  Convert the maze map M into an undirected connected graph G = (V, E) and get the number n of nodes in the graph, the label ii ∈ V of the node corresponding to the initial position (pi, qi) and the label id ∈ V of the node corresponding to the destination position (pd, qd)
  Allocate an empty list l
  Randomly generate the initial state values xi0 in the interval (0, 1), ∀i ∈ V
  x_{id}^0 ← 0
  For k ← 1 to arg max_{j∈{0,1,··· }} {j < √2 n + 2}
    For each i ∈ V
      If i = id
        xik ← xik−1
      Else
        xik ← min_{j∈N(i)} {x_j^{k−1} + wij}
      EndIf
    EndFor
  EndFor
  in ← ii
  While in ≠ id
    Add in to the list l
    imin ← arg min_{j∈N(in)} {xjk + wij}
    in ← imin1
  EndWhile
  Convert the nodes in list l into the path p on the maze map
  Return path p

the initial position to the destination position, which is stored as a list l. Finally, the shortest path is converted to an obstacle-free path in the maze map.

When using the BMC-based maze solving algorithm, i.e., Algorithm 1, one issue should be mentioned. For the finding of the parent nodes, i.e., arg min_{j∈N(in)} {xjk + wij}, the result may contain more than one node. In this case, there is more than one shortest path for the problem, and we can simply adopt the first element of imin (i.e., imin1) and assign it to in, because we only need one path of shortest length. Obviously, if we want to find all the shortest paths from the initial position to the destination position, whenever the finding of parent nodes has multiple solutions we can add copies of the list l, and from then on, we need to find the parent node of each node at the end of each list. For Algorithm 1, we have the following remark regarding the setting of the iteration number.

Remark 5.4 We need to keep in mind that the length of an edge of the graph in this maze problem is either 1 or √2, for which we have p_il < √2 n and w = min{1, √2} = 1, where the definitions of p_il and w are available in Theorem 5.1.
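The branching idea above, duplicating the list whenever a node has several parents, enumerates every shortest path. A minimal sketch (graph data and names are illustrative, not from the text):

```python
# Hedged sketch: when a node has several parents, copy the partial path and
# extend each copy, yielding every shortest path to the leader set.
def all_shortest_paths(start, leaders, parent_sets):
    """parent_sets[i] plays the role of F(i) from Remark 5.3
    (empty for leader nodes)."""
    paths, done = [[start]], []
    while paths:
        path = paths.pop()
        tail = path[-1]
        if tail in leaders:
            done.append(path)          # reached a leader: path is complete
        else:
            for p in parent_sets[tail]:
                paths.append(path + [p])  # one copy of the list per parent
    return done

# Diamond graph: node 4 reaches leader 1 equally well via 2 or via 3.
F = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}
print(sorted(all_shortest_paths(4, {1}, F)))  # [[4, 2, 1], [4, 3, 1]]
```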


Algorithm 2 BMC-based complete coverage algorithm
Input: The map M of the environment containing the uncovered area and the initial position (pi, qi) of the mobile robot
Output: Navigating the mobile robot to cover all the free space given in the map M
  Convert the map M into an undirected connected graph G = (V, E) and get the number n of nodes in the graph and the label ii ∈ V of the node corresponding to the initial position (pi, qi) of the mobile robot
  Gu ← G − {ii}
  ic ← ii
  Randomly generate the initial states xi0 in the interval (0, 1), ∀i ∈ V
  While the set Gu is nonempty
    For k ← 1 to arg max_{j∈{0,1,··· }} {j < √2 n + 2}
      For each i ∈ V
        If i ∈ Gu
          xik ← 0
        Else
          xik ← min_{j∈N(i)} {x_j^{k−1} + wij}
        EndIf
      EndFor
    EndFor
    While x_{ic} ≠ 0
      imin ← arg min_{j∈N(ic)} {xjk + wij}
      in ← imin1
      Control the mobile robot to move to the position on the map corresponding to in on G
      ic ← in
      Remove ic from Gu
    EndWhile
  EndWhile

Meanwhile, in the maze problem, the set S of destination nodes consists of only one node, i.e., id. If xi0 ∈ [0, 1), ∀i ∈ V, then (min_{l∈S} {xl0 + pil} − x0)/w + 1 < (1 + √2 n)/1 + 1 = √2 n + 2, and hence

max{n, (max_{i∈V−S} {min_{l∈S} {xl0 + pil}} − x0)/w + 1} < √2 n + 2.

It follows that, on the basis of Theorem 5.1, Algorithm 1 takes no more than arg max_{j∈{0,1,··· }} {j < √2 n + 2} iterations to converge.
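The iteration count arg max_{j∈{0,1,···}} {j < √2 n + 2} is simply the largest integer strictly below √2 n + 2; since √2 n is irrational for n ≥ 1, this equals ⌈√2 n + 2⌉ − 1. A one-line helper (naming is ours):

```python
# Largest integer strictly below sqrt(2)*n + 2, i.e., the iteration count
# used in the for loops of Algorithms 1 and 2.
import math

def bmc_iterations(n):
    # sqrt(2)*n + 2 is never an integer for n >= 1, so ceil(.) - 1 is the
    # largest integer strictly below it.
    return math.ceil(math.sqrt(2) * n + 2) - 1

print(bmc_iterations(8))  # 13
```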


5.3.2 Complete Coverage

In this subsection, we discuss the design of an algorithm based on the discrete-time biased min-consensus protocol for solving the complete coverage problem. The complete coverage problem requires a mobile robot to visit all the free space in the environment, which contains free space that can be passed by the mobile robot and obstacle space that cannot be passed by the mobile robot [101, 102]. By using the idea of divide and conquer, we can convert the problem of complete coverage into a series of shortest path problems, to which the discrete-time biased min-consensus based shortest path finding algorithm can be applied.

Let the map of the environment be denoted by M. As discussed in the previous subsection, we can convert the map of the environment into an undirected connected graph denoted by G, with the number of nodes in the graph denoted by n. Let the set of nodes corresponding to uncovered positions in the environment be denoted by Gu. Let (pi, qi) and ii denote a pair of position/node on the map M and the graph G with one-to-one correspondence. On the basis of the discrete-time biased min-consensus protocol (5.2), we have the algorithm shown as Algorithm 2 for solving the complete coverage problem, which is called the biased min-consensus based (BMC-based) complete coverage algorithm.

For the BMC-based maze solving algorithm, i.e., Algorithm 1, and the BMC-based complete coverage algorithm, i.e., Algorithm 2, we have the following remark.

Remark 5.5 Regarding how to set the iteration number in the for loops of Algorithms 1 and 2, we may use algorithms to get an estimate of the number of nodes. For example, a distributed algorithm for estimating the number of nodes in a network was provided in [103]. It has also been shown that the distributed average consensus algorithm can be used to realize the estimation of the number of nodes in a given network [21] with the addition of an auxiliary state variable (ASV) for each participating node.
Meanwhile, as shown in the theoretical analysis, the discrete-time biased min-consensus protocol is convergent in finite time. In this case, if the iteration number is set to be sufficiently large, then exact convergence can be guaranteed.
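The divide-and-conquer control flow of the coverage scheme, repeatedly steering to the nearest uncovered node and marking the traversed nodes as covered, can be sketched as below. For brevity we substitute a plain BFS distance query for the consensus iteration (the consensus equilibrium plays this role in Algorithm 2); the function names and the example graph are ours.

```python
# Hedged sketch of the complete-coverage control flow: repeat shortest-path
# queries to the nearest uncovered node until none remain.
from collections import deque

def coverage_order(neighbors, start):
    uncovered = set(neighbors) - {start}
    pos, visited = start, [start]
    while uncovered:
        # BFS from the current position to the nearest uncovered node
        # (a stand-in for the biased min-consensus equilibrium).
        prev, frontier, target = {pos: None}, deque([pos]), None
        while frontier:
            u = frontier.popleft()
            if u in uncovered:
                target = u
                break
            for v in neighbors[u]:
                if v not in prev:
                    prev[v] = u
                    frontier.append(v)
        # Recover the path back to pos, then walk it, marking nodes covered.
        path = []
        while target is not None:
            path.append(target)
            target = prev[target]
        for node in reversed(path[:-1]):   # path[-1] is pos itself
            visited.append(node)
            uncovered.discard(node)
        pos = visited[-1]
    return visited

neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(coverage_order(neighbors, 1))  # [1, 2, 3, 4]
```

On graphs with dead ends the robot may revisit some nodes while moving toward the next uncovered region, matching the small number of repeated visits reported in Sect. 5.5.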

5.4 Computer Simulations

In this section, to evaluate the behavior of the discrete-time biased min-consensus protocol under different situations, we discuss the simulation results under different settings. For the sake of illustration, we use the graph shown in Fig. 5.2 to conduct the simulations, for which the set of leader nodes is S = {1, 2, 3}. The synchronous discrete-time biased min-consensus protocol (5.2) is applied to the nodes on the


Fig. 5.2 An undirected connected graph consisting of eight nodes, where the set of leaders is S = {1, 2, 3}


Fig. 5.3 State evolutions of nodes on the graph when the discrete-time biased min-consensus protocol (5.2) is adopted with xi0 = i, ∀i ∈ {1, 2, · · · , 8}. (a) Without time delay. (b) With bounded time delay

graph, where the initial node states are given as xi0 = i, ∀i ∈ {1, 2, · · · , 8}. From Fig. 5.3a, it can be found that after about three time instants, the state variables of the nodes reach the equilibrium at which xi∗ = xi0 = i, ∀i ∈ {1, 2, 3} and xi∗ = min_{l∈{1,2,3}} {xl∗ + pil}, ∀i ∈ {4, 5, · · · , 8}, in which pil denotes the length of a shortest path from node i to node l in set {1, 2, 3}. The result is consistent with Theorem 5.1. We also evaluate the case with bounded time delays. For this case, randomly varying time delays τij within the interval (1, 10) are considered. As seen from Fig. 5.3b, under time delays, the same equilibrium is still reached although more time instants are taken, which is consistent with Theorem 5.2.



Fig. 5.4 State evolutions of nodes on the graph when the discrete-time biased min-consensus protocol (5.2) is adopted with xi0 = 0, ∀i ∈ {1, 2, 3} and the initial states of non-leader nodes on the graph randomly generated within interval (0, 5). (a) Without time delay. (b) With bounded time delay

To check the relationship with shortest path finding, the same graph is used, but we set xi0 = 0, ∀i ∈ {1, 2, 3}, and the initial states of non-leader nodes on the graph are randomly generated within the interval (0, 5). From Fig. 5.4 and Remark 5.3, we readily have F(5) = arg min_{j∈{3,4}} {xj∗ + w5j} = {3} and F(3) = ∅. Thus, a shortest path starting from node 5 and ending at a node in the set {1, 2, 3} is 5 → 3. It can be readily verified that the generated shortest path is correct. These results are consistent with Theorem 5.4 as well as Corollary 5.1.

We also evaluate the performance of the discrete-time biased min-consensus protocol under asynchronous state updates on the basis of the model (5.4) by using the same graph. In the simulation, the updating set is set to U(k) = {(k mod n) + 1}, in which n = 8. It can be readily verified that Assumption 5.1 holds with δ = 8. In view of Fig. 5.5, the state variables of the nodes on the graph take fewer than 64 time instants to reach the same equilibrium as in the case with the synchronous discrete-time biased min-consensus protocol. The results are consistent with Theorem 5.3 and Corollary 5.2.
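The round-robin updating set U(k) = {(k mod n) + 1} used above can be sketched as follows: at step k only one node refreshes its state, yet the same equilibrium is reached. The three-node graph below is illustrative only (the edge weights of the eight-node graph of Fig. 5.2 are not listed in this chunk); all names are ours.

```python
# Hedged sketch of asynchronous biased min-consensus with a round-robin
# updating set U(k) = {(k mod n) + 1}.
def async_bmc(neighbors, weights, leaders, x0, steps):
    x, n = dict(x0), len(x0)
    for k in range(steps):
        i = (k % n) + 1              # the single node allowed to update at step k
        if i not in leaders:
            x[i] = min(x[j] + weights[i, j] for j in neighbors[i])
    return x

# Path graph 1 -- 2 -- 3 with leader node 1 held at state 0.
neighbors = {1: [2], 2: [1, 3], 3: [2]}
weights = {(1, 2): 1.0, (2, 1): 1.0, (2, 3): 2.0, (3, 2): 2.0}
x = async_bmc(neighbors, weights, {1}, {1: 0.0, 2: 0.7, 3: 0.2}, steps=9)
print(x)  # {1: 0.0, 2: 1.0, 3: 3.0}
```

Each node is updated at least once every n steps, so the round-robin schedule satisfies a bounded-intercommunication condition of the kind in Assumption 5.1 with δ = n.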

5.5 Applications

In this section, we evaluate the performance of the two algorithms for solving the maze problem and the complete coverage problem.

In terms of the BMC-based maze solving algorithm, i.e., Algorithm 1, a large-scale maze is adopted in the simulation, which consists of 300 × 300 pixels, and the matched undirected connected graph consists of 42,185 nodes. The starting location



Fig. 5.5 State evolutions of nodes on the graph when the discrete-time biased min-consensus protocol (5.2) is adopted with different settings of initial node states. (a) xi0 = 0, ∀i ∈ {1, 2, 3}. (b) Randomly generated xi0

Fig. 5.6 The maze solving result by using the BMC-based maze solving algorithm, i.e., Algorithm 1, where the maze map consists of 300 × 300 pixels, of which the corresponding undirected connected graph contains 42,185 nodes

and the destination location on the maze are marked by a rectangle and a circle, respectively. As seen from Fig. 5.6, a shortest path is generated by the algorithm for the maze problem. Besides, transient states of the nodes when using the BMC-based maze solving algorithm, i.e., Algorithm 1, for solving the maze problem are shown in Fig. 5.7. The results verify the effectiveness of the BMC-based maze solving algorithm.

Fig. 5.7 Transient states of the nodes when using the BMC-based maze solving algorithm, i.e., Algorithm 1, for solving the maze problem. (a) Transient state 1. (b) Transient state 2. (c) Transient state 3. (d) Transient state 4

In terms of the BMC-based complete coverage algorithm, i.e., Algorithm 2, a house-like environment is considered, and the coverage result is shown in Fig. 5.8. In the figure, the black area stands for the space that cannot be passed by the mobile robot executing the complete coverage task, whose starting location is marked with a rectangle, and its trajectory is recorded as a red line. From the figure, the resultant trajectory does not pass through the obstacle area, and the number of positions that have been visited more than twice is small, showing the effectiveness of the BMC-based complete coverage algorithm.


Fig. 5.8 The complete coverage result by using the BMC-based complete coverage algorithm, i.e., Algorithm 2, where the starting location of the mobile robot is marked with a red rectangle

5.6 Summary

In this chapter, we have discussed the design, analysis, and applications of discrete-time biased min-consensus protocols and revealed their relationship with shortest path finding over the same graphs. It has been shown that, with the protocols, the state variables always converge to the equilibrium, even in the presence of time delays or asynchronous state updates. The performance of the protocols has been evaluated via simulations under different settings. The applications to the maze problem and the complete coverage problem have further shown the effectiveness and potential of the discrete-time biased min-consensus protocols.


References

1. J. Cortés, Distributed algorithms for reaching consensus on general functions. Automatica 44(3), 726–737 (2008) 2. L. Jin, S. Li, Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst. Man Cybern. Syst. 48(5), 693–701 (2018) 3. L. Jin, S. Li, H.M. La, X. Zhang, B. Hu, Dynamic task allocation in multi-robot coordination for moving target tracking: a distributed approach. Automatica 100, 75–81 (2019) 4. L. Jin, S. Li, B. Hu, C. Yi, Dynamic neural networks aided distributed cooperative control of manipulators capable of different performance indices. Neurocomputing 291, 50–58 (2018) 5. L. Jin, S. Li, L. Xiao, R. Lu, B. Liao, Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. 48(10), 1715–1724 (2018) 6. S. Li, M. Zhou, X. Luo, Z. You, Distributed winner-take-all in dynamic networks. IEEE Trans. Autom. Control 62(2), 577–589 (2017) 7. S. Li, J. He, Y. Li, M.U. Rafique, Distributed recurrent neural networks for cooperative control of manipulators: a game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 415–426 (2017) 8. L. Jin, S. Li, X. Luo, M. Shang, Nonlinearly-activated noise-tolerant zeroing neural network for distributed motion planning of multiple robot arms, in Proceeding of 2014 International Joint Conference on Neural Networks (IJCNN) (2017), pp. 4165–4170 9. M.U. Khan, S. Li, Q. Wang, Z. Shao, Distributed multirobot formation and tracking control in cluttered environments. ACM Trans. Auton. Adapt. Syst. 11(2), 1–22 (2016) 10. S. Li, Z. Wang, Y. Li, Using Laplacian eigenmap as heuristic information to solve nonlinear constraints defined on a graph and its application in distributed range-free localization of wireless sensor networks. Neural Process. Lett. 37(3), 411–424 (2013) 11. S. Li, Y. Guo, Distributed consensus filter on directed switching graphs. Int. J.
Robust Nonlinear Control 25(13), 2019–2040 (2015) 12. N. Abaid, M. Porfiri, Leader-follower consensus over numerosity-constrained random networks. Automatica 48(8), 1845–1851 (2012) 13. R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004) 14. K. Cai, H. Ishii, Average consensus on general strongly connected digraphs. Automatica 48(11), 2750–2761 (2012) 15. F. Xiao, L. Wang, Consensus protocols for discrete-time multi-agent systems with timevarying delays. Automatica 44(10), 2577–2582 (2008) 16. X. Xu, S. Chen, W. Huang, L. Gao, Leader-following consensus of discrete-time multi-agent systems with observer-based protocols. Neurocomputing 118(22), 334–341 (2013) 17. Y. Chen, J. Lü, Z. Lin, Consensus of discrete-time multi-agent systems with transmission nonlinearity. Automatica 49(6), 1768–1775 (2013) 18. T. Yang, Z. Meng, D.V. Dimarogonas, K.H. Johansson, Global consensus for discrete-time multi-agent systems with input saturation constraints. Automatica 50(2), 499–506 (2014) 19. J. He, L. Duan, F. Hou, P. Cheng, J. Chen, Multiperiod scheduling for wireless sensor networks: a distributed consensus approach. IEEE Trans. Signal Process. 63(7), 1651–1663 (2015) 20. S.T. Cady, A.D. Domíguez-García, C.N. Hadjicostis, Finite-time approximate consensus and its application to distributed frequency regulation in islanded AC microgrids, in Proceeding of 48th Hawaii International Conference of System Science (2015), pp. 2664–2670 21. V. Yadav, M.V. Salapaka, Distributed protocol for determining when averaging consensus is reached, in Proceeding of 45th Annual Allerton Conference Communication, Control, Computer (2007), pp. 715–720 22. L. Fu, D. Sun, L.R. Rilett, Heuristic shortest path algorithms for transportation applications: state of the art. Comput. Oper. Res. 33, 3324–3343 (2006)


23. E.W. Dijkstra, A note on two problems in connexion with graphs. Numer. Math. 1(1), 269– 271 (1959) 24. C.W. Ahn, R.S. Ramakrishna, A genetic algorithm for shortest path routing problem and the sizing of populations. IEEE Trans. Evol. Comput. 6(6), 566–579 (2002) 25. A.W. Mohemmed, N.C. Sahoo, T.K. Geok, Solving shortest path problem using particle swarm optimization. Appl. Soft Comput. 8(4), 1643–653 (2008) 26. G.E. Jan, C. Sun, W.C. Tsai, T. Lin, An O(nlogn) shortest path algorithm based on Delaunay triangulation. IEEE/ASME Trans. Mechatron. 19(2), 660–666 (2014) 27. Y. Sang, J. Lv, H. Qu, Z. Yi, Shortest path computation using pulse-coupled neural networks with restricted autowave. Know. Syst. 114, 1–11 (2016) 28. J. Qin, C. Yu, S. Hirche, Stationary consensus of asynchronous discrete-time second-order multi-agent systems under switching topology. IEEE Trans. Ind. Inf. 8(4), 986–994 (2012) 29. Z. Ni, H. He, J. Wen, X. Xu, Goal representation heuristic dynamic programming on maze navigation. IEEE Trans. Neural Netw. Learn. Syst. 24(12), 2038–2050 (2013) 30. S.X. Yang, C. Luo, A neural network approach to complete coverage path planning. IEEE Trans. Syst. Man Cybern. Part B Cybern. 34(1), 718–725 (2004) 31. L. Wang, F. Xiao, Finite-time consensus problems for networks of dynamic agents. IEEE Trans. Autom. Control 55(4), 950–955 (2010) 32. L. Jin, S. Li, B. Hu, M. Liu, A survey on projection neural networks and their applications. Appl. Soft Comput. 76, 533–544 (2019) 33. B. Liao, Q. Xiang, S. Li, Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 325, 234–241 (2019) 34. P.S. Stanimirovic, V.N. Katsikis, S. Li, Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 329, 129–143 (2019) 35. Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. 
Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329, 255–266 (2019) 36. L. Xiao, K. Li, Z. Tan, Z. Zhang, B. Liao, K. Chen, L. Jin, S. Li, Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 142, 35–40 (2019) 37. D. Chen, S. Li, Q. Wu, Rejecting chaotic disturbances using a super-exponential-zeroing neurodynamic approach for synchronization of chaotic sensor systems. Sensors 19(1), 74 (2019) 38. Q. Wu, X. Shen, Y. Jin, Z. Chen, S. Li, A.H. Khan, D. Chen, Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 19(8), 1758 (2019) 39. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 40. Z. Zhang, S. Chen, S. Li, Compatible convex-nonconvex constrained QP-based dual neural networks for motion planning of redundant robot manipulators. IEEE Trans. Control Syst. Technol. 27(3), 1250–1258 (2019) 41. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 42. L. Jin, S. Li, B. Hu, M. Liu, J. Yu, A noise-suppressing neural algorithm for solving the timevarying system of linear equations: a control-based approach. IEEE Trans. Ind. Inf. 15(1), 236–246 (2019) 43. Y. Li, S. Li, B. Hannaford, A model-based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inf. 15(4), 2054–2063 (2019) 44. L. Xiao, S. Li, F. Lin, Z. Tan, A.H. Khan, Zeroing neural dynamics for control design: comprehensive analysis on stability, robustness, and convergence speed. IEEE Trans. Ind. Inf. 15(5), 2605–2616 (2019)


45. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, X. Liu, Reconfigurable battery systems: a survey on hardware architecture and research challenges. ACM Trans. Des. Autom. Electron. Syst. 24(2), 19:1–19:27 (2019) 46. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 47. L. Jin, S. Li, H. Wang, Z. Zhang, Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl. Soft Comput. 62, 840–850 (2018) 48. M. Liu, S. Li, X. Li, L. Jin, C. Yi, Z. Huang, Intelligent controllers for multirobot competitive and dynamic tracking. Complexity 2018, 4573631:1–4573631:12 (2018) 49. D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 50. L. Jin, S. Li, J. Yu, J. He, Robot manipulator control using neural networks: a survey. Neurocomputing 285, 23–34 (2018) 51. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 52. P.S. Stanimirovic, V.N. Katsikis, S. Li, Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 316, 124–134 (2018) 53. X. Li, J. Yu, S. Li, L. Ni, A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 317, 70–78 (2018) 54. L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 102–113 (2018) 55. L. Xiao, Z. Zhang, Z. Zhang, W. Li, S. Li, Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw. 105, 185–196 (2018) 56. Z. Zhang, Y. Lu, L. 
Zheng, S. Li, Z. Yu, Y. Li, A new varying-parameter convergentdifferential neural-network for solving time-varying convex QP problem constrained by linear-equality. IEEE Trans. Autom. Control 63(12), 4110–4125 (2018) 57. Z. Zhang, Y. Lin, S. Li, Y. Li, Z. Yu, Y. Luo, Tricriteria optimization-coordination motion of dual-redundant-robot manipulators for complex path planning. IEEE Trans. Control Syst. Technol. 26(4), 1345–1357 (2018) 58. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, Incorporation of efficient secondorder solvers into latent factor models for accurate prediction of missing QoS data. IEEE Trans. Cybern. 48(4), 1216–1228 (2018) 59. L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inf. 14(1), 98–105 (2018) 60. L. Jin, S. Li, B. Hu, RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Ind. Inf. 14(1), 189–199 (2018) 61. X. Luo, M. Zhou, S. Li, M. Shang, An inherently nonnegative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Ind. Inf. 14(5), 2011–2022 (2018) 62. D. Chen, Y. Zhang, S. Li, Tracking control of robot manipulators with unknown models: a Jacobian-matrix-adaption method. IEEE Trans. Ind. Inf. 14(7), 3044–3053 (2018) 63. J. Li, Y. Zhang, S. Li, M. Mao, New discretization-formula-based zeroing dynamics for realtime tracking control of serial and parallel manipulators. IEEE Trans. Ind. Inf. 14(8), 3416– 3425 (2018) 64. S. Li, H. Wang, M.U. Rafique, A novel recurrent neural network for manipulator control with improved noise tolerance. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1908–1918 (2018) 65. H. Wang, P.X. Liu, S. Li, D. Wang, Adaptive neural output-feedback control for a class of nonlower triangular nonlinear systems with unmodeled dynamics. IEEE Trans. Neural Netw. 
Learn. Syst. 29(8), 3658–3668 (2018)


66. S. Li, M. Zhou, X. Luo, Modified primal-dual neural networks for motion control of redundant manipulators with dynamic rejection of harmonic noises. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 4791–4801 (2018) 67. Y. Li, S. Li, B. Hannaford, A novel recurrent neural network for improving redundant manipulator motion planning completeness, in Proceeding of 2018 IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 2956–2961 68. M.A. Mirza, S. Li, L. Jin, Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 266, 114–122 (2017) 69. L. Jin, S. Li, Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267, 107–113 (2017) 70. L. Jin, S. Li, B. Liao, Z. Zhang, Zeroing neural networks: a survey. Neurocomputing 267, 597–604 (2017) 71. L. Jin, Y. Zhang, S. Li, Y. Zhang, Noise-tolerant ZNN models for solving time-varying zerofinding problems: a control-theoretic approach. IEEE Trans. Autom. Control 62(2), 992–997 (2017) 72. Z. You, M. Zhou, X. Luo, S. Li, Highly efficient framework for predicting interactions between proteins. IEEE Trans. Cybern. 47(3), 731–743 (2017) 73. L. Jin, S. Li, H.M. La, X. Luo, Manipulability optimization of redundant manipulators using dynamic neural networks. IEEE Trans. Ind. Electron. 64(6), 4710–4720 (2017) 74. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, N. Guan, A robust algorithm for state-of-charge estimation with gain optimization. IEEE Trans. Ind. Inf. 13(6), 2983–2994 (2017) 75. X. Luo, J. Sun, Z. Wang, S. Li, M. Shang, Symmetric and nonnegative latent factor models for undirected, high-dimensional, and sparse networks in industrial applications. IEEE Trans. Ind. Inf. 13(6), 3098–3107 (2017) 76. S. Li, Y. Zhang, L. Jin, Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2243–2254 (2017) 77. X. 
Luo, S. Li, Non-negativity constrained missing data estimation for high-dimensional and sparse matrices, in Proceeding of 2017 13th IEEE Conference on Automation Science and Engineering (CASE) (2017), pp. 1368–1373 78. Y. Li, S. Li, D.E. Caballero, M. Miyasaka, A. Lewis, B. Hannaford, Improving control precision and motion adaptiveness for surgical robot with recurrent neural network, in Proceeding of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017), pp. 3538–3543 79. X. Luo, M. Zhou, M. Shang, S. Li, Y. Xia, A novel approach to extracting non-negative latent factors from non-negative big sparse matrices. IEEE Access 4, 2649–2655 (2016) 80. M. Mao, J. Li, L. Jin, S. Li, Y. Zhang, Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207, 220–230 (2016) 81. Y. Huang, Z. You, X. Li, X. Chen, P. Hu, S. Li, X. Luo, Construction of reliable proteinprotein interaction networks using weighted sparse representation based classifier with pseudo substitution matrix representation features. Neurocomputing 218, 131–138 (2016) 82. X. Luo, M. Zhou, H. Leung, Y. Xia, Q. Zhu, Z. You, S. Li, An incremental-and-staticcombined scheme for matrix-factorization-based collaborative filtering. IEEE Trans. Autom. Sci. Eng. 13(1), 333–343 (2016) 83. S. Li, Z. You, H. Guo, X. Luo, Z. Zhao, Inverse-free extreme learning machine with optimal information updating. IEEE Trans. Cybern. 46(5), 1229–1241 (2016) 84. L. Jin, Y. Zhang, S. Li, Y. Zhang, Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 63(11), 6978–6988 (2016) 85. X. Luo, M. Zhou, S. Li, Z. You, Y. Xia, Q. Zhu, A nonnegative latent factor model for largescale sparse matrices in recommender systems via alternating direction method. IEEE Trans. Neural Netw. Learn. Syst. 27(3), 579–592 (2016)


86. L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 27(12), 2615–2627 (2016) 87. X. Luo, M. Shang, S. Li, Efficient extraction of non-negative latent factors from highdimensional and sparse matrices in industrial applications, in Proceeding of 2016 IEEE 16th International Conference on Data Mining (ICDM) (2016), pp. 311–319 88. X. Luo, S. Li, M. Zhou, Regularized extraction of non-negative latent factors from highdimensional sparse matrices, in Proceeding of 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (2016), pp. 1221–1226 89. X. Luo, Z. Ming, Z. You, S. Li, Y. Xia, H. Leung, Improving network topology-based protein interactome mapping via collaborative filtering. Knowl.-Based Syst. 90, 23–32 (2015) 90. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, An efficient second-order approach to factorize sparse matrices in recommender systems. IEEE Trans. Ind. Inf. 11(4), 946–956 (2015) 91. L. Wong, Z. You, S. Li, Y. Huang, G. Liu, Detection of protein-protein interactions from amino acid sequences using a rotation forest model with a novel PR-LPQ descriptor, in Proceeding of International Conference on Intelligent Computing, vol. 2015 (2015), pp. 713– 720 92. Z. You, J. Yu, L. Zhu, S. Li, Z. Wen, A MapReduce based parallel SVM for large-scale predicting protein-protein interactions. Neurocomputing 145, 37–43 (2014) 93. Y. Li, S. Li, Q. Song, H. Liu, M.Q.H. Meng, Fast and robust data association using posterior based approximate joint compatibility test. IEEE Trans. Ind. Inf. 10(1), 331–339 (2014) 94. S. Li, Y. Li, Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44(8), 1397–1407 (2014) 95. Q. Huang, Z. You, S. Li, Z. 
Zhu, Using Chou’s amphiphilic pseudo-amino acid composition and extreme learning machine for prediction of protein-protein interactions, in Proceeding of 2014 International Joint Conference on Neural Networks (IJCNN) (2014), pp. 2952–2956 96. S. Li, Y. Li, Z. Wang, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw. 39, 27–39 (2013) 97. S. Li, B. Liu, Y. Li, Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 24(2), 301– 309 (2013) 98. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004) 99. S. Fang, H. Chan, Human identification by quantifying similarity and dissimilarity in electrocardiogram phase space. Pattern Recog. 42, 1824–1831 (2009) 100. L. Fang, P.J. Antsaklis, Information consensus of asynchronous discrete-time multi-agent systems, in Proceeding of 2005 American Control Conference (2005), pp. 1883–1888 101. E. Galceran, M. Carreras, A survey on coverage path planning for robotics. Robot. Auton. Syst. 61(12), 1258–1276 (2013) 102. C.H. Kuo, H.C. Chou, S.Y. Tasi, Pneumatic sensor: a complete coverage improvement approach for robotic cleaners. IEEE Trans. Instrum. Meas. 60(4), 1237–1256 (2011) 103. O. Slu´ciak, M. Rupp, Network size estimation using distributed orthogonalization. IEEE Signal Process. Lett. 20(4), 347–350 (2013)

Chapter 6

Biased Consensus Based Distributed Neural Network

6.1 Introduction

The shortest path problem, i.e., finding a path of the shortest length in a graph from a predefined source node to a predefined destination node, is a complex combinatorial optimization problem, which is referred to as the classical shortest path problem in this chapter. The past decades have witnessed the usefulness of algorithms that can find shortest paths in lots of applications, which include but are not limited to transportation routing [1], localization of sensor networks [2], classification of color texture [3], and the navigation of mobile robots [4]. For the classical shortest path problem, there are some well-known methods, such as the dynamic programming method, the labeling method, and the Bellman-Ford method, whose computational complexities are O(n²), O(n²), and O(n³), respectively, in which n stands for the number of nodes in the graph [5–7]. Evidently, for solving the classical shortest path problem, the computational complexity of the adopted method is very important, as it affects the efficiency of solving the problem, especially when it comes to large-scale graphs. Therefore, some researchers have tried to make a trade-off between the optimality of the generated path and the efficiency in finding it, based on which methods with computational complexity O(n log n) were proposed for finding a suboptimal path in [8, 9], where the tool of Delaunay triangulation is utilized. Using an efficient data reduction approach, i.e., dividing the graph into several small subgraphs called proxies, it was shown in [10] that queries of shortest paths and the corresponding lengths over big graphs can be handled faster. It should be noted that the above methods are all sequential, i.e., the computational time increases with the number of nodes in the graph.
To further improve the computational efficiency, in [11], efforts were made to accelerate a modified Dijkstra method by using field-programmable gate arrays, which was tested on a large-scale shortest path problem.
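To make the classical baseline concrete, the edge-relaxation idea behind the Bellman-Ford method mentioned above can be sketched as follows. This is a minimal illustration on a hypothetical four-node graph, not an algorithm or example taken from this book.

```python
import math

def bellman_ford(n, edges, source):
    """Classical Bellman-Ford relaxation: at most n-1 passes over the edges.

    n      -- number of nodes, labeled 0..n-1
    edges  -- list of (i, j, length) triples for directed edges
    source -- label of the source node
    Returns dist[i] = length of the shortest path from source to node i.
    """
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):              # n-1 relaxation rounds suffice
        updated = False
        for i, j, lij in edges:
            if dist[i] + lij < dist[j]:  # relax edge (i, j)
                dist[j] = dist[i] + lij
                updated = True
        if not updated:                  # early exit: no edge relaxed
            break
    return dist

# Hypothetical 4-node example:
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (2, 3, 1.0)]
dist = bellman_ford(4, edges, 0)
print(dist)  # [0.0, 1.0, 3.0, 4.0]
```

For a dense graph with |E| ≈ n² edges, the n − 1 passes give the O(n³) complexity cited above.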

© Springer Nature Singapore Pte Ltd. 2020 Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_6

Owing to their potential for distributed storage and parallel computation, the bio-inspired computational models called neural networks have been widely investigated [12–110]. The pioneering works in [111, 112] demonstrated that neural networks can be an efficient tool to find solutions of optimization problems via the example of the traveling salesman problem, which triggered interest in this line of research. For instance, a continuous-time recurrent neural network was presented in [6] for solving the classical shortest path problem over an n-node graph, which contains n(n − 1) neurons in total. To solve the same problem, a discrete-time neural network consisting of n² + n neurons was proposed in [7] on the basis of a good property of convex optimization problems, called duality. For neural networks, the number of neurons decides the complexity of the network structure, and some researchers have tried to design neural networks with the fewest neurons for solving the classical shortest path problem, e.g., [113]. For the design of neural networks by using tools from optimization theory, the first step is to formulate the corresponding problem as an optimization problem, which is preferable if it is convex. The investigations reported in [6, 7, 113] used an integer linear program [114] to model the shortest path problem. A good property of such neural network methods is the rigorous theoretical guarantee. For example, the neural network proposed in [115] for finding shortest paths was proven to be stable according to Lyapunov theory, and its capability of finding a shortest path was also theoretically proved. In recent years, a type of neural network called pulse-coupled neural networks [116–119] has also been designed and analyzed for finding shortest paths. It is worth noting that almost all the available solution schemes for the shortest path problem are centralized.
When it comes to large-scale problems, a centralized approach may not be favorable. For instance, in a hardware implementation of a centralized neural network, each neuron may need to know the state information of all other neurons in the same neural network, consuming many hardware resources. Meanwhile, the robustness of the centralized approach may not be satisfactory due to the existence of a central computing unit. From this point of view, a distributed approach or scheme is needed to find shortest paths. In addition, the generalized shortest path problem covers more scenarios than the classical shortest path problem, as it requires finding the shortest path among all paths in the graph from one node to all feasible destination nodes. For instance, if a person wants to go to a store, any nearby one could be viewed as an alternative. The generalized shortest path problem discussed in this chapter accommodates such application scenarios by extending the classical shortest path problem from a single-destination scenario to a multi-destination scenario. In this chapter, we discuss a unified solution to the classical shortest path problem and the problem of finding shortest paths in the case with multiple destination nodes. Specifically, on the basis of Bellman's principle of optimality [120], a formulation of the shortest path problem as a linear program is adopted in

which the decision variables represent the lengths of the possible paths. On this basis, the formulated linear program is solved in a distributed and efficient way by using the biased consensus neural network, where every neuron only needs to know the state information of its neighbors. As a result, the method shown in this chapter is more effective compared with the existing neural network-based schemes. One point that should be mentioned here is the distributed property, which is a common requirement in consensus [121–125]. It should also be mentioned that the results in [121–125] about the design and analysis of consensus protocols cannot be directly applied to the finding of shortest paths in either the case with only one destination node or the case with multiple possible destination nodes. We also carry out theoretical analysis to guarantee the performance of the presented method in finding shortest paths in different situations. Meanwhile, the performance of the presented method is also evaluated via computer simulations, together with an application to the navigation of a mobile robot in a maze environment. The rest of this chapter is organized as follows. We first give the problem formulation for the two types of shortest path finding problems, together with some basic definitions in graph theory, in Sect. 6.2, followed by the design of the biased neural network method in Sect. 6.3. The theoretical analysis on the performance of the biased neural network method is given in Sect. 6.4, and computer simulations of two simple examples are discussed in Sect. 6.5 to further evaluate the method. A potential application of the method, i.e., the navigation of mobile robots, is discussed in Sect. 6.6 via simulation, followed by the conclusions given in Sect. 6.7.

6.2 Problem Description

In this section, the problem discussed in this chapter is described. Some basic notations are presented here for the convenience of illustration. A directed n-node graph consisting of nodes and edges is denoted by G = (V, E), in which V = {1, 2, · · · , n} is the set of nodes and E is the set of edges. In the graph, if there is a directed edge pointing from node i to node j (i ≠ j), which is denoted by (i, j), then the length of the edge, which is represented by lij, is a finite positive real number. If there is no edge pointing from node i to node j, then we set lij = +∞. Let the neighbor set of a node i be denoted by N(i) = {j | lij ≠ +∞, j ≠ i}. It should be noted here that undirected graphs can be viewed as a special case of directed graphs, for which lij = lji. We first consider the classical shortest path problem. Let node s and node v denote the source node and the destination node, respectively. The classical shortest path problem can be described as a linear program with integer decision variables

as follows [6]:

$$
\begin{aligned}
\min_{p_{ij}}\ & \sum_{(i,j)\in E} l_{ij} p_{ij},\\
\text{s.t.}\ & p_{ij} \in \{0, 1\},\\
& \sum_{j} p_{ij} - \sum_{j} p_{ji} =
\begin{cases}
1, & \text{if } i = s,\\
-1, & \text{if } i = v,\\
0, & \text{otherwise},
\end{cases}
\end{aligned}
\tag{6.1}
$$
in which pij = 1 means that the directed edge (i, j) contributes to the shortest path from the source node s to the destination node v, and pij = 0 means that the directed edge (i, j) is not within the shortest path. For this integer linear program, the optimal solution pij* (with i = 1, 2, · · · , n, j = 1, 2, · · · , n, and i ≠ j) can only be 0 or 1. The directed edges (i, j) with pij* = 1 constitute a shortest path from the source node s to the destination node v. The generalized shortest path problem is to find the shortest path from the source node i to an unspecified destination node in the set of all feasible destination nodes, which is denoted by Ω, in the graph. In other words, there are at least two feasible destination nodes, and we need to find the shortest among the paths from node i to all feasible destination nodes. That the destination nodes are feasible means that they share some common features which make them valid choices as a destination node. From this point of view, the classical shortest path problem is a particular case of the generalized shortest path problem in which the number of alternative destination nodes becomes one. Regarding the finding of shortest paths, we have the remarks below about the extension of the integer linear program formulation (6.1) to the generalized shortest path problem.

Remark 6.1 The mathematical description given in (6.1) may not be directly applicable to the case with multiple possible destination nodes. This is due to the main difference between the two types of shortest path problems. Specifically, while in the classical shortest path problem both the source node and the destination node are known beforehand, in the generalized shortest path problem only the source node is known and the destination node can only be known when the problem is solved, which causes a dilemma in using the formulation (6.1).
It is worth pointing out that, if we simply extend the formulation by setting the value on the right-hand side of the equality constraint to −1 for all possible destination nodes, a feasible solution to the optimization problem may not exist when the number of possible destination nodes is more than one. One may choose to use the integer linear program description to solve for the shortest paths from the source node to each possible destination node, and adopt the path with the shortest length. While this method can work, the problem is that the computational time is approximately proportional to the number of possible destination nodes.
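The infeasibility noted above can be checked directly: every arc (i, j) contributes +1 to row i and −1 to row j of the node-arc incidence matrix, so each column of that matrix sums to zero, and a feasible flow therefore requires the right-hand sides of the conservation constraints to sum to zero as well. A minimal sketch follows; the helper `flow_rhs` and the node labels are hypothetical, not from this book.

```python
def flow_rhs(n_nodes, source, destinations):
    """Right-hand side of the flow-conservation constraints in (6.1),
    naively extended to several destination nodes (each set to -1)."""
    b = [0] * n_nodes
    b[source] = 1
    for d in destinations:
        b[d] = -1
    return b

# Since each incidence-matrix column sums to zero, feasibility of the
# flow constraints requires sum(b) == 0.
b_single = flow_rhs(5, source=0, destinations=[4])
b_multi = flow_rhs(5, source=0, destinations=[3, 4])
print(sum(b_single))  # 0  -> consistent, a flow can exist
print(sum(b_multi))   # -1 -> no feasible flow exists
```

This is why the naive multi-destination extension of (6.1) fails as soon as more than one node is assigned −1.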

Remark 6.2 From the dynamic programming principle of Bellman [120], we can have a unified perspective to address the two types of shortest path problems: (1) For the shortest path problem with only one feasible destination node, let xi* denote the length of the shortest path from the source node i to the destination node d in the graph G. From the dynamic programming principle of Bellman [120], xi* satisfies the equations below:

$$
x_i^* = \min_{j \in N(i)} \{l_{ij} + x_j^*\}, \qquad x_d^* = 0. \tag{6.2}
$$

(2) For the shortest path problem with two or more feasible destination nodes, let xi* denote the shortest length of the paths from the source node i to all the feasible destination nodes in the set Ω. From the dynamic programming principle of Bellman [120], xi* satisfies the equations below:

$$
x_i^* = \min_{j \in N(i)} \{l_{ij} + x_j^*\}, \qquad x_i^* = 0 \ \text{if}\ i \in \Omega. \tag{6.3}
$$

By comparing the two cases in terms of the nonlinear equations that should be solved, we can have a clear idea about how to solve the two problems by using a unified framework. Recalling the definition of xi*, if we can find xi*, then the corresponding shortest path can be constructed by following the nonlinear equations (6.2) or (6.3) in a backtracking manner. To be specific, we know that a path is formed by a sequence of nodes. Then, let s1 → s2 → · · · → sd denote the sequence of nodes, in which si represents the label of the ith node in the shortest path, i.e., si ∈ V. From Eq. (6.2) or (6.3), we have s1 = i and si = arg min {l_{s_{i−1} j} + xj*}, j ∈ N(s_{i−1}), for i ≥ 2. Meanwhile, in terms of the label of the destination node, which for easier illustration is denoted by sd, we have x_{sd}* = 0. In addition, from (6.2) and (6.3) we can find that the process of finding the next node in the shortest path is essentially distributed, showing the possibility of solving the problem in a distributed manner. The difficulty here is that (6.2) and (6.3) are implicit and non-differentiable nonlinear equations. Thus, some widely used methods, such as those reported in [54, 126, 127], cannot be applied. On the basis of the above two remarks, we will present a unified method for solving the two types of shortest path problems, which bears the advantages of

general distributed models over centralized ones. Obviously, we need to assume that the shortest path problems are solvable.
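The backtracking construction of Remark 6.2 can be sketched as follows: we first obtain xi* by a simple fixed-point iteration on the Bellman equations (6.2)/(6.3), which serves here as a centralized stand-in for the neural network developed later, and then read off the path by the argmin rule. The four-node graph is hypothetical, not one from this book.

```python
import math

def shortest_lengths(lengths, targets, n, rounds=None):
    """Fixed-point iteration on the Bellman equations (6.2)/(6.3):
    x_i = min_j {l_ij + x_j}, with x_i pinned to 0 for target nodes."""
    x = [0.0 if i in targets else math.inf for i in range(n)]
    for _ in range(rounds or n):
        for i in range(n):
            if i in targets:
                continue
            x[i] = min((lengths[i][j] + x[j] for j in range(n)
                        if j != i), default=math.inf)
    return x

def backtrack(lengths, targets, x, start, n):
    """Construct s1 -> s2 -> ... by repeatedly choosing the neighbor
    minimizing l_{s,j} + x_j, as described in Remark 6.2."""
    path = [start]
    while path[-1] not in targets:
        i = path[-1]
        path.append(min((j for j in range(n) if j != i),
                        key=lambda j: lengths[i][j] + x[j]))
    return path

# Hypothetical 4-node graph; lengths[i][j] = +inf when no edge exists.
inf = math.inf
L = [[inf, 1.0, 4.0, inf],
     [inf, inf, 2.0, inf],
     [inf, inf, inf, 1.0],
     [inf, inf, inf, inf]]
x = shortest_lengths(L, targets={3}, n=4)
print(x)                                   # [4.0, 3.0, 1.0, 0.0]
print(backtrack(L, {3}, x, start=0, n=4))  # [0, 1, 2, 3]
```

Note that each update of x_i uses only neighbor information, mirroring the distributed character pointed out after Eq. (6.3).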

6.3 Unified Method

In this section, we discuss the design of the biased consensus neural networks for solving the two types of shortest path problems. We first discuss the simple case with only one destination node, and then move on to the case with multiple feasible destination nodes.

6.3.1 Classical Shortest Path Problem

Based on the analysis above, the classical shortest path problem with only one destination node is described as a linear program:

$$
\begin{aligned}
\max\ & \sum_{i=2}^{n} x_i,\\
\text{s.t.}\ & x_i \le x_j + l_{ij},\ i \ne j,\ j = 1, 2, \cdots, n,\\
& x_1 = 0,
\end{aligned}
\tag{6.4}
$$

in which xi are the decision variables, of which the optimal values correspond to the lengths of the shortest paths from each source node i (i ≠ 1) to the destination node 1. The optimization problem described in (6.4) can be rewritten as the following minimization problem:

$$
\begin{aligned}
\min\ & \mathbf{c}^{\mathrm{T}} \mathbf{x},\\
\text{s.t.}\ & A\mathbf{x} \le \mathbf{l},
\end{aligned}
\tag{6.5}
$$

in which the symbols are defined as c = [−1, −1, · · · , −1]^T ∈ R^{n−1}, x = [x2, x3, · · · , xn]^T ∈ R^{n−1}, and

$$
A = [W_1^{\mathrm{T}}, W_2^{\mathrm{T}}, \cdots, W_{n-1}^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{R}^{(n-1)^2 \times (n-1)},
$$

where Wi ∈ R^{(n−1)×(n−1)} (i = 1, 2, · · · , n − 1) is given as

$$
W_i =
\begin{bmatrix}
0 & \cdots & 0 & 1 & 0 & \cdots & 0\\
-1 & \cdots & 0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & -1 & 1 & 0 & \cdots & 0\\
0 & \cdots & 0 & 1 & -1 & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & 1 & 0 & \cdots & -1
\end{bmatrix}.
$$

Note that all elements of the ith column of matrix Wi are 1, and

$$
\mathbf{l} = [l_{21}, l_{23}, \cdots, l_{n,n-1}]^{\mathrm{T}} \in \mathbb{R}^{(n-1)^2}.
$$

In practice, to make the computation more efficient when solving the minimization problem (6.5), constraints that always hold can be directly removed by deleting the corresponding elements in l and the matched rows in A. For example, xi ≤ xj + lij always holds if lij = +∞. Let the matrix and vector obtained after the removal of ignorable constraints be denoted by C and d, which correspond to A and l, respectively. The optimization problem described by (6.5) is then reduced to

$$
\begin{aligned}
\min\ & \mathbf{c}^{\mathrm{T}} \mathbf{x},\\
\text{s.t.}\ & C\mathbf{x} \le \mathbf{d},
\end{aligned}
\tag{6.6}
$$

which has fewer constraints and can be solved more efficiently. For the problem formulations (6.5) and (6.6), a comparison is given below.

Remark 6.3 The original formulation (6.5) applies to the dynamic shortest path problem, where the graph edges vary with time. For this kind of problem description, when solving it, we can use a sufficiently large number to represent +∞. At the same time, the problem description in (6.5) is quite redundant for static shortest path problems, because for static shortest path problems the graph connectivity does not change over time, and we can directly remove those constraints that are always satisfied. In other words, when it comes to static shortest path problems, the simplified description (6.6) outperforms (6.5) in terms of storage cost and computational efficiency. However, it should be noted that (6.6) does not apply to dynamic shortest path problems, especially when hardware implementation is involved.
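The constraint-removal step from (6.5) to (6.6) amounts to dropping every row of Ax ≤ l whose right-hand side is +∞. A minimal sketch, on a hypothetical toy system:

```python
import math

def prune_constraints(A, l):
    """Drop rows of Ax <= l whose bound is +inf (constraints that always
    hold), yielding the reduced system Cx <= d as in (6.6)."""
    keep = [r for r, bound in enumerate(l) if not math.isinf(bound)]
    C = [A[r] for r in keep]
    d = [l[r] for r in keep]
    return C, d

# Hypothetical toy system with two removable (infinite-bound) rows:
A = [[1, -1], [-1, 1], [1, 0], [0, 1]]
l = [0.5, math.inf, 2.0, math.inf]
C, d = prune_constraints(A, l)
print(len(C), d)  # 2 [0.5, 2.0]
```

For a sparse static graph this removes most of the (n − 1)² rows, which is exactly the storage and efficiency gain claimed in Remark 6.3.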

For solving (6.5), according to the results in [107], we can design a continuous-time recurrent neural network whose model is

$$
\begin{cases}
\lambda \dot{x}_i = -\sigma \sum_{j \ne i} \phi(x_i - x_j - l_{ij}) + \sigma \sum_{j \ne i} \phi(x_j - x_i - l_{ji}) + 1, & \text{if } i \ne 1,\\
x_i = 0, & \text{if } i = 1,
\end{cases}
\tag{6.7}
$$

where the sums are over j = 1, 2, · · · , n with j ≠ i, in which λ > 0 is a design coefficient of the neural network, σ denotes the nonnegative gain parameter, and φ(·) denotes the activation function with

$$
\phi(h)
\begin{cases}
= 1, & \text{if } h > 0,\\
\in [0, 1], & \text{if } h = 0,\\
= 0, & \text{if } h < 0.
\end{cases}
\tag{6.8}
$$

Accordingly, to solve the shortest path planning problem with the simplified problem formulation (6.6), we can design the following continuous-time recurrent neural network:

$$
\begin{cases}
\lambda \dot{x}_i = -\sigma \sum_{j \in N(i)} \phi(x_i - x_j - l_{ij}) + \sigma \sum_{j \in Q(i)} \phi(x_j - x_i - l_{ji}) + 1, & \text{if } i \ne 1,\\
x_i = 0, & \text{if } i = 1,
\end{cases}
\tag{6.9}
$$

in which Q(i) = {j | lji ≠ +∞, j ≠ i}. It is worth pointing out that the model (6.9) is equivalent to (6.7) for static graphs, meaning that the values of xi at the steady state are the same.
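The dynamics (6.9) can be explored numerically with a forward-Euler discretization. The sketch below replaces the hard limiter (6.8) with a steep sigmoid to keep the discretization well behaved, so the steady states only approximate the shortest path lengths; the four-node graph, the parameter values, and the function names are all hypothetical, not taken from this book.

```python
import math

INF = math.inf

def phi(h, eps=0.01):
    """Steep sigmoid surrogate for the hard limiter (6.8); eps -> 0
    recovers the limiter."""
    z = h / eps
    if z > 50.0:
        return 1.0
    if z < -50.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def simulate(L, dest, sigma=10.0, lam=1.0, dt=1e-3, steps=10_000):
    """Forward-Euler integration of the biased consensus network (6.9).

    L[i][j] is the edge length l_ij (INF when the edge is absent) and
    dest is the single destination node, whose neuron is pinned to 0.
    """
    n = len(L)
    x = [0.0] * n
    for _ in range(steps):
        rate = [0.0] * n
        for i in range(n):
            if i == dest:
                continue                    # x_dest is held at 0
            s = 1.0                         # the constant input term
            for j in range(n):
                if j == i:
                    continue
                if L[i][j] < INF:           # j in N(i): negative feedback
                    s -= sigma * phi(x[i] - x[j] - L[i][j])
                if L[j][i] < INF:           # j in Q(i): positive feedback
                    s += sigma * phi(x[j] - x[i] - L[j][i])
            rate[i] = s / lam
        x = [xi + dt * ri for xi, ri in zip(x, rate)]
    return x

# Hypothetical 4-node graph with destination node 0:
L = [[INF, INF, INF, INF],
     [1.0, INF, INF, INF],   # edge 1 -> 0, length 1
     [3.0, 1.0, INF, INF],   # edges 2 -> 0 (3.0) and 2 -> 1 (1.0)
     [INF, INF, 1.0, INF]]   # edge 3 -> 2, length 1
x = simulate(L, dest=0)
print([round(v, 2) for v in x])  # approximately [0, 1, 2, 3]
```

The steady states approach the shortest path lengths 0, 1, 2, 3 up to a small bias introduced by the sigmoid smoothing and the finite gain σ.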

6.3.2 Generalized Shortest Path Problem

In this subsection, we discuss the shortest path problem for the case where the number of alternative destination nodes is more than one, which is more complicated than the case discussed in the above subsection. The set of alternative destination nodes is denoted by Ω. Consider a graph G = (V, E). The shortest path problem with the set of feasible destination nodes being Ω is described as

$$
\begin{aligned}
\max\ & \sum_{i \notin \Omega} x_i,\\
\text{s.t.}\ & x_i \le x_j + l_{ij},\ i \notin \Omega,\ j \in N(i),\\
& x_i = 0,\ \forall i \in \Omega,
\end{aligned}
\tag{6.10}
$$

in which xi are the decision variables representing the possible length of a path from the node i (i ∉ Ω) to a feasible destination node in the set Ω. Then, we have the continuous-time recurrent neural network model for solving the problem shown in (6.10) as follows:

$$
\begin{cases}
\lambda \dot{x}_i = -\sigma \sum_{j \in N(i)} \phi(x_i - x_j - l_{ij}) + \sigma \sum_{j \in Q(i)} \phi(x_j - x_i - l_{ji}) + 1, & \text{if } i \notin \Omega,\\
x_i = 0, & \text{if } i \in \Omega.
\end{cases}
\tag{6.11}
$$

For the continuous-time recurrent neural network model (6.11), we have the following remark about the intuition of the design.

Remark 6.4 For the continuous-time recurrent neural network model (6.11), the evolution of the state variable xi, ∀i ∈ V, only needs the state information of neighboring nodes. The state evolution of non-destination nodes mainly contains two parts. For neuron i, the first part −σ Σ_{j∈N(i)} φ(xi − xj − lij) forms a negative feedback to force xi to satisfy the inequality constraint xi ≤ xj + lij for every j ∈ N(i). At the same time, the second part σ Σ_{j∈Q(i)} φ(xj − xi − lji) forms a positive feedback to drive the value of xi to increase such that every node j ∈ Q(i) satisfies xj ≤ xi + lji. Besides, the constant term 1 could to some extent be viewed as an energy input to the ith neuron to keep it active. In essence, there exists a competition-like mechanism that drives the state values of non-destination nodes to be maximized subject to the inequality constraints. Intuitively, we may expect an equilibrium state at which

$$
x_i = \min_{j \in N(i)} \{x_j + l_{ij}\}, \quad \forall i \notin \Omega.
$$

An important setting concerns the state variables of the destination nodes, whose values are always 0. Consequently, the requirements shown in (6.3) are met at the equilibrium state.

Remark 6.5 The continuous-time recurrent neural network (6.11) is actually related to consensus, and can be referred to as a biased consensus neural network. The explanation is given as follows. From the neural network model (6.11), if lij is either 0 or +∞ for each pair of nodes i and j in the graph, the model (6.11) becomes

$$
\begin{cases}
\dfrac{\lambda}{\sigma} \dot{x}_i = -\sum_{j \in N(i)} \phi(x_i - x_j) + \sum_{j \in Q(i)} \phi(x_j - x_i) + \dfrac{1}{\sigma}, & \text{if } i \notin \Omega,\\
\dot{x}_i = x_i = 0, & \text{if } i \in \Omega.
\end{cases}
\tag{6.12}
$$

Let λ = εσ

with ε > 0. If σ → +∞, the model (6.12) degrades into the following:

$$
\begin{cases}
\varepsilon \dot{x}_i = -\sum_{j \in N(i)} \phi(x_i - x_j) + \sum_{j \in Q(i)} \phi(x_j - x_i), & \text{if } i \notin \Omega,\\
\dot{x}_i = x_i = 0, & \text{if } i \in \Omega.
\end{cases}
\tag{6.13}
$$

The dynamics shown in (6.13) essentially model a consensus neural network where the neurons i ∈ Ω are the leaders. The properties of (6.13) lie in the existence of multiple leaders and in its distributed nature. Actually, investigations on leader-follower type consensus are very active in the consensus community, while the relationship between consensus and shortest path planning is seldom studied [122, 124, 128–130].

6.4 Theoretical Analysis

In the previous section, we presented the biased consensus based continuous-time recurrent neural networks for solving the two types of shortest path problems. In this section, we provide theoretical analysis on the performance of the biased consensus neural networks. Before showing the main results, we first give a lemma about the nonlinear equation (6.2).

Lemma 6.1 ([120]) If the shortest path problem has a solution, then the solution to the nonlinear equation (6.2) is unique.

On the basis of Lemma 6.1, we are ready to give the main result about the relationship between the linear program (6.4) and the nonlinear equation (6.2).

Theorem 6.1 The solution to the linear program (6.4) is the same as that to the nonlinear equation (6.2) if the corresponding classical shortest path problem has a solution.

Proof The proof consists of two parts: in the first part, we show that a solution to the optimization problem (6.4) is also a solution to the nonlinear equation (6.2); in the second part, we use the solution uniqueness of the nonlinear equation (6.2). Let the solution to the optimization problem (6.4) be denoted by x̂i with i = 2, 3, · · · , n. Because x̂i is a solution to the optimization problem (6.4), x̂i satisfies all the constraints in (6.4), by which we have x̂i ≤ x̂j + lij, where i ≠ j and j = 1, 2, · · · , n. Thus, x̂i ≤ min_{j≠i} {lij + x̂j}, ∀i = 2, 3, · · · , n.

Meanwhile, if x̂i < min_{j≠i} {lij + x̂j} held for some i, then replacing such x̂i by min_{j≠i} {lij + x̂j} would yield another feasible solution with

$$
\sum_{i=2}^{n} \min_{j \ne i} \{l_{ij} + \hat{x}_j\} > \sum_{i=2}^{n} \hat{x}_i,
$$

which contradicts the optimality of x̂i. By summarizing the above analysis, we have x̂i = min_{j≠i} {lij + x̂j}, ∀i = 2, 3, · · · , n. Besides, in the problem description shown in (6.4), we know that the value of the state variable x1 is always 0. Thus, we can conclude that the solution to the optimization problem (6.4) also satisfies the nonlinear equation (6.2). Additionally, on the basis of Lemma 6.1, we can guarantee the uniqueness of the solution to the nonlinear equation (6.2), which completes the proof.

Now, we are ready to show the main result about the performance of the continuous-time biased consensus neural network (6.7).

Theorem 6.2 The state vector x(t) of the continuous-time biased consensus recurrent neural network (6.7) is globally convergent to a solution to the linear program (6.5).

Proof We first rewrite the continuous-time biased consensus recurrent neural network (6.7) in the following form:

$$
\lambda \dot{\mathbf{x}} = -\sigma A^{\mathrm{T}} \phi(A\mathbf{x} - \mathbf{l}) - \mathbf{c}. \tag{6.14}
$$

Let y = Ax − l. In view of the neural network dynamics (6.14), we further have

$$
\dot{\mathbf{y}} = (-\sigma A A^{\mathrm{T}} \phi(\mathbf{y}) - A\mathbf{c})/\lambda.
$$

Recall that AA^T is full-rank because A is of full column rank. Let Q = AA^T and b = Ac.

Let the smallest eigenvalue of the matrix Q be denoted by γmin. Then, we have γmin > 0, because Q is full-rank. Consider the following auxiliary function:

$$
V_1 = \sum_{i=1}^{n} y_i^{+},
$$

in which yi⁺ = yi when yi ≥ 0 and yi⁺ = 0 when yi < 0. Calculating the time derivative of V1 along the trajectory of the system, we have

$$
\begin{aligned}
\dot{V}_1 &= \sum_{i=1}^{n} \dot{y}_i \phi(y_i) = \phi^{\mathrm{T}}(\mathbf{y}) \dot{\mathbf{y}}\\
&= -\frac{1}{\lambda}\left(\sigma \phi^{\mathrm{T}}(\mathbf{y}) Q \phi(\mathbf{y}) + \phi^{\mathrm{T}}(\mathbf{y}) \mathbf{b}\right)\\
&\le -\frac{1}{\lambda}\left(\sigma \|\phi(\mathbf{y})\|_2^2 \gamma_{\min} + \phi^{\mathrm{T}}(\mathbf{y}) \mathbf{b}\right),
\end{aligned}
$$

in which the operator ‖·‖2 stands for the 2-norm of a vector. In light of the definition of φ(·) in Eq. (6.8), we can derive the following norm inequality:

$$
\|\phi(\mathbf{y})\|_2^2 = \sum_{i=1}^{n} \phi^2(y_i) \le \sum_{i=1}^{n} 1 = n,
$$

by which we know that ‖φ(y)‖2 ≤ √n. As a result,

$$
\dot{V}_1 \le -\frac{\sigma \|\phi(\mathbf{y})\|_2^2 \gamma_{\min}}{\lambda} + \frac{\sqrt{n}\|\mathbf{b}\|_2}{\lambda}.
$$

Now, we move on to discuss the case where there exists i ≤ n such that yi > 0. Then, in light of the definition of φ(·) in Eq. (6.8), we have φ(yi) = 1

and ‖φ(y)‖2 ≥ 1. Thus, the following inequality about V̇1 holds:

$$
\dot{V}_1 \le -\frac{\sigma \gamma_{\min}}{\lambda} + \frac{\sqrt{n}\|\mathbf{b}\|_2}{\lambda}.
$$

If σ > √n‖b‖2/γmin, then we have

$$
\dot{V}_1 \le -(\sigma \gamma_{\min} - \sqrt{n}\|\mathbf{b}\|_2)/\lambda < 0. \tag{6.15}
$$

By directly solving (6.15), we have

$$
V_1(t) \le V_1(0) - (\sigma \gamma_{\min} - \sqrt{n}\|\mathbf{b}\|_2)t/\lambda.
$$

Clearly, if

$$
t \ge T_0 = \frac{\lambda V_1(0)}{\sigma \gamma_{\min} - \sqrt{n}\|\mathbf{b}\|_2},
$$

then V1(t) ≤ 0. Because

$$
V_1(t) = \sum_{i=1}^{n} y_i^{+} \ge 0,
$$

we know that V1(t) = 0 for t > T0. As a result, yi ≤ 0 for every i = 1, 2, · · · , n, leading to the conclusion that y ≤ 0, i.e., Ax − l ≤ 0, for t > T0. According to the invariance principle [131], in the following, our analysis is performed on the set S = {x | Ax − l ≤ 0}. Let

$$
J = \mathbf{1}^{\mathrm{T}} \phi(A\mathbf{x} - \mathbf{l}),
$$

in which 1 is a column vector whose elements are all 1. Clearly, we have ∇J = ∂J/∂x = A^T φ(Ax − l), by which the continuous-time recurrent neural network model (6.7), i.e., λẋ = −σA^T φ(Ax − l) − c, can be further written as follows:

$$
\lambda \dot{\mathbf{x}} = -(\sigma \nabla J + \mathbf{c}).
$$

Consider another auxiliary function V2 = σJ + c^T x. The time derivative of V2 along the trajectory of the dynamical system is calculated as follows:

$$
\dot{V}_2 = \left(\sigma (\nabla J)^{\mathrm{T}} + \mathbf{c}^{\mathrm{T}}\right) \dot{\mathbf{x}} = -\frac{1}{\lambda} \|\sigma \nabla J + \mathbf{c}\|_2^2 \le 0,
$$

from which we have σ∇J + c = 0 as t → +∞, i.e., σA^T φ(Ax − l) + c = 0. Let μ = σφ(Ax − l). Consider the Lagrangian function given below:

$$
L(\mathbf{x}, \boldsymbol{\mu}) = \mathbf{c}^{\mathrm{T}} \mathbf{x} + \boldsymbol{\mu}^{\mathrm{T}} (A\mathbf{x} - \mathbf{l}). \tag{6.16}
$$

According to the above results, we have

$$
\begin{cases}
\nabla L(\mathbf{x}, \boldsymbol{\mu}) = \mathbf{c} + A^{\mathrm{T}} \boldsymbol{\mu} = \mathbf{0},\\
A\mathbf{x} - \mathbf{l} = \mathbf{0}, & \text{if } \boldsymbol{\mu} > \mathbf{0},\\
A\mathbf{x} - \mathbf{l} < \mathbf{0}, & \text{if } \boldsymbol{\mu} = \mathbf{0}.
\end{cases}
\tag{6.17}
$$

6.5 Computer Simulations

111

Thus, we conclude that, in the invariant set, x of the continuous-time recurrent neural network model (6.7) satisfies the optimality conditions for the solution of the linear program [132]. Together with the fact that the candidate functions are radially unbounded, according to the invariance principle [131], we further conclude that the state vector x of the continuous-time recurrent neural network model (6.7) is globally asymptotically convergent to the optimal solution x* of the linear program (6.5). On the basis of the above two theorems, we have the following results.

Corollary 6.1 The state vector x(t) of the continuous-time biased consensus recurrent neural network (6.7) is globally convergent to a solution of the nonlinear equation (6.2).

Proof According to Theorem 6.2, the state vector x(t) of the continuous-time recurrent neural network model (6.7) is globally asymptotically convergent to the optimal solution to the linear program (6.5). Meanwhile, from Theorem 6.1, we know the equivalence between the linear program (6.4) and the nonlinear equation (6.2), whose solution is unique. Thus, the state vector x(t) of the continuous-time biased consensus recurrent neural network (6.7) is globally asymptotically convergent to a solution of the nonlinear equation (6.2).

Corollary 6.2 The state vector x(t) of the continuous-time biased consensus recurrent neural network (6.11) is globally convergent to a solution of the nonlinear equation (6.3).

Proof The proof can be completed by following the procedures for the proof of Theorems 6.1 and 6.2, and is thus omitted.

Corollary 6.3 The continuous-time consensus neural network (6.12) globally asymptotically achieves consensus.

Proof The proof can be completed by following the procedures for the proof of Theorems 6.1 and 6.2, and is thus omitted.

6.5 Computer Simulations

In this section, to evaluate the performance of the continuous-time biased consensus recurrent neural networks in finding shortest paths over graphs, we discuss the simulation results for the two types of shortest path problems by using the corresponding continuous-time biased consensus recurrent neural networks.

Fig. 6.1 The six-node graph adopted in Example 1 for evaluating the performance of the biased consensus recurrent neural network (6.9), in which node 1 is the destination node

6.5.1 Example 1

In this subsection, we show an example of finding a shortest path over a directed graph by using the biased consensus recurrent neural network (6.9). The graph considered here has six nodes, where node 1 is the destination node, as depicted in Fig. 6.1. According to the aforementioned results, the shortest path problem can be formulated as a linear program (6.6) with

A^T = [1, −1, 0, 0, 0, 0, 0, 0, 0, −1, 0;
0, 1, 1, −1, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 1, 1, 1, 0, −1, 0, 0, 0;
0, 0, 0, 0, −1, 0, 1, 1, 0, 0, −1;
0, 0, −1, 0, 0, −1, 0, 0, 1, 1, 1],

b = [0.3, 0.8, 0.2, 0.4, 0.3, 0.1, 0.2, 0.05, 0.6, 0.7, 0.1]^T,

and x = [x2, x3, x4, x5, x6]^T. Because the node with label 1 is the destination node, we set x1 = 0 all the time, and the initial values of the other state variables are randomly generated. The parameters of the biased consensus recurrent neural network (6.9) are set to λ = 10⁻⁶ and σ = 10. The simulation result is presented in Fig. 6.2, from which, together with Remark 6.2, the shortest path from any node in the graph to the destination node, i.e., node 1, can be obtained. For example, we can easily find that a shortest path from node 4 to node 1 is 4 → 6 → 5 → 1. This verifies the effectiveness of the biased consensus recurrent neural network (6.9).

6.5.2 Example 2

In this subsection, we show an example of finding a shortest path over the directed graph shown in Fig. 6.3 by using the biased consensus recurrent neural network (6.11). In the graph considered here, the feasible destination nodes are node

Fig. 6.2 State evolutions of neurons, which correspond to non-destination nodes, of the biased consensus recurrent neural network (6.9) with the parameters set to λ = 10−6 and σ = 10 when solving the shortest path problem with respect to the graph shown in Fig. 6.1

Fig. 6.3 The graph adopted in Example 2 for evaluating the performance of the biased consensus recurrent neural network (6.11), in which node 1, node 2, and node 3 are the feasible destination nodes, i.e., Ω = {1, 2, 3}

1, node 2, and node 3, i.e., Ω = {1, 2, 3}. According to the aforementioned results, the shortest path problem can be formulated as the linear program below:

max Σ_{i=4}^{23} xi,
s.t. xi ≤ xj + lij, i ≠ j, j = 1, 2, · · · , 23.
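The linear program above can be checked numerically on a small graph. The sketch below fixes the destination variables at zero and maximizes the sum of the remaining ones; the five-node undirected graph, the destination set Ω = {0, 1}, and the use of scipy.optimize.linprog are our assumptions for illustration, not from the text:

```python
# Sketch: solving  max sum(x_i)  s.t.  x_i <= x_j + l_ij  on a small
# hypothetical graph with destination set Omega = {0, 1} fixed at zero.
import numpy as np
from scipy.optimize import linprog

omega = {0, 1}                    # feasible destination nodes, x fixed at 0
free = [2, 3, 4]                  # non-destination nodes (decision variables)
edges = [(0, 2, 1.0), (1, 3, 1.5), (2, 3, 1.0), (3, 4, 0.5), (2, 4, 2.0)]

col = {n: k for k, n in enumerate(free)}
A_ub, b_ub = [], []
for u, v, l in edges:
    for i, j in ((u, v), (v, u)):  # undirected edge: constrain both directions
        row = np.zeros(len(free))
        if i in col:
            row[col[i]] += 1.0     # +x_i
        if j in col:
            row[col[j]] -= 1.0     # -x_j (destination nodes contribute 0)
        A_ub.append(row)
        b_ub.append(l)

# linprog minimizes, so maximize sum(x) by minimizing -sum(x)
res = linprog(c=-np.ones(len(free)), A_ub=np.array(A_ub), b_ub=b_ub)
print(res.x)   # distances from nodes 2, 3, 4 to the nearest destination
```

At the optimum, each xi equals the length of the shortest path from node i to the nearest node in Ω, which is exactly the quantity the biased consensus recurrent neural network (6.11) converges to.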


Fig. 6.4 State evolutions of neurons, which correspond to non-destination nodes, of the biased consensus recurrent neural network (6.11) with the parameters set to λ = 10−6 and σ = 20 when solving the shortest path problem with respect to the graph shown in Fig. 6.3


For the feasible destination nodes, we set x1 = x2 = x3 = 0 and ẋ1 = ẋ2 = ẋ3 = 0. The initial states of the neurons corresponding to non-destination nodes, i.e., x(0) = [x4(0), x5(0), · · · , x23(0)]T, are randomly generated. The state evolutions of these neurons, with the parameters set to λ = 10−6 and σ = 20 when solving the shortest path problem, are shown in Fig. 6.4, from which we observe that the neurons corresponding to non-destination nodes converge. To better analyze the steady-state behavior of the biased consensus recurrent neural network (6.11), the steady states are shown in Fig. 6.5. From the aforementioned theoretical analysis, the neuron states of the biased consensus recurrent neural network (6.11) are globally asymptotically convergent to the solution of nonlinear equation (6.3) with respect to the shortest path problem in this example. Thus, by checking the steady states shown in Fig. 6.5 and following Remark 6.2, we can derive, from any node, a path of the shortest length to the set Ω = {1, 2, 3}. For instance, the path with the shortest length from node 22 to the nodes in Ω = {1, 2, 3} is 22 → 17 → 16 → 14 → 11 → 7 → 3. It can be verified manually against the graph shown in Fig. 6.3 that the obtained shortest path is correct. This verifies the effectiveness of the biased consensus recurrent neural network (6.11).
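The steady state reached in this multi-destination case satisfies the min-type fixed-point equation with every node of Ω pinned at zero, which a plain successive relaxation reproduces. The toy undirected graph below (with Ω = {0, 1}) is hypothetical, chosen only for illustration:

```python
# Sketch: the multi-destination steady state satisfies
#   x_i = min over neighbors j of (x_j + l_ij),  x_i = 0 for i in Omega.
# A Gauss-Seidel-style relaxation on a made-up graph reaches this fixed point.
import math

omega = {0, 1}
adj = {0: [(2, 1.0)], 1: [(3, 1.5)],
       2: [(0, 1.0), (3, 1.0), (4, 2.0)],
       3: [(1, 1.5), (2, 1.0), (4, 0.5)],
       4: [(2, 2.0), (3, 0.5)]}

x = {n: (0.0 if n in omega else math.inf) for n in adj}
changed = True
while changed:                     # relax until the fixed point is reached
    changed = False
    for i in adj:
        if i in omega:
            continue
        best = min(x[j] + l for j, l in adj[i])
        if best < x[i]:
            x[i], changed = best, True

print(x[4])   # distance from node 4 to the nearest node in Omega
```

Each converged value is the length of the shortest path from that node to whichever destination in Ω is closest, mirroring the steady states plotted in Fig. 6.5.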


Fig. 6.5 Steady states of neurons of the biased consensus recurrent neural network (6.11), which correspond to the shortest lengths of paths from non-destination nodes to the set Ω = {1, 2, 3} of feasible destination nodes, with the parameters set to λ = 10−6 and σ = 20 when solving the shortest path problem with respect to the graph shown in Fig. 6.3


6.6 Robot Navigation Application

In this section, we show how the presented method can be applied to the navigation of mobile robots. We consider the scenario where the robot needs to fetch a diamond in a labyrinth whose map is given in Fig. 6.6. In Fig. 6.6, the black region represents the obstacle area, which cannot be passed by the robot, and the white region represents the area that can be passed by the robot. Besides, the circles in the map represent the positions where the diamonds are located. Clearly, the area in Fig. 6.6 can be divided into two parts: the area that can be passed by the robot, denoted by set F, and the area that cannot be passed by the robot, denoted by set O. In addition, the position marked with a rectangle in Fig. 6.6 is the starting position of the robot. Here, we only consider the navigation of the mobile robot, i.e., given a map, how the robot can navigate itself to fetch a diamond. Clearly, this problem is a shortest path problem with multiple feasible destination nodes. Let the set D denote the feasible destination set consisting of the locations of the diamonds. Let N(i, j) = {(z1, z2) | z1 ∈ {i−1, i, i+1}, z2 ∈ {j−1, j, j+1}, (z1, z2) ≠ (i, j), (z1, z2) ∉ O}.

Let U = {(i, j) | (i, j) ∉ D, (i, j) ∉ O}. Then, to solve the robot navigation problem, we first obtain the length of the shortest path from the starting position of the mobile robot to the positions with diamonds. Thus, according to the biased consensus recurrent neural network (6.11) for solving the generalized shortest path problem, we have the


6 Biased Consensus Based Distributed Neural Network

Fig. 6.6 The map adopted in the robot navigation application, where the black region represents the obstacle area, which cannot be passed by the robot, the white region represents the area that can be passed by the robot, the position marked with a rectangle is the starting position of the robot, and the circles represent the positions where the diamonds are located

specific biased consensus recurrent neural network for this application as follows:

λẋij = −σ Σ_{(z1,z2)∈N(i,j)} [φ(−√((z1 − i)2 + (z2 − j)2) + xij − xz1z2) − φ(−√((z1 − i)2 + (z2 − j)2) + xz1z2 − xij)] + 1, if (i, j) ∈ U,
xij = 0, if (i, j) ∈ D,
xij = +∞, if (i, j) ∈ O.
(6.18)

The performance of the biased consensus recurrent neural network (6.18) can be theoretically guaranteed by following the proofs of Theorems 6.1 and 6.2, by which xij is globally asymptotically convergent to the length of the shortest paths from the position with label ij to all the positions with diamonds. Let (r1(k), r2(k)) denote the position of the mobile robot at the kth time instant. Let x̂ij denote the steady-state solution for the shortest path problem, which is generated by the biased consensus recurrent neural network (6.18). At the (k + 1)th time instant, the position that the mobile robot should go to in the map can be calculated by minimizing the following function:

x̂p1p2 + √((p1 − r1(k))2 + (p2 − r2(k))2),


Fig. 6.7 The trajectory of the mobile robot navigated by the presented scheme aided by the biased consensus recurrent neural network (6.18) with the parameters set to σ = 30 and λ = 10−6 for the diamond fetching task with a given map

where the decision variable is (p1, p2) ∈ N(r1, r2). Because we restrict the mobile robot to eight movement directions, there are at most eight candidate positions in the set N(r1, r2), and thus computing the next position is fast. In the simulation, we set the parameters of the biased consensus recurrent neural network to σ = 30 and λ = 10−6. As seen from Fig. 6.7, the mobile robot reaches a position storing a diamond. We further verify that the resulting path has the shortest length among all possible paths starting from the initial position and ending at a position with a diamond. These results show the application potential of the presented biased consensus neural network.
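The navigation scheme described above can be sketched in two steps: compute the distance field that (6.18) converges to by min-relaxation over the 8-neighborhood, then steer the robot greedily with the next-position rule. The 5 × 5 map, obstacle wall, diamond location, and start cell below are all hypothetical, chosen only for illustration:

```python
# Sketch of the navigation scheme: compute the steady-state distance field of
# the network (6.18) by min-relaxation over the 8-neighborhood, then steer the
# robot greedily. The 5x5 map below is made up, not the book's labyrinth.
import math

ROWS, COLS = 5, 5
O = {(2, 1), (2, 2), (2, 3)}          # obstacle cells (set O)
D = {(4, 4)}                          # diamond cells (set D)

def neighbors(i, j):
    """N(i, j): free 8-neighborhood cells with Euclidean step costs."""
    for z1 in (i - 1, i, i + 1):
        for z2 in (j - 1, j, j + 1):
            if (0 <= z1 < ROWS and 0 <= z2 < COLS
                    and (z1, z2) != (i, j) and (z1, z2) not in O):
                yield (z1, z2), math.hypot(z1 - i, z2 - j)

# steady-state field: x = 0 on D, excluded on O, min-relaxed elsewhere
x = {(i, j): (0.0 if (i, j) in D else math.inf)
     for i in range(ROWS) for j in range(COLS) if (i, j) not in O}
changed = True
while changed:
    changed = False
    for cell in x:
        if cell in D:
            continue
        best = min(x[n] + c for n, c in neighbors(*cell))
        if best < x[cell]:
            x[cell], changed = best, True

# greedy navigation: move to the neighbor minimizing x_hat + step cost
pos, path = (0, 0), [(0, 0)]
while pos not in D:
    pos = min(neighbors(*pos), key=lambda nc: x[nc[0]] + nc[1])[0]
    path.append(pos)
print(path[-1])   # the robot ends at a diamond cell
```

Because every greedy step strictly decreases the field value, the robot is guaranteed to reach a diamond cell whenever its start cell has a finite distance.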

6.7 Summary

In this chapter, we have shown the design and analysis of biased consensus recurrent neural networks for finding shortest paths over given graphs in a distributed manner for two cases: the first is the traditional shortest path problem with a single destination node; the second, which is more complicated, involves multiple alternative destination nodes. The design is inspired by the optimality principle of dynamic programming, by which the problems are described as linear programs. The theoretical results indicate that the biased consensus recurrent neural networks are capable of generating the values of lengths


of shortest paths for the two cases, by which the shortest paths can be readily constructed. The performance of the biased consensus recurrent neural networks has been evaluated by computer simulations, and an application to a robot navigation task has further shown their effectiveness.

References 1. S. Kim, M.E. Lewis, C.C. White, State space reduction for nonstationary stochastic shortest path problems with real-time traffic information. IEEE Trans. Intell. Transp. Syst. 6, 273–284 (2005) 2. J. Cota-Ruiz, P. Rivas-Perea, E. Sifuentes, R. Gonzalez-Landaeta, A recursive shortest path routing algorithm with application for wireless sensor network localization. IEEE Sensors J. 16, 4631–4637 (2016) 3. J.J. Junior, P.C. Cortex, A.R. Backes, Color texture classification using shortest paths in graphs. IEEE Trans. Image Process. 23, 3751–3761 (2014) 4. Y. Zhang, X. Yan, D. Chen, D. Guo, W. Li, QP-based refined manipulability-maximizing scheme for coordinated motion planning and control of physically constrained wheeled mobile redundant manipulators. Nonlinear Dyn. 85, 245–261 (2016) 5. E.L. Lawler, Combinatorial Optimization: Networks and Matroids (Holt, Rinehart, and Winston, New York, 1976) 6. J. Wang, A recurrent neural network for solving the shortest path problem. IEEE Trans. Circuits Systems I Fund. Theory Appl. 43, 482–486 (1996) 7. X. Xia, J. Wang, A discrete-time recurrent neural network for shortest-path routing. IEEE Trans. Autom. Control 45(11), 2129–2134 (2000) 8. C.C. Sun, G.E. Jan, S.W. Leu, K.C. Yang, Y.C. Chen, Near-shortest path planning on a quadratic surface with O(nlogn) time. IEEE Sensors J. 15, 6079–6080 (2015) 9. G.E. Jan, C.C. Sun, W.C. Tsai, T.H. Lin, An O(nlogn) shortest path algorithm based on Delaunay triangulation. IEEE/ASME Trans. Mechatron. 19, 660–666 (2014) 10. S. Ma, K. Feng, J. Li, H. Wang, G. Cong, J. Huai, Proxies for shortest path and distance queries. IEEE Trans. Knowl. Data Eng. 28(7), 1835–1849 (2016) 11. G. Lei, Y. Dou, R. Li, F. Xia, An FPGA implementation for solving the large single-sourceshortest-path problem. IEEE Trans. Circuits Syst. II 63, 473–477 (2016) 12. X. Li, R. Rakkiyappan, G. Velmurugan, Dissipativity analysis of memristor-based complexvalued neural networks with time-varying delays. Inf. Sci. 
294, 645–665 (2015) 13. S. Li, S. Chen, B. Liu, Y. Li, Y. Liang, Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing 91, 1– 10 (2012) 14. S. Li, H. Cui, Y. Li, B. Liu, Y. Lou, Decentralized control of collaborative redundant manipulators with partial command coverage via locally connected recurrent neural networks. Neural Comput. Appl. 23(3), 1051–1060 (2013) 15. L. Jin, Y. Zhang, S. Li, Y. Zhang, Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 63(11), 6978–6988 (2016) 16. S. Li, J. He, Y. Li, M.U. Rafique, Distributed recurrent neural networks for cooperative control of manipulators: a game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 415–426 (2017) 17. L. Jin, S. Li, H. M. La, X. Luo, Manipulability optimization of redundant manipulators using dynamic neural networks. IEEE Trans. Ind. Electron. 64(6), 4710–4720 (2017)


18. Y. Li, S. Li, B. Hannaford, A novel recurrent neural network for improving redundant manipulator motion planning completeness, in 2018 IEEE International Conference on Robotics and Automation (2018), pp. 2956–2961 19. Y. Zhang, S. Li, A neural controller for image-based visual servoing of manipulators with physical constraints. IEEE Trans. Neural Netw. Learn. Syst. 29(11), 5419–5429 (2018) 20. S. Li, M. Zhou, X. Luo, Modified primal-dual neural networks for motion control of redundant manipulators with dynamic rejection of harmonic noises. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 4791–4801 (2018) 21. S. Li, H. Wang, M.U. Rafique, A novel recurrent neural network for manipulator control with improved noise tolerance. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1908–1918 (2018) 22. L. Jin, S. Li, X. Luo, Y. Li, B. Qin, Neural dynamics for cooperative control of redundant robot manipulators. IEEE Trans. Ind. Inf. 14(9), 3812–3821 (2018) 23. J. Li, Y. Zhang, S. Li, M. Mao, New discretization-formula-based zeroing dynamics for realtime tracking control of serial and parallel manipulators. IEEE Trans. Ind. Inf. 14(8), 3416– 3425 (2018) 24. D. Chen, Y. Zhang, S. Li, Tracking control of robot manipulators with unknown models: a Jacobian-matrix-adaption method. IEEE Trans. Ind. Inf. 14(7), 3044–3053 (2018) 25. Y. Zhang, S. Li, J. Gui, X. Luo, Velocity-level control with compliance to acceleration-level constraints: a novel scheme for manipulator redundancy resolution. IEEE Trans. Ind. Inf. 14(3), 921–930 (2018) 26. L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inf. 14(1), 98–105 (2018) 27. Y. Zhang, S. Chen, S. Li, Z. Zhang, Adaptive projection neural network for kinematic control of redundant manipulators with unknown physical parameters. IEEE Trans. Ind. Electron. 
65(6), 4909–4920 (2018) 28. Z. Zhang, Y. Lin, S. Li, Y. Li, Z. Yu, Y. Luo, Tricriteria optimization-coordination motion of dual-redundant-robot manipulators for complex path planning. IEEE Trans. Contr. Sys. Technol. 26(4), 1345–1357 (2018) 29. L. Jin, S. Li, B. Hu, C. Yi, Dynamic neural networks aided distributed cooperative control of manipulators capable of different performance indices. Neurocomputing 291, 50–58 (2018) 30. L. Jin, S. Li, J. Yu, J. He, Robot manipulator control using neural networks: a survey. Neurocomputing 285, 23–34 (2018) 31. D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 32. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 33. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 34. Z. Zhang, S. Chen, S. Li, Compatible convex-nonconvex constrained QP-based dual neural networks for motion planning of redundant robot manipulators. IEEE Trans. Contr. Syst. Technol. 27(3), 1250–1258 (2019) 35. Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329, 255–266 (2019) 36. S. Li, Y. Zhang, L. Jin, Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2243–2254 (2016) 37. A.M. Mohammed, S. Li, Dynamic neural networks for kinematic redundancy resolution of parallel Stewart platforms. IEEE Trans. Cybern. 46(7), 1538–1550 (2016) 38. L. Jin, Y. Zhang, G2-type SRMPC scheme for synchronous manipulation of two redundant robot Arms. IEEE Trans. Cybern. 45(2), 153–164 (2015)


39. L. Jin, S. Li, L. Xiao, R. Lu, B. Liao, Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. 48(10), 1715–1724 (2017). https://doi.org/10.1109/TSMC.2017.2693400 40. L. Jin, S. Li, Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst. Man Cybern. Syst. 48(5), 693–701 (2016). https://doi.org/10.1109/TSMC.2016. 2627579 41. L. Jin, S. Li, B. Hu, M. Liu, A survey on projection neural networks and their applications. Appl. Soft Comput. 76, 533–544 (2019) 42. L. Xiao, K. Li, Z. Tan, Z. Zhang, B. Liao, K. Chen, L. Jin, S. Li, Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 142, 35–40 (2019) 43. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 44. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 45. L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 102–113 (2018) 46. L. Xiao, Z. Zhang, Z. Zhang, W. Li, S. Li, Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw. 105, 185–196 (2018) 47. Z. Zhang, Y. Lu, L. Zheng, S. Li, Z. Yu, Y. Li, A new varying-parameter convergentdifferential neural-network for solving time-varying convex QP problem constrained by linear-equality. IEEE Trans. Autom. Control 63(12), 4110–4125 (2018) 48. L. Jin, S. Li, Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267, 107–113 (2017) 49. L. Jin, S. Li, B. Liao, Z. 
Zhang, Zeroing neural networks: a survey. Neurocomputing 267, 597–604 (2017) 50. M. Mao, J. Li, L. Jin, S. Li, Y. Zhang, Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207, 220–230 (2016) 51. L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 27(12), 2615–2627 (2016) 52. S. Li, Y. Li, Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44(8), 1397–1407 (2014) 53. S. Li, Y. Li, Z. Wang, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw. 39, 27–39 (2013) 54. L. Xiao, R. Lu, Finite-time solution to nonlinear equation using recurrent neural dynamics with a specially-constructed activation function. Neurocomputing 151, 246–251 (2016) 55. L. Xiao, Y. Zhang, A new performance index for the repetitive motion of mobile manipulators. IEEE Trans. Cybern. 44, 280–292 (2014) 56. Y. Wang, L. Cheng, Z.G. Hou, J. Yu, M. Tan, Optimal formation of multirobot systems based on a recurrent neural network. IEEE Trans. Neural Netw. Learn. Syst. 27, 322–333 (2016) 57. B. Liao, Q. Xiang, S. Li, Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 325, 234–241 (2019) 58. P.S. Stanimirovic, V.N. Katsikis, S. Li, Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 329, 129–143 (2019) 59. D. Chen, S. Li, Q. Wu, Rejecting chaotic disturbances using a super-exponential-zeroing neurodynamic approach for synchronization of chaotic sensor systems. Sensors 19(1), 74 (2019)


60. Q. Wu, X. Shen, Y. Jin, Z. Chen, S. Li, A.H. Khan, D. Chen, Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 19(8), 1758 (2019) 61. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 62. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 63. L. Jin, S. Li, B. Hu, M. Liu, J. Yu, A noise-suppressing neural algorithm for solving the timevarying system of linear equations: a control-based approach. IEEE Trans. Ind. Inf. 15(1), 236–246 (2019) 64. Y. Li, S. Li, B. Hannaford, A model-based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inf. 15(4), 2054–2063 (2019) 65. L. Xiao, S. Li, F. Lin, Z. Tan, A.H. Khan, Zeroing neural dynamics for control design: comprehensive analysis on stability, robustness, and convergence speed. IEEE Trans. Ind. Inf. 15(5), 2605–2616 (2019) 66. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, X. Liu, Reconfigurable battery systems: a survey on hardware architecture and research challenges. ACM Trans. Des. Autom. Electron. Syst. 24(2), 19:1–19:27 (2019) 67. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 68. L. Jin, S. Li, H. Wang, Z. Zhang, Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl. Soft Comput. 62, 840–850 (2018) 69. M. Liu, S. Li, X. Li, L. Jin, C. Yi, Z. Huang, Intelligent controllers for multirobot competitive and dynamic tracking. Complexity 2018, 4573631:1–4573631:12 (2018) 70. D. Chen, Y. Zhang, S. 
Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 71. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 72. P.S. Stanimirovic, V.N. Katsikis, S. Li, Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 316, 124–134 (2018) 73. X. Li, J. Yu, S. Li, L. Ni, A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 317, 70–78 (2018) 74. L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 102–113 (2018) 75. Z. Zhang, Y. Lu, L. Zheng, S. Li, Z. Yu, Y. Li, A new varying-parameter convergentdifferential neural-network for solving time-varying convex QP problem constrained by linear-equality. IEEE Trans. Autom. Control 63(12), 4110–4125 (2018) 76. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, Incorporation of efficient secondorder solvers into latent factor models for accurate prediction of missing QoS data. IEEE Trans. Cybern. 48(4), 1216–1228 (2018) 77. L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inf. 14(1), 98–105 (2018) 78. L. Jin, S. Li, B. Hu, RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Ind. Inf. 14(1), 189–199 (2018) 79. X. Luo, M. Zhou, S. Li, M. Shang, An inherently nonnegative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Ind. Inf. 14(5), 2011–2022 (2018) 80. H. Wang, P.X. Liu, S. Li, D. 
Wang, Adaptive neural output-feedback control for a class of nonlower triangular nonlinear systems with unmodeled dynamics. IEEE Trans. Neural Netw. Learn. Syst. 29(8), 3658–3668 (2018)


81. Y. Li, S. Li, B. Hannaford, A novel recurrent neural network for improving redundant manipulator motion planning completeness, in Proceeding of 2018 IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 2956–2961 82. M.A. Mirza, S. Li, L. Jin, Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 266, 114–122 (2017) 83. L. Jin, S. Li, Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267, 107–113 (2017) 84. L. Jin, Y. Zhang, S. Li, Y. Zhang, Noise-tolerant ZNN models for solving time-varying zerofinding problems: a control-theoretic approach. IEEE Trans. Autom. Control 62(2), 992–997 (2017) 85. Z. You, M. Zhou, X. Luo, S. Li, Highly efficient framework for predicting interactions between proteins. IEEE Trans. Cybern. 47(3), 731–743 (2017) 86. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, N. Guan, A robust algorithm for state-of-charge estimation with gain optimization. IEEE Trans. Ind. Inf. 13(6), 2983–2994 (2017) 87. X. Luo, J. Sun, Z. Wang, S. Li, M. Shang, Symmetric and nonnegative latent factor models for undirected, high-dimensional, and sparse networks in industrial applications. IEEE Trans. Ind. Inf. 13(6), 3098–3107 (2017) 88. X. Luo, S. Li, Non-negativity constrained missing data estimation for high-dimensional and sparse matrices, in Proceeding of 2017 13th IEEE Conference on Automation Science and Engineering (CASE) (2017), pp. 1368–1373 89. Y. Li, S. Li, D.E. Caballero, M. Miyasaka, A. Lewis, B. Hannaford, Improving control precision and motion adaptiveness for surgical robot with recurrent neural network, in Proceeding of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017), pp. 3538–3543 90. X. Luo, M. Zhou, M. Shang, S. Li, Y. Xia, A novel approach to extracting non-negative latent factors from non-negative big sparse matrices. 
IEEE Access 4, 2649–2655 (2016) 91. M. Mao, J. Li, L. Jin, S. Li, Y. Zhang, Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207, 220–230 (2016) 92. Y. Huang, Z. You, X. Li, X. Chen, P. Hu, S. Li, X. Luo, Construction of reliable proteinprotein interaction networks using weighted sparse representation based classifier with pseudo substitution matrix representation features. Neurocomputing 218, 131–138 (2016) 93. X. Luo, M. Zhou, H. Leung, Y. Xia, Q. Zhu, Z. You, S. Li, An incremental-and-staticcombined scheme for matrix-factorization-based collaborative filtering. IEEE Trans. Autom. Sci. Eng. 13(1), 333–343 (2016) 94. S. Li, Z. You, H. Guo, X. Luo, Z. Zhao, Inverse-free extreme learning machine with optimal information updating. IEEE Trans. Cybern. 46(5), 1229–1241 (2016) 95. X. Luo, M. Zhou, S. Li, Z. You, Y. Xia, Q. Zhu, A nonnegative latent factor model for largescale sparse matrices in recommender systems via alternating direction method. IEEE Trans. Neural Netw. Learn. Syst. 27(3), 579–592 (2016) 96. L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 27(12), 2615–2627 (2016) 97. X. Luo, M. Shang, S. Li, Efficient extraction of non-negative latent factors from highdimensional and sparse matrices in industrial applications, in Proceeding of 2016 IEEE 16th International Conference on Data Mining (ICDM) (2016), pp. 311–319 98. X. Luo, S. Li, M. Zhou, Regularized extraction of non-negative latent factors from highdimensional sparse matrices, in Proceeding of 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), vol 2016 (2016), pp. 1221–1226 99. X. Luo, Z. Ming, Z. You, S. Li, Y. Xia, H. Leung, Improving network topology-based protein interactome mapping via collaborative filtering. Knowl.-Based Syst. 90, 23–32 (2015)


100. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, An efficient second-order approach to factorize sparse matrices in recommender systems. IEEE Trans. Ind. Inf. 11(4), 946–956 (2015) 101. L. Wong, Z. You, S. Li, Y. Huang, G. Liu, Detection of protein-protein interactions from amino acid sequences using a rotation forest model with a novel PR-LPQ descriptor, in Proceeding of International Conference on Intelligent Computing, vol 2015 (2015), pp. 713– 720 102. Z. You, J. Yu, L. Zhu, S. Li, Z. Wen, A MapReduce based parallel SVM for large-scale predicting protein-protein interactions. Neurocomputing 145, 37–43 (2014) 103. S. Li, Y. Li, Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44(8), 1397–1407 (2014) 104. Y. Li, S. Li, Q. Song, H. Liu, M. Q.H. Meng, Fast and robust data association using posterior based approximate joint compatibility test. IEEE Trans. Ind. Inf. 10(1), 331–339 (2014) 105. Q. Huang, Z. You, S. Li, Z. Zhu, Using Chou’s amphiphilic pseudo-amino acid composition and extreme learning machine for prediction of protein-protein interactions, in Proceeding of 2014 International Joint Conference on Neural Networks (IJCNN) (2014), pp. 2952–2956 106. S. Li, B. Liu, Y. Li, Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 24(2), 301– 309 (2013) 107. Q. Liu, J. Wang, Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions. IEEE Trans. Neural Netw. 22, 601–613 (2011) 108. Y. Xia, J. Wang, A bi-projection neural network for solving constrained quadratic optimization problems. IEEE Trans. Neural Netw. Learn. Syst. 27, 214–224 (2016) 109. S. Zhang, Y. Xia, J. Wang, A complex-valued projection neural network for constrained optimization of real functions in complex variables. IEEE Trans. 
Neural Netw. Learn. Syst. 26, 3227–3238 (2015) 110. X. Li, S. Song, Impulsive control for existence, uniqueness, and global stability of periodic solutions of recurrent neural networks with discrete and continuously distributed delays. IEEE Trans. Neural. Netw. Learn. Syst. 24, 868–877 (2013) 111. J.J. Hopfield, D.W. Tank, ‘Neural’ computation of decisions in optimization problems.Biol. Cybern. 52, 141–152 (1985) 112. D.W. Tank, J.J. Hopfield, Simple “neural” optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit. IEEE Trans. Circuits Syst. 33, 533–541 (1986) 113. F. Araújo, B. Ribeiro, L. Rodrigues, A neural network for shortest path computation. IEEE Trans. Neural Netw. 12(5), 1067–1073 (2001) 114. L. Taccari, Integer programming formulations for the elementary shortest path problem. Eur. J. Oper. Res. 252, 122–130 (2016) 115. A. Nazemi, F. Omidi, An efficient dynamic model for solving the shortest path problem. Transport. Res. C-Emer. 26, 1–19 (2013) 116. Y. Zhang, L. Wu, G. Wei, S. Wang, A novel algorithm for all pairs shortest path problem based on matrix multiplication and pulse coupled neural network. Digital Signal Process. 21, 517–521 (2011) 117. Y. Sang, J. Lv, H. Qu, Z. Yi, Shortest path computation using pulse-coupled neural networks with restricted autowave. Knowl.-Based Syst. 114, 1–11 (2016) 118. X. Li, Y. Ma, X. Feng, Self-adaptive autowave pulse-coupled neural network for shortest-path problem. Neurocomputing 115, 63–71 (2013) 119. H. Qu, Z. Yi, S.X. Yang, Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks. IEEE Trans. Cybern. 43, 995–1010 (2013) 120. R. Bellman, On a routing problem. Q. Appl. Math. 16(1), 87–90 (1958) 121. H. Li, G. Chen, X. Liao, T. Huang, Leader-following consensus of discrete-time multiagent systems with encoding-decoding. IEEE Trans. Circuits Syst. II, Exp. Briefs 63, 401–405 (2016)


122. S. Cheng, L. Yu, D. Zhang, L. Huo, J. Ji, Consensus of second-order multi-agent systems using partial agents’ velocity measurements. Nonlinear Dyn. 86, 1927–1935 (2016) 123. H. Li, G. Chen, T. Huang, Z. Dong, W. Zhu, L. Gao, Event-triggered distributed average consensus over directed digital networks with limited communication bandwidth. IEEE Trans. Cybern. 46, 3098–3110 (2016) 124. G.X. Wen, C.L.P. Chen, Y.J. Liu, Z. Liu, Neural network-based adaptive leader-following consensus control for a class of nonlinear multiagent state-delay systems. IEEE Trans. Cybern. 47(8), 2151–2160 (2017) 125. H. Li, G. Chen, T. Huang, Z. Dong, High-performance consensus control in networked systems with limited bandwidth communication and time-varying directed topologies. IEEE Trans. Neural Netw. Learn. Syst. 28(5), 1043–1054 (2017) 126. Y. Zhang, D. Chen, D. Guo, B. Liao, Y. Wang, On exponential convergence of nonlinear gradient dynamics system with application to square root finding. Nonlinear Dyn. 79, 983– 1003 (2015) 127. L. Xiao, A nonlinearly activated neural dynamics and its finite-time solution to time-varying nonlinear equation. Neurocomputing 173, 1983–1988 (2016) 128. Z. Ma, Y. Wang, X. Li, Cluster-delay consensus in first-order multi-agent systems with nonlinear dynamics. Nonlinear Dyn. 83(3), 1303–1310 (2016) 129. C.L.P. Chen, G.X. Wen, Y.J. Liu, Z. Liu, Observer-based adaptive backstepping consensus tracking control for high-order nonlinear semi-strict-feedback multiagent systems. IEEE Trans. Cybern. 46, 1591–1601 (2016) 130. B. Zhou, X. Liao, Leader-following second-order consensus in multi-agent systems with sampled data via pinning control. Nonlinear Dyn. 78, 555–569 (2014) 131. H.K. Khalil, Nonlinear Systems (Prentice-Hall, Upper Saddle River, 2002) 132. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)

Chapter 7

Near-Optimal Consensus

7.1 Introduction

The distributed cooperation of networked agents, often called multi-agent systems, has attracted considerable research attention in the past two decades owing to its great application significance. The foundation of distributed cooperation in multi-agent systems is consensus, whose objective is to make the agents in the network reach an agreement with respect to some quantities of interest [1, 2]. Consensus theory finds applications in various practical systems, including but not limited to robotic systems [3–5], micro-grids [6], autonomous underwater vehicles [7], and filter systems [8, 9]. Further applications are illustrated in [10–16] and the references therein.

Owing to this theoretical and practical significance, the past years have witnessed substantial advances in the design and analysis of distributed protocols for achieving consensus among networked agents. Different from computation mechanisms with a centralized unit, such as the models in [17–82], decentralized mechanisms have no such unit and only require local communications between directly connected neighboring agents. In [83], the consensus of multiple integrators with local communications is investigated under different communication topologies. In [84], the investigation reported in [83] was further enhanced so that the same objectives can be achieved under looser assumptions. While the consensus of continuous-time multi-agent systems of integrators is studied in [83] and [84], the corresponding case with a discrete-time system model is addressed in [85]. As an extensively encountered consensus type, average consensus for networked agents with time-varying communication topology is studied in [86], where the issue of time delay is also handled. Existing literature has shown that the optimal consensus of multiple first-order integrator agents can be realized via a linear distributed control law [87].
It is worth pointing out that the aforementioned results only apply to simple multi-agent systems of first-order integrators.

© Springer Nature Singapore Pte Ltd. 2020 Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_7


In terms of second-order agents, the distributed consensus problem has also been addressed in [88–96]. For example, theoretical guarantees were given in [90] for realizing distributed consensus among multiple locally connected double integrators subject to noise in state measurements. As a robust control strategy that can not only optimize a performance index but also handle state constraints, model predictive control has also been applied to the consensus of multi-agent systems of discrete-time integrators [93, 94]. Considering that some physical multi-agent systems can be modeled as high-order integrators, the consensus investigation has also been extended to the high-order case [97]. For consensus among agents with linear dynamics, results were reported in [98–101]. For instance, a distributed consensus protocol was proposed in [101] with the aid of the high-gain approach for a group of linear agents. When it comes to nonlinear agents, the consensus problem becomes more difficult due to the nonlinearity in the agent dynamics, for which some distributed protocols have also been proposed [102–108]. In particular, the consensus of multiple nonlinear agents with both leader and follower agents is addressed through the so-called M-matrix method [103], which was further applied to the case of second-order nonlinear agents [104]. Being broadly adopted in solving optimization problems [109] and the redundancy resolution of manipulators [61, 110, 111], the neural network method has also been found useful for the consensus problem. For instance, by using the universal approximation property of neural networks, one can handle the consensus problem of a group of nonlinear dynamic agents with unknown dynamics [107]. It should be noted that the optimal consensus problem among multiple agents has seldom been investigated.
For this problem, there are only a few results, which rely on the computational approach called model predictive control [93, 94, 104]. However, this method only applies to discrete-time system models, and its computational efficiency is still low for systems of high dimension. Meanwhile, discrete-time system models could introduce additional modeling errors for the controlled agents. Recently, some researchers have investigated the inverse optimality of control laws; for the consensus problem, inverse optimality was discussed in [112]. In this chapter, our objective is to discuss a scheme to realize consensus among multiple dynamic agents whose dynamics are linear or nonlinear but restricted to be second-order. The scheme is based on the idea of so-called receding-horizon optimal control [113–116]. A very important contribution in this area is the idea of reducing the complicated dynamic optimization problem to a simpler one via approximation, by which analytical controllers can be obtained [114, 115]. This chapter can be viewed as an extension of [113–115] from the centralized control of a single agent to the distributed control of multiple agents. The contents of this chapter start from the consensus of a group of agents with the simplest second-order dynamics, which is later extended to the case with second-order nonlinear dynamics.


Intuitively, the design of the distributed consensus protocols for each type of multi-agent system follows these procedures: (1) design and formulation of a time-varying optimization problem whose performance index depends on the states of the agents and whose constraints are the agent dynamics; (2) conversion of the problem into a simpler one via approximation with the aid of the Taylor expansion over time; (3) derivation of the corresponding analytical consensus protocol. Due to the inherent similarity of the consensus problem for different types of multi-agent systems, the extensions in this chapter are easy to follow. Through standard stability analysis, it is demonstrated that the closed-loop system is exponentially asymptotically stable, from which the conclusion about the asymptotic optimality of the consensus protocols is also derived. The effectiveness of the scheme is illustrated via numerical simulations.

The rest of this chapter is organized as follows. We first present the background knowledge for the formulation of the problem in Sect. 7.2. Then, in Sect. 7.3, we present the design of the consensus protocols for different types of second-order multi-agent systems in a successive manner. The theoretical analysis of the closed-loop consensus performance of the second-order agents equipped with the presented consensus protocols and of the optimality of the distributed consensus protocols is given in Sect. 7.4, followed by numerical simulation results for different cases in Sect. 7.5 to further illustrate the effectiveness of the consensus protocols. Concluding remarks are given in Sect. 7.6.

7.2 Problem Background

This section is concerned with the theoretical background of the problem investigated in this chapter. Some definitions from graph theory and some properties of Laplacian matrices are first presented [117]. We use G = (V, ε) to denote a graph whose nodes are given in a set V = {1, 2, · · · , N} and whose edges are given in another set ε, where N is the number of nodes. In this chapter, the considered graph is assumed to be undirected and connected, and thus ε = {(i, j) | node i and node j are connected, i ∈ V, j ∈ V, i ≠ j}. With the definition of ε, the neighboring nodes of a node i ∈ V can be described as a set N(i) = {j | (i, j) ∈ ε}. Let deg(i) denote the degree of a node i ∈ V, i.e., its number of neighboring nodes. To define the Laplacian matrix LG of the graph, we first define the degree matrix Δ ∈ RN×N and the adjacency matrix A ∈ RN×N. Specifically, we have

Δij = deg(i) if i = j, and Δij = 0 if i ≠ j,


and

Aij = 1 if (i, j) ∈ ε, and Aij = 0 otherwise.

Then, LG = Δ − A. Because the graph considered in this chapter is undirected and connected, we have the following basic results.

Property 7.1 Let 1 = [1, 1, · · · , 1]T ∈ RN denote the N-dimensional column vector of ones. 1 is an eigenvector of LG with the corresponding eigenvalue being 0. Besides, the eigenvalues of LG are non-negative, and only one eigenvalue is 0.

Property 7.2 The matrix LG is symmetric and positive semi-definite.

To lay a basis for further illustrations, we also need the following definitions adopted from [118, 119].

Definition 7.1 Nonlinear systems in the affine form can be modeled as

x˙ = f(x) + g(x)u(t),
y(t) = h(x),                                                  (7.1)

in which x(t) ∈ Rn, u(t) ∈ Rr, and y(t) ∈ Rr represent the state variable, control input, and system output, respectively. The functions f : Rn → Rn, g : Rn → Rn×r, and h : Rn → Rr are assumed to be smooth.
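As a quick numerical illustration of the constructions above (an added sketch, not part of the original text), the Laplacian of a small undirected connected graph can be assembled from its edge set and checked against Properties 7.1 and 7.2; the 4-node path graph below is an arbitrary example:

```python
import numpy as np

# Build Delta (degree matrix), A (adjacency matrix), and L_G = Delta - A
# for an undirected connected graph, then verify Properties 7.1 and 7.2.
edges = [(0, 1), (1, 2), (2, 3)]          # 4-node path graph (arbitrary example)
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0               # undirected: A is symmetric
Delta = np.diag(A.sum(axis=1))            # deg(i) = number of neighbours
L_G = Delta - A

ones = np.ones(N)
eigvals = np.sort(np.linalg.eigvalsh(L_G))
print(np.allclose(L_G @ ones, 0))         # 1 is an eigenvector with eigenvalue 0
print(np.allclose(L_G, L_G.T))            # symmetric (Property 7.2)
print(eigvals[0] >= -1e-12 and eigvals[1] > 1e-9)  # exactly one zero eigenvalue
```

For a connected graph, the second smallest eigenvalue (here 2 − √2) is strictly positive, which is the property the later convergence proofs rely on.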

Definition 7.2 Let ϕ denote a positive integer, and let Lfϕ h(x) denote the ϕth Lie derivative of h(x). To be specific, when ϕ = 0, Lf0 h(x) = h(x); when ϕ = 1,

Lf1 h(x) = (∂h(x)/∂x) f(x);

when ϕ > 1, we have

Lfϕ h(x) = (∂Lfϕ−1 h(x)/∂x) f(x).

Accordingly, the term Lg Lfϕ h(x) is calculated by the following rule:

Lg Lfϕ h(x) = (∂Lfϕ h(x)/∂x) g(x).


Definition 7.3 We say that the relative degree of the affine system (7.1) is ρ within the region U if the conditions specified below are satisfied:

• ∀x ∈ U, Lg Lfϕ h(x) = 0, when 0 ≤ ϕ < ρ − 1;
• ∀x ∈ U, Lg Lfρ−1 h(x) ≠ 0.

By Definition 7.3, for second-order dynamical systems, we have ρ = 2. For a multi-agent system consisting of N agents, the communication relationship can be described by a graph. As we consider bidirectional communication, we adopt undirected graphs, where an edge represents the existence of a communication channel between two agents. In the graph, each node represents an agent, and the graph is characterized by the corresponding Laplacian matrix LG. We assume that the communication graph is connected. With such configurations, the consensus problem here focuses on output consensus, i.e., the agents reaching the same output.
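To make Definitions 7.2 and 7.3 concrete, the following added sketch computes Lie derivatives by central finite differences and verifies that a double integrator with f(x) = [x2, 0], g = [0, 1], and h(x) = x1 (a standard textbook example, not from this chapter) has relative degree ρ = 2:

```python
import numpy as np

def grad(h, x, eps=1e-5):
    # numerical gradient of the scalar field h at x (central differences)
    g = np.zeros(len(x))
    for i in range(len(x)):
        d = np.zeros(len(x)); d[i] = eps
        g[i] = (h(x + d) - h(x - d)) / (2 * eps)
    return g

def lie(h, f, x):
    # L_f h(x) = (dh/dx) f(x)
    return float(grad(h, x) @ f(x))

f = lambda x: np.array([x[1], 0.0])   # double integrator drift
g = lambda x: np.array([0.0, 1.0])    # input vector field
h = lambda x: x[0]                    # output: position
x = np.array([1.0, 2.0])

Lf_h   = lie(h, f, x)                             # = x2 = 2
Lg_h   = lie(h, g, x)                             # = 0 (phi = 0 condition)
LgLf_h = lie(lambda z: lie(h, f, z), g, x)        # = 1 != 0  ->  rho = 2
print(Lf_h, Lg_h, LgLf_h)
```

Since Lg h(x) = 0 everywhere while Lg Lf h(x) = 1 ≠ 0, the system satisfies Definition 7.3 with ρ = 2, matching the second-order systems considered in this chapter.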

7.3 Consensus Protocols

In this section, we present the design of near-optimal consensus protocols for second-order multi-agent systems with different types of agent dynamics.

7.3.1 Multiple Double Integrators

Let xi, vi, ui, and yi denote the position, velocity, control input, and system output of the ith agent with i = 1, 2, · · · , N. Let x = [x1, x2, · · · , xN]T ∈ RN, v = [v1, v2, · · · , vN]T ∈ RN, u = [u1, u2, · · · , uN]T ∈ RN, and y = [y1, y2, · · · , yN]T ∈ RN. The dynamics of the multi-agent system consisting of multiple double integrators can be described as follows:

x˙ = v,
v˙ = u,                                                       (7.2)
y = x.


To realize the consensus of the multiple double integrators, the following time-varying cost function is considered:

Jd(t) = ∫0T yT(t + τ)Ly(t + τ)dτ + k ∫0T uT(t + τ)u(t + τ)dτ,          (7.3)

in which T > 0 ∈ R is referred to as the predictive period; L denotes a positive semi-definite symmetric matrix satisfying L1 = 0, i.e., L has the properties of a Laplacian matrix; and k > 0 ∈ R is the coefficient for minimizing the magnitude of the inputs of the agents. Our objective is to realize the minimization of the cost function (7.3) for every t. Evidently, the consensus problem of the multi-agent system with multiple double integrators can be described as the dynamic optimization problem below:

argu(t) min Jd(t),
s.t. x˙ = v,
     v˙ = u,
     y = x.

Remark 7.1 The cost function (7.3) consists of two parts. The first part ∫0T yT(t + τ)Ly(t + τ)dτ penalizes the differences among the outputs of the double integrators, while the second part ∫0T uT(t + τ)u(t + τ)dτ is designed to minimize the magnitude of the control inputs of the agents.

Remark 7.2 As seen from the problem formulation, the system dynamics, i.e., a set of ordinary differential equations, are involved as constraints, which makes the problem difficult to solve directly using the tools of optimization theory. In this regard, we will present a method that relaxes the problem into an unconstrained one.

According to the Taylor expansion [120], based on (7.2), we readily have

y(t + τ) ≈ y(t) + τ y˙(t) + (τ2/2) y¨(t) = x(t) + τ v(t) + (τ2/2) u(t),

and

u(t + τ) ≈ u(t).


It follows that the cost function Jd(t) can be approximated in the following manner:

Jd(t) ≈ ∫0T (x(t) + τ v(t) + (τ2/2) u(t))T L (x(t) + τ v(t) + (τ2/2) u(t)) dτ + k ∫0T uT(t)u(t) dτ

      = ∫0T ((τ4/4) uT(t)Lu(t) + τ2 xT(t)Lu(t) + τ3 vT(t)Lu(t) + xT(t)Lx(t)) dτ + kT uT(t)u(t) + ∗d

      = (T5/20) uT(t)Lu(t) + (T3/3) xT(t)Lu(t) + (T4/4) vT(t)Lu(t) + kT uT(t)u(t) + ∫0T xT(t)Lx(t) dτ + ∗d

      = Jˆd + ∗d,

in which ∗d represents the residual terms that are not explicitly related to the decision variable u(t). Thus, the minimization of the above cost function is equivalent to the minimization of the following one:

Jˆd = (T5/20) uT(t)Lu(t) + (T3/3) xT(t)Lu(t) + (T4/4) vT(t)Lu(t) + kT uT(t)u(t).

Recalling that the matrix L is positive semi-definite, we know that Jˆd is a convex function with respect to the decision variable u(t). Thus, we can derive the analytical consensus protocol that minimizes Jˆd by solving for u(t) from the equation

∂Jˆd/∂u = (T5/10) Lu(t) + 2kT u(t) + (T3/3) Lx(t) + (T4/4) Lv(t) = 0,

yielding

u(t) = −κ ((T4/10) L + 2kI)−1 L ((T2/3) x(t) + (T3/4) v(t)),          (7.4)

in which the coefficient κ > 0 ∈ R is referred to as the gain of the protocol.
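As an added numerical sanity check (not from the book), the stationary point above with κ = 1 can be compared against the quadratic Jˆd directly: the gradient vanishes at u*, and random perturbations only increase the cost. The 5-node path-graph Laplacian used for L is an arbitrary choice:

```python
import numpy as np

# Verify that u* = -(T^4 L/10 + 2kI)^{-1} L (T^2/3 x + T^3/4 v) minimises
# J^_d(u) = T^5/20 u'Lu + T^3/3 x'Lu + T^4/4 v'Lu + kT u'u  (kappa = 1).
rng = np.random.default_rng(0)
N, T, k = 5, 0.7, 1e-4
Adj = np.diag(np.ones(N - 1), 1); Adj = Adj + Adj.T   # path graph (arbitrary)
L = np.diag(Adj.sum(1)) - Adj                          # PSD Laplacian, L @ 1 = 0
x, v = rng.standard_normal(N), rng.standard_normal(N)

def J_hat(u):
    return T**5/20 * u @ L @ u + T**3/3 * x @ L @ u + T**4/4 * v @ L @ u + k*T * u @ u

u_star = -np.linalg.solve(T**4/10 * L + 2*k*np.eye(N), L @ (T**2/3 * x + T**3/4 * v))
grad = T**5/10 * L @ u_star + 2*k*T * u_star + T**3/3 * L @ x + T**4/4 * L @ v
print(np.allclose(grad, 0, atol=1e-10))                # stationary point
print(all(J_hat(u_star + 0.1 * rng.standard_normal(N)) >= J_hat(u_star)
          for _ in range(20)))                         # perturbations cost more
```

Because the Hessian T⁵L/10 + 2kT I is positive definite for k > 0, the stationary point is the unique global minimizer, which the perturbation check reflects.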


For the consensus protocol above, we have the following remark.

Remark 7.3 The consensus protocol (7.4) shares the communication graph of the real system consisting of multiple double integrators if we set (T4L/10 + 2kI)−1L = LG, by which the protocol becomes distributed.

7.3.2 Multiple Linear Agents

As an extension of the results in the above subsection, in this subsection we consider the case in which the agents have identical linear dynamics, modeled as

x˙i = Axi + bui,
yi = cT xi,

in which xi ∈ Rn, ui ∈ R, and yi ∈ R are defined as above, while A ∈ Rn×n, b ∈ Rn, and c ∈ Rn are referred to as the parameter matrix and vectors of the linear agent dynamics. Let X = [x1T, x2T, · · · , xNT]T, u = [u1, u2, · · · , uN]T, and y = [y1, y2, · · · , yN]T. The multi-agent system consisting of multiple linear agents is then modeled in the compact form

X˙ = ĀX + Bu,
y = CX,                                                       (7.5)

in which Ā = I ⊗ A ∈ RnN×nN, B = I ⊗ b ∈ RnN×N, and C = I ⊗ cT ∈ RN×nN, with ⊗ and I being the Kronecker product [121] and the N-by-N identity matrix, respectively. In this chapter, we are concerned with the situation


in which CB = 0 and CĀB ≠ 0, i.e., ρ = 2. Additionally, an assumption is adopted on the controllability of the multi-agent system with multiple linear agents, i.e., Rank([b, Ab, A2b, · · · , An−1b]) = n. To realize the consensus among a group of agents with linear dynamics (7.5), following the design in the previous subsection, we adopt the cost function

Jl(t) = ∫0T yT(t + τ)Ly(t + τ)dτ + k ∫0T (CĀĀX(t + τ) + CĀBu(t + τ))T (CĀĀX(t + τ) + CĀBu(t + τ))dτ,          (7.6)

by which the consensus problem can be described in the following manner:

argu(t) min Jl(t),
s.t. X˙ = ĀX + Bu(t),                                         (7.7)
     y = CX.

Because the linear agents are second-order, i.e., ρ = 2, we have

y˙(t) = CĀX(t)

and

y¨(t) = CĀĀX(t) + CĀBu(t).

According to the Taylor expansion, the future output y(t + τ) of the agents can be approximated via current states. Specifically, we have

y(t + τ) ≈ y(t) + τ y˙(t) + (τ2/2) y¨(t) = CX(t) + τ CĀX(t) + (τ2/2)(CĀĀX(t) + CĀBu(t)).

Accordingly, we have

X(t + τ) ≈ X(t)


and

u(t + τ) ≈ u(t).

It follows that Jl(t) satisfies

Jl(t) ≈ ∫0T (CX(t) + τ CĀX(t) + (τ2/2)(CĀĀX(t) + CĀBu(t)))T L (CX(t) + τ CĀX(t) + (τ2/2)(CĀĀX(t) + CĀBu(t))) dτ + k ∫0T (CĀĀX(t) + CĀBu(t))T (CĀĀX(t) + CĀBu(t)) dτ,

which can also be rewritten in the following manner:

∫0T ((τ4/4) uT(t)BTĀTCTLCĀBu(t) + τ2 XT(t)CTLCĀBu(t) + τ3 XT(t)ĀTCTLCĀBu(t) + (τ4/2) XT(t)ĀTĀTCTLCĀBu(t)) dτ + 2kT XT(t)ĀTĀTCTCĀBu(t) + kT uT(t)BTĀTCTCĀBu(t) + ∗l

= (T5/20) uT(t)BTĀTCTLCĀBu(t) + (T3/3) XT(t)CTLCĀBu(t) + (T4/4) XT(t)ĀTCTLCĀBu(t) + (T5/10) XT(t)ĀTĀTCTLCĀBu(t) + 2kT XT(t)ĀTĀTCTCĀBu(t) + kT uT(t)BTĀTCTCĀBu(t) + ∗l

= Jˆl(t) + ∗l,

in which ∗l represents the residual terms that are not explicitly related to u(t). By following the design steps in the above subsection, the optimal solution for the minimization of Jˆl + ∗l is the same as that for the minimization of the convex function Jˆl with respect to the decision variable. Thus, the near-optimal consensus protocol can be found by solving

(T5/10) BTĀTCTLCĀBu(t) + (T3/3) BTĀTCTLCX(t) + (T4/4) BTĀTCTLCĀX(t) + (T5/10) BTĀTCTLCĀĀX(t) + 2kT BTĀTCTCĀĀX(t) + 2kT BTĀTCTCĀBu(t) = 0.

Provided that the matrix CĀB is invertible, we can solve for the consensus protocol as

u(t) = −(CĀB)−1 (κL0 ((T2/3) CX(t) + (T3/4) CĀX(t)) + CĀĀX(t)),          (7.8)


in which

L0 = ((T4/10) L + 2kI)−1 L.

If we set L0 = LG , then the consensus protocol above is distributed.
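The protocol can be exercised numerically. The following added sketch (not from the book) simulates (7.8) with L0 = LG on four identical copies of the relative-degree-two triple (A, b, c) of the later simulation example (7.20); the path-graph topology, the gain κ = 300, and the Euler step are arbitrary illustrative choices:

```python
import numpy as np

# Protocol (7.8) on identical linear agents; since the agents are identical,
# C.Abar.B = (c'Ab) I, so the inverse reduces to a scalar division.
A = np.array([[0., 2., 0.], [0., 0., 1.], [2., 1., 1.]])
b = np.array([0., 0., 2.])
c = np.array([1., 1., 0.])
N, T, kappa, dt = 4, 0.7, 300.0, 1e-4

Adj = np.diag(np.ones(N - 1), 1); Adj = Adj + Adj.T
LG = np.diag(Adj.sum(1)) - Adj                   # path-graph Laplacian (L0 = LG)

cAb = c @ A @ b                                  # = 2 != 0, so rho = 2
rng = np.random.default_rng(1)
X = rng.standard_normal((N, 3))                  # row i is agent i's state x_i

for _ in range(int(4 / dt)):
    y = X @ c                                    # outputs  y_i  = c' x_i
    yd = X @ (A.T @ c)                           # y_i'    = c' A x_i
    drift = X @ (A.T @ A.T @ c)                  # c' A A x_i
    u = -(kappa * LG @ (T**2 / 3 * y + T**3 / 4 * yd) + drift) / cAb
    X = X + dt * (X @ A.T + np.outer(u, b))      # x_i' = A x_i + b u_i
print(np.ptp(X @ c) < 1e-2)                      # output spread shrinks to consensus
```

The closed-loop outputs obey y¨ = −κLG(T²y/3 + T³y˙/4), so the output disagreement decays exponentially while the consensus value itself may drift, as expected for a double-integrator-like output channel.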

7.3.3 Multiple Nonlinear Agents

In this subsection, we are concerned with the case in which the agent dynamics in the multi-agent system are nonlinear. The results in this subsection can be viewed as an extension of the above two subsections to a more complicated scenario. We consider a group of nonlinear agents equipped with sensing, communication, and actuation capabilities, whose dynamics are modeled as

x˙i = f(xi) + g(xi)ui,
yi = h(xi),

in which xi ∈ Rn, ui ∈ R, and yi ∈ R are defined as above, while the functions f(·) : Rn → Rn, g(·) : Rn → Rn, and h(·) : Rn → R are assumed to be smooth. The agent dynamics can be written in the compact form

X˙ = F(X) + G(X)u,
y = H(X),                                                     (7.9)

for which X = [x1T, x2T, · · · , xNT]T, y = [y1, y2, · · · , yN]T, H(X) = [h(x1), h(x2), · · · , h(xN)]T, F(X) = [fT(x1), fT(x2), · · · , fT(xN)]T, and G(X) is the block-diagonal matrix with g(x1), g(x2), · · · , g(xN) on its diagonal.


As mentioned in the previous section, this chapter is concerned with second-order multi-agent systems. Thus, in this subsection, we still consider the case that ρ = 2. Then, taking into account the agent dynamics (7.9), we obtain the following equations relating the output derivatives and the state variables:

y˙ = LF H(X),
y¨ = LF2 H(X) + LG LF H(X)u.

Let F′(X) = LF2 H(X) and G′(X) = LG LF H(X). It can be easily verified that the matrix G′(X) is symmetric. With these notations, we have

y¨ = F′(X) + G′(X)u.                                          (7.10)

Now, we are ready to present the corresponding cost function for the realization of consensus among the multiple nonlinear agents (7.9):

Jn(t) = ∫0T yT(t + τ)Ly(t + τ)dτ + ∫0T (F′(X(t + τ)) + G′(X(t + τ))u(t + τ))T Q (F′(X(t + τ)) + G′(X(t + τ))u(t + τ))dτ,          (7.11)

in which Q = kI ∈ RN×N, with k > 0 ∈ R, is the coefficient matrix of the cost function. It follows that the consensus problem can be modeled as a time-varying optimization problem with dynamic constraints:

argu(t) min Jn(t),
s.t. X˙ = F(X) + G(X)u(t),                                    (7.12)
     y = H(X).


In light of the Taylor expansion approach used in the above subsections, for the case with nonlinear agent dynamics (7.9), we have the following approximation of the future output via current states:

y(t + τ) ≈ y(t) + τ y˙(t) + (τ2/2) y¨(t) = H(X(t)) + τ LF H(X(t)) + (τ2/2) F′(X(t)) + (τ2/2) G′(X(t))u(t).

Accordingly, we have u(t + τ) ≈ u(t) and X(t + τ) ≈ X(t). Consequently,

Jn(t) ≈ ∫0T (H(X(t)) + τ LF H(X(t)) + (τ2/2) F′(X(t)) + (τ2/2) G′(X(t))u(t))T L (H(X(t)) + τ LF H(X(t)) + (τ2/2) F′(X(t)) + (τ2/2) G′(X(t))u(t)) dτ + ∫0T (F′(X(t)) + G′(X(t))u(t))T Q (F′(X(t)) + G′(X(t))u(t))dτ

= ∫0T ((τ4/4) uT(t)G′T(X(t))LG′(X(t))u(t) + τ2 HT(X(t))LG′(X(t))u(t) + τ3 (LF H(X(t)))T LG′(X(t))u(t) + (τ4/2) F′T(X(t))LG′(X(t))u(t)) dτ + 2T F′T(X(t))QG′(X(t))u(t) + T uT(t)G′T(X(t))QG′(X(t))u(t) + ∗n

= (T5/20) uT(t)G′T(X(t))LG′(X(t))u(t) + (T3/3) HT(X(t))LG′(X(t))u(t) + (T4/4) (LF H(X(t)))T LG′(X(t))u(t) + (T5/10) F′T(X(t))LG′(X(t))u(t) + 2T F′T(X(t))QG′(X(t))u(t) + T uT(t)G′T(X(t))QG′(X(t))u(t) + ∗n

= Jˆn(t) + ∗n,                                                 (7.13)


in which the symbol Jˆn(t) is introduced for ease of explanation, while ∗n represents the residual terms that are not explicitly related to u(t). Note that Jˆn(t) is a convex function with respect to the decision variable u(t), and

∂Jˆn(t)/∂u = (T5/10) G′T(X(t))LG′(X(t))u(t) + (T3/3) G′T(X(t))LH(X(t)) + (T4/4) G′T(X(t))L LF H(X(t)) + (T5/10) G′T(X(t))LF′(X(t)) + 2T G′T(X(t))QF′(X(t)) + 2T G′T(X(t))QG′(X(t))u(t).

The equivalence of the two minimization problems allows us to solve for u(t) from the equation ∂Jˆn(t)/∂u = 0, from which, recalling that Q = kI, we obtain the consensus protocol for second-order nonlinear multi-agent systems:

u(t) = −(LG LF H(X(t)))−1 (κL1 ((T2/3) y(t) + (T3/4) LF H(X(t))) + LF2 H(X(t))),          (7.14)

in which

L1 = ((T4/10) L + 2kI)−1 L.

It should be noted that, if we set L1 = LG , then the communication topology follows the real setting, and the consensus protocol is distributed.
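The nonlinear protocol can also be checked in simulation. The added sketch below applies (7.14) with L1 = LG to an assumed group of pendulum-like agents p¨i = −sin(pi) + ui with output yi = pi (so LF H = v, LF2 H = −sin(p), LG LF H = I); the ring topology and the gain κ are arbitrary illustrative choices:

```python
import numpy as np

# Protocol (7.14) on pendulum-like second-order nonlinear agents (assumed
# dynamics, not from the book): the term L_F^2 H = -sin(p) is cancelled by
# the protocol, leaving the linear closed loop p'' = -kappa L_G (T^2 p/3 + T^3 v/4).
N, T, kappa, dt = 5, 0.7, 200.0, 1e-4
Adj = np.zeros((N, N))
for i in range(N):
    Adj[i, (i + 1) % N] = Adj[(i + 1) % N, i] = 1.0
LG = np.diag(Adj.sum(1)) - Adj                   # ring-graph Laplacian (L1 = LG)

rng = np.random.default_rng(2)
p, v = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
for _ in range(int(4 / dt)):
    # (7.14): u = -(L_G L_F H)^{-1} (kappa L1 (T^2/3 y + T^3/4 L_F H) + L_F^2 H)
    u = -(kappa * LG @ (T**2 / 3 * p + T**3 / 4 * v) - np.sin(p))
    p, v = p + dt * v, v + dt * (-np.sin(p) + u)
print(np.ptp(p) < 0.05)                          # outputs reach consensus
```

Each agent's control only uses its own state and those of its graph neighbors (through LG), so the protocol is distributed as claimed.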

7.4 Theoretical Analysis

This section presents the theoretical analysis of the stability of the closed-loop second-order multi-agent systems with different types of agent dynamics synthesized by the presented near-optimal consensus protocols, as well as of the asymptotic optimality of the cost functions when the presented consensus protocols are employed.


7.4.1 Stability

We first present the theoretical results about the stability of the closed-loop second-order multi-agent systems when the presented consensus protocols are adopted.

Theorem 7.1 When the consensus protocol (7.14) is adopted, the agents of the multi-agent system (7.9) exponentially asymptotically reach consensus.

Proof In view of (7.10), replacing the control input in (7.9) with the consensus protocol (7.14) gives

y¨(t) = −κL1 ((T2/3) y(t) + (T3/4) y˙(t)).

Let the output difference among the agents be denoted by e(t) = L1 y(t). From Properties 7.1 and 7.2, we readily have L1 = L1T and L1 1 = 0. As a result, 1TL1 = 0 and 1Ty¨(t) = 0. Consequently, for any c > 0 ∈ R, the following equation holds: e¨(t) = L1 y¨(t) + c11Ty¨(t). Then,

e¨(t) = (L1 + c11T)y¨(t) = −κ (T2/3) Me(t) − κ (T3/4) M e˙(t),          (7.15)

in which M = L1 + c11T.


Due to the fact that L1 = L1T, by the spectral theorem [122], we have the following decomposition of the matrix L1:

L1 = λ1 α1 α1T + λ2 α2 α2T + · · · + λN αN αNT,

in which λi and αi represent the ith eigenvalue and the corresponding normalized eigenvector, respectively, with i = 1, 2, · · · , N. Let the unique smallest eigenvalue of the matrix L1 be denoted by λ1 = 0, and let the remaining eigenvalues be ordered as λ2 ≤ λ3 ≤ · · · ≤ λN. From Property 7.1, we have λi > 0 ∀i ≥ 2 as well as α1 = 1/√N, so that α1α1T = 11T/N. Accordingly, we have the decomposition of the matrix M as follows:

M = (λ1 + Nc)11T/N + λ2 α2 α2T + · · · + λN αN αNT.

Evidently, all the eigenvalues of the matrix M are strictly larger than 0, with the smallest eigenvalue being min{Nc, λ2} > 0.

Let e˙(t) = z(t). Then, (7.15) can be rewritten as

[e˙(t); z˙(t)] = [0, I; −κ(T2/3)M, −κ(T3/4)M] [e(t); z(t)] = W [e(t); z(t)].          (7.16)

According to the stability theory of linear dynamical systems, the stability of the closed-loop system (7.16) is determined by the eigenvalues of W. Let an eigenvalue and the corresponding eigenvector of the matrix W be denoted by γ and [pT, qT]T, respectively, with p ∈ RN and q ∈ RN. It follows that W[pT, qT]T = γ[pT, qT]T, i.e.,

[0, I; −κ(T2/3)M, −κ(T3/4)M] [p; q] = γ [p; q],

from which q = γp, and thus the following equation holds:

Mp = (−12γ2/(4T2κ + 3T3κγ)) p.


Obviously, −12γ2/(4T2κ + 3T3κγ) is one of the eigenvalues of the matrix M. Let −12γ2/(4T2κ + 3T3κγ) = λ2. Then, it is clear that

12γ2 + 3T3κλ2γ + 4T2κλ2 = 0.

Consequently,

γ = (−3T3λ2κ ± sqrt(9T6κ2λ22 − 192T2κλ2)) / 24.          (7.17)

Recalling that T, κ, and λ2 are all strictly larger than 0, we can readily verify that γ is located in the open left half of the complex plane, from which we further conclude that all the eigenvalues of W are located in the open left half of the complex plane. It follows from linear system theory [123] that the equilibrium e = 0 is exponentially asymptotically stable. That is to say, when the consensus protocol (7.14) is adopted, the agents of the multi-agent system (7.9) exponentially asymptotically reach consensus.

Corollary 7.1 When the consensus protocol (7.4) is adopted, the agents in system (7.2) consisting of multiple double integrators exponentially asymptotically reach consensus.

Proof The proof can be completed by following the procedures in the proof of Theorem 7.1. Specifically, the closed-loop system of (7.2) with protocol (7.4) is a typical case of (7.9) with LF H(X) = v, LF2 H(X) = 0, and LG LF H(X) = I.

Corollary 7.2 When the consensus protocol (7.8) is adopted, the linear agents in system (7.5) exponentially asymptotically reach consensus.

Proof The proof can be conducted by following the procedures in the proof of Theorem 7.1, and is thus omitted.
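The eigenvalue argument above is easy to confirm numerically. The following added sketch builds W from (7.16) for an arbitrary path graph, verifies that W is Hurwitz, and checks that the roots of the quadratic behind (7.17) (with the constant term 4T²κλ2) appear among the eigenvalues of W:

```python
import numpy as np

# Numerical check of Theorem 7.1's spectral argument; N, kappa, and c are
# arbitrary illustrative choices.
N, T, kappa, cc = 4, 0.7, 5.0, 0.5
Adj = np.diag(np.ones(N - 1), 1); Adj = Adj + Adj.T
L1 = np.diag(Adj.sum(1)) - Adj                   # path-graph Laplacian
M = L1 + cc * np.ones((N, N))                    # M = L1 + c 1 1'; all eigenvalues > 0

I, Z = np.eye(N), np.zeros((N, N))
W = np.block([[Z, I], [-kappa * T**2 / 3 * M, -kappa * T**3 / 4 * M]])
eig = np.linalg.eigvals(W)
print(eig.real.max() < 0)                        # W is Hurwitz

lam2 = np.sort(np.linalg.eigvalsh(L1))[1]        # smallest nonzero eigenvalue of L1
roots = np.roots([12, 3 * T**3 * kappa * lam2, 4 * T**2 * kappa * lam2])
print(all(np.min(np.abs(eig - r)) < 1e-6 for r in roots))  # (7.17) roots are eigenvalues of W
```

Each eigenvalue λ of M contributes the pair of roots of 12γ² + 3T³κλγ + 4T²κλ = 0, and since every λ > 0, all roots lie strictly in the left half-plane.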


7.4.2 Optimality of Cost Function

In this subsection, we theoretically show the asymptotic optimality of the cost functions of the consensus problems formulated in this chapter when the presented consensus protocols are employed.

Theorem 7.2 If the consensus protocol (7.14) is adopted for the system (7.9) consisting of multiple nonlinear agents, then the cost function Jn(t) defined in (7.11) is asymptotically optimal.

Proof For system (7.9), according to the Taylor expansion with exact remainder, one has

y(t + τ) = y(t) + τ y˙(t) + (τ2/2) y¨(t + η1τ)

as well as

y¨(t + τ) = y¨(t) + τ y(3)(t + η2τ),

for which η1 ∈ (0, 1) and η2 ∈ (0, 1), and y(3) denotes the third time derivative of y. Consider

Δ1 = (τ2/2) y¨(t + η1τ) − (τ2/2) y¨(t)

and

Δ2 = τ y(3)(t + η2τ).

In view of the formula of Jn(t) defined in (7.11), we readily have

Jn(t) = ∫0T (y(t) + τ y˙(t) + (τ2/2) y¨(t) + Δ1)T L (y(t) + τ y˙(t) + (τ2/2) y¨(t) + Δ1) dτ + k ∫0T (y¨(t) + Δ2)T (y¨(t) + Δ2) dτ.

By the triangle inequality [124], and taking into account the formulas of Jˆn(t) and ∗n(t) in (7.13), we further have

Jn(t) ≤ 2(Jˆn(t) + ∗n(t)) + 2 ∫0T Δ1T L Δ1 dτ + 2k ∫0T Δ2T Δ2 dτ.


From the proof of Theorem 7.1 and the stability theory of linear dynamical systems, we have

limt→+∞ L1 y(t) = 0

as well as

limt→+∞ L1 y˙(t) = 0.

In view of (7.10) and the nonlinear consensus protocol (7.14) for the second-order nonlinear multi-agent system (7.9), we further have

limt→+∞ y¨(t) = limt→+∞ (LF2 H(X(t)) + LG LF H(X(t))u(t)) = limt→+∞ (−κL1((T2/3) y(t) + (T3/4) LF H(X(t)))) = 0,

since the LF2 H(X(t)) terms cancel. This implies

limt→+∞ y(3)(t) = 0.

As a result, we can conclude that

limt→+∞ ∫0T Δ1T L Δ1 dτ = limt→+∞ ∫0T ((τ2/2) y¨(t + η1τ) − (τ2/2) y¨(t))T L ((τ2/2) y¨(t + η1τ) − (τ2/2) y¨(t)) dτ = 0

and

limt→+∞ 2k ∫0T Δ2T Δ2 dτ = limt→+∞ 2k ∫0T (τ y(3)(t + η2τ))T (τ y(3)(t + η2τ)) dτ = 0.

Additionally, because the consensus protocol u(t) shown in (7.14) minimizes the convex function Jˆn(t) + ∗n(t), whose minimum is 0, we have

limt→+∞ (Jˆn(t) + ∗n(t)) = 0


when u(t) shown in (7.14) is adopted. In view of the fact that Jn(t) ≥ 0, by the pinching (squeeze) theorem [125], it is concluded that

limt→+∞ Jn(t) = 0,                                            (7.18)

which completes the proof.

Corollary 7.3 For system (7.2) consisting of multiple double integrators, if the consensus protocol (7.4) is adopted, the cost function Jd(t) defined in (7.3) is asymptotically optimal.

Proof Because double integrators are a special case of second-order nonlinear systems and a systematic protocol design framework is adopted in this chapter, the proof of this corollary can be completed by following the procedures in the proof of Theorem 7.2.

Corollary 7.4 For system (7.5) consisting of a group of linear agents, if the consensus protocol (7.8) is adopted, the cost function Jl(t) defined in (7.6) is asymptotically optimal.

Proof The proof is similar to that of Theorem 7.2, and is thus omitted.



7.4.3 Existence of Matrix L

The design framework in this chapter for the consensus of different types of second-order multi-agent systems relies on choosing the matrix L appearing in the performance indices so that the consensus protocol is expressed in terms of LG. In this subsection, we theoretically analyze the existence of such a matrix L.

Theorem 7.3 For system (7.9) consisting of N nonlinear dynamic agents, if the parameter T in the cost function Jn(t) shown in (7.11) satisfies 0 < T ≤ (5/N)1/4, then, given any Laplacian matrix LG, the existence of matrix L is theoretically guaranteed, for which the consensus protocol (7.14) is asymptotically optimal.

Proof In view of consensus protocol (7.14), if (T4L/10 + 2kI)−1L = LG, we have

L = (T4L/10 + 2kI)LG,


i.e.,

L(I − (T4/10)LG) = 2kLG.                                      (7.19)

Consider Υ = I − T4LG/10, and let ℓi with i = 1, 2, · · · , N represent the eigenvalues of the Laplacian matrix LG. Here we need to keep in mind Property 7.1 about Laplacian matrices of undirected and connected graphs. Let the eigenvalues be ordered as ℓ1 < ℓ2 ≤ ℓ3 ≤ · · · ≤ ℓN, with ℓ1 = 0. According to the Gershgorin circle theorem [126], the following inequality holds for the largest eigenvalue ℓN of the Laplacian matrix LG of the communication graph:

|ℓN − lGii| ≤ Σj≠i |lGij| = deg(i)

for some i ∈ {1, 2, · · · , N}, where lGij denotes the ij-th element of LG. Since lGii = deg(i) ≤ N − 1, it follows that ℓN ≤ 2(N − 1) < 2N. Hence, when 0 < T ≤ (5/N)1/4, the eigenvalues of Υ satisfy 1 − T4ℓi/10 ≥ 1 − (5/N)ℓN/10 > 1 − (5/N)(2N)/10 = 0 with i = 1, 2, · · · , N. Thus, the matrix Υ is invertible. As a result, we can calculate L from Eq. (7.19):

L = 2k(I − (T4/10)LG)−1 LG.

Evidently, L1 = 0, LT = L, and the eigenvalues of L are non-negative. The proof is thus complete.

Corollary 7.5 For system (7.2) consisting of N double integrators, if the parameter T in the cost function Jd(t) shown in (7.3) satisfies 0 < T ≤ (5/N)1/4, then, given any Laplacian matrix LG, the existence of matrix L is theoretically guaranteed, for which the consensus protocol (7.4) is asymptotically optimal.


Proof The proof is similar to that for Theorem 7.3 and is thus omitted.



Corollary 7.6 For system (7.5) consisting of N linear agents, if the parameter T in the cost function Jl(t) shown in (7.6) satisfies 0 < T ≤ (5/N)1/4, then, given any Laplacian matrix LG, the existence of matrix L is theoretically guaranteed, for which the consensus protocol (7.8) is asymptotically optimal.

Proof The proof is similar to that for Theorem 7.3 and is thus omitted.
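The construction in the proof of Theorem 7.3 can be verified numerically. The added sketch below builds L = 2k(I − T⁴LG/10)⁻¹LG for a complete graph, a worst case for the eigenvalue bound ℓN < 2N, and checks that L is a valid symmetric positive semi-definite matrix with L1 = 0 that satisfies (T⁴L/10 + 2kI)⁻¹L = LG:

```python
import numpy as np

# Existence of L (Theorem 7.3): take T at the boundary (5/N)^{1/4} and the
# complete graph on N nodes (largest Laplacian eigenvalue = N).
N, k = 6, 1e-4
T = (5 / N) ** 0.25
Adj = np.ones((N, N)) - np.eye(N)                # complete graph
LG = np.diag(Adj.sum(1)) - Adj

Ups = np.eye(N) - T**4 / 10 * LG                 # Upsilon, invertible by Gershgorin
L = 2 * k * np.linalg.solve(Ups, LG)             # L = 2k (I - T^4 L_G/10)^{-1} L_G

print(np.allclose(L, L.T))                                        # symmetric
print(np.allclose(L @ np.ones(N), 0))                             # L 1 = 0
print(np.linalg.eigvalsh((L + L.T) / 2).min() > -1e-12)           # PSD
print(np.allclose(np.linalg.solve(T**4 / 10 * L + 2 * k * np.eye(N), L), LG))
```

The last check confirms that this L recovers exactly the distributed form (T⁴L/10 + 2kI)⁻¹L = LG assumed by protocols (7.4), (7.8), and (7.14).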



7.5 Simulation Results

To show the effectiveness of the presented consensus protocols for second-order multi-agent systems with different types of agent dynamics, we perform simulations for the corresponding cases and discuss the results in this section.

7.5.1 Double-Integrator Agents

In this subsection, we consider the case in which the system consists of eight agents of double-integrator dynamics, whose communication topology is given in Fig. 7.1. In the simulation, the parameter T is selected according to the requirement shown in Corollary 7.5, i.e., 0 < T ≤ (5/N)^{1/4} = (5/8)^{1/4} ≈ 0.889. The coefficients are chosen as T = 0.7, k = 0.0001, and κ = 300, while the initial values of the state variables of all the agents are given in a random manner. As seen from Fig. 7.2, the outputs of the agents asymptotically reach consensus, i.e., the trajectories of yi overlap with each other. The asymptotic consensus is also found in the derivatives of the outputs yi, i.e., ẏi. This shows the effectiveness of the consensus protocol (7.4) for multi-agent systems with multiple agents of double-integrator dynamics.

Fig. 7.1 The communication topology of the system with eight agents of double-integrator dynamics
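The closed form of protocol (7.4) is given earlier in the chapter and is not restated here; to illustrate the qualitative behavior this subsection reports, the sketch below simulates eight double integrators under a standard second-order consensus law on an assumed ring graph (the topology, the gain γ, the simulation horizon, and the initial ranges are all our own choices, not the book's) and checks that both the outputs and their derivatives reach consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# Assumed ring topology; the actual graph of Fig. 7.1 is not reproduced here.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L_G = np.diag(A.sum(axis=1)) - A

# Standard second-order consensus law u = -L_G y - gamma L_G ydot,
# used only as a stand-in for protocol (7.4).
gamma = 2.0
y = rng.uniform(0.0, 150.0, N)    # random initial outputs
v = rng.uniform(-50.0, 50.0, N)   # random initial derivatives

def disagreement(x):
    """Norm of the deviation of x from its mean (zero iff consensus)."""
    return np.linalg.norm(x - x.mean())

d0 = disagreement(y)
dt = 0.005
for _ in range(int(30.0 / dt)):   # forward-Euler integration over 30 s
    u = -L_G @ y - gamma * (L_G @ v)
    y, v = y + dt * v, v + dt * u

assert disagreement(y) < 1e-4 * d0   # outputs agree
assert disagreement(v) < 1e-3        # derivatives agree
```

The decay rate of the disagreement is governed by the algebraic connectivity λ2 of the graph, so denser communication topologies converge faster.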

Fig. 7.2 Simulation results when the consensus protocol (7.4) is adopted to realize the consensus of eight agents of double-integrator dynamics with the communication graph given in Fig. 7.1, for which the parameters are set as T = 0.7, k = 0.0001, and κ = 300. (a) yi. (b) ẏi

Fig. 7.3 The communication topology of the system with ten agents of second-order linear dynamics

7.5.2 Second-Order Linear Agents

In this subsection, we are concerned with the simulation of the consensus of a group of ten second-order linear agents whose communication topology is given in Fig. 7.3. The dynamics of each agent are given as follows:

ẋi = [0 2 0; 0 0 1; 2 1 1] xi + [0; 0; 2] ui,
yi = [1 1 0] xi.    (7.20)

We can easily verify that the system is of second order by following the definition of relative degree. In accordance with Corollary 7.6, the coefficient T is set as T = 0.7, which satisfies 0 < T ≤ (5/N)^{1/4} = (5/10)^{1/4} ≈ 0.841, while the other coefficients
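The relative-degree check mentioned above can be done by direct computation: reading (A, B, C) off (7.20), CB = 0 while CAB = 2 ≠ 0, so the output must be differentiated twice before the input appears, confirming that the system is of second order. A minimal sketch:

```python
import numpy as np

# The (A, B, C) matrices of each linear agent in (7.20).
A = np.array([[0., 2., 0.],
              [0., 0., 1.],
              [2., 1., 1.]])
B = np.array([[0.], [0.], [2.]])
C = np.array([[1., 1., 0.]])

# The relative degree r is the smallest r with C A^(r-1) B != 0.
assert np.isclose((C @ B).item(), 0.0)          # CB = 0, so r > 1
assert not np.isclose((C @ A @ B).item(), 0.0)  # CAB = 2, so r = 2
```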

are set to k = 0.0001 and κ = 200. The initial values of the state variables of the agents are given in a random manner. As seen from Fig. 7.4, the outputs of the agents asymptotically reach consensus, i.e., the trajectories of yi overlap with each other. The asymptotic consensus is also found in the derivatives of the outputs yi, i.e., ẏi. This shows the effectiveness of the consensus protocol (7.8) for multi-agent systems with multiple agents of second-order linear dynamics.

Fig. 7.4 Simulation results when the consensus protocol (7.8) is adopted to realize the consensus of ten agents of second-order linear dynamics shown in (7.20) with the communication graph given in Fig. 7.3, for which the parameters are set as T = 0.7, k = 0.0001, and κ = 200. (a) yi. (b) ẏi

7.5.3 Second-Order Nonlinear Agents

In this subsection, we show the effectiveness of the consensus protocol (7.14) for the consensus of systems with multiple agents of second-order nonlinear dynamics. To show the robustness of the protocol, we also demonstrate its performance in the case with actuator noises. The system with ten agents of second-order nonlinear dynamics is adopted to evaluate the performance of the nonlinear consensus protocol, for which the communication topology is shown in Fig. 7.5. The dynamics of the ten agents are given as follows:

ẋi1 = xi2,
ẋi2 = sin(xi1) + (cos(xi1) + 2)ui,
yi = xi1.    (7.21)

The coefficients are set as k = 0.0001, κ = 200, and T = 0.8. As aforementioned, the initial values of the state variables of the agents shown in (7.21) are randomly generated.

Fig. 7.5 The communication topology of the system with ten agents of second-order nonlinear dynamics

Fig. 7.6 Simulation results when the consensus protocol (7.14) is adopted to realize the consensus of ten agents of second-order nonlinear dynamics shown in (7.21) with the communication graph given in Fig. 7.5, for which the parameters are set as T = 0.8, k = 0.0001, and κ = 200. (a) yi. (b) ẏi

As seen from Fig. 7.6, with the aid of the near-optimal consensus protocol (7.14), all the state variables of the second-order nonlinear agents reach consensus. The results demonstrate the effectiveness of the consensus protocol in the noise-free situation. We also consider the case in which noises exist, which is more realistic for practical applications of the presented consensus protocols. Let the noise be denoted by δ ∈ R^10 for the ten second-order nonlinear agents. By modeling the noise in the dynamics of the agents, we have

ẋi1 = xi2,
ẋi2 = sin(xi1) + (cos(xi1) + 2)(ui + δi),
yi = xi1.    (7.22)
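Protocol (7.14) is derived earlier in the chapter; the sketch below reproduces the qualitative noise-free behavior with our own stand-in law, a feedback-linearizing consensus controller on an assumed ring graph (Fig. 7.5 and the closed form of (7.14) are not restated here, so the topology and gains are illustrative). The key structural fact it exploits is visible in (7.21): the input gain cos(xi1) + 2 ∈ [1, 3] never vanishes, so the drift sin(xi1) can be cancelled exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10

# Assumed ring topology; the actual graph of Fig. 7.5 is not reproduced here.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L_G = np.diag(A.sum(axis=1)) - A

x1 = rng.uniform(-10.0, 10.0, N)   # random initial outputs y_i = x_i1
x2 = rng.uniform(-5.0, 5.0, N)     # random initial derivatives x_i2

dt = 0.002
for _ in range(int(50.0 / dt)):
    # Feedback-linearizing stand-in for protocol (7.14): cancel sin(x_i1)
    # and apply a second-order consensus law to the linearized dynamics.
    v = -L_G @ x1 - 2.0 * (L_G @ x2)
    u = (v - np.sin(x1)) / (np.cos(x1) + 2.0)
    # Agent dynamics (7.21), integrated by the forward Euler method.
    x1, x2 = x1 + dt * x2, x2 + dt * (np.sin(x1) + (np.cos(x1) + 2.0) * u)

assert np.ptp(x1) < 1e-4 and np.ptp(x2) < 1e-4   # consensus reached
```

With the bounded actuator noise of (7.22) injected into ui, the same closed loop would no longer converge exactly but would keep the disagreement within an error bound proportional to the noise level, consistent with the behavior reported in Fig. 7.7.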

The coefficient of the cost function shown in Eq. (7.11) is given as T = 0.8. The other two coefficients are set to κ = 200 and k = 0.0001, while the actuator noise is set to δ ∈ [−10, 10]^10. With such settings, the simulation results of the 10 agents

aided by the consensus protocol (7.14) are depicted in Fig. 7.7. From Fig. 7.7, it can be readily found that, even in the presence of such a relatively large noise, the differences among the agent states converge to a small error bound, which shows the robustness of the presented consensus protocol in the presence of noises.

Fig. 7.7 Simulation results when the consensus protocol (7.14) is adopted to realize the consensus of ten agents of second-order nonlinear dynamics shown in (7.21) with the communication graph given in Fig. 7.5 and random actuator noise δ ∈ [−10, 10]^10, for which the parameters are set as T = 0.8, k = 0.0001, and κ = 200. (a) yi. (b) ẏi

7.6 Summary

In this chapter, we have presented a consensus protocol design method for networked systems of multiple agents with different types of agent dynamics by formulating the consensus problem as a dynamic optimization problem in a systematic manner. Analytical consensus protocols have been derived by approximating the cost functions with respect to the outputs of the agents involved in the consensus task, for multi-agent systems with double-integrator agents, linear dynamic agents, and nonlinear dynamic agents. The performance of the consensus protocols has been theoretically analyzed, and it has been shown that the protocols make the agents exponentially reach consensus with the cost functions being asymptotically optimal. The effectiveness of the three consensus protocols has been evaluated via computer simulations for different cases, including the case with actuator noises.


References 1. C.L.P. Chen, G. Wen, Y. Liu, F. Wang, Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1217–1226 (2014) 2. M. Porfiri, D.G. Roberson, D.J. Stilwell, Tracking and formation control of multiple autonomous agents: a two-level consensus approach. Automatica 43(8), 1318–1328 (2007) 3. W. Qiao, R. Sipahi, Consensus control under communication delay in a three-robot system: design and experiments. IEEE Trans. Control Syst. Technol. 24(2), 687–694 (2016) 4. Z. Wu, Z. Guan, X. Wu, T. Li, Consensus based formation control and trajectory tracking of multi-agent robot systems. J. Intell. Robot. Syst. 48(3), 397–410 (2007) 5. L. Jin, S. Li, Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst. Man Cybern. Syst. 48(5), 693–701 (2018) 6. L.Y. Lu, C.C. Chu, Consensus-based droop control synthesis for multiple DICs in isolated micro-grids. IEEE Trans. Power Syst. 30(5), 2243–2256 (2015) 7. Y. Wang, W. Yan, J. Li, Passivity-based formation control of autonomous underwater vehicles. IET Control Theory Appl. 6(4), 518–525 (2012) 8. S. Li, Y. Guo, Distributed consensus filter on directed switching graphs. Int. J. Robust Nonlinear Control 25(13), 2019–2040 (2015) 9. S. Li, Y. Guo, Dynamic consensus estimation of weighted average on directed graphs. Int. J. Control 46(10), 1839–1853 (2015) 10. L. Jin, S. Li, B. Hu, C. Yi, Dynamic neural networks aided distributed cooperative control of manipulators capable of different performance indices. Neurocomputing 291, 50–58 (2018) 11. L. Jin, S. Li, L. Xiao, R. Lu, B. Liao, Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. 48(10), 1715–1724 (2018) 12. S. Li, M. Zhou, X. Luo, Z. You, Distributed winner-take-all in dynamic networks. IEEE Trans. Autom. Control 62(2), 577–589 (2017) 13. S. Li, J. He, Y. Li, M.U. 
Rafique, Distributed recurrent neural networks for cooperative control of manipulators: a game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 415–426 (2017) 14. L. Jin, S. Li, X. Luo, M. Shang, Nonlinearly-activated noise-tolerant zeroing neural network for distributed motion planning of multiple robot arms, in Proceedings of the International Joint Conference on Neural Networks (IJCNN) (IEEE, Piscataway, 2017), pp. 4165–4170 15. M.U. Khan, S. Li, Q. Wang, Z. Shao, Distributed multirobot formation and tracking control in cluttered environments. ACM Trans. Auton. Adapt. Syst. 11(2), 1–22 (2016) 16. S. Li, Z. Wang, Y. Li, Using Laplacian eigenmap as heuristic information to solve nonlinear constraints defined on a graph and its application in distributed range-free localization of wireless sensor networks. Neural Process. Lett. 37(3), 411–424 (2013) 17. L. Jin, S. Li, B. Hu, M. Liu, A survey on projection neural networks and their applications. Appl. Soft Comput. 76, 533–544 (2019) 18. B. Liao, Q. Xiang, S. Li, Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 325, 234–241 (2019) 19. P.S. Stanimirovic, V.N. Katsikis, S. Li, Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 329, 129–143 (2019) 20. Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329, 255–266 (2019) 21. L. Xiao, K. Li, Z. Tan, Z. Zhang, B. Liao, K. Chen, L. Jin, S. Li, Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 142, 35–40 (2019)


22. D. Chen, S. Li, Q. Wu, Rejecting chaotic disturbances using a super-exponential-zeroing neurodynamic approach for synchronization of chaotic sensor systems. Sensors 19(1), 74 (2019) 23. Q. Wu, X. Shen, Y. Jin, Z. Chen, S. Li, A.H. Khan, D. Chen, Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 19(8), 1758 (2019) 24. Q. Xiang, B. Liao, L. Xiao, L. Lin, S. Li, Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 23(3), 755–766 (2019) 25. Z. Zhang, S. Chen, S. Li, Compatible convex-nonconvex constrained QP-based dual neural networks for motion planning of redundant robot manipulators. IEEE Trans. Control Syst. Technol. 27(3), 1250–1258 (2019) 26. Y. Zhang, S. Li, X. Zhou, Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019) 27. L. Jin, S. Li, B. Hu, M. Liu, J. Yu, A noise-suppressing neural algorithm for solving the timevarying system of linear equations: a control-based approach. IEEE Trans. Ind. Inf. 15(1), 236–246 (2019) 28. Y. Li, S. Li, B. Hannaford, A model-based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inf. 15(4), 2054–2063 (2019) 29. L. Xiao, S. Li, F. Lin, Z. Tan, A.H. Khan, Zeroing neural dynamics for control design: comprehensive analysis on stability, robustness, and convergence speed. IEEE Trans. Ind. Inf. 15(5), 2605–2616 (2019) 30. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, X. Liu, Reconfigurable battery systems: a survey on hardware architecture and research challenges. ACM Trans. Des. Autom. Electron. Syst. 24(2), 19:1–19:27 (2019) 31. S. Li, Z. Shao, Y. Guan, A dynamic neural network approach for efficient control of manipulators. IEEE Trans. Syst. Man Cybern. Syst. 49(5), 932–941 (2019) 32. L. Jin, S. Li, H. Wang, Z. 
Zhang, Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl. Soft Comput. 62, 840–850 (2018) 33. M. Liu, S. Li, X. Li, L. Jin, C. Yi, Z. Huang, Intelligent controllers for multirobot competitive and dynamic tracking. Complexity 2018, 4573631:1–4573631:12 (2018) 34. D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust and rapid solution for parallel robot manipulators against superposition of multiple disturbances. Neurocomputing 275, 845–858 (2018) 35. L. Jin, S. Li, J. Yu, J. He, Robot manipulator control using neural networks: a survey. Neurocomputing 285, 23–34 (2018) 36. L. Xiao, S. Li, J. Yang, Z. Zhang, A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018) 37. P.S. Stanimirovic, V.N. Katsikis, S. Li, Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 316, 124–134 (2018) 38. X. Li, J. Yu, S. Li, L. Ni, A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 317, 70–78 (2018) 39. L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 102–113 (2018) 40. L. Xiao, Z. Zhang, Z. Zhang, W. Li, S. Li, Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw. 105, 185–196 (2018) 41. Z. Zhang, Y. Lu, L. Zheng, S. Li, Z. Yu, Y. Li, A new varying-parameter convergentdifferential neural-network for solving time-varying convex QP problem constrained by linear-equality. IEEE Trans. Autom. Control 63(12), 4110–4125 (2018) 42. Z. Zhang, Y. Lin, S. Li, Y. Li, Z. Yu, Y. Luo, Tricriteria optimization-coordination motion of dual-redundant-robot manipulators for complex path planning. IEEE Trans. Control Syst. Technol. 
26(4), 1345–1357 (2018)


43. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, Incorporation of efficient secondorder solvers into latent factor models for accurate prediction of missing QoS data. IEEE Trans. Cybern. 48(4), 1216–1228 (2018) 44. L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inf. 14(1), 98–105 (2018) 45. L. Jin, S. Li, B. Hu, RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Ind. Inf. 14(1), 189–199 (2018) 46. X. Luo, M. Zhou, S. Li, M. Shang, An inherently nonnegative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Ind. Inf. 14(5), 2011–2022 (2018) 47. D. Chen, Y. Zhang, S. Li, Tracking control of robot manipulators with unknown models: a jacobian-matrix-adaption method. IEEE Trans. Ind. Inf. 14(7), 3044–3053 (2018) 48. J. Li, Y. Zhang, S. Li, M. Mao, New discretization-formula-based zeroing dynamics for realtime tracking control of serial and parallel manipulators. IEEE Trans. Ind. Inf. 14(8), 3416– 3425 (2018) 49. S. Li, H. Wang, M.U. Rafique, A novel recurrent neural network for manipulator control with improved noise tolerance. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1908–1918 (2018) 50. H. Wang, P.X. Liu, S. Li, D. Wang, Adaptive neural output-feedback control for a class of nonlower triangular nonlinear systems with unmodeled dynamics. IEEE Trans. Neural Netw. Learn. Syst. 29(8), 3658–3668 (2018) 51. S. Li, M. Zhou, X. Luo, Modified primal-dual neural networks for motion control of redundant manipulators with dynamic rejection of harmonic noises. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 4791–4801 (2018) 52. Y. Li, S. Li, B. 
Hannaford, A novel recurrent neural network for improving redundant manipulator motion planning completeness, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (IEEE, Piscataway, 2018), pp. 2956–2961 53. M.A. Mirza, S. Li, L. Jin, Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 266, 114–122 (2017) 54. L. Jin, S. Li, Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267, 107–113 (2017) 55. L. Jin, S. Li, B. Liao, Z. Zhang, Zeroing neural networks: a survey. Neurocomputing 267, 597–604 (2017) 56. L. Jin, Y. Zhang, S. Li, Y. Zhang, Noise-tolerant ZNN models for solving time-varying zerofinding problems: a control-theoretic approach. IEEE Trans. Autom. Control 62(2), 992–997 (2017) 57. Z. You, M. Zhou, X. Luo, S. Li, Highly efficient framework for predicting interactions between proteins. IEEE Trans. Cybern. 47(3), 731–743 (2017) 58. L. Jin, S. Li, H. M. La, X. Luo, Manipulability optimization of redundant manipulators using dynamic neural networks. IEEE Trans. Ind. Electron. 64(6), 4710–4720 (2017) 59. S. Muhammad, M.U. Rafique, S. Li, Z. Shao, Q. Wang, N. Guan, A robust algorithm for state-of-charge estimation with gain optimization. IEEE Trans. Ind. Inf. 13(6), 2983–2994 (2017) 60. X. Luo, J. Sun, Z. Wang, S. Li, M. Shang, Symmetric and nonnegative latent factor models for undirected, high-dimensional, and sparse networks in industrial applications. IEEE Trans. Ind. Inf. 13(6), 3098–3107 (2017) 61. S. Li, Y. Zhang, L. Jin, Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2243–2254 (2017) 62. X. Luo, S. 
Li, Non-negativity constrained missing data estimation for high-dimensional and sparse matrices, in Proceedings of the 13th IEEE Conference on Automation Science and Engineering (CASE) (IEEE, Piscataway, 2017), pp. 1368–1373


63. Y. Li, S. Li, D.E. Caballero, M. Miyasaka, A. Lewis, B. Hannaford, Improving control precision and motion adaptiveness for surgical robot with recurrent neural network, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, Piscataway, 2017), pp. 3538–3543 64. X. Luo, M. Zhou, M. Shang, S. Li, Y. Xia, A novel approach to extracting non-negative latent factors from non-negative big sparse matrices. IEEE Access 4, 2649–2655 (2016) 65. M. Mao, J. Li, L. Jin, S. Li, Y. Zhang, Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207, 220–230 (2016) 66. Y. Huang, Z. You, X. Li, X. Chen, P. Hu, S. Li, X. Luo, Construction of reliable proteinprotein interaction networks using weighted sparse representation based classifier with pseudo substitution matrix representation features. Neurocomputing 218, 131–138 (2016) 67. X. Luo, M. Zhou, H. Leung, Y. Xia, Q. Zhu, Z. You, S. Li, An incremental-and-staticcombined scheme for matrix-factorization-based collaborative filtering. IEEE Trans. Autom. Sci. Eng. 13(1), 333–343 (2016) 68. S. Li, Z. You, H. Guo, X. Luo, Z. Zhao, Inverse-free extreme learning machine with optimal information updating. IEEE Trans. Cybern. 46(5), 1229–1241 (2016) 69. L. Jin, Y. Zhang, S. Li, Y. Zhang, Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 63(11), 6978–6988 (2016) 70. X. Luo, M. Zhou, S. Li, Z. You, Y. Xia, Q. Zhu, A nonnegative latent factor model for largescale sparse matrices in recommender systems via alternating direction method. IEEE Trans. Neural Netw. Learn. Syst. 27(3), 579–592 (2016) 71. L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 
27(12), 2615–2627 (2016) 72. X. Luo, M. Shang, S. Li, Efficient extraction of non-negative latent factors from highdimensional and sparse matrices in industrial applications, in Proceedings of the IEEE 16th International Conference on Data Mining (ICDM) (IEEE, Piscataway, 2016), pp. 311–319 73. X. Luo, S. Li, M. Zhou, Regularized extraction of non-negative latent factors from highdimensional sparse matrices, in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC) (IEEE, Piscataway, 2016), pp. 1221–1226 74. X. Luo, Z. Ming, Z. You, S. Li, Y. Xia, H. Leung, Improving network topology-based protein interactome mapping via collaborative filtering. Knowl.-Based Syst. 90, 23–32 (2015) 75. X. Luo, M. Zhou, S. Li, Y. Xia, Z. You, Q. Zhu, H. Leung, An efficient second-order approach to factorize sparse matrices in recommender systems. IEEE Trans. Ind. Inf. 11(4), 946–956 (2015) 76. L. Wong, Z. You, S. Li, Y. Huang, G. Liu, Detection of protein-protein interactions from amino acid sequences using a rotation forest model with a novel PR-LPQ descriptor, in Proceedings of the International Conference on Intelligent Computing (Springer, Cham, 2015), pp. 713–720 77. Z. You, J. Yu, L. Zhu, S. Li, Z. Wen, A MapReduce based parallel SVM for large-scale predicting protein-protein interactions. Neurocomputing 145, 37–43 (2014) 78. Y. Li, S. Li, Q. Song, H. Liu, M.Q.H. Meng, Fast and robust data association using posterior based approximate joint compatibility test. IEEE Trans. Ind. Inf. 10(1), 331–339 (2014) 79. S. Li, and Y. Li, Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 44(8), 1397–1407 (2014) 80. Q. Huang, Z. You, S. Li, Z. Zhu, Using Chou’s amphiphilic pseudo-amino acid composition and extreme learning machine for prediction of protein-protein interactions, in Proceedings of the International Joint Conference on Neural Networks (IJCNN) (IEEE, Piscataway, 2014), pp. 2952–2956 81. 
S. Li, Y. Li, Z. Wang, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw. 39, 27–39 (2013)


82. S. Li, B. Liu, Y. Li, Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 24(2), 301– 309 (2013) 83. R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004) 84. W. Ren, R. Beard, Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 50(5), 655–661 (2005) 85. F. Xiao, L. Wang, State consensus for multi-agent systems with switching topologies and time-varying delays. Int. J. Control 79(10), 1277–1284 (2006) 86. P. Lin, Y. Jia, Average consensus in networks of multi-agents with both switching topology and coupling time-delay. Phys. A 387(1), 303–313 (2008) 87. Y. Cao, W. Ren, Optimal linear-consensus algorithms: an LQR perspective. IEEE Trans. Syst. Man Cybern. B Cybern. 40(3), 819–830 (2010) 88. W. Ren, On consensus algorithms for double-integrator dynamics. IEEE Trans. Autom. Control 53(6), 1503–1509 (2008) 89. L. Ding, P. Yu, Z. W. Liu, Z.H. Guan, G. Feng, Consensus of second-order multi-agent systems via impulsive control using sampled hetero-information. Automatica 49(9), 2881– 2886 (2013) 90. L. Cheng, Z.G. Hou, M. Tan, X. Wang, Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises. IEEE Trans. Autom. Control 56(8), 1958–1963 (2011) 91. D. Goldin, J. Raisch, Consensus for agents with double integrator dynamics in heterogeneous networks. Asian J. Control 16(1), 30–39 (2014) 92. A. Abdessameud, A. Tayebi, On consensus algorithms for double-integrator dynamics without velocity measurements and with input constraints. Syst. Control Lett. 59(12), 812– 821 (2010) 93. L. Zhou, S. Li, Distributed model predictive control for consensus of sampled-data multiagent systems with double-integrator dynamics. IET Control Theory Appl. 
9(12), 1774–1780 (2015) 94. G. Ferrari-Trecate, L. Galbusera, M.P.E. Marciandi, R. Scattolini, Model predictive control schemes for consensus in multi-agent systems with single- and double-integrator dynamics. IEEE Trans. Autom. Control 54(11), 2560–2572 (2009) 95. W. Li, M. W. Spong, Analysis of flocking of cooperative multiple inertial agents via a geometric decomposition technique. IEEE Trans. Syst. Man Cybern. Syst. 44(12), 1611–1623 (2014) 96. G. Wen, C.L.P. Chen, Y. Liu, Z. Liu, Neural-network-based adaptive leader-following consensus control for second-order non-linear multi-agent systems. IET Control Theory Appl. 9(13), 1927–1934 (2015) 97. W. He, J. Cao, Consensus control for high-order multi-agent systems. IET Control Theory Appl. 5(1), 231–238 (2011) 98. Z. Li, W. Ren, X. Liu, L. Xie, Distributed consensus of linear multi-agent systems with adaptive dynamic protocols. Automatica 49(7), 1986–1995 (2013) 99. H. Su, M.Z.Q. Chen, J. Lam, Z. Lin, Semi-global leader-following consensus of linear multiagent systems with input saturation via low gain feedback. IEEE Trans. Circuits Syst. Regul. Pap. 60(7), 1881–1889 (2013) 100. L. Cheng, Z. G. Hou, Y. Lin, M. Tan, W. Zhang, Solving a modified consensus problem of linear multi-agent systems. Automatica 47(10), 2218–2223 (2011) 101. T. Yang, S. Roy, Y. Wan, A. Saberi, Constructing consensus controllers for networks with identical general linear agents. Int. J. Robust Nonlinear Control 21(11), 1237–1256 (2011) 102. Y. Qian, X. Wu, J. Lu, J.A. Lu, Second-order consensus of multi-agent systems with nonlinear dynamics via impulsive control. Neurocomputing 125(11), 142–147 (2014) 103. Q. Song, F. Liu, J. Cao, W. Yu, M-matrix strategies for pinning-controlled leader-following consensus in multiagent systems with nonlinear dynamics. IEEE Trans. Cybern. 43(6), 1688– 1697 (2013)


104. Y. Gao, L. Dai, Y. Xia, Y. Liu, Distributed model predictive control for consensus of nonlinear second-order multi-agent systems. Int. J. Robust Nonlinear Control 27(5), 830–842 (2017) 105. C.E. Ren, L. Chen, C.L.P. Chen, T. Du, Quantized consensus control for second-order multiagent systems with nonlinear dynamics. Neurocomputing 175, 529–537 (2016) 106. H. Li, X. Liao, T. Huang, Second-order locally dynamical consensus of multiagent systems with arbitrarily fast switching directed topologies. IEEE Trans. Syst. Man Cybern. Syst. 43(6), 1343–1353 (2013) 107. G. Wen, C.L.P. Chen, Y. Liu, Z. Liu, Neural network-based adaptive leader-following consensus control for a class of nonlinear multiagent state-delay systems. IEEE Trans. Cybern. 47(8), 2151–2160 (2016). https://doi.org/10.1109/TCYB.2016.2608499 108. C.L.P. Chen, G. Wen, Y. Liu, Z. Liu, Observer-based adaptive backstepping consensus tracking control for high-order nonlinear semi-strict-feedback multiagent systems. IEEE Trans. Cybern. 46(7), 1591–1601 (2016) 109. L. Xiao, A nonlinearly-activated neurodynamic model and its finite-time solution to equality-constrained quadratic optimization with nonstationary coefficients. Appl. Soft Comput. 40, 252–259 (2016) 110. L. Jin, Y. Zhang, G2-Type SRMPC scheme for synchronous manipulation of two redundant robot arms. IEEE Trans. Cybern. 45(2), 153–164 (2015) 111. D. Guo, Y. Zhang, A new inequality-based obstacle-avoidance MVN scheme and its application to redundant robot manipulators. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(6), 1326–1340 (2012) 112. K. Hengster-Movric, F.L. Lewis, Cooperative optimal control for multi-agent systems on directed graph topologies. IEEE Trans. Autom. Control 59(3), 769–774 (2014) 113. D.Q. Mayne, H. Michalska, Receding horizon control of nonlinear systems. IEEE Trans. Autom. Control 35(7), 814–824 (1990) 114. P. Lu, Approximate nonlinear receding-horizon control laws in closed form. Int. J. Control 71(1), 19–34 (1998) 115. W.H. Chen, D.J.
Ballance, P.J. Gawthrop, Optimal control of nonlinear systems: a predictive control approach. Automatica 39(4), 633–641 (2003) 116. Z. Li, J. Deng, R. Lu, Y. Xu, J. Bai, C.Y. Su, Trajectory-tracking control of mobile robot systems incorporating neural-dynamic optimized model predictive approach. IEEE Trans. Syst. Man Cybern. Syst. 46(6), 740–749 (2016) 117. C. Godsil, G. Royle, Algebraic Graph Theory (Springer, New York, 2001) 118. A. Isidori, Nonlinear Control Systems: An Introduction (Springer, New York, 1995) 119. Y. Zhang, D. Chen, L. Jin, Y. Zhang, Y. Yin, GD-aided IOL (input-output linearisation) controller for handling affine-form nonlinear system with loose condition on relative degree. Int. J. Control 89(4), 757–769 (2016) 120. B. Liao, Y. Zhang, L. Jin, Taylor O(h3) discretization of ZNN models for dynamic equality-constrained quadratic programming with application to manipulators. IEEE Trans. Neural Netw. Learn. Syst. 27(2), 225–237 (2016) 121. R. Bellman, Introduction to Matrix Analysis (McGraw-Hill, New York, 1960) 122. C.D. Meyer, Matrix Analysis and Applied Linear Algebra (Society for Industrial and Applied Mathematics, Philadelphia, 2000) 123. W.J. Rugh, Linear System Theory (Prentice Hall, Upper Saddle River, 1996) 124. C.Y. Hsu, S.Y. Shaw, H.J. Wong, Refinements of generalized triangle inequalities. J. Math. Anal. Appl. 344(1), 17–31 (2008) 125. H.L. Royden, Real Analysis (Macmillan, New York, 1988) 126. H. Minc, Nonnegative Matrices (Wiley, New York, 1988)

Chapter 8

Adaptive Near-Optimal Consensus

8.1 Introduction

Distributed consensus means that a group of dynamic agents reach an agreement on certain quantities of interest, which generally requires designing appropriate control laws or protocols based on local information. In recent years, much effort has been devoted to investigations on the consensus of multi-agent systems due to its widespread applications in sensor networks, mobile robots, unmanned aerial vehicles, etc. The consensus of multi-agent systems consisting of first-order or second-order integrators has been extensively investigated, e.g., [1–5]. The consensus of multi-agent systems consisting of high-order integrators has been investigated in [6]. The necessary and sufficient conditions for the consensusability of linear multi-agent systems with fully known parameters have been investigated in [7]. Some protocols for linear multi-agent systems have been reported in [8–12]. The consensus of nonlinear multi-agent systems is of greater significance since physical systems are more or less nonlinear. A consensus algorithm based on pinning control was proposed in [13] for a nonlinear multi-agent system with second-order dynamics. By combining the variable structure approach and the adaptive method, a distributed leader-follower consensus problem of second-order multi-agent systems with unknown nonlinear dynamics was investigated in [14]. By introducing a kind of variable transformation called the star transformation, the consensus problem of second-order nonlinear multi-agent systems was converted in [15] to a partial stability problem. A measurement output feedback consensus controller was proposed in [16] with a dynamic observer to handle the situation in which the states of a second-order nonlinear multi-agent system are not fully measurable. Note that the results in [13–16] are about nonlinear multi-agent systems with identical agents. In terms of heterogeneous nonlinear multi-agent systems, a few results have been reported in [17–19].
It is worth pointing out that the exact dynamics of agents are generally difficult to obtain, which leads to investigations on the consensus of nonlinear multi-agent systems with unknown dynamics [18, 20–25]. For example, the adaptive consensus of high-order nonlinear multi-agent systems was investigated in [24, 25]. It is also worth pointing out that the design of adaptive consensus to tackle unknown parameters is not trivial due to the distributed nature of consensus, and it is not straightforward to extend existing adaptive control results [26–28] for individual agents to the consensus of multiple agents. The distributed optimal consensus problem is difficult to solve since the solution of a global optimization problem generally requires centralized, i.e., global, information [29]. This problem becomes more difficult when it comes to nonlinear multi-agent systems. In [30], the optimal consensus of nonlinear multi-agent systems with respect to local performance indices was investigated by fuzzy adaptive dynamic programming from the perspective of game theory. Note that the optimal control of nonlinear systems often requires solving a partial differential equation called the Hamilton-Jacobi-Bellman equation, for which an analytical solution is generally difficult to obtain. In [31], a distributed consensus protocol was proposed for first-order nonlinear multi-agent systems with uncertain nonlinear dynamics based on adaptive dynamic programming, in which uniformly ultimately bounded convergence of the agent states is guaranteed. It is remarkable that, in [32], by making an approximation of the performance index, an analytical near-optimal control law is obtained for a multi-input multi-output nonlinear system, which significantly reduces the computational cost compared with the dynamic programming approach for optimal control. Motivated by the existing works, we investigate the distributed optimal consensus of high-order nonlinear multi-agent systems consisting of heterogeneous agents. We formulate the consensus problem as a receding-horizon optimal control problem.

© Springer Nature Singapore Pte Ltd. 2020 Y. Zhang, S. Li, Machine Behavior Design And Analysis, https://doi.org/10.1007/978-981-15-3231-3_8
We start by considering the consensus of nonlinear agents with completely known parameters and then extend it to the case without any parameter information. On the one hand, for the situation when the system parameters are fully-known, by making an approximation of the performance index, we relax the optimal consensus problem to a convex quadratic program, from which a distributed near-optimal protocol is designed. On the other hand, for the situation when the system parameters are fully-unknown, we introduce auxiliary systems based on the concept of sliding-mode control [33, 34]. The auxiliary systems asymptotically reconstruct the input-output properties of the agents. Based on the auxiliary systems, by making a further approximation of the performance index, we design a distributed adaptive near-optimal protocol. The rest of this chapter is organized as follows. In Sect. 8.2, we describe the problem. In Sects. 8.3 and 8.4, we show the design and analysis of the consensus protocols for solving the problem under two cases of knowledge of agent dynamics. In Sect. 8.5, computer simulation results are discussed. Finally, the summary of this chapter is given in Sect. 8.6.

8.2 Problem Formulation

Consider a multi-agent system consisting of $N$ nonlinear heterogeneous agents of order $\sigma$. The $i$th ($i = 1, 2, \cdots, N$) agent is described as follows:

$$\begin{cases} \dot{x}_i^{(0)} = x_i^{(1)}, \\ \quad\vdots \\ \dot{x}_i^{(\sigma-2)} = x_i^{(\sigma-1)}, \\ \dot{x}_i^{(\sigma-1)} = f_i(x_i) + g_i(x_i)u_i, \end{cases} \qquad (8.1)$$

where state variable $x_i^{(j)} \in \mathbb{R}$ with $j = 0, 1, 2, \cdots, \sigma-1$ denotes the $j$th time derivative of $x_i = x_i^{(0)}$; $x_i = [x_i^{(0)}, x_i^{(1)}, \cdots, x_i^{(\sigma-1)}]^T \in \mathbb{R}^{\sigma}$ and $u_i \in \mathbb{R}$ denote the state vector and input of the $i$th agent, respectively; $f_i(\cdot): \mathbb{R}^{\sigma} \to \mathbb{R}$ and $g_i(\cdot): \mathbb{R}^{\sigma} \to \mathbb{R}$ are continuously differentiable functions which are different for each agent. It is assumed that $g_i(x_i) \neq 0$ for all $i = 1, 2, \cdots, N$, i.e., there is no control singularity. Let $x_m = [x_1, x_2, \cdots, x_N]^T$, $f_m(x_m) = [f_1(x_1), f_2(x_2), \cdots, f_N(x_N)]^T$, $u = [u_1, u_2, \cdots, u_N]^T$ and $g_m(x_m) = \mathrm{diag}([g_1(x_1), g_2(x_2), \cdots, g_N(x_N)]^T)$. The multi-agent system is thus formulated as follows:

$$\begin{cases} \dot{x}_m^{(0)} = x_m^{(1)}, \\ \quad\vdots \\ \dot{x}_m^{(\sigma-2)} = x_m^{(\sigma-1)}, \\ \dot{x}_m^{(\sigma-1)} = f_m(x_m) + g_m(x_m)u. \end{cases} \qquad (8.2)$$
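As a concrete illustration (not part of the chapter's development), the chain-of-integrators structure of (8.1) can be stepped numerically with forward Euler; the dynamics `f` and `g` below are arbitrary placeholders, and `step_agent` is a hypothetical helper name:

```python
import numpy as np

def step_agent(x, u, f, g, dt):
    """One forward-Euler step of the chain-of-integrators agent (8.1).

    x : state vector [x^(0), ..., x^(sigma-1)]
    u : scalar input; f, g : callables giving f_i(x), g_i(x).
    """
    sigma = len(x)
    dx = np.empty(sigma)
    dx[:-1] = x[1:]                  # x_dot^(j) = x^(j+1) for j < sigma-1
    dx[-1] = f(x) + g(x) * u         # x_dot^(sigma-1) = f_i(x) + g_i(x) u_i
    return x + dt * dx

# Example: a second-order (sigma = 2) agent with illustrative dynamics.
f = lambda s: -np.sin(s[0])
g = lambda s: 2.0 + np.cos(s[0])     # bounded away from zero: no singularity
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = step_agent(x, u=0.0, f=f, g=g, dt=0.01)
```

With `g` bounded away from zero as assumed in (8.1), the last channel is always directly controllable.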

To achieve the consensus of multi-agent system (8.2), we consider the following performance index:

$$J(t) = \int_0^T x_m^T(t+\tau) L_0 x_m(t+\tau)\,\mathrm{d}\tau + \int_0^T \dot{x}_m^T(t+\tau) Q \dot{x}_m(t+\tau)\,\mathrm{d}\tau, \qquad (8.3)$$

where $T > 0 \in \mathbb{R}$ denotes the predictive period; $L_0 \in \mathbb{R}^{N\times N}$ denotes a Laplacian matrix to be determined; $Q \in \mathbb{R}^{N\times N}$ is a diagonal positive-definite matrix. About performance index (8.3), we offer the following remark as an intuitive explanation of the optimal consensus.

Remark 8.1 The physical meanings of the two integral terms in the performance index are as follows. The first term corresponds to the relative potential energy of the multi-agent system. The second term corresponds to the kinetic energy of the whole multi-agent system. Intuitively, when performance index $J(t)$ achieves its minimum, i.e., $J(t) = 0$, the relative potential energy and the kinetic energy become zero. It follows that all the agents of the multi-agent system achieve static consensus. In this sense, the results presented in this chapter can be extended to velocity consensus, acceleration consensus, etc., by modifying the second term.

Before moving to the discussion of the design, we first review two properties of Laplacian matrices.

Property 8.1 Column vector $\mathbf{1}$ with each element being 1 is an eigenvector of a Laplacian matrix, which corresponds to eigenvalue 0. In addition, only one of the eigenvalues of a Laplacian matrix is 0 and all of the other eigenvalues are greater than 0 [35].

Property 8.2 Laplacian matrices of undirected connected graphs are positive-semidefinite and symmetric [35].
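The "relative potential energy" reading of the first term in (8.3) can be made concrete: for a Laplacian $L_0$ of an undirected graph, $x^T L_0 x$ equals the sum of squared disagreements over the edges, so it vanishes exactly at consensus. A minimal sketch, assuming an illustrative path graph on four nodes:

```python
import numpy as np

# Path graph on 4 nodes: Laplacian L0 = D - A (graph chosen for illustration).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L0 = np.diag(A.sum(axis=1)) - A

def potential(x):
    # x^T L0 x = sum over edges of (x_i - x_j)^2: zero exactly at consensus
    return float(x @ L0 @ x)

potential(np.array([2.0, 2.0, 2.0, 2.0]))   # consensus: zero
potential(np.array([1.0, 2.0, 3.0, 4.0]))   # disagreement: strictly positive
```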

8.3 Nominal Design

In this section, we consider the situation where multi-agent system (8.2) consists of agents with fully-known dynamics. Let

$$X_m(t) = [x_m(t), x_m^{(1)}(t), \cdots, x_m^{(\sigma-1)}(t), f_m(x_m)].$$

Then, $x_m(t+\tau)$ of multi-agent system (8.2) can be approximated via time-scale Taylor expansion and written in a compact form as follows:

$$x_m(t+\tau) \approx X_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!} g_m(x_m(t))u(t),$$

where $w_1(\tau) = [1, \tau, \cdots, \tau^{\sigma-1}/(\sigma-1)!, \tau^{\sigma}/\sigma!]^T$. Similarly, we have the following approximation for $\dot{x}_m(t+\tau)$:

$$\dot{x}_m(t+\tau) \approx X_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!} g_m(x_m(t))u(t),$$

where $w_2(\tau) = [0, 1, \tau, \cdots, \tau^{\sigma-1}/(\sigma-1)!]^T$. In addition, $u(t+\tau) \approx u(t)$. The performance index $J(t)$ in Eq. (8.3) is thus approximated as

$$\begin{aligned} J(t) \approx \hat{J}(t) = {} & \int_0^T \Big(X_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}g_m(x_m(t))u(t)\Big)^T L_0 \Big(X_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}g_m(x_m(t))u(t)\Big)\mathrm{d}\tau \\ & + \int_0^T \Big(X_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}g_m(x_m(t))u(t)\Big)^T Q \Big(X_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}g_m(x_m(t))u(t)\Big)\mathrm{d}\tau \\ = {} & \int_0^T w_1^T(\tau)X_m^T(t)L_0X_m(t)w_1(\tau)\mathrm{d}\tau + 2\int_0^T \frac{\tau^{\sigma}}{\sigma!}w_1^T(\tau)X_m^T(t)L_0g_m(x_m(t))u(t)\mathrm{d}\tau \\ & + \int_0^T \frac{\tau^{2\sigma}}{(\sigma!)^2}u^T(t)g_m^T(x_m(t))L_0g_m(x_m(t))u(t)\mathrm{d}\tau + \int_0^T w_2^T(\tau)X_m^T(t)QX_m(t)w_2(\tau)\mathrm{d}\tau \\ & + 2\int_0^T \frac{\tau^{\sigma-1}}{(\sigma-1)!}w_2^T(\tau)X_m^T(t)Qg_m(x_m(t))u(t)\mathrm{d}\tau + \int_0^T \frac{\tau^{2\sigma-2}}{((\sigma-1)!)^2}u^T(t)g_m^T(x_m(t))Qg_m(x_m(t))u(t)\mathrm{d}\tau. \end{aligned} \qquad (8.4)$$

Let

$$v_1 = \int_0^T \frac{\tau^{\sigma}}{\sigma!}w_1(\tau)\mathrm{d}\tau = \Big[\frac{T^{\sigma+1}}{(\sigma+1)\sigma!}, \frac{T^{\sigma+2}}{(\sigma+2)1!\sigma!}, \cdots, \frac{T^{2\sigma+1}}{(2\sigma+1)(\sigma!)^2}\Big]^T,$$

$$v_2 = \int_0^T \frac{\tau^{\sigma-1}}{(\sigma-1)!}w_2(\tau)\mathrm{d}\tau = \Big[0, \frac{T^{\sigma}}{\sigma(\sigma-1)!}, \cdots, \frac{T^{2\sigma-1}}{(2\sigma-1)((\sigma-1)!)^2}\Big]^T,$$

$$\kappa_1 = \int_0^T \frac{\tau^{2\sigma}}{(\sigma!)^2}\mathrm{d}\tau = \frac{T^{2\sigma+1}}{(2\sigma+1)(\sigma!)^2}, \qquad \kappa_2 = \int_0^T \frac{\tau^{2\sigma-2}}{((\sigma-1)!)^2}\mathrm{d}\tau = \frac{T^{2\sigma-1}}{(2\sigma-1)((\sigma-1)!)^2}.$$
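The closed forms of $\kappa_1$ and $\kappa_2$ (and, analogously, the entries of $v_1$ and $v_2$) are ordinary polynomial integrals and can be cross-checked by quadrature. A small verification sketch (the helper names are ours, not the book's):

```python
import numpy as np
from math import factorial

def kappa_closed(sigma, T):
    """Closed-form kappa_1, kappa_2 from the definitions above."""
    k1 = T ** (2 * sigma + 1) / ((2 * sigma + 1) * factorial(sigma) ** 2)
    k2 = T ** (2 * sigma - 1) / ((2 * sigma - 1) * factorial(sigma - 1) ** 2)
    return k1, k2

def kappa_numeric(sigma, T, n=100001):
    """Trapezoidal quadrature of the defining integrals, for verification."""
    tau = np.linspace(0.0, T, n)
    f1 = tau ** (2 * sigma) / factorial(sigma) ** 2
    f2 = tau ** (2 * sigma - 2) / factorial(sigma - 1) ** 2
    dt = tau[1] - tau[0]
    return (np.sum((f1[1:] + f1[:-1]) / 2) * dt,
            np.sum((f2[1:] + f2[:-1]) / 2) * dt)
```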

Then, $\hat{J}(t)$ in Eq. (8.4) can be rewritten as

$$\begin{aligned} \hat{J}(t) = {} & \int_0^T w_1^T(\tau)X_m^T(t)L_0X_m(t)w_1(\tau)\mathrm{d}\tau + 2v_1^TX_m^T(t)L_0g_m(x_m(t))u(t) + \kappa_1 u^T(t)g_m^T(x_m(t))L_0g_m(x_m(t))u(t) \\ & + \int_0^T w_2^T(\tau)X_m^T(t)QX_m(t)w_2(\tau)\mathrm{d}\tau + 2v_2^TX_m^T(t)Qg_m(x_m(t))u(t) + \kappa_2 u^T(t)g_m^T(x_m(t))Qg_m(x_m(t))u(t). \end{aligned}$$

In light of the fact that the decision variable is input $u(t)$, minimizing $\hat{J}(t)$ is equivalent to minimizing $\hat{J}_e(t)$ defined as follows:

$$\hat{J}_e(t) = 2v_1^TX_m^T(t)L_0g_m(x_m(t))u(t) + \kappa_1 u^T(t)g_m^T(x_m(t))L_0g_m(x_m(t))u(t) + 2v_2^TX_m^T(t)Qg_m(x_m(t))u(t) + \kappa_2 u^T(t)g_m^T(x_m(t))Qg_m(x_m(t))u(t).$$

Note that $\kappa_1 > 0$, $\kappa_2 > 0$, $Q$ is a positive-definite diagonal matrix, $L_0$ is a Laplacian matrix and $g_m(x_m(t))$ is a diagonal matrix with nonzero elements. It follows that $\kappa_1 g_m^T(x_m(t))L_0g_m(x_m(t)) + \kappa_2 g_m^T(x_m(t))Qg_m(x_m(t))$ is positive-definite, i.e., $\hat{J}_e(t)$ is a convex quadratic performance index. Then, a near-optimal protocol can be obtained by solving $\partial\hat{J}_e(t)/\partial u = 0$, from which we have

$$2g_m(x_m(t))L_0X_m(t)v_1 + 2\kappa_1 g_m(x_m(t))L_0g_m(x_m(t))u(t) + 2g_m(x_m(t))QX_m(t)v_2 + 2\kappa_2 g_m(x_m(t))Qg_m(x_m(t))u(t) = 0.$$

Thus,

$$L_0X_m(t)v_1 + \kappa_1 L_0g_m(x_m(t))u(t) + QX_m(t)v_2 + \kappa_2 Qg_m(x_m(t))u(t) = 0.$$

Substituting the expressions of $v_1$ and $v_2$ into the above equation yields

$$Q\sum_{j=1}^{\sigma-1}\frac{T^{\sigma+j-1}}{(\sigma+j-1)(\sigma-1)!(j-1)!}x_m^{(j)} + (L_0\kappa_1 + Q\kappa_2)(f_m(x_m) + g_m(x_m(t))u(t)) + L_0\sum_{j=0}^{\sigma-1}\frac{T^{\sigma+j+1}}{(\sigma+j+1)\sigma!j!}x_m^{(j)} = 0.$$
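The step above is the generic fact that a convex quadratic $2b^Tu + u^THu$ with positive-definite $H$ is minimized at the unique zero of its gradient, $u^* = -H^{-1}b$. A numerical sanity check with arbitrary stand-in matrices (hypothetical data, not the composite matrices of the chapter):

```python
import numpy as np

# H plays the role of kappa_1 g^T L_0 g + kappa_2 g^T Q g (positive-definite);
# b plays the role of the linear-term vector in J_e(t).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
H = M @ M.T + 4.0 * np.eye(4)        # symmetric positive-definite
b = rng.standard_normal(4)

J_e = lambda u: 2.0 * b @ u + u @ H @ u
u_star = -np.linalg.solve(H, b)      # zero of the gradient 2b + 2Hu

assert np.allclose(2.0 * b + 2.0 * H @ u_star, 0.0)
for _ in range(100):                  # convexity: no perturbation does better
    assert J_e(u_star) <= J_e(u_star + 0.1 * rng.standard_normal(4))
```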

Let $\kappa_3 = \kappa_1(\sigma-1)!/T^{\sigma-1} = T^{\sigma+2}/((2\sigma+1)\sigma\sigma!)$ and $\kappa_4 = \kappa_2(\sigma-1)!/T^{\sigma-1} = T^{\sigma}/((2\sigma-1)(\sigma-1)!)$. We further have

$$Q\kappa_4\sum_{j=1}^{\sigma-1}\frac{T^{j-\sigma}(2\sigma-1)(\sigma-1)!}{(\sigma+j-1)(j-1)!}x_m^{(j)} + (L_0\kappa_3 + Q\kappa_4)(f_m(x_m) + g_m(x_m(t))u(t)) + L_0\sum_{j=0}^{\sigma-1}\frac{T^{j+2}}{(\sigma+j+1)\sigma j!}x_m^{(j)} = 0. \qquad (8.5)$$

Let $(L_0\kappa_3 + Q\kappa_4)^{-1}L_0 = L$. From Eq. (8.5), we have

$$\begin{aligned} u(t) = {} & g_m^{-1}(x_m(t))\Bigg(L\Bigg(-\sum_{j=0}^{\sigma-1}\frac{T^{j+2}}{(\sigma+j+1)\sigma j!}x_m^{(j)}(t) + \sum_{j=1}^{\sigma-1}\frac{T^{j+2}(2\sigma-1)}{\sigma^2(2\sigma+1)(\sigma+j-1)(j-1)!}x_m^{(j)}(t)\Bigg) \\ & - f_m(x_m(t)) - \sum_{j=1}^{\sigma-1}\frac{T^{j-\sigma}(2\sigma-1)(\sigma-1)!}{(\sigma+j-1)(j-1)!}x_m^{(j)}(t)\Bigg), \end{aligned} \qquad (8.6)$$

which is called the nominal near-optimal protocol. For nominal near-optimal protocol (8.6), we have the following theoretical results.

Theorem 8.1 If all the roots of the following equation are located in the left half-plane for all $i = 1, 2, \cdots, N$:

$$s^{\sigma} + \sum_{j=1}^{\sigma-1}(\chi_i\alpha_j + \beta_j)s^j + \frac{T^2}{\sigma(\sigma+1)}\chi_i = 0,$$

where $\chi_1 = N\varepsilon$ with $\varepsilon > 0 \in \mathbb{R}$, $\chi_k = \lambda_k$ for $k = 2, 3, \cdots, N$ with $\lambda_k$ being the positive eigenvalues of $L$, and

$$\alpha_j = \frac{T^{j+2}}{(\sigma+j+1)\sigma j!} - \frac{T^{j+2}(2\sigma-1)}{\sigma^2(2\sigma+1)(\sigma+j-1)(j-1)!}, \qquad \beta_j = \frac{T^{j-\sigma}(2\sigma-1)(\sigma-1)!}{(\sigma+j-1)(j-1)!},$$

then nonlinear multi-agent system (8.2) synthesized by near-optimal protocol (8.6) exponentially converges to consensus.

Proof Near-optimal protocol (8.6) can be rewritten as

$$u(t) = g_m^{-1}(x_m(t))\Bigg(-\frac{T^2}{\sigma(\sigma+1)}Lx_m(t) - L\sum_{j=1}^{\sigma-1}\alpha_j x_m^{(j)}(t) - f_m(x_m(t)) - \sum_{j=1}^{\sigma-1}\beta_j x_m^{(j)}(t)\Bigg), \qquad (8.7)$$

where $\alpha_j$ and $\beta_j$ are defined as follows:

$$\alpha_j = \frac{T^{j+2}}{(\sigma+j+1)\sigma j!} - \frac{T^{j+2}(2\sigma-1)}{\sigma^2(2\sigma+1)(\sigma+j-1)(j-1)!}, \qquad \beta_j = \frac{T^{j-\sigma}(2\sigma-1)(\sigma-1)!}{(\sigma+j-1)(j-1)!}. \qquad (8.8)$$

Substituting Eq. (8.7) into the last equation of (8.2) yields

$$x_m^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Lx_m(t) - \sum_{j=1}^{\sigma-1}(L\alpha_j + \beta_j I)x_m^{(j)}(t).$$

Let $e(t) = Lx_m(t)$. One further has

$$e^{(\sigma)}(t) = -\frac{T^2L}{\sigma(\sigma+1)}e(t) - \sum_{j=1}^{\sigma-1}(\alpha_j Le^{(j)}(t) + \beta_j e^{(j)}(t)). \qquad (8.9)$$

Based on Properties 8.1 and 8.2 of Laplacian matrices, one has $L = L^T$ and $L\mathbf{1} = 0$, where $\mathbf{1} \in \mathbb{R}^N$ denotes the column vector with each element being 1. It follows that

$\mathbf{1}^TL = 0$ and thus $\varepsilon\mathbf{1}\mathbf{1}^Te(t) = 0$ with $\varepsilon > 0$. Then, one has $Le(t) = (L + \varepsilon\mathbf{1}\mathbf{1}^T)e(t)$. Let $A = L + \varepsilon\mathbf{1}\mathbf{1}^T$. Equation (8.9) can then be rewritten as

$$e^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Ae(t) - \sum_{j=1}^{\sigma-1}(A\alpha_j + \beta_j I)e^{(j)}(t). \qquad (8.10)$$
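The rank-one shift $A = L + \varepsilon\mathbf{1}\mathbf{1}^T$ used here moves only the zero eigenvalue of $L$ (to $N\varepsilon$) and leaves the positive eigenvalues untouched, since $\mathbf{1}$ is the eigenvector of the zero eigenvalue and the other eigenvectors are orthogonal to it. This can be checked numerically (the path graph below is an arbitrary illustrative choice):

```python
import numpy as np

# Laplacian of a path graph on N = 4 nodes (illustrative only).
A_adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A_adj[i, j] = A_adj[j, i] = 1.0
L = np.diag(A_adj.sum(axis=1)) - A_adj

N, eps = 4, 0.5
A = L + eps * np.ones((N, N))        # rank-one shift along the 1-direction

ev_L = np.linalg.eigvalsh(L)         # ascending: 0, lambda_2, ..., lambda_N
ev_A = np.linalg.eigvalsh(A)
# Spectrum of A: {N*eps} together with the positive eigenvalues of L.
```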

Since $L$ is symmetric, according to the spectral theorem [36], $L$ can be decomposed as $L = \lambda_1\alpha_1\alpha_1^T + \lambda_2\alpha_2\alpha_2^T + \cdots + \lambda_N\alpha_N\alpha_N^T$, where $\lambda_i$ and $\alpha_i$ denote the $i$th eigenvalue and the $i$th normalized eigenvector, respectively, with $i = 1, 2, \cdots, N$. Note that the least eigenvalue of a Laplacian matrix is 0, which corresponds to eigenvector $\mathbf{1}$, and the second least eigenvalue is greater than 0 [35]. Let $\lambda_1 = 0$. Then, $\lambda_i > 0$ for $i \geq 2$ and $\alpha_1\alpha_1^T = \mathbf{1}\mathbf{1}^T/N$. Similarly, matrix $A$ can be decomposed as $A = (\lambda_1 + N\varepsilon)\mathbf{1}\mathbf{1}^T/N + \lambda_2\alpha_2\alpha_2^T + \cdots + \lambda_N\alpha_N\alpha_N^T$, which indicates that the eigenvalues of $A$ are $\lambda_1 + N\varepsilon = N\varepsilon, \lambda_2, \cdots, \lambda_N$, all of which are evidently positive. Therefore, $A$ is a positive-definite symmetric matrix. By eigenvalue decomposition, there exist an orthogonal matrix $P \in \mathbb{R}^{N\times N}$ and a diagonal matrix $\Lambda = \mathrm{diag}([\chi_1, \chi_2, \cdots, \chi_N]^T) = \mathrm{diag}([N\varepsilon, \lambda_2, \cdots, \lambda_N]^T) \in \mathbb{R}^{N\times N}$ such that $A = P\Lambda P^T$. Let $y(t) = P^Te(t)$. From Eq. (8.10), one has

$$y^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}\Lambda y(t) - \sum_{j=1}^{\sigma-1}(\Lambda\alpha_j + \beta_j I)y^{(j)}(t). \qquad (8.11)$$

The $i$th subsystem of (8.11) is

$$y_i^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}\chi_i y_i(t) - \sum_{j=1}^{\sigma-1}(\chi_i\alpha_j + \beta_j)y_i^{(j)}(t),$$

of which the characteristic equation is

$$s^{\sigma} + \sum_{j=1}^{\sigma-1}(\chi_i\alpha_j + \beta_j)s^j + \frac{T^2}{\sigma(\sigma+1)}\chi_i = 0. \qquad (8.12)$$

By linear system theory [37], given that all the roots of Eq. (8.12) are located in the left half-plane, the $i$th subsystem of (8.11) is exponentially asymptotically stable. If each subsystem of (8.11) is exponentially asymptotically stable, then $y(t)$ exponentially converges to zero. Recall that $P$ is an orthogonal matrix and $y(t) = P^Te(t)$. It follows that $e(t)$ exponentially converges to zero. The proof is complete.

Practical systems may not have a very high relative degree $\sigma$. For example, most mechanical systems have second-order nonlinear dynamics and usually have a relative degree $\sigma = 2$. Besides, according to linear system theory, if the coefficients of a polynomial equation satisfy the Routh-Hurwitz criterion, then all the roots of the polynomial equation are located in the left half-plane [37]. For low-relative-degree systems with $\sigma = 1, 2, 3, 4$, their specialty allows us to derive further simplified expressions for the convergence criterion. In this subsection, we present those further results.

Corollary 8.1 Synthesized by near-optimal protocol (8.6), nonlinear multi-agent system (8.2) with $\sigma$ being 2 or 3 exponentially converges to consensus.

Proof When $\sigma = 2$, Eq. (8.12) becomes

$$s^2 + (\chi_i\alpha_1 + \beta_1)s + \frac{T^2}{6}\chi_i = 0. \qquad (8.13)$$

Recall that $\chi_i > 0$, $\alpha_1 > 0$, $\beta_1 > 0$ and $T > 0$. Evidently, the coefficients of Eq. (8.13) satisfy the Routh-Hurwitz criterion [37] for all $i = 1, 2, \cdots, N$. When $\sigma = 3$, Eq. (8.12) becomes

$$s^3 + (\chi_i\alpha_2 + \beta_2)s^2 + (\chi_i\alpha_1 + \beta_1)s + \frac{T^2}{12}\chi_i = 0. \qquad (8.14)$$

Recall that $\chi_i > 0$, $\alpha_j > 0$, $\beta_j > 0$ and $T > 0$. Then, $(\chi_i\alpha_2 + \beta_2)(\chi_i\alpha_1 + \beta_1) = \alpha_1\alpha_2\chi_i^2 + (\alpha_2\beta_1 + \alpha_1\beta_2)\chi_i + \beta_1\beta_2 > (\alpha_2\beta_1 + \alpha_1\beta_2)\chi_i$. From the definitions of $\alpha_j$ and $\beta_j$ shown in Eq. (8.8), one has

$$\alpha_2\beta_1 + \alpha_1\beta_2 = \frac{T^4}{126}\times\frac{10}{3T^2} + \frac{38T^3}{945}\times\frac{10}{4T} = \frac{56}{441}T^2 > \frac{T^2}{12}.$$

Therefore, $(\chi_i\alpha_2 + \beta_2)(\chi_i\alpha_1 + \beta_1) > T^2\chi_i/12$. It follows that the coefficients of Eq. (8.14) satisfy the Routh-Hurwitz criterion [37] for all $i = 1, 2, \cdots, N$. According to Theorem 8.1, synthesized by near-optimal protocol (8.6), nonlinear multi-agent system (8.2) with $\sigma$ being 2 or 3 exponentially converges to consensus.

Corollary 8.2 Synthesized by near-optimal protocol (8.6), fourth-order nonlinear multi-agent system (8.2) with $\sigma = 4$ exponentially converges to consensus when $T > 2.5\check{\lambda}^{-1/6}$, where $\check{\lambda}$ denotes the least positive eigenvalue of $L$.

Proof When $\sigma = 4$, Eq. (8.12) becomes

$$s^4 + (\chi_i\alpha_3 + \beta_3)s^3 + (\chi_i\alpha_2 + \beta_2)s^2 + (\chi_i\alpha_1 + \beta_1)s + \frac{T^2}{20}\chi_i = 0. \qquad (8.15)$$

Recall that $\chi_i > 0$, $\alpha_j > 0$, $\beta_j > 0$ and $T > 0$. From the definitions of $\alpha_j$ and $\beta_j$ shown in Eq. (8.8), one has

$$\alpha_1 = \frac{17T^3}{576}, \quad \alpha_2 = \frac{41T^4}{5040}, \quad \alpha_3 = \frac{T^5}{864}, \quad \beta_1 = \frac{21}{2T^3}, \quad \beta_2 = \frac{42}{5T^2}, \quad \beta_3 = \frac{7}{2T}.$$

Then, one has $\beta_2\beta_3 = 147/(5T^3) > \beta_1$ and $\alpha_3\beta_2 + \alpha_2\beta_3 = 77T^3/2016 > \alpha_1$. It follows that $(\chi_i\alpha_3 + \beta_3)(\chi_i\alpha_2 + \beta_2) = \chi_i^2\alpha_2\alpha_3 + (\alpha_3\beta_2 + \alpha_2\beta_3)\chi_i + \beta_2\beta_3 > \chi_i\alpha_1 + \beta_1$. Let $\check{\lambda} = \min\{\chi_2, \chi_3, \cdots, \chi_N\} = \min\{\lambda_2, \lambda_3, \cdots, \lambda_N\}$, i.e., $\check{\lambda}$ denotes the least positive eigenvalue of $L$. Given that $T > 2.5\chi_i^{-1/6}$, one has $(\chi_i\alpha_3 + \beta_3)(\chi_i\alpha_2 + \beta_2)(\chi_i\alpha_1 + \beta_1) > (\chi_i\alpha_1 + \beta_1)^2 + (\chi_i\alpha_3 + \beta_3)^2T^2\chi_i/20$. Recall that $\chi_1 = N\varepsilon$, where $\varepsilon > 0$ can be arbitrarily chosen. Let $\lambda_2/N < \varepsilon < \lambda_N/N$. Then, given that $T > 2.5\check{\lambda}^{-1/6}$, one thus has $(\chi_i\alpha_3 + \beta_3)(\chi_i\alpha_2 + \beta_2)(\chi_i\alpha_1 + \beta_1) > (\chi_i\alpha_1 + \beta_1)^2 + (\chi_i\alpha_3 + \beta_3)^2T^2\chi_i/20$ for all $i = 1, 2, \cdots, N$. Therefore, given that $T > 2.5\check{\lambda}^{-1/6}$, the coefficients of Eq. (8.15) satisfy the Routh-Hurwitz criterion [37] for all $i = 1, 2, \cdots, N$. According to Theorem 8.1, synthesized by near-optimal protocol (8.6), fourth-order nonlinear multi-agent system (8.2) exponentially converges to consensus. The proof is complete.

Based on Theorem 8.1, we also have the following remark about nominal near-optimal protocol (8.6).

Remark 8.2 From $(L_0\kappa_3 + Q\kappa_4)^{-1}L_0 = L$, one has $L_0(I - L\kappa_3) = \kappa_4QL$. Similar to the proof of Theorem 8.1 based on the spectral theorem [36], when $1 - \kappa_3\lambda_i \neq 0$, i.e., $T \neq ((2\sigma+1)\sigma\sigma!/(2\lambda_i))^{1/(\sigma+2)}$, for all $i = 1, 2, \cdots, N$, all the eigenvalues of matrix $I - L\kappa_3$ are nonzero, which guarantees that matrix $I - L\kappa_3$ is invertible. It follows that, when $T \neq ((2\sigma+1)\sigma\sigma!/(2\lambda_i))^{1/(\sigma+2)}$ for all $i = 1, 2, \cdots, N$, there always exists $L_0 = \kappa_4QL(I - L\kappa_3)^{-1}$.

Theorem 8.2 Nominal near-optimal protocol (8.6) for nonlinear multi-agent system (8.2) with fully-known parameters converges to optimal with time, if all the roots of the following equations are located in the left half-plane for all $i = 1, 2, \cdots, N$:

$$\begin{cases} s^{\sigma} + \sum_{j=1}^{\sigma-1}(\chi_i\alpha_j + \beta_j)s^j + \dfrac{T^2}{\sigma(\sigma+1)}\chi_i = 0, \\ s^{\sigma-1} + \sum_{j=1}^{\sigma-1}\beta_js^{j-1} = 0, \end{cases}$$

where $\alpha_j$ and $\beta_j$ are defined in Eq. (8.8), and $\chi_1 = N\varepsilon$ with $\varepsilon > 0 \in \mathbb{R}$, $\chi_k = \lambda_k$ for $k = 2, 3, \cdots, N$ with $\lambda_k$ being the positive eigenvalues of $L$.

Proof By Taylor expansion, performance index $J(t)$ shown in Eq. (8.3) can be rewritten as

$$\begin{aligned} J(t) = {} & \int_0^T \Big(X_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}g_m(x_m(t))u(t) + \Theta_1(t)\Big)^T L_0 \Big(X_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}g_m(x_m(t))u(t) + \Theta_1(t)\Big)\mathrm{d}\tau \\ & + \int_0^T \Big(X_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}g_m(x_m(t))u(t) + \Theta_2(t)\Big)^T Q \Big(X_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}g_m(x_m(t))u(t) + \Theta_2(t)\Big)\mathrm{d}\tau, \end{aligned}$$

where $\Theta_1(t) = \tau^{\sigma}(x_m^{(\sigma)}(t+\epsilon\tau) - x_m^{(\sigma)}(t))/\sigma!$ and $\Theta_2(t) = \tau^{\sigma-1}(x_m^{(\sigma)}(t+\epsilon\tau) - x_m^{(\sigma)}(t))/(\sigma-1)!$ with $\epsilon \in (0, 1)$. Recall approximated performance index $\hat{J}(t)$ shown in Eq. (8.4). By the triangle inequality [38], one further has

$$J(t) \leq 2\hat{J}(t) + 2\int_0^T\Theta_1^T(t)L_0\Theta_1(t)\mathrm{d}\tau + 2\int_0^T\Theta_2^T(t)Q\Theta_2(t)\mathrm{d}\tau = 2\hat{J}(t) + 2T\Theta_1^T(t)L_0\Theta_1(t) + 2T\Theta_2^T(t)Q\Theta_2(t).$$

Since nominal near-optimal protocol (8.6) minimizes convex function $\hat{J}(t)$, one has $\hat{J}(t) = 0$ when nominal near-optimal protocol (8.6) is adopted. It follows that $J(t) \leq 2T\Theta_1^T(t)L_0\Theta_1(t) + 2T\Theta_2^T(t)Q\Theta_2(t)$. Substituting Eq. (8.7) into the last equation of (8.2) yields $x_m^{(\sigma)}(t) + \sum_{j=1}^{\sigma-1}\beta_jx_m^{(j)}(t) = \delta(t)$, where $\alpha_j$ and $\beta_j$ are defined in Eq. (8.8), and $\delta(t) = -T^2Lx_m(t)/(\sigma(\sigma+1)) - L\sum_{j=1}^{\sigma-1}\alpha_jx_m^{(j)}(t)$. Let $z_m(t) = \dot{x}_m(t)$. Then, we have $z_m^{(\sigma-1)}(t) + \sum_{j=1}^{\sigma-1}\beta_jz_m^{(j-1)}(t) = \delta(t)$. When $\delta(t) = 0$, one has

$$z_m^{(\sigma-1)}(t) + \sum_{j=1}^{\sigma-1}\beta_jz_m^{(j-1)}(t) = 0, \qquad (8.16)$$

of which each subsystem has the same characteristic equation:

$$s^{\sigma-1} + \sum_{j=1}^{\sigma-1}\beta_js^{j-1} = 0. \qquad (8.17)$$

By linear system theory [37], if all the roots of Eq. (8.17) are located in the left half-plane, then system (8.16) is exponentially asymptotically stable. From Theorem 8.1, if all the roots of the following equation are located in the left half-plane for all $i = 1, 2, \cdots, N$:

$$s^{\sigma} + \sum_{j=1}^{\sigma-1}(\chi_i\alpha_j + \beta_j)s^j + \frac{T^2}{\sigma(\sigma+1)}\chi_i = 0,$$

where $\chi_1 = N\varepsilon$ with $\varepsilon > 0 \in \mathbb{R}$ and $\chi_k = \lambda_k$ for $k = 2, 3, \cdots, N$ with $\lambda_k$ being the positive eigenvalues of $L$, then $\lim_{t\to+\infty}Lx_m(t) = 0$, yielding $\lim_{t\to+\infty}\delta(t) = 0$. By the bounded-input bounded-output stability of linear systems [37], system $x_m^{(\sigma)}(t) + \sum_{j=1}^{\sigma-1}\beta_jx_m^{(j)}(t) = \delta(t)$ is asymptotically stable with $z_m(t) = \dot{x}_m(t)$ converging to zero. It follows that $\lim_{t\to+\infty}x_m^{(\sigma)}(t) = 0$ and $\lim_{t\to+\infty}x_m^{(\sigma)}(t+\epsilon\tau) = 0$, yielding $\lim_{t\to+\infty}\Theta_1(t) = 0$ and $\lim_{t\to+\infty}\Theta_2(t) = 0$. Then, $\lim_{t\to+\infty}(2T\Theta_1^T(t)L_0\Theta_1(t) + 2T\Theta_2^T(t)Q\Theta_2(t)) = 0$. Note that $J(t) \geq 0$. Recall that $J(t) \leq 2T\Theta_1^T(t)L_0\Theta_1(t) + 2T\Theta_2^T(t)Q\Theta_2(t)$. According to the pinching theorem [39], $\lim_{t\to+\infty}J(t) = 0$. The proof is complete.
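The Routh-Hurwitz arguments above can be spot-checked by computing the roots of the characteristic polynomial directly. The sketch below does this for $\sigma = 3$, i.e., Eq. (8.14), using the closed-form $\alpha_j$, $\beta_j$ values derived in the proof of Corollary 8.1 (the function name is ours):

```python
import numpy as np

def char_roots_sigma3(chi, T):
    """Roots of Eq. (8.14): s^3 + (chi*a2+b2)s^2 + (chi*a1+b1)s + T^2*chi/12."""
    a1, a2 = 38.0 * T**3 / 945.0, T**4 / 126.0    # alpha_1, alpha_2 (sigma = 3)
    b1, b2 = 10.0 / (3.0 * T**2), 10.0 / (4.0 * T)  # beta_1, beta_2 (sigma = 3)
    return np.roots([1.0, chi * a2 + b2, chi * a1 + b1, T**2 * chi / 12.0])

# Corollary 8.1 predicts stability for any chi > 0 and T > 0; spot-check a grid.
for chi in [0.1, 1.0, 5.0, 50.0]:
    for T in [0.5, 1.0, 4.0]:
        assert (char_roots_sigma3(chi, T).real < 0).all()
```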

8.4 Adaptive Design

In this section, we further consider the situation where the parameters of multi-agent system (8.2) are unknown, i.e., the parameters of $f_i(x_i)$ and $g_i(x_i)$ are unknown. Specifically, the $i$th agent of multi-agent system (8.2) can be reformulated as follows:

$$\begin{cases} \dot{x}_i^{(0)} = x_i^{(1)}, \\ \quad\vdots \\ \dot{x}_i^{(\sigma-2)} = x_i^{(\sigma-1)}, \\ \dot{x}_i^{(\sigma-1)} = w_i^T\phi_i(x_i) + b_i^T\varphi_i(x_i)u_i, \end{cases} \qquad (8.18)$$

where $w_i$ and $b_i$ are unknown constant parameter vectors; $\phi_i(x_i)$ and $\varphi_i(x_i)$ are basis function vectors. It follows that multi-agent system (8.2) can be reformulated as

$$\begin{cases} \dot{x}_m^{(0)} = x_m^{(1)}, \\ \quad\vdots \\ \dot{x}_m^{(\sigma-2)} = x_m^{(\sigma-1)}, \\ \dot{x}_m^{(\sigma-1)} = W\phi(x_m) + B\varphi(x_m)u, \end{cases} \qquad (8.19)$$

with

$$W = \begin{bmatrix} w_1^T & 0 & \cdots & 0 \\ 0 & w_2^T & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & w_N^T \end{bmatrix}, \qquad B = \begin{bmatrix} b_1^T & 0 & \cdots & 0 \\ 0 & b_2^T & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_N^T \end{bmatrix}$$

being unknown parameter matrices. Besides, $\phi(x_m) = [\phi_1^T(x_1), \phi_2^T(x_2), \cdots, \phi_N^T(x_N)]^T$ and

$$\varphi(x_m) = \begin{bmatrix} \varphi_1(x_1) & 0 & \cdots & 0 \\ 0 & \varphi_2(x_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \varphi_N(x_N) \end{bmatrix}$$

are basis function matrices. To handle the parameter uncertainty, we design the following auxiliary system to reconstruct the input-output property of system (8.2):

$$\begin{cases} \hat{x}_m^{(\sigma)}(t) = \hat{W}(t)\phi(x_m(t)) + \hat{B}(t)\varphi(x_m(t))u(t) - \sum_{c=0}^{\sigma-2}\nu_c\tilde{x}_m^{(c+1)}(t) - \gamma s(t), \\ \dot{\hat{W}}(t) = -K_1s(t)\phi^T(x_m(t)), \\ \dot{\hat{B}}(t) = -K_2s(t)u^T(t)\varphi^T(x_m(t)), \end{cases} \qquad (8.20)$$

where $s(t) = \sum_{c=0}^{\sigma-1}\nu_c\tilde{x}_m^{(c)}(t)$ with $\nu_{\sigma-1} = 1$; $\tilde{x}_m(t) = \hat{x}_m(t) - x_m(t)$ with $\hat{x}_m(t)$ being the auxiliary state vector; $\hat{W}(t)$ and $\hat{B}(t)$ are auxiliary parameter matrices; $K_1 \in \mathbb{R}^{N\times N}$ and $K_2 \in \mathbb{R}^{N\times N}$ are diagonal positive-definite gain matrices; $\gamma > 0 \in \mathbb{R}$ is a parameter used to scale the state feedback. The auxiliary system is aided by the concept of sliding-mode control with the sliding surface being $s(t) = 0$, and is thus called a sliding-mode auxiliary system. Via properly choosing parameters $\nu_c$ for $c = 0, 1, 2, \cdots, \sigma-2$, it can be guaranteed that, on the sliding surface $s(t) = 0$, equilibrium point $\tilde{x}_m(t) = 0$ is exponentially asymptotically stable [37].

Remark 8.3 Auxiliary system (8.20) in the compact form is only used for the convenience of theoretical analysis. In applications and simulations, for the $i$th agent, the corresponding auxiliary system is

$$\begin{cases} \hat{x}_i^{(\sigma)}(t) = \hat{w}_i^T(t)\phi_i(x_i(t)) + \hat{b}_i^T(t)\varphi_i(x_i(t))u_i(t) - \sum_{c=0}^{\sigma-2}\nu_c\tilde{x}_i^{(c+1)}(t) - \gamma s_i(t), \\ \dot{\hat{w}}_i^T(t) = -k_{1ii}s_i(t)\phi_i^T(x_i(t)), \\ \dot{\hat{b}}_i^T(t) = -k_{2ii}s_i(t)u_i(t)\varphi_i^T(x_i(t)), \end{cases} \qquad (8.21)$$
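A minimal numerical sketch of one Euler step of the per-agent auxiliary system (8.21) for $\sigma = 2$, with the basis functions passed in as callables; all names and the specific discretization are illustrative, not the book's implementation:

```python
import numpy as np

def aux_step(xhat, w_hat, b_hat, x, u, phi, varphi, dt,
             nu0=1.0, gamma=2.0, k1=1.5, k2=1.5):
    """One Euler step of the sliding-mode auxiliary system (8.21), sigma = 2.

    s = nu0 * x_tilde^(0) + x_tilde^(1)   (since nu_{sigma-1} = 1)
    """
    xt = xhat - x                           # state estimation error x_tilde
    s = nu0 * xt[0] + xt[1]
    # Second derivative of the auxiliary state per the first line of (8.21):
    xhat2 = w_hat @ phi(x) + (b_hat @ varphi(x)) * u - nu0 * xt[1] - gamma * s
    xhat_new = xhat + dt * np.array([xhat[1], xhat2])
    w_new = w_hat - dt * k1 * s * phi(x)    # adaptation law for w_hat
    b_new = b_hat - dt * k2 * s * u * varphi(x)  # adaptation law for b_hat
    return xhat_new, w_new, b_new
```

Note that `aux_step` touches only the $i$th agent's own input and state, matching the remark that each auxiliary system is independent of the other agents.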

where $k_{1ii}$ and $k_{2ii}$ denote the $i$th diagonal elements of matrices $K_1$ and $K_2$, respectively. Evidently, the auxiliary system of each agent is independent of the other agents.

The following theorem guarantees the performance of sliding-mode auxiliary system (8.20) in reconstructing the input-output property of multi-agent system (8.2).

Theorem 8.3 The input-output property of sliding-mode auxiliary system (8.20) asymptotically converges to that of multi-agent system (8.2) with fully-unknown parameters with time.

Proof Let $\tilde{W}(t) = \hat{W}(t) - W$ and $\tilde{B}(t) = \hat{B}(t) - B$. Consider the following candidate Lyapunov function:

$$V_1(t) = \frac{1}{2}s^T(t)s(t) + \frac{1}{2}\mathrm{tr}(\tilde{W}^T(t)K_1^{-1}\tilde{W}(t)) + \frac{1}{2}\mathrm{tr}(\tilde{B}^T(t)K_2^{-1}\tilde{B}(t)).$$

The first equation of (8.20) can be written as

$$\hat{x}_m^{(\sigma)}(t) = \hat{W}(t)\phi(x_m(t)) + \hat{B}(t)\varphi(x_m(t))u(t) - \dot{s}(t) - \gamma s(t) + \tilde{x}_m^{(\sigma)}(t). \qquad (8.22)$$

Recall that multi-agent system (8.2) can be described by Eq. (8.19). Subtracting Eq. (8.19) from Eq. (8.22) yields

$$\dot{s}(t) = -\gamma s(t) + \tilde{W}(t)\phi(x_m(t)) + \tilde{B}(t)\varphi(x_m(t))u(t). \qquad (8.23)$$

Based on Eqs. (8.23) and (8.20), the following result is obtained:

$$\begin{aligned} \dot{V}_1(t) = {} & -\gamma s^T(t)s(t) + s^T(t)\tilde{W}(t)\phi(x_m(t)) + s^T(t)\tilde{B}(t)\varphi(x_m(t))u(t) \\ & - \mathrm{tr}(\tilde{W}^T(t)s(t)\phi^T(x_m(t))) - \mathrm{tr}(\tilde{B}^T(t)s(t)u^T(t)\varphi^T(x_m(t))) \\ = {} & -\gamma\|s(t)\|_2^2 + \mathrm{tr}(s^T(t)\tilde{W}(t)\phi(x_m(t))) + \mathrm{tr}(s^T(t)\tilde{B}(t)\varphi(x_m(t))u(t)) \\ & - \mathrm{tr}(\tilde{W}^T(t)s(t)\phi^T(x_m(t))) - \mathrm{tr}(\tilde{B}^T(t)s(t)u^T(t)\varphi^T(x_m(t))). \end{aligned}$$

Besides, according to the properties of the trace operation [40], one has $\mathrm{tr}(s^T(t)\tilde{W}(t)\phi(x_m(t))) = \mathrm{tr}(\phi(x_m(t))s^T(t)\tilde{W}(t)) = \mathrm{tr}(\tilde{W}^T(t)s(t)\phi^T(x_m(t)))$ and $\mathrm{tr}(s^T(t)\tilde{B}(t)\varphi(x_m(t))u(t)) = \mathrm{tr}(\varphi(x_m(t))u(t)s^T(t)\tilde{B}(t)) = \mathrm{tr}(\tilde{B}^T(t)s(t)u^T(t)\varphi^T(x_m(t)))$, which yields $\dot{V}_1(t) = -\gamma\|s(t)\|_2^2$. Evidently, $\dot{V}_1(t) \leq 0$. Define set $S = \{\tilde{x}_m(t) \in \mathbb{R}^N \mid \dot{V}_1(t) = 0\}$. From $\dot{V}_1(t) = 0$, one readily has $S = \{\tilde{x}_m(t) \in \mathbb{R}^N \mid s(t) = 0\}$. Recall that $s(t) = \sum_{c=0}^{\sigma-1}\nu_c\tilde{x}_m^{(c)}(t)$. Given that the coefficients $\nu_c$ for $c = 0, 1, 2, \cdots, \sigma-1$ with $\nu_{\sigma-1} = 1$ satisfy the Routh-Hurwitz stability criterion [37], one has $\lim_{t\to+\infty}\tilde{x}_m(t) = 0$. In other words, no solution can stay identically in set $S$ other than the trivial solution $\tilde{x}_m(t) = 0$. By LaSalle's invariance principle [41], equilibrium point $\tilde{x}_m(t) = 0$ is asymptotically stable. It follows that, when $t \to +\infty$,

$$x_m^{(\sigma)}(t) = \hat{x}_m^{(\sigma)}(t) = \hat{W}(t)\phi(x_m(t)) + \hat{B}(t)\varphi(x_m(t))u(t). \qquad (8.24)$$

In other words, the input-output property of sliding-mode auxiliary system (8.20) asymptotically converges to that of multi-agent system (8.2) with fully-unknown parameters with time. The proof is complete.

Since parameter matrices $\hat{W}(t)$ and $\hat{B}(t)$ are generated by sliding-mode auxiliary system (8.20), they are totally known. In this sense, for multi-agent system (8.2) with fully-unknown parameters, $\hat{J}(t)$ shown in Eq. (8.4) is further approximated as follows:

$$\begin{aligned} \hat{J}(t) \approx \bar{J}(t) = {} & \int_0^T \Big(\hat{X}_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}\hat{B}(t)\varphi(x_m(t))u(t)\Big)^T L_0 \Big(\hat{X}_m(t)w_1(\tau) + \frac{\tau^{\sigma}}{\sigma!}\hat{B}(t)\varphi(x_m(t))u(t)\Big)\mathrm{d}\tau \\ & + \int_0^T \Big(\hat{X}_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}\hat{B}(t)\varphi(x_m(t))u(t)\Big)^T Q \Big(\hat{X}_m(t)w_2(\tau) + \frac{\tau^{\sigma-1}}{(\sigma-1)!}\hat{B}(t)\varphi(x_m(t))u(t)\Big)\mathrm{d}\tau, \end{aligned} \qquad (8.25)$$

where $\hat{X}_m(t) = [x_m(t), \cdots, x_m^{(\sigma-1)}(t), \hat{W}(t)\phi(x_m(t))]$. Following steps similar to those in Sect. 8.3, a protocol minimizing performance index $\bar{J}(t)$ is obtained as follows:

$$\begin{aligned} u(t) = {} & (\hat{B}(t)\varphi(x_m(t)))^{-1}\Bigg(L\Bigg(-\sum_{j=0}^{\sigma-1}\frac{T^{j+2}}{(\sigma+j+1)\sigma j!}x_m^{(j)}(t) + \sum_{j=1}^{\sigma-1}\frac{T^{j+2}(2\sigma-1)}{\sigma^2(2\sigma+1)(\sigma+j-1)(j-1)!}x_m^{(j)}(t)\Bigg) \\ & - \hat{W}(t)\phi(x_m(t)) - \sum_{j=1}^{\sigma-1}\frac{T^{j-\sigma}(2\sigma-1)(\sigma-1)!}{(\sigma+j-1)(j-1)!}x_m^{(j)}(t)\Bigg). \end{aligned} \qquad (8.26)$$

Since $\hat{B}(t)\varphi(x_m(t))$ is a diagonal matrix, it is evident that $(\hat{B}(t)\varphi(x_m(t)))^{-1}$ is also a diagonal matrix. It follows that adaptive near-optimal protocol (8.26) is a distributed protocol. The block diagram shown in Fig. 8.1 gives an intuitive description of the distributed adaptive near-optimal approach for the consensus of nonlinear multi-agent system (8.2) consisting of heterogeneous agents with fully-unknown parameters. Specifically, auxiliary system (8.21) receives the input information and state information of the $i$th agent and adaptively reconstructs the input-output property of the agent. Meanwhile, distributed protocol (8.26) generates the input signal $u_i(t)$ for the $i$th agent based on the parameters of auxiliary system (8.21) and the state information of the $i$th agent and its communication neighbours. For adaptive near-optimal protocol (8.26), we have the following theorems.

Theorem 8.4 If all the roots of Eq. (8.12) are located in the left half-plane for all $i = 1, 2, \cdots, N$, then, synthesized by adaptive near-optimal protocol (8.26) and sliding-mode auxiliary system (8.20), nonlinear multi-agent system (8.2) with fully-unknown parameters exponentially converges to consensus.

Fig. 8.1 Block diagram for the $i$th agent of multi-agent system (8.2) synthesized by adaptive near-optimal protocol (8.26) and auxiliary system (8.21), where $X_\pi(t) = \{x_j(t) \mid j \in \mathbb{N}(i)\}$, i.e., the state set of the neighbors of the $i$th agent in the communication graph defined by Laplacian matrix $L$. (In the diagram, auxiliary system (8.21) feeds $\hat{w}_i^T(t)$ and $\hat{b}_i^T(t)$ to protocol (8.26), which, together with $X_\pi(t)$ and $x_i(t)$, produces the input $u_i(t)$ of agent $i$.)

Proof Nonlinear multi-agent system (8.2) can be reformulated as

$$x_m^{(\sigma)} = W\phi(x_m) + B\varphi(x_m)u(t). \qquad (8.27)$$

Let $\tilde{W}(t) = \hat{W}(t) - W$ and $\tilde{B}(t) = \hat{B}(t) - B$. Then, Eq. (8.27) can be further rewritten as

$$x_m^{(\sigma)} = \hat{W}(t)\phi(x_m) + \hat{B}(t)\varphi(x_m)u(t) + \eta(t), \qquad (8.28)$$

where $\eta(t) = -(\tilde{W}(t)\phi(x_m) + \tilde{B}(t)\varphi(x_m)u(t))$. Besides, adaptive near-optimal protocol (8.26) can be rewritten as

$$u(t) = (\hat{B}(t)\varphi(x_m(t)))^{-1}\Bigg(-\frac{T^2}{\sigma(\sigma+1)}Lx_m(t) - L\sum_{j=1}^{\sigma-1}\alpha_jx_m^{(j)}(t) - \hat{W}(t)\phi(x_m(t)) - \sum_{j=1}^{\sigma-1}\beta_jx_m^{(j)}(t)\Bigg), \qquad (8.29)$$

with $\alpha_j$ and $\beta_j$ defined in Eq. (8.8). Substituting Eq. (8.29) into Eq. (8.28) yields

$$x_m^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Lx_m(t) - L\sum_{j=1}^{\sigma-1}\alpha_jx_m^{(j)}(t) - \sum_{j=1}^{\sigma-1}\beta_jx_m^{(j)}(t) + \eta(t).$$

Let $e(t) = Lx_m(t)$. One further has

$$e^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Le(t) - \sum_{j=1}^{\sigma-1}(\alpha_jL + \beta_jI)e^{(j)}(t) + \eta(t). \qquad (8.30)$$

Based on Properties 8.1 and 8.2 of Laplacian matrices, one has $L = L^T$ and $L\mathbf{1} = 0$, where $\mathbf{1} \in \mathbb{R}^N$ denotes the column vector with each element being one. It follows that $\mathbf{1}^TL = 0$

and thus $\varepsilon\mathbf{1}\mathbf{1}^Te(t) = 0$ with $\varepsilon > 0$. Then, one has $Le(t) = (L + \varepsilon\mathbf{1}\mathbf{1}^T)e(t)$. Let $A = L + \varepsilon\mathbf{1}\mathbf{1}^T$. Equation (8.30) can then be rewritten as

$$e^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Ae(t) - \sum_{j=1}^{\sigma-1}(A\alpha_j + \beta_jI)e^{(j)}(t) + \eta(t). \qquad (8.31)$$

When $\eta(t) = 0$, closed-loop system (8.31) becomes

$$e^{(\sigma)}(t) = -\frac{T^2}{\sigma(\sigma+1)}Ae(t) - \sum_{j=1}^{\sigma-1}(A\alpha_j + \beta_jI)e^{(j)}(t). \qquad (8.32)$$

According to the proof of Theorem 8.1, if all the roots of Eq. (8.12) are located in the left half-plane for all $i = 1, 2, \cdots, N$, then system (8.32) is exponentially asymptotically stable. According to Theorem 8.3,

$$\lim_{t\to+\infty}(\tilde{W}(t)\phi(x_m(t)) + \tilde{B}(t)\varphi(x_m(t))u(t)) = 0.$$

It follows that $\lim_{t\to+\infty}\eta(t) = 0$. By the bounded-input bounded-output stability of linear systems [37], closed-loop system (8.31) is thus exponentially stable. The proof is complete.

Theorem 8.5 Adaptive near-optimal protocol (8.26) for nonlinear multi-agent system (8.2) with fully-unknown parameters converges to optimal with time, if all the roots of Eqs. (8.12) and (8.17) are located in the left half-plane for all $i = 1, 2, \cdots, N$.


Proof Recall equation (8.19) describing nonlinear multi-agent system (8.2) with fully-unknown parameters. By Taylor expansion, performance index J (t) shown in Eq. (8.3) can be rewritten as T  τσ ˆ Xˆ m (t)w1 (τ ) + B(t)ϕ(xm (t))u(t) + Θ1 (t) + Θ3 (t) L0 Xˆ m (t)w1 (τ ) σ! 0   T τ σ −1 τσ ˆ Xˆ m (t)w2 (τ ) + + B(t)ϕ(xm (t))u(t) + Θ1 (t) + Θ3 (t) dτ + σ! (σ − 1)! 0 T  ˆ × B(t)ϕ(x m (t))u(t) + Θ2 (t) + Θ4 (t) Q Xm (t)w2 (τ )

J (t) =

 T

+

 τ σ −1 ˆ B(t)ϕ(xm (t))u(t) + Θ2 (t) + Θ4 (t) dτ (σ − 1)!

where ) (σ ) Θ1 (t) = τ σ (x(σ m (t + τ ) − xm (t))/σ !, ) (σ ) Θ2 (t) = τ σ −1 (x(σ m (t + τ ) − xm (t))/(σ − 1)!,

˜ Θ3 (t) = −τ σ (W˜ (t)φ(xm (t)) + B(t)ϕ(x m (t))u(t))/σ !, and ˜ Θ4 (t) = −τ σ −1 (W˜ (t)φ(xm (t)) + B(t)ϕ(x m (t))u(t))/(σ − 1)!. Recall performance index J¯(t) shown in Eq. (8.25). By triangle inequality [38], one further has J (t) ≤ 2J¯(t) + 2T Θ1T (t)L0 Θ1 (t) + 2T Θ3T (t)L0 Θ3 (t) + 2Θ3T (t)QΘ3 (t) + 2T Θ4T (t)QΘ4 (t) From Theorem 8.3, one has ˜ lim (W˜ (t)φ(xm (t)) + B(t)ϕ(x m (t))u(t)) = 0,

t →+∞

yielding lim Θ3 (t) = 0

t →+∞

8.5 Computer Simulations

177

and \lim_{t\to+\infty}\Theta_4(t) = 0. Similar to the proof of Theorem 8.2, from Theorem 8.4 one has

\lim_{t\to+\infty} x_m^{(\sigma)}(t) = 0,

yielding \lim_{t\to+\infty}\Theta_1(t) = 0 and \lim_{t\to+\infty}\Theta_2(t) = 0, if all the roots of Eqs. (8.12) and (8.17) lie in the left half-plane for all i = 1, 2, · · · , N. Recall that adaptive near-optimal protocol (8.26) minimizes the convex performance index J̄(t), i.e., J̄(t) = 0 when adaptive near-optimal protocol (8.26) is adopted. In addition, J(t) ≥ 0. According to the pinching (squeeze) theorem [39],

\lim_{t\to+\infty} J(t) = 0,

indicating that adaptive near-optimal protocol (8.26) for nonlinear multi-agent system (8.2) with fully-unknown parameters converges to optimality over time. The proof is complete.
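The triangle-inequality step used in the proof above reduces to the elementary quadratic-form bound (a + b)ᵀM(a + b) ≤ 2aᵀMa + 2bᵀMb for any positive semidefinite weight M (here L₀ or Q). As a minimal sanity check, not part of the book's material, the inequality can be verified numerically on random vectors (all names and dimensions below are arbitrary):

```python
import numpy as np

def quad(M, v):
    """Evaluate the quadratic form v^T M v."""
    return float(v @ M @ v)

rng = np.random.default_rng(0)
n = 5
# Random symmetric positive semidefinite weight, standing in for L0 or Q.
A = rng.standard_normal((n, n))
M = A.T @ A

for _ in range(1000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    # (a + b)^T M (a + b) <= 2 a^T M a + 2 b^T M b, since M is PSD.
    assert quad(M, a + b) <= 2 * quad(M, a) + 2 * quad(M, b) + 1e-9
```

Applying this bound to each integrand of J(t), with a being the J̄(t)-part and b the remainder terms Θ, gives the inequality stated in the proof.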

8.5 Computer Simulations

In this section, an illustrative example of a third-order nonlinear multi-agent system consisting of 10 heterogeneous agents with fully-unknown parameters is presented to substantiate the efficacy of the adaptive near-optimal distributed consensus approach.


Consider a third-order (i.e., σ = 3) nonlinear multi-agent system (8.2) consisting of 10 agents with heterogeneous dynamics

x_1^{(3)} = 3\dot{x}_1\sin(x_1) + 8\ddot{x}_1 + 10u_1,
x_2^{(3)} = 8\cos(\dot{x}_2) - 3\ddot{x}_2 + 12(\cos(x_2) + 2)u_2,
x_3^{(3)} = 2x_3\ddot{x}_3 - 2\dot{x}_3 + 4\sin(\ddot{x}_3) + 7(4 + \sin(x_3))u_3,
x_4^{(3)} = 2\sin(x_4) + 9(\cos(x_4) + 2)u_4,
x_5^{(3)} = -3\ddot{x}_5\sin(\dot{x}_5) + 4x_5 + 5(x_5^2 + 0.2)u_5,        (8.33)
x_6^{(3)} = 6\dot{x}_6^2 - 2x_6 + 14(x_6^2 + 0.2)u_6,
x_7^{(3)} = 2\cos(x_7) - 3\dot{x}_7 + 9(\cos(x_7) + 2)u_7,
x_8^{(3)} = 9\sin(x_8) + 18(1.1 + \sin(x_8\dot{x}_8))u_8,
x_9^{(3)} = 7\sin(\dot{x}_9) + 4x_9\dot{x}_9 + 15u_9,
x_{10}^{(3)} = 2x_{10}\dot{x}_{10} + \big(6(\cos(x_{10}) + 2) + 2(\sin(\dot{x}_{10}) + 3)\big)u_{10}.

Note that the parameters of the multi-agent system are only used to generate the actual system states in the simulation; they are not used in the adaptive near-optimal protocol. The communication graph of the multi-agent system is given in Fig. 8.2.
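To make the structure of (8.33) concrete, a few of the agent dynamics can be written down as right-hand-side functions and propagated with a simple forward-Euler step. This is only an illustrative sketch of how the third-order chains x^{(3)} = f(x, ẋ, ẍ, u) would be simulated; the function names and step size are arbitrary and this is not the book's simulation code:

```python
import numpy as np

# Right-hand sides of a few representative agents in (8.33):
# each returns the third derivative x^(3) given (x, dx, ddx) and input u.
def agent1(x, dx, ddx, u):
    return 3 * dx * np.sin(x) + 8 * ddx + 10 * u

def agent2(x, dx, ddx, u):
    return 8 * np.cos(dx) - 3 * ddx + 12 * (np.cos(x) + 2) * u

def agent4(x, dx, ddx, u):
    return 2 * np.sin(x) + 9 * (np.cos(x) + 2) * u

def euler_step(f, state, u, h=1e-3):
    """One forward-Euler step of the chain x^(3) = f(x, dx, ddx, u)."""
    x, dx, ddx = state
    d3x = f(x, dx, ddx, u)
    return (x + h * dx, dx + h * ddx, ddx + h * d3x)

# Propagate agent 1 open-loop (u = 0) from an arbitrary initial state.
state = (0.5, 0.0, 0.0)
for _ in range(1000):
    state = euler_step(agent1, state, u=0.0)
```

Each agent keeps the affine-in-input form x^{(3)} = f_i(x_i) + g_i(x_i)u_i assumed by system (8.2), which is what allows the basis-function parameterization introduced below.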

Fig. 8.2 A graph describing the communication relationship among the agents


The corresponding basis function vectors of the agents of multi-agent system (8.33) are

φ_1(x_1) = [\dot{x}_1\sin(x_1), \ddot{x}_1]^T,  φ_2(x_2) = [\cos(\dot{x}_2), \ddot{x}_2]^T,
φ_3(x_3) = [x_3\ddot{x}_3, \dot{x}_3, \sin(\ddot{x}_3)]^T,  φ_4(x_4) = \sin(x_4),
φ_5(x_5) = [\ddot{x}_5\sin(\dot{x}_5), x_5]^T,  φ_6(x_6) = [\dot{x}_6^2, x_6]^T,
φ_7(x_7) = [\cos(x_7), \dot{x}_7]^T,  φ_8(x_8) = \sin(x_8),
φ_9(x_9) = [\sin(\dot{x}_9), x_9\dot{x}_9]^T,  φ_10(x_10) = x_{10}\dot{x}_{10},

and

ϕ_1(x_1) = 1,  ϕ_2(x_2) = \cos(x_2) + 2,  ϕ_3(x_3) = 1.1 + \sin(x_3),
ϕ_4(x_4) = \cos(x_4) + 2,  ϕ_5(x_5) = x_5^2 + 0.2,  ϕ_6(x_6) = x_6^2 + 0.2,
ϕ_7(x_7) = \cos(x_7) + 2,  ϕ_8(x_8) = 1.1 + \sin(x_8\dot{x}_8),  ϕ_9(x_9) = 1,
ϕ_10(x_10) = [\cos(x_{10}) + 2, \sin(\dot{x}_{10}) + 3]^T.

The parameters of the sliding-mode auxiliary system are set as γ = 2, K_1 = K_2 = 1.5I with I being the 10 × 10 identity matrix, ν_0 = 6.25, ν_1 = 5, and ν_2 = 1. Parameter T of performance index J(t) is set as T = 6 s. Each element of initial state x_m(0) ∈ R^30 is randomly generated in the interval (−2, 2), and x̂_m(0) is set the same as x_m(0). The initial value of each element of the auxiliary parameter vectors ŵ_i(t) and b̂_i(t) with i = 1, 2, · · · , N is randomly generated in the interval (0, 5). Under the above setting, the consensus results of nonlinear multi-agent system (8.33) synthesized by adaptive near-optimal protocol (8.26) and sliding-mode auxiliary system (8.20) are shown in Fig. 8.3. Evidently, the agents of multi-agent system (8.33) successfully achieve consensus. In addition, the corresponding control inputs u_i are shown in Fig. 8.4.
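The initialization described above can be sketched as follows. This is an illustrative reconstruction of the stated experimental setup, not the authors' code; the variable names and the per-agent dimensions (read off from the basis-function vectors listed above) are the only assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10       # number of agents
sigma = 3    # system order, so the stacked state x_m lives in R^{3N} = R^30

# Each element of x_m(0) in R^30 is drawn uniformly from (-2, 2);
# the estimator state is initialized identically to x_m(0).
x_m0 = rng.uniform(-2.0, 2.0, size=sigma * N)
x_hat0 = x_m0.copy()

# Each element of the auxiliary parameter estimates w_i(0), b_i(0) is
# drawn uniformly from (0, 5); dimensions follow phi_i and varphi_i.
phi_dims = [2, 2, 3, 1, 2, 2, 2, 1, 2, 1]   # len(phi_1), ..., len(phi_10)
b_dims   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 2]   # len(varphi_1), ..., len(varphi_10)
w0 = [rng.uniform(0.0, 5.0, size=d) for d in phi_dims]
b0 = [rng.uniform(0.0, 5.0, size=d) for d in b_dims]
```

With these draws in place, the simulation would integrate system (8.33) under protocol (8.26) and auxiliary system (8.20) over the horizon shown in Figs. 8.3 and 8.4.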


Fig. 8.3 State evolutions of the agents with dynamics given in (8.33) with fully-unknown parameters when the adaptive near-optimal protocol (8.26) and the auxiliary systems (8.21) are adopted. (a) x_i. (b) \dot{x}_i. (c) \ddot{x}_i

Fig. 8.4 Agent inputs u_i during the consensus process when the adaptive near-optimal protocol (8.26) and the auxiliary systems (8.21) are adopted


These results substantiate the efficacy of the adaptive near-optimal protocol for high-order nonlinear multi-agent system (8.2) with fully-unknown parameters.

8.6 Conclusions

In this chapter, the near-optimal distributed consensus of high-order nonlinear heterogeneous multi-agent systems has been investigated. Under the condition that the system dynamics of all the agents are fully known, a nominal near-optimal protocol has first been designed by approximating the performance index. Then, based on sliding-mode auxiliary systems, an adaptive near-optimal protocol has been discussed for high-order nonlinear multi-agent systems with fully-unknown parameters. Theoretical analysis has shown that the presented protocols simultaneously guarantee asymptotic optimality of the performance index and exponential convergence of the multi-agent systems to consensus. An illustrative example of a third-order nonlinear multi-agent system consisting of 10 heterogeneous agents has further substantiated the efficacy of the presented adaptive near-optimal distributed consensus approach.

References

1. R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004)
2. G. Ferrari-Trecate, L. Galbusera, M.P.E. Marciandi, R. Scattolini, Model predictive control schemes for consensus in multi-agent systems with single- and double-integrator dynamics. IEEE Trans. Autom. Control 54(11), 2560–2572 (2009)
3. T. Li, J.-F. Zhang, Consensus conditions of multi-agent systems with time-varying topologies and stochastic communication noises. IEEE Trans. Autom. Control 55(9), 2043–2057 (2010)
4. G.S. Seyboth, D.V. Dimarogonas, K.H. Johansson, Event-based broadcasting for multi-agent average consensus. Automatica 49(1), 245–252 (2013)
5. S. Li, H. Du, X. Lin, Finite-time consensus algorithm for multi-agent systems with double-integrator dynamics. Automatica 47(8), 1706–1712 (2011)
6. H. Rezaee, F. Abdollahi, Average consensus over high-order multiagent systems. IEEE Trans. Autom. Control 60(11), 3047–3052 (2015)
7. C.-Q. Ma, J.-F. Zhang, Necessary and sufficient conditions for consensusability of linear multiagent systems. IEEE Trans. Autom. Control 55(5), 1263–1268 (2010)
8. Z. Li, W. Ren, X. Liu, L. Xie, Distributed consensus of linear multi-agent systems with adaptive dynamic protocols. Automatica 49(7), 1986–1995 (2013)
9. L. Cheng, Z.-G. Hou, M. Tan, A mean square consensus protocol for linear multi-agent systems with communication noises and fixed topologies. IEEE Trans. Autom. Control 59(1), 261–267 (2014)
10. H. Li, W. Yan, Receding horizon control based consensus scheme in general linear multi-agent systems. Automatica 56, 12–18 (2015)
11. E. Semsar-Kazerooni, K. Khorasani, Optimal consensus algorithms for cooperative team of agents subject to partial information. Automatica 44, 2766–2777 (2008)


12. E. Semsar-Kazerooni, K. Khorasani, Multi-agent team cooperation: a game theory approach. Automatica 45, 2205–2213 (2009)
13. Q. Song, J. Cao, W. Yu, Second-order leader-following consensus of nonlinear multi-agent systems via pinning control. Syst. Control Lett. 59(9), 553–562 (2010)
14. W. Yu, W. Ren, W.X. Zheng, G. Chen, J. Lü, Distributed control gains design for consensus in multi-agent systems with second-order nonlinear dynamics. Automatica 49(7), 2107–2115 (2013)
15. K. Liu, G. Xie, W. Ren, L. Wang, Consensus for multi-agent systems with inherent nonlinear dynamics under directed topologies. Syst. Control Lett. 62, 152–162 (2013)
16. M.-C. Fan, Z. Chen, H.-T. Zhang, Semi-global consensus of nonlinear second-order multi-agent systems with measurement output feedback. IEEE Trans. Autom. Control 59(8), 2222–2227 (2014)
17. X. Zhang, L. Liu, G. Feng, Leader-follower consensus of time-varying nonlinear multi-agent systems. Automatica 52, 8–14 (2015)
18. L. Zhu, Z. Chen, Robust homogenization and consensus of nonlinear multi-agent systems. Syst. Control Lett. 65, 50–55 (2014)
19. C.-C. Hua, X. You, X.-P. Guan, Leader-following consensus for a class of high-order nonlinear multi-agent systems. Automatica 73, 138–144 (2016)
20. C.L.P. Chen, G.-X. Wen, Y.-J. Liu, F.-Y. Wang, Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1217–1226 (2014)
21. Y. Cao, W. Ren, Finite-time consensus for multi-agent networks with unknown inherent nonlinear dynamics. Automatica 50, 2648–2656 (2014)
22. W. Chen, X. Li, W. Ren, C. Wen, Adaptive consensus of multi-agent systems with unknown identical control directions based on a novel Nussbaum-type function. IEEE Trans. Autom. Control 59(7), 1887–1892 (2014)
23. J. Huang, C. Wen, W. Wang, Y.-D. Song, Adaptive finite-time consensus control of a group of uncertain nonlinear mechanical systems. Automatica 51, 292–301 (2015)
24. C.L.P. Chen, G.-X. Wen, Y.-J. Liu, Z. Liu, Observer-based adaptive backstepping consensus tracking control for high-order nonlinear semi-strict-feedback multiagent systems. IEEE Trans. Cybern. 46(7), 1591–1601 (2016)
25. H. Rezaee, F. Abdollahi, Adaptive stationary consensus protocol for a class of high-order nonlinear multiagent systems with jointly connected topologies. Int. J. Robust Nonlinear Control 27(9), 1677–1689 (2017). https://doi.org/10.1002/rnc.3626
26. Y.-J. Liu, S. Tong, Barrier Lyapunov functions-based adaptive control for a class of nonlinear pure-feedback systems with full state constraints. Automatica 64, 70–75 (2016)
27. Y.-J. Liu, Y. Gao, S. Tong, Y. Li, Fuzzy approximation-based adaptive backstepping optimal control for a class of nonlinear discrete-time systems with dead-zone. IEEE Trans. Fuzzy Syst. 24(1), 16–28 (2016)
28. Y.-J. Liu, Y. Gao, S. Tong, C.L.P. Chen, A unified approach to adaptive neural control for nonlinear discrete-time systems with nonlinear dead-zone input. IEEE Trans. Neural Netw. Learn. Syst. 27(1), 139–150 (2016)
29. K.H. Movric, F.L. Lewis, Cooperative optimal control for multi-agent systems on directed graph topologies. IEEE Trans. Autom. Control 59(3), 769–774 (2014)
30. H. Zhang, J. Zhang, G.-H. Yang, Y. Luo, Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming. IEEE Trans. Fuzzy Syst. 23(1), 152–163 (2015)
31. R. Kamalapurkar, H. Dinh, P. Walters, W. Dixon, Approximate optimal cooperative decentralized control for consensus in a topological network of agents with uncertain nonlinear dynamics, in American Control Conference (IEEE, Washington, 2013), pp. 1320–1325
32. W.H. Chen, D.J. Ballance, P.J. Gawthrop, Optimal control of nonlinear systems: a predictive control approach. Automatica 39(4), 633–641 (2003)
33. A. Levant, M. Livne, Weighted homogeneity and robustness of sliding mode control. Automatica 72, 186–193 (2016)


34. C. Edwards, Y.B. Shtessel, Adaptive continuous higher order sliding mode control. Automatica 65, 183–190 (2016)
35. C. Godsil, G. Royle, Algebraic Graph Theory (Springer, New York, 2001)
36. C.D. Meyer, Matrix Analysis and Applied Linear Algebra (Society for Industrial and Applied Mathematics, Philadelphia, 2000)
37. W.J. Rugh, Linear System Theory (Prentice Hall, Upper Saddle River, 1996)
38. C.Y. Hsu, S.Y. Shaw, H.J. Wong, Refinements of generalized triangle inequalities. J. Math. Anal. Appl. 344(1), 17–31 (2008)
39. H.L. Royden, Real Analysis (Macmillan, New York, 1988)
40. H. Minc, Nonnegative Matrices (Wiley, New York, 1988)
41. H.K. Khalil, Nonlinear Systems, 3rd edn. (Prentice-Hall, Upper Saddle River, 2002)