Intelligent Optimal Control for Distributed Industrial Systems 9819902673, 9789819902675

This book focuses on the distributed control and estimation of large-scale networked distributed systems and the approac


English Pages 272 [273] Year 2023


Table of contents:
Contents
1 Status of Research on Networked Distributed Systems
1.1 Background
1.2 Status of Research on Predictive Control for Networked Distributed Systems
1.2.1 Current Status of Research on Networked Moving Horizon Estimation
1.2.2 Status of Research and Classification of Distributed Predictive Control
1.3 Main Contents of the Book
References
2 Moving Horizon State Estimation for Networked Systems with Random Packet Loss
2.1 Overview
2.2 Moving Horizon State Estimation for Networked Systems with Feedback-Channel Packet Loss
2.2.1 Description of the Problem
2.2.2 Networked Moving Horizon State Estimator
2.2.3 Performance Analysis of the Estimator
2.2.4 Numerical Simulation
2.3 Moving Horizon State Estimation for Networked Systems with Two-Channel Packet Loss
2.3.1 Description of the Problem
2.3.2 Networked Moving Horizon State Estimator
2.3.3 Performance Analysis of the Estimator
2.3.4 Numerical Simulations
2.4 Summary of This Chapter
References
3 Design of Predictive Controller for Networked Systems
3.1 Overview
3.2 Predictive Control for Networked Control Systems with Bounded Packet Loss
3.2.1 Modelling of Networked Control Systems
3.2.2 Networked Predictive Controller Based on Terminal Convex Set Constraints
3.2.3 Feasibility and Stability Analysis of Networked Predictive Controllers
3.2.4 Numerical Simulation
3.3 Robust Predictive Control of Networked Control Systems with Control Input Quantization
3.3.1 Modelling of Networked Control Systems
3.3.2 Stability Analysis and Robust Predictive Controller Design
3.3.3 Numerical Simulations
3.4 Summary of This Chapter
References
4 Moving Horizon Scheduling for Networked Systems with Communication Constraints
4.1 Overview
4.2 Networked Moving Horizon Scheduling
4.2.1 Description of the Problem
4.2.2 Moving Horizon Scheduling Strategy
4.3 Moving Horizon State Estimation for Networked Control Systems
4.4 Performance Analysis of Networked Moving Horizon Estimators
4.5 Numerical Simulation and Physical Experiments
4.5.1 Numerical Simulation
4.5.2 Experiments on Two-Tank Liquid-Level Systems
4.6 Summary of This Chapter
References
5 Distributed Predictive Control for Local Performance Index
5.1 Overview
5.2 Nash Optimal Based Distributed Predictive Control
5.2.1 Distributed Predictive Controller Design
5.2.2 Performance Analysis
5.2.3 Performance Analysis of the One-Step Prediction Optimization Strategy for Local Communication Failures
5.2.4 Simulation Example
5.3 Constrained Distributed Predictive Control with Guaranteed Stability
5.3.1 Description of the Problem
5.3.2 Distributed Predictive Control Design
5.3.3 Performance Analysis
5.3.4 Simulation Example
5.4 Summary of This Chapter
References
6 Cooperative Distributed Predictive Control System
6.1 Overview
6.2 Non-iterative Cooperative Distributed Predictive Control
6.2.1 State-, Input-Coupled Distributed Systems
6.2.2 Local Predictive Controller Design
6.2.3 Performance Analysis
6.2.4 Simulation Example
6.3 Constrained Coordinated Distributed Predictive Control with Guaranteed Stability
6.3.1 Description of Distributed Systems
6.3.2 Local Predictive Controller Design
6.3.3 Performance Analysis
6.3.4 Simulation Example
6.4 Summary of This Chapter
References
7 Distributed Predictive Control Under Communication Constraints
7.1 Overview
7.2 Distributed Predictive Control Based on Neighborhood Optimization
7.2.1 State-, Input-Coupled Distributed Systems
7.2.2 Local Predictive Controller Design
7.2.3 Performance Analysis
7.2.4 Numerical Results
7.3 Stabilized Neighborhood Optimization Based Distributed Model Predictive Control
7.3.1 Problem Description
7.3.2 DMPC Design
7.3.3 Stability and Convergence
7.3.4 Simulation
7.4 Summary of This Chapter
References
8 Application of Distributed Model Predictive Control in Accelerated Cooling Process
8.1 Overview
8.2 Accelerated Cooling Process
8.2.1 Accelerated Cooling Process and Plant Instrumentation
8.2.2 Accelerated Cooling Process Simulation Platform
8.2.3 Process Control Requirements
8.3 Heat Balance Equation for the Unit
8.4 Distributed Predictive Control Based on Optimal Objective Recalculation
8.4.1 Subsystem Optimization Objective Recalculation
8.4.2 Subsystem State Space Model
8.4.3 Extending the Kalman Global Observer
8.4.4 Local Predictive Controller
8.4.5 Local State Prognosticator
8.4.6 Local Controller Iterative Solution Algorithm
8.5 Simulation Platform Algorithm Validation
8.6 Summary of This Chapter
References
Index

Advanced and Intelligent Manufacturing in China

Shaoyuan Li · Yi Zheng · Binqiang Xue

Intelligent Optimal Control for Distributed Industrial Systems

Advanced and Intelligent Manufacturing in China Series Editor Jie Chen, Tongji University, Shanghai, China

This is a series of high-level, original academic monographs focusing on two fields: intelligent manufacturing and equipment, and control and information technology. It covers core technologies such as the Internet of Things, 3D printing, robotics, and intelligent equipment, and reflects the achievements of technological development in China's manufacturing sector. With Prof. Jie Chen, a member of the Chinese Academy of Engineering and a control engineering expert, as Editor-in-Chief, the series is organized and written by more than 30 young experts and scholars from more than 10 universities and institutes. It aims to promote research, development, and innovation in advanced intelligent manufacturing technologies and the technological transformation and upgrading of the equipment manufacturing industry.


Shaoyuan Li Department of Automation Shanghai Jiao Tong University Shanghai, China

Yi Zheng Department of Automation Shanghai Jiao Tong University Shanghai, China

Binqiang Xue School of Automation Qingdao University Qingdao, Shandong, China

ISSN 2731-5983 ISSN 2731-5991 (electronic)
Advanced and Intelligent Manufacturing in China
ISBN 978-981-99-0267-5 ISBN 978-981-99-0268-2 (eBook)
https://doi.org/10.1007/978-981-99-0268-2

Jointly published with Chemical Industry Press. The print edition is not for sale in China (Mainland). Customers from China (Mainland) please order the print book from: Chemical Industry Press.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

© Chemical Industry Press 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore


Chapter 1

Status of Research on Networked Distributed Systems

1.1 Background

A wide class of systems in industry, such as large petroleum and chemical processes, urban water supply and drainage systems, and distributed power generation systems, consists of a number of subsystems connected according to the production process and coupled to one another through energy and material transfer. In the past, although these systems were distributed in structure, a centralized control framework was used because of the limitations of information transmission: instrumentation and a central control room collected all the information, the control was designed and computed for the global system, and the control output of each subsystem was then sent to the field for execution over point-to-point cables. With the continuous development of modern technology, the increasing complexity of systems, and advances in communication technology, the control of such systems is changing from a centralized to a distributed approach. There are three main reasons. First, centralized control has low fault tolerance: when a particular sensor or actuator fails, the operation of the whole system is affected. Second, growth in system dimension increases the computational burden of centralized control, making online real-time application difficult; this is particularly prominent for control methods based on nonlinear constrained optimization, such as model predictive control. Third, when the system changes locally, or when subsystems are added or removed, the centralized control algorithm must be modified; the structure is not flexible enough and maintenance is difficult.
On the other hand, with the development of electronics, computer technology, and communication technology, intelligent instruments, sensors, and actuators with communication and computing functions have become inexpensive and easy to install, and fieldbuses have come into common use. Controllers, sensors, and actuators can now form a network and communicate effectively with one another, making it possible to improve global performance through effective coordination. This has enabled


the development of distributed control methods that improve the overall performance of the system through effective coordination [1–3], and the transformation of systems from a hierarchical structure to a distributed one [2, 4]. Compared with centralized control of a MIMO system, distributed control differs significantly in the information structure and the control algorithms, and new challenging problems emerge. A distributed control system is shown schematically in Fig. 1.1: the controlled plant, consisting of multiple interconnected units, is logically divided into multiple interrelated subsystems, each controlled by an individual local controller; the controllers are interconnected through a network and can exchange data with each other as required. Distributed control should have the following characteristics: ➀ the local controllers are peers that can be designed individually and operate independently, and they can jointly improve overall system performance through effective coordination; ➁ the system is fault-tolerant, so that when one controller fails the rest of the system can still work normally. Moreover, model predictive control can predict the state evolution of the system, can account for the actions of the actuators of other subsystems while computing the control action of the current subsystem in real time, and has good dynamic performance [5–11]. It is therefore natural to use model predictive control for the coordinated control of distributed systems; this approach has attracted extensive attention from the academic community [2, 4, 12] and has developed continuously. Mainstream journals such as IEEE Transactions on Automatic Control and Automatica

Fig. 1.1 Schematic diagram of a distributed control system. (In the figure, units 1–6 on the physical network are grouped into subsystems 1–4; each subsystem is controlled by a local controller, and controllers 1–4 exchange information over the cyber network, with controllers 3 and 4 sharing an information area.)


have published many articles on distributed predictive control theory and applications [13–19]. The study of coordinated control of distributed systems within the predictive control framework is therefore of great significance for both theoretical research and industrial applications. Two issues stand out.

➀ Most current research on predictive control assumes that the state is known. In fact, the state observer is an indispensable part of predictive control; the feedback-correction part of the traditional DMC algorithm is essentially a state observer [11, 20–22]. In a networked environment, delay, packet loss, and the physical constraints of the system create new difficulties in guaranteeing the accuracy and convergence of state observation, so new design methods are needed for constrained state observers in networked environments. ➁ Since the performance of distributed predictive control cannot yet reach the optimization performance of centralized predictive control, a key issue is to design a coordination strategy that effectively coordinates the subsystems so as to improve the global optimization performance of the system while taking network connectivity and computational complexity into account. Because current coordination strategies show different advantages in different respects, effective design methods for coordinated predictive control are needed for different design requirements, including the treatment of constraints, the feasibility of the optimization problems, and the guarantee of asymptotic stability of the closed-loop system [23].

1.2 Status of Research on Predictive Control for Networked Distributed Systems

In recent years, many scholars in China and abroad have studied networked state observers and distributed predictive control [18, 24–29] and obtained many useful results, as follows.

1.2.1 Current Status of Research on Networked Moving Horizon Estimation

With the continuous development of predictive control, moving horizon estimation (MHE), which is likewise based on a moving horizon optimization strategy, has attracted great attention and has been widely applied in chemical processes, fault detection, system identification, and other fields. This estimation method embeds system constraints directly into the optimization problem


and satisfies them dynamically through online moving horizon optimization, using known information about the system state and noise, in the form of constraints, to improve the reasonableness and accuracy of the estimate. Because of its moving horizon optimization mechanism and its great potential for handling complex constraints, theoretical research on moving horizon estimation has developed rapidly [30–45]. Early theoretical studies considered the stability of MHE for linear systems and explored the quantitative relationships between MHE design parameters and system performance. Alessandri et al. studied the convergence and unbiasedness of moving horizon estimators [30] and discussed the effect of the weighting coefficients in the objective function and of the optimization horizon on the estimation error. The literature [31] gave an MHE method for simultaneously estimating the system state and unknown noise. Computing the arrival cost for constrained systems is difficult and analytic expressions may not exist, so several works approximated it by the arrival cost of the corresponding unconstrained system [32–34]. In recent years, the focus of MHE research has shifted from the quantitative study of existing algorithms to the design of new ones, and many results have been achieved in the analysis and design of MHE for systems with singularity, uncertainty, nonlinearity, and networking. Boulkroune et al. derived analytical expressions for the MHE of unconstrained linear singular systems [35] and showed that, under certain assumptions, moving horizon estimation is equivalent to the Kalman filter. Zhao et al. studied the state estimation problem under partial measurement-output failure for linear systems with uncertain parameters [36].
Building on [30], the literature [37–39] further investigated moving horizon estimation for nonlinear systems. The literature [40, 41] studied state estimation for networked systems whose packet-loss process follows a Bernoulli distribution; based on the proposed stochastic model of the networked system, a moving horizon estimator containing the network characteristic parameters was designed, and sufficient conditions were given to guarantee convergence of the estimation performance. Considering noise and packet loss expressed as inequality constraints in the system, Liu et al. designed constrained moving horizon estimators based on the LOQO interior-point algorithm and gave sufficient conditions guaranteeing that the estimation error remains bounded [42]. Subsequently, Liu et al. extended this work to networked systems with quantization and random packet loss [43], establishing the relationship between quantization density, packet-loss probability, and estimation performance. In addition, Zeng et al. studied distributed moving horizon estimation methods [44], and Vercammen et al. applied MHE to metabolic reaction networks [45]. Despite these fruitful results, the vast majority of the literature studies MHE for traditional control systems, and its qualitative theory focuses on guaranteeing and improving the stability of the MHE algorithm. Results that fully exploit MHE's capabilities for handling constraints and uncertainty and extend moving horizon estimation to networked constrained systems, answering "Why is it good? What is good about it? How much better is it?", are almost non-existent. In general, research on moving horizon estimation for networked constrained systems is still in its infancy, both in China and abroad.
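The window-based mechanism described above can be made concrete with a small sketch: a linear system whose state is known to satisfy a hard bound, estimated by constrained least squares over a sliding window of measurements. The system matrices, weights, horizon length, and the simplified arrival-cost term below are all illustrative assumptions, not values taken from this book.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative linear plant: x_{k+1} = A x_k + w_k, y_k = C x_k + v_k,
# with the prior knowledge |x| <= x_max used as a hard constraint.
np.random.seed(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([1.0, 0.0])          # scalar measurement of the first state
x_max, N, T = 5.0, 5, 30          # state bound, window length, run length

# Simulate the plant to generate a measurement record
x_true = np.zeros((T, 2))
x_true[0] = [2.0, -1.0]
y = np.zeros(T)
for k in range(T):
    y[k] = C @ x_true[k] + 0.1 * np.random.randn()
    if k + 1 < T:
        x_true[k + 1] = A @ x_true[k] + 0.05 * np.random.randn(2)

def mhe_step(y_win, x_prior):
    """Estimate the trajectory over one window by constrained least squares."""
    n = len(y_win)
    def cost(z):
        xs = z.reshape(n, 2)
        J = np.sum((xs[0] - x_prior) ** 2)           # crude arrival-cost proxy
        J += 10.0 * np.sum((y_win - xs @ C) ** 2)    # measurement residuals
        J += np.sum((xs[1:] - xs[:-1] @ A.T) ** 2)   # model residuals
        return J
    z0 = np.tile(x_prior, n)
    res = minimize(cost, z0, bounds=[(-x_max, x_max)] * (2 * n),
                   method="L-BFGS-B")                # bounds enforce |x| <= x_max
    return res.x.reshape(n, 2)[-1]                   # estimate at window end

x_hat, estimates = np.zeros(2), []
for k in range(N, T):
    x_hat = mhe_step(y[k - N:k + 1], x_hat)          # slide the window forward
    estimates.append(x_hat)
```

The estimate respects the state bound by construction. A full treatment would propagate the arrival cost properly rather than use the crude prior term above, which is precisely where the approximations surveyed in [32–34] enter.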

1.2.2 Status of Research and Classification of Distributed Predictive Control The research on distributed predictive control has long been a hot issue internationally, and the concept of distributed predictive control was firstly [46, 47] introduced in 2001 in a paper published in America Control Conference (ACC). Then, the research on the coordination strategy of distributed predictive control, the stability theory of distributed predictive control, distributed predictive control for different systems, and the applications in various fields were gradually enriched from 2006. For example, literature [48] proposed distributed predictive control with Nash optimization; literature [49] proposed distributed predictive control with Neighborhood optimization; literature [50] proposed distributed predictive control based on agent negotiation; literature [51, 52] proposed distributed predictive control based on global performance; literature [19] gave a distributed predictive control with an integrated approach; literature [53, 54] gives an integrated approach for distributed predictive control with Neighborhood optimization; literature [55, 56] gives an iterative global performance DMPC design approach for guaranteed stability. In addition, literature [57, 58] and others study the solution problem of distributed predictive control from the perspective of decomposition of large-scale optimization algorithms; literature [18] and others study DMPC algorithms for guaranteed stability for network systems. In DMPC applications cover chemical systems [59], metallurgical industry [60], water network systems, [61] etc., especially in recent years in the application of power systems is an explosive growth of the article [62–64]. The available distributed predictive control algorithms, in general, can be classified in different ways as follows. 
Classified by the number of times information is exchanged between controllers in each control cycle, the algorithms can be divided into iterative and non-iterative; classified by network connectivity, into fully connected and non-fully connected; and classified by the performance index optimized by each controller, into distributed predictive control methods based on a global performance index, a local performance index, or a neighborhood (scope) performance index. In general, iterative algorithms achieve better full-system optimization performance than non-iterative algorithms, while non-iterative methods require fewer communications and fewer optimization problem solutions and are therefore relatively more computationally efficient. A non-fully connected method obtains information over only a small range, which is not conducive to coordination strategies that improve the overall

6

1 Status of Research on Networked Distributed Systems

optimization performance of the system; however, its high fault tolerance and flexibility compared with fully connected algorithms make it better matched to the characteristics of distributed control. Since this book focuses on coordination strategies for distributed predictive control, the methods are presented here classified by the performance index used in each subsystem's MPC.

(1) Local Cost Optimization based DMPC (LCO-DMPC) [46–48], whose performance index is

$$J_{i,k} = \left\| x_{i,k+N} \right\|_{P_i}^2 + \sum_{l=0}^{N-1} \left( \left\| x_{i,k+l} \right\|_{Q_i}^2 + \left\| u_{i,k+l} \right\|_{R_i}^2 \right) \quad (1.1)$$
Each subcontroller uses the future state sequences and subsystem models provided by its upstream subsystems to predict the state evolution of its own subsystem, and optimizes to find the control solution that minimizes its own local performance index [24]. The literature [48] uses Nash optimization to find each subsystem's control law. This method is simple to implement and has low information requirements, but its performance deviates somewhat from that of centralized predictive control; it is also called uncoordinated distributed predictive control, since each subsystem controller takes a local performance index as its optimization objective. The literature [19] gives a design method for stabilizing controllers for nonlinear systems with a non-iterative solution approach; [1] gives a design method based on local performance indices that guarantees stability for linear systems with input constraints. The literature [65] further gives a design method for stability-preserving controllers with input and state constraints, which replaces the state estimation trajectory at each algorithm update by a fixed reference trajectory and a rolling window. The literature [19] also notes that, relative to centralized predictive control, the difficulty of stability design for distributed predictive controllers lies in designing feasibility constraints and stability constraints such that the variation of the inputs of neighboring systems stays within a bound.

(2) Coordinated DMPC based on a global performance index (Cooperative DMPC, CDMPC)

Each subsystem controller C_i exchanges information with all other subsystem controllers, obtains the input sequences from their previous computations, predicts the future state sequences using the full-system dynamic model, and optimizes the global performance index [51–53, 66]

$$\tilde{J}_{i,k} = \sum_{j \in P} J_{j,k} \quad (1.2)$$
With this coordination strategy, each subsystem needs information from the whole system, and the subsystem controllers must all be connected to each other. Compared with DMPC based on local performance indices, this type of approach requires high network reliability and entails

1.2 Status of Research on Predictive Control for Networked Distributed …

7

reduced flexibility and fault tolerance. The advantage is that better global optimality can be obtained: when solved by iterative methods, the resulting solution is Pareto optimal provided the convergence condition is satisfied. However, the premise for this coordination strategy to improve system performance is that each subsystem has access to global information and that the network is highly reliable, which sacrifices the fault tolerance and flexibility that are the advantages of distributed control. Considering, on the one hand, that distributed control systems are fault-tolerant, so that the failure of an individual subsystem has little impact on the overall system (a very prominent advantage of distributed control structures), and, on the other hand, that in many practical systems each local controller cannot access global information owing to various limitations, more and more scholars have focused on coordination methods that do not rely on global information. Regarding the design of stabilizing controllers under this coordination strategy, the literature [1, 52] analyzes the stability of iterative distributed predictive control with a global performance index based on the convergence of that index, and gives a controller design method that guarantees stability. The literature [1] gives a design method that guarantees the stability of non-iterative distributed predictive control with input constraints based on a global performance index; the method renders the closed-loop system asymptotically stable by incorporating consistency constraints and stability constraints combined with terminal invariant sets and dual-mode predictive control.

(3) Neighborhood performance index based DMPC

The control input of a subsystem affects not only its own performance but also the optimization performance of its downstream subsystems.
Therefore, the literature [53, 54, 60] gives a coordination strategy in which the performance index of each subsystem controller contains not only the performance of its own subsystem but also that of the subsystems it directly affects, called neighborhood optimization or scope optimization. The optimization objective is

$$\bar{J}_{i,k} = \sum_{j \in P_i} J_{j,k} \quad (1.3)$$

where $P_i = \{ j : j \in P_{-i} \text{ or } j = i \}$, with $P_{-i}$ the set of subscripts of the subsystems downstream of subsystem $S_i$, i.e., the subsystems affected by $S_i$. This control algorithm is also known as DMPC based on neighborhood optimization. It can achieve better performance than the first class of algorithms, while its communication load is much smaller than that of the second. The literature [49] gives a unified formulation for different degrees of coordination (measured by the number of subsystems whose performance appears in the index) obtained by adding the states of other subsystems to the optimization index of each local subsystem, and shows that different degrees of coordination lead to different system performance [67]. Clearly, the


third coordination strategy [3, 49, 53, 54, 68] is an effective means of trading off communication load against global performance. However, current coordination approaches improve the global optimization performance of the system mainly by adding the states of associated subsystems to the performance indices of the local controllers [67], which also increases the amount of information each local controller must obtain over the network and thus negatively affects the fault tolerance of the system. To address this, the literature [3] proposes a method, based on impacted-region optimization, that combines the sensitivity function with the neighboring system's predicted state at the previous moment to compute the predicted state sequence of the neighbor, which improves DMPC coordination without increasing network connectivity. For coordinated distributed predictive control that optimizes the performance indices of multiple subsystems under a non-global information model, designing feasibility constraints and stability constraints is more difficult than for uncoordinated distributed predictive control because of its more complex structure. The literature [3, 49, 54] designed consistency constraints and stability conditions for distributed predictive control with guaranteed stability under this coordination strategy. From the above analysis, it is clear that research on methods for improving the global performance of the system is already very rich and relatively mature, and the theoretical results have been systematized [69].
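The three coordination strategies surveyed above differ only in which subsystems' cost terms each controller optimizes. The sketch below makes that difference explicit for an assumed three-subsystem chain; the coupling graph and function names are hypothetical, chosen only for illustration.

```python
# Directed coupling: an entry j -> i in `downstream` means S_j directly
# affects S_i. Assumed chain: S_1 -> S_2 -> S_3.
downstream = {1: {2}, 2: {3}, 3: set()}
P = set(downstream)                        # all subsystem indices

def cost_terms(i, strategy):
    """Indices j whose cost J_j controller C_i optimizes."""
    if strategy == "local":                # LCO-DMPC, Eq. (1.1): only J_i
        return {i}
    if strategy == "global":               # CDMPC, Eq. (1.2): all j in P
        return set(P)
    if strategy == "neighborhood":         # Eq. (1.3): J_i plus affected S_j
        return {i} | downstream[i]
    raise ValueError(strategy)

for s in ("local", "global", "neighborhood"):
    print(s, {i: sorted(cost_terms(i, s)) for i in sorted(P)})
```

Running this shows the trade-off in miniature: controller C_1 optimizes {1} locally, {1, 2, 3} globally, and {1, 2} under neighborhood optimization, i.e., the neighborhood index needs information only from directly affected subsystems.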

1.3 Main Contents of the Book

The authors and their group have been conducting research on theoretical approaches to predictive control of distributed systems in networked environments, and on their applications, since 2001. They have accumulated a wealth of systematic results on the current mainstream coordination approach, namely distributed predictive control that improves the global optimization performance of a system by adding the performance indices of associated subsystems to the performance index of each local controller, as well as on system performance analysis and synthesis methods. Therefore, at a time when predictive control of distributed systems is an emerging international hot topic and is developing rapidly, it seems necessary to summarize the previous research results and introduce the theory and methods of distributed predictive control in a systematic way. The main contents of this book are as follows. Chapter 2 introduces moving horizon state estimation that makes full use of the system input and output information within the moving window, for the communication case in which random packet loss exists in the forward and feedback channels, and gives sufficient conditions that guarantee convergence of the estimation performance. Chapter 3 presents networked predictive control based on a moving horizon optimization strategy, for the communication case in which bounded packet loss or data quantization occurs when quantized control signals are transmitted from


the controller to the actuator via a shared network, as well as sufficient conditions that guarantee asymptotic stability of the system with a certain control performance. Chapter 4 presents moving horizon scheduling based on a quadratic scheduling index, for the communication case in which only part of the measurement data is transmitted to the remote estimator through the shared network at each sampling moment, to ensure that the estimator still achieves good estimation performance. Chapter 5 focuses on distributed predictive control based on local performance indices, including distributed predictive control capable of obtaining a Nash equilibrium, and design methods for non-iterative distributed predictive control with guaranteed stability.
Chapter 6 focuses on coordinated distributed predictive control based on a global performance index, including analytical solutions for unconstrained coordinated distributed predictive control, closed-loop stability conditions, and design methods with guaranteed stability for coordinated distributed predictive control with input constraints. Chapter 7 introduces coordinated distributed predictive control based on a neighborhood performance index, including the analytical solution of unconstrained neighborhood-optimization-based distributed predictive control, closed-loop stability conditions, and the design method with guaranteed stability for scope-optimization-based distributed predictive control with input constraints. Chapter 8 presents a typical application of distributed predictive control in metallurgical processes, taking as an example the accelerated cooling process for medium-thick plate at a steel mill in Shanghai.

References

1. Li S, Zheng Y (2015) Distributed model predictive control for plant-wide systems. Wiley, Singapore Pte. Ltd.
2. Scattolini R (2009) Architectures for distributed and hierarchical model predictive control—a review. J Process Control 19(5):723–731
3. Zheng Y, Li S (2013) Coordinated predictive control of distributed systems under the network information model. Acta Automatica Sinica 39(11):1778–1786. (郑毅, 李少远. 网络信息模式下分布式系统协调预测控制. 自动化学报, 2013, 39(11):1778–1786)


4. Christofides PD et al (2013) Distributed model predictive control: a tutorial review and future research directions. Comput Chem Eng 51:21–41
5. Richalet J et al (1976) Algorithmic control of industrial processes. In: Proceedings of the 4th IFAC symposium on identification and system parameter estimation. URSS, Tbilisi, September
6. Richalet J et al (1978) Model predictive heuristic control: applications to industrial processes. Automatica 14(5):413–428
7. Cutler CR, Ramaker BL (1980) Dynamic matrix control—a computer control algorithm. In: Proceedings of the joint automatic control conference. American Automatic Control Council, Piscataway, NJ
8. Cutler C, Morshedi A, Haydel J (1983) An industrial perspective on advanced control. In: AIChE annual meeting
9. Maciejowski JM (2000) Predictive control with constraints
10. Qin SJ (1998) Control performance monitoring—a review and assessment. Comput Chem Eng 23(2):173–186
11. Xi Y (1993) Predictive control. National Defense Industry Press, Beijing. (席裕庚. 预测控制. 北京: 国防工业出版社, 1993)
12. Giselsson P, Rantzer A (2013) On feasibility, stability and performance in distributed model predictive control. arXiv preprint arXiv:1302.1974
13. Camponogara E, de Lima ML (2012) Distributed optimization for MPC of linear networks with uncertain dynamics. IEEE Trans Autom Control 57(3):804–809
14. Hours JH, Jones CN (2016) A parametric nonconvex decomposition algorithm for real-time and distributed NMPC. IEEE Trans Autom Control 61(2):287–302
15. Kim KD, Kumar PR (2014) An MPC-based approach to provable system-wide safety and liveness of autonomous ground traffic. IEEE Trans Autom Control 59(12):3341–3356
16. de Lima ML et al (2016) Distributed satisficing MPC with guarantee of stability. IEEE Trans Autom Control 61(2):532–537
17. Dai L et al (2015) Cooperative distributed stochastic MPC for systems with state estimation and coupled probabilistic constraints. Automatica 61:89–96
18. Liu J, de la Peña DM, Christofides PD (2010) Distributed model predictive control of nonlinear systems subject to asynchronous and delayed measurements. Automatica 46(1):52–61
19. Dunbar WB (2007) Distributed receding horizon control of dynamically coupled nonlinear systems. IEEE Trans Autom Control 52(7):1249–1263
20. Ding B (2008) The theory and method of predictive control. Machinery Industry Press, Beijing. (丁宝苍. 预测控制的理论与方法. 北京: 机械工业出版社, 2008)
21. Li S (2008) Predictive control of global working condition system and its application. Science Press, Beijing. (李少远. 全局工况系统预测控制及其应用. 北京: 科学出版社, 2008)
22. Qian J, Zhao J, Xu Z (2007) Predictive control. Chemical Industry Press, Beijing. (钱积新, 赵均, 徐祖华. 预测控制. 北京: 化学工业出版社, 2007)
23. Giselsson P (2012) On feasibility, stability and performance in distributed model predictive control. IEEE Trans Autom Control
24. Camponogara E et al (2002) Distributed model predictive control. IEEE Control Syst 22(1):44–52
25. Vadigepalli R, Doyle III FJ (2003) A distributed state estimation and control algorithm for plantwide processes. IEEE Trans Control Syst Technol 11(1):119–127
26. Wang C, Ong C-J (2010) Distributed model predictive control of dynamically decoupled systems with coupled cost. Automatica 46(12):2053–2058
27. Al-Gherwi W, Budman H, Elkamel A (2011) Robust distributed model predictive control algorithm. J Process Control 21(8):1127–1137
28. Alvarado I et al (2011) A comparative analysis of distributed MPC techniques applied to the HD-MPC four-tank benchmark. J Process Control 21(5):800–815
29. Camponogara E, de Lima ML (2012) Distributed optimization for MPC of linear networks with uncertain dynamics. IEEE Trans Autom Control 57(3):804–809
30. Alessandri A, Baglietto M, Battistelli G (2003) Receding-horizon state estimation for discrete-time linear systems. IEEE Trans Autom Control 48(3):473–478


31. Rao CV, Rawlings JB, Lee JH (2001) Constrained linear state estimation—a moving horizon approach. Automatica 37(10):1619–1628
32. Muske KR, Rawlings JB, Lee JH (1999) Receding horizon recursive state estimation. In: American control conference
33. Rao CV (2000) Moving horizon strategies for the constrained monitoring and control of nonlinear discrete-time systems. PhD thesis, University of Wisconsin-Madison
34. Rao CV, Rawlings JB, Lee JH (1999) Stability of constrained linear moving horizon estimation. In: American control conference
35. Boulkroune B, Darouach M, Zasadzinski M (2010) Moving horizon state estimation for linear discrete-time singular systems. IET Control Theor Appl 4(3):339–350
36. Zhao H, Chen H, Ma Y (2009) Robust moving horizon estimation for system with uncertain measurement output. In: Proceedings of the 48th IEEE conference on decision and control and 28th Chinese control conference. IEEE, Shanghai
37. Alessandri A, Baglietto M, Battistelli G (2008) Moving-horizon state estimation for nonlinear discrete-time systems: new stability results and approximation schemes. Automatica 44(7):1753–1765
38. Guo Y, Huang B (2013) Moving horizon estimation for switching nonlinear systems. Automatica 49(11):3270–3281
39. Fagiano L, Novara C (2013) A combined moving horizon and direct virtual sensor approach for constrained nonlinear estimation. Automatica 49(1):193–199
40. Xue B, Li S, Zhu Q (2012) Moving horizon state estimation for networked control systems with multiple packet dropouts. IEEE Trans Autom Control 57(9):2360–2366
41. Xue B et al (2013) Moving horizon scheduling for networked control systems with communication constraints. IEEE Trans Ind Electron 60(8):3318–3327
42. Liu A, Yu L, Zhang W (2012) Moving horizon estimation for networked systems with multiple packet dropouts. J Process Control 22(9):1593–1608
43. Liu A, Yu L, Zhang W (2013) Moving horizon estimation for networked systems with quantized measurements and packet dropouts. IEEE Trans Circuits Syst I Regul Pap 60(7):1823–1834
44. Zeng J, Liu J (2015) Distributed moving horizon state estimation: simultaneously handling communication delays and data losses. Syst Control Lett 75(1):56–68
45. Vercammen D, Logist F, Van Impe J (2016) Online moving horizon estimation of fluxes in metabolic reaction networks. J Process Control 37(1):1–20
46. Du X, Xi Y, Li S (2001) Distributed model predictive control for large-scale systems. In: Proceedings of the 2001 American control conference. IEEE
47. Jia D, Krogh BH (2001) Distributed model predictive control. In: Proceedings of the 2001 American control conference. IEEE
48. Li S, Zhang Y, Zhu Q (2005) Nash-optimization enhanced distributed model predictive control applied to the Shell benchmark problem. Inform Sci 170(2–4):329–349
49. Li S, Zheng Y, Ling Z (2015) Impacted-region optimization for distributed model predictive control systems with constraints. IEEE Trans Autom Sci Eng 12(4):1447–1460
50. Maestre JM et al (2011) Distributed model predictive control based on agent negotiation. J Process Control 21(5):685–697
51. Zheng Y, Li S, Qiu H (2013) Networked coordination-based distributed model predictive control for large-scale system. IEEE Trans Control Syst Technol 21(3):991–998
52. Venkat AN et al (2008) Distributed MPC strategies with application to power system automatic generation control. IEEE Trans Control Syst Technol 16(6):1192–1206
53. Zheng Y, Li S, Li N (2011) Distributed model predictive control over network information exchange for large-scale systems. Control Eng Pract 19(7):757–769
54. Zheng Y, Li S et al (2012) Stabilized neighborhood optimization based distributed model predictive control for distributed system. In: 2012 31st Chinese control conference (CCC). IEEE
55. Stewart BT et al (2010) Cooperative distributed model predictive control. Syst Control Lett 59(8):460–469


56. Giselsson P et al (2013) Accelerated gradient methods and dual decomposition in distributed model predictive control. Automatica 49(3):829–833
57. Doan MD, Keviczky T, De Schutter B (2011) An iterative scheme for distributed model predictive control using Fenchel's duality. J Process Control 21(5):746–755
58. Al-Gherwi W, Budman H, Elkamel A (2013) A robust distributed model predictive control based on a dual mode approach. Comput Chem Eng 50(9):130–138
59. Xu S, Bao J (2009) Distributed control of plantwide chemical processes. J Process Control 19(10):1671–1687
60. Zheng Y, Li S, Wang X (2009) Distributed model predictive control for plant-wide hot-rolled strip laminar cooling process. J Process Control 19(9):1427–1437
61. Negenborn RR et al (2009) Distributed model predictive control of irrigation canals. NHM 4(2):359–380
62. Moradzadeh M, Boel R, Vandevelde L (2013) Voltage coordination in multi-area power systems via distributed model predictive control. IEEE Trans Power Syst 28(1):513–521
63. del Real AJ, Arce A, Bordons C (2014) An integrated framework for distributed model predictive control of large-scale power networks. IEEE Trans Ind Inform 10(1):197–209
64. del Real AJ, Arce A, Bordons C (2014) Combined environmental and economic dispatch of smart grids using distributed model predictive control. Int J Electr Power Energ Syst 54:65–76
65. Farina M, Scattolini R (2012) Distributed predictive control: a non-cooperative algorithm with neighbor-to-neighbor communication for linear systems. Automatica 48(6):1088–1096
66. Chen Q, Li S, Xi Y (2005) Distributed predictive control of the whole production process based on global optimality. J Shanghai Jiaotong Univ 39(3):349–352. (陈庆, 李少远, 席裕庚. 基于全局最优的生产全过程分布式预测控制. 上海交通大学学报, 2005, 39(3):349–352)
67. Al-Gherwi W, Budman H, Elkamel A (2010) Selection of control structure for distributed model predictive control in the presence of model errors. J Process Control 20(3):270–284
68. Zheng Y, Li N, Li S (2013) Hot-rolled strip laminar cooling process plant-wide temperature monitoring and control. Control Eng Pract 21(1):23–30
69. Li S (2017) Towards to dynamic optimal control for large-scale distributed systems. Control Theor Technol 15(2):158–160

Chapter 2

Moving Horizon State Estimation for Networked Systems with Random Packet Loss

2.1 Overview

Networked Control Systems (NCSs) consist of spatially distributed devices connected to accomplish control objectives by transmitting data between sensors and controllers, and between controllers and controlled plants, through a shared network. Compared with the traditional point-to-point control mode, networked control systems have the advantages of resource sharing, remote control, low cost, and easy installation, diagnosis, maintenance, and expansion, which increase the flexibility and reliability of the system. However, owing to the limited carrying capacity and communication bandwidth of the shared network, data transmission inevitably suffers from problems such as induced time delay, data packet disorder, data packet loss, and quantization distortion. These problems degrade system performance and may even cause instability, making the analysis and design of networked control systems complex and diverse. This poses a new challenge and requires control theory and control methods suited to networked control systems. Since the state of a real control system is often not directly measurable, the same is true for networked control systems. At the same time, the characteristics of shared networks make the estimation problem for networked control systems more complex than that for traditional control systems. In recent years, the state estimation problem with packet loss has become a research hotspot, and significant results have been achieved [1–12]. Sinopoli et al. assumed that the measurement packet loss process follows an independent identical distribution, and proved that there exists a critical value of the measurement-data arrival probability such that the time-varying estimation error covariance obtained by the Kalman filter is bounded [1].
The authors then generalized the state estimation problem for single-channel packet loss to the closed-loop control problem for two-channel packet loss [2]. The literature [3] gave a design method for the optimal linear estimator (including filter, predictor, and smoother) based on a linear stochastic model of multi-packet loss.

© Chemical Industry Press 2023
S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2_2


When the packet loss process satisfies a two-state Markov chain, the literature [4] proposed the concept of the peak error covariance and gave a stability condition for the system related to the recovery rate. Following the idea of [1], the literature [5] designed the optimal H2 filter for the case of simultaneous packet loss in the feedback and forward channels. The literature [6] gave a new Kalman filtering method that takes as the estimation performance index the probability that the error covariance matrix is less than or equal to some desired error matrix. The literature [7] designed a linear minimum-variance filter based on the orthogonality principle, but without considering the actual arrival state of the data packets. In summary, research results continue to emerge, but the existing work still has some limitations and room for extension. For example, most works rest on the idealized assumption that both the process noise and the measurement noise are white noise with a Gaussian probability distribution, from which an optimal linear estimator similar to the Kalman filter can be derived. However, this assumption is difficult to satisfy because, in real industrial processes, the noises are not simply white and Gaussian but are energy-limited signals. Furthermore, constraints are prevalent in real systems: for example, chemical component concentrations and liquid levels are always nonnegative, and disturbances fluctuate within given ranges. Such constraints cannot be handled by methods such as the H∞ filter and the Kalman filter. If this realistic and useful information about the actual system is ignored, the estimation accuracy and the estimation performance are inevitably reduced.
Moreover, when packet loss occurs, existing estimation methods often either keep the previous input [7–9] or set the input directly to zero [10–12], which also reduces the estimation accuracy. To this end, this chapter presents a networked state estimation method based on a moving horizon optimization strategy, which overcomes the impact of packet loss on the estimation performance by making full use of the system information within the moving window, together with additional information about the noise, state, and input and output expressed as inequality constraints. This chapter is organized as follows: Sect. 2.2 describes the moving horizon state estimation method for a networked control system with random packet loss in the feedback channel; Sect. 2.3 describes the moving horizon state estimation method for a networked control system with random packet loss in both the feedback and forward channels and gives sufficient conditions that guarantee the convergence of the estimation performance; Sect. 2.4 gives a short conclusion.
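To make the idea of constraint-aware estimation concrete, the following minimal sketch poses a one-window moving horizon estimate as a bounded least-squares problem using SciPy's `lsq_linear`. The scalar system, the bounded non-Gaussian noise, the neglect of process noise in the window prediction, and the nonnegativity constraint on the state are all illustrative assumptions, not the estimator developed in the following sections.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Assumed scalar plant: x(k+1) = a x(k) + w(k), y(k) = x(k) + v(k),
# with bounded uniform (non-Gaussian) noises and prior knowledge x >= 0.
a, N = 0.9, 5
rng = np.random.default_rng(0)
x, ys = 0.2, []
for k in range(N):
    ys.append(x + rng.uniform(-0.3, 0.3))    # bounded measurement noise
    x = a * x + rng.uniform(-0.05, 0.05)     # bounded process noise

# Window prediction neglecting process noise: y(l) ~ a^l * x0.
# MHE over this window: min ||A x0 - y||^2  subject to  x0 >= 0,
# a constraint a standard Kalman filter cannot exploit.
A = np.array([[a ** l] for l in range(N)])
res = lsq_linear(A, np.array(ys), bounds=(0.0, np.inf))
print("estimate of window-start state:", res.x[0])
```

The bound `x0 >= 0` is exactly the kind of inequality-constraint information discussed above; in the full method of this chapter the decision variables also include the noise sequence over the window.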


2.2 Moving Horizon State Estimation for Networked Systems with Feedback-Channel Packet Loss

2.2.1 Description of the Problem

This subsection investigates the state estimation problem with packet loss in the feedback channel, when the state of a remote controlled plant is not measurable. Data packet loss is inevitable because packets are transmitted from the sensors to the controller over an unreliable shared network. For this purpose, a typical networked control system is established as shown in Fig. 2.1. This networked control system is composed of a sensor, an unreliable shared network, an estimator, a controller, and a controlled plant. First, the following discrete-time linear time-invariant system is considered:

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k) + w(k) \\ \tilde{y}(k) &= Cx(k) + v(k) \\ x(k) &\in X,\ u(k) \in U,\ w(k) \in W,\ v(k) \in V \end{aligned} \quad (2.1)$$
where $x(k) \in R^n$, $u(k) \in R^m$ and $\tilde{y}(k) \in R^p$ are the system state, control input, and system output, respectively, while $w(k) \in W \subset R^n$ and $v(k) \in V \subset R^p$ are the system noise and measurement noise, respectively; A, B and C are the coefficient matrices of the system. The sets X, U, W, V are all convex polyhedral sets satisfying $X = \{x : Dx \le d\}$, $U = \{u : \|u\| \le u_{\max}\}$, $W = \{w : \|w\| \le \eta_w\}$ and $V = \{v : \|v\| \le \eta_v\}$. Furthermore, it is assumed that the system noise and measurement noise are not Gaussian but are treated as unknown bounded deterministic variables, and that the matrix pair (A, B) is controllable and the matrix pair (A, C) is observable. As shown in Fig. 2.1, the sensor measures the system output at each sampling moment and transmits it to the remote estimator via the unreliable shared network. However, data packet loss is inevitable during transmission; only packet loss between the sensor and the estimator is considered in this section, not packet loss between the controller and the controlled plant.

Fig. 2.1 Networked control system with packet loss

Without loss

Plant

Sensor

y( k )

u( k ) Lossy Network (k ) {0,1}

Controller

(k ) xˆ( k k )

Moving Horizon Estimator

y( k )

16

2 Moving Horizon State Estimation for Networked Systems with Random …

of generality, the unreliable shared network can be viewed as a switch that closes and opens randomly [7]: switch closure indicates no packet loss in the channel, while switch opening indicates data packet loss. Thus, at any moment k, when the system output $\tilde{y}(k)$ is successfully transmitted to the remote estimator, $y(k) = \tilde{y}(k)$. Conversely, when the data packet is lost, the estimator employs a Zero-Order Holder (ZOH) to hold the data of the previous moment, i.e., $y(k) = y(k-1)$. From the above, the following model of a networked control system with random packet loss is obtained:

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k) + w(k) \\ y(k) &= \gamma(k)\tilde{y}(k) + (1-\gamma(k))y(k-1) \end{aligned} \quad (2.2)$$

where the random variable γ(k) characterizes the arrival status of data packets transmitted over the unreliable shared network and satisfies a Bernoulli distribution taking the values 0 and 1, with probabilities

$$P(\gamma(k) = 1) = E\{\gamma(k)\} = \gamma, \quad P(\gamma(k) = 0) = 1 - \gamma \quad (2.3)$$

where γ denotes the arrival probability of the data packet: γ(k) = 1 means no packet loss at moment k, while γ(k) = 0 means a packet loss at moment k; E{·} denotes the expectation operator. In addition, the random variable γ(k) is assumed to be independent of the noises, the system state, and the system input and output. Clearly, at moment k the estimator knows whether the data packet from the sensor is lost, i.e., the estimator knows the arrival status γ(k) at moment k (γ(k) can be determined by comparing the values of y(k) and y(k − 1)).

Remark 2.1 In the literature [1, 6], the update model of the Kalman filter relies only on whether the data packet at the current moment k was obtained, without considering compensation of the measured data when packet loss occurs, and a relatively simple packet loss model was given. In contrast, the packet-loss system model described by Eq. (2.2) incorporates a compensation strategy: when the packet at moment k is lost, it is more reasonable to adopt the measurement data y(k − 1) of moment k − 1 as the current measurement y(k) to compensate for the effect of the loss. This strategy can be implemented by a Zero-Order Holder. Moreover, if model (2.2) were used in the literature [1, 6], the conclusions there would no longer be obtained.

Combining Eqs. (2.1) and (2.2) yields a model of the networked control system with random packet loss, based on which the state estimation problem of NCSs with random packet loss is studied in the following. Since the design of the controller is not considered in this subsection, for the purpose of analyzing the properties of the proposed estimator it is assumed that for arbitrary noises {w(k)} and
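As an illustration, the measurement channel (2.2)–(2.3) can be simulated directly. The sketch below (Python; the arrival probability 0.85 and the sinusoidal output sequence are illustrative assumptions, not values from the text) draws Bernoulli arrivals γ(k) and applies the zero-order-hold compensation y(k) = y(k − 1) whenever a packet is lost.

```python
import numpy as np

rng = np.random.default_rng(0)

def lossy_channel(y_tilde, gamma_bar=0.85, y0=0.0):
    """Measurement model (2.2): y(k) = gamma(k)*y_tilde(k) + (1 - gamma(k))*y(k-1),
    with Bernoulli arrivals P(gamma(k) = 1) = gamma_bar as in (2.3)."""
    gamma = rng.binomial(1, gamma_bar, size=len(y_tilde))
    y = np.empty_like(y_tilde)
    prev = y0
    for k in range(len(y_tilde)):
        # zero-order hold: keep the previous value whenever the packet is lost
        y[k] = y_tilde[k] if gamma[k] == 1 else prev
        prev = y[k]
    return y, gamma

y_tilde = np.sin(0.1 * np.arange(50))   # illustrative output sequence
y, gamma = lossy_channel(y_tilde)
assert np.all(y[gamma == 1] == y_tilde[gamma == 1])   # received samples pass through
```

Comparing y(k) with y(k − 1), as noted above, reveals the arrival status γ(k) at the estimator side.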


{v(k)} there exists an initial state x0 and a control sequence {u(k)} such that the state trajectory {x(k)} remains in the convex set X.

Remark 2.2 A random variable γ(k) obeying the Bernoulli distribution has the following properties: Var(γ(k)) = γ(1 − γ), E{γ²(k)} = γ, E{(1 − γ(k))²} = 1 − γ, and E{γ(k)(1 − γ(t))} = γ(1 − γ) for k ≠ t.
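These moment identities follow from γ(k) being {0, 1}-valued and independent across times; a quick Monte Carlo check (Python, with an illustrative probability of 0.7) confirms them numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_bar = 0.7                        # illustrative arrival probability
g = rng.binomial(1, gamma_bar, size=200_000).astype(float)
h = rng.binomial(1, gamma_bar, size=200_000).astype(float)   # independent copy (t != k)

assert abs(g.var() - gamma_bar * (1 - gamma_bar)) < 0.01          # Var(gamma)
assert abs((g**2).mean() - gamma_bar) < 0.01                      # E{gamma^2}
assert abs(((1 - g)**2).mean() - (1 - gamma_bar)) < 0.01          # E{(1-gamma)^2}
assert abs((g * (1 - h)).mean() - gamma_bar * (1 - gamma_bar)) < 0.01  # cross term
```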

2.2.2 Networked Moving Horizon State Estimator

Fig. 2.2 Moving horizon state estimation strategy

In order to overcome the uncertainty brought by packet loss in networked control systems, this section introduces a novel state estimation method, namely networked moving horizon state estimation (MHE), based on a moving horizon optimization strategy [13]. Unlike other estimation methods, moving horizon state estimation solves an online optimization problem over a segment of the latest input–output data within a moving window, rather than using only the input–output data at the current moment, as shown in Fig. 2.2. Due to the presence of packet loss, the input–output data actually available to the moving horizon estimator are I_k^N ≜ {y(k), ..., y(k − N), u(k − 1), ..., u(k − N)}, k = N, N + 1, ..., where N + 1 denotes the length of the data window from moment k − N to moment k, i.e., the number of input–output data used. Moreover, the selection of the moving horizon N requires a trade-off between estimation accuracy and computation time. Simply put, the MHE optimization problem uses the latest data I_k^N within the moving window and the predicted value x̄(k − N) of the state x(k − N) to estimate the sequence of states x(k − N), x(k − N + 1), ..., x(k) within the window. Let x̂(k − N|k), ..., x̂(k|k) denote the estimates of the states x(k − N), ..., x(k) at moment k, respectively; the predicted state x̄(k − N) (i.e., x̂(k − N|k − 1)) can be obtained from the equation x̄(k − N) = A x̂(k − N − 1|k − 1) + B u(k − N − 1) with k = N + 1, N + 2, .... Since x̂(k − N − 1|k − 1) is obtained


by solving the MHE optimization problem at moment k − 1, it is a known quantity at moment k. In summary, the moving horizon state estimation problem for networked control systems with packet loss can be described as the following optimization problem.

Problem 2.1 At moment k, given the information (I_k^N, x̄(k − N)), minimize the cost function

J(k) = ||x̂(k − N|k) − x̄(k − N)||²_M + Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R    (2.4)

subject to the constraints

x̂(i + 1|k) = A x̂(i|k) + B u(i)
ŷ(i|k) = γ(i) C x̂(i|k) + [1 − γ(i)] y(i − 1)
x̂(i|k) ∈ X = {x : Dx ≤ d}, i = k − N, ..., k    (2.5)

to obtain the optimal state estimates x̂*(k − N|k), ..., x̂*(k|k), where ||·|| denotes the Euclidean norm and the positive definite matrices M and R are the parameter matrices to be designed. The first term in the cost function (2.4) summarizes the impact of the input–output information prior to moment k − N on the performance index, and the parameter M reflects the level of confidence in the initial state estimate within the moving window. The second term characterizes the cumulative deviation between the system output and the estimated output within the window, and the parameter R penalizes this deviation. A more detailed discussion of the selection of M and R is given in reference [14]. Fortunately, optimization Problem 2.1 can be transformed into the following standard quadratic programming problem, and thus solved with standard computational tools

x̂*(k − N|k) ≜ arg min_{x̂(k−N|k)} J(k)
s.t. D_N [F̃_N x̂(k − N|k) + G̃_N U(k)] ≤ d_N

where

J(k) = x̂ᵀ(k − N|k)[M + F_Nᵀ S(k) R_N S(k) F_N] x̂(k − N|k)
 + 2[Uᵀ(k) G_Nᵀ S(k) R_N S(k) F_N − x̄ᵀ(k − N) M − Yᵀ(k) R_N S(k) F_N] x̂(k − N|k)
 + x̄ᵀ(k − N) M x̄(k − N) + Uᵀ(k) G_Nᵀ S(k) R_N S(k) G_N U(k)
 − 2 Uᵀ(k) G_Nᵀ S(k) R_N Y(k) + Yᵀ(k) R_N Y(k)    (2.6)




Here

Y(k) = [yᵀ(k − N), yᵀ(k − N + 1), ..., yᵀ(k)]ᵀ, U(k) = [uᵀ(k − N), uᵀ(k − N + 1), ..., uᵀ(k − 1)]ᵀ,
F_N = [Cᵀ, (CA)ᵀ, ..., (CA^N)ᵀ]ᵀ, F̃_N = [I, Aᵀ, ..., (A^N)ᵀ]ᵀ,
d_N = [dᵀ, dᵀ, ..., dᵀ]ᵀ, D_N = diag{D, D, ..., D},
S(k) = diag{γ(k − N)I, γ(k − N + 1)I, ..., γ(k)I}, R_N = diag{R, R, ..., R},

where D_N, S(k) and R_N each contain N + 1 diagonal blocks. The matrices G̃_N and G_N are block lower triangular with a zero first block row: the (i, j) block of G̃_N is A^{i−j−1}B and the (i, j) block of G_N is C A^{i−j−1} B for i > j (and 0 otherwise), with i = 0, 1, ..., N and j = 0, 1, ..., N − 1.

At moment k, the optimal state estimate x̂*(k − N|k) is obtained by solving optimization Problem 2.1, while the other optimal state estimates x̂*(k − N + j|k) within the window are derived from Eq. (2.7)

x̂*(k − N + j|k) = A^j x̂*(k − N|k) + Σ_{i=0}^{j−1} A^{j−i−1} B u(k − N + i), j = 1, 2, ..., N    (2.7)

Clearly, when the system output at moment k + 1 is transmitted via the unreliable network, the known information moves from the data window at moment k to that at moment k + 1, i.e., from (I_k^N, x̄(k − N)) to (I_{k+1}^N, x̄(k + 1 − N)), where the predicted state x̄(k + 1 − N) at moment k + 1 is derived from the optimal state estimate x̂*(k − N|k) at moment k via the prediction formula x̄(k + 1 − N) = A x̂*(k − N|k) + B u(k − N); the optimal state estimate x̂*(k + 1 − N|k + 1) at moment k + 1, as well as the other state estimates within the window, is then found by solving optimization Problem 2.1 again.

Remark 2.3 In the performance index (2.4), the weight matrices M and R can be seen as an extension of the scalar μ in the literature [14]. In addition, the weight matrices M and R bring more degrees of freedom to the designed estimator and can better compensate the effects of the uncertainty due to packet loss. The distinctive feature compared to other estimation methods is that a segment of the latest input–output data within a moving window, instead of only the data at the previous moment [17] or


directly setting to zero [1, 6], can be used for the design of the estimator when packet loss occurs.

Remark 2.4 For ease of analysis, this subsection considers only the case where the packet arrival probability γ is constant, i.e., γ does not vary with time. It can be seen from Eq. (2.6) that packet loss affects the optimal state estimate obtained from optimization Problem 2.1 and degrades the estimation performance. However, by reasonably adjusting the weight matrices M and R, this moving horizon estimation method is able to effectively suppress the system noise and measurement noise as well as compensate the uncertainty caused by packet loss. The following section analyzes the impact of the packet arrival probability γ on the estimation performance, and derives appropriate penalty weight matrices M and R by solving a linear matrix inequality so as to ensure good estimation performance of the estimator.
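To make Problem 2.1 concrete, the following Python sketch carries out one estimation step for a toy system (all dimensions and numerical values are illustrative assumptions). The polyhedral state constraint x̂(i|k) ∈ X is dropped here, so the quadratic program (2.6) collapses to its closed-form normal equations; with the constraint kept, a QP solver would be used instead.

```python
import numpy as np

def mhe_step(A, B, C, M, R, x_bar, Y, U, gamma):
    """One networked-MHE step (Problem 2.1 without the state constraint):
    estimate x(k-N) from the window data Y, U and the arrival flags gamma."""
    p, m = C.shape[0], B.shape[1]
    N = len(gamma) - 1                              # N+1 measurements in the window
    # F_N = [C; CA; ...; CA^N] and the block lower-triangular input map G_N
    F = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N + 1)])
    G = np.zeros((p * (N + 1), m * N))
    for i in range(1, N + 1):
        for j in range(i):
            G[p*i:p*(i+1), j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - 1 - j) @ B
    S = np.kron(np.diag(np.asarray(gamma, float)), np.eye(p))  # S(k): drops lost samples
    RN = np.kron(np.eye(N + 1), R)
    W = F.T @ S @ RN @ S
    # normal equations of the quadratic cost (2.4)
    return np.linalg.solve(M + W @ F, M @ x_bar + W @ (Y - G @ U))

# toy system, N = 2, noise-free window with u = 0 and one lost packet (gamma = [1, 0, 1])
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[1.0], [1.0]]); C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, -0.5])
Y = np.array([(C @ np.linalg.matrix_power(A, i) @ x0)[0] for i in range(3)])
x_hat = mhe_step(A, B, C, np.eye(2), np.eye(1), x0, Y, np.zeros(2), [1, 0, 1])
assert np.allclose(x_hat, x0)   # exact recovery when x_bar = x(k-N) and no noise
```

Note how the weighting S(k) simply excludes the lost sample from the residual, exactly as the output constraint in (2.5) prescribes.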

2.2.3 Performance Analysis of the Estimator

This subsection focuses on the estimation performance of networked control systems in the presence of packet loss. First, the estimation error at moment k − N is defined as

e(k − N) ≜ x(k − N) − x̂*(k − N|k)    (2.8)

As indicated by Eq. (2.6), the dynamics of the estimation error is a stochastic process with respect to the random variable γ(k); Theorem 2.1 therefore gives a conclusion on the expectation of the squared Euclidean norm of the estimation error.

Theorem 2.1 For the system (2.2) and the estimation error (2.8), if there exist penalty weight matrices M and R in the cost function (2.4) such that inequality (2.9) holds,

a = 8 f⁻¹ ρ < 1    (2.9)

then the expectation of the squared Euclidean norm of the estimation error is bounded as lim_{k→∞} E{||e(k − N)||²} ≤ b/(1 − a), where

E{||e(k − N)||²} ≤ ẽ(k − N), k = N, N + 1, ...    (2.10)

and the upper bound function has the form

ẽ(k) = a ẽ(k − 1) + b, ẽ(0) = b₀    (2.11)

and


ρ ≜ λ_max(Aᵀ M A), m ≜ λ_max(M), r_N ≜ ||R_N||, η_w ≜ max ||w||, η_v ≜ max ||v||, h_N ≜ ||H_N||,
f ≜ λ_min(M + γ F_Nᵀ R_N F_N), a ≜ 8 f⁻¹ ρ,
b ≜ 4 f⁻¹ [2 m η_w² + r_N (√(N + 1) η_w h_N + √N η_v)²],
b₀ ≜ 4 f⁻¹ [m d₀² + r_N (√(N + 1) η_w h_N + √N η_v)²], d₀ ≜ max_{x(0), x̄(0) ∈ X} ||x(0) − x̄(0)||,

W(k) = [wᵀ(k − N), wᵀ(k − N + 1), ..., wᵀ(k − 1)]ᵀ, V(k) = [vᵀ(k − N), vᵀ(k − N + 1), ..., vᵀ(k)]ᵀ,

and H_N is the block lower-triangular matrix whose first block row is zero and whose (i, j) block is C A^{i−j−1} for i > j (and 0 otherwise), i = 0, 1, ..., N, j = 0, 1, ..., N − 1.

Proof The key to proving the theorem is to find upper and lower bounds on the minimum J*(k) of the performance index. First, the upper bound is considered. By the optimality of x̂*(k − N|k), it follows that

J*(k) ≤ [ ||x̂*(k − N|k) − x̄(k − N)||²_M + Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R ] |_{x̂*(k−N|k) = x(k−N)}    (2.12)

The second term on the right-hand side of Eq. (2.12) can be rewritten as

[ Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R ] |_{x̂*(k−N|k) = x(k−N)} = ||Ỹ(k) − [F_N x(k − N) + G_N U(k)]||²_{S(k)R_N S(k)}    (2.13)

where Ỹ(k) = [ỹᵀ(k − N), ..., ỹᵀ(k)]ᵀ and Ỹ(k) = F_N x(k − N) + G_N U(k) + H_N W(k) + V(k); hence Eq. (2.13) simplifies to

[ Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R ] |_{x̂*(k−N|k) = x(k−N)} = ||H_N W(k) + V(k)||²_{S(k)R_N S(k)}    (2.14)

Therefore, an upper bound on the minimum of the performance index is

J*(k) ≤ ||x(k − N) − x̄(k − N)||²_M + ||H_N W(k) + V(k)||²_{S(k)R_N S(k)}    (2.15)


Next, a lower bound on the minimum of the performance index J*(k) is considered. Notice that the second term on the right-hand side of Eq. (2.4) can be written as

Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R = ||Ỹ(k) − [F_N x̂*(k − N|k) + G_N U(k)]||²_{S(k)R_N S(k)}    (2.16)

Since

||F_N x(k − N) − F_N x̂*(k − N|k)||²_{S(k)R_N S(k)} = ||{Ỹ(k) − [F_N x̂*(k − N|k) + G_N U(k)]} − {Ỹ(k) − [F_N x(k − N) + G_N U(k)]}||²_{S(k)R_N S(k)}    (2.17)

it can be deduced that

||F_N x(k − N) − F_N x̂*(k − N|k)||²_{S(k)R_N S(k)} ≤ 2||Ỹ(k) − [F_N x̂*(k − N|k) + G_N U(k)]||²_{S(k)R_N S(k)} + 2||Ỹ(k) − [F_N x(k − N) + G_N U(k)]||²_{S(k)R_N S(k)}    (2.18)

which can be rearranged into

||Ỹ(k) − [F_N x̂*(k − N|k) + G_N U(k)]||²_{S(k)R_N S(k)} ≥ 0.5||F_N x(k − N) − F_N x̂*(k − N|k)||²_{S(k)R_N S(k)} − ||Ỹ(k) − [F_N x(k − N) + G_N U(k)]||²_{S(k)R_N S(k)}    (2.19)

Combining Eqs. (2.13), (2.16) and (2.19) gives

Σ_{i=k−N}^{k} ||y(i) − ŷ(i|k)||²_R ≥ 0.5||F_N x(k − N) − F_N x̂*(k − N|k)||²_{S(k)R_N S(k)} − ||H_N W(k) + V(k)||²_{S(k)R_N S(k)}    (2.20)

Since the first term on the right-hand side of Eq. (2.4) satisfies

||x(k − N) − x̂*(k − N|k)||²_M = ||[x(k − N) − x̄(k − N)] + [x̄(k − N) − x̂*(k − N|k)]||²_M ≤ 2||x(k − N) − x̄(k − N)||²_M + 2||x̄(k − N) − x̂*(k − N|k)||²_M    (2.21)


it further gives

||x̂*(k − N|k) − x̄(k − N)||²_M ≥ 0.5||x(k − N) − x̂*(k − N|k)||²_M − ||x(k − N) − x̄(k − N)||²_M    (2.22)

Combining Eqs. (2.8), (2.20) and (2.22) yields

J*(k) ≥ 0.5||e(k − N)||²_M − ||x(k − N) − x̄(k − N)||²_M + 0.5||F_N e(k − N)||²_{S(k)R_N S(k)} − ||H_N W(k) + V(k)||²_{S(k)R_N S(k)}    (2.23)

Finally, combining the upper and lower bounds on the minimum of the performance index J*(k) gives the expected properties of the estimation error norm. Specifically, combining Eqs. (2.15) and (2.23) yields

||e(k − N)||²_M + ||F_N e(k − N)||²_{S(k)R_N S(k)} ≤ 4||x(k − N) − x̄(k − N)||²_M + 4||H_N W(k) + V(k)||²_{S(k)R_N S(k)}    (2.24)

For the second term on the right-hand side of Eq. (2.24), it holds that

||H_N W(k) + V(k)||²_{S(k)R_N S(k)} ≤ ||R_N||(||H_N|| ||W(k)|| + ||V(k)||)² ≤ r_N (√(N + 1) η_w h_N + √N η_v)²    (2.25)

Thus, Eq. (2.24) can be translated into

||e(k − N)||²_M + ||F_N e(k − N)||²_{S(k)R_N S(k)} ≤ 4||x(k − N) − x̄(k − N)||²_M + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.26)

For the first term on the right-hand side of Eq. (2.26), it follows that

||x(k − N) − x̄(k − N)||²_M = ||A e(k − N − 1) + w(k − N − 1)||²_M ≤ 2||A e(k − N − 1)||²_M + 2||w(k − N − 1)||²_M    (2.27)

Substituting Eq. (2.27) into Eq. (2.26) leads to

||e(k − N)||²_M + ||F_N e(k − N)||²_{S(k)R_N S(k)} ≤ 8||A e(k − N − 1)||²_M + 8||w(k − N − 1)||²_M + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.28)

Also, Eq. (2.28) is equivalent to

eᵀ(k − N)[M + F_Nᵀ S(k) R_N S(k) F_N] e(k − N)


≤ 8 eᵀ(k − N − 1) Aᵀ M A e(k − N − 1) + 8 m η_w² + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.29)

Since Eq. (2.29) contains the random variable γ(k), taking the expectation of both sides of Eq. (2.29) yields

E{eᵀ(k − N)[M + F_Nᵀ S(k) R_N S(k) F_N] e(k − N)} ≤ 8 E{eᵀ(k − N − 1) Aᵀ M A e(k − N − 1)} + 8 m η_w² + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.30)

and further

E{[λ_min(M + F_Nᵀ S(k) R_N S(k) F_N)] eᵀ(k − N) e(k − N)} ≤ 8 λ_max(Aᵀ M A) E{||e(k − N − 1)||²} + 8 m η_w² + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.31)

Since λ_min(M + F_Nᵀ S(k) R_N S(k) F_N) and the estimation error e(k − N) are independent of each other, with the parameters defined in Theorem 2.1 it follows that

f · E{||e(k − N)||²} ≤ 8 ρ · E{||e(k − N − 1)||²} + 8 m η_w² + 4 r_N (√(N + 1) η_w h_N + √N η_v)²    (2.32)

Further, from Eq. (2.26), it follows that

E{||e(0)||²} ≤ 4 f⁻¹ [m d₀² + r_N (√(N + 1) η_w h_N + √N η_v)²] = b₀    (2.33)

By the upper bound function ẽ(k − N), it holds that

E{||e(k − N)||²} ≤ ẽ(k − N), k = N, N + 1, ...    (2.34)

Finally, if the inequality condition (2.9) holds, the upper bound b/(1 − a) on the expectation of the squared estimation error norm is readily obtained, since ẽ(k) = aᵏ ẽ(0) + b Σ_{i=0}^{k−1} aⁱ. Thus, the proof is completed.

It follows from Theorem 2.1 that the desired properties of the squared estimation error norm result from a combination of several factors, e.g., the coefficient matrices of the system, the penalty weight matrices M and R, the moving horizon N, and the packet arrival probability γ. Due to the presence of the packet arrival probability γ, the weight matrices M and R need to be adjusted to compensate the effect of packet loss, thus ensuring that the estimation error converges asymptotically. Here, for a given moving horizon N, satisfactory estimation performance can be achieved by solving the following linear matrix inequality to obtain appropriate


penalty weight matrices M and R.

φ₁ + βφ₂ − 1 > 0, i.e., φ₂⁻¹(1 − φ₁) < β ≤ 1    (2.60)

Then, for condition (2.58) to hold, the weight parameters μ and η need to satisfy

0 < μη⁻¹ < (φ₁ + βφ₂ − 1)⁻¹ f [1 − (1 − α)²(φ₁ + βφ₂)]    (2.61)

where Eq. (2.61) implies

1 − (1 − α)²(φ₁ + βφ₂) > 0, i.e., 1 − √((φ₁ + βφ₂)⁻¹) < α ≤ 1    (2.62)

That is, if the given arrival probability α of the data packet satisfies

0 ≤ α ≤ 1 − √((φ₁ + βφ₂)⁻¹)    (2.63)

and β satisfies Eq. (2.60), then there exist no weight parameters μ and η such that Eq. (2.58) holds, and thus the norm of the estimation error expectation diverges to infinity. In short, if Eq. (2.59) holds, or if Eqs. (2.60) and (2.62) hold simultaneously,


then there always exist weight parameters μ and η such that the norm of the estimation error expectation converges to the constant b/(1 − a). Furthermore, if the given packet arrival probabilities α and β satisfy Eqs. (2.59) and (2.63), and there exist weight parameters μ and η such that the inequality μη⁻¹ > (φ₁ + βφ₂ − 1)⁻¹ f [1 − (1 − α)²(φ₁ + βφ₂)] holds, then the norm of the estimation error expectation converges to the constant b/(1 − a). Moreover, it follows from Theorem 2.2 that if there is no system noise and no measurement noise, and the inequality condition (2.52) holds, then the norm of the estimation error expectation converges to zero. In addition, when the given α and β make Eq. (2.52) hold, the upper bound function ẽ(k − N) of the estimation error expectation converges faster as μη⁻¹ decreases; in other words, the smaller μη⁻¹ is, the better the estimation performance. As for the selection of the moving horizon N, a larger N means more data available in the window, but also a larger propagation error and computational burden; the selection of N is therefore a trade-off between estimation performance and computational burden. Here N = 1 is considered: if a moving horizon N > 1 is selected, the expectation (2.51) of the estimation error changes significantly, which makes it difficult to obtain the convergence conclusions (Eqs. (2.52)–(2.54)).

Remark 2.7 It is worth noting that the weight parameters μ and η satisfying the linear inequality (2.52) are not unique, and their feasible domains are derived here. Of course, in order to find the minimal convergence value b/(1 − a) of the estimation performance, b/(1 − a) can be minimized subject to the condition a < 1, so that optimal weight parameters μ, η are obtained instead of their feasible domains. Minimizing the convergence value b/(1 − a) means that Theorem 2.2 gives a supremum function of the expectation of the Euclidean norm of the estimation error, rather than an arbitrary upper bound function.

Remark 2.8 In a random packet loss process, if data packets keep dropping during a long period of time (i.e., from moment t = k + 1 to moment t = k + n₁), so that the number of consecutive packet drops is n₁, then the probability of such a packet loss event is

P(α(k + 1) = 0, β(k + 1) = 0, ..., α(k + n₁) = 0, β(k + n₁) = 0 | t = k + 1, ..., t = k + n₁) =

∏_{i=1}^{n₁} P(α(k + i) = 0 | t = k + i) · P(β(k + i) = 0 | t = k + i) = (1 − ᾱ)^{n₁} (1 − β̄)^{n₁}, 0 < ᾱ < 1, 0 < β̄ < 1    (2.64)
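For instance, with arrival probabilities of 0.85 on both channels (the value measured later in the simulations of Sect. 2.3.4), the burst probability (2.64) decays geometrically in n₁, as the small Python check below illustrates.

```python
# Probability (2.64) of n1 consecutive dropouts on both channels.
alpha_bar, beta_bar = 0.85, 0.85      # packet arrival probabilities

def p_burst(n1):
    return (1 - alpha_bar) ** n1 * (1 - beta_bar) ** n1

probs = [p_burst(n) for n in (1, 2, 3)]
assert abs(probs[0] - 0.0225) < 1e-12                     # 0.15 * 0.15
assert all(p1 > p2 for p1, p2 in zip(probs, probs[1:]))   # vanishes rapidly with n1
```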

It can be seen from Eq. (2.64) that as n₁ becomes larger, the probability of this event becomes smaller; it is a small-probability event. Furthermore, the design of the estimator does not consider the relationship between the number of consecutive packet losses n₁ and the moving horizon N. As shown in the estimation error Eq. (2.45),


when the number of consecutive packet losses n₁ is larger than the moving horizon N, the convergence of the estimation error depends only on the system matrix, i.e., the estimation error equation simplifies to

e(k − N) = ϕ(k − N − 1) e(k − N − 1) + μ w(k − N − 1)    (2.65)

It follows that the estimation error may gradually diverge if the eigenvalues of ϕ(k − N − 1) lie outside the unit circle. However, in the performance analysis of the estimator, as shown in Eqs. (2.51)–(2.54), this section considers the statistical properties of the estimation error expectation, i.e., the convergence of the norm of the estimation error expectation, and does not consider the convergence of the estimation error in a specific packet loss event, such as a long run of consecutive packet losses. Moreover, a long run of consecutive packet losses is a small-probability event and has little impact on the convergence of the norm of the estimation error expectation. As long as there exist weight parameters such that inequality (2.52) holds, the norm of the estimation error expectation converges to a constant value.

2.3.4 Numerical Simulations

In order to verify the effectiveness of the moving horizon estimation method proposed in this section, a networked control system with multiple packet losses in a real network environment is considered and a real-time simulation experimental platform is built. The platform consists of a computer, the controlled plant, and two ARM 9 embedded modules, as shown in Fig. 2.12. The two modules serve the controller side and the controlled-plant side, respectively, and are connected by an IP network, where the communication protocol is UDP. A detailed description of the ARM 9 embedded module can be found in the literature [18]. First, the following controlled plant in state-space form is considered, with sampling time 0.1 s:

x(k + 1) = [1.7240 −0.7788; 1 0] x(k) + [1; 1] u(k) + w(k)
y(k) = [0.0286 0.0264] x(k) + v(k)

Fig. 2.12 ARM 9 embedded module

As shown in Fig. 2.13, this real-time simulation experiment is implemented in the Matlab/Simulink environment on the computer. Its structural framework can be divided into the controller part and the controlled-plant part, where the modules Netsend and Netrecv represent the UDP-based sender and receiver for transmitting and receiving data packets, respectively. The system output signal and the controller output signal are transmitted over two campus intranets with IP addresses 192.168.0.201 and 192.168.0.202, respectively. In summary, the procedure of the whole real-time simulation experiment is as follows: first, install the software corresponding to the ARM 9 embedded module and connect the related hardware devices; second, build the corresponding Simulink block diagram (shown in Fig. 2.14) in the Matlab/Simulink environment based on the experimental structure of Fig. 2.13, then debug and run it; finally, monitor the real-time data on a human–computer interaction interface, and collect and process the required data (shown in Fig. 2.15). Since the simulation experiments run in a wireless IP network environment, data transmission inevitably introduces packet loss. After repeated test experiments over 15 time steps (150 sampling points), the average packet arrivals when sending and receiving packets over the IP network are as shown in Figs. 2.16 and 2.17, giving an approximate packet arrival probability α = β = 0.85. Regarding the simulation, the controller output is taken as u(k) = 2 sin(k) and the moving horizon as N = 1; φ₁ = 2.5771 and φ₂ = 1.7321 are found according to Eqs. (2.52)–(2.54), and a suitable weight ratio μη⁻¹ = 3 × 10⁻⁴ is chosen, yielding f = 0.0015, a = 0.6750 < 1, and b = 6.2186. In addition, the system noise and measurement noise are assumed to be independent, uniformly distributed noises on [−0.1, 0.1]. To facilitate the analysis, the following performance index, the asymptotic root mean square error, is defined

ARMSE(k) = (1/(N + 1)) Σ_{i=k−N}^{k} √( L⁻¹ Σ_{l=1}^{L} ||ξ(l, i) − ξ̂(l, i|k)||² )
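A direct computation of this index can be sketched as follows (Python; the array layout with L runs, N + 1 window samples and state dimension n is an assumption of this illustration).

```python
import numpy as np

def armse(xi, xi_hat):
    """ARMSE at one moment k for arrays of shape (L, N+1, n):
    L Monte Carlo runs, N+1 samples in the window, state dimension n."""
    # inner term: sqrt( L^-1 * sum_l ||xi(l,i) - xi_hat(l,i|k)||^2 ) per window sample i
    per_sample = np.sqrt(np.mean(np.sum((xi - xi_hat) ** 2, axis=-1), axis=0))
    return per_sample.mean()            # average over the N+1 window samples

# sanity checks on illustrative data
xi = np.random.default_rng(2).normal(size=(20, 2, 2))
assert armse(xi, xi) == 0.0                                  # perfect estimates
assert np.isclose(armse(xi, xi + 0.5), np.sqrt(2) * 0.5)     # offset d in each of n=2
                                                             # components: sqrt(n)*|d|
```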

Fig. 2.13 Networked real-time simulation test rig

Fig. 2.14 Simulink module for simulation experiments

where ξ̂(l, i|k) denotes the estimate of the state ξ(l, i) at moment k in the l-th simulation, and L denotes the number of simulation runs. The relevant experimental results are described below. The sensor measurement output ỹ(k) and the valid data y(k) used by the estimator, as well as the controller output u(k) and the control input ũ(k) of the system, are shown in Figs. 2.18 and 2.19, respectively. By comparing with the Kalman filter method


Fig. 2.15 Visual monitoring interface

Fig. 2.16 Packet loss status of feedback channel

Fig. 2.17 Packet loss status of forward channel

proposed in the literature [1], Figs. 2.20 and 2.21 compare the two networked optimal state estimation algorithms. As shown, in the presence of multiple packet losses, the estimation results obtained with the proposed moving horizon method are significantly better than those obtained with the Kalman filter, mainly because the moving horizon estimation method has multiple adjustable degrees of freedom (i.e., the weight parameters μ and η), which gives the estimator good robustness and thus better estimation performance, whereas the Kalman filter method has no such freedom. Also, the estimation result of an extended state variable, i.e., the system control input, is shown in Fig. 2.22. Compared with the actual control input signal, this state variable is estimated accurately by the moving horizon estimation method, indicating that the estimation algorithm is effective. Furthermore, Fig. 2.23 compares the asymptotic root mean square error of the two methods: the estimation deviation obtained using moving horizon estimation is significantly smaller than that of the Kalman filter method. Despite the packet loss, the estimator thus retains good estimation performance. Finally, Fig. 2.24 compares the Euclidean norm of the state estimation error expectation with its upper bound function (2.53), where the solid line indicates the Euclidean norm trajectory of the estimation error expectation, the dotted line indicates its upper bound function with system and measurement noise, and the dashed line indicates its upper bound function without the noises. It can be seen from the figure that the upper bound function with the noises converges to a stable value over time, which characterizes the boundedness of the estimation performance, while the upper bound function without the noises converges to zero, which indicates the unbiasedness of the estimation. Moreover, the Euclidean norm of the estimation error expectation always stays within the range of its upper bound function. In this way, the simulation results not only show the good estimation performance of the designed moving horizon estimator, but also


Fig. 2.18 Impact of packet loss on measured data

Fig. 2.19 Effect of packet loss on control signals

Fig. 2.20 Comparison of methods on state x1

Fig. 2.21 Comparison of methods on state x2

Fig. 2.22 Comparison of methods on control inputs

Fig. 2.23 Comparison of methods on ARMSE

Fig. 2.24 Norm of the estimation error expectation and upper bound function

verify the correctness of the conclusions obtained from the above analysis on the estimation performance.

2.4 Summary of This Chapter

In this chapter, a moving horizon estimation method based on a moving horizon optimization strategy is proposed to solve the state estimation problem of networked control systems with packet loss, covering two cases: packet loss in the channel between sensor and controller, and multiple packet losses in both the sensor-to-controller and controller-to-actuator channels. The impact of packet loss on the estimation performance is effectively overcome by solving for weight matrices satisfying certain linear matrix inequalities. The estimation method makes full use of the additional information about the noise, state, and input–output data, in the form of inequality constraints, to improve the accuracy and reasonableness of the estimates. Unlike other estimation methods, a distinctive feature of this method is that, when a packet loss occurs, a segment of the latest data within the moving window, rather than only the data at the previous moment or a value directly set to zero, is used in the estimator design. Finally, by analyzing the performance of the estimator, sufficient conditions are given to guarantee the convergence of the estimation performance.

References


1. Sinopoli B et al (2004) Kalman filtering with intermittent observations. IEEE Trans Autom Control 49(9):1453–1464
2. Schenato L et al (2007) Foundations of control and estimation over lossy networks. Proc IEEE 95(1):163–187
3. Liu X, Goldsmith A (2004) Kalman filtering with partial observation losses. In: Proceedings of the 43rd IEEE conference on decision and control, Nassau, Bahamas
4. Huang M, Dey S (2007) Stability of Kalman filtering with Markovian packet losses. Automatica 43(4):598–607
5. Mo Y, Sinopoli B (2012) Kalman filtering with intermittent observations: tail distribution and critical value. IEEE Trans Autom Control 57(3):677–689
6. Shi L, Epstein M, Murray RM (2010) Kalman filtering over a packet-dropping network: a probabilistic perspective. IEEE Trans Autom Control 55(3):594–604
7. Liang Y, Chen T, Pan Q (2010) Optimal linear state estimator with multiple packet dropouts. IEEE Trans Autom Control 55(6):1428–1433
8. Sahebsara M, Chen T, Shah SL (2007) Optimal H2 filtering in networked control systems with multiple packet dropout. IEEE Trans Autom Control 52(8):1508–1513
9. Sahebsara M, Chen T, Shah SL (2008) Optimal H∞ filtering in networked control systems with multiple packet dropouts. Syst Control Lett 57(9):696–702
10. Epstein M et al (2008) Probabilistic performance of state estimation across a lossy network. Automatica 44(12):3046–3053
11. Wang Z, Yang F, Ho DWC (2006) Robust H∞ filtering for stochastic time-delay systems with missing measurements. IEEE Trans Sig Process 54(7):2579–2587
12. Yang R, Shi P, Liu G (2011) Filtering for discrete-time networked nonlinear systems with mixed random delays and packet dropouts. IEEE Trans Autom Control 56(11):2655–2660
13. Xue B, Li S, Zhu Q (2012) Moving horizon state estimation for networked control systems with multiple packet dropouts. IEEE Trans Autom Control 57(9):2360–2366
14. Alessandri A, Baglietto M, Battistelli G (2003) Receding-horizon state estimation for discrete-time linear systems. IEEE Trans Autom Control 48(3):473–478
15. Alessandri A, Baglietto M, Battistelli G (2008) Moving-horizon state estimation for nonlinear discrete-time systems: new stability results and approximation schemes. Automatica 44(7):1753–1765
16. Gupta V, Hassibi B, Murray RM (2007) Optimal LQG control across packet-dropping links. Syst Control Lett 56(6):439–446
17. Rao CV, Rawlings JB, Lee JH (2001) Constrained linear state estimation—a moving horizon approach. Automatica 37(10):1619–1628
18. Hu W, Liu G, Rees D (2007) Event-driven networked predictive control. IEEE Trans Ind Electron 54(3):1603–1613

Chapter 3

Design of Predictive Controller for Networked Systems

3.1 Overview

In the previous chapter, the state estimation problem of networked control systems with random packet loss was addressed: the networked moving horizon state estimation method was introduced, and the relationship between the packet arrival probability and the estimation performance of NCSs was given. It is known that packet loss and data quantization degrade the stability and performance of networked control systems, and may even lead to instability. For the packet loss problem of networked control systems, much research has been devoted to controller design. When packet loss occurs in the network, the first question is how the controller or actuator should act to attenuate or eliminate the effect of the loss. Two compensation strategies are commonly used:

➀ Zero-input policy [1, 2]: in case of packet loss, the controller or actuator applies zero as the control input at the current moment.
➁ Hold-input policy [3, 4]: when a packet is lost, the controller or actuator reuses the control input of the previous moment as the control input at the current moment.

The literature [1] investigated the state estimation and control problem of NCSs with simultaneous packet loss between the sensor and the controller and between the controller and the actuator, using a zero-input compensation strategy. The literature [2] considered the robust H∞ control problem of NCSs with random packet loss based on a zero-input compensation strategy. The literature [3], by defining the sequence of time intervals between two successful packet transmissions, gave stability conditions for networked control systems with an arbitrary packet loss process and with a Markov packet loss process by constructing a Lyapunov function associated with the packet loss; however, the design method of the H∞ controller was not given. In contrast, in the literature [4], for a class of networked control systems with short time delay and packet loss, historical state information and control information were used © Chemical Industry Press 2023 S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2_3


to estimate the lost control signals, and further gave the optimal control law for the closed-loop control system. These two compensation strategies are simple and intuitive and require little computational effort, but they tend to produce a conservative compensation effect. In addition, they ignore an important characteristic of network transmission: data are transmitted through the communication network in the form of packets. A packet can contain not only the data at the current moment, but also past and future data, which is not possible in a traditional control system. Based on this property of network transmission, Liu et al. [5] designed a novel networked predictive controller to compensate for the effects of network delay and packet loss in the feedback channel, and gave sufficient conditions for the stability of closed-loop NCSs with constant delay and with random bounded delay, respectively. In contrast to the infinite-horizon quadratic performance index of [5], a networked predictive control compensation method for forward channels with packet loss was investigated in the literature [6] based on a finite-horizon quadratic performance index containing a terminal cost function. Moreover, in network communication, because the transmission capacity of the communication channel is limited, data must be quantized before transmission to reduce the packet size. The quantization process can be seen as an encoding process realized by a quantizer; in other words, a quantizer maps a continuous signal to a piecewise-constant signal taking values in a finite set. Although there are many types of quantizers, the following two are commonly employed.

➀ Static quantizers without memory: logarithmic quantizers [7–9] and uniform quantizers [10] belong to this class. Their advantage is the simplicity of the encoding and decoding process; their disadvantage is that they require an infinite number of quantization levels to ensure asymptotic stability of the system [7–9]. The literature [7] investigated the quantized feedback control problem for a class of discrete single-input single-output linear time-invariant systems and proved that the logarithmic quantizer is the optimal static quantizer for the quadratic stabilization of the system. The literature [8] extended the conclusions of [7] to multi-input multi-output systems based on a sector-bounded approach.

➁ Dynamic time-varying quantizers with memory: this type of quantizer adjusts its quantization parameters to enlarge the domain of attraction and shrink the steady-state limit cycle. However, this makes the controller design more complex, which is not conducive to the analysis and synthesis of the system [11]. In the literature [12], a quantized state feedback controller with a certain H∞ performance was designed for the simultaneous presence of network-induced delay, packet loss, and quantization error. The literature [13]
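A memoryless logarithmic quantizer, the static quantizer shown optimal in [7], can be sketched as follows; the quantization density ρ and base level u0 are assumed values:

```python
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Sketch of a memoryless logarithmic quantizer.

    Levels are u0 * rho**i, i in Z (rho and u0 are illustrative choices).
    The quantizer satisfies the sector bound |q(v) - v| <= delta * |v|
    with delta = (1 - rho) / (1 + rho), which is exactly what the
    sector-bounded analysis of [7, 8] exploits.
    """
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    s = math.copysign(1.0, v)
    # level index i such that u_i/(1+delta) < |v| <= u_i/(1-delta)
    i = math.floor(math.log((1.0 - delta) * abs(v) / u0, rho))
    return s * u0 * rho ** i

rho = 0.5
delta = (1.0 - rho) / (1.0 + rho)
for v in (0.03, 0.7, -2.4, 13.0):
    q = log_quantize(v, rho)
    assert abs(q - v) <= delta * abs(v) + 1e-12
```

Note how the number of levels is infinite in principle (one per integer i), matching the remark that static quantizers need infinitely many levels for asymptotic stability.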


studied the generalized H2 filtering problem for bandwidth-constrained networked control systems. Based on the above analysis, this chapter first introduces networked predictive control methods with bounded packet loss, gives sufficient conditions for the existence of networked predictive controllers that guarantee the asymptotic stability of NCSs with a certain performance level, and establishes the relationship between the maximum number of consecutive packet losses and the control performance of NCSs. Second, a networked robust predictive control algorithm based on a sector-bounded approach is introduced for networked control systems with control input quantization, and a cone complementary linearization method for finding the coarsest quantization density is given under the premise of guaranteeing system stability and control performance.
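The zero-input and hold-input policies of Sect. 3.1 are easy to compare on a toy example; the scalar plant, feedback gain, and loss pattern below are illustrative assumptions:

```python
# Compare zero-input and hold-input compensation on an illustrative
# scalar plant x(k+1) = a*x(k) + b*u(k) with a nominal feedback u = f*x.
a, b, f = 1.1, 1.0, -0.6           # assumed open-loop-unstable plant
loss = [0, 1, 1, 0, 1, 0, 0, 1]    # 1 = control packet lost (assumed pattern)

def run(policy):
    x, u_prev, traj = 1.0, 0.0, []
    for lost in loss * 5:
        if not lost:
            u = f * x                    # control packet arrives
        elif policy == "zero":
            u = 0.0                      # zero-input compensation
        else:
            u = u_prev                   # hold-input compensation
        x = a * x + b * u
        u_prev = u
        traj.append(x)
    return traj

zero_traj = run("zero")
hold_traj = run("hold")
```

For this particular gain and loss pattern both closed loops remain stable, but in general the two policies behave quite differently, which is why the compensation choice matters.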

3.2 Predictive Control for Networked Control Systems with Bounded Packet Loss

For the control of networked control systems with bounded packet loss, a novel NCS model will be developed, and based on this model a networked predictive control strategy capable of predicting future control actions of the system in advance will be introduced [14]. The basic idea of the networked predictive controller design is that the controller uses the hold-input policy when packet loss occurs, while the actuator selects the appropriate predictive control value, from the latest packets stored in the buffer, to act on the controlled plant. In addition, this section will give sufficient conditions for the existence of a networked predictive controller that guarantees the asymptotic stability of the networked control system with a certain level of performance, and will establish the relationship between the maximum number of consecutive packet losses and the performance of the NCS.

3.2.1 Modelling of Networked Control Systems

The structure of a networked control system with bounded packet loss is shown in Fig. 3.1. Bounded packet loss means that the maximum number of consecutive packet losses is bounded and the loss process is not required to satisfy any particular random distribution; the number of consecutive packet losses refers to the number of sustained losses occurring between two successful packet transmissions. The networked control system consists of sensors, an unreliable communication network, a predictive controller, a buffer, and the controlled plant (including an actuator). The controlled plant can be described by the following discrete-time state-space model

x(k + 1) = Ax(k) + Bu(k)

(3.1)


Fig. 3.1 Networked control system with bounded packet loss (sensor → lossy network → MPC controller → lossy network → buffer → actuator/plant; signals x(k), U(k), u(k))

where x(k) ∈ R n and u(k) ∈ R m are the system state and control input, respectively, and A and B are constant matrices with appropriate dimensions, assuming that the system state is fully measurable and matrix pair ( A, B) is controllable. Without loss of generality, assume that the input constraints of the system are ||u(k)|| ≤ u max

(3.2)
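Model (3.1) under the input constraint (3.2) can be simulated directly; the matrices, feedback gain, and bound below are illustrative assumptions:

```python
import numpy as np

# Simulate x(k+1) = A x(k) + B u(k) from (3.1) under ||u|| <= u_max from (3.2).
# A, B, the feedback gain K, and u_max are illustrative assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # discretized double integrator
B = np.array([[0.005],
              [0.1]])
K = np.array([[-5.0, -3.0]])          # an assumed stabilizing state feedback
u_max = 1.0

def saturate(u, bound):
    """Project u onto the ball ||u|| <= bound, enforcing constraint (3.2)."""
    n = np.linalg.norm(u)
    return u if n <= bound else u * (bound / n)

x = np.array([[1.0], [0.0]])
u_norms = []
for _ in range(200):
    u = saturate(K @ x, u_max)
    u_norms.append(float(np.linalg.norm(u)))
    x = A @ x + B @ u
```

In the algorithm of this section the bound is imposed as a constraint inside the optimization (3.8) rather than by saturation; the clipped feedback here only illustrates the plant model and the constraint itself.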

As shown in Fig. 3.1, the sensor, controller, and actuator are time-driven, and the sensor and controller transmit data in packets at each sampling time. Two switches depict the packet loss state of the sensor-controller channel and the controller-actuator channel, respectively. The buffer receives and saves data from the controller and sends data to the actuator. In addition, the controller adopts the hold-input strategy, the buffer stores the latest predictive control sequence derived by the predictive control strategy, and the actuator selects the appropriate predictive control value from the buffer and applies it to the controlled plant. As shown in Fig. 3.2, when data packets are transmitted over the sensor-controller channel, assume that only the packets at moments d0, d1, ..., di, di+1, di+2, ... (di < di+1) can be successfully transmitted from the sensor to the controller; the number of consecutive packet losses in the time interval [di, di+1] is τ1(di, di+1) = di+1 − di − 1, satisfying 0 ≤ τ1(di, di+1) ≤ Nd. When packets are transmitted over the controller-actuator channel, only the packets at moments h0, h1, ..., hi, hi+1, hi+2, ... (hi < hi+1) can be successfully transmitted from the controller to the buffer; the number of consecutive packet losses in the time interval [hi, hi+1] is τ2(hi, hi+1) = hi+1 − hi − 1, satisfying 0 ≤ τ2(hi, hi+1) ≤ Nh. Clearly, the initial successful transmission moments satisfy the constraint d0 ≤ h0 (since the controller can drop a packet at some future moment only after it first receives a data packet from the sensor). As shown in the figure, a concrete analysis of the packet loss process in the time interval [di, hi+1) satisfying di ≤ hi is considered. The optimal predictive control sequence successfully transmitted from the controller to the buffer at moment hi is U∗(di) = [u∗(di|di), u∗(di + 1|di), ..., u∗(hi|di), ..., u∗(hi+1 − 1|di), u∗(hi+1|di), ..., u∗(di + N − 1|di)], where u∗(di + l|di), l = 0, 1, ..., N − 1


denotes the optimal predictive value of the control input at moment di + l based on the system state x(di) at moment di, and N denotes the prediction horizon, satisfying the constraint Nd + Nh < N. If only the successful transmission moment hi falls within the time interval [di, di+1) and no other successful transmission moment hi+j falls within this interval, then the predictive control sequence U∗(di) in the buffer will act on the controlled plant throughout the future time interval [hi, hi+1 − 1], i.e., the predicted control values u∗(hi + j|di), j = 0, 1, ..., hi+1 − hi − 1 are applied to the plant, and the action time satisfies hi+1 − hi ≤ Nh. However, if multiple successful transmission moments hi, hi+1, ..., hi+L (di ≤ hi < hi+1 < ··· < hi+L ≤ di+1) all fall within the interval [di, di+1), then the predictive control sequences arriving at the buffer at moments hi, hi+1, ..., hi+L are all the same sequence U∗(di), since only one system state x(di) successfully arrives at the controller within [di, di+1) and the predicted control sequence is computed on this basis. The successful transmission moments hi+1, ..., hi+L can therefore be regarded as packet loss moments, i.e., whether the packets sent from the controller to the buffer within [hi+1, hi+L] are lost has no impact on the buffer, whose stored data remains U∗(di); the moment hi+L+1 of successful transmission is thus relabeled as hi+1, and so on. Therefore, the predictive control sequence U∗(di) in the buffer acts on the controlled plant throughout the interval [hi, hi+1 − 1], and its action time satisfies hi+1 − hi ≤ Nh. Similarly, if multiple successful transmission moments di+1, di+2, ..., di+j fall within the time interval [hi, hi+1), the tendency from the control point of view is to use the latest sampled data and drop the old packets, which is called active packet loss: only the packet at moment di+j is retained, the packets at moments di+1, di+2, ..., di+j−1 are actively discarded, and the moment di+j is relabeled as di+1. In summary, whether in the interval [di, di+1) or in the interval [hi, hi+1), there is only one successful transmission moment, hi or di+1, respectively. Based on the above analysis, the specific implementation strategy of the networked predictive controller introduced in this section is as follows: if the optimal predictive control sequence U∗(di) based on the system state x(di) at moment di does not reach the buffer in the time interval [di, hi+1), the actuator sequentially selects from the buffer the predictive control values u∗(hi + j|di−1) of the optimal predictive control sequence U∗(di−1) computed at moment di−1, j = di − hi, di − hi + 1, ..., −1,

Fig. 3.2 Successful transmission moment of data packet (timeline of moments h0, d0, h1, d1, ..., hi, di, hi+1, di+1, hi+2, di+2, ..., hi+L)


and applies them to the controlled plant, i.e., u(hi + j) = u∗(hi + j|di−1); if the optimal predictive control sequence U∗(di) arrives at the buffer, the actuator sequentially selects from the buffer the predictive control values u∗(hi + j|di), j = 0, 1, ..., hi+1 − hi − 1, and applies them to the controlled plant, i.e., u(hi + j) = u∗(hi + j|di). Thus, when the successful transmission moments of a packet satisfy di ≤ hi, the networked control system can be modeled in the time interval [di, hi+1) as

x(k + 1) = Ax(k) + Bu(k), k ∈ [di, hi+1 − 1]

(3.3)

with

u(k) = { u(k|di−1), k ∈ [di, hi − 1]
       { u(k|di),   k ∈ [hi, hi+1 − 1]

di denotes the moment when the ith (i = 1, 2, ...) packet successfully arrives at the controller, hi denotes the moment when the ith (i = 1, 2, ...) packet successfully arrives at the actuator, and they satisfy hi+1 − di − 1 ≤ Nh + Nd < N (Nh, Nd, N are positive integers). Furthermore, the total number of packet losses in the sensor-controller channel over [di, hi+1 − 1] is τ1(di, hi+1 − 1) = hi+1 − di − 1, satisfying 0 ≤ τ1(di, hi+1 − 1) ≤ Nd + Nh < N, and the total number of packet losses in the controller-actuator channel over the same interval is τ2(di, hi+1 − 1) = hi+1 − di − 1, satisfying 0 ≤ τ2(di, hi+1 − 1) ≤ Nd + Nh < N. Thus, in the time interval [di, hi+1 − 1], the total number of packet losses occurring in the sensor-controller channel and the controller-actuator channel is τ(di, hi+1 − 1) = τ1(di, hi+1 − 1) + τ2(di, hi+1 − 1), satisfying 0 ≤ τ(di, hi+1 − 1) < 2N. Assuming that the initial moment d0 satisfies the constraint d0 < h0, in the initial time interval [d0, h0 − 1] no packet (i.e., no predictive control sequence) has been successfully transmitted to the buffer, so the control applied to the controlled plant in [d0, h0 − 1] is u(d0 + l) = u(0), l = 0, 1, ..., h0 − d0 − 1.

Remark 3.1 Unlike the existing descriptions of packet loss in the literature, which model the loss process as a Bernoulli or Markov process, this subsection considers only bounded packet loss and does not require the loss process to satisfy a particular probability distribution; it is therefore more general with respect to the packet loss process. Furthermore, since each packet is time-stamped [5], the number of consecutive packet losses in the sensor-controller channel is known to the controller, while the number of consecutive packet losses in the controller-actuator channel is unknown to it. For the actuator, both the number of consecutive packet losses in the sensor-controller channel and the number of consecutive packet losses in the controller-actuator channel are known.
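The time-stamp bookkeeping of Remark 3.1 and the active-packet-loss rule (keep only the newest sequence) can be sketched as a small buffer class; the names and the horizon-exhausted fallback are illustrative choices, not the book's code:

```python
# Sketch of the time-stamped buffer logic: the buffer keeps the newest
# predictive control sequence U*(d) (stamped with its computation time d),
# and the actuator picks the element matching the current time k.

class ControlBuffer:
    def __init__(self, n_horizon):
        self.N = n_horizon
        self.stamp = None      # time d the stored sequence was computed at
        self.seq = None        # [u*(d|d), u*(d+1|d), ..., u*(d+N-1|d)]

    def receive(self, d, seq):
        """Accept a sequence only if it is newer (active packet loss)."""
        if self.stamp is None or d > self.stamp:
            self.stamp, self.seq = d, list(seq)

    def actuate(self, k, fallback=0.0):
        """Select u*(k|stamp) from the buffered sequence, if available."""
        if self.seq is None:
            return fallback                     # nothing received yet
        offset = k - self.stamp
        if 0 <= offset < self.N:
            return self.seq[offset]
        return self.seq[-1]                     # horizon exhausted: hold last

buf = ControlBuffer(n_horizon=4)
buf.receive(0, [1.0, 0.9, 0.8, 0.7])
buf.receive(0, [9.9] * 4)        # duplicate with the same stamp: ignored
assert buf.actuate(2) == 0.8     # u*(2|0)
buf.receive(3, [0.5, 0.4, 0.3, 0.2])
assert buf.actuate(3) == 0.5     # newest sequence wins
```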


3.2.2 Networked Predictive Controller Based on Terminal Convex Set Constraints

Based on the above description of the packet loss process and the modeling analysis of NCSs, this subsection gives a networked predictive controller design method for the networked control system (3.3) that makes the closed-loop NCS asymptotically stable with a certain control performance. First, the following lemma is given.

Lemma 3.1 If the spectral radius of a matrix A1 is less than or equal to 1, then the following inequality holds for any positive definite matrix S

A1T S A1 ≤ S

(3.4)

Proof Suppose λ is any eigenvalue of the matrix A1 and z is a non-zero eigenvector corresponding to λ, i.e., A1 z = λz; then

z T A1T S A1 z = λ2 z T Sz

(3.5)

From λ2 ≤ 1, it follows that

z T A1T S A1 z = λ2 z T Sz ≤ z T Sz

(3.6)

The above inequality is equivalent to

z T (A1T S A1 − S)z ≤ 0

(3.7)

Since inequality (3.7) always holds, inequality (3.4) always holds. The proof is completed.

For NCSs (3.3) with bounded packet loss, this section presents a constrained predictive controller design method containing a terminal state constraint set and a terminal cost function. This approach only drives the terminal state into an invariant set, unlike the MPC approach in the literature [5], which drives the terminal state to the zero state, i.e., the equilibrium point. The design framework of the constrained MPC algorithm, consisting of an offline part and an online optimization part, is discussed below. Packet loss causes the system state to reach the controller only intermittently, so the controller computes a predicted control sequence, which is transmitted over the network to the actuator, only at the successful arrival moments k = di, i = 1, 2, ...; at other moments it does not compute, and only transmits the most recently computed predicted control sequence sequentially over the network to the actuator. The specific optimization problem solved by the controller at the successful transmission moments k = di, i = 1, 2, ... is given below


min_{U(di)} J(di) = Σ_{l=0}^{N−1} [ ||x(di + l|di)||²_Q + ||u(di + l|di)||²_R ] + ||x(di + N|di)||²_𝚿

s.t. x(di + l + 1|di) = Ax(di + l|di) + Bu(di + l|di)
||u(di + l|di)|| ≤ umax, u(di + l|di) ∈ U(di)
U(di) = [u(di|di), u(di + 1|di), ..., u(di + N − 1|di)]
x(di + N|di) ∈ XT

(3.8)

where u(di + l|di) denotes the predicted value of the control input at moment di + l based on the system state x(di) at moment di, and is also the optimization variable in the optimization problem (3.8); x(di + l|di) denotes the predicted value of the system state at moment di + l based on the state information x(di) at moment di and satisfies x(di|di) = x(di). In addition, Q > 0 and R > 0 are the positive definite weight matrices for the state and input, respectively, N is the prediction horizon, the positive definite symmetric matrix 𝚿 is the terminal weighting matrix to be designed, and the convex set XT is the terminal state constraint set defined as

XT = {x(di + N|di) ∈ R^n | x^T(di + N|di) 𝚿 x(di + N|di) ≤ 1}

(3.9)

where 𝚿 is a positive definite symmetric matrix satisfying the following conditions [15] (A + B F)T 𝚿(A + B F) − 𝚿 + Q + F T R F < 0

(3.10)

u 2max I − F𝚿 −1 F T ≥ 0

(3.11)

Equations (3.10) and (3.11) are satisfied if and only if the following constraints hold

⎡ M          ∗   ∗   ∗ ⎤
⎢ AM + BW    M   ∗   ∗ ⎥ ≥ 0
⎢ Q^{1/2}M   0   I   ∗ ⎥
⎣ R^{1/2}W   0   0   I ⎦

(3.12)

⎡ u²max I   ∗ ⎤ ≥ 0
⎣ W^T       M ⎦

(3.13)

Therefore, the following LMI optimization problem is solved offline

min_{M>0, W} − log det(M)
s.t. (3.12) and (3.13)

(3.14)


and then the terminal weighting matrix 𝚿 = M⁻¹, the local feedback control law F = W M⁻¹, and the terminal state constraint set are obtained as

XT = {x : x^T 𝚿 x ≤ 1, 𝚿 = M⁻¹}

(3.15)
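The offline problem (3.14) requires an SDP solver. As a dependency-free numerical stand-in, one can obtain a pair (𝚿, F) satisfying the terminal condition (3.10) with equality from the discrete-time Riccati iteration; this is a sketch under assumed A, B, Q, R, not the LMI design of the text:

```python
import numpy as np

# Riccati-iteration stand-in for the offline design: with the LQR gain F,
# (A+BF)^T P (A+BF) - P + Q + F^T R F = 0, so P can serve as a terminal
# weight satisfying (3.10) with equality. A, B, Q, R are assumed values;
# the design in the text solves the LMI problem (3.14) instead.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = np.eye(2)
for _ in range(500):                  # fixed-point iteration of the DARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
F = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = F x

# Check the terminal condition (3.10) holds (as an equality) for (P, F):
Acl = A + B @ F
residual = Acl.T @ P @ Acl - P + Q + F.T @ R @ F
```

The LMI route additionally maximizes the volume of the terminal set XT through the log det objective and enforces the input bound (3.11), which this shortcut does not do.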

Remark 3.2 Equation (3.14) is an optimization problem for solving offline for the terminal weighting matrix 𝚿, the local feedback control law F, and the terminal constraint set XT, while Eq. (3.8) is essentially a finite-horizon optimization problem with a terminal state constraint set XT. In practical applications, it is always desired that the terminal state constraint set XT be as large as possible, so that the initial feasible domain of the system is as large as possible. The existing literature enlarges the terminal constraint set mainly by two methods: offline design of the terminal constraint set, which has a small computational burden but is generally more conservative, and online optimization of the terminal constraint set, which often leads to an excessive online computational burden. Here the terminal constraint set is obtained only by offline optimization.

Remark 3.3 In traditional predictive control [15], the optimization problem (3.8) must be solved online at each sampling time to obtain the optimal predictive control sequence, and the first element of this sequence is applied to the controlled plant. However, this section considers networked control systems (3.3) with bounded packet loss, so it is not necessary to solve the optimization problem (3.8) online at every sampling time. Since the controller receives packets from the sensor, i.e., the system state, only intermittently, it only needs to solve the optimization problem (3.8) online intermittently to derive the optimal predictive control sequence.

In summary, when the optimization problem (3.8) is solved online, the optimal predictive control sequence U∗(di) is obtained based on the state information x(di) at moment di, and it arrives at the buffer at moment hi. Subsequently, U∗(di) is applied to the controlled plant throughout the time interval [hi, hi+1 − 1]. It follows that the control variables applied to the controlled plant during the interval [di, hi − 1] before moment hi are u(hi + j) = u∗(hi + j|di−1),

j = di − h i , di − h i + 1, . . . , −1

(3.16)

Since the control variables u(hi + j), j = di − hi, di − hi + 1, ..., −1 are unknown when the optimization problem (3.8) is solved at moment di, u(hi + j|di), j = di − hi, di − hi + 1, ..., −1 remain optimization variables in the optimization problem (3.8). However, the resulting predictive control values u∗(hi + j|di), j = di − hi, di − hi + 1, ..., −1 do not act on the controlled plant in the time interval [di, hi − 1]. Furthermore, it is not difficult to derive the following expression for the predicted state


x(di + l|di) = A^l x(di|di) + Σ_{j=0}^{l−1} A^j B u(di + l − 1 − j|di), l = 1, 2, ..., N

(3.17)

After solving the optimization problem (3.14) offline in the form of linear matrix inequalities (LMIs) to obtain the local control law F, the terminal constraint set XT, and the terminal weighting matrix 𝚿, only the optimization problem (3.8) needs to be solved online for real-time control. According to Eq. (3.17), the N-step state prediction can be calculated by the following equation

⎡ x(di + 1|di) ⎤   ⎡ A   ⎤             ⎡ B          0   ⋯   0 ⎤ ⎡ u(di|di)         ⎤
⎢      ⋮       ⎥ = ⎢ ⋮   ⎥ x(di|di) + ⎢ ⋮          ⋱       ⋮ ⎥ ⎢        ⋮         ⎥
⎣ x(di + N|di) ⎦   ⎣ A^N ⎦             ⎣ A^{N−1}B   ⋯       B ⎦ ⎣ u(di + N − 1|di) ⎦

(3.18)

or equivalently

⎡ x̃(di + 1, di + N − 1) ⎤   ⎡ Ã   ⎤             ⎡ B̃  ⎤
⎢                        ⎥ = ⎢     ⎥ x(di|di) + ⎢     ⎥ U(di)
⎣ x(di + N|di)           ⎦   ⎣ A^N ⎦             ⎣ BN  ⎦

(3.19)
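The stacked prediction of (3.18) can be built and cross-checked numerically; A, B, and the horizon below are small assumed values:

```python
import numpy as np

# Build the stacked N-step prediction matrices of (3.18): X = Phi x0 + Gam U,
# with Phi = [A; A^2; ...; A^N] and Gam lower block-triangular in A^j B.
def prediction_matrices(A, B, N):
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gam = np.zeros((N * n, N * m))
    for i in range(N):          # block row i predicts x(k+i+1)
        for j in range(i + 1):  # block column j multiplies u(k+j)
            Gam[i*n:(i+1)*n, j*m:(j+1)*m] = (
                np.linalg.matrix_power(A, i - j) @ B)
    return Phi, Gam

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Phi, Gam = prediction_matrices(A, B, N=4)

# Cross-check against a step-by-step simulation of x(k+1) = A x + B u:
U = np.array([[1.0], [-1.0], [0.5], [0.0]])
x = np.array([[1.0], [2.0]])
X = Phi @ x + Gam @ U
x_sim = x
for k in range(4):
    x_sim = A @ x_sim + B @ U[k:k+1]
```

The last block row of X must coincide with the simulated state, which is exactly the consistency expressed by (3.18).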

Based on the above equation, the performance index in (3.8) can be converted to the following form

J(di) = ||x(di|di)||²_Q + ||Ãx(di|di) + B̃U(di)||²_Q̃ + ||U(di)||²_R̃ + ||A^N x(di|di) + BN U(di)||²_𝚿

(3.20)

where Q̃ and R̃ are the block-diagonal matrices with diagonal blocks Q and R, respectively. The term ||x(di|di)||²_Q is known and does not affect the optimal solution of the optimization problem (3.8). Therefore, for the networked control system (3.3) with bounded packet loss, the online optimization problem (3.8) with a finite prediction horizon and a terminal constraint set can be solved via the following linear matrix inequalities (LMIs)

min_{U(di), γ(di)} γ(di)

(3.21)

with the following constraints

⎡ γ(di)                     ∗     ∗     ∗    ⎤
⎢ Ãx(di|di) + B̃U(di)       Q̃⁻¹   ∗     ∗    ⎥ ≥ 0
⎢ U(di)                     0     R̃⁻¹   ∗    ⎥
⎣ A^N x(di|di) + BN U(di)   0     0     𝚿⁻¹  ⎦

(3.22)


⎡ 1                         ∗    ⎤ ≥ 0
⎣ A^N x(di|di) + BN U(di)   𝚿⁻¹  ⎦

(3.23)

⎡ u²max   ∗ ⎤ ≥ 0
⎣ U(di)   I ⎦

(3.24)

where "∗" in Eq. (3.22) denotes the corresponding element of a symmetric matrix, Eq. (3.23) represents the terminal state constraint set, and Eq. (3.24) represents the control input constraint (3.2); in this way the minimum value γ(di) of J(di) is obtained. Furthermore, if there exists an optimal solution U∗(di) = [u∗(di|di), u∗(di + 1|di), ..., u∗(hi|di), ..., u∗(hi+1 − 1|di), u∗(hi+1|di), ..., u∗(di + N − 1|di)] of the optimization problem (3.21) at moment di, then a feasible solution of the optimization problem (3.21) at moment di+1 can be chosen in the following form: Ũ(di+1) = [u∗(di+1|di), ..., u∗(di + N − 1|di), F x∗(di + N|di), ..., F x(di+1 + N − 1|di)], where x(di + N + j|di) = (A + B F)^j x∗(di + N|di), j = 1, 2, ..., di+1 − di − 1.

Based on the above description of the networked predictive control algorithm (including the offline optimization part and the online optimization part), the specific procedure for implementing the networked predictive control compensation strategy is given below.

Step 1: The local control law F, the terminal state constraint set XT, and the terminal weighting matrix 𝚿 are obtained by solving the optimization problem (3.14) offline.

Step 2: When the initial successful transmission moment d0 satisfies d0 < h0, the control applied to the controlled plant in the initial time interval [d0, h0 − 1] is u(d0 + l) = u(0), l = 0, 1, ..., h0 − d0 − 1, since no packet (i.e., no optimal predictive control sequence) has yet been successfully transmitted to the buffer.

Step 3: At the sampling moment di, based on the known state information x(di) = x(di|di), the optimization problem (3.21) is solved online to obtain the optimal predictive control sequence U∗(di). In the time interval [di, hi+1), if U∗(di) has not reached the buffer, the actuator sequentially selects from the buffer the optimal control values u∗(hi + j|di−1) of the predictive control sequence U∗(di−1) solved at moment di−1, and applies them to the plant, i.e., u(hi + j) = u∗(hi + j|di−1), j = di − hi, di − hi + 1, ..., −1; if U∗(di) reaches the buffer at moment hi, the actuator sequentially selects the optimal control values u∗(hi + j|di), j = 0, 1, ..., hi+1 − hi − 1, and applies them to the controlled plant, i.e., u(hi + j) = u∗(hi + j|di).

Step 4: At the sampling moment di+1, set di = di+1 and repeat Step 3.
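Steps 1–4 can be sketched end to end. To stay solver-free, the online problem (3.21) is replaced here by the unconstrained least-squares minimizer of the condensed cost (3.20), and the terminal set and input bound are omitted, so this is only a structural illustration of the buffer/actuator logic under two lossy channels:

```python
import numpy as np

# End-to-end sketch: plan at successful sensor->controller arrivals, ship
# the plan through a lossy controller->buffer channel, and let the actuator
# index into the newest buffered sequence by its timestamp.
# All numbers (model, horizon, loss rates) are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Rb = np.array([[0.1]])
N = 8

Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gam = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gam[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Wq = np.eye(2 * N)                   # sqrt of Q~ (state weight Q = I here)
Wr = np.kron(np.eye(N), np.sqrt(Rb))  # sqrt of R~

def plan(x):
    """Unconstrained minimizer of ||Wq(Phi x + Gam U)||^2 + ||Wr U||^2."""
    Hmat = np.vstack([Wq @ Gam, Wr])
    rhs = -np.concatenate([Wq @ Phi @ x, np.zeros((N, 1))])
    U, *_ = np.linalg.lstsq(Hmat, rhs, rcond=None)
    return U.ravel()                 # [u*(d|d), ..., u*(d+N-1|d)]

rng = np.random.default_rng(1)
x = np.array([[1.0], [0.0]])
seq, stamp = plan(x), 0              # buffer contents and their timestamp
for k in range(1, 80):
    sc_ok = rng.random() > 0.3       # sensor->controller packet arrives
    ca_ok = rng.random() > 0.3       # controller->buffer packet arrives
    if sc_ok and ca_ok:              # a fresh plan reaches the buffer
        seq, stamp = plan(x), k
    j = min(k - stamp, N - 1)        # pick the element matching time k
    u = np.array([[seq[j]]])
    x = A @ x + B @ u
```

Compared with the real algorithm, the missing pieces are exactly the LMIs (3.22)–(3.24): the terminal set guaranteeing recursive feasibility and the input bound (3.2).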


3.2.3 Feasibility and Stability Analysis of Networked Predictive Controllers

In networked predictive controller design, feasibility is closely related to stability and is the basis of the stability analysis; sometimes the stability conditions for predictive control can be derived directly from the feasibility conditions. Thus, the following conclusion can be drawn.

Theorem 3.1 For a networked control system (3.3) with bounded packet loss, choose the prediction horizon N larger than the packet loss upper bound Nd + Nh, i.e., Nd + Nh < N. Suppose there exist matrices M > 0 and W such that the optimization problem (3.14) is solvable, and on this basis there exist U∗(di), γ∗(di) such that the optimization problem (3.21)–(3.24) is solvable. Then, in the time interval [di, hi+1), if the control sequence U∗(di) has not reached the buffer, the actuator sequentially selects the optimal control values u∗(hi + j|di−1) of the predictive control sequence U∗(di−1) in the buffer solved at moment di−1 and applies them to the plant, i.e., u(hi + j) = u∗(hi + j|di−1), j = di − hi, di − hi + 1, ..., −1; if U∗(di) reaches the buffer at moment hi, the actuator sequentially selects the optimal control values u∗(hi + j|di), j = 0, 1, ..., hi+1 − hi − 1, and applies them to the controlled plant. This networked predictive control strategy makes the networked control system (3.3) asymptotically stable.

Proof Without loss of generality, assume that the packets at moments di, di+1, di+2 (di < di+1 < di+2, di+1 − di ≤ Nd, di+2 − di+1 ≤ Nd) are successfully transmitted from the sensor to the controller, while all packets between moments di, di+1, and di+2 are lost; similarly, assume that the packets at moments hi, hi+1, hi+2 (hi < hi+1 < hi+2, hi+1 − hi ≤ Nh, hi+2 − hi+1 ≤ Nh) are successfully transmitted from the controller to the buffer, while all packets between moments hi, hi+1, and hi+2 are lost.
If optimal solutions exist for both the offline optimization problem (3.14) and the online optimization problems (3.21)–(3.24), the optimal performance index takes the form

J∗(di) = ∑_{l=0}^{N−1} [ ||x∗(di + l|di)||²_Q + ||u∗(di + l|di)||²_R ] + ||x∗(di + N|di)||²_Ψ   (3.25)

with the optimal predictive control sequence and optimal state sequence

U∗(di) = [ u∗(di|di) u∗(di + 1|di) ··· u∗(hi|di) u∗(hi + 1|di) ··· u∗(di + N − 1|di) ],
X∗(di) = [ x∗(di|di) x∗(di + 1|di) ··· x∗(hi|di) x∗(hi + 1|di) ··· x∗(di + N|di) ].   (3.26)

According to the proposed networked predictive control method, when packet loss occurs between moments hi and hi+1, the predictive control


sequence [u∗(hi|di), u∗(hi + 1|di), ..., u∗(hi+1 − 1|di)] is sequentially applied to the controlled plant over the time interval [hi, hi+1 − 1]. Since x∗(di + N|di) ∈ X_T and X_T is an invariant set, a feasible solution to the optimization problem (3.21)–(3.24) exists at moment di+1, i.e.,

U˜(di+1) = [ u∗(di+1|di), ..., u∗(di + N − 1|di), F x∗(di + N|di), ..., F x(di+1 + N − 1|di) ],
where x(di + N + j|di) = (A + BF)^j x∗(di + N|di), j = 1, 2, ..., di+1 − di − 1.   (3.27)

Since there is no uncertainty or external disturbance in the networked control system (3.3), under the action of the feasible control sequence U˜(di+1) it holds that

x(di+1 + l|di+1) = x∗(di+1 + l|di), l = 0, 1, ..., di + N − di+1   (3.28)

and

x(di + N + L|di+1) = (A + BF)^L x∗(di + N|di), L = 1, 2, ..., di+1 − di.   (3.29)

The corresponding state sequence is

[ x∗(di+1|di), ..., x∗(di + N|di), (A + BF)x∗(di + N|di), ..., (A + BF)^{di+1−di} x∗(di + N|di) ].   (3.30)

Since the terminal state x∗(di + N|di) ∈ X_T, it only needs to be proved that the terminal state x(di+1 + N|di+1) still lies in the terminal state constraint set X_T under the action of the control sequence [F x∗(di + N|di), ..., F x∗(di+1 + N − 1|di)]. If there exists a local control law F such that the offline optimization problem (3.14) is solvable, then all eigenvalues of the matrix A + BF lie inside the unit circle. Based on Lemma 3.1, it follows that

(A + BF)^T M⁻¹ (A + BF) ≤ M⁻¹.   (3.31)

Further, from di+1 − di ≥ 1, it easily follows that

[(A + BF)^{di+1−di}]^T M⁻¹ (A + BF)^{di+1−di} ≤ (A + BF)^T M⁻¹ (A + BF) ≤ M⁻¹.   (3.32)

Since


x^T(di+1 + N|di+1) M⁻¹ x(di+1 + N|di+1)
= (x∗(di + N|di))^T [(A + BF)^{di+1−di}]^T M⁻¹ (A + BF)^{di+1−di} x∗(di + N|di)
≤ (x∗(di + N|di))^T M⁻¹ x∗(di + N|di),   (3.33)

it is shown that at moment di+1, taking x∗(di+1|di) as the initial state, under the action of the control sequence U˜(di+1) the terminal state x(di+1 + N|di+1) still belongs to the terminal state constraint set X_T, i.e., x(di+1 + N|di+1) ∈ X_T. On the other hand, from Eqs. (3.32) and (3.33) it can be seen that the control inputs W M⁻¹ x∗(di + N + j|di), j = 0, 1, ..., di+1 − di − 1, satisfy the control input constraint (3.2). Thus the control sequence U˜(di+1) is a feasible solution of Eqs. (3.22), (3.23), and (3.24).

The stability of the algorithm proposed in the previous subsection is analyzed next. Let J∗(di) and J∗(di+1) be the optimal values of the performance index at moments di and di+1, respectively, and let J(di+1) be the performance index corresponding to the feasible control sequence U˜(di+1) (the non-optimal counterpart of U∗(di+1)), expressed as

J(di+1) = ∑_{l=di+1−di}^{N−1} [ ||x∗(di + l|di)||²_Q + ||u∗(di + l|di)||²_R ]
        + ∑_{l=0}^{di+1−di−1} [ ||x∗(di + N + l|di)||²_Q + ||F x∗(di + N + l|di)||²_R ]
        + ||x∗(di+1 + N|di)||²_Ψ   (3.34)

According to Eq. (3.10), for 0 ≤ l ≤ di+1 − di − 1, it holds that

||x∗(di + N + l|di)||²_Q + ||F x∗(di + N + l|di)||²_R ≤ ||x∗(di + N + l|di)||²_Ψ − ||x∗(di + N + l + 1|di)||²_Ψ   (3.35)

Summing both sides of the above inequality from l = 0 to l = di+1 − di − 1 gives

∑_{l=0}^{di+1−di−1} { ||x∗(di + N + l|di)||²_Q + ||F x∗(di + N + l|di)||²_R }
≤ ∑_{l=0}^{di+1−di−1} { ||x∗(di + N + l|di)||²_Ψ − ||x∗(di + N + l + 1|di)||²_Ψ }
= ||x∗(di + N|di)||²_Ψ − ||x∗(di+1 + N|di)||²_Ψ   (3.36)

From Eqs. (3.34) and (3.36), it follows that

J(di+1) ≤ ∑_{l=di+1−di}^{N−1} [ ||x∗(di + l|di)||²_Q + ||u∗(di + l|di)||²_R ] + ||x∗(di + N|di)||²_Ψ ≤ J∗(di)   (3.37)

According to the optimality principle, it follows that

J∗(di+1) ≤ J(di+1) ≤ J∗(di)   (3.38)

Since the optimal value of the performance index is chosen as the Lyapunov function, the closed-loop system is asymptotically stable. The proof is completed.

Remark 3.4 Since the predictive control problem for networked control systems with multiple packet losses is considered, the feasibility and stability analysis of the networked predictive control algorithm differs from that of the traditional predictive control algorithm. Specifically, in the case of multiple packet losses, the optimal values of the performance index J∗(di), J∗(di+1), ... at the successful transmission moments di, di+1, ... are chosen as the Lyapunov function values, so that J∗(di+1) ≤ J∗(di) holds, rather than requiring J∗(k + 1) ≤ J∗(k) at every sampling time k. However, if the optimal value of the performance index satisfies J∗(di+1) ≤ J∗(di) at each successful transmission moment, then within the time interval [di, di+1] the performance index satisfies J∗(l + 1) ≤ J∗(l), l = di, di + 1, ..., di+1.

Remark 3.5 A conclusion such as Theorem 3.1 can be drawn only if the controlled plant is an exact discrete linear time-invariant model without noise. In practical applications, however, it is difficult to establish an exact mathematical model of the controlled plant due to system nonlinearity, model mismatch, and the presence of external disturbances. Therefore, if the controlled plant is time-varying, uncertain, or noisy, Theorem 3.1 no longer holds.

3.2.4 Numerical Simulation

In this subsection, the networked predictive control method with a terminal state constraint set is applied to compensate the effects of packet loss for a linear nominal controlled plant. To verify the effectiveness of this networked predictive control algorithm, the control problem of an inverted pendulum system (see Fig. 3.3) is considered [16], and the proposed algorithm is compared with the algorithms of Gao et al. [16, 17]. In the inverted pendulum system shown in Fig. 3.3, M and m denote the masses of the cart and the pendulum, respectively; l is the distance from the rotation point of the pendulum to its center of gravity; x is the position of the cart; u is the force applied to the cart along the x direction; and θ is the angle by which the pendulum deviates from the vertical. It is also assumed that the pendulum is a thin

Fig. 3.3 Inverted pendulum system

rod and that its surface is smooth and free of friction. Thus, by employing Newton's second law, the following kinematic model is obtained

(M + m)ẍ + ml θ̈ cos θ − ml θ̇² sin θ = u,
ml ẍ cos θ + (4/3)ml² θ̈ − mgl sin θ = 0   (3.39)

where g is the acceleration of gravity. The state variable of the system is z = [z₁ z₂]^T = [θ θ̇]^T, and by linearizing the kinematic model (3.39) at the equilibrium point z = [0 0]^T, the following state-space model is obtained

ż(t) = [0, 1; 3(M + m)g/(l(4M + m)), 0] z(t) + [0; −3/(l(4M + m))] u(t)   (3.40)

where M = 8.0 kg, m = 2.0 kg, l = 0.5 m, and g = 9.8 m/s². Discretizing the system model (3.40) with a sampling period Ts = 0.03 s, the following discrete-time system model is obtained

x(k + 1) = [1.0078, 0.0301; 0.5202, 1.0078] x(k) + [−0.0001; −0.0053] u(k)   (3.41)
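As a sketch (not code from the book), the discrete-time model of the form (3.41) can be reproduced by zero-order-hold discretization of the continuous model (3.40) with the stated parameters; the augmented matrix-exponential construction is a standard technique, and the numerical values are only expected to match to the displayed precision.

```python
import numpy as np
from scipy.linalg import expm

# Parameters of the inverted pendulum example.
M, m, l, g, Ts = 8.0, 2.0, 0.5, 9.8, 0.03

# Continuous-time linearized model (3.40): z' = A z + B u.
A = np.array([[0.0, 1.0],
              [3 * (M + m) * g / (l * (4 * M + m)), 0.0]])
B = np.array([[0.0],
              [-3.0 / (l * (4 * M + m))]])

# Zero-order-hold discretization via the augmented matrix exponential:
# expm([[A, B], [0, 0]] * Ts) = [[Ad, Bd], [0, I]].
aug = np.zeros((3, 3))
aug[:2, :2], aug[:2, 2:] = A, B
Md = expm(aug * Ts)
Ad, Bd = Md[:2, :2], Md[:2, 2:]

poles = np.linalg.eigvals(Ad)   # approximately 1.1329 and 0.8827
```

Since one pole of Ad lies outside the unit circle, the discretized plant is open-loop unstable, consistent with the text.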

From the system model, the poles of the system are 1.1329 and 0.8827, respectively, so this system is unstable. The weight matrices for the states and inputs in the given performance index are Q = I₂ and R = 0.1, the prediction horizon is N = 14, and the initial system state is x(0) = [0.5 −0.3]^T. Substituting the above parameters into the offline LMI optimization problem (3.14) and performing the calculations, the following local control law F, terminal constraint set X_T, and terminal weight matrix Ψ are obtained:

F = [221.8937 53.4029], X_T = {x ∈ R^n | x^T M⁻¹ x ≤ 1}, Ψ = M⁻¹,
M = [0.0075, −0.0309; −0.0309, 0.1292]

According to the above networked predictive control (NMPC) approach, specific simulation results are given below. Figure 3.4 shows the network packet loss in the sensor-controller channel, whose maximum number of consecutive packet losses is 7, while Fig. 3.5 shows the network packet loss in the controller-actuator channel, whose maximum number of consecutive packet losses is 7. In Figs. 3.4 and 3.5, packet state "1" indicates that the packet reaches the receiver successfully, while packet state "0" indicates that it does not, i.e., the packet is lost in transmission. Figures 3.6 and 3.7 compare the state trajectories of the inverted pendulum system, where the solid line represents the state trajectory obtained by the proposed networked predictive control method, and the dashed line represents the state trajectory obtained by the control method proposed in the literature [16]. From Figs. 3.6 and 3.7, it can be seen that the proposed networked predictive control method makes the controlled plant asymptotically stable and achieves better system performance in the case of bounded packet loss. In contrast, the control method proposed in the literature [16] cannot stabilize the controlled plant (the system state oscillates), which means that it does not compensate well for the effects of packet loss. The control input trajectories are given in Fig. 3.8: the control input of the method in the literature [16] remains active throughout the simulation, while the control input of the proposed NMPC gradually tends to zero. These results show that the designed networked predictive control method is much more effective than the control method in the literature [16].
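For reproducing packet-loss patterns like those in Figs. 3.4 and 3.5, a random binary loss sequence with a hard bound on consecutive losses can be generated as follows. This helper is an illustrative assumption (name, loss probability, and seed invented), not code from the book.

```python
import random

def bounded_loss_sequence(n, max_consecutive, p_loss=0.5, seed=0):
    """Binary sequence: 1 = packet received, 0 = packet lost, with at most
    `max_consecutive` losses in a row (the bound Nd or Nh in the text)."""
    rng = random.Random(seed)
    seq, run = [], 0
    for _ in range(n):
        lost = rng.random() < p_loss and run < max_consecutive
        seq.append(0 if lost else 1)
        run = run + 1 if lost else 0
    return seq

# 100 samples with at most 7 consecutive losses, as in Figs. 3.4 and 3.5.
seq = bounded_loss_sequence(100, max_consecutive=7)
longest = max(len(r) for r in "".join(map(str, seq)).split("1"))
```

By construction, `longest` never exceeds the stated bound of 7.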
Fig. 3.4 Packet loss of sensor-controller channel
Fig. 3.5 Packet loss of controller-actuator channel

Fig. 3.6 Trajectory of system state x₁

Fig. 3.7 Trajectory of system state x₂

Fig. 3.8 Comparison of control inputs

Table 3.1 Upper bound of packet loss number based on different methods

Method                      Upper bound of packet loss number (Nd + Nh)
Method by reference [16]    7
Method by reference [17]    9
The proposed method         14

To further verify the effectiveness of this networked predictive control method compared with the methods in the literature [16, 17], the specific comparison results are shown in Table 3.1. From Table 3.1, it can be seen that the designed networked predictive control method is able to tolerate a larger number of consecutive packet losses than the existing results [16, 17]. That is, the proposed networked predictive control method still stabilizes the inverted pendulum system when the upper bound of packet loss is Nd + Nh = 14, while the methods in the literature [16, 17] fail to stabilize the inverted pendulum system.

3.3 Robust Predictive Control of Networked Control Systems With Control Input Quantization

In NCSs, the controller and the controlled plant exchange data over an unreliable shared network. The restricted network bandwidth makes it necessary to quantize the data and reduce the packet size before transmission, which inevitably affects the stability and performance of the system. In this section, a robust predictive control method for networked control systems is presented for the case where the forward channel is subject to control input quantization.


3.3.1 Modelling of Networked Control Systems

A networked control system with the architecture shown in Fig. 3.9 is considered. This networked control system is composed of a sensor, a controller, an encoder, an unreliable network, a decoder, and a controlled plant. The controlled plant can be described by the following discrete linear time-invariant state-space model

x(k + 1) = Ax(k) + Bv(k)   (3.42)

where x(k) ∈ R^n and v(k) ∈ R^m are the system state and control input, respectively, and A and B are system matrices of appropriate dimensions. Assume that the system matrix A is unstable and that (A, B) is controllable. Considering the quantization effect of the unreliable network, the system model (3.42) can be further described by

v(k) = f(u(k))   (3.43)

f(u(k)) = [ f₁(u₁(k)) f₂(u₂(k)) ... f_m(u_m(k)) ]^T   (3.44)

u(k) = g(x(k))   (3.45)

where g(·) denotes the unquantized feedback control law, u_j(k) denotes the jth component of the controller output u, and f_j(·) denotes the jth symmetric quantizer, i.e., f_j(−u_j(k)) = −f_j(u_j(k)), j = 1, ..., m. The quantizers used here are a class of static time-invariant quantizers with a simple structure, namely logarithmic quantizers [8, 9]. First, the definition of a logarithmic quantizer and its main properties are given.

Definition 3.1 A quantizer is called a logarithmic quantizer if its set of quantization levels can be expressed in the following form

Fig. 3.9 Networked control system with control input quantization


V = {±v_i : v_i = ρ^i v₀, i = ±1, ±2, ...} ∪ {±v₀} ∪ {0}, 0 < ρ < 1, v₀ > 0   (3.46)

where each quantization level v_i corresponds to a feasible region, and each quantizer f_j(·) maps each feasible region to a quantization level. These feasible regions have no intersection, and their union is the entire feasible domain R of the control input; that is, they form a partition of the feasible domain R. The mapping f_j(·) can be defined as

f_j(u_j) = v_i^(j),       if v_i^(j)/(1 + δ_j) < u_j ≤ v_i^(j)/(1 − δ_j), u_j > 0,
f_j(u_j) = 0,             if u_j = 0,
f_j(u_j) = −f_j(−u_j),    if u_j < 0   (3.47)

where

δ_j = (1 − ρ_j)/(1 + ρ_j), δ = diag{δ₁ δ₂ ... δ_m}, j = 1, ..., m   (3.48)
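As an illustrative sketch (not code from the book), the scalar logarithmic quantizer (3.46)–(3.48) can be implemented directly, and the sector bound |f(u) − u| ≤ δ|u| implied by (3.47) checked numerically; the parameter values v₀ = 1 and ρ = 0.5 are arbitrary.

```python
import math

def log_quantize(u, v0=1.0, rho=0.5):
    """Logarithmic quantizer (3.47): maps u to the level v_i = rho**i * v0
    whose feasible region v_i/(1+delta) < u <= v_i/(1-delta) contains u."""
    delta = (1 - rho) / (1 + rho)
    if u == 0.0:
        return 0.0
    if u < 0.0:
        return -log_quantize(-u, v0, rho)   # odd symmetry: f(-u) = -f(u)
    # The region containing u corresponds to the unique integer i with
    # u*(1 - delta) <= rho**i * v0 < u*(1 + delta).
    i = math.floor(math.log(u * (1 - delta) / v0, rho))
    return v0 * rho ** i

delta = (1 - 0.5) / (1 + 0.5)
for u in (0.3, 1.7, 5.0, -2.2):
    q = log_quantize(u)
    # Sector-bound property: the quantization error stays within delta*|u|.
    assert abs(q - u) <= delta * abs(u) + 1e-12
```

The sector bound is what later allows the quantizer to be replaced by the multiplicative uncertainty (I + Δ(k)) in (3.50).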

According to the literature [8], the density of the quantizer f_j(·) is expressed as

n_j = lim sup_{ε→0} #g_j[ε] / (−ln ε)   (3.49)

where #g_j[ε] denotes the number of quantization levels of (3.46) in the interval [ε, 1/ε]. Clearly, according to the above definition, the quantization density n_j grows logarithmically with the interval [ε, 1/ε]. If the number of quantization levels is finite, n_j defined by Eq. (3.49) equals 0. As n_j decreases, the number of quantization levels decreases and the quantizer becomes "coarser". From the literature [8], for a logarithmic quantizer the quantization density in Eq. (3.49) simplifies to n_j = 2[ln(ρ_j⁻¹)]⁻¹. This relation shows that the smaller the value of ρ_j, the smaller the quantization density; accordingly, ρ_j instead of n_j is viewed as the quantization density of the quantizer f_j(·). Furthermore, the single parameter δ_j is related to the quantization density ρ_j through (3.48). Finally, the logarithmic quantizer can be represented by Fig. 3.10.

Remark 3.6 In communication networks, quantization of data is always present. On the one hand, most controllers in the network are digital, and the presence of A/D and D/A converters means that data transmission is performed with limited accuracy. On the other hand, the restricted bandwidth of the communication network limits the network transmission capacity. Therefore, in order to reduce the usage of the limited bandwidth, the data must be quantized before being transmitted through the network. It is proved in the literature [7] that the coarsest, i.e., least dense, static quantizer that quadratically stabilizes a linear discrete time-invariant system is the logarithmic quantizer, and that the coarsest quantization density

Fig. 3.10 Logarithmic quantizer (sector bounded between v_j = (1 − δ_j)u_j and v_j = (1 + δ_j)u_j)

is related explicitly to the unstable poles of the system. Therefore, the logarithmic quantizer is used in this chapter.

According to the sector bound approach proposed in the literature [8], the relationship between the control input and the controller output can be expressed as

v(k) = f(u(k)) = (I + Δ(k))u(k)   (3.50)

Based on Eqs. (3.42) and (3.50), the model of a networked control system with control input quantization can be described as

x(k + 1) = Ax(k) + Bv(k) = Ax(k) + B(I + Δ(k))u(k)   (3.51)

where

Δ(k) = diag{Δ₁(k), Δ₂(k), ..., Δ_m(k)}, |Δ_j(k)| ≤ δ_j, Δ(k) ∈ [−δ, δ]   (3.52)

From the literature [8], the system described by Eqs. (3.51) and (3.52) is equivalent to the system described by Eqs. (3.42)–(3.45). In this way, a networked control system with quantized control inputs is transformed into a linear system with structured uncertainty. Therefore, this section designs robust predictive controllers that make the uncertain system (3.51) asymptotically stable under a given quantization density, and further obtains the coarsest quantization density that still guarantees system stability and a certain level of control performance. The next subsection presents the design method of networked robust predictive controllers to overcome the uncertainty caused by quantization of the control inputs.
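The sector-bound reformulation (3.50)–(3.52) can be exercised in a small simulation. This is a sketch with an arbitrarily chosen scalar plant and gain (not from the book), where the quantizer-induced uncertainty Δ(k) ∈ [−δ, δ] is sampled randomly at each step.

```python
import random

# Scalar plant x+ = a*x + b*v with unstable a, static feedback u = K*x.
# Quantization acts as v = (1 + Delta)*u with |Delta| <= delta (sector bound).
a, b, K = 1.2, 1.0, -0.7           # illustrative values, not from the text
rho = 0.5
delta = (1 - rho) / (1 + rho)       # delta = 1/3, from (3.48)

random.seed(0)
x = 1.0
traj = [x]
for _ in range(50):
    u = K * x
    Delta = random.uniform(-delta, delta)   # sector uncertainty of (3.52)
    v = (1 + Delta) * u                     # quantized input, per (3.50)
    x = a * x + b * v
    traj.append(x)
# |a + b*(1 + Delta)*K| <= 1.2 - 0.7*(2/3) ~ 0.733 < 1, so x contracts.
```

Because the closed-loop factor stays below one for every admissible Δ, the state decays despite the quantization uncertainty.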


3.3.2 Stability Analysis and Robust Predictive Controller Design

It is well known that the model predictive control strategy offers good control performance and robustness: it can effectively overcome system uncertainty and handle various constraints on the controlled and manipulated variables. In this section, for the networked control system (3.51) with quantization in the forward channel, a networked robust predictive control strategy is introduced to overcome the uncertainty caused by quantization of the control input. The networked predictive control design can usually be cast as a moving horizon optimization based on a prediction model and an infinite-horizon quadratic performance index. First, the prediction model can be represented by

x(k + l|k) = Ax(k + l − 1|k) + Bv(k + l − 1|k) = Ax(k + l − 1|k) + B(I + Δ(k))u(k + l − 1|k), l ≥ 1   (3.53)

where x(k + l|k) denotes the prediction of the system state at moment k + l based on the information known at moment k, and u(k + l|k) denotes the predicted controller output at moment k + l based on the information known at moment k. The predicted output sequence U₀^∞ of the controller can then be expressed as

U₀^∞(k) = [ u(k|k), U₁^∞(k) ],
U₁^∞(k) = {u(k + l|k) ∈ R^m : u(k + l|k) = K(k)x(k + l|k), l ≥ 1}   (3.54)

where u(k|k) is the first element of the predicted controller output sequence U₀^∞, i.e., u(k) = u(k|k), and is transmitted to the decoder at moment k through the shared network, while the other predicted controller outputs u(k + l|k) (l ≥ 1) in U₀^∞ are computed from the feedback control law K(k) at moment k. Thus, based on the infinite-horizon quadratic performance index J(k), the networked robust predictive control problem of solving for the predicted output sequence U₀^∞ at moment k can be transformed into the following online optimization problem

min_{U₀^∞} max_{Δ(k)} J(k) = min_{U₀^∞} max_{Δ(k)} ∑_{l=0}^{∞} [ ||x(k + l|k)||²_Q + ||(I + Δ(k))u(k + l|k)||²_R ]
                           = min_{U₀^∞} max_{Δ(k)} [J₀(k) + J₁(k)]   (3.55)

with

J₀(k) = x^T(k|k)Qx(k|k) + [(I + Δ(k))u(k|k)]^T R[(I + Δ(k))u(k|k)]

J₁(k) = ∑_{l=1}^{∞} { x^T(k + l|k)Qx(k + l|k) + [(I + Δ(k))u(k + l|k)]^T R[(I + Δ(k))u(k + l|k)] }

where the symmetric positive definite matrices Q = Q^T > 0 and R = R^T > 0 are the state and input weight matrices, and u(k|k) is the optimization variable in the performance index. It is assumed that the system state x(k) = x(k|k) is measurable at every moment k and that the control objective is to regulate the initial system state x(0) to the origin. To simplify the algorithm, hard constraints on the input and output of the system are not considered here.

In summary, the optimization problem (3.55) is a min–max optimization problem with infinitely many optimization variables; a linear state feedback control strategy is therefore used to obtain the predicted controller output, i.e., u(k + l|k) = K(k)x(k + l|k), l ≥ 1. To keep the optimization problem tractable and to ensure asymptotic stability of the system, an upper bound on the performance index J(k) is constructed, with an inequality constraint added so that upper bounds on the performance index J(k) and the controller output u(k|k) are easily derived. Specifically, the following quadratic function is first defined

V(x(k + l|k)) = x^T(k + l|k)P(k)x(k + l|k), P(k) > 0, l ≥ 1   (3.56)
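For a fixed feedback law and a fixed quantization uncertainty Δ, the infinite-horizon cost of the form (3.55) reduces to x₀^T P x₀ with P obtained from a discrete Lyapunov equation, which is the idea behind bounding J(k) by the quadratic function (3.56). The sketch below uses invented matrices, not values from the book.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative data (not from the book).
A = np.array([[1.2, 0.0],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.0]])
K = np.array([[-0.6, 0.0]])
Delta = 0.2                      # one fixed realization with |Delta| <= delta
Q = np.eye(2)
R = np.array([[0.1]])

# Closed loop x(l+1) = A_cl x(l) with quantized input v = (1 + Delta) K x.
A_cl = A + (1 + Delta) * (B @ K)
# Per-step cost x^T (Q + (1 + Delta)^2 K^T R K) x.
Qhat = Q + (1 + Delta) ** 2 * (K.T @ R @ K)

# Cost-to-go matrix: A_cl^T P A_cl - P + Qhat = 0.
P = solve_discrete_lyapunov(A_cl.T, Qhat)

x0 = np.array([1.0, -0.5])
J_exact = float(x0 @ P @ x0)

# Compare with a truncated version of the infinite series.
x, J_sum = x0.copy(), 0.0
for _ in range(500):
    J_sum += float(x @ Qhat @ x)
    x = A_cl @ x
```

The truncated sum agrees with x₀^T P x₀, illustrating why a quadratic V as in (3.56) can upper bound the tail cost J₁(k).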

and required to satisfy the following inequality condition, which guarantees monotonicity of the performance index and stability:

V(k + l + 1|k) − V(k + l|k) < −( ||x(k + l|k)||²_Q + ||(I + Δ(k))u(k + l|k)||²_R )   (3.57)

If the system (3.51) is asymptotically stable, then x(∞|k) = 0 and V(∞|k) = 0. Based on the above analysis, summing Eq. (3.57) from l = 1 to l = ∞ yields

max_{Δ(k)} J₁(k) < V(x(k + 1|k))   (3.58)

Then the optimization problem for the robust predictive control of system (3.51) can be further described as

min_{u(k|k), K(k), P(k)} γ(k)   (3.59)

subject to the constraint (3.57) and the mandatory constraint

max_{Δ(k)} { x^T(k|k)Qx(k|k) + [(I + Δ(k))u(k|k)]^T R[(I + Δ(k))u(k|k)] + x^T(k + 1|k)P(k)x(k + 1|k) } < γ(k), Δ(k) ∈ [−δ, δ]   (3.60)


where the positive scalar γ(k) is the upper bound of J₀(k) + V(x(k + 1)).

Remark 3.7 When inequality (3.57) holds, it not only implies V(k + l + 1|k) − V(k + l|k) < 0, which guarantees the robust stability of the networked control system (3.51), but also guarantees the boundedness of the system performance index, i.e., max_{Δ(k)}[J₀(k) + J₁(k)] < γ(k). Therefore, inequality (3.57) is not a strongly conservative constraint condition. It is worth noting that the data transmitted via the shared network is the controller output u(k|k) rather than the predicted controller output sequence U₀^∞. The objective of this subsection is to design the networked robust predictive controller by solving the optimization problems (3.57), (3.59), and (3.60) such that the networked control system (3.51) is asymptotically stable under the influence of control input quantization and the minimum of the quadratic performance index bound is obtained.

3.3.2.1 Stability Analysis of NCSs

In this subsection, stability conditions for the networked control system (3.51) are analyzed. The following lemma is required in the derivation of the main results.

Lemma 3.2 [18] Let D and E be real matrices with appropriate dimensions, F a symmetric matrix, and R a positive definite symmetric matrix satisfying F^T F ≤ R. Then there exists a scalar ε > 0 such that the following inequality holds

D F E + E^T F^T D^T ≤ ε D D^T + ε⁻¹ E R E^T   (3.61)

Based on Lemma 3.2, Theorem 3.2 gives sufficient conditions ensuring that the networked control system (3.51) is asymptotically stable with a certain control performance.

Theorem 3.2 Assume that the state of the networked control system (3.51) is measurable. For a given quantization parameter δ of the logarithmic quantizer, a feedback gain K(k), and a controller output u(k|k), if there exist matrices M(k) > 0, S(k) > 0, P(k) > 0 and a scalar γ(k) > 0 such that the following optimization problem (3.62) with infinite-horizon performance index J(k) is feasible, i.e.,

min_{M(k), S(k), P(k)} γ(k)   (3.62)

with the following constraints satisfied

[−P(k) + Q + δK^T(k)M(k)δK(k), ∗, ∗, ∗; P(k)[A + BK(k)], −P(k), ∗, ∗; ⋯] < 0

Theorem 3.3 Assume that the state of the networked control system (3.51) is measurable. For a given quantization parameter δ of the logarithmic quantizer, if there exist matrices M(k) > 0, S(k) > 0, X(k) > 0, Y(k), a controller output u(k|k), and a scalar γ(k) such that the following optimization problem (3.75) with infinite-horizon performance index J(k) is feasible, i.e.,

min_{M(k), S(k), X(k), Y(k), u(k|k)} γ(k)   (3.75)

and the following constraints are satisfied

[−X(k), ∗, ∗, ∗, ∗; AX(k) + BY(k), −X(k) + BδM(k)δB^T, ∗, ∗, ∗; ⋯, ⋯, −R⁻¹ + δM(k)δ, ∗, ∗; ⋯] < 0   (3.76)

⋯ X(k) > 0 and Y(k) such that inequality (3.76) holds. On the other hand, inequality (3.77) is a constraint condition that guarantees a certain control performance of the closed-loop system. In addition, unlike other


robust control methods [8, 9], the controller output u(k) and the feedback control law K(k) also depend on the current state x(k|k).

Next, Theorem 3.4 reduces the conservativeness of Theorem 3.3 by introducing an auxiliary variable, i.e., a free weighting matrix, which results in better control performance.

Theorem 3.4 Assume a networked control system (3.51) with measurable states and a given quantization parameter δ for the logarithmic quantizer. If there exist matrices M(k) > 0, S(k) > 0, X(k) > 0, G(k), Z(k), a controller output u(k|k), and a scalar γ(k) > 0 such that the following optimization problem (3.88) with infinite-horizon performance index J(k) is feasible, i.e.,

min_{G(k), M(k), S(k), X(k), Z(k), u(k|k)} γ(k)   (3.88)

with (3.77) and the following constraint

[−G(k) − G^T(k) + X(k), ∗, ∗, ∗, ∗; AG(k) + BZ(k), −X(k) + BδM(k)δB^T, ∗, ∗, ∗; Z(k), δM(k)δB^T, −R⁻¹ + δM(k)δ, ∗, ∗; ⋯] < 0   (3.89)

⋯ Since X(k) > 0, it is easy to obtain [G(k) − X(k)]^T(−X(k))⁻¹[G(k) − X(k)] ≤ 0, which further gives

−G^T(k)X⁻¹(k)G(k) ≤ −G(k) − G^T(k) + X(k)   (3.91)

where the matrix G(k) is non-singular. Taking inequality (3.91) into account, if inequality (3.89) holds, then the following conclusion also holds

[−G^T(k)X⁻¹(k)G(k), ∗, ∗, ∗, ∗; AG(k) + BZ(k), −X(k) + BδM(k)δB^T, ∗, ∗, ∗; Z(k), δM(k)δB^T, −R⁻¹ + δM(k)δ, ∗, ∗; ⋯] < 0

⋯ If there exist matrices M(k) > 0, S(k) > 0, T(k) > 0, X(k) > 0, G(k), Z(k), a controller output u(k|k), a diagonal matrix 0 < δ(k) < I, and a scalar γ(k) > 0 such that the following optimization problem (3.94) with infinite-horizon performance index J(k) is feasible, i.e.,

min_{G(k), M(k), N(k), S(k), T(k), X(k), Z(k), u(k|k), δ(k)} γ(k)   (3.94)

with the following constraint

[−G(k) − G^T(k) + X(k), ∗, ∗, ∗, ∗, ∗; A(k)G(k) + BZ(k), −X(k), ∗, ∗, ∗, ∗; Z(k), 0, −R⁻¹, ∗, ∗, ∗; ⋯] < 0

⋯ The matrix P_i must satisfy the following Lyapunov equation

A_di^T P_i A_di − P_i = −Q̂_i


5 Distributed Predictive Control for Local Performance Index

Table 5.1 Symbol meaning

Symbol              Interpretation
−i                  Subscripts of all downstream systems of subsystem S_i
+i                  Subscripts of all upstream systems of subsystem S_i
x_i^p(k + s|k)      Predicted state sequence of subsystem S_i calculated by C_i at time k, x_i^p(k + s|k) = x_{i,i}(k + s|k)
u_i^p(k + s|k)      Optimized control sequence of subsystem S_i calculated by C_i at time k
x̂_i(k + s|k)        Set state sequence of subsystem S_i calculated by C_i at time k, x̂_i(k + s|k) = x̂_{i,i}(k + s|k)
û_i(k + s|k)        Set control sequence of subsystem S_i calculated by C_i at time k
x_i^f(k + s|k)      Feasible predicted state sequence of subsystem S_i defined by C_i at time k, x_i^f(k + s|k) = x_{i,i}^f(k + s|k)
u_i^f(k + s|k)      Feasible control sequence of subsystem S_i defined by C_i at time k, u_i^f(k + s|k) = u_{i,i}^f(k + s|k)

where Q̂_i = Q_i + K_i^T R_i K_i. Define

P = block-diag{P₁, P₂, ..., P_m}, Q = block-diag{Q₁, Q₂, ..., Q_m},
R = block-diag{R₁, R₂, ..., R_m}, A_d = block-diag{A_d1, A_d2, ..., A_dm}.

We can then obtain

A_d^T P A_d − P = −Q̂,

where Q̂ = Q + K^T R K > 0.

To obtain the predicted state sequence x_i^p(k + s|k) of subsystem S_i under the control decision sequence u_i(k + s|k), the system evolution model is first derived. Since all subsystem controllers are updated synchronously, each subsystem does not know the current states and control sequences of the other subsystems. Therefore, the MPC prediction model of subsystem S_i at moment k requires the assumed state sequences {x̂_j(k|k), x̂_j(k + 1|k), ..., x̂_j(k + N|k)}, and can be expressed as

x_i^p(k + l|k) = A_ii^l x_i(k|k) + ∑_{h=1}^{l} A_ii^{l−h} B_ii u_i(k + h − 1|k) + ∑_{j∈P+i} ∑_{h=1}^{l} A_ii^{l−h} A_ij x̂_j(k + h − 1|k)   (5.37)
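The convolution form (5.37) is equivalent to rolling the local dynamics forward with the assumed neighbor trajectories. The sketch below, with made-up matrices (not from the book), checks the closed form against the recursive rollout.

```python
import numpy as np

def predict(Aii, Bii, Aij_list, xhat_list, xi0, u_seq):
    """Evaluate (5.37) recursively: local rollout of subsystem i driven by
    its own inputs and the assumed trajectories of upstream neighbors j."""
    xp = [xi0]
    for l in range(len(u_seq)):
        nxt = Aii @ xp[-1] + Bii @ u_seq[l]
        for Aij, xhat in zip(Aij_list, xhat_list):
            nxt = nxt + Aij @ xhat[l]
        xp.append(nxt)
    return xp

# Random small example with one upstream neighbor.
rng = np.random.default_rng(1)
Aii, Bii = 0.5 * np.eye(2), rng.standard_normal((2, 1))
Aij = 0.1 * rng.standard_normal((2, 2))
xi0 = rng.standard_normal(2)
u_seq = [rng.standard_normal(1) for _ in range(4)]
xhat = [rng.standard_normal(2) for _ in range(4)]

# Closed form of (5.37) for l = 4, checked against the recursion.
l = 4
closed = np.linalg.matrix_power(Aii, l) @ xi0
for h in range(1, l + 1):
    Ap = np.linalg.matrix_power(Aii, l - h)
    closed = closed + Ap @ (Bii @ u_seq[h - 1]) + Ap @ (Aij @ xhat[h - 1])

recursive = predict(Aii, Bii, [Aij], [xhat], xi0, u_seq)[l]
```

The two evaluations coincide, confirming the index convention u_i(k + h − 1|k), x̂_j(k + h − 1|k) inside the sums.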

5.3 Constrained Distributed Predictive Control with Guaranteed Stability

Given x_i^p(k|k) = x_i(k|k), let the set control sequence of subsystem S_i be

û_i(k + s − 1|k) = u_i(k + s − 1|k − 1),         s = 1, 2, ..., N − 1,
û_i(k + s − 1|k) = K_i x_i^p(k + N − 1|k − 1),   s = N.   (5.38)

Letting the assumed state sequence of each subsystem x̂_i equal the value predicted at moment k − 1, the response of the closed-loop system under feedback control is obtained as

x̂_i(k + s − 1|k) = x_i^p(k + s − 1|k − 1), s = 1, 2, ..., N,
x̂_i(k + N|k) = A_di x_i^p(k + N − 1|k − 1) + ∑_{j∈P+i} A_ij x_j^p(k + N − 1|k − 1)   (5.39)

It is worth noting that since x̂_i(k + N|k) is only an intermediate variable, x̂_i(k + N|k) does not equal the value obtained by substituting û_i(k + N − 1|k) into (5.37).

In MPC, recursive feasibility and stability are very important properties, and the same is true in DMPC. To enlarge the feasible region, a terminal state constraint is included in each MPC problem to guarantee that the terminal controller can stabilize the system inside the terminal set. To define such a terminal set, an assumption is needed and the corresponding lemma is presented.

Assumption 5.2 The block-diagonal matrix A_d = block-diag{A_d1, A_d2, ..., A_dm} and the off-diagonal matrix A_o = A_c − A_d satisfy the inequality

A_o^T P A_o + A_o^T P A_d + A_d^T P A_o < Q̂/2,

where Q̂ = Q + K^T R K > 0.

Assumptions 5.1 and 5.2 are proposed to aid the design of the terminal set. Assumption 5.2 quantifies the coupling between subsystems: when the coupling is weak enough, the subsystems can be controlled by the algorithm proposed below. This assumption is not necessary, and some systems that do not satisfy it may also be stabilized by this DMPC algorithm; designing more relaxed assumption conditions is left as future work.

Lemma 5.1 If Assumptions 5.1 and 5.2 hold, then for any positive scalar c, the set

Ω(c) = { x ∈ R^{n_x} : ||x||_P ≤ c }


5 Distributed Predictive Control for Local Performance Index

is a positively invariant domain of attraction for the closed-loop system $x(k+1) = A_c x(k)$, and there exists a sufficiently small scalar $\varepsilon$ such that for any $x \in \Omega(\varepsilon)$, $Kx$ is a feasible input, i.e., $u \in \mathbb{R}^{n_u}$.

Proof Define $V(k) = \|x(k)\|_P^2$. Differencing $V(k)$ along the closed-loop system $x(k+1) = A_c x(k)$, we have

$$\begin{aligned} \Delta V(k) &= x^T(k) A_c^T P A_c x(k) - x^T(k) P x(k) \\ &= x^T(k)\big(A_d^T P A_d - P + A_o^T P A_o + A_o^T P A_d + A_d^T P A_o\big) x(k) \\ &\le -x^T(k) \hat{Q} x(k) + \tfrac{1}{2} x^T(k) \hat{Q} x(k) \\ &\le 0, \end{aligned} \tag{5.40}$$

which holds for all states $x(k) \in \Omega(c) \setminus \{0\}$. That is, all state trajectories starting in $\Omega(c)$ remain within $\Omega(c)$ and converge asymptotically to the origin. Since $P$ is positive definite, $\Omega(\varepsilon)$ can be shrunk toward $0$; thus, there exists $\varepsilon > 0$ small enough that $Kx \in U$ for all $x \in \Omega(\varepsilon)$.

The MPC terminal constraint for subsystem $S_i$ can be defined as

$$\Omega_i(\varepsilon) = \left\{ x_i \in \mathbb{R}^{n_{x_i}} : \|x_i\|_{P_i} \le \varepsilon/\sqrt{m} \right\} \tag{5.41}$$

Clearly, if $x \in \Omega_1(\varepsilon) \times \cdots \times \Omega_m(\varepsilon)$, then the system is asymptotically stable, since

$$\|x_i\|_{P_i}^2 \le \frac{\varepsilon^2}{m}, \ \forall i \in P \quad \Longrightarrow \quad \sum_{i \in P} \|x_i\|_{P_i}^2 \le \varepsilon^2,$$

i.e., $x \in \Omega(\varepsilon)$. Assuming that at time $k_0$ the states of all subsystems satisfy $x_{i,k_0} \in \Omega_i(\varepsilon)$ and the control law $K_i x_{i,k}$ is applied, then, by Lemma 5.1, the system is asymptotically stable. In summary, as long as each MPC is designed to push the state of the corresponding subsystem $S_i$ into the set $\Omega_i(\varepsilon)$, the system can be stabilized to the origin by a feedback control law. The method of switching from MPC to terminal control once the state reaches a suitable neighborhood of the origin is called dual-mode MPC [4]; the algorithm proposed in this chapter is therefore also called the dual-mode DMPC algorithm. Moreover, in the DMPC algorithm each subsystem controller uses the estimates from time $k-1$ to predict the future state, and these deviate from the estimates at the current time. It is therefore difficult to construct a feasible solution at time $k$, and a consistency constraint must be added to limit this error.
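The dual-mode switching rule described above can be sketched in a few lines. Here `solve_mpc` is a hypothetical stand-in for the optimization in Problem 5.1; only the switching logic is illustrated:

```python
import numpy as np

def weighted_norm(x, P):
    """||x||_P = sqrt(x^T P x)."""
    return float(np.sqrt(x @ P @ x))

def dual_mode_input(x, P, K, eps, solve_mpc):
    """Dual-mode MPC: inside Omega(eps) = {x : ||x||_P <= eps} apply the
    terminal feedback u = K x (feasible by Lemma 5.1); outside, solve the MPC."""
    if weighted_norm(x, P) <= eps:
        return K @ x
    return solve_mpc(x)

# Toy usage with a stand-in MPC law.
P = np.eye(2)
K = -0.4 * np.eye(2)
mpc_stub = lambda x: -0.6 * x            # placeholder for Problem 5.1's solution
u_inside = dual_mode_input(np.array([0.1, 0.1]), P, K, 0.2, mpc_stub)
u_outside = dual_mode_input(np.array([1.0, 1.0]), P, K, 0.2, mpc_stub)
```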


Next, the optimization problem for each subsystem MPC is stated.

Problem 5.1 For subsystem $S_i$, let $\varepsilon > 0$ satisfy Lemma 5.1 and let the update time be $k \ge 1$. Given $x_i(k)$ and $\hat{x}_j(k+s|k)$, $s = 1, 2, \ldots, N$, $\forall j \in P_{+i}$, determine the control sequence $u_i(k+s|k) \in U_i$, $s = 0, 1, \ldots, N-1$, that minimizes the performance index

$$J_i(k) = \left\| x_i^p(k+N|k) \right\|_{P_i}^2 + \sum_{s=0}^{N-1} \left( \left\| x_i^p(k+s|k) \right\|_{Q_i}^2 + \left\| u_i(k+s|k) \right\|_{R_i}^2 \right) \tag{5.42}$$

subject to

$$\sum_{l=1}^{s} \alpha_{s-l} \left\| x_i^p(k+l|k) - \hat{x}_i(k+l|k) \right\|_2 \le \frac{\xi \kappa \varepsilon}{2\sqrt{m}\, m_2}, \quad s = 1, 2, \ldots, N-1 \tag{5.43}$$

$$\left\| x_i^p(k+N|k) - \hat{x}_i(k+N|k) \right\|_{P_i} \le \frac{\kappa \varepsilon}{2\sqrt{m}} \tag{5.44}$$

$$\left\| x_i^p(k+s|k) \right\|_{P_i} \le \left\| x_i^{\mathrm{f}}(k+s|k) \right\|_{P_i} + \frac{\varepsilon}{\mu N \sqrt{m}}, \quad s = 1, 2, \ldots, N \tag{5.45}$$

$$u_i(k+s|k) \in U_i, \quad s = 0, 1, \ldots, N-1 \tag{5.46}$$

$$x_i^p(k+N|k) \in \Omega_i(\varepsilon/2) \tag{5.47}$$

In the above constraints,

$$m_2 = \max_{i \in P} \{\text{number of elements in } P_{+i}\} \tag{5.48}$$

$$\alpha_l = \max_{i \in P} \max_{j \in P_i} \lambda_{\max}^{1/2} \left\{ \left( A_{ii}^{l} A_{ij} \right)^T P_j \left( A_{ii}^{l} A_{ij} \right) \right\}, \quad l = 0, 1, \ldots, N-1 \tag{5.49}$$

The constants $0 < \kappa < 1$ and $0 < \xi \le 1$ are design parameters that will be specified in the following subsections. Constraints (5.43) and (5.44) are consistency constraints: they require that the predicted states of the system at the current time do not differ significantly from the values assumed at the previous time. These constraints are the key to ensuring that the problem is feasible at every update time. Constraint (5.45) is a stability constraint, used to prove the stability of the LCO-DMPC algorithm in Problem 5.1, where $\mu > 0$ is a design parameter that satisfies Lemma 5.1 and will be specified below. Here $x_i^{\mathrm{f}}(k+s|k)$ is a feasible state sequence, namely the solution of Eq. (5.37) under the initial condition $x_i(k)$, the assumed states $\hat{x}_j(k+s|k)$, $j \in P_{+i}$, and the feasible control sequence $u_i^{\mathrm{f}}(k+s-1|k)$ defined as follows.


$$u_i^{\mathrm{f}}(k+s-1|k) = \begin{cases} u_i^p(k+s-1|k-1), & s = 1, 2, \ldots, N-1, \\ K_i\, x_i^{\mathrm{f}}(k+N-1|k), & s = N. \end{cases} \tag{5.50}$$
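The gains $\alpha_l$ of Eq. (5.49) depend only on the model blocks and the terminal weights, so they can be precomputed offline. A minimal sketch (toy scalar data, not from the book):

```python
import numpy as np

def alpha(l, A_blocks, P_blocks, neighbors):
    """alpha_l of Eq. (5.49): worst-case weighted gain of A_ii^l A_ij over
    all subsystem pairs. A_blocks[(i, i)] = A_ii, A_blocks[(i, j)] = A_ij;
    neighbors[i] lists the j indexed in the inner maximization."""
    worst = 0.0
    for i, js in neighbors.items():
        Aii_l = np.linalg.matrix_power(A_blocks[(i, i)], l)
        for j in js:
            M = Aii_l @ A_blocks[(i, j)]
            lam = np.max(np.linalg.eigvalsh(M.T @ P_blocks[j] @ M))
            worst = max(worst, float(np.sqrt(max(lam, 0.0))))
    return worst

# Toy example: two scalar subsystems, S1 influenced by S2.
A = {(1, 1): np.array([[0.5]]), (1, 2): np.array([[0.1]]),
     (2, 2): np.array([[0.5]])}
P_w = {2: np.array([[2.0]])}
a0 = alpha(0, A, P_w, {1: [2]})   # = sqrt(0.1^2 * 2)
a1 = alpha(1, A, P_w, {1: [2]})   # halves, since A_11^1 = 0.5
```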

It is worth noting that although Lemma 5.1 ensures the feasibility of the terminal controller on $\Omega(\varepsilon)$, the terminal constraint set defined here is $\Omega_i(\varepsilon/2)$ rather than $\Omega_i(\varepsilon)$. As the analysis in the next section shows, the terminal set defined in this way guarantees the feasibility of the closed-loop system.
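Membership in the shrunken terminal set is a single weighted-norm test. A small sketch using $\varepsilon = 0.2$, $m = 4$, and a scalar $P_i$ value that appears in Table 5.2:

```python
import numpy as np

def in_terminal_set(x_i, P_i, eps, m, shrink=1.0):
    """Test x_i in Omega_i(shrink*eps) = {x_i : ||x_i||_{P_i} <= shrink*eps/sqrt(m)},
    cf. (5.41); shrink = 0.5 gives the tightened set Omega_i(eps/2) used in
    the terminal constraint (5.47)."""
    return bool(np.sqrt(x_i @ P_i @ x_i) <= shrink * eps / np.sqrt(m))

P1 = np.array([[5.36]])                   # P_1 from Table 5.2
inside = in_terminal_set(np.array([0.02]), P1, eps=0.2, m=4, shrink=0.5)
outside = in_terminal_set(np.array([0.10]), P1, eps=0.2, m=4, shrink=0.5)
```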

5.3.2.2 Subsystem MPC Solution Algorithm

Before describing the DMPC algorithm, an assumption about the initialization is made first.

Assumption 5.3 At the initial time $k_0$, there exists a feasible control law $u_i(k_0+s|k_0) \in U_i$, $s = 1, 2, \ldots, N-1$, $i \in P$, such that the solution of the system $x(s+1+k_0) = A x(s+k_0) + B u(s+k_0|k_0)$, i.e., $x_i^p(\cdot|k_0)$, satisfies $x_i^p(N+k_0) \in \Omega_i(\varepsilon/2)$, and $J_i(k_0)$ is bounded. Moreover, for each subsystem, the initial control input $u_i(\cdot|k_0)$ is known to the other subsystems.

Assumption 5.3 sidesteps the problem of constructing an initial feasible solution in a distributed way. In fact, for many optimization problems, finding an initial feasible solution is itself a hard problem, so many centralized MPC schemes likewise assume the existence of one [4]. The following DMPC algorithm is obtained under the condition that Assumption 5.3 is satisfied.

Algorithm 5.2 (DMPC algorithm with constraints) The dual-mode MPC control law for an arbitrary subsystem $S_i$ is computed in the following steps.

Step 1: Initialization. ➀ Initialize $x(k_0)$ and $u_i(k_0+s|k_0)$, $s = 1, 2, \ldots, N$, such that Assumption 5.3 is satisfied. ➁ At time $k_0$, if $x(k_0) \in \Omega(\varepsilon)$, then for all $k \ge k_0$ apply the feedback control $u_i(k) = K_i x_i(k)$. Otherwise, calculate $\hat{x}_i(k_0+s+1|k_0+1)$ according to Eq. (5.37) and send it to the downstream subsystem controllers.

Step 2: Communication at time $k+1$, $k \ge k_0$. Measure $x_i(k)$; send $x_i(k)$ and $\hat{x}_i(k+s+1|k)$ to $S_j$, $j \in P_{-i}$, and receive $x_j(k)$ and $\hat{x}_j(k+s|k)$.

Step 3: Update the control law at time $k+1$, $k \ge k_0$. ➀ If $x(k) \in \Omega(\varepsilon)$, apply the terminal control $u_i(k) = K_i x_i(k)$; otherwise proceed to ➁. ➁ Solve Optimization Problem 5.1 to obtain $u_i(k|k)$ and use it as the control law. ➂ Calculate $\hat{x}_i(k+s+1|k+1)$ from Eq. (5.37) and send it to the downstream subsystems $S_j$, $j \in P_{-i}$.

Step 4: Set $k+1 \to k$ and return to Step 2.
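The per-period information flow of the algorithm above (measure, exchange, solve, broadcast new predictions) can be sketched as follows. The local optimizer is replaced by a stand-in feedback law, so this illustrates only the one-communication-per-period structure, not Problem 5.1 itself:

```python
import numpy as np

def run_cycle(x, A, B, K, x_hat_prev, horizon=3):
    """One DMPC cycle. x[i]: current subsystem states; x_hat_prev[i]:
    trajectories predicted (and broadcast) in the previous period. Each
    controller uses only its neighbors' previous-cycle predictions, then
    broadcasts its own new predicted trajectory (cf. Steps 2-3)."""
    u, x_hat_new = {}, {}
    for i in x:                                  # controllers act in parallel
        u[i] = K[i] @ x[i]                       # stand-in for Problem 5.1
        traj, xi = [], x[i]
        for s in range(horizon):                 # predict own states forward
            coupling = sum((A[(i, j)] @ x_hat_prev[j][s]
                            for j in x if j != i and (i, j) in A),
                           np.zeros_like(xi))
            xi = A[(i, i)] @ xi + B[i] @ (K[i] @ xi) + coupling
            traj.append(xi)
        x_hat_new[i] = traj
    return u, x_hat_new

# Toy usage: S1 and S2 of system (5.73), with the gains of Table 5.2.
A = {(1, 1): np.array([[0.62]]), (1, 2): np.array([[-0.12]]),
     (2, 2): np.array([[0.58]])}
B = {1: np.array([[0.34]]), 2: np.array([[0.33]])}
K = {1: np.array([[-0.35]]), 2: np.array([[-0.25]])}
x = {1: np.array([1.0]), 2: np.array([1.0])}
x_hat_prev = {1: [np.zeros(1)] * 3, 2: [np.zeros(1)] * 3}
u, x_hat = run_cycle(x, A, B, K, x_hat_prev)
```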


Algorithm 5.2 assumes that all local controllers $C_i$, $i \in P$, have access to the full system state $x(k)$. This assumption is made only because dual-mode control requires all controllers to switch simultaneously when $x(k) \in \Omega(\varepsilon)$, where $\Omega(\varepsilon)$ is defined in Lemma 5.1. The following subsections show that the LCO-DMPC algorithm drives the state $x(k+s)$ into $\Omega(\varepsilon)$ after a finite number of updates. Thus, if $\Omega_i(\varepsilon)$ is small enough, MPC can be used throughout without the local controllers knowing all the states; in that case, of course, asymptotic stability to the origin is no longer guaranteed, only that the controllers steer the state into the small set $\Omega(\varepsilon)$. The feasibility and stability of the LCO-DMPC algorithm are analyzed in detail in the next section.

5.3.3 Performance Analysis

5.3.3.1 Feasibility of MPC Iterations for Each Subsystem

The main result of this part is that if the system is feasible at the initial time and Assumption 5.2 holds, then for any subsystem $i$ and any time $k \ge 1$, $u_i(\cdot|k) = u_i^{\mathrm{f}}(\cdot|k)$ is a feasible solution of Problem 5.1, i.e., $(u_i^{\mathrm{f}}(\cdot|k), x_i^{\mathrm{f}}(\cdot|k))$ satisfies the consistency constraints (5.43) and (5.44), the control input constraint (5.46), and the terminal constraint (5.47). Lemma 5.2 guarantees that $\hat{x}_i(k+N|k) \in \Omega_i(\varepsilon'/2)$, where $\varepsilon' = (1-\kappa)\varepsilon$. Lemma 5.3 gives a sufficient condition for $\| x_i^{\mathrm{f}}(k+s|k) - \hat{x}_i(k+s|k) \|_{P_i} \le \kappa\varepsilon/(2\sqrt{m})$, $i \in P$. Lemma 5.4 guarantees the control input constraint. Finally, combining Lemmas 5.2–5.4, it is concluded that for any $i \in P$ the control input and state pair $(u_i^{\mathrm{f}}(\cdot|k), x_i^{\mathrm{f}}(\cdot|k))$ is a feasible solution of Problem 5.1 at every time $k \ge 1$.

Lemma 5.2 If Assumption 5.1 and Assumption 5.2 hold, $x(k_0) \in \mathcal{X}$, Problem 5.1 has a feasible solution at times $1, 2, \ldots, k-1$ for any $k \ge 0$, and $\hat{x}_i(k+N-1|k-1) \in \Omega_i(\varepsilon/2)$ holds for any $i \in P$, then

$$\hat{x}_i(k+N-1|k) \in \Omega_i(\varepsilon/2)$$

and, moreover, $\hat{x}_i(k+N|k) \in \Omega_i(\varepsilon'/2)$, provided $P_i$ and $\hat{Q}_i$ satisfy

$$\max_{i \in P}(\rho_i) \le 1 - \kappa \tag{5.51}$$

where $\varepsilon' = (1-\kappa)\varepsilon$ and $\rho_i = \lambda_{\max}^{1/2}\left( \big( \hat{Q}_i P_i^{-1} \big)^T \hat{Q}_i P_i^{-1} \right)$.

Proof Since Problem 5.1 has a feasible solution at the moment k−1, it follows through Eq. (5.39) that


$$\left\| \hat{x}_i(k+N-1|k) \right\|_{P_i} = \left\| x_i^p(k+N-1|k-1) \right\|_{P_i} \le \frac{\varepsilon}{2\sqrt{m}}.$$

Furthermore,

$$\hat{x}_i(k+N|k) = A_{di}\, x_i^p(k+N-1|k-1) + \sum_{j \in P_{+i}} A_{ij}\, x_j^p(k+N-1|k-1) = A_{di}\, \hat{x}_i(k+N-1|k) + \sum_{j \in P_{+i}} A_{ij}\, \hat{x}_j(k+N-1|k),$$

so that

$$\left\| \hat{x}_i(k+N|k) \right\|_{P_i} = \Big\| A_{di}\, \hat{x}_i(k+N-1|k) + \sum_{j \in P_{+i}} A_{ij}\, \hat{x}_j(k+N-1|k) \Big\|_{P_i}.$$

In conjunction with Assumption 5.2, $A_o^T P A_o + A_o^T P A_d + A_d^T P A_o < \hat{Q}/2$, we obtain

$$\left\| \hat{x}_i(k+N|k) \right\|_{P_i} \le \left\| \hat{x}_i(k+N-1|k) \right\|_{\hat{Q}_i/2} \le \lambda_{\max}^{1/2}\left( \big( \hat{Q}_i P_i^{-1} \big)^T \hat{Q}_i P_i^{-1} \right) \left\| \hat{x}_i(k+N-1|k) \right\|_{P_i} \le (1-\kappa)\frac{\varepsilon}{2\sqrt{m}}.$$

Lemma 5.3 If Assumptions 5.1–5.3 hold, $x(k_0) \in \mathcal{X}$, and Problem 5.1 has a solution at every update time $l = 1, 2, \ldots, k-1$ for any $k \ge 0$, then

$$\left\| x_i^{\mathrm{f}}(k+s|k) - \hat{x}_i(k+s|k) \right\|_{P_i} \le \frac{\kappa\varepsilon}{2\sqrt{m}} \tag{5.52}$$

holds at times $s = 1, 2, \ldots, N$ for any $i \in P$, provided the parameter condition

$$\frac{m_2}{\xi\, \lambda_{\min}(P)} \sum_{l=0}^{N-2} \alpha_l \le 1 \tag{5.53}$$

is satisfied, where $\alpha_l$ is defined in Eq. (5.49). Then the feasible control inputs $u_i^{\mathrm{f}}(k+s|k)$ and states $x_i^{\mathrm{f}}(k+s|k)$ satisfy constraints (5.43) and (5.44).


Proof First prove Eq. (5.52). Since a feasible solution of Problem 5.1 exists at times $1, 2, \ldots, k-1$, according to Eqs. (5.37), (5.38), and (5.50), for any $l = 1, 2, \ldots, N-1$ the feasible state is given by

$$\begin{aligned} x_i^{\mathrm{f}}(k+l|k) &= A_{ii}^{l}\, x_i^{\mathrm{f}}(k|k) + \sum_{h=1}^{l} A_{ii}^{l-h} B_{ii}\, u_i^{\mathrm{f}}(k+h-1|k) + \sum_{j \in P_{+i}} \sum_{h=1}^{l} A_{ii}^{l-h} A_{ij}\, \hat{x}_j(k+h-1|k) \\ &= A_{ii}^{l} \Big( A_{ii}\, x_i(k-1|k-1) + B_{ii}\, u_i(k-1|k-1) + \sum_{j \in P_{+i}} A_{ij}\, x_j(k-1|k-1) \Big) \\ &\quad + \sum_{h=1}^{l} A_{ii}^{l-h} B_{ii}\, \hat{u}_i(k+h-1|k) + \sum_{j \in P_{+i}} \sum_{h=1}^{l} A_{ii}^{l-h} A_{ij}\, x_j^p(k+h-1|k-1) \end{aligned} \tag{5.54}$$

In the same way,

$$\begin{aligned} \hat{x}_i(k+l|k) &= A_{ii}^{l}\, x_i(k|k-1) + \sum_{h=1}^{l} A_{ii}^{l-h} B_{ii}\, u_i(k+h-1|k-1) + \sum_{j \in P_{+i}} \sum_{h=1}^{l} A_{ii}^{l-h} A_{ij}\, \hat{x}_j(k+h-1|k-1) \\ &= A_{ii}^{l} \Big( A_{ii}\, x_i(k-1|k-1) + B_{ii}\, u_i(k-1|k-1) + \sum_{j \in P_{+i}} A_{ij}\, \hat{x}_j(k-1|k-1) \Big) \\ &\quad + \sum_{h=1}^{l} A_{ii}^{l-h} B_{ii}\, \hat{u}_i(k+h-1|k) + \sum_{j \in P_{+i}} \sum_{h=1}^{l} A_{ii}^{l-h} A_{ij}\, \hat{x}_j(k+h-1|k-1) \end{aligned} \tag{5.55}$$

Subtracting Eq. (5.55) from Eq. (5.54) and using the definition (5.49), the difference between the feasible state and the assumed state sequence is bounded as

$$\begin{aligned} \left\| x_i^{\mathrm{f}}(k+s|k) - \hat{x}_i(k+s|k) \right\|_{P_i} &= \Big\| \sum_{l=1}^{s} A_{ii}^{s-l} A_{ij} \big( x_i^p(k+l-1|k-1) - \hat{x}_i(k+l-1|k-1) \big) \Big\|_{P_i} \\ &\le \sum_{l=1}^{s} \left\| A_{ii}^{s-l} A_{ij} \big( x_i^p(k+l-1|k-1) - \hat{x}_i(k+l-1|k-1) \big) \right\|_{P_i} \\ &\le \sum_{l=1}^{s} \alpha_{s-l} \left\| x_i^p(k+l-1|k-1) - \hat{x}_i(k+l-1|k-1) \right\|_2 \end{aligned} \tag{5.56}$$


Assume that the subsystem $S_g$ maximizes

$$\sum_{l=1}^{s} \alpha_{s-l} \left\| x_i^p(k-1+l|k-1) - \hat{x}_i(k-1+l|k-1) \right\|_2, \quad i \in P.$$

Then the following bound is obtained from Eq. (5.56):

$$\left\| x_i^{\mathrm{f}}(k+s|k) - \hat{x}_i(k+s|k) \right\|_{P_i} \le m_2 \sum_{l=1}^{s} \alpha_{s-l} \left\| x_g^p(k+l-1|k-1) - \hat{x}_g(k+l-1|k-1) \right\|_2.$$

Since $x_i^p(\cdot|k-1)$ satisfies the constraint (5.43) at all times $l = 1, 2, \ldots, N-1$, it follows that

$$\left\| x_i^{\mathrm{f}}(k+s|k) - \hat{x}_i(k+s|k) \right\|_{P_i} \le m_2 \cdot \frac{\xi\kappa\varepsilon}{2\sqrt{m}\, m_2} = \frac{\xi\kappa\varepsilon}{2\sqrt{m}} \le \frac{\kappa\varepsilon}{2\sqrt{m}}. \tag{5.57}$$

Thus, Eq. (5.52) holds for all $l = 1, 2, \ldots, N-1$. When $l = N$, it follows that

$$x_i^{\mathrm{f}}(k+N|k) = A_{d,i}\, x_i^{\mathrm{f}}(k+N-1|k) + \sum_{j \in P_{+i}} A_{ij}\, \hat{x}_j(k+N-1|k) \tag{5.58}$$

$$\hat{x}_i(k+N|k) = A_{d,i}\, \hat{x}_i(k+N-1|k) + \sum_{j \in P_{+i}} A_{ij}\, \hat{x}_j(k+N-1|k) \tag{5.59}$$

Subtracting the two equations gives

$$x_i^{\mathrm{f}}(k+N|k) - \hat{x}_i(k+N|k) = A_{d,i} \big( x_i^{\mathrm{f}}(k+N-1|k) - \hat{x}_i(k+N-1|k) \big) \tag{5.60}$$

which completes the proof of Eq. (5.52).

Next, it is proved that, provided Eq. (5.52) holds, the feasible solution $x_i^{\mathrm{f}}(k+s|k)$ satisfies constraints (5.43) and (5.44). For $l = 1, 2, \ldots, N-1$, substituting $x_i^{\mathrm{f}}(k+s|k)$ into the left-hand side of (5.43) and using (5.52) gives

$$\sum_{l=1}^{s} \alpha_{s-l} \left\| x_i^{\mathrm{f}}(k+l|k) - \hat{x}_i(k+l|k) \right\|_2 \le \frac{1}{\lambda_{\min}(P_i)} \sum_{l=1}^{s} \alpha_{s-l} \left\| x_i^{\mathrm{f}}(k+l|k) - \hat{x}_i(k+l|k) \right\|_{P_i} \le \left( \frac{m_2}{\xi\, \lambda_{\min}(P)} \sum_{l=1}^{s} \alpha_{s-l} \right) \frac{\xi \kappa\varepsilon}{2\sqrt{m}\, m_2} \tag{5.61}$$

Therefore, if


$$\frac{m_2}{\xi\, \lambda_{\min}(P)} \sum_{l=1}^{s} \alpha_{s-l} \le 1 \tag{5.62}$$

holds, the state $x_i^{\mathrm{f}}(k+s|k)$, $s = 1, 2, \ldots, N-1$, satisfies the constraint (5.43). Finally, for $l = N$, $x_i^{\mathrm{f}}(k+N|k)$ satisfies the constraint (5.44):

$$\left\| x_i^{\mathrm{f}}(k+N|k) - \hat{x}_i(k+N|k) \right\|_{P_i} \le \frac{\kappa\varepsilon}{2\sqrt{m}}.$$

This completes the proof.

It will be shown next that at time $k$, if constraints (5.51) and (5.53) are satisfied, then $x_i^{\mathrm{f}}(k+s|k)$ and $u_i^{\mathrm{f}}(k+s|k)$, $s = 1, 2, \ldots, N$, form a feasible solution of Problem 5.1.

Lemma 5.4 If Assumptions 5.1–5.3 hold, $x(k_0) \in \mathbb{R}^{n_x}$, conditions (5.51) and (5.53) are satisfied, and Problem 5.1 has a solution at every update time $l = 1, 2, \ldots, k-1$ for any $k \ge 0$, then $u_i^{\mathrm{f}}(k+s|k) \in U_i$ for any $s = 1, 2, \ldots, N-1$.

Proof Since a feasible solution of Problem 5.1 exists at times $l = 1, 2, \ldots, k-1$, and $u_i^{\mathrm{f}}(k+s-1|k) = u_i^p(k+s-1|k-1)$ for $s = 1, 2, \ldots, N-1$, it suffices to show that $u_i^{\mathrm{f}}(k+N-1|k) \in U_i$. Since the selection of $\varepsilon$ satisfies the conditions of Lemma 5.1, when $x \in \Omega(\varepsilon)$ there exists $K_i x_i \in U_i$ for any $i \in P$; hence a sufficient condition for $u_i^{\mathrm{f}}(k+N-1|k) \in U_i$ is $x_i^{\mathrm{f}}(k+N-1|k) \in \Omega_i(\varepsilon)$. Combining Lemmas 5.2 and 5.3 and using the triangle inequality yields

$$\left\| x_i^{\mathrm{f}}(k+N-1|k) \right\|_{P_i} \le \left\| x_i^{\mathrm{f}}(k+N-1|k) - \hat{x}_i(k+N-1|k) \right\|_{P_i} + \left\| \hat{x}_i(k+N-1|k) \right\|_{P_i} \le \frac{\kappa\varepsilon}{2\sqrt{m}} + \frac{\varepsilon}{2\sqrt{m}} \le \frac{\varepsilon}{\sqrt{m}} \tag{5.63}$$

It follows that $x_i^{\mathrm{f}}(k+N-1|k) \in \Omega_i(\varepsilon)$. This completes the proof.

Lemma 5.5 If Assumptions 5.1–5.3 hold, $x(k_0) \in \mathbb{R}^{n_x}$, conditions (5.51) and (5.53) are satisfied, and Problem 5.1 has a solution at every update time $l = 1, 2, \ldots, k-1$ for any $k \ge 0$, then $x_i^{\mathrm{f}}(k+N|k) \in \Omega_i(\varepsilon/2)$, $\forall i \in P$.

Proof Combining Lemmas 5.2 and 5.3 and using the triangle inequality, one obtains

$$\left\| x_i^{\mathrm{f}}(k+N|k) \right\|_{P_i} \le \left\| x_i^{\mathrm{f}}(k+N|k) - \hat{x}_i(k+N|k) \right\|_{P_i} + \left\| \hat{x}_i(k+N|k) \right\|_{P_i} \le \frac{\kappa\varepsilon}{2\sqrt{m}} + \frac{(1-\kappa)\varepsilon}{2\sqrt{m}} = \frac{\varepsilon}{2\sqrt{m}}, \tag{5.64}$$

which shows that the terminal state constraint is satisfied for all $i \in P$. This proves the lemma.


Theorem 5.3 If Assumptions 5.1–5.3 hold, $x(k_0) \in \mathbb{R}^{n_x}$, and the system satisfies constraints (5.43), (5.44) and (5.46) at time $k_0$, then for any $i \in P$ the control law $u_i^{\mathrm{f}}(\cdot|k)$ defined by Eqs. (5.50) and (5.37) and the state $x_i^{\mathrm{f}}(\cdot|k)$ are a feasible solution of Problem 5.1 at every time $k$.

Proof The theorem is proved by induction.

First, in the case $k = 1$, the state sequence $x_i^p(\cdot|1) = x_i^{\mathrm{f}}(\cdot|1)$ satisfies the dynamic equation (5.37), the stability constraint (5.45), and the consistency constraints (5.43) and (5.44). Obviously,

$$\hat{x}_i(1|1) = x_i^p(1|0) = x_i^{\mathrm{f}}(1|1) = x_i(1), \quad i \in P,$$

and, furthermore,

$$x_i^{\mathrm{f}}(1+s|1) = x_i^p(1+s|0), \quad s = 1, 2, \ldots, N-1.$$

Therefore $x_i^{\mathrm{f}}(N|1) \in \Omega_i(\varepsilon/2)$. By the invariance of $\Omega(\varepsilon)$ under the terminal controller and Lemma 5.1, the terminal state and control input constraints are also satisfied. Thus the case $k = 1$ is proved.

Now suppose that $u_i(\cdot|l) = u_i^{\mathrm{f}}(\cdot|l)$, $l = 1, 2, \ldots, k-1$, is a feasible solution; it will be shown that $u_i^{\mathrm{f}}(\cdot|k)$ is a feasible solution at time $k$. As before, the consistency constraint (5.43) is clearly satisfied, and $x_i^{\mathrm{f}}(\cdot|k)$ is the corresponding state sequence satisfying the dynamic equations. Since Problem 5.1 has a feasible solution at times $l = 1, 2, \ldots, k-1$, Lemmas 5.2–5.5 hold: Lemma 5.4 guarantees the feasibility of the control input constraint, and Lemma 5.5 guarantees that the terminal state constraint is satisfied. Thus Theorem 5.3 is proved.

5.3.3.2 Closed-Loop System Stability Analysis

The stability of the closed-loop system is analyzed below.

Theorem 5.4 Suppose Assumptions 5.1–5.3 hold, $x(k_0) \in \mathbb{R}^{n_x}$, and conditions (5.43), (5.44) and (5.46) are satisfied at the initial time. If the parameter condition

$$\frac{(N-1)\kappa}{2} + \frac{1}{\mu} < \frac{1}{2} \tag{5.67}$$

holds, then the closed-loop system state is driven into the terminal set $\Omega(\varepsilon)$ in finite time.

Proof For $x(k) \in \mathcal{X} \setminus \Omega(\varepsilon)$ we have $\|x(k)\|_P > \varepsilon$. Summing the stability constraint (5.45) over all subsystems and the whole horizon yields

$$\sum_{s=1}^{N} \left( \left\| x^p(k+s|k) \right\|_P - \left\| x^{\mathrm{f}}(k+s|k) \right\|_P \right) \le \frac{\varepsilon}{\mu} \tag{5.69}$$

Applying Theorem 5.3 yields

$$\left\| x^{\mathrm{f}}(k+N|k) \right\|_P \le \frac{\varepsilon}{2} \tag{5.70}$$

Also, applying Lemma 5.3 yields

$$\sum_{s=1}^{N-1} \left( \left\| x^{\mathrm{f}}(k+s|k) \right\|_P - \left\| \hat{x}(k+s|k) \right\|_P \right) \le \frac{(N-1)\kappa\varepsilon}{2} \tag{5.71}$$

Substituting Eqs. (5.69)–(5.71) into Eq. (5.68) yields

$$V(k) - V(k-1) < \varepsilon \left( -1 + \frac{1}{2} + \frac{(N-1)\kappa}{2} + \frac{1}{\mu} \right) \tag{5.72}$$


By Eq. (5.67), $V(k) - V(k-1) < 0$. Thus, for any $k \ge 0$, if $x(k) \in \mathcal{X} \setminus \Omega(\varepsilon)$, there exists a constant $\eta \in (0, \infty)$ such that $V(k) \le V(k-1) - \eta$. Hence there exists a finite time $k'$ such that $x(k') \in \Omega(\varepsilon)$. This completes the proof.

In summary, the feasibility and stability analysis of the DMPC algorithm has been given. When an initial feasible solution of the system can be computed, feasibility of the algorithm is guaranteed at every subsequent update, and the corresponding closed-loop system is asymptotically stable at the origin.

5.3.4 Simulation Example

A distributed system consisting of four interrelated subsystems is used below to verify the effectiveness of the proposed algorithm. The four subsystems are related as shown in Fig. 5.7: $S_1$ is affected by $S_2$, $S_3$ is affected by $S_1$ and $S_2$, and $S_4$ is affected by $S_3$. Each input is subject to the constraint $u_i \in [u_i^{\min}, u_i^{\max}]$ and the input increment constraint $\Delta u_i \in [\Delta u_i^{\min}, \Delta u_i^{\max}]$. The four subsystem models are

$$\begin{aligned} S_1 &: x_1(k+1) = 0.62 x_1(k) + 0.34 u_1(k) - 0.12 x_2(k), \\ S_2 &: x_2(k+1) = 0.58 x_2(k) + 0.33 u_2(k), \\ S_3 &: x_3(k+1) = 0.60 x_3(k) + 0.34 u_3(k) + 0.11 x_1(k) - 0.07 x_2(k), \\ S_4 &: x_4(k+1) = 0.65 x_4(k) + 0.35 u_4(k) + 0.13 x_3(k). \end{aligned} \tag{5.73}$$
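As a quick sanity check on the model (5.73), the following sketch stacks the four subsystems and simulates them under the terminal feedback gains of Table 5.2 alone (pure terminal control, without the MPC phase). The closed-loop eigenvalues come out near 0.5, consistent with the text:

```python
import numpy as np

# System (5.73) in stacked form x(k+1) = A x(k) + B u(k), with u = K x.
A = np.array([[0.62, -0.12, 0.00, 0.00],
              [0.00,  0.58, 0.00, 0.00],
              [0.11, -0.07, 0.60, 0.00],
              [0.00,  0.00, 0.13, 0.65]])
B = np.diag([0.34, 0.33, 0.34, 0.35])
K = np.diag([-0.35, -0.25, -0.28, -0.43])   # terminal gains from Table 5.2

Acl = A + B @ K                              # closed loop under terminal control
eigs = np.abs(np.linalg.eigvals(Acl))
x = np.ones(4)
for _ in range(40):                          # 40 steps of x(k+1) = Acl x(k)
    x = Acl @ x
```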

For comparison purposes, both the centralized MPC and LCO-DMPC algorithms are applied. Simulations are performed in the MATLAB environment. In each control cycle, the optimization tool fmincon, a MATLAB routine for nonlinear constrained multivariable optimization, is applied to solve each subsystem's MPC problem. Some parameters of the controllers are shown in Table 5.2, where $P_i$ is obtained by solving the Lyapunov equation. The eigenvalues of the closed-loop system under feedback control are approximately 0.5. The eigenvalues of $A_o^T P A_o + A_o^T P A_d + A_d^T P A_o - \hat{Q}/2$ are $\{-2.42, -2.26, -1.80, -1.29\}$, all of which are negative; thus, Assumption 5.2 is satisfied. Set $\varepsilon = 0.2$: if $\|x_i\|_{P_i} \le \varepsilon/\sqrt{m} = 0.1$, then $\|K_i x_i\|_2$ will be less than

Fig. 5.7 Interaction relationships among subsystems


0.1, and the input constraints and input increment constraints in Table 5.2 will be satisfied. Set the optimization horizon to $N = 10$. Set the initial input and state at time $k_0 = 0$ to the solution of the centralized MPC and the corresponding predicted state, respectively. The state response and inputs of the closed-loop system are shown in Figs. 5.8 and 5.9, respectively. The states of all four subsystems converge to zero after 14 s; the state $x_4$ has an overshoot of $-0.5$ before converging to zero. Next, to further demonstrate its performance benefits, the proposed DMPC algorithm is compared with the dual-mode centralized MPC applied to the system (5.73). In the centralized MPC, the same dual-mode control strategy is used with a control horizon of $N = 10$. The terminal constraint for all subsystems is $\|x_i(k+10|k)\|_{P_i} \le \varepsilon/2 = 0.1$. When the state enters the attraction domain $\Omega(\varepsilon)$, control switches from MPC to the feedback control given in Table 5.2. The upper and lower bounds on the inputs and input increments of the four subsystems are $[-2, 2]$ and $[-1, 1]$, respectively.

Table 5.2 LCO-DMPC parameters

Subsystem | $K_i$ | $P_i$ | $Q_i$ | $R_i$ | $\Delta u_i^{\max}, \Delta u_i^{\min}$ | $u_i^{\max}, u_i^{\min}$
S1 | −0.35 | 5.36 | 4 | 0.2 | ±1 | ±2
S2 | −0.25 | 5.35 | 4 | 0.2 | ±1 | ±2
S3 | −0.28 | 5.36 | 4 | 0.2 | ±1 | ±2
S4 | −0.43 | 5.38 | 4 | 0.2 | ±1 | ±2

Fig. 5.8 System state evolution trajectory based on LCO-DMPC


Fig. 5.9 System input under LCO-DMPC control

The closed-loop system state response and control inputs under the centralized MPC are shown in Figs. 5.10 and 5.11, respectively. The state response curve of the centralized MPC is similar to that of the LCO-DMPC. In the centralized MPC case, all subsystems converge to zero within 8 s, whereas in the LCO-DMPC case it takes 14 s to reach convergence. Moreover, there is no significant overshoot in the state response when the centralized MPC strategy is used. Table 5.3 gives the state variances of the closed-loop system for the centralized MPC and LCO-DMPC cases. The total error of LCO-DMPC is 6.55 (40.5%) larger than that obtained with the centralized MPC algorithm. The simulation results show that, given an initial feasible solution, the algorithm presented in this section drives the system state to converge asymptotically to the origin.
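The 40.5% figure quoted from Table 5.3 is just the relative increase of the summed state variances, which is easy to check:

```python
# Per-subsystem state variances from Table 5.3.
centralized = [2.07, 5.47, 3.63, 5.00]
lco_dmpc = [2.22, 6.26, 4.12, 10.12]

gap = sum(lco_dmpc) - sum(centralized)   # 22.72 - 16.17 = 6.55
rel = gap / sum(centralized)             # about 0.405, i.e. 40.5%
```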

Fig. 5.10 Evolution of the system state under the action of centralized predictive control


Fig. 5.11 Control input of the system under centralized MPC control

Table 5.3 State variance of centralized MPC and LCO-DMPC closed-loop systems

Subsystem | Centralized MPC | LCO-DMPC
S1 | 2.07 | 2.22
S2 | 5.47 | 6.26
S3 | 3.63 | 4.12
S4 | 5.00 | 10.12
Sum | 16.17 | 22.72

5.4 Summary of This Chapter

The first part of this chapter presented the Nash-optimality-based DMPC algorithm and analyzed the nominal stability and performance deviation of the one-step predictive control strategy in the presence of communication faults. This helps the user better understand the proposed algorithm and is also instructive in applications. The simulation results verify the effectiveness and practicality of this distributed predictive control algorithm. The main advantage of the strategy is that it converts the online optimization of a large-scale system into the optimization of several smaller-scale systems, ensuring satisfactory system performance while greatly reducing the computational complexity; this makes the algorithm highly flexible in analysis and application. At the same time, these methods maintain the integrity of the system in case of failure and reduce the computational burden.

In addition, this chapter introduced a stability-preserving distributed MPC algorithm suitable for distributed systems with state coupling and input constraints. Each local controller optimizes its own performance index and, when solving the optimization problem, uses the state predictions from the previous time to approximate the current state values. Under this coordination strategy, the key factor to ensure feasibility and stability is to limit the actual state, control input and set value


errors to a certain range. If an initial feasible solution exists, subsequent feasibility at each update moment is guaranteed, as well as asymptotic stability of the closed-loop system.

References

1. Li S, Zhang Y, Zhu Q (2005) Nash-optimization enhanced distributed model predictive control applied to the shell benchmark problem. Inf Sci 170(2–4):329–349
2. Giovanini L (2011) Game approach to distributed model predictive control. IET Control Theory Appl 5(15):1729–1739
3. Rawlings JB, Muske KR (1993) The stability of constrained receding horizon control. IEEE Trans Autom Control 38(10):1512–1516
4. Mayne DQ et al (2000) Constrained model predictive control: stability and optimality. Automatica 36(6):789–814
5. Venkat AN, Rawlings JB, Wright SJ (2005) Stability and optimality of distributed model predictive control. In: Proceedings of the 44th IEEE conference on decision and control and the European control conference (CDC-ECC '05). IEEE, pp 6680–6685
6. Nash J (1951) Non-cooperative games. Ann Math 54(2):286–295
7. Du X, Xi Y, Li S (2001) Distributed model predictive control for large-scale systems. In: Proceedings of the 2001 American control conference, Arlington. IEEE
8. Du X, Xi Y, Li S (2002) Optimal algorithms for distributed predictive control. Control Theory Appl 19(5):793–796 (in Chinese)
9. Fagnani F, Zampieri S (2003) Stability analysis and synthesis for scalar linear systems with a quantized feedback. IEEE Trans Autom Control 48(9):1569–1584
10. Prett DM, Gillette RD (1980) Optimization and constrained multivariable control of a catalytic cracking unit. In: Proceedings of the joint automatic control conference
11. Dunbar WB (2007) Distributed receding horizon control of dynamically coupled nonlinear systems. IEEE Trans Autom Control 52(7):1249–1263
12. Farina M, Scattolini R (2012) Distributed predictive control: a non-cooperative algorithm with neighbor-to-neighbor communication for linear systems. Automatica 48(6):1088–1096

Chapter 6

Cooperative Distributed Predictive Control System

6.1 Overview

As discussed in Chap. 5, the optimization performance of a closed-loop system under distributed predictive control is not as good as under centralized predictive control, especially when there is strong coupling between the subsystems. The approach in Sect. 5.2 uses an iterative algorithm in each subsystem's predictive control strategy: in every control period, the predictive control process of each subsystem exchanges parameters with the neighboring subsystems multiple times and solves the quadratic programming problem multiple times. In essence, it improves the global performance by minimizing the computational error, and the optimal solution computed by this method is a "Nash optimum". Are there other methods to improve the global performance of closed-loop systems under distributed predictive control? To address this problem, the literature [1–3] proposes a method called Cooperative DMPC (C-DMPC): each local predictive controller optimizes not only the cost function of the corresponding subsystem but also that of the whole system, improving the performance of the whole closed loop by optimizing the global performance index. If an iterative algorithm is used, the optimization converges to a "Pareto" optimum [1, 4]. However, with an iterative algorithm each subsystem controller needs to communicate continuously with all the other subsystem controllers, and the communication load inevitably increases dramatically, which significantly affects the overall computational update speed of the system and can even become the dominant factor.
© Chemical Industry Press 2023. S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2_6

Therefore, this chapter proposes a non-iterative distributed predictive control method based on a global performance index [5], in which each local controller communicates only once per control period to avoid the negative impact of the communication load. Designing distributed predictive control with guaranteed stability is an important and challenging problem [6, 7]. In non-iterative DMPC, the optimized control sequence computed at the previous moment is not necessarily feasible at the current


moment due to the inconsistency of the solutions, which makes it more difficult to design stabilizing DMPCs. Chapter 5 gave a method for designing a stabilizing LCO-DMPC; however, the prediction model and optimization problem of C-DMPC differ from those of LCO-DMPC, so designing a constrained C-DMPC that guarantees closed-loop stability remains a problem of research value. This chapter gives a design method for a stable C-DMPC in which each subsystem MPC controller communicates with the other subsystem controllers only once per control period [3]. The method incorporates consistency constraints and stability constraints into each subsystem's MPC optimization problem. The consistency constraint ensures that the error between the optimized input sequence computed at the previous moment and the one computed at the current moment is limited to a certain range. With these constraints and the dual-mode predictive control method, the recursive feasibility of the designed controller and the asymptotic stability of the closed-loop system can be guaranteed.

In this chapter, Sect. 6.2 presents a non-iterative cooperative distributed predictive control method that improves global performance, giving the closed-loop system solution and stability conditions. Section 6.3 gives a method for designing constrained cooperative distributed predictive control with guaranteed stability, and analyzes the recursive feasibility of the control algorithm and the asymptotic stability of the system. Finally, a short summary of the chapter is given.

6.2 Non-iterative Cooperative Distributed Predictive Control

6.2.1 State-, Input-Coupled Distributed Systems

Without loss of generality, assume that the system $S$ consists of $m$ discrete-time linear subsystems $S_i$, $i = 1, \ldots, m$. Each subsystem is connected to the others through its inputs and states, so the state equation of $S_i$ can be expressed as

$$\begin{cases} x_i(k+1) = A_{ii} x_i(k) + B_{ii} u_i(k) + \displaystyle\sum_{\substack{j=1,\ldots,m \\ j \ne i}} A_{ij} x_j(k) + \displaystyle\sum_{\substack{j=1,\ldots,m \\ j \ne i}} B_{ij} u_j(k) \\ y_i(k) = C_{ii} x_i(k) + \displaystyle\sum_{\substack{j=1,\ldots,m \\ j \ne i}} C_{ij} x_j(k) \end{cases} \tag{6.1}$$

where $x_i \in \mathbb{R}^{n_{x_i}}$, $u_i \in \mathbb{R}^{n_{u_i}}$ and $y_i \in \mathbb{R}^{n_{y_i}}$ are the state, input and output vectors of the subsystem, respectively. The model of the overall system can be expressed as

$$\begin{cases} x(k+1) = A x(k) + B u(k) \\ y(k) = C x(k) \end{cases} \tag{6.2}$$

where $x \in \mathbb{R}^{n_x}$, $u \in \mathbb{R}^{n_u}$ and $y \in \mathbb{R}^{n_y}$ are the state, input and output vectors, respectively, and $A$, $B$ and $C$ are the system matrices. The control objective is to minimize the following global performance index:

$$J(k) = \sum_{i=1}^{m} \left( \sum_{l=1}^{P} \left\| y_i(k+l) - y_i^{d}(k+l) \right\|_{Q_i}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1) \right\|_{R_i}^2 \right) \tag{6.3}$$

where $y_i^d$ is the output setpoint of $S_i$; $\Delta u_i(k) = u_i(k) - u_i(k-1)$ is the input increment of $S_i$; $Q_i$ and $R_i$ are weight matrices; and $P, M \in \mathbb{N}$, $P \ge M$, are the optimization horizon and control horizon, respectively. This section describes the design of non-iterative cooperative distributed predictive controllers that improve the global optimization performance of the closed-loop system while guaranteeing stability.
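The global index (6.3) is a plain sum of weighted tracking and move-suppression terms; a minimal sketch (toy dimensions and numbers):

```python
import numpy as np

def global_cost(y_pred, y_ref, du, Q, R):
    """Global index (6.3): sum over subsystems i of
    sum_{l=1..P} ||y_i(k+l) - y_i^d(k+l)||_{Qi}^2
    + sum_{l=1..M} ||du_i(k+l-1)||_{Ri}^2.
    y_pred[i], y_ref[i]: P x n_yi arrays; du[i]: M x n_ui array."""
    J = 0.0
    for i in y_pred:
        err = y_pred[i] - y_ref[i]
        J += sum(float(e @ Q[i] @ e) for e in err)    # tracking terms
        J += sum(float(d @ R[i] @ d) for d in du[i])  # move-suppression terms
    return J

# Toy usage: one subsystem, P = 2, M = 1.
J = global_cost({0: np.array([[1.0], [0.5]])},
                {0: np.zeros((2, 1))},
                {0: np.array([[0.2]])},
                {0: np.eye(1)}, {0: np.eye(1)})
# J = 1^2 + 0.5^2 + 0.2^2 = 1.29
```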

6.2.2 Local Predictive Controller Design

The coordinated distributed predictive control presented in this section consists of a series of independent controllers C_i (i = 1, ..., m), each C_i responsible for controlling the corresponding subsystem S_i. Each controller can exchange information with the other controllers through the network. To simplify the problem and facilitate the analysis, the following assumptions are made.

Assumption 6.1
(i) The sampling interval is long compared to the computation time of the control program; the computations of the controllers are therefore assumed to be synchronized.
(ii) Each controller communicates once in each sampling interval.
(iii) The local state x_i(k), i = 1, 2, ..., m, is measurable.

To facilitate the description of the algorithm, Table 6.1 defines some common notation used in the text.

6.2.2.1 Mathematical Description of the Local Controller Optimization Problem

(1) Performance index

Since the optimal control decision of subsystem S_i can affect, or even degrade, the optimization performance of the other subsystems, the performance of the other subsystems


6 Cooperative Distributed Predictive Control System

Table 6.1 Definition of symbols

diag_a{A} : block-diagonal matrix with a diagonal blocks, each equal to A
λ_j{A} : the j-th eigenvalue of the square matrix A
O(a) : of the order of a
0_{a×b} : zero matrix of size a × b
0_a : zero matrix of size a × a
I_a : identity matrix of size a × a
x̂_i(l|h) : predicted value of x_i(l) calculated by C_i at time h
ŷ_i(l|h) : predicted value of y_i(l) calculated by C_i at time h
u_i(l|h) : input value u_i(l) calculated by C_i at time h
Δu_i(l|h) : input increment of u_i(l) calculated by C_i at time h
y_i^d(l|h) : set value of y_i(l|h)
y^d(l|h) : set value of y(l|h)
x̂^i(l|h) : predicted value of the global state x(l) calculated by C_i at time h
ŷ^i(l|h) : predicted value of the global output y(l) calculated by C_i at time h
U_i(l, p|h) : input sequence vector, U_i(l, p|h) = [u_i^T(l|h) u_i^T(l+1|h) ··· u_i^T(l+p|h)]^T
ΔU_i(l, p|h) : input increment sequence vector, ΔU_i(l, p|h) = [Δu_i^T(l|h) Δu_i^T(l+1|h) ··· Δu_i^T(l+p|h)]^T
U(l, p|h) : input sequence of the global system, U(l, p|h) = [u^T(l|h) u^T(l+1|h) ··· u^T(l+p|h)]^T
X̂^i(l, p|h) : global state prediction sequence calculated by C_i, X̂^i(l, p|h) = [x̂^{iT}(l|h) x̂^{iT}(l+1|h) ··· x̂^{iT}(l+p|h)]^T
X̂_i(l, p|h) : state prediction sequence of subsystem S_i, X̂_i(l, p|h) = [x̂_i^T(l|h) x̂_i^T(l+1|h) ··· x̂_i^T(l+p|h)]^T
X̂(l, p|h) : time-stacked global state prediction vector, X̂(l, p|h) = [x̂^T(l|h) x̂^T(l+1|h) ··· x̂^T(l+p|h)]^T
Ŷ_i(l, p|h) : output prediction sequence of subsystem S_i, Ŷ_i(l, p|h) = [ŷ_i^T(l|h) ŷ_i^T(l+1|h) ··· ŷ_i^T(l+p|h)]^T
Y(l, p|h) : time-stacked global output vector, Y(l, p|h) = [y^T(l|h) y^T(l+1|h) ··· y^T(l+p|h)]^T
Y^d(l, p|h) : set value of Y(l, p|h)
𝐗̂(l, p|h) : subsystem-stacked state vector, 𝐗̂(l, p|h) = [X̂_1^T(l, p|h) ··· X̂_m^T(l, p|h)]^T
𝐘^d(l, p|h) : subsystem-stacked set-value vector, 𝐘^d(l, p|h) = [Y^{dT}(l, p|h) ··· Y^{dT}(l, p|h)]^T (m copies)
𝐔(l, p|h) : subsystem-stacked input vector, 𝐔(l, p|h) = [U_1^T(l, p|h) ··· U_m^T(l, p|h)]^T


should be considered when computing the optimal solution of the i-th controller, in order to improve the overall performance of the closed-loop system. Therefore, in the distributed predictive control designed in this section, each subsystem controller C_i, i = 1, ..., m, is optimized with the following "global performance index":

J^i(k) = Σ_{l=1}^{P} ||ŷ^i(k+l|k) − y^d(k+l|k)||²_Q + Σ_{l=1}^{M} ||Δu_i(k+l−1|k)||²_{R_i}     (6.4)

where Q = diag{Q_1, Q_2, ..., Q_m}. The increments Δu_j(k+l−1|k), j ≠ i, are excluded from (6.4) because the future input sequences of the other subsystems are not functions of the current subsystem's control law.

(2) Prediction model

Since the state evolution of the other subsystems is affected by u_i(k) only after one or several control periods, this effect should be considered in the computation. In addition, since the subsystems compute synchronously, information from the other subsystems is available only after one sampling interval. Considering these factors, the state and output of the subsystem l+1 steps ahead of moment k can be predicted with the following equations:

x̂^i(k+l+1|k) = A^{l+1} L_i x_i(k) + A^{l+1} L'_i x̂(k|k−1)
             + Σ_{s=1}^{l+1} A^{s−1} B_i u_i(k+l+1−s|k)
             + Σ_{j=1,...,m; j≠i} Σ_{s=1}^{l+1} A^{s−1} B_j u_j(k+l+1−s|k−1)
ŷ^i(k+l+1|k) = C x̂^i(k+l+1|k)     (6.5)

where

L_i = [ 0_{n_xi × Σ_{j=1}^{i−1} n_xj}  I_{n_xi}  0_{n_xi × Σ_{j=i+1}^{m} n_xj} ]^T,
L'_i = diag{ I_{Σ_{j=1}^{i−1} n_xj}, 0_{n_xi}, I_{Σ_{j=i+1}^{m} n_xj} },
B_i = [ B_{1i}^T  B_{2i}^T  ···  B_{mi}^T ]^T

Remark 6.1 The inputs of the other subsystems appearing in the above prediction model, together with their states, are treated as measurable disturbances. The controller C_i therefore needs to obtain, at the current moment, the predicted state values and the predicted control input sequences of all other subsystems.
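The selector matrices L_i and L'_i split the global state between the locally measured part x_i(k) and the network-communicated estimate x̂(k|k−1). A small sketch, with illustrative state dimensions:

```python
import numpy as np

# Sketch of the selection matrices L_i and L_i' from the prediction
# model (6.5), for illustrative state dimensions n_x = [2, 3, 2].
def selectors(n_x, i):
    """Return (L_i, L_i_prime) for subsystem i (0-based). L_i embeds x_i
    into the global state; L_i' keeps every other subsystem's entries of
    a global state vector and zeroes out the x_i block."""
    before = sum(n_x[:i]); after = sum(n_x[i + 1:]); ni = n_x[i]
    Li = np.vstack([np.zeros((before, ni)), np.eye(ni), np.zeros((after, ni))])
    Lip = np.diag(np.concatenate([np.ones(before), np.zeros(ni), np.ones(after)]))
    return Li, Lip

n_x = [2, 3, 2]
Li, Lip = selectors(n_x, 1)
x = np.arange(7.0)      # a global state vector
xi = x[2:5]             # subsystem 2's own substate
# L_i x_i + L_i' x reconstructs the global state from the local
# measurement and the communicated estimate:
assert np.allclose(Li @ xi + Lip @ x, x)
```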


(3) Optimization problem

Problem 6.1 For each independent controller C_i, i = 1, ..., m, the unconstrained coordinated distributed predictive control problem with prediction horizon P and control horizon M at moment k is to find the optimal control law U_i(k, M|k) that minimizes the global performance index subject to the system equations, i.e.,

min_{ΔU_i(k,M|k)}  Σ_{l=1}^{P} ||ŷ^i(k+l|k) − y^d(k+l|k)||²_Q + Σ_{l=1}^{M} ||Δu_i(k+l−1|k)||²_{R_i}
s.t. Eq. (6.5)     (6.6)

At moment k, the controller C_i solves its optimization problem (Problem 6.1) based on the U_j(k+l|k−1) and x(k) information obtained from the network, selects the first element of the optimal input sequence, and applies u_i(k) = u_i(k−1) + Δu_i(k|k) to S_i. The optimal control sequence is then sent to the other subsystems through the network. At moment k+1, each local controller repeats the above solution and information-exchange process based on the updated state information and the received input-sequence predictions of the other subsystems.

6.2.2.2 Analytical Solution of the Closed-Loop System

This section gives the analytical solution of the coordinated distributed predictive control presented above. To obtain it, the distributed predictive control problem (Problem 6.1) is first transformed into a local standard quadratic programming problem that is solved online at each sampling moment. Define

M̃_i = diag{ I_{Σ_{j=1}^{i−1} n_uj}, 0_{n_ui}, I_{Σ_{j=i+1}^{m} n_uj} }     (6.7)

B̃_i = [ diag_{M−1}{B M̃_i}      0_{(M−1)n_x × n_u}
        0_{n_x × (M−1)n_u}      B M̃_i
        ⋮                       ⋮
        0_{n_x × (M−1)n_u}      B M̃_i ]     (6.8)

S = [ A⁰       0    ···   0
      A¹       A⁰   ⋱     ⋮
      ⋮        ⋱    ⋱     0
      A^{P−1}  ···  A¹    A⁰ ],    A_a = [ A  0  ···  0 ]^T     (6.9)

B_i = [ diag_{M−1}{B_i}        0_{(M−1)n_x × n_ui}
        0_{n_x × (M−1)n_ui}    B_i
        ⋮                      ⋮
        0_{n_x × (M−1)n_ui}    B_i ]     (6.10)

Γ_i = [ I_{n_ui}  0_{n_ui}  ···  0_{n_ui}
        I_{n_ui}  I_{n_ui}  ⋱    ⋮
        ⋮         ⋱         ⋱    0_{n_ui}
        I_{n_ui}  ···  I_{n_ui}  I_{n_ui} ],    Γ'_i = [ I_{n_ui}  ···  I_{n_ui} ]^T     (6.11)

N_i = C_a S B_i Γ_i,   C_a = diag_P{C},   Q̄ = diag_P{Q},   R̄_i = diag_M{R_i}     (6.12)

Then the following lemma can be derived from Eq. (6.5) and Eqs. (6.7)–(6.12).

Lemma 6.1 (Quadratic programming form) Under Assumption 6.1, at moment k, each local controller C_i, i = 1, ..., m, solves the following optimization problem:

min_{ΔU_i(k,M|k)}  [ ΔU_i^T(k,M|k) H_i ΔU_i(k,M|k) − G_i(k+1,P|k) ΔU_i(k,M|k) ]     (6.13)

where H_i is a positive definite matrix and

H_i = N_i^T Q̄ N_i + R̄_i     (6.14)

G_i(k+1,P|k) = 2 N_i^T Q̄ [ Y^d(k+1,P|k) − Ẑ_i(k+1,P|k) ]     (6.15)

Ẑ_i(k+1,P|k) = C_a S[ B_i Γ'_i u_i(k−1) + A_a L_i x_i(k|k) + A_a L'_i x̂(k|k−1) + B̃_i U(k−1,M|k−1) ]     (6.16)

Proof According to Eq. (6.5) and Eqs. (6.7)–(6.12), the state and output predictions of the subsystem S_i calculated at moment k can be expressed as

X̂^i(k+1,P|k) = S[ A_a L_i x_i(k) + B_i U_i(k,M|k) + A_a L'_i x̂(k|k−1) + B̃_i U(k−1,M|k−1) ]
Ŷ^i(k+1,P|k) = C_a X̂^i(k+1,P|k)     (6.17)

where it is specified that the values of U(k−1,M|k−1) and U_i(k,M|k) at the last P−M+1 moments equal their respective last terms. Due to


u_i(k+h|k) = u_i(k−1) + Σ_{r=0}^{h} Δu_i(k+r|k)

it follows, according to Eq. (6.11), that

U_i(k,M|k) = Γ'_i u_i(k−1) + Γ_i ΔU_i(k,M|k)     (6.18)

Substituting Eqs. (6.7)–(6.12) and (6.17) into Eq. (6.6) yields the standard quadratic programming form (6.13).

According to the quadratic programming form (6.13), the solution to Problem 6.1 is

ΔU_i(k,M|k) = (1/2) H_i^{−1} G_i(k+1,P|k)

which leads to Theorem 6.1.

Theorem 6.1 (Analytical solution) If Assumption 6.1 holds, the control law applied by each controller C_i, i = 1, ..., m, to the corresponding subsystem S_i at moment k can be calculated from

u_i(k) = u_i(k−1) + K̄_i [ Y^d(k+1,P|k) − Ẑ_i(k+1,P|k) ]     (6.19)

where

K̄_i = Ψ_i K_i,   K_i = H_i^{−1} N_i^T Q̄,   Ψ_i = [ I_{n_ui}  0_{n_ui×(M−1)n_ui} ]     (6.20)

Remark 6.2 The complexity of computing the analytical solution in C_i is dominated by the inversion of H_i. Using the Gauss-Jordan algorithm, inverting the (M·n_ui)-dimensional matrix H_i has complexity O(M³·n_ui³). Thus, the overall computational complexity of the distributed predictive control is O(M³·Σ_{i=1}^{m} n_ui³), while that of centralized predictive control is O(M³·(Σ_{i=1}^{m} n_ui)³).
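The unconstrained QP step (6.13)–(6.14) and its closed-form minimizer can be illustrated directly. In the sketch below, N_, Qbar, Rbar and target are random stand-ins for N_i, Q̄, R̄_i and Y^d − Ẑ_i; they are not taken from the book:

```python
import numpy as np

# Sketch of Lemma 6.1 / Theorem 6.1: minimize dU' H dU - G' dU in closed form.
rng = np.random.default_rng(0)
N_ = rng.standard_normal((8, 4))     # stand-in for the map N_i
Qbar = np.eye(8); Rbar = 0.1 * np.eye(4)
H = N_.T @ Qbar @ N_ + Rbar          # Eq. (6.14), positive definite
target = rng.standard_normal(8)      # plays the role of Y^d - Z_hat_i
G = 2.0 * N_.T @ Qbar @ target       # Eq. (6.15)

dU = 0.5 * np.linalg.solve(H, G)     # analytic minimizer (1/2) H^{-1} G

# Stationarity check: the gradient 2 H dU - G vanishes at the optimum.
assert np.allclose(2.0 * H @ dU, G)
# Receding horizon: only the first input increment is applied, cf. (6.20).
du0 = dU[:1]
```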

6.2.3 Performance Analysis

6.2.3.1 Closed-Loop Stability

Based on the analytical solution given in Theorem 6.1, the stability of the closed-loop system is analyzed through the coefficient matrix of the closed-loop system model.


Define

Π = [ Π_1^T ··· Π_P^T ]^T,   Π_l = diag{Π_1l, ..., Π_ml},
Π_il = [ 0_{n_xi×(l−1)n_xi}  I_{n_xi}  0_{n_xi×(P−l)n_xi} ],   i = 1, ..., m, l = 1, ..., P     (6.21)

Θ = [ Θ_1^T ··· Θ_M^T ]^T,   Θ_l = diag{Θ_1l, ..., Θ_ml},
Θ_il = [ 0_{n_ui×(l−1)n_ui}  I_{n_ui}  0_{n_ui×(M−l)n_ui} ],   i = 1, ..., m, l = 1, ..., M     (6.22)

Ā = diag_m{A_a},   B̄ = diag{B_1, ..., B_m},   C̄ = diag_m{C_a},   B̃ = [ B̃_1^T ··· B̃_m^T ]^T     (6.23)

L̄_i = diag_P{L_i^T},   L̄ = diag{L̄_1, ..., L̄_m},   L = diag{L_1, ..., L_m},
L' = [ L'_1^T ··· L'_m^T ]^T,   L̃ = L'[ I_{n_x}  0_{n_x×(P−1)n_x} ]     (6.24)

Γ' = diag{Γ'_1, ..., Γ'_m},   Ψ = diag{Ψ_1, ..., Ψ_m},   S̄ = diag_m{S},   K̄ = diag{Γ_1K_1, ..., Γ_mK_m}     (6.25)

Φ = −K̄C̄S̄ĀL,   Λ = −K̄C̄S̄ĀL̃,   Ξ = Γ'Ψ − K̄C̄S̄(B̄Γ'Ψ + B̃)     (6.26)

This leads to Theorem 6.2.

Theorem 6.2 (Stability condition) The closed-loop system obtained by applying the control law computed under the global performance index is asymptotically stable if and only if Eq. (6.27) is satisfied:

|λ_j{A_N}| < 1,   ∀j = 1, ..., n_N     (6.27)

where

A_N = [ A                 0          BΨ                    0
        L̄S̄ĀL            L̄S̄ĀL̃    L̄S̄B̄                 L̄S̄B̃
        ΦA + ΛL̄S̄ĀL     ΛL̄S̄ĀL̃   ΦBΨ + ΛL̄S̄B̄ + Ξ    ΛL̄S̄B̃
        0                 0          I_{Mn_u}              0 ]

The order of the whole closed-loop system is n_N = Pn_x + n_x + 2Mn_u.

Proof According to Eqs. (6.7)–(6.12) and (6.17), the state prediction sequence of subsystem S_i computed by the controller C_i at moment k can be expressed as

X̂_i(k+1,P|k) = L̄_i S[ A_a L_i x_i(k) + B_i U_i(k,M|k) + A_a L'_i x̂(k|k−1) + B̃_i U(k−1,M|k−1) ]     (6.28)

From Eqs. (6.21) and (6.22) it can be seen that the time-stacked vectors are recovered from the subsystem-stacked vectors through the permutation matrices Π and Θ:

X̂(k,P|k−1) = Π[ X̂_1^T(k,P|k−1) ··· X̂_m^T(k,P|k−1) ]^T     (6.29)

U(k,M|k−1) = Θ[ U_1^T(k,M|k−1) ··· U_m^T(k,M|k−1) ]^T     (6.30)

From Eqs. (6.23), (6.25) and (6.28)–(6.30), the vector formed by stacking the state prediction sequences of all subsystems can be expressed as

X̂(k+1,P|k) = L̄S̄[ ĀLx(k) + B̄U(k,M|k) + ĀL̃X̂(k,P|k−1) + B̃U(k−1,M|k−1) ]     (6.31)

Since u_i(k−1) = Ψ_i U_i(k−1,M|k−1), it follows from Eqs. (6.12) and (6.13) that

U_i(k,M|k) = Γ'_iΨ_iU_i(k−1,M|k−1) + Γ_iK_i[ Y^d(k+1,P|k) − Ẑ_i(k+1,P|k) ]     (6.32)

Substituting (6.16) into (6.32), the vector formed by stacking the optimal control sequences of all subsystems follows from Eqs. (6.12), (6.23)–(6.25), (6.29) and (6.30) as

U(k,M|k) = ΞU(k−1,M|k−1) + Φx(k) + ΛX̂(k,P|k−1) + K̄Y^d(k+1,P|k)     (6.33)


It is worth noting that the feedback control law obtained from all controller calculations is

u(k) = ΨU(k,M|k)     (6.34)

Combining Eqs. (6.2), (6.31), (6.33) and (6.34) yields the closed-loop state-space equations

x(k) = Ax(k−1) + BΨU(k−1,M|k−1)     (6.35)

X̂(k,P|k−1) = L̄S̄[ ĀLx(k−1) + B̄U(k−1,M|k−1) + ĀL̃X̂(k−1,P|k−2) + B̃U(k−2,M|k−2) ]     (6.36)

U(k,M|k) = Φ[ Ax(k−1) + BΨU(k−1,M|k−1) ]
         + ΛL̄S̄[ ĀLx(k−1) + B̄U(k−1,M|k−1) + ĀL̃X̂(k−1,P|k−2) + B̃U(k−2,M|k−2) ]
         + ΞU(k−1,M|k−1) + K̄Y^d(k+1,P|k)     (6.37)

y(k) = Cx(k)     (6.38)

where, under the assumption that the state is fully measurable, x̂(k|k) is replaced by x(k). Defining the extended state X_N(k) = [x^T(k), X̂^T(k,P|k−1), U^T(k,M|k), U^T(k−1,M|k−1)]^T, Theorem 6.2 follows from (6.35)–(6.38).

Remark 6.3 The first two column blocks of the dynamic matrix A_N in Eq. (6.27) depend only on the elements of the matrices A and B, while the third column block depends on A, B, C, Q, R_i, P and M. The design degrees of freedom are therefore the weight matrices Q and R_i and the horizons P and M.
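Condition (6.27) is a Schur (spectral radius) test on A_N. A sketch of the check, applied here to an arbitrary stand-in matrix rather than a fully assembled A_N:

```python
import numpy as np

# Sketch of the stability test (6.27): the closed-loop dynamic matrix
# must have all eigenvalues strictly inside the unit circle.
def is_schur(M, tol=1e-9):
    """True if max |lambda_j(M)| < 1 (asymptotic stability, Eq. 6.27)."""
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0 - tol

A_N = np.array([[0.5, 0.2], [0.0, 0.9]])   # stand-in; eigenvalues 0.5, 0.9
assert is_schur(A_N)
assert not is_schur(np.array([[1.1, 0.0], [0.0, 0.2]]))
```

In practice one sweeps the design parameters (Q, R_i, P, M), rebuilds A_N, and repeats this test, which is how the stability maps of the simulation section are produced.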

6.2.3.2 Optimization Performance Analysis

To explain the difference between coordinated distributed predictive control and centralized predictive control, the optimization problem of each C_i, i = 1, ..., m, is rewritten as

min_{ΔU_i(k,M|k)}  Σ_{l=1}^{P} ||ŷ^i(k+l|k) − y^d(k+l|k)||²_Q + Σ_{l=1}^{M} ||Δu_i(k+l−1|k)||²_{R_i}

s.t.  x̂^i(k+l+1|k) = A x̂^i(k+l|k) + B ũ^i(k+l|k)
      x̂^i(k|k) = [ x̂_1^T(k|k−1) ··· x̂_{i−1}^T(k|k−1)  x_i^T(k|k)  x̂_{i+1}^T(k|k−1) ··· x̂_m^T(k|k−1) ]^T
      ũ^i(k+l|k) = [ u_1^T(k+l|k−1) ··· u_{i−1}^T(k+l|k−1)  u_i^T(k+l|k)  u_{i+1}^T(k+l|k−1) ··· u_m^T(k+l|k−1) ]^T
      ŷ^i(k+l|k) = C x̂^i(k+l|k)     (6.39)

The optimization problem of centralized predictive control can be expressed as

min_{ΔU(k,M|k)}  Σ_{l=1}^{P} ||ŷ(k+l|k) − y^d(k+l|k)||²_Q + Σ_{l=1}^{M} Σ_{i=1}^{m} ||Δu_i(k+l−1|k)||²_{R_i}

s.t.  x̂(k+l+1|k) = A x̂(k+l|k) + B u(k+l|k),   x̂(k|k) = x(k)
      u(k+l|k) = [ u_1^T(k+l|k) ··· u_m^T(k+l|k) ]^T
      ŷ(k+l|k) = C x̂(k+l|k)     (6.40)

As can be seen, Eqs. (6.39) and (6.40) have the same performance index and similar state evolution models. The only difference between the two is that in distributed predictive control, the future control sequences of the other subsystems at moment k are the estimates computed at moment k − 1. If there are disturbances, the states of the subsystems computed at moment k will not be equal to the values computed at moment k − 1, which can have an impact on the final performance of the closed-loop


system. Nevertheless, the optimization problem of coordinated distributed predictive control remains very similar to that of centralized predictive control.

6.2.4 Simulation Example

This section investigates the performance of coordinated distributed predictive control and compares it with the distributed predictive control based on local performance indices presented in Chap. 5. The minimum-phase system from Chap. 5 is discretized with a sampling time of 0.2 s, giving

[ y_1(z) ]   [ −0.024(z−1.492)(z+0.810)/((z−0.819)(z²−1.922z+0.961))   0.018α(z+0.935)/(z²−1.676z+0.819) ] [ u_1(z) ]
[ y_2(z) ] = [ 0.126α/(z−0.368)                                         0.147(z−0.668)/(z²−1.572z+0.670)  ] [ u_2(z) ]

The corresponding coefficient matrices of the state-space realization are

A = [ A_11  0
      0     A_22 ],

A_11 = [ 2.74  −1.27  0.97  0
         2      0     0     0
         0      0.5   0     0
         0      0     0     0.37 ],   A_22 = [ 1.68  −0.82  0     0
                                               1      0     0     0
                                               0      0     1.57  −0.67
                                               0      0     1     0 ],

B = [ B_11  0
      0     B_22 ],   B_11 = [ 0.25  0  0  0.5 ]^T,   B_22 = [ 0.25  0  0.5  0 ]^T,

C = [ C_11  C_12
      C_21  C_22 ],
C_11 = [ −0.1  0.03  0.12  0 ],   C_12 = α[ 0.07  0.07  0  0 ],
C_21 = α[ 0  0  0  2.25 ],        C_22 = [ 0  0  0.29  −0.20 ]

The system is decomposed into two SISO subsystems S_1 and S_2, whose state-space coefficients are {A_11, B_11, C_11} and {A_22, B_22, C_22}, respectively. The magnitude of the interaction between S_1 and S_2 is denoted by the parameter α.

As with the distributed predictive control based on local performance indices, the stability of coordinated distributed predictive control depends on whether P, M, Q and R_i, i = 1, ..., m, satisfy the conditions of Theorem 6.2. To simplify the computation, P = M, R = γI_u and Q = I_y are chosen. The three-dimensional plots in Figs. 6.1a, 6.2a and 6.3a show the maximum eigenvalue of the closed-loop system for different combinations of γ and P: the Z-axis represents the maximum eigenvalue, and the X- and Y-axes represent the logarithm of γ and P, respectively. Figures 6.1b, 6.2b and 6.3b show the control performance of the closed-loop system, where the dotted lines are the output set points, the solid lines are the inputs and outputs under the distributed predictive control based on local performance indices, and the dashed lines are those under the coordinated distributed predictive control.

As can be seen from the figures, the system stability depends on the parameters γ and P. For weaker coupling (Figs. 6.1a and 6.2a), the stabilizing parameter range of the coordinated distributed predictive control is similar to that of the distributed predictive control based on local performance indices. It is worth pointing out that the closed-loop system with coordinated distributed predictive control exhibits better overall performance: when α = 0.1 and α = 1, the output mean square errors of the coordinated distributed predictive control, (0.2086, 0.2034), are smaller than those of the distributed predictive control based on local performance indices, (0.2568, 0.2277).
When α = 10, γ = 1 and P = 20, the closed-loop system under the distributed predictive control based on local performance indices is unstable, while the closed-loop system with the coordinated distributed predictive control of this chapter is stable. In summary, the stabilizing parameter range of coordinated distributed predictive control is larger than that of distributed predictive control based on local performance indices. Typically, the stability region is correlated with the optimization horizon P and with γ. Moreover, coordinated distributed predictive control exhibits better overall performance regardless of the strength of the coupling between the subsystems.
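The example matrices can be assembled programmatically. The sketch below transcribes the values printed above (treat them as approximate, since the book rounds to two or three digits); note that α enters only through the output matrix C in this realization:

```python
import numpy as np

# Transcription sketch of the Sect. 6.2.4 example; values as printed.
def example_system(alpha):
    A11 = np.array([[2.74, -1.27, 0.97, 0], [2, 0, 0, 0],
                    [0, 0.5, 0, 0], [0, 0, 0, 0.37]])
    A22 = np.array([[1.68, -0.82, 0, 0], [1, 0, 0, 0],
                    [0, 0, 1.57, -0.67], [0, 0, 1, 0]])
    A = np.block([[A11, np.zeros((4, 4))], [np.zeros((4, 4)), A22]])
    B11 = np.array([[0.25], [0], [0], [0.5]])
    B22 = np.array([[0.25], [0], [0.5], [0]])
    B = np.block([[B11, np.zeros((4, 1))], [np.zeros((4, 1)), B22]])
    C11 = np.array([[-0.1, 0.03, 0.12, 0]])
    C12 = alpha * np.array([[0.07, 0.07, 0, 0]])
    C21 = alpha * np.array([[0, 0, 0, 2.25]])
    C22 = np.array([[0, 0, 0.29, -0.20]])
    C = np.block([[C11, C12], [C21, C22]])
    return A, B, C

A, B, C = example_system(alpha=0.1)
assert A.shape == (8, 8) and B.shape == (8, 2) and C.shape == (2, 8)
```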

[Figure 6.1 appears here: (a) surface plot of the maximum closed-loop eigenvalue over (log γ, P) for LCO-DMPC and C-DMPC; (b) trajectories of y1, y2, u1 and u2 over time.]

Fig. 6.1 Maximum eigenvalues and control performance of the closed-loop system for parameters α = 0.1 and γ = 1

[Figure 6.2 appears here: (a) surface plot of the maximum closed-loop eigenvalue over (log γ, P) for LCO-DMPC and C-DMPC; (b) trajectories of y1, y2, u1 and u2 over time.]

Fig. 6.2 Maximum eigenvalues and control performance of the closed-loop system for parameter α = 1

[Figure 6.3 appears here: (a) surface plot of the maximum closed-loop eigenvalue over (log γ, P) for LCO-DMPC and C-DMPC; (b) trajectories of y1, y2, u1 and u2 over time.]

Fig. 6.3 Maximum eigenvalues and control performance of the closed-loop system for parameter α = 10


6.3 Constrained Coordinated Distributed Predictive Control with Guaranteed Stability [3]

6.3.1 Description of Distributed Systems

Without loss of generality, assume that the system consists of m discrete-time linear subsystems S_i, i ∈ P, P = {1, ..., m}. The subsystems are interconnected through their states, so each subsystem can be represented as

x_i(k+1) = A_ii x_i(k) + B_ii u_i(k) + Σ_{j∈P_{+i}} A_ij x_j(k)
y_i(k) = C_ii x_i(k)     (6.41)

where x_i ∈ R^{n_xi}, u_i ∈ U_i ⊂ R^{n_ui} and y_i ∈ R^{n_yi} are the state, input and output vectors of the subsystem, respectively. U_i is the feasible input set of u_i; it contains the origin and is determined by, among other conditions, the physical constraints of the system. A non-zero matrix A_ij indicates that subsystem S_i is affected by subsystem S_j, j ∈ P; such an S_j is called an upstream system of S_i. Define P_{+i} as the set of all upstream systems of S_i, and correspondingly P_{−i} as the set of all downstream systems of S_i; P_i = {j | j ∈ P, j ≠ i}. Combining the subsystem states, inputs and outputs, the dynamic equations of the full system can be written as

x(k+1) = Ax(k) + Bu(k)
y(k) = Cx(k)     (6.42)

where x = [x_1^T, x_2^T, ..., x_m^T]^T ∈ R^{n_x}, u = [u_1^T, u_2^T, ..., u_m^T]^T ∈ R^{n_u} and y = [y_1^T, y_2^T, ..., y_m^T]^T ∈ R^{n_y} are the state, control input and output vectors of the whole system S, respectively; A, B and C are constant matrices of suitable dimensions. Also, u ∈ U = U_1 × U_2 × ··· × U_m, and the origin lies in this set. The control objective is to achieve stability of the global system using a distributed predictive control algorithm under communication constraints, while keeping the global performance of the system as close as possible to that obtained by a centralized MPC. When the MPC controller of each subsystem has access to global information, optimizing the performance index of the global system yields better global performance. Therefore, this section designs a stability-guaranteeing C-DMPC method under this coordination strategy.


6.3.2 Local Predictive Controller Design

This section defines the optimization problems of the m independent MPC controllers. Each subsystem MPC communicates only once per sampling period, all controllers share the same optimization horizon N, N > 1, and their control laws are updated synchronously. At each update moment, each subsystem MPC controller optimizes its own open-loop control law given the current state of the system and the estimated inputs of the whole system. To facilitate the analysis, the following assumption is first made about the system.

Assumption 6.2 For each subsystem S_i, i ∈ P, there exists a state feedback control law u_i = K_i x such that the closed-loop system x(k+1) = A_c x(k) is asymptotically stable, where A_c = A + BK and K = block-diag(K_1, K_2, ..., K_m).

For a more detailed description, some necessary symbols are defined in Table 6.2.

6.3.2.1 Mathematical Description of the Local Optimization Problem

Because the state evolution of the subsystems S_j, j ∈ P_{−i}, is affected by the optimal control law of the i-th subsystem S_i, and this effect may sometimes be negative, the following global optimization performance index is defined in the algorithm [1, 2, 4]:

J_i = ||x̂(k+N|k, i)||_P + Σ_{l=0}^{N−1} ( ||x̂(k+l|k, i)||_Q + ||u_i(k+l|k)||_{R_i} )     (6.43)

Table 6.2 Description of symbols

P : collection of all subsystems
P_i : collection of all subsystems excluding the current subsystem S_i
u_i(k+l−1|k) : optimized control sequence of subsystem S_i calculated by C_i at time k
x̂_j(k+l|k, i) : predicted state sequence of subsystem S_j calculated by C_i at time k
x̂(k+l|k, i) : predicted state sequences of all subsystems calculated by C_i at time k
u_i^f(k+l−1|k) : feasible control law of subsystem S_i calculated by C_i at time k
x_j^f(k+l|k, i) : feasible predicted state sequence of subsystem S_j defined by C_i at time k
x^f(k+l|k, i) : feasible predicted state sequences of all subsystems calculated by C_i at time k
x^f(k+l|k) : feasible predicted state sequences of all subsystems at time k
||·||_P : P-norm, where P is any positive definite matrix, ||z||_P = √(z^T P z)


where Q = Q^T > 0, R_j = R_j^T > 0 and P = P^T > 0. The matrix P must satisfy the following Lyapunov equation:

A_c^T P A_c − P = −Q̂     (6.44)

where Q̂ = Q + K^T R K and R = block-diag{R_1, R_2, ..., R_m}.

Since the subsystem controllers update synchronously, the current-moment control sequences of the subsystems S_j, j ∈ P_i, are unknown to subsystem S_i. At moment k, the control sequence of each subsystem S_j, j ∈ P_i, is therefore assumed to be the optimal sequence computed at moment k−1, completed with the terminal feedback control law:

[ u_j(k|k−1), u_j(k+1|k−1), ..., u_j(k+N−2|k−1), K_j x̂(k+N−1|k−1, j) ]     (6.45)

Then the prediction model in the MPC of subsystem S_i can be expressed as

x̂(k+l|k, i) = A^l x(k) + Σ_{h=1}^{l} A^{l−h} B_i u_i(k+h−1|k) + Σ_{j∈P_i} Σ_{h=1}^{l} A^{l−h} B_j u_j(k+h−1|k−1)     (6.46)

where, for all i and j ∈ P_i,

B_i = [ 0_{n_ui × Σ_{j=1}^{i−1} n_xj}  B_ii^T  0_{n_ui × Σ_{j=i+1}^{m} n_xj} ]^T     (6.47)
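The terminal weight P of Eq. (6.44) is the solution of a discrete Lyapunov equation. A NumPy-only sketch that solves it by vectorization, with an illustrative A_c and Q̂ (not taken from the book):

```python
import numpy as np

# Sketch of computing P from the Lyapunov equation (6.44):
# Ac' P Ac - P = -Qhat, with Qhat = Q + K' R K.
def dlyap(Ac, Qhat):
    """Solve Ac.T @ P @ Ac - P = -Qhat for P via vectorization:
    (kron(Ac', Ac') - I) vec(P) = -vec(Qhat)."""
    n = Ac.shape[0]
    M = np.kron(Ac.T, Ac.T) - np.eye(n * n)
    return np.linalg.solve(M, -Qhat.flatten()).reshape(n, n)

Ac = np.array([[0.5, 0.1], [0.0, 0.6]])   # an illustrative Schur A + BK
Qhat = np.eye(2)
P = dlyap(Ac, Qhat)
# P satisfies (6.44) and is positive definite when Ac is Schur, Qhat > 0.
assert np.allclose(Ac.T @ P @ Ac - P, -Qhat)
```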

To extend the feasible region, each subsystem MPC controller uses a terminal state constraint set within which the terminal controller is guaranteed to be stabilizing.

Lemma 6.2 If Assumption 6.2 holds, then for any positive scalar c the set Ω(c) = {x ∈ R^{n_x} : ||x||_P ≤ c} is a positively invariant domain of attraction for the closed-loop system x(k+1) = A_c x(k), and there exists a sufficiently small scalar ε > 0 such that Kx is a feasible input for any x ∈ Ω(ε), i.e., Kx ∈ U.

Proof By Assumption 6.2, the closed-loop system x(k+1) = A_c x(k) is asymptotically stable for any x(k) ∈ Ω(c)\{0}, so all state trajectories starting in Ω(c) remain in Ω(c). Furthermore, since P satisfies the Lyapunov equation (6.44), all state trajectories starting in Ω(c) converge asymptotically to the origin. Since P is positive definite, there exists an ε > 0 such that Kx ∈ U for all x ∈ Ω(ε), and Ω(ε) shrinks to the origin as ε decreases to 0.

In the optimization problem of each subsystem MPC controller, the terminal state constraint set for the whole system can be defined as


Ω(ε) = { x ∈ R^{n_x} : ||x||_P ≤ ε }.

If at time k_0 the states of all subsystems satisfy x(k_0) ∈ Ω(ε) and each C_i, i ∈ P, uses the control law K_i x(k), then, by Lemma 6.2, the system is asymptotically stable for all k ≥ k_0. Hence the goal of the MPC is to push the states of all subsystems into the set Ω(ε); once the states of all subsystems are in Ω(ε), the control law switches to the feedback controller that stabilizes the system. This strategy of switching the MPC control law to a terminal controller once the state reaches a neighborhood of the origin is called dual-mode MPC; the strategy presented in this section is therefore a dual-mode distributed predictive control algorithm. The optimization problem of each subsystem MPC in the distributed predictive control method is formulated as follows.

Problem 6.2 For subsystem S_i, let ε > 0 satisfy Lemma 6.2 and let k ≥ 1. Knowing x(k) and u(k+l|k−1), l = 1, 2, ..., N−1, determine the optimal control sequence u_i(k+l|k) ∈ U_i, l = 0, 1, ..., N−1, that minimizes the performance index

J_i = ||x̂(k+N|k, i)||_P + Σ_{l=0}^{N−1} ( ||x̂(k+l|k, i)||_Q + ||u_i(k+l|k)||_{R_i} )

subject to the constraints

Σ_{h=0}^{l} β_{l−h} ||u_i(k+h|k) − u_i(k+h|k−1)||₂ ≤ γκαε/(m−1),   l = 1, 2, ..., N−1     (6.48)

u_i(k+l|k) ∈ U_i,   l = 0, 1, ..., N−1     (6.49)

x̂(k+N|k, i) ∈ Ω(αε)     (6.50)

In the above constraints,

β_l = max_{i∈P} [ λ_max( (A^l B_i)^T P (A^l B_i) ) ]^{1/2},   l = 0, 1, ..., N−1     (6.51)

λ_max( A_c^T A_c ) ≤ 1 − κ,   0 < 1 − κ < 1     (6.52)

where 0 < κ < 1, 0 < α < 0.5 and γ > 0 are design parameters that will be explained in detail later. Equation (6.48) is the consistency constraint, which is designed mainly to maintain the recursive feasibility of the system. It ensures that the sequence of optimization variables from the previous moment is a feasible solution at the current moment by


restricting the error between the current sequence of optimization variables and the sequence from the previous moment to a certain range. Note that, although the larger invariant set Ω(ε) is available by Lemma 6.2, the terminal constraint used in each optimization problem is Ω(αε), 0 < α < 0.5, rather than Ω(ε). This choice ensures recursive feasibility, which will be used in the analysis of the next section.
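The weights β_l of the consistency constraint (6.51) can be computed directly; in the sketch below, A, P and the per-subsystem input maps B_i are illustrative placeholders:

```python
import numpy as np

# Sketch of beta_l = max_i lambda_max((A^l B_i)' P (A^l B_i))^{1/2}, Eq. (6.51).
def betas(A, B_list, P, N):
    out = []
    Al = np.eye(A.shape[0])           # A^l, starting at l = 0
    for l in range(N):
        vals = []
        for Bi in B_list:
            M = (Al @ Bi).T @ P @ (Al @ Bi)   # symmetric PSD matrix
            vals.append(np.sqrt(np.max(np.linalg.eigvalsh(M))))
        out.append(max(vals))
        Al = A @ Al
    return out

A = np.array([[0.9, 0.1], [0.0, 0.8]])
P = np.eye(2)
B_list = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
b = betas(A, B_list, P, N=3)
# With P = I, beta_0 = max_i ||B_i||_2 = 1 for these unit input maps.
assert abs(b[0] - 1.0) < 1e-12
```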

6.3.2.2 Constrained C-DMPC Solution Algorithm

Assumption 6.3 At the initial moment k_0, for each subsystem controller there exists a feasible control law u_i(k_0+l−1|k_0) ∈ U_i, l ∈ {1, ..., N}, such that the solution x̂(·|k_0, i) of the global system x(l+1+k_0) = Ax(l+k_0) + Bu(l+k_0) satisfies x̂(N+k_0|k_0, i) ∈ Ω(αε), and J_i(k_0) is bounded.

Assumption 6.3 bypasses the problem of constructing an initial feasible solution for distributed predictive control. In fact, finding an initial feasible solution is the primary difficulty in many optimization problems, regardless of whether the problem is related to controller design [6, 7]. Here, an initial feasible solution can be obtained by solving the corresponding centralized MPC problem at the initial moment.

With only one communication per cycle, the dual-mode C-DMPC control algorithm for each subsystem S_i is as follows.

Algorithm 6.1 (Constrained coordinated DMPC algorithm)
Step 1: At time k_0, initialize the controller.
➀ Initialize x(k_0) and u(k_0+l−1|k_0), l = 1, 2, ..., N, such that Assumption 6.3 is satisfied.
➁ Send u_i(k_0+l−1|k_0) and x_i(k_0) to all other subsystems; receive u_j(k_0+l−1|k_0) and x_j(k_0) from the other subsystems S_j, j ∈ P_i.
➂ If x(k_0) ∈ Ω(ε), switch to the terminal control law u_i(k) = K_i x(k) for all k ≥ k_0. Otherwise, solve Problem 6.2 to obtain u_i(k_0+l−1|k_0).
➃ Apply u_i(k_0|k_0) to the subsystem S_i.
Step 2: At moment k, update the control law.
➀ Measure x_i(k); send x_i(k) and the control sequence u_i(k+l−1|k−1) computed at the previous moment to all other subsystems; receive x_j(k) and u_j(k+l−1|k−1) from the other subsystems S_j, j ∈ P_i.
➁ If x(k) ∈ Ω(ε), switch to the terminal control law u_i(k) = K_i x(k). Otherwise, solve Problem 6.2 to obtain u_i(k+l−1|k).
➂ Apply u_i(k|k) to the subsystem S_i.
Step 3: At moment k+1, set k+1 → k and repeat Step 2.
Algorithm 6.1 is premised on the assumption that each controller Ci, i ∈ P, is able to obtain the full state x(k). In the next section, it will be shown that the C-DMPC policy drives the state x(k + l) into Ω(ε) in finite time and keeps it in Ω(ε) thereafter.
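The dual-mode switching logic of Algorithm 6.1 can be sketched in a few lines. The scalar system, gain values and the placeholder optimizer below are illustrative assumptions only; in the book's setting, Problem 6.2 is a constrained optimization solved by each local controller.

```python
# A minimal sketch of the dual-mode update in Algorithm 6.1 for one scalar
# subsystem.  The names (solve_problem_6_2, EPS, K_TERM) and the placeholder
# optimizer are assumptions, not the book's implementation: the real
# Problem 6.2 is a constrained optimization solved by each local controller.

EPS = 0.1          # level of the terminal invariant set Omega(eps)
K_TERM = -0.9      # terminal feedback gain K_i (assumed stabilizing)
A, B = 1.1, 1.0    # open-loop subsystem dynamics x+ = A x + B u

def solve_problem_6_2(x):
    """Placeholder for the local MPC problem: steer x toward Omega(eps).

    A saturated proportional law stands in for the optimized first control
    move u_i(k|k); a real controller would solve the constrained problem
    with the terminal constraint Omega(alpha * eps)."""
    u = -A / B * x * 0.8           # move most of the way toward the origin
    return max(-1.0, min(1.0, u))  # respect the input set U_i = [-1, 1]

def dual_mode_step(x):
    """One pass of Step 2 of Algorithm 6.1: terminal law inside Omega(eps),
    MPC law outside."""
    if abs(x) <= EPS:
        return K_TERM * x
    return solve_problem_6_2(x)

x = 0.8
for k in range(40):
    x = A * x + B * dual_mode_step(x)
print(abs(x) <= EPS)   # the state has entered the terminal set and stayed
```

With the terminal mode active, the closed loop is x+ = (A + B·K_TERM) x = 0.2 x, so Ω(ε) is invariant and the state keeps contracting, which mirrors the role of the terminal controller in the algorithm.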


6.3.3 Performance Analysis

In this section, the feasibility analysis is performed first, followed by the proof of stability.

6.3.3.1 Recursive Feasibility

The main result of this section is that if Assumption 6.3 holds, i.e., an initial feasible solution exists, then for any subsystem Si and any moment k ≥ 1, ui(·|k) = uif(·|k) is a feasible solution to Problem 6.2. Here uif(·|k) is the vector obtained from the MPC control sequence computed at the previous moment by removing its first term and appending the terminal feedback control, i.e.

uif(k + l − 1|k) = ui(k + l − 1|k − 1),      l = 1, ..., N − 1
uif(k + l − 1|k) = Ki xf(k + N − 1|k, i),   l = N        (6.53)

where xf(k + l|k, i), l = 1, 2, ..., N, is the solution of Eq. (6.46) with initial state x(k) under the control sequences uif(k + l − 1|k) and uj(k + l − 1|k − 1), j ∈ Pi, which can be expressed as

xf(k + l|k, i) = A^l x(k) + Σ_{h=1}^{l} A^{l−h} Bi uif(k + h − 1|k) + Σ_{j∈Pi} Σ_{h=1}^{l} A^{l−h} Bj uj(k + h − 1|k − 1)    (6.54)

Substituting Eq. (6.53) into Eq. (6.54), we get

xf(k + l|k, i) = xf(k + l|k, j) = xf(k + l|k), ∀i, j ∈ P, l = 1, 2, ..., N    (6.55)

and

xf(k + N|k) = Ac xf(k + N − 1|k)    (6.56)
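The shift-and-append construction in Eq. (6.53) can be sketched directly. The scalar model and gain below are assumed for illustration, and neighbor inputs are omitted for brevity.

```python
# A small sketch of the feasible-candidate construction (6.53): shift the
# previous control sequence by one step and append the terminal feedback
# move K * x^f(k+N-1|k).  Scalar dynamics and the gain value are assumed
# purely for illustration; neighbor-input terms of (6.54) are omitted.

A, B, K = 1.0, 1.0, -0.5

def predict(x0, u_seq):
    """Roll the model x+ = A x + B u over a control sequence
    (the idea of Eq. (6.54), without the coupling sums)."""
    xs = [x0]
    for u in u_seq:
        xs.append(A * xs[-1] + B * u)
    return xs

def feasible_candidate(u_prev, x_now):
    """Eq. (6.53): u^f(k+l-1|k) = u(k+l-1|k-1) for l = 1..N-1, plus the
    terminal move K * x^f(k+N-1|k) for l = N."""
    tail = u_prev[1:]              # drop the move already applied at k-1
    xs = predict(x_now, tail)      # x^f(k+1|k) ... x^f(k+N-1|k)
    return tail + [K * xs[-1]]

u_prev = [0.4, 0.2, 0.1]           # optimal sequence from time k-1 (N = 3)
x_now = A * 1.0 + B * u_prev[0]    # state after applying the first move
u_f = feasible_candidate(u_prev, x_now)
print(len(u_f) == len(u_prev))     # the candidate keeps horizon length N
```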

The control law uif(·|k) is a feasible solution to the optimization problem of subsystem Si at moment k ≥ 1 if uif(·|k) satisfies Eq. (6.48) and the control constraint (6.49), and the corresponding state xf(k + N|k) satisfies the terminal state constraint (6.50). To illustrate this feasibility result, define the state

x̂(k + N|k − 1, i) = Ac x̂(k + N − 1|k − 1, i)    (6.57)


Fig. 6.4 Schematic representation of the error between the feasible and hypothetical state sequences

where the state x̂(k + N|k − 1, i) is not equal to the state obtained by substituting the control law ui(k + N − 1|k − 1) defined in Eq. (6.45) into the system Eq. (6.46). This is because x̂(k + N|k − 1, i) is only an intermediate variable introduced to prove feasibility and has no effect on either the optimization problem or stability; it can therefore be assumed to take the form of Eq. (6.57).

Figure 6.4 illustrates how to ensure that the state xf(k + N|k) satisfies the terminal constraint (6.50). If the deviations of the input sequences at two adjacent update moments are within a certain range, then the deviations between the hypothetical state sequence {x̂(k + 1|k, i), x̂(k + 2|k, i), ...} and the feasible state sequence {xf(k + 1|k, i), xf(k + 2|k, i), ...} are bounded. When a suitable bound is chosen, the states x̂(k + N|k, i) and xf(k + N|k) can be made sufficiently close that xf(k + N|k) lies within the ellipse Ω(αε) shown in Fig. 6.4.

In this section, Lemma 6.3 gives sufficient conditions guaranteeing ||xf(k + l|k) − x̂(k + l|k − 1, i)||P ≤ γκαε, i ∈ P; Lemma 6.4 establishes the feasibility of the control constraints; and Lemma 6.5 establishes the feasibility of the terminal constraint. Finally, using Lemmas 6.3–6.5, it is concluded that for every i ∈ P the control law uif(·|k) is a feasible solution to Problem 6.2 at moment k. The set Ω(ε) is defined as the invariant set at level ε of the closed-loop dynamics x(k + 1) = Ac x(k), so under condition (6.52), given x̂(k + N − 1|k − 1, i) ∈ Ω(αε), it is straightforward to obtain x(k + N) ∈ Ω(αε') with ε' = (1 − κ)ε.

Lemma 6.3 If Assumptions 6.2 and 6.3 hold and Problem 6.2 has a solution at every update moment 0, ..., k − 1 for x(k0) ∈ X and k ≥ 0, then

||xf(k + l|k) − x̂(k + l|k − 1, i)||P ≤ γκαε    (6.58)

where 0 < γ < 1 is a design parameter. Moreover, if Eq. (6.58) holds, then uif(k + l − 1|k), l = 1, 2, ..., N − 1, satisfies the constraint (6.48).

Proof First, we prove that Eq. (6.58) holds when solutions exist at moments 0, 1, 2, ..., k − 1.


Substituting Eq. (6.53) into Eq. (6.54), and noting that

x(k) = Ax(k − 1) + Σ_{i∈P} Bi ui(k − 1|k − 1)

yields

xf(k + l|k) = A^{l+1} x(k − 1) + A^l Σ_{i∈P} Bi ui(k − 1|k − 1) + Σ_{h=1}^{l} A^{l−h} Bi uif(k + h − 1|k) + Σ_{j∈Pi} Σ_{h=1}^{l} A^{l−h} Bj uj(k + h − 1|k − 1)
            = A^{l+1} x(k − 1) + Σ_{h=0}^{l} A^{l−h} Bi ui(k + h − 1|k − 1) + Σ_{j∈Pi} Σ_{h=0}^{l} A^{l−h} Bj uj(k + h − 1|k − 1)    (6.59)

At moment k − 1, the predicted state is

x̂(k + l|k − 1, i) = A^{l+1} x(k − 1) + Σ_{h=0}^{l} A^{l−h} Bi ui(k + h − 1|k − 1) + Σ_{j∈Pi} Σ_{h=0}^{l} A^{l−h} Bj uj(k + h − 1|k − 2)    (6.60)

Subtracting Eq. (6.60) from Eq. (6.59), the deviation of the feasible state sequence from the state sequence predicted at moment k − 1 is

||xf(k + l|k) − x̂(k + l|k − 1, i)||P = || Σ_{h=0}^{l} Σ_{j∈Pi} A^{l−h} Bj [uj(k + h − 1|k − 1) − uj(k + h − 1|k − 2)] ||P    (6.61)

Define Sr as the subsystem that maximizes

Σ_{h=0}^{l} β_{l−h} ||ui(k + h − 1|k − 1) − ui(k + h − 1|k − 2)||_2, i ∈ P.


Then, from Eq. (6.61), it follows that

||xf(k + l|k) − x̂(k + l|k − 1, i)||P ≤ Σ_{h=0}^{l} β_{l−h} ||ur(k + h − 1|k − 1) − ur(k + h − 1|k − 2)||_2    (6.62)

For l = 1, 2, ..., N − 1, since at moments 0, 1, 2, ..., k − 1 there exists a solution satisfying the constraint (6.48) for every i ∈ P, we have

Σ_{h=0}^{l} β_{l−h} ||ur(k + h − 1|k − 1) − ur(k + h − 1|k − 2)||_2 ≤ γκαε/(m − 1)    (6.63)

and

||xf(k + l|k) − x̂(k + l|k − 1, i)||P ≤ γκαε    (6.64)

In this way, Eq. (6.58) holds for all l = 1, 2, ..., N − 1. When l = N, according to Eqs. (6.56) and (6.57),

||xf(k + N|k) − x̂(k + N|k − 1, i)||P ≤ λmax(Ac^T Ac) ||xf(k + N − 1|k) − x̂(k + N − 1|k − 1, i)||P ≤ (1 − κ)γκαε    (6.65)

In summary, Eq. (6.58) holds for all l = 1, 2, ..., N. Moreover, it follows from definition (6.53) that uif(k + l − 1|k) − ui(k + l − 1|k − 1) = 0, so uif(k + l − 1|k) satisfies the constraint (6.48) for l = 1, 2, ..., N − 1, which completes the proof.

It is shown below that if condition (6.58) is satisfied, then uif(k + l − 1|k), l = 1, 2, ..., N, is a feasible solution to Problem 6.2.

Lemma 6.4 If Assumptions 6.2 and 6.3 hold, x(k0) ∈ X, and Problem 6.2 has a solution at every update moment t = 0, ..., k − 1 for any k ≥ 0, then uif(k + l − 1|k) ∈ Ui for all l = 1, 2, ..., N and i ∈ P.

Proof Since a feasible solution to Problem 6.2 exists at moment k − 1 and uif(k + l − 1|k) = ui(k + l − 1|k − 1) for l ∈ {1, ..., N − 1}, it suffices to show that uif(k + N − 1|k) lies in the set Ui. Since the choice of ε satisfies the conditions of Lemma 6.1, Ki x ∈ Ui for all i ∈ P whenever x ∈ Ω(ε); hence a sufficient condition for uif(k + N − 1|k) to be feasible is that xf(k + N − 1|k) ∈ Ω(ε).


Then, by Lemma 6.2 and α ≤ 0.5, using the triangle inequality we get

||xf(k + N − 1|k)||P ≤ ||xf(k + N − 1|k) − x̂(k + N − 1|k − 1, i)||P + ||x̂(k + N − 1|k − 1, i)||P ≤ γκαε + αε ≤ ε    (6.66)

It follows that xf(k + N − 1|k) ∈ Ω(ε) for all i ∈ P, which together with Lemma 6.3 completes the proof.

Lemma 6.5 If Assumptions 6.2 and 6.3 hold, x(k0) ∈ X, and Problem 6.2 has a solution at every update moment t = 0, ..., k − 1 for any k ≥ 0, then the terminal state satisfies the terminal constraint for all i ∈ P.

Proof Since a solution to Problem 6.2 exists at the update moments t = 1, ..., k − 1, Lemmas 6.3 and 6.4 hold, and by the triangle inequality, Lemma 6.2 and condition (6.58), one obtains

||xf(k + N|k)||P ≤ ||xf(k + N|k) − x̂(k + N|k − 1, i)||P + ||x̂(k + N|k − 1, i)||P ≤ (1 − κ)γκαε + (1 − κ)αε ≤ αε    (6.67)

This shows that the terminal state constraint is satisfied for all i ∈ P.

Theorem 6.3 If Assumptions 6.2 and 6.3 hold, x(k0) ∈ X, and conditions (6.48)–(6.50) are satisfied at time k0, then for any i ∈ P the control law uif(·|k) and state xf(·|k) defined by Eqs. (6.53), (6.54) and (6.56) are feasible solutions to Problem 6.2 at any moment k ≥ 1.

Proof First, for k = 1, the state sequence x̂(·|1, i) = xf(·|1, i) clearly satisfies the dynamic Eq. (6.54) and the consistency constraint (6.48). Then, assuming feasible solutions exist at moments t = 1, ..., k − 1, the results of Lemmas 6.3–6.5 hold: the consistency constraint is satisfied, and the feasibility of the control constraint and the terminal state constraint is guaranteed. Thus Theorem 6.3 is proved.

6.3.3.2 Asymptotic Stability

This section presents the stability analysis of the closed-loop system.

Theorem 6.4 Suppose Assumptions 6.2 and 6.3 hold, at time k0 we have x(k0) ∈ X and conditions (6.48)–(6.50) are satisfied, and the following parameter condition holds:


ρ − α[0.42 + ((N − 1)ρ' + 1)γκ] > 0    (6.68)

where

ρ = [λmin(P^{−1/2} Q P^{−1/2})]^{1/2},  ρ' = [λmax(P^{−1/2} Q P^{−1/2})]^{1/2}    (6.69)

Then, applying Algorithm 6.1, the closed-loop system (6.42) is asymptotically stabilized to the origin.

Proof By Algorithm 6.1 and Lemma 6.2, the terminal controller stabilizes the system to the origin for k ≥ 0 with x(k) ∈ Ω(ε). Therefore, it suffices to show that, under Algorithm 6.1, the closed-loop system (6.42) enters the invariant set in finite time when x(k0) ∈ X\Ω(ε).

Define the non-negative function Vk of the global system S as Vk = Σ_{i=1}^{m} Vk,i, with

Vk,i = ||x̂(k + N|k, i)||P + Σ_{l=0}^{N−1} (||x̂(k + l|k, i)||Q + ||ui(k + l|k)||Ri)    (6.70)

It is shown below that for k ≥ 0, if x(k) ∈ X\Ω(ε), then there exists a constant η ∈ (0, ∞) such that Vk ≤ Vk−1 − η. Since the performance index of the closed-loop subsystem Si, ∀i ∈ P, under the optimal solution ui(·|k) is no larger than that under the feasible solution uif(·|k), we have

Vk,i − Vk−1,i ≤ −||x̂(k − 1|k − 1, i)||Q − ||ui(k − 1|k − 1)||Ri
              + Σ_{l=0}^{N−2} (||xf(k + l|k)||Q + ||uif(k + l|k)||Ri)
              + ||xf(k + N − 1|k)||Q + ||uif(k + N − 1|k)||Ri + ||xf(k + N|k)||P
              − Σ_{l=0}^{N−2} (||x̂(k + l|k − 1, i)||Q + ||ûi(k + l|k − 1)||Ri) − ||x̂(k + N − 1|k − 1, i)||P    (6.71)


Assume that x(k) ∈ X\Ω(ε), i.e., ||x̂(k − 1|k − 1, i)||Q ≥ ρε. Since ||ui(k − 1|k − 1)||Ri ≥ 0, substituting Eq. (6.58) into the above inequality yields

Vk,i − Vk−1,i ≤ −ρε + ρ'(N − 1)γκαε + ||xf(k + N − 1|k)||Q + ||uif(k + N − 1|k)||Ri + ||xf(k + N|k)||P − ||x̂(k + N − 1|k − 1, i)||P    (6.72)

In the above inequality, consider the third to fifth terms:

(1/2)(||xf(k + N − 1|k)||Q + ||uif(k + N − 1|k)||Ri + ||xf(k + N|k)||P)^2 ≤ ||xf(k + N − 1|k)||Q^2 + ||uif(k + N − 1|k)||Ri^2 + ||xf(k + N|k)||P^2    (6.73)

where ||xf(k + N|k)||P^2 = ||Ac xf(k + N − 1|k)||P^2. Considering Q̂ = Q + K^T RK and Ac^T P Ac − P = −Q̂, it follows that

||xf(k + N − 1|k)||Q^2 + ||uif(k + N − 1|k)||R^2 + ||xf(k + N|k)||P^2 ≤ ||xf(k + N − 1|k)||Q̂^2 + ||Ac xf(k + N − 1|k)||P^2 = ||xf(k + N − 1|k)||P^2    (6.74)

Considering

√2 ||xf(k + N − 1|k)||P − ||x̂(k + N − 1|k − 1, i)||P ≤ 0.42 ||xf(k + N − 1|k)||P + ||xf(k + N − 1|k) − x̂(k + N − 1|k − 1, i)||P ≤ 0.42αε + γκαε    (6.75)

and substituting Eqs. (6.73)–(6.75) into Eq. (6.72), we get

Vk,i − Vk−1,i ≤ −ρε + (N − 1)ρ'γκαε + 0.42αε + γκαε = −ε[ρ − α(0.42 + ((N − 1)ρ' + 1)γκ)]    (6.76)

Combining this with Eq. (6.68), we get Vk,i − Vk−1,i < 0. Thus, for any k ≥ 0, if x(k) ∈ X\Ω(ε), there exists a constant ηi ∈ (0, ∞) such that Vk,i ≤ Vk−1,i − ηi. Since m is bounded, the inequality Vk ≤ Vk−1 − η follows with η = Σ_{i=1}^{m} ηi. With this inequality, there must exist a finite time k' such that x(k') ∈ Ω(ε): otherwise Vk → −∞ as k → ∞, contradicting Vk ≥ 0. This completes the proof.

In summary, the analysis of both the feasibility and the stability of C-DMPC has been given. If an initial feasible solution can be found, the feasibility of the algorithm is guaranteed at every subsequent update step, and the corresponding closed-loop system is asymptotically stable at the origin.
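The parameter condition (6.68) in Theorem 6.4 is easy to check numerically once ρ and ρ' are available. The following sketch uses scalar weights P and Q, for which the eigenvalue expressions in (6.69) collapse to a square root; all numeric values are assumed examples, not the book's design parameters.

```python
# A scalar sanity check of the stability condition (6.68).  For scalar
# weights P and Q, rho = rho' = sqrt(Q/P), so the eigenvalue computations
# in (6.69) reduce to a square root.  All numbers are assumed example
# values chosen only to illustrate the check.

import math

P_w, Q_w = 5.0, 1.0
rho = math.sqrt(Q_w / P_w)      # rho  = [lambda_min(P^-1/2 Q P^-1/2)]^1/2
rho_p = math.sqrt(Q_w / P_w)    # rho' = [lambda_max(...)]^1/2 (equal here)

def condition_6_68(alpha, gamma, kappa, N):
    """Left-hand side of (6.68); a positive value certifies the decrease
    of the Lyapunov-like function V_k outside Omega(eps)."""
    return rho - alpha * (0.42 + ((N - 1) * rho_p + 1) * gamma * kappa)

print(condition_6_68(alpha=0.1, gamma=0.5, kappa=0.5, N=10) > 0)
```

Note how the margin shrinks as the horizon N or the terminal-set fraction α grows, which matches the trade-off discussed in the proof.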

6.3.4 Simulation Example

This section verifies the effectiveness of the presented C-DMPC algorithm on the load frequency control (LFC) problem in power networks. The purpose of LFC is to ensure that, in the presence of power-consumption disturbances, the power generated by the generators in the grid closely matches the power consumed by the users, thereby keeping the grid frequency at 50 or 60 Hz. The power network can be decomposed into several sub-networks, each containing generation and consumption units. In this section, a power system consisting of several such sub-networks is simulated to verify the control algorithm, as shown in Fig. 6.5. A sub-network containing basic generation units, consumption units and transmission lines is used to simplify the representation of the power network model; this model reflects the basic characteristics of the LFC problem. After linearization, the continuous-time dynamics of a sub-network can be described by the following second-order model:

Fig. 6.5 Interaction relationships between subsystems


dΔδi(t)/dt = 2π Δfi(t)
dΔfi(t)/dt = −(ηK,i/ηT,i) Δfi(t) + (ηK,i/ηT,i) ΔPg,i(t) − (1/ηT,i) ΔPd,i(t) + (ηK,i/ηT,i) Σ_{j∈P+i} (ηS,ij/2π) [Δδj(t) − Δδi(t)]

where, at time t, Δδi is the change in phase angle, rad; Δfi is the change in frequency, Hz; ΔPg,i is the change in generation power, unit; ΔPd,i is the change in load disturbance, unit; and ηS,ij is the synchronization factor of the connection between the i-th and j-th sub-networks. The specific values are given in Table 6.3.

The centralized MPC, LCO-DMPC and C-DMPC are applied to this system and compared. We choose ε = 0.1, the horizon of all controllers is N = 10, and the input sequence and state of each subsystem at the initial moment are 0. Dual-mode predictive control is also used in the subsystem controllers under the centralized MPC and LCO-DMPC strategies, with the same parameters and initial values as in the C-DMPC control. The input bounds are [−1, 2] and the input-increment bounds are [−0.2, 0.2]. The simulation program is developed in MATLAB, in which the optimization problem of each local controller is solved using ILOG CPLEX (MATLAB's fmincon function can also be used). When applying the control algorithm to an automated system, the MATLAB program can be compiled directly into an executable or a dynamic link library if a suitable solver is not available.

The state responses and input variables of the system under the three control algorithms, when disturbances are injected into the subsystems S1, S3 and S4, are shown in Figs. 6.6 and 6.7, respectively. The state response curves under C-DMPC control are almost identical to those under centralized MPC control. Under LCO-DMPC control, the states of all subsystems converge to the set point, but with greater variance compared to C-DMPC.

Table 6.3 Parameters of the subnetwork (where i ∈ {1, ..., m} and j ∈ P+i)

Subsystem   Ki      Pi     Qi   Ri    Δui,U, Δui,L
S1          −0.44   5.38   4    0.2   ±1
S2          −0.34   5.36   4    0.2   ±1
S3          −0.37   5.37   4    0.2   ±1
S4          −0.52   5.40   4    0.2   ±1
S5          −0.68   5.46   4    0.2   ±1
S6          −0.37   5.37   4    0.2   ±1
S7          −0.76   5.49   4    0.2   ±1

The mean square error obtained using the


Fig. 6.6 The response curves for △δi , i ∈ P under the centralized MPC, C-DMPC and LCO-DMPC, respectively

centralized MPC and C-DMPC control methods are 0.4789 and 0.5171, respectively, while the mean square error of LCO-DMPC is twice as large as that of C-DMPC. From these simulation results, it is clear that the constrained C-DMPC introduced in this section can steer the states to their set values in the presence of disturbances. Under C-DMPC control, the communication and computational loads are greatly reduced compared with iterative algorithms, because the subsystems communicate only once in each control period, while optimization performance close to that of centralized MPC control is still obtained. The control law curves are shown in Fig. 6.8.
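For readers reproducing this kind of experiment, a single sub-network of the second-order LFC model above can be simulated with a simple forward-Euler discretization. The parameter values, the sampling time and the omission of tie-line terms below are simplifying assumptions, not the settings of Table 6.3.

```python
# A minimal sketch of one LFC sub-network (the second-order model of
# Sect. 6.3.4), discretized by forward Euler.  The gain, time constant and
# sampling time are assumed for illustration only, and tie-line terms to
# neighboring areas are dropped (a single isolated area).

ETA_K, ETA_T = 1.0, 5.0     # assumed area gain and time constant
TS = 0.1                    # sampling time [s]
TWO_PI = 6.283185307179586

def step(delta, f, p_gen, p_load):
    """One Euler step of d(delta)/dt = 2*pi*f and
    df/dt = -(eta_K/eta_T) f + (eta_K/eta_T) Pg - (1/eta_T) Pd."""
    d_delta = TWO_PI * f
    d_f = (-ETA_K / ETA_T) * f + (ETA_K / ETA_T) * p_gen \
          - (1.0 / ETA_T) * p_load
    return delta + TS * d_delta, f + TS * d_f

# With generation exactly balancing the load, a frequency deviation decays.
delta, f = 0.0, 0.05
for _ in range(200):
    delta, f = step(delta, f, p_gen=0.0, p_load=0.0)
print(abs(f) < 0.05)
```

A full reproduction would wrap this model in the local MPC controllers of Algorithm 6.1, one per sub-network, with the tie-line terms carried by the neighbor communication.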


Fig. 6.7 Response curves for △fi , i ∈ P under the centralized MPC, C-DMPC and LCO-DMPC, respectively

6.4 Summary of This Chapter

In this chapter, an unconstrained distributed predictive control method based on the optimization of a global performance index was first presented for large-scale linear systems, under the premise that all information is available to each subsystem. This method transforms the online optimization problem of a large-scale system into online optimization problems of several small systems, significantly reducing the computational complexity while ensuring good performance, and gives the global analytical solution and stability conditions for coordinated distributed predictive control. In addition, a coordinated DMPC algorithm with guaranteed stability was presented for distributed systems with input constraints. This algorithm effectively improves the optimization performance of the global system while each subsystem communicates only once in each sampling period. If an initial feasible solution can be found, the subsequent feasibility of the algorithm is also guaranteed and the closed-loop system is asymptotically stable.


Fig. 6.8 The input △Pg,i , i ∈ P, under the centralized MPC, C-DMPC and LCO-DMPC, respectively

References

1. Venkat AN et al (2008) Distributed MPC strategies with application to power system automatic generation control. IEEE Trans Control Syst Technol 16(6):1192–1206
2. Chen Q, Li S, Xi Y (2005) Distributed predictive control of the whole production process based on global optimality. J Shanghai Jiaotong Univ 39(3):349–352 (in Chinese: 陈庆, 李少远, 席裕庚. 基于全局最优的生产全过程分布式预测控制. 上海交通大学学报, 2005, 39(3):349–352)
3. Zheng Y, Li S, Wei Y (2017) Predictive control of distributed systems with global stability under communication information constraints. Control Theory Appl 34(5):575–585 (in Chinese: 郑毅, 李少远, 魏永松. 通讯信息约束下具有全局稳定性的分布式系统预测控制 (英文). 控制理论与应用, 2017, 34(5):575–585)
4. Stewart BT, Wright SJ, Rawlings JB (2011) Cooperative distributed model predictive control for nonlinear systems. J Process Control 21(5):698–704
5. Zheng Y, Li S, Qiu H (2013) Networked coordination-based distributed model predictive control for large-scale system. IEEE Trans Control Syst Technol 21(3):991–998
6. Giselsson P (2012) On feasibility, stability and performance in distributed model predictive control. IEEE Trans Autom Control
7. Mayne DQ et al (2000) Constrained model predictive control: stability and optimality. Automatica 36(6):789–814

Chapter 7

Distributed Predictive Control Under Communication Constraints

7.1 Overview

The previous two chapters introduced two coordination strategies: distributed predictive control based on a local performance index and on a global performance index. In the local-performance-index method, each local controller only needs to exchange information with its neighboring subsystems, so the method has lower network requirements; however, because each controller considers only its own performance in the optimization, the overall performance of the whole system is poorer than under the global-performance-index method. In comparison, the global-performance-index method can better improve the overall performance of the system, and combined with an iterative scheme it can guarantee the feasibility of the optimization problem during the iterations. However, this method requires each local controller to exchange information with all local controllers, the network load is relatively large, and the controller algorithm is relatively complex, which is inconvenient for engineering applications.

Distributed model predictive control owes its flexibility and fault tolerance to the fact that the subsystem controllers are mutually independent with weak dependencies; conversely, if the number of subsystems communicating with each subsystem MPC increases, the overall closed-loop system becomes less flexible and fault-tolerant. In addition, there are cases where global information is not available to the controller for management reasons or because of the system scale (e.g., multi-agent systems, partitioned power generation). Therefore, there is a need for distributed predictive control methods that can effectively improve the performance of the global closed-loop system with limited information and in the presence of structural constraints. In order to achieve a balance between global system performance and the complexity of the network communication topology, a new coordination strategy is proposed in this chapter.
In this strategy, the model predictive controller of each subsystem considers only the performance of that subsystem and of the subsystems it directly affects. This strategy can be called "neighborhood-optimization-based" distributed model predictive control [1–4]. In this chapter, Sect. 7.2 presents the design and stability analysis of unconstrained distributed predictive control under communication constraints based on this idea, applies the design to metallurgical systems, and explains why this coordination strategy can improve the global system performance. Numerical experiments show that the control results obtained with this coordination strategy are close to those obtained with the centralized approach. Section 7.3 builds on this to present the design and synthesis of distributed predictive control based on neighborhood optimization with guaranteed stability under input and state constraints [5]. With this method, each local MPC coordinates and communicates with its strongly coupled neighbors, and takes the cost functions of its strongly coupled downstream subsystems into account in its own cost function to improve the performance of the entire closed-loop system. To reduce unnecessary network connectivity, the interaction terms of weakly coupled upstream neighbors are ignored in its prediction model and treated as bounded disturbances. In addition, the closed-loop optimization performance is used to determine which interactions should be regarded as strong couplings and be considered in the DMPC. Tube-based technology is used to guarantee recursive feasibility and stability in target-tracking problems, and the asymptotic stability of the closed-loop system with state constraints is guaranteed.

© Chemical Industry Press 2023
S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2_7

7.2 Distributed Predictive Control Based on Neighborhood Optimization

7.2.1 State-, Input-Coupled Distributed Systems

Consider a general system consisting of m discrete-time linear subsystems Si (i = 1, ..., m), each coupled to the other subsystems through inputs and states. The state-space description of subsystem Si can be expressed as

xi(k + 1) = Aii xi(k) + Bii ui(k) + Σ_{j=1,...,m; j≠i} Aij xj(k) + Σ_{j=1,...,m; j≠i} Bij uj(k)
yi(k) = Cii xi(k) + Σ_{j=1,...,m; j≠i} Cij xj(k)    (7.1)

In the above equation, xi ∈ R^{n_xi}, ui ∈ R^{n_ui} and yi ∈ R^{n_yi} are the state, input and output vectors of the subsystem, respectively. The overall system model can be expressed as

x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k)    (7.2)

In the above model, x ∈ R^{n_x}, u ∈ R^{n_u} and y ∈ R^{n_y} are the state, input and output vectors, respectively, and A, B and C are the system matrices. The control objective of the system is to minimize the following global performance index:

J(k) = Σ_{i=1}^{m} [ Σ_{l=1}^{P} ||yi(k + l) − yi^d(k + l)||²_{Qi} + Σ_{l=1}^{M} ||Δui(k + l − 1)||²_{Ri} ]    (7.3)

where yi^d and Δui(k) are the output set value and input increment of Si, with Δui(k) = ui(k) − ui(k − 1); Qi and Ri are weight matrices; and P, M ∈ N, P ≥ M, are the prediction and control horizons, respectively. The problem studied in this chapter is to design coordination strategies that can significantly improve the overall performance of the closed-loop system, while adding few or no communication links (i.e., without destroying the flexibility and fault tolerance of the system), within a distributed framework.
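The relation between the subsystem models (7.1) and the global model (7.2) — the global A and B are block matrices whose (i, j) blocks are Aij and Bij — can be illustrated with two scalar subsystems. All coupling values below are invented for illustration.

```python
# A sketch of how the coupled subsystem models (7.1) stack into the global
# model (7.2): block (i, j) of the global A holds A_ij, and likewise for B.
# Two scalar subsystems are used so the blocks are plain numbers; the
# coupling values are assumed example data (dyadic, so arithmetic is exact).

A_blocks = [[1.0, 0.5],    # A_11, A_12
            [0.25, 1.0]]   # A_21, A_22
B_blocks = [[1.0, 0.0],    # B_11, B_12
            [0.5, 1.0]]    # B_21, B_22

def global_step(x, u):
    """x(k+1) = A x(k) + B u(k) with A, B assembled from the blocks."""
    return [sum(A_blocks[i][j] * x[j] for j in range(2))
            + sum(B_blocks[i][j] * u[j] for j in range(2))
            for i in range(2)]

def local_step(i, x, u):
    """Subsystem form (7.1): x_i(k+1) = A_ii x_i + B_ii u_i + couplings."""
    return (A_blocks[i][i] * x[i] + B_blocks[i][i] * u[i]
            + sum(A_blocks[i][j] * x[j] + B_blocks[i][j] * u[j]
                  for j in range(2) if j != i))

x, u = [2.0, 4.0], [1.0, 2.0]
print(global_step(x, u) == [local_step(0, x, u), local_step(1, x, u)])
```

The equality printed at the end is exactly the statement that (7.2) is the stacked form of the m instances of (7.1).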

7.2.2 Local Predictive Controller Design

The distributed predictive control proposed in this chapter consists of a set of mutually independent MPC controllers Ci, i = 1, 2, ..., n, corresponding to the subsystems Si, i = 1, 2, ..., n, respectively. These subsystem MPCs can exchange information with their neighboring controllers through the network. To clearly illustrate the proposed control method, the states xi(k) of each subsystem are assumed to be measurable, and the following assumptions and definitions are made (Assumption 7.1, Definition 7.1 and Table 7.1).

Assumption 7.1 (i) The local controllers are synchronized. (ii) The controllers communicate with each other only once during a sampling cycle. (iii) There is a one-step signal delay in the communication process.

In fact, this set of assumptions is not too restrictive. Because the sampling interval is usually much longer than the computation time in process control, the synchronization condition is not strong; the controllers communicate only once per sampling cycle, as proposed in assumption (ii), to reduce the amount of network communication while increasing the reliability of the algorithm; and since instantaneous communication does not exist in a real process, the one-step delay in assumption (iii) is necessary.
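The once-per-cycle communication with one-step delay can be mimicked by a small mailbox abstraction: information posted during cycle k is delivered at cycle k + 1. The DelayedChannel class below is an assumed implementation device, not part of the book's formulation.

```python
# A toy illustration of Assumption 7.1(ii)-(iii): controllers exchange
# information once per sampling cycle, and a message sent at time k is
# only available to its receiver at time k + 1 (one-step delay).

class DelayedChannel:
    """Stores the value posted during cycle k and delivers it at k + 1."""

    def __init__(self, initial):
        self.delivered = initial   # what receivers see now
        self.in_flight = initial   # what was posted this cycle

    def post(self, value):         # called once per cycle by the sender
        self.in_flight = value

    def tick(self):                # advance one sampling period
        self.delivered = self.in_flight

ch = DelayedChannel(initial=0.0)
ch.post(1.5)                       # controller C_j broadcasts u_j(k)
print(ch.delivered)                # → 0.0 : C_i still sees the old value
ch.tick()
print(ch.delivered)                # → 1.5 : available one step later
```

This is why, in the algorithms of this chapter, each controller works with neighbor trajectories computed at moment k − 1 rather than k.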


Table 7.1 Symbols used in the text

Symbol — Interpretation
x̂i(l|h), ŷi(l|h) — predicted values of xi(l) and yi(l) calculated at time h; l, h ∈ N, h < l
ui(l|h), Δui(l|h) — predicted values of ui(l) and of the input increment Δui(l) calculated by controller Ci at time h; l, h ∈ N, h < l
yi^d(l|h) — set value of yi(l|h)
x̄i(k), ȳi(k) — state and output of Si's neighborhood, x̄i(k) = [x_{i1}^T(k) x_{i2}^T(k) ··· x_{im}^T(k)]^T, ȳi(k) = [y_{i1}^T(k) y_{i2}^T(k) ··· y_{im}^T(k)]^T, where m is the number of output neighbors of subsystem Si
w̄i(k), v̄i(k) — interactions between the states and outputs of Si's output neighbors; refer to (7.9) and (7.10)
x̄i(l|h), ȳi(l|h) — predicted values of x̄i(l) and ȳi(l) calculated at time h; l, h ∈ N, h < l
w̄i(l|h), v̄i(l|h) — predicted values of w̄i(l) and v̄i(l) calculated at time h; l, h ∈ N, h < l
ȳi^d(l|h) — setting value of ȳi(l|h)
Ui(l, p|h) — input sequence vector of the subsystem, Ui(l, p|h) = [ui^T(l|h) ui^T(l+1|h) ··· ui^T(l+p|h)]^T; p, l, h ∈ N, h < l
ΔUi(l, p|h) — input increment sequence vector of the subsystem, ΔUi(l, p|h) = [Δui^T(l|h) Δui^T(l+1|h) ··· Δui^T(l+p|h)]^T; h < l
U(l, p|h) — input sequence vector of the global system, U(l, p|h) = [u1^T(l|h) ··· un^T(l|h) ··· u1^T(l+p|h) ··· un^T(l+p|h)]^T
X̂i(l, p|h) — state estimation sequence of the subsystem, X̂i(l, p|h) = [x̂i^T(l|h) x̂i^T(l+1|h) ··· x̂i^T(l+p|h)]^T
X̂(l, p|h) — state estimation sequence of the global system, X̂(l, p|h) = [x̂1^T(l|h) ··· x̂n^T(l|h) ··· x̂1^T(l+p|h) ··· x̂n^T(l+p|h)]^T
X̄i(l, p|h) — state estimation sequence of the neighborhood subsystems, X̄i(l, p|h) = [x̄i^T(l|h) x̄i^T(l+1|h) ··· x̄i^T(l+p|h)]^T; p, l, h ∈ N, h < l
Ȳi(l, p|h) — output estimation sequence of the neighborhood subsystems, Ȳi(l, p|h) = [ȳi^T(l|h) ȳi^T(l+1|h) ··· ȳi^T(l+p|h)]^T; p, l, h ∈ N, h < l
Ȳi^d(l, p|h) — setting value of Ȳi(l, p|h)
W̄i(l, p|h) — state action vector sequence of the neighborhood subsystems, W̄i(l, p|h) = [w̄i^T(l|h) w̄i^T(l+1|h) ··· w̄i^T(l+p|h)]^T; p, l, h ∈ N, h < l
V̄i(l, p|h) — output action vector sequence of the neighborhood subsystems, V̄i(l, p|h) = [v̄i^T(l|h) v̄i^T(l+1|h) ··· v̄i^T(l+p|h)]^T; p, l, h ∈ N, h < l
X̂(l, p|h) — state estimation sequence of the global system in stacked-block form, X̂(l, p|h) = [X̂1^T(l, p|h) ··· X̂m^T(l, p|h)]^T
U(l, p|h) — input sequence vector of the global system in stacked-block form, U(l, p|h) = [U1^T(l, p|h) ··· Um^T(l, p|h)]^T

Definition 7.1 Neighboring subsystems: if subsystem Si interacts with subsystem Sj such that the output and state of Si are influenced by Sj, then Sj is referred to as an input neighbor of Si, and Si is called an output neighbor of Sj. Si and Sj are referred to as neighboring subsystems, or neighbors.

Subsystem's neighborhood: the input (output) neighborhood Ni^in (Ni^out) of subsystem Si is the set of all input (output) neighbors of Si:

Ni^in = {Si, Sj | Sj is an input neighbor of Si}
Ni^out = {Si, Sj | Sj is an output neighbor of Si}

Subsystem Si's neighborhood Ni is the set of all neighbors of Si: Ni = Ni^in ∪ Ni^out
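Given the coupling structure of (7.1), the neighbor sets of Definition 7.1 can be computed mechanically. The 3-subsystem influence pattern below is invented for illustration, and for brevity the sketch omits the convention that Si belongs to its own neighborhood sets.

```python
# A sketch of Definition 7.1: derive input/output neighbor sets from the
# coupling structure of (7.1).  Subsystem j is an input neighbor of i when
# a nonzero A_ij or B_ij makes S_j influence S_i.  The influence pattern
# below is an assumed example; self-membership of the sets is omitted.

# influences[i][j] is True when subsystem j directly affects subsystem i
influences = [
    [False, True,  False],   # S_2 affects S_1
    [False, False, True],    # S_3 affects S_2
    [True,  False, False],   # S_1 affects S_3
]

def input_neighbors(i):
    """N_i^in: subsystems whose states/inputs enter S_i's model."""
    return {j for j in range(len(influences)) if influences[i][j]}

def output_neighbors(i):
    """N_i^out: subsystems whose models S_i's state/input enters."""
    return {j for j in range(len(influences)) if influences[j][i]}

print(input_neighbors(0), output_neighbors(0))   # {1} {2}
```

On a real plant this boolean pattern would be read off the nonzero blocks Aij, Bij and Cij of (7.1).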

7.2.2.1 Mathematical Description of the Local Controller Optimization Problem

(1) Performance index

For the large-scale system considered here, the global performance index (7.3) can be decomposed into the following local performance index Ji for each subsystem


Si, i = 1, 2, ..., n:

Ji(k) = Σ_{l=1}^{P} ||ŷi(k + l|k) − yi^d(k + l|k)||²_{Qi} + Σ_{l=1}^{M} ||Δui(k + l − 1|k)||²_{Ri}    (7.4)
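The local index (7.4) can be evaluated directly from a predicted output trajectory and a planned sequence of input increments. The horizons (P = 2, M = 1), scalar weights and trajectories below are assumed example values.

```python
# A sketch of evaluating the local performance index (7.4) for one
# subsystem.  Scalar weights Q, R and invented short horizons (P = 2,
# M = 1) keep the example self-contained; real controllers use weight
# matrices and the predicted trajectories of their own models.

def local_cost(y_pred, y_des, du, Q, R):
    """J_i(k) = sum_{l=1..P} Q (y_hat - y^d)^2 + sum_{l=1..M} R du^2."""
    track = sum(Q * (y - yd) ** 2 for y, yd in zip(y_pred, y_des))
    move = sum(R * d ** 2 for d in du)
    return track + move

print(local_cost(y_pred=[0.5, 0.25], y_des=[0.0, 0.0],
                 du=[0.5], Q=1.0, R=0.5))   # → 0.4375
```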

In distributed MPC, the local decision variables of the subsystem Si can be obtaind by solving the optimization problem min Ji (k) based on the estimates of the △U(k,M|k )

neighborhood states and neighborhood inputs at k − 1 moments and considering the local input–output constraints (the state prediction method), or by Nash optimization [1]. However, since the state evolution of the output neighborhood of the subsystem is influenced by the optimization decision variables of the subsystem, see Eq. (7.1), the performance of the output neighborhood subsystems of the subsystem i can be affected by the inputs of the subsystem i. To solve this problem, a method called “Neighborhood Optimization”is used in this paper, with the following performance index.  J¯ i (k) = Ji (k) j∈niout

$$\bar{J}_i(k) = \sum_{j \in N_i^{\mathrm{out}}} \left[ \sum_{l=1}^{P} \left\| \hat{y}_j(k+l|k) - y_j^d(k+l|k) \right\|_{Q_j}^2 + \sum_{l=1}^{M} \left\| \Delta u_j(k+l-1|k) \right\|_{R_j}^2 \right] \tag{7.5}$$

Since $\Delta u_j(k+l-1|k)$ ($j \in N_i^{\mathrm{out}}$, $j \neq i$, $l = 1, \ldots, M$) is unknown and independent of the control decision increment of the subsystem $S_i$, the control decision increment $\Delta u_j(k+l-1|k-1)$ computed at moment $k-1$ is used to approximate $\Delta u_j(k+l-1|k)$ at moment $k$, so that Eq. (7.5) becomes

$$\begin{aligned} \bar{J}_i(k) &= \sum_{j \in N_i^{\mathrm{out}}} \sum_{l=1}^{P} \left\| \hat{y}_j(k+l|k) - y_j^d(k+l|k) \right\|_{Q_j}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1|k) \right\|_{R_i}^2 \\ &\quad + \sum_{j \in N_i^{\mathrm{out}}, j \neq i} \sum_{l=1}^{M} \left\| \Delta u_j(k+l-1|k-1) \right\|_{R_j}^2 \\ &= \sum_{j \in N_i^{\mathrm{out}}} \sum_{l=1}^{P} \left\| \hat{y}_j(k+l|k) - y_j^d(k+l|k) \right\|_{Q_j}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1|k) \right\|_{R_i}^2 + \text{Constant} \end{aligned}$$

Simplifying the above equation by dropping the constant term, redefine $\bar{J}_i(k)$ as

7.2 Distributed Predictive Control Based on Neighborhood Optimization

$$\bar{J}_i(k) = \sum_{l=1}^{P} \left\| \hat{\bar{y}}_i(k+l|k) - \bar{y}_i^d(k+l|k) \right\|_{\bar{Q}_i}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1|k) \right\|_{R_i}^2 \tag{7.6}$$

where $\bar{Q}_i = \operatorname{diag}(Q_i, Q_{i_1}, \ldots, Q_{i_m})$. The optimization index $\bar{J}_i(k)$ takes into account not only the performance of the subsystem $S_i$ but also that of the output neighborhood of $S_i$, fully accounting for the effect of the control variables of $S_i$ on the subsystems $S_j \in N_i^{\mathrm{out}}$. The method is therefore expected to improve the global performance of the system. It is worth noting that the global performance could potentially be improved further if the global performance index (7.3) were used in each subsystem, but that requires a high-quality network environment, and both the communication complexity and the algorithmic complexity of such an algorithm are higher than those of neighborhood optimization. On the other hand, since the performance of neighborhood optimization is already very close to that of global optimization (as shown later in this chapter), the neighborhood optimization approach is adopted here.

(2) Predictive models

Since the state evolution of the subsystems $S_j \in N_i^{\mathrm{out}}$ is affected by the manipulated variable $u_i(k)$ of the subsystem $S_i$, to improve the prediction accuracy, when predicting the state evolution of $S_i$ and its output-neighboring subsystems, $S_i$ and its output neighbors should be considered together as one larger neighborhood subsystem. Assuming that the number of output neighbors of the subsystem $S_i$ is $m$, the state evolution equation of its output neighborhood subsystem follows directly from (7.1) as

$$\begin{cases} \bar{x}_i(k+1) = \bar{A}_i \bar{x}_i(k) + \bar{B}_i u_i(k) + \bar{w}_i(k) \\ \bar{y}_i(k) = \bar{C}_i \bar{x}_i(k) + \bar{v}_i(k) \end{cases} \tag{7.7}$$

where

$$\bar{A}_i = \left[ \bar{A}_i^{(1)} \;\; \bar{A}_i^{(2)} \right] = \begin{bmatrix} A_{ii} & A_{i i_1} & \cdots & A_{i i_m} \\ A_{i_1 i} & A_{i_1 i_1} & \cdots & A_{i_1 i_m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{i_m i} & A_{i_m i_1} & \cdots & A_{i_m i_m} \end{bmatrix}, \quad \bar{B}_i = \begin{bmatrix} B_{ii} \\ B_{i_1 i} \\ \vdots \\ B_{i_m i} \end{bmatrix}, \quad \bar{C}_i = \begin{bmatrix} C_{ii} & C_{i i_1} & \cdots & C_{i i_m} \\ C_{i_1 i} & C_{i_1 i_1} & \cdots & C_{i_1 i_m} \\ \vdots & \vdots & \ddots & \vdots \\ C_{i_m i} & C_{i_m i_1} & \cdots & C_{i_m i_m} \end{bmatrix} \tag{7.8}$$


$$\bar{w}_i(k) = \begin{bmatrix} \displaystyle\sum_{j \in N_i^{\mathrm{in}}, j \neq i} B_{ij} u_j(k) + 0 \\ \displaystyle\sum_{j \in N_{i_1}^{\mathrm{in}}, j \neq i} B_{i_1 j} u_j(k) + \sum_{j \in N_{i_1}^{\mathrm{in}}, j \notin N_i^{\mathrm{out}}} A_{i_1 j} x_j(k) \\ \vdots \\ \displaystyle\sum_{j \in N_{i_m}^{\mathrm{in}}, j \neq i} B_{i_m j} u_j(k) + \sum_{j \in N_{i_m}^{\mathrm{in}}, j \notin N_i^{\mathrm{out}}} A_{i_m j} x_j(k) \end{bmatrix} \tag{7.9}$$

$$\bar{v}_i(k) = \begin{bmatrix} 0 \\ \displaystyle\sum_{j \in N_{i_1}^{\mathrm{in}}, j \notin N_i^{\mathrm{out}}} C_{i_1 j} x_j(k) \\ \vdots \\ \displaystyle\sum_{j \in N_{i_m}^{\mathrm{in}}, j \notin N_i^{\mathrm{out}}} C_{i_m j} x_j(k) \end{bmatrix} \tag{7.10}$$

It is worth noting that in the neighborhood subsystem model (7.7) the input is still the input of the subsystem $S_i$. The inputs of the output-neighboring subsystems $S_j \in N_i^{\mathrm{out}}$, $j \neq i$, are regarded as measurable disturbances, because each MPC can only determine the manipulated variables of its own subsystem. Due to the unit delay introduced by the network (see Assumption 7.1), information about the other subsystems is available to the subsystem $S_i$ only after one sampling interval. That is, at moment $k$, the predicted values $\hat{x}_{i_h}(k|k)$, $\hat{\bar{w}}_i(k+l-s|k)$ and $\hat{\bar{v}}_i(k+l|k)$ are not available to the controller $C_i$; only the predicted values $\hat{x}_{i_h}(k|k-1)$, $\hat{\bar{w}}_i(k+l-s|k-1)$ and $\hat{\bar{v}}_i(k+l|k-1)$ can be obtained from the other controllers through network information exchange. Therefore, in the controller $C_i$, the inter-subsystem interaction part of the model (7.7) should be calculated based on the states and predicted inputs computed by the other subsystems at moment $k-1$, and the initial states of the output-neighboring subsystems are given by $\hat{x}_{i_h}(k|k-1)$ ($h = 1, \ldots, m$) instead. For all $i = 1, \ldots, n$ define

$$\hat{\bar{x}}_i(k|k) = \left[ \hat{x}_i^T(k|k) \;\; \hat{x}_{i_1}^T(k|k-1) \;\; \cdots \;\; \hat{x}_{i_m}^T(k|k-1) \right]^T \tag{7.11}$$

In this way, the state and output of the neighborhood subsystem over the prediction horizon can be predicted by the following equation.

$$\begin{cases} \hat{\bar{x}}_i(k+l|k) = \bar{A}_i^l \hat{\bar{x}}_i(k|k) + \displaystyle\sum_{s=1}^{l} \bar{A}_i^{s-1} \bar{B}_i u_i(k+l-s|k) + \sum_{s=1}^{l} \bar{A}_i^{s-1} \hat{\bar{w}}_i(k+l-s|k-1) \\ \hat{\bar{y}}_i(k+l|k) = \bar{C}_i \hat{\bar{x}}_i(k+l|k) + \hat{\bar{v}}_i(k+l|k-1) \end{cases} \tag{7.12}$$

(3) Optimization problem

For each independent controller $C_i$ ($i = 1, \ldots, n$), the unconstrained production full-process MPC problem with prediction horizon $P$ and control horizon $M$ ($M < P$) can be turned into solving the following optimization problem at each moment $k$:

$$\min_{\Delta U_i(k,M|k)} \bar{J}_i(k) = \sum_{l=1}^{P} \left\| \hat{\bar{y}}_i(k+l|k) - \bar{y}_i^d(k+l|k) \right\|_{\bar{Q}_i}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1|k) \right\|_{R_i}^2 \quad \text{s.t. Eq. (7.12)} \tag{7.13}$$

At moment $k$, each controller $C_i$ ($i = 1, \ldots, n$) obtains $\hat{\bar{w}}_i(k+l-1|k-1)$ and $\hat{\bar{v}}_i(k+l|k-1)$, $l = 1, \ldots, P$, by exchanging information, and takes them, together with the current state $\hat{\bar{x}}_i(k|k)$, as given when solving the optimization problem (7.13). After completing the solution, it selects the first element $\Delta u_i^*(k|k)$ of the optimal solution $\Delta U_i^*(k)$ and applies $u_i(k) = u_i(k-1) + \Delta u_i^*(k|k)$ to the subsystem $S_i$. Then the state trajectory over the optimization horizon is estimated by Eq. (7.12) and is sent, along with the optimized control sequence, to the other subsystems through the network. At moment $k+1$, each controller uses this information to estimate the effect of the other subsystems and computes a new control law based on it. These steps are repeated in each control period throughout the control process. When solving for the control action of the controller $C_i$ ($i = 1, \ldots, n$), it is only necessary to know the future behavior of its neighboring subsystems $S_j \in N_i$ and of the neighbors of those neighbors, $S_g \in N_j$. Similarly, the controller $C_i$ only needs to send its future behavior to the controller $C_j$ of each subsystem $S_j \in N_i$ and the controller $C_g$ of each subsystem $S_g \in N_j$. The next section describes how to find the analytical solution to the optimization problem (7.13).
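The per-controller steps above (receive neighbors' $k-1$ predictions, solve the local problem, apply the first move, broadcast) can be sketched as follows; all helper names (`solve_qp`, `plant_apply`, `broadcast`) are hypothetical stand-ins, not the book's API.

```python
# Sketch of one control period of the neighborhood-optimization DMPC loop
# for a single controller C_i (hypothetical callback interfaces).
def dmpc_step(i, u_prev, x_hat, inbox, solve_qp, plant_apply, broadcast):
    # 1. Interaction/output predictions received from neighbors at moment k-1.
    w_bar, v_bar = inbox["w_bar"], inbox["v_bar"]
    # 2. Solve the local problem (7.13) for the optimal increment sequence.
    dU = solve_qp(x_hat, w_bar, v_bar)
    # 3. Apply only the first increment: u_i(k) = u_i(k-1) + du*(k|k).
    u = u_prev + dU[0]
    plant_apply(i, u)
    # 4. Send the optimized sequence to neighbors (and neighbors-of-neighbors)
    #    so they can use it at moment k+1.
    broadcast(i, {"dU": dU, "u": u})
    return u
```

In a full implementation, `solve_qp` would be the analytic solution derived in Sect. 7.2.2.2, and `broadcast` would route messages only to $N_i$ and the neighbors of $N_i$.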

7.2.2.2 Analytical Solution for Closed-Loop Systems

The main purpose of this subsection is to derive the analytical solution of the production full-process MPC method presented in this chapter. To this end, the analytical form of the predicted interactions and states of the subsystems is given first. Assume that

(7.14)

(7.15)

$$\tilde{\tilde{B}}_i = \operatorname{diag}_P \left\{ \begin{bmatrix} B_{i,1} & \cdots & B_{i,i-1} & 0_{n_{x_i} \times n_{u_i}} & B_{i,i+1} & \cdots & B_{i,n} \\ B_{i_1,1} & \cdots & B_{i_1,i-1} & 0_{n_{x_{i_1}} \times n_{u_i}} & B_{i_1,i+1} & \cdots & B_{i_1,n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ B_{i_m,1} & \cdots & B_{i_m,i-1} & 0_{n_{x_{i_m}} \times n_{u_i}} & B_{i_m,i+1} & \cdots & B_{i_m,n} \end{bmatrix} \right\} \tag{7.16}$$

(7.17)

where $A_{i,j}$, $B_{i,j}$ and $C_{i,j}$ are zero matrices if $S_j \notin N_h^{\mathrm{in}}$ ($S_h \in N_i^{\mathrm{out}}$).

Lemma 7.1 (Prediction of the interaction among subsystems) Suppose Assumption 7.1 holds. For each controller $C_i$, $i = 1, \ldots, n$, the predicted interaction sequence obtained at moment $k$, based on the information of the other subsystems at moment $k-1$ obtained by information exchange, is

$$\hat{\bar{W}}_i(k, P|k-1) = \tilde{A}_i^{(1)} \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1), \tag{7.18}$$

$$\hat{\bar{V}}_i(k, P|k-1) = \tilde{C}_i \hat{X}(k, P|k-1),$$

where

$$\tilde{\Gamma} = \begin{bmatrix} 0_{(M-1)n_u \times n_u} & I_{(M-1)n_u} \\ 0_{n_u \times (M-1)n_u} & I_{n_u} \\ \vdots & \vdots \\ 0_{n_u \times (M-1)n_u} & I_{n_u} \end{bmatrix}, \quad \tilde{B}_i = \tilde{\tilde{B}}_i \tilde{\Gamma}, \quad n_u = \sum_{l=1}^{n} n_{u_l} \tag{7.19}$$

in which the last block row of $\tilde{\Gamma}$ is repeated $P-M+1$ times.

Proof At moment $k$, each controller $C_i$ ($i = 1, \ldots, n$) can write the vector form of the interaction prediction [see Eqs. (7.9), (7.10)] after $h$ ($h = 1, \ldots, P$) steps based on the information at moment $k-1$. Suppose that the last $P-M+1$ control moves of $U_j(k, P|k-1)$ ($j = 1, 2, \ldots, n$), which are not contained in $U_j(k-1, M|k-1)$, are all equal to the last element $u_j(k+M-1|k-1)$ of $U_j(k-1, M|k-1)$. According to definitions (7.14)–(7.17), (7.19) and Table 7.1, the relation (7.18) is obtained.

Lemma 7.2 (State prediction) Suppose Assumption 7.1 holds. For each controller $C_i$, $i = 1, \ldots, n$, the predicted state sequence and output sequence of the subsystem $S_i$ and its output-neighboring subsystems at moment $k$ can be expressed as

$$\begin{cases} \bar{X}_i(k+1, P|k) = \bar{S}_i \left[ \bar{A}_i^{(1)} \hat{x}_i(k|k) + \bar{\bar{B}}_i U_i(k, M|k) + \tilde{A}_i \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \\ \bar{Y}_i(k+1, P|k) = \bar{\bar{C}}_i \bar{X}_i(k+1, P|k) + T_i \tilde{C}_i \hat{X}(k+1, P|k-1) \end{cases} \tag{7.20}$$





where

$$\bar{A}_i = \left[ \bar{A}_i^{(1)} \;\; \bar{A}_i^{(2)} \right], \quad T_i = \begin{bmatrix} 0_{(P-1)n_{\bar{y}} \times n_{\bar{y}}} & I_{(P-1)n_{\bar{y}}} \\ 0_{n_{\bar{y}} \times (P-1)n_{\bar{y}}} & I_{n_{\bar{y}}} \end{bmatrix}, \quad n_x = \sum_{l=1}^{n} n_{x_l},$$

$$\bar{S}_i = \begin{bmatrix} \bar{A}_i & \cdots & 0 \\ \vdots & \ddots & \vdots \\ \bar{A}_i^{P-1} & \cdots & \bar{A}_i \end{bmatrix}, \quad \bar{\bar{B}}_i = \begin{bmatrix} \operatorname{diag}_M\{\bar{B}_i\} \\ 0_{n_{\bar{x}_i} \times (M-1)n_{u_i}} \;\; \bar{B}_i \\ \vdots \\ 0_{n_{\bar{x}_i} \times (M-1)n_{u_i}} \;\; \bar{B}_i \end{bmatrix}, \quad \bar{\bar{C}}_i = \operatorname{diag}_P\{\bar{C}_i\} \tag{7.21}$$

Proof Substituting $u_i(k+P-1|k) = u_i(k+P-2|k) = \cdots = u_i(k+M|k) = u_i(k+M-1|k)$ and $\hat{\bar{v}}_i(k+P|k-1) = \hat{\bar{v}}_i(k+P-1|k-1)$ into Eq. (7.12), and replacing $\hat{\bar{W}}_i(k, P|k-1)$ and $\hat{\bar{V}}_i(k, P|k-1)$ by the analytic expression (7.18), the following vector form of the prediction sequence for the controller $C_i$ is obtained:

$$\bar{X}_i(k+1, P|k) = \bar{S}_i \left[ \bar{A}_i \hat{\bar{x}}_i(k|k) + \bar{\bar{B}}_i U_i(k, M|k) + \tilde{A}_i^{(1)} \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \tag{7.22}$$

Let $\hat{x}'_i(k|k-1) = [\hat{x}_{i_1}^T(k|k-1) \; \cdots \; \hat{x}_{i_m}^T(k|k-1)]^T$. By definitions (7.8), (7.14), (7.15) and (7.21), the above equation becomes


$$\begin{aligned} \bar{X}_i(k+1, P|k) &= \bar{S}_i \left[ \bar{A}_i^{(1)} \hat{x}_i(k|k) + \bar{A}_i^{(2)} \hat{x}'_i(k|k-1) + \bar{\bar{B}}_i U_i(k, M|k) + \tilde{A}_i^{(1)} \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \\ &= \bar{S}_i \left[ \bar{A}_i^{(1)} \hat{x}_i(k|k) + \bar{\bar{B}}_i U_i(k, M|k) + \left( \tilde{A}_i^{(1)} + \tilde{A}_i^{(2)} \right) \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \\ &= \bar{S}_i \left[ \bar{A}_i^{(1)} \hat{x}_i(k|k) + \bar{\bar{B}}_i U_i(k, M|k) + \tilde{A}_i \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \end{aligned}$$

According to model (7.7) and definition (7.21), the predicted output sequence of the controller $C_i$ can be expressed as

$$\bar{Y}_i(k+1, P|k) = \bar{\bar{C}}_i \bar{X}_i(k+1, P|k) + T_i \tilde{C}_i \hat{X}(k+1, P|k-1) \tag{7.23}$$

This completes the proof.

By introducing the following matrices, the problem (7.13) can be transformed into a standard quadratic programming problem. Let

$$\bar{\bar{Q}}_i = \operatorname{diag}_P\{\bar{Q}_i\}, \quad \bar{R}_i = \operatorname{diag}_M\{R_i\}, \tag{7.24}$$

$$\bar{\bar{S}}_i = \bar{\bar{C}}_i \bar{S}_i, \quad N_i = \bar{\bar{S}}_i \bar{\bar{B}}_i \Lambda_i, \quad \Gamma'_i = \underbrace{\begin{bmatrix} I_{n_{u_i}} \\ \vdots \\ I_{n_{u_i}} \end{bmatrix}}_{M \text{ blocks}}, \quad \Lambda_i = \underbrace{\begin{bmatrix} I_{n_{u_i}} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ I_{n_{u_i}} & \cdots & I_{n_{u_i}} \end{bmatrix}}_{M \times M \text{ blocks}} \tag{7.25}$$

Then the following lemma holds.

Lemma 7.3 (Quadratic programming form) Suppose Assumption 7.1 holds. For each controller $C_i$, $i = 1, \ldots, n$, the following quadratic programming problem is solved at moment $k$:

$$\min_{\Delta U_i(k,M|k)} \left[ \Delta U_i^T(k, M|k) H_i \Delta U_i(k, M|k) - G_i^T(k+1, P|k) \Delta U_i(k, M|k) \right] \tag{7.26}$$

where the positive definite matrix $H_i$ has the form

$$H_i = N_i^T \bar{\bar{Q}}_i N_i + \bar{R}_i \tag{7.27}$$

and


$$G_i(k+1, P|k) = 2 N_i^T \bar{\bar{Q}}_i \left[ \bar{Y}_i^d(k+1, P|k) - \hat{Z}_i(k+1, P|k) \right] \tag{7.28}$$

where

$$\hat{Z}_i(k+1, P|k) = \bar{\bar{S}}_i \left[ \bar{\bar{B}}_i \Gamma'_i u_i(k-1) + \bar{A}_i^{(1)} \hat{x}_i(k|k) + \tilde{A}_i \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] + T_i \tilde{C}_i \hat{X}(k+1, P|k-1) \tag{7.29}$$

Proof According to definition (7.24), in vector form the objective function of the controller $C_i$ can be expressed equivalently as

$$\bar{J}_i = \left\| \bar{Y}_i(k+1, P|k) - \bar{Y}_i^d(k+1, P|k) \right\|_{\bar{\bar{Q}}_i}^2 + \left\| \Delta U_i(k, M|k) \right\|_{\bar{R}_i}^2 \tag{7.30}$$

The predicted output sequence of the output neighborhood, $\bar{Y}_i(k+1, P|k)$, is a function of the control increments. Thus, to represent $\bar{J}_i$ as a function of the control increment sequence $\Delta U_i(k, M|k)$, the analytic form of the output predictions is needed. Considering that $u_i(k+h|k) = u_i(k-1) + \sum_{r=0}^{h} \Delta u_i(k+r|k)$, $h = 1, 2, \ldots, M$, substituting the local control sequence $U_i(k, M|k)$ into Eq. (7.20) and using Eq. (7.25), the output prediction takes the analytic form

$$\bar{Y}_i(k+1, P|k) = N_i \Delta U_i(k, M|k) + \hat{Z}_i(k+1, P|k) \tag{7.31}$$

Substituting the above equation into Eq. (7.30), the optimization objective $\bar{J}_i$ can be transformed into the form of Eq. (7.26). In addition, since the matrices $\bar{\bar{Q}}_i$ and $\bar{R}_i$ are positive definite, $H_i$ is also positive definite. In this way, the MPC problem is equivalently transformed into solving the unconstrained quadratic programming problem (7.26) in each control period.

Theorem 7.1 (Analytic solution) Suppose Assumption 7.1 holds. For each controller $C_i$, $i = 1, \ldots, n$, the analytic form of the control law applied to the system at moment $k$ is

$$u_i(k) = u_i(k-1) + \bar{K}_i \left[ \bar{Y}_i^d(k+1, P|k) - \hat{Z}_i(k+1, P|k) \right] \tag{7.32}$$

where

$$\bar{K}_i = \Gamma_i K_i, \quad \Gamma_i = \left[ I_{n_{u_i}} \;\; 0_{n_{u_i} \times (M-1)n_{u_i}} \right], \quad K_i = H_i^{-1} N_i^T \bar{\bar{Q}}_i \tag{7.33}$$


Proof For the control increment sequence $\Delta U_i(k, M|k)$ of the MPC problem with the objective function (7.26), the optimal solution has the form

$$\Delta U_i(k, M|k) = \frac{1}{2} H_i^{-1} G_i(k+1, P|k) \tag{7.34}$$

According to the rolling optimization strategy, only the first element of the optimal sequence is applied in each control period, so the control action of the system is

$$u_i(k) = u_i(k-1) + \Gamma_i \Delta U_i(k, M|k) \tag{7.35}$$

In this way, the analytical expression (7.32) of the MPC law follows from Eqs. (7.33)–(7.35).

Remark 7.1 For each locally controlled subsystem, the complexity of solving the controller $C_i$ arises mainly from inverting the matrix $H_i$. Using the Gauss–Jordan algorithm, and considering that the dimension of the matrix $H_i$ equals $M \cdot n_{u_i}$, the complexity of the inversion is $O(M^3 n_{u_i}^3)$. Thus, the computational complexity of solving the entire distributed predictive control problem is $O\!\left(M^3 \sum_{i=1}^{n} n_{u_i}^3\right)$, while the computational complexity of the centralized predictive control is $O\!\left(M^3 \left(\sum_{i=1}^{n} n_{u_i}\right)^3\right)$.

7.2.3 Performance Analysis

7.2.3.1 Closed-Loop System Stability

Since the analytic solution of the DMPC is given in Theorem 7.1, the dynamics of the closed-loop system can be derived, and in turn the stability conditions of the system can be obtained by analyzing the system matrix of the closed-loop dynamic system. In fact, the mathematical expression of the control sequence feedback system used to characterize the stability of the global system can be derived from the analytical solution of the controllers $C_i$ ($i = 1, \ldots, n$) in Eq. (7.32). To simplify the stability proof, define

$$\Omega = [\Omega_1^T \; \cdots \; \Omega_P^T]^T, \quad \Omega_j = \operatorname{diag}\{\Omega_{1j}, \ldots, \Omega_{nj}\}, \quad \Omega_{ij} = \left[ 0_{n_{x_i} \times (j-1)n_{x_i}} \;\; I_{n_{x_i}} \;\; 0_{n_{x_i} \times (P-j)n_{x_i}} \right], \; (i = 1, \ldots, n, \; j = 1, \ldots, P) \tag{7.36}$$

$$\Pi = [\Pi_1^T \; \cdots \; \Pi_M^T]^T, \quad \Pi_j = \operatorname{diag}\{\Pi_{1j}, \ldots, \Pi_{nj}\}, \quad \Pi_{ij} = \left[ 0_{n_{u_i} \times (j-1)n_{u_i}} \;\; I_{n_{u_i}} \;\; 0_{n_{u_i} \times (M-j)n_{u_i}} \right], \; (i = 1, \ldots, n, \; j = 1, \ldots, M) \tag{7.37}$$

Then

the subsystem-wise rearrangements of the global prediction and input sequences (denoted by a prime) satisfy

$$\hat{X}'(k, P|k-1) = \Omega \hat{X}(k, P|k-1) \tag{7.38}$$

$$U'(k, M|k-1) = \Pi U(k, M|k-1) \tag{7.39}$$

Define

$$\begin{aligned} & A = \operatorname{diag}\{\bar{A}_1^{(1)}, \ldots, \bar{A}_n^{(1)}\}; \qquad \tilde{A} = [\tilde{A}_1^T \; \cdots \; \tilde{A}_n^T]^T; \\ & B = \operatorname{diag}\{\bar{\bar{B}}_1, \ldots, \bar{\bar{B}}_n\}; \qquad \tilde{B} = [\tilde{B}_1^T \; \cdots \; \tilde{B}_n^T]^T; \\ & L = \operatorname{diag}\{L_1, \ldots, L_n\}; \qquad L_i = \operatorname{diag}_P\{[\, I_{n_{x_i}} \;\; 0_{n_{x_i} \times (n_{\bar{x}_i} - n_{x_i})}\,]\}; \\ & S = \operatorname{diag}\{\bar{S}_1, \ldots, \bar{S}_n\} \end{aligned} \tag{7.40}$$

Then, for each controller $C_i$ ($i = 1, \ldots, n$), according to Lemma 7.2 and definition (7.40), the state sequence predicted under the distributed framework at moment $k$ can be expressed as



$$\begin{aligned} \hat{X}_i(k+1, P|k) &= L_i \bar{X}_i(k+1, P|k) \\ &= L_i \bar{S}_i \left[ \bar{A}_i^{(1)} \hat{x}_i(k|k) + \bar{\bar{B}}_i U_i(k, M|k) + \tilde{A}_i \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \right] \end{aligned} \tag{7.41}$$

According to definition (7.40), the system-wide state predicted using the distributed approach can be expressed as

$$\hat{X}(k+1, P|k) = LS\left[ A\hat{x}(k|k) + BU(k, M|k) + \tilde{A}\hat{X}'(k, P|k-1) + \tilde{B}U'(k-1, M|k-1) \right] \tag{7.42}$$

Substituting Eqs. (7.38) and (7.39) into Eq. (7.42) gives

$$\hat{X}(k+1, P|k) = LS\left[ A\hat{x}(k|k) + BU(k, M|k) + \tilde{A}\Omega\hat{X}(k, P|k-1) + \tilde{B}\Pi U(k-1, M|k-1) \right] \tag{7.43}$$

Since the local control law $u_i(k-1) = \Gamma_i U_i(k-1, M|k-1)$ applied at moment $k-1$ is known, the open-loop optimal control sequence of the controller $C_i$ at moment $k$ can be expressed as $U_i(k, M|k) = \Gamma'_i \Gamma_i U_i(k-1, M|k-1) + \Lambda_i \Delta U_i(k, M|k)$. Then, according to Eqs. (7.25), (7.29) and (7.32), the open-loop optimized sequence of the controller $C_i$ at moment $k$ can be written directly as

$$\begin{aligned} U_i(k, M|k) &= \Gamma'_i u_i(k-1) + \Lambda_i K_i \left[ \bar{Y}_i^d(k+1, P|k) - \hat{Z}_i(k+1, P|k) \right] \\ &= \Gamma'_i u_i(k-1) + \Lambda_i K_i \Big\{ \bar{Y}_i^d(k+1, P|k) - \bar{\bar{S}}_i \big[ \bar{\bar{B}}_i \Gamma'_i u_i(k-1) + \bar{A}_i^{(1)} \hat{x}(k|k) \\ &\qquad + \tilde{A}_i \hat{X}(k, P|k-1) + \tilde{B}_i U(k-1, M|k-1) \big] - T_i \tilde{C}_i \hat{X}(k+1, P|k-1) \Big\} \end{aligned} \tag{7.44}$$

Define

$$\begin{aligned} & \Gamma' = \operatorname{diag}\{\Gamma'_1, \ldots, \Gamma'_n\}, \quad \Gamma = \operatorname{diag}\{\Gamma_1, \ldots, \Gamma_n\}, \\ & \bar{\bar{S}} = \operatorname{diag}\{\bar{\bar{S}}_1, \ldots, \bar{\bar{S}}_n\}, \quad T = \operatorname{diag}\{T_1, \ldots, T_n\}, \\ & K = \operatorname{diag}\{\Lambda_1 K_1, \ldots, \Lambda_n K_n\}. \end{aligned} \tag{7.45}$$

From definitions (7.38)–(7.40) and Eqs. (7.44) and (7.45), the analytical expression for the open-loop optimal control sequence of the whole system can be written directly as follows.

$$\begin{aligned} U(k, M|k) &= \Gamma'\Gamma U(k-1, M|k-1) + K\Big\{ Y^d(k+1, P|k) - \bar{\bar{S}}\big[ B\Gamma'\Gamma U(k-1, M|k-1) + A\hat{x}(k|k) \\ &\qquad + \tilde{A}\Omega\hat{X}(k, P|k-1) + \tilde{B}\Pi U(k-1, M|k-1) \big] - T\tilde{C}\Omega\hat{X}(k, P|k-1) \Big\} \end{aligned} \tag{7.46}$$

Define

$$\Theta = -K\bar{\bar{S}}A, \quad \Phi = -K\left(\bar{\bar{S}}\tilde{A}\Omega + T\tilde{C}\Omega\right), \quad \Psi = \Gamma'\Gamma - K\bar{\bar{S}}\left(B\Gamma'\Gamma + \tilde{B}\Pi\right) \tag{7.47}$$

Then the open-loop optimal control sequence (7.46) of the full system can be expressed as

$$U(k, M|k) = \Psi U(k-1, M|k-1) + \Theta\hat{x}(k|k) + \Phi\hat{X}(k, P|k-1) + KY^d(k+1, P|k) \tag{7.48}$$

The overall feedback control law of the system, obtained from all controller calculations, can be expressed as

$$u(k) = \Gamma U(k, M|k) \tag{7.49}$$

Combining the process model (7.2), the feedback control law (7.49), the prediction equation of the entire system (7.43) and the control equation of the entire system (7.48), the closed-loop state-space expression of the entire system under the distributed control structure is obtained as follows.

$$\begin{cases} x(k) = Ax(k-1) + B\Gamma U(k-1, M|k-1) \\ \hat{X}(k, P|k-1) = LS\big[ Ax(k-1) + \tilde{A}\Omega\hat{X}(k-1, P|k-2) + BU(k-1, M|k-1) + \tilde{B}\Pi U(k-2, M|k-2) \big] \\ \begin{aligned} U(k, M|k) &= \Theta x(k) + \Phi\hat{X}(k, P|k-1) + \Psi U(k-1, M|k-1) + KY^d(k+1, P|k) \\ &= \Theta\big[ Ax(k-1) + B\Gamma U(k-1, M|k-1) \big] \\ &\quad + \Phi LS\big[ Ax(k-1) + \tilde{A}\Omega\hat{X}(k-1, P|k-2) + BU(k-1, M|k-1) + \tilde{B}\Pi U(k-2, M|k-2) \big] \\ &\quad + \Psi U(k-1, M|k-1) + KY^d(k+1, P|k) \end{aligned} \\ y(k) = Cx(k) \end{cases} \tag{7.50}$$

where, since the system state is assumed to be measurable, $\hat{x}(k|k)$ in Eqs. (7.43) and (7.48) has been replaced by $x(k)$. Define the extended state

$$X_N(k) = \left[ x^T(k) \;\; \hat{X}^T(k, P|k-1) \;\; U^T(k, M|k) \;\; U^T(k-1, M|k-1) \right]^T \tag{7.51}$$

Then the closed-loop state-space expression of the system has the form

$$\begin{cases} X_N(k) = A_N X_N(k-1) + B_N Y^d(k+1, P|k) \\ y(k) = C_N X_N(k) \end{cases} \tag{7.52}$$

where

$$A_N = \begin{bmatrix} A & 0 & B\Gamma & 0 \\ LSA & LS\tilde{A}\Omega & LSB & LS\tilde{B}\Pi \\ \Theta A + \Phi LSA & \Phi LS\tilde{A}\Omega & \Theta B\Gamma + \Phi LSB + \Psi & \Phi LS\tilde{B}\Pi \\ 0 & 0 & I_{Mn_u} & 0 \end{bmatrix} \tag{7.53}$$

On this basis, the following stability criterion can be obtained.

Theorem 7.2 (Stability criterion) The closed-loop system consisting of all controllers $C_i$ ($i = 1, \ldots, n$) with the control laws (7.35) and the controlled object $S$ is asymptotically stable if and only if

$$\left| \lambda_j \{A_N\} \right| < 1, \quad \forall j = 1, \ldots, n_N \tag{7.54}$$

where $n_N = Pn_x + n_x + 2Mn_u$ is the order of the entire closed-loop system.

Remark 7.2 The first two rows of the matrix $A_N$ in Eq. (7.53) are determined by the matrix $A$ (first two columns) and the matrix $B$ (last two columns), while the third row is related to the matrices $A$, $B$ and $C$, the weight matrices $Q_i$, $R_i$, and


the optimization horizon $P$ and the control horizon $M$. This provides the basis for designing the proposed DMPC: since the weight matrices $Q_i$, $R_i$, the optimization horizon $P$ and the control horizon $M$ have a significant effect on the third row of the matrix $A_N$ in the stability criterion (7.54), the controller can be designed to make the closed-loop system stable by a reasonable choice of these parameters.
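The criterion (7.54) amounts to a spectral-radius test on $A_N$: once the closed-loop matrix has been assembled for a candidate choice of $Q_i$, $R_i$, $P$ and $M$, stability can be checked numerically. A minimal sketch (with toy matrices, not the ACC model):

```python
import numpy as np

def is_asymptotically_stable(A_N):
    """Check the criterion (7.54): |lambda_j(A_N)| < 1 for all eigenvalues."""
    return float(np.max(np.abs(np.linalg.eigvals(A_N)))) < 1.0

# Toy closed-loop matrices: one with all eigenvalues inside the unit circle,
# one with an eigenvalue outside it.
A_stable = np.array([[0.5, 0.1],
                     [0.0, 0.3]])
A_unstable = np.array([[1.2, 0.0],
                       [0.0, 0.3]])
print(is_asymptotically_stable(A_stable), is_asymptotically_stable(A_unstable))
# prints: True False
```

In a design loop one would rebuild $A_N$ for each tuning of $(Q_i, R_i, P, M)$ and keep only stabilizing choices.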

7.2.3.2 Optimization Performance Analysis

To illustrate the essential difference between the optimization problem using the neighborhood optimization performance index and the one using the local performance index, for each controller $C_i$ ($i = 1, \ldots, n$) the DMPC optimization problem (7.13) can be rewritten in the following form:

$$\min_{\Delta U_i(k,M|k)} \sum_{i=1}^{n} \left[ \sum_{l=1}^{P} \left\| \hat{y}_i(k+l|k) - y_i^d(k+l|k) \right\|_{Q_i}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1|k) \right\|_{R_i}^2 \right] \tag{7.55}$$

$$\text{s.t.} \quad \begin{bmatrix} \hat{x}_i(k+l+1|k) \\ \hat{x}_{i_1}(k+l+1|k) \\ \vdots \\ \hat{x}_{i_m}(k+l+1|k) \end{bmatrix} = \begin{bmatrix} A_{ii} & A_{i i_1} & \cdots & A_{i i_m} \\ A_{i_1 i} & A_{i_1 i_1} & \cdots & A_{i_1 i_m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{i_m i} & A_{i_m i_1} & \cdots & A_{i_m i_m} \end{bmatrix} \begin{bmatrix} \hat{x}_i(k+l|k) \\ \hat{x}_{i_1}(k+l|k) \\ \vdots \\ \hat{x}_{i_m}(k+l|k) \end{bmatrix} + \begin{bmatrix} B_{ii} \\ B_{i_1 i} \\ \vdots \\ B_{i_m i} \end{bmatrix} u_i(k+l|k) + \hat{\bar{w}}_i(k+l|k-1);$$

$$\hat{x}_j(k+l+1|k) = \hat{x}_j(k+l+1|k-1), \quad (j \notin N_i^{\mathrm{out}});$$

$$\hat{\bar{y}}_i(k+l|k) = \bar{C}_i \hat{\bar{x}}_i(k+l|k) + \hat{\bar{v}}_i(k+l|k-1), \quad (i = 1, \ldots, n);$$

$$\Delta u_j(k+l-1|k) = \Delta u_j(k+l-1|k-1), \quad (j \neq i). \tag{7.56}$$

If the local performance index (7.4) is used, for each controller $C_i$ ($i = 1, \ldots, n$) the optimization problem for distributed predictive control can be rewritten as

$$\min_{\Delta U_i(k,M|k)} \sum_{j=1}^{n} \left[ \sum_{l=1}^{P} \left\| \hat{y}_j(k+l|k) - y_j^d(k+l|k) \right\|_{Q_j}^2 + \sum_{l=1}^{M} \left\| \Delta u_j(k+l-1|k) \right\|_{R_j}^2 \right] \tag{7.57}$$


$$\begin{aligned} \text{s.t.} \quad & \hat{x}_i(k+l+1|k) = A_{ii}\hat{x}_i(k+l|k) + B_{ii}u_i(k+l|k) + \hat{w}_i(k+l|k-1); \\ & \hat{y}_i(k+l|k) = C_i\hat{x}_i(k+l|k) + \hat{v}_i(k+l|k-1); \\ & \hat{y}_j(k+l|k) = \hat{y}_j(k+l|k-1), \quad (j \neq i); \\ & \Delta u_j(k+l-1|k) = \Delta u_j(k+l-1|k-1), \quad (j \neq i). \end{aligned} \tag{7.58}$$

From Eqs. (7.55) and (7.57) it can be seen that the optimization objectives of the two problems are the same, but the system equations are different. In Eq. (7.56) the subsystem $S_i$ is solved together with the state evolution of its output-neighboring subsystems. During the state evolution, the control increment sequence $\Delta U_i(k, M|k)$ affects both the subsystem $S_i$ and its output-neighboring subsystems, and the effect on the neighboring subsystems in turn affects the subsystem $S_i$. Since they are solved together, this coupling is fully taken into account. In problem (7.58), however, only the state of the subsystem $S_i$ is computed based on $\Delta U_i(k, M|k)$, and the state evolution of the other subsystems is replaced by the estimates from moment $k-1$. It is also evident from the model form that the model of the optimization problem with the neighborhood optimization performance index is closer to the system model (7.2) than the model of the optimization problem with the local performance index. In fact, after several control periods, the control increment sequence $\Delta U_i(k, M|k)$ affects not only the output-adjacent subsystems of the subsystem $S_i$, but also other subsystems (e.g., the output-adjacent subsystems of the output-adjacent subsystems). In the production full-process MPC, the influence on subsystems beyond the output-adjacent subsystems of $S_i$ is not considered. If the network bandwidth were sufficient for an iterative algorithm, then, using iteration, the effects on subsystems beyond the output-adjacent subsystems would also be taken into account. It is worth noting that in the MPC based on neighborhood optimization, each controller only communicates with its neighboring subsystems and the neighbors of its neighbors.
Moreover, if each controller is able to communicate with its adjacent subsystems twice in one control period, it can obtain information about the neighbors of its neighbors through its direct neighbors. This means that each controller then only needs to communicate with its adjacent subsystems, which relaxes the network requirements and thus increases the fault tolerance of the system.
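The difference between the two performance indices can be made concrete with a toy example (illustrative only, not the book's cost values): a local index evaluates $J_i$ alone, while the neighborhood index (7.5) sums $J_j$ over the output neighborhood $N_i^{\mathrm{out}}$.

```python
# Toy comparison of local vs. neighborhood performance indices.
def local_index(J, i):
    """Local index (7.4): only the subsystem's own cost."""
    return J[i]

def neighborhood_index(J, i, N_out):
    """Neighborhood index (7.5): sum of costs over the output neighborhood."""
    return sum(J[j] for j in N_out[i])

J = {0: 1.0, 1: 2.0, 2: 4.0}            # stage costs of three subsystems (toy values)
N_out = {0: {0, 1}, 1: {1, 2}, 2: {2}}  # S0 influences S1, S1 influences S2
print(local_index(J, 0), neighborhood_index(J, 0, N_out))
```

A controller minimizing the neighborhood index is penalized for degrading its downstream neighbors, which is precisely why the approach moves the global performance closer to that of centralized optimization.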

7.2.4 Numerical Results

The distributed predictive control method based on neighborhood optimization is validated on the accelerated cooling process that follows the rolling of medium-thick plates. The accelerated cooling process is a large-scale system consisting of


Fig. 7.1 Accelerated cooling process and distributed control framework

multiple input and output variables, whose subsystems are interconnected through energy flow and material flow. If centralized control were used, it would be limited by the computational speed and the size of the device, and a centralized MPC would also fail when one or several subsystems fail. Therefore, for large-scale systems consisting of many input and output variables, a distributed control structure with slightly weaker global performance is generally used, as shown in Fig. 7.1: the system is decomposed into multiple interconnected subsystems, each controlled by a local controller, and the controllers are interconnected through a network. In this section the accelerated cooling process is naturally divided into $n$ subsystems based on the spatial layout of the process itself, with each nozzle corresponding to one subsystem, so that the controlled system, the individual local controllers and the network together form a distributed control system.

(1) Accelerated cooling process system model

As shown in Fig. 7.2, the open system, taking the locations of sensors P2 and P3 and the upper and lower surfaces of the steel plate as its boundary, is divided into $n$ subsystems along the length coordinate. The $s$-th subsystem ranges from $l_{s-1}$ to $l_s$ ($s = 1, 2, \ldots, n$), where $l_0$ is the coordinate of TP2, $l_i$ ($i = 1, 2, \ldots, 15$) is the coordinate at the exit of the $i$-th group of nozzle sets, $l_{n-1}$ is the coordinate at the exit of the water-cooled zone, and $l_n$ is the position of TP3. The output is the average temperature across the thickness of the steel plate at location $l_i$, and the input is the water flow rate of the corresponding nozzle. To facilitate the numerical calculation, each subsystem is uniformly divided into $m$ layers along the thickness direction and $n_s$ columns along the length direction. Let $x_s^{(i,j)}$ denote the temperature of the cell in the $i$-th layer and $j$-th column of the subsystem $S_s$. Let the system sampling interval be $\Delta t$ seconds; the Hammerstein model of the subsystem $S_s$ around the equilibrium point is then obtained as follows.


Fig. 7.2 Subsystem decomposition and set points of accelerated cooling process



$$\begin{cases} x_s(k+1) = A_{ss} x_s(k) + B_{ss} u_s(k) + D_{s,s-1} x_{s-1}(k) \\ y_s(k) = C_{ss} x_s(k) \end{cases}, \quad s = 1, 2, \ldots, n \tag{7.59}$$

$$\begin{cases} u_s = 2186.7 \times 10^{-6} \times a \cdot (v/v_0)^b \times (F_s/F_0)^c, & s \in C_w \\ u_s = 1, & s \in C_A \end{cases} \tag{7.60}$$

where $x_s = [(x_{s,1})^T \; (x_{s,2})^T \; \cdots \; (x_{s,n_s})^T]^T$ with $x_{s,j} = [x_s^{(1,j)} \; x_s^{(2,j)} \; \cdots \; x_s^{(m,j)}]^T$, $j = 1, 2, \ldots, n_s$, is the state vector of the subsystem $S_s$; $y_s$ is the average temperature of the last cell column of the subsystem $S_s$; $u_s$ is the input of the subsystem $S_s$ (there is a fixed relationship between the input $u_s$ and the water flow rate of the nozzle group of the subsystem $S_s$); and $A_{ss}$, $B_{ss}$, $D_{s,s-1}$ and $C_{ss}$ are the coefficient matrices of the subsystem $S_s$:

$$A_{ss} = \begin{bmatrix} \Phi_s^{(1)} \Xi & 0 & \cdots & 0 \\ 0 & \Phi_s^{(2)} \Xi & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \Phi_s^{(n_s)} \Xi \end{bmatrix} + \begin{bmatrix} (1-\gamma)I_m & 0 & \cdots & 0 \\ \gamma I_m & (1-\gamma)I_m & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & \gamma I_m & (1-\gamma)I_m \end{bmatrix};$$

$$B_{ss} = \begin{bmatrix} \psi_s^{(1)} \\ \vdots \\ \psi_s^{(n_s)} \end{bmatrix}; \quad C_{ss} = m^{-1} \cdot \left[ 0_{1 \times m(n_s-1)} \;\; 1_{1 \times m} \right]; \quad D_{s,s-1} = \begin{bmatrix} 0_{m \times m(n_s-1)} & \gamma I_m \\ 0_{m(n_s-1) \times m(n_s-1)} & 0_{m(n_s-1) \times m} \end{bmatrix} \tag{7.61}$$

yet ⎡ Φ (s j)

(1, j)

a(x s ⎢ .. ⎢ =⎣ .

) ··· .. .

⎤ 0 .. .

(m, j)

· · · a(x s

0



⎥ ⎥; ψ ( j) (xs ) = ⎢ ⎣ s ⎦ )

(1, j )

θs

(m, j)

θs

(1, j)

(x s

0(m−2)×1

(m, j)

(x s

(1, j )

− x∞ ) · β(x s

)

(m, j )

− x∞ ) · β(x s

⎤ ⎥ ⎦;

)

(7.62) ⎡ ⎢ ⎢ ⎢ ⎢ =⎢ ⎢ ⎢ ⎣

−1 1

0 ··· . 1 −2 1 . . . . . 0 .. .. .. .. . . . 1 −2 .

0 ··· 0



0 .. ⎥ ⎧ . ⎥ ⎥ ⎨ θ (i, j) = (x (i, j) /x)a , s ∈ Cw ⎥ s s m×m ; Im ∈ R ; ; 0 ⎥ ⎥ ⎩ (i, j)

(i, j) θs = h air (x s ), s ∈ C A ⎥ 1 ⎦

1 −1 (7.63)

# $ a(xs(i, j) ) = −△t · λ(xs(i, j) )/ △z 2 ρ(xs(i, j) ))c p (xs(i, j) )

(7.64)

β(xs(i, j) ) = △t · a(xs(i, j) )/λ(xs(i, j) )

(7.65)

γ = △t · v/△l, i = 1, 2, . . . , m, j = 1, 2, . . . , n s

(7.66)

where $\Delta l$ and $\Delta z$ are the length and thickness of each small cell, respectively, $\rho$ is the density of the steel plate, $c_p$ is the specific heat capacity, $\lambda$ is the heat-transfer coefficient, $v$ is the plate velocity, and $\bar{x}_s^{(i,j)}$ is the equilibrium temperature of the subsystem $S_s$. $C_w$ is the set of subsystems in which the steel plate is water-cooled, $C_A$ is the set of subsystems in which the steel plate is air-cooled, $F_s$ is the water flow rate of the nozzle of the subsystem $S_s$, and $F_0$, $v_0$, $a$, $b$ and $c$ are constants. For the algorithm study that follows, the linear parts of the models (7.59) and (7.60) are used; additionally, to make the algorithm more general, the linear part of the model (7.59) for each subsystem is rewritten in the following state-space form.

$$\begin{cases} x_i(k+1) = A_{ii} x_i(k) + B_{ii} u_i(k) + \displaystyle\sum_{j=1 (j \neq i)}^{n} A_{ij} x_j(k) + \sum_{j=1 (j \neq i)}^{n} B_{ij} u_j(k) \\ y_i(k) = C_{ii} x_i(k) + \displaystyle\sum_{j=1 (j \neq i)}^{n} C_{ij} x_j(k) \end{cases} \tag{7.67}$$

where $x_i \in \mathbb{R}^{n_{x_i}}$, $u_i \in \mathbb{R}^{n_{u_i}}$ and $y_i \in \mathbb{R}^{n_{y_i}}$ are the local subsystem state, control input and output vectors, respectively. When at least one of the matrices $A_{ij}$, $B_{ij}$ and $C_{ij}$ is nonzero, the subsystem $S_j$ is coupled with $S_i$. The whole system model can be represented as

$$\begin{cases} x(k+1) = Ax(k) + Bu(k) \\ y(k) = Cx(k) \end{cases} \tag{7.68}$$

where $x \in \mathbb{R}^{n_x}$, $u \in \mathbb{R}^{n_u}$ and $y \in \mathbb{R}^{n_y}$ are the full system state, control input and output, respectively.

(2) Optimization of control objectives

The control objective of the entire control system is that the temperature of the steel plate passing the coordinate points $l_1, l_2, \ldots, l_n$ deviates as little as possible from the reference temperature $y^d = [y_1^d \; y_2^d \; \cdots \; y_n^d]^T$. If a rolling optimization strategy is used, at each control moment $k$ the global performance index $J(k)$ to be minimized is

$$J(k) = \sum_{i=1}^{n} \left[ \sum_{l=1}^{P} \left\| y_i(k+l) - y_i^d(k+l) \right\|_{Q_i}^2 + \sum_{l=1}^{M} \left\| \Delta u_i(k+l-1) \right\|_{R_i}^2 \right] \tag{7.69}$$

where Q_i and R_i are the weighting coefficient matrices; the natural numbers P, M ∈ N are the prediction horizon and control horizon, respectively, with P ≥ M; y_i^d is the output setpoint of subsystem S_i; and Δu_i(k) = u_i(k) − u_i(k−1) is the input increment of subsystem S_i. To better illustrate the performance of the distributed predictive control algorithm presented in this chapter, all plate points are cooled according to the same cooling curve. Using the DMPC introduced in this chapter, each subsystem is controlled by a local controller. An X70 pipeline steel plate with a thickness of 19.28 mm, a length of 25 m and a width of 5 m is used to illustrate the performance of the method. The entire cooling section is covered with cells of 3 mm thickness and 0.8 m length, and the plate speed is 1.6 m/s. The 1st to 12th groups of cooling-water nozzle sets are movable nozzle sets that regulate the plate temperature of the steel plate. The distribution of the equilibrium temperature of the steel plate is shown in Fig. 7.3.
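For concreteness, the quadratic index (7.69) for a single subsystem can be evaluated as below; the horizons, weights and data are hypothetical, and the full J(k) sums this quantity over all n subsystems:

```python
import numpy as np

def global_cost(y_pred, y_ref, du, Q, R):
    """Evaluate sum_l ||y(k+l) - yd(k+l)||_Q^2 + sum_l ||du(k+l-1)||_R^2
    for one subsystem, as in (7.69)."""
    J = 0.0
    for l in range(y_pred.shape[0]):      # prediction-horizon (P) terms
        e = y_pred[l] - y_ref[l]
        J += e @ Q @ e
    for l in range(du.shape[0]):          # control-horizon (M) terms
        J += du[l] @ R @ du[l]
    return J

# Hypothetical data: P = 3, M = 2, scalar output and input increment
y_pred = np.array([[1.0], [0.5], [0.2]])
y_ref = np.zeros((3, 1))
du = np.array([[0.1], [0.05]])
Q = np.eye(1)
R = 0.1 * np.eye(1)
J = global_cost(y_pred, y_ref, du, Q, R)
```

With P ≥ M, the output error is penalized over a longer window than the input increments, which is exactly the structure of (7.69).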


7 Distributed Predictive Control Under Communication Constraints

Fig. 7.3 Equilibrium of the states of the entire system (equilibrium temperature, in °C, of subsystems S1–S17 plotted by layer and column)

Let the optimization horizon and control horizon of each local MPC equal 10, i.e., P = 10 and M = 10, and let the start-cooling temperature TP2 of the whole cooling process be 780 °C. The closed-loop performance obtained by controlling the system with a centralized MPC, a neighborhood optimization based MPC, and a distributed MPC using local performance indices is shown in Fig. 7.4; the corresponding manipulated variables (unit: l·m⁻²·min⁻¹) are shown in Fig. 7.5. From Figs. 7.4 and 7.5, it can be seen that for the ACC process the closed-loop performance is significantly improved with the neighborhood optimization based MPC compared with the distributed MPC using local performance indices. The control decisions and closed-loop performance of the neighborhood optimization based MPC are very close to those of the centralized MPC, while its computational effort is much smaller. The neighborhood optimization based MPC is therefore an efficient approach that can significantly improve system performance while keeping the computational speed and network burden acceptable.

Fig. 7.4 Performance of the closed-loop system under centralized MPC, neighborhood optimization based DMPC, and local performance index based DMPC (outputs y1–y17 versus time, showing the reference, centralized MPC, LCO-DMPC and N-DMPC curves)

Fig. 7.5 Water flow rate of each sprayer under the centralized MPC, neighborhood optimization based DMPC, and local performance index based DMPC (inputs u1–u17 versus time)


7.3 Stabilized Neighborhood Optimization Based Distributed Model Predictive Control

In the preceding section we presented the design of a distributed MPC with neighborhood performance optimization, in which each subsystem communicates with its neighboring subsystems only once per sampling period. In this section each subsystem can still communicate only with its neighboring subsystems, but there is no limitation on communication bandwidth; that is, each subsystem may exchange information with its neighbors many times during each sampling period.

7.3.1 Problem Description

Consider a large-scale discrete-time linear system composed of many interacting subsystems. The overall system model is

x⁺ = A x + B u,  y = C x        (7.70)

where x ∈ R^{n_x} is the system state, u ∈ R^{n_u} is the current control input, y ∈ R^{n_y} is the controlled output and x⁺ is the successor state. The state and control input applied at sample time t are denoted x(t) and u(t), respectively. Moreover, there are hard constraints on the system state and control input; that is, for all t ≥ 0,

x(t) ∈ X,  u(t) ∈ U        (7.71)

where X ⊂ R^{n_x} and U ⊂ R^{n_u} are compact convex polyhedra containing the origin in their interior. Given model (7.70), without loss of generality, the overall system is divided into m subsystems, denoted S_i, i ∈ I_{0:m}. Thus u = (u_1, u_2, ..., u_m) and x = (x_1, x_2, ..., x_m), and the subsystem model for S_i, ∀i ∈ I_{0:m}, is

x_i⁺ = A_ii x_i + B_ii u_i + Σ_{j∈N_i} B_ij u_j        (7.72)

where N_i is the set of subsystems that send inputs to the current subsystem S_i. A subsystem S_j, j ∈ N_i, couples with S_i by sending its control input u_j to S_i; in particular, j ∈ N_i if B_ij ≠ 0. Given the overall constraint sets X and U, the local variables satisfy the hard constraints x_i(t) ∈ X_i and u_i(t) ∈ U_i. To ease the analysis, the definitions of neighbor (upstream neighbor) and downstream neighbor are given next.


Definition 7.2 Given subsystem S_i with state evolution Eq. (7.72), each S_j ∈ N_i that sends input information to S_i is defined as a neighbor (upstream neighbor) of S_i. Conversely, since S_i receives input information from S_j, S_i is defined as a downstream neighbor of S_j.

Denote the tracking target as y_t. Assume that (A, B) is stabilizable and the state is measurable. The aim of the tracking problem, given a target y_t, is to design a controller that steers y(t) → y_t in an admissible way as t → ∞. Hence the original control objective function of the overall system is

V_N^{origin}(x, y_t; u) = Σ_{k=0}^{N−1} ( ‖C x(k) − ŷ_t‖²_{Q_o} + ‖u(k) − û_t‖²_R ) + ‖C x(N) − ŷ_t‖²_{P_o}        (7.73)

where P_o > 0, Q_o > 0 and R > 0 are weighting coefficient matrices, and u_t is the steady input corresponding to y_t. The problem considered here is to design a DMPC algorithm for the physical network in which the subsystem controllers coordinate with each other while pursuing the following goals:

• achieving good optimization performance for the entire closed-loop system;
• satisfying the information-connection constraints and simplifying the information connectivity, which benefits the network security, structural flexibility and error tolerance of the distributed control framework;
• guaranteeing the feasibility of target tracking.

To solve this problem, a strong-coupling neighbor-based optimization DMPC is designed in this section and detailed below.
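Before proceeding, a one-step sketch of the coupled subsystem model (7.72) may help; the two hypothetical scalar subsystems below have S_2 sending its input to S_1 (i.e., B_12 ≠ 0):

```python
import numpy as np

def subsystem_step(i, x, u, A, B):
    """One step of x_i+ = A_ii x_i + B_ii u_i + sum_{j in N_i} B_ij u_j,
    where N_i = {j != i : B_ij != 0} are the upstream neighbors of S_i."""
    xi_next = A[i][i] @ x[i] + B[i][i] @ u[i]
    for j in range(len(x)):
        if j != i and B[i][j] is not None:   # B_ij != 0 -> j is a neighbor
            xi_next = xi_next + B[i][j] @ u[j]
    return xi_next

# Hypothetical two-subsystem example: only S2 -> S1 coupling
A = [[np.array([[0.9]]), None], [None, np.array([[0.8]])]]
B = [[np.array([[1.0]]), np.array([[0.2]])], [None, np.array([[1.0]])]]
x = [np.array([1.0]), np.array([0.5])]
u = [np.array([0.1]), np.array([0.3])]
x1_next = subsystem_step(0, x, u, A, B)
```

Here `None` entries stand for zero coupling blocks, so the neighbor set N_1 = {2} is read directly off the sparsity pattern of B.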

7.3.2 DMPC Design

In an interconnected distributed system, the state evolution of each subsystem is affected by the optimal control decisions of its upstream neighbors, and taking these effects into account in each subsystem helps improve the performance of the entire closed-loop system. On the other hand, these interactions have different strengths for different downstream subsystems, and some are so small that they can be ignored. If the cost functions of weakly coupled downstream subsystems are included in each subsystem's optimization problem, additional information connections arise with little improvement in closed-loop performance, and the increase in information connections hinders the error tolerance and flexibility of the distributed control system. Thus, each subsystem-based MPC takes into account only the cost functions of its strongly interacting downstream subsystems, to improve the closed-loop performance of the entire system, and receives information only from its strongly coupled neighbors.

7.3.2.1 Strong Coupling Neighbor-Based Optimization for Tracking

Given that the coupling degrees between different subsystems differ considerably, each subsystem cooperates with its strongly coupled neighbors while treating the weakly coupled ones as disturbances. Define N_i(strong) as the set of strongly coupled neighboring subsystems and N_i(weak) as the set of weakly coupled neighbors; the rule for identifying strong couplings is detailed in Sect. 7.3.2.4. Then, for S_i, we have

x_i⁺ = A_ii x_i + B_ii u_i + Σ_{j∈N_i(strong)} B_ij u_j + w_i        (7.74)

where

w_i = Σ_{j∈N_i(weak)} B_ij u_j,  w_i ∈ W_i,  W_i = ⊕_{j∈N_i(weak)} B_ij U_j,
N_i(weak) ∪ N_i(strong) = N_i = { j | B_ij ≠ 0, j ≠ i }

The deviation w_i collects the influence of the weakly coupled upstream neighbors in N_i(weak); it is contained in a convex and compact set W_i that contains the origin. If the weak coupling influence w_i is neglected, a simplified model of S_i is obtained:

x̄_i⁺ = A_ii x̄_i + B_ii ū_i + Σ_{j∈N_i(strong)} B_ij ū_j        (7.75)

Here x̄_i, ū_i and ū_j, j ∈ N_i(strong), are the state and inputs of the simplified subsystem model that neglects the weakly coupled upstream neighbors' influence w_i. The simplified overall system model, with new coupling matrix B̄, is

x̄⁺ = A x̄ + B̄ ū        (7.76)

where x̄ = (x̄_1, x̄_2, ..., x̄_m) and ū = (ū_1, ū_2, ..., ū_m) are the states and inputs of this simplified model. Considering the target-tracking problem of the simplified model, constraints are imposed on the terminal state prediction to ensure the output tracks the given target y_t. If the current target y_t is used directly as the tracking target in the controller optimization, then whenever y_t changes the terminal constraints must change at once, and the optimal solution of the previous time may not satisfy the terminal constraints brought by the changed y_t.


This violates the recursive feasibility of the system. Thus, a steady-state optimization is integrated into the MPC for tracking, in which an artificial feasible tracking target y_s is introduced as an intermediate variable and treated as an optimization variable. By setting the tracking point y_s equal to the previous target, recursive feasibility is not violated by a target change. The intermediate target y_s and the corresponding state x̄_s and input ū_s must satisfy the simplified system's steady-state equations:

⎡ A − I_n   B̄    0  ⎤ ⎡ x̄_s ⎤
⎣ C         0   −I  ⎦ ⎢ ū_s ⎥ = 0        (7.77)
                      ⎣ y_s ⎦

(x̄_s, ū_s) = M_y y_s        (7.78)
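As a quick numeric check of the steady-state condition (7.77), the sketch below solves for the steady pair for a small hypothetical simplified model (the matrices are illustrative, not taken from the rolling-mill example):

```python
import numpy as np

# Hypothetical simplified model (n_x = 2, n_u = 1, n_y = 1)
A = np.array([[0.5, 0.1],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
ys = np.array([1.0])

# Steady-state condition (7.77): (A - I) xs + B us = 0 and C xs = ys
n_x, n_u = B.shape
M = np.block([[A - np.eye(n_x), B],
              [C, np.zeros((1, n_u))]])
rhs = np.concatenate([np.zeros(n_x), ys])
sol = np.linalg.lstsq(M, rhs, rcond=None)[0]  # plays the role of M_y ys
xs, us = sol[:n_x], sol[n_x:]
```

The linear map from y_s to (x̄_s, ū_s) computed here is exactly what the matrix M_y in (7.78) encodes.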

Here M_y is a suitable matrix; that is, the input ū_s and state x̄_s corresponding to the target y_s in the simplified model can be expressed in terms of y_s. This holds under the premise of Lemma 1.14 in [1]; if Lemma 1.14 does not hold, a matrix M_θ and a parameter θ satisfying (x̄_s, ū_s) = M_θ θ can be found, and θ replaces y_s as the variable to be solved. For the artificial tracking target y_s of the overall system we have y_s = [y_{1,s}, ..., y_{i,s}, ..., y_{m,s}]; that is, given y_s, each subsystem S_i gets a sub-tracking target y_{i,s}, and, similarly to (7.78), x̄_{s,i} and ū_{s,i} are obtained. With the simplified model and the artificial tracking target y_{i,s}, in the strong-coupling neighbor-based optimization MPC algorithm the objective function optimized in subsystem S_i, ∀i ∈ [1, m], is set as

V'_{iN}(x_i, y_t; x̄_i, ū_{i,0:N−1}, y_s) = Σ_{k=0}^{N−1} ( ‖x̄_i(k) − x̄_{i,s}‖²_{Q_i} + ‖ū_i(k) − ū_{i,s}‖²_{R_i} ) + ‖x̄_i(N) − x̄_{i,s}‖²_{P_i} + V_0(y_{i,s}, y_{t,i}) + Σ_{k=0}^{N−1} Σ_{h∈H_i} ‖x̄_h(k) − x̄_{s,h}‖²_{Q_h} + Σ_{h∈H_i} ‖x̄_h(N) − x̄_{s,h}‖²_{P_h}        (7.79)

where x_i and y_t are the given initial state and target, ū_{i,0:N−1} are the input predictions over the next N sample times, and y_s is the admissible target. Q_i = C_i' Q_{o,i} C_i > 0 and

H_i = { h | i ∈ N_h(strong), ∀S_h, h ∈ [1, m], h ≠ i }        (7.80)

Here the controller of S_i takes the performance of its strongly coupled downstream neighbors as part of its optimization objective; that is, the optimal solution of subsystem S_i is decided by its own cost and by the downstream neighbors in the set H_i on which S_i has a strong impact.


Next, the simplified model (7.75), with only strong couplings, is used to solve the tracking problem (7.79) for each subsystem. To guarantee feasibility and stability of the control, the following definitions and assumptions are given. One important issue is dealing with the deviation caused by neglecting the weakly coupled neighbors' inputs; here a robust invariant set is adopted to keep the deviation of the states bounded and the real system states controlled within X.

Definition 7.3 (Robust invariant set and control law) Let e = x − x̄ denote the error between the original plant and the simplified model, with dynamics

e⁺ = A_K e + w        (7.81)

where A_K = A + B K. A set Φ is called a robust invariant set for system (7.81) if A_K Φ ⊕ W ⊆ Φ, and the corresponding control law is called a robust invariant set control law.

This definition shows that for the system x⁺ = A x + B u + w, if Φ and a robust invariant set control law K exist, then for e(0) = x(0) − x̄(0) the trajectory of the original system at any time t satisfies x(t) ∈ x̄(t) ⊕ Φ. Based on this definition, the dynamics of the deviation x_i − x̄_i introduced by neglecting weakly coupled neighbors can be handled. For subsystem S_i in (7.74), the deviation dynamics are

e_i⁺ = A_ii e_i + B_ii u_{i,e} + w_i

where e_i = x_i − x̄_i is the deviation of the original model from the simplified model and u_{i,e} is the correcting control input. A set Φ_i is a robust invariant set for S_i if the invariance condition holds for all e_i ∈ Φ_i and all w_i ∈ W_i. Here u_{i,e} = K_i e_i is a feedback control input, and K_i is the robust invariant set control law for S_i. It then follows that x_i(t) ∈ x̄_i(t) ⊕ Φ_i for all t. Let (x̄_i(t), ū_i(t)) ∈ F_i, where F_i = (X_i × U_i) ⊖ (Φ_i × K_i Φ_i); then the original system state and input satisfy (x_i(t), K_i(x_i(t) − x̄_i(t)) + ū_i(t)) ∈ X_i × U_i. Thus, with the help of the robust invariant set, the optimization of the original system is transferred to the simplified model. For the overall system, K = diag(K_1, K_2, ..., K_m). With Definition 7.3, if the deviation caused by omitting weakly coupled neighbors is kept in an RPI set Φ_i with control law K_i, and the simplified model (7.76) has its control input ū_i and state x̄_i confined to U_i ⊖ K_i Φ_i and X_i ⊖ Φ_i, respectively, then the local subsystem has a feasible solution to the optimization. For the manually selected tracking target y_s, based on the overall simplified model (7.76), the following definition is given.

Definition 7.4 (Tracking invariant set control law) Consider the overall system (7.76) controlled by the following control law:


ū = K (x̄ − x̄_s) + ū_s = K x̄ + L y_s        (7.82)

Let A + B̄K be Hurwitz; then this control law steers the system (7.76) to the steady state and input (x̄_s, ū_s) = M_y y_s. K is called the tracking invariant set control law. Denote by Ω_K the invariant set for tracking: the set of initial states and steady outputs that can be stabilized by the control law (7.82) while fulfilling the system constraints throughout the evolution. For any (x̄(0), y_s) ∈ Ω_K, the trajectory of the system x̄⁺ = A x̄ + B̄ ū controlled by ū = K x̄ + L y_s is confined to Ω_K and tends to (x̄_s, ū_s) = M_y y_s.

Under Definitions 7.3 and 7.4, before introducing the strong-coupling neighbor-based optimization DMPC, some assumptions for closed-loop feasibility and stability are given as follows; the corresponding theorem and analysis are in Sect. 7.3.3.

Assumption 7.2 The eigenvalues of A_ii + B_ii K_i lie in the interior of the unit circle; Φ_i is an admissible robust positively invariant set for the deviation x_i − x̄_i of S_i subject to the constraints F_i, and the corresponding feedback control law is u_{i,e} = K_i e_i.

Assumption 7.3 Ω_K is a tracking invariant set for the simplified system (7.76) subject to the constraints F = { (x̄_1, ū_1), ..., (x̄_m, ū_m) | ∀i, (x̄_i, ū_i) ∈ (X_i × U_i) ⊖ (Φ_i × K_i Φ_i) }, and the corresponding feedback gain matrix is K = {K_1, K_2, ..., K_m}.

Assumption 7.4 For Q = block-diag{Q_1, Q_2, ..., Q_m}, R = block-diag{R_1, R_2, ..., R_m} and P = block-diag{P_1, P_2, ..., P_m}, it holds that

(A + B̄K)' P (A + B̄K) − P = −(Q + K' R K)        (7.83)

Assumption 7.2 ensures that, under the feedback control law u_{i,e} = K_i e_i, i ∈ I_{0:m}, the state estimated by the simplified model (7.76) stays near the real system's trajectory until the system reaches the target. In Assumption 7.3, Ω_K is used as the terminal constraint of the DMPC. Assumption 7.4 is used in the proof of convergence presented in the Appendix. The strong-coupling neighbor-based optimization DMPC algorithm, which is solved iteratively, can now be defined as follows.

Firstly, denote the optimization objective of subsystem S_i as V_{iN}. According to (7.79), at iteration step p, V_{iN} satisfies

V_{iN}(x_i, y_t, p; x̄_i, ū_{i,0:N−1}, y_{i,s}) = Σ_{k=0}^{N−1} ( ‖x̄_i(k) − x̄_{i,s}‖²_{Q_i} + ‖ū_i(k) − ū_{i,s}‖²_{R_i} ) + ‖x̄_i(N) − x̄_{i,s}‖²_{P_i} + V_0(y_{i,s}, y_{i,t}) + Σ_{k=0}^{N−1} Σ_{h∈H_i} ‖x̄_h^{[p−1]}(k) − x̄_{h,s}‖²_{Q_h} + Σ_{h∈H_i} ‖x̄_h^{[p−1]}(N) − x̄_{h,s}‖²_{P_h}        (7.84)


Compute the optimal solution

(x̄_i'(0), ū'_{i,0:N−1}, y'_{i,s}) = arg min V_{iN}(x_i, y_t, p; x̄_i, ū_{i,0:N−1}, y_{i,s})        (7.85)

subject to the constraints

x̄_{h_i}(k+1) = A_{h_i h_i} x̄_{h_i}(k) + Σ_{h_j∈N_{h_i}(strong)} B_{h_i h_j} ū^{[p]}_{h_j}(k) + B_{h_i h_i} ū_{h_i}(k)
(x̄_{h_i}(k), ū_{h_i}(k)) ∈ F_{h_i},  F_{h_i} = (X_{h_i} × U_{h_i}) ⊖ (Φ_{h_i} × K_{h_i} Φ_{h_i})
(x̄(N), y_s) ∈ Ω_K
x̄_i(0) ∈ x_i ⊖ Φ_i
(x̄_{i,s}, ū_{i,s}) = M_y y_{i,s}        (7.86)

with h_i ∈ H_i ∪ {i}, and Φ_i and Ω_K defined in Assumptions 7.2 and 7.3, respectively. The optimization (7.85) updates S_i's initial state, its inputs over N steps ū_{i,0:N−1} and its current tracking target y_{i,s}, based on the information received from the subsystems in H_i. Secondly, set

ū^{[p]}_{i,0:N−1} = γ_i ū'_{i,0:N−1} + (1 − γ_i) ū^{[p−1]}_{i,0:N−1}        (7.87)

y^{[p]}_{i,s} = γ_i y'_{i,s} + (1 − γ_i) y^{[p−1]}_{i,s}        (7.88)

x̄^{[p]}_i(0) = γ_i x̄'_i(0) + (1 − γ_i) x̄^{[p−1]}_i(0)        (7.89)

Σ_{i=1}^{m} γ_i = 1,  γ_i > 0        (7.90)

The weight γ_i ∈ R, 0 < γ_i < 1, guarantees the consistency of the optimization problem; that is, at the end of the current sample time all shared variables converge. After that, we set p = p + 1 and iterate until the solutions converge; then x̄_i* = x̄_i^{[p]}, ū_i* = ū^{[p]}_{i,0:N−1} and y*_{i,s} = y^{[p]}_{i,s}. Finally, when the solution has converged, according to Assumption 7.2, the control law of S_i is taken as

u_{i,0} = ū*_{i,0} + K_i (x_i − x̄_i*)        (7.91)


where K_i is the robust invariant set control law and ū*_{i,0} is the first element of ū_i*. For better understanding, the procedure is also summarized in Algorithm 7.1. The algorithm uses an iterative strategy to make the distributed control solution (x̄(0), ū_{0:N−1}, y_s) consistent. The selection of the warm start, i.e., the solution given to each subsystem at initial iteration step 0, is described in the next subsection.
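The convex-combination update (7.87) with the convergence test used later in the simulations can be sketched as follows; the local "optimizer" here is a hypothetical stand-in contraction, not the constrained problem (7.85):

```python
import numpy as np

def iterate_to_consensus(u_init, local_solve, gamma, tol=1e-3, p_max=100):
    """Iterate u[p] = gamma*u' + (1-gamma)*u[p-1] (cf. (7.87))
    until ||u[p] - u[p-1]|| <= tol or p > p_max."""
    u_prev = u_init
    for p in range(1, p_max + 1):
        u_new = gamma * local_solve(u_prev) + (1 - gamma) * u_prev
        if np.linalg.norm(u_new - u_prev) <= tol:
            return u_new, p
        u_prev = u_new
    return u_prev, p_max

# Hypothetical local solver: a contraction toward a fixed point u* = 2
u_star = 2.0
u, p = iterate_to_consensus(np.array([0.0]),
                            lambda v: 0.5 * (v + u_star),
                            gamma=0.6)
```

Because 0 < γ_i < 1, the update never overshoots the newly computed solution, which is what keeps the shared variables of the coupled subproblems consistent across iterations.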

7.3.2.2 Warm Start

At a new sample time, with the updated system states, the warm start is chosen so that it satisfies the simplified system's constraints in (7.86), which guarantees the feasibility of the real subsystem solution. The warm start is designed by the following algorithm, which provides two choices. One is to obtain the solution from the tracking invariant set control law K, with the simplified model prediction (x̄_i*(1|t), y*_{i,s}(t)) as the initial state and tracking target, respectively. The other is to take the solution from the simplified model prediction at time t. Both satisfy the constraints of (7.86). Note that option 2 is considered only after the subsystem has entered the tracking invariant set.

7.3.2.3 RPI Control Law and RPI Set

Here a constraint-coupled subsystem is considered. For S_i, we have x_i ∈ X_i and u_i ∈ U_i. Express the constraints as inequalities: X_i = {x_i : |h_i^T x_i| ≤ 1} and U_i = {u_i : |l_i^T u_i| ≤ 1}. The robust invariant set Φ_i is written as Φ_i = {x_i : x_i^T P_i x_i ≤ 1}. By the definition of the robust invariant set in Definition 7.3, Φ_i must ensure that every x_i ∈ Φ_i also satisfies x_i ∈ X_i, that is,

|h_i^T x_i| ≤ 1, ∀x_i ∈ Φ_i        (7.92)

Based on the definitions of N_i(strong) and N_i(weak), W_i is determined by the constraints of the neighbors in N_i(weak). For the deviation caused by neglecting the subsystems in N_i(weak), a minimization of the robust positive invariant set Φ_i is obtained by introducing a parameter γ_i ∈ [0, 1], which controls the size of Φ_i through the requirement Φ_i ⊆ √γ_i X_i:

min γ_i  s.t.  |h_i^T x_i| ≤ √γ_i, ∀x_i ∈ Φ_i        (7.93)

Besides, the input constraint U_i must be considered:

|l_i^T K_i x_i| ≤ 1, ∀x_i ∈ Φ_i        (7.94)

together with the constraint arising from the invariance property of Φ_i itself. Based on the above analysis, and referring to [1], γ_i and K_i can be obtained by solving the following LMI optimization problem.


min_{W_i, Y_i, γ_i} γ_i        (7.95)

⎡ λ_i W_i                *         *   ⎤
⎢ 0                   1 − λ_i      *   ⎥ > 0,  ∀w_i ∈ vert(W_i)        (7.96)
⎣ A_ii W_i + B_i Y_i     w_i      W_i  ⎦

⎡ 1            *   ⎤
⎣ Y_i^T l_i   W_i  ⎦ > 0        (7.97)

⎡ γ_i         *   ⎤
⎣ W_i h_i    W_i  ⎦ > 0        (7.98)

and K_i = Y_i W_i^{−1}. Thus we obtain the RPI control law K_i and the value γ_i, which indicates the size of Φ_i. To compute Φ_i itself, we follow the procedure in Ref. [1].
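As a rough numerical illustration of Definition 7.3 (all numbers hypothetical, and a scalar system instead of the LMI machinery above), the sketch below propagates the error dynamics e⁺ = A_K e + w under a stabilizing gain and checks that the error settles into a robustly invariant interval:

```python
import numpy as np

# Scalar illustration of the error dynamics e+ = A_K e + w of Definition 7.3;
# the plant data A, B, the gain K and the disturbance bound are hypothetical.
A, B, K = 0.9, 1.0, -0.5
Ak = A + B * K                   # A_K = A + B K = 0.4, stable (|A_K| < 1)
w_bound = 0.1                    # |w| <= w_bound, i.e. W = [-0.1, 0.1]

e = 1.0                          # initial plant/simplified-model mismatch
for t in range(200):
    w = w_bound * (-1) ** t      # a bounded (alternating) disturbance
    e = Ak * e + w

# The interval [-r, r] with r = w_bound / (1 - |A_K|) is robustly invariant:
# once |e| <= r, it stays there for every admissible disturbance.
r = w_bound / (1 - abs(Ak))
```

The interval [−r, r] plays the role of Φ_i: the real state never leaves the tube x̄(t) ⊕ [−r, r] around the simplified-model prediction.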

7.3.2.4 Determination of Strong Coupling

There are many ways to measure the strength of the interactions among subsystems, and different measures lead to different optimization performance. In this section, subsystem performance and connectivity are the focus, so the determination of strongly coupled neighbors is based on their influence on the size of the current subsystem's robust positive invariant (RPI) set and on the subsystem connectivity.

On the one hand, as in Definition 7.3, Φ_i is a robust invariant set for subsystem S_i, described by x_i⁺ = A_ii x_i + B_ii u_i + Σ_{j∈N_i} B_ij u_j, when the neglected inputs u_j are set to zero. Since Φ_i absorbs the deviation caused by neglecting some of the inputs u_j, j ∈ N_i, its size should be as small as possible: a smaller Φ_i gives the solution of (7.84) a larger feasible domain, and a sufficiently large domain means the solution has more degrees of freedom and yields better subsystem performance. Based on this idea, the weakly coupled neighbor set N_i(weak) to be omitted is chosen as the neighbor collection that results in a small robust invariant set Φ_i; the measure of the size of Φ_i via γ_i was introduced in the last subsection. On the other hand, connectivity, as a measure of the complexity of the subsystem topology, is easy to obtain. The numerical analysis follows.

Denote an arbitrary option for splitting the neighbors into strong and weak couplings as C_{i,(d)}, d ∈ D_i, where D_i = {1, ..., d_max} ⊂ I is the label set of the possible distributions of S_i's neighbors, and d_max is the number of feasible distributions, with d_max ≤ 2^{size(N_i)}. For a better understanding of C_{i,(d)}, take an arbitrary neighbor set N_i = {j_1, j_2, j_3} as an example: if j_1 is treated as a strongly coupled neighbor and j_2, j_3 as weak ones, then there exists d ∈ D_i such that

C_{i,(d)} = { (N_i(strong), N_i(weak)) | N_i(strong) = {j_1}, N_i(weak) = {j_2, j_3} }


The option C_{i,(d)} results in a (normalized) connectivity amount c_{i,(d)} ∈ [0, 1] and an RPI set Φ_{i,(d)} ⊆ √γ_{i,(d)} X_i, where

c_{i,(d)} = size(N_i(strong)) / size(N_i) ∈ [0, 1]        (7.99)

To find the optimal distribution C_{i,(d*)} of strongly and weakly coupled neighbors, we take

C_{i,(d*)} = argmin_{C_{i,(d)}, W_{i,(d)}, Y_{i,(d)}, γ_{i,(d)}} { γ_{i,(d)} + μ_i c_{i,(d)} : d ∈ D_i }        (7.100)

subject to, for each d ∈ D_i,

⎡ λ_i W_{i,(d)}                        *          *       ⎤
⎢ 0                                 1 − λ_i       *       ⎥ > 0,  ∀w_i ∈ vert(W_{i,(d)})        (7.101)
⎣ A_ii W_{i,(d)} + B_i Y_{i,(d)}      w_i     W_{i,(d)}   ⎦

⎡ 1                   *         ⎤
⎣ Y_{i,(d)}^T l_i   W_{i,(d)}   ⎦ > 0        (7.102)

⎡ γ_{i,(d)}           *         ⎤
⎣ W_{i,(d)} h_i     W_{i,(d)}   ⎦ > 0,  0 ≤ γ_{i,(d)} ≤ 1        (7.103)

Here μ_i is a weight coefficient of the optimization; γ_{i,(d)}, W_{i,(d)}, Y_{i,(d)} and X_{i,(d)} denote γ_i, W_i, Y_i and X_i under the distribution C_{i,(d)}, and C_{i,(d*)} is the optimal solution. This optimization means that, to decide which neighbors S_j, j ∈ N_i, are strongly coupled and which are weak while taking both connectivity and performance into account, the combination of subsystem connectivity and the size of Φ_i is minimized: the size of Φ_i should be small so that the solution of (7.84) has a larger feasible domain, and the connectivity should be small to reduce the topological complexity of the system. The solution C_{i,(d*)} thus reflects the influence on both the RPI set Φ_i and the connectivity. With this method, even though "weakly coupled" neighbors are omitted and a deviation is introduced, the simplified model retains a large degree of freedom for designing the tracking control law while reducing the connectivity at the same time, so good system performance and error tolerance are obtained.
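The partition rule above can be sketched as an enumeration over neighbor subsets; here `gamma_of` is a stand-in lookup for the LMI-computed RPI-set size γ_{i,(d)}, and all numeric values are hypothetical:

```python
from itertools import chain, combinations

def choose_partition(neighbors, gamma_of, mu):
    """Pick N_i(strong) minimizing gamma_{i,(d)} + mu * c_{i,(d)}, where
    c_{i,(d)} = |N_strong| / |N_i| as in (7.99). gamma_of maps a candidate
    strong set to the (LMI-computed) RPI-set size; here it is a lookup."""
    best = None
    subsets = chain.from_iterable(
        combinations(neighbors, r) for r in range(len(neighbors) + 1))
    for strong in subsets:
        c = len(strong) / len(neighbors)          # normalized connectivity
        score = gamma_of(frozenset(strong)) + mu * c
        if best is None or score < best[0]:
            best = (score, set(strong))
    return best[1]

# Hypothetical gamma values: keeping j1 strong shrinks the RPI set a lot,
# while j2 and j3 barely matter (weak couplings)
gammas = {frozenset(): 0.9, frozenset({'j1'}): 0.2,
          frozenset({'j2'}): 0.8, frozenset({'j3'}): 0.85,
          frozenset({'j1', 'j2'}): 0.15, frozenset({'j1', 'j3'}): 0.18,
          frozenset({'j2', 'j3'}): 0.75, frozenset({'j1', 'j2', 'j3'}): 0.1}
strong = choose_partition(['j1', 'j2', 'j3'], gammas.get, mu=0.5)
```

With these numbers the enumeration keeps only j_1 as a strong neighbor: adding j_2 or j_3 shrinks the RPI set only marginally while paying the full connectivity penalty, which mirrors the trade-off encoded in (7.100).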


7.3.3 Stability and Convergence

In this section, the feasibility and stability theorem of the strong-coupling neighbor-based DMPC is given. Denote

X_N = { x ∈ X | ∃v = (x, ū_{0:N−1}, y_s), ū(k) ∈ U, k ∈ I_{0:N−1}, y_s ∈ Y_s, s.t. v ∈ Z_N }
Z_N = { v | ū(k) ∈ U, k ∈ I_{0:N−1}, y_s ∈ Y_s, x̄(k; x, ū) ∈ X, k ∈ I_{0:N−1}, x̄(N; x, ū) ∈ Ω_K }

where x̄(k; x, ū) is the current state prediction k sample times ahead, and Y_s is the feasible tracking set based on the hard constraints on x and u.

Theorem 7.3 Suppose Assumptions 7.2, 7.3 and 7.4 hold. Then for every initial state x(0) with tracking target y_t such that v(0) ∈ Z_N, the closed-loop system under the strong-coupling neighbor-based DMPC algorithm is feasible and asymptotically stable, and converges to ŷ_s ⊕ CΦ_K, where ŷ_s = (ŷ_{1,s}, ..., ŷ_{m,s}) and ŷ_{i,s} = arg min V_0(y_{i,s}, y_{i,t}) among the feasible targets.

Proof Feasibility is proved by Lemmas 7.4 and 7.5; stability is proved in Lemmas 7.6 and 7.7 in the Appendix.

7.3.4 Simulation

The simulation takes as an example an industrial system model with five subsystems interacting with each other, in which the coupling degrees between subsystems vary considerably. The relationships of the subsystems and the designed MPCs are shown in Fig. 7.6, where dotted lines represent weak couplings and solid lines represent strong couplings. With the strategy defined here, weak couplings are neglected; as a result, as seen in Fig. 7.6, only part of the subsystems take part in the cooperation.

Fig. 7.6 An illustration of the structure of a distributed system and its distributed control framework


The subsystem models are given as follows (matrices written row-wise, rows separated by semicolons):

S_1: x_{1,t+1} = [0.5 0.6; 0 0.66] x_{1,t} + [0.1; 0.7] u_{1,t} + [0; 0.04] u_{2,t},   y_{1,t} = [0 1] x_{1,t}
S_2: x_{2,t+1} = [0.6 0.1; 0 0.71] x_{2,t} + [0.5; 1] u_{2,t} + [0; 0.3] u_{1,t} + [0; 0.01] u_{3,t},   y_{2,t} = [0 1] x_{2,t}
S_3: x_{3,t+1} = [0.7 0.2; 0.1 0.4] x_{3,t} + [0.9; 1] u_{3,t} + [0; 0.4] u_{2,t} + [0; 0.05] u_{4,t},   y_{3,t} = [0 1] x_{3,t}
S_4: x_{4,t+1} = [0.9 0.7; 0 0.6] x_{4,t} + [0.4; 0.4] u_{4,t} + [0.3; 0.6] u_{3,t} + [0; 0.01] u_{5,t},   y_{4,t} = [0 1] x_{4,t}
S_5: x_{5,t+1} = [0.8 0; 0.5 0.78] x_{5,t} + [0; 1] u_{5,t} + [0.4; 0.2] u_{4,t},   y_{5,t} = [0 1] x_{5,t}        (7.104)

Under the strong-coupling neighbor-based DMPC, the connections S_2 → S_1, S_3 → S_2, S_4 → S_3 and S_5 → S_4 are neglected. For the five subsystems in the given model, the values γ_1, γ_2, γ_3, γ_4 and γ_5, which evaluate the system performance, are obtained from optimization problem (7.100) in Sect. 7.3.2.4:

(γ_1, γ_2, γ_3, γ_4, γ_5) = (0.54, 0.66, 0.72, 0.53, 0)        (7.105)

Among them, γ_5 = 0 shows that subsystem S_5 has no weakly coupled upstream neighbors. The robust invariant set feedback control laws are

{K_1, K_2, K_3, K_4} = {[−0.119 −0.762]^T, [−0.171 −0.434]^T, [−0.316 −0.251]^T, [−0.724 −0.966]^T}

The optimization horizon N is 10 sample times. Take Q = I_{10×10} and R = I_{5×5}. To accelerate the iterative process, in both iterative algorithms the termination conditions of the iteration are ‖u_i^{[p]} − u_i^{[p−1]}‖² ≤ 10⁻³ or p > 100; if either condition is satisfied, the iteration terminates. The following shows the system performance when the strong-coupling neighbor-based DMPC algorithm is applied. Different set-points are chosen to test the system stability: three groups of set-points are given to verify the system's feasibility and stability. For comparison, the control results of the cooperative DMPC strategy, which cooperates with all neighbors, are also introduced. The simulation takes 74.3 s in total for 90 sampling times. The performance


Fig. 7.7 States of each subsystem under the control of SCN-DMPC and Cooperative DMPC, respectively

comparison of the strong-coupling neighbor-based DMPC (SCN-DMPC) with the cooperative DMPC, in which each subsystem uses the full system's information in its controller, is shown in Figs. 7.7, 7.8 and 7.9. Figure 7.7 shows the state evolution of each subsystem. The curves of SCN-DMPC and cooperative DMPC are close to each other, because the weak couplings in this example are tiny compared with the strong couplings and thus have little impact on the system dynamics, even though a small difference exists. Moreover, the SCN-DMPC optimization is always feasible and remains stable as the tracking target changes. Figure 7.8 shows the inputs of the two algorithms; their control actions are almost identical. Tracking results are shown in Fig. 7.9: there is a small offset in subsystems S_1 and S_3, which could be eliminated by adding an observer, while all other subsystems track the steady-state target without steady-state offset. The simulation results of Figs. 7.7, 7.8 and 7.9 thus verify the stability and good optimization performance of the closed-loop system under SCN-DMPC. Specifically, given the state-equation input weight coefficients of each subsystem, the deviations of the five subsystems

7.3 Stabilized Neighborhood Optimization Based Distributed Model …


Fig. 7.8 Inputs of each subsystem under the control of SCN-DMPC and Cooperative DMPC, respectively

fit ||w1|| ≤ 0.04, ||w2|| ≤ 0.01, ||w3|| ≤ 0.05, ||w4|| ≤ 0.01. The effect of these disturbances is very small compared with that of each subsystem's inputs; under the robust feedback control law, they have little influence on the system dynamics. Besides, the simplified-model control law computed in SCN-DMPC equals the control law that optimizes the global performance of the simplified system. As a result, the system performance under SCN-DMPC is close to that under Cooperative DMPC. In circumstances where the weak interactions are comparable to the effect of each subsystem's inputs, omitting the weak couplings, which sacrifices part of the performance to achieve lower network connectivity, may have a larger influence on the system dynamics, and the simulation results can differ. Moreover, the mean square errors between the outputs of the closed-loop systems under strong-coupling neighbor-based optimization DMPC and Cooperative DMPC are listed in Table 7.2; the total error of the 5 subsystems is only 3.5, which illustrates the good optimization performance of SCN-DMPC. Besides, connectivity is compared in Table 7.3, which shows that when strong-coupling neighbor-based DMPC is applied, the total number of information connections is reduced to 8, i.e., 5 connections are avoided compared with Cooperative DMPC. Above all, the simulation results show that the proposed SCN-DMPC achieves performance close to that of Cooperative DMPC while significantly reducing information connectivity.
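The common termination test of the two iterative algorithms described above (stop when ||u_i^[p] − u_i^[p−1]||² ≤ 10⁻³ or p > 100) can be sketched as follows; `solve_local_round` is a hypothetical stand-in for one round of the local optimizations, not the book's implementation:

```python
import numpy as np

def iterate_dmpc(u0, solve_local_round, tol=1e-3, p_max=100):
    """Run the iterative DMPC exchange until the input update is small or
    the iteration limit is reached, mirroring the rule in the text:
    stop when ||u[p] - u[p-1]||^2 <= 1e-3 or p > 100."""
    u_prev = np.asarray(u0, dtype=float)
    for p in range(1, p_max + 1):
        u = solve_local_round(u_prev)         # one round of local optimizations
        if np.sum((u - u_prev) ** 2) <= tol:  # squared update norm
            return u, p                       # converged
        u_prev = u
    return u_prev, p_max                      # iteration limit reached

# toy contraction standing in for a round of local problems (fixed point 2.0)
u, p = iterate_dmpc(np.zeros(5), lambda u: 0.5 * u + 1.0)
```

With the toy contraction the update shrinks geometrically, so the squared-norm test fires after a handful of rounds; in the chapter's algorithms each round is a constrained QP instead of the toy map.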


Fig. 7.9 Output of each subsystem under the control of SCN-DMPC and Cooperative DMPC, respectively

Table 7.2 MSE of outputs between SCN-DMPC and Cooperative DMPC

Items | CMPC   | CF-DMPC | LCO-DMPC | SCN-DMPC | Cooperative DMPC
MSE   | 0.5771 | 1.1512  | 0.7111   | 0.1375   | 0.9162

Table 7.3 Comparison of system connectivity with different control methods

Items     | SCN-DMPC | Cooperative DMPC
S1        | 1        | 2
S2        | 2        | 3
S3        | 2        | 4
S4        | 2        | 3
S5        | 1        | 1
Σ (total) | 8        | 13


7.4 Summary of This Chapter This chapter focuses on a distributed predictive control method based on neighborhood optimization, in which each subsystem considers not only its own performance index but also those of its neighboring subsystems during the solution process, with a view to improving the global performance of the system. In the solution process, each subsystem communicates only with its neighboring subsystems, so the demand on network communication resources is very low. In addition, the stability proof and performance analysis of the distributed MPC based on neighborhood optimization are given in this chapter, and numerical simulation examples illustrate that this method can improve the global performance of the system. Then, a strong-coupling neighbor-based optimization DMPC method is introduced, in which the influence of weakly coupled subsystems is neglected in the cooperative design. A method based on closed-loop system performance and network connectivity is proposed to determine the strength of the coupling relationships among subsystems. The feasibility and stability of the closed-loop system in the case of target tracking are analyzed. Simulation results show that the proposed SCN-DMPC achieves performance similar to that of the DMPC which does not neglect the information and influence of weakly coupled subsystems, while connectivity is significantly decreased.

References

1. Zheng Y, Li S (2014) Distributed predictive control for building temperature regulation with impact-region optimization. IFAC Proc Volumes 47(3):12074–12079
2. Zheng Y, Li S, Li N (2011) Distributed model predictive control over network information exchange for large-scale systems. Control Eng Pract 19(7):757–769
3. Zheng Y, Li S, Wang X (2009) Distributed model predictive control for plant-wide hot-rolled strip laminar cooling process. J Process Control 19(9):1427–1437
4. Zheng Y, Li S, Wu J, Zhang X (2012) Stabilized neighborhood optimization based distributed model predictive control for distributed system. In: Proceedings of the 31st Chinese control conference, Hefei. IEEE
5. Zheng Y, Li S (2013) Coordinated predictive control of distributed systems under network information mode. Acta Autom 39(11)

Chapter 8

Application of Distributed Model Predictive Control in Accelerated Cooling Process

8.1 Overview Accelerated and Controlled Cooling (ACC) is the main controlled-cooling technique. In the production of thick plate, controlled rolling can effectively improve the properties of the steel. However, due to thermal-deformation effects, the austenite-to-ferrite transformation temperature (Ar3) increases, so that ferrite precipitates at higher temperatures and the ferrite grains grow during cooling, degrading the mechanical properties [1]. Therefore, controlled rolling of steel plate is generally combined with controlled cooling: by making full use of the residual heat of the plate after rolling and controlling its cooling curve, the microstructure and properties of the steel are improved [2]. The authors have cooperated with the research institute of a large domestic steel group; the thick-plate production line of this company adopts accelerated cooling technology, and its accelerated cooling process is columnar laminar-flow cooling, which is broadly representative. Therefore, this chapter addresses this particular production line. As new materials continue to be created, the demands on steel, a classic material, are increasing. For example, the automotive industry needs lightweight, thin, yet high-performance steel plates for the production of automobiles; oil transportation needs pipeline steel with good low-temperature toughness and welding properties to lay oil pipelines; and the construction industry needs high-performance structural steel. In the face of this demand, steel producers, in addition to adding alloys, are placing higher demands on the control of the cooling section, requiring control of the cooling curve throughout the cooling process.
This requires a control method that is highly accurate, flexible (suitable for a wide range of cooling profiles), and suitable for mass production. In view of these requirements, the traditional method of using the plate speed to control a single cooling rate and final cooling temperature is no longer adequate; the dimensionality of the control variables needs to be increased to improve the flexibility and accuracy of control, by using the cooling water flow rate per group

© Chemical Industry Press 2023 S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2_8


of cooling nozzles as the control quantity to control the steel plate cooling curve. If the section from the cooling-zone inlet to the cooling-zone outlet is considered as an open system, the inputs and outputs of the control problem can be expressed analytically, facilitating the use of model-based optimal control methods to optimize the cooling water flow. The accelerated cooling process is thus a large-scale system with multiple inputs and multiple outputs. Considering that the ACC process is a large-scale system, and in order to accurately control the "time–temperature" curve of each plate point, the target reference trajectory needs to change continuously according to the open cooling temperature of each plate point. To speed up the computation and meet the requirement of a time-varying reference trajectory, a distributed predictive control algorithm with a time-varying target is designed. The optimization target of each local controller is set dynamically according to the open cooling temperature of the steel plate; the prediction model is in state-space form and is linearized near the operating point in each control period, to avoid the large amount of computation caused by a nonlinear model.
The remainder of this chapter is organized as follows: Section 8.2 introduces the accelerated cooling process and device instrumentation, the accelerated cooling process simulation platform, and the process control requirements; Section 8.3 derives the heat balance equation for the accelerated cooling process, distributed by spatial location; Section 8.4 details the model predictive control of the accelerated cooling process based on target set-value recalculation proposed in this chapter, including how the optimization target is recalculated, the state-space transformation of each subsystem model, the design of the extended Kalman filter, the design of each local controller, and the iterative predictive control solution method proposed in this chapter; Section 8.5 illustrates the advantages of the method with numerical results. Finally, a brief summary of the chapter is given.

8.2 Accelerated Cooling Process 8.2.1 Accelerated Cooling Process and Plant Instrumentation The accelerated cooling process is shown schematically in Fig. 8.1. The accelerated cooling device is generally installed between the finishing mill and the straightening machine and consists of multiple nozzles on the top and bottom. After finish rolling, the steel plate is continuously cooled by the cooling device to the target temperature; after reddening, it enters the straightening machine, which flattens the deformation of the plate caused by rolling and cooling. The purpose of accelerated cooling is to select the best cooling rate to meet the needs of different hot-rolled products, so that the steel plate needs no subsequent heat treatment after accelerated cooling.


Fig. 8.1 Schematic diagram of the accelerated cooling process for medium thickness plates

The phase transformation product of accelerated cooling is ferrite plus pearlite or ferrite plus bainite. The accelerated cooling process can lower the phase transformation temperature and increase the number of ferrite nuclei, thus inhibiting the growth of ferrite grains after the phase transformation and further refining them, while making the generated pearlite more uniformly distributed and possibly generating a fine bainite structure [3, 4]. Through a reasonable cooling schedule, accelerated cooling after rolling can strengthen the thick plate without weakening its toughness, and it improves plasticity and welding properties through the reduction of carbon content or alloying elements. The accelerated cooling process ensures the required plate dimensions while controlling and improving the comprehensive mechanical properties of the plate; it also improves the working conditions of the workshop, reduces the area of the cooling bed, and makes effective use of post-rolling waste heat, saving energy, reducing costs, and increasing production capacity and economic benefits. The dimensions of the accelerated cooling unit for medium-thick plate in a steel mill are shown in Fig. 8.2. The accelerated cooling device is installed between the finishing mill and the straightening machine and consists of multiple groups of nozzles, each divided into an upper and a lower part. The steel plate is continuously cooled to the target final cooling temperature by the cooling device after finish rolling and, after reddening, enters the straightening machine to flatten the deformation caused by rolling and cooling. The cooling method of the unit is continuous cooling. The unit is divided into three sections: the air-cooled section, the water-cooled section and the reddening section. As shown in Fig. 8.2, the air-cooled section, between the exit of the finishing mill and the cooling device, is 45.1 m long; the water-cooled section, from the entrance to the exit of the cooling device, is 25 m long; the reddening section, between the exit of the cooling device and the straightening machine, is 20.5 m long. The basic parameters are as follows. ➀ Cooling method: continuous cooling. ➁ Cooling unit size: 5500–25000 mm. ➂ Maximum width of cooled steel plate: 5000 mm. ➃ Open cooling temperature: 750–900 °C.


Fig. 8.2 Dimensions of the accelerated cooling process

➄ Final cooling temperature: 500–600 °C. (1) Cooling water nozzles The accelerated cooling unit consists of 15 groups of nozzles in total. Each group consists of an upper and a lower part; the upper (lower) nozzles are evenly mounted on the upper (lower) nozzle collector pipe. The upper nozzles adopt laminar-flow cooling, and the lower collector pipes adopt jet cooling. To facilitate the injection of cooling water, each lower collector pipe is installed between two rolls of the roller table. The spacing between collector pipes is 1.6 m, corresponding to the roll spacing. The upper collector pipes correspond one-to-one to the lower collector pipes and have the same spacing. Both the upper and lower nozzle collector pipes are made of stainless steel to avoid corrosion, and both are connected to a common collector pipe whose diameter is 4–10 times that of the lower pipes. This common collector acts as a buffer to absorb fluctuations in water pressure and also equalizes the water distribution among the nozzle collector pipes. The water flow of each nozzle collector pipe is controlled by a regulating valve, and an external profiling flow meter is installed about 1 m downstream of each regulating valve to measure the cooling water flow. The upper and lower collector pipes distribute water according to a certain upper-to-lower ratio, so that the upper and lower cooling of the steel plate is uniform. (2) Instrumentation system This accelerated cooling unit is fitted with multiple point pyrometers, infrared scanners, thickness and velocity gauges, and other inspection instruments, as listed in Table 8.1.


Table 8.1 Accelerated cooling unit measuring instruments

Measuring instrument    | Installation position                                   | Function
PY01                    | At point P1, 12.4 m from the rolling mill               | Detects the finish-rolling temperature of the upper surface of the plate; used for dynamic control of the cooling parameters and correction of the preset models
PY02                    | At point P2, 6.4 m from the entrance of the cooling area | Detects the open cooling temperature of the upper surface of the plate
PY02a                   | At point P2, 6.4 m from the entrance of the cooling area | Detects the open cooling temperature of the lower surface of the plate
PY03                    | At point P3, the exit of the cooling area               | Detects the actual cooling temperature of the upper surface, to prevent supercooling of the plate surface
PY03a                   | At point P3, the exit of the cooling area               | Detects the actual cooling temperature of the lower surface, to prevent supercooling of the plate surface
PY04                    | At point P4, 20.5 m from the exit of the cooling area   | Detects the reddening temperature of the upper surface of the plate
Scanning thermometer S1 | Entrance of the cooling area                            | Collects temperature data at the center point in the width direction of the plate
Scanning thermometer S2 | Exit of the cooling area                                | Collects temperature data of the plate leaving the cooling area
Thermal metal detectors | Entrance of the cooling area (six)                      | Detect the running position of the plate
Thermal metal detectors | Exit of the cooling area (two)                          | Detect the running position of the plate
Rotary encoders         | Each roller table group                                 | Track the running position and speed of the plate
Thickness gauge         | At point P1                                             | Detects the plate thickness and provides the start signal

8.2.2 Accelerated Cooling Process Simulation Platform (1) Cooling unit simulation equipment The accelerated cooling process simulation device and control system, shown in Fig. 8.3, were designed and developed together with the production plant in order to test control algorithms. The experimental device is designed on the basis of the actual parameters of the accelerated cooling process of the steel mill's medium-thick plate line introduced above, scaled down at a ratio of 10:1. The device partly uses actual physical components and mechanisms and partly uses some


Fig. 8.3 Experimental setup for accelerated cooling process simulation

typical steel plate experimental data and process models; for these typical steel plates, the model accuracy has been verified by the manufacturer. (2) Basic automation system The structure of the automatic control system of the experimental unit is shown in Fig. 8.4. It includes industrial control computers (IPC0 to IPC6), a Siemens TDC programmable controller, a TCP/IP network card, a Profibus communication card, 2 MB of program memory and eight ET200M input/output modules. WinCC software and an OPC server are installed on IPC0: WinCC is used to supervise the accelerated cooling process, and the OPC server is used to exchange information with other components and to provide the collected data to the other IPCs. IPC0 to IPC6 are reserved for the implementation of advanced control algorithms. The control algorithms in this chapter are implemented on IPC0 to IPC6, using a mix of C++ and MATLAB. The results obtained by the upper-level control algorithms in the IPCs are sent through the OPC server to the PLC, which directly controls the actuators in the accelerated cooling process simulator and collects the measurement data through the input/output modules. The exchange of information among IPC0 to IPC6 and with the PLC is achieved through the OPC server over TCP/IP, while the Profibus protocol is used for communication between the PLC and the input/output modules. This simulation platform provides a good debugging and verification environment for the study of advanced control methods for the accelerated cooling of medium-thick plates.


Fig. 8.4 Structure of automatic control system of accelerated cooling process experimental device

8.2.3 Process Control Requirements The purpose of accelerated cooling control in medium-thick plate production is to realize different cooling profiles to meet the needs of different hot-rolled products, so that the steel plate needs no subsequent heat treatment after accelerated cooling. The process technology objectives are:
➀ Control the temperature of each point of the steel plate along a desired cooling curve down to the final cooling temperature.
➁ Control the final cooling temperature T_FT of the steel plate to be consistent with its set value T_FT^o.
➂ Cool the steel plate uniformly on top and bottom, with consistent temperatures in the width and length directions.
The control variables are:
➀ The number of open cooling water valve groups N_ho.
➁ The cooling water flow rate of each group of nozzle collector pipes: F_i (i = 1, 2, ..., N_ho).


➂ The width of the edge masking.
➃ The plate speed.
The temperature uniformity in the width and thickness directions of the steel plate is controlled by the edge masking and the upper-to-lower water ratio, independently of the other control quantities; suitable values can be obtained from experimental data based on parameters such as plate thickness, plate width and open cooling temperature. Therefore, it can be assumed that the upper-to-lower water ratio and the edge-masking configuration are reasonable, the cooling water flow of a pair of upper and lower nozzles is treated as a single quantity, and the uniformity of cooling of the upper and lower surfaces and across the plate width is not considered further. Considering the need to control the entire cooling curve with a high degree of freedom, the control problem of the accelerated cooling process of the medium-thick plate is thus simplified to one with a fixed plate speed, in which the cooling water flow rate F_i(k) of each nozzle group is used as a control variable to control the cooling rate of the plate. As in Fig. 8.5, the section from P2 to P4 is considered as an open system, and the cooling curve at a plate point is transformed into a "temperature-position" curve by choosing the temperatures at positions l_1, l_2, ..., l_m as the reference temperatures, defined as

r = [ r_1 r_2 ··· r_m ]^T     (8.1)

where, in the water-cooled zone, l_2, ..., l_{N_h+1} correspond to the right boundary of the spray of each group of nozzles. As the open cooling temperature of each plate point is different, the reference "temperature-position" cooling curve of each plate point is also different. As shown in the figure, C1 and C2 are the "temperature-position" curves corresponding to the open cooling temperatures

Fig. 8.5 Position-temperature curves for different plate points


of two different plate points passing P2, respectively. This means that the temperature setpoints at positions l_1, l_2, ..., l_m vary according to the plate point being cooled there at that time. In addition, the whole system is a large-scale nonlinear system with relatively fast dynamics, so the control algorithm in the controller must compute quickly. Therefore, for such an accelerated cooling process using nozzle flows as the control variables, the control optimization method needs to satisfy the following two requirements. ➀ The optimization process needs to take into account changes in the control objectives. ➁ It must meet the speed requirements of online computation. Given the system's requirement on controller execution speed, a centralized MPC is not very practical. Therefore, this chapter designs a distributed predictive control method for the accelerated cooling process, in which setpoint recalculation, operating-point linearization and neighborhood optimization are used to satisfy the above two requirements.

8.3 Heat Balance Equation for the Unit

Using P2 to P4 and the upper and lower surfaces of the steel plate as boundaries, the open system Γ can be obtained as shown in Fig. 8.6. Based on the energy exchange of the system, and combined with the results of academic and industrial research, the accelerated cooling process can be represented by the energy balance equation in a Cartesian coordinate system as follows:

ẋ = (λ / (ρ·c_p)) · ∂²x/∂z² − l̇ · ∂x/∂l     (8.2)

where x(z, l, t) is the plate temperature at location (z, l); l and z are the coordinates in the plate length and thickness directions; ρ is the steel density; c_p is the specific heat capacity; λ is the heat transfer coefficient, taken as a scalar, with heat transfer in the length and width directions neglected; in model (8.2), the latent heat is accounted for in the temperature-dependent thermophysical parameters. The boundary conditions for Eq. (8.2) are

∓λ ∂T/∂z |_{z=±d/2} = ±h·(T − T∞)
−λ ∂T/∂z |_{z=0} = 0

where h is the heat-exchange coefficient of the upper and lower surfaces; d is the plate thickness; T∞ is the ambient temperature T_m or the cooling water temperature T_W, depending on the heat-exchange condition. The radiation heat-exchange coefficient h_A


Fig. 8.6 Energy flow of the open-ended system Γ

in the air-cooled zone, the convection heat-exchange coefficient h_W between the cooling water and the steel plate, and the radiation heat-exchange coefficient h_R in the reddening zone are, respectively,

h_A = k_A · [σ0·ε·(T⁴ − T∞⁴)/(T − T∞)]

h_W = (2186.7/10⁶) · k_W · α · (T/T_B)^a · (v/v_B)^b · (F/F_B)^c · (T_W/T_WB)^d

h_R = k_A · [σ0·ε·(T⁴ − T∞⁴)/(T − T∞)]

where ε is the radiation coefficient; σ0 is the Stefan-Boltzmann constant equal to 5.67 × 10−8 W/m2 K4 ; v is the roller conveyor velocity; F is the nozzle water flow rate; v B , FB , TB and TW B are constants for the baseline velocity, flow rate, plate temperature and water temperature, respectively, at the time of modeling; a, b, c and d are constants; and k A and k W are correction factors that need to be derived online.
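These correlations can be evaluated directly once the baseline constants and exponents are identified. The sketch below is illustrative only: the values chosen for T_B, v_B, F_B, T_WB, the exponents a–d and the correction factors are placeholders, since the real values are identified online from plant data.

```python
sigma0 = 5.67e-8                      # Stefan-Boltzmann constant [W/(m^2 K^4)]

# Illustrative (assumed) baselines, exponents and correction factors
T_B, v_B, F_B, T_WB = 800.0, 1.0, 60.0, 25.0
a, b, c, d = 0.2, 0.5, 0.8, -0.3
k_W, alpha_w = 1.0, 1.0

def h_water(T, v, F, T_W):
    """Convection coefficient h_W of the water-cooled zone, following
    h_W = 2186.7e-6 * k_W * alpha * (T/T_B)^a (v/v_B)^b (F/F_B)^c (T_W/T_WB)^d."""
    return 2186.7e-6 * k_W * alpha_w * (T / T_B) ** a * (v / v_B) ** b \
        * (F / F_B) ** c * (T_W / T_WB) ** d

def h_rad(T, T_inf, k_A=1.0, eps=0.8):
    """Equivalent radiation coefficient h = k_A * sigma0*eps*(T^4 - T_inf^4)/(T - T_inf)."""
    return k_A * sigma0 * eps * (T ** 4 - T_inf ** 4) / (T - T_inf)

hw0 = h_water(800.0, 1.0, 60.0, 25.0)  # at the baselines, every ratio is 1
```

At the baseline operating point all four ratios equal one, so h_water reduces to the leading constant; increasing the water flow F raises h_W through the (F/F_B)^c term.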

8.4 Distributed Predictive Control Based on Optimal Objective Recalculation The accelerated cooling process, with the water flow rate of each valve as a manipulated variable, can be viewed as a large-scale system with multiple inputs and multiple outputs. For computational reasons, the entire system is divided into the N subsystems shown in Fig. 8.7, and each subsystem is controlled by a local MPC, where the sth subsystem is bounded by positions l_{s−1} and l_s (s = 1, 2, ..., N), corresponding to the area for which the sth group of nozzles is responsible. The control


Fig. 8.7 DMPC control framework for accelerated cooling process

quantity of each subsystem is the corresponding cooling-nozzle water flow rate. The local MPCs exchange information about the interactions between the subsystems through the network. The optimal control solution obtained by each local controller is transformed into a cooling water flow rate by a nonlinear transformation, which is then used as the set value for the underlying PI controller. The reference output of each local MPC and the number of open cooling-water nozzle valves are re-determined in each control period based on the open cooling temperature measurements. For subsystems without cooling water, the local MPC degenerates into a predictor, whose function is to estimate the future state sequence of the corresponding subsystem and broadcast it to the other subsystems through the network. In this way the local MPCs, the predictors and the PI controllers collaborate, via network communication, in controlling the plate temperature during the cooling process.

8.4.1 Subsystem Optimization Objective Recalculation The target "time–temperature" cooling curve varies from plate point to plate point because of their different open cooling temperatures. As the steel plate moves, the temperature setpoint at the temperature control point l_s varies from control period to control period. This subsection describes how to calculate the optimization target of each subsystem. For simplicity, a cooling curve with a single cooling rate is used as an example.


Fig. 8.8 Reference Sequence of temperature

The detailed procedure of the derivation is shown in Fig. 8.8, where the axes are cooling-device position, time and temperature, respectively. The cooling process starts at point P2. L1 is the "position-temperature" cooling curve of the plate point entering the cooling zone at moment k; L2 is the sequence of measured values of the plate at point P2. If the desired temperatures of all parts of the steel plate in the cooling zone are connected into a curve, the temperature set values of the curve at each position in the cooling zone at time k + h_i can be obtained. The curve L3 can be obtained by connecting the set values at all moments at l_i; this gives a two-dimensional set-value curve at position l_i. The set-value curves for the other positions can be derived in the same way. Before determining the optimization objective, the plate speed should first be determined. In order to reduce the "granularity" of the process control, the plate speed can be calculated by

v = l_h × N_h × R_C / (d_max − x_f)     (8.3)

where R_C is the cooling rate; d_max is the maximum open cooling temperature (which can be predicted from the measured value of PY01); x_f is the final cooling temperature; and N_h is the number of nozzle groups in the cooling zone. For a plate point arriving at point P2 at moment k, assuming that its open cooling temperature is d(k), the ideal temperature of the plate point as it travels under the ith nozzle can be calculated as

r_i(τ_i + k) = max{ d(k) + Δd(k) − R_C × τ_i , x_f }     (8.4)

where τ_i = x_i/v; Δd(k) is the difference between the temperature drop at the first nozzle, cooling under the cooling water at rate R_C, and the temperature


drop using air cooling, Δd(k) = l_1/v × (R_C − R_A), where R_A is the air-cooling rate. Since τ_i in Eq. (8.4) is a real number, the resulting setpoint does not necessarily fall on a sampling instant. Therefore, let h_i = int(τ_i); the set value at time h_i can then be found by an interpolation algorithm (e.g., a quadratic spline function). Linear interpolation is used here:

r_i(h_i + k) = r_i(τ_i + k)(1 − τ_i + h_i) + r_i(τ_i − 1 + k)(τ_i − h_i)     (8.5)

For simplicity of description, let

r_i(k) = f(d(k − h_i), d(k − 1 − h_i))     (8.6)
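The plate-speed and setpoint formulas (8.3)-(8.5) can be sketched as follows. All numeric parameters here (nozzle pitch, cooling rates, temperatures, and the assumed distance l_1 from P2 to the first nozzle) are illustrative placeholders, not plant values:

```python
# Illustrative (assumed) parameters only
l_h, N_h = 1.6, 15           # nozzle-group pitch [m], number of open nozzle groups
R_C, R_A = 15.0, 1.0         # water- and air-cooling rates [deg C / s]
d_max, x_f = 820.0, 550.0    # max open-cooling and final temperatures [deg C]
l_1 = 6.4                    # assumed distance from P2 to the first nozzle [m]

# plate speed, Eq. (8.3)
v = l_h * N_h * R_C / (d_max - x_f)

def r_ideal(d_k, x_i):
    """Ideal temperature of a plate point with open cooling temperature d_k
    when it reaches position x_i, Eq. (8.4), clipped below at x_f."""
    tau_i = x_i / v
    delta_d = l_1 / v * (R_C - R_A)   # air-cooled stretch correction, Δd(k)
    return max(d_k + delta_d - R_C * tau_i, x_f)

def r_sampled(d_k, d_k_prev, x_i):
    """Linear interpolation onto the sampling grid h_i = int(tau_i), in the
    spirit of Eq. (8.5); d_k_prev is the previous period's open temperature."""
    tau_i = x_i / v
    h_i = int(tau_i)
    return (r_ideal(d_k, x_i) * (1 - tau_i + h_i)
            + r_ideal(d_k_prev, x_i) * (tau_i - h_i))
```

With these numbers v = 360/270 ≈ 1.33 m/s, and far enough downstream the ideal profile clips at the final cooling temperature x_f, which is exactly the max{·, x_f} in (8.4).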

It is worth noting that when the plate speed is constant, h_i is a constant and has the same value for all plate points. The system-wide optimization goal thus becomes

min J(k) = Σ_{s=1}^{N} Σ_{i=1}^{P} ( ||r_s(k + i) − ŷ^s(k + i|k)||²_{Q_s} + ||u^s(k + i − 1|k)||²_{R_s} )     (8.7)

Here, the temperature is used as the basic target variable. Going one step further, the micro-grain structure of each plate point could be used as the final control target: at each control cycle, each plate point would calculate its future "time–temperature" cooling profile based on the target crystal structure and the current state, and these time–temperature curves would then be transformed, according to the logic represented in Fig. 8.8, into a cooling target that controls the crystalline microstructure at each location of the plate. After obtaining the setpoint of each subsystem in each control cycle, the controller controls the cooling water flow according to the distributed predictive control method described in the following subsections.
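A minimal sketch of evaluating the stacked cost (8.7), assuming scalar subsystem outputs stored as N×P arrays and common scalar weights in place of the general weighting matrices Q_s and R_s:

```python
import numpy as np

def global_cost(r, y_hat, u, Q=1.0, R=1.0):
    """Evaluate Eq. (8.7) for scalar subsystem outputs: r, y_hat, u have
    shape (N_subsystems, P_horizon); Q and R are taken here as common
    scalar weights (the book's Q_s, R_s are general weighting matrices)."""
    e = r - y_hat                        # tracking errors r_s(k+i) - y^s(k+i|k)
    return float(np.sum(Q * e ** 2) + np.sum(R * u ** 2))

# two subsystems, horizon 3, unit weights, zero inputs
r = np.array([[550.0, 545.0, 540.0], [560.0, 555.0, 550.0]])
y = np.array([[552.0, 545.0, 541.0], [560.0, 556.0, 550.0]])
J = global_cost(r, y, np.zeros((2, 3)))
```

Each local controller in the distributed scheme minimizes only its own (and its neighbors') share of this sum, but the stacked form above is what the cooperative benchmark optimizes.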

8.4.2 Subsystem State Space Model The model (8.2) given previously is in partial differential form and is not suitable as a prediction model for the local MPCs. Therefore, the distributed-parameter model is first lumped. When the finite-volume method is used to lump a distributed-parameter model, sufficient approximation accuracy is obtained as long as the grid is fine enough, and each small cell has a clear physical meaning; the finite-volume method is therefore used in this chapter to lump the model (8.2). As shown in Fig. 8.9, the sth subsystem is divided into n_s segments in the l-direction and into m layers in the z (thickness) direction. The area of each grid cell is Δl·Δz, where Δl and Δz


Fig. 8.9 Meshing of each subsystem

are the length and thickness of each cell. Define the temperature of the cell in the ith layer in the thickness direction and the jth segment in the length direction as x_{i,j}^s. Applying the energy balance Eq. (8.2) to the cells on the upper and lower surfaces gives

ẋ_{1,j}^s = − [λ(x_{1,j}^s) / (ρ(x_{1,j}^s)·c_p(x_{1,j}^s)·Δz²)] · ( x_{1,j}^s − x_{2,j}^s − Δz·(h_{1,j}^s/λ(x_{1,j}^s))·(x_{1,j}^s − x∞) ) − (1/Δl) · v · (x_{1,j}^s − x_{1,j−1}^s)     (8.8)

ẋ_{m,j}^s = − [λ(x_{m,j}^s) / (ρ(x_{m,j}^s)·c_p(x_{m,j}^s)·Δz²)] · ( x_{m,j}^s − x_{m−1,j}^s − Δz·(h_{m,j}^s/λ(x_{m,j}^s))·(x_{m,j}^s − x∞) ) − (1/Δl) · v · (x_{m,j+1}^s − x_{m,j−1}^s)     (8.9)

For the internal cells,

ẋ_{i,j}^s = − [λ(x_{i,j}^s) / (ρ(x_{i,j}^s)·c_p(x_{i,j}^s)·Δz²)] · ( x_{i+1,j}^s − 2x_{i,j}^s + x_{i−1,j}^s ) − (1/Δl) · v · (x_{i,j+1}^s − x_{i,j−1}^s)     (8.10)

where v is the plate velocity, and the boundary conditions are x^1_{i,0} = x^{ST}_i, x^s_{i,0} = x^{s-1}_{i,n_{s-1}}, x^s_{i,n_s+1} = x^{s+1}_{i,1} (i = 1, 2, …, m). In the industrial process, the measurements provided are digital signals with a sampling time of Δt, so the Euler method is used here to discretize the model. Define

$$a(x^s_{i,j}) = -\Delta t\cdot\lambda(x^s_{i,j})\big/\left(\Delta z^2\,\rho(x^s_{i,j})c_p(x^s_{i,j})\right) \tag{8.11}$$

$$\beta(x^s_{i,j}) = \Delta z\cdot a(x^s_{i,j})\big/\lambda(x^s_{i,j}) \tag{8.12}$$

$$\gamma = \Delta t\cdot v/\Delta l \tag{8.13}$$

8.4 Distributed Predictive Control Based on Optimal Objective Recalculation


Then the nonlinear state space expression for the sth subsystem can be derived from Eqs. (8.8)–(8.10) as follows:

$$\begin{cases} x^s(k+1) = f(x^s(k))\cdot x^s(k) + g(x^s(k))\cdot u^s(k) + D\cdot x^{s-1}_{n_{s-1}}(k)\\[2pt] y^s(k) = C\cdot x^s(k)\end{cases} \tag{8.14}$$

$$u^s(k) = 2186.7\times 10^{-6}\times k_W\cdot\alpha\cdot\left(\frac{v}{v_B}\right)^{b}\left(\frac{F_W}{F_B}\right)^{c}\left(\frac{T_W}{T_{WB}}\right)^{d},\quad s\in C_W \tag{8.15}$$

$$u^s(k) = 1,\quad s\in C_A \tag{8.16}$$

where

$$x^s = \left[\,(x^s_1)^T\ (x^s_2)^T\ \cdots\ (x^s_{n_s})^T\,\right]^T,\qquad x^s_j = \left[\,x^s_{1,j}\ x^s_{2,j}\ \cdots\ x^s_{m,j}\,\right]^T\ (j = 1, 2, \ldots, n_s)$$

is the state vector of subsystem s and y^s is the average temperature of the last cell column of the sth subsystem; C_W is the set of subsystems cooled by cooling water and C_A is the set of subsystems cooled by radiant heat exchange. f(x^s(k)), g(x^s(k)), D and C are the coefficient matrices of subsystem s, specified as

$$f(x^s(k)) = \begin{bmatrix}\Psi_1(x^s(k))\cdot\Delta & 0 & \cdots & 0\\ 0 & \Psi_2(x^s(k))\cdot\Delta & & \vdots\\ \vdots & & \ddots & 0\\ 0 & 0 & \cdots & \Psi_{n_s}(x^s(k))\cdot\Delta\end{bmatrix} + \begin{bmatrix}(1-\gamma)I_m & 0 & \cdots & 0\\ \gamma I_m & (1-\gamma)I_m & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & \gamma I_m & (1-\gamma)I_m\end{bmatrix} \tag{8.17}$$

$$g(x^s(k)) = \left[\,\left(\psi_1(x^s(k))\right)^T\ \cdots\ \left(\psi_{n_s}(x^s(k))\right)^T\,\right]^T \tag{8.18}$$

$$C = m^{-1}\cdot\left[\,0_{1\times m(n_s-1)}\ \ 1_{1\times m}\,\right] \tag{8.19}$$

$$D = \left[\,\gamma I_m\ \ 0_{m\times m(n_s-1)}\,\right]^T \tag{8.20}$$

where

$$\Psi_j(x^s) = \begin{bmatrix} a(x^s_{1,j}) & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & a(x^s_{m,j})\end{bmatrix};\qquad \psi_j(x^s) = \begin{bmatrix}\theta^s_{1,j}(x^s_{1,j}-x_\infty)\cdot\beta(x^s_{1,j})\\ 0_{(m-2)\times 1}\\ \theta^s_{m,j}(x^s_{m,j}-x_\infty)\cdot\beta(x^s_{m,j})\end{bmatrix};$$

$$\Delta = \begin{bmatrix} -1 & 1 & 0 & \cdots & 0\\ 1 & -2 & 1 & \ddots & \vdots\\ 0 & \ddots & \ddots & \ddots & 0\\ \vdots & \ddots & 1 & -2 & 1\\ 0 & \cdots & 0 & 1 & -1\end{bmatrix};\qquad I_m\in\mathbb{R}^{m\times m};$$

$$\theta^s_{i,j} = (x^s_{i,j}/x_B)^a,\ s\in C_W;\qquad \theta^s_{i,j} = h_{air}(x^s_{i,j}),\ s\in C_A;\qquad i=1,2,\ldots,m,\ j=1,2,\ldots,n_s.$$

The above transformations lead to a state space expression for each subsystem. The various parts of the distributed predictive control in this chapter are based on this model. The following section first describes how to detect the temperature distribution of the steel plate in the water-cooled zone.
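To make the block structure of Eq. (8.17) concrete, the following sketch assembles f for one subsystem from the diagonal conduction blocks Ψ_jΔ and the sub-diagonal transport blocks γI_m. The function and its argument layout (an (m, n_s) array of precomputed a(x^s_{i,j}) values) are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def build_f(a_vals, gamma):
    """Assemble the transition matrix f of Eq. (8.17) for one subsystem.

    a_vals: (m, n_s) array of the coefficients a(x_{i,j}^s) evaluated at
    the current state (supplied directly here, a simplifying assumption).
    The block diagonal holds Psi_j * Delta plus (1-gamma)*I_m, and the
    sub-diagonal gamma*I_m models plate transport between length segments.
    """
    m, ns = a_vals.shape
    # Delta: second-difference matrix with the boundary rows of Eq. (8.17)
    Delta = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    Delta[0, 0] = Delta[-1, -1] = -1.0
    F = np.zeros((m * ns, m * ns))
    for j in range(ns):
        blk = slice(j * m, (j + 1) * m)
        F[blk, blk] = np.diag(a_vals[:, j]) @ Delta + (1 - gamma) * np.eye(m)
        if j > 0:  # transport from the upstream segment
            F[blk, slice((j - 1) * m, j * m)] = gamma * np.eye(m)
    return F
```

With all a(x^s_{i,j}) set to zero, f reduces to the pure upwind-transport matrix, a quick way to check the block placement.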

8.4.3 Extended Kalman Global Observer

Observers, often also called soft sensors [5], have become a common means of overcoming the lack of online sensors: they provide parameters and unmeasurable states that are not available from online or offline instruments (see [6]). In the process control community, the design of nonlinear observers is a very broad topic. One class is the "classical observer", which assumes that the process model and parameters are completely known. The most common of these are the extended Kalman filter [7] and the extended Luenberger observer [8], obtained from the original linear-system versions by linearizing a tangent model. Besides these "extended" solutions, other observers of this class can be grouped as "high-gain observers"; the usual criterion for whether a high-gain observer can be used is whether the nonlinear system can be transformed into an observable canonical form. The other class, complementary to the classical observer, is intended for systems with uncertain parameters or even uncertain model structure; the related observers are asymptotic, sliding-mode, and adaptive observers.

For the accelerated cooling process, the model (8.14) can be transformed into a standard observer canonical form, which would allow the design of a high-gain observer. A review of the literature, however, shows that high-gain observers are rarely used for systems with high-dimensional state spaces, because the nonlinearity and the order of the system increase the mathematical complexity of the algorithm. Since lumping the heat balance equations with the finite-volume method requires a relatively fine division of the cells (at least 70 cells here), the order of the system is inevitably high, which would make a high-gain observer very fragile. For this reason,


the well-known extended Kalman filter (EKF) is chosen to design the temperature observer for the accelerated cooling process of medium-thick plate, even though the convergence rate of the Kalman filter is more difficult to tune. The nonlinear model of the whole system can be derived straightforwardly from Eq. (8.14) in the following form:

$$\begin{cases} x(k+1) = F(x(k))x(k) + G(x(k))u(k) + Dx^0(k)\\[2pt] y(k) = Cx(k)\end{cases} \tag{8.21}$$

where x = [ (x^1)^T (x^2)^T ⋯ (x^N)^T ]^T; u = [ u^1 u^2 ⋯ u^N ]^T; x^0 is the average value of the open cooling temperature; and y is the output, namely the temperature of the upper surface of the steel plate detected by pyrometer PY04. The expressions F(x(k)), G(x(k)) and D can be written according to Eqs. (8.17), (8.18) and (8.20), and the coefficient matrix C is defined as

$$C = \left[\,0_{1\times(N-1)n_s m}\ \ 1\ \ 0_{1\times(n_s m-1)}\,\right] \tag{8.22}$$

System (8.21) is observable. This is clear from the physical principles: the temperature (state) of each cell is influenced by the cells above and below it and by the cell to its left, and the temperature detection point PY04 is at the rightmost side of the system, so the temperature of every other cell affects the measured value of PY04 after some time; the output of PY04 therefore contains information about all states. Referring to [9], the structure of the observer is as follows.

➀ Measurement update

$$\hat{x}(k+1) = \hat{x}(k+1|k) + K_{k+1}\left(y(k+1) - C\hat{x}(k+1|k)\right)$$
$$K_{k+1} = P_{k+1/k}C^T\left(CP_{k+1/k}C^T + R_{k+1}\right)^{-1}$$
$$P_{k+1} = (I - K_{k+1}C)P_{k+1/k} \tag{8.23}$$

➁ Time update

$$P_{k+1/k} = F_k P_k F_k^T + Q_k$$
$$\hat{x}(k+1|k) = F(\hat{x}(k))\hat{x}(k) + G(\hat{x}(k))u(k) + Dx^0(k)$$
$$F_k = \left.\frac{\partial\left(F(x(k))x(k)\right)}{\partial x(k)}\right|_{x(k)=\hat{x}(k)} + \left.\frac{\partial\left(G(x(k))\cdot u(k)\right)}{\partial x(k)}\right|_{x(k)=\hat{x}(k)} \tag{8.24}$$
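The observability claim can also be sanity-checked numerically on a small instance by forming the observability matrix of a pair (A, C). This is a generic check on hypothetical matrices, not the chapter's 70-plus-cell model, for which such a rank test would itself be numerically fragile.

```python
import numpy as np

def is_observable(A, C):
    """Rank test on the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n
```

For a chain of states where each cell feeds the next (as in the plate transport direction), measuring the last state makes the pair observable, while measuring only the first state does not, mirroring the physical argument above.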

When using an observer for a linear deterministic system, Q_k and R_k can be chosen arbitrarily, for example as 0_M (M = m × Σ_{s=1}^{N} n_s) and I_N. In linear stochastic systems, Q_k and R_k can be obtained, in the maximum-likelihood sense, as the covariance matrices of the system noise and the measurement noise, respectively. For nonlinear systems, although optimality is not proven, Q_k and R_k are usually still treated as covariance matrices. Since Eq. (8.21) is a deterministic system, Q_k = 0 is chosen. The observer estimates the full system state at each control cycle and sends the estimates to the controllers or predictors of all other subsystems.
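A compact sketch of one predict/update cycle of Eqs. (8.23)–(8.24) follows. The state map f and Jacobian jac_f are passed in as callables (an assumption; the chapter obtains F_k analytically from the lumped model (8.21)), and Q, R play the roles of Q_k and R_k discussed above.

```python
import numpy as np

def ekf_step(x_hat, P, u, y, f, jac_f, C, Q, R):
    """One EKF cycle: time update (8.24) followed by measurement update (8.23)."""
    # time update: propagate the estimate and its covariance
    Fk = jac_f(x_hat, u)
    x_pred = f(x_hat, u)
    P_pred = Fk @ P @ Fk.T + Q
    # measurement update: correct with the innovation y - C*x_pred
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new
```

On a linear system with an accurate measurement (small R), a single cycle already pulls the estimate onto the measured state, which is the behaviour the deterministic choice Q_k = 0 exploits.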

8.4.4 Local Predictive Controller

For the sth subsystem, if s ∈ C_W, a local MPC is required as a controller to optimize the steel plate temperature. In this section, the distributed MPC algorithm based on neighborhood optimization and successive linearization is described in detail. For the accelerated cooling process, the global performance index (8.7) can be decomposed into local performance indices for each subsystem, according to the method described in [10]:

$$J_s(k) = \sum_{i=1}^{P}\left\|r_s(k+i)-\hat{y}^s(k+i|k)\right\|^2_{Q_s}+\left\|u^s(k+i-1|k)\right\|^2_{R_s} \tag{8.25}$$

$$\min J(k) = \sum_{s=1}^{N}J_s(k) \tag{8.26}$$

Local control decisions are obtained by solving a local optimization problem with the minimization of J_s(k) as the objective. However, the solution obtained by optimizing the local performance index is not equal to the optimal solution of the global problem. To improve global performance, this chapter uses a neighborhood optimization objective as the performance index of each local controller. Define N_s^in and N_s^out as the input neighborhood and output neighborhood of the sth subsystem, respectively. The output neighborhood of subsystem s is the set of subsystems whose states are affected by the state of the sth subsystem, with s ∉ N_s^out; conversely, the input neighborhood of subsystem s is the set of subsystems whose states affect the state of the sth subsystem, with s ∉ N_s^in. Since the states of the output-neighborhood subsystems are influenced by the control decisions of subsystem s, and referring to [2, 11–13], the performance of each subsystem can be improved by using the following performance index in each predictive controller:

$$\min \bar{J}^s(k) = \sum_{j\in N_s^{out}\cup\{s\}}J_j(k) \tag{8.27}$$

It is worth noting that, for the sth subsystem, the new performance index includes not only the performance index of the current subsystem, but also the performance


index of its output-neighborhood subsystems; this is called "neighborhood optimization". Neighborhood optimization is a coordination strategy that can effectively improve system performance in distributed predictive control of large-scale systems.

The subsystem model (8.14) is nonlinear. In model predictive control, if the future state evolution is predicted with the model (8.14), the optimization is a nonlinear optimization problem. To avoid the computational burden of solving a nonlinear optimization problem, the prediction model is treated by successive linearization when solving each local MPC: at each control cycle, the system model is linearized near the current operating point. This allows linear local MPCs to be designed for the resulting linear time-varying (LTV) system. Although the use of time-varying models has only recently become standard practice in process control, its history dates back to the 1970s. A study of linear parameter-varying (LPV) MPC can be found in [14], where MPC methods for LTV models were successfully validated on Boeing aircraft; the literature [14, 15] contains the approaches closest to ours. In the ACC process, at moment k the model (8.14) can be approximated by the following linear model, namely

$$\begin{cases} x^s(i+1|k) = A_s(k)\cdot x^s(i|k) + B_s(k)\cdot u^s(i|k) + D\cdot x^{s-1}_{n_{s-1}}(i|k)\\[2pt] y^s(i|k) = C\cdot x^s(i|k)\end{cases}\qquad s = 1, 2, \ldots, N \tag{8.28}$$

where A_s(k) = f(x^s(k)) and B_s(k) = g(x^s(k)). Equation (8.15) is a static nonlinear equation and is held constant here; Eqs. (8.28) and (8.15) thus form a Hammerstein system. For such a system, only the linear part is generally used as the prediction model in MPC, while the static nonlinear part is processed after the optimal solution is obtained. In the accelerated cooling process, the input neighborhood of the sth subsystem is the (s−1)st subsystem and the output neighborhood is the (s+1)st subsystem. Assuming that the state x(k) is known at moment k, the local optimization problem of each subsystem at sampling moment k, under constraints on the manipulated variables, output variables, and manipulated variable increments, is

$$\begin{aligned}\min_{U^s(k)}\ \bar{J}^s(k) =\ & \sum_{j\in\{s,s+1\}}\sum_{i=1}^{P}\left\|r_j(k+i)-\hat{y}^j(k+i|k)\right\|^2_{Q_j}+\left\|u^j(k+i-1|k)\right\|^2_{R_j}\\ \text{s.t.}\quad & x^j(i+1|k) = A_j(k)\cdot x^j(i|k)+B_j(k)\cdot u^j(i|k)+D\cdot x^{j-1}_{n_{j-1}}(i|k),\quad j\in\{s,s+1\},\ i=1,\ldots,P\\ & u^s_{\min}\le u^s(k+i-1|k)\le u^s_{\max},\quad i=1,\ldots,P\\ & \Delta u^s_{\min}\le\Delta u^s(k+i-1|k)\le\Delta u^s_{\max},\quad i=1,\ldots,P\\ & y^j_{\min}\le y^j(k+i|k)\le y^j_{\max},\quad i=1,\ldots,P,\ j\in\{s,s+1\}\end{aligned} \tag{8.29}$$

where {u^s_min, u^s_max}, {Δu^s_min, Δu^s_max} and {y^j_min, y^j_max} (j ∈ {s, s+1}) are the lower and upper bounds of the manipulated variables, the manipulated variable increments, and the outputs, respectively, and


$$U_s = \left[\,u^s(k)\ \ u^s(k+1)\ \cdots\ u^s(k+M)\,\right]^T \tag{8.30}$$

$$X_{s,n_s}(k) = \left[\,x^s_{n_s}(k+1)\ \ x^s_{n_s}(k+2)\ \cdots\ x^s_{n_s}(k+P)\,\right]^T \tag{8.31}$$

If the sequences X_{s−1,n_{s−1}}(k) and U_{s+1} are known for the sth subsystem, the optimization problem (8.29) can be transformed into a quadratic programming (QP) problem. Solving this QP at moment k yields the optimal control sequence U_s^*(k) of the sth subsystem in the current state; the first control action of U_s^*(k) is then nonlinearly transformed through Eq. (8.15) to obtain the optimal setting of the cooling water flow rate. It is worth noting that Eq. (8.28) is linearized near the operating point, which is not normally an equilibrium point. When measuring the online computational effort of the method presented in this chapter, in addition to the time required to solve problem (8.29), one must also account for the time required to compute the coefficients of the linear model (8.28) and to transform problem (8.29) into a QP. Compared with MPC using the nonlinear model directly, the proposed method significantly reduces the computational complexity, and the successive linearization is more accurate than a single fixed linear model.
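The condensation step described above, stacking the LTV model (8.28) over the horizon so that (8.29) becomes a QP in the input sequence, can be sketched as follows. For brevity the box constraints are dropped, which reduces the QP to a least-squares problem; the names Phi, Theta, and the move penalty Rw are illustrative, not the book's notation.

```python
import numpy as np

def unconstrained_mpc(A, B, C, x0, r, P_hor, Rw=1e-4):
    """Condense x(i+1) = A x(i) + B u(i), y = C x into Y = Phi*x0 + Theta*U
    and minimize ||r - Y||^2 + Rw*||U||^2 over the stacked input sequence U.
    With the box constraints of (8.29) added, this becomes the QP the text
    describes (solvable with any QP solver)."""
    n, m_in = B.shape
    p = C.shape[0]
    # free response: y(i) = C A^i x0, i = 1..P
    Phi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, P_hor + 1)])
    # forced response: block (i, j) is C A^(i-j) B for j <= i
    Theta = np.zeros((p * P_hor, m_in * P_hor))
    for i in range(1, P_hor + 1):
        for j in range(1, i + 1):
            Theta[(i - 1) * p:i * p, (j - 1) * m_in:j * m_in] = \
                C @ np.linalg.matrix_power(A, i - j) @ B
    H = Theta.T @ Theta + Rw * np.eye(m_in * P_hor)
    g = Theta.T @ (r - Phi @ x0)
    return np.linalg.solve(H, g)  # stacked optimal input sequence
```

For a memoryless scalar example (A = 0, B = C = 1) the predicted output simply equals the previous input, so the optimal stacked sequence tracks the reference almost exactly.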

8.4.5 Local State Predictor

For the sth subsystem, if its cooling is by radiative heat transfer, i.e. s ∈ C_A, then instead of a local MPC, a predictor is applied to predict the future state sequence X_s(k):

$$X_s(k) = \left[\,x^s(k+1)\ \ x^s(k+2)\ \cdots\ x^s(k+P)\,\right]^T \tag{8.32}$$

The prediction model is given by Eqs. (8.14) and (8.15). Note that in Eqs. (8.14) and (8.15) the coefficient g(x^s(k)) and the input term u^s take different forms in the air-cooled and water-cooled subsystems. For the first subsystem, the open cooling temperature is used as a measurable disturbance and its future sequence can be estimated from the measured values of pyrometer PY01. After obtaining the sequence of future states, the predictor sends the estimate of X_s(k) to its output neighbors, enabling them to solve for their optimal control laws.


8.4.6 Local Controller Iterative Solution Algorithm

If the optimized sequences of the output neighbors of the sth subsystem and the future state sequences of its input neighbors are known, then, according to neighborhood optimization, the optimal solution of the current subsystem can be obtained by solving problem (8.29), i.e.

$$U^*_{s,M}(k) = \arg\left\{\text{Problem (8.29)}\,\middle|\,U^*_{j,M}(k)\ (j\in N_s^{out}),\ X^*_{h,P}(k)\ (h\in N_s^{in})\right\},\quad s = 1, \ldots, N \tag{8.33}$$

It follows that the optimal solution of the current subsystem depends on the future input sequences of its output neighbors and the future state sequences of its input neighbors. However, the local optimal solutions of the current subsystem's neighbors are unknown at moment k, so each subsystem must first predict the future states and input sequences of its neighboring subsystems, which necessarily introduces some bias. To obtain a more accurate solution of the optimization problem (8.29), the following iterative algorithm is developed for each control cycle. The iterative DMPC solution algorithm based on neighborhood optimization is as follows.

Step 1 Initialization and information exchange: at sampling moment k, the optimal-goal recalculation part resets the reference goal of each subsystem. The sth subsystem gets the observed state x̂^s(k) from the network, initializes the local optimization control sequence U_s^(l)(k) = Û_s(k) (s = 1, 2, …, N) with the iteration counter l = 0, and sends this sequence through the network to its output-neighborhood subsystems. It then computes the state estimates x̂^(l)_{s,n_s}(i|k) (i = 1, 2, …, P; s = 1, 2, …, N) using Eq. (8.28) and sends them over the network to its output-neighborhood subsystems.

Step 2 Subsystem optimization: each subsystem s with s ∈ C_W solves its local optimization problem (8.29) in parallel to obtain the optimal control law, i.e.

$$U^{(l+1)}_s(k) = \arg\left\{\text{Problem (8.29)}\,\middle|\,U^{(l)}_j(k)\ (j\in N_s^{out}),\ X^{(l)}_h(k)\ (h\in N_s^{in})\right\},\quad s\in C_W$$

Define the optimal solution of a subsystem s ∈ C_A as

$$U^{(l+1)}_s(k) = \left[\,1\ 1\ \cdots\ 1\,\right]^T,\quad s\in C_A$$

Each subsystem then computes its future state sequence estimate using Eq. (8.28).

Step 3 Convergence check: each subsystem checks whether its iteration termination condition is satisfied, i.e. whether, for the given error accuracy ε_s ∈ R (s = 1, …, N),

$$\left\|U^{(l+1)}_s(k) - U^{(l)}_s(k)\right\| \le \varepsilon_s,\quad s = 1, \ldots, N$$

If all termination conditions are satisfied at iteration l*, set the local optimization control sequence to U_s^(l*)(k) and go to Step 4; otherwise, let l = l + 1, each subsystem sends its new input sequence U_s^(l)(k) to its input neighbors and x̂^(l)_{s,n_s}(i|k) to its output neighbors, and go to Step 2.

Step 4 Assign and apply: compute the instantaneous control law

$$u^*_s(k) = \left[\,1\ 0\ \cdots\ 0\,\right]U^*_s(k),\quad s = 1, \ldots, N$$

and apply it to each subsystem.

Step 5 Reassign and initialize: set the initial value of the local optimization control decision for the next sampling moment to Û_s(k+1) = U_s^*(k) (s = 1, …, N). Recede the horizon, i.e. k + 1 → k, return to Step 1, and repeat the above steps.

The distributed control approach in this section transforms a large-scale nonlinear online optimization problem into a distributed computation over several small subsystems, greatly reducing the computational complexity. In addition, the control performance of the system is improved by exchanging information among neighboring subsystems. To verify the effectiveness of the proposed control strategy, the control method is validated on the experimental setup in the next section.
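The Jacobi-style structure of Steps 1–3 can be sketched generically: every subsystem re-solves its local problem against the neighbors' previous iterates until each input sequence changes by less than ε_s. The solve_local callback is a stand-in for solving problem (8.29); convergence of such iterations is assumed here, not proven.

```python
import numpy as np

def iterative_dmpc(solve_local, N, u0, eps=1e-6, max_iter=50):
    """Iterate local solutions until all subsystems' decisions settle.

    solve_local(s, U) returns the new decision of subsystem s given the
    full stack U of last iterates -- a stand-in for the network
    information exchange of Steps 1-3 above.
    """
    U = [np.asarray(u, dtype=float) for u in u0]
    for _ in range(max_iter):
        U_new = [solve_local(s, U) for s in range(N)]  # parallel (Jacobi) update
        if all(np.linalg.norm(U_new[s] - U[s]) <= eps for s in range(N)):
            return U_new
        U = U_new
    return U
```

As a toy usage example, two coupled subsystems whose best responses are u_s = 1 + 0.5·u_{s+1} converge to the fixed point u = 2, illustrating how the bias of the initial neighbor predictions is iterated away.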

8.5 Simulation Platform Algorithm Validation

(1) Solution time of distributed model predictive control

Empirically, when the number of iterations is l ≥ 3, the performance of the method proposed in this chapter is close to that of the centralized form. The solution times of the centralized method and of the proposed method, on a computer with a main frequency of 1.8 GHz and 512 MB of memory, are shown in Table 8.2. The maximum solution time is only 0.1194 s when l = 3, which satisfies the requirements of online solving.

(2) Advantages of the DMPC method with optimization objective recalculation

To further illustrate the advantages of the proposed distributed MPC method with optimization target recalculation, the MPC method with velocity as input, the DMPC method without optimization target recalculation, and the DMPC method with optimization target recalculation are each used to control the accelerated cooling process. For simplicity, a cooling curve with a


Table 8.2 Computing time for DMPC and centralized MPC

Item                                            Minimum time (s)   Maximum time (s)   Average time (s)
Construct state space model for subsystem       0.0008             0.0012             0.0009
DMPC (iterations: l = 1)                        0.0153             0.0484             0.0216
DMPC (iterations: l = 2)                        0.0268             0.0690             0.0452
DMPC (iterations: l = 3)                        0.0497             0.1194             0.0780
DMPC (iterations: l = 5)                        0.0895             0.3665             0.1205
Construct state space model for global system   0.0626             0.1871             0.0890
Centralized MPC                                 0.6535             1.8915             0.9831

single cooling rate is still used as the control target. Three pieces of X70 pipeline steel with a thickness of 19.28 mm, a length of 25 m, and a width of 5 m are used as an example. The entire system is divided with a grid 3 mm thick and 0.8 m long, with a target cooling rate of 17 °C/s and a target final cooling temperature of 560 °C.

From Fig. 8.10 it can be seen that, although the average cooling rate and the final cooling temperature can be guaranteed by the velocity control method, the cooling rate of each plate point is not constant over time. When the cooling water flow rate is instead used as the control variable, the DMPC method without optimization target recalculation (Fig. 8.11) guarantees the accuracy of the final cooling temperature and cooling rate of the plate points, but the temperature differences between plate points are eliminated mainly by the first few nozzles, which makes the temperature-drop process inconsistent across plate points and in turn degrades the final product quality. With the DMPC method with optimization target recalculation proposed in this chapter (see Fig. 8.12), the cooling "time–temperature" curve of each plate point essentially coincides with the reference cooling curve.

(3) Experimental results

The performance of the proposed method is further illustrated in the following simulation experiments, again using X70 pipeline steel as an example. Figure 8.13 shows the setting curves of each local controller obtained from the optimization objective recalculation part. Figures 8.14 and 8.15 show the performance of the closed-loop system and the corresponding manipulated variables, respectively. As can be seen in Fig. 8.14, each subsystem tracks its reference trajectory well and achieves the final cooling temperature with high accuracy.
The control accuracy and control flexibility of the plate accelerated cooling process are thereby improved. The proposed method solves the problem of open cooling temperature variation by resetting the optimization target. Going further, the micro-particle structure of each plate point can be used as the final control target, and at each

Fig. 8.10 Cooling curves and reference cooling curves using speed control method

Fig. 8.11 Cooling curves and reference cooling curves obtained using the DMPC method without optimization target recalculation

control cycle, each plate point calculates its future process cooling "time–temperature" profile based on the target crystal structure and current state. The "position–temperature" curve is then calculated from these time–temperature curves. By controlling the crystalline micro-particle structure of the plate, the plate properties can be controlled better, since the microstructure directly determines the physical and mechanical properties of the plate. These process objectives are more stringent and precise than simple cooling-curve control. In terms of control method and strategy, the above approach is applicable both to cooling-curve control and to plate micro-particle control, with only the control target settings changed, so the method has great potential value.

Fig. 8.12 Cooling curves and reference cooling curves obtained using the DMPC method with optimized target recalculation

Fig. 8.13 The reference curves of local subsystems

8.6 Summary of This Chapter

In this chapter, the flow rate of each cooling water nozzle is used as the manipulated variable in order to control the cooling curve more accurately and flexibly. For such a large-scale system with interrelated parts, a distributed MPC method based on set-value recalculation and successive linearization is designed. This method divides the system into several subsystems, each controlled by a local MPC controller, and the optimization target of each local MPC is recalculated at each control cycle based on the open cooling temperature and the current cooling state of the steel plate. In this chapter, the finite-volume method is first used to derive the state space expression of the system equations; then an

Fig. 8.14 Closed-loop performance of the accelerated cooling process using the distributed MPC control method proposed in this chapter

extended Kalman filter is designed to observe the system state; at each control cycle, the local MPC linearizes the prediction model at the current operating point so that the computational speed meets the requirements of online computation; and the problem of differing cooling curves at the plate points is solved by optimization target recalculation, so that the cooling "time–temperature" curve of each plate point is closer to the required cooling curve. This method improves the flexibility of cooling-curve control in the accelerated cooling process. In addition, if the microstructure of the steel plate is used as the final control target, the control method of this chapter can be applied without change by recalculating the process cooling curve at each control cycle, obtaining the desired plate microstructure and thus providing a control algorithm for producing higher-quality plate or plate with more specific requirements. The method therefore has great application potential.

Fig. 8.15 Cooling water flow rate of each nozzle using the distributed MPC control method proposed in this chapter

References

1. Wang X, Wang Z, Chai T (2000) Development and current situation of controlled cooling technology after plate rolling. Steel Rolling 17(03):44–47 (in Chinese)
2. Zheng Y, Li S, Wang X (2012) Optimization target resetting distributed model predictive control for accelerated cooling process. In: The 10th world congress on intelligent control and automation (WCICA). IEEE, Beijing
3. Hawbolt EB, Chau B, Brimacombe JK (1983) Kinetics of austenite-pearlite transformation in eutectoid carbon steel. Metall Mater Trans A 14(9):1803–1815
4. Pham TT, Hawbolt EB, Brimacombe JK (1995) Predicting the onset of transformation under noncontinuous cooling conditions: part I theory. Metall Mater Trans A 26(26):1987–1992
5. Sotomayor OAZ, Song WP, Garcia C (2002) Software sensor for on-line estimation of the microbial activity in activated sludge systems. ISA Trans 41(2):127–143
6. Astorga CM et al (2002) Nonlinear continuous-discrete observers: application to emulsion polymerization reactors. Control Eng Pract 10(1):3–13
7. Dochain D (2003) State and parameter estimation in chemical and biochemical processes: a tutorial. J Process Control 13(8):801–818
8. Quinteromarmol E, Luyben WL, Georgakis C (1991) Application of an extended Luenberger observer to the control of multicomponent batch distillation. Ind Eng Chem Res 30(8):1870–1880
9. Boutayeb M, Rafaralahy H, Darouach M (1997) Convergence analysis of the extended Kalman filter used as an observer for nonlinear deterministic discrete-time systems. IEEE Trans Autom Control 42(4):581–586
10. Katebi MR, Johnson MA (1997) Predictive control design for large-scale systems. Automatica 33(3):421–425
11. Zheng Y, Li S (2012) Stabilized neighborhood optimization based distributed model predictive control for distributed system. In: Control conference (CCC), 2012 31st Chinese. IEEE, Hefei
12. Zheng Y, Li S, Li N (2011) Distributed model predictive control over network information exchange for large-scale systems. Control Eng Pract 19(7):757–769
13. Zheng Y, Li S, Wang X (2009) Distributed model predictive control for plant-wide hot-rolled strip laminar cooling process. J Process Control 19(9):1427–1437
14. Keviczky T, Balas GJ (2005) Flight test of a receding horizon controller for autonomous UAV guidance. In: Proceedings of the American control conference
15. Falcone P et al (2007) Predictive active steering control for autonomous vehicle systems. IEEE Trans Control Syst Technol 15(3):566–580
16. Latzel S (2001) Advanced automation concept of runout table strip cooling for hot strip and plate mills. IEEE Trans Ind Appl 37(4):1088–1097
17. Pehlke R, Jeyarajan A, Wada H (1982) Summary of thermal properties for casting alloys and mold materials. NASA STI/Recon Tech Rep N 83:36293

Index

A Accelerated cooling process, 212 ARM9 Embedded Module, 41 Asymptotic stability, 53, 74, 78

B Bernoulli distribution, 16, 32 Board point, 244 Buffer, 53

C Communication constraints, 93 Communication sequences, 94 Communication topology, 193 Cone complement linearization, 87 Consistency constraints, 144, 185 Continuous cooling, 239 Convex polyhedral set, 15, 31, 95 Cooling curves, 237, 244 Cooling rate, 247 Cooling water nozzles, 240 Coupling, 160 input coupling, 160 state coupling, 160

D Decoder, 70 Distributed control, 2 Distributed Predictive Control (DMPC), 3 constrained coordinated distributed predictive control for stability preservation, 176 constrained distributed predictive control for stability preservation, 136

constraint coordination distributed predictive control, 180 coordinated distributed predictive control, 6, 160 distributed predictive control based on goal recalculation, 257 distributed predictive control of global performance index, 5 distributed predictive control of local performance index, 6 highly flexible distributed model predictive control, 220 nash optimal based distributed predictive control, 122 non-iterative coordinated distributed predictive control, 160 non-iterative distributed predictive control, 159 role-optimized distributed predictive control, 8, 201 Distributed systems, 136, 176, 220 Domain of attraction, 144, 178 Downstream systems, 137, 176, 220 Dual solution level system, 112

E
Encoder, 70
Estimation error, 20, 37, 103
  expectation of the estimation error, 40
  squared Euclidean norm of the estimation error, 103
  Euclidean norm of the expected estimation error, 37

© Chemical Industry Press 2023 S. Li et al., Intelligent Optimal Control for Distributed Industrial Systems, Advanced and Intelligent Manufacturing in China, https://doi.org/10.1007/978-981-99-0268-2


  expectation of the squared Euclidean norm of the estimation error, 20
Euclidean norm, 18, 37
Extended Kalman observer, 253
Extraction rate, 134

F
Feasibility analysis, 154, 188
Feasible solutions, 154, 188
Feedback channel, 15
Final cooling temperature, 240
Finishing mill, 241

H
Hammerstein model, 212, 255
Heavy oil fractionator, 134

I
ILOG CPLEX, 189
Inverted pendulum system, 65–67

K
Kalman filtering, 13, 14

L
Laminar flow cooling equipment, 237, 240
Linear matrix inequalities, 24, 60
Load frequency control, 188
Local communication failures, 128
  column failures, 130
  line failures, 130
  mixed failures, 130
Local controllers, 122, 215
Local optimization problems, 177, 197, 201
Local predictive control, 159, 161, 177, 195

M
Matrix of penalty weights, 20
Medium-thick plate, 211
Movement control
  dynamic scheduling, 93
  moving horizon scheduling, 111
  static scheduling, 93
Multifunctional process control experiment platform, 112

N
Nash optimization, 5, 122
Nash equilibrium, 123
Nash optimum, 122
Nash optimal solution, 123
Neighborhood optimization, 198
Neighborhood subsystems
  entering the neighborhood subsystem, 200
  exporting the neighborhood subsystem, 200
Neighborhood systems, 7, 200
Network connectivity, 233
Networked control systems, 13, 14, 31, 51, 69
Nominal system, 126
Normal distribution, 109

O
Optimization problems, 17, 33, 58, 59, 74, 98
  local performance indices, 121, 197, 210, 211, 216–218
Optimization performance, 169, 210

P
Packet loss, 15, 30, 53
  bounded packet loss, 53
  number of consecutive packet losses, 53
  random packet loss, 13
Performance analysis, 20, 36, 103
Point measuring instruments, 241
Predicted values of interactions, 201
Predictive control (model predictive control), 3, 6, 54, 73
  constrained predictive control, 136
  dual-mode predictive control, 146, 189
  networked predictive control algorithms, 61
  robust predictive control, 73
Predictive models, 124, 163, 178, 199, 219
Probability of arrival, 13, 27

Q
Quantization, 52, 71
  dynamic quantizers, 52
  quantizer, 70
  quantization density, 71, 90
  quantizers, 52
  static quantizers, 52



R
Real-time simulation experiment platform, 41
Recursive feasibility, 143, 181, 222
Return to the red zone, 246
Rolling time domain (moving horizon) state estimation, 18, 111
Root mean square error, 26, 111
  asymptotic root mean square error, 42
Rotary encoders, 241

S
Scanning measuring instruments, 241
Schur complement lemma, 77
Secondary planning (quadratic programming)
  mixed integer quadratic programming, 94
Sector bound approach, 72
Set of terminal-state constraints, 61
Specific heat capacity, 214
Spectral radius, 126
Stability analysis, 62, 73, 152
Stability conditions, 167, 206
Stability constraints, 152
State predictor, 256
Step response matrix, 124
Straightening machines, 238
Structural disturbances, 128
Substitution function, 35

T
Terminal constraint set, 136, 180, 221
Thickness gauges, 241

U
Uniform distribution, 27
Unreliable shared networks, 15, 31, 48
Upstream systems, 137, 176, 220

W
Weight matrix, 60
  terminal weighting matrix, 60

Z
Zero-order holder, 16, 31