Communications and Control Engineering

Wook Hyun Kwon · PooGyeon Park

Stabilizing and Optimizing Control for Time-Delay Systems Including Model Predictive Controls

Communications and Control Engineering

Series editors:
Alberto Isidori, Roma, Italy
Jan H. van Schuppen, Amsterdam, The Netherlands
Eduardo D. Sontag, Boston, USA
Miroslav Krstic, La Jolla, USA

Communications and Control Engineering is a high-level academic monograph series publishing research in control and systems theory, control engineering and communications. It has worldwide distribution to engineers, researchers, educators (several of the titles in this series find use as advanced textbooks although that is not their primary purpose), and libraries. The series reflects the major technological and mathematical advances that have a great impact in the fields of communication and control. The range of areas to which control and systems theory is applied is broadening rapidly with particular growth being noticeable in the fields of finance and biologically-inspired control. Books in this series generally pull together many related research threads in more mature areas of the subject than the highly-specialised volumes of Lecture Notes in Control and Information Sciences. This series’s mathematical and control-theoretic emphasis is complemented by Advances in Industrial Control which provides a much more applied, engineering-oriented outlook. Publishing Ethics: Researchers should conduct their research from research proposal to publication in line with best practices and codes of conduct of relevant professional bodies and/or national and international regulatory bodies. For more details on individual ethics matters please see: https://www.springer.com/gp/authors-editors/journal-author/journalauthorhelpdesk/publishing-ethics/14214

More information about this series at http://www.springer.com/series/61

Wook Hyun Kwon · PooGyeon Park

Stabilizing and Optimizing Control for Time-Delay Systems Including Model Predictive Controls

Springer

Wook Hyun Kwon Department of Electrical and Computer Engineering Seoul National University Seoul Korea (Republic of)

PooGyeon Park Department of Electrical Engineering POSTECH Pohang Korea (Republic of)

and Department of Information and Communication Engineering DGIST Daegu Korea (Republic of)

Additional material to this book can be downloaded from http://extras.springer.com.

ISSN 0178-5354     ISSN 2197-7119 (electronic)
Communications and Control Engineering
ISBN 978-3-319-92703-9     ISBN 978-3-319-92704-6 (eBook)
https://doi.org/10.1007/978-3-319-92704-6
Library of Congress Control Number: 2018943378

MATLAB® is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA 01760-2098, USA, http://www.mathworks.com.

Mathematics Subject Classification (2010): 49-XX, 34-XX, 93-XX, 90-XX, 65-XX, 91-XX, 60-XX, 68X

© Springer International Publishing AG, part of Springer Nature 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by the registered company Springer International Publishing AG, part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Dedicated to Son Cha Yi and Jung Suh Yun, Authors’ wives

Preface

Time delays appear in many real processes such as chemical processes, biological systems, transportation systems, digital communication networks, and some mechanical systems. Time-delay systems containing control inputs are often called time-delay control systems. Controls for time-delay control systems are more complicated than those for ordinary delay-free control systems. A stabilizing control has been a major subject in control design. This book deals with stabilizing controls for time-delay control systems. There are three classes of stabilizing controls in the time domain: non-optimal stabilizing controls, optimal stabilizing controls, and sub-optimal stabilizing controls such as guaranteed cost controls. This book covers all three classes of stabilizing controls in unified frameworks and notations. Thus, this book may be very useful for readers who want to grasp all three classes of stabilizing controls. This book also discusses the differences among these stabilizing controls in terms of control structures and performance criteria. Optimizing controls in this book mean optimal stabilizing controls and sub-optimal stabilizing controls, which are called optimal controls and sub-optimal controls, respectively. The controls should be of feedback form, either state feedback or output feedback. Time-delay control systems combined with a certain feedback control become homogeneous systems, which do not include external variables. Therefore, the stability conditions of these homogeneous systems are essential for obtaining stabilizing controls for time-delay control systems. Fortunately, there are several stability test methods for these homogeneous time-delay systems, which are given in Chap. 2. A feedback control can be easily designed if its structure can be assumed a priori with unknown parameters that are then sought to meet the stability or robust stability of the closed-loop system without considering a performance criterion. These non-optimal stabilizing controls are given in Chaps. 3 and 4. Such stabilizing controls are particularly useful for time-delay systems since feedback controls meeting both stability and performance requirements are much harder to obtain for time-delay systems.


Optimal controls can be designed to meet LQ and H∞ performance criteria. Optimal controls assume no prior control structures, unlike non-optimal stabilizing controls. Optimal controls are well defined on finite horizons. They are usually obtained as open-loop controls, which are certain time functions. Fortunately, optimal controls for quadratic performance criteria for linear time-delay systems are of distributed-delay state feedback form. Finite horizon optimal controls must be extended to infinite horizon controls in order to be used as stabilizing feedback controls, which must be defined over infinite horizons. But optimal solutions on infinite horizons and their stability properties are not well known for time-delay systems. In addition, the computation of optimal solutions on infinite horizons is also difficult. An alternative is an optimal control on a receding horizon. At the current time, optimal controls, either open-loop or closed-loop, are obtained on a finite horizon from the current time. Among the optimal controls on this finite horizon, only the first one is implemented as the current control law. The procedure is then repeated at the next time with the finite horizon starting from the next time. This control is called a receding horizon control (RHC). It is also called a model predictive control with the receding horizon principle, or simply model predictive control (MPC). An RHC or MPC is defined at any time including infinity. It is a built-in feedback control and possesses stability properties under certain conditions. Thus, RHCs or MPCs could be very useful feedback controls even for time-delay systems, while they have become one of the most successful controls for ordinary systems. This book extensively investigates receding horizon controls as optimal stabilizing controls. These finite, infinite, and receding horizon optimal controls are dealt with for linear time-delay systems in Chaps. 6–8. They can be extended to constrained and nonlinear systems, which are beyond the scope of this book. It is noted that, to distinguish the "finite horizon" and the "infinite horizon" from the "receding horizon," we use the term "fixed horizon," which includes both the finite and the infinite horizons, in the sense that these horizons are fixed and do not recede. Since optimal controls are rather complicated in form, we may introduce another stabilizing control. As in non-optimal stabilizing controls, a feedback control structure is assumed a priori with unknown parameters, and then these parameters are sought to meet not only the stability of the closed-loop system but also a certain performance bound. These controls are guaranteed cost controls, which are sub-optimal controls in the sense that they are in between non-optimal stabilizing controls and optimal controls. We deal with these guaranteed LQ and H∞ controls in Chap. 5, before the optimal controls in Chaps. 6–8, since they are technically close to the non-optimal stabilizing controls in Chaps. 3 and 4.


This book introduces several recent important inequalities, some of which are utilized for obtaining robust stabilizing controls with less conservatism. This book deals with more output feedback stabilizing controls than other books and introduces cascaded-delay systems to solve output feedback control problems easily for non-optimal and sub-optimal stabilizing controls. This book introduces a general cost functional in double integral form for optimal stabilizing controls and also suggests some simpler optimal stabilizing controls for state-delayed systems, compared to standard optimal stabilizing controls, which require complicated computation. The computation of stabilizing controls for time-delay systems looks very difficult since the control solutions take very complicated forms. In this book, almost all gain matrices of non-optimal stabilizing controls and guaranteed cost stabilizing controls are given in linear matrix inequality (LMI) form so that they can be easily computed by using powerful LMI convex programming methods. Likewise, all the initial conditions of the coupled partial differential Riccati equations necessary for optimal stabilizing controls are given in LMI form. This book starts with simple subjects and then gradually moves to more complicated ones for better understanding. It deals first with systems with input delays and then with systems with state delays. We do not deal with systems with both input and state delays together due to the space limit; such systems can be easily understood once systems with input delays only and those with state delays only are understood separately. We deal with a single delay only and do not deal with multiple delays or distributed delays, for the same reason. Thus, in this book, input-delayed systems and state-delayed systems often mean systems with a single input delay and systems with a single state delay, respectively. First, the stability of homogeneous systems is dealt with; then state feedback stabilizing controls are introduced based on the stability of homogeneous systems, followed by output feedback stabilizing controls. Optimal controls for input-delayed systems are dealt with before optimal controls for state-delayed systems are introduced. LQ/LQG optimal controls are dealt with before H∞ optimal controls. This book is organized as follows. In Chap. 1, basic concepts for control design such as control structures and control objectives are introduced. Then, simple and non-optimal stabilizing controls, optimal controls on finite, infinite, and receding horizons, and sub-optimal guaranteed cost controls are introduced briefly. Several advantages of receding horizon controls are explained. For better understanding of time-delay systems, several real processes and their mathematical models are given. All notations used in this book are listed in this chapter. In Chap. 2, stability test methods for general homogeneous time-delay systems are introduced, followed by various stability and robust stability test methods for certain and uncertain linear time-delay systems, respectively. Systems with time-varying and distributed delays in states are also briefly introduced. Some useful inequalities used for stabilizing controls are introduced.


In Chap. 3, stabilizing state feedback controls for input-delayed systems are given by using the state predictor method and the reduction transformation method. Then, stabilizing and robust stabilizing state feedback controls for state-delayed systems are introduced, followed by stability investigations based on the results in Chap. 2. In Chap. 4, stabilizing output feedback controls for input-delayed systems are introduced by employing Luenberger observers and reduction transformations. Then, stabilizing and robust stabilizing output feedback controls for state-delayed systems are given by employing Luenberger observers or additional dynamic systems, followed by stability investigations based on the results in Chaps. 2 and 3. In Chap. 5, guaranteed LQ controls are introduced first for input-delayed systems and then for state-delayed systems. Then, guaranteed H∞ controls are introduced for input-delayed systems and for state-delayed systems. This chapter deals with only some basic problems, compared with Chaps. 2 and 3, due to the space limit. In Chap. 6, fixed horizon LQ controls are introduced first for input-delayed systems, followed by receding horizon LQ controls with an investigation of stability conditions. Next, fixed horizon LQ controls are introduced for state-delayed systems, followed by receding horizon LQ controls with an investigation of stability conditions. All of them are state feedback controls. In Chap. 7, fixed horizon LQG controls combined with state observers are introduced first for stochastic input-delayed systems, followed by receding horizon LQG controls with an investigation of stability conditions. Next, fixed horizon LQG controls combined with filtered estimates are introduced for stochastic state-delayed systems, followed by receding horizon LQG controls with an investigation of stability conditions. All of them are output feedback controls. In Chap. 8, fixed horizon H∞ controls are introduced first for input-delayed systems with disturbances, followed by receding horizon H∞ controls with an investigation of stability conditions. Fixed horizon H∞ controls are then introduced for state-delayed systems with disturbances, followed by receding horizon H∞ controls with an investigation of stability conditions. Output feedback H∞ stabilizing controls are not dealt with due to the space limit. In each chapter, we introduce references that help interested readers to find the original sources of the important results in the book and further related literature. There are a substantial number of newly obtained results in this book. Several numerical examples covering important optimal and sub-optimal stabilizing controls in this book are provided with MATLAB codes, which can be downloaded from http://extras.springer.com. Subsections denoted by an asterisk can be skipped at first reading for easier understanding of the subjects. We tried to include the existing important literature, but the coverage may not be complete; we sincerely apologize for any omissions. This book deals basically with linear time-delay systems with some limited uncertainties. It may serve as a guide to obtaining related results for constrained and nonlinear time-delay systems.


We would like to express our appreciation to Prof. Young Sam Lee at Inha University, Prof. Soohee Han at POSTECH, Dr. Han Woong Yoo at Delft University of Technology, and Dr. Jung Hun Park at POSCO, who were all graduate students at Seoul National University and provided some contributions to this book. Also, Dr. Jeong Wan Ko at the Defense Agency for Technology and Quality, Dr. Won Il Lee at Samsung Electronics, and Mr. Seok Young Lee at POSTECH, who are all former or current graduate students at POSTECH, were of great help to us in developing this book. We appreciate the publishing team of Springer for their support.

Seoul, Korea
March 2018

Wook Hyun Kwon PooGyeon Park

Contents

1 Introduction  1
  1.1 Control Systems  1
    1.1.1 Models  1
    1.1.2 Control Objectives  2
  1.2 Stabilizing and Optimizing Controls  4
    1.2.1 Stabilizing Controls  4
    1.2.2 Optimal Controls over Finite, Infinite, and Receding Horizons  5
    1.2.3 Guaranteed Cost Controls  8
  1.3 Models for Time-Delay Systems  8
    1.3.1 Input Delayed Systems  9
    1.3.2 State Delayed Systems  12
  1.4 Computation  19
  1.5 About the Book and Notations  20
  References  25

2 Stability of Time-Delay Systems  27
  2.1 Introduction  27
  2.2 Fundamentals of General Time-Delay Systems  28
    2.2.1 General Time-Delay Systems  28
    2.2.2 Stability of General Time-Delay Systems  29
  2.3 Inequalities for Stability  31
    2.3.1 Matrix Inequalities  32
    2.3.2 Integral Inequalities for Quadratic Functions  33
  2.4 Stability of State Delayed Systems  36
    2.4.1 Lyapunov–Razumikhin Approach  37
    2.4.2 Lyapunov–Krasovskii Approach  40
    2.4.3 Discretized State Approach  43
    2.4.4 *Extension to Systems with Time-Varying Delays  45
  2.5 Robust Stability of State Delayed Systems  51
    2.5.1 Lyapunov–Krasovskii Approach  52
    2.5.2 Discretized State Approach  53
    2.5.3 *Extension to Systems with Time-Varying Delays  55
  2.6 Stability and Robust Stability of Distributed State Delayed Systems  58
    2.6.1 Lyapunov–Razumikhin Approach  59
    2.6.2 Lyapunov–Krasovskii Approach  60
    2.6.3 Lyapunov–Krasovskii Approach for Robust Stability  61
  References  62

3 State Feedback Stabilizing Controls  65
  3.1 Introduction  65
  3.2 Stabilizing Controls for Input Delayed Systems  66
    3.2.1 State Predictor Approach  66
    3.2.2 Reduction Transformation Approach  69
  3.3 Stabilizing Controls for State Delayed Systems  70
    3.3.1 Lyapunov–Razumikhin Approach  70
    3.3.2 Lyapunov–Krasovskii Approach  72
    3.3.3 Discretized State Approach  76
    3.3.4 *Extension to Systems with Time-Varying Delays  79
  3.4 Robust Stabilizing Controls for State Delayed Systems  82
    3.4.1 Lyapunov–Krasovskii Approach  83
    3.4.2 Discretized State Approach  85
    3.4.3 *Extension to Systems with Time-Varying Delays  88
  References  92

4 Output Feedback Stabilizing Controls  95
  4.1 Introduction  95
  4.2 Stabilizing Controls for Input Delayed Systems  96
    4.2.1 Smith Predictor Approach  96
    4.2.2 Luenberger Observer Approach with Reduction Transformation  98
    4.2.3 Dynamic Feedback Control Approach with Reduction Transformation  100
  4.3 Stabilizing Controls for State Delayed Systems  102
    4.3.1 Luenberger Observer Approach  102
    4.3.2 Lyapunov–Razumikhin Approach  104
    4.3.3 Lyapunov–Krasovskii Approach  107
    4.3.4 Cascaded-Delay System Approach  109
    4.3.5 *Extension to Systems with Time-Varying Delays  114
  4.4 Robust Stabilizing Controls for State Delayed Systems  118
    4.4.1 Lyapunov–Krasovskii Approach  118
    4.4.2 Cascaded-Delay System Approach  121
    4.4.3 *Extension to Systems with Time-Varying Delays  127
  References  131

5 Guaranteed Cost Controls  133
  5.1 Introduction  133
  5.2 Guaranteed LQ Controls for Input Delayed Systems  134
    5.2.1 State Feedback Guaranteed LQ Control for Predictive Costs  136
    5.2.2 State Feedback Guaranteed LQ Control for Standard Costs  138
    5.2.3 Output Feedback Guaranteed LQ Control for Standard Costs  141
  5.3 Guaranteed LQ Controls for State Delayed Systems  146
    5.3.1 State Feedback Guaranteed LQ Control  146
    5.3.2 Robust State Feedback Guaranteed LQ Control  154
    5.3.3 Output Feedback Guaranteed LQ Control  157
  5.4 Guaranteed H∞ Controls for Input Delayed Systems  159
    5.4.1 State Feedback Guaranteed H∞ Control for Predictive Costs  161
    5.4.2 State Feedback Guaranteed H∞ Control for Standard Costs  162
    5.4.3 Output Feedback Guaranteed H∞ Control for Standard Costs  165
  5.5 Guaranteed H∞ Controls for State Delayed Systems  169
    5.5.1 State Feedback Guaranteed H∞ Control  170
    5.5.2 Robust State Feedback Guaranteed H∞ Control  178
    5.5.3 Output Feedback Guaranteed H∞ Control  181
  References  183

6 LQ Optimal Controls  187
  6.1 Introduction  187
  6.2 Fixed Horizon LQ Controls for Input Delayed Systems  188
    6.2.1 Fixed Horizon LQ Control for Predictive Costs  188
    6.2.2 Fixed Horizon LQ Control for Standard Costs  191
  6.3 Receding Horizon LQ Controls for Input Delayed Systems  205
    6.3.1 Receding Horizon LQ Control for Predictive Costs  205
    6.3.2 Receding Horizon LQ Control for Standard Costs  214
  6.4 Fixed Horizon LQ Controls for State Delayed Systems  223
    6.4.1 Fixed Horizon LQ Control for Simple Costs  224
    6.4.2 Fixed Horizon LQ Control for Costs with Single Integral Terms  226
    6.4.3 *Fixed Horizon LQ Control for Costs with Double Integral Terms  230
  6.5 Receding Horizon LQ Controls for State Delayed Systems  236
    6.5.1 Receding Horizon LQ Control for Simple Costs  237
    6.5.2 Receding Horizon LQ Control for Costs with Single Integral Terms  239
    6.5.3 *Receding Horizon LQ Control for Costs with Double Integral Terms  247
    6.5.4 Receding Horizon LQ Control for Short Horizon Costs  256
  References  263

7 LQG Optimal Controls  265
  7.1 Introduction  265
  7.2 Fixed Horizon LQG Controls for Input Delayed Systems  266
    7.2.1 Fixed Horizon LQG Control for Predictive Costs  267
    7.2.2 Fixed Horizon LQG Control for Standard Costs  271
  7.3 Receding Horizon LQG Controls for Input Delayed Systems  276
    7.3.1 Receding Horizon LQG Control for Predictive Costs  276
    7.3.2 Receding Horizon LQG Control for Standard Costs  280
  7.4 Fixed Horizon LQG Controls for State Delayed Systems  286
    7.4.1 Fixed Horizon LQG Control for Costs with Single Integral Terms  295
    7.4.2 *Fixed Horizon LQG Control for Costs with Double Integral Terms  298
  7.5 Receding Horizon LQG Controls for State Delayed Systems  304
    7.5.1 Receding Horizon LQG Control for Costs with Single Integral Terms  306
    7.5.2 *Receding Horizon LQG Control for Costs with Double Integral Terms  312
  References  318

8 H∞ Optimal Controls  319
  8.1 Introduction  319
  8.2 Fixed Horizon H∞ Controls for Input Delayed Systems  320
    8.2.1 Fixed Horizon H∞ Control for Predictive Costs  320
    8.2.2 Fixed Horizon H∞ Control for Standard Costs  326
  8.3 Receding Horizon H∞ Controls for Input Delayed Systems  333
    8.3.1 Receding Horizon H∞ Control for Predictive Costs  333
    8.3.2 Receding Horizon H∞ Control for Standard Costs  342
  8.4 Fixed Horizon H∞ Controls for State Delayed Systems  354
    8.4.1 Fixed Horizon H∞ Control for Costs with Single Integral Terms  354
    8.4.2 *Fixed Horizon H∞ Control for Costs with Double Integral Terms  358
  8.5 Receding Horizon H∞ Controls for State Delayed Systems  364
    8.5.1 Receding Horizon H∞ Control for Costs with Single Integral Terms  365
    8.5.2 *Receding Horizon H∞ Control for Costs with Double Integral Terms  375
  References  386

Appendix A: Useful Matrix Results  389
Appendix B: Useful Lemmas  393
Appendix C: Integral Inequalities for Quadratic Functions  399
Appendix D: Stochastic Processes  405
Appendix E: Numerical Procedures for Infinite Horizon Controls  411
Appendix F: Program Codes  419
Index  421

Chapter 1

Introduction

1.1 Control Systems

In this section, we will briefly discuss important concepts for control systems, such as models and control objectives.

1.1.1 Models

The basic variables of systems are input, state, and output variables, as in Fig. 1.1. The input variables are control input variables and noises or disturbances. Noises or disturbances are undesirable elements that enter the system from outside. Control input variables are designed for the system to meet certain good properties. The output variables consist of controlled output variables and measured output variables, where measured output variables are used for feedback controls and controlled output variables are target variables to be controlled to meet a certain performance. Usually, output variables are a subset of the whole state variables. Often they are called inputs, states, outputs, or controls for simplicity. Systems containing control inputs are called control systems to emphasize the control issues.

Fig. 1.1 Control systems (block diagram: the control input and the noise/disturbance enter the system; the controlled output and the measured output leave it)

Systems we are dealing with in this book are dynamic systems. Dynamic control systems can be linear or nonlinear, with or without time delays. There can be several undesirable elements, such as disturbances, noises, uncertainties of the dynamics and its parameters, constraints on input and state variables, etc. Time delays are considered undesirable elements since they are difficult to handle. All or some of these undesirable elements exist in each system, depending on the system characteristics. A mathematical model, or simply model, is a mathematical representation of a real system, sometimes called a plant or process. A model is necessary in order to analyze or synthesize a system for certain objectives. A model can be represented as a stochastic system with noises or a deterministic system with disturbances. Usually, dynamic models are described by state-space systems where the input, state, and output are often represented as u(t), x(t), and y(t), respectively.

Time delays appear in many real processes such as chemical processes, biological systems, transportation systems, digital communication networks, some mechanical systems, etc. Time delays can appear in input, state, and output variables. There are different types of time delays such as a single time delay, multiple time delays, and distributed time delays. For example, a state with a single delay is x(t − h), where h is a time delay; a state with multiple time delays is x(t − h_1), x(t − h_2), ..., x(t − h_N), where h_1, h_2, ..., h_N are multiple time delays; and a state with distributed time delays uses all values x(s), t − h ≤ s ≤ t. Time delays can be constant or time-varying. Time-delay systems may have a single delay, multiple delays, or distributed delays in inputs, states, and outputs. The time-delay systems with a single input delay and those with a single state delay are considered to be basic time-delay systems since these two systems possess many important properties unique to time-delay systems. The systems with a single input delay are easier to handle than those with a single state delay. Systems with single input and state delays together, or systems with multiple or distributed input and state delays, can be easily understood once systems with a single input delay and those with a single state delay are understood separately. Therefore, this book deals only with systems with a single input delay and systems with a single state delay, due to the space limit and the above reason. In this book, input delayed systems and state delayed systems often mean systems with a single input delay and systems with a single state delay, respectively. Systems without time delays are called ordinary systems. States in ordinary systems are points in multi-dimensional spaces, while states in time-delay systems are functions. Therefore, the analysis and design for time-delay systems are more difficult than for ordinary systems. The two basic linear model forms are sketched below.
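To make these two basic classes concrete, the following display is a minimal sketch in a generic notation (the matrices A, A_1, B, B_1, the constant delay h, and the initial function φ are illustrative placeholders rather than a specific example from this book):

```latex
% Linear system with a single input delay: the control acts both
% instantaneously and after a constant delay h > 0
\dot{x}(t) = A\,x(t) + B\,u(t) + B_1\,u(t-h)

% Linear system with a single state delay: the dynamics depend on the
% delayed state, with initial function x(\theta) = \phi(\theta), \; -h \le \theta \le 0
\dot{x}(t) = A\,x(t) + A_1\,x(t-h) + B\,u(t)
```

The initial function φ illustrates why the state of a time-delay system is a function rather than a point: the entire segment x(s), t − h ≤ s ≤ t, is needed to continue the solution from time t.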

1.1.2 Control Objectives

There can be several objectives for control systems, which apply to both ordinary and time-delay systems. It is generally accepted that the control is designed such that the closed-loop system resulting from the feedback controls is stable and the controlled output tracks the reference signal under the above undesirable elements. That is, the closed-loop stability and the tracking performance, even under the undesirable elements, are known to be important control objectives. In order to achieve the control objectives easily, the model is separated into a nominal system without model uncertainties and an additional uncertain system representing the above undesirable elements. The control can be designed first for the nominal system and then for the uncertain system. In this case, the control objectives can be divided into simpler intermediate control objectives, such as

• Nominal stability: closed-loop stability for nominal systems.
• Robust stability: closed-loop stability for uncertain systems.
• Nominal performance: tracking performance for nominal systems.
• Robust performance: tracking performance for uncertain systems.

where the first three are considered to be the most important. Sometimes robust stabilizing controls for uncertain systems including nominal systems are directly obtained without the two steps of the above procedure.

Fig. 1.2 State feedback control (block diagram: the reference signal and the measured state enter the controller, whose output drives the system under noise/disturbance)

Fig. 1.3 Output feedback control (block diagram: as in Fig. 1.2, but the controller receives the measured output instead of the measured state)

In order to design a control, we must determine the type of the control, the so-called control structure. It has already been mentioned that the control structure must be of feedback form. If all the states can be measured, we can use state feedback controls as in Fig. 1.2. However, if only the outputs can be measured, then we have to use output feedback controls as in Fig. 1.3. State feedback controls are easier to design than output feedback controls, since state variables contain all the system information. Static feedback controls are simple in control structure since they are obtained by simple multiplication of constant matrices with states. These static feedback controls are used often for state feedback controls but seldom for output feedback controls, since outputs contain only partial information about the system. An output feedback control often employs a certain dynamic system whose outputs are used as the inputs of the control system, where the inputs of the dynamic system come from the inputs and outputs of the control system, as in Fig. 1.3. These feedback controls containing a certain dynamic system in the feedback loop are called dynamic feedback controls; both structures are sketched after this discussion. Thus output feedback controls are often dynamic feedback controls. In these cases, the dimension of the overall dynamic feedback control system increases compared to static feedback control systems. The feedback control structure can be required to


be of linear form or allowed to be of nonlinear form, depending on the problem. Single delays, multiple delays, or distributed delays can be chosen for the feedback control structure.
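As a sketch of the two structures just discussed (the gain and controller matrices K, K_1, A_c, B_c, C_c below are generic placeholders, not a design from this book), a static state feedback with a delayed term and a dynamic output feedback controller can be written as:

```latex
% Static state feedback, possibly including a single delayed state term
u(t) = K\,x(t) + K_1\,x(t-h)

% Dynamic output feedback: an auxiliary dynamic system with state z(t),
% driven by the measured output y(t), generates the control input
\dot{z}(t) = A_c\,z(t) + B_c\,y(t), \qquad u(t) = C_c\,z(t)
```

The second form shows why the dimension of the overall closed-loop system increases for dynamic feedback controls: the controller state z(t) is appended to the plant state.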

1.2 Stabilizing and Optimizing Controls

There are several approaches for control designs to meet some of the above control objectives with some control structures. Controls can be designed in the time domain or in the frequency domain, where the frequency domain approaches can usually be applied only to linear systems. This book deals only with the time domain approaches. There are three main approaches in the time domain to obtain stabilizing feedback controls for time-delay control systems: non-optimal stabilizing controls, optimal controls on infinite and receding horizons, and sub-optimal guaranteed cost controls. This book deals with these three main approaches in unified frameworks and notations. There seem to be no other books that deal with all three classes of stabilizing controls, i.e. non-optimal stabilizing, optimal stabilizing, and sub-optimal stabilizing controls, all together.

1.2.1 Stabilizing Controls

A feedback control can be easily designed if it needs to meet only the stability or robust stability of the closed-loop system, without considering a performance criterion. It is much easier if, in addition, a feedback control structure can be assumed a priori with unknown parameters, which are then sought to meet the stability or robust stability of the closed-loop system. These simple stabilizing controls are called simply stabilizing controls. They are non-optimal stabilizing controls since no performance criteria are required. These non-optimal stabilizing controls become particularly useful for time-delay systems since feedback controls meeting both stability and performance are much harder to obtain for time-delay systems. The procedure to obtain non-optimal stabilizing controls is as follows (a computational sketch is given at the end of this subsection). First, we choose a rather simple control structure such as a static or dynamic feedback control with undetermined parameters. Then these undetermined parameters are sought so that the resulting closed-loop system satisfies stability or robust stability conditions. Several static state feedback stabilizing controls are introduced for systems with a single input delay and then for systems with a single state delay in Chap. 3, while several dynamic output feedback stabilizing controls are given in Chap. 4. When feedback controls are applied to the systems, the resulting systems are without external variables, which are called homogeneous systems. Therefore, the stability of homogeneous closed-loop systems without external variables is essential for the design of feedback stabilizing controls and has been much investigated. The well-known Lyapunov theories for general time-delay systems and various modifications for linear time-delay systems are given in Chap. 2. Early discussions on stability are found in [16]. Stability and robust stability are well treated in [11, 13, 24]. Stability and stabilization are discussed in [2, 21].
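As a computational illustration of this procedure, the sketch below checks a standard delay-independent Lyapunov–Krasovskii stability condition for a closed-loop state delayed system ẋ(t) = Ax(t) + A_1x(t − h) by LMI feasibility. This is a minimal sketch, not one of the book's MATLAB codes: the example matrices are arbitrary, cvxpy stands in for an LMI toolbox, and sharper delay-dependent conditions are the subject of Chap. 2.

```python
import numpy as np
import cvxpy as cp

# Example closed-loop state delayed system x'(t) = A x(t) + A1 x(t - h).
A = np.array([[-2.0, 0.0], [0.0, -2.0]])
A1 = np.array([[0.5, 0.2], [0.0, 0.5]])
n = A.shape[0]

# Delay-independent condition: find P > 0 and Q > 0 such that
# [[A'P + PA + Q, P A1], [A1'P, -Q]] < 0, which makes
# V = x'Px + (integral of x'Qx over [t-h, t]) a valid
# Lyapunov-Krasovskii functional for every constant delay h >= 0.
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ A1],
               [A1.T @ P, -Q]])
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("stable for all h >= 0" if problem.status == "optimal"
      else "this (conservative) test is inconclusive")
```

For stabilization rather than analysis, the same idea is applied to the closed loop with u(t) = Kx(t), after a change of variables that keeps the conditions linear in the unknowns.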

1.2.2 Optimal Controls over Finite, Infinite, and Receding Horizons

Optimal controls, or optimizing controls, have been widely used to meet certain control objectives. An optimal control is obtained to meet a certain performance criterion and is mathematically well defined on a finite horizon. Optimal controls for general systems on finite horizons are usually obtained as open-loop controls, which are certain time functions and not of feedback form. Fortunately, optimal controls for quadratic performance criteria for linear time-delay systems can be obtained in closed-loop form, i.e. state and distributed input feedback form for input delayed systems and state and distributed state feedback form for state delayed systems.

Often, real feedback control systems must run continuously in time, as in electrical power generation plants and chemical processes. In these ongoing processes, finite horizon optimal controls cannot be adopted; infinite horizon optimal controls must be used in order to serve as stabilizing feedback controls. Thus finite horizon optimal controls are extended to infinite horizon controls by extending the terminal time to infinity. These infinite horizon optimal controls, often called steady-state optimal controls, become important if they are used for stabilizing controls defined on infinite horizons. Therefore, we have to check whether infinite horizon optimal controls exist and stabilize the systems under some conditions.

There are two main performance criteria for the control design: LQ and H∞ performance criteria. For linear time-delay systems, LQ controls are obtained to meet LQ performance criteria and are given as state feedback controls. For linear stochastic time-delay systems with noises, LQG controls are obtained to meet expected values of linear quadratic performance criteria with certain output feedback constraints and are given as output feedback controls. For linear time-delay systems with disturbances, H∞ controls are obtained to meet a certain H∞ performance criterion with respect to the two variables of control inputs and disturbance inputs. Disturbance inputs sometimes reflect system uncertainties; thus H∞ controls can be considered as robust optimal controls. As stated above, LQ, LQG, and H∞ controls are obtained first on finite horizons and then extended to infinite horizons. However, stability properties and computations of infinite horizon LQ, LQG, and H∞ controls are not well known for time-delay systems. LQ optimal controls are dealt with in [7, 22, 40]. H∞ controls are discussed in [1, 7, 37, 40].

We can introduce a new type of control as follows. At the current time, the optimal control is obtained on a finite horizon from the current time t, say [t, t + T]. Among the optimal controls on the entire fixed horizon [t, t + T], only the first one at the time t is adopted as the current control. The procedure is then repeated at the next time, say [s, s + T] for s > t. This control is called a receding horizon control (RHC), which is also called a model predictive control (MPC) with the receding horizon principle, or simply MPC. The term receding horizon is introduced since the horizon recedes as time proceeds. The concept of RHC can be easily explained by using a company's annual investment planning to maximize the profit. The investment planning should be continued for the years to come. For a new policy, it may be good to have a short-term, for example 5-year, investment planning every year and to select the first-year investment among the investments over the short term. This annual short-term planning is exactly the same as RHC. This concept is depicted in Fig. 1.4.

Fig. 1.4 Annual short-term planning [17] © 2005 Springer with permission (figure: at each year 2017, 2018, ..., 2027, a 5-year investment plan is drawn up and only the first year's investment is implemented)

It is noted that the receding horizon control comes from repeated applications of finite horizon controls but is defined at any time including infinity, in order to be used as a stabilizing control. It is a built-in closed-loop control, i.e. feedback control, for general systems since the control is obtained by using the current state. Receding horizon LQ, LQG, and H∞ controls can be obtained from finite horizon LQ, LQG, and H∞ controls. In this book, fixed horizons are used in order to distinguish them from receding horizons. Since finite horizons and infinite horizons are fixed and not receding, they together are called fixed horizons. Finite horizons are sometimes named fixed finite horizons.

There are several advantages of RHCs, listed below; a minimal computational sketch of the receding horizon principle follows these lists.

• Applicability to a broad class of systems. The optimization problem over the finite horizon, on which RHC is based, can be applied to a broad class of systems, including nonlinear systems and time-delay systems.
• Built-in closed-loop and feedback control. While optimal controls are usually open-loop controls, RHCs always provide closed-loop controls due to the repeated computation and the implementation of only the first current control.
• Constraint handling capability. For linear systems with the input and state constraints that are common in industrial problems, RHC can be easily and efficiently computed by using mathematical programming, such as quadratic programming (QP) and semi-definite programming (SDP).


• Guaranteed stability. By introducing some terminal requirements, RHC can guarantee stability easily for linear and nonlinear systems.
• Good tracking performance. RHC presents good tracking performance by utilizing the future reference signal over a finite horizon, which can be known in many cases. In PID control, only the current reference signal is used even when the future reference signals are available over a finite horizon. Such a PID control might be too short-sighted for the tracking performance.
• Adaptation to changing parameters. Since RHC is computed repeatedly, it can adapt to future changes in system parameters that become known later. RHC can be an appropriate strategy for known time-varying systems.
• Easier computation compared with infinite horizon optimal controls. Since the computation is carried out over a finite horizon, the solution can be obtained repeatedly in smaller batches, while infinite horizon optimal control must handle the whole information at one time. RHCs are more suitable for on-line computation than infinite horizon optimal controls.
• Broad industrial applications. Owing to the above advantages, there exist broad industrial applications for RHC, particularly in industrial processes. When processes are slow or the computation is fast enough, RHCs can be utilized easily. There exist broad applications for ordinary systems, and more applications for time-delay systems will come.

There are some disadvantages of RHCs.

• Longer computation time compared with conventional non-optimal controls. The absolute computation time of RHC may be longer compared with conventional non-optimal controls, particularly for nonlinear systems, although the computation time of RHC at each time can be smaller than that of the corresponding infinite horizon optimal control. However, this problem may be overcome by the high-speed processors which are now available.
• Difficulty in the design of robust controls for parameter uncertainties. System properties such as robust stability and robust performance under parameter uncertainties are usually hard to treat in the time domain optimization problems on which RHCs are based. The repeated computation for RHC makes it more difficult to analyze the robustness.

The advantages and disadvantages are mentioned for constrained and nonlinear systems even though they are not covered in this book. These will be useful when we extend the results in this book to constrained and nonlinear systems. There are no other books that deal with receding horizon controls or model predictive controls for time-delay systems, while there exist several books for ordinary systems including [3, 17]. The advantages and disadvantages of receding horizon controls in this subsection are from [17] © 2005 Springer with permission.
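The sketch below illustrates the receding horizon principle for a delay-free, discrete-time LQ problem (the model, weights, and horizon are illustrative placeholders; for the time-delay problems of Chaps. 6–8 the Riccati recursion is replaced by the corresponding delay-dependent equations). At every step a finite horizon problem is solved, only the first control is applied, and the horizon then recedes by one step.

```python
import numpy as np

def first_rhc_gain(A, B, Q, R, Qf, N):
    """Backward Riccati recursion over an N-step horizon; only the first
    gain is returned, since only the first control is implemented under
    the receding horizon principle."""
    P = Qf
    for _ in range(N):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A + A.T @ P @ B @ K
    return K

# Illustrative second-order model and weights.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, Qf = np.eye(2), np.eye(1), 10.0 * np.eye(2)

x = np.array([[1.0], [0.0]])
for k in range(100):
    K = first_rhc_gain(A, B, Q, R, Qf, N=10)  # re-solved at every time step
    x = A @ x + B @ (K @ x)                   # apply only the first control
print("final state norm:", float(np.linalg.norm(x)))
```

For a linear time-invariant model the recomputed gain is the same at every step; the repetition pays off when constraints, nonlinearities, or known parameter changes make each finite horizon problem different.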


1.2.3 Guaranteed Cost Controls

Simple and non-optimal stabilizing controls are obtained without any performance criteria, while optimal stabilizing controls are obtained with some performance criteria. For linear time-delay systems, LQ controls and H∞ controls are obtained in feedback forms of states and distributed inputs or states. Therefore, optimal stabilizing controls are rather difficult to compute, while non-optimal stabilizing controls, given in feedback forms of a single input or state delay, are rather easy to compute. Receding horizon controls still carry some burden of solving repeated optimization problems. If there are time-varying delays, unknown delays, or model uncertainties, the problems become complicated. We will look at alternative approaches, so-called guaranteed cost controls (GCC), where stabilizing control structures are chosen a priori in feedback forms of a single input or state delay, like non-optimal stabilizing controls. They are obtained to meet the requirement that an infinite horizon cost is less than a certain upper bound larger than the minimum cost and that the closed-loop system is stable. GCCs are sub-optimal controls in the sense that the control structure is not free but limited to a certain form with unknown parameters, and they do not minimize costs but seek certain upper bounds larger than the minimum costs. The computation of guaranteed cost controls is rather easy compared to optimal stabilizing controls and a little more difficult than that of non-optimal stabilizing controls. Therefore, these controls sit between non-optimal stabilizing controls and optimal stabilizing controls. Performance criteria employed for GCCs are similar to those for optimal controls, such as LQ and H∞ performance criteria. Guaranteed cost controls are dealt with only to a limited extent due to space limits. They are given in Chap. 5 before the optimal controls in Chaps. 6, 7, and 8 since they are technically close to those in Chaps. 3 and 4.

1.3 Models for Time-Delay Systems

We introduce different time-delay systems and their models to understand real problems. This book discusses systems with delays in inputs and states. First we introduce time-delay systems with delays only in inputs and outputs, such as a networked control system, an inverted pendulum, and an industrial boiler system. We then introduce more complicated time-delay systems with delays in states, such as a gene regulatory bio system, a refining plant, a liquid propellant rocket motor, a fast transmission control protocol (TCP) network system, and a human immunodeficiency virus (HIV) pathogenesis system. There exist many other types of time-delay systems which are not discussed in this book due to space limits. While this book deals with linear time-delay systems, the models in this section are mostly nonlinear to reflect the real world. Linearized models can be obtained at operating points of the nonlinear systems if necessary.


1.3.1 Input Delayed Systems

Networked Control System

Some control systems may have multiple sensors and actuators, and a controller, where they share a common communication network. Control systems with networks are called networked control systems (NCSs). Figure 1.5 shows a typical networked control system where there exist network-induced delays and computation delays [14]. Let $u^j$ and $y^j$ be the $j$th actuator input and sensor output of the plant, respectively, and $u_c^j$ and $y_c^j$ be the $j$th input and output of the controller. Since multiple data of sensors and controller outputs are transmitted sequentially through a single network medium, it takes some time, called the communication delay. In addition, some urgent sporadic real-time data or some non-real-time data must be transmitted through the same network, which causes unpredicted time delays. Therefore, there exist the communication delay from the sensor to the controller, say $\tau_{sc}^j$, and the communication delay from the controller to the actuator, say $\tau_{ca}^j$. These communication delays may be less than or equal to the maximum communication delays. We have the following relations:
$$u_c^j(t) = y^j(t - \tau_{sc}^j), \quad (1.1)$$
$$u^j(t) = y_c^j(t - \tau_{ca}^j), \quad (1.2)$$
where $0 \le \tau_{sc}^j \le \tau_{sc,\max}$ and $0 \le \tau_{ca}^j \le \tau_{ca,\max}$. In addition, we may have a computation delay when the controller computes some control algorithms. That is, $y_c^j$ may have a computation delay, say $\tau_{com}^j$, from the ideal output of the controller, $y_{c,ideal}^j$, such that
$$y_c^j(t) = y_{c,ideal}^j(t - \tau_{com}^j), \quad (1.3)$$
where $0 \le \tau_{com}^j \le \tau_{com,\max}$.

Inverted Pendulum

Consider an inverted pendulum model in Fig. 1.6, where x is the displacement of the cart, θ is the angle from the upright position of the pendulum, M is the mass of the cart, m is the mass of the pendulum, b is the friction coefficient of the cart, l is the length from the pivot to the center of mass of the pendulum, and g is the gravitational constant. The gear shaft is rotated by some torque T, usually by an electric motor, which generates the force F applied to the cart. The inverted pendulum can be represented by the following model:
$$(M + m)\ddot{x} + b\dot{x} + ml\cos(\theta)\ddot{\theta} - ml\sin(\theta)\dot{\theta}^2 = F, \quad (1.4)$$
$$l\ddot{\theta} - g\sin(\theta) - \cos(\theta)\ddot{x} = 0. \quad (1.5)$$

Fig. 1.5 Networked control loops with multiple sensors and actuators

Fig. 1.6 Inverted pendulum

There exists a backlash represented by a time delay h such that F = cT(t − h) for some constant c. Let us define state variables x₁ = θ, x₂ = θ̇, x₃ = x, x₄ = ẋ, and u = T. A linearized model of the inverted pendulum near θ = 0 is then given by
$$\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ \frac{(M+m)g}{Ml} & 0 & 0 & \frac{b}{Ml} \\ 0 & 0 & 0 & 1 \\ -\frac{mg}{M} & 0 & 0 & -\frac{b}{M} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ -\frac{c}{Ml} \\ 0 \\ \frac{c}{M} \end{bmatrix} u(t-h),$$
$$y(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x(t),$$
where y₁ is the measured θ and y₂ is the measured x. The control objective is to maintain the pendulum upright at a specified position.
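As a quick illustration of how such a delayed-input model can be simulated, the following Python sketch integrates the linearized pendulum with a torque command that reaches the cart h seconds late. All numeric parameter values are assumptions chosen only for this example; since the delayed input is an exogenous signal, an ordinary ODE solver suffices.

import numpy as np
from scipy.integrate import solve_ivp

M, m, b, l, c, g, h = 2.0, 0.1, 0.5, 0.3, 1.0, 9.81, 0.1   # assumed values

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [(M + m) * g / (M * l), 0.0, 0.0, b / (M * l)],
              [0.0, 0.0, 0.0, 1.0],
              [-m * g / M, 0.0, 0.0, -b / M]])
B = np.array([0.0, -c / (M * l), 0.0, c / M])

def u(t):
    # exogenous torque command; the plant sees it h seconds late
    return 0.1 if t >= 0.0 else 0.0

def f(t, x):
    return A @ x + B * u(t - h)      # input delay: u acts with lag h

sol = solve_ivp(f, [0.0, 2.0], [0.05, 0.0, 0.0, 0.0], max_step=0.01)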


Fig. 1.7 Industrial boiler plant [26] © 1996 IEEE with permission

Industrial Boiler

Delays in control are frequently encountered in systems where valves act as actuators. For example, see Fig. 1.7, which shows the schematic diagram of an industrial boiler, where x₁ is the drum pressure state (kgf/cm²), y₁ is the measured drum pressure (PSI), y₂ and x₂ are the measured excess oxygen level and its state, respectively (percent), x₃ is the system fluid density (kg/m³), y₃ is the measured drum water level (in), y₄ is the measured steam flow rate (kg/s), u₁, u₂, u₃ are, respectively, the fuel, air, and feedwater flow rate inputs, which take values between 0 and 1, x₄ is the exogenous variable related to the load disturbance intensity between 0 and 1, and the variables nᵢ are colored noises. The steam generated in the boiler goes to a turbine of an electric generator, which acts as a load on the boiler. The load disturbance is reflected in x₄. Due to valve sticking or some other reasons, there can be delays h₁, h₂, and h₃ in the opening action of the control valves. Delays h₄, h₅, h₆, and h₇ include measurement instrumentation delays, process delays, and transport delays. The explicit model equations have the form

$$\dot{x}_1(t) = c_{11} x_4(t) x_1^{9/8}(t) + c_{12} u_1(t-h_1) - c_{13} u_3(t-h_3),$$
$$\dot{x}_2(t) = c_{21} x_2(t) + \frac{c_{22} u_2(t-h_2) - c_{23} u_1(t-h_1) - c_{24} u_1(t-h_1) x_2(t)}{c_{25} u_2(t-h_2) - c_{26} u_1(t-h_1)},$$
$$\dot{x}_3(t) = -c_{31} x_1(t) - c_{32} x_4(t) x_1(t) + c_{33} u_3(t-h_3),$$
$$\dot{x}_4(t) = -c_{41} x_4(t) + c_{42} u_1(t-h_1) + c_{43} + n_5(t),$$
$$y_1(t) = c_{51} x_1(t-h_4) + n_1(t),$$
$$y_2(t) = c_{61} x_2(t-h_5) + n_2(t),$$
$$y_3(t) = c_{70} x_1(t-h_6) + c_{71} x_3(t-h_6) + c_{72} x_4(t-h_6) x_1(t-h_6) + c_{73} u_3(t-h_3-h_6) + c_{74} u_1(t-h_1-h_6) + \frac{\{c_{75} x_1(t-h_6) + c_{76}\}\{1 - c_{77} x_3(t-h_6)\}}{x_3(t-h_6)\{x_1(t-h_6) + c_{78}\}} + c_{79} + n_3(t),$$
$$y_4(t) = \{c_{81} x_4(t-h_7) + c_{82}\} x_1(t-h_7) + n_4(t),$$
where the c_{ij} are appropriate coefficients and the h_i are delay constants. The above equations and explanations of variables are from [26] © 1996 IEEE with permission. The control objectives are to maintain the steam pressure, the level of the water in the drum, and the oxygen level at their specified levels. These output levels should be maintained despite variations in steam flow rate, fluctuations in the heating value of the fuel, and other commonly present disturbances such as variations in ambient temperature and various leaks.

1.3.2 State Delayed Systems

Gene Regulatory Network

A gene regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins. These play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology. Figure 1.8 shows a diagram of gene regulatory networks with time delays.

Fig. 1.8 Gene regulatory networks

In a genetic regulatory network, a number of genes interact and regulate the expression of other genes through proteins, the gene derivatives. The change in expression of a gene is governed by the stimulation and inhibition of proteins in transcriptional and translational processes. The following differential equations have been used recently to describe the gene regulatory network:
$$\dot{\phi}_i(t) = -\kappa_i \phi_i(t) + K_i\{\psi_1(t-\tau_1(t)), \psi_2(t-\tau_2(t)), \cdots, \psi_n(t-\tau_n(t))\}, \quad (1.6)$$
$$\dot{\psi}_i(t) = -\theta_i \psi_i(t) + v_i \phi_i(t-d_i(t)), \quad i = 1, 2, \ldots, n, \quad (1.7)$$
where φᵢ(t) ∈ R and ψᵢ(t) ∈ R are the concentrations of mRNA and protein of the ith node, whose decay rates are denoted by κᵢ and θᵢ, respectively; vᵢ is the translation



rate, and the function Kᵢ represents the feedback regulation of the protein on the transcription, which is generally a nonlinear function but is monotonic in each variable. τᵢ(t) and dᵢ(t) represent delays in the transcription and translation processes, respectively. From (1.6)–(1.7) we can see that, in this network, for any single gene i, there is only one output ψᵢ(t − τᵢ(t)) to other genes, but multiple inputs ψⱼ(t − τⱼ(t)) for j = 1, 2, ..., n from other genes. Being a monotonically increasing or decreasing regulatory function, Kᵢ usually takes the Hill form. Here, the function Kᵢ is taken as
$$K_i\{\psi_1(t), \psi_2(t), \cdots, \psi_n(t)\} = \sum_{j=1}^n K_{ij}(\psi_j(t)),$$
which is called the SUM logic because each transcription factor acts additively to regulate the ith gene. K_{ij} is a monotonic function of the Hill form; that is, if the transcription factor j is an activator of the gene i, then
$$K_{ij}(\psi_j(t)) = \rho_{ij} \frac{\left(\psi_j(t)/\beta_j\right)^{H_j}}{1 + \left(\psi_j(t)/\beta_j\right)^{H_j}}, \quad (1.8)$$
and if the transcription factor j is a repressor of the gene i, then
$$K_{ij}(\psi_j(t)) = \rho_{ij} \frac{1}{1 + \left(\psi_j(t)/\beta_j\right)^{H_j}} = \rho_{ij} \left\{ 1 - \frac{\left(\psi_j(t)/\beta_j\right)^{H_j}}{1 + \left(\psi_j(t)/\beta_j\right)^{H_j}} \right\}. \quad (1.9)$$

Here, Hⱼ is the Hill coefficient, βⱼ is a positive constant, and ρᵢⱼ is a constant that is the dimensionless transcriptional rate of the transcription factor j to i. Based on (1.8)–(1.9), the gene networks (1.6)–(1.7) can be rewritten as
$$\dot{\phi}_i(t) = -\kappa_i \phi_i(t) + \sum_{j=1}^n b_{ij} f_j\left(\psi_j(t-\tau_j(t))\right) + \rho_i, \quad (1.10)$$
$$\dot{\psi}_i(t) = -\theta_i \psi_i(t) + v_i \phi_i(t-d_i(t)), \quad i = 1, 2, \ldots, n, \quad (1.11)$$
$$f_j\left(\psi_j(t)\right) = \frac{\left(\psi_j(t)/\beta_j\right)^{H_j}}{1 + \left(\psi_j(t)/\beta_j\right)^{H_j}}, \quad \rho_i = \sum_{j \in I_i} \rho_{ij}, \quad (1.12)$$


where Iᵢ is the set of all j which are repressors of the gene i, and bᵢⱼ is defined as bᵢⱼ = ρᵢⱼ if the transcription factor j is an activator of the gene i, bᵢⱼ = 0 if there is no link from the gene j to the gene i, and bᵢⱼ = −ρᵢⱼ if the transcription factor j is a repressor of the gene i. Figure 1.8 shows a diagram for gene regulatory networks, where
$$\bar{\phi}(t) = \begin{bmatrix} \phi_1(t) \\ \phi_2(t) \\ \vdots \\ \phi_n(t) \end{bmatrix}, \quad \bar{\psi}(t) = \begin{bmatrix} \psi_1(t) \\ \psi_2(t) \\ \vdots \\ \psi_n(t) \end{bmatrix}, \quad \bar{d}(t) = \begin{bmatrix} d_1(t) \\ d_2(t) \\ \vdots \\ d_n(t) \end{bmatrix}, \quad \bar{\tau}(t) = \begin{bmatrix} \tau_1(t) \\ \tau_2(t) \\ \vdots \\ \tau_n(t) \end{bmatrix}.$$

The Eqs. (1.6) and (1.7) and explanations of variables are from [4] © 2002 IEEE with permission. The other Eqs. (1.8)–(1.12) are straightforward and are from [33]. The analysis of gene regulatory networks helps one understand the behavior of the system at increasing levels of complexity, from the gene to the signaling pathway, cell, or tissue level.
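The Hill regulation functions (1.8)–(1.9) and the SUM logic above are straightforward to evaluate; the following Python sketch, with assumed parameter values, shows how the regulation term of a single gene i would be computed.

import numpy as np

def hill_activation(psi, beta, H, rho):
    # K_ij for an activator: rho * (psi/beta)^H / (1 + (psi/beta)^H), as in (1.8)
    r = (psi / beta) ** H
    return rho * r / (1.0 + r)

def hill_repression(psi, beta, H, rho):
    # K_ij for a repressor: rho / (1 + (psi/beta)^H), as in (1.9)
    return rho / (1.0 + (psi / beta) ** H)

def K_i(psi_vec, roles, beta, H, rho):
    # SUM logic: each transcription factor acts additively on gene i
    total = 0.0
    for j, psi in enumerate(psi_vec):
        fj = hill_activation if roles[j] == "act" else hill_repression
        total += fj(psi, beta[j], H[j], rho[j])
    return total

# two regulators of gene i: one activator and one repressor (assumed values)
print(K_i([0.5, 1.5], ["act", "rep"], beta=[1.0, 1.0], H=[2, 2], rho=[1.0, 1.0]))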

Refining Plant

Fig. 1.9 Refining plant [28] © 1971 IEEE with permission

Consider a typical control problem occurring in the chemical and petroleum industries. The block diagram in Fig. 1.9a shows a refining plant. Raw materials A and B enter the chemical reactor and take part in three chemical reactions that produce a product P along with some other byproducts. F_A and F_B represent the feed rates (in pounds per hour) of raw materials A and B, respectively. The reaction happens to be exothermic, so a heat exchanger is required to cool the reactants to a temperature at which an undesirable byproduct will settle out of the reactant mixture. This settling takes place in the decanter. Finally, the product P and some impurities enter a distillation column, where the product is separated from the impurities. Then the material that is composed of the undistilled portion of product


P, a certain percentage of raw materials A and B, and some byproducts of the chemical reaction is recycled back to the chemical reactor, where it is reprocessed. The recycle loop ensures that useful products will not be discarded. The recycle loop introduces a significant time delay into the problem. In practical situations, it is not at all unusual for material to take 10 min to travel from the chemical reactor through the recycle loop and back to the reactor. This is because of the distances separating the various stages of the overall process and the lengths of piping in the stages themselves. The control problem is to obtain feed rates F_A and F_B of the raw materials A and B so that the relative compositions of the products of the chemical reaction are kept at desired values (which will ensure steady production of product P). The differential equations governing the chemical reactor are nonlinear, but the problem we consider here is that of regulating the deviations in reactant compositions; consequently, we use the linearized equations to determine the proper corrections in the feed rates F_A and F_B. The cooler, decanter, and distillation column alter only the relative compositions of the reactants leaving the reactor by fixed ratios. In such a case, the block diagram in Fig. 1.9a can be simplified to that shown in Fig. 1.9b, where L is a linear transformation accounting for the change in relative compositions. The block in the recycle loop represents the effects of the cooler, decanter, and distillation column upon the change in the relative compositions of the reactants. The thick lines in the recycle loop indicate that several reactor products are recycled. δF_A is the deviation from the nominal value of the feed rate of material A in pounds per hour, V_R is the pound-volume of the chemical reactor, δF_B is the deviation of the feed rate of material B, a(t) is the deviation in the weight composition (dimensionless) of the reactant A from its nominal value, b(t) is the deviation in the weight composition of the reactant B, c(t) is the deviation in the weight composition of an intermediate product C, and p(t) is the deviation in the weight composition of the product P. The linearized (time-scaled) equations for the chemical reactor are
$$\frac{da}{dt} = c_{11} a(t) + \bar{c}_{11} a(t-h) + c_{12} b(t) + \frac{\delta F_A}{6 V_R},$$
$$\frac{db}{dt} = c_{21} a(t) + c_{22} b(t) + \bar{c}_{22} b(t-h) + c_{23} c(t) + \frac{\delta F_B}{6 V_R},$$
$$\frac{dc}{dt} = c_{31} a(t) + c_{32} b(t) + c_{33} c(t) + \bar{c}_{33} c(t-h) + c_{34} p(t),$$
$$\frac{dp}{dt} = c_{42} b(t) + c_{43} c(t) + c_{44} p(t) + \bar{c}_{44} p(t-h).$$

Here, h is the recycle time. Letting x₁ = a, x₂ = b, x₃ = c, x₄ = p, u₁ = δF_A/(6V_R), and u₂ = δF_B/(6V_R), we can write the system equations in matrix form:



$$\dot{x}(t) = \begin{bmatrix} c_{11} & c_{12} & 0 & 0 \\ c_{21} & c_{22} & c_{23} & 0 \\ c_{31} & c_{32} & c_{33} & c_{34} \\ 0 & c_{42} & c_{43} & c_{44} \end{bmatrix} x(t) + \begin{bmatrix} \bar{c}_{11} & 0 & 0 & 0 \\ 0 & \bar{c}_{22} & 0 & 0 \\ 0 & 0 & \bar{c}_{33} & 0 \\ 0 & 0 & 0 & \bar{c}_{44} \end{bmatrix} x(t-h) + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} u(t).$$

The above equations and explanations of variables are from [28] © 1971 IEEE with permission.

Liquid Propellant Rocket Motor

Fig. 1.10 Liquid propellant rocket motor

Consider a liquid monopropellant rocket motor with a pressure feeding system as in Fig. 1.10. A linearized version of the feeding system and combustion chamber equations, assuming non-steady flow, is given as follows:
$$\dot{\phi}(t) = (\gamma - 1)\phi(t) - \gamma \phi(t-h) + \mu(t-h),$$
$$\dot{\mu}_1(t) = \frac{1}{\xi J}\left\{ -\psi(t) + \frac{p_0 - p_1}{2\Delta p} \right\},$$
$$\dot{\mu}(t) = \frac{1}{(1-\xi)J}\left\{ -\mu(t) + \psi(t) - P\phi(t) \right\},$$
$$\dot{\psi}(t) = \frac{1}{E}\left\{ \mu_1(t) - \mu(t) \right\}, \quad (1.13)$$

where T is the reduced time normalized by the gas residence time $\theta_g$ in steady operation, $h = \bar{\tau}/\theta_g$ is the reduced time lag with $\bar{\tau}$ the value of the time lag in steady operation, $\phi(t) = (p(t) - \bar{p})/\bar{p}$, with p(t) the instantaneous pressure in the combustion chamber and $\bar{p}$ the pressure in the combustion chamber in steady operation, $\mu(t) = (\dot{m}_i - \dot{\bar{m}})/\dot{\bar{m}}$, with $\dot{m}_i$ the instantaneous mass rate of injected propellant and $\dot{\bar{m}}$ the value of $\dot{m}_i$ in steady operation, $\mu_1(t) = (\dot{m}_1(t) - \dot{\bar{m}})/\dot{\bar{m}}$, with $\dot{m}_1(t)$ the instantaneous mass flow upstream of the capacitance representing the elasticity, $\psi(t) = (p_1(t) - \bar{p}_1)/(2\Delta p)$, with $p_1(t)$ the instantaneous pressure at the place in the feeding line where the capacitance is located, $\bar{p}_1$ the value of $p_1$ in steady operation and $\Delta p = \bar{p}_1 - \bar{p}$ the injector pressure drop in steady operation, $p_0$ is the regulated gas pressure of the pressure supply, $P = \bar{p}/(2\Delta p)$, and γ is the pressure exponent of the pressure dependence of the combustion process taking place during the time lag,



ξ represents the fractional length for the pressure supply, J is the inertia parameter of the line, E is the elasticity parameter of the line, and $u = (p_0 - p_1)/(2\Delta p)$ is a control variable. Letting x₁ = φ, x₂ = μ₁, x₃ = μ, and x₄ = ψ, the system (1.13) becomes
$$\dot{x}(t) = \begin{bmatrix} \gamma - 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{\xi J} \\ -\frac{P}{(1-\xi)J} & 0 & -\frac{1}{(1-\xi)J} & \frac{1}{(1-\xi)J} \\ 0 & \frac{1}{E} & -\frac{1}{E} & 0 \end{bmatrix} x(t) + \begin{bmatrix} -\gamma & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} x(t-h) + \begin{bmatrix} 0 \\ \frac{1}{\xi J} \\ 0 \\ 0 \end{bmatrix} u(t).$$

The Eq. (1.13) and explanations of variables are from [39] © 1995 Elsevier with permission. Related results are found in [5]. The control objective is to guide the rocket to hit a desired target.
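State delayed models such as the refinery and rocket motor equations above can be simulated with a simple history buffer, a discrete-time version of the method of steps. The following Python sketch uses small hypothetical matrices rather than the physical coefficients of the models above; it is a minimal illustration, not the book's computational method.

import numpy as np

A  = np.array([[-2.0, 0.0], [0.5, -1.0]])    # hypothetical system matrix
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])      # hypothetical delayed-state matrix
h, dt, T = 0.5, 0.001, 5.0
nd = int(round(h / dt))                      # delay expressed in steps
steps = int(round(T / dt))

# x[k] holds x(t) at t = (k - nd) * dt, so indices 0..nd cover [-h, 0]
x = np.zeros((nd + steps + 1, 2))
x[: nd + 1] = [1.0, 0.0]                     # constant initial function phi

for k in range(nd, nd + steps):
    # explicit Euler step of x'(t) = A x(t) + A1 x(t - h)
    x[k + 1] = x[k] + dt * (A @ x[k] + A1 @ x[k - nd])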

FAST TCP Network System

Fig. 1.11 General network system

Congestion is a situation in which too many packets are present in a part of a general network, as in Fig. 1.11, and performance degrades. Congestion in a network may occur when the number of packets sent to the network is greater than the capacity of the network. Congestion control is a distributed algorithm to share network resources among competing users. Congestion control mechanisms in the Internet represent one of the largest deployed artificial feedback systems, as the Internet continues to expand in size, diversity, and speed, playing an ever-increasing role in the integration of other networks. On the Internet, this congestion control is performed by the Transmission Control Protocol (TCP) in the source and destination computers involved in data transfers. FAST TCP is especially designed for high-throughput and high-speed data transfer over long distances and uses queuing delay rather than packet loss as an indicator of

congestion. The sender measures the total round-trip time (RTT) and then updates the congestion window, which indicates the maximum amount of data that can be sent out on a connection without being acknowledged. FAST TCP employs the following rule for updating the congestion window w(t):
$$w(t + \Delta) = \gamma \left( \frac{\mathrm{baseRTT}}{\mathrm{RTT}} w(t) + \alpha \right) + (1 - \gamma) w(t), \quad (1.14)$$

where Δ denotes the congestion window update period, γ ∈ (0, 1] is the tuning parameter, baseRTT denotes the minimum RTT observed thus far, and α is a positive protocol parameter that determines the total number of packets queued in routers. Generally, the baseRTT is assumed to be constant and is denoted by d, which is the round-trip propagation delay, and the RTT is expressed as the sum of the baseRTT d and the measured queuing delay q(t), denoted as T(t) = d + q(t). To simplify the model, let us derive the equilibrium points of w(t) and q(t), say w* and q*. For (1.14), at the equilibrium point, the relation w* = α + αd/q* is obtained. If we use the static link model at the source side, i.e. w(t − T(t))/T(t) = c, we obtain w* = T*c = dc + q*c, which results in the relations q* = α/c and w* = α + cd. Therefore, the model (1.14) can be written around the equilibrium point, with w̃(t) ≜ w(t) − w* ≡ w(t) − α − cd, as
$$\tilde{w}(t + \Delta) - \tilde{w}(t) = \gamma \left( -\frac{q(t)}{d + q(t)} \tilde{w}(t) - \frac{d}{d + q(t)} \tilde{w}(t - T(t)) \right). \quad (1.15)$$

For a small Δ, a continuous-time FAST TCP model is given as
$$\dot{\tilde{w}}(t) = \gamma_e \left( -\frac{q(t)}{d + q(t)} \tilde{w}(t) - \frac{d}{d + q(t)} \tilde{w}(t - T(t)) \right), \quad (1.16)$$

where γₑ = γ/Δ. The Eq. (1.14) and explanations of variables are from [34] © 2006 IEEE with permission. The other Eqs. (1.15) and (1.16) can be found in [15]. As stated above, the window size is controlled to avoid congestion collapse and to achieve high-speed data transfer. The control algorithm of the window size constitutes a time-delay system.
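The equilibrium relations above are easy to check numerically. The following Python sketch iterates the window update (1.14) with the queuing delay held fixed at its equilibrium value q* = α/c, a simplification; the parameter values are assumed for illustration.

alpha, gamma, d, c = 200.0, 0.5, 0.05, 10000.0   # packets, -, seconds, packets/s
q = alpha / c                                    # equilibrium queuing delay
w = 100.0                                        # initial congestion window
for _ in range(100):
    rtt = d + q                                  # RTT = baseRTT + queuing delay
    w = gamma * (d / rtt * w + alpha) + (1 - gamma) * w
print(w, alpha + c * d)                          # both approach 700 packets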

HIV Pathogenesis System

Fig. 1.12 HIV pathogenesis

HIV (Human Immunodeficiency Virus) is the virus that causes AIDS (Acquired Immune Deficiency Syndrome) by infecting helper T-cells of the immune system. Through infection and eventual killing of these T-cells, HIV damages the body's immune system, leading to humoral and cellular immune function loss and making the body susceptible to opportunistic infections. A mathematical model is given as

$$\frac{d}{dt} T(t) = \lambda - dT(t) + rT(t)\left(1 - \frac{T(t)}{T_{\max}}\right) - k_I (1 - n_{rt}) V_I(t) T(t), \quad (1.17)$$
$$\frac{d}{dt} T^*(t) = k_A (1 - n_{rt}) V_I(t-\tau) T(t-\tau) - \delta T^*(t), \quad (1.18)$$
$$\frac{d}{dt} V_I(t) = (1 - n_p) N \delta T^*(t) - c V_I(t), \quad (1.19)$$
$$\frac{d}{dt} V_{NI}(t) = n_p N \delta T^*(t) - c V_{NI}(t), \quad (1.20)$$

where T(t) and T*(t) are the uninfected and infected T-cells, respectively, and V_I(t) and V_{NI}(t) are the infectious and non-infectious viruses, respectively. Here, λ is the source of T-cells from precursors, d is the natural death rate of T-cells, r is the growth rate, and T_max, where dT_max > λ, is the carrying capacity of the T-cell population. The parameter k_I represents the rate of infection of T-cells with free virus, k_A is the rate at which the infected cells become actively infected, δ is the death rate of infected cells, N is the total number of virus particles released by a productively infected cell over its lifespan with mean 1/δ, and c is the viral clearance rate constant. τ represents the time delay between the initial viral entry into a cell and the subsequent viral production. Protease inhibitors, with efficacy 0 ≤ n_p < 1, cause the infected cells to produce non-infectious virus at the rate n_p N. Reverse transcriptase inhibitors prevent the production of infected cells with efficacy 0 ≤ n_rt < 1. This model incorporates the effects of drug therapy, which is depicted in Fig. 1.12. The Eqs. (1.17)–(1.20) and explanations of variables are from [32] © 2009 Elsevier with permission. This model is used to determine whether the infected or uninfected steady states can be destabilized, leading to either transient or sustained oscillations.

1.4 Computation

Stabilizing and optimizing controls in this book are obtained via linear matrix inequalities (LMIs), coupled partial differential Riccati equations, or their combinations. The LMIs appear in the stability analysis in Chap. 2, the non-optimal stabilizing controls in


Chaps. 3 and 4, and the sub-optimal guaranteed cost controls in Chap. 5. The coupled partial differential Riccati equations arise in the optimal finite horizon LQ, LQG, and H∞ controls in Chaps. 6, 7, and 8. Their combinations occur in the receding horizon LQ, LQG, and H∞ controls in Chaps. 6, 7, and 8. They look complicated at first glance but are easy to compute with well-known software packages, fast enough to be used as on-line real-time controls in many cases.

The LMIs are nonlinear and non-smooth, but convex. If an objective function is convex, the problem becomes convex. A global solution of a convex problem always exists and is unique, and it can be found within finite time. If the objective function is linear, which is also convex, the problem becomes a semidefinite program. It is well known that there exist polynomial time algorithms for solving semidefinite programs, which implies that the computational effort to solve a semidefinite program increases only slowly, as a polynomial of the number of problem variables. Semidefinite programs can be numerically solved in about 10 to 100 iterations, where each iteration is a least-squares problem. With modern fast computers, the computation is fast enough to be used in real time. There exist several software packages which enable one to describe LMI problems in a high-level language, such as the MATLAB LMI control toolbox [8], semidefinite programming [31], the MATLAB YALMIP toolbox [18], MATLAB CVX convex programming [10], etc. There are several books on mathematical programming, such as [20, 23, 31, 35, 36].

The coupled partial differential Riccati equations in this book belong to one-point, rather than two-point, boundary value problems. Partial differential equations with one-point boundary values can be solved very easily, like ordinary differential equations with one-point boundary values. The computation algorithms, including Runge–Kutta methods, are common and appear as basic tools in all computing software packages, including MATLAB. The boundary values are given in finite horizon LQ, LQG, and H∞ controls, while they should be found via the LMIs in receding horizon LQ, LQG, and H∞ controls.

Several numerical examples covering important optimal and sub-optimal stabilizing controls in this book are provided with MATLAB codes and simulation results. They can be downloaded from a Springer website, http://extras.springer.com. Numerical examples for non-optimal stabilizing controls are similar to those for sub-optimal stabilizing controls.
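As an illustration of how an LMI feasibility problem is posed for a solver, the following Python sketch uses the CVXPY package (an alternative to the MATLAB tools listed above) to check the Lyapunov inequality AᵀP + PA < 0 with P > 0 for an assumed stable matrix A; it is a minimal sketch, not the book's code.

import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0], [0.0, -3.0]])            # assumed stable matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)] # A'P + PA < 0
prob = cp.Problem(cp.Minimize(0), constraints)      # pure feasibility problem
prob.solve()
print(prob.status, P.value)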

1.5 About the Book and Notations

This book introduces three important feedback controls in the time domain for linear time-delay control systems, i.e. simple and non-optimal stabilizing controls, less complicated sub-optimal guaranteed cost controls, and somewhat complicated


optimal controls on infinite and receding horizons. This book starts with simple subjects and then gradually moves to more complicated ones for better understanding. This book deals first with systems with a single input delay and then with systems with a single state delay. We do not deal with systems with both input and state delays together due to space limits; such systems can be easily understood once systems with a single input delay and those with a single state delay are understood separately. We deal with single delays only and do not deal with multiple delays or distributed delays, for the same reason. State feedback stabilizing controls are introduced before output feedback stabilizing controls. LQ/LQG optimal controls are dealt with before H∞ optimal controls.

This book is organized as follows. In Chap. 1, basic concepts for control design such as control structures and control objectives are introduced. Then, simple and non-optimal stabilizing controls; optimal controls on finite, infinite, and receding horizons; and sub-optimal guaranteed cost controls are introduced briefly. Several advantages and some disadvantages of RHCs are explained. In order to understand time-delay systems, several real processes and their mathematical models are given, some with input and output delays and some with state delays. All notations used in this book are listed in this chapter.

In Chap. 2, in order to be utilized for feedback controls for time-delay systems, stability results for general homogeneous delay systems such as the Krasovskii and Razumikhin theorems are introduced, followed by various stability test methods for linear time-delay systems based on Lyapunov–Krasovskii, Lyapunov–Razumikhin, and discretized state approaches. Robust stability test methods for uncertain linear time-delay systems are also introduced. Mainly single state delayed systems are dealt with, but distributed delayed systems are also briefly covered since the optimal control systems in Chaps. 6, 7, and 8 are of these forms. Constant delays are mainly discussed, while time-varying delays are dealt with to a limited extent. Some useful inequalities for obtaining stabilizing controls are introduced.

In Chap. 3, stabilizing controls for input delayed systems are first discussed for simplicity. In particular, for input delayed control systems, the state predictor method and the reduction transformation method are presented to obtain stabilizing controls, where input delayed systems are transformed into finite-dimensional ordinary systems. Thus well-known methods for ordinary systems can be employed. Then stabilizing controls and robust stabilizing controls for systems with a single state delay are introduced based on the various stability and robust stability results in the previous chapter, such as Lyapunov–Razumikhin, Lyapunov–Krasovskii, and discretized state approaches.

In Chap. 4, the state feedback stabilizing control results of Chap. 3 are extended to output feedback stabilizing controls. For the output feedback control of systems with single input and measurement delays, the Smith predictor is introduced with some discussion of its drawbacks. When the state values are not available, a Luenberger observer is introduced with the reduction transformation. For the output feedback control of systems with a single state delay, Luenberger observers or additional dynamic systems are introduced. They are obtained from the stability criteria developed in Chaps. 2 and 3.


In Chap. 5, alternative approaches, so-called guaranteed cost controls (GCC), are introduced, which are different from the optimal controls in Chaps. 6, 7, and 8. As in the simple and non-optimal stabilizing controls in Chaps. 3 and 4, feedback control structures are assumed a priori with unknown parameters, and then these unknown parameters are sought to meet certain bounds greater than the minimum costs. The computation is much less complicated compared to optimal controls. Guaranteed LQ controls for both input and state delayed systems are first introduced. Then guaranteed H∞ controls for both input and state delayed systems are introduced. Both state feedback and output feedback controls are covered.

In Chap. 6, LQ optimal controls for input delayed systems are first introduced, and then LQ optimal controls for state delayed systems follow, since the former controls are easier than the latter. First, finite horizon controls are dealt with, which are fundamental results mathematically. Since they cannot be used as feedback controls due to their finite horizons, infinite horizon controls are obtained where possible, with their stability properties. Then receding horizon controls, or model predictive controls, are dealt with, which can be used as feedback controls. Terminal conditions to guarantee closed-loop stability are investigated. All of them are state feedback controls.

In Chap. 7, LQG controls use output feedback, while the LQ optimal controls in Chap. 6 require state feedback. When states are not available, state observers or filtered estimates are obtained from inputs and outputs. LQG optimal controls for stochastic input delayed systems are first introduced, and then LQG optimal controls for stochastic state delayed systems follow, since the former controls are easier than the latter. First, finite horizon LQG controls are dealt with, which are fundamental results mathematically. Since they cannot be used as feedback controls due to their finite horizons, infinite horizon LQG controls are obtained where possible, with their stability properties. Then receding horizon LQG controls are dealt with, which can be used as feedback controls. Terminal conditions to guarantee closed-loop stability are investigated.

In Chap. 8, H∞ controls for input delayed systems with disturbances are first introduced, and then H∞ controls for state delayed systems with disturbances follow. First, finite horizon H∞ controls are dealt with, which are fundamental results mathematically. Since they cannot be used as feedback controls due to the inherent requirement of infinite horizons associated with stability properties, infinite horizon H∞ controls are obtained where possible, with their stability properties. Then receding horizon H∞ controls are dealt with, which can be used as feedback controls. Terminal conditions to guarantee closed-loop stability are investigated. These controls are robust against disturbances and thus are considered robust controls. H∞ controls in this chapter cover only state feedback controls due to space limits.

There are several books dealing with subjects on stability, stabilizing controls, and optimizing controls for time-delay systems in the time domain, such as [1, 2, 7, 9, 11–13, 16, 21, 22, 24, 29, 37, 40, 41]. Books dealing with similar subjects in the frequency domain are not listed here since this book does not deal with them. Many topics for time-delay systems are dealt with in the edited books [6, 19, 25, 30],


several survey papers including [27, 38], and proceedings of dedicated workshops, including the IFAC workshops on Time Delay Systems.

Notations

• System
  – Nominal system
$$\dot{x}(t) = Ax(t) + A_1 x(t-h(t)) + Bu(t) + Gw(t),$$
$$y(t) = Cx(t) + C_1 x(t-h(t)) + D_u u(t) + D_w w(t),$$
$$z(t) = C_z x(t) + C_{z,1} x(t-h(t)) + D_{z,u} u(t) + D_{z,w} w(t)$$
  – Uncertain system
$$\dot{x}(t) = [A + D\Delta(t)E]x(t) + [A_1 + D\Delta(t)E_1]x(t-h(t)) + [B + D\Delta(t)E_b]u(t)$$

  x: state
  u: input
  y: measured output
  z: controlled output
  w: noise or disturbance
  Δ(t): norm-bounded time-varying uncertainties
  h(t): time-varying delay
  h: constant delay
• Dimensions
  n: dimension of state x
  m: dimension of input u
  l: dimension of disturbance w
  p: dimension of output y
  q: dimension of desired signal z
  l_w: dimension of uncertain-block input q_w
  m_w: dimension of uncertain-block output p_w
• Time indices
  t₀: initial time
  t_f: terminal time
  t, τ: time variables
  T: horizon length
• Feedback gains
  K: feedback gain matrix for the state
  K₁: feedback gain matrix for the delayed state
  K₂: feedback gain matrix for the delayed input


• Solutions to coupled partial differential Riccati equations. Depending on the time interval, we have three notations:
  S₁(·), S₂(·,·), S₃(·,·,·): solutions to Riccati equations
  W₁(·), W₂(·,·), W₃(·,·,·): solutions to Riccati equations
  P₁(·), P₂(·,·), P₃(·,·,·): solutions to Riccati equations
• Weighting matrices
  Q: weighting matrix of the state
  R: weighting matrix of the input
  F₁, F₂, F₃: weighting matrices of the terminal state trajectory
• Performance criterion. Depending on the variables of interest, we have several notations:
  J
  J(x_{t₀}, u, t₀, t_f)
  J*(x_{t₀}, t₀, t_f)
• States and inputs with reference times
  x(s|t): x(s), where s belongs to [t, t + T]
  u(s|t): u(s), where s belongs to [t, t + T]
  Note that these appear in receding horizon controls.
• Kronecker sum and product of two matrices A and B (illustrated numerically after the notation list):
$$A \oplus B = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}, \qquad A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}$$
• Definiteness of matrices
  P > 0: P is a positive definite matrix
  P < 0: P is a negative definite matrix
  A ≤ B: B − A is a positive semi-definite matrix
• Others
  G(z): transfer function
  I: identity matrix
  p(t): costate in optimization methods
  Rⁿ: n-dimensional real space
  Cⁿ: n-dimensional complex space
  R^{m×n}: m × n matrices with real components
  C^{m×n}: m × n matrices with complex components
  R̄: extended set of real numbers, {R ∪ (+∞) ∪ (−∞)}
  C̄₊: closed right half complex plane


  Aᵀ: transpose of matrix A
  A*: complex conjugate transpose of matrix A
  ‖A‖: spectral norm of matrix A
  L₂[a, b]: set of real square integrable functions on [a, b]
  σ_max(A): maximum singular value of matrix A
  C_{n,h} ≜ C([−h, 0], Rⁿ): Banach space of continuous vector functions mapping [−h, 0] into Rⁿ
  ‖·‖: Euclidean vector norm
  ‖φ‖_c ≜ sup_{−h≤t≤0} ‖φ(t)‖: norm of a function φ ∈ C_{n,h}
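The following small Python sketch illustrates the Kronecker sum and product notation numerically; numpy's kron implements the product, and the sum as defined above is block-diagonal stacking.

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

kron_sum = np.block([[A, np.zeros_like(B)],   # A (+) B = diag(A, B)
                     [np.zeros_like(A), B]])
kron_prod = np.kron(A, B)                     # blocks a_ij * B
print(kron_sum.shape, kron_prod.shape)        # (4, 4) (4, 4)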

References

1. Boukas EK, Liu ZK (2002) Deterministic and stochastic time delay systems. Birkhäuser, Basel
2. Briat C (2014) Linear parameter-varying and time-delay systems. Springer, Berlin
3. Camacho EF, Bordones C (2013) Model predictive control, 2nd edn. Springer, Berlin
4. Chen L, Aihara K (2002) Stability of genetic regulatory networks with time delay. IEEE Trans Circuits Syst I: Fundam Theory Appl 49(5):602–608
5. Crocco L (1951) Aspects of combustion stability in liquid propellant rocket motors part I: fundamentals low frequency instability with monopropellants. J Am Rocket Soc 21(6):163–178
6. Dugard L, Verriest EI (1998) Stability and control of time-delay systems. Springer, Berlin
7. Fridman E (2014) Introduction to time-delay systems. Birkhäuser, Basel
8. Gahinet P, Nemirovskii A (1988) LMI control toolbox: the LMI lab. Syst Control Lett 11(3):167–172
9. Gorecki H, Fuksa S, Grabowski P, Korytowski A (1989) Analysis and synthesis of time delay systems. Wiley, New York
10. Grant M, Boyd S, Ye Y (2008) CVX: MATLAB software for disciplined convex programming. CVX Forum, CVX.com
11. Gu K, Kharitonov VL, Chen J (2003) Stability of time-delay systems. Birkhäuser, Basel
12. Insperger T, Ersal T, Orosz G (eds) (2017) Time delay systems: theory, numerics, applications, and experiments. Springer, Berlin
13. Kharitonov VL (2012) Time-delay systems: Lyapunov functionals and matrices. Birkhäuser, Basel
14. Kim DS, Lee YS, Kwon WH, Park HS (2003) Maximum allowable delay bounds of networked control systems. Control Eng Pract 11(11):1301–1313
15. Koo K, Choi JY, Lee JS (2008) Parameter conditions for global stability of fast TCP. IEEE Commun Lett 12(2):155–157
16. Krasovskii NN (1963) Optimal processes in systems with time lags. In: Proceedings of the 2nd IFAC conference, Basel, Switzerland
17. Kwon WH, Han S (2006) Receding horizon control: model predictive control for state models. Springer, Berlin
18. Löfberg J (2004) YALMIP: a toolbox for modeling and optimization in MATLAB. In: 2004 IEEE international symposium on computer aided control systems design, Taipei, Taiwan
19. Loiseau JJ, Michiels W, Niculescu SI, Sipahi R (2009) Topics in time delay systems: analysis, algorithms and control. Springer, Berlin
20. Luenberger DG (1973) Introduction to linear and nonlinear programming. Addison-Wesley Publishing Company, Boston
21. Mahmoud MS (2000) Robust control and filtering for time-delay systems. Marcel Dekker Inc., New York


22. Malek-Zavarei M, Jamshidi M (1987) Time-delay systems analysis, optimization and applications. Elsevier Science Publication, Amsterdam
23. Nesterov Y, Nemirovskii A (1994) Interior-point polynomial algorithms in convex programming. SIAM, Philadelphia
24. Niculescu SI (2001) Advances in time-delay systems. Springer, London
25. Niculescu SI, Gu K (2012) Advances in time-delay systems. Springer Science & Business Media, New York
26. Pellegrinetti G, Bentsman J (1996) Nonlinear control oriented boiler modeling—a benchmark problem for controller design. IEEE Trans Control Syst Technol 4(1):57–64
27. Richard JP (2003) Time-delay systems: an overview of some recent advances and open problems. Automatica 39(10):1667–1694
28. Ross DW (1971) Controller design for time lag systems via a quadratic criterion. IEEE Trans Autom Control 16(6):664–672
29. Sipahi R, Vyhlídal T, Niculescu SI, Pepe P (2012) Time delay systems: methods, applications and new trends. Springer, Berlin
30. Sun JQ, Ding Q (2013) Advances in analysis and control of time-delayed dynamical systems. World Scientific Publishing Company, Singapore
31. Vandenberghe L, Boyd S (1995) Semidefinite programming. SIAM Rev 38(1):49–95
32. Wang Y, Zhou Y, Wu J, Heffernan J (2009) Oscillatory viral dynamics in a delayed HIV pathogenesis model. Math Biosci 219(2):104–112
33. Wang Z, Gao H, Cao J, Liu X (2008) On delayed genetic regulatory networks with polytopic uncertainties: robust stability analysis. IEEE Trans Nanobioscience 7(2):154–163
34. Wei DX, Jin C, Low SH, Hegde S (2006) Fast TCP: motivation, architecture, algorithms, performance. IEEE/ACM Trans Netw 14(6):1246–1259
35. Wolkowicz H, Saigal R, Vandenberghe L (2012) Handbook of semidefinite programming: theory, algorithms, and applications. Springer Science & Business Media, New York
36. Wu S, Boyd S (1995) Sdpsol: a parser/solver for semidefinite programs with matrix structure. SIAM, Philadelphia
37. Wu M, He Y, She JH (2010) Stability analysis and robust control of time-delay systems. Springer, Berlin
38. Xu S, Lam J (2008) A survey of linear matrix inequality techniques in stability analysis of delay systems. Int J Syst Sci 39(12):1095–1113
39. Zheng F, Cheng M, Gao WB (1995) Variable structure control of time-delay systems with a simulation study on stabilizing combustion in liquid propellant rocket motors. Automatica 31(7):1031–1037
40. Zhang H, Xie L (2007) Control and estimation of systems with input/output delays. Springer, Berlin
41. Zhong QC (2006) Robust control of time-delay systems. Springer, London

Chapter 2

Stability of Time-Delay Systems

2.1 Introduction

In this book, stabilizing and optimizing controls are obtained for time-delay control systems, which include controls as input variables. When feedback controls are used for these purposes, the resulting closed-loop systems become homogeneous time-delay systems without external variables such as controls. These resulting closed-loop systems must be stable to meet the control objectives. Therefore the stability of homogeneous time-delay systems is very important. This chapter is devoted to the study of various stability test methods for homogeneous time-delay systems, which are utilized in subsequent chapters. After stability results for general time-delay systems are briefly introduced, various stability test methods for linear time-delay systems follow, together with robust stability test methods for time-delay systems with model uncertainties. Mainly single state delayed systems are dealt with, but distributed delayed systems are also briefly covered since the optimal control systems in Chaps. 6–8 are of these forms.

This chapter is outlined as follows. In Sect. 2.2, some fundamentals of general time-delay systems are briefly summarized. The existence and uniqueness of solutions of general nonlinear time-delay systems are dealt with. After definitions of stability for time-delay systems are introduced, two important stability results, the Krasovskii theorem and the Razumikhin theorem, are presented. In Sect. 2.3, linear and bilinear matrix inequalities are introduced, together with recent integral inequalities for quadratic functions such as the integral inequality lemma and the Jensen inequality lemmas with and without reciprocal convexity, which are useful when the Krasovskii and Razumikhin theorems are utilized. In Sect. 2.4, various approaches to obtain stability test methods, either delay-independent or delay-dependent, for single state delayed systems are developed. These approaches are based on the Razumikhin theorem, the Krasovskii theorem, and the discretized state description. Some results are extended to systems with time-varying delays. In Sect. 2.5, robust stability test methods for time-delay systems with model uncertainties are introduced based on



the above approaches in Sect. 2.4. Since the Krasovskii theorem gives more flexibility in designing Lyapunov functionals than the Razumikhin theorem, only the approaches based on the Krasovskii theorem are introduced in that section for the robust stability of time-delay systems with model uncertainties. Likewise, some results are extended to systems with both time-varying delays and model uncertainties. In Sect. 2.6, the stability of distributed state delayed systems is briefly introduced, whose forms appear in the optimal controls in Chaps. 6–8. References for the contents of this chapter are listed at the end of the chapter and cited in each subsection for more information and further reading.

2.2 Fundamentals of General Time-Delay Systems

2.2.1 General Time-Delay Systems

A large class of systems for the analysis of time-delay systems is adapted into the functional differential equation (FDE) setting
$$\dot{x}(t) = f(t, x_t) \quad (2.1)$$
with the initial condition
$$x(t_0 + \theta) = \phi(\theta), \quad \theta \in [-h, 0]. \quad (2.2)$$

Since the exact construction of the evolution of the system (2.1) needs information on some non-zero interval, these systems belong to the class of infinite-dimensional systems. The state of the system is an important concept. The state of (2.1)–(2.2) at time t is the trajectory of x on the interval [t − h, t], or equivalently, the element x_t in the Banach space C_{n,h} of continuous functions mapping the interval [−h, 0] into Rⁿ:
$$x_t(\theta) = x(t + \theta), \quad -h \le \theta \le 0.$$
Since x_t is defined on [−h, 0], it is natural to designate the norm of an element x_t ∈ C_{n,h} as
$$\|x_t\|_c \triangleq \sup_{\theta \in [-h, 0]} \|x(t + \theta)\|, \quad (2.3)$$

where ‖x(t + θ)‖ refers to the Euclidean vector norm at time t + θ. For C_{n,h} and ‖x_t‖_c, refer to the notations in Sect. 1.5.

Definition 2.1 If there exists a scalar δ > 0 such that the following conditions hold:
(i) x_t ∈ C_{n,h},
(ii) x(t) satisfies (2.1)–(2.2) for all t ∈ [t₀, t₀ + δ),


then the function x : R → Rⁿ is a solution of the FDE (2.1) with the initial condition (2.2).

The following lemma on the existence of a solution is well known.

Lemma 2.1 If the function f(t, ·) in (2.1) is continuous with respect to both arguments and satisfies the local Lipschitz condition in the second variable, i.e. for any H > 0 there exists a Lipschitz constant L(H) > 0 such that
$$\|f(t, \psi^{(1)}) - f(t, \psi^{(2)})\| \le L(H) \|\psi^{(1)} - \psi^{(2)}\|,$$
then the existence and uniqueness of the solution of (2.1)–(2.2) can be proved, as well as its continuous dependence on the initial value.

The local existence and uniqueness of the solution in Lemma 2.1 can be found in [2, 11, 23]. We briefly summarize some basic notations used throughout the book. With the initial condition φ at t₀:
• x(t₀, φ): the solution of x in (2.1) and (2.2),
• x_t, x_t(t₀, φ): the trajectory of x on the interval [t − h, t],
• x_{t+h}, x_{t+h}(t₀, φ): the trajectory of x on the interval [t, t + h],
• x(t), x(t₀, φ)(t): the instant value of x at time t,
• x(t + h), x(t₀, φ)(t + h): the instant value of x at time t + h.

For example, the following relations hold:
$$x_{t_0}(t_0, \phi) \equiv \phi, \quad x(t_0, \phi)(t_0 + \theta) \equiv \phi(\theta), \quad \text{for } -h \le \theta \le 0.$$

2.2.2 Stability of General Time-Delay Systems

We introduce stability concepts for (2.1)–(2.2). Assume additionally that the system (2.1)–(2.2) admits a trivial solution x(t₀, φ)(t) = 0, i.e. f(t, φ) = 0 for t ≥ t₀ when φ = 0 or x(s) = 0 for s ∈ [t₀ − h, t₀]. The function f is supposed to be continuous with respect to both arguments and to satisfy enough smoothness conditions to ensure that the solution x(t₀, φ)(t) is continuous in (t₀, φ, t) in the domain of definition of the function. We generalize Lyapunov's second method for ordinary differential equations.

Definition 2.2 For the system (2.1)–(2.2), the trivial solution x(t₀, φ)(t) = 0 is said to be stable if for any t₀ ∈ R and any ε > 0, there exists a δ = δ(t₀, ε) > 0 such that ‖x_{t₀}‖_c < δ implies ‖x(t₀, φ)(t)‖ < ε for t ≥ t₀. It is said to be asymptotically stable if it is stable, and for any t₀ ∈ R and any ε > 0, there exists a δ_a = δ_a(t₀, ε) > 0 such that ‖x_{t₀}‖_c < δ_a implies lim_{t→∞} x(t₀, φ)(t) = 0. It is said to be uniformly stable if it is stable and δ(t₀, ε) can be chosen independently of t₀. It is uniformly asymptotically stable if it is uniformly stable and there exists a δ_a > 0 such that for any η > 0,


there exists a T = T(δ_a, η) such that ‖x_{t₀}‖_c < δ_a implies ‖x(t₀, φ)(t)‖ < η for t ≥ t₀ + T. It is globally uniformly asymptotically stable if it is uniformly asymptotically stable and δ_a can be arbitrarily large.

We give sufficient conditions for the stability of the trivial solution x(t₀, φ)(t) = 0 of the FDE (2.1)–(2.2).

1. Lyapunov–Krasovskii Approach

If V : R × C_{n,h} → R is continuous and x(t₀, φ)(t) is the solution of the FDE (2.1)–(2.2), we define
$$\dot{V}(t, x_t) \triangleq \limsup_{r \to 0^+} \frac{1}{r} \left[ V(t + r, x_{t+r}) - V(t, x_t) \right].$$

Theorem 2.1 (Krasovskii Theorem) Suppose u, v, w : R̄₊ → R̄₊ are continuous nondecreasing functions, u(s) and v(s) are positive for s > 0, and u(0) = v(0) = 0. If there is a continuous functional V : R × C_{n,h} → R such that
$$u(\|x(t)\|) \le V(t, x_t) \le v(\|x_t\|_c), \quad (2.4)$$
$$\dot{V}(t, x_t) \le -w(\|x(t)\|), \quad (2.5)$$
then the solution x(t) of (2.1)–(2.2) is uniformly stable. If w(s) > 0 for s > 0, then it is uniformly asymptotically stable. If u(s) → ∞ as s → ∞, then it is globally uniformly asymptotically stable.

Proof For any ε > 0, there is a δ = δ(ε), 0 < δ < ε, such that v(δ) < u(ε). If ‖x_{t₀}‖_c < δ, then V̇(t, x_t) ≤ 0 for all t ≥ t₀ and the inequalities on V(t, x_t) imply
$$u(\|x(t)\|) \le V(t, x_t) \le V(t_0, x_{t_0}) \le v(\delta) < u(\varepsilon), \quad t \ge t_0.$$
Therefore, ‖x(t)‖ < ε for all t ≥ t₀. Since ‖x_t‖_c < δ < ε, this proves uniform stability. The rest of the proof can be found in [11], which completes the proof. □

2. Lyapunov–Razumikhin Approach

If V : R × Rⁿ → R is a continuous function, then V̇(t, x(t)), the derivative of V along the solution x(t) of (2.1)–(2.2), is defined as
$$\dot{V}(t, x(t)) \triangleq \limsup_{r \to 0^+} \frac{1}{r} \left[ V(t + r, x(t + r)) - V(t, x(t)) \right].$$

Theorem 2.2 (Razumikhin Theorem) Suppose u, v, w : R̄₊ → R̄₊ are continuous nondecreasing functions, u(s) and v(s) are positive for s > 0, u(0) = v(0) = 0, and v is strictly increasing. If there is a continuous function V : R × Rⁿ → R such that for t ≥ t₀
$$u(\|x(t)\|) \le V(t, x(t)) \le v(\|x(t)\|), \quad (2.6)$$
$$\dot{V}(t, x(t)) \le -w(\|x(t)\|) \ \text{if} \ V(t + \theta, x(t + \theta)) \le V(t, x(t)) \ \text{for} \ \theta \in [-h, 0], \quad (2.7)$$


then the solution x(t) of (2.1)–(2.2) is uniformly stable. In addition, if w(s) > 0 for s > 0, and there is a continuous nondecreasing function p(s) > s for s > 0 such that (2.7) is strengthened to
$$\dot{V}(t, x(t)) \le -w(\|x(t)\|) \ \text{if} \ V(t + \theta, x(t + \theta)) < p(V(t, x(t))) \ \text{for} \ \theta \in [-h, 0],$$
then it is uniformly asymptotically stable. If u(s) → ∞ as s → ∞, then it is globally uniformly asymptotically stable.

Proof Choose a Lyapunov functional
$$\bar{V}(t, x_t) \triangleq \sup_{-h \le \theta \le 0} V(t + \theta, x(t + \theta))$$
for t ∈ R, x_t ∈ C_{n,h}; then there is a θ₀ in [−h, 0] such that V̄(t, x_t) = V(t + θ₀, x(t + θ₀)) and either θ₀ = 0, or θ₀ < 0 and V(t + θ, x(t + θ)) < V(t + θ₀, x(t + θ₀)) for θ₀ < θ ≤ 0. If θ₀ < 0, then for r > 0 sufficiently small, V̄(t + r, x_{t+r}) = V̄(t, x_t) and thus V̄̇(t, x_t) = 0. If θ₀ = 0, then V̄̇(t, x_t) ≤ 0 by (2.7). Also, (2.6) implies that
$$u(\|x(t)\|) \le \bar{V}(t, x_t) \le v(\|x_t\|_c)$$
for t ∈ R, x_t ∈ C_{n,h}. Then, Theorem 2.1 implies the uniform stability of the solution x(t) = 0. The rest of the proof can be found in [11], which completes the proof. □

The Lyapunov–Krasovskii approach is relatively complicated compared with the Lyapunov–Razumikhin approach, but there is more freedom in the Lyapunov–Krasovskii approach because one can choose various Lyapunov functionals. The above results will be used for obtaining stabilizing controls for time-delay systems in subsequent chapters. The Krasovskii theorem in Theorem 2.1 and the Razumikhin theorem in Theorem 2.2 can be found in [13, 27, 28], respectively. For rigorous and complete proofs of the Krasovskii theorem and the Razumikhin theorem, we refer to the book [11].

2.3 Inequalities for Stability

The Lyapunov–Krasovskii and Lyapunov–Razumikhin approaches often used in this book involve inequalities. We introduce useful inequalities which can be combined with the above methods. Generally speaking, stability conditions for time-delay systems can be classified into two types: delay-independent stability and delay-dependent stability conditions. Generally, a delay-independent approach is considered more conservative than a delay-dependent one, since the latter contains more delay information such as the size or the bound of delays and their derivatives. To obtain


delay-dependent stability conditions which can be represented in a special form called linear matrix inequalities, some inequalities for an integral term of quadratic functions are often necessary. In this section, we introduce linear matrix inequalities as well as some other useful inequalities.

2.3.1 Matrix Inequalities

Linear Matrix Inequalities

Many problems in linear system theory and robust control theory can be formulated as convex optimizations involving linear matrix inequalities (LMIs). An LMI has the form
$$M(x) \triangleq M_0 + \sum_{i=1}^m x_i M_i \ge 0,$$

where the symmetric matrices Mᵢ ∈ Rⁿˣⁿ for 0 ≤ i ≤ m are given and the variable x = [x₁, ..., x_m]ᵀ is in Rᵐ. The LMI constraint is nonlinear and nonsmooth with respect to x, but convex in x, i.e. if M(u) ≥ 0 and M(v) ≥ 0, then M(αu + (1 − α)v) ≥ 0 for α ∈ [0, 1]. As an example, given matrices A, B, C, and D, the matrix inequalities
$$\begin{bmatrix} PA + A^T P & PB - C^T \\ B^T P - C & -(D + D^T) \end{bmatrix} \le 0, \quad P > 0,$$
are LMIs over the matrix P. Actually, the above LMIs can be rewritten in the original LMI form. Several standard LMI problems occur in various applications. A semidefinite program is defined as the minimization of a linear objective subject to positive semidefinite LMIs, i.e.

minimize cᵀx subject to M(x) ≥ 0,

where c ∈ Rᵐ and Mᵢ, i = 0, 1, ..., m, are given. A special case of semidefinite programs, which is referred to as an LMI feasibility problem, is to find a feasible x subject to positive semidefinite LMIs, i.e.

find x satisfying M(x) ≥ 0.

The semidefinite program belongs to the class of convex optimization problems because the linear objective is convex and the LMI constraint set, i.e. {x | x ∈ Rᵐ, M(x) ≥ 0}, is convex, too. Due to the convexity property, there exist efficient methods, such as polynomial time algorithms, for solving semidefinite programs. They can be solved numerically in a small number of iterations, where each iteration is a least-squares problem. There are efficient algorithms which yield globally optimal solutions. In particular, for a given algorithm accuracy, the computational effort to solve a


semidefinite program increases very slowly, as a polynomial of the problem size, rather than as an exponential of the problem size.

Bilinear Matrix Inequalities

An optimization problem involving bilinear matrix inequalities (BMIs) is an extension of the semidefinite program. The BMI has the form
$$M(x, y) \triangleq M_{00} + \sum_{i=1}^m x_i M_{i0} + \sum_{j=1}^n y_j M_{0j} + \sum_{i=1}^m \sum_{j=1}^n x_i y_j M_{ij} \ge 0,$$

where the symmetric matrices Mᵢⱼ ∈ R^{p×p} for i = 0, 1, ..., m and j = 0, 1, ..., n are given and the variables are x ∈ Rᵐ and y ∈ Rⁿ. The BMI problem is defined as

minimize cᵀx + dᵀy subject to M(x, y) ≥ 0.

The BMI framework makes it possible to address several control synthesis problems. For example, a static output feedback control and a decentralized control can be included in the control specifications. BMI problems are NP-hard, but there are many heuristic methods used to find local solutions. In particular, there is an alternating algorithm that utilizes efficient LMI tools to solve the BMI problem: for fixed x, finding y satisfying the BMI problem is a semidefinite program, and similarly, for fixed y, finding x satisfying the BMI problem is another semidefinite program. The contents of this subsection can be found in [1, 18, 29, 30].
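The alternating algorithm mentioned above can be sketched as follows for a tiny BMI with scalar x and y, solved with CVXPY; the matrices, variable bounds, and iteration count are assumptions for illustration, and the method is a local heuristic rather than a guaranteed solver.

import numpy as np
import cvxpy as cp

M00 = np.diag([1.0, -0.5]); M10 = np.eye(2)          # assumed symmetric data
M01 = np.diag([0.5, 1.0]);  M11 = np.diag([0.1, 0.2])

def sdp_step(fixed, solve_for_y):
    # with one variable fixed, M(x, y) is affine in the other: an SDP
    v, t = cp.Variable(), cp.Variable()
    if solve_for_y:
        M = M00 + fixed * M10 + v * M01 + (fixed * v) * M11
    else:
        M = M00 + v * M10 + fixed * M01 + (fixed * v) * M11
    M = 0.5 * (M + M.T)                              # keep the expression symmetric
    cp.Problem(cp.Maximize(t),
               [M >> t * np.eye(2), cp.abs(v) <= 10]).solve()
    return float(v.value), float(t.value)

x = 0.0
for _ in range(5):                 # alternate between the two semidefinite programs
    y, t = sdp_step(x, solve_for_y=True)
    x, t = sdp_step(y, solve_for_y=False)
print(x, y, t)                     # t >= 0 certifies that M(x, y) >= 0 was found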

2.3.2 Integral Inequalities for Quadratic Functions In this subsection, we introduce some useful inequalities for an integral term of quadratic functions. 1. Integral Inequality Lemma To exploit a delay dependency in the stability condition, an integral inequality lemma can be obtained in the following lemma. Lemma 2.2 Assume that an integrable function w(u) ∈ Rn w is defined on an interval [a, b], and ξ ∈ Rn ξ is any function with an appropriate dimension. Then, for a matrix Z > 0, and M, the following inequality holds:  − a

b

 w T (α)Z w(α)dα ≤ (b − a)ξ T (Z M + I )T Z −1 (Z M + I ) ξ

 b +2ξ T (Z M + I )T w(α)dα . (2.8) a

34

2 Stability of Time-Delay Systems

Proof Since Z > 0, we have 

b



a

  T  ξ ξ (Z M + I )T Z −1 (Z M + I ) (Z M + I )T dα ≥ 0. ZM + I Z w(α) w(α) 

The remaining proof is straightforward and thus omitted. The integral inequality lemma has been modified as the following form.

Lemma 2.3 Assume that an integrable function w(u) ∈ Rn w is defined on an interval [a, b], and ξ ∈ Rn ξ is any function with an appropriate dimension. Then, for matrices X, Y and Z , the following inequality holds:  −

b

 w (α)Z w(α)dα ≤ (b − a)ξ X ξ + 2ξ Y T

T

T

a

b

w(α)dα ,

(2.9)

a

where 

 X Y ≥ 0. YT Z

(2.10)

Proof If the condition (2.10) holds, then we have  a

b



ξ w(α)

T 

X Y YT Z



 ξ dα ≥ 0. w(α) 

The remaining proof is straightforward and thus omitted.

The integral inequality lemma in Lemma 2.2 was first given in [24] and the modified one in Lemma 2.3 can be found in [20]. 2. Jensen Inequality Lemma It can be seen that Lemma 2.3 introduces additional matrix variables X and Y in addition to Z in the quadratic integral function. As a way of reducing the additional matrix variables, the Jensen inequality lemma can be obtained in the following lemma. Lemma 2.4 Assume that an integrable function w(u) ∈ Rn w is defined on an interval [a, b]. Then for a matrix Z > 0, the following inequality holds:  − a

b

w T (α)Z w(α)dα ≤ −

1 b−a



b

T w(α)dα

a

 Z

b

w(α)dα .

a

Proof By the Schur complement, for a ≤ α ≤ b, we can obtain  T  w (α)Z w(α) w T (α) ≥ 0. w(α) Z −1

(2.11)

2.3 Inequalities for Stability

35

By integrating (2.11) from a to b, we have ⎡ ⎣

b a

w T (α)Z w(α)dα   b w(α)dα a



b a

w(α)dα

T ⎤ ⎦ ≥ 0.

(b − a)Z −1

(2.12)

Applying the Schur complement to (2.12) gives the lemma. For the Schur complement, refer to the definition in Appendix A.1. This completes the proof.  Since Lemma 2.4 directly relaxes the integral term without additional matrix variables, Lemma 2.4 has been applied to various time-delay systems. The Jensen inequality for quadratic functions can be found in [8]. 3. Jensen Inequality Lemma with Reciprocal Convexity Let us consider the following term 

−h 1

−h 2

 w T (α)Z w(α)dα =

−h 1

−h

 w T (α)Z w(α)dα +

−h −h 2

w T (α)Z w(α)dα, (2.13)

where an unknown delay h is on an interval [h 1 , h 2 ] and h 12 = h 2 − h 1 . Apply the Jensen inequality lemma to the integral terms in the right-hand side of (2.13), then we have  −

−h 1 −h 2

w T (α)Z w(α)dα ≤ − −

h 12 h − h1 h 12 h2 − h



−h 1 −h



−h −h 2

T w(α)dα

 Z

T w(α)dα

 Z

−h 1 −h −h

−h 2

w(α)dα

w(α)dα . (2.14)

Since h is the unknown delay, the terms in the right-hand side of (2.14) should be transformed as a solvable form. To do this, the following reciprocal convexity approach can be obtained. Lemma 2.5 For any vectors x1 , x2 , matrices Z > 0, S, and real scalars α ≥ 0, β ≥ 0 satisfying α + β = 1, the following inequality holds: −

 T    1 1 T Z S x1 x , x1 Z x1 − x2T Z x2 ≤ − 1 x2 S T Z x2 α β

(2.15)

subject to 

 Z S ≥ 0. ST Z

(2.16)

36

2 Stability of Time-Delay Systems

Proof If (2.16) holds, we have ⎡

⎤T

β x α 1 ⎦ ⎣  − α − β x2



⎤ ⎡  β Z S ⎣ α x1 ⎦ ≤ 0. ST Z − αβ x2

Then, 1 1 α+β T α+β T − x1T Z x1 − x2T Z x2 = − x1 Z x1 − x2 Z x2 α β α β β α = −x1T Z x1 − x1T Z x1 − x2T Z x2 − x2T Z x2 α β T T T T T ≤ −x1 Z x1 − x1 Sx2 − x2 Z x2 − x2 S x1 . 

This completes the proof.

1 Let us reconsider the terms in (2.14). If we define h−h = α and hh2 −h = β, then h 12 12 α + β = 1. Thus, we can easily apply Lemma 2.5 to (2.14), which gives

 − h 12

−h 1 −h 2

 −h

w(α)dα w (α)Z w(α)dα ≤ − −h −h w(α)dα −h 2 1

T 

T

Z S ST Z

   −h 1 w(α)dα −h , −h −h 2 w(α)dα (2.17)

subject to (2.16). As a result, the reciprocal convexity approach has become an essential technique to derive delay-dependent stability conditions for systems with time-varying delays. In this book, we will only deal with the approaches based on the integral inequality lemma for the stability and control problems of time-delay systems because of the space limit of this book although some recent inequalities may be less conservative than the integral inequality lemma. We will provide additional contents in Appendix C. The Jensen inequality lemma with the reciprocally convexity approach in Lemma 2.5 was first given in [26].

2.4 Stability of State Delayed Systems Consider a linear system with a single delay given by x(t) ˙ = Ax(t) + A1 x(t − h)

(2.18)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0], where x(t) ∈ Rn is the state and h denotes the constant delay.

2.4 Stability of State Delayed Systems

37

2.4.1 Lyapunov–Razumikhin Approach We use the Razumikhin theorem to obtain a simple stability condition using a Lyapunov function V (x(t)) = x(t)T P x(t),

(2.19)

where P > 0. 1. Delay-Independent Stability Theorem 2.3 If there exist a scalar α > 0, and a matrix P > 0 such that 

 P A + A T P + αP P A1 < 0, A1T P −αP

(2.20)

then the system (2.18) is asymptotically stable. Proof Choose a Lyapunov function V as (2.19). Since (2.20) implies P > 0, it holds that σmin (P)x(t)2 ≤ V (x(t)) ≤ σmax (P)x(t)2 , where σmin (P) and σmax (P) denote the minimum and maximum singular values of a matrix P, respectively. Now consider the derivative of V along the trajectory of the system such as V˙ (x(t)) = 2x T (t)P{Ax(t) + A1 x(t − h)}. Whenever xt satisfies V (x(t + θ)) < pV (x(t)) for all h ≤ θ ≤ 0 and for some p > 1, we can conclude that, for any α > 0, V˙ (x(t)) ≤ 2x T (t)P{Ax(t) + A1 x(t − h)} + α{ px T (t)P x(t) − x T (t − h)P x(t − h)}    T  x(t) P A + A T P + α p P P A1 x(t) . = A1T P −αP x(t − h) x(t − h) If the condition (2.20) holds, then one can always find an infinitesimally small constant β > 0 such that 

   αP 0 P A + A T P + αP P A1 < −β < 0. A1T P −αP 0 0

38

2 Stability of Time-Delay Systems

Therefore, the condition (2.20) says that one can always construct p = 1 + β > 1 such that   P A + A T P + α p P P A1 < 0, A1T P −αP which also guarantees the existence of  > 0 such that V˙ (x(t)) ≤ −



x(t) x(t − h)

T 

 x(t) ≤ x(t)2 . x(t − h) 

This completes the proof.

This stability criterion does not depend on the delay h. Therefore, it is obvious that if the stability criterion is satisfied, then the system is asymptotically stable for an arbitrary delay h. If the delay of the system is small, such criterion often gives very conservative stability results. 2. Delay-Dependent Stability Consider again the system (2.18). With the observation that  x(t − h) = x(t) −

0

−h

 x(t ˙ + θ)dθ = x(t) −

0

−h

{Ax(t + θ) + A1 x(t + θ − h)}dθ

for t ≥ h, we can write the system (2.18) as  x(t) ˙ = (A + A1 )x(t) − x(t) = ψ(t),

0 −h

{A1 Ax(t + θ) + A1 A1 x(t + θ − h)}dθ,

where ψ(t) is the solution of (2.18) for t ∈ [t0 , t0 + h] and φ(t) for t ∈ [t0 − h, t0 ]. Since this is a time-invariant system, we can shift the initial time to write it in a more standard form  0 ¯ ¯ A(θ)y(t + θ)dθ, (2.21) y˙ (t) = Ay(t) + −2h

where 

−A1 A for θ ∈ [−h, 0] ¯ = A¯ = A + A1 , A(θ) −A1 A1 for θ ∈ [−2h, −h) with the initial condition y(θ) = ψ(θ) for θ ∈ [−2h, 0]. The process of transforming a system (2.18) to one represented by (2.21) is known as the model transformation. It is known that the stability of the system represented

2.4 Stability of State Delayed Systems

39

by (2.21) implies the stability of the original system. However, the reverse is not necessarily true because of the time-shifting of the initial condition. Theorem 2.4 If there exist real scalars α0 > 0, α1 > 0, matrices P > 0, R0 , and R1 such that (2.22) P(A + A1 ) + (A + A1 )T P + h(R0 + R1 ) < 0, 

   α0 P − R0 −P A1 A α1 P − R1 −P A1 A1 < 0, < 0, −A1T A1T P −α1 P −A T A1T P −α0 P

(2.23)

then the system (2.18) is asymptotically stable. Proof Let us consider a Lyapunov function V (y(t)) = y(t)T P y(t) for the system (2.21), which clearly satisfies σmin (P)y(t)2 ≤ V (y(t)) ≤ σmax (P)y(t)2 . Also, let p > 1 and choose the piecewise constant functions  α0 for − h ≤ θ < 0 α(θ) = , α1 for − 2h ≤ θ < −h

 R(θ) =

R0 for − h ≤ θ < 0 . R1 for − 2h ≤ θ < −h

Then, whenever it is satisfied that V (y(t + θ)) < pV (y(t)) for all −2h ≤ θ ≤ 0 we can calculate   T ˙ ¯ V (y(t)) = 2y (t)P Ay(t) + 

¯ ≤ 2y (t)P Ay(t) +



0 −2h 0

T

 +

0 −2h

−2h

+

0 −2h

¯ A(θ)y(t + θ)dθ

 



α(θ) py T (t)P y(t) − y T (t + θ)P y(t + θ) dθ

  = y T (t) P A¯ + A¯ T P + 

¯ A(θ)y(t + θ)dθ



0

 R(θ)dθ y(t)

−2h

  T  ¯ y(t) y(t) pα(θ)P − R(θ) P A(θ) dθ y(t + θ) −α(θ)P y(t + θ) A¯ T (θ)P

= y T (t){P(A + A1 ) + (A + A1 )T P + h(R0 + R1 )}y(t)  T    0 y(t) pα0 P − R0 −P A1 A y(t) + dθ −A T A1T P −α0 P y(t + θ) −h y(t + θ)   T   −h  y(t) y(t) pα1 P − R1 −P A1 A1 dθ. + −A1T A1T P −α1 P y(t + θ) −2h y(t + θ)

40

2 Stability of Time-Delay Systems

With the above expression and the fact that p can be arbitrary close to 1, (2.22) and (2.23) imply V˙ (y(t)) ≤ −y(t)2 for some sufficiently small  > 0. And the stability of the system (2.21) implies the asymptotic stability of the original system. This completes the proof.  The delay-independent and delay-dependent stability conditions based on the Lyapunov–Razumikhin approach in Theorems 2.3 and 2.4 in this subsection can be found as in [21, 22].

2.4.2 Lyapunov–Krasovskii Approach 1. Delay-Independent Stability The simplest stability condition can be obtained by using a Lyapunov functional 

t

V (xt ) = x T (t)P x(t) +

x T (α)Qx(α)dα,

(2.24)

t−h

where P > 0 and Q > 0. Then, the time derivative of V along the system trajectory is V˙ (xt ) =



 T   P A + A T P + Q P A1 x(t) x(t) . A1T P −Q x(t − h) x(t − h)

(2.25)

It is clear that V˙ (xt ) ≤ −x(t)2 for some sufficiently small  > 0 if the matrix in expression (2.25) is negative definite. Thus we can conclude the following result. Theorem 2.5 If there exist matrices P > 0, and Q > 0 such that 

 P A + A T P + Q P A1 < 0, A1T P −Q

(2.26)

then the system (2.18) is asymptotically stable. Proof Choose a Lyapunov functional as (2.24). Note that (2.26) implies σmin (P)x(t)2 ≤ V (x) ≤ σmax (P)x(t)2 + hσmax (Q) sup x(t + θ)2 −h≤θ≤0

≤ (σmax (P) +

hσmax (Q))xt 2c .

2.4 Stability of State Delayed Systems

41

Also, for some sufficiently small  > 0. (2.26) implies that V˙ (xt ) ≤ −x(t)2 in view of (2.25). Therefore, the system is asymptotically stable, which completes the proof.  The delay-independent stability condition based on the Lyapunov–Krasovskii approach in Theorem 2.5 can be found as in [21, 22]. 2. Delay-Dependent Stability Whereas the delay-independent stability condition based on the Krasovskii theorem employs the two terms in the Lyapunov functional (2.24), where the first term and the second term are associated with the time behavior and the delay behavior, respectively, we need more terms to extract the delay term h in the stability. Let us use a Lyapunov functional V (xt ) = V1 (t) + V2 (t) + V3 (t),

(2.27)

where Vi (t) denote V1 (t) = x T (t)P x(t),  t x T (α)Qx(α)dα, V2 (t) = V3 (t) =

t−h  0 t −h

x˙ T (β)S x(β)dβdα ˙

(2.28) (2.29) (2.30)

t+α

with P > 0, Q > 0, and S > 0, which provides the following theorem. Theorem 2.6 If there exist matrices P > 0, Q > 0, S > 0, Yi j , and Σ such that Σ(Ae1T + A1 e2T − e3T ) + (Ae1T + A1 e2T − e3T )T Σ T + hY11 + Y12 (e1 − e2 )T T +(e1 − e2 )Y12 + e3 Pe1T + e1 Pe3T + e1 Qe1T − e2 Qe2T + he3 Se3T < 0,(2.31)   Y11 Y12 ≥ 0, S − Y22 ≥ 0, (2.32) T Y12 Y22

where ei ∈ R3n×n are defined as       e1T = I 0 0 , e2T = 0 I 0 , e3T = 0 I 0 , then the system (2.18) is asymptotically stable.

(2.33)

42

2 Stability of Time-Delay Systems

 T Proof For the system (2.18), let us define χ(t) as χ(t)  x T (t) x T (t − h) x˙ T (t) . Then, the system (2.18) can be written as 0 = (Ae1T + A1 e2T − e3T )χ(t). Let us think about the time derivative of the Lyapunov functional (2.27) such as V˙1 (t) = 2 x˙ T (t)P x(t) = 2χT (t)e3 Pe1T χ(t), V˙2 (t) = x T (t)Qx(t) − x T (t − h)Qx(t − h) = χT (t){e1 Qe1T − e2 Qe2T }χ(t),  t  t V˙3 (t) = h x˙ T (t)S x(t) ˙ − x˙ T (α)S x(α)dα ˙ = hχT (t)e3 Se3T χ(t) − x˙ T (α)S x(α)dα. ˙ t−h

t−h

Furthermore, using the integral inequality lemma in Lemma 2.3, for 

 Y11 Y12 ≥ 0, T Y12 Y22

we have  0≤

t t−h



χ(t) x(α) ˙

T 

Y11 Y12 T Y12 Y22



 χ(t) dα x(α) ˙



T }χ(t) + = χT (t){hY11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12

t

x˙ T (α)Y22 x(α)dα, ˙

t−h

and the time derivative of V (xt ) can be upper-bounded by the following quantities: T V˙ (t) ≤ χT (t){hY11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12

+e3 Pe1T + e1 Pe3T + e1 Qe1T − e2 Qe2T + he3 Se3T }χ(t)  t  t T + x˙ (α)Y22 x(α)dα ˙ − x˙ T (α)S x(α)dα. ˙ t−h

t−h

Finally, we now remove the constraints of the model dynamics itself in (2.18) by introducing free variables Σ such as 0 ≡ χT (t)Σ(Ae1T + A1 e2T − e3T )χ(t). This completes the proof.



The delay-dependent stability condition in Theorem 2.6 can be found in [20] and similar restrictive results can be found in [19].

2.4 Stability of State Delayed Systems

43

2.4.3 Discretized State Approach A standard approach based on the Krasovskii theorem employs at least three terms. In the Lyapunov functional (2.27), the first term (2.28) and the second term (2.29) are associated with the time behavior and the delay behavior, respectively. The last term (2.30) is required to provide the delay bound h¯ in the condition, which produces a delay-dependent stability criterion. To improve the maximal upper bound ¯ one could include several various high-order integral terms. Alternatively, the h, discretization of the state such as T  h xi,N (t)  x T (t) x T (t − τ ) · · · x T (t − iτ ) , τ  N

(2.34)

¯ might also improve the maximal upper bound h. Theorem 2.7 If there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S > 0, Yi j , and Ni such that 

   T    T Π1 e0 Pe0T N1 e0 A T + e N A1T N1 e0 A T + e N A1T + < 0, (2.35) + N2 N2 −e0 −e0 e0 Pe0T Π2   Y11 Y12 ≥ 0, S − Y22 ≥ 0, (2.36) T Y Y12 22

where Π1 and Π2 are defined as T , Π1 = E 0 Q 0 E 0T − E N Q 0 E NT + τ Y11 + Y12 (E 0 − E N )T + (E 0 − E N )Y12 T T T Π2 = E 0 Q 1 E 0 − E N Q 1 E N + τ E 0 S E 0 ,  T  T ∈ Rn(N +1)×n N , E 0 = I 0 ∈ Rn(N +1)×n N , E N = 0 I  T  T e0 = I 0 · · · 0 ∈ Rn(N +1)×n , e N = 0 · · · 0 I ∈ Rn(N +1)×n ,

(2.37) (2.38) (2.39) (2.40)

then the system (2.18) is asymptotically stable. Proof The system (2.18) can be written as 0 = (Ae0T + A1 e TN )x N ,N (t) − e0T x˙ N ,N (t). Let us choose a Lyapunov functional as follows. V (t) = V1 (t) + V2 (t) + V3 (t) + V4 (t), where Vi (t) denote

(2.41)

44

2 Stability of Time-Delay Systems  V1 (t) = x T (t)P x(t), V2 (t) = 

V3 (t) =

t

x NT −1,N (α)Q 0 x N −1,N (α)dα,

t−τ t t−τ



x˙ NT −1,N (α)Q 1 x˙ N −1,N (α)dα, V4 (t) =

0 −τ



t t+α

x˙ NT −1,N (β)S x˙ N −1,N (β)dβdα

with P > 0, Q 0 > 0, Q 1 > 0, and S > 0. Take the time derivative of the Lyapunov functional such as V˙1 (t) = 2 x˙ T (t)P x(t) = 2 x˙ NT ,N (t)e0 Pe0T x N ,N (t), V˙2 (t) = x NT −1,N (t)Q 0 x N −1,N (t) − x NT −1,N (t − τ )Q 0 x N −1,N (t − τ ) = x NT ,N (t){E 0 Q 0 E 0T − E N Q 0 E NT }x N ,N (t),

V˙3 (t) = x˙ NT −1,N (t)Q 1 x˙ N −1,N (t) − x˙ NT −1,N (t − τ )Q 1 x˙ N −1,N (t − τ ) = x˙ NT ,N (t){E 0 Q 1 E 0T − E N Q 1 E NT }x˙ N ,N (t),  t V˙4 (t) = τ x˙ NT −1,N (t)S x˙ N −1,N (t) − x˙ NT −1,N (α)S x˙ N −1,N (α)dα t−τ  t T T = τ x˙ N ,N (t)E 0 S E 0 x˙ N ,N (t) − x˙ NT −1,N (α)S x˙ N −1,N (α) dα, t−τ

where E 1 and E 2 as defined in (2.39). Furthermore, for  Y11 Y12 , 0≤ T Y12 Y22 

we have  0≤



t t−τ

x N ,N (t) x˙ N −1,N (α)

T 

Y11 Y12 T Y12 Y22



 x N ,N (t) dα x˙ N −1,N (α)

T = x N ,N (t){τ Y11 + Y12 (E 0 − E N )T + (E 0 − E N )Y12 }x N ,N (t)  t + x˙ NT −1,N (α)Y22 x˙ N −1,N (α)dα, t−τ

and the time derivative of V (t) can be upper-bounded by the following quantities V˙ (t) ≤ V˙1 (t) + V˙2 (t) + V˙3 (t) + V˙4 (t) + x N ,N (t){τ Y11 + Y12 (E 0 − E N )T  t T +(E 0 − E N )Y12 }x N ,N (t) + x˙ NT −1,N (α)Y22 x˙ N −1,N (α)dα t−τ

 T   Π1 e0 Pe0T x N ,N (t) x N ,N (t) = e0 Pe0T Π2 x˙ N ,N (t) x˙ N ,N (t)  t x˙ NT −1,N (α)(S − Y22 )x˙ N −1,N (α)dα, + 

t−τ

2.4 Stability of State Delayed Systems

45

where the Π1 and Π2 entries are defined in (2.38). We now remove the constraints of the model dynamics itself in (2.18) by introducing free variables N1 and N2 such as T      x N ,N (t) N1  T x N ,N (t) T T Ae + A e −e . 0= 1 N 0 0 N2 x˙ N ,N (t) x˙ N ,N (t) 

This completes the proof.



The following example shows the maximum allowable time delays obtained by Theorem 2.5 for various discretization numbers. Example 2.1 Consider the system (2.18) for the following system matrices: 

   −2.0 0.0 −1.0 0.0 A= , A1 = . 0.0 −0.9 −1.0 −1.0

(2.42)

The maximum allowable time delays h for different discretization numbers N = 1, 2, 5, and 10 are 4.47, 5.71, 6.09, and 6.15, respectively. The delay-dependent stability condition based on the discretized state approach in Theorem 2.7 is the special case of the results in [7, 9, 12].

2.4.4 *Extension to Systems with Time-Varying Delays Consider a linear system with a time-varying delay given by x(t) ˙ = Ax(t) + A1 x(t − h(t))

(2.43)

¯ 0] and a timewith the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h¯ for θ ∈ [−h, ¯ We consider two cases where h(t) is a general delay varying delay h(t) ∈ [0, h]. ˙ ≤ μ. and where h(t) is differentiable with bounded derivative such as h(t) ¯ For a general delay case, the following theorem can be obtained by applying the Lyapunov– Krasovskii approach. Theorem 2.8 If there exist matrices P > 0, Q 0 > 0, S0 > 0, Yi j , Z i j , and Σ such that T T ¯ 11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 hY ¯ 4 S0 e4T + e1 Q 0 e1T − e3 Q 0 e3T + e4 Pe1T + e1 Pe4T + he

+ Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0,

(2.44)

46

2 Stability of Time-Delay Systems T T + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 h¯ Z 11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12 ¯ 4 S0 e4T + e1 Q 0 e1T − e3 Q 0 e3T + e4 Pe1T + e1 Pe4T + he



+ Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0,

(2.45)

   Z 11 Z 12 Y11 Y12 ≥ 0, ≥ 0, S0 − Z 22 ≥ 0, S0 − Y22 ≥ 0, T T Y12 Y22 Z 12 Z 22

(2.46)

where ei ∈ R4n×n are defined as  e1T = I 0 0  e3T = 0 0 I

  0 , e2T = 0 I 0   0 , e4T = 0 0 0

 0 ,  I ,

(2.47) (2.48)

¯ 0] is asympthen the system (2.43) with a general time-varying delay h(t) ∈ [−h, totically stable. Proof Let us represent the system (2.43) as 0 = (Ae1T + A1 e2T − e4T )χ(t), where   ¯ x˙ T (t) T . χ(t)  x T (t) x T (t − h(t)) x T (t − h) We choose a Lyapunov functional as follows. V (t) = V1 (t) + V2 (t) + V3 (t),

(2.49)

where Vi (t) denote  V1 (t) = x T (t)P x(t), V2 (t) =

t t−h¯

 x T (α)Q 0 x(α)dα, V3 (t) =

0 −h¯



t

x˙ T (β)S0 x(β)dβdα ˙

t+α

with P > 0, Q 0 > 0, and S0 > 0. Take the time derivative of the Lyapunov functional as V˙1 (t) = 2χT (t)e4 Pe1T χ(t), V˙2 (t) = χT (t){e1 Q 0 e1T − e3 Q 0 e3T }χ(t),  t T T ˙ ¯ V3 (t) = hχ (t)e4 S0 e4 χ(t) − x˙ T (α)S0 x(α)dα. ˙ t−h¯

Furthermore, for

we have



   Z 11 Z 12 Y11 Y12 ≥ 0, ≥ 0, T T Y12 Y22 Z 12 Z 22

2.4 Stability of State Delayed Systems  0≤



t t−h(t)

χ(t) x(α) ˙

T 

Y11 Y12 T Y Y12 22



47

 χ(t) dα x(α) ˙

   T χ(t) + = χT (t) h(t)Y11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12  0≤

t−h(t)  t−h¯

χ(t) x(α) ˙

T 

Z 11 Z 12 T Z Z 12 22



 χ(t) dα x(α) ˙

t

x˙ T (α)Y22 x(α)dα, ˙

t−h(t)

  T χ(t) = χT (t) (h¯ − h(t))Z 11 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12  t−h(t) x˙ T (α)Z 22 x(α)dα, ˙ + t−h¯

so that V˙ (t) can be upper-bounded by the following quantity:  V˙ (t) ≤ χT (t) h(t)Y11 + (h¯ − h(t))Z 11 + Ω1 χ(t) + Ω2 , where Ωi denote T T + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 Ω1 = Y12 (e1 − e2 )T + (e1 − e2 )Y12 ¯ 4 S0 e4T + e1 Q 0 e1T − e3 Q 0 e3T , +e4 Pe1T + e1 Pe4T + he  t  t−h(t)  t x˙ T (α)Y22 x(α)dα ˙ + x˙ T (α)Z 22 x(α)dα ˙ − x˙ T (α)S0 x(α)dα. ˙ Ω2 = t−h¯

t−h(t)

t−h¯

Since h(t)Y11 + (h¯ − h(t))Z 11 is a convex combination of the matrices Y11 and Z 11 on h(t), it can be handled non-conservatively by two corresponding boundary LMIs: one for h(t) = h¯ and the other for h(t) = 0. Furthermore, if it holds that S0 − Z 22 ≥ 0 and S0 − Y22 ≥ 0, the matrix Ω2 is non-positive definite, i.e. Ω2 ≤ 0. Finally, we remove the constraints of the model dynamics itself in (2.43) by introducing free variables Σ as 0 = χT (t)Σ(Ae1T + A1 e2T − e4T )χ(t). This completes the proof.

(2.50) 

˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ the following result can be obtained. Theorem 2.9 If there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S0 > 0, S1 > 0, Yi j , Z i j , and Σ such that T T ¯ 11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12 hY + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12

¯ 4 (S0 + S1 )e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e + e4 Pe1T + e1 Pe4T + he ¯ 2 Q 1 e2T + Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0,

(2.51)

48

2 Stability of Time-Delay Systems T T h¯ Z 11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12

¯ 4 S0 e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e + e4 Pe1T + e1 Pe4T + he ¯ 2 Q 1 e2T 

Y11 T Y12

+ Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0, (2.52)    Z 11 Z 12 Y12 ¯ 1 − Y22 ≥ 0, (2.53) ≥ 0, ≥ 0, S0 − Z 22 ≥ 0, S0 + (1 − μ)S T Z Y22 Z 12 22

where ei ∈ R4n×n are defined in (2.47)–(2.48), then the system (2.43) with a time¯ 0] that is differentiable with bounded derivative such as varying delay h(t) ∈ [−h, ˙h(t) ≤ μ¯ is asymptotically stable. Proof Let us consider the delayed system (2.43) with a time-varying delay h(t) ∈ ¯ 0] that is differentiable with bounded derivative such as h(t) ˙ ≤ μ. [−h, ¯ To combine this added constraint with the previous result, we introduce two additional terms in a Lyapunov functional such as  V4 (t) =



t

x (α)Q 1 x(α)dα, V5 (t) =



0

t

x˙ T (β)S1 x(β)dβdα, ˙

T

t−h(t)

−h(t)

t+α

where Q 1 > 0 and S1 > 0, such that the resultant Lyapunov functional is V (t) = V1 (t) + V2 (t) + V3 (t) + V4 (t) + V5 (t).

(2.54)

The time derivatives are T ˙ V˙4 (t) = χT (t){e1 Q 1 e1T − (1 − h(t))e 2 Q 1 e2 }χ(t) ≤ χT (t){e1 Q 1 e1T − (1 − μ)e2 Q 1 e2T }χ(t),  t ˙ V˙5 (t) = h(t)χT (t)e4 S1 e4T χ(t) − (1 − h(t)) x˙ T (α)S1 x(α)dα ˙

 ≤ h(t)χT (t)e4 S1 e4T χ(t) − (1 − μ)

t−h(t)

t

x˙ T (α)S1 x(α)dα. ˙

t−h(t)

Using the similar procedure as in Theorem 2.8, the time derivative of V (t) can be upper-bounded by the following quantities: V˙ (t) ≤ χT (t){h(t)(Y11 + e4 S1 e4T ) + (h¯ − h(t))Z 11 + Ω1 }χT (t) + Ω2 , where Ωi denote T T Ω1 = Y12 (e1 − e2 )T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 ¯ 4 S0 e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e +e4 Pe1T + e1 Pe4T + he ¯ 2 Q 1 e2T ,  t−h(t)  t x˙ T (α)(S0 − Z 22 )x(α)dα ˙ − x˙ T (α){S0 − Y22 + (1 − μ)S ¯ 1 }x(α)dα. ˙ Ω2 = − t−h¯

t−h(t)

2.4 Stability of State Delayed Systems

49

Finally, we can obtain the corresponding stability criterion from the convexity on h(t), producing two non-conservative LMIs, and by eliminating the constraint of model dynamics as in (2.50). This completes the proof.  The delay-dependent stability conditions for systems with time-varying delays based on the Lyapunov–Krasovskii approach in Theorems 2.8 and 2.9 in this subsection can be found similarly as in [25]. Alternative Approach Using the Jensen inequality lemma and the reciprocal convexity approach in Lemmas 2.4 and 2.5, respectively, the following results can be obtained. Theorem 2.10 If there exist matrices P > 0, Q 0 > 0, S0 > 0, Y , and Σ such that  ¯ 4 S0 e4T + e1 Q 0 e1T − e3 Q 0 e3T − 1 (e1 − e2 )S0 (e1 − e2 )T e4 Pe1T + e1 Pe4T + he ¯h − (e1 − e2 )Y (e2 − e3 )T − (e2 − e3 )Y T (e1 − e2 )T − (e2 − e3 )S0 (e2 − e3 )T + Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0, 

 S0 Y ≥ 0, Y T S0

(2.55) (2.56)

where ei ∈ R4n×n are defined in (2.47)–(2.48), then the system (2.43) with a general time-varying delay h(t) is asymptotically stable. Proof Consider the same Lyapunov functional (2.49). By the Jensen inequality lemma in Lemma 2.4, we have  1 T − (e1 − e2 )S0 (e1 − e2 ) χ(t), x˙ (α)S0 x(α)dα ˙ ≤ −χ (t) h(t) t−h(t)    t−h(t) 1 − x˙ T (α)S0 x(α)dα ˙ ≤ −χT (t) (e2 − e3 )S0 (e2 − e3 )T χ(t). h¯ − h(t) t−h¯ 



t

T

T

Furthermore, by the reciprocal convexity approach in Lemma 2.5, it holds for (2.56):   ¯ 1 h¯ h (e1 − e2 )S0 (e1 − e2 )T + − χT (t) (e2 − e3 )S0 (e2 − e3 )T h(t) h¯ h¯ − h(t) T   T   T T 1 T S0 Y e1 − e2T e1 − e2 χ(t). ≤ − χ (t) T e2 − e3T Y T S0 e2T − e3T h¯ The remaining proof is similar to that in Theorem 2.8. This completes the proof.  ˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ the following result can be obtained.

50

2 Stability of Time-Delay Systems

Theorem 2.11 If there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S0 > 0, S1 > 0, Y , and Σ such that ¯ 4 (S0 + S1 )e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e ¯ 2 Q 1 e2T e4 Pe1T + e1 Pe4T + he 1 − (e1 − e2 )(S0 + (1 − μ)S ¯ 1 )(e1 − e2 )T − (e1 − e2 )Y (e2 − e3 )T h¯ − (e2 − e3 )Y T (e1 − e2 )T − (e2 − e3 )S0 (e2 − e3 )T + Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0, (2.57) T T T T T T ¯ e4 Pe1 + e1 Pe4 + he4 S0 e4 + e1 (Q 0 + Q 1 )e1 − e3 Q 0 e3 − (1 − μ)e ¯ 2 Q 1 e2 1 − (e1 − e2 )(S0 + (1 − μ)S ¯ 1 )(e1 − e2 )T − (e1 − e2 )Y (e2 − e3 )T h¯ − (e2 − e3 )Y T (e1 − e2 )T − (e2 − e3 )S0 (e2 − e3 )T + Σ(Ae1T + A1 e2T − e4T ) + (e1 A T + e2 A1T − e4 )Σ T < 0, 

 ¯ 1 Y S0 + (1 − μ)S ≥ 0, S0 YT

(2.58) (2.59)

where ei ∈ R4n×n are defined in (2.47)–(2.48), then the system (2.43) with a time¯ 0] that is differentiable with bounded derivative such as varying delay h(t) ∈ [−h, ˙h(t) ≤ μ¯ is asymptotically stable. Proof Consider the same Lyapunov functional (2.54). By the Jensen inequality lemma in Lemma 2.4, we have  −(1 − μ) ¯

t

t−h(t)

 x˙ T (α)S1 x(α)dα ˙ ≤ −χT (t)

 1 ¯ 1 (e1 − e2 )T χ(t). (e1 − e2 )(1 − μ)S h(t)

Furthermore, by the reciprocal convexity approach in Lemma 2.5, it holds for (2.59) that  ¯  h 1 h¯ (e2 − e3 )S0 (e2 − e3 )T − χT (t) ¯ 1 ](e1 − e2 )T + (e1 − e2 )[S0 + (1 − μ)S h(t) h¯ h¯ − h(t)       T T 1 S0 + (1 − μ)S ¯ 1 Y e1T − e2T e − e2T χ(t). ≤ − χT (t) 1T T T T T Y S0 e2 − e3 e2 − e3 h¯

The remaining proof is similar to that in Theorem 2.9. This completes the proof.  Generally speaking, Theorems 2.9 and 2.11 can yield the larger maximum allowable delay than Theorems 2.8 and 2.10 since the property of h(t) that is differentiable ˙ ≤ μ¯ is utilized, which is shown in the following with bounded derivative such as h(t) examples.

2.4 Stability of State Delayed Systems

51

Example 2.2 Consider the system (2.43) for the following system matrices:    −1.0 0.0 −2.0 0.0 . A= , A1 = −1.0 −1.0 0.0 −0.9 

Based on Theorems 2.8 and 2.10, the system is stable up to 1.86809 identically. Example 2.3 Consider the system in Example 2.2 with a time-varying delay h(t) ∈ ¯ 0] that is differentiable with bounded derivative such as h(t) ˙ ≤ μ. [−h, ¯ When μ¯ =0.0, 0.1, 0.5, 0.9, 3.0, the maximal allowable h¯ are given as 4.472, 3.669, 2.337, 1.873, 1.868 based on Theorem 2.9 and 4.472, 3.658, 2.337, 1.873, 1.869 based on Theorem 2.11, respectively. The delay-dependent stability conditions based on the Jensen inequality lemma with the reciprocal convexity approach in Theorems 2.10 and 2.11 can be found in [26] and extended in [14–17]. Other important results of stability analysis of timedelayed systems based on the descriptor system approach can be found in [3] and the results were extended in [4–6], which are not covered in this book.

2.5 Robust Stability of State Delayed Systems Consider a state delayed system with model uncertainties given by x(t) ˙ = Ax(t) + A1 x(t − h) + Dpw (t), pw (t) = Δ(t)qw (t), ΔT (t)Δ(t) ≤ ρ−2 I,

(2.60) (2.61)

qw (t) = E x(t) + E 1 x(t − h) + F pw (t)

(2.62)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0], where pw (t) ∈ Rm w and qw (t) ∈ Rlw are the output and the input of an uncertainty block Δ(t). The above description belongs to the linear fractional representation of model uncertainties, which can be written as x(t) ˙ = (A + D(I − Δ(t)F)−1 Δ(t)E)x(t) + (A1 + D(I − Δ(t)F)−1 Δ(t)E 1 )x(t − h). (2.63)

If F = 0, this becomes x(t) ˙ = (A + DΔ(t)E)x(t) + (A1 + DΔ(t)E 1 )x(t − h), which is a simple model uncertainty often used in the literature.

(2.64)

52

2 Stability of Time-Delay Systems

2.5.1 Lyapunov–Krasovskii Approach Based on the Lyapunov functional (2.27)–(2.30) which is used for obtaining the delay-dependent stability condition, the following condition can be derived. Theorem 2.12 For a given ρ, if there exist matrices P > 0, Q > 0, S > 0, Yi j , and Σ such that Σ(Ae1T + A1 e2T − e3T + De4T ) + (Ae1T + A1 e2T − e3T + De4T )T Σ T + hY11 T + Y12 (e1 − e2 )T + (e1 − e2 )Y12 + e3 Pe1T + e1 Pe3T + e1 Qe1T − e2 Qe2T

+ he3 Se3T + (e1 E T + e2 E 1T + e4 F T )(Ee1T + E 1 e2T + Fe4T ) − ρ2 e4 e4T < 0, (2.65)   Y11 Y12 ≥ 0, S − Y22 ≥ 0, (2.66) T Y12 Y22 where ei are defined as, when m w is the dimension of pw (t),  e1T = I 0 0  e3T = 0 0 I

  0 ∈ Rn×(3n+m w ) , e2T = 0 I 0   0 ∈ Rn×(3n+m w ) , e4T = 0 0 0

 0 ∈ Rn×(3n+m w ) ,  I ∈ Rm w ×(3n+m w ) ,

(2.67) (2.68)

then the system (2.60)–(2.62) is asymptotically stable.  T Proof Let us define χ(t) as χ(t)  x T (t) x T (t − h) x˙ T (t) pwT (t) . Then, the system (2.60) can be written as 0 = (Ae1T + A1 e2T − e3T + De4T )χ(t). Take the time derivative of the Lyapunov functional as V˙1 (t) = 2 x˙ T (t)P x(t) = 2χT (t)e3 Pe1T χ(t), V˙2 (t) = x T (t)Qx(t) − x T (t − h)Qx(t − h) = χT (t){e1 Qe1T − e2 Qe2T }χ(t),  t  t V˙3 (t) = h x˙ T (t)S x(t) ˙ − x˙ T (α)S x(α)dα ˙ = hχT (t)e3 Se3T χ(t) − x˙ T (α)S x(α)dα. ˙ t−h

t−h

Furthermore, for  Y11 Y12 , 0≤ T Y12 Y22 

we have  0≤

t t−h



χ(t) x(α) ˙

T 

Y11 Y12 T Y12 Y22



 χ(t) dα x(α) ˙

T }χ(t) + = χT (t){hY11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12



t t−h

x˙ T (α)Y22 x(α)dα, ˙

2.5 Robust Stability of State Delayed Systems

53

and the time derivative of V (t) can be upper-bounded by the following quantities: T + e3 Pe1T + e1 Pe3T + e1 Qe1T V˙ (t) ≤ χT (t){hY11 + Y12 (e1 − e2 )T + (e1 − e2 )Y12  t  t T T T −e2 Qe2 + he3 Se3 }χ(t) + x˙ (α)Y22 x(α)dα ˙ − x˙ T (α)S x(α)dα. ˙ t−h

t−h

Finally, we now remove the constraints of the model dynamics in (2.60)–(2.62) by introducing free variables Σ such as 0 ≡ χT (t)Σ(Ae1T + A1 e2T − e3T + De4T )χ(t). And, note that the time derivative of the Lyapunov functional must be subject to the following constraint associated with (2.61) 0 ≤ qwT (t)qw (t) − ρ2 pwT (t) pw (t) = χT {(e1 E T + e2 E 1T + e4 F T )(Ee1T + E 1 e2T + Fe4T ) − ρ2 e4 e4T }χ(t),(2.69) which can be handled through the so-called S-procedure to get the condition (2.65). This completes the proof.  The robust delay-dependent stability condition based on the Lyapunov–Krasovskii approach in Theorem 2.12 can be found in [20].

2.5.2 Discretized State Approach Using the discretized state defined in (2.34), we can get the following result. Theorem 2.13 For a given ρ, if there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S > 0, Yi j , and Ni such that ⎡

⎤ (e0 E T + e N E 1T )(Ee0T + E 1 e TN ) + Π1 e0 Pe0T (e0 E T + e N E 1T )F ⎢ ⎥ e0 Pe0T Π2 0 ⎣ ⎦ F T (Ee0T + E 1 e TN ) 0 −ρ2 I + F T F ⎤T ⎡ ⎤ ⎡ ⎤T ⎡ ⎤⎡ e0 A T + e N A1T e0 A T + e N A1T N1 N1 ⎦ +⎣ ⎦ ⎣ N2 ⎦ < 0, (2.70) −e0 −e0 + ⎣ N2 ⎦ ⎣ N3 DT DT N3   Y11 Y12 ≥ 0, S − Y22 ≥ 0, (2.71) T Y12 Y22 where the Π1 and Π2 entries are defined as

54

2 Stability of Time-Delay Systems T Π1 = E 0 Q 0 E 0T − E N Q 0 E NT + τ Y11 + Y12 (E 0 − E N )T + (E 0 − E N )Y12 ,

Π2 = E 0 Q 1 E 0T − E N Q 1 E NT + τ E 0 S E 0T , and τ , E 0 , E N , e0 , and e N are defined in (2.34) and (2.39)–(2.40), then the system (2.60)–(2.62) is asymptotically stable. Proof Let us now consider the discretized state (2.34) so that the system (2.60)– (2.62) can be written as 0 = (Ae0T + A1 e TN )x N ,N (t) − e0T x˙ N ,N (t) + Dpw (t), where e0 and e N are defined in (2.40). We choose a Lyapunov functional as (2.41). From the discussion in Sect. 2.4.3, the time derivative of V (t) can be upper-bounded by the following quantities: V˙ (t) ≤ V˙1 (t) + V˙2 (t) + V˙3 (t) + V˙4 (t) + x N ,N (t){τ Y11 + Y12 (E 0 − E N )T  t T +(E 0 − E N )Y12 }x N ,N (t) + x˙ NT −1,N (α)Y22 x˙ N −1,N (α)dα t−τ

T    Π1 e0 Pe0T x N ,N (t) x N ,N (t) = e0 Pe0T Π2 x˙ N ,N (t) x˙ N ,N (t)  t − x˙ NT −1,N (α)(S − Y22 )x˙ N −1,N (α)dα. 

t−τ

We now remove the constraints of the model dynamics itself in (2.60) by introducing free variables N1 , N2 and N3 such as ⎡ ⎤T ⎡ ⎤ ⎤ x N ,N (t) x N ,N (t) N1   0 = ⎣x˙ N ,N (t)⎦ ⎣ N2 ⎦ Ae0T + A1 e TN −e0T D ⎣x˙ N ,N (t)⎦ . N3 pw (t) pw (t) ⎡

Also, note that the time derivative of the Lyapunov functional must be subject to the following constraint associated with (2.61) T 0 ≤ qwT (t)qw (t) − ρ2 pw (t) pw (t) ⎡ ⎤T ⎡ ⎤ ⎤⎡ x N ,N (t) (e0 E T + e N E 1T )(Ee0T + E 1 e TN ) 0 (e0 E T + e N E 1T )F x N ,N (t) ⎦ ⎣x˙ N ,N (t)⎦ , 0 0 0 = ⎣x˙ N ,N (t)⎦ ⎣ pw (t) pw (t) F T (Ee0T + E 1 e TN ) 0 −ρ2 I + F T F

which can be handled through the so-called S-procedure. Note that the nonnegative scalar variable in the S-procedure of the constraint (2.61) can be scaled to 1 without loss of generality. This completes the proof.  The following example shows the maximum allowable time delays obtained by Theorem 2.13 for various discretization numbers.

2.5 Robust Stability of State Delayed Systems

55

Example 2.4 Consider the system (2.60)–(2.62). The following parameters are used:    −1 0 −2 0 , D = I, A= , A1 = −1 −1 0 −1     1 0 11 , F = 0, ρ−1 = 0.1. E= , E1 = 0 −1 11 

When the number of state discretization is 1, 2, 3, 4, and, 5, the maximal delay upper bound h¯ is obtained as 3.44, 4.33, 4.51, 4.58, and 4.61, respectively. The robust delay-dependent stability condition based on the discretized state approach in Theorem 2.13 is newly obtained here.

2.5.3 *Extension to Systems with Time-Varying Delays Let us consider the system (2.60)–(2.62), where the delay h is not constant but time¯ We choose a Lyapunov functional as (2.49), then the varying such that h(t) ∈ [0, h]. following theorem can be obtained by applying the Lyapunov–Krasovskii approach. Theorem 2.14 For a given ρ, if there exist matrices P > 0, Q 0 > 0, S0 > 0, Yi j , Z i j , and Σ such that ¯ 11 + Y12 (e1 − e2 )T Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T + hY T T ¯ 4 S0 e4T + e1 Q 0 e1T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 + e4 Pe1T + e1 Pe4T + he

− e3 Q 0 e3T + (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0, (2.72) Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T + h¯ Z 11 + Y12 (e1 − e2 )T T T ¯ 4 S0 e4T + e1 Q 0 e1T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 + e4 Pe1T + e1 Pe4T + he

− e3 Q 0 e3T + (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0, (2.73)     Y11 Y12 Z 11 Z 12 ≥ 0, ≥ 0, S0 − Z 22 ≥ 0, S0 − Y22 ≥ 0, (2.74) T T Y12 Y22 Z 12 Z 22

where ei are defined as, when m w is the dimension of pw ,  e1T = I  e2T = 0  e3T = 0  e4T = 0  e5T = 0

 0 0 0 0 ∈ Rn×(4n+m w ) ,  I 0 0 0 ∈ Rn×(4n+m w ) ,  0 I 0 0 ∈ Rn×(4n+m w ) ,  0 0 I 0 ∈ Rn×(4n+m w ) ,  0 0 0 I ∈ Rm w ×(4n+m w ) ,

(2.75) (2.76) (2.77) (2.78) (2.79)

¯ 0] is then the system (2.60)–(2.62) with a general time-varying delay h(t) ∈ [−h, asymptotically stable.

56

2 Stability of Time-Delay Systems

Proof Let us define χ(t) as   ¯ x˙ T (t) pwT (t) T . χ(t)  x T (t) x T (t − h(t)) x T (t − h) In this case, the time derivative of the Lyapunov function must be subject to the following constraints associated with the system (2.60) 0 ≡ χT (t)Σ(Ae1T + A1 e2T − e4T + De5T )χ(t), and the additional uncertainty constraint (2.61): 0 ≤ qwT (t)qw (t) − ρ2 pwT (t) pw (t) = χT (t){(e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T }χ(t), (2.80) which can be handled through the S-procedure. This completes the proof.



˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ we again use a Lyapunov functional as (2.54). Using convexity on h(t) and eliminating the model constraints, we have the following result. Theorem 2.15 For a given ρ, if there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S0 > 0, S1 > 0, Yi j , Z i j , and Σ such that ¯ 11 Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T + hY T T + Y12 (e1 − e2 )T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 + e4 Pe1T + e1 Pe4T

¯ 4 (S0 + S1 )e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e + he ¯ 2 Q 1 e2T + (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0,

(2.81)

+ A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T + h¯ Z 11 T T + Y12 (e1 − e2 )T + (e1 − e2 )Y12 + Z 12 (e2 − e3 )T + (e2 − e3 )Z 12 + e4 Pe1T + e1 Pe4T Σ(Ae1T

¯ 4 S0 e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T − (1 − μ)e + he ¯ 2 Q 1 e2T 

Y11 T Y12

+ (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0,    Y12 Z 11 Z 12 ≥ 0, ≥ 0, S0 − Z 22 ≥ 0, S0 + (1 − μ)S ¯ 1 − Y22 ≥ 0, T Z Y22 Z 12 22

(2.82) (2.83)

where ei are defined in (2.75)–(2.79), then the system (2.60)–(2.62) with a time¯ 0] that is differentiable with bounded derivative such as varying delay h(t) ∈ [−h, ˙h(t) ≤ μ¯ is asymptotically stable. Proof The proof is straightforward and thus omitted.



The robust delay-dependent stability conditions for systems with time-varying delays based on the Lyapunov–Krasovskii approach in Theorems 2.14 and 2.15 in this subsection can be found in [25].

2.5 Robust Stability of State Delayed Systems

57

Alternative Approach Using the Jensen inequality lemma and the reciprocal convexity approach in Lemmas 2.4 and 2.5, respectively, the following results can be obtained. Theorem 2.16 For a given ρ, if there exist matrices P > 0, Q 0 > 0, S0 > 0, Y , and Σ such that Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T

1 (e1 − e2 )S0 (e1 − e2 )T − (e1 − e2 )Y (e2 − e3 )T − (e2 − e3 )Y T (e1 − e2 )T h¯ ¯ 4 S0 e4T + e1 Q 0 e1T − e3 Q 0 e3T − (e2 − e3 )S0 (e2 − e3 )T + e4 Pe1T + e1 Pe4T + he −

+ (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0, (2.84)   S0 Y ≥ 0, (2.85) Y T S0 where ei are defined in (2.75)–(2.79), then the system (2.60)–(2.62) with a general time-varying delay h(t) is asymptotically stable. Proof The proof is straightforward and thus omitted.



˙ ≤ μ, If h(t) is differentiable with bounded derivative as h(t) ¯ the following result can be obtained. Theorem 2.17 For a given ρ, if there exist matrices P > 0, Q 0 > 0, Q 1 > 0, S0 > 0, S1 > 0, Y , and Σ such that Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T 1 − (e1 − e2 )(S0 + (1 − μ)S ¯ 1 )(e1 − e2 )T − (e1 − e2 )Y (e2 − e3 )T h¯ − (e2 − e3 )Y T (e1 − e2 )T − (e2 − e3 )S0 (e2 − e3 )T ¯ 4 (S0 + S1 )e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T + e4 Pe1T + e1 Pe4T + he − (1 − μ)e ¯ 2 Q 1 e2T + (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0, (2.86) Σ(Ae1T + A1 e2T − e4T + De5T ) + (e1 A T + e2 A1T − e4 + e5 D T )Σ T 1 − (e1 − e2 )(S0 + (1 − μ)S ¯ 1 )(e1 − e2 )T − (e1 − e2 )Y (e2 − e3 )T h¯ − (e2 − e3 )Y T (e1 − e2 )T − (e2 − e3 )S0 (e2 − e3 )T ¯ 4 S0 e4T + e1 (Q 0 + Q 1 )e1T − e3 Q 0 e3T + e4 Pe1T + e1 Pe4T + he − (1 − μ)e ¯ 2 Q 1 e2T + (e1 E T + e2 E 1T + e5 F T )(Ee1T + E 1 e2T + Fe5T ) − ρ2 e5 e5T < 0, (2.87)   ¯ 1 Y S0 + (1 − μ)S ≥ 0, (2.88) YT S0

58

2 Stability of Time-Delay Systems

where ei are defined in (2.75)–(2.79), then the system (2.60)–(2.62) with a time¯ 0] that is differentiable with bounded derivative such as varying delay h(t) ∈ [−h, ˙ ≤ μ¯ is asymptotically stable. h(t) Proof The proof is straightforward and thus omitted.



Generally speaking, Theorems 2.15 and 2.17 can yield larger maximum allowable delays than Theorems 2.14 and 2.16 since the property of h(t) that is differentiable ˙ ≤ μ¯ is utilized, which is be shown in the folwith bounded derivative such as h(t) lowing examples. Example 2.5 Consider the delayed system with model uncertainties (2.60)–(2.62). The following parameters are used:    −1.0 0.0 −2.0 0.0 , D = I, , A1 = −1.0 −1.0 0.0 −1.0     0.1 0.0 1.6 0.0 E= , ρ = 1. , E1 = 0.0 0.3 0.0 0.05 

A=

Based on Theorems 2.14 and 2.16, the system is robustly stable up to 1.068 identically. Example 2.6 Consider the delayed system with model uncertainties in Example 2.5 ¯ 0] that is differentiable with bounded derivawith a time-varying delay h(t) ∈ [−h, ˙ tive such as h(t) ≤ μ. ¯ When μ¯ is given as 0.0, 0.2, 0.4, 0.6, 0.8, the maximal allowable h¯ are given as 1.149, 1.099, 1.077, 1.070, 1.068 based on Theorem 2.15 and 1.149,1.092, 1.077, 1.070, 1.068 based on Theorem 2.17, respectively. The robust delay-dependent stability conditions based on the Jensen inequality lemma with the reciprocal convexity approach in Theorems 2.16 and 2.17 are newly obtained here and similar restrictive conditions can be obtained from a slight modification of the results in [25].

2.6 Stability and Robust Stability of Distributed State Delayed Systems Optimal controls in subsequent chapters often include distributed state feedbacks. Thus, distributed state delayed systems are dealt with in this section. Consider a linear system with a state delay and a distributed delay given by  x(t) ˙ = Ax(t) + A1 x(t − h) +

0

−h

A2 (θ)x(t + θ)dθ

(2.89)

2.6 Stability and Robust Stability of Distributed State Delayed Systems

59

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0] and a given matrix valued function A2 (θ), θ ∈ [−h, 0]. We only deal with the Razumikhin theorem and the Krasovskii theorem in this section due to the space limit of the book.

2.6.1 Lyapunov–Razumikhin Approach We can again use V (x) in (2.19) to study the stability of the system (2.89) and conclude the following result. Theorem 2.18 If there exist a matrix P > 0, a scalar function α(θ) > 0, a scalar β > 0, and a matrix function R(θ) such that 

 0 P A + A T P + β P + −h R(θ)dθ P A1 < 0, A1T P −β P   α(θ)P − R(θ) P A2 (θ) < 0, −α(θ)P A2T (θ)P

(2.90) (2.91)

for −h ≤ θ ≤ 0, then the system (2.89) is asymptotically stable. Proof Since (2.91) implies P > 0, we can conclude that a Lyapunov function V (x) = x T P x satisfies σmin (P)x2 ≤ V (x) ≤ σmax (P)x2 . Then, whenever V (x(t + θ)) < pV (x(t)) for all θ ∈ [−h, 0] and for some p > 1, which includes that V (x(t − h)) < pV (x(t)), we have   V˙ (x(t)) = 2x T (t)P Ax(t) + A1 x(t − h) +  ≤ 2x T (t)P Ax(t) + A1 x(t − h) + 

0



0 −h 0 −h

 A2 (θ)x(t + θ)dθ  A2 (θ)x(t + θ)dθ

α(θ){ px T (t)P x(t) − x T (t + θ)P x(t + θ)}dθ  +β px T (t)P x(t) − x T (t − h)P x(t − h)    T  0 x(t) x(t) P A + A T P + pβ P + −h R(θ)dθ P A1 = x(t − h) A1T P −β P x(t − h)       0 T x(t) x(t) pα(θ)P − R(θ) P A2 (θ) dθ. + −α(θ)P x(t + θ) A2T (θ)P −h x(t + θ) +

−h

60

2 Stability of Time-Delay Systems

With the above expression and the fact that p > 1 can be arbitrarily close to 1, (2.90) and (2.91) imply that V˙ (x(t)) ≤ −x(t)2 for some sufficiently small  > 0. This completes the proof. 

2.6.2 Lyapunov–Krasovskii Approach Consider a Lyapunov functional for the system (2.89) given by  V (xt ) = x (t)P x(t) +

0

T

 +

0 −h

 θ

0

−h

x T (t + α)Sx(t + α)dα

x T (t + α)Q(θ)x(t + α)dαdθ,

(2.92)

where P > 0, S > 0, and Q(θ) > 0 for all θ ∈ [−h, 0], which produces the following result. Theorem 2.19 If there exist a matrix P > 0, matrix functions Q(θ) > 0, and R(θ) such that   0 P A + A T P + S + −h R(θ)dθ P A1 < 0, (2.93) A1T P −S   Q(θ) − R(θ) P A2 (θ) < 0, (2.94) −Q(θ) A2T (θ)P for all θ ∈ [−h, 0], then the system (2.89) is asymptotically stable. Proof The time derivative of V can be calculated as V˙ (xt ) =

  T  0 x(t) x(t) P A + A T P + S + −h R(θ)dθ P A1 x(t − h) x(t − h) A1T P −S       0 T x(t) Q(θ) − R(θ) P A2 (θ) x(t) + dθ, (2.95) T (θ)P −Q(θ) A x(t + θ) x(t + θ) 2 −h



where the second term is always less than zero due to (2.94). Also, (2.93) implies the existence of a sufficiently small  > 0 such that 

P A + AT P + S + A1T P

0 −h

 R(θ)dθ P A1 ≤ −I. −S

Therefore, V˙ (xt ) ≤ −x(t)2 and the system is asymptotically stable. This completes the proof. 

2.6 Stability and Robust Stability of Distributed State Delayed Systems

61

The delay-independent stability conditions for a system with a state delay and a distributed delay based on the Lyapunov–Razumikhin approach and the Lyapunov– Krasovskii approach in Theorems 2.18 and 2.19 in this section, respectively, are newly obtained here and similar restrictive conditions can be found in [10].

2.6.3 Lyapunov–Krasovskii Approach for Robust Stability Consider a distributed state delayed system with model uncertainties given by  x(t) ˙ = Ax(t) + A1 x(t − h) +

0

−h

A2 (θ)x(t + θ)dθ + Dpw (t),

pw (t) = Δ(t)qw (t), ΔT (t)Δ(t) ≤ ρ−2 I,  0 E 2 (θ)x(t + θ)dθ + F pw (t) qw (t) = E x(t) + E 1 x(t − h) +

(2.96) (2.97) (2.98)

−h

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. We choose a Lyapunov functional as (2.92) to get the following result. Theorem 2.20 For a given ρ, if there exist matrices P > 0, Q > 0, S > 0, and matrix functions Q(θ) > 0 and R(θ) such that ⎤ 0 P A + A T P + S + E T E + −h R(θ)dθ P A1 + E T E 1 P D + E T F ⎦ < 0, (2.99) ⎣ −S + E 1T E 1 0 A1T P + E 1T E DT P + F T E 0 F T F − ρ2 I   Q(θ) − R(θ) P A2 (θ) + E T E 2 (θ) < 0, (2.100) T T T A2 (θ)P + E 2 (θ)E −Q(θ) + E 2 (θ)E 2 (θ) ⎡

for θ ∈ [−h, 0], then the system (2.96)–(2.98) is asymptotically stable. Proof Note that the time derivative of the Lyapunov functional must be subject to the following constraint associated with (2.97) 0 ≤ qwT (t)qw (t) − ρ2 pwT (t) pw (t)

 T T T T = x (t)E + x (t − h)E 1 +  +E 1 x(t − h) +

0

−h

0

−h



x (t + T

θ)E 2T (θ)dθ

+

pwT (t)F T

E x(t)

E 2 (θ)x(t + θ)dθ + F pw (t) − ρ2 pwT (t) pw (t), (2.101)

which can be handled through the so-called S-procedure.

62

2 Stability of Time-Delay Systems

Then the time derivative of V can be calculated from (2.95) and (2.101) as V˙ (xt )

⎤ 0 P A + A T P + S + E T E + −h R(θ)dθ P A1 + E T E 1 P D + E T F ⎦χ1 (t) ≤ χ1T (t)⎣ −S + E 1T E 1 0 A1T P + E 1T E T T T 2 D P+F E 0 F F −ρ I    0 T E (θ) (θ) + E Q(θ) − R(θ) P A 2 2 χ (t, θ)dθ, + χ2T (t, θ) T A2 (θ)P + E 2T (θ)E −Q(θ) + E 2T (θ)E 2 (θ) 2 −h ⎡

for all θ ∈ [−h, 0], where  T  T χ1 (t)  x T (t) x T (t − h) pwT (t) , χ2 (t)  x T (t) x T (t + θ) . This completes the proof.



The robust delay-independent stability condition for a system with a state delay and a distributed delay based on the Lyapunov–Krasovskii approach in Theorem 2.20 is newly obtained here.

References 1. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia 2. Driver RD (1962) Existence and stability of solutions of a delay differential system. Springer, Berlin 3. Fridman E (2001) New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral type systems. Syst Control Lett 43(4):309–319 4. Fridman E, Shaked U (2002) A descriptor system approach to H∞ control of time-delay systems. IEEE Trans Autom Control 47(2):253–270 5. Fridman E, Shaked U (2002) An improved stabilization method for linear systems with timedelay. IEEE Trans Autom Control 47(11):1931–1937 6. Fridman E, Shaked U (2003) Delay-dependent stability and H∞ control: constant and timevarying delays. Int J Control 76(1):48–60 7. Gu K (1997) Discretized LMI set in the stability problem of linear uncertain time-delay systems. Int J Control 68(4):923–934 8. Gu K (2000) An integral inequality in the stability problem of time-delay systems. In: Proceedings of the 39th IEEE conference on decision and control. Sydney, Australia, pp 2805–2810 9. Gu K, Han QL (2000) Controller design for time-delay systems using discretized Lyapunov functional approach. In: Proceedings of the 39th IEEE conference on decision and control. Sydney, Australia, pp 2793–2798 10. Gu K, Kharitonov VL, Chen J (2003) Stability of time-delay systems. Birkhäuser, Basel 11. Hale JK, Lunel SMV (1993) Introduction to functional differential equations. Springer, Berlin 12. Ko JW, Park P (2009) Delay-dependent stability criteria for systems with time-varying delays: state discretization approach. IEICE Trans Fundam Electron Commun Comput Sci E92– A(4):1136–1141 13. Krasovskii NN (1956) On the application of the second method of Lyapunov for equations with time delays. Prikl Mat Mek 20:315–327

References

63

14. Lee WI, Lee SY, Park P (2016) A combined first- and second-order reciprocal convexity approach for stability analysis of systems with interval time-varying delays. J Franklin Inst 353(9):2104–2116 15. Lee SY, Lee WI, Park P (2016) Polynomials-based integral inequality for stability analysis of linear systems with time-varying delays. J Franklin Inst 354(4):2053–2067 16. Lee SY, Lee WI, Park P (2017) Improved stability criteria for linear systems with interval time-varying delays: generalized zero equalities approach. Appl Math Comput 292:336–348 17. Lee WI, Park P (2014) Second-order reciprocally convex approach to stability of systems with interval time-varying delays. Appl Math Comput 229:245–253 18. Lur’e AI (1957) Some nonlinear problems in the theory of automatic control. H. M. Stationery Office, London (In Russian, 1951) 19. Michiels W, Niculescu SI (2007) Stability and stabilization of time-delay systems. SIAM, Philadelphia 20. Moon YS, Park P, Kwon WH, Lee YS (2001) Delay-dependent robust stabilization of uncertain state-delayed systems. Int J Control 74(14):1447–1455 21. Niculescu SI, de Souza CE, Dion JM, Dugard L (1994) Robust stability and stabilization of uncertain linear systems with state delay: single delay case (i). In: Proceedings of the IFAC symposium on robust control design. Rio de Janeiro, pp 469–474 22. Niculescu SI, Verriest EI, Dugard L, Dion JM (1998) Stability and robust stability of time-delay systems: a guided tour. Stability and control of time-delay systems. Lecture notes in control and information sciences, vol 228. Springer, New York 23. Oˇguztöreli MN (1966) Time-lag control systems. Academic Press, New York 24. Park P (1999) A delay-dependent stability criterion for systems with uncertain time-invariant delays. IEEE Trans Autom Control 44(4):876–877 25. Park P, Ko JW (2007) Stability and robust stability for systems with a time-varying delay. Automatica 43(10):1855–1858 26. Park P, Ko JW, Jeong C (2011) Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1):235–238 27. Razumikhin BS (1956) On the stability of systems with a delay. Prikl Mat Meh 20:500–512 28. Razumikhin BS (1960) Application of Lyapunov’s method to problems in the stability of systems with a delay. Automat i Telemekh 21:740–749 29. Yakubovich VA (1962) The solution of certain matrix inequalities in automatic control theory. Soviet Math Dokl 3:620–623 (In Russian, 1961) 30. Yakubovich VA (1964) Solution of certain matrix inequalities encountered in nonlinear control theory. Soviet Math Dokl 5:652–656

Chapter 3

State Feedback Stabilizing Controls

3.1 Introduction This chapter mainly considers single input and state delayed systems with constant delays. But time-varying delays are also dealt with in addition when necessary. This chapter develops stabilizing controls for input delayed control systems and stabilizing and robust stabilizing controls for state delayed control systems. In particular, for input delayed control systems, the state predictor method and the reduction transformation method are presented to obtain stabilizing controls with ease, where input delayed systems are transformed to finite dimensional ordinary systems. Thus wellknown methods for ordinary systems can be employed. Stabilizing and robust stabilizing controls for state delayed control systems are obtained, based on the stability criteria developed in Chap. 2, such as Lyapunov-Razumikhin, Lyapunov-Krasovskii, and discretized state approaches. This chapter is outlined as follows. In Sect. 3.2, stabilizing controls for input delayed systems are concerned through the state predictor and the reduction transformation, where input delayed systems are transformed to ordinary systems. In Sect. 3.3, stabilizing controls for state delayed systems are explored with different approaches such as Lyapunov-Razumikhin, Lyapunov-Krasovskii, and discretized state approaches. Delay-independent stabilizing controls are first introduced but delay-dependent stabilizing controls are also introduced when possible. Some results based on the Krasovskii theorem are extended to systems with time-varying delays with and without bounded derivatives. In Sect. 3.4, robust stabilizing controls for state delayed systems with model uncertainties are explored, based on the same approaches suggested in Sect. 3.3. Some results based on the Krasovskii theorem are extended to systems with both model uncertainties and time-varying delays with and without bounded derivatives. References for contents of this chapter are listed at the end of this chapter and cited in each subsection for more information and further reading.

© Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6_3

65

66

3 State Feedback Stabilizing Controls

3.2 Stabilizing Controls for Input Delayed Systems For a given system with control, the control is said to stabilize and asymptotically stabilize the system if the resulting closed-loop system is stable and asymptotically stable, respectively.

3.2.1 State Predictor Approach 1. Pure Input Delay : A Special Case Consider a linear system with a pure input delay given by x(t) ˙ = Ax(t) + B1 u(t − h)

(3.1)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0), where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input, and h denotes the constant delay. It is well known that the solution of (3.1) can be represented as x(t) = e A(t−t0 ) x(t0 ) +



t

e A(t−τ ) B1 u(τ − h)dτ .

(3.2)

t0

This can be seen from the fact that by taking derivative of (3.2) we can obtain    t x(t) ˙ = A e A(t−t0 ) x(t0 ) + e A(t−τ ) B1 u(τ − h)dτ + B1 u(t − h) = Ax(t) + B1 u(t − h). t0

From (3.2), the state x(t + h) at time t + h is represented by x(t + h) = e

Ah

   t A(t−t0 ) A(t−τ ) e x(t0 ) + e B1 u(τ − h)dτ t0



t+h

e A(t+h−τ ) B1 u(τ − h)dτ  t = e Ah x(t) + e A(t−τ ) B1 u(τ )dτ . +

t

(3.3)

t−h

The state x(t + h) can be called as a state predictor at time t. It is noted that the future input u ∈ [t, t + h] is not necessary to obtain the predicted state x(t + h) for the pure input delayed system (3.1). Let z(t)  x(t + h), which is  z(t) = x(t + h) = e x(t) +

t

Ah

t−h

Then, differentiating z(t) with respect to t yields

e A(t−τ ) B1 u(τ )dτ .

(3.4)

3.2 Stabilizing Controls for Input Delayed Systems

67

 t z˙ (t) = e Ah x(t) ˙ + B1 u(t) − e Ah B1 u(t − h) + Ae A(t−τ ) B1 u(τ )dτ t−h  t Ah A(t−τ ) e B1 u(τ )dτ + B1 u(t). = e Ax(t) + A t−h

Recalling e Ah A = Ae Ah , we have z˙ (t) = Az(t) + B1 u(t),

(3.5)

which is an ordinary system. If (A, B1 ) is stabilizable, we can compute K such that (A + B1 K ) is Hurwitz. The stabilizing control is given as u(t) = K z(t)   = K e Ah x(t) +



t

e A(t−τ ) B1 u(τ )dτ .

(3.6) (3.7)

t−h

Note that z(t) is a function of the state x(t) and the past control actions u(τ ), τ ∈ [t − h, t]. If there exists a stabilizing control u(t) = K z(t) for the system (3.5), then z(t) and u(t) go to zero and, as a result, so does x(t) from (3.4). This indicates that the original system (3.1) is stable. Using the state predictor control method, we can obtain a stabilizing control for the input delayed system (3.1) by simply designing a stabilizing control for the ordinary system (3.5) with well-developed control methods for finite-dimensional systems. We can summarize the above result in the following theorem. Theorem 3.1 If the control (3.6) asymptotically stabilizes the ordinary system (3.5), then the control (3.7) also asymptotically stabilizes the input delayed system (3.1). 2. Single Input Delay If the system has not only a delayed input but also a non-delayed input, the situation is then different slightly. Consider a linear system with a single input delay given by x(t) ˙ = Ax(t) + Bu(t) + B1 u(t − h)

(3.8)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0). The state x(t + h) at time t + h can be obtained similarly as above by x(t + h) = e A(t+h−t0 ) x(t0 ) +  = e x(t) + e Ah

Ah



t+h

t0 t+h

e t

e A(t+h−τ ) {Bu(τ ) + B1 u(τ − h)}dτ

A(t−τ )

 Bu(τ )dτ +

t t−h

e A(t−τ ) B1 u(τ )dτ .

68

3 State Feedback Stabilizing Controls

However x(t + h) involves the future input u(s), s ∈ [t, t + h] which is not available at the present time. We define a state predictor z(t) as x(t + h) with u[t, t + h) ≡ 0,  z(t)  x(t + h; u t ≡ 0) = e x(t) +

t

Ah

e A(t−τ ) B1 u(τ )dτ ,

(3.9)

t−h

which results in  x(t + h) = z(t) + e Ah

t+h

e A(t−τ ) Bu(τ )dτ .

(3.10)

t

It is noted that the state predictor (3.9) for the single input delayed system (3.8) is of the same form as the predicted state (3.4) for the pure input delayed system (3.1). It is also noted that the state predictor z(t) is often denoted as x(t ˆ + h) in order to indicate a prediction of x(t + h) without future inputs. Differentiating z(t) with respect to t yields    t Ae A(t−τ ) B1 u(τ )dτ z˙ (t) = e Ah {Ax(t) + Bu(t)} + B1 u(t) + t−h   (3.11) = Az(t) + e Ah B + B1 u(t), which is a delay-free system.   If (A, e Ah B + B1 ) is stabilizable, we can compute K such that A + e Ah B + B1 K is Hurwitz. The state predictor based control is constructed as u(t) = K z(t)   = K e Ah x(t) +

t



e A(t−τ ) B1 u(τ )dτ .

(3.12) (3.13)

t−h

  If A + e Ah B + B1 K is Hurwitz, then z(t) and u(t) go to zero and, as a result, so does x(t) from (3.9). We can summarize the above result in the following theorem. Theorem 3.2 If the control (3.12) asymptotically stabilizes the ordinary system (3.11), then the control (3.13) also asymptotically stabilizes the input delayed system (3.8). It is noted that Theorem 3.1 can be obtained from Theorem 3.2 by taking B = 0. However, the pure input delayed system (3.1) was first introduced to give a motivation for the state predictor control for the single input delayed system (3.8). The non-optimal stabilization of time-delay systems was initiated from the Smith predictor for pure input delays in [14]. A state predictor defined at the current time with zero future inputs is introduced in this book not only for non-optimal stabilization but also for optimal stabilization of later chapters. The state predictor does not appear explicitly in the existing literatures, although similar forms appear in the

3.2 Stabilizing Controls for Input Delayed Systems

69

controllability analysis. This state predictor is shown to satisfy ordinary systems in this book, like the reduction transformation in the next subsection. Thus stabilization approaches in Theorem 3.1 for pure input delayed systems and in Theorem 3.2 for single input delayed systems are similar to reduction transformation approaches.

3.2.2 Reduction Transformation Approach Consider again the single input delayed system (3.8). We introduce a new variable ξ(t) ∈ Rn such that  ξ(t) = x(t) +

t

e A(t−τ −h) B1 u(τ )dτ .

(3.14)

t−h

By taking the derivative of (3.14), we can have ˙ = A x(t) ξ(t) ˙ + e−Ah B1 u(t) − B1 u(t − h) + A



t

e A(t−τ −h) B1 u(τ )dτ ,

t−h

from which we have ˙ = Aξ(t) + (B + e−Ah B1 )u(t), ξ(t)

(3.15)

which is a delay-free system. The transformation (3.14) will be called the reduction transformation. It is noted that the reduction transformation (3.14) is introduced irrespective of the state predictor but we can show ξ(t) = e−Ah z(t). Consider a static control structure: u(t) = K ξ(t)   = K x(t) +

t



e A(t−τ −h) B1 u(τ )dτ .

(3.16) (3.17)

t−h

The control gain matrix K can be chosen using any stabilization method for delayfree systems. If there exists a stabilizing control (3.17) for the delay-free system (3.15), then ξ(t) and u(t) go to zero and, as a result, so does x(t) from (3.14). This indicates that the original system (3.8) is stable. We can summarize the above result in the following corollary. Corollary 3.1 If the control (3.16) asymptotically stabilizes the ordinary system (3.15), then the control (3.17) also asymptotically stabilizes the input delayed system (3.8). The input delayed system (3.8) with the control (3.17) is depicted in Fig. 3.1. To obtain the solution of the closed-loop system at time t, distributed input delay

70

3 State Feedback Stabilizing Controls

 (t )

K



0

h

u (t )

x(t )  Ax(t )  Bu (t )  B1u (t  h)

x(t )

e  A(  h ) B1u (t   )d

Fig. 3.1 Control system using the reduction method

information u ∈ [t − h, t] is necessary, which indicates that the closed-loop system is an infinite dimensional system. The reduction transformation for the stabilizing control for single input delayed systems in Corollary 3.1 can be found independently as in [8, 9] and its extension for general input delayed system can be found in [1]. The extended results using the reduction transformation for single input delayed systems with model uncertainties can be found in [11].

3.3 Stabilizing Controls for State Delayed Systems Consider a linear system with a state delay given by x(t) ˙ = Ax(t) + A1 x(t − h) + Bu(t)

(3.18)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. Whereas the systems with input delays (3.1) and (3.8) can be simply stabilized through the transformation into delay-free systems, the system with a state delay (3.18) cannot. Therefore, this section uses several methods introduced in the previous chapter to find stabilizing controls.

3.3.1 Lyapunov–Razumikhin Approach 1. Delay-Independent Stabilizing Controls Now, the results of Theorem 2.3 can be used to verify the stability of the closedloop system x(t) ˙ = (A + B K )x(t) + (A1 + B K 1 )x(t − h), which is obtained by applying the delayed state feedback control

(3.19)

3.3 Stabilizing Controls for State Delayed Systems

u(t) = K x(t) + K 1 x(t − h)

71

(3.20)

to the system (3.18). A stabilizing control (3.20) for the system (3.18) can be constructed in the following theorem. Theorem 3.3 If there exist a scalar α > 0, matrices P¯ > 0, K¯ , and K¯ 1 such that 

 A P¯ + B K¯ + P¯ A T + K¯ T B T + α P¯ A1 P¯ + B K¯ 1 < 0, −α P¯ P¯ A1T + K¯ 1T B T

(3.21)

then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.18). Proof If we replace the matrices (A, A1 ) in Theorem 2.3 by (A + B K , A1 + B K 1 ), we can get 

 P A + P B K + A T P + K T B T P + αP P A1 + P B K 1 < 0. A1T P + K 1T B T P −αP

By the congruence transformation via a nonsingular matrix P −1 ⊕ P −1 , we can get the condition in the theorem with P¯ = P −1 , K¯ = K P −1 and K¯ 1 = K 1 P −1 . For the congruence transformation and ⊕, refer to the definition in Appendix A.1 and notations in Sect. 1.5, respectively. This completes the proof.  The relation (3.21) is an LMI which can be solved easily. The condition (3.21) is independent of the delay h and thus the Theorem 3.3 holds for any delay value. If we do not have any information on the delay constant h, we cannot use x(t − h) in the control. Or we may want to use x(t) only for the feedback control due to the simple structure, i.e. u(t) = K x(t).

(3.22)

The corresponding result for the memoryless state feedback control (3.22) can be directly obtained from Theorem 3.3 by taking K¯ 1 = 0 since K¯ 1 can be defined for K 1 = 0 from K¯ 1 = K 1 P −1 . In this book, we mainly deal with stabilizing conditions for the delayed state feedback controls such as (3.20) since the memoryless state feedback controls such as (3.22) are easily obtained by just removing some variables from the results obtained for delayed state feedback controls (3.20). However, it is not generally true in output feedback controls, which will be shown in the next chapter. When we have no choice but to take the memoryless state feedback controls due to some difficulty or when different procedures are used to obtain stabilizing conditions for both controls, we provide also stabilizing conditions for the memoryless state feedback controls.

72

3 State Feedback Stabilizing Controls

2. Delay-Dependent Stabilizing Controls When the control is chosen as (3.20), the condition based on Theorem 2.4 is hardly solved. But, the control (3.22) can be used to obtain a solvable condition as shown in the following theorem. Theorem 3.4 If there exist scalars α0 > 0, α1 > 0, matrices P¯ > 0, K¯ , R¯ 0 , and R¯ 1 such that ¯ + A1 )T + K¯ T B T + h( R¯ 0 + R¯ 1 ) < 0, (A + A1 ) P¯ + B K¯ + P(A

(3.23)



   α0 P¯ − R¯ 0 −A1 A P¯ − A1 B K¯ α1 P¯ − R¯ 1 −A1 A1 P¯ < 0, < 0, − P¯ A T A1T − K¯ T B T A1T − P¯ A1T A1T −α1 P¯ −α0 P¯ (3.24) then the control (3.22) with K = K¯ P¯ −1 asymptotically stabilizes the system (3.18). Proof If we replace the matrix A in Theorem 2.4 by A + B K , we can get P(A + A1 ) + P B K + (A + A1 )T P + K T B T P + h(R0 + R1 ) < 0, 

−P A1 A − P A1 B K α0 P − R0 −α0 P −A T A1T P − K T B T A1T P



 < 0,

(3.25)

 α1 P − R1 −P A1 A1 < 0. T T −A1 A1 P −α1 P

(3.26) By the congruence transformation via nonsingular matrices P −1 and P −1 ⊕ P −1 for (3.25) and (3.26), respectively, we can get the conditions in the theorem with P¯ = P −1 , K¯ = K P −1 , R¯ 0 = P −1 R0 P −1 and R¯ 1 = P −1 R1 P −1 . This completes the proof.  The condition (3.23) is dependent on the delay h. Generally speaking, the delaydependent condition is less conservative compared to the delay-independent condition. The delay-independent and delay-dependent stabilizing controls for state delayed systems based on the Lyapunov-Razumikhin approach in Theorems 3.3 and 3.4 in this subsection can be found in [13].

3.3.2 Lyapunov–Krasovskii Approach 1. Delay-Independent Stabilizing Controls A control (3.20) to stabilize the system (3.18) can be found based on Theorem 2.5 as shown in the following theorem. Theorem 3.5 If there exist matrices P¯ > 0, Q¯ > 0, K¯ , and K¯ 1 such that

3.3 Stabilizing Controls for State Delayed Systems

73



⎤ A P¯ + B K¯ + P¯ A T + K¯ T B T A1 Q¯ + B K¯ 1 P¯ ⎣ P¯ A1T + K¯ 1T B T − Q¯ 0 ⎦ < 0, P¯ 0 − Q¯

(3.27)

then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 Q¯ −1 asymptotically stabilizes the system (3.18). Proof By using the Schur complement to the condition (2.26), we can get ⎡

⎤ P A + A T P P A1 I ⎣ −Q 0 ⎦ < 0. A1T P I 0 −Q −1 By the congruence transformation via the nonsingular matrix P −1 ⊕ Q −1 ⊕ I , we can get ⎡

⎤ A P¯ + P¯ A T A1 Q¯ P¯ ⎣ Q¯ A1T − Q¯ 0 ⎦ < 0, ¯ P 0 − Q¯ where P¯ = P −1 and Q¯ = Q −1 . If we replace the matrices (A, A1 ) by (A + B K , A1 + B K 1 ), we can get the condition in the theorem. This completes the proof.  For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding result can easily be derived by taking the variable K¯ 1 = 0 from the inequalities (3.27) in Theorem 3.5. 2. Delay-Dependent Stabilizing Controls A control (3.20) to stabilize the system (3.18) can be obtained by simply choosing S as Y22 in Theorem 2.6 as shown in the following theorem. ¯ K¯ , and K¯1 such that Theorem 3.6 If there exist matrices P¯ > 0, Q¯ > 0, Y¯i j , Σ,   (1, 1) h Σ¯ −1 ¯ < 0, P h Σ¯ T −h P¯ Y¯22   Y¯11 Y¯12 ≥ 0, T ¯ Y¯12 Y22

(3.28) (3.29)

where (1, 1) denotes ¯ 2T ) + (A Pe ¯ 2T )T e3T ¯ 1T + A1 Pe ¯ 1T + A1 Pe ¯ 1 − e3 )T + e3 (A Pe (1, 1) = (e1 − e3 )Σ¯ T + Σ(e T ¯ 1T − e2 Qe ¯ 2T + h Y¯11 + Y¯12 (e1 − e2 )T + (e1 − e2 )Y¯12 +e1 Qe + e3 B( K¯ e1T + K¯ 1 e2T ) +( K¯ e1T + K¯ 1 e2T )T B T e3T ,

(3.30)

74

3 State Feedback Stabilizing Controls

and ei ∈ Rn×3n are defined in (2.33), then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.18).

T Proof Let us define χ(t) as χ(t)  x T (t) x T (t − h) x˙ T (t) . For the control design procedure, we need to convert the form of (·)A + (·)A1 in (2.31) to the form of A(·) + A1 (·) because we will later replace A and A1 with A + B K and A1 + B K 1 ,

T respectively. Let us divide Σ into three parts: Σ  Σ1T Σ2T Σ3T . Then, judging from the (3, 3)-entry in (2.31), we can claim that Σ3 + Σ3T > 0, which guarantees the invertibility of the matrix Σ3 . Also, in the following development e1 Pe3T + Σ(Ae1T + A1 e2T − e3T ) ⎡ ⎤ ⎡ ⎤T ⎡ ⎤ ⎡ T ⎤T ⎡ P 0 Σ1 P A = ⎣ 0 ⎦ ⎣0⎦ + ⎣Σ2 ⎦ ⎣ A1T ⎦ = ⎣ 0 0 I Σ3 −I 0 

⎤⎡ ⎤ 0 Σ1 0 0 I P Σ2 ⎦ ⎣ 0 0 0 ⎦ , A A1 −I 0 Σ3  

the inverse of the under-braced block matrix can be defined as ⎡

⎤−1 ⎡ −1 ⎤ ⎡ P 0 −P −1 Σ1 Σ3−1 P¯ P 0 Σ1 −1 ⎦ −1 −1 ¯ ⎣ ⎣ ⎦ ⎣ P = 0 P Σ2 = = 0 0 P −P Σ2 Σ3 0 0 Σ3 0 0 0 Σ3−1

0 P¯ 0

⎤ Σ¯ 1 Σ¯ 2 ⎦ Σ¯ 3

T with Σ¯  Σ¯ 1T Σ¯ 2T Σ¯ 3T . For further development, we define some matrices as follows.     T  Y¯11 Y¯12 P¯ 0 Y11 Y12 P¯ 0 ¯  , Q¯  P¯ Q P. T T ¯ Y22 0 P¯ Y¯12 Y22 0 P¯ Y12 Multiplying on the left side of (2.31) by P¯ and on the right side by its transpose, we can obtain a stability criterion that is derived below in a step-by-step manner. ⎡

⎤ ⎤⎡ P¯ 0 0 0 0 I ¯ 1 Pe3T + Σ(Ae1T + A1 e2T − e3T )}P¯ T = ⎣ 0 0 0 ⎦ ⎣ 0 P¯ 0 ⎦ P{e A A1 −I Σ¯ 1T Σ¯ 2T Σ¯ 3T ¯ 1T + A1 Pe ¯ 2T ), = (e1 − e3 )Σ¯ T + e3 (A Pe T ¯T T ¯ ¯ ¯ ¯ P(hY 11 + Y12 (e1 − e2 ) )P = h Y11 + PY12 P(e1 − e2 ) = h Y¯11 + Y¯12 (e1 − e2 )T , T T T ¯ 1 Qe1 − e2 Qe2 }P¯ = e1 P¯ Q Pe ¯ 1T − e2 P¯ Q Pe ¯ 2T P{e ¯ 1T − e2 Qe ¯ 2T , = e1 Qe T ¯T ¯ ¯ 22 Σ¯ T . P(he 3 Y22 e3 )P = h ΣY Applying the Schur complement to the results derived, we can obtain the dual form of the stability criterion for the system (3.18). Replacing matrices (A, A1 ) with

3.3 Stabilizing Controls for State Delayed Systems

75

(A + B K , A1 + B K 1 ), we can extend it to the synthesis criterion for the form u(t) =  K x(t) + K 1 x(t − h). This completes the proof. Whereas the condition (3.29) is linear in all the variables, the condition (3.28) is −1 ¯ not due to the term P¯ Y¯22 P. A simple idea to relax this non-linear term into a linear term is summarized in the following lemma. Lemma 3.1 For matrices Q > 0, M, and N with compatible dimensions, it holds that N Q −1 N T ≥ N M + M T N T − M T Q M. (3.31) Proof See the following matrix inequality (N − M T Q)Q −1 (N T − Q M) ≥ 0, which immediately concludes (3.31). This completes the proof.



For any non-positive definite matrix Λ, the left part of (3.28) is bounded as follows. 

   Π¯ h Π¯ Π¯ h Σ¯ −1 ¯ ≤ h Σ¯ T −h P¯ N − h N T P¯ + h N T Y¯22 N h Π¯ T −h P¯ Y¯22 P   Π¯ h Σ¯ . ≤ h Σ¯ T −h P¯ N − h N T P¯ + h N T Y¯22 N − Λ

Therefore, if the following inequality holds: 

 Π¯ h Σ¯ < 0, h Σ¯ T −h P¯ N − h N T P¯ + h N T Y¯22 N − Λ

then if the following inequality holds: ⎡

⎤ Π¯ h Σ¯ 0 ⎣h Σ¯ T −h P¯ N − h N T P¯ − Λ h N T Y¯22 ⎦ < 0, 0 h Y¯22 N −h Y¯22

(3.32)

then the inequality (3.28) holds. However, the inequality (3.32) is still a BMI, which ¯ This kind can be solved via alternative LMI conditions by fixing N or fixing Y¯22 and P. of technique to solve a BMI is called the two-phase-based algorithm. Here, the following algorithm tries to find the maximum allowable delay bound h in Theorem 3.6. Proposition 3.1 (Two-phase-based Algorithm) 1. Set the appropriate scaling factors (σu > 1, σd < 1) and the initial guess of h. 2. Set the initial variables as h max = 0 and an appropriate value of N .

76

3 State Feedback Stabilizing Controls

3. Solve the following convex optimization problem: minimize α subject to ⎡ ⎤  Π¯ h Σ¯ 0 ¯11 Y¯12 Y P¯ > 0, Q¯ > 0, αI ≥ Λ, ¯ T ¯ ≥ 0, ⎣h Σ¯ T −h P¯ N − h N T P¯ − Λ h N T Y¯22 ⎦ < 0, Y12 Y22 0 h Y¯22 N −h Y¯22 

where Π¯ is defined in (3.30). 4. If α ≤ 0, set h max = max(h max , h) and h = Σu × h. Then, go back to solve the ¯ Y¯22 ) convex optimization problem in step 3. In this step, N is fixed again and ( P, are free variables. Else if α > 0, set h = Σd × h and go back to solve the convex ¯ Y¯22 ) optimization problem in step 3 again. In this step, the resultant matrices ( P, in the previous step are fixed and N is a free variable. 5. Stop the loop if no progress is expected even for enough iterations.  For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding result can easily be derived by taking the variable K¯ 1 = 0 from the inequality (3.28) in Theorem 3.5. The delay-independent stabilizing control for state delayed systems based on the Lyapunov-Krasovskii approach in Theorem 3.5 can be found as in [7, 13]. The delay-dependent stabilizing control for state delayed systems based on the LyapunovKrasovskii approach in Theorem 3.6 is the special case of the results in [6]. Other notable results based on a similar approach can be found in [12]. The relaxation technique in Lemma 3.1 can be found in [2].

3.3.3 Discretized State Approach A control (3.20) to stabilize the system (3.18) can be constructed by simply choosing S as Y22 in Theorem 2.7 as shown in the following theorem. Theorem 3.7 If there exist matrices P¯ > 0, Q¯ 0 > 0, Q¯ 1 > 0, Y¯i j , N¯ 1 , N¯ 2 , K¯ , and K¯ 1 such that ⎤ (1, 1) (1, 2) τ T¯2 E 0 T¯2 E 0 ⎢ (2, 1) (2, 2) τ T¯3 E 0 T¯3 E 0 ⎥ ⎥ ⎢ ⎣ τ E T T¯ T τ E T T¯ T −τ Y¯22 0 ⎦ < 0, 0 2 0 3 E 0T T¯2T E 0T T¯3T 0 − Q¯ 1   Y¯11  Y¯12      ≥ 0, −1 T Y¯12 IN IN P¯ Y¯22 P¯ ⎡

where (i, j) denote

(3.33)

(3.34)

3.3 Stabilizing Controls for State Delayed Systems

77

T (1, 1) = E 0 Q¯ 0 E 0T − E N Q¯ 0 E NT + τ Y¯11 + Y¯12 (E 0 − E N )T + (E 0 − E N )Y¯12

 T + N¯ 1 0 + N¯ 1 0 ,   T T  − N¯ 1T + (A P¯ + B K¯ )e0T + (A1 P¯ + B K¯ 1 )e TN N¯ 2 T + (2, 1) = (1, 2) = , 0 0     T  0 0 − N¯ 2T − N¯ 2T (2, 2) = , + − 0 0 0 Q¯ 1

and E 0 , E N , e0 , and e N are defined in (2.39)–(2.40), then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.18).

 Proof Dividing N1T N2T in appropriate parts as

 T  T T T N1T N2T  N10 · · · N1N N20 · · · N2N

and judging from the (N + 2, N + 2)-entry in (2.35), we can claim the following T > 0, which guarantees the invertibility of the matrix N20 . Also, condition: N20 + N20 the following decomposition ⎤ ⎡ T  0   e0 A T + e N A1T +⎣ 0 ⎦ 0 0 −e0 Q1 ⎡ ⎤          N1  0  Ae0T + A1 e TN −e0T P e0T I N +1 ⎣ ⎦ + 0 = 0 0 0 0 0 N2 Q1  T ⎤ ⎡ ⎤ ⎡  e0 0 I N +1 P N1  0  ⎢ 0 ⎥ ⎥ =⎣ 0 ⎦⎢ ⎣ Ae0T + A1 e TN −e0T ⎦ 0 N2 Q1 0 0    

0 e0 Pe0T 0 0





N1 + N2



urges us to utilize the under-braced invertible matrix whose inverse can be found and is defined as 

T¯1 T¯2 0 T¯3



⎡ =⎣

I N +1



0

⎤−1 P N1  0  0 ⎦ , N2 Q1

where T¯i denote T¯1 = I N +1



P −1 = I N +1



¯ P,

(3.35)

78

3 State Feedback Stabilizing Controls



⎤ −1 0 ⎡N20 ⎤    ⎢ ⎥ N21 0 ⎢ ⎥ ¯ = , T¯3 = ⎢ N ⎥ ⎢ ⎥ .. 2 −1 −1 Q¯ 1 ⎣ −Q −1 1 ⎣ . ⎦ N20 Q 1 ⎦ N2N 

  −1 −1 ¯ T2 = −(I N +1 P )N1 N20 0 = N¯ 1 0 . For further development, we define some matrices as follows.  T       P¯ 0 P¯ 0 Y11 Y12 I N +1 I N +1 Y¯11 Y¯12 −1 ¯   . Y22  Y22 , ¯ T ¯  T Q0 0 IN 0 IN P¯ Y12 P¯ Y12 Q 0 Multiplying on the left side of (2.35) by (3.35) and on the right side by its transpose, we can obtain an equivalent stability criterion. Some principal items are listed here.       T   T N1 e0 A T + e N A1T T¯1 T¯2 T¯1 T¯2 0 e0 Pe0T + 0 0 N2 −e0 0 T¯3 0 T¯3 ⎡

⎤ N¯ 1T N¯ 2T ⎢ 0 0 ⎥ ⎥ =⎢ ⎣ A Pe ¯ 0T + A1 Pe ¯ TN − N¯ 1T − N¯ 2T ⎦ , 0 0 

  T  T¯1 T¯2 Π1 0 T¯1 0 0 Π2 T¯2T T¯3T 0 T¯3    

T  

T¯ T¯1 ¯ Π1 T1 0 + ¯2 {E 0 Q 1 E 0T − E N Q 1 E NT + τ E 0 Y22 E 0T } T¯2T T¯3T , = 0 T3       I N +1 P¯ (τ Y11 ) I N +1 P¯ = τ Y¯11 ,       T I N +1 P¯ {Y12 (E 0 − E N ) } I N +1 P¯ = Y¯12 (E 0 − E N )T ,       I N +1 P¯ (E 0 Q 0 E 0T − E N Q 0 E NT ) I N +1 P¯ = E 0 Q¯ 0 E 0T − E N Q¯ 0 E NT ,  T  T  T   ¯ ¯ ¯

 0 T T2 T T2 T T2 ¯ ¯ ¯ = N1 0 = T2 E N Q 1 E N ¯ = 0, T2 E N Q 1 E N ¯ Q1 E N ¯    I T3 T3 T3 =0         

 0 0 0 0 N¯ T Q 1 0 I 2¯  = . T¯3 E N Q 1 E NT T¯3T = N¯ 2 ¯ I 0 Q¯ 1 Q1 0 Q1 Again, we replace (A, A1 ) with (A + B K , A1 + B K 1 ), respectively. This completes the proof. 

3.3 Stabilizing Controls for State Delayed Systems

79

   −1    The term I N P¯ Y¯22 I N P¯ in (3.34) is not linear in P¯ and Y¯22 , which can be relaxed to the resultant BMI condition by applying Lemma 3.1 and getting an algorithm similar to the algorithm in Proposition 3.1. For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding result can easily be derived by taking the variable K¯ 1 = 0 from the inequality (3.33) in Theorem 3.7. The stabilizing control for state delayed systems based on the discretized state approach in Theorem 3.7 is newly obtained here and similar restrictive controls can be found in [5]. Other controls based on several approaches can be found in [10], which are not covered in this book.

3.3.4 *Extension to Systems with Time-Varying Delays Let us consider the state delayed system (3.18), where the delay h is not a constant but time-varying. In this case, we consider two cases where h(t) is a general delay and ˙ ≤ μ. where h(t) is differentiable with bounded derivative such as h(t) ¯ For a general time-varying delay case, the following result can be obtained by applying the similar Lyapunov-Krasovskii approach used in Theorem 2.8 with the control such as u(t) = K x(t) + K 1 x(t − h(t)).

(3.36)

Theorem 3.8 If there exist matrices P¯ > 0, Q¯ 0 > 0, S¯0 > 0, Y¯i j , Z¯ i j , K¯ , K¯ 1 and Σ¯ such that     Π + h¯ Z¯ 11 h¯ Σ¯ Π + h¯ Y¯11 h¯ Σ¯ < 0, < 0, (3.37) h¯ Σ¯ T −h¯ S¯0 h¯ Σ¯ T −h¯ S¯0  

   Y¯11 Y¯12 Z¯ 11 Z¯ 12 ≥ 0, ¯ T ¯ ≥ 0, T ¯ Y¯12 Y22 Z 12 Z 22

(3.38)

   Y¯22 Y¯22 Z¯ 22 Z¯ 22 ≥ 0, ≥ 0, Y¯22 P¯ S¯0−1 P¯ Z¯ 22 P¯ S¯0−1 P¯

(3.39)

where Π denotes ¯ T + e4 A1 Pe ¯ T − e4 Σ¯ T + Σe ¯ T + e1 P¯ A T e T + e2 P¯ A T e T − Σe ¯ T Π = e1 Σ¯ T + e4 A Pe 1 2 1 4 1 4 4 + Y¯12 (e1 − e2 )T + (e1 − e2 )Y¯ T + Z¯ 12 (e2 − e3 )T + (e2 − e3 ) Z¯ T 12

12

+ e1 Q¯ 0 e1T − e3 Q¯ 0 e3T + e4 B K¯ e1T + e1 K¯ T B T e4T + e4 B K¯ 1 e2T + e2 K¯ 1T B T e4T ,

and ei ∈ R4n×n are defined in (2.47)–(2.48), then the control (3.36) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.18) with a general timevarying delay h(t) ∈ [−h, 0].

80

3 State Feedback Stabilizing Controls

 ¯ x˙ T (t) T . In Proof Let us define χ(t) as χ(t)  x T (t) x T (t − h(t)) x T (t − h) T

Theorem 2.8, dividing Σ into four parts: Σ  Σ1T Σ2T Σ3T Σ4T , and judging from the (4, 4)-entries in (2.44)–(2.45), we can claim the following condition: Σ4 + Σ4T > 0, which guarantees the invertibility of the matrix Σ4 . Also, from the following development: e1 Pe4T + Σ(Ae1T + A1 e2T − e4T ) ⎡ ⎤ ⎡ ⎤T ⎡ ⎤ ⎡ T ⎤T ⎡ P A Σ1 P 0 ⎢ 0 ⎥ ⎢0 ⎥ ⎢Σ2 ⎥ ⎢ A T ⎥ ⎢0 ⎢ ⎥⎢ 1 ⎥ ⎢ ⎥⎢ ⎥ =⎢ ⎣ 0 ⎦ ⎣0⎦ + ⎣Σ3 ⎦ ⎣ 0 ⎦ = ⎣ 0 Σ4 0 −I 0 I 

0 0 P 0 0 P 0 0 

⎤⎡ 0 Σ1 ⎢0 Σ2 ⎥ ⎥⎢ Σ3 ⎦ ⎣ 0 Σ4 A 

0 0 0 A1

0 0 0 0

⎤ I 0 ⎥ ⎥, 0 ⎦ −I

we utilize the under-braced block matrix whose inverse can be found and is defined as ⎡

P ⎢0 ⎢ P¯ = ⎣ 0 0

0 P 0 0

0 0 P 0

⎤−1 ⎡ −1 P Σ1 ⎥ ⎢ Σ2 ⎥ 0 =⎢ ⎣ 0 Σ3 ⎦ Σ4 0

0 P −1 0 0

⎤ ⎡ 0 −P −1 Σ1 Σ4−1 P¯ −1 ⎥ −1 ⎢ 0 −P Σ2 Σ4 ⎥ ⎢ 0 = P −1 −P −1 Σ3 Σ4−1 ⎦ ⎣ 0 0 0 Σ4−1

0 P¯ 0 0

0 0 P¯ 0

⎤ Σ¯ 1 Σ¯ 2 ⎥ ⎥ Σ¯ 3 ⎦ Σ¯ 4

T

with Σ¯  Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T . Let us define some matrices as follows. 

    T Y¯11 Y¯12 P¯ 0 Y11 Y12 P¯ 0  , T T ¯ Y22 0 P¯ Y¯12 Y22 0 P¯ Y12     T  Z¯ 11 Z¯ 12 P¯ 0 Z 11 Z 12 P¯ 0  , T T ¯ Z 22 0 P¯ Z¯ 12 Z 22 0 P¯ Z 12 ¯ S¯0  S0−1 . Q¯ 0  P¯ Q 0 P, Multiplying on the left sides of (2.44)–(2.45) by P¯ and on the right sides by its transpose, we can obtain an equivalent stability criterion that is derived below in a step-by-step manner. ¯ 1 Pe T + Σ(Ae T + A1 e T − e T )}P¯ T P{e 4 1 2 4 ⎤⎡ ⎤⎡ ⎤ ⎡ P¯ 0 0 0 0 0 0 I P 0 0 Σ1 ⎢ 0 P 0 Σ2 ⎥ ⎢ 0 0 0 0 ⎥ ⎢ 0 P¯ 0 0 ⎥ ⎥⎢ ⎥⎢ ⎥ = P¯ ⎢ ⎣ 0 0 P Σ3 ⎦ ⎣ 0 0 0 0 ⎦ ⎣ 0 0 P¯ 0 ⎦ 0 0 0 Σ4 A A1 0 −I Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T ⎡ ⎤ Σ¯ 2T Σ¯ 3T Σ¯ 4T Σ¯ 1T ⎢ 0 0 0 0 ⎥ ⎥ =⎢ ⎣ 0 0 0 0 ⎦ A P¯ − Σ¯ T A1 P¯ − Σ¯ T −Σ¯ T −Σ¯ T 1

2

3

4

3.3 Stabilizing Controls for State Delayed Systems

81

¯ T + e4 A1 Pe ¯ T − e4 Σ¯ T , = e1 Σ¯ T + e4 A Pe 1 2 ¯ hY ¯ 12 (e1 − e2 )T }P¯ T = PY ¯ 12 P(e ¯ 11 )P¯ T = h¯ Y¯11 , P{Y ¯ 1 − e2 )T = Y¯12 (e1 − e2 )T , P( ¯ h¯ Z 11 )P¯ T = h¯ Z¯ 11 , P{Z ¯ 12 (e2 − e3 )T }P¯ T = P¯ Z 12 P(e ¯ 2 − e3 )T = Z¯ 12 (e2 − e3 )T , P( ¯ 1 Q 0 e T − e3 Q 0 e T }P¯ T = e1 P¯ Q 0 Pe ¯ T − e3 P¯ Q 0 Pe ¯ T = e1 Q¯ 0 e T − e3 Q¯ 0 e T , P{e 1 3 1 3 1 3 ¯ he ¯ 4 S0 e T )P¯ T P( 4 T



= h¯ Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T S0 Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T = h¯ Σ¯ S0 Σ¯ T ,   Y¯22 Y¯ ≥ 0, S¯0−1 ≥ P¯ −1 Y¯22 P¯ −1 ⇔ P¯ S¯0−1 P¯ ≥ Y¯22 ⇔ ¯22 ¯ ¯ −1 Y22 P S0 P¯   Z¯ 22 Z¯ S¯0−1 ≥ P¯ −1 Z¯ 22 P¯ −1 ⇔ P¯ S¯0−1 P¯ ≥ Z¯ 22 ⇔ ¯ 22 ¯ ¯ −1 ≥ 0. Z 22 P S P¯ 0

Finally, we apply the Schur complement to the results derived and replace (A, A1 )  with (A + B K , A1 + B K 1 ), respectively. This completes the proof. The non-convex term P¯ S¯0−1 P¯ in (3.39) can be relaxed by applying Lemma 3.1 and the algorithm in Proposition 3.1 can treat the resultant BMI problem. If h(t) is ˙ ≤ μ, differentiable with bounded derivative such as h(t) ¯ the following result can be obtained based on Theorem 2.9. Theorem 3.9 If there exist matrices P¯ > 0, Q¯ 0 > 0, Q¯ 1 > 0, S¯0 > 0, S¯1 > 0, Y¯i j , Z¯ i j , K¯ , K¯ 1 and Σ¯ such that ⎡

⎤   Π + h¯ Y¯11 h¯ Σ¯ h¯ Σ¯ ¯ ¯ ¯¯ ⎣ h¯ Σ¯ T −h¯ S¯0 0 ⎦ < 0, Π + hTZ 11 h Σ < 0, h¯ Σ¯ −h¯ S¯0 h¯ Σ¯ T 0 −h¯ S¯1     Y¯11 Y¯12 Z¯ 11 Z¯ 12 ≥ 0, ≥ 0, T ¯ T ¯ Y¯12 Y22 Z¯ 12 Z 22     Y¯22 Y¯22 Z¯ 22 Z¯ 22 ≥ 0, ≥ 0, Y¯22 P¯ S¯0−1 P¯ + (1 − μ) ¯ P¯ S¯1−1 P¯ Z¯ 22 P¯ S¯0−1 P¯

(3.40)

(3.41) (3.42)

where Π denotes ¯ 2T − e4 Σ¯ T + Σe ¯ 1T + e4 A1 Pe ¯ 1T + e1 P¯ A T e4T + e2 P¯ A1T e4T − Σe ¯ 4T Π = e1 Σ¯ T + e4 A Pe T T +Y¯12 (e1 − e2 )T + (e1 − e2 )Y¯12 + Z¯ 12 (e2 − e3 )T + (e2 − e3 ) Z¯ 12 + e1 ( Q¯ 0 + Q¯ 1 )e1T −(1 − μ)e ¯ 2 Q¯ 1 e2T − e3 Q¯ 0 e3T + e4 B K¯ e1T + e1 K¯ T B T e4T + e4 B K¯ 1 e2T + e2 K¯ 1T B T e4T ,

and ei ∈ R4n×n are defined in (2.47)–(2.48), then the control (3.36) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.18) with a time-varying ˙ ≤ μ. delay h(t) ∈ [−h, 0] that is differentiable with bounded derivative such as h(t) ¯ Proof Using Theorem 2.9, we can derive another criterion having a time-varying ¯ 0] that is differentiable with bounded derivative such as h(t) ˙ ≤ μ. delay h(t) ∈ [−h, ¯

82

3 State Feedback Stabilizing Controls

Let us define some matrices as Q¯ 1  P¯ Q 1 P¯ and S¯1  S1−1 and focus on those that are strategically added: (Q 1 , S1 )-related terms. ¯ 1 Q 1 e1T − (1 − μ)e P{e ¯ 2 Q 1 e2T }P¯ T ¯ 1T − (1 − μ)e ¯ 2T = e1 Q¯ 1 e1T − (1 − μ)e = e1 P¯ Q 1 Pe ¯ 2 P¯ Q 1 Pe ¯ 2 Q¯ 1 e2T , 



¯ he ¯ 4 S1 e4T )P¯ T = h¯ Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T T S1 Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T = h¯ Σ¯ S1 Σ¯ T , P( ¯ S¯1−1 ≥ P¯ −1 Y¯22 P¯ −1 S¯0−1 + (1 − μ)   Y¯22 Y¯22 −1 ¯ −1 ¯ ¯ ¯ ¯ ¯ ¯ ≥ 0. ⇔ P S0 P + (1 − μ) ¯ P S1 P ≥ Y22 ⇔ ¯ ¯ ¯ −1 ¯ Y22 P S0 P + (1 − μ) ¯ P¯ S¯1−1 P¯ Finally, we apply the Schur complement to the results derived and replace (A, A1 )  with (A + B K , A1 + B K 1 ), respectively. This completes the proof. The non-convex terms P¯ S¯0−1 P¯ and P¯ S¯1−1 P¯ in (3.42) can be relaxed by applying Lemma 3.1 and the algorithm in Proposition 3.1 can treat the resultant BMI problem. For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding results can easily be derived by taking the variable K¯ 1 = 0 from the inequality (3.37) and (3.40) in Theorems 3.8 and 3.9, respectively. Generally speaking, Theorem 3.9 can yield larger maximum allowable delays than Theorem 3.7 since the property of h(t) that is differentiable with bounded derivative such as ˙ ≤ μ¯ is utilized. h(t) The stabilizing controls for systems with time-varying delays in Theorem 3.8 and Theorem 3.9 can be found similarly in [6].

3.4 Robust Stabilizing Controls for State Delayed Systems Consider a state delayed system with model uncertainties given by x(t) ˙ = Ax(t) + A1 x(t − h) + Dpw (t) + Bu(t), pw (t) = (t)qw (t), T (t)(t) ≤ ρ−2 I,

(3.43) (3.44)

qw (t) = E x(t) + E 1 x(t − h) + F pw (t) + E b u(t)

(3.45)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0], where pw (t) ∈ Rm w and qw (t) ∈ Rlw are the output and the input of an uncertainty block (t). As already mentioned in the previous chapter, we will not deal with the Razumikhin theorem in this section since it has less flexibility than the Krasovskii theorem.

3.4 Robust Stabilizing Controls for State Delayed Systems

83

3.4.1 Lyapunov–Krasovskii Approach A stabilizing control for the system (3.43)–(3.45) can be found by simply choosing S as Y22 in Theorem 2.12 as shown in the following theorem. ¯ K¯ , and Theorem 3.10 For a given ρ, if there exist matrices P¯ > 0, Q¯ > 0, Y¯i j , Σ, K¯ 1 such that ⎡

⎤ (1, 1) h Σ¯ (1, 3) ⎣ h Σ¯ T −h P¯ Y¯ −1 P¯ 0 ⎦ < 0, 22 0 −I (1, 3)T   Y¯11 Y¯12 ≥ 0, T ¯ Y¯12 Y22

(3.46)

(3.47)

where (1,1) and (1,3) denote ¯ 2T + e2 P¯ A1T e3T − e3 Σ¯ T − Σe ¯ 1T + e1 P¯ A T e3T + e3 A1 Pe ¯ 1T + e3 A Pe ¯ 3T (1, 1) = e1 Σ¯ T + Σe T T T T T T ¯ 1 +e3 De4 + e4 D e3 + h Y¯11 + Y¯12 (e1 − e2 ) + (e1 − e2 )Y¯12 + e1 Qe ¯ 2T − ρ2 e4 e4T + e3 B K¯ e1T + e3 B K¯ 1 e2T + e1 K¯ T B T e3T + e2 K¯ 1T B T e3T , −e2 Qe (1, 3) = e1 P¯ E T + e1 K¯ T E bT + e2 P¯ E 1T + e2 K¯ 1T E bT + e4 F T ,

and ei are defined in (2.67)–(2.68), then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.43)–(3.45). T

Proof Let us define χ(t) as χ(t)  x T (t) x T (t − h) x˙ T (t) pwT (t) . Let us divide

T Σ into four parts: Σ  Σ1T Σ2T Σ3T Σ4T . Then, judging from the (3, 3)-entry in (2.65), we can claim: Σ3 + Σ3T > 0, which guarantees the invertibility of the matrix Σ3 . Also, from the following development e1 Pe3T + Σ(Ae1T + A1 e2T − e3T + De4T ) ⎡ ⎤ ⎡ ⎤T ⎡ ⎤ ⎡ T ⎤T ⎡ ⎤⎡ P 0 A Σ1 P 0 Σ1 0 0 ⎢ 0 ⎥ ⎢0 ⎥ ⎢Σ2 ⎥ ⎢ A T ⎥ ⎢ 0 P Σ2 0⎥ ⎢ 0 ⎥⎢ ⎢ ⎥⎢ 1 ⎥ ⎢ ⎥⎢ ⎥ =⎢ ⎣ 0 ⎦ ⎣ I ⎦ + ⎣Σ3 ⎦ ⎣ −I ⎦ = ⎣ 0 0 Σ3 0⎦ ⎣ A Σ4 DT 0 0 Σ4 I 0 0 0   

0 0 A1 0

I 0 −I 0

⎤ 0 0⎥ ⎥, D⎦ 0

we utilize the under-braced matrix whose inverse can be found and is defined as ⎡

P ⎢0 P¯ = ⎢ ⎣0 0

0 P 0 0

Σ1 Σ2 Σ3 Σ4

⎤−1 ⎡ −1 P 0 ⎢ 0 0⎥ ⎥ =⎢ ⎣ 0 0⎦ I 0

0 −P −1 Σ1 Σ3−1 P −1 −P −1 Σ2 Σ3−1 0 Σ3−1 0 −Σ4 Σ3−1

⎤ ⎡ 0 P¯ ⎥ ⎢ 0⎥ ⎢ 0 = 0⎦ ⎣ 0 0 I

0 P¯ 0 0

Σ¯ 1 Σ¯ 2 Σ¯ 3 Σ¯ 4

T with Σ¯  Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T . Let us define some matrices as follows.

⎤ 0 0⎥ ⎥ 0⎦ I

84

3 State Feedback Stabilizing Controls



    T Y¯11 Y¯12 P¯ 0 Y11 Y12 P¯ 0 ¯  , Q¯  P¯ Q P. T T ¯ Y22 0 P¯ Y¯12 Y22 0 P¯ Y22

To facilitate later steps, we apply the Schur complement to the constraint (2.65): 

 Ω e1 E T + e2 E 1T + e4 F T 0> , Ee1T + E 1 e2T + Fe4T −I

(3.48)

where Ω  Σ(Ae1T + A1 e2T − e3T + De4T ) + (Ae1T + A1 e2T − e3T + De4T )T Σ T + hY11 + Y12 (e1 − e2 )T T +(e1 − e2 )Y12 + he3 Y22 e3T + e3 Pe1T + e1 Pe3T + e1 Qe1T − e2 Qe2T − ρ2 e4 e4T .



Then, multiplying on the left side of (3.48) by P0¯ 0I and on the right side by its transpose, we can obtain an equivalent stability criterion that is derived below in a step-by-step manner. P¯ {e1 Pe3T + Σ(Ae1T + A1 e2T ⎤⎡ ⎡ 0 0 I P 0 Σ1 0 ⎢ 0 P Σ2 0⎥ ⎢ 0 0 0 ⎥⎢ = P¯ ⎢ ⎣ 0 0 Σ3 0⎦ ⎣ A A1 −I 0 0 Σ4 I 0 0 0

− e3T + De4T )}P¯ T ⎤⎡ ⎤ P¯ 0 0 0 0 ⎢ ⎥ ¯ 0⎥ ⎥⎢ 0 P 0 0 ⎥ D ⎦ ⎣Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T ⎦ 0 0 0 0 I

¯ 2T − e3 Σ¯ T + e3 De4T , ¯ 1T + e3 A1 Pe = e1 Σ¯ T + e3 A Pe T ¯T ¯ ¯ 1 − e2 )T = h Y¯11 + Y¯12 (e1 − e2 )T , P (hY11 + Y12 (e1 − e2 ) )P = h Y¯11 + P¯ Y12 P(e T T 2 T T T ¯ 1 − e2 Qe ¯ 2T − ρ2 e4 e4T , P¯ {e1 Qe1 − e2 Qe2 − ρ e4 e4 }P¯ = e1 Qe ¯ 22 Σ¯ T , P¯ (he3 Y22 e3T )P¯ T = h ΣY T T P¯ (e1 E + e2 E 1 + e4 F T )I = e1 P¯ E T + e2 P¯ E 1T + e4 F T .

Applying the Schur complement to the results derived, we can obtain the dual form of the stability criterion for the system (3.43)–(3.45). Replacing matrices (A, A1 , E, E 1 ) with (A + B K , A1 + B K 1 , E + E b K , E 1 + E b K 1 ), we can extending it to the synthesis criterion for the form u(t) = K x(t) + K 1 x(t − h). This completes the proof.  −1 ¯ The non-convex term P¯ Y¯22 P in (3.46) can be relaxed by applying Lemma 3.1, and Proposition 3.1 can treat the resultant BMI problem. For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding result can easily be derived by taking the variable K¯ 1 = 0 from the inequality (3.46) in Theorem 3.10. The robust stabilizing control for state delayed systems based on the LyapunovKrasovskii approach in Theorem 3.10 is the special case of the results in [6].

3.4 Robust Stabilizing Controls for State Delayed Systems

85

3.4.2 Discretized State Approach A stabilizing control for the system (3.43)–(3.45) can be found by simply choosing S as Y22 in Theorem 2.13 as shown in the following theorem. Theorem 3.11 For a given ρ, if there exist matrices P¯ > 0, Q¯ 0 > 0, Q¯ 1 > 0, Y¯i j , N¯ i , K¯ , and K¯ 1 such that ⎛

⎞ ⎛ ⎞ ⎛ ⎞ ⎤ T¯2 T¯2 Ψ ⎝ 0 ⎠ ⎝T¯3 ⎠ E 0 ⎝T¯3 ⎠ E 0 ⎥ ⎢ Π τ ⎢ ⎥ ⎢ ⎥ FT T¯4 T¯4 ⎢ ⎥ < 0,   T ⎢ ⎥ 0 F Ψ −I 0 0 ⎢ ⎥         −1 ⎣τ E T T¯ T T¯ T T¯ T ⎦ ¯ ¯ ¯ 0 −τ Y 0 IN P 22 I N P 0 2 3 4 T ¯T ¯T ¯T ¯ 0 0 −Q1 E 0 T2 T3 T4 (3.49)   ¯ ¯ Y11 Y12 ≥ 0, (3.50) T ¯ Y¯12 Y22 n ⎡

where Π and Ψ denote ⎡

⎤ ⎤ ⎡ ⎤T ⎡ Π¯ 0 0 (1, 1) (1, 2) 0 (1, 1) (1, 2) 0 Π = ⎣ (2, 1) (2, 2) 0 ⎦ + ⎣ (2, 1) (2, 2) 0 ⎦ + ⎣ 0 −(0 ⊕ Q¯ 1 ) 0 ⎦ , (3, 1) (3, 2) 0 (3, 1) (3, 2) 0 0 0 −ρ2 I Ψ = e0 ( P¯ E T + K¯ T E bT ) + e N ( P¯ E 1T + K¯ 1T E bT ), T +E Q T ¯ 0ET , Π¯ = τ Y¯11 + Y¯12 (E 0 − E N )T + (E 0 − E N )Y¯12 0 ¯ 0 E0 − E N Q N

 

(1, 1) = N¯ 1 0 , (1, 2) = − N¯ 1 + e0 P¯ A T + e N P¯ A1T + e0 K¯ T B T + e N K¯ 1T B T 0 ,

 (2, 1) = N¯ 2 0 , (2, 2) = −(2, 1), 



(3, 1) = N¯ 3 0 , (3, 2) = − N¯ 3 + D T 0 ,

and E 0 , E N , e0 , and e N are defined in (2.39)–(2.40), then the control (3.20) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.43)–(3.45).

 Proof Let us divide N1T N2T N3T in appropriate parts:

 T  T T T · · · N1N N20 · · · N2N N3T . N1T N2T N3T  N10

T > Then, judging from the (N + 2, N + 2)-entry in (2.70), we can claim: N20 + N20 0, which guarantees the invertibility of the matrix N20 . Also, from the following development



0 e0 Pe0T ⎣0 0 0 0

⎤T ⎤ ⎡ ⎤⎡ e0 A T + e N A1T 0 N1 ⎦ −e0 0⎦ + ⎣ N 2 ⎦ ⎣ T N3 D 0

86

3 State Feedback Stabilizing Controls

⎤ ⎤ ⎡ P N10 ⎢0⎥ ⎢ .. ⎥ ⎥ ⎢ ⎥ ⎢ ⎢ .. ⎥ ⎡ ⎤T ⎢ . ⎥ ⎡ ⎤ ⎢ . ⎥ 0 ⎢ N1N ⎥ e0 A T + e N A1T T ⎥ ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ + ⎢ N20 ⎥ ⎣ ⎦ −e0 =⎢ ⎥ ⎢ 0 ⎥ e0 ⎢ T ⎢ .. ⎥ 0 ⎢ .. ⎥ D ⎢ . ⎥ ⎢ . ⎥ ⎥ ⎢ ⎥ ⎢ ⎣0⎦ ⎣ N2N ⎦ ⎡



0

N3

P 0 ··· ⎢ 0 .. .. ⎢ . . ⎢. ⎢ .. . . . . ⎢ . . ⎢ ⎢ 0 ··· 0 =⎢ ⎢ 0 ··· ⎢ ⎢ . ⎢ .. ⎢ ⎣ 0 ··· 

0

···

0 N10 .. . . .. 0

0 ··· 0 0



⎡ ⎥ .. .. ⎥ ⎢ . .⎥ ⎥⎢ ⎥⎢ ⎥⎢ ⎢ 0 ··· 0 0⎥ ⎥⎢ ⎥ 0 ··· 0 0⎥⎢ ⎢ Ae T ⎢ 0 .. ⎥ ⎥ Q1 . ⎥ ⎣ 0⎦ .. .

P N1N 0 N20 .. .. . . 0 N2N 0 N3 0 · · · 0 I  

e0T 0 .. .

0 0 .. .

0 0 .. .

0 0 + A1 e TN −e0T 0 0 0 0



⎥ ⎥ ⎥ ⎥ ⎥ , 0⎥ ⎥ ⎥ D⎥ 0⎦ 0

we utilize the under-braced matrix whose inverse can be found and is defined as ⎡



T¯1 ⎣0 0

T¯2 T¯3 T¯4

P 0 ··· ⎢ 0 .. .. ⎢ . . ⎢. ⎢ . . ⎤ ⎢ . .. ... 0 ⎢ ⎢ 0⎦ = ⎢ 0 ··· 0 ⎢ 0 ··· I ⎢ ⎢ . ⎢ .. ⎢ ⎣ 0 ··· 0

···

0 N10 .. . . .. 0

0 ··· 0 0

⎤−1

⎥ .. .. ⎥ . .⎥ ⎥ ⎥ ⎥ 0 ··· 0 0⎥ ⎥ 0 ··· 0 0⎥ ⎥ .. ⎥ Q1 . ⎥ ⎥ 0⎦ .. .

P N1N 0 N20 .. .. . . 0 N2N 0 N3 0 · · · 0 I

,

where T¯i denote   ¯ T¯1 = I N +1 P −1 = I N +1 P, 

  −1 −1 T¯2 = −(I N +1 P )N1 N20 0 = N¯ 1 0 , ⎤ ⎡ −1 0 ⎡N20 ⎤    ⎥ ⎢ N21 0 ⎥ ⎢ ¯ = , T¯3 = ⎢ N ⎢ .. ⎥ −1 −1 ⎥ 2 Q¯ 1 ⎣ −Q −1 1 ⎣ . ⎦ N20 Q 1 ⎦ N2N 

 −1 ¯ T4 = −N3 N20 0 = N¯ 3 0 .

3.4 Robust Stabilizing Controls for State Delayed Systems

87

For further development, we define some matrices as follows. 

       P¯ 0 P¯ 0 Y11 Y12 I N +1 I N +1 Y¯11 Y¯12   ,  T T ¯ Y22 0 IN 0 IN P¯ Y12 P¯ Y¯12 Y22   ¯ 0 (I N P), ¯ Q¯ 0  (I N P)Q

and apply the Schur complement to the constraint (2.70): 

 (1, 1) (1, 2) 0> , (2, 1) −I

(3.51)

where ⎡

⎤ ⎡ ⎤⎡ ⎤T ⎡ ⎤ ⎡ ⎤T Π1 e0 Pe0T 0 N1 N1 e0 A T + e N A1T e0 A T + e N A1T T ⎦ +⎣ ⎦ ⎣ N2 ⎦ , (1, 1) = ⎣e0 Pe0 Π2 −e0 −e0 0 ⎦ + ⎣ N2 ⎦ ⎣ N3 N3 DT DT 0 0 −ρ2 I 

T T T (2, 1) = (1, 2) = Ee0 + E 1 e N 0 F .

Multiplying on the left side of (3.51) by ⎡

T¯1 ⎣0 0

⎤ T¯2 0 T¯3 0 ⎦ ⊕ I T¯4 I

and on the right side by its transpose, we can obtain an equivalent stability criterion that is derived below in a step-by-step manner. ⎡

T¯1 ⎣0 0 ⎡ T¯1 ⎣0 0

⎤ ⎡ ⎤ ⎡ ⎤ ⎤⎡ T¯2 0 e0 E T + e N E 1T T¯1 (e0 E T + e N E 1T ) e0 P¯ E T + e N P¯ E 1T ⎦=⎣ ⎦=⎣ ⎦, 0 T¯3 0⎦ ⎣ 0 0 T T T ¯ F F F T4 I ⎤T ⎫ ⎡ ⎤ ⎧⎡ ⎤T ⎤ ⎡ ⎤⎡ T¯2 0 ⎪ e0 A T + e N A1T ⎪ N1 ⎬ T¯1 T¯2 0 ⎨ 0 e0 Pe0T 0 ⎦ ⎣ 0 T¯3 0⎦ −e0 T¯3 0⎦ ⎣0 0 0⎦ + ⎣ N2 ⎦ ⎣ ⎪ ⎪ ⎭ 0 T¯4 I N3 DT T¯4 I ⎩ 0 0 0

⎡ 0 ⎢ ⎢ T T =⎢ ⎢ Ae0 + A1 e N ⎣ 0 0

e0T 0 −e0T 0 0



 ⎡ P¯ 0 0 0 ⎥ I N +1 T T ⎥⎢ ¯ 2T ¯ ¯ N N N 1 ⎢

 3 D⎥ ⎥⎣ ¯ 0 0 0 Q1 0 ⎦ 0 0 I 0

⎤ ⎥ ⎥ ⎦

88

3 State Feedback Stabilizing Controls

⎡ ⎢ ⎢ =⎢ ⎢ ⎣

N¯ 1T N¯ 2T N¯ 3T 0 0 0 ¯ 0T + A1 Pe ¯ TN − N¯ 1T − N¯ 2T D − N¯ 3T A Pe 0 0 0 0 0 0

⎤ ⎥ ⎥ ⎥, ⎥ ⎦



=

=

=

=

⎤ ⎤⎡ T ⎤⎡ T¯1 T¯2 0 Π1 0 0 T¯1 0 0 ⎣ 0 T¯3 0⎦ ⎣ 0 Π2 0 ⎦ ⎣T¯ T T¯ T T¯ T ⎦ 2 3 4 0 0 −ρ2 I 0 0 I 0 T¯4 I ⎡ ⎡ ⎤T ⎤ ⎡ ⎤ T¯2 T¯2 T¯1 Π1 T¯1T 0 0 T T T ⎣¯ ⎦ ⎣ 0 ⎦ ⎦ ⎣ ¯ , + T3 {E 0 Q 1 E 0 − E N Q 1 E N + τ E 0 Y22 E 0 } T3 0 0 0 0 −ρ2 I T¯4 T¯4    T + E 0 Q 0 E 0T I N +1 P¯ {τ Y11 + Y12 (E 0 − E N )T + (E 0 − E N )Y12    −E N Q 0 E NT } I N +1 P¯       τ Y¯11 + I N +1 P¯ Y12 I N P¯ (E 0 − E N )T    T   +(E 0 − E N ) I N P¯ Y12 I N +1 P¯       T       T +E 0 I N P¯ Q 0 I N P¯ E 0 − E N I N P¯ Q 0 I N P¯ E N T T T τ Y¯11 + Y¯12 (E 0 − E N ) + (E 0 − E N )Y¯12 + E 0 Q¯ 0 E 0 − E N Q¯ 0 E NT ,

T T T T

 E N Q 1 E NT T¯2T T¯3T T¯4T T¯2 T¯3 T¯4 ⎡ ⎡ ⎡ ⎤T ⎤ ⎤T ⎡ ⎤ ¯ N¯ 1  0    0 0  T N1  0  ⎢ ⎢ ⎢0⎥ ⎢0⎥ ⎥ 0 ⎥ 0 0 0 ⎢ N¯ 2 ⎢ N¯ 2 ⎥ ⎥ = ⎢ ⎥ Q¯ 1 ⎢ ⎥ . Q1 ⎣ ⎣ ⎣I⎦ ⎣I⎦ I Q¯ 1 ⎦ I Q¯ 1 ⎦ 0 0 N¯ 3 0 N¯ 3 0

Finally, we apply the Schur complement to the results derived, and replace matrices A, A1 , E, and E 1 in Theorem 2.13 with A + B K , A1 + B K 1 , E + E b K , and E 1 +  E b K 1 , respectively. This completes the proof. For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding result can easily be derived by taking the variable K¯ 1 = 0 from the inequality (3.49) in Theorem 3.11. The robust stabilizing control for state delayed systems based on the discretized state approach in Theorem 3.11 is newly obtained here.

3.4.3 *Extension to Systems with Time-Varying Delays Let us consider the system (3.43)–(3.45), where the delay h is not a constant but ¯ In this case, we consider two cases where h(t) time-varying such that h(t) ∈ [0, h]. is a general delay and where h(t) is differentiable with bounded derivative such as

3.4 Robust Stabilizing Controls for State Delayed Systems

89

˙ ≤ μ. h(t) ¯ For a general time-varying delay case, the following result can be obtained based on Theorem 2.14. ¯ 0, Q¯ 0 > 0, S¯0 > 0, Y¯i j , Theorem 3.12 For a given ρ, if there exist matrices P > Z¯ i j , K¯ , and Σ¯ such that ⎤ Π + h¯ Y¯11 h¯ Σ¯ Ξ ⎣ h¯ Σ¯ T −h¯ S¯0 0 ⎦ < 0, 0 −I ΞT   Y¯11 Y¯12 ≥ 0, T ¯ Y¯12 Y22   Y¯22 Y¯22 ≥ 0, Y¯22 P¯ S¯0−1 P¯ ⎡



⎤ Π + h¯ Z¯ 11 h¯ Σ¯ Ξ ⎣ h¯ Σ¯ T −h¯ S¯0 0 ⎦ < 0, T 0 −I Ξ   ¯ ¯ Z 11 Z 12 ≥ 0, T ¯ Z¯ 12 Z 22   Z¯ 22 Z¯ 22 ≥ 0, Z¯ 22 P¯ S¯0−1 P¯

(3.52)

(3.53) (3.54)

where Π and Ξ denote  ¯ 2T − e4 Σ¯ T + e4 De5T + Y¯12 (e1 − e2 )T + Σe ¯ 1T + e4 A1 Pe ¯ 1T + e1 P¯ A T e4T Π = e1 Σ¯ T + e4 A Pe T T T T T T 2 T T ¯ 4 + e5 D e4 + (e1 − e2 )Y¯12 − ρ e5 e5 + e1 Q¯ 0 e1 − e3 Q¯ 0 e3T +e2 P¯ A1 e4 − Σe T + Z¯ 12 (e2 − e3 )T + (e2 − e3 ) Z¯ 12 + e4 B K¯ e1T + e1 K¯ T B T e4T + e4 B K¯ 1 e2T + e2 K¯ 1T B T e4T ,  Ξ = e1 P¯ E T + e2 P¯ E 1T + e5 F T + e1 K¯ T E bT + e2 K¯ 1T E bT ,

and ei are defined in (2.75)–(2.79), then the control (3.36) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.43)–(3.45) with a general timevarying delay h(t) ∈ [−h, 0].

 ¯ x˙ T (t) pwT (t) T . Proof Let us define χ(t) as χ(t)  x T (t) x T (t − h(t)) x T (t − h) T

In Theorem 2.14, dividing Σ into five parts: Σ  Σ1T Σ2T Σ3T Σ4T Σ5T , and judging from the (4, 4)th entries in (2.72)–(2.73), we have the condition that Σ4 + Σ4T > 0, which guarantees the invertibility of the matrix Σ4 . Also, from the following development e1 Pe4T + Σ(Ae1T + A1 e2T − e4T + De5T ) ⎡ ⎤ ⎡ ⎤T ⎡ ⎤ ⎡ T ⎤T ⎡ ⎤⎡ 0 A Σ1 P 0 0 Σ1 0 P 0 ⎢Σ2 ⎥ ⎢ A T ⎥ ⎢ 0 P 0 Σ2 0⎥ ⎢ 0 ⎢ 0 ⎥ ⎢0 ⎥ ⎢ ⎥⎢ 1 ⎥ ⎢ ⎢ ⎥⎢ ⎥ ⎥⎢ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎥⎢ =⎢ ⎢ 0 ⎥ ⎢0⎥ + ⎢Σ3 ⎥ ⎢ 0 ⎥ = ⎢ 0 0 P Σ3 0⎥ ⎢ 0 ⎣Σ4 ⎦ ⎣ −I ⎦ ⎣ 0 0 0 Σ4 0⎦ ⎣ A ⎣ 0 ⎦ ⎣I ⎦ 0 0 Σ5 0 0 0 Σ5 I 0 DT   

0 0 0 A1 0

0 0 0 0 0

I 0 0 −I 0

⎤ 0 0⎥ ⎥ 0⎥ ⎥, D⎦ 0

we utilize the under-braced block matrix whose inverse can be found and is defined as

90

3 State Feedback Stabilizing Controls



P −1 ⎢ 0 ⎢ P¯ = ⎢ ⎢ 0 ⎣ 0 0

0 P −1 0 0 0

0 −P −1 Σ1 Σ4−1 0 −P −1 Σ2 Σ4−1 P −1 −P −1 Σ3 Σ4−1 0 Σ4−1 0 −Σ5 Σ4−1

⎤ ⎡ 0 P¯ ⎢ ⎥ 0⎥ ⎢ 0 ⎢ 0⎥ ⎥ = ⎢0 0⎦ ⎣ 0 0 I

0 P¯ 0 0 0

0 0 P¯ 0 0

Σ¯ 1 Σ¯ 2 Σ¯ 3 Σ¯ 4 Σ¯ 5

⎤ 0 0⎥ ⎥ 0⎥ ⎥ 0⎦ I

(3.55)

T with Σ¯  Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T Σ¯ 5T . Let us define some matrices as follows. 

    T Y¯11 Y¯12 P¯ 0 Y11 Y12 P¯ 0  , T T ¯ Y22 0 P¯ Y¯12 Y22 0 P¯ Y12     T  Z¯ 11 Z¯ 12 P¯ 0 Z 11 Z 12 P¯ 0  , T T ¯ Z 22 0 P¯ Z¯ 12 Z 22 0 P¯ Z 12 ¯ S¯0  S0−1 . Q¯ 0  P¯ Q 0 P, After applying the Schur complement

to the constraints (2.72)–(2.73) as in (3.48) and multiplying on the left side by P0¯ 0I and on the right side by its transpose, we can obtain an equivalent stability criterion that is derived below in a step-by-step manner. ¯ 1 Pe T + Σ(Ae T + A1 e T − e T + De T )}P¯ T P{e 4 1 2 4 5 ⎡ ⎤⎡ ⎤ ⎤⎡ ¯ P 0 0 0 0 0 0 0 I 0 P 0 0 Σ1 0 ⎢ 0 P 0 Σ2 0⎥ ⎢ 0 0 0 0 0 ⎥ ⎢ 0 P¯ 0 0 0 ⎥ ⎢ ⎥⎢ ⎥ ⎥⎢ ⎥⎢ ⎥ ⎥⎢ ¯ = P¯ ⎢ ⎢ 0 0 P Σ3 0 ⎥ ⎢ 0 0 0 0 0 ⎥ ⎢ 0 0 P 0 0 ⎥ ⎣ 0 0 0 Σ4 0⎦ ⎣ A A1 0 −I D ⎦ ⎣Σ¯ T Σ¯ T Σ¯ T Σ¯ T Σ¯ T ⎦ 1 2 3 4 5 0 0 0 Σ5 I 0 0 0 0 0 0 0 0 0 I ⎡ ⎤ Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T Σ¯ 5T ⎢ ⎥ 0 0 0 0 0 ⎢ ⎥ ⎥ 0 0 0 0 0 =⎢ ⎢ ⎥ ⎣ A P¯ − Σ¯ T A1 P¯ − Σ¯ T −Σ¯ T −Σ¯ T D − Σ¯ T ⎦ 1 2 3 4 5 0 0 0 0 0 ¯ T + e4 A1 Pe ¯ T − e4 Σ¯ T + e4 De T , = e1 Σ¯ T + e4 A Pe 1 2 5 ¯ hY ¯ 12 (e1 − e2 )T }P¯ T = PY ¯ 12 P(e ¯ 11 )P¯ T = h¯ Y¯11 , P{Y ¯ 1 − e2 )T = Y¯12 (e1 − e2 )T , P( T T T ¯ h¯ Z 11 )P¯ = h¯ Z¯ 11 , P{Z ¯ 12 (e2 − e3 ) }P¯ = P¯ Z 12 P(e ¯ 2 − e3 )T = Z¯ 12 (e2 − e3 )T , P( ¯ 1 Q 0 e T − e3 Q 0 e T }P¯ T = e1 P¯ Q 0 Pe ¯ T − e3 P¯ Q 0 Pe ¯ T = e1 Q¯ 0 e T − e3 Q¯ 0 e T , P{e 1 3 1 3 1 3 ¯ 2 e5 e T )P¯ T = ρ2 e5 e T , P(ρ 5 5 ¯ he ¯ 4 S0 e T )P¯ T P( 4



T T T T T T ¯ = h Σ¯ 1 Σ¯ 2 Σ¯ 3 Σ¯ 4 Σ¯ 5 S0 Σ¯ 1T Σ¯ 2T Σ¯ 3T Σ¯ 4T Σ¯ 5T = h¯ Σ¯ S0 Σ¯ T , ¯ 1 E T + e2 E T + e5 F T )I = e1 P¯ E T + e2 P¯ E T + e5 F T , P(e 1 1   ¯22 Y¯22 Y −1 −1 −1 −1 ≥ 0, ⇔ P¯ S¯0 P¯ ≥ Y¯22 ⇔ ¯ S¯0 ≥ P¯ Y¯22 P¯ Y22 P¯ S¯0−1 P¯

3.4 Robust Stabilizing Controls for State Delayed Systems

S¯0−1 ≥ P¯ −1 Z¯ 22 P¯ −1 ⇔ P¯ S¯0−1 P¯ ≥ Z¯ 22 ⇔



91

 Z¯ 22 Z¯ 22 ≥ 0. Z¯ 22 P¯ S¯0−1 P¯

Finally, we apply the Schur complement to the results derived. This completes the proof.  ˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ the following result can be obtained based on Theorem 2.15. Theorem 3.13 For a given ρ, if there exist matrices P¯ > 0, Q¯ 0 > 0, Q¯ 1 > 0, S¯0 > 0, S¯1 > 0, Y¯i j , Z¯ i j , K¯ , and Σ¯ such that ⎡

⎤ ⎡ ⎤ Π + h¯ Y¯11 h¯ Σ¯ h¯ Σ¯ Ξ Π + h¯ Z¯ 11 h¯ Σ¯ Ξ ⎢ h¯ Σ¯ T −h¯ S¯0 0 ⎥ 0 ⎢ ⎥ < 0, ⎣ h¯ Σ¯ T −h¯ S¯0 0 ⎦ < 0, ⎣ h¯ Σ¯ T 0 −h¯ S¯1 0 ⎦ T 0 −I Ξ 0 0 −I ΞT     Y¯11 Y¯12 Z¯ Z¯ 12 ≥ 0, ¯ 11 ≥ 0, T ¯ T ¯ Y¯12 Y22 Z 12 Z 22     Y¯22 Y¯22 Z¯ 22 Z¯ 22 ≥ 0, ¯ ¯ ¯ −1 ¯ ≥ 0, Y¯22 P¯ S¯0−1 P¯ + (1 − μ) ¯ P¯ S¯1−1 P¯ Z 22 P S0 P

(3.56)

(3.57) (3.58)

where Π and Ξ denote  ¯ 2T − e4 Σ¯ T + e4 De5T + Y¯12 (e1 − e2 )T − (1 − μ)e ¯ 1T + e4 A1 Pe Π = e1 Σ¯ T + e4 A Pe ¯ 2 Q¯ 1 e2T T ¯ 1T + e1 P¯ A T e4T + e2 P¯ A1T e4T − Σe ¯ 4T + e5 D T e4T + (e1 − e2 )Y¯12 +Σe − ρ2 e5 e5T T T T T + Z¯ 12 (e2 − e3 ) + (e2 − e3 ) Z¯ 12 + e1 ( Q¯ 0 + Q¯ 1 )e1 − e3 Q¯ 0 e3

+e4 B K¯ e1T + e1 K¯ T B T e4T + e4 B K¯ 1 e2T + e2 K¯ 1T B T e4T ,

 Ξ = e1 P¯ E T + e2 P¯ E 1T + e5 F T + e1 K¯ T E bT + e2 K¯ 1T E bT ,

and ei are defined in (2.75)–(2.79), then the control (3.36) with K = K¯ P¯ −1 and K 1 = K¯ 1 P¯ −1 asymptotically stabilizes the system (3.43)–(3.45) with a time-varying ˙ ≤ μ. delay h(t) ∈ [−h, 0] that is differentiable with bounded derivative such as h(t) ¯ Proof Let us define some matrices as ¯ S¯1  S1−1 , Q¯ 1  P¯ Q 1 P, and focus on those that are strategically added: (Q 1 , S1 )-related terms. ¯ 1 Q 1 e1T − (1 − μ)e P{e ¯ 2 Q 1 e2T }P¯ T ¯ 1T − (1 − μ)e ¯ 2T = e1 Q¯ 1 e1T − (1 − μ)e = e1 P¯ Q 1 Pe ¯ 2 P¯ Q 1 Pe ¯ 2 Q¯ 1 e2T , T T ¯ he ¯ 4 S1 e4 )P¯ P(

T T T T T T T T T T T  = h¯ Σ¯ 1 Σ¯ 2 Σ¯ 3 Σ¯ 4 Σ¯ 5 S1 Σ¯ 1 Σ¯ 2 Σ¯ 3 Σ¯ 4 Σ¯ 5 ,

92

3 State Feedback Stabilizing Controls

¯ S¯1−1 ≥ P¯ −1 Y¯22 P¯ −1 S¯0−1 + (1 − μ) ⇔ P¯ S¯0−1 P¯ + (1 − μ) ¯ P¯ S¯1−1 P¯ ≥ Y¯22 ⇔



 Y¯22 Y¯22 ≥ 0. Y¯22 P¯ S¯0−1 P¯ + (1 − μ) ¯ P¯ S¯1−1 P¯

Finally, we apply the Schur complement to the results derived. This completes the proof.  For the memoryless state feedback control (3.22), where u(t) = K x(t), the corresponding results can easily be derived by taking the variable K¯ 1 = 0 from the inequalities (3.52) and (3.56) in Theorems 3.12 and 3.13, respectively. Generally speaking, Theorem 3.13 can yield larger maximum allowable delays than Theorem ˙ ≤ μ, 3.12 since the differentiability and upper-boundedness of the delay, i.e. h(t) ¯ is utilized. The robust stabilizing controls for systems with time-varying delays based on the Lyapunov-Krasovskii approach in Theorems 3.12 and 3.13 in this subsection can be found similarly in [6]. Other important results of stabilizing and robust stabilizing controls for systems with time-invariant and time-varying delays based on the descriptor system approach can be found in [3, 4, 15], which are not covered in this book.

References 1. Artstein Z (1982) Linear systems with delayed controls: a reduction. IEEE Trans Autom Control 27(4):869–879 2. de Oliveira MC, Bernussou J, Geromel JC (1999) A new discrete-time robust stability condition. Syst Control Lett 37(4):261–265 3. Fridman E, Shaked U (2002) A descriptor system approach to H∞ control of time-delay systems. IEEE Trans Autom Control 47(2):253–270 4. Fridman E, Shaked U (2002) An improved stabilization method for linear systems with timedelay. IEEE Trans Autom Control 47(11):1931–1937 5. Gu K (1997) Discretized LMI set in the stability problem of linear uncertain time-delay systems. Int J Control 68(4):923–934 6. Ko JW, Park P (2009) Delay-dependent robust stabilization for systems with time-varying delays. Int J Control Autom Syst 7(5):711–722 7. Kwon WH, Pearson AE (1977) A note on feedback stabilization of a differential-difference system. IEEE Trans Autom Control 22(3):468–470 8. Kwon WH, Pearson AE (1980) Feedback stabilization of linear systems with delayed control. IEEE Trans Autom Control 25(2):266–269 9. Lewis RM (1979) Control-delayed system properties via an ordinary model. Int J Control 30(3):477–490 10. Michiels W, Niculescu SI (2007) Stability and stabilization of time-delay systems. SIAM, Cambridge 11. Moon YS, Park P, Kwon WH (2001) Robust stabilization of uncertain input-delayed systems using reduction method. Automatica 37(2):307–312 12. Moon YS, Park P, Kwon WH, Lee YS (2001) Delay-dependent robust stabilization of uncertain state-delayed systems. Int J Control 74(14):1447–1455

References

93

13. Niculescu SI, de Souza CE, Dion JM, Dugard L (1994) Robust stability and stabilization of uncertain linear systems with state delay: single delay case (i). In: Proceedings of the IFAC symposium on robust control design, Rio de Janeiro, pp 469–474 14. Smith OJ (1959) A controller to overcome dead time. ISA J 6:28–33 15. Xu S, Lam J, Zou Y (2005) Simplified descriptor system approach to delay-dependent stability and performance analyses for time-delay systems. IEE Proc Control Theory Appl 152(2):147– 151

Chapter 4

Output Feedback Stabilizing Controls

4.1 Introduction In Chap. 3, state information was available for controls and thus state feedback controls could be used for the asymptotic stability of the closed-loop systems. When state information is not available for controls and only output information is available, then output feedback controls must be employed for the asymptotic stability of the closed-loop systems. This chapter deals with these output feedback stabilizing controls for input and state delayed systems. For the output stabilizing controls for systems with input delays or measurement delays, the Smith predictor method is first introduced briefly due to its historic importance although the Smith predictor method is not a main time-domain approach of this book. The advantages and drawbacks of the Smith predictor method are discussed. Then a Luenberger-type observer and a dynamic output feedback control are obtained with the reduction transformation as main approaches for output feedback stabilizing controls for input delayed systems. For output feedback stabilizing controls for systems with state delays, Luenbergertype observers and dynamic output feedback controls are obtained from the stability criteria developed in the previous chapters based on such as Lyapunov–Razumikhin, Lyapunov–Krasovskii, and cascaded-delay system approaches. Robust output feedback stabilizing controls for systems with state delays are also obtained similarly. The discretized state approach is not discussed in this chapter due to the space limit of the book This chapter is outlined as follows. In Sect. 4.2 output feedback stabilizing controls for input delayed systems are introduced. First the Smith predictor method is introduced. It is shown that the main advantage of the Smith predictor method is that it is a simple output feedback control and the time delay is eliminated in the characteristic equation of the closed-loop system. The drawbacks of this method are also mentioned. Then a Luenberger-type observer and a dynamic output feedback control, both with the reduction transform, are introduced. In Sect. 4.3 output feedback stabilizing controls for state delayed systems are introduced. First Luenberger-type © Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6_4

95

96

4 Output Feedback Stabilizing Controls

observers are introduced, combined with state feedback controls. Then dynamic output feedback stabilizing controls are obtained, based on such as Lyapunov– Razumikhin, Lyapunov–Krasovskii, and cascaded-delay system approaches. Some results based on the Krasovskii theorem are extended to systems with time-varying delays with and without bounded derivatives. In Sect. 4.4 robust output feedback stabilizing controls for uncertain state delayed systems are introduced. Dynamic output feedback stabilizing controls are obtained, based on such as Lyapunov–Krasovskii and cascaded-delay state approaches. Some results based on the Krasovskii theorem are extended to systems with both model uncertainties and time-varying delays with and without bounded derivatives. References for contents of this chapter are listed at the end of this chapter and cited in each subsection for more information and further reading.

4.2 Stabilizing Controls for Input Delayed Systems 4.2.1 Smith Predictor Approach The Smith predictor method has been known from the early stage as a delay compensation method, which can be found in [15] and its extension can be found in [16] for pure input delayed systems. There are several drawbacks although some advantages exist. This book covers the Smith predictor method briefly since it is not closely related to main subjects of this book. Consider a linear system with a pure input delay given by x(t) ˙ = Ax(t) + Bu(t − h), y(t) = C x(t), ym (t) = y(t),

(4.1) (4.2) (4.3)

or with a pure measurement delay given by x(t) ˙ = Ax(t) + Bu(t), y(t) = C x(t), ym (t) = y(t − h),

(4.4) (4.5) (4.6)

where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input or the control, y(t) ∈ R p is the output or the controlled output, ym (t) ∈ R p is the the measurement or the measured output, and h denotes the constant delay. The transfer functions from u to ym of the systems (4.1)–(4.3) and (4.4)–(4.6) are exactly the same and given by G(s)e−sh , where G(s) = C(s I − A)−1 B.

(4.7)

4.2 Stabilizing Controls for Input Delayed Systems

97

Fig. 4.1 Unity output feedback control structure for input delayed systems

Fig. 4.2 Unity output feedback control structure for measurement delay systems

The control purpose is to control the output y to meet a certain objective, using the measurement ym . Figures 4.1 and 4.2 show the unity output feedback control structure for input delayed systems and measurement delayed systems, respectively, where r (t) ∈ R p is a reference input. The transfer functions from the reference input r (s) to the measured output ym (s) in Figs. 4.1 and 4.2 are given by K (s)G(s)e−sh ym (s) = . r (s) 1 + K (s)G(s)e−sh

(4.8)

However, the transfer functions from the reference input r (s) to the controlled output y(s) are not the same, which means two systems are subtly different. It should be noted that the feedback system may have an infinite number of poles considering the characteristic equation 0 = 1 + K (s)G(s)e−sh .

(4.9)

Therefore, the closed-loop stability may not be easy to check. For the control for systems with input delays or measurement delays, the Smith predictor is often used. Smith proposed a delay compensation technique in [15], which utilizes a mathematical model of the system in the minor feedback loop around the conventional controller. The Smith predictor captures the effect of the manipulating input u, on the controlled output y. Figure 4.3 shows the Smith predictor control structure for input delayed systems or measurement delay systems. The

98

4 Output Feedback Stabilizing Controls

r

v

e

ys

K (s)

u

G ( s)e  sh

ym

G ( s )  G ( s)e  sh Control

Fig. 4.3 Smith predictor for input delayed systems or measurement delayed systems

transfer function from the reference input r (s) to the measured output ym (t) is given by G(s)e−sh ym (s) = . r (s) 1 + K (s)G(s)

(4.10)

The main advantage of the Smith predictor method is that the time delay is eliminated from the characteristic equation of the closed-loop system as can be seen from (4.10). Thus the design problem for the systems with delays can be converted to the systems without delays if the feedback stabilization is concerned. Although the Smith predictor method is a simple and effective method to deal with some processes with delays, it has several weak points. First, the Smith predictor cannot be applied to open-loop unstable systems. It is well known that the poles of the closed-loop system using the Smith predictor contain all the eigenvalues of the open-loop system matrix from (4.10), which implies that open-loop unstable systems cannot be stabilized with the Smith predictor. The second drawback is that the Smith predictor control cannot handle the disturbances and nonzero initial conditions. In particular, if the system has poles near the origin in the left half plane, then the responses to disturbances and nonzero initial conditions may not be good enough to be acceptable. The third shortcoming is the lack of robustness. The performance of the Smith predictor method is sensitive to the accuracy of the model of the system and the time delay. The smith predictor method and the silmilar properties can be obtained for systems with a single input delay in addition to a pure input delay.

4.2.2 Luenberger Observer Approach with Reduction Transformation Consider a linear system with a single input delay given by

4.2 Stabilizing Controls for Input Delayed Systems

99

x(t) ˙ = Ax(t) + Bu(t) + B1 u(t − h),

(4.11)

y(t) = C x(t)

(4.12)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0). Using the reduction transform in (3.14), we transfer the system (4.11)–(4.12) into ˙ = Aξ(t) + Bu(t), ¯ ξ(t)

(4.13)

y¯ (t) = Cξ(t),

(4.14)

where 

t

y¯ (t) = y(t) + C

e A(t−τ −h) B1 u(τ )dτ ,

(4.15)

t−h

¯ is stabilizable, we and B¯ = B + e−Ah B1 . y¯ (t) is measurable from (4.15). If (A, B) can compute K such that (A + B¯ K ) is Hurwitz. The input is now given as  u(t) = K x(t) ˆ +



t

e

A(t−τ −h)

 Bu(τ )dτ ,

(4.16)

t−h

where x(t) ˆ is constructed by a Luenberger observer such as ˙ˆ = A x(t) x(t) ˆ + Bu(t) + B1 u(t − h) + L{y(t) − C x(t)}. ˆ

(4.17) 

In this case, the input can be expressed with the combination of ξ(t) and x(t) ˜ = x(t) − x(t) ˆ such as   u(t) = K x(t) +

 e A(t−τ −h) Bu(τ )dτ − K {x(t) − x(t)} ˆ = K ξ(t) − K x(t). ˜

t

t−h

(4.18) The augmented closed-loop system including the error dynamics is given by ˙ = (A + B¯ K )ξ(t) − B¯ K e Ah x(t). ˜ ξ(t) ˙x(t) ˜ = (A − LC)x(t), ˜

(4.19) (4.20)

and we have the following result. Theorem 4.1 If (A, B) and (A, C) are stabilizable and detectable, respectively, then we can construct K and L such that (A + B¯ K ) and (A − LC) are Hurwitz. The control (4.16) based on the observer (4.17) using such K and L asymptotically stabilizes the system (4.11)–(4.12). The reduction transformation can be found as in [1, 8, 9]. The observer-based stabilization combined with the reduction transformation in Theorem 4.1 can be

100

4 Output Feedback Stabilizing Controls

easily obtained from the well know results for ordinary systems. A similar result can be found in [4, 17].

4.2.3 Dynamic Feedback Control Approach with Reduction Transformation Using the reduction transform in (3.14), we transfer the system (4.11)–(4.12) into the system (4.13)–(4.14). Let us now introduce a dynamic output feedback control given by x˙c (t) = Ac xc (t) + Bc y¯ (t), u(t) = Cc xc (t) + Dc y¯ (t),

(4.21) (4.22)

where xc (t) ∈ Rn is the state of the dynamic output feedback control system (4.21)– (4.22). The augmented closed-loop system can be written as 

    ˙ ¯ c ξ(t) A + B¯ Dc C BC ξ(t) . = xc (t) Bc C Ac x˙c (t)

(4.23)

Using system theories on ordinary systems, we can get the following result. ¯ and (A, C) in the system (4.11)–(4.12) are stabiTheorem 4.2 Assume that (A, B) lizable and detectable, respectively. Then there always exist matrices X and X¯ such that   B¯ ⊥T A X¯ + X¯ A T B¯ ⊥ < 0, (4.24)   T A T X + X A (C T )⊥ < 0, (4.25) (C T )⊥   X I > 0, (4.26) I X¯ where B¯ ⊥ and (C T )⊥ denote the orthogonal complements of B¯ and C T , respectively. From such fixed values X and X¯ , a feasible set of {Ac , Bc , Cc , Dc } can be found by solving the LMI problem:   ¯ c X X¯ −1 − X A + B¯ Dc C BC Bc C Ac X¯ −1 − X X − X¯ −1 T    ¯ c X X¯ −1 − X A + B¯ Dc C BC . + Bc C Ac X¯ −1 − X X − X¯ −1

 0>

(4.27)

4.2 Stabilizing Controls for Input Delayed Systems

101

Then the dynamic output feedback control (4.21)–(4.22) with such a set of {Ac , Bc , Cc , Dc } asymptotically stabilizes the system (4.11)–(4.12). Proof The stability of the closed-loop system (4.23) is ensured if there exist X , Y , Z , Ac , Bc , Cc , and Dc such that  X Y , 0< YT Z        ¯ c ¯ c T X Y X Y A + B¯ Dc C BC A + B¯ Dc C BC + 0> YT Z YT Z Bc C Ac Bc C Ac         T  X Y A0 X Y X Y A0 0 B¯ + = + YT Z 0 0 YT Z YT Z 0 0 I 0  T  T      T  0 I Ac Bc X Y 0 I Ac Bc 0 B¯ + × C c Dc C c Dc C 0 YT Z C 0 I 0 

= Q + UΣV T + VΣ T U T , where    T  X Y A0 A0 , + YT Z 0 0 0 0       T    X Y Ac Bc 0 I 0 B¯ , Σ = , V = U= . C c Dc YT Z C 0 I 0 



Q=



X Y YT Z

Since Ac , Bc , Cc and Dc can be chosen freely, by the elimination lemma in Appendix B.1, the above second condition can be written as  0>

X Y YT Z



  ⊥

X Y YT Z



   T  X Y A0 A0 + YT Z 0 0 0 0

 T X Y 0 B¯ , YT Z I 0 ⊥    T      T  X Y X Y 0 I A0 A0 0> + YT Z YT Z C 0 0 0 0 0 ⊥  T T 0 I , C 0 ⊥ 



0 B¯ I 0

(4.28)

(4.29)

where M⊥ denotes the orthogonal complement of M. In this case, 

X Y YT Z



0 B¯ I 0

 ⊥



X Y = YT Z

−1 

   T   0 I 0 B¯ ⊥T , , = T C 0 (C T )⊥ 0 ⊥

which transfers the conditions (4.28)–(4.29) to the LMIs (4.24)–(4.25), where

102

4 Output Feedback Stabilizing Controls



−1   X¯ ∗  X Y = . YT Z ∗ ∗

From the condition (4.26), we have 0 < X , 0 < X¯ , and 0 < X − X¯ −1 , from which we can choose 

X Y YT Z



 =

X X¯ −1 − X −1 ¯ X − X X − X¯ −1



 =

X¯ X¯ ¯ ¯ ¯ X X ( X − X −1 )−1 X¯

−1 . 

This completes the proof. This proof is based on Lemma B.1 in Appendix B.1 which can be found in [2].

4.3 Stabilizing Controls for State Delayed Systems Consider a linear system with a state delay given by x(t) ˙ = Ax(t) + A1 x(t − h) + Bu(t),

(4.30)

y(t) = C x(t)

(4.31)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0].

4.3.1 Luenberger Observer Approach Assuming that the delayed output y(t − h) can be stored in a memory, we can design an observer-based control of the form     ˙ˆ ˆ − h) , x(t) = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + L y(t) − C x(t) ˆ + L 1 y(t − h) − C x(t u(t) = K x(t) ˆ + K 1 x(t ˆ − h),

where x(t) ˆ ∈ Rn is the state of the Luenberger observer. Let us define the observation error by x(t) ˜ = x(t) − x(t). ˆ Then the augmented closed-loop system can be written as 

       x(t) ˙ A + B K −B K x(t − h) x(t) A1 + B K 1 −B K 1 = . + ˙˜ 0 A1 − L 1 C x(t 0 A − LC x(t) ˜ − h) ˜ x(t)

4.3 Stabilizing Controls for State Delayed Systems

103

In view of the structures of the system matrices, the eigenvalues of the closed-loop system are the roots of the following characteristic equation     0 = det s I − (A + B K ) − e−sh (A1 + B K 1 ) · det s I − (A − LC) − e−sh (A1 − L 1 C) .

Note that the roots of   0 = det s I − (A + B K ) − e−sh (A1 + B K 1 )

(4.32)

are the eigenvalues of the control subsystem assuming perfect knowledge of the states and the roots of   (4.33) 0 = det s I − (A − LC) − e−sh (A1 − L 1 C) are the eigenvalues of the estimation error subsystem. We can apply various stabilization criteria developed in Chap. 3 to obtain the control gain matrices K and K 1 which asymptotically stabilize the system (4.30). Since (4.33) is equivalent to  0 = det s I − (A − LC) − e T

−sh

 (A1 − L 1 C) , T

we can obtain desired observer gain matrices by replacing the control gain matrices A + B K and A1 + B K 1 with (A − LC)T and (A1 − L 1 C)T , respectively. If we do not have any information on the delay constant h, we cannot use x(t ˆ − h) in the control. Or we may want to use x(t) ˆ only for the feedback control due to the simple structure, i.e. u(t) = K x(t). ˆ

(4.34)

We can apply various stabilization criteria in Chap. 3 for the closed-loop system x(t) ˙ = (A + B K )x(t) + A1 x(t − h)

(4.35)

to obtain the control gain matrix K for the system (4.35). Also, we can obtain a desired observer gain matrix L by replacing the matrices A + B K and A1 with (A − LC)T and A1T , respectively. Similar controls based on the Luenberger observer approach can be found in [18].

104

4 Output Feedback Stabilizing Controls

4.3.2 Lyapunov–Razumikhin Approach For the state delayed system (4.30)–(4.31), we design a dynamic output feedback control of the form x˙c (t) = Ac xc (t) + Ac,1 xc (t − h) + Bc y(t), u(t) = Cc xc (t) + Cc,1 xc (t − h) + Dc y(t),

(4.36) (4.37)

which gives the following augmented closed-loop system ˙¯ = A¯ x(t) ¯ − h), x(t) ¯ + A¯ 1 x(t

(4.38)

      A + B Dc C BC c  A1 BC c,1 x(t) ¯ ¯ , A= , A1 = . x(t) ¯ = Bc C Ac 0 Ac,1 xc (t)

(4.39)

where 



A dynamic output feedback control for the system (4.30)–(4.31) can be obtained based on Theorem 2.3 as shown in the following theorem. ¯ G, ¯ H¯ , J¯, F¯1 , H¯ 1 , and a Theorem 4.3 If there exist matrices X > 0, X¯ > 0, F, scalar α such that   X¯ I α > 0, > 0, I X ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

   ⎤ A X¯ + B H¯ + X¯ A T A + B J¯C ¯ ¯ A A X + B H 1 1 1 ⎥ + H¯ T B T + α X¯ + F¯ T + αI ⎥     T ⎥ ¯ + AT X ⎥ X A + GC A + C T J¯T B T ¯ X A1 ⎥ < 0, F1 ⎥ +C T G¯ T + αX + F¯ + αI ⎥ ⎥ T T T T −α X¯ −αI ⎦ X¯ A1 + H¯ 1 B F¯1 A1T X −αI −αX A1T

then the dynamic output feedback control (4.36)–(4.37) with Dc = J¯, Cc = ( H¯ − Dc C X¯ )Y¯ −T , Cc,1 = H¯ 1 Y¯ −T , Bc = Y −1 (G¯ − X B Dc ),

(4.40) (4.41) (4.42)

(4.43) −T −1 −T −T −1 ¯ ¯ ¯ ¯ ¯ ¯ Ac = Y F Y − Y X (A + B Dc C) X Y − Bc C X Y − Y X BCc , (4.44) Ac,1 = Y −1 F¯1 Y¯ −T − Y −1 X A1 X¯ Y¯ −T − Y −1 X BCc,1 , (4.45) −1

where Y and Y¯ can be chosen as X¯ −1 − X and X¯ , respectively, asymptotically stabilizes the system (4.30)–(4.31).

4.3 Stabilizing Controls for State Delayed Systems

105

Proof Let us define matrices X and X¯ which are sub-matrices of P and P −1 , by 

   X Y X¯ Y¯ −1 P , P  ¯T ¯ . YT Z Y Z It is routine to verify that



(4.46)

   I 0 X¯ Y¯ P= X Y I 0 

and if it holds that

 X I > 0, I X¯

then there exist invertible matrices Y , Y¯ , Z , and Z¯ such that the condition (4.46) holds. A simple choice introduced is Z = −Y = X − X¯ −1 > 0, Y¯ = X¯ , Z¯ = X¯ ( X¯ − X −1 )−1 X¯ . Our next step is devoted to multiplying on the left side of (2.20) by 

   X¯ Y¯ X¯ Y¯ ⊕ I 0 I 0

and on the right side by its transpose to get a more manageable stability criterion. Each entry is found as follows.         X¯ I X¯ I I 0 X¯ I X¯ Y¯ =α = α , P ¯T X Y Y¯ T 0 I X I 0 Y 0         X¯ I X¯ I I 0 A + B Dc C BCc X¯ Y¯ = P A¯ ¯ T T X Y Bc C Ac I 0 Y¯ 0 Y 0   T A + B Dc C A X¯ + B(Dc C X¯ + Cc Y¯ ) = Y Bc C X¯ + X BCc Y¯ T + X (A + B Dc C) X¯ + Y Ac Y¯ T X A + (X B Dc + Y Bc )C   A X¯ + B H¯ A + B J¯C  , ¯ F¯ X A + GC         X¯ I X¯ I I 0 A1 BCc,1 X¯ Y¯ = P A¯ 1 ¯ T X Y 0 Ac,1 I 0 Y 0 Y¯ T 0     T A1 X¯ + B H¯ 1 A1 A1 A1 X¯ + BCc,1 Y¯  . = X A1 F¯1 X BCc,1 Y¯ T + X A1 X¯ + Y Ac,1 Y¯ T X A1 

α

This completes the proof.



Let us consider a dynamic output feedback control without delayed elements given by

106

4 Output Feedback Stabilizing Controls

x˙c (t) = Ac xc (t) + Bc y(t),

(4.47)

u(t) = Cc xc (t) + Dc y(t),

(4.48)

which gives the augmented closed-loop system (4.38), where 

x(t) ¯ =



      A + B Dc C BC c  A1 0 x(t) , A¯ = . , A¯ 1 = Bc C Ac 0 0 xc (t)

A dynamic output feedback control (4.47)–(4.48) for the system (4.30)–(4.31) can be found by the similar procedure as in Theorem 4.3 as shown in the following corollary. ¯ G, ¯ H¯ , J¯, and a scalar α Corollary 4.1 If there exist matrices X > 0, X¯ > 0, F, such that   X¯ I α > 0, > 0, (4.49) I X ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

   ⎤ A X¯ + B H¯ + X¯ A T A + B J¯C ¯ A X A 1 1 ⎥ + H¯ T B T + α X¯ + F¯ + αI ⎥     T ⎥ ¯ ⎥ X A + GC+ A + C T J¯T B T ¯ X A X X A 1 1 ⎥ < 0, (4.50) ⎥ + F¯ + αI A T X + C T G¯ T + αX ⎥ ⎥ −α X¯ −αI ⎦ X¯ A1T X¯ A1T X A1T

A1T X

−αI −αX

then the dynamic output feedback control (4.47)–(4.48) with Dc = J¯, Cc = ( H¯ − Dc C X¯ )Y¯ −T ,

Bc = Y −1 (G¯ − X B Dc ), Ac = Y −1 F¯ Y¯ −T − Y −1 X (A + B Dc C) X¯ Y¯ −T − Bc C X¯ Y¯ −T − Y −1 X BCc ,

where Y and Y¯ can be chosen as X¯ −1 − X and X¯ , respectively, asymptotically stabilizes the system (4.30)–(4.31). Proof The proof is similar to that of Theorem 4.3 and thus omitted.



Since the inequality (4.50) is not linear but bilinear in X and X¯ , the inequality cannot be easily solved via a convex optimization technique. Unlike state feedback controls, dynamic output feedback controls without delayed elements such as (4.47)– (4.48) cannot be directly obtained from the results for dynamic output feedback controls with delayed elements such as (4.36)–(4.37) and may yield BMI stabilization conditions which are hardly solvable. In this book, we only deal with the dynamic output feedback controls with delayed elements for simplicity. The dynamic output feedback controls for state delayed systems based on the Lyapunov–Razumikhin approach in Theorem 4.3 and Corollary 4.1 are newly obtained in this subsection.

4.3 Stabilizing Controls for State Delayed Systems

107

4.3.3 Lyapunov–Krasovskii Approach 1. Delay-Independent Stabilizing Controls A dynamic output feedback control can be obtained by applying the same procedure developed in Sect. 4.3.2 to Theorem 2.5 as shown in the following theorem. ¯ G, ¯ H¯ , J¯, F¯1 , and Theorem 4.4 If there exist matrices X > 0, X¯ > 0, Q¯ > 0, F, ¯ H1 such that  ⎡ ⎢ ⎢ ⎢ ⎣

 X¯ I > 0, I X

∗ A X¯ + B H¯ + X¯ A T + H¯ T B T ¯ + A T X + C T G¯ T A T + C T J¯T B T + F¯ X A + GC   X¯ A1T + H¯ 1T B T F¯1T A1T A1T X



⎤ + Q¯ ∗ ⎥ ⎥ ⎥ < 0, ⎦ ¯ −Q

then the dynamic output feedback control (4.36)–(4.37) with (4.40)–(4.45) asymptotically stabilizes the system (4.30)–(4.31). Proof Let us transfer Q into Q¯ such that     X¯ I X¯ Y¯ ¯ Q . Q ¯T I 0 Y 0 The rest of the proof is similar to that of Theorem 4.3, which completes the proof.  2. Delay-Dependent Stabilizing Controls In Theorem 2.5, we replace A and A1 with the corresponding matrices A¯ and A¯ 1 in the augmented closed-loop system (4.38), which gives the following result. ¯ G, ¯ H¯ , J¯, Y˜i j , and Σ˜ Theorem 4.5 If there exist matrices X > 0, X¯ > 0, Q¯ > 0, F, such that     Y˜11 Y˜12 X¯ I ≥ 0, > 0, (4.51) T ˜ I X Y˜12 Y22 ⎡ ⎣

where (1, 1) denotes

(1, 1) h Σ˜ T

⎤ ˜  h Σ  X¯ I ˜ −1 X¯ I ⎦ < 0, −h Y22 I X I X 

(4.52)

108

4 Output Feedback Stabilizing Controls 

(1, 1) = e3

  T A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e + e e3T + h Y˜11 1 1 ¯ ¯ F¯ X A + GC F¯ X A + GC

T ˜ 1T − e2 Qe ˜ 2T ˜ 1 − e3 )T + e1 Qe + Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 + (e1 − e3 )Σ˜ T + Σ(e    T    T I I X¯ X¯ A1T + e3 e2T + e2 e3T , (4.53) A1 X X I I

and ei ∈ Rn×3n are defined in (2.33), then the dynamic output feedback control (4.36)–(4.37) with (4.40)–(4.45) asymptotically stabilizes the system (4.30)–(4.31).  T Proof Let us define χ(t) as χ(t)  x¯ T (t) x¯ T (t − h) x˙¯ T (t) . Let us partition matrix P¯ = P −1 as     X¯ Y¯ X Y P¯  ¯ T ¯ , P¯ −1  , (4.54) YT Z Y Z and define an invertible block diagonal matrix    I X Ti  Ii 0 YT

(4.55)

whose dimension varies with the parameter i. We define some matrices as follows. 

  T    Y˜11 Y˜12 T3 0 Y¯11 Y¯12 T3 0 ¯ 1 , Σ˜  T3T Σ¯ T1 .  , Q˜  T1T QT T ¯ T ˜ 0 T1 0 T1 Y¯12 Y22 Y˜12 Y22

  Multiplying on the right side of (2.31) by T03 T01 and on the left side by its transpose, we can obtain a transformed criterion that is derived below in a step-by-step manner. ¯ T }T3 T3T {e3 A¯ 1 Pe 2

        X¯ I T I 0 A1 0 I e A1 X¯ I e2T , = e3 T1T A¯ 1 P¯ T1 e2T = e3 = e 3 T X Y X 0 0 Y¯ 0 2

¯ T }T3 = e3 T T A¯ P¯ T1 e T T3T {e3 A¯ Pe 1 1 1     X¯ I T I 0 A + B Dc C BCc e = e3 X Y Bc C Ac Y¯ T 0 1 ⎡ ⎤ A X¯ + B(Dc C X¯ + Cc Y¯ T )A + B Dc C   ⎦ eT = e3 ⎣ Y Bc C X¯ + X BCc Y¯ T + X A + (X B Dc + Y Bc )C 1 T ¯ ¯ X (A + B Dc C) X + Y Ac Y   A X¯ + B H¯ A + B J¯C T e1 ,  e3 ¯ F¯ X A + GC T3T {(e1 − e3 )Σ¯ T }T3 = (e1 − e3 )T1T Σ¯ T T3 = (e1 − e3 )Σ˜ T , T3T {h Y¯11 }T3  h Y˜11 , T3T {Y¯12 (e1 − e2 )T }T3 = T3T Y¯12 T1 (e1 − e2 )T  Y˜12 (e1 − e2 )T , ¯ T − e2 Qe ¯ T }T3 = e1 T T QT ¯ 1 e T − e2 T T QT ¯ 1 e T = e1 Qe ˜ T − e2 Qe ˜ T, T3T {e1 Qe 1 2 1 1 1 2 1 2 −1 ¯ −1 T ¯ T1 P T1 P T1 = T1T P¯ T1 Y˜22 T1T P¯ Y¯22

4.3 Stabilizing Controls for State Delayed Systems

 =

X¯ Y¯ I 0



109

        I X ˜ −1 I 0 X¯ I X¯ I ˜ −1 X¯ I = Y . Y I X 22 I X 0 Y T 22 X Y Y¯ T 0

This completes the proof.



Applying Lemma 3.1, we can relax the under-braced parts in (4.52)–(4.53) as follows. 

       X¯ I ˜ −1 X¯ I X¯ I X¯ I ≥ − M T Y˜22 M + αI, Y22 M + MT I X I X I X I X    T    T I I X¯ X¯ A1 e2T + e2 e3T e3 A1T X X I I      T       I A1 I I X¯ X¯ e2 e3 e2 = e3 X X I I A1T A1T A1    T    T I I X¯ X¯ − e3 e3T − e2 e2T A1T A1 X X I I         T     T I I I I X¯ X¯ e2 e3 e2 ≤ e3 T T X X I I A1 A1    T    T I I X¯ X¯ V0 − V0T − e3 e3T + V0T V0 − e2 e2T + V1T V1 A1T V1 − V1T A1 X X I I        T    I I X¯ X¯ T T + e2 e3 + e2 ≤ e3 − αI A1 A1 X X I I    T    T I I X¯ X¯ T T T T T V − V0 e3 + V0 V0 − e2 e2T + V1T V1 − e3 A1 V1 − V1 A1 X 0 X I I

for any non-positive scalar α, which, using the Schur complement, leads to the twophase-based iterative algorithm in Algorithm 3.1. The delay-independent dynamic output feedback controls for state delayed systems based on the Lyapunov–Krasovskii approach in Theorem 4.4 are newly obtained here. The delay-dependent dynamic output feedback controls for state delayed systems based on the Lyapunov–Krasovskii approach in Theorem 4.5 are the special case of the results in [7]. Other controls can be found in [5, 6, 10, 12, 13], which are not covered in this book.

4.3.4 Cascaded-Delay System Approach Whereas the conditions in the Lyapunov–Razumikhin approach of Theorem 4.3 and in the delay-dependent Lyapunov–Krasovskii approach of Theorem 4.5 are not linear

110

4 Output Feedback Stabilizing Controls

matrix inequalities, those in the dynamic feedback control approach with reduction transformation of Theorem 4.2 and in the delay-independent Lyapunov–Krasovskii approach of Theorem 4.4 are linear matrix inequalities. If we can use the delay constant h, then we can provide a more flexible control by introducing the cascadeddelay system description for LMI conditions. A basic form of the dynamic output feedback control follows the cascaded-delay system description such as x˙c (t) = F11 xc (t) + F12 pc (t − h) + G 1 y(t), pc (t) = F21 xc (t) + F22 pc (t − h) + G 2 y(t), u(t) = H1 xc (t) + H2 pc (t − h) + J y(t)

(4.56) (4.57) (4.58)

with xc (t0 ) = 0 and pc (t0 + θ) = 0 for θ ∈ [−h, 0), where xc (t) ∈ Rn and pc (t) ∈ Rn are the states of the cascaded-delay control system. Then the resulting augmented closed-loop system with the state delayed system (4.30)–(4.31) and the control system (4.56)–(4.58) is as follows. 

    ˙¯ x(t) ¯ A¯ A¯ x(t) , = ¯ 11 ¯ 12 p(t ¯ − h) p(t) ¯ A21 A22

(4.59)

where     x(t) x(t) , p(t) ¯ = , xc (t) pc (t) ⎤ ⎡ A + B J C B H1 A1 B H2  G 1C F11 0 F12 ⎥ A¯ 12  ⎢ ⎥. =⎢ ⎣ ¯ I 0 0 0 ⎦ A22 F21 0 F22 G2C 



x(t) ¯ = 

A¯ 11 A¯ 21

(4.60)

(4.61)

Consider a Lyapunov functional  V (t) = x¯ T (t)X x(t) ¯ +

t

p¯ T (α)Z p(α)dα, ¯

(4.62)

t−h

where X > 0 and Z > 0. Using this Lyapunov functional, a delay-independent stability criterion can be obtained in the following theorem. Theorem 4.6 If there exist matrices X > 0 and Z > 0 such that 

T T T A¯ 11 X + X A¯ 11 + A¯ 21 Z A21 X A¯ 12 + A¯ 21 Z A¯ 22 T T T A¯ 12 X + A¯ 22 Z A21 A¯ 22 Z A¯ 22 − Z

 < 0,

(4.63)

then the dynamic output feedback control (4.56)–(4.58) asymptotically stabilizes the system (4.30)–(4.31).

4.3 Stabilizing Controls for State Delayed Systems

111

Proof Take the derivative of the functional (4.62); then ¯ + p¯ T (t)Z p(t) ¯ − p¯ T (t − h)Z p(t ¯ − h) V˙ (t) = 2 x˙¯ T (t)X x(t)  T   x(t) ¯ x(t) ¯ = M , p(t ¯ − h) p(t ¯ − h) where M denotes M=



T T T A¯ 11 X + X A¯ 11 + A¯ 21 Z A¯ 21 X A¯ 12 + A¯ 21 Z A¯ 22 T T X + A¯ 22 Z A¯ 21 A¯ 12



T Z A¯ 22 − Z A¯ 22

.

Therefore, the condition that M < 0 ensures the asymptotic stability of the system. This completes the proof.  Theorem 4.6 provides matrix inequalities to find the dynamic output feedback control (4.56)–(4.58) for the system (4.30)–(4.31). However, these are not convex. In fact, we can find convex formulation known as linear matrix inequalities. Theorem 4.7 Assume that there exist matrices X 11 > 0, X¯ 11 > 0, Z 11 > 0, and Z¯ 11 > 0 such that    Z 11 I X 11 I ≥ 0, ≥ 0, I X¯ 11 I Z¯ 11  T T  T C C M (X , Z ) < 0, 0 ⊥ V 11 11 0 ⊥  T   B B MU ( X¯ 11 , Z¯ 11 ) < 0, 0 ⊥ 0 ⊥ 

(4.64) (4.65) (4.66)

where MV (X 11 , Z 11 ) and MU ( X¯ 11 , Z¯ 11 ) denote  A T X 11 + X 11 A + Z 11 X 11 A1 , A1T X 11 −Z 11   A X¯ 11 + X¯ 11 A T + A1 Z¯ 11 A1T X¯ 11 MU ( X¯ 11 , Z¯ 11 ) = . − Z¯ 11 X¯ 11 

MV (X 11 , Z 11 ) =

(4.67) (4.68)

From such fixed values X 11 , X¯ 11 , Z 11 , and Z¯ 11 , matrices X 12 , X 22 , Z 12 , and Z 22 are determined by using the relation −1 −1 −1 −1 X 21 = X 11 − X¯ 11 , Z 12 Z 22 Z 21 = Z 11 − Z¯ 11 X 12 X 22

(4.69)

and a feasible set of {Fi j , G i , H j , J } can be found by solving the LMI problem 

W = [Wi j ]i, j=1,...,6 < 0,

(4.70)

112

4 Output Feedback Stabilizing Controls

where W11 = (A + B J C)T X 11 + C T G 1T X 21 + X 11 (A + B J C) + X 12 G 1 C, W12 = (A + B J C)T X 12 + C T G 1T X 22 + X 11 B H1 + X 12 F11 , W13 = X 11 A1 , W14 = X 11 B H2 + X 12 F12 , W15 = Z 11 + C T G 2T Z 21 , W16 = Z 12 + C T G 2T Z 22 , T W22 = H1T B T X 12 + F11 X 22 + X 21 B H1 + X 22 F11 , W23 = X 21 A1 + X 22 G 1 C2 , W24 = X 21 B H2 + X 22 F12 , T T W25 = F21 Z 21 , W26 = F21 Z 22 , W33 = −Z 11 , W34 = Z 12 , W35 = 0, W36 = 0, T T W44 = −Z 22 , W45 = F22 Z 21 , W46 = F22 Z 22 , W55 = −Z 11 , W56 = −Z 12 , W66 = −Z 22 . Then the dynamic output feedback control (4.56)–(4.58) with {Fi j , G i , H j and J } asymptotically stabilizes the system (4.30)–(4.31). Proof From the condition (4.63) in Theorem 4.6, we have   ¯T ¯  T A21 Z A21 A¯ 11 X + X A¯ 11 X A¯ 12 + 0> T X 0 A¯ 12 A¯ T Z A¯ 21 22

T Z A¯ A¯ 21 22



T Z A¯ − Z A¯ 22 22

= Q + UΣV T + VΣ T U T ,

where ⎡

A T X 11 + X 11 A A T X 12 X 11 A1 0 ⎢ X 21 A 0 X 21 A1 0 ⎢ T T ⎢ X A X −Z 11 −Z 12 A 11 12 1 1 Q=⎢ ⎢ 0 0 −Z 21 −Z 22 ⎢ ⎣ 0 0 0 Z 11 Z 21 0 0 0 ⎤ ⎡ 00B ⎢I 0 0⎥ ⎡ ⎤ ⎥ ⎢ F11 F12 G 1 ⎢0 0 0 ⎥ ⎥ Σ = ⎣ F21 F22 G 2 ⎦ , U = T ⎢ ⎢0 0 0 ⎥, V ⎥ ⎢ H1 H2 J ⎣0 0 0 ⎦ 0I 0 ⎡ X 11 ⎢ X 21 ⎢     ⎢ 0 X 11 X 12 Z 11 Z 12 X= ,Z = ,T = ⎢ ⎢ 0 X 21 X 22 Z 21 Z 22 ⎢ ⎣ 0 0 We remark that

⎤ Z 11 Z 12 0 0 ⎥ ⎥ 0 0 ⎥ ⎥, 0 0 ⎥ ⎥ −Z 11 −Z 12 ⎦ −Z 21 −Z 22 ⎡ ⎤ 0 0 CT ⎢I 0 0 ⎥ ⎢ ⎥ ⎢0 0 0 ⎥ ⎥ =⎢ ⎢0 I 0 ⎥, ⎢ ⎥ ⎣0 0 0 ⎦ 00 0 X 12 X 22 0 0 0 0

0 0 I 0 0 0

0 0 0 I 0 0

0 0 0 0 Z 11 Z 21

⎤ 0 0 ⎥ ⎥ 0 ⎥ ⎥. 0 ⎥ ⎥ Z 12 ⎦ Z 22

4.3 Stabilizing Controls for State Delayed Systems



Nb1 ⎢ 0 ⎢ ⎢ 0 U⊥ = T −1 ⎢ ⎢ 0 ⎢ ⎣ Nb2 0

0 0 I 0 0 0

113

⎤ ⎡ ⎤ Nc1 0 0 0 ⎢ 0 0 0⎥ 0⎥ ⎥ ⎢ ⎥ ⎥ ⎢ Nc2 0 0 ⎥ 0⎥ ⎢ ⎥ , V = ⊥ ⎢ 0 0 0⎥, I⎥ ⎥ ⎢ ⎥ ⎣ 0 I 0⎦ 0⎦ 0 0 0I

where Nbi and Ncj denote 

Nb1 Nb2





=



B 0



 ⊥

,

Nc1 Nc2







=

CT 0

 ⊥

.

From the elimination lemma in Appendix B.1, the inequality condition that 0 > Q + UΣV T + VΣ T U T is equal to the following two conditions: 0 > V⊥T QV⊥ and 0 > U⊥T QU⊥ . Let us investigate the first condition.  T     Nc1 Nc1 ⊕ I2 ⊕ I2 , 0 > V⊥T QV⊥ = M¯ V Nc2 Nc2 ⎤ ⎡ T A X 11 + X 11 A11 X 11 A1 Z 11 Z 12  ⎢ A1T X 11 −Z 11 0 0 ⎥ ⎥, M¯ V = ⎢ ⎣ 0 −Z 11 −Z 12 ⎦ Z 11 Z 21 0 −Z 21 −Z 22 which can be simplified by the Schur complement associated with the (3, 3)th and (4, 4)th entries of M¯ V such as  0>

Nc1 Nc2

T

 Nc1 , MV (X 11 , Z 11 ) Nc2 

where MV (X 11 , Z 11 ) is defined in (4.67). Let us now investigate the second condition. First, consider ⎡

T −T QT −1

A X¯ 11 + X¯ 11 A T A X¯ 12 A1 0 ⎢ ¯ 21 A T X 0 0 0 ⎢ ⎢ 0 −Z 11 −Z 12 A1T =⎢ ⎢ 0 0 −Z 21 −Z 22 ⎢ ⎣ X¯ 11 0 0 X¯ 12 0 0 0 0

Then, the second condition becomes

⎤ X¯ 11 0 X¯ 21 0 ⎥ ⎥ 0 0 ⎥ ⎥. 0 0 ⎥ ⎥ − Z¯ 11 − Z¯ 12 ⎦ − Z¯ 21 − Z¯ 22

114

4 Output Feedback Stabilizing Controls

 T    Nb1 Nb1 ¯ ⊕ I2 ⊕ I2 , 0> = MU Nb2 Nb2 ⎤ ⎡ 0 A X¯ 11 + X¯ 11 A T X¯ 11 A1 ⎥ ¯ 11 ¯ 11 0  ⎢ X − Z 0 ⎥, M¯ U = ⎢ ⎣ 0 −Z 11 −Z 12 ⎦ A1T 0 0 −Z 21 −Z 22 

U⊥T QU⊥

which can be simplified by the Schur complement associated with the (3, 3)th and (4, 4)th entries of M¯ U such as  0>

Nb1 Nb2

T

MU ( X¯ 11 , Z¯ 11 )



 Nb1 , Nb2

where MU ( X¯ 11 , Z¯ 11 ) is defined in (4.68). Since the rest parts of proof are straightforward, we omit them. This completes the proof.  The size of the control is determined from the ranks of (X − X¯ −1 ) and (Z − Z¯ −1 ) because rank (X − X¯ −1 ) = size (X 22 ) = dimension (xc ), rank (Z − Z¯ −1 ) = size (Z 22 ) = dimension ( pc ).

(4.71) (4.72)

It is noted that the dimensions of xc (t) and pc (t) can be smaller than or equal to n since the sizes of X and Z are the dimension n of the state x(t). Even though the cascaded-delay system approach produces a little complicated results, these results are formulated with LMIs. The stability condition and the dynamic output feedback control for state delayed systems based on the cascaded-delay system approach in Theorems 4.6 and 4.7 in this subsection are newly obtained here.

4.3.5 *Extension to Systems with Time-Varying Delays If the delay in the system (4.30)–(4.31) is time-varying with the condition that h(t) ∈ ¯ a dynamic output feedback control is designed as the following form [0, h], x˙c (t) = Ac xc (t) + Ac,1 xc (t − h(t)) + Bc y(t),

(4.73)

u(t) = Cc xc (t) + Cc,1 xc (t − h(t)) + Dc y(t),

(4.74)

and the augmented closed-loop system such as (4.38) is written as ˙¯ = A¯ x(t) ¯ − h(t)), x(t) ¯ + A¯ 1 x(t

(4.75)

4.3 Stabilizing Controls for State Delayed Systems

115

where A¯ and A¯ 1 are defined in (4.39). In this case, we consider two cases where ¯ 0] that is h(t) is a general delay and where h(t) is a time-varying delay h(t) ∈ [−h, ˙ ≤ μ. differentiable with bounded derivative such as h(t) ¯ For a general time-varying delay case, the following result can be obtained by applying the similar Lyapunov– Krasovskii approach used in Theorem 2.8. ¯ Theorem 4.8 Assume that there exist matrices Q˜ 0 > 0, S˜0 > 0, X > 0, X¯ > 0, F, ¯ H¯ , J¯, Y˜i j , Z˜ i j , and Σ˜ such that G, 

   Π + h¯ Z˜ 11 h¯ Σ˜ Π + h¯ Y˜11 h¯ Σ˜ < 0, < 0, h¯ Σ˜ T −h¯ S˜0 h¯ Σ˜ T −h¯ S˜0       Y˜11 Y˜12 Z˜ 11 Z˜ 12 X¯ I ≥ 0, ˜ T ˜ ≥ 0, > 0, T ˜ I X Y˜12 Y22 Z 12 Z 22 ⎡ ⎤ ˜ Y˜22   Y22   ⎣ ¯ X I ˜ −1 X¯ I ⎦ ≥ 0, Y˜22 S0 I X I X ⎡ ⎤ ˜ ˜ Z 22   Z 22   ⎣ X¯ I ˜ −1 X¯ I ⎦ ≥ 0, Z˜ 22 S0 I X I X

(4.76) (4.77)

(4.78)

(4.79)

where Π denotes 

  T A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e + e e4T 1 1 ¯ ¯ F¯ X A + GC F¯ X A + GC T T + Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 + Z˜ 12 (e2 − e3 )T + (e2 − e3 ) Z˜ 12 ˜ 1 − e4 )T + e1 Q˜ 0 e1T − e3 Q˜ 0 e3T + (e1 − e4 )Σ˜ T + Σ(e  T     T T  + e4 I X A1 X¯ I e2T + e2 X¯ I A I X e4T ,

Π  e4

and ei ∈ R4n×n are defined in (2.47)–(2.48). Then the dynamic output feedback control (4.73)–(4.74) with Dc = J¯, Cc = ( H¯ − Dc C X¯ )Y¯ −T , Bc = Y −1 (G¯ − X B Dc ),

(4.80) (4.81)

(4.82) −T −1 −T −T −1 ¯ ¯ ¯ ¯ ¯ ¯ Ac = Y F Y − Y X (A + B Dc C) X Y − Bc C X Y − Y X BCc , (4.83) Cc,1 = ( H¯ 1 − Dc C1 X¯ )Y¯ −T , (4.84) Ac,1 = Y −1 F¯1 Y¯ −T −Y −1 X (A1 + B Dc C1 ) X¯ Y¯ −T − Bc C1 X¯ Y¯ −T −Y −1 X BCc,1 , −1

(4.85)

116

4 Output Feedback Stabilizing Controls

where Y and Y¯ can be chosen as X − X¯ −1 and X¯ , respectively, asymptotically stabilizes the system (4.30)–(4.31) with a general time-varying delay h(t) ∈ [−h, 0].  T ¯ x˙¯ T (t) . In Proof Let us define χ(t) as χ(t)  x¯ T (t) x¯ T (t − h(t)) x¯ T (t − h) Theorem 2.8, we partition P¯ as (4.54) and define an invertible block diagonal matrix Ti as (4.55). By defining some matrices as T   Y¯11 Y¯12 T4 T4 0 T ¯ 0 T1 0 Y¯12 Y22  T   T4 0 Z¯ 11 Z¯ 12 T4 T ¯ 0 T1 0 Z¯ 12 Z 22 

   0 Y˜ Y˜12  ˜11 , T ˜ T1 Y12 Y22    0 Z˜ Z˜ 12  ˜ 11 , T ˜ T1 Z 12 Z 22

Σ˜  T4T Σ¯ T1 , S˜0  T1T S¯0 T1 , Q˜ 0  T1T Q¯ 0 T1 ,   and multiplying on the right side of (2.44)–(2.45) by T04 T01 and on the left side by its transpose, we can obtain a transformed criterion that is derived below in a step-by-step manner. ¯ 2T }T4 T4T {e4 A¯ 1 Pe



    T X¯ I T I X¯ e A1 = e e2T , 4 T 2 ¯ X I Y 0     X¯ I T ¯ 1T }T4 = e4 T1T A¯ P¯ T1 e1T = e4 I 0 A + B Dc C BCc e T4T {e4 A¯ Pe Bc C Ac X Y Y¯ T 0 1 ⎡ ⎤ ¯ ¯ ¯T A + B Dc C  A X + B(Dc C X + CcTY )  ⎦ e1T = e4 ⎣ Y Bc C X¯ + X BCc Y¯ + X A + (X B D + Y B )C c c T X (A + B Dc C) X¯ + Y Ac Y¯   A X¯ + B H¯ A + B J¯C T e1 ,  e4 ¯ F¯ X A + GC T4T {(e1 − e4 )Σ¯ T }T4 = (e1 − e4 )T1T Σ¯ T T4 = (e1 − e4 )Σ˜ T , = e4 T1T A¯ 1 P¯ T1 e2T = e4



I 0 X Y



A1 0 0 0

T4T {h¯ Y¯11 }T4 = h¯ Y˜11 , T4T {Y¯12 (e1 − e2 )T }T4 = T4T Y¯12 T1 (e1 − e2 )T = Y˜12 (e1 − e2 )T , T4T {h¯ Z¯ 11 }T4 = h¯ Z˜ 11 , T4T { Z¯ 12 (e2 − e3 )T }T4 = T4T Z¯ 12 T1 (e2 − e3 )T = Z˜ 12 (e2 − e3 )T , T4T {e1 Q¯ 0 e1T − e3 Q¯ 0 e3T }T4 = e1 T1T Q¯ 0 T1 e1T − e3 T1T Q¯ 0 T1 e3T = e1 Q˜ 0 e1T − e3 Q˜ 0 e3T , ¯ 1 = T1T P¯ T1 S˜0−1 T1T P¯ T1 T1T { P¯ S¯0−1 P}T           X¯ I X¯ I ˜ −1 X¯ I X¯ Y¯ I X ˜ −1 I 0 = , = S S I X 0 I X I 0 0 Y T 0 X Y Y¯ T 0 S¯0−1 ≥ P¯ −1 Y¯22 P¯ −1 ⇔ P¯ S¯0−1 P¯ ≥ Y¯22 ⇔ T1T P¯ S¯0−1 P¯ T1 ≥ T1T Y¯22 T1 = Y˜22

4.3 Stabilizing Controls for State Delayed Systems

117



⎤ ˜ Y˜22   Y22   ⇔⎣˜ X¯ I ˜ −1 X¯ I ⎦ ≥ 0, Y22 S I X 0 I X −1 −1 −1 ⇔ P¯ S¯0−1 P¯ ≥ Z¯ 22 ⇔ T1T P¯ S¯0−1 P¯ T1 ≥ T1T Z¯ 22 T1 = Z˜ 22 S¯0 ≥ P¯ Z¯ 22 P¯ ⎡ ⎤ ˜ 22 Z Z˜ 22     ⇔⎣˜ X¯ I ˜ −1 X¯ I ⎦ ≥ 0. Z 22 S0 I X I X 

This completes the proof.

˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ the following result can be obtained based on Theorem 2.9. Theorem 4.9 Assume that there exist matrices Q˜ 0 > 0, Q˜ 1 > 0, S˜0 > 0, S˜1 > 0, ¯ G, ¯ H¯ , J¯, Y˜i j , Z˜ i j , and Σ˜ such that X > 0, X¯ > 0, F, ⎡

⎤   Π + h¯ Y˜11 h¯ Σ˜ h¯ Σ˜ ¯ ˜ ¯˜ ⎣ h¯ Σ˜ T −h¯ S˜0 0 ⎦ < 0, Π + h Z 11 h Σ < 0, h¯ Σ˜ T −h¯ S˜0 h¯ Σ˜ T 0 −h¯ S˜1       Y˜11 Y˜12 Z˜ 11 Z˜ 12 X¯ I ≥ 0, ˜ T ˜ ≥ 0, > 0, T ˜ I X Y˜12 Y22 Z 12 Z 22 ⎡ ⎤ Y˜22 Y˜22     ⎣ X¯ I X¯ I ⎦ ≥ 0, ¯ S˜1−1 } Y˜22 { S˜0−1 + (1 − μ) I X I X ⎤ ⎡ ˜ Z˜ 22   Z 22   ⎦ ≥ 0, ⎣ ¯ I ¯ X −1 X I ˜ ˜ Z 22 S0 I X I X

(4.86)

(4.87)

(4.88)

(4.89)

where Π denotes  Π  e4

  T A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e + e e4T 1 1 ¯ ¯ F¯ X A + GC F¯ X A + GC

˜ 1 − e4 )T + e1 ( Q˜ 0 + Q˜ 1 )e T − e3 Q˜ 0 e T − (1 − μ)e +(e1 − e4 )Σ˜ T + Σ(e ¯ 2 Q˜ 1 e2T 1 3 T + Z˜ (e − e )T + (e − e ) Z˜ T +Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 12 2 3 2 3 12    T    T ¯ ¯ I I X X A1 +e4 e2T + e2 e4T , A1T X X I I

ei ∈ R4n×n are defined in (2.47)–(2.48). Then the dynamic output feedback control (4.73)–(4.74) with (4.80)–(4.85) asymptotically stabilizes the system (4.30)–(4.31) with a time-varying delay h(t) ∈ [−h, 0] that is differentiable and upper-bounded ˙ ≤ μ. such as h(t) ¯

118

4 Output Feedback Stabilizing Controls

Proof Let us define Q˜ 1  T1T Q¯ 1 T1 , S˜1  T1T S¯1 T1 . The rest of the proof is similar to that of Theorem 4.8, which completes the proof.  Generally speaking, Theorem 4.9 can yield larger maximum allowable delays than Theorem 4.8 since the property of h(t) that is differentiable with bounded derivative ˙ ≤ μ¯ is utilized. such as h(t) The dynamic output feedback controls for systems with time-varying delays based on the Lyapunov–Krasovskii approach in Theorems 4.8 and 4.9 in this subsection can be found similarly in [7].

4.4 Robust Stabilizing Controls for State Delayed Systems Consider a state delayed system with model uncertainties given by x(t) ˙ = Ax(t) + A1 x(t − h) + Dpw (t) + Bu(t), y(t) = C x(t),

(4.90) (4.91)

pw (t) = Δ(t)qw (t), ΔT (t)Δ(t) ≤ ρ−2 I, qw (t) = E x(t) + E 1 x(t − h) + F pw (t) + E b u(t)

(4.92) (4.93)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0], where pw (t) ∈ Rm w and qw (t) ∈ Rlw are the output and the input of an uncertainty block Δ(t).

4.4.1 Lyapunov–Krasovskii Approach If we use the dynamic output feedback control (4.36)–(4.37) for the delayed system with model uncertainties (4.90)–(4.93), we have the augmented closed-loop system such as ˙¯ = A¯ x(t) ¯ − h) + D¯ pw (t), x(t) ¯ + A¯ 1 x(t −2

pw (t) = Δ(t)qw (t), Δ (t)Δ(t) ≤ ρ I, qw (t) = E¯ x(t) ¯ + E¯ 1 x(t ¯ − h) + F pw (t), T

where 

A¯ E¯

A¯ 1 E¯ 1

D¯ F¯





⎤ A + B Dc C BCc A1 0 D Ac 0 0 0 ⎦ . Bc C =⎣ E + E b Dc C E b C c E 1 0 F 

(4.94) (4.95) (4.96)

4.4 Robust Stabilizing Controls for State Delayed Systems

119

A dynamic output feedback control for the system (4.90)–(4.93) can be found by simply choosing S as Y22 in Theorem 2.12 as shown in the following theorem. Theorem 4.10 Assume that, for a given ρ, there exist matrices Q˜ > 0, X > 0, X¯ > ¯ G, ¯ H¯ , J¯, Y˜i j , and Σ˜ such that 0, F, ⎡

⎤     Π h Σ˜  ˜ ˜ ¯ ⎣h Σ˜ T −Ω 0 ⎦ < 0, X I > 0, Y11 Y12 ≥ 0, T ˜ I X Y˜12 Y22 T 0 −I

(4.97)

where Π , , and Ω denote 

  T A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e + e e3T 1 1 ¯ ¯ F¯ X A + GC F¯ X A + GC    T ˜ 1 − e3 )T + e3 I De4T + e4 D T I + (e1 − e3 )Σ˜ T + Σ(e e3T X X T ˜ 1T − e2 Qe ˜ 2T − ρ2 e4 e4T + h Y˜11 + Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 + e1 Qe    T    T I I X¯ X¯ + e3 e2T + e2 e3T , A1 A1T X X I I      X¯ E T + H¯ T E bT X¯ + e2  = e1 E 1T + e4 F T , I E T + C T J¯T E bT      X¯ I ˜ −1 X¯ I Ω=h , Y22 I X I X 

Π = e3

ei are defined in (2.67)–(2.68). Then the dynamic output feedback control (4.36)– (4.37) with Dc = J¯, Cc = ( H¯ − Dc C X¯ )Y¯ −T , Bc = Y −1 (G¯ − X B Dc ), Ac = Y Cc,1 Ac,1

−1

(4.98) (4.99)

(4.100) −T −1 −T −T −1 ¯ ¯ ¯ ¯ ¯ ¯ F Y − Y X (A + B Dc C) X Y − Bc C X Y − Y X BCc ,

= ( H¯ 1 − Dc C1 X¯ )Y¯ −T , = Y −1 F¯1 Y¯ −T − Y −1 X (A1 + B Dc C1 ) X¯ Y¯ −T − Bc C1 X¯ Y¯ −T −Y −1 X BCc,1 ,

(4.101) (4.102) (4.103)

where Y and Y¯ can be chosen as X − X¯ −1 and X¯ , respectively, asymptotically stabilizes the system (4.90)–(4.93). ¯ A¯ 1 , D, ¯ Proof Let us replace A, A1 , D, E, and E 1 in the condition (3.46) with A, ¯ ¯ ¯ E, and E 1 , respectively. In Theorem 3.10, we partition P as (4.54) and define an invertible block diagonal matrix Ti as (4.55). By defining some matrices as

120

4 Output Feedback Stabilizing Controls



Y˜11 T Y˜12

Y˜12 Y˜22



⎡

T3 0 ⎣ 0 I 0



⎤T 0⎦ T1



Y¯11 T Y¯12

⎡  ⎤  T3 0 ¯ Y12 ⎣ 0⎦ 0 I , Y¯22 0 T1

T  T3 0 T ¯ ˜ ˜ Σ¯ T1 , Q  T1 QT1 , Σ  0 I and multiplying on the right side of (3.46) by 

   T3 0 T 0 ⊕ 1 0 I 0 I

and on the left side by its transpose, we can obtain a transformed criterion that is derived below in a step-by-step manner. 

T



 T3 0 0 I         X¯ I T I 0 A1 0 I = e e = e3 T1T A¯ 1 P¯ T1 e2T = e3 A1 X¯ I e2T , 3 T 2 0 0 Y¯ 0 X Y X   T  T3 0 ¯ 1T } T3 0 {e3 A¯ Pe 0 I 0 I     X¯ I T I 0 A + B Dc C BCc e = e3 T1T A¯ P¯ T1 e1T = e3 T ¯ Bc C Ac X Y Y 0 1 ⎡ ⎤ ¯T ¯ ¯ A + B Dc C  A X + B(Dc C X + CcTY )  ⎦ eT = e3 ⎣ Y Bc C X¯ + X BCc Y¯ + X A + (X B Dc + Y Bc )C 1 T ¯ ¯ X (A + B Dc C) X + Y Ac Y    A X¯ + B H¯ A + B J¯C T = e3 e1 , ¯ F¯ X A + GC    T   T3 0 T 0 T 0 {(e1 − e3 )Σ¯ T } 3 = (e1 − e3 )T1T Σ¯ T 3 = (e1 − e3 )Σ˜ T , 0 I 0 I 0 I   T  T3 0 T 0 {h Y¯11 } 3 = h Y˜11 , 0 I 0 I   T   T T3 0 T 0 T 0 Y¯12 T1 (e1 − e2 )T = Y˜12 (e1 − e2 )T , {Y¯12 (e1 − e2 )T } 3 = 3 0 I 0 I 0 I  T   T3 0 ¯ 1T − e2 Qe ¯ 2T } T3 0 {e1 Qe 0 I 0 I T3 0 0 I

¯ 2T } {e3 A¯ 1 Pe

¯ 1 e1T − e2 T1T QT ¯ 1 e2T = e1 Qe ˜ 1T − e2 Qe ˜ 2T , = e1 T1T QT  T        T3 0 ¯ 4T } T3 0 = e3 T1T De ¯ 4T = e3 I 0 D e4T = e3 I De4T , {e3 De 0 I 0 I X Y 0 X   T  T 0 T3 0 {−ρ2 e4 e4T } 3 = −ρ2 e4 e4T , 0 I 0 I −1 ¯ −1 T ¯ P T1 = T1T P¯ T1 Y˜22 T1T P¯ Y¯22 T1 P T1

4.4 Robust Stabilizing Controls for State Delayed Systems

121



         X¯ I X¯ Y¯ I X ˜ −1 I 0 X¯ I ˜ −1 X¯ I Y22 Y , = T T X Y Y¯ 0 I 0 0Y I X 22 I X T  T3 0 {e1 P¯ E¯ T }I 0 I      X¯ (E + E b Dc C)T + Y¯ (E b Cc )T X¯ Y¯ (E + E b Dc C)T = e1 = e1 T1T P¯ E¯ T = e1 T T I 0 (E b Cc ) (E + E b Dc C)     T T T T T ¯ ¯ ¯ ¯ ¯ X E + H Eb {E X + E b (Dc C X + Cc Y )} = e , = e1 1 (E + E b Dc C)T E T + C T J¯T E bT       T T3 0 X¯ Y¯ E 1T X¯ E 1T , = e2 {e2 P¯ E¯ 1T }I = e2 T1T P¯ E¯ 1T = e2 0 I I 0 I 0  T T3 0 {e4 F T }I = e4 F T . 0 I =



This completes the proof.

The robust dynamic output feedback control for state delayed systems based on the Lyapunov–Krasovskii approach in Theorem 4.10 is the special case of the results in [7].

4.4.2 Cascaded-Delay System Approach As mentioned in Sect. 4.3.4, a dynamic output feedback control with the cascadeddelay system description could provide more flexibility comparing to the control in Theorem 4.10. Let us use the cascaded-delay dynamic output feedback control (4.56)–(4.58) for the delayed system with model uncertainties (4.90)–(4.93); then we have the augmented closed-loop system such as ⎤ ⎡ ˙¯ A¯ 11 x(t) ⎦ ⎣ ⎣ p(t) ¯ = A¯ 21 qw (t) C¯ 1 ⎡

A¯ 12 A¯ 22 C¯ 2

⎤ ⎤⎡ B¯ 1 x(t) ¯ ¯ − h) ⎦ , B¯ 2 ⎦ ⎣ p(t pw (t) F

pw (t) = Δ(t)qw (t), ΔT (t)Δ(t) ≤ ρ−2 I

(4.104) (4.105)

with the initial condition p(t0 + θ) = 0 for all θ ∈ [−h, 0), where     x(t) x(t) , p(t) ¯ = , xc (t) pc (t) ⎡ A + B J C B H1 A1 B H2 ⎤ ⎢ G 1C F11 0 F12 B¯ 1  ⎢ I 0 0 0 B¯ 2 ⎦ = ⎢ ⎢ ⎣ G2C F21 0 F22 F E1 0 E2 0 



x(t) ¯ = ⎡

A¯ 11 ⎣ A¯ 21 C¯ 1

A¯ 12 A¯ 22 C¯ 2

(4.106) ⎤ D 0⎥ ⎥ 0⎥ ⎥. 0⎦ F

(4.107)

122

4 Output Feedback Stabilizing Controls

Using a Lyapunov functional as (4.62), a delay-independent stability criterion can be obtained in the following theorem. Theorem 4.11 If there exist matrices X > 0, and Z > 0 such that ⎡

⎤ T T T T A¯ 11 X + X A¯ 11 + A¯ 21 Z A¯ 21 X A¯ 12 + A¯ 21 Z A¯ 22 X B¯ 1 + A¯ 21 Z B¯ 2 T T T T ⎣ ⎦ X + A¯ 22 Z A¯ 21 Z A¯ 22 − Z A¯ 22 Z B¯ 2 A¯ 12 A¯ 22 T T T T ¯ ¯ ¯ ¯ ¯ ¯ ¯ B1 X + B2 Z A21 B2 Z A22 B2 Z B2 ⎡ T ⎤ T ¯ ¯ ¯ ¯ C1 C1 C1 C2 C¯ 1T F ⎦ < 0, (4.108) + ⎣ C¯ 2T C¯ 1 C¯ 2T C¯ 2 C¯ 2T F T ¯ T ¯ T −2 F C1 F C2 F F − ρ I then the dynamic output feedback control (4.56)–(4.58) asymptotically stabilizes the system (4.90)–(4.93). Proof Let us take the derivative of V (t) as follows. ¯ + p¯ T (t)Z p(t) ¯ − p¯ T (t − h)Z p(t ¯ − h) V˙ (t) = 2 x˙¯ T (t)X x(t) ⎡ ⎡ ⎤T ⎤ x(t) ¯ x(t) ¯ ¯ − h) ⎦ . ¯ − h) ⎦ M(X, Z ) ⎣ p(t = ⎣ p(t pw (t) pw (t) where M(X, Z ) denotes ⎡  ⎢ M(X, Z ) = ⎢ ⎣

T T T T X + X A¯ 11 + A¯ 21 Z A¯ 21 X A¯ 12 + A¯ 21 Z A¯ 22 X B¯ 1 + A¯ 21 Z B¯ 2 A¯ 11 T T X + A¯ 22 Z A¯ 21 A¯ 12

T Z A¯ 22 − Z A¯ 22

T A¯ 22 Z B¯ 2

B¯ 1T X + B¯ 2T Z A¯ 21

B¯ 2T Z A¯ 22

B¯ 2T Z B¯ 2

⎤ ⎥ ⎥. ⎦

If 0 > V˙ (t) + λ(ρ−2 qwT qw − pwT pw ) for some λ > 0, then it clearly holds that 0 > v(t), ˙ which provides T 0 > V˙ (t) + λ( pw pw − ρ−2 qwT qw ) ⎡ T ⎤T ⎛ ⎡ C¯ 1 C¯ 1 x(t) ¯ ⎝ ⎣ ⎦ ⎣ ¯ − h) M(X, Z ) + λ C¯ 2T C¯ 1 = p(t pw (t) F T C¯ 1

⎤⎞ ⎡ ⎤ C¯ 1T C¯ 2 C¯ 1T F x(t) ¯ ⎦⎠ ⎣ p(t ¯ − h) ⎦ , C¯ 2T C¯ 2 C¯ 2T F T T −2 ¯ pw (t) F C2 F F − ρ I

where λ can be chosen as 1 without loss of generality because we can scale X and  Z by λ−1 such as λ−1 X and λ−1 Z . This completes the proof. We remark that the inequality (4.108) can be written as

4.4 Robust Stabilizing Controls for State Delayed Systems



T A¯ 11 X + X A¯ 11 T ⎢ X A¯ 12 ⎢ ⎢ ¯ 21 Z A ⎢ ⎣ B¯ 1T X C¯ 1

X A¯ 12 −Z Z A¯ 22 0 C¯ 2

⎤ T Z X B¯ 1 C¯ 1T A¯ 21 T A¯ 22 Z 0 C¯ 2T ⎥ ⎥ −Z Z B¯ 2 0 ⎥ ⎥ < 0, X > 0, Z > 0 B¯ 2T Z −ρ2 I F T ⎦ 0 F −I

123

(4.109)

or ⎡

T X¯ A¯ 11 + A¯ 11 X¯ T ⎢ ¯ Z A¯ 12 ⎢ ⎢ A¯ 21 X¯ ⎢ ⎣ B¯ 1T C¯ 1 X¯

A¯ 12 Z¯ − Z¯ A¯ 22 Z¯ 0 C¯ 2 Z¯

T X¯ A¯ 21 B¯ 1 T ¯ ¯ Z A22 0 − Z¯ B¯ 2 B¯ 2T −ρ2 I 0 F

X¯ C¯ 1T Z¯ C¯ 2T 0 FT −I

⎤ ⎥ ⎥ ⎥ < 0, X¯ > 0, Z¯ > 0, (4.110) ⎥ ⎦

where X¯ = X −1 and Z¯ = Z −1 . Theorem 4.11 provides matrix inequalities to find the dynamic output feedback control (4.56)–(4.58) for the system (4.90)–(4.93). However, these are not convex. In fact, we can find convex formulation known as linear matrix inequalities. Theorem 4.12 Assume that there exist matrices X 11 > 0, X¯ 11 > 0, Z 11 > 0, and Z¯ 11 > 0 such that    Z 11 I X 11 I ≥ 0, ≥ 0, I X¯ 11 I Z¯ 11  T  T   T  C C ⊕ I2 MV (X 11 , Z 11 ) ⊕ I2 < 0, 0 ⊥ 0 ⊥   T    B B ⊕ I2 MU ( X¯ 11 , Z¯ 11 ) ⊕ I2 < 0, 0 ⊥ 0 ⊥ 

(4.111) (4.112) (4.113)

where ⎤ A T X 11 + X 11 A + Z 11 X 11 A1 X 11 D E 1T  ⎢ A1T X 11 −Z 11 E 2T ⎥ ⎥, MV (X 11 , Z 11 ) = ⎢ T 2 ⎣ D X 11 0 −ρ I F T ⎦ E1 E2 F −I ⎡ T T ¯ ¯ ¯ ¯ A X 11 + X 11 A + A1 Z 11 A1 X 11 D X¯ 11 E 1T + Z¯ 11 E 2T ⎢  − Z¯ 11 0 0 X¯ 11 MU ( X¯ 11 , Z¯ 11 ) = ⎢ T ⎣ 0 −ρ2 I FT D E 1 X¯ 11 + E 2 Z¯ 11 A1T 0 F −I + E 2 Z¯ 11 E 2T ⎡

⎤ ⎥ ⎥. ⎦

From such fixed values X 11 , X¯ 11 , Z 11 , and Z¯ 11 , matrices X 12 , X 22 , Z 12 , and Z 22 are determined by using the relation −1 −1 −1 −1 X 21 = X 11 − X¯ 11 , Z 12 Z 22 Z 21 = Z 11 − Z¯ 11 X 12 X 22

(4.114)

124

4 Output Feedback Stabilizing Controls

and a feasible set of {Fi j , G i , H j , J } can be found by solving the LMI problem 

0 > W = [Wi j ]i, j=1:8 ,

(4.115)

where W11 = (A + B J C)T X 11 + C T G 1T X 21 + X 11 (A + B J C) + X 12 G 1 C, W12 = (A + B J C)T X 12 + C T G 1T X 22 + X 11 B H1 + X 12 F11 , W13 = X 11 A1 , W14 = X 11 B H2 + X 12 F12 , W15 = Z 11 + C T G 2T Z 21 , W16 = Z 12 + C T G 2T Z 22 , W17 = X 11 D, W18 = E 1T , T W22 = H1T B T X 12 + F11 X 22 + X 21 B H1 + X 22 F11 , W23 = X 21 A1 , T W24 = X 21 B H2 + X 22 F12 , W25 = F21 Z 21 , T W26 = F21 Z 22 , W27 = Z 21 D, W28 = 0, W33 = −Z 11 , W34 = −Z 12 , W35 = 0, W36 = 0, W37 = 0, W38 = E 2T , T T W44 = −Z 22 , W45 = F22 Z 21 , W46 = F22 Z 22 , W47 = 0, W48 = 0, W55 = −Z 11 , W56 = −Z 12 , W57 = 0, W58 = 0,

W66 = −Z 22 , W67 = 0, W68 = 0, W77 = −ρ2 I, W78 = F T , W88 = −I. Then the dynamic output feedback control (4.56)–(4.58) with {Fi j , G i , H j , and J } asymptotically stabilizes the system (4.90)–(4.93). Proof Using the inequality in (4.109), we can obtain the LMI conditions in the theorem such as ⎡ T ⎤ T A¯ 11 X + X A¯ 11 X A¯ 12 A¯ 21 Z X B¯ 1 C¯ 1T T T ⎢ X −Z A¯ 22 Z 0 C¯ 2T ⎥ A¯ 12 ⎢ ⎥ T T T 0>⎢ Z A¯ 21 Z A¯ 22 −Q Q B¯ 2 0 ⎥ ⎢ ⎥ = Q + UΣV + VΣ U , (4.116) T T 2 T ⎣ B1 X 0 B¯ 2 Z −ρ I F ⎦ 0 F −I C¯ 1 C¯ 2 where Q, U, V, and Σ denote ⎡ T A X 11 + X 11 A A T X 12 ⎢ YT A 0 ⎢ T T ⎢ X A A 11 1 1 X 12 ⎢ ⎢ 0 0 Q=⎢ ⎢ 0 Z 11 ⎢ ⎢ 0 Z 21 ⎢ ⎣ D T X 12 D T X 11 E1 0

⎤ X 11 A1 0 Z 11 Z 12 X 11 D E 1T X 21 0 0 0 X 21 D 0 ⎥ ⎥ −Z 11 −Z 12 0 0 0 E 2T ⎥ ⎥ −Z 21 −Z 22 0 0 0 0 ⎥ ⎥, 0 0 −Z 11 −Z 12 0 0 ⎥ ⎥ 0 0 −Z 21 −Z 22 0 0 ⎥ ⎥ 0 0 0 0 −ρ2 I F T ⎦ E2 0 0 0 F −I

4.4 Robust Stabilizing Controls for State Delayed Systems



0 ⎢I ⎢ ⎢0 ⎢ ⎢0 U =T ⎢ ⎢0 ⎢ ⎢0 ⎢ ⎣0 0 ⎡ X 11 X 12 X 22 0 0 0 0 0 0 0 0

⎢ X 21 ⎢ 0 ⎢  ⎢ 0 T =⎢ 0 ⎢ ⎢ 0 ⎣

0 0 0 0 0 I 0 0 0 0 I 0 0 0 0 0

⎡ ⎤ 0 0 CT B ⎢ ⎥ 0⎥ ⎢I 0 0 ⎢0 0 0 0⎥ ⎢ ⎥ ⎢0 I 0 ⎥ 0⎥ ⎢ , V = ⎢0 0 0 ⎥ 0⎥ ⎢ ⎢0 0 0 ⎥ 0⎥ ⎢ ⎣0 0 0 ⎦ 0 0 00 0 ⎤

0 0 0 I 0 0 0 0

0 0 0 0 Z 11 Z 21 0 0

0 0 0 0 Z 12 Z 22 0 0

125

⎤ ⎥ ⎥ ⎥ ⎡ ⎤ ⎥ F11 F12 G 1 ⎥  ⎥, Σ = ⎣ F21 F22 G 2 ⎦ , ⎥ ⎥ H1 H2 J ⎥ ⎥ ⎦

0 0 0 0 0 0 I 0

0 0⎥ 0⎥     ⎥ X 11 X 12 Z 11 Z 12 0⎥ , X = , Z = . ⎥ 0⎥ X 21 X 22 Z 21 Z 22 ⎥ 0 ⎦ 0 I

0 0 0 I 0 0 0 0

0 0 0 0 0 0 I 0

We remark that ⎡

Nb1 ⎢ 0 ⎢ ⎢ 0 ⎢ ⎢ 0 U⊥ = T −1 ⎢ ⎢ Nb2 ⎢ ⎢ 0 ⎢ ⎣ 0 0

0 0 I 0 0 0 0 0

⎡ ⎤ Nc1 0 0 0 0 ⎢ 0 000 0⎥ ⎢ ⎥ ⎢ Nc2 0 0 0 0⎥ ⎢ ⎥ ⎢ 0 000 ⎥ 0⎥ ⎢ , V = ⊥ ⎢ 0 I 00 ⎥ 0⎥ ⎢ ⎢ 0 0I 0 ⎥ 0⎥ ⎢ ⎣ 0 00I ⎦ 0 I 0 000

⎤ 0 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥, 0⎥ ⎥ 0⎥ ⎥ 0⎦ I

where Nbi and Ncj denote 

Nb1 Nb2





=



B 0



 ⊥

,

Nc1 Nc2





=



CT 0

 ⊥

.

From the elimination lemma in Appendix B.1, the inequality condition (4.116) is equal to the following two conditions: 0 > V⊥T QV⊥ and 0 > U⊥T QU⊥ . Let us investigate the first condition.  T     Nc1 Nc1 ⊕ I4 ⊕ I4 , 0 > V⊥T QV⊥ = M¯ V Nc2 Nc2 ⎤ ⎡ T A X 11 + X 11 A X 11 A1 Z 11 Z 12 X 11 D1 E 1T ⎢ A1T X 11 −Z 11 0 0 0 E 2T ⎥ ⎥ ⎢ ⎢  Z 11 0 −Z 11 −Z 12 0 0 ⎥ ⎥, M¯ V = ⎢ ⎢ 0 −Z 21 −Z 22 0 0 ⎥ Z 21 ⎥ ⎢ ⎣ 0 0 0 −ρ2 I F T ⎦ D T X 11 E1 E2 0 0 F −I

126

4 Output Feedback Stabilizing Controls

which can be simplified by the Schur complement associated with the (3, 3)th and (4, 4)th entries of M¯ v such as  0>

 T    Nc1 Nc1 ⊕ I2 ⊕ I2 MV (X 11 , Z 11 ) Nc2 Nc2

because it holds that 

−1  T   Z 11 0 0 0 Z 11 Z 12 Z 11 0 0 0 Z 21 Z 22 Z 21 0 0 0 Z 21 0 0 0  T    I 000 Z 11 Z 12 I 000 = . 0000 Z 21 Z 22 0000

Let us now investigate the second condition. First, consider ⎡

T −T QT −1

A X¯ 11 + X¯ 11 A T A X¯ 12 A1 0 ⎢ ¯ 21 A T 0 0 0 X ⎢ ⎢ A1T 0 −Z 11 −Z 12 ⎢ ⎢ 0 0 −Z 21 −Z 22 =⎢ ⎢ ¯ 11 ¯ 12 X 0 0 X ⎢ ⎢ 0 0 0 0 ⎢ ⎣ 0 0 0 DT E 1 X¯ 11 E 1 X¯ 12 E 2 0

⎤ X¯ 11 0 D X¯ 11 E 1T X¯ 21 0 0 X¯ 21 E 1T ⎥ ⎥ 0 0 0 E 2T ⎥ ⎥ 0 0 0 0 ⎥ ⎥. − Z¯ 11 − Z¯ 12 0 0 ⎥ ⎥ − Z¯ 21 − Z¯ 22 0 0 ⎥ ⎥ 0 0 −ρ2 I F T ⎦ 0 0 F −I

Then, the second condition becomes  0 > U⊥T QU⊥ =

 T    Nb1 Nb1 ⊕ I4 ⊕ I4 , M¯ U Nb2 Nb2

where M¯ U denotes ⎡

A X¯ 11 + X¯ 11 A T X¯ 11 ⎢ − Z¯ 11 X¯ 11 ⎢ T ⎢  0 A1 M¯ U = ⎢ ⎢ 0 0 ⎢ ⎣ 0 DT E 1 X¯ 11 0

⎤ A1 0 D X¯ 11 E 1T 0 0 0 0 ⎥ ⎥ −Z 11 −Z 12 0 E 2T ⎥ ⎥, −Z 21 −Z 22 0 0 ⎥ ⎥ 0 0 −ρ2 I F T ⎦ E2 0 F −I

which can be simplified by the Schur complement associated with the (3, 3)th and (4, 4)th entries of M¯ U such as  0>

 T    Nb1 Nb1 ⊕ I2 ⊕ I2 MU ( X¯ 11 , Z¯ 11 ) Nb2 Nb2

4.4 Robust Stabilizing Controls for State Delayed Systems

127

because it holds that −1  T T   Z 11 Z 12 A1 0 0 E 2T A1T 0 0 E 2T 0 00 0 Z 21 Z 22 0 00 0     T   T Z¯ 11 Z¯ 12 A1T 0 0 E 2T A1 0 0 E 2T = . 0 00 0 0 00 0 Z¯ 21 Z¯ 22



This completes the proof.



Even though the cascaded-delay system approach produces a little complicated results, these results are formulated with LMIs. The robust stability condition and the robust dynamic output feedback control for state delayed systems based on the cascaded-delay system approach in Theorems 4.11 and 4.12 are newly obtained here.

4.4.3 *Extension to Systems with Time-Varying Delays Let us consider the system (4.90)–(4.93), where the delay h is not a constant but ¯ In this case, we consider two cases where h(t) time-varying such that h(t) ∈ [0, h]. is a general delay and where h(t) is differentiable with bounded derivative such as ˙ ≤ μ. h(t) ¯ For a general time-varying delay case, the following result can be obtained based on Theorems 2.14 and 3.12. Theorem 4.13 Assume that, for a given ρ, there exist matrices Q˜ 0 > 0, S˜0 > 0, ¯ G, ¯ H¯ , J¯, Y˜i j , Z˜ i j , and Σ˜ such that X > 0, X¯ > 0, F, ⎤ ⎡ ⎤ Π + h¯ Z˜ 11 h¯ Σ˜  Π + h¯ Y˜11 h¯ Σ˜  ⎣ h¯ Σ˜ T −h¯ S˜0 0 ⎦ < 0, ⎣ h¯ Σ˜ T −h¯ S˜0 0 ⎦ < 0, T T 0 −I 0 −I   ⎡ ⎤ ˜ ˜ Y22   Y22   ⎣ ⎦ ≥ 0, ¯ I ¯ X −1 X I ˜ ˜ Y22 S0 I X I X ⎡ ⎤ ˜ Z˜ 22   Z 22   ⎣ X¯ I ˜ −1 X¯ I ⎦ ≥ 0, Z˜ 22 S I X 0 I X       Z˜ 11 Z˜ 12 Y˜11 Y˜12 X¯ I ≥ 0, ≥ 0, > 0, T ˜ T ˜ I X Y˜12 Y22 Z˜ 12 Z 22 ⎡

where Π and  denote

(4.117)

(4.118)

(4.119)

(4.120)

128

4 Output Feedback Stabilizing Controls

  T A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e1 + e1 Π = e4 e4T ¯ ¯ F¯ X A + GC F¯ X A + GC    T ˜ 1 − e4 )T + e4 I De5T + e5 D T I + (e1 − e4 )Σ˜ T + Σ(e e4T − ρ2 e5 e5T X X T T + Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 + Z˜ 12 (e2 − e3 )T + (e2 − e3 ) Z˜ 12    T    T I I X¯ X¯ + e1 Q˜ 0 e1T − e3 Q˜ 0 e3T + e4 e2T + e2 e4T , A1 A1T X X I I      X¯ E T + H¯ T E bT X¯ + e  = e1 E 1T + e4 F T , 2 T T ¯T T I E + C J Eb 



and ei are defined in (2.75)–(2.79). Then, the dynamic feedback control (4.73)–(4.74) with (4.121) Dc = J¯, Cc = ( H¯ − Dc C X¯ )Y¯ −T , (4.122) Bc = Y −1 (G¯ − X B Dc ), (4.123) −1 ¯ ¯ −T −1 −T −T −1 ¯ ¯ ¯ ¯ Ac = Y F Y − Y X (A + B Dc C) X Y − Bc C X Y − Y X BCc , (4.124) Cc,1 = ( H¯ 1 − Dc C1 X¯ )Y¯ −T , (4.125) Ac,1 = Y −1 F¯1 Y¯ −T − Y −1 X (A1 +B Dc C1) X¯ Y¯ −T− Bc C1 X¯ Y¯ −T−Y −1 X BCc,1 ,

(4.126) where Y and Y¯ can be chosen as X − X¯ −1 and X¯ , respectively, asymptotically stabilizes the system (4.90)–(4.93) with a general time-varying delay h(t) ∈ [−h, 0]. Proof For the conditions (2.72)–(2.73) in Theorem 2.14, we had the conditions (3.52)–(3.54) in Theorem 3.12 for the state-feedback control. We use these conditions without static control gain matrices for the augmented closed-loop system (4.94)–(4.96). Let us partition P¯ as (4.54) and define an invertible block diagonal matrix Ti as in (4.55). Our next step is devoted to multiplying on the right sides of (3.52)–(3.54) by appropriate invertible matrices and on the left sides by their transpose to get a more manageable criterion. For (3.54), a proper invertible matrix and its corresponding entries are given below: ⎡  ⎤T    T4 0 T4 0 ¯ ¯ Y12 ⎣ ⎣ 0 I 0 ⎦ Y11 0 I T ¯ Y¯12 Y22 0 T1 0 ⎡ ⎡  ⎤T    T4 0 T4 0 ¯ 11 Z¯ 12 Z 0 ⎣ 0 I ⎦ ⎣ 0 I T ¯ Z¯ 12 Z 22 0 T1 0



⎡

0⎦

 

T1

⎤ 0⎦

T1

Y˜11 T Y˜12

 Y˜12 , Y˜22

 Z˜ 11 Z˜ 12 .  ˜T ˜ Z 12 Z 22 

4.4 Robust Stabilizing Controls for State Delayed Systems

129

For (3.52)–(3.53), a proper invertible matrix and its corresponding entries are as follows. ⎡



⎡  ⎤ ⎤ T4 0 T T ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 0 0⎥ Π + h Y11 h Σ e1 P E + e2 P E 1 ⎢ 0 0⎥ ⎥ ⎣ ⎥, ⎦⎢ 0 I ∗ −h¯ S¯0 0 ⎣ ⎦ T1 0 0 T1 0⎦ ∗ ∗ −I 0 I 0 0 I

⎡



⎡ ⎤T ⎤  ⎤ ⎡ T 0 0 0⎥ Π + h¯ Z¯ 11 h¯ Σ¯ e1 P¯ E¯ T + e2 P¯ E¯ 1T ⎢ 4 0 0⎥ ⎥ ⎣ ⎥, ⎦⎢ 0 I ∗ −h¯ S¯0 0 ⎣ 0 T1 0⎦ T1 0⎦ ∗ ∗ −I 0 I 0 0 I

T4 0 ⎢ 0 I ⎢ ⎣ 0 0 T4 0 ⎢ 0 I ⎢ ⎣ 0 0

⎤T



where each entry is computed below in detail. 

     I T4 0 = e4 A1 X¯ I e2T , 0 I X   T  T4 0 ¯ 1T } T4 0 {e4 A¯ Pe 0 I 0 I ⎡ ⎤ ¯ ¯ ¯T A + B Dc C  A X + B(Dc C X + CcTY )  ⎦ e1T = e4 ⎣ Y Bc C X¯ + X BCc Y¯ + X A + (X B D + Y B )C c c X (A + B Dc C) X¯ + Y Ac Y¯ T   A X¯ + B H¯ A + B J¯C T  e4 e1 , ¯ F¯ X A + GC    T   T4 0 T4 0 T T ¯ T T4 0 ¯ {(e1 − e4 )Σ } = (e1 − e4 )T1 Σ  (e1 − e4 )Σ˜ T , 0 I 0 I 0 I T    T4 0 T 0 = h¯ Y˜11 , {h¯ Y¯11 } 4 0 I 0 I   T   T T4 0 T4 0 T4 0 T ¯ {Y12 (e1 − e2 ) } Y¯12 T1 (e1 − e2 )T = Y˜12 (e1 − e2 )T , = 0 I 0 I 0 I T    T4 0 T 0 = h¯ Z˜ 11 , {h¯ Z¯ 11 } 4 0 I 0 I   T   T T4 0 T4 0 T4 0 T ¯ { Z 12 (e2 − e3 ) } Z¯ 12 T1 (e2 − e3 )T = Z˜ 12 (e2 − e3 )T , = 0 I 0 I 0 I   T  T4 0 T 0 {e1 Q¯ 0 e1T − e3 Q¯ 0 e3T } 4 = e1 Q˜ 0 e1T − e3 Q˜ 0 e3T , 0 I 0 I      T    T4 0 ¯ 5T } T4 0 = e4 T1T De ¯ 5T = e4 I 0 D e5T = e4 I De5T , {e4 De 0 I 0 I X Y X 0 T4 0 0 I

T

¯ 2T } {e4 A¯ 1 Pe



130



4 Output Feedback Stabilizing Controls



 T4 0 {−ρ = −ρ2 e5 e5T , 0 I ¯ 1T S¯0 T1  −h¯ S˜0 , T1T {−h¯ S¯0 }T1 = −hT T  T4 0 {e1 P¯ E¯ T }I 0 I     X¯ E T + H¯ T E bT {E X¯ + E b (Dc C X¯ + Cc Y¯ T )}T = e1 , = e1 (E + E b Dc C)T E T + C T J¯T E bT  T      T4 0 X¯ Y¯ E 1T X¯ = e2 {e2 P¯ E¯ 1T }I = e2 T1T P¯ E¯ 1T = e2 E 1T , 0 0 I I 0 I  T T4 0 {e5 F T }I = e5 F T . 0 I T4 0 0 I

T

2

e5 e5T }

Finally, for (3.54), a proper invertible matrix and its corresponding entries are 

T1 0 0 T1

T 

Y¯22 Y¯22 Y¯22 P¯ S¯0−1 P¯



  T    Z¯ 22 Z¯ 22 T1 0 T1 0 T1 0 , , 0 T1 0 T1 0 T1 Z¯ 22 P¯ S¯0−1 P¯

where each entry is derived below. T1T {Y¯22 }T1 = Y˜22 , T1T { Z¯ 22 }T1 = Z˜ 22 , ¯ 1 = T1T P¯ T1 S˜0−1 T1T P¯ T1 T1T { P¯ S¯0−1 P}T           X¯ I X¯ Y¯ I X ˜ −1 I 0 X¯ I ˜ −1 X¯ I = = S S . I 0 0 Y T 0 X Y Y¯ T 0 I X 0 I X This completes the proof.



˙ ≤ μ, If h(t) is differentiable with bounded derivative such as h(t) ¯ the following result can be obtained based on Theorems 2.15 and 3.13. Theorem 4.14 Assume that, for a given ρ, there exist matrices Q˜ 0 > 0, Q˜ 1 > 0, ¯ G, ¯ H¯ , J¯, Y˜i j , Z˜ i j , and Σ˜ such that S˜0 > 0, S˜1 > 0, X > 0, X¯ > 0, F, ⎡

⎤ ⎤ ⎡ Π + h¯ Y˜11 h¯ Σ˜ h¯ Σ˜  Π + h¯ Z˜ 11 h¯ Σ˜  ⎢ h¯ Σ˜ T −h¯ S˜0 0 ⎥ 0 ⎥ ⎢ < 0, ⎣ h¯ Σ˜ T −h¯ S˜0 0 ⎦ < 0, (4.127) ⎣ h¯ Σ˜ T 0 −h¯ S˜1 0 ⎦ 0 0 −I 0 0 −I T       Z˜ 11 Z˜ 12 Y˜11 Y˜12 X¯ I ≥ 0, ˜ T ˜ ≥ 0, > 0, (4.128) T ˜ I X Y˜12 Y22 Z 12 Z 22 ⎤ ⎡ Y˜22 Y˜22     ⎣ (4.129) X¯ I X¯ I ⎦ ≥ 0, { S˜0−1 + (1 − μ) ¯ S˜1−1 } Y˜22 I X I X

4.4 Robust Stabilizing Controls for State Delayed Systems

131



⎤ ˜ Z˜ 22   Z 22   ⎣ X¯ I ˜ −1 X¯ I ⎦ ≥ 0, Z˜ 22 S0 I X I X

(4.130)

where Π and  denote  T  A X¯ + B H¯ A + B J¯C T A X¯ + B H¯ A + B J¯C e + e e4T − ρ2 e5 e5T 1 1 ¯ ¯ F¯ X A + GC F¯ X A + GC    T ˜ 1 − e4 )T + e4 I De5T + e5 D T I + (e1 − e4 )Σ˜ T + Σ(e e4T − (1 − μ)e ¯ 2 Q˜ 1 e2T X X 



Π = e4

T T + Y˜12 (e1 − e2 )T + (e1 − e2 )Y˜12 + Z˜ 12 (e2 − e3 )T + (e2 − e3 ) Z˜ 12 + e1 ( Q˜ 0 + Q˜ 1 )e1T    T    T I I X¯ X¯ A1 A1T − e3 Q˜ 0 e3T + e4 e2T + e2 e4T , X X I I      X¯ E T + H¯ T E bT X¯ + e E 1T + e5 F T ,  = e1 2 T T T T ¯ I E + C J Eb

and ei are defined in (2.75)–(2.79). Then the dynamic feedback control (4.73)–(4.74) with (4.121)–(4.126) asymptotically stabilizes the system (4.90)–(4.93) with a time¯ 0] that is differentiable with bounded derivative such as varying delay h(t) ∈ [−h, ˙ ≤ μ. h(t) ¯ Proof Let us define some variables as Q˜ 1  T1T Q¯ 1 T1 , S˜1  T1T S¯1 T1 . The remaining proof is straightforward using a congruence transformation. This completes the proof.  Generally speaking, Theorem 4.14 can yield larger maximum allowable delays than Theorem 4.13 since the property of h(t) that is differentiable with bounded ˙ ≤ μ¯ is utilized. derivative such as h(t) The robust dynamic output feedback controls for systems with time-varying delays based on the Lyapunov–Krasovskii approach in Theorems 4.13 and 4.14 in this subsection can be found in [7] and other output feedback controls can be found in [14], which is not covered in this book.

References 1. Artstein Z (1982) Linear systems with delayed controls: a reduction. IEEE Trans Autom Control 27(4):869–879 2. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia 3. El Ghaoui L, Oustry F, Rami MA (1997) A cone complementarity linearization algorithm for static output-feedback and related problems. IEEE Trans Autom Control 42(8):1171–1176

132

4 Output Feedback Stabilizing Controls

4. Furukawa T, Shimemura E (1983) Predictive control for systems with time delay. Int J Control 37(2):399–412 5. Haurani A, Michalska HH, Boulet B (2004) Robust output feedback stabilization of uncertain time-varying state-delayed systems with saturating actuators. Int J Control 77(4):399–414 6. Kharitonov VL, Niculescu SI, Moreno J, Michiels W (2005) Static output feedback stabilization: Necessary conditions for multiple delay controllers. IEEE Trans Autom Control 50(1):82– 86 7. Ko JW, Park P (2009) Delay-dependent robust stabilization for systems with time-varying delays. Int J Control Autom Syst 7(5):711–722 8. Kwon WH, Pearson AE (1980) Feedback stabilization of linear systems with delayed control. IEEE Trans Autom Control 25(2):266–269 9. Lewis RM (1979) Control-delayed system properties via an ordinary model. Int J Control 30(3):477–490 10. Leyva-Ramos J, Pearson AE (2000) Output feedback stabilizing controller for time-delay systems. Automatica 36(4):613–617 11. Li H, Niculescu SI, Dugard L, Dion JM (1998) Robust guaranteed cost control of uncertain linear time-delay systems using dynamic output feedback. Math Comput Simul 45(3–4):349– 358 12. Li X, de Souza CE (1998) Output feedback stabilization of linear time-delay systems. Stability and Control of Time-Delay Systems. Springer, Berlin, pp 241–258 13. Méndez-Barrios CF, Niculescu SI, Chen J, Maya-Méndez M (2014) Output feedback stabilisation of single-input single-output linear systems with I/O network-induced delays. An eigenvalue-based approach. Int J Control 87(2):346–362 14. Michiels W, Niculescu SI (2007) Stability and Stabilization of Time-delay Systems. SIAM, Philadelphia 15. Smith OJ (1959) A controller to overcome dead time. ISA J 6:28–33 16. Watanabe K, Ito M (1981) A process-model control for linear systems with delays. IEEE Trans Autom Control 26(6):1261–1269 17. Watanabe K, Ito M (1981) An observer for linear feedback control laws of multivariable systems with multiple delays in controls and outputs. Syst Control Lett 1(1):54–59 18. Zhou B, Li Z, Lin Z (2013) Observer based output feedback control of linear systems with input and output delays. Automatica 49(7):2039–2052

Chapter 5

Guaranteed Cost Controls

5.1 Introduction Non-optimal stabilizing controls for input and state delayed systems are obtained without any performance criteria in Chaps. 3 and 4, where control structures are given a priori in feedback forms of single input and state delays. These controls are easy to compute but possess no optimality. Meanwhile optimal stabilizing controls for input and state delayed systems will be obtained with some LQ/LQG performance criteria in Chaps. 6 and 7 and some H∞ performance criteria in Chap. 8, respectively, where no control structures will be required a priori. The stabilizing controls in Chaps. 6, 7 and 8, such as infinite horizon controls and receding horizon controls, will be obtained fortunately in feedback forms of distributed state delay but are rather difficult to compute compared with stabilizing controls in Chaps. 3 and 4. Alternative controls, so-called guaranteed cost controls (GCC), are looked at, where stabilizing control structures are given a priori in feedback forms of single state and input delays for given performance bounds. The computation of guaranteed cost controls is shown to be rather easy. Therefore, these controls may be in between non-optimal stabilizing controls and optimal stabilizing controls. Guaranteed cost controls can be obtained for those problems dealt with in Chaps. 3 and 4. But this chapter deals with only some basic problems, compared with Chaps. 3 and 4 due to the space limit. Thus guaranteed cost controls for systems with time-varying delay and unknown delays are not be dealt with in this chapter. It is noted that the control structures in this chapter are broader than those in Chaps. 3 and 4 in that the formers are in feedback forms of single state and input delays while the latters in feedback forms of single state delay only. Controls with simpler feedback structures are discussed in each subsection. This chapter is outlined as follows. In Sect. 5.2, state feedback guaranteed LQ controls are obtained for input delayed systems. First a simple guaranteed LQ control for a predictive LQ cost is obtained with guaranteed LQ performance bounds, which is given in a feedback form of memoryless state and distributed input delays. © Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6_5

133

134

5 Guaranteed Cost Controls

Next a guaranteed LQ control for a standard LQ cost is obtained with guaranteed LQ performance bounds, which is given in a feedback form of single state and input delays. Then an output feedback guaranteed LQ control for the standard LQ cost is obtained with a guaranteed LQ performance bound in a dynamic output feedback form via the cascaded-delay system approach. In Sect. 5.3, state feedback guaranteed LQ controls are obtained for state delayed systems. First both delayindependent and delay-dependent guaranteed LQ controls for standard LQ costs are obtained with guaranteed LQ performance bounds in feedback forms of single state and input delays. Next a robust guaranteed LQ control for state delayed systems with model uncertainties is explored for the standard LQ cost. Then an output feedback guaranteed LQ control for the standard LQ cost is obtained with a guaranteed LQ performance bound in a dynamic output feedback form via the cascaded-delay system approach. In Sect. 5.4, state feedback guaranteed H∞ controls are obtained for input delayed systems. First a simple guaranteed H∞ control for a predictive H∞ cost is obtained with a guaranteed H∞ performance bound, which is given in a feedback form of memoryless state and distributed input delays. Next a guaranteed H∞ control for a standard H∞ cost is obtained with a guaranteed H∞ performance bound, which is given in a feedback form of single state and input delays. Then an output feedback guaranteed H∞ control for the standard H∞ cost is obtained with a guaranteed H∞ performance bound in a dynamic output feedback form via the cascaded-delay system approach. In Sect. 5.5, state feedback guaranteed H∞ controls are obtained for state delayed systems. First both delay-independent and delay-dependent guaranteed H∞ controls for standard H∞ costs are obtained with guaranteed H∞ performance bounds in feedback forms of single state and input delays. Next a robust guaranteed H∞ control for state delayed systems with model uncertainties is explored for the standard H∞ cost. Then an output feedback guaranteed H∞ control for a standard H∞ cost is obtained with a guaranteed H∞ performance bound in a dynamic output feedback form via the cascaded-delay system approach. It is noted that guaranteed H∞ controls in this chapter quite differ from H∞ optimal controls in Chap. 8. But both are often called just as H∞ controls in many literatures. References for contents of this chapter are listed at the end of this chapter and are cited in each subsection for more information and further reading.

5.2 Guaranteed LQ Controls for Input Delayed Systems Consider a linear system with a single input delay given by x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h)

(5.1)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0), where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input, and h denotes the constant delay. Associated with the infinite horizon LQ cost J

5.2 Guaranteed LQ Controls for Input Delayed Systems

 J =



  xT (s)Qx(s) + uT (s)Ru(s) ds

135

(5.2)

t0

for a given time-delay system (5.1), the guaranteed LQ control is defined as follows. Definition 5.1 For the system (5.1) and the cost J (5.2), if there exist a feedback control u(t) and a positive scalar η such that the closed-loop system is asymptotically stable and the cost J (5.2) is upper bounded by η such that J ≤ η, then η is said to be a guaranteed LQ performance bound and u(t) is said to be a guaranteed LQ control for the system (5.1). It is noted that η ≥ η ∗ , where the global minimum of the LQ cost, say η ∗ , is given as   ∞ ∗ T T x (s)Qx(s) + u (s)Ru(s) ds. η = min u

t0

Before moving on, we introduce a lemma which is essential for developing alternative LQ controls for both input and state delayed systems. Lemma 5.1 For the system (5.1), if there exist continuous nondecreasing functions uGC (s) and vGC (s) that are positive for s > 0 and uGC (0) = vGC (0) = 0 and if there exists a continuous function V : R × Cn,h → R and a feedback control u(t) such that uGC (x(t)) ≤ V (t, xt ) ≤ vGC (xt c ),   V˙ (t, xt ) ≤ − xT (t)Qx(t) + uT (t)Ru(t) , t > t0

(5.3) (5.4)

with Q > 0 and R > 0 along the solution to the system (5.1), then a guaranteed LQ control u(t) asymptotically stabilizes the system (5.1) with the corresponding guaranteed LQ performance bound V (0), i.e. J ≤ V (t0 , xt0 ). Proof Integrating both sides of (5.4) from t0 to ∞ gives 



V (∞, x∞ ) − V (t0 , xt0 ) ≤ −

  xT (s)Qx(s) + uT (s)Ru(s) ds.

t0

Since V (t, xt ) is non-negative and non-increasing from the conditions (5.3) and (5.4), V (t, xt ) is bounded below and thus converges to a constant greater than or equal to zero as t goes to infinity, i.e. V (∞, x∞ ) = c ≥ 0. Therefore, V˙ (t, xt ) = 0 as t goes to infinity, which ensures the asymptotic stability of the closed-loop system due to Q > 0. Then, we have  J = t0



  T T x (s)Qx(s) + u (s)Ru(s) ds ≤ V (t0 , xt0 ) − V (∞, x∞ ) ≤ V (t0 , xt0 ).

136

5 Guaranteed Cost Controls

Therefore, u(t) is a guaranteed LQ control and V (t0 ) is the corresponding guaranteed LQ performance bound. This completes the proof.  Lemma 5.1 for guaranteed LQ controls for input and state delayed systems can be found similarly in [7]. It is noted that control structures of guaranteed LQ controls are often given a priori in feedback forms of state and input delays and then gain matrices of feedback controls are sought to satisfy (5.4). The LQ optimal controls for the cost (5.2) in Chap. 6 do not assume a control structure and thus the computation of the optimal control is rather complex. It is noted that Definition 5.1 and Lemma 5.1 are applicable for both input delayed systems and state delayed systems. In this section, we consider two costs, a predictive cost and a standard cost, where

the former uses the state predictor z(t) = xˆ (t + h) defined as the predicted state x(t + h) with the constraint ut+h ≡ 0 as 

t

z(t) = xˆ (t + h) = e x(t) +

eA(t−τ ) B1 u(τ )d τ

Ah

(5.5)

t−h

and the latter uses the state x(t) directly.

5.2.1 State Feedback Guaranteed LQ Control for Predictive Costs For the input delayed system (5.1), we consider the predictive cost  J =



 xˆ (s + h)Qˆx(s + h) + u (s)Ru(s) ds,



T

T

(5.6)

t0

where Q > 0, R > 0 and z(t) is a state predictor in (5.5), i.e. z(t) = xˆ (t + h) = x(t + h; ut+h ≡ 0). Therefore, the cost becomes 



J =

  T T z (s)Qz(s) + u (s)Ru(s) ds.

t0

It is interesting to see that z(t) satisfies an ordinary system such as ¯ z˙ (t) = Az(t) + Bu(t),

(5.7)

where B¯ = eAh B + B1 , as in (3.11). Under the assumption that (A, eAh B + B1 ) is stabilizable, we construct the state predictor based control as   Ah u(t) = Kz(t) = K e x(t) +

t

t−h

e

A(t−τ )

 B1 u(τ )d τ .

(5.8)

5.2 Guaranteed LQ Controls for Input Delayed Systems

137

Let us choose a Lyapunov function as follows. V (t) = z T (t)Xz(t),

(5.9)

where X > 0. Using Lemma 5.1, a state feedback guaranteed LQ control based on the state predictor can be obtained in the following theorem. Theorem 5.1 For a given η > 0, if there exist matrices X¯ and K¯ such that ⎤ ¯ T + (AX¯ + (eAh B + B1 )K) ¯ X¯ K¯ T (AX¯ + (eAh B + B1 )K) ⎣ X¯ −Q−1 0 ⎦ < 0, (5.10) K¯ 0 −R−1

T η z (t0 ) > 0, (5.11) z(t0 ) X¯ ⎡

t where z(t0 ) = eAh x(t0 ) + t00−h e−A(t0 −τ ) B1 u(τ )d τ , then the guaranteed LQ control (5.8) with K = K¯ X¯ −1 asymptotically stabilizes the system (5.1) with the guaranteed LQ performance bound η. Proof By Lemma 5.1, we have the following condition 0≥

  d V (t) + z T (t)Qz(t) + uT (t)Ru(t) = z T (t)X LXz(t), dt

where K¯ = K X¯ and X¯ = X −1 and ¯ L = AX¯ + X¯ AT + (eAh B + B1 )K¯ + K¯ T (eAh B + B1 )T + X¯ QX¯ + K¯ T RK, which can be guaranteed if (5.10) holds with some efforts related to the Schur complement. In this case, the cost (5.6) is bounded by V (t0 ), i.e. J ≤ V (t0 ), by Lemma 5.1. Furthermore, using the LMI (5.11), we can get the relation that V (t0 ) = z T (t0 )X¯ −1 z(t0 ) ≤ η. This completes the proof.



We can minimize η in Theorem 5.1 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.6). The state feedback guaranteed LQ control for input delayed systems with predictive costs in Theorem 5.1 follows similarly from [1, 2, 5] for ordinary systems once they are transformed to delay-free systems.

138

5 Guaranteed Cost Controls

5.2.2 State Feedback Guaranteed LQ Control for Standard Costs For the input delayed system (5.1), we consider an LQ cost given by 



J =

  xT (s)Qx(s) + uT (s)Ru(s) ds,

(5.12)

t0

where Q > 0 and R > 0. For guaranteed LQ controls for the standard cost (5.12), we introduce a delay-independent criterion for the input delayed system (5.1). First, we consider a Lyapunov functional such as  V (t) = xT (t)Xx(t) +

t

t−h



x(s) u(s)

T

Z11 Z12 Z21 Z22



x(s) ds, u(s)

(5.13)

where X > 0 and [Zij ]2×2 > 0. For more general control structure, we introduce a control including not only the current state x(t) but also both the delayed state x(t − h) and the delayed input u(t − h) such as u(t) = Kx(t) + K1 x(t − h) + K2 u(t − h),

(5.14)

where K and Ki are designed to guarantee the asymptotic stability of the system (5.1) with the LQ cost J (5.12) less than η. Using Lemma 5.1, a state feedback guaranteed LQ control for the system (5.1) can be obtained in the following theorem. ¯ K¯ i and M Theorem 5.2 For a given η > 0, if there exist matrices X¯ > 0, Z¯ ij , K, such that ⎡ ⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 K¯ T K¯ T 0 ⎥ 2 2 ⎢ ⎥ ⎢ X¯ (5.15) 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎢ ⎥ < 0, ⎢ K¯ ⎥ ¯1 ¯ 2 −Z¯ 21 −Z¯ 22 0 K 0 K ⎢ ⎥ ⎣ K¯ K¯ 1 0 0 −R−1 0 ⎦ K¯ 2 X¯ 0 0 0 0 0 −Q−1 ⎡ ⎤

M LT1 LT2 η − Trace(M ) xT (t0 ) > 0, ⎣ L1 Z¯ 11 Z¯ 12 ⎦ > 0, (5.16) x(t0 ) X¯ L1 Z¯ 21 Z¯ 22 where (i, j) denote ¯ T + AX¯ + BK, ¯ (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 (1, 1) = (AX¯ + BK)

and L1 and L2 are computed from

5.2 Guaranteed LQ Controls for Input Delayed Systems



L1 L2



L1 L2

T

 =



t0

t0 −h

x(s) u(s)

139



x(s) u(s)

T ds,

(5.17)

then the guaranteed LQ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.1) with the guaranteed LQ performance bound η. Proof x˙ (t) = (A + BK)x(t) + BK1 x(t − h) + (B1 + BK2 )u(t − h).

(5.18)

By Lemma 5.1, we have the following condition ⎡ ⎤T ⎡ ⎤   x(t) x(t) d 0 ≥ V (t) + xT (t)Qx(t) + uT (t)Ru(t) = ⎣ x(t − h) ⎦ M ⎣ x(t − h) ⎦ , dt u(t − h) u(t − h) where M denotes ⎡ ⎤ (A + BK)T X + X (A + BK) + Q XBK1 X (B1 + BK2 ) ⎦ −Z11 −Z12 K1T BT X M=⎣ T (B1 + K2T BT )X −Z21 −Z22 ⎡ ⎡ T ⎤ ⎡ T ⎤T ⎤ ⎤ ⎡

I KT T I KT K K Z Z 11 12 ⎣ 0 K1T ⎦ + ⎣ K1T ⎦ R ⎣ K1T ⎦ , + ⎣ 0 K1T ⎦ Z21 Z22 0 K2T 0 K2T K2T K2T which can be guaranteed if it holds that M < 0. Let us define

Z¯ 11 Z¯ 21

Z¯ 12 Z¯ 22



=

Z11 Z12 Z21 Z22

−1

and use the Schur complement to get ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ 0>⎢ ⎢ ⎢ ⎢ ⎢ ⎣

 (A + BK)T X +X (A + BK) K1T BT X T (B1 + K2T BT )X I K K QT /2

XBK1 X (B1 + BK2 ) −Z11 −Z21 0 K1 K1 0

−Z12 −Z22 0 K2 K2 0

I

KT

0 0 −Z¯ 11 −Z¯ 21 0 0

K1T K2T −Z¯ 12 −Z¯ 22 0 0

⎤ K T Q1/2 ⎥ ⎥ K1T 0 ⎥ ⎥ K2T 0 ⎥ ⎥. 0 0 ⎥ ⎥ 0 0 ⎥ ⎥ −R−1 0 ⎦ 0 −I

140

5 Guaranteed Cost Controls

Let us now use a congruence transformation associated with a non-singular matrix X −1 ⊕



Z11 Z12 Z21 Z22

−1

⊕I ⊕I ⊕I ⊕I

to get the LMI (5.15), where X¯ = X −1 and

   Z11 Z12 −1  −1 ¯ ¯ ¯ K = KX , K1 K2 = K1 K2 . Z21 Z22 In this case, the cost (5.12) is bounded by V (t0 ), i.e. J ≤ V (t0 ), by Lemma 5.1. Furthermore, the guaranteed LQ performance bound η can be written as





T  ¯ 11 Z¯ 12 −1 t0 x(s) Z x(s) η ≥ V (t0 ) = x (t0 )X¯ x(t0 ) + Trace ds u(s) Z¯ 21 Z¯ 22 t0 −h u(s) 

−1  T ¯ ¯ L Z L1 Z 1 11 12 = xT (t0 )X¯ −1 x(t0 ) + Trace , L2 L2 Z¯ 21 Z¯ 22 T

−1



which results in the LMI (5.16). This completes the proof.



We can minimize η in Theorem 5.2 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.12). When designing controls, we may not want to use u(t − h) or x(t − h) for the control due to the simple structure, i.e. u(t) = Kx(t) + K2 u(t − h),

(5.19)

u(t) = Kx(t) + K1 x(t − h).

(5.20)

For the controls (5.19) or (5.20), the corresponding results can be directly obtained from Theorem 5.2 by taking K¯ 1 = 0 or K¯ 2 = 0 since K¯ 1 and K¯ 2 can be defined for K1 = 0 and K2 = 0 from 

   Z¯ 11 Z¯ 12 ¯ ¯ , K1 K2 = K1 K2 Z¯ 21 Z¯ 22

respectively. If we do not have any information on the delay constant h, we cannot use both x(t − h) and u(t − h) in the control. Or we may want to use x(t) only for the feedback control due to the simple structure, i.e. u(t) = Kx(t).

(5.21)

The corresponding result for the memoryless state feedback control (5.21) can be directly obtained from Theorem 5.2 by taking both K¯ 1 = 0 and K¯ 2 = 0.

5.2 Guaranteed LQ Controls for Input Delayed Systems

141

The state feedback guaranteed LQ control for standard costs in Theorem 5.2 is newly obtained here and similar restrictive results for memoryless control can be found as in [3, 4].

5.2.3 Output Feedback Guaranteed LQ Control for Standard Costs Consider the input delayed system x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h),

(5.22)

y(t) = Cx(t)

(5.23)

with the initial conditions x(t0 ) = 0 and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0). As mentioned in Sect. 4.3.4, a dynamic output feedback control with the cascaded-delay system description could be very flexible for LMI conditions. Therefore, a dynamic output feedback control follows a cascaded-delay system description such as x˙ c (t) = F11 xc (t) + F12 pc (t − h) + G 1 y(t),

(5.24)

pc (t) = F21 xc (t) + F22 pc (t − h) + G 2 y(t), u(t) = H1 xc (t) + H2 pc (t − h) + J11 y(t)

(5.25) (5.26)

with the initial conditions xc (t0 ) = 0 and pc (t0 + θ) = 0 for θ ∈ [−h, 0). The resulting augmented closed-loop system is then

A¯ x˙¯ (t) = ¯ 11 p¯ (t) A21

A¯ 12 A¯ 22



x¯ (t) , p¯ (t − h)

(5.27)

where

A¯ x(t) u(t) , p¯ (t) = , ¯ 11 x¯ (t) = xc (t) pc (t) A21





A¯ 12 A¯ 22





A + BJ11 C G1C ⎢ =⎢ ⎣ J11 C G2C

BH1 F11 H1 F21

B1 0 0 0

⎤ BH2 F12 ⎥ ⎥. H2 ⎦ F22

Consider a Lyapunov functional  V (t) = x¯ T (t)X x¯ (t) +

t t−h

p¯ T (α)Z p¯ (α)d α

(5.28)

142

5 Guaranteed Cost Controls

with X > 0 and Z > 0, where



X11 X12 Z11 Z12 , Z= . X = X21 X22 Z21 Z22

(5.29)

Using Lemma 5.1, an output feedback guaranteed LQ control for the system (5.22)– (5.23) can be obtained in the following theorem. Theorem 5.3 For a given η > 0, assume there exist matrices X11 , X¯ 11 , Z11 , and Z¯ 11 such that

CT 0T

T

T

C MV (X11 , Z11 ) T < 0, 0 ⊥ ⊥

X11 I Z11 ≥ 0, I X¯ 11 I

MU (X¯ 11 , Z¯ 11 , Z11 ) < 0,

I ≥ 0, Z¯ 11

η − xT (t0 )X11 x(t0 ) − Trace(LT Z11 L) > 0,

(5.30) (5.31) (5.32)

where MV (X11 , Z11 ) and MU (X¯ 11 , Z¯ 11 ) denote

AT X11 + X11 A + Q X11 B1 , B1T X11 −Z11 ⎤ ⎡ 0 X¯ 11 (1, 1) B1 Z11 T 0 0 ⎥ ⎢ Z11 B1 −Z11 ⎥, MU (X¯ 11 , Z¯ 11 , Z11 ) = ⎢ −1 ⎣ 0 ¯ 0 −Z11 − R 0 ⎦ X¯ 11 0 0 −Q−1 (1, 1) = AX¯ 11 + X¯ 11 AT − BR−1 BT



MV (X11 , Z11 ) =

(5.33)

(5.34)

and L is computed from

LLT =



t0 t0 −h

u(α)uT (α)d α.

(5.35)

From such fixed values X11 , X¯ 11 , Z11 , and Z¯ 11 , matrices X12 , X22 , Z12 , and Z22 are determined by using the relation −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 X12 X22

(5.36)

and a feasible set of {Fij , G i , Hj , J11 } can be found by solving the LMI problem

0 > W = [Wij ]i,j=1:8 ,

(5.37)

5.2 Guaranteed LQ Controls for Input Delayed Systems

143

where W11 = (A + BJ11 C)T X11 + C T G T1 X21 + X11 (A + BJ11 C) + X12 G 1 C, W12 = (A + BJ11 C)T X12 + C T G T1 X22 + X11 BH1 + X12 F11 , W13 = X11 B1 , T Z + CT GT Z , W14 = X11 BH2 + X12 F12 , W15 = C T J11 11 2 21 T Z + CT GT Z , W T T W16 = C T J11 12 17 = I , W18 = C J11 , 2 22 T X + X BH + X F , W W22 = H1T BT X12 + F11 22 21 1 22 11 23 = X21 B1 , T Z , W24 = X21 BH2 + X22 F12 , W25 = H1T Z11 + F21 21 T Z , W T W26 = H1T Z12 + F21 22 27 = 0, W28 = H1 , T, W33 = −Z11 , W34 = −Z12 , W35 = 0, W36 = 0, W37 = 0, W38 = C T J11 T Z , W44 = −Z22 , W45 = H2T Z11 + F22 21 T Z , W T W46 = H2T Z12 + F22 22 47 = 0, W48 = H2 , W55 = −Z11 , W56 = −Z12 , W57 = 0, W58 = 0,

W66 = −Z22 , W67 = 0, W68 = 0, W77 = −Q−1 , W78 = 0, W88 = −R−1 .

Then the output feedback guaranteed LQ control (5.24)–(5.26) with such a feasible set of {Fij , G i , Hj , J11 } asymptotically stabilizes the system (5.22)–(5.23) with the guaranteed LQ performance bound η. Proof By Lemma 5.1, we have the following condition 0≥

 

T

d x¯ (t) x¯ (t) V (t) + xT (t)Qx(t) + uT (t)Ru(t) = M , p¯ (t − h) p¯ (t − h) dt

where M denotes

T E E1 A¯ T11 X + X A¯ 11 X A¯ 12 Q 1 + M= 0 0 A¯ T12 X 0  T  ¯ T ¯ ¯ T ¯ A21 Z A21 A21 Z A22 E2 E2 R + + , E3 E3 A¯ T22 Z A¯ 21 A¯ T22 Z A¯ 22 − Z       E1T = I 0 , E2T = J11 C H1 , E3T = 0 H2 ,

which can be guaranteed if it holds that M < 0. The above inequality can be written via the Schur complement as ⎡

A¯ T11 X + X A¯ 11 ⎢ A¯ T12 X ⎢ ⎢ 0>⎢ Z A¯ 21 ⎣ E1T E2T

X A¯ 12 −Z Z A¯ 22 0 E3T

⎤ A¯ T21 Z E1 E2 A¯ T22 Z 0 E3 ⎥ ⎥ T T T −Z 0 0 ⎥ ⎥ = Q + UΣV + VΣ U , −1 ⎦ 0 −Q 0 0 0 −R−1 (5.38)

144

5 Guaranteed Cost Controls

where Q, U, V , and Σ denote ⎡ (1, 1) AT X12 X11 B1 0 ⎢ X21 A 0 X21 B1 0 ⎢ T ⎢ B X11 BT X12 −Z11 −Z12 1 ⎢ 1 ⎢ 0 0 −Z21 −Z22 Q=⎢ ⎢ Z11 0 0 Z11 0 0 ⎢ ⎢ Z21 0 0 Z21 0 0 ⎢ ⎣ I 0 0 0 0 0 0 0

0T Z11 0 0T Z11 0 −Z11 −Z21 0 0

⎤ 0T Z12 I 0 0 0 0 ⎥ ⎥ T 0 Z12 0 0 ⎥ ⎥ 0 0 0 ⎥ ⎥, −Z12 0 0 ⎥ ⎥ −Z22 0 0 ⎥ ⎥ 0 −Q−1 0 ⎦ 0 0 −R−1

(1, 1) = AT X11 + X11 A, ⎡ ⎡ ⎤T ⎤T 0 I 000000 0 I 000000 UT = ⎣ 0 0 0 0 0 I 0 0 ⎦ T , VT = ⎣ 0 0 0 I 0 0 0 0 ⎦ , BT 0 0 0 I 0 0 I C000000I ⎡ ⎤



F11 F12 G 1 Z11 Z12 X11 X12 ⎣ ⎦ Σ = F21 F22 G 2 , T = ⊕I ⊕I ⊕ ⊕ I ⊕ I. X21 X22 Z21 Z22 H1 H2 J11 We remark that ⎡

I ⎢ 0 ⎢ ⎢ 0 ⎢ ⎢ 0 U⊥ = T −1 ⎢ ⎢ 0 ⎢ ⎢ 0 ⎢ ⎣ 0 −BT

0 0 I 0 0 0 0 0

0 0 0 I 0 0 0 0

0 0 0 0 I 0 0 −I

⎤ ⎡ 0 Nc1 0 0 0 ⎢ 0 000 0⎥ ⎥ ⎢ ⎢ Nc2 0 0 0 0⎥ ⎥ ⎢ ⎥ ⎢ 0 000 0⎥ ⎢ , V = ⊥ ⎥ ⎢ 0 I 00 0⎥ ⎢ ⎥ ⎢ 0 0I 0 0⎥ ⎢ ⎦ ⎣ 0 00I I 0 0 000

⎤ 0 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥, 0⎥ ⎥ 0⎥ ⎥ 0⎦ I

where Nc1 and Nc2 denote

Nc1 Nc2





CT = 0



.

From the elimination lemma in Appendix B.1, the inequality condition 0 > Q + UΣV T + VΣ T U T is equal to the following two conditions: 0 > V⊥T QV⊥ and 0 > U⊥T QU⊥ . Let us investigate the first condition.  0 > V⊥T QV⊥ =

T 

 Nc1 Nc1 ¯V ⊕I ⊕I ⊕I ⊕I M ⊕I ⊕I ⊕I ⊕I , Nc2 Nc2

5.2 Guaranteed LQ Controls for Input Delayed Systems

145

where ⎤ AT X11 + X11 A X11 B1 0 0 I 0 ⎢ −Z11 0 0 0 0 ⎥ B1T X11 ⎥ ⎢ ⎥ ⎢ −Z 0 0 0 0 −Z 11 12 ⎥, ⎢ ¯V = M ⎥ ⎢ −Z 0 0 0 0 −Z 21 22 ⎥ ⎢ ⎣ I 0 0 0 −Q−1 0 ⎦ 0 0 0 0 0 −R−1 ⎡

which can be simplified by the Schur complement associated with the (3, 3)th , (4, 4)th , ¯ V to get the first condition in (5.30). Let us now (5, 5)th and (6, 6)th entries of M investigate the second condition. First, consider T −T QT −1 ⎡

AX¯ 11 + X¯ 11 AT ⎢ X¯ 21 AT ⎢ ⎢ B1T ⎢ ⎢ 0 ⎢ ⎢ 0 ⎢ ⎢ 0 ⎢ ⎣ X¯ 11 0

⎤ AX¯ 12 B1 0 0 0 X¯ 11 0 0 0 0 0 0 X¯ 21 0 ⎥ ⎥ 0 −Z¯ 11 −Z¯ 12 0 0 0 0 ⎥ ⎥ 0 −Z¯ 21 −Z¯ 22 0 0 0 0 ⎥ ⎥. X¯ 12 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎥ 0 0 0 −Z¯ 21 −Z¯ 22 0 0 ⎥ ⎥ 0 0 0 −Q−1 0 ⎦ X¯ 21 0 0 0 0 0 0 0 −R−1

Then, the second condition becomes ⎡

⎤ AX¯ 11 + X¯ 11 AT − BR−1 BT B1 0 0 X¯ 11 ⎢ B1T −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎢ ⎥ T ¯ 21 −Z¯ 22 0 > U⊥ QU⊥ = ⎢ 0 − Z 0 0 ⎥ ⎢ ⎥, ⎣ 0 0 0 −Z¯ 11 − R−1 0 ⎦ X¯ 11 0 0 0 −Q−1 which can be simplified by the Schur complement associated with the (3, 3)th entry to get the second condition in (5.30). In this case, the cost (5.12) is bounded by the Lyapunov functional V (0) (5.28), i.e. J ≤ V (t0 ), by Lemma 5.1. Furthermore, the guaranteed LQ cost η is written as   η ≥ V (t0 ) = x (t0 )X11 x(t0 ) + Trace Z11 T

t0 t0 −h

 u(α)u (α)d α T

= xT (t0 )X11 x(t0 ) + Trace(LT Z11 L), where L is defined in (5.35), which results in the LMI (5.32). To generate the controller matrices {Fij , G i , Hj , J11 } in (5.24)–(5.26), we can determine X12 , X22 , Z12 , and Z22 by using the relation (5.36) −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 . X12 X22

(5.39)

146

5 Guaranteed Cost Controls

After determining the values Xij and Zij , {Fij , G i , Hj , J11 } appear linearly in the relation (5.38), which allows us to use Wij in the theorem. This completes the proof.  Even though the cascaded-delay system approach produces a little complicated results, these results are formulated with LMIs. We can minimize η in Theorem 5.3 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.12). Generally, the optimizing control problems for state delayed systems are more difficult to solve compared to those for input delayed systems. However, guaranteed LQ controls are easily applicable for state delayed systems. We introduce several guaranteed LQ controls for state delayed systems in the next section. The output feedback guaranteed LQ control for standard costs in Theorem 5.3 can be found similarly as in [29].

5.3 Guaranteed LQ Controls for State Delayed Systems Consider the state delayed system x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t)

(5.40)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0].

5.3.1 State Feedback Guaranteed LQ Control We introduce two guaranteed LQ controls for the state delayed system (5.40) with the standard cost (5.12) via delay-independent and delay-dependent criteria. Delay-Independent Guaranteed LQ Control Consider the Lyapunov functional (5.13) and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.40) with the initial condition u(t0 + θ) = 0 for θ ∈ [−h, 0) and the LQ cost J (5.12) upper bounded by η. Using Lemma 5.1, a delay-independent state feedback guaranteed LQ control for the system (5.40) can be obtained in the following theorem. ¯ K¯ i and M Theorem 5.4 For a given η > 0, if there exist matrices X¯ > 0, Z¯ ij , K, such that

5.3 Guaranteed LQ Controls for State Delayed Systems

147



⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 K¯ T K¯ T 0 ⎥ 2 2 ⎢ ⎥ ⎢ X¯ 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎢ ⎥ < 0, ⎢ K¯ ⎥ ¯1 ¯ 2 −Z¯ 21 −Z¯ 22 0 K 0 K ⎢ ⎥ −1 ⎣ K¯ K¯ 1 0 0 −R 0 ⎦ K¯ 2 X¯ 0 0 0 0 0 −Q−1 ⎡ ⎤

M LT1 LT2 T η − Trace(M ) x (t0 ) > 0, ⎣ L1 Z¯ 11 Z¯ 12 ⎦ > 0, x(t0 ) X¯ L1 Z¯ 21 Z¯ 22

(5.41)

(5.42)

where (i, j) denote ¯ T + AX¯ + BK, ¯ (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 (1, 1) = (AX¯ + BK) and L1 and L2 are computed from

L1 L2



L1 L2

T

 =

t0



t0 −h

x(s) u(s)



x(s) u(s)

T ds,

(5.43)

then the guaranteed LQ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.40) with the guaranteed LQ performance bound η. Proof x˙ (t) = (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h).

(5.44)

By Lemma 5.1, we have the following condition ⎡ ⎤T ⎡ ⎤   x(t) x(t) d 0 ≥ V (t) + xT (t)Qx(t) + uT (t)Ru(t) = ⎣ x(t − h) ⎦ M ⎣ x(t − h) ⎦ , dt u(t − h) u(t − h) where M denotes ⎡ ⎤ (A + BK)T X + X (A + BK) + Q X (A1 + BK1 ) XBK2 (AT1 + K1T BT )X −Z11 −Z12 ⎦ M=⎣ K2T BT X −Z21 −Z22

148

5 Guaranteed Cost Controls



⎡ T ⎤ ⎡ T ⎤T ⎤ ⎤ ⎡

I KT T I KT K K Z Z + ⎣ 0 K1T ⎦ 11 12 ⎣ 0 K1T ⎦ + ⎣ K1T ⎦ R ⎣ K1T ⎦ , Z21 Z22 0 K2T 0 K2T K2T K2T which can be guaranteed if it holds that M < 0. Let us define

Z¯ 11 Z¯ 12 Z¯ 21 Z¯ 22





Z11 Z12 = Z21 Z22

−1

and use the Schur complement to get ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ 0>⎢ ⎢ ⎢ ⎢ ⎢ ⎣

  (A + BK)T X XA1 + XBK2 I +X (A + BK) XBK1 T T T −Z11 −Z12 0 A1 X + K1 B X −Z21 −Z22 0 K2T BT X I 0 0 −Z¯ 11 K K1 K2 −Z¯ 21 K2 0 K K1 I 0 0 0

⎤ K

T

K1T K2T

K

T

K1T K2T

−Z¯ 12 0 −Z¯ 22 0 0 −R−1 0 0

I

⎥ ⎥ 0 ⎥ ⎥ 0 ⎥ ⎥. 0 ⎥ ⎥ 0 ⎥ ⎥ 0 ⎦ −Q−1

Let us now use a congruence transformation associated with a non-singular matrix X

−1



Z Z ⊕ 11 12 Z21 Z22

−1

⊕I ⊕I ⊕I ⊕I

to get the LMI (5.41), where X¯ = X −1 and

   Z11 Z12 −1  −1 ¯ ¯ ¯ K = KX , K1 K2 = K1 K2 . Z21 Z22 In this case, the cost (5.12) is bounded by V (t0 ), i.e. J ≤ V (t0 ), by Lemma 5.1. Furthermore, the guaranteed LQ performance bound η can be written as η ≥ V (t0 ) = x (t0 )X¯ −1 x(t0 ) + Trace T



L1 L2

T

Z¯ 11 Z¯ 12 Z¯ 21 Z¯ 22

which results in the LMI (5.42). This completes the proof.

−1

L1 L2

 , 

We can minimize η in Theorem 5.4 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.12). For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.41) in Theorem 5.4. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0.

5.3 Guaranteed LQ Controls for State Delayed Systems

149

Delay-Dependent Guaranteed LQ Control Consider a Lyapunov functional given by  V (t) = xT (t)Xx(t) +  +

0 −h



t



t−h t

x(s) u(s)

T

Z11 Z12 Z21 Z22



x˙ T (s)S x˙ (s)dsdr,

x(s) ds u(s) (5.45)

t+r

with X > 0, [Zij ]2×2 > 0 and S > 0 and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.40) with the initial condition u(t0 + θ) = 0 for θ ∈ [−h, 0) and the LQ cost J (5.12) upper bounded by η. Using Lemma 5.1, a delay-dependent state feedback guaranteed LQ control for the system (5.40) can be obtained in the following theorem. Theorem 5.5 For a given η > 0, if there exist matrices X¯ > 0, S¯ > 0, [Z¯ ij ]2×2 > 0, ¯ K¯ i and M such that S¯ ij , K, ⎤ W11 − W00 W12 W13 T ⎣ W12 −h−1 W¯ S 0 ⎦ < 0, T W13 0 −W¯ S

η − Trace(M ) xT (t0 ) > 0, x(t0 ) X¯ ⎡ ⎤ M LT1 LT2 0 0 0 LT3 ⎢ L1 Z¯ 11 Z¯ 12 0 0 0 0 ⎥ ⎢ ⎥ ⎢ L2 Z¯ 21 Z¯ 22 0 0 0 0 ⎥ ⎢ ⎥ ⎢ 0 0 0 S¯ 11 S¯ 12 S¯ 13 S¯ 14 ⎥ > 0, ⎢ ⎥ ⎢ 0 0 0 S¯ 21 S¯ 22 S¯ 23 S¯ 24 ⎥ ⎢ ⎥ ⎣ 0 0 0 S¯ 31 S¯ 32 S¯ 33 S¯ 34 ⎦ L3 0 0 S¯ 41 S¯ 42 S¯ 43 S¯ ⎡

(5.46)

(5.47)

(5.48)

where Wij and W¯ S denote ⎡

(1, 1) (1, 2) (1, 3) X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 ⎢ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 ⎢ ¯ W11 = ⎢ 0 0 −Z¯ 11 ⎢ X ⎢ K¯ ¯ ¯ K K 1 2 −Z¯ 21 ⎢ ⎣ K¯ 0 K¯ 1 K¯ 2 X¯ 0 0 0

K¯ T K¯ 1T K¯ 2T −Z¯ 12 −Z¯ 22 0 0

⎤ K¯ T X¯ T ¯ 0 ⎥ K1 ⎥ 0 ⎥ K¯ 2T ⎥ 0 0 ⎥ ⎥, 0 0 ⎥ ⎥ −R−1 0 ⎦ 0 −Q−1

¯ T + AX¯ + BK, ¯ (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 , (1, 1) = (AX¯ + BK)

150

5 Guaranteed Cost Controls



⎤ ⎡ 1/2 ¯ X¯ AT + K¯ T BT h X 0 ⎢ Z¯ AT + K¯ T BT ⎥ ⎡ ⎤ 1/2 Z¯ ⎢ 0 h T ⎢ 11 1 ⎥ 11 1 ⎢ ⎢ ⎥ 0 ⎢ 0 h1/2 Z¯ ⎢ Z¯ 21 AT + K¯ T BT ⎥ ⎢ ⎥ 21 ⎢ 1 2 ⎢ ⎥⎢0⎥ W12 = ⎢ ⎥ ⎣ ⎦ , W13 = ⎢ 0 0 0 ⎢ ⎢ ⎥ 0 ⎢ 0 ⎢ ⎥ 0 0 ⎢ ⎢ ⎥ I ⎣ 0 ⎣ ⎦ 0 0 0 0 0 ⎡ ⎡ ⎤ ⎤T 0 0 0 X¯ 0 0 0 X¯ ⎢ 0 0 0 −Z¯ 11 ⎥ ⎢ 0 0 0 −Z¯ 11 ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 0 0 −Z¯ 21 ⎥ ⎢ ¯ ⎥ ⎢ ⎥ −1 ⎢ 0 0 0 −Z21 ⎥ ⎢ ⎥ ¯ ⎥ W00 = ⎢ ⎢ 0 0 0 0 ⎥ WS ⎢ 0 0 0 0 ⎥ , ⎢0 0 0 0 ⎥ ⎢0 0 0 0 ⎥ ⎢ ⎢ ⎥ ⎥ ⎣0 0 0 0 ⎦ ⎣0 0 0 0 ⎦ 000 0 000 0 ⎤ ⎡ ¯S11 S¯ 12 S¯ 13 S¯ 14



−1 ⎢ S¯ 21 S¯ 22 S¯ 23 S¯ 24 ⎥ ¯ ¯ ⎥ , Z11 Z12 = Z11 Z12 W¯ S = ⎢ ⎣ S¯ 31 S¯ 32 S¯ 33 S¯ 34 ⎦ Z21 Z22 Z¯ 21 Z¯ 22 S¯ 41 S¯ 42 S¯ 43 S¯

⎤ X¯ ⎥ 12 −Z¯ 11 ⎥ h1/2 Z¯ 22 −Z¯ 21 ⎥ ⎥ 0 0 ⎥ ⎥, 0 0 ⎥ ⎥ 0 0 ⎦ 0 0 0

h1/2 Z¯

and L1 , L2 and L3 are computed from

L1 L2



L1 L2

T

 =

t0

t0 −h

 x(s)xT (s)ds, L3 LT3 =

0



−h

t0 t0 +r

x˙ (s)˙xT (s)dsdr,

respectively, then the guaranteed LQ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.40) with the guaranteed LQ performance bound η. Proof   d 0 ≥ xT (t)Qx(t) + uT (t)Ru(t) + V (t) dt = xT (t)Qx(t) + uT (t)Ru(t) + 2˙xT (t)Xx(t) + h˙xT (t)S x˙ (t) − +

x(t) u(t)

T

Z11 Z12 Z21 Z22



 0 −h

x˙ T (t + r)S x˙ (t + r)dr



T



x(t) x(t − h) Z11 Z12 x(t − h) − . u(t) u(t − h) u(t − h) Z21 Z22

5.3 Guaranteed LQ Controls for State Delayed Systems

151

We add the non-negative quantity 



⎡ ⎡ ⎤T ⎤ x(t) x(t) S11 ⎢ x(t − h) ⎥ ⎢ ⎥ ⎢ S21 ⎢ ⎢ ⎥ WS ⎢ x(t − h) ⎥ dr for 0 < WS = ⎣ u(t − h) ⎦ ⎣ u(t − h) ⎦ ⎣ S31 −h S41 x˙ (t + r) x˙ (t + r) 0

S12 S22 S32 S42

S13 S23 S33 S43

⎤ S14 S24 ⎥ ⎥ S34 ⎦ S

to the right side of the above inequality so as to get a sufficient condition that  0 ≥ xT (t)Qx(t) + uT (t)Ru(t) + 2˙xT (t)Xx(t) + h˙xT (t)S x˙ (t) −

−h

x˙ T (t + r)S x˙ (t + r)dr





T



Z11 Z12 x(t) x(t − h) Z11 Z12 x(t − h) − Z21 Z22 Z21 Z22 u(t) u(t − h) u(t − h) ⎤T ⎤ ⎡ ⎡ ⎤ ⎡ ⎤ x(t) x(t) T  0 x(t) x(t) ⎢ x(t − h) ⎥ ⎢ x(t − h) ⎥ ⎥ WS ⎢ ⎥ ⎢ ⎣ ⎦ M ⎣ x(t − h) ⎦ , + ⎦ ⎣ ⎣ u(t − h) ⎦ dr = x(t − h) −h u(t − h) u(t − h) u(t − h) x˙ (t + r) x˙ (t + r)

+

x(t) u(t) ⎡

T

0

where M denotes ⎡ ⎤ (A + BK)T X + X (A + BK) + Q X (A1 + BK1 ) XBK2 (AT1 + K1T BT )X −Z11 −Z12 ⎦ M=⎣ T T K2 B X −Z21 −Z22 ⎡ ⎤ ⎤T ⎡ T ⎤ ⎡ T ⎤T ⎡ T T

I K I K K K Z Z + ⎣ 0 K1T ⎦ 11 12 ⎣ 0 K1T ⎦ + ⎣ K1T ⎦ R ⎣ K1T ⎦ Z21 Z22 0 K2T 0 K2T K2T K2T ⎡ ⎡ ⎤ ⎡ ⎤T ⎤ (A + BK)T S11 S12 S13 (A + BK)T + ⎣ (A1 + BK1 )T ⎦ hS ⎣ (A1 + BK1 )T ⎦ + h ⎣ S21 S22 S23 ⎦ K2T BT K2T BT S31 S32 S33 ⎡ ⎤⎡ ⎤⎡ ⎤T ⎡ ⎤T S14 I S14 I + ⎣ S24 ⎦ ⎣ −I ⎦ + ⎣ −I ⎦ ⎣ S24 ⎦ , 0 0 S34 S34 which can be guaranteed if it holds that 0>⎡ M ⎤ (A + BK)T X + X (A + BK) + Q X (A1 + BK1 ) XBK2 (AT1 + K1T BT )X −Z11 −Z12 ⎦ =⎣ K2T BT X −Z21 −Z22 ⎡ ⎡ ⎤ ⎤ ⎤ ⎡ ⎤T ⎡ T

I KT I KT KT KT Z Z 11 12 ⎣ 0 K1T ⎦ + ⎣ K1T ⎦ R ⎣ K1T ⎦ + ⎣ 0 K1T ⎦ Z21 Z22 T 0 K2 0 K2T K2T K2T

152

5 Guaranteed Cost Controls



⎤ ⎡ ⎤T 0 0 0 (A + BK)T 0 0 0 (A + BK)T + ⎣ 0 0 0 (A1 + BK1 )T ⎦ hWS ⎣ 0 0 0 (A1 + BK1 )T ⎦ 000 K2T BT 000 K2T BT ⎡ 1/2 ⎡ 1/2 ⎤ ⎤T h I 0 h I 0 0 I 0 I + ⎣ 0 h1/2 I 0 −I ⎦ WS ⎣ 0 h1/2 I 0 −I ⎦ 0 0 h1/2 I 0 0 0 h1/2 I 0 ⎡ ⎤ ⎡ ⎤T 000 I 000 I − ⎣ 0 0 0 −I ⎦ WS ⎣ 0 0 0 −I ⎦ . 000 0 000 0 Use the Schur complement to get ⎡

⎤ M11 − M00 M12 M13 T M12 −h−1 WS−1 0 ⎦ , 0>⎣ T 0 −WS−1 M13 where Mij denote ⎡

M11

M12

M00

  ⎤ (A + BK)T X XA1 + T T XBK I K K I 2 ⎥ ⎢ +X (A + BK) XBK1 ⎥ ⎢ T T ⎢ (AT1 + K1T BT )X −Z11 −Z12 0 K1 K1 0 ⎥ ⎥ ⎢ ⎢ −Z21 −Z22 0 K2T K2T 0 ⎥ K2T BT X ⎥, =⎢ ⎢ I 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎥ ⎢ ⎢ K K1 K2 −Z¯ 21 −Z¯ 22 0 0 ⎥ ⎥ ⎢ ⎣ K2 0 0 −R−1 0 ⎦ K K1 I 0 0 0 0 0 −Q−1 ⎤ ⎤ ⎡ ⎡ 0 0 0 (A + BK)T h1/2 I 0 0 I ⎢ 0 0 0 (A1 + BK1 )T ⎥ ⎢ 0 h1/2 I 0 −I ⎥ ⎥ ⎥ ⎢ ⎢ T T ⎥ ⎢0 0 0 ⎢ 0 K2 B 0 h1/2 I 0 ⎥ ⎥ ⎥ ⎢ ⎢ ⎥ , M13 = ⎢ 0 0 0 0 0⎥ =⎢ ⎥ ⎥, ⎢0 0 0 ⎢ ⎥ ⎥ ⎢0 0 0 ⎢ 0 0 0 0 0 ⎥ ⎥ ⎢ ⎢ ⎦ ⎣0 0 0 ⎣ 0 0 0 0 0⎦ 000 0 0 0 0 0 ⎤ ⎤T ⎡ ⎡ 000 I 000 I ⎢ 0 0 0 −I ⎥ ⎢ 0 0 0 −I ⎥ ⎥ ⎥ ⎢ ⎢ ⎢0 0 0 0⎥ ⎢0 0 0 0⎥ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ =⎢ ⎢ 0 0 0 0 ⎥ WS ⎢ 0 0 0 0 ⎥ . ⎢0 0 0 0⎥ ⎢0 0 0 0⎥ ⎥ ⎥ ⎢ ⎢ ⎣0 0 0 0⎦ ⎣0 0 0 0⎦ 000 0 000 0

5.3 Guaranteed LQ Controls for State Delayed Systems

153

Let us now use a congruence transformation associated with a non-singular matrix X −1 ⊕



Z11 Z12 Z21 Z22

−1

⊕ (I ⊕ I ) ⊕ I ⊕ I ⊕ (I ⊕ I ⊕ I ⊕ I ) ⊕ (I ⊕ I ⊕ I ⊕ I )

to get the quadratic matrix inequality (5.46), where X¯ = X −1 , W¯ S = WS−1 and

   Z11 Z12 −1  −1 ¯ ¯ ¯ . K = KX , K1 K2 = K1 K2 Z21 Z22 In this case, the cost (5.45) is bounded by V (t0 ), i.e. J ≤ V (t0 ), by Lemma 5.1. Furthermore, the guaranteed LQ performance bound η can be written as η ≥ V (t0 ) = x (t0 )X¯ T

−1

 x(t0 ) + Trace

  +Trace W¯ S−1



Z¯ 11 Z¯ 21

Z¯ 12 Z¯ 22

−1 

t0

t0 −h



x(s) u(s)



x(s) u(s)

T   0 0 0 x˙ T (s) 0 0 0 x˙ T (s) dsdr −h t0 +r 

−1  T L1 Z¯ 11 Z¯ 12 L1 T −1 = x (t0 )X x(t0 ) + Trace L2 L2 Z¯ 21 Z¯ 22    T  , +Trace 0 0 0 LT3 W¯ S−1 0 0 0 LT3 0

t0



which results in the LMI (5.48). This completes the proof.



T ds 



Theorem 5.5 provides a delay-dependent criterion for guaranteed LQ control, which is less conservative than that in Theorem 5.4. However, it is noted that the condition (5.46) contains a nonlinear matrix inequality due to W00 such that ⎡

W00

0 ⎢0 ⎢ ⎢0 ⎢ =⎢ ⎢0 ⎢0 ⎢ ⎣0 0

0 0 0 0 0 0 0

⎡ ⎤ 0 X¯ 0 ⎢0 0 −Z¯ 11 ⎥ ⎢ ⎥ ⎢ 0 −Z¯ 21 ⎥ ⎥ −1 ⎢ 0 ¯ ⎢0 W 0 0 ⎥ ⎥ S ⎢ ⎢0 0 0 ⎥ ⎢ ⎥ ⎣0 0 0 ⎦ 0 0 0

0 0 0 0 0 0 0

⎤T 0 X¯ 0 −Z¯ 11 ⎥ ⎥ 0 −Z¯ 21 ⎥ ⎥ 0 0 ⎥ ⎥ . 0 0 ⎥ ⎥ 0 0 ⎦ 0 0

As a result, unfortunately, we cannot find in general the minimum of the guaranteed LQ cost η for the cost (5.12) via a convex optimization technique. However, if we can afford more computational efforts, we can obtain a guaranteed cost control achieving a smaller guaranteed LQ performance bound η using BMI solvers like [13] in the literature. For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0

154

5 Guaranteed Cost Controls

or K¯ 2 = 0, respectively, from the inequality (5.46) in Theorem 5.5. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0. The state feedback guaranteed LQ controls in Theorems 5.4 and 5.5 are extended from [12, 15, 19, 21, 27, 28, 35].

5.3.2 Robust State Feedback Guaranteed LQ Control Let us consider a state delayed system with model uncertainties given by x˙ (t) = Ax(t) + A1 x(t − h) + Dpw (t) + Bu(t), −2

pw (t) = Δ(t)qw (t), Δ (t)Δ(t) ≤ ρ I , qw (t) = Ex(t) + E1 x(t − h) + Fpw (t) + Eb u(t) T

(5.49) (5.50) (5.51)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. For the LQ cost J (5.12) upper bounded by η, we consider the Lyapunov functional (5.13) and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.49)–(5.51) with the initial condition u(t0 + θ) = 0 for θ ∈ [−h, 0) and the LQ cost J (5.12) upper bounded by η. Using Lemma 5.1, a state feedback guaranteed LQ control for the system (5.49)–(5.51) can be obtained in the following theorem. ¯ K¯ i , λ¯ and M Theorem 5.6 For a given η > 0, if there exist matrices X¯ > 0, Z¯ ij , K, such that ⎡

(1, 1) ⎢ (1, 2)T ⎢ ⎢ (1, 3)T ⎢ ⎢ X¯ ⎢ ⎢ K¯ ⎢ ⎢ K¯ ⎢ ⎢ X¯ ⎢ ⎣ λD ¯ T (9, 1)

⎤ ¯ (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ λD (9, 1)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 0 (9, 2)T ⎥ ⎥ T T −Z¯ 21 −Z¯ 22 0 K¯ 2 K¯ 2 0 0 (9, 3)T ⎥ ⎥ ⎥ 0 0 0 0 0 −Z¯ 11 −Z¯ 12 0 ⎥ ⎥ < 0, (5.52) K¯ 1 0 0 0 K¯ 2 −Z¯ 21 −Z¯ 22 0 ⎥ ⎥ K¯ 1 0 0 −R−1 0 0 0 K¯ 2 ⎥ −1 ⎥ 0 0 0 0 0 −Q 0 0 ⎥ T ⎦ ¯ ¯ 0 0 0 0 0 0 −λI λF ¯ −λρ ¯ −2 I (9, 2) (9, 3) 0 0 0 0 λF ⎡ ⎤

M LT1 LT2 η − Trace(M ) xT (t0 ) ⎣ L1 Z¯ 11 Z¯ 12 ⎦ > 0, > 0, (5.53) x(t0 ) X¯ L1 Z¯ 21 Z¯ 22

where (i, j) denote ¯ T + AX¯ + BK, ¯ (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 , (1, 1) = (AX¯ + BK) ¯ (9, 2) = E1 Z¯ 11 + Eb K¯ 1 , (9, 3) = E1 Z¯ 12 + Eb K¯ 2 (9, 1) = E X¯ + Eb K,

5.3 Guaranteed LQ Controls for State Delayed Systems

155

and L1 and L2 are computed from

L1 L2



L1 L2

T

 =

t0



t0 −h

x(s) u(s)



x(s) u(s)

T ds,

(5.54)

then the guaranteed LQ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.49)–(5.51) with the guaranteed LQ performance bound η. Proof The closed-loop system becomes x˙ (t) = (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h) + Dpw (t), pw (t) = Δ(t)qw (t), Δ (t)Δ(t) ≤ ρ T

−2

I,

qw (t) = (E + Eb K)x(t) + (E1 + Eb K1 )x(t − h) + Eb K2 u(t − h) + Fpw (t).

(5.55) (5.56) (5.57)

By Lemma 5.1, we have the following condition   d T T 0 ≥ V (t) + x (t)Qx(t) + u (t)Ru(t) dt T T subject to 0 ≤ ρ2 qw (t)qw (t) − pw (t)pw (t). Using the S-procedure, we have that for λ>0     d T T (t)qw (t) − pw (t)pw (t) 0 ≥ V (t) + xT (t)Qx(t) + uT (t)Ru(t) + λ ρ2 qw dt ⎤T ⎤ ⎡ ⎡ x(t) x(t) ⎢ x(t − h) ⎥ ⎢ x(t − h) ⎥ ⎥ ⎥ ⎢ =⎢ ⎣ u(t − h) ⎦ M ⎣ u(t − h) ⎦ , pw (t) pw (t)

where M denotes ⎡ ⎤ (A + BK)T X + X (A + BK) + Q X (A1 + BK1 ) XBK2 XD ⎢ −Z11 −Z12 0 ⎥ (AT1 + K1T BT )X ⎥ M=⎢ ⎣ −Z21 −Z22 0 ⎦ K2T BT X 0 0 0 DT X ⎡ ⎡ ⎤ ⎤ ⎤ ⎤ ⎡ ⎡ T T T KT KT I KT

I K ⎢ KT ⎥ ⎢ KT ⎥ ⎢ 0 K T ⎥ Z11 Z12 ⎢ 0 K T ⎥ 1 ⎥ 1 ⎥ ⎢ ⎢ 1 ⎥ ⎢ 1 ⎥ +⎢ ⎣ 0 K2T ⎦ Z21 Z22 ⎣ 0 K2T ⎦ + ⎣ K2T ⎦ R ⎣ K2T ⎦ 0 0 0 0 0 0

156

5 Guaranteed Cost Controls



0 ⎢0 +⎢ ⎣0 0

0 0 0 0

0 0 0 0

⎡ ⎤⎡ ⎤T ⎤ (E + Eb K)T (E + Eb K)T 0 T ⎥⎢ T ⎥ ⎢ 0 ⎥ ⎥ + λρ2 ⎢ (E1 +TEbTK1 ) ⎥ ⎢ (E1 +TEbTK1 ) ⎥ , ⎣ ⎦⎣ ⎦ K2 Eb K2 Eb 0 ⎦ FT FT −λI

which can be guaranteed if it holds that 0 > M. Using the Scuhr complement technique, we have ⎡

(1, 1) ⎢ (1, 2)T ⎢ T T ⎢ K2 B X ⎢ ⎢ I ⎢ 0>⎢ K ⎢ ⎢ K ⎢ ⎢ I ⎢ ⎣ DT (9, 1)

(1, 2) −Z11 −Z21 0 K1 K1 0 0 (9, 2)

XBK2 −Z12 −Z22 0 K2 K2 0 0 Eb K2

I 0 0 Z¯ 11 Z¯ 21 0 0 0 0

KT K1T K2T Z¯ 12 Z¯ 22 0 0 0 0

KT I K1T 0 K2T 0 0 0 0 0 −R−1 0 0 −Q−1 0 0 0 0

⎤ D (9, 1)T 0 (9, 2)T ⎥ ⎥ 0 K2T EbT ⎥ ⎥ ⎥ 0 0 ⎥ ⎥, 0 0 ⎥ ⎥ 0 0 ⎥ ⎥ 0 0 ⎥ T ⎦ −λI F −1 −2 F −λ ρ I

where (i, j) denote (1, 1) = (A + BK)T X + X (A + BK), (1, 2) = XA1 + XBK1 , (9, 1) = E + Eb K, (9, 2) = E1 + Eb K1 . Let us now use a congruence transformation associated with a non-singular matrix X −1 ⊕



Z11 Z12 Z21 Z22

−1

⊕ (I ⊕ I ) ⊕ I ⊕ I ⊕ λ−1 I ⊕ I

to get the quadratic matrix inequality (5.52), where λ¯ and Z¯ ij denote λ¯ = λ−1 ,



Z¯ 11 Z¯ 21

Z¯ 12 Z¯ 22



=

Z11 Z12 Z21 Z22

−1

.

In this case, the cost (5.12) is bounded by V (t0 ), i.e. J ≤ V (t0 ), by Lemma 5.1. With the same manner as in the proof of Theorem 5.4, the guaranteed LQ performance bound η can be written as the LMI (5.53). This completes the proof.  We can minimize η in Theorem 5.6 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.12). For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.52) in Theorem 5.6. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0.

5.3 Guaranteed LQ Controls for State Delayed Systems

157

The robust state feedback guaranteed LQ control in Theorem 5.6 is extended from [8, 12, 24, 25, 27, 35].

5.3.3 Output Feedback Guaranteed LQ Control Consider the state delayed system x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t),

(5.58)

y(t) = Cx(t)

(5.59)

with the initial condition x(θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. As mentioned in Sect. 4.3.4, a dynamic output feedback control with the cascadeddelay system description could be very flexible for LMI conditions. Therefore, a dynamic output feedback control follows the cascaded-delay system description (4.56)–(4.58) with the initial conditions xc (t0 + θ) = 0 for θ ∈ [−h, 0] and pc (t0 + θ) = 0 for θ ∈ [−h, 0). The resulting augmented closed-loop system is given in (4.59). For the same Lyapunov functional as in (5.28), we can develop the following result using Lemma 5.1. Theorem 5.7 For a given η > 0, assume there exist matrices X11 , X¯ 11 , Z11 , and Z¯ 11 such that

CT 0

T

T

C MV (X11 , Z11 ) < 0, 0 ⊥ ⊥

X11 I Z11 ≥ 0, I X¯ 11 I

MU (X¯ 11 , Z¯ 11 , Z11 ) < 0,

I ≥ 0, Z¯ 11

η − xT (t0 )X11 x(t0 ) − Trace(LT Z11 L) > 0,

(5.60) (5.61) (5.62)

where MV (X11 , Z11 ) and MU (X¯ 11 , Z¯ 11 ) denote

AT X11 + X11 A + Q + Z11 X11 A1 AT1 X11 −Z11 ⎡ T −1 T ¯ ¯ AX11 + X11 A − BR B A1 Z11 Z11 AT1 −Z11 ⎢ MU (X¯ 11 , Z¯ 11 , Z11 ) = ⎢ ⎣ X¯ 11 0 X¯ 11 0

MV (X11 , Z11 ) =

,

(5.63)

⎤ X¯ 11 X¯ 11 0 0 ⎥ ⎥ , (5.64) ¯ −Z11 0 ⎦ 0 −Q−1

and L is computed from

LLT =



t0

t0 −h

x(α)xT (α)d α.

(5.65)

158

5 Guaranteed Cost Controls

From such fixed values X11 , X¯ 11 , Z11 , and Z¯ 11 , matrices X12 , X22 , Z12 , and Z22 are determined by using the relation −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 X12 X22

(5.66)

and a feasible set of {Fij , G i , Hj , J11 } can be found by solving the LMI problem

0 > W = [Wij ]i,j=1:8 ,

(5.67)

where W11 = (A + BJ11 C)T X11 + C T G T1 X21 + X11 (A + BJ11 C) + X12 G 1 C, W12 = (A + BJ11 C)T X12 + C T G T1 X22 + X11 BH1 + X12 F11 , W13 = X11 A1 , W14 = X11 BH2 + X12 F12 , W15 = Z11 + C T G T2 Z21 , T, W16 = Z12 + C T G T2 Z22 , W17 = I , W18 = C T J11 T X + X BH X F , W W22 = H1T BT X12 + F11 22 21 1 22 11 23 = X21 A1 , T Z , W24 = X21 BH2 + X22 F12 , W25 = F21 21 T Z , W T W26 = F21 22 27 = 0, W28 = H1 , T, W33 = −Z11 , W34 = −Z12 , W35 = 0, W36 = 0, W37 = 0, W38 = C T J11 T Z , W44 = −Z22 , W45 = F22 21 T W46 = F22 Z22 , W47 = 0, W48 = H2T ,

W55 = −Z11 , W56 = −Z12 , W57 = 0, W58 = 0, W66 = −Z22 , W67 = 0, W68 = 0, W77 = −Q−1 , W78 = 0, W88 = −R−1 .

Then the output feedback guaranteed LQ control (5.24)–(5.26) with such a feasible set of {Fij , G i , Hj , J11 } asymptotically stabilizes the system (5.58)–(5.59) with the guaranteed LQ performance bound η. Proof the guaranteed LQ cost η as follows.

 t0

T

x(α) x(t0 ) x(α) + X Z dα 0 0 0 t0 −h   t0  = xT (t0 )X11 x(t0 ) + Trace Z11 x(α)xT (α)d α

x(t0 ) η ≥ V (t0 ) = 0

T



t0 −h

= x (t0 )X11 x(t0 ) + Trace(L Z11 L), T

T

which results in the LMI (5.62). This completes the proof.



Even though the cascaded-delay system approach produces a little complicated results, these results are formulated with LMIs. Note that it is hard to derive an output feedback guaranteed LQ control for the state delayed system (5.58)–(5.59)

5.3 Guaranteed LQ Controls for State Delayed Systems

159

directly, whereas the cascaded-delay system approach can provide a criterion for such a control, which is numerically tractable in the LMI framework. We can minimize η in Theorem 5.7 via a convex optimization technique to find the minimum of the guaranteed LQ performance bound η for the LQ cost (5.12). The output feedback guaranteed LQ control in Theorem 5.7 is newly obtained here and other output feedback guaranteed LQ controls can be found in [12, 26, 30], which are not covered in this book.

5.4 Guaranteed H∞ Controls for Input Delayed Systems Consider a linear system with a single input delay given by x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h) + Gw(t)

(5.68)

with the zero initial conditions x(t0 ) = 0 and u(t0 + θ) = 0 for θ ∈ [−h, 0), where w(t) ∈ Rmw is the disturbance. Instead of finding the optimal infinite horizon H∞ control associated with ∞ min max

t0

w

u

{xT (s)Qx(s) + uT (s)Ru(s)}ds ∗ 2 ∞ =γ T t0 w (s)w(s)ds

(5.69)

with Q > 0 and R > 0, and the zero initial states, we can obtain an infinite horizon H∞ control yielding ∞ max w

t0

{xT (s)Qx(s) + uT (s)Ru(s)}ds ∞ ≤ γ2, T (s)w(s)ds w t0

(5.70)

for a given γ ≥ γ ∗ . The reason why we use the above approach is that it is extremely difficult to directly find the optimal infinite horizon H∞ control. To check the condition (5.70), we introduce an alternative cost such as J (t) =

  t xT (s)Qx(s) + uT (s)Ru(s) − γ 2 w T (s)w(s) ds,

(5.71)

t0

with Q > 0 and R > 0. Then the condition (5.70) can be described such as, for all t > t0 , min max J (t) ≤ 0. u

w

(5.72)

Associated with the cost J (t) for a given level γ ≥ γ ∗ , the guaranteed H∞ control is defined as follows.

160

5 Guaranteed Cost Controls

Definition 5.2 For the system (5.68) and the cost J (t), if there exists a feedback control such that asymptotically stabilizes the system and minu maxw J (t) is upper bounded by zero, i.e. minu maxw J (t) ≤ 0 then γ is said to be a guaranteed H∞ performance bound and u(t) is said to be a guaranteed H∞ control for the system (5.68). Before moving on, we introduce a lemma which is essential for developing guaranteed H∞ controls for both input and state delayed systems. Lemma 5.2 For the linear system (5.68), if there exist continuous nondecreasing functions uGH (s) and vGH (s) that are positive for s > 0 and uGH (0) = vGH (0) = 0 and if there exists a continuous functional V : R × Cn,h → R and a feedback control u(t) such that u (x(t)) ≤ V (t, xt ) ≤ vGH (xt c ), GH  V˙ (t, xt ) ≤ − xT (t)Qx(t) + uT (t)Ru(t) − γ 2 w T (t)w(t) , t > t0

(5.73) (5.74)

with Q > 0 and R > 0 along the solution to the system (5.68), then the guaranteed H∞ control u(t) asymptotically stabilizes the system (5.68) with the guaranteed H∞ performance bound γ. Proof When w(t) = 0 for all t > t0 , V (t, xt ) is bounded below and thus converges to a constant greater than or equal to zero as t goes to infinity, i.e. V (∞, x∞ ) = c ≥ 0, since V (t, xt ) is non-negative and non-increasing from the condition (5.4). Therefore, V˙ (t, xt ) = 0 as t goes to infinity, which ensures the asymptotic stability of the closedloop system due to Q > 0. Integrating both sides of (5.71) from t0 to t for t > t0 gives 

   t V (t, xt ) − V (t0 , xt0 ) ≤ − xT (s)Qx(s) + uT (s)Ru(s) − γ 2 w T (s)w(s) ds. t0

From the zero initial condition, we have for t > t0   t T T 2 T x (s)Qx(s) + u (s)Ru(s) − γ w (s)w(s) ds. 0 ≤ V (t, xt ) ≤ −J (t) = − t0

Therefore, u(t) is a guaranteed H∞ control and γ is the corresponding guaranteed H∞ performance bound from J (t) ≤ 0 for all t > t0 . This completes the proof.  Lemma 5.2 for guaranteed H∞ controls for input and state delayed systems can be found similarly as in [9, 11, 17]. It is noted that control structures of guaranteed H∞ controls are often given a priori in feedback forms of state and input delays and then gain matrices of feedback controls are sought to satisfy (5.74). The H∞ optimal control for the cost (5.71) in Chap. 8 does not assume a control structure and thus computation of the optimal control is rather complex. It is noted that Definition 5.2 and Lemma 5.2 are applicable for both input delayed systems and state delayed systems.

5.4 Guaranteed H∞ Controls for Input Delayed Systems

161

In this section, we consider two costs, a predictive cost and a standard cost, where

the former uses the state predictor z(t) = xˆ (t + h) defined as the state x(t + h) with the constraint ut+h ≡ 0 and wt+h ≡ 0 as 

t

z(t) = xˆ (t + h) = eAh x(t) +

eA(t−τ ) B1 u(τ )d τ

(5.75)

t−h

and the latter uses the state x(t) directly. It is interesting to see that z(t) satisfies the ordinary system ¯ ¯ z˙ (t) = Az(t) + Bu(t) + Gw(t), (5.76) where B¯ = eAh B + B1 and G¯ = eAh G.

5.4.1 State Feedback Guaranteed H∞ Control for Predictive Costs Consider the cost   t xˆ T (s + h)Qˆx(s + h) + uT (s)Ru(s) − γ 2 w T (s)w(s) ds J (t) = t0   t z T (s)Qz(s) + uT (s)Ru(s) − γ 2 w T (s)w(s) ds = (5.77) t0

with Q > 0 and R > 0. We choose a Lyapunov functional V (t) as V (t) = z T (t)Xz(t)

(5.78)

with X > 0 and a control 



u(t) = Kz(t) = K e x(t) +

t

Ah

e

A(t−τ )

 B1 u(τ )d τ ,

(5.79)

t−h

where K is a constant matrix. Using Lemma 5.2, a state feedback guaranteed H∞ control for the delay-free system (5.76) can be obtained in the following theorem. Theorem 5.8 For a given γ > 0, if there exist matrices X¯ and K¯ such that ⎡

⎤ AX¯ + X¯ AT + B¯ K¯ + K¯ T B¯ T + γ −2 G¯ G¯ T X¯ K¯ T ⎣ X¯ −Q−1 0 ⎦ < 0, K¯ 0 −R−1

(5.80)

162

5 Guaranteed Cost Controls

then the guaranteed H∞ control (5.79) with K = K¯ X¯ −1 asymptotically stabilizes the system (5.68) with the guaranteed H∞ performance bound γ. Proof By Lemma 5.2, we have the following condition   d T T 2 T V (t) + z (t)Qz(t) + u (t)Ru(t) − γ w (t)w(t) dt  T   = z T (t)X MXz(t) − γ −2 G¯ T Xz(t) − γ 2 w(t) G¯ T Xz(t) − γ 2 w(t) ≤ z T (t)X MXz(t) < 0, where K¯ = K X¯ , X¯ = X −1 , and M denotes M = AX¯ + X¯ AT + B¯ K¯ + K¯ T B¯ T + X¯ QX¯ + K¯ T RK¯ + γ −2 G¯ G¯ T , which can be guaranteed if (5.80) holds with some efforts related to the Schur complement. In this case, the cost (5.77) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ performance bound γ (5.70) is ensured. This completes the proof.  We can minimize γ in Theorem 5.8 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). The state feedback guaranteed H∞ control for input delayed systems with predictive costs in Theorem 5.8 in this subsection follows similarly from [5] once they are transformed to delay-free systems.

5.4.2 State Feedback Guaranteed H∞ Control for Standard Costs For the input delayed system (5.1), we consider the cost   t T T 2 T J (t) = x (s)Qx(s) + u (s)Ru(s) − γ w (s)w(s) ds

(5.81)

t0

with Q > 0 and R > 0. For guaranteed H∞ controls for the standard cost (5.81), we introduce a delay-independent criterion for the input delayed system (5.68). First, we consider the Lyapunov functional (5.13) and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.68) with the H∞ cost J (t) (5.81) upper bounded by zero for all t > t0 . Using Lemma 5.2, a state feedback guaranteed H∞ control for the system (5.68) can be obtained in the following theorem.

5.4 Guaranteed H∞ Controls for Input Delayed Systems

163

¯ and K¯ i such Theorem 5.9 For a given γ > 0, if there exist matrices X¯ > 0, Z¯ ij , K, that ⎡ ⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 K¯ T K¯ T 0 ⎥ 2 2 ⎢ ⎥ ⎢ X¯ (5.82) 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎢ ⎥ < 0, ⎢ K¯ ⎥ ¯1 ¯ 2 −Z¯ 21 −Z¯ 22 0 K 0 K ⎢ ⎥ ⎣ K¯ K¯ 1 0 0 −R−1 0 ⎦ K¯ 2 X¯ 0 0 0 0 0 −Q−1 where (i, j) denote ¯ T + AX¯ + BK¯ + γ −2 GG T , (1, 2) = B1 Z¯ 21 + BK¯ 1 , (1, 3) = B1 Z¯ 22 + BK¯ 2 , (1, 1) = (AX¯ + BK)

then the guaranteed H∞ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.68) with the guaranteed H∞ performance bound γ. Proof x˙ (t) = (A + BK)x(t) + BK1 x(t − h) + (B1 + BK2 )u(t − h) + Gw(t). (5.83) By Lemma 5.2, we have the following condition   d V (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt ⎡ ⎤ ⎡ ⎤T  T   x(t) x(t) −2 T 2 T 2 ⎣ ⎦ ⎣ ⎦ G Xx(t) − γ w(t) G Xx(t) − γ w(t) M x(t − h) − γ = x(t − h) u(t − h) u(t − h) ⎡ ⎤ ⎡ ⎤T x(t) x(t) ≤ ⎣ x(t − h) ⎦ M ⎣ x(t − h) ⎦ < 0, u(t − h) u(t − h)

where M denotess ⎡

⎤ (A + BK)T X + X (A + BK) + Q + γ −2 X GG T X XBK1 X (B1 + BK2 ) ⎦ M=⎣ K1T BT X −Z11 −Z12 T T T (B1 + K2 B )X −Z21 −Z22

164

5 Guaranteed Cost Controls

⎡ ⎤ ⎤T ⎡ ⎤ ⎡ T ⎤T I KT K

I KT KT ⎢ 0 K T ⎥ Z11 Z12 ⎢ 0 K T ⎥ ⎢ KT ⎥ T ⎣ ⎦ +⎣ R⎣ 1 ⎦ , ⎣ 1 ⎦ Z Z 1 ⎦ + K1 21 22 K2T 0 K2T 0 K2T K2T ⎡

which can be guaranteed if it holds that M < 0. Let us define

Z¯ 11 Z¯ 12 Z¯ 21 Z¯ 22





Z11 Z12 = Z21 Z22

−1

and use the Schur complement to get ⎡

(1, 1) XBK1 X (B1 + BK2 ) I T T ⎢ B X −Z11 −Z12 0 K 1 ⎢ T ⎢ (B1 + K2T BT )X −Z21 −Z 0 22 ⎢ ⎢ ¯ 11 I 0 0 − Z ⎢ ⎢ ¯ K K1 K2 −Z21 ⎢ ⎣ K2 0 K K1 I 0 0 0

KT K1T K2T −Z¯ 12 −Z¯ 22 0 0

⎤ KT I K1T 0 ⎥ ⎥ T K2 0 ⎥ ⎥ 0 0 ⎥ ⎥ < 0. 0 0 ⎥ ⎥ −R−1 0 ⎦ 0 −Q−1

where (1, 1) = (A + BK)T X + X (A + BK) + γ −2 X GG T X . Let us now use a congruence transformation associated with a non-singular matrix X

−1



Z Z ⊕ 11 12 Z21 Z22

−1

⊕I ⊕I ⊕I ⊕I

to get the LMI (5.82), where X¯ = X −1 and

   Z11 Z12 −1  −1 ¯ ¯ ¯ K K K = KX , K1 K2 = 1 2 . Z21 Z22 In this case, the cost J (t) (5.81) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ performance bound γ (5.70) is ensured. This completes the proof.  We can minimize γ in Theorem 5.9 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.82) in Theorem 5.9. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0.

5.4 Guaranteed H∞ Controls for Input Delayed Systems

165

The state feedback guaranteed H∞ control for standard costs in Theorem 5.9 is newly obtained here and similar restrictive results for memoryless control can be found in [4, 6, 32].

5.4.3 Output Feedback Guaranteed H∞ Control for Standard Costs Consider the input delayed system x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h) + Gw(t),

(5.84)

y(t) = Cx(t)

(5.85)

with the zero initial conditions x(t0 ) = 0 and u(t0 + θ) = 0, θ ∈ [−h, 0). As mentioned in Sect. 4.3.4, a dynamic output feedback control with the cascaded-delay system description could be very flexible for LMI conditions. Therefore, a dynamic output feedback control follows the cascaded-delay system description (5.24)–(5.26) with the initial conditions xc (t0 ) = 0 and pc (t0 + θ) = 0 for θ ∈ [−h, 0). The resulting augmented closed-loop system is then





x¯ (t) A¯ 11 A¯ 12 B¯ 1 x¯˙ (t) + w(t), = ¯ ¯ p¯ (t − h) 0 p¯ (t) A21 A22

(5.86)

where



x(t) u(t) , p¯ (t) = , x¯ (t) = xc (t) pc (t) ⎡ A + BJ11 C BH1

F11 A¯ 11 A¯ 12 ⎢ ⎢ G1C ¯A21 A¯ 22 = ⎣ J11 C H1 G2C F21





G ¯B1 = , 0 ⎤ B1 BH2 0 F12 ⎥ ⎥. 0 H2 ⎦ 0 F22

(5.87)

(5.88)

Based on the Lyapunov functional (5.28), we can develop the following result using Lemma 5.2. Theorem 5.10 Assume that, for a given γ > 0, there exist matrices X11 , X¯ 11 , Z11 , and Z¯ 11 such that 

CT 0

T





⊕I

 MV (X11 , Z11 )

CT 0



 ⊕ I < 0,

MU (X¯ 11 , Z¯ 11 , Z11 ) < 0,



Z11 I X11 I ≥ 0, ≥ 0, I X¯ 11 I Z¯ 11

(5.89) (5.90) (5.91)

166

5 Guaranteed Cost Controls

where MV (X11 , Z11 ) and MU (X¯ 11 , Z¯ 11 , Z11 ) denote ⎡

⎤ AT X11 + X11 A + Q X11 B1 X11 G −Z11 0 ⎦ , B1T X11 MV (X11 , Z11 ) = ⎣ (5.92) T 0 −γ 2 I G X11 MU (X¯ 11 , Z¯ 11 , Z11 ) ⎤ ⎡ AX¯ 11 + X¯ 11 AT − BR−1 BT + γ −2 GG T B1 Z11 0 X¯ 11 Z11 B1T −Z11 0 0 ⎥ ⎢ ⎥ . (5.93) =⎢ −1 ⎣ ¯ 0 0 −Z11 − R 0 ⎦ X¯ 11 0 0 −Q−1

From such fixed values X11 , X¯ 11 , Z11 , and Z¯ 11 , matrices X12 , X22 , Z12 , and Z22 are determined by using the relation −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 X12 X22

(5.94)

and a feasible set of {Fij , G i , Hj , J11 } can be found by solving the LMI problem

0 > W = [Wij ]i,j=1:9 ,

(5.95)

where W11 = (A + BJ11 C)T X11 + C T G T1 X21 + X11 (A + BJ11 C) + X12 G 1 C, W12 = (A + BJ11 C)T X12 + C T G T1 X22 + X11 BH1 + X12 F11 , W13 = X11 B1 , T Z + CT GT Z , W14 = X11 BH2 + X12 F12 , W15 = C T J11 11 2 21 T Z + CT GT Z , W T T W16 = C T J11 12 17 = I , W18 = C J11 , W19 = X11 G, 2 22 T X + X BH + X F , W W22 = H1T BT X12 + F11 22 21 1 22 11 23 = X21 B1 , T Z + FT Z , W24 = X21 BH2 + X22 F12 , W25 = H1T B21 11 21 21 T Z + FT Z , W T W26 = H1T B21 12 27 = 0, W28 = H1 , W29 = X21 G, 21 22 W33 = −Z11 , W34 = −Z12 , W35 = 0, T, W W36 = 0, W37 = 0, W38 = C T J11 39 = 0, T Z , W44 = −Z22 , W45 = H2T Z11 + F22 21 T Z , W T W46 = H2T Z12 + F22 22 47 = 0, W48 = H2 , W49 = 0,

W55 = −Z11 , W56 = −Z12 , W57 = 0, W58 = 0, W59 = 0, W66 = −Z22 , W67 = 0, W68 = 0, W69 = 0, W77 = −Q−1 , W78 = 0, W79 = 0, W88 = −R−1 , W89 = 0, W99 = −γ 2 I .

5.4 Guaranteed H∞ Controls for Input Delayed Systems

167

Then the output feedback guaranteed H∞ control (5.24)–(5.26) with such a feasible set of {Fij , G i , Hj , J11 } asymptotically stabilizes the system (5.84)–(5.85) with the guaranteed H∞ performance bound γ. Proof By Lemma 5.2, we have the following condition   d V (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt 

T  

T x¯ (t) x¯ (t) M − γ −2 B¯ 1T X x¯ (t) − γ 2 w(t) = B¯ 1T X x¯ (t) − γ 2 w(t) p¯ (t − h) p¯ (t − h)



T x¯ (t) x¯ (t) M < 0, ≤ p¯ (t − h) p¯ (t − h)

where M denotes

A¯ T11 X + X A¯ 11 + γ −2 X B¯ 1 B¯ 1T X + E1 QE1T X A¯ 12 A¯ T12 X¯ 0

T T T E2 A¯ 21 Z A¯ 21 A¯ 21 Z A¯ 22 E2 R , + ¯T ¯ ¯T ¯ + E3 E3 A22 Z A21 A22 Z A22 − Z       E1T = I 0 , E2T = J11 C H1 , E3T = 0 H2 ,



M=

which can be guaranteed if it holds that M < 0. The above inequality can be written via the Schur complement as 0 > Q + UΣV T + VΣ T U T ,

(5.96)

where Q, U, Σ and V denote ⎡

AT X11 + X11 A AT X12 X21 A 0 ⎢ TX TX ⎢ B B 11 1 1 12 ⎢ ⎢ 0 0 ⎢ 0 0 Q=⎢ ⎢ 0 0 ⎢ ⎢ I 0 ⎣ 0 0 G T X12 G T X11



UT = ⎡

0 I 0000000 0 0000I 000 BT 0 0 0 I 0 0 I 0



X11 B1 0 0 0 I 0 X11 G X21 B1 0 0 0 0 0 X21 G ⎥ −Z11 −Z12 0 0 0 0 0 ⎥ ⎥ −Z21 −Z22 0 0 0 0 0 ⎥ ⎥ 0 0 −Z11 −Z12 0 0 0 ⎥, 0 0 −Z21 −Z22 0 0 0 ⎥ ⎥ 0 0 0 0 −Q−1 0 0 ⎥ ⎦ 0 0 0 0 0 −R−1 0 2 0 0 0 0 0 0 −γ I



T

T , VT =

0 I 0000000 0 00I 00000 C 000000I 0

T

,





F11 F12 G 1 Z11 Z12 X11 X12 ⎣ ⎦ ⊕I ⊕I ⊕ ⊕ I ⊕ I ⊕ I. Σ = F21 F22 G 2 , T = X21 X22 Z21 Z22 H1 H2 J11

168

5 Guaranteed Cost Controls

We remark that the orthogonal complementary of U and V are constructed as ⎡ ⎡ ⎤ ⎤ I 00 0 00 Nc1 0 0 0 0 0 ⎢ 0 0 0 0 0 0⎥ ⎢ 0 0 0 0 0 0⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 I 0 0 0 0⎥ ⎢ Nc2 0 0 0 0 0 ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 0 I 0 0 0⎥ ⎢ 0 0 0 0 0 0⎥ ⎢ ⎢ ⎥ ⎥ ⎢ ⎥ ⎥ U⊥ = T −1 ⎢ ⎢ 0 0 0 I 0 0 ⎥ , V⊥ = ⎢ 0 I 0 0 0 0 ⎥ , ⎢ 0 0 0 0 0 0⎥ ⎢ 0 0 I 0 0 0⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 0 0 0 I 0⎥ ⎢ 0 0 0 I 0 0⎥ ⎢ ⎢ ⎥ ⎥ ⎣ −BT 0 0 −I 0 0 ⎦ ⎣ 0 0 0 0 I 0⎦ 0 00 0 0I 0 0000I where Nc1 and Nc2 denote

Nc1 Nc2







=

CT 0



.

From the elimination lemma in Appendix B.1, the inequality condition 0 > Q + UΣV T + VΣ T U T is equal to the following two conditions: 0 > V⊥T QV⊥ and 0 > U⊥T QU⊥ . Let us investigate the first condition that 0 > V⊥T QV⊥ as follows.  0> where

CT 0



T ⊕I ⊕I ⊕I ⊕I ⊕I

¯V M



CT 0



 ⊥

⊕I ⊕I ⊕I ⊕I ⊕I ,



⎤ AT X11 + X11 A X11 B1 0 0 I 0 X11 G ⎢ −Z11 0 0 0 0 0 ⎥ B1T X11 ⎢ ⎥ ⎢ 0 0 ⎥ 0 0 −Z11 −Z12 0 ⎥ ⎢ ¯V = ⎢ 0 0 ⎥ 0 0 −Z21 −Z22 0 M ⎢ ⎥, −1 ⎢ ⎥ 0 0 I 0 0 0 −Q ⎢ ⎥ −1 ⎣ 0 ⎦ 0 0 0 0 0 −R 0 0 0 0 0 −γ 2 I G T X11

which can be simplified by the Schur complement associated with the (3, 3)th , (4, 4)th , ¯ V to get the condition (5.89). Let us now investigate (5, 5)th and (6, 6)th entries of M the second condition. First, consider T −1 QT −1 ⎡ ¯ ⎤ ¯ T ¯ ¯ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

AX11 + X11 A AX12 B1 0 0 0 X11 0 G 0 0 0 0 0 X¯ 21 0 0 ⎥ X¯ 21 AT B1T 0 −Z¯ 11 −Z¯ 12 0 0 0 0 0 ⎥ ⎥ 0 0 −Z¯ 21 −Z¯ 22 0 0 0 0 0 ⎥ ⎥ 0 0 ⎥. 0 0 0 0 −Z¯ 11 −Z¯ 12 0 0 0 0 0 −Z¯ 21 −Z¯ 22 0 0 0 ⎥ ⎥ −1 ¯ ¯ 0 0 0 0 −Q 0 0 ⎥ X21 X11 ⎦ 0 0 0 0 0 0 0 −R−1 0 T 2 0 0 0 0 0 0 0 −γ I G

5.4 Guaranteed H∞ Controls for Input Delayed Systems

169

Then, the second condition that 0 > U⊥T QU⊥ becomes ⎤ AX¯ 11 + X¯ 11 AT − BR−1 BT B1 0 0 X¯ 11 G ⎢ B1T −Z¯ 11 −Z¯ 12 0 0 0 ⎥ ⎥ ⎢ ⎢ ¯ 21 −Z¯ 22 0 − Z 0 0 0 ⎥ ⎥, ⎢ 0>⎢ 0 0 0 −Z¯ 11 − R−1 0 0 ⎥ ⎥ ⎢ ⎣ X¯ 11 0 0 0 −Q−1 0 ⎦ 0 0 0 0 −γ 2 I GT ⎡

which can be simplified by the Schur complement associated with the (3, 3)th and (5, 5)th entries to get the condition (5.90). In this case, the cost J (t) (5.81) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. To generate the controller matrices {Fij , G i , Hj , J11 } in (5.24)–(5.26), we can determine X12 , X22 , Z12 , and Z22 by using the relation (5.94) −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 . X12 X22

(5.97)

After determining the values Xij and Zij , {Fij , G i , Hj , J11 } appear linearly in the relation (5.96), which allows us to use Wij in the theorem. In this case, the cost J (t) (5.81) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ cost (5.70) is ensured. This completes the proof.  Even though the cascaded-delay system approach produces a little complicated results, these results are formulated with LMIs. We can minimize γ in Theorem 5.10 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). The output feedback guaranteed H∞ control for standard costs in Theorem 5.10 is newly obtained here. As mentioned before, the optimizing control problems for state delayed systems are more difficult to solve compared to those for input delayed systems. However guaranteed H∞ controls are easily applicable for state delayed systems. We introduce several guaranteed H∞ controls for state delayed systems in the next section.

5.5 Guaranteed H∞ Controls for State Delayed Systems Consider the state delayed system x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t) + Gw(t) with the zero initial condition xt0 ≡ 0.

(5.98)

170

5 Guaranteed Cost Controls

5.5.1 State Feedback Guaranteed H∞ Control We introduce two guaranteed H∞ controls for the state delayed system (5.98) with the standard cost (5.71) via delay-independent and delay-dependent criteria. Delay-Independent Guaranteed H∞ Control Consider the Lyapunov functional (5.13) and the control (5.14) where K and Ki should be designed to guarantee the asymptotic stability of the system (5.98) with the H∞ cost J (t) (5.71) upper bounded by zero for all t > t0 . Using Lemma 5.2, a delay-independent state feedback guaranteed H∞ control for the system (5.98) can be obtained in the following theorem. ¯ and K¯ i such Theorem 5.11 For a given γ > 0, if there exist matrices X¯ > 0, Z¯ ij , K, that ⎡ ⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 K¯ T K¯ T 0 ⎥ 2 2 ⎢ ⎥ ⎢ X¯ (5.99) 0 0 −Z¯ 11 −Z¯ 12 0 0 ⎥ ⎢ ⎥ < 0, ⎢ K¯ ⎥ ¯1 ¯ 2 −Z¯ 21 −Z¯ 22 0 K 0 K ⎢ ⎥ ⎣ K¯ K¯ 1 0 0 −R−1 0 ⎦ K¯ 2 X¯ 0 0 0 0 0 −Q−1 where (i, j) denote ¯ T + AX¯ + BK¯ + γ −2 GG T , (1, 1) = (AX¯ + BK) (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 , then the guaranteed H∞ control (5.14) with K = K¯ X¯ −1 and



K1 K2





= K¯ 1 K¯ 2

 Z¯ 11 Z¯ 12 −1 Z¯ 21 Z¯ 22

asymptotically stabilizes the system (5.98) with the guaranteed H∞ performance bound γ. Proof x˙ (t) = (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h) + Gw(t), t ≥ 0. (5.100)

5.5 Guaranteed H∞ Controls for State Delayed Systems

171

By Lemma 5.2, we have the following condition   d V (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt ⎡ ⎡ ⎤ ⎤T  T   x(t) x(t) −2 T 2 T 2 ⎣ ⎣ ⎦ ⎦ G Xx(t) − γ w(t) G Xx(t) − γ w(t) M x(t − h) − γ = x(t − h) u(t − h) u(t − h) ⎡ ⎡ ⎤ ⎤T x(t) x(t) ≤ ⎣ x(t − h) ⎦ M ⎣ x(t − h) ⎦ u(t − h) u(t − h)

where M denotes ⎤ (A + BK)T X + X (A + BK) + Q + γ −2 X GG T X X (A1 + BK1 ) XBK2 ⎢ (AT1 + K1T BT )X −Z11 −Z12 ⎥ ⎥ M=⎢ ⎦ ⎣ T T K2 B X −Z21 −Z22 ⎡



⎤ I KT ⎢ 0 K T ⎥ Z11 +⎣ 1 ⎦ Z 21 0 K2T

⎡ ⎤T ⎡ T ⎤ ⎡ T ⎤T K K

I KT Z12 ⎢ 0 K T ⎥ ⎢ KT ⎥ ⎢ KT ⎥ ⎣ 1 ⎦ + ⎣ 1 ⎦R⎣ 1 ⎦ , Z22 0 K2T K2T K2T

which is non-positive if it holds that M < 0. Let us define

Z¯ 11 Z¯ 12 Z¯ 21 Z¯ 22





Z11 Z12 = Z21 Z22

−1

and use the Schur complement to get ⎡

(1, 1) X (A1 + BK1 ) ⎢ (AT1 + K1T BT )X −Z11 ⎢ T T ⎢ B X −Z K 21 2 ⎢ ⎢ I 0 ⎢ ⎢ K K1 ⎢ ⎣ K K1 I 0

XBK2 I −Z12 0 −Z22 0 0 −Z¯ 11 K2 −Z¯ 21 K2 0 0 0

KT K1T K2T −Z¯ 12 −Z¯ 22 0 0

⎤ KT I K1T 0 ⎥ ⎥ K2T 0 ⎥ ⎥ 0 0 ⎥ ⎥ < 0. 0 0 ⎥ ⎥ −R−1 0 ⎦ 0 −Q−1

where (1, 1) = (A + BK)T X + X (A + BK) + γ −2 X GG T X . Let us now use a congruence transformation associated with a non-singular matrix X −1 ⊕



Z11 Z12 Z21 Z22

−1

⊕I ⊕I ⊕I ⊕I

172

5 Guaranteed Cost Controls

to get the LMI (5.99), where X¯ = X −1 and

   Z11 Z12 −1  K¯ = KX −1 , K¯ 1 K¯ 2 = K1 K2 . Z21 Z22 In this case, the cost J (t) (5.71) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ performance bound γ (5.70) is ensured. This completes the proof.  We can minimize γ in Theorem 5.11 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.99) in Theorem 5.11. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0. Delay-Dependent Guaranteed H∞ Control Consider the Lyapunov functional (5.45) and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.98) with the H∞ cost J (t) (5.71) upper bounded by zero for all t > t0 . Using Lemma 5.2, a delay-dependent state feedback guaranteed H∞ control for the system (5.98) can be obtained in the following theorem. Theorem 5.12 For a given γ > 0, if there exist matrices X¯ > 0, S¯ > 0, [Z¯ ij ]2×2 > 0, ¯ and K¯ i such that S¯ ij , K, ⎤ W11 − W00 W12 W13 W14 T ⎢ W12 −h−1 W¯ S 0 W24 ⎥ ⎥ < 0, ⎢ T ⎣ W13 0 −W¯ S 0 ⎦ T T W24 0 −γ 2 I W14 2

T γ I W24 > 0, −1 ¯ W24 h WS ⎡

where Wij and W¯ S denote ⎡

⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ −Z¯ K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ 11 12 0 ⎢ ⎥ ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 ⎥ ¯ T K¯ T K 0 ⎢ ⎥ 2 2 ⎢ ⎥ W11 = ⎢ X¯ 0 ⎥, 0 0 −Z¯ 11 −Z¯ 12 0 ⎢ ¯ ⎥ ⎢ K 0 ⎥ K¯ 1 K¯ 2 −Z¯ 21 −Z¯ 22 0 ⎢ ⎥ −1 ⎣ K¯ 0 0 −R 0 ⎦ K¯ 1 K¯ 2 X¯ 0 0 0 0 0 −Q−1

(5.101)

(5.102)

5.5 Guaranteed H∞ Controls for State Delayed Systems

173

¯ T + AX¯ + BK}, ¯ (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 3) = A1 Z¯ 12 + BK¯ 2 , (1, 1) = 2{(AX¯ + BK) ⎡ ⎤ ⎡ 1/2 ¯ ⎤ T T T ¯ ¯ XA + K B h X 0 0 X¯ ⎢ Z¯ AT + K¯ T BT ⎥ ⎡ ⎤ 1/2 1/2 ⎢ ¯ ⎥ ¯ ¯ T ⎢ 11 1 ⎥ 1 ⎢ 0 h Z11 h Z12 −Z11 ⎥ ⎢ ⎥ 0 1/2 1/2 ⎢ ⎥ ¯ ¯ T T T ⎢ Z¯ 21 A + K¯ B ⎥ ⎢ ⎥ ⎢ 0 h Z21 h Z22 −Z¯ 21 ⎥ 1 2 ⎢ ⎥⎢0⎥ ⎢ W12 = ⎢ , W = ⎥⎣ ⎦ 0 0 0 ⎥ 13 0 ⎢ 0 ⎥, ⎢ ⎥ 0 ⎢ 0 ⎥ ⎢ ⎥ 0 0 0 0 ⎢ ⎥ ⎢ ⎥ I ⎣ 0 ⎣ ⎦ 0 0 0 ⎦ 0 0 0 0 0 0 ⎡ ⎤ ⎡ ⎡ ⎤ ⎤T 0 0 0 X¯ 0 0 0 X¯ G ⎢0⎥ ⎢ 0 0 0 −Z¯ 11 ⎥ ⎢ 0 0 0 −Z¯ 11 ⎥ ⎢ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢0⎥ ⎢ 0 0 0 −Z¯ 21 ⎥ ⎢ ¯ ⎥ ⎢ ⎥ ⎢ ⎥ −1 ⎢ 0 0 0 −Z21 ⎥ ⎥ ⎢ ⎢ ⎥ ¯ ⎥ W14 = ⎢ ⎢ 0 ⎥ , W00 = ⎢ 0 0 0 0 ⎥ WS ⎢ 0 0 0 0 ⎥ , ⎢0⎥ ⎢0 0 0 0 ⎥ ⎢0 0 0 0 ⎥ ⎢ ⎥ ⎢ ⎢ ⎥ ⎥ ⎣0⎦ ⎣0 0 0 0 ⎦ ⎣0 0 0 0 ⎦ 0 000 0 000 0 ⎤ ⎡ ⎤ ⎡ ¯ ¯ ¯ ¯ S11 S12 S13 S14 0 ⎢0⎥ ⎢ S¯ 21 S¯ 22 S¯ 23 S¯ 24 ⎥ ⎢ ⎢ ⎥ , W24 = ⎣ ⎦ , W¯ S = ⎣ ¯ ¯ ¯ ¯ ⎥ 0 S31 S32 S33 S34 ⎦ G S¯ 41 S¯ 42 S¯ 43 S¯

then the guaranteed H∞ control (5.14) with K = K¯ X¯ −1 and



K1 K2





= K¯ 1 K¯ 2

 Z¯ 11 Z¯ 12 −1 Z¯ 21 Z¯ 22

asymptotically stabilizes the system (5.98) with the guaranteed H∞ performance bound γ. Proof   d T T 2 T 0 ≥ x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) + V (t). dt By adding the non-negative quantity ⎡ ⎡ ⎤T ⎤ x(t) x(t) S11 ⎢ x(t − h) ⎥ ⎢ x(t − h) ⎥ ⎢ S21 ⎢ ⎢ ⎢ ⎥ ⎥ 0≤ ⎣ u(t − h) ⎦ WS ⎣ u(t − h) ⎦ dr for 0 < WS = ⎣ S31 −h S41 x˙ (t + r) x˙ (t + r) 



0

S12 S22 S32 S42

S13 S23 S33 S43

⎤ S14 S24 ⎥ ⎥ S34 ⎦ S

to the right side of the above inequality, we can get a sufficient condition that 0 ≥ xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) + 2xT (t)X x˙ (t) + h˙xT (t)S x˙ (t)

T





T



x(t) Z11 Z12 x(t) x(t − h) Z11 Z12 x(t − h) + − u(t) u(t) u(t − h) u(t − h) Z21 Z22 Z21 Z22

174

5 Guaranteed Cost Controls

⎡ ⎤T ⎡ ⎤ x(t) x(t)  0 ⎢ x(t − h) ⎥ ⎢ x(t − h) ⎥ ⎢ ⎥ ⎢ ⎥ − x˙ T (t + r)S x˙ (t + r)dr + ⎣ u(t − h) ⎦ MS ⎣ u(t − h) ⎦ dr −h −h x˙ (t + r) x˙ (t + r)  T = 2 (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h) Xx(t)  0



⎤T ⎤ ⎡

x(t) x(t)    Z Z 11 12 ⎣ Kx(t) + K1 x(t − h) ⎦ + ⎣ Kx(t) + K1 x(t − h) ⎦ Z21 Z22 +K2 u(t − h) +K2 u(t − h)





T Z11 Z12 x(t − h) x(t − h) + xT (t)Qx(t) − u(t − h) Z21 Z22 u(t − h) T    + Kx(t) + K1 x(t − h) + K2 u(t − h) R Kx(t) + K1 x(t − h) + K2 u(t − h) 

T  + (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h)   × hS (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h) ⎤⎡ ⎤ ⎤T ⎡ S11 S12 S13 x(t) x(t) +h ⎣ x(t − h) ⎦ ⎣ S21 S22 S23 ⎦ ⎣ x(t − h) ⎦ S31 S32 S33 u(t − h) u(t − h) ⎤ ⎡ ⎤T ⎡  S14  x(t) +2 ⎣ x(t − h) ⎦ ⎣ S24 ⎦ x(t) − x(t − h) S34 u(t − h)  T + (hS)−1 Xx(t) + {A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h)} ⎡

× (hS)G{γ 2 I − hG T SG}−1 G T (hS)   × (hS)−1 Xx(t) + {A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h)} T  − (hS)−1 Xx(t) + {Ax(t) + A1 x(t − h) + Bu(t)} (hS)G(γ 2 I − hG T SG)−1   T −1 × G (hS) (hS) Xx(t) + {Ax(t) + A1 x(t − h) + Bu(t)} ⎡ ⎤ ⎤T x(t) x(t) = ⎣ x(t − h) ⎦ M ⎣ x(t − h) ⎦ u(t − h) u(t − h) T  − (hS)−1 Xx(t) + {Ax(t) + A1 x(t − h) + Bu(t)} (hS)G(γ 2 I − hG T SG)−1   T −1 × G (hS) (hS) Xx(t) + {Ax(t) + A1 x(t − h) + Bu(t)} , ⎡

where M denotes

5.5 Guaranteed H∞ Controls for State Delayed Systems

175

⎤ + BK ) XBK X (A 1 1 2⎥ ⎢ ⎥ ⎢ M=⎢ −Z11 −Z12 ⎥ ⎦ ⎣ K2T BT X −Z21 −Z22 ⎡ ⎡ ⎤ ⎤T ⎡ T ⎤ ⎡ T ⎤T I KT K K

I KT ⎢ ⎢ ⎢ T⎥ ⎢ T⎥ T⎥ Z Z T⎥ + ⎣ 0 K1 ⎦ 11 12 ⎣ 0 K1 ⎦ + ⎣ K1 ⎦ R ⎣ K1 ⎦ Z21 Z22 0 K2T 0 K2T K2T K2T ⎡ ⎤ ⎡ ⎤⎡ ⎤T ⎤⎡ ⎤T ⎡ S11 S12 S13 I S14 I S14 +h ⎣ S21 S22 S23 ⎦ + ⎣ S24 ⎦ ⎣ −I ⎦ + ⎣ −I ⎦ ⎣ S24 ⎦ S31 S32 S33 S34 S34 0 0 ⎡ ⎤ ⎡ ⎤ −1 (A + BK)T + X (hS)−1 T (A + BK)T + X (hS)−1  −1 −2 T T T ⎦ (hS) − γ GG ⎣ ⎦ . +⎣ (A1 + BK1 ) (A1 + BK1 ) K2T BT K2T BT ⎡

(A + BK)T X + X (A + BK) +Q − X (hS)−1 X (AT1 + K1T BT )X



Since it is assumed that γ 2 I − hG T SG > 0 from the condition (5.102), the above inequality will be satisfied if it holds that 0 > M, which can be written as ⎡

⎤ (A + BK)T X + X (A + BK) + Q X (A1 + BK1 ) XBK2 (AT1 + K1T BT )X −Z11 −Z12 ⎦ 0>⎣ K2T BT X −Z21 −Z22 ⎡ ⎡ ⎤ ⎤ ⎤ ⎡ ⎤T ⎡ T

I KT I KT KT KT Z Z + ⎣ 0 K1T ⎦ 11 12 ⎣ 0 K1T ⎦ + ⎣ K1T ⎦ R ⎣ K1T ⎦ Z21 Z22 0 K2T 0 K2T K2T K2T ⎡ 1/2 ⎡ 1/2 ⎤ ⎤T h I 0 h I 0 0 I 0 I + ⎣ 0 h1/2 I 0 −I ⎦ WS ⎣ 0 h1/2 I 0 −I ⎦ 0 0 h1/2 I 0 0 0 h1/2 I 0 ⎡ ⎤ ⎡ ⎤T ⎡ ⎤ 000 I X (hS)−1 X 0 0 000 I 0 0 0⎦ − ⎣ 0 0 0 −I ⎦ WS ⎣ 0 0 0 −I ⎦ − ⎣ 000 0 000 0 0 00 ⎤ ⎡  ⎤T ⎡ T (A + BK)T (A + BK)   −1 ⎢ −1 ⎥ ⎢ +X (hS)−1 ⎥ −1 −2 T ⎥ ⎢ +X (hS) T ⎥ . +⎢ ⎣ (A1 + BK1 ) ⎦ ⎣ (A1 + BK1 )T ⎦ (hS) − γ GG K2T BT K2T BT Use the Schur complement to get ⎤ M13 0 M11 − M00 M12 T ⎢ −(hS)−1 0 G ⎥ M12 ⎥, 0>⎢ −1 T ⎣ 0 −WS 0 ⎦ M13 0 −γ 2 I 0 GT ⎡

176

5 Guaranteed Cost Controls

where Mij denote ⎡

M11

(1, 1) (1, 2) ⎢ (1, 2)T −Z11 ⎢ T T ⎢ K2 B X −Z21 ⎢ I 0 =⎢ ⎢ ⎢ K K 1 ⎢ ⎣ K K1 I 0

XBK2 I −Z12 0 −Z22 0 0 −Z¯ 11 K2 −Z¯ 21 K2 0 0 0

KT K1T K2T −Z¯ 12 −Z¯ 22 0 0

⎤ KT I K1T 0 ⎥ ⎥ K2T 0 ⎥ ⎥ 0 0 ⎥ ⎥, 0 0 ⎥ ⎥ −R−1 0 ⎦ 0 −Q−1

(1, 1) = (A + BK)T X + X (A + BK) − X (hS)−1 X , (1, 2) = XA1 + XBK1 , ⎡ ⎡ 1/2 ⎤ ⎤ (A + BK)T + X (hS)−1 h I 0 0 I ⎢ ⎢ 0 h1/2 I 0 −I ⎥ ⎥ (A1 + BK1 )T ⎢ ⎢ ⎥ ⎥ T T ⎢ ⎢ 0 ⎥ K2 B 0 h1/2 I 0 ⎥ ⎢ ⎢ ⎥ ⎥ ⎥ , M13 = ⎢ 0 0 0 0⎥ M12 = ⎢ 0 ⎢ ⎢ ⎥, ⎥ ⎢ ⎢ ⎥ ⎥ 0 0 0 0 0 ⎢ ⎢ ⎥ ⎥ ⎣ ⎣ ⎦ ⎦ 0 0 0 0 0 0 0 0 0 0 ⎤ ⎤T ⎡ ⎡ 000 I 000 I ⎢ 0 0 0 −I ⎥ ⎢ 0 0 0 −I ⎥ ⎥ ⎥ ⎢ ⎢ ⎢0 0 0 0⎥ ⎢0 0 0 0⎥ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ M00 = ⎢ ⎢ 0 0 0 0 ⎥ WS ⎢ 0 0 0 0 ⎥ . ⎢0 0 0 0⎥ ⎢0 0 0 0⎥ ⎥ ⎥ ⎢ ⎢ ⎣0 0 0 0⎦ ⎣0 0 0 0⎦ 000 0 000 0 Let us now use a congruence transformation associated with a non-singular matrix Z¯ X¯ ⊕ ¯ 11 Z21

Z¯ 12 ⊕ (I ⊕ I ) ⊕ I ⊕ I ⊕ I ⊕ (I ⊕ I ⊕ I ⊕ I ) ⊕ I Z¯ 22

to get the following ⎡

⎤ Wˆ 11 − W00 Wˆ 12 W13 0 T ⎢ Wˆ 12 −(hS)−1 0 G ⎥ ⎥, 0>⎢ T ⎣ W13 0 −W¯ S 0 ⎦ 0 −γ 2 I 0 GT where X¯ , W¯ S , Z¯ ij and Wˆ ij denote X¯ = X −1 , W¯ S = WS−1 ,



Z¯ 11 Z¯ 12 Z¯ 21 Z¯ 22



=

Z11 Z12 Z21 Z22

−1

,

5.5 Guaranteed H∞ Controls for State Delayed Systems

177



⎤ (1, 1) (1, 2) (1, 3) X¯ K¯ T K¯ T X¯ ⎢ (1, 2)T −Z¯ 11 −Z¯ 12 0 K¯ 1T K¯ 1T 0 ⎥ ⎢ ⎥ T T ⎢ (1, 3)T −Z¯ 21 −Z¯ 22 0 ¯ ¯ K2 0 ⎥ K2 ⎢ ⎥ ¯ Wˆ 11 = ⎢ 0 ⎥ 0 0 −Z¯ 11 −Z¯ 12 0 ⎢ X ⎥, ⎢ K¯ ⎥ ¯ ¯ ¯ ¯ − Z − Z 0 0 K K 1 2 21 22 ⎢ ⎥ ⎣ K¯ −1 K¯ 1 K¯ 2 0 0 −R 0 ⎦ X¯ 0 0 0 0 0 −Q−1   T = Wˆ 12 AX¯ + BK¯ + (hS)−1 A1 Z¯ 11 + BK¯ 1 A1 Z¯ 12 + BK¯ 2 0 0 0 0 , ¯ T + AX¯ + BK¯ + (hS)−1 , (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 1) = (AX¯ + BK)

   Z11 Z12 −1  (1, 3) = A1 Z¯ 12 + BK¯ 2 , K¯ = KX −1 , K¯ 1 K¯ 2 = K1 K2 . Z21 Z22

Now we add the eighth column and row to the first column and row by using R9 and RT9 , where ⎡

I ⎢0 ⎢ ⎢0 ⎢ ⎢0 ⎢ ⎢0 ⎢ ⎢0 ⎢ R9 = ⎢ ⎢0 ⎢0 ⎢ ⎢0 ⎢ ⎢0 ⎢ ⎢0 ⎢ ⎣0 0

0 I 0 0 0 0 0 0 0 0 0 0 0

0 0 I 0 0 0 0 0 0 0 0 0 0

0 0 0 I 0 0 0 0 0 0 0 0 0

0 0 0 0 I 0 0 0 0 0 0 0 0

0 0 0 0 0 I 0 0 0 0 0 0 0

0 0 0 0 0 0 I 0 0 0 0 0 0

I 0 0 0 0 0 0 I 0 0 0 0 0

0 0 0 0 0 0 0 0 I 0 0 0 0

0 0 0 0 0 0 0 0 0 I 0 0 0

0 0 0 0 0 0 0 0 0 0 I 0 0

0 0 0 0 0 0 0 0 0 0 0 I 0

⎤ 0 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎥ ⎥ 0⎦ I

to get the following ⎡

⎤ W11 − W00 W˜ 12 W13 W14 T ⎢ −(hS)−1 0 G ⎥ W˜ 12 ⎥, 0>⎢ T ⎣ W13 0 −W¯ S 0 ⎦ T GT 0 −γ 2 I W14 where W˜ 12 denotes   T = AX¯ + BK¯ A1 Z¯ 11 + BK¯ 1 A1 Z¯ 12 + BK¯ 2 0 0 0 0 , W˜ 12 which can be converted into the matrix inequality (5.101), In this case, the cost J (t) (5.71) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ performance bound γ (5.70) is ensured. This completes the proof. 

178

5 Guaranteed Cost Controls

Theorem 5.12 provides a delay-dependent criterion for guaranteed H∞ control, which is less conservative than that in Theorem 5.11; however, it is noted that the conditions (5.46) contain nonlinear matrix inequalities due to W00 such that ⎡

W00

0 ⎢0 ⎢ ⎢0 ⎢ =⎢ ⎢0 ⎢0 ⎢ ⎣0 0

0 0 0 0 0 0 0

⎡ ⎤ 0 X¯ 0 ⎢0 0 −Z¯ 11 ⎥ ⎢ ⎥ ⎢ 0 −Z¯ 21 ⎥ ⎥ −1 ⎢ 0 ⎥ ¯ 0 0 ⎥ WS ⎢ ⎢0 ⎢0 0 0 ⎥ ⎢ ⎥ ⎣0 ⎦ 0 0 0 0 0

0 0 0 0 0 0 0

⎤T 0 X¯ 0 −Z¯ 11 ⎥ ⎥ 0 −Z¯ 21 ⎥ ⎥ 0 0 ⎥ ⎥ . 0 0 ⎥ ⎥ 0 0 ⎦ 0 0

As a result, unfortunately, we cannot find in general the minimum of the guaranteed H∞ cost γ for the cost (5.70) via a convex optimization technique. However, if we can afford more computational efforts, we can obtain a guaranteed H∞ control achieving a smaller guaranteed H∞ performance bound γ using BMI solvers in the literature. For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.101) in Theorem 5.11. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0. The delay-dependent and delay-independent state feedback guaranteed H∞ controls in Theorems 5.11 and 5.12 are extended from [10, 16, 18, 20, 22, 34, 36].

5.5.2 Robust State Feedback Guaranteed H∞ Control Let us consider a state delayed system with model uncertainties: x˙ (t) = Ax(t) + A1 x(t − h) + Dpw (t) + Bu(t) + Gw(t), −2

pw (t) = Δ(t)qw (t), Δ (t)Δ(t) ≤ ρ I , qw (t) = Ex(t) + E1 x(t − h) + Fpw (t) + Eb u(t) T

(5.103) (5.104) (5.105)

with the zero initial condition xt0 ≡ 0. Consider the Lyapunov functional (5.13) and the control (5.14) where K and Ki are designed to guarantee the asymptotic stability of the system (5.103)–(5.105) with the H∞ cost J (t) (5.71) upper bounded by zero for all t > t0 . Using Lemma 5.2, a state feedback guaranteed H∞ control for the system (5.103)–(5.105) can be obtained in the following theorem.

5.5 Guaranteed H∞ Controls for State Delayed Systems

179

¯ K¯ i , and λ¯ Theorem 5.13 For a given γ > 0, if there exist matrices X¯ > 0, Z¯ ij , K, such that ⎡

(1, 1) ⎢ (1, 2)T ⎢ ⎢ (1, 3)T ⎢ ⎢ X¯ ⎢ ⎢ K¯ ⎢ ⎢ K¯ ⎢ ⎢ X¯ ⎢ ⎣ λD ¯ T (9, 1)

(1, 2) −Z¯ 11 −Z¯ 21 0 K¯ 1 K¯ 1 0 0 (9, 2)

(1, 3) X¯ −Z¯ 12 0 −Z¯ 22 0 0 −Z¯ 11 ¯ K2 −Z¯ 21 0 K¯ 2 0 0 0 0 (9, 3) 0

K¯ T K¯ 1T K¯ 2T −Z¯ 12 −Z¯ 22 0 0 0 0

K¯ T X¯ T ¯ 0 K1 0 K¯ 2T 0 0 0 0 −R−1 0 0 −Q−1 0 0 0 0

¯ λD 0 0 0 0 0 0 ¯ −λI ¯ λF

⎤ (9, 1)T (9, 2)T ⎥ ⎥ (9, 3)T ⎥ ⎥ ⎥ 0 ⎥ ⎥ < 0, (5.106) 0 ⎥ ⎥ 0 ⎥ ⎥ 0 ⎥ T ⎦ ¯ λF −2 ¯ −λρ I

where (i, j) denote ¯ T + AX¯ + BK¯ + γ −2 GG T , (1, 2) = A1 Z¯ 11 + BK¯ 1 , (1, 1) = (AX¯ + BK) ¯ (9, 2) = E1 Z¯ 11 + Eb K¯ 1 , (9, 3) = E1 Z¯ 12 + Eb K¯ 2 (1, 3) = A1 Z¯ 12 + BK¯ 2 , (9, 1) = E X¯ + Eb K,

then the guaranteed H∞ control (5.14) with K = K¯ X¯ −1 and



   Z¯ 11 K1 K2 = K¯ 1 K¯ 2 Z¯ 21

Z¯ 12 Z¯ 22

−1

asymptotically stabilizes the system (5.103)–(5.105) with the guaranteed H∞ performance bound γ. Proof x˙ (t) = (A + BK)x(t) + (A1 + BK1 )x(t − h) + BK2 u(t − h) + Dpw (t) +Gw(t), pw (t) = Δ(t)qw (t), ΔT (t)Δ(t) ≤ ρ−2 I , qw (t) = (E + Eb K)x(t) + (E1 + Eb K1 )x(t − h) + Eb K2 u(t − h) + Fpw (t).

(5.107) (5.108) (5.109)

By Lemma 5.2, we need to have the following condition   T T 2 T ˙ 0 ≥ V (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) T T subject to the condition that 0 ≤ ρ2 qw (t)qw (t) − pw (t)pw (t). Using the S-procedure, this condition can be written as

    T (t)q (t) − pT (t)p (t) 0 ≥ V˙ (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 w T (t)w(t) + λ ρ2 qw w w w

180

5 Guaranteed Cost Controls ⎡

⎡ ⎤T ⎤ x(t) x(t)  T   ⎢ x(t − h) ⎥ ⎢ x(t − h) ⎥ −2 G T Xx(t) − γ 2 w(t) ⎢ ⎥ ⎥ G T Xx(t) − γ 2 w(t) , =⎢ ⎣ u(t − h) ⎦ M ⎣ u(t − h) ⎦ − γ pw (t) pw (t)

where M denotes ⎡

⎤ (1, 1) X (A1 + BK1 ) XBK2 XD ⎢ (AT + K T BT )X −Z11 −Z12 0 ⎥ 1 1 ⎥ M=⎢ ⎣ −Z21 −Z22 0 ⎦ K2T BT X 0 0 0 DT X ⎡ ⎡ ⎤ ⎤ ⎤ ⎡ T ⎤T ⎡ T I KT I KT K KT

⎢ 0 K T ⎥ Z11 Z12 ⎢ 0 K T ⎥ ⎢ KT ⎥ ⎢ KT ⎥ 1 ⎥ 1 ⎥ ⎢ ⎢ 1 ⎥ ⎢ 1 ⎥ +⎢ ⎣ 0 K2T ⎦ Z21 Z22 ⎣ 0 K2T ⎦ + ⎣ K2T ⎦ R ⎣ K2T ⎦ 0 0 0 0 0 0 ⎡ ⎤⎡ ⎤T ⎡ ⎤ T (E + Eb K) (E + Eb K)T 000 0 T ⎥⎢ T ⎥ ⎢ ⎢0 0 0 0 ⎥ ⎥ + λρ2 ⎢ (E1 +TEbTK1 ) ⎥ ⎢ (E1 +TEbTK1 ) ⎥ , −⎢ ⎣ ⎦⎣ ⎦ ⎣0 0 0 0 ⎦ K2 Eb K2 Eb FT FT 0 0 0 λI

(1, 1) = (A + BK)T X + X (A + BK) + Q + γ −2 X GG T X , which is satisfied if it holds that M < 0. Using the Scuhr complement technique, we have ⎡ ⎤ (1, 1) (1, 2) XBK2 I K T K T I D (9, 1)T ⎢ (1, 2)T −Z11 −Z12 0 K1T K1T 0 0 (9, 2)T ⎥ ⎢ T T ⎥ ⎢ K2 B X −Z21 −Z22 0 K2T K2T 0 0 K2T EbT ⎥ ⎢ ⎥ ⎢ ⎥ I 0 0 Z¯ 11 Z¯ 12 0 0 0 0 ⎢ ⎥ ⎢ K ⎥ < 0, ¯ ¯ K1 K2 Z21 Z22 0 0 0 0 ⎢ ⎥ −1 ⎢ K ⎥ K2 0 0 −R 0 0 0 K1 ⎢ ⎥ −1 ⎢ ⎥ 0 0 I 0 0 0 0 0 −Q ⎢ ⎥ T ⎣ DT ⎦ 0 0 0 0 0 0 −λI F −1 −2 (9, 1) (9, 2) Eb K2 0 0 0 0 F −λ ρ I where (i, j) denote (1, 1) = (A + BK)T X + X (A + BK) + γ −2 X GG T X , (1, 2) = XA1 + XBK1 , (9, 1) = E + Eb K, (9, 2) = E1 + Eb K1 . Let us now use a congruence transformation associated with a non-singular matrix X −1 ⊕



Z11 Z12 Z21 Z22

−1

⊕ (I ⊕ I ) ⊕ I ⊕ I ⊕ λ−1 I ⊕ I

5.5 Guaranteed H∞ Controls for State Delayed Systems

181

to get the quadratic matrix inequality (5.106), where λ¯ and Z¯ ij denote λ¯ = λ−1 ,



Z¯ 11 Z¯ 21

Z¯ 12 Z¯ 22



=

Z11 Z12 Z21 Z22

−1

.

In this case, the cost J (t) (5.71) is upper bounded by −V (t) for all t > t0 , which ensures that the cost is upper bounded by zero for all t > t0 by Lemma 5.2. Therefore, the guaranteed H∞ performance bound γ (5.70) is ensured. This completes the proof.  We can minimize γ in Theorem 5.13 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). For the delayed input feedback control (5.19) or the delayed state feedback control (5.20), the corresponding results can be easily derived by taking the variable K¯ 1 = 0 or K¯ 2 = 0, respectively, from the inequality (5.106) in Theorem 5.13. For the memoryless state feedback control (5.21), the corresponding result can be easily derived by taking both K¯ 1 = 0 and K¯ 2 = 0. The robust state feedback guaranteed H∞ control in Theorem 5.13 is extended from [10, 14, 16, 18, 20, 22, 23, 34, 36].

5.5.3 Output Feedback Guaranteed H∞ Control Consider the state delayed system x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t) + Gw(t),

(5.110)

y(t) = Cx(t)

(5.111)

with the zero initial condition xt0 ≡ 0. As mentioned in Sect. 4.3.4, a dynamic output feedback control with the cascaded-delay system description could be very flexible for LMI conditions. Let us use the cascaded-delay dynamic output feedback control (4.56)–(4.58) for the state delayed system (5.110)–(5.111); then we have the augmented closed-loop system such as





x¯ (t) A¯ 11 A¯ 12 B¯ 1 x¯˙ (t) + w(t) = ¯ ¯ p(t − h) 0 p¯ (t) A21 A22

with the initial condition p(t0 + θ) = 0 for all θ ∈ [−h, 0), where

(5.112)

182

5 Guaranteed Cost Controls





x(t) x(t) D , p¯ (t) = , B¯ 1 = , xc (t) pc (t) 0 ⎤ ⎡ A + BJC BH1 A1 BH2

⎥ A¯ 11 A¯ 12 ⎢ ⎢ G 1 C F11 0 F12 ⎥ . ¯A21 A¯ 22 = ⎣ I 0 0 0 ⎦ G 2 C F21 0 F22



x¯ (t) =

(5.113)

(5.114)

Therefore, a dynamic output feedback control follows the cascaded-delay system description (4.56)–(4.58) with the initial conditions xc (t0 ) = 0 and pc (t0 + θ) = 0 for θ ∈ [−h, 0). The resulting augmented closed-loop system is given in (4.59). Based on the Lyapunov functional (5.28), we can develop the following result using Lemma 5.2. Theorem 5.14 Assume that, for a given γ > 0, there exist matrices X11 , X¯ 11 , Z11 , and Z¯ 11 such that 

CT 0

T





⊕I

 MV (X11 , Z11 )

CT 0





⊕ I < 0,

MU (X¯ 11 , Z¯ 11 , Z11 ) < 0,



Z11 I X11 I ≥ 0, ≥ 0, I X¯ 11 I Z¯ 11

(5.115) (5.116) (5.117)

where MV (X11 , Z11 ) and MU (X¯ 11 , Z¯ 11 , Z11 ) denote ⎡

⎤ AT X11 + X11 A + Q + Z11 X11 A1 X11 G −Z11 0 ⎦ , AT1 X11 MV (X11 , Z11 ) = ⎣ T 0 −γ 2 I G X11 MU (X¯ 11 , Z¯ 11 , Z11 )  ⎡ ⎤ AX¯ 11 + X¯ 11 AT ¯ ¯ A Z X X 1 11 11 11 ⎢ −B1 R−1 BT + γ −2 GG T ⎥ 1 ⎥ ⎢ T ⎥. Z A −Z 0 0 =⎢ 11 1 11 ⎢ ⎥ ⎣ X¯ 11 0 −Z¯ 11 0 ⎦ X¯ 11 0 0 −Q−1

(5.118)

(5.119)

From such fixed values X11 , X¯ 11 , Z11 , and Z¯ 11 , matrices X12 , X22 , Z12 , and Z22 are determined by using the relation −1 −1 −1 −1 X21 = X11 − X¯ 11 , Z12 Z22 Z21 = Z11 − Z¯ 11 X12 X22

(5.120)

and a feasible set of {Fij , G i , Hj , J11 } can be found by solving the LMI problem

0 > W = [Wij ]i,j=1:9 ,

(5.121)

5.5 Guaranteed H∞ Controls for State Delayed Systems

183

where W11 = (A + BJ11 C)T X11 + C T G T1 X21 + X11 (A + BJ11 C) + X12 G 1 C, W12 = (A + BJ11 C)T X12 + C T G T1 X22 + X11 BH1 + X12 F11 , W13 = X11 A1 , W14 = X11 BH2 + X12 F12 , W15 = Z11 + C T G T2 Z21 , T , W19 = X11 G, W16 = Z12 + C T G T2 Z22 , W17 = I , W18 = C T J11 T X22 + X21 BH1 + X22 F11 , W23 = X21 A1 , W22 = H1T BT X12 + F11 T Z21 , W24 = X21 BH2 + X22 F12 , W25 = F21 T Z22 , W27 = 0, W28 = H1T , W29 = X21 G, W26 = F21

W33 = −Z11 , W34 = −Z12 , W35 = 0, T , W39 = 0, W36 = 0, W37 = 0, W38 = C T J11 T T Z21 , W46 = F22 Z22 , W47 = 0, W48 = H2T , W49 = 0, W44 = −Z22 , W45 = F22

W55 = −Z11 , W56 = −Z12 , W57 = 0, W58 = 0, W59 = 0, W66 = −Z22 , W67 = 0, W68 = 0, W69 = 0, W77 = −I , W78 = 0, W79 = 0, W88 = −R−1 , W89 = 0, W99 = −γ 2 I .

Then the output feedback guaranteed H∞ control (5.24)–(5.26) with such a feasible set of {Fij , G i , Hj , J11 } asymptotically stabilizes the system (5.110)–(5.111) with the guaranteed H∞ performance bound γ. Proof This theorem is very close to Theorem 5.10 and thus the proof is omitted.  Even though the cascaded-delay system approach produce a little complicated results, these results are formulated with LMIs. Note that it is hard to derive an output feedback guaranteed H∞ control for the state delayed system (5.110)–(5.111) directly, whereas the cascaded-delay system approach can provide the criterion for such a control, which is numerically tractable in the LMI framework. We can minimize γ in Theorem 5.14 via a convex optimization technique to find the minimum of the guaranteed H∞ performance bound γ such that (5.70). The output feedback guaranteed H∞ control in Theorem 5.14 is newly obtained here and other output feedback guaranteed H∞ controls can be found in [16, 30, 31, 33], which are not covered in this book.

References 1. Anderson BDO, Moore JB (1971) Linear optimal control. Prentice-Hall international editions, Englewood Cliffs 2. Anderson BDO, Moore JB (1989) Optimal linear quadratic methods. Prentice-Hall international editions, Englewood Cliffs 3. Banks HT, Jacobs MQ, Latina MR (1971) The synthesis of optimal controls for linear, timeoptimal problems with retarded controls. J Optim Appl 8(5):319–366 4. Boukas EK, Liu ZK (2002) Deterministic and stochastic time delay systems. Birkhäuser, Basel

184

5 Guaranteed Cost Controls

5. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia 6. Cao YY, Sun YX, Lam J (1998) Delay-dependent robust H∞ control for uncertain systems with time-varying delays. IEE Proc Control Theory Appl 145(3):338–344 7. Chang SSL, Peng TKC (1972) Adaptive guaranteed cost control of systems with uncertain parameters. IEEE Trans Autom Control 17(4):474–483 8. Cheres E, Gutman S, Palmor ZJ (1989) Stabilization of uncertain dynamic systems including state delays. IEEE Trans Autom Control 34(11):1199–1203 9. Choi HH, Chung MJ (1997) An LMI approach to H∞ controller design for linear time-delay systems. Automatica 33(4):737–739 10. de Souza CE, Li X (1999) Delay-dependent robust H∞ control of uncertain linear state-delayed systems. Automatica 35:1313–1321 11. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2 and H∞ control problems. IEEE Trans Autom Control 34(8):831–847 12. Esfahani SH, Moheimani SOR (1998) LMI approach to suboptimal guaranteed cost control for uncertain time-delay systems. IEE Proc Control Theory Appl 145(6):491–498 13. El Ghaoui L, Oustry F, Rami MA (1997) A cone complementarity linearization algorithm for static output-feedback and related problems. IEEE Trans Autom Control 42(8):1171–1176 14. Fridman E, Shaked U (2002) A descriptor system approach to H∞ control of time-delay systems. IEEE Trans Autom Control 47(2):253–270 15. Fridman E, Shaked U (2002) An improved stabilization method for linear systems with timedelay. IEEE Trans Autom Control 47(11):1931–1937 16. Fridman E, Shaked U (2003) Delay-dependent stability and H∞ control: constant and timevarying delays. Int J Control 76(1):48–60 17. Glover K, Doyle JC (1988) State-space formulae for all stabilizing controllers that satisfy an H∞ -norm bound and relations to risk sensitivity. Syst Control Lett 11(3):167–172 18. Lee JH, Kim SW, Kwon WH (1994) Memoryless H∞ controllers for state delayed systems. IEEE Trans Autom Control 39(1):159–162 19. Lee YS, Kwon OK, Kwon WH (2005) Delay-dependent guaranteed cost control for uncertain state-delayed systems. Int J Control Autom Syst 3(4):524–532 20. Lee YS, Kwon WH, Park P (2005) Delay-dependent guaranteed cost control for uncertain state-delayed systems. Int J Control Autom Syst 3(4):524–532 21. Lee YS, Moon YS, Kwon WH (2007) Authors reply: comments on delay-dependent robust H∞ control for uncertain systems with a state-delay. Automatica 43(3):572–573 22. Lee YS, Moon YS, Kwon WH, Park P (2004) Delay-dependent robust H∞ control for uncertain systems with a state-delay. Automatica 40(1):65–72 23. Li X, de Souza CE (1996) Robust stabilization and H∞ control of uncertain linear time-delay systems. In: Proceedings of 13th IFAC world congress, San Francisco, pp 113–118 24. Li X, de Souza CE (1997) Criteria for robust stability and stabilization of uncertain linear systems with state delay. Automatica 33(9):1657–1662 25. Li X, de Souza CE (1997) Delay-dependent robust stability and stabilization of uncertain linear delay systems: a linear matrix inequality approach. IEEE Trans Autom Control 42(8):1144– 1148 26. Li H, Niculescu SI, Dugard L, Dion JM (1998) Robust guaranteed cost control of uncertain linear time-delay systems using dynamic output feedback. Math Comput Simul 45(3–4):349– 358 27. Mahmoud MS (2000) Robust control and filtering for time-delay systems. Marcel Dekker Inc., New York 28. 
Moon YS, Park P, Kwon WH, Lee YS (2001) Delay-dependent robust stabilization of uncertain state-delayed systems. Int J Control 74(14):1447–1455 29. Park P, Moon YS, Kwon WH (1999) A stabilizing output-feedback linear quadratic control for pure input-delayed systems. Int J Control 72(5):385–391 30. Shieh CS (2002) H∞ control via output feedback for linear systems with time-varying delays in state and control input. Comput Electr Eng 28(6):649–657

References

185

31. Su H, Chu J (1999) Robust H∞ control for linear time-varying uncertain time-delay systems via dynamic output feedback. Int J Syst Sci 30(10):1093–1107 32. Suplin V, Fridman E, Shaked U (2006) H∞ control of linear uncertain time-delay systems-a projection approach. IEEE Trans Autom Control 51(4):680–685 33. Xie L (1996) Output feedback H∞ control of systems with parameter uncertainty. Int J Control 63(4):741–750 34. Xie L, de Souza CE (1992) Robust H∞ control for linear systems with norm-bounded timevarying uncertainty. IEEE Trans Autom Control 37(8):1188–1191 35. Yu L, Chu J (1999) An LMI approach to guaranteed cost control of linear uncertain time-delay systems. Automatica 35:1155–1159 36. Yu L, Chu J, Su H (1996) Robust memoryless H∞ controller design for linear time-delay systems with norm-bounded time-varying uncertainty. Automatica 32(12):1759–1762

Chapter 6

LQ Optimal Controls

6.1 Introduction Non-optimal stabilizing controls for input and state delayed systems are obtained without any performance criteria in Chaps. 3 and 4, where control structures are given a priori in feedback forms with unknown gain matrices. Gain matrices of feedback controls are sought to satisfy some stability criteria. These controls are easy to compute but possess no optimality. Guaranteed LQ controls for input and state delayed systems are obtained in Chap. 5, where control structures are not free but given a priori in feedback forms with unknown gain matrices. Gain matrices of feedback controls are sought to satisfy some stability criteria with given LQ performance bounds. Therefore, guaranteed cost controls are rather easy to compute but lack optimality since the control structures are not free and the LQ performance criteria are not minimized but upper bounded. In this chapter, LQ optimal controls for input and state delayed systems are dealt with, where any feedback control structures are not required a priori. It is shown in this chapter that optimal controls are obtained fortunately in feedback forms of distributed state delays. LQ optimal controls for input delayed systems are first introduced and then LQ optimal controls for state delayed systems follow since the former ones are easier than the latter ones. First, finite horizon LQ controls are dealt with, which are fundamental results mathematically. Since finite horizon LQ controls cannot be used as stabilizing feedback controls due to the inherent requirement of infinite horizons associated with stability properties, infinite horizon LQ controls are obtained by extending the terminal time to infinity if possible, where their stability properties and some limitations are discussed. Then for general stabilizing feedback controls, receding horizon LQ controls, or model predictive LQ controls, are dealt with which can be obtained easily from finite horizon LQ controls by the receding horizon concept. Terminal conditions in receding horizon controls become very important to guarantee the closed-loop stability, which are investigated in details. Advantages of receding horizon LQ controls are mentioned with some details in Chap. 1. © Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6_6

187

188

6 LQ Optimal Controls

This chapter is outlined as follows. In Sect. 6.2, two different finite horizon LQ controls for input delayed systems are obtained, one for a predictive LQ cost containing a state predictor and the other for a standard LQ cost containing a state. The former is obtained for free and also fixed terminal states due to the simple reduction transformation while the latter only for free terminal states. Special cases of pure input delayed systems are discussed in order to understand input delayed systems easily. From the finite horizon LQ controls, infinite horizon LQ controls are obtained and discussed with stability properties and some limitations. In Sect. 6.3, receding horizon LQ controls for input delayed systems are obtained from the two different finite horizon LQ controls in Sect. 6.2. Cost monotonicity conditions are investigated, under which the receding horizon LQ controls asymptotically stabilize the closedloop system. Predictive cost problems for input delayed systems in this chapter and subsequent ones are newly obtained in this book. In Sect. 6.4, finite horizon LQ controls for state delayed systems are obtained for three different LQ costs, one for a simple cost, another for a cost including a single integral terminal term, and the other for a cost including a double integral terminal term. The solution is more complex as a cost becomes more complex. Then infinite horizon LQ controls are discussed with stability properties and some limitations. In Sect. 6.5, receding horizon LQ controls for state delayed systems are obtained from the finite horizon LQ controls given in Sect. 6.4. Cost monotonicity conditions are investigated, under which the receding horizon LQ controls asymptotically stabilize the closed-loop system. It is shown that receding horizon LQ controls with the double integral terminal terms can have the delay-dependent stability condition while those with the single integral terminal terms have the delay-independent stability condition. Since these receding horizon controls are still complicated, simple receding horizon LQ controls are obtained with a simple cost or with a short horizon distance. This chapter deals with linear time delay systems only. Receding horizon LQ controls can be extended to constrained time delay system and also nonlinear time delay systems, which can be found in [7, 8, 14, 15, 28]. References for contents of this chapter are listed at the end of this chapter and cited in each subsection for more information and further reading.

6.2 Fixed Horizon LQ Controls for Input Delayed Systems 6.2.1 Fixed Horizon LQ Control for Predictive Costs Fixed Horizon LQ Control for Free Terminal States Consider a linear system with a single input delay given by x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h)

(6.1)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0), where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input, and h denotes the constant delay. The state pre-

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

189



dictor z(t) = xˆ (t + h) is defined as the state x(t + h) at time t + h with the constraint ut+h ≡ 0 as  z(t) = xˆ (t + h) = x(t + h; ut ≡ 0) = eAh x(t) +

t

eA(t−τ ) B1 u(τ )d τ ,

(6.2)

t−h

which produces  x(t + h) = xˆ (t + h) + eAh

t+h

eA(t−τ ) Bu(τ )d τ .

(6.3)

t

It is interesting to see that z(t) satisfies the ordinary system ¯ z˙ (t) = Az(t) + Bu(t),

(6.4)

B¯ = eAh B + B1 .

(6.5)

where B¯ denotes

as in (3.11). We introduce the following predictive cost with a state predictor  J (t0 , tf , u; utf ≡ 0) =

tf



 xˆ (s + h)Qˆx(s + h) + u (s)Ru(s) ds T

T

t0

+xT (tf + h)Fx(tf + h)

(6.6)

with utf ≡ 0, Q ≥ 0, R > 0 and F ≥ 0. Since the cost (6.6) can be rewritten as 

tf

J (t0 , tf , u) =

  z T (s)Qz(s) + uT (s)Ru(s) ds + z T (tf )Fz(tf )

(6.7)

t0

with Q ≥ 0, R > 0 and F ≥ 0, the LQ control theory for the ordinary system (6.4) immediately provides the LQ control minimizing the cost (6.7) as u(t) = −R−1 B¯ T P(t)z(t),

(6.8)

where P(t) = P T (t) satisfies the well known differential Riccati equation with P(tf ) = F ¯ −1 B¯ T P(τ ). ˙ ) = AT P(τ ) + P(τ )A + Q − P(τ )BR − P(τ

(6.9)

It is noted that the optimal cost in (6.7) is given as J (t0 , tf , u∗ ) = V (t0 ),

(6.10)

190

6 LQ Optimal Controls

where V (t) = z T (t)P(t)z(t), which is well known and will be utilized later. For the input delayed system (6.1), the finite horizon LQ control for the cost (6.6) is given by −1 ¯ T





u(t) = −R B P(t) e x(t) +

t

Ah

e

A(t−τ )

 B1 u(τ )d τ .

(6.11)

t−h

Fixed Horizon LQ Control for Fixed Terminal States Now consider the fixed terminal state case. An LQ control is obtained to minimize the following cost 



 xˆ T (s + h)Qˆx(s + h) + uT (s)Ru(s) ds

tf

J (t0 , tf , u; x(tf ) = 0, utf ≡ 0) = t0

(6.12) with x(tf + h) = 0, utf ≡ 0, Q ≥ 0 and R > 0, which can be written as  J (t0 , tf , u; z(tf ) = 0) =

tf

  z T (s)Qz(s) + uT (s)Ru(s) ds

(6.13)

t0

with z(tf ) = 0, Q ≥ 0 and R > 0. This problem can be solved from the above terminal ¯ f ) = 0, and ¯ state cost problem by taking F as infinity. Let P(t) = P −1 (t); then P(t ¯P(t) satisfies the following equation with P(t ¯ f) = 0 ˙¯ ) = AP(τ ¯ )QP(τ ¯ ) − BR ¯ −1 B¯ T , ¯ ) + P(τ ¯ )AT + P(τ P(τ

(6.14)

and thus the solution can be given as follows. u(t) = −R−1 B¯ T P¯ −1 (t)z(t).

(6.15)

For the input delayed system (6.1), the finite horizon LQ control minimizing the cost (6.12) is given by   −1 ¯ u(t) = −R B P (t) eAh x(t) + −1 ¯ T

t

e

A(t−τ )

 B1 u(τ )d τ .

(6.16)

t−h

Infinite Horizon LQ Control If P(t) in (6.9) goes to constant P = P T as tf goes to infinity, P satisfies the wellknown algebraic Riccati equation ¯ −1 B¯ T P 0 = AT P + PA + Q − P BR

(6.17)

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

191

and the infinite horizon LQ control is given from (6.8) by u(t) = −R−1 B¯ T Pz(t)   = −R−1 B¯ T P eAh x(t) +

t



eA(t−τ ) B1 u(τ )d τ .

(6.18) (6.19)

t−h

It is well-known in the ordinary system (6.4) that P(t) in (6.9) converges to the positive definite matrix P in (6.17) and the infinite horizon control (6.18) asymptot¯ is stabilizable and (A, QT /2 ) is ically stabilizes the ordinary system (6.4) if (A, B) detectable. Since there is only one solution that is positive definite among the finite ¯ is stabilizable number of solutions of the algebraic Riccati equation (6.17) if (A, B) T /2 and (A, Q ) is detectable. Therefore, a way to find such an asymptotically stabilizing control is to compute the finite number of solutions of the algebraic Riccati equation (6.17) and then choose the positive definite solution. We used stabilizability and detectability for ordinary systems since they are well-known in introductory textbooks. Once z(t) goes to zero as t goes to infinity, u(t) also goes to zero from the relation (6.18), which concludes that x(t) in (6.2) goes to zero. Fixed horizon LQ controls for input delayed systems with predictive costs in this subsection follow similarly from [2, 4, 21] once they are transformed to delay-free systems.

6.2.2 Fixed Horizon LQ Control for Standard Costs Consider a finite horizon LQ cost with the free-terminal state for the system (6.1) such as   tf  xT (s)Qx(s) + uT (s)Ru(s) ds + VF (x(tf ), utf ), (6.20) J (t0 , tf , u) = t0

with Q ≥ 0 and R > 0, where the terminal weighting functional VF (x(tf ), utf ) denotes  tf  T VF (x(tf ), utf ) = x (tf )F1 x(tf ) + uT (s)F2 u(s)ds (6.21) tf −h

with F1 ≥ 0 and F2 ≥ 0. We introduce a lemma, which plays an essential role in obtaining optimal controls in the following subsection. Lemma 6.1 If it is possible to find a continuous functional V (t) and a vector function u∗ (t) such that

192

6 LQ Optimal Controls

V (tf ) = VF (x(tf ), utf ),   T T ˙ V (t) + x (t)Qx(t) + u (t)Ru(t) 

V˙ (t) + xT (t)Qx(t) + uT (t)Ru(t)

(6.22) u=u∗

≤ V˙ (t) + xT (t)Qx(t) + uT (t)Ru(t), (6.23)

 u=u∗

=0

(6.24)

for all t ∈ [t0 , tf ], then V (t0 ) = J (t0 , tf , u∗ ) ≤ J (t0 , tf , u) and thus u∗ (t) is the optimal control for the finite horizon LQ cost (6.20). 

Proof The proof is straightforward and refer to Remark 6.1.

Remark 6.1 From (6.22) and (6.23), u∗ is the optimal control from the principle of optimality in the dynamic programming. From (6.22) and (6.24),  V (t) = t

tf

  x∗T (s)Qx∗ (s) + u∗ T (s)Ru∗ (s) ds + VF (x(tf ), ut∗f )

for all t ∈ [t0 , tf ], where x∗ (s) is a state generated by {u∗ (α), α ∈ (s, tf ]}, which is called the optimal cost-to-go. Therefore, V (t0 ) is the optimal cost. It is noted that Lemma 6.1 is applicable for both input delayed systems and state delay systems, where terminal conditions in (6.22) may be different. It is also noted that V (t) can be represented with arguments depending on the problems, for example, V (x(t), t) for delay-free ordinary systems, V (xt , t) for state delayed systems, and V (x(t), ut , t) for input delayed system. Lemma 6.1 can be found in [6]. Fixed Horizon LQ Control for Pure Input Delayed Systems The finite horizon LQ control with a general cost for the input delayed system (6.1) is a little complicated compared to the previous section. We start with a pure input delayed system with B = 0 in (6.1) such as x˙ (t) = Ax(t) + B1 u(t − h).

(6.25)

From (6.3), x(t + h) = xˆ (t + h) = z(t). In this case, the finite horizon LQ cost (6.20) can be given by J (t0 , tf , u) = J¯ (t0 , tf , u) +

  t0 +h t0

xT (s)Qx(s)ds +

 tf tf −h

 uT (s)[R + F2 ]u(s)ds ,

(6.26) where the first term denotes

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

J¯ (t0 , tf , u) = =



tf

 x (s)Qx(s)ds +

t0 +h  tf −h

T

tf −h

193

uT (s)Ru(s)ds + xT (tf )F1 x(tf )

t0

  z T (s)Qz(s) + uT (s)Ru(s) ds + z T (tf − h)F1 z(tf − h)

t0

(6.27) and the last two terms in (6.26) include the state xt0 +h , which cannot be controlled via the control, and the control utf , which does not control the state within the finite horizon in the cost J (t0 , tf , u). Therefore, this finite horizon LQ control is sufficient to minimize J¯ (t0 , tf , u), from which the optimal control utf should be chosen as zero, i.e. ut∗f ≡ 0, and for t ∈ [t0 , tf − h]   u(t) = −R−1 B1T P(t) z(t)

B=0

   t = −R−1 B1T P(t) eAh x(t) + eA(t−s) B1 u(s)ds , t−h

(6.28) where P(t) for t ∈ [t0 , tf − h] satisfies the well known differential Riccati equation ˙ ) = AT P(τ ) + P(τ )A + Q − P(τ )B1 R−1 B1T P(τ ), P(tf − h) = F1 . − P(τ

(6.29)

This is a simple control that is similar to the finite horizon LQ control with state predictors. Unlikely, however, the finite horizon LQ control for a single input delayed system is not straightforward. Finite horizon LQ controls with standard costs for pure input delayed systems introduced in closed forms above are hardly found in the literature. Fixed Horizon LQ Control for Single Input Delayed Systems To minimize the finite horizon LQ cost (6.20) for the input delayed system (6.1), we use two continuous functionals V1 (t) and V2 (t) for t0 ≤ t ≤ tf − h and tf − h < t ≤ tf , respectively, so as to design the optimal controls u1∗ (t) and u2∗ (t) for each horizon. Theorem 6.1 For the input delayed system (6.1), the finite horizon LQ control for the cost (6.20) is given as follows. For t ∈ (tf − h, tf ],   u2∗ (t) = −[R + F2 ]−1 BT W1 (t)x(t) +

tf −t−h −h

 W2 (t, s)u∗ (t + s)ds , (6.30)

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy

194

6 LQ Optimal Controls

− W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q −W1 (t)B[R + F2 ]−1 BT W1 (t),

(6.31)

 ∂ ∂ W2 (t, s) = AT W2 (t, s) − W1T (t)B[R + F2 ]−1 BT W2 (t, s), (6.32) − + ∂t ∂s   ∂ ∂ ∂ W3 (t, r, s) = −W2T (t, r)B[R + F2 ]−1 BT W2 (t, s), − + (6.33) + ∂t ∂r ∂s 

with boundary conditions W1 (tf ) = F1 , W2 (t, −h) = W1 (t)B1 , W3 (t, −h, s) = B1T W2 (t, s).

(6.34) (6.35) (6.36)

For t ∈ [t0 , tf − h],   u1∗ (t) = −R−1 [BT P1 (t) + P2T (t, 0)]x(t) +

0 −h

 [BT P2 (t, s) + P3 (t, 0, s)]u∗ (t + s)ds ,

(6.37) where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q −[P1 (t)B + P2 (t, 0)]R−1 [BT P1 (t) + P2T (t, 0)],   ∂ ∂ − + P2 (t, s) = AT P2 (t, s) − [P1 (t)B + P2 (t, 0)]R−1 ∂t ∂s ×[BT P2 (t, s) + P3 (t, 0, s)],   ∂ ∂ ∂ − + P3 (t, r, s) = −[BT P2 (t, r) + P3 (t, 0, r)]T R−1 + ∂t ∂r ∂s

(6.38)

(6.39)

×[BT P2 (t, s) + P3 (t, 0, s)]

(6.40)

P1 (tf − h) = W1 (tf − h), P2 (tf − h, s) = W2 (tf − h, s),

(6.41) (6.42)

P3 (tf − h, r, s) = W3 (tf − h, r, s), P2 (t, −h) = P1 (t)B1 ,

(6.43) (6.44)

with boundary conditions

P3 (t, −h, s) = B1T P2 (t, s).

(6.45)

Proof For t0 ≤ t < tf − h and for tf − h ≤ t ≤ tf , we use two continuous functionals V1 (t) and V2 (t), respectively, as follows. For t ∈ (tf − h, tf ], let us define

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

 V2 (t) = x (t)W1 (t)x(t) + 2x (t) T

 +

tf −t−h −h

T



tf −t−h −h

tf −t−h

−h

195

W2 (t, s)u(t + s)ds 

uT (t + r)W3 (t, r, s)u(t + s)drds +

t

tf −h

uT (s)F2 u(s)ds. (6.46)

Note that  T d T x (t)W1 (t)x(t) = 2 Ax(t) + Bu(t) + B1 u(t − h) W1 (t)x(t) + xT (t)W˙ 1 (t)x(t), dt

 tf −t−h d T W2 (t, s)u(t + s)ds 2x (t) dt −h 

 tf −t−h tf −t−h d T T = 2˙x (t) W2 (t, s)u(t + s)ds + 2x (t) W2 (t, s)u(t + s)ds dt −h −h 

 tf −t−h tf −h d = 2˙xT (t) W2 (t, s)u(t + s)ds + 2xT (t) W2 (t, α − t)u(α)d α dt t−h −h  tf −t−h = 2˙xT (t) W2 (t, s)u(t + s)ds −h

 tf −h ∂W2 (t, α − t) + 2xT (t) −W2 (t, −h)u(t − h) + u(α)d α ∂t t−h  tf −t−h W2 (t, s)u(t + s)ds = 2{Ax(t) + Bu(t) + B1 u(t − h)}T −h

tf −t−h 

 ∂ ∂ − W2 (t, s)u(t + s)ds, ∂t ∂s −h   tf −t−h  tf −t−h  d uT (t + r)W3 (t, r, s)u(t + s)drds dt −h −h  tf −h  tf −h d uT (α)W3 (t, α − t, β − t)u(β)d αd β = dt t−h t−h  tf −h  tf −h =− uT (α)W3 (t, α − t, −h)u(t − h)d α − uT (t − h)W3 (t, −h, β − t)u(β)d β 

− 2xT (t)W2 (t, −h)u(t − h) + 2xT (t)



t−h tf −h  tf −h

t−h

∂W3 (t, α − t, β − t) + u (α) u(β)d αd β ∂t t−h t−h  tf −t−h  tf −t−h W3 (t, −h, r)u(t + r)dr − uT (t + s)W3 (t, s, −h)dru(t − h) = −uT (t − h)  + d dt



T

−h tf −t−h  tf −t−h

−h t tf −h

−h



∂ ∂ ∂ u (t + r) − − ∂t ∂r ∂s T

uT (s)F2 u(s)ds = uT (t)F2 u(t).

After some manipulation, we obtain d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt



−h

W3 (t, r, s)u(t + s) drds,

196

6 LQ Optimal Controls

    = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q x(t) + 2u(t − h) B1 T W1 (t) − W2T (t, −h) x(t)   ∂ ∂ − W2 (t, s) + AT W2 (t, s) u(t + s)ds ∂s ∂t −h   tf −t−h  B1 T W2 (t, s) − W3 (t, −h, s) u(t + s)ds + 2uT (t − h) 

+ 2xT (t)

−h tf −t−h  tf −t−h

 +

tf −t−h 

−h

−h





 uT (t + r)





+ 2uT (t)BT W1 (t)x(t) +

∂ ∂ ∂ − − ∂t ∂r ∂s

tf −t−h −h

 W3 (t, r, s)u(t + s)drds

 W2 (t, s)u(t + s)ds + uT (t)[R + F2 ]u(t).

From the gradient condition (6.23) in Lemma 6.1 such that 

 d V2 (t) T T + x (t)Qx(t) + u (t)Ru(t) dt    tf −t−h T W2 (t, s)u(t + s)ds , = 2[R + F2 ]u(t) + 2B W1 (t)x(t) +

∂ 0= ∂u(t)

−h

we obtain the optimal control u2∗ (·) as in (6.30) and thus the following relation 

 d T T V2 (t) + x (t)Qx(t) + u (t)Ru(t) 0= dt u(t)=u2∗ (t)   = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q − W1T (t)B[R + F2 ]−1 BT W1 (t) x(t)   +2u(t − h) B1 T W1 (t) − W2T (t, −h) x(t) 

tf −t−h 

∂W2 (t, s) ∂W2 (t, s) + + AT W2 (t, s) ∂s ∂t −h  T −1 T −W1 (t)B[R + F2 ] B W2 (t, s) u(t + s)ds

+2x (t) T



 +2u (t − h) T

tf −t−h 

−h tf −t−h



B1 W2 (t, s) − W3 (t, −h, s) u(t + s)ds T

 ∂ ∂ ∂ − − W3 (t, r, s) ∂t ∂r ∂s −h −h  −W2T (t, r)B[R + F2 ]−1 BT W2 (t, s) u(t + s)drds, 

+

tf −t−h





uT (t + r)

which produces the coupled Riccati equations (6.31)–(6.33) with the boundary conditions (6.35)–(6.36). From the condition (6.22) in Lemma 6.1, we should have V2 (tf ) = VF (x(tf ), utf ), which results in the condition (6.34). For t ∈ [t0 , tf − h] let us define

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

 V1 (t) = x (t)P1 (t)x(t) + 2x (t) T

 +

0

T

0 −h



0 −h

−h

197

P2 (t, s)u(t + s)ds

uT (t + r)P3 (t, r, s)u(t + s)drds.

(6.47)

After some manipulation, we obtain d V1 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt 

 = xT (t) P˙ 1 (t) + AT P1 (t) + P1 (t)A + Q x(t) + uT (t)Ru(t)   + 2uT (t − h) B1T P1 (t) − P2T (t, −h) x(t)    0  ∂ ∂ − P2 (t, s) + AT P2 (t, s) u(t + s)ds ∂t ∂s −h   0  B1T P2 (t, s) − P3 (t, −h, s) u(t + s)ds + 2uT (t − h)

+ 2xT (t)

−h

 ∂ ∂ ∂ P3 (t, r, s)u(t + s)drds − − ∂t ∂r ∂s −h −h    0 [BT P2 (t, s) + P3 (t, 0, s)]u(t + s)ds . + 2uT (t) [BT P1 (t) + P2T (t, 0)]x(t) + +

 0  0



uT (t + r)

−h

From the gradient condition (6.23) in Lemma 6.1, we obtain the optimal control u1∗ (·) as in (6.37) and in turn the following relation  d T T V1 (t) + x (t)Qx(t) + u (t)Ru(t) 0= dt u(t)=u1∗ (t)  = xT (t) AT P1 (t) + P1 (t)A + P˙ 1 (t) + Q  −[P1 (t)B + P2 (t, 0)]R−1 [BT P1 (t) + P2T (t, 0)] x(t)   +2uT (t − h) B1T P1 (t) − P2T (t, −h) x(t) 

 +2xT (t)

0



−h

 ∂ ∂ − P2 (t, s) + AT P2 (t, s) ∂t ∂s

 −[P1 (t)B + P2 (t, 0)]R−1 [BT P2 (t, s) + P3 (t, 0, s)] u(t + s)ds  +2u (t − h)

0

T

 +

0



0

u (t + r) T

−h

−h

−h

  T B1 P2 (t, s) − P3 (t, −h, s) u(t + s)ds 

 ∂ ∂ ∂ − − P3 (t, r, s) ∂t ∂r ∂s

198

6 LQ Optimal Controls

 −[BT P2 (t, r) + P3 (t, 0, r)]T R−1 [BT P2 (t, s) + P3 (t, 0, s)] u(t + s)drds, which results in the coupled Riccati equations (6.38)–(6.40) with the boundary conditions (6.44)–(6.45). Because V1 (t) and V2 (t) are the optimal cost functions for t0 ≤ t ≤ tf − h and tf − h < t ≤ tf , respectively, from the continuity conditions it is clear that V1 (tf − h) = V2 (tf − h), which are satisfied if and only if the conditions (6.41)–(6.43) hold. This completes the proof.  We remark that V1 (t) and V2 (t) are continuous but their derivatives are not continuous at time t = tf − h and thus u1∗ (tf − h) = u2∗ (tf − h) since the optimal controls are directly related to V˙1 (t) or V˙2 (t). In this subsection, we choose u1∗ (tf − h) rather than u2∗ (tf − h) as the optimal control at time tf − h. However, any finite instant control at time tf − h does not effect the behavior of the continuous-time system and thus the two of them could be used in the control. We can say that the optimal cost J (t, tf , u∗ ) in (6.20) is equal to the continuous functional V1 (t) for t ∈ [t0 , tf − h] and V2 (t) for t ∈ [tf − h, tf ] as follows. J (t, tf , u∗ ) =



V1 (t) for t ∈ [t0 , tf − h], V2 (t) for t ∈ [tf − h, tf ].

(6.48)

Finite horizon LQ controls with standard costs for single input delayed systems in Theorem 6.1 can be found similarly as in [3, 9, 18, 23]. Infinite Horizon LQ Control for Pure Input Delayed Systems If P(t) in (6.29) goes to constant P as tf goes to infinity, P = P T satisfies the wellknown algebraic Riccati equation 0 = AT P + PA + Q − PB1 R−1 B1T P

(6.49)

and the infinite horizon LQ control is given from (6.28) by   u(t) = −R−1 B1T P eAh x(t) +

t

 eA(t−s) B1 u(s)ds .

(6.50)

t−h

It is well-known in the ordinary system (6.4) with B = 0 that P(t) in (6.29) converges to the positive definite matrix P in (6.49) and the infinite horizon control (6.28) asymptotically stabilizes the ordinary system (6.4) with B = 0 if (A, B1 ) is stabilizable and (A, QT /2 ) is detectable. Alternatively, the control (6.50) can be obtained from the results in Theorem 6.1 for the finite horizon LQ control for a single input delayed system by taking B = 0. It is interesting to check whether there exist some connections between these two approaches. When we send tf to infinity, we do not need to care about the transient quantities u2∗ (t), W1 (t), W2 (t, s), and W3 (t, r, s) for t ∈ (tf − h, tf ). Therefore, we

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

199

concern about the stationary behaviors of P1 (t), P2 (t, s), and P3 (t, r, s) as tf goes to infinity if they exist as follows. P1 (t) → P1 , P2 (t, s) → P2 (s), P3 (t, r, s) → P3 (r, s). In this case, the control (6.37) with B = 0 becomes   T P2 (0)x(t) + u(t) = −R −1

0 −h

 P3 (0, s)u(t + s)ds ,

(6.51)

where the coupled partial differential Riccati equations (6.31)–(6.40) with B = 0 become



0 = AT P1 + P1 A + Q − P2 (0)R−1 P2T (0), P˙ 2 (s) = AT P2 (s) − P2 (0)R−1 P3 (0, s),

 ∂ ∂ + P3 (r, s) = −P3T (0, r)R−1 P3 (0, s) ∂r ∂s

(6.52) (6.53) (6.54)

with the boundary conditions P2 (−h) = P1 B1 , P3 (−h, s) = B1T P2 (s). The continuous functional (6.47) becomes  V (t) = xT (t)P1 x(t) + 2xT (t)  +

0 −h



0

−h

0 −h

P2 (s)u(t + s)ds

uT (t + r)P3 (r, s)u(t + s)drds.

(6.55)

From the relations between (6.50) and (6.51), we can find the relations that P2T (0) = B1T PeAh and P3 (0, s) = B1T Pe−As B1 . From these relations, the analytic forms for P1 , P2 (s) and P3 (r, s) based on P in (6.49) are given in the following corollary. Corollary 6.1 Assume that (A, B1 ) is stabilizable and (A, QT /2 ) is detectable. The matrices P1 , P2 (s) and P3 (r, s) in the infinite horizon LQ control for the system (6.1) yielding (6.52)–(6.54) have the following analytic forms: P1 = e

AT h

  P+

0

−h  0

e

AT β

 Qa d β eAh , Aβ

  AT h AT β Aβ P+ e Qa d β e−As B1 , P2 (s) = e s    0 T T eA β QaAβ d β e−As B1 , P3 (r, s) = B1T e−A r P +

(6.56) (6.57) (6.58)

max(r,s)

where P is the positive definite solution to the algebraic Riccati equation (6.49). Then the continuous functional (6.55) can be written as

200

6 LQ Optimal Controls

eAh x(t) +

V (t) =

+ +

 0 −h

 β

−h

 0 −h

T e−As B1 u(t + s)ds

eAh x(t) +

 β −h

P

eAh x(t) + T

e−As B1 u(t + s)ds



 0 −h

e−As B1 u(t + s)ds

 T eA β QeAβ eAh x(t)

 e−As B1 u(t + s)ds d β.

(6.59)

Proof Since P2 (s) in (6.53) yields the following relation T P˙ 2 (s) = AT P2 (s) − eA h PB1 R−1 B1T Pe−As B1 ,

its analytic solution is  s T T P2 (−h) − eA (s−τ ) eA h PB1 R−1 B1T Pe−Aτ B1 d τ −h  s T AT (s+h) =e P1 B1 − eA (s−τ +h) PB1 R−1 B1T Pe−Aτ B1 d τ , T

P2 (s) = eA

(s+h)

−h

which provides 

T

P2 (0) = eA h P1 B1 − 

0

W (s) =

e s

−AT τ

0

−h

T (−τ +h)

eA

PB1 R−1 B1T Pe−Aτ B1 d τ = eA h [P1 − W (−h)]B1 , T

PB1 R−1 B1T Pe−Aτ d τ .

(6.60)

From the boundary condition, we have T

T

eA h PB1 = eA h [P1 − W (−h)]B1 , which can be satisfied if the relation between P1 and P follows as P1 = P + W (−h).

(6.61)

This relation can be checked out via (6.52). Since the term W (−h) satisfies  AT W (−h) + W (−h)A = −

0 −h

 d −AT τ e PB1 R−1 B1T Pe−Aτ d τ dτ

= −PB1 R−1 B1T P + eA h PB1 R−1 B1T PeAh , T

P1 satisfies the algebraic Riccati equation (6.52) as follows.

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

201

0 = AT P + PA + Q − PB1 R−1 B1T P = AT [P1 − W (−h)] + [P1 − W (−h)]A + Q − PB1 R−1 B1T P = AT P1 + P1 A + Q − eA h PB1 R−1 B1T PeAh T

= AT P1 + P1 A + Q − P2 (0)R−1 P2T (0). In this case, P2 (s) becomes T

P2 (s) = eA

T

= eA

(s+h) (s+h)

 [P + W (−h)]B1 −

s

T

eA −h

(s−τ +h)

PB1 R−1 B1T Pe−Aτ B1 d τ

[P + W (s)]B1 .

(6.62)

The two boundary conditions of P3 (r, s) such that P3 (0, s) = B1T Pe−As B1 and P3 (−h, s) = B1T P2 (s) lead to construct P3 (r, s) as

B1T [P + W (r)]eA(r−s) B1 = P2T (r)e−A(h+s) B1 if r > s, T −AT (r−s) T −AT (r+h) [P + W (s)]B1 = B1 e P2 (s) if r < s, B1 e

T  T = B1T e−A r eA max(r,s) [P + W (max(r, s))]eA max(r,s) e−As B1 (6.63)

P3 (r, s) =

which satisfies (6.54). We remark that T



T

0

eA s {P + W (s)}eAs = eA s PeAs +

T

eA 

s



s

0

(s−β)

PB1 R−1 B1T PeA(s−β) d β

eA β PB1 R−1 B1T PeAβ d β s    0 T AT s As eA β AT P + PA + Q eAβ d β = e Pe + T

= eA s PeAs +

=e

T

0

T

 0 d AT β Aβ  T e Pe dβ + eA β QeAβ d β dβ s

Pe + s  0 T eA β QeAβ d β. =P+ A s

As

s

From (6.61), therefore, P1 can be written as

T  T P1 = P + W (−h) = eA h e−A h [P + W (−h)]e−Ah eAh    0 AT h AT β Aβ P+ =e e Qe d β eAh . −h

From (6.62), P2 (s) can be written as

202

6 LQ Optimal Controls

T  T [P + W (s)]B1 = eA h eA s [P + W (s)]eAs e−As B1    0 T T = eA h P + eA β QeAβ d β e−As B1 . (s+h)

T

P2 (s) = eA

s

From (6.63) and (6.64), P3 (r, s) can be written as P3 (r, s) =

T B1T e−A r

  P+

0

e

AT β



Qe d β e−As B1 . Aβ

max(r,s)

We now look at the continuous functional V (t) = V1 (t) + V2 (t) + V3 (t), where Vi (t) satisfy Th

V1 = xT (t)P1 x(t) = xT (t)eA  V2 = 2xT (t)

0

−h Th



0

−h

  P+

 T AT h P = 2x (t)e 

=



0

−h −h  0 T −h 0

 + + =

−h 0

+



s



r

u (t T



−h 0

+ +



0



=

s

β



−h −h s 0 T −h

0

−h

e

AT β

 Qe





β −h

e−As B1 u(t + s)ds

e

−As

B1 u(t + s)dsd β ,



T T + r)B1T e−A r eA β QeAβ e−As B1 u(t T

+ s)d βdrds

T

uT (t + r)B1T e

−AT r

0

e−As B1 u(t + s)ds

T

uT (t + r)B1T e−A r eA β QeAβ e−As B1 u(t + s)drd βds T

T

0

−h

s



eA β QeAβ e−As B1 u(t + s)drd βds

  T u (t + r)B1T e−A r dr P 

0

−h

s

−h −h −h  0 β β



B1 u(t + s)ds +

  T u (t + r)B1T e−A r dr P

−h s 0 T





uT (t + r)B1T e−A r eA β QeAβ e−As B1 u(t + s)d βdrds

−h s −h 0  0 β

=

e

−As

−h

0

 +

 T eA β QeAβ d β e−As B1 u(t + s)ds

  T u (t + r)B1T e−A r dr P

−h s 0 T



0

uT (t + r)P3 (r, s)u(t + s)drds

−h −h s  0  0 0



−h

 T eA β QeAβ d β eAh x(t),

s 0

−h

0

0

P2 (s)u(t + s)ds

= 2xT (t)eA

V3 =

  P+

u (t T

e−As B1 u(t + s)ds



T T + r)B1T e−A r eA β QeAβ e−As B1 u(t

+ s)drdsd β

uT (t + r)B1T e−A r eA β QeAβ e−As B1 u(t + s)drdsd β T

T

  T u (t + r)B1T e−A r dr P

0

−h

e−As B1 u(t + s)ds



6.2 Fixed Horizon LQ Controls for Input Delayed Systems

 +

0



β



β

−h −h −h 0 T

203

uT (t + r)B1T e−A r eA β QeAβ e−As B1 u(t + s)drdsd β T

T

  0  T u (t + r)B1T e−A r dr P e−As B1 u(t + s)ds −h −h   

 = +

0

β

−h

−h

u (t T

T + r)B1T e−A r dr

e

AT β

Qe



β

−h

e

−As

B1 u(t + s)ds d β.



This completes the proof.

As a sufficient condition for the asymptotic stability of the system (6.1), we should verify whether or not the continuous functional (6.55) satisfies (2.4) and (2.5) in Theorem 2.1. The time-derivative negativity (2.5) can be easily verified from the property of the optimal control such that 

d V (t) dt

 u(t)=u∗ (t)

= −xT (t)Qx(t) − u∗T (t)Ru∗ (t).

The condition (2.4) can be verified through Corollary 6.1, where the special choice of P as the positive definite solution for the algebraic Riccati equation (6.49) ensures the condition (2.4) of the continuous functional (6.55). Therefore, the infinite horizon LQ control (6.51) asymptotically stabilizes the system if (A, B1 ) is stabilizable and (A, QT /2 ) is detectable. Remark 6.2 In the finite horizon problem, the choice of the terminal cost matrix F such as  0 T e−A τ PB1 R−1 B1T Pe−Aτ d τ , F = P1 = P + W (−h) = P + −h

where W (s) denotes (6.60), allows P1 (t), P2 (t, s) and P3 (t, r, s) to be stationary, i.e. P1 , P2 (s) and P3 (r, s), which turns out to be equivalent to the infinite horizon problem. Remark 6.3 Since there are many P yielding the algebraic Riccati equation (6.49), P1 , P2 (s), and P3 (r, s) yielding the coupled Riccati equations (6.52)–(6.54) are not unique. As only one special choice of P, which is positive definite, in (6.49) such that P allows the infinite horizon LQ control (6.50) to asymptotically stabilize the system, only one special choice of P1 , P2 (s), and P3 (r, s) in (6.52)–(6.54) allows the infinite horizon LQ control (6.51) to stabilize the closed-loop system. The relations in Corollary 6.1 are newly obtained here. Infinite Horizon LQ Control for Single Input Delayed Systems If P1 (t) = P1T (t), P2 (t, s) and P(t, r, s) = P3T (t, s, r) of the coupled partial differential Riccati equations (6.31)–(6.40) in Theorem 6.1 go to the stationary quantities P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) when tf goes to infinity, they satisfy

204

6 LQ Optimal Controls

0 = AT P1 + P1 A + Q − [P1 B + P2 (0)]R−1 [BT P1 + P2T (0)],



P˙ 2 (s) = AT P2 (s) − [P1 B + P2 (0)]R−1 [BT P2 (s) + P3 (0, s)],

 ∂ ∂ + P3 (r, s) = −[BT P2 (r) + P3 (0, r)]T R−1 [BT P2 (s) + P3 (0, s)], ∂r ∂s

(6.64) (6.65) (6.66)

with boundary conditions P2 (−h) = P1 B1 and P3 (−h, s) = B1T P2 (s). In this case, the finite horizon LQ optimal control (6.30) in Theorem 6.1 becomes 

u∗ (t) = −R−1 [BT P1 + P2T (0)]x(t) +

 0 −h

 [BT P2 (s) + P3 (0, s)]u(t + s)ds .

(6.67) It was addressed in [19] that for Q > 0 the infinite horizon LQ optimal control (6.67) asymptotically stabilizes the single input delayed system (6.1) if the system is stabilizable [5, 19], where the system is said to be stabilizable if there exists a linear control with a form of  0 K1 (s)u(t + s)ds (6.68) u(t) = Ko x(t) + −h

that stabilizes the system. However, differently from the solution of the algebraic Riccati equation (6.17) for the ordinary system, in which there exist a finite number of solutions of the algebraic Riccati equation (6.17) and thus we can easily find the unique positive definite solution P among them that provides the stabilizing control, the number of solutions of the coupled Riccati equations (6.64)–(6.66) is infinite and there seems to be no research on finding the solutions P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) that produce such a stabilizing control. The corresponding continuous functional has a form in (6.55) when tf is infinite. To check out whether the optimal control asymptotically stabilizes the system (6.1), the continuous functional (6.55) should be examined to satisfy the conditions (2.4) and (2.5) in Theorem 2.1, where the time-derivative negativity (2.5) can be easily verified from the property of the optimal control such that 

d V (t) dt

 u(t)=u∗ (t)

= −xT (t)Qx(t) − u∗T (t)Ru∗ (t).

To the best of our knowledge, however, the properties of P1 , P2 (s) and P3 (r, s) to satisfy the condition (2.4) of the continuous functional (6.55) have not been explicitly investigated in the literature, and thus the stability analysis has not, also. While infinite horizon LQ control systems have problems in stability proof and computation, we introduce receding horizon LQ controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

6.2 Fixed Horizon LQ Controls for Input Delayed Systems

205

We note that, since it is very hard to exactly solve (6.64)–(6.66), the numerical procedure in Appendix E.1, which can be found in [1], is often used to obtain approximate solutions.

6.3 Receding Horizon LQ Controls for Input Delayed Systems This section deals with receding horizon LQ controls for the predictive cost and the standard cost given in Sect. 6.2. In this section, we assume that the input delayed system (6.1) is stabilizable [5, 19].

6.3.1 Receding Horizon LQ Control for Predictive Costs Receding Horizon LQ Control for Free Terminal States Since the receding horizon LQ control is introduced first in this section, we use rather detailed notations for the better understanding of the receding horizon LQ control. Once we understand the real meaning, then we use simple notations later. For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. In this case, xˆ (s| t) s=t+h = xˆ (t + h| t) = xˆ (t + h). Consider a receding horizon LQ cost given by  J (t, t + T , u; xˆ (t + h)) =

t+T

 xˆ T (s + h| t)Qˆx(s + h| t) + uT (s| t)Ru(s| t) ds



t

+ˆxT (t + T + h| t)F xˆ (t + T + h| t),

(6.69)

with ut+T ≡ 0, Q > 0, R > 0, and F > 0 for the system (6.1) that can be expressed as d x(s + h| t) = Ax(s + h| t) + Bu(s + h| t) + B1 u(s| t), s ∈ [t, t + T ]. ds

(6.70)

The above problem can be transformed by using a state predictor to 

t+T

J (t, t + T , u; z(t)) =

  T T z (s| t)Qz(s| t) + u (s| t)Ru(s| t) ds

t

+z T (t + T | t)Fz(t + T | t) with Q > 0, R > 0 and F > 0 for the system (6.4) that is expressed as

(6.71)

206

6 LQ Optimal Controls

d ¯ z(s| t) = Az(s| t) + Bu(s| t), s ∈ [t, t + T ], ds

(6.72)

with the initial state z(s| t)s=t = z(t). The optimal control on the time interval [t, t + T ] is obtained from (6.8) and (6.9) by replacing t0 and tf with t and t + T , respectively, as follows. u∗ (s| t) = −R−1 B¯ T P(s| t)z(s| t),

(6.73)

where P(s| t) = P T (s| t) satisfies the well known differential Riccati equation −

d ¯ −1 B¯ T P(s| t) P(s| t) = AT P(s| t) + P(s| t)A + Q − P(s| t)BR ds

(6.74)

with the terminal time condition P(t + T | t) = F. The receding horizon LQ control u(t) at the time t is to use the first control u∗ (t| t) on the time interval [t, t + T ] and thus given as u(t) = u∗ (t| t) = −R−1 B¯ T P(t| t)z(t| t).

(6.75)

Since P(s| t) is shift invariant and z(t| t) = z(t), the differential Riccati equation (6.74) and the optimal control (6.75) can be given as, for P(T ) = F, ¯ −1 B¯ T P(τ ), ˙ ) = AT P(τ ) + P(τ )A + Q − P(τ )BR − P(τ u(t) = −R−1 B¯ T P(0)z(t).

(6.76) (6.77)

On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the receding horizon LQ control u(t+ ) (6.77) with t replaced with t+ as the time proceeds. The concept of the receding horizon LQ control is illustrated in Fig. 6.1. In order to understand the concept easily, the discrete-time version is depicted and then the continuous-time is introduced. It is noted from (6.10) that the optimal cost J (t, t + T , u∗ ) is given as J (t, t + T , u∗ ) = V (t) = z T (t)P(0)z(t),

(6.78)

where P(0) is obtained from (6.76). In order to find a stability condition of the receding horizon LQ control, we investigate the time derivative of the receding horizon LQ cost (6.71), i.e. d J (t, t + T , u∗ ; z(t)) dt   1 J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) − J (t, t + T , u∗ ; z(t)) , = lim Δ→0 Δ (6.79)

6.3 Receding Horizon LQ Controls for Input Delayed Systems

207

Fig. 6.1 Concept of receding horizon LQ control

where J (t, t + T , u∗ ; z(t)) denotes ∗



t+T

J (t, t + T , u ; z(t)) =



 z (s| t)Qz (s| t) + u (s| t)Ru (s| t) ds ∗T



∗T



t

+z ∗T (t + T | t)Fz ∗ (t + T | t)

(6.80)

with the initial state z ∗ (t| t) = z(t) and J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) denotes J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t))   t+T +Δ  1∗T 1∗ 1∗T 1∗ z (s| t)Qz (s| t) + u (s| t)Ru (s| t) ds = t+Δ

+ z 1∗T (t + T + Δ| t + Δ)Fz 1∗ (t + T + Δ| t + Δ)

(6.81)

with the initial state z 1∗ (t + Δ| t + Δ) = z ∗ (t + Δ| t). It is noted that the initial state z 1∗T (t + Δ| t + Δ) of the optimal cost (6.81) is given the trajectory z ∗ (t + Δ| t) of the optimal cost (6.80). Let us now decompose the optimal cost (6.80) as follows. J (t, t + T , u∗ ; z(t)) =



t+Δ t

  z ∗T (s| t)Qz ∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds

+J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)),

208

6 LQ Optimal Controls

where J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) denotes J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) =

 t+T  t+Δ

 z ∗T (s| t)Qz ∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds

+z ∗T (t + T | t)Fz ∗ (t + T | t).

(6.82)

Using this cost J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)), we can rewrite the Eq. (6.79) as follows.   d 1 J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) − J (t, t + T , u∗ ; z(t)) J (t, t + T , u∗ ; z(t)) = lim Δ→0 Δ dt  1 J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) + lim Δ→0 Δ  −J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) . (6.83)

Terminal Cost Monotonicity In the second part of (6.83), if the optimal control u1∗ in J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) is replaced with a non-optimal control u1 , then due to the definition it obviously holds that J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) ≤ J (t + Δ, t + T + Δ, u1 ; z ∗ (t + Δ| t)), where J (t + Δ, t + T + Δ, u1 ; z ∗ (t + Δ| t)) denotes J (t + Δ, t + T + Δ, u1 ; z ∗ (t + Δ| t))   t+T +Δ  z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) ds + z 1T (t + T + Δ)Fz 1 (t + T + Δ) = t+Δ

with the initial state z 1 (t + Δ) = z ∗ (t + Δ| t). Let us choose u1 (α) = u∗ (α| t) for α ∈ [t + Δ, t + T ]; then z 1 (α) = z ∗ (α| t) for α ∈ [t + Δ, t + T ], which gives J (t + Δ, t + T + Δ, u1 ; z ∗ (t + Δ| t))   t+T  z ∗T (s| t)Qz ∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds = t+Δ

  t+T +Δ  + z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) ds + z 1T (t + T + Δ)Fz 1 (t + T + Δ) t+T   = J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) − z ∗T (t + T | t)Fz ∗ (t + T | t)  +

t+T +Δ 

 z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) ds + z 1T (t + T + Δ)Fz 1 (t + T + Δ).

t+T

Since it holds that z 1 (t + T ) = z ∗ (t + T | t) and

6.3 Receding Horizon LQ Controls for Input Delayed Systems

209

  1 1 ¯ 1 (t + T ) z (t + T + Δ) − z 1 (t + T ) Az 1 (t + T ) + Bu Δ the above relations with a simple choice of u1 (t + T ) as −Hz ∗ (t + T | t), i.e. u1 (t + T ) = −Hz ∗ (t + T | t), provide   1 J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) − J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) Δ→0 Δ   t+T +Δ   1 z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) ds+ ≤ lim Δ→0 Δ t+T  z 1T (t + T + Δ)Fz 1 (t + T + Δ) − z ∗T (t + T | t)Fz ∗ (t + T | t) lim

= z ∗T (t + T | t)Qz ∗ (t + T | t) + u1T (t + T )Ru1 (t + T )   ¯ 1 (t + T ) + 2z ∗T (t + T | t)F Az ∗ (t + T | t) + Bu

(6.84)

= z ∗T (t + T | t)Mz ∗ (t + T | t),

(6.85)

¯ ) + (A − BH ¯ )T F + Q + H T RH . The condition that where M = F(A − BH M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the closedloop system. Remark 6.4 In this subsection, we have proven that   ∂ 1 J (α, β, u∗ ; z(α)) = lim J (α, β + Δ, u1∗ ; z(α)) − J (α, β, u∗ ; z(α)) Δ→0 Δ ∂β ≤ z T (β)Mz(β). The condition that M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the closed-loop system. Stability In the first part of (6.83), we can obtain that   1 J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) − J (t, t + T , u∗ ; z(t)) Δ→0 Δ   t+Δ    1 z ∗T (s| t)Qz ∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds − = lim Δ→0 Δ t   T (6.86) = − z (t)Qz(t) + u∗T (t| t)Ru∗ (t| t) . lim

From the relations (6.85) and (6.86) corresponding to the second and first parts in the Eq. (6.83), respectively, we can get the upper bound of the time derivative of J (t, t + T , u∗ ; z(t)) in (6.83) as follows.

210

6 LQ Optimal Controls

d J (t, t + T , u∗ ; z(t)) dt 

∗T





≤ − z (t)Qz(t) + u (t| t)Ru (t| t) + z ∗T (t + T | t)Mz ∗ (t + T | t). T

(6.87)

Using this choice of u1 (t + T ) satisfying that M < 0, a sufficient condition for the asymptotic stability of the closed-loop system can be found in the following theorem. Theorem 6.2 If there exists a matrix F > 0 such that ¯ ) + (A − BH ¯ )T F + Q + H T RH ≤ 0 F(A − BH

(6.88)

for some H , the receding horizon LQ control (6.77) for the cost (6.71) with such an F asymptotically stabilizes the system (6.4). Proof If the condition (6.88) is satisfied, then it holds that J (t, t + T , u∗ ; z(t)) ≥ 0,   d J (t, t + T , u∗ ; z(t)) ≤ − z T (t)Qz(t) + u∗T (t| t)Ru∗ (t| t) ≤ 0. dt J (t, t + T , u∗; z(t)) is V (t) from (6.78). Therefore, these with cost monotonicity conditions satisfy the conditions of the Lyapunov theorem for ordinary systems, which can be obtained from Theorem 2.1 by replacing x(t) and xt with z(t) and z(t), respectively. This completes the proof.  Remark 6.5 There is an easy way to find F and H that satisfy the condition (6.88) ¯ is stabilizable. In this case, there always exists a matrix H such that when (A, B) ¯ ) is Hurwitz, which ensures the existence of a positive definite matrix F for (A − BH (6.88) from the property of the algebraic Lyapunov matrix equation. Once z(t) goes to zero as t goes to infinity, u(t) also goes to zero from the relation (6.75), which concludes that x(t) in (6.2) goes to zero. The quadratic matrix inequality (6.88) in Theorem 6.2 can be rewritten as linear matrix inequalities. Corollary 6.2 If there exist a positive definite matrix F¯ > 0 and a matrix H¯ such that ⎤ ⎡ ¯ 1/2 H¯ T ¯ T − H¯ T B¯ T FQ AF¯ − B¯ H¯ + FA ⎣ (6.89) QT /2 F¯ −I 0 ⎦ ≤ 0, ¯ H 0 −R−1 the receding horizon LQ control (6.77) for the cost (6.71) with F = F¯ −1 asymptotically stabilizes the system (6.4). Proof Let us multiply the inequality (6.88) by F¯ = F −1 from the both sides, which can be written into

6.3 Receding Horizon LQ Controls for Input Delayed Systems

211

¯ )F¯ + F(A ¯ − BH ¯ )T + FQ ¯ F¯ + FH ¯ T RH F¯ ≤ 0. (A − BH

(6.90)

 Define H¯ = H F¯ and apply the Schur complement for this inequality, which results in (6.89). The Schur complement for this inequality produces the condition (6.89). From Theorem 6.2, we obtain the asymptotic stability of the closed-loop system. This completes the proof. 

Let us rewrite the inequality (6.84) for the cost monotonicity as follows.   d J (t, t + T , u∗ ; z(t)) ≤ − z T (t)Qz(t) + u∗T (t| t)Ru∗ (t| t) dt   ¯ −1 B¯ T F z ∗ (t + T | t) +z ∗T (t + T | t) FA + AT F + Q − F BR T    + u1 (t + T ) + R−1 B¯ T Fz ∗ (t + T | t) R u1 (t + T ) + R−1 B¯ T Fz ∗ (t + T | t) .

(6.91) An alternative choice of u1 (t + T ) as −R−1 B¯ T Fz ∗ (t + T | t) provides the following result. Theorem 6.3 If there exists a matrix F > 0 such that ¯ −1 B¯ T F ≤ 0, FA + AT F + Q − F BR

(6.92)

the receding horizon LQ control (6.77) for the cost (6.71) with such an F asymptotically stabilizes the system (6.4). Proof When H in (6.88) is replaced with R−1 B¯ T F, we obtain (6.92). If there exists an F which satisfies (6.92), then an H exists and (6.88) is satisfied. From Theorem 6.2, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  It is noted that there always exists a positive definite matrix F satisfying (6.92) if ¯ is stabilizable. (A, B) The quadratic matrix inequality (6.92) in Theorem 6.3 can be rewritten as a linear matrix inequality, which is shown in the following corollary. Corollary 6.3 If there exists a positive definite matrix F¯ > 0 such that 

¯ T + AF¯ − BR ¯ −1 B¯ T FQ ¯ 1/2 FA T /2 ¯ Q F −I

 ≤ 0,

(6.93)

the receding horizon LQ control (6.77) for the cost (6.71) with F = F¯ −1 asymptotically stabilizes the system (6.4).

212

6 LQ Optimal Controls

Proof The quadratic matrix inequality (6.92) in Theorem 6.3 can be rewritten as linear matrix inequalities. Let us multiply the inequality (6.92) by F¯ = F −1 from the both sides, which can be written into ¯ F¯ − BR ¯ −1 B¯ T ≤ 0. ¯ T + AF¯ + FQ FA

(6.94)

The Schur complement for this inequality produces the condition (6.93) From Theorem 6.3, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  For the input delayed system (6.1), the receding horizon LQ control for the cost (6.69) is given from (6.77) by replacing z(t) in (6.2) 

−1 ¯ T



u(t) = −R B P(0) e x(t) +

t

Ah

e

A(t−τ )

 B1 u(τ )d τ .

(6.95)

t−h

Once z(t) goes to zero as t goes to infinity, u(t) also goes to zero from the relation (6.18), which concludes that x(t) in (6.2) goes to zero. Therefore, the receding horizon LQ control (6.95) asymptotically stabilizes the system (6.1) under the conditions in Theorems 6.2 and 6.3. We note again the receding horizon LQ controls may be more preferable to infinite horizon LQ controls since the computation is feasible and the asymptotic stability is guaranteed under some assumptions. Receding Horizon LQ Control for Fixed Terminal States The receding horizon LQ problem for the fixed terminal state is to find the control to minimize a cost   t+T  T T z (s| t)Qz(s| t) + u (s| t)Ru(s| t) ds (6.96) J (t, t + T , u; z(t)) = t

with z(t + T | t) = 0, Q > 0 and R > 0. Following the same procedure in the previous subsection, the optimal control on the time interval [t, t + T ] is obtained from (6.14) and (6.15) by replacing t0 and tf with t and t + T , respectively, as follows. u∗ (s| t) = −R−1 B¯ T P¯ −1 (s| t)z(s| t).

(6.97)

¯ t) = P¯ T (s| t) satisfies the well known differential Riccati equation where P(s| d ¯ ¯ t)QP(s| ¯ t) − BR ¯ −1 B¯ T , ¯ t) + P(s| ¯ t)AT + P(s| P(s| t) = AP(s| ds

(6.98)

¯ + T | t) = 0. Since P(s| ¯ t) is shift invariant and with the terminal-time condition P(t z(t| t) = z(t), the optimal control (6.97) and the differential Riccati equation (6.98) can be given as

6.3 Receding Horizon LQ Controls for Input Delayed Systems

u(t) = u∗ (t| t) = −R−1 B¯ T P¯ −1 (0)z(t), ¯ ) + P(τ ¯ )QP(τ ¯ ) − BR ¯ −1 B¯ T ¯˙ ) = P(τ ¯ )AT + AP(τ P(τ

213

(6.99) (6.100)

¯ ) = 0. On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is with P(T repeated to obtain the receding horizon LQ control u(t+ ) (6.99) with t replaced with t+ as the time proceeds. ¯ is controllable, then the receding horizon LQ control (6.99) Theorem 6.4 If (A, B) for the cost (6.96) asymptotically stabilizes the system (6.4). ¯ is controllable, then there exists the optimal LQ control for the cost Proof If (A, B) (6.96) with z(t + T | t) = 0. Consider the time derivative of the receding horizon LQ cost (6.96), which can be written as (6.79), where J (t, t + T , u∗ ; z(t)) with the constraint z ∗ (t + T | t) = 0 and J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) with the constraint z ∗ (t + T + Δ| t) = 0 are defined as the same expressions (6.80) and (6.81), respectively, except the terminal weights. In the time derivative having the form (6.83), the first part satisfies the relation (6.86). For the second part of of (6.83), we claim that the optimal finite horizon LQ controls for the cost with the fixed terminal state have the following property. J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) ≤ J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)). (6.101) This can be shown by contradiction. Assume that J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) > J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)). (6.102) Let us replace u1∗ (s) with u1 (s), where u1 (s) denotes u∗ (s) for s ∈ [t + Δ, t + T ] and zero for s ∈ (t + T , t + T + Δ], and denote z 1 (s) as the corresponding state generated by u1 (s) in the system (6.4) with the initial state z 1 (α| t) α=t+T = z ∗ (t + T | t). Then z 1 (α) is z ∗ (α| t) for α ∈ [t + Δ, t + T ] and zero for α ∈ [t + T , t + T + Δ], which results in J (t + Δ, t + T + Δ, u1 ; z ∗ (t + Δ| t)) = J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)). However, by the optimality of u1∗ , it always holds that J (t, t + T + Δ, u1 ; z ∗ (t + Δ| t)) ≥ J (t, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) and thus from (6.102) and (6.103) J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)) > J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t)),

(6.103)

214

6 LQ Optimal Controls

which is not always true. Therefore, by contradiction, the assumption (6.102) does not hold, too. Therefore, the monotonic decreasing property (6.101) implies that J (t, t + T , u∗ ; z(t)) is non-increasing and bounded below and thus converges to a constant greater than or equal to zero as t goes to infinity. Therefore, dtd J (t, t + T , u∗ ; z(t)) goes to zero as t goes to infinity. This means that z(t| t) and u∗ (t| t) go to zero as t goes to infinity since Q > 0 and R > 0. The closed-loop system is linear time-invariant and thus these conditions imply the asymptotic stability of the closed-loop system, which completes the proof.  For the input delayed system (6.1), the receding horizon LQ control for the cost (6.96) is given from (6.99) by replacing z(t) in (6.2)   u(t) = −R−1 B¯ T P¯ −1 (0) eAh x(t) +

t

 eA(t−s) B1 u(s)ds .

(6.104)

t−h

Once z(t) goes to zero as t goes to infinity, u(t) also goes to zero from the relation (6.18), which concludes that x(t) in (6.2) goes to zero. Therefore, the receding horizon LQ control (6.104) asymptotically stabilizes the system (6.1) under the condition in Theorem 6.4. Properties of receding horizon LQ controls for input delayed systems with predictive costs in this subsection follow similarly from [10, 13, 17] once they are transformed to delay-free systems.

6.3.2 Receding Horizon LQ Control for Standard Costs As mentioned in the previous subsection, the infinite horizon LQ control involving with the cost J (t0 , ∞, u) in (6.20) is hardly found and its stability is unknown because it might be unknown to find P1 , P2 (s) and P3 (r, s) that ensures the condition (2.4) of the continuous functional (6.55) in Theorem 2.1. But in this subsection we introduce the receding horizon LQ control which stabilizes the closed-loop system. For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. Define us| t ≡ u([s − h, s]| t); then ut+T | t ≡ u([t + T − h, t + T ]| t). Consider a receding horizon LQ cost given by 

t+T

J (t, t + T , u; x(t), ut ) =

  T T x (s| t)Qx(s| t) + u (s| t)Ru(s| t) ds

t

+VF (x(t + T | t), ut+T | t )

(6.105)

with Q > 0 and R > 0, where the terminal weighting functional VF (x(t + T | t), ut+T | t ) denotes VF (x(t + T | t), ut+T | t ) = xT (t + T | t)F1 x(t + T | t) +

 t+T t+T −h

uT (s| t)F2 u(s| t)ds

(6.106)

6.3 Receding Horizon LQ Controls for Input Delayed Systems

215

with F1 > 0 and F2 > 0. It is noted that (6.105) and (6.106) are obtained from the finite horizon LQ cost (6.20) with the terminal weighting function (6.21) by replacing t0 and tf with t and t + T , respectively. Thus, the receding horizon LQ control can be obtained from the finite horizon LQ controls (6.30) and (6.37) by replacing t0 and tf with t and t + T , respectively. For s ∈ (t + T − h, t + T ], the optimal LQ control u2∗ (s| t) for the cost (6.105) is given from (6.30) as u2∗ (s| t)

−1 T

= −[R + F2 ] B  +

 W1 (s − t)x(s| t)

T +t−s−h

−h





W2 (s − t, a)u (s + a| t)da ,

(6.107)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy, for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0], − W˙ 1 (τ ) = AT W1 (τ ) + W1 (τ )A + Q

(6.108) −W1 (τ )B[R + F2 ]−1 BT W1 (τ ),   ∂ ∂ + W2 (τ , s) = AT W2 (τ , s) − W1T (τ )B[R + F2 ]−1 BT W2 (τ , s), (6.109) − ∂τ ∂s   ∂ ∂ ∂ + + W3 (τ , r, s) = −W2T (τ , r)B[R + F2 ]−1 BT W2 (τ , s) − (6.110) ∂τ ∂r ∂s with boundary conditions W1 (T ) = F1 , W2 (τ , −h) = W1 (τ )B1 , W3 (τ , −h, s) = B1T W2 (τ , s).

(6.111) (6.112) (6.113)

For s ∈ [t, t + T − h], the optimal LQ input u1∗ (s| t) for the cost (6.105) is given from (6.37) as u1∗ (s| t)

 [BT P1 (s − t) + P2T (s − t, 0)]x(s| t) + = −R −1



0

−h

 [BT P2 (s − t, a) + P3 (s − t, 0, a)]u∗ (s + a| t)da ,

(6.114)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r) satisfy, for τ ∈ [0, T − h], r ∈ [−h, 0] and s ∈ [−h, 0], − P˙ 1 (τ ) = AT P1 (τ ) + P1 (τ )A + Q − [P1 (τ )B + P2 (τ , 0)]R−1 ×[BT P1 (t) + P2T (τ , 0)],

(6.115)

216

6 LQ Optimal Controls

 − 

 ∂ ∂ + P2 (τ , s) = AT P2 (τ , s) − [P1 (τ )B + P2 (τ , 0)]R−1 ∂τ ∂s ×[BT P2 (τ , s) + P3 (τ , 0, s)],



(6.116)

∂ ∂ ∂ − + + P3 (τ , r, s) = −[BT P2 (τ , r) + P3 (τ , 0, r)]T R−1 ∂τ ∂r ∂s ×[BT P2 (τ , s) + P3 (τ , 0, s)]

(6.117)

with boundary conditions P1 (T − h) = W1 (T − h), P2 (T − h, s) = W2 (T − h, s), P3 (T − h, r, s) = W3 (T − h, r, s), P2 (τ , −h) = P1 (τ )B1 , P3 (τ , −h, s) = B1T P2 (τ , s).

(6.118) (6.119) (6.120) (6.121) (6.122)

Since the horizon length T is a design parameter, we consider the receding horizon LQ controls for 0 < T < h and for h ≤ T . Case I: 0 < T < h The receding horizon LQ control is given from (6.107) as   u(t) = u2∗ (t| t) = −[R + F2 ]−1 BT W1 (0)x(t) +

T −h

−h

 W2 (0, s)u(t + s)ds , (6.123)

where W1 (τ ), W2 (τ , s), and W3 (τ , r, s) satisfy the partial differential equations (6.108)–(6.110) with the boundary conditions (6.111)–(6.113) for τ ∈ [0, T ], r ∈ [−h, 0] and s ∈ [−h, 0]. Case II: h ≤ T The receding horizon LQ control is given from (6.114) as u(t) =

u1∗ (t| t)

 [BT P1 (0) + P2T (0, 0)]x(t) = −R −1

 +

0

 [B P2 (0, s) + P3 (0, 0, s)]u(t + s)ds , T

−h

(6.124) where P1 (τ ), P2 (τ , s), and P3 (τ , r, s) satisfy the partial differential equations (6.115)–(6.117) with the boundary conditions (6.118)–(6.122) for τ ∈ [0, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] and W1 (τ ), W2 (τ , s), and W3 (τ , r, s) satisfy the partial differential equations (6.108)–(6.110) with the boundary conditions (6.111)–(6.113) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0].

6.3 Receding Horizon LQ Controls for Input Delayed Systems

217

On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) with t replaced with t+ as the time proceeds. It is noted that the above partial differential equations have one-point boundary conditions on a finite interval. Therefore, the corresponding solutions can be easily computed. This is different from the infinite horizon LQ control. It is also noted from (6.48) that the optimal cost J (t, t + T , u∗ ) is given as follows. For T < h, J (t, t + T , u∗ ) = V2 (t)



= xT (t)W1 (0)x(t) + 2xT (t)  + +

T −h



−h  t

T −h

−h

t+T −h

T −h

−h

W2 (0, s)u∗ (t + s)ds

u∗T (t + r)W3 (0, r, s)u∗ (t + s)drds

u∗T (s)F2 u∗ (s)ds,

(6.125)

where W1 (0), W2 (0, s), and W3 (0, r, s) are obtained from (6.108)–(6.113). For h ≤ T , J (t, t + T , u∗ ) = V1 (t)



= xT (t)P1 (0)x(t) + 2xT (t)  +

0 −h



0 −h

0 −h

P2 (0, s)u∗ (t + s)ds

u∗T (t + r)P3 (0, r, s)u∗ (t + s)drds,

(6.126)

where P1 (0), P2 (0, s), and P3 (0, r, s) are obtained from (6.115)–(6.122). Our main concern is to show the cost monotonicity for stabilizing property. To do so, we investigate the time derivative of the receding horizon LQ cost J (t, t + T , u∗ ; x(t), ut ) (6.105) with (6.106) as follows.  d 1 ∗ J (t, t + T , u∗ ; x(t), ut ) = lim J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t) dt Δ→0 Δ  −J (t, t + T , u∗ ; x(t), ut ) , (6.127)

where J (t, t + T , u∗ ; x(t), ut ) denotes J (t, t + T , u∗ ; x(t), ut ) =



t+T



 x∗T (s| t)Qx∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds

t

+ x∗T (t + T | t)F1 x∗ (t + T | t) +



t+T

t+T −h

u∗T (s| t)F2 u∗ (s| t)ds

(6.128)

218

6 LQ Optimal Controls

with the initial state x∗ (t| t) = x(t) and the initial control u∗ (t + θ| t) = u(t + θ) for ∗ θ ∈ [−h, 0) and J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t ) denotes ∗ J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t)   t+T +Δ  1∗T 1∗ 1∗T 1∗ x (s| t)Qx (s| t) + u (s| t)Ru (s| t) ds = t+Δ

+ x1∗T (t + T + Δ| t + Δ)F1 x1∗ (t + T + Δ| t + Δ)  t+T +Δ u1∗T (s| t + Δ)F2 u1∗ (s| t + Δ)ds +

(6.129)

t+T +Δ−h

with the initial state x1∗ (t + Δ| t + Δ) = x∗ (t + Δ| t) the initial control u∗ (t + Δ + θ| t + Δ) = u∗ (t + Δ + θ| t) for θ ∈ [−h, 0). It is noted that the initial state x1∗ (t + Δ| t + Δ) and the initial control u∗ (t + Δ + θ| t + Δ) of the optimal cost (6.129) is given from the state trajectory x∗ (t + Δ| t) and the control sequence u∗ (t + Δ + θ| t) of the optimal cost (6.128), where θ ∈ [−h, 0). Let us now decompose the optimal cost (6.128) as follows. J (t, t + T , u∗ ; x(t), ut ) =

 t

t+Δ

  x∗T (s| t)Qx∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds

∗ +J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t ),

∗ where J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t ) denotes ∗ J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t)   t+T  x∗T (s| t)Qx∗ (s| t) + u∗T (s| t)Ru∗ (s| t) ds = t+Δ ∗T



+ x (t + T | t)F1 x (t + T | t) +



t+T t+T −h

u∗T (s| t)F2 u∗ (s| t)ds.

(6.130)

∗ Using this cost J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t ), we can rewrite the Eq. (6.127) as follows.

d J (t, t + T , u∗ ; x(t), ut ) dt   1 ∗ ∗ ∗ ∗ J (t + Δ, t + T , u ; x (t + Δ| t), ut+Δ| t ) − J (t, t + T , u ; x(t), ut ) = lim Δ→0 Δ  1 ∗ J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| + lim t) Δ→0 Δ  ∗ ∗ ∗ (6.131) − J (t + Δ, t + T , u ; x (t + Δ| t), ut+Δ| t ) .

6.3 Receding Horizon LQ Controls for Input Delayed Systems

219

Terminal Cost Monotonicity In the second part of (6.131), if the optimal control u1∗ in J (t + Δ, t + T + ∗ 1 Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t ) is replaced with a non-optimal control u , then due to the definition it obviously holds that ∗ 1 ∗ ∗ J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t ) ≤ J (t + Δ, t + T + Δ, u ; x (t + Δ| t), ut+Δ| t ),

∗ where J (t + Δ, t + T + Δ, u1 ; x∗ (t + Δ| t), ut+Δ| t ) denotes ∗ J (t + Δ, t + T + Δ, u1 ; x∗ (t + Δ| t), ut+Δ| t) =

t+T +Δ 



t+Δ



+x (t + T + Δ)F1 x (t + T + Δ) + 1T

 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) ds

t+T +Δ

1

t+T +Δ−h

u1T (s| t + Δ)F2 u1 (s| t + Δ)ds

with the initial state x1 (t + Δ) = x∗ (t + Δ| t) the initial control u1 (t + Δ + θ) = u∗ (t + Δ + θ| t) for θ ∈ [−h, 0). Let us choose u1 (α) = u∗ (α| t) for α ∈ [t + Δ, t + T ]; then x1 (α) = x∗ (α| t) for α ∈ [t + Δ, t + T ], which gives ∗ J (t + Δ, t + T + Δ, u1 ; x∗ (t + Δ| t), ut+Δ| t)  ∗ ∗ ∗ = J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t ) − VF (x (t + T | t), ut+T | t )

 +

t+T +Δ 

  x1T (s)Qx1 (s) + u1T (s)Ru1 (s) ds + x1T (t + T + Δ)F1 x1 (t + T + Δ)

t+T

 +

t+T

t+T −h+Δ

u∗T (s| t)F2 u∗ (s| t)ds +



t+T +Δ t+T

∗ = J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t) +

  + x1T (t + T + Δ)F1 x1 (t + T + Δ) +   − x∗T (t + T | t)F1 x∗ (t + T | t) +



 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) ds

t+T +Δ 

t+T t+T +Δ

t+T t+T −h+Δ t+T −h

 u1T (s)F2 u1 (s)ds

 u1T (s)F2 u1 (s)ds

 u∗T (s| t)F2 u∗ (s| t)ds .

Since it holds that x1 (t + T ) = x∗ (t + T | t) and   1 1 1 x (t + T + Δ) − x (t + T ) Ax1 (t + T ) + Bu1 (t + T ) + B1 u∗ (t + T − h| t), Δ

the above relations with a simple choice of u1 (t + T ) as −Hx∗ (t + T | t), i.e. u1 (t + T ) = −Hx∗ (t + T | t) provide  1 ∗ J (t + Δ, t + T + Δ, u1∗ ; x∗ (t + Δ| t), ut+Δ| t) Δ→0 Δ lim

220

6 LQ Optimal Controls ∗ − J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| t)



  t+T +Δ   1 1T 1 1T 1 x (s)Qx (s) + u (s)Ru (s) ds ≤ lim Δ→0 Δ t+T   1T 1 ∗T ∗ + x (t + T + Δ)F1 x (t + T + Δ) − x (t + T | t)F1 x (t + T | t)  +

t+T +Δ

t+T

 u (s)F2 u (s)ds − 1T

1

t+T −h+Δ

t+T −h

∗T





u (s| t)F2 u (s| t)ds

= x∗T (t + T | t)Qx∗ (t + T | t) + u1T (t + T )Ru1 (t + T )   ∗T ∗ 1 ∗ + 2x (t + T | t)F1 Ax (t + T | t) + Bu (t + T ) + B1 u (t + T − h| t) + u1T (t + T )F2 u1 (t + T ) − u∗T (t + T − h| t)F2 u∗ (t + T − h| t)   T  x∗ (t + T | t) x∗ (t + T | t) M ∗ = ∗ , u (t + T − h| t) u (t + T − h| t)

(6.132)

where M denotes   F1 (A − BH ) + (A − BH )T F1 + Q + H T [R + F2 ]H F1 B1 . M= B1T F1 −F2 Remark 6.6 In this subsection, we have proven that   ∂ 1 J (α, β, u∗ ; x(α), uα ) = lim J (α, β + Δ, u1∗ ; x(α), uα ) − J (α, β, u∗ ; x(α), uα ) ∂β Δ→0 Δ    T x(β) x(β) M . ≤ u(β − h) u(β − h)

The condition that M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the closed-loop system. Stability In the first part of (6.131), we can obtain that   1 ∗ ∗ J (t + Δ, t + T , u∗ ; x∗ (t + Δ| t), ut+Δ| ) − J (t, t + T , u ; x(t), u ) t t Δ→0 Δ   (6.133) = − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) . lim

From the relations (6.132) and (6.133) corresponding to the second and first parts in the Eq. (6.131), respectively, we can get the upper bound of the time derivative of J (t, t + T , u∗ ; x(t), ut ) in (6.131) as follows.

6.3 Receding Horizon LQ Controls for Input Delayed Systems

221

  d J (t, t + T , u∗ ; x(t), ut ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) dt + x∗T (t + T | t)Qx∗ (t + T | t) + u1T (t + T )Ru1 (t + T )   + 2x∗T (t + T | t)F1 Ax∗ (t + T | t) + Bu1 (t + T ) + B1 u∗ (t + T − h| t) + u1T (t + T )F2 u1 (t + T ) − u∗T (t + T − h| t)F2 u∗ (t + T − h| t)   = − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) 

x∗ (t + T | t) + ∗ u (t + T − h| t)

T

(6.134)



 x∗ (t + T | t) M ∗ . u (t + T − h| t)

(6.135)

Using this choice of u1 (t + T ) satisfying that M ≤ 0, a sufficient condition for the asymptotic stability of the closed-loop system both for 0 < T < h and for h ≤ T can be found in the following theorem. Theorem 6.5 If there exist matrices F1 > 0, F2 > 0 and H such that 

F1 (A − BH ) + (A − BH )T F1 + Q + H T [R + F2 ]H F1 B1 B1T F1 −F2

 ≤ 0,

(6.136)

then the receding horizon LQ controls (6.123) and (6.124) for the cost (6.105) with such F1 and F2 asymptotically stabilize the system (6.1). Proof If the condition (6.136) is satisfied, then it holds that J (t, t + T , u∗ ; x(t), ut ) ≥ 0,   d J (t, t + T , u∗ ; x(t), ut ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) ≤ 0. dt J (t, t + T , u∗ ; x(t), ut ) is V2 (t) for T < h and V1 (t) for T ≥ h from (6.125) and (6.126), respectively. Therefore, these with cost monotonicity conditions satisfy conditions of the Krasovskii theorem of Theorem 2.1. This completes the proof.  The quadratic matrix inequality (6.136) in Theorem 6.5 can be rewritten as linear matrix inequalities, which is shown in the following corollary. Corollary 6.4 If there exist matrices F¯ 1 > 0, F¯ 2 > 0 and H¯ such that ⎡

F¯ 1 AT − H¯ T BT + AF¯ 1 − BH¯ B1 F¯ 2 F¯ 1 H¯ T T ⎢ ¯ ¯ −F2 0 0 F2 B1 ⎢ −1 ⎢ ¯1 F 0 −Q 0 ⎢ ⎣ H¯ 0 0 −R−1 H¯ 0 0 0

⎤ H¯ T 0 ⎥ ⎥ 0 ⎥ ⎥ ≤ 0, 0 ⎦ −F¯ 2

(6.137)

222

6 LQ Optimal Controls

then the receding horizon LQ controls (6.123) and (6.124) for the cost (6.105) with F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (6.1). Proof For the inequality (6.92), perform a congruence transformation via a nonsingular matrix F1−1 ⊕ F2−1 to get 

(A − BH )F¯ 1 + F¯ 1 (A − BH )T + F¯ 1 QF¯ 1 + F¯ 1 H T [R + F2 ]H F¯ 1 B1 F¯ 2 −F¯ 2 F¯ 2 B1T

 ≤ 0.

Now define H¯ as H F¯ 1 and use the Schur complement to obtain (6.137). From Theorem 6.5, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  Let us rewrite the inequality (6.134) for the cost monotonicity as follows.   d J (t, t + T , u∗ ; x(t), ut ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) dt T    x∗ (t + T | t) x∗ (t + T | t) + η T [R + F2 ]η, + ∗ M ∗ u (t + T − h| t) u (t + T − h| t)

(6.138)

where M and η denote  M=

 Q + F1 A + AT F1 − F1 B[R + F2 ]−1 BT F1 F1 B1 , B1T F1 −F2

η = u1 (t + T ) + [R + F2 ]−1 BT F1 x∗ (t + T | t).

(6.139) (6.140)

An alternative choice of u1 (t + T ) as −[R + F2 ]−1 BT F1 x∗ (t + T | t), i.e. u1 (t + T ) = −[R + F2 ]−1 BT F1 x∗ (t + T | t), provides the following result. Theorem 6.6 If there exist matrices F1 > 0 and F2 > 0 such that 

Q + F1 A + AT F1 − F1 B[R + F2 ]−1 BT F1 F1 B1 B1T F1 −F2

 ≤ 0,

(6.141)

then the receding horizon LQ controls (6.123) and (6.124) for the cost (6.105) with such F1 and F2 asymptotically stabilize the system (6.1). Proof When H in (6.136) is replaced with [R + F2 ]−1 BT F1 , we obtain (6.141). If there exist F1 and F2 which satisfy (6.141), then an H exists and (6.136) is satisfied. From Theorem 6.5, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  The quadratic matrix inequality (6.141) in Theorem 6.6 can be rewritten as linear matrix inequalities, which is shown in the following corollary. Corollary 6.5 If there exist matrices F¯ 1 > 0 and F¯ 2 > 0 such that

6.3 Receding Horizon LQ Controls for Input Delayed Systems

223



⎤ F¯ 1 AT + AF¯ 1 − BF¯ 2 BT + B1 F¯ 2 B1T F¯ 1 BF¯ 2 ⎣ ⎦ ≤ 0, −Q−1 0 F¯ 1 T −1 ¯ ¯ F2 B 0 −R − F2

(6.142)

then the receding horizon LQ controls (6.123) and (6.124) for the cost (6.105) with F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (6.1). Proof For the inequality (6.141), perform a congruence transformation via a nonsingular matrix F1−1 ⊕ F2−1 and use the so-called matrix inversion lemma such that [R + F2 ]−1 = F¯ 2 − F¯ 2 [R−1 + F¯ 2 ]−1 F¯ 2 so as to obtain   F¯ 1 QF¯ 1 + AF¯ 1 + F¯ 1 AT − B{F¯ 2 − F¯ 2 [R−1 + F¯ 2 ]−1 F¯ 2 }BT B1 F¯ 2 ≤ 0. −F¯ 2 F¯ 2 B1T Now use the Schur complement to get (6.142). From Theorem 6.6, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  We note again that receding horizon LQ controls may be more preferable to infinite horizon LQ controls since the computation is feasible and the asymptotic stability is guaranteed under some assumptions. The stability properties in Theorems 6.6 and 6.7 and Corollaries 6.4 and 6.5 of receding horizon LQ controls with standard costs can be found partially in [23] and a similar restrictive result can be found in [18].

6.4 Fixed Horizon LQ Controls for State Delayed Systems Consider a state delayed system given by x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t)

(6.143)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. The cost to be minimized is   tf  T T x (s)Qx(s) + u (s)Ru(s) ds + VF (xtf ), J (t0 , tf , u) = (6.144) t0

with Q ≥ 0 and R > 0, where the terminal weighting functional VF (xtf ) ∈ Cn,2h → R can have various forms depending on problems. Let us focus on  VF (xtf ) = xT (tf )F1 x(tf ) +

tf tf −h

 xT (s)F2 x(s)ds +

0 −h



tf tf +s



T x˙ (r) − Ax(r)

224

6 LQ Optimal Controls

 ×F3

  x˙ (r) − Ax(r) drds +

0



−h

tf

tf +s

xT (r)F4 x(r)drds

(6.145)

with Fi ≥ 0. Associated with this VF (xtf ), let us consider the following finite horizon LQ optimal control problems.

6.4.1 Fixed Horizon LQ Control for Simple Costs A simple cost to be minimized is 

tf

J (t0 , tf , u) =

uT (s)Ru(s)ds + xT (tf )F1 x(tf )

(6.146)

t0

with R > 0 and F1 ≥ 0, which is identical to the cost (6.144) with Q = 0, F2 = 0, F3 = 0 and F4 = 0. Let us introduce a transformation which reduces the state delayed system into an ordinary system. Let us transfer the state x(t) in (6.143) into ξ(t) such as 



ξ(t) = Φ(tf − t)x(t) +

min(t,tf −h)

Φ(tf − h − s)A1 x(s)ds,

(6.147)

t−h

where Φ(τ ) satisfies the matrix differential-difference equation d Φ(τ ) = Φ(τ )A + Φ(τ − h)A1 dτ

(6.148)

with the boundary conditions that Φ(0) = I and Φ(τ ) = 0 for all τ < 0. Then, the dynamics of ξ(t) is as follows. For t ∈ (tf − h, tf )   tf −h d d Φ(tf − h − s)A1 x(s)ds Φ(tf − t) x(t) + Φ(tf − t)˙x(t) + dt dt t−h   = −Φ(tf − t)Ax(t) + Φ(tf − t) Ax(t) + A1 x(t − h) + Bu(t) − Φ(tf − t)A1 x(t − h)

d ξ(t) = dt



and for t ∈ (t0 , tf − h)    t d d d Φ(tf − h − s)A1 x(s)ds ξ(t) = Φ(tf − t) x(t) + Φ(tf − t)˙x(t) + dt dt dt t−h     = − Φ(tf − t)A − Φ(tf − t − h)A1 x(t) + Φ(tf − t) Ax(t) + A1 x(t − h) + Bu(t) + Φ(tf − h − t)A1 x(t) − Φ(tf − t)A1 x(t − h),

6.4 Fixed Horizon LQ Controls for State Delayed Systems

225

which produces an ordinary time-varying system without delay given by ˙ = Φ(tf − t)Bu(t), ξ(tf ) = x(tf ). ξ(t)

(6.149)

Therefore, an optimization problem with the cost (6.144) with Q = 0, F2 = 0, F3 = 0 and F4 = 0 for a time-delay system (6.143) is transformed to an optimization problem with the cost (6.146) for a delay-free system (6.149). Theorem 6.7 For the state delayed system (6.143), the finite horizon LQ control u∗ (t) for the cost (6.146) is given as follows. For t ∈ (t0 , tf )  u (t) = −R B Φ (tf − t)P(t) Φ(tf − t)x(t) ∗

−1 T



T

min(t,tf −h)

+

 Φ(tf − s − h)A1 x(s)ds ,

(6.150)

t−h

where P(t) = F1 [I + W (tf − t)F1 ]−1 ,  tf  W (tf − t) = Φ(tf − s)BR−1 BT Φ T (tf − s)ds.

(6.151) (6.152)

t

Proof To derive the optimal control for the cost (6.146), we use Lemma 6.1 for finite horizon LQ controls. Choose V (t) = ξ T (t)P(t)ξ(t) with the terminal-time condition V (tf ) = ξ T (tf )F1 ξ(tf ), i.e. P(tf ) = F1 . Consider ˙ + uT (t)Ru(t). V˙ (t) + uT (t)Ru(t) = 2ξ T (t)P(t)Φ(tf − t)Bu(t) + ξ T (t)P(t)ξ(t) By Lemma 6.1, the following partial derivative condition with respect to u(t) such that   ∂ 0= V˙ (t) + uT (t)Ru(t) = 2BT Φ T (tf − t)P(t)ξ(t) + 2Ru(t) ∂u(t) provides the optimal control as follows. u∗ (t) = −R−1 BT Φ T (tf − t)P(t)ξ(t), where P(t) is determined to yield   0 = V˙ (t) + uT (t)Ru(t)

u=u∗

  ˙ − P(t)Φ(tf − t)BR−1 BT Φ T (tf − t)P(t) ξ(t) = ξ T (t) P(t)

and thus P(t) = P T (t) is the solution to the following Riccati differential equation:

226

6 LQ Optimal Controls

˙ P(t) = P(t)Φ(tf − t)BR−1 BT Φ T (tf − t)P(t),

(6.153)

with boundary condition P(tf ) = F1 , which gives the control (6.150). Dividing both sides of (6.153) by P(t) leads to d −1 P (t) = −Φ(tf − t)BR−1 BT Φ T (tf − t), dt −1 ˙ (t). Integrating the above from t by the relation that dP −1 (t)/dt = −P −1 (t)P(t)P to tf yields

P −1 (tf ) − P −1 (t) = −



tf

Φ(tf − s)BR−1 BT Φ T (tf − s)ds = −W (tf − t),

t



which completes the proof.

The finite horizon LQ control (6.150) is simple for the state delayed system (6.143). The infinite horizon LQ control for the cost (6.146) with tf = ∞ does not exist; but the receding horizon LQ control might exist, which is explained in the sequent subsection. Finite horizon LQ controls with a simple cost for state delayed systems in Theorem 6.7 can be found similarly in [12].

6.4.2 Fixed Horizon LQ Control for Costs with Single Integral Terms Consider the finite horizon LQ cost (6.144) with the terminal weighting functional VF (xtf ) in (6.145) yielding F3 = 0 and F4 = 0 as follows.  VF (xtf ) = x (tf )F1 x(tf ) + T

tf

tf −h

xT (s)F2 x(s)ds

(6.154)

with F1 ≥ 0 and F2 ≥ 0. Two continuous functionals V1 (t) and V2 (t) are chosen for each horizon, i.e. t ∈ [t0 , tf − h] and t ∈ (tf − h, tf ]. The optimal controls u1∗ (t) and u2∗ (t) corresponding to each functional can be constructed in the following theorem. Theorem 6.8 For the state delayed system (6.143), the finite horizon LQ control u∗ (t) for the cost (6.144) with the terminal weighting functional (6.154) is chosen as follows. For t ∈ (tf − h, tf ], the optimal control is given as u2∗ (t)

−1 T

= −R B



 W1 (t)x(t) +

tf −t−h −h

 W2 (t, s)x(t + s)ds ,

(6.155)

6.4 Fixed Horizon LQ Controls for State Delayed Systems

227

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy − W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q + F2 −W1 (t)BR−1 BT W1 (t),

(6.156) ∂ ∂ − + W2 (t, s) = AT W2 (t, s) − W1T (t)BR−1 BT W2 (t, s), ∂t ∂s (6.157)   ∂ ∂ ∂ − W3 (t, r, s) = −W2T (t, r)BR−1 BT W2 (t, s) (6.158) + + ∂t ∂r ∂s 



with boundary conditions W1 (tf ) = F1 ,

(6.159)

W2 (t, −h) = W1 (t)A1 , W3 (t, −h, s) = AT1 W2 (t, s).

(6.160) (6.161)

For t ∈ [t0 , tf − h], the optimal control is given as u1∗ (t)

  = −R B P1 (t)x(t) + −1 T

0

−h

 P2 (t, s)x(t + s)ds ,

(6.162)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) satisfy − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q − P1 (t)BR−1 BT P1 (t) 



(6.163) +P2 (t, 0) + P2T (t, 0),  ∂ ∂ + P2 (t, s) = AT P2 (t, s) + P3 (t, 0, s) − P1 (t)BR−1 BT P2 (t, s), − ∂t ∂s

 ∂ ∂ ∂ − + + P3 (t, r, s) = −P2T (t, r)BR−1 BT P2 (t, s) ∂t ∂r ∂s

(6.164) (6.165)

with boundary conditions P1 (tf − h) = W1 (tf − h),

(6.166)

P2 (tf − h, s) = W2 (tf − h, s), P3 (tf − h, r, s) = W3 (tf − h, r, s), P2 (t, −h) = P1 (t)A1 ,

(6.167) (6.168) (6.169)

P3 (t, −h, s) = AT1 P2 (t, s). Proof For t ∈ (tf − h, tf ], let us define

(6.170)

228

6 LQ Optimal Controls

 V2 (t) = x (t)W1 (t)x(t) +

tf −t−h

T

 +2xT (t)

tf −t−h −h



−h

tf −t−h −h

xT (t + r)W3 (t, r, s)x(t + s)drds 

W2 (t, s)x(t + s)ds +

t tf −h

xT (s)F2 x(s)ds.

(6.171)

After some manipulation, we obtain d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt 

 = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2 x(t) + uT (t)Ru(t)   + 2x(t − h) A1 T W1 (t) − W2T (t, −h) x(t)     ∂ ∂ AT W2 (t, s) + − W2 (t, s) x(t + s)ds ∂t ∂s −h   tf −t−h  T T + 2x (t − h) A1 W2 (t, s) − W3 (t, −h, s) x(t + s)ds 

+ 2xT (t)

tf −t−h

−h tf −t−h

  ∂ ∂ ∂ − − W3 (t, r, s) x(t + s)drds ∂t ∂r ∂s −h −h    tf −t−h + 2uT (t)BT W1 (t)x(t) + W2 (t, s)x(t + s)ds . 

+

tf −t−h





xT (t + r)

−h

From the gradient condition (6.23) in Lemma 6.1 such that 0=

  d ∂ V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) , ∂u(t) dt

we can obtain the optimal control (6.155). Therefore, we have  0=

d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt

 u=u2∗

,

which results in the coupled partial differential Riccati equations (6.156)–(6.158) with the boundary conditions (6.160) and (6.161). For t ∈ [t0 , tf − h], let us now define  V1 (t) = xT (t)P1 (t)x(t) + 2xT (t)  +

0 −h

With some efforts, we obtain



0

−h

0 −h

P2 (t, s)x(t + s)ds

xT (t + r)P3 (t, r, s)x(t + s)drds.

(6.172)

6.4 Fixed Horizon LQ Controls for State Delayed Systems

229

d V1 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt 

 T T ˙ = x (t) P1 (t) + A P1 (t) + P1 (t)A + Q + P2 (t, 0) + P2 (t, 0) x(t)   T T T + 2x (t − h) A1 P1 (t) − P2 (t, −h) x(t)    0  ∂ ∂ T T − − P2 (t, s) + A P2 (t, s) + 2P3 (t, 0, s) x(t + s)ds + 2x (t) ∂s ∂t −h   0 A1 T P2 (t, s) − P3 (t, −h, s) x(t + s)ds + 2xT (t − h) T

−h

  ∂ ∂ ∂ P3 (t, r, s) x(t + s)drds − − ∂t ∂r ∂s −h −h    0 P2 (t, s)x(t + s)ds + uT (t)Ru(t). + 2uT (t)BT P1 (t)x(t) + 

+

0



0



xT (t + r)

−h

From the gradient condition (6.23) in Lemma 6.1 such that 0=

  d V1 (t, xt ) ∂ + xT (t)Qx(t) + uT (t)Ru(t) , ∂u(t) dt

we obtain the optimal control u1∗ (·) as in (6.162) and in turn the following relation  0=

d V1 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt

 u=u1∗

,

which results in the coupled partial differential Riccati equations (6.163)–(6.165) with the boundary conditions (6.169) and (6.170). Because V1 (t) and V2 (t) are the optimal costs for t ∈ [t0 , tf − h] and t ∈ (tf − h, tf ], respectively, it is clear that V1 (tf − h) = V2 (tf − h) from the continuity conditions at the time t = tf − h, which are satisfied if and only if the conditions (6.166)–(6.168) hold. All that remains is to satisfy V2 (tf ) = VF (xtf ) by Lemma 6.1, which results in the condition (6.159). This completes the proof.  Using the continuity conditions (6.159) and (6.166)–(6.168), we can say that the optimal cost J (t, tf , u∗ ) t∈[t0 ,tf −h] in (6.144) is equal to the continuous functional V1 (t) and V2 (t) for t ∈ [t0 , tf − h] and t ∈ (tf − h, tf ] as follows. ∗

J (t, tf , u ) =



V1 (t) for t ∈ [t0 , tf − h], V2 (t) for t ∈ (tf − h, tf ].

(6.173)

Finite horizon LQ controls for costs with single integral terminal terms for state delayed systems in Theorem 6.8 can be found similarly in [16]. Infinite horizon LQ controls for costs with single integral terms will be discussed in the next subsection.

230

6 LQ Optimal Controls

6.4.3 *Fixed Horizon LQ Control for Costs with Double Integral Terms As concerned about the receding horizon LQ control for the state delayed system (6.143), which will be discussed in the following section, the stability condition of the receding horizon LQ control induced from the finite horizon LQ control for the cost with F1 ≥ 0, F2 ≥ 0, F3 = 0 and F4 = 0 in (6.145) is delay independent, which is very conservative. Therefore, we introduce the finite horizon LQ control for the cost with F1 ≥ 0, F2 ≥ 0, F3 ≥ 0 and F4 ≥ 0 in (6.145), which is used to produce the delay-dependent stability condition of the receding horizon LQ control. By including the double integral terms associated with F3 and F4 in the terminal weighting functional (6.145), we should choose three continuous functionals, V1 (t), V2 (t), and V3 (t) for the intervals t ∈ [t0 , tf − 2h], t ∈ (tf − 2h, tf − h] and t ∈ (tf − h, tf ], respectively. Using V1 (t), V2 (t), and V3 (t), the finite horizon LQ optimal controls u1∗ (·), u2∗ (·), and u3∗ (·) are obtained for each horizon in the following theorem. ¯ Theorem 6.9 Define f (s) and R(s) as follows.   ¯ f (s) = s − tf + h, R(s) = R + f (s)BT F3 B.

For the state delayed system (6.143)), the finite horizon LQ control u∗ (t) for the cost (6.144) with the terminal weighting functional (6.145) is given as follows. For t ∈ (tf − h, tf ], the optimal control is given as   u3∗ (t) = −R¯ −1 (t)BT W1 (t)x(t) +

tf −t−h −h

 W2 (t, s)x(t + s)ds + f (t)F3 A1 x(t − h) ,

(6.174) where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy − W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q + F2 + f (t)F4 −W1 (t)BR¯ −1 (t)BT W1 (t),   ∂ ∂ − + W2 (t, s) = AT W2 (t, s) − W1 (t)BR¯ −1 (t)BT W2 (t, s), ∂t ∂s   ∂ ∂ ∂ − + + W3 (t, r, s) = −W2T (t, r)BR¯ −1 (t)BT W2 (t, s) ∂t ∂r ∂s

(6.175) (6.176) (6.177)

with boundary conditions W1 (tf ) = F1 ,

W2 (t, −h) = W1 (t)[I − f (t)BR¯ −1 (t)BT F3 ]A1 , W3 (t, −h, s) = AT1 [I − f (t)F3 BR¯ −1 (t)BT ]W2 (t, s).

(6.178) (6.179) (6.180)

6.4 Fixed Horizon LQ Controls for State Delayed Systems

231

For t ∈ (tf − 2h, tf − h], the optimal control is given as   u2∗ (t) = −R−1 BT S1 (t)x(t) +

0

−h

 S2 (t, s)x(t + s)ds ,

(6.181)

where S1 (t) = S1T (t), S2 (t, s), and S3 (t, r, s) = S3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy − S˙ 1 (t) = AT S1 (t) + S1 (t)A − S1 (t)BR−1 BT S1 (t) + Q −f 2 (t + h)AT1 F3 BR−1 (t + h)BT F3 A1 +S2 (t, 0) + S2T (t, 0) + f (t + h)AT1 F3 A1 , 



 ∂ ∂ − S2 (t, s) = AT S2 (t, s) + S3 (t, 0, s) − S1 (t)BR−1 BT S2 (t, s), + ∂t ∂s

 ∂ ∂ ∂ − S3 (t, r, s) = −S2T (t, r)BR−1 BT S2 (t, s) + + ∂t ∂r ∂s

(6.182)

(6.183)

(6.184) with boundary conditions S1 (tf − h) = W1 (tf − h), S2 (tf − h, s) = W2 (tf − h, s),

(6.185) (6.186)

S3 (tf − h, r, s) = W3 (tf − h, r, s), S2 (t, −h) = S1 (t)A1 ,

(6.187) (6.188)

S3 (t, −h, s) = AT1 S2 (t, s).

(6.189)

For t ∈ [t0 , tf − 2h], the optimal control is given as   u1∗ (t) = −R−1 BT P1 (t)x(t) +

0

−h

 P2 (t, s)x(t + s)ds ,

(6.190)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q − P1 (t)BR−1 BT P1 (t) +P2 (t, 0) + P2T (t, 0), 



 ∂ ∂ − P2 (t, s) = AT P2 (t, s) + P3 (t, 0, s) − P1 (t)BR−1 BT P2 (t, s), + ∂t ∂s

 ∂ ∂ ∂ − + + P3 (t, r, s) = −P2T (t, r)BR−1 BT P2 (t, s) ∂t ∂r ∂s

(6.191)

(6.192)

(6.193)

232

6 LQ Optimal Controls

with boundary conditions P1 (tf − 2h) = S1 (tf − 2h), P2 (tf − 2h, s) = S2 (tf − 2h, s),

(6.194) (6.195)

P3 (tf − 2h, r, s) = S3 (tf − 2h, r, s), P2 (t, −h) = P1 (t)A1 ,

(6.196) (6.197)

P3 (t, −h, s) = AT1 P2 (t, s).

(6.198)

Proof For t ∈ (tf − h, tf ], let us choose  V3 (t) = xT (t)W1 (t)x(t) + 2xT (t)  +

tf −t−h  tf −t−h −h

−h

tf −t−h

−h

W2 (t, s)x(t + s)ds

xT (t + r)W3 (t, r, s)x(t + s)drds +



t tf −h

   xT (s) F2 + f (s)F4 x(s)

  T    tf −h + f (s) x˙ (s) − Ax(s) F3 x˙ (s) − Ax(s) ds + xT (s) f (s + h)AT1 F3 A1 t−h  − f 2 (s + h)AT1 F3 BR¯ −1 (s + h)BT F3 A1 x(s)ds,

(6.199)

where, for the condition (2.4) of the continuous functional (6.199), we need to have f (s + h)F3 − f 2 (s + h)F3 BR¯ −1 (s + h)BT F3 ≥ 0, which can be verified since for s ∈ (tf − 2h, tf − h] 

     f (s + h)F3 B f (s + h)F3 I 0 f (s + h)F3 0 I B = ≥ 0. f (s + h)BT F3 R + f (s + h)BT F3 B 0 R BT I 0I

Consider d V3 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt   = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2 + f (t)F4 x(t)   + 2x(t − h) A1 T W1 (t) − W2T (t, −h) x(t)   ∂ ∂ − W2 (t, s) + AT W2 (t, s) x(t + s)ds ∂s ∂t −h   tf −t−h  + 2xT (t − h) A1 T W2 (t, s) − W3 (t, −h, s) x(t + s)ds 

+ 2xT (t)

 +

tf −t−h 

−h tf −t−h  tf −t−h

−h

−h





   ∂ ∂ ∂ − − W3 (t, r, s) x(t + s)drds xT (t + r) ∂t ∂r ∂s

6.4 Fixed Horizon LQ Controls for State Delayed Systems

233

 W2 (t, s)x(t + s)ds + f (t)F3 A1 x(t − h) −h   T T ¯ + u (t)R(t)u(t) + x (t − h) f 2 (t)AT1 F3 BR¯ −1 (t)BT F3 A1 x(t − h).

  + 2uT (t)BT W1 (t)x(t) +

tf −t−h

Now the optimal control u3∗ (·) can be chosen as follows. From the gradient condition (6.23) in Lemma 6.1 such that   d V3 (t) ∂ T T + x (t)Qx(t) + u (t)Ru(t) , 0= ∂u(t) dt we can construct the optimal control (6.174). Using this optimal control, we have  0=

d V3 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt

 u(t)=u3∗ (t)

,

which results in the coupled differential equations (6.175)–(6.177) with the boundary conditions (6.179) and (6.180). For t ∈ (tf − 2h, tf − h], let us choose  V2 (t) = xT (t)S1 (t)x(t) + 2xT (t)  +

0



0

−h −h

0 −h

S2 (t, s)x(t + s)ds 

xT (t + r)S3 (t, r, s)x(t + s)drds +

 −f 2 (s + h)AT1 F3 BR¯ −1 (s + h)BT F3 A1 x(s)ds.

t tf −2h

 xT (s) f (s + h)AT1 F3 A1

(6.200)

Then we obtain d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt 

 = xT (t) S˙ 1 (t) + AT S1 (t) + S1 (t)A + Q + S2 (t, 0) + S2T (t, 0) x(t)   + 2xT (t − h) AT1 S1 (t) − S2T (t, −h) x(t)    0  ∂ ∂ T T − − S2 (t, s) + A S2 (t, s) + 2S3 (t, 0, s) x(t + s)ds + 2x (t) ∂s ∂t −h    0 T T A1 S2 (t, s) − S3 (t, −h, s) x(t + s)ds + 2x (t − h) −h

  ∂ ∂ ∂ − − S3 (t, r, s) x(t + s)drds ∂t ∂r ∂s −h −h    0 S2 (t, s)x(t + s)ds + uT (t)Ru(t) + 2uT (t)BT S1 (t)x(t) + 

+

0



0



xT (t + r)

−h

234

6 LQ Optimal Controls

  + xT (t) f (t + h)AT1 F3 A1 − f 2 (t + h)AT1 F3 BR¯ −1 (t + h)BT F3 A1 x(t). From the gradient condition (6.23) in Lemma 6.1 such that   d ∂ T T 0= V2 (t) + x (t)Qx(t) + u (t)Ru(t) , ∂u(t) dt we obtain the optimal control (6.181). Using this control, we check out  0=

d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) dt

 u=u2∗ ,

which results in the coupled partial differential Riccati equations (6.182)–(6.184) with the boundary conditions (6.188) and (6.189). For t ∈ [t0 , tf − 2h], let us choose  V1 (t) = xT (t)P1 (t)x(t) + 2xT (t)  +

0 −h



0

−h

0 −h

P2 (t, s)x(t + s)ds

xT (t + r)P3 (t, r, s)x(t + s)drds.

(6.201)

Using the same manner as in the proof of Theorem 6.8, we can obtain the optimal control (6.190) and the coupled partial differential Riccati equations (6.191)–(6.193) with the boundary conditions (6.197) and (6.198). Because V1 (t), V2 (t), and V3 (t) are the optimal costs for t0 ≤ t ≤ tf − 2h, tf − 2h < t ≤ tf − h, and tf − h < t ≤ tf , respectively, it is clear that V1 (tf − 2h) = V2 (tf − 2h) and V2 (tf − h) = V3 (tf − h), which are expressed as (6.194)–(6.196) and (6.185)–(6.187). Furthermore, the terminal-time continuity condition is that  V3 (tf ) = VF (xtf ), which is expressed as in (6.178). This completes the proof. Using the continuity conditions (6.178), (6.185)–(6.187), and (6.194)–(6.196), we can say that the optimal cost J (t, tf , u∗ ) t∈[t0 ,tf −h] in (6.144) is equal to the continuous functional V1 (t), V2 (t), and V3 (t) for t ∈ [t0 , tf − 2h], t ∈ (t0 − 2h, tf − h] and t ∈ (tf − h, tf ] as follows. ⎧ ⎨ V1 (t) for t ∈ [t0 , tf − 2h], J (t, tf , u∗ ) = V2 (t) for t ∈ (tf − 2h, tf − h], ⎩ V3 (t) for t ∈ (tf − h, tf ].

(6.202)

Finite horizon LQ controls for costs with double integral terminal terms for state delayed systems in Theorem 6.9 can be found similarly in [22] and a similar restrictive result in [20].

6.4 Fixed Horizon LQ Controls for State Delayed Systems

235

Infinite Horizon LQ Control If P1 (t) = P1T (t), P2 (t, s) and P(t, r, s) = P3T (t, s, r) of the coupled partial differential Riccati equations (6.163)–(6.165) for t ∈ (t0 , tf − h) in Theorem 6.8 or (6.191)– (6.193) for t ∈ (t0 , tf − 2h) in Theorem 6.9 go to the stationary quantities P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) as tf goes to infinite, then they satisfy for r ∈ [−h, 0] and s ∈ [−h, 0]



0 = AT P1 + P1 A − P1 BR−1 BT P1 + Q + P2 (0) + P2T (0), ˙P2 (s) = AT P2 (s) + P3 (0, s) − P1 BR−1 BT P2 (s),

 ∂ ∂ + P3 (r, s) = −P2T (r)BR−1 BT P2 (s) ∂r ∂s

(6.203) (6.204) (6.205)

with boundary conditions P2 (−h) = P1 A1 , P3 (−h, s) = AT1 P2 (s).

(6.206) (6.207)

In this case, from the finite horizon LQ control (6.162) for the cost with F3 = 0 and F4 = 0 in (6.145) in Theorem 6.8 and from the finite horizon LQ control (6.190) for the cost with F3 > 0 and F4 > 0 in (6.145) in Theorem 6.9, we can get the same infinite horizon LQ control as follows.   u (t) = −R B P1 x(t) + ∗

0

−1 T

−h

 P2 (s)x(t + s)ds ,

(6.208)

In this case, the corresponding continuous functional is  V (t) = x (t)P1 x(t) + 2x (t) T

 +

0

T

0 −h



0 −h

−h

P2 (s)x(t + s)ds

xT (t + r)P3 (r, s)x(t + s)drds.

(6.209)

It was proposed in [19] that for Q > 0 the infinite horizon LQ optimal control (6.208) asymptotically stabilizes the state delayed system (6.143) if the system is stabilizable [5, 19], where the system is said to be stabilizable if there exists a linear control with a form of  0 K1 (s)x(t + s)ds (6.210) u(t) = Ko x(t) + −h

that stabilizes the system. However, differently from the solution of the algebraic Riccati equation (6.17) for the ordinary system, in which there exist a finite number of solutions of the algebraic Riccati equation (6.17) and thus we can easily find the unique positive definite solution P that provides the stabilizing control, the number of

236

6 LQ Optimal Controls

solutions of the coupled Riccati equations (6.203)–(6.205) is infinite and there seems to be no research on finding the solutions P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) that produce such a stabilizing control. To check out whether the optimal control asymptotically stabilizes the system (6.143), the continuous functional (6.209) should be examined to satisfy the conditions (2.4) and (2.5) in Theorem 2.1, where the time-derivative negativity (2.5) can be easily verified from the property of the optimal control such that 

d V (t) dt

 u(t)=u∗ (t)

= −xT (t)Qx(t) − u∗T (t)Ru∗ (t).

Even though the stability of state delayed systems with infinite horizon LQ controls can be addressed in [19, 24–27, 29], however, to the best of our knowledge the properties of P1 , P2 (s) and P3 (r, s) to satisfy the condition (2.4) of the continuous functional (6.209) have not been explicitly explained in the literature, and thus the stability analysis has not been explicitly explained. While infinite horizon LQ control systems have problems in stability proof and computation, we introduce receding horizon LQ controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

6.5 Receding Horizon LQ Controls for State Delayed Systems In this section, we assume that the state delayed system (6.143) is stabilizable [19]. For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. Consider a receding horizon cost at the current time t given by J (t, t + T , u; xt ) =

  t+T  xT (s| t)Qx(s| t) + uT (s| t)Ru(s| t) ds + VF (xt+T | t ), t

(6.211) with Q > 0 and R > 0, where the terminal weighting functional VF (xt+T | t ) has a form in (6.145) and T is the horizon length. This cost can be obtained by choosing t0 = t and tf = t + T in the finite horizon LQ cost (6.144). Consider the following optimization problem: find {u(s| t), s ∈ [t, t + T ]} to minimize J (t, t + T , u) subject to x˙ (s| t) = Ax(s| t) + A1 x(s − h| t) + Bu(s| t), s ∈ [t, t + T ].

(6.212)

The optimal control and the corresponding optimal cost are denoted by u∗ (s| t), s ∈ [t, t + T ] and J (t, t + T , u∗ ), respectively. The receding horizon control is then

6.5 Receding Horizon LQ Controls for State Delayed Systems

237

given by u∗ (t| t). The choice of the matrices Fi of the terminal weighting functional VF (xt+T ) in (6.145) is very important in proving the closed-loop stability of the receding horizon LQ control for state delayed systems. The receding horizon length T is a design parameter. T could be larger than, equal to, or smaller than h depending on situations.

6.5.1 Receding Horizon LQ Control for Simple Costs

We first consider a simple cost given by

J(t, t+T, u; x_t) = ∫_t^{t+T} u^T(s|t) R u(s|t) ds + x^T(t+T|t) F_1 x(t+T|t)   (6.213)

with R > 0 and F_1 > 0, where the initial state x_{t|t} is given as x_t. This cost is equal to the cost (6.211) with Q = 0, F_2 = 0, F_3 = 0, and F_4 = 0. The optimal control u^*(s|t) for s ∈ [t, t+T] is

u^*(s|t) = -R^{-1} B^T Φ^T(t+T-s) P(s) [ Φ(t+T-s) x(s|t) + ∫_{s-h}^{min(s, t+T-h)} Φ(t+T-r-h) A_1 x(r|t) dr ],   (6.214)

where Φ(τ) is given in (6.148) and P(s) denotes

P(s) = F_1 [ I + W(t+T-s) F_1 ]^{-1},   (6.215)
W(t+T-s) = ∫_s^{t+T} Φ(t+T-r) B R^{-1} B^T Φ^T(t+T-r) dr.   (6.216)

Then, the receding horizon LQ control is

u(t) = u^*(t|t) = -R^{-1} B^T Φ^T(T) F_1 [ I + W(T) F_1 ]^{-1} [ Φ(T) x(t) + ∫_{t-h}^{min(t, t+T-h)} Φ(t+T-s-h) A_1 x(s) ds ].   (6.217)

On the next interval [t_+, t_+ + T] with t_+ > t, the above procedure is repeated to obtain the control u(t_+), with t replaced by t_+ as time proceeds. This is perhaps the simplest receding horizon LQ control. However, its stability is currently unknown. It is believed that the control (6.217) can stabilize a broad class of time-delay systems, considering that the corresponding control for ordinary systems always stabilizes them for sufficiently large F_1.

The receding horizon control (6.217) is always well defined. The non-singularity of [I + W(T) F_1] can be seen from I + W(T) F_1 = [F_1^{-1} + W(T)] F_1; since F_1 > 0 and F_1^{-1} + W(T) > 0, the product [F_1^{-1} + W(T)] F_1 is nonsingular. The values of the state transition matrix Φ(T) can be obtained by solving (6.148) numerically. However, since the horizon length T is a design parameter in the receding horizon LQ control, we can choose T ≤ h, in which case Φ(T) = e^{AT}, which is simple to compute.

Remark 6.7 The optimal control that minimizes the cost (6.144) with Q = 0 and the terminal weighting (6.145) with F_2 = 0, F_3 = 0, and F_4 = 0, subject to a fixed terminal state constraint x(t_f) = 0, can be obtained by letting F_1 = ∞·I in (6.213). In this case, the receding horizon control is given by

u(t) = u^*(t|t) = -R^{-1} B^T Φ^T(T) W^{-1}(T) [ Φ(T) x(t) + ∫_{t-h}^{min(t, t+T-h)} Φ(t+T-s-h) A_1 x(s) ds ].   (6.218)

The control (6.218) is simpler than (6.217). Note that the receding horizon LQ control (6.218) is defined only when the controllability matrix W(T) in (6.152) is nonsingular, which is equivalent to the controllability of the time-delay system (6.143).

Stability for Short Horizon T ≤ h

Let us assume that the horizon T is shorter than or equal to the delay h. In this case, for τ ∈ [0, T] the solution Φ(τ) of the matrix differential-difference equation (6.148) reduces to Φ(τ) = e^{Aτ}, which provides

u(t) = u^*(t|t) = -R^{-1} B^T Φ^T(T) F_1 [ I + W(T) F_1 ]^{-1} [ Φ(T) x(t) + ∫_{t-h}^{t+T-h} Φ(t+T-s-h) A_1 x(s) ds ]   (6.219)

and thus the resulting closed-loop system is

ẋ(t) = A x(t) + A_1 x(t-h) + B u^*(t|t)
     = Â x(t) + A_1 x(t-h) + B̂ ∫_{t-h}^{t+T-h} Φ(t+T-s-h) A_1 x(s) ds,   (6.220)

where

Â ≜ A - B R^{-1} B^T Φ^T(T) F_1 [ I + W(T) F_1 ]^{-1} Φ(T),   (6.221)
B̂ ≜ -B R^{-1} B^T Φ^T(T) F_1 [ I + W(T) F_1 ]^{-1}.   (6.222)

We conjecture that the receding horizon control (6.219) stabilizes the system (6.143) in many different cases. Instead of relying on the cost monotonicity of the receding horizon LQ cost (6.213), one must investigate directly whether the receding horizon control (6.219) asymptotically stabilizes the closed-loop system (6.220). Stability properties of receding horizon LQ controls with a simple cost similar to those in this subsection can be found in [12].
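To make the computation concrete for T ≤ h, the following minimal sketch (hypothetical system matrices, SciPy assumed) evaluates Φ(T) = e^{AT}, approximates W(T) in (6.216) by the trapezoidal rule, and forms the two gains of the control (6.219); it is an illustration only, not code from the literature.

import numpy as np
from scipy.linalg import expm

# Hypothetical system and design data (T <= h); for illustration only.
A  = np.array([[0.0, 1.0], [-1.0, -0.5]])
B  = np.array([[0.0], [1.0]])
R  = np.array([[1.0]])
F1 = 100.0 * np.eye(2)          # a large F1, as suggested above
h, T = 1.0, 0.5                 # horizon not longer than the delay

Phi = lambda tau: expm(A * tau)            # Phi(tau) = e^{A tau} for tau in [0, T]

# W(T) = int_0^T Phi(r) B R^{-1} B^T Phi(r)^T dr, trapezoidal rule on a grid
taus = np.linspace(0.0, T, 201)
vals = np.array([Phi(r) @ B @ np.linalg.solve(R, B.T) @ Phi(r).T for r in taus])
W = ((vals[:-1] + vals[1:]) * (np.diff(taus)[:, None, None] / 2.0)).sum(axis=0)

# Gains of (6.219): u(t) = -K0 x(t) - Kd * (distributed delay integral)
M  = Phi(T).T @ F1 @ np.linalg.inv(np.eye(2) + W @ F1)
K0 = np.linalg.solve(R, B.T @ M @ Phi(T))  # gain on the current state x(t)
Kd = np.linalg.solve(R, B.T @ M)           # gain on int Phi(t+T-s-h) A1 x(s) ds
print(K0, Kd, sep="\n")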

6.5.2 Receding Horizon LQ Control for Costs with Single Integral Terms

Consider a receding horizon LQ cost given by

J(t, t+T, u; x_t) = ∫_t^{t+T} [ x^T(s|t) Q x(s|t) + u^T(s|t) R u(s|t) ] ds + V_F(x_{t+T|t})   (6.223)

with Q > 0 and R > 0, where the terminal weighting functional V_F(x_{t+T|t}) denotes

V_F(x_{t+T|t}) = x^T(t+T|t) F_1 x(t+T|t) + ∫_{t+T-h}^{t+T} x^T(s|t) F_2 x(s|t) ds   (6.224)

with F_1 > 0 and F_2 > 0. For the time t, let s be a time variable defined on [t, t+T], i.e. s ∈ [t, t+T]. It is noted that (6.223) and (6.224) are obtained from the finite horizon LQ cost (6.144) with the terminal weighting functional (6.145) by replacing t_0 and t_f with t and t+T, respectively. Thus, the receding horizon LQ control can be obtained from the finite horizon LQ controls (6.155) and (6.162) by the same replacement, as follows.

For s ∈ (t+T-h, t+T], the optimal LQ control u^*(s|t) for the cost (6.223) is given from (6.155), by replacing t_f with t+T, as

u_2^*(s|t) = -R^{-1} B^T [ W_1(s-t) x(s|t) + ∫_{-h}^{t+T-s-h} W_2(s-t, a) x(s+a|t) da ],   (6.225)

where W_1(τ) = W_1^T(τ), W_2(τ, s), and W_3(τ, r, s) = W_3^T(τ, s, r) satisfy the following coupled partial differential Riccati equations for τ ∈ (T-h, T], s ∈ [-h, 0], and r ∈ [-h, 0]:

-Ẇ_1(τ) = A^T W_1(τ) + W_1(τ) A + Q + F_2 - W_1(τ) B R^{-1} B^T W_1(τ),   (6.226)
-(∂/∂τ + ∂/∂s) W_2(τ, s) = A^T W_2(τ, s) - W_1(τ) B R^{-1} B^T W_2(τ, s),   (6.227)
-(∂/∂τ + ∂/∂r + ∂/∂s) W_3(τ, r, s) = -W_2^T(τ, s) B R^{-1} B^T W_2(τ, r)   (6.228)

with boundary conditions

W_1(T) = F_1,   (6.229)
W_2(τ, -h) = W_1(τ) A_1,   (6.230)
W_3(τ, -h, s) = A_1^T W_2(τ, s).   (6.231)
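Among the coupled equations, (6.226) decouples from W_2 and W_3 and is a standard matrix Riccati ODE (with Q replaced by Q + F_2); it can be integrated backward from W_1(T) = F_1, e.g. down to τ = T-h to obtain W_1(T-h), which serves as a boundary value below. A minimal sketch with hypothetical data (SciPy assumed):

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical data; for illustration only.
n = 2
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.array([[1.0]])
F1, F2 = np.eye(n), np.eye(n)
T, h = 2.0, 1.0

def riccati_rhs(tau, w):
    # -dW1/dtau = A^T W1 + W1 A + Q + F2 - W1 B R^{-1} B^T W1   (6.226)
    W1 = w.reshape(n, n)
    dW = -(A.T @ W1 + W1 @ A + Q + F2 - W1 @ B @ np.linalg.solve(R, B.T) @ W1)
    return dW.ravel()

# Integrate backward in tau from W1(T) = F1 down to tau = T - h.
sol = solve_ivp(riccati_rhs, [T, T - h], F1.ravel(), rtol=1e-8)
W1_Tmh = sol.y[:, -1].reshape(n, n)   # W1(T - h), used as a boundary value
print(W1_Tmh)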

For s ∈ [t, t+T-h], the optimal LQ control u^*(s|t) for the cost (6.223) is given from (6.162) as

u_1^*(s|t) = -R^{-1} B^T [ P_1(s-t) x(s|t) + ∫_{-h}^0 P_2(s-t, a) x(s+a|t) da ],   (6.232)

where P_1(τ) = P_1^T(τ), P_2(τ, s), and P_3(τ, r, s) = P_3^T(τ, s, r) satisfy the following coupled partial differential Riccati equations for τ ∈ [0, T-h], s ∈ [-h, 0], and r ∈ [-h, 0]:

-Ṗ_1(τ) = A^T P_1(τ) + P_1(τ) A + Q + P_2(τ, 0) + P_2^T(τ, 0) - P_1(τ) B R^{-1} B^T P_1(τ),   (6.233)
-(∂/∂τ + ∂/∂s) P_2(τ, s) = A^T P_2(τ, s) + P_3(τ, 0, s) - P_1(τ) B R^{-1} B^T P_2(τ, s),   (6.234)
-(∂/∂τ + ∂/∂r + ∂/∂s) P_3(τ, r, s) = -P_2^T(τ, s) B R^{-1} B^T P_2(τ, r)   (6.235)

with boundary conditions

P_1(T-h) = W_1(T-h),   (6.236)
P_2(T-h, s) = W_2(T-h, s),   (6.237)
P_3(T-h, r, s) = W_3(T-h, r, s),   (6.238)
P_2(τ, -h) = P_1(τ) A_1,   (6.239)
P_3(τ, -h, s) = A_1^T P_2(τ, s).   (6.240)

Since the horizon length T is a design parameter, we consider the receding horizon LQ controls for 0 < T < h and for h ≤ T.

Case I: 0 < T < h

The receding horizon LQ control is given from (6.225) as

u(t) = u_2^*(t|t) = -R^{-1} B^T [ W_1(0) x(t) + ∫_{-h}^{T-h} W_2(0, s) x(t+s) ds ],   (6.241)

where W_1(τ), W_2(τ, s), and W_3(τ, r, s) satisfy the coupled partial differential Riccati equations (6.226)-(6.228) with the boundary conditions (6.229)-(6.231) for τ ∈ [0, T], s ∈ [-h, 0], and r ∈ [-h, 0].

Case II: h ≤ T

The receding horizon LQ control is given from (6.232) as

u(t) = u_1^*(t|t) = -R^{-1} B^T [ P_1(0) x(t) + ∫_{-h}^0 P_2(0, s) x(t+s) ds ],   (6.242)

where P_1(τ), P_2(τ, s), and P_3(τ, r, s) satisfy the coupled partial differential Riccati equations (6.233)-(6.235) with the boundary conditions (6.236)-(6.240) for τ ∈ [0, T-h], s ∈ [-h, 0], and r ∈ [-h, 0], and W_1(τ), W_2(τ, s), and W_3(τ, r, s) satisfy the coupled partial differential Riccati equations (6.226)-(6.228) with the boundary conditions (6.229)-(6.231) for τ ∈ (T-h, T], s ∈ [-h, 0], and r ∈ [-h, 0]. On the next interval [t_+, t_+ + T] with t_+ > t, the above procedure is repeated to obtain the control u(t_+), with t replaced by t_+ as time proceeds.

It is noted from (6.173) that the optimal cost J(t, t+T, u^*) is given as follows. For T < h,

J(t, t+T, u^*) = V_2(t) = x^T(t) W_1(0) x(t) + ∫_{-h}^{T-h} ∫_{-h}^{T-h} x^T(t+r) W_3(0, r, s) x(t+s) dr ds + 2 x^T(t) ∫_{-h}^{T-h} W_2(0, s) x(t+s) ds + ∫_{t+T-h}^{t} x^T(s) F_2 x(s) ds,   (6.243)

where W_1(0), W_2(0, s), and W_3(0, r, s) are obtained from (6.226)-(6.231). For T ≥ h,

J(t, t+T, u^*) = V_1(t) = x^T(t) P_1(0) x(t) + 2 x^T(t) ∫_{-h}^0 P_2(0, s) x(t+s) ds + ∫_{-h}^0 ∫_{-h}^0 x^T(t+r) P_3(0, r, s) x(t+s) dr ds,   (6.244)

where P_1(0), P_2(0, s), and P_3(0, r, s) are obtained from (6.233)-(6.240).
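Once P_1(0) and P_2(0, s) have been computed on a grid, evaluating the control (6.242) reduces to a matrix-vector product plus a quadrature over the stored state history. A minimal sketch with hypothetical helper names (x_hist(s) is assumed to return x(t+s)):

import numpy as np

# Hypothetical evaluation of (6.242): P1_0 is P1(0); P2_0[k] approximates
# P2(0, s_k) on a grid s_k over [-h, 0]; x_hist(s) returns x(t + s).
def rhc_control(x_now, x_hist, P1_0, P2_0, s_grid, B, R):
    vals = np.array([P2_0[k] @ x_hist(s) for k, s in enumerate(s_grid)])
    # distributed term int_{-h}^0 P2(0, s) x(t+s) ds by the trapezoidal rule
    dist = ((vals[:-1] + vals[1:]) * (np.diff(s_grid)[:, None] / 2.0)).sum(axis=0)
    return -np.linalg.solve(R, B.T @ (P1_0 @ x_now + dist))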

Our main concern is to show the cost monotonicity that yields the stabilizing property. To this end, we investigate the time derivative of the receding horizon optimal LQ cost J(t, t+T, u^*; x_t) for (6.223) with (6.224) as follows:

d/dt J(t, t+T, u^*; x_t) = lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) - J(t, t+T, u^*; x_t) ],   (6.245)

where J(t, t+T, u^*; x_t) denotes

J(t, t+T, u^*; x_t) = ∫_t^{t+T} [ x^{*T}(s|t) Q x^*(s|t) + u^{*T}(s|t) R u^*(s|t) ] ds + x^{*T}(t+T|t) F_1 x^*(t+T|t) + ∫_{t+T-h}^{t+T} x^{*T}(s|t) F_2 x^*(s|t) ds   (6.246)

with the initial state x^*_{t|t} = x_t, and J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) denotes

J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) = ∫_{t+Δ}^{t+T+Δ} [ x^{1*T}(s|t) Q x^{1*}(s|t) + u^{1*T}(s|t) R u^{1*}(s|t) ] ds + x^{1*T}(t+T+Δ|t+Δ) F_1 x^{1*}(t+T+Δ|t+Δ) + ∫_{t+T+Δ-h}^{t+T+Δ} x^{1*T}(s|t+Δ) F_2 x^{1*}(s|t+Δ) ds   (6.247)

with the initial state x^*_{t+Δ|t+Δ} = x^*_{t+Δ|t}. It is noted that the initial state x^*_{t+Δ|t+Δ} of the optimal cost (6.247) is given by the state trajectory x^*_{t+Δ|t} of the optimal cost (6.246). Let us now decompose the optimal cost (6.246) as follows:

J(t, t+T, u^*; x_t) = ∫_t^{t+Δ} [ x^{*T}(s|t) Q x^*(s|t) + u^{*T}(s|t) R u^*(s|t) ] ds + J(t+Δ, t+T, u^*; x^*_{t+Δ|t}),

where J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) denotes

J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) = ∫_{t+Δ}^{t+T} [ x^{*T}(s|t) Q x^*(s|t) + u^{*T}(s|t) R u^*(s|t) ] ds + x^{*T}(t+T|t) F_1 x^*(t+T|t) + ∫_{t+T-h}^{t+T} x^{*T}(s|t) F_2 x^*(s|t) ds.   (6.248)

Using this cost J(t+Δ, t+T, u^*; x^*_{t+Δ|t}), we can rewrite (6.245) as

d/dt J(t, t+T, u^*; x_t) = lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) - J(t, t+T, u^*; x_t) ] + lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) - J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) ].   (6.249)

Terminal Cost Monotonicity

In the second part of (6.249), if the optimal control u^{1*} in J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) is replaced with a non-optimal control u^1, then by the definition of optimality it obviously holds that

J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) ≤ J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}),

where J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) denotes

J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) = ∫_{t+Δ}^{t+T+Δ} [ x^{1T}(s) Q x^1(s) + u^{1T}(s) R u^1(s) ] ds + x^{1T}(t+T+Δ) F_1 x^1(t+T+Δ) + ∫_{t+T+Δ-h}^{t+T+Δ} x^{1T}(s|t+Δ) F_2 x^1(s|t+Δ) ds

with the initial state x^1_{t+Δ} = x^*_{t+Δ|t}. Let us choose u^1(α) = u^*(α|t) for α ∈ [t+Δ, t+T]; then x^1(α) = x^*(α|t) for α ∈ [t+Δ, t+T], which gives

J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) = J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) - V_F(x^*_{t+T|t}) + ∫_{t+T}^{t+T+Δ} [ x^{1T}(s) Q x^1(s) + u^{1T}(s) R u^1(s) ] ds + x^{1T}(t+T+Δ) F_1 x^1(t+T+Δ) + ∫_{t+T-h+Δ}^{t+T} x^{*T}(s|t) F_2 x^*(s|t) ds + ∫_{t+T}^{t+T+Δ} x^{1T}(s) F_2 x^1(s) ds
= J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) + ∫_{t+T}^{t+T+Δ} [ x^{1T}(s) Q x^1(s) + u^{1T}(s) R u^1(s) ] ds + [ x^{1T}(t+T+Δ) F_1 x^1(t+T+Δ) + ∫_{t+T}^{t+T+Δ} x^{1T}(s) F_2 x^1(s) ds ] - [ x^{*T}(t+T|t) F_1 x^*(t+T|t) + ∫_{t+T-h}^{t+T-h+Δ} x^{*T}(s|t) F_2 x^*(s|t) ds ].

Since x^1(t+T) = x^*(t+T|t) and

lim_{Δ→0} (1/Δ) [ x^1(t+T+Δ) - x^1(t+T) ] = A x^*(t+T|t) + A_1 x^*(t+T-h|t) + B u^1(t+T),

these relations with a simple choice of u^1(t+T), namely u^1(t+T) = -H_1 x^*(t+T|t) - H_2 x^*(t+T-h|t), give

lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) - J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) ]
≤ x^{*T}(t+T|t) Q x^*(t+T|t) + u^{1T}(t+T) R u^1(t+T) + 2 x^{*T}(t+T|t) F_1 [ A x^*(t+T|t) + A_1 x^*(t+T-h|t) + B u^1(t+T) ] + x^{*T}(t+T|t) F_2 x^*(t+T|t) - x^{*T}(t+T-h|t) F_2 x^*(t+T-h|t)
= [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T M [ x^*(t+T|t) ; x^*(t+T-h|t) ],   (6.250)

where M denotes

M = [ (1,1)   F_1(A_1 - B H_2) + H_1^T R H_2 ;  (A_1 - B H_2)^T F_1 + H_2^T R H_1   -F_2 + H_2^T R H_2 ],   (6.251)
(1,1) = (A - B H_1)^T F_1 + F_1 (A - B H_1) + Q + H_1^T R H_1 + F_2.

Remark 6.8 In this subsection, we have shown that

∂/∂β J(α, β, u^*; x_α) = lim_{Δ→0} (1/Δ) [ J(α, β+Δ, u^{1*}; x_α) - J(α, β, u^*; x_α) ] ≤ [ x(β) ; x(β-h) ]^T M [ x(β) ; x(β-h) ].

The condition M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the closed-loop system.

Stability

For the first part of (6.249), we can obtain

lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) - J(t, t+T, u^*; x_t) ] = -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ].   (6.252)

From the relations (6.250) and (6.252), corresponding to the second and first parts of (6.249), respectively, we obtain the upper bound of the time derivative of J(t, t+T, u^*; x_t) in (6.223) as follows:

d/dt J(t, t+T, u^*; x_t) ≤ -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ] + x^{*T}(t+T|t) Q x^*(t+T|t) + u^{1T}(t+T) R u^1(t+T) + 2 x^{*T}(t+T|t) F_1 [ A x^*(t+T|t) + A_1 x^*(t+T-h|t) + B u^1(t+T) ] + x^{*T}(t+T|t) F_2 x^*(t+T|t) - x^{*T}(t+T-h|t) F_2 x^*(t+T-h|t)   (6.253)
= -x^T(t) Q x(t) - u^{*T}(t|t) R u^*(t|t) + [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T M [ x^*(t+T|t) ; x^*(t+T-h|t) ].   (6.254)

With this choice of u^1(t+T) and M ≤ 0, a sufficient condition for the asymptotic stability of the closed-loop system, both for 0 < T < h and for h ≤ T, is given in the following theorem.

Theorem 6.10 If there exist matrices F_1 > 0, F_2 > 0, and H_i such that

[ (1,1)   F_1(A_1 - B H_2) + H_1^T R H_2 ;  (A_1 - B H_2)^T F_1 + H_2^T R H_1   -F_2 + H_2^T R H_2 ] ≤ 0,   (6.255)

where (1,1) = (A - B H_1)^T F_1 + F_1 (A - B H_1) + Q + H_1^T R H_1 + F_2, then the receding horizon LQ controls (6.241) and (6.242) for the cost (6.223) with such F_1 and F_2 asymptotically stabilize the system (6.143).

Proof If the condition (6.255) is satisfied, then it holds that

J(t, t+T, u^*; x_t) ≥ 0,
d/dt J(t, t+T, u^*; x_t) ≤ -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ] ≤ 0.

J(t, t+T, u^*; x_t) equals V_2(t) for T < h and V_1(t) for T ≥ h from (6.243) and (6.244), respectively. Therefore, together with the cost monotonicity conditions, these satisfy the conditions of the Krasovskii theorem, Theorem 2.1. This completes the proof. □

The quadratic matrix inequality (6.255) in Theorem 6.10 can be rewritten as a linear matrix inequality, as shown in the following corollary.

Corollary 6.6 If there exist matrices F̄_1 > 0, F̄_2 > 0, and H̄_i such that

[ A F̄_1 + F̄_1 A^T - B H̄_1 - H̄_1^T B^T   A_1 F̄_2 - B H̄_2   F̄_1   H̄_1^T   F̄_1 ;
  F̄_2 A_1^T - H̄_2^T B^T   -F̄_2   0   H̄_2^T   0 ;
  F̄_1   0   -Q^{-1}   0   0 ;
  H̄_1   H̄_2   0   -R^{-1}   0 ;
  F̄_1   0   0   0   -F̄_2 ] ≤ 0,   (6.256)

then the receding horizon LQ controls (6.241) and (6.242) for the cost (6.223) with F_1 = F̄_1^{-1} and F_2 = F̄_2^{-1} asymptotically stabilize the system (6.143).

Proof For the inequality (6.255), perform a congruence transformation via the nonsingular matrix F_1^{-1} ⊕ F_2^{-1} to get

[ (1,1)   (A_1 F̄_2 - B H̄_2) + H̄_1^T R H̄_2 ;  (A_1 F̄_2 - B H̄_2)^T + H̄_2^T R H̄_1   -F̄_2 + H̄_2^T R H̄_2 ] ≤ 0,

where H̄_1 = H_1 F̄_1, H̄_2 = H_2 F̄_2, and (1,1) denotes

(1,1) = (A F̄_1 - B H̄_1)^T + (A F̄_1 - B H̄_1) + F̄_1 Q F̄_1 + H̄_1^T R H̄_1 + F̄_1 F_2 F̄_1.

Now use the Schur complement to obtain (6.256). This completes the proof. □

Let us rewrite the inequality (6.253) for the cost monotonicity as follows:

d/dt J(t, t+T, u^*; x_t) ≤ -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ] + [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T M [ x^*(t+T|t) ; x^*(t+T-h|t) ] + η^T R η,

where M and η denote

M = [ F_1 A + A^T F_1 + Q + F_2 - F_1 B R^{-1} B^T F_1   F_1 A_1 ;  A_1^T F_1   -F_2 ],
η = u^1(t+T) + R^{-1} B^T F_1 x^*(t+T|t).

The alternative choice u^1(t+T) = -R^{-1} B^T F_1 x^*(t+T|t), rather than -H_1 x^*(t+T|t) - H_2 x^*(t+T-h|t), makes η = 0 and provides the following result.

Theorem 6.11 If there exist matrices F_1 > 0 and F_2 > 0 such that

[ A^T F_1 + F_1 A + Q + F_2 - F_1 B R^{-1} B^T F_1   F_1 A_1 ;  A_1^T F_1   -F_2 ] ≤ 0,   (6.257)

then the receding horizon LQ controls (6.241) and (6.242) for the cost (6.223) with such F_1 and F_2 asymptotically stabilize the system (6.143).

Proof When H_1 and H_2 in (6.255) are replaced with R^{-1} B^T F_1 and 0, respectively, we obtain (6.257). Hence, if there exist F_1 and F_2 satisfying (6.257), then such H_1 and H_2 exist and (6.255) is satisfied. From Theorem 6.10, the asymptotic stability of the closed-loop system follows. This completes the proof. □

The quadratic matrix inequality (6.257) in Theorem 6.11 can be rewritten as a linear matrix inequality, as shown in the following corollary.

Corollary 6.7 If there exist matrices F̄_1 > 0 and F̄_2 > 0 such that

[ A F̄_1 + F̄_1 A^T - B R^{-1} B^T + A_1 F̄_2 A_1^T   F̄_1   F̄_1 ;  F̄_1   -Q^{-1}   0 ;  F̄_1   0   -F̄_2 ] ≤ 0,   (6.258)

then the receding horizon LQ controls (6.241) and (6.242) for the cost (6.223) with F_1 = F̄_1^{-1} and F_2 = F̄_2^{-1} asymptotically stabilize the system (6.143).

Proof For the inequality (6.257), perform a congruence transformation via the nonsingular matrix F_1^{-1} ⊕ F_2^{-1}, which gives

[ A F̄_1 + F̄_1 A^T + F̄_1 Q F̄_1 + F̄_1 F̄_2^{-1} F̄_1 - B R^{-1} B^T   A_1 F̄_2 ;  F̄_2 A_1^T   -F̄_2 ] ≤ 0.

Now use the Schur complement to get (6.258). From Theorem 6.11, the asymptotic stability of the closed-loop system follows. This completes the proof. □

The stability properties of receding horizon LQ controls for costs with single integral terminal terms in Theorems 6.10 and 6.11 and Corollaries 6.6 and 6.7 can be found partially in [16].
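The LMI condition (6.258) can be checked numerically with a semidefinite programming tool. The following minimal CVXPY sketch uses hypothetical system data; it illustrates the feasibility test only and is not code from the literature.

import numpy as np
import cvxpy as cp

# Hypothetical system and weights; feasibility test of (6.258).
n = 2
A  = np.array([[0.0, 1.0], [-1.0, -0.5]])
A1 = np.array([[0.1, 0.0], [0.0, 0.1]])
B  = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.array([[1.0]])
Qinv, BRB = np.linalg.inv(Q), B @ np.linalg.solve(R, B.T)

F1b = cp.Variable((n, n), symmetric=True)   # corresponds to F1^{-1}
F2b = cp.Variable((n, n), symmetric=True)   # corresponds to F2^{-1}
Z = np.zeros((n, n))
lmi = cp.bmat([[A @ F1b + F1b @ A.T - BRB + A1 @ F2b @ A1.T, F1b, F1b],
               [F1b, -Qinv, Z],
               [F1b, Z, -F2b]])
prob = cp.Problem(cp.Minimize(0),
                  [lmi << 0, F1b >> 1e-6 * np.eye(n), F2b >> 1e-6 * np.eye(n)])
prob.solve(solver=cp.SCS)
if prob.status == cp.OPTIMAL:
    F1, F2 = np.linalg.inv(F1b.value), np.linalg.inv(F2b.value)

If the problem is feasible, F_1 = F̄_1^{-1} and F_2 = F̄_2^{-1} can be used in the terminal weighting functional (6.224).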

6.5.3 *Receding Horizon LQ Control for Costs with Double Integral Terms

Consider a receding horizon LQ cost given by

J(t, t+T, u; x_t) = ∫_t^{t+T} [ x^T(s|t) Q x(s|t) + u^T(s|t) R u(s|t) ] ds + V_F(x_{t+T|t})   (6.259)

with Q > 0 and R > 0, where the terminal weighting functional V_F(x_{t+T|t}) denotes

V_F(x_{t+T|t}) = x^T(t+T|t) F_1 x(t+T|t) + ∫_{t+T-h}^{t+T} x^T(s|t) F_2 x(s|t) ds + ∫_{-h}^0 ∫_{t+T+s}^{t+T} [ ẋ(r|t) - A x(r|t) ]^T F_3 [ ẋ(r|t) - A x(r|t) ] dr ds + ∫_{-h}^0 ∫_{t+T+s}^{t+T} x^T(r|t) F_4 x(r|t) dr ds   (6.260)

with F_i > 0. For the time t, let s be a time variable defined on [t, t+T], i.e. s ∈ [t, t+T]. It is noted that (6.259) and (6.260) are obtained from the finite horizon LQ cost (6.144) with the terminal weighting functional (6.145) by replacing t_0 and t_f with t and t+T, respectively. Thus, the receding horizon LQ control can be obtained from the finite horizon LQ controls (6.174), (6.181), and (6.190) by the same replacement. Define

f(s) ≜ s - T + h,  R̄(s) ≜ R + f(s) B^T F_3 B.

The finite horizon LQ controls are obtained from (6.174), (6.181), and (6.190) as follows. For s ∈ (t+T-h, t+T], the optimal LQ control u_3^*(s|t) for the cost (6.259) is given from (6.174), by replacing t_f with t+T, as

u_3^*(s|t) = -R̄^{-1}(s-t) B^T [ W_1(s-t) x(s|t) + ∫_{-h}^{t+T-s-h} W_2(s-t, a) x(s+a|t) da + f(s-t) F_3 A_1 x(s-h|t) ],   (6.261)

where W_1(τ) = W_1^T(τ), W_2(τ, s), and W_3(τ, r, s) = W_3^T(τ, s, r) satisfy the following coupled partial differential Riccati equations for τ ∈ (T-h, T], s ∈ [-h, 0], and r ∈ [-h, 0]:

-Ẇ_1(τ) = A^T W_1(τ) + W_1(τ) A + Q + F_2 + f(τ) F_4 - W_1^T(τ) B R̄^{-1}(τ) B^T W_1(τ),   (6.262)
-(∂/∂τ + ∂/∂s) W_2(τ, s) = A^T W_2(τ, s) - W_1^T(τ) B R̄^{-1}(τ) B^T W_2(τ, s),   (6.263)
-(∂/∂τ + ∂/∂r + ∂/∂s) W_3(τ, r, s) = -W_2^T(τ, s) B R̄^{-1}(τ) B^T W_2(τ, r)   (6.264)

with boundary conditions

W_1(T) = F_1,   (6.265)
W_2(τ, -h) = W_1(τ) A_1 - f(τ) W_1^T(τ) B R̄^{-1}(τ) B^T F_3 A_1,   (6.266)
W_3(τ, -h, s) = A_1^T [ I - f(τ) F_3 B R̄^{-1}(τ) B^T ] W_2(τ, s).   (6.267)

For s ∈ (t+T-2h, t+T-h], the optimal LQ control u_2^*(s|t) for the cost (6.259) is given from (6.181) as

u_2^*(s|t) = -R^{-1} B^T [ S_1(s-t) x(s|t) + ∫_{-h}^0 S_2(s-t, a) x(s+a|t) da ],   (6.268)

where S_1(τ) = S_1^T(τ), S_2(τ, s), and S_3(τ, r, s) = S_3^T(τ, s, r) satisfy the following coupled partial differential Riccati equations for τ ∈ (T-2h, T-h], s ∈ [-h, 0], and r ∈ [-h, 0]:

-Ṡ_1(τ) = A^T S_1(τ) + S_1(τ) A + Q + S_2(τ, 0) + S_2^T(τ, 0) - S_1(τ) B R^{-1} B^T S_1(τ) + f(τ+h) A_1^T F_3 A_1 - f^2(τ+h) A_1^T F_3 B R̄^{-1}(τ+h) B^T F_3 A_1,   (6.269)
-(∂/∂τ + ∂/∂s) S_2(τ, s) = A^T S_2(τ, s) + S_3^T(τ, 0, s) - S_1(τ) B R^{-1} B^T S_2(τ, s),   (6.270)
-(∂/∂τ + ∂/∂r + ∂/∂s) S_3(τ, r, s) = -S_2^T(τ, s) B R^{-1} B^T S_2(τ, r)   (6.271)

with boundary conditions

S_1(T-h) = W_1(T-h),   (6.272)
S_2(T-h, s) = W_2(T-h, s),   (6.273)
S_3(T-h, r, s) = W_3(T-h, r, s),   (6.274)
S_2(τ, -h) = S_1(τ) A_1,   (6.275)
S_3(τ, -h, s) = A_1^T S_2(τ, s).   (6.276)

For s ∈ [t, t+T-2h], the optimal LQ control u_1^*(s|t) for the cost (6.259) is given from (6.190) as

u_1^*(s|t) = -R^{-1} B^T [ P_1(s-t) x(s|t) + ∫_{-h}^0 P_2(s-t, a) x(s+a|t) da ],   (6.277)

where P_1(τ) = P_1^T(τ), P_2(τ, s), and P_3(τ, r, s) = P_3^T(τ, s, r) satisfy the following coupled partial differential Riccati equations for τ ∈ [0, T-2h], s ∈ [-h, 0], and r ∈ [-h, 0]:

-Ṗ_1(τ) = A^T P_1(τ) + P_1(τ) A + Q + P_2(τ, 0) + P_2^T(τ, 0) - P_1(τ) B R^{-1} B^T P_1(τ),   (6.278)
-(∂/∂τ + ∂/∂s) P_2(τ, s) = A^T P_2(τ, s) + P_3^T(τ, 0, s) - P_1(τ) B R^{-1} B^T P_2(τ, s),   (6.279)
-(∂/∂τ + ∂/∂r + ∂/∂s) P_3(τ, r, s) = -P_2^T(τ, s) B R^{-1} B^T P_2(τ, r)   (6.280)

with boundary conditions

P_1(T-2h) = S_1(T-2h),   (6.281)
P_2(T-2h, s) = S_2(T-2h, s),   (6.282)
P_3(T-2h, r, s) = S_3(T-2h, r, s),   (6.283)
P_2(τ, -h) = P_1(τ) A_1,   (6.284)
P_3(τ, -h, s) = A_1^T P_2(τ, s).   (6.285)

Since the horizon length T is a design parameter, we consider the receding horizon controls for 0 < T < h, for h ≤ T < 2h, and for 2h ≤ T.

Case I: 0 < T < h

The receding horizon LQ control u(t) for the cost (6.259) is given from (6.261), by replacing s with t, as

u(t) = u_3^*(t|t) = -R̄^{-1}(0) B^T [ W_1(0) x(t) + ∫_{-h}^{T-h} W_2(0, a) x(t+a) da + f(0) F_3 A_1 x(t-h) ],   (6.286)

where W_1(τ), W_2(τ, s), and W_3(τ, r, s) satisfy the coupled partial differential Riccati equations (6.262)-(6.264) with the boundary conditions (6.265)-(6.267) for τ ∈ [0, T], s ∈ [-h, 0], and r ∈ [-h, 0].

Case II: h ≤ T < 2h

The receding horizon LQ control u(t) for the cost (6.259) is given from (6.268), by replacing s with t, as

u(t) = u_2^*(t|t) = -R^{-1} B^T [ S_1(0) x(t) + ∫_{-h}^0 S_2(0, s) x(t+s) ds ],   (6.287)

where S_1(τ), S_2(τ, s), and S_3(τ, r, s) satisfy (6.269)-(6.271) with the boundary conditions (6.272)-(6.276) for τ ∈ [0, T-h], s ∈ [-h, 0], and r ∈ [-h, 0], and W_1(τ), W_2(τ, s), and W_3(τ, r, s) satisfy (6.262)-(6.264) with the boundary conditions (6.265)-(6.267) for τ ∈ (T-h, T], s ∈ [-h, 0], and r ∈ [-h, 0].

Case III: 2h ≤ T

The receding horizon LQ control u(t) for the cost (6.259) is given from (6.277), by replacing s with t, as

u(t) = u_1^*(t|t) = -R^{-1} B^T [ P_1(0) x(t) + ∫_{-h}^0 P_2(0, s) x(t+s) ds ],   (6.288)

where P_1(τ), P_2(τ, s), and P_3(τ, r, s) satisfy (6.278)-(6.280) with the boundary conditions (6.281)-(6.285) for τ ∈ [0, T-2h], s ∈ [-h, 0], and r ∈ [-h, 0]; S_1(τ), S_2(τ, s), and S_3(τ, r, s) satisfy (6.269)-(6.271) with the boundary conditions (6.272)-(6.276) for τ ∈ (T-2h, T-h], s ∈ [-h, 0], and r ∈ [-h, 0]; and W_1(τ), W_2(τ, s), and W_3(τ, r, s) satisfy (6.262)-(6.264) with the boundary conditions (6.265)-(6.267) for τ ∈ (T-h, T], s ∈ [-h, 0], and r ∈ [-h, 0].

On the next interval [t_+, t_+ + T] with t_+ > t, the above procedure is repeated to obtain the control u(t_+), with t replaced by t_+ as time proceeds.

It is noted from (6.202) that the optimal cost J(t, t+T, u^*) is given as V_3(t) in (6.199) for T < h, with W_1(t), W_2(t, s), and W_3(t, r, s) replaced with W_1(0), W_2(0, s), and W_3(0, r, s), respectively, obtained from (6.262)-(6.267); as V_2(t) in (6.200) for h ≤ T < 2h, with S_1(t), S_2(t, s), and S_3(t, r, s) replaced with S_1(0), S_2(0, s), and S_3(0, r, s), respectively, obtained from (6.269)-(6.276); and as V_1(t) in (6.201) for 2h ≤ T, with P_1(t), P_2(t, s), and P_3(t, r, s) replaced with P_1(0), P_2(0, s), and P_3(0, r, s), respectively, obtained from (6.278)-(6.285).

To show the cost monotonicity that yields the stabilizing property, we investigate the time derivative of the receding horizon optimal LQ cost J(t, t+T, u^*; x_t) for (6.259) with (6.260). Similarly to the derivation for the cost with single integral terminal terms in Sect. 6.5.2, we first investigate a condition for terminal cost monotonicity. Using the cost J(t+Δ, t+T, u^*; x^*_{t+Δ|t}), we have

d/dt J(t, t+T, u^*; x_t) = lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) - J(t, t+T, u^*; x_t) ] + lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^{1*}; x^*_{t+Δ|t}) - J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) ].   (6.289)

Terminal Cost Monotonicity

With some effort, the second part of (6.289) can be bounded as

lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) - J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) ]
≤ [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T M_1(F_1, F_2, F_4) [ x^*(t+T|t) ; x^*(t+T-h|t) ]
+ u^{1T}(t+T) [ R + h B^T F_3 B ] u^1(t+T)
+ 2 u^{1T}(t+T) B^T [ F_1 x^*(t+T|t) + h F_3 A_1 x^*(t+T-h|t) ]
- ∫_{-h}^0 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ]^T M_2(F_3, F_4) [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ] ds
+ ∫_{-h}^0 [ x^*(t+T|t) ; ẋ^*(t+T+s|t) ]^T [ X_11  X_21^T ; X_21  X_22 ] [ x^*(t+T|t) ; ẋ^*(t+T+s|t) ] ds,   (6.290)

where u^1(t+T) is an arbitrary control at t+T, [X_ij]_{2×2} ≥ 0, and M_i denote

M_1(F_1, F_2, F_4) = [ A^T F_1 + F_1 A + Q + F_2 + h F_4   F_1 A_1 ;  A_1^T F_1   h A_1^T F_3 A_1 - F_2 ],   (6.291)
M_2(F_3, F_4) = [ F_4 + A^T F_3 A   -A^T F_3 ;  -F_3 A   F_3 ].   (6.292)

A simple choice of u^1(t+T) = -H_1 x^*(t+T|t) - H_2 x^*(t+T-h|t) gives

lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T+Δ, u^1; x^*_{t+Δ|t}) - J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) ]
≤ [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T L_1 [ x^*(t+T|t) ; x^*(t+T-h|t) ] - ∫_{-h}^0 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ]^T L_2 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ] ds,   (6.293)

where L_1 and L_2 denote

L_1 = M_1(F_1, F_2, F_4) + [ h X_11 + X_21 + X_21^T   -X_21^T ;  -X_21   0 ] + [H_1  H_2]^T [ R + h B^T F_3 B ] [H_1  H_2] - [H_1  H_2]^T B^T [F_1  h F_3 A_1] - [F_1  h F_3 A_1]^T B [H_1  H_2],   (6.294)
L_2 = M_2(F_3, F_4) - [ 0  0 ;  0  X_22 ].   (6.295)

Remark 6.9 In this subsection, we have shown that

∂/∂β J(α, β, u^*; x_α) = lim_{Δ→0} (1/Δ) [ J(α, β+Δ, u^{1*}; x_α) - J(α, β, u^*; x_α) ]
≤ [ x(β) ; x(β-h) ]^T L_1 [ x(β) ; x(β-h) ] - ∫_{-h}^0 [ x(β+s) ; ẋ(β+s) ]^T L_2 [ x(β+s) ; ẋ(β+s) ] ds.

The conditions L_1 ≤ 0 and L_2 ≥ 0 play an important role in guaranteeing the asymptotic stability of the closed-loop system.

Stability

For the first part of (6.289), we can obtain

lim_{Δ→0} (1/Δ) [ J(t+Δ, t+T, u^*; x^*_{t+Δ|t}) - J(t, t+T, u^*; x_t) ] = -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ].   (6.296)

From the relations (6.293) and (6.296), corresponding to the second and first parts of (6.289), respectively, we obtain the upper bound of the time derivative of J(t, t+T, u^*; x_t) in (6.259) with (6.260) as follows:

d/dt J(t, t+T, u^*; x_t) ≤ -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ]
+ [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T M_1(F_1, F_2, F_4) [ x^*(t+T|t) ; x^*(t+T-h|t) ]
+ u^{1T}(t+T) [ R + h B^T F_3 B ] u^1(t+T) + 2 u^{1T}(t+T) B^T [ F_1 x^*(t+T|t) + h F_3 A_1 x^*(t+T-h|t) ]
- ∫_{-h}^0 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ]^T M_2(F_3, F_4) [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ] ds
+ ∫_{-h}^0 [ x^*(t+T|t) ; ẋ^*(t+T+s|t) ]^T [ X_11  X_21^T ; X_21  X_22 ] [ x^*(t+T|t) ; ẋ^*(t+T+s|t) ] ds   (6.297)
= -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ] + [ x^*(t+T|t) ; x^*(t+T-h|t) ]^T L_1 [ x^*(t+T|t) ; x^*(t+T-h|t) ] - ∫_{-h}^0 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ]^T L_2 [ x^*(t+T+s|t) ; ẋ^*(t+T+s|t) ] ds.   (6.298)

With this choice of u^1(t+T) and L_1 ≤ 0 and L_2 ≥ 0, a sufficient condition for the asymptotic stability of the closed-loop system for 0 < T < h, for h ≤ T < 2h, and for 2h ≤ T is given in the following theorem.

Theorem 6.12 If there exist matrices F_i > 0, X_ij, and H_i such that

[ X_11  X_21^T ;  X_21  X_22 ] ≥ 0,   (6.299)

[ A^T F_1 + F_1 A + Q + F_2 + h F_4 + h X_11 + X_21 + X_21^T   F_1 A_1 - X_21^T ;  A_1^T F_1 - X_21   -F_2 + h A_1^T F_3 A_1 ]
- [H_1  H_2]^T B^T [F_1  h F_3 A_1] - [F_1  h F_3 A_1]^T B [H_1  H_2] + [H_1  H_2]^T [ R + h B^T F_3 B ] [H_1  H_2] ≤ 0,   (6.300)

[ F_4 + A^T F_3 A   -A^T F_3 ;  -F_3 A   F_3 - X_22 ] ≥ 0,   (6.301)

then the receding horizon LQ controls (6.286), (6.287), and (6.288) for the cost (6.259) asymptotically stabilize the system (6.143).

Proof The LQ cost generated by the initial state x_t satisfies J(t, t+T, u^*; x_t) ≥ 0. The conditions (6.299)-(6.301) for the terminal cost monotonicity guarantee that

d/dt J(t, t+T, u^*; x_t) ≤ -[ x^T(t) Q x(t) + u^{*T}(t|t) R u^*(t|t) ] ≤ 0.

The optimal cost J(t, t+T, u^*) is given as V_3(t) in (6.199) for T < h, as V_2(t) in (6.200) for h ≤ T < 2h, and as V_1(t) in (6.201) for 2h ≤ T, with the respective functions evaluated at τ = 0 as described above. Therefore, together with the cost monotonicity conditions, these satisfy the conditions of the Krasovskii theorem, Theorem 2.1. This completes the proof. □

Since the condition (6.300) is not linear but bilinear in F_i, X_ij, and H_i, the conditions (6.299)-(6.301) cannot be solved directly via a convex optimization technique. However, the quadratic matrix inequality (6.300) can be rewritten as linear matrix inequalities if we restrict X_22 to λF_1 for some λ > 0, as shown in the following corollary.

Corollary 6.8 If there exist matrices F̄_i > 0, X̄_ij, λ > 0, and H̄_i such that

[ X̄_11  X̄_21^T ;  X̄_21  λF̄_1 ] ≥ 0,   (6.302)

[ (1,1)   (2,1)^T   H̄_1^T   -H̄_1^T B^T   F̄_1 ;
  (2,1)   -F̄_2   H̄_2^T   -H̄_2^T B^T + F̄_1 A_1^T   0 ;
  H̄_1   H̄_2   -R^{-1}   0   0 ;
  -B H̄_1   -B H̄_2 + A_1 F̄_1   0   -h^{-1} F̄_3   0 ;
  F̄_1   0   0   0   -Q^{-1} ] ≤ 0,   (6.303)

[ F̄_4   F̄_1 A^T ;  A F̄_1   λ^{-1} F̄_1 - F̄_3 ] ≥ 0,   (6.304)

where (1,1) and (2,1) denote

(1,1) = F̄_1 A^T - H̄_1^T B^T + A F̄_1 - B H̄_1 + F̄_2 + h F̄_4 + h X̄_11 + X̄_21 + X̄_21^T,   (6.305)
(2,1) = F̄_1 A_1^T - H̄_2^T B^T - X̄_21,   (6.306)

then the receding horizon LQ controls (6.286)-(6.288) for the cost (6.259) with F_1 = F̄_1^{-1}, F_3 = F̄_3^{-1}, F_2 = F̄_1^{-1} F̄_2 F̄_1^{-1}, and F_4 = F̄_1^{-1} F̄_4 F̄_1^{-1} asymptotically stabilize the system (6.143).

Proof The proof is complicated but straightforward, using the congruence transformation and the Schur complement technique on the conditions (6.299)-(6.301) in Theorem 6.12 with X_22 replaced by λF_1 for some λ > 0, and is thus omitted. □

6.5 Receding Horizon LQ Controls for State Delayed Systems

255

Let us rewrite the inequality (6.297) for the cost monotonicity as follows.   d J (t, t + T , u∗ ; xt ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) dt  ∗  ∗ T  x (t + T | t) x (t + T | t) M3 (F1 , F2 , F4 ) ∗ + ∗ + η T [R + hBT F3 B]η x (t + T − h| t) x (t + T − h| t) T   ∗  0  ∗ x (t + T + s| t) x (t + T + s| t) ds M (F , F ) − 2 3 4 ∗ x˙ ∗ (t + T + s| t) −h x˙ (t + T + s| t)  ∗ T    0  ∗ T x (t + T | t) x (t + T | t) X11 X21 ds + (6.307) ∗ x˙ ∗ (t + T + s| t) X21 X22 −h x˙ (t + T + s| t)

where M3 (F1 , F2 , F4 ) and η denote M3 (F1 , F2 , F4 )

T    = M1 (F1 , F2 , F4 ) − F1 hF3 A1 B[R + hBT F3 B]−1 BT F1 hF3 A1 ,

(6.308)  η = u1 (t + T ) + [R + hBT F3 B]−1 BT F1 x∗ (t + T | t) + hF3 A1 x∗ (t + T − h| t). (6.309) A special choice of u1 (t + T ) such that   ∗ ∗ u (t + T ) = −[R + hB F3 B] B F1 x (t + T | t) + hF3 A1 x (t + T − h| t) 1

T

−1 T

provides the following result. Theorem 6.13 If there exist matrices Fi > 0 and Xij such that  

T X11 X21 X21 X22

 ≥ 0,

(6.310) 

T T AT F1 + F1 A + Q + F2 + hF4 + hX11 + X21 + X21 F1 A1 − X21 T T A1 F1 − X21 −F2 + hA1 F3 A1  T   T −1 T − F1 hF3 A1 B[R + hB F3 B] B F1 hF3 A1 ≤ 0, (6.311)   T T F4 + A F3 A −A F3 ≥ 0, (6.312) −F3 A F3 − X22

then the receding horizon LQ controls (6.286), (6.287), and (6.288) for the cost (6.259) asymptotically stabilize the system (6.143). Proof When H1 and H2 in (6.300) are replaced with [R + hBT F3 B]−1 BT F1 and [R + hBT F3 B]−1 BT hF3 A1 ,

256

6 LQ Optimal Controls

respectively, we obtain (6.311). If there exist F1 and F2 which satisfy (6.311), then H1 and H2 exist and (6.300) is satisfied. From Theorem 6.12, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  Since the condition (6.311) is not linear but bilinear in Fi and Xij , the conditions (6.310)–(6.312) cannot be solved via a convex optimization technique. However, the quadratic matrix inequality (6.311) can be rewritten as linear matrix inequalities if we replace X22 with λF1 for some λ > 0, which is shown in the following corollary. Corollary 6.9 If there exist matrices F¯ i > 0, λ > 0 and X¯ ij such that  ⎡

T X¯ 11 X¯ 21 X¯ 21 λF¯ 1

 ≥ 0,

(6.313) ⎤

T (1, 1) A1 F¯ 1 − X¯ 21 −BR−1 BT F¯ 1 ⎢ F¯ 1 AT − X¯ 21 ¯ ¯ − F A 0 ⎥ F 2 1 1 1 ⎢ ⎥ ≤ 0, ⎣ −BR−1 BT AT1 F¯ 1 −h−1 F¯ 3 − BR−1 BT 0 ⎦ F¯ 1 0 0 −Q−1   F¯ 4 F¯ 1 AT ≥ 0, −1 ¯ ¯ AF1 λ F1 − F¯ 3

(6.314)

(6.315)

T where (1, 1) = F¯ 1 AT + AF¯ 1 + F¯ 2 + hF¯ 4 + hX¯ 11 + X¯ 21 + X¯ 21 − BR−1 BT , then the receding horizon LQ controls (6.286)–(6.288) for the cost (6.259) with F1 = F¯ 1−1 , F3 = F¯ 3−1 , F2 = F¯ 1−1 F¯ 2 F¯ 1−1 , F4 = F¯ 1−1 F¯ 4 F¯ 1−1 asymptotically stabilize the system (6.143).

Proof The proof is complicated but straightforwards with the congruence transformation and the Schur complement technique for the conditions (6.310)–(6.312) in Theorem 6.13, where X22 is replaced with λF1 for some λ > 0, and thus omitted.  The stability properties of receding horizon LQ controls for costs with double integral terminal terms in Theorems 6.12 and 6.13 and Corollaries 6.8 and 6.9 can be found partially in [22].
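Since λ multiplies the matrix variable F̄_1 in (6.313) and (6.315), the conditions are LMIs only for a fixed λ; a one-dimensional search over λ keeps each individual test convex. The following CVXPY sketch follows the block entries as reconstructed above and should be treated as an illustration under those assumptions, not a verified implementation.

import numpy as np
import cvxpy as cp

def corollary_6_9_feasible(A, A1, B, Q, R, h, lam, eps=1e-6):
    # Feasibility of the LMIs (6.313)-(6.315) for a fixed scalar lambda.
    n = A.shape[0]
    Qinv, BRB = np.linalg.inv(Q), B @ np.linalg.solve(R, B.T)
    F1b = cp.Variable((n, n), symmetric=True)
    F2b = cp.Variable((n, n), symmetric=True)
    F3b = cp.Variable((n, n), symmetric=True)
    F4b = cp.Variable((n, n), symmetric=True)
    X11 = cp.Variable((n, n), symmetric=True)
    X21 = cp.Variable((n, n))
    Z, I = np.zeros((n, n)), np.eye(n)
    c11 = F1b @ A.T + A @ F1b + F2b + h * F4b + h * X11 + X21 + X21.T - BRB
    lmi313 = cp.bmat([[X11, X21.T], [X21, lam * F1b]])
    lmi314 = cp.bmat([[c11, A1 @ F1b - X21.T, -BRB, F1b],
                      [F1b @ A1.T - X21, -F2b, F1b @ A1.T, Z],
                      [-BRB, A1 @ F1b, -F3b / h - BRB, Z],
                      [F1b, Z, Z, -Qinv]])
    lmi315 = cp.bmat([[F4b, F1b @ A.T], [A @ F1b, F1b / lam - F3b]])
    cons = [lmi313 >> 0, lmi314 << 0, lmi315 >> 0,
            F1b >> eps * I, F2b >> eps * I, F3b >> eps * I, F4b >> eps * I]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# One-dimensional search over lambda, e.g.:
# feasible = [lam for lam in np.logspace(-2, 2, 9)
#             if corollary_6_9_feasible(A, A1, B, Q, R, h, lam)]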

6.5.4 Receding Horizon LQ Control for Short Horizon Costs

For the case where the receding horizon T is less than the time delay h, i.e. T < h, where T is a design parameter, we have already found the fixed and receding horizon LQ controls (6.150), (6.155), and (6.225). For the simple cost (6.213) in Sect. 6.5.1, the receding horizon LQ control (6.150) was given in a closed form. It is interesting to note that, even for the receding horizon costs with single or double integral terminal terms, the receding horizon LQ controls (6.155) and (6.225) can be formulated in closed forms. We now present an alternative way, based on the well-known maximum principle, to solve the short horizon LQ control problems. Although the fixed horizon LQ control for the short horizon is of limited interest in itself, the receding horizon LQ control for the short horizon is practically useful. Since the latter is easily adapted from the former, we provide the former first and then the latter.

Consider the fixed horizon [t_0, t_f] = [t, t+T], where the horizon length T = t_f - t_0 is less than the delay constant h. Let us denote by t and s a reference time and a time parameter between t_0 = t and t_f = t+T; then the system (6.143) can be written as

ẋ(s|t) = A x(s|t) + A_1 x(s-h|t) + B u(s|t).   (6.316)

Then s-h in x(s-h|t) always lies before t, and thus x(s-h|t) is a given term for s ∈ [t_0, t_f], which allows us to use the well-known maximum principle for ordinary systems.

1. Receding Horizon LQ Control for Short Horizon Costs with Single Integral Terms

When t_f < t_0 + h, the cost (6.144) with the terminal weighting (6.145), where Q > 0, R > 0, F_1 > 0, F_2 > 0, F_3 = 0, and F_4 = 0, can be written as

J(t_0, t_f, u) = J̄(t_0, t_f, u) + ∫_{t_f-h}^{t_0} x^T(α) F_2 x(α) dα,   (6.317)

where

J̄(t_0, t_f, u) = ∫_{t_0}^{t_f} [ x^T(τ)(Q + F_2) x(τ) + u^T(τ) R u(τ) ] dτ + x^T(t_f) F_1 x(t_f).   (6.318)

The second term in (6.317) is associated with the past state {x(α), α ∈ [t_f-h, t_0]}, which is given at the current time t_0, and it is therefore removed from the cost when obtaining the control. To use Pontryagin's minimum principle, we choose the Hamiltonian H(x(τ), u(τ), p(τ)) as

H(x(τ), u(τ), p(τ)) = (1/2) [ x^T(τ)(Q + F_2) x(τ) + u^T(τ) R u(τ) ] + p^T(τ) { A x(τ) + A_1 x(τ-h) + B u(τ) },   (6.319)

where p(τ) is a Lagrange multiplier, also called the costate, whose boundary value is determined by the (correspondingly scaled) terminal cost φ(x(t_f)) = (1/2) x^T(t_f) F_1 x(t_f):

p(t_f) = ∂φ(x(t_f)) / ∂x(t_f).   (6.320)

The necessary condition for the finite horizon LQ control u(τ) is

0 = ∂/∂u(τ) H(x(τ), u(τ), p(τ)).   (6.321)

Then the state delayed system (6.143) is described by the canonical equations

ẋ(τ) = ∂H/∂p(τ) = A x(τ) + A_1 x(τ-h) + B u(τ),   (6.322)
ṗ(τ) = -∂H/∂x(τ) = -(Q + F_2) x(τ) - A^T p(τ),   (6.323)
p(t_f) = F_1 x(t_f),   (6.324)

where t_f - h ≤ t_0 ≤ τ ≤ t_f, and the finite horizon LQ control satisfies

0 = R u(τ) + B^T p(τ).   (6.325)

Therefore, the finite horizon LQ control is finally represented by

u(τ) = -R^{-1} B^T p(τ).   (6.326)

Substituting (6.326) into (6.322) yields ẋ(τ) = A x(τ) + A_1 x(τ-h) - B R^{-1} B^T p(τ). Therefore, we have a set of 2n linear differential equations

[ ẋ(τ) ; ṗ(τ) ] = [ A   -B R^{-1} B^T ;  -(Q + F_2)   -A^T ] [ x(τ) ; p(τ) ] + [ A_1 ; 0 ] x(τ-h),   (6.327)

where x(τ-h) is a known term for t_0 ≤ τ ≤ t_f because t_f - h ≤ t_0. Therefore, using linear system theory, the solution of (6.327) is given by

[ x(τ) ; p(τ) ] = Φ(τ-t_0) [ x(t_0) ; p(t_0) ] + ∫_{-h}^{τ-t_0-h} Φ(τ-t_0-s-h) [ A_1 ; 0 ] x(t_0+s) ds,

where

Φ(τ) = e^{Hτ},  H ≜ [ A   -B R^{-1} B^T ;  -(Q + F_2)   -A^T ].

Let us partition Φ(τ) into four blocks:

Φ(τ) = [ Φ_11(τ)   Φ_12(τ) ;  Φ_21(τ)   Φ_22(τ) ].

Then we obtain

[ x(t_f) ; p(t_f) ] = [ Φ_11(t_f-τ)   Φ_12(t_f-τ) ;  Φ_21(t_f-τ)   Φ_22(t_f-τ) ] [ x(τ) ; p(τ) ] + ∫_{-h}^{t_f-τ-h} [ Φ_11(t_f-τ-s-h) A_1 x(τ+s) ;  Φ_21(t_f-τ-s-h) A_1 x(τ+s) ] ds.   (6.328)

From the boundary condition (6.324) and (6.328), we obtain

p(τ) = W_1(τ) x(τ) + ∫_{-h}^{t_f-τ-h} W_2(τ, s) x(τ+s) ds,   (6.329)

where

W_1(τ) = [ Φ_22(t_f-τ) - F_1 Φ_12(t_f-τ) ]^{-1} [ F_1 Φ_11(t_f-τ) - Φ_21(t_f-τ) ],
W_2(τ, s) = [ Φ_22(t_f-τ) - F_1 Φ_12(t_f-τ) ]^{-1} [ F_1 Φ_11(t_f-τ-s-h) - Φ_21(t_f-τ-s-h) ] A_1.

The optimal control is obtained by replacing τ with t:

u^*(t) = -R^{-1} B^T [ W_1(t) x(t) + ∫_{-h}^{t_f-t-h} W_2(t, s) x(t+s) ds ],   (6.330)

which is identical to the control (6.155) in Theorem 6.8. However, the control (6.330) is in closed form, while the control (6.155) in Theorem 6.8 is obtained from partial differential Riccati equations.

Let us now obtain the receding horizon LQ control by putting t_0 = t and t_f = t+T in the cost (6.318), where T ≤ h. Note that W_1(t) and W_2(t, s) then do not depend on t; we therefore denote them by W_1(0) and W_2(0, s), respectively. The receding short horizon LQ control can then be written in the simpler form

u(t) = u^*(t|t) = -R^{-1} B^T [ W_1(0) x(t) + ∫_{-h}^{T-h} W_2(0, s) x(t+s) ds ],   (6.331)

where

W_1(0) = [ Φ_22(T) - F_1 Φ_12(T) ]^{-1} [ F_1 Φ_11(T) - Φ_21(T) ],
W_2(0, s) = [ Φ_22(T) - F_1 Φ_12(T) ]^{-1} [ F_1 Φ_11(T-s-h) - Φ_21(T-s-h) ] A_1.
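For T ≤ h, the gains in (6.331) require only the blocks of Φ(T) = e^{HT}. A minimal sketch with hypothetical data (SciPy assumed), for illustration only:

import numpy as np
from scipy.linalg import expm

# Hypothetical data; T <= h so that x(s - h) is known over the horizon.
n = 2
A  = np.array([[0.0, 1.0], [-1.0, -0.5]])
A1 = np.array([[0.1, 0.0], [0.0, 0.1]])
B  = np.array([[0.0], [1.0]])
Q, F2 = np.eye(n), np.eye(n)
R, F1 = np.array([[1.0]]), 10.0 * np.eye(n)
h, T = 1.0, 0.8

# Hamiltonian matrix H of (6.327) and the blocks of Phi(tau) = e^{H tau}
H = np.block([[A, -B @ np.linalg.solve(R, B.T)],
              [-(Q + F2), -A.T]])

def phi_blocks(tau):
    P = expm(H * tau)
    return P[:n, :n], P[:n, n:], P[n:, :n], P[n:, n:]   # Phi11, Phi12, Phi21, Phi22

P11, P12, P21, P22 = phi_blocks(T)
L = np.linalg.inv(P22 - F1 @ P12)
W1_0 = L @ (F1 @ P11 - P21)                  # W1(0) in (6.331)

def W2_0(s):
    # W2(0, s) = L [F1 Phi11(T-s-h) - Phi21(T-s-h)] A1, for s in [-h, T-h]
    Q11, _, Q21, _ = phi_blocks(T - s - h)
    return L @ (F1 @ Q11 - Q21) @ A1

K0 = -np.linalg.solve(R, B.T @ W1_0)         # gain on x(t) in (6.331)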

We remark that Theorems 6.10 and 6.11 have addressed the stability of the receding horizon LQ control (6.331). The receding horizon LQ control for the short horizon cost with a single integral terminal term in this subsection can be found similarly in [11, 16].

2. Receding Horizon LQ Control for Short Horizon Costs with Double Integral Terms

As in the previous subsection, we first obtain the fixed horizon LQ control. When t_f < t_0 + h, the cost (6.144) with the terminal weighting (6.145), where Q > 0, R > 0, and F_i > 0, can be written as

J(t_0, t_f, u) = J̄(t_0, t_f, u) + ∫_{t_f-h}^{t_0} x^T(α) F_2 x(α) dα + ∫_{t_f-h}^{t_0} ∫_{β}^{t_0} { [ ẋ(τ) - A x(τ) ]^T F_3 [ ẋ(τ) - A x(τ) ] + x^T(τ) F_4 x(τ) } dτ dβ,   (6.332)

where

J̄(t_0, t_f, u) = ∫_{t_0}^{t_f} { x^T(τ) Q̄(τ) x(τ) + u^T(τ) R u(τ) + f(τ) [ ẋ(τ) - A x(τ) ]^T F_3 [ ẋ(τ) - A x(τ) ] } dτ + x^T(t_f) F_1 x(t_f),   (6.333)

with Q̄(τ) = Q + F_2 + f(τ) F_4, R̄(τ) = R + f(τ) B^T F_3 B, and f(τ) = τ - t_f + h.

The second and third terms in (6.332) are associated with the past trajectories {x(τ), τ ∈ [t_f-h, t_0]} and {ẋ(τ), τ ∈ [t_f-h, t_0]}, which are given at the current time t_0, and they are therefore removed from the cost when obtaining the control. Furthermore, since {x(τ-h), τ ∈ [t_0, t_f]} is also given, we can use Pontryagin's minimum principle for ordinary systems. To this end, we choose the Hamiltonian H(x(τ), u(τ), p(τ)) as

H(x(τ), u(τ), p(τ)) = (1/2) { x^T(τ) Q̄(τ) x(τ) + u^T(τ) R u(τ) + f(τ) [ ẋ(τ) - A x(τ) ]^T F_3 [ ẋ(τ) - A x(τ) ] } + p^T(τ) { A x(τ) + A_1 x(τ-h) + B u(τ) },   (6.334)

which transforms the time-delay system (6.143) into

ẋ(τ) = A x(τ) + A_1 x(τ-h) + B u(τ),   (6.335)
ṗ(τ) = -Q̄(τ) x(τ) - A^T p(τ),   (6.336)
p(t_f) = F_1 x(t_f),   (6.337)
0 = R̄(τ) u(τ) + f(τ) B^T F_3 A_1 x(τ-h) + B^T p(τ),   (6.338)

where t_f - h ≤ t_0 ≤ τ ≤ t_f. From (6.338) the optimal control is represented by

u(τ) = -R̄^{-1}(τ) B^T [ p(τ) + f(τ) F_3 A_1 x(τ-h) ].   (6.339)

Substituting (6.339) into (6.335) yields

ẋ(τ) = A x(τ) + Ā_1(τ) x(τ-h) - B R̄^{-1}(τ) B^T p(τ),

where Ā_1(τ) = [ I - f(τ) B R̄^{-1}(τ) B^T F_3 ] A_1. Therefore, we have a set of 2n linear differential equations

[ ẋ(τ) ; ṗ(τ) ] = H(τ) [ x(τ) ; p(τ) ] + [ Ā_1(τ) ; 0 ] x(τ-h),   (6.340)

where x(τ-h) is a known term for t_0 ≤ τ ≤ t_f because t_f - h ≤ t_0, and

H(τ) ≜ [ A   -B R̄^{-1}(τ) B^T ;  -Q̄(τ)   -A^T ].

Therefore, using linear system theory, the solution of (6.340) is given by

[ x(τ) ; p(τ) ] = Φ(τ, t_0) [ x(t_0) ; p(t_0) ] + ∫_{t_0}^{τ} Φ(τ, s) [ Ā_1(s) ; 0 ] x(s-h) ds
= Φ(τ, t_0) [ x(t_0) ; p(t_0) ] + ∫_{-h}^{τ-t_0-h} Φ(τ, s+t_0+h) [ Ā_1(s+t_0+h) ; 0 ] x(t_0+s) ds,

where Φ(·, ·) is the transition matrix associated with H(·), satisfying

d/dt Φ(t, s) = H(t) Φ(t, s),  Φ(t, t) = I,  Φ(t, s) = Φ(t, r) Φ(r, s),  r ∈ (s, t).

Let us partition Φ(t, s) into four blocks Φ_ij(t, s). Then we obtain

[ x(t_f) ; p(t_f) ] = [ Φ_11(t_f, τ)   Φ_12(t_f, τ) ;  Φ_21(t_f, τ)   Φ_22(t_f, τ) ] [ x(τ) ; p(τ) ] + ∫_{-h}^{t_f-τ-h} [ Φ_11(t_f, τ+s+h) Ā_1(τ+s+h) x(τ+s) ;  Φ_21(t_f, τ+s+h) Ā_1(τ+s+h) x(τ+s) ] ds.   (6.341)

From the boundary condition (6.337) and (6.341), we obtain

p(τ) = W_1(τ) x(τ) + ∫_{-h}^{t_f-τ-h} W_2(τ, s) x(τ+s) ds,   (6.342)

where

W_1(τ) = [ Φ_22(t_f, τ) - F_1 Φ_12(t_f, τ) ]^{-1} [ F_1 Φ_11(t_f, τ) - Φ_21(t_f, τ) ],
W_2(τ, s) = [ Φ_22(t_f, τ) - F_1 Φ_12(t_f, τ) ]^{-1} [ F_1 Φ_11(t_f, τ+s+h) - Φ_21(t_f, τ+s+h) ] Ā_1(τ+s+h)
= [ Φ_22(t_f, τ) - F_1 Φ_12(t_f, τ) ]^{-1} [ F_1 Φ_11(t_f, τ+s+h) - Φ_21(t_f, τ+s+h) ] [ I - f(τ+s+h) B R̄^{-1}(τ+s+h) B^T F_3 ] A_1.

The optimal control is obtained by replacing τ with t:

u^*(t) = -R̄^{-1}(t) B^T [ W_1(t) x(t) + ∫_{-h}^{t_f-t-h} W_2(t, s) x(t+s) ds + f(t) F_3 A_1 x(t-h) ],   (6.343)

which is identical to the control (6.174) in Theorem 6.9. However, the control (6.343) is in closed form, while the control (6.174) in Theorem 6.9 is obtained from partial differential Riccati equations.

Put t_0 = t and t_f = t+T in the cost (6.333), where T ≤ h. Note that W_1(t) and W_2(t, s) then do not depend on t; we therefore denote them by W_1(0) and W_2(0, s), respectively. The receding short horizon LQ control can then be written in the simpler form

u(t) = u^*(t|t) = -[ R + (h-T) B^T F_3 B ]^{-1} B^T [ W_1(0) x(t) + ∫_{-h}^{T-h} W_2(0, s) x(t+s) ds + (h-T) F_3 A_1 x(t-h) ],   (6.344)

where

W_1(0) = [ Φ_22(t+T, t) - F_1 Φ_12(t+T, t) ]^{-1} [ F_1 Φ_11(t+T, t) - Φ_21(t+T, t) ],   (6.345)
W_2(0, s) = [ Φ_22(t+T, t) - F_1 Φ_12(t+T, t) ]^{-1} [ F_1 Φ_11(t+T, t+s+h) - Φ_21(t+T, t+s+h) ] [ I - f(t+s+h) B R̄^{-1}(t+s+h) B^T F_3 ] A_1.   (6.346)

Since f(t+s+h) = s - T + 2h, R̄(t+s+h) = R + (s - T + 2h) B^T F_3 B, and Φ(t+T, t+s+h) is a constant matrix for each s, W_1(0) and W_2(0, s) depend on s but not on t. Furthermore, when T = h, the receding horizon control simplifies to

u(t) = u^*(t|t) = -R^{-1} B^T [ W_1(0) x(t) + ∫_{-h}^0 W_2(0, s) x(t+s) ds ].

We remark that Theorems 6.12 and 6.13 have addressed the stability of the receding horizon control (6.344). The receding horizon LQ controls for the short horizon cost with a double integral terminal term in this subsection are newly obtained here.
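Unlike the single integral case, H(·) here is time varying, so Φ(t+T, t) in (6.345)-(6.346) must be obtained by numerically integrating dΦ(τ, t)/dτ = H(τ) Φ(τ, t) from Φ(t, t) = I. A minimal sketch with hypothetical data (SciPy assumed):

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical data with T <= h; for illustration only.
n = 2
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.array([[1.0]])
F2, F3, F4 = np.eye(n), np.eye(n), np.eye(n)
h, T, t0 = 1.0, 0.8, 0.0

def H_of(tau):
    f = tau - (t0 + T) + h                    # f(tau) = tau - t_f + h
    Qbar = Q + F2 + f * F4
    Rbar = R + f * B.T @ F3 @ B
    return np.block([[A, -B @ np.linalg.solve(Rbar, B.T)],
                     [-Qbar, -A.T]])

def rhs(tau, phi):
    return (H_of(tau) @ phi.reshape(2 * n, 2 * n)).ravel()

# Phi(t0 + T, t0): integrate forward from the identity matrix.
sol = solve_ivp(rhs, [t0, t0 + T], np.eye(2 * n).ravel(), rtol=1e-9)
Phi = sol.y[:, -1].reshape(2 * n, 2 * n)
Phi12, Phi22 = Phi[:n, n:], Phi[n:, n:]       # blocks used in (6.345)

Values Φ(t+T, t+s+h) needed in (6.346) can be recovered from the same integration via the semigroup property Φ(t+T, τ) = Φ(t+T, t) Φ(τ, t)^{-1}, or by storing intermediate solution values.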

References

1. Aggarwal JK (1970) Computation of optimal control for time-delay systems. IEEE Trans. Autom. Control 15(6):683-685
2. Athans MA (1971) Special issue on the LQG problems. IEEE Trans. Autom. Control 16(6):527-869
3. Basin M, Rodriguez-Gonzalez J (2006) Optimal control for linear systems with multiple time delays in control input. IEEE Trans. Autom. Control 51(1):91-97
4. Carlson D, Haurie AB, Leizarowitz A (1991) Infinite Horizon Optimal Control: Deterministic and Stochastic Systems. Springer, Berlin
5. Delfour MC, McCalla C, Mitter SK (1975) Stability and the infinite-time quadratic cost for linear hereditary differential systems. SIAM J. Control 13(1):48-88
6. Eller DH, Aggarwal JK, Banks HT (1969) Optimal control of linear time-delay systems. IEEE Trans. Autom. Control 14(6):678-687
7. Jeong SC, Park P (2003) Constrained MPC for uncertain time-delayed systems. ICASE 2003(10):1905-1910
8. Jeong SC, Park P (2005) Constrained MPC algorithm for uncertain time-varying systems with state-delay. IEEE Trans. Autom. Control 50(2):257-263
9. Koivo HN, Lee EB (1972) Controller synthesis for linear systems with retarded state and control variables. Automatica 8(2):203-208
10. Kwon WH, Han S (2006) Receding Horizon Control: Model Predictive Control for State Models. Springer, Berlin
11. Kwon WH, Han S, Lee YS (2000) Receding horizon controls for time-delay systems. IFAC Workshop on Linear Time-Delay Systems 2000(9):273-278
12. Kwon WH, Kang JW, Lee YS, Moon YS (2003) A simple receding horizon control for state delayed systems and its stability criterion. J. Process Control 13(6):539-551
13. Kwon WH, Kim KB (2000) On stabilizing receding horizon control for linear continuous time-invariant systems. IEEE Trans. Autom. Control 45(7):1329-1334
14. Kwon WH, Lee YS, Han S (2001) Receding horizon predictive control for nonlinear time-delay systems. SICE/ICASE Joint Workshop, pp 61-66
15. Kwon WH, Lee YS, Han S (2001) Receding horizon predictive control for nonlinear time-delay systems with and without input constraints. In: 6th IFAC symposium on dynamics and control of process systems, pp 277-282
16. Kwon WH, Lee YS, Han S (2004) General receding horizon control for linear time-delay systems. Automatica 40(9):1603-1611
17. Kwon WH, Pearson AE (1977) A modified quadratic cost problem and feedback stabilization of a linear system. IEEE Trans. Autom. Control 22(5):838-842
18. Kwon WH, Pearson AE (1980) Feedback stabilization of linear systems with delayed control. IEEE Trans. Autom. Control 25(2):266-269
19. Kwong RH (1980) A stability theory for the linear-quadratic-Gaussian problem for systems with delays in the state, control, and observations. SIAM J. Control Optim. 18(1):49-75
20. Lee YS, Han S (2015) An improved receding horizon control for time-delay systems. J. Optim. Theory Appl. 165(2):627-638
21. Lewis FL, Syrmos VL (1995) Optimal Control. Wiley, New York
22. Park P, Lee SY, Park J, Kwon WH (to appear) Receding horizon LQ control with delay-dependent cost monotonicity for state delayed systems
23. Park JH, Yoo HW, Han S, Kwon WH (2008) Receding horizon controls for input-delayed systems. IEEE Trans. Autom. Control 53(7):1746-1752
24. Ross DW, Flugge-Lotz I (1969) An optimal control problem for systems with differential-difference equation dynamics. SIAM J. Control 7(4):609-623
25. Soliman MA, Ray WH (1972) Optimal feedback control for linear-quadratic systems having time delays. Int. J. Control 15(4):609-629
26. Uchida K, Shimemura E, Kubo T, Abe N (1988) The linear-quadratic optimal control approach to feedback control design for systems with delay. Automatica 24(6):773-780
27. Vinter RB, Kwong RH (1981) The infinite time quadratic control problem for linear systems with state and control delays: an evolution equation approach. SIAM J. Control Optim. 19(1):139-153
28. Yoo HW, Lee YS, Han S (2012) Constrained receding horizon controls for nonlinear time-delay systems. Nonlinear Dyn. 69(1):149-158
29. Zribi M, Mahmoud MS (1999) H∞-control design for systems with multiple delays. Comput. Electr. Eng. 25:451-475

Chapter 7

LQG Optimal Controls

7.1 Introduction

In this chapter, LQG optimal controls for input and state delayed systems are dealt with. While the LQ optimal controls in Chap. 6 require state feedback, the LQG controls in this chapter use output feedback when states are not available: state observers or filtered estimates are obtained from inputs and outputs and then used in the LQG controls. LQG optimal controls for input delayed systems are introduced first, followed by LQG optimal controls for state delayed systems, since the former controls are easier than the latter ones.

First, finite horizon LQG controls are dealt with, which are mathematically fundamental results. Since they cannot be used as feedback controls due to the inherent requirement of infinite horizons associated with stability properties, infinite horizon LQG controls are obtained from finite horizon LQG controls and discussed together with their stability properties and some limitations. Then, for general stabilizing feedback controls, receding horizon LQG controls, or model predictive LQG controls, are obtained from finite horizon LQG controls by the receding horizon concept. Terminal conditions in receding horizon controls become very important to guarantee the stability of the closed-loop system, and they are investigated in detail. Advantages of the receding horizon control are discussed in some detail in Chap. 1.

This chapter is outlined as follows. In Sect. 7.2, two different finite horizon LQG controls for input delayed systems are obtained, one for a predictive LQG cost containing a state predictor and the other for a standard LQG cost containing a state. They are obtained for free terminal states. State observers or filtered estimates are obtained from inputs and outputs for these two finite horizon LQG controls. From these finite horizon LQG controls, infinite horizon LQG controls are sought and discussed with their stability properties and some limitations. In Sect. 7.3, receding horizon LQG controls for input delayed systems are defined and obtained from the two different finite horizon LQG controls for input delayed systems in Sect. 7.2. Appropriate state observers or filtered estimates, such as standard Kalman filtered estimates and frozen-gain Kalman filtered estimates, are discussed, which are used for the LQG controls. Conditions are investigated under which the receding horizon LQG controls asymptotically stabilize the closed-loop systems. In Sect. 7.4, finite horizon LQG controls for state delayed systems are obtained for two different LQG costs, one including single integral terminal terms and the other including double integral terminal terms; the solution becomes more complex as the cost becomes more complex. Advantages of using the double integral terminal terms are discussed in Chap. 6. State observers or filtered estimates are obtained from inputs and outputs to be used for the LQG controls. Then infinite horizon LQG controls are sought and discussed with their stability properties and some limitations. In Sect. 7.5, receding horizon LQG controls for state delayed systems are obtained from the finite horizon LQG controls in Sect. 7.4. Appropriate state observers or filtered estimates are discussed, which are used for the LQG controls. Cost monotonicity conditions are investigated, under which the receding horizon LQG controls asymptotically stabilize the closed-loop systems. References for the contents of this chapter are listed at the end of the chapter and cited in each subsection for more information and further reading.

7.2 Fixed Horizon LQG Controls for Input Delayed Systems

Consider a linear stochastic system with a single input delay given by

ẋ(t) = A x(t) + B u(t) + B_1 u(t-h) + w(t),   (7.1)
y(t) = C x(t) + v(t)   (7.2)

with the initial conditions x(t_0) = x_0 and u(t_0+θ) = φ(θ) for θ ∈ [-h, 0) such that

E { [ x(t_0) - Ex(t_0) ; u(t_0+r) - Eu(t_0+r) ] [ x(t_0) - Ex(t_0) ; u(t_0+s) - Eu(t_0+s) ]^T } = [ Π   Π_xu(s) ;  Π_xu^T(r)   Π_uu(r, s) ]   (7.3)

for r ∈ [-h, 0] and s ∈ [-h, 0], where x(t) ∈ R^n is the state, u(t) ∈ R^m is the input, y(t) ∈ R^p is the output, w(t) ∈ R^n is the zero-mean white Gaussian input noise, v(t) ∈ R^p is the zero-mean white Gaussian measurement noise, and h denotes the constant delay. Here, w(t) and v(t) are independent of the initial state and the initial input and satisfy, for all t ≥ t_0 and s ≥ t_0,

E { [ w(s) ; v(s) ] [ w(t) ; v(t) ]^T } = [ Q_w δ(t-s)   0 ;  0   R_v δ(t-s) ]   (7.4)

with Q_w ≥ 0 and R_v > 0.

The state predictor z(t) = x̂(t+h) is defined as the state x(t+h) at time t+h under the constraints u_{t+h} ≡ 0 and w_{t+h} ≡ 0:

z(t) = x̂(t+h) = x(t+h; u_{t+h} ≡ 0, w_{t+h} ≡ 0) = e^{Ah} x(t) + ∫_{t-h}^t e^{A(t-τ)} B_1 u(τ) dτ,   (7.5)

where x(t+h) is written as

x(t+h) = x̂(t+h) + e^{Ah} ∫_t^{t+h} e^{A(t-τ)} B u(τ) dτ + e^{Ah} ∫_t^{t+h} e^{A(t-τ)} w(τ) dτ.   (7.6)

It is noted that the above state predictor (7.5) for the system (7.1)-(7.2) is similar to the state predictor (3.9) for the system (3.8). From (7.5), we have the relations

ż(t) = e^{Ah} ẋ(t) + B_1 u(t) - e^{Ah} B_1 u(t-h) + A ∫_{t-h}^t e^{A(t-τ)} B_1 u(τ) dτ,
y(t) = C e^{-Ah} [ z(t) - ∫_{t-h}^t e^{A(t-τ)} B_1 u(τ) dτ ] + v(t),

which produce a delay-free linear stochastic system given by

ż(t) = A z(t) + B̄ u(t) + Ḡ w(t),   (7.7)
ȳ(t) = C̄ z(t) + v(t),   (7.8)

where B̄, Ḡ, C̄, and ȳ(t) denote

B̄ = e^{Ah} B + B_1,  Ḡ = e^{Ah},  C̄ = C e^{-Ah},  ȳ(t) = y(t) + C ∫_{t-h}^t e^{A(t-τ-h)} B_1 u(τ) dτ.

It is noted that the state predictor x̂(t+h) for input delayed systems in this book and the state estimate x̂(t) in this chapter use the same notation, since there is no confusion in context.
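The matrices of the reduced system (7.7)-(7.8) involve only matrix exponentials, so the reduction is straightforward to set up numerically. A minimal sketch with hypothetical matrices, for illustration only:

import numpy as np
from scipy.linalg import expm

# Hypothetical input delayed system data.
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B  = np.array([[0.0], [1.0]])
B1 = np.array([[0.0], [0.5]])
C  = np.array([[1.0, 0.0]])
h  = 0.3

eAh  = expm(A * h)
Bbar = eAh @ B + B1          # \bar B = e^{Ah} B + B_1
Gbar = eAh                   # \bar G = e^{Ah}
Cbar = C @ expm(-A * h)      # \bar C = C e^{-Ah}
# ybar(t) = y(t) + C * int_{t-h}^{t} e^{A(t - tau - h)} B1 u(tau) dtau,
# evaluated at run time by quadrature over the stored input history.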

7.2.1 Fixed Horizon LQG Control for Predictive Costs

Consider a finite horizon LQG cost for the input delayed stochastic system (7.1)-(7.2) given by

J_E(t_0, t_f) = E { ∫_{t_0}^{t_f} [ x̂^T(s+h) Q x̂(s+h) + u^T(s) R u(s) ] ds + x̂^T(t_f+h) F x̂(t_f+h) }   (7.9)

with u_{t_f+h} ≡ 0, w_{t_f+h} ≡ 0, Q ≥ 0, R > 0, and F ≥ 0. Since it holds that z(t_f) = x(t_f+h; u_{t_f+h} ≡ 0, w_{t_f+h} ≡ 0), the above cost can be rewritten as the following cost for the delay-free stochastic system (7.7)-(7.8):

J_E(t_0, t_f) = E { ∫_{t_0}^{t_f} [ z^T(s) Q z(s) + u^T(s) R u(s) ] ds + z^T(t_f) F z(t_f) }   (7.10)

with Q ≥ 0, R > 0, and F ≥ 0. Since the delay-free stochastic system (7.7)-(7.8) is causal and the resulting closed-loop system must also be causal, the control u(s) at time s should be a function of the current and past outputs {ȳ(α), α ∈ [t_0, s]}. Since we cannot use the state z(s) for the control u(s) = K z(s), we introduce an intermediate quantity ẑ(s), an estimate of z(s), which is also a function of the current and past outputs {ȳ(α), α ∈ [t_0, s]}. Based on {ȳ(α), α ∈ [t_0, s]}, ẑ(s) is designed to minimize the covariance E z̃(s) z̃^T(s) of the estimation error z̃(s) = z(s) - ẑ(s); this is known to imply orthogonality between the estimation error and the current and past outputs, E z̃(s) ȳ^T(α) = 0 for all α ∈ [t_0, s], and thus between the estimate ẑ(s) and the estimation error z̃(s), E ẑ(s) z̃^T(s) = 0 [7]. In this case, the cost J_E(t_0, t_f) in (7.10) can be rewritten as

J_E(t_0, t_f) = J_{E,1}(t_0, t_f) + J_{E,2}(t_0, t_f),   (7.11)

where J_{E,1}(t_0, t_f) and J_{E,2}(t_0, t_f) denote

J_{E,1}(t_0, t_f) = E { ∫_{t_0}^{t_f} [ ẑ^T(s) Q ẑ(s) + u^T(s) R u(s) ] ds + ẑ^T(t_f) F ẑ(t_f) },   (7.12)
J_{E,2}(t_0, t_f) = E { ∫_{t_0}^{t_f} z̃^T(s) Q z̃(s) ds + z̃^T(t_f) F z̃(t_f) }.   (7.13)

Estimation

The intermediate quantity ẑ(s) mentioned above can be constructed with the well-known Kalman filtered estimate: for t ∈ [t_0, s],

d/dt ẑ(t) = A ẑ(t) + B̄ u(t) + X(t) C̄^T R_v^{-1} [ ȳ(t) - C̄ ẑ(t) ]   (7.14)

with ẑ(t_0) = E z(t_0), where the covariance of the estimation error, X(t) = E{z(t) - ẑ(t)}{z(t) - ẑ(t)}^T, satisfies

Ẋ(t) = A X(t) + X(t) A^T + Ḡ Q_w Ḡ^T - X(t) C̄^T R_v^{-1} C̄ X(t)   (7.15)

with the initial value obtained from (7.3) and (7.5):

X(t_0) = E{z(t_0) - ẑ(t_0)}{z(t_0) - ẑ(t_0)}^T = E{z(t_0) - Ez(t_0)}{z(t_0) - Ez(t_0)}^T
= e^{Ah} Π e^{A^T h} + e^{Ah} ∫_{-h}^0 Π_xu(s) B_1^T e^{-A^T s} ds + ∫_{-h}^0 e^{-As} B_1 Π_xu^T(s) ds e^{A^T h} + ∫_{-h}^0 ∫_{-h}^0 e^{-Ar} B_1 Π_uu(r, s) B_1^T e^{-A^T s} ds dr.   (7.16)

In this case, the optimal cost J_{E,2}(t_0, t_f) is given as

J_{E,2}(t_0, t_f) = Trace { ∫_{t_0}^{t_f} Q X(s) ds + F X(t_f) }.   (7.17)

Control The optimal control u ∗ minimizing the cost JE (t0 , t f ) can be found through JE,1 (t0 , t f ), rather than JE (t0 , t f ). Construct zˆ T (t)P(t)ˆz (t) for t ∈ [t0 , t f ] as follows. JE,1 (t, t f ) = Eˆz T (t)P(t)ˆz (t).

(7.18)

Let us now perform the derivative of the both sides with respect to t to obtain   d T T JE,1 (t, t f ) = −E zˆ (t)Q zˆ (t) + u (t)Ru(t) , dt   d T T T −1 ¯ ¯ ¯ Eˆz (t)P(t)ˆz (t) = 2Eˆz (t)P(t) Aˆz (t) + Bu(t) + X (t)C Rv { y¯ (t) − C zˆ (t)} dt   d T P(t) zˆ (t), + Eˆz (t) dt where, from the assumption that zˆ (t) is generated with { y¯ (α), α ∈ [t0 , t)} and is orthogonal to z(t) − zˆ (t), it holds that ¯ ¯ − zˆ (t)} + v(t)}ˆz T (t) = CEv(t)ˆ z T (t) = 0. E{ y¯ (t) − C¯ zˆ (t)}ˆz T (t) = E{C{z(t) Therefore, we finally have     ¯ ˙ z (t), −E zˆ T (t)Q zˆ (t) + u T (t)Ru(t) = 2Eˆz T (t)P(t) Aˆz (t) + Bu(t) + Eˆz T (t) P(t)ˆ

which results in the well-known LQ optimal control u ∗ (t) minimizing the LQ cost (7.12) such as, for t ∈ [t, t f ], u ∗ (t) = −R −1 B¯ T P(t)ˆz (t), ˙ − P(t) = A T P(t) + P(t)A + Q − P(t) B¯ R −1 B¯ T P(t),

(7.19) (7.20)

where P(t f ) = F. Let us define Σ(t) as the covariance of the minimum least-squares estimate, i.e. 

Σ(t) = Eˆz (t)ˆz T (t).

(7.21) 

 T If zˆ (t0 ) is chosen as Ez(t0 ), it holds that Σ(t0 ) = Eˆz (t0 )ˆz (t0 ) = Ez(t0 ) Ez(t0 ) . T

Then, the optimal cost JE,1 (t0 , t f ) is given as

270

7 LQG Optimal Controls





 T   JE,1 (t0 , t f ) = Trace P(t0 )Σ(t0 ) = Ez(t0 ) P(t0 ) Ez(t0 ) ,

(7.22)

where from (7.5)  Ez(t0 ) = e Ah Ex(t0 ) +

t0

t0 −h

e A(t0 −τ ) B1 Eu(τ )dτ .

(7.23)

From (7.17) and (7.22) , the optimal cost (7.11) is given as  T    J E (t0 , t f ) = Ez(t0 ) P(t0 ) Ez(t0 ) + Trace

tf

 Q X (s)ds + F X (t f ) .

(7.24)

t0

Remark 7.1 It is noted that there exist dual properties between P(t) and X (t) for the control and the estimation, respectively, as follows. ¯ ↔ (A T , C¯ T ) (A, B) (Q, R) ↔ (G¯ Q w G¯ T , Rv ) P(t) ↔ X (t) backwards ↔ forwards

(7.25)

For the input delayed stochastic system (7.1)–(7.2), the finite horizon LQG control for the cost (7.9) is given by u ∗ (t) = −R −1 B¯ T P(t)ˆz (t), where    t ¯ z˙ˆ (t) = Aˆz (t) + Bu(t) + X (t)C¯ T Rv−1 y(t) + C e A(t−τ −h) B1 u(τ )dτ − C¯ zˆ (t)

(7.26)

t−h

with the initial condition of zˆ (t0 ) in (7.23). Infinite Horizon LQG Control If P(t) = P T (t) in (7.20) goes to constant P = P T as t f goes to infinity, P satisfies the well-known algebraic Riccati equation 0 = A T P + P A + Q − P B¯ R −1 B¯ T P

(7.27)

and the infinite horizon LQG control is given from (7.19) by u ∗ (t) = −R −1 B¯ T P zˆ (t).

(7.28)

If X (t) = X T (t) in (7.15) goes to constant X as t f goes to infinity, X = X T satisfies the well-known algebraic Riccati equation 0 = AX + X A T + G¯ Q w G¯ T − X C¯ T Rv−1 C¯ X

(7.29)

7.2 Fixed Horizon LQG Controls for Input Delayed Systems

271

and the infinite horizon estimate zˆ (t) is given from (7.14) by   T −1 ˙zˆ (t) = Aˆz (t) + Bu(t) ¯ ¯ ¯ + X C Rv y¯ (t) − C zˆ (t) .

(7.30)

For the stability analysis, we augment the system (7.7)–(7.8), the Kalman filtered estimate (7.14), and the LQG control (7.19) into the following system, where the estimation error is denoted as z˜ (t) = z(t) − zˆ (t). d dt



    z(t) A − B¯ R −1 B¯ T P(t) B¯ R −1 B¯ T P(t) z(t) = z˜ (t) z˜ (t) 0 A − X (t)C¯ T Rv−1 C¯    ¯ w(t) G 0 + ¯ . v(t) G −X (t)C¯ T Rv−1

(7.31)

The closed-loop stochastic system involving with (7.7), (7.8), (7.14), and (7.28) is said to be asymptotically stable if it is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . Since a linear transformation of coordinates does not change the asymptotic stability, the closed-loop stochastic system is asymptotically stable if the system (7.31) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . The asymptotic stability of the closed-loop stochastic system, therefore, depends on A − B¯ R −1 B¯ T P(t) for the control subsystem and A − X (t)C¯ T Rv−1 C¯ for the estimation error subsystem. The asymptotic stability of the control subsystem was explained in Sect. 6.2.1. It ¯ is stabilizable and (A, Q T /2 ) is detectable there exists the was stated that if (A, B) positive definite matrix P in (7.27) and (A − B¯ R −1 B¯ T P) is a stable matrix. From the well-known dual properties between controls and estimates for ordinary systems ¯ is detectable and (A, G¯ Q 1/2 like in (7.25), if (A, C) w ) is stabilizable, then there exists ¯ is a stable matrix. the positive definite matrix X in (7.29) and (A − X C¯ T Rv−1 C) Therefore, the existence of the positive definite solutions P and X to the two algebraic Riccati equations (7.27) and (7.29) ensures the asymptotic stability of the closed-loop stochastic system. Finite horizon LQG controls for input delayed stochastic systems with predictive costs in this subsection follow similarly from several well-known literatures including [2, 3, 8, 12] for ordinary systems once they are transformed to delay-free systems.

7.2.2 Fixed Horizon LQG Control for Standard Costs For the input delayed stochastic system (7.1)–(7.2), a finite horizon LQG cost is chosen as  t f    x T (s)Qx(s) + u T (s)Ru(s) ds + VF (x(t f ), u t f ) (7.32) JE (t0 , t f ) = E t0

272

7 LQG Optimal Controls

with Q ≥ 0 and R > 0, where the terminal weighting functional VF (x(t f ), u t f ) denotes  tf u T (s)F2 u(s)ds (7.33) VF (x(t f ), u t f ) = x T (t f )F1 x(t f ) + t f −h

with F1 ≥ 0 and F2 ≥ 0. As mentioned earlier, the stochastic system (7.1)–(7.2) is causal and the resulting closed-loop system must be also causal. Therefore, the control u(s) at time s should be a function of current and past output {y(α), α ∈ [t0 , s]}. Since we cannot use the state x(s) for the control u(s) in (6.68), we introduce an intermediate quantity x(s) ˆ for an estimate of x(s), which be also a function of current and past output {y(α), α ∈ [t0 , s]}. Based on the current and past output {y(α), α ∈ ˆ is designed to minimize the covariance of the estimation error x(s) ˜ = [t0 , s]}, x(s) x(s) − x(s) ˆ such as E x(s) ˜ x˜ T (s), which is known to satisfy the orthogonality between T (α) = 0 for the estimation error and the current and past outputs such that E x(s)y ˜ ˆ and the estimation error x(s) ˜ such all α ∈ [t0 , s] and thus between the estimate x(s) that E x(s) ˆ x˜ T (s) = 0. In this case, the cost (7.32) can be divided into two parts as follows. JE (t0 , t f ) = JE,1 (t0 , t f ) + JE,2 (t0 , t f ),

(7.34)

where JE,1 (t0 , t f ) and JE,2 (t0 , t f ) denote for x(t) ˜ = x(t) − x(t) ˆ  J E,1 (t0 , t f ) = E J E,2 (t0 , t f ) = E

tf

t0  tf t0



  xˆ T (s)Q x(s) ˆ + u T (s)Ru(s) ds + VF (x(t ˆ f ), u t f ) ,

 x˜ T (s)Q x(s)ds ˜ + VF (x(t ˜ f ), u t f ) .

(7.35) (7.36)

Estimation As in the previous subsection, the estimate x(s) ˆ is the well-known Kalman filtered estimate such as, for t ∈ [t0 , s],   ˙ˆ = A x(t) ˆ (7.37) x(t) ˆ + Bu(t) + B1 u(t − h) + X (t)C T Rv−1 y(t) − C x(t) with x(t ˆ 0 ) = Ex(t0 ), where the covariance of the estimation error X (t) = E{x(t) − T is obtained from the differential Riccati equation x(t)}{x(t) ˆ − x(t)} ˆ X˙ (t) = AX (t) + X (t)A T + Q w − X (t)C T Rv−1 C X (t)

(7.38)

ˆ in (7.37) is straightforward to obtain as in ordinary with X (t0 ) = Π . The estimate x(t) systems since the inputs are known.

7.2 Fixed Horizon LQG Controls for Input Delayed Systems

273

Control The optimal control u ∗ (s) is designed to minimize the cost JE,1 (t0 , t f ) and the estimate x(s) ˆ is designed to minimize the estimation error covariance E(x(s) − T involving JE,2 (t0 , t f ). x(s))(x(s) ˆ − x(s)) ˆ Similar to the finite horizon LQ control in Theorem 6.1, the optimal control u ∗ (s) for the finite horizon LQG cost JE,1 (t0 , t f ) can be designed as shown in the following theorem. Theorem 7.1 For the input delayed stochastic system (7.1)–(7.2), the optimal finite horizon LQG control u ∗ for the cost (7.32) is given as follows. For t ∈ (t f − h, t f ],   ˆ + u ∗2 (t) = −[R + F2 ]−1 B T W1 (t)x(t)

t f −t−h −h

 W2 (t, s)u ∗ (t + s)ds , (7.39)

where x(t) ˆ is given by (7.37)–(7.38) and W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = T W3 (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.31)–(6.33) with the boundary conditions (6.34)–(3.36). For t ∈ [t0 , t f − h],   u ∗1 (t) = −R −1 [B T P1 (t) + P2T (t, 0)]x(t) ˆ +

0

−h

 [B T P2 (t, s) + P3 (t, 0, s)]u ∗ (t + s)ds ,

(7.40)

where x(t) ˆ is given by (7.37)–(7.38) and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = T P3 (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.38)–(6.40) with the boundary conditions (6.41)–(6.45). Proof For t ∈ (t f − h, t f ], we choose a continuous functional as in (6.46), where x(·) is replaced with x(·), ˆ such as 



ˆ + 2 xˆ (t) V2 (t) = E xˆ (t)W1 (t)x(t) T

 +

t f −t−h

−h



T

t f −t−h −h

t f −t−h

−h

W2 (t, s)u(t + s)ds 

u (t + r )W3 (t, r, s)u(t + s)dr ds +

t

T

 u (s)F2 u(s)ds . T

t f −h

(7.41) For t ∈ [t0 , t f − h], we choose a continuous functional as in (6.47), where x(·) is replaced with x(·), ˆ such as   ˆ + 2 xˆ T (t) V1 (t) = E xˆ T (t)P1 (t)x(t)  +

0



−h

0 −h

P2 (t, s)u(t + s)ds

 u (t + r )P3 (t, r, s)u(t + s)dr ds . T

−h

0

(7.42)

˙ˆ is as follows. The time derivative of the continuous functionals including x(t)

274

7 LQG Optimal Controls 

  xˆ T (t)W1 (t)x(t) ˆ = 2E A x(t) ˆ + Bu(t) + B1 u(t − h)    T + X (t)C T Rv−1 y(t) − C x(t) W1 (t)x(t) ˆ + E xˆ T (t)W˙ 1 (t)x(t) ˆ , ˆ   t −t−h f d T W2 (t, s)u(t + s)ds dt E 2 xˆ (t) −h  T  ˆ = 2E A x(t) ˆ + Bu(t) + B1 u(t − h) + X (t)C T Rv−1 y(t) − C x(t)   t −t−h  t −t−h f d × −hf W2 (t, s)u(t + s)ds + 2E xˆ T (t) dt W2 (t, s)u(t + s)ds , −h    d T ˆ = 2E A x(t) ˆ + Bu(t) + B1 u(t − h) dt E xˆ (t)P1 (t) x(t)  T   ˆ + X (t)C T Rv−1 y(t) − C x(t) P1 (t)x(t) ˆ + E xˆ T (t) P˙1 (t)x(t) ˆ ,   0 d T dt E 2 xˆ (t) −h P2 (t, s)u(t + s)ds  T  ˆ = 2E A x(t) ˆ + Bu(t) + B1 u(t − h) + X (t)C T Rv−1 y(t) − C x(t) 0 0

× −h P2 (t, s)u(t + s)ds + E2 xˆ T (t) −h ∂ P2∂t(t,s) u(t + s) + P2 (t, s)u(t ˙ + s) ds, d dt E

we have for s ∈ [−h, 0] 

T  T E x(t) ˆ y(t) − C x(t) ˆ = 0, Eu(t + s) y(t) − C x(t) ˆ = 0,

(7.43)

which allows us to use the same procedure as in the proof of Theorem 6.1 to obtain the optimal control u ∗2 (t) in (7.39) for t ∈ (t f − h, t f ] and the optimal control u ∗1 (t) in (7.40) for t ∈ [t0 , t f − h], where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r ) satisfy (6.31)–(6.33) for s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (6.34)–(6.36) and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) satisfy (6.38)–(6.40) for s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (6.41)–(6.45). This completes the proof.  Infinite Horizon LQG Control If X (t) in (7.38) goes to constant X as t f goes to infinity, X = X T satisfies the well-known algebraic Riccati equation 0 = AX + X A T + Q w − XC T Rv−1 C X

(7.44)

and the infinite horizon estimate x(t) ˆ is given from (7.37) by   ˙ˆ = A x(t) ˆ x(t) ˆ + Bu ∗ (t) + B1 u ∗ (t − h) + XC T Rv−1 y(t) − C x(t)

(7.45)

with x(t ˆ 0 ) = Ex(t0 ). Similarly, if P1 (t) = P1T (t), P2 (t, s) and P(t, r, s) = P3T (t, s, r ) of the coupled partial differential Riccati equations (6.38)–(6.40) in Theorem 7.1 go to the stationary quantities P1 = P1T , P2 (s) and P(r, s) = P3T (s, r ) as t f goes to infinity, then they satisfy

7.2 Fixed Horizon LQG Controls for Input Delayed Systems 0 = A T P1 + P1 A + Q − [P1 B + P2 (0)]R −1 [B T P1 + P2T (0)],



∂ ∂ + ∂r ∂s

P˙2 (s) = A T P2 (s) − [P1 B + P2 (0)]R −1 [B T P2 (s) + P3 (0, s)], P3 (r, s) = −[B T P2 (r ) + P3 (0, r )]T R −1 [B T P2 (s) + P3 (0, s)]

275

(7.46) (7.47) (7.48)

with boundary conditions P2 (−h) = P1 B1 and P3 (−h, s) = B1T P2 (s). In this case, the optimal LQG control is given by (7.40) in Theorem 7.1 as   u ∗ (t) = −R −1 [B T P1 + P2T (0)]x(t) ˆ +

0

−h

 [B T P2 (s) + P3 (0, s)]u ∗ (t + s)ds . (7.49)

For the stability analysis, we augment the systems (7.1)–(7.2), the standard Kalman filtered estimate (7.37), and the LQG control (7.40) into the following sys

ˆ tem, where the estimation error is defined as x(t) ˜ = x(t) − x(t). 

        x(t) ˙ A 0 B1 ∗ x(t) B ∗ = u (t − h) + u (t) + ˙˜ 0 A − X (t)C T Rv−1 C 0 x(t) ˜ 0 x(t)    w(t) G 0 , (7.50) + v(t) 0 −X (t)C T Rv−1    x(t)  ∗ T T −1 T −1 T u (t) = −R [B P1 (t) + P2 (t, 0)] R [B P1 (t) + P2 (t, 0)] x(t) ˜  0 [B T P2 (t, s) + P3 (t, 0, s)]u ∗ (t + s)ds. (7.51) + −h

It is noted that the estimation error subsystem in (7.50) is independent from the state ˜ is removed from the control u ∗ (t) (7.51), this control x(t) and the control u ∗ (t). If x(t) ∗ u (t) (7.51) is the same as the finite horizon LQ control u ∗ (t) (6.37) in Sect. 6.2.2. Thus the augmented closed-loop system (7.50)–(7.51) is asymptotically stable if the estimation error subsystem in the second part of (7.50) is asymptotically stable and the control subsystem in the first part of (7.50) without x(t) ˜ is asymptotically stable. The asymptotic stability of the estimation error subsystem was given in Sect. 7.2.1 and that of the control subsystem in Sect. 6.2.2. As noted in Sect. 6.2, since it is very hard to exactly find P1 = P1T , P2 (s) and P(r, s) = P3T (s, r ) in (7.46)–(7.48) that produce such a stabilizing control, the numerical procedure in Appendix E.1, which can be found in [1], is often used to obtain approximate solutions. While infinite horizon LQG controls have problems in stability proof and computation, we introduce receding horizon LQG controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

276

7 LQG Optimal Controls

7.3 Receding Horizon LQG Controls for Input Delayed Systems In this section, we assume that the input delayed system (7.1)–(7.2) is stabilizable and detectable [6, 10, 11].

7.3.1 Receding Horizon LQG Control for Predictive Costs The receding horizon control was introduced first in Sect. 6.3.1 for the deterministic system (6.1). Likewise we introduce a receding horizon LQG control at the current time t for the stochastic system (7.1)–(7.2). For the current time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. In this case, ˆ + h| t) = x(t ˆ + h). Consider a receding horizon LQG cost given x(s| ˆ t) s=t+h = x(t by 

 ˆ + h| t) + u T (s| t)Ru(s| t) ds xˆ T (s + h| t)Q x(s t  T + xˆ (t + T + h| t)F x(t ˆ + T + h| t) (7.52)

J E (t, t + T, u; x(t ˆ + h)) = E

t+T



with u t+T ≡ 0, wt+T ≡ 0, Q > 0, R > 0, and F > 0 for the system (7.1)–(7.2). The above problem can be transformed by using a state predictor to  JE (t, t + T, u; z(t)) = E

 t+T t



 z T (s| t)Qz(s| t) + u T (s| t)Ru(s| t) ds  (7.53) +z T (t + T | t)F z(t + T | t)

with Q >, R > 0, and F > 0 for the system (7.7)–(7.8) that can be expressed as d ¯ ¯ z(s| t) = Az(s| t) + Bu(s| t) + Gw(s| t), ds y¯ (s| t) = C¯ z(s| t) + v(s| t)

(7.54) (7.55)

with Q w ≥ 0 and Rv > 0 and with the initial state z(t| t) = z(t). This can be obtained from the cost (7.10) by replacing t0 and t f with t and t + T , respectively. Using the estimate zˆ (s| t) satisfying the orthogonality between the estimate zˆ (s| t) and the estimation error z˜ (s| t) = z(s| t) − zˆ (s| t), i.e. Eˆz (s| t)˜z T (s| t) = 0, the cost JE (t, t + T ) can be rewritten as follows. JE (t, t + T, u; z(t)) = JE,1 (t, t + T, u; zˆ (t)) + JE,2 (t, t + T, u; z˜ (t)),

7.3 Receding Horizon LQG Controls for Input Delayed Systems

277

where JE,1 (t, t + T, u; zˆ (t)) and JE,2 (t, t + T, u; z˜ (t)) denote JE,1 (t, t + T, u; zˆ (t))   t+T     T T T zˆ (s| t)Q zˆ (s| t) + u (s| t)Ru(s| t) ds + zˆ (t + T | t)F zˆ (t + T | t) , =E t





t+T

JE,2 (t, t + T, u; z˜ (t)) = E

(7.56)  z˜ T (s| t)Q z˜ (s| t)ds + z˜ T (t + T | t)F z˜ (t + T | t)

t

(7.57) for the system (7.54)–(7.55). It is noted that the input u(s| t) should be a function of past outputs { y¯ (α), α ≤ s} due to the causality, which in fact imposes the estimate zˆ (s| t) should be, too. The optimal solution u ∗ for the cost JE (t, t + T, u; z(t)) can be found through JE,1 (t, t + T, u; zˆ (t)), rather than JE (t, t + T, u; z˜ (t)), as follows. u ∗ (s| t) = −R −1 B¯ T P(s| t)ˆz (s| t),

(7.58)

where P(s| t) = P T (s| t) satisfies the differential Riccati equation −

d P(s| t) = A T P(s| t) + P(s| t)A + Q − P(s| t) B¯ R −1 B¯ T P(s| t) (7.59) ds

with P(t + T | t) = F. The initial state zˆ (t| t) is given as the estimate zˆ (t). The receding horizon LQG control employs the current input only, i.e. u(t) = u ∗ (t| t) = −R −1 B¯ T P(t| t)ˆz (t| t) = −R −1 B¯ T P(t| t)ˆz (t).

(7.60)

As in (6.76) and (6.77), due to the shift invariance, we can express (7.60) as u(t) = u ∗ (t| t) = −R −1 B¯ T P(0)ˆz (t),

(7.61)

where P(τ ) = P T (τ ) satisfies ˙ ) = A T P(τ ) + P(τ )A + Q − P(τ ) B¯ R −1 B¯ T P(τ ) − P(τ

(7.62)

with P(T ) = F. On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the receding horizon LQG control u(t+ ) in (7.61) with t replaced with t+ as time proceeds. In this case, we have to obtain zˆ (t+ ) from zˆ (t) and new information on y¯ [t, t+ ]. This is a filtered estimation problem. The choice of a filtered estimate zˆ (t) determines the property of the receding horizon LQG control. We now suggest two possible choices.

278

7 LQG Optimal Controls

Choice I: Standard Kalman Filtered Estimate A simple choice of the estimate for z(t) is a Kalman filtered estimation problem which is given in (7.14)–(7.15). The estimate zˆ (t) and the estimation covariance matrix X (t) starts from t = t0 as in (7.14)–(7.15) with the initial conditions zˆ (t0 ) = Ez(t0 ) in (7.23) and X (t0 ) in (7.16). Theorem 7.2 If there exist positive definite matrices F > 0 and X > 0 such that F A + A T F + Q − F B¯ R −1 B¯ T F ≤ 0, AX + X A T + G¯ Q w G¯ T − X C¯ Rv−1 C¯ T X = 0,

(7.63) (7.64)

then the receding horizon LQG control (7.61) for the cost (7.53) with such an F, based on the Kalman filtered estimate (7.14)–(7.15), asymptotically stabilizes the system (7.7)–(7.8). Proof Let us augment the systems (7.7)–(7.8), the Kalman filtered estimate (7.14), and the control (7.61) into the following system, where z˜ (t) = z(t) − zˆ (t). 

    z˙ (t) z(t) A − B¯ R −1 B¯ T P(0) B¯ R −1 B¯ T P(0) = z˜ (t) 0 A − X (t)C¯ T Rv−1 C¯ z˙˜ (t)    w(t) G¯ 0 + ¯ . v(t) G −X (t)C¯ T Rv−1

(7.65)

Therefore, the closed-loop system is asymptotically stable if the system (7.65) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system depends on the separated conditions corresponding to A − B¯ R −1 B¯ T P(0) for the control subsystem and A − X (t)C¯ T Rv−1 C¯ for the estimation error subsystem, respectively. It is noted that the asymptotic stability of the estimation error subsystem in (7.65) was discussed first in Sect. 7.2.1. Once x(t) ˆ converges to x(t), the asymptotic stability of the closed-loop system depends on the asymptotic stability for the control subsystem. By Theorem 6.3, the condition (7.63) ensures the asymptotic stability for the control subsystem. This completes the proof.  Theorems 6.2 and 6.3 described the sufficient conditions for the asymptotic stability for the control subsystem of (7.65). It is known that X (t) converges to the positive ¯ is detectable definite solution X of the algebraic Riccati equation (7.64) if (A, C) 1/2 and (A, G¯ Q w ) is stabilizable. If we replace the Kalman filtered estimate (7.14) with the steady-state Kalman filtered estimate (7.30), we can still say the same conditions as in Theorem 7.2 for the asymptotic stability of the closed-loop system. Choice II: Frozen-Gain Kalman Filtered Estimate We mentioned that LQ controls and Kalman filtered estimates have dual properties in Remark 7.1. In the receding horizon LQG control with a Kalman filtered estimate,

7.3 Receding Horizon LQG Controls for Input Delayed Systems

279

however, the control has a fixed gain, −R −1 B¯ T P −1 (0), but the filtered estimate has a time-varying gain, X (t)C¯ T Rv−1 . Like the control gain, we froze the filter gain matrix as follows. As a dual concept to the receding horizon LQ control, we propose a filtered estimate at the current time t with the initial weighting covariance X (t − Te | t) at the past time t − Te , where Te is the horizon length for estimation, such that for τ ∈ [t − Te , t]   ¯ z˙ˆ (t) = Aˆz (t) + Bu(t) + X (t| t)C¯ T Rv−1 y¯ (t) − C¯ zˆ (t)

(7.66)

with zˆ (t0 ) = E(z(t0 )), where X (τ | t) = X T (τ | t) satisfies d X (τ | t) = AX (τ | t) + X (τ | t)A T + G¯ Q w G¯ T − X (τ | t)C¯ T Rv−1 C¯ X (τ | t) (7.67) dτ

with X (t − Te | t) = X 0 . Due to the shift invariance property of X (t), the filtered estimate (7.66) and its error covariance (7.67) can be written as   T −1 ˙zˆ (t) = Aˆz (t) + Bu(t) ¯ ¯ ¯ + X (0)C Rv y¯ (t) − C zˆ (t) , X˙ (τ ) = AX (τ ) + X (τ )A T + G¯ Q w G¯ T − X (τ )C¯ T Rv−1 C¯ X (τ )

(7.68) (7.69)

with X (−Te ) = X 0 , where X 0 is a design parameter and often chosen as a large value. It is noted that the filtered estimate (7.68)–(7.69) is just the Kalman filtered estimate with a frozen gain matrix that is associated with X (τ ) τ =0 propagated from the initial matrix X (τ ) τ =−Te = X 0 along the differential Riccati equation (7.69). Therefore, this filtered estimate will be henceforth called a frozen-gain Kalman filtered estimate. This estimate might be also called a receding horizon dual filtered estimate due to the structural duality to the receding horizon control. Based on this frozen-gain Kalman filtered estimate, we can get the following result. Theorem 7.3 If there exist matrices F > 0 and X 0 > 0 such that F A + A T F + Q − F B¯ R −1 B¯ T F < 0, AX 0 + X 0 A T + G¯ Q w G¯ T − X 0 C¯ Rv−1 C¯ T X 0 < 0,

(7.70) (7.71)

then the receding horizon LQG control (7.61) for the cost (7.53) with such an F, based on the frozen-gain Kalman filtered estimate (7.68)–(7.69), asymptotically stabilizes the system (7.7)–(7.8). Proof Let us augment the systems (7.7)–(7.8), the frozen-gain Kalman filtered estimate (7.68), and the control (7.61) into the following augmented closed-loop system

280

7 LQG Optimal Controls



    z˙ (t) A − B¯ R −1 B¯ T P(0) B¯ R −1 B¯ T P(0) z(t) = z˜ (t) 0 A − X (0)C¯ T Rv−1 C¯ z˙˜ (t)    w(t) G¯ 0 , + ¯ v(t) G −X (0)C¯ T Rv−1

(7.72)

where z˜ (t) = z(t) − zˆ (t). Therefore, the closed-loop system is asymptotically stable if the system (7.72) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system depends on the separated conditions corresponding to A − B¯ R −1 B¯ T P(0) for the control subsystem and A − X (0)C¯ T Rv−1 C¯ for the estimation error subsystem, respectively. The stability condition for the control subsystem associated with the matrix A − B¯ R −1 B¯ T P(0) was much discussed in the receding horizon LQ control with state predictors in Sect. 6.3.1. By Theorem 6.3, the condition (7.70) ensures the asymptotic stability for the control subsystem. By the duality between control and estimation, the condition (7.71) ensures the asymptotic stability for the estimation error subsystem. This completes the proof.  Theorems 6.2 and 6.3 and Corollaries 6.2 and 6.3 in Sect. 6.3.1 described the sufficient conditions for the asymptotic stability for the control subsystem in (7.72). As noted in Sect. 6.3.1, there always exists a positive definite matrix F satisfying ¯ is stabilizable and (A, Q T /2 ) is detectable. Similarly, it is noted (7.70) if (A, B) ¯ is that there always exists a positive definite matrix X o satisfying (7.71) if (A, C) 1/2 detectable and (A, G¯ Q w ) is stabilizable. It is noted that for the initial interval t ∈ [t0 , t0 + Te ] we can use the standard Kalman filtered estimate (7.14)–(7.15) instead of the frozen-gain Kalman filtered estimate (7.68)–(7.69) since the latter is not defined on t ∈ [t0 , t0 + Te ]. Receding horizon LQG controls for input delayed systems with predictive costs in Theorems 7.2 and 7.3 in this subsection follow similarly from well-known results for ordinary systems once they are transformed to delay-free systems. The stability properties for Kalman filtered estimates in Theorem 7.2 and for frozen-gain filtered estimate in Theorem 7.3 follow similarly from well-known literatures including [2, 4, 9] for ordinary systems.

7.3.2 Receding Horizon LQG Control for Standard Costs For the current time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. Consider a receding horizon LQG cost at the current time t given by JE (t, t + T ; x(t), u t )    t+T  T T x (s| t)Qx(s| t) + u (s| t)Ru(s| t) ds + VF (x(t + T | t), u t+T | t ) =E t

(7.73)

7.3 Receding Horizon LQG Controls for Input Delayed Systems

281

with Q > 0 and R > 0, where the terminal weighting functional VF (x(t + T | t), u t+T | t ) denotes  VF (x(t + T | t), u t+T | t ) = x T (t + T | t)F1 x(t + T | t) +

t+T t+T −h

u T (s)F2 u(s)ds

(7.74)

with F1 > 0 and F2 > 0. The system (7.1)–(7.2) can be expressed as d x(s| t) =Ax(s| t) + Bu(s| t) + B1 u(s − h| t) + w(s| t), ds y(s| t) = C x(s| t) + v(s| t)

(7.75) (7.76)

with Q w ≥ 0 and Rv > 0 and with the initial state x(t| t) = x(t) and the initial control u(t + θ| t) = u(t + θ) for θ ∈ [−h, 0). This can be obtained from the cost ˆ t) (7.32) by replacing t0 and t f with t and t + T , respectively. Using the estimate x(s| satisfying the orthogonality between the estimate x(s| ˆ t) and the estimation error x(s| ˜ t) = x(s| t) − x(s| ˆ t), i.e. E x(s| ˆ t)x˜ T (s| t) = 0, the cost JE (t, t + T ; x(t), u t ) can be rewritten as follows. J E (t, t + T ; x(t), u t ) = J E,1 (t, t + T ; x(t), ˆ u t ) + J E,2 (t, t + T ; x(t), ˜ u t ),

(7.77)

ˆ u t ) and JE,2 (t, t + T ; x(t), ˜ u t ) denote where JE,1 (t, t + T ; x(t), ˆ ut ) JE,1 (t, t + T ; x(t),     t+T  ˆ t) + u T (s| t)Ru(s| t) ds + VF (x(t ˆ + T | t), u t+T | t ) , =E xˆ T (s| t)Q x(s| t

 ˜ ut ) = E JE,2 (t, t + T ; x(t),

t+T

(7.78)  T x˜ (s| t)Q x(s| ˜ t)ds + VF (x(t ˜ + T | t), u t+T | t ) ,

t

(7.79) where the initial estimate x(t| ˆ t) = x(t) ˆ and the initial control u(t + θ| t) = u(t + θ) for θ ∈ [−h, 0) are given. It is noted that the input u(s| t) should be a function of past outputs {y(α), α ≤ s} due to the causality, which in fact imposes the estimate x(s| ˆ t) should be, too. The optimal solution u ∗ for the cost JE (t, t + T, u; x(t), u t ) can be found through ˆ u t ), rather than JE (t, t + T, u; x(t), ˜ u t ), as follows. It is noted JE,1 (t, t + T, u; x(t), that (7.73) and (7.74) are obtained from the finite horizon LQG cost (7.32) with the terminal weighting function (7.33) by replacing t0 and t f with t and t + T , respectively. Thus, the receding horizon LQG control can be obtained from the finite horizon LQG controls (7.39) and (7.40) by replacing t0 and t f with t and t + T , respectively. For s ∈ (t + T − h, t + T ], the optimal LQG control u ∗2 (s| t) for the cost (7.78) is given from (7.39) as

282

7 LQG Optimal Controls

   T +t−s−h u ∗2 (s| t) = −[R + F2 ]−1 B T W1 (s − t)x(s| ˆ t) + W2 (s − t, a)u ∗ (s + a| t)da , (7.80) −h

where W1 (τ ), W2 (τ , s), and W3 (τ , r, s) satisfy the coupled partial differential Riccati equations (6.108)–(6.110) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] with the boundary conditions (6.111)–(6.113). For s ∈ [t, t + T − h], the optimal LQG input u ∗1 (s| t) for the cost (7.78) is given from (7.40) as  u ∗1 (s| t) = −R −1 [B T P1 (s − t) + P2T (s − t, 0)]x(s| ˆ t) + 

0 −h

 [B T P2 (s − t, a) + P3 (s − t, 0, a)]u ∗ (s + a| t)da ,

(7.81)

where P1 (τ ), P2 (τ , s), and P3 (τ , r, s) satisfy the coupled partial differential Riccati equations (6.115)–(6.117) for τ ∈ [0, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] with the boundary conditions (6.118)–(6.122). Since the horizon length T is a design parameter, we consider the receding horizon LQ controls for 0 < T < h and for h ≤ T . Case I: 0 < T < h The receding horizon LQG control is given from (7.80) as   u(t) = u ∗2 (t| t) = −[R + F2 ]−1 B T W1 (0)x(t) ˆ +

T −h −h

 W2 (0, s)u(t + s)ds ,

(7.82)

where W1 (τ ), W2 (τ , s), and W3 (τ , r, s) satisfy the partial differential equations (6.108)–(6.110) with the boundary conditions (6.111)–(6.113) for τ ∈ [0, T ], r ∈ [−h, 0] and s ∈ [−h, 0]. Case II: h ≤ T The receding horizon LQG control is given from (7.81) as  u(t) = u ∗1 (t| t) = −R −1 [B T P1 (0) + P2T (0, 0)]x(t) ˆ  +

0 −h

 [B T P2 (0, s) + P3 (0, 0, s)]u(t + s)ds ,

(7.83)

where P1 (τ ), P2 (τ , s), and P3 (τ , r, s) satisfy the partial differential equations (6.115)–(6.117) with the boundary conditions (6.118)–(6.122) for τ ∈ [0, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] and W1 (τ ), W2 (τ , s), and W3 (τ , r, s) satisfy the partial differential equations (6.108)–(6.110) with the boundary conditions (6.111)–(6.113) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0]. It is noted that the above partial differential equations have one-point boundary conditions on a finite interval. Therefore, the corresponding solutions can be easily computed. This is different from the infinite horizon LQ control. On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) with t replaced with t+ as the time proceeds. In ˆ and new information on y[t, t+ ]. This this case, we have to obtain x(t ˆ + ) from x(t)

7.3 Receding Horizon LQG Controls for Input Delayed Systems

283

is a filtered estimation problem. Since the receding horizon LQG controls (7.82) ˆ t) = and (7.83) employ the current input u ∗2 (t| t) or u ∗1 (t| t) only, the choice of x(t| x(t) ˆ determines the property of the receding horizon LQG control. We suggest two possible choices. Choice I: Standard Kalman Filtered Estimate A simple choice of the estimate for x(t) is a Kalman filtered estimate satisfying (7.37) and (7.38). Theorem 7.4 If there exist positive definite matrices F1 > 0, F2 > 0, and X > 0 such that   Q + F1 A + A T F1 − F1 B[R + F2 ]−1 B T F1 F1 B1 ≤ 0, (7.84) B1T F1 −F2 AX + X A T + Q w − XC Rv−1 C T X = 0,

(7.85)

then the receding horizon LQG controls (7.82) and (7.83) for the cost (7.73) with such F1 and F2 , based on the Kalman filtered estimate (7.37)–(7.38), asymptotically stabilize the system (7.1)–(7.2). Proof Let us augment the systems (7.1)–(7.2), the Kalman filtered estimate (7.37), and the controls (7.82) for 0 < T < h and (7.83) for h ≤ T into the following system 

for x(t) ˜ = x(t) − x(t) ˆ 

        x(t) ˙ A 0 x(t) B ∗ B1 ∗ = + u u (t − h) (t) + ˙˜ 0 A − X (t)C T Rv−1 C x(t) ˜ 0 0 x(t)    w(t) G 0 , (7.86) + v(t) 0 −X (t)C T Rv−1    0  x(t)  u ∗ (t) = K 1 −K 1 K 2 (s)u ∗ (t + s)ds, (7.87) + x(t) ˜ −h

where K 1 and K 2 (s) denote for 0 < T < h K 1 = −[R + F2 ]−1 B T W1 (0), K 2 (s) = −[R + F2 ]−1 B T W2 (0, s) and for h ≤ T K 1 = −R −1 [B T P1 + P2T (0)], K 2 (s) = −R −1 [B T P2 (s) + P3 (0, s)]. It is noted that the estimation error subsystem in (7.86) is independent from the ˜ is removed from the control u ∗ (t) (7.87), state x(t) and the control u ∗ (t). If x(t) ∗ this control u (t) (7.87) is the same as the receding horizon LQ control u ∗ (t) in Sect. 6.3.2. Thus the augmented closed-loop system (7.86)–(7.87) is asymptotically stable if the estimation error subsystem in the second part of (7.86) is asymptotically

284

7 LQG Optimal Controls

stable and the control subsystem, which is the subsystem in the first part of (7.86) without x(t) ˜ is asymptotically stable. It is noted that the asymptotic stability of the estimation error subsystem in (7.86) was discussed first in Sect. 7.2.1. Once x(t) ˆ converges to x(t), the asymptotic stability of the closed-loop system depends on the asymptotic stability for the control subsystem. By Theorem 6.6, the existence of positive definite matrices F1 > 0 and F2 > 0 satisfying (7.84) ensures the asymptotic stability of the control subsystem. This completes the proof.  Theorems 6.5 and 6.6 and Corollaries 6.4 and 6.5 described the sufficient conditions including the condition (7.84) for the asymptotic stability for the control subsystem. If we replace the Kalman filtered estimate (7.37) with the steady-state Kalman filtered estimate (7.45), we can still say the same conditions as in Theorem 7.4 for the asymptotic stability of the closed-loop system. Choice II: Frozen-Gain Kalman Filtered Estimate We mentioned that LQ controls and Kalman filtered estimates have dual properties in Remark 7.1. In the receding horizon LQG control with a Kalman filtered estimate, however, the control in (7.82) or (7.83) has a fixed gain, associated with W1 (0), W2 (0, s), P1 (0), P2 (0, s), and P3 (0, 0, s) for s ∈ [−h, 0], but the filtered estimate has a time-varying gain, X (t)C¯ T Rv−1 . Like the control gain, we froze the filter gain matrix as follows. As a dual concept to the receding horizon LQ control, we propose a filtered estimate at the current time t with the initial weighting covariance X (t − Te | t) at the past time t − Te , where Te is the horizon length for estimation, such that for τ ∈ [t − Te , t]   ˙ˆ = A x(t) ˆ (7.88) x(t) ˆ + Bu(t) + B1 u(t − h) + X (0)C T Rv−1 y(t) − C x(t) with x(t ˆ 0 ) = Ex(t0 ), where X (τ ) = X T (τ ) satisfies the following differential Riccati equation d X (τ ) = AX (τ ) + X (τ )A T + Q w − X (τ )C T Rv−1 C X (τ ) dτ

(7.89)

with X (−Te ) = X 0 , where X 0 is a design parameter and often chosen as a large value. It is noted that the filtered estimate (7.88) is just the Kalman filtered estimate with a frozen gain matrix that is associated with X (τ ) τ =0 propagated from the initial matrix X (τ ) τ =−Te = X 0 along the differential Riccati equation (7.89). Based on this frozen-gain Kalman filtered estimate, we can get the following result.

7.3 Receding Horizon LQG Controls for Input Delayed Systems

285

Theorem 7.5 If there exist matrices F1 > 0, F2 > 0, and X 0 > 0 such that 

Q + F1 A + A T F1 − F1 B[R + F2 ]−1 B T F1 F1 B1 B1T F1 −F2

 ≤ 0,

AX 0 + X 0 A T + Q w − X 0 C Rv−1 C T X 0 ≤ 0,

(7.90) (7.91)

then the receding horizon LQG controls (7.82) and (7.83) for the cost (7.73) with such F1 and F2 , based on the frozen-gain Kalman filtered estimate (7.88)–(7.89), asymptotically stabilize the system (7.1)–(7.2). Proof Let us augment the systems (7.1)–(7.2), the frozen-gain Kalman filtered estimate (7.88), and the controls (7.82) for 0 < T < h and (7.83) for h ≤ T into the  following system for x(t) ˜ = x(t) − x(t) ˆ 

        x(t) ˙ B1 ∗ x(0) B ∗ A 0 (t) + + u u (t − h) = ˙˜ x(t) ˜ 0 0 0 A − X (0)C T Rv−1 C x(t)    w(t) G 0 , (7.92) + v(t) 0 −X (0)C T Rv−1    0  x(t)  ∗ u (t) = K 1 −K 1 K 2 (s)u ∗ (t + s)ds, (7.93) + x(t) ˜ −h

where K 1 and K 2 (s) denote for 0 < T < h K 1 = −[R + F2 ]−1 B T W1 (0), K 2 (s) = −[R + F2 ]−1 B T W2 (0, s) and for h ≤ T K 1 = −R −1 [B T P1 + P2T (0)], K 2 (s) = −R −1 [B T P2 (s) + P3 (0, s)]. It is noted that the estimation error subsystem in (7.92) is independent from the ˜ is removed from the control u ∗ (t) (7.93), this state x(t) and the control u ∗ (t). If x(t) ∗ control u (t) (7.93) is the same as the receding horizon LQ control u ∗ (t) in Sect. 6.3.2. Thus the augmented closed-loop system (7.92)–(7.93) is asymptotically stable if the estimation error subsystem in the second part of (7.92) is asymptotically stable and the control subsystem, which is the subsystem in the first part of (7.92) without x(t), ˜ is asymptotically stable. Therefore, the closed-loop system is asymptotically stable if both the asymptotic stability of the control subsystem and the asymptotic stability of the estimation error subsystem are guaranteed. By Theorem 6.6, the existence of positive definite matrices F1 > 0 and F2 > 0 satisfying (7.90) ensures the asymptotic stability of the control subsystem. By Theorem 7.3, the existence of X 0 > 0 satisfying (7.91) ensures the asymptotic stability of the estimation error subsystem. This completes the proof. 

286

7 LQG Optimal Controls

As mentioned earlier, for the initial interval t ∈ [t0 , t0 + Te ] we can use the Kalman filtered estimate (7.14)–(7.15) instead of the frozen-gain Kalman filtered estimate (7.68)–(7.69) since the latter is not defined on t ∈ [t0 , t0 + Te ]. Theorems 6.5 and 6.6 and Corollaries 6.4 and 6.5, which can be found partially in [17], described the sufficient conditions including (7.90) for the asymptotic stability for the control subsystem.

7.4 Fixed Horizon LQG Controls for State Delayed Systems Consider a linear stochastic system with a single state delay given by x(t) ˙ = Ax(t) + A1 x(t − h) + Bu(t) + w(t), y(t) = C x(t) + v(t)

(7.94) (7.95)

with the initial condition x(θ) = φ(θ), φ ∈ Cn,h for θ ∈ [t0 − h, t0 ] such that  E

x(t0 ) − Ex(t0 ) x(t0 + r ) − Ex(t0 + r )



x(t0 ) − Ex(t0 ) x(t0 + s) − Ex(t0 + s)

T

 =

Π1 0 0 Π2 δ(r − s)



(7.96)

for r ∈ [−h, 0) and s ∈ [−h, 0), where the noises w(t) and v(t) satisfy (7.4) for all t ≥ t0 and s ≥ t0 . The finite horizon LQG cost is chosen as  JE (t0 , t f ) = E

tf



  x (s)Qx(s) + u (s)Ru(s) ds + VF (xt f ) T

T

(7.97)

t0

with Q ≥ 0 and R > 0, where the terminal weighting functional VF (xt f ) denotes  VF (xt f ) = x T (t f )F1 x(t f ) + 

tf

t f −h

 x T (s)F2 x(s)ds +



˙ ) − Ax(r ) dr ds + × F3 x(r



0 −h



tf t f +s

0 −h



tf

t f +s



T x(r ˙ ) − Ax(r )

x T (r )F4 x(r )dr ds

(7.98)

with Fi ≥ 0. As mentioned earlier, the stochastic system (7.94)–(7.95) is causal and the resulting closed-loop system must be also causal. Therefore, the control u(s) at time s should be a function of current and past output {y(α), α ∈ [t0 , s]}. Since we cannot use the state {x(s + θ), θ ∈ [−h, 0]} for the control u(s) in (6.210), we introduce an intermediate quantity {x(s ˆ + θ), θ ∈ [−h, 0]} for an estimate of {x(s + θ), θ ∈ [−h, 0]}, which be also a function of current and past output {y(α), α ∈ ˆ + θ), θ ∈ [t0 , s]}. Based on the current and past output {y(α), α ∈ [t0 , s]}, {x(s [−h, 0]} is designed to minimize the covariance of the estimation error x(s ˜ + θ) = x(s + θ) − x(s ˆ + θ) such as E x(s ˜ + θ)x˜ T (s + θ), which is known to satisfy the orthogonality between the estimation error and the current and past outputs such that

7.4 Fixed Horizon LQG Controls for State Delayed Systems

287

E x(s ˜ + θ)y T (α) = 0 for all α ∈ [t0 , s] and thus between the estimate x(s ˆ + θ) and the estimation error x(s ˜ + θ) such that E x(s ˆ + θ)x˜ T (s + θ) = 0. In this case, the cost (7.97) can be divided into two parts as follows. JE (t0 , t f ) = JE,1 (t0 , t f ) + JE,2 (t0 , t f ), where JE,1 (t0 , t f )and JE,2 (t0 , t f ) denote 



J E,1 (t0 , t f ) = E 

J E,2 (t0 , t f ) = E



  ˆ + u T (s)Ru(s) ds + VF (xˆt f ) , xˆ T (s)Q x(s)

tf

t0 tf



T x(s) − x(s) ˆ

t0

   Q x(s) − x(s) ˆ ds + VF (xt f − xˆt f ) .

(7.99) (7.100)

It is noted that u(s) is a function of {y(s), s ∈ [t0 , t]} because of the causality. The optimal control u ∗ (s) is designed to minimize the cost JE,1 (t0 , t f ) in (7.99) and the estimate x(s) ˆ is designed to minimize the estimation error covariance E(x(s) − T involving JE,2 (t0 , t f ) in (7.100). x(s))(x(s) ˆ − x(s)) ˆ Estimation The estimate x(s ˆ + θ) for θ ∈ [−h, 0], which is a function of the current and past output {y(α), α ∈ [t0 , s]}, satisfies the orthogonality between the measurements ˜ + θ). There are two kinds of equa{y(α), α ∈ [t0 , s]} and the estimation error x(s tions related to the estimate and the estimation error covariance in the following result. Theorem 7.6 For the state delayed system (7.94)–(7.95), the optimal estimate x(t) ˆ for the finite horizon LQG cost (7.97) with the terminal weighting functional (7.98) is chosen as follows. For t ∈ [t0 , t0 + h], the optimal estimate is given as   d ˆ x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + U (t, 0, 0)C T Rv−1 y(t) − C x(t) dt    t ˆ ) dτ (7.101) + A1 U (τ , t − h − τ , 0)C T Rv−1 y(τ ) − C x(τ t0

with the initial condition x(t ˆ 0 + τ ) = Ex(t0 + τ ), τ ∈ [−h, 0],

(7.102)

where U (t, r, s) = U T (t, s, r ) satisfies the following coupled partial differential Riccati equations for r ∈ [t0 − t, 0] and s ∈ [t0 − t, 0] ∂ U (t, 0, 0) = AU (t, 0, 0) + U (t, 0, 0)A T + A1 Π2 A1T + Q w ∂t − U (t, 0, 0)C T Rv−1 CU (t, 0, 0),

∂ ∂ − U (t, r, 0) = U (t, r, 0)A T − U (t, r, 0)C T Rv−1 CU (t, 0, 0), ∂t ∂r

(7.103) (7.104)

288

7 LQG Optimal Controls

∂ ∂ ∂ − − ∂t ∂r ∂s

U (t, r, s) = −U (t, r, 0)C T Rv−1 CU (t, 0, s)

(7.105)

with the initial condition U (t0 , 0, 0) = Π1 .

(7.106)

For t ∈ (t0 + h, t f ], the optimal estimate is given as   d ˆ x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + X (t, 0, 0)C T Rv−1 y(t) − C x(t) dt    t ˆ ) dτ + A1 X (τ , t − h − τ , 0)C T Rv−1 y(τ ) − C x(τ max(t0 +h,t−h)

+ A1

 max(t +h,t−h) 0 t−h

  U (τ , t − h − τ , 0)C T Rv−1 y(τ ) − C x(τ ˆ ) dτ ,

(7.107)

where X (t, r, s) = X T (t, s, r ) satisfies the following coupled partial differential Riccati equations for r ∈ [−h, 0] and s ∈ [−h, 0] ∂ X (t, 0, 0) = AX (t, 0, 0) + A1 X (t, −h, 0) + Q w + X (t, 0, 0)A T ∂t + X (t, 0, −h)A1T − X (t, 0, 0)C T Rv−1 C X (t, 0, 0),

∂ ∂ − X (t, r, 0) = X (t, r, 0)A T + X (t, r, −h)A1T ∂t ∂r

∂ ∂ ∂ − − ∂t ∂r ∂s

− X (t, r, 0)C T Rv−1 C X (t, 0, 0), X (t, r, s) = −X (t, r, 0)C T Rv−1 C X (t, 0, s)

(7.108)

(7.109) (7.110)

with the boundary condition U (t0 + h, r, s) = X (t0 + h, r, s).

(7.111)

Proof The solution of (7.94)–(7.95) is given by, for t ≥ t0 ,  x(t) = Φ(t − t0 )x(t0 ) + 

t

+ t0

min(t0 ,t−h) t0 −h



Φ(t − s)Bu(s)ds +

t

Φ(t − s − h)A1 x(s)ds Φ(t − s)w(s)ds,

t0

where the state transition matrix Φ(t) is the solution of the matrix differentialdifference equation d Φ(t) = AΦ(t) + A1 Φ(t − h) (7.112) dt with the boundary conditions that Φ(0) = I and Φ(t) = 0, t < 0. Let us divide the state into two parts like

7.4 Fixed Horizon LQG Controls for State Delayed Systems

289

x(t) = x1 (t) + x2 (t), where x1 (t) and x2 (t) denote  

x1 (t) = Φ(t − t0 )x(t0 ) +

min(t0 ,t−h)

t0 −h

Φ(t − s − h)A1 x(s)ds +

 t t0

 t  x2 (t) = Φ(t − s)Bu(s)ds

Φ(t − s)w(s)ds,

t0

with the initial conditions that for s ∈ [−h, 0] x1 (t0 + s) = x(t0 + s), x2 (t0 + s) = 0. Since u(α) is given for α ∈ [t0 , t], we do not need to estimate x2 (t) from the output. Then the output y(t) can be replaced with y1 (t) defined as 

y1 (t) = y(t) − C x2 (t) = C x1 (t) + v(t). Let us consider the linear estimate of {x1 (t + s), s ∈ [max(t0 − t, −h), 0]} given {y1 (t + s), s ∈ [max(t0 − t, −h), 0]}, say xˆ1 (t, s), to minimize the least-squares error

 T  . (7.113) E x1 (t + s) − xˆ1 (t, s) x1 (t + s) − xˆ1 (t, s) The innovation e(τ ) for τ ∈ [t0 , t], defined by e(τ ) = y1 (τ ) − C xˆ1 (τ , 0),

(7.114)

satisfies, for τ1 ∈ [t0 , τ ] and for τ2 ∈ [t0 , τ ],

Ee(τ1 ) = 0, E e(τ1 )e (τ2 ) = Rv δ(τ1 − τ2 ). T

(7.115)

We remark that for τ ∈ [t0 , t] and s ∈ [−h, 0]

E {x1 (t0 + s) − Ex1 (t0 + s)}e T (τ ) = 0

(7.116)

because e(τ ) is a linear functional of 

 x(t0 ) − Ex(t0 ), u(s), v(s) s ∈ [t0 , τ ] .

Using such an innovation, we can rewrite the linear estimate xˆ1 (t, s) given {e(τ ), τ ∈ [t0 , t]}, instead of {y1 (τ ), τ ∈ [t0 , t]}, like

290

7 LQG Optimal Controls

 xˆ1 (t, s) =

t

L(t, s, τ )e(τ )dτ + Ex1 (t + s).

(7.117)

t0

From the orthogonality between the optimal estimates and the innovations, i.e. for r ∈ [to , t] 0=E =E



 x1 (t + s) − xˆ1 (t, s) e T (r )



x1 (t + s) − Ex1 (t + s) − xˆ1 (t, s) + Ex1 (t + s) e (r ) , 

T

we have the following relation that

 E {x1 (t + s) − Ex1 (t + s)}e T (r ) =

t t0



L(t + s, τ ) E e(τ )e T (r ) dτ = L(t + s, r )Rv ,

which produces the gain matrix L(t, s) like



L(t + s, r ) ≡ E {x1 (t + s) − Ex1 (t + s)}e (r ) Rv−1 , r ∈ [t0 , t]. (7.118) T

We remark that for τ ∈ [t0 , t]

d L(t, τ ) = E {x˙1 (t) − E x˙1 (t)}e T (τ ) Rv−1 dt

= E {A(x1 (t) − Ex1 (t)) + A1 (x1 (t − h) − Ex1 (t − h)) + Bw(t)}e T (τ ) Rv−1 = AL(t, τ ) + A1 L(t − h, τ ),

which results in the following time derivative of the estimate x(t, ˆ 0) in (7.117)   t d d d xˆ1 (t, 0) = L(t, τ ) e(τ )dτ + L(t, t)e(t) + Ex1 (t) dt dt t0 dt  t  t = A L(t, τ )e(τ )dτ + A1 L(t − h, τ )e(τ )dτ + L(t, t)e(t) + AEx1 (t) t0

t0

+ A1 Ex1 (t − h) = A xˆ1 (t, 0) + A1 Ex1 (t − h) + L(t, t)e(t) + A1

 t t0

L(t − h, τ )e(τ )dτ .

(7.119)

For t ∈ [t0 , t0 + h], define the cross-correlation matrix of estimation errors such as, for r ∈ [t0 − t, 0] and for s ∈ [t0 − t, 0], U (t, r, s) = E



x1 (t + r ) − xˆ1 (t, r )



x1 (t + s) − xˆ1 (t, s)

T

.

7.4 Fixed Horizon LQG Controls for State Delayed Systems

291

By the orthogonality between the innovations {x1 (t + s) − xˆ1 (t, s)} and the optimal zero-mean estimates {xˆ1 (t, r ) − Ex1 (t + r )}, this matrix can be written as

U (t, r, s) = E {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s) − Ex1 (t + s)}T 

t



t0



E {x1 (t + r ) − Ex1 (t + r )}e T (τ ) Rv−1 E e(τ ){x1 (t + s) − Ex1 (t + s)}T dτ .

Using the fact that the relation 



∂ ∂a



∂ ∂b



m(a + b) = 0 for any function m(a + b), we can get

 ∂ ∂ U (t, r, s) − ∂r − ∂s



= −E {x1 (t + r ) − Ex1 (t + r )}e T (t) Rv−1 E e(t){x1 (t + s) − Ex1 (t + s)}T ∂ ∂t

= −U (t,  r, 0)C T Rv−1 CU (t, 0, s),  ∂ ∂ ∂t − ∂r U (t, r, 0)

= U (t, r, 0)A T − E {x1 (t + r ) − Ex1 (t + r )}e T (t) Rv−1 E e(t){x1 (t) − Ex1 (t)}T = AU (t, r, 0) − U (t, r, 0)C T Rv−1 CU (t, 0, 0).

Since it holds that

T E {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)}



t−h

= E Φ(t − t0 ){x(t0 ) − Ex(t0 )} + Φ(t − a − h)A1 {x(a) − Ex(a)}da t0 −h

 t Φ(t − a)w(a)da Φ(t − t0 ){x(t0 ) − Ex(t0 )} + t0

 +

t−h



T

t

Φ(t − a)w(a)da  = E Φ(t − t0 ){x(t0 ) − Ex(t0 )}{x(t0 ) − Ex(t0 )} Φ T (t − t0 )  t  t−h Φ(t − a − h)A1 Π2 A1T Φ T (t − a − h)da + Φ(t − a)Q w Φ T (t − a)da, + t0 −h

Φ(t − a − h)A1 {x(a) − Ex(a)}da +



t0 −h

t0 T

t0

we can get the relation

∂ T E {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)} ∂t

T =AE {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)}

T + E {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)} A T

292

7 LQG Optimal Controls

+ A1 E {x1 (t − h) − Ex1 (t − h)}{x1 (t) − Ex1 (t)}T

+ E {x1 (t) − Ex1 (t)}{x1 (t − h) − Ex1 (t − h)}T A1T + Q w

=AE {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)}T

+ E {x1 (t) − Ex1 (t)}{x1 (t) − Ex1 (t)}T A T + A1 Π2 A1T + Q w ,

because E {x1 (t − h) − Ex1 (t − h)}{x1 (t) − Ex1 (t)}T = 0 for t ∈ [t0 , t0 + h]. Furthermore,

 t ∂ T −1 T E {x1 (t) − Ex1 (t)}e (τ ) Rv E e(τ ){x1 (t) − Ex1 (t)} dτ ∂t t0

 t = E {A{x1 (t) − Ex1 (t)} + A1 {x1 (t − h) − Ex1 (t − h)} + Bw(t)}e T (τ ) Rv−1 t0



 t × E e(τ ){x1 (t) − Ex1 (t)}T dτ + E {x1 (t) − Ex1 (t)}e T (τ ) Rv−1 t0

× E e(τ ){A{x1 (t) − Ex1 (t)} + A1 {x1 (t − h) − Ex1 (t − h)} + Bw(t)}T dτ



+ E {x1 (t) − Ex1 (t)}e T (t) Rv−1 E e(t){x1 (t) − Ex1 (t)}T

 t =A E {x1 (t) − Ex1 (t)}e T (τ ) Rv−1 E e(τ ){x1 (t) − Ex1 (t)}T dτ t0

 t T −1 T + E {x1 (t) − Ex1 (t)}e (τ ) Rv E e(τ ){x1 (t) − Ex1 (t)} dτ A T t0

+ U (t, 0, 0)C T Rv−1 CU (t, 0, 0), which provides ∂ U (t, 0, 0) = AU (t, 0, 0) + U (t, 0, 0)A T + A1 Π2 A1T + Q w − U (t, 0, 0)C T Rv−1 CU (t, 0, 0). ∂t

Let us now define x(t) ˆ as x(t) ˆ = xˆ1 (t, 0) + x2 (t).

(7.120)

ˆ and thus from (7.119) we have Then e(t) = y1 (t) − C xˆ1 (t, 0) = y(t) − C x(t)

7.4 Fixed Horizon LQG Controls for State Delayed Systems

293

      d xˆ1 (t, 0) + x2 (t) = A xˆ1 (t, 0) + x2 (t) + A1 Ex1 (t − h) + x2 (t − h) dt  t L(t − h, τ )e(τ )dτ , + Bu(t) + L(t, t)e(t) + A1 t0

where Ex1 (t − h) = xˆ1 (t − h, 0) for t ∈ [t0 , t0 + h], which gives ˙ˆ = A x(t) x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + L(t, t)e(t) + A1



t

L(t − h, τ )e(τ )dτ .

t0

Since L(t, t) = U (t, 0, 0) and L(t − h, τ ) = U (τ , t − h − τ , 0), we obtain the estimate (7.101) with the initial condition (7.102), where U (t, r, s) = U T (t, s, r ) satisfies the coupled differential Riccati equations (7.103)–(7.105), with the initial condition (7.106). For t ∈ (t0 + h, t f ], define the cross-correlation matrix of estimation errors such as, for r ∈ [−h, 0] and for s ∈ [−h, 0], X (t, r, s) = E



x1 (t + r ) − xˆ1 (t, r )



x1 (t + s) − xˆ1 (t, s)

T

.

By the orthogonality between the innovations {x1 (t + s) − xˆ1 (t, s)} and the optimal zero-mean estimates {xˆ1 (t, r ) − Ex1 (t + r )}, this matrix can be written as

X (t, r, s) = E {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s) − Ex1 (t + s)}T  −

t t0





E {x1 (t + r ) − Ex1 (t + r )}e T (τ ) Rv−1 E e(τ ){x1 (t + s) − Ex1 (t + s)}T dτ .

Therefore, we can get the relation

∂ ∂ ∂ − − X (t, r, s) ∂t ∂r ∂s



= −E {x1 (t + r ) − Ex1 (t + r )}e T (t) Rv−1 E e(t){x1 (t + s) − Ex1 (t + s)}T

= −X (t, r, 0)C T Rv−1 C X (t, 0, s),

∂ ∂ − X (t, r, 0) ∂t ∂r = X (t, r, 0)A T + X (t, r, −h)A1T



− E {x1 (t + r ) − Ex1 (t + r )}e T (t) Rv−1 E e(t){x1 (t) − Ex1 (t)}T = X (t, 0, s)A T + X (t, r, −h)A1T − X (t, r, 0)C T Rv−1 C X (t, 0, 0). Since it holds that

294

7 LQG Optimal Controls

E {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s) − Ex1 (t + s)}T  = E Φ(t + r − t0 ){x(t0 ) − Ex(t0 )} +  × Φ(t + s − t0 ){x(t0 ) − Ex(t0 )} + +

 min(t+r,t+s) t0

t0 t0 −h t0

t0 −h

Φ(t + r − a − h)A1 {x(a) − Ex(a)}da

T Φ(t + s − a − h)A1 {x(a) − Ex(a)}da

Φ(t + r − a)Q w Φ T (t + s − a)da,

we can get the relation ∂ ∂t

X (t, r, s)

= AE {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s) − Ex1 (t + s)}T

+ A1 E {x1 (t + r − h) − Ex1 (t + r − h)}{x1 (t + s) − Ex1 (t + s)}T

+ E {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s) − Ex1 (t + s)}T A T

+ E {x1 (t + r ) − Ex1 (t + r )}{x1 (t + s − h) − Ex1 (t + s − h)}T A1T + Φ(r − min(r, s))Q w Φ T (s − min(r, s))

t − A t0 E {x1 (t + r ) − Ex1 (t + r )}e T (τ ) Rv−1 E e(τ ){x1 (t + s) − Ex1 (t + s)}T dτ

t − A1 t0 E {x1 (t + r − h) − Ex1 (t + r − h)}e T (τ ) Rv−1

× E e(τ ){x1 (t + s) − Ex1 (t + s)}T dτ



t − t0 E {x1 (t + r ) − Ex1 (t + r )}e T (τ ) Rv−1 E e(τ ){x1 (t + s) − Ex1 (t + s)}T dτ A T

t − t0 E {x1 (t + r ) − Ex1 (t + r )}e T (τ ) Rv−1

× E e(τ ){x1 (t + s − h) − Ex1 (t + s − h)}T dτ A1T



− E x1 (t + r )e T (t) Rv−1 E e(t)x1T (t + s) = AX (t, r, s) + A1 X (t, r − h, s) + X (t, r, s)A T + X (t, r, s − h)A1T + Φ(r − min(r, s))Q w Φ T (s − min(r, s)) − X (t, r, 0)C T Rv−1 C X (t, 0, s),

where by assigning r = s = 0 ∂ X (t, 0, 0) = AX (t, 0, 0) + A1 X (t, −h, 0) + X (t, 0, 0)A T + X (t, 0, −h)A1T ∂t + Q w − X (t, 0, 0)C T Rv−1 C X (t, 0, 0). Let us now define x(t) ˆ as x(t) ˆ = xˆ1 (t, 0) + x2 (t).

(7.121)

7.4 Fixed Horizon LQG Controls for State Delayed Systems

295

Then e(t) = y1 (t) − C xˆ1 (t, 0) = y(t) − C x(t) ˆ and thus from (7.119)       d xˆ1 (t, 0) + x2 (t) = A xˆ1 (t, 0) + x2 (t) + A1 E xˆ1 (t − h, 0) + x2 (t − h) dt  t L(t − h, τ )e(τ )dτ , + Bu(t) + L(t, t)e(t) + A1 t0

which gives   ˙ˆ x(t) = A x(t) ˆ + A1 E xˆ1 (t − h, 0) + x2 (t − h) 

t−h

+ Bu(t) + L(t, t)e(t) + A1

 L(t − h, τ )e(τ )dτ +

t0

  = A x(t) ˆ + A1 E xˆ1 (t − h, 0) +  + Bu(t) + L(t, t)e(t) + A1

t

 L(t − h, τ )e(τ )dτ

t−h t−h

t0 t



L(t − h, τ )e(τ )dτ + x2 (t − h) 

L(t − h, τ )e(τ )dτ

t−h



ˆ − h) + Bu(t) + L(t, t)e(t) + A1 = A x(t) ˆ + A1 x(t

t

L(t − h, τ )e(τ )dτ .

t−h

Since L(t, t) = X (t, 0, 0), L(t − h, τ ) = X (τ , t − h − τ , 0) for τ ∈ (t0 + h, t f ], and L(t − h, τ ) = U (τ , t − h − τ , 0) for τ ∈ [t0 , t0 + h], we obtain the estimate (7.101) with the initial condition (7.102), where U (t, r, s) = U T (t, s, r ) satisfies the coupled partial differential Riccati equations (7.103)–(7.105), with the initial condition (7.106). This completes the proof.  Whereas the estimate x(s) ˆ is designed to minimize the estimation error covariance T E(x(s) − x(s))(x(s) ˆ − x(s)) ˆ involving JE,2 (t0 , t f ) in (7.100), the optimal control ∗ u (s) is designed to minimize the cost JE,1 (t0 , t f ) in (7.99). The optimal estimate for state delayed stochastic systems in Theorem 7.6 can be found in [11].

7.4.1 Fixed Horizon LQG Control for Costs with Single Integral Terms Consider the finite horizon LQG cost (7.97) with the terminal weighting functional VF (xt f ) in (7.98) yielding F3 = 0 and F4 = 0 as follows.  VF (xt f ) = x T (t f )F1 x(t f ) +

tf t f −h

x T (s)F2 x(s)ds

(7.122)

296

7 LQG Optimal Controls

with F1 ≥ 0 and F2 ≥ 0. Two continuous functionals V1 (t) and V2 (t) are chosen for each horizon, i.e. t ∈ [t0 , t f − h] and t ∈ (t f − h, t f ]. The optimal controls u ∗1 (t) and u ∗2 (t) corresponding to each functional can be constructed as shown in the following theorem. Theorem 7.7 For the state delayed system (7.94)–(7.95), the finite horizon LQG control u ∗ (t) for the cost (7.97) with the terminal weighting functional (7.98) is chosen as follows. For t ∈ (t f − h, t f ],   ˆ + u ∗2 (t) = −R −1 B T W1 (t)x(t)

t f −t−h −h

 W2 (t, s)x(t ˆ + s)ds ,

(7.123)

where x(t) ˆ is given by (7.101) and (7.107) and W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.156)–(6.158) with the boundary conditions (6.159)– (6.161). For t ∈ [t0 , t f − h], u ∗1 (t)

= −R

−1

 B

 P1 (t)x(t) ˆ +

T



0

P2 (t, s)x(t ˆ + s)ds ,

−h

(7.124)

where x(t) ˆ is given by (7.101) and (7.107) and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) satisfy the coupled partial differential Riccati equations (6.159)–(6.165) with the boundary conditions (6.166)–(6.170). Proof For t ∈ (t f − h, t f ], we choose a continuous functional as in (6.171), where x(·) is replaced with x(·), ˆ such as  ˆ + V2 (t) = xˆ T (t)W1 (t)x(t)  + 2 xˆ T (t)

t f −t−h

−h

t f −t−h

−h



t f −t−h

−h

xˆ T (t + r )W3 (t, r, s)x(t ˆ + s)dr ds 

W2 (t, s)x(t ˆ + s)ds +

t

t f −h

xˆ T (s)F2 x(s)ds. ˆ

(7.125)

For t ∈ [t0 , t f − h], we choose a continuous functional as in (6.172), where x(·) is replaced with x(·), ˆ such as  ˆ + 2 xˆ (t) V1 (t) = xˆ (t)P1 (t)x(t) T

 +

0

T

0

−h



0 −h

−h

P2 (t, s)x(t ˆ + s)ds

xˆ T (t + r )P3 (t, r, s)x(t ˆ + s)dr ds.

(7.126)

In the time derivative of the expectation of the continuous functionals, the terms ˙ˆ follow as including x(t)   d ˆ E xˆ T (t)W1 (t)x(t) dt

7.4 Fixed Horizon LQG Controls for State Delayed Systems

297

 = 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ  + A1

t max(t0 ,t−h)

T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ

T W1 (t)x(t) ˆ

  ˆ , + E xˆ T (t)W˙ 1 (t)x(t)    t f −t−h d W2 (t, s)x(t ˆ + s)ds E 2 xˆ T (t) dt −h  ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ = 2E A x(t) ˆ + A1 x(t  + A1

t

max(t0 ,t−h) t f −t−h

T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ

T

  t f −t−h  d W2 (t, s)x(t ˆ + s)ds + 2E xˆ T (t) W2 (t, s)x(t ˆ + s)ds , dt −h −h   d T ˆ E xˆ (t)P1 (t)x(t) dt  ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ = 2E A x(t) ˆ + A1 x(t 

×

 + A1

t max(t0 ,t−h)

T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ

T P1 (t)x(t) ˆ

+ E xˆ T (t) P˙1 (t)x(t), ˆ    0 d P2 (t, s)x(t ˆ + s)ds E 2 xˆ T (t) dt −h  ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ = 2E A x(t) ˆ + A1 x(t  + A1

t

T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ

max(t0 ,t−h)  0  ∂ P2 (t, s) T

+ 2E xˆ (t)

−h

∂t

T 



0

−h

P2 (t, s)x(t ˆ + s)ds

˙ˆ + s) ds, x(t ˆ + s) + P2 (t, s)x(t

where T (t, r, s) denotes U (t, r, s) in (7.101) and X (t, r, s) in (7.107) for t ∈ [t0 , t0 + h] and t ∈ (t0 + h, t f ], respectively. We note that for s ∈ [−h, 0] and for τ ∈ [max(t0 , t − h), t] 

T E x(t ˆ + s) y(τ ) − C x(τ ˆ ) = 0,

(7.127)

which allows us to use the same procedure as in the proof of Theorem 6.8 to obtain the optimal control u ∗2 (t) in (7.123) for t ∈ (t f − h, t f ] and the optimal control u ∗1 (t) in (7.124) for t ∈ [t0 , t f − h], where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r ) satisfy, for s ∈ [−h, 0] and r ∈ [−h, 0], (6.156)–(6.158) with the boundary conditions (6.159)–(6.161). and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) satisfy, for s ∈ [−h, 0] and r ∈ [−h, 0], (6.166)–(6.165) with the boundary conditions (6.166)–(6.170). This completes the proof. 

298

7 LQG Optimal Controls

Because V1 (t) and V2 (t) are the optimal costs for t ∈ [t0 , t f − h] and t ∈ (t f − h, t f ], respectively, it is clear that V1 (t f − h) = V2 (t f − h), which is expressed as (6.166)–(6.168). Furthermore, the terminal-time continuity condition is that V2 (t f ) = VF (xt f ), which is expressed as (6.159). Therefore, we can say that the optimal cost JE,2 (t0 , t f ) in (7.100) is equal to the expectation of the continuous functionals V1 (t) and V2 (t) for t ∈ [t0 , t f − h] and t ∈ (t f − h, t f ] as follows.  JE,2 (t0 , t f ) =

E V1 (t) for t ∈ [t0 , t f − h], E V2 (t) for t ∈ (t f − h, t f ].

Remark 7.2 As mentioned in Remark 7.1, we can verify dual properties between control and estimation in Theorems 7.6 and 7.7 as follows. (A, A1 , B) ↔ (A T , A1T , C) (Q, R) ↔ (Q w , Rv ) (F1 , F2 ) ↔ (Π1 , AΠ2 A T ) W1 (t) ↔ U (t, 0, 0) W2 (t, s) ↔ U (t, 0, −h − s)A1 W3 (t, r, s) ↔ A1T U (t, −h − r, −h − s)A1 P1 (t) ↔ X (t, 0, 0)

(7.128)

P2 (t, s) ↔ X (t, 0, −h − s)A1 P3 (t, r, s) ↔ A1T X (t, −h − r, −h − s)A1 backwards ↔ forwards Finite horizon LQG controls for costs with single integral terminal terms for state delayed stochastic systems as in Theorem 7.7 can be found as in [5, 10, 11, 13–16].

7.4.2 *Fixed Horizon LQG Control for Costs with Double Integral Terms As concerned about the receding horizon LQG control for the state delayed stochastic system (7.94)–(7.95), which will be discussed in the following section, the stability condition of the receding horizon LQG control induced from the finite horizon LQG control for the cost with F1 ≥ 0, F2 ≥ 0, F3 = 0 and F4 = 0 in (7.98) is delay independent, which is rather conservative. Therefore, we consider the finite horizon LQG control for the cost (7.97) with the terminal weighting functional VF (xt f ) in (7.98)

7.4 Fixed Horizon LQG Controls for State Delayed Systems

 VF (xt f ) = x (t f )F1 x(t f ) + T

299



tf

x (s)F2 x(s)ds +

0

T

t f −h

   ˙ ) − Ax(r ) dr ds + × F3 x(r

0



−h

tf t f +s

−h



tf



t f +s

T x(r ˙ ) − Ax(r )

x T (r )F4 x(r )dr ds,

(7.129)

where all Fi ≥ 0, which is used to produce the delay-dependent stability condition of the receding horizon LQG control. By the double integral terms associated with F3 and F4 in the terminal weighting functional (7.129), we choose three continuous functionals, V1 (t), V2 (t), and V3 (t) for the intervals t ∈ [t0 , t f − 2h], t ∈ (t f − 2h, t f − h] and t ∈ (t f − h, t f ], respectively. Using V1 (t), V2 (t), and V3 (t), the finite horizon LQG optimal controls u ∗1 (·), u ∗2 (·), and u ∗3 (·) are obtained for each horizon in the following theorem. ¯ Theorem 7.8 Assume that f (s) and R(s) are defined as follows.   ¯ f (s) = s − t f + h, R(s) = R + f (s)B T F3 B.

For the state delayed stochastic system (7.94)–(7.95), the finite horizon LQG control u ∗ (t) for the cost (7.97) with the terminal weighting functional (7.129) is given as follows. For t ∈ (t f − h, t f ],    t −t−h f u ∗3 (t) = − R¯ −1 (t)B T W1 (t)x(t) ˆ + W2 (t, s)x(t ˆ + s)ds + f (t)F3 A1 x(t ˆ − h) ,(7.130) −h

where x(t) ˆ is given by (7.101) and (7.107) and W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.175)–(6.177) with the boundary conditions (6.178)– (6.180). For t ∈ (t f − 2h, t f − h], u ∗2 (t)

= −R

−1

 B

T

 S1 (t)x(t) ˆ +

0 −h

 S2 (t, s)x(t ˆ + s)ds ,

(7.131)

where x(t) ˆ is given by (7.101) and (7.107) and S1 (t) = S1T (t), S2 (t, s), and S3 (t, r, s) = S3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.182)–(6.184) with the boundary conditions (6.185)– (6.189). For t ∈ [t0 , t f − 2h],   ˆ + u ∗1 (t) = −R −1 B T P1 (t)x(t)

0 −h

 P2 (t, s)x(t ˆ + s)ds ,

(7.132)

where x(t) ˆ is given by (7.101) and (7.107) and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.191)–(6.193) with the boundary conditions (6.194)– (6.198).

300

7 LQG Optimal Controls

Proof For t ∈ (t f − h, t f ], we choose a continuous functional as in (6.199), where ˙ˆ x(·) and x(·) ˙ are replaced with x(·) ˆ and x(·), respectively, such as  V3 (t) = xˆ T (t)W1 (t)x(t) ˆ + 2 xˆ T (t)  + +

t f −t−h 

−h  t



t f −h

t f −t−h −h

t f −t−h

−h

W2 (t, s)x(t ˆ + s)ds

xˆ T (t + r )W3 (t, r, s)x(t ˆ + s)dr ds

  xˆ T (s) F2 + f (s)F4 x(s) ˆ

 T     t f −h ˙ˆ + f (s) x(s) ˆ˙ − A x(s) ˆ − A x(s) ˆ ds + F3 x(s) xˆ T (s) f (s + h)A1T F3 A1 t−h  2 T −1 T ˆ − f (s + h)A1 F3 B R¯ (s + h)B F3 A1 x(s)ds, (7.133)

where, for the condition (2.4) of the continuous functional (7.133), we need to have f (s + h)F3 − f 2 (s + h)F3 B R¯ −1 (s + h)B T F3 ≥ 0, which can be verified since, for s ∈ (t f − 2h, t f − h], 

f (s + h)F3 B f (s + h)F3 f (s + h)B T F3 R + f (s + h)B T F3 B



 =

I 0 BT I



f (s + h)F3 0 0 R



I B 0 I

 ≥ 0.

For t ∈ (t f − 2h, t f − h], we choose a continuous functional as in (6.200), where ˙ˆ x(·) and x(·) ˙ are replaced with x(·) ˆ and x(·), respectively, such as  V2 (t) = xˆ T (t)S1 (t)x(t) ˆ + 2 xˆ T (t)  +

0

−h



0 −h

0

−h

S2 (t, s)x(t ˆ + s)ds 

xˆ T (t + r )S3 (t, r, s)x(t ˆ + s)dr ds + 

t

t f −2h

 xˆ T (s) f (s + h)A1T F3 A1

− f 2 (s + h)A1T F3 B R¯ −1 (s + h)B T F3 A1 x(s)ds. ˆ

(7.134)

For t ∈ [t0 , t f − 2h], we choose a continuous functional as in (6.201), where x(·) ˙ˆ and x(·) ˙ are replaced with x(·) ˆ and x(·), respectively, such as  V1 (t) = xˆ T (t)P1 (t)x(t) ˆ + 2 xˆ T (t)  +

0 −h



0

−h

0

−h

P2 (t, s)x(t ˆ + s)ds

xˆ T (t + r )P3 (t, r, s)x(t ˆ + s)dr ds.

(7.135)

In the time derivative of the expectation of the continuous functionals, the terms ˙ˆ follow as including x(t)

7.4 Fixed Horizon LQG Controls for State Delayed Systems 





301



d T ˆ = E xˆ T (t)W˙ 1 (t)x(t) ˆ dt E xˆ (t)W1 (t) x(t)

 + 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ T t + A1 max(t T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ W1 (t)x(t), ˆ 0 ,t−h)       t −t−h t −t−h f f d d T W2 (t, s)x(t ˆ + s)ds = 2E xˆ T (t) dt W2 (t, s)x(t ˆ + s)ds −h dt E 2 xˆ (t) −h  + 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ T t + A1 max(t T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ 0 ,t−h)  t f −t−h × −h W2 (t, s)x(t ˆ + s)ds,   d E xˆ T (t)S (t) x(t) ˆ = E xˆ T (t) S˙1 (t)x(t) ˆ 1 dt  + 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ T t + A1 max(t T (t, −h, τ − t)C T Rv−1 {y(τ ) − C x(τ ˆ )}dτ S1 (t)x(t), ˆ 0 ,t−h)   0 d T ˆ + s)ds = 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 dt E 2 xˆ (t) −h S2 (t, s) x(t

 0 ∂ S2 (t,s) ˙ˆ + s) ds × {y(t) − C x(t)} ˆ + 2E xˆ T (t) −h x(t ˆ + s) + S2 (t, s)x(t ∂t T t 0 T R −1 {y(τ ) − C x(τ T (t, −h, τ − t)C ˆ )}dτ ˆ + s)ds, + A1 max(t v −h S2 (t, s) x(t 0 ,t−h)   d T ˆ = E xˆ T (t) P˙1 (t)x(t) ˆ dt E xˆ (t)P1 (t) x(t)  ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 {y(t) − C x(t)} ˆ + 2E A x(t) ˆ + A1 x(t T t T R −1 {y(τ ) − C x(τ + A1 max(t T (t, −h, τ − t)C ˆ )}dτ P1 (t)x(t), ˆ v 0 ,t−h)   0 d T ˆ + s)ds = 2E A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + T (t, 0, 0)C T Rv−1 dt E 2 xˆ (t) −h P2 (t, s) x(t

 0 ∂ P2 (t,s) ˙ˆ + s) ds × {y(t) − C x(t)} ˆ + 2E xˆ T (t) −h x(t ˆ + s) + P2 (t, s)x(t ∂t T t 0 T R −1 {y(τ ) − C x(τ T (t, −h, τ − t)C ˆ )}dτ ˆ + s)ds, + A1 max(t v −h P2 (t, s) x(t ,t−h) 0

where T (t, r, s) denotes U (t, r, s) in (7.101) and X (t, r, s) in (7.107) for t ∈ [t0 , t0 + h] and t ∈ (t0 + h, t f ], respectively. We note that for s ∈ [−h, 0] and for τ ∈ [max(t0 , t − h), t] 

T E x(t ˆ + s) y(τ ) − C x(τ ˆ ) = 0,

(7.136)

which allows us to use the same procedure as in the proof of Theorem 6.9 to obtain the optimal control u ∗3 (t) in (7.130) for t ∈ (t f − h, t f ], the optimal control u ∗2 (t) in (7.131) for t ∈ (t f − 2h, t f − h], and the optimal control u ∗1 (t) in (7.132) for t ∈ [t0 , t f − 2h], where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r )

302

7 LQG Optimal Controls

for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.175)–(6.177) with the boundary conditions (6.178)–(6.180), S1 (t) = S1T (t), S2 (t, s), and S3 (t, r, s) = S3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.182)–(6.184) with the boundary conditions (6.185)–(6.189), and P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) for r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.191)–(6.193) with the boundary conditions (6.194)–(6.198). This completes the proof.  Because V1 (t), V2 (t), and V3 (t) are the optimal costs for t0 ≤ t ≤ t f − h, t f − 2h < t ≤ t f − h, and t f − h < t ≤ t f , respectively, it is clear that V1 (t f − 2h) = V2 (t f − 2h) and V2 (t f − h) = V3 (t f − h), which are expressed as (6.194)–(6.188) and (6.185)–(6.198). Furthermore, the terminal-time continuity condition is that V3 (t f ) = VF (xt f ), which is expressed as (6.178) Therefore, we can say that the optimal cost JE,2 (t0 , t f ) in (7.100) is equal to the expectation of the continuous functionals V1 (t), V2 (t), and V3 (t) for t ∈ [t0 , t f − 2h], t ∈ (t0 − 2h, t f − h] and t ∈ (t f − h, t f ] as follows. ⎧ ⎨ E V1 (t) for t ∈ [t0 , t f − 2h], JE,2 (t0 , t f ) = E V2 (t) for t ∈ (t f − 2h, t f − h], ⎩ E V3 (t) for t ∈ (t f − h, t f ]. Infinite Horizon LQG Control If X (t, r, s) = X T (t, s, r ) in the coupled partial differential Riccati equations (7.108)– (7.110) in Theorem 7.6 goes to the stationary quantity X (r, s) as t f goes to infinity, they satisfy 0 = AX (0, 0) + A1 X (−h, 0) + Q w + X (0, 0)A T + X (0, −h)A1T − X (0, 0)C T Rv−1 C X (0, 0),

∂ X (r, 0) = X (r, 0)A T + X (r, −h)A1T − X (r, 0)C T Rv−1 C X (0, 0), ∂r

∂ ∂ X (r, s) = −X (r, 0)C T Rv−1 C X (0, s) + ∂r ∂s

(7.137) (7.138) (7.139)

and the infinite horizon estimate is given from (7.107) by   d ˆ x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + X (0, 0)C T Rv−1 y(t) − C x(t) dt    0 ˆ + s) ds. (7.140) + X (−h − s, 0)C T Rv−1 y(t + s) − C x(t −h

Similarly, if P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r ) satisfying (6.159)–(6.165) or (6.191)–(6.193) go to the stationary quantities P1 = P1T , P2 (s), P3 (r, s) = P3T (s, r ) as t f goes to infinity, then they satisfy the following coupled partial differential Riccati equations

7.4 Fixed Horizon LQG Controls for State Delayed Systems

303

0 = A T P1 + P1 A + Q + P2 (0) + P2T (0) − P1 B R −1 B T P1 ,



P˙2 (s) = A T P2 (s) + P3 (0, s) − P1 B R −1 B T P2 (s),

∂ ∂ + P3 (r, s) = −P2T (r )B R −1 B T P2 (s), ∂r ∂s

(7.141) (7.142) (7.143)

with boundary conditions P2 (−h) = P1 A1 , P3 (−h, s) =

A1T

(7.144)

P2 (s)

(7.145)

and the infinite horizon LQG control is given from (7.124) or (7.132) by u ∗ (t) = −R −1 B T



 P1 x(t) ˆ +

0 −h

 P2 (s)x(t ˆ + s)ds .

(7.146)

In this case, the corresponding continuous functional for JE,1 (t0 , ∞) becomes   E V (t) = E xˆ T (t)P1 x(t) ˆ + 2 xˆ T (t)  +

0



−h

0 −h

P2 (s)x(t ˆ + s)ds

 xˆ (t + r )P3 (r, s)x(t ˆ + s)dr ds . T

−h

0

(7.147)

For the stability analysis, we augment the system (7.94)–(7.95), the Kalman filtered estimate (7.107), and the LQG control (7.124) into the following system for 

x(t) ˜ = x(t) − x(t). ˆ 

    x(t) ˙ A − B R −1 B T P1 (t) x(t) B R −1 B T P1 (t) = ˙˜ x(t) ˜ 0 A − X (t, 0, 0)C T Rv−1 C x(t)    x(t − h) A1 0 + 0 A1 x(t ˜ − h)    0 −1 T B R −1 B T P2 (t, s) −B R B P2 (t, s) x(t + s) + ds 0 −X (t, −h − s, 0)C T Rv−1 C x(t ˜ + s) −h   w(t)  , (7.148) + 0 −X (t, 0, 0)C T Rv−1 v(t) − −h X (t, −h − s, 0)C T Rv−1 v(t + s)ds

from which the closed-loop system is asymptotically stable if the control subsystem in the first part of (7.148) without x(t) ˜ is asymptotically stable and the estimation error subsystem in the second part of (7.148) is asymptotically stable. The asymptotic stability of the control subsystem was given in Sect. 6.4.3. For the asymptotic stability of the estimation error subsystem, we can use dual properties between controls and estimates like in (7.128), which provides conditions for the asymptotic stability of the estimation error subsystem dual to those for the asymptotic

304

7 LQG Optimal Controls

stability of the control subsystem in Sect. 6.4.3. We can have a similar proposition like in [10] that for Q w > 0 the infinite estimation error subsystem in the first part of (7.148) without x(t) ˜ is asymptotically stable if the system (7.94)–(7.95) is detectable [10, 11], where the system is said to be detectable if there exists a linear estimate with a form of d x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) dt    0   ˆ + + L o y(t) − C x(t) L 1 (s) y(t + s) − C x(t ˆ + s) ds −h

that stabilizes the estimation error system. However, differently from the solution X of the algebraic Riccati equation (7.44) for the ordinary system, in which there exist a finite number of solutions of the algebraic Riccati equation (7.44) and thus we can easily find the unique positive definite solution X that provides the asymptotic stability of the estimation, the number of solutions X (r, s) = X T (s, r ) of the coupled partial differential Riccati equations (7.137)–(7.139) is infinite and there seems to be no research on finding the solutions X (r, s) = X T (s, r ) that produces such asymptotic stability of the estimation error subsystem. Even though the stability properties of infinite horizon LQ filtered estimate subsystems and control subsystems were addressed in [10, 11], the asymptotic stabilities of both the estimation error subsystem and the control subsystem have not been explicitly explained in the literature to the best of our knowledge. While infinite horizon LQG controls have problems in stability proof and computation, we introduce receding horizon LQG controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

7.5 Receding Horizon LQG Controls for State Delayed Systems In this section, we assume that the state delayed system (7.94)–(7.95) is stabilizable and detectable [6, 10, 11]. For the current time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. Consider a receding horizon LQG cost given by  J E (t, t + T ; xt ) = E t

t+T



  x T (s| t)Qx(s| t) + u T (s| t)Ru(s| t) ds + VF (xt+T | t )

(7.149) with Q > 0 and R > 0, where the terminal weighting functional denotes

7.5 Receding Horizon LQG Controls for State Delayed Systems





VF (xt+T | t ) = x (t + T | t)F1 x(t + T | t) +

t+T

T

 +

0



−h 0



 +

−h

t+T t+T +s t+T t+T +s



t+T −h T

x(r ˙ | t) − Ax(r | t)

305

x T (s| t)F2 x(s| t)ds

  ˙ | t) − Ax(r | t) dr ds F3 x(r

x T (r | t)F4 x(r | t)dr ds

(7.150)

with F1 > 0, F2 > 0, F3 ≥ 0, and F4 ≥ 0 for the system (7.94)–(7.95) that can be expressed as d x(s| t) = Ax(s| t) + A1 x(s − h| t) + Bu(s| t) + w(s| t), ds y(s| t) = C x(s| t) + v(s| t)

(7.151) (7.152)

with Q w > 0 and Rv > 0 and with the initial state xt| t = xt . Using the estimate x(s| ˆ t) satisfying the orthogonality between the estimate x(s| ˆ t) and the estimation error x(s| t) − x(s| ˆ t), i.e. E x(s| ˆ t){x(s| t) − x(s| ˆ t)}T = 0, the cost JE (t, t + T ) can be rewritten as follows. JE (t, t + T ; xt ) = JE,1 (t, t + T ; xˆt ) + JE,2 (t, t + T ; xt − xˆt ), where JE,1 (t, t + T ; xˆt ) and JE,2 (t, t + T ; xt − xˆt ) denote JE,1 (t, t + T ; xˆt )   t+T     T T xˆ (s| t)Q x(s| =E ˆ t) + u (s| t)Ru(s| t) ds + VF (xˆt+T | t ) ,

(7.153)

t

JE,2 (t, t + T ; xt − xˆt ) T    t+T     x(s| t) − x(s| ˆ t) Q x(s| t) − x(s| =E ˆ t) ds + VF (xt+T | t − xˆt+T | t ) , t

(7.154) where the initial estimate xˆt| t = xˆt is given. It is noted that the control u(s| t) should be a function of past output {y(α), α ∈ [t, s]} and past estimate {x(α| ˆ t), α ∈ [t, s]} due to the causality. The cost (7.149) can be obtained from the cost (7.97) by replacing t0 and t f with t and t + T , respectively, where T is a horizon for control. Since the horizon length T is a design parameter, we consider two cases of 0 < T < h and h ≤ T when F3 = 0 and F4 = 0 in the cost (7.150) and also three cases of 0 < T < h, h ≤ T < 2h, and 2h ≤ T when F3 > 0 and F4 > 0 in the cost (7.150).

306

7 LQG Optimal Controls

7.5.1 Receding Horizon LQG Control for Costs with Single Integral Terms Consider a receding horizon LQG cost JE (t, t + T ; xt ) in (7.149) with F1 > 0, F2 > 0, F3 = 0, and F4 = 0 in the terminal weighting functional VF (xt+T | t ). The optimal solution u ∗ for the cost JE (t, t + T ; xt ) for the state delayed system (7.151)– (7.152) can be found through JE,1 (t, t + T ; xˆt ), rather than JE (t, t + T ; xt ). It is noted that (7.149) and (7.150) are obtained from the finite horizon LQG cost (7.97) with the terminal weighting function (7.98) by replacing t0 and t f with t and t + T , respectively. Thus, the receding horizon LQG control can be obtained from the finite horizon LQG controls (7.123) and (7.124) in Theorem 7.7 by replacing t0 and t f with t and t + T , respectively. For s ∈ (t + T − h, t + T ], the optimal LQ control u ∗ (s| t) for the cost (7.153) is given from (7.123), by replacing t f with t + T , as    t+T −s−h u ∗2 (s| t) = −R −1 B T W1 (s − t)x(s| ˆ t) + W2 (s − t, a)x(s ˆ + a| t)da , −h

(7.155)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (t, s, r ) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.226)–(6.228) with the boundary conditions (6.229)–(6.231). For s ∈ [t, t + T − h], the optimal LQ control u ∗ (s| t) for the cost (7.153) is given from (7.124) as   u ∗1 (s| t) = −R −1 B T P1 (s − t)x(s| ˆ t) +

 P2 (s − t, a)x(s ˆ + a| t)da ,

0 −h

(7.156)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r ) for τ ∈ [0, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.233)–(6.235) with the boundary conditions (6.236)–(6.240). Since the horizon length T is a design parameter, as mentioned earlier, we consider two cases of 0 < T < h and h ≤ T when F3 = 0 and F4 = 0 in the cost (7.150). The receding horizon LQG control u ∗ (t| t) for the receding horizon LQG cost JE,1 (t, t + T ) in (7.153) at the current time t is given from (7.155)–(7.156) as follows. Case I: 0 < T < h The receding horizon LQG control is given from (7.155) as follows.   u(t) = u ∗2 (t| t) = −R −1 B T W1 (0)x(t) ˆ +

T −h

−h

 W2 (0, s)x(t ˆ + s)ds ,

(7.157)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) satisfy the coupled partial differential Riccati equations (6.226)–(6.228) for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (6.229)–(6.231). Case II: h ≤ T The receding horizon LQG control is given from (7.156) as follows.

7.5 Receding Horizon LQG Controls for State Delayed Systems

u(t) =

u ∗1 (t| t)

= −R

−1

 B

T

 P1 (0)x(t) ˆ +

0

−h

307

 P2 (0, s)x(t ˆ + s)ds , (7.158)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r ) satisfy the coupled partial differential Riccati equations (6.233)–(6.235) for τ ∈ [0, T − h], s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (6.236)–(6.240). And W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) satisfy the coupled partial differential Riccati equations (6.226)–(6.228) for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (6.229)–(6.231). On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) with t replaced with t+ as the time proceeds. In this case, we have to obtain xˆt+ from xˆt and new information on y[t, t+ ]. This is a filtered estimation problem. Since the receding horizon LQG controls (7.157) and (7.158) employ the current control u ∗2 (t| t) and u ∗1 (t| t) only, the choice of xˆt| t = xˆt determines the property of the receding horizon LQG control. We suggest two possible choices. Choice I: Standard Kalman Filtered Estimate A simple choice of the estimate for x(t) is a Kalman filtered estimate (7.101) and (7.107) for t ∈ [t0 , t0 + h] and for t ∈ (t0 + h, ∞), respectively, with the initial estimate x(t ˆ 0 ) = Ex(t0 ), where U (t, r, s) = U T (t, s, r ) and X (t, r, s) = X T (t, s, r ) satisfy the coupled partial differential Riccati equations (7.103)–(7.105) with the initial condition (7.106) and the coupled partial differential Riccati equations (7.108)– (7.110) with the boundary condition (7.111), respectively. Theorem 7.9 If there exist positive definite matrices F1 > 0 and F2 > 0 such that 

Q + F1 A + A T F1 − F1 B[R + F2 ]−1 B T F1 F1 A1 A1T F1 −F2

 ≤ 0,

(7.159)

then the receding horizon LQG controls (7.157) and (7.158) for the cost (7.153) with such F1 and F2 , based on the Kalman filtered estimates (7.101) and (7.107), asymptotically stabilize the system (7.94)–(7.95). Proof Let us augment the systems (7.94)–(7.95), the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + h] and (7.107) for t ∈ (t0 + h, ∞), and the controls (7.157)  for 0 < T < h and (7.158) for h ≤ T into the following system for x(t) ˜ = x(t) − x(t). ˆ 

       x(t) ˙ x(t) A1 0 A + B K 1 −B K 1 x(t − h) + = ˙˜ 0 A + K 3 (t)C x(t) ˜ 0 A1 x(t ˜ − h) x(t)     min(T −h,0) I −I x(t + s) + B K 2 (s) ds 0 0 x(t ˜ + s) −h    t 0 0 x(s) ds + 0 K 4 (t, s)C x(s) ˜ t0

308

7 LQG Optimal Controls

 +

I 0 0 K 3 (t)



  t   w(t) w(s) 0 0 + ds, v(t) v(s) 0 K 4 (t, s) t0

(7.160)

where K 1 and K 2 (s) denote for 0 < T < h K 1 = −R −1 B T W1 (0), K 2 (s) = −R −1 B T W2 (0, s) and for h ≤ T K 1 = −R −1 B T P1 (0), K 2 (s) = −R −1 B T P2 (0, s), and K 3 (t) and K 4 (t, s) denote for t ∈ [t0 , t0 + h] K 3 (t) = −U (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 U (s, t − h − s, 0)C T Rv−1 , for t ∈ (t0 + h, ∞) and s ∈ [t0 , t0 + h] K 3 (t) = −X (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 U (s, t − h − s, 0)C T Rv−1 , and for t ∈ (t0 + h, ∞) and s ∈ (t0 + h, ∞) K 3 (t) = −X (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 X (s, t − h − s, 0)C T Rv−1 . Therefore, the closed-loop system is asymptotically stable if the system (7.160) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system can be separated into the asymptotic stability of the estimation error subsystem and the asymptotic stability of the control subsystem. It is noted that the asymptotic stability of the estimation error subsystem in (7.160) was newly obtained in Sect. 7.4.2. By Theorem 6.6, the existence of positive definite matrices F1 > 0 and F2 > 0 satisfying (7.159) ensures the asymptotic stability of the control subsystem. This completes the proof.  Theorems 6.5 and 6.6 and Corollaries 6.4 and 6.5 described the sufficient conditions including the condition (7.159) for the asymptotic stability for the control subsystem. Choice II: Frozen-Gain Kalman Filtered Estimate We mentioned that LQ controls and Kalman filtered estimates have dual properties in Remark 7.1. In the receding horizon LQG control with a Kalman filtered estimate, however, the control has a fixed gain matrix but the filtered estimate has a timevarying gain. Like the control gain, we froze the filter gain. As a dual concept to the receding horizon LQ control, we propose a filtered estimate at the current time t dependent on Te , which is the horizon length for estimation. Since Te is a design parameter, we can consider three cases of 0 < Te < h, h ≤ Te < 2h and 2h ≤ Te .

7.5 Receding Horizon LQG Controls for State Delayed Systems

309

Case I: 0 < Te < h The filtered estimate x(t) ˆ is obtained from (7.101) in Theorem 7.6 such as   d ˆ x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + U (0, 0, 0)C T Rv−1 y(t) − C x(t) dt    0 + A1 U (s, −h − s, 0)C T Rv−1 y(t + s) − C x(t (7.161) ˆ + s) ds, −Te

where U (τ , r, s) = U T (τ , s, r ) satisfies the following coupled partial differential Riccati equations for τ ∈ [−Te , 0], s ∈ [−h, 0], and r ∈ [−h, 0]



∂ ∂τ ∂ ∂ − ∂τ ∂r

∂ U (τ , 0, 0) = AU (τ , 0, 0) + U (τ , 0, 0)A T + A1 Π2 A1T + Q w ∂τ − U (τ , 0, 0)C T Rv−1 CU (τ , 0, 0),

∂ U (τ , r, 0) = U (τ , r, 0)A T − U (τ , r, 0)C T Rv−1 CU (τ , 0, 0), − ∂r

∂ − U (τ , r, s) = −U (τ , r, 0)C T Rv−1 CU (τ , 0, s) ∂s

(7.162) (7.163) (7.164)

with the initial condition U (−Te , 0, 0) = Π1 .

(7.165)

Case II: h ≤ Te < 2h The filtered estimate x(t) ˆ is obtained from (7.107) in Theorem 7.6 such as   d ˆ − h) + Bu(t) + X (0, 0, 0)C T Rv−1 y(t) − C x(t) x(t) ˆ = A x(t) ˆ + A1 x(t dt    0 X (s, −h − s, 0)C T Rv−1 y(t + s) − C x(t + s) ds + A1  + A1

−Te +h −Te +h −h

  U (s, −h − s, 0)C T Rv−1 y(t + s) − C x(t + s) ds, (7.166)

where X (τ , r, s) = X T (τ , s, r ) satisfies the following coupled partial differential Riccati equations for τ ∈ [−Te + h, 0], s ∈ [−h, 0], and r ∈ [−h, 0]. ∂ X (τ , 0, 0) = AX (τ , 0, 0) + A1 X (τ , −h, 0) + Q w + X (τ , 0, 0)A T ∂τ + X (τ , 0, −h)A1T − X (τ , 0, 0)C T Rv−1 C X (t, 0, 0),

∂ ∂ − X (τ , 0, s) = AX (t, 0, s) + A1 X (t, −h, s) ∂τ ∂s

∂ ∂ ∂ − − ∂τ ∂r ∂s

− X (τ , 0, 0)C T Rv−1 C X (τ , 0, s), X (τ , r, s) = −X (τ , r, 0)C T Rv−1 C X (τ , 0, s)

with the initial condition for r ∈ [−h, 0] and s ∈ [−h, 0]

(7.167)

(7.168) (7.169)

310

7 LQG Optimal Controls

X (−Te + h, r, s) = U (−Te + h, r, s)

(7.170)

and the U (τ , r, s) = U T (τ , s, r ) is generated via the coupled partial differential Riccati equations (7.162)–(7.164) for τ ∈ [−Te , h − Te ), s ∈ [−h, 0], and r ∈ [−h, 0] with the initial condition (7.165). Case III: 2h ≤ Te The filtered estimate x(t) ˆ is obtained from (7.107) in Theorem 7.6 such as   d x(t) ˆ = A x(t) ˆ + A1 x(t ˆ − h) + Bu(t) + X (0, 0, 0)C T Rv−1 y(t) − C x(t) dt    0 T −1 + A1 X (s, −h − s, 0)C Rv y(t + s) − C x(t + s) ds, (7.171) −h

where X (τ , r, s) = X T (τ , s, r ) satisfies the coupled partial differential Riccati equations (7.167)–(7.169) for τ ∈ [−Te + h, 0], s ∈ [−h, 0], and r ∈ [−h, 0] with the initial condition (7.170) for r ∈ [−h, 0] and s ∈ [−h, 0] and the U (τ , r, s) = U T (τ , s, r ) is generated via the coupled partial differential Riccati equations (7.162)– (7.164) for τ ∈ [−Te , h − Te ), s ∈ [−h, 0], and r ∈ [−h, 0] with the initial condition (7.165). Based on this frozen-gain Kalman filtered estimate, we can get the following result. Theorem 7.10 If there exist positive definite matrices F1 > 0, F2 > 0, Π1 > 0 and Π2 > 0 such that   Q + F1 A + A T F1 − F1 B[R + F2 ]−1 B T F1 F1 A1 ≤ 0, (7.172) A1T F1 −F2   Q w + Π1 A T + AΠ1 − Π1 C T [Rv + Π2 ]−1 CΠ1 Π1 A1T ≤ 0, (7.173) A 1 Π1 −Π2 then the receding horizon LQG controls (7.157) and (7.158) for the cost (7.153) with such F1 and F2 , based on the Kalman filtered estimates (7.161), (7.166), and (7.171), asymptotically stabilize the system (7.94)–(7.95). Proof Let us augment the systems (7.94)–(7.95), the frozen-gain Kalman filtered estimates (7.161) for 0 < Te < h, (7.166) for h ≤ Te < 2h, and (7.171) for 2h ≤ Te , and the controls (7.157) for 0 < T < h and (7.158) for h ≤ T into the following 

system for x(t) ˜ = x(t) − x(t). ˆ 

       x(t) ˙ A + B K 1 −B K 1 x(t − h) x(t) A1 0 = + ˙˜ 0 A + K3C 0 A1 x(t ˜ − h) x(t) ˜ x(t)     min(T −h,0) I −I x(t + s) B K 2 (s) + ds 0 0 x(t ˜ + s) −h

7.5 Receding Horizon LQG Controls for State Delayed Systems

311

     0 0 w(t) x(t + s) I 0 + ds + v(t) x(t ˜ + s) 0 K3 − min(Te ,h) 0 K 4 (s)C      0 w(t + s) 0 0 ds, (7.174) + (s) v(t + s) 0 K 4 − min(Te ,h) 

0



where K 1 and K 2 (s) denote for 0 < T < h K 1 = −R −1 B T W1 (0), K 2 (s) = −R −1 B T W2 (0, s) and for h ≤ T K 1 = −R −1 B T P1 (0), K 2 (s) = −R −1 B T P2 (0, s) and K 3 and K 4 (s) denote for 0 < Te < h K 3 = −U (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 U (s, −h − s, 0)C T Rv−1 , for h ≤ Te < 2h and for s ∈ [−h, −Te + h] K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 U (s, −h − s, 0)C T Rv−1 , for h ≤ Te < 2h and for s ∈ (−Te + h, 0] K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 X (s, −h − s, 0)C T Rv−1 , and for 2h ≤ Te K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 X (s, −h − s, 0)C T Rv−1 . Therefore, the closed-loop system is asymptotically stable if the system (7.174) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system can be separated into the asymptotic stability of the control subsystem and the asymptotic stability of the estimation error subsystem. By Theorem 6.6, the existence of positive definite matrices F1 > 0 and F2 > 0 satisfying (7.172) ensures the asymptotic stability of the control subsystem. By the duality between control and estimation mentioned in Remark 7.2, the existence of positive definite matrices Π1 > 0 and Π2 > 0 satisfying (7.173) ensures the asymptotic stability of the estimation error subsystem. This completes the proof.  Theorems 6.5 and 6.6 and Corollaries 6.4 and 6.5 in Sect. 5.5 described the sufficient conditions including (7.172) for the asymptotic stability for the control subsystem. Similarly, one can develop the corresponding sufficient conditions for the asymptotic stability for the estimation error subsystem.

312

7 LQG Optimal Controls

For the initial interval t ∈ [t0 , t0 + Te ], we can use the Kalman filtered estimates instead of the frozen-gain Kalman filtered estimates. In the case of 0 < Te < h, we use the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + Te ]. In the case of h ≤ Te , we use the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + h] and (7.107) for t ∈ (t0 + h, t0 + Te ]. Whereas the stability properties of the Kalman filtered estimate subsystem as in Theorem 7.9 can be found as in [10, 11], the stability properties of the frozen-gain filtered estimate subsystem as in Theorem 7.10 are newly obtained here.

7.5.2 *Receding Horizon LQG Control for Costs with Double Integral Terms Consider a receding horizon LQG cost JE (t, t + T ; xt ) in (7.149) with F1 > 0, F2 > 0, F3 > 0, and F4 > 0 in the terminal weighting functional VF (xt+T | t ). The optimal solution u ∗ for the cost JE (t, t + T ; xt ) for the state delayed system (7.151)–(7.152) can be found through JE,1 (t, t + T ; xˆt ), rather than JE (t, t + T ; xt ), as follows. It is noted that (7.149) and (7.150) are obtained from the finite horizon LQG cost (7.97) with the terminal weighting function (7.98) by replacing t0 and t f with t and t + T , respectively. Thus, the receding horizon LQG control can be obtained from the finite horizon LQG controls (7.130)–(7.132) in Theorem 7.8 by replacing t0 and t f with t ¯ and t + T , respectively. Define f (s) and R(s) as follows.   ¯ f (s) = s − T + h, R(s) = R + f (s)B T F3 B.

For s ∈ (t + T − h, t + T ], the optimal control is given as  u ∗3 (s| t) = − R¯ −1 (s − t)B T W1 (s − t)x(s| ˆ t)  +

T −h

−h

 W2 (s − t, a)x(s ˆ + a| t)da + f (s − t)F3 A1 x(s ˆ − h| t) , (7.175)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.262)–(6.264) For s ∈ (t + T − 2h, t + T − h], the optimal control is given as   u ∗2 (s| t) = −R −1 B T S1 (s − t)x(s| ˆ t) +

0 −h

 S2 (s − t, a)x(s ˆ + a| t)da ,

(7.176)

where S1 (τ ) = S1T (τ ), S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r ) for τ ∈ (T − 2h, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.269)–(6.271) with the boundary conditions (6.272)–(6.276). For s ∈ [t, t + T − 2h], the optimal control is given as

7.5 Receding Horizon LQG Controls for State Delayed Systems   u ∗1 (s| t) = −R −1 B T P1 (s − t)x(s| ˆ t) +

0 −h

 P2 (s − t, a)x(s ˆ + a| t)da ,

313

(7.177)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r ) for τ ∈ [0, T − 2h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.278)–(6.280) with the boundary conditions (6.281)–(6.285). Since the horizon length T is a design parameter, as mentioned earlier, we consider three cases of 0 < T < h, h ≤ T < 2h and 2h ≤ T when Fi > 0 in (7.150). The receding horizon LQG control u(t) for the receding horizon LQG cost JE,1 (t, t + T ) in (7.153) at the current time t is given from (7.175)–(7.177) as follows. Case I: 0 < T < h The receding horizon LQG control is given from (7.175) as follows.   u(t) = u ∗3 (t| t) = − R¯ −1 (0)B T W1 (0)x(t) ˆ +

T −h

W2 (0, s)x(t ˆ + s)ds  + f (0)F3 A1 x(t ˆ − h) ,

−h

(7.178)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.262)–(6.264) with the boundary conditions (6.265–(6.267). Case II: h ≤ T < 2h The receding horizon LQG control is given from (7.176) as follows.   u(t) = u ∗2 (t| t) = −R −1 B T S1 (0)x(t) ˆ +

0

−h

 S2 (0, s)x(t ˆ + s)ds , (7.179)

where S1 (τ ) = S1T (τ ), S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r ) for τ ∈ (T − 2h, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.269)–(6.271) with the boundary conditions (6.272)–(6.276). And W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.262)– (6.264) with the boundary conditions (6.265)–(6.267). Case III: 2h ≤ T The receding horizon LQG control is given from (7.177) as follows.   u(t) = u ∗1 (t| t) = −R −1 B T P1 (0)x(t) ˆ +

0

−h

 P2 (0, s)x(t ˆ + s)ds , (7.180)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r ) for τ ∈ [0, T − 2h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.278)–(6.280) with the boundary conditions (6.281)–(6.285). And S1 (τ ) = S1T (τ ),

314

7 LQG Optimal Controls

S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r ) for τ ∈ (T − 2h, T − h], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.269)–(6.271) with the boundary conditions (6.272)–(6.276). And W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r ) for τ ∈ (T − h, T ], r ∈ [−h, 0] and s ∈ [−h, 0] satisfy the coupled partial differential Riccati equations (6.262)–(6.264) with the boundary conditions (6.265–(6.267). On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) with t replaced with t+ as the time proceeds. In this case, we have to obtain xˆt+ from xˆt and new information on y[t, t+ ]. This is a filtered estimation problem. Since the receding horizon LQG controls (7.178), (7.179) and (7.180) employ the current control u ∗3 (t| t), u ∗2 (t| t) and u ∗1 (t| t) only, the choice of xˆt| t = xˆt determines the property of the receding horizon LQG control. We suggest two possible choices. Choice I: Standard Kalman Filtered Estimate A simple choice of the estimate for x(t) is a Kalman filtered estimate (7.101) and (7.107) for t ∈ [t0 , t0 + h] and for t ∈ (t0 + h, ∞), respectively, with the initial estimate x(t ˆ 0 ) = Ex(t0 ), where U (t, r, s) = U T (t, s, r ) and X (t, r, s) = X T (t, s, r ) satisfy the coupled partial differential Riccati equations (7.103)–(7.105) with the initial condition (7.106) and the coupled partial differential Riccati equations (7.108)– (7.110) with the boundary condition (7.111), respectively. Theorem 7.11 If there exist matrices Fi > 0 and X i j such that  

T X 11 X 21 X 21 X 22

 ≥ 0,

(7.181) 

T T A T F1 + F1 A + Q + F2 + h F4 + h X 11 + X 21 + X 21 F1 A1 − X 21 T T A1 F1 − X 21 −F2 + h A1 F3 A1   T  F1 F1 T −1 T B[R + h B F B] B ≤ 0, (7.182) − 3 h A1T F3 h A1T F3   F4 + A T F3 A −A T F3 ≥ 0, (7.183) −F3 A F3 − X 22

then the receding horizon LQG controls (7.178)–(7.180) for the cost (7.153) with such Fi , based on the Kalman filtered estimates (7.101) and (7.107), asymptotically stabilize the system (7.94)–(7.95). Proof Let us augment the systems (7.94)–(7.95), the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + h] and (7.107) for t ∈ (t0 + h, ∞), and and the control u(t) (7.178) for 0 < T < h, (7.179) for h ≤ T < 2h, and (7.180) for 2h ≤ T into the  ˆ following system for x(t) ˜ = x(t) − x(t). 

    x(t) ˙ x(t) A + B K 1 −B K 1 = ˙˜ 0 A + K 3 (t)C x(t) ˜ x(t)

7.5 Receding Horizon LQG Controls for State Delayed Systems

315



  I −I x(t + s) + B K 2 (s) ds 0 0 x(t ˜ + s) −h      t  x(s) x(t − h) 0 0 A1 + B K 5 −B K 5 ds + + 0 A1 x(s) ˜ x(t ˜ − h) 0 K 4 (t, s)C t0      t  I 0 0 0 w(t) w(s) + + ds, (7.184) (t, s) 0 K 3 (t) 0 K v(t) v(s) 4 t0 

min(T −h,0)

where K 1 , K 2 (s), and K 5 denote for 0 < T < h K 1 = − R¯ −1 (0)B T W1 (0), K 2 (s) = − R¯ −1 (0)B T W2 (0, s), K 5 = − R¯ −1 (0)B T f (0)F3 A1

and for h ≤ T K 1 = −R −1 B T P1 (0), K 2 (s) = −R −1 B T P2 (0, s) and K 3 (t) and K 4 (t, s) denote for t ∈ [t0 , t0 + h] K 3 (t) = −U (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 U (s, t − h − s, 0)C T Rv−1 , for t ∈ (t0 + h, ∞) and s ∈ [t0 , t0 + h] K 3 (t) = −X (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 U (s, t − h − s, 0)C T Rv−1 , and for t ∈ (t0 + h, ∞) and s ∈ (t0 + h, ∞) K 3 (t) = −X (t, 0, 0)C T Rv−1 , K 4 (t, s) = −A1 X (s, t − h − s, 0)C T Rv−1 . Therefore, the closed-loop system is asymptotically stable if the system (7.189) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system can be separated into the asymptotic stability of the control subsystem and the asymptotic stability of the estimation error subsystem. It is noted that the asymptotic stability of the estimation error subsystem in (7.189) was newly obtained in Sect. 7.4.2. By Theorem 6.13, the existence of positive definite matrices Fi > 0 and matrices X i j satisfying (7.185)– (7.187) ensures the asymptotic stability of the control subsystem. This completes the proof.  Since it is hard to determine X (r, s) that ensures the asymptotic stability of the estimation error subsystem, we introduce a frozen-gain Kalman filtered estimate dual to the receding horizon LQ control in the following subsection for the asymptotic stability of estimation. Choice II: Frozen-Gain Kalman Filtered Estimate In the previous subsection, as a dual concept to the receding horizon LQ control, we proposed the filtered estimates (7.161) for 0 < Te < h, (7.166) for h ≤ Te < 2h, and

316

7 LQG Optimal Controls

(7.171) for 2h ≤ Te . Using this estimate, the stability condition of the closed-loop system can be obtained in the following theorem. Theorem 7.12 If there exist matrices Fi > 0, X i j , Π1 > 0 and Π2 > 0 such that  

T X 11 X 21 X 21 X 22

 ≥ 0,

(7.185) 

T T A T F1 + F1 A + Q + F2 + h F4 + h X 11 + X 21 + X 21 F1 A1 − X 21 A1T F1 − X 21 −F2 + h A1T F3 A1    T F1 F1 T −1 T − B[R + h B F B] B ≤ 0, (7.186) 3 h A1T F3 h A1T F3   F4 + A T F3 A −A T F3 ≥ 0, (7.187) −F3 A F3 − X 22   Q w + Π1 A T + AΠ1 − Π1 C T [Rv + Π2 ]−1 CΠ1 Π1 A1T ≤ 0, (7.188) A 1 Π1 −Π2

then the receding horizon LQG controls (7.178)–(7.180) for the cost (7.153) with such Fi , based on the frozen-gain Kalman filtered estimates (7.161) and (7.166), asymptotically stabilize the system (7.94)–(7.95). Proof Let us augment the system (7.94)–(7.95), the frozen-gain Kalman filtered estimates (7.161) for 0 < Te < h and (7.166) for h ≤ Te , and the controls (7.157) for  ˆ 0 < T < h and (7.158) for h ≤ T into the following system for x(t) ˜ = x(t) − x(t). 

       x(t) ˙ A + B K 1 −B K 1 x(t − h) x(t) A1 + B K 5 −B K 5 = + ˙˜ 0 A + K3C 0 A1 x(t ˜ − h) x(t) ˜ x(t)     min(T −h,0) I −I x(t + s) B K 2 (s) ds + 0 0 x(t ˜ + s) −h        0 x(t + s) I 0 0 0 w(t) ds + + x(t ˜ + s) 0 K3 v(t) − min(Te ,h) 0 K 4 (s)C     0 w(t + s) 0 0 ds, (7.189) + v(t + s) − min(Te ,h) 0 K 4 (s)

where K 1 , K 2 (s), and K 5 denote for 0 < T < h K 1 = − R¯ −1 (0)B T W1 (0), K 2 (s) = − R¯ −1 (0)B T W2 (0, s), K 5 = − R¯ −1 (0)B T f (0)F3 A1 ,

for h ≤ T < 2h K 1 = −R −1 B T S1 (0), K 2 (s) = −R −1 B T S2 (0, s), K 5 = 0, and for 2h ≤ T

7.5 Receding Horizon LQG Controls for State Delayed Systems

317

K 1 = −R −1 B T P1 (0), K 2 (s) = −R −1 B T P2 (0, s), K 5 = 0 and K 3 and K 4 (s) denote for 0 < Te < h K 3 = −U (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 U (s, −h − s, 0)C T Rv−1 , for h ≤ Te < 2h and for s ∈ [−h, −Te + h] K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 U (s, −h − s, 0)C T Rv−1 , for h ≤ Te < 2h and for s ∈ (−Te + h, 0] K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 X (s, −h − s, 0)C T Rv−1 , and for 2h ≤ Te K 3 = −X (0, 0, 0)C T Rv−1 , K 4 (s) = −A1 X (s, −h − s, 0)C T Rv−1 . Therefore, the closed-loop system is asymptotically stable if the system (7.189) is asymptotically stable assuming w(t) = 0 and v(t) = 0 for all t ≥ t0 . In other words, the asymptotic stability of the closed-loop system can be separated into the asymptotic stability of the control subsystem and the asymptotic stability of the estimation error subsystem. By Theorem 6.13, the existence of positive definite matrices Fi > 0 and matrices X i j satisfying (7.185)–(7.187) ensures the asymptotic stability of the control subsystem. By the duality between control and estimation mentioned in Remark 7.2, the existence of positive definite matrices Π1 > 0 and Π2 > 0 satisfying (7.188) ensures the asymptotic stability of the estimation error subsystem. This completes the proof.  Theorems 6.12 and 6.13 and Corollaries 6.8 and 6.9 in Sect. 6.5 described the sufficient conditions including (7.185)–(7.187) for the asymptotic stability for the control subsystem. These conditions are delay-dependent, whereas the conditions in Theorems 6.5 and 6.6 and Corollaries 6.4 and 6.5 in Sect. 6.5 are delay-independent. Therefore, the former conditions are less conservative than the latter conditions. For the initial interval t ∈ [t0 , t0 + Te ], we can use the Kalman filtered estimates instead of the frozen-gain Kalman filtered estimates. In the case of 0 < Te < h, we use the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + Te ]. In the case of h ≤ Te , we use the Kalman filtered estimates (7.101) for t ∈ [t0 , t0 + h] and (7.107) for t ∈ (t0 + h, t0 + Te ]. Whereas the stability properties of the Kalman filtered estimate subsystem in Theorem 7.11 can be found as in [10, 11], the stability properties of the frozen-gain filtered estimate subsystem in Theorem 7.12 are newly obtained here.

318

7 LQG Optimal Controls

References 1. Aggarwal JK (1970) Computation of optimal control for time-delay systems. IEEE Trans Autom Control 683–685 2. Anderson BDO, Moore JB (1979) Optimal filtering. Prentice-Hall International Editions, Englewood Cliffs 3. Athans MA (1971) Special issue on the LQG problems. IEEE Trans Autom Control 16(6):527– 869 4. Bitmead R, Gevers M, Wertz V (1990) Adaptive optimal control: the thinking man’s GPC. Prentice Hall, New York 5. Delfour MC (1986) The linear-quadratic optimal control problem with delays in state and control variables: a state space approach. SIAM J Control 24(5):835–883 6. Delfour MC, McCalla C, Mitter SK (1975) Stability and the infinite-time quadratic cost for linear hereditary differential systems. SIAM J Control 13(1):48–88 7. Kailath T, Sayed AH, Hassibi B (2000) Linear estimation. Prentice-Hall, Upper Saddle River 8. Kwakernaak H, Sivan R (1972) Linear optimal control systems. Wiley-Interscience, New York 9. Kwon WH, Han S (2006) Receding horizon control: model predictive control for state models. Springer, Berlin 10. Kwong RH (1980) A stability theory for the linear-quadratic-Gaussian problem for systems with delays in the state, control, and observations. SIAM J Control Optim 18(1):49–75 11. Kwong RH, Willsky AS (1977) Optimal filtering and filter stability of linear stochastic delay systems. IEEE Trans Autom Control 22(2):196–201 12. Lewis FL, Syroms VL (1995) Optimal control. Wiely, New York 13. Lindquist A (1969) An innovations approach to optimal control of linear stochastic systems with time delay. Inf Sci 1(3):279–295 14. Lindquist A (1972) A theorem on duality estimation and control for linear stochastic systems with time delay. J Math Anal Appl 37(2):516–536 15. Lindquist A (1973) Optimal control of linear stochastic systems with applications to time lag systems. Inf Sci 5:81–124 16. Ni Y, Yiu KC, Zhang H, Zhang J (2017) Delayed optimal control of stochastic LQ problem. SIAM J Control 55(5):3370–3407 17. Park JH, Yoo HW, Han S, Kwon WH (2008) Receding horizon controls for input-delayed systems. IEEE Trans Autom Control 53(7):1746–1752

Chapter 8

H∞ Optimal Controls

8.1 Introduction In Chap. 5, guaranteed H∞ controls for input and state delayed systems are obtained with certain H∞ performance bounds, where control structures are not free but given a priori in feedback forms with constant gain matrices. Gain matrices of feedback controls are sought to satisfy a certain stability criterion with an H∞ performance bound. Therefore, guaranteed H∞ controls are rather easy to compute but lack of optimality since the control structures are not free. In this chapter, H∞ optimal controls are dealt with for both input and state delayed systems, where any feedback control structures are not required a priori. First, finite horizon H∞ optimal controls are dealt with, which are fundamental results mathematically. Since finite horizon controls cannot be used as stabilizing feedback controls due to the inherent requirement of infinite horizons associated with stability properties, infinite horizon H∞ controls are obtained by extending the terminal time to infinity if possible, where their stability properties and some limitations are discussed. Then for general stabilizing controls, receding horizon H∞ controls, or model predictive H∞ controls, are dealt with, which can be obtained easily from finite horizon controls by the receding horizon concept. Terminal conditions in receding horizon H∞ controls become very important to guarantee the closed-loop stability, which is investigated in details. The H∞ controls in this book use free terminal states. Output feedback H∞ controls are not dealt with in this chapter due to the space limit. This chapter is outlined as follows. In Sect. 8.2 finite horizon H∞ controls for input delayed systems are obtained for two different H∞ costs: predictive and standard ones. First they are obtained for the predictive H∞ cost containing the weight of a state predictor. The state predictor plays an important role in this case. These finite horizon H∞ controls are extended to infinite horizon H∞ controls which are discussed with their stability properties and some limitations. Next they are obtained for the standard H∞ cost containing the weight of states, not state predictors. These finite H∞ controls are extended to infinite horizon H∞ controls which are discussed © Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6_8

319

8 H∞ Optimal Controls

320

with their stability properties and some limitations. In Sect. 8.3 receding horizon H∞ controls for input delayed systems are obtained from the two different H∞ controls given in Sect. 8.2. Cost monotonicity conditions are investigated, under which these receding horizon H∞ controls asymptotically stabilize the closed-loop systems. In Sect. 8.4 finite horizon H∞ controls for state delayed systems are obtained for two standard H∞ costs with two different terminal terms: one with single integral terminal terms and the other with double integral terminal terms. These finite horizon H∞ controls are fundamental results and they are extended to infinite horizon H∞ controls which are discussed with their stability properties and some limitations. In Sect. 8.5 receding horizon H∞ controls for state delayed systems are obtained from the finite horizon H∞ controls in Sect. 8.4. Cost monotonicity conditions are investigated, under which these receding horizon H∞ controls asymptotically stabilize the closedloop system. It is shown that receding horizon H∞ controls with the double integral terminal terms can have the delay-dependent stability condition while those with the single integral terminal terms have the delay-independent stability condition. This chapter deals with linear time-delay systems only. Receding horizon H∞ controls can be extended to constrained and also nonlinear time-delay systems, which need to be investigated. References for contents of this chapter are listed at the end of this chapter and cited in each subsection for more information and further reading.

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems 8.2.1 Fixed Horizon H∞ Control for Predictive Costs Consider a linear system with a single input delay given by x˙ (t) = Ax(t) + Bu(t) + B1 u(t − h) + Gw(t)

(8.1)

with the initial conditions x(t0 ) and u(t0 + θ) = φ(θ) for θ ∈ [−h, 0), where x(t) ∈ Rn is the state, u(t) ∈ Rm is the input, w(t) ∈ Rmw is the disturbance, and h denotes the constant delay. Before solving the H∞ control problems with state predictors, we first introduce a state predictor that is used instead of the true state in the performance. The state predictor z(t) = xˆ (t + h) is defined as the state x(t + h) at the time t + h with the constraints that ut+h ≡ 0 and wt+h ≡ 0 as z(t) = xˆ (t + h) = x(t + h; x(t), ut+h ≡ 0, wt+h ≡ 0) = eAh x(t) +

 t t−h

eA(t−τ ) B1 u(τ )d τ .

The state x(t + h) and the state predictor xˆ (t + H ) has the following relation  x(t + h) = xˆ (t + h) + t

t+h

eA(t+h−τ ) Bu(τ )d τ +



t+h

eA(t+h−τ ) Gw(τ )d τ ,

t

(8.2)

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

321

similarly as in (6.3). It is interesting to see that z(t) = xˆ (t + h) satisfies an ordinary system. ¯ ¯ z˙ (t) = Az(t) + Bu(t) + Gw(t),

(8.3)

B¯ = eAh B + B1 , G¯ = eAh G.

(8.4)

where B¯ and G¯ denote

For a finite horizon H∞ control, we use the following performance criterion to be minimized  tf T T t {z (s)Qz(s) + u (s)Ru(s)}ds + VF (tf ) , (8.5) max 0  tf T w t0 w (s)w(s)ds + VI (t0 ) where the terminal weighting function VF (tf ) and the initial weighting function VI (t0 ) denote VF (tf ) = z T (tf )Fz(tf ) = xˆ T (tf + h)F xˆ (tf + h),

(8.6)

VI (t0 ) = z (t0 )Π z(t0 ) = xˆ (t0 + h)Π xˆ (t0 + h)

(8.7)

T

T

with F ≥ 0 and Π ≥ 0. The functions VF (tf ) and VI (t0 ) represent internal energies at the terminal time tf and at the initial time t0 , respectively. The former might stand for the total energy flowing out from the system after the terminal time tf , whereas the latter might stand for the total energy flowing into the system until the initial time t0 . Furthermore, the 2-norm of the disturbance w should be bounded, which is basically assumed in this chapter and thus will not be mentioned again. However, it might be impossible to directly find the optimal control yielding the optimal value γ ∗ such that  tf γ

∗2

= min max u

t0

w

{z T (s)Qz(s) + uT (s)Ru(s)}ds + VF (tf ) .  tf T t0 w (s)w(s)ds + VI (t0 )

(8.8)

Instead, therefore, the following problem is usually considered: for a given finite horizon H∞ performance bound γ, find an H∞ control satisfying the following performance criterion  tf γ ≥ min max 2

u

w

t0

{z T (s)Qz(s) + uT (s)Ru(s)}ds + VF (tf ) .  tf T t0 w (s)w(s)ds + VI (t0 )

(8.9)

If this problem is feasible, then the resulting γ is greater than or equal to γ ∗ . In order to obtain this finite horizon H∞ control, we use an alternative expression such as

8 H∞ Optimal Controls

322



   z T (s)Qz(s) + uT (s)Ru(s) ds + VF (tf ) t0   tf  2 T −γ w (s)w(s)ds + VI (t0 )

0≥

tf

t0

= J (t0 , tf , u, w) − γ 2 VI (t0 ),

(8.10)

where the first term J (t0 , tf , u, w), which is henceforth called a finite horizon H∞ cost defined by J (t0 , tf , u, w) =

 tf  t0

 z T (s)Qz(s) + uT (s)Ru(s) − γ 2 wT (s)w(s) ds + VF (tf )

(8.11) with Q ≥ 0 and R > 0, is determined by u and w but the last term γ 2 VI (t0 ) = γ 2 z T (t0 )Π z(t0 ) is not affected by u and w because z(t0 ) is given. We consider the following finite horizon H∞ control problem: γ 2 z T (t0 )Π z(t0 ) ≥ min max J (t0 , tf , u, w). u

(8.12)

w

If we choose the optimal control u∗ and the worst disturbance w ∗ , it holds that J (t0 , tf , u∗ , w) ≤ J (t0 , tf , u∗ , w ∗ ) ≤ J (t0 , tf , u, w ∗ ) and thus (u∗ , w ∗ ) and J (t0 , tf , u∗ , w ∗ ) are sometimes called a saddle-point solution and a saddle-point cost, respectively. We introduce a lemma, which plays an essential role in obtaining optimal controls in this subsection and also in the subsequent subsections. Lemma 8.1 If it is possible to find a continuous functional V (t) and vector functions u∗ (t) and w ∗ (t) for the finite horizon H∞ cost (8.11) such that   V (tf ) = J (σ, tf , u, w) 

σ=tf

= VF (tf ),

(8.13)

T T 2 T ˙ V (t) + z (t)Qz(t) + u (t)Ru(t) − γ w (t)w(t) u=u∗  T T 2 T ˙ ≤ V (t) + z (t)Qz(t) + u (t)Ru(t) − γ w (t)w(t) 

u=u∗ ,w=w ∗

≤ V˙ (t) + z T (t)Qz(t) + uT (t)Ru(t) − γ 2 w T (t)w(t)  V˙ (t) + z T (t)Qz(t) + uT (t)Ru(t) − γ 2 w T (t)w(t)



w=w ∗

,

u=u∗ ,w=w ∗

(8.14) =0

(8.15)

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

323

for all t ∈ [t0 , tf ], then V (t) = J (t, tf , u∗ , w ∗ ) for all t ∈ [t0 , tf ]. In this case, the optimal control u∗ and the worst disturbance w ∗ satisfy J (t0 , tf , u∗ , w) ≤ V (t0 ) = J (t0 , tf , u∗ , w ∗ ) ≤ J (t0 , tf , u, w ∗ ).

(8.16)

If such a continuous functional V (t0 ) satisfies γ 2 VI (t0 ) ≥ V (t0 ),

(8.17)

then the finite horizon H∞ performance bound (8.9) holds for any w such that  tf

γ ≥ 2

t0

z T (s)Qz(s) + u∗T (s)Ru∗ (s) ds + VF (tf ) ,  tf T t0 w (s)w(s)ds + VI (t0 )

(8.18)

where z is the state generated by u∗ and some w. Proof Because of the conditions (8.13) and (8.15), V (t) = J (t, tf , u∗ , w ∗ ) for all t ∈ [t0 , tf ]. The condition (8.14) provides (8.16). Since the initial time condition (8.17) satisfies γ 2 VI (t0 ) ≥ V (t0 ) = J (t0 , tf , u∗ , w ∗ ) ≥ J (t0 , tf , u∗ , w),

(8.19)

this relation can be written as (8.18). Refer to Remark 8.1 for related subjects. This completes the proof.  Remark 8.1 From (8.13) and (8.14), u∗ and w ∗ are the optimal control and the worst disturbance from the principle of optimality in the dynamic programming. From (8.13) and (8.15), 

tf

V (t) =

  z ∗T (s)Qz ∗ (s) + u∗ T (s)Ru∗ (s) − γ 2 w ∗T (s)w ∗ (s) ds + VF (tf )

t

for all t ∈ [t0 , tf ], where z ∗ (s) is a state generated by {u∗ (α), w ∗ (s), α ∈ (s, tf ]}, which is called the optimal cost-to-go. Therefore, V (t0 ) is the saddle-point optimal cost. It is noted that Lemma 8.1 is given for ordinary systems but also applicable for both input delayed systems and state delay systems, where terminal conditions in (6.22) may be different. It is also noted that V (t) can be represented with arguments depending on the problems, for example with the state variable x, V (x(t), t) for delay-free ordinary systems, V (xt , t) for state delayed systems, and V (x(t), ut , t) for input delayed system. Lemma 8.1 is a modified version for linear quadratic control in [4]. It is noted that the worst disturbance w∗ is used to construct the optimal control u∗ for the finite horizon H∞ but a disturbance may be applied into the system instead of the worst disturbance w ∗ .

8 H∞ Optimal Controls

324

For the input delayed system (8.1), let us consider 

tf

J (t0 , tf , u, w) =



 xˆ (s + h)Qˆx(s + h) + u (s)Ru(s) − γ w (s)w(s) ds T

T

2

T

t0

+ xˆ T (tf + h)F xˆ (tf + h)

(8.20)

with utf ≡ u[tf − h, tf ] ≡ 0, wtf ≡ w[tf − h, tf ] ≡ 0, Q ≥ 0, R > 0, and F ≥ 0, which can be written as the cost (8.11) with Q ≥ 0, R > 0, and F ≥ 0. Let us choose a continuous functional in Lemma 8.1 as V (t) = z T (t)P(t)z(t)

(8.21)

and consider its time derivative.   T  d d = z T (t)M(t)z(t) + B¯ T P(t)z(t) + Ru(t) V (t) − J (σ, tf , u, w) dt dσ σ=t    T   × R B¯ T P(t)z(t) + Ru(t) − γ −2 G¯ T P(t)z(t) − γ 2 w(t) G¯ T P(t)z(t) − γ 2 w(t) ,

˙ + AT P(t) + P(t)A + Q − P(t)BR ¯ −1 B¯ T P(t) + γ −2 P(t)G¯ G¯ T P(t) where M(t) = P(t) ∗ and the choice of u and w as the optimal u and the worst disturbance w ∗ produces the optimal control u∗ and the worst disturbance w ∗ u∗ (t) = −R−1 B¯ T P(t)z(t), w ∗ (t) = γ −2 G¯ T P(t)z(t).

(8.22) (8.23)

and the condition (8.15) in Lemma 8.1 givens the condition that 0 = M(t). The initial time condition (8.17) in Lemma 8.1 is written as   2 V (t0 ) − γ z (t0 )Π z(t0 ) = z (t0 ) P(t0 ) − γ Π z(t0 ) ≥ 0. 2 T

T

Therefore, if the initial time condition of P(τ ) holds such as P(t0 ) ≤ γ 2 Π,

(8.24)

where P(τ ) = P T (τ ) is the solution of the following differential Riccati equation for τ ∈ [t0 , tf ] ¯ −1 B¯ T P(τ ) + γ −2 P(τ )G¯ G¯ T P(τ ) ˙ ) = AT P(τ ) + P(τ )A + Q − P(τ )BR − P(τ (8.25)

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

325

with the terminal-time condition P(tf ) = F, then the optimal control u∗ and the worst disturbance w ∗ of the finite horizon H∞ cost J (t0 , tf , u, w) (8.11) for the delay-free system (8.3) are given by and satisfy the finite horizon H∞ performance bound (8.18). For the time-delay system (8.1) with single input delay, the optimal control u∗ and the worst disturbance w ∗ are rewritten as    t u∗ (t) = −R−1 B¯ T P(t) eAh x(t) + eA(t−τ ) B1 u∗ (τ )d τ , t−h    t w ∗ (t) = γ −2 G¯ T P(t) eAh x(t) + eA(t−τ ) B1 u∗ (τ )d τ . t−h

Infinite Horizon H ∞ Control We consider the infinite horizon (8.11) with tf = ∞ for the delay-free system (8.3), where the initial state z(t0 ) is assumed to be zero, i.e. x(t0 ) = 0 and u(t0 + θ) = 0 for all θ ∈ [−h, 0). If P(t) in (8.25) goes to a constant matrix P as tf goes to infinity, this matrix P = P T satisfies the following algebraic Riccati equation ¯ −1 B¯ T P + γ −2 P G¯ G¯ T P 0 = AT P + PA + Q − P BR

(8.26)

and the infinite horizon H∞ control u∗ is given from (8.22) by u∗ (t) = −R−1 B¯ T Pz(t).

(8.27)

When we consider the infinite horizon H∞ control, we can investigate the asymptotic stability of the closed-loop system with no disturbance. Consider the delay-free system (8.3) with the control u∗ (8.27) such as   ¯ ¯ −1 B¯ T P z(t) + Gw(t). z˙ (t) = A − BR The algebraic Riccati equation (8.26) can be written as  T   ¯ −1 B¯ T P P + P A − BR ¯ −1 B¯ T P + Q + P BR ¯ −1 B¯ T P + γ −2 P G¯ G¯ T P. 0 = A − BR

¯ −1 B¯ T P) is Hurwitz from If P is given as positive definite, the system matrix (A − BR the well-known property of the algebraic Lyapunov equation and the control u∗ (8.27) asymptotically stabilizes the delay-free system (8.3) with no disturbance. Therefore, by Lemma 8.1, the following infinite horizon H∞ performance bound holds ∞ T z (s)Qz(s) + u∗T (s)Ru∗ (s) ds t0 2 ∞ (8.28) γ ≥ T t0 w (s)w(s)ds

8 H∞ Optimal Controls

326

with the zero initial state z(t0 ) = 0, which is an expression corresponding to (8.18) in Lemma 8.1 when tf is infinite. It is well-known that, by the conventional H∞ control, under the assumption that ¯ is stabilizable and (A, QT /2 ) is detectable the existence of the positive definite (A, B) matrix P to the algebraic Riccati equation (8.26) guarantees that the H∞ norm of the resulting closed-loop transfer function is less than or equal to γ. For the time-delay system (8.1) with single input delay, the optimal control u∗ is rewritten as    t u∗ (t) = −R−1 B¯ T P¯ eAh x(t) + eA(t−τ ) B1 u∗ (τ )d τ . t−h

Fixed horizon H∞ controls for input delayed systems with predictive costs in this subsection follow similarly from several well-known literatures including [1, 3, 5] for ordinary systems once they are transformed to delay-free systems.

8.2.2 Fixed Horizon H∞ Control for Standard Costs In the finite horizon LQ control for the standard cost (6.20), we have dealt with a pure input delayed system because this control can be obtained simply via the differential equation (6.29), rather than the partial differential equations (6.31)–(6.33) and (6.38)– (6.40). However, in the finite horizon H∞ control for a standard cost, we do not deal with a pure input delayed system separately from a single input delayed system because there is no advantage due to the existence of disturbance w(t) in the system. For the input delayed system (8.1) and for a given finite horizon H∞ performance bound, find an H∞ control satisfying the following performance criterion.  tf

γ ≥ min max 2

u

w

t0

xT (s)Qx(s) + uT (s)Ru(s) ds + VF (tf )  tf T t0 w (s)w(s)ds + VI (t0 )

(8.29)

with Q ≥ 0 and R > 0, where the terminal weighting functional VF (tf ) and the initial weighting functional VI (t0 ) denote 



VF (tf ) = xT (tf )F1 x(tf ) +

tf tf −h

uT (s)F2 u(s)ds,



VI (t0 ) = xT (t0 )Π x(t0 )

(8.30) (8.31)

with F1 > 0, F2 > 0, and Π > 0. This can be written as γ 2 VI (t0 ) ≥ min max J (t0 , tf , u, w), u

w

(8.32)

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

327

where the finite horizon H∞ cost is given as J (t0 , tf , u, w) =

 tf

xT (s)Qx(s) + uT (s)Ru(s) − γ 2 wT (s)w(s) ds + VF (tf ).

t0

(8.33) Using Lemma 8.1, we are going to find the optimal control u∗ and the worst disturbance w ∗ of the finite horizon H∞ cost (8.33) for the input delayed system (8.1). Differently from those for the delay-free system (8.3), however, we need to construct two continuous functionals V1 (t) and V2 (t) for horizons t0 ≤ t ≤ tf − h and tf − h < t ≤ tf , respectively,  V1 (t) = xT (t)P1 (t)x(t) + 2xT (t)  +

0



−h

0 −h

−h



V2 (t) = x (t)W1 (t)x(t) + 2x (t)

+

tf −t−h

T



tf −t−h

−h  t tf −h

P2 (t, s)u(t + s)ds

uT (t + r)P3 (t, r, s)u(t + s)drds,

T

+

0



tf −t−h

−h

−h

(8.34)

W2 (t, s)u(t + s)ds

uT (t + r)W3 (t, r, s)u(t + s)drds

uT (s)F2 u(s)ds,

(8.35)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) are assumed to be differentiable with respect to t ∈ [t0 , tf − h], r ∈ [−h, 0], and s ∈ [−h, 0] and W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) are assumed to be differentiable with respect to t ∈ (tf − h, tf ], r ∈ [−h, 0], and s ∈ [−h, 0], which leads to obtain two pairs of the optimal control u∗ and the worst disturbance w ∗ , say {u1∗ (·), w1∗ (·)} and {u2∗ (·), w2∗ (·)}, for the two horizons in the following result. Theorem 8.1 For a given γ > 0, the optimal control u∗ and the worst disturbance w ∗ for the finite horizon H∞ cost J (t0 , tf , u, w) (8.33) for the system (8.1) are given as follows. For t ∈ (tf − h, tf ]   u2∗ (t) = −[R + F2 ]−1 BT W1 (t)x(t) + 

w2∗ (t) = γ −2 G T W1 (t)x(t) +



tf −t−h

−h

tf −t−h

−h

 W2 (t, s)u∗ (t + s)ds , (8.36)

 W2 (t, s)u (t + s)ds , ∗

(8.37)

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0]

8 H∞ Optimal Controls

328

−W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q + F2 − W1 (t)B × [R + F2 ]−1 BT W1 (t) + γ −2 W1T (t)GG T W1 (t),   ∂ ∂ − + W2 (t, s) = AT W2 (t, s) − W1T (t)B[R + F2 ]−1 BT W2 (t, s) ∂t ∂s + γ −2 W1T (t)GG T W2 (t, s),   ∂ ∂ ∂ − + W3 (t, r, s) = −W2T (t, r)B[R + F2 ]−1 BT W2 (t, s) + ∂t ∂r ∂s

(8.38)

(8.39)

+ γ −2 W2T (t, r)GG T W2 (t, s)

(8.40)

with boundary conditions W1 (tf ) = F1 ,

(8.41)

W2 (t, −h) = W1 (t)B1 , W3 (t, −h, s) = B1T W2 (t, s).

(8.42) (8.43)

For t ∈ [t0 , tf − h]   u1∗ (t) = −R−1 BT P1 (t) + P2T (t, 0) x(t)  0 −1 −R {BT P2 (t, s) + P3 (t, 0, s)}u(t + s)ds, 

−h

w1∗ (t) = γ −2 G T P1 (t)x(t) +



0

−h

(8.44)

G T P2 (t, s)u(t + s)ds ,

(8.45)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q + γ −2 P1 (t)GG T P1 (t) − {P1 (t)B + P2 (t, 0)}R−1 {BT P1 (t) + P2T (t, 0)}, (8.46)   ∂ ∂ − + P2 (t, s) = AT P2 (t, s) + γ −2 P1 (t)GG T P2 (t, s) − {P1 (t)B + P2 (t, 0)} ∂t ∂s × R−1 {BT P2 (t, s) + P3 (t, 0, s)}, (8.47)   ∂ ∂ ∂ −2 T T T − + + P3 (t, r, s) = γ P2 (t, r)GG P2 (t, s) − {B P2 (t, r) + P3 (t, 0, r)} ∂t ∂r ∂s × R−1 {BT P2 (t, s) + P3 (t, 0, s)}

(8.48)

with boundary conditions P1 (tf − h) = W1 (tf − h), P2 (tf − h, s) = W2 (tf − h, s),

(8.49) (8.50)

P3 (tf − h, r, s) = W3 (tf − h, r, s), P2 (t, −h) = P1 (t)B1 ,

(8.51) (8.52)

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

P3 (t, −h, s) = B1T P2 (t, s).

329

(8.53)

Furthermore, if it holds that γ 2 xT (t0 )Π x(t0 ) = γ 2 VI (t0 ) ≥ V1 (t0 ) = xT (t0 )P1 (t0 )x(t0 ),

(8.54)

the control u∗ , composed of u1∗ for t ∈ [t0 , tf − h] and u2∗ for t ∈ (tf − h, tf ], satisfies the finite horizon H∞ performance bound (8.29). Proof For t ∈ (tf − h, tf ], consider V2 (t) in (8.34). After some manipulation, we obtain d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 w T (t)w(t) dt = xT (t)[W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q]x(t) + 2u(t − h)[B1 T W1 (t) − W2T (t, −h)]x(t)  tf −t−h  ∂W2 (t, s) ∂W2 (t, s) T T − + + A W2 (t, s) u(t + s)ds + 2x (t) ∂s ∂t −h  tf −t−h  B1 T W2 (t, s) − W3 (t, −h, s) u(t + s)ds + 2uT (t − h) −h tf −t−h

 ∂ ∂ ∂ − − W3 (t, r, s) u(t + s)drds ∂t ∂r ∂s −h −h     tf −t−h T T T T W1 (t)x(t) + W2 (t, s)u(t + s)ds + 2 u (t)B + w (t)G 

+

tf −t−h





uT (t + r)

−h

+ uT (t)[R + F2 ]u(t) − γ 2 w T (t)w(t). Now the optimal control u2∗ (·) and the worst disturbance w2∗ (·) can be chosen as follows. From the derivative conditions    ∂ d V2 (t) T T 2 T ∂u(t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) , 0= ∂ dt ∂w(t) we can construct u2∗ (t) in (8.36) and w2∗ (t) in (8.37). With such u2∗ (t) and w2∗ (t), we have 

 d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt u(t)=u2∗ (t),w(t)=w2∗ (t)  = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q − W1T (t)B[R + F2 ]−1 BT W1 (t)    + γ −2 W1T (t)GG T W1 (t) x(t) + 2u(t − h) B1 T W1 (t) − W2T (t, −h) x(t)

0=

 + 2xT (t)

tf −t−h  −h



∂W2 (t, s) ∂W2 (t, s) + + AT W2 (t, s) ∂s ∂t

8 H∞ Optimal Controls

330

 − W1T (t)B[R + F2 ]−1 BT W2 (t, s) + γ −2 W1T (t)GG T W2 (t, s) u(t + s)ds  + 2uT (t − h)

−h tf −t−h  tf −t−h

 +

tf −t−h 

−h

−h

 B1 T W2 (t, s) − W3 (t, −h, s) u(t + s)ds

  ∂ ∂ ∂ − − W3 (t, r, s) xT (t + r) ∂t ∂r ∂s



W2T (t, r)B[R + F2 ]−1 BT W2 (t, s) + γ −2 W2T (t, r)GG T W2 (t, s)



u(t + s)drds,

which produces the set of sufficient conditions (8.38)–(8.40) with the boundary conditions (8.42)–(8.43). From the terminal-time condition that V2 (tf ) = VF (tf ), we have the condition (8.41). For t ∈ [t0 , tf − h], consider V1 (t) in (8.34). With some efforts, we have d V (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt = xT (t){AT P1 (t) + P1 (t)A + P˙ 1 (t) + Q}x(t)     0 ∂ ∂ + 2xT (t) − P2 (t, s) u(t + s)ds AT P2 (t, s) + ∂t ∂s −h  0 + 2uT (t){BT P1 (t) + P2T (t, 0)}x(t) + 2uT (t) {BT P2 (t, s) + P3 (t, 0, s)}u(t + s)ds −h

 + 2wT (t)

0 −h

G T P2 (t, s)u(t + s)ds + 2wT (t)G T P1 (t)x(t)

+ 2uT (t − h){B1T P1 (t) − P2T (t, −h)}x(t)  0 + 2uT (t − h) {B1T P2 (t, s) − P3 (t, −h, s)}u(t + s)ds  +

0



−h

0

−h −h

uT (t + r)



 ∂ ∂ ∂ − − P3 (t, r, s)u(t + s)drds ∂t ∂r ∂s

+ uT (t)Ru(t) − γ 2 wT (t)w(t).

(8.55)

Now the optimal control u1∗ (t) and the worst disturbance w1∗ (t) are found such that  0=

∂ ∂u ∂ ∂w



 d T T 2 T V (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) . dt

Therefore, we can construct u1∗ (t) in (8.44) and w1∗ (t) in (8.45). With u1∗ (t), w1∗ (t) and the past optimal control u(t + s) = u∗ (t + s), we have 

 d V1 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt u=u1∗ ,w=w1∗  = xT (t) AT P1 (t) + P1 (t)A + P˙ 1 (t) + Q + γ −2 P1 (t)GG T P1 (t)  − {P1 (t)B + P2 (t, 0)}R−1 {BT P1 (t) + P2T (t, 0)} x(t)

0=

8.2 Fixed Horizon H∞ Controls for Input Delayed Systems

331

+ 2uT (t − h){B1T P1 (t) − P2T (t, −h)}x(t)   0  ∂ ∂ T − P2 (t, s) + AT P2 (t, s) + γ −2 P1 (t)GG T P2 (t, s) + 2x (t) ∂t ∂s −h − {P1 (t)B + P2 (t, 0)}R−1 {BT P2 (t, s) + P3 (t, 0, s)} u∗ (t + s)ds + 2u∗T (t − h)

 0 −h

{B1T P2 (t, s) − P3 (t, −h, s)}u∗ (t + s)ds

 ∂ ∂ ∂ − − P3 (t, r, s) + γ −2 P2 (t, r)GG T P2 (t, s) ∂t ∂r ∂s −h −h − {BT P2 (t, r) + P3 (t, 0, r)}T R−1 {BT P2 (t, s) + P3 (t, 0, s)} u∗ (t + s)drds. +

 0  0

u∗T (t + r)



Therefore, we obtain the coupled Riccati equations of sufficient conditions (8.46)– (8.48) with boundary conditions (8.52)–(8.53). Because V1 (t) and V2 (t) are the continuous functionals for t0 ≤ t ≤ tf − h and tf − h < t ≤ tf , respectively, it is clear that V1 (tf − h) = V2 (tf − h), which are satisfied if and only if the conditions (8.49)– (8.51) hold. Since ut0 ≡ 0, V1 (t0 ) = xT (t0 )P1 (t0 )x(t0 ) and thus the last condition in Lemma 8.1 becomes (8.54). This completes the proof.  Infinite Horizon H ∞ Control We consider the infinite horizon H∞ cost J (t0 , tf , u, w) | tf =∞ (8.33). In this case, the initial state x(t0 ) and the initial control u(t0 + θ) for all θ ∈ [−h, 0) are assumed to be zero, i.e. x(t0 ) = 0 and u(t0 + θ) = 0 for all θ ∈ [−h, 0). If the coupled partial differential Riccati solutions P1 (t), P2 (t, s), and P3 (t, r, s) go to the stationary quantities P1 , P2 (s), and P3 (r, s) as tf goes to infinity, then they satisfy 0 = AT P1 + P1 A + Q + γ −2 P1 GG T P1 − {P1 B + P2 (0)}R−1 {P1 B + P2 (0)}T ,



P˙ 2 (s) = AT P2 (s) + γ −2 P1 GG T P2 (s) − {P1 B + P2 (0)}R−1 {BT P2 (s) + P3 (0, s)},

 ∂ ∂ + P3 (r, s) = γ −2 P2 (r)GG T P2 (s) ∂r ∂s

(8.56) (8.57)

−{BT P2 (r) + P3 (0, r)}T R−1 {BT P2 (s) + P3 (0, s)} (8.58) with boundary conditions P2 (−h) = P1 B1 , P3 (−h, s) = B1T P2 (s)

(8.59) (8.60)

8 H∞ Optimal Controls

332

and the infinite horizon H∞ control and the corresponding continuous functional are given from (8.44) and (8.34) by  u∗ (t) = −R−1 {BT P1 + P2T (0)}x(t) +

 0 −h

 {BT P2 (s) + P3 (0, s)}u∗ (t + s)ds ,

 0

V1 (t) = xT (t)P1 x(t) + 2xT (t) P2 (s)u(t + s)ds −h  0  0 + uT (t + r)P3 (r, s)u(t + s)drds, −h −h

(8.61)

(8.62)

When we consider the infinite horizon H∞ cost, we can investigate the asymptotic stability of the closed-loop system with no disturbance. Similar to the infinite horizon LQ optimal control in Sect. 6.2 addressed in [9], the infinite horizon H∞ optimal control (8.61) could asymptotically stabilize the single input delayed system (8.1) if the system is stabilizable [2, 9] and furthermore if the H∞ performance bound γ is chosen to be greater than or equal to the achievable optimal H∞ performance bound γ ∗ . However, differently from the solution of the algebraic Riccati equation P in (8.26) for the ordinary system, in which there exist a finite number of solutions of the algebraic Riccati equation (8.26) and thus we can easily find the unique positive definite solution P among them that provides the stabilizing control, the number of solutions of the coupled partial differential Riccati equations (8.56)–(8.58) is infinite and there seems to be no research on finding the solutions P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) that produce such a stabilizing H∞ control. To check out whether the optimal control asymptotically stabilizes the system (8.1), the continuous functional (8.62) should be examined to satisfy the conditions (2.4) and (2.5) in Theorem 2.1, where the time-derivative negativity (2.5) can be easily verified from the property of the optimal control such that 

d V1 (t) dt

 u(t)=u∗ (t)

= −xT (t)Qx(t) − u∗T (t)Ru∗ (t) + γ 2 w T (t)w(t)

w(t)=0 .

To the best of our knowledge, however, the properties of P1 , P2 (s) and P3 (r, s) to satisfy the condition (2.4) of the continuous functional (8.62) have not been explicitly explained in the literature, and thus the stability analysis has not been explicitly explained. Finite and infinite horizon H∞ controls with standard costs for input delayed systems can be found as in [12, 13]. While infinite horizon H∞ controls have problems in stability proof and computation, we introduce receding horizon H∞ controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

333

8.3 Receding Horizon H∞ Controls for Input Delayed Systems In this section, we assume that the input delayed system (8.1) is stabilizable as in Sect. 6.3. This section deals with receding horizon LQ controls for the predictive cost and the standard cost given in Sect. 6.2

8.3.1 Receding Horizon H∞ Control for Predictive Costs For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. Consider a receding horizon H∞ cost J (t, t + T , u, w; xˆ (t + h))   t+T  xˆ T (s + h| t)Qˆx(s + h| t) + uT (s| t)Ru(s| t) − γ 2 w T (s| t)w(s| t) ds = t

+ xˆ T (t + T + h| t)F xˆ (t + T + h| t)

(8.63)

with ut+T ≡ 0, wt+T ≡ 0, Q > 0, R > 0, and F > 0 for the input delayed system (8.1), which can be constructed by replacing t0 and tf by t and t + T , respectively, from the finite horizon H∞ cost J (t0 , tf , u, w; xˆ (t + h)) (8.11) with the free terminal state. In this case, xˆ (s| t) s=t+h = xˆ (t + h| t) = xˆ (t + h) = z(t). This cost can be transformed into the following receding horizon H∞ cost  z T (s| t)Qz(s| t) + uT (s| t)Ru(s| t) t  − γ 2 w T (s| t)w(s| t) ds + z T (t + T | t)Fz(t + T | t)

 J (t, t + T , u, w; z(t)) =

t+T

(8.64) with Q > 0, R > 0, and F > 0 for the delay-free system (8.3). The delay-free system (8.3) can be expressed as ∂ ¯ ¯ z(s| t) = Az(s| t) + Bu(s| t) + Gw(s| t) ∂s

(8.65)

with the initial state z(s| t)s=t = z(t). In this case, we can obtain the optimal control u∗ (s| t) and the worst disturbance ∗ w (s| t) as follows.

8 H∞ Optimal Controls

334

u∗ (s| t) = −R−1 B¯ T P(s| t)z(s| t), w ∗ (s| t) = γ −2 G¯ T P(s| t)z(s| t),

(8.66) (8.67)

where P(s| t), defined as (8.25), satisfies for s ∈ [t, t + T ] −

∂ ¯ −1 B¯ T P(s| t) P(s| t) = AT P(s| t) + P(s| t)A + Q − P(s| t)BR ∂s + γ −2 P(s| t)G¯ G¯ T P(s| t)

(8.68)

with the boundary condition P(t + T | t) = F. Since K(s| t) is shift invariant and z(t| t) = z(t), the differential Riccati equation (8.68) can be given as, for τ ∈ [0, T ], ¯ −1 B¯ T P(τ ) + γ −2 P(τ )G¯ G¯ T P(τ ) ˙ ) = AT P(τ ) + P(τ )A + Q − P(τ )BR − P(τ (8.69) with the boundary condition P(T ) = F and thus the optimal control (8.66) and the worst disturbance (8.67) become u∗ (s| t) = −R−1 B¯ T P(s − t)z(s| t), w ∗ (s| t) = γ −2 G¯ T P(s − t)z(s| t),

(8.70) (8.71)

The receding horizon H∞ control u(t) is to use the first control u∗ (t| t) on the time interval [t, t + T ] and thus given as u(t) = u∗ (t| t) = −R−1 B¯ T P(0)z(t).

(8.72)

On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) = u∗ (t+ | t+ ) with t replaced with t+ as the time proceeds. If the initial time condition of P(τ ) satisfies P(0) ≤ γ 2 Π,

(8.73)

like (8.24), then we can say that it holds the finite horizon H∞ performance bound at each time t  t+T T z (s| t)Qz(s| t) + uT (s| t)Ru(s| t) ds + VF (t + T | t) t 2 γ ≥ . (8.74)  t+T w T (s| t)w(s| t)ds + VI (t) t For the linear delay system (8.1) with single input delay, the receding horizon H∞ control (8.72) is rewritten as   u(t) = −R−1 B¯ T K(0) eAh x(t) +

t

t−h

 eA(t−τ ) B1 u(τ )d τ .

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

335

Furthermore, the receding horizon H∞ cost is given from (8.21) as J (t, t + T , u∗ , w ∗ ) = xT (t)P(0)x(t),

(8.75)

where P(0) is obtained from (8.69). In the receding horizon LQ controls for the costs J (t, t + T , u; z(t)), J (t, t + T , u; x(t), ut ), and J (t, t + T , u; xt ) in Sects. 6.3 and 6.5, the monotonicity of the optimal LQ costs plays an important role in providing the asymptotic stability of the closed-loop system, which was done by investigating the non-positivity of the time derivative of the optimal LQ costs such as for J (t, t + T , u∗ ; z(t)) 0 ≥ lim

Δ→0

= lim

Δ→0

lim

Δ→0

  1 J (t + Δ, t + T + Δ, u1∗ ; z ∗ (t + Δ| t)) − J (t, t + T , u∗ ; z(t)) Δ   1 J (t + Δ, t + T , u∗ ; z ∗ (t + Δ| t) − J (t, t + T , u∗ ; z(t)) + Δ   1 1∗ ∗ ∗ ∗ J (t + Δ, t + T + Δ, u ; z (t + Δ| t)) − J (t + Δ, t + T , u ; z (t + Δ| t)) , Δ

where u∗ is the optimal control for the LQ cost J (t, t + T , u; z(t)) with the initial state z(t), z ∗ is the corresponding state generated by u∗ , and u1∗ is the optimal control for the LQ cost J (t + Δ, t + T + Δ, u; z ∗ (t + Δ| t)) with the initial state z ∗ (t + Δ| t). Since the first term is always non-positive, the terminal cost monotonicity conditions were constructed from the second term in Sects. 6.3 and 6.5. In the receding horizon H∞ control for the H∞ cost (8.64), however, the resulting H∞ cost is saddled as in (8.16), which leads us to hardly say the monotonicity of the saddle-point H∞ cost, differently from the monotonicity of the optimal LQ costs. However, the terminal cost monotonicity of the saddle-point H∞ cost   1 J (t, t + T + Δ, u1∗ , w 1∗ ; z(t)) − J (t, t + T , u∗ , w ∗ ; z(t)) , (8.76) Δ→0 Δ

0 ≥ lim

where {u1∗ , w 1∗ } and {u∗ , w ∗ } are the sets of the optimal control and the worst disturbance of J (t, t + T + Δ, u, w; z(t)) and of J (t, t + T , u, w; z(t)), respectively, plays a key role in the stability of the closed-loop system with no disturbance and in the infinite horizon H∞ performance bounds of the closed-loop system with disturbances. Terminal Cost Monotonicity Let z 1∗ (α| t) and z ∗ (α| t) be the corresponding states generated by {u1∗ , w 1∗ } and {u∗ , w ∗ }, respectively, in the system (8.65) with the initial state z 1∗ (α| t) α=t = z ∗ (α| t) α=t = z(t). Let us replace {u1∗ (α| t), α ∈ [t, t + T + Δ]} of the optimal cost J (t, t + T + Δ, u1∗ , w 1∗ ; z(t)) with {u∗ (α| t), α ∈ [t, t + T ]} and {u1 (α), α ∈ (t + T , t + T + Δ]}, where u1 is an arbitrary control. Since we use an alternative control instead of u1∗ in the cost J (t, t + T + Δ, u, w; z(t)), we have

8 H∞ Optimal Controls

336

J (t,t + T + Δ, u1∗ , w 1∗ ; z(t))   t+T  1T 1 ∗T ∗ 2 1∗T 1∗ z (s)Qz (s) + u (s| t)Ru (s| t) − γ w (s| t)w (s| t) ds ≤ t



+

t+T +Δ

  z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T 1T

+ z (t + T + Δ)Fz 1 (t + T + Δ)   = J (t, t + T , u∗ , w 1∗ ; z(t)) − z 1T (t + T )Fz 1 (t + T )  +

t+T +Δ

  z 1T (s)Qz 1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T

+ z 1T (t + T + Δ)Fz 1 (t + T + Δ), where {z 1 (α), α ∈ [t, t + T + Δ]} is the corresponding state generated by {u∗ (α| t), α ∈ [t, t + T ]}, {u1 (α), α ∈ (t + T , t + T + Δ]} and {w1∗ (α), ∈ [t, t + T + Δ]} in the system (8.65) with the initial state z 1 (α) α=t = z(t). Furthermore, let us replace w 1∗ of J (t, t + T , u∗ , w 1∗ ; z(t)) with the worst disturbance w ∗ of the cost J (t, t + T , u∗ , w; z(t)), which produces the relation that J (t, t + T , u∗ , w 1∗ ; z(t)) ≤ J (t, t + T , u∗ , w ∗ ; z(t)). Since it holds that 1 1 ¯ 1∗ (t + T | t) ¯ 1 (t + T ) + Gw {z (t + T + Δ) − z 1 (t + T )} Az 1 (t + T ) + Bu Δ with a simple choice of u1 (t + T ) as −Hz 1 (t + T ), i.e. u1 (t + T ) = −Hz 1 (t + T ), the above inequality becomes   1 J (t, t + T + Δ, u1∗ , w1∗ ; z(t)) − J (t, t + T , u∗ , w∗ ; z(t)) Δ→0 Δ lim

≤ z 1T (t + T )Qz 1 (t + T ) + u1T (t + T )Ru1 (t + T ) − γ 2 w1∗T (t + T | t)w1∗ (t + T | t)   ¯ 1∗ (t + T | t) ¯ 1 (t + T ) + Gw + 2z 1T (t + T )F Az 1 (t + T ) + Bu (8.77) = z 1T (t + T )Mz 1 (t + T ) − γ 2 {w1∗ (t + T | t) − γ −2 G¯ T Fz 1 (t + T )}T {w1∗ (t + T | t) − γ −2 G¯ T Fz 1 (t + T )},

(8.78)

where M denotes ¯ ) + (A − BH ¯ )T F + Q + H T RH + γ −2 F G¯ G¯ T F. M = F(A − BH Remark 8.2 In this subsection, we have proven that   ∂ 1 J (α, β, u∗ , w ∗ ; z(α)) = lim J (α, β + Δ, u1∗ , w 1∗ ; z(α)) − J (α, β, u∗ , w ∗ ; z(α)) Δ→0 Δ ∂β ≤ z T (β)Mz(β).

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

337

The condition that M ≤ 0 ensures the terminal monotonicity of the saddle-point H∞ cost, which plays an important role in guaranteeing the asymptotic stability of the delay-free system (8.3) with no disturbance and also the infinite horizon H∞ performance bound of the receding horizon H∞ control (8.72). Stability For the asymptotic stability of the delay-free system (8.3) with no disturbance, consider the saddle-point H∞ cost J (t, t + T , u∗ , w ∗ ; z(t)).

(8.79)

Let us consider for a very small positive Δ J (t + Δ, t + Δ + T , u1∗ , w1∗ ; z(t + Δ)) − J (t, t + T , u∗ , w∗ ; z(t)) = J (t + Δ, t + Δ + T , u1∗ , w1∗ ; z(t + Δ)) − J (t + Δ, t + T , u2∗ , w2∗ ; z(t + Δ)) + J (t + Δ, t + T , u2∗ , w2∗ ; z(t + Δ)) − J (t, t + T , u∗ , w∗ ; z(t)),

(8.80)

where z(t + Δ) denotes the state generated by the initial state z(t), the optimal input u∗ = {u(τ | t), τ ∈ [t, t + Δ]} and w(τ ) = 0 for τ ∈ [t, t + Δ]. Since the receding horizon cost J (t, t + T , u∗ , w ∗ ; z(t)) was designed to satisfy the terminal monotonicity of the saddle-point H∞ cost so that the first part of (8.80) satisfies J (t + Δ, t + Δ + T , u1∗ , w1∗ ; z(t + Δ)) − J (t + Δ, t + T , u2∗ , w2∗ ; z(t + Δ)) ≤ 0,

the relation (8.80) becomes J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; z(t + Δ)) − J (t, t + T , u∗ , w ∗ ; z(t)) ≤ J (t + Δ, t + T , u2∗ , w 2∗ ; z(t + Δ)) − J (t, t + T , u∗ , w ∗ ; z(t)).

(8.81)

If the worst disturbance w∗ of J (t, t + T , u∗ , w ∗ ; z(t)) is replaced with w 3 consisting w3 (τ ) = 0 for τ ∈ [t, t + Δ] and w3 (τ ) = w 2∗ (τ ) for τ ∈ [t + Δ, t + T ], which generates the state z(t + Δ) from the initial state z(t), it holds that J (t, t + T , u∗ , w ∗ ; z(t)) ≥ J (t, t + T , u∗ , w 3 ; z(t)) and thus ∗





  T ∗T ∗ z (τ )Qz(τ ) + u (τ | t)Ru (τ | t) d τ  2∗ 2∗ + J (t + Δ, t + T , u , w ; z(t + Δ)) ,

t+Δ

J (t, t + T , u , w ; z(t)) ≥ t

8 H∞ Optimal Controls

338

which gives this relation   1 2∗ 2∗ ∗ ∗ J (t + Δ, t + T , u , w ; z(t + Δ)) − J (t, t + T , u , w ; z(t))) lim Δ→0 Δ    1 t+Δ T ∗T ∗ z (τ )Qz(τ ) + u (τ | t)Ru (τ | t) d τ . ≤ − lim Δ→0 Δ t From the relation (8.81), therefore, it holds that   d J (t, t + T , u∗ , w ∗ ; z(t)) ≤ − z T (t)Qz(t) + u∗T (t| t)Ru∗ (t| t) . dt Theorem 8.2 If there exist a positive definite matrix F > 0 and a matrix H such that ¯ ) + (A − BH ¯ )T F + Q + H T RH + γ −2 F G¯ G¯ T F ≤ 0, F(A − BH

(8.82)

then the receding horizon H∞ control (8.72) for the cost (8.64) with such an F asymptotically stabilizes the delay-free system (8.3) with no disturbance. Proof The saddle-point H∞ cost generated by the initial state z(t) satisfies that J (t, t + T , u∗ , w ∗ ; z(t)) ≥ J (t, t + T , u∗ , 0; z(t)) ≥ 0. The condition (8.82) for the terminal monotonicity of the saddle-point H∞ cost guarantees that   d J (t, t + T , u∗ , w ∗ ; z(t)) ≤ − z T (t)Qz(t) + u∗T (t| t)Ru∗ (t| t) . dt J (t, t + T , u∗ , w ∗ ; z(t)) is V (t) = z T (t)P(0)z(t) from (8.75). Therefore, these conditions satisfy the conditions of the Lyapunov theorem for ordinary systems, which can be obtained from Theorem 2.1 by replacing x(t) and xt with z(t) and z(t), respectively. This completes the proof.  Remark 8.3 There is an easy way to find F and H that satisfy the condition (8.82) ¯ is stabilizable. In this case, there exists always a matrix H such that when (A, B) ¯ ) is Hurwitz, which ensures the existence of a positive definite matrix F for (A − BH (8.82). The quadratic matrix inequality (8.82) in Theorem 8.2 can be rewritten as linear matrix inequalities. Corollary 8.1 If there exist a positive definite matrix F¯ > 0 and a matrix H¯ such that

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

⎤ ¯ T − H¯ T B¯ T + γ −2 G¯ G¯ T F¯ AF¯ − B¯ H¯ + FA H¯ T ⎣ F¯ −Q−1 0 ⎦ ≤ 0, H¯ 0 −R−1

339



(8.83)

then the receding horizon H∞ control (8.72) for the cost (8.64) with such an F = F¯ −1 asymptotically stabilizes the delay-free system (8.3) with no disturbance. Proof Let us multiply the inequality (8.82) by F¯ = F −1 from the both sides, which can be written into ¯ F¯ + FH ¯ T RH F¯ + γ −2 G¯ G¯ T ≤ 0. ¯ )F¯ + F(A ¯ − BH ¯ )T + FQ (A − BH 

Define H¯ = H F¯ and apply the Schur complement technique for this inequality, which produces the LMI condition (8.83). This completes the proof.  Let us rewrite the inequality (8.77) for the terminal cost monotonicity as follows.   1 1 1∗ ∗ ∗ J (t, t + T + Δ, u , w ; z(t)) − J (t, t + T , u , w ; z(t)) lim Δ→0 Δ  T ≤ z 1T (t + T )Mz 1 (t + T ) + u1 (t + T ) + R−1 F B¯ T Fz 1 (t + T ) R    × u1 (t + T ) + R−1 F B¯ T Fz 1 (t + T ) − γ 2 w 1∗ (t + T | t) −γ

−2

G¯ T Fz 1 (t + T )

T  w (t + T | t) − γ 1∗

−2

 T 1 ¯ G Fz (t + T ) ,

(8.84)

where M denotes ¯ −1 F B¯ T F + γ −2 F G¯ G¯ T F. M = FA + AT F + Q − F BR When Δ = 0, u1 (t + T ) = u∗ (t + T | t). Therefore, from the definition of u∗ (t + T | t) shown in (8.70) at s = t + T we have the relation that 0 = u1 (t + T ) + R−1 F B¯ T Fz 1 (t + T ),

(8.85)

which results in   1 J (t, t + T + Δ, u1 , w 1∗ ; z(t)) − J (t, t + T , u∗ , w ∗ ; z(t)) ≤ z 1T (t + T )Mz 1 (t + T ). Δ→0 Δ lim

Therefore, the terminal cost monotonicity is ensured if M < 0, which gives the following result. Theorem 8.3 If there exists a positive definite matrix F > 0 such that ¯ −1 B¯ T F + γ −2 F G¯ G¯ T F ≤ 0, AT F + FA + Q − F BR

(8.86)

8 H∞ Optimal Controls

340

then the receding horizon H∞ control (8.72) for the cost (8.64) with such an F asymptotically stabilizes the delay-free system (8.3) with no disturbance. Proof When H in (8.82) is replaced with R−1 B¯ T F, we obtain (8.86). If there exists an F which satisfies (8.86), then an H exists and (8.82) is satisfied. From Theorem 8.2, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  It is noted that there exists always a positive definite matrix F satisfying (8.86) if (A, eAh B + B1 ) is stabilizable. The quadratic matrix inequality (8.86) in Theorem 8.3 can be rewritten as linear matrix inequalities. Corollary 8.2 If there exists a positive definite matrix F¯ > 0 such that The Schur complement technique for this inequality produces the following LMI condition 

¯ −1 B¯ T + γ −2 G¯ G¯ T F¯ ¯ T + AF¯ − BR FA F¯ −Q−1

≤ 0,

(8.87)

then the receding horizon H∞ control (8.72) for the cost (8.64) with such an F = F¯ −1 asymptotically stabilizes the delay-free system (8.3) with no disturbance. Proof Let us multiply the inequality (8.86) by F¯ = F −1 from the both sides, which can be written into ¯ F¯ − BR ¯ −1 B¯ T + γ −2 G¯ G¯ T ≤ 0. ¯ T + AF¯ + FQ FA The Schur complement technique for this inequality produces the LMI condition (8.87). This completes the proof.  The stability properties of receding horizon H∞ controls with predictive costs in Theorems 8.2 and 8.3 and their LMI representations in Corollaries 8.1 and 8.2 in this subsection can be partially obtained similarly from [7, 8] for ordinary systems. Infinite Horizon H ∞ Performance Bounds of Receding Horizon H ∞ Control It was shown that the receding horizon H∞ control (8.72) asymptotically stabilizes the delay-free system (8.3) with no disturbance, which ensures the BIBO stability. Therefore, we might consider an infinite horizon H∞ performance. In this case, the initial state z(t0 ) is assumed as zero, i.e. x(t0 ) = 0 and u(t0 + θ) = 0 for all θ ∈ [−h, 0). With the receding horizon H∞ control u¯ = {u∗ (s| s), s ∈ [t0 , ∞)} in (8.72), the infinite horizon H∞ cost is given as follows. J (t0 , ∞, u¯ , w; z(t0 )) =

  ∞ z T (s)Qz(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 wT (s)w(s) ds t0

with Q > 0 and R > 0, where z(s) denotes the state trajectory generated by the initial state z(t0 ), the receding horizon H∞ control u¯ = {u∗ (τ | τ ), τ ∈ [t0 , s]} in (8.72) and an arbitrary disturbance w = {w(τ ), τ ∈ [t0 , s]}. Consider the following relation

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

341

J (t0 , ∞, u¯ , w; z(t0 ))  ∞ = z T (s)Qz(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) t0  d ∗ ∗ + J (s, s + T , u , w ; z(s)) ds ds   ∗ ∗ ∗ ∗ − J (s, s + T , u , w ; z(s))| s=∞ − J (t0 , t0 + T , u , w ; z(t0 )) . Since it is assumed that z(t0 ) = 0, J (t0 , t0 + T , u∗ , w ∗ ; z(t0 )) = 0. Furthermore, from the property of the worst disturbance w∗ , it holds that −J (s, s + T , u∗ , w ∗ ; z(s))s=∞ ≤ −J (s, s + T , u∗ , 0; z(s))s=∞ ≤ 0. Let us consider for a very small positive Δ J (s + Δ, s + T + Δ, u1∗ , w1∗ ; z(s + Δ)) − J (s, s + T , u∗ , w∗ ; z(s)) = J (s + Δ, s + T + Δ, u1∗ , w1∗ ; z(s + Δ)) − J (s + Δ, s + T , u2∗ , w2∗ ; z(s + Δ)) + J (s + Δ, s + T , u2∗ , w2∗ ; z(s + Δ)) − J (s, s + T , u∗ , w∗ ; z(s)),

(8.88)

where z(s + Δ) denotes the state generated by the initial state z(s), the optimal control u∗ = {u∗ (τ | s), τ ∈ [s, s + Δ]} and a disturbance w = {w(τ ), τ ∈ [s, s + Δ]}. The receding horizon cost J (s, s + T , u∗ , w ∗ ; z(s)) was designed to satisfy the terminal monotonicity of the saddle-point H∞ cost so that the first part of (8.88) satisfies J (s + Δ, s + T + Δ, u1∗ , w1∗ ; z(s + Δ)) − J (s + Δ, s + T , u2∗ , w2∗ ; z(s + Δ)) ≤ 0.

If the worst disturbance w ∗ of J (s, s + T , u∗ , w ∗ ; z(s)) for τ ∈ [s, s + Δ] is replaced with w3 consisting w 3 (τ ) = w(τ ) for τ ∈ [s, s + Δ] and w3 (τ ) = w 2∗ (τ ) for τ ∈ [s + Δ, s + T ], which generates the state z(s + Δ) from the initial state z(s), it holds that J (s, s + T , u∗ , w ∗ ; z(s)) ≥ J (s, s + T , u∗ , w 3 ; z(s)) and thus J (s, s + T , u∗ , w ∗ ; z(s))   s+Δ  T ∗T ∗ 2 T z (τ )Qz(τ ) + u (τ | s)Ru (τ | s) − γ w (τ )w(τ ) d τ ≥ s

+ J (s + Δ, s + T , u2∗ , w 2∗ ; z(s + Δ)), which gives the relation   1 2∗ 2∗ ∗ ∗ J (s + Δ, s + T + Δ, u , w ; z(s + Δ)) − J (s, s + T , u , w ; z(s)) lim Δ→0 Δ    1 s+Δ T ∗T ∗ 2 T z (τ )Qz(τ ) + u (τ | s)Ru (τ | s) − γ w (τ )w(τ ) d τ . ≤ − lim Δ→0 Δ s

8 H∞ Optimal Controls

342

From the above relation, therefore, it holds that   d J (s, s + T , u∗ , w∗ ; z(s)) ≤ − z T (s)Qz(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 wT (s)w(s) . ds

Therefore, this concludes that J (t0 , ∞, u¯ , w; z(t0 )) ≤ 0, which produces the following infinite horizon H∞ performance bound ∞

γ ≥ 2

t0

z T (s)Qz(s) + u∗T (s| s)Ru∗ (s| s) ds ∞ . T t0 w (s)w(s)ds

(8.89)

The infinite horizon performance bounds of the receding horizon H∞ controls can be found similarly in [7, 8] for ordinary systems.

8.3.2 Receding Horizon H∞ Control for Standard Costs For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. Consider a standard receding horizon H∞ cost given by J (t, t + T , u, w; x(t), ut ) =

 t+T  xT (s| t)Qx(s| t) + uT (s| t)Ru(s| t) t  − γ 2 wT (s| t)w(s| t) ds + VF (x(t + T | t), ut+T | t ),

(8.90) with Q > 0 and R > 0, where the terminal weighting functional VF (tf ) is defined in (8.30) with F1 > 0 and F2 > 0, which can be obtained from the finite horizon H∞ cost (8.33) by taking t0 = t and tf = t + T . In this case, the input delayed system (8.1) can be written as x˙ (s| t) = Ax(s| t) + Bu(s| t) + B1 u(s − h| t) + Gw(s| t)

(8.91)

with the initial state x(s| t) s=t = x(t). By Theorem 8.1, the optimal controls and the worst disturbances are given for s ∈ (t + T − h, t + T ] and for s ∈ [t, t + T − h] as follows. For s ∈ (t + T − h, t + T ],

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

343

 u2∗ (s| t) = −[R + F2 ]−1 BT W1 (s − t)x(s| t)  +

t+T −s−h −h

w2∗ (s| t) = γ −2 G T







W2 (s − t, a)u (s + a| t)da ,

(8.92)

  t+T −s−h ∗ W1 (s − t)x(s| t) + W2 (s − t, a)u (s + a| t)da , −h

(8.93) where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0], − W˙ 1 (τ ) = AT W1 (τ )+W1 (τ )A+Q+F2 +γ −2 W1T (τ )GG T W1 (τ ) −W1 (τ )B[R + F2 ]−1 BT W1 (τ ),   ∂ ∂ − W2 (τ , s) = AT W2 (τ , s) + γ −2 W1T (τ )GG T W2 (τ , s) + ∂τ ∂s 

∂ ∂ ∂ − + + ∂τ ∂r ∂s

(8.94)

−W1T (τ )B[R + F2 ]−1 BT W2 (τ , s),



(8.95)

W3 (τ , r, s) = γ −2 W2T (τ , r)GG T W2 (τ , s) −W2T (τ , r)B[R + F2 ]−1 BT W2 (τ , s)

(8.96)

with boundary conditions W1 (T ) = F1 , W2 (τ , −h) = W1 (τ )B1 ,

(8.97) (8.98)

W3 (τ , −h, s) = B1T W2 (τ , s).

(8.99)

For s ∈ [t, t + T − h]   u1∗ (s| t) = −R−1 BT P1 (s − t) + P2T (s − t, 0) x(s| t) −1

−R



0

 B P2 (s − t, a) + P3 (s − t, 0, a) u(s + a| t)da, T

−h





w1∗ (s| t) = γ −2 G T P1 (s − t)x(s| t) +



0

−h

(8.100)  G T P2 (s − t, a)u(s + a)da , (8.101)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (t, s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ [0, T − h], s ∈ [−h, 0] and r ∈ [−h, 0],

8 H∞ Optimal Controls

344

−P˙ 1 (τ ) =AT P1 (τ ) + P1 (τ )A + Q + γ −2 P1 (τ )GG T P1 (τ ) − {P1 (τ )B+P2 (τ , 0)}R−1 {BT P1 (τ )+P2T (τ , 0)}, (8.102)   ∂ ∂ − + P2 (τ , s) =AT P2 (τ , s)+γ −2 P1 (τ )GG T P2 (τ , s)−{P1 (τ )B+P2 (τ , 0)} ∂τ ∂s 

× R−1 {BT P2 (τ , s)+P3 (τ , 0, s)},



(8.103)

∂ ∂ ∂ − P3 (τ , r, s) =γ −2 P2 (τ , r)GG T P2 (τ , s) − {BT P2 (τ , r)+P3 (τ , 0, r)}T + + ∂τ ∂r ∂s × R−1 {BT P2 (τ , s)+P3 (τ , 0, s)}

(8.104)

with boundary conditions P2 (0, −h) = P1 (0)B1 ,

(8.105)

B1T P2 (0, s),

P3 (0, −h, s) = P1 (T − h) = W1 (T − h),

(8.106) (8.107)

P2 (T − h, s) = W2 (T − h, s), P3 (T − h, r, s) = W3 (T − h, r, s).

(8.108) (8.109)

Since the horizon length T is a design parameter, we consider the receding horizon H∞ control for 0 < T < h and for h ≤ T . Case I: 0 < T < h The receding horizon H∞ control is represented from (8.92) as   u(t) = u2∗ (t| t) = −[R + F2 ]−1 BT W1 (0)x(t) +

T −h −h

 W2 (0, s)u∗ (t + s)ds , (8.110)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the partial differential Riccati equations (8.94)–(8.96) for τ ∈ [0, T ], s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (8.97)–(8.99). Case II: h ≤ T The receding horizon H∞ control is represented from (8.100) as u(t) =

u1∗ (t| t)

−1

= −R

 T  B P1 (0) + P2T (0, 0) x(t) − R−1



0 −h

 BT P2 (0, s)



+ P3 (0, 0, s) u∗ (t + s)ds, (8.111) where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (t, s, r) satisfy the partial differential Riccati equations (8.102)–(8.104) for τ ∈ [0, T − h], s ∈ [−h, 0] and

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

345

r ∈ [−h, 0] with the boundary conditions (8.105)–(8.109). Here W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations (8.94)–(8.96) for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0] with the boundary conditions (8.97)–(8.99). On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) = u2∗ (t+ | t+ ) for 0 < T < h and u(t+ ) = u1∗ (t+ | t+ ) for h ≤ T with t replaced with t+ as the time proceeds. It is also noted from (8.34) and (8.35) that the optimal cost J (t, t + T , u∗ , w ∗ ) is given as follows. For T < h, J (t, t + T , u∗ , w∗ ; x(t), ut ) = V2 (t) = xT (t)W1 (0)x(t) + 2xT (t) + +

 T −h  T −h −h  t

−h

t+T −h

 T −h −h

W2 (0, s)u∗ (t + s)ds

u∗T (t + r)W3 (0, r, s)u∗ (t + s)drds

u∗T (s)F2 u∗ (s)ds,

(8.112)

where W1 (0) = W1T (0), W2 (0, s), and W3 (0, r, s) = W3T (0, s, r) are obtained from (8.94)–(8.99). For h ≤ T , J (t, t + T , u∗ , w ∗ ; x(t), ut ) = V1 (t)



= x (t)P1 (0)x(t) + 2x (t) T

 +

0

T

0 −h



0

−h

−h

P2 (0, s)u∗ (t + s)ds

u∗T (t + r)P3 (0, r, s)u∗ (t + s)drds, (8.113)

where P1 (0) = P1T (0), P2 (0, s), and P3 (0, r, s) = P3T (0, s, r) are obtained from (8.102)–(8.109). Differently from the monotonicity of the optimal receding horizon LQ costs, as mentioned in the previous subsection, we need to investigate the terminal monotonicity of the saddle-point receding horizon H∞ cost for the asymptotic stability of the closed-loop system with no disturbance and in the infinite horizon H∞ performance bounds of the closed-loop system with disturbances. Terminal Cost Monotonicity Consider   1 J (t, t + T + Δ, u1∗ , w1∗ ; x(t), ut ) − J (t, t + T , u∗ , w∗ ; x(t), ut ) , (8.114) Δ→0 Δ lim

8 H∞ Optimal Controls

346

where {u1∗ , w 1∗ } and {u∗ , w ∗ } are the sets of the optimal control and the worst disturbance of J (t, t + T + Δ, u, w; x(t), ut ) and of J (t, t + T , u, w; x(t), ut ), respectively. Let x1∗ (α| t) and x∗ (α| t) be the corresponding states generated by {u1∗ , w 1∗ } and {u∗ , w ∗ }, respectively, in the system (8.1) with the initial state x1∗ (α| t) α=t = x∗ (α| t) α=t = x(t). Let us replace {u1∗ (α| t), α ∈ [t, t + T + Δ]} of the optimal cost J (t, t + T + Δ, u1∗ , w 1∗ ; x(t), ut ) with {u∗ (α| t), α ∈ [t, t + T ]} and {u1 (α), α ∈ (t + T , t + T + Δ]}, where u1 is an arbitrary control. Since we use an alternative control instead of u1∗ in the cost J (t, t + T + Δ, u, w; x(t), ut ), we have J (t, t + T + Δ, u1∗ , w 1∗ ; x(t), ut )   t+T  x1T (s)Qx1 (s) + u∗T (s| t)Ru∗ (s| t) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds ≤ t



+

t+T +Δ 

 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T



+ x1T (t + T + Δ)F1 x1 (t + T + Δ) +

t+T +Δ t+T +Δ−h

u1T (s)F2 u1 (s)ds

  = J (t, t + T , u∗ , w 1∗ ; x(t), ut ) − x1T (t + T )F1 x1 (t + T ) −  +

t+T +Δ 

t+T t+T −h

 u1T (s)F2 u1 (s)ds

 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T



+ x (t + T + Δ)F1 x1 (t + T + Δ) +

t+T +Δ

1T

t+T +Δ−h

u1T (s)F2 u1 (s)ds,

where {x1 (α), α ∈ [t, t + T + Δ]} is the corresponding state generated by {u∗ (α| t), α ∈ [t, t + T ]}, {u1 (α), α ∈ (t + T , t + T + Δ]} and {w1∗ (α), ∈ [t, t + T + Δ]} in the system (8.1) with the initial state x1 (α) α=t = x(t). Furthermore, let us replace w 1∗ of J (t, t + T , u∗ , w 1∗ ; x(t), ut ) with the worst disturbance w ∗ of the cost J (t, t + T , u∗ , w; x(t), ut ), which produces the relation that J (t, t + T , u∗ , w 1∗ ; x(t), ut ) ≤ J (t, t + T , u∗ , w ∗ ; x(t), ut ). Therefore, the above inequality is written as J (t, t + T + Δ, u1∗ , w1∗ ; x(t), ut )    t+T u1T (s)F2 u1 (s)ds ≤ J (t, t + T , u∗ , w∗ ; x(t)) − x1T (t + T )F1 x1 (t + T ) − t+T −h

  t+T +Δ  x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w1∗T (s| t)w1∗ (s| t) ds + t+T

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) +

 t+T +Δ t+T +Δ−h

u1T (s)F2 u1 (s)ds.

Since it holds that   1 1 1 x (t + T + Δ) − x (t + T ) Δ

Ax1 (t + T ) + Bu1 (t + T ) + B1 u1 (t + T − h) + Gw1∗ (t + T | t),

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

347

the above inequality becomes   1 J (t, t + T + Δ, u1∗ , w 1∗ ; x(t), ut ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ) Δ→0 Δ    t+T +Δ  1 ≤ lim x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds Δ→0 Δ t+T lim

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) − x1T (t + T )F1 x1 (t + T )   t+T +Δ  t+T + u1T (s)F2 u1 (s)ds − u1T (s)F2 u1 (s)ds t+T +Δ−h

1 = lim Δ→0 Δ



t+T +Δ 

(8.115)

t+T −h

 x (s)Qx (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds 1T

1

t+T

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) − x1T (t + T )F1 x1 (t + T )   t+T +Δ  t+T +Δ−h + u1T (s)F2 u1 (s)ds − u1T (s)F2 u1 (s)ds t+T

(8.116)

t+T −h

= x1T (t + T )Qx1 (t + T ) + u1T (t + T )Ru1 (t + T ) − γ 2 w 1∗T (t + T | t)w 1∗ (t + T | t)   + 2x1T (t + T )F1 Ax1 (t + T ) + Bu1 (t + T ) + B1 u1 (t + T − h) + Gw 1∗ (t + T | t) + u1T (t + T )F2 u1 (t + T ) − u1T (t + T − h)F2 u1 (t + T − h).

(8.117)

Therefore, the terminal monotonicity of the saddle-point receding horizon H∞ cost is ensured if we can find u1 (t + T ) yielding the the right part of the above relation. The simple choice of u1 (t + T ) such that u1 (t + T ) = −Hx1 (t + T ) yielding the inequality produces the following terminal monotonicity condition of the saddlepoint receding horizon H∞ cost   1 J (t, t + T + Δ, u1∗ , w 1∗ ; x(t), ut ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ) Δ→0 Δ    T x1 (t + T ) x1 (t + T ) ≤ 1 M 1 − γ 2 w 1∗ (t + T | t) u (t + T − h) u (t + T − h) T   −2 T 1 1∗ −2 T 1 w (t + T | t) − γ G F1 x (t + T ) , − γ G F1 x (t + T ) (8.118) lim

where M denotes  M=

(A − BH )T F1 + F1 (A − BH ) + Q + H T [R + F2 ]H + γ −2 F1 GG T F1 F1 B1 . B1T F1 −F2

Remark 8.4 In this subsection, we have proven that

8 H∞ Optimal Controls

348

∂ J (α, β, u∗ , w ∗ ; x(α), uα ) ∂β   1 J (α, β + Δ, u1∗ , w 1∗ ; x(α), uα ) − J (α, β, u∗ , w ∗ ; x(α), uα ) = lim Δ→0 Δ   T x(β) x(β) ≤ M . u(β − h) u(β − h) The condition that M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the system (8.1) with no disturbance and also the infinite horizon H∞ performance bound of the system (8.1) with disturbances. Stability For the asymptotic stability of the system (8.1) with no disturbance, consider the saddle-point H∞ cost J (t, t + T , u∗ , w ∗ ; x(t), ut ).

(8.119)

Let us consider for a very small positive Δ J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; x(t + Δ), ut+Δ ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ) =J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; x(t + Δ), ut+Δ ) − J (t + Δ, t + T , u2∗ , w 2∗ ; x(t + Δ), ut+Δ ) + J (t + Δ, t + T , u2∗ , w 2∗ ; x(t + Δ), ut+Δ ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ), (8.120) where x(t + Δ) and ut+Δ denote the state and the input generated by the initial state x(t) and the initial input ut , the optimal input u∗ = {u(τ | t), τ ∈ [t, t + Δ]} and w(τ ) = 0 for τ ∈ [t, t + Δ]. Since the receding horizon cost J (t, t + T , u∗ , w ∗ ; x(t), ut ) was designed to satisfy the terminal monotonicity of the saddlepoint H∞ cost so that the first part of (8.120) satisfies J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; x(t + Δ), ut+Δ ) − J (t + Δ, t + T , u2∗ , w 2∗ ; x(t + Δ), ut+Δ ) ≤ 0,

the relation (8.120) becomes J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; x(t + Δ), ut+Δ ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≤ J (t + Δ, t + T , u2∗ , w 2∗ ; x(t + Δ), ut+Δ ) − J (t, t + T , u∗ , w ∗ ; x(t), ut ). (8.121) If the worst disturbance w∗ of J (t, t + T , u∗ , w ∗ ; x(t), ut ) is replaced with w 3 consisting w 3 (τ ) = 0 for τ ∈ [t, t + Δ] and w3 (τ ) = w 2∗ (τ ) for τ ∈ [t + Δ, t + T ], which generates the state x(t + Δ) and the input ut+Δ from the initial state x(t) and the initial input ut , it holds that J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≥ J (t, t + T , u∗ , w 3 ; x(t), ut )

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

349

and thus J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≥

 t

t+Δ

  xT (τ )Qx(τ ) + u∗T (τ | t)Ru∗ (τ | t) d τ

 + J (t + Δ, t + T , u2∗ , w 2∗ ; x(t + Δ), ut+Δ ) ,

which gives this relation lim

Δ→0

  1 J (t + Δ, t + T , u2∗ , w2∗ ; x(t + Δ), ut+Δ ) − J (t, t + T , u∗ , w∗ ; x(t), ut )) Δ    1 t+Δ T x (τ )Qx(τ ) + u∗T (τ | t)Ru∗ (τ | t) d τ . ≤ − lim Δ→0 Δ t

From the relation (8.121), therefore, it holds that   d J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) . dt Theorem 8.4 If there exist positive definite matrices F1 > 0, F2 > 0 and a matrix H such that  (1, 1) F1 B1 ≤ 0, (8.122) B1T F1 −F2 where (1, 1) denotes (1, 1) = (A − BH )T F1 + F1 (A − BH ) + Q + H T [R + F2 ]H + γ −2 F1 GG T F1 , the receding horizon H∞ controls (8.110) and (8.111) for the cost (8.90) with such F1 and F2 asymptotically stabilize the system (8.1) with no disturbance. Proof The saddle-point H∞ cost generated by the initial state x(t) and the initial input u¯ t satisfies that J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≥ J (t, t + T , u∗ , 0; x(t), ut ) ≥ 0. The condition (8.122) for the terminal monotonicity of the saddle-point H∞ cost guarantees that   d J (t, t + T , u∗ , w ∗ ; x(t), ut ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) . dt J (t, t + T , u∗ , w ∗ ; x(t), ut ) is V2 (t) for T < h and V1 (t) for T ≥ h from (8.112) and (8.113), respectively. Therefore, these with cost monotonicity conditions satisfy conditions of the Krasovskii theorem of Theorem 2.1. This completes the proof. 

8 H∞ Optimal Controls

350

The quadratic matrix inequality (8.122) in Theorem 8.4 can be rewritten as linear matrix inequalities, which is shown in the following result. Corollary 8.3 If there exist positive definite matrices F¯ 1 > 0, F¯ 2 > 0 and a matrix H¯ such that ⎤ ⎡ (1, 1) F¯ 1 H¯ T 0 ⎥ ⎢ F¯ 1 −Q−1 0 0 ⎥ ≤ 0, ⎢ (8.123) ⎦ ⎣ H¯ 0 −F¯ 2 F¯ 2 −1 0 0 F¯ 2 −R − F¯ 2 where (1, 1) denotes (1, 1) = F¯ 1 AT − H¯ T BT + AF¯ 1 − BH¯ + γ −2 GG T + B1 F¯ 2 B1T , the receding horizon H∞ controls (8.110) and (8.111) for the cost (8.90) with F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (8.1) with no disturbance. Proof By using the Schur complement technique, the condition (8.122) can be written as 0 > (A − BH )T F1 + F1 (A − BH ) + Q + H T [R + F2 ]H + γ −2 F1 GG T F1 − F1 B1 F2−1 B1T F1   −1 = F1−1 F¯ 1 AT − H¯ T BT + AF¯ 1 − BH¯ + H¯ T F¯ 2 − F¯ 2 (R−1 + F¯ 2 )−1 F¯ 2 H¯  + F¯ 1 QF¯ 1 + γ −2 GG T + B1 F¯ 2 B1T F1−1 . By using the congruence transformation and the Schur complement technique, the above can be written as the condition (8.123). Therefore, by Theorem 8.4, this completes the proof.  Let us rewrite the inequality (8.117) as follows.   1 J (t, t + T + Δ, u1 , w 1∗ ; x(t)) − J (t, t + T , u∗ , w ∗ ; x(t)) Δ→0 Δ T   x1 (t + T ) x1 (t + T ) ≤ 1 M 1 u (t + T − h) u (t + T − h)  T  + u1 (t + T ) + [R + F2 ]−1 BT F1 x1 (t + T ) [R + F2 ] u1 (t + T )   + [R + F2 ]−1 BT F1 x1 (t + T ) − γ 2 w 1∗ (t + T | t) lim

−γ

−2

T  G F1 x (t + T ) T

1

w (t + T | t) − γ 1∗

−2

 G F1 x (t + T ) T

1

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

351



 T x1 (t + T ) x1 (t + T ) ≤ 1 M 1 u (t + T − h) u (t + T − h) T   + u1 (t + T ) + [R + F2 ]−1 BT F1 x1 (t + T ) [R + F2 ] u1 (t + T )  (8.124) + [R + F2 ]−1 BT F1 x1 (t + T ) , where M denotes  F1 A + AT F1 + Q − F1 B[R + F2 ]−1 BT F1 + γ −2 F1 GG T F1 F1 B1 . M= B1T F1 −F2 An alternative choice of u1 (t + T ) as −[R + F2 ]−1 BT F1 x1 (t + T ), i.e. u1 (t + T ) = −[R + F2 ]−1 BT F1 x1 (t + T ), provides the following result. Theorem 8.5 If there exist positive definite matrices F1 > 0 and F2 > 0 and such that  F1 A + AT F1 + Q − F1 B[R + F2 ]−1 BT F1 + γ −2 F1 GG T F1 F1 B1 ≤ 0, (8.125) B1T F1 −F2 the receding horizon H∞ controls (8.110) and (8.111) for the cost (8.90) with such F1 and F2 asymptotically stabilize the system (8.1) with no disturbance. Proof When H in (8.122) is replaced with [R + F2 ]−1 BT F1 , we obtain (8.125). If there exist F1 and F2 which satisfy (8.125), then an H exists and (8.122) is satisfied. From Theorem 8.4, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  The quadratic matrix inequality (8.125) in Theorem 8.5 can be rewritten as linear matrix inequalities, which is shown in the following result. Corollary 8.4 If there exist positive definite matrices F¯ 1 > 0 and F¯ 2 > 0 and such that ⎤ ⎡ F¯ 1 AT + AF¯ 1 − BF¯ 2 BT + B1 F¯ 2 B1T + γ −2 GG T F¯ 1 BF¯ 2 ⎦ ≤ 0, ⎣ −Q−1 0 F¯ 1 T F¯ 2 B 0 −R−1 − F¯ 2 (8.126) the receding horizon H∞ controls (8.110) and (8.111) for the cost (8.90) with F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (8.1) with no disturbance. Proof By using the Schur complement technique, the condition (8.125) can be written as 0 ≥ F1 A + AT F1 + Q − F1 B[R + F2 ]−1 BT F1 + γ −2 F1 GG T F1 + F1 B1 F2−1 B1T F1

8 H∞ Optimal Controls

352

 = F1−1 F¯ 1 AT + AF¯ 1 + F¯ 1 QF¯ 1 + γ −2 GG T + B1 F¯ 2 B1T −1   −B F¯ 2 − F¯ 2 (R−1 + F¯ 2 )F¯ 2 BT F1−1 .

By using the congruence transformation and the Schur complement technique, the above can be written as the condition (8.126). Therefore, by Theorem 8.5, this completes the proof.  Infinite Horizon H ∞ Performance Bounds of Receding Horizon H ∞ Controls With the receding horizon H∞ control u¯ = {u∗ (s| s), s ∈ [t0 , ∞)} in (8.110) or (8.111), the infinite horizon H∞ cost is given as follows.  J (t0 , ∞, u¯ , w; x(t0 ), ut0 ) =

 xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) ds

∞ t0

with Q > 0, R > 0, x(t0 ) = 0, and ut0 ≡ 0, where x(s) and us denote the state trajectory and the input trajectory generated by the initial state x(t0 ) and the initial input ut0 , the receding horizon H∞ control u¯ = {u∗ (τ | τ ), τ ∈ [t0 , s]} and an arbitrary disturbance w = {w(τ ), τ ∈ [t0 , s]}. Consider the following relation J (t0 , ∞, u¯ , w; x(t0 ), ut0 )  ∞ xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) = t0  d ∗ ∗ + J (s, s + T , u , w ; x(s), us ) ds ds   − J (s, s + T , u∗ , w ∗ ; x(s), us )| s=∞ − J (t0 , t0 + T , u∗ , w ∗ ; x(t0 ), ut0 ) . Since it is assumed that x(t0 ) = 0 and ut0 ≡ 0, J (t0 , t0 + T , u∗ , w ∗ ; x(t0 ), ut0 ) = 0. Furthermore, from the property of the worst disturbance w∗ , it holds that −J (s, s + T , u∗ , w ∗ ; x(s), us )s=∞ ≤ −J (s, s + T , u∗ , 0; x(s), us )s=∞ ≤ 0. Let us consider for a very small positive Δ J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; x(s + Δ), us+Δ ) − J (s, s + T , u∗ , w ∗ ; x(s), us ) =J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; x(s + Δ), us+Δ ) − J (s + Δ, s + T , u2∗ , w 2∗ ; x(s + Δ), us+Δ ) + J (s + Δ, s + T , u2∗ , w 2∗ ; x(s + Δ), us+Δ ) − J (s, s + T , u∗ , w ∗ ; x(s), us ), (8.127)

8.3 Receding Horizon H∞ Controls for Input Delayed Systems

353

where x(s + Δ) and us+Δ denote the state and the input generated by the initial state x(s) and the initial input us , the optimal control u∗ = {u∗ (τ | s), τ ∈ [s, s + Δ]} and a disturbance w = {w(τ ), τ ∈ [s, s + Δ]}. The receding horizon cost J (s, s + T , u∗ , w ∗ ; x(s), us ) was designed to satisfy the terminal monotonicity of the saddlepoint H∞ cost so that the first part of (8.127) satisfies J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; x(s + Δ), us+Δ ) − J (s + Δ, s + T , u2∗ , w 2∗ ; x(s + Δ), us+Δ ) ≤ 0. If the worst disturbance w ∗ of J (s, s + T , u∗ , w ∗ ; x(s), us ) for τ ∈ [s, s + Δ] is replaced with w 3 consisting w 3 (τ ) = w(τ ) for τ ∈ [s, s + Δ] and w3 (τ ) = w 2∗ (τ ) for τ ∈ [s + Δ, s + T ], which generates the state x(s + Δ) and the input us+Δ from the initial state x(s) and the initial input us , it holds that J (s, s + T , u∗ , w ∗ ; x(s), us ) ≥ J (s, s + T , u∗ , w 3 ; x(s), us ) and thus J (s, s + T ,u∗ , w ∗ ; x(s), us )   s+Δ  T ∗T ∗ 2 T ≥ x (τ )Qx(τ ) + u (τ | s)Ru (τ | s) − γ w (τ )w(τ ) d τ s

+ J (s + Δ, s + T , u2∗ , w 2∗ ; x(s + Δ), us+Δ ), which gives the relation   1 J (s + Δ, s + T + Δ, u2∗ , w 2∗ ; x(s + Δ), us+Δ ) − J (s, s + T , u∗ , w ∗ ; x(s), us ) Δ→0 Δ    1 s+Δ T x (τ )Qx(τ ) + u∗T (τ | s)Ru∗ (τ | s) − γ 2 w T (τ )w(τ ) d τ . ≤ − lim Δ→0 Δ s lim

From the above relation, therefore, it holds that   d J (s, s + T , u∗ , w ∗ ; x(s), us ) ≤ − xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) . ds

Therefore, this concludes that J (t0 , ∞, u¯ , w; x(t0 ), ut0 ) ≤ 0, which produces the following infinite horizon H∞ performance bound ∞

γ ≥ 2

t0

xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) ds ∞ . T t0 w (s)w(s)ds

(8.128)

The stability properties of receding horizon H∞ controls with standard costs in Theorems 8.4 and 8.5 and their LMI representations in Corollaries 8.3 and 8.4 together with the infinite horizon H∞ performance bound in this subsection can be found partially as in [13, 14].

8 H∞ Optimal Controls

354

8.4 Fixed Horizon H∞ Controls for State Delayed Systems Consider a state delayed system given by x˙ (t) = Ax(t) + A1 x(t − h) + Bu(t) + Gw(t)

(8.129)

with the initial condition x(t0 + θ) = φ(θ), φ ∈ Cn,h for θ ∈ [−h, 0]. Consider a finite horizon H∞ cost given by  J (t0 , tf , u, w; xt ) =

tf t0



xT (s)Qx(s) + uT (s)Ru(s) − γ 2 w T (s)w(s) ds + VF (xtf )

(8.130) with Q ≥ 0 and R > 0, where the terminal weighting functional VF (xtf ) denotes  VF (xtf ) = xT (tf )F1 x(tf ) +  ×F3

tf tf −h

 xT (s)F2 x(s)ds +

  x˙ (r) − Ax(r) drds +

0 −h



tf

tf +s

0 −h



tf tf +s



T x˙ (r) − Ax(r)

xT (r)F4 x(r)drds

(8.131)

with Fi ≥ 0, which appeared in (6.145).

8.4.1 Fixed Horizon H∞ Control for Costs with Single Integral Terms Consider the finite horizon H∞ cost J (t0 , tf , u, w) (8.130) with F1 ≥ 0, F2 ≥ 0, F3 = 0 and F4 = 0. Similar to the finite horizon LQ optimal control problem, we choose two continuous functionals, V1 (t) for t0 ≤ t ≤ tf − h and V2 (t) for tf − h < t ≤ tf , to avoid discontinuity at time t = tf − h. Using V1 (t) and V2 (t), the optimal controls u1∗ (·) and u2∗ (·) are also obtained for each horizon in the following result. Theorem 8.6 For the state delayed system (8.129), the optimal control u∗ and the worst disturbance w ∗ for the finite horizon H∞ control for the cost (8.130) with F1 ≥ 0, F2 ≥ 0, F3 = 0 and F4 = 0 are given as follows. For t ∈ (tf − h, tf ], the optimal control and the worst disturbance are given by u2∗ (t)

−1 T

= −R B



 W1 (t)x(t) +

  w2∗ (t) = γ −2 G T W1 (t)x(t) +

tf −t−h

−h tf −t−h

−h

 W2 (t, s)x(t + s)ds ,

 W2 (t, s)x(t + s)ds .

(8.132) (8.133)

8.4 Fixed Horizon H∞ Controls for State Delayed Systems

355

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] − W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q + F2 −W1T (t)(BR−1 BT − γ −2 GG T )W1 (t),   ∂ ∂ − + W2 (t, s) = AT W2 (t, s) ∂t ∂s −W1T (t)(BR−1 BT − γ −2 GG T )W2 (t, s),   ∂ ∂ ∂ − + + W3 (t, r, s) = −W2T (t, r)(BR−1 BT − γ −2 GG T )W2 (t, s) ∂t ∂r ∂s

(8.134)

(8.135) (8.136)

with boundary conditions W1 (tf ) = F1 ,

(8.137)

W2 (t, −h) = W1 (t)A1 , W3 (t, −h, s) = AT1 W2 (t, s).

(8.138) (8.139)

For t ∈ [t0 , tf − h], the optimal control and the worst disturbance are given by   u1∗ (t) = −R−1 BT P1 (t)x(t) +   w1∗ (t) = γ −2 G T P1 (t)x(t) +

0

−h 0

−h

 P2 (t, s)x(t + s)ds ,

 P2 (t, s)x(t + s)ds ,

(8.140) (8.141)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q + P2 (t, 0) + P2T (t, 0) −P1 (t)(BR−1 BT − γ −2 GG T )P1 (t),   ∂ ∂ − + P2 (t, s) = AT P2 (t, s) + P3 (t, 0, s) ∂t ∂s −P1 (t)(BR−1 BT − γ −2 GG T )P2 (t, s),   ∂ ∂ ∂ − + + P3 (t, r, s) = −P2T (t, r)(BR−1 BT − γ −2 GG T )P2 (t, s) ∂t ∂r ∂s

(8.142)

(8.143) (8.144)

with boundary conditions P2 (t, −h) = P1 (t)A1 , P3 (t, −h, s) = AT1 P2 (t, s), P1 (tf − h) = W1 (tf − h), P2 (tf − h, s) = W2 (tf − h, s), P3 (tf − h, r, s) = W3 (tf − h, r, s).

(8.145) (8.146) (8.147) (8.148) (8.149)

8 H∞ Optimal Controls

356

Proof For t ∈ (tf − h, tf ], choose a continuous functional  V2 (t) = xT (t)W1 (t)x(t) + 2xT (t)  +  +

tf −t−h −h t



tf −t−h −h

tf −t−h −h

W2 (t, s)x(t + s)ds

xT (t + r)W3 (t, r, s)x(t + s)drds

x (s)F2 x(s)ds, T

tf −h

(8.150)

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) are assumed to be differentiable with respect to t, r, and s. After some tedious manipulation, we obtain d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 w T (t)w(t) dt   = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2 x(t)  + x(t − h) 2A1 T W1 (t) − 2W2T (t, −h) x(t)  ∂W2 (t, s) ∂W2 (t, s) T −2 +2 + 2A W2 (t, s) x(t + s)ds + x (t) ∂s ∂t −h  tf −t−h  T T + x (t − h) 2A1 W2 (t, s) − 2W3 (t, −h, s) x(t + s)ds 

T

tf −t−h

−h tf −t−h

 ∂ ∂ ∂ − − W3 (t, r, s) x(t + s)drds ∂t ∂r ∂s −h −h    tf −t−h W2 (t, s)x(t + s)ds + 2{Bu(t) + Gw(t)}T W1 (t)x(t) + 

+

tf −t−h





xT (t + r)

−h

+ uT (t)Ru(t) − γ 2 w T (t)w(t). The optimal control u2∗ (·) and the worst disturbance w2∗ (·) are found via the following relation.   ∂ d T T 2 T ∂u(t) V2 (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) , 0= ∂ dt ∂w(t) which provides the solutions (8.132) and (8.133). Therefore, we have 

 d T T 2 T V2 (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) dt u=u2∗ ,w=w2∗  = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2  − W1T (t)(BR−1 BT − γ −2 GG T )W1 (t) x(t)

8.4 Fixed Horizon H∞ Controls for State Delayed Systems



+ 2x(t − h) A1 T W1 (t) − W2T (t, −h) x(t) + 2xT (t)

357 tf −t−h 

∂W2 (t, s) ∂s  ∂W2 (t, s) + AT W2 (t, s) − 2W1T (t)(BR−1 BT − γ −2 GG T )W2 (t, s) x(t + s)ds + ∂t  tf −t−h

+ xT (t − h) 2A1 T W2 (t, s) − 2W3 (t, −h, s) x(t + s)ds −h



−h tf −t−h  tf −t−h

  ∂ ∂ ∂ − − W3 (t, r, s) xT (t + r) ∂t ∂r ∂s −h −h  − W2T (t, r)(BR−1 BT − γ −2 GG T )W2 (t, s) x(t + s)drds, 

+

which results in the coupled partial differential Riccati equations (8.134)–(8.136) with boundary conditions (8.138)–(8.139). For t ∈ [t0 , tf − h], let us choose a continuous functional such as  V1 (t) = x (t)P1 (t)x(t) + 2x (t) T

 +

0

T

0 −h



0

−h

−h

P2 (t, s)x(t + s)ds

xT (t + r)P3 (t, r, s)x(t + s)drds,

(8.151)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) are assumed to be differentiable with respect to t, r, and s. After some tedious manipulation, we obtain d V1 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt = xT (t)[P˙ 1 (t) + AT P1 (t) + P1 (t)A + Q + P2 (t, 0) + P2T (t, 0)]x(t) + x(t − h)[2A1 T P1 (t) − 2P2T (t, −h)]x(t)  0 ∂P2 (t, s) ∂P2 (t, s) + xT (t) −2 +2 + 2AT P2 (t, s) + 2P3 (t, 0, s) x(t + s)ds ∂s ∂t −h  0 + xT (t − h) [2A1 T P2 (t, s) − 2P3 (t, −h, s)]x(t + s)ds −h

  ∂ ∂ ∂ − − P3 (t, r, s) x(t + s)drds xT (t + r) ∂t ∂r ∂s −h −h   0 + 2[Bu(t) + Gw(t)]T P1 (t)x(t) + P2 (t, s)x(t + s)ds 

+

0



0

−h

+ u (t)Ru(t) − γ w (t)w(t). T

2

T

The optimal control u2∗ (·) and the worst disturbance w2∗ (·) are found via the following relation.   ∂ d T T 2 T ∂u(t) V (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) , 0= 1 ∂ dt ∂w(t)

8 H∞ Optimal Controls

358

which provides the solutions (8.140) and (8.141). Therefore, we have 

d V1 (t) + xT (t)Qx(t) + u1T (t)Ru1 (t) − γ −2 w T (t)w(t) dt

 u=u1∗ ,w=w1∗

= xT (t)[P˙ 1 (t) + AT P1 (t) + P1 (t)A + Q − P1 (t)(BR−1 BT − γ −2 GG T )P1 (t) + P2 (t, 0) + P2T (t, 0)]x(t) + 2x(t − h)[A1 T P1 (t) − P2T (t, −h)]x(t)   0  ∂ ∂ T P2 (t, s) + AT P2 (t, s) + P3 (t, 0, s) + 2x (t) − ∂t ∂s −h −1 T −2 T − P1 (t)(BR B − γ GG )P2 (t, s) x(t + s)ds  + 2x (t − h)

0

T



A1 P2 (t, s) − P3 (t, −h, s) x(t + s)ds T

−h

 ∂ ∂ ∂ − − P3 (t, r, s) ∂t ∂r ∂s −h −h − P2T (t, r)(BR−1 BT − γ −2 GG T )P2 (t, s) x(t + s)drds, 

+

0



0



xT (t + r)

which results in the coupled partial differential Riccati equations (8.142)–(8.144) with boundary conditions (8.145)–(8.146). From the continuity conditions that V1 (tf ) = VF (xtf ) and V1 (tf − h) = V2 (tf − h), we need to have (8.137) and (8.147)–(8.149), respectively. This completes the proof.  Finite horizon H∞ controls for costs with single integral terminal terms for state delayed systems in Theorem 8.6 can be found as in [6, 10].

8.4.2 *Fixed Horizon H∞ Control for Costs with Double Integral Terms Consider the finite horizon H∞ cost J (t0 , tf , u, w) (8.130) with Fi ≥ 0. Similar to finite horizon LQ optimal control problem, we choose three continuous functionals, V1 (t), V2 (t), and V3 (t) for tf − 2h < t ≤ tf − h, and tf − h < t ≤ tf , respectively, to avoid discontinuity between V1 (t) and V2 (t) at time t = tf − 2h and between V2 (t) and V3 (t) at time t = tf − h. Using V1 (t) and V2 (t), and V3 (t), the H∞ optimal controls u1∗ (·), u2∗ (·), and u3∗ (·) are also obtained for each horizon in the following result. ¯ Theorem 8.7 Define B¯ and R(s) as follows.  B¯ T =



  f (s)BT F3 G BT R + f (s)BT F3 B ¯ , (8.152) , R(s) = f (s)G T F3 B −γ 2 I + f (s)G T F3 G GT

8.4 Fixed Horizon H∞ Controls for State Delayed Systems

359



where f (s) = s − tf + h. For the state delayed system (8.129), the optimal control u∗ and the worst disturbance w ∗ for the finite horizon H∞ cost (8.130) with Fi ≥ 0, where F3 yields 0 < γ 2 I − hG T F3 G,

(8.153)

are given as follows. For t ∈ (tf − h, tf ], the optimal control and the worst disturbance are given by 

 u3∗ (t) −1 T ¯ ¯ = −R (t)B f (t)F3 A1 x(t − h) + W1 (t)x(t) w3∗ (t)   tf −t−h + W2 (t, s)x(t + s)ds ,

(8.154)

−h

where W1 (t) = W1T (t), W2 (t, s), and W3 (t, r, s) = W3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] − W˙ 1 (t) = AT W1 (t) + W1 (t)A + Q + F2 + hF4 −W1 (t)B¯ R¯ −1 (t)B¯ T W1 (t), (8.155)

  ∂ ∂ − + W2 (t, s) = AT W2 (t, s) − W1 (t)B¯ R¯ −1 (t)B¯ T W2 (t, s), ∂t ∂s (8.156)   ∂ ∂ ∂ − + + W3 (t, r, s) = −W2T (t, r)B¯ R¯ −1 (t)B¯ T W2 (t, s) (8.157) ∂t ∂r ∂s with boundary conditions W1 (tf ) = F1 , W2 (t, −h) = W1 (t)[I − f (t)B¯ R¯ −1 (t)B¯ T F3 ]A1 , W3 (t, −h, s) = AT1 [I − f (t)F3 B¯ R¯ −1 (t)B¯ T ]W2 (t, s).

(8.158) (8.159) (8.160)

For t ∈ (tf − 2h, tf − h], the optimal control and the worst disturbance are given by 

   0 u2∗ (t) −R−1 BT = S (t)x(t) + S (t, s)x(t + s)ds , 1 2 γ −2 G T w2∗ (t) −h

(8.161)

where S1 (t) = S1T (t), S2 (t, s), and S3 (t, r, s) = S3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] the coupled partial differential Riccati equations − S˙ 1 (t) = AT S1 (t) + S1 (t)A + Q

−S1 (t)(BR−1 BT − γ −2 GG T )S1 (t)

8 H∞ Optimal Controls

360

¯ −1 (t + h)B¯ T F3 A1 −f 2 (t + h)AT1 F3 BR 

 −



(8.162)

∂ ∂ + S2 (t, s) = AT S2 (t, s) + S3 (t, 0, s) ∂t ∂s −S1 (t)(BR−1 BT − γ −2 GG T )S2 (t, s),





+S2 (t, 0) + S2T (t, 0) + f (t + h)AT1 F3 A1 ,

∂ ∂ ∂ S3 (t, r, s) = −S2T (t, r)(BR−1 BT − γ −2 GG T )S2 (t, s) + + ∂t ∂r ∂s

(8.163) (8.164)

with boundary conditions S2 (t, −h) = S1 (t)A1 , S3 (t, −h, s) = AT1 S2 (t, s),

(8.165) (8.166)

S1 (tf − h) = W1 (tf − h), S2 (tf − h, s) = W2 (tf − h, s),

(8.167) (8.168)

S3 (tf − h, r, s) = W3 (tf − h, r, s).

(8.169)

For t ∈ [t0 , tf − 2h], the optimal control and the worst disturbance are given by 

   0 u1∗ (t) −R−1 BT = P1 (t)x(t) + P2 (t, s)x(t + s)ds , γ −2 G T w1∗ (t) −h

(8.170)

where P1 (t) = P1T (t), P2 (t, s), and P3 (t, r, s) = P3T (t, s, r) satisfy the coupled partial differential Riccati equations for s ∈ [−h, 0] and r ∈ [−h, 0] − P˙ 1 (t) = AT P1 (t) + P1 (t)A + Q + P2 (t, 0) + P2T (t, 0) −P1 (t)(BR−1 BT − γ −2 GG T )P1 (t),   ∂ ∂ P2 (t, s) = AT P2 (t, s) + P3 (t, 0, s) − + ∂t ∂s 



−P1 (t)(BR−1 BT − γ −2 GG T )P2 (t, s),

(8.171)

(8.172)

∂ ∂ ∂ + P3 (t, r, s) = −P2T (t, r)(BR−1 BT − γ −2 GG T )P2 (t, s) (8.173) − + ∂t ∂r ∂s

with boundary conditions P2 (t, −h) = P1 (t)A1 , P3 (t, −h, s) = AT1 P2 (t, s).

(8.174) (8.175)

P1 (tf − 2h) = S1 (tf − 2h), P2 (tf − 2h, s) = S2 (tf − 2h, s),

(8.176) (8.177)

P3 (tf − 2h, r, s) = S3 (tf − 2h, r, s).

(8.178)

Proof For t ∈ (tf − h, tf ], let us choose a continuous functional

8.4 Fixed Horizon H∞ Controls for State Delayed Systems

 V3 (t) = x (t)W1 (t)x(t) + 2x (t) T

 + +

tf −t−h

T

tf −t−h

−h  t tf −h



tf −t−h −h

−h

361

W2 (t, s)x(t + s)ds

xT (t + r)W3 (t, r, s)x(t + s)drds

 xT (s)F2 x(s) + f (s) {˙x(α) − Ax(α)}T F3 {˙x(α) − Ax(α)}

 x (s) f (s + h)AT1 F3 A1 +f (s)x (α)F4 x(α) ds + t−h 2 T −1 T ¯ ¯ ¯ −f (s + h)A1 F3 BR (s + h)B F3 A1 x(s)ds. T



tf −h

T

(8.179)

After some tedious manipulation, we obtain d V3 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 w T (t)w(t) dt   = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2 + hF4 x(t)  + 2x(t − h) A1 T W1 (t) − W2T (t, −h) x(t) ∂W2 (t, s) ∂W2 (t, s) + + AT W2 (t, s) x(t + s)ds ∂s ∂t −h  tf −t−h  + 2xT (t − h) A1 T W2 (t, s) − W3 (t, −h, s) x(t + s)ds 

+ 2xT (t)

tf −t−h 



−h tf −t−h  tf −t−h

  ∂ ∂ ∂ xT (t + r) − − W3 (t, r, s) x(t + s)drds ∂t ∂r ∂s −h −h 2 T T −1 T + f (t)x (t − h)A1 F3 B¯ R¯ (t)B¯ F3 A1 x(t − h)    tf −t−h + 2{Bu(t) + Gw(t)}T f (t)F3 A1 x(t − h) + W1 (t)x(t) + W2 (t, s)x(t + s)ds 

+

−h

+ uT (t)Ru(t) − γ 2 w T (t)w(t) + f (t){Bu(t) + Gw(t)}T F3 {Bu(t) + Gw(t)}.

The optimal control u3∗ (·) and the worst disturbance w3∗ (·) are found via the following relation.   ∂ d T T 2 T ∂u(t) V3 (t) + x (t)Qx(t) + u (t)Ru(t) − γ w (t)w(t) 0= ∂ dt ∂w(t)    tf −t−h T ¯ W2 (t, s)x(t + s)ds = 2B f (t)F3 A1 x(t − h) + W1 (t)x(t) + −h  u(t) ¯ + 2R(t) , w(t) where because of (8.153) we have

8 H∞ Optimal Controls

362

0 < γ 2 I − f (s)G T F3 G,

(8.180)

¯ which provides the inertia property of R(s) for the existence of the optimal control u3∗ and the worst disturbance w3∗ as in (8.154). Therefore, we have 

 d V2 (t) + xT (t)Qx(t) + uT (t)Ru(t) − γ 2 wT (t)w(t) dt u=u3∗ ,w=w3∗   = xT (t) W˙ 1 (t) + AT W1 (t) + W1 (t)A + Q + F2 + hF4 − W1 (t)B¯ R¯ −1 (t)B¯ T W1 (t) x(t)  + 2x(t − h) A1 T W1 (t) − W2T (t, −h) − f (t)AT1 F3 B¯ R¯ −1 (t)B¯ T W1 (t) x(t) tf −t−h 



∂W2 (t, s) ∂W2 (t, s) + + AT W2 (t, s) ∂s ∂t  tf −t−h  − W1 (t)B¯ R¯ −1 (t)B¯ T W2 (t, s) x(t + s)ds + 2xT (t − h) A1 T W2 (t, s) −h − W3 (t, −h, s) − f (t)AT1 F3 B¯ R¯ −1 (t)B¯ T W2 (t, s) x(t + s)ds

+ 2xT (t)

−h



  ∂ ∂ ∂ − − W3 (t, r, s) xT (t + r) ∂t ∂r ∂s −h −h − W2T (t, r)B¯ R¯ −1 (t)B¯ T W2 (t, s) x(t + s)drds, 

+

tf −t−h  tf −t−h

which results in the coupled partial differential Riccati equations (8.155)–(8.157) with boundary conditions (8.159)–(8.160). For t ∈ (tf − 2h, tf − h], choose a continuous functional  V2 (t) = xT (t)S1 (t)x(t) + 2xT (t)  +

0



0

−h −h

0

−h

S2 (t, s)x(t + s)ds 

xT (t + r)S3 (t, r, s)x(t + s)drds +

t tf −2h

xT (s){f (s + h)AT1 F3 A1

¯ −1 (s + h)B¯ T F3 A1 }x(s)ds. −f 2 (s + h)AT1 F3 BR

(8.181)

Following the similar step to the previous proof for Theorem 8.6, we obtain the optimal control u2∗ (·) and the worst disturbance w2∗ (·) as in (8.161), which provides the partial differential Riccati equations (8.162)–(8.164) with boundary conditions (8.165) and (8.166). For t ∈ [t0 , tf − 2h], choose a continuous functional  V1 (t) = xT (t)P1 (t)x(t) + 2xT (t)  +

0 −h



0

−h

0 −h

P2 (t, s)x(t + s)ds

xT (t + r)P3 (t, r, s)x(t + s)drds.

(8.182)

8.4 Fixed Horizon H∞ Controls for State Delayed Systems

363

Following the similar step to the previous proof for Theorem 8.6, we obtain the optimal control u1∗ (·) and the worst disturbance w1∗ (·) as in (8.170) for t ∈ [t0 , tf − 2h], which provides the coupled partial differential Riccati equations (8.171)–(8.173) with boundary conditions (8.174)–(8.175). Because V1 (t), V2 (t), and V3 (t) are the continuous functionals for t0 ≤ t ≤ tf − 2h, tf − 2h < t ≤ tf − h, and tf − h < t ≤ tf , respectively, it is clear that V1 (tf − 2h) = V2 (tf − 2h) and V2 (tf − h) = V3 (tf − h), which produces (8.176)–(8.178) and (8.167)–(8.169), respectively. Furthermore, we need V3 (tf ) = VF (xtf ), which produces (8.158). This completes the proof.  Finite horizon H∞ controls for cost with double integral terminal terms for state delayed systems in Theorem 8.7 can be found as in [11]. Infinite Horizon H ∞ Control We consider the infinite horizon H∞ cost (8.130). In this case, the initial state x(t0 + θ) for all θ ∈ [−h, 0] is assumed to be zero, i.e. x(t0 + θ) = 0 for θ ∈ [−h, 0]. If the coupled partial differential Riccati solutions P1 (t), P2 (t, s), and P3 (t, r, s) in (8.142)– (8.144) and in (8.171)–(8.173) go to the stationary quantities P1 = P1T , P2 (s), and P3 (t, r, s =)P3T (t, s, r) as tf goes to infinity, then they satisfy 0 = AT P1 + P1 A + Q + P2 (0) + P2T (0) −P1 (BR−1 BT − γ −2 GG T )P1 , 

P˙ 2 (s) = AT P2 (s) + P3 (0, s) − P1 (BR−1 BT − γ −2 GG T )P2 (s),

 ∂ ∂ + P3 (r, s) = −P2T (r)(BR−1 BT − γ −2 GG T )P2 (s) ∂r ∂s

(8.183) (8.184) (8.185)

with boundary conditions P2 (−h) = P1 A1 , P3 (−h, s) =

(8.186)

AT1 P2 (s)

(8.187)

and the infinite horizon H∞ control and the corresponding continuous functional are given from (8.140) and (8.170) and from (8.151) and (8.34), respectively, by   u (t) = −R B P1 x(t) + ∗

−1 T

0 −h

V1 (t) = xT (t)P1 (t)x(t) + 2xT (t)  +

0 −h



0

−h

 P2 (s)x(t + s)ds , 

0 −h

(8.188)

P2 (t, s)x(t + s)ds

xT (t + r)P3 (t, r, s)x(t + s)drds.

(8.189)

When we consider the infinite horizon H∞ cost, we can investigate the asymptotic stability of the closed-loop system with no disturbance. Similar to the infinite horizon LQ optimal control in Sect. 6.4 addressed in [9], the infinite horizon H∞ optimal

8 H∞ Optimal Controls

364

control (8.188) could asymptotically stabilize the single input delayed system (8.129) if the system is stabilizable [2, 9] and furthermore if the H∞ performance bound γ is chosen to be greater than or equal to the achievable optimal H∞ performance bound γ ∗ . However, differently from the solution of the algebraic Riccati equation P in (8.26) for the ordinary system, in which there exist a finite number of solutions of the algebraic Riccati equation (8.26) and thus we can easily find the unique positive definite solution P among them that provides the stabilizing control, the number of solutions of the coupled Riccati equations (8.183)–(8.185) is infinite and there seems to be no research on finding the solutions P1 = P1T , P2 (s) and P(r, s) = P3T (s, r) that produce such a stabilizing H∞ control. To check out whether the optimal control asymptotically stabilizes the system (8.129), the continuous functional (8.189) should be examined to satisfy the conditions (2.4) and (2.5) in Theorem 2.1, where the time-derivative negativity (2.5) can be easily verified from the property of the optimal control such that 

d V1 (t) dt

 u(t)=u∗ (t)

= −xT (t)Qx(t) − u∗T (t)Ru∗ (t) + γ 2 w T (t)w(t)

w(t)=0 .

To the best of our knowledge, however, the properties of P1 , P2 (s) and P3 (r, s) to satisfy the condition (2.4) of the continuous functional (8.189) have not been explicitly explained in the literature, and thus the stability analysis has not been explicitly explained. The stability of state delayed systems with infinite horizon H∞ controls can hardly be found in the literatures. While infinite horizon H∞ controls have problems in stability proof and computation, we introduce receding horizon H∞ controls which have guaranteed asymptotic stability under certain conditions and feasible computation, which appear in the following section.

8.5 Receding Horizon H∞ Controls for State Delayed Systems In this section, we assume that the state delayed system (8.129) is stabilizable as in Sect. 6.5. For the time t, let s be a time variable defined in [t, t + T ], i.e. s ∈ [t, t + T ]. We introduce x(s| t) and u(s| t) as x(s) and u(s), respectively, where s belongs to the horizon [t, t + T ]. Consider a receding horizon H∞ cost given by  xT (s| t)Qx(s| t) + uT (s| t)Ru(s| t) t  (8.190) − γ 2 w T (s| t)w(s| t) ds + VF (xt+T | t )

 J (t, t + T , u, w; xt ) =

t+T

8.5 Receding Horizon H∞ Controls for State Delayed Systems

365

with Q > 0 and R > 0, where the terminal weighting functional VF (xt+T ) is defined in (6.145). This cost can be constructed by taking t0 = t and tf = t + T in the cost (8.130). The state delayed system (8.129) can be expressed as x˙ (s| t) = Ax(s| t) + A1 x(s − h| t) + Bu(s| t) + Gw(s| t)

(8.191)

with the initial state xs| t | s=t = xt .

8.5.1 Receding Horizon H∞ Control for Costs with Single Integral Terms By Theorem 8.6 for the state delayed system (8.191), the optimal input u∗ and the worst disturbance w∗ for the finite horizon H∞ control for the cost (8.190) with Q > 0, R > 0, F1 > 0, F2 > 0, F3 = 0 and F4 = 0 are chosen as follows. For s ∈ (t − h, t + T ], the optimal input and the worst disturbance are given by   u2∗ (s| t) = −R−1 BT W1 (s − t)x(s| t) +

t+T −s−h

−h



w2∗ (s| t) = γ −2 G T W1 (s − t)x(s| t) +



t+T −s−h −h

 W2 (s − t)x(s + a| t)da ,

(8.192)  W2 (s − t)x(s + a| t)da , (8.193)

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0], − W˙ 1 (τ ) = AT W1 (τ ) + W1 (τ )A + Q + F2 −W1T (τ )(BR−1 BT − γ −2 GG T )W1 (τ ),   ∂ ∂ − W2 (τ , s) = AT W2 (τ , s) + ∂τ ∂s −W1T (τ )(BR−1 BT − γ −2 GG T )W2 (τ , s),   ∂ ∂ ∂ − W3 (τ , r, s) = −W2T (τ , r)(BR−1 BT − γ −2 GG T )W2 (τ , s) + + ∂τ ∂r ∂s

(8.194)

(8.195) (8.196)

with boundary conditions W1 (T ) = F1 , W2 (τ , −h) = W1 (τ )A1 , W3 (τ , −h, s) = AT1 W2 (τ , s).

(8.197) (8.198) (8.199)

8 H∞ Optimal Controls

366

For s ∈ [t, t + T − h], the optimal input and the worst disturbance are given by u1∗ (s| t)

  = −R B P1 (s − t)x(s| t) + −1 T

0

−h



w1∗ (s| t) = γ −2 G T P1 (s − t)x(s| t) +



0

−h

 P2 (s − t, a)x(s + a| t)da , 

(8.200)

P2 (s − t, a)x(s + a| t)da , (8.201)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ [0, T − h], s ∈ [−h, 0] and r ∈ [−h, 0], − P˙ 1 (τ ) = AT P1 (τ ) + P1 (t)A + Q + P2 (τ , 0) + P2T (τ , 0) −P1 (τ )(BR−1 BT − γ −2 GG T )P1 (τ ),   ∂ ∂ + P2 (τ , s) = AT P2 (τ , s) + P3 (τ , 0, s) − ∂τ ∂s

(8.202)

−P1 (τ )(BR−1 BT − γ −2 GG T )P2 (τ , s), (8.203)   ∂ ∂ ∂ + + P3 (τ , r, s) = −P2T (τ , r)(BR−1 BT − γ −2 GG T )P2 (τ , s) − ∂τ ∂r ∂s

(8.204) with boundary conditions P2 (τ , −h) = P1 (τ )A1 ,

(8.205)

P3 (τ , −h, s) = AT1 P2 (τ , s), P1 (T − h) = W1 (T − h),

(8.206) (8.207)

P2 (T − h, s) = W2 (T − h, s), P3 (T − h, r, s) = W3 (T − h, r, s).

(8.208) (8.209)

Since the horizon length T is a design parameter, we consider the receding horizon H∞ control for 0 < T < h and for h ≤ T . Case I: 0 < T < h For the state delayed system (8.129), the receding horizon H∞ control u(t) is to use the first control u∗ (t| t) in (8.192) on the time interval [t, t + T ] and thus is given as follows.    T −h ∗ −1 T u(t) = u2 (t| t) = −R B W1 (0)x(t) + W2 (0, s)x(t + s)ds , −h

(8.210)

8.5 Receding Horizon H∞ Controls for State Delayed Systems

367

where W1 (τ ) = W1T (τ ), W2 (τ , s), and additionally W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations (8.194)–(8.196) with the boundary conditions (8.197)–(8.199) for τ ∈ [0, T ], s ∈ [−h, 0] and r ∈ [−h, 0]. Case II: h ≤ T The receding horizon H∞ control u(t) is to use the first control u∗ (t| t) in (8.200) on the time interval [t, t + T ] and thus is given as follows. u(t) =

u1∗ (t| t)



−1 T

= −R B

 P1 (0)x(t) +



0

−h

P2 (0, s)x(t + s)ds , (8.211)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r) satisfy the partial differential equations (8.202)–(8.204) with the boundary conditions (8.205)–(8.209) for τ ∈ [0, T − h], s ∈ [−h, 0] and r ∈ [−h, 0] and W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the partial differential equations (8.194)–(8.196) with the boundary conditions (8.198)–(8.199). for τ ∈ (T − h, T ], s ∈ [−h, 0] and r ∈ [−h, 0]. On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) = u2∗ (t+ | t+ ) for 0 < T < h and u(t+ ) = u1∗ (t+ | t+ ) for h ≤ T with t replaced with t+ as the time proceeds. It is noted from (8.151) and (8.150) that the optimal cost J (t, t + T , u∗ , w ∗ ) is given as follows. For T < h, J (t, t + T , u∗ , w ∗ ; xt ) = V2 (t)



= xT (t)W1 (0)x(t) +  + 2xT (t)  +

T −h

t+T −h

−h

−h

xT (t + r)W3 (0, r, s)x(t + s)drds

W2 (0, s)x(t + s)ds

−h

t

T −h  T −h

xT (s)F2 x(s)ds,

(8.212)

where W1 (0) = W1T (0), W2 (0, s), and W3 (0, r, s) = W3T (0, s, r) are obtained from (8.194)–(8.199). For T ≥ h, J (t, t + T , u∗ , w ∗ ; xt ) = V1 (t)



= x (t)P1 (0)x(t) + 2x (t) T

 +

0

T

0 −h



0

−h

−h

P2 (0, s)x(t + s)ds

xT (t + r)P3 (0, r, s)x(t + s)drds, (8.213)

8 H∞ Optimal Controls

368

where P1 (0) = P1T (0), P2 (0, s), and P3 (0, r, s) = P3T (0, s, r) are obtained from (8.202)–(8.209). Differently from the monotonicity of the optimal receding horizon LQ costs, as mentioned in the previous subsection, we need to investigate the terminal monotonicity of the saddle-point receding horizon H∞ cost J (t, σ, u∗ , w ∗ ; xt ) (8.190) for the asymptotic stability of the closed-loop system with no disturbance and in the infinite horizon H∞ performance bounds of the closed-loop system with disturbances. Terminal Cost Monotonicity Consider   1 1∗ 1∗ ∗ ∗ lim J (t, t + T + Δ, u , w ; xt ) − J (t, t + T , u , w ; xt ) , (8.214) Δ→0 Δ where {u1∗ , w 1∗ } and {u∗ , w ∗ } are the sets of the optimal control and the worst disturbance of J (t, t + T + Δ, u, w; xt ) and of J (t, t + T , u, w; xt ), respectively. Let x1∗ (α| t) and x∗ (α| t) be the corresponding states generated by {u1∗ , w 1∗ } 1∗ and {u∗ , w ∗ }, respectively, in the system (8.191) with the initial state xα| t α=t = ∗ 1∗ xα| t α=t = xt . Let us replace {u (α| t), α ∈ [t, t + T + Δ]} of the optimal cost J (t, t + T + Δ, u1∗ , w 1∗ ; xt ) with {u∗ (α| t), α ∈ [t, t + T ]} and {u1 (α), α ∈ (t + T , t + T + Δ]}, where u1 is an arbitrary control. Since we use an alternative control instead of u1∗ in the cost J (t, t + T + Δ, u, w; xt ), we have J (t,t + T + Δ, u1∗ , w 1∗ ; xt )   t+T  ≤ x1T (s)Qx1 (s) + u∗T (s| t)Ru∗ (s| t) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds t



+

 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T +Δ 

t+T



+ x1T (t + T + Δ)F1 x1 (t + T + Δ) +

t+T +Δ

t+T +Δ−h

x1T (s)F2 x1 (s)ds

  = J (t, t + T , u∗ , w 1∗ ; x(t)) − x1T (t + T )F1 x1 (t + T ) −  +

t+T t+T −h

 x1T (s)F2 x1 (s)ds

 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w 1∗T (s| t)w 1∗ (s| t) ds

t+T +Δ 

t+T

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) +



t+T +Δ

t+T +Δ−h

x1T (s)F2 x1 (s)ds,

where {x1 (α), α ∈ [t, t + T + Δ]} is the corresponding state generated by {u∗ (α| t), α ∈ [t, t + T ]}, {u1 (α), α ∈ (t + T , t + T + Δ]} and {w1∗ (α), ∈ [t, t + T + Δ]} in the system (8.191) with the initial state xα1 α=t = xt . Furthermore, let us replace w 1∗ of J (t, t + T , u∗ , w 1∗ ; xt ) with the worst disturbance w ∗ of the cost J (t, t + T , u∗ , w; xt ), which produces the relation that J (t, t + T , u∗ , w 1∗ ; xt ) ≤ J (t, t + T , u∗ , w ∗ ; xt ). Therefore, the above inequality is written as

8.5 Receding Horizon H∞ Controls for State Delayed Systems

369

J (t,t + T + Δ, u1∗ , w1∗ ; xt )    t+T ∗ ∗ 1T 1 1T 1 x (s)F2 x (s)ds ≤ J (t, t + T , u , w ; xt ) − x (t + T )F1 x (t + T ) − +

 t+T +Δ  t+T

t+T −h



x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w1∗T (s| t)w1∗ (s| t) ds

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) +

 t+T +Δ t+T +Δ−h

x1T (s)F2 x1 (s)ds,

from which we can obtain that   1 J (t, t + T + Δ, u1 , w1∗ ; xt ) − J (t, t + T , u∗ , w∗ ; xt ) Δ→0 Δ   t+T +Δ   1 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w1∗T (s| t)w1∗ (s| t) ds ≤ lim Δ→0 Δ t+T lim

+ x1T (t + T + Δ)F1 x1 (t + T + Δ) − x1T (t + T )F1 x1 (t + T )   t+T  t+T +Δ x1T (s)F2 x1 (s)ds − x1T (s)F2 x1 (s)ds + t+T +Δ−h t+T −h   t+T +Δ   1 x1T (s)Qx1 (s) + u1T (s)Ru1 (s) − γ 2 w1∗T (s| t)w1∗ (s| t) ds = lim Δ→0 Δ t+T + x1T (t + T + Δ)F1 x1 (t + T + Δ) − x1T (t + T )F1 x1 (t + T )   t+T +Δ−h  t+T +Δ x1T (s)F2 x1 (s)ds − x1T (s)F2 x1 (s)ds . + t+T

t+T −h

Since it holds that   1 1 1 x (t + T + Δ) − x (t + T ) Δ

Ax1 (t + T ) + A1 x1 (t + T − h) + Bu1 (t + T ) + Gw 1∗ (t + T | t), the above inequality becomes   1 J (t, t + T + Δ, u1 , w 1∗ ; xt ) − J (t, t + T , u∗ , w ∗ ; xt ) Δ→0 Δ lim

≤ x1T (t + T )Qx1 (t + T ) + u1T (t + T )Ru1 (t + T ) − γ 2 w 1∗T (t + T | t)w 1∗ (t + T | t)   + 2x1T (t + T )F1 Ax1 (t + T ) + A1 x1 (t + T − h) + Bu1 (t + T ) + Gw 1∗ (t + T | t) + x1T (t + T )F2 x1 (t + T ) − x1T (t + T − h)F2 x1 (t + T − h).

(8.215)

Therefore, the terminal monotonicity of the saddle-point receding horizon H∞ cost is ensured if we can find u1 (t + T ) guaranteeing the right part of the above relation. A simple choice of u1 (t + T ) as −H1 x1 (t + T ) − H2 x1 (t + T − h), i.e. u1 (t + T ) = −H1 x1 (t + T ) − H2 x1 (t + T − h) results in the following relation.

8 H∞ Optimal Controls

370

  1 J (t, t + T + Δ, u1 , w 1∗ ; xt ) − J (t, t + T , u∗ , w ∗ ; xt ) Δ→0 Δ   1  1 T x (t + T ) x (t + T ) 2 ≤ 1 M 1 − γ w 1∗ (t + T | t) x (t + T − h) x (t + T − h) T   w 1∗ (t + T | t) − γ −2 G T F1 x1 (t + T ) , − γ −2 G T F1 x1 (t + T ) lim

(8.216)

where M denotes  (1, 1) F1 (A1 − BH2 ) + H1T RH2 , M= (A1 − BH2 )T F1 + H2T RH1 −F2 + H2T RH2 (1, 1) = (A − BH1 )T F1 + F1 (A − BH1 ) + Q + F2 + H1T RH1 + γ −2 F1 GG T F1 . Remark 8.5 In this subsection, we have proven that   ∂ 1 J (α, β, u∗ , w∗ ; xα ) = lim J (α, β + Δ, u1∗ , w1∗ ; xα ) − J (α, β, u∗ , w∗ ; xα ) ∂β Δ→0 Δ   T x(β) x(β) M . ≤ x(β − h) x(β − h)

The condition that M ≤ 0 plays an important role in guaranteeing the asymptotic stability of the system (8.129) with no disturbance and also the infinite horizon H∞ performance bound of the system (8.129) with disturbances. Stability For the asymptotic stability of the system (8.129) with no disturbance, consider the saddle-point H∞ cost J (t, t + T , u∗ , w ∗ ; xt ).

(8.217)

Let us consider for a very small positive Δ J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; xt+Δ ) − J (t, t + T , u∗ , w ∗ ; xt ) = J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; xt+Δ ) − J (t + Δ, t + T , u2∗ , w 2∗ ; xt+Δ ) + J (t + Δ, t + T , u2∗ , w 2∗ ; xt+Δ ) − J (t, t + T , u∗ , w ∗ ; xt ),

(8.218)

where xt+Δ denotes the state generated by the initial state xt , the optimal input u∗ = {u(τ | t), τ ∈ [t, t + Δ]} and w(τ ) = 0 for τ ∈ [t, t + Δ]. Since the receding horizon cost J (t, t + T , u∗ , w ∗ ; xt ) was designed to satisfy the terminal monotonicity of the saddle-point H∞ cost so that the first part of (8.218) satisfies J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; xt+Δ ) − J (t + Δ, t + T , u2∗ , w 2∗ ; xt+Δ ) ≤ 0,

8.5 Receding Horizon H∞ Controls for State Delayed Systems

371

the relation (8.218) becomes J (t + Δ, t + Δ + T , u1∗ , w 1∗ ; xt+Δ ) − J (t, t + T , u∗ , w ∗ ; xt ) ≤ J (t + Δ, t + T , u2∗ , w 2∗ ; xt+Δ ) − J (t, t + T , u∗ , w ∗ ; xt ).

(8.219)

If the worst disturbance w ∗ of J (t, t + T , u∗ , w ∗ ; xt ) is replaced with w 3 consisting w 3 (τ ) = 0 for τ ∈ [t, t + Δ] and w 3 (τ ) = w 2∗ (τ ) for τ ∈ [t + Δ, t + T ], which generates the state xt+Δ from the initial state xt , it holds that J (t, t + T , u∗ , w ∗ ; xt ) ≥ J (t, t + T , u∗ , w 3 ; xt ) and thus ∗





J (t, t + T , u , w ; xt ) ≥ t

  T ∗T ∗ x (τ )Qx(τ ) + u (τ | t)Ru (τ | t) d τ  2∗ 2∗ + J (t + Δ, t + T , u , w ; xt+Δ ) ,

t+Δ

which gives this relation   1 J (t + Δ, t + T , u2∗ , w 2∗ ; xt+Δ ) − J (t, t + T , u∗ , w ∗ ; xt )) Δ→0 Δ    1 t+Δ T ∗T ∗ ≤ − lim x (τ )Qx(τ ) + u (τ | t)Ru (τ | t) d τ . Δ→0 Δ t lim

From the relation (8.219), therefore, it holds that   d J (t, t + T , u∗ , w ∗ ; xt ) ≤ − xT (t)Qx(t) + u∗T (t| t)Ru∗ (t| t) . dt Theorem 8.8 If there exist positive definite matrices F1 > 0, F2 > 0 and matrices Hi such that  (1, 1) F1 (A1 − BH2 ) + H1T RH2 ≤ 0, (8.220) (A1 − BH2 )T F1 + H2T RH1 −F2 + H2T RH2 where (1, 1) denotes (1, 1) = (A − BH1 )T F1 + F1 (A − BH1 ) + Q + F2 + H1T RH1 + γ −2 F1 GG T F1 , the receding horizon H∞ controls (8.210) and (8.211) for the cost (8.190) with such F1 and F2 asymptotically stabilize the system (8.129) with no disturbance. Proof The saddle-point H∞ cost generated by the initial state xt satisfies that J (t, t + T , u∗ , w ∗ ; xt ) ≥ J (t, t + T , u∗ , 0; xt ) ≥ 0.

8 H∞ Optimal Controls

372

The condition (8.220) for the terminal monotonicity of the saddle-point H∞ cost guarantees that   d ∗ ∗ T ∗T ∗ J (t, t + T , u , w ; xt ) ≤ − x (t)Qx(t) + u (t| t)Ru (t| t) . dt J (t, t + T , u∗ , w ∗ ; xt ) is V2 (t) for T < h and V1 (t) for T ≥ h from (8.212) and (8.213), respectively. Therefore, these with cost monotonicity conditions satisfy conditions of the Krasovskii theorem of Theorem 2.1. This completes the proof.  The quadratic matrix inequality (8.220) in Theorem 8.8 can be rewritten, which is shown in the following result. Corollary 8.5 If there exist positive definite matrices F¯ 1 > 0, F¯ 2 > 0 and a matrix H¯ such that ⎡ ⎤ (1, 1) A1 F¯ 2 − BH¯ 2 H¯ 1T F¯ 1 F¯ 1 ⎢ F¯ 2 AT − H¯ T BT −F¯ 2 0 0 ⎥ H¯ 2T 1 2 ⎢ ⎥ −1 ⎢ ¯ ¯ (8.221) H1 −R 0 0 ⎥ H2 ⎢ ⎥ ≤ 0, ⎣ F¯ 1 0 0 −Q−1 0 ⎦ F¯ 1 0 0 0 −F¯ 2 where (1, 1) denotes (1, 1) = F¯ 1 AT − H¯ 1T BT + AF¯ 1 − BH¯ 1 + γ −2 GG T , the receding horizon H∞ controls (8.210) and (8.211) for the cost (8.190) with F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (8.129) with no disturbance. Proof By the congruence transformation associated with (F1 ⊕ F2 )−1 , the condition (8.220) can be written as 

(1, 1) A1 F¯ 2 − BH¯ 2 + H¯ 1T RH¯ 2 T T T T ¯ ¯ ¯ ¯ −F¯ 2 + H¯ 2T RH¯ 2 F2 A1 − H2 B + H2 RH1

≤ 0,

where (1, 1) denotes (1, 1) = (AF¯ 1 − BH¯ 1 )T + (AF¯ 1 − BH¯ 1 ) + F¯ 1 QF¯ 1 + F¯ 1 F2 F¯ 1 + H¯ 1T RH¯ 1 + γ −2 GG T .

By using the congruence transformation and the Schur complement technique, the above can be written as the condition (8.221). Therefore, by Theorem 8.8, we can complete the proof. 

8.5 Receding Horizon H∞ Controls for State Delayed Systems

373

Let us rewrite the inequality (8.215) as follows.   1 J (t, t + T + Δ, u1 , w 1∗ ; xt ) − J (t, t + T , u∗ , w ∗ ; xt ) Δ→0 Δ   T x1 (t + T ) x1 (t + T ) ≤ 1 M 1 x (t + T − h) x (t + T − h) T    + u1 (t + T ) + R−1 BT F1 x1 (t + T ) R u1 (t + T ) + R−1 BT F1 x1 (t + T ) lim

T    − γ 2 w 1∗ (t + T | t) − γ −2 G T F1 x1 (t + T ) w 1∗ (t + T | t) − γ −2 G T F1 x1 (t + T ) ,

where M denotes  F1 A + AT F1 + Q + F2 − F1 BR−1 BT F1 + γ −2 F1 GG T F1 F1 A1 . M= AT1 F1 −F2 An alternative choice of u1 (t + T ) as −R−1 BT F1 x1 (t + T ), i.e. u1 (t + T ) = −R−1 BT F1 x1 (t + T ), rather than u1 (t + T ) = −H1 x1 (t + T ) − H2 x1 (t + T − h), provides the following result. Theorem 8.9 If there exist positive definite matrices F¯ 1 > 0 and F¯ 2 > 0 and such that  F1 A + AT F1 + Q + F2 − F1 BR−1 BT F1 + γ −2 F1 GG T F1 F1 A1 ≤ 0, (8.222) AT1 F1 −F2 the receding horizon H∞ controls (8.210) and (8.211) for the cost (8.190) with such F1 and F2 asymptotically stabilize the system (8.129) with no disturbance. Proof When H1 and H2 in (8.220) are replaced with R−1 BT F1 and 0, respectively, we obtain (8.222). If there exist F1 and F2 which satisfy (8.222), then H1 and H2 exist and (8.220) is satisfied. From Theorem 8.8, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  The quadratic matrix inequality (8.222) in Theorem 8.9 can be rewritten as linear matrix inequalities, which is shown in the following result. Corollary 8.6 If there exist positive definite matrices F¯ 1 > 0 and F¯ 2 > 0 and such that ⎤ ⎡ F¯ 1 AT + AF¯ 1 + A1 F¯ 2 AT1 + γ −2 GG T − BR−1 BT F¯ 1 F¯ 1 ⎣ −Q−1 0 ⎦ ≤ 0, (8.223) F¯ 1 ¯ F1 0 −F¯ 2 the receding horizon H∞ controls in (8.210) when 0 < T < h and in (8.211) when h ≤ T for the cost (8.190) with such F1 = F¯ 1−1 and F2 = F¯ 2−1 asymptotically stabilize the system (8.129) with no disturbance.

8 H∞ Optimal Controls

374

Proof By the Schur complement technique and the congruence transformation, the condition (8.222) can be written as 0 ≥ F¯ 1 AT + AF¯ 1 + F¯ 1 [Q + F2 ]F¯ 1 + γ −2 GG T + A1 F¯ 2 AT1 − BR−1 BT . By the congruence transformation and the Schur complement technique, the above can be written as the condition (8.223). Therefore, by Theorem 8.9, this completes the proof.  Infinite Horizon H ∞ Performance Bounds of Receding Horizon H ∞ Controls With the receding horizon H∞ control u¯ = {u∗ (s| s), s ∈ [t0 , ∞)} in (8.210) or (8.211), the infinite horizon H∞ cost is given as follows. J (t0 , ∞, u¯¯ , w; xt0 ) =







 xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) ds

t0

with Q > 0, R > 0, and xt0 ≡ 0, where xs denotes the state generated by the initial state xt0 , the receding horizon H∞ control u¯ = {u∗ (τ | τ ), τ [t0 , s]} and an arbitrary disturbance w = {w(τ ), τ ∈ [t0 , s]}. Consider the following relation J (t0 , ∞, u¯ , w; xt0 )   ∞ d = xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) + J (s, s + T , u∗ , w ∗ ; xs ) ds ds t0   − J (s, s + T , u∗ , w ∗ ; xs )| s=∞ − J (t0 , t0 + T , u∗ , w ∗ ; xt0 ) .

Since it is assumed that xt0 ≡ 0, J (t0 , t0 + T , u∗ , w ∗ ; xt0 ) = 0. Furthermore, from the property of the worst disturbance w∗ , it holds that −J (s, s + T , u∗ , w ∗ ; xs )s=∞ ≤ −J (s, s + T , u∗ , 0; xs )s=∞ ≤ 0. Let us consider for a very small positive Δ J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; xs+Δ ) − J (s, s + T , u∗ , w ∗ ; xs ) = J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; xs+Δ ) − J (s + Δ, s + T , u2∗ , w 2∗ ; xs+Δ ) + J (s + Δ, s + T , u2∗ , w 2∗ ; xs+Δ ) − J (s, s + T , u∗ , w ∗ ; xs ),

(8.224)

where xs+Δ denotes the state generated by the initial state xs , the optimal control u∗ = {u∗ (τ | s), τ ∈ [s, s + Δ]} and a disturbance w = {w(τ ), τ ∈ [s, s + Δ]}. The receding horizon cost J (s, s + T , u∗ , w ∗ ; xs ) was designed to satisfy the terminal monotonicity of the saddle-point H∞ cost so that the first part of (8.224) satisfies J (s + Δ, s + T + Δ, u1∗ , w 1∗ ; xs+Δ ) − J (s + Δ, s + T , u2∗ , w 2∗ ; xs+Δ ) ≤ 0.

8.5 Receding Horizon H∞ Controls for State Delayed Systems

375

If the worst disturbance w ∗ of J (s, s + T , u∗ , w ∗ ; xs ) for τ ∈ [s, s + Δ] is replaced with w3 consisting w 3 (τ ) = w(τ ) for τ ∈ [s, s + Δ] and w3 (τ ) = w 2∗ (τ ) for τ ∈ [s + Δ, s + T ], which generates the state xs+Δ from the initial state xs , it holds that J (s, s + T , u∗ , w ∗ ; xs ) ≥ J (s, s + T , u∗ , w 3 ; xs ) and thus J (s, s + T ,u∗ , w ∗ ; xs )   s+Δ  T ∗T ∗ 2 T x (τ )Qx(τ ) + u (τ | s)Ru (τ | s) − γ w (τ )w(τ ) d τ ≥ s

+ J (s + Δ, s + T , u2∗ , w 2∗ ; xs+Δ ), which gives the relation   1 2∗ 2∗ ∗ ∗ J (s + Δ, s + T + Δ, u , w ; xs+Δ ) − J (s, s + T , u , w ; xs ) lim Δ→0 Δ    1 s+Δ T x (τ )Qx(τ ) + u∗T (τ | s)Ru∗ (τ | s) − γ 2 w T (τ )w(τ ) d τ . ≤ − lim Δ→0 Δ s From the above relation, therefore, it holds that   d J (s, s + T , u∗ , w∗ ; xs ) ≤ − xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 wT (s)w(s) . ds

Therefore, this concludes that J (t0 , ∞, u¯ , w; xt0 ) ≤ 0, which produces the following infinite horizon H∞ performance bound ∞

γ ≥ 2

t0

xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) ds ∞ . T t0 w (s)w(s)ds

(8.225)

The stability properties of receding horizon H∞ controls for costs with single integral terminal terms in Theorems 8.8 and 8.9 and their LMI representations in Corollaries 8.5 and 8.6 together with the infinite horizon H∞ performance bound in this subsection can be found partially in [10].

8.5.2 *Receding Horizon H∞ Control for Costs with Double Integral Terms ¯ ) for τ ∈ (T − h, T ] as follows. Define B¯ and R(τ  B¯ T =



  f (τ )BT F3 G BT R + f (τ )BT F3 B ¯ . (8.226) , R(τ ) = GT f (τ )G T F3 B −γ 2 I + f (τ )G T F3 G

8 H∞ Optimal Controls

376 

where f (τ ) = τ − T + h. Assume that the condition (8.153), i.e. γ 2 I − hG T F3 G > ¯ ) is always negative definite because for τ ∈ (T − 0; then the (2, 2)th entry of R(τ h, T ], f (τ ) ∈ [0, h] and thus −γ 2 I + f (τ )G T F3 G ≤ −γ 2 I + hG T F3 G < 0. In Theorem 8.7 for the state delayed system (8.129), we replace to and tf with t and t + T , respectively, to get the optimal input u∗ and the worst disturbance w ∗ of the finite horizon H∞ cost (8.190) with Q > 0, R > 0, F1 > 0, F2 > 0, F3 > 0 and F4 > 0 for the state delayed system (8.191) as follows. For s ∈ (t + T − h, t + T ], the optimal input and the worst disturbance are given by 

 u3∗ (s| t) −1 T ¯ ¯ = −R (s − t)B f (s − t)F3 A1 x(s − h| t) + W1 (s − t)x(s| t) w3∗ (s| t)   t+T −s−h + W2 (s − t, a)x(s + a| t)da , (8.227) −h

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ (T − h, T ], s ∈ [−h, 0], and r ∈ [−h, 0], − W˙ 1 (τ ) = AT W1 (τ ) + W1 (τ )A + Q + F2 + hF4 −W1 (τ )B¯ R¯ −1 (τ )B¯ T W1 (τ ),

  ∂ ∂ − + W2 (τ , s) = AT W2 (τ , s) − W1 (τ )B¯ R¯ −1 (τ )B¯ T W2 (τ , s), ∂τ ∂s   ∂ ∂ ∂ − + + W3 (τ , r, s) = −W2T (τ , r)B¯ R¯ −1 (τ )B¯ T W2 (τ , s) ∂τ ∂r ∂s

(8.228)

(8.229) (8.230)

with boundary conditions W1 (T ) = F1 , W2 (τ , −h) = W1 (τ )[I − f (τ )B¯ R¯ −1 (τ )B¯ T F3 ]A1 , W3 (τ , −h, s) = AT1 [I − f (τ )F3 B¯ R¯ −1 (τ )B¯ T ]W2 (τ , s).

(8.231) (8.232) (8.233)

For s ∈ (tf − 2h, tf − h], the optimal input and the worst disturbance are given by  ∗    0 u2 (s| t) −R−1 BT S = (s − t)x(s| t) + S (s − t, a)x(s + a| t)da , 1 2 w2∗ (s| t) γ −2 G T −h

(8.234)

8.5 Receding Horizon H∞ Controls for State Delayed Systems

377

where S1 (τ ) = S1T (τ ), S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for τ ∈ (T − 2h, T − h], s ∈ [−h, 0], and r ∈ [−h, 0], − S˙ 1 (τ ) = AT S1 (τ ) + S1 (τ )A + Q −S1 (τ )(BR−1 BT − γ −2 GG T )S1 (τ ) ¯ −1 (τ + h)B¯ T F3 A1 −f 2 (τ + h)AT1 F3 BR +S2 (τ , 0) + S2T (τ , 0) + f (τ + h)AT1 F3 A1 , (8.235)   ∂ ∂ − + S2 (τ , s) = AT S2 (τ , s) + S3 (τ , 0, s) ∂τ ∂s −S1 (τ )(BR−1 BT − γ −2 GG T )S2 (τ , s),   ∂ ∂ ∂ − S3 (τ , r, s) = −S2T (τ , r)(BR−1 BT − γ −2 GG T )S2 (τ , s) + + ∂τ ∂r ∂s

(8.236) (8.237)

with boundary conditions S2 (τ , −h) = S1 (τ )A1 , AT1 S2 (τ , s),

S3 (τ , −h, s) = S1 (T − h) = W1 (T − h), S2 (T − h, s) = W2 (T − h, s), S3 (T − h, r, s) = W3 (T − h, r, s).

(8.238) (8.239) (8.240) (8.241) (8.242)

For s ∈ [t0 , tf − 2h], the optimal input and the worst disturbance are given by 

   0 u1∗ (s| t) −R−1 BT P2 (s − t, a)x(s + a| t)da , = P1 (s − t)x(s| t) + γ −2 G T w1∗ (s| t) −h (8.243)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r) satisfy the coupled partial differential Riccati equations, for t ∈ [0, T − 2h], s ∈ [−h, 0], and r ∈ [−h, 0], − P˙ 1 (τ ) = AT P1 (τ ) + P1 (τ )A + Q + P2 (τ , 0) + P2T (τ , 0) −P1 (τ )(BR−1 BT − γ −2 GG T )P1 (τ ),   ∂ ∂ − P2 (τ , s) = AT P2 (τ , s) + P3 (τ , 0, s) + ∂τ ∂s −P1 (τ )(BR−1 BT − γ −2 GG T )P2 (τ , s),   ∂ ∂ ∂ − + + P3 (τ , r, s) = −P2T (τ , r)(BR−1 BT − γ −2 GG T )P2 (τ , s) ∂τ ∂r ∂s

(8.244)

(8.245) (8.246)

with boundary conditions P2 (τ , −h) = P1 (τ )A1 , P3 (τ , −h, s) =

AT1 P2 (τ , s),

(8.247) (8.248)

8 H∞ Optimal Controls

378

P1 (T − 2h) = S1 (T − 2h),

(8.249)

P2 (T − 2h, s) = S2 (T − 2h, s), P3 (T − 2h, r, s) = S3 (T − 2h, r, s).

(8.250) (8.251)

Since the horizon length T is a design parameter, we consider the receding horizon H∞ control for 0 < T < h, for h ≤ T < 2h and for 2h ≤ T . Case I: 0 < T < h For the state delayed system (8.129), the receding horizon H∞ control u(t) is to use the first control u∗ (t| t) in (8.227) on the time interval [t, t + T ] and thus is given as follows.  T  −1 I R + (h − T )BT F3 B (h − T )BT F3 G B¯ T 0 (h − T )G T F3 B −γ 2 I + (h − T )G T F3 G    tf −t−h × (h − T )F3 A1 x(t − h) + W1 (0)x(t) + W2 (0, s)x(t + s)ds , (8.252)

u(t) = u3∗ (t| t) = −

−h

where W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the partial differential Riccati equations (8.228)–(8.230) with the boundary conditions (8.231)– (8.233) for τ ∈ [0, T ], s ∈ [−h, 0], and r ∈ [−h, 0]. Case II: h ≤ T < 2h The receding horizon H∞ control u(t) is to use the first control u∗ (t| t) in (8.234) on the time interval [t, t + T ] and thus is given as follows.   u(t) = u2∗ (t| t) = −R−1 BT S1 (0)x(t) +

0

−h

S2 (0, s)x(t + s)ds ,

(8.253)

where S1 (τ ) = S1T (τ ), S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r) satisfy the partial differential Riccati equations (8.235)–(8.237) with the boundary conditions (8.238)– (8.242) for τ ∈ [0, T − h], s ∈ [−h, 0], and r ∈ [−h, 0] and W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations (8.228)–(8.230) with the boundary conditions (8.231)–(8.233). for τ ∈ (T − h, T ], s ∈ [−h, 0], and r ∈ [−h, 0]. Case III: 2h ≤ T The receding horizon H∞ control u(t) is to use the first control u∗ (t| t) in (8.243) on the time interval [t, t + T ] and is given as follows.   u(t) = u1∗ (t| t) = −R−1 BT P1 (0)x(t) +

0

−h

P2 (0, s)x(t + s)ds , (8.254)

where P1 (τ ) = P1T (τ ), P2 (τ , s), and P3 (τ , r, s) = P3T (τ , s, r) satisfy the partial differential Riccati equations (8.244)–(8.246) with the boundary conditions (8.247)– (8.251) for t ∈ [0, T − 2h], s ∈ [−h, 0], and r ∈ [−h, 0]. And, S1 (τ ) = S1T (τ ),

8.5 Receding Horizon H∞ Controls for State Delayed Systems

379

S2 (τ , s), and S3 (τ , r, s) = S3T (τ , s, r) satisfy the coupled partial differential Riccati equations (8.235)–(8.237) with the boundary conditions (8.238)–(8.242) for τ ∈ (T − 2h, T − h], s ∈ [−h, 0], and r ∈ [−h, 0] and W1 (τ ) = W1T (τ ), W2 (τ , s), and W3 (τ , r, s) = W3T (τ , s, r) satisfy the coupled partial differential Riccati equations (8.228)–(8.230) with the boundary conditions (8.231)–(8.233). for τ ∈ (T − h, T ], s ∈ [−h, 0], and r ∈ [−h, 0]. On the next interval [t+ , t+ + T ] with t+ > t, the above procedure is repeated to obtain the optimal control u(t+ ) = u3∗ (t+ | t+ ) for 0 < T < h, u(t+ ) = u2∗ (t+ | t+ ) for h ≤ T < 2h, and u(t+ ) = u1∗ (t+ | t+ ) for 2h ≤ T with t replaced with t+ as the time proceeds. It is noted from (8.179), (8.181), and (8.182) that the optimal cost J (t, t + T , u∗ , w ∗ ) is given as V3 (t) in (8.179) for T < h with W1 (t), W2 (t, s), and W3 (t, r, s), replaced with W1 (0), W2 (0, s), and W3 (0, r, s), respectively, which are obtained from (8.228)–(8.233), as V2 (t) in (8.181) for h ≤ T < 2h with S1 (t), S2 (t, s), and S3 (t, r, s), replaced with S1 (0), S2 (0, s), and S3 (0, r, s), respectively, which are obtained from (8.235)–(8.242), and as V1 (t) in (8.182) for 2h ≤ T with P1 (t), P2 (t, s), and P3 (t, r, s), replaced with P1 (0), P2 (0, s), and P3 (0, r, s), respectively, which are obtained from (8.244)–(8.251). Differently from the monotonicity of the optimal receding horizon LQ costs, as mentioned in the previous subsection, we need to investigate the terminal monotonicity of the saddle-point receding horizon H∞ cost J (t, σ, u∗ , w ∗ ; xt ) (8.190) for the asymptotic stability of the closed-loop system with no disturbance and in the infinite horizon H∞ performance bounds of the closed-loop system with disturbances. Terminal Cost Monotonicity Consider   1 1∗ 1∗ ∗ ∗ lim J (t, t + T + Δ, u , w ; xt ) − J (t, t + T , u , w ; xt ) , (8.255) Δ→0 Δ where {u1∗ , w 1∗ } and {u∗ , w ∗ } are the sets of the optimal control and the worst disturbance of J (t, t + T + Δ, u, w; xt ) and of J (t, t + T , u, w; xt ), respectively. Let x1∗ (α| t) and x∗ (α| t) be the corresponding states generated by {u1∗ , w 1∗ } 1∗ and {u∗ , w ∗ }, respectively, in the system (8.191) with the initial state xα| t α=t = ∗ xα| t α=t = xt . Let us replace {u1∗ (α| t), α ∈ [t, t + T + Δ]} of the optimal cost J (t, t + T + Δ, u1∗ , w 1∗ ; xt ) with {u∗ (α| t), α ∈ [t, t + T ]} and {u1 (α), α ∈ (t + T , t + T + Δ]}, where u1 is an arbitrary control and {x1 (α), α ∈ [t, t + T + Δ]} is the corresponding state generated by {u∗ (α| t), α ∈ [t, t + T ]}, {u1 (α), α ∈ (t + T , t + T + Δ]} and {w1∗ (α), ∈ [t, t + T + Δ]} in the system (8.191) with the initial state xα1 α=t = xt . Similar to the derivation of the terminal monotonicity of the saddle-point receding horizon H∞ cost with single integral terminal terms in Sect. 8.5.1, we can obtain

8 H∞ Optimal Controls

380

  1 J (t, t + T + Δ, u1∗ , w 1∗ ; xt ) − J (t, t + T , u∗ , w ∗ ; xt ) Δ→0 Δ  1  1 T x (t + T ) x (t + T ) ≤ 1 M1 (F1 , F2 , F4 ) 1 x (t + T − h) x (t + T − h)  1 T   1 hBT F3 G R + hBT F3 B u (t + T ) u (t + T ) + w 1∗ (t + T | t) hG T F3 B −γ 2 I + hG T F3 G w 1∗ (t + T | t)  1 T  T  u (t + T ) B F1 hBT F3 A1 x1 (t + T ) +2 G T F1 hG T F3 A1 w 1∗ (t + T | t) x1 (t + T − h)  1 T  0 1 x (t + T + s) x (t + T + s) − M (F , F ) ds, (8.256) 2 3 4 ˙ 1 (t + T + s) x˙ 1 (t + T + s) −h x lim

where F1 A1 AT F1 + F1 A + Q + F2 + hF4 , AT1 F1 hAT1 F3 A1 − F2 (8.257)  T T  F4 + A F3 A −A F3 . (8.258) M2 (F3 , F4 ) = −F3 A F3 



M1 (F1 , F2 , F4 ) =

A simple choice of u1 (t + T ) as −H1 x1 (t + T ) − H2 x1 (t + T − h), i.e. u1 (t + T ) = −H1 x1 (t + T ) − H2 x1 (t + T − h), with X11 , X21 , and X22 such that [Xij ]2×2 ≥ 0 leads us to rewrite the above as   1 J (t, t + T + Δ, u1∗ , w 1∗ ; xt ) − J (t, t + T , u∗ , w ∗ ; xt ) lim Δ→0 Δ T  1  1 x (t + T ) x (t + T ) ≤ 1 L1 1 x (t + T − h) x (t + T − h)  1 T  0 1 x (t + T + s) x (t + T + s) − L ds 2 ˙ 1 (t + T + s) x˙ 1 (t + T + s) −h x  T    1∗ 2 T 1∗ − γ I + hG F3 G w (t + T | t) + a , + w (t + T | t) + a where L1 , L2 , and a denote 

T + X −X T hX11 + X21 21 21 −X21 0      T  T F1 H1T H1T H1T T T + − (R + hB F3 B) B H2T hAT1 F3 H2T H2T  T   T T   T  T H1T F1 F1 H1 T T B − − G − hG F B 3 T T hA1 F3 hA1 F3 H2T HT 

L1 = M1 (F1 , F2 , F4 ) +

2

8.5 Receding Horizon H∞ Controls for State Delayed Systems

  −1  GT × − γ 2 I + hG T F3 G   0 0 L2 = M2 (F3 , F4 ) − , 0 X22  −1   a = − γ 2 I + hG T F3 G GT

F1 hAT1 F3

F1 hAT1 F3

381



T − hG T F3 B



T − hF3 B

H1T H2T

T  , H2T

H1T

T  

x1 (t + T ) . x1 (t + T − h)

This inequality for the terminal cost monotonicity produces the following result, which states the sufficient condition for the asymptotic stability of the closed-loop system with no disturbance for 0 < T < h, for h ≤ T < 2h, and for 2h ≤ T . Remark 8.6 In this subsection, we have proven that   ∂ 1 ∗ ∗ 1∗ 1∗ ∗ ∗ J (α, β, u , w ; xα ) = lim J (α, β + Δ, u , w ; xα ) − J (α, β, u , w ; xα ) ∂β Δ→0 Δ   T x(β) x(β) L1 ≤ x(β − h) x(β − h) T   0  x(β + s) x(β + s) ds. − L2 x˙ (β + s) −h x˙ (β + s)

The condition that L1 ≤ 0 and L2 ≥ 0 plays an important role in guaranteeing the asymptotic stability of the system (8.129) with no disturbance and also the infinite horizon H∞ performance bound of the system (8.129) with disturbances. Stability By the same manner as in the previous subsection for the receding horizon H∞ cost with single integral terms, we can conclude that   d ∗ ∗ T ∗T ∗ J (t, t + T , u , w ; xt ) ≤ − x (t)Qx(t) + u (t| t)Ru (t| t) . dt Theorem 8.10 If there exist matrices Fi > 0, Xij , and Hi such that  

T X11 X21 X21 X22

≥ 0,

T T F1 A1 − X21 AT F1 + F1 A + Q + F2 + hF4 + hX11 + X21 + X21 T T A1 F1 − X21 hA1 F3 A1 − F2 T   T T     T + H1 H2 [R + hB F3 B] H1 H2 − H1 H2 B F1 hF3 A1 T     T T  T   H1T F1 T T − F1 hF3 A1 B H1 H2 − G − hG F3 B T hAT1 F3 H2  T   −1   T T H1 F1 × − γ 2 I + hG T F3 G − hG T F3 B GT ≤ 0, hAT1 F3 H2T

(8.259)

(8.260)

8 H∞ Optimal Controls

382 

F4 + AT F3 A −AT F3 −F3 A F3 − X22

≥ 0,

(8.261)

the receding horizon H∞ controls (8.252)–(8.254) for the cost (8.190) with such F1 , F2 , F3 , and F4 asymptotically stabilize the system (8.129) with no disturbance. Proof The saddle-point H∞ cost generated by the initial state xt satisfies that J (t, t + T , u∗ , w ∗ ; xt ) ≥ J (t, t + T , u∗ , 0; xt ) ≥ 0. The conditions (8.259)–(8.261) for the terminal monotonicity of the saddle-point H∞ cost guarantee that   d ∗ ∗ T ∗T ∗ J (t, t + T , u , w ; xt ) ≤ − x (t)Qx(t) + u (t| t)Ru (t| t) . dt The optimal cost J (t, t + T , u∗ , w ∗ ) is given as V3 (t) in (8.179) for T < h with W1 (t), W2 (t, s), and W3 (t, r, s), replaced with W1 (0), W2 (0, s), and W3 (0, r, s), respectively, which are obtained from (8.228)–(8.233), as V2 (t) in (8.181) for h ≤ T < 2h with S1 (t), S2 (t, s), and S3 (t, r, s), replaced with S1 (0), S2 (0, s), and S3 (0, r, s), respectively, which are obtained from (8.235)–(8.242), and as V1 (t) in (8.182) for 2h ≤ T with P1 (t), P2 (t, s), and P3 (t, r, s), replaced with P1 (0), P2 (0, s), and P3 (0, r, s), respectively, which are obtained from (8.244)–(8.251). Therefore, these with cost monotonicity conditions satisfy conditions of the Krasovskii theorem of Theorem 2.1. This completes the proof.  Since the conditions (8.259)–(8.261) are not linear but bilinear in Fi , Xij and Hi , these conditions cannot be solved via a convex optimization technique. However, these quadratic matrix inequalities (8.259)–(8.261) can be rewritten as linear matrix inequalities if we replace X22 with λF1 for some λ > 0, which is shown in the following corollary. Corollary 8.7 If there exist matrices F¯ i > 0, X¯ ij , Hˆ i , and λ > 0 such that  ⎡

(1, 1) (2, 1)T ⎢ (2, 1) −F¯ 2 ⎢ T ⎢ B 0 ⎢ ⎢ Hˆ ˆ H2 ⎢ 1 ⎢ GT 0 ⎢ ⎢ 0 A1 F¯ 1 ⎢ ⎣ 0 0 F¯ 1 0

T X¯ 11 X¯ 21 ¯ ¯ X21 λF1

≥ 0,

⎤ B Hˆ 1T G 0 0 F¯ 1 0 Hˆ 2T 0 F¯ 1 AT1 0 0 ⎥ ⎥ −R 0 0 BT 0 0 ⎥ ⎥ 0 −R 0 0 BT 0 ⎥ ⎥ ≤ 0, 0 0 −γ 2 I G T 0 0 ⎥ ⎥ B 0 G −h−1 F¯ 3 0 0 ⎥ ⎥ 0 B 0 0 −h−1 F¯ 3 0 ⎦ 0 0 0 0 0 −Q−1  F¯ 4 F¯ 1 AT ≥ 0, −1 AF¯ 1 λ F¯ 1 − F¯ 3

(8.262)

(8.263)

(8.264)

8.5 Receding Horizon H∞ Controls for State Delayed Systems

383

where (1, 1) and (2, 1) denote T + X¯ 21 , (1, 1) = F¯ 1 AT + AF¯ 1 + F¯ 2 + hF¯ 4 + hX¯ 11 + X¯ 21 T ¯ ¯ (2, 1) = F1 A1 − X21 ,

the receding horizon H∞ controls (8.252)–(8.254) for the cost (8.190) with such F1 = F¯ 1−1 , F3 = F¯ 3−1 , F2 = F¯ 1−1 F¯ 2 F¯ 1−1 , and F4 = F¯ 1−1 F¯ 4 F¯ 1−1 asymptotically stabilize the system (8.129) with no disturbance. Proof The proof is complicated but straightforwards with the congruence transformation and the Schur complement technique for the conditions (8.259)–(8.261) in Theorem 8.10, where X22 is replaced with λF1 for some λ > 0, and thus omitted.  Let us rewrite the inequality (8.256) as follows.   1 1∗ 1∗ ∗ ∗ J (t, t + T + Δ, u , w ; xt ) − J (t, t + T , u , w ; xt ) lim Δ→0 Δ  1  1 T x (t + T ) x (t + T ) ≤ 1 M 1 x (t + T − h) x (t + T − h)  1  T    1 u (t + T ) a a u (t + T ) ¯ + − − R(T ) w 1∗ (t + T | t) w 1∗ (t + T | t) b b T  1  0 1 x (t + T + s) x (t + T + s) ds, (8.265) M2 (F3 , F4 ) 1 − ˙ 1 (t + T + s) x˙ (t + T + s) −h x where M, a and b denote  T F1 F1 −1 T ¯ ¯ ¯ (T ) B , B R hAT1 F3 hAT1 F3   T  1 a F1 x (t + T ) −1 T ¯ ¯ = −R (T )B . b hAT1 F3 x1 (t + T − h) 

M = M1 (F1 , F2 , F4 ) −

¯ ) can be decomposed into Since R(T ¯ )= R(T

 0 Δ22 I hBT F3 G{−γ 2 I + +hG T F3 G}−1 0 I 0 −γ 2 I + +hG T F3 G  I 0 , × {−γ 2 I + +hG T F3 G}−1 hG T F3 B I 

8 H∞ Optimal Controls

384

the above inequality with X11 , X21 , and X22 such that 0 < [Xij ]2×2 becomes   1 J (t, t + T + Δ, u1 , w1∗ ; xt ) − J (t, t + T , u∗ , w∗ ; xt ) Δ→0 Δ  1  T  T   1 x (t + T ) x (t + T ) 1 1 L1 1 + u (t + T ) − a Δ22 u (t + T ) − a ≤ 1 x (t + T − h) x (t + T − h) T  1  0  1 x (t + T + s) x (t + T + s) ds, L (8.266) − 2 1 x˙ 1 (t + T + s) −h x˙ (t + T + s) lim

where L1 and L2 denote  T F1 F1 −1 T ¯ ¯ ¯ L1 = M1 (F1 , F2 , F4 ) − BR (T )B hAT1 F3 hAT1 F3  T T + X21 −X21 hX11 + X21 + , −X21 0   0 0 . L2 = M2 (F3 , F4 ) − 0 X22 



Therefore, the terminal monotonicity of the saddle-point receding horizon H∞ cost is ensured if we can find u1 (t + T ) ensuring the non-positivity of the right part in the above relation. An alternative choice of u1 (t + T ) as a, i.e. u1 (t + T ) = a = −H¯ 1 x1 (t + T ) − H¯ 2 x1 (t + T − h), where H¯ 1 and H¯ 2 denote   T T 2 T −1 T B F1 , − hB F G{−γ I + hG F G} G H¯ 1 = Δ−1 3 3 22   T T 2 T −1 T B hF3 A1 , − hB F G{−γ I + hG F G} G H¯ 2 = Δ−1 3 3 22

(8.267) (8.268)

Δ22 = (R + hBT F3 B) − hBT F3 G{−γ 2 I + hG T F3 G}−1 hG T F3 B. (8.269) Theorem 8.11 If there exist matrices Fi > 0, Xij , and Hi such that  

T X11 X21 X21 X22

≥ 0,

T T AT F1 + F1 A + Q + F2 + hF4 + hX11 + X21 + X21 F1 A1 − X21 T T A1 F1 − X21 hA1 F3 A1 − F2   T T  −1 T T hB F3 G F1 B R + hB F3 B − hAT1 F3 GT hG T F3 B −γ 2 I + hG T F3 G

(8.270)

8.5 Receding Horizon H∞ Controls for State Delayed Systems





BT × GT



F1 hAT1 F3

385

T

≤ 0, F4 + AT F3 A −AT F3 ≥ 0, −F3 A F3 − X22

(8.271) (8.272)

the receding horizon H∞ controls (8.252)–(8.254) for the cost (8.190) with such F1 , F2 , F3 , and F4 asymptotically stabilize the system (8.129) with no disturbance. Proof When H¯ 1 and H¯ 2 in (8.260) are replaced with the matrices in the definitions (8.267) and (8.268), we obtain (8.271). If there exist Fi and Xij which satisfy (8.270)– (8.272), then H¯ 1 and H¯ 2 in (8.260) exist and (8.259)–(8.261) are satisfied. From Theorem 8.10, we obtain the asymptotic stability of the closed-loop system. This completes the proof.  Since the conditions (8.270)–(8.272) are not linear but bilinear in Fi and Xij , these conditions cannot be solved via a convex optimization technique. However, these matrix inequalities (8.270)–(8.272) can be rewritten as linear matrix inequalities if we restrict X22 as λF1 for some λ > 0, which is shown in the following result. Corollary 8.8 If there exist matrices F¯ i > 0, X¯ ij , and λ > 0 such that 

T X¯ 11 X¯ 21 X¯ 21 λF¯ 1

 ≥ 0,

⎤ (1, 1) (1, 2) B G 0 F¯ 1 ⎢ (2, 1) −F¯ 2 0 0 F¯ 1 AT1 0 ⎥ ⎥ ⎢ T T ⎢ B 0 −R 0 B 0 ⎥ ⎥ ≤ 0, ⎢ T ⎢ G 0 0 −γ 2 I G T 0 ⎥ ⎥ ⎢ ⎣ 0 A1 F¯ 1 B G −h−1 F¯ 3 0 ⎦ F¯ 1 0 0 0 0 −Q−1  F¯ 4 F¯ 1 AT ≥ 0, −1 ¯ ¯ AF1 λ F1 − F¯ 3 ⎡

(8.273)

(8.274)

(8.275)

where (1, 1) and (1, 2) denote T (1, 1) = F¯ 1 AT + AF¯ 1 + F¯ 2 + hF¯ 4 + hX¯ 11 + X¯ 21 + X¯ 21 , T (1, 2) = A1 F¯ 1 − X¯ 21 ,

the receding horizon H∞ controls (8.252)–(8.254) for the cost (8.190) with such F1 = F¯ 1−1 , F3 = F¯ 3−1 , F2 = F¯ 1−1 F¯ 2 F¯ 1−1 , and F4 = F¯ 1−1 F¯ 4 F¯ 1−1 asymptotically stabilize the system (8.129) with no disturbance.

8 H∞ Optimal Controls

386

Proof The proof is complicated but straightforwards with the congruence transformation and the Schur complement technique for the conditions (8.270)–(8.272) in Theorem 8.11, where X22 is replaced with λF1 for some λ > 0, and thus omitted.  Infinite Horizon H ∞ Performance Bounds of Receding Horizon H ∞ Controls With the receding horizon H∞ control u¯ = {u∗ (s| s), s ∈ [t0 , ∞)} in (8.252), (8.253) or (8.254), the infinite horizon H∞ cost is given as follows. 



J (t0 , ∞, u¯ , w) =



 xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) − γ 2 w T (s)w(s) ds

t0

with Q > 0, R > 0, and xt0 ≡ 0. Using the same manner as in Sect. 8.5.1, we can prove the following infinite horizon H∞ performance bound ∞

γ ≥ 2

t0

xT (s)Qx(s) + u∗T (s| s)Ru∗ (s| s) ds ∞ . T t0 w (s)w(s)ds

(8.276)

The stability properties of receding horizon H∞ controls for cost with double integral terminal terms in Theorems 8.10 and 8.11 and their LMI representations in Corollaries 8.7 and 8.8 together with the infinite horizon H∞ performance bound in this subsection can be found partially in [11].

References 1. Basar T, Bernhard P (2008) H∞ optimal control and related minimax design problems: a dynamic game approach. Modern Birkhäuser Classics, Boston 2. Delfour MC, McCalla C, Mitter SK (1975) Stability and the infinite-time quadratic cost for linear hereditary differential systems. SIAM J Control 13(1):48–88 3. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2 and H∞ control problems. IEEE Trans Autom Control 34(8):831–847 4. Eller DH, Aggarwal JK, Banks HT (1969) Optimal control of linear time-delay systems. IEEE Trans Autom Control 14(6):678–687 5. Francis BA (1987) A course in H∞ control theory, vol 88. Lecture notes in control and information science. Springer, Berlin 6. Fridman E, Shaked U (2000) Finite horizon H∞ state feedback control of continuous-time systems with state delays. IEEE Trans Autom Control 45(12):2406–2411 7. Kim KB, Yoon TW, Kwon WH (2001) Stabilizing receding horizon H∞ controls for linear continuous time-varying systems. IEEE Trans Autom Control 46(8):1273–1279 8. Kwon WH, Han S (2006) Receding horizon control: model predictive control for state models. Springer, Berlin 9. Kwong RH (1980) A stability theory for the linear-quadratic-Gaussian problem for systems with delays in the state, control, and observations. SIAM J Control Optim 18(1):49–75 10. Lee YS, Han S, Kwon WH (2006) Receding horizon H∞ control for systems with a state-delay. Asian J Control 8(1):63–71

References

387

11. Park P, Lee SY, Park J, Kwon WH (To appear) Receding horizon H∞ control with delaydependent cost monotonicity for state delayed systems 12. Tadmor G (2000) The standard H∞ problem in systems with a single input delay. IEEE Trans Autom Control 45(3):382–397 13. Yoo HW (2007) Receding horizon control for input delay system. M.S. thesis, Seoul National University 14. Yoo HW, Han S, Lee YS (2012) Receding horizon H∞ control for input-delayed systems. Math Probl Eng

Appendix A

Useful Matrix Results

A.1

Matrix Inertia Properties

The inertia of A is defined to be the ordered triple such that In{A} = {n+ , n− , n0 }, where n+ , n− , and n0 denote the number of positive, negative, and zero eigenvalues of A, respectively. The number n+ + n− is the rank of A and n0 is often called the nullity of A. A transformation of the form BABT , where B is any nonsingular matrix, is called the congruence transformation and the matrices A and BABT are said to be congruent to each other. Lemma A.1 (Sylvester’s Law of Inertia) For any nonsingular B, In{A} = In{BABT }. Sylvester’s Law implies that congruence preserves inertia. For any symmetric matrices A and D, it holds that       −1 T  I 0 A 0 A CT I A C = if ∃ A−1 M = C D CA−1 I 0 ΔA 0 I     I 0 I C T D−1 ΔD 0 if ∃ D−1 , = 0 I 0 D D−1 C I where ΔA  D − CA−1 C T , ΔD  A − C T D−1 C. Here, ΔA is called the Schur complement of the block A of the matrix M and ΔD is called the Schur complement of the block D of the matrix M . Lemma A.2 (Inertia of Block Hermitian Matrices) In{M } = In{A} + In{ΔA } = In{D} + In{ΔD }. © Springer International Publishing AG, part of Springer Nature 2019 W. H. Kwon and P. Park, Stabilizing and Optimizing Control for Time-Delay Systems, Communications and Control Engineering, https://doi.org/10.1007/978-3-319-92704-6

389

390

Appendix A: Useful Matrix Results

A.2

Positive Definite Matrices

For a symmetric n × n real matrix M such that  A CT , M = C D 

the matrix M is said to be positive/negative definite if z T Mz is positive/negative for every non-zero column vector z ∈ Rn . Also, the matrix M is said to be positive/negative semi-definite if z T Mz is non-negative/non-positive for every non-zero column vector z ∈ Rn . Lemma A.3 All eigenvalues of positive/negative definite matrix are positive/ negative. From Lemmas A.2 and A.3, it holds that 0 < M ⇔ 0 < A, 0 < ΔA = D − CA−1 C T ⇔ 0 < D, 0 < ΔD = A − C T D−1 C, 0 > M ⇔ 0 > A, 0 > ΔA = D − CA−1 C T ⇔ 0 > D, 0 > ΔD = A − C T D−1 C.

A.3 Inverse of Block Matrices

When the block matrix is invertible, we can use the factorization to write

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}\begin{bmatrix} A^{-1} & 0 \\ 0 & \Delta_A^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -CA^{-1} & I \end{bmatrix} = \begin{bmatrix} A^{-1} + A^{-1}B\Delta_A^{-1}CA^{-1} & -A^{-1}B\Delta_A^{-1} \\ -\Delta_A^{-1}CA^{-1} & \Delta_A^{-1} \end{bmatrix}. \quad (A.1)$$

Alternatively, we can write

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ -D^{-1}C & I \end{bmatrix}\begin{bmatrix} \Delta_D^{-1} & 0 \\ 0 & D^{-1} \end{bmatrix}\begin{bmatrix} I & -BD^{-1} \\ 0 & I \end{bmatrix} = \begin{bmatrix} \Delta_D^{-1} & -\Delta_D^{-1}BD^{-1} \\ -D^{-1}C\Delta_D^{-1} & D^{-1} + D^{-1}C\Delta_D^{-1}BD^{-1} \end{bmatrix}. \quad (A.2)$$

By (A.1) and (A.2), we have

$$\Delta_D^{-1} = A^{-1} + A^{-1}B\Delta_A^{-1}CA^{-1}, \qquad \Delta_A^{-1} = D^{-1} + D^{-1}C\Delta_D^{-1}BD^{-1}.$$

Lemma A.4 (Matrix Inversion) $(A + BCD)^{-1} = A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$.


Lemma A.5 (Matrix Inversion Shift) $(I + AB)^{-1}A = A(I + BA)^{-1}$.
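The block-inverse formula (A.1) and Lemmas A.4–A.5 can be verified numerically in a few lines; a minimal sketch, assuming NumPy, with arbitrary well-conditioned test matrices (the diagonal shifts only make the random blocks safely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 3 * np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 3 * np.eye(n)
D = rng.standard_normal((n, n)) + 3 * np.eye(n)

# (A.1): block inverse via the Schur complement Delta_A = D - C A^{-1} B.
M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
DAi = np.linalg.inv(D - C @ Ai @ B)
Minv = np.block([[Ai + Ai @ B @ DAi @ C @ Ai, -Ai @ B @ DAi],
                 [-DAi @ C @ Ai,              DAi]])
assert np.allclose(Minv, np.linalg.inv(M))

# Lemma A.4: (A + B C D)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}.
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
assert np.allclose(lhs, rhs)

# Lemma A.5: (I + A B)^{-1} A = A (I + B A)^{-1}.
I = np.eye(n)
assert np.allclose(np.linalg.inv(I + A @ B) @ A, A @ np.linalg.inv(I + B @ A))
print("block-inverse, matrix-inversion, and shift identities verified")
```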

A.4 Determinant

Using the product rule for determinants, we have

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det A \cdot \det(D - CA^{-1}B) = \det A \cdot \det \Delta_A = \det D \cdot \det(A - BD^{-1}C) = \det D \cdot \det \Delta_D.$$
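A quick numerical check of this factorization, assuming NumPy; the blocks are arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
M = np.block([[A, B], [C, D]])

lhs = np.linalg.det(M)
via_A = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)
via_D = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)
print(np.isclose(lhs, via_A), np.isclose(lhs, via_D))
```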


Appendix B: Useful Lemmas

B.1 Elimination Lemma

B.1.1 Definition and Concept

Lemma B.1 There exists a matrix Σ such that

$$Q + U\Sigma V^T + V\Sigma^T U^T > 0$$

if and only if

$$U_\perp^T Q U_\perp > 0, \qquad V_\perp^T Q V_\perp > 0,$$

where $U_\perp$ and $V_\perp$ are orthogonal complements of U and V, respectively.

Proof It is obvious that if the first condition holds for some Σ, so do the second inequalities. Suppose now that the second inequalities are feasible but the first inequality is not feasible for any Σ, i.e., for every Σ,

$$Q + V\Sigma^T U^T + U\Sigma V^T \not> 0.$$

By duality, this is equivalent to the existence of $Z \ne 0$ with $Z \ge 0$ such that

$$V^T Z U = 0, \qquad \mathrm{trace}\{QZ\} \le 0.$$

$Z \ge 0$ and $V^T Z U = 0$ imply that $Z = V_\perp H H^T V_\perp^T + U_\perp K K^T U_\perp^T$ for some matrices H, K, at least one of which is nonzero (since otherwise Z = 0). This finishes the proof, since $U_\perp^T Q U_\perp > 0$ and $V_\perp^T Q V_\perp > 0$ imply $\mathrm{trace}\{QZ\} > 0$ for any such Z, which contradicts $\mathrm{trace}\{QZ\} \le 0$ in the dual condition. □

B.1.2 Example

We can encounter the LMI problem

$$Q > 0, \qquad AQ + QA^T + BY + Y^T B^T < 0,$$

where Q and Y are variables. This is equivalent to

$$Q > 0, \qquad AQ + QA^T < \sigma BB^T,$$

where the variables are Q and a scalar σ. It is also equivalent to

$$Q > 0, \qquad B_\perp^T(AQ + QA^T)B_\perp < 0$$

with the variable Q, where $B_\perp$ is any matrix of maximal rank such that $B_\perp^T B = 0$. Thus we have eliminated the variable Y from the original LMI and reduced the size of the matrices in the LMI.
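A minimal numerical sketch of this equivalence, assuming NumPy and SciPy; the matrices A, B and the stabilizing gain K are arbitrary choices, not from the book. A feasible pair (Q, Y) is constructed from a stabilizing gain, and the same Q is seen to satisfy the eliminated condition with Y removed.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, null_space

A = np.array([[0.0, 1.0], [2.0, -1.0]])    # open-loop unstable example
B = np.array([[0.0], [1.0]])
K = np.array([[-12.0, -5.0]])              # any gain making A + B K Hurwitz

# With Acl = A + B K Hurwitz, the Lyapunov solution Q of
# Acl Q + Q Acl^T = -I is positive definite, and Y = K Q gives
# A Q + Q A^T + B Y + Y^T B^T = -I < 0, so (Q, Y) is feasible.
Acl = A + B @ K
Q = solve_continuous_lyapunov(Acl, -np.eye(2))
Y = K @ Q
L = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T
print(np.linalg.eigvalsh(Q))               # positive: Q > 0
print(np.linalg.eigvalsh((L + L.T) / 2))   # negative: original LMI holds

# The eliminated condition holds for the same Q, with no Y involved.
Bp = null_space(B.T)                       # B_perp, i.e. B_perp^T B = 0
print(np.linalg.eigvalsh(Bp.T @ (A @ Q + Q @ A.T) @ Bp))  # negative
```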

B.2 S-Procedure

B.2.1 S-Procedure with Non-strict Inequalities

Let $F_0, \ldots, F_p$ be quadratic functions of the variable ζ:

$$F_i(\zeta) \triangleq \zeta^T T_i \zeta + 2u_i^T\zeta + v_i, \quad i = 0, \ldots, p,$$

where $T_i = T_i^T$. We consider the following condition on $F_0, \ldots, F_p$:

$$F_0(\zeta) \ge 0 \text{ for all } \zeta \text{ such that } F_i(\zeta) \ge 0,\ i = 1, \ldots, p.$$

Obviously, if there exist $\tau_1 \ge 0, \ldots, \tau_p \ge 0$ such that, for all ζ,

$$F_0(\zeta) - \sum_{i=1}^{p} \tau_i F_i(\zeta) \ge 0,$$

then the above condition holds. It is a nontrivial fact that when p = 1 the converse also holds, provided that there is some $\zeta_0$ such that $F_1(\zeta_0) > 0$. Note that the above inequality can be written as

$$\begin{bmatrix} T_0 & u_0 \\ u_0^T & v_0 \end{bmatrix} - \sum_{i=1}^{p}\tau_i \begin{bmatrix} T_i & u_i \\ u_i^T & v_i \end{bmatrix} \ge 0.$$

B.2.2 S-Procedure with Strict Inequalities

Another variation of the S-procedure involves quadratic forms and strict inequalities. Let $T_0, \ldots, T_p$ be symmetric matrices. We consider the following condition on $T_0, \ldots, T_p$:

$$\zeta^T T_0 \zeta > 0 \text{ for all } \zeta \ne 0 \text{ such that } \zeta^T T_i \zeta \ge 0,\ i = 1, \ldots, p.$$

It is obvious that if there exist $\tau_1 \ge 0, \ldots, \tau_p \ge 0$ such that

$$T_0 - \sum_{i=1}^{p} \tau_i T_i > 0,$$

then the former condition holds. It is a nontrivial fact that when p = 1 the converse also holds, provided that there is some $\zeta_0$ such that $\zeta_0^T T_1 \zeta_0 > 0$. Note that the latter condition is an LMI in the variables $T_0$ and $\tau_1, \ldots, \tau_p$.

B.2.3 Example

In quadratic stability problems, we encounter the following constraint on the variable P:

$$\begin{bmatrix} \xi \\ \pi \end{bmatrix}^T \begin{bmatrix} A^TP + PA & PB \\ B^TP & 0 \end{bmatrix} \begin{bmatrix} \xi \\ \pi \end{bmatrix} < 0 \text{ for all } \begin{bmatrix} \xi \\ \pi \end{bmatrix} \ne 0 \text{ satisfying } \pi^T\pi \le \xi^T C^TC\xi.$$

Applying the strict-inequality version of the S-procedure, the above condition is equivalent to the existence of $\tau \ge 0$ such that

$$\begin{bmatrix} A^TP + PA + \tau C^TC & PB \\ B^TP & -\tau I \end{bmatrix} < 0.$$

Thus the problem of finding P > 0 such that the former condition holds can be expressed as an LMI in P and the scalar variable τ.
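A minimal sketch of this LMI as a feasibility problem, assuming the cvxpy package (with one of its bundled SDP solvers) is installed; the data A, B, C are arbitrary examples chosen so that the condition is satisfiable.

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 0.0], [1.0, -3.0]])   # Hurwitz example
B = np.array([[1.0], [0.0]])
C = np.array([[0.5, 0.0]])

P = cp.Variable((2, 2), symmetric=True)
tau = cp.Variable(nonneg=True)
# The S-procedure LMI: [A^T P + P A + tau C^T C, P B; B^T P, -tau I] < 0.
M = cp.bmat([[A.T @ P + P @ A + tau * (C.T @ C), P @ B],
             [B.T @ P,                           -tau * np.eye(1)]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(2), M << -1e-6 * np.eye(3)])
prob.solve()
print(prob.status, float(tau.value))       # feasible: quadratic stability holds
```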

B.3 Completion Lemma

Lemma B.2 Let $X \in \mathbb{R}^{n \times n}$ and $\bar X \in \mathbb{R}^{n \times n}$ be positive definite matrices. Then there exists a positive definite matrix $P \in \mathbb{R}^{2n \times 2n}$ such that the upper-left n × n block of P is X and that of $P^{-1}$ is $\bar X$, if and only if

$$\begin{bmatrix} X & I \\ I & \bar X \end{bmatrix} \ge 0.$$

Proof Given any positive definite matrices $X, \bar X \in \mathbb{R}^{n \times n}$, there exists an integer m such that the dilation can be completed if and only if $X - \bar X^{-1} \ge 0$, which, by the Schur complement, is exactly the condition above. If this condition holds, then the rank of $X - \bar X^{-1}$ determines the dimension necessary for the dilation. Since n × n matrices have rank at most n, the maximum dimension needed for the dilation is $m_{\max} = n$, and this occurs when $X - \bar X^{-1} > 0$. □
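The "if" direction admits an explicit construction. A minimal numerical sketch, assuming NumPy; X and X̄ are arbitrary positive definite examples, and the completion P = [[X, Y], [Y, Y]] with Y = X − X̄⁻¹ is one convenient choice (valid here because Y is nonsingular), not the only one.

```python
import numpy as np

X = np.array([[3.0, 0.5], [0.5, 2.0]])
Xbar = np.array([[1.0, 0.2], [0.2, 1.5]])

Y = X - np.linalg.inv(Xbar)        # positive definite for this data
P = np.block([[X, Y], [Y, Y]])

# P > 0, the upper-left block of P is X, and that of P^{-1} is Xbar:
# (P^{-1})_{11} = (X - Y Y^{-1} Y)^{-1} = (X - Y)^{-1} = Xbar.
print(np.linalg.eigvalsh(P) > 0)
print(np.allclose(np.linalg.inv(P)[:2, :2], Xbar))
```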

B.4 Elimination of Matrix Variables

Lemma B.3 There exists a symmetric matrix X such that

$$\begin{bmatrix} P_1 - LXL^T & Q_1 \\ Q_1^T & R_1 \end{bmatrix} > 0, \qquad \begin{bmatrix} P_2 + X & Q_2 \\ Q_2^T & R_2 \end{bmatrix} > 0 \quad (B.1)$$

if and only if

$$\begin{bmatrix} P_1 + LP_2L^T & Q_1 & LQ_2 \\ Q_1^T & R_1 & 0 \\ Q_2^T L^T & 0 & R_2 \end{bmatrix} > 0. \quad (B.2)$$

Proof Using the Schur complement, it is easy to see that (B.2) is equivalent to

$$R_1 > 0, \qquad R_2 > 0 \quad (B.3)$$

and

$$\Delta = P_1 + LP_2L^T - Q_1R_1^{-1}Q_1^T - LQ_2R_2^{-1}Q_2^TL^T > 0. \quad (B.4)$$

Also, (B.1) is equivalent to (B.3) and

$$\Delta_1 = P_1 - LXL^T - Q_1R_1^{-1}Q_1^T > 0, \qquad \Delta_2 = P_2 + X - Q_2R_2^{-1}Q_2^T > 0. \quad (B.5)$$

Therefore, we only need to show that, given (B.3), there exists an X satisfying (B.5) if and only if (B.4) is satisfied. The existence of an X satisfying (B.5) implies (B.4), since $\Delta = \Delta_1 + L\Delta_2L^T$. On the other hand, if (B.4) is satisfied, choosing

$$X = Q_2R_2^{-1}Q_2^T - P_2 + \epsilon I$$

for a sufficiently small ε > 0 results in

$$\Delta_2 = \epsilon I > 0, \qquad \Delta_1 = \Delta - L\Delta_2L^T = \Delta - \epsilon LL^T > 0.$$

Thus (B.5) is satisfied. □
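The constructive step of the proof can be replayed numerically; a minimal sketch, assuming NumPy, with arbitrary example data chosen so that (B.4) holds.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
L  = rng.standard_normal((n, n))
P2 = rng.standard_normal((n, n)); P2 = (P2 + P2.T) / 2
Q1 = 0.1 * rng.standard_normal((n, n))
Q2 = 0.1 * rng.standard_normal((n, n))
R1 = np.eye(n); R2 = np.eye(n)
P1 = 5.0 * np.eye(n) - L @ P2 @ L.T        # makes (B.4) hold comfortably

Delta = P1 + L @ P2 @ L.T - Q1 @ Q1.T - L @ Q2 @ Q2.T @ L.T
assert np.all(np.linalg.eigvalsh(Delta) > 0)          # (B.4)

eps = 0.1
X = Q2 @ Q2.T - P2 + eps * np.eye(n)       # the choice used in the proof
D1 = P1 - L @ X @ L.T - Q1 @ Q1.T          # Delta_1 of (B.5) = Delta - eps L L^T
D2 = P2 + X - Q2 @ Q2.T                    # Delta_2 of (B.5) = eps I
print(np.linalg.eigvalsh(D1) > 0, np.allclose(D2, eps * np.eye(n)))
```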

Appendix C: Integral Inequalities for Quadratic Functions

C.1 Wirtinger-Based Integral Inequality

In order to reduce the conservatism of Lemma 2.4, an alternative inequality as in [1] can be obtained as follows.

Lemma C.1 For a matrix Z > 0 and a vector function $w : [a, b] \to \mathbb{R}^n$, the following inequality holds:

$$-\int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha \le -\frac{1}{b-a}\Big(\int_a^b w(\alpha)\,d\alpha\Big)^T Z \Big(\int_a^b w(\alpha)\,d\alpha\Big) - \frac{3}{b-a}\,\Omega^T Z \Omega, \quad (C.1)$$

where $\Omega = \int_a^b w(\alpha)\,d\alpha - \frac{2}{b-a}\int_a^b\int_a^\alpha w(\beta)\,d\beta\,d\alpha$.

Proof For any continuous function w, define the function z, for all $\alpha \in [a, b]$, by

$$z(\alpha) = \int_a^\alpha w(s)\,ds - \frac{\alpha-a}{b-a}\int_a^b w(s)\,ds - \frac{(b-\alpha)(\alpha-a)}{(b-a)^2}\,\Theta,$$

where Θ is a constant vector of $\mathbb{R}^n$ to be defined. Then

$$\int_a^b \dot z^T(\alpha) Z \dot z(\alpha)\,d\alpha = \int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha - \frac{1}{b-a}\Big(\int_a^b w(\alpha)\,d\alpha\Big)^T Z\Big(\int_a^b w(\alpha)\,d\alpha\Big) + \int_a^b \frac{(b+a-2\alpha)^2}{(b-a)^4}\,d\alpha\;\Theta^T Z \Theta - 2\,\Theta^T Z \int_a^b \frac{b+a-2\alpha}{(b-a)^2}\,w(\alpha)\,d\alpha + 2\int_a^b \frac{b+a-2\alpha}{(b-a)^2}\,d\alpha\;\Theta^T Z\,\frac{1}{b-a}\int_a^b w(\alpha)\,d\alpha.$$

By noting that $\int_a^b (b + a - 2\alpha)\,d\alpha = 0$ and by use of an integration by parts, this yields

$$\int_a^b \dot z^T(\alpha) Z \dot z(\alpha)\,d\alpha = \int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha - \frac{1}{b-a}\Big(\int_a^b w(\alpha)\,d\alpha\Big)^T Z\Big(\int_a^b w(\alpha)\,d\alpha\Big) + \frac{1}{3(b-a)}\,\Theta^T Z \Theta + \frac{2}{b-a}\,\Theta^T Z \Omega.$$

Rewriting the two last terms as a sum of squares leads to

$$\int_a^b \dot z^T(\alpha) Z \dot z(\alpha)\,d\alpha = \int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha - \frac{1}{b-a}\Big(\int_a^b w(\alpha)\,d\alpha\Big)^T Z\Big(\int_a^b w(\alpha)\,d\alpha\Big) - \frac{3}{b-a}\,\Omega^T Z \Omega + \frac{1}{3(b-a)}(\Theta + 3\Omega)^T Z (\Theta + 3\Omega).$$

Since Z > 0, the left-hand side is non-negative, and it holds that

$$-\int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha \le -\frac{1}{b-a}\Big(\int_a^b w(\alpha)\,d\alpha\Big)^T Z\Big(\int_a^b w(\alpha)\,d\alpha\Big) - \frac{3}{b-a}\,\Omega^T Z \Omega + \frac{1}{3(b-a)}(\Theta + 3\Omega)^T Z (\Theta + 3\Omega).$$

Setting Θ = −3Ω concludes the proof. □

Note that the term $\frac{3}{b-a}\Omega^T Z \Omega$ subtracted on the right-hand side of (C.1) is non-negative since Z > 0; thus Lemma C.1 gives a tighter bound than Lemma 2.4.
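The tightening can be seen numerically in the scalar case (n = 1, Z = 1) by simple quadrature; a minimal sketch, assuming SciPy, with an arbitrary test signal w that is not taken from the book.

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

a, b = 0.0, 1.0
t = np.linspace(a, b, 4001)
w = np.sin(3.0 * t) + 0.5 * t**2                    # arbitrary test signal

I = trapezoid(w**2, t)                              # integral of w^T Z w, Z = 1
S = trapezoid(w, t)                                 # single integral of w
W = cumulative_trapezoid(w, t, initial=0.0)         # inner integral of w
Omega = S - 2.0 / (b - a) * trapezoid(W, t)

jensen_type = S**2 / (b - a)                        # first term of the bound
wirtinger = jensen_type + 3.0 / (b - a) * Omega**2  # adds a non-negative term
print(I >= wirtinger >= jensen_type)                # True: (C.1) is tighter
```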

C.2 Auxiliary Function-Based Integral Inequality

Lemma C.1 can be generalized by introducing some intermediate terms called auxiliary functions. Before introducing the inequality, we give an essential lemma as in [2] as follows.

Lemma C.2 (Innovation process for integral operation) Given independent functions $\{p_i(u), i \in [0, n] \mid p_0(u) = 1\}$, the innovation functions of $p_i(u)$ based on $\{p_j(u), j \in [0, i-1]\}$, say $\tilde p_i(u)$, can be generated by

$$\tilde p_i(u) = p_i(u) - \sum_{j=0}^{i-1}\Big(\int_a^b p_i(\alpha)\tilde p_j(\alpha)\,d\alpha\Big)\Big(\int_a^b \tilde p_j^2(\alpha)\,d\alpha\Big)^{-1}\tilde p_j(u), \qquad \tilde p_0(u) = p_0(u).$$

Then the following properties are satisfied:

$$\int_a^b \tilde p_i(\alpha)\,d\alpha = 0, \quad 1 \le i \le n, \quad (C.2)$$

$$\int_a^b \tilde p_i(\alpha)\tilde p_j(\alpha)\,d\alpha = 0, \quad 0 \le i, j \le n,\ i \ne j. \quad (C.3)$$

Proof It is easy to show that

$$\int_a^b \tilde p_1(\alpha)\,d\alpha = \int_a^b \Big[p_1(\alpha) - \Big(\int_a^b p_1(s)\tilde p_0(s)\,ds\Big)\Big(\int_a^b \tilde p_0^2(s)\,ds\Big)^{-1}\tilde p_0(\alpha)\Big]d\alpha = \int_a^b p_1(\alpha)\,d\alpha - \int_a^b p_1(s)\,ds = 0.$$

Assume that $\int_a^b \tilde p_j(\alpha)\,d\alpha = 0$ for $j \in [1, i-1]$. Then, for $i \in [2, n]$,

$$\int_a^b \tilde p_i(\alpha)\,d\alpha = \int_a^b\Big[p_i(\alpha) - \sum_{j=0}^{i-1}\Big(\int_a^b p_i(s)\tilde p_j(s)\,ds\Big)\Big(\int_a^b \tilde p_j^2(s)\,ds\Big)^{-1}\tilde p_j(\alpha)\Big]d\alpha = \int_a^b\Big[p_i(\alpha) - \Big(\int_a^b p_i(s)\tilde p_0(s)\,ds\Big)\Big(\int_a^b \tilde p_0^2(s)\,ds\Big)^{-1}\tilde p_0(\alpha)\Big]d\alpha = \int_a^b p_i(\alpha)\,d\alpha - \int_a^b p_i(s)\,ds = 0,$$

which proves (C.2) by induction; the orthogonality (C.3) is the standard property of the Gram–Schmidt construction generating the $\tilde p_i$. This completes the proof. □

Now, we have the following auxiliary function-based integral inequality lemma.

Lemma C.3 For a positive definite matrix Z > 0, an integrable function $\{w(u), u \in [a, b]\}$, and given functions $\{p_i(u), i \in [0, n] \mid p_0(u) = 1\}$, the following inequality holds:

$$\int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha \ge \sum_{i=0}^{n} \Big(\int_a^b \tilde p_i^2(\alpha)\,d\alpha\Big)^{-1} \Big(\int_a^b \tilde p_i(\alpha) w(\alpha)\,d\alpha\Big)^T Z \Big(\int_a^b \tilde p_i(\alpha) w(\alpha)\,d\alpha\Big),$$

where $\tilde p_i(u)$ is obtained by the process in Lemma C.2.

Proof Let us define $q_i(u)$ as $q_i(u) = q_{i-1}(u) - \tilde p_i(u)\Theta_i$ for all $i \in [0, n]$, where $q_{-1}(u) = w(u)$ and $\Theta_i$ is defined by

$$\Theta_i = \Big(\int_a^b \tilde p_i^2(\alpha)\,d\alpha\Big)^{-1} \int_a^b \tilde p_i(\alpha) q_{i-1}(\alpha)\,d\alpha,$$

which, by the orthogonality (C.3), equals $\big(\int_a^b \tilde p_i^2(\alpha)\,d\alpha\big)^{-1}\int_a^b \tilde p_i(\alpha)w(\alpha)\,d\alpha$. If (C.2)–(C.3) hold, we have

$$0 \le \int_a^b q_n^T(\alpha) Z q_n(\alpha)\,d\alpha = \int_a^b \Big(w(\alpha) - \sum_{i=0}^n \tilde p_i(\alpha)\Theta_i\Big)^T Z \Big(w(\alpha) - \sum_{i=0}^n \tilde p_i(\alpha)\Theta_i\Big)\,d\alpha$$

$$= \int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha + \sum_{i=0}^n \Big[\Big(\int_a^b \tilde p_i^2(\alpha)\,d\alpha\Big)\Theta_i^T Z \Theta_i - 2\,\Theta_i^T Z \int_a^b \tilde p_i(\alpha) w(\alpha)\,d\alpha\Big]$$

$$= \int_a^b w^T(\alpha) Z w(\alpha)\,d\alpha - \sum_{i=0}^n \Big(\int_a^b \tilde p_i^2(\alpha)\,d\alpha\Big)^{-1} \Big(\int_a^b \tilde p_i(\alpha) w(\alpha)\,d\alpha\Big)^T Z \Big(\int_a^b \tilde p_i(\alpha) w(\alpha)\,d\alpha\Big),$$

for $n = 0, 1, 2, \ldots$, where the cross terms with $i \ne j$ vanish by (C.3). This completes the proof. □



To easily apply Lemma C.3 to the stability analysis for time-delay systems, we can choose the auxiliary functions as polynomials. For n ∈ [0, 3], we have the following result. Corollary C.1 For a positive definite matrix Z > 0, a differentiable function {w(u), u ∈ [a, b]}, and a polynomial auxiliary function pi (u) = (u − a)i , the following inequality holds: − a

b

w T (α)Zw(α)d α ≥ −

n  2i + 1 i=0

b−a

ΩiT ZΩi ,

(C.4)

for 0 ≤ n ≤ 3, where Ω0 =

b a

b

w(α)d α,

b α 2 w(β)d βd α, b−a a a a b b α 6 12 w(α)d α − w(β)d βd α + Ω2 = b − a (b − a)2 a a a b b α 12 60 w(α)d α − w(β)d βd α + Ω3 = b−a a a (b − a)2 a b α β γ 120 w(δ)d δd γd βd α. − (b − a)3 a a a a Ω1 =

w(α)d α −

b α β a

a

a

a

a

a

b α β

w(γ)d γd βd α, w(γ)d γd βd α

Proof For a polynomial pi (u) = (u − a)i , we can obtain p˜ i (u) by using the innovation process in Lemma C.2 as follows.

Appendix C: Integral Inequalities for Quadratic Functions

403

p˜ 0 (u) =1, 1 p˜ 1 (u) =(u − a) − (b − a), 2 1 p˜ 2 (u) =(u − a)2 − (b − a)(u − a) + (b − a)2 , 6 3 3 1 p˜ 3 (u) =(u − a)3 − (b − a)(u − a)2 + (b − a)2 (u − a) − (b − a)3 . 2 5 20 Then, the inequality (C.4) can easily be derived by Lemma C.3 based on the above  p˜ i (u), which concludes the proof.

References

1. Seuret A, Gouaisbaut F (2013) Wirtinger-based integral inequality: application to time-delay systems. Automatica 49(9):2860–2866
2. Park P, Lee WI, Lee SY (2015) Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems. J Frankl Inst 352:1378–1396

Appendix D: Stochastic Processes

D.1 Least-Squares Estimate

For two given random variables X and Y with a known joint probability density function $f_{X,Y}(\cdot,\cdot)$, the random variable Y can be measured and takes the value y, and we want to estimate the corresponding value, say x, of the unobservable random variable X. An estimate, say $\hat x$, of the value x of X when Y = y is given as a function of y such as

$$\hat x = h(y), \quad (D.1)$$

where h(·) is a (nonlinear or linear) function.

Theorem D.1 Assume that the criterion is chosen as the squared error

$$\int_{\Omega_Y}\int_{\Omega_X}\big(x - h(y)\big)^2 f_{X,Y}(x,y)\,dx\,dy, \quad (D.2)$$

where $\Omega_X$ and $\Omega_Y$ are the sample spaces of X and Y, respectively. Then the least mean-squares estimate that minimizes the criterion is given by

$$h(y) = \int_{\Omega_X} x f_{X|Y}(x|y)\,dx, \quad (D.3)$$

where $f_{X|Y}(\cdot|\cdot)$ is the conditional probability density function of X given Y = y.

Proof Note that $f_{X,Y}(x,y) = f_{X|Y}(x|y)f_Y(y)$. Therefore, (D.2) can be written as

$$\int_{\Omega_Y}\int_{\Omega_X}\big(x - h(y)\big)^2 f_{X|Y}(x|y)f_Y(y)\,dx\,dy = \int_{\Omega_Y}\Big[h^2(y)\int_{\Omega_X} f_{X|Y}(x|y)\,dx - 2h(y)\int_{\Omega_X} x f_{X|Y}(x|y)\,dx + \int_{\Omega_X} x^2 f_{X|Y}(x|y)\,dx\Big] f_Y(y)\,dy.$$

Since $\int_{\Omega_X} f_{X|Y}(x|y)\,dx = 1$, this is written as

$$\int_{\Omega_Y}\Big(h(y) - \int_{\Omega_X} x f_{X|Y}(x|y)\,dx\Big)^2 f_Y(y)\,dy + \int_{\Omega_Y}\Big[\int_{\Omega_X} x^2 f_{X|Y}(x|y)\,dx - \Big(\int_{\Omega_X} x f_{X|Y}(x|y)\,dx\Big)^2\Big] f_Y(y)\,dy,$$

of which only the first term depends on h, and which is therefore minimized by (D.3). This completes the proof. □

We can simply express (D.3) with the conditional expectation of X given Y = y, i.e.,

$$\hat x = E(X|Y = y). \quad (D.4)$$

Therefore, we can say that the least-squares estimate of a random variable X for a given random variable Y can be written as

$$\hat X = E(X|Y). \quad (D.5)$$

These least-squares estimates have the following property.

Theorem D.2 The conditional expectation of a random variable X for a given random variable Y, say E(X|Y), is the unique random variable that
(a) is a function of Y, i.e. h(Y), and
(b) satisfies the orthogonality condition

$$E\{[X - h(Y)]\,g(Y)\} = 0 \quad (D.6)$$

for all functions g(·) of Y for which the above expected value is meaningful.

Proof We show the second part (b) first. If h(Y) = E(X|Y), (D.6) can be written as

$$E\{E(X|Y)g(Y)\} = E\{Xg(Y)\},$$

which is shown to hold for all g(Y). To do so, consider

$$\text{LHS} = \int_{\Omega_Y}\Big(\int_{\Omega_X} z f_{X|Y}(z|y)\,dz\Big)\, g(y)\, f_Y(y)\,dy = \int_{\Omega_Y}\int_{\Omega_X} x\, g(y)\, f_{X,Y}(x,y)\,dx\,dy = \text{RHS},$$

using $f_{X,Y}(x,y) = f_{X|Y}(x|y) f_Y(y)$. Let us now consider any h(Y) satisfying condition (b). Then $E\{Xg(Y)\} = E\{h(Y)g(Y)\}$ for all g(Y), from which we have

$$\int_{\Omega_Y}\Big(\int_{\Omega_X} x f_{X|Y}(x|y)\,dx\Big)\, g(y)\, f_Y(y)\,dy = \int_{\Omega_Y} h(y)\, g(y)\, f_Y(y)\,dy.$$

Since this holds for all g, such an h(Y) should be E(X|Y), which completes the proof of the second part (b). Simultaneously, this also shows the uniqueness of the h(Y) satisfying the second part (b), which completes the proof of the first part (a). □

If h(Y) is restricted to be affine in Y, i.e. h(Y) = aY + b, then the minimum affine least-squares estimate is given by

$$a = E\{(X - m_X)(Y - m_Y)^T\}\,E^{-1}\{(Y - m_Y)(Y - m_Y)^T\}, \quad (D.7)$$

$$b = m_X - E\{(X - m_X)(Y - m_Y)^T\}\,E^{-1}\{(Y - m_Y)(Y - m_Y)^T\}\, m_Y, \quad (D.8)$$

where $m_X = EX$ and $m_Y = EY$. Therefore, if X and Y are zero-mean random variables, then the minimum affine least-squares estimate h(Y) becomes linear in Y such that

$$h(Y) = E(XY^T)\,E^{-1}(YY^T)\,Y, \quad (D.9)$$

which is called the minimum linear least-squares estimate. If the random variables X and Y are jointly zero-mean Gaussian, the minimum least-squares estimate E(X|Y) coincides with the minimum linear least-squares estimate h(Y).

All properties mentioned in this section can be extended to the case of a random vector $Y = [Y_1 \cdots Y_n]^T$ or a stochastic process $\mathcal{Y} = \{Y(\tau) \mid \tau \in [a, b]\}$, such as $\hat X = E(X|Y)$ or $\hat X = E(X|\mathcal{Y})$.

D.2 Kalman Filtering

A standard continuous-time state-space model is of the form

$$\dot x(t) = F(t)x(t) + G(t)u(t), \quad (D.10)$$
$$y(t) = H(t)x(t) + v(t), \quad (D.11)$$

where u(t) and v(t) are the input noise and the measurement noise, respectively, which are zero-mean, independent, and Gaussian with

$$E\left\{\begin{bmatrix} x(t_0) - Ex(t_0) \\ u(t) \\ v(t) \end{bmatrix}\begin{bmatrix} x(t_0) - Ex(t_0) \\ u(s) \\ v(s) \end{bmatrix}^T\right\} = \begin{bmatrix} \Pi_0 & 0 & 0 \\ 0 & Q_u(t)\delta(t-s) & 0 \\ 0 & 0 & R_v(t)\delta(t-s) \end{bmatrix}$$

for $t > t_0$ and $s > t_0$. Then the minimum least-squares filtered estimate of the state x(t) for the cost $E\{(x(t) - \hat x(t))(x(t) - \hat x(t))^T\}$, based on the past output $\{y(\alpha), \alpha \in [t_0, t)\}$, can be obtained via the so-called differential Riccati equation as follows [1].

Theorem D.3 Given a stochastic process $\{y(\alpha), \alpha \in [t_0, t)\}$ with the state-space model (D.10) and (D.11), the minimum least-squares filtered estimate of the state x(t) is given by

$$\dot{\hat x}(t) = F(t)\hat x(t) + P(t)H^T(t)R_v^{-1}(t)\{y(t) - H(t)\hat x(t)\}, \qquad \hat x(t_0) = Ex(t_0), \quad (D.12)$$

where P(t) is the estimation error covariance $E\{(x(t) - \hat x(t))(x(t) - \hat x(t))^T\}$, which propagates along the following differential Riccati equation

$$\dot P(t) = F(t)P(t) + P(t)F^T(t) + G(t)Q_u(t)G^T(t) - P(t)H^T(t)R_v^{-1}(t)H(t)P(t) \quad (D.13)$$

with $P(t_0) = E\{(x(t_0) - Ex(t_0))(x(t_0) - Ex(t_0))^T\} = \Pi_0$.
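A minimal sketch integrating the Riccati equation (D.13) for a scalar time-invariant example (F = −1, G = H = Qu = Rv = 1 are arbitrary choices), assuming SciPy; the trajectory converges to the steady state solving the algebraic version of (D.13).

```python
import numpy as np
from scipy.integrate import solve_ivp

F, G, H, Qu, Rv = -1.0, 1.0, 1.0, 1.0, 1.0
Pi0 = 2.0                                            # P(t0)

def riccati(t, P):
    # Scalar form of (D.13); the filter gain in (D.12) would be P H / Rv.
    return F * P + P * F + G * Qu * G - P * H / Rv * H * P

sol = solve_ivp(riccati, (0.0, 10.0), [Pi0])
P_inf = sol.y[0, -1]
# Steady state: 2 F P + Qu - P^2 / Rv = 0  =>  P = -1 + sqrt(2).
print(P_inf, -1.0 + np.sqrt(2.0))
```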

Reference

1. Kailath T, Sayed AH, Hassibi B (2000) Linear estimation. Prentice-Hall, Englewood Cliffs

Appendix E: Numerical Procedures for Infinite Horizon Controls

The coupled partial differential Riccati equations in LQ optimal controls, LQG optimal controls, and H∞ optimal controls can be solved approximately in the same manner as in [1].

E.1 Infinite Horizon LQ Control for Single Input Delayed Systems

The approximate method is characterized by replacing the derivatives in (6.65)–(6.66) by difference equations. In this manner, a control u*(t) can be found. The use of difference equations instead of the derivatives forces the user of this method to check whether reducing the step size or mesh size towards a well-defined limit yields the convergence of u*(t).

Let N be a positive integer and Δ = h/N. Partition the interval −h ≤ s ≤ 0 into equal segments whose endpoints are s = −iΔ, 0 ≤ i ≤ N. Also, for this value of N, partition the square −h ≤ s ≤ 0, −h ≤ r ≤ 0 into a grid of smaller squares whose vertices are (r, s) = (−iΔ, −jΔ) for i, j ∈ [1, N]. Figure E.1 shows the square partition used to obtain $P_3(r, s)$.

[Fig. E.1 Square partition for $P_3(r, s)$]

Then replace (6.64)–(6.66) with

$$0 = A^T P_1 + P_1 A + Q - [B^T P_1 + P_2^T(0)]^T R^{-1}[B^T P_1 + P_2^T(0)], \quad (E.1)$$

$$0 = [P_2(-(i-1)\Delta) - P_2(-i\Delta)]/\Delta - A^T P_2(-(i-1)\Delta) + [B^T P_1 + P_2^T(0)]^T R^{-1}[B^T P_2(-(i-1)\Delta) + P_3(0, -(i-1)\Delta)], \quad (E.2)$$

$$0 = [P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-i\Delta, -(j-1)\Delta)]/\Delta + [P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-(i-1)\Delta, -j\Delta)]/\Delta + [B^T P_2(-(i-1)\Delta) + P_3(0, -(i-1)\Delta)]^T R^{-1}[B^T P_2(-(j-1)\Delta) + P_3(0, -(j-1)\Delta)] \quad (E.3)$$

for $i, j \in [1, N]$, with boundary conditions

$$P_2(-h) = P_1 B_1, \qquad P_3(-h, -j\Delta) = B_1^T P_2(-j\Delta), \quad 0 \le j \le N. \quad (E.4)$$

The original input delayed system can be approximated by

$$\dot x(t) = Ax(t) + Bu(t) + B_1 u(t - N\Delta), \qquad \dot u(t - i\Delta) = [u(t - (i-1)\Delta) - u(t - i\Delta)]/\Delta,$$

which reduces to the following augmented delay-free system

$$\dot x_a(t) = A_a x_a(t) + B_a u(t),$$

where

$$x_a(t) = \begin{bmatrix} x(t) \\ u(t-\Delta) \\ u(t-2\Delta) \\ \vdots \\ u(t-N\Delta) \end{bmatrix}, \quad A_a = \begin{bmatrix} A & 0 & 0 & \cdots & B_1 \\ 0 & -\frac{1}{\Delta}I & 0 & \cdots & 0 \\ 0 & \frac{1}{\Delta}I & -\frac{1}{\Delta}I & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \frac{1}{\Delta}I & -\frac{1}{\Delta}I \end{bmatrix}, \quad B_a = \begin{bmatrix} B \\ \frac{1}{\Delta}I \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Let $P_a$ be the positive semidefinite solution of the following algebraic Riccati equation

$$A_a^T P_a + P_a A_a - P_a B_a R^{-1} B_a^T P_a + Q_a = 0, \quad (E.5)$$

where $Q_a$ and $P_a$ are partitioned as

$$Q_a = \begin{bmatrix} Q & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad P_a = \begin{bmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,N} \\ * & P_{1,1} & \cdots & P_{1,N} \\ \vdots & & \ddots & \vdots \\ * & * & \cdots & P_{N,N} \end{bmatrix}.$$

If one dissects (E.5) and compares those algebraic relations with the algebraic relations (E.1)–(E.4), the two sets of equations are found to be in direct correspondence. Specifically,

$$P_1 = P_{0,0}, \quad (E.6)$$
$$P_2(-i\Delta) = P_{0,i+1}/\Delta, \quad 0 \le i \le N-1, \quad (E.7)$$
$$P_3(-i\Delta, -j\Delta) = P_{i+1,j+1}/\Delta^2, \quad 0 \le i \le N-1,\ 0 \le j \le N-1, \quad (E.8)$$
$$P_2(-N\Delta) = P_2(-h) = P_{0,0} B_1 = P_1 B_1, \quad (E.9)$$
$$P_3(-N\Delta, -j\Delta) = P_3(-h, -j\Delta) = B_1^T P_2(-j\Delta), \quad 0 \le j \le N. \quad (E.10)$$

The relations (E.6)–(E.10) provide the basis for a computation scheme. It should be noted that

$$u^*(t) = -R^{-1} B_a^T P_a x_a(t) = -R^{-1}\Big[(B^T P_{0,0} + \Delta^{-1} P_{0,1}^T)x(t) + \sum_{i=1}^{N}(B^T P_{0,i} + \Delta^{-1} P_{1,i})\,u(t - i\Delta)\Big] \approx -R^{-1}\Big([B^T P_1 + P_2^T(0)]x(t) + \int_{-h}^{0}[B^T P_2(s) + P_3(0, s)]\,u(t+s)\,ds\Big).$$

We remark that if $(A_a, B_a)$ is stabilizable and $(A_a, Q_a^{1/2})$ is detectable, there exists a stabilizing solution $P_a$, which ensures that the resulting approximate closed-loop system is asymptotically stable.
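A rough sketch of this scheme for scalar x and u, assuming SciPy's solve_continuous_are; the data A, B, B1, Q, R, h and the mesh N are arbitrary choices, not from the book.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A, B, B1 = -1.0, 1.0, 0.5          # x' = A x + B u(t) + B1 u(t - h)
Q, R, h, N = 1.0, 1.0, 1.0, 20
D = h / N                          # step size Delta

n = 1 + N                          # augmented state: x, u(t-D), ..., u(t-ND)
Aa = np.zeros((n, n))
Aa[0, 0] = A
Aa[0, N] = B1
Aa[1, 1] = -1.0 / D                # u'(t-D) = [u(t) - u(t-D)] / D (u(t) via Ba)
for i in range(2, n):              # u'(t-iD) = [u(t-(i-1)D) - u(t-iD)] / D
    Aa[i, i - 1] = 1.0 / D
    Aa[i, i] = -1.0 / D
Ba = np.zeros((n, 1)); Ba[0, 0] = B; Ba[1, 0] = 1.0 / D

Qa = np.zeros((n, n)); Qa[0, 0] = Q
Ra = np.array([[R]])
Pa = solve_continuous_are(Aa, Ba, Qa, Ra)

# Recover samples of the distributed gains via (E.6)-(E.8).
P1, P2_0 = Pa[0, 0], Pa[0, 1] / D                  # P1 and P2(0)
K = -np.linalg.solve(Ra, Ba.T @ Pa)                # u*(t) = K x_a(t)
print("P1 =", P1, " P2(0) =", P2_0)
print("closed loop stable:", np.max(np.linalg.eigvals(Aa + Ba @ K).real) < 0)
```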

E.2 Infinite Horizon LQ Control for State Delayed Systems

We can construct an approximate method to obtain $P_1$, $P_2(\cdot)$, and $P_3(\cdot,\cdot)$ in the same manner. It is noted that (6.203)–(6.207) are approximated, for $i \in [1, N]$ and $j \in [1, N]$, by

$$0 = A^T P_1 + P_1 A - P_1 B R^{-1} B^T P_1 + P_2(0) + P_2^T(0) + Q, \quad (E.11)$$

$$0 = -[P_2(-(i-1)\Delta) - P_2(-i\Delta)]/\Delta + A^T P_2(-(i-1)\Delta) + P_3(0, -(i-1)\Delta) - P_1 B R^{-1} B^T P_2(-(i-1)\Delta), \quad (E.12)$$

$$0 = -[P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-i\Delta, -(j-1)\Delta)]/\Delta - [P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-(i-1)\Delta, -j\Delta)]/\Delta - P_2^T(-(i-1)\Delta) B R^{-1} B^T P_2(-(j-1)\Delta) \quad (E.13)$$

with boundary conditions, for $j \in [0, N]$,

$$P_2(-h) = P_1 A_1, \quad (E.14)$$
$$P_3(-h, -j\Delta) = A_1^T P_2(-j\Delta). \quad (E.15)$$

The original system can be approximated by

$$\dot x(t) = Ax(t) + A_1 x(t - N\Delta) + Bu(t), \qquad \dot x(t - i\Delta) = [x(t - (i-1)\Delta) - x(t - i\Delta)]/\Delta, \quad i \in [1, N],$$

which reduces to the following augmented delay-free system

$$\dot x_a(t) = A_a x_a(t) + B_a u(t),$$

where

$$x_a(t) \triangleq \begin{bmatrix} x(t) \\ x(t-\Delta) \\ x(t-2\Delta) \\ \vdots \\ x(t-N\Delta) \end{bmatrix}, \quad A_a \triangleq \begin{bmatrix} A & 0 & 0 & \cdots & A_1 \\ \Delta^{-1}I & -\Delta^{-1}I & 0 & \cdots & 0 \\ 0 & \Delta^{-1}I & -\Delta^{-1}I & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \Delta^{-1}I & -\Delta^{-1}I \end{bmatrix}, \quad B_a \triangleq \begin{bmatrix} B \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Let $P_a$ be the positive semidefinite solution to the following algebraic Riccati equation

$$0 = A_a^T P_a + P_a A_a - P_a B_a R^{-1} B_a^T P_a + Q_a, \quad (E.16)$$

where $Q_a$ and $P_a$ are partitioned as

$$Q_a = \begin{bmatrix} Q & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad P_a = \begin{bmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,N} \\ P_{0,1}^T & P_{1,1} & \cdots & P_{1,N} \\ \vdots & & \ddots & \vdots \\ P_{0,N}^T & P_{1,N}^T & \cdots & P_{N,N} \end{bmatrix}.$$

If one compares (E.16) with the algebraic relations (E.11)–(E.15), the two sets of equations are found to be in direct correspondence. Specifically, we have

$$P_1 = P_{0,0},$$
$$P_2(-i\Delta) = P_{0,i+1}/\Delta, \quad 0 \le i \le N-1,$$
$$P_3(-i\Delta, -j\Delta) = P_{i+1,j+1}/\Delta^2, \quad 0 \le i \le N-1,\ 0 \le j \le N-1,$$
$$P_2(-N\Delta) = P_2(-h) = P_1 A_1 = P_{0,0} A_1,$$
$$P_3(-N\Delta, -j\Delta) = P_3(-h, -j\Delta) = A_1^T P_2(-j\Delta), \quad 0 \le j \le N.$$

By the properties of the algebraic Riccati equation (E.16), we know that there exist $2^{n(N+1)}$ solutions of (E.16) if $(A_a, B_a)$ is controllable and $(A_a, Q_a^{1/2})$ is observable, where n is the system dimension and N is the order of the discretization; among them, only one solution is the positive definite solution that stabilizes the closed-loop system. If the order of the discretization becomes infinite, the number of solutions also becomes infinite, which implies that the number of solutions $P_1$, $P_2(s)$, and $P_3(r, s)$ satisfying (6.203)–(6.205) is also infinite. As mentioned earlier, only one of them yields the condition (2.4) of the resulting Lyapunov functional (6.209), which has not been explicitly explained in the literature.

E.3 Infinite Horizon H∞ Control for State Delayed Systems

Following steps similar to those for the infinite horizon LQ optimal control problem, we can obtain $P_1$, $P_2(\cdot)$, and $P_3(\cdot,\cdot)$ approximately. It is noted that (8.183)–(8.185) are approximated, for $i, j \in [1, N]$, by

$$0 = A^T P_1 + P_1 A + Q - P_1(BB^T - \gamma^{-2}GG^T)P_1 + P_2(0) + P_2^T(0), \quad (E.17)$$

$$0 = -\frac{P_2(-(i-1)\Delta) - P_2(-i\Delta)}{\Delta} + A^T P_2(-(i-1)\Delta) + P_3(0, -(i-1)\Delta) - P_1(BB^T - \gamma^{-2}GG^T)P_2(-(i-1)\Delta), \quad (E.18)$$

$$0 = -\frac{P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-i\Delta, -(j-1)\Delta)}{\Delta} - \frac{P_3(-(i-1)\Delta, -(j-1)\Delta) - P_3(-(i-1)\Delta, -j\Delta)}{\Delta} - P_2^T(-(i-1)\Delta)(BB^T - \gamma^{-2}GG^T)P_2(-(j-1)\Delta) \quad (E.19)$$

with boundary conditions, for $j \in [0, N]$,

$$P_2(-h) = P_1 A_1, \quad (E.20)$$
$$P_3(-h, -j\Delta) = A_1^T P_2(-j\Delta). \quad (E.21)$$

The original system can be approximated by

$$\dot x(t) = Ax(t) + A_1 x(t - N\Delta) + Bu(t) + Gw(t), \qquad \dot x(t - i\Delta) = [x(t-(i-1)\Delta) - x(t-i\Delta)]/\Delta, \quad i \in [1, N],$$

which reduces to the following augmented delay-free system

$$\dot x_a(t) = A_a x_a(t) + B_a u(t) + G_a w(t),$$

where $A_a$ and $B_a$ are defined as in Sect. E.2 and

$$G_a \triangleq \begin{bmatrix} G \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Let $P_a$ be the positive semidefinite solution to the following algebraic Riccati equation

$$0 = A_a^T P_a + P_a A_a - P_a(B_a R^{-1} B_a^T - \gamma^{-2} G_a G_a^T)P_a + Q_a, \quad (E.22)$$

where $Q_a$ and $P_a$ are partitioned as in Sect. E.2. If one dissects (E.22) and compares those algebraic relations with the algebraic relations (E.17)–(E.21), the two sets of equations are found to be in direct correspondence. Specifically, we have

$$P_1 = P_{0,0}, \quad (E.23)$$
$$P_2(-i\Delta) = \frac{P_{0,i+1}}{\Delta}, \quad 0 \le i \le N-1, \quad (E.24)$$
$$P_3(-i\Delta, -j\Delta) = \frac{P_{i+1,j+1}}{\Delta^2}, \quad 0 \le i \le N-1,\ 0 \le j \le N-1, \quad (E.25)$$
$$P_2(-N\Delta) = P_2(-h) = P_1 A_1 = P_{0,0} A_1, \quad (E.26)$$
$$P_3(-N\Delta, -j\Delta) = P_3(-h, -j\Delta) = A_1^T P_2(-j\Delta), \quad 0 \le j \le N. \quad (E.27)$$

Reference

1. Aggarwal JK (1970) Computation of optimal control for time-delay systems. IEEE Trans Autom Control 683–685

Appendix F: Program Codes

Site address: http://extras.springer.com

Subjects
• State feedback guaranteed LQ control for state delayed systems (Theorem 5.5 in Sect. 5.3.1)
• State feedback guaranteed H∞ control for state delayed systems (Theorem 5.12 in Sect. 5.5.1)
• Fixed horizon LQ control for costs with single integral terms for state delayed systems (Theorem 6.8 in Sect. 6.4.2)
• Receding horizon LQ control for costs with single integral terms for state delayed systems (Theorem 6.10 and Corollary 6.6 in Sect. 6.5.2)
• Receding horizon LQ control for costs with double integral terms for state delayed systems (Theorem 6.13 and Corollary 6.9 in Sect. 6.5.3)
• Receding horizon H∞ control for standard costs for input delayed systems (Theorem 8.4 and Corollary 8.3 in Sect. 8.3.2)
• Receding horizon H∞ control for costs with single integral terms for state delayed systems (Theorem 8.8 and Corollary 8.5 in Sect. 8.5.1)

Once you visit the above site, you should first read "Readme.txt", which explains how to use the program codes. All variables and parameters are defined for each subject listed above. Simulation results are provided for simple cases.

Index

A Asymptotically stabilize, 66 Augmented closed-loop system control subsystem, 103, 271 estimation error subsystem, 103, 271

B Banach space Cn,h , 28 Bilinear Matrix Inequalities (BMIs) local solution, 33 NP-hard, 33

C Cascaded-delay system, 110, 141 Characteristic equation, 97, 103 Closed-loop stability, 2, 22, 67 Closed-loop system, 2, 70, 99, 100 Completion lemma, 396 Computation, 19 Congruence transformation, 389 Control gain matrix, 69 Control objectives nominal performance, 2 nominal stability, 3 robust performance, 3 robust stability, 3 Coupled partial differential Riccati equations, 203, 239, 240, 248 Covariance of estimation error, 268

D Delay-dependent stability, 32, 38

Delay-dependent stabilizing controls, 72, 73, 107 Delayed state feedback control, 70 Delay-free linear stochastic system, 267 Delay-free system, 69 Delay-independent stability, 32, 37 Delay-independent stabilizing controls, 70, 72, 107 Detectable, 99 Determinant, 391 Discretization, 43 Discretization numbers, 45 Distributed delay robust stability of state delayed systems Lyapunov–Krasovskii approach, 60 stability of state delayed systems Lyapunov–Krasovskii approach, 60 Lyapunov–Razumikhin approach, 59 Double integral terminal terms, 230, 247, 260, 298, 312, 358, 375 Dual properties, 270, 298 Dynamic output feedback control, 100, 104, 110, 118, 141

E Eigenvalues of the closed-loop system, 103 Elimination lemma, 393 Error dynamics, 99 Estimation estimate, 268, 270, 272, 274, 279, 284 estimation error, 268, 272, 275, 281 orthogonality, 268, 272, 276, 286, 290


422 F Feedback controls dynamic feedback controls, 3 output feedback controls, 3 state feedback controls, 3 static feedback controls, 3 Finite horizon H∞ cost, 322, 327, 354 Finite horizon H∞ performance bound, 321, 323, 326, 334 Finite horizon LQ cost, 191 Fixed horizon finite horizon, 5 infinite horizon, 5 Fixed horizon H∞ control finite horizon, 320, 326, 354, 358 infinite horizon, 325, 331, 363 Fixed horizon LQ control finite horizon, 190, 193, 225, 226, 230 infinite horizon, 190, 198, 203, 235 Fixed horizon LQG control finite horizon, 267, 271, 295, 298 infinite horizon, 270, 274, 302 Fixed terminal states, 190, 212 Free terminal states, 188, 205 Functional Differential Equation (FDE) continuous dependence, 29 existence, 29 solution, 29 uniqueness, 29 G General time-delay systems, 29 Guaranteed cost controls guaranteed H∞ controls, 8, 159, 169 guaranteed LQ controls, 8, 134, 146 Guaranteed H∞ controls, 160 Guaranteed H∞ controls for input delayed systems output feedback standard cost, 165 state feedback predictive cost, 161 standard cost, 163 Guaranteed H∞ controls for state delayed systems output feedback, 182 robust state feedback, 179 state feedback delay-dependent, 172 delay-independent, 170 Guaranteed H∞ performance bound, 160 Guaranteed LQ controls for input delayed systems


optimal control, 322, 325, 327 saddle-point cost, 322 saddle-point solution, 322 worst disturbance, 322, 325, 327 H∞ controls for input delayed system fixed horizon predictive cost, 320 standard cost, 326 receding horizon predictive cost, 333 standard cost, 342 H∞ controls for state delayed systems fixed horizon double integral terminal terms, 358 single integral terminal terms, 354 receding horizon double integral terminal terms, 375 single integral terminal terms, 365 H∞ performance bound finite horizon, 321, 326, 334 infinite horizon, 325 receding horizon, 340, 352, 374, 386 Horizons finite horizon, 5 fixed horizon, 6 infinite horizon, 5 receding horizon, 6 Hurwitz, 67 I Inequalities non-strict inequalities, 394 strict inequalities, 395 Inertia of block Hermitian matrices, 389 Infinite horizon H∞ controls input delayed system



M Matrix inequalities bilinear matrix inequalities (BMIs), 33 linear matrix inequalities (LMIs), 32 Matrix inertia, 389 Matrix inversion, 390 Matrix inversion shift, 391 Maximum allowable delay bound, 75 Maximum allowable time delays, 45 Memoryless state feedback control, 71, 140 Minimum least-squares estimate, 269 Minor feedback loop, 98 Model Predictive Control (MPC), 5 Model transformation, 38 Model uncertainties, 51, 82, 118

424 Models input and measurement delayed systems industrial boiler, 11 inverted pendulum, 9 networked control system, 8 state delayed systems FAST TCP network system, 17 gene regulatory network, 12 HIV pathogenesis system, 18 liquid propellant rocket motor, 16 refining plant, 14

N Negative definite, 390 Non-delayed input, 67 Norm Euclidean vector norm, 25 spectral norm, 25 Nullity, 389 Numerical procedures for infinite horizon controls infinite horizon H∞ control, 415 infinite horizon LQ control, 411, 413

O Observation error, 102 Open-loop unstable systems, 98 Optimal controls finite horizon, 5 infinite horizon, 5 receding horizon, 6 Optimizing controls H∞ controls, 319 LQ controls, 187 LQG controls, 265 Ordinary system, 67 Orthogonal complement, 393 Output controlled, 96 measured, 96 Output feedback stabilizing controls for input delayed systems pure input and measurement delay Smith predictor approach, 96 single input delay dynamic feedback control approach, 100 Luenberger observer approach, 99 Output feedback stabilizing controls for state delayed systems constant delay

Index cascaded-delay system approach, 110, 111 Luenberger observer approach, 102 Lyapunov–Krasovskii approach, 107 Lyapunov–Razumikhin approach, 104, 106 time-varying delay differentiable with bounded derivative, 117 general, 115 P Performance criteria H∞ , 5, 8 LQ, 5, 8 LQG, 5 Positive definite, 390 Predicted cost, 136 Predicted state, 66, 136 Program codes, 419 Pure input delay, 66, 96 Pure measurement delay, 96 R Razumikhin theorem globally uniformly asymptotically stable, 30 uniformly asymptotically stable, 30 uniformly stable, 30 Receding horizon H∞ controls, 333, 364 Receding horizon H∞ cost, 333, 342, 364 Receding Horizon Control (RHC) advantages, 6 disadvantages, 7 Receding horizon LQ controls, 205, 236 Receding horizon LQ cost, 205, 214, 239 Reduction transformation, 69, 98 Reduction transformation approach, 69 Reference input, 97 Robust output feedback stabilizing controls for state delayed systems constant delay Ccascaded-delay system approach, 122 Lyapunov–Razumikhin approach, 119 time-varying delay differentiable with bounded derivative, 130 general, 127 Robust stability of state delayed systems constant delay

Index discretized state approach, 53 Lyapunov–Krasovskii approach, 52 time-varying delay differentiable with bounded derivative, 56, 57 general, 55, 57 Robust state feedback stabilizing controls for state delayed systems constant delay discretized state approach, 85 Lyapunov-Krasovskii approach, 83 time-varying delay differentiable with bounded derivative, 91 general, 89

S Schur complement, 389 Single input delay, 67, 98 Single integral terminal terms, 226, 239, 257, 295, 306, 354, 365 Size of the control, 114 S -procedure, 53, 394 Stability asymptotically stable, 29 globally uniformly asymptotically stable, 30 stable, 29 uniformly asymptotically stable, 29 uniformly stable, 29 Stability of state delayed systems constant delay discretized state approach, 43 Lyapunov–Krasovskii approach, 40, 41 Lyapunov–Razumikhin approach, 37, 39 distributed delay Lyapunov–Krasovskii approach, 60 Lyapunov–Razumikhin approach, 59 time-varying delay differentiable with bounded derivative, 47, 50 general, 45, 49 Stabilizable, 67 Stabilize, 66 Stabilizing controls

425 output feedback stabilizing controls, 4, 96 state feedback stabilizing controls, 4, 66 State delayed systems, 36, 70, 146, 169 State delayed system with model uncertainties, 82 State feedback stabilizing controls for input delayed systems pure input delay state predictor approach, 66 single input delay reduction transformation approach, 69 State feedback stabilizing controls for state delayed systems constant delay discretized state approach, 76 Lyapunov-Krasovskii approach, 72 Lyapunov-Razumikhin approach, 71 time-varying delay differentiable with bounded derivative, 81 general, 79 State predictor, 66, 136, 161, 189, 266, 320 State predictor approach, 66 Sylvester’s law of inertia, 389 Systems nominal system, 2 uncertain system, 2

T Terminal cost monotonicity, 208, 219, 243, 251, 335, 346, 368, 379 Terminal weighting functional, 191, 214, 239, 247, 321, 326 Tracking performance, 2 Trajectory, 29 Trivial solution, 29

U Uncertainty block, 51

Z Zero-mean white Gaussian input noise, 266 Zero-mean white Gaussian measurement noise, 266