Optimal Control of Dynamic Systems Driven by Vector Measures: Theory and Applications [1 ed.] 3030821382, 9783030821388

English Pages 335 [328] Year 2021

Table of contents :
Preface
Contents
1 Mathematical Preliminaries
1.1 Introduction
1.2 Vector Space
1.3 Normed Space
1.4 Banach Space
1.5 Measures and Measurable Functions
1.6 Modes of Convergence and Lebesgue Integral
1.6.1 Modes of Convergence
1.6.2 Lebesgue Integral
1.7 Selected Results From Measure Theory
1.8 Special Hilbert and Banach Spaces
1.8.1 Hilbert Spaces
1.8.2 Special Banach Spaces
1.9 Metric Space
1.10 Banach Fixed Point Theorems
1.11 Frequently Used Results From Analysis
1.12 Bibliographical Notes
2 Linear Systems
2.1 Introduction
2.2 Representation of Solutions for TIS
2.2.1 Classical System Models
2.2.2 Impulsive System Models
2.3 Representation of Solutions for TVS
2.3.1 Classical System Models
2.3.2 Measure Driven System Models
2.3.3 Measure Induced Structural Perturbation
2.3.4 Measure Driven Control Systems
2.4 Bibliographical Notes
3 Nonlinear Systems
3.1 Introduction
3.2 Fixed Point Theorems for Multi-Valued Maps
3.3 Regular Systems (Existence of Solutions)
3.4 Impulsive Systems (Existence of Solutions)
3.4.1 Classical Impulsive Models
3.4.2 Systems Driven by Vector Measures
3.4.3 Systems Driven by Finitely Additive Measures
3.5 Differential Inclusions
3.6 Bibliographical Notes
4 Optimal Control: Existence Theory
4.1 Introduction
4.2 Regular Controls
4.3 Relaxed Controls
4.4 Impulsive Controls I
4.5 Impulsive Controls II
4.6 Structural Control
4.7 Differential Inclusions (Regular Controls)
4.8 Differential Inclusions (Measure-Valued Controls)
4.9 Systems Controlled by Discrete Measures
4.10 Existence of Optimal Controls
4.11 Bibliographical Notes
5 Optimal Control: Necessary Conditions of Optimality
5.1 Introduction
5.2 Relaxed Controls
5.2.1 Discrete Control Domain
5.3 Regular Controls
5.4 Transversality Conditions
5.4.1 Necessary Conditions Under State Constraints
5.5 Impulsive and Measure-Valued Controls
5.5.1 Signed Measures as Controls
5.5.2 Vector Measures as Controls
5.6 Convergence Theorem
5.7 Implementability of Necessary Conditions of Optimality
5.7.1 Discrete Measures
5.7.2 General Measures
5.8 Structural Controls
5.9 Discrete Measures with Variable Supports as Controls
5.10 Bibliographical Notes
6 Stochastic Systems Controlled by Vector Measures
6.1 Introduction
6.2 Conditional Expectations
6.3 SDE Based on Brownian Motion
6.3.1 SDE Driven by Vector Measures (Impulsive Forces)
6.4 SDE Based on Poisson Random Processes
6.5 Optimal Relaxed Controls
6.5.1 Existence of Optimal Controls
6.5.2 Necessary Conditions of Optimality
6.6 Regulated (Filtered) Impulsive Controls
6.6.1 Application to Special Cases
6.7 Unregulated Measure-Valued Controls
6.7.1 An Application
6.8 Fully Observed Optimal State Feedback Controls
6.8.1 Existence of Optimal State Feedback Laws
6.8.2 Necessary Conditions of Optimality
6.9 Partially Observed Optimal Feedback Controls
6.9.1 Existence of Optimal Feedback Laws
6.9.2 Necessary Conditions of Optimality
6.10 Bellman's Principle of Optimality
6.11 Bibliographical Notes
7 Applications to Physical Examples
7.1 Numerical Algorithms
7.1.1 Numerical Algorithm I
7.1.2 Numerical Algorithm II
7.2 Examples of Physical Systems
7.2.1 Cancer Immunotherapy
7.2.2 Geosynchronous Satellites
7.2.3 Prey-Predator Model
7.2.4 Stabilization of Building Maintenance Units
7.2.5 An Example of a Stochastic System
Bibliography
Index

N.U. Ahmed Shian Wang

Optimal Control of Dynamic Systems Driven by Vector Measures Theory and Applications


N. U. Ahmed University of Ottawa Ottawa, ON, Canada

Shian Wang University of Minnesota Minneapolis, MN, USA

ISBN 978-3-030-82138-8
ISBN 978-3-030-82139-5 (eBook)
https://doi.org/10.1007/978-3-030-82139-5

Mathematics Subject Classification: 34H05, 49J15, 49K15, 49M05, 37N35, 93C15

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

In memory of my parents, uncles, aunts, sisters, brothers, and my wife Feroza who gave so much. Dedicated to my sons: Jordan and Schockley; daughters: Pamela, Rebeka, Mona, and Lisa; and my grandchildren: Reynah-Sofia, Maximus, Achilles, Eliza, Pearl, Austin, Rio, Kira, and Jazzmine. Dedicated to my parents Guihua Xu and Shiwu Wang, and my sister Man Wang.

Preface

There are many prominent areas of systems and control theory that include systems governed by linear and nonlinear ordinary and functional differential equations, stochastic differential equations, partial differential equations including their stochastic counterparts, and, above all, systems governed by abstract differential equations and inclusions. The remarkable advance of this field is due to the unprecedented interest, interaction, and contribution of pure and applied mathematics and physical and engineering sciences. We strongly believe that this interaction will continue simply because there are many unsolved challenging problems and emerging new ones. Such problems are of great interest to mathematicians, scientists, and engineers. This book is concerned with the development of optimal control theory for finite dimensional systems governed by deterministic as well as stochastic differential equations (which may be subject to impulsive forces) controlled by vector measures. Impulsive controls are special cases of controls determined by vector measures. The book has two major parts. The first part deals with deterministic dynamic systems including differential inclusions controlled by vector measures; the second part is concerned with stochastic dynamic systems also controlled by vector measures. For non-convex control problems, probability measure valued functions known as relaxed controls are used. We consider the question of existence of optimal controls and the necessary conditions of optimality whereby optimal control policies can be determined. In recent years, significant applications of systems and control theory have been witnessed in areas as diverse as physical sciences, engineering, biological sciences, social sciences, management, and financial engineering, among many others. 
In particular, the most interesting applications have taken place in areas such as aerospace (civilian, military), space structures (space station, communication satellites), suspension bridges, artificial heart, immunology, power system, hydrodynamics, plasma and magneto hydrodynamics, computer communication networks, and intelligent transportation systems. The importance of applications whereby a theory is tested and new theories are developed is clearly recognized. This book is


devoted mainly towards the development of theory of measure-driven differential equations and their optimal control by vector measures. This book contains seven chapters. In Chap. 1, we present most of the basic and important results from analysis required to read the book smoothly. The contents of this chapter will also serve as a quick reference for readers familiar with the subject and a valuable guide for those not so familiar. It contains many relevant results from measure theory and abstract functional analysis used in the text. Chapters 2 and 3 are devoted respectively to linear and nonlinear dynamic systems subject to forces determined by vector measures covering existence and uniqueness of solutions and regularity properties thereof. Chapter 4 deals with the question of existence of optimal controls from the class of regular controls (vector valued measurable functions), relaxed controls (measure valued functions), and controls determined by vector measures. We consider both fully and partially observed control problems. In Chap. 5, we present necessary conditions of optimality for all the control problems considered in Chap. 4. In Chap. 6, we consider systems governed by stochastic differential equations controlled by vector measures and present numerous results on existence of optimal controls and necessary conditions of optimality. Here, we consider also both fully and partially observed control problems. Corresponding to each of the necessary conditions of optimality given in Chaps. 5 and 6, we present important convergence results guaranteeing convergence of any algorithm based on the necessary conditions of optimality. In Chap. 7, we present some practical examples of application with numerical results to demonstrate the applicability of the theories developed in this book. 
The authors hope that this book will inspire young mathematicians and mathematically oriented scientists and engineers to further advance the theory and applications of dynamic systems driven and controlled by vector measures. Finally, we would like to express our appreciation to Dr. Remi Lodh, the Mathematics Editor at Springer Verlag, for his continued support and excellent cooperation throughout the publishing process.

Ottawa, ON, Canada
Minneapolis, MN, USA

N. U. Ahmed Shian Wang

Table 1 List of Notations

C(I, R^n) : Class of continuous functions defined on the interval I and taking values in R^n, p. 2
B_X : σ-algebra of Borel subsets of the set X, p. 4
M(Ω, X) : Class of measurable functions from Ω to X, p. 5
(Ω, B, μ) : A σ-finite measure space, p. 5
B_∞(I, R^n) : Banach space of bounded measurable functions from I to R^n, p. 5
M(Ω, B, μ) ≡ M : Class of μ-measurable real-valued functions defined on Ω, p. 6
MCT : Monotone Convergence Theorem, p. 11
LDCT : Lebesgue dominated convergence theorem, p. 13
LBCT : Lebesgue bounded convergence theorem, p. 14
(Ω, F, P) : Complete probability space, p. 15
(Ω, F, F_{t≥0}, P) : Filtered probability space, p. 16
L_p(Ω, B, μ), 1 ≤ p ≤ ∞ : L_p spaces, p. 18
(L_p(Ω, μ))* : Dual of the Banach space L_p(Ω, μ), p. 20
‖μ‖ : Total variation norm of the measure μ, p. 21
AC(I, R^n) : Class of absolutely continuous functions from I to R^n, p. 23
Fix(F) : Set of fixed points of the map F, p. 26
E* : Continuous (topological) dual of any Banach space E, p. 29
coB : Convex hull of any set B, p. 30
clcoB : Closure of the convex hull of the set B, p. 30
Ext(K) : Set of extreme points of the set K, p. 30
RNP : Radon-Nikodym property, p. 31
RND : Radon-Nikodym derivative, p. 116
Σ_I : Sigma algebra of subsets of the set I, p. 31
M_ca(Σ_I, R^n) : Space of countably additive bounded vector measures defined on Σ_I and taking values in R^n, p. 31
S_p(F) : Class of L_p-selections of the multifunction F, p. 32
cc(R^n) : Class of nonempty closed convex subsets of R^n, p. 33
cb(E) : Class of closed bounded subsets of E, p. 33
SISO : Single-input single-output, p. 35
MIMO : Multi-input multi-output, p. 36
TIS : Time-invariant system, p. 38
TVS : Time-variant system, p. 38
PWC(I, R^n) : Class of piecewise continuous and bounded functions defined on I and taking values in R^n, p. 42
|μ|(σ) : Variation of the measure μ on the set σ, p. 48
M(n × m) : Class of n × m matrices with real entries, p. 49
B_r : Closed ball of radius r centered at the origin, p. 64
PWC_r(I, R^n) : Class of right-continuous functions having left limits, p. 69
A_I : Algebra of subsets of the set I, p. 76
U_d : Set of admissible controls, p. 79
M_ad : Set of admissible control measures, p. 80
usc : Upper semi-continuity, p. 85
lsc : Lower semi-continuity, p. 85
c(L_1(I, R^n)) : Class of nonempty closed subsets of L_1(I, R^n), p. 90
L_∞(I, R^m) : Class of essentially bounded Lebesgue measurable functions defined on I and taking values from R^m, p. 93
U_ad ≡ M(I, U) : Class of measurable functions with values in U, p. 93
M_1(U) : Space of probability measures on U, p. 99
L_∞^w(I, M(U)) : Class of weak-star measurable functions defined on I and taking values from the space of Borel measures M(U), p. 99
U_ad^rel : Unit sphere in L_∞^w(I, M(U)) chosen as the class of relaxed controls, p. 99
M_bfa(Σ_{U×I}, R^m) : Space of finitely additive R^m-valued vector measures on the algebra Σ_{U×I} of subsets of the set U × I, p. 105
L(R^m, R^n) : Vector space of bounded linear operators from R^m to R^n, p. 106
≅ : Symbol of isomorphism, p. 116
D_ad : Class of admissible controls, p. 121
C(U) : Linear space of real-valued continuous functions on U, endowed with the supnorm topology, p. 130
M(U) : Space of countably additive bounded signed measures on the Borel sigma field B_U of U having bounded total variation, p. 130
(M_bfa(Σ_I, R^n))* : Topological dual of M_bfa(Σ_I, R^n), p. 147
A*(t) : Adjoint of the matrix-valued function A(t), p. 148
B*(dt) : Adjoint of the matrix-valued set function B(dt), p. 148
(Ω, F, P) : Complete probability space, p. 168
F_{t≥0} : Increasing family of complete subsigma algebras of the sigma algebra F, p. 168
(Ω, F, F_{t≥0}, P) : Filtered probability space, p. 168
EX : Expected value of the random variable X, p. 168
L(B) : Itô integral of B with respect to standard Brownian motion, p. 172
L_2(F_0, R^n) : Hilbert space of R^n-valued F_0-measurable random elements having finite second moments, p. 173
H_n ≡ L_2(Ω, R^n) : Hilbert space of R^n-valued random variables, p. 177
B_∞^a(I, H_n) : Class of F_t-adapted R^n-valued stochastic processes with finite second moments, p. 177
F_ad : Class of admissible operators, p. 211
N(ϑ) : Null set of the measure ϑ, p. 218
L_∞(I, L(R^m, R^n)) : Space of essentially bounded Borel measurable functions with values in L(R^m, R^n), p. 223

Contents

1 Mathematical Preliminaries ..... 1
1.1 Introduction ..... 1
1.2 Vector Space ..... 1
1.3 Normed Space ..... 2
1.4 Banach Space ..... 3
1.5 Measures and Measurable Functions ..... 3
1.6 Modes of Convergence and Lebesgue Integral ..... 6
1.6.1 Modes of Convergence ..... 7
1.6.2 Lebesgue Integral ..... 8
1.7 Selected Results From Measure Theory ..... 12
1.8 Special Hilbert and Banach Spaces ..... 19
1.8.1 Hilbert Spaces ..... 19
1.8.2 Special Banach Spaces ..... 21
1.9 Metric Space ..... 25
1.10 Banach Fixed Point Theorems ..... 30
1.11 Frequently Used Results From Analysis ..... 33
1.12 Bibliographical Notes ..... 38

2 Linear Systems ..... 39
2.1 Introduction ..... 39
2.2 Representation of Solutions for TIS ..... 43
2.2.1 Classical System Models ..... 44
2.2.2 Impulsive System Models ..... 46
2.3 Representation of Solutions for TVS ..... 47
2.3.1 Classical System Models ..... 53
2.3.2 Measure Driven System Models ..... 55
2.3.3 Measure Induced Structural Perturbation ..... 58
2.3.4 Measure Driven Control Systems ..... 63
2.4 Bibliographical Notes ..... 64

3 Nonlinear Systems ..... 67
3.1 Introduction ..... 67
3.2 Fixed Point Theorems for Multi-Valued Maps ..... 69
3.3 Regular Systems (Existence of Solutions) ..... 71
3.4 Impulsive Systems (Existence of Solutions) ..... 78
3.4.1 Classical Impulsive Models ..... 79
3.4.2 Systems Driven by Vector Measures ..... 83
3.4.3 Systems Driven by Finitely Additive Measures ..... 89
3.5 Differential Inclusions ..... 98
3.6 Bibliographical Notes ..... 108

4 Optimal Control: Existence Theory ..... 109
4.1 Introduction ..... 109
4.2 Regular Controls ..... 109
4.3 Relaxed Controls ..... 116
4.4 Impulsive Controls I ..... 121
4.5 Impulsive Controls II ..... 124
4.6 Structural Control ..... 129
4.7 Differential Inclusions (Regular Controls) ..... 132
4.8 Differential Inclusions (Measure-Valued Controls) ..... 136
4.9 Systems Controlled by Discrete Measures ..... 141
4.10 Existence of Optimal Controls ..... 144
4.11 Bibliographical Notes ..... 149

5 Optimal Control: Necessary Conditions of Optimality ..... 151
5.1 Introduction ..... 151
5.2 Relaxed Controls ..... 152
5.2.1 Discrete Control Domain ..... 160
5.3 Regular Controls ..... 162
5.4 Transversality Conditions ..... 165
5.4.1 Necessary Conditions Under State Constraints ..... 168
5.5 Impulsive and Measure-Valued Controls ..... 169
5.5.1 Signed Measures as Controls ..... 170
5.5.2 Vector Measures as Controls ..... 175
5.6 Convergence Theorem ..... 177
5.7 Implementability of Necessary Conditions of Optimality ..... 179
5.7.1 Discrete Measures ..... 179
5.7.2 General Measures ..... 180
5.8 Structural Controls ..... 183
5.9 Discrete Measures with Variable Supports as Controls ..... 185
5.10 Bibliographical Notes ..... 195

6 Stochastic Systems Controlled by Vector Measures ..... 197
6.1 Introduction ..... 197
6.2 Conditional Expectations ..... 198
6.3 SDE Based on Brownian Motion ..... 200
6.3.1 SDE Driven by Vector Measures (Impulsive Forces) ..... 207
6.4 SDE Based on Poisson Random Processes ..... 212
6.5 Optimal Relaxed Controls ..... 216
6.5.1 Existence of Optimal Controls ..... 218
6.5.2 Necessary Conditions of Optimality ..... 223
6.6 Regulated (Filtered) Impulsive Controls ..... 231
6.6.1 Application to Special Cases ..... 241
6.7 Unregulated Measure-Valued Controls ..... 242
6.7.1 An Application ..... 247
6.8 Fully Observed Optimal State Feedback Controls ..... 248
6.8.1 Existence of Optimal State Feedback Laws ..... 249
6.8.2 Necessary Conditions of Optimality ..... 252
6.9 Partially Observed Optimal Feedback Controls ..... 253
6.9.1 Existence of Optimal Feedback Laws ..... 253
6.9.2 Necessary Conditions of Optimality ..... 258
6.10 Bellman's Principle of Optimality ..... 267
6.11 Bibliographical Notes ..... 273

7 Applications to Physical Examples ..... 275
7.1 Numerical Algorithms ..... 276
7.1.1 Numerical Algorithm I ..... 278
7.1.2 Numerical Algorithm II ..... 279
7.2 Examples of Physical Systems ..... 281
7.2.1 Cancer Immunotherapy ..... 281
7.2.2 Geosynchronous Satellites ..... 283
7.2.3 Prey-Predator Model ..... 288
7.2.4 Stabilization of Building Maintenance Units ..... 296
7.2.5 An Example of a Stochastic System ..... 305

Bibliography ..... 311
Index ..... 317

Chapter 1

Mathematical Preliminaries

1.1 Introduction

To fully describe the state of a process, natural or artificial, at any point of time, one may need to quantify a multiplicity of variables, which may be arranged in any convenient and consistent order and called a vector. This is particularly important for a correct and complete characterization of the temporal evolution of the state of dynamic systems. This makes us interested in vectors and vector spaces. In this chapter we present some relevant results from real analysis, measure theory, and functional analysis frequently used in later chapters of this book. In the first few sections, we present some introductory material from real analysis and from measure and integration. This is intended to familiarize the reader with the basic notation and terminology used throughout the book. It will also help readers, especially students from science and engineering, who may not have a formal background in mathematical analysis. For better readability, a list of notations frequently used in the book is presented in Table 1.

1.2 Vector Space

We present here the most important axioms of a vector space. Let X denote an arbitrary set, and let F ≡ R/C denote the field of real or complex numbers. The set X is said to be a vector space over the field F if it satisfies the following two basic properties:

(i) : if x ∈ X and α ∈ F, then αx ∈ X, (1.1)

(ii) : if x, y ∈ X, then x + y ∈ X. (1.2)


For example,

X ≡ R^n ≡ { x = (x_1, x_2, ..., x_n)^T : x_i ∈ R, i = 1, 2, ..., n }

is the set of all ordered n-tuples, called a real vector space or a vector space of dimension n over the field of real numbers R. A simple example is the daily production of a firm that produces n distinct products, which may be quantified by a vector x ∈ R_+^n ⊂ R^n, where R_+^n denotes the positive orthant of R^n. This may vary from day to day. Letting m denote the vector of daily mean production of n distinct goods and y the vector of actual daily production, one may consider the vector x = y − m, which of course lies in R^n. The elements of this vector can take positive as well as negative values.
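The production example above can be made concrete with a short numerical sketch; the production figures, the choice of NumPy, and the variable names below are invented for illustration:

```python
import numpy as np

# Hypothetical daily production of n = 3 distinct goods over four days
# (rows = days, columns = goods); the figures are invented.
y = np.array([[10.0, 5.0, 2.0],
              [12.0, 4.0, 3.0],
              [ 9.0, 6.0, 2.5],
              [11.0, 5.0, 2.5]])

m = y.mean(axis=0)  # vector of daily mean production, in R^n_+
x = y - m           # deviation vectors x = y - m, lying in R^n

# Entries of x may be negative, so x lives in R^n, not merely R^n_+:
print(bool((x < 0).any()))  # -> True

# Closure properties (1.1)-(1.2): scaling and adding stay inside R^3.
v = x[0] + x[1]
w = -2.5 * x[2]
assert v.shape == w.shape == (3,)
```

Each row of `x` is an element of the vector space R^3, and the two closure checks at the end are exactly properties (1.1) and (1.2).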

1.3 Normed Space

A normed space is a vector space X furnished with a norm N(·) ≡ ‖·‖. The norm N is a real-valued function defined on X which satisfies the following properties:

(i) : ‖x‖ ≥ 0, ∀ x ∈ X, (1.3)

(ii) : ‖x‖ = 0 if and only if x = 0, (1.4)

(iii) : ‖αx‖ = |α| ‖x‖, ∀ x ∈ X, α ∈ F, (1.5)

(iv) : ‖x + y‖ ≤ ‖x‖ + ‖y‖, ∀ x, y ∈ X. (1.6)

A few examples of frequently used normed vector spaces are given below:

(1) R^n with ‖x‖ ≡ (∑_{i=1}^n x_i^2)^{1/2};
(2) ℓ_p^n, 1 ≤ p < ∞, with norm given by ‖x‖ ≡ (∑_{i=1}^n |x_i|^p)^{1/p}; the sequence space ℓ_p, with norm ‖x‖ = (∑_{i=1}^∞ |x_i|^p)^{1/p};
(3) Let C(I, R^n) denote the class of all continuous functions f defined on the interval I and taking values in R^n. Furnished with the norm ‖f‖ ≡ sup{‖f(t)‖_{R^n}, t ∈ I}, it is a normed vector space;


(4) L_p(I), 1 ≤ p < ∞, with ‖f‖ ≡ (∫_I |f(t)|^p dt)^{1/p}; and L_p(Ω) with norm

‖f‖ ≡ (∫_Ω |f(ξ)|^p dξ)^{1/p},

where Ω ⊂ R^n;
(5) Weighted L_p spaces, denoted by L_p(Ω, ρ), where ρ is a nonnegative function defined on Ω. The norm for this space is given by

‖f‖ ≡ (∫_Ω |f(ξ)|^p ρ(ξ) dξ)^{1/p}. (1.7)

The most commonly used function spaces are C(I, R^n) and L_p(I, R^n), p = 1, p = 2, and p = ∞.
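The finite dimensional norms in examples (1) and (2) lend themselves to a quick numerical check of properties (i)–(iv); the following sketch, with invented sample vectors, computes ℓ_p norms and verifies homogeneity and the triangle inequality:

```python
import numpy as np

def lp_norm(x, p):
    """l_p norm on R^n: (sum_i |x_i|^p)^(1/p); p = inf gives max_i |x_i|."""
    if p == np.inf:
        return float(np.max(np.abs(x)))
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

x = np.array([3.0, -4.0, 12.0])
y = np.array([1.0, 2.0, -2.0])

print(lp_norm(x, 2))  # Euclidean norm of example (1): sqrt(9+16+144) = 13.0

for p in (1, 2, np.inf):
    # (iii) homogeneity and (iv) the triangle inequality
    assert abs(lp_norm(-2.0 * x, p) - 2.0 * lp_norm(x, p)) < 1e-12
    assert lp_norm(x + y, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-12
```

The p = ∞ branch is the limiting case sup-norm, which corresponds to the supremum norm on C(I, R^n) in example (3) when the function is sampled at finitely many points.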

1.4 Banach Space

Definition 1.4.1 (Cauchy Sequence) Let X be a normed space with norm denoted by ‖·‖. A sequence {x_n} ⊂ X is said to be a Cauchy sequence if

lim_{n→∞} ‖x_{n+p} − x_n‖ = 0 for every p ≥ 1.

Definition 1.4.2 (Banach Space) A normed space X is said to be complete if every Cauchy sequence has a limit. A complete normed space is called a Banach space. The vector spaces R^n and ℓ_p^n, 1 ≤ p ≤ ∞, with the norms as defined in the previous section, are finite dimensional Banach spaces. The spaces {ℓ_p, 1 ≤ p ≤ ∞} and {L_p(I), 1 ≤ p ≤ ∞} are infinite dimensional Banach spaces. The space of continuous functions defined on any finite interval I and endowed with the supremum norm, ‖f‖ ≡ sup{|f(t)|, t ∈ I}, is an example of an infinite dimensional Banach space. More examples will be seen later.
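Definition 1.4.1 can be illustrated with a concrete sequence: the partial sums x_n = ∑_{k=1}^n 1/k² form a Cauchy sequence in R, since sup_{p≥1} |x_{n+p} − x_n| = ∑_{k>n} 1/k² ≤ 1/n → 0. A minimal numerical sketch, with the sample values of n and p chosen purely for illustration:

```python
def partial_sum(n):
    """x_n = sum_{k=1}^n 1/k^2; (x_n) is a Cauchy sequence in R."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

# Tail bound: sum_{k>n} 1/k^2 < 1/n, so |x_{n+p} - x_n| <= 1/n for all
# p >= 1, which is exactly the Cauchy criterion of Definition 1.4.1.
for n in (10, 100, 1000):
    for p in (1, 5, 50):
        assert abs(partial_sum(n + p) - partial_sum(n)) <= 1.0 / n

# Completeness of R guarantees the limit exists; here it is pi^2 / 6.
print(partial_sum(1000))
```

Completeness is the point of Definition 1.4.2: in R every such Cauchy sequence converges, whereas in an incomplete space (e.g., the rationals Q with the same norm) the limit may fall outside the space.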

1.5 Measures and Measurable Functions

Lebesgue integration stands out as one of the most monumental discoveries of the mathematics of the twentieth century. There are functions which are integrable in the sense of Lebesgue but whose Riemann integrals are not defined. Later we present the basic spirit of Lebesgue integration. For a detailed study, the reader is referred to the celebrated books on measure and integration indicated in the bibliographical notes.


For any abstract set , let A denote the class of all subsets of the set  satisfying the following conditions: (1) : empty set ∅ ∈ A, (2) : A ∈ A ⇒ A ∈ A where A =  \ A, (3) : A, B ∈ A ⇒ A ∪ B ∈ A, A ∩ B ∈ A. In other words, the class A is closed under complementation, finite union, and finite intersections. In this case, A is called an algebra or a field of subsets of the set . It is also called the finitely additive class. It is clear from (1) and (2) that  ∈ A. Let B denote the class of subsets of the same set  satisfying the following properties: (1) : ∅ ∈ B, (2) : A ∈ B ⇒ A ∈ B,

(3): A_m ∈ B, m ≥ 1 ⇒ ∪_{m≥1} A_m ∈ B.

In this case, the class B is closed under countable unions and intersections. It is called a σ-algebra or a sigma-field of subsets of the set Ω. Clearly, every σ-algebra is also an algebra. In this book we use primarily the sigma algebra B and occasionally the algebra A. As we shall see later, the class A is the proper domain for finitely additive measures, while the class B is the correct domain for countably additive measures. We use these in the study of differential equations driven by control measures. The class of Borel sets B is the minimal completely additive class of sets containing all closed subsets of Ω. In fact, the class of Borel sets can be constructed by repeated operations of countable union and intersection of closed or open sets in Ω. For an excellent account of this, the reader is referred to Munroe [85]. The pair (Ω, B) is called a (Borel) measurable space. Now we are prepared to introduce (positive) measures. A set function μ : B −→ [0, ∞] is said to be a measure if for every D ∈ B, μ(D) is defined and takes values from [0, ∞]; for every D_1, D_2 ∈ B with D_1 ⊂ D_2, we have μ(D_1) ≤ μ(D_2); and μ(∅) = 0. The measure μ is said to be countably additive if for any pairwise disjoint sequence {A_i}_{i≥1} of B measurable sets,

μ( ∪_{i≥1} A_i ) = Σ_{i≥1} μ(A_i).
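As a quick numerical aside (not part of the text; the particular interval family is an assumption chosen for the sketch), countable additivity can be observed for the Lebesgue measure λ on the pairwise disjoint intervals A_i = [2^{-i}, 2^{-(i-1)}), whose union is (0, 1):

```python
# Countable additivity of Lebesgue measure (lambda = interval length) for the
# pairwise disjoint sets A_i = [2^-i, 2^-(i-1)), i >= 1, with union (0, 1):
# the partial sums of lambda(A_i) converge to lambda((0, 1)) = 1.

def length(a, b):
    """Lebesgue measure of the interval [a, b)."""
    return b - a

def partial_sum(n):
    """Sum of lambda(A_i) over the first n intervals."""
    return sum(length(2.0 ** -i, 2.0 ** -(i - 1)) for i in range(1, n + 1))

print(partial_sum(10))   # 1 - 2**-10 = 0.9990234375
print(partial_sum(50))   # ~1.0
```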


In contrast, a measure μ defined on the algebra A is said to be finitely additive if the identity holds only for unions of finitely many disjoint sets. Loosely speaking, if Ω ⊆ R^n and D ∈ B ≡ B_Ω, the volume (e.g., length for n = 1; area for n = 2) of the set D is its Lebesgue measure. We denote this by λ(D) = Vol(D) and call λ the Lebesgue measure. For a rigorous definition and construction of Lebesgue measure see any of the books on measure theory, in particular the books by Munroe [85], Halmos [69], and Hewitt and Stromberg [71]. The standard method of construction of any countably additive nonnegative measure involves, to start with, (i) a nonnegative set function vanishing on the empty set, (ii) a sequential covering class of subsets of the space Ω, such as the class of all open subsets of Ω, (iii) construction of an outer measure, (iv) Caratheodory's characterization of measurable sets, and finally (v) restriction of the outer measure to the class of Caratheodory measurable sets. This gives a measure corresponding to any given set function. In fact, in this way one can put any measure μ on the measurable space (Ω, B), turning it into a measure space, written (Ω, B, μ). If μ(Ω) < ∞, the measure space (Ω, B, μ) is called a finite measure space, and if μ(Ω) = 1, it is called a probability space. For Ω = R^n, one can construct Lebesgue measure following the same series of steps (i)–(v) as described above. In this case, for the sequential covering class one can choose the open cubes in R^n and, for the nonnegative set function, one can use the volume of cubes. In this way one obtains the Lebesgue outer measure and finally the Lebesgue measure by restricting the outer measure to the class of Caratheodory measurable sets. Let X ≡ R^n, and let B_X denote the σ-algebra of Borel subsets of the set X. A function f : Ω → X is said to be a measurable function (Borel measurable map) if for every B ∈ B_X,

{ω ∈ Ω : f(ω) ∈ B} ≡ f⁻¹(B) ∈ B_Ω.
We denote this class of functions by M(Ω, X). Sometimes this is also denoted by L_0(Ω, X). One can show that this is a real linear vector space in the sense that (1): f ∈ M(Ω, X) ⇒ αf ∈ M(Ω, X) for all α ∈ R; (2): f_1, f_2 ∈ M(Ω, X) ⇒ f_1 + f_2 ∈ M(Ω, X). Note that the space of real-valued measurable functions, M(Ω, R), is an algebra in the sense that the point-wise product of any two real-valued measurable functions is also a real-valued measurable function. Clearly, the class M(Ω, X) is quite general and includes continuous as well as discontinuous functions. Indeed, if both Ω and X are metric spaces, a function f : Ω −→ X is said to be continuous if the inverse image of every open set in X is an open set in Ω. Thus, open sets being measurable sets, continuous functions are also measurable. Since every open or closed set, and countable unions and intersections of such sets, are Borel sets by definition, continuous functions constitute a smaller subclass of the class of measurable functions.


Now we can present some examples of infinite dimensional spaces. A measure space (Ω, B, μ) is called σ-finite if there exists a countable sequence of disjoint sets {B_n} ∈ B such that μ(B_n) < ∞ for each n ∈ N and Ω = ∪_{n=1}^∞ B_n, while μ(Ω) = ∞. A class of linear spaces occasionally used is denoted by L_p^{loc}(Ω, B, μ), 1 ≤ p < ∞, where (Ω, B, μ) is a sigma-finite measure space. It consists of B-measurable functions which are only locally p-th power integrable. That is, for any set D ⊂ Ω with D ∈ B and μ(D) < ∞, we have ∫_D |f|^p dμ < ∞.

This is not a normed space and hence not a Banach space. But it can be given a countable family of semi-norms with respect to which it becomes a Fréchet space [59]. Using the semi-norms one can introduce a metric with respect to which it becomes a complete metric space. Here we are not interested in this space. The class of functions for which ∫_Ω |f(ω)|^p dμ < ∞ is endowed with the norm (topology) given by

‖f‖ ≡ ( ∫_Ω |f|^p dμ )^{1/p}

and denoted by L_p(Ω, B, μ). The vector spaces ℓ_p, C(I, R^n), C(Ω, R^n), L_p(Ω, B, μ), 1 ≤ p ≤ ∞, with norms as defined above, are infinite dimensional Banach spaces. An important Banach space, denoted by B_∞(I, R^n), is the vector space of bounded measurable functions defined on the interval I ⊂ R and taking values in R^n. The norm for this space is given by ‖f‖ ≡ sup{‖f(t)‖_{R^n}, t ∈ I}. As we shall see later in Chaps. 2, 3, 4, 5, and 6, the Banach spaces C(I, R^n), B_∞(I, R^n), and L_p(Ω, B, μ), 1 ≤ p ≤ ∞, are very useful in the study of dynamic systems. More examples will be seen later.
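As a numerical aside (the sampled function f(t) = t on I = [0, 1] and the midpoint grid are illustrative assumptions, not from the text), the L_p and supremum norms above can be approximated by Riemann sums; the exact values here are ‖f‖_1 = 1/2, ‖f‖_2 = 1/√3, and sup norm 1:

```python
# Riemann-sum approximations of the L_p and sup norms of f(t) = t on [0, 1]
# (Lebesgue measure); exact values: ||f||_1 = 1/2, ||f||_2 = 1/sqrt(3) ~ 0.5774.
N = 100_000
ts = [(k + 0.5) / N for k in range(N)]            # midpoint grid on [0, 1]

def lp_norm(f, p):
    return (sum(abs(f(t)) ** p for t in ts) / N) ** (1.0 / p)

def sup_norm(f):
    return max(abs(f(t)) for t in ts)

f = lambda t: t
print(lp_norm(f, 1), lp_norm(f, 2), sup_norm(f))
```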

1.6 Modes of Convergence and Lebesgue Integral

There are many different notions of convergence in measure theory and functional analysis. Here we present only those that have been frequently used in this book. For a detailed study the reader is referred to any of the standard books on real analysis and measure theory, such as Halmos [69], Munroe [85], Hewitt and Stromberg [71], Berberian [39], and Royden [92].


1.6.1 Modes of Convergence

Let (Ω, B, μ) be a measure space and let M(Ω, B, μ) ≡ M denote the linear space of μ measurable real-valued functions defined on Ω. Sometimes this is denoted by L_0(Ω, B, μ) = L_0(Ω).

Uniform Convergence Definition 1.6.1 A sequence {f_n} ∈ M is said to converge to f ∈ M uniformly if for every ε > 0, there exists an integer n_ε such that whenever n > n_ε, |f_n(ω) − f(ω)| < ε ∀ ω ∈ Ω. Recall that for any set E ⊂ Ω we use Eᶜ to denote its complement, given by Eᶜ ≡ Ω \ E.

Almost Uniform Convergence Definition 1.6.2 A sequence {f_n} ∈ M is said to converge to f ∈ M almost uniformly, denoted by a.u., if for every ε > 0, there exists a set G_ε ∈ B with μ(G_ε) < ε such that f_n(ω) −→ f(ω) uniformly on G_εᶜ. Clearly, uniform convergence implies almost uniform convergence.

Almost Everywhere Convergence Definition 1.6.3 A sequence {f_n} ∈ M is said to converge to f μ-almost everywhere, indicated by μ-a.e. or just a.e., if

μ{ω ∈ Ω : lim_{n→∞} f_n(ω) ≠ f(ω)} = 0.

In other words, the set on which fn fails to converge to f is a set of μ-measure zero. If μ is a probability measure, i.e., μ() = 1, this mode of convergence is known as almost sure convergence. The space of measurable functions M is closed with respect to convergence almost everywhere. In other words, the a.e. limit of a sequence of measurable functions is also a measurable function. This is presented in the following result. Theorem 1.6.4 Let (, B, μ) be a measure space and M ≡ M(, B, μ) denote the class of measurable functions. Then, if {fn } ∈ M and fn −→ f μ − a.e, we have f ∈ M. Proof See Munroe [85, Corollary 20.3.2].

 

In other words, the almost everywhere limit of a sequence of measurable functions is a measurable function.


Convergence in Measure Definition 1.6.5 A sequence {f_n} ∈ M is said to converge to f ∈ M in measure, indicated by f_n −→ f meas., if for every ε > 0,

lim_{n→∞} μ{ω ∈ Ω : |f_n(ω) − f(ω)| > ε} = 0.

Again, if μ is a probability measure, this mode of convergence is known as convergence in probability. In probability theory, there is another, more general mode of convergence known as convergence in distribution. A sequence of real random variables {X_n} is said to converge to a random variable X in distribution if the cumulative distribution functions F_n(x) ≡ Prob{X_n ≤ x} of {X_n} converge to the cumulative distribution function F(x) = Prob{X ≤ x} of the random variable X at all continuity points x of F. Another frequently used mode of convergence is convergence in the mean of order p ≥ 1. We will discuss this after we have introduced the Lebesgue integral.

1.6.2 Lebesgue Integral

It is known that Riemann and Riemann–Stieltjes integrals are defined for continuous functions. This is a serious restriction. It was overcome by the discovery of the Lebesgue integral, which revolutionized mathematics in the twentieth century. Here we shall briefly introduce this notion for readers not familiar with the subject. Let (Ω, B, μ) denote a measure space with μ being the Lebesgue measure. Let M ≡ M(Ω, B, μ) denote the space of extended real-valued B measurable functions defined on Ω. An element f ∈ M is said to be a simple function if there exist an integer n, a set of real numbers {a_1, a_2, ⋯, a_n}, and a family of disjoint B measurable sets {E_i, i = 1, 2, ⋯, n} such that Range(f) = {a_1, a_2, ⋯, a_n} and

f = Σ_{i=1}^n a_i C_{E_i},

where C_{E_i} is the indicator function of the set E_i with μ(E_i) < ∞ for each i = 1, 2, ⋯, n. This class of functions is denoted by S. For f ∈ S, its integral with respect to the measure μ is given by

L(f) ≡ ∫_Ω f dμ = Σ_{i=1}^n a_i μ(E_i).
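The definition can be imitated numerically: partition the range of a bounded nonnegative f into n levels a_k = k/n, take E_k = {x : a_k ≤ f(x) < a_{k+1}}, and compute Σ_k a_k μ(E_k). In the sketch below (the sample function f(x) = x² on [0, 1] and the sampling estimate of μ(E_k) are illustrative assumptions) the simple-function integrals increase toward the Lebesgue integral 1/3:

```python
# Lebesgue integral via simple functions: partition the RANGE of f into n
# levels a_k = k/n, so f_n = sum_k a_k * C_{E_k} with
# E_k = {x in [0, 1] : a_k <= f(x) < a_(k+1)}, and L(f_n) = sum_k a_k * mu(E_k).
def simple_integral(f, n, samples=50_000):
    pts = [(j + 0.5) / samples for j in range(samples)]
    measure = [0.0] * n                     # estimated mu(E_k)
    for x in pts:
        k = min(int(f(x) * n), n - 1)       # level index of f(x)
        measure[k] += 1.0 / samples
    return sum((k / n) * measure[k] for k in range(n))

f = lambda x: x * x                         # Lebesgue integral on [0, 1] is 1/3
for n in (4, 64, 1024):
    print(n, simple_integral(f, n))         # increasing toward 1/3
```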

It is clear that for f ∈ S and α ∈ R, we have L(αf ) = αL(f )


and for f_1, f_2 ∈ S we have L(f_1 + f_2) = L(f_1) + L(f_2). Clearly, Lebesgue integration is a linear operation on S. For f ∈ M, let {f_n} ∈ S be a sequence such that lim_{n→∞} f_n = f μ-a.e. That is, f is the almost everywhere limit of a sequence of simple functions {f_n}. We have seen that the almost everywhere limit of a sequence of measurable functions is a measurable function; thus f is measurable. Then, the Lebesgue integral of f is defined by

L(f) ≡ lim_{n→∞} L(f_n),

possibly taking values from the set [−∞, +∞) or the set (−∞, +∞]. Thus, Lebesgue integration, denoted by the operation L, is linear on M. For more on Lebesgue integration see the celebrated books of Halmos [69], Munroe [85], Hewitt and Stromberg [71], Berberian [39], and Royden [92]. Set functions defined on any sigma algebra can take positive as well as negative values. In this case they are called signed measures. Let B denote the sigma algebra of Borel subsets of any abstract set Ω. A set function ν : B −→ R is called a signed measure. A simple example of a signed measure is

ν(D) ≡ ∫_D f(s) μ(ds), D ∈ B,

where μ is the Lebesgue measure and f is any Lebesgue integrable function. Similarly, a vector valued set function ν : B −→ R^n is called a (finite dimensional) vector measure with values in R^n, where R^n is given any suitable norm ‖·‖. In passing we note that signed measures can also take values from the extended real number system. They can also be complex valued; we do not use complex valued measures in this book. For any E ∈ B we can define the variation of ν on E by

|ν|(E) = sup_Π Σ_{σ ∈ Π} ‖ν(σ)‖,


where the summation is taken over Π, which consists of a finite number of pairwise disjoint B measurable sets covering the set E, and the supremum is taken over all such finite partitions. The total variation norm of ν is given by ‖ν‖ ≡ |ν|(Ω). An interesting example of a positive measure on Ω is

μ(dx) = λ(dx) + Σ_i γ_i δ_{x_i}(dx) = dx + Σ_i γ_i δ_{x_i}(dx), γ_i ≥ 0,

where δ_{x_i}(dx) denotes the Dirac measure concentrated at the point x_i ∈ Ω. For any bounded set D ⊂ Ω,

μ(D) = λ(D) + Σ_{i: x_i ∈ D} γ_i.

If the above sum is finite, the measure μ has finite variation on the set D. It is clear that for the measure μ to have bounded total variation, it is necessary that the Dirac measures have countable support and that Σ_i γ_i < ∞. Next we consider the Lebesgue–Stieltjes integral, denoted by

L_s(f) ≡ ∫_Ω f(x) dα(x),

where α is a real-valued function on Ω having bounded total variation. For any such α, we can define a signed measure as follows:

γ_α(G) ≡ ∫_G dα(x), G ∈ B.

This measure can be decomposed into two positive measures μ_α⁺ and μ_α⁻ giving

γ_α = μ_α⁺ − μ_α⁻,   |γ_α| ≡ μ_α⁺ + μ_α⁻,

where |γ_α| denotes the total variation. This is called the Jordan decomposition of γ_α. Since α is of bounded variation, these are finite (positive) measures of bounded total variation. Now we can define the Lebesgue integral of f ∈ M with respect to these measures, giving

L⁺(f) = ∫_Ω f(x) μ_α⁺(dx),   L⁻(f) = ∫_Ω f(x) μ_α⁻(dx).

Given that one of them is finite, the Lebesgue–Stieltjes integral is given by

L_s(f) = L⁺(f) − L⁻(f) ≡ ∫_Ω f(x) γ_α(dx),

which may take values in the extended real number system.
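As a numerical sketch (the particular α is an assumption chosen for illustration), take α(x) = x + H(x − 1/2) on [0, 1], where H is the unit step: γ_α then splits into an absolutely continuous part dx plus a unit point mass at 1/2, so L_s(f) = ∫_0^1 f(x) dx + f(1/2):

```python
# Lebesgue-Stieltjes integral of f with respect to alpha(x) = x + H(x - 1/2)
# on [0, 1]: d(alpha) = dx (a.c. part) + unit point mass at x = 1/2, hence
# L_s(f) = integral_0^1 f(x) dx + f(0.5).
def ls_integral(f, samples=100_000):
    ac = sum(f((j + 0.5) / samples) for j in range(samples)) / samples
    jump = 1.0 * f(0.5)              # jump size of alpha at 1/2, times f(1/2)
    return ac + jump

print(ls_integral(lambda x: x))      # 0.5 + 0.5 = 1.0
print(ls_integral(lambda x: x * x))  # 1/3 + 1/4 ~ 0.5833
```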


Convergence in the Mean-p Definition 1.6.6 A sequence {f_n} ∈ M is said to converge to f ∈ M in the mean of order p, for any p ∈ [1, ∞), if

lim_{n→∞} ‖f_n − f‖_p ≡ lim_{n→∞} ( ∫_Ω |f_n(ω) − f(ω)|^p μ(dω) )^{1/p} = 0.

Clearly, convergence in the mean-p applies only to the subspace L_p(Ω, B, μ) ⊂ M, where

L_p(Ω, B, μ) ≡ { f ∈ M : ∫_Ω |f(x)|^p μ(dx) < ∞ }.

The class of vector spaces {L_p(Ω, B, μ), 1 ≤ p ≤ ∞} is known as the Lebesgue spaces. These constitute a very large class of Banach spaces, as seen later in this chapter. For p = ∞, we have the class of μ-essentially bounded measurable functions. By this one means that there exists a finite positive number β which is exceeded by the absolute value of the function f only on a set of μ-measure zero; that is, μ{x ∈ Ω : |f(x)| > β} = 0. The norm of the function f is defined by ‖f‖_∞ ≡ ess-sup{|f(x)|, x ∈ Ω}, and it is given by the smallest number β for which μ{x ∈ Ω : |f(x)| > β} = 0. There are many subtle relationships between all these modes of convergence. For details the reader may consult any book on measure and integration; the most popular books are those of Halmos [69] and Munroe [85]. Here we present the relationships among the various modes of convergence through the following diagrams. For detailed proofs see Munroe [85] or any of the references mentioned above. In Figs. 1.1 and 1.2, solid lines with arrows indicate that the statement at the tail of the arrow implies the statement at the head of the arrow, while dashed lines with arrows mean that the statement at the tail implies the existence of a convergent subsequence in the sense of the statement at the head of the arrow. With reference to Fig. 1.1, for measure spaces (Ω, B, μ) with μ(Ω) = ∞, convergence in the mean of order p, 1 ≤ p < ∞, implies convergence in measure (M), as indicated by the solid arrow from L_p to M. Convergence in measure implies that there exists a subsequence that converges almost everywhere (AE), as indicated by the dashed arrow running from M to AE. The mode of almost everywhere (AE)


Fig. 1.1 General measure spaces

Fig. 1.2 Finite measure spaces

convergence does not imply any of the other modes of convergence. Convergence in the mean of order p, 1 ≤ p < ∞, implies the existence of an almost everywhere (AE) and almost uniformly (AU) convergent subsequence. In the case of finite measure spaces, i.e., μ(Ω) < ∞, the modes of convergence AE and AU are equivalent. The modes of convergence {L_p, AE, AU} all imply convergence in measure M. Convergence in measure implies the existence of a subsequence that converges almost everywhere.
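The dashed arrow from M to AE can be illustrated with the classical "typewriter" sequence on [0, 1) (an illustrative example, not from the text): indicators of intervals of length 2^{-k} sweeping across [0, 1). The sequence converges to 0 in measure, yet at every point it takes the value 1 infinitely often, so it converges nowhere; the subsequence of first-in-block terms converges everywhere:

```python
# "Typewriter" sequence on [0, 1): for n in [2^k, 2^(k+1)), f_n is the
# indicator of the j-th dyadic interval of length 2^-k, j = n - 2^k.
def interval(n):
    k = n.bit_length() - 1                  # block index
    j = n - 2 ** k                          # position within block k
    return (j * 2.0 ** -k, (j + 1) * 2.0 ** -k)

def f(n, x):
    a, b = interval(n)
    return 1.0 if a <= x < b else 0.0

# mu{x : |f_n(x)| > eps} = 2^-k -> 0: convergence in measure ...
print([interval(n)[1] - interval(n)[0] for n in (1, 2, 4, 8, 16)])
# ... but at a fixed point, e.g. x = 0.3, f_n(x) = 1 once in every block,
# so f_n(0.3) does not converge; a subsequence (one n per block) does.
print([n for n in range(1, 64) if f(n, 0.3) == 1.0])
```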

1.7 Selected Results From Measure Theory

In this section, we present some selected results from measure theory which have been frequently used in this book. These are the monotone convergence theorem, Fatou's Lemma, the celebrated Lebesgue dominated convergence theorem, and Fubini's theorem. First we state the monotone convergence theorem.

Theorem 1.7.1 (Monotone Convergence Theorem (MCT)) Let {f_n} ∈ L_1(Ω, B, μ) be a non-decreasing sequence of nonnegative functions and let f_0 be a function such that lim_{n→∞} f_n = f_0 a.e. Then, f_0 ∈ L_1(Ω, B, μ) if and only if lim_{n→∞} ∫_Ω f_n dμ < ∞, and if this is the case, then

lim_{n→∞} ∫_Ω f_n dμ = ∫_Ω lim_{n→∞} f_n dμ = ∫_Ω f_0 dμ.


Proof Since {f_n} is a monotone non-decreasing sequence, f_n(x) ≤ f_0(x) μ-a.e. Thus, if f_0 ∈ L_1(Ω, B, μ), then lim ∫_Ω f_n dμ < ∞; so this is necessary for integrability of f_0. Clearly, by monotonicity of {f_n} we have ∫_Ω f_n dμ ≤ ∫_Ω f_0 dμ for all n ∈ N and hence

lim_{n→∞} ∫_Ω f_n dμ ≤ ∫_Ω f_0 dμ.   (D1)

Let S denote the class of simple functions defined on Ω, that is, functions assuming only a finite number of values, and let S⁺ ⊂ S denote the class of nonnegative simple functions. For each n ∈ N, let {S_{n,k}} ⊂ S⁺, k ∈ N, be a non-decreasing sequence of integrable simple functions such that for each n ∈ N

lim_{k→∞} S_{n,k} = f_n μ-a.e.

Define the sequence of functions {R_{n,k}} by

R_{n,k} ≡ sup_{1≤i≤n} S_{i,k}.

Clearly, this is also a sequence of nonnegative integrable simple functions and it is monotone non-decreasing in both indices {n, k}. Since {f_n} is a non-decreasing sequence, we have, for each n ∈ N and each k ≥ n,

S_{n,k} ≤ R_{n,k} ≤ R_{k,k} ≤ sup_{1≤i≤k} f_i = f_k.   (D2)

Letting k → ∞ in the above inequality we obtain

f_n ≤ lim_{k→∞} R_{k,k} ≤ f_0 μ-a.e.

Now letting n → ∞, it follows from the above expression that

lim_{k→∞} R_{k,k} = f_0 μ-a.e.   (D3)

For simple functions, it follows from the definition of the Lebesgue integral that

lim_{k→∞} ∫_Ω R_{k,k} dμ = ∫_Ω lim_{k→∞} R_{k,k} dμ = ∫_Ω f_0 dμ.   (D4)

From (D2) it is clear that for all k ∈ N,

∫_Ω R_{k,k} dμ ≤ ∫_Ω f_k dμ,   (D5)

and hence, letting k → ∞, it follows from (D4) and (D5) that

∫_Ω f_0 dμ ≤ lim_{k→∞} ∫_Ω f_k dμ.   (D6)

This shows that for integrability of f_0 it is sufficient that lim_{k→∞} ∫_Ω f_k dμ < ∞. Combining (D1) and (D6) we arrive at the following identity:

lim_{k→∞} ∫_Ω f_k dμ = ∫_Ω f_0 dμ.

This completes the proof. ∎
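A numerical illustration of the theorem (the specific sequence is an assumption chosen for the sketch): f_n(x) = min(x^{-1/2}, n) on (0, 1] is nonnegative and non-decreasing with limit f_0(x) = x^{-1/2}, and the integrals ∫_0^1 f_n dx increase to the finite limit ∫_0^1 x^{-1/2} dx = 2:

```python
# MCT sketch: f_n(x) = min(x^(-1/2), n) increases to f_0(x) = x^(-1/2) on
# (0, 1]; the integrals increase to the finite limit integral_0^1 f_0 dx = 2,
# so f_0 is integrable and lim L(f_n) = L(f_0).
def integral_fn(n, samples=200_000):
    s = 0.0
    for j in range(samples):
        x = (j + 0.5) / samples             # midpoint grid on (0, 1)
        s += min(x ** -0.5, n)
    return s / samples

for n in (1, 10, 100):
    print(n, integral_fn(n))                # 1.0, ~1.9, ~1.99: increasing to 2
```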

This theorem is due to Lebesgue. Later, this result was extended by B. Levi, lifting the non-negativity hypothesis. We present this in the following theorem without proof. First let us define f⁻ ≡ −{f ∧ 0} ≡ −inf{f, 0}.

Theorem 1.7.2 (Beppo Levi's Theorem (GMCT)) Let {f_n} be a monotone non-decreasing sequence of extended real-valued measurable functions such that ∫_Ω f_k⁻ dμ < ∞ for some k ∈ N. Then

lim_{n→∞} ∫_Ω f_n dμ = ∫_Ω lim_{n→∞} f_n dμ.

Fatou's lemma and the Lebesgue dominated convergence theorem are frequently used in this book. On the basis of the preceding results, we can give simple proofs of these as follows.

Theorem 1.7.3 (Fatou's Lemma) Let {f_n} be a sequence of measurable nonnegative μ-integrable functions on the measure space (Ω, B, μ) satisfying lim inf_{n→∞} f_n = f_0 μ-a.e. Then

∫_Ω lim inf_{n→∞} f_n dμ ≤ lim inf_{n→∞} ∫_Ω f_n dμ.

Note that the expression on the right-hand side may assume the value +∞.

Proof The proof is based on the monotone convergence Theorem 1.7.1. Define the sequence g_n ≡ inf_{k≥n} f_k. Since f_n is nonnegative and integrable, this is a non-decreasing sequence of nonnegative integrable functions satisfying 0 ≤ g_n ≤ f_n. Clearly,

∫_Ω g_n dμ ≤ ∫_Ω f_n dμ

and hence

lim inf_{n→∞} ∫_Ω g_n dμ ≤ lim inf_{n→∞} ∫_Ω f_n dμ.

By the monotone convergence theorem and the fact that lim g_n = lim inf f_n, we have

lim_{n→∞} ∫_Ω g_n dμ = ∫_Ω lim_{n→∞} g_n dμ ≡ ∫_Ω lim inf_{n→∞} f_n dμ.

It follows from these two expressions that

∫_Ω lim inf_{n→∞} f_n dμ = lim_{n→∞} ∫_Ω g_n dμ = lim inf_{n→∞} ∫_Ω g_n dμ ≤ lim inf_{n→∞} ∫_Ω f_n dμ.

Thus,

∫_Ω f_0 dμ ≤ lim inf_{n→∞} ∫_Ω f_n dμ.

This completes the proof. ∎

Remark 1.7.4 It is clear from the above result that if lim inf_{n→∞} ∫_Ω f_n dμ < ∞, then f_0 ∈ L_1(Ω, B, μ).

Remark 1.7.5 Note that in the statement of Fatou's Lemma, the condition f_0 = lim inf f_n can be replaced by f_0 ≤ lim inf f_n.

Remark 1.7.6 Fatou's Lemma also applies to any sequence of integrable functions which are bounded from below by an integrable function. For example, consider a sequence of integrable functions f_n and suppose f_n ≥ g μ-a.e., where g ∈ L_1(Ω, B, μ). Then one can verify that

∫_Ω lim inf f_n dμ ≤ lim inf ∫_Ω f_n dμ.
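Fatou's inequality can be strict, as the following sketch shows (the "moving spike" f_n = n·1_{(0,1/n)} on [0, 1] is an illustrative assumption, not from the text): lim inf f_n = 0 pointwise, so the left side is 0, while ∫ f_n dμ = 1 for every n:

```python
# Strictness in Fatou's Lemma: on [0, 1], f_n = n * indicator((0, 1/n)).
# Every fixed x > 0 eventually lies outside (0, 1/n), so liminf f_n = 0 a.e.
# and integral(liminf f_n) = 0 < 1 = liminf of the integrals.
def integral(n, samples=100_000):
    total = 0.0
    for j in range(samples):
        x = (j + 0.5) / samples
        if 0.0 < x < 1.0 / n:
            total += n
    return total / samples

print([round(integral(n), 4) for n in (2, 10, 100)])   # all ~1.0
```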

Now we can present the Lebesgue dominated convergence theorem, which we denote by LDCT.

Theorem 1.7.7 (LDCT-1) Let {f_n} ∈ L_1(Ω, B, μ) and suppose

(i): lim_{n→∞} f_n(ω) = f(ω) μ-a.e.;
(ii): ∃ g ∈ L_1⁺(Ω, B, μ) such that |f_n(ω)| ≤ g(ω) μ-a.e.


Then f ∈ L_1(Ω, B, μ) and

lim_{n→∞} ∫_Ω f_n dμ = ∫_Ω lim_{n→∞} f_n dμ = ∫_Ω f dμ.

Proof We use Fatou's lemma to prove the theorem. By assumption, the sequence f_n is dominated by the integrable function g, and hence the two sequences f_n + g and g − f_n are nonnegative and integrable. Applying Fatou's Lemma to the first sequence, we obtain

∫_Ω lim inf f_n dμ ≤ lim inf ∫_Ω f_n dμ.

Applying it to the second sequence, we obtain

∫_Ω lim inf(g − f_n) dμ ≤ lim inf ∫_Ω (g − f_n) dμ.

This means

lim sup ∫_Ω f_n dμ ≤ ∫_Ω lim sup f_n dμ.

Since f_n → f μ-a.e., it follows from the first inequality that

∫_Ω f dμ = ∫_Ω lim inf f_n dμ ≤ lim inf ∫_Ω f_n dμ.

Similarly, it follows from the second inequality that

lim sup ∫_Ω f_n dμ ≤ ∫_Ω lim sup f_n dμ = ∫_Ω f dμ.

Thus, it follows from the last two inequalities that

lim_{n→∞} ∫_Ω f_n dμ = ∫_Ω lim_{n→∞} f_n dμ = ∫_Ω f dμ.

This completes the proof. ∎

Remark 1.7.8 (LBCT) It is clear from the above theorem that if the condition (i) holds and condition (ii) is replaced by the assumption that the sequence is absolutely bounded by a finite positive number, the conclusion of the theorem holds trivially. This is known as the Lebesgue bounded convergence theorem.


Two Examples

(E1) Consider Ω = R and define the sequence of real-valued (positive) functions given by

f_n(x) ≡ (n/√(2π)) exp{−(1/2) n² x²}, x ∈ R, n ∈ N.

This is a Gaussian density function with zero mean and variance 1/n². It is easy to see that ∫_R f_n(x) dx = 1 for all n ∈ N and that f_n −→ 0 a.e., but clearly lim_{n→∞} ∫_R f_n(x) dx = 1 ≠ 0 = ∫_R lim f_n dx. What is lacking here is condition (ii) of the LDCT-1: there is no integrable function that dominates the sequence {f_n}. Another example is the sequence {f_n} given by

f_n(x) ≡ (1/√(2π)) exp{−(1/2)(x − n)²},

which has similar behavior. Its integral is one for all n ∈ N and it converges to zero for each x ∈ R, but lim_{n→∞} ∫_R f_n(x) dx = 1 ≠ 0. The reason is the same: there is no integrable function that can dominate the sequence.

(E2) Consider the interval J ≡ [0, π] and the sequence of functions f_n(t) ≡ sin(nt), t ∈ J. Note that

lim_{n→∞} ∫_J f_n(t) dt = 0.

Here |f_n| ≤ 1, so the sequence is dominated by an integrable function, but {f_n(t)} does not converge a.e.; here condition (i) is missing. However, note that the Gaussian sequence given by the first part of example (E1) converges, in a generalized sense, to the Dirac measure supported at the point {0}. The second sequence of (E1) converges to zero in the sense of distributions.

The Lebesgue dominated convergence theorem also holds for the L_p(Ω, B, μ) spaces, as stated in the following theorem.

Theorem 1.7.9 (LDCT-2) Let {f_n} ∈ L_p(Ω, B, μ) for some p ∈ [1, ∞) and suppose

(i):

lim_{n→∞} f_n(ω) = f(ω) μ-a.e.;

(ii): ∃ g ∈ L_p⁺(Ω, B, μ) such that |f_n(ω)| ≤ g(ω) μ-a.e.

Then f ∈ L_p(Ω, B, μ) and

lim_{n→∞} ∫_Ω |f_n − f|^p dμ = 0.
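Example (E1) above can be checked numerically (the truncation of R to [−50, 50] and the Riemann grid are assumptions of the sketch): each f_n integrates to 1, while at any fixed x ≠ 0 the values f_n(x) collapse to 0, so no single integrable g can dominate the sequence:

```python
import math

# Example (E1): f_n(x) = (n / sqrt(2*pi)) * exp(-n^2 x^2 / 2) has unit
# integral for every n, yet f_n(x) -> 0 for every fixed x != 0.
def f(n, x):
    return n / math.sqrt(2.0 * math.pi) * math.exp(-0.5 * (n * x) ** 2)

def integral_fn(n, a=-50.0, b=50.0, samples=200_000):
    h = (b - a) / samples
    return sum(f(n, a + (j + 0.5) * h) for j in range(samples)) * h

print(integral_fn(1), integral_fn(10))      # both ~1.0
print([f(n, 0.5) for n in (1, 5, 10)])      # -> 0 at the fixed point x = 0.5
```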


Sometimes we have to deal with multiple integrals. Fubini's theorem shows that under certain conditions they can be evaluated as iterated integrals, which is much simpler. For example, consider double integrals. First, one integrates with respect to one variable keeping the other variable fixed, and then completes the integration by integrating with respect to the remaining variable. This process is acceptable only if the end result is the same irrespective of the order of integration. This is the basic content of Fubini's theorem. Let (Ω_i, Σ_i, μ_i), i = 1, 2, be two measure spaces. Let Σ = Σ_1 × Σ_2 denote the product σ-field on the product space Ω ≡ Ω_1 × Ω_2 and μ ≡ μ_1 × μ_2 the product measure on the σ-field Σ.

Theorem 1.7.10 (Fubini's Theorem) Let f be a measurable function with respect to the product σ-field and suppose the following integral exists:

∫_{Ω_1 × Ω_2} f dμ.

If the functions f_1 and f_2 given by

f_1(x) ≡ ∫_{Ω_2} f(x, y) μ_2(dy),   f_2(y) ≡ ∫_{Ω_1} f(x, y) μ_1(dx),

exist μ_1-a.e. and μ_2-a.e., respectively, then f_1 and f_2 are integrable with respect to the measures μ_1 and μ_2, respectively, and the following identity holds:

∫_Ω f dμ = ∫_{Ω_1} f_1(x) μ_1(dx) = ∫_{Ω_2} f_2(y) μ_2(dy).

Proof See Halmos [69], Berberian [39], Munroe [85]. ∎

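A numerical sanity check of the iterated-integral identity (the function f(x, y) = x y² on [0, 1] × [0, 2] with Lebesgue measure is an illustrative assumption): both orders of integration give (1/2)(8/3) = 4/3:

```python
# Fubini sketch: iterated Riemann sums of f(x, y) = x * y^2 over
# [0, 1] x [0, 2] agree in either order; the exact value is (1/2)*(8/3) = 4/3.
N = 500
xs = [(i + 0.5) / N for i in range(N)]          # grid on [0, 1], step hx
ys = [2.0 * (j + 0.5) / N for j in range(N)]    # grid on [0, 2], step hy
hx, hy = 1.0 / N, 2.0 / N

f = lambda x, y: x * y * y
xy_first = sum(hx * sum(hy * f(x, y) for y in ys) for x in xs)  # dy then dx
yx_first = sum(hy * sum(hx * f(x, y) for x in xs) for y in ys)  # dx then dy
print(xy_first, yx_first)                        # both ~1.3333
```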
Let (Ω, F, P) be a complete probability space furnished with an increasing family of sub-sigma algebras F_t, t ≥ 0, of the sigma algebra F, with P being the probability measure. We call (Ω, F, F_t, t ≥ 0, P) a filtered probability space, or a probability space furnished with a filtration. In probability theory, the Borel–Cantelli Lemma is one of the best known results on the zero-one law. The lemma states that under certain conditions an event will occur with probability either one or zero. It has been used in the study of continuity properties of solutions of stochastic differential equations, to be seen in Chap. 6. We present this result in the following lemma.

Lemma 1.7.11 (Borel–Cantelli Lemma) Let {A_i} be a sequence of measurable sets and let A denote the set of points belonging to infinitely many of the sets {A_i}, that is, A = ∩_{n=1}^∞ ∪_{i=n}^∞ A_i. Then P(A) = 0 if Σ_{i≥1} P(A_i) < ∞; and P(A) = 1 if Σ_{i≥1} P(A_i) = ∞ and the sets {A_i} are mutually independent.

Proof See Doob [58, Theorem 1.2, p. 104]. ∎


Doob's martingale inequality is one of the most celebrated results in the study of stochastic processes. A random process {ξ(t), t ≥ 0} adapted to the filtration F_t, t ≥ 0, is said to be an F_t-martingale if (i): E|ξ(t)| < ∞, t ≥ 0, and (ii): E{ξ(t)|F_s} = ξ(s) for all t ≥ s ≥ 0. The process is said to be a submartingale if E{ξ(t)|F_s} ≥ ξ(s) for all t ≥ s ≥ 0, and a supermartingale if E{ξ(t)|F_s} ≤ ξ(s) ∀ t ≥ s ≥ 0.

Theorem 1.7.12 If ξ(t), t ∈ I ≡ [0, T], is an F_t-martingale on the filtered probability space (Ω, F ⊃ F_t ↑ t, P), then

(i) P{sup_{t∈I} |ξ(t)| > r} ≤ (1/r) E|ξ(T)|, ∀ r > 0;
(ii) if for some p > 1, E|ξ(t)|^p < ∞ ∀ t ∈ I, then

E( sup_{t∈I} |ξ(t)|^p ) ≤ (p/(p − 1))^p E|ξ(T)|^p.

Note that for square integrable martingales, the last inequality reduces to

E( sup_{t∈I} |ξ(t)|² ) ≤ 4 E|ξ(T)|².

This particular inequality is crucially important in the study of stochastic differential equations driven by martingales as seen in Chap. 6.
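A Monte Carlo sanity check of the L² maximal inequality (the ±1 random walk martingale, the horizon T, the seed, and the trial count are all assumptions of the sketch):

```python
import random

# Doob's L^2 maximal inequality for the martingale S_k = sum of k i.i.d.
# +/-1 steps over T steps:  E[ max_k |S_k|^2 ] <= 4 * E[ |S_T|^2 ]  (= 4T).
random.seed(0)
T, trials = 100, 2_000
max_sq = final_sq = 0.0
for _ in range(trials):
    s = m = 0
    for _ in range(T):
        s += random.choice((-1, 1))
        m = max(m, abs(s))
    max_sq += m * m / trials
    final_sq += s * s / trials

print(max_sq, 4 * final_sq)   # left side stays below the right side
```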

1.8 Special Hilbert and Banach Spaces

Many of the problems arising in physics and engineering are concerned with energy. Hilbert spaces are particularly suitable for quantifying such entities.

1.8.1 Hilbert Spaces

Definition 1.8.1 (Hilbert Space) A Hilbert space H is a Banach space with special structure: it is furnished with an inner (or scalar) product (·, ·) as defined below. For any two elements f, g ∈ H, the scalar product (f, g) is a real or a complex number. Let C denote the field of complex numbers. The map {f, g} −→ (f, g) has the following properties:

(H1) (αf, g) = α(f, g), (f, αg) = α*(f, g), α ∈ C, f, g ∈ H, where α* is the complex conjugate of α;
(H2) (f, g_1 + g_2) = (f, g_1) + (f, g_2), ∀ f, g_1, g_2 ∈ H;


(H3) (f, g) = (g, f)*, ∀ f, g ∈ H;
(H4) ‖f‖² = (f, f), ∀ f ∈ H.

Some examples of finite dimensional Hilbert spaces are R^n, E^n, and ℓ_2^n, with the norms as defined in the previous section. The normed spaces ℓ_2 and L_2(Ω, μ) are infinite dimensional Hilbert spaces. For estimates of the intensity of forces, energy, or any other physically important variables, we often use inequalities. A very important inequality, the Schwarz inequality, holds for Hilbert spaces.

Proposition 1.8.2 (Schwarz Inequality) For every f, g ∈ H,

|Re(f, g)| ≤ ‖f‖ ‖g‖.   (1.8)

Proof See [14, Proposition 1.4.4, p. 7]. ∎

As a corollary of this result, we have the triangle inequality.

Corollary 1.8.3 (Triangle Inequality) For every f, g ∈ H,

‖f + g‖ ≤ ‖f‖ + ‖g‖.   (1.9)

Proof See [14, Corollary 1.4.5, p. 8]. ∎

Another example of a Hilbert space is given by H¹(Ω, μ), which consists of functions (equivalence classes) which, along with their first derivatives, are square integrable:

H¹(Ω, μ) ≡ {ϕ : ϕ ∈ L_2(Ω, μ), Dϕ = (D_{x_i}ϕ, i = 1, 2, ⋯, n) ∈ L_2(Ω, μ)}.

The scalar product in this space is given by

(ϕ, ψ) = ∫_Ω ϕ(x)ψ(x) μ(dx) + Σ_{1≤i≤n} ∫_Ω (D_{x_i}ϕ)(D_{x_i}ψ) μ(dx),

with the corresponding norm

‖ϕ‖_{H¹} = ( ∫_Ω |ϕ(x)|² μ(dx) + Σ_{1≤i≤n} ∫_Ω |D_{x_i}ϕ|² μ(dx) )^{1/2}.

If Ω = I ≡ (0, T) and μ is the Lebesgue measure (length), H¹(I) is the space of square integrable functions which are absolutely continuous with the first derivatives being square integrable. The norm is given by

‖ϕ‖_{H¹} = ( ∫_0^T |ϕ|² dt + ∫_0^T |ϕ̇|² dt )^{1/2}.
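As a numerical sketch (the choice ϕ(t) = sin t with T = 2π is an illustrative assumption): since ϕ̇ = cos t and sin² + cos² = 1, the integral ∫_0^T (|ϕ|² + |ϕ̇|²) dt equals T, so ‖ϕ‖_{H¹} = √T:

```python
import math

# H^1(0, T) norm of phi(t) = sin(t), T = 2*pi:
# integral of (phi^2 + phi'^2) dt = integral of 1 dt = T, so norm = sqrt(T).
T, N = 2.0 * math.pi, 100_000
h = T / N
sq = sum((math.sin(t) ** 2 + math.cos(t) ** 2) * h
         for t in ((j + 0.5) * h for j in range(N)))
print(math.sqrt(sq), math.sqrt(T))   # both ~2.5066
```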


1.8.2 Special Banach Spaces

Now we consider a very important class of function (signal) spaces which are Banach spaces but not Hilbert spaces. For f ∈ L_p(Ω, B, μ), 1 ≤ p < ∞, we denote the norm of f by

‖f‖_p ≡ ( ∫_Ω |f|^p μ(dx) )^{1/p},

where μ is a countably additive σ-finite positive measure on Ω. For p = 2, this is a Hilbert space. The spaces L_p(Ω, B, μ), 1 ≤ p ≤ ∞, are Banach spaces. There is no scalar product in these spaces in the sense defined for Hilbert spaces; for f, g ∈ L_p, (f, g) is not defined. However, there is a duality pairing as described below. For special Banach spaces, like L_p(Ω, B, μ) with μ any finite positive measure, we have results similar to Proposition 1.8.2 and Corollary 1.8.3: the Hölder and Minkowski inequalities. For economy of notation, we may sometimes omit the sigma algebra and write L_p(Ω, μ) to denote L_p(Ω, B, μ).

Proposition 1.8.4 (Hölder Inequality) For every f ∈ L_p(Ω, μ) and g ∈ L_q(Ω, μ) with 1 < p, q < ∞, (1/p) + (1/q) = 1,

|(f, g)| ≤ ‖f‖_p ‖g‖_q.   (1.10)

Proof We present a simple proof. Define

a ≡ |f(x)|^p / ‖f‖_p^p,   b ≡ |g(x)|^q / ‖g‖_q^q,   (1.11)

and α ≡ 1/p, β ≡ 1/q. Since the function log x, x ≥ 0, is a concave function, we have

log(αa + βb) ≥ α log a + β log b = log(a^α b^β).

Hence αa + βb ≥ a^α b^β; integrating over Ω and noting that each of a and b integrates to one, the inequality (1.10) follows. ∎

Remark 1.8.5 Note that the above result holds also for p = 1 and q = ∞, giving

|(f, g)| ≤ ∫_Ω |f||g| dμ ≤ ‖g‖_∞ ∫_Ω |f| dμ = ‖f‖_1 ‖g‖_∞,

where ‖g‖_∞ = μ-ess-sup{|g(x)|, x ∈ Ω}. As a corollary of this result, we have the following triangle inequality.


Corollary 1.8.6 (Minkowski Inequality) For f, g ∈ L_p(Ω, μ), 1 ≤ p < ∞,

‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.   (1.12)

Proof Write ∫_Ω |f + g|^p dμ = ∫_Ω |f + g|^{p−1} |f + g| dμ ≤ ∫_Ω |f + g|^{p−1} (|f| + |g|) dμ and apply the Hölder inequality to each term. For details see [14, Corollary 1.8.7, p. 11]. ∎

The following result states that all L_p (Lebesgue) spaces are complete and hence are Banach spaces.

Theorem 1.8.7 For any positive measure space (Ω, B, μ) with μ(Ω) < ∞, and for each p ≥ 1, the vector space L_p(Ω, B, μ) is a Banach space.

Proof See [14, Theorem 1.4.8, p. 12]. ∎
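Inequalities (1.10) and (1.12) can be checked numerically (the functions f(t) = t and g(t) = 1 + t² on [0, 1] and the conjugate pair p = 3, q = 3/2 are illustrative assumptions of the sketch):

```python
# Holder and Minkowski on [0, 1] with p = 3, q = 3/2 (1/p + 1/q = 1).
N = 100_000
ts = [(k + 0.5) / N for k in range(N)]

def norm(fn, p):
    return (sum(abs(fn(t)) ** p for t in ts) / N) ** (1.0 / p)

f = lambda t: t
g = lambda t: 1.0 + t * t
pairing = sum(f(t) * g(t) for t in ts) / N    # (f, g) = integral of f*g

print(pairing, norm(f, 3) * norm(g, 1.5))     # Holder: left <= right
print(norm(lambda t: f(t) + g(t), 3), norm(f, 3) + norm(g, 3))  # Minkowski
```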

Proposition 1.8.8 Consider the Banach spaces {Lp(Ω, μ), 1 ≤ p ≤ ∞} and suppose μ(Ω) < ∞. Then, for 1 ≤ p1 ≤ p2 ≤ ∞, we have

‖f‖_{p1} ≤ c ‖f‖_{p2},    (1.13)

where c = (μ(Ω))^{(p2−p1)/(p2 p1)}. Hence, Lp2(Ω, B, μ) ⊂ Lp1(Ω, B, μ). Note that the embedding constant c depends on the parameters as shown. It follows from this inequality that the embedding Lp2(Ω, μ) → Lp1(Ω, μ) is continuous.

Remark 1.8.9 It is clear from the expression for the constant c that if μ(Ω) = ∞, the inclusion does not make much sense, indicating that it may not even hold. Here is an example justifying this statement. Let Ω = [0, ∞) and let μ be the Lebesgue measure. Consider the function

f(x) ≡ 1 / ( x^{1/p} (1 + |log x|)^{2/p} ),   x ∈ [0, ∞), 1 ≤ p < ∞.

The reader can verify that f ∉ Lr([0, ∞), μ) for any r ≠ p. For r = p, using the substitution y = log x, we have

∫_0^∞ |f(x)|^p dx = ∫_0^∞ dx / ( x (1 + |log x|)^2 ) = ∫_{−∞}^{∞} dy / (1 + |y|)^2 = 2 ∫_0^∞ dy / (1 + y)^2 = 2.    (1.14)

In summary, in case μ(Ω) = ∞, there exists f ∈ Lp such that f ∉ ⋃_{r≠p} Lr.
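On a finite measure space the embedding inequality (1.13) can be observed directly; the following sketch (arbitrary weights and function, not from the text) checks it with the stated constant.

```python
import random

random.seed(2)
N = 400
mu = [random.uniform(0.05, 0.3) for _ in range(N)]   # finite positive measure
total = sum(mu)                                      # mu(Omega) < infinity
f = [random.uniform(-10.0, 10.0) for _ in range(N)]

def norm(h, p):
    return sum(abs(x) ** p * m for x, m in zip(h, mu)) ** (1.0 / p)

p1, p2 = 1.5, 4.0
c = total ** ((p2 - p1) / (p1 * p2))                 # constant in (1.13)
assert norm(f, p1) <= c * norm(f, p2) + 1e-9
```

The exponent (p2 − p1)/(p1 p2) equals 1/p1 − 1/p2, which is how the constant arises from Hölder's inequality.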

Remark 1.8.10 Let μ(dx) = dx denote the Lebesgue measure and ν(dx) ≡ (1/(1 + x²)) dx another measure, both defined on R. Note that ν(R) = π, so this is a


finite positive measure. Then, one can easily verify that Lp(R, μ) ⊂ Lp(R, ν) for all p ≥ 1. For f ∈ Lp(R, ν), g ∈ Lq(R, ν), with (p, q) being the conjugate pair, we have fg ∈ L1(R, ν). Similar conclusions hold for the Gaussian measure ν_g(K) ≡ ∫_K (1/√(2π)) exp{−(1/2)x²} dx.

Another interesting property of the {Lp(Ω, μ), 1 ≤ p ≤ ∞} spaces is stated in the following result. Let B1(Lp) ≡ {h : h ∈ Lp(Ω, μ), ‖h‖_p ≤ 1} denote the closed unit ball of the Banach space Lp. The unit sphere is denoted by ∂B1 ≡ S1; these are the elements of B1 which have norms exactly equal to one. This is also the boundary of the unit ball.

Proposition 1.8.11 For every f ∈ Lp(Ω, μ) (1 ≤ p < ∞), there exists a g ∈ B1(Lq), where q is the conjugate of p in the sense that (1/p) + (1/q) = 1, such that (f, g) = ‖f‖_p. Further, if 1 < q < ∞, there is only one such (unique) g ∈ B1(Lq).

Proof Let f (≠ 0) ∈ Lp(Ω, μ) be given. Define

g(x) ≡ (1/‖f‖_p^{p−1}) |f(x)|^{p−1} sign f(x), x ∈ Ω.    (1.15)

The signum function of a Borel measurable function is also Borel measurable, and the product of measurable functions is a measurable function. Thus, g as defined is a measurable function. By integration we show that this element belongs to the unit sphere ∂B1(Lq), where q is the number conjugate to p. Integrating |g|^q and noting that (p − 1)q = p, we find that

∫_Ω |g|^q dμ = (1/‖f‖_p^{(p−1)q}) ∫_Ω |f|^{(p−1)q} dμ = 1.

Thus, g ∈ ∂B1(Lq). Now, computing the duality product of f with the function g given by (1.15), we find that

(f, g) = ∫_Ω f(x) g(x) dμ = (1/‖f‖_p^{p−1}) ∫_Ω |f|^p dμ = ‖f‖_p.    (1.16)



This proves the first part. For the second part, we note that the ball B1 (Lq ) is strictly convex for all q ∈ (1, ∞) in the sense that the line segment joining any two points f1 , f2 of the ball B1 given by {h ∈ B1 : h = αf1 + βf2 , α, β ≥ 0, α + β = 1},


cannot touch the boundary except possibly at the end points. We prove this by contradiction. Suppose there are two points g1, g2 ∈ ∂B1(Lq) such that (f, g1) = (f, g2) = ‖f‖_p. Define g = (1/2)(g1 + g2). Then, clearly (f, g) = ‖f‖_p. Since the ball is strictly convex, g must be an interior point of the ball, so ‖g‖_q < 1, and this implies that |(f, g)| ≤ ‖f‖_p ‖g‖_q < ‖f‖_p, a contradiction. Hence g1 = g2, proving uniqueness. □

1.9 Metric Space

A subset K of a metric space (M, d) is said to be totally bounded if, for every ε > 0, K can be covered by a finite number of balls of radius ε. That is, there exists an integer nε and a finite set of points {xi ∈ K, i = 1, 2, · · · , nε} such that

K ⊂ ⋃_{1≤i≤nε} B(xi, ε),

where B(x, ε) ≡ {z ∈ M : d(x, z) ≤ ε} is a closed ball of radius ε centered at x. The set K is said to be compact if it is also closed.

Definition 1.9.3 A subset K of a metric space (M, d) is said to be sequentially compact if every sequence from K has a convergent subsequence with the limit belonging to K. A set K ⊂ M is said to be relatively compact if its closure is compact.

Here are some examples of compact sets. A closed bounded set in R^n is compact. A bounded set in R^n is relatively compact. In fact, any bounded set in a finite dimensional space is relatively compact. In infinite dimensional Banach spaces this is not true. However, for reflexive Banach spaces furnished with the weak convergence topology, we have the following result. For the proof see Dunford [59].

Theorem 1.9.4 A bounded weakly closed subset K of a reflexive Banach space is weakly compact, as well as weakly sequentially compact.

One of the most frequently used Banach spaces is the space of continuous functions. Let (X, d) and (Y, ρ) be any two metric spaces. A function f : X → Y is said to be continuous if the inverse image of any open set in Y is an open set in X, that is, O_X ≡ f^{−1}(O_Y) is open in X whenever O_Y is an open subset of Y. Denote by C(X, Y) the space of continuous functions from X to Y. Furnished with the metric topology

γ(f, g) ≡ sup_{x∈X} ρ(f(x), g(x)),


C(X, Y) is a metric space. If (Y, ρ) is a complete metric space, then so also is (C(X, Y), γ). We are especially interested in the following Banach space. Let D be a compact subset of a metric space and Y a finite dimensional Banach space. Let C(D, Y) denote the vector space of continuous functions furnished with the norm topology ‖f‖ ≡ sup{|f(x)|_Y, x ∈ D}. Then it is a Banach space; that is, every Cauchy sequence {fn} in C(D, Y) converges in the norm topology defined above to an element f ∈ C(D, Y). In particular, let D be a closed bounded interval I of the real line R and Y = R^n. Then C(I, R^n) is a Banach space.

Definition 1.9.5 (Absolute Continuity) A function f ∈ C(I, R^n) is said to be absolutely continuous if, for every ε > 0, there is a δ > 0 such that, for any set of disjoint open intervals {Ii ≡ (ti, ti+1)} ⊂ I,

Σ_i |f(ti+1) − f(ti)|_{R^n} < ε whenever Σ_i |ti+1 − ti| < δ.

We use AC(I, R^n) to denote the class of absolutely continuous functions from I to R^n. It is evident that AC(I, R^n) ⊂ C(I, R^n). The question of compactness of a set G ⊂ C(I, R^n) is crucial in the study of existence of solutions of differential equations and many optimization problems. This is closely related to the notion of equicontinuity.

Definition 1.9.6 (Equicontinuity) A set G ⊂ C(I, R^n) is said to be equicontinuous at a point t ∈ I if for every ε > 0 there exists a δ > 0 such that |f(t) − f(s)|_{R^n} < ε for all f ∈ G whenever |t − s| < δ. The set G is said to be equicontinuous on I if this holds for all t ∈ I.

The following result is very important in the study of regularity properties of solutions of ordinary differential equations. It also has applications in the study of optimization.

Theorem 1.9.7 (Ascoli–Arzelà) A set G ⊂ C(I, R^n) is relatively compact if (i) it is bounded in the norm topology and (ii) it is equicontinuous. The set G is compact if it is relatively compact and the t-section R(t) ≡ {f(t), f ∈ G} is closed for every t ∈ I.


Proof Take any sequence {xn} ⊂ G. We show that it has a convergent subsequence. Since I is a compact interval, for every δ > 0 there exists a finite set I_F ≡ {ti, i = 1, 2, · · · , m = m(δ)} ⊂ I such that for every t ∈ I there exists a tj ∈ I_F with |t − tj| ≤ δ. Since the set G is equicontinuous, the sequence {xn} is equicontinuous. Thus, for any ε > 0, there exists a δ > 0 such that

sup_{n∈N} ‖xn(t) − xn(s)‖ ≤ ε whenever |t − s| ≤ δ.

Consider the sequence {xn(tj), j = 1, 2, · · · , m}. Since G is a bounded set, each element of this sequence is contained in a bounded subset of R^n. The Bolzano–Weierstrass theorem says that every bounded sequence of vectors in a finite dimensional space has a convergent subsequence. Thus, for every sequence {xn(tj), n ∈ N} there exists a convergent subsequence {x_{n(j)}(tj)}. Let {n(1)} be a subsequence of {n} for which x_{n(1)}(t1) is convergent. Denote by {n(2)} a subsequence of {n(1)} for which x_{n(2)}(t2) is convergent. Carrying out this diagonal process, we obtain a subsequence {n(m)} ⊂ {n} such that along this subsequence every element of the set {xn(tj), j = 1, 2, · · · , m} is convergent. For any t ∈ I, there exists a j ∈ {1, 2, · · · , m} such that |t − tj| ≤ δ. Clearly, the required tj depends on t ∈ I; we indicate this dependence by tj(t). Thus, we have

‖xk(t) − xr(t)‖ ≤ ‖xk(t) − xk(tj(t))‖ + ‖xk(tj(t)) − xr(tj(t))‖ + ‖xr(tj(t)) − xr(t)‖ ≤ 2ε + ‖xk(tj(t)) − xr(tj(t))‖

for any k, r ∈ {n(m)}. Hence,

sup_{t∈I} ‖xk(t) − xr(t)‖ ≤ 2ε + sup_{t∈I} ‖xk(tj(t)) − xr(tj(t))‖.

Letting k, r → ∞ along the subsequence {n(m)}, we obtain

lim_{k,r→∞} ‖xk − xr‖_{C(I,R^n)} ≤ 2ε.

Since ε (> 0) is arbitrary, it follows that from every sequence in G one can extract a subsequence which is a Cauchy sequence in C(I, R^n). Since this is a Banach space with respect to the sup norm topology, there exists an element x ∈ C(I, R^n) to which the sequence {xk} converges. This proves that the set G is relatively compact. Clearly, if every t-section of G is closed, the limit x ∈ G; hence G is compact. This completes the proof. □

Remark 1.9.8 In fact, the Ascoli–Arzelà theorem, as presented above, also holds with a compact metric space (M, d) replacing the compact interval I. For more general results, see [92, p. 153], [108, p. 85].
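A small numerical experiment (an added illustration, not from the text) suggests why equicontinuity cannot be dropped from Theorem 1.9.7: the family f_n(t) = sin(nt) on [0, π] is bounded by 1 but not equicontinuous, and the pairwise sup-distances stay bounded away from zero, so no subsequence can be Cauchy in the sup norm.

```python
import math

ts = [math.pi * k / 2000.0 for k in range(2001)]     # grid on [0, pi]

def sup_dist(n, m):
    # approximate sup-norm distance between sin(n t) and sin(m t)
    return max(abs(math.sin(n * t) - math.sin(m * t)) for t in ts)

min_gap = min(sup_dist(n, m) for n in range(1, 11) for m in range(n + 1, 11))
assert min_gap > 0.5    # every pair stays far apart: no Cauchy subsequence
```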


The space of continuous linear functionals on C(I, R^n) has the representation

ℓ(f) = ∫_I ⟨f(t), μ(dt)⟩, f ∈ C(I, R^n),

where μ is an R^n-valued measure defined on the Borel subsets of the set I. For example, note that for any z ∈ R^n and the Dirac measure δ_s(dt), with mass concentrated at the point s, the element ν_{z,s}(dt) ≡ z δ_s(dt) defines a continuous linear functional on C(I, R^n). Indeed,

ℓ_{z,s}(f) ≡ ∫_I ⟨f(t), ν_{z,s}(dt)⟩ = (f(s), z).

It is obvious that |ℓ_{z,s}(f)| ≤ |z|_{R^n} ‖f‖. Hence, the functional ℓ_{z,s} is a bounded linear (and hence continuous) functional on C(I, R^n). Similarly, for any sequence J ≡ {t_i} ⊂ I and Z ≡ {z_i} ⊂ R^n, the measure ν_{Z,J} ≡ Σ_i z_i δ_{t_i}(dt) defines a vector measure. The linear functional

ℓ_{Z,J}(f) ≡ ∫_I ⟨f(t), ν_{Z,J}(dt)⟩ = Σ_{t_i∈J} (f(t_i), z_i)

defines a bounded linear functional on C(I, R^n) if and only if Σ_{t_i∈J} |z_i| < ∞.

Two interesting facts emerge from this example. One, if J is a continuum and |z_τ|_{R^n} > 0 for all τ ∈ J, then ℓ_{Z,J} defines an unbounded linear functional and hence is not an element of the (topological) dual of C(I, R^n). Similarly, if J is a countable set but Σ |z_i|_{R^n} = ∞, the functional ℓ_{Z,J} fails to define a continuous linear functional. From these elementary observations, we conclude that the (topological) dual of the Banach space C(I, R^n) is the space of countably additive R^n-valued measures having bounded total variation on I, given by

‖μ‖ ≡ sup_Π Σ_{σ∈Π} |μ(σ)|_{R^n},

where Π is any finite partition of the set I by disjoint members from the class of Borel sets B_I of I. The supremum is taken over all such finite partitions. We may denote this dual space by M(I, R^n) or M_ca(I, R^n) to emphasize countable additivity. To emphasize that these are set functions, it is more appropriate to use the notation M_ca(Σ_I, R^n), and to emphasize that they have bounded variation one may


use the symbol M_cabv(Σ_I, R^n). We may use both provided there is no possibility of confusion. Furnished with the norm topology (total variation norm) as defined above, M_ca(Σ_I, R^n) is a Banach space. Note that every g ∈ L1(I, R^n) also induces a countably additive bounded vector measure through the mapping g → μ_g given by

μ_g(σ) ≡ ∫_σ g(t) dt, σ ∈ Σ_I.

Indeed, define

ℓ_g(f) ≡ ∫_I ⟨f(t), μ_g(dt)⟩ = ∫_I ⟨f(t), g(t)⟩ dt.

Clearly, it follows from this expression that for any f ∈ C(I, R^n),

|ℓ_g(f)| ≤ ‖f‖ ∫_I |g(t)|_{R^n} dt < ∞.

This shows that the map e : g → ℓ_g from L1(I, R^n) to C(I, R^n)* ≡ M_ca(Σ_I, R^n) is a continuous embedding, that is, e(L1(I, R^n)) ⊂ M_ca(Σ_I, R^n). Thus, the space of vector measures is much larger than the class of Lebesgue integrable vector valued functions. In the study of optimal controls involving vector measures (as controls), we sometimes need to consider the question of compactness of subsets Γ ⊂ M_ca(Σ_I, R^n). We consider this later in the sequel.
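A discrete vector measure of the kind discussed above is easy to realize concretely. The following sketch (an added illustration; the atoms and the test function are arbitrary) evaluates ℓ_{Z,J}(f) = Σ ⟨f(t_i), z_i⟩ for an R²-valued example and confirms the bound |ℓ_{Z,J}(f)| ≤ ‖f‖ Σ |z_i|, the right factor being the total variation of the measure.

```python
import math

ts = [0.1, 0.35, 0.5, 0.77, 0.9]                     # atoms t_i in I = [0, 1]
zs = [(1.0, -2.0), (0.5, 0.5), (-1.5, 0.0), (2.0, 1.0), (0.0, -0.3)]

def f(t):                                            # continuous test function I -> R^2
    return (math.sin(3.0 * t), math.cos(5.0 * t))

# L(f) = sum_i <f(t_i), z_i>, the action of nu = sum_i z_i delta_{t_i}
L = sum(f(t)[0] * z[0] + f(t)[1] * z[1] for t, z in zip(ts, zs))
total_variation = sum(math.hypot(z[0], z[1]) for z in zs)
sup_norm = max(math.hypot(*f(k / 1000.0)) for k in range(1001))

assert abs(L) <= sup_norm * total_variation + 1e-9
```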

1.10 Banach Fixed Point Theorems

Many problems of mathematical sciences lead to questions of existence, uniqueness, and regularity properties of solutions of equations of the form x = F(x) in appropriate metric spaces depending on the particular application. The questions of existence and uniqueness of solutions of such problems are called fixed point problems. Let X ≡ (X, d) be a complete metric space and suppose F : X → X.

Definition 1.10.1 A point x* ∈ X is said to be a fixed point of the map F if x* = F(x*).

Let Fix(F) ≡ {x ∈ X : x = F(x)} denote the set of fixed points of the map F. If Fix(F) = ∅, the equation x = F(x) has no solution. If the set Fix(F) consists of only a single point in X, then the


equation x = F(x) has a unique solution. Otherwise the equation has multiple solutions. Fixed point theorems are important tools for proving existence of solutions of linear and nonlinear problems. We present here only those used in this book. For motivation, we consider the following example. An input–output model often used in engineering is given by a linear system governed by an integral operator like

y(t) ≡ ∫_0^t K(t, s) u(s) ds, t ≥ 0,

where the input u may be provided by an expression like u(t) = g(t, y(t)) + r(t), t ≥ 0. That is, the input has two components: one is the output feedback through a nonlinear device, and the other is the direct input command. In this case, we obtain an integral equation of the form

y(t) = v(t) + ∫_0^t K(t, s) g(s, y(s)) ds,

where

v(t) = ∫_0^t K(t, s) r(s) ds.

This is a nonlinear Volterra integral equation, characterized by the kernel K and the nonlinear map g. One can formulate it as an abstract fixed point problem y = v + G(y) ≡ F_v(y) in appropriately chosen function spaces. This is precisely the form we presented earlier. We note that the choice of the function space is predominantly determined by the nonlinear operator g and the kernel K, as seen in Ahmed [1]. More general nonlinear integral equations based on infinite Volterra series can be found in Ahmed [17, 21].

Theorem 1.10.2 (Banach Fixed Point Theorem) Let X ≡ (X, d) be a complete metric space, and suppose F : X → X is a contraction in the sense that there exists a number α ∈ (0, 1) such that

d(F(x), F(y)) ≤ α d(x, y) for all x, y ∈ X.

Then F has a unique fixed point in X.


Proof We start with an arbitrary element x0 ∈ X and define the sequence

x_{k+1} ≡ F(x_k), k ∈ N0 ≡ {0, 1, 2, · · · }.    (1.17)

For any p ∈ N0 with p ≥ 1, by repeated application of the triangle inequality, we find that

d(x_{n+p}, x_n) ≤ Σ_{k=n}^{n+p−1} d(x_{k+1}, x_k) ≤ ( Σ_{k=n}^{n+p−1} α^k ) d(x1, x0).    (1.18)

Since 0 < α < 1, it is clear that

Σ_{k=n}^{n+p−1} α^k ≤ α^n Σ_{k=0}^{∞} α^k ≤ α^n / (1 − α).

From (1.17), (1.18) and the above inequality it is clear that, for any fixed p (1 ≤ p < ∞), we have

d(x_{n+p}, x_n) ≤ ( α^n / (1 − α) ) d(x1, x0), and consequently lim_{n→∞} d(x_{n+p}, x_n) = 0.

Thus, {xn} is a Cauchy sequence (and hence bounded). Since (X, d) is a complete metric space, there exists an element x* ∈ X such that xn → x* in the metric d. Now we must show that x* is the unique fixed point of the operator F. Again by use of the triangle inequality, the reader can easily verify that

d(x*, F(x*)) ≤ d(x*, xn) + d(xn, F(x*)) = d(x*, xn) + d(F(x_{n−1}), F(x*)) ≤ d(x*, xn) + α d(x_{n−1}, x*), for all n ≥ 1.

Letting n → ∞, it follows from this inequality that

d(x*, F(x*)) = 0,    (1.19)


which proves that x* is a fixed point of F. The uniqueness is proved by contradiction. Suppose x*, y* are any two fixed points of F. Then d(x*, y*) = d(F(x*), F(y*)) ≤ α d(x*, y*), and since 0 < α < 1, this inequality is false unless x* = y*. This completes the proof. □

Corollary 1.10.3 Suppose X0 is a closed subset of the metric space (X, d) and F : X0 → X0 is a contraction. Then F has a unique fixed point in X0.

Proof The proof is exactly the same. Here, starting from any point x0 ∈ X0, the sequence {xn} constructed above remains in X0. Since X0 is closed, the limit x* ∈ X0. The rest is obvious. □

Corollary 1.10.4 Suppose X0 is a closed subset of the metric space (X, d) and that the n-th iterate (the n-fold composition) of F, denoted by F^n, has the property that F^n : X0 → X0 is a contraction. Then F^n has a unique fixed point in X0 which is also the unique fixed point of the operator F.

Proof Since F^n is a contraction with contraction constant α ∈ (0, 1), it follows from Theorem 1.10.2 that it has a unique fixed point, say x ∈ X0. We show that x is also the unique fixed point of F. Indeed,

d(F(x), x) = d(F(x), F^n(x)) = d(F(F^n(x)), F^n(x)) = d(F^n(F(x)), F^n(x)) ≤ α d(F(x), x).    (1.20)

Since 0 < α < 1, this is impossible unless x is a fixed point of F. Similarly, one can verify that if x, y are two fixed points of F, then

d(x, y) = d(F(x), F(y)) = d(F(F^n(x)), F(F^n(y))) = d(F^n(F(x)), F^n(F(y))) ≤ α d(F(x), F(y)) = α d(x, y).

This is impossible unless x = y, proving uniqueness.

 
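The successive-approximation scheme (1.17) and the a priori error estimate (α^n/(1 − α)) d(x1, x0) from the proof can be observed numerically. The sketch below (an added illustration; the choice of map and starting point is arbitrary) iterates the contraction F(x) = cos x on X = [0, 1], where |F'(x)| = |sin x| ≤ sin 1 < 1 and F maps X into itself.

```python
import math

F = math.cos                   # a contraction on X = [0, 1]
alpha = math.sin(1.0)          # contraction constant: |F'(x)| <= sin(1) < 1 on X
x0 = 0.5
d10 = abs(F(x0) - x0)          # d(x1, x0)

xs = [x0]
for _ in range(100):           # Picard iteration x_{k+1} = F(x_k)
    xs.append(F(xs[-1]))
x_star = xs[-1]                # numerically the fixed point (cos x* = x*)

assert abs(F(x_star) - x_star) < 1e-12
# a priori estimate from the proof: d(x_n, x*) <= alpha^n/(1-alpha) * d(x1, x0)
bounds_ok = all(
    abs(xs[n] - x_star) <= alpha ** n / (1.0 - alpha) * d10 + 1e-12
    for n in range(60)
)
assert bounds_ok
```

The geometric factor α^n in the bound is exactly what makes the iteration practical: the number of correct digits grows linearly with n.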

1.11 Frequently Used Results From Analysis

In this section, we present without proof some important results from functional analysis which are frequently used in this book. For proofs see Dunford [59], Yosida [108], Zeidler [110], and Hewitt–Stromberg [71]. For any Banach space E, we use the symbol E* to denote the continuous (topological) dual of E.


Weak and Weak* Convergence Let X be a Banach space with the first and the second duals denoted by X* and X**, respectively. A sequence {x_n*} ⊂ X* is said to converge weakly to x*, denoted x_n* → x* (weakly), if, for every x** ∈ X**, x**(x_n*) → x**(x*). The sequence {x_n*} ⊂ X* is said to converge in the weak-star (weak*) topology to x*, denoted x_n* → x* (weak*), if, for every x ∈ X, x_n*(x) → x*(x). Since every element x ∈ X induces a continuous linear functional x̂ on X* through the relation x̂(x*) ≡ x*(x), we have the canonical embedding X → X**. Hence, the weak-star topology is weaker than the weak topology.

Definition 1.11.1 A Banach space X is said to be reflexive if X** = X.

For 1 < p < ∞, the Lp spaces are reflexive Banach spaces. Indeed, for (1/p) + (1/q) = 1, (Lp)* = Lq and (Lq)* = Lp. Hence, (Lp)** = Lp, and so these spaces are reflexive. It is well known that a closed bounded subset of a finite dimensional space is compact. Though this is false in infinite dimensional spaces, there is a similar result with respect to weak topologies. This is presented in the following theorem.

Theorem 1.11.2 ([59]) A bounded weakly closed subset of a reflexive Banach space is weakly compact.

It is an amazing fact that the closed convex hull of a compact set inherits the compactness of its parent set. This is stated in the following theorem.

Theorem 1.11.3 The closed convex hull of a weakly compact set is weakly compact.

Under certain (geometric) conditions, the weak and strong closures are equivalent. This is the content of Mazur's theorem.

Theorem 1.11.4 ([59, Mazur's Theorem]) A convex subset of a Banach space is closed if and only if it is weakly closed.

As a result of this theorem, we have the following corollary, known as the Banach–Saks–Mazur theorem.


Corollary 1.11.5 For any weakly convergent sequence in a Banach space, there exists a proper convex combination of the given sequence that converges strongly to the same limit.

In a finite dimensional space, the closed unit ball is compact. An analogous result in the infinite dimensional setting is Alaoglu's theorem.

Theorem 1.11.6 ([59, Alaoglu's Theorem]) The closed unit ball B1(X*) of the dual X* of the Banach space X is weak-star compact.

Remark 1.11.7 (i) Any norm bounded weak-star closed set A ⊂ X* is weak-star compact. (ii) As seen above, B1(X*) is compact in the weak-star topology (i.e., the X topology of X*), but it is not necessarily weakly compact, since the weak-star topology is weaker than the weak topology (i.e., the X** topology of X*).

Definition 1.11.8 Let C be a convex subset of a Banach space X. A point e ∈ C is said to be an extreme point of C if it is not a point in the interior of any nondegenerate line segment in C. In other words, for α ∈ [0, 1] and x1, x2 ∈ C, if e = (1 − α)x1 + αx2, then either α = 0 or α = 1. For any set B, let co B denote the convex hull of B and clco B its closed convex hull.

Theorem 1.11.9 ([59, Krein–Milman Theorem]) For any weakly compact convex set K of a Banach space X, K = clco^w Ext(K), where Ext(K) denotes the set of extreme points of K and the closure is taken in the weak topology. Similarly, for a weak-star compact convex set K of the dual X* of a Banach space X, K = clco^{w*} Ext(K).

Another very important result is the celebrated Dunford–Pettis theorem. This result has been frequently used in the proof of existence of optimal controls.

Theorem 1.11.10 ([59, Dunford–Pettis Theorem IV.8.9, p. 292]) Consider the Banach space L1(I, R^n). A subset B of L1(I, R^n) is relatively weakly sequentially compact if (i) it is bounded in norm and (ii) it is uniformly integrable in the sense that

lim_{λ(σ)→0} ∫_σ f(t) dt = 0 uniformly with respect to f ∈ B,

where λ denotes the Lebesgue measure. Further, if B is also weakly closed, then it is weakly sequentially compact.

This theorem is also a particular case of a more general result that applies to infinite dimensional Banach spaces. Let (Ω, Σ, μ) be any finite measure space and let X be a Banach space such that both X and X* satisfy the Radon–Nikodym property (RNP) (see Definition 1.11.13). Let L1(μ, X) denote the Banach space of Bochner integrable functions on Ω with values in X. A subset B ⊂ L1(μ, X) is relatively weakly compact if, and only if, (i) it is bounded, (ii) it is uniformly integrable, and (iii) for each measurable set Δ ⊂ Ω, the set K(Δ) ≡ {∫_Δ f(ω) μ(dω) : f ∈ B}


is a relatively weakly compact subset of X. For details see Diestel [57, Theorem IV.2.1].

Theorem 1.11.11 ([59, Eberlein–Šmulian Theorem V.6.1, p. 430]) A subset K of a Banach space is weakly compact if, and only if, it is weakly sequentially compact.

The question of characterization of compact sets in the space of vector measures is very important in the study of control problems related to differential equations driven by vector measures. Let Σ_I denote the sigma algebra of subsets of the set I and let M_ca(Σ_I, R^n) denote the space of countably additive bounded vector measures. Furnished with the total variation norm, this is a Banach space. An important result giving necessary and sufficient conditions for weak compactness of a subset of this space is presented in the following theorem.

Theorem 1.11.12 ([57, Bartle–Dunford–Schwartz]) A set Γ ⊂ M_ca(Σ_I, R^n) is relatively weakly compact if and only if (i) Γ is bounded and (ii) there exists a countably additive bounded positive measure ν such that lim_{ν(σ)→0} |μ|(σ) = 0 uniformly with respect to μ ∈ Γ. If in addition Γ is weakly closed, then it is also weakly compact.

This theorem is in fact a special case of a much more general theorem due to Bartle–Dunford–Schwartz [57] that holds for general Banach spaces. We need the concept of the Radon–Nikodym property (RNP) as stated below.

Definition 1.11.13 A Banach space X is said to satisfy the Radon–Nikodym property (RNP) if for every finite measure space (Ω, Σ, μ) and every μ-continuous X-valued vector measure m there exists a g ∈ L1(μ, X) such that

m(σ) = ∫_σ g(ω) μ(dω) for every σ ∈ Σ,

which is briefly described by the expression dm = g dμ.

The general result that holds for a large class of Banach spaces is given by the following theorem.

Theorem 1.11.14 ([57, Bartle–Dunford–Schwartz]) Let E be a Banach space with dual E*. Suppose both E and E* satisfy the Radon–Nikodym property. Then, a set Γ ⊂ M_ca(Σ, E) is relatively weakly compact if, and only if, (i) Γ is bounded, (ii) there exists a countably additive bounded positive measure ν such that lim_{ν(σ)→0} |μ|(σ) = 0 uniformly with respect to μ ∈ Γ, and (iii) for each σ ∈ Σ the set D ≡ {m(σ), m ∈ Γ} is a relatively weakly compact subset of E.

For our purpose, Theorem 1.11.12 is sufficient, since in finite dimensional Banach spaces the additional conditions, such as the RNP and condition (iii), are automatically satisfied.

In the study of differential inclusions, we need some important results from the theory of measurable multifunctions. Let (Ω, Σ, μ) be a finite measure space


and X a separable Banach space, and let Lp(μ, X), 1 ≤ p < ∞, denote the Banach space of strongly measurable X-valued functions having norms which are p-th power Lebesgue integrable. Let 2^X denote the power set of X and let F : Ω → 2^X \ ∅ be a measurable multifunction in the sense that for every open set O ⊂ X the set F−(O) ≡ {ω ∈ Ω : F(ω) ∩ O ≠ ∅} is measurable. In fact, this definition of measurability is equivalent to the measurability of the distance function ω → ρ_x(ω) ≡ d(x, F(ω)) for every x ∈ X, where d is the metric induced by the norm. For details see [27, 73].

Theorem 1.11.15 ([73, Kuratowski–Ryll-Nardzewski], [27]) Let (Ω, Σ) be a measurable space, Y a Polish space (complete separable metric space), c(Y) the class of nonempty closed subsets of Y, and G : Ω → c(Y) a measurable multifunction. Then G has measurable selections; that is, there exist measurable functions g : Ω → Y such that g(ω) ∈ G(ω) for all ω ∈ Ω.

In the study of differential inclusions, sometimes we need selections which are not only measurable but also integrable. Let

S_p(F) ≡ {f ∈ Lp(μ, X) : f(ω) ∈ F(ω), μ-a.e. ω ∈ Ω}

denote the class of Lp-selections of the multifunction F.

Theorem 1.11.16 ([27, 73]) Let X be a separable Banach space and F : Ω → 2^X \ ∅ a graph measurable multifunction in the sense that Gr(F) ≡ {(x, ω) ∈ X × Ω : x ∈ F(ω)} ∈ B(X) × B(Ω). Then, for 1 ≤ p ≤ ∞, the set S_p(F) ≠ ∅ if, and only if, there exists an h ∈ L_p^+(Ω) such that inf{‖x‖_X : x ∈ F(ω)} ≤ h(ω) for a.e. ω ∈ Ω.

Theorem 1.11.17 ([31, 50, Cesari Lower Closure Theorem]) Let (t, x) → F(t, x) be a multifunction, (Borel) measurable in t and upper semicontinuous in x, defined on I × R^n with closed convex values cc(R^n) (the class of nonempty closed convex subsets of R^n). Let {xk} ⊂ C(I, R^n) and {yk} ⊂ L1(I, R^n) be two sequences of functions satisfying yk(t) ∈ F(t, xk(t)) for t ∈ I, and suppose that xk(t) → x0(t) for each t ∈ I and yk → y0 weakly in L1(I, R^n). Then, for almost all t ∈ I, y0(t) ∈ F(t, x0(t)).

Another very interesting result, which has been used in the study of time optimal control problems [70] and relaxation problems arising from non-convexity of the contingent set [27], is concerned with the properties of integrals of measurable set-valued functions. Let k(R^n) denote the class of nonempty compact subsets of R^n


and V : I → k(R^n) a measurable set-valued function. By the integral of such a function one means the set of integrals of all its measurable selections, that is,

∫_I V(t) dt ≡ { ∫_I v(t) dt : v measurable, v(t) ∈ V(t), t ∈ I }.

V (t)dt = I

co V (t)dt, I

and that both these sets are convex and compact. The result as stated above does not hold in infinite dimensional Banach spaces. However, in separable reflexive Banach spaces E, a somewhat similar result holds

V (t)dt =

cl I

clco V (t)dt, I

where V : I −→ cb(E), with cb(E) denoting the class of closed bounded (so weakly compact) subsets of E. This later result, due to Datko (see [55]), has been used in the study of optimal control of nonlinear systems in infinite dimensional spaces [27].

1.12 Bibliographical Notes

Most of the material on real analysis presented in this chapter can be found in any book on real analysis and measure theory, such as Hewitt and Stromberg [71], Royden [92], Munroe [85], Berberian [39], and Halmos [69]. Results related to abstract analysis, including the characterization of the duals of Lp spaces, properties of reflexive Banach spaces, and Banach fixed point theorems for single-valued maps, can be found in Dunford and Schwartz [59] and Yosida [108]. Vector measures are used throughout the book; here only a minor reference to this topic has been made. For a detailed study of vector measures, the classical book is that of Diestel and Uhl, Jr. [57]. For our book, however, such a deep study of vector measures is not essential. For Banach fixed point theorems for multi-valued maps, the most useful references are Hu and Papageorgiou [73] and Zeidler [110]. We present such results in the sequel as and when required.

Chapter 2

Linear Systems

2.1 Introduction

Linear time-invariant systems have the general form

dx(t)/dt ≡ ẋ = Ax(t) + Bu(t) + γ(t),

(2.1)

where x(t) denotes the state of the system at time t, a vector of dimension say n, and A is an n × n square matrix, the system matrix. This matrix characterizes the intrinsic behavior of the uncontrolled system. The n × m matrix B is the control matrix, and u(t) is the control vector or input vector of dimension m representing the external force. The control matrix can be seen as the channel through which communication takes place between the external and internal world. This is the link through which external influence can be brought to bear upon the internal state of the system. Thus, B also determines the extent to which external forces can change the course of evolution of the internal state x. The vector γ denotes the exogenous disturbance. A special case of this system is given by an n-th order differential equation,

ξ^(n) + a_{n−1} ξ^(n−1) + · · · + a1 ξ^(1) + a0 ξ = u,

(2.2)

with constant or time-variant coefficients {ai, i = 0, 1, 2, · · · , n − 1} independent of ξ and its derivatives. Defining {x1 ≡ ξ, x2 ≡ ξ^(1), x3 ≡ ξ^(2), · · · , xn ≡ ξ^(n−1)}, one can write this system in the canonical form

ẋ = Ax + bu,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_2

(2.3)


where

A = ⎛  0    1    0   · · ·    0       ⎞            ⎛ 0 ⎞
    ⎜  0    0    1   · · ·    0       ⎟            ⎜ 0 ⎟
    ⎜  ⋮    ⋮    ⋮            ⋮       ⎟   and  b = ⎜ ⋮ ⎟
    ⎝ −a0  −a1  −a2  · · ·  −a_{n−1}  ⎠            ⎝ 1 ⎠

and u is the scalar input. In many communication and control problems, one is simply interested in the solution ξ corresponding to the scalar control u. In that situation, one describes this system as a single-input single-output (SISO) system. In contrast, we have the general multi-input multi-output (MIMO) systems given by (2.1). Similarly, linear time-variant systems have the general form

dx(t)/dt = A(t)x(t) + B(t)u(t) + γ(t),

(2.4)

where now both the system matrix A and the control matrix B may vary with time. We present a few simple examples of linear systems.

(E1) Suspension Systems The suspension system of a motor vehicle can be modeled by a system of mass, spring, and damper as follows. Consider the static equilibrium position as the zero state and denote by ξ the displacement of the mass m from this rest state. Then, using one of the basic laws of Newtonian mechanics, which states that the algebraic sum of all the forces acting on the body m must equal zero, we have

mξ̈ + bξ̇ + kξ = u.

Defining the state x as x = col(ξ, ξ̇) ≡ col(x1, x2), we can write the system model in the canonical form

(d/dt) ⎛ x1 ⎞   ⎛    0        1    ⎞ ⎛ x1 ⎞   ⎛  0  ⎞
       ⎝ x2 ⎠ = ⎝ −(k/m)   −(b/m)  ⎠ ⎝ x2 ⎠ + ⎝ 1/m ⎠ u.    (2.5)
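The suspension model (2.5) can be simulated directly. The following forward-Euler sketch uses illustrative parameter values (m, b, k and the initial state are arbitrary choices, not values from the text) and shows the damped return to equilibrium under u = 0.

```python
# Forward-Euler integration of (2.5) with u = 0; all parameters illustrative.
m, b, k = 1.0, 0.5, 1.0
x1, x2 = 1.0, 0.0             # initial displacement and velocity
dt, steps = 0.01, 4000        # simulate t in [0, 40]
for _ in range(steps):
    x1, x2 = (x1 + dt * x2,
              x2 + dt * (-(k / m) * x1 - (b / m) * x2))

assert abs(x1) < 1e-2 and abs(x2) < 1e-2   # damped return to equilibrium
```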

(E2) Torsional Motion Suppose one end of a cylindrical beam of length L is attached to a wall perpendicularly, while the other end is free to rotate around its central axis. The torsional (rotational) motion around the central axis is governed by a second order partial differential equation given by

J (∂²φ/∂t²) + (∂²/∂x²)( EI (∂²φ/∂x²) ) = T, x ∈ (0, L), t ≥ 0,    (2.6)


where J is the moment of inertia about the central axis, EI is the modulus of rigidity, and φ is the angular displacement corresponding to the applied torque T. The parameters may be functions of x if the beam is not uniform. By use of modal expansion and retaining only the first n modes, this equation can be written as a system of n second order ordinary differential equations,

J (d²Φ/dt²) + KΦ = T,    (2.7)

where J is the matrix of mass moments of inertia, K is the stiffness matrix, T is an n-vector of torques, and Φ is the vector of the first n modes of angular displacement. Defining the state x ≡ col(Φ, Φ̇), the system can be written in the standard canonical form ẋ = Ax + Bu, where

A \equiv \begin{pmatrix} 0 & I \\ -J^{-1}K & 0 \end{pmatrix} \quad \text{and} \quad B \equiv \begin{pmatrix} 0 \\ J^{-1} \end{pmatrix},   (2.8)

and u = T. This system is a finite dimensional approximation of the partial differential equation (2.6) and is considered adequate for many engineering applications. In case we are only interested in the angular position of the free end of the beam, we obtain a scalar differential equation Jθ̈ + kθ = τ, where J is the mass moment of inertia about the central axis, k is the stiffness coefficient, and τ is the torque.

Here we present a simple example of an impulsive system arising in construction problems. For laying out pipelines for transportation of oil, it is necessary to apply torque to fit one section of the pipe into another. This requires hammering the handle attached to the steel band around the pipe. This produces impulsive torque forcing one section of the pipe into another. So the appropriate dynamic model is given by

J\ddot{\theta} + b\dot{\theta} + k\theta = \tau \equiv \sum_k a_k \delta_{t_k}(dt),

where a_k denotes the size of the instantaneous torque applied at time t_k. Introducing the appropriate matrices {A, B}, dependent on the parameters {J, b, k}, this can be written as dx = Ax dt + Bμ(dt), where μ(dt) ≡ Σ_k a_k δ_{t_k}(dt) is a sum of Dirac measures and a_k denotes the intensity of torque applied by the hammer at time t_k.
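The pipe-laying model above can be simulated directly: between hammer blows the state x = (θ, θ̇) evolves under the homogeneous dynamics, and each atom a_k δ_{t_k} of the measure μ produces a jump of size a_k/J in the angular velocity. The parameter values and blow schedule below are illustrative assumptions, not from the text.

```python
# Sketch of the impulsive model dx = Ax dt + B mu(dt) for the pipe example;
# J, b, k, the horizon, and the blow schedule are illustrative assumptions.
J, b, k = 2.0, 1.0, 8.0
blows = {2.0: 3.0, 4.0: 3.0}          # t_k -> a_k (impulsive torque sizes)

def deriv(x):
    theta, omega = x
    return (omega, -(k / J) * theta - (b / J) * omega)

T, steps = 6.0, 6000
h = T / steps
x, states_after_blow = (0.0, 0.0), []
for i in range(steps):
    d = deriv(x)
    x = (x[0] + h * d[0], x[1] + h * d[1])   # explicit Euler between atoms
    t_next = (i + 1) * h
    for tk, ak in blows.items():
        if abs(t_next - tk) < h / 2:          # atom of mu: velocity jumps by a_k / J
            x = (x[0], x[1] + ak / J)
            states_after_blow.append(x)
# Starting from rest, the state stays at zero until the first blow, which
# instantaneously sets the angular velocity to a_1 / J = 1.5.
```

The solution is right continuous with left limits, exactly as described for the measure driven models in this chapter.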

(E3) System Reliability (Three States) Suppose a (manufacturing) factory can be in any one of the three states {S1, S2, S3}, where state S1 denotes the level of full capacity, S2 denotes the level of partial capacity, and S3 denotes the failed state. We may denote by S(t) the state of the system at time t. Let {λ12, λ13, λ23} denote the transition (failure) rates from state S1 → S2, S1 → S3, and S2 → S3, respectively. Similarly, {μ21, μ31, μ32} denote the transition (repair) rates from states S2 → S1, S3 → S1, and S3 → S2. Let {P1(t), P2(t), P3(t)} denote the probabilities of the system being in states {S1, S2, S3}, respectively, at time t. Then, one can justify that the probability vector P ≡ (P1, P2, P3) satisfies the following differential equation:

\frac{d}{dt}\begin{pmatrix} P_1 \\ P_2 \\ P_3 \end{pmatrix} = \begin{pmatrix} -(\lambda_{12}+\lambda_{13}) & \mu_{21} & \mu_{31} \\ \lambda_{12} & -(\lambda_{23}+\mu_{21}) & \mu_{32} \\ \lambda_{13} & \lambda_{23} & -(\mu_{31}+\mu_{32}) \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ P_3 \end{pmatrix}.   (2.9)

It is interesting to note that the elements of each column of the transition matrix sum to zero, a property that characterizes conservation. For a manufacturer who owns the system, it is important to have an estimate of the average daily production. Let C1 denote the production rate when the system is in state S1, C2 (< C1) denote the production rate when the system is in state S2, and C3 (= 0) denote the production rate in state S3. Define the indicator function

I(S(t) = S) \equiv \begin{cases} 1 & \text{if } S(t) = S, \\ 0 & \text{otherwise}. \end{cases}

Using the indicator function one can express the expected total production over the time period [0, T] as

Q_T \equiv E \int_0^T \{C_1 I(S(t)=S_1) + C_2 I(S(t)=S_2) + C_3 I(S(t)=S_3)\}\, dt = \int_0^T \{C_1 P_1(t) + C_2 P_2(t) + C_3 P_3(t)\}\, dt = \int_0^T \{C_1 P_1(t) + C_2 P_2(t)\}\, dt.   (2.10)
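To make (2.9)-(2.10) concrete, the sketch below builds the generator with illustrative failure/repair rates and production levels (these numbers are assumptions, not from the text), integrates dP/dt = AP by explicit Euler, and accumulates Q_T. Because the columns of A sum to zero, total probability is conserved.

```python
# Sketch of the three-state reliability model (2.9)-(2.10);
# all rates and production levels are illustrative assumptions.
lam12, lam13, lam23 = 0.2, 0.05, 0.1     # failure rates
mu21, mu31, mu32 = 0.5, 0.3, 0.2         # repair rates
A = [[-(lam12 + lam13), mu21,            mu31],
     [lam12,            -(lam23 + mu21), mu32],
     [lam13,            lam23,           -(mu31 + mu32)]]
col_sums = [A[0][j] + A[1][j] + A[2][j] for j in range(3)]   # conservation check

P = [1.0, 0.0, 0.0]          # start at full capacity S1
C = [10.0, 4.0, 0.0]         # production rates C1 > C2 > C3 = 0
T, steps = 30.0, 30000
h = T / steps
Q = 0.0                      # accumulates the integral (2.10)
for _ in range(steps):
    Q += h * (C[0] * P[0] + C[1] * P[1] + C[2] * P[2])
    dP = [A[i][0] * P[0] + A[i][1] * P[1] + A[i][2] * P[2] for i in range(3)]
    P = [P[i] + h * dP[i] for i in range(3)]
# Q is the expected total production over [0, T]; it is strictly below C1 * T
# because the system spends part of the time at reduced or zero capacity.
```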

In general, a multi-state system is given by

\frac{d}{dt} P = AP, \quad P(0) = P_0,   (2.11)

where P is an n-vector of probabilities with Pi(t), i = 1, 2, ⋯, n, denoting the probability that the system is in state Si at time t. The matrix A is the infinitesimal generator of the random process S(t), t ≥ 0, which takes values from the set of

states S ≡ {S1, S2, S3, ⋯, Sn} that constitutes the state space of the production system. For applications to maintainable systems arising in the manufacturing industry, see [26]. It is interesting to note that the sum of the entries of any column of the infinitesimal generator A is zero. This is a unique characteristic of infinitesimal generators of Markov chains. In fact, there is also a strong similarity with Kirchhoff's current law, which states that the algebraic sum of currents in all the branches connected to any node must be equal to zero. Thus, using this basic principle one can easily construct the generator matrix A just by inspection.

Now we consider the main objective of this chapter. We wish to give a complete mathematical representation of solutions of linear time-invariant and time-variant systems, abbreviated TIS and TVS, respectively. First we consider the time-invariant case.

2.2 Representation of Solutions for TIS

First we consider a time-invariant homogeneous (or uncontrolled) system given by the following differential equation with state space R^n,

\dot{x} = Ax, \quad x(0) = x_0 \in R^n.   (2.12)

The solution of this system is given by

x(t) = e^{tA} x_0, \quad t \ge 0,   (2.13)

where the exponential function e^{tA} is given by e^{tA} \equiv \sum_{n=0}^{\infty} (1/n!) t^n A^n. That the solution of Eq. (2.12) is given by the expression (2.13) can be proved in several different ways:

(1) Laplace transform: Let \hat{x} \equiv \hat{x}(s) \equiv \int_0^{\infty} x(t) e^{-st}\, dt denote the Laplace transform of the function x ≡ {x(t), t ≥ 0}, for s ∈ C, where C is the field of complex numbers. Then, it follows from Eq. (2.12) that \hat{x} = (sI - A)^{-1} x_0, which is well defined for Re(s) > ‖A‖. Then, taking the inverse transform one gets x(t) = e^{tA} x_0.

(2) Explicit difference approximation and its limit: Let t = nΔt and define

x_n(t) \equiv x(n\Delta t) = x((n-1)\Delta t) + \Delta t A x((n-1)\Delta t) = (I + \Delta t A)^n x_0 = (I + (t/n)A)^n x_0.

Letting n → ∞ we have

\lim_{n \to \infty} x_n(t) = \lim_{n \to \infty} (I + (t/n)A)^n x_0 = e^{tA} x_0, \quad t \ge 0.

(3) Implicit difference technique and its limit:

x_n(t) \equiv x(n\Delta t) = x((n-1)\Delta t) + \Delta t A x(n\Delta t) = (I - \Delta t A)^{-n} x_0 = (I - (t/n)A)^{-n} x_0.

Letting n → ∞ we have

\lim_{n \to \infty} x_n(t) = \lim_{n \to \infty} (I - (t/n)A)^{-n} x_0 = e^{tA} x_0, \quad t \ge 0.

(4) Successive approximation: Here we integrate the equation, obtaining

x_n(t) = x_0 + \int_0^t A x_{n-1}(s)\, ds, \quad n \in N, \ n \ge 1.   (2.14)

Carrying out this step by step computation n times we arrive at the following expression:

x_n(t) = \sum_{k=0}^{n} \frac{t^k}{k!} A^k x_0,   (2.15)

and then taking the limit we obtain

\lim_{n \to \infty} x_n(t) = e^{tA} x_0, \quad t \ge 0.   (2.16)

The reader can easily verify that {x_n} is a Cauchy sequence in C(I, R^n) and hence has a unique limit, and it is given by the expression (2.13).
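The equivalence of these representations of e^{tA}x0 is easy to test numerically. The sketch below compares the truncated series of method (4) with the explicit and implicit difference limits of methods (2) and (3) on a 2×2 example whose exact solution is known; the matrix, t, and n are illustrative choices.

```python
# Sketch comparing three representations of e^{tA}x0 on a 2x2 example;
# A, x0, t, and n are illustrative choices.
import math

A = [[0.0, 1.0], [-2.0, -3.0]]   # eigenvalues -1 and -2
x0 = [1.0, 0.0]
t, n = 1.0, 100000

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# Method (4): truncated series sum_{k<=N} (t^k / k!) A^k x0.
series, term = x0[:], x0[:]
for k in range(1, 30):
    term = [t / k * c for c in matvec(A, term)]
    series = [series[i] + term[i] for i in range(2)]

# Method (2): explicit difference (I + (t/n)A)^n x0.
expl = x0[:]
for _ in range(n):
    d = matvec(A, expl)
    expl = [expl[i] + (t / n) * d[i] for i in range(2)]

# Method (3): implicit difference (I - (t/n)A)^{-n} x0, via the 2x2 inverse.
hh = t / n
M = [[1.0 - hh * A[0][0], -hh * A[0][1]], [-hh * A[1][0], 1.0 - hh * A[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]
impl = x0[:]
for _ in range(n):
    impl = matvec(Minv, impl)

# Exact solution for this A and x0: x1(t) = 2e^{-t} - e^{-2t}, x2(t) = -2e^{-t} + 2e^{-2t}.
exact = [2 * math.exp(-t) - math.exp(-2 * t), -2 * math.exp(-t) + 2 * math.exp(-2 * t)]
```

The series converges very fast for a fixed t, while the two difference schemes converge only as n grows, reflecting the limits above.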

2.2.1 Classical System Models

Now we consider the classical control system

\dot{x}(t) = Ax(t) + Bu(t) + \gamma(t), \quad t \ge 0, \ x(0) = x_0,   (2.17)

where A is an n × n matrix, B is an n × m matrix, u(t) is the control vector of dimension m, and γ(t) is the disturbance vector of dimension n.

Theorem 2.2.1 Let u ∈ L1(I, R^m) and γ ∈ L1(I, R^n) (Lebesgue integrable functions). Then, an absolutely continuous function φ(t), t ≥ 0, taking values from R^n, is a solution of Eq. (2.17) if and only if φ is given by

\varphi(t) = e^{tA} x_0 + \int_0^t e^{(t-s)A} B u(s)\, ds + \int_0^t e^{(t-s)A} \gamma(s)\, ds, \quad t \ge 0.   (2.18)

Proof The proof is straightforward computation. By taking the derivative one can verify that φ satisfies the first and second identity of Eq. (2.17). Suppose now that a function z satisfies the system Eq. (2.17). We must show that z equals φ as given. Since z satisfies the system Eq. (2.17), we have z(0) = x0 and

\dot{z}(s) - Az(s) = Bu(s) + \gamma(s), \quad \text{a.e. } s > 0.   (2.19)

Multiplying the above equation on the left by the matrix e^{-sA} and integrating over the interval [0, t] we obtain

\int_0^t e^{-sA} \big[ \dot{z}(s) - Az(s) \big] ds = \int_0^t e^{-sA} \big[ Bu(s) + \gamma(s) \big] ds.   (2.20)

Since A commutes with any function of A, the expression on the left of Eq. (2.20) can be written as

\int_0^t e^{-sA} \big[ \dot{z}(s) - Az(s) \big] ds = \int_0^t \frac{d}{ds}\big( e^{-sA} z(s) \big) ds = e^{-tA} z(t) - z(0).   (2.21)

Hence, it follows from Eqs. (2.20) and (2.21) that

e^{-tA} z(t) - z(0) = \int_0^t e^{-sA} \big[ Bu(s) + \gamma(s) \big] ds.   (2.22)

By multiplying this equation on the left by e^{tA} and recalling that z(0) = x0, we arrive at the following expression:

z(t) = e^{tA} z(0) + \int_0^t e^{(t-s)A} \big[ Bu(s) + \gamma(s) \big] ds = e^{tA} x_0 + \int_0^t e^{(t-s)A} \big[ Bu(s) + \gamma(s) \big] ds = \varphi(t).   (2.23)

Since t is arbitrary, this verifies that z coincides with φ. This completes the proof. □
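The variation-of-constants formula (2.18) can also be checked numerically in the scalar case n = m = 1. With the illustrative data a = −1, B = 2, u(s) ≡ 1, γ(s) = s, x0 = 1 (assumptions, not from the text), the closed-form solution of (2.17) is x(t) = 1 + t, and evaluating the integral in (2.18) by the trapezoidal rule reproduces it.

```python
# Sketch evaluating formula (2.18) in the scalar case; a, B, u, gamma, x0
# are illustrative assumptions chosen so that the exact solution is x(t) = 1 + t.
import math

a, B, x0, t, N = -1.0, 2.0, 1.0, 2.0, 100000
h = t / N

def integrand(s):
    # e^{(t-s)A} [B u(s) + gamma(s)] with u(s) = 1 and gamma(s) = s.
    return math.exp(a * (t - s)) * (B * 1.0 + s)

# Trapezoidal approximation of the integral term in (2.18).
integral = (integrand(0.0) + integrand(t)) / 2.0
for i in range(1, N):
    integral += integrand(i * h)
integral *= h

phi_t = math.exp(a * t) * x0 + integral    # right-hand side of (2.18) at time t
```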

2.2.2 Impulsive System Models

A linear time-invariant system subject to impulsive forces can be described by the following differential equation:

dx = Ax\, dt + Bu\, dt + \gamma(dt), \quad x(0) = x_0, \ t \in I,

where γ is a countably additive bounded vector measure with values in R^n and having bounded total variation. First we consider an elementary impulsive force. Suppose an impulsive disturbance occurs at time t0 ≥ 0. That is, the disturbance γ is given by γ(dt) = η δ_{t0}(dt) for some vector η ∈ R^n, where δ_{t0}(dt) denotes the Dirac measure (also known as the delta function), actually a measure concentrated at the point {t0}. Then, the response is given by

x(t) = \begin{cases} e^{tA} x_0 + \int_0^t e^{(t-s)A} Bu(s)\, ds, & 0 \le t < t_0, \\ e^{tA} x_0 + \int_0^t e^{(t-s)A} Bu(s)\, ds + e^{(t-t_0)A} \eta, & t \ge t_0. \end{cases}   (2.24)

This gives rise to a jump in the state at t0 given by Δx(t0) = x(t0) − x(t0−) = η. Note that this solution is right continuous having left limits. A more rigorous way is to use the example (E1) of Chap. 1 as the correct approximation of the delta function. This is given by

f_n(t) = \frac{n}{\sqrt{2\pi}} \exp\big( -(1/2)(n(t - t_0))^2 \big).

For any φ ∈ C(I) ≡ C([0, T]) it is easy to verify that, as n → ∞,

f_n(\varphi) \equiv \int_I \varphi(t) f_n(t)\, dt \longrightarrow \varphi(t_0).

In other words, f_n converges to the Dirac measure (in the sense of distributions). Let M_{ca}(Σ_I, R^n) denote the space of countably additive vector measures defined on the sigma algebra Σ_I and taking values in R^n. Furnished with the total variation norm, this is a Banach space. As seen in the above, using any element γ ∈ M_{ca}(Σ_I, R^n), we can introduce a more general system model given by the following equation:

dx = Ax\, dt + Bu(t)\, dt + \gamma(dt), \quad x(0) = x_0,   (2.25)

which is driven by the vector measure γ. The solution of this equation is given by

x(t) = e^{tA} x_0 + \int_0^t e^{(t-s)A} Bu(s)\, ds + \int_0^t e^{(t-s)A} \gamma(ds), \quad t \in I.   (2.26)

This is a much more general system covering standard impulsive systems. For example, if γ is given by a discrete measure like \gamma(dt) = \sum_{i=1}^{m} a_i \delta_{t_i}(dt), a_i ∈ R^n with \sum \|a_i\| < \infty, one can easily verify that the solution is given by

x(t) = e^{tA} x_0 + \int_0^t e^{(t-s)A} Bu(s)\, ds + \sum_{t_i \le t} e^{(t-t_i)A} a_i, \quad t \in I.   (2.27)

Consider the matrix differential equation

\dot{X}(t) = A(t) X(t), \quad t > 0, \ X(0) = I_n,   (2.52)

where I_n is the identity matrix. The matrix-valued function X(t), t ≥ 0, is called the fundamental solution. Using this fundamental solution we can construct the solution of the initial value problem

\dot{x}(t) = A(t) x(t), \quad x(0) = x_0,   (2.53)

for any x0 ∈ R^n, as

x(t) = X(t) x_0, \quad t \ge 0.   (2.54)

Now we show that Φ(t, s) = X(t)X^{-1}(s), t ≥ s. Consider the matrix differential equation

\dot{Y} = -Y A(t), \quad Y(0) = I_n.   (2.55)

Again this equation has a unique solution Y that is absolutely continuous and bounded in norm on bounded intervals. Considering the product Y(t)X(t), it is easy to verify that (d/dt)(Y(t)X(t)) = 0 for all t ≥ 0. Hence, Y(t)X(t) = I_n for all t ≥ 0. This shows that X, as well as Y, are non-singular matrices and hence X^{-1}(t) = Y(t), t ≥ 0. Then, the transition matrix is given by

\Phi(t, s) = X(t) X^{-1}(s), \quad 0 \le s \le t < \infty.   (2.56)

Hence, the solution of the homogeneous equation ẋ = A(t)x, x(s) = ξ, starting from any time 0 ≤ s ≤ t < ∞, is given by x(t) = Φ(t, s)ξ. The reader can easily verify that the transition operator Φ(t, s), s ≤ t < ∞, satisfies the following properties:

(P1) Φ(t, t) = I_n.
(P2) Φ(t, θ)Φ(θ, s) = Φ(t, s), s ≤ θ ≤ t, called the evolution property.
(P3) ∂Φ(t, s)/∂t = A(t)Φ(t, s).
(P4) ∂Φ(t, s)/∂s = −Φ(t, s)A(s).
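The construction Φ(t, s) = X(t)X^{-1}(s) and the evolution property (P2) can be illustrated numerically. For the scalar system ẋ = a(t)x with a(t) = t (an illustrative choice, not from the text), the fundamental solution is X(t) = exp(t²/2), so Φ(2, 0) = e².

```python
# Sketch of the fundamental solution (2.52) and transition operator (2.56)
# for a scalar time-varying system; a(t) = t is an illustrative choice.
import math

def a(t):
    return t

def X(t, steps=20000):
    # RK4 integration of Xdot = a(t) X, X(0) = 1 (the scalar version of (2.52)).
    x, h = 1.0, t / steps
    for i in range(steps):
        ti = i * h
        k1 = a(ti) * x
        k2 = a(ti + h / 2) * (x + h / 2 * k1)
        k3 = a(ti + h / 2) * (x + h / 2 * k2)
        k4 = a(ti + h) * (x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def Phi(t, s):
    return X(t) / X(s)       # formula (2.56) in the scalar case

p_20 = Phi(2.0, 0.0)                  # closed form: exp((4 - 0)/2) = e^2
p_21, p_10 = Phi(2.0, 1.0), Phi(1.0, 0.0)
```

The product p_21 * p_10 reproduces p_20, which is the evolution property (P2).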

2.3.1 Classical System Models

We consider the linear time-variant system subject to control and other external forces:

\dot{x}(t) = A(t)x(t) + B(t)u(t) + \gamma(t), \quad t > s, \ x(s) = \xi.   (2.57)

Theorem 2.3.4 Suppose the elements of the matrix A are locally integrable and those of B are essentially bounded, and let u and γ be integrable functions. Then, a continuous function φ(t), t ≥ 0, taking values from R^n, is a solution of Eq. (2.57) if and only if φ is given by

\varphi(t) = \Phi(t, s)\xi + \int_s^t \Phi(t, \theta) B(\theta) u(\theta)\, d\theta + \int_s^t \Phi(t, \theta) \gamma(\theta)\, d\theta, \quad t \ge s.   (2.58)

Proof The proof is very similar to that of the time-invariant system (2.17). Clearly, letting t ↓ s in Eq. (2.58), it follows from property (P1) of the transition operator Φ that lim_{t↓s} φ(t) = ξ. Differentiating φ and using the properties (P1) and (P3) of the transition operator Φ, we obtain

\dot{\varphi} = A(t)\Big( \Phi(t, s)\xi + \int_s^t \Phi(t, \theta)\big[ B(\theta)u(\theta) + \gamma(\theta) \big] d\theta \Big) + \big[ B(t)u(t) + \gamma(t) \big] = A(t)\varphi(t) + B(t)u(t) + \gamma(t).   (2.59)

Thus, the function φ as defined by Eq. (2.58) satisfies the initial value problem (2.57), which proves sufficiency. For the necessary condition, let x be an absolutely continuous function that satisfies the system Eq. (2.57). We show that x equals φ as defined by Eq. (2.58). If x satisfies Eq. (2.57), then \dot{x}(\theta) - A(\theta)x(\theta) = B(\theta)u(\theta) + \gamma(\theta), a.e. s < θ ≤ t < ∞, and hence for almost all θ in the interval indicated, we have

\Phi(t, \theta)\big[ \dot{x}(\theta) - A(\theta)x(\theta) \big] = \Phi(t, \theta)\big[ B(\theta)u(\theta) + \gamma(\theta) \big], \quad s \le \theta \le t.   (2.60)

Integrating this over the interval [s, t] we obtain

\int_s^t \Phi(t, \theta)\big[ \dot{x}(\theta) - A(\theta)x(\theta) \big] d\theta = \int_s^t \Phi(t, \theta)\big[ B(\theta)u(\theta) + \gamma(\theta) \big] d\theta, \quad t \ge s.   (2.61)

Using the property (P4), the reader can easily verify that

\int_s^t \Phi(t, \theta)\big[ \dot{x}(\theta) - A(\theta)x(\theta) \big] d\theta = \int_s^t \frac{\partial}{\partial\theta}\{\Phi(t, \theta)x(\theta)\}\, d\theta = \Phi(t, t)x(t) - \Phi(t, s)x(s) = x(t) - \Phi(t, s)\xi.   (2.62)

Clearly, it follows from the last two equations that

x(t) = \Phi(t, s)\xi + \int_s^t \Phi(t, \theta)\big[ B(\theta)u(\theta) + \gamma(\theta) \big] d\theta.   (2.63)

This shows that if x is a solution of the system Eq. (2.57), then it must be given by the expression (2.58). This completes the proof. □

Continuous Dependence of Solution on Data We demonstrate here that the solution is continuously dependent on all the data, such as the initial state, the input, and even the system matrices. Consider the system

\dot{x}(t) = A(t)x(t) + f(t), \quad x(s) = \xi, \ t \in I \equiv [s, T],   (2.64)

and let {x(ξ, f, A)(t), t ∈ I} denote its solution. Let M(n × n) denote the space of n × n matrices considered as linear operators in R^n and denoted by L(R^n).

Theorem 2.3.5 Consider the system (2.64) with two sets of data {ξ, f, A} and {η, g, Ã}, both belonging to R^n × L1(I, R^n) × L1(I, L(R^n)). Then, there exists a constant α > 0 such that

\| x(\xi, f, A) - x(\eta, g, \tilde{A}) \|_{C(I, R^n)} \le \alpha \Big( \|\xi - \eta\|_{R^n} + \int_I \|f(t) - g(t)\|_{R^n}\, dt + \int_I \|A(t) - \tilde{A}(t)\|\, dt \Big).   (2.65)

Proof The proof is based on the Grönwall Lemma 2.3.1. Let x1 ≡ x(ξ, f, A), x2 ≡ x(η, g, Ã) ∈ C(I, R^n) denote the solutions of Eq. (2.64) corresponding to the data indicated. Define φ(t) ≡ ‖x1(t) − x2(t)‖_{R^n}. Then, it is easy to verify that for each t ∈ I ≡ [s, T],

\varphi(t) \le \|\xi - \eta\| + \int_I \|f(t) - g(t)\|_{R^n}\, dt + C \int_I \|A(t) - \tilde{A}(t)\|\, dt + \int_s^t h(\tau)\varphi(\tau)\, d\tau,   (2.66)

where C = ‖x1‖_{C(I, R^n)} and h(t) ≡ ‖Ã(t)‖_{R^{n×n}}. Hence, by the Grönwall inequality, we obtain the result as stated with

\alpha \equiv \max\{1, C\} \exp\Big( \int_s^T \|\tilde{A}(\tau)\|\, d\tau \Big).

This completes the proof. □

It follows from the above result that for every given set of data {ξ, f, A} ∈ R^n × L1([s, T], R^n) × L1([s, T], R^{n×n}), the system (2.64) has a unique solution x ∈ C(I, R^n). For more details, see [2, Chapter 2].

2.3.2 Measure Driven System Models

Here we consider linear systems driven by vector measures. As stated earlier, the class of impulsive systems is a subclass of the class of measure driven systems. Denote by M_{ca}(Σ_I, R^n) the space of countably additive measures defined on the sigma algebra Σ_I of subsets of the set I with values in R^n having bounded total variation. The total variation norm is defined as follows. Let Δ ∈ Σ_I, and let Π denote any finite family of disjoint Σ_I measurable partitions of the set Δ. The variation of any μ ∈ M_{ca}(Σ_I, R^n) on the set Δ is given by

|\mu|(\Delta) \equiv \sup_{\Pi} \sum_{\sigma \in \Pi} \|\mu(\sigma)\|_{R^n},

where the summation is taken over the family Π and the supremum is taken over all such partitions. The norm of the measure μ is then given by ‖μ‖ ≡ |μ|(I) (the total variation). With respect to this norm, M_{ca}(Σ_I, R^n) is a Banach space. For details on vector measures see [57]. A particular example is the sum of weighted Dirac measures, also called discrete measures,

\mu(\Delta) \equiv \sum_{t_i \in \Delta} v_i \delta_{t_i}(\Delta),

where v_i ∈ R^n and δ_{t_i}(Δ) = 1 if t_i ∈ Δ, otherwise zero. The total variation of this measure on Δ is given by

|\mu|(\Delta) \equiv \sum_{t_i \in \Delta} \|v_i\|_{R^n}.
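For a purely atomic (discrete) vector measure, the supremum in the definition of |μ| is attained by any partition that separates the atoms, so the total variation on Δ is simply the sum of ‖v_i‖ over the atoms in Δ. The sketch below checks this, together with the inequality ‖μ(I)‖ ≤ |μ|(I), on an illustrative measure in R² (the atoms and weights are assumptions, not from the text).

```python
# Sketch of the total variation of a discrete vector measure in R^2;
# the atoms t_i and weights v_i are illustrative assumptions.
import math

atoms = {0.5: (3.0, 4.0), 1.0: (0.0, -2.0), 2.5: (1.0, 0.0)}   # t_i -> v_i

def mu(interval):
    # Value of the measure on a subinterval Delta = [a, b] of I.
    a, b = interval
    total = [0.0, 0.0]
    for ti, vi in atoms.items():
        if a <= ti <= b:
            total = [total[0] + vi[0], total[1] + vi[1]]
    return total

def total_variation(interval):
    # |mu|(Delta) = sum of ||v_i|| over atoms t_i in Delta.
    a, b = interval
    return sum(math.hypot(vi[0], vi[1]) for ti, vi in atoms.items() if a <= ti <= b)

val_I = mu((0.0, 3.0))                # mu(I) = (4, 2): weights can cancel
tv_I = total_variation((0.0, 3.0))    # |mu|(I) = 5 + 2 + 1 = 8: norms add up
```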

As noted earlier, for measure driven systems the solutions may not be continuous. Let B∞(I, R^n) denote the class of bounded measurable functions defined on the interval I and taking values in R^n. Furnished with the supnorm topology sup{‖x(t)‖, t ∈ I}, B∞(I, R^n) is a Banach space. We will see that this is the natural space for solutions of measure driven systems. Here we consider systems of the form

dx = A(t)x\, dt + B(t)u(t)\, dt + \gamma(dt), \quad x(0) = x_0, \ t \in I \equiv [0, T].   (2.67)

For convenience of notation let M(n × m), n, m ∈ N, denote the class of n × m matrices with real entries. These are bounded linear operators from R^m to R^n, denoted by L(R^m, R^n), so we may use these notations interchangeably.

Theorem 2.3.6 Suppose A ∈ L1(I, M(n × n)), B ∈ L∞(I, M(n × m)), u ∈ L1(I, R^m), and γ ∈ M_{ca}(Σ_I, R^n). Then, for every initial state x0 ∈ R^n, the system (2.67) has a unique solution x ∈ B∞(I, R^n).

Proof Let Φ(t, s), 0 ≤ s ≤ t ≤ T, denote the transition operator corresponding to the matrix-valued function A(t), t ∈ I. Then, the solution of Eq. (2.67) is given by the following expression:

x(t) = \Phi(t, 0)x_0 + \int_0^t \Phi(t, s)B(s)u(s)\, ds + \int_0^t \Phi(t, s)\gamma(ds), \quad t \in I.   (2.68)

Unlike the classical models, the solution x may not be continuous. However, it is easy to verify that x ∈ B∞(I, R^n). Since A is integrable over the finite interval I, there exists a finite positive number M such that the corresponding transition operator satisfies sup{‖Φ(t, s)‖_{M(n×n)}, 0 ≤ s ≤ t ≤ T} ≤ M. Under the given assumption, B ∈ L∞(I, M(n × m)) ≡ L∞(I, L(R^m, R^n)), and hence there exists a finite positive number b such that ess-sup{‖B(t)‖_{M(n×m)}, t ∈ I} ≤ b. Hence, taking the norm on either side of the expression (2.68) and using the triangle inequality, we find that

\|x(t)\|_{R^n} \le M \Big( \|x_0\|_{R^n} + b \int_0^t \|u(s)\|_{R^m}\, ds + \int_0^t |\gamma|(ds) \Big).   (2.69)

Since u ∈ L1(I, R^m) and the measure γ has bounded variation, it follows from the above expression that

\sup\{\|x(t)\|_{R^n}, t \in I\} \le M \big( \|x_0\|_{R^n} + b\,\|u\|_{L_1(I, R^m)} + \|\gamma\| \big),

where ‖γ‖ ≡ ‖γ‖_{M_{ca}(Σ_I, R^n)} ≡ |γ|(I) denotes the total variation norm. This completes the proof. □

Remark 2.3.7 It is important to note that the above result also holds for more general systems of the following form, where both the controls and other external disturbances may be given by vector measures; hence, this class also contains the impulsive systems:

dx = A(t)x\, dt + B(t)u(dt) + \gamma(dt), \quad x(0) = x_0, \ t \in I.   (2.70)

Here u ∈ M_{ca}(Σ_I, R^m). The solution of this equation is given by

x(t) = \Phi(t, 0)x_0 + \int_0^t \Phi(t, s)B(s)u(ds) + \int_0^t \Phi(t, s)\gamma(ds), \quad t \in I.   (2.71)

Under the stated assumption the control measure u has bounded total variation, and hence in this case as well we have the inequality

\|x\|_{B_\infty(I, R^n)} \le M \big( \|x_0\| + b\,\|u\|_{M_{ca}(\Sigma_I, R^m)} + \|\gamma\|_{M_{ca}(\Sigma_I, R^n)} \big).

Hence, the solution x ∈ B∞(I, R^n).

Example An example of a system represented by Eq. (2.70) is found in inventory control problems. A store owner sells n distinct items of goods. Let x ≡ (I1, I2, ⋯, In) denote the vector of inventory levels of these goods. A simple linear model for this system is given by

\frac{d}{dt} I_i = -\gamma_i I_i - \sum_{j \ne i} a_{ij} I_j + b_i u_i, \quad i = 1, 2, \cdots, n,

where γi is the rate of demand of the i-th good independent of other goods, aij is the demand of the same good induced by the demand of the j -th good, and ui is the supply rate with bi ∈ [0, 1] representing the spoilage factors. The coefficients γi ≥ 0, aij ≥ 0, while aij = 0 if the sale of the j -th good has no influence on the demand of the i-th good. The goods are supplied to the merchant at a set of

instants of time {t_k}_{k=1}^m on demand. Thus, the control u is purely impulsive and can be represented by

u_i(dt) \equiv \sum_{k=1}^{m} u_{ki} \delta_{t_k}(dt), \quad \text{with } u_k \equiv (u_{k1}, u_{k2}, \cdots, u_{kn}),

where δ_s(J) = 1 if s ∈ J and zero otherwise. Clearly, this system can be written in the canonical form dx = Ax dt + Bu(dt).

Continuous Dependence of Solutions Here we verify that the solution is continuous with respect to the given data.

Corollary 2.3.8 Suppose A and B satisfy the assumptions of Theorem 2.3.4. Then, the solutions of the system (2.70) are Lipschitz continuous with respect to the initial data and the input measures.

Proof The proof follows from simple computation. Consider the system (2.70) subject to data (ξ, u, γ) ∈ R^n × M_{ca}(Σ_I, R^m) × M_{ca}(Σ_I, R^n) and (η, v, ν) ∈ R^n × M_{ca}(Σ_I, R^m) × M_{ca}(Σ_I, R^n), respectively. Then

x(\xi, u, \gamma)(t) = \Phi(t, 0)\xi + \int_0^t \Phi(t, s)B(s)u(ds) + \int_0^t \Phi(t, s)\gamma(ds), \quad t \in I,   (2.72)

x(\eta, v, \nu)(t) = \Phi(t, 0)\eta + \int_0^t \Phi(t, s)B(s)v(ds) + \int_0^t \Phi(t, s)\nu(ds), \quad t \in I.   (2.73)

Then, subtracting Eq. (2.73) from Eq. (2.72), it follows from the triangle inequality that

\sup\{\|x(\xi, u, \gamma)(t) - x(\eta, v, \nu)(t)\|, t \in I\} \le C \big( \|\xi - \eta\|_{R^n} + \|u - v\|_{M_{ca}(\Sigma_I, R^m)} + \|\gamma - \nu\|_{M_{ca}(\Sigma_I, R^n)} \big),   (2.74)

where C = M max{b, 1}. This completes the proof. □
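The inventory model is easy to simulate for a single good: between delivery instants the stock decays under demand, and each atom of the control measure adds a jump b_i u_k. All numerical values below (demand rate, spoilage factor, delivery schedule, horizon) are illustrative assumptions, not from the text.

```python
# Sketch of the impulsively resupplied inventory model (one good, n = 1);
# all numerical values are illustrative assumptions.
gamma1, b1 = 0.5, 0.9                # demand rate and spoilage factor
deliveries = {1.0: 2.0, 3.0: 2.0}    # t_k -> u_k (amounts supplied)

T, steps = 5.0, 10000
h = T / steps
I1, n_deliveries = 4.0, 0            # initial stock level
for i in range(steps):
    I1 += h * (-gamma1 * I1)              # continuous demand between atoms
    t_next = (i + 1) * h
    for tk, uk in deliveries.items():
        if abs(t_next - tk) < h / 2:      # atom of the control measure u(dt)
            I1 += b1 * uk                 # stock jumps by b1 * u_k
            n_deliveries += 1
final_stock = I1
# Closed form: 4 e^{-2.5} + 1.8 e^{-2} + 1.8 e^{-1}, roughly 1.23.
```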

2.3.3 Measure Induced Structural Perturbation

Many physical systems may experience and undergo impulsive changes in their structural configuration or parameter values. This class of systems can be described by the following differential equation:

dx(t) = A_0(t)x(t)\, dt + A_1(dt)x(t), \quad x(0) = x(0+) = \xi, \ t \in I.   (2.75)

In some applications, such as an aircraft facing a sudden wind shear, the system may have to activate its flaps and ailerons to control and stabilize its flight path on very short notice. A person on the skating rink maintains his balance by spreading out his hands and limbs in appropriate directions as required. A cat falling from a tree lands on its legs by maneuvering its limbs. In management problems one may be required to make structural changes in the original system in order to improve efficiency and optimize management cost. We wish to develop sufficient conditions that guarantee the existence of a transition operator corresponding to the above model (2.75) and the existence of solutions of the above equations corresponding to any given initial state ξ ∈ R^n.

Theorem 2.3.9 Suppose A0 ∈ L1(I, M(n × n)) and A1 ∈ M_{ca}(Σ_I, M(n × n)) (a matrix or operator valued measure). Then, for every ξ ∈ R^n, Eq. (2.75) has a unique solution x ∈ B∞(I, R^n), and there exists a measurable transition operator Φ(t, s), 0 ≤ s ≤ t ≤ T, so that

x(t) = \Phi(t, 0)\xi.   (2.76)

Proof Since A0 ∈ L1(I, M(n × n)), it has a unique transition operator that we denote by Φ0(t, s). Thus, we can write the differential equation (2.75) as an integral equation

x(t) = \Phi_0(t, 0)\xi + \int_0^t \Phi_0(t, s)A_1(ds)x(s), \quad t \in I.   (2.77)

We consider the general situation. Let x(s+) = ξ be given; then we may write the system as an integral equation starting from time s ∈ (0, T) as follows:

x(t) = \Phi_0(t, s)\xi + \int_s^t \Phi_0(t, \tau)A_1(d\tau)x(\tau), \quad t \in (s, T].   (2.78)

Thus, for the proof of existence of a measurable transition operator for the system (2.75), it suffices to prove the existence of a unique (measurable) solution x = x(·, s, ξ) ∈ B∞((s, T], R^n) of the integral equation (2.78) and that the map ξ → x(·, s, ξ) is a bounded linear map from R^n to B∞((s, T], R^n). For any σ ∈ Σ_I, define

\mu_1(\sigma) = |A_1|(\sigma) \equiv \sup_{\Pi} \sum_{J \in \Pi} \|A_1(\sigma \cap J)\|_{L(R^n, R^n)},   (2.79)

where Π denotes any partition of the interval I into a finite number of disjoint members from Σ_I. The supremum is taken over all such finite partitions. Since A1 is a countably additive bounded vector measure having bounded total variation, μ1

is a countably additive bounded positive measure. Define the operator G by

(Gf)(t) \equiv \Phi_0(t, s)\xi + \int_s^t \Phi_0(t, r)A_1(dr)f(r), \quad t \in [s, T].   (2.80)

Since Φ0 is continuous in both the variables and A1 ∈ M_{ca}(Σ_I, M(n × n)), clearly, for every f ∈ B∞([s, T], R^n), the function

t \longrightarrow \int_s^t \Phi_0(t, r)A_1(dr)f(r), \quad t \in [s, T],

is measurable and uniformly bounded, that is, it is an element of B∞([s, T], R^n). Hence, for every f ∈ B∞([s, T], R^n), (Gf) ∈ B∞([s, T], R^n). Thus, G maps B∞([s, T], R^n) into itself. We prove that G has a unique fixed point in B∞([s, T], R^n). For x, y ∈ B∞([s, T], R^n) with x(s+) = y(s+) = ξ, we have

(Gx)(t) - (Gy)(t) = \int_s^t \Phi_0(t, r)A_1(dr)(x(r) - y(r)), \quad t \in [s, T].

Taking the norm on either side of the above expression, using the positive measure μ1 given by the expression (2.79), and using the triangle inequality, we obtain

\|(Gx)(t) - (Gy)(t)\| \le M_0 \int_s^t \|x(r) - y(r)\|\, \mu_1(dr), \quad t \in [s, T].   (2.81)

Define d_t(x, y) ≡ sup{‖x(r) − y(r)‖, s ≤ r ≤ t}, t ∈ I, and set d(x, y) = d_T(x, y). Since B∞([s, T], R^n) is a Banach space, furnished with the above metric topology, it is a complete metric space. Using this metric and the inequality (2.81), and the fact that t → d_t(x, y) is a monotone non-decreasing function, the reader can easily verify that

d_t(Gx, Gy) \le M_0 \int_s^t d_r(x, y)\, \mu_1(dr), \quad t \in [s, T].   (2.82)

Define α(t) ≡ \int_0^t \mu_1(ds), t ∈ I. Since μ1 is a bounded positive measure, the function α is a positive monotone (non-decreasing) right continuous function of its argument having bounded total variation. A function of bounded variation is differentiable almost everywhere and the derivative is Lebesgue integrable. Thus, there exists a nonnegative continuous monotone increasing function of bounded

variation, α0(t), t ∈ I, such that 0 ≤ α(t) ≤ α0(t), t ∈ I, and 0 ≤ α̇(t) < α̇0(t), a.e. t ∈ I. Thus, the inequality (2.82) is equivalent to the following inequality:

d_t(Gx, Gy) \le M_0 \int_s^t d_r(x, y)\, d\alpha(r) \le M_0 \int_s^t d_r(x, y)\, d\alpha_0(r), \quad t \in [s, T].   (2.83)

Using this inequality and repeated substitution of this into itself (after two iterations) we obtain

d_t(G^2 x, G^2 y) \le M_0 \int_s^t d_r(Gx, Gy)\, d\alpha_0(r) \le M_0^2\, d_t(x, y) \int_s^t \alpha_0(r)\, d\alpha_0(r) \le (M_0^2/2)\, \alpha_0^2(t)\, d_t(x, y), \quad t \in [s, T].

The last identity follows from the fact that α0 is a (positive) non-decreasing continuous function. After m iterations we arrive at the following inequality:

d_t(G^m x, G^m y) \le \frac{M_0^m}{\Gamma(m+1)} (\alpha_0(t))^m\, d_t(x, y).   (2.84)

Using the metric d, it follows from the above inequality that

d(G^m x, G^m y) \le \frac{M_0^m}{\Gamma(m+1)} (\alpha_0(T))^m\, d(x, y).   (2.85)

Since α0(T) < ∞, it follows from this estimate that for m sufficiently large, say m0, the operator G^{m_0} is a contraction on the metric space (B∞([s, T], R^n), d). In other words,

M_0^{m_0} (\alpha_0(T))^{m_0} / \Gamma(m_0 + 1) \equiv \rho < 1 \quad \text{and} \quad d(G^{m_0}x, G^{m_0}y) \le \rho\, d(x, y) \ \ \forall \ x, y \in B_\infty([s, T], R^n).

Hence, by the Banach fixed point theorem, G^{m_0} has a unique fixed point, say x^o ∈ B∞([s, T], R^n), that is, x^o = G^{m_0} x^o. Therefore,

d(x^o, Gx^o) \le d(G^{m_0} x^o, G(G^{m_0} x^o)) = d(G^{m_0} x^o, G^{m_0}(Gx^o)) \le \rho\, d(x^o, Gx^o).

Since ρ < 1, this inequality is true if, and only if, d(x^o, Gx^o) = 0. Thus, x^o is also the unique fixed point of the operator G. Hence, we have

x^o(t, s, \xi) = (Gx^o)(t, s, \xi) \equiv \Phi_0(t, s)\xi + \int_s^t \Phi_0(t, r)A_1(dr)x^o(r, s, \xi), \quad t \in [s, T].   (2.86)

Clearly, the solution x^o is bounded on every finite interval. Using the generalized Grönwall inequality due to Ahmed [7, Lemma 5, p. 268] one can easily verify that

\|x^o(t, s, \xi)\| \le M_0 \|\xi\| \exp\Big( M_0 \int_s^t \mu_1(dr) \Big) \quad \forall \ t \in [s, T].   (2.87)

We leave it as an easy exercise for the reader to verify that for any β, γ ∈ R and any ξ, η ∈ R^n,

x^o(t, s, \beta\xi + \gamma\eta) = \beta x^o(t, s, \xi) + \gamma x^o(t, s, \eta), \quad t \in [s, T].   (2.88)

Thus, the map ξ → x^o(t, s, ξ), t ≥ s, is linear, and it follows from Eq. (2.87) that it is also bounded. Measurability of the map t → x^o(t, s, ξ) follows from the fact that x^o ∈ B∞([s, T], R^n). Hence, there exists a bounded measurable operator valued function, or equivalently an M(n × n) valued function Φ(t, s), 0 ≤ s ≤ t ≤ T, such that x^o(t, s, ξ) = Φ(t, s)ξ. The uniqueness of solutions implies that

\Phi(t, s) = \Phi(t, r)\Phi(r, s), \quad 0 \le s \le r \le t < \infty.   (2.89)

This proves the existence and uniqueness of a bounded measurable transition operator. The existence and uniqueness of a solution x^o for the initial value problem (2.75) now follow as a special case for s = 0+. This completes the proof. □

Remark 2.3.10 Note that the solutions are not only elements of the Banach space B∞(I, R^n) (bounded measurable functions), they also have bounded total variation. In other words, the solutions are elements of the space BV(I, R^n).

Example We present here two interesting examples of the operator valued measure A1(·):

A_1(\sigma) \equiv \sum_{k=1}^{\infty} C_k \delta_{t_k}(\sigma), \quad C_k \in M(n \times n),   (2.90)

A_1(\sigma) \equiv \int_\sigma K(s)\beta(ds) + \int_\sigma L(s)\, ds,   (2.91)

for any σ ∈ Σ_I and any countably additive measure β having bounded total variation, with K ∈ L1(β, M(n × n)) and L ∈ L1(dt, M(n × n)). Clearly, the first example is equivalent to the classical model described by

\dot{x} = A_0(t)x(t), \quad x(0+) = \xi, \ t \in I \setminus D,
\Delta x(t_k) = C_k x(t_k), \quad t_k \in D \equiv \{t_k, k = 1, 2, \cdots\} \subset I.
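In the scalar case the transition operator generated by {A0, A1} with a purely atomic A1 (the first example above) can be written down explicitly: an exponential flow between atoms multiplied by a jump factor at each atom. The sketch below uses the jump convention x(t_k) = (1 + c_k)x(t_k−), which is one way to resolve the atoms and is stated here as an assumption; all parameter values are illustrative.

```python
# Sketch of the scalar structurally perturbed system dx = a0 x dt + A1(dt) x
# with atomic A1; the jump convention x(tk) = (1 + ck) x(tk-) and all
# numerical values are illustrative assumptions.
import math

a0 = -0.2
jumps = {1.0: 0.5, 2.0: -0.25}       # t_k -> c_k

def Phi(t, s):
    # Exponential flow between atoms times the jump factors located in (s, t].
    val = math.exp(a0 * (t - s))
    for tk, ck in jumps.items():
        if s < tk <= t:
            val *= (1.0 + ck)
    return val

xi = 3.0
x_T = Phi(3.0, 0.0) * xi             # solution x(t) = Phi(t, 0) xi as in (2.76)
```

The evolution property (2.89) holds because the atoms in (s, r] and (r, t] together are exactly those in (s, t].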

2.3.4 Measure Driven Control Systems

Here we consider a general class of structurally perturbed linear control systems given by

dx = A_0(t)x\, dt + A_1(dt)x(t) + B(t)u(dt) + f(t)\, dt, \quad x(0+) = \xi, \ t \in I.   (2.92)

We prove the following result.

Theorem 2.3.11 Let A0, A1 satisfy the assumptions of Theorem 2.3.9 and the operator valued function B ∈ C(I, M(n × d)). Then, for every ξ ∈ R^n, control measure u ∈ M_{ca}(Σ_I, R^d), and f ∈ L1(I, R^n), Eq. (2.92) has a unique solution x ∈ B∞(I, R^n); further, the map (ξ, u, f) → x(·, ξ, u, f) from R^n × M_{ca}(Σ_I, R^d) × L1(I, R^n) to B∞(I, R^n) is bounded and Lipschitz continuous.

Proof Under the given assumptions, it follows from the previous Theorem 2.3.9 that the pair {A0, A1} generates a unique bounded measurable transition operator Φ(t, s), 0 ≤ s ≤ t ≤ T. Hence, there exists a finite positive number M such that sup{‖Φ(t, s)‖, 0 ≤ s ≤ t ≤ T} ≤ M.

Using this transition operator we can express the solution explicitly as follows:

x(t) = \Phi(t, 0)\xi + \int_0^t \Phi(t, r)B(r)u(dr) + \int_0^t \Phi(t, r)f(r)\, dr, \quad t \in I.   (2.93)

One can verify, as in Theorem 2.3.9, that this is the only solution of Eq. (2.92) and that it belongs to B∞(I, R^n). For the Lipschitz continuity, let x_i ∈ B∞(I, R^n) denote the solution corresponding to the data (ξ_i, u_i, f_i) ∈ R^n × M_{ca}(Σ_I, R^d) × L1(I, R^n), i = 1, 2. Then, one can easily verify that

\|x_1(t) - x_2(t)\| \le M \Big( \|\xi_1 - \xi_2\| + \int_0^t \|B(s)\|\, |u_1 - u_2|(ds) + \int_0^t \|f_1(s) - f_2(s)\|\, ds \Big), \quad t \in I.   (2.94)

Hence, we have

\|x_1 - x_2\|_{B_\infty(I, R^n)} \le M \big( \|\xi_1 - \xi_2\| + \|B\|_0\, |u_1 - u_2|(I) + \|f_1 - f_2\|_{L_1(I, R^n)} \big),   (2.95)

where |u_1 − u_2|(I) ≡ ‖u_1 − u_2‖_{M_{ca}(Σ_I, R^d)} denotes the total variation norm and ‖B‖_0 denotes the supnorm. Since B ∈ C(I, M(n × d)), its norm is finite. Thus, from the above inequality we conclude that the solution of Eq. (2.92) is unique and Lipschitz with respect to the data. This completes the proof. □

Remark 2.3.12 Notice that in Theorem 2.3.11 we assumed that B ∈ C(I, M(n × d)) and, as a consequence, we found that the operator L defined by

(Lu)(t) \equiv \int_0^t \Phi(t, s)B(s)u(ds)

is a bounded linear operator from the Banach space M_{ca}(Σ_I, R^d) to the Banach space B∞(I, R^n) and hence continuous. The reader can easily verify that this operator is also well defined for B ∈ B∞(I, M(n × d)). This is proved by use of the argument based on continuous extension of the operator L from C(I, M(n × d)) to B∞(I, M(n × d)). We denote the extension by the same symbol L.

2.4 Bibliographical Notes

In this introductory chapter we have considered classical linear systems, and linear systems driven by vector measures, including operator valued measures representing structural controls. Literature on structural control is rather limited [14, 16, 22, 93].


In many applications of control theory, it is sometimes necessary to effect structural control in order to optimize system performance. This is very useful in engineering and management sciences, where structural reorganization is employed to improve efficiency. To stabilize buildings against earthquakes and high winds, engineers use structural controls [93]. For flight control of aircraft, structural control is executed through manipulation of flaps, ailerons, rudders, etc. For stability of motion on ice, skaters control their physical posture by appropriate movements of their limbs. In the theory of nonlinear filtering, as seen in [22], structural optimization is used for the observer (measurement) dynamics. These are all structural controls. In addition to structural reorganization one may also consider additional controls using vector measures. The results presented here are used in later chapters dealing with control and optimization. We have used some results from [14] with revised and refined proofs.

Chapter 3

Nonlinear Systems

3.1 Introduction

Most of the dynamical systems arising in physical sciences and engineering are nonlinear. For example, electrical circuits containing nonlinear components such as diodes, nonlinear inductors, nonlinear capacitors, and resistors can be described only by nonlinear differential equations. Chemical reactors supporting binary and ternary reactions are described by nonlinear differential equations. Mechanical systems containing nonlinear springs and dampers are described by nonlinear differential equations. Ecological systems supporting different species of life are described by nonlinear differential equations determined by complex inter-species and intra-species interactions. These, among many others, are the overriding reasons why we study nonlinear systems. The standard form of a controlled nonlinear system in finite dimensional vector spaces is given by

ẋ = f(t, x, u),   (3.1)

where x is the state vector of dimension n and f = col(f1, f2, f3, · · · , fn) is an n-vector that is a function of time, the state, and the control variable. In case f is independent of time, we have a time-invariant system. Classical impulsive nonlinear systems are described by similar equations in association with another equation that describes the evolution of jumps. This is further generalized by including nonlinear systems that are governed by differential equations and inclusions driven by signed measures and vector measures. Here, with reference to the system (3.1), we study the questions of existence of solutions and their regularity properties. These results are then used in control theory in later chapters.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_3


For physical motivation, we start with some elementary examples arising in ecology and engineering.

(E1) Controlled Ecological Systems Consider a habitat where rabbits and foxes reside. Foxes prey upon rabbits, while rabbits live on vegetation. A prey–predator model for this environment is given by the logistic equation known as the Lotka–Volterra equation [2, 14]. Here one assumes that the effective growth rate of rabbits is linearly proportional to the fox population (in the negative sense), that is, αe ≡ (α − βF), where α is the natural growth rate in the absence of adversaries. Similarly, the effective growth rate of the fox population is linearly proportional (in the positive sense) to the rabbit population, given by γe ≡ (−γ + δR), where γ is the death rate of foxes for lack of food supply in the absence of rabbits. Based on these hypotheses, the population dynamics is given by

Ṙ = (α − βF)R = αR − βFR,
Ḟ = (−γ + δR)F + bu = −γF + δFR + bu,   (3.2)

where u is the control (exercised by conservation authorities) and bu is the rate of removal (killing rate) of foxes. Defining the state x as x = (x1, x2)′ ≡ (R, F)′, this equation can be written in the canonical form giving

ẋ1 = f1(x1, x2) ≡ αx1 − βx1x2,
ẋ2 = f2(x1, x2) ≡ −γx2 + δx1x2 + bu.   (3.3)
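The canonical form (3.3) can be integrated numerically. A minimal forward-Euler sketch follows; the parameter values and the constant control are illustrative assumptions, not values from the text.

```python
# Forward-Euler simulation of the controlled Lotka-Volterra system (3.3).
# All parameter values below are illustrative choices.
alpha, beta, gamma, delta, b = 1.0, 0.5, 1.0, 0.2, 1.0

def simulate(x1, x2, u, T=20.0, h=1e-3):
    """Integrate x1' = alpha*x1 - beta*x1*x2, x2' = -gamma*x2 + delta*x1*x2 + b*u."""
    n = int(T / h)
    for _ in range(n):
        dx1 = alpha * x1 - beta * x1 * x2
        dx2 = -gamma * x2 + delta * x1 * x2 + b * u
        x1, x2 = x1 + h * dx1, x2 + h * dx2
    return x1, x2

print(simulate(x1=10.0, x2=2.0, u=0.0))   # uncontrolled populations at t = 20
print(simulate(x1=10.0, x2=2.0, u=-0.5))  # constant culling of foxes
```

With u = 0 the populations cycle around the equilibrium (γ/δ, α/β); a negative u shifts the fox dynamics downward, which is the mechanism the conservation authority exploits.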

Based on similar logistic arguments one can develop a dynamic model of the human immune system's fight against cancer [18, 30]; this process of treatment is known as immunotherapy. In this model antibodies interact destructively with antigens to eliminate cancer cells. This model is used in Chap. 7 for optimal control with numerical results.

(E2) Nonlinear Circuits A nonlinear parallel RLC circuit connected to a current source u is given by a pair of nonlinear differential equations. The resistor is nonlinear in the sense that its i–v (current–voltage) characteristic is given by i = g(v), where g is a nonlinear function. The energy storage elements of the circuit are L (inductor) and C (capacitor). Hence, the appropriate state variables are the current i through the inductor and the voltage v across the capacitor. Using Kirchhoff's current and voltage laws one can write the system of equations as

(d/dt)i = (1/L)v,
(d/dt)v = −(1/C)i − (1/C)g(v) + (1/C)u.   (3.4)


Here the state is given by x = (x1, x2)′ = (i, v)′. Thus, written in the canonical form, we have the system

ẋ = f(x) + Bu,   (3.5)

where f1(x) = (1/L)x2, f2(x) = −(1/C)x1 − (1/C)g(x2), B ≡ (0, 1/C)′, and u is the source current used as control.
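A similar simulation sketch applies to the circuit equations (3.4); the component values and the cubic resistor characteristic g below are illustrative assumptions, not values from the text.

```python
# Forward-Euler simulation of the nonlinear RLC circuit (3.4)-(3.5).
# Component values and the resistor law g are illustrative choices.
L_ind, C_cap = 1.0, 0.5

def g(v):
    # nonlinear resistor i = g(v); a cubic law is a common textbook choice
    return v ** 3 - v

def step(i, v, u, h=1e-4):
    """One Euler step of (d/dt)i = v/L, (d/dt)v = -(i + g(v) - u)/C."""
    di = v / L_ind
    dv = (-i - g(v) + u) / C_cap
    return i + h * di, v + h * dv

i, v = 0.0, 0.1
for _ in range(100_000):          # integrate to t = 10 with zero source current
    i, v = step(i, v, u=0.0)
print(i, v)
```

With this particular g the resistor injects energy near v = 0 and dissipates it for large |v|, so the small initial voltage grows into a bounded self-sustained oscillation; a genuinely nonlinear effect no linear RLC model can produce.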

3.2 Fixed Point Theorems for Multi-Valued Maps

The Banach fixed point theorem deals with the question of existence and uniqueness of solutions of equations of the form x = F(x), where F is either a linear or a nonlinear operator in a Banach space or, more generally, a metric space. If the metric space is simply the real line and F is a polynomial, this is simply the question of existence of zeros of the polynomial P(x) ≡ F(x) − x. The set {x : F(x) − x = 0} is called the set of fixed points of F. This question often arises in mathematical sciences dealing with linear or nonlinear differential equations or integral equations. There are many fixed point theorems on Banach spaces [110]. The most well-known ones are the Banach fixed point theorem and the Schauder fixed point theorem. The Banach fixed point theorem is constructive and provides both existence and uniqueness, while the Schauder fixed point theorem is more abstract and provides only existence. Uniqueness is very important in the study of control problems, and hence most often we use the Banach fixed point theorem. In the study of systems governed by differential inclusions representing uncertain systems, that is, systems with incomplete description or, equivalently, systems with parametric uncertainty, we need multi-valued analysis, in particular, a Banach fixed point theorem for multi-valued maps.

Let X = (X, ρ) be a complete metric space and let c(X) denote the class of nonempty closed subsets of X. For each A, B ∈ c(X) one can define a distance between them by the following expression:

dH(A, B) ≡ max{ sup{ρ(A, y) : y ∈ B}, sup{ρ(x, B) : x ∈ A} }.

It is not difficult to verify that it satisfies the following properties:

(MH1) dH(A, B) ≥ 0, ∀ A, B ∈ c(X); dH(A, B) = 0 if and only if A = B.
(MH2) dH(A, B) = dH(B, A), ∀ A, B ∈ c(X).
(MH3) dH(A, B) ≤ dH(A, C) + dH(C, B), ∀ A, B, C ∈ c(X).

So this defines a metric on the class of closed subsets c(X) of the metric space X. With this metric topology, (c(X), dH ) is a complete metric space. The metric dH is known as the Hausdorff metric. Now we are prepared to consider Banach fixed point theorem for multi-valued maps.
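For finite sets the two suprema in the definition of dH are maxima and can be computed directly from the formula above. A small sketch on the real line, with arbitrary example sets:

```python
# Hausdorff distance d_H between two finite subsets of the real line,
# computed directly from the definition (rho = absolute value).
def d_H(A, B):
    rho = lambda x, S: min(abs(x - s) for s in S)   # distance from a point to a set
    return max(max(rho(a, B) for a in A),
               max(rho(b, A) for b in B))

A = {0.0, 1.0}
B = {0.0, 1.0, 3.0}
print(d_H(A, B))   # the point 3 in B is at distance 2 from A, so d_H = 2.0
```

Note that the one-sided supremum sup{ρ(x, B) : x ∈ A} alone would give 0 here, which is why the definition takes the maximum of both directions.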


Definition 3.2.1 Let X0 be a closed subset of a complete metric space X ≡ (X, ρ) and let G : X0 → c(X0) be a multi-valued map. A point z ∈ X0 is said to be a fixed point of the multifunction G if z ∈ G(z). The set Fix(G) ≡ {z ∈ X0 : z ∈ G(z)} is called the set of fixed points of G.

Theorem 3.2.2 (Banach Fixed Point Theorem for Multifunctions) Let X0 be a closed subset of a complete metric space (X, ρ). Suppose G is a multi-valued map from X0 to c(X0) satisfying

dH(G(x), G(y)) ≤ αρ(x, y) ∀ x, y ∈ X0

with α ∈ (0, 1). Then, G has at least one fixed point in X0.

Proof Take any q ∈ (α, 1) and x0 ∈ X0. By hypothesis, G(x0) is a closed subset of X0. Choose an element x1 ∈ G(x0) such that ρ(x0, x1) > 0. If no such element exists, then x0 is already a fixed point of G and the proof ends there. Otherwise, it follows from the Lipschitz property that

ρ(x1, G(x1)) ≤ dH(G(x0), G(x1)) ≤ qρ(x0, x1).

Thus, there exists an element x2 ∈ G(x1) ⊂ X0 such that ρ(x1, x2) ≤ qρ(x0, x1). Continuing this process we generate a sequence {xn}, n ∈ N0 ≡ {0, 1, 2, · · · }, from the set X0 satisfying the inclusions xn+1 ∈ G(xn), n ∈ N0. Clearly, by the Lipschitz property of G, we have

ρ(x2, x1) ≤ dH(G(x1), G(x0)) ≤ qρ(x1, x0),
ρ(x3, x2) ≤ dH(G(x2), G(x1)) ≤ qρ(x2, x1) ≤ q²ρ(x1, x0).

Thus, by induction, we have ρ(xn+1, xn) ≤ q^n ρ(x1, x0) for all n ∈ N0. Hence, for any integer p ≥ 1, it follows from this inequality that

ρ(xn+p, xn) ≤ q^n (1 − q)⁻¹ ρ(x1, x0).

Since α < q < 1, this inequality implies that lim_{n→∞} ρ(xn+p, xn) = 0 for any p ≥ 1. Thus, {xn} is a Cauchy sequence, and since X is a complete metric space, there exists a z ∈ X such that lim_{n→∞} ρ(xn, z) = 0.

Since X0 is a closed subset of X and all the elements of the sequence {xn} are contained in X0, the limit z ∈ X0. We show that z is a fixed point of G. Note that

ρ(z, G(z)) ≤ ρ(z, xn+1) + ρ(xn+1, G(z)) ≤ ρ(z, xn+1) + dH(G(xn), G(z)) ≤ ρ(z, xn+1) + qρ(xn, z).

This holds for all n ∈ N. Thus, letting n → ∞, we obtain ρ(z, G(z)) = 0. Since z ∈ X0, we have G(z) ∈ c(X0), and so the identity ρ(z, G(z)) = 0 implies that z ∈ G(z). This proves that z is a fixed point of the multi-valued map G. □

As a corollary of the above theorem we have the following result.

Corollary 3.2.3 Under the assumptions of Theorem 3.2.2, the fixed point set of the multi-valued map G, denoted by Fix(G) ≡ {ξ ∈ X0 : ξ ∈ G(ξ)}, is a sequentially closed subset of X0.

Proof Let ξn ∈ Fix(G), ξ0 ∈ X0, and suppose that ρ(ξn, ξ0) → 0 as n → ∞. We show that ξ0 ∈ Fix(G). Clearly

ρ(ξ0, G(ξ0)) ≤ ρ(ξ0, ξn) + ρ(ξn, G(ξ0)) ≤ ρ(ξ0, ξn) + dH(G(ξn), G(ξ0)) ≤ ρ(ξ0, ξn) + qρ(ξn, ξ0).

Letting n → ∞, we obtain ρ(ξ0, G(ξ0)) = 0. Since ξ0 ∈ X0 and G(x) ∈ c(X0) for every x ∈ X0, we have ξ0 ∈ G(ξ0) and hence ξ0 ∈ Fix(G). Thus, we have proved that Fix(G) is a sequentially closed subset of X0. □
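The iteration xn+1 ∈ G(xn) used in the proof can be carried out concretely for a simple interval-valued contraction on the real line. The map G below is an illustrative example (not from the text) with contraction constant α = 1/2 and fixed point set equal to the whole interval [0, 2]:

```python
# Iterating the multi-valued contraction G(x) = [x/2, x/2 + 1] on the real
# line, mimicking the construction in the proof of Theorem 3.2.2: at each
# step pick a point of G(x_n) close to x_n (here, the nearest point).
# d_H(G(x), G(y)) = |x - y|/2, so alpha = 1/2; Fix(G) = [0, 2].
def G(x):
    return (x / 2.0, x / 2.0 + 1.0)       # closed interval [lo, hi]

def nearest(x, interval):
    lo, hi = interval
    return min(max(x, lo), hi)            # projection of x onto [lo, hi]

x = 10.0
for _ in range(60):
    x = nearest(x, G(x))

lo, hi = G(x)
print(x, lo <= x <= hi)   # the limit lies in G(limit), i.e. it is a fixed point
```

Starting from x = 10 the iterates decrease geometrically toward 2, the fixed point of Fix(G) nearest the starting point; a different selection rule inside G(xn) may converge to a different fixed point, which is consistent with the theorem asserting existence but not uniqueness.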

3.3 Regular Systems (Existence of Solutions)

In this section we consider systems governed by regular nonlinear differential equations in R^n, where n is a finite positive integer. Our objective is to present some results on the question of existence and uniqueness of solutions and their regularity properties. First we consider the system model

ẋ(t) = f(t, x(t)), x(0) = x0.   (3.6)


Suppose that the vector field f of the above equation is defined for all (t, x) ∈ I × R^n in the sense that it is single-valued and that ‖f(t, x)‖ < ∞ for each (t, x) ∈ I × B, where B is any bounded subset of R^n. We present here some sufficient conditions for existence and uniqueness of solutions. In fact, there is no general theory giving necessary and sufficient conditions for the existence of solutions; most of the existence results in the literature give only sufficient conditions. The implication is that if these conditions are not satisfied, nonexistence of solutions does not necessarily follow.

Theorem 3.3.1 Suppose f is continuous on [0, ∞) × R^n and bounded on bounded sets. Then, there exists a critical time tc > 0 such that the system (3.6) has a solution x ∈ C(I_tc, R^n) (not necessarily unique), where I_tc = [0, tc) is the maximal interval of existence of solutions and tc is the blow-up time.

Proof See [2, Theorem 3.1.17]. □

An Example For illustration we consider a simple example. Consider the scalar equation ż = −z², z(0) = z0. By simply integrating this equation one can verify that the solution is given by

z(t) = z0 / (1 + t z0).

Clearly, if z0 > 0, the solution is defined for all t ≥ 0 and vanishes as t → ∞. If, on the other hand, z0 < 0, the solution blows up at time tc = 1/|z0|. If the sign of the right-hand side is changed from −z² to +z², the solution is defined for all time t ≥ 0 provided z0 < 0; in contrast, if z0 > 0, the solution blows up at time tc = 1/z0. Similar conclusions hold for ż = αz^{2n}, z(0) = z0, for α = −1 or +1. Note that here f is locally Lipschitz but does not satisfy the (at most) linear growth condition (A1) as seen below in Theorem 3.3.2.

Continuity of f in t is not essential. It suffices if it is measurable and locally integrable in t. However, one requires stronger regularity conditions on f with respect to the state variable x ∈ R^n. One such existence result is provided by the following theorem.

Theorem 3.3.2 Consider the system (3.6). Suppose f satisfies the linear growth and Lipschitz conditions with respect to the state x ∈ R^n in the sense that there exist K, L ∈ L1⁺(I) such that

(A1): ‖f(t, x)‖ ≤ K(t){1 + ‖x‖},   (3.7)
(A2): ‖f(t, x) − f(t, y)‖ ≤ L(t)‖x − y‖.   (3.8)

Then, the system (3.6) has a unique solution x ∈ C(I, R^n).
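Returning to the scalar example preceding the theorem, the blow-up behavior can be checked directly against the closed-form solution. A brief sketch (the specific z0 is an arbitrary choice):

```python
# Numerical check of the scalar example z' = -z^2, whose exact solution is
# z(t) = z0 / (1 + t*z0).  For z0 < 0 the denominator vanishes at the
# blow-up time t_c = 1/|z0|.
def z(t, z0):
    return z0 / (1.0 + t * z0)

z0 = -2.0
t_c = 1.0 / abs(z0)          # = 0.5
for t in (0.0, 0.4, 0.49, 0.499):
    print(t, z(t, z0))       # |z(t)| grows without bound as t -> t_c
```

For z0 > 0 the same formula stays bounded for all t ≥ 0, matching the dichotomy described above.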


Proof Integrating Eq. (3.6) we obtain the following integral equation:

x(t) = x0 + ∫₀ᵗ f(s, x(s)) ds,  t ∈ I ≡ [0, T].   (3.9)

Define

x1(t) = x0,  t ∈ I,
x2(t) = x0 + ∫₀ᵗ f(s, x1(s)) ds,  t ∈ I,

and, by induction,

xn+1(t) = x0 + ∫₀ᵗ f(s, xn(s)) ds,  t ∈ I, n ∈ N.   (3.10)

We prove that {xn} is a Cauchy sequence in C(I, R^n) and that it converges to a unique limit, which is the solution of the integral equation and hence of the system Eq. (3.6). First note that, by virtue of the assumption (A1) and the expression (3.10), we have

‖x2(t) − x1(t)‖ ≤ ∫₀ᵗ ‖f(s, x0)‖ ds ≤ (1 + ‖x0‖) ∫₀ᵗ K(s) ds,  t ∈ I.   (3.11)

Hence,

‖x2 − x1‖_C ≡ sup_{t∈I} ‖x2(t) − x1(t)‖ ≤ (1 + ‖x0‖) ‖K‖_{L1} ≡ C.   (3.12)

Using the Lipschitz condition (A2), the expression (3.10), and the constant C as defined above, we obtain

‖x3(t) − x2(t)‖ ≤ ∫₀ᵗ ‖f(s, x2(s)) − f(s, x1(s))‖ ds
 ≤ ∫₀ᵗ L(s) ‖x2(s) − x1(s)‖ ds
 ≤ C ∫₀ᵗ L(s) ds ≡ C Λ(t),   (3.13)


where Λ(t) ≡ ∫₀ᵗ L(s) ds. Clearly, Λ is a nonnegative, non-decreasing function of t and Λ(T) < ∞ for any finite T < ∞. Following this procedure one finds that

‖x4(t) − x3(t)‖ ≤ ∫₀ᵗ L(s) ‖x3(s) − x2(s)‖ ds ≤ ∫₀ᵗ L(s) C Λ(s) ds = C ∫₀ᵗ Λ(s) dΛ(s) ≤ (C/2) Λ²(t),  t ∈ I,   (3.14)

and

‖x5(t) − x4(t)‖ ≤ (C/3!) Λ³(t),  t ∈ I,   (3.15)

and so on. Thus, by induction,

‖xn+1(t) − xn(t)‖ ≤ (C/(n − 1)!) Λ^{n−1}(t),  n ≥ 1,  t ∈ I.   (3.16)

Since

xn+p(t) − xn(t) = Σ_{k=n}^{n+p−1} (xk+1(t) − xk(t)),

we have

‖xn+p(t) − xn(t)‖ ≤ Σ_{k=n}^{n+p−1} ‖xk+1(t) − xk(t)‖ ≤ (C/(n − 1)!) Λ^{n−1}(t) exp{Λ(t)},  t ∈ I.   (3.17)

Hence,

‖xn+p − xn‖_C ≡ sup_{t∈I} ‖xn+p(t) − xn(t)‖ ≤ (C exp{Λ(T)}) (Λ^{n−1}(T)/(n − 1)!).   (3.18)

Note that the expression on the right-hand side of this estimate is independent of p. Thus, letting n → ∞, we see that

lim_{n→∞} ‖xn+p − xn‖ = 0


for any integer p. Hence, {xn} is a Cauchy sequence in the Banach space C(I, R^n), and consequently, there exists a unique limit x ∈ C(I, R^n). That is,

lim_{n→∞} xn(t) = x(t), uniformly on I.

Since ξ → f(t, ξ) is continuous on R^n, it follows from the above result that

lim_{n→∞} f(t, xn(t)) = f(t, x(t)) for a.a. t ∈ I.   (3.19)

Further, since every Cauchy sequence is bounded, there exists a finite positive number b so that

‖f(t, xn(t))‖ ≤ K(t)(1 + ‖xn(t)‖) ≤ (1 + b)K(t) a.e.   (3.20)

By virtue of the two conditions (3.19) and (3.20), the Lebesgue dominated convergence theorem applies, and hence

lim_{n→∞} ∫₀ᵗ f(s, xn(s)) ds = ∫₀ᵗ f(s, x(s)) ds,  t ∈ I.

Thus, letting n → ∞, it follows from (3.10) that

x(t) = x0 + ∫₀ᵗ f(s, x(s)) ds,  t ∈ I.   (3.21)

In other words, the limit x is a solution of the integral equation (3.9). Since f is Lipschitz in x, this is the unique solution. Indeed, if there is another solution, say y ∈ C(I, R^n), that satisfies the integral equation, we have

‖x(t) − y(t)‖ ≤ ∫₀ᵗ L(s) ‖x(s) − y(s)‖ ds,  t ∈ I.

By the Grönwall inequality, this implies that x(t) = y(t) for all t ∈ I. So far we have shown that the limit x is the unique solution of the integral equation. We must show that it is also the solution of our original system Eq. (3.6). Since g(t) ≡ f(t, x(t)), t ∈ I, is an integrable function (the reader may justify this), its integral ∫₀ᵗ g(s) ds ≡ ∫₀ᵗ f(s, x(s)) ds, t ∈ I, is an absolutely continuous function of time t ∈ I; hence, it is differentiable almost everywhere on I and the derivative equals g(t) = f(t, x(t)). Thus, differentiating either side of Eq. (3.21), we obtain the differential equation (3.6). This completes the proof. □

Remark 3.3.3 Examining the last part of the proof, we note that, under the given assumptions on f, x ∈ C(I, R^n) is a solution if it is absolutely continuous and the identity (3.6) holds for almost all t ∈ I. This is a relaxed notion of solution. In contrast, if f is also continuous in t, we have a C¹ solution, known as the classical solution, and the identity (3.6) holds for all t ∈ I. The reader can also check that, for n ∈ N0 ≡ {0, 1, 2, · · · }, if f ∈ Cⁿ in both arguments, then x ∈ Cⁿ⁺¹.

A more general result is given by the following theorem. It requires only a local Lipschitz condition together with the global linear growth condition of Theorem 3.3.2. Let Br ≡ {ξ ∈ R^n : ‖ξ‖ ≤ r} denote the closed ball of radius r centered at the origin.

Theorem 3.3.4 Suppose there exists a function K ∈ L1⁺(I) and, for each positive number r, a function Lr ∈ L1⁺(I) so that the following inequalities hold:

(A1): ‖f(t, x)‖ ≤ K(t){1 + ‖x‖} ∀ x ∈ R^n,   (3.22)
(A2): ‖f(t, x) − f(t, y)‖ ≤ Lr(t)‖x − y‖ ∀ x, y ∈ Br.   (3.23)

Then, for each x0 ∈ R^n, the system (3.6) has a unique solution x ∈ C(I, R^n).

Proof We prove this result using the previous theorem. From the linear growth condition it follows that a solution of the system Eq. (3.6), if one exists, remains confined in a ball B_{r°} ≡ {ξ ∈ R^n : ‖ξ‖ ≤ r°} of radius r° around the origin, where r° possibly depends on T and the norm of the initial state ‖x0‖. Let us verify this statement. Let x ∈ C(I, R^n) be a solution of Eq. (3.6), or equivalently of the integral equation

x(t) = x0 + ∫₀ᵗ f(s, x(s)) ds.   (3.24)

Then, it follows from the growth assumption (A1) that

‖x(t)‖ ≤ ‖x0‖ + ∫₀ᵗ K(s)(1 + ‖x(s)‖) ds,  t ∈ I.   (3.25)

By virtue of the Grönwall inequality, it follows from this estimate that

1 + ‖x(t)‖ ≤ (1 + ‖x0‖) exp{ ∫₀ᵀ K(s) ds } ≡ r°.   (3.26)


This implies that if x is a solution of the problem, then x(t) ∈ B_{r°} for all t ∈ I. Now choose r > r° and define the retraction ϕr of the ball Br as follows:

ϕr(x) = x for x ∈ Br;  ϕr(x) = (r/‖x‖)x for x ∉ Br.   (3.27)

The function ϕr maps every point outside the ball Br onto the sphere Sr = ∂Br. Let x̃ = ϕr(x) and ỹ = ϕr(y). Geometrically, it is clear that ‖x̃ − ỹ‖ ≤ ‖x − y‖: the distance between any two points outside the ball is always at least the distance between their retracts. We leave it to the reader to verify that this inequality can also be derived purely algebraically using the Cauchy inequality. Define fr(t, x) ≡ f(t, ϕr(x)) and note that fr has the linear growth property (A1) and, further, is globally Lipschitz, that is,

‖fr(t, x) − fr(t, y)‖ ≤ Lr(t)‖x − y‖, ∀ x, y ∈ R^n.   (3.28)
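The retraction ϕr of (3.27) is easy to implement, and the nonexpansiveness inequality ‖x̃ − ỹ‖ ≤ ‖x − y‖ can be spot-checked numerically; a sketch (the dimension and sampling ranges are arbitrary choices):

```python
import math
import random

# The radial retraction phi_r of (3.27) in R^n, together with a random
# numerical check of the nonexpansiveness ||phi_r(x) - phi_r(y)|| <= ||x - y||
# used in the proof.  For the Euclidean ball, phi_r coincides with the
# nearest-point projection onto B_r, which is nonexpansive.
def phi_r(x, r):
    nx = math.sqrt(sum(c * c for c in x))
    if nx <= r:
        return list(x)
    return [r * c / nx for c in x]      # map onto the sphere of radius r

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

random.seed(0)
r = 1.0
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in range(3)]
    y = [random.uniform(-3, 3) for _ in range(3)]
    assert dist(phi_r(x, r), phi_r(y, r)) <= dist(x, y) + 1e-12
print("nonexpansiveness verified on 1000 random pairs")
```

This is precisely why fr(t, x) ≡ f(t, ϕr(x)) inherits the Lipschitz constant Lr globally: composing with a nonexpansive map cannot increase the Lipschitz constant.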

Hence, it follows from the previous Theorem 3.3.2 that the initial value problem

ψ̇(t) = fr(t, ψ(t)), ψ(0) = x0, t ∈ I,

or equivalently the integral equation

ψ(t) = x0 + ∫₀ᵗ fr(s, ψ(s)) ds,  t ∈ I,   (3.29)

has a unique solution, which we denote by xr(t), t ∈ I. Since fr satisfies the same growth condition (A1), xr(t) ∈ B_{r°} ⊂ Br for all t ∈ I. Hence,

xr(t) = x0 + ∫₀ᵗ fr(s, xr(s)) ds = x0 + ∫₀ᵗ f(s, xr(s)) ds,  t ∈ I, ∀ r > r°.   (3.30)

Thus, xr is independent of r for all r > r°. Hence, we may denote the solution by x without the subscript r. This completes the proof. □

In the above theorem the vector field f was assumed to have at most linear growth globally. We wish to relax this condition in the following theorem. In fact, under the assumptions stated below, f can possess polynomial growth locally.

Theorem 3.3.5 Let I ≡ [0, T], T ∈ (0, ∞), and suppose that f(·, 0) ∈ L1(I, R^n) and that, for every finite positive number r, there exists a function Lr ∈ L1⁺(I) so that

‖f(t, x) − f(t, y)‖ ≤ Lr(t)‖x − y‖ ∀ x, y ∈ Br.   (3.31)


Then, for each x0 ∈ R^n, there exists a maximal interval I(x0) = [0, τ(x0)) ⊂ I such that the system (3.6) has a unique solution x ∈ C(I(x0), R^n). Further, if τ(x0) ∈ Int(I), then lim_{t→τ(x0)} ‖x(t)‖ = +∞. If τ(x0) = T, the solution may be continued beyond the interval I until it blows up.

Proof Define r0 ≡ ‖x0‖, where x0 ∈ R^n is the given initial state. Take any finite r > r0 and note that x0 ∈ Int(Br), where Br is the closed ball of radius r around the origin of R^n. Now observe that

‖f(t, x)‖ ≤ ‖f(t, 0)‖ + Lr(t)‖x‖ ≤ Kr(t)(1 + ‖x‖),  x ∈ Br,   (3.32)

where Kr(t) ≡ sup{‖f(t, 0)‖, Lr(t)}. Since f(·, 0) ∈ L1(I, R^n) and Lr ∈ L1⁺(I), we have Kr ∈ L1⁺(I). Thus, f satisfies the assumptions (A1) and (A2) of Theorem 3.3.2 in Br, and hence there exists a time τr ≤ T such that the system (3.6) has a unique solution x ∈ C([0, τr), R^n) up to the time τr at which it hits the boundary ∂Br ≡ {ξ ∈ R^n : ‖ξ‖ = r}. If it never hits the boundary, set τr = T to complete the process, and we may conclude that we have a unique solution. However, if τr < T, we start all over again with the initial state x(τr). Clearly, ‖x(τr)‖ = r, which we denote by r1 = r. Again we choose a positive number, say r2 > r1, and note that x(τr1) ∈ Int(Br2). By our assumption on f, we can again find functions Lr2, Kr2 ∈ L1⁺(I) so that f has the local growth and Lipschitz properties with respect to these functions on Br2, thereby guaranteeing the existence and uniqueness of a solution up to a time, say τr2, at which the solution hits the boundary of the ball Br2. If τr2 = T, the process is complete and we have a unique solution; if τr2 < T, we repeat the process. In this way we generate an increasing sequence of positive numbers {rk} with rk → ∞ and a corresponding nondecreasing sequence {τrk} ⊂ I so that we have unique solutions on each of the intervals [0, τrk).

If lim_{k→∞} τrk = τ < T, the solution blows up at time τ and we have a unique solution locally on the interval [0, τ), with τ being the blow-up time. If, on the other hand, τ = T, we have a unique global solution on the interval [0, T). This completes the proof. □

3.4 Impulsive Systems (Existence of Solutions) In the preceding section we have considered regular or classical nonlinear systems. In this section we consider systems driven by measures covering standard impulsive systems. There are situations where a system may be forced out of its current state abruptly by an external or internal force. A simple example is experienced

3.4 Impulsive Systems (Existence of Solutions)

79

in collision between cars on highways. Hammering a nail can be considered an impulsive force or control. Another example is given by an orbiting communication satellite that may be hit by micrometeorites from time to time causing an abrupt change of its state. In the absence of such forces, the satellite executes its motion according to its normal attitude dynamics. Similar examples can be found in the study of demography where due to war or other catastrophic events there is largescale migration taking place in a very short span of time. Similarly, epidemic may cause abrupt changes in the demography. Further, a system may be labeled impulsive, if the control forces are impulsive. Such examples arise in the study of traffic flow control in computer communication networks.

3.4.1 Classical Impulsive Models For better understanding, first we may consider the classical (purely) impulsive system. This is described by the following system of equations: x˙ = f (t, x), t ∈ I \ D, $x(ti ) = G(ti , x(ti −)), ti ∈ D, 1 ≤ i ≤ m, m ∈ N,

(3.33) (3.34)

where D ≡ {0 = t0 < t1 < t2 < · · · < tm < T }, ti ∈ I. Here,  denotes the jump operator given by $x(ti ) ≡ x(ti ) − x(ti −). The system evolves continuously according to the first equation during each of the time intervals [ti−1 , ti ), and at time ti it makes a jump to the state x(ti ) = x(ti −) + G(ti , x(ti −)). Clearly, the jump size is determined by the function G and the state just before the jump. This is an explicit scheme. Later in the sequel we will also consider an implicit scheme. Note that G could also be viewed as a feedback control (law) that imparts control actions at a set of discrete points of time according to the state of the system at those instances. One may also visualize even more general systems of the form x˙ = f (t, x, u), t ∈ I \ D, $x(ti ) = G(ti , x(ti −), vi ), ti ∈ D,

(3.35) (3.36)

80

3 Nonlinear Systems

where u is a regular control and v is an impulsive control. Before we discuss the questions of existence, uniqueness, and regularity properties of solutions of such models, we may consider a practical example arising from construction engineering. This is known as piling. Piling in Construction Engineering [14] For construction of buildings, civil engineers use piling to reinforce the soil beneath the construction site. Piling is also used to construct supporting pillars by pouring concrete mixture in a hollow steel pipe that is hammer driven into the ground. A massive hammer is used to drop under gravity on a supporting plate placed on the pipe. We are interested in estimating the depth of penetration after each strike and after series of strikes. This is a system subject to impulsive forces used as control. Letting x(t) denote the depth of penetration at time t measured downward from the ground surface, m the mass of the pipe, M the mass of the hammer, r the radius of the pipe, β the coefficient of friction per unit surface area, one can verify that x satisfies the second order differential equation given by mx(t) ¨ + (4βπrx(t))x(t) ˙ = F (t),

(3.37)

where F is the force imparted to the standing pipe by the free fall of the hammer of mass M. Note that the resistance to penetration is given by R = β(4πrx) where (4πrx) is the surface area of the pipe in contact with the surrounding ground. Here we have assumed that after each strike the hammer bounces back. Define the set D ≡ {0 < t1 < t2 < · · · < T } ⊂ I to be the set of time instances at which the hammer strikes the supporting plate. Note that√the velocity of the pipe before the first strike is 0 and immediately after the height of fall. Thus, the force imparted to the pipe strike is 2gH , where H is the √ is impulsive given by F (t) = M 2gH δ(t − t1 ). Assuming the initial position and velocity to be x(0) = x0 , x(0) ˙ = 0 and that the friction is approximately constant in between strikes, we can approximate the dynamics as follows:  mx(t) ¨ + (4βπrx0)x(t) ˙ = M 2gH δ(t − t1 ), t ∈ [0, t2 ).

(3.38)

Solving this equation, we find that x(t) = x0 for 0 ≤ t < t1 and √   M 2gH x(t) = x0 + 1 − exp{−(4βπrx0/m)(t − t1 )} , for t1 ≤ t < t2 . 4βπrx0 (3.39) It is clear from this expression that penetration increases with the increase of the weight of the hammer and the height of fall, while it decreases with the increase of radius of the pipe, friction coefficient β, and the initial depth x0 . If we wish to consider the result of only one strike, we must set F (t) ≡ 0 for t > t1 . In that case

3.4 Impulsive Systems (Existence of Solutions)

81

the maximum possible penetration due to first strike is given by  x1 = x0 +

√  M 2gH . 4βπrx0

Assuming the time difference between the strikes to be sufficiently large, we can take x(t2 −) = x1 and x(t ˙ 2 −) = 0 as the initial state for the second strike. In that case, one can write the solution approximately as x(t) = x1 +

√   M 2gH 1 − exp{−(4βπrx1/m)(t − t2 )} , t2 ≤ t < t3 . 4βπrx1

Continuing this process one can construct an approximate solution and estimate the depth of penetration after any number of strikes. We leave it as an exercise for the reader to compute the solution for multiple strikes with the force given by F (t) ≡

m

 M 2gHi δ(t − ti ),

i=1

30

30

29

29

28

28

27

27

Depth (m)

Depth (m)

where Hi may vary from one strike to another. For illustration, some numerical results based on exact solution and approximate ones are presented in Fig. 3.1. The parameters used are m = 2.5 × 103 Kg, M = 7.85 × 103 Kg, β = 0.4, r = 0.6 m, H = 20 m, g = 9.8 m/sec2. The graph on the left corresponds to the exact solution and that on the right is based on approximate solution. Notice that the approximate solution overestimates the depth of penetration slightly. Clearly, this is due to the assumption of constancy of the friction during the process of penetration for each strike.

26 25 24 23

26 25 24 23

22

22

21

21

20 0

5

10

15

Time (unit: second)

(a)

20

20 0

5

10

15

20

Time (unit: second)

(b)

Fig. 3.1 Piling in Construction Engineering [14]. (a) Numerical results of the exact solution. (b) Numerical results of the approximate solution

82

3 Nonlinear Systems

Next we present some results on the questions of existence of solutions of impulsive systems given by Eqs. (3.33)–(3.34). Let P W C(I, R n ) denote the class of piecewise continuous and bounded functions on I with values in R n . Let B∞ (I, R n ) denote the space of bounded measurable functions defined on I and taking values in R n . This is a Banach space with respect to the supnorm topology and it contains P W C(I, R n ). Let P W Cr (I, R n ) ⊂ P W C(I, R n ) denote the class of functions that are right continuous having left limits. Theorem 3.4.1 Suppose f satisfies the assumptions of Theorem 3.3.2 and G : I × R n −→ R n is continuous and bounded on bounded subsets of I × R n . Then, for every initial state x0 ∈ R n , the system (3.33)–(3.34) has a unique solution in P W Cr (I, R n ) ⊂ B∞ (I, R n ). Proof Considering the open interval [0, t1 ) till the first jump, it follows from the assumptions of Theorem 3.3.2 that the integral equation

t

x(t) = x0 +

f (s, x(s))ds, 0 ≤ t < t1 ,

0

has a unique (absolutely continuous) solution. By integrability of f and continuity of x on this interval [0, t1 ), we can extend this solution from the left up to time t1 giving x(t1 −) = x0 +

t1

f (s, x(s))ds. 0

Now the jump occurs at t1 giving x(t1 ) = x(t1 −) + G(t1 , x(t1 −)). Clearly, the jump size is determined by the value of G just prior to jump. Since G is continuous and bounded on bounded sets of I × R n , this is uniquely defined. Starting with this as the initial state for the next interval [t1 , t2 ), we solve the integral equation x(t) = x(t1 ) +

t

f (s, x(s))ds, t ∈ [t1 , t2 ).

t1

Again by virtue of Theorem 3.3.2, this equation has a unique solution x ∈ AC([t1 , t2 ), R n ). It is clear from these steps that we can construct a unique solution for the system (3.33)–(3.34) on the entire interval I piece by piece. Also note that the solution so constructed is continuous from the right having limits from the left. Thus, x ∈ P W Cr (I, R n ). This proves the statement of the theorem.   This result can be further extended to the case in which f is only locally Lipschitz. This is given in the following theorem.

3.4 Impulsive Systems (Existence of Solutions)

Theorem 3.4.2 Suppose f satisfies the assumptions of Theorem 3.3.4 and G : I × R^n → R^n is continuous and bounded on bounded subsets of I × R^n. Then, for every initial state x_0 ∈ R^n, the system (3.33)–(3.34) has a unique solution in PWC_r(I, R^n).

Proof Since the proof is very similar to that of the previous theorem, we give only an outline. Let I ≡ [0, T]. Since x_0 ∈ R^n, there exists a positive number r_0 such that x_0 ∈ B_{r_0}. Now choose r_1 > r_0 and note that, by our assumption, there exists L_{r_1} ≡ L_1 ∈ L_1^+(I) such that

\[ \|f(t, x) - f(t, y)\| \le L_1(t)\,\|x - y\|, \quad x, y \in B_{r_1}. \]

Then we solve the integral equation

\[ x(t) = x_0 + \int_0^t f(s, x(s))\,ds, \quad 0 \le t < \tau_1 \wedge t_1, \]

where τ_1 denotes the first time the solution hits the boundary ∂B_{r_1}. The existence and uniqueness of a solution follow from Theorem 3.3.4. If τ_1 > t_1, to continue the process, define x(t_1) = x(t_1-) + G(t_1, x(t_1-)). If x(t_1) ∈ Int(B_{r_1}), solve the integral equation

\[ x(t) = x(t_1) + \int_{t_1}^t f(s, x(s))\,ds, \quad t_1 \le t \le \tau_1 \wedge t_2. \]

If x(t_1) ∉ B_{r_1}, take a ball of radius r_2 > r_1 so that x(t_1) ∈ Int(B_{r_2}), repeat the process, and let τ_2 denote the first time the solution hits the boundary ∂B_{r_2}. Continuing this process, we obtain a sequence {B_{r_i}, τ_i}. Note that {τ_i} ⊂ I is a nondecreasing sequence. If it converges to a point τ < T as i → ∞, the solution blows up at time τ as r_i → ∞; in that case we have a unique solution x ∈ PWC_r([0, τ_0], R^n) for any τ_0 < τ. This completes the outline of our proof. □
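The ball-expansion argument above can be visualized numerically. The following sketch of ours (not from the text) tracks the hitting times τ_i of expanding balls for the locally Lipschitz scalar field f(x) = x², whose exact solution 1/(1 − t) from x(0) = 1 blows up at t = 1; the τ_i accumulate just below the blow-up time, exactly as in the proof.

```python
def hitting_times(x0=1.0, T=2.0, h=1e-5):
    """Record the first time tau_i at which the Euler solution of
    x' = x^2, x(0) = x0, exits the ball of radius r_i = 2^i * |x0|.
    For x0 = 1 the exact hitting times are 1 - 2^{-i}, accumulating
    at the blow-up time tau = 1 as r_i -> infinity."""
    taus = []
    radius, t, x = 2.0 * abs(x0), 0.0, x0
    while t < T and len(taus) < 12:
        x = x + h * x * x   # Euler step for the locally Lipschitz field
        t += h
        if abs(x) >= radius:
            taus.append(t)  # hit the boundary of the current ball
            radius *= 2.0   # expand the ball and continue
    return taus
```

The returned sequence is strictly increasing and stays below the blow-up time (up to the Euler discretization lag), mirroring the sequence {B_{r_i}, τ_i} of the proof.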

3.4.2 Systems Driven by Vector Measures

The mathematical model for impulsive systems discussed in the preceding subsection is classical and has been studied extensively by Lakshmikantham, Bainov and Simeonov [76]. In later years, it was substantially generalized by the first author to include infinite dimensional systems, covering systems governed by partial differential equations. For reference we mention the papers [5, 8, 9, 15].


Let ℐ denote the sigma algebra of subsets of the set I, and let M_ca(ℐ, R^m) denote the space of countably additive bounded vector measures defined on ℐ and taking values in R^m. Recall that the variation of any μ ∈ M_ca(ℐ, R^m) on a set E ∈ ℐ is given by

\[ |\mu|(E) \equiv \sup_{\Pi} \sum_{\sigma \in \Pi} \|\mu(\sigma)\|_{R^m}, \]

where Π is any finite partition of E by ℐ-measurable disjoint subsets of E; the summation is taken over the finite family Π and the supremum over all such finite partitions. The total variation norm of μ is then given by ‖μ‖ ≡ |μ|(I). Furnished with the total variation norm, M_ca(ℐ, R^m) is a Banach space. Diestel and Uhl Jr. [57] use the notation M_cabv(ℐ, R^m) to emphasize that these measures have bounded total variation. For convenience of notation we prefer to use simply M_ca(ℐ, R^m), with the understanding that the elements of this space are always of bounded variation unless stated otherwise.

Now we are prepared to consider more general impulsive systems. These models can be described as follows:

\[ dx = F(t, x)\,dt + G(t, x)\,\nu(dt), \quad x(0) = x_0, \ t \in I \equiv [0, T], \tag{3.40} \]

where F : I × R^n → R^n, G : I × R^n → M(n × m), and ν ∈ M_ca(ℐ, R^m) is a vector measure. If the measure ν is discrete or purely atomic, it is given by a sum of Dirac measures,

\[ \nu(dt) \equiv \sum_k a_k\,\delta_{t_k}(dt), \quad t_k \in D \equiv \{0 < t_1 < t_2 < \cdots < t_\kappa < T\}, \ \kappa \in \mathbb{N}, \ a_k \in R^m. \]

In this case we revert to a system like the classical model (3.33)–(3.34). Indeed, we obtain the following system:

\[ \dot{x} = F(t, x(t)), \quad t \in I \setminus D, \]
\[ x(t_k) = x(t_k-) + G(t_k, x(t_k-))\,a_k, \quad t_k \in D. \]

In order that the discrete measure ν, given by a sum of Dirac measures, have bounded total variation, it is necessary and sufficient that \(\sum_k \|a_k\|_{R^m} < \infty\). Thus, the system model given by (3.40) is very general. Here we consider the questions of existence and uniqueness of solutions of this system. The first question we must answer is what we mean by a solution of Eq. (3.40). This is given in the following definition. First recall that B_∞(I, R^n) denotes the Banach space of bounded measurable functions from I to R^n, furnished with the supnorm topology.
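As a concrete illustration of the total variation norm for the purely atomic case above, here is a small sketch of ours (not from the text): for ν = Σ_k a_k δ_{t_k}, the supremum over finite partitions is attained by any partition separating the atoms, so |ν|(E) = Σ_{t_k ∈ E} ‖a_k‖ and ‖ν‖ = Σ_k ‖a_k‖.

```python
import numpy as np

def total_variation(atoms):
    """Total variation norm |nu|(I) of a purely atomic vector measure
    nu = sum_k a_k delta_{t_k}, given as a dict {t_k: a_k}, a_k in R^m.
    Any partition separating the atoms attains the supremum, giving
    |nu|(I) = sum_k ||a_k||."""
    return sum(np.linalg.norm(a) for a in atoms.values())

def variation_on(atoms, E):
    """Variation |nu|(E) on a set E, given as a membership predicate."""
    return sum(np.linalg.norm(a) for t, a in atoms.items() if E(t))
```

For instance, with atoms a_1 = (3, 4) at t = 0.25 and a_2 = (0, 2) at t = 0.75, the norm is 5 + 2 = 7.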


Definition 3.4.3 An element x ∈ B_∞(I, R^n) is said to be a solution of Eq. (3.40) if x(0) = x_0 and x satisfies the following integral equation:

\[ x(t) = x_0 + \int_0^t F(s, x(s))\,ds + \int_0^t G(s, x(s))\,\nu(ds), \quad t \in I. \tag{3.41} \]

It is important to justify the definition of solution as given above. Consider ν to be a discrete measure as seen above, and suppose t_1 is an atom of the measure ν. Then at t_1 we expect a jump in the solution, which may be given by either of the following two expressions:

\[ x(t_1) = x(t_1-) + G(t_1, x(t_1-))\,\nu(\{t_1\}), \]
\[ x(t_1) = x(t_1-) + G(t_1, x(t_1))\,\nu(\{t_1\}). \]

Clearly, the first expression is explicit: the state at time t_1 is given entirely in terms of the state just before the jump and the size of the atom. This is compatible with the class of basic impulsive systems given by Eqs. (3.33)–(3.34). On the other hand, the second expression is implicit: the state at time t_1 is the result of its interaction with the state just before the jump and the size of the atom. This leads to a fixed point problem of the form

\[ \xi = \eta + G(t_1, \xi)\,\nu(\{t_1\}), \]

where η ∈ R^n is given and we must solve for ξ. The operator on the right-hand side of the above expression depends on the size of the atom, and hence it may be a contraction for some atoms and not so for others. Thus this equation may have a unique solution, no solution, or even multiple solutions, leading to ambiguities, whereas the first choice always offers a unique solution. The implicit scheme will therefore require additional assumptions on G. We consider this situation later in the sequel.

As an example, consider the scalar equation dy = (1 − y²) δ_{t_1}(dt), t ≥ 0, where the measure ν(dt) = δ_{t_1}(dt) is the Dirac measure with mass 1 at t_1 and zero elsewhere. If one chooses the implicit scheme, the reader can easily verify that for y(0) = 1 there are two real solutions, given by

\[ y(t) = 1, \ t \ge 0; \qquad y(t) = \begin{cases} 1, & 0 \le t < t_1, \\ -2, & t \ge t_1. \end{cases} \]

In case y(0) = −3, there are no real solutions. In fact there are two complex solutions, which clearly do not make physical sense for differential equations with real coefficients. In contrast, if the explicit scheme is used, we have unique solutions for both initial conditions. For y(0) = 1, the solution is y(t) = 1, t ≥ 0, and for y(0) = −3 the solution is given by

\[ y(t) = \begin{cases} -3, & 0 \le t < t_1, \\ -11, & t \ge t_1. \end{cases} \]
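These computations can be checked mechanically. Here is a small sketch of ours (not from the text): the explicit scheme is a direct evaluation, while the implicit scheme ξ = η + (1 − ξ²) is the quadratic ξ² + ξ − (η + 1) = 0, whose real roots (if any) are the candidate post-jump states.

```python
import math

def explicit_jump(eta):
    """Explicit scheme for dy = (1 - y^2) delta_{t1}(dt):
    y(t1) = y(t1-) + (1 - y(t1-)^2), always uniquely defined."""
    return eta + (1.0 - eta * eta)

def implicit_jump(eta):
    """Implicit scheme: solve xi = eta + (1 - xi^2), i.e.
    xi^2 + xi - (eta + 1) = 0; return the sorted set of real roots."""
    disc = 1.0 + 4.0 * (eta + 1.0)
    if disc < 0:
        return []  # only complex solutions: no admissible jump
    r = math.sqrt(disc)
    return sorted({(-1.0 + r) / 2.0, (-1.0 - r) / 2.0})
```

For η = 1 the implicit quadratic has the two real roots {1, −2} found in the text; for η = −3 its discriminant is negative, so there is no real solution, while the explicit update gives the unique values 1 and −11 respectively.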

In order for the implicit scheme to hold, one is required to impose a limit on the size of the atoms so that each of the fixed point problems encountered at the jump times has a unique solution. The above examples illustrate the ambiguities in the interpretation of solutions of systems driven by purely discrete measures. We will discuss this in more detail later. For systems driven by general vector measures we prove, under standard assumptions, that Eq. (3.41) has a unique solution. For scalar-valued positive measures, a notion of robust solutions based on the so-called graph completion technique was developed by Dal Maso and Rampazzo [54] and Silva and Vinter [94]. This technique does not appear to admit extension to the systems driven by vector measures considered here.

Now we are prepared to consider the general model. For any μ ∈ M_ca(ℐ, R^n), let |μ|(·) denote the countably additive positive measure induced by the variation of μ.

Theorem 3.4.4 Consider the system (3.40) and suppose ν ∈ M_ca(ℐ, R^m), the maps F : I × R^n → R^n and G : I × R^n → M(n × m) are measurable in t ∈ I and continuous in x ∈ R^n, and there exist a constant K > 0 and a function L ∈ L_1^+(I, |ν|) such that

(F1): ‖F(t, x)‖ ≤ K(1 + ‖x‖), x ∈ R^n,
(F2): ‖F(t, x) − F(t, y)‖ ≤ K ‖x − y‖, x, y ∈ R^n,
(G1): ‖G(t, x)‖ ≤ L(t)(1 + ‖x‖), x ∈ R^n,
(G2): ‖G(t, x) − G(t, y)‖ ≤ L(t) ‖x − y‖, x, y ∈ R^n.

Then, for every x_0 ∈ R^n, Eq. (3.40) has a unique solution x ∈ B_∞(I, R^n); further, it is piecewise continuous.

Proof First we prove an a priori bound. Let x be a solution of Eq. (3.40). Then, by definition, x must satisfy the integral equation (3.41). Using this identity and computing the norm of x, it follows from the triangle inequality that

\[ \|x(t)\| \le \|x_0\| + \int_0^t K\,ds + \int_0^t L(s)\,|\nu|(ds) + \int_0^t K \|x(s)\|\,ds + \int_0^t L(s)\,\|x(s)\|\,|\nu|(ds), \tag{3.42} \]


for all t ∈ I. Define ξ(t) ≡ sup{‖x(s)‖ : 0 ≤ s ≤ t} and the measure μ by

\[ \mu(\sigma) \equiv \int_\sigma K\,ds + \int_\sigma L(s)\,|\nu|(ds), \quad \sigma \in ℐ. \]

It is known that a vector measure is countably additive if and only if the measure induced by its variation is also countably additive [57]. Therefore, since |ν| is a countably additive bounded positive measure, K > 0, and L ∈ L_1^+(I, |ν|), it is clear that μ is also a countably additive bounded positive measure having bounded total variation on I. Define

\[ C \equiv \|x_0\| + KT + \|L\|_{L_1^+(I, |\nu|)}, \]

and note that ξ(t), t ∈ I, is a nonnegative nondecreasing function. Thus, it follows from (3.42) that

\[ \xi(t) \le C + \int_0^t \xi(s)\,\mu(ds), \quad t \in I. \tag{3.43} \]

By virtue of the generalized Grönwall inequality [7] applied to the above expression, one can verify that

\[ \xi(t) \le C \exp\Big\{ \int_0^t \mu(ds) \Big\}, \quad t \in I. \]

Hence, we conclude that ‖x‖ ≡ sup{‖x(t)‖ : t ∈ I} ≤ C exp{μ(I)}. Thus, r ≡ C exp{μ(I)} gives the a priori bound. It follows from this result that if Eq. (3.40) has a solution x, it must belong to the Banach space B_∞(I, R^n). Define the operator 𝒢 by

\[ (\mathcal{G}x)(t) \equiv x_0 + \int_0^t F(s, x(s))\,ds + \int_0^t G(s, x(s))\,\nu(ds), \quad t \in I. \tag{3.44} \]

Clearly, it follows from the growth properties (F1) and (G1) and the a priori bound just proved that 𝒢 : B_∞(I, R^n) → B_∞(I, R^n). Using the Lipschitz properties (F2) and (G2), one can easily verify that, for x, y ∈ B_∞(I, R^n), we have the following inequality:

\[ \|(\mathcal{G}x)(t) - (\mathcal{G}y)(t)\| \le \int_0^t K \|x(s) - y(s)\|\,ds + \int_0^t L(s)\,\|x(s) - y(s)\|\,|\nu|(ds), \quad t \in I. \tag{3.45} \]


Now, defining d_t(x, y) ≡ sup{‖x(s) − y(s)‖ : 0 ≤ s ≤ t}, t ∈ I, for x, y ∈ B_∞(I, R^n), and using the measure μ as defined above, we can rewrite the expression (3.45) as follows:

\[ d_t(\mathcal{G}x, \mathcal{G}y) \le \int_0^t d_s(x, y)\,\mu(ds), \quad t \in I, \ x, y \in B_\infty(I, R^n). \tag{3.46} \]

Define

\[ \beta(t) \equiv \int_0^t \mu(ds), \quad t \in I. \]

Since μ is a countably additive positive measure having bounded total variation, the function β, as defined above, is a nonnegative, nondecreasing function of bounded variation having at most a countable number of discontinuities. Without loss of generality, we may assume that 0 is not an atom of the measure ν. Then 0 is not an atom of μ either, and consequently β(0) = 0. Using this β, we can rewrite the expression (3.46) as

\[ d_t(\mathcal{G}x, \mathcal{G}y) \le \int_0^t d_s(x, y)\,d\beta(s), \quad t \in I, \ x, y \in B_\infty(I, R^n). \tag{3.47} \]

Since β is a nonnegative monotone nondecreasing (actually increasing) function of bounded variation, it is differentiable almost everywhere on I, and the derivative β̇(·) is measurable and Lebesgue integrable with

\[ 0 \le \mathcal{I}(\dot\beta) \equiv \int_0^T \dot\beta(t)\,dt \le \beta(T), \]

and, for every t ∈ I, 0 ≤ ∫_0^t β̇(s) ds ≤ β(t+) ≤ β(T). Consider the set

\[ \Lambda \equiv \{ f \in L_1^+(I) : f(t) > \dot\beta(t) \ge 0, \ \text{a.e.}\ t \in I \}. \]

Clearly, the set Λ is nonempty and 𝓘(β̇) = inf{𝓘(f) : f ∈ Λ}. Hence, there exists (and so one can choose) an f_o ∈ Λ such that f_o(t) > β̇(t) for almost all t ∈ I and such that the indefinite Lebesgue integral t ↦ ∫_0^t f_o(s) ds ≡ β_o(t) is continuous, satisfying β_o(t) ≥ β(t+), t ∈ [0, T]. Using the function β_o in the inequality (3.47) in place of β, we obtain

\[ d_t(\mathcal{G}x, \mathcal{G}y) \le \int_0^t d_s(x, y)\,d\beta(s) \le \int_0^t d_s(x, y)\,d\beta_o(s), \quad t \in I. \tag{3.48} \]


Repeating this for the second iterate, we have

\[ d_t(\mathcal{G}^2 x, \mathcal{G}^2 y) \le \int_0^t d_s(\mathcal{G}x, \mathcal{G}y)\,d\beta_o(s) \le \int_0^t d_s(x, y)\,\beta_o(s)\,d\beta_o(s) \le d_t(x, y)\,\big(\beta_o^2(t)/2\big), \quad t \in I, \ x, y \in B_\infty(I, R^n). \tag{3.49} \]

Continuing this iterative process m times, we arrive at the following inequality:

\[ d_t(\mathcal{G}^m x, \mathcal{G}^m y) \le d_t(x, y)\,\big(\beta_o^m(t)/m!\big), \quad t \in I. \]

Define d(x, y) ≡ d_T(x, y) for x, y ∈ B_∞(I, R^n) and note that (B_∞(I, R^n), d) is a complete metric space. Then it follows from the above inequality that

\[ d(\mathcal{G}^m x, \mathcal{G}^m y) \le \big(\beta_o^m(T)/m!\big)\,d(x, y). \]

Since 0 < β_o(T) < ∞ for finite T, it follows from the above inequality that, for some m ∈ ℕ, the m-th iterate 𝒢^m of the operator 𝒢 is a contraction. Since (B_∞(I, R^n), d) is a complete metric space, it follows from this and the Banach fixed point theorem that 𝒢^m, and hence 𝒢 itself, has a unique fixed point x^o ∈ B_∞(I, R^n), that is, x^o = 𝒢x^o. Hence, x^o is the unique solution of the integral equation (3.41) and therefore, by definition, the unique solution of the system (3.40). This concludes the proof. □

Remark 3.4.5 Since the measure ν is countably additive having bounded total variation on I, the solution x^o can have at most a finite number of discontinuities on the interval I. Thus the solution x^o is also piecewise continuous. This is summarized by the statement x^o ∈ B_∞(I, R^n) ∩ PWC(I, R^n).

The following result establishes the continuous dependence of the solution with respect to the initial state and the input measure.

Corollary 3.4.6 Suppose the assumptions of Theorem 3.4.4 hold. Then the solution map (x_0, ν) ↦ x is continuous and Lipschitz from R^n × M_ca(ℐ, R^m) to B_∞(I, R^n) with respect to their respective norm topologies.

Proof This is left as an exercise for the reader. □


3.4.3 Systems Driven by Finitely Additive Measures

We can generalize the results of the preceding section by using finitely additive measures in place of countably additive measures, and by allowing a more general system with more structural freedom. Let U ⊂ R^d be a closed (not necessarily bounded) set and I ≡ [0, T] a closed bounded interval. Let A ≡ A_{U×I} denote an algebra of subsets of the set U × I. We use the notations A_I and A_U to denote the algebras of subsets of the set I and the set U, respectively. Let B_∞(U × I) denote the space of bounded measurable real-valued functions on U × I. Furnished with the supnorm topology, B_∞(U × I) is a Banach space. It is known [56] that the continuous (topological) dual of this space is given by the space of bounded finitely additive measures on A, denoted by M_bfa(A). For the convenience of the reader, we recall the definition of the variation norm. Let E be an A-measurable subset of the set U × I and let Π denote any finite disjoint A-measurable partition of the set E. The total variation of μ on E, denoted by |μ|(E), is given by

\[ |\mu|(E) \equiv \sup_{\Pi} \sum_{\sigma \in \Pi} |\mu(\sigma)|, \]

where the summation is taken over the elements of Π and the supremum with respect to the class of all such finite partitions of E. The norm of the measure μ is then given by ‖μ‖ ≡ |μ|(U × I). Endowed with the total variation norm, M_bfa(A_{U×I}) is a Banach space. A continuous linear functional ℓ on B_∞(U × I) has the integral representation through an element μ ∈ M_bfa(A), giving

\[ \ell(f) = \int_{U \times I} f(\xi, t)\,\mu(d\xi \times dt), \quad f \in B_\infty(U \times I). \]

In other words, every continuous linear functional ℓ on B_∞(U × I) is given by integration with respect to a unique measure μ ∈ M_bfa(A). This is a special case of a general result; for details on this topic see Diestel and Uhl [57, Theorem I.1.13]. We can use these measures to develop mathematical models for dynamic systems that exhibit impulsive behavior. In general, a system governed by a nonlinear ordinary differential equation subject to, or controlled by, impulsive forces can be described as follows:

\[ dx(t) = F(t, x(t))\,dt + \int_U G(t, x(t), \xi)\,\mu(d\xi \times dt), \quad t \in I, \ x(0) = x_0, \tag{3.50} \]

where μ ∈ M_bfa(A). It is better understood as the integral equation

\[ x(t) = x_0 + \int_0^t F(s, x(s))\,ds + \int_0^t \int_U G(s, x(s), \xi)\,\mu(d\xi \times ds), \quad t \in I. \tag{3.51} \]


In case the measure μ has the form μ(dξ × dt) = m_t(dξ) ρ(dt), the integral equation (3.51) takes the form

\[ x(t) = x_0 + \int_0^t F(s, x(s))\,ds + \int_0^t \rho(ds) \int_U G(s, x(s), \xi)\,m_s(d\xi), \quad t \in I. \tag{3.52} \]

Further, if ρ is a discrete measure given by a weighted sum of a finite number of Dirac measures,

\[ \rho(dt) = \sum_i a_i\,\delta_{t_i}(dt), \quad 0 < t_1 < t_2 < \cdots < t_\kappa < T, \ a_i \in \mathbb{R}, \ \kappa \in \mathbb{N}, \]

the integral equation (3.52) reduces to

\[ x(t) = x_0 + \int_0^t F(s, x(s))\,ds + \sum_{t_i \le t} \int_U a_i\,G(t_i, x(t_i), \xi)\,m_{t_i}(d\xi), \quad t \in I. \tag{3.53} \]

Integrating the i-th term of the last sum with respect to the measure m_{t_i} over the set U, we obtain

\[ \hat{G}_i(t_i, x(t_i)) \equiv a_i \int_U G(t_i, x(t_i), \xi)\,m_{t_i}(d\xi), \quad i = 1, 2, \cdots, \kappa. \tag{3.54} \]

Using this notation, we observe that Eq. (3.53) can be written as

\[ x(t) = x_0 + \int_0^t F(s, x(s))\,ds + \sum_{t_i \le t} \hat{G}_i(t_i, x(t_i)), \quad t \in I. \tag{3.55} \]

Letting I_0 ≡ {t_i, i = 1, 2, ⋯, κ}, for numerical (computational) convenience the system (3.55) is sometimes approximated by the following system of equations:

\[ \dot{x}(t) = F(t, x(t)), \quad x(0) = x_0, \ t \in I \setminus I_0; \tag{3.56} \]
\[ x(t_i) = x(t_i-) + \hat{G}_i(t_i, x(t_i-)), \quad t_i \in I_0. \tag{3.57} \]

The jump at time t_i, determined by Ĝ_i, depends on the state x(t_i-) of the system just before the jump occurs. This is the explicit scheme discussed earlier and seems natural in the case of discrete measures; it was discussed in Sect. 3.4.2 using some simple examples. Clearly, the solutions of Eqs. (3.56)–(3.57), if they exist, are piecewise continuous and bounded. In case the measures {m_{t_i}} are also discrete, given by Dirac measures concentrated at points {v_i} ⊂ U, the expression (3.54) reduces to

\[ \hat{G}_i(t_i, x(t_i-)) \equiv a_i\,G(t_i, x(t_i-), v_i), \quad i = 1, 2, \cdots, \kappa. \]


So this is the dynamics where the jump times are discrete and the controls {v_i}, determining the jumps Ĝ_i at times {t_i}, can be chosen as desired from the set U. In classical impulsive systems, these are considered as control variables and can be chosen so as to optimize certain performance measures. We consider the question of existence and regularity properties of solutions of the general impulsive system given by Eq. (3.50). First we consider the purely impulsive system consisting of Eqs. (3.56)–(3.57).

Theorem 3.4.7 Suppose F : I × R^n → R^n is Borel measurable, uniformly Lipschitz in x ∈ R^n, and of at most linear growth, and the function G : I × R^n × U → R^n is continuous. Then the system (3.56)–(3.57) has a unique piecewise continuous solution.

Proof The proof is very similar to that of Theorem 3.4.1. □
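The controlled explicit system (3.56)–(3.57) with jumps Ĝ_i = a_i G(t_i, x(t_i−), v_i) admits a direct simulation. The following sketch is ours (not from the text); the control is a list of triples (t_i, a_i, v_i) with weights a_i and impulse points v_i ∈ U, and forward Euler is used for the continuous arcs.

```python
import numpy as np

def run_controlled(F, G, x0, T, controls, h=1e-3):
    """Simulates (3.56)-(3.57) with a discrete control measure:
    Euler-integrate x' = F(t, x) between jump times, and at each t_i
    apply the explicit jump x(t_i) = x(t_i-) + a_i * G(t_i, x(t_i-), v_i),
    where the control w = {(t_i, a_i, v_i)} has weights a_i in [-1, 1]
    and points v_i in the set U."""
    t = 0.0
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for ti, ai, vi in sorted(controls):
        while t < ti - 1e-12:              # continuous arc up to t_i
            dt = min(h, ti - t)
            x = x + dt * F(t, x)
            t += dt
        x = x + ai * G(ti, x, vi)          # explicit jump at t_i
    while t < T - 1e-12:                   # final continuous arc
        dt = min(h, T - t)
        x = x + dt * F(t, x)
        t += dt
    return x
```

With F ≡ 0 the terminal state is x_0 plus the weighted jumps, which makes the role of the control weights a_i transparent.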

This theorem gives the existence of a solution of Eq. (3.50) under the assumption that the measure is discrete, having the special structure given by a sum of weighted Dirac measures

\[ \mu(d\xi \times dt) \equiv \sum_i a_i\,\delta_{v_i}(d\xi)\,\delta_{t_i}(dt), \]

with total variation norm ‖μ‖ = Σ_i |a_i| < ∞. It is important to note that the system of Eqs. (3.56)–(3.57), in particular Eq. (3.57), is written in the explicit form, implying that the jump size at time t_i is a function of the state just before the jump. Another possible model is given by a system of implicit equations such as

\[ \dot{x}(t) = F(t, x(t)), \quad x(0) = x_0, \ t \in I \setminus I_0; \tag{3.58} \]
\[ x(t_i) = x(t_i-) + \hat{G}_i(t_i, x(t_i)), \quad t_i \in I_0, \tag{3.59} \]

where x(t_i-) denotes the limit from the left of the (continuous) solution of Eq. (3.58). In this case the jump size Ĝ_i is determined by the state reached as a result of the interaction of the state before the jump with the impulsive force applied. Clearly, Eq. (3.59) presents a fixed point problem in the state space R^n, rewritten as

\[ \xi = \eta + \hat{G}_i(t_i, \xi). \]

If the map ξ ↦ Ĝ_i(t_i, ξ) is uniformly Lipschitz with Lipschitz constant 0 < L < 1, then this equation has a unique fixed point, and hence Eq. (3.59) governing the evolution of jumps has a unique solution. This result is formally stated in the following proposition.

Proposition 3.4.8 Consider the system of Eqs. (3.58)–(3.59), representing the system (3.50) subject to a discrete measure as described above. Suppose:

(i) F satisfies a uniform Lipschitz condition with respect to the state variable and has at most linear growth, with the same constant K.
(ii) G satisfies a Lipschitz condition in the state variable with Lipschitz constant 0 < L < 1 uniformly on I × U, and there exists a constant C > 0 such that sup{‖G(t, 0, v)‖ : (t, v) ∈ I × U} ≤ C; moreover, |a_i| ≤ 1 for all i ∈ {0, 1, 2, ⋯, κ}.

Then, for every initial state x_0 ∈ R^n and discrete measure μ = Σ_i a_i δ_{v_i}(dξ) δ_{t_i}(dt), the system (3.58)–(3.59) has a unique solution x ∈ B_∞(I, R^n) ∩ PWC(I, R^n).

Proof We use standard techniques to prove existence of a solution of Eq. (3.58) describing the continuous evolution and, at the jump times, we use the Banach fixed point theorem to prove existence of a solution of the jump equation (3.59). For a detailed proof see [28]. □

To deal with control problems involving the system of Eqs. (3.58)–(3.59) and the discrete measures, we must specify the set of admissible controls. Let J ≡ [−1, +1] and let U be a compact subset of R^d. Let U_d denote the set of admissible controls given by

\[ U_d \equiv \{ w = \{a, v\} : a_i \in J, \ v_i \in U; \ i = 0, 1, 2, \cdots, \kappa \}. \]

We need the following result asserting continuous dependence of solutions with respect to controls.

Proposition 3.4.9 Consider the system described by Eqs. (3.58)–(3.59) along with the set of admissible controls U_d, and suppose the assumptions of Proposition 3.4.8 hold. Then the control-to-solution map w ↦ x(w) from U_d to B_∞(I, R^n) is continuous.

Proof Consider the system

\[ \dot{x} = F(t, x), \quad x(t_{i-1}) = \xi, \ t \in (t_{i-1}, t_i], \ i \in \{1, 2, \cdots, \kappa\}. \tag{3.60} \]

Let ξ^k → ξ^o in R^n and let x^k, x^o ∈ C((t_{i-1}, t_i], R^n) denote the corresponding solutions. Then, by virtue of the Grönwall inequality, it follows from assumption (i) of Proposition 3.4.8 that

\[ \|x^o(t) - x^k(t)\| \le \|\xi^o - \xi^k\| \exp\{ K [t - t_{i-1}] \}, \quad t \in (t_{i-1}, t_i]. \]

Thus the solution of Eq. (3.60) depends continuously on the initial data and has a well defined continuous limit from the left; in particular, x^k(t_i-) → x^o(t_i-) as k → ∞. Given this fact, to prove the proposition it suffices to prove it for the discrete evolution, which depends directly on the controls. Consider the jump instant t_i for any i ∈ {0, 1, 2, ⋯, κ}, and let {w^k} = {a^k, v^k} ∈ U_d be a sequence of controls converging to w^o = {a^o, v^o} as k → ∞. Hence, for each i ∈ {0, 1, 2, ⋯, κ}, we have {a_i^k, v_i^k} → {a_i^o, v_i^o} as k → ∞. Letting x^k and x^o denote the corresponding solutions and considering the jump instant t_i, we have

\[ x^k(t_i) = x^k(t_i-) + a_i^k\,G(t_i, x^k(t_i), v_i^k), \tag{3.61} \]
\[ x^o(t_i) = x^o(t_i-) + a_i^o\,G(t_i, x^o(t_i), v_i^o). \tag{3.62} \]

Subtracting Eq. (3.61) from Eq. (3.62) term by term and rearranging terms suitably, we obtain

\[ x^o(t_i) - x^k(t_i) = [x^o(t_i-) - x^k(t_i-)] + (a_i^o - a_i^k)\,G(t_i, x^o(t_i), v_i^o) + a_i^k\,[G(t_i, x^o(t_i), v_i^o) - G(t_i, x^o(t_i), v_i^k)] + a_i^k\,[G(t_i, x^o(t_i), v_i^k) - G(t_i, x^k(t_i), v_i^k)]. \tag{3.63} \]

Computing the norm, using the triangle inequality, and recalling that G is a contraction in the state variable (see assumption (ii) of Proposition 3.4.8) and that |a_i^k| ≤ 1, we arrive at the following inequality:

\[ \|x^o(t_i) - x^k(t_i)\| \le \frac{1}{1-L} \Big( \|x^o(t_i-) - x^k(t_i-)\| + |a_i^o - a_i^k|\,\|G(t_i, x^o(t_i), v_i^o)\| + |a_i^k|\,\|G(t_i, x^o(t_i), v_i^o) - G(t_i, x^o(t_i), v_i^k)\| \Big). \tag{3.64} \]

We have already seen that the first term on the right-hand side of the above inequality converges to zero as k → ∞. Considering the second term: since G is continuous in all its variables with at most linear growth in the state variable, x^o ∈ B_∞(I, R^n), and U is a closed bounded subset of R^d, it is clear that ‖G(t_i, x^o(t_i), v_i^o)‖ < ∞. Thus, as a^k → a^o, the second term in the above inequality converges to zero. Similarly, since [−1, +1] is a compact interval and G is continuous, the third term converges to zero as v^k → v^o. Hence, as k → ∞, the right-hand side of the inequality (3.64) converges to zero, and therefore the solution depends continuously on the control. This completes the proof. □

Next we consider the general model given by Eq. (3.50). Let M_ad ⊂ M_bfa(A) be a nonempty bounded set denoting the set of admissible control measures; a more precise characterization of this set is given later. Let B_∞(I, R^n) denote the Banach space of bounded measurable functions with the standard supnorm topology ‖z‖_{B_∞(I,R^n)} = sup{‖z(t)‖ : t ∈ I}. We need the following basic assumptions. Here the set U ⊂ R^d is not necessarily bounded.


(A1) F : I × R^n → R^n is Borel measurable and there exists a constant K_1 > 0 such that

(1) ‖F(t, x)‖ ≤ K_1(1 + ‖x‖), x ∈ R^n, t ∈ I,
(2) ‖F(t, x) − F(t, y)‖ ≤ K_1 ‖x − y‖, x, y ∈ R^n, t ∈ I.

(A2) G : I × R^n × U → R^n is Borel measurable and there exist a finite measurable function K_2 : U → R_0 ≡ [0, ∞) and a nonnegative bounded measure ν ∈ M_bfa^+(A_I) such that

(1) ‖G(t, x, ξ)‖ ≤ K_2(ξ)(1 + ‖x‖), x ∈ R^n, t ∈ I, ξ ∈ U,
(2) ‖G(t, x, ξ) − G(t, y, ξ)‖ ≤ K_2(ξ) ‖x − y‖, x, y ∈ R^n, t ∈ I, ξ ∈ U,

with K_2 satisfying

\[ \sup_{\mu \in M_{ad}} \int_{U \times \sigma} K_2(\xi)\,|\mu|(d\xi \times dt) \le \nu(\sigma), \quad \text{for any } \sigma \in \mathcal{B}(I). \]

Theorem 3.4.10 Consider the system (3.50) with the control measure μ ∈ M_ad ⊂ M_bfa(A_{U×I}) and suppose the assumptions (A1)–(A2) hold. Then, for every x_0 ∈ R^n, the system (3.50) has a unique solution x ∈ B_∞(I, R^n).

Proof The proof is based on the Banach fixed point theorem. For any given x_0 ∈ R^n and μ ∈ M_ad, we define the operator Φ on B_∞(I, R^n) as follows:

\[ (\Phi x)(t) \equiv x_0 + \int_0^t F(s, x(s))\,ds + \int_0^t \int_U G(s, x(s), \xi)\,\mu(d\xi \times ds), \quad t \in I. \tag{3.65} \]

Under the given assumptions, we show that Φ maps B_∞(I, R^n) into itself. Computing the norm of Φx and using the assumptions (A1) and (A2), it follows from the triangle inequality that, for each t ∈ I,

\[ \|(\Phi x)(t)\| \le \|x_0\| + K_1 t \Big( 1 + \sup_{0 \le s \le t} \|x(s)\| \Big) + \Big( 1 + \sup_{0 \le s \le t} \|x(s)\| \Big) \int_0^t \int_U K_2(\xi)\,|\mu|(d\xi \times ds). \tag{3.66} \]

Using the assumption (A2) related to the function K_2, it follows from the above inequality that

\[ \|(\Phi x)(t)\| \le \|x_0\| + \Big( K_1 t + \int_0^t \nu(ds) \Big) \Big( 1 + \sup_{0 \le s \le t} \|x(s)\| \Big), \quad t \in I. \tag{3.67} \]


Since I ≡ [0, T] is a finite interval and ν ∈ M_bfa^+(A_I), it follows from the above inequality that

\[ \|\Phi x\|_{B_\infty(I,R^n)} \le \|x_0\| + \big( K_1 T + \nu(I) \big)\big( 1 + \|x\|_{B_\infty(I,R^n)} \big). \tag{3.68} \]

This shows that the operator Φ maps B_∞(I, R^n) into B_∞(I, R^n). Next we show that a suitable iterate of Φ is a contraction. Let x, y ∈ B_∞(I, R^n) with x(0) = y(0) = x_0. Using the expression (3.65), it is easy to verify that, for each t ∈ I,

\[ \|(\Phi x)(t) - (\Phi y)(t)\| \le K_1 \int_0^t \|x(s) - y(s)\|\,ds + \int_0^t \int_U K_2(\xi)\,\|x(s) - y(s)\|\,|\mu|(d\xi \times ds). \tag{3.69} \]

Using the assumption (A2) related to K_2, this can be rewritten as

\[ \|(\Phi x)(t) - (\Phi y)(t)\| \le K_1 \int_0^t \|x(s) - y(s)\|\,ds + \int_0^t \|x(s) - y(s)\|\,\nu(ds), \quad t \in I. \tag{3.70} \]

Define the function β(t) ≡ ∫_0^t K_1 ds + ∫_0^t ν(ds), t ∈ I. Since ν ∈ M_bfa^+(A_I), it is clear that β is a nonnegative monotone increasing function of bounded total variation on I. Using this function, the expression (3.70) can be rewritten as

\[ \|(\Phi x)(t) - (\Phi y)(t)\| \le \int_0^t \|x(s) - y(s)\|\,d\beta(s), \quad t \in I. \tag{3.71} \]

For any pair x, y ∈ B_∞(I, R^n) and t ∈ I, define ρ_t(x, y) ≡ sup{‖x(s) − y(s)‖ : 0 ≤ s ≤ t}, and note that ρ_T(x, y) = ‖x − y‖_{B_∞(I,R^n)}. Using this notation, we can rewrite the inequality (3.71) as follows:

\[ \rho_t(\Phi x, \Phi y) \le \int_0^t \rho_s(x, y)\,d\beta(s), \quad t \in I. \tag{3.72} \]


Using the same argument as presented in the proof of Theorem 3.4.4, we can find a continuous, nonnegative, monotone increasing function β_o(t), t ∈ I, of bounded variation such that β_o(t) ≥ β(t+) for all t ∈ I and β̇_o(t) > β̇(t) ≥ 0 for almost all t ∈ I. Thus, it follows from the inequality (3.72) that

\[ \rho_t(\Phi x, \Phi y) \le \int_0^t \rho_s(x, y)\,d\beta(s) \le \int_0^t \rho_s(x, y)\,d\beta_o(s), \quad t \in I. \tag{3.73} \]

Considering the second iterate of the operator Φ (i.e., Φ² ≡ Φ ∘ Φ), it follows from the expression (3.73) and the fact that t ↦ ρ_t(x, y) is a nondecreasing function that, for each t ∈ I,

\[ \rho_t(\Phi^2 x, \Phi^2 y) \le \int_0^t \rho_s(\Phi x, \Phi y)\,d\beta_o(s) \le \int_0^t \Big( \int_0^s \rho_\theta(x, y)\,d\beta_o(\theta) \Big)\,d\beta_o(s) \le \int_0^t \rho_s(x, y)\,\beta_o(s)\,d\beta_o(s). \tag{3.74} \]

Clearly, it follows from this inequality that

\[ \rho_t(\Phi^2 x, \Phi^2 y) \le \rho_t(x, y)\,\big( \beta_o^2(t)/2 \big), \quad t \in I. \tag{3.75} \]

Following the above procedure step by step, one can easily verify that the m-th iterate of Φ satisfies the following inequality:

\[ \rho_t(\Phi^m x, \Phi^m y) \le \rho_t(x, y)\,\big( \beta_o^m(t)/m! \big), \quad t \in I. \tag{3.76} \]

Thus, for t = T, we have

\[ \|\Phi^m x - \Phi^m y\|_{B_\infty(I,R^n)} \le \alpha_m \|x - y\|_{B_\infty(I,R^n)}, \tag{3.77} \]

where α_m = (β_o(T))^m / m!. Since β_o(T) is finite, for m ∈ ℕ sufficiently large we have α_m < 1, and hence the m-th iterate of the operator Φ is a contraction. Thus it follows from the Banach fixed point theorem that Φ^m has a unique fixed point x* ∈ B_∞(I, R^n). Using this fact, one can easily verify that x* is also the unique fixed point of the operator Φ itself. This proves the existence of a unique solution of Eq. (3.50) in the Banach space B_∞(I, R^n). □

Under the assumptions of Theorem 3.4.10, we show that the solution set is a bounded subset of the Banach space B_∞(I, R^n). We denote the solution by x(μ), indicating its dependence on μ.

Corollary 3.4.11 Consider the system (3.50) with the set of admissible control measures M_ad = M_ad(A) a norm bounded subset of M_bfa(A), and suppose the


assumptions of Theorem 3.4.10 hold. Then the solution set

\[ S \equiv \{ x \in B_\infty(I, R^n) : x = x(\mu) \ \text{for some}\ \mu \in M_{ad} \} \]

is a bounded subset of B_∞(I, R^n).

Proof By virtue of Theorem 3.4.10, for each μ ∈ M_ad, Eq. (3.50) has a unique solution x(μ) ∈ B_∞(I, R^n). Thus x(μ) satisfies the following integral equation:

\[ x(\mu)(t) = x_0 + \int_0^t F(s, x(\mu)(s))\,ds + \int_0^t \int_U G(s, x(\mu)(s), \xi)\,\mu(d\xi \times ds), \quad t \in I. \tag{3.78} \]

By taking the norm on either side and using the assumptions (A1) and (A2), it follows from the triangle inequality that

\[ \|x(\mu)(t)\| \le \|x_0\| + K_1 T + \int_0^T \int_U K_2(\xi)\,|\mu|(d\xi \times ds) + \int_0^t \|x(\mu)(s)\|\,d\beta(s) \le C + \int_0^t \|x(\mu)(s)\|\,d\beta(s), \quad t \in I, \tag{3.79} \]

where C ≡ ‖x_0‖ + K_1 T + ν(I). By virtue of the assumption (A2) related to the function K_2, we have C < ∞. Using the generalized Grönwall inequality ([7, Lemma 5, p. 268]), it follows from the above inequality that

\[ \|x(\mu)(t)\| \le C + \int_0^t C e^{\beta(t)}\,d\beta(s), \quad t \in I, \ \text{for all}\ \mu \in M_{ad}. \tag{3.80} \]

Hence, by virtue of the assumption (A2) related to the function K_2, we have

\[ \sup\{ \|x(\mu)\|_{B_\infty(I,R^n)} : \mu \in M_{ad} \} \le C \big( 1 + \beta(T)\,\exp\{\beta(T)\} \big) < \infty. \tag{3.81} \]

Thus, the solution set S is a bounded subset of B_∞(I, R^n). This completes the proof. □

Remark 3.4.12 All the results of this section can easily be extended to cover systems of the form (3.50) with μ ∈ M_bfa(A, R^d) and G : I × R^n × U → L(R^d, R^n). The proof requires only minor changes in wording.

3.5 Differential Inclusions

Here we consider systems governed by differential inclusions. Many dynamic systems with an incomplete description, or incomplete knowledge of the constituent system parameters (with a known range of values), can be described by differential inclusions of the form

\[ \dot{x}(t) \in F(t, x(t)), \quad x(0) = x_0, \ t \in I, \tag{3.82} \]

where F is a multi-valued or set-valued function. If for each (t, x) ∈ I × R^n the diameter of the set F(t, x) is zero, then F(t, x) is a point in R^n, meaning F is single-valued (not set-valued), and hence we have a differential equation. It is clear that the larger the diameter, the greater the uncertainty. Here we present a few examples where differential inclusions are appropriate models.

(E1) An important example is a system with parametric uncertainty. Consider a controlled nonlinear system given by ẋ = f(t, x, u; α), x(0) = x_0, where α is a vector of basic parameters that determines the vector field f. If all the parameters are exactly known, then we have an exact differential equation describing the dynamics. In the physical sciences these parameters are fundamental and are determined experimentally, giving only an approximate range of values they may take. Letting 𝒫 denote this range, we have the multifunction

\[ F(t, x, u) \equiv f(t, x, u, \mathcal{P}) \equiv \{ \xi \in R^n : \xi = f(t, x, u, \alpha) \ \text{for some}\ \alpha \in \mathcal{P} \}, \]

leading to the controlled differential inclusion ẋ ∈ F(t, x, u), x(0) = x_0.

(E2) A closely related problem arises in the study of controlled nonlinear variational inequalities of the form

\[ \langle \dot{x}(t) - f(t, x(t), u(t)),\ y - x(t) \rangle \le \psi(y) - \psi(x(t)), \quad y \in R^n, \ t \in I, \]

where ψ is a real-valued function defined on R^n. We show that this is equivalent to a differential inclusion. For this we need the notion of sub-differentials. A function ψ : R^n → (−∞, ∞], not identically +∞, is said to be a proper convex function if it satisfies the following inequality:

\[ \psi((1-\alpha)x + \alpha y) \le (1-\alpha)\psi(x) + \alpha \psi(y), \quad \forall\ x, y \in R^n, \ \alpha \in [0, 1]. \]

The sub-differential of a proper convex function at any x ∈ R^n is given by

\[ \partial\psi(x) \equiv \{ \xi \in R^n : \langle \xi, y - x \rangle \le \psi(y) - \psi(x), \ \forall\ y \in R^n \}. \]
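The defining inequality of the sub-differential can be tested numerically. Here is a minimal scalar sketch of ours (not from the text), using ψ(x) = |x|, which is proper convex but not differentiable at 0; its sub-differential is the whole interval [−1, 1] at x = 0 and the singleton {sign(x)} elsewhere.

```python
import numpy as np

def in_subdifferential(xi, x, psi, grid):
    """Checks the defining inequality of the sub-differential of a
    convex psi: R -> R at the point x, i.e.
        xi * (y - x) <= psi(y) - psi(x)  for all y,
    tested on a finite grid of y values (a numerical sketch only)."""
    return all(xi * (y - x) <= psi(y) - psi(x) + 1e-12 for y in grid)

# psi(x) = |x|: every xi in [-1, 1] satisfies the inequality at x = 0,
# while xi outside that interval fails it for some test point y.
grid = np.linspace(-5.0, 5.0, 1001)
```

For instance, ξ = 0.5 belongs to ∂|·|(0) while ξ = 1.5 does not, and at x = 2 only ξ = 1 passes the test, consistent with ∂ψ(2) = {1}.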

If $\psi$ is a convex function from $R^n$ to the reals, possessing sub-differential $\partial\psi$, the variational inequality is equivalent to the differential inclusion $\dot{x} - f(t,x,u) \in \partial\psi(x)$. To verify this, take $y = x(t) + \varepsilon h$ for any $h \in R^n$. Substituting this into the above inequality we obtain

$$\langle \dot{x}(t) - f(t,x(t),u),\, \varepsilon h\rangle \le \psi(x(t) + \varepsilon h) - \psi(x(t)).$$

If $\psi$ is Gâteaux differentiable, with the Gâteaux derivative denoted by $D\psi$, then dividing the above expression by $\varepsilon$ and letting $\varepsilon \to 0$ we obtain the inequality

$$\langle \dot{x}(t) - f(t,x(t),u),\, h\rangle \le \langle D\psi(x(t)), h\rangle, \quad \forall\ h \in R^n.$$

Since this holds for all $h \in R^n$ (in particular for $-h$), we must have $\dot{x}(t) - f(t,x(t),u) = D\psi(x(t))$. On the other hand, if $\psi$ is not Gâteaux differentiable but has sub-differentials $\partial\psi(x)$ (a set-valued function), then we have the differential inclusion

$$\dot{x}(t) - f(t,x(t),u) \in \partial\psi(x(t)), \quad t \ge 0.$$

Defining the multifunction $F(t,x,u) \equiv f(t,x,u) + \partial\psi(x)$, we obtain the controlled differential inclusion $\dot{x}(t) \in F(t,x(t),u)$.

(E3) Another interesting example arises from systems governed by differential-algebraic equations of the form $\dot{x} = f(x,y)$, $g(x,y) = 0$, where $f : R^n \times R^m \to R^n$ and $g : R^n \times R^m \to R^m$. Define the multifunction $G : R^n \to 2^{R^m} \setminus \emptyset$ by setting $G(x) \equiv \{y \in R^m : g(x,y) = 0\}$. Then an equivalent formulation of the differential-algebraic system is given by the differential inclusion $\dot{x} \in F(x)$, where $F(x) \equiv f(x, G(x))$ for all $x \in R^n$ for which $G(x) \neq \emptyset$.

In order to study differential inclusions, we need certain basic regularity properties of multifunctions. Let $2^{R^n}$ denote the power set of $R^n$ and let $c(R^n) \subset 2^{R^n} \setminus \emptyset$ denote the set of all nonempty closed subsets of $R^n$. The most important properties of interest for set-valued functions are upper semi-continuity (usc), lower semi-continuity (lsc), and measurability.
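The defining inequality of the sub-differential can be tested directly on a simple non-smooth convex function. The choice $\psi(x) = |x|$ below is ours, not the text's; the sketch verifies numerically that $\partial\psi(0) = [-1,1]$.

```python
# Illustrative check (our choice: psi(x) = |x| on R) of the defining
# inequality of the sub-differential at x = 0: xi is a subgradient iff
# xi * (y - x) <= psi(y) - psi(x) for all y.  Here partial psi(0) = [-1, 1].

def is_subgradient(xi, x, psi, ys):
    return all(xi * (y - x) <= psi(y) - psi(x) + 1e-12 for y in ys)

psi = abs
ys = [k / 10.0 for k in range(-50, 51)]          # test points in [-5, 5]

inside = all(is_subgradient(k / 10.0, 0.0, psi, ys)
             for k in range(-10, 11))            # xi in [-1, 1]: subgradients
outside = is_subgradient(1.5, 0.0, psi, ys)      # |xi| > 1: fails, e.g. at y = 0.1
```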

Definition 3.5.1 (usc) Let $(X,d)$, $(Y,\rho)$ be two metric spaces. A multifunction $F : X \to 2^Y \setminus \emptyset$ is said to be upper semicontinuous at $x \in X$ if, for every open set $V \subset Y$ with $F(x) \subset V$, there exists an open neighborhood U of x such that $F(\xi) \subset V$ for all $\xi \in U$. The multifunction F is said to be upper semicontinuous on X if it is so at every point of X. If F is a single-valued map, the above definition reduces to the usual definition of continuity.

Definition 3.5.2 (lsc) A multifunction $F : X \to 2^Y \setminus \emptyset$ is said to be lower semicontinuous at $x \in X$ if, for every open set $O \subset Y$ for which $F(x) \cap O \neq \emptyset$, there exists an open neighborhood $N(x)$ of x such that $F(\xi) \cap O \neq \emptyset$ for all $\xi \in N(x)$.

A multifunction $F : X \to 2^Y \setminus \emptyset$ is said to be continuous on X if it is both usc and lsc on X. A multifunction is said to be Hausdorff continuous on X if it is continuous with respect to the Hausdorff metric $d_H$ introduced in Chap. 1.

Definition 3.5.3 (Measurability) Let $(\Omega, \mathcal{B})$ be a measurable space and $Y = (Y,\rho)$ a metric space. A multifunction $F : \Omega \to 2^Y \setminus \emptyset$ is said to be measurable if, for every open set $O \subset Y$,

$$F^-(O) \equiv \{\omega \in \Omega : F(\omega) \cap O \neq \emptyset\} \in \mathcal{B}.$$

An equivalent definition of measurability is: the multifunction F is measurable if and only if the function $\omega \mapsto \rho(y, F(\omega))$ is $\mathcal{B}$-measurable for every $y \in Y$.

With this preparation we are now ready to consider the question of existence of solutions of differential inclusions.

Theorem 3.5.4 Consider the system governed by the differential inclusion

$$\dot{x} \in F(t,x), \quad x(0) = x_0, \tag{3.83}$$

and suppose the following assumptions hold:

(F1) $F : I \times R^n \to c(R^n) \setminus \emptyset$ is measurable in t on I for each fixed $x \in R^n$ and, for almost all $t \in I$, it is upper semicontinuous (usc) on $R^n$.

(F2) For every finite positive number r, there exists an $\ell_r \in L_1^+(I)$ such that

$$\inf\{\|z\| : z \in F(t,x)\} \le \ell_r(t), \quad \forall\ t \in I,\ \forall\ x \in B_r.$$

(F3) There exists an $\ell \in L_1^+(I)$ such that

$$d_H(F(t,x), F(t,y)) \le \ell(t)\,\|x - y\|, \quad \forall\ x, y \in R^n,\ t \in I.$$

Then, for each initial state $x(0) = x_0 \in R^n$, the system (3.83) has at least one solution $x \in C(I, R^n)$.
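Assumption (F3) can be checked concretely when $F(t,x)$ is an interval. For closed intervals the Hausdorff metric reduces to the larger of the two endpoint gaps, so for the hypothetical example $F(x) = [\sin x - 1, \sin x + 1]$ one gets $d_H(F(x),F(y)) = |\sin x - \sin y| \le |x-y|$, i.e. (F3) holds with $\ell(t) \equiv 1$. A small illustrative check:

```python
import math

# For closed intervals [a0, a1], [b0, b1] the Hausdorff metric reduces to
# the larger endpoint gap.  With the hypothetical map
# F(x) = [sin(x) - 1, sin(x) + 1], d_H(F(x), F(y)) = |sin x - sin y|,
# so (F3) holds with Lipschitz coefficient l(t) = 1.

def hausdorff_interval(a, b):
    """Hausdorff distance between the closed intervals a and b."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def F(x):
    return (math.sin(x) - 1.0, math.sin(x) + 1.0)

pairs = [(0.0, 0.3), (1.0, -2.0), (2.5, 2.6), (-0.7, 3.1)]
lipschitz_ok = all(hausdorff_interval(F(x), F(y)) <= abs(x - y) + 1e-12
                   for x, y in pairs)
```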

Proof For any $f \in L_1(I,R^n)$ consider the elementary differential equation $\dot{x} = f$, $x(0) = x_0$, $t \in I$. Clearly, $x(t) = x_0 + \int_0^t f(s)ds$, $t \in I$. This defines the operator R giving $x(t) = R(f)(t) \equiv R_t(f)$, where R maps $L_1(I,R^n)$ to $C(I,R^n)$; in fact $x \in AC(I,R^n)$. For any $f \in L_1(I,R^n)$ define the multifunction

$$\hat{F}(f) \equiv \{g \in L_1(I,R^n) : g(t) \in F(t, R_t(f)) \text{ a.e. } t \in I\}. \tag{3.84}$$

Note that if the multifunction $\hat{F}$ has a fixed point $h \in L_1(I,R^n)$, then by definition $h \in \hat{F}(h)$ and so $h(t) \in F(t, R_t(h))$ a.e. $t \in I$. Thus the function x defined by $x(t) = R_t(h)$, $t \in I$, solves the differential inclusion (3.83). Conversely, corresponding to any solution x of the differential inclusion (3.83) there exists an $f \in L_1(I,R^n)$, given by $f(t) \equiv \dot{x}(t)$, $t \in I$, which is a fixed point of the multifunction $\hat{F}$. Thus it suffices to show that the multifunction $\hat{F}$ has a fixed point in $L_1(I,R^n)$.

Using the Banach fixed point theorem for multi-valued maps, we prove that $\hat{F}$ has a nonempty set of fixed points. For this we must show that $\hat{F}$ is nonempty with closed values in $2^{L_1(I,R^n)} \setminus \emptyset$ and that it is Lipschitz with respect to the Hausdorff metric on $c(L_1(I,R^n))$ with Lipschitz constant less than 1. Let $g \in L_1(I,R^n)$ and define $x = Rg$. Clearly $x \in C(I,R^n)$. Since $(t,e) \mapsto F(t,e)$ is measurable in t on I and usc in e on $R^n$, it is clear that $t \mapsto F(t,x(t))$ is a nonempty measurable set-valued function on I. Thus, by the Kuratowski–Ryll-Nardzewski Selection Theorem (see Chap. 1, Theorem 1.11.15), the multifunction $t \mapsto F(t,x(t))$ has measurable selections. We must show that it has $L_1(I,R^n)$ selections. Since $x \in C(I,R^n)$ and $I \equiv [0,T]$ is a closed bounded interval, there exists a finite positive number r such that $x(t) \in B_r$ for all $t \in I$. Then, by virtue of assumption (F2), it follows from the theorem on the existence of $L_1(I,R^n)$ selections (see Chap. 1, Theorem 1.11.16, with p = 1) that the multifunction $t \mapsto F(t,x(t))$ has a nonempty set of $L_1$ selections. Thus, for each $g \in L_1(I,R^n)$, $\hat{F}(g) \neq \emptyset$. Further, since F is closed valued, $\hat{F}$ is also closed valued, that is, $\hat{F}(g) \in c(L_1(I,R^n))$ for each $g \in L_1(I,R^n)$.

For application of the Banach fixed point theorem, we must now show that $\hat{F}$ is Lipschitz with Lipschitz constant less than 1. Define the function

$$\gamma(t) \equiv \int_0^t \ell(s)ds, \quad t \in I,$$

where $\ell \in L_1^+(I)$ (see (F3)). Set $X_0 \equiv L_1(I,R^n)$ and, for any $\delta > 0$, define the vector space

$$X_\delta \equiv \Big\{z \in L_0(I,R^n) : \int_I \|z(t)\|\, e^{-2\delta\gamma(t)}dt < \infty\Big\},$$

where $L_0(I,R^n)$ denotes the vector space of measurable functions with values in $R^n$. Furnished with the norm topology given by

$$\|z\|_\delta \equiv \int_I \|z(s)\|\, e^{-2\delta\gamma(s)}ds,$$

$X_\delta$ is a Banach space. Let $g_1, g_2 \in L_1(I,R^n)$ and define $x_1 = R(g_1)$, $x_2 = R(g_2)$. Clearly,

$$\|x_1(t) - x_2(t)\| \le \int_0^t \|g_1(s) - g_2(s)\|\, ds, \quad t \in I.$$

Let ε > 0 and z1 ∈ Fˆ (g1 ), that is, z1 (t) ∈ F (t, x1 (t)), t ∈ I. Since F is Hausdorff Lipschitz with respect to its second argument, there exists a z2 ∈ Fˆ (g2 ) such that  z1 (t) − z2 (t) ≤ (t)  x1 (t) − x2 (t)  +ε, t ∈ I. Define

t

h(t) ≡

 g1 (s) − g2 (s)  ds, t ∈ I.

0

Then, it follows from the above inequality that  z1 (t) − z2 (t) ≤ (t)h(t) + ε, a.e t ∈ I. Multiplying the above inequality on both sides by e−2δγ (t ) and integrating over the interval I, we obtain  z1 (t) − z2 (t)  e−2δγ (t )dt ≤ h(t)e−2δγ (t )dγ (t) + εT . I

I

Integrating by parts the right-hand expression of the above inequality, we obtain

 z1 (t) − z2 (t)  e−2δγ (t) dt ≤ −(1/2δ)h(T )e−2δγ (T )

I



 g1 (t) − g2 (t)  e−2δγ (t) dt + εT .

+(1/2δ) I

Thus, we have  z1 − z2 δ ≤ (1/2δ)  g1 − g2 δ +εT . Letting DH,δ denote the Hausdorff distance on c(Xδ ), the class of nonempty closed subsets of the metric space Xδ , it follows from the above inequality that DH,δ (Fˆ (g1 ), Fˆ (g2 )) ≤ (1/2δ)  g1 − g2 δ +εT . Since ε > 0 is arbitrary and the above inequality holds for all such ε, we conclude that the multifunction Fˆ is Lipschitz with respect to the Hausdorff metric DH,δ on Xδ . Thus, for δ > (1/2), Fˆ is a multi-valued contraction and hence by
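The contraction argument above suggests a numerical scheme: iterate $g \mapsto$ (selection of $F(\cdot, R(g)(\cdot))$ nearest to g), which mirrors the choice of $z_2$ realizing the Hausdorff estimate. The following sketch is illustrative only; the interval-valued map $F(t,x) = [\sin x - 0.1,\ \sin x + 0.1]$ and the Euler/Riemann discretization are our own choices, not the text's.

```python
import math

# Fixed-point iteration sketch for the inclusion x' in F(t, x),
# F(t, x) = [sin(x) - 0.1, sin(x) + 0.1] (hypothetical example):
# given g, form x = R(g) by quadrature, then replace g(t) by the point
# of F(t, x(t)) nearest to g(t) (a selection, as in the proof).

T, n = 1.0, 200
dt = T / n
x0 = 0.5

def R(g):
    """x(t) = x0 + integral_0^t g(s) ds on the grid (left Riemann sums)."""
    x, acc = [x0], x0
    for k in range(n - 1):
        acc += dt * g[k]
        x.append(acc)
    return x

def project(v, lo, hi):
    """Nearest point of the interval [lo, hi] to v."""
    return min(max(v, lo), hi)

g = [0.0] * n                      # initial guess in L1(I, R)
for _ in range(50):                # iterate g <- selection of F(., R(g))
    x = R(g)
    g = [project(g[k], math.sin(x[k]) - 0.1, math.sin(x[k]) + 0.1)
         for k in range(n)]

# At the (approximate) fixed point, g(t) lies in F(t, x(t)) everywhere:
x = R(g)
residual = max(max(0.0, abs(g[k] - math.sin(x[k])) - 0.1) for k in range(n))
```

As in the Picard iteration for ordinary differential equations, successive differences decay factorially, so a few dozen iterations suffice on this short interval.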

the generalized Banach fixed point theorem for multi-valued maps (see Chap. 3, Theorem 3.2.2), $\hat{F}$ has at least one fixed point $f_o \in X_\delta$. The reader can easily verify that

$$\|z\|_\delta \le \|z\|_0 \le c\,\|z\|_\delta, \quad \text{where } c = e^{2\delta\gamma(T)}.$$

Thus the Banach spaces $X_\delta$ and $X_0$ are equivalent, and hence $f_o$ is also a fixed point of $\hat{F}$ in the Banach space $X_0$. This completes the proof. □

Under an additional assumption we can prove that the solution set S of the system (3.83) is compact, and hence closed. This is presented in the following corollary.

Corollary 3.5.5 Suppose the assumptions of Theorem 3.5.4 hold and, further, there exists a $K \in L_1^+(I)$ such that the multifunction F satisfies the growth condition

$$\sup\{\|z\| : z \in F(t,x)\} \le K(t)(1 + \|x\|), \quad x \in R^n.$$

Then the solution set S is a compact subset of $C(I,R^n)$.

Proof Denote by $Fix(\hat{F}) \equiv \{f \in L_1(I,R^n) : f \in \hat{F}(f)\}$ the set of fixed points of the multifunction $\hat{F}$. The set $Fix(\hat{F})$ is nonempty by Theorem 3.5.4. By virtue of the growth assumption, it is a bounded subset of $L_1(I,R^n)$. Indeed, for any $f \in Fix(\hat{F})$ it follows from the growth assumption that

$$\|f(t)\| \le K(t)(1 + \|R_t(f)\|) \le K(t)\Big(1 + \|x_0\| + \int_0^t \|f(s)\|\, ds\Big).$$

From this inequality one can easily verify that

$$\varphi(t) \le C + \int_0^t K(s)\varphi(s)ds, \quad t \in I,$$

where

$$\varphi(t) = \int_0^t \|f(s)\|\, ds \quad \text{and} \quad C = (1 + \|x_0\|)\,\|K\|_{L_1(I)}.$$

Thus it follows from the Grönwall inequality that $\varphi(T) \le C \exp\{\int_I K(s)ds\}$. Since C is independent of $f \in Fix(\hat{F})$, it follows that $Fix(\hat{F})$ is a bounded subset of $L_1(I,R^n)$. Let b denote this bound. We show that $Fix(\hat{F})$ is uniformly integrable. Again, it follows from the growth assumption that every $f \in Fix(\hat{F})$ must satisfy the inequality

$$\|f(t)\| \le K(t)\Big(1 + \|x_0\| + \int_0^t \|f(s)\|\, ds\Big) \le (1 + \|x_0\| + b)K(t) \equiv \tilde{b}K(t), \quad t \in I.$$

Clearly, letting $\lambda$ denote the Lebesgue measure, it follows from this that

$$\lim_{\lambda(\sigma)\to 0} \int_\sigma \|f(t)\|\, dt = 0$$

uniformly with respect to $f \in Fix(\hat{F})$. Hence, by the Dunford–Pettis Theorem (see Chap. 1, Theorem 1.11.10), $Fix(\hat{F})$ is a relatively weakly sequentially compact subset of $L_1(I,R^n)$. Using the above results and the upper semi-continuity (usc) property of F, one can verify that $Fix(\hat{F})$ is weakly sequentially closed. Thus the set $Fix(\hat{F})$ is weakly sequentially compact. Since the solution set is given by $S \equiv \{R(f) : f \in Fix(\hat{F})\}$ and R is an affine continuous map, one can easily verify that the set S is a bounded and equicontinuous subset of $C(I,R^n)$, and hence compact. This completes the proof. □

Next we consider impulsive, or more generally measure driven, differential inclusions, given by the following differential inclusion:

$$dx \in F(t,x(t))dt + G(t,x(t))\gamma(dt), \quad x(0) = x_0,\ t \in I. \tag{3.85}$$

Theorem 3.5.6 Consider the differential inclusion (3.85) and suppose F is a multifunction satisfying the assumptions (F1), (F2), (F3) of Theorem 3.5.4. Let $G : I \times R^n \to M(n \times m) \equiv L(R^m, R^n)$ be a Borel measurable function satisfying the following assumptions:

(A1) $\|G(t,x)\| \le L(t)(1 + \|x\|)$,
(A2) $\|G(t,x) - G(t,y)\| \le L(t)\,\|x - y\|$,

with $L \in L_1^+(I, |\gamma|)$, where $|\gamma| \in M_{ca}^+(\Sigma_I)$ denotes the variation of the vector measure $\gamma \in M_{ca}(\Sigma_I, R^m)$. Then, for every $x_0 \in R^n$, the evolution inclusion (3.85) has at least one solution $x \in B_\infty(I,R^n)$.

Proof Let $f \in L_1(I,R^n)$ and consider the following measure driven differential equation:

$$dx = f(t)dt + G(t,x(t))\gamma(dt), \quad x(0) = x_0,\ t \in I. \tag{3.86}$$

This is a special case of the system considered in Theorem 3.4.4. Thus it follows from that theorem that Eq. (3.86) has a unique solution $x \in B_\infty(I,R^n)$. Further, using the generalized Grönwall inequality, one can verify that the solution map $\Upsilon$, defined by $f \mapsto x(f) \equiv \Upsilon(f)$ from $L_1(I,R^n)$ to $B_\infty(I,R^n)$, is continuous and Lipschitz. Indeed, let $x_1$ denote the solution of Eq. (3.86) corresponding to $f_1 \in L_1(I,R^n)$, and $x_2$ the solution corresponding to $f_2 \in L_1(I,R^n)$. Then

$$\|\Upsilon_t(f_1) - \Upsilon_t(f_2)\| = \|x_1(t) - x_2(t)\| \le C \int_0^t \|f_1(s) - f_2(s)\|\, ds, \quad t \in I, \tag{3.87}$$

where

$$C = \Big(1 + \int_0^T L(s)\,|\gamma|(ds)\Big)\exp\Big\{\int_0^T L(s)\,|\gamma|(ds)\Big\}$$

and $|\gamma|(\cdot)$ is the nonnegative measure induced by the variation of the vector measure $\gamma$. For convenience of notation we write $x(f)(t) \equiv \Upsilon_t(f)$, $t \in I$. Now let us introduce the multifunction

$$\hat{F}(f) \equiv \{g \in L_1(I,R^n) : g(t) \in F(t, \Upsilon_t(f)) \text{ a.e. } t \in I\}. \tag{3.88}$$

Since $F(t,\xi) \in c(R^n)$ for each $t \in I$ and $\xi \in R^n$, one can verify that $\hat{F}(f) \in c(L_1(I,R^n))$, where $c(L_1(I,R^n))$ denotes the class of nonempty closed subsets of $L_1(I,R^n)$. If the multifunction $\hat{F}$ has a fixed point $f_o \in L_1(I,R^n)$, that is, $f_o \in \hat{F}(f_o)$, it is clear that $f_o(t) \in F(t, \Upsilon_t(f_o))$ a.e. $t \in I$. Thus the function $x_o \in B_\infty(I,R^n)$ given by $x_o(t) \equiv \Upsilon_t(f_o)$, $t \in I$, is the solution of Eq. (3.86) corresponding to $f = f_o$. Thus, corresponding to every fixed point of the multifunction $\hat{F}$, there exists a solution of the differential inclusion (3.85). Conversely, suppose the differential inclusion (3.85) has a solution $x^o \in B_\infty(I,R^n)$; then the multifunction $t \mapsto F(t, x^o(t))$ is measurable. This follows from the fact that F is measurable in the first argument and upper semicontinuous in the second, and $x^o$ is measurable. Thus it follows from the Measurable Selection Theorem 1.11.15 that $t \mapsto F(t, x^o(t))$ has measurable selections (see also [27, 73]). We must show that it has $L_1$ selections. Since $x^o \in B_\infty(I,R^n)$, there exists a finite number $r > 0$ such that $x^o(t) \in B_r$ for all $t \in I$, where $B_r$ denotes the closed ball in $R^n$ of radius r. Hence, by virtue of assumption (F2), it follows from Theorem 1.11.16 ([73, Lemma 3.2]) that $t \mapsto F(t, x^o(t))$ has $L_1(I,R^n)$ selections. This also proves that the multifunction $\hat{F}$ is nonempty,

having $L_1$ selections. In view of the above observations, the question of existence of a solution of the differential inclusion (3.85) is equivalent to the question of existence of a fixed point of the multi-valued map $\hat{F}$. This is what we prove now. According to Theorem 3.2.2, we must show that $\hat{F}$ maps into $c(L_1(I,R^n))$ and that, with respect to the Hausdorff metric, it is a contraction.

Again, let $L_0(I,R^n)$ denote the topological space of measurable functions defined on I with values in $R^n$, and let $X_0 \equiv L_1(I,R^n)$. Define the function

$$v(t) \equiv \int_0^t \ell(s)ds, \quad t \in I,$$

where $\ell \in L_1^+(I)$ denotes the (Hausdorff) Lipschitz coefficient of the multifunction $F(t,x)$ as in assumption (F3) of Theorem 3.5.4. For any $\delta > 0$, define

$$X_\delta \equiv \Big\{z \in L_0(I,R^n) : \int_I \|z(t)\|\, e^{-\delta v(t)}dt < \infty\Big\}.$$

With respect to the corresponding norm topology, $X_\delta$ is a Banach space. Clearly, $X_0 \subset X_\delta$ and we have the inequalities

$$\|z\|_\delta \le \|z\|_0 \le \alpha\,\|z\|_\delta, \quad \text{for } \alpha = e^{\delta v(T)}.$$

Thus these Banach spaces are equivalent, $X_\delta \cong X_0$. We shall prove that, for some $\delta > 0$, the multifunction $\hat{F}$ is a contraction on the Banach space $X_\delta$. We follow steps similar to those in the proof of Theorem 3.5.4. Let $f_1, f_2 \in L_1(I,R^n)$, consider $\hat{F}(f_1), \hat{F}(f_2) \in c(L_1(I,R^n))$, and let $z_1 \in \hat{F}(f_1)$ be an $L_1$ selection. Since F is $d_H$-Lipschitz and $z_1(t) \in F(t, \Upsilon_t(f_1))$ a.e. $t \in I$, for every $\varepsilon > 0$ we can find a $z_2 \in \hat{F}(f_2)$, satisfying $z_2(t) \in F(t, \Upsilon_t(f_2))$ a.e. $t \in I$, such that

$$\|z_1(t) - z_2(t)\| \le \ell(t)\,\|\Upsilon_t(f_1) - \Upsilon_t(f_2)\| + \varepsilon, \quad t \in I. \tag{3.89}$$

Thus it follows from the Lipschitz property of the operator $\Upsilon$, as seen in (3.87), that

$$\|z_1(t) - z_2(t)\| \le C\ell(t)\int_0^t \|f_1(s) - f_2(s)\|\, ds + \varepsilon, \quad t \in I. \tag{3.90}$$

For convenience of notation, setting

$$\eta(t) \equiv \int_0^t \|f_1(s) - f_2(s)\|\, ds, \quad t \in I,$$

we can rewrite the preceding inequality as follows:

$$\|z_1(t) - z_2(t)\| \le C\ell(t)\eta(t) + \varepsilon, \quad t \in I. \tag{3.91}$$

Multiplying both sides of the above expression by $e^{-\delta v(t)}$ and integrating, we obtain

$$\int_I \|z_1(t) - z_2(t)\|\, e^{-\delta v(t)}dt \le C\int_I \eta(t)\, e^{-\delta v(t)}dv(t) + \varepsilon T \le (-C/\delta)\int_0^T \eta(t)\, d\big(e^{-\delta v(t)}\big) + \varepsilon T. \tag{3.92}$$

Integrating the first term on the right-hand side by parts, we finally obtain the following inequality:

$$\int_I \|z_1(t) - z_2(t)\|\, e^{-\delta v(t)}dt \le (C/\delta)\int_0^T \|f_1(t) - f_2(t)\|\, e^{-\delta v(t)}dt + \varepsilon T. \tag{3.93}$$

For any $\alpha \in (0,1)$, we can choose $\delta = C/\alpha$. Clearly, it follows from this choice that

$$\int_I \|z_1(t) - z_2(t)\|\, e^{-\delta v(t)}dt \le \alpha \int_0^T \|f_1(t) - f_2(t)\|\, e^{-\delta v(t)}dt + \varepsilon T. \tag{3.94}$$

Let $c(X_\delta)$ denote the class of nonempty closed subsets of the Banach space $X_\delta$ and, for $\delta = C/\alpha$, let $D_H$ denote the Hausdorff metric on $c(X_\delta)$. Then it follows from the preceding inequality that

$$D_H(\hat{F}(f_1), \hat{F}(f_2)) \le \alpha\,\|f_1 - f_2\|_{X_\delta} + \varepsilon T. \tag{3.95}$$

Since the choice of $\varepsilon > 0$ is arbitrary, it follows from the above expression that the multifunction $\hat{F}$ is a contraction on the complete metric space $(c(X_\delta), D_H)$. Thus it follows from Theorem 3.2.2 that $\hat{F}$ has fixed points in $X_\delta$. Since $X_\delta \cong X_0$, these are also fixed points of $\hat{F}$ in $X_0$. This completes the proof. □

3.6 Bibliographical Notes

Many of the results on existence of solutions of differential equations and inclusions driven by vector measures appear in this monograph for the first time. Some results are taken from our recent papers [28, 29]. Readers interested in abstract differential equations defined on infinite dimensional Banach spaces and driven by vector measures may also find the references [5, 7–10, 12, 13, 15] interesting and useful for further research.

Chapter 4

Optimal Control: Existence Theory

4.1 Introduction

The calculus of variations deals with many interesting problems related to geometry, where one wishes to minimize length, area, volume, shapes, etc., subject to some constraints. In the case of dynamic systems, if the functional is a measure of performance, one is interested in maximizing the functional and finding the control that does so. If the functional is a measure of cost or losses, one is interested in minimizing the functional and determining the corresponding control policy. In fact, the extremum may or may not exist, or there may be multiple extremals. Thus one is naturally interested in the question of existence of extremals; this is the subject matter of this chapter. Once the question of existence of optimal control policies is settled, one may want to develop necessary conditions of optimality whereby one can compute the extremals; this is the subject matter of the next chapter. The question of existence of optimal controls is crucial: in the absence of existence, necessary conditions of optimality do not make much sense, since it is then equivalent to characterizing an entity that does not exist in the first place. In this chapter we prove some basic existence results for convex and non-convex control problems. We do so for regular, relaxed, and impulsive (more generally measure-valued) controls.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_4

4.2 Regular Controls

Recall that by regular controls we mean vector-valued measurable functions. A more precise definition is as follows. Let $U \subset R^m$ be a closed bounded convex subset of $R^m$, and let $L_\infty(I,R^m)$ denote the class of essentially bounded Lebesgue measurable functions defined on I and taking values in $R^m$. For admissible controls we choose the set $U_{ad} \equiv M(I,U) \subset L_\infty(I,R^m)$, the class of measurable functions with values in U. Consider the control system

$$\dot{x} = f(t,x,u), \quad x(0) = x_0,\ t \in I \equiv [0,T],\ u \in U_{ad}, \tag{4.1}$$

with the cost functional

$$J(u) \equiv C(u,x) \equiv \int_I \ell(t,x,u)dt + \Phi(x(T)), \tag{4.2}$$

where x is the solution corresponding to the control policy u. Our primary objective here is to prove existence of optimal controls, that is, controls at which the functional J attains its minimum. With this goal in mind we introduce the following set-valued function. For $(t,x) \in I \times R^n$, define the set

$$Q(t,x) \equiv \{(\zeta,\eta) \in R \times R^n : \zeta \ge \ell(t,x,v),\ \eta = f(t,x,v) \text{ for some } v \in U\}.$$

For each $(t,x) \in I \times R^n$, the set $Q(t,x)$ is called the contingent set. It describes the set of possible directions in which the system can move from the current time and state $(t,x)$. Here we need a definition of upper semi-continuity in terms of $(\varepsilon,\delta)$ which is equivalent to Definition 3.5.1.

Definition 4.2.1 A multifunction $G : R^n \to 2^{R^n} \setminus \emptyset$ is said to be upper semicontinuous at the point $x \in R^n$ if, for every $\varepsilon > 0$, there exists a $\delta > 0$ such that $G(\xi) \subset G^\varepsilon(x)$ for all $\xi \in N_\delta(x)$, where $G^\varepsilon(x)$ denotes the $\varepsilon$-neighborhood of the set $G(x)$ and $N_\delta(x)$ denotes the $\delta$-neighborhood of x. The multifunction G is said to be upper semicontinuous if it is so at every point of its domain of definition.

Definition 4.2.2 A multifunction $Q : I \times R^n \to 2^{R\times R^n} \setminus \emptyset$ is said to satisfy the weak Cesari property [50] if, for each $t \in I$,

$$\bigcap_{\varepsilon > 0} \mathrm{cl\,co}\, Q(t, N_\varepsilon(x)) \subset Q(t,x), \tag{4.3}$$

where $\mathrm{cl\,co}\,A$ denotes the closure of the convex hull of the set A.

Remark 4.2.3 It is clear from the above inclusion that, for the Cesari property to hold, $Q(t,x)$ must be closed convex valued. A sufficient condition for Q to satisfy this property is that $x \mapsto Q(t,x)$ is upper semicontinuous with closed convex values. Another important fact is that if the vector function $F(t,x,u) \equiv \mathrm{col}\{\ell(t,x,u), f(t,x,u)\}$ is such that $Q(t,x) \equiv F(t,x,U) \subset 2^{R^{1+n}} \setminus \emptyset$ is closed convex valued and upper semicontinuous in x, then Q has the weak Cesari property. Clearly, this is a stronger requirement.

Here we present an existence result for systems driven by measurable functions. The original proof was given by Cesari [50, Theorem 8.6.2].

Theorem 4.2.4 Consider the control problem as stated above and suppose $f : I \times R^n \times U \to R^n$ is measurable in the first argument, continuous with respect to the second and third, and satisfies the following conditions: there exist $K, L \in L_1^+(I)$ such that

(A1) $\|f(t,x,u)\| \le K(t)(1 + \|x\|)$, $\forall\ u \in U$,
(A2) $\|f(t,x,u) - f(t,y,u)\| \le L(t)\,\|x - y\|$, $\forall\ u \in U$.

Suppose $\ell$ is measurable in the first argument and continuous in the second and third, and $\Phi$ is continuous and bounded on bounded subsets of $R^n$. Further, suppose that for every finite positive number r there exists an $h_r \in L_1(I,R)$ such that

$$\ell(t,x,u) \ge h_r(t) \quad \text{a.e. } t \in I,\ x \in B_r,\ u \in U,$$

and that the multifunction Q satisfies the weak Cesari property (see Definition 4.2.2). Then there exists an optimal control.

Proof For convenience of notation, let $u \mapsto x(u)(\cdot) \in C(I,R^n)$ denote the unique solution of Eq. (4.1) corresponding to the fixed initial state $x_0$ and control policy $u \in U_{ad}$, and let

$$X \equiv \{x \in C(I,R^n) : x = x(u) \text{ for some } u \in U_{ad}\}$$

denote the set of attainable trajectories. Define

$$\Lambda \equiv \{(u,x) \in U_{ad} \times X : x = x(u)\} \tag{4.4}$$

to be the set of control-state pairs, and

$$m_0 \equiv \inf\{C(u,x) : (u,x) \in \Lambda\}. \tag{4.5}$$

Our objective is to show that there exists a pair $(u^o, x^o) \in \Lambda$ such that $C(u^o, x^o) = m_0$. By virtue of the linear growth assumption on f and the boundedness of the set U, the set $\{x(u)(T) : u \in U_{ad}\}$ is contained in a bounded subset of $R^n$. In fact, for any given initial state $x_0 \in R^n$, it follows from the above assumptions on f and U that there exists an $r > 0$ such that $x(u)(t) \in B_r$ (the ball of radius r centered at the origin) for all $t \in I$ and $u \in U_{ad}$. Since $\Phi$ is continuous and bounded on bounded sets and $h_r \in L_1(I)$, it follows that $-\infty < m_0 \le +\infty$. If $m_0 = +\infty$, there is nothing to prove. So let $m_0 < +\infty$ and let $\{(u_n, x_n)\} \subset \Lambda$ be a minimizing sequence. One may be tempted to use weak-star convergence of the control sequence $\{u_n\}$ and uniform convergence of the accompanying trajectories directly in the system equation and the objective functional to arrive at the conclusion. But this is not possible since, in general, weak (as well as weak-star) convergence is incompatible with non-linearity. So the direct approach fails unless u appears linearly in the system equation and $\ell$ is weak-star lower semicontinuous. So we

must seek an indirect method. Note that X is a bounded subset of $C(I,R^n)$. Further, it follows from the growth assumption on f that this set is also equicontinuous, and hence by the Ascoli–Arzelà theorem (see Chap. 1, Theorem 1.9.7 and [108]) it is relatively compact. Hence the sequence $\{x_n\}$ has a convergent subsequence and there exists an $x^o \in C(I,R^n)$ to which it converges. Define the sequence of functions $\{f_n\}$ by setting $f_n(t) \equiv f(t, x_n(t), u_n(t))$, $t \in I$. Again it follows from the linear growth assumption on f, and the fact that the solution set X is bounded, that the sequence $\{f_n\}$ is contained in a bounded subset of $L_1(I,R^n)$ and is uniformly integrable. Thus, by the Dunford–Pettis theorem (see Chap. 1, Theorem 1.11.10), it has a subsequence that converges weakly to an element $f^o \in L_1(I,R^n)$. Relabeling the subsequences as their original counterparts, we have

$$x_n \xrightarrow{s} x^o \text{ in } C(I,R^n), \tag{4.6}$$
$$f_n \xrightarrow{w} f^o \text{ in } L_1(I,R^n). \tag{4.7}$$

Corresponding to this sequence we define

$$\ell_n(t) \equiv \ell(t, x_n(t), u_n(t)), \quad t \in I. \tag{4.8}$$

Clearly, by definition of the contingent function Q, we have

$$(\ell_n(t), f_n(t)) \in Q(t, x_n(t)), \quad t \in I. \tag{4.9}$$

It follows from Mazur's theorem (see Chap. 1, Theorem 1.11.4, Corollary 1.11.5) that, given any weakly convergent sequence, one can construct a suitable convex combination of the original sequence that converges strongly to the same (weak) limit. Let $j \in N \equiv \{0, 1, 2, \cdots\}$, let $n(j), m(j) \in N$ be increasing in j with $n(j), m(j) \to +\infty$ as $j \to +\infty$, and let

$$\alpha_{j,i} \ge 0, \quad \sum_{i=1}^{m(j)} \alpha_{j,i} = 1, \quad \text{for all } j \in N. \tag{4.10}$$

Define the sequence pair $(\eta_j, g_j)$ as follows:

$$\eta_j(t) \equiv \sum_{i=1}^{m(j)} \alpha_{j,i}\,\ell_{n(j)+i}(t), \quad g_j(t) \equiv \sum_{i=1}^{m(j)} \alpha_{j,i}\, f_{n(j)+i}(t), \quad t \in I. \tag{4.11}$$

By the Banach–Saks–Mazur theorem, one can choose a suitable convex combination so that

$$g_j \xrightarrow{s} f^o \text{ in } L_1(I,R^n). \tag{4.12}$$

Clearly, as $j \to \infty$, we also have

$$x_{n(j)+i} \xrightarrow{s} x^o \text{ in } C(I,R^n). \tag{4.13}$$

Define

$$\ell^o(t) \equiv \lim_{j\to\infty} \eta_j(t), \quad t \in I. \tag{4.14}$$

Since the set X is bounded, there is a finite positive number r such that $x(t) \in B_r$ for all $t \in I$ and $x \in X$. Hence there exists an $h_r \in L_1(I,R)$ such that $\eta_j(t) \ge h_r(t)$, $t \in I$. Thus the function $\ell^o(t)$ is well defined and $\ell^o(t) \ge h_r(t)$ for $t \in I$. We show that $\ell^o \in L_1(I,R)$. Indeed, it follows from Fatou's Lemma (see Chap. 1, Theorem 1.7.3) that

$$\int_I \ell^o(t)dt \le \lim_{j\to\infty} \int_I \eta_j(t)dt. \tag{4.15}$$

Since $x_n(t)$ converges to $x^o(t)$ for each $t \in I$, and $\Phi$ is continuous and bounded on bounded sets, it is easy to verify that

$$\lim_{j\to\infty} \sum_{i=1}^{m(j)} \alpha_{j,i}\,\Phi(x_{n(j)+i}(T)) = \Phi(x^o(T)). \tag{4.16}$$

Clearly, it follows from the definition of $C(u,x)$ and the expressions (4.15) and (4.16) that

$$\Phi(x^o(T)) + \int_I \ell^o(t)dt \le \lim_{j\to\infty} \sum_{i=1}^{m(j)} \alpha_{j,i}\, C(u_{n(j)+i}, x_{n(j)+i}). \tag{4.17}$$

Since $\{u_n, x_n\}$ is a minimizing sequence, the right-hand expression equals $m_0$. Thus we have

$$\Phi(x^o(T)) + \int_I \ell^o(t)dt \le m_0 < +\infty. \tag{4.18}$$

Since $\Phi$ is continuous and bounded on bounded sets, and $h_r \in L_1(I,R)$, it follows from the above inequality that $\ell^o \in L_1(I,R)$. Now we must verify that

$$(\ell^o(t), f^o(t)) \in Q(t, x^o(t)) \quad \text{a.e. } t \in I. \tag{4.19}$$

Define the sets

$$I_1 \equiv \{t \in I : |\ell^o(t)| = \infty\}, \quad I_2 \equiv \{t \in I : \lim_{j\to\infty} \|g_j(t) - f^o(t)\| \neq 0\}, \quad I_3 \equiv \bigcup_{j\in N}\{t \in I : u_j(t) \notin U\}.$$

By the definition of admissible controls as presented in the introduction of this section, by (4.12), and by the fact that $\ell^o \in L_1(I,R)$, all the sets defined above have Lebesgue measure zero. Choosing a subsequence of the original sequence if necessary, we note that, for each $t \in \tilde{I} \equiv I \setminus \bigcup_{i=1}^3 I_i$,

$$\eta_j(t) \to \ell^o(t), \quad g_j(t) \to f^o(t), \quad x_{n(j)+i}(t) \to x^o(t).$$

Clearly, for every $\varepsilon > 0$ there exists a $j_0 \in N$ such that $x_{n(j)+i}(t) \in N_\varepsilon(x^o(t))$ for all $j > j_0$ and any $i \in N$. Since both $\ell$ and f are continuous in x on $R^n$, for $j > j_0$,

$$Q(t, x_{n(j)+i}(t)) \subset Q(t, N_\varepsilon(x^o(t))), \quad t \in I.$$

It follows from the definition of the contingent function Q that $(\ell_{n(j)+i}(t), f_{n(j)+i}(t)) \in Q(t, x_{n(j)+i}(t))$ a.e. $t \in I$. Thus, for sufficiently large $j \in N$ and for any $\varepsilon > 0$, we have

$$(\ell_{n(j)+i}(t), f_{n(j)+i}(t)) \in Q(t, N_\varepsilon(x^o(t))) \quad \text{a.e. } t \in I. \tag{4.20}$$

Hence, it follows from the definition of the sequence $\{\eta_j, g_j\}$ given by (4.11) and the above inclusion that

$$(\eta_j(t), g_j(t)) \in \mathrm{co}\, Q(t, N_\varepsilon(x^o(t))) \quad \text{a.e. } t \in I.$$

Therefore, for each $t \in \tilde{I}$, we have $(\ell^o(t), f^o(t)) \in \mathrm{cl\,co}\, Q(t, N_\varepsilon(x^o(t)))$ for every $\varepsilon > 0$, and consequently

$$(\ell^o(t), f^o(t)) \in \bigcap_{\varepsilon>0} \mathrm{cl\,co}\, Q(t, N_\varepsilon(x^o(t))), \quad t \in \tilde{I}. \tag{4.21}$$

Since Q satisfies the weak Cesari property, it follows from this that

$$(\ell^o(t), f^o(t)) \in Q(t, x^o(t)), \quad t \in \tilde{I}. \tag{4.22}$$

Define the multifunction G by

$$G(t) \equiv \{v \in U : \ell^o(t) \ge \ell(t, x^o(t), v) \text{ and } f^o(t) = f(t, x^o(t), v)\}. \tag{4.23}$$

This is a measurable multifunction with nonempty closed subsets as values. Thus, by virtue of the Kuratowski–Ryll-Nardzewski Selection Theorem (see Chap. 1, Theorem 1.11.15), G has measurable selections. Let $u^o$ be any selection of G. Clearly $u^o \in U_{ad}$. Corresponding to this control, it follows from the inequality appearing in the definition of G and the expression (4.18) that

$$\Phi(x^o(T)) + \int_I \ell(t, x^o(t), u^o(t))dt \le \Phi(x^o(T)) + \int_I \ell^o(t)dt \le m_0 < +\infty. \tag{4.24}$$

From the weak convergence of $f_n$ to $f^o$ and the uniform convergence of $x_n$ to $x^o$ (along a subsequence if necessary), and the fact that $u^o$ is a measurable selection of the multifunction G as defined above, we have

$$x^o(t) = x_0 + \int_0^t f^o(s)ds = x_0 + \int_0^t f(s, x^o(s), u^o(s))ds, \quad t \in I.$$

Thus $(u^o, x^o) \in \Lambda$, where $\Lambda$ is as defined in (4.4). By definition,

$$C(u^o, x^o) = \int_I \ell(t, x^o(t), u^o(t))dt + \Phi(x^o(T)).$$

Hence it follows from the inequality (4.24) that $C(u^o, x^o) \le m_0$, while the admissibility of the pair $(u^o, x^o)$, and the fact that $m_0$ is the infimum of C on the set $\Lambda$, imply that $m_0 \le C(u^o, x^o)$. Hence $C(u^o, x^o) = m_0$. This completes the proof. □

It is interesting to note that the above result depends heavily on the convexity of the control domain U and of the contingent set Q. A simple example satisfying these conditions is a system of the form $\dot{x} = f(t,x,u) \equiv F(t,x) + G(t,x)u$, $u \in U$, with the set U convex and the cost integrand $u \mapsto \ell(t,x,u)$ a convex function. There are many practical problems where the control domain U and the contingent set $Q \equiv Q(t,x)$ fail to satisfy these convexity conditions. In this situation we may look for relaxed (generalized) controls, which do not require the convexity condition.
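A standard illustration of what goes wrong without convexity (our example, not the text's) is $\dot{x} = u$ with $u(t) \in \{-1, +1\}$, $x(0) = 0$, and $J(u) = \int_0^1 x^2\,dt$: ever faster switching drives the cost toward its infimum 0, but no ordinary $\{-1,+1\}$-valued control attains it. A numerical sketch:

```python
# Our illustrative example, not the text's: x' = u, u(t) in {-1, +1},
# x(0) = 0, J(u) = int_0^1 x(t)^2 dt.  Switching every 1/N time units
# yields a sawtooth state of amplitude 1/N and cost approximately 1/(3 N^2).

def cost_of_switching(N, n=4000):
    """Cost for the control alternating between +1 and -1 every 1/N."""
    dt = 1.0 / n
    x, J = 0.0, 0.0
    for k in range(n):
        u = 1.0 if int(k * dt * N) % 2 == 0 else -1.0
        J += dt * x * x
        x += dt * u
    return J

costs = [cost_of_switching(N) for N in (2, 8, 32)]
# The cost decreases toward 0 as the switching rate N grows, but every
# ordinary control with values in {-1, +1} gives a strictly positive cost.
```

The infimum 0 is attained only by the relaxed control $\nu_t = (\delta_{-1} + \delta_{+1})/2$, which motivates the next section.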

116

4 Optimal Control: Existence Theory

4.3 Relaxed Controls The question of relaxed controls arises when the contingent function Q(t, x) is non-convex. For example, if the set U is not convex, optimal control policies may not exist in the class of regular controls. There are two equivalent approaches to this problem. One approach is to replace Q by its closed convex hull and use Theorem 4.2.4. In this case the contingent set Q is relaxed by introducing the set ˆ x) ≡ ccoQ(t, x). Again assuming weak Cesari property for this set we obtain Q(t, existence results as in Theorem 4.2.4 in the class of regular (ordinary) controls. This solves the problem only approximately. Another more direct approach is to replace the ordinary controls by probability measure-valued functions supported on the control domain U. Let M1 (U ) denote the space of probability measures on U and define   ˆ x, μ), η = fˆ(t, x, μ) for some μ ∈ M1 (U ) , Q(t, x) ≡ (ζ, η) ∈ R × R n : ζ ≥ (t,

(4.25) where ˆ x, μ) ≡ (t,



(t, x, ξ )μ(dξ ), fˆ(t, x, μ) ≡ U

f (t, x, ξ )μ(dξ ).

(4.26)

U

In this case ℓ̂ and f̂ are naturally linear in μ ∈ M1(U), and hence the contingent function is convex without requiring convexity of the set U. This relaxed problem can be solved directly by use of lower semi-continuity and compactness arguments. Let L^w_∞(I, M(U)) denote the class of weak-star measurable functions defined on I and taking values in the space of Borel measures M(U), and let U^rel_ad ≡ M^w_∞(I, M1(U)) denote the unit sphere in L^w_∞(I, M(U)), chosen as the class of relaxed controls. For any admissible control policy ν ∈ U^rel_ad, define the cost functional J as follows:

J(ν) ≡ C(ν, x) ≡ ∫_I ℓ̂(t, x(t), ν_t) dt + Φ(x(T)),   (4.27)

where x ≡ x(ν) is the solution of the equation

ẋ = f̂(t, x(t), ν_t), x(ν)(0) = x0, t ∈ I.   (4.28)
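For a discrete probability measure μ = Σ_i p_i δ_{ξ_i}, the integrals in (4.26) reduce to weighted sums, which makes the linearity of ℓ̂ and f̂ in μ easy to check numerically. The concrete ℓ and f below are illustrative assumptions.

```python
# For a discrete probability measure  mu = sum_i p_i * delta_{xi_i}  on U,
# the relaxed integrand and drift of (4.26) are plain weighted sums,
# hence linear in mu.  The concrete functions below are illustrative.

def l(t, x, xi):            # running cost l(t, x, xi)   (assumed)
    return x * x + xi * xi

def f(t, x, xi):            # drift f(t, x, xi)          (assumed)
    return -x + xi

def relaxed(g, t, x, mu):
    """g_hat(t, x, mu) = integral of g(t, x, .) d mu  for discrete mu,
    where mu is a list of (support point, probability mass) pairs."""
    return sum(p * g(t, x, xi) for xi, p in mu)

mu1 = [(-1.0, 0.5), (1.0, 0.5)]     # half mass at -1, half at +1
mu2 = [(0.0, 1.0)]                  # Dirac measure at 0

# Linearity in mu: the relaxed drift of a mixture of measures equals the
# corresponding mixture of relaxed drifts.
mix = [(xi, 0.3 * p) for xi, p in mu1] + [(xi, 0.7 * p) for xi, p in mu2]
lhs = relaxed(f, 0.0, 2.0, mix)
rhs = 0.3 * relaxed(f, 0.0, 2.0, mu1) + 0.7 * relaxed(f, 0.0, 2.0, mu2)
```

Note that relaxing is not the same as averaging the control: relaxed(l, 0, 2, mu1) evaluates the cost under the two-point measure, which differs from the cost at the mean control ξ = 0.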

Theorem 4.3.1 Suppose f satisfies the assumptions of Theorem 4.2.4; the cost integrand ℓ is Borel measurable in all the variables, continuous in the last two arguments, and continuous in the second argument uniformly with respect to the third; and the terminal cost functional Φ is lower semicontinuous in x ∈ R^n. Further, suppose there exist α ∈ L1^+(I), β1, β2 ≥ 0 and p ∈ [1, ∞) such that the following conditions hold:

|ℓ(t, x, ξ)| ≤ α(t) + β1 ‖x‖^p, x ∈ R^n, ∀ ξ ∈ U,
|Φ(x)| ≤ β2 (1 + ‖x‖^p), x ∈ R^n.

Then, there exists an optimal control policy ν^o ∈ U^rel_ad that minimizes the cost functional J(ν).

Proof Let us note that the set of admissible controls U^rel_ad is a weak-star compact subset of L^w_∞(I, M(U)). Thus, it suffices to prove that ν −→ J(ν) is weak-star lower semicontinuous. Let ν^m −w*→ ν^o in U^rel_ad and let x^m ∈ C(I, R^n) denote the corresponding sequence of solutions of Eq. (4.28). Let x^o denote the unique solution of Eq. (4.28) corresponding to the control ν^o; since f is Lipschitz, uniqueness is guaranteed. We show that x^m −s→ x^o in C(I, R^n). It follows from the expressions

x^m(t) = x0 + ∫_0^t f̂(s, x^m(s), ν^m_s) ds ≡ x0 + ∫_0^t ∫_U f(s, x^m(s), ξ) ν^m_s(dξ) ds, t ∈ I,

x^o(t) = x0 + ∫_0^t f̂(s, x^o(s), ν^o_s) ds ≡ x0 + ∫_0^t ∫_U f(s, x^o(s), ξ) ν^o_s(dξ) ds, t ∈ I,

that

‖x^m(t) − x^o(t)‖ ≤ ∫_0^t L(s) ‖x^m(s) − x^o(s)‖ ds + ‖e^m(t)‖, t ∈ I,   (4.29)

where

e^m(t) ≡ ∫_0^t [f̂(s, x^o(s), ν^m_s) − f̂(s, x^o(s), ν^o_s)] ds, t ∈ I.   (4.30)

It follows from the Grönwall inequality applied to the expression (4.29) that

‖x^m(t) − x^o(t)‖ ≤ ‖e^m(t)‖ + ∫_0^t exp(∫_s^t L(τ) dτ) L(s) ‖e^m(s)‖ ds, t ∈ I.   (4.31)

Since ν^m −w*→ ν^o, it is clear that ‖e^m(t)‖ −→ 0 for every t ∈ I. It follows from the growth property of f (see assumption (A1) of Theorem 4.2.4) that the set of admissible trajectories

X^rel_ad ≡ {x ∈ C(I, R^n) : x(ν)(0) = x0 and x = x(ν), ν ∈ U^rel_ad},


is a bounded subset of C(I, R^n). Using this fact, one can easily verify that

‖e^m(t)‖ ≤ 2 ∫_0^t K(s)(1 + ‖x^o(s)‖) ds ≤ 2 (1 + ‖x^o‖_C(I,R^n)) ‖K‖_L1(I), t ∈ I.   (4.32)

Summarizing the above facts, we have ‖e^m(t)‖ −→ 0 for each t ∈ I, and e^m is uniformly bounded on I. Thus, it follows from the Lebesgue bounded convergence theorem, a special case of the Lebesgue dominated convergence theorem (see Chap. 1, Theorem 1.7.7), that the expression on the right-hand side of the inequality (4.31) converges to zero uniformly with respect to t ∈ I. Hence, x^m converges to x^o in the norm topology of C(I, R^n). This proves the continuity of the solution map ν −→ x(ν) with respect to the relative weak-star topology on U^rel_ad and the norm topology on C(I, R^n). Using this fact, we show that the first term of the cost functional (4.27), given by

J1(ν) ≡ ∫_0^T ℓ̂(t, x(t), ν_t) dt,

is weak-star lower semicontinuous. Suppose ν^m converges to ν^o in the weak-star topology and let x^m, x^o ∈ C(I, R^n) denote the corresponding solutions of Eq. (4.28). We express J1(ν^o) as follows:

J1(ν^o) ≡ ∫_I ℓ̂(t, x^o(t), ν^o_t) dt = ∫_I [ℓ̂(t, x^o(t), ν^o_t) − ℓ̂(t, x^m(t), ν^o_t)] dt
  + ∫_I [ℓ̂(t, x^m(t), ν^o_t) − ℓ̂(t, x^m(t), ν^m_t)] dt + J1(ν^m).   (4.33)

Consider the first term on the right-hand side of the expression (4.33). Since x^m −→ x^o uniformly on I, for any ε > 0 there exists an integer m_{1,ε} such that

| ∫_I [ℓ̂(t, x^o(t), ν^o_t) − ℓ̂(t, x^m(t), ν^o_t)] dt | < ε/2   (4.34)

for all m > m_{1,ε}. Now, considering the second term, we note that

∫_I [ℓ̂(t, x^m(t), ν^o_t) − ℓ̂(t, x^m(t), ν^m_t)] dt = ∫_I ⟨ℓ(t, x^m(t), ·), (ν^o_t − ν^m_t)(·)⟩_{C(U), M1(U)} dt
  = ⟨ℓ^m, ν^o − ν^m⟩_{L1(I,C(U)), L^w_∞(I,M(U))},   (4.35)


where ℓ^m(t) ≡ ℓ(t, x^m(t), ·), t ∈ I. Let us also denote ℓ^o(t) ≡ ℓ(t, x^o(t), ·), t ∈ I. It follows from the growth property of ℓ in x ∈ R^n, and the fact that the solutions {x^m, x^o} are contained in a bounded subset of C(I, R^n), that the cost integrands belong to the Lebesgue–Bochner space L1(I, C(U)), that is, ℓ^m, ℓ^o ∈ L1(I, C(U)). Since by assumption ℓ is continuous in x ∈ R^n uniformly with respect to ξ ∈ U, it is clear that ℓ^m −s→ ℓ^o in L1(I, C(U)). On the other hand, ν^m converges to ν^o in the weak-star topology of L^w_∞(I, M(U)). Thus, it follows from the expression (4.35) that for any ε > 0 there exists an integer m_{2,ε} such that

| ∫_I [ℓ̂(t, x^m(t), ν^o_t) − ℓ̂(t, x^m(t), ν^m_t)] dt | < ε/2 ∀ m > m_{2,ε}.   (4.36)

Hence, it follows from (4.33), (4.34) and (4.36) that

J1(ν^o) ≤ ε + J1(ν^m), ∀ m > max{m_{1,ε}, m_{2,ε}}.   (4.37)

Since ε > 0 is arbitrary, it follows from the above inequality that

J1(ν^o) ≤ lim inf_{m→∞} J1(ν^m).   (4.38)

Thus, J1 is weak-star lower semicontinuous on U^rel_ad. Considering the second component of the cost functional, J2(ν), it follows from the lower semi-continuity of Φ on R^n that

J2(ν^o) ≡ Φ(x^o(T)) ≤ lim inf_{m→∞} Φ(x^m(T)) ≡ lim inf_{m→∞} J2(ν^m).   (4.39)

Since the sum of a finite number of lower semicontinuous functions is lower semicontinuous, it follows from (4.38), (4.39) and (4.27) that the cost functional J is weak-star lower semicontinuous on U^rel_ad. Further, it follows from the growth properties of ℓ and Φ and the boundedness of the solution set that J(ν) > −∞ for all ν ∈ U^rel_ad. Since the set U^rel_ad is weak-star compact and J is weak-star lower semicontinuous, we conclude that J attains its minimum on it, and hence an optimal control exists. This completes the proof.

Remark 4.3.2 It is interesting to note that the proof of Theorem 4.3.1 is much simpler than that of Theorem 4.2.4. For relaxed controls, even compactness of U is not necessary; for noncompact U the proof is slightly more difficult. Interestingly, the set U may even consist of a discrete set of points, which is clearly not convex. In all these cases, under some mild assumptions, relaxed controls do exist. For more general problems involving differential inclusions driven by relaxed controls see [13, 61] and the references therein.


Practical Realizability One important question is: how can we practically construct relaxed controls? At first sight there seems to be a fundamental practical problem. However, it follows from the Krein–Milman theorem (Theorem 1.11.9) that any relaxed control can be approximated by chattering controls. For example, if U is compact, then M1(U) is weak-star compact, and so by the Krein–Milman theorem M1(U) = cco Ext(M1(U)), where Ext(M1(U)) denotes the set of extreme points of the set M1(U). In fact, the extreme points of M1(U) are precisely the Dirac measures {δ_u, u ∈ U}. Let L^w_∞(I, M(U)) denote the space of weak-star measurable functions defined on I and taking values in the space of regular Borel measures on U. We consider the unit sphere M^w_∞(I, M1(U)) ⊂ L^w_∞(I, M(U)). This set consists of weak-star measurable probability measure-valued functions. Again, by the Krein–Milman theorem, we have

M^w_∞(I, M1(U)) = cco Ext(M^w_∞(I, M1(U))).

Given any relaxed control ν ∈ M^w_∞(I, M1(U)), letting U0 denote a countable dense subset of U, we can construct an approximating sequence of controls of the form

ν^n_t(dv) ≡ Σ_{i=1}^{m(n)} α_{n,i}(t) δ_{u_i}(dv), t ∈ I, u_i ∈ U0, m(n) ∈ N,

where {α_{n,i}(·)} are nonnegative bounded measurable functions satisfying

Σ_{i=1}^{m(n)} α_{n,i}(t) = 1 ∀ t ∈ I, n ∈ N, with lim_{n→∞} m(n) = ∞.

Let X_r denote the class of relaxed trajectories (solutions corresponding to relaxed controls) and X_c the class of trajectories corresponding to chattering controls as defined above. Then, by use of the Krein–Milman theorem, one can verify that the set of chattering trajectories is dense in the set of relaxed trajectories, that is, X̄_c = X_r. In case the set U consists of a finite set of points (hence non-convex), one has to choose only a finite number of switching functions {α_{n,i}} from the class of measurable functions satisfying the constraint indicated above. This is much simpler. It is important to mention that in the case of discrete U and regular controls, that is, measurable functions with a discrete set of values from U, an optimal control policy may not exist. However, optimal policies do exist in the class of corresponding relaxed controls, and practically these policies are easier to construct.
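A minimal numerical sketch of this approximation: for the assumed scalar dynamics ẋ = −x + u, the relaxed control ν_t = ½δ_{−1} + ½δ_{+1} yields the relaxed drift ẋ = −x, and a control chattering evenly between u = −1 and u = +1 tracks the relaxed trajectory ever more closely as the switching rate grows. All concrete choices here are illustrative assumptions.

```python
# Chattering approximation of the relaxed control nu_t = (1/2)d_{-1} + (1/2)d_{+1}
# for the (assumed) dynamics  x' = -x + u,  x(0) = 1.  Switching rapidly and
# evenly between u = -1 and u = +1 mimics the relaxed drift  x' = -x.

def trajectory_error(n_switch, steps=4000, T=1.0):
    """Sup-norm gap between the chattering and relaxed trajectories when the
    control switches sign n_switch times per unit time."""
    dt = T / steps
    x_chat, x_rel, err = 1.0, 1.0, 0.0
    for k in range(steps):
        t = k * dt
        u = 1.0 if int(2 * n_switch * t) % 2 == 0 else -1.0   # chattering control
        x_chat += (-x_chat + u) * dt
        x_rel += (-x_rel) * dt      # relaxed drift: the u-terms average out
        err = max(err, abs(x_chat - x_rel))
    return err

# Faster switching => the chattering trajectory tracks the relaxed one.
errs = [trajectory_error(n) for n in (2, 8, 32, 128)]
```

The gap shrinks roughly in proportion to the switching period, consistent with the density statement X̄_c = X_r.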


4.4 Impulsive Controls I

As seen and discussed in Chap. 3, systems controlled by impulsive forces arise in many applications in engineering, economics, management, and the social sciences. These systems are special cases of systems driven by vector measures. For a more detailed study of such problems, interested readers are referred to [5, 7–10, 12, 13, 15]. Here we start with a simpler system subject to controls determined by vector measures, containing impulsive controls as special cases. Consider the following system:

dx = F(t, x)dt + B(t)u(dt), x(0) = x0,   (4.40)

and let Uad ⊂ Mca(Σ_I, R^d) denote the class of admissible controls, which consists of countably additive bounded (R^d-valued) vector measures defined on the sigma algebra Σ_I of measurable subsets of the interval I ≡ [0, T], having bounded total variation. Recall the Banach space B∞(I, R^n), consisting of bounded measurable functions defined on I with values in R^n, endowed with the standard sup-norm topology. The problem is to find an admissible control that minimizes the cost functional given by

J(u) ≡ ∫_I ℓ(t, x)dt + Φ(x(T)) + ϕ(u),   (4.41)

where the first two terms are standard, and the third term ϕ(u) represents the cost of using the control policy u, which may include the cost of switching.

Theorem 4.4.1 Consider the system (4.40) and suppose F : I × R^n −→ R^n is Borel measurable, satisfying the following assumptions:

(1) There exist K ∈ L1^+(I) and, for each r > 0, K_r ∈ L1^+(I) such that

‖F(t, x)‖ ≤ K(t)(1 + ‖x‖), x ∈ R^n,
‖F(t, x) − F(t, y)‖ ≤ K_r(t) ‖x − y‖, x, y ∈ B_r(R^n).

(2) B ∈ C(I, L(R^d, R^n)) is bounded and continuous, where L(R^d, R^n) denotes the space of bounded linear operators from R^d to R^n.

(3) The set of admissible controls Uad is a weakly compact subset of Mca(Σ_I, R^d); the function ℓ is measurable in t on I and lower semicontinuous on R^n, satisfying ℓ(t, x) ≥ h(t) for an integrable function h; Φ is lower semicontinuous on R^n, satisfying Φ(x) ≥ c for some c ∈ R; and ϕ is a nonnegative weakly lower semicontinuous functional on Uad.

Then, the optimal control problem (4.40)–(4.41) has a solution.


Proof First we prove that u −→ J(u) is weakly lower semicontinuous. We have seen in Chap. 3 that, under the assumptions (1) and (2), for each x0 ∈ R^n and u ∈ Uad the system (4.40) has a unique solution x ∈ B∞(I, R^n). Let u^k be a sequence from Uad converging weakly to u^o, and let x^k, x^o ∈ B∞(I, R^n) denote the solutions of Eq. (4.40) corresponding to the controls u^k, u^o, respectively. Since Uad is weakly compact it is bounded, and therefore it follows from the growth assumption on F and the boundedness of B(·) that there exists a finite positive number r such that x^k(t), x^o(t) ∈ B_r(R^n) for all t ∈ I and all k ∈ N. Clearly, it follows from Eq. (4.40) corresponding to the controls u^o and u^k that

x^o(t) − x^k(t) = ∫_0^t [F(s, x^o(s)) − F(s, x^k(s))] ds + ∫_0^t B(s)(u^o(ds) − u^k(ds)).   (4.42)

Defining

z_k(t) ≡ x^o(t) − x^k(t),   e_k(t) ≡ ∫_0^t B(s)(u^o(ds) − u^k(ds)),

and using the local Lipschitz property of F, we have

‖z_k(t)‖ ≤ ‖e_k(t)‖ + ∫_0^t K_r(s) ‖z_k(s)‖ ds.

By virtue of the Grönwall Lemma (inequality), it follows from the above inequality that for all t ∈ I

‖z_k(t)‖ ≤ ‖e_k(t)‖ + ∫_0^t exp(∫_s^t K_r(θ)dθ) K_r(s) ‖e_k(s)‖ ds
  ≤ ‖e_k(t)‖ + exp(‖K_r‖_L1(I)) ∫_0^t K_r(s) ‖e_k(s)‖ ds.   (4.43)

Since u^k −w→ u^o in Mca(Σ_I, R^d), it is clear that e_k(t) → 0 in R^n for each t ∈ I, and hence

lim_{k→∞} ‖e_k(t)‖ K_r(t) = 0

for almost all t ∈ I. Further, it follows from the boundedness of the operator-valued function B(·) and the boundedness of the set Uad (as it is weakly compact) that there exists a finite positive number b such that

sup_{k∈N} sup{‖e_k(t)‖, t ∈ I} ≤ b.


Thus, by the Lebesgue bounded convergence theorem, it follows from the above expression that

lim_{k→∞} z_k(t) = 0

for each t ∈ I. In other words, x^k(t) −→ x^o(t) pointwise in t ∈ I whenever u^k −w→ u^o. By assumption (3), it follows from the above results that

h(t) ≤ ℓ(t, x^o(t)) ≤ lim inf ℓ(t, x^k(t)),   Φ(x^o(T)) ≤ lim inf Φ(x^k(T)),   ϕ(u^o) ≤ lim inf ϕ(u^k).

Since h ∈ L1(I), by the extended Fatou's Lemma we obtain

∫_I ℓ(t, x^o(t))dt ≤ lim inf ∫_I ℓ(t, x^k(t))dt.

Hence

J(u^o) = ∫_I ℓ(t, x^o(t))dt + Φ(x^o(T)) + ϕ(u^o)
  ≤ lim inf_k [ ∫_I ℓ(t, x^k(t))dt + Φ(x^k(T)) + ϕ(u^k) ] = lim inf_k J(u^k).

This proves that J is weakly lower semicontinuous on Mca(Σ_I, R^d). Using this fact and the weak compactness of the set Uad, we prove the existence of an optimal control. If J(u) ≡ +∞ there is nothing to prove, so we may assume the contrary. Since h ∈ L1(I), c ∈ R and ϕ ≥ 0, it is clear that J(u) > −∞ for all u ∈ Uad. Let {u^k} ⊂ Uad be a minimizing sequence, so that

lim_k J(u^k) = m_o = inf{J(u), u ∈ Uad}.

Since Uad is weakly compact, there exists a subsequence of the sequence u^k, relabeled as the original sequence, and an element u^o ∈ Uad such that u^k −w→ u^o. Then, by weak lower semi-continuity of J, we have

J(u^o) ≤ lim inf_k J(u^k) = lim_k J(u^k) = m_o.

On the other hand, since u^o ∈ Uad and m_o is the infimum of J on this set, J(u^o) ≥ m_o. Hence, we conclude that J(u^o) = m_o and so u^o is the optimal control. This completes the proof.

An example of a functional ϕ is given by the total variation (norm) of u, that is, ϕ(u) = ‖u‖. Since in a Banach space the norm is weakly lower semicontinuous, ϕ is weakly lower semicontinuous. Clearly, this represents the cost of frequent switching of control energy.
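When the control measure u in (4.40) is purely atomic, u = Σ_i a_i δ_{t_i}, the state drifts along ẋ = F(t, x) between atoms and jumps by B(t_i)a_i at each atom, and ϕ(u) = Σ_i |a_i| is the total variation cost mentioned above. The sketch below uses assumed scalar data (F(t, x) = −x, B ≡ 1) for illustration.

```python
# Sketch of system (4.40),  dx = F(t, x) dt + B(t) u(dt),  when the control
# measure u is purely atomic:  u = sum_i a_i * delta_{t_i}.  Between atoms the
# state follows x' = F(t, x); at each atom it jumps by B(t_i) * a_i.
# F, B, the atoms, and the cost terms below are illustrative assumptions.

def solve_impulsive(atoms, x0=1.0, T=1.0, steps=1000):
    """Euler integration of the drift plus state jumps at the atoms of u."""
    dt = T / steps
    x = x0
    pending = sorted(atoms)            # list of (t_i, a_i)
    for k in range(steps):
        t = k * dt
        while pending and pending[0][0] <= t:
            _, a = pending.pop(0)
            x += 1.0 * a               # jump x <- x + B(t_i) a_i,  B = 1 assumed
        x += (-x) * dt                 # drift F(t, x) = -x assumed
    return x

def cost(atoms):
    """Terminal cost x(T)^2 plus control cost phi(u) = total variation
    = sum |a_i|  (the running cost is omitted for brevity)."""
    xT = solve_impulsive(atoms)
    return xT * xT + sum(abs(a) for _, a in atoms)

x_free = solve_impulsive([])               # no impulses: pure decay
x_kick = solve_impulsive([(0.5, -0.3)])    # one downward kick at t = 0.5
```

Here the total variation term penalizes the kick, so whether an impulse pays off depends on the trade-off between control cost and terminal cost.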


4.5 Impulsive Controls II

In this section we consider a larger class of measure driven systems that cover impulsive systems as a special case. Let Mbfa(Σ_{U×I}, R^m) denote the space of finitely additive R^m-valued vector measures on the algebra Σ_{U×I} of subsets of the set U × I, where U ⊂ R^d and I = [0, T]. For admissible controls we choose a subset Mad ⊂ Mbfa(Σ_{U×I}, R^m) satisfying the following conditions:

(a1) Mad is a bounded set in the sense that sup{‖μ‖, μ ∈ Mad} < ∞.
(a2) There exists a finitely additive nonnegative measure m_o ∈ M^+_bfa(Σ_{U×I}) such that, for every E ⊂ U × I with E ∈ Σ_{U×I}, lim_{m_o(E)→0} |μ|(E) = 0 uniformly with respect to μ ∈ Mad.
(a3) For every E ∈ Σ_{U×I}, the set {μ(E), μ ∈ Mad} is a relatively compact subset of R^m.

In the finite dimensional case the condition (a3) is superfluous, since it is implied by condition (a1).

Theorem 4.5.1 ([57, Brooks and Dinculeanu]) Under the assumptions (a1) and (a2), the set Mad is a (relatively) weakly compact subset of Mbfa(Σ_{U×I}, R^m). These conditions are also necessary.

This is a very special case of a general result on the characterization of weakly compact sets in the space of finitely additive vector measures with values in infinite dimensional Banach spaces; see Theorem 1.11.12 and, for details, [57, Theorem IV.2.5, Corollary IV.2.6]. Corollary IV.2.6 is due to Brooks and Dinculeanu; it generalizes a result due to Bartle–Dunford–Schwartz for countably additive vector measures ([57, Theorem IV.2.5]) to finitely additive vector measures. Here we consider the following measure driven system, described by the differential equation

dx = F(t, x)dt + ∫_U G(t, x, ξ)μ(dξ × dt), x(0) = x0, t ∈ I ≡ [0, T],   (4.44)

where F : I × R^n −→ R^n and G : I × R^n × U −→ L(R^m, R^n) are Borel measurable maps and μ ∈ Mad is the control. The vector space of bounded linear operators from R^m to R^n, denoted by L(R^m, R^n) ≡ M, is endowed with the standard operator norm. As discussed in Chap. 3, the system is better understood as the following integral equation:

x(t) = x0 + ∫_0^t F(s, x(s))ds + ∫_0^t ∫_U G(s, x(s), ξ)μ(dξ × ds), t ∈ I.   (4.45)


The cost functional is given by

J(μ) ≡ ∫_{U×I} ℓ(t, x(μ)(t), ξ)m_o(dξ × dt) + Φ(x(μ)(T)),   (4.46)

where m_o ∈ M^+_bfa(Σ_{U×I}) is the measure characterizing weak compactness of the admissible set Mad (see (a2)), and x(μ) ∈ B∞(I, R^n) is the solution of the system Eq. (4.44), or equivalently of the associated integral Eq. (4.45). The objective is to find a control measure μ ∈ Mad that minimizes the cost functional (4.46) subject to the dynamic constraint (4.44). Our objective here is to prove existence of optimal controls.

Basic Assumptions To proceed further, we need the following assumptions. Here the set U ⊂ R^d is not necessarily bounded.

(A1) F : I × R^n −→ R^n is Borel measurable and there exists a constant K1 > 0 such that

(1): ‖F(t, x)‖_{R^n} ≤ K1(1 + ‖x‖_{R^n}), x ∈ R^n, t ∈ I,
(2): ‖F(t, x) − F(t, y)‖_{R^n} ≤ K1 ‖x − y‖_{R^n}, x, y ∈ R^n, t ∈ I.

(A2) G : I × R^n × U −→ L(R^m, R^n) is Borel measurable and there exist a finite measurable function K2 : U −→ R_0 ≡ [0, ∞) and a nonnegative bounded measure ν ∈ M^+_bfa(Σ_I) such that, for x, y ∈ R^n, ξ ∈ U and t ∈ I,

(1): ‖G(t, x, ξ)‖_M ≤ K2(ξ)(1 + ‖x‖),
(2): ‖G(t, x, ξ) − G(t, y, ξ)‖_M ≤ K2(ξ) ‖x − y‖_{R^n},

with K2 satisfying

sup_{μ∈Mad} ∫_{U×D} K2(ξ)|μ|(dξ × dt) ≤ ν(D), for any D ∈ Σ_I.

To prove existence of optimal controls, we need the following important result on the continuity of the control-to-solution map μ −→ x(μ).

Theorem 4.5.2 Consider the system (4.44), suppose the assumptions (A1)–(A2) hold, and let Mad be a weakly compact subset of Mbfa(Σ_{U×I}, R^m). Then, the map μ −→ x(μ) from Mad to B∞(I, R^n) is continuous with respect to the relative weak topology on Mad and the norm topology on B∞(I, R^n).

Proof Let {μ^k, μ^o} ⊂ Mad and suppose μ^k −w→ μ^o. Let x^k ≡ x(μ^k) and x^o ≡ x(μ^o) denote the unique solutions of Eq. (4.44) corresponding to the driving measures μ^k and μ^o, respectively. Clearly, this means that {x^k, x^o} satisfies the


following integral equations:

x^k(t) = x0 + ∫_0^t F(s, x^k(s))ds + ∫_0^t ∫_U G(s, x^k(s), ξ)μ^k(dξ × ds), t ∈ I,   (4.47)

x^o(t) = x0 + ∫_0^t F(s, x^o(s))ds + ∫_0^t ∫_U G(s, x^o(s), ξ)μ^o(dξ × ds), t ∈ I,   (4.48)

where x^k(t) ≡ x(μ^k)(t) and x^o(t) ≡ x(μ^o)(t), t ∈ I. Subtracting Eq. (4.48) from Eq. (4.47) term by term and rearranging terms, we obtain the following identity:

x^k(t) − x^o(t) = ∫_0^t [F(s, x^k(s)) − F(s, x^o(s))] ds
  + ∫_0^t ∫_U [G(s, x^k(s), ξ) − G(s, x^o(s), ξ)] μ^k(dξ × ds)
  + ∫_0^t ∫_U G(s, x^o(s), ξ)(μ^k − μ^o)(dξ × ds).   (4.49)

We denote the last term on the right-hand side of the above expression by e_k, giving

e_k(t) ≡ ∫_0^t ∫_U G(s, x^o(s), ξ)(μ^k − μ^o)(dξ × ds), t ∈ I.   (4.50)

Taking the norm on either side of the expression (4.49) and using the basic assumptions (A1) and (A2) and the triangle inequality, we obtain the following inequality:

‖x^k(t) − x^o(t)‖_{R^n} ≤ ∫_0^t K1 ‖x^k(s) − x^o(s)‖_{R^n} ds
  + ∫_0^t ∫_U K2(ξ) ‖x^k(s) − x^o(s)‖ |μ^k|(dξ × ds) + ‖e_k(t)‖, t ∈ I.   (4.51)

Hence, using the assumption (A2) related to the function K2 and the dominating property of the related measure ν ∈ M^+_bfa(Σ_I), it follows from the above inequality that

‖x^k(t) − x^o(t)‖_{R^n} ≤ ∫_0^t K1 ‖x^k(s) − x^o(s)‖_{R^n} ds
  + ∫_0^t ‖x^k(s) − x^o(s)‖ ν(ds) + ‖e_k(t)‖, t ∈ I.   (4.52)

Using the function β given by

β(t) ≡ ∫_0^t K1 ds + ∫_0^t ν(ds), t ∈ I,

which is clearly a positive monotone increasing function of bounded variation, we can rewrite the inequality (4.52) as follows:

‖x^k(t) − x^o(t)‖_{R^n} ≤ ∫_0^t ‖x^k(s) − x^o(s)‖_{R^n} dβ(s) + ‖e_k(t)‖, t ∈ I.   (4.53)

Defining ϕ_k(t) ≡ ‖x^k(t) − x^o(t)‖, t ∈ I, it follows from the generalized Grönwall inequality that

ϕ_k(t) ≤ ‖e_k(t)‖ + ∫_0^t exp(∫_s^t dβ(θ)) ‖e_k(s)‖ dβ(s)
  ≤ ‖e_k(t)‖ + e^{β(t)} ∫_0^t ‖e_k(s)‖ dβ(s), t ∈ I.   (4.54)

It follows from the weak convergence of μ^k to μ^o that the function e_k, given by the expression (4.50), converges to zero for each t ∈ I, that is, the vector e_k(t) −→ 0. Using the expression (4.50), one can easily verify that

‖e_k(t)‖ ≤ 2 ∫_0^t (1 + ‖x^o(s)‖)ν(ds) ≤ 2 ∫_I (1 + ‖x^o(s)‖)ν(ds), t ∈ I,   (4.55)

and hence sup{‖e_k(t)‖, t ∈ I} ≤ 2(1 + ‖x^o‖_{B∞(I,R^n)})ν(I). Thus, it follows from the Lebesgue bounded convergence theorem (see Chap. 1, Remark 1.7.8) that the expression on the right-hand side of the inequality (4.54) converges to zero uniformly with respect to t ∈ I. Hence, ϕ_k(t) −→ 0 uniformly in t ∈ I. In other words, x^k −→ x^o in the norm topology of B∞(I, R^n). This shows that the map μ −→ x(μ) is continuous with respect to the weak topology on Mad and the norm topology on B∞(I, R^n). This completes the proof.

Next, we consider the question of existence of optimal controls. This is presented in the following theorem.

Theorem 4.5.3 Consider the system (4.44) and suppose that the set of admissible controls Mad is a weakly compact subset of Mbfa(Σ_{U×I}, R^m), and the objective functional is given by

J(μ) ≡ ∫_{U×I} ℓ(t, x(t), ξ)m_o(dξ × dt) + Φ(x(T)),   (4.56)


where m_o ∈ M^+_bfa(Σ_{U×I}) and x(t) ≡ x(μ)(t), t ∈ I, is the solution of (4.44) corresponding to the control measure μ ∈ Mad. Suppose the assumptions of Theorem 4.5.2 hold and that ℓ and Φ satisfy the following assumptions:

(1) ℓ : I × R^n × U −→ R is nonnegative, Borel measurable in all the arguments, lower semicontinuous in the second argument x ∈ R^n uniformly with respect to (t, ξ) ∈ I × U, and m_o-integrable on U × I uniformly with respect to x in bounded subsets of R^n.
(2) Φ : R^n −→ R is nonnegative and lower semicontinuous.

Then, there exists an optimal control policy at which J attains its minimum.

Proof Since Mad is a weakly compact subset of Mbfa(Σ_{U×I}, R^m), it suffices to prove that the map μ −→ J(μ) is weakly lower semicontinuous on Mad. Let μ^k −w→ μ^o in Mad. It follows from Theorem 4.5.2 that (along a subsequence if necessary) x(μ^k) −s→ x(μ^o) in the Banach space B∞(I, R^n). So it follows from the lower semi-continuity of ℓ and Φ that

ℓ(t, x^o(t), ξ) ≤ lim inf ℓ(t, x^k(t), ξ),   (4.57)
Φ(x^o(T)) ≤ lim inf Φ(x^k(T)),   (4.58)

for m_o-almost all (ξ, t) ∈ U × I. Using the integral equation

x(μ)(t) = x0 + ∫_0^t F(s, x(μ)(s))ds + ∫_0^t ∫_U G(s, x(μ)(s), ξ)μ(dξ × ds)   (4.59)

and the basic assumptions of the preceding theorem, one can verify that

(1 + ‖x(μ)(t)‖) ≤ (1 + ‖x0‖) + ∫_0^t (1 + ‖x(μ)(s)‖)dβ(s), t ∈ I.   (4.60)

Hence, by virtue of the generalized Grönwall inequality, it follows from the above expression that the solution set

S ≡ {x(μ) ∈ B∞(I, R^n) : x(μ) is a solution of Eq. (4.59), μ ∈ Mad}

is a bounded subset of B∞(I, R^n). Thus, there exists a closed ball B_r ⊂ R^n of finite radius r such that x(μ)(t) ∈ B_r for all t ∈ I and all μ ∈ Mad. Since ℓ is m_o-integrable on U × I uniformly with respect to x in bounded subsets of R^n, both sides of the inequality (4.57) are m_o-integrable. Hence, it follows from the inequality (4.57) that

∫_{U×I} ℓ(t, x^o(t), ξ) m_o(dξ × dt) ≤ ∫_{U×I} lim inf ℓ(t, x^k(t), ξ) m_o(dξ × dt).   (4.61)


Using Fatou's Lemma, it follows from the above inequality that

∫_{U×I} ℓ(t, x^o(t), ξ) m_o(dξ × dt) ≤ lim inf ∫_{U×I} ℓ(t, x^k(t), ξ) m_o(dξ × dt).   (4.62)

Summing (4.58) and (4.62), we conclude that J(μ^o) ≤ lim inf J(μ^k). This proves that J is weakly lower semicontinuous on Mad. Since Mad is weakly compact, there exists a μ^o ∈ Mad at which J attains its minimum. This completes the proof.

Remark 4.5.4 We note that the assumption (1) in (A1) of Theorem 4.5.2 can be relaxed by taking K1 ∈ L1^+(I) and replacing the uniform Lipschitz hypothesis by a local Lipschitz condition, with only minor changes in the proof. Hence, the existence Theorem 4.5.3 continues to hold under these relaxed conditions.

Remark 4.5.5 Another important class of impulsive systems that follows naturally from the class considered in this section is described by the following differential equation:

dx = F(t, x)dt + G(t, x)γ(dt), x(0) = x0, t ∈ I,   (4.63)

where γ ∈ Mbfa(Σ_I, R^m). Here Σ_I denotes an algebra of subsets of the set I. This model clearly follows from Eq. (4.44) by taking G independent of the variable ξ ∈ U and considering

γ(dt) ≡ ∫_U μ(dξ × dt).
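For a finitely supported control measure μ on U × I, both the solution of (4.44) and the marginal γ of Remark 4.5.5 can be computed directly; the atoms, drift F(t, x) = −x, and kernel G(t, x, ξ) = ξ below are illustrative assumptions.

```python
# Sketch of the reduction in Remark 4.5.5 for a finitely supported control
# measure  mu = sum_i m_i * delta_{(xi_i, t_i)}  on U x I.  The marginal
# gamma(D) = mu(U x D) drives the reduced model (4.63).  All data assumed.

atoms = [((0.7, 0.25), 0.4),      # ((xi_i, t_i), m_i)
         ((-0.2, 0.25), 0.1),
         ((0.5, 0.80), 0.3)]

def gamma(D):
    """Marginal measure gamma(D) = mu(U x D) for a half-open interval (a, b]."""
    a, b = D
    return sum(m for (xi, t), m in atoms if a < t <= b)

def solve(x0=1.0, T=1.0, steps=1000):
    """Euler scheme for dx = F(t,x) dt + int_U G(t,x,xi) mu(dxi x dt), with
    assumed drift F(t, x) = -x and kernel G(t, x, xi) = xi."""
    dt = T / steps
    x = x0
    pending = sorted(atoms, key=lambda p: p[0][1])   # order atoms by time
    for k in range(steps):
        t = k * dt
        while pending and pending[0][0][1] <= t:
            (xi, _), m = pending.pop(0)
            x += xi * m                # jump G(t_i, x, xi_i) * m_i
        x += (-x) * dt
    return x
```

The additivity of γ over disjoint time intervals is immediate from the construction.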

4.6 Structural Control

In many engineering problems it is possible to maneuver certain mechanical structures to control the operation of the system optimally. An important example is an aircraft controlled by appropriate movement of the rudder, ailerons, wing flaps, etc. Similarly, in management problems an appropriate reorganization of the hierarchy of administrative staff and ground workers may improve the day-to-day operation of the system, maximizing efficiency. This is structural control. Here we consider a class of linear systems of the form

dx = A0(t)x(t)dt + B(dt)x(t) + f(t)dt, x(0) = x0,   (4.64)

where A0(t) ∈ L(R^n), t ∈ I, is a given square matrix-valued function, B(·) is an L(R^n)-valued measure belonging to Mca(Σ_I, L(R^n)), and f is an external force. The objective functional is given by

J(B) = ∫_I ℓ(t, x)dt + Φ(x(T)).   (4.65)


The problem is to find a B_o ∈ Mad ⊂ Mca(Σ_I, L(R^n)) that minimizes the objective functional J(B). To solve this problem we use the following theorem.

Theorem 4.6.1 Suppose there exists an integrable function a ∈ L1^+(I) such that ‖A0(t)‖_{L(R^n)} ≤ a(t) a.e. t ∈ I, and f ∈ L1(I, R^n). The set Mad is a weakly compact subset of Mca(Σ_I, L(R^n)) and there exists a ν ∈ M^+_ca(Σ_I) such that the variation |B|(σ) ≤ ν(σ) for all σ ∈ Σ_I, uniformly with respect to B ∈ Mad. Then, the solution set

S ≡ {x ∈ B∞(I, R^n) : x = x(B) for some B ∈ Mad}

is a bounded subset of B∞(I, R^n), and B −→ x(B) is continuous with respect to the weak topology on Mca(Σ_I, L(R^n)) and the strong (norm) topology on B∞(I, R^n).

Proof Existence of solutions follows from Theorem 2.3.9 of Chap. 2. Since the set Mad is bounded, it follows from the generalized Grönwall inequality that the solution set S is bounded. We prove the continuity. Let {B_k, B_o} ⊂ Mad, suppose B_k −w→ B_o in Mca(Σ_I, L(R^n)), and let {x^k, x^o} ⊂ B∞(I, R^n) denote the corresponding solutions of Eq. (4.64). Consider the associated integral equations

x^k(t) = x0 + ∫_0^t A0(s)x^k(s)ds + ∫_0^t B_k(ds)x^k(s) + ∫_0^t f(s)ds, t ∈ I,   (4.66)

x^o(t) = x0 + ∫_0^t A0(s)x^o(s)ds + ∫_0^t B_o(ds)x^o(s) + ∫_0^t f(s)ds, t ∈ I.   (4.67)

Subtracting Eq. (4.66) from Eq. (4.67) term by term and rearranging terms, we obtain

x^o(t) − x^k(t) = ∫_0^t A0(s)(x^o(s) − x^k(s))ds + ∫_0^t B_k(ds)(x^o(s) − x^k(s))
  + ∫_0^t (B_o(ds) − B_k(ds))x^o(s), t ∈ I.   (4.68)

Computing the norms on either side of the above expression and using the triangle inequality, the assumptions on A0, and the dominating measure ν related to the admissible set Mad, one can easily verify that

‖x^o(t) − x^k(t)‖ ≤ ∫_0^t a(s) ‖x^o(s) − x^k(s)‖ ds + ∫_0^t ‖x^o(s) − x^k(s)‖ ν(ds) + ‖e_k(t)‖, t ∈ I,   (4.69)

where

e_k(t) ≡ ∫_0^t (B_o(ds) − B_k(ds))x^o(s), t ∈ I.   (4.70)

Define the measure μ ∈ M^+_ca(Σ_I) by

μ(E) ≡ ∫_E a(s)ds + ∫_E ν(ds), E ∈ Σ_I.

Again, by virtue of the generalized Grönwall inequality, it follows from the expression (4.69) that

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + exp{μ(I)} ∫_0^t ‖e_k(s)‖ μ(ds), t ∈ I.   (4.71)

Since B_k converges weakly to B_o, it follows from (4.70) that e_k(t) −→ 0 for each t ∈ I. Further, using the dominating measure ν, one can easily verify that

‖e_k(t)‖ ≤ 2 ∫_I ‖x^o(s)‖ ν(ds), ∀ t ∈ I.   (4.72)

Since the solution set S is bounded, it is clear that sup{‖e_k(t)‖, t ∈ I, k ∈ N} < ∞. Thus, it follows from the Lebesgue bounded convergence theorem that the expression on the right-hand side of the inequality (4.71) converges to zero uniformly on I. Hence, x^k −s→ x^o in B∞(I, R^n). This completes the proof.

Remark 4.6.2 Note that, for the proof of the above theorem, it is not necessary to assume that Mad is weakly compact.

Next we prove the existence of an optimal structural control B_o ∈ Mad. This is stated in the following theorem.

Theorem 4.6.3 Consider the system (4.64) with the cost functional (4.65), and suppose the assumptions of Theorem 4.6.1 hold and the functions {ℓ, Φ} satisfy the following properties:

(1) ℓ : I × R^n −→ R is nonnegative, Borel measurable in all the arguments, and lower semicontinuous in the second argument x ∈ R^n for almost all t ∈ I.
(2) Φ : R^n −→ R is nonnegative and lower semicontinuous.

Then, there exists an optimal structural control B_o ∈ Mad at which J attains its minimum.


Proof The proof is very similar to that of Theorem 4.5.3; we present an outline. Under the given assumptions on ℓ and Φ, one can prove that the functional J(B) is weakly lower semicontinuous on Mca(Σ_I, L(R^n)). Thus, since Mad is weakly compact, there exists a B_o ∈ Mad such that J(B_o) ≤ J(B), ∀ B ∈ Mad.

Remark 4.6.4 Using exactly similar steps and standard assumptions on F, one can prove existence of an optimal structural control for the following nonlinear system:

dx = A0(t)x(t)dt + B(dt)x(t) + F(t, x)dt, x(0) = x0,   (4.73)

with the same cost functional (4.65). The reader is encouraged to carry out the details.
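When the operator-valued measure B in (4.64) is atomic, B = Σ_i B_i δ_{t_i}, each atom restructures the state instantaneously: x ← (I + B_i)x. The sketch below simulates an assumed two-dimensional rotation with one damping intervention; all matrices and the atom location are illustrative assumptions.

```python
# Sketch of the structural control model (4.64),
#   dx = A0(t) x dt + B(dt) x,  with an atomic operator-valued measure
#   B = sum_i B_i * delta_{t_i}:  at each atom the state is restructured,
#   x <- (I + B_i) x.  The matrices and atom below are assumptions.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def solve_structural(B_atoms, x0=(1.0, 0.0), T=1.0, steps=2000):
    A0 = [[0.0, 1.0], [-1.0, 0.0]]        # assumed rotation generator
    dt = T / steps
    x = list(x0)
    pending = sorted(B_atoms)             # list of (t_i, B_i)
    for k in range(steps):
        t = k * dt
        while pending and pending[0][0] <= t:
            _, B = pending.pop(0)
            jump = mat_vec(B, x)          # structural jump: x <- x + B_i x
            x = [x[i] + jump[i] for i in range(2)]
        dxdt = mat_vec(A0, x)             # drift A0 x between atoms
        x = [x[i] + dxdt[i] * dt for i in range(2)]
    return x

# One structural intervention at t = 0.5 that damps the state by 20%:
x_plain = solve_structural([])
x_struct = solve_structural([(0.5, [[-0.2, 0.0], [0.0, -0.2]])])
```

Without intervention the rotation preserves the norm (up to Euler error); the single atom scales it by 0.8.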

4.7 Differential Inclusions (Regular Controls)

Uncertain systems can be modelled as differential inclusions. The uncertainty may arise from incomplete knowledge of the dynamics or from unknown system parameters with known bounds, as discussed in Chap. 3. Here we present one such result. Using the techniques of the preceding sections, one can prove existence of optimal policies for systems described by a controlled differential inclusion:

ẋ(t) ∈ F(t, x(t), u(t)), x(0) = x0, t ∈ I,   (4.74)

where F(t, ξ, v) is a multi-valued map. Here we consider the admissible controls to be ordinary controls (bounded measurable vector-valued functions with values in U ⊂ R^m). Under the assumptions of Theorem 3.5.4 (holding uniformly with respect to the control domain U), the system has a nonempty set of solutions. Let X(u) denote the solution set corresponding to the control u ∈ Uad. The problem is to find a control policy u^o ∈ Uad such that Ĵ(u^o) = inf{Ĵ(u), u ∈ Uad}, where Ĵ(u) ≡ sup{C(u, x), x ∈ X(u)}, with the functional C given by

C(u, x) ≡ ∫_I ℓ(t, x(t), u(t))dt + Φ(x(T)).   (4.75)

Clearly, this is a min-max problem where one tries to minimize the maximum risk. Let cc(R^n) denote the class of nonempty closed convex subsets of R^n. For each (t, x) ∈ I × R^n, define the contingent set as

Q(t, x) ≡ {(ζ, η) ∈ R × R^n : ζ ≥ ℓ(t, x, u), η ∈ F(t, x, u) for some u ∈ U}.   (4.76)
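A brute-force sketch of the min-max structure: for assumed scalar dynamics ẋ = u + v with constant controls on grids over U = V = [−1, 1], Ĵ(u) is the worst case over the adversary's v, and the minimizer of Ĵ can be found by enumeration. All concrete choices (dynamics, initial state, target radius) are assumptions for illustration.

```python
# Brute-force sketch of the min-max problem:  player P1 picks u to minimize
# the worst-case terminal distance to a target set D = {x : |x| <= 0.1},
# while the adversary picks v to maximize it.  Dynamics x' = u + v,
# x(0) = 0.5 on [0, 1] with constant controls -- all choices assumed.

def terminal_distance(u, v, x0=0.5, T=1.0, radius=0.1):
    xT = x0 + (u + v) * T               # x' = u + v integrates exactly
    return max(0.0, abs(xT) - radius)   # d(x(T), D)

grid = [i / 20.0 - 1.0 for i in range(41)]           # U = V = [-1, 1]

def worst_case(u):                                   # J_hat(u) = sup over v
    return max(terminal_distance(u, v) for v in grid)

u_best = min(grid, key=worst_case)                   # minimize the max risk
value = worst_case(u_best)                           # min-max value
```

Here u = −0.5 centers the reachable terminal states around the target, balancing the adversary's two extreme responses.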


We need the following assumptions:

(F1) F : I × R^n × U −→ cc(R^n).
(F2) t −→ F(t, x, u) is measurable for each x ∈ R^n and u ∈ U.
(F3) u −→ F(t, x, u) is continuous with respect to the Hausdorff metric d_H on cc(R^n), uniformly with respect to (t, x) ∈ I × R^n.
(F4) x −→ F(t, x, u) is upper semicontinuous uniformly with respect to (t, u) ∈ I × U, and there exists a K ∈ L1^+(I) such that

d_H(F(t, x, u), F(t, y, u)) ≤ K(t) ‖x − y‖ ∀ u ∈ U.

(F5) There exists an h ∈ L1^+(I) such that

‖z‖ ≤ h(t)(1 + ‖x‖), ∀ z ∈ F(t, x, u), ∀ u ∈ U.

Theorem 4.7.1 Suppose the set U is a compact and convex subset of R^m, the multifunction F satisfies the assumptions (F1)–(F5), and the functions ℓ and Φ, determining the cost functional C, satisfy the assumptions of Theorem 4.2.4. Then, the min-max problem as stated above has a solution.

Proof The proof is very similar to that of Theorem 4.2.4; we present a brief outline. It follows from the assumptions (F1), (F2), (F4), and (F5) that, for each control policy u ∈ Uad, the system

ẋ(t) ∈ F(t, x(t), u(t)), t ∈ I, x(0) = x0,

has a nonempty set of solutions, denoted by X(u). This is a bounded equicontinuous subset of C(I, R^n); hence, by the Ascoli–Arzelà theorem, it is relatively compact. That the set X(u) is closed follows from the fact that F is closed convex valued. Thus, for each u ∈ Uad, the solution set X(u) is a compact subset of C(I, R^n). By assumption, both ℓ and Φ are continuous in x on C(I, R^n). Thus, for each fixed u ∈ Uad, the functional x −→ C(u, x) attains its maximum on the set X(u), and therefore the functional Ĵ(u) ≡ sup{C(u, x), x ∈ X(u)} is well defined. We must demonstrate that Ĵ attains its infimum on Uad. Let {u_k} be a minimizing sequence for Ĵ and {x_k ∈ X(u_k)} the associated sequence at which Ĵ(u_k) = C(u_k, x_k). Using this sequence, we then construct the sequence

ℓ_k(t) ≡ ℓ(t, x_k(t), u_k(t)), t ∈ I,   f_k(t) ∈ F(t, x_k(t), u_k(t)), t ∈ I,

and note that (ℓ_k(t), f_k(t)) ∈ Q(t, x_k(t)), t ∈ I.


4 Optimal Control: Existence Theory

It follows from the assumptions (F1)–(F4) that the contingent set Q satisfies the weak Cesari property. Hence, from here on we can follow the same steps as in Theorem 4.2.4, starting from Eq. (4.9). This completes the brief outline of our proof. □

Remark 4.7.2 In the preceding theorem we gave sufficient conditions guaranteeing that the multifunction Q satisfies the weak Cesari property. These conditions, however, are not necessary. For more on uncertain systems see [13].

Some Examples From Differential Games

(E1) Consider two competing agents or parties, one of which wants to undermine the objective of the other. The system is governed by a differential equation with two sets of controls,

ẋ = f(t, x, u, v), x(0) = x_0,

where u ∈ U_ad and v ∈ V_ad. Here, player P1 can choose its controls from the set U_ad, and player P2 can choose its strategies from the set V_ad. Player P1 wants to reach its target D, a closed bounded convex subset of R^n, and player P2 tries to prevent this. This can be formulated as the min-max problem

inf_{u ∈ U_ad} sup_{v ∈ V_ad} {J(u, v) ≡ d(x^{u,v}(T), D)},

where x^{u,v}(T) denotes the terminal state actually reached under the control policies {u, v} used by players P1 and P2, respectively, and d is any metric on R^n. We show that this is equivalent to the general problem treated in Theorem 4.7.1. Define the multifunction F(t, x, u) ≡ f(t, x, u, V), with V denoting the control constraint set of player P2. Then the system is governed by the differential inclusion

ẋ ∈ F(t, x, u), x(0) = x_0,

with X(u) denoting the family of solutions corresponding to the control u ∈ U_ad. The objective functional is given by C(u, x) = d(x(T), D), for x ∈ X(u). The problem is to find a control u^o ∈ U_ad so that Ĵ(u^o) ≤ Ĵ(u) for all u ∈ U_ad, where Ĵ(u) ≡ sup{C(u, x) : x ∈ X(u)}. By Theorem 4.7.1 this problem has a solution.

(E2) Another classical example is found in differential games. Here there are two competing players with two competing dynamics given by

P1 : ẋ = f(t, x, u) ∈ R^n, u ∈ U_ad,    (4.77)
P2 : ẏ = g(t, y, v) ∈ R^n, v ∈ V_ad,    (4.78)


where the control strategies of player P1 consist of measurable functions with values in a closed bounded set U ⊂ R^{m_1}, and those of player P2 are measurable functions with values in a closed bounded set V ⊂ R^{m_2}. Player P1 wants to pursue P2, and player P2 wants to evade P1. Here one simple objective functional is given by

J(u, v) ≡ ϕ(‖x^u(T) − y^v(T)‖),

where ϕ is any continuous, nonnegative, nondecreasing function of its argument. The problem is to find a pair of policies (u^o, v^o) ∈ U_ad × V_ad so that

J(u^o, v^o) = inf_{u ∈ U_ad} sup_{v ∈ V_ad} J(u, v).

A pair (u^o, v^o) ∈ U_ad × V_ad is said to be a saddle point if it satisfies the inequalities

J(u^o, v) ≤ J(u^o, v^o) ≤ J(u, v^o).

This raises the question of existence of saddle points. If U and V are closed bounded convex sets, the corresponding classes of admissible controls U_ad and V_ad are weak-star compact subsets of L_∞(I, R^{m_1}) and L_∞(I, R^{m_2}), respectively. Let

A_1(T) ≡ {ξ ∈ R^n : ξ = x(u)(T), u ∈ U_ad},  A_2(T) ≡ {η ∈ R^n : η = y(v)(T), v ∈ V_ad}

denote the attainable sets (at time T) of P1 and P2, respectively. Then, under the usual regularity assumptions on the vector fields {f, g}, it is not difficult to prove that the attainable sets {A_1(T), A_2(T)} of the players P1 and P2 are compact subsets of R^n. Thus, the game problem stated above is equivalent to

inf_{u ∈ U_ad} sup_{v ∈ V_ad} J(u, v) = inf_{ξ ∈ A_1(T)} sup_{η ∈ A_2(T)} ϕ(‖ξ − η‖).

From the compactness of the attainable sets and the continuity of ϕ follows the existence of saddle points. For a detailed study of game theory see Tamar [36] and the references therein. Interested readers may also refer to [12, 13], where one will find a detailed analysis of much broader classes of systems in infinite dimensional Banach spaces, including stochastic systems.
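Once the attainable sets are compact, the reduction above turns the game into a finite-dimensional min-max over A_1(T) × A_2(T), which can be approximated by sampling. A hedged numerical sketch (the sampled sets and the function ϕ below are illustrative stand-ins, not derived from any system in the text):

```python
# Approximate inf_xi sup_eta phi(||xi - eta||) over sampled attainable sets.
# Illustrative data: A1 is (a sampling of) the unit circle, A2 a small circle
# centered at (3, 0); phi(r) = r^2 is continuous, nonnegative, nondecreasing.
import math

def phi(r):
    return r * r

angles = [k * 0.1 for k in range(63)]                      # covers [0, 2*pi)
A1 = [(math.cos(a), math.sin(a)) for a in angles]
A2 = [(3.0 + 0.5 * math.cos(a), 0.5 * math.sin(a)) for a in angles]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Player P1 minimizes the worst-case (sup over eta) cost.
value = min(max(phi(dist(xi, eta)) for eta in A2) for xi in A1)
# The optimal xi here is (1, 0), with worst-case eta = (3.5, 0): value = 6.25.
```

The inner max and outer min are both attained because the sampled sets are finite; compactness of the true attainable sets plays exactly this role in the continuous problem.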


4.8 Differential Inclusions (Measure-Valued Controls)

Here we consider the question of existence of optimal controls for differential inclusions driven by vector measures. Consider the system

dx ∈ F(t, x(t))dt + G(t, x(t))γ(dt), x(0) = x_0, t ∈ I,    (4.79)

for γ ∈ M_ad ⊂ M_ca(I, R^m), where M_ad is the set of admissible controls. Let S(γ) ⊂ B_∞(I, R^n) denote the family of solutions corresponding to a given control γ ∈ M_ad. We consider a terminal control problem. Let Φ : R^n → R be a continuous map. We want to find a control that minimizes the maximum (risk) potential cost. Define

J^o(γ) ≡ sup_{x ∈ S(γ)} Φ(x(T)).

The objective is to find a γ_o ∈ M_ad such that J^o(γ_o) ≤ J^o(γ) ∀ γ ∈ M_ad. Clearly, this is a min-max control problem.

Lemma 4.8.1 Let M_o be a closed bounded subset of M_ca(I, R^m), and suppose there exists a measure ν ∈ M_ca^+(I) such that M_o is uniformly ν-continuous. Let L_o ⊂ L_1(ν, R^m) denote the set of Radon–Nikodym derivatives (RNDs) of the elements of M_o with respect to ν. Suppose there exists a g^o ∈ L_1^+(ν) such that the set

L_oo ≡ {g ∈ L_o : ‖g(t)‖ ≤ g^o(t) ν-a.e. t ∈ I}

is a nonempty subset of L_o. Then the set

M_oo ≡ {γ ∈ M_ca(I, R^m) : dγ = g dν, g ∈ L_oo}

is a weakly compact subset of M_o.

Proof By assumption, M_o is a bounded and uniformly ν-continuous subset of M_ca(I, R^m). In finite dimensional spaces, such as R^m, these conditions are necessary and sufficient for relative weak compactness of the set M_o (see [57, Theorem IV.2.5]). Since M_o is also assumed to be closed, it is weakly compact. Note that M_o is isomorphic to L_o (M_o ≅ L_o), and since compactness is preserved under isomorphism, L_o is a weakly compact subset of L_1(ν, R^m). For any Banach space X and any x ∈ X, it follows from a corollary of the Hahn–Banach theorem that there exists an x* ∈ S_1(X*), the unit sphere of the dual X*, such that x*(x) = ‖x‖. Using this fact it is easy to verify that L_oo is a closed subset of L_o. Thus, L_oo is also a weakly compact subset of L_1(ν, R^m). Hence, under the isomorphism, M_oo is


a weakly compact subset of M_ca(I, R^m), and M_oo ≅ L_oo. This concludes the proof. □

For admissible controls we choose the set M_oo, and use the above result to prove the following theorem concerning existence of a solution of the min-max control problem stated above.

Theorem 4.8.2 Consider the system governed by the differential inclusion (4.79) with admissible controls M_oo. Suppose the assumptions of Theorem 3.5.6 hold, with the function L(t) = L > 0 assumed constant. Let Φ : R^n → R, defining the objective functional, be continuous and bounded on bounded sets. Then there exists an optimal control for the min-max problem as stated.

Proof Recall that for every f ∈ L_1(I, R^n) the differential equation

dx = f(t)dt + G(t, x(t))γ(dt), x(0) = x_0, t ∈ I,    (4.80)

has a unique solution x = η_γ(f) ∈ B_∞(I, R^n), which depends on the choice of γ and f as indicated. Using this, we define the multifunction F̂_γ : L_1(I, R^n) → 2^{L_1(I, R^n)} \ ∅ given by

F̂_γ(f) ≡ {h ∈ L_1(I, R^n) : h(t) ∈ F(t, η_γ(f)(t)) a.e. t ∈ I}.

Let Fix(F̂_γ) denote the set of fixed points of the multifunction F̂_γ. We know from Theorem 3.5.6 that the set Fix(F̂_γ) is nonempty, that it is a weakly compact subset of L_1(I, R^n), and that every element of this set determines a solution of the differential inclusion (4.79) through the map η_γ; conversely, every solution of the differential inclusion corresponds to a certain L_1 selection of the multifunction F̂_γ. Define the set S(γ) by

S(γ) ≡ {x ∈ B_∞(I, R^n) : x = η_γ(f) for some f ∈ Fix(F̂_γ)}.

This set is the complete family of solutions of the system (4.79) corresponding to the control measure γ. By virtue of Theorem 3.5.6, it is a nonempty subset of B_∞(I, R^n). Let us verify that the functional J^o(γ) is well defined. By definition,

J^o(γ) ≡ sup_{x ∈ S(γ)} Φ(x(T)) = sup{Φ(η_γ(f)(T)) : f ∈ Fix(F̂_γ)}.

Since the set Fix(F̂_γ) is a weakly sequentially compact subset of L_1(I, R^n), by the Eberlein–Šmulian theorem it is weakly compact. Thus, it suffices to prove that the map f → Φ(η_γ(f)(T)) is weakly continuous. Let {f_k, f_o} ⊂ Fix(F̂_γ), suppose f_k → f_o weakly in L_1(I, R^n), and let {x^k, x^o} ⊂ B_∞(I, R^n) denote the solutions


of the following system of integral equations:

x^k(t) = x_0 + ∫_0^t f_k(s)ds + ∫_0^t G(s, x^k(s))γ(ds), t ∈ I,    (4.81)
x^o(t) = x_0 + ∫_0^t f_o(s)ds + ∫_0^t G(s, x^o(s))γ(ds), t ∈ I.    (4.82)

Computing the difference x^o(t) − x^k(t), for each t ∈ I, we have

x^o(t) − x^k(t) = ∫_0^t (f_o(s) − f_k(s))ds + ∫_0^t [G(s, x^o(s)) − G(s, x^k(s))]γ(ds).    (4.83)

Evaluating the norm and using the Lipschitz property of G and the triangle inequality, we obtain, for each t ∈ I, the inequality

‖x^o(t) − x^k(t)‖ ≤ ‖∫_0^t (f_o(s) − f_k(s))ds‖ + L ∫_0^t ‖x^o(s) − x^k(s)‖ |γ|(ds).    (4.84)

Hence, it follows from the Grönwall inequality that

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + L exp{L|γ|(I)} ∫_0^t ‖e_k(s)‖ |γ|(ds), t ∈ I,    (4.85)

where

e_k(t) ≡ ∫_0^t (f_o(s) − f_k(s))ds, t ∈ I.    (4.86)

Since the sequence {f_k} converges weakly to f_o in L_1(I, R^n), the sequence e_k(t) converges to zero pointwise in t ∈ I in the standard norm topology of R^n. Further, since the set Fix(F̂_γ) is a bounded subset of L_1(I, R^n), there exists a finite number a > 0 such that sup{‖e_k(t)‖ : t ∈ I, k ∈ N} ≤ a. Hence, it follows from the Lebesgue bounded convergence theorem that the expression on the right-hand side of the inequality (4.85) converges to zero as k → ∞. Thus,

x^k(t) ≡ η_γ(f_k)(t) → η_γ(f_o)(t) = x^o(t), t ∈ I.

So, given that Φ is continuous on R^n, Φ(η_γ(f_k)(T)) → Φ(η_γ(f_o)(T))


as k → ∞. This shows that the map f → Φ(η_γ(f)(T)) is weakly continuous. Since the set Fix(F̂_γ) is a weakly compact subset of L_1(I, R^n), the functional determined by Φ as given above attains its maximum (as well as its minimum) on Fix(F̂_γ). Hence, we conclude that the functional

J^o(γ) ≡ sup_{x ∈ S(γ)} Φ(x(T)) = sup{Φ(η_γ(f)(T)) : f ∈ Fix(F̂_γ)}

is well defined. We now prove that the functional J^o is weakly continuous on M_oo. Let {γ_k, γ_o} ⊂ M_oo be such that γ_k → γ_o weakly. By virtue of the growth properties of the multifunction F and of G, and the fact that M_oo is a bounded subset of M_ca(I, R^m), it is not difficult to verify that the set S ≡ ∪{S(γ) : γ ∈ M_oo} is a bounded subset of B_∞(I, R^n). Thus, there exists a finite positive number b such that

sup{‖z‖ : z ∈ F(t, x(t))} ≤ K(t)(1 + ‖x(t)‖) ≤ K(t)(1 + b) ∀ x ∈ S, t ∈ I.

Define the set

W ≡ {f ∈ L_1(I, R^n) : ‖f(t)‖_{R^n} ≤ (1 + b)K(t) for a.e. t ∈ I}.

Clearly, this is a bounded and uniformly integrable subset of L_1(I, R^n), and so, by the Dunford–Pettis theorem, it is relatively weakly compact. Note that

W_o ≡ ∪{Fix(F̂_γ) : γ ∈ M_oo} ⊂ W.

Let f_k ∈ Fix(F̂_{γ_k}) be such that J^o(γ_k) = Φ(η_{γ_k}(f_k)(T)). Clearly, {f_k} ⊂ W_o ⊂ W, and since W is relatively weakly compact, there exist a subsequence of the sequence {f_k}, relabeled as the original sequence, and an element f_o ∈ L_1(I, R^n) such that f_k → f_o weakly in L_1(I, R^n). Let x^k ∈ B_∞(I, R^n) denote the solution of Eq. (4.80) corresponding to the pair (γ_k, f_k), and x^o ∈ B_∞(I, R^n) the solution of the same equation corresponding to the pair (γ_o, f_o). Thus, we have

x^k(t) = x_0 + ∫_0^t f_k(s)ds + ∫_0^t G(s, x^k(s))γ_k(ds), t ∈ I,    (4.87)
x^o(t) = x_0 + ∫_0^t f_o(s)ds + ∫_0^t G(s, x^o(s))γ_o(ds), t ∈ I.    (4.88)

Define the sequence {e_k} as follows:

e_k(t) ≡ ∫_0^t (f_o(s) − f_k(s))ds + ∫_0^t G(s, x^o(s))(γ_o(ds) − γ_k(ds)), t ∈ I.    (4.89)


Subtracting Eq. (4.87) from Eq. (4.88) term by term, we obtain

x^o(t) − x^k(t) = e_k(t) + ∫_0^t [G(s, x^o(s)) − G(s, x^k(s))]γ_k(ds), t ∈ I.    (4.90)

Computing the norm and using the triangle inequality and the Lipschitz property of G, one can readily verify that

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + ∫_0^t L ‖x^o(s) − x^k(s)‖ |γ_k|(ds), t ∈ I,    (4.91)

where, as usual, |γ_k|(·) ∈ M_ca^+(I) denotes the measure induced by the variation of γ_k. Since the elements of the admissible set of control measures M_oo have Radon–Nikodym derivatives with respect to ν, there exists g_k ∈ L_oo such that dγ_k = g_k dν. Using this fact in the above inequality, we obtain

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + ∫_0^t L ‖x^o(s) − x^k(s)‖ ‖g_k(s)‖_{R^m} ν(ds),    (4.92)

for all t ∈ I. Since {g_k} ⊂ L_oo, it follows from Lemma 4.8.1 that ‖g_k(t)‖ ≤ g^o(t) ν-a.e. t ∈ I, for all k ∈ N. Hence, it follows from the above inequality that

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + ∫_0^t L ‖x^o(s) − x^k(s)‖ g^o(s)ν(ds),    (4.93)

for all t ∈ I. Using the Grönwall inequality, we conclude that

‖x^o(t) − x^k(t)‖ ≤ ‖e_k(t)‖ + L exp{L ‖g^o‖_{L_1(ν)}} ∫_0^t ‖e_k(s)‖ g^o(s)ν(ds).    (4.94)

By the weak convergence of f_k to f_o in L_1(I, R^n), the weak convergence of γ_k to γ_o in M_ca(I, R^m), and the fact that G(t, x^o(t)) is bounded in norm in L(R^m, R^n) uniformly with respect to t ∈ I, we have e_k(t) → 0 for each t ∈ I. Further, it follows from the growth properties stated above that there exists a finite positive number d such that sup{‖e_k(t)‖ : t ∈ I} ≤ d < ∞. Thus, by the Lebesgue bounded convergence theorem, the right-hand expression of Eq. (4.94) converges to zero, and hence, for each t ∈ I, x^k(t) → x^o(t) in norm. Since Φ is continuous, it follows that Φ(x^k(T)) → Φ(x^o(T)), which is equivalent to lim_{k→∞} Φ(η_{γ_k}(f_k)(T)) = Φ(η_{γ_o}(f_o)(T)). This proves that lim_{k→∞} J^o(γ_k) = J^o(γ_o). Since M_oo is weakly compact and J^o is weakly continuous, we conclude that J^o attains its minimum on M_oo, and hence an optimal control exists. This completes the proof. □


4.9 Systems Controlled by Discrete Measures

In the following two sections we consider optimal control problems for a class of systems driven by impulsive forces (or discrete measures). Impulsive forces are characterized by their intensities and the time instants at which they are applied. Our objective is to find the optimal set of intensities and the optimal time instants of application of the impulsive forces so as to minimize certain cost functionals. We follow closely the method recently proposed in our paper [29]. We consider the following hybrid system, with the first and second equations describing continuous evolution, and the third equation describing evolution by jumps. The second equation determines the instants of time at which impulsive controls may be applied, leading to jumps in the state trajectory. So the system is governed by the following set of equations:

ẋ = F(t, x), x(0−) = x_0, t ∈ I \ I_λ, I ≡ [0, T],    (4.95)
λ̇ = g(t), t ∈ I, λ(t_0) = λ(0) = 0, g ∈ L_1^+(I),    (4.96)
x(λ(t_i)) = x(λ(t_i)−) + a_i G(λ(t_i), x(λ(t_i)), v_i), i ∈ {0, 1, · · · , κ},    (4.97)

where

I_λ ≡ {0 = λ(0) = λ(t_0) ≤ λ(t_1) ≤ λ(t_2) ≤ · · · ≤ λ(t_κ) = λ(T) = T}.

The class of functions determining λ is given by the following family of Lebesgue integrable functions:

G_T ≡ {g ∈ L_1^+(I) : ∫_0^t g(s)ds ≤ T ∀ t ∈ I, and ∫_0^T g(s)ds = T}.    (4.98)

Clearly, this is a bounded subset of L_1(I). Let G be any subset of G_T, and suppose it is uniformly integrable with respect to Lebesgue measure and weakly closed. Then it follows from the well-known Dunford–Pettis Theorem 1.11.10 [59, Theorem 9, p. 292] that G is a weakly compact subset of L_1(I). According to the Eberlein–Šmulian Theorem 1.11.11 [59, Theorem 1, p. 430], weak compactness and weak sequential compactness are equivalent in Banach spaces. Thus, G is a weakly sequentially compact subset of L_1(I). Using this G we introduce the set

Λ ≡ {λ ∈ C(I) : λ(t) ≡ ∫_0^t g(s)ds, t ∈ I, for some g ∈ G}.    (4.99)
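As a concrete check of (4.98)–(4.99): the function g(s) = 2s/T belongs to G_T, and its primitive λ(t) = t²/T is a nondecreasing reparameterization of I with λ(T) = T. A small numerical verification (the specific g is an illustrative choice):

```python
# Verify that g(s) = 2s/T satisfies the defining conditions of G_T in (4.98),
# and that its primitive lambda is a valid time reparameterization per (4.99).
T = 2.0
g = lambda s: 2.0 * s / T
lam = lambda t: t * t / T          # primitive of g: lambda(t) = int_0^t g

ts = [T * j / 100 for j in range(101)]
assert all(g(t) >= 0.0 for t in ts)                          # g in L_1^+(I)
assert all(lam(t) <= T + 1e-12 for t in ts)                  # lambda(t) <= T
assert abs(lam(T) - T) < 1e-12                               # lambda(T) = T
assert all(lam(ts[j]) <= lam(ts[j + 1]) for j in range(100)) # nondecreasing
```

A nonuniform λ of this kind redistributes the jump instants λ(t_i) along [0, T], which is exactly the degree of freedom the control g provides.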


We use this set to control the time instants at which impulsive forces may be applied. Further, to control the intensities we introduce the sets

A ≡ {a ∈ R^{κ+1} : |a_i| ≤ 1, i = 0, 1, 2, · · · , κ},    (4.100)
U ≡ {v : v_i ∈ U ⊂ R^d, i = 0, 1, 2, · · · , κ},    (4.101)

where U is a compact subset of R^d. Let D_ad ≡ G × A × U ≅ Λ × A × U denote the class of admissible controls (or decisions). This set is furnished with the product topology τ_p generated by the product of the weak topology on G and the standard norm topologies on A and U, respectively. The objective is to find a control ν = (g, a, v) ∈ D_ad that minimizes the cost functional

J(ν) ≡ ∫_0^T ℓ(t, x(t))dt + Φ(x(T)) + Φ_0(x(0)),    (4.102)

subject to the dynamic constraints (4.95)–(4.97).

In the following theorem we consider the questions of existence, uniqueness, and regularity of solutions of the system of Eqs. (4.95)–(4.97).

Theorem 4.9.1 Consider the system (4.95)–(4.97). Suppose F : I × R^n → R^n is continuous in all its variables, Lipschitz in the state variable with Lipschitz constant K > 0, and of at most linear growth; suppose the function G : I × R^n × R^d → R^n is continuous in all its variables, uniformly Lipschitz in the state variable with Lipschitz constant L ∈ (0, 1), and of at most linear growth. Then, for any initial state x_0 ∈ R^n and any control ν = (g, a, v) ∈ D_ad, the system (4.95)–(4.97) has a unique solution x ∈ B_∞(I, R^n); further, the solution is bounded and piecewise continuous.

Proof In principle the proof is very similar to that of [28, Proposition 2.1], so we present a brief outline of the major steps. Corresponding to the given g ∈ G, we define the function λ by λ(t) = ∫_0^t g(s)ds, t ∈ I. Since, for i = 0, t_0 = 0 and λ(t_0) = λ(0) = 0, it follows from Eq. (4.97) that

x(0) = x(0−) + a_0 G(0, x(0), v_0).    (4.103)

Since G is Lipschitz in the state variable, uniformly with respect to (t, v) ∈ I × U, with Lipschitz constant L ∈ (0, 1), and a_0 ∈ A, it follows from the Banach fixed point theorem that Eq. (4.103) has a unique solution x(0) ∈ R^n. Using this as the initial state for Eq. (4.95), and using the linear growth and Lipschitz properties of F in the state variable, one can easily prove, again by the Banach fixed point theorem, that Eq. (4.95) has a unique solution x_1 ∈ C((0, λ(t_1)), R^n) over the interval (λ(0), λ(t_1)) = (0, λ(t_1)). Denote the limit of this solution from the left by

x_1(λ(t_1)−) = lim_{t ↑ λ(t_1)} x_1(t) = x(0) + lim_{t ↑ λ(t_1)} ∫_0^t F(s, x_1(s))ds.


In general, for the interval (λ(t_i), λ(t_{i+1})), the jump evolution at time λ(t_i) is given by the solution of the fixed point problem in R^n,

z = x_i(λ(t_i)−) + a_i G(λ(t_i), z, v_i), i = 0, 1, · · · , κ,

giving z = x_i(λ(t_i)). In other words, x_i(λ(t_i)) satisfies the equation

x_i(λ(t_i)) = x_i(λ(t_i)−) + a_i G(λ(t_i), x_i(λ(t_i)), v_i), i = 0, 1, · · · , κ.    (4.104)

Using the state reached after the jump, one must consider the continuous evolution determined by the differential equation

ẋ = F(t, x), t ∈ (λ(t_i), λ(t_{i+1})), with initial condition x(λ(t_i)) = x_i(λ(t_i)),    (4.105)

or, equivalently, the integral equation

x(t) = x_i(λ(t_i)) + ∫_{λ(t_i)}^t F(s, x(s))ds, t ∈ (λ(t_i), λ(t_{i+1})).    (4.106)

The solution of this equation, denoted by x_{i+1}, is unique and continuous. Its left-hand limit, denoted by x_{i+1}(λ(t_{i+1})−), is given by x_{i+1}(λ(t_{i+1})−) ≡ lim_{t ↑ λ(t_{i+1})} x_{i+1}(t). This holds for all indices i ∈ {0, 1, 2, · · · , κ − 1}. Since λ(t_κ) = λ(T) = T, it follows from the above expression that, for i = κ − 1, the left-hand limit exists and is given by

x_κ(T−) = lim_{t ↑ T} x_κ(t).

This serves as the input to the evolution equation determining the jump at the terminal time, which can be written as

z = x_κ(T−) + a_κ G(T, z, v_κ).

Again, by virtue of the Banach fixed point theorem, this equation has a unique solution z = x_κ(T), satisfying

x_κ(T) = x_κ(T−) + a_κ G(T, x_κ(T), v_κ).

By concatenating all the pieces constructed above, we obtain the solution x of the system (4.95)–(4.97). By virtue of the Grönwall inequality, it follows from the (at most) linear growth of both F and G in the state variable that the solution x is bounded. Thus, x ∈ B_∞(I, R^n), and it is clear from the construction that it is also piecewise continuous. This completes the proof. □
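The constructive proof above suggests a numerical scheme: integrate ẋ = F(t, x) between jump instants and solve each jump equation by successive approximations, which converge because |a_i|L < 1. A hedged sketch with illustrative F, G, and jump data (all of the data below are assumptions, not from the text):

```python
# Hybrid simulation of (4.95)-(4.97): Euler integration between jumps,
# Banach (Picard) iteration for each jump equation z = x(tau-) + a*G(tau,z,v).
# G below is Lipschitz in z with constant 0.5, so |a|*0.5 < 1: a contraction.
import math

def F(t, x):
    return -x + math.sin(t)

def G(t, z, v):
    return 0.5 * math.cos(z) + v

def jump(x_minus, a, tau, v, iters=60):
    z = x_minus
    for _ in range(iters):             # successive approximations
        z = x_minus + a * G(tau, z, v)
    return z

def solve(x0_minus, jumps, T=1.0, n=4000):
    """jumps: sorted (tau_i, a_i, v_i); tau_0 = 0 reproduces Eq. (4.103)."""
    pending, x, t, h = list(jumps), x0_minus, 0.0, T / n
    while pending and pending[0][0] <= t:
        tau, a, v = pending.pop(0)     # jump at t = 0
        x = jump(x, a, tau, v)
    for _ in range(n):
        x = x + F(t, x) * h            # continuous evolution, Eq. (4.95)
        t += h
        while pending and pending[0][0] <= t:
            tau, a, v = pending.pop(0) # jump at lambda(t_i), Eq. (4.97)
            x = jump(x, a, tau, v)
    return x

xT = solve(1.0, [(0.0, 0.8, 0.1), (0.5, -0.6, 0.2)])
# The computed trajectory is bounded and piecewise continuous, matching the
# qualitative conclusion of Theorem 4.9.1.
```

Each call to `jump` is exactly the fixed point problem (4.104), and the contraction factor |a|L bounds its convergence rate, mirroring the role of L ∈ (0, 1) in the proof.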


4.10 Existence of Optimal Controls

In this section we prove the existence of an optimal control policy. For this we need the following result.

Theorem 4.10.1 Consider the system (4.95)–(4.97), and suppose the assumptions of Theorem 4.9.1 hold, with the admissible controls D_ad furnished with the product topology induced by the product of the weak topology on G and the norm topology on A × U. Then the control-to-solution map, ν → x, is continuous with respect to the product topology on D_ad and the norm topology on B_∞(I, R^n).

Proof Let {ν^k} be a sequence of controls from the admissible set D_ad, and suppose ν^k = (g^k, a^k, v^k) → ν^o = (g^o, a^o, v^o) ∈ D_ad with respect to the topologies indicated in the statement of the theorem. Since g^k → g^o weakly, it is clear that λ^k(t) ≡ ∫_0^t g^k(s)ds → ∫_0^t g^o(s)ds = λ^o(t) for each t ∈ I. Consider the sequence of time instants 0 = t_0 < t_1 < t_2 < · · · < t_κ = T and the corresponding images under λ^k and λ^o given by

0 = λ^k(t_0) ≤ λ^k(t_1) ≤ λ^k(t_2) ≤ · · · ≤ λ^k(t_κ) = λ^k(T) = T,
0 = λ^o(t_0) ≤ λ^o(t_1) ≤ λ^o(t_2) ≤ · · · ≤ λ^o(t_κ) = λ^o(T) = T,

respectively. Since a jump is allowed at time t_0 = 0, and it is governed by Eq. (4.97), we must consider the fixed point problems

x^k(0) = x(0−) + a_0^k G(0, x^k(0), v_0^k),    (4.107)
x^o(0) = x(0−) + a_0^o G(0, x^o(0), v_0^o).    (4.108)

Since G is a contraction in the state variable and a ∈ A, by virtue of the Banach fixed point theorem these equations have unique solutions {x^k(0), x^o(0)} ⊂ R^n, which we denote by the same symbols as they appear in the equations. Subtracting Eq. (4.108) from Eq. (4.107) term by term and rearranging, we obtain

x^k(0) − x^o(0) = (a_0^k − a_0^o)G(0, x^k(0), v_0^k) + a_0^o(G(0, x^k(0), v_0^k) − G(0, x^o(0), v_0^k)) + a_0^o(G(0, x^o(0), v_0^k) − G(0, x^o(0), v_0^o)).    (4.109)


Computing the norm on either side of the above equation, and using the Lipschitz property of G in its second argument and the triangle inequality, we arrive at the inequality

‖x^k(0) − x^o(0)‖ ≤ (1/(1 − |a_0^o|L)) [ |a_0^k − a_0^o| ‖G(0, x^k(0), v_0^k)‖ + |a_0^o| ‖G(0, x^o(0), v_0^k) − G(0, x^o(0), v_0^o)‖ ],    (4.110)

where, by virtue of our assumptions on G and the controls, (1 − |a_0^o|L) > 0. Since the norm ‖G(0, x^k(0), v_0^k)‖ is bounded independently of k ∈ N, a_0^k → a_0^o, and G is continuous in its third argument, letting k → ∞ it follows from the above inequality that x^k(0) → x^o(0). Using {x^k(0), x^o(0)} as the initial states for the continuous evolution determined by Eq. (4.95), we have

ẋ(t) = F(t, x(t)), x(0) = x^k(0), 0 < t < λ^k(t_1),    (4.111)
ẋ(t) = F(t, x(t)), x(0) = x^o(0), 0 < t < λ^o(t_1).    (4.112)

By assumption, F is Lipschitz, with Lipschitz constant K, and of at most linear growth in the state variable. Hence, these equations have unique continuous solutions, denoted {x^k, x^o}, on the respective intervals, satisfying the integral equations

x^k(t) = x^k(0) + ∫_0^t F(s, x^k(s))ds, t ∈ (0, λ^k(t_1)),    (4.113)
x^o(t) = x^o(0) + ∫_0^t F(s, x^o(s))ds, t ∈ (0, λ^o(t_1)).    (4.114)

Letting λ^k(t_1) ∧ λ^o(t_1) denote the minimum of the two, subtracting Eq. (4.114) from Eq. (4.113), computing the norm of the difference, and using the triangle inequality, we arrive at the inequality

‖x^k(t) − x^o(t)‖ ≤ ‖x^k(0) − x^o(0)‖ + K ∫_0^t ‖x^k(s) − x^o(s)‖ ds, for all 0 < t < λ^k(t_1) ∧ λ^o(t_1).    (4.115)

It follows from the Grönwall inequality applied to the above expression that

‖x^k(t) − x^o(t)‖ ≤ ‖x^k(0) − x^o(0)‖ exp{K(λ^k(t_1) ∧ λ^o(t_1))}    (4.116)


for all t ∈ (0, λ^k(t_1) ∧ λ^o(t_1)). Since λ^k(t_1) → λ^o(t_1) and x^k(0) → x^o(0) (in the norm topology), it follows from the above expression that

lim_{k→∞} sup{‖x^k(t) − x^o(t)‖ : t ∈ (0, λ^k(t_1) ∧ λ^o(t_1))} = 0.    (4.117)

Thus, as ν^k → ν^o, x^k → x^o in B_∞([0, λ^o(t_1)], R^n). Since {x^k, x^o} satisfy the integral equations (4.113) and (4.114), the left limits

lim_{t ↑ λ^k(t_1)} x^k(t) = x^k(λ^k(t_1)−),  lim_{t ↑ λ^o(t_1)} x^o(t) = x^o(λ^o(t_1)−)

are well defined, and

lim_{k→∞} ‖x^k(λ^k(t_1)−) − x^o(λ^o(t_1)−)‖ = 0.    (4.118)

To continue, we must now consider the evolution determined by the jump equations:

x^k(λ^k(t_1)) = x^k(λ^k(t_1)−) + a_1^k G(λ^k(t_1), x^k(λ^k(t_1)), v_1^k),    (4.119)
x^o(λ^o(t_1)) = x^o(λ^o(t_1)−) + a_1^o G(λ^o(t_1), x^o(λ^o(t_1)), v_1^o).    (4.120)

By virtue of our assumptions, G is uniformly Lipschitz in the state variable with Lipschitz constant L < 1, and a_i^k ∈ [−1, +1]. Thus, it follows from the Banach fixed point theorem that both equations have unique solutions, denoted {x^k(λ^k(t_1)), x^o(λ^o(t_1))}. Subtracting Eq. (4.120) from Eq. (4.119) term by term, rearranging terms suitably, computing the norm, and using the triangle inequality, we obtain the inequality

‖x^k(λ^k(t_1)) − x^o(λ^o(t_1))‖ ≤ (1/(1 − |a_1^o|L)) [ ‖x^k(λ^k(t_1)−) − x^o(λ^o(t_1)−)‖
  + |a_1^k − a_1^o| ‖G(λ^k(t_1), x^k(λ^k(t_1)), v_1^k)‖
  + |a_1^o| ‖G(λ^k(t_1), x^o(λ^o(t_1)), v_1^k) − G(λ^o(t_1), x^o(λ^o(t_1)), v_1^o)‖ ].    (4.121)

It follows from (4.118) that the first term on the right-hand side of the above inequality converges to zero as k → ∞. Since the set of admissible controls D_ad is bounded, and G has at most linear growth in the state variable uniformly with respect to its other arguments, the solution set {x^k, x^o} is bounded, and hence there exists a finite positive number b such that ‖G(λ^k(t_1), x^k(λ^k(t_1)), v_1^k)‖ ≤ b < ∞ for all k ∈ N. Thus, since a_1^k → a_1^o as k → ∞, the second term on the right-hand side converges to zero. Since λ^k(t_1) → λ^o(t_1), v_1^k → v_1^o, and G


is continuous in all its arguments, the third term also converges to zero as k → ∞. This proves that

lim_{k→∞} ‖x^k(λ^k(t_1)) − x^o(λ^o(t_1))‖ = 0.    (4.122)

We consider one more step, involving the continuous evolution determined by Eq. (4.95):

ẋ = F(t, x), t ∈ (λ^k(t_1), λ^k(t_2)), x(λ^k(t_1)) = x^k(λ^k(t_1)),    (4.123)
ẋ = F(t, x), t ∈ (λ^o(t_1), λ^o(t_2)), x(λ^o(t_1)) = x^o(λ^o(t_1)).    (4.124)

Letting x^k and x^o denote the solutions of Eqs. (4.123) and (4.124), respectively, using the corresponding integral equations, subtracting one from the other, computing the norm of the difference, and using the Grönwall inequality, one can verify that, for all k sufficiently large,

‖x^k(t) − x^o(t)‖ ≤ ‖x^k(λ^k(t_1)) − x^o(λ^o(t_1))‖ exp{K|λ^k(t_2) ∧ λ^o(t_2) − λ^k(t_1) ∨ λ^o(t_1)|}    (4.125)

for all t ∈ (λ^k(t_1), λ^k(t_2)) ∩ (λ^o(t_1), λ^o(t_2)). Letting k → ∞, it follows from (4.122) and the above inequality, together with the fact that λ^k(t_i) → λ^o(t_i), that x^k(t) → x^o(t) for all t ∈ (λ^o(t_1), λ^o(t_2)). Continuing this process step by step, we arrive at the last step. Completing the same procedure for the last intervals,

(λ^k(t_{κ−1}), λ^k(t_κ)) = (λ^k(t_{κ−1}), T),  (λ^o(t_{κ−1}), λ^o(t_κ)) = (λ^o(t_{κ−1}), T),

and the terminal jump equations

x^k(T) = x^k(T−) + a_κ^k G(T, x^k(T), v_κ^k),  x^o(T) = x^o(T−) + a_κ^o G(T, x^o(T), v_κ^o),

we arrive at the conclusion that, as ν^k → ν^o in the product topology τ_p, the solution x^k → x^o in B_∞(I, R^n) in its norm topology, proving that the control-to-solution map ν → x(ν) is continuous with respect to the topologies stated in the theorem. This completes the proof. □

Using the above results we can prove the existence of optimal controls. This is presented in the following theorem.


Theorem 4.10.2 Consider the system (4.95)–(4.97) with the admissible controls D_ad and the cost functional (4.102), and suppose the assumptions of Theorem 4.10.1 hold. Further, suppose the functions {ℓ, Φ, Φ_0} satisfy the following properties:

(A1) The cost integrand ℓ is a real-valued Borel measurable function on I × R^n, lower semicontinuous in the second argument, satisfying

|ℓ(t, x)| ≤ α_0(t) + α_1(1 + ‖x‖^p), for a finite p ≥ 1, α_0 ∈ L_1^+(I), α_1 ∈ (0, ∞).

(A2) Both the initial and terminal cost functionals {Φ_0, Φ} are lower semicontinuous on R^n, satisfying |ϕ(x)| ≤ α_2(1 + ‖x‖^p) for the same p and some α_2 ∈ (0, ∞), for ϕ ∈ {Φ_0, Φ}.

Then there exists a control ν^o ∈ D_ad at which J attains its minimum.

Proof Since D_ad is compact in the product topology τ_p, it suffices to verify that the cost functional is lower semicontinuous in this topology. For convenience of presentation, we write the cost functional as J(ν) = J_1(ν) + J_2(ν) + J_3(ν), in the same order as it appears in the expression (4.102). Let {ν^k} ⊂ D_ad, and suppose ν^k → ν^o in the topology τ_p. Let {x^k, x^o} ⊂ B_∞(I, R^n) denote the corresponding solutions of Eqs. (4.95)–(4.97). Then it follows from Theorem 4.10.1 that x^k → x^o in B_∞(I, R^n) and, for each t ∈ I, x^k(t) → x^o(t) in R^n. It follows from the lower semicontinuity of ℓ on R^n that ℓ(t, x^o(t)) ≤ lim inf_{k→∞} ℓ(t, x^k(t)) for almost all t ∈ I, and hence it follows from the generalized Fatou's Lemma that

∫_I ℓ(t, x^o(t))dt ≤ ∫_I lim inf_{k→∞} ℓ(t, x^k(t))dt ≤ lim inf_{k→∞} ∫_I ℓ(t, x^k(t))dt.

This proves that J_1 is lower semicontinuous. Since both Φ and Φ_0 are lower semicontinuous on R^n, we have

J_2(ν^o) = Φ(x^o(T)) ≤ lim inf_{k→∞} Φ(x^k(T)) = lim inf_{k→∞} J_2(ν^k),
J_3(ν^o) = Φ_0(x^o(0)) ≤ lim inf_{k→∞} Φ_0(x^k(0)) = lim inf_{k→∞} J_3(ν^k),

proving the lower semicontinuity of J_2 and J_3. Thus, it follows from the above results that J = J_1 + J_2 + J_3 is lower semicontinuous. Since the set D_ad is compact in the τ_p topology and J is lower semicontinuous in the same topology, J attains its minimum on D_ad. This proves the existence of an optimal control. □
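Since D_ad is compact and J is lower semicontinuous, the minimum can be approximated by direct search over a fine discretization of the admissible set. A hedged one-dimensional sketch in which only the intensity a ∈ A = [−1, 1] varies (the scalar model and cost below are illustrative assumptions):

```python
# Direct search for the minimizing intensity a in A = [-1, 1], with g and v
# held fixed.  Illustrative scalar model: an initial jump of size a*0.5,
# then linear decay by a factor 0.8, with terminal cost Phi(x) = (x - 0.5)^2.
def J(a):
    xT = (1.0 + a * 0.5) * 0.8         # terminal state under intensity a
    return (xT - 0.5) ** 2             # Phi(x(T))

grid = [-1.0 + 0.001 * k for k in range(2001)]   # discretization of A
a_best = min(grid, key=J)
# Solving (1 + 0.5a)*0.8 = 0.5 gives the minimizer a = -0.75.
assert abs(a_best + 0.75) < 1e-3
```

Compactness of A guarantees the grid minima converge to the true minimum as the mesh is refined; lower semicontinuity of J guarantees the limit value is attained.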


4.11 Bibliographical Notes

The calculus of variations is one of the oldest branches of mathematics. Many of the problems in this subject are named after the pioneers: Lagrange, Mayer, Bolza, Weierstrass, Tonelli. For an excellent historical account, the reader is referred to the extensive references given in Cesari's book [50]. For recent developments see Cesari [50], Clark [51], and Zeidler [110]. For theory and applications see Ahmed [2, 14] and Biswas [43]. The first author learnt from personal contact with Cesari that Tonelli was his teacher and was the first to give a rigorous proof of existence of solutions for the Euler–Lagrange variational problem in the twentieth century (around 1930). Cesari himself made outstanding contributions in this area since that time. The book of Cesari [50] and the references therein provide a monument of information. Here we have presented many results on the question of existence of optimal controls for dynamic systems driven by vector measures not found in the general literature [68, 80, 83, 105]. For example, the results of Sects. 4.9 and 4.10 are taken from our recent paper [29].

Chapter 5

Optimal Control: Necessary Conditions of Optimality

5.1 Introduction

In industrial engineering and operations research, a significant volume of research has gone into finding the minimum (or maximum) of a real-valued function defined on some set X ⊂ R^n. This is commonly referred to as static optimization. Static optimization problems are generally classified as unconstrained and constrained ones. The constrained problems are more challenging due to the presence of a large number of nonlinear equality and inequality constraints. Mathematical methods frequently used in static optimization include linear and nonlinear programming [41, 82] and convex optimization [38, 42, 45, 48, 86], among many others. There are many prominent computational techniques developed for static optimization, such as the simplex method, the interior-point method, the primal–dual method, the branch and bound method, etc.

Unlike static optimization, control of dynamic systems over a given period of time, coupled with optimization problems involving resource and other side constraints, is required in many socioeconomic, management, and engineering problems, where it is very important to use limited resources in a judicious way while seeking the desired goal. For example, in the manufacturing industry certain goods are produced using available manpower and material resources, including some form of energy. The objective is to use the resources in a way that minimizes the cost of production while maintaining the level of production and the quality of the products. Similarly, in the public sector the government wishes to attain certain social goals, such as eradicating poverty, improving health benefits, reducing greenhouse gas emissions, and increasing support for research and development of renewable energy, in a given period of time under budgetary constraints. The problem is to develop a strategy in order to achieve the goal at minimum cost within the given period. It is clear that here the question of controllability arises naturally.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_5

The set goal may not be reachable within the given time period under the given resource constraints. In that case one must redefine the goal or use the limited


amount of resources in a way that minimizes the discrepancy between the set goal and the one that is attainable. The general control problem can be stated as follows: the system is governed by a controlled differential equation in R^n, starting from a given manifold M_0 ⊂ R^n and reaching out to a target manifold M_1 ⊂ R^n, with controls from a given admissible class U_ad. The problem is to find a control policy from the admissible class that minimizes (or maximizes) an objective functional J(u), determined by the state trajectory x(u) and the control u. Mathematically, it can be stated as follows:

ẋ = f(t, x, u), t ∈ I ≡ [0, T], x(0) ∈ M_0, x(T) ∈ M_1,
J(u) −→ inf, u ∈ U_ad.

In the preceding chapter we dealt with the important question of existence of optimal control policies. In this chapter we develop necessary conditions of optimality whereby optimal policies can be determined for these problems or more general problems of this nature. We consider three broad classes of control policies. The first class consists of relaxed controls, that is, Borel measurable functions defined on the time interval I and taking values in the space of probability measures supported on a compact metric space. The second class consists of vector-valued measurable functions taking values from a compact convex subset of a finite dimensional Euclidean space. These are called regular controls, and they are contained in the class of relaxed controls. The third class of controls consists of signed or vector measures, containing the class of impulsive controls as a special case. We develop necessary conditions of optimality for all three classes.

5.2 Relaxed Controls

In this section we present necessary conditions of optimality under much weaker assumptions. For example, differentiability of the vector field with respect to the control variable is not invoked, as it is in control problems where regular controls, consisting of measurable functions with values in a convex subset of the control space, are used. The convexity assumption on the control domain U is not required either. For regular controls the convexity assumption is absolutely necessary both for existence of optimal controls and for necessary conditions of optimality. In contrast, these restrictive conditions are not needed in the case of relaxed controls. These generalizations are achieved by replacing regular controls by probability measure-valued controls called relaxed controls. Relaxed controls are weak-star measurable functions defined on the time interval I and taking values in the space of Borel probability measures on U. Later we show that all the results based on regular controls follow as special cases from those based on relaxed controls.


Let U be a compact subset of R^m and C(U) the linear space of real-valued continuous functions on U endowed with the sup-norm topology,

‖ϕ‖ = sup{|ϕ(ξ)| : ξ ∈ U}, ϕ ∈ C(U).

Furnished with this norm topology, C(U) is a Banach space. Let M(U) denote the space of countably additive bounded signed measures on the Borel sigma field B_U of U having bounded total variation,

‖μ‖ ≡ |μ|_v = sup_Π Σ_{σ ∈ Π} |μ(σ)|,

where Π is any partition of U into a finite number of disjoint members of B_U and the sup is taken over all such finite partitions. With respect to this norm topology, M(U) is also a Banach space. It is known that the topological dual of C(U) is given by M(U); that is, for any continuous linear functional L on C(U), there exists a μ ∈ M(U) such that

L(ϕ) = ∫_U ϕ(ζ) μ(dζ).

We are interested in the subset M_1(U) ⊂ M(U) of probability measures on U; that is, for μ ∈ M_1(U), μ(Γ) ≥ 0 for all Γ ∈ B_U and μ(U) = 1. To proceed further, we need the vector space L_1(I, C(U)) consisting of Borel measurable functions on I with values in the Banach space C(U) that are integrable in the sense of Bochner. This is given the standard norm topology,

‖ϕ‖ ≡ ∫_I ‖ϕ(t)‖_{C(U)} dt = ∫_I sup{|ϕ(t, ζ)| : ζ ∈ U} dt,

where we have used ϕ(t) to denote the vector valued function ϕ(t)(·), that is, ϕ(t)(ξ) = ϕ(t, ξ), ξ ∈ U. It is known that the Banach space C(U) does not satisfy the Radon–Nikodym property (RNP) [57]. Thus, the topological dual of L_1(I, C(U)) is not given by L_∞(I, M(U)). However, it follows from the theory of lifting [99] that the topological (continuous) dual of L_1(I, C(U)) is given by L_∞^w(I, M(U)), which consists of weak-star measurable functions on I with values in M(U). This means that for any ϕ ∈ C(U) and u ∈ L_∞^w(I, M(U)),

t −→ ∫_U ϕ(ξ) u_t(dξ) = ⟨ϕ, u_t⟩_{C(U), M(U)}


is an essentially bounded measurable function. We are interested in the subset L_∞^w(I, M_1(U)) ⊂ L_∞^w(I, M(U)). These are probability measure-valued functions, and we choose this set as the class of admissible controls, denoted by

U_ad^rel ≡ L_∞^w(I, M_1(U)).

These are known as relaxed controls. It is easily seen that U_ad^rel lies on the unit sphere of L_∞^w(I, M(U)); for μ ∈ U_ad^rel we have ‖μ‖ = 1. Now we are prepared to consider the control system

ẋ = f̂(t, x(t), u_t) ≡ ∫_U f(t, x(t), v) u_t(dv), x(0) = x_0,  (5.1)

with control u ∈ U_ad^rel. Under standard assumptions, one can prove existence and uniqueness of solutions. We present here one such result.

Theorem 5.2.1 Consider the system (5.1) and suppose f is Borel measurable in all the variables and there exists a K ∈ L_1^+(I) such that

‖f(t, ξ, v)‖ ≤ K(t)(1 + ‖ξ‖) ∀ ξ ∈ R^n, v ∈ U,  (5.2)

and, for each finite r > 0, there exists a K_r ∈ L_1^+(I) such that

‖f(t, ξ, v) − f(t, η, v)‖ ≤ K_r(t) ‖ξ − η‖ ∀ ξ, η ∈ B_r, ∀ v ∈ U,  (5.3)

where B_r is the closed ball in R^n of radius r centered at the origin. Then, for every x_0 ∈ R^n and u ∈ U_ad^rel, the system has a unique solution x ∈ C(I, R^n).

Proof First we prove an a priori bound. Let u ∈ U_ad^rel be given and suppose x ∈ C(I, R^n) is a solution of the system (5.1). Then it follows from the growth property of f that

‖x(t)‖ ≤ ‖x_0‖ + ∫_0^t ‖∫_U f(s, x(s), v) u_s(dv)‖ ds
       ≤ ‖x_0‖ + ∫_0^t K(s)(1 + ‖x(s)‖) ds, ∀ t ∈ I.  (5.4)

The last inequality follows from the fact that u is a probability measure-valued function and hence u_s(U) = 1 for all s ∈ I. By virtue of the Grönwall inequality it follows from this that there exists a finite positive number b > 0 such that

‖x‖_{C(I,R^n)} ≡ sup{‖x(t)‖ : t ∈ I} ≤ (‖x_0‖ + ∫_I K(s) ds) exp(∫_I K(s) ds) ≤ b < ∞.


This shows that if the initial value problem (5.1) has a solution, it is necessarily contained in a bounded subset of C(I, R^n). Thus, it suffices to prove that it has a unique solution in the space C(I, R^n). Define the operator G as follows:

(Gx)(t) ≡ x_0 + ∫_0^t ∫_U f(s, x(s), ξ) u_s(dξ) ds, t ∈ I.

Since f is integrable, it is clear that G maps C(I, R^n) into itself. We show that G has a unique fixed point in C(I, R^n), that is, an element x^o ∈ C(I, R^n) such that x^o = Gx^o. Here we use the Banach fixed point theorem. Let x, y ∈ C(I, R^n) with x(0) = y(0) = x_0. Since x, y ∈ C(I, R^n) and I is a compact interval, there exists a finite positive number r such that x(t), y(t) ∈ B_r for all t ∈ I. Define

d_t(x, y) ≡ sup{‖x(s) − y(s)‖ : 0 ≤ s ≤ t},  (5.5)
α(t) ≡ ∫_0^t K_r(s) ds, t ∈ I,  (5.6)

where K_r ∈ L_1^+(I) is the local Lipschitz coefficient of f as stated in the theorem. Using this notation and the definition of the operator G, one can easily verify that

d_t(Gx, Gy) ≤ ∫_0^t K_r(s) d_s(x, y) ds = ∫_0^t d_s(x, y) dα(s).  (5.7)

Considering the second iterate of the operator G, one can easily verify that

d_t(G²x, G²y) ≤ ∫_0^t d_s(x, y) α(s) dα(s) ≤ (α²(t)/2!) d_t(x, y), t ∈ I.

By repeating this process k times we find that

d_t(G^k x, G^k y) ≤ (α^k(t)/Γ(k + 1)) d_t(x, y) ∀ t ∈ I,  (5.8)

where Γ denotes the gamma function. Note that for t = T it follows from the definition of the metric d_t that d_T(x, y) ≡ ‖x − y‖_{C(I,R^n)}. Thus, it follows from the above inequality that

‖G^k x − G^k y‖_{C(I,R^n)} ≤ (α^k(T)/Γ(k + 1)) ‖x − y‖_{C(I,R^n)} ≤ ρ_k ‖x − y‖_{C(I,R^n)},  (5.9)

where ρ_k ≡ α^k(T)/Γ(k + 1). Since α(T) is finite, it is clear that, for k sufficiently large, ρ_k < 1. In other words, for k sufficiently large the operator G^k is a contraction on the Banach space C(I, R^n). Thus, by the Banach fixed point theorem (see Chap. 1, Theorem 1.10.2), it has a unique fixed point


x^o ∈ C(I, R^n). But then G has one and the same fixed point. This completes the proof. □

Remark 5.2.2 According to the above theorem, x^o ∈ C(I, R^n) is the unique solution of Eq. (5.1) corresponding to the control u ∈ U_ad^rel. Since every solution must satisfy the a priori bound, it is clear that ‖x^o‖_{C(I,R^n)} ≤ b. In fact, for the given set of admissible controls U_ad^rel, the solution set

X ≡ {x ∈ C(I, R^n) : x = x(u) for some u ∈ U_ad^rel}

is contained in a bounded subset of C(I, R^n) with bound not exceeding b.

Now we are prepared to present necessary conditions of optimality using relaxed controls. The cost functional is given by

J(u) ≡ ∫_0^T ℓ̂(t, x(t), u_t) dt + Φ(x(T)) = ∫_0^T ∫_U ℓ(t, x(t), ξ) u_t(dξ) dt + Φ(x(T)),  (5.10)

where x is the solution of Eq. (5.1) corresponding to the control u ∈ U_ad^rel; here ℓ denotes the running cost and Φ the terminal cost. For this relaxed problem we can introduce the Hamiltonian H : I × R^n × R^n × M_1(U) −→ R as follows:

H(t, x, y, μ) ≡ ∫_U { ℓ(t, x, ξ) + ⟨f(t, x, ξ), y⟩_{R^n} } μ(dξ).  (5.11)
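For a measure μ supported on finitely many points, the integral in (5.11) reduces to a weighted sum, which makes the Hamiltonian easy to evaluate numerically. The following sketch uses hypothetical scalar data f(t, x, v) = v and ℓ(t, x, v) = x² + v² (not taken from the text) purely for illustration:

```python
def hamiltonian(ell, f, t, x, y, atoms, weights):
    # H(t, x, y, mu) = sum_i w_i * [ ell(t, x, e_i) + f(t, x, e_i) * y ]
    # for the discrete measure mu = sum_i w_i * delta_{e_i} (scalar state).
    return sum(w * (ell(t, x, e) + f(t, x, e) * y) for w, e in zip(weights, atoms))

# Hypothetical data on U = {-1, +1}: f(t, x, v) = v, ell(t, x, v) = x**2 + v**2.
f = lambda t, x, v: v
ell = lambda t, x, v: x**2 + v**2

# For mu = (1/2) delta_{-1} + (1/2) delta_{+1} the <f, y> contributions cancel.
H = hamiltonian(ell, f, t=0.0, x=1.0, y=3.0, atoms=[-1.0, 1.0], weights=[0.5, 0.5])
print(H)  # 2.0
```

For a vector state one would replace the product f(t, x, e) * y by an inner product; the structure of the computation is unchanged.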

Appropriate assumptions on the data {f, ℓ, Φ} are stated in the theorem presented below. In the case of non-convex control problems, for example non-convex U or, more generally, a non-convex contingent set, the Pontryagin Minimum Principle does not hold. Here we present a general minimum principle (necessary conditions of optimality) that holds for convex as well as non-convex problems. This is a far reaching generalization covering a wider class of problems where the Pontryagin Minimum Principle does not hold. It is also interesting to mention that the proof we present is remarkably simple. Later in the sequel, adding the convexity assumptions, we derive the well-known Pontryagin Minimum Principle from the general necessary conditions.

Theorem 5.2.3 Consider the system

ẋ = ∫_U f(t, x, v) u_t(dv) ≡ f̂(t, x, u_t), t ∈ I, x(0) = x_0,  (5.12)

with the cost functional given by (5.10) and the admissible controls U_ad^rel. Suppose both f and ℓ are Borel measurable in all the variables, continuous in the last two arguments, and once continuously Gâteaux differentiable with respect to the state variable on R^n. Further, along any solution trajectory, the Gâteaux differentials


ℓ̂_x ∈ L_1(I, R^n) and Φ_x ∈ R^n. Then, for the pair {u^o, x^o} ∈ U_ad^rel × C(I, R^n) to be optimal, it is necessary that there exists a ψ ∈ C(I, R^n) (called the adjoint state) satisfying the following (necessary) conditions:

∫_I H(t, x^o(t), ψ(t), u^o_t) dt ≤ ∫_I H(t, x^o(t), ψ(t), u_t) dt, ∀ u ∈ U_ad^rel,  (5.13)

ẋ^o(t) = H_ψ = f̂(t, x^o(t), u^o_t), x^o(0) = x_0, t ∈ I,  (5.14)

ψ̇(t) = −H_x = −f̂_x^*(t, x^o(t), u^o_t) ψ(t) − ℓ̂_x(t, x^o(t), u^o_t), t ∈ I,
ψ(T) = Φ_x(x^o(T)).  (5.15)

Proof Let {u^o, x^o} be an optimal pair. Take any other admissible control u ∈ U_ad^rel and define

u^ε ≡ u^o + ε(u − u^o), for ε ∈ [0, 1].

It is clear that the set of relaxed controls U_ad^rel is a closed convex subset of L_∞^w(I, M(U)), even though U is only compact and not necessarily convex. Thus, u^ε ∈ U_ad^rel. Let x^ε denote the solution of Eq. (5.12) corresponding to the control u^ε. It is not difficult to verify that the limit

y ≡ lim_{ε→0} (x^ε − x^o)/ε

exists and is given by the solution of the variational equation

ẏ = f̂_x(t, x^o(t), u^o_t) y + ∫_U f(t, x^o(t), z)(u_t(dz) − u^o_t(dz))
  = f̂_x(t, x^o(t), u^o_t) y + f̂(t, x^o(t), u_t − u^o_t), y(0) = 0, t ∈ I.  (5.16)

Since f is continuously Gâteaux differentiable in the state variable and the Gâteaux differential is integrable, it is clear that Eq. (5.16) is linear and that it has a unique solution y ∈ C(I, R^n). Given that u^o is optimal, it is evident that

J(u^o) ≤ J(u^o + ε(u − u^o)), ∀ u ∈ U_ad^rel and ε ∈ [0, 1].

Since both ℓ and Φ are Gâteaux differentiable in the state variable x, the cost functional J is Gâteaux differentiable in u. The Gâteaux derivative of J at u^o in the direction (u − u^o) is given by

dJ(u^o; u − u^o) = ∫_0^T ℓ̂(t, x^o(t), u_t − u^o_t) dt + L(y) ≥ 0, ∀ u ∈ U_ad^rel,  (5.17)


where L(y) is given by

L(y) ≡ ∫_0^T ⟨ℓ̂_x(t, x^o(t), u^o_t), y(t)⟩ dt + ⟨Φ_x(x^o(T)), y(T)⟩.  (5.18)

By our assumption the Gâteaux differential ℓ̂_x ∈ L_1(I, R^n) along the path {x^o, u^o}, and Φ_x(x^o(T)) ∈ R^n, and, as noted above, y ∈ C(I, R^n). Thus, the functional y −→ L(y) is a continuous linear functional on C(I, R^n). Further, it follows from the variational equation (5.16) that the map f̂(·, x^o(·), u_· − u^o_·) −→ y is a continuous linear map from L_1(I, R^n) to C(I, R^n). Hence, the composition map f̂(·, x^o(·), u_· − u^o_·) −→ y −→ L(y) is a continuous linear functional on L_1(I, R^n). Thus, by the Riesz representation theorem given by Proposition 1.8.12 (duality), there exists a ψ ∈ L_∞(I, R^n) so that L(y) has the equivalent representation

L(y) = ∫_0^T ⟨f̂(t, x^o(t), u_t − u^o_t), ψ(t)⟩ dt.  (5.19)

Using this expression in (5.17) we obtain

dJ(u^o; u − u^o) = ∫_0^T { ℓ̂(t, x^o(t), u_t − u^o_t) + ⟨f̂(t, x^o(t), u_t − u^o_t), ψ(t)⟩ } dt ≥ 0, for all u ∈ U_ad^rel.  (5.20)

In terms of the Hamiltonian, as defined by the expression (5.11), this is equivalent to the inequality

∫_I H(t, x^o(t), ψ(t), u^o_t) dt ≤ ∫_I H(t, x^o(t), ψ(t), u_t) dt, ∀ u ∈ U_ad^rel,

which is the necessary condition given by (5.13). It remains to show that the multiplier ψ satisfies the adjoint equation (5.15). Using the variational equation (5.16) in (5.19) and integrating by parts, we obtain

L(y) = ⟨y(T), ψ(T)⟩ − ∫_0^T ⟨ψ̇(t) + f̂_x^*(t, x^o, u^o_t) ψ(t), y(t)⟩ dt.  (5.21)


Comparing Eqs. (5.18) and (5.21) we find that ψ must satisfy the following Cauchy problem:

−ψ̇(t) = f̂_x^*(t, x^o(t), u^o_t) ψ(t) + ℓ̂_x(t, x^o(t), u^o_t), ψ(T) = Φ_x(x^o(T)), t ∈ I,  (5.22)

backward in time. This is precisely the adjoint equation given by (5.15). Equation (5.14) is the given system driven by the optimal control, and hence there is nothing to prove. Thus, we have all the necessary conditions as stated in the theorem. This completes the proof. □

From the above theorem we can derive point-wise necessary conditions of optimality as stated below.

Corollary 5.2.4 Under the assumptions of Theorem 5.2.3, the necessary condition (5.13) is equivalent to the point-wise necessary condition given by

H(t, x^o(t), ψ(t), u^o_t) ≤ H(t, x^o(t), ψ(t), ν) ∀ ν ∈ M_1(U) for a.a. t ∈ I,  (5.23)

with the other necessary conditions (5.14) and (5.15) remaining unchanged.

Proof The proof is based on a Lebesgue density argument. Consider the open subinterval (0, T) of the interval I, let τ ∈ (0, T) be any Lebesgue density point, and define σ_ε(τ) ≡ (τ − ε/2, τ + ε/2) ⊂ I for some ε > 0. Now taking

u_t = u^o_t for t ∈ I \ σ_ε(τ), and u_t = ν for t ∈ σ_ε(τ),

in the expression (5.13) and dividing it by the Lebesgue measure λ(σ_ε(τ)) = ε, we obtain the following expression:

(1/λ(σ_ε(τ))) ∫_{σ_ε(τ)} H(t, x^o(t), ψ(t), u^o_t) dt ≤ (1/λ(σ_ε(τ))) ∫_{σ_ε(τ)} H(t, x^o(t), ψ(t), ν) dt.  (5.24)

Letting ε ↓ 0, it follows from the above inequality that

H(τ, x^o(τ), ψ(τ), u^o_τ) ≤ H(τ, x^o(τ), ψ(τ), ν) ∀ ν ∈ M_1(U).  (5.25)

Since, for measurable functions, almost all points t ∈ I are Lebesgue density points, we conclude that the above inequality holds for almost all τ ∈ I. This completes the proof. □


From this point-wise necessary condition we observe that the right-hand expression of the inequality (5.23) is given by

H(t, x^o(t), ψ(t), ν) = ∫_U H(t, x^o(t), ψ(t), ζ) ν(dζ).

Since U is assumed to be compact and H is continuous on U, H attains its minimum at some point, say w ∈ U, that depends on t. Thus, the measure ν that minimizes the right-hand expression of the inequality (5.25) is given by the Dirac measure ν = δ_{w_τ}(ζ). This may give the impression that the control can be chosen as an ordinary function w_τ taking values in U. But this is false, since point-wise selection may lead to a nonmeasurable function, and then all the integrals lose their meaning. However, if a measurable selection can be found, then the necessary conditions reduce to the classical Pontryagin Minimum Principle as presented in the next section.
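The observation that the minimizing measure is a Dirac measure can be seen concretely for a finite control set: the integral of H against any probability weights is bounded below by the smallest value of H on U, with equality at the Dirac measure sitting on a minimizing atom. A small sketch, with a hypothetical Hamiltonian section v ↦ H(v) (all names illustrative):

```python
def H_of_v(v):
    # Hypothetical Hamiltonian section v -> H(t, x(t), psi(t), v) at a fixed t.
    return (v - 0.3) ** 2

atoms = [-1.0, -0.5, 0.0, 0.5, 1.0]      # a finite, non-convex control set U
values = [H_of_v(e) for e in atoms]
best = atoms[values.index(min(values))]  # atom carrying the minimizing Dirac measure
print(best)  # 0.5

# Any probability weights give an integral >= the Dirac minimum:
uniform_avg = sum(values) / len(values)
assert uniform_avg >= min(values)
```

The measurability issue discussed above concerns whether t ↦ best can be chosen measurably; the point-wise minimization itself is elementary.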

5.2.1 Discrete Control Domain

For some control problems the control constraint set may be given by a set of discrete points in R^m, say U ≡ {e_i, i = 1, 2, · · · , s}, where s is any finite integer. Evidently this is a non-convex set. Define

Λ ≡ {α ∈ R^s : α_i ≥ 0, Σ_{i=1}^s α_i = 1}.

In this case the space M_1(U) is replaced by the set of discrete measures

M_d(U) ≡ {μ ∈ M(U) : μ(dξ) = Σ_{i=1}^s α_i δ_{e_i}(dξ), α ∈ Λ} ⊂ M_1(U),

where δ_e denotes the Dirac measure concentrated at the single point e ∈ U. In this case the set of admissible controls is given by

U_d ≡ {u : u_t(dξ) = Σ_{i=1}^s p_i(t) δ_{e_i}(dξ), p ∈ L_∞(I, Λ)} ⊂ U_ad^rel,


where, with a slight abuse of notation, L_∞(I, Λ) ⊂ L_∞(I, R^s) is the class of essentially bounded measurable functions defined on I and taking values from the set Λ (a simplex). So in this case the necessary conditions given by Corollary 5.2.4 reduce to the following.

Corollary 5.2.5 Let U_d denote the class of admissible controls. Then, under the assumptions of Theorem 5.2.3, the necessary condition (5.13) reduces to the following one:

Σ_{i=1}^s p^o_i(t) H(t, x^o(t), ψ(t), e_i) ≤ Σ_{i=1}^s p_i H(t, x^o(t), ψ(t), e_i) ∀ p ∈ Λ for a.a. t ∈ I,  (5.26)

while the other necessary conditions (5.14) and (5.15) remain unchanged.

Proof The proof follows from Corollary 5.2.4. In the case of discrete controls U_d, the optimal control has the form u^o_t(dξ) = Σ p^o_i(t) δ_{e_i}(dξ) with p^o ∈ L_∞(I, Λ). The rest follows from the inequality (5.23). This completes the proof. □

An Example Here we present a simple example illustrating the importance of relaxed controls. Consider the system

ẋ_1 = 1 − x_2^2, x_1(0) = 0,
ẋ_2 = u, x_2(0) = 0, t ∈ [0, 1],

with the control constraint set U ≡ {−1, +1}, which consists of only two points as indicated. Clearly, this is a non-convex set. The problem is to find a control that drives the system from the initial state (0, 0) to the desired final state (1, 0). This problem has no solution if the admissible class of controls is given by the set of regular controls, consisting of measurable functions with values in U, while it has a solution if the admissible class is given by the set of relaxed controls U_ad^rel ≡ L_∞^w([0, 1], M_1(U)). Indeed, since 0 ∉ U, to reach the target we must have a nontrivial control u ≢ 0 such that

x_1(1) = ∫_0^1 (1 − x_2^2(t)) dt = 1 and x_2(1) = ∫_0^1 u(t) dt = 0.  (5.27)

But this is impossible, because if u ≢ 0, then x_2 ≢ 0, and this means that x_1(1) < 1. On the other hand, a relaxed control exists. It is given by

u_t(dξ) = p_1(t) δ_{+1}(dξ) + p_2(t) δ_{−1}(dξ), 0 ≤ p_i(t) ≤ 1, p_1(t) + p_2(t) = 1,


where δ_a(dξ) denotes the Dirac measure concentrated at a. If we take p_1(t) = p_2(t) = 1/2, the reader can easily verify that

x_2(t) = ∫_0^t (p_1(s) − p_2(s)) ds ≡ 0 ∀ t ∈ [0, 1].

Thus, x_1(1) = 1, while x_2(1) = 0. This shows that using relaxed controls one can achieve goals not attainable by ordinary controls (measurable functions). The reader can construct many such examples where optimal controls from the class of measurable functions do not exist if the control set is non-convex. See [40, 70] for many more interesting examples.

So we observe that what cannot be achieved by use of regular controls can be achieved by relaxed controls, as the above example illustrates. However, one may raise the question of physical realizability of relaxed controls. In Corollary 5.2.5 we have used Dirac measures. If U is compact (not necessarily convex), M_1(U) ⊂ M(U) is compact with respect to the weak-star topology. In fact, it is weak-star compact and convex. Thus, by the Krein–Milman theorem (see Chap. 1, Theorem 1.11.9), the closed convex hull of its extreme points coincides with M_1(U). The extreme points are the Dirac measures D ≡ {δ_{e_i} : {e_i} dense in U}, and hence cl^{w*} co D = M_1(U). This immediately tells us that the set L_∞^w(I, M_1(U)) is given by

L_∞^w(I, M_1(U)) = cl^{w*} { Σ_{i≥1} α_i(t) δ_{e_i} : α_i(t) ≥ 0, Σ_{i≥1} α_i(t) = 1 }.

Thus, under the above conditions, relaxed controls can be constructed by proper convex combinations of Dirac measures.
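The realizability question can also be examined numerically: a regular control that switches rapidly between +1 and −1 with equal time shares approximates the relaxed control (1/2)(δ₊₁ + δ₋₁) of the example above, steering the state close to the target (1, 0). A sketch using a simple Euler scheme (the step counts are illustrative, not prescribed by the text):

```python
def simulate(num_switches, steps_per_switch=200):
    """Integrate x1' = 1 - x2**2, x2' = u on [0, 1] with u alternating
    between +1 and -1 on num_switches equal subintervals (Euler scheme)."""
    dt = 1.0 / (num_switches * steps_per_switch)
    x1 = x2 = 0.0
    for k in range(num_switches):
        u = 1.0 if k % 2 == 0 else -1.0
        for _ in range(steps_per_switch):
            x1 += dt * (1.0 - x2 ** 2)
            x2 += dt * u
    return x1, x2

x1, x2 = simulate(num_switches=200)
print(x1, x2)  # x1 close to 1, x2 close to 0
```

As the switching becomes faster, (x_1(1), x_2(1)) approaches (1, 0), mirroring the weak-star convergence of the switching controls to the relaxed control.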

5.3 Regular Controls

In this section, for admissible controls, we consider the class of measurable functions defined on I ≡ [0, T] and taking values in a closed, bounded, convex (hence compact) set U ⊂ R^m. We denote this class of functions by U_ad ⊂ L_∞(I, R^m). Here we present the celebrated minimum principle of Pontryagin and his colleagues [90]. We show that these results can be obtained from the general necessary conditions of optimality proved in the preceding section. Consider the system

ẋ(t) = f(t, x(t), u(t)), t ∈ I, x(0) = x_0,  (5.28)

driven by a regular control u ∈ U_ad, and let the cost functional be given by

J(u) = ∫_0^T ℓ(t, x(t), u(t)) dt + Φ(x(T)).  (5.29)

The problem is to find a control u ∈ U_ad that minimizes the cost functional J.

Theorem 5.3.1 (Pontryagin Minimum Principle) Consider the system (5.28) with the cost functional (5.29) and admissible controls given by U_ad. Suppose the functions f and ℓ satisfy the assumptions of Theorem 5.2.3 and Theorem 4.2.4. Then, for the pair {u^o, x^o} ∈ U_ad × C(I, R^n) to be optimal, it is necessary that there exists a ψ ∈ C(I, R^n) such that the triple {u^o, x^o, ψ} satisfies the following inequality and differential equations:

H(t, x^o(t), ψ(t), u^o(t)) ≤ H(t, x^o(t), ψ(t), v), ∀ v ∈ U, a.e. t ∈ I,  (5.30)

ẋ^o(t) = H_ψ = f(t, x^o(t), u^o(t)), x^o(0) = x_0, t ∈ I,  (5.31)

ψ̇(t) = −H_x = −f_x^*(t, x^o(t), u^o(t)) ψ(t) − ℓ_x(t, x^o(t), u^o(t)), t ∈ I,
ψ(T) = Φ_x(x^o(T)).  (5.32)

Proof By virtue of Theorem 4.2.4, an optimal control exists in the class of regular controls U_ad. Hence, it makes sense to characterize optimal controls in terms of necessary conditions of optimality. For each regular control u ∈ U_ad, there exists a relaxed control given by μ_t(·) ≡ δ_{u(t)}(·), t ∈ I, which is nothing but a Dirac measure concentrated along the path of the measurable function {u(t), t ∈ I} taking values in the set U. Clearly, this is an element of the set L_∞^w(I, M_1(U)) ⊂ L_∞^w(I, M(R^m)). For any continuous bounded (including Borel measurable) function ϕ on U,

∫_U ϕ(ξ) μ_t(dξ) = ∫_U ϕ(ξ) δ_{u(t)}(dξ) = ϕ(u(t)), t ∈ I.

Thus, the class of regular controls is embedded in the class of relaxed controls,

U_ad ↪ L_∞^w(I, M_1(U)),

through the mapping {u(t), t ∈ I} −→ {δ_{u(t)}(·), t ∈ I}. Let {u^o, x^o} be the optimal control-state pair and u ∈ U_ad any other control; then it follows from Corollary 5.2.4 that

∫_U H(t, x^o(t), ψ(t), ξ) δ_{u^o(t)}(dξ) ≤ ∫_U H(t, x^o(t), ψ(t), ξ) δ_{u(t)}(dξ),  (5.33)


for all u ∈ U_ad. Hence, for all u ∈ U_ad, we have

H(t, x^o(t), ψ(t), u^o(t)) ≤ H(t, x^o(t), ψ(t), u(t)) for a.e. t ∈ I.

From here one obtains the necessary inequality

H(t, x^o(t), ψ(t), u^o(t)) ≤ H(t, x^o(t), ψ(t), v) ∀ v ∈ U and for a.e. t ∈ I.  (5.34)

Clearly, this gives the necessary inequality (5.30). The other necessary conditions (5.31) and (5.32) follow from Theorem 5.2.3. This completes the proof. □

Remark 5.3.2 It is interesting to note that the proof of the Pontryagin Minimum Principle follows easily from the necessary conditions of optimality developed for relaxed controls.

Remark 5.3.3 It is important to emphasize that, for regular controls, the convexity assumption on the set U (the control domain) is crucial. Without it, the problem may have no solution in the class of regular (or ordinary) controls U_ad ⊂ L_∞(I, R^m) (as seen in the example given above), and hence the Pontryagin Minimum Principle does not hold. However, the generalized minimum principle given by Theorem 5.2.3 holds.

In case the vector field f and the cost integrand ℓ are also differentiable in the control variable on U, the necessary conditions can be further simplified. This is presented in the following result.

Corollary 5.3.4 Consider the system (5.28) with regular admissible controls U_ad and the cost functional (5.29). Suppose {f, ℓ, Φ} satisfy the assumptions of Theorem 5.3.1 and, further, both f and ℓ are continuously Gâteaux differentiable in the control variable. Then, for the pair {u^o, x^o} ∈ U_ad × C(I, R^n) to be optimal, it is necessary that there exists a ψ ∈ C(I, R^n) such that the triple satisfies the following inequality and differential equations:

⟨H_u(t, x^o(t), ψ(t), u^o(t)), u(t) − u^o(t)⟩ ≥ 0 for almost all t ∈ I,  (5.35)

ẋ^o(t) = f(t, x^o(t), u^o(t)), x^o(0) = x_0, t ∈ I,  (5.36)

−ψ̇(t) = f_x^*(t, x^o(t), u^o(t)) ψ(t) + ℓ_x(t, x^o(t), u^o(t)), t ∈ I, ψ(T) = Φ_x(x^o(T)),  (5.37)

where f_x^* is the transpose of the matrix f_x.

Proof The proof follows from Theorem 5.3.1. Indeed, under the given assumptions, Theorem 5.3.1 holds. Let u^o ∈ U_ad denote the optimal control. Take any u ∈ U_ad and ε ∈ [0, 1]. Since the set U is convex, it is clear that the control v given by v(t) = u^o(t) + ε(u(t) − u^o(t)) belongs to U for almost all t ∈ I. Substituting this in the


inequality (5.30) we obtain

H(t, x^o(t), ψ(t), u^o(t) + ε(u(t) − u^o(t))) − H(t, x^o(t), ψ(t), u^o(t)) ≥ 0, t ∈ I.

Dividing this by ε and letting ε ↓ 0, we obtain the inequality (5.35). The rest of the proof is obvious. □

Remark 5.3.5 The Pontryagin Minimum Principle given by Theorem 5.3.1 can also be derived by use of the so-called spike variation. In other words, one perturbs the optimal control only over a small set of positive Lebesgue measure to obtain another control, compares this against the optimal control, obtains the desired inequality by dividing the expression by the Lebesgue measure of the set on which the control is perturbed, and then lets the Lebesgue measure converge to zero. We believe that the method presented here using relaxed controls is both more general and simpler.
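To see how the conditions (5.35)–(5.37) pin down an optimal control in practice, consider the hypothetical scalar LQ problem ẋ = u, J(u) = ∫₀¹ (x² + u²) dt, x(0) = 1, with the control unconstrained for simplicity (this example is not from the text). Here H = x² + u² + uψ, so H_u = 0 gives u = −ψ/2, and the adjoint satisfies ψ̇ = −2x with ψ(1) = 0. The resulting two point boundary value problem can be solved by shooting on ψ(0):

```python
import math

def terminal_adjoint(psi0, n=20000):
    """Integrate x' = -psi/2 (the optimal control u = -psi/2) and psi' = -2x
    forward from x(0) = 1, psi(0) = psi0; return psi(1) (explicit Euler)."""
    dt = 1.0 / n
    x, psi = 1.0, psi0
    for _ in range(n):
        x, psi = x + dt * (-psi / 2.0), psi + dt * (-2.0 * x)
    return psi

# Shooting: bisect on psi(0) so that the terminal condition psi(1) = 0 holds.
lo, hi = 0.0, 10.0          # terminal_adjoint is increasing in psi0 here
for _ in range(60):
    mid = (lo + hi) / 2.0
    if terminal_adjoint(mid) > 0.0:
        hi = mid
    else:
        lo = mid
psi0 = (lo + hi) / 2.0

# Closed form for this problem: x(t) = cosh(1 - t)/cosh(1), so
# u(0) = -tanh(1) and psi(0) = -2 u(0) = 2 tanh(1).
print(psi0, 2.0 * math.tanh(1.0))
```

The shooting value agrees with the closed-form psi(0) = 2 tanh(1) up to the discretization error of the Euler scheme.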

5.4 Transversality Conditions

There are many control problems where the initial and final states may not be specified as particular fixed points in the state space. Instead, they may be required to belong to certain specified sets or manifolds. An example is the landing of an aircraft facing an emergency on any available spot on earth. We consider two situations.

(TC1) Let K be a closed convex subset of the state space R^n and suppose it is required to find a control that drives the system from a fixed initial state x_0 to the target set K while minimizing the cost functional

J_0(u) ≡ ∫_0^T ℓ(t, x, u) dt.

This is a Lagrange problem subject to the terminal constraint x(T) ∈ K. Assuming controllability from x_0 to the set K at time T, this can be converted to a Bolza problem given by

J(u) ≡ ∫_0^T ℓ(t, x, u) dt + I_K(x(T)) −→ inf,

where I_K(ξ) denotes the indicator function of the set K, that is, I_K(ξ) = 0 if ξ ∈ K and +∞ if ξ ∉ K. Since K is closed, the indicator function is lower semicontinuous, and since K is convex, the function I_K is convex and hence sub-differentiable, with sub-differential denoted by ∂I_K(·). Given that this problem has a solution, the optimal control must necessarily drive the system from the given initial state x_0 to some final state x^* ∈ K at time T. Thus, given the end points {x_0, x^*}, the optimal control that realizes this transfer must satisfy the same necessary conditions


of optimality as given by Theorem 5.3.1. The only difference in this case is that the adjoint state at the terminal time must satisfy ψ(T) ∈ ∂I_K(x^o(T)). This is equivalent to the condition

(ψ(T), y) ≤ (ψ(T), x^o(T)) ∀ y ∈ K.

Recall the definition of the normal cone to the set K at the point x ∈ K:

N_K(x) ≡ {z ∈ R^n : (z, y) ≤ (z, x) ∀ y ∈ K}.

Accordingly, the transversality condition can also be stated as ψ(T) ∈ N_K(x^o(T)). This is equivalent to ψ(T) ⊥ T_K(x^o(T)), where T_K(ξ) denotes the tangent plane to the set K passing through the point ξ ∈ K. Strict convexity of the set K guarantees the existence of a unique tangent plane at each boundary point of K.

Remark 5.4.1 This result can also be obtained by choosing a sequence of smooth functions Φ_n approximating the indicator function I_K and converging to it in the limit. In that case ψ_n(T) = D_x Φ_n(x^o(T)), and ψ_n(T) → ψ(T) with ψ(T) ∈ N_K(x^o(T)). This approximation may be useful for simplifying numerical computations.

(TC2) More generally, let M_0 and M_1 be two smooth manifolds in R^n with dimensions m_0 < n and m_1 < n, respectively. Suppose the system

ẋ = f(t, x, u), t ∈ I, x(0) ∈ M_0, x(T) ∈ M_1, u ∈ U_ad,

is controllable from M_0 to M_1 in time T. The problem is to find a control that drives the system from M_0 to M_1 while minimizing the cost functional

J(u) ≡ ∫_0^T ℓ(t, x, u) dt.

Again the necessary conditions of optimality given by Theorem 5.2.3, Corollary 5.2.4, Corollary 5.2.5, and Theorem 5.3.1 remain valid, with the boundary conditions for the adjoint equation given by

ψ(0) ∈ N_{M_0}(x^o(0)), ψ(T) ∈ N_{M_1}(x^o(T)),


where N_M(ξ) denotes the normal cone to the manifold M with vertex at the point ξ. Equivalently, in terms of tangent spaces, the transversality conditions can be described as

ψ(0) ⊥ T_{M_0}(x(0)), ψ(T) ⊥ T_{M_1}(x(T)),

where T_M(ξ) denotes the tangent plane passing through the point ξ ∈ M. The transversality condition at the left-hand end point, ψ(0) ⊥ T_{M_0}(x(0)), gives m_0 relations, while that at the right-hand end point gives m_1 relations. Thus, we have a consistent two point variable boundary value problem. In general, the normal and the tangent cones denoted by N_M and T_M are defined in the sense of Clarke ([51, p. 50]). Thus, the complete set of necessary conditions (including the terminal conditions) of optimality for the Lagrange problem (TC2) is given by (NC1), (NC2), and (NC3) as stated below:

(NC1): H(t, x^o(t), ψ(t), u^o(t)) ≤ H(t, x^o(t), ψ(t), v) ∀ v ∈ U, a.e. t ∈ I,
(NC2): ẋ^o(t) = H_ψ = f(t, x^o(t), u^o(t)), x^o(0) ∈ M_0, x^o(T) ∈ M_1,
(NC3): ψ̇(t) = −H_x = −f_x^*(t, x^o(t), u^o(t)) ψ(t) − ℓ_x(t, x^o(t), u^o(t)),
       ψ(0) ∈ N_{M_0}(x^o(0)), ψ(T) ∈ N_{M_1}(x^o(T)).

For illustration, let us consider the manifolds M_0 of dimension m_0 and M_1 of dimension m_1 given by

M_0 ≡ ∩_{k=1}^{n−m_0} {x ∈ R^n : g_k(x) = 0};  M_1 ≡ ∩_{k=1}^{n−m_1} {x ∈ R^n : h_k(x) = 0},

where the functions {g_k, h_r, 1 ≤ k ≤ n − m_0, 1 ≤ r ≤ n − m_1} are assumed to be once continuously differentiable. Further, both sets of gradient vectors {∇g_k(x), 1 ≤ k ≤ n − m_0} and {∇h_k(x), 1 ≤ k ≤ n − m_1} are assumed to be linearly independent and not to vanish simultaneously anywhere in R^n. Under these assumptions, the manifolds M_0 and M_1 are smooth, and at every point ξ ∈ M_0 and η ∈ M_1 the normal and tangent cones N_{M_0}(ξ), T_{M_0}(ξ) and N_{M_1}(η), T_{M_1}(η) are well defined. The transversality conditions are then given by

ψ(0) ⊥ T_{M_0}(x^o(0)), ψ(T) ⊥ T_{M_1}(x^o(T)).

The adjoint state ψ is said to satisfy the transversality conditions if the above orthogonality conditions hold. These expressions provide m_0 + m_1 initial-boundary conditions whereby the exact positions of the initial and final states, x^o(0) ∈ M_0 and x^o(T) ∈ M_1, can be determined. Thus, a complete set of 2n boundary conditions required for solving the two point boundary value problem (the minimum principle, the state equation, and the adjoint equation) is fully specified, and the problem can be solved by standard techniques.
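The orthogonality conditions above are easy to verify numerically once the gradients defining the manifold are available. A sketch for a hypothetical terminal manifold M_1 = {x ∈ R² : h(x) = x_1² + x_2² − 1 = 0} (the unit circle; all data illustrative): ψ(T) must be a multiple of ∇h(x^o(T)), i.e., orthogonal to the tangent direction.

```python
def grad_h(x):
    # Gradient of the hypothetical constraint h(x) = x1**2 + x2**2 - 1.
    return [2.0 * x[0], 2.0 * x[1]]

x_T = [0.6, 0.8]                      # terminal state on M1 (0.36 + 0.64 = 1)
normal = grad_h(x_T)
tangent = [-normal[1], normal[0]]     # rotate the normal by 90 degrees

psi_T = [1.2, 1.6]                    # candidate adjoint: 1.0 * grad_h(x_T)
dot = sum(p * v for p, v in zip(psi_T, tangent))
print(dot)  # 0.0 -> psi(T) is orthogonal to T_{M1}(x(T)), transversality holds

psi_bad = [1.0, 0.0]                  # a candidate violating transversality
dot_bad = sum(p * v for p, v in zip(psi_bad, tangent))
print(dot_bad)  # -1.6 (nonzero)
```

In a shooting method for the two point boundary value problem, residuals of exactly this kind (together with h(x(T)) = 0) supply the terminal equations to be driven to zero.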


5 Optimal Control: Necessary Conditions of Optimality

Remark 5.4.2 In the presence of a terminal cost ϕ(x(T )), including the boundary condition x(T ) ∈ M1 , the transversality condition takes the form ψ(T ) ∈ ϕx (x o (T )) + NM1 (x o (T )). This generalizes the transversality condition stated above. Clearly, if the terminal state is free, M1 = R n and hence the normal cone NM1 = {0}. As a consequence ψ(T ) = ϕx (x o (T )).
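As a minimal numerical illustration of the necessary conditions (NC1)-(NC3) with a free terminal state (so that, by Remark 5.4.2 with ϕ ≡ 0, ψ(T) = 0), the following sketch applies a forward-backward sweep to a scalar example. The dynamics ẋ = u and the cost ∫(x² + u²)/2 dt are invented for this illustration and are not taken from the text.

```python
# Forward-backward sweep for (NC1)-(NC3) on a hypothetical scalar example:
#   dx/dt = u, x(0) = 1, J(u) = int_0^1 (x^2 + u^2)/2 dt, free right end.
# Here H = psi*u + (x^2 + u^2)/2, the adjoint is -dpsi/dt = x with psi(1) = 0,
# and pointwise minimization of H over u in R gives u = -psi.

N, T = 1000, 1.0
h = T / N

def sweep(iters=200, relax=0.5):
    u = [0.0] * (N + 1)
    x = [0.0] * (N + 1)
    psi = [0.0] * (N + 1)
    for _ in range(iters):
        # forward pass: state equation (NC2), explicit Euler
        x[0] = 1.0
        for i in range(N):
            x[i + 1] = x[i] + h * u[i]
        # backward pass: adjoint equation (NC3) with psi(T) = 0
        psi[N] = 0.0
        for i in range(N, 0, -1):
            psi[i - 1] = psi[i] + h * x[i]
        # control update from the minimum condition (NC1): argmin H = -psi
        for i in range(N + 1):
            u[i] = (1 - relax) * u[i] + relax * (-psi[i])
    return x, psi, u

x, psi, u = sweep()

# H should be minimal at the computed control u = -psi
H = lambda i, v: psi[i] * v + 0.5 * (x[i] ** 2 + v ** 2)
i = N // 2
assert H(i, u[i]) <= H(i, u[i] + 0.1) and H(i, u[i]) <= H(i, u[i] - 0.1)
```

The relaxation factor is a standard device to make the sweep converge; it does not appear in the theory above.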

5.4.1 Necessary Conditions Under State Constraints

In this section we consider some additional problems that involve state constraints. Some of these problems can be transformed into the standard problems treated in the preceding section. Consider the problem

ẋ = f(t, x, u), x(0) = x0, t ∈ I,   (5.38)

J(u) = ∫_0^T ℓ(t, x, u) dt + Φ(x(T)) −→ min,   (5.39)

subject to the equality constraints

∫_0^T ℓ_i(t, x, u) dt = c_i, 1 ≤ i ≤ k.   (5.40)

Define f_{n+i} ≡ ℓ_i, 1 ≤ i ≤ k ≤ n, and

x_{n+i}(t) ≡ ∫_0^t f_{n+i}(s, x(s), u(s)) ds, x_{n+i}(0) = 0, x_{n+i}(T) = c_i, 1 ≤ i ≤ k.

We introduce the augmented state variable z = col{x_j, 1 ≤ j ≤ n + k} and the vector field F ≡ col{f_j, 1 ≤ j ≤ n + k}. This transforms the original problem into a standard problem treated in the previous sections. Indeed, now we have the system ż = F(t, z, u), z(0) = z0 = col{x0, 0}, z(T) ∈ {z ∈ R^{n+k} : z_{n+i} = c_i, i = 1, 2, · · · , k}. The cost functional remains unchanged,

J(u) ≡ ∫_0^T ℓ(t, z(t), u(t)) dt + Φ(z(T)),   (5.41)


except that we write this now using the state variable z, though ℓ and Φ are independent of the last k components of z. Define the augmented Hamiltonian as H(t, z, ψ, u) ≡ ⟨F(t, z, u), ψ⟩ + ℓ(t, z, u). The necessary conditions of optimality may be stated as follows.

Theorem 5.4.3 In order for the pair {u^o, z^o} to be optimal it is necessary that there exists an absolutely continuous (n + k) dimensional vector function ψ^o such that the triple {u^o, z^o, ψ^o} satisfies the following inequality and differential equations:

H(t, z^o(t), ψ^o(t), u^o(t)) ≤ H(t, z^o(t), ψ^o(t), v) for a.a. t ∈ I and all v ∈ U,

−ψ̇_i^o = Hzi(t, z^o(t), ψ^o(t), u^o(t)), 1 ≤ i ≤ n; −ψ̇_i^o = 0, n + 1 ≤ i ≤ n + k; ψ_i^o(T) = Φzi(z^o(T)), 1 ≤ i ≤ n,

ż^o = Hψ = F(t, z^o, u^o), z^o(0) = z0, z^o_{n+i}(T) = c_i, 1 ≤ i ≤ k.

Remark 5.4.4 Note that for the adjoint equations, there are n terminal (final) conditions specified, while for the state equation there are (n + k) initial conditions and k terminal conditions specified. Thus, in all there are 2(n + k) boundary conditions specified for the two point boundary value problem of dimension 2(n + k). This makes the system consistent.
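The augmentation just described can be sketched numerically: the constraint integrand of (5.40) is tracked as an extra state component, so the isoperimetric condition becomes a terminal boundary condition on the augmented state. The dynamics f and the integrand ℓ_1 below are hypothetical stand-ins, not taken from the text.

```python
# State augmentation for an integral (isoperimetric) constraint: track
# x_{n+1}(t) = int_0^t l1(s, x(s), u(s)) ds alongside x, so that the
# constraint int_0^T l1 dt = c becomes the boundary condition x_{n+1}(T) = c.

def simulate(u, x0=1.0, T=1.0, N=1000):
    """Integrate the augmented system z = (x, x_{n+1}) with explicit Euler."""
    h = T / N
    x, aug = x0, 0.0
    for i in range(N):
        t = i * h
        l1 = u(t) ** 2            # hypothetical constraint integrand l1
        x += h * (-x + u(t))      # hypothetical dynamics f(t, x, u) = -x + u
        aug += h * l1             # augmented component: d x_{n+1}/dt = l1
    return x, aug

xT, augT = simulate(lambda t: 0.5)
# with u = 0.5 constant, int_0^1 u^2 dt = 0.25, reproduced by the extra state
assert abs(augT - 0.25) < 1e-6
```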

5.5 Impulsive and Measure-Valued Controls

Impulsive controls are represented by a weighted sum of Dirac measures. Clearly, this is a subclass of the broader class of vector measures including signed measures as controls. In Chap. 4, we considered optimal control problems for nonlinear systems driven by finitely additive measures (signed measures as well as vector measures) represented by the following differential equation in R^n:

dx = F(t, x)dt + ∫_U G(t, x, ξ) μ(dξ × dt), x(0) = x0, t ∈ I,   (5.42)

where F : I × R^n −→ R^n and G : I × R^n × U −→ M(n × m) ≡ L(R^m, R^n) are suitable functions and μ ∈ Mbfa(Σ_{U×I}, R^m) (the space of finitely additive bounded vector measures having bounded variation) is a control measure. In Chap. 3, we considered the question of existence and uniqueness of solutions for this class of systems, and in Chap. 4, we considered the questions of existence of


optimal control policies for control problems given by

J(μ) ≡ ∫_{I×U} ℓ(t, x, ξ) mo(dξ × dt) + Φ(x(T)) −→ min, μ ∈ Mad,   (5.43)

where Mad ⊂ Mbfa(Σ_{U×I}, R^m) is the class of admissible control measures as discussed in Chaps. 3 and 4, and mo ∈ M+_bfa(Σ_{U×I}) is the so-called dominating measure in the sense that every μ ∈ Mad is absolutely continuous with respect to mo (μ ≪ mo). Let μo ∈ Mad denote the optimal control and μ ∈ Mad any other admissible control. By convexity of Mad, it is clear that με ≡ μo + ε(μ − μo) ∈ Mad for all ε ∈ [0, 1]. Then, by optimality of μo, it is evident that J(με) ≥ J(μo) ∀ μ ∈ Mad and ε ∈ [0, 1]. Hence,

(1/ε)(J(με) − J(μo)) ≥ 0 ∀ μ ∈ Mad, and ε ∈ (0, 1).

(5.47)

Let {x^ε, x^o} ∈ B∞(I, R^n) denote the solutions of the state equation (5.42) corresponding to the control measures {με, μo}, respectively. In other words, the pair {x^ε, x^o} satisfies the following integral equations:

x^ε(t) = x0 + ∫_0^t F(s, x^ε(s)) ds + ∫_0^t ∫_U G(s, x^ε(s), ξ) με(dξ × ds), t ∈ I,   (5.48)

x^o(t) = x0 + ∫_0^t F(s, x^o(s)) ds + ∫_0^t ∫_U G(s, x^o(s), ξ) μo(dξ × ds), t ∈ I.   (5.49)

Under the given assumptions (A1) and (A2) (see Chap. 3, Sect. 3.4), one can readily verify that x^ε −→ x^o strongly in B∞(I, R^n) as με converges strongly to μo. Further, subtracting Eq. (5.49) from Eq. (5.48) term by term, computing the difference quotient (1/ε)(x^ε(t) − x^o(t)), and letting ε ↓ 0, we denote the limit, if it exists, by y:

y(t) ≡ lim_{ε↓0}(1/ε)(x^ε(t) − x^o(t)), t ∈ I.

(5.50)

It is not difficult to verify that y satisfies the following differential equation:

dy(t) = (DF)(t, x^o(t)) y(t) dt + ∫_U (DG)(t, x^o(t), ξ) y(t) μo(dξ × dt) + ∫_U G(t, x^o(t), ξ)(μ − μo)(dξ × dt), y(0) = 0, t ∈ I.   (5.51)

This is a linear differential equation in y and can be written compactly as dy = A(t)y(t)dt + B(dt)y(t) + γμ (dt), y(0) = 0, t ∈ I,

(5.52)


where

A(t) ≡ (DF)(t, x^o(t)), t ∈ I,

B(σ) ≡ ∫_{U×σ} (DG)(t, x^o(t), ξ) μo(dξ × dt), σ ∈ Σ_I,

γμ(σ) ≡ ∫_{U×σ} G(t, x^o(t), ξ)(μ − μo)(dξ × dt), σ ∈ Σ_I,

where Σ_I denotes the algebra of subsets of the set I. Since, under the given assumptions, both F and G are continuously Gâteaux differentiable in the state variable with the G-derivatives being bounded on bounded sets, and x^o ∈ B∞(I, R^n) and μo, μ ∈ Mad, it is clear that A is a bounded (n × n) matrix-valued function, B(σ), σ ∈ Σ_I, is a bounded (n × n) matrix-valued set function belonging to Mbfa(Σ_I, L(R^n)), and γμ ∈ Mbfa(Σ_I, R^n) is a bounded finitely additive R^n-valued vector measure. Using the Banach fixed point theorem, as in Theorem 3.4.10, one can verify that the variational equation (5.52), and hence (5.51), has a unique solution y ∈ B∞(I, R^n), so the limit in (5.50) is well defined. Thus, the map

γμ −→ y,   (5.53)

from Mbfa(Σ_I, R^n) to B∞(I, R^n), is continuous and linear, and hence bounded. On the other hand, computing the difference quotient (5.47) and letting ε ↓ 0, we obtain the Gâteaux differential of J at μo in the direction (μ − μo) as follows:

dJ(μo; μ − μo) = lim_{ε↓0}(1/ε)(J(με) − J(μo)) = ∫_{U×I} ⟨ℓx(t, x^o(t), ξ), y(t)⟩ mo(dξ × dt) + ⟨Φx(x^o(T)), y(T)⟩.

(5.54)
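The continuity and linearity of the map γμ −→ y in (5.52)-(5.53) can be seen concretely when A is constant and B, γμ are purely discrete. The sketch below, with all atoms and coefficients invented for the illustration, checks that doubling the driving measure doubles the solution.

```python
# Discrete sketch of the variational equation (5.52):
#   dy = A(t) y dt + B(dt) y + gamma(dt), y(0) = 0,
# with scalar constant A and B, gamma supported on finitely many atoms
# (hypothetical data). Between atoms y evolves by dy/dt = A*y; at an atom
# tau_j the state jumps by B_j * y(tau_j-) + g_j.

def solve(atoms, A=-1.0, T=1.0, N=1000):
    """atoms: dict tau -> (B_j, g_j); returns y(T) for y(0) = 0."""
    h = T / N
    y = 0.0
    for i in range(N):
        t = i * h
        for tau, (Bj, gj) in atoms.items():
            if t <= tau < t + h:          # apply the atom falling in this step
                y += Bj * y + gj
        y += h * A * y                    # continuous part between atoms
    return y

atoms = {0.3: (0.1, 1.0), 0.7: (-0.2, 0.5)}
y1 = solve(atoms)
y2 = solve({tau: (Bj, 2 * gj) for tau, (Bj, gj) in atoms.items()})
# for fixed A and B the map gamma -> y is linear: doubling gamma doubles y
assert abs(y2 - 2 * y1) < 1e-9
```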

By optimality of μo , it follows from (5.47) that dJ (μo ; μ − μo ) ≥ 0 ∀ μ ∈ Mad .

(5.55)

By virtue of our assumption, ℓx(·, x^o(·), ·) ∈ L1(mo, R^n) and Φx(x^o(T)) ∈ R^n. Combining this with the fact that Eq. (5.52) has a unique solution y ∈ B∞(I, R^n), we conclude that the functional L, given by

L(y) ≡ ∫_{U×I} ⟨ℓx(t, x^o(t), ξ), y(t)⟩ mo(dξ × dt) + ⟨Φx(x^o(T)), y(T)⟩,

(5.56)

is a well defined bounded linear functional. Thus, y −→ L(y) is a continuous linear functional on B∞(I, R^n), and hence it follows from (5.53) that the composition map

γμ −→ y −→ L(y) ≡ L̃(γμ)

(5.57)


is a continuous linear functional on the Banach space Mbfa(Σ_I, R^n). Thus, by duality there exists a ψ ∈ (Mbfa(Σ_I, R^n))^∗ (the topological dual of Mbfa(Σ_I, R^n)) such that

L̃(γμ) = ∫_I ⟨ψ(t), γμ(dt)⟩.   (5.58)

Note that, under the canonical embedding of any Banach space into its bidual, in particular B∞(I, R^n) ⊂ (B∞(I, R^n))^∗∗ = (Mbfa(Σ_I, R^n))^∗, ψ may very well lie in the Banach space B∞(I, R^n) itself. Indeed this is true, as seen in the proof given below. Using the expression for γμ in Eq. (5.58), we obtain

L̃(γμ) = ∫_{U×I} ⟨ψ(t), G(t, x^o(t), ξ)(μ − μo)(dξ × dt)⟩.

(5.59)

It follows from (5.54), (5.55), (5.57), and (5.59) that

∫_{U×I} ⟨ψ(t), G(t, x^o(t), ξ)(μ − μo)(dξ × dt)⟩ ≥ 0 ∀ μ ∈ Mad.   (5.60)

This gives the necessary condition (5.46). Using the variational equation (5.51), or equivalently Eq. (5.52), in the above expression, integrating by parts, and using Fubini's theorem, one can readily derive the following identity:

L̃(γμ) = ⟨ψ(T), y(T)⟩ − ∫_0^T ⟨y(t), dψ(t)⟩ − ∫_0^T ⟨y(t), (DF)^∗(t, x^o(t))ψ(t)⟩ dt − ∫_0^T ⟨y(t), ∫_U (DG)^∗(t, x^o(t), ξ)ψ(t) μo(dξ × dt)⟩.   (5.61)

By virtue of the expression (5.57), we have L̃(γμ) = L(y). This is satisfied if, and only if, the following relations hold:

−dψ = (DF)^∗(t, x^o(t))ψ(t)dt + ∫_U (DG)^∗(t, x^o(t), ξ)ψ(t) μo(dξ × dt) + ∫_U ℓx(t, x^o(t), ξ) mo(dξ × dt), t ∈ I, ψ(T) = Φx(x^o(T)).

(5.62)


Hence, we conclude that ψ must satisfy the necessary condition (5.45). Equation (5.62), or equivalently (5.45), is a linear differential equation with a terminal condition (instead of an initial condition) and is called the adjoint equation. This equation can be written in the compact form as follows:

−dψ = A^∗(t)ψ(t)dt + B^∗(dt)ψ(t) + λ_{mo}(dt), t ∈ I, ψ(T) = Φx(x^o(T)) ≡ ψo(T),   (5.63)

where A^∗(t) is the adjoint of the matrix-valued function A(t) and B^∗(dt) is the adjoint of the matrix-valued set function B(dt), all defined immediately following Eq. (5.52). The set function λ_{mo}(·) is given by

λ_{mo}(σ) = ∫_{U×σ} ℓx(t, x^o(t), ξ) mo(dξ × dt), σ ∈ Σ_I.

By reversing the flow of time one can convert this into an initial value problem as follows:

dϕ(t) = A_T^∗(t)ϕ(t)dt + B_T^∗(dt)ϕ(t) + λ_{mo,T}(dt), ϕ(0) = ψo(T), t ∈ I,

(5.64)

where A_T^∗(t) = A^∗(T − t), B_T^∗(dt) is the translation of the set function B^∗(dt) by T, and λ_{mo,T}(dt) is the translation of the set function λ_{mo}(dt) by T. For illustration of what is meant by the translation, let us consider the set function B_T^∗ ∈ Mbfa(Σ_I, L(R^n)) and let f be a measurable simple function with values in R^n. Then f(t) = Σ_{i=1}^κ z_i χ_{[t_{i−1}, t_i)}(t), z_i ∈ R^n, for t, t_i, t_{i−1} ∈ I, and the integral of this function with respect to the set function B_T^∗ is given by

∫_0^T B_T^∗(dt) f(t) = Σ_{i=1}^κ B_T^∗([t_{i−1}, t_i)) z_i = Σ_{i=1}^κ B^∗((T − t_i, T − t_{i−1}]) z_i.

Since the class of simple functions is dense in B∞(I, R^n), the integral is well defined for any f ∈ B∞(I, R^n). Similar statements hold for the R^n-valued set function λ_{mo,T} ∈ Mbfa(Σ_I, R^n). Thus, using the Banach fixed point theorem and following the same procedure as in Chap. 3, Theorem 3.4.10, one can prove that Eq. (5.64) has a unique solution ϕ ∈ B∞(I, R^n) ⊂ (Mbfa(Σ_I, R^n))^∗. Hence, Eq. (5.63) has a unique solution ψ ∈ B∞(I, R^n) given by ψ(t) = ϕ(T − t), t ∈ I. Equation (5.44) is the given dynamic system with x^o being the solution corresponding to the optimal control measure μo, so there is nothing to prove. This completes the proof of all the necessary conditions of optimality as stated. □
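The time-reversal device used to pass from (5.63) to (5.64) can be checked numerically on a scalar terminal-value problem. All coefficients below are illustrative, not from the text: we solve −ψ̇ = aψ + q(t), ψ(T) = pT, once by stepping backward from T, and once by setting ϕ(t) = ψ(T − t) and stepping the translated equation forward from ϕ(0) = pT.

```python
# Time reversal: a terminal-value adjoint equation becomes an initial-value
# problem after the substitution phi(t) = psi(T - t) with translated data.

import math

T, pT, a = 1.0, 2.0, -0.5       # hypothetical horizon, terminal value, drift
q = lambda t: math.sin(t)       # hypothetical inhomogeneity

def solve_reversed(N=20000):
    """Forward Euler for dphi/dt = a*phi + q(T - t), phi(0) = pT."""
    h = T / N
    phi = pT
    for i in range(N):
        t = i * h
        phi += h * (a * phi + q(T - t))   # translated coefficients
    return phi                            # phi(T) = psi(0)

def solve_backward(N=20000):
    """Step psi from T down to 0 using -dpsi/dt = a*psi + q(t)."""
    h = T / N
    psi = pT
    for i in range(N):
        t = T - i * h
        psi += h * (a * psi + q(t))       # psi(t - h) = psi(t) + h*(a*psi + q)
    return psi                            # psi(0)

# both computations recover the same initial value psi(0)
assert abs(solve_reversed() - solve_backward()) < 1e-9
```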


5.5.2 Vector Measures as Controls

In the preceding theorem we considered signed measures as controls. In fact, this can be readily extended to cover vector measures with values in any finite dimensional normed space. For any mo ∈ M+_bfa(Σ_{U×I}), we can choose for the admissible controls Mad a weakly compact subset of Mbfa(Σ_{U×I}, R^m) which is uniformly mo-continuous. The statement of the corresponding theorem and its proof are similar to those of Theorem 5.5.1 with only minor changes. The system is now driven by vector measures as stated below:

dx = F(t, x(t))dt + ∫_U G(t, x(t), ξ) μ(dξ × dt), x(0) = x0, t ∈ I,   (5.65)

where F is the same as for system (5.42), G is an (n × m) matrix-valued function defined on I × R^n × U, and μ is an R^m-valued vector measure; more precisely, μ ∈ Mbfa(Σ_{U×I}, R^m).

Theorem 5.5.2 Consider the system (5.65) with the cost functional (5.43) and suppose the assumptions of Theorem 4.5.3 hold and that the set Mad ⊂ Mbfa(Σ_{U×I}, R^m) is weakly compact and convex. Further, suppose the pair {F, G} is once Gâteaux differentiable in the state variable with the G-derivatives being continuous and bounded, and the pair {ℓ, Φ} appearing in the objective functional (5.43) is once continuously Gâteaux differentiable with respect to the state variable, satisfying ℓx(·, x(·), ·) ∈ L1(mo, R^n) and Φx(x(T)) ∈ R^n for any x ∈ B∞(I, R^n). Then, in order for the control state pair {μo, x^o} ∈ Mad × B∞(I, R^n) to be optimal, it is necessary that there exists a ψ ∈ B∞(I, R^n) such that the triple {μo, x^o, ψ} satisfies the following system of evolution equations and the inequality:

dx^o(t) = F(t, x^o(t))dt + ∫_U G(t, x^o(t), ξ) μo(dξ × dt), x(0) = x0, t ∈ I,   (5.66)

−dψ(t) = (DF)^∗(t, x^o(t))ψ(t)dt + ∫_U (DG)(t, x^o(t), ξ; ψ(t)) μo(dξ × dt) + ∫_U ℓx(t, x^o(t), ξ) mo(dξ × dt), ψ(T) = Φx(x^o(T)), t ∈ I,   (5.67)

∫_{U×I} ⟨G^∗(t, x^o(t), ξ)ψ(t), (μ − μo)(dξ × dt)⟩ ≥ 0 ∀ μ ∈ Mad.   (5.68)

Proof The proof is identical to that of Theorem 5.5.1. □


Looking at Eq. (5.67) one may wonder why the adjoint sign does not appear on DG. With a little reflection, the reader will readily discover that here we have repeated the adjoint operation twice. A system closely related to (5.65) is given by dx = F (t, x(t))dt + G(t, x(t))γ (dt), x(0) = x0 , t ∈ I,

(5.69)

where F is the same as for system (5.65), G is an (n × m) matrix-valued function (independent of ξ) defined on I × R^n, and γ is an R^m-valued vector measure; more precisely, γ ∈ Mbfa(Σ_I, R^m). The set Mad is a weakly compact subset of Mbfa(Σ_I, R^m), and as seen in Chap. 4, Theorem 4.5.1, there exists a finitely additive nonnegative finite measure ν ∈ M+_bfa(Σ_I) such that γ ≪ ν.

Choose ε > 0 sufficiently small so that μ2 ≡ μ1 − εm1 ∈ Mad. Computing J(μ2), it follows from the Lagrange formula that

J(μ2) = J(μ1) + dJ(μ1; μ2 − μ1) + o(ε) = J(μ1) + ⟨η1, μ2 − μ1⟩ + o(ε) = J(μ1) − ε⟨η1, m1⟩ + o(ε) = J(μ1) − ε‖η1‖²_X + o(ε) = J(μ1) − ε‖m1‖²_{X∗} + o(ε).

Hence, for ε > 0 sufficiently small, we have J(μ2) < J(μ1). Next, we return to Step 1 with μ2 and continue the process till a stopping criterion is satisfied. Clearly, following the above steps we can construct a sequence of control measures {μn} ⊂ Mad such that

J(μ1) ≥ J(μ2) ≥ J(μ3) ≥ · · · ≥ J(μn) ≥ · · · .

This is a monotone non-increasing sequence. Since, by our assumption, ℓ and Φ are nonnegative, J(μ) ≥ 0 for all μ ∈ Mad. Thus, there exists a nonnegative real number Mo such that lim_{n→∞} J(μn) = Mo. Since Mad is weakly compact and {μn} ⊂ Mad, there exists a μo ∈ Mad such that (along a generalized subsequence if necessary) μn ⇀ μo weakly. Under the assumptions of Theorem 5.5.1, the cost integrands {ℓ, Φ} are Gâteaux differentiable in the state variable and hence continuous. Using this fact and the Lebesgue dominated convergence theorem as in the proof of Theorem 4.5.3, one can verify that μ −→ J(μ) is weakly continuous. Thus, J(μo) = lim_{n→∞} J(μn) = Mo. This completes the proof. □


5.7 Implementability of Necessary Conditions of Optimality

For applications of optimal control theory it is necessary to implement the necessary conditions of optimality using suitable computer programs. In the following subsections we present this for controls described by discrete measures as well as general measures.

5.7.1 Discrete Measures

For illustration we apply the necessary conditions of optimality given by Theorem 5.5.1 to the class of discrete measures that consists of purely impulsive controls. Consider the set I0 ⊂ I given by I0 ≡ {t_i, i = 0, 1, 2, · · · , κ : 0 = t0 < t1 < t2 < t3 < · · · < tκ < T} for any κ ∈ N (set of nonnegative integers). This is the set of instants at which controls may be applied. Here, for admissible controls we choose the following family of discrete measures:

Mδ(U × I) ≡ { μ ∈ Mad : μ(dξ × dt) = Σ_{i=1}^κ α_i a_i δ_{v_i}(dξ) δ_{t_i}(dt), a_i ∈ [−1, +1], v_i ∈ U, t_i ∈ I0 },

(5.77)

where U is a compact subset of R^m and α_i ≥ 0, i = 1, 2, · · · , κ. For this class of admissible controls the (dominating) measure m0 is given by

m0(dξ × dt) ≡ Σ_{i=1}^κ α_i δ_{v_i}(dξ) δ_{t_i}(dt)

with Σ_i α_i < ∞. The cost functional given by the expression (5.43) reduces to

J(μ) = Σ_{i=1}^κ α_i ℓ(t_i, x(t_i), v_i) + Φ(x(T)).

In this case the system (5.44) reduces to the following pair of dynamic systems, dx o (t) = F (t, x o (t))dt, x o (0) = x0 , t ∈ I \ I0 ;

(5.78)

Δx^o(t_i) = a_i^o G(t_i, x^o(t_i), v_i^o), t_i ∈ I0.

(5.79)

For given I0 , one can consider the set Aad ≡ {ai ∈ [−1, +1], vi ∈ U, i = 1, 2, · · · , κ}


as the set of admissible control policies. Clearly, this is a compact set. For convenience of notation an element of this set may be denoted by the pair (a, v) ≡ {a_i, v_i, i = 1, 2, · · · , κ}. It follows from Theorem 5.5.1 that the adjoint system is given by

−ψ̇ = (DF)^∗(t, x^o(t))ψ(t), t ∈ I \ I0, ψ(T) = Φx(x^o(T)),   (5.80)

−Δψ(t_i) = a_i^o (DG)^∗(t_i, x^o(t_i), v_i^o)ψ(t_i) + α_i ℓx(t_i, x^o(t_i), v_i^o), t_i ∈ I0.   (5.81)

The inequality (5.46) takes the following form:

Σ_i [ a_i ⟨ψ(t_i), G(t_i, x^o(t_i), v_i)⟩ + α_i ℓ(t_i, x^o(t_i), v_i) ] ≥ Σ_i [ a_i^o ⟨ψ(t_i), G(t_i, x^o(t_i), v_i^o)⟩ + α_i ℓ(t_i, x^o(t_i), v_i^o) ]   (5.82)

for all (a, v) ∈ Aad . Using Proposition 3.4.8, Proposition 3.4.9, and lower semi-continuity of the pair {, } in the state variable, and compactness of the set Aad , one can prove the existence of an optimal control policy (a o , v o ) ∈ Aad . Even though the elements of the set I0 where impulsive forces may be applied are fixed, the intensities of the impulses are not; they are variable, and the optimal policy may dictate removal of some of the points in I0 by choosing the corresponding intensities equal to zero.
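The pair (5.78)-(5.79) with the reduced cost of this subsection is straightforward to simulate: integrate the drift between the instants in I0 and apply the jumps at those instants. The drift F, jump map G, integrand ℓ, and terminal cost Φ below are hypothetical stand-ins chosen only to make the sketch runnable.

```python
# Simulation sketch of an impulsive system in the spirit of (5.78)-(5.79):
# between the instants in I0 the state follows dx/dt = F(t, x); at t_i the
# state jumps by a_i * G(t_i, x(t_i), v_i); the cost is
# J = sum_i alpha_i * ell(t_i, x(t_i), v_i) + Phi(x(T)).

import math

F = lambda t, x: -x                      # hypothetical drift
G = lambda t, x, v: v                    # hypothetical jump map
ell = lambda t, x, v: x * x              # hypothetical cost integrand
Phi = lambda x: x * x                    # hypothetical terminal cost

def run(impulses, x0=1.0, T=1.0, N=1000):
    """impulses: list of (t_i, alpha_i, a_i, v_i); returns (x(T), J)."""
    h = T / N
    x, J = x0, 0.0
    for i in range(N):
        t = i * h
        for ti, alpha, a, v in impulses:
            if t <= ti < t + h:
                x += a * G(ti, x, v)     # jump: Delta x = a_i G(t_i, x, v_i)
                J += alpha * ell(ti, x, v)
        x += h * F(t, x)                 # continuous evolution between jumps
    return x, J + Phi(x)

xT, J = run([(0.25, 1.0, 0.5, -1.0), (0.75, 1.0, 0.5, -1.0)])
# choosing zero intensities (a_i = 0) removes the impulses entirely,
# leaving the pure decay x(T) = e^{-1} up to the Euler error
x_free, J_free = run([(0.25, 1.0, 0.0, -1.0), (0.75, 1.0, 0.0, -1.0)])
assert abs(x_free - math.exp(-1.0)) < 1e-3
```

This also illustrates the remark above: points of I0 are effectively removed by setting the corresponding intensities to zero.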

5.7.2 General Measures

It is clear from the preceding subsection that it is relatively easy to develop a computer program to solve optimal control problems involving discrete measures. For general measures this is possible only if the positive measure mo, with respect to which the admissible set Mad is uniformly absolutely continuous, is known. For illustration we consider the necessary conditions of optimality given by Theorem 5.5.2. Here the system is governed by (5.65) and the cost functional is given by the expression (5.43). Note that every μ ∈ Mad is absolutely continuous with respect to the positive measure mo and hence has a Radon–Nikodym derivative (RND) g ∈ L1(mo, R^m), giving dμ = g dmo. Define the set

Lad ≡ {g ∈ L1(mo, R^m) : dμ = g dmo, μ ∈ Mad}.


Clearly, the set Lad is isometrically isomorphic to Mad, signified by Lad ≅ Mad. Thus, one may consider Lad to be the set of admissible controls. Since compactness is preserved under isomorphism, the set Lad is a weakly (sequentially) compact subset of L1(mo, R^m). In view of this, the necessary conditions of optimality given by Theorem 5.5.2 are equivalent to the following necessary conditions:

dx^o(t) = F(t, x^o(t))dt + ∫_U G(t, x^o(t), ξ) g^o(ξ, t) mo(dξ × dt), t ∈ I, x(0) = x0,   (5.83)

−dψ(t) = (DF)^∗(t, x^o(t))ψ(t)dt + ∫_U (DG)(t, x^o(t), ξ; ψ(t)) g^o(ξ, t) mo(dξ × dt) + ∫_U ℓx(t, x^o(t), ξ) mo(dξ × dt), t ∈ I, ψ(T) = Φx(x^o(T)),   (5.84)

∫_{U×I} ⟨G^∗(t, x^o(t), ξ)ψ(t), (g − g^o)(ξ, t)⟩ mo(dξ × dt) ≥ 0 ∀ g ∈ Lad,   (5.85)

where we have replaced the measures by their corresponding RNDs. Thus, with a slight abuse of notation, the Gâteaux differential of J at μo in the direction (μ − μo) is equivalent to the Gâteaux differential of J at g^o in the direction (g − g^o), written as dJ(μo; μ − μo) = dJ(g^o; g − g^o). Hence, the inequality (5.68) is equivalent to the following inequality:

dJ(μo; μ − μo) = dJ(g^o; g − g^o) = ∫_{U×I} ⟨G^∗(t, x^o(t), ξ)ψ(t), (g − g^o)(ξ, t)⟩ mo(dξ × dt) ≥ 0, ∀ g ∈ Lad.

(5.86)

We recall that the convergence Theorem 5.6.1 also holds for the necessary conditions of optimality given by Theorem 5.5.2. In this case, let us state a few steps of the algorithm. Choose any g^1 ∈ Lad ⊂ L1(mo, R^m) (corresponding to μ1 ∈ Mad), and solve Eq. (5.83) with g^1 in place of g^o, giving x^1 ∈ B∞(I, R^n). Use the pair {g^1, x^1} in place of {g^o, x^o} in the adjoint equation (5.84) and solve for ψ^1. Use the triple {g^1, x^1, ψ^1} in place of {g^o, x^o, ψ^o} in the expression on the left-hand side of the inequality (5.86) and define η^1(ξ, t) ≡ G^∗(t, x^1(t), ξ)ψ^1(t), giving

dJ(μ1; μ − μ1) = ∫_{U×I} ⟨η^1(ξ, t), μ(dξ × dt) − μ1(dξ × dt)⟩ = ∫_{U×I} ⟨η^1(ξ, t), (g(ξ, t) − g^1(ξ, t))⟩ mo(dξ × dt) ≡ dJ(g^1; g − g^1).   (5.87)


Take μ = μ2 ≡ μ1 − εm1 with m1 ∈ D(η1) and ε > 0 sufficiently small so that μ2 ∈ Mad. Then

dJ(μ1; μ2 − μ1) = −ε ∫_{U×I} ⟨η^1(ξ, t), m1(dξ × dt)⟩ = −ε‖η^1‖² + o(ε) = −ε‖m1‖² + o(ε).

Hence, by virtue of the isometric isomorphism Mad ≅ Lad, there exist {g^1, g^2, h^1} ∈ L1(mo, R^m) such that dμ1 = g^1 dmo, dμ2 = g^2 dmo, dm1 = h^1 dmo. Thus, again with a slight abuse of notation, we can rewrite the Gâteaux differentials of J as follows:

dJ(μ1; μ2 − μ1) = dJ(g^1; g^2 − g^1) = −ε‖η^1‖² + o(ε) = −ε‖h^1‖²_{L1(mo,R^m)} + o(ε),

(5.88)

and

J(μ2) = J(μ1) + dJ(μ1; μ2 − μ1) + o(ε) = J(μ1) − ε‖m1‖² + o(ε),

which is equivalent to

J(g^2) = J(g^1) + dJ(g^1; g^2 − g^1) + o(ε) = J(g^1) − ε‖h^1‖²_{L1(mo,R^m)} + o(ε).

Here μ2 = μ1 − εm1 and g^2 = g^1 − εh^1. Hence, corresponding to the monotone decreasing (more precisely non-increasing) sequence

J(μ1) ≥ J(μ2) ≥ J(μ3) ≥ · · · ≥ J(μn) ≥ J(μ_{n+1}) ≥ · · · ,

there exists a parallel monotone non-increasing sequence

J(g^1) ≥ J(g^2) ≥ J(g^3) ≥ · · · ≥ J(g^n) ≥ J(g^{n+1}) ≥ · · · ,

along which the cost functional converges, possibly to a local minimum. This shows that, given the dominating measure mo ∈ M+_bfa(Σ_{U×I}), one can develop a computer program using the RNDs {g^n} ⊂ L1(mo, R^m) in place of the measures {μn} ⊂ Mbfa(Σ_{U×I}, R^m).

Remark 5.7.1 The inverse problem is much simpler. Here one can choose any ν ∈ M+_bfa(Σ_{U×I}) as the dominating measure and then, for admissible controls, choose a bounded and weakly closed subset of the space of ν-continuous vector measures Mν(Σ_{U×I}, R^m), a subset of Mbfa(Σ_{U×I}, R^m).
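The RND-based descent iteration g^{n+1} = g^n − εh^n of this subsection can be sketched on a discretized toy problem. Everything below is a stand-in: mo is represented by a finite set of equal weights, g by its values at those atoms, and the cost by a simple quadratic functional invented for the illustration.

```python
# Projected gradient descent on the density g (a discrete stand-in for the
# iteration g^{n+1} = g^n - eps * h^n of Sect. 5.7.2): mo is a discrete
# dominating measure with weights w_k, and J(g) = sum_k w_k*(g_k - target_k)^2
# plays the role of the cost functional.

K = 50
w = [1.0 / K] * K                          # weights of the dominating measure
target = [0.5 * (k / K) for k in range(K)]  # hypothetical minimizer

def J(g):
    return sum(wk * (gk - tk) ** 2 for wk, gk, tk in zip(w, g, target))

def grad(g):
    # density of the gradient with respect to mo (the analogue of eta^n)
    return [2.0 * (gk - tk) for gk, tk in zip(g, target)]

g = [1.0] * K
costs = [J(g)]
for _ in range(100):
    eta = grad(g)
    # descent step with projection onto the admissible bounds [-2, 2]
    g = [max(-2.0, min(2.0, gk - 0.1 * ek)) for gk, ek in zip(g, eta)]
    costs.append(J(g))

# the iteration produces a monotone non-increasing cost sequence
assert all(a >= b - 1e-12 for a, b in zip(costs, costs[1:]))
```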


5.8 Structural Controls

As seen in Sect. 4.6 of Chap. 4, Eq. (4.73) describes a mathematical model for a structurally perturbed system. This is given by

dx = A0(t)x(t)dt + F(t, x(t))dt + B(dt)x(t), x(0) = x0, t ∈ I.

(5.89)

The cost functional is given by

J(B) = ∫_I ℓ(t, x(t)) dt + Φ(x(T)),   (5.90)

where x ∈ B∞(I, R^n) is the solution of Eq. (5.89) corresponding to the structural control B ∈ Mbfa(Σ_I, L(R^n)). Here we are interested in determining the optimal structural control. Let Mad ⊂ Mbfa(Σ_I, L(R^n)) be a weakly compact set. We note that for vector measures taking values in finite dimensional Banach spaces such as R^n and L(R^n), the criterion for weak compactness as stated in Chap. 4, Theorem 4.5.1, remains valid. Thus, under similar assumptions, as stated in Chap. 4, Theorem 4.6.3 and Remark 4.6.4, the structural control problem for the system given by (5.89) and the cost functional (5.90) has an optimal solution Bo ∈ Mad minimizing the cost functional (5.90). Here we present the necessary conditions that Bo must satisfy for optimality.

Theorem 5.8.1 Consider the system given by Eq. (5.89) with cost functional (5.90) and suppose the set Mad is a weakly compact convex subset of Mbfa(Σ_I, L(R^n)). Suppose the assumptions of Theorem 4.6.1 and Theorem 4.6.3 hold and that F is once continuously Gâteaux differentiable. Then, for the control state pair (Bo, x^o) ∈ Mad × B∞(I, R^n) to be optimal it is necessary that there exists a ψ ∈ B∞(I, R^n) such that the triple {Bo, x^o, ψ} satisfies the following inequality and differential equations:

dJ(Bo; B − Bo) ≡ ∫_0^T ⟨ψ(t), [B(dt) − Bo(dt)]x^o(t)⟩ ≥ 0, ∀ B ∈ Mad,   (5.91)

−dψ = A0^∗(t)ψ dt + (DF)^∗(t, x^o(t))ψ dt + ℓx(t, x^o(t))dt + Bo^∗(dt)ψ, ψ(T) = Φx(x^o(T)), t ∈ I,

(5.92)

dx o = A0 (t)x o (t)dt + F (t, x o (t))dt + Bo (dt)x o (t), x(0) = x0 , t ∈ I.

(5.93)

Proof Since the proof is quite similar to that of Theorem 5.5.3 we present only an outline. Under the given assumptions, one can verify that the control to solution map B −→ x(B) from Mbf a (I , L(R n )) to B∞ (I, R n ) is continuous with respect to the weak topology on Mbf a (I , L(R n )) and the norm topology on B∞ (I, R n ). Hence, under the given assumptions on  and , an optimal structural control exists.


Let Bo ∈ Mad be the optimal policy and B ∈ Mad any other element. Then by convexity of the set Mad, Bε ≡ Bo + ε(B − Bo) ∈ Mad for all ε ∈ [0, 1]. Thus, by virtue of optimality of Bo it is clear that the directional derivative of J at Bo in the direction (B − Bo) must satisfy dJ(Bo; B − Bo) ≥ 0 ∀ B ∈ Mad. Then, it follows from Gâteaux differentiability of ℓ and Φ that

dJ(Bo; B − Bo) ≡ ∫_0^T ⟨ℓx(t, x^o(t)), y(t)⟩ dt + ⟨Φx(x^o(T)), y(T)⟩ ≥ 0, ∀ B ∈ Mad,

(5.94) where y ∈ B∞ (I, R n ) is the unique solution of the variational equation dy = A0 (t)ydt + DF (t, x o (t))ydt + Bo (dt)y + (B(dt) − Bo (dt))x o (t), y(0) = 0, t ∈ I.

(5.95)

Define the functional

L(y) ≡ ∫_0^T ⟨ℓx(t, x^o(t)), y(t)⟩ dt + ⟨Φx(x^o(T)), y(T)⟩   (5.96)

and introduce the measure

γB(σ) ≡ ∫_σ [B(dt) − Bo(dt)] x^o(t), σ ∈ Σ_I.

Since ℓx(·, x^o(·)) ∈ L1(I, R^n), y ∈ B∞(I, R^n), and Φx(x^o(T)) ∈ R^n, it is clear that y −→ L(y) is a continuous linear functional on B∞(I, R^n). Since x^o ∈ B∞(I, R^n) and Bo, B ∈ Mbfa(Σ_I, L(R^n)), it is easy to verify that γB ∈ Mbfa(Σ_I, R^n). Thus, the composition map

γB −→ y −→ L(y) ≡ L̃(γB)

is continuous and linear. Hence, there exists a ψ ∈ (Mbfa(Σ_I, R^n))^∗ such that

L̃(γB) = L(y) = ∫_I ⟨ψ(t), γB(dt)⟩ = ∫_I ⟨ψ(t), [B(dt) − Bo(dt)] x^o(t)⟩.

(5.97)


It now follows from (5.94), (5.96), and (5.97) that

dJ(Bo; B − Bo) ≡ ∫_I ⟨ψ(t), [B(dt) − Bo(dt)] x^o(t)⟩ ≥ 0 ∀ B ∈ Mad.

(5.98)

This proves the necessary condition (5.91). Using the variational equation (5.95), substituting it in the necessary condition (5.91), and integrating by parts, one can obtain the necessary condition (5.92). Equation (5.93) is the given system subject to the optimal control Bo, so there is nothing to prove. This completes the outline of our proof. □

Remark 5.8.2 It will be very useful and mathematically interesting to extend the above result to nonlinear structural optimization. We leave this as an open problem for interested readers. The technique presented in reference [19] may be useful in this project.
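A structurally perturbed system of the form (5.89) with a purely discrete structural measure B is easy to simulate: each atom of B multiplies the state by a perturbation matrix. The scalar sketch below uses invented coefficients and takes F ≡ 0, so the effect of the atoms is visible in closed form.

```python
# Simulation sketch of the structurally perturbed system (5.89) in dimension 1
# with a discrete structural measure B: dx = a0*x dt + B(dt) x, where B has
# atoms b_j at times tau_j, so at tau_j the state is multiplied by (1 + b_j).
# All coefficients are hypothetical.

import math

def run(atoms, a0=-0.2, x0=1.0, T=1.0, N=2000):
    """atoms: dict tau_j -> b_j; returns x(T)."""
    h = T / N
    x = x0
    for i in range(N):
        t = i * h
        for tau, b in atoms.items():
            if t <= tau < t + h:
                x += b * x               # structural jump: Delta x = b_j x
        x += h * a0 * x                  # drift (F = 0 in this sketch)
    return x

# two multiplicative perturbations of strength 0.1
xT = run({0.3: 0.1, 0.6: 0.1})
# closed form: x(T) = x0 * (1.1)^2 * e^{a0*T}, up to the Euler error
assert abs(xT - 1.21 * math.exp(-0.2)) < 1e-3
```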

5.9 Discrete Measures with Variable Supports as Controls

In this section we present the necessary conditions of optimality for the control problem of the system (4.95)–(4.97) with the cost functional (4.102) and admissible controls Dad. The significant difference from other necessary conditions of optimality with discrete measures is that here the supports of the measures are themselves control variables and must be chosen so as to minimize the cost functional (4.102). This is presented in the following theorem.

Theorem 5.9.1 Consider the system (4.95)–(4.97) with the cost functional (4.102), and suppose the assumptions of Theorem 4.10.2 hold and that the set Dad is convex. Further suppose F is continuous in the first variable and continuously Gâteaux differentiable in the state variable, and G is continuously differentiable in the first argument and continuously Gâteaux differentiable in the second and third arguments. The functions {ℓ, Φ, Φ0} are Gâteaux differentiable on R^n with ℓx(·, z(·)) being integrable along any path z ∈ B∞(I, R^n). Let ν^o ≡ (g^o, a^o, v^o) ∈ Dad and ν ≡ (g, a, v) ∈ Dad with

λ^o(t) = ∫_0^t g^o(s) ds, t ∈ I, and λ(t) ≡ ∫_0^t g(s) ds, t ∈ I.   (5.99)

Let x o denote the solution of the system of Eqs. (4.95)–(4.97) corresponding to the control ν o . Then, in order for the pair {ν o , x o } to be optimal, it is necessary that there exists a ψ ∈ B∞ (I, R n ) such that the triple {ν o , x o , ψ} satisfies the following


inequality (5.100) and the system of differential and algebraic equations (5.101)–(5.106) as described below:

(1)

Σ_{i=0}^κ { ⟨ψ(λ^o(t_i)), H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o)⟩_{R^n} (λ(t_i) − λ^o(t_i)) + ⟨ψ(λ^o(t_i)), G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o)⟩_{R^n} (a_i − a_i^o) + ⟨a_i^o (Gv(λ^o(t_i), x^o(λ^o(t_i)), v_i^o))^∗ ψ(λ^o(t_i)), (v_i − v_i^o)⟩_{R^d} } ≥ 0, ∀ (λ, a, v) ∈ Dad,   (5.100)

where the function H is given by the following expression:

H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o) ≡ (a_i^o DxG(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) − I) F(λ^o(t_i), x^o(λ^o(t_i))) + F(λ^o(t_i), x^o(λ^o(t_i)−)) + a_i^o DtG(λ^o(t_i), x^o(λ^o(t_i)), v_i^o).

(2)

Δ_{−b}ψ(λ^o(tκ)) − (aκ^o DxG(λ^o(tκ), x^o(λ^o(tκ)), vκ^o))^∗ ψ(λ^o(tκ)) = Φx(x^o(λ^o(tκ))) = Φx(x^o(T)), ψ(λ^o(tκ)+) = ψ(T+) = 0;   (5.101)

Δ_{−b}ψ(λ^o(t_i)) − (a_i^o DxG(λ^o(t_i), x^o(λ^o(t_i)), v_i^o))^∗ ψ(λ^o(t_i)) = 0, i = 1, · · · , κ − 1;   (5.102)

Δ_{−b}ψ(λ^o(t0)) − (a0^o DxG(λ^o(t0), x^o(λ^o(t0)), v0^o))^∗ ψ(λ^o(t0)) = Φ0x(x^o(λ^o(t0))) = Φ0x(x^o(0));   (5.103)

−ψ̇ − (DxF(t, x^o(t)))^∗ ψ = ℓx(t, x^o(t)), t ∈ I \ I_{λ^o},   (5.104)

where Δ_{−b} denotes the backward jump operator, Δ_{−b}φ(t) ≡ −(φ(t+) − φ(t)).

(3) x^o ∈ B∞(I, R^n) is the solution of the system of equations corresponding to the optimal control ν^o = (g^o, a^o, v^o) given by

x^o(λ^o(t_i)) = x^o(λ^o(t_i)−) + a_i^o G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o), i ∈ {0, 1, · · · , κ},   (5.105)

ẋ^o = F(t, x^o), x^o(0−) = x0, t ∈ I \ I_{λ^o}, I ≡ [0, T],   (5.106)


with the optimal set of jump instants denoted by

I_{λ^o} ≡ {0 = λ^o(0) = λ^o(t0) ≤ λ^o(t1) ≤ λ^o(t2) ≤ · · · ≤ λ^o(tκ) = T}.

Proof Let ν^o ≡ (g^o, a^o, v^o) ∈ Dad denote the optimal control and ν ≡ (g, a, v) ∈ Dad any other control. For any ε ∈ [0, 1], we introduce the control ν^ε ≡ (g^ε, a^ε, v^ε), where g^ε = g^o + ε(g − g^o), a^ε = a^o + ε(a − a^o), and v^ε = v^o + ε(v − v^o). By definition,

λ^ε(t) = ∫_0^t g^ε(s) ds, λ^o(t) = ∫_0^t g^o(s) ds, t ∈ I.

Since Dad is convex, it is evident that ν^ε ∈ Dad. Let x^ε ∈ B∞(I, R^n) denote the solution of the system (4.95)–(4.97) corresponding to the control ν^ε, and x^o ∈ B∞(I, R^n) the solution of the same system corresponding to the control ν^o. Since ν^o is optimal, it is clear that

J(ν^ε) ≥ J(ν^o), ∀ ε ∈ [0, 1] and ∀ ν ∈ Dad.

(5.107)

Under the given regularity assumptions on {ℓ, Φ, Φ0}, computing the difference of the cost functionals J(ν^ε) − J(ν^o) corresponding to the controls ν^ε and ν^o, respectively, dividing it by ε, and letting ε → 0, it is easy to show that the Gâteaux differential of J at ν^o in the direction (ν − ν^o) satisfies the following inequality:

dJ(ν^o; ν − ν^o) = ∫_0^T ⟨ℓx(t, x^o(t)), y^o(t)⟩ dt + ⟨Φx(x^o(T)), y^o(T)⟩ + ⟨Φ0x(x^o(0)), y^o(0)⟩ ≥ 0, ∀ ν ∈ Dad,   (5.108)

where y^o(t) ≡ lim_{ε↓0}(1/ε)(x^ε(t) − x^o(t)), t ∈ I. Step by step we show that y^o is the unique solution to a system of variational equations consisting of both continuous and jump evolutions. Starting with the initial time t0 = 0, we note that λ^ε(t0) = t0 = 0 and λ^o(t0) = t0 = 0. For the jump at t0 = 0, we have the following fixed point problems:

x^ε(0) = x0 + a0^ε G(0, x^ε(0), v0^ε),   (5.109)

x^o(0) = x0 + a0^o G(0, x^o(0), v0^o).   (5.110)

188

5 Optimal Control: Necessary Conditions of Optimality

same symbols as they appear in their arguments. Computing the difference of the above two equations, after some elementary algebraic operations, and noting that  a0o Dx G(0, x o (0), v0o ) < 1, one can easily verify that y o (0) is given by

y o (0) = (I − a0o Dx G(0, x o (0), v0o ))−1 (a0 − a0o )G(0, x o (0), v0o )

 +a0o Dv G(0, x o (0), v0o )(v0 − v0o ) . (5.111)
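In a simple scalar case, the fixed point problems (5.109)–(5.110) and the derivative formula (5.111) can be checked numerically. The sketch below uses an illustrative contraction G and its derivatives (none of which come from the text) and compares the scalar specialisation of (5.111), in the direction (a − a_0^o, v − v_0^o) = (1, 0), with a finite-difference quotient:

```python
import math

# Illustrative scalar stand-ins for G, D_x G, D_v G (not from the text);
# G is a contraction in x since |a * dG/dx| <= 0.5 < 1 for |a| <= 1.
def G(x, v):       return 0.5 * math.sin(x) + v
def dG_dx(x, v):   return 0.5 * math.cos(x)
def dG_dv(x, v):   return 1.0

def solve_jump(x0, a, v, tol=1e-13):
    """Solve the fixed point problem x = x0 + a*G(x, v) by successive
    approximation, as guaranteed by the Banach fixed point theorem."""
    x = x0
    for _ in range(500):
        x_new = x0 + a * G(x, v)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

x0, a0, v0 = 1.0, 0.8, 0.3
xo = solve_jump(x0, a0, v0)

# Scalar version of (5.111) in the direction (a - a0, v - v0) = (1, 0):
y_formula = G(xo, v0) / (1.0 - a0 * dG_dx(xo, v0))

# Finite-difference approximation of the same directional derivative.
da = 1e-7
y_fd = (solve_jump(x0, a0 + da, v0) - xo) / da
assert abs(y_formula - y_fd) < 1e-4
```

The agreement reflects the implicit-function argument used in the proof: because ‖a_0^o D_x G‖ < 1, the resolvent (I − a_0^o D_x G)^{−1} is well defined.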

Using y^o(0) as the initial state, we now consider the continuous part of the variational equation on the open interval indicated below:

ẏ(t) = D_x F(t, x^o(t)) y(t),  y(0) = y^o(0),  t ∈ (λ^o(t_0), λ^o(t_1)) = (0, λ^o(t_1)).    (5.112)

Clearly, for given x^o, this is a linear differential equation in y. Since F is Lipschitz and continuously Gâteaux differentiable in the state variable, the matrix-valued function D_x F(·, x^o(·)) is well defined. Thus, this equation has a unique continuous solution y^o having the limit from the left

lim_{t↑λ^o(t_1)} y^o(t) = y^o(λ^o(t_1)−).

Thus, for a general index i ∈ {0, 1, 2, · · · , κ}, given y^o(λ^o(t_i)−), we have to consider the following system of equations:

x^ε(λ^ε(t_i)) = x^ε(λ^ε(t_i)−) + a_i^ε G(λ^ε(t_i), x^ε(λ^ε(t_i)), v_i^ε),    (5.113)
ẋ(t) = F(t, x(t)),  t ∈ (λ^ε(t_i), λ^ε(t_{i+1})),  x(λ^ε(t_i)) = x^ε(λ^ε(t_i)),    (5.114)
x^o(λ^o(t_i)) = x^o(λ^o(t_i)−) + a_i^o G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o),    (5.115)
ẋ(t) = F(t, x(t)),  t ∈ (λ^o(t_i), λ^o(t_{i+1})),  x(λ^o(t_i)) = x^o(λ^o(t_i)).    (5.116)

At this point we need the jump operator Δ defined by Δy(t) ≡ y(t) − y(t−). Subtracting Eq. (5.115) from Eq. (5.113) term by term and carrying out some elementary but laborious algebraic operations, we obtain the following system of discrete variational equations:

Δy^o(λ^o(t_i)) = a_i^o D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) y^o(λ^o(t_i)) + η_i^o,  i ∈ {0, · · · , κ},    (5.117)

at the nodes {λ^o(t_i)}, where

η_i^o ≡ H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o)(λ(t_i) − λ^o(t_i)) + (a_i − a_i^o)G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) + a_i^o D_v G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o)(v_i − v_i^o),    (5.118)

with the function H : R × R^n × R^n × R × R^d −→ R^n defined by

H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o) ≡ (a_i^o D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) − I) F(λ^o(t_i), x^o(λ^o(t_i))) + F(λ^o(t_i), x^o(λ^o(t_i)−)) + a_i^o D_t G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o).    (5.119)

Under the given assumptions, G is once continuously Gâteaux differentiable in the state variable and a contraction map in the same variable. From this property and the property of the set A, we conclude that ‖a_i^o D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o)‖ < 1 for all i ∈ {0, 1, 2, · · · , κ}. Hence, for each i ∈ {0, 1, 2, · · · , κ}, given y^o(λ^o(t_i)−), Eq. (5.117) has a unique solution given by

y^o(λ^o(t_i)) = (I − a_i^o D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o))^{−1} (y^o(λ^o(t_i)−) + η_i^o).    (5.120)

Note that by taking i = 0, this equation reduces to Eq. (5.111), as expected. Equation (5.117) governs the evolution of jumps at the nodes {λ^o(t_i), i = 0, 1, 2, · · · , κ}. Subtracting Eq. (5.116) from Eq. (5.114) term by term, dividing by ε, and letting ε → 0, we obtain the variational equation for the continuous part of the evolution on the open intervals, with the initial conditions as indicated below:

ẏ = D_x F(t, x^o(t)) y,  t ∈ (λ^o(t_i), λ^o(t_{i+1})),  y(λ^o(t_i)) = y^o(λ^o(t_i)),  i = 0, · · · , κ − 1.    (5.121)

Note that Eqs. (5.117) and (5.121) are coupled through the boundary data. It follows from the properties of F that Eq. (5.121) has a unique continuous solution, which we continue to denote by y^o, having the left limit

lim_{t↑λ^o(t_{i+1})} y^o(t) = y^o(λ^o(t_{i+1})−).


Equations (5.117) and (5.121) constitute the system of variational equations. It follows from our regularity assumptions on F and G that the function H given by the expression (5.119) is well defined, and hence η_i^o, i ∈ {0, 1, 2, · · · , κ}, is also well defined with values in R^n. It is clear from the expression (5.118) that

{(g − g^o), (a − a^o), (v − v^o)} ⇒ {(λ − λ^o), (a − a^o), (v − v^o)} −→ η^o

is a continuous linear map. At this point we may consider η^o as a discrete measure defined on I, taking values in R^n, given by a sum of Dirac measures with supports {λ^o(t_i), i = 0, 1, 2, · · · , κ} as follows:

η^o(dt) ≡ Σ_{i=0}^{κ} η_i^o δ_{λ^o(t_i)}(dt).

It follows from the regularity properties of {F, G} as stated in Theorem 4.9.1 and Theorem 5.9.1 that Σ_{i=0}^{κ} ‖η_i^o‖_{R^n} < ∞. Hence, η^o ∈ M_{bfa}(I, R^n). In fact, this measure is also countably additive having bounded variation, i.e., η^o ∈ M_{ca}(I, R^n) ⊂ M_{bfa}(I, R^n).

Note that the variational equation (5.117) governs the evolution of jumps at the nodes {λ^o(t_i), i = 0, 1, · · · , κ}, which in turn provides the initial conditions for the continuous evolution determined by the variational equation (5.121) on the open intervals {(λ^o(t_i), λ^o(t_{i+1})), i = 0, 1, 2, · · · , κ − 1}. In turn, this equation provides the data y^o(λ^o(t_{i+1})−) that drives the jump evolution governed by Eq. (5.117) at the node λ^o(t_{i+1}). This cyclical dependence verifies explicitly that the system of Eqs. (5.117) and (5.121) is coupled. Since the solutions of Eq. (5.121) depend continuously on the initial data, and the solutions of Eq. (5.117) depend continuously on η^o, it follows from Eqs. (5.117) and (5.121) that η^o −→ y^o is a continuous linear map from the Banach space M_{bfa}(I, R^n) to the Banach space B_∞(I, R^n), and hence bounded.

Let L denote the functional appearing in the inequality (5.108), given by a sum of three components, L(y^o) ≡ L_1(y^o) + L_2(y^o) + L_3(y^o), where

L_1(y^o) ≡ ⟨Φ_x(x^o(T)), y^o(T)⟩,
L_2(y^o) ≡ ∫_0^T ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt,
L_3(y^o) ≡ ⟨Φ_{0x}(x^o(0)), y^o(0)⟩.


It follows from the regularity assumptions on {ℓ, Φ, Φ_0} that y^o −→ L(y^o) is a continuous linear functional on B_∞(I, R^n). Thus, the composition map

η^o −→ y^o −→ L(y^o) ≡ L̃(η^o)

is a continuous linear functional on M_{bfa}(I, R^n), which is the topological dual of the Banach space B_∞(I, R^n) [56, Theorem 7, p. 77], that is, (B_∞(I, R^n))^* ≅ M_{bfa}(I, R^n). Hence, there exists a ψ ∈ (M_{bfa}(I, R^n))^* = (B_∞(I, R^n))^{**} such that

L̃(η^o) = ∫_I ⟨ψ(t), η^o(dt)⟩ = Σ_{i=0}^{κ} ⟨ψ(λ^o(t_i)), η_i^o⟩
        = Σ_{i=0}^{κ} ⟨ψ(λ^o(t_i)), H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o)(λ(t_i) − λ^o(t_i))
          + (a_i − a_i^o)G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) + a_i^o D_v G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o)(v_i − v_i^o)⟩.    (5.122)

Thus, the necessary condition (5.100) follows from the identity L(y^o) = L̃(η^o), the expression (5.122), and the inequality (5.108). The duality argument as stated above guarantees the existence of a function ψ representing the functional L̃(η^o). We show that ψ is given by the unique solution of the system of (adjoint) evolution equations (5.101)–(5.104). By virtue of the canonical embedding of a Banach space into its bidual, B_∞(I, R^n) ⊂ (B_∞(I, R^n))^{**} = (M_{bfa}(I, R^n))^*. We show that, actually, ψ ∈ B_∞(I, R^n). Using the variational equation (5.117) in the first line of the expression (5.122) and carrying out some elementary algebraic operations, we obtain

L̃(η^o) = Σ_{i=0}^{κ} ⟨ψ(λ^o(t_i)), η_i^o⟩ ≡ Σ_{i=0}^{κ} C_i
        = Σ_{i=0}^{κ} ⟨ψ(λ^o(t_i)), Δy^o(λ^o(t_i)) − a_i^o D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o) y^o(λ^o(t_i))⟩
        = Σ_{i=0}^{κ} ⟨−Δ_b ψ(λ^o(t_i)) − a_i^o (D_x G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o))^* ψ(λ^o(t_i)), y^o(λ^o(t_i))⟩,    (5.123)

where −Δ_b, defined by −Δ_b ψ(t) = −(ψ(t+) − ψ(t)), is the adjoint of the (forward) jump operator Δ introduced above Eq. (5.117), (D_x G)^* is the adjoint of the operator (D_x G), and C_i denotes the i-th component in the above sum.


Considering the last term C_κ, it follows from the terminal condition as stated in Eq. (5.101) that

ψ(λ^o(t_κ)) − a_κ^o (D_x G(λ^o(t_κ), x^o(λ^o(t_κ)), v_κ^o))^* ψ(λ^o(t_κ)) = Φ_x(x^o(λ^o(t_κ))) ≡ Φ_x(x^o(T)).    (5.124)

Thus, by virtue of the Lipschitz and contraction properties of G and the fact that a^o ∈ A, this equation has a unique solution given (and denoted) by

ψ^o(λ^o(t_κ)) = (I − a_κ^o (D_x G(λ^o(t_κ), x^o(λ^o(t_κ)), v_κ^o))^*)^{−1} Φ_x(x^o(T)).    (5.125)

In view of the last term of the expression (5.123), by scalar multiplying the expression appearing on the right-hand side of Eq. (5.124) by y^o(T) (the terminal state of the system of variational equations (5.117) and (5.121)), we obtain the Gâteaux differential of the terminal cost, giving

L_1(y^o) = ⟨Φ_x(x^o(T)), y^o(T)⟩.

The solution of Eq. (5.124), given by (5.125), provides the terminal data ψ^o(λ^o(t_κ)) for the continuous part of the adjoint evolution determined by Eq. (5.104) on the open interval I_κ ≡ (λ^o(t_{κ−1}), λ^o(t_κ)), giving

−ψ̇ − (D_x F(t, x^o(t)))^* ψ = ℓ_x(t, x^o(t)),  t ∈ I_κ,  ψ(λ^o(t_κ)) = ψ^o(λ^o(t_κ)) = ψ^o(T).    (5.126)

Since x^o ∈ B_∞(I, R^n), it follows from the regularity assumption on F that the matrix-valued function D_x F(t, x^o(t)) is well defined. This, combined with the fact that ℓ_x(·, x^o(·)) ∈ L_1(I, R^n), implies the existence of a unique solution ψ^o ∈ B_∞(I_κ, R^n) of Eq. (5.126) that is continuous on the open interval I_κ = (λ^o(t_{κ−1}), λ^o(t_κ)). Clearly, the limit of this solution from the right, given by

lim_{t↓λ^o(t_{κ−1})} ψ^o(t) ≡ ψ^o(λ^o(t_{κ−1})+),

is well defined. By scalar multiplying the expressions on either side of the adjoint evolution (5.126) by y^o and integrating by parts over the open interval I_κ, we obtain the following identity:

⟨ψ(λ^o(t_{κ−1})), y^o(λ^o(t_{κ−1}))⟩ − ⟨ψ(λ^o(t_κ)), y^o(λ^o(t_κ))⟩ = ∫_{I_κ} ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt.    (5.127)

Clearly, this gives the Gâteaux differential of the running cost for the interval I_κ. Proceeding backward in time and using the data ψ^o(λ^o(t_{κ−1})+) (as seen above) in


Eq. (5.102), we obtain the adjoint equation for the jump evolution at the node λ^o(t_{κ−1}),

−Δ_b ψ(λ^o(t_{κ−1})) − a_{κ−1}^o (D_x G(λ^o(t_{κ−1}), x^o(λ^o(t_{κ−1})), v_{κ−1}^o))^* ψ(λ^o(t_{κ−1})) = 0,    (5.128)

giving the solution

ψ^o(λ^o(t_{κ−1})) = (I − a_{κ−1}^o (D_x G(λ^o(t_{κ−1}), x^o(λ^o(t_{κ−1})), v_{κ−1}^o))^*)^{−1} ψ^o(λ^o(t_{κ−1})+).

This, in turn, provides the terminal data for the continuous part of the adjoint evolution on the next interval I_{κ−1} ≡ (λ^o(t_{κ−2}), λ^o(t_{κ−1})), giving

−ψ̇ − (D_x F(t, x^o(t)))^* ψ = ℓ_x(t, x^o(t)),  t ∈ I_{κ−1},  ψ(λ^o(t_{κ−1})) = ψ^o(λ^o(t_{κ−1})).    (5.129)

Again, by a similar argument, one can easily verify that the Gâteaux differential of the running cost over the interval I_{κ−1} is given by

⟨ψ(λ^o(t_{κ−2})), y^o(λ^o(t_{κ−2}))⟩ − ⟨ψ(λ^o(t_{κ−1})), y^o(λ^o(t_{κ−1}))⟩ = ∫_{I_{κ−1}} ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt.    (5.130)

It follows from the properties of F and ℓ, as noted before, that Eq. (5.129) has a unique solution ψ^o on the open interval I_{κ−1} having the right-hand limit ψ^o(λ^o(t_{κ−2})+) that drives the jump evolution at the node λ^o(t_{κ−2}).

In summary, using the adjoint equations (5.102) and (5.104) alternately and continuing this process, we arrive at the node λ^o(t_1) with the data ψ^o(λ^o(t_1)+), where the jump evolution is governed by the adjoint equation

−Δ_b ψ(λ^o(t_1)) − a_1^o (D_x G(λ^o(t_1), x^o(λ^o(t_1)), v_1^o))^* ψ(λ^o(t_1)) = 0,  ψ(λ^o(t_1)+) = ψ^o(λ^o(t_1)+).    (5.131)

Using the expression for the operator Δ_b, one can easily verify that this equation has a unique solution given by

ψ^o(λ^o(t_1)) = (I − a_1^o (D_x G(λ^o(t_1), x^o(λ^o(t_1)), v_1^o))^*)^{−1} ψ^o(λ^o(t_1)+).

This provides the boundary data for the continuous part of the adjoint evolution on the remaining open interval I_1 ≡ (λ^o(t_0), λ^o(t_1)) = (0, λ^o(t_1)), given by

−ψ̇ − (D_x F(t, x^o(t)))^* ψ = ℓ_x(t, x^o(t)),  t ∈ (0, λ^o(t_1)),  ψ(λ^o(t_1)) = ψ^o(λ^o(t_1)).    (5.132)
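This alternating backward sweep — a resolvent solve at each node followed by backward integration on the next open interval — can be mimicked in a scalar toy problem. All data below (the linearisations, node locations, terminal value, and the jump coefficient) are illustrative assumptions, not taken from the text:

```python
# Scalar toy data (illustrative only): constant linearisations D_x F = Fx
# and l_x along the optimal pair, interior nodes standing in for
# lambda^o(t_i), and |a_i^o * D_x G| = 0.3 < 1 at each node.
T = 1.0
nodes = [0.25, 0.5, 0.75]
Fx, lx = -1.0, 1.0
a_Gx = 0.3                     # a_i^o * D_x G(...), the same at every node
psi_T = 2.0                    # terminal data Phi_x(x^o(T)), cf. (5.101)

def backward_ode(psi_end, t_hi, t_lo, n=1000):
    """Euler sweep for -psi' - Fx*psi = lx from t_hi down to t_lo."""
    dt = (t_hi - t_lo) / n
    psi = psi_end
    for _ in range(n):
        psi = psi + dt * (Fx * psi + lx)
    return psi

breaks = [0.0] + nodes + [T]
psi = psi_T
for i in range(len(breaks) - 1, 0, -1):
    # continuous part of the adjoint evolution, cf. (5.104)
    psi = backward_ode(psi, breaks[i], breaks[i - 1])
    # jump resolvent at interior nodes, cf. (5.102) and (5.128)
    if breaks[i - 1] in nodes:
        psi = psi / (1.0 - a_Gx)
# psi now approximates the adjoint state just after time 0; the node at
# t = 0 would additionally involve Phi_0x, as in (5.134).
```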


Following the same procedure, we obtain the Gâteaux differential of the running cost for the last interval I_1. This is given by

⟨ψ(λ^o(t_0)), y^o(λ^o(t_0))⟩ − ⟨ψ(λ^o(t_1)), y^o(λ^o(t_1))⟩
  = ⟨ψ(λ^o(0)), y^o(λ^o(0))⟩ − ⟨ψ(λ^o(t_1)), y^o(λ^o(t_1))⟩
  = ⟨ψ(0), y^o(0)⟩ − ⟨ψ(λ^o(t_1)), y^o(λ^o(t_1))⟩ = ∫_{I_1} ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt.    (5.133)

Thus, the Gâteaux differential of the running cost covering the whole interval I ≡ [0, T] is given by the sum

L_2(y^o) = ⟨ψ(0), y^o(0)⟩ − ⟨ψ(λ^o(t_κ)), y^o(λ^o(t_κ))⟩ = ∫_0^T ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt.

By a similar argument as seen above, Eq. (5.132) has a unique solution ψ^o on the open interval I_1 defined above, having the limit from the right ψ^o(0+). Finally, considering the remaining node λ^o(t_0) = 0, we have the jump evolution governed by the following nonhomogeneous equation:

−Δ_b ψ(0) − a_0^o (D_x G(0, x^o(0), v_0^o))^* ψ(0) = Φ_{0x}(x^o(0)),  ψ(0+) = ψ^o(0+).    (5.134)

Based on similar arguments as seen before, one can verify that this equation has a unique solution given by

ψ^o(0) = (I − a_0^o (D_x G(0, x^o(0), v_0^o))^*)^{−1} (ψ^o(0+) + Φ_{0x}(x^o(0))).

In view of the first term of the expression (5.123), scalar multiplying the right-hand expression of Eq. (5.134) by y^o(0), we obtain the Gâteaux differential of the cost functional at this node, given by

L_3 = ⟨D_x Φ_0(x^o(0)), y^o(0)⟩ = ⟨Φ_{0x}(x^o(0)), y^o(0)⟩.

Summing all the Gâteaux differentials obtained above, we arrive at the functional

S ≡ ⟨Φ_x(x^o(T)), y^o(T)⟩ + ∫_I ⟨ℓ_x(t, x^o(t)), y^o(t)⟩ dt + ⟨Φ_{0x}(x^o(0)), y^o(0)⟩,

which coincides with the functional L(y^o) appearing in the inequality (5.108). Thus, we have proved that ψ, whose existence was guaranteed by duality, is given by the solution of the system of adjoint equations (5.101)–(5.104) and that it is an element


of the Banach space B_∞(I, R^n). Hence, for the control–state pair {ν^o, x^o} to be optimal, it is necessary that Eqs. (5.100)–(5.104) hold. Equations (5.105)–(5.106) are the given dynamic systems corresponding to the optimal control ν^o, so there is nothing to prove. This completes the proof. □

Remark 5.9.2 In case the locations of the impulsive forces are kept fixed (not considered as control variables), the functions λ(t) = λ^o(t) = t, ∀ t ∈ I. Under this assumption, the necessary conditions given by Theorem 5.9.1 simplify drastically, with λ^o(t_i) = λ(t_i) = t_i for all i ∈ {0, 1, 2, · · · , κ}. Hence, the inequality (5.100) reduces to the following inequality:

Σ_{i=0}^{κ} [⟨ψ(t_i), (a_i − a_i^o)G(t_i, x^o(t_i), v_i^o)⟩ + ⟨ψ(t_i), a_i^o G_v(t_i, x^o(t_i), v_i^o)(v_i − v_i^o)⟩] ≥ 0,  ∀ (a, v) ∈ A × U.

Remark 5.9.3 In the preceding subsections we have considered a continuous running cost. One may also like to consider a discrete running cost given by the following expression:

J(ν) = Σ_{i=0}^{κ} ℓ(λ(t_i), x(λ(t_i)), a_i, v_i),    (5.135)

where ν = (λ, a, v) ∈ D_ad and ℓ : I × R^n × [−1, +1] × U −→ R is continuous and bounded on bounded sets, with x ∈ B_∞(I, R^n) being the solution of the system of Eqs. (4.95)–(4.97). Note that in this case the expression (5.135) also contains the terminal cost. The objective is to find a control ν^o ∈ D_ad that minimizes the functional (5.135). Existence of an optimal policy follows from Theorem 4.10.2 with minor changes in the statements due to the change of the cost functional from (4.102) to (5.135). The detailed necessary conditions of optimality can be found in [29]. Fundamentally, the proof is similar to that of Theorem 5.9.1. The reader may like to use this as an exercise.

5.10 Bibliographical Notes

There are three major breakthroughs in the field of control theory. The first is the introduction of feedback, the second is the invention of dynamic programming due to Bellman, and the third is the maximum (or minimum) principle due to Pontryagin and his co-workers Boltyanskii, Gamkrelidze, and Mishchenko. This is popularly known as Pontryagin's maximum principle despite the fact that there are three other contributors. This result, with the calculus of variations as its precursor, brought a revolution in the field of optimal control. There has been enormous


progress in this field since these foundations were laid, and many outstanding contributions have been made, leading to the present level of depth both in theory and applications. The subject can be credited for its diversity of applications in the physical sciences, engineering, economics, the social sciences, and medicine. There are many outstanding contributors whose names may have been missed in the list presented here: Bellman [37], Pontryagin and his co-workers [90], Cesari [50], Neustadt [88], Hermes and LaSalle [70], Berkovitz [40], Boltyanskii [44], Rishel [91], Warga [101, 102], Gamkrelidze [65], Oguztoreli [89], Clark [51], Fleming [62], Clarke and Vinter [52, 53], Bressan, Rampazzo and Dal Maso [46, 47, 54], Silva and Vinter [95], Ahmed [2, 14], Teo [98]. Most of the materials presented in this chapter on measure-valued optimal controls are new. For infinite dimensional systems, see Fattorini [61], Lions [78], Balakrishnan [32], Ahmed and Teo [27], and many references therein.

Chapter 6

Stochastic Systems Controlled by Vector Measures

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_6

6.1 Introduction

In this chapter we consider stochastic differential equations. In the case of deterministic systems, given the initial state, the entire future is determined by the evolution equation, and hence it is exactly predictable. As far as the natural world is concerned, exact prediction requires exact knowledge of the workings of nature, which currently seems to be beyond the reach of the human mind. It is well known to meteorologists that long-term, and at times even short-term, prediction of weather is difficult and at best unreliable. However, we are comfortable with results based on statistical analysis and predictions, which provide a significant level of confidence and seem to have served mankind well. In fact, statistical analysis has served science and engineering so well that without this tool it would have been impossible to reach the stage we are at today. Statistical mechanics, quantum mechanics, information theory, stochastic control, and linear and nonlinear filtering are some of the outstanding examples of success brought about by this field of knowledge.

Stochastic models are built upon deterministic ones by introducing additional terms which take into account the uncertainty that may exist in our mathematical models. The basic processes used for this purpose are the so-called Wiener process (or Brownian motion process) and the Lévy process (also known as a counting or jump process). Generally, Wiener processes and Poisson jump processes are used as basic building blocks for the construction of large classes of stochastic processes covering Markov and non-Markovian processes. This is done by adding stochastic terms to the deterministic differential equations, giving stochastic differential equations (SDEs). Events that take place continuously in time are described by stochastic differential equations driven by Brownian motion. Events that evolve both continuously and by intermittent jumps are described by stochastic differential equations driven by both Brownian motion and the Poisson jump process. In the following sections we will consider optimal control problems involving such systems, either subject to impulsive forces naturally or controlled by

such forces. In general, such forces are mathematically a subclass of the forces that can be described by vector measures. For general impulsive systems driven by vector measures, applicable to systems governed by partial differential equations (PDEs) or, more generally, abstract differential equations on Banach spaces, see [5, 8, 10].

6.2 Conditional Expectations

For the study of stochastic differential equations, we need some basic facts on the notions of expectation and, in particular, conditional expectation. The following notations and terminologies are standard in the field of stochastic processes. Let (Ω, F, P) denote a complete probability space, where Ω represents the sample space, F the sigma algebra (Borel algebra) of subsets of the set Ω representing elementary events, and P the probability measure on the sigma algebra F. Recall that the minimal completely additive class of sets F containing all closed subsets of Ω is called the (class of) Borel sets. Let F_t, t ≥ 0, be an increasing family of complete subsigma algebras of the sigma algebra F (with F_0 containing all P-null sets). The probability space equipped with this filtration is called a filtered probability space and is denoted by (Ω, F, F_{t≥0}, P). The filtration is said to be right continuous if, for each t ≥ 0, F_t = F_{t+} ≡ ∩_{s>t} F_s, and it is said to have left limits if F_{t−} = σ(∪_{s<t} F_s).

6.3 SDE Based on Brownian Motion

Furthermore, since β is monotone increasing, β̇(t) ≥ 0 a.e. t ∈ I, and β̇ is measurable and integrable, satisfying

I(β̇) ≡ ∫_0^T β̇(t) dt ≤ β(T) < ∞.

Define the set Λ by

Λ ≡ {f ∈ L_1^+(I) : f(t) > β̇(t), a.e. t ∈ I}.

Clearly, the set Λ is nonempty, and I(β̇) = inf{I(f), f ∈ Λ}. So there exists an f^o ∈ Λ such that

I(β̇) ≤ β(T) < I(f^o),

and the function β_o, given by the indefinite Lebesgue integral

t −→ β_o(t) ≡ ∫_0^t f^o(s) ds,  t ∈ I,

is continuous (in fact, absolutely continuous), satisfying β̇_o(t) > β̇(t) a.e. t ∈ I, and 0 ≤ β(t+) ≤ β_o(t), t ∈ I. Thus,

∫_0^t β(s) dβ(s) < ∫_0^t β_o(s) dβ_o(s),  t ∈ I.

Since β is not necessarily continuous, ∫_0^t β(s) dβ(s) ≠ (β(t))²/2. However, since f^o ∈ Λ, the function β_o is continuous. Hence, it follows from the above inequality that

∫_0^t β(s) dβ(s) < ∫_0^t β_o(s) dβ_o(s) ≤ (β_o(t))²/2,  t ∈ I.
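A one-jump example makes the failure of the square rule concrete: for the step function β = 1_{[1/2,1]} on [0, 1] (an illustrative choice), the left-endpoint Riemann–Stieltjes sum of ∫_0^1 β dβ is 0, while β(1)²/2 = 1/2; for a continuous integrator the rule does hold:

```python
# For the step function beta = 1 on [1/2, 1] and 0 before, the left-endpoint
# Riemann-Stieltjes sum of int_0^1 beta d(beta) is 0, not beta(1)^2/2 = 1/2,
# illustrating why the square rule requires a continuous integrator.
def beta(t):
    return 1.0 if t >= 0.5 else 0.0

def stieltjes_left(f, g, a, b, n=100000):
    s, h = 0.0, (b - a) / n
    for k in range(n):
        t = a + k * h
        s += f(t) * (g(t + h) - g(t))   # f evaluated at the left endpoint
    return s

val = stieltjes_left(beta, beta, 0.0, 1.0)
assert abs(val) < 1e-12                 # the jump contributes beta(0.5-) * 1 = 0
assert abs(beta(1.0) ** 2 / 2 - 0.5) < 1e-12

# For a continuous integrator the square rule holds (up to grid error):
ident = lambda t: t
assert abs(stieltjes_left(ident, ident, 0.0, 1.0) - 0.5) < 1e-4
```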

Returning to the inequality (6.22), computing the second iterate of the operator Γ, and using the above inequality, we obtain

ρ_t²(Γ²x, Γ²y) ≤ ∫_0^t ρ_s²(Γx, Γy) dβ(s) ≤ ∫_0^t (∫_0^s ρ_θ²(x, y) dβ(θ)) dβ(s) ≤ ρ_t²(x, y) ∫_0^t β(s) dβ(s),  t ∈ I,

Suppose, in addition, that there exists a constant K_4 > 0 such that

(H1) ∫_{D_0} ‖H(t, x, v)‖² π(dv) ≤ K_4² (1 + ‖x‖²),  x ∈ R^n,
(H2) ∫_{D_0} ‖H(t, x, v) − H(t, y, v)‖² π(dv) ≤ K_4² ‖x − y‖²,  x, y ∈ R^n.

Then, for every x_0 ∈ L_2(F_0, R^n), independent of the Brownian motion w and the random measure q, the system (6.24) has a unique solution x ∈ B_∞^a(I, H_n).
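A minimal simulation sketch of dynamics of the form (6.24) can be obtained by an Euler–Maruyama scheme with Bernoulli-thinned Poisson jumps. The scalar coefficients F(t, x) = −x, G = 0.4, H(t, x, v) = 0.1v, the jump rate, and the mark distribution below are illustrative assumptions, not taken from the text:

```python
import math
import random

# Euler-Maruyama sketch for a scalar jump-diffusion of the type (6.24):
# increments F*dt + G*dw plus Poisson-driven jumps H(x, v).
def simulate_path(x0, T=1.0, n=2000, rate=2.0, rng=None):
    rng = rng or random.Random(0)
    dt = T / n
    x = x0
    for _ in range(n):
        drift = -x                         # F(t, x) = -x (Lipschitz)
        diffusion = 0.4                    # G(t, x) = const
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += drift * dt + diffusion * dw
        if rng.random() < rate * dt:       # at most one jump per small step
            v = rng.uniform(-1.0, 1.0)     # jump mark v in D_0 = [-1, 1]
            x += 0.1 * v                   # H(t, x, v) = 0.1*v (bounded)
    return x

samples = [simulate_path(1.0, rng=random.Random(s)) for s in range(300)]
mean = sum(samples) / len(samples)
# Mean-reverting drift pulls E[x(T)] toward exp(-T)*x0; jumps have mean zero.
assert abs(mean - math.exp(-1.0)) < 0.2
```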


Proof Again, by use of the Banach fixed point theorem, one can prove the above result. We present a brief outline. Define the operator Γ by

(Γx)(t) = x_0 + ∫_0^t F(s, x(s)) ds + ∫_0^t G(s, x(s)) dw(s) + ∫_0^t ∫_{D_0} H(s, x(s), v) q(ds × dv),  t ∈ I.    (6.25)

We show that Γ : B_∞^a(I, H_n) −→ B_∞^a(I, H_n). Using the above expression, computing the norm of (Γx)(t), and using the triangle inequality, one can show that

‖(Γx)(t)‖²_{H_n} = E‖(Γx)(t)‖²_{R^n} ≤ C_1 + C_2 ∫_0^t (1 + E‖x(s)‖²_{R^n}) ds
               = C_1 + C_2 ∫_0^t (1 + ‖x(s)‖²_{H_n}) ds,  t ∈ I,    (6.26)

where C_1 = 2³ E‖x_0‖² and C_2 = 2³(K_1² T + K_2² + K_4²). Hence, for x ∈ B_∞^a(I, H_n), we have Γx ∈ B_∞^a(I, H_n). Using the definition of the operator Γ and computing the expected value of the square of the norm of the difference (Γx)(t) − (Γy)(t) for x, y ∈ B_∞^a(I, H_n), we obtain the following inequality:

‖(Γx)(t) − (Γy)(t)‖²_{H_n} ≤ C_3 ∫_0^t ‖x(s) − y(s)‖²_{H_n} ds,  t ∈ I,    (6.27)

(6.27)

where the constant C3 ≡ 4(K12 T + K22 + K42 ). As in Theorem 6.3.1, we define the a (I, H ) using the following expression: metric ρ = ρT on B∞ n ρt2 (x, y) ≡ sup{ x(s) − y(s) 2Hn , 0 ≤ s ≤ t}, t ∈ I. Using this notation and the inequality (6.27), we obtain the following expression: ρt2 ( x, y) ≤ C3

t 0

ρs2 (x, y)ds, t ∈ I.

(6.28)


By using the above inequality and a similar iterative procedure as in Theorem 6.3.3, after m iterations we find that

ρ_t(Γ^m x, Γ^m y) ≤ ρ_t(x, y) √((C_3 t)^m / m!),  t ∈ I.    (6.29)

Hence, by the definition of the norm for the Banach space B_∞^a(I, H_n), it follows from the above inequality that

‖Γ^m x − Γ^m y‖_{B_∞^a(I,H_n)} ≤ α_m ‖x − y‖_{B_∞^a(I,H_n)},    (6.30)

where α_m ≡ √((C_3 T)^m / m!). Since I is a finite interval, α_m < 1 for m ≥ m_o ∈ N large enough, so Γ^m, as well as Γ^{m_o}, is a contraction. Thus, by virtue of the Banach fixed point theorem, the operator Γ^{m_o}, and consequently the operator Γ itself, has a unique fixed point x^o ∈ B_∞^a(I, H_n). Hence, we conclude that the stochastic differential equation (6.24) has a unique solution x^o ∈ B_∞^a(I, H_n). This completes the proof. □
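Numerically, the factor α_m ≡ √((C_3 T)^m / m!) from (6.30) drops below 1 once m is large enough, even when C_3 T ≥ 1. A small sketch with illustrative constants:

```python
import math

# The contraction factor alpha_m = sqrt((C3*T)**m / m!) in (6.30) tends to
# zero as m grows, so a sufficiently high iterate of the map is a
# contraction even when C3*T >= 1. C3 and T below are illustrative values.
def smallest_contracting_iterate(C3, T):
    m = 1
    while math.sqrt((C3 * T) ** m / math.factorial(m)) >= 1.0:
        m += 1
    return m

m0 = smallest_contracting_iterate(C3=4.0, T=2.0)   # C3*T = 8
assert m0 == 20
# alpha_m is strictly decreasing once m > C3*T, so later iterates contract too:
assert math.sqrt(8.0 ** 21 / math.factorial(21)) < math.sqrt(8.0 ** 20 / math.factorial(20))
```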

6.5 Optimal Relaxed Controls

Recall that relaxed controls consist of probability measure-valued functions in the case of deterministic systems, while in the case of stochastic systems they are probability measure-valued random processes adapted to a given subsigma algebra G_{t≥0} ⊂ F_{t≥0}. Consider the stochastic system

dx = F̂(t, x, u_t) dt + Ĝ(t, x, u_t) dw,  x(0) = x_0,  t ∈ I,    (6.31)

where the drift and the diffusion parameters F̂, Ĝ are also functions of the control measure, given by

F̂(t, x, u_t) ≡ ∫_U F(t, x, ξ) u_t(dξ),  t ∈ I, x ∈ R^n,
Ĝ(t, x, u_t) ≡ ∫_U G(t, x, ξ) u_t(dξ),  t ∈ I, x ∈ R^n,

where U ⊂ R^d and {u_t, t ∈ I} is an M_1(U)-valued (probability measure-valued) random process to be discussed shortly. The objective functional is given by

J(u) = E[∫_0^T ℓ̂(t, x, u_t) dt + Φ(x(T))],    (6.32)

where ℓ̂(t, x, u_t) ≡ ∫_U ℓ(t, x, ξ) u_t(dξ). The problem is to find a control that minimizes the above cost functional. To solve such problems, we must specify the


class of admissible controls. We have already seen in the previous chapters that in standard control problems convexity of the domain of controls is very important. In the absence of this property, optimal controls may not exist (as seen in the example in Chap. 5, following Corollary 5.2.5). In many control problems, the domain of controls is only a closed bounded set in a complete metric space (not necessarily convex). For example, it may consist of a finite or infinite set of discrete points in a metric space and is therefore non-convex. As seen in Chap. 5, in order to cover both convex and non-convex control problems, we made use of relaxed controls, which are probability measure-valued functions as described above.

Let U ⊂ R^d be a closed bounded set, not necessarily convex, and let M_1(U) denote the class of regular probability measures defined on the sigma algebra of Borel subsets of the set U. Following standard notation, let (Ω, F ⊃ F_{t≥0}, P) denote a complete filtered probability space, and let {G_t ⊂ F_t, t ≥ 0} be a current of subsigma algebras. For admissible controls, we may choose the set U_ad ≡ L_∞^a(I, M_1(U)), where the superscript "a" indicates that the elements of this set are adapted to the subsigma algebra G_t of the sigma algebra F_t ⊂ F as described above. It follows from Alaoglu's theorem that, endowed with the weak-star (or vague) topology, this set is compact. It is known [23, 24] that this topology is too weak (or coarse) for application to the stochastic control problems we are concerned with. A slightly stronger topology that works well for stochastic control problems is presented below.

Consider the complete separable filtered probability space (Ω, F, F_{t≥0}, P) with F_t right continuous having left limits, and let G_{t≥0} be a non-decreasing family of complete subsigma algebras of the sigma algebra F_{t≥0}. Let P denote the G_{t≥0}-predictable subsigma field of the product sigma field B(I) × F, and let μ denote the restriction of the product measure Leb. × P to it. Note that the measure space (I × Ω, P, μ) is separable, and, since U is a compact metric space, the Banach space C(U) with the standard sup-norm topology is also separable. Thus, the Lebesgue–Bochner space L_1(μ, C(U)) is a separable Banach space with (topological) dual L_∞^a(I, M(U)), where M(U) denotes the space of regular signed Borel measures. Since L_1(μ, C(U)) is separable, it has a countable dense set {ϕ_n}. Thus, L_∞^a(I, M_1(U)), being a closed bounded convex subset of this dual, is metrizable. Using this fact, we can put several metric topologies on L_∞^a(I, M_1(U)). One suitable metric topology for our control problem is given by

d(u, v) ≡ Σ_{n=1}^{∞} (1/2^n) ∫_{I×Ω} min{1, |ϕ_n(u) − ϕ_n(v)|} dμ,    (6.33)

where ϕ_n(u)(t, ω) ≡ ∫_U ϕ_n(t, ω, ξ) u_{t,ω}(dξ). This is a complete metric space. We denote this metric space by (M, d) and take any compact subset U_ad^{rel} (a closed and totally bounded set) of this metric space as the set of admissible controls. Convergence of u^n to u^o in this metric topology implies that, for every ϕ ∈ L_1(μ, C(U)), ϕ(u^n) converges to ϕ(u^o) in μ-measure, and hence (along a subsequence if necessary) converges to the same limit μ-a.e.
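A finite sketch of the metric (6.33) may help: replacing (I × Ω, μ) by a uniform time grid, U by a three-point set, and {ϕ_n} by a short list of test functions (all illustrative choices), one can compute the truncated series and see that it separates an ordinary (Dirac-valued) control from a genuinely relaxed one:

```python
import math

# Finite sketch of the metric (6.33): a time grid in place of (I x Omega, mu),
# a finite mark set U, and a truncated sequence of test functions phi_n.
TGRID = [k / 10 for k in range(10)]          # plays the role of I x Omega
PHI = [lambda t, xi, n=n: math.cos(n * xi + t) for n in range(1, 6)]

def pair(u, phi, t):
    """<phi(t, .), u_t> for a relaxed control u: t -> dict {xi: weight}."""
    return sum(w * phi(t, xi) for xi, w in u(t).items())

def d(u, v, terms=5):
    total = 0.0
    for n in range(terms):
        avg = sum(min(1.0, abs(pair(u, PHI[n], t) - pair(v, PHI[n], t)))
                  for t in TGRID) / len(TGRID)     # discrete mu-average
        total += avg / 2 ** (n + 1)                # weight 1/2^n, n = 1, 2, ...
    return total

dirac_plus = lambda t: {1.0: 1.0}                  # ordinary control: u_t = delta_{+1}
mixed = lambda t: {-1.0: 0.5, 1.0: 0.5}            # genuinely relaxed control
assert d(dirac_plus, dirac_plus) == 0.0
assert 0.0 < d(dirac_plus, mixed) < 1.0
```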


Now we are prepared to consider the question of existence of optimal controls. The basic assumptions on F and G are similar to those of Theorem 6.3.1. Without loss of generality, we may even take {K, L} to be positive constants.

6.5.1 Existence of Optimal Controls

For the proof of existence of optimal controls, we use the following theorem, which asserts the continuous dependence of solutions on controls.

Theorem 6.5.1 Suppose the functions {F, G} are Borel measurable in all the variables, continuous in the last two arguments, and satisfy the Lipschitz and growth assumptions (as stated in Theorem 6.3.1) uniformly with respect to the third argument (in U), with {K, L} constant and x_0 ∈ L_2(F_0, R^n). Then the control-to-solution map u → x is continuous with respect to the relative metric topology d on U_ad^{rel} and the strong norm topology on Z ≡ B_∞^a(I, H_n) given by

‖x‖_Z ≡ sup{(E|x(t)|²_{R^n})^{1/2} : t ∈ I}.

Proof Let {u^n, u^o} ∈ U_ad^{rel} with u^n → u^o in the metric d, and let {x^n, x^o} denote the corresponding solutions of Eq. (6.31). Clearly, they satisfy the following integral equations:

x^n(t) = x_0 + ∫_0^t F̂(s, x^n(s), u_s^n) ds + ∫_0^t Ĝ(s, x^n(s), u_s^n) dw(s),  t ∈ I,    (6.34)
x^o(t) = x_0 + ∫_0^t F̂(s, x^o(s), u_s^o) ds + ∫_0^t Ĝ(s, x^o(s), u_s^o) dw(s),  t ∈ I.    (6.35)

Subtracting Eq. (6.34) from Eq. (6.35) term by term, computing the expected value of the square of the norm, and using the standard Cauchy–Schwarz inequality, we arrive at the following inequality:

E‖x^o(t) − x^n(t)‖² ≤ 2³ ((K²t + L²) ∫_0^t E‖x^o(s) − x^n(s)‖² ds + E‖e_{1,n}(t)‖² + E‖e_{2,n}(t)‖²),  t ∈ I,    (6.36)


where

e_{1,n}(t) = ∫_0^t F̂(s, x^o(s), u_s^o − u_s^n) ds,  t ∈ I,    (6.37)
e_{2,n}(t) = ∫_0^t Ĝ(s, x^o(s), u_s^o − u_s^n) dw(s),  t ∈ I.    (6.38)

For convenience of notation, we rewrite the expression (6.36) in the following compact form:

V_n(t) ≤ C_1 ϕ_n(t) + C_2 ∫_0^t V_n(s) ds,  t ∈ I,    (6.39)

where C_1 = 2³, C_2 = 2³(K² T + L²), and

V_n(t) ≡ E‖x^o(t) − x^n(t)‖²,  ϕ_n(t) ≡ E‖e_{1,n}(t)‖² + E‖e_{2,n}(t)‖²,  t ∈ I.

By virtue of the Grönwall inequality, it follows from the expression (6.39) that

V_n(t) ≤ C_1 ϕ_n(t) + C_1 C_2 exp{C_2 T} ∫_0^t ϕ_n(s) ds,  t ∈ I.    (6.40)

Since u^n → u^o in the metric d, it is clear that, for each t ∈ I, the integrands in the expressions (6.37) and (6.38) converge to zero in μ-measure on I × Ω, and hence, along a subsequence if necessary, they converge to zero μ-a.e. Thus, for every t ∈ I,

e_{1,n}(t) −→ 0,  e_{2,n}(t) −→ 0,  P-a.s.

By computing the expected values of the squares of the norms of the processes e_{1,n} and e_{2,n}, it is easy to verify that

E‖e_{1,n}(t)‖² ≤ 2tK² ∫_0^t (1 + E‖x^o(s)‖²) ds,  t ∈ I,    (6.41)
E‖e_{2,n}(t)‖² ≤ 2L² ∫_0^t (1 + E‖x^o(s)‖²) ds,  t ∈ I.    (6.42)

Since x^o ∈ Z, it follows that the aforementioned integrands (see Eqs. (6.37) and (6.38)) are also dominated by square integrable random processes. Thus, it follows from the dominated convergence theorem that they converge to zero in norm. Hence, ϕ_n(t) −→ 0 for all t ∈ I, and it follows from (6.41) and (6.42) that ϕ_n is bounded from above. Hence, it follows from the Lebesgue bounded convergence theorem that the expression on the right-hand side of the inequality (6.40) converges to zero for all t ∈ I. This proves that x^n −→ x^o strongly in the Banach space Z, as stated. □

Using the above result, we prove the existence of optimal controls.
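The Grönwall step can be checked on a discrete grid: saturating the hypothesis V ≤ C_1 ϕ + C_2 ∫_0^t V ds with equality, the bound V(t) ≤ C_1 ϕ(t) + C_1 C_2 e^{C_2 T} ∫_0^t ϕ(s) ds holds at every grid point. The function ϕ, the constants, and the grid below are illustrative:

```python
import math

# Discrete check of the Gronwall step: saturate V <= C1*phi + C2*int V with
# equality on a grid and verify V(t) <= C1*phi(t) + C1*C2*exp(C2*T)*int_0^t phi.
T, N = 1.0, 2000
dt = T / N
C1, C2 = 2.0, 3.0
phi = [math.sin(3 * k * dt) ** 2 for k in range(N + 1)]

IV, Iphi, ok = 0.0, 0.0, True
for k in range(N + 1):
    v = C1 * phi[k] + C2 * IV                          # hypothesis, with equality
    bound = C1 * phi[k] + C1 * C2 * math.exp(C2 * T) * Iphi
    ok = ok and (v <= bound + 1e-9)
    IV += v * dt                                       # left Riemann sums
    Iphi += phi[k] * dt
assert ok
```

In particular, if ϕ(t) → 0 pointwise and boundedly (as ϕ_n does above), the right-hand side of the bound converges to zero as well.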


6 Stochastic Systems Controlled by Vector Measures

Theorem 6.5.2 Consider the system (6.31) with the admissible controls given by $U^{rel}_{ad}$ and the cost functional given by (6.32), and suppose the assumptions of Theorem 6.5.1 hold. Suppose the cost integrand $\ell$ is measurable in the first variable, lower semicontinuous in the second, and continuous in the third argument, the terminal cost function $\Phi$ is lower semicontinuous, and they satisfy the following growth properties:

$$|\ell(t,x,\xi)| \le \alpha_1(t) + \alpha_2\|x\|^2_{R^n}\ \ \forall\,\xi\in U,\qquad |\Phi(x)| \le \alpha_3 + \alpha_4\|x\|^2_{R^n},$$

where $\alpha_1\in L_1^+(I)$ and $\alpha_2,\alpha_3,\alpha_4\ge 0$. Then, there exists a control minimizing the cost functional (6.32).

Proof Since the set of admissible controls $U^{rel}_{ad}$ is weak-star compact (in fact, compact in the metric topology "d"), it suffices to verify that $u\to J(u)$ is lower semicontinuous in this topology. Let $\{u^n,u^o\}\subset U^{rel}_{ad}$ and suppose $u^n\xrightarrow{d}u^o$. Then it follows from Theorem 6.5.1 that the corresponding solutions satisfy $x^n\xrightarrow{s}x^o$ in $B^a_\infty(I,H_n)$. Thus, it follows from the lower semicontinuity of $\ell$ and $\Phi$ that

$$\ell(t,x^o(t),u^o_t) \le \liminf_n\,\ell(t,x^n(t),u^n_t)\quad\text{and}\quad \Phi(x^o(T)) \le \liminf_n\,\Phi(x^n(T)) \qquad (6.43)$$

for all $t\in I$, $P$-a.s. Since the set of admissible controls is compact in the metric topology induced by "d", it is bounded. Hence, considering the integral equation

$$x(u)(t) = x_0 + \int_0^t \hat F(s,x(u)(s),u_s)\,ds + \int_0^t \hat G(s,x(u)(s),u_s)\,dw(s),\quad t\in I,$$

corresponding to any control $u\in U^{rel}_{ad}$, invoking the growth properties of $\{\hat F,\hat G\}$, and using the Grönwall inequality, one can easily verify that the attainable set of solutions of the above equation (equivalently (6.31)), given by

$$S_A \equiv \{x\in B^a_\infty(I,H_n): x = x(u)\ \text{for some}\ u\in U^{rel}_{ad}\},$$

is bounded. Thus, the set $\{x^n,x^o\}$ is contained in a bounded subset of $B^a_\infty(I,H_n)$. Hence, it follows from the growth properties of $\ell$ and $\Phi$ that both $\ell(t,x^o(t),u^o_t)$ and $\Phi(x^o(T))$ are bounded from below, the former by an integrable random process and the latter by an integrable random variable. Therefore, it follows from the generalized Fatou's lemma that

$$E\int_0^T \ell(t,x^o(t),u^o_t)\,dt \le \liminf_n\, E\int_0^T \ell(t,x^n(t),u^n_t)\,dt, \qquad (6.44)$$

$$E\,\Phi(x^o(T)) \le \liminf_n\, E\,\Phi(x^n(T)). \qquad (6.45)$$

Since any finite sum of lower semicontinuous functionals is lower semicontinuous, we conclude that $J$ is lower semicontinuous in the metric topology, leading to

$$J(u^o) \le \liminf_{n\to\infty} J(u^n).$$

Since $U^{rel}_{ad}$ is compact in the metric topology and $J$ is lower semicontinuous in this topology, we conclude that $J$ attains its minimum on $U^{rel}_{ad}$. Hence, an optimal control policy exists. □

Remark 6.5.3 In case the set $U$ consists of a finite set of points, such as $U \equiv \{\xi_1,\xi_2,\cdots,\xi_m\}$, $\xi_i\in R^d$, it is clear that $U$ is not convex, and so an optimal control from the class of regular controls (for example, measurable functions with values in $U$) does not exist in general. However, an optimal control does exist in the class of relaxed controls. In this particular case, the optimal control has the form

$$u^o_t \equiv \sum_{i=1}^m p^o_i(t)\,\delta_{\xi_i}(d\xi),\qquad p^o_i(t)\ge 0,\quad \sum_{i=1}^m p^o_i(t) = 1,$$

for all $t\in I$. Note that $p^o_i(t)$ denotes the probability that at time $t$ the control takes the value $\xi_i$. Alternatively, one may also think of $p^o_i(t)$ as the frequency of use of the control $\xi_i$. These are chattering controls.

Under the assumptions (H1) and (H2) of Theorem 6.4.3 and the assumptions of Theorem 6.5.2, the existence result can be easily extended to cover the more general stochastic system driven by both Brownian motion and a Poisson random measure. This is described by the following stochastic system:

$$dx(t) = \hat F(t,x,u_t)\,dt + \hat G(t,x,u_t)\,dw + \int_{D_0}\hat H(t,x(t),v,u_t)\,q(dt,dv),\quad x(0)=x_0,\ t\in I, \qquad (6.46)$$

where

$$\hat H(t,x(t),v,u_t) \equiv \int_U H(t,x(t),v,\xi)\,u_t(d\xi).$$
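The chattering control of Remark 6.5.3 is easy to visualize numerically. The following sketch is purely illustrative — the scalar dynamics, the finite control set $U = \{-1, 0, 1\}$, and the weights $p_i(t)$ are hypothetical choices, not taken from the text — and simulates a scalar analogue of the system (6.31) by sampling the control value $\xi_i$ with probability $p_i(t)$ at each Euler–Maruyama step:

```python
import numpy as np

def simulate_chattering(T=1.0, N=1000, seed=0):
    """Euler-Maruyama simulation of dx = f(x, xi) dt + g dw, where at each
    step the control value xi is drawn from the finite set U according to
    the time-dependent chattering probabilities p_i(t)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    U = np.array([-1.0, 0.0, 1.0])      # finite, non-convex control set
    x = 0.0
    path = [x]
    for k in range(N):
        t = k * dt
        # illustrative weights: mass shifts from xi = -1 toward xi = +1
        p = np.array([1.0 - t, 0.2, t]) + 1e-9
        p /= p.sum()                    # p_i(t) >= 0, sum_i p_i(t) = 1
        xi = rng.choice(U, p=p)         # chattering: xi_i chosen w.p. p_i(t)
        x = x + (-x + xi) * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    return np.array(path)

path = simulate_chattering()
print(path.shape)  # (1001,)
```

Averaging many such sample paths recovers the behavior of the relaxed drift $\hat F(t,x,u_t) = \sum_i p_i(t)F(t,x,\xi_i)$, which is the sense in which chattering controls realize relaxed controls.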

Theorem 6.5.4 Consider the system (6.46) with the cost functional (6.32), and suppose the assumptions of Theorems 6.4.3 and 6.5.1 hold. Furthermore, suppose the cost integrands $\ell$ and $\Phi$ satisfy the same assumptions as those in Theorem 6.5.2. Then there exists a control minimizing the cost functional (6.32).

Proof The proof is very similar to that of Theorem 6.5.2, so we present only a brief outline. It suffices to establish that the control-to-solution map $u\to x$ is continuous with respect to the metric topology on $U^{rel}_{ad}$ and the norm topology on $B^a_\infty(I,H_n)$. Once this is done, existence of an optimal control follows exactly as in Theorem 6.5.2. To prove the continuity, we use an estimate similar to the expression (6.36), to which we must add the effects of the jump components, giving the following inequality:

$$E\|x^o(t)-x^n(t)\|^2 \le 2^5\big(K^2t+L^2+K_4^2\big)\int_0^t E\|x^o(s)-x^n(s)\|^2\,ds + E\|e_{1,n}(t)\|^2 + E\|e_{2,n}(t)\|^2 + E\|e_{3,n}(t)\|^2,\quad t\in I, \qquad (6.47)$$

where $K_4$ is the Lipschitz coefficient of the function $H$ with respect to the state variable, and the component $e_{3,n}$ is given by

$$e_{3,n}(t) = \int_0^t\int_{D_0}\big[\hat H(s,x^o(s),v,u^o_s) - \hat H(s,x^o(s),v,u^n_s)\big]\,q(ds,dv),\quad t\in I. \qquad (6.48)$$

Since $u^n$ converges to $u^o$ in the topology induced by the metric $d$, the integrand converges to zero in $\mu$-measure, and hence along a subsequence it converges to zero $\mu$-a.e. Thus, for each $t\in I$, $e_{3,n}(t)\to 0$, $P$-a.s. Using the assumption (H1) of Theorem 6.4.3 and computing the expected value of the norm square of $e_{3,n}(t)$, it follows from the above expression that

$$E\|e_{3,n}(t)\|^2 \le 2K_4^2\int_0^t\big(1+E\|x^o(s)\|^2\big)\,ds,\quad t\in I. \qquad (6.49)$$

Since $x^o\in B^a_\infty(I,H_n)$, $e_{3,n}$ is also dominated by a square integrable random process. So, by the Lebesgue dominated convergence theorem, we conclude that, for each $t\in I$, $e_{3,n}(t)$ converges to zero in norm. Again, defining

$$V_n(t) \equiv E\|x^o(t)-x^n(t)\|^2 \equiv \|x^o(t)-x^n(t)\|^2_{H_n},\qquad \varphi_n(t) \equiv E\|e_{1,n}(t)\|^2 + E\|e_{2,n}(t)\|^2 + E\|e_{3,n}(t)\|^2,$$

and $C_1 \equiv 2^5$, $C_2 \equiv 2^5(K^2T+L^2+K_4^2)$, we obtain an inequality similar to (6.36). By virtue of the Grönwall inequality, it follows from this that $x^n\xrightarrow{s}x^o$ in $B^a_\infty(I,H_n)$. This proves continuity of the control-to-solution map. This is then used, as in Theorem 6.5.2, to prove that the optimal control problem has a solution. This completes a brief sketch of our proof. □


6.5.2 Necessary Conditions of Optimality

As seen in the previous chapters, to determine the optimal control policies, we need the necessary conditions of optimality. To construct such necessary conditions (of optimality) for the control problems stated in this subsection, we need the notion of semi-martingales. In general, these are $F_t$-adapted random processes having the representation

$$Z_t = m_0 + A_t + M_t,\quad t\ge 0,$$

where $m_0$ is an $F_0$-measurable random vector, $\{A_t, t\ge 0\}$ is a random process having bounded variation, and $\{M_t, t\ge 0\}$ is an integrable continuous martingale. Here in this section we are interested in semi-martingales based on the $F_{t\ge 0}$ Brownian motion $\{w(t), t\ge 0\}$. We need only $L_2$ semi-martingales starting from the origin. These can be represented by the following expression:

$$Z_t = \int_0^t \lambda(s)\,ds + \int_0^t \Sigma(s)\,dw(s),\quad t\ge 0,$$

where $\lambda\in L^a_2(I,H_n)$ with $H_n = L_2(\Omega,R^n)$, and $\Sigma\in L^a_2(I,H_{m,n})$ with $H_{m,n} = L_2(\Omega,L(R^m,R^n))$. These are adapted processes, as indicated by the superscript "a". We denote the space of such semi-martingales by $SM^2_0$. One can introduce an inner (scalar) product in this space. Let $Z^1, Z^2\in SM^2_0$ be given by

$$Z^i_t = \int_0^t \lambda_i(s)\,ds + \int_0^t \Sigma_i(s)\,dw(s),\quad t\in I,\ i=1,2.$$

Given that $w$ is an $R^m$-valued standard Brownian motion, the scalar product of these elements is given by

$$(Z^1,Z^2)_{SM^2_0} = E\int_I \langle\lambda_1(t),\lambda_2(t)\rangle_{R^n}\,dt + E\int_I Tr\big(\Sigma_1^*(t)\Sigma_2(t)\big)\,dt,$$

and the square of the norm of $Z$ is given by

$$\|Z\|^2_{SM^2_0} \equiv E\int_I \|\lambda(t)\|^2_{R^n}\,dt + E\int_I \|\Sigma(t)\|^2_{L(R^m,R^n)}\,dt \equiv \int_I \|\lambda(t)\|^2_{H_n}\,dt + \int_I \|\Sigma(t)\|^2_{H_{m,n}}\,dt,$$

where the last identity is justified by Fubini's theorem. Equipped with the scalar product and the associated norm topology, $SM^2_0$ is a real Hilbert space. It is interesting to observe that, given any pair of elements $\{\lambda,\Sigma\}\in L^a_2(I,H_n)\times L^a_2(I,H_{m,n})$, there exists a continuous semi-martingale $Z\in SM^2_0$ given by

$$Z_t = \int_0^t \lambda(s)\,ds + \int_0^t \Sigma(s)\,dw(s),\quad t\in I.$$
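The second identity in the norm formula above is just the Itô isometry combined with Fubini's theorem. The following Monte Carlo sanity check is a hypothetical scalar example ($m = n = 1$, with the deterministic integrand $\Sigma(t) = t$ chosen arbitrarily) verifying $E\,|\int_0^T \Sigma(t)\,dw(t)|^2 = \int_0^T \Sigma(t)^2\,dt$:

```python
import numpy as np

def ito_isometry_check(T=1.0, N=500, M=20000, seed=1):
    """Monte Carlo check of E|int_0^T s(t) dw|^2 = int_0^T s(t)^2 dt
    for the deterministic integrand s(t) = t (scalar case)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    t = np.arange(N) * dt                  # left endpoints (Ito convention)
    dw = np.sqrt(dt) * rng.standard_normal((M, N))
    stoch_int = (t * dw).sum(axis=1)       # int_0^T s(t) dw, M sample paths
    lhs = (stoch_int ** 2).mean()          # E | int s dw |^2
    rhs = (t ** 2).sum() * dt              # int_0^T s(t)^2 dt ~= T^3 / 3
    return lhs, rhs

lhs, rhs = ito_isometry_check()
print(abs(lhs - rhs) < 0.05)  # True
```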


Conversely, given any continuous semi-martingale $Z\in SM^2_0$, there exists a unique pair $(\lambda,\Sigma)\in L^a_2(I,H_n)\times L^a_2(I,H_{m,n})$ such that

$$Z_t = \int_0^t \lambda(s)\,ds + \int_0^t \Sigma(s)\,dw(s),\quad t\in I.$$

Thus, $SM^2_0 \cong L^a_2(I,H_n)\times L^a_2(I,H_{m,n})$, that is, $SM^2_0$ is isometrically isomorphic to $L^a_2(I,H_n)\times L^a_2(I,H_{m,n})$. The reader is encouraged to verify the uniqueness. Now we are prepared to construct the necessary conditions of optimality. For this, the data $\{F,G,\ell,\Phi\}$ must satisfy stronger regularity conditions. This is presented in the following theorem.

Theorem 6.5.5 Consider the system (6.31) with the cost functional (6.32) and the admissible controls $U^{rel}_{ad}$. Suppose the assumptions of Theorem 6.5.2 hold and, further, $\{F,G,\ell,\Phi\}$ are all continuously Gâteaux differentiable in the state variable, with the Gâteaux derivatives $DF$, $DG$ uniformly bounded and $\{\ell_x,\Phi_x\}\in L^a_2(I,H_n)\times H_n$ along the solution trajectories. Then, for a control-state pair $\{u^o,x^o\}$ to be optimal, it is necessary that there exists a pair $\{\psi,Q\}\in B^a_\infty(I,H_n)\times L^a_2(I,H_{m,n})$ satisfying the following inequality and the stochastic differential equations:

$$E\int_0^T \langle\hat F(t,x^o,u_t-u^o_t),\psi\rangle\,dt + E\int_0^T Tr\big(\hat G^*(t,x^o,u_t-u^o_t)Q(t)\big)\,dt + E\int_0^T \hat\ell(t,x^o,u_t-u^o_t)\,dt \ge 0\quad \forall\,u\in U^{rel}_{ad}, \qquad (6.50)$$

$$-d\psi = (D\hat F)^*(t,x^o(t),u^o_t)\psi\,dt + V(t,x^o(t),u^o_t)\psi\,dt + \hat\ell_x(t,x^o(t),u^o_t)\,dt - Q(t)\,dw,\quad \psi(T) = \Phi_x(x^o(T)),\ t\in I, \qquad (6.51)$$

$$dx = \hat F(t,x^o,u^o_t)\,dt + \hat G(t,x^o,u^o_t)\,dw,\quad x(0)=x_0,\ t\in I, \qquad (6.52)$$

where the operator valued processes $\{Q,V\}$ are given by

$$Q(t) = -D\hat G(t,x^o(t),u^o_t;\psi(t)),$$

$$\langle V(t,x^o(t),u^o_t)\psi,\,y\rangle = Tr\big(Q^*(t)\,D\hat G(t,x^o(t),u^o_t;y(t))\big) = -Tr\big(D\hat G^*(t,x^o(t),u^o_t;\psi(t))\,D\hat G(t,x^o(t),u^o_t;y(t))\big).$$

Proof Let $u^o\in U^{rel}_{ad}$ denote the optimal control and $u\in U^{rel}_{ad}$ any other control. Since $U^{rel}_{ad}$ is convex, it is clear that $u^\varepsilon \equiv u^o + \varepsilon(u-u^o)\in U^{rel}_{ad}$ for all $\varepsilon\in[0,1]$ and $u\in U^{rel}_{ad}$. Thus, by optimality of $u^o$, we have $J(u^\varepsilon)\ge J(u^o)$ for all $\varepsilon\in[0,1]$ and all $u\in U^{rel}_{ad}$. Let $\{x^o,x^\varepsilon\}\in B^a_\infty(I,H_n)$ denote the solutions of Eq. (6.31) corresponding to the controls $\{u^o,u^\varepsilon\}$, respectively. It is easy to verify that, as $\varepsilon\downarrow 0$, $x^\varepsilon\xrightarrow{s}x^o$ in $B^a_\infty(I,H_n)$. Furthermore, following straightforward computation, one


can verify that the limit

$$\lim_{\varepsilon\downarrow 0}\,(1/\varepsilon)\big(x^\varepsilon(t)-x^o(t)\big) \equiv y(t),\quad t\in I,$$

exists in the norm topology of $B^a_\infty(I,H_n)$ and that $y$ satisfies the variational equation given by the following SDE:

$$dy = D\hat F(t,x^o(t),u^o_t)\,y\,dt + D\hat G(t,x^o(t),u^o_t;y)\,dw + dm^{u-u^o}_t,\quad y(0)=0,\ t\in I, \qquad (6.53)$$

where $m^{u-u^o}$ is a semi-martingale given by

$$dm^{u-u^o}_t = \hat F(t,x^o(t),u_t-u^o_t)\,dt + \hat G(t,x^o(t),u_t-u^o_t)\,dw,\quad m^{u-u^o}_0 = 0,\ t\in I. \qquad (6.54)$$

Using our assumptions on $F$ and $G$ and the fact that $x^o\in B^a_\infty(I,H_n)$, it is easy to verify that the process $m^{u-u^o}\in SM^2_0$. The system (6.53) is a linear stochastic differential equation in $y$ driven by the semi-martingale $\{m^{u-u^o}_t, t\in I\}$. It is easy to verify that it has a unique solution $y\in B^a_\infty(I,H_n)$, which can be expressed in the form

$$y(t) \equiv \int_0^t \Psi(t,s)\,dm^{u-u^o}_s,\quad t\in I,$$

where $\Psi(t,s)$ is the stochastic transition operator determined by the homogeneous part of Eq. (6.53). In other words, the map

$$m^{u-u^o} \longrightarrow y$$

is a bounded linear operator from the Hilbert space of semi-martingales $SM^2_0$ to $B^a_\infty(I,H_n)$. Now, considering the cost functional $J$ and computing its Gâteaux derivative at $u^o$ in the direction $(u-u^o)$, we find that

$$dJ(u^o;u-u^o) = E\Big\{\int_0^T \langle\hat\ell_x(t,x^o,u^o_t),y(t)\rangle\,dt + \langle\Phi_x(x^o(T)),y(T)\rangle\Big\} + E\int_0^T \hat\ell(t,x^o,u_t-u^o_t)\,dt. \qquad (6.55)$$

Since $y\in B^a_\infty(I,H_n)\subset L^a_2(I,L_2(\Omega,R^n))$, it follows from our assumptions on $\ell$ and $\Phi$ that the first term, within the braces involving duality brackets, is well defined. Thus, the directional derivative of $J$ given by the above expression is well defined and bounded. By hypothesis, $u^o$ is optimal, and therefore it is clear that

$$dJ(u^o;u-u^o) \ge 0\quad \forall\,u\in U^{rel}_{ad}. \qquad (6.56)$$

For convenience of presentation, let us denote the first term of the expression (6.55) by

$$L(y) \equiv E\Big\{\int_0^T \langle\hat\ell_x(t,x^o,u^o_t),y(t)\rangle\,dt + \langle\Phi_x(x^o(T)),y(T)\rangle\Big\}. \qquad (6.57)$$

Clearly, as indicated above, it follows from our assumptions on $\ell_x$ and $\Phi_x$ that $y\to L(y)$ is a continuous linear functional on the Banach space $B^a_\infty(I,H_n)$ and hence bounded. Thus, the composition map

$$m^{u-u^o} \longrightarrow y \longrightarrow L(y) \equiv \tilde L(m^{u-u^o})$$

is a bounded linear functional on the Hilbert space of semi-martingales $SM^2_0$. Hence, it follows from the well-known Riesz representation theorem for Hilbert spaces (see Proposition 1.8.12) that there exists a semi-martingale $M = \{M_t, t\in I\}$ in $SM^2_0$ such that

$$\tilde L(m^{u-u^o}) = \langle M,\, m^{u-u^o}\rangle_{SM^2_0}. \qquad (6.58)$$

By virtue of the semi-martingale representation theory as seen above, there exists a pair $(\psi,Q)\in L^a_2(I,H_n)\times L^a_2(I,H_{m,n})$ such that

$$M_t = \int_0^t \psi(s)\,ds + \int_0^t Q(s)\,dw(s),\quad t\in I.$$

Hence, it follows from the representation (6.54) of the semi-martingale $m^{u-u^o}\in SM^2_0$ and the scalar product (6.58) that

$$\tilde L(m^{u-u^o}) = E\int_0^T \langle\psi(t),\hat F(t,x^o(t),u_t-u^o_t)\rangle\,dt + E\int_0^T Tr\big(Q^*(t)\hat G(t,x^o(t),u_t-u^o_t)\big)\,dt. \qquad (6.59)$$

Using this equivalent representation of the linear functional $L$ (i.e., $L(y) = \tilde L(m^{u-u^o})$) in the expression (6.55), and recalling the inequality (6.56), we obtain the necessary condition given by (6.50). Now it remains to characterize the pair $\{\psi,Q\}$ and prove that they satisfy the adjoint equation (6.51). For this, we compute the Itô differential of the scalar product $\langle y(t),\psi(t)\rangle$. This is given by

$$d\langle y(t),\psi(t)\rangle = \langle y(t),d\psi(t)\rangle + \langle dy(t),\psi(t)\rangle + \langle\langle dy(t),d\psi(t)\rangle\rangle,\quad t\in I, \qquad (6.60)$$


where the last component denotes the quadratic variation term. Integrating this and using the variational equation (6.53) and integration by parts, we obtain

$$E\langle y(T),\psi(T)\rangle = E\int_0^T \langle D\hat F(t,x^o(t),u^o_t)\,y,\,\psi(t)\rangle\,dt + E\int_0^T \langle D\hat G(t,x^o(t),u^o_t;y)\,dw,\,\psi(t)\rangle + E\int_0^T \langle\hat F(t,x^o(t),u_t-u^o_t)\,dt + \hat G(t,x^o(t),u_t-u^o_t)\,dw,\,\psi(t)\rangle + E\int_0^T \langle y(t),d\psi(t)\rangle + E\int_0^T \langle\langle dy,d\psi\rangle\rangle. \qquad (6.61)$$

Rearranging this in suitable form and using the adjoint operation, one can easily verify that

$$E\langle y(T),\psi(T)\rangle = E\int_0^T \big\langle y,\; d\psi + (D\hat F)^*(t,x^o(t),u^o_t)\psi(t)\,dt + D\hat G(t,x^o(t),u^o_t;\psi(t))\,dw\big\rangle + E\int_0^T \langle\hat F(t,x^o(t),u_t-u^o_t),\psi(t)\rangle\,dt + E\int_0^T \langle\hat G^*(t,x^o(t),u_t-u^o_t)\psi(t),\,dw\rangle + E\int_0^T \langle\langle dy,d\psi\rangle\rangle. \qquad (6.62)$$

Examining the above expression and using the variational equations (6.53)–(6.54), we observe that the quadratic variation term is given by

$$E\int_0^T \langle\langle dy,d\psi\rangle\rangle = E\int_0^T \big\langle\big\langle D\hat G(t,x^o(t),u^o_t;y(t))\,dw,\;(-1)D\hat G(t,x^o(t),u^o_t;\psi(t))\,dw\big\rangle\big\rangle + E\int_0^T \big\langle\big\langle \hat G(t,x^o(t),u_t-u^o_t)\,dw,\;(-1)D\hat G(t,x^o(t),u^o_t;\psi(t))\,dw\big\rangle\big\rangle. \qquad (6.63)$$

Hence, computing the quadratic variation terms, we obtain the following expression:

$$E\int_0^T \langle\langle dy,d\psi\rangle\rangle = E\int_0^T Tr\big(-(D\hat G)^*(t,x^o(t),u^o_t;\psi(t))\,D\hat G(t,x^o(t),u^o_t;y(t))\big)\,dt + E\int_0^T Tr\big(-(D\hat G)^*(t,x^o(t),u^o_t;\psi(t))\,\hat G(t,x^o,u_t-u^o_t)\big)\,dt. \qquad (6.64)$$


Note that the first term on the right-hand side of the above expression is a bilinear form in the variables $\psi$ and $y$, and it is symmetric and negative. The corresponding operator valued function is denoted by $V$. Using this notation, the expression (6.64) can be written as

$$E\int_0^T \langle\langle dy,d\psi\rangle\rangle = E\int_0^T \langle V(t,x^o(t),u^o_t)\psi(t),\,y(t)\rangle\,dt + E\int_0^T Tr\big(Q^*(t)\hat G(t,x^o,u_t-u^o_t)\big)\,dt, \qquad (6.65)$$

where $V \equiv V(t,x^o(t),u^o_t)$, $t\in I$, and $Q \equiv -D\hat G(t,x^o(t),u^o_t;\psi(t))$, $t\in I$. Note that $Q$ is linear in $\psi$. Using this expression for the quadratic variation term in Eq. (6.62), we obtain

$$E\langle y(T),\psi(T)\rangle = E\int_0^T \big\langle y(t),\; d\psi(t) + (D\hat F)^*(t,x^o(t),u^o_t)\psi(t)\,dt + V(t,x^o(t),u^o_t)\psi(t)\,dt - Q(t)\,dw(t)\big\rangle + E\int_0^T \langle\hat F(t,x^o(t),u_t-u^o_t),\psi(t)\rangle\,dt + E\int_0^T Tr\big(Q^*(t)\hat G(t,x^o,u_t-u^o_t)\big)\,dt + E\int_0^T \langle\hat G^*(t,x^o(t),u_t-u^o_t)\psi(t),\,dw\rangle. \qquad (6.66)$$

By setting

$$-d\psi(t) = (D\hat F)^*(t,x^o(t),u^o_t)\psi(t)\,dt + V(t,x^o(t),u^o_t)\psi(t)\,dt + \hat\ell_x(t,x^o(t),u^o_t)\,dt - Q(t)\,dw(t),\quad t\in I, \qquad (6.67)$$

with $\psi(T) = \Phi_x(x^o(T))$, it follows from Eq. (6.66) that

$$E\langle y(T),\Phi_x(x^o(T))\rangle + E\int_0^T \langle y(t),\hat\ell_x(t,x^o(t),u^o_t)\rangle\,dt = E\int_0^T \langle\hat F(t,x^o(t),u_t-u^o_t),\psi(t)\rangle\,dt + E\int_0^T Tr\big(Q^*(t)\hat G(t,x^o(t),u_t-u^o_t)\big)\,dt + E\int_0^T \langle\hat G^*(t,x^o(t),u_t-u^o_t)\psi(t),\,dw\rangle. \qquad (6.68)$$


By using a stopping time argument, one can verify that the last term in Eq. (6.68) vanishes, giving

$$E\langle y(T),\Phi_x(x^o(T))\rangle + E\int_0^T \langle y(t),\hat\ell_x(t,x^o(t),u^o_t)\rangle\,dt = E\int_0^T \langle\hat F(t,x^o(t),u_t-u^o_t),\psi(t)\rangle\,dt + E\int_0^T Tr\big(Q^*(t)\hat G(t,x^o(t),u_t-u^o_t)\big)\,dt. \qquad (6.69)$$

Note that the expression on the left-hand side of the above equation is precisely the linear functional $L(y)$ given by Eq. (6.57), and it coincides with the expression $\tilde L(m^{u-u^o})$ given by Eq. (6.59), as expected. Thus, Eq. (6.67), with the boundary condition as stated above, gives the necessary condition (6.51). Equation (6.52) is the system equation driven by the optimal control. This completes the proof of all the necessary conditions of optimality as stated. □

Algorithm Using the necessary conditions of optimality, we can construct an algorithm whereby one can determine the optimal control. We present a proof of convergence of this algorithm.

Theorem 6.5.6 (Convergence Theorem) Suppose the assumptions of Theorem 6.5.2 and those of Theorem 6.5.5 hold. Then, there exists a sequence of control policies $\{u^n\}\subset U^{rel}_{ad}$ along which $J(u^n)\to m_0$ monotonically, where $m_0$ is possibly a local minimum or $m_0 = \inf\{J(u),\ u\in U^{rel}_{ad}\}$.

Proof
Step 1: Choose an arbitrary control from $U^{rel}_{ad}$, call it $u^1$, and solve the state equation (6.52) with $u^o$ replaced by $u^1$; call this solution $x^1$.
Step 2: Use the pair $\{u^1,x^1\}$ in the adjoint equation (6.51) in place of $\{u^o,x^o\}$, and solve it for $\psi$, giving $\psi^1$.
Step 3: Use the triple $\{u^1,x^1,\psi^1\}$ in the inequality (6.50) in place of $\{u^o,x^o,\psi^o\}$, and define the process

$$\eta_1(t,\xi) \equiv \langle F(t,x^1,\xi),\psi^1\rangle - Tr\big(G^*(t,x^1,\xi)\,D\hat G(t,x^1,u^1;\psi^1)\big) + \ell(t,x^1,\xi),\quad t\in I,\ \xi\in U. \qquad (6.70)$$

Using this notation in the expression on the left-hand side of inequality (6.50), we obtain the Gâteaux differential of $J$ at $u^1$ in the direction $(u-u^1)$, given by

$$dJ(u^1;u-u^1) = E\int_I \Big(\int_U \eta_1(t,\xi)\big(u_t(d\xi)-u^1_t(d\xi)\big)\Big)\,dt = \int_{I\times\Omega} \langle\eta_1(t),\,u_t-u^1_t\rangle_{C(U),M(U)}\,d\mu, \qquad (6.71)$$

where the bracket $\langle\cdot,\cdot\rangle$ denotes the following duality pairing:

$$\langle\eta_1(t),\,u_t-u^1_t\rangle_{C(U),M(U)} \equiv \int_U \eta_1(t,\xi)\,(u_t-u^1_t)(d\xi),$$

and $\mu$ is the measure given by the restriction of the product measure $dt\times dP$ to the predictable sigma algebra $\mathcal P\subset B(I)\times F$, as defined in the introduction of this section. Now we recall the Banach space $L_1(\mu,C(U))$ of Bochner $\mu$-integrable functions defined on $I\times\Omega$ and taking values in the Banach space $C(U)$. The (topological) dual of this space is given by the space of weak-star measurable functions defined on $I\times\Omega$ with values in the space of Borel measures on $U$, denoted by $L^w_\infty(\mu,M(U))$. For simplicity of notation, let us use $X \equiv L_1(\mu,C(U))$ and its dual $X^* \equiv L^w_\infty(\mu,M(U))$. Thus, the pairing in the expression (6.71) can be written as

$$dJ(u^1;u-u^1) = \langle\eta_1,\,u-u^1\rangle_{X,X^*}. \qquad (6.72)$$



At this point, we need the duality map $\Lambda: X\to 2^{X^*}\setminus\emptyset$ (the power set of $X^*$) given by

$$\Lambda(x) \equiv \{x^*\in X^*: \langle x^*,x\rangle = \|x\|^2_X = \|x^*\|^2_{X^*}\}.$$

Again, by virtue of the Hahn–Banach theorem, for every $x\neq 0$ the set $\Lambda(x)$ is nonempty. In general, it is a multi-valued map (upper semicontinuous: strong to weak-star topology). Now, returning to Step 3, in particular the expression (6.72), choose any $v^1\in\Lambda(\eta_1)$ and define $u^2 \equiv u^1 - \varepsilon v^1$, with $\varepsilon > 0$ sufficiently small so that $u^2\in U^{rel}_{ad}$. Then, using the Lagrange formula and evaluating the functional $J$ at $u^2$, one can readily verify that

$$J(u^2) = J(u^1) + dJ(u^1;u^2-u^1) + o(\varepsilon) = J(u^1) + \langle\eta_1,\,u^2-u^1\rangle_{X,X^*} + o(\varepsilon) = J(u^1) - \varepsilon\|\eta_1\|^2_X + o(\varepsilon) = J(u^1) - \varepsilon\|v^1\|^2_{X^*} + o(\varepsilon).$$

This shows that, for $\varepsilon > 0$ sufficiently small, we have $J(u^2) < J(u^1)$. Returning to Step 1 with $u^2$, replacing $u^1$ by $u^2$, and repeating the process, we can construct a sequence of controls $\{u^n\}\subset U^{rel}_{ad}$ (as stated) such that

$$J(u^1) \ge J(u^2) \ge J(u^3) \ge \cdots \ge J(u^n) \ge J(u^{n+1}) \ge \cdots. \qquad (6.73)$$


Now, under the assumptions on the cost integrands $\ell$ and $\Phi$ used to prove Theorem 6.5.2, it is clear that $J(u^n) > -\infty$ for all $n\in\mathbb N$. Thus, $J(u^n)$ converges monotonically, possibly to a local minimum $m_0$. This completes the proof. □

Remark 6.5.7 In case the control domain $U$ is compact and convex and $\{F,G,\ell\}$ satisfy some additional conditions (such as a convexity condition with respect to the control variables), optimal controls exist in the class of regular controls $U^r_{ad}\subset L^a_\infty(I,R^m)$ taking values in $U$ and measurable with respect to the predictable sigma field $\mathcal P\subset B(I)\times F$. In that case, the necessary conditions of optimality given by Theorem 6.5.5 reduce to the necessary conditions of optimality for ordinary controls. Introducing the Hamiltonian $H$ as

$$H(t,x,\psi,u) \equiv \langle F(t,x,u),\psi\rangle - Tr\big(G^*(t,x,u)\,DG(t,x,u;\psi)\big) + \ell(t,x,u), \qquad (6.74)$$

one can rewrite the inequality (6.50) as

$$E\int_0^T H(t,x^o(t),\psi(t),u(t))\,dt \ge E\int_0^T H(t,x^o(t),\psi(t),u^o(t))\,dt\quad \forall\,u\in U^{rel}_{ad}.$$

From this inequality, one can readily obtain point-wise necessary conditions of optimality, giving a minimum principle for stochastic systems driven by regular controls.
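To make Steps 1–3 of the algorithm concrete, the sketch below runs the scheme on a hypothetical deterministic toy problem ($G\equiv 0$, scalar state, finite $U$, so a relaxed control reduces to the weight vector $p(t)$ of Remark 6.5.3). The dynamics, the cost, and the simplex-preserving mirror-descent update standing in for the duality-map step $u^2 = u^1 - \varepsilon v^1$ are all illustrative assumptions, not the text's construction:

```python
import numpy as np

def relaxed_descent(T=1.0, N=200, iters=50, eps=0.5):
    """Steps 1-3 for the toy problem
       state:   x' = -x + sum_i p_i(t) xi_i,  x(0) = 1,
       cost:    J  = int_0^T ( x^2 + 0.1 sum_i p_i(t) xi_i^2 ) dt,
       adjoint: -psi' = -psi + 2 x,  psi(T) = 0,
       eta(t, xi) = xi psi(t) + 0.1 xi^2  (terms constant in xi dropped)."""
    dt = T / N
    U = np.array([-1.0, 0.0, 1.0])
    p = np.full((N, 3), 1.0 / 3.0)           # initial relaxed control u^1
    costs = []
    for _ in range(iters):
        # Step 1: solve the state equation forward (explicit Euler)
        x = np.empty(N + 1); x[0] = 1.0
        for k in range(N):
            x[k + 1] = x[k] + dt * (-x[k] + p[k] @ U)
        costs.append(dt * np.sum(x[:N] ** 2 + p @ (0.1 * U ** 2)))
        # Step 2: solve the adjoint equation backward
        psi = np.empty(N + 1); psi[N] = 0.0
        for k in range(N, 0, -1):
            psi[k - 1] = psi[k] + dt * (-psi[k] + 2.0 * x[k])
        # Step 3: form eta(t, xi) and step against it, staying on the simplex
        eta = psi[:N, None] * U[None, :] + 0.1 * U[None, :] ** 2
        p = p * np.exp(-eps * eta)           # mirror-descent surrogate
        p /= p.sum(axis=1, keepdims=True)
    return costs, p

costs, p = relaxed_descent()
print(costs[0] > costs[-1])  # True: the cost decreases along the iterations
```

Each pass shifts probability mass toward control values with smaller $\eta(t,\xi)$, a discrete counterpart of stepping against $\eta_1$ in the duality pairing (6.71)–(6.72).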

6.6 Regulated (Filtered) Impulsive Controls

In this section, we consider the following stochastic system in $R^n$:

$$dx = F(t,x)\,dt + G(t,x)\,dw + C(t)\,\gamma(dt),\quad t\in I,\ x(0) = x_0, \qquad (6.75)$$

subject to impulsive forces regulated by the operator valued function $C$. The objective is to choose a regulator that minimizes the impact of the impulsive forces on the performance of the system. The drift and diffusion pair $\{F,G\}$ satisfy standard assumptions, the function $C: I\to L(R^m,R^n)$ is any essentially norm bounded Borel measurable random process, and $\gamma\in M_{ca}(\Sigma_I,R^m)$ (the Banach space of countably additive vector measures having bounded variation). The space $M_{ca}(\Sigma_I,R^m)$ also contains discrete measures of the form $\sum h_i\,\delta_{t_i}(dt)$, where $\sum|h_i|_{R^m} < \infty$. In case $\gamma$ is a random vector measure, we can use the space


$M^a_{ca}(\Sigma_I,H_m)$, where $H_m \equiv L_2(\Omega,R^m)$. In this case, we can only allow vector measures that are also adapted to the current of sigma algebras $F_{t\ge 0}$ generated by the Brownian motion, in the sense that $\int_0^t \gamma(ds)$ is $F_t$-measurable for all $t\ge 0$. The cost functional is given by

$$J(C) = E\Big\{\int_I \ell(t,x(t))\,dt + \Phi(x(T))\Big\}. \qquad (6.76)$$
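Between the atoms of a discrete measure $\gamma(dt) = \sum h_i\,\delta_{t_i}(dt)$, the system (6.75) evolves as an ordinary diffusion, and at each atom $t_i$ the state jumps by $C(t_i)h_i$. A minimal Euler–Maruyama sketch of this mechanism (scalar case; the drift $F$, diffusion $G$, regulator $C$, and the jump data are illustrative assumptions, not values from the text):

```python
import numpy as np

def simulate_impulsive(T=1.0, N=1000, seed=2):
    """Simulate dx = F(t,x) dt + G dw + C(t) gamma(dt) for the discrete
    measure gamma(dt) = sum_i h_i delta_{t_i}(dt), scalar case m = n = 1."""
    rng = np.random.default_rng(seed)
    dt = T / N
    atoms = {250: 0.5, 600: -1.0}       # step index k -> jump h_i at t_i = k dt
    C = lambda t: 0.5 + 0.5 * t         # regulator (filter) gain, illustrative
    x = 1.0
    path = [x]
    for k in range(N):
        t = k * dt
        x += -x * dt                                     # F(t, x) = -x
        x += 0.2 * np.sqrt(dt) * rng.standard_normal()   # G = 0.2
        if k in atoms:
            x += C(t) * atoms[k]        # atom of gamma: add C(t_i) h_i
        path.append(x)
    return np.array(path)

path = simulate_impulsive()
print(path.shape)  # (1001,)
```

Choosing the gain $C$ smaller at the atoms attenuates the jumps, which is precisely the sense in which $C$ "filters" the impulsive forces.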

Let $A_{ad}\subset L_\infty(I,L(R^m,R^n))$ be the admissible set of filtering operators (regulating the impulsive forces). The objective is to find a $C\in A_{ad}$ that minimizes the above cost functional. There are situations where a system may be randomly struck by natural impulsive forces, and it may be possible to filter them so as to reduce their impact. In such situations, the system model given by (6.75) fits very well. Here, by choosing $C$, one can filter and regulate the impulsive forces in terms of their intensity and the time of strike. One can also think of $C$ as an operator valued control. First we present a result on the existence of an optimal policy.

Theorem 6.6.1 Consider the system (6.75), and suppose that the admissible set of filters (or control policies) $A_{ad}$ is a weak-star compact subset of $L_\infty(I,L(R^m,R^n))$ and that the drift–diffusion parameters $\{F,G\}$ satisfy the standard growth and uniform Lipschitz conditions with Lipschitz (and growth) constants $\{K_1,K_2\}$, respectively. The measure $\gamma\in M^a_{ca}(\Sigma_I,H_m)$, and there exists a scalar-valued measure $\nu\in M^+_{ca}(\Sigma_I)$ having bounded variation so that $\gamma\ll\nu$.

Construct an element $C_2 \equiv C_1 - \varepsilon B_1$ with $B_1\in\Lambda(R_1)$, where $\Lambda$ denotes the duality map as described above. Note that it is not essential to verify the inequality; the algorithm may continue till a desired stopping criterion is satisfied. Choose $\varepsilon > 0$ sufficiently small so that $C_2\in A_{ad}$. Computing the cost functional at $C_2$, we have

$$J(C_2) = J(C_1) + dJ(C_1;C_2-C_1) + o(\varepsilon) = J(C_1) + E\int_I \langle C_2-C_1,\,\psi_1\otimes h_\gamma\rangle\,\nu(dt) + o(\varepsilon) = J(C_1) - \varepsilon\langle B_1,R_1\rangle + o(\varepsilon) = J(C_1) - \varepsilon\|B_1\|^2_{X^*} + o(\varepsilon) = J(C_1) - \varepsilon\|R_1\|^2_X + o(\varepsilon). \qquad (6.106)$$

Thus, for $\varepsilon > 0$ sufficiently small, $J(C_2) < J(C_1)$. As we have seen before, since the set $A_{ad}$ is bounded, the solution set is bounded, and it follows from the quadratic growth properties of the cost integrands that $J(C) > -\infty$ for all $C\in A_{ad}$. Using $C_2$ as constructed above, returning to Step 1, and repeating the process, we obtain a sequence $\{C_k\}$ along which $\{J(C_k)\}$ is a monotone non-increasing sequence. Thus, there exists a real number $m_0 > -\infty$ such that $J(C_k)\to m_0$ as $k\to\infty$, possibly a local minimum. This completes the proof. □


Remark 6.6.6 Let $G_{t\ge 0}$ denote an increasing family of complete subsigma algebras of the sigma algebra $F_{t\ge 0}$, and suppose the regulators $\{C\}$ can be chosen on the basis of the information carried by the subsigma algebra $G_{t\ge 0}$. In that case, the necessary condition given by the inequality (6.85) or (6.105) can be expressed as follows:

$$dJ(C^o;C-C^o) = E\int_0^T \big\langle C(t)-C^o(t),\; E\{\psi(t)\otimes h_\gamma(t)\,|\,G_t\}\big\rangle\,\nu(dt),$$

where $E\{\cdot\,|\,G_t\}$ denotes the conditional expectation relative to the subsigma algebra $G_{t\ge 0}$. This will produce a $G_{t\ge 0}$-adapted optimal filter (regulator).

6.6.1 Application to Special Cases

Case 1 Suppose we want to apply the above result to discrete measures. Let the measure $\gamma$ be given by $\gamma(dt) \equiv \sum \alpha_i z_i\,\delta_{t_i}(dt)$, where $z_i\in H_m$, and let $\nu(dt) \equiv \sum \alpha_i\,\delta_{t_i}(dt)$. It is clear that $\gamma$ is $\nu$-continuous ($\gamma\ll\nu$).

… $> 0$ defines the memory size. Here we are required to choose $K$ from a given class of functions like $B_\infty(I\times I\times R^m,R^n)$. This is an interesting research problem for graduate students.

6.9.2 Necessary Conditions of Optimality

On the basis of the existence Theorem 6.9.2, we can develop necessary conditions of optimality for the control problem (6.134)–(6.135). By now it is clear to the reader that, for necessary conditions of optimality, we need stronger regularity conditions on the system parameters $\{F,G,C\}$ and $\{F_0,G_0,H_0\}$ and the cost integrands $\{\ell,\Phi\}$. We present this in the following theorem. We use $D_1(\cdot)$ and $D_2(\cdot)$ to denote the Gâteaux differential (operators) with respect to the state variable $x$ and the observed variable $y$, respectively.

6.9 Partially Observed Optimal Feedback Controls


Theorem 6.9.5 Consider the feedback control problem (6.134)–(6.135) with the admissible control laws $A_{ad}$, and suppose the assumptions of Theorem 6.9.2 hold. Furthermore, suppose the elements of $A_{ad}$ and all the parameters $\{F,G,C\}$ and $\{F_0,G_0,H_0\}$ are continuously Gâteaux differentiable in the state and observed variables with the Gâteaux derivatives being uniformly bounded. The cost integrands $\{\ell,\Phi\}$ satisfy the assumptions of Theorem 6.9.2, and the norms of their Gâteaux derivatives are square integrable. Then, for the triple $\{\Gamma^o,x^o,y^o\}$ to be optimal, it is necessary that there exists a pair $\{\psi_1,\psi_2\}\in B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$ satisfying the following inequality and the pair of stochastic adjoint equations:

$$E\int_0^T \big\langle\psi_1(t),\;\Gamma(y^o(t))-\Gamma^o(y^o(t))\big\rangle\,dt \ge 0\quad \forall\,\Gamma\in A_{ad}, \qquad (6.154)$$

$$-d\psi_1 = (D_1F)^*(t,x^o)\psi_1\,dt + (D_1C)^*(t,x^o;\psi_1)\,\gamma(dt) + (D_1H_0)^*(t,x^o)\psi_2\,dt + Q_1(t,x^o)\psi_1\,dt + \ell_x(t,x^o,y^o)\,dt + D_1G(t,x^o;\psi_1)\,dw,\quad \psi_1(T) = \Phi_x(x^o(T),y^o(T)); \qquad (6.155)$$

$$-d\psi_2 = (D_2F_0)^*(t,y^o)\psi_2\,dt + (D_2\Gamma^o)^*(y^o)\psi_1\,dt + Q_2(t,y^o)\psi_2\,dt + \ell_y(t,x^o,y^o)\,dt + D_2G_0(t,y^o;\psi_2)\,dw_0,\quad \psi_2(T) = \Phi_y(x^o(T),y^o(T)), \qquad (6.156)$$

where the operators $\{Q_1,Q_2\}$ are identified in the proof, and the pair $\{x^o,y^o\}$ is the solution of the pair of stochastic differential equations (6.134)–(6.135) corresponding to the feedback control law $\Gamma^o$.

Proof Let $\Gamma^o\in A_{ad}$ denote the optimal feedback law, and define $\Gamma^\varepsilon = \Gamma^o + \varepsilon(\Gamma-\Gamma^o)$ for any $\Gamma\in A_{ad}$ and $\varepsilon\in[0,1]$. One can easily verify that the set $A_{ad}$ is convex and closed. So $\Gamma^\varepsilon\in A_{ad}$, and hence, by virtue of optimality of $\Gamma^o$, it is necessary that $J(\Gamma^\varepsilon)\ge J(\Gamma^o)$ for all $\Gamma\in A_{ad}$. Let $(x^\varepsilon,y^\varepsilon)$ and $(x^o,y^o)$, both in $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$, denote the solutions of Eqs. (6.134)–(6.135) corresponding to the feedback laws $\Gamma^\varepsilon$ and $\Gamma^o$, respectively. Computing the Gâteaux differential of $J$ at $\Gamma^o$ in the direction $(\Gamma-\Gamma^o)$, we arrive at the following inequality:

$$dJ(\Gamma^o;\Gamma-\Gamma^o) = E\int_I \big(\langle\ell_x(t,x^o,y^o),z_1\rangle + \langle\ell_y(t,x^o,y^o),z_2\rangle\big)\,dt + E\big(\langle\Phi_x(x^o(T),y^o(T)),z_1(T)\rangle + \langle\Phi_y(x^o(T),y^o(T)),z_2(T)\rangle\big) \ge 0\quad \forall\,\Gamma\in A_{ad}, \qquad (6.157)$$

where, for each $t\in I$, $z_1(t) \equiv \lim_{\varepsilon\downarrow 0}(1/\varepsilon)(x^\varepsilon(t)-x^o(t))$ and $z_2(t) \equiv \lim_{\varepsilon\downarrow 0}(1/\varepsilon)(y^\varepsilon(t)-y^o(t))$. The pair $\{z_1,z_2\}$ exists and satisfies the following linear stochastic differential equations (variational equations):

$$dz_1 = D_1F(t,x^o)z_1\,dt + D_2\Gamma^o(y^o)z_2\,dt + \big(\Gamma(y^o)-\Gamma^o(y^o)\big)\,dt + D_1C(t,x^o;z_1)\,\gamma(dt) + D_1G(t,x^o;z_1)\,dw,\quad z_1(0)=0; \qquad (6.158)$$

$$dz_2 = D_2F_0(t,y^o)z_2\,dt + D_1H_0(t,x^o)z_1\,dt + D_2G_0(t,y^o;z_2)\,dw_0,\quad z_2(0)=0. \qquad (6.159)$$

By assumption, all the functions $\{F,C,G,F_0,H_0,G_0\}$, including the elements of $A_{ad}$, have uniformly bounded Gâteaux derivatives. Hence, the pair of variational equations (6.158)–(6.159) has a unique solution $(z_1,z_2)\in B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$, and further the solution depends continuously on the element $\big(\Gamma(y^o)-\Gamma^o(y^o)\big)$. Since $y^o\in B^a_\infty(I,H_m)$, it follows from the properties of the elements of the set $A_{ad}$ that $\Gamma(y^o)-\Gamma^o(y^o)\in B^a_\infty(I,H_n)$, and thus

$$\big(\Gamma(y^o)-\Gamma^o(y^o)\big) \longrightarrow (z_1,z_2) \qquad (6.160)$$

is a continuous linear map from the Banach space $B^a_\infty(I,H_n)$ to the Banach space $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$. In fact, this map is well defined on a larger space: since the variational equations are linear, it is easy to verify that this map is also continuous and linear from $L^a_2(I,H_n)$ to $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$. Define the functional $L$ on $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$ by



$$L(z_1,z_2) \equiv E\int_I \big(\langle\ell_x(t,x^o,y^o),z_1\rangle_{R^n} + \langle\ell_y(t,x^o,y^o),z_2\rangle_{R^m}\big)\,dt + E\big(\langle\Phi_x(x^o(T),y^o(T)),z_1(T)\rangle_{R^n} + \langle\Phi_y(x^o(T),y^o(T)),z_2(T)\rangle_{R^m}\big). \qquad (6.161)$$

It follows from our assumptions on the Gâteaux differentials of the cost integrands $\{\ell,\Phi\}$ (with respect to the $(x,y)$ variables) that the scalar products in the above expression are well defined, and hence the functional $(z_1,z_2)\to L(z_1,z_2)$ is a continuous linear functional on the product space $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$. Since the set $A_{ad}$ is compact (and so bounded), the set of corresponding solutions $\{(x(\Gamma),y(\Gamma)),\ \Gamma\in A_{ad}\}$ of the system (6.134)–(6.135) is contained in a bounded subset of $B^a_\infty(I,H_n)\times B^a_\infty(I,H_m)$. Thus, it follows from the property of the set $A_{ad}$ (see (6.137)) and the fact that $y^o\in B^a_\infty(I,H_m)$ that $\Gamma(y^o)-\Gamma^o(y^o)\in B^a_\infty(I,H_n)\subset L^a_2(I,H_n)$ for all $\Gamma\in A_{ad}$. Hence, the composition map

$$\big((\Gamma(y^o)-\Gamma^o(y^o)),\,0\big) \longrightarrow (z_1,z_2) \longrightarrow L(z_1,z_2) \equiv \tilde L\big((\Gamma(y^o)-\Gamma^o(y^o)),\,0\big) \qquad (6.162)$$


is a continuous linear functional on $L^a_2(I,H_n)\times L^a_2(I,H_m)$. Clearly, for any Hilbert space $H$, the dual of $L^a_2(I,H)$ is given by $L^a_2(I,H)$ itself, and hence the product space $L^a_2(I,H_n)\times L^a_2(I,H_m)$ is self-dual. Thus, there exists a pair $(\psi_1,\psi_2)\in L^a_2(I,H_n)\times L^a_2(I,H_m)$ such that

$$\tilde L\big((\Gamma(y^o)-\Gamma^o(y^o)),\,0\big) = E\int_0^T \big(\langle\Gamma(y^o)-\Gamma^o(y^o),\psi_1(t)\rangle_{R^n} + \langle 0,\psi_2(t)\rangle_{R^m}\big)\,dt. \qquad (6.163)$$

Hence, we obtain

$$\tilde L\big((\Gamma(y^o)-\Gamma^o(y^o)),\,0\big) = E\int_0^T \langle\Gamma(y^o)-\Gamma^o(y^o),\,\psi_1(t)\rangle\,dt. \qquad (6.164)$$

From the inequality (6.157) and the expressions (6.161), (6.162), and (6.164), we arrive at the necessary inequality (6.154). It remains to verify that the pair $(\psi_1,\psi_2)$ satisfies the adjoint equations (6.155)–(6.156). For this, we compute the integrals of the Itô differentials of the scalar product $\langle z_1,\psi_1\rangle + \langle z_2,\psi_2\rangle$. The Itô differential is given by

$$d\big(\langle z_1,\psi_1\rangle + \langle z_2,\psi_2\rangle\big) = \langle dz_1,\psi_1\rangle + \langle z_1,d\psi_1\rangle + \langle\langle dz_1,d\psi_1\rangle\rangle + \langle dz_2,\psi_2\rangle + \langle z_2,d\psi_2\rangle + \langle\langle dz_2,d\psi_2\rangle\rangle, \qquad (6.165)$$

where the double angle bracket denotes the quadratic variation. Integrating the above expression on $I\times\Omega$ and recalling that $z_1(0)=0$, $z_2(0)=0$, we obtain the following expression:

$$E\{\langle z_1(T),\psi_1(T)\rangle + \langle z_2(T),\psi_2(T)\rangle\} = E\int_0^T \{\langle dz_1,\psi_1\rangle + \langle z_1,d\psi_1\rangle\} + E\int_0^T \langle\langle dz_1,d\psi_1\rangle\rangle + E\int_0^T \{\langle dz_2,\psi_2\rangle + \langle z_2,d\psi_2\rangle\} + E\int_0^T \langle\langle dz_2,d\psi_2\rangle\rangle. \qquad (6.166)$$

For convenience of presentation, let us denote the quadratic variation terms by

$$QV_1 \equiv E\int_0^T \langle\langle dz_1,d\psi_1\rangle\rangle \quad\text{and}\quad QV_2 \equiv E\int_0^T \langle\langle dz_2,d\psi_2\rangle\rangle.$$

262

6 Stochastic Systems Controlled by Vector Measures

Using the variational equations (6.158)–(6.159) in the expression (6.166) (while ignoring the quadratic variation terms temporarily) and integrating by parts, we obtain

$E\int_0^T \{\langle dz_1, \psi_1\rangle + \langle z_1, d\psi_1\rangle\} + E\int_0^T \{\langle dz_2, \psi_2\rangle + \langle z_2, d\psi_2\rangle\}$
$= E\int_0^T \big\langle z_1,\ d\psi_1 + (D_1 F)^*(t, x^o)\psi_1\,dt + (D_1 C)^*(t, x^o; \psi_1)\gamma(dt) + D_1 G(t, x^o; \psi_1)\,dw + (D_1 H_0)^*(t, x^o)\psi_2\,dt \big\rangle$
$+ E\int_0^T \big\langle z_2,\ d\psi_2 + (D_2 F_0)^*(t, y^o)\psi_2\,dt + (D_2 \Gamma^o(y^o))^*\psi_1\,dt + D_2 G_0(t, y^o; \psi_2)\,dw_0 \big\rangle.$  (6.167)

It follows from the above expression and Eq. (6.166) that the Itô differentials of the adjoint variables $\{\psi_1, \psi_2\}$ cannot contain any martingale terms other than those already appearing in the above expression. Using the martingale terms in the variational equations (6.158)–(6.159) and those in the above equation, one can verify that the quadratic variation terms are given by

$QV_1 \equiv E\int_0^T \langle\langle dz_1, d\psi_1\rangle\rangle = E\int_0^T \big\langle\big\langle D_1 G(t, x^o; z_1)\,dw,\ (-1)D_1 G(t, x^o; \psi_1)\,dw\big\rangle\big\rangle$
$= -E\int_0^T \mathrm{Tr}\big(D_1 G^*(t, x^o; z_1)\, D_1 G(t, x^o; \psi_1)\big)\,dt \equiv E\int_0^T \langle Q_1(t, x^o)\psi_1, z_1\rangle\,dt,$  (6.168)

and

$QV_2 \equiv E\int_0^T \langle\langle dz_2, d\psi_2\rangle\rangle = E\int_0^T \big\langle\big\langle D_2 G_0(t, y^o; z_2)\,dw_0,\ (-1)D_2 G_0(t, y^o; \psi_2)\,dw_0\big\rangle\big\rangle$
$= -E\int_0^T \mathrm{Tr}\big(D_2 G_0^*(t, y^o; z_2)\, D_2 G_0(t, y^o; \psi_2)\big)\,dt \equiv E\int_0^T \langle Q_2(t, y^o)\psi_2, z_2\rangle\,dt.$  (6.169)


Since the Gâteaux differentials of $G$ and $G_0$ are uniformly bounded and $\{x^o, y^o\}$ are contained in $B^a_\infty(I, H_n) \times B^a_\infty(I, H_m)$, it follows from the above expressions that the two operators $Q_1$ and $Q_2$, giving the quadratic variation terms, are well defined bounded $\mathcal{F}_t$-adapted random processes with values in $L(R^n)$ and $L(R^m)$, respectively, for all $t \in I$, $P$-a.s. It is also clear that they are symmetric and negative. Adding these quadratic variation terms and the expressions given by Eq. (6.167), one can verify that the right-hand expression of Eq. (6.166) is given by

$E\int_0^T \{\langle dz_1, \psi_1\rangle + \langle z_1, d\psi_1\rangle\} + QV_1 + E\int_0^T \{\langle dz_2, \psi_2\rangle + \langle z_2, d\psi_2\rangle\} + QV_2$
$= E\int_0^T \big\langle z_1,\ d\psi_1 + (D_1 F)^*(t, x^o)\psi_1\,dt + (D_1 C)^*(t, x^o; \psi_1)\gamma(dt) + Q_1(t, x^o)\psi_1\,dt + D_1 G(t, x^o; \psi_1)\,dw + (D_1 H_0)^*(t, x^o)\psi_2\,dt \big\rangle$
$+ E\int_0^T \big\langle z_2,\ d\psi_2 + (D_2 F_0)^*(t, y^o)\psi_2\,dt + (D_2 \Gamma^o(y^o))^*\psi_1\,dt + Q_2(t, y^o)\psi_2\,dt + D_2 G_0(t, y^o; \psi_2)\,dw_0 \big\rangle$
$+ E\int_0^T \langle \Gamma(y^o) - \Gamma^o(y^o), \psi_1\rangle\,dt.$  (6.170)

Now taking $\psi_1(T) \equiv \Phi_x(x^o(T), y^o(T))$ and $\psi_2(T) = \Phi_y(x^o(T), y^o(T))$ and setting

$d\psi_1 + (D_1 F)^*(t, x^o)\psi_1\,dt + (D_1 C)^*(t, x^o; \psi_1)\gamma(dt) + Q_1(t, x^o)\psi_1\,dt + D_1 G(t, x^o; \psi_1)\,dw + (D_1 H_0)^*(t, x^o)\psi_2\,dt = -\ell_x(t, x^o(t), y^o(t))\,dt,$  (6.171)

and

$d\psi_2 + (D_2 F_0)^*(t, y^o)\psi_2\,dt + (D_2 \Gamma^o(y^o))^*\psi_1\,dt + Q_2(t, y^o)\psi_2\,dt + D_2 G_0(t, y^o; \psi_2)\,dw_0 = -\ell_y(t, x^o(t), y^o(t))\,dt,$  (6.172)

and using Eqs. (6.166) and (6.170), we arrive at the following identity:

$L(z_1, z_2) = \tilde L(\Gamma - \Gamma^o, 0) = E\int_0^T \langle \Gamma(y^o) - \Gamma^o(y^o), \psi_1\rangle\,dt.$  (6.173)


Hence, it follows from (6.157), (6.162), and (6.164) that

$dJ(\Gamma^o; \Gamma - \Gamma^o) = E\int_0^T \langle \Gamma(y^o) - \Gamma^o(y^o), \psi_1\rangle\,dt \ge 0, \quad \forall\, \Gamma \in A_{ad},$  (6.174)

giving the necessary condition (6.154). The necessary conditions (6.155)–(6.156) follow from Eqs. (6.171) and (6.172) along with the terminal conditions as stated. The covariance operators $Q_1$ and $Q_2$ are given by the expressions (6.168) and (6.169), respectively. It remains to verify that $(\psi_1, \psi_2) \in B^a_\infty(I, H_n) \times B^a_\infty(I, H_m) \subset L^a_2(I, H_n) \times L^a_2(I, H_m)$. By integrating the adjoint equations (6.155)–(6.156) over the interval $[t, T]$, computing the expected values of the squares of the norms of $\psi_1(t)$ and $\psi_2(t)$, and using the Grönwall inequality, one can verify that $\psi_1 \in B^a_\infty(I, H_n)$ and $\psi_2 \in B^a_\infty(I, H_m)$. This completes the proof of all the necessary conditions as stated in the theorem. □

Using the above necessary conditions of optimality, one can prove a convergence theorem similar to Theorem 6.6.5. We leave this for the reader as an exercise.

Some Special Cases Based on the above result, we can derive several interesting special cases. Let $L_\infty(\Omega, L(R^m, R^n))$ denote the space of essentially bounded Borel measurable functions with values in $L(R^m, R^n)$. Let $V \subset L_\infty(\Omega, L(R^m, R^n))$ be a bounded, weak-star closed, and convex set possibly containing zero in its interior. Clearly, by Alaoglu's Theorem 1.11.6, this is a weak-star compact set. Let us introduce the class of linear operator valued $\mathcal{F}_t$-adapted measurable random processes with values in $V$,

$C_{ad} \equiv \{\Gamma : I \longrightarrow L_\infty(\Omega, L(R^m, R^n)) : \Gamma(t) \in V,\ \forall\, t \in I\}.$

Consider the product space $\prod_{t \in I} V_t$, where $V_t = V$ for all $t \in I$, and let this be given the Tychonoff product topology, again denoted by $\tau_p$. Since $V$ is weak-star compact, the set $C_{ad}$ is compact in the $\tau_p$ topology.

Corollary 6.9.6 Consider the partially observed system given by Eqs. (6.134) and (6.135) with linear feedback controls from the set $C_{ad}$. Suppose all the other assumptions of Theorems 6.9.2 and 6.9.5 hold. Then, the necessary conditions of optimality given by Theorem 6.9.5 remain valid with $C_{ad}$ replacing $A_{ad}$, and the optimal feedback law $\Gamma^o \in C_{ad}$ satisfies the following necessary inequality:

$E\int_0^T \langle (\Gamma(t) - \Gamma^o(t))y^o(t), \psi_1(t)\rangle\,dt \ge 0, \quad \forall\, \Gamma \in C_{ad}.$  (6.175)


Using the tensor product of $\psi_1$ and $y^o$, denoted by $L(t) \equiv \psi_1(t) \otimes y^o(t)$, we can rewrite the above inequality in the form of a scalar product as follows:

$E\int_0^T \langle (\Gamma(t) - \Gamma^o(t)), \psi_1(t) \otimes y^o(t)\rangle\,dt \equiv E\int_0^T \mathrm{Tr}\big(L^*(t)(\Gamma(t) - \Gamma^o(t))\big)\,dt \ge 0, \quad \forall\, \Gamma \in C_{ad}.$  (6.176)

Remark 6.9.7 Several interesting special cases follow from the above corollary.

(a) Let $\mathcal{G}_t \subset \mathcal{F}_t$, $t \ge 0$, be a current of complete subsigma algebras of the sigma algebra $\mathcal{F}$, and suppose we want to choose a $\mathcal{G}_t$-adapted feedback law. That is, the elements of $C_{ad}$ are $\mathcal{G}_t$-adapted random processes. In this case, we can rewrite the above inequality as follows:

$E\int_0^T \langle (\Gamma(t) - \Gamma^o(t)), E\{\psi_1(t) \otimes y^o(t) \mid \mathcal{G}_t\}\rangle\,dt \ge 0, \quad \forall\, \Gamma \in C_{ad},$  (6.177)

where $E\{\psi_1(t) \otimes y^o(t) \mid \mathcal{G}_t\}$ denotes the conditional expectation of the rank one tensor $\psi_1(t) \otimes y^o(t)$ relative to the subsigma algebra $\mathcal{G}_t$.

(b) If we want to allow only deterministic feedback laws, then the vector space $L_\infty(\Omega, L(R^m, R^n))$ is replaced by $L(R^m, R^n)$ and the above integral reduces to

$\int_0^T \langle (\Gamma(t) - \Gamma^o(t)), E\{\psi_1(t) \otimes y^o(t)\}\rangle\,dt \ge 0, \quad \forall\, \Gamma \in C_{ad}.$  (6.178)

(c) If we want to choose a constant feedback law, the above inequality should be written as

$\Big\langle (\Gamma - \Gamma^o),\ E\int_0^T \{\psi_1(t) \otimes y^o(t)\}\,dt\Big\rangle \ge 0, \quad \forall\, \Gamma \in C_{ad}.$  (6.179)
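The passage from (6.175) to (6.176) rests on the finite dimensional identity $\langle \Gamma y, \psi\rangle_{R^n} = \mathrm{Tr}(L^*\Gamma)$ for the rank one matrix $L = \psi \otimes y = \psi y^\top$. A minimal numerical check is sketched below; the dimensions and sampled values are arbitrary illustrations, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                           # illustrative state/observation dimensions
psi = rng.standard_normal(n)          # adjoint state psi_1(t) at a fixed t
y = rng.standard_normal(m)            # observed state y^o(t)
Gamma = rng.standard_normal((n, m))   # a feedback gain Gamma(t) in L(R^m, R^n)

L = np.outer(psi, y)                  # rank-one tensor L(t) = psi_1(t) ⊗ y^o(t)

lhs = psi @ (Gamma @ y)               # <Gamma(t) y^o(t), psi_1(t)>
rhs = np.trace(L.T @ Gamma)           # Tr(L*(t) Gamma(t))
assert np.isclose(lhs, rhs)
```

The same identity, applied pointwise in $t$ and $\omega$ before taking expectations, is what turns the variational inequality into a statement about the trace pairing with the rank one process $L(t)$.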

Clearly, the feedback law obtained by using case (a) is expected to perform better than that of (b), which, in turn, is expected to outperform that of case (c). Using the above corollary, we can develop a convergence theorem whereby one can construct a sequence of feedback laws $\{\Gamma^k\}_{k \in \mathbb{N}}$ along which the cost functional monotonically decreases (possibly) to a local minimum on $C_{ad}$.

Theorem 6.9.8 (Convergence Theorem) Consider the system (6.134)–(6.135) with the set of admissible feedback laws $C_{ad}$. Suppose the assumptions of Theorems 6.9.2 and 6.9.5 hold. Then there exists a generalized sequence (net) $\{\Gamma^k\} \subset C_{ad}$ along which the cost functional monotonically converges (possibly) to a local minimum.


Proof First note that $\inf\{J(\Gamma), \Gamma \in C_{ad}\} > -\infty$. This follows from the facts that the set $C_{ad}$ is bounded, and hence the solution set is a bounded subset of $B^a_\infty(I, H_n) \times B^a_\infty(I, H_m)$, and that the cost integrands have quadratic growth. Consider the dual pair of Lebesgue spaces

$\big(L^a_1(I \times \Omega, L(R^m, R^n)),\ L^a_\infty(I \times \Omega, L(R^m, R^n))\big)$

with their norms denoted by $\|\cdot\|_1$ and $\|\cdot\|_\infty$, respectively. The duality map $\mathcal{D}$ corresponding to the above pair is given by

$\mathcal{D}(\xi) \equiv \{\eta \in L^a_\infty(I \times \Omega, L(R^m, R^n)) : \langle \eta, \xi\rangle = \|\eta\|^2_\infty = \|\xi\|^2_1\}$  (6.180)

for $\xi(\ne 0) \in L^a_1(I \times \Omega, L(R^m, R^n))$. To start the algorithm, take any $\Gamma^1 \in C_{ad}$ and solve Eqs. (6.134)–(6.135), and let $(x^1, y^1)$ denote the corresponding solution. Using this pair $(x^1, y^1)$ in place of the pair $(x^o, y^o)$ in the adjoint system of Eqs. (6.155)–(6.156) and solving these equations, one has the adjoint state trajectory $(\psi^1_1, \psi^1_2)$. Using the pair $\{\psi^1_1, y^1\}$ in the necessary condition (6.176), we construct the tensor product $\psi^1_1 \otimes y^1$ and denote this by $\xi_1$. Since $\psi^1_1 \in B^a_\infty(I, H_n) \subset L^a_\infty(I, H_n)$ and $y^1 \in B^a_\infty(I, H_m)$, they have finite second moments, which are uniformly bounded on $I$. Thus, the random process $\xi_1 \in L^a_\infty(I, L_1(\Omega, L(R^m, R^n))) \subset L^a_1(I \times \Omega, L(R^m, R^n))$. Hence, for any $\varepsilon > 0$ and $\eta_1 \in \mathcal{D}(\xi_1)$, we can choose $\Gamma^2 = \Gamma^1 - \varepsilon\eta_1$ so that, for $\varepsilon > 0$ sufficiently small, $\Gamma^2 \in C_{ad}$. If it is impossible to choose such an $\varepsilon > 0$, then $\Gamma^1$ is optimal and it is on the boundary of the convex set $V$. Otherwise, evaluating the functional $J$ at $\Gamma^2$ and using the Lagrange formula, we obtain

$J(\Gamma^2) = J(\Gamma^1) + dJ(\Gamma^1; \Gamma^2 - \Gamma^1) + o(\|\Gamma^2 - \Gamma^1\|) = J(\Gamma^1) - \varepsilon\langle \eta_1, \xi_1\rangle + o(\varepsilon) = J(\Gamma^1) - \varepsilon\|\eta_1\|^2_\infty + o(\varepsilon) = J(\Gamma^1) - \varepsilon\|\xi_1\|^2_1 + o(\varepsilon).$  (6.181)

Thus, for $\varepsilon > 0$ sufficiently small, $J(\Gamma^2) < J(\Gamma^1)$. This completes one iteration. The iterative process is then continued with $\Gamma^2$ in place of $\Gamma^1$. This shows that, following the necessary conditions of optimality given by Corollary 6.9.6, one can construct a sequence $\{\Gamma^k\}_{k \ge 1}$ that satisfies

$J(\Gamma^1) \ge J(\Gamma^2) \ge J(\Gamma^3) \ge \cdots \ge J(\Gamma^n) \ge \cdots.$

Under the assumptions on the cost integrands $\{\ell, \Phi\}$, including the boundedness of the set $C_{ad}$, one can easily verify that $\inf\{J(\Gamma), \Gamma \in C_{ad}\} > -\infty$. Thus, along this sequence, the cost functional $J$ monotonically converges (possibly) to its local minimum. This completes the proof. □

Remark 6.9.9 Following a similar procedure, most of the results presented in the preceding sections can be extended to cover systems subject to both Wiener and Poisson processes.
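The descent step $\Gamma^2 = \Gamma^1 - \varepsilon\eta_1$ in the proof can be mimicked in a finite dimensional toy setting. In the sketch below, a quadratic surrogate cost, a Frobenius-norm ball, and a Euclidean gradient stand in for the functional $J$, the weak-star compact convex set $V$ (hence $C_{ad}$), and the direction $\xi_k = \psi^k_1 \otimes y^k$ delivered by the adjoint system; all of these stand-ins are illustrative assumptions, not objects from the theorem.

```python
import numpy as np

# Toy stand-in for the scheme Gamma_{k+1} = Gamma_k - eps * eta_k of Theorem 6.9.8:
# quadratic cost J, Frobenius ball in the role of the bounded convex set V.
rng = np.random.default_rng(1)
target = rng.standard_normal((3, 2))
target /= 2 * np.linalg.norm(target)          # optimum placed inside the ball

def J(Gamma):
    return 0.5 * np.linalg.norm(Gamma - target) ** 2

def project(Gamma):                           # projection onto V = {||G||_F <= 1}
    nrm = np.linalg.norm(Gamma)
    return Gamma if nrm <= 1.0 else Gamma / nrm

Gamma = project(rng.standard_normal((3, 2)))
costs = [J(Gamma)]
for _ in range(50):
    xi = Gamma - target                       # gradient, role of psi_1^k ⊗ y^k
    Gamma = project(Gamma - 0.2 * xi)         # descent step, then return to V
    costs.append(J(Gamma))

# Monotone decrease J(Gamma^1) >= J(Gamma^2) >= ..., as in the theorem
assert all(a >= b - 1e-12 for a, b in zip(costs, costs[1:]))
```

The projection keeps the iterates inside the convex set, mirroring the choice of $\varepsilon$ small enough that $\Gamma^{k+1} \in C_{ad}$; since the projection onto a convex set containing the minimizer is nonexpansive, the cost cannot increase at the projection step either.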

6.10 Bellman’s Principle of Optimality One of the most well-known techniques in optimal control theory is Bellman’s principle of optimality, which is the essence of dynamic programming. This concept was originally created by Bellman [37] making a lasting impact on the subject of control and decision-making. The basic philosophy is very simple: nothing can be done to change the past, but judicious policies can shape the future. Originally, this concept was used to develop optimal feedback control policies for dynamic systems governed by ordinary and stochastic differential equations by many authors, such as Lions [79], Ahmed [2, 14], Fleming and Soner [63], Bardi and CapuzzoDolcetta [34], Yong and Zhou [107], Crandall and Bardi [35], and Motta [84]. This technique leads to what is known as the Hamilton–Jacob–Bellman (HJB) equation for the so-called value function. Huang, Wang, and Teo [74] developed some computational techniques for numerical solutions of such HJB equations. In fact, HJB equations are highly nonlinear PDEs on finite dimensional spaces with the dimension determined by the dimension of the original system of equations. An excellent review of numerical methods for nonlinear PDEs can be found in [97]. Many researchers in the field have pushed the subject forward and covered both deterministic and stochastic systems on infinite dimensional spaces which clearly include PDEs. For details, the reader is referred to Barbu and Da Prato [33], Ahmed [3, 6, 11], and Fabbri, Gozzi, and Swiech [60]. Again, this leads to HJB equations and hence nonlinear PDEs on infinite dimensional spaces. We have mentioned only a few that we are familiar with and apologize for missing those who may have made equally valuable contributions in this area. We present here two examples, one on ordinary differential equation and another on stochastic differential equation on finite dimensional spaces. 
Deterministic Systems Consider the following control problem in $R^n$:

$\dot\xi(s) = F(s, \xi(s), u(s)), \quad s \in I \equiv [0, T], \quad \xi(0) = x_0 \in R^n,$  (6.182)

with the cost functional given by

$J(u) = \int_0^T \ell(s, \xi(s), u(s))\,ds + \Phi(\xi(T)),$  (6.183)

where $u \in U_{ad}$. The set of admissible controls $U_{ad}$ consists of measurable functions defined on $I$ and taking values in a compact metric space $U$. The problem is to find a control policy that minimizes the functional $J$. We assume that the data $\{\ell, F, \Phi\}$ satisfy standard regularity properties with respect to their arguments on $I \times R^n \times U$. Suppose at time $t \in I$ the system is in state $x \in R^n$, and let $\xi^u_{t,x}(s)$, $s \in [t, T]$, denote the trajectory of the solution process corresponding to control policy $u$ over the remaining time period. Let

$\hat J(t, x, u) \equiv \int_t^T \ell(s, \xi^u_{t,x}(s), u(s))\,ds + \Phi(\xi^u_{t,x}(T))$  (6.184)


denote the cost functional over the same remaining time interval $[t, T]$. Define the value function

$V(t, x) \equiv \inf\big\{\hat J(t, x, u),\ u \in U_{[t,T]} \subset U_{ad}\big\}.$

Clearly, by virtue of the basic principle of optimality, one can express this functional as follows:

$V(t, x) = \inf_{u \in U_{[t,T]}} \Big\{\int_t^{t+\Delta t} \ell(s, \xi^u_{t,x}(s), u(s))\,ds + V(t + \Delta t, \xi^u_{t,x}(t + \Delta t))\Big\}.$  (6.185)

Using the Taylor approximation

$V(t + \Delta t, \xi^u_{t,x}(t + \Delta t)) \simeq V(t + \Delta t, x) + \langle D_x V(t + \Delta t, x), F(t, x, u)\rangle\,\Delta t + o(\Delta t)$

in the above expression, rearranging terms suitably, dividing by $\Delta t$, and letting $\Delta t \to 0$, one arrives at the following expression:

$-\partial_t V = \inf\big\{\ell(t, x, v) + \langle DV(t, x), F(t, x, v)\rangle_{R^n},\ v \in U\big\}.$  (6.186)

Clearly, it follows from the expressions (6.184) and (6.185) that $V(T, x) = \Phi(x)$. Since $U$ is compact and the functions $\{\ell, F\}$ are continuous in the control variable, the infimum is attained, giving the following HJB equation with terminal condition:

$-\partial_t V = H_o(t, x, DV), \quad (t, x) \in I \times R^n,$  (6.187)
$V(T, x) = \Phi(x), \quad x \in R^n,$  (6.188)

where the function $H_o$ is given by the right-hand expression of Eq. (6.186). This is a first order HJB equation.

Stochastic Systems Next, we consider the following controlled stochastic differential equation subject to Brownian motion:

$d\xi(s) = F(s, \xi(s), u(s))\,ds + G(s, \xi(s), u(s))\,dW(s), \quad s \in I, \quad \xi(0) = x_0 \in R^n,$  (6.189)

where $I \equiv [0, T]$ and the data $\{F, G\}$ satisfy standard assumptions including regularity properties. Let $(\Omega, \mathcal{F}, \mathcal{F}_{t \ge 0}, P)$ denote the complete filtered probability space, with $x_0$ being $\mathcal{F}_0$-measurable and having finite second moment, while the Brownian motion $W$ and the control process $u$ are $\mathcal{F}_t$-measurable processes, with $u$ taking values from a compact metric space $U$. We use the symbol $U^a_{ad}$ to denote the class


of admissible ($\mathcal{F}_t$-adapted) controls. The objective is to find a control policy that minimizes the following cost functional:

$J(u) = E\Big\{\int_0^T \ell(s, \xi(s), u(s))\,ds + \Phi(\xi(T))\Big\}.$  (6.190)

Again, using $\{\xi^u_{t,x}(s), s \in [t, T]\}$ to denote the solution process corresponding to control policy $u$ and starting at time $t$ from state $x \in R^n$, the functional representing the cost (to go) over the same period is given by

$\hat J(t, x, u) \equiv E\Big\{\int_t^T \ell(s, \xi^u_{t,x}(s), u(s))\,ds + \Phi(\xi^u_{t,x}(T))\Big\}.$  (6.191)

Next, we define the value function as

$V(t, x) \equiv \inf\big\{\hat J(t, x, u),\ u \in U^a_{ad}\big\}.$  (6.192)

Again, using the Taylor approximation, recalling the Itô differential rule, and retaining all terms of order $\Delta t$ and less, we obtain

$V(t + \Delta t, \xi^u_{t,x}(t + \Delta t)) \simeq V(t + \Delta t, x) + \langle D_x V(t + \Delta t, x), F(t, x, u)\rangle\,\Delta t + \langle D_x V(t + \Delta t, x), G(t, x, u)\Delta W\rangle + (1/2)\langle D^2_x V(t + \Delta t)G(t, x, u)\Delta W, G(t, x, u)\Delta W\rangle + o(\Delta t),$

where $\Delta W = W(t + \Delta t) - W(t)$. Substituting this in the expression

$V(t, x) = \inf_{u \in U_{[t,T]}} E\Big\{\int_t^{t+\Delta t} \ell(s, \xi^u_{t,x}(s), u(s))\,ds + V(t + \Delta t, \xi^u_{t,x}(t + \Delta t))\Big\},$

and using the principle of optimality, we arrive at the following identity:

$-\partial_t V = \inf\big\{\ell(t, x, v) + \langle DV(t, x), F(t, x, v)\rangle_{R^n} + (1/2)\mathrm{Tr}(G^*(t, x, v) D^2_x V\, G(t, x, v)),\ v \in U\big\}.$  (6.193)

By letting $t \to T$, it follows from the expression (6.191) that $V(T, x) = \Phi(x)$. This leads to the HJB equation,

$-\partial_t V = H_c(t, x, DV, D^2 V), \quad (t, x) \in I \times R^n,$  (6.194)
$V(T, x) = \Phi(x), \quad x \in R^n,$  (6.195)


where $H_c$ denotes the function obtained after the infimum operation has been performed in Eq. (6.193). Given the solution $V$, the optimal cost is given by $V(0, x)$ for $x = x_0$. In case the initial state is random, it is given by

$E V(0, x_0) = \int_{R^n} V(0, \zeta)\,\mu_0(d\zeta),$

where $\mu_0$ is the probability law of the initial state. The optimal control law is determined from the solution of the above HJB equation and the expression for $H_c$. Given the solution $V$, the feedback control law is given by

$u(t, x) = \arg\inf\big\{\ell(t, x, v) + \langle DV(t, x), F(t, x, v)\rangle_{R^n} + (1/2)\mathrm{Tr}(G^*(t, x, v) D^2 V\, G(t, x, v)),\ v \in U\big\}.$

So far we have considered stochastic systems driven by Brownian motion. Let us now consider systems driven by both Brownian motion and the compensated Poisson process $q$ (also called the Lévy process),

$dx(t) = F(t, x, u)\,dt + G(t, x, u)\,dw + \int_{D_0} K(t, x(t), v)\,q(dt, dv), \quad x(0) = x_0, \quad t \in I,$  (6.196)

with the same cost functional as given by the expression (6.190). The jump kernel $K : I \times R^n \times D_0 \longrightarrow R^n$ satisfies the standard assumption

$\int_{I \times D_0} \|K(t, x, v)\|^2_{R^n}\,dt\,\Pi(dv) < \infty, \quad \forall\, x \in R^n.$

For simplicity and clarity, we have assumed that there is no control in the last term representing the jump process. Using the Taylor approximation and the Itô differential rule and following the same procedure as in the continuous case, one can derive the following HJB equation, which is an integro-partial-differential equation:

$-\partial_t V = H_c(t, x, DV, D^2 V) + \int_{D_0} \big[V(t, x + K(t, x, z)) - V(t, x) - \langle DV, K(t, x, z)\rangle\big]\,\Pi(dz),$
$V(T, x) = \Phi(x), \quad (t, x) \in I \times R^n,$  (6.197)

where $H_c$ is the same as in Eq. (6.194) and $\Pi$ is the Lévy measure. Performing the minimization operation giving $H_c$, one can expect the control law to be a highly nonlinear function of the variables $\{DV, D^2 V\}$. In general, HJB equations


do not have classical solutions, that is, solutions in the class $C^{1,2}(I \times R^n)$. Thus, one looks for generalized solutions (including viscosity solutions), which are difficult to construct. For details on generalized solutions for finite dimensional problems, interested readers are referred to Lions [79], Fleming and Soner [63], Bardi and Capuzzo-Dolcetta [34], Yong and Zhou [107], Crandall and Bardi [35], and Motta [84]. For infinite dimensional problems, the reader is referred to Barbu and Da Prato [33], Ahmed [3, 6, 11], Fabbri, Gozzi, and Swiech [60], Goldys and Maslowski [66], Gozzi and Rouy [67], and the references therein. For applications of HJB equations to some simpler practical problems, such as computer communication networks and related traffic flow control, interested readers may see [14] and the references therein.

The necessary conditions of optimality presented in this monograph involve the state equation, the adjoint equation, and an inequality, which may appear challenging. In contrast, it is very satisfying to note that Bellman's principle of optimality reduces the optimal control problem to a single HJB equation. However, there are some serious theoretical and practical challenges presented by the HJB approach. The theoretical challenges are (1) the question of existence of viscosity solutions of highly nonlinear PDEs (the HJB equations) and (2) the fact that the HJB approach is applicable only to fully observed control problems, thereby excluding the more important partially observed problems. The practical challenges are (1) numerical solutions of nonlinear PDEs in high spatial dimensions (equal to the dimension of the state space) are extremely difficult to obtain [97], (2) numerical problems arising from the size of the domain $\Omega = R^n$ are equally difficult, and (3) construction of the feedback law from the numerical solution of the HJB equation is very much nontrivial.
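To make the computational side concrete in the simplest possible setting, the recursion (6.185) can be discretized directly on a grid for a one dimensional deterministic problem. The model, grid sizes, and control set below are illustrative choices, and the scheme is a naive sketch rather than a convergence-guaranteed viscosity method.

```python
import numpy as np

# Naive backward dynamic programming for the 1-D deterministic problem
#   x' = v, v in [-1, 1], running cost x^2, terminal cost x^2,
# i.e. a direct discretization of the recursion (6.185).
T, nt, nx = 1.0, 100, 201
dt = T / nt
xs = np.linspace(-2.0, 2.0, nx)      # state grid (np.interp clamps at the ends)
vs = np.linspace(-1.0, 1.0, 21)      # discretized control set U

V = xs ** 2                          # terminal condition V(T, x) = Phi(x)
for _ in range(nt):                  # march backward in time
    # candidate cost for each control v: l(x) dt + V(t + dt, x + v dt)
    cand = [xs ** 2 * dt + np.interp(xs + v * dt, xs, V) for v in vs]
    V = np.min(cand, axis=0)         # pointwise infimum over v, as in (6.186)

i0 = nx // 2                         # grid point nearest x = 0
assert V[i0] < 1e-8                  # staying at the origin costs nothing
assert np.all(V >= 0.0)
```

Even this toy example hints at the difficulty discussed above: the grid has $O(N^n)$ points in dimension $n$, and extracting a feedback law still requires storing the minimizing control at every grid node.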
So from the perspective of applications, even for dimension $n = 3$, it is a challenging task to construct the feedback law from the numerical data. This is one of the reasons why we believe that, for applications, the necessary conditions of optimality presented in this monograph are simpler and preferable. Furthermore, it is important to mention that there is no HJB equation for deterministic as well as stochastic differential equations controlled by vector measures unless the measures are absolutely continuous with respect to the Lebesgue measure. We believe this is also one of the limitations of the HJB approach. However, from a theoretical point of view, it is interesting to carry on research in this direction with the goal of constructing HJB equations (in some weak sense) for control problems involving vector measures without imposing undue constraints. Though this is theoretically interesting, it suffers from limitations similar to those of the classical HJB equations mentioned above. For illustration, we consider a simple deterministic system like

$d\xi = F(t, \xi)\,dt + C(t, \xi)\,\gamma(dt), \quad t \in I = [0, T], \quad \xi(0) = x_0,$  (6.198)

with the cost functional

$J(\gamma) = \int_0^T \ell(s, \xi(s))\,ds + \Phi(\xi(T)).$  (6.199)

Let $M_{ca}(I, R^d)$ denote the class of countably additive bounded $R^d$-valued vector measures and $M_\nu \subset M(I, R^d)$ a weakly compact set consisting of vector measures which are uniformly absolutely continuous with respect to the nonnegative scalar-valued measure $\nu$. Then $M_\nu \cong L_\nu \subset L_1(\nu, R^d)$, and $L_\nu$ is weakly compact since $M_\nu$ is. Let $\lambda$ denote the Lebesgue measure on the real line, and assume that for every $\sigma \subset I$, the measure $\nu$ satisfies $(\nu(\sigma))^2 = o((\lambda(\sigma))^{1+\alpha})$ for some $\alpha \in (0, 1)$. Then, using Bellman's principle of optimality, one can derive the following generalized HJB equation for the value function $V$:

$-dV = \ell(t, x)\,dt + \langle F(t, x), DV\rangle_{R^n}\,dt + H_0(t, x, DV)\,\nu(dt), \quad (t, x) \in I \times R^n,$
$V(T, x) = \Phi(x), \quad x \in R^n,$  (6.200)

where the function $H_0$ is the one that satisfies the following identity:

$\inf_{\gamma \in M_\nu} \int_0^T \langle C^*(t, x) DV(t, x), \gamma(dt)\rangle_{R^d} = \inf_{g \in L_\nu} \int_0^T \langle C^*(t, x) DV(t, x), g(t)\rangle_{R^d}\,\nu(dt) = \int_0^T H_0(t, x, DV(t, x))\,\nu(dt).$

Here we have used the weak compactness of the set $L_\nu$ and the Lebesgue density argument to obtain the last term in Eq. (6.200). Clearly, if $\nu$ is absolutely continuous with respect to the Lebesgue measure, we obtain the classical HJB equation. The assumption on the measure $\nu$ allows us to eliminate all terms containing second and higher order in $\nu$. Similarly, one can derive such generalized Bellman equations for stochastic systems, giving second order HJB equations in the weak sense. In view of the above observation, one may also consider the cost functional

$J(\gamma) = \int_0^T \ell(s, \xi(s))\,\nu(ds) + \Phi(\xi(T)),$  (6.201)

which directly penalizes the cost of control, requiring a minor modification of Eq. (6.200) with $dt$ replaced by $\nu(dt)$. The question of existence of solutions of these generalized HJB equations in some weak sense is left as an open problem.

6.11 Bibliographical Notes

Most of the results presented in this chapter appear for the first time in this monograph. Recently, one of the authors published some results on partially observed feedback control problems in infinite dimensional Banach spaces [19, 20]. Readers interested in systems governed by PDEs and, more generally, abstract differential equations on infinite dimensional Banach spaces may find these papers useful and interesting.

Chapter 7

Applications to Physical Examples

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5_7

In this chapter we present numerical results for several practical examples. More extensive computational work can be carried out based on all the necessary conditions of optimality presented in this book. There is a broad range of practical problems that can be solved using the theoretical techniques developed in this book, which, however, are not fully covered in this chapter. This is left for readers interested in numerical computations. For the purpose of illustration, here we consider the following examples: (1) cancer treatment by immunotherapy; (2) attitude control of geosynchronous satellites; (3) a prey-predator model with population control; (4) control and stabilization of building maintenance units subject to wind gust; and (5) control of a stochastic system.

Impulsive control, one of the branches of optimal control theory, has gained particular interest in applications over the past few decades. There are many practical examples, such as population dynamics [87], harvesting [109], and investment portfolio [75], which can be mathematically modeled as impulsive dynamic systems. As seen in Chaps. 3 and 5, a purely impulsive system is modeled by a set of differential and algebraic equations describing continuous evolution of the state followed by jumps. In order to achieve a desired objective, one can use the optimal control theory developed in this book for such systems to determine the optimal sizes of the impulsive controls and the optimal time instants of their application.

For scalar-valued positive measures, a notion of robust solutions based on the so-called graph completion method (time re-scaling) was developed in [47, 54, 94]. This technique does not appear to admit extension to systems driven by the vector measures considered in this book. One of the effective computational methods is the control parametrization technique [98], which involves time scaling transformation and constraint transcription [77, 104]. Essentially, a sequence of approximate optimal control problems is obtained and indirectly solved with periodic boundary conditions. In addition, the aforementioned studies have considered impulsive systems described by an explicit scheme. Though convenient for numerical work, the explicit scheme is not able to fully capture the intrinsic interactions between the system states and the impulsive forces at each of the jump points [13]. This requires an implicit scheme and a more rigorous treatment based on a suitable fixed point theorem. In this book, we have considered both explicit and implicit schemes describing impulsive systems. We develop an algorithm that can directly address the problem without requiring any transformation or approximation, and also prove its convergence. This algorithm can automatically select the optimal set of time instants from a specified set at which to apply impulsive forces with appropriate sizes and eliminate unnecessary ones. This appears to be complementary to the approach proposed in [77, 104]. Further, it significantly reduces the complexity, since it does not require time scaling transformation and constraint transcription. In addition, we develop a second algorithm that can automatically adjust the times and intensities of impulsive controls (forces) to optimize the performance of the system.

7.1 Numerical Algorithms

In this section we consider discrete measures as controls. We first study the case where a set of potential jump points is given. In other words, the impulsive controls may be applied only at some of these given time instants, automatically chosen by the algorithm. The corresponding computational algorithm developed is presented in Sect. 7.1.1. In the second case, we consider the same dynamic system with variable time instants at which impulsive controls may be applied. The associated algorithm is presented in Sect. 7.1.2. This algorithm chooses optimal time instants for application of impulsive controls with optimum intensities.

Case 1 For the first case, we consider the system given by (3.56)–(3.57), which is reproduced as follows:

$\dot x(t) = F(t, x(t)), \quad x(0-) = x_0, \quad t \in I \setminus I_0;$  (7.1)
$x(t_i) = x(t_i-) + \alpha_i a_i G(t_i, x(t_i), v_i), \quad t_i \in I_0,$  (7.2)

where we have chosen $\mu(d\xi \times dt) = \sum \alpha_i a_i \delta_{v_i}(d\xi)\delta_{t_i}(dt)$. The measure $m_0$ associated with the cost functional is given by

$m_0(d\xi \times dt) \equiv \sum \alpha_i \delta_{v_i}(d\xi)\delta_{t_i}(dt),$

where $\alpha_i \ge 0$. Let $\Lambda \equiv [-1, +1]$ and let $U \subset R^d$ be a compact set, and let $U_d$ denote the set of admissible control parameters given by

$U_d \equiv \{w = \{a, v\} : a_i \in \Lambda,\ v_i \in U;\ i = 0, 1, 2, \cdots, \kappa\}.$
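Before turning to optimization, it is useful to see how a trajectory of (7.1)–(7.2) is generated for a fixed parameter $w = (a, v)$. The sketch below uses an illustrative scalar drift $F$ and jump map $G$, not taken from the text, and absorbs the weights $\alpha_i$ into the $a_i$. Since (7.2) is implicit in $x(t_i)$, each jump is resolved by successive substitution, which converges here because the jump map is contractive for the chosen intensities.

```python
import numpy as np

# Euler simulation of (7.1)-(7.2) for an illustrative scalar model.  The
# jump relation x(t_i) = x(t_i-) + a_i G(t_i, x(t_i), v_i) is implicit in
# x(t_i), so it is solved by fixed-point iteration (a contraction here,
# since |a_i| * |dG/dx| < 1 for the chosen data).
def F(t, x):
    return -x                                  # continuous drift

def G(t, x, v):
    return v * np.cos(x)                       # jump map

n_steps = 1000
dt = 1.0 / n_steps
jump_at = {250: (0.8, 1.0), 500: (-0.5, 0.7)}  # step index -> (a_i, v_i)

x = 1.0
for k in range(1, n_steps + 1):
    t = k * dt
    x += dt * F(t, x)                          # continuous evolution (7.1)
    if k in jump_at:
        a, v = jump_at[k]
        x_minus, y = x, x
        for _ in range(50):                    # successive substitution for (7.2)
            y = x_minus + a * G(t, y, v)
        x = y

assert np.isfinite(x) and 0.0 < x < 1.0
```

This is exactly the implicit-scheme issue raised in the chapter introduction: the post-jump state appears on both sides of (7.2), so an explicit update $x(t_i-) + a_i G(t_i, x(t_i-), v_i)$ would miss the interaction between the state and the impulsive force at the jump point.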


In this case the set of admissible controls is given by the following set of discrete measures:

$M_d \equiv \Big\{\mu : \mu(d\xi \times dt) = \sum \alpha_i a_i \delta_{v_i}(d\xi)\delta_{t_i}(dt),\ (a, v) \in U_d\Big\}.$

Hence, the expression for the cost functional (4.46) reduces to the following one:

$J(w) = \sum_{i=0}^{\kappa} \alpha_i \ell(t_i, x(w)(t_i), v_i) + \Phi(x(w)(T)),$  (7.3)

where $x(w) \in B_\infty(I, R^n)$ denotes the solution of the system of Eqs. (7.1)–(7.2) corresponding to the control parameter $w = (a, v) \in U_d$. Since $M_d$ is isomorphic to $U_d$ ($M_d \cong U_d$), we may call $w$ the control. The objective is to find a control that minimizes the functional $J$. To find the optimal policy we use the necessary conditions of optimality given in Chap. 5. For applicability of a gradient based algorithm, we assume that both $\ell$ and $G$ are continuously Gâteaux differentiable in the variable $v \in U$.

Case 2 In the second case, the system dynamics is given by (4.95)–(4.97), as reproduced below:

$\dot x = F(t, x), \quad x(0-) = x_0, \quad t \in I \setminus I_\lambda, \quad I \equiv [0, T],$  (7.4)
$\dot\lambda = g(t), \quad t \in I, \quad \lambda(t_0) = \lambda(0) = 0, \quad g \in L^+_1(I),$  (7.5)
$x(\lambda(t_i)) = x(\lambda(t_i)-) + a_i G(\lambda(t_i), x(\lambda(t_i)), v_i), \quad i \in \{0, 1, \cdots, \kappa\},$  (7.6)

where

$I_\lambda \equiv \big\{0 = \lambda(0) = \lambda(t_0) \le \lambda(t_1) \le \lambda(t_2) \le \cdots \le \lambda(t_\kappa) = \lambda(T) = T\big\}.$

This set is determined by the choice of the function $\lambda$ taken from the admissible class of functions given by the expression (4.99). This controls the time instants at which impulsive forces may be applied. Further, to control the intensities of these impulsive forces we introduce the sets

$A \equiv \{a \in R^{\kappa+1} : |a_i| \le 1,\ i = 0, 1, 2, \cdots, \kappa\},$  (7.7)
$U \equiv \{v : v_i \in U \subset R^d,\ i = 0, 1, 2, \cdots, \kappa\},$  (7.8)

where $U$ is a compact subset of $R^d$. Let $\mathcal{G}$ be any weakly compact subset of $G_T \subset L^+_1(I)$, as defined by the expression (4.98) of Sect. 4.9, and let $D_{ad} \equiv \mathcal{G} \times A \times U$ denote the class of admissible controls. The objective functional is given by

$J(\nu) \equiv \int_0^T \ell(t, x(t))\,dt + \Phi(x(T)) + \Phi_0(x(0)),$  (7.9)

where $\nu \in D_{ad}$.

7.1.1 Numerical Algorithm I Here we consider the dynamic system given by (7.1)–(7.2) subject to the set of admissible controls Ud . The objective functional chosen is given by (7.3). In what follows, we present an algorithm based on the necessary conditions of optimality given by Theorem 5.5.1. First we write the necessary conditions in terms of w ∈ Ud . On the basis of the inequality (5.68) and characterization of the set Ud , we observe that, for wo := (a o , v o ) ∈ Ud to be optimal, it is necessary that the following inequality holds  κ  o o ai ψ(ti ), G(ti , x (ti ), vi ) + ai (ti , x (ti ), vi ) i=0



 κ  aio ψ(ti ), G(ti , x o (ti ), vio ) + aio (ti , x o (ti ), vio ) ,

(7.10)

i=0

where x o and ψ are the solutions of the state and adjoint equations, respectively, corresponding to the control variable wo := {aio , vio }. Define the (cumulative) Hamiltonian-like functional H (a, v, x, ψ) :=

κ 

 ai ψ(ti ), G(ti , x(ti ), vi ) + ai (ti , x(ti ), vi ) .

i=0

Now we can describe the numerical algorithm.

Step 1 Choose any arbitrary element w^1 = (a^1, v^1) ∈ U_d and compute the corresponding solution of Eqs. (7.1)–(7.2), giving x^1 ∈ B_∞(I, R^n). This provides the triple {a^1, v^1, x^1}.

Step 2 Use the triple {a^1, v^1, x^1} in the following system of adjoint equations:

−ψ̇(t) = (DF)^*(t, x^1(t)) ψ(t), ψ(T) = Φ_x(x^1(T)), t ∈ (t_{κ−1}, t_κ = T],   (7.11)

giving ψ(t_{κ−1}+), and

ψ(t_i) = ψ(t_i+) + a_i^1 (DG)^*(t_i, x^1(t_i), v_i^1) ψ(t_i) + a_i^1 ℓ_x(t_i, x^1(t_i), v_i^1), i = κ − 1.   (7.12)

This solves the adjoint equations on the last interval (t_{κ−1}, T]. Continuing this process and solving the adjoint equations backward in time for the indices i ∈ {κ − 1, κ − 2, ··· , 2, 1, 0}, one obtains the adjoint vector ψ^1 := {ψ^1(t), t ∈ I}, which gives the quadruple {a^1, v^1, x^1, ψ^1}.

Step 3 Use the quadruple {a^1, v^1, x^1, ψ^1} in the necessary conditions of optimality to verify whether the following inequality holds:

H(a, v, x^1, ψ^1) ≥ H(a^1, v^1, x^1, ψ^1) ∀ (a, v) ∈ U_d.   (7.13)

If it holds, then (a^1, v^1) is optimal. If not, go to Step 4.

Step 4 Use (a^1, v^1) to generate (a^2, v^2) as follows:

a^2 = a^1 − ε H_a(a^1, v^1, x^1, ψ^1), v^2 = v^1 − ε H_v(a^1, v^1, x^1, ψ^1),

for ε > 0 sufficiently small so that (a^2, v^2) ∈ U_d. Computing the cost functional (7.3) at (a^2, v^2), one can verify that for ε > 0 sufficiently small

J(a^2, v^2) = J(a^1, v^1) − ε ‖H_a(a^1, v^1, x^1, ψ^1)‖²_{R^κ} − ε ‖H_v(a^1, v^1, x^1, ψ^1)‖²_{R^{d×κ}} + o(ε).   (7.14)

Next, use w^2 = (a^2, v^2), go to Step 1, and repeat the process until a prescribed stopping criterion is met.
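The descent iteration of Steps 1–4 can be sketched in code. The listing below is a minimal illustration, not the book's implementation: we assume a toy scalar system with zero drift, so that x(T) = x_0 + Σ a_i v_i, the adjoint ψ is constant, and the jump map is G(t, x, v) = v, with running cost ℓ = 0 and terminal cost Φ(x) = ½(x − x_d)². All function and variable names are ours.

```python
import numpy as np

# Toy instance of Algorithm I (Sect. 7.1.1).  With zero drift the adjoint is
# simply psi(t) = x(T) - x_d, so Steps 1-4 reduce to a projected gradient step
# on the Hamiltonian-like functional H.

def simulate(x0, a, v):
    """Step 1: propagate the state through the impulses (drift is zero)."""
    return x0 + np.sum(a * v)

def cost(x0, a, v, x_d):
    """Terminal cost Phi(x(T)) = 0.5*(x(T) - x_d)^2 (running cost is zero)."""
    return 0.5 * (simulate(x0, a, v) - x_d) ** 2

def algorithm_I(x0, x_d, a, v, eps=0.05, n_iter=500, u_max=5.0):
    a, v = a.copy(), v.copy()
    hist = []
    for _ in range(n_iter):
        xT = simulate(x0, a, v)        # Step 1: state
        psi = xT - x_d                 # Step 2: adjoint (constant here)
        H_a = psi * v                  # dH/da_i = <psi(t_i), v_i>
        H_v = psi * a                  # dH/dv_i = a_i * psi(t_i)
        a = np.clip(a - eps * H_a, -1.0, 1.0)   # Step 4, projected onto A
        v = np.clip(v - eps * H_v, 0.0, u_max)  # Step 4, projected onto U
        hist.append(cost(x0, a, v, x_d))
    return a, v, hist

a0 = np.array([0.5, 0.5, 0.5])
v0 = np.array([1.0, 1.0, 1.0])
a_opt, v_opt, hist = algorithm_I(x0=0.0, x_d=4.0, a=a0, v=v0)
```

With these simplifications the gradient step reproduces the monotone decrease of the cost predicted by (7.14): the recorded values in `hist` decrease toward zero as x(T) approaches x_d.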

7.1.2 Numerical Algorithm II

In this section, we consider the dynamic system given by (7.4)–(7.6) and choose D_ad as the set of admissible controls. The objective functional is given by (7.9). In what follows, we develop an algorithm based on the necessary conditions of optimality given by Theorem 5.9.1. In this case the time instants of application of impulsive forces are also control variables, while in the previous algorithm they are fixed. Because of this, here the necessary conditions are also more complex.

Let us rewrite the inequality (5.100) of the necessary conditions of optimality (Theorem 5.9.1) in a compact form. For each index i ∈ {0, 1, 2, ··· , κ}, define the following components (appearing in the expression (5.100)):

H_i^1(ν^o) ≡ H_i^1(g^o, a^o, v^o) ≡ ⟨ψ(λ^o(t_i)), H(λ^o(t_i), x^o(λ^o(t_i)), x^o(λ^o(t_i)−), a_i^o, v_i^o)⟩,   (7.15)

H_i^2(ν^o) ≡ H_i^2(g^o, a^o, v^o) ≡ ⟨ψ(λ^o(t_i)), G(λ^o(t_i), x^o(λ^o(t_i)), v_i^o)⟩,   (7.16)

H_i^3(ν^o) ≡ H_i^3(g^o, a^o, v^o) ≡ a_i^o (G_v(λ^o(t_i), x^o(λ^o(t_i)), v_i^o))^* ψ(λ^o(t_i)),   (7.17)

where the first two components are scalars and the third component is an element of R^d. Recall that λ^o(t) = ∫_0^t g^o(s) ds, t ∈ I. Using these notations one can rewrite the inequality (5.100) as follows:

Σ_{i=0}^{κ} [ H_i^1(ν^o)(λ(t_i) − λ^o(t_i)) + H_i^2(ν^o)(a_i − a_i^o) + ⟨H_i^3(ν^o), v_i − v_i^o⟩_{R^d} ] ≥ 0, ∀ ν ∈ D_ad.   (7.18)

The detailed computational steps are as follows:

Step 1 Choose ν^1 ≡ {g^1, a^1, v^1} ≅ {λ^1, a^1, v^1} ∈ D_ad and solve the state equations (5.105)–(5.106) corresponding to {λ^1, a^1, v^1} in place of {λ^o, a^o, v^o}, and let x^1 denote the corresponding solution.

Step 2 Use the pair {ν^1, x^1} in the adjoint equations (5.101)–(5.104) and solve for ψ^1. At this stage we have {ν^1, x^1, ψ^1}.

Step 3 In the expressions (7.15)–(7.18), use the triple {ν^1, x^1, ψ^1} in place of {ν^o, x^o, ψ}.

Step 4 Use ν^1 to generate ν^2 ≡ (λ^2, a^2, v^2) as follows:

λ^2(t_i) = λ^1(t_i) − ε H_i^1(ν^1), i ∈ {1, 2, ··· , κ − 1} (see expression (4.99)),   (7.19)

a_i^2 = a_i^1 − ε H_i^2(ν^1), i ∈ {0, 1, 2, ··· , κ},   (7.20)

v_i^2 = v_i^1 − ε H_i^3(ν^1), i ∈ {0, 1, 2, ··· , κ},   (7.21)

where ε > 0 is chosen sufficiently small so that λ^2 lies in the admissible class (4.99), a^2 ∈ A, and v^2 ∈ U. Using the Lagrange formula to compute the cost functional J at ν^2 in terms of the above expressions, one can verify that

J(ν^2) = J(ν^1) − ε Σ_{i=0}^{κ} [ |H_i^1(ν^1)|² + |H_i^2(ν^1)|² + ‖H_i^3(ν^1)‖²_{R^d} ] + o(ε).   (7.22)

Thus, for ε > 0 sufficiently small we have J(ν^2) < J(ν^1).

Step 5 Use ν^2 and return to Step 1; for any prescribed δ > 0, continue the process until |J(ν^n) − J(ν^{n−1})| ≤ δ or a given maximum number of iterations N_max ∈ N is reached.

It is clear from the above description that, for any initial choice ν^1 ∈ D_ad, the algorithm generates a sequence of controls {ν^n} ⊂ D_ad along which the cost functional J(ν^n) monotonically decreases. By virtue of the growth assumptions (A1) and (A2) of Theorem 4.10.2, and the fact that the solution set is bounded, we conclude that inf{J(ν) : ν ∈ D_ad} > −∞. Thus, J(ν^n) converges to a finite number m_0 > −∞ (possibly a local minimum). Since we are dealing with nonlinear problems with non-convex cost functionals, the limit m_0 may depend on the choice of the starting point ν^1. To verify whether further reduction of J is possible, one can repeat the search from different initial choices in the set D_ad and check whether the results converge to the same cost or to different ones. There are various ad hoc but effective techniques for repeating this search process, e.g., the recursive random search technique [25, 106].
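The multi-start idea in the last paragraph can be illustrated with a small sketch of our own (not from the book): a generic local gradient descent stands in for the algorithm of this section, the one-dimensional cost J below is a stand-in with two local minima, and the best of the limits obtained from random admissible starting points is retained.

```python
import numpy as np

def J(x):
    """Stand-in non-convex cost with two local minima (a double well)."""
    return (x**2 - 1.0)**2 + 0.3 * x

def dJ(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def descend(x, eps=0.01, n_iter=2000):
    """Local descent, playing the role of the iteration of Sect. 7.1.2."""
    for _ in range(n_iter):
        x = x - eps * dJ(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=20)   # distinct initial choices
limits = [descend(x) for x in starts]      # local limit from each start
best = min(limits, key=J)                  # keep the best local limit
```

Each run converges to one of the two local minima; comparing the limiting costs over many starts gives exactly the kind of verification suggested above, and more elaborate schemes (e.g., recursive random search) refine where new starting points are drawn.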

7.2 Examples of Physical Systems

In this section, we present several examples of physical systems, with detailed numerical results and interpretations. For some examples we use impulsive controls, and for some others we use vector measures as controls. We apply the methodology proposed in Chap. 5 and the algorithms presented in Sects. 7.1.1 and 7.1.2 to address the control problems formulated for specific examples, and demonstrate their effectiveness by numerical simulations.

7.2.1 Cancer Immunotherapy

We use the algorithm presented in Sect. 7.1.1 to solve an impulsive control problem arising in biomedical engineering. Specifically, we consider a dynamic system describing the interactions between antigens and antibodies in cancer patients [30]. The antigen-antibody interaction is naturally observed in the human immune system. It is known that for each antigen there is a specific antibody that can bind onto it and chemically interact to destroy it. Thus, there may be as many different antibodies as there are antigens [30].

Let B and G denote, respectively, the antibodies and antigens with population densities {B_i, G_i, i = 1, 2, ··· , κ}, where it is assumed that there are κ types of antigens and correspondingly κ different antibodies. The continuous population dynamics of B and G are given by the following system of equations:

Ḃ_i = ( α_i + Σ_{j=1}^{κ} β_{i,j} G_j + Σ_{m=1}^{M} γ_{i,m} u_m ) B_i; B_i(0) = B_{i,0}, i = 1, 2, ··· , κ,   (7.23)

Ġ_i = ( δ_i + Σ_{j=1}^{κ} η_{i,j} B_j ) G_i; G_i(0) = G_{i,0}, i = 1, 2, ··· , κ,   (7.24)

where the parameters appearing in the above equations are described as follows [30]:

(1) α_i denotes the production rate of antibody B_i by the natural immune system;
(2) {β_{i,j}, j = 1, 2, ··· , κ} are the interaction coefficients of the j-th antigen with the i-th antibody, i.e., the impact of the antigen G_j on the antibody B_i;
(3) δ_i denotes the natural intrinsic growth rate of antigens, e.g., cancer cells;
(4) {η_{i,j}, j = 1, 2, ··· , κ} represent the similar conjoint interaction of the population of the j-th antibody with the i-th antigen, i.e., the impact of antibody B_j on antigen G_i;
(5) {u_m, m = 1, 2, ··· , M} are the control drugs used to boost the immune system;
(6) γ_{i,m} is the effect of the m-th control drug on the i-th antibody. If positive, it enhances the immune response; if zero, there is no effect (neutral); if negative, it is harmful to antibody formation.

At a given set of time instants, we aim to administer optimal doses of drugs designed to enhance the immune response of the patient so as to destroy the antigen population or bring it down to a harmless and clinically acceptable level over a given period of time I = [0, T]. For the purpose of numerical illustration, we consider a simple case where there is only one antigen-antibody pair, i.e., B_1 and G_1. Suppose drugs are used at the set of time instants {t_k}. Then the original system dynamics given by (7.23)–(7.24) reduces to the following impulsive control system:

Ḃ_1 = (α_1 + β_1 G_1) B_1; B_1(0) = B_{1,0},   (7.25)

Ġ_1 = (δ_1 + η_1 B_1) G_1; G_1(0) = G_{1,0},   (7.26)

B_1(t_k) = B_1(t_k−) + γ_1 a_k u_k B_1(t_k−), t_k ∈ I,   (7.27)

with β_1, η_1, and γ_1 replacing β_{1,1}, η_{1,1}, and γ_{1,1}, respectively. Note that this system is described by an explicit scheme, which may be justified by the fact that it evolves relatively slowly. To boost the immune system of the patient, control

drugs are administered at the time instants {t_k}. The second term on the right-hand side of (7.27) denotes the effect of the control drug administered at time t_k, with a_k ∈ [−1, 1] determining the impact of the control (drug input) u_k. To bring the antigen population down to a clinically acceptable level, the objective functional can be chosen as

J(u) = Φ(G(T)) = w_1 (G_1(T) − G_1^d)²,   (7.28)

where G_1^d is the desired terminal antigen population, which may be taken as zero if desired, and G_1(T) is the actual antigen population reached at the end of the treatment period. The positive parameter w_1 is the weight (importance) assigned to any discrepancy from the desired level of antigen population. The objective is to determine the optimal doses of the drug u_1 to be used at the set of time instants {t_1 = 5, t_2 = 10, t_3 = 15, t_4 = 20} over the treatment period I = [0, 30] (measured in days), in order to fight (and if possible eliminate) the antigen population G_1. The system parameters are adopted from [30]: α_1 = 0.2, δ_1 = 0.431, β_1 = −0.01, η_1 = −0.08, γ_1 = 1.0, w_1 = 1000. For the numerical experiments, the control constraints are given by −1 ≤ a_k ≤ 1, 0 ≤ u_k ≤ 5. The initial states are chosen as B_1(0) = 0.014 and G_1(0) = 0.22. Note that these values are chosen for the purpose of illustration only. In practice, they can be determined in accordance with the actual health condition of the patient. Figure 7.1 shows the population trajectories of B_1 and G_1 without any drugs administered. It is observed that the population level of antigens grows dramatically due to the lack of sufficient antibodies, which could eventually result in a serious health condition. Figure 7.2 shows the population trajectories of B_1 and G_1 in response to optimal doses of drugs applied at the given set of time instants. The corresponding optimal control inputs obtained are a_1^o = a_2^o = a_3^o = a_4^o = 1.0 and u_1^o = u_2^o = u_3^o = u_4^o = 5.0, which indicates that the maximum doses of drugs are required to fight the rapid growth of antigens given the limited frequency of treatment. It is interesting to note that with optimal doses of drugs administered at the given frequency (4 times over the treatment period), the antigen population is almost completely eliminated, as observed in Fig. 7.2b.
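A direct simulation reproduces the qualitative picture just described. The sketch below is our own (the forward-Euler integrator and step size are our choices, not the book's): it integrates (7.25)–(7.26) with the quoted parameters and applies the reported optimal impulses a_k = 1, u_k = 5 at t = 5, 10, 15, 20 via (7.27).

```python
import numpy as np

def simulate(treated, T=30.0, dt=1e-3):
    """Forward-Euler simulation of the impulsive model (7.25)-(7.27)."""
    a1, d1, b1, e1, g1 = 0.2, 0.431, -0.01, -0.08, 1.0   # quoted parameters
    B, G = 0.014, 0.22                                   # initial populations
    # reported optimal doses (a_k, u_k) at the four treatment instants
    doses = ({5.0: (1.0, 5.0), 10.0: (1.0, 5.0),
              15.0: (1.0, 5.0), 20.0: (1.0, 5.0)} if treated else {})
    for k in range(int(round(T / dt))):
        t = k * dt
        for tk, (ak, uk) in doses.items():
            if abs(t - tk) < dt / 2:          # impulse (7.27): B jumps at t_k
                B = B + g1 * ak * uk * B
        B += dt * (a1 + b1 * G) * B           # (7.25)
        G += dt * (d1 + e1 * B) * G           # (7.26)
    return B, G

B_no, G_no = simulate(treated=False)
B_tr, G_tr = simulate(treated=True)
```

Without treatment the antigen population G_1 grows roughly like e^{0.431 t}; with the four maximal doses the boosted antibody population drives G_1 to a negligible level by day 30, matching the behavior reported in Figs. 7.1 and 7.2.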
Figure 7.3 shows the reduction of the cost functional with the increase of the number of iterations.

Remark 7.2.1 We considered a simple case where there is only one antigen-antibody pair. However, it is straightforward to apply the proposed approach to more complex scenarios involving multiple types of antigens and antibodies.

Fig. 7.1 Population of B1 and G1 without treatment. (a) B1 from 0 to 30 days. (b) G1 from 0 to 30 days

Fig. 7.2 Population of B1 and G1 with treatment. (a) B1 from 0 to 30 days. (b) G1 from 0 to 30 days

7.2.2 Geosynchronous Satellites

In this example, we apply the algorithm presented in Sect. 7.1.1 to solve an impulsive control problem related to the attitude control of geosynchronous satellites. We consider the attitude dynamics of a rigid body satellite in geosynchronous orbit [2, p. 8]. For communication and observatory satellites, it is absolutely necessary to maintain a high degree of pointing accuracy, which requires an accurate model for the attitude dynamics of the satellite. For simplicity, we present here the dynamic system model [2, p. 11] taking into account only the body angular velocities as the state of the system. The autonomous (and continuous) part of the model is described by the following set of differential equations:

I_x ṗ + (I_z − I_y)(q − w_0) r = 0,
I_y q̇ + (I_x − I_z) p r = 0,
I_z ṙ + (I_y − I_x) p (q − w_0) = 0,

Fig. 7.3 Objective functional

where p, q, and r are the body angular velocities of the satellite representing the state. The state vector is denoted by x = (p, q, r). I_x, I_y and I_z are the satellite moments of inertia in the x, y, and z directions of the body coordinate system. For a geosynchronous satellite, w_0 is a constant (w_0 = 7.292 × 10⁻⁵ rad/s) which is the same as the spin velocity of the earth. The discontinuous part of the system, including the controls, is given by

x(t_i) − x(t_i−) = a_i G(t_i, x(t_i), T_i) := ( (a_i/I_x) T_{i,x}, (a_i/I_y) T_{i,y}, (a_i/I_z) T_{i,z} )ᵀ,

where T_i := (T_{i,x}, T_{i,y}, T_{i,z}) denotes the control torque applied at time t_i. The function G represents the jump in the state x, with the size determined by |a_i G|. It is well known that reaction jets are normally used to exert torques on the body of the satellite for its attitude control. Our objective is to maintain a high pointing accuracy (towards the earth) for the satellite. In other words, the satellite is expected to reach the desired state x^d = (0, w_0, 0) at the terminal time. Based on the above objective, we define the following performance functional over the time period I:

J(u) := Σ_{i=1}^{m} (1/2) ϑ_i a_i² ‖T_i‖² + (1/2) ⟨P(x(T) − x^d), (x(T) − x^d)⟩,   (7.29)

where, on the right-hand side, the first term denotes the weighted fuel cost for applying torques using reaction jets at times t_i (i = 1, 2, ··· , m), and the second term represents the terminal cost due to deviation from the desired state, where P is any 3 × 3 positive definite matrix. For the evaluation of system performance, the weighting parameters are set as ϑ_i = 30 (i = 1, 2, ··· , m) and P = diag(4.8 × 10⁴, 4.8 × 10⁴, 4.8 × 10⁴). The initial state of the satellite is assumed to be x(0) = (0.822665, −0.136056, −0.805628)ᵀ rad/s. The satellite moments of inertia are chosen as I_x = 13.6 kg·m², I_y = 15.4 kg·m² and I_z = 12.1 kg·m². The control constraint set is defined as U = {T_{i,x}, T_{i,y}, T_{i,z} : −50 N·m ≤ T_{i,x}, T_{i,y}, T_{i,z} ≤ 50 N·m, i = 1, 2, ··· , m}, and a_i is chosen from [−1, 1]. To start the computation, we choose the control values of a and T as a = (0.8, 0.2, 0.4), and T_1 = (−4.36, 4.36, 4.36)ᵀ, T_2 = (−6.55, 6.55, 6.55)ᵀ, T_3 = (−2.18, 2.18, 2.18)ᵀ. The overall system performance is evaluated over the time horizon I = [0, 8]. For the numerical simulations, we set t_1 = 2, t_2 = 4, t_3 = 6 in the first scenario (m = 3). We run the simulation for 500 iterations and present detailed numerical results in Table 7.1. For economy of space we only show some representative graphs in Figs. 7.4 and 7.5. Table 7.1 shows the simulation results of three scenarios consisting of seven distinct cases. In scenario 1, all three controls T_1, T_2 and T_3 are allowed to be used; in scenario 2, any two (a pair) of these three controls are allowed; and in scenario 3, only one of the three controls is applicable. In scenario 1, we obtain the optimal cost 3243.9732 corresponding to only two impulse controls, applied at the instants t_1 and t_3 determined by the optimality conditions. It is interesting to note that the system automatically abandons the second impulse at t_2 in order to achieve the best performance. Among all three cases in scenario 2, it is observed from the table that the best policy is to apply impulse controls at times t_1 and t_3, resulting in the optimal cost 3249.5554, which is very close to the value obtained in scenario 1.

Table 7.1 Simulation results of geosynchronous satellite

            Scenario 1    Scenario 2                            Scenario 3
Variables   T1, T2, T3    T2, T3     T1, T3     T1, T2     T3         T2         T1
T1,x        −5.9124       N/A        −5.9913    −6.3181    N/A        N/A        −8.3820
T1,y        1.0330        N/A        1.0923     −0.1131    N/A        N/A        1.1998
T1,z        5.2932        N/A        5.3022     5.4231     N/A        N/A        7.5326
T2,x        N/A           −7.0828    N/A        −7.0407    N/A        −8.6210    N/A
T2,y        N/A           2.1263     N/A        3.8731     N/A        0.6689     N/A
T2,z        N/A           6.7054     N/A        6.6672     N/A        7.9742     N/A
T3,x        −3.7645       −3.5003    −3.7778    N/A        −7.0777    N/A        N/A
T3,y        −0.9753       −2.2656    −0.9290    N/A        −1.6107    N/A        N/A
T3,z        3.5917        3.5668     3.6749     N/A        7.7390     N/A        N/A
a1          1             0          1          1          0          0          1
a2          0             0.8615     0          0.4968     0          1          0
a3          1             1          1          0          1          0          0
J           3243.9732     4055.454   3249.5554  4660.2047  6219.7036  6224.6312  5951.7589

Indeed, these two costs are supposed to be identical, since the same impulse control policy would be obtained from the optimality conditions given a sufficient number of iterations; the small difference is due to the limited number of iterations. The results of the three cases where only one impulse is applied are presented in scenario 3. As expected, the cost is larger than those of scenarios 1 and 2. Therefore, the optimal policy automatically determined by the algorithm is to apply impulse controls at t_1 and t_3 with the control values shown for scenario 1 in Table 7.1. Figures 7.4 and 7.5 show representative simulation results of scenarios 1 and 2, respectively. Figure 7.5 presents the best result out of all the possible cases in scenario 2. It is observed that the results in Figs. 7.4 and 7.5 are almost identical because, in scenario 1, the second impulse control (at t_2) is automatically abandoned, which results in the same control policy as obtained in the best case of scenario 2 (Fig. 7.5). It is expected that the optimal costs of these two cases will converge to the same value as the number of iterations increases.
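The attitude model above is easy to simulate directly. The sketch below is our own (the Euler integrator, step size, and function names are our choices): it integrates the rotational equations with the quoted inertia values and applies impulsive torque jumps. It also checks two structural facts of the model: the desired state x^d = (0, w_0, 0) is an equilibrium of the continuous dynamics, and the quantity E = ½(I_x p² + I_y (q − w_0)² + I_z r²) is conserved along the uncontrolled flow, which serves as an integrator sanity check.

```python
import numpy as np

Ix, Iy, Iz = 13.6, 15.4, 12.1      # quoted moments of inertia, kg*m^2
w0 = 7.292e-5                      # earth spin rate, rad/s

def f(x):
    """Right-hand side of the continuous attitude equations."""
    p, q, r = x
    return np.array([-(Iz - Iy) * (q - w0) * r / Ix,
                     -(Ix - Iz) * p * r / Iy,
                     -(Iy - Ix) * p * (q - w0) / Iz])

def energy(x):
    """Quantity conserved by the continuous dynamics (sanity check)."""
    p, q, r = x
    return 0.5 * (Ix * p**2 + Iy * (q - w0)**2 + Iz * r**2)

def simulate(x0, impulses, T=8.0, dt=1e-3):
    """Euler integration with jumps x += a_i * T_i / I at times t_i."""
    x = np.array(x0, dtype=float)
    inertia = np.array([Ix, Iy, Iz])
    for k in range(int(round(T / dt))):
        t = k * dt
        for ti, (ai, Ti) in impulses.items():
            if abs(t - ti) < dt / 2:
                x = x + ai * np.array(Ti) / inertia
        x = x + dt * f(x)
    return x

x_d = np.array([0.0, w0, 0.0])     # desired state: pure earth-rate spin
x_eq = simulate(x_d, impulses={})  # starting at x_d, the state stays put
x0 = np.array([0.822665, -0.136056, -0.805628])
x_end = simulate(x0, impulses={})  # free motion from the quoted initial state
```

Adding entries such as `impulses={2.0: (1.0, (-4.36, 4.36, 4.36))}` reproduces the jump structure used in the scenarios of Table 7.1.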

7.2.3 Prey-Predator Model

In this example we consider impulsive controls not only with variable intensities but also with variable supports (time instants of application) as control variables. We use the algorithm presented in Sect. 7.1.2 to address a control problem arising from the prey-predator model [81]. Consider the following system of equations describing the dynamics of the prey-predator populations of two species, S1 and S2, in a given habitat subject to (population) control by the forestry department:

ẋ_1 = α_1 x_1 − α_2 x_1 x_2,   (7.30)

ẋ_2 = −α_3 x_2 + α_4 x_1 x_2,   (7.31)

x_2(λ(t_i)) = x_2(λ(t_i)−) + a_i tan⁻¹(α x_2(λ(t_i)) v_i), i ∈ {0, 1, 2, ··· , κ}.   (7.32)

The population of species S1 is denoted by x_1 and that of S2 by x_2. The first pair of equations, i.e., (7.30) and (7.31), gives the classical prey-predator model known as the Lotka-Volterra equations [2, 81]. The population of species S2 (predator) depends solely on the population of species S1 (prey) for its food. The parameters {α_i > 0, i = 1, 2, 3, 4} describe the interactions between these two species. In the numerical studies, they are chosen as {α_1 = 0.05, α_2 = 0.03, α_3 = 0.08, α_4 = 0.045}. The third equation, (7.32), describes the schedule of population control executed by the regulators at strategically chosen times {λ(t_i), i = 0, 1, ··· , κ} and by choosing the appropriate variables {a_i, v_i} subject to the constraints |a_i| ≤ 1, |v_i| ≤ 1. It is the optimal choice of the function λ, determining the best schedule of applying (impulsive) controls, that makes the problem more difficult and the associated necessary conditions of optimality more cumbersome. We consider three scenarios in the following numerical studies. In the first scenario (E1), the goal is to promote the growth of population S1 by reducing the

population S2 without much concern for the ecology. In the second scenario (E2), the objective is to control the populations of S1 and S2 while maintaining the natural ecological balance. In the third scenario (E3), we study the impact of the control cost on the overall system performance.

Fig. 7.4 Simulation results of scenario 1. (a) Optimal state trajectories. (b) Objective functional

Scenario E1 In this scenario, the objective of the regulators is to promote the growth of population S1 by controlling the size of the population S2. Thus, a

Fig. 7.5 Representative results of scenario 2. (a) Optimal state trajectories. (b) Objective functional

reasonable objective functional for this case is given by

J(ν) = ∫_0^T [ w_1 (x_2(t))² − w_2 (x_1(t))² ] dt + w_3 (x_2(T))² + w_4 (x_2(0))²,   (7.33)

where {w_i, i = 1, 2, 3, 4} are positive real numbers representing the weights assigned to each term in the objective functional (7.33). The optimization problem is to minimize the cost functional (7.33). In other words, the objective is to reduce the population of S2 while increasing that of S1 by choosing the sizes {a_i, v_i} and the times of application {λ(t_i)} of the impulsive controls. Without loss of generality, we consider the scenario where there are three potential jump instants in the open time interval (0, 100); that is, there are five jump instants in total, including the two boundary (time) points {0} and {100}. To begin the simulation it is assumed that λ¹(t_0) = 0, λ¹(t_1) = 30, λ¹(t_2) = 50, λ¹(t_3) = 80, λ¹(t_4) = 100. The other initial control inputs are chosen as a¹ = (0.4, −0.3, −0.3, 0.4, 0.3) and v¹ = (0.4, −0.3, 0.5, −0.4, 0.4). The initial state of the system before the jump at time {0} is assumed to be x(0−) = (x_1, x_2) = (1, 0.8) units (one unit may stand for a certain number of population). The weighting parameters appearing in the objective functional (7.33) are chosen as {w_1 = 10, w_2 = 5, w_3 = 20, w_4 = 20}, and the parameter α appearing in (7.32) is set as α = 0.5. In this scenario, the state trajectory with no control applied is shown in Fig. 7.6. With the system parameters and initial states specified above, the optimal state trajectory and the associated objective functional are shown in Fig. 7.7a, b, respectively. After N_max = 500 iterations the optimal state trajectories are obtained as shown in Fig. 7.7a, with the optimal impulsive controls {a_0^o = 1, a_1^o = −1, a_2^o = −1, a_3^o = 1, a_4^o = −1; v_0^o = 1, v_1^o = −1, v_2^o = 1, v_3^o = −1, v_4^o = 1} applied at the optimal time instants λ^o(t_0) = 0, λ^o(t_1) = 22.67, λ^o(t_2) = 71.68, λ^o(t_3) = 80.35, λ^o(t_4) = 100. As the number of iterations increases, the value of the objective functional J converges to −74.5, as observed in Fig. 7.7b. Comparing

Fig. 7.6 State trajectory of Scenario E1 without any control applied: x(0−) = (1, 0.8)

Fig. 7.7 Simulation results of Scenario E1 corresponding to optimal impulsive controls. (a) Optimal state trajectory. (b) Objective functional

Fig. 7.7a with Fig. 7.6, it is observed that the population size of S2 is kept relatively low while that of S1 is increased as desired. It is observed in Fig. 7.7b that the objective functional monotonically converges with the increase of the number of iterations. This is due to the fact that the computational algorithm proposed is capable of determining a local minimum based on the optimal direction of descent at each iteration. It is interesting to see how the cost functional is affected if the optimal time instants {λo (ti )} are replaced by non-optimal ones. For this, we use the optimal


Table 7.2 Comparison of J corresponding to optimal and non-optimal controls for E1

Cases         Time instants                                                              Value of J
Optimal(a)    λo(t0) = 0, λo(t1) = 22.67, λo(t2) = 71.68, λo(t3) = 80.35, λo(t4) = 100   −74.5
Non-optimal   λ(t0) = 0, λ(t1) = 20, λ(t2) = 50, λ(t3) = 80, λ(t4) = 100                 653.66
              λ(t0) = 0, λ(t1) = 16, λ(t2) = 50, λ(t3) = 60, λ(t4) = 100                 586.29
              λ(t0) = 0, λ(t1) = 10, λ(t2) = 40, λ(t3) = 70, λ(t4) = 100                 1254
              λ(t0) = 0, λ(t1) = 20, λ(t2) = 60, λ(t3) = 90, λ(t4) = 100                 660.14
              λ(t0) = 0, λ(t1) = 16, λ(t2) = 50, λ(t3) = 90, λ(t4) = 100                 1275.7
              λ(t0) = 0, λ(t1) = 15, λ(t2) = 60, λ(t3) = 75, λ(t4) = 100                 240.96

(a) Comparison of the value of J corresponding to the optimal λo(·) with those corresponding to non-optimal choices of λ(·)

values of the other control variables {a_i^o, v_i^o, i = 0, 1, 2, 3, 4}, and apply them at various non-optimal time instants to verify and compare the results shown in Fig. 7.7. The detailed comparisons are presented in Table 7.2. Clearly, the optimal value of the objective functional is less than those corresponding to impulsive controls applied at time instants other than the optimal ones. Further, the closer the time instants are to their optimal counterparts, the smaller is the difference between the optimal and the non-optimal values of the objective functional.

Scenario E2 It follows from Eqs. (7.30) and (7.31) that, in the absence of controls (determined by (7.32)), there exists a nontrivial equilibrium state of the system given by (x_1^e, x_2^e) = (α_3/α_4, α_1/α_2) = (16/9, 5/3). By computing the eigenvalues of the Jacobian matrix at the equilibrium point (x_1^e, x_2^e), one can verify that it has a pair of purely imaginary (complex conjugate) eigenvalues, and hence this equilibrium point is a "center" (center equilibrium). The goal of the forestry department is to maintain the populations of the two species close to their equilibrium. To achieve this goal, the objective functional is chosen as follows:

J(ν) = ∫_0^T [ w_1 (x_1(t) − x_1^e)² + w_2 (x_2(t) − x_2^e)² ] dt + w_3 (x_1(T) − x_1^e)² + w_4 (x_2(T) − x_2^e)² + w_5 (x_2(0) − x_2^e)².   (7.34)

We run the simulation over the same time interval [0, 100] with the state before the jump x(0−) = (1, 0.4), the initial control intensities a 1 = (−0.4, 0.3, 0.3, −0.4, −0.3) and v 1 = (−0.4, 0.3, −0.5, 0.4, −0.4), and weighting parameters {w1 = 10, w2 = 10, w3 = 20, w4 = 20, w5 = 20} while all the other system settings remain the same as in Scenario E1. In this scenario, the state trajectory without any control applied is shown in Fig. 7.8. Next we use the optimal control. The state trajectory and the associated objective functional corresponding to the optimal control are shown in Fig. 7.9a, b. We note that after Nmax = 500

Fig. 7.8 State trajectory of Scenario E2 without any control applied: x(0−) = (1, 0.4)

iterations the state trajectories obtained are as shown in Fig. 7.9a. These trajectories correspond to the optimal controls described by their intensities {a_0^o = −1, a_1^o = 1, a_2^o = 1, a_3^o = −0.7, a_4^o = −0.316; v_0^o = −1, v_1^o = 1, v_2^o = −1, v_3^o = 0.668, v_4^o = −0.415}, and the time instants of their application {λ^o(t_0) = 0, λ^o(t_1) = 20.56, λ^o(t_2) = 57.22, λ^o(t_3) = 79.39, λ^o(t_4) = 100}. As observed in Fig. 7.9b, the objective functional J converges to 520.9 as the number of iterations increases. Comparing Fig. 7.9a with Fig. 7.8, it is observed that the populations of the two species are maintained closer to their equilibrium values, as desired. In order to further validate the effectiveness of the proposed algorithm, we start the simulation with a set of distinct initial choices of controls. The corresponding values of the objective functional are presented in Table 7.3. It is observed that, within the given (fixed) number of iterations, the final values of J corresponding to distinct initial choices of control inputs converge to a neighborhood of the minimum. These final values of the objective functional are expected to converge to the same local minimum if the number of iterations is sufficiently increased.

Scenario E3 In the previous scenarios we ignored the cost of control. Here we add a quadratic control cost (1/2) Σ q_i a_i², q_i > 0, to the objective functional (7.34) and notice that the program chooses fewer jumps for optimization. The parameters {q_i} are chosen as {q_0 = 200, q_1 = 200, q_2 = 200, q_3 = 1500, q_4 = 200}. We run the simulation for N_max = 500 iterations with a step size ε = 1 × 10⁻⁴, while the other parameters remain the same as those in Scenario E2. The optimal state trajectory and the associated objective functional are shown in Fig. 7.10a, b, respectively. The state trajectories shown in Fig. 7.10a correspond to the optimal

Fig. 7.9 Simulation results of Scenario E2 corresponding to optimal impulsive controls. (a) Optimal state trajectory. (b) Objective functional

controls described by their intensities {a0o = −1, a1o = 1, a2o = 0.778, a3o = −0.023, a4o = −0.016; v0o = −1, v1o = 1, v2o = −1, v3o = 0.467, v4o = −0.420}, and the time instants of their application {λo (t0 ) = 0, λo (t1 ) = 27.68, λo (t2 ) = 51.49, λo (t3 ) = 80, λo (t4 ) = 100}. It is interesting to observe from Fig. 7.10a that, within a limited number of iterations, the jump at λo (t3 ) is almost eliminated. This is because of the larger cost associated with controls at λo (t3 ). Similarly, the jump at the terminal time is also eliminated. This is due to the fact that use of smaller

Table 7.3 Optimal values of J corresponding to distinct initial choices of controls for E2

Case   Initial choices of controls                                        Optimal value of J
1(a)   λ1 = (0, 30, 50, 80, 100); a1 = (−0.4, 0.3, 0.3, −0.4, −0.3);      520.9
       v1 = (−0.4, 0.3, −0.5, 0.4, −0.4)
2      λ1 = (0, 30, 50, 80, 100); a1 = (−0.8, 0.6, 0.5, −0.2, −0.3);      530.1
       v1 = (−0.2, 0.3, 0.5, 0.1, −0.1)
3      λ1 = (0, 30, 50, 80, 100); a1 = (0.6, −0.3, 0.1, 0.2, 0.4);        627.2
       v1 = (0.2, −0.4, 0.3, 0.1, 0.2)
4      λ1 = (0, 30, 50, 80, 100); a1 = (0.1, −0.8, −0.2, 0.1, 0.5);       623.4
       v1 = (0.1, −0.6, −0.2, −0.5, 0.6)
5      λ1 = (0, 20, 40, 60, 100); a1 = (−0.4, 0.3, 0.3, −0.4, −0.3);      584.2
       v1 = (−0.4, 0.3, −0.5, 0.4, −0.4)
6      λ1 = (0, 10, 40, 70, 100); a1 = (−0.4, 0.3, 0.3, −0.4, −0.3);      591.5
       v1 = (−0.4, 0.3, −0.5, 0.4, −0.4)
7      λ1 = (0, 10, 40, 90, 100); a1 = (−0.4, 0.3, 0.3, −0.4, −0.3);      521.9
       v1 = (−0.4, 0.3, −0.5, 0.4, −0.4)

(a) For Scenario E2, the optimal controls corresponding to case 1 are λo = (0, 20.56, 57.22, 79.39, 100), ao = (−1, 1, 1, −0.7, −0.316), vo = (−1, 1, −1, 0.668, −0.415). Optimal controls corresponding to the other cases are determined using the same algorithm and are omitted here

control energy (in terms of a4 ) contributes less to the objective functional compared to the cost due to mismatch of the reachable state from the desired equilibrium.
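The center-type equilibrium invoked in Scenario E2 can be verified numerically. The short check below is our own: it evaluates the equilibrium (α_3/α_4, α_1/α_2) for the quoted parameter values and computes the eigenvalues of the Jacobian of (7.30)–(7.31) there, which come out as the purely imaginary pair ±i√(α_1 α_3).

```python
import numpy as np

# Parameters of (7.30)-(7.31) as used in the numerical studies
a1, a2, a3, a4 = 0.05, 0.03, 0.08, 0.045

x1e, x2e = a3 / a4, a1 / a2            # equilibrium (16/9, 5/3)

# Jacobian of (x1', x2') = (a1*x1 - a2*x1*x2, -a3*x2 + a4*x1*x2) at (x1e, x2e)
Jac = np.array([[a1 - a2 * x2e, -a2 * x1e],
                [a4 * x2e,      -a3 + a4 * x1e]])
eig = np.linalg.eigvals(Jac)
```

The diagonal entries vanish at the equilibrium, so trace(Jac) = 0 while det(Jac) = α_1 α_3 > 0, giving the purely imaginary eigenvalues ±i√(α_1 α_3) ≈ ±0.0632 i, i.e., a center.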

7.2.4 Stabilization of Building Maintenance Units

High-rise buildings normally require regular maintenance for general upkeep and periodic inspection to maintain and guarantee structural integrity. Generally, these activities are carried out using building maintenance units (BMUs), which are suspended from large cranes. In the event of large and gusty winds, the system may be destabilized, risking the lives of the workers on the unit and of the public near the building. To avoid such risk it is essential to equip the BMU with stabilizing controls, which are automatically activated to restore stability in the event of dangerous wind activity. This can be done using reaction jets mounted on the BMU as

Fig. 7.10 Simulation results of Scenario E3 corresponding to optimal controls. (a) Optimal state trajectory. (b) Objective functional

introduced in [100]. The dynamic model of the system with control measures is given by the following system of equations:

dx_1 = −(g/r) x_4 dt + (g/r²) x_5 γ_2(dt) − (g/r²) x_4 γ_3(dt),   (7.35)

dx_2 = (g/r) x_3 dt − (g/r²) x_5 γ_1(dt) + (g/r²) x_3 γ_3(dt),   (7.36)

dx_3 = −x_2 x_5 dt,   (7.37)

dx_4 = x_1 x_5 dt,   (7.38)

dx_5 = (x_2 x_3 − x_1 x_4) dt,   (7.39)

where x1 and x2 are the angular velocities and x3, x4, and x5 represent the position of the BMU with respect to an inertial reference frame. A detailed description of the dynamic model can be found in [100]. In response to wind gusts, the reaction jets are activated in rapid succession, producing the momentum necessary to stabilize the BMU. The control inputs provided by the reaction jets can be approximated by the vector measure γ(dt) = (γ1(dt), γ2(dt), γ3(dt)). Hence, the compact form of the system is given by

dx = F(x)dt + G(x)γ(dt),

(7.40)

where x = (x1, x2, x3, x4, x5) represents the state vector. The cost functional is given by

J(γ) = (1/2) ∫_I ⟨Q(x(t) − x^d(t)), (x(t) − x^d(t))⟩ ν(dt) + (1/2) ⟨P(x(T) − x̄), (x(T) − x̄)⟩,   (7.41)

where x^d is the desired state trajectory during the operation, and x̄ is the target state; I = [0, T] is the period of operation. For admissible control measures, we choose a weakly compact set Mad from the space of vector measures Mca(I, R³). We have seen in Lemma 4.8.1 of Chap. 4 that, for weak compactness, it is necessary that the set Mad is norm bounded (variation norm) and that there exists a nonnegative measure ν ∈ M⁺ca(I) with respect to which the set Mad is uniformly absolutely continuous. By virtue of the Radon–Nikodym theorem, for each γ ∈ Mad there exists a unique v ∈ L1(ν, R³) such that γ(dt) = v(t)ν(dt). Thus, given the measure ν, we can introduce the family of ν-integrable functions Lad ≡ {v ∈ L1(ν, R³) : γ(dt) = v(t)ν(dt) for γ ∈ Mad}. Since Mad is isometrically isomorphic to Lad, it suffices to consider the set Lad for all numerical simulations. Hence, we can replace γi(dt) by vi(t)ν(dt) in (7.35)–(7.36). Since Mad is a weakly compact subset of Mca(I, R³) and weak compactness is preserved under this isomorphism, the set Lad is also a weakly compact subset of L1(ν, R³). Considering the system given by (7.40) with the cost functional (7.41) written in the following compact form,

J(γ) = ∫_I ℓ(t, x(t))ν(dt) + Φ(x(T)),   (7.42)


we note that, as a special case, it follows from Theorem 5.5.2 that the necessary conditions of optimality are given by the following system of equations and inequality:

dx^o(t) = F(t, x^o(t))dt + G(t, x^o(t))v^o(t)ν(dt), x(0) = x0, t ∈ I,   (7.43)

−dψ^o(t) = (DF)*(t, x^o(t))ψ^o(t)dt + (DG)(t, x^o(t); ψ^o(t))v^o(t)ν(dt) + ℓ_x(t, x^o(t))ν(dt), ψ^o(T) = Φ_x(x^o(T)), t ∈ I,   (7.44)

∫_I ⟨G*(t, x^o(t))ψ^o(t), (v(t) − v^o(t))⟩ ν(dt) ≥ 0, ∀ γ ∈ Mad,   (7.45)

where γ(dt) = v(t)ν(dt) is any element of Mad and γ^o(dt) = v^o(t)ν(dt) is the optimal control measure from the same set. The elements {v, v^o} ∈ L1(ν, R³) are, respectively, the Radon–Nikodym derivatives of the pair of vector measures {γ, γ^o} with respect to the scalar measure ν. In the following numerical examples we refer to v as the control vector.

For numerical simulations, the initial state of the BMU is assumed to be x(0) = (0.6, 0.4, √2/4, √2/4, √3/2), which is different from the equilibrium state. The matrices Q and P are chosen as Q = diag(10, 10, 10, 10, 10) and P = diag(200, 200, 200, 200, 200), respectively. The overall system performance is evaluated over the time period I = [0, 100]. As in [100], the target state x̄ and the desired state trajectory x^d are chosen to be the same as the equilibrium state (0, 0, 0, 0, 1). In the presence of high winds, the system is forced out of its equilibrium state, leading to the oscillatory motion shown in Fig. 7.11. The functions representing the wind-induced torques are denoted by W1 and W2 and are given by

W1(t) = 0.2 for t ∈ [20, 30], and 0 otherwise;
W2(t) = 0.3 for t ∈ [20, 30], and 0 otherwise,

where [20, 30] represents the period when high winds occur. To include the wind-induced torques in the system dynamics, W1 dt and W2 dt are added to the system equations (7.35) and (7.36), respectively. Essentially, the problem setting is very similar to that in [100]. In the absence of control, the state trajectories of the BMU are shown in Fig. 7.11. It is clear from this figure that the state of the system is significantly disturbed by the high winds. As a result, the magnitude of the oscillation increases; the BMU oscillates violently and tends to become unstable, putting the workers and the public at high risk. In what follows, we study the effect of using control measures to stabilize the BMU in the same system setting as described above. We run the simulation for 1000 iterations with a step size ε = 0.002 and present the corresponding numerical results for two scenarios.
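The uncontrolled response can be reproduced with a plain forward-Euler integration of (7.35)–(7.39), with W1 dt and W2 dt added to the first two equations. The sketch below is illustrative only: the ratio g/r is normalized to 1 (its value is not stated in the text), while the wind amplitudes 0.2 and 0.3 over [20, 30] follow the description above.

```python
import numpy as np

def bmu_rhs(x, t):
    # Drift of (7.35)-(7.39) with gamma = 0 and the wind torques W1, W2
    # added to the x1 and x2 equations; g/r is normalized to 1 (assumption).
    g_r = 1.0
    W1 = 0.2 if 20.0 <= t <= 30.0 else 0.0
    W2 = 0.3 if 20.0 <= t <= 30.0 else 0.0
    x1, x2, x3, x4, x5 = x
    return np.array([
        -g_r * x4 + W1,
        g_r * x3 + W2,
        -x2 * x5,
        x1 * x5,
        x2 * x3 - x1 * x4,
    ])

def simulate(x0, T=100.0, dt=0.01):
    # Forward Euler over [0, T]; returns the terminal state.
    x = np.array(x0, dtype=float)
    for k in range(int(T / dt)):
        x = x + dt * bmu_rhs(x, k * dt)
    return x

x0 = [0.6, 0.4, np.sqrt(2) / 4, np.sqrt(2) / 4, np.sqrt(3) / 2]
xT = simulate(x0)
```

A useful sanity check: the exact dynamics preserve x3² + x4² + x5² = 1 (the position vector stays on the unit sphere), so the drift of this quantity measures the Euler discretization error.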


Fig. 7.11 State trajectory of the BMU without controls. (a) Angular velocity. (b) Position vector

Scenario E1 In this scenario, the measure ν is chosen as ν(dt) ≡ ρ(t)dt, where ρ is a nonnegative function. For simulations, the function ρ is chosen as

ρ(t) = t/100 for 0 ≤ t ≤ T/2, and ρ(t) = (T − t)/100 for T/2 ≤ t ≤ T.
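On a uniform grid, ν(dt) = ρ(t)dt reduces to the cell weights νk ≈ ρ(tk)Δt; a minimal sketch (the grid size N is an arbitrary choice here):

```python
import numpy as np

T = 100.0

def rho(t):
    # Triangular density of Scenario E1: rises to 0.5 at t = T/2, then falls.
    return t / 100.0 if t <= T / 2 else (T - t) / 100.0

N = 1000                       # grid size (arbitrary choice)
dt = T / N
t = np.arange(N) * dt          # left endpoints of the grid cells
nu = np.array([rho(s) for s in t]) * dt   # nu_k ~ rho(t_k) * dt
```

The total mass nu.sum() approximates the integral of ρ over [0, 100], i.e., the area 25 of the triangle.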


Fig. 7.12 Scenario E1: State trajectory of the BMU with optimal controls. (a) Angular velocity. (b) Position vector

Each element vi of the control vector v is assumed to take values in [−3, 3], with the initial control vector chosen as v(0) = (0, 0, 0). The numerical results of this scenario are shown in Figs. 7.12 and 7.13. It is observed in Fig. 7.12 that the system states experience a disturbance during the occurrence of the high winds over the time interval [20, 30]. However, unlike what has been observed in Fig. 7.11, the optimal control drives the state of the system to its equilibrium value, thereby stabilizing the BMU. The value of the objective functional J decreases as the number of iterations increases,


Fig. 7.13 Scenario E1: Optimal controls and value of the cost functional J . (a) Optimal control. (b) Value of the objective functional J

as shown in Fig. 7.13b. Due to the non-optimal choice of the step size ε, some fluctuations are observed in Fig. 7.13b. However, the overall convergence of J is clearly observed as the number of iterations increases. A monotonic decrease of J could be observed if ε were chosen small enough, which, however, would require a significantly larger number of iterations and hence a longer computational time.

Scenario E2 In this scenario, the measure ν is chosen as ν(dt) ≡ ρ(t)dt + Σ_{i=1}^{k} αi δti(dt), where ρ is the same nonnegative function as seen in the previous


Fig. 7.14 Scenario E2: State trajectory of the BMU with optimal controls. (a) Angular velocity. (b) Position vector

scenario, while the δti(dt) are Dirac measures supported at the points {ti} and the {αi} are nonnegative weights. For simulations, the weights are given by αi = 10 and the supporting points {ti} are chosen as {25, 50, 75}. The simulation results of this scenario are shown in Figs. 7.14 and 7.15. Again, with optimal controls applied, it is observed from Fig. 7.14 that in the presence of high winds the system state is brought close to its equilibrium value within the given period of time. Comparing Fig. 7.14a with Fig. 7.12a, it is observed that the impact of the disturbance on the angular velocities is reduced due to the presence of dominant


Fig. 7.15 Scenario E2: Optimal controls and value of the cost functional J . (a) Optimal control. (b) Value of the objective functional J

Dirac measures in the measure ν. Due to the application of such impulsive forces, it appears that a longer time is required for the system to reach its equilibrium state. For a fair comparison of the results of this scenario with those of Scenario E1, the step size ε is kept the same without any adaptation. Consequently, more fluctuations are observed in the objective functional J, as seen in Fig. 7.15b. These fluctuations can be reduced by choosing a smaller step size ε. However, the overall convergence of J is achieved as the number of iterations increases.
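Numerically, the only change from Scenario E1 is the atomic part: each Dirac measure αi δti contributes its full weight to the grid cell containing ti. A sketch of this discretization (grid size again an arbitrary choice):

```python
import numpy as np

T, N = 100.0, 1000
dt = T / N
t = np.arange(N) * dt

def rho(s):
    # Same triangular density as in Scenario E1.
    return s / 100.0 if s <= T / 2 else (T - s) / 100.0

# Absolutely continuous part: nu_k ~ rho(t_k) * dt.
nu = np.array([rho(s) for s in t]) * dt

# Atomic part: weights alpha_i = 10 at the supports t_i in {25, 50, 75}.
for ti in (25.0, 50.0, 75.0):
    nu[int(round(ti / dt))] += 10.0
```

Note that each atom dominates its cell (weight 10 versus ρ(ti)Δt ≤ 0.05), which is the "dominant Dirac measures" effect referred to in the text.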


Fig. 7.16 Scenario E2: Sections of the state trajectories {x1 , x2 } around t1 , t2 , and t3 . (a) A section of the state trajectories {x1 , x2 } around t1 . (b) A section of the state trajectories {x1 , x2 } around t1 . (c) A section of the state trajectories {x1 , x2 } around t2 . (d) A section of the state trajectories {x1 , x2 } around t3

For a closer look at the state trajectories {x1, x2}, sections of Fig. 7.14a around the time instants {ti} are extracted and shown in Fig. 7.16. It is observed from Fig. 7.16a, b that x1 and x2 experience a jump at t1 = 25. Similarly, a jump is also observed for x1 and x2 at the other two time instants, t2 = 50 and t3 = 75, as shown in Fig. 7.16c, d, respectively. These jump sizes are smaller than those observed at t1 = 25, possibly because the optimal values of vi there are smaller in magnitude.
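The jumps seen in Fig. 7.16 come from the atoms of ν acting through (7.40): at an atom ti, the state jumps (to first order) by G(x(ti−)) v(ti) ν({ti}). Below is a sketch with G(x) read off from (7.35)–(7.36), again normalizing g/r² to 1 (an assumption, since g and r are not specified here).

```python
import numpy as np

def G(x):
    # Control-input matrix implied by (7.35)-(7.36), with g/r^2 = 1.
    x3, x4, x5 = x[2], x[3], x[4]
    return np.array([
        [0.0,   x5, -x4],   # gamma-coefficients in dx1
        [-x5,  0.0,  x3],   # gamma-coefficients in dx2
        [0.0,  0.0, 0.0],   # x3, x4, x5 carry no direct gamma term
        [0.0,  0.0, 0.0],
        [0.0,  0.0, 0.0],
    ])

def jump(x, v_ti, alpha_i):
    # First-order state jump at an atom t_i of nu with weight alpha_i:
    #   x(t_i+) = x(t_i-) + G(x(t_i-)) v(t_i) alpha_i.
    return x + alpha_i * (G(x) @ v_ti)
```

Only x1 and x2 jump directly, consistent with the jumps of {x1, x2} observed at t1, t2, t3; the position components respond only through the subsequent drift.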

7.2.5 An Example of a Stochastic System

In this section, we consider a simple example of a stochastic system driven by only discrete measures. The system model is given by a scalar stochastic differential


equation as follows:

dx = sin(x)dt + 0.8 cos(x)dw + arctan(0.01t)γ(dt), t ∈ I = [0, T], x(0) = x0,   (7.46)

with a cost functional given by

J(γ) = E{ ∫_0^T (w1/2)|x(t)|² dt + (w2/2)|x(T)|² },   (7.47)

where w1 and w2 are positive weights given to each term in the cost functional. Fix β > 0 and introduce the set of admissible controls as follows:

Mad ≡ { γ ∈ Mca(I, R) : γ(dt) = Σ_{i=1}^{k} αi yi δti(dt) + λ(dt), yi ∈ [−β, β], αi ≥ 0, i = 1, 2, ..., k, for k ∈ N },   (7.48)

where λ(dt) = dt is the Lebesgue measure and the weights {αi ≥ 0} are prespecified. The objective is to determine γ^o ∈ Mad that minimizes the functional J(γ). This is equivalent to finding the optimal sizes {yi^o} of the discrete measures. It follows from Theorem 6.7.5, and the special case thereof given in Sect. 6.7.1, that the necessary conditions of optimality for this example are as follows:

dJ(γ^o; γ − γ^o) = E{ ∫_0^T arctan(0.01t)ψ^o(t)[γ(dt) − γ^o(dt)] } ≥ 0, ∀ γ ∈ Mad,   (7.49)

−dψ^o = cos(x^o)ψ^o dt − 0.64(sin(x^o))²ψ^o dt + w1 x^o dt − 0.8 sin(x^o)dw, ψ^o(T) = w2 x^o(T),   (7.50)

dx^o = sin(x^o)dt + 0.8 cos(x^o)dw + arctan(0.01t)γ^o(dt), x^o(0) = x0.   (7.51)
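Before turning to the iterative scheme, it helps to see how J(γ) in (7.47) is estimated in practice: Euler–Maruyama for the state equation, with each atom αi yi δti injecting a jump, averaged over Monte Carlo sample paths. A sketch follows; the grid size, sample count, and left-endpoint quadrature are arbitrary choices, and the atoms and weights are those of the numerical study below.

```python
import numpy as np

rng = np.random.default_rng(1)

ATOMS = np.array([15.0, 30.0, 50.0, 70.0, 85.0])   # supports t_i
ALPHA = np.array([8.0, 4.0, 2.5, 1.5, 1.5])        # weights alpha_i

def cost_estimate(y, runs=50, T=100.0, N=1000, x0=2.0, w1=1.0, w2=1.0):
    # Monte Carlo estimate of J in (7.47) for jump sizes y = (y_1, ..., y_5).
    dt = T / N
    total = 0.0
    for _ in range(runs):
        x, run_cost = x0, 0.0
        for k in range(N):
            tk = k * dt
            run_cost += 0.5 * w1 * x * x * dt            # running cost
            x += np.sin(x) * dt + 0.8 * np.cos(x) * rng.normal(0.0, np.sqrt(dt))
            hit = (tk < ATOMS) & (ATOMS <= tk + dt)      # atoms in this step
            x += np.sum(np.arctan(0.01 * ATOMS[hit]) * ALPHA[hit] * y[hit])
        total += run_cost + 0.5 * w2 * x * x             # terminal cost
    return total / runs
```

For example, cost_estimate(np.zeros(5)) gives a Monte Carlo estimate of the uncontrolled cost.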

Following standard iterative computational procedures, it is easy to verify that, at the (m + 1)-th iteration,

J(γ^{m+1}) = J(γ^m) + dJ(γ^m; γ^{m+1} − γ^m) + o(ε),   (7.52)


where

dJ(γ^m; γ^{m+1} − γ^m) = E{ ∫_0^T arctan(0.01t)ψ^m(t)[γ^{m+1}(dt) − γ^m(dt)] }.   (7.53)

Specializing to purely impulsive controls as given by (7.48), with a slight abuse of notation, we obtain

dJ(y^m; y^{m+1} − y^m) = E Σ_{i=1}^{k} αi arctan(0.01 ti)(yi^{m+1} − yi^m)ψ^m(ti).   (7.54)

It is clear from the above expression that the optimal control, here y ≡ {yi}, i = 1, ..., k, is a finite sequence of random variables dependent on the random process {ψ(ti), i = 1, 2, ..., k}. Suppose, for simplicity, we want to choose the best deterministic control. In that case, Eq. (7.54) can be written as

dJ(y^m; y^{m+1} − y^m) = Σ_{i=1}^{k} αi arctan(0.01 ti)(yi^{m+1} − yi^m)E{ψ^m(ti)}.   (7.55)

Thus, for updating the variables {yi} at the (m + 1)-th iteration, we must have

yi^{m+1} = yi^m − ε αi arctan(0.01 ti)E{ψ^m(ti)}, i = 1, 2, ..., k,   (7.56)

with the step size ε > 0 sufficiently small so that yi^{m+1} ∈ [−β, β]. Clearly, it follows from the above expressions that

dJ(γ^m; γ^{m+1} − γ^m) = −ε Σ_{i=1}^{k} [αi arctan(0.01 ti)E{ψ^m(ti)}]² ≤ 0,   (7.57)

indicating that J(γ^{m+1}) ≤ J(γ^m). In the following numerical studies we have chosen k = 5 with the corresponding atoms {t1 = 15, t2 = 30, t3 = 50, t4 = 70, t5 = 85} ⊂ I = [0, 100]. The weight parameters are w1 = w2 = 1; the control bound is set as β = 10; the {αi} are chosen as {α1 = 8, α2 = 4, α3 = 2.5, α4 = 1.5, α5 = 1.5}; the initial state is x0 = 2; and for running the algorithm the starting control variables are chosen as yi = 1, i = 1, ..., 5. The stochastic differential equation is solved using basic Euler approximations with Monte Carlo simulation. For more accurate results one may use the Euler–Maruyama or Milstein methods [72]. At each iteration, both the state and costate equations are solved 50 times, with a sample path of the Brownian motion generated for each run. The maximum number of iterations


Fig. 7.17 Comparison of state trajectories x and x o without and with controls. (a) State trajectory x without controls. (b) State trajectory x o with optimal controls

allowed is taken as 1000 with a step size ε = 8 × 10⁻⁴. The numerical results are shown in Figs. 7.17 and 7.18. In Fig. 7.17a the dashed curve represents the state trajectory x in the absence of controls, i.e., {yi = 0}, whereas in Fig. 7.17b the solid curve shows the solution trajectory x^o corresponding to the optimal controls {y1^o = −2.165, y2^o = −2.148, y3^o = −2.590, y4^o = −3.286, y5^o = −3.735}. It is observed that the optimal control forces the system state closer to the desired value 0 at all the atoms {t1 = 15, t2 = 30, t3 = 50, t4 = 70, t5 = 85}, as opposed to a continuously increasing


Fig. 7.18 Value of the objective functional J

trend in the uncontrolled trajectory x. The infinitesimal mean, given by (sin(x) + arctan(0.01t)), is more dominant than the diffusion term 0.8 cos(x); hence an upward trend is observed in the state trajectory. As a result, the size of the impulsive controls increases with time in order to bring the state closer to the desired value. The value of J is shown in Fig. 7.18, where it tends to decrease as the number of iterations increases. This is consistent with the algorithm described above. The fluctuation observed is mainly due to the use of a limited number of sample paths in the Monte Carlo simulation, a limited number of impulsive controls, and a fixed step size ε. The step size can be chosen very small to eliminate the fluctuation; however, this will require a much larger number of iterations. Since the control is purely impulsive and only a very limited number of impulses is allowed, the optimal state trajectory still remains relatively far from the desired value. For better results, more impulses should be allowed. For numerical accuracy, one may increase the number of Monte Carlo simulation runs at each iteration.
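The complete scheme described above — forward Euler for the state (7.51), the costate (7.50) integrated backward along the same Brownian sample path, and the averaged projected update (7.56) — can be sketched as follows. Treating the costate pathwise with the same Brownian increments mirrors the per-sample-path procedure described in the text; the grid size and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 100.0, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)
ATOMS = np.array([15.0, 30.0, 50.0, 70.0, 85.0])
ALPHA = np.array([8.0, 4.0, 2.5, 1.5, 1.5])
IDX = [int(round(ti / dt)) for ti in ATOMS]   # grid indices of the atoms
w1, w2, beta, x0 = 1.0, 1.0, 10.0, 2.0

def forward(y, dW):
    # Euler scheme for (7.51); each atom injects arctan(0.01 t_i) alpha_i y_i.
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + np.sin(x[k]) * dt + 0.8 * np.cos(x[k]) * dW[k]
        for i, ti in enumerate(ATOMS):
            if t[k] < ti <= t[k + 1]:
                x[k + 1] += np.arctan(0.01 * ti) * ALPHA[i] * y[i]
    return x

def backward(x, dW):
    # Costate (7.50), integrated backward from psi(T) = w2 x(T) along the
    # same sample path (a pathwise simplification of the backward equation).
    psi = np.empty(N + 1)
    psi[N] = w2 * x[N]
    for k in range(N, 0, -1):
        drift = (np.cos(x[k]) - 0.64 * np.sin(x[k]) ** 2) * psi[k] + w1 * x[k]
        psi[k - 1] = psi[k] + drift * dt - 0.8 * np.sin(x[k]) * dW[k - 1]
    return psi

def update(y, eps=8e-4, runs=50):
    # One pass of (7.56): estimate E{psi(t_i)} by Monte Carlo, then step and
    # project back into [-beta, beta].
    mean_psi = np.zeros(len(ATOMS))
    for _ in range(runs):
        dW = rng.normal(0.0, np.sqrt(dt), N)
        xp = forward(y, dW)
        mean_psi += backward(xp, dW)[IDX]
    mean_psi /= runs
    y_new = y - eps * ALPHA * np.arctan(0.01 * ATOMS) * mean_psi
    return np.clip(y_new, -beta, beta)
```

Each call to update performs one iteration; repeating it (the text uses up to 1000 iterations) decreases J on average, in line with (7.57).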

Bibliography

1. N.U. Ahmed, Nonlinear integral equations on reflexive Banach spaces with applications to stochastic integral equations and abstract evolution equations. J. Integr. Equ. 1, 1–15 (1979) 2. N.U. Ahmed, Elements of Finite Dimensional Systems and Control Theory. Pitman Monographs and Surveys in Pure and Applied Mathematics (Longman Scientific and Technical, U.K; Copublished by Wiley, London, 1988) 3. N.U. Ahmed, Optimal relaxed controls for infinite-dimensional stochastic systems of Zakai type. SIAM J. Control Optim. 34(5), 1592–1615 (1996) 4. N.U. Ahmed, Linear and Nonlinear Filtering for Scientists and Engineers (World Scientific, Singapore, 1999) 5. N.U. Ahmed, Vector measures for optimal control of impulsive systems in Banach spaces. Nonlinear Funct. Anal. Appl. 5(2), 95–106 (2000) 6. N.U. Ahmed, Optimal control of ∞-dimensional stochastic systems via generalized solutions of HJB equations. Discuss. Math. Differ. Inclusions Control Optim. 21(1), 97–126 (2001) 7. N.U. Ahmed, Some remarks on the dynamics of impulsive systems in Banach spaces. Dyn. Contin. Discrete Impulsive Syst. Ser. A 8, 261–274 (2001) 8. N.U. Ahmed, Systems governed by impulsive differential inclusions on Hilbert spaces. Nonlinear Anal. 45(6), 693–706 (2001) 9. N.U. Ahmed, Necessary conditions of optimality for impulsive systems on Banach spaces. Nonlinear Anal. 51(3), 409–424 (2002) 10. N.U. Ahmed, Existence of optimal controls for a general class of impulsive systems on Banach spaces. SIAM J. Control Optim. 42(2), 669–685 (2003) 11. N.U. Ahmed, Generalized solutions of HJB equations applied to stochastic control on Hilbert space. Nonlinear Anal. Theory Methods Appl. 54(3), 495–523 (2003) 12. N.U. Ahmed, Controllability of evolution equations and inclusions driven by vector measures. Discuss. Math. Differ. Inclusions Control Optim. 24(1), 49–72 (2004) 13. N.U. Ahmed, Optimal relaxed controls for systems governed by impulsive differential inclusions. Nonlinear Funct. Anal. Appl. 
10(3), 427–460 (2005) 14. N.U. Ahmed, Dynamic Systems and Control with Applications (World Scientific, Singapore, 2006) 15. N.U. Ahmed, Optimal impulse control for impulsive systems in Banach spaces. Int. J. Differ. Equ. Appl. 1(1) 2011. 16. N.U. Ahmed, Optimal structural feedback control for partially observed stochastic systems on Hilbert space. Nonlinear Anal. Hybrid Syst. 5(1), 1–9 (2011) 17. N.U. Ahmed, Volterra series and nonlinear integral equations of the second kind arising from output feedback and their optimal control. J. Nonlinear Funct. Anal. 1–26 (2016)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 N. U. Ahmed, S. Wang, Optimal Control of Dynamic Systems Driven by Vector Measures, https://doi.org/10.1007/978-3-030-82139-5



18. N.U. Ahmed, Application of optimal control theory to biomedical and biochemical processes. Am. J. Biomed. Sci. Res. 3(4), 315–317 (2019) 19. N.U. Ahmed, Partially observed stochastic evolution equations on Banach spaces and their optimal Lipschitz feedback control law. SIAM J. Control Optim. 57(5), 3101–3117 (2019) 20. N.U. Ahmed, Vector measures applied to optimal control for a class of evolution equations on Banach spaces. Commun. Korean Math. Soc. 35(4), 1329–1352 (2020) 21. N.U. Ahmed, A class of strongly nonlinear integral equations on the space of vector measures and their optimal control. Publ. Math. Debrecen 99(1-2), 1–24 (2021). https://doi.org/10. 5486/PMD.2021.8639 22. N.U. Ahmed, C.D. Charalambous, Optimal measurement strategy for nonlinear filtering. SIAM J. Control Optim. 45(2), 519–531 (2006) 23. N.U. Ahmed, C.D. Charalambous, Stochastic minimum principle for partially observed systems subject to continuous and jump diffusion processes and driven by relaxed controls. SIAM J. Control Optim. 51(4), 3235–3257 (2013) 24. N.U. Ahmed, C.D. Charalambous, Erratum: Stochastic minimum principle for partially observed systems subject to continuous and jump diffusion processes and driven by relaxed controls. SIAM J. Control Optim. 55(2), 1344–1345 (2017) 25. N.U. Ahmed, C. Li, Optimum feedback strategy for access control mechanism modelled as stochastic differential equation in computer network. Math. Probl. Eng. 2004(3), 263–276 (2004) 26. N.U. Ahmed, K.F. Schenk, Optimal availability of maintainable systems. IEEE Trans. Reliab. 27(1), 41–45 (1978) 27. N.U. Ahmed, K.L. Teo, Optimal Control of Distributed Parameter Systems (Elsevier, Amsterdam, 1981) 28. N.U. Ahmed, S. Wang, Measure-driven nonlinear dynamic systems with applications to optimal impulsive controls. J. Optim. Theory Appl. 188(1), 26–51 (2021) 29. N.U. Ahmed, S. Wang, Optimal control of nonlinear hybrid systems driven by signed measures with variable intensities and supports. SIAM J. 
Control Optim. under review 30. T. Ahmed, N.U. Ahmed, Optimal control of antigen-antibody interactions for cancer immunotherapy. Dyn. Contin. Discrete Impulsive Syst. Ser. B: Appl. Algorithms 26, 135– 152 (2019) 31. J.P. Aubin, H. Frankowska, Set-Valued Analysis (Springer, Berlin, 2009) 32. A.V. Balakrishnan, Applied Functional Analysis (Springer, Berlin, 1981) 33. V. Barbu, G. Da Prato, Hamilton-Jacobi Equations in Hilbert Spaces (Pitman Advanced Pub. Program, 1983) 34. M. Bardi, I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-JacobiBellman Equations (Springer, Berlin, 2008) 35. M. Bardi, M.G. Crandall, L.C. Evans, H.M. Soner, P.E. Souganidis, Viscosity Solutions and Applications (Springer, Berlin, 1997) 36. T. Basar, G.J. Olsder, Dynamic Noncooperative Game Theory (SIAM, 1999) 37. R. Bellman, Dynamic Programming (Princeton University Press, Princeton, 1957) 38. A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications (SIAM, 2001) 39. S.K. Berberian, Measure and Integration (The Macmillan Company, New York, 1962) 40. L.D. Berkovitz, Optimal Control Theory (Springer, Berlin, 1974) 41. D.P. Bertsekas, Nonlinear Programming (Athena Scientific, Nashua, 1999) 42. D.P. Bertsekas, Convex Optimization Theory (Athena Scientific, Nashua, 2009) 43. S.K. Biswas, N.U. Ahmed, Stabilization of a class of hybrid systems arising in flexible spacecraft. J. Optim. Theory Appl. 50(1), 83–108 (1986) 44. V.G. Boltyanskii, Mathematical Methods of Optimal Control (Holt McDougal, 1971) 45. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004) 46. A. Bressan, F. Rampazzo, On differential systems with vector-valued impulsive controls. Boll. Un. Mat. Ital 7, 641–656 (1988)


47. A. Bressan, F. Rampazzo, Impulsive control systems with commutative vector fields. J. Optim. Theory Appl. 71(1), 67–83 (1991) 48. J. Brinkhuis, Convex analysis for optimization (Springer, Berlin, 2020) 49. M. Cambern, P. Griem, The dual of a space of vector measures. Math. Z. 180, 373–378 (1982) 50. L. Cesari, Optimization—Theory and Applications: Problems with Ordinary Differential Equations (Springer, Berlin, 1983) 51. F.H. Clarke, Optimization and Nonsmooth Analysis (SAIM, 1990) 52. F.H. Clarke, R.B. Vinter, Applications of optimal multiprocesses. SIAM J. Control Optim. 27(5), 1048–1071 (1989) 53. F.H. Clarke, R.B. Vinter, Optimal multiprocesses. SIAM J. Control Optim. 27(5), 1072–1091 (1989) 54. G. Dal Maso, F. Rampazzo, On systems of ordinary differential equations with measures as controls. Differ. Integr. Equ. 4(4), 739–765 (1991) 55. R. Datko, Measurability properties of set-valued mappings in a Banach space. SIAM J. Control 8(2), 226–238 (1970) 56. J. Diestel, Sequences and Series in Banach Spaces (Springer, Berlin, 1984) 57. J. Diestel, J.J. Uhl Jr., Vector Measures, vol. 15 (American Mathematical Society, Providence, 1977) 58. J.L. Doob, Stochastic Processes, vol. 101 (Wiley, New York, 1953) 59. N. Dunford, J.T. Schwartz, Linear Operators Part I: General Theory, vol. 243 (Interscience Publishers, New York, 1958) 60. G. Fabbri, F. Gozzi, A. Swiech, Stochastic Optimal Control in Infinite Dimension: Dynamic Programming and HJB Equations (Springer, Berlin, 2017) 61. H.O. Fattorini, Infinite Dimensional Optimization and Control Theory (Cambridge University Press, Cambridge, 1999) 62. W.H. Fleming, R.W. Rishel, Deterministic and Stochastic Optimal Control (Springer, Berlin, 1975) 63. W.H. Fleming, H.M. Soner, Controlled Markov Processes and Viscosity Solutions (Springer, Berlin, 2006) 64. A. Friedman, Stochastic differential equations and applications, in Stochastic Differential Equations (Springer, Berlin, 2010), pp. 75–148 65. R. 
Gamkrelidze, Principles of Optimal Control Theory (Springer, Berlin, 1978) 66. B. Goldys, B. Maslowski, Ergodic control of semilinear stochastic equations and the Hamilton–Jacobi equation. J. Math. Anal. Appl. 234(2), 592–631 (1999) 67. F. Gozzi, E. Rouy, Regular solutions of second-order stationary Hamilton–Jacobi equations. J. Differ. Equ. 130(1), 201–234 (1996) 68. W.M. Haddad, V. Chellaboina, S.G. Nersesov, Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control (Princeton University Press, Princeton, 2006) 69. P.R. Halmos, Measure Theory (Springer, Berlin, 1950) 70. H. Hermes, J.P. LaSalle, Functional Analysis and Time Optimal Control (Academic Press, New York, 1969) 71. E. Hewitt, K. Stromberg, Real and Abstract Analysis: A Modern Treatment of the Theory of Functions of a Real Variable (Springer, Berlin, 2013) 72. D.J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43(3), 525–546 (2001) 73. S. Hu, N.S. Papageorgiou, Handbook of Multivalued Analysis, Volume II: Applications (Springer, Berlin, 2000) 74. C. Huang, S. Wang, K.L. Teo, On application of an alternating direction method to HamiltonJacobin-Bellman equations. J. Comput. Appl. Math. 166(1), 153–166 (2004) 75. R. Korn, Portfolio optimisation with strictly positive transaction costs and impulse control. Financ. Stoch. 2(2), 85–114 (1998) 76. V. Lakshmikantham, D.D. Bainov, P.S. Simeonov, Theory of Impulsive Differential Equations (World Scientific, Singapore, 1989)


77. Q. Lin, R. Loxton, K.L. Teo, Y. Wu, A new computational method for optimizing nonlinear impulsive systems. Dyn. Contin. Discrete Impulsive Syst. Ser. B 18(1), 59–76 (2011) 78. J.L. Lions, Optimal Control of Systems Governed by Partial Differential Equations (Springer, Berlin, 1971) 79. P.L. Lions, Generalized Solutions of Hamilton-Jacobi Equations (Pitman Advanced Publishing Program, Boston, 1982) 80. X. Liu, K. Zhang, Impulsive Systems on Hybrid Time Domains (Springer, Berlin, 2019) 81. A.J. Lotka, Elements of physical biology. Sci. Progr. Twentieth Century (1919–1933) 21(82), 341–343 (1926) 82. D.G. Luenberger, Y. Ye, Linear and Nonlinear Programming (Springer, Berlin, 1984) 83. B. Miller, E.Y. Rubinovich, Impulsive Control in Continuous and Discrete-Continuous Systems (Springer, Berlin, 2003) 84. M. Motta, Viscosity solutions of HJB equations with unbounded data and characteristic points. Appl. Math. Optim. 49(1), 1–26 (2004) 85. M.E. Munroe, Introduction to Measure and Integration (Addison-Wesley, Reading, 1953) 86. Y. Nesterov, Lectures on Convex Optimization (Springer, Berlin, 2018) 87. C.E. Neuman, V. Costanza, Deterministic impulse control in native forest ecosystems management. J. Optim. Theory Appl. 66(2), 173–196 (1990) 88. L.W. Neustadt, Optimization: A Theory of Necessary Conditions (Princeton University Press, Princeton, 2015) 89. M.N. Oguztoreli, Time-Lag Control Systems (Academic Press, New York, 1966) 90. L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, Mathematical theory of optimal processes (Interscience, 1962) 91. R.W. Rishel, An extended Pontryagin principle for control systems whose control laws contain measures. J. Soc. Ind. Appl. Math. Ser A: Control 3(2), 191–205 (1965) 92. H.L. Royden, P. Fitzpatrick, Real Analysis (Macmillan New York, 1988) 93. K. 
Seto, A structural control method of the vibration of flexible buildings in response to large earthquakes and strong winds, in Proceedings of 35th IEEE Conference on Decision and Control, vol. 1 (IEEE, Piscataway, 1996), pp. 658–663 94. G.N. Silva, R.B. Vinter, Measure driven differential inclusions. J. Math. Anal. Appl. 202(3), 727–746 (1996) 95. G.N. Silva, R.B. Vinter, Necessary conditions for optimal impulsive control problems. SIAM J. Control Optim. 35(6), 1829–1846 (1997) 96. A.V. Skorokhod, Studies in the Theory of Random Processes (Dover Publications, Mineola, New York, 1982) 97. E. Tadmor, A review of numerical methods for nonlinear partial differential equations. Bull. Am. Math. Soc. 49(4), 507–554 (2012) 98. K.L. Teo, C. Goh, K. Wong, A unified computational approach to optimal control problems (Longman Science & Technology, 1991) 99. A.I. Tulcea, C.I. Tulcea, Topics in the Theory of Lifting (Springer, Berlin, 1969) 100. S. Wang, N.U. Ahmed, Optimal control and stabilization of building maintenance units based on minimum principle. J. Ind. Manag. Optim. 17(4), 1713–1727 (2021) 101. J. Warga, Variational problems with unbounded controls. J. Soc. Ind. Appl. Math. Ser. A Control 3(3), 424–438 (1965) 102. J. Warga, Optimal Control of Differential and Functional Equations (Academic Press, New York, 2014) 103. S. Willard, General Topology (Courier Corporation, North Chelmsford, 2004) 104. C.Z. Wu, K.L. Teo, Global impulsive optimal control computation. J. Ind. Manag. Optim. 2(4), 435 (2006) 105. T. Yang, Impulsive Control Theory (Springer, Berlin, 2001)


106. T. Ye, S. Kalyanaraman, A recursive random search algorithm for large-scale network parameter configuration, In Proceedings of the 2003 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (2003), pp. 196–205 107. J. Yong, X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations (Springer, Berlin, 1999) 108. K. Yosida, Functional Analysis (Springer, Berlin, 2012) 109. R. Yu, P. Leung, Optimal partial harvesting schedule for aquaculture operations. Mar. Resour. Econ. 21(3), 301–315 (2006) 110. E. Zeidler, Nonlinear Functional Analysis and Its Applications (Springer, New York, 1986)

Index

A Alaoglu’s theorem, 35 Algebra, 4 Applications to physical systems, 275 numerical algorithm I, 278 numerical algorithm II, 279 Ascoli–Arzela, 27

B Banach fixed point theorems, 30 multi-valued maps, 69 single-valued maps, 33 Banach–Sacks–Mazur theorem, 34 Bartle–Dunford–Schwartz theorem, 36 Bellman’s principle of optimality, 267 HJB equation for deterministic systems, 268 HJB equation for stochastic systems, 269 Borel–Cantelli Lemma, 18

C Cesari Lower Closure Theorem, 37 Compact relative, 26 sequential, 26 Conditional expectation, 198 basic properties, 199 Continuity absolute, 27 control to solution map, 125 equicontinuity, 27 Continuity of multi-valued maps, 100 lower semi-continuity, 101

upper semi-continuity, 101 Continuous dependence of solutions on control, 93 Continuous dependence of solutions on measure, 89 Continuous dependence of solutions on operator valued measures, 130 Controlled nonlinear systems, 67 ecological systems, 68 nonlinear circuits, 68 Convergence, 6 almost everywhere, 7 almost uniform, 7 convergence in distribution, 8 convergence in probability, 8 Fatou’s Lemma, 14 Fubini’s theorem, 18 Lebesgue bounded convergence theorem, 16 Lebesgue dominated convergence theorem, 15 in mean-p, 11 in measure, 8 monotone convergence theorem, 12 relationship between modes of convergence, 11 uniform, 7 Convergence theorem, 177

D Differential inclusions, 98 compact solution set, 104 examples, 99 existence of solutions, 101


measure driven existence of solutions, 105 Dunford–Pettis theorem, 35

E Eberlein–Šmulian theorem, 36 Examples of physical systems, 281 building maintenance units, 296 scenario E1, 300 scenario E2, 302 cancer immunotherapy, 281 geosynchronous satellites, 283 prey-predator model, 288 scenario E1, 289 scenario E2, 293 scenario E3, 294 stochastic systems, 305 Existence of optimal (ordinary) controls, 110 Existence of optimal measure-valued controls, 127 Existence of optimal structural controls, 131 Existence of saddle points, 135 Existence of solutions discrete measures, 92 general control measures, 95 relaxed controls, 154 Extreme point, 35

F Fatou’s Lemma, 12 Finitely additive measures topological dual of, 90 Fixed point theorem for multi-valued maps, 69 Schauder, 69 Fully observed optimal state feedback controls, 248 existence of optimal state feedback laws, 249 general case, 252 necessary conditions of optimality, 252

H Hausdorff metric, 69, 103

I Impulsive systems, 78 classical models, 79 existence of solutions, 85 general impulsive models, 83 measure driven, 84

Inequality Hölder, 21 Grönwall, 51 Minkowski, 22 Schwarz, 20 triangle, 20 Integrable set-valued functions, 38 Itô differential equation, 203

K Krein–Milman theorem, 35 Kuratowski-Ryll Nardzewski theorem, 37

L Lévy process, 197 Linear impulsive systems, 46 Linear systems continuous dependence, 54 measure driven systems, 55, 63 operator valued measures, 59 existence of solutions, 63 time-invariant, 43 time-variant, 47 vector measure, 55 Linear time-invariant systems, 39 examples, 40 suspension systems, 40 system reliability, 42 torsional motion, 40

M
Markov process, 197
Martingale inequality, 19
Measurability of multi-valued maps, 101
Measurable function
  Borel, 23
Measurable space, 4
Measure driven linear systems, 56
  an example, 57
  existence of solutions, 56
Measure driven nonlinear systems
  existence of solutions, 86
Measures
  countably additive measures, 4
  finitely additive, 5
Min-max control, 132
Multi-valued maps
  L1 selections, 106

N
Necessary conditions of optimality
  optimal supports of discrete measures, 185
  Pontryagin Minimum Principle, 163
  relaxed controls, 156
    discrete control domain, 160
Nonlinear systems
  discrete measures, 92
  driven by finitely additive measures, 89
Normed space
  Banach space, 3

O
Optimal control, 151
  differential games, 134
  differential inclusions, 132, 136
  existence
    measure-valued controls, 121
    min-max problem, 137
    optimal times of application, 144
    ordinary controls, 109
    relaxed controls, 116
  Lagrange problem, 165
  necessary conditions
    relaxed controls, 152
  transversality conditions, 165
Optimal output feedback control law
  existence of optimal output feedback control, 254
Optimal structural control, 183
  necessary conditions of optimality, 183

P
Partially observed optimal feedback controls, 253
  convergence theorem, 265
  existence of optimal feedback laws, 253
  necessary conditions of optimality, 258

R
Radon–Nikodym property (RNP), 36
Reflexive Banach space, 25
Regular nonlinear systems
  existence of solutions, 71
  local existence, 77
  local solution, 76
  locally Lipschitz case, 83
Regulated filtered impulsive controls, 231

Relaxed controls
  existence of optimal policy, 116
  practical realizability, 120

S
Selection theorem, 37
Sigma-algebra (σ-algebra), 4
Sigma-algebra (σ-algebra) of sets, 36
Solutions
  blow-up time, 72, 78
Space
  Banach, 3
  Hilbert, 19
  metric, 25
  normed, 2
  vector, 1
Space of vector measures
  countably additive, 200
  finitely additive, 200
  weakly compact sets, 124
Special Banach spaces
  C(), 24
  C() dual, 24
  Lp space, 21
  Lp space dual, 24
  reflexive, 25
Stochastic differential equations
  driven by Brownian motion
    existence of solutions, 204
  driven by Brownian motion and Poisson process
    existence of solutions, 214
  driven by vector measures
    existence of solutions, 207
  Poisson process, 212
Stochastic systems, 197
  Brownian motion, 200
  conditional expectation, 198
  Itô integral, 201
  linear systems, 203
  nonlinear systems, 203
  optimal relaxed control, 216
  probability space, 198
Stochastic systems subject to Brownian and Poisson process
  existence of optimal relaxed controls, 221
Stochastic systems subject to Wiener process
  filtering impulsive forces, 232
    convergence theorem, 240
    existence of optimal filters, 234

    necessary conditions of optimality, 235
    special cases, 241
  necessary conditions of optimality, 224
    convergence theorem, 229
Structural perturbation, 58

U
Unregulated measure-valued controls, 242
  existence of optimal controls, 243
  necessary conditions of optimality, 246
  specialized to impulsive controls, 246

W
Weak Cesari property, 110
Weak convergence, 24
Weakly compact sets in the space of vector measures, 36
Weak topology
  weak convergence, 34
  weak-star convergence, 34
Wiener process, 197