Mathematical Control Theory for Stochastic Partial Differential Equations [1 ed.] 303082330X, 9783030823306

This is the first book to systematically present control theory for stochastic distributed parameter systems, a comparatively new branch of mathematical control theory.


English Pages 605 [598] Year 2021


Table of contents:
Preface
Contents
1 Introduction
1.1 Why Stochastic Distributed Parameter Control Systems?
1.2 Two Fundamental Issues in Control Theory
1.3 Range Inclusion and the Duality Argument
1.4 Two Basic Methods in This Book
1.5 Notes and Comments
2 Some Preliminaries in Stochastic Calculus
2.1 Measures and Probability, Measurable Functions and Random Variables
2.2 Integrals and Expectation
2.3 Signed/Vector Measures, Conditional Expectation
2.3.1 Signed Measures
2.3.2 Distribution, Density and Characteristic Functions
2.3.3 Vector Measures
2.3.4 Conditional Expectation
2.4 A Riesz-Type Representation Theorem
2.4.1 Proof of the Necessity for a Special Case
2.4.2 Proof of the Necessity for the General Case
2.4.3 Proof of the Sufficiency
2.5 A Sequential Banach-Alaoglu-Type Theorem in the Operator Version
2.6 Stochastic Processes
2.7 Stopping Times
2.8 Martingales
2.8.1 Real Valued Martingales
2.8.2 Vector-Valued Martingales
2.9 Brownian Motions
2.9.1 Brownian Motions in Finite Dimensions
2.9.2 Construction of Brownian Motions in One Dimension
2.9.3 Vector-Valued Brownian Motions
2.10 Stochastic Integrals
2.10.1 Itô's Integrals w.r.t. Brownian Motions in Finite Dimensions
2.10.2 Itô's Integrals w.r.t. Vector-Valued Brownian Motions
2.11 Properties of Stochastic Integrals
2.11.1 Itô's Formula for Itô's Processes (in a Strong Form)
2.11.2 Burkholder-Davis-Gundy Inequality
2.11.3 Stochastic Fubini Theorem
2.11.4 Itô's Formula for Itô's Processes in a Weak Form
2.11.5 Martingale Representation Theorem
2.12 Notes and Comments
3 Stochastic Evolution Equations
3.1 Stochastic Evolution Equations in Finite Dimensions
3.2 Well-Posedness of Stochastic Evolution Equations
3.2.1 Notions of Solutions
3.2.2 Well-Posedness in the Sense of Mild Solution
3.3 Regularity of Mild Solutions to Stochastic Evolution Equations
3.3.1 Burkholder-Davis-Gundy Type Inequality and Time Regularity
3.3.2 Space Regularity
3.4 Notes and Comments
4 Backward Stochastic Evolution Equations
4.1 The Case of Finite Dimensions and Natural Filtration
4.2 The Case of Infinite Dimensions
4.2.1 Notions of Solutions
4.2.2 Well-Posedness in the Sense of Mild Solution for the Case of Natural Filtration
4.3 The Case of General Filtration
4.4 The Case of Natural Filtration Revisited
4.5 Notes and Comments
5 Control Problems for Stochastic Distributed Parameter Systems
5.1 An Example of Controlled Stochastic Differential Equations
5.2 Control Systems Governed by Stochastic Partial Differential Equations
5.3 Some Control Problems for Stochastic Distributed Parameter Systems
5.4 Notes and Comments
6 Controllability for Stochastic Differential Equations in Finite Dimensions
6.1 The Control Systems With Controls in Both Drift and Diffusion Terms
6.2 Control System With a Control in the Drift Term
6.3 Lack of Robustness for Null/Approximate Controllability
6.4 Notes and Comments
7 Controllability for Stochastic Linear Evolution Equations
7.1 Formulation of the Problems
7.2 Well-Posedness of Stochastic Systems With Unbounded Control Operators
7.3 Reduction to the Observability of Dual Problems
7.4 Explicit Forms of Controls for the Controllability Problems
7.5 Relationship Between the Forward and the Backward Controllability
7.5.1 The Case of Bounded Control Operators
7.5.2 The Case of Unbounded Control Operators
7.6 Notes and Comments
8 Exact Controllability for Stochastic Transport Equations
8.1 Formulation of the Problem and the Main Result
8.2 Hidden Regularity and a Weighted Identity
8.3 Observability Estimate for Backward Stochastic Transport Equations
8.4 Notes and Comments
9 Controllability and Observability of Stochastic Parabolic Systems
9.1 Formulation of the Problems
9.2 Controllability of a Class of Stochastic Parabolic Systems
9.2.1 Preliminaries
9.2.2 Proof of the Null Controllability
9.2.3 Proof of the Approximate Controllability
9.3 Controllability of a Class of Stochastic Parabolic Systems by One Control
9.3.1 Proof of the Null Controllability Result
9.3.2 Proof of the Negative Null Controllability Result
9.4 Carleman Estimate for a Stochastic Parabolic-Like Operator
9.5 Observability Estimate for Stochastic Parabolic Equations
9.5.1 Global Carleman Estimate for Stochastic Parabolic Equations, I
9.5.2 Global Carleman Estimate for Stochastic Parabolic Equations, II
9.5.3 Proof of the Observability Result
9.6 Null and Approximate Controllability of Stochastic Parabolic Equations
9.6.1 Global Carleman Estimate for Backward Stochastic Parabolic Equations
9.6.2 Proof of the Observability Estimate for Backward Stochastic Parabolic Equations
9.7 Notes and Comments
10 Exact Controllability for a Refined Stochastic Wave Equation
10.1 Formulation of the Problem
10.2 Well-Posedness of Stochastic Wave Equations With Boundary Controls
10.3 Main Controllability Results
10.4 A Reduction of the Exact Controllability Problem
10.5 A Fundamental Identity for Stochastic Hyperbolic-Like Operators
10.6 Observability Estimate for the Stochastic Wave Equation
10.7 Notes and Comments
11 Exact Controllability for Stochastic Schrödinger Equations
11.1 Formulation of the Problem and the Main Result
11.2 Well-Posedness of the Control System
11.3 A Fundamental Identity for Stochastic Schrödinger-Like Operators
11.4 Observability Estimate for Backward Stochastic Schrödinger Equations
11.5 Notes and Comments
12 Pontryagin-Type Stochastic Maximum Principle and Beyond
12.1 Formulation of the Optimal Control Problem
12.2 The Case of Finite Dimensions
12.3 Necessary Condition for Optimal Controls for Convex Control Regions
12.4 Operator-Valued Backward Stochastic Evolution Equations
12.4.1 Notions of Solutions
12.4.2 Preliminaries
12.4.3 Proof of the Uniqueness Results
12.4.4 Well-Posedness Result for a Special Case
12.4.5 Proof of the Existence and Stability for the General Case
12.4.6 A Regularity Result
12.5 Pontryagin-Type Maximum Principle
12.6 Sufficient Condition for Optimal Controls
12.6.1 Clarke’s Generalized Gradient
12.6.2 A Sufficient Condition for Optimal Controls
12.7 Second Order Necessary Condition for Optimal Controls
12.8 Notes and Comments
13 Linear Quadratic Optimal Control Problems
13.1 Formulation of the Problem
13.2 Optimal Feedback for Deterministic LQ Problem in Finite Dimensions
13.3 Optimal Feedback for Stochastic LQ Problem in Finite Dimensions
13.3.1 Differences Between Deterministic and Stochastic LQ Problems in Finite Dimensions
13.3.2 Characterization of Optimal Feedbacks for Stochastic LQ Problems in Finite Dimensions
13.4 Finiteness and Solvability of Problem (SLQ)
13.5 Pontryagin-Type Maximum Principle for Problem (SLQ)
13.6 Transposition Solutions to Operator-Valued Backward Stochastic Riccati Equations
13.7 Existence of Optimal Feedback Operator for Problem (SLQ)
13.8 Global Solvability of Operator-Valued Backward Stochastic Riccati Equations
13.8.1 Some Preliminary Results
13.8.2 Proof of the Main Solvability Result
13.9 Some Examples
13.9.1 LQ Problems for Stochastic Wave Equations
13.9.2 LQ Problems for Stochastic Schrödinger Equations
13.10 Notes and Comments
References
Index

Probability Theory and Stochastic Modelling 101

Qi Lü Xu Zhang

Mathematical Control Theory for Stochastic Partial Differential Equations

Probability Theory and Stochastic Modelling Volume 101

Editors-in-Chief: Peter W. Glynn, Stanford, CA, USA; Andreas E. Kyprianou, Bath, UK; Yves Le Jan, Shanghai, China

Advisory Editors: Søren Asmussen, Aarhus, Denmark; Martin Hairer, London, UK; Peter Jagers, Gothenburg, Sweden; Ioannis Karatzas, New York, NY, USA; Frank P. Kelly, Cambridge, UK; Bernt Øksendal, Oslo, Norway; George Papanicolaou, Stanford, CA, USA; Etienne Pardoux, Marseille, France; Edwin Perkins, Vancouver, Canada; Halil Mete Soner, Princeton, NJ, USA

The Probability Theory and Stochastic Modelling series is a merger and continuation of Springer’s two well established series Stochastic Modelling and Applied Probability and Probability and Its Applications. It publishes research monographs that make a significant contribution to probability theory or an applications domain in which advanced probability methods are fundamental. Books in this series are expected to follow rigorous mathematical standards, while also displaying the expository quality necessary to make them useful and accessible to advanced students as well as researchers. The series covers all aspects of modern probability theory including:

• Gaussian processes
• Markov processes
• Random fields, point processes and random sets
• Random matrices
• Statistical mechanics and random media
• Stochastic analysis

as well as applications that include (but are not restricted to):

• Branching processes and other models of population growth
• Communications and processing networks
• Computational methods in probability and stochastic processes, including simulation
• Genetics and other stochastic models in biology and the life sciences
• Information theory, signal processing, and image synthesis
• Mathematical economics and finance
• Statistical methods (e.g. empirical processes, MCMC)
• Statistics for stochastic processes
• Stochastic control
• Stochastic models in operations research and stochastic optimization
• Stochastic models in the physical sciences

All titles in this series are peer-reviewed to the usual standards of mathematics and its applications.

More information about this series at http://www.springer.com/series/13205

Qi Lü · Xu Zhang

Mathematical Control Theory for Stochastic Partial Differential Equations


Qi Lü School of Mathematics Sichuan University Chengdu, China

Xu Zhang School of Mathematics Sichuan University Chengdu, China

ISSN 2199-3130    ISSN 2199-3149 (electronic)
Probability Theory and Stochastic Modelling
ISBN 978-3-030-82330-6    ISBN 978-3-030-82331-3 (eBook)
https://doi.org/10.1007/978-3-030-82331-3

Mathematics Subject Classification: 93E03, 93B05, 93B07, 93E20, 60H15, 60H30

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

It is well known that Control Theory was founded by N. Wiener in 1948 ([349]). Since then, the theory has been greatly extended to various complicated settings and widely used in science and technology. In particular, the rapid development of Control Theory began in the mid-1950s, partially in response to practical problems in many branches of engineering and economics.

Roughly speaking, Control Theory can be divided into two parts: control theory for deterministic systems and control theory for stochastic systems. Of course, these two parts are not completely separate; rather, they are inextricably linked to each other.

Control theory for deterministic systems can again be divided into two parts. The first is control theory for finite dimensional systems, mainly governed by ordinary differential equations; the second is control theory for (deterministic) distributed parameter systems, mainly described by differential equations in infinite dimensional spaces, typically by partial differential equations. Control theory for finite dimensional systems is by now relatively mature. Classics in the field include, among others, the three milestones of modern control theory: L. S. Pontryagin’s maximum principle ([281]), R. Bellman’s dynamic programming method ([20]), and R. E. Kalman’s optimal linear regulator theory ([163]). We refer to [1, 28, 37, 45, 59, 102, 193], just to mention a few, for some of the subsequent progress. There is a huge list of works on control theory for distributed parameter systems, and the field is still quite active. Pioneers in this field include A. V. Balakrishnan ([9]), A. G. Butkovskiĭ ([44]), Yu. V. Egorov ([85]), H. O. Fattorini ([91]), C. C. Kwan and K. N. Wang ([183]), J.-L. Lions ([205]), D. L. Russell ([292]), P. K. C. Wang ([343]) and so on. For early books on this topic, we mention [44, 66, 205]. For the extensive works of the last three decades, we refer to [7, 10, 61, 67, 88, 92, 101, 110, 116, 117, 123, 144, 168, 170, 172, 186, 199, 202, 220, 305, 320, 323, 325, 338, 388, 395, 396] and the rich references cited therein.

Likewise, control theory for stochastic systems can be divided into two parts. The first is control theory for stochastic finite dimensional systems, governed by stochastic (ordinary) differential equations; the second is control theory for stochastic distributed parameter systems, described by stochastic differential equations in infinite dimensions, typically by stochastic partial differential equations. One can also find a huge list of publications on control theory for stochastic finite dimensional systems and its applications (say, in mathematical finance). Pioneers in this field include R. Bellman ([21]), W. H. Fleming and M. Nisio ([97]), J. J. Florentin ([100]), H. J. Kushner ([177]) and so on. For early books on this topic, we mention [6], [29] and [179]. For the extensive (and/or closely related) works of the recent decades, we refer to [5, 27, 48, 52, 53, 98, 124, 173, 181, 204, 251, 272, 279, 322, 340, 363, 364, 371, 383] and the rich references cited therein. Nevertheless, most of the existing works in this respect address, or are related to, optimal control problems. As we shall see in Chapter 6 of this book, controllability theory for stochastic finite dimensional systems is so far NOT well developed.

By contrast, control theory for stochastic distributed parameter systems is, in our opinion, still at its very beginning stage. Indeed, although one can find a considerable number of papers (see [24, 30, 90, 96, 178, 302, 326] for the early ones) and five related books [15, 89, 167, 242, 259]¹ (see also the books [66, 270] for several related chapters) on this topic, as far as we know, there exists no monograph systematically addressing this rather new branch of mathematical control theory, which is actually the main concern of this book.

One of the most essential difficulties in the study of control theory for stochastic distributed parameter systems is that, compared to the deterministic setting, very little is known about stochastic evolution equations (and, in particular, about stochastic partial differential equations), although much progress has been made there (see [68, 136, 137, 145, 157, 329, 335], etc.). On the other hand, as we shall see throughout this book, both the formulation of stochastic control problems in infinite dimensions and the tools to solve them may differ considerably from their deterministic/finite-dimensional counterparts. Because of this, one has to develop new mathematical tools, say, the stochastic transposition method introduced in our works [241, 242, 244, 245, 248] (see also Chapters 4 and 12–13, and particularly Section 1.4 of Chapter 1, for more details and explanations), to solve some problems in this field. In some sense, the stochastic distributed parameter control system is the most general control system in the framework of classical physics, and therefore the study of this field may provide useful hints for the study of quantum control systems.

¹ Actually, the books [15, 167, 259] mainly address some slightly different topics.


This book is an introduction to (mathematical) control theory for stochastic distributed parameter systems. Our aim is to provide a basic reference for both beginners and experts who are interested in this challenging, vigorous and fascinating field. It is designed for readers who have some basic knowledge of functional analysis, partial differential equations and control theory for deterministic systems, but who are not necessarily familiar with probability theory and, particularly, stochastic analysis.

The authors would like to take this opportunity to thank Professors Shuping Chen, Jean-Michel Coron, Hélène Frankowska, Xiaoyu Fu, Andrei Fursikov, Zechun Hu, Oleg Yu. Imanuvilov, Gilles Lebeau, Xu Liu, Shige Peng, Luc Robbiano, Shanjian Tang, Emmanuel Trélat, Jan van Neerven, Gengsheng Wang, Tianxiao Wang, Zhongqi Yin, Jiongmin Yong, Bingyu Zhang, Haisen Zhang and Enrique Zuazua for collaborations, comments, discussions, encouragement and support during the preparation of this book. This work is partially supported by the NSF of China under grants 12025105, 11971334, 11931011 and 11221101.

Chengdu, China
June 30, 2020

Qi Lü
Xu Zhang


1 Introduction

In this chapter, we expound the necessity, novelty, importance and challenges of studying stochastic distributed parameter control systems. We also illustrate pedagogically the main problems to be considered and the main methods to be employed in this book. Throughout this book, N, Z, Q, R and C stand, respectively, for the sets of positive integers, integers, rational numbers, real numbers and complex numbers. Also, we shall denote by C a generic positive constant which may change from line to line (unless otherwise stated).

1.1 Why Stochastic Distributed Parameter Control Systems?

After N. Wiener’s classical work ([349]), owing to the great effort of numerous mathematicians and engineers, Control Theory has grown into a new discipline—Control Science. Mathematical control theory, an active branch of applied mathematics which aims to provide the mathematical foundations for Control Science, refers to the mathematical theory and methods for the analysis and design of control systems. Stimulated by the seminal works [20, 163, 281], the rapid development of mathematical control theory (for both deterministic and stochastic systems) began in the 1960s. Usually, in terms of the state space method (e.g., [163]), one describes the control system under consideration by a suitable state equation. The main concern of this book belongs to a new branch of mathematical control theory, i.e., control theory for stochastic distributed parameter systems, that is, systems governed by stochastic differential equations in infinite dimensions, including stochastic differential equations with delays, stochastic partial differential equations, random partial differential equations (i.e., partial differential equations with random parameters), and so on.

© Springer Nature Switzerland AG 2021. Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_1

One may ask a natural question: why should people study stochastic distributed parameter control systems? Our answer is that, for Control Theory, it is now the turn of stochastic distributed parameter control systems! Let us justify this below by rapidly recalling the history of modern control theory (particularly, of course, mathematical control theory).

Since the middle of the 1950s, the rapid development of space technology urgently demanded solutions to control problems such as launching rockets, satellites, etc. into pre-selected orbits accurately, with minimum fuel consumption or in the shortest flight time. These control problems were too complex to be solved by the classical control theory developed in [349]. By the early 1960s, a set of new principles and methods, based on the state space method for analyzing and designing control systems, had been established. This marks the formation of modern control theory (and also of mathematical control theory). Three milestones therein were achieved by L. Pontryagin (Pontryagin’s maximum principle, [281]), R. Bellman (Bellman’s dynamic programming method, [20]) and R. Kalman (Kalman’s filtering and linear quadratic optimal control, [162, 163]). Note that these classics address mainly finite dimensional control systems, which are governed by ordinary differential (or difference) equations. Inspired by the works of Pontryagin, Bellman and Kalman, numerous publications have addressed this topic since the 1960s. Now, control theory for finite dimensional systems is relatively mature, though many problems in this field remain open.

Fundamentally, all physical systems are intrinsically distributed in nature. Actually, for practical problems, finite dimensional systems are only approximations of distributed parameter systems (i.e., infinite dimensional systems) to a certain extent. Therefore, theoretically, distributed parameter systems are more realistic models.
As for concrete applications, although approximate models seem to have more significance from an engineering viewpoint (especially in view of computer implementation), the study of more realistic models contributes to giving reasonable approximate models, and particularly reasonable control laws, from the point of view of Control Theory. Distributed parameter control systems are control systems described by differential equations in infinite dimensional spaces, for example, partial differential equations, delay differential equations, etc. Control theory for distributed parameter systems originated in the 1960s, and the early works in this field were mainly done by A. G. Butkovskiĭ ([44]), Yu. V. Egorov ([85]), H. O. Fattorini ([91]), J.-L. Lions ([205]), D. L. Russell ([292]), P. K. C. Wang ([343]) and others. In the past nearly 60 years, this theory has developed very rapidly, and more than one hundred monographs and thousands of research papers on it have been published. So far, there is no sign of slowing growth in this field. In the past two decades, many new tools have been introduced or refined, and many breakthroughs have been obtained. We refer the readers to [45, 46, 49, 62, 65, 88, 95, 110, 116, 123, 133, 144, 168, 170, 172, 186, 199, 200, 220, 288, 305, 323, 325, 333, 338, 388, 395, 396] and the references therein for the development of control theory for distributed parameter systems in the new century.


The world is full of uncertainties, and therefore it is impossible to predict the future accurately, just as “a storm may arise from a clear sky”. Therefore, studying random phenomena from the viewpoint of Control Theory, without doubt, has basic scientific significance and practical value. Strictly speaking, any real system contains various random factors, but these factors may be ignored in many situations. However, when they cannot be ignored, the behavior of a control system designed according to deterministic control theory would deviate from the expected design requirements, and sometimes it may even happen that, as a Chinese proverb says, “Cha Zhi Hao Li, Shi Zhi Qian Li”, i.e., “an error the breadth of a single hair can lead a thousand li astray”. Control theory for stochastic finite dimensional systems originated in the late 1950s and has developed very rapidly over the past (more than) 60 years. The early works in this field were mainly accomplished by R. Bellman, W. H. Fleming, H. J. Kushner, M. Nisio, etc. ([21, 97, 177]). Thanks to J.-M. Bismut ([33]), W. H. Fleming ([96]), P.-L. Lions ([210, 211, 212]), S. Peng ([273]) and others, deep studies in this respect have been carried out (see [98, 173, 371]). An important byproduct of the study of this field is the appearance of a new theory, i.e., backward (or, more generally, forward-backward) stochastic differential equations (see [33, 251, 271]), which is by now widely used in financial mathematics, stochastic analysis, partial differential equations, etc. (e.g., [56, 73, 86, 251, 274, 321]). Now, we are in a position to return to the main topic of this book, i.e., control theory for stochastic distributed parameter systems.
Theoretically, motivated by the success of control theories for finite dimensional systems, distributed parameter systems and stochastic finite dimensional systems, as a natural development, people hope to study stochastic distributed parameter control systems. Indeed, soon after the publication of the early works on stochastic finite dimensional control systems, some pioneers, including A. Bensoussan ([24]), M. Kh. Bikchantayev ([30]), P. L. Falb ([90]), W. H. Fleming ([96]), H. J. Kushner ([178]), T. K. Sirazetdinov ([302]), G. Tzafestas and J. M. Nightingale ([326]) and others, began the study of stochastic distributed parameter control systems. On the other hand, on the application side, because of the inherent complexity of the underlying physical processes, many control systems in reality (such as those in the microelectronics industry, in atmospheric motion, in communications and transportation, and so on) exhibit very complex dynamics, including substantial model uncertainty, actuator and state constraints, and high (usually infinite) dimensionality. These systems are often best described by stochastic partial differential equations or even more complicated stochastic equations in infinite dimensions (e.g., [263]). In our opinion, the field of stochastic distributed parameter control systems is far from mature; many of its aspects are much less understood, or even still blank, compared to the deterministic setting and/or stochastic finite dimensional problems. More precisely, we mention the following issues.


• Compared to the deterministic cases (for both finite dimensional and distributed parameter control systems), the study of stochastic controllability/observability is quite unsatisfactory, even in the setting of stochastic finite dimensional control systems (see Chapter 6 for a more detailed analysis). Indeed, so far there exists no nontrivial controllability result for stochastic nonlinear systems. Also, in most of the positive controllability results for stochastic partial differential equations in the existing literature (e.g., [228, 231, 247, 315]), controls are introduced into the diffusion terms and are assumed to be active everywhere.
• In order to derive a Pontryagin-type maximum principle for general nonlinear stochastic distributed parameter control systems, the key in [242] is to introduce the notion of relaxed transposition solutions to an operator-valued backward stochastic Lyapunov equation, via which the well-posedness of such an equation can be established. Although this notion of solution and its modifications are useful and sufficient for several different problems (say, the first and second order necessary conditions for optimal controls of general nonlinear stochastic evolution equations in infinite dimensions ([103, 104, 240, 242, 244, 245]), the characterization of optimal feedback operators for general stochastic linear quadratic control problems (with random coefficients) in infinite dimensions ([235, 248]), etc.), it would be much better to show that the underlying operator-valued backward stochastic evolution equations are well-posed in a stronger sense, i.e., in the sense of transposition solutions rather than relaxed transposition solutions (see Chapters 12–13 for more details). This remains an unsolved problem.
• As for the extension of Bellman’s dynamic programming method to the stochastic setting, though some very interesting and important progress has been made ([89, 213, 214]), the problem is far from well understood for systems with random coefficients, even in the setting of stochastic finite dimensional control systems. Indeed, as far as we know, in this case one faces the challenging problem of establishing the well-posedness of backward stochastic Hamilton-Jacobi-Bellman equations.
• In order to find optimal feedback operators for stochastic linear quadratic control problems (with random coefficients) in infinite dimensions, one has to solve suitable operator-valued backward stochastic Riccati equations. However, so far the well-posedness of this sort of Riccati equation is known only in some very special cases, again even in the setting of stochastic control systems in finite dimensions ([248]).

Consequently, control theory for stochastic distributed parameter systems is still at a very early stage, despite more than 50 years of development. Compared with the other directions in mathematical control theory (control theory for finite dimensional systems, for distributed parameter systems and for stochastic finite dimensional systems), at this moment it is an “ugly duckling”!


One will meet many substantial difficulties in the study of control problems for stochastic distributed parameter systems, for example:

• Unlike the deterministic setting, the solution to a stochastic evolution equation (even in finite dimensions) is usually non-differentiable w.r.t. the variables with noises.
• The usual compactness embedding results fail to be true for the solution spaces related to the stochastic evolution equations under consideration.
• “Time” in the stochastic setting is not reversible, even for stochastic hyperbolic equations.
• Generally speaking, stochastic control problems cannot be reduced to deterministic ones.

Actually, the main difficulty in this field lies in the fact that, as we shall see in the rest of this book, one has to develop new mathematical tools even for some very simple stochastic distributed parameter control systems. Indeed, many tools and methods which are effective in the deterministic case do not work anymore in the stochastic setting. Technically, the most essential difficulty in the field of stochastic distributed parameter control systems is the very fact that the theory of stochastic partial differential equations itself is far from mature (though significant progress has been made there in recent years, e.g., [136, 137]). This can be seen even from the title of J. Glimm’s review article ([122]): Nonlinear and stochastic phenomena: the grand challenge for partial differential equations. We think control theory for stochastic distributed parameter systems is a direction which deserves to be developed with great effort within the whole of Control Science, especially in mathematical control theory. Clearly, in the framework of classical physics, stochastic distributed parameter systems are very likely the most general control systems. A deep study of this topic may provide useful hints for the development of a mathematical theory for quantum control systems. In conclusion, the field of stochastic distributed parameter control systems is full of challenging problems, which offers a rare opportunity for the new generations in Control Theory. Definitely, we believe that control theory for stochastic distributed parameter systems will become a “white swan” in the near future!

1.2 Two Fundamental Issues in Control Theory

Roughly speaking, “control” means that one can change the dynamics of the system involved by a suitable means. By our understanding, there are at least two fundamental issues in Control Theory¹, i.e., feasibility and optimality, which will be explained below (cf. [220]).

¹ We refer the readers to [389] for more fundamental issues in Control Theory.


The first fundamental issue in Control Theory is feasibility or, in the terminology of Control Theory, controllability, which means that one can find at least one way to achieve a goal. More precisely, for simplicity, let us consider the following controlled system governed by a linear ordinary differential equation:

    yt(t) = Ay(t) + Bu(t),  a.e. t ∈ [0, T],
    y(0) = y0.                                          (1.1)

In (1.1), A ∈ Rn×n, B ∈ Rn×m (n, m ∈ N), T > 0, y(·) is the state variable, u(·) is the control variable, and Rn and Rm are the state and control spaces, respectively. We begin with the following notion (more notations/notions, which may be used below, will be given in Chapter 2):

Definition 1.1. The system (1.1) is called exactly controllable (at time T) if for any initial state y0 ∈ Rn and final state y1 ∈ Rn, there is a control u(·) ∈ L1(0, T; Rm) such that the solution y(·) to (1.1) satisfies

    y(T) = y1,                                          (1.2)

i.e., the control u(·) transfers the state of (1.1) from y0 to y1 at time T.

One has the following result:

Theorem 1.2. ([163]) The system (1.1) is exactly controllable at time T if and only if the following Kalman rank condition holds:

    rank [B, AB, · · · , A^{n−1}B] = n.                 (1.3)

In the sequel, denote by B⊤ the transpose of B, and write

    GT = ∫₀ᵀ e^{At} BB⊤ e^{A⊤t} dt.                    (1.4)

One can show the following further result:

Theorem 1.3. ([163]) If the system (1.1) is exactly controllable at time T, then det GT ≠ 0. Moreover, for any y0, y1 ∈ Rn, the following control of explicit form:

    u(t) = −B⊤ e^{A⊤(T−t)} GT^{−1}(e^{AT} y0 − y1)     (1.5)

transfers the state of (1.1) from y0 to y1 at time T.

The proof of Theorems 1.2–1.3 will be given within that of Theorem 1.13 (in Section 1.3).

Remark 1.4. Unlike most of the previous literature, in Definition 1.1 we deliberately choose the control class to be L1(0, T; Rm). From Theorem 1.3, it is easy to see that, if (1.1) is exactly controllable at time T (by means of
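Both the rank test (1.3) and the Gramian control (1.5) are easy to verify numerically. The following sketch (a hypothetical double-integrator example, not taken from this book; the step count and tolerances are arbitrary choices) checks the Kalman rank condition, assembles GT by trapezoidal quadrature, and integrates (1.1) under the control (1.5) to confirm the transfer from y0 to y1:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Double integrator y'' = u written as the first-order system (1.1).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
n, T = 2, 1.0

# Kalman rank condition (1.3): rank [B, AB] = n.
K = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(K) == n

# Gramian (1.4), GT = int_0^T e^{At} B B^T e^{A^T t} dt, via trapezoidal quadrature.
ts = np.linspace(0.0, T, 2001)
vals = np.array([expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts])
dt = ts[1] - ts[0]
G_T = dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
assert abs(np.linalg.det(G_T)) > 1e-10          # non-degenerate, cf. Theorem 1.3

# Explicit control (1.5) steering y0 to y1 at time T.
y0 = np.array([1.0, 0.0])
y1 = np.array([0.0, 0.0])
w = np.linalg.solve(G_T, expm(A * T) @ y0 - y1)
u = lambda t: -(B.T @ expm(A.T * (T - t)) @ w)

# Integrate (1.1) under this control and check y(T) = y1.
sol = solve_ivp(lambda t, y: A @ y + B @ u(t), (0.0, T), y0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], y1, atol=1e-3)
```

For this particular system one can check by hand that (1.5) yields u(t) = 12t − 6 when T = 1, y0 = (1, 0)⊤ and y1 = 0, which is what the computed control reproduces.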


L1-(in time) controls), then the same controllability can be achieved by using analytic-(in time) controls. Actually, the same can be said when the control class L1(0, T; Rm) in Definition 1.1 is replaced by Lp(0, T; Rm) for any p ∈ [1, ∞]. However, in Chapter 6, we shall see a completely different phenomenon even in the simplest stochastic situation.

The above definition of controllability can be easily extended to both deterministic and stochastic systems (see Definition 5.6 in Chapter 5 for a general formulation of controllability). In the general setting, it may happen that the requirement (1.2) has to be relaxed in one way or another. This leads to the notions of approximate controllability, null controllability, partial controllability, and so on. Also, the above B can be unbounded (in a suitable way) for general controlled systems. Controllability is strongly related to (or in some situations even equivalent to) other important issues in Control Theory, say observability, stabilization and so on. One can find numerous works on these topics (e.g., [4, 14, 41, 42, 61, 62, 117, 154, 163, 189, 190, 199, 207, 288, 295, 388, 395, 396] and the rich references therein). One of the main concerns in this book is to study controllability problems for stochastic evolution systems, in particular those for several typical stochastic partial differential equations.

The second fundamental issue in Control Theory is optimality or, in the terminology of Control Theory, optimal control, which means that people expect to find the best way, in some sense, to achieve their goals. As an example, we fix y0 ∈ Rn, a subset S of Rn, a suitable nonlinear function f: [0, T] × Rn × Rm → Rn and a given nonempty subset U of Rm, and consider the following controlled system:

    yt(t) = f(t, y(t), u(t)),  a.e. t ∈ [0, T],
    y(0) = y0,                                          (1.6)

where y(·) is the state variable and u(·) is the control variable, valued in Rn and U, respectively.
It is easy to see that, if there exists a control u(·) ∈ 𝒰 ≜ {v(·): [0, T] → U | v(·) is (Lebesgue) measurable} such that the solution y(·) ∈ C([0, T]; Rn) to (1.6) satisfies

    y(T) ∈ S,                                           (1.7)

then, very likely, one may find another control verifying the same conditions. Naturally, one hopes to find the “best” control fulfilling these conditions. To be more precise, we fix two suitable functions g(·, ·, ·): [0, T] × Rn × Rm → R and h: Rn → R, and denote by Uad the set of controls u(·) ∈ 𝒰 such that the solution y(·) to (1.6) satisfies (1.7) and g(·, y(·), u(·)) ∈ L1(0, T). Suppose that

    Uad ≠ ∅,                                            (1.8)


and introduce the following cost functional:

    J(u(·)) = ∫₀ᵀ g(t, y(t), u(t)) dt + h(y(T)),  u(·) ∈ Uad.     (1.9)

A typical optimal control problem for the system (1.6) is to find a ū(·) ∈ Uad, called an optimal control, which minimizes the cost functional (1.9), i.e.,

    J(ū(·)) = inf_{u(·) ∈ Uad} J(u(·)).                           (1.10)

The corresponding solution to (1.6) is denoted by ȳ(·), and (ȳ(·), ū(·)) is called an optimal pair.

The simplest case is S = Rn, which means that there is no constraint on the final states of (1.6). For this case, people introduce the Hamiltonian as follows:

    H(t, y, u, p) ≜ ⟨p, f(t, y, u)⟩_{Rn} − g(t, y, u),  (t, y, u, p) ∈ [0, T] × Rn × U × Rn.

Then, one can show the following Pontryagin Maximum Principle ([281]):

Theorem 1.5. Let S = Rn and let (ȳ(·), ū(·)) be an optimal pair. Then

    H(t, ȳ(t), ū(t), p(t)) = max_{u ∈ U} H(t, ȳ(t), u, p(t)),  a.e. t ∈ [0, T],    (1.11)

where p(·): [0, T] → Rn solves

    pt(t) = −fy(t, ȳ(t), ū(t))⊤ p(t) + gy(t, ȳ(t), ū(t)),  a.e. t ∈ [0, T],
    p(T) = −hy(ȳ(T)).                                             (1.12)
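To see Theorem 1.5 at work, consider a hypothetical scalar example (not taken from this book): n = m = 1, U = R, f(t, y, u) = u, g(t, y, u) = (y² + u²)/2 and h = 0. Maximizing H(t, y, u, p) = pu − (y² + u²)/2 over u ∈ R in (1.11) gives ū(t) = p(t), and (1.12) becomes pt = y with p(T) = 0, so the optimality system is the two-point boundary value problem yt = p, pt = y, y(0) = y0, p(T) = 0. A minimal numerical check of this reasoning:

```python
import numpy as np
from scipy.integrate import solve_bvp

T, y0 = 1.0, 1.0

# Optimality system from Theorem 1.5 for this instance:
# state:   y' = u = p        (the Hamiltonian is maximized at u = p)
# adjoint: p' = y, p(T) = 0  (equation (1.12) with f_y = 0, g_y = y, h = 0)
def rhs(t, w):                       # w = (y, p)
    return np.vstack([w[1], w[0]])

def bc(wa, wb):                      # y(0) = y0, p(T) = 0
    return np.array([wa[0] - y0, wb[1]])

t = np.linspace(0.0, T, 101)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)), tol=1e-8)
assert sol.status == 0

# Known closed form for this LQ problem: p(t) = -tanh(T - t) * y(t).
y, p = sol.sol(t)
assert np.allclose(p, -np.tanh(T - t) * y, atol=1e-5)
```

The final assertion checks the closed form of this problem, p(t) = −tanh(T − t) y(t): here the open-loop optimal control coincides with the linear feedback ū = −tanh(T − t) y, which is exactly the Riccati feedback of linear quadratic theory.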

Remark 1.6. Thanks to Pontryagin’s Maximum Principle, the (usually infinite dimensional) optimization problem (1.10) is reduced to the finite dimensional optimization problem (1.11) (in a pointwise sense). Since Theorem 1.5 can be viewed as a special case of Theorem 12.2 or 12.17 in Chapter 12, we will not prove it here. Instead, in the next section we shall prove a linear quadratic version of this theorem, and via it emphasize the difficulty of formulating the stochastic versions of the “backward” equations (1.12) and (1.41) (in Section 1.3).

The above optimal control problem can be easily extended to more general settings (see Problem (SOP) in Section 5.3 of Chapter 5 for a general formulation of stochastic optimal control problems), particularly to the case where the controls take values in a more general set (instead of a subset of Rm), say any metric space, which need not enjoy any linearity or convexity structure. Optimal control problems are strongly related to the classical calculus of variations and to optimization theory. Nevertheless, since the control class may


be quite general, the classical variational techniques cannot be applied to optimal control problems directly. Various optimal control problems have been extensively studied in the literature (e.g., [1, 5, 20, 27, 44, 59, 64, 98, 102, 158, 163, 179, 186, 202, 206, 242, 281, 323, 371] and the rich references cited therein). Another main concern in this book is to study optimal control problems for stochastic evolution systems in infinite dimensions, including a Pontryagin-type Maximum Principle in the general setting and the characterization of optimal feedback operators for linear quadratic optimal control problems.

It is easy to see that the study of controllability problems is a basis for investigating optimal control problems further. Indeed, the usual nonemptiness assumption on the set of feasible/admissible control-state pairs for optimal control problems (say, Uad ≠ ∅ in (1.8)) is actually a controllability condition. Nevertheless, in the previous literature, it seems that the studies of controllability and of optimal control problems are almost independent. A few typical exceptions that we know of are the following:

1) In [154], some techniques from optimal control theory are employed to derive the observability estimate and null controllability for parabolic type equations.
2) In [338], some techniques developed in the study of controllability and observability problems are adopted to solve several time optimal control problems.
3) Very recently, in [220], a new link between (finite codimensional exact) controllability and optimal control problems for general evolution equations with endpoint state constraints is presented.

In our opinion, it is now time to solve controllability and optimal control problems as a whole, at least in some sense and to some extent, though they are two quite different issues in Control Theory. This is by no means an easy task.
Actually, for many concrete problems, it is highly nontrivial to verify the above-mentioned assumption that the set of feasible/admissible control-state pairs is nonempty, even in the setting of finite dimensional control systems. In this book, we shall try to present the existing theory on controllability and optimal control for stochastic distributed parameter systems in a unified manner, at least from the perspective of methodology. Indeed, as we shall explain in the next section, throughout this book we shall systematically use the duality argument.

1.3 Range Inclusion and the Duality Argument

Clearly, any controllability problem (formulated in Definition 1.1 or, more generally, in Definition 5.6 in Chapter 5) can be viewed as an equation problem, in which both the state y(·) and the control u(·) are unknowns. Namely, instead of viewing u(·) as a control variable, we may simply regard


it as another unknown variable². One of the main concerns in this book is to study controllability problems for linear stochastic evolution equations. As we shall see later, this is far from an easy task. It is easy to see that, in many cases, solving linear equations is equivalent to showing range inclusions for suitable linear operators. Because of this, we shall present below two known range inclusion theorems (i.e., Theorems 1.7 and 1.10 below) in an abstract setting.

Throughout this section, X, Y and Z are Banach spaces. Denote by L(X; Y) the Banach space of all bounded linear operators from X to Y, with the usual operator norm. When X = Y, we simply write L(X) instead of L(X; X). For any L ∈ L(X; Y), denote by R(L) the range of L. We begin with the following result.

Theorem 1.7. Let F ∈ L(X; Z) and G ∈ L(Y; Z). The following assertions hold:

1) If R(F) ⊇ R(G), then there is a constant C > 0 such that

    |G*z*|_{X*... Y*} ≤ C |F*z*|_{X*},  ∀ z* ∈ Z*.      (1.13)

2) If X is reflexive and (1.13) holds for some constant C > 0, then R(F) ⊇ R(G).

Proof: 1) We use the contradiction argument. Suppose (1.13) did not hold. Then, for each n ∈ N, we could find zn* ∈ Z* such that

    |F*zn*|_{X*} < (1/n) |G*zn*|_{Y*},  |G*zn*|_{Y*} = 1.

On the other hand, since R(G) ⊆ R(F), a standard argument based on the Baire category theorem yields a constant C₀ > 0 such that G(BY) ⊆ C₀ cl(F(BX)), where BX and BY are the open unit balls in X and Y, and cl(·) denotes the closure in Z. Consequently,

    1 = |G*zn*|_{Y*} = sup_{y ∈ BY} ⟨Gy, zn*⟩_{Z,Z*} ≤ C₀ sup_{x ∈ BX} ⟨Fx, zn*⟩_{Z,Z*} = C₀ |F*zn*|_{X*} < C₀/n,

which is a contradiction for n > C₀.

2) Fix y ∈ Y. By (1.13), the map F*z* ↦ ⟨y, G*z*⟩_{Y,Y*} is a well-defined (since, by (1.13), F*z* = 0 implies G*z* = 0) linear functional on R(F*) ⊆ X*, bounded by C|y|_Y. Extend it, via the Hahn-Banach theorem, to some x** ∈ X**. Since X is reflexive, x** is represented by some x ∈ X, so that ⟨x, F*z*⟩_{X,X*} = ⟨y, G*z*⟩_{Y,Y*} for all z* ∈ Z*, i.e., ⟨Fx − Gy, z*⟩_{Z,Z*} = 0 for all z* ∈ Z*. Hence Fx = Gy, which shows R(G) ⊆ R(F).

Remark 1.8. By Theorem 1.7, when X is reflexive, the estimate (1.13) characterizes range inclusion: for some constant C > 0,

    |G*z*|_{Y*} ≤ C |F*z*|_{X*},  ∀ z* ∈ Z*  ⟺  R(G) ⊆ R(F).    (1.20)
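In finite dimensions, assertion 1) of Theorem 1.7 can be made concrete with matrices (a hypothetical numerical sketch, not taken from this book): if R(G) ⊆ R(F), then G = FM for some matrix M, so G⊤z = M⊤F⊤z and (1.13) holds with C = |M| (the spectral norm). The sketch below builds such a pair and checks (1.13) on random samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# X = R^3, Y = R^2, Z = R^4. Build G with R(G) contained in R(F) via G = F M.
F = rng.standard_normal((4, 3))      # F in L(X; Z)
M = rng.standard_normal((3, 2))
G = F @ M                            # G in L(Y; Z), so R(G) is a subset of R(F)

# Range inclusion, checked via ranks: rank [F, G] = rank F.
assert np.linalg.matrix_rank(np.hstack([F, G])) == np.linalg.matrix_rank(F)

# Since G^T = M^T F^T, estimate (1.13) holds with C = |M| (spectral norm).
C = np.linalg.norm(M, 2)
for _ in range(1000):
    z = rng.standard_normal(4)       # a sample z* in Z* = R^4
    assert np.linalg.norm(G.T @ z) <= C * np.linalg.norm(F.T @ z) + 1e-9
```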

As shown in [36], the equivalence (1.20) may fail whenever X is not reflexive. In this book, we shall need a range inclusion theorem in which X is NOT reflexive. In this case, instead of the reflexivity assumption on X, people assume the surjectivity of G, and then show that the equivalence (1.20) remains true. In order to establish such an equivalence, we need the following result, which is an easy consequence of [291, Theorem 3.4, p. 58].

Lemma 1.9. Suppose that O1 and O2 are two disjoint closed convex sets in X, and O1 is compact. Then, there exist a real-valued functional f ∈ X* and two constants α, β ∈ R such that

    sup_{x ∈ O1} ⟨x, f⟩_{X,X*} < α < β < inf_{x ∈ O2} ⟨x, f⟩_{X,X*}.


The following result holds:

Theorem 1.10. If F ∈ L(X; Z), G ∈ L(Y; Z) is surjective and (1.13) holds for some constant C > 0, then

    G(BY) ⊂ C F(BX) ≡ {C Fx | x ∈ BX},                  (1.21)

where BX and BY are the open unit balls in X and Y, respectively, and hence R(F) ⊇ R(G).

Proof: In what follows, cl(·) denotes the closure in Z. We first claim that

    cl(F(BX)) ⊇ (1/C) G(BY).                            (1.22)

If this were not true, then we could find some y0 ∈ Y with |y0|_Y ≤ 1/C such that Gy0 ∉ cl(F(BX)). By Lemma 1.9, one can find a real-valued functional ζ* ∈ Z* and two constants α, β ∈ R such that

    ⟨Gy0, ζ*⟩_{Z,Z*} < α < β < ⟨z, ζ*⟩_{Z,Z*},  ∀ z ∈ cl(F(BX)).

This implies that

    ⟨z, −ζ*⟩_{Z,Z*} < −β < −α < ⟨Gy0, −ζ*⟩_{Z,Z*},  ∀ z ∈ cl(F(BX)).

Noting that 0 ∈ cl(F(BX)), we then see that −β > 0. Hence, after a suitable scaling, for some z* ∈ Z*, we have

    ⟨z, z*⟩_{Z,Z*} ≤ 1 < ⟨Gy0, z*⟩_{Z,Z*},  ∀ z ∈ cl(F(BX)).

Further, cl(F(BX)) is symmetric w.r.t. the origin. Thus, the above also implies

    |⟨z, z*⟩_{Z,Z*}| ≤ 1 < ⟨Gy0, z*⟩_{Z,Z*},  ∀ z ∈ cl(F(BX)).

Consequently,

    |⟨x, F*z*⟩_{X,X*}| = |⟨Fx, z*⟩_{Z,Z*}| ≤ 1,  ∀ x ∈ BX,

which yields |F*z*|_{X*} ≤ 1. Hence, by (1.13), we arrive at

    1/C < (1/C) ⟨Gy0, z*⟩_{Z,Z*} = (1/C) ⟨y0, G*z*⟩_{Y,Y*} ≤ (1/C) |y0|_Y |G*z*|_{Y*} ≤ |y0|_Y |F*z*|_{X*} ≤ |y0|_Y ≤ 1/C,

which is a contradiction. This proves (1.22).

Further, we claim that

    F(BX) ⊇ (1/C) G(BY).                                (1.23)

To show this, for each y ∈ Y and ε > 0, let us find x ∈ X such that

    |x|_X ≤ C |y|_Y,  |Gy − Fx|_Z < ε.                  (1.24)

This can be achieved as follows. If y = 0, we simply choose x = 0. Otherwise, consider v = y/|y|_Y. Clearly, |v|_Y = 1 and v ∈ cl(BY). Note that, by (1.22),

    cl(F(BX)) ⊇ (1/C) G(cl(BY)) ⊇ (1/C) G(BY).

Hence, there is a u ∈ C BX such that |Gv − Fu|_Z < ε/|y|_Y. Now, let us put x = |y|_Y u. We then have (1.24).

Clearly, ker G = {y ∈ Y | Gy = 0} is a closed subspace of Y. Denote by Ỹ the quotient space Y/ker G, which is a Banach space with the usual quotient norm. Since G is surjective, the operator G̃: Ỹ → Z defined by

    G̃(y + ker G) = G(y),  ∀ y ∈ Y,

is a bijective bounded linear operator from Ỹ onto Z.

Now, fix any y1 ∈ BY. Pick εn > 0 so that

    Σ_{n=1}^∞ εn < 1 − |y1|_Y.

Assume that, for n ≥ 1, yn has been picked. By (1.24), there is an xn such that |xn|_X ≤ C |yn|_Y and |Gyn − Fxn|_Z < εn / (2 ||G̃^{−1}||_{L(Z;Ỹ)}). Let us define ỹ_{n+1} ∈ Ỹ by ỹ_{n+1} = G̃^{−1}(Gyn − Fxn). Then, one may find y_{n+1} ∈ Y so that

    Gy_{n+1} = Gyn − Fxn,
    |y_{n+1}|_Y ≤ 2 |ỹ_{n+1}|_{Ỹ} ≤ 2 ||G̃^{−1}||_{L(Z;Ỹ)} |Gyn − Fxn|_Z < εn.

By induction, this process defines two sequences {xn} ⊂ X and {yn} ⊂ Y. Note that |x_{n+1}|_X ≤ C |y_{n+1}|_Y < C εn. Hence,

    Σ_{n=1}^∞ |xn|_X ≤ |x1|_X + C Σ_{n=1}^∞ εn ≤ C |y1|_Y + C Σ_{n=1}^∞ εn < C.

It follows that x = Σ_{n=1}^∞ xn is in C BX and that

    Fx = lim_{N→∞} Σ_{n=1}^N Fxn = lim_{N→∞} Σ_{n=1}^N (Gyn − Gy_{n+1}) = Gy1,

since y_{N+1} → 0 as N → ∞. Thus Gy1 = Fx ∈ C F(BX), which proves (1.23) and hence (1.21). Finally, R(F) ⊇ R(G) follows immediately from (1.23). This completes the proof of Theorem 1.10.

Remark 1.11. Clearly, by Theorem 1.10 and the first assertion in Theorem 1.7, we see that the equivalence (1.20) holds whenever G is surjective (even without the reflexivity assumption on X). Also, the inclusion (1.21) in Theorem 1.10 provides a “quantitative” characterization of R(F) ⊇ R(G). Such a characterization is sometimes useful (e.g., [239]).

We shall also need the following result.

Theorem 1.12. Let F ∈ L(X; Z). Then cl(R(F)) = Z if and only if

    F*z* = 0 for some z* ∈ Z*  ⇒  z* = 0.               (1.25)

Proof: The “if” part. We use the contradiction argument. If cl(R(F)) = Z were not true, then we could find some nonzero z0 ∈ Z such that z0 ∉ cl(R(F)). By Lemma 1.9, similarly to the proof of Theorem 1.10, one can find a nonzero z* ∈ Z* with ⟨z0, z*⟩_{Z,Z*} = 1 such that

    ⟨z, z*⟩_{Z,Z*} = 0,  ∀ z ∈ cl(R(F)).

Consequently,

    ⟨Fx, z*⟩_{Z,Z*} = 0,  ∀ x ∈ X,                      (1.26)

which implies that F*z* = 0. But this, by (1.25), leads to z* = 0, a contradiction.

The “only if” part. Suppose that F*z* = 0 for some z* ∈ Z*. Then, it is easy to see that (1.26) holds. This, combined with cl(R(F)) = Z, indicates that z* = 0.

The key in Theorems 1.7 and 1.10 is to reduce the problem of range inclusion (i.e., R(F) ⊇ R(G)) to the majorization of dual operators, i.e., the estimate (1.13) (see also Theorem 1.12). Such a method, i.e., solving the problem under consideration via its dual, is classical, and is called the duality argument. Inspired by the study of deterministic control problems (especially those for deterministic partial differential equations), in this book we shall employ the above range inclusion theorems and the duality argument to handle controllability and optimal control problems for stochastic evolution equations.

To explain the idea, in the rest of this section, we shall focus on the simplest control system, i.e., the system (1.1) (governed by a linear ordinary differential
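In finite dimensions every subspace is closed, so Theorem 1.12 reduces to a familiar linear-algebra fact: a matrix F mapping R^n to R^k is onto R^k if and only if F⊤z = 0 forces z = 0, i.e., F has full row rank. A small hypothetical sketch (the matrices are arbitrary illustrative choices):

```python
import numpy as np

# Surjective example: F maps R^3 onto R^2 (full row rank),
# equivalently F^T z = 0 has only the trivial solution.
F = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0]])
assert np.linalg.matrix_rank(F.T) == 2

# Non-surjective example: the rows are linearly dependent, so some
# nonzero z annihilates the range, e.g. z = (2, -1).
F2 = np.array([[1.0, 0.0, 2.0],
               [2.0, 0.0, 4.0]])
z = np.array([2.0, -1.0])
assert np.allclose(F2.T @ z, 0.0)     # F2^T z = 0 with z != 0 ...
assert np.linalg.matrix_rank(F2) < 2  # ... and indeed R(F2) != R^2
```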


equation), and show how to apply the above mentioned method to solve the exact controllability problem and an optimal control problem for such a system by means of its dual problem. To analyze the exact controllability for (1.1), people introduce the following dual equation (of (1.1)), which is a (backward) ordinary differential equation: { zt = −A⊤ z in [0, T ], (1.27) n z(T ) = z1 ∈ R . We have the following result (which implies Theorems 1.2–1.3): Theorem 1.13. The following five statements are equivalent: 1) The system (1.1) is exactly controllable; 2) Solutions to (1.27) satisfy the following observability estimate: |z1 |Rn ≤ C max |B ⊤ z(t)|Rm , t∈[0,T ]

∀ z1 ∈ Rn ;

(1.28)

3) Solutions to (1.27) enjoy the following observability:

    B⊤z(·) ≡ 0 in (0, T)  ⇒  z1 = 0;                                   (1.29)

4) The rank condition (1.3) holds;

5) GT defined by (1.4) is non-degenerate, i.e., det GT ≠ 0.

Moreover, under any one of the above conditions, for any y0, y1 ∈ Rn, the control u(·) given by (1.5) transfers the state of (1.1) from y0 to y1 at time T.

Proof: "1)⇐⇒2)". Clearly, it suffices to consider the special case that the initial datum y0 in the system (1.1) is equal to 0. We define an operator G : L1(0, T; Rm) → Rn by

    G(u(·)) = y(T),    ∀ u(·) ∈ L1(0, T; Rm),                           (1.30)

where y(·) is the corresponding solution to (1.1) with y0 = 0. Then, by (1.1) (with y0 = 0) and (1.27), we obtain that

    ⟨ y(T), z1 ⟩_{Rn} = ∫_0^T ⟨ Bu(t), z(t) ⟩_{Rn} dt,    ∀ z1 ∈ Rn,    (1.31)

where z(·) solves (1.27). By (1.30)–(1.31), it is clear that

    (G∗z1)(t) = B⊤z(t),    a.e. t ∈ [0, T].                             (1.32)

Now, the exact controllability of (1.1) is equivalent to R(G) = Rn; the latter, by (1.32) and in view of Theorem 1.10 and the first assertion in Theorem 1.7, is equivalent to the estimate (1.28).

"2)⇐⇒3)". By (1.29), one can define a norm || · ||_1 on Rn as follows:

    ||z1||_1 = max_{t∈[0,T]} |B⊤z(t)|_{Rm},

where z solves (1.27). By the equivalence of all norms on Rn, we obtain "2)⇐=3)". "2)=⇒3)" is obvious.

"4)⇐⇒3)". Assume that B⊤z(·) ≡ 0 in (0, T) for some z1 ∈ Rn. Then, by

    z(t) = e^{A⊤(T−t)} z1,                                              (1.33)

this is equivalent to

    B⊤(A^k)⊤ z1 = 0,    k ∈ N ∪ {0}.

Therefore, by the Hamilton-Cayley Theorem (in matrix theory), the above is equivalent to

    z1⊤ [B, AB, A²B, · · ·, A^{n−1}B] = 0.                              (1.34)

By (1.34), it follows that 4)⇐⇒3).

"5)⇐⇒3)". Using again the expression (1.33) for solutions to (1.27), we obtain 5)⇐⇒3).

Finally, by (1.4), the corresponding solution y(·) to (1.1) with u(·) given by (1.5) satisfies

    y(T) = e^{AT}y0 + ∫_0^T e^{A(T−t)}Bu(t) dt
         = e^{AT}y0 − ∫_0^T e^{A(T−t)}BB⊤e^{A⊤(T−t)} dt · G_T^{−1}(e^{AT}y0 − y1)
         = e^{AT}y0 − GT G_T^{−1}(e^{AT}y0 − y1) = y1.

This completes the proof of Theorem 1.13.

Let us analyze the dual equation (1.27) a little more. Write

    { w(t) = z(T − t),
    {                                                                   (1.35)
    { w0 = z1,

i.e., we simply reverse the time in (1.27). Then, (1.27) is reduced to the following:

    { w_t = A⊤w,    t ∈ [0, T],
    {                                                                   (1.36)
    { w(0) = w0.

By Theorem 1.13, it is easy to see that the system (1.1) is exactly controllable if and only if solutions to (1.36) enjoy the following observability:

    B⊤w(·) ≡ 0 in (0, T)  ⇒  w0 = 0.
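Theorem 1.13 can be checked numerically on a small example. The sketch below (assuming NumPy; the double-integrator matrices A, B, the horizon and the time grid are illustrative choices of ours, not from the text) verifies the rank condition (1.3), builds GT from (1.4) by quadrature, and confirms that the control (1.5) steers y0 to y1 at time T:

```python
import numpy as np

# Double integrator y'' = u as a first-order system (illustrative choice).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n, T = 2, 1.0

def expA(t):
    # exp(At) in closed form, since A is nilpotent: A @ A = 0.
    return np.eye(2) + t * A

# Rank condition (1.3): rank [B, AB] = n.
assert np.linalg.matrix_rank(np.hstack([B, A @ B])) == n

# Gramian GT of (1.4) by trapezoidal quadrature on [0, T].
ts, h = np.linspace(0.0, T, 2001, retstep=True)
w = np.full(ts.shape, h); w[0] = w[-1] = h / 2
GT = sum(wi * expA(T - t) @ B @ B.T @ expA(T - t).T for wi, t in zip(w, ts))
assert abs(np.linalg.det(GT)) > 1e-6            # GT is non-degenerate

# Control (1.5): u(t) = -B^T exp(A^T(T-t)) GT^{-1} (exp(AT) y0 - y1).
y0, y1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c = np.linalg.solve(GT, expA(T) @ y0 - y1)
u = lambda t: -B.T @ expA(T - t).T @ c

# y(T) = exp(AT) y0 + int_0^T exp(A(T-t)) B u(t) dt, same quadrature.
yT = expA(T) @ y0 + sum(wi * expA(T - t) @ B @ u(t) for wi, t in zip(w, ts))
assert np.allclose(yT, y1)                      # state steered from y0 to y1
```

Using the same quadrature rule for GT and for the final integral makes the cancellation GT · GT⁻¹ exact, mirroring the algebra at the end of the proof.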


Things are much more complicated when handling controllability problems for stochastic equations, even in finite dimensions. Indeed, in the stochastic setting, as we will explain in Chapter 4, one cannot use a transformation similar to (1.35) to reduce the dual equation (1.27) to a forward one, and therefore, in order to solve the stochastic controllability problem under consideration, generally speaking we have to handle the observability of the corresponding (genuinely) backward stochastic equation. On the other hand, since one may put controls on both the drift and diffusion terms in the stochastic setting (as we shall see in Chapter 6, sometimes it is also necessary to introduce controls in such a way), even the choice of the control class (and therefore the definition of controllability) is a delicate problem. Indeed, in order that the drift term makes sense, one needs the control to be L1-integrable in time, while for the diffusion term one needs the control to be L2-integrable in time. The problem is then how to balance the time integrability of controls and minimize the number of controls. As far as we know, such a problem is actually unclear even for the controllability of stochastic differential equations in finite dimensions (see some progress in this respect in Chapter 6).

We now use the duality argument to analyze a quadratic optimal control problem for the linear system (1.1) (with a given y0 ∈ Rn), for which the cost functional is given by

    J(u(·)) = (1/2) ∫_0^T (⟨ My(t), y(t) ⟩_{Rn} + ⟨ Ru(t), u(t) ⟩_{Rm}) dt + (1/2) ⟨ Gy(T), y(T) ⟩_{Rn},    (1.37)

where M = M⊤ ∈ R^{n×n}, R = R⊤ ∈ R^{m×m}, G = G⊤ ∈ R^{n×n} and u(·) ∈ L2(0, T; Rm). Similarly to (1.10), ū(·) ∈ L2(0, T; Rm) is called an optimal control for the control system (1.1) with the cost functional (1.37) if

    J(ū(·)) = inf_{u(·)∈L2(0,T;Rm)} J(u(·)).

The corresponding solution ȳ(·) to (1.1) (with u(·) replaced by ū(·)) is called an optimal trajectory, and (ū(·), ȳ(·)) is called an optimal pair.

In order to derive a necessary condition satisfied by ū(·) and ȳ(·), by the classical variational method we take any ε ∈ R and u(·) ∈ L2(0, T; Rm), and set

    u^ε(·) = ū(·) + εu(·).                                              (1.38)

Denote by y^ε(·) the corresponding solution to (1.1) (with u(·) replaced by u^ε(·)). Then y^ε(·) = ȳ(·) + εr(·), where r(·) satisfies the following variational equation:

    { r_t(t) = Ar(t) + Bu(t),    t ∈ [0, T],
    {                                                                   (1.39)
    { r(0) = 0.


Clearly,

    J(u^ε(·)) = J(ū(·)) + ε [ ∫_0^T (⟨ Mȳ(t), r(t) ⟩_{Rn} + ⟨ Rū(t), u(t) ⟩_{Rm}) dt
                + ⟨ Gȳ(T), r(T) ⟩_{Rn} ] + O(ε²) ≥ J(ū(·)),    as ε → 0.

Hence,

    ∫_0^T (⟨ Mȳ(t), r(t) ⟩_{Rn} + ⟨ Rū(t), u(t) ⟩_{Rm}) dt + ⟨ Gȳ(T), r(T) ⟩_{Rn} = 0.    (1.40)

Obviously, (1.40) is a necessary condition for the optimal pair (ȳ(·), ū(·)). In order to rewrite (1.40) in a better form, similarly to (1.27), one introduces a "backward" ordinary differential equation as follows:

    { ψ_t(t) = −A⊤ψ(t) + Mȳ(t),
    {                                                                   (1.41)
    { ψ(T) = −Gȳ(T).

Computing ∫_0^T (d/dt)⟨ r(t), ψ(t) ⟩_{Rn} dt from (1.39) and (1.41), we obtain that

    ∫_0^T (⟨ Mȳ(t), r(t) ⟩_{Rn} + ⟨ B⊤ψ(t), u(t) ⟩_{Rm}) dt = −⟨ Gȳ(T), r(T) ⟩_{Rn},
        ∀ u(·) ∈ L2(0, T; Rm).                                          (1.42)

Finally, combining (1.40) and (1.42), we conclude the following necessary condition for (ȳ(·), ū(·)):

    Rū(t) = B⊤ψ(t).                                                     (1.43)

Using a similar method, one can prove the maximum condition (1.11), except that the details are more complicated than those for (1.43); more importantly, since the control set U is an arbitrary subset of Rm (without any linearity or convexity assumptions on U), the classical variation (1.38) has to be replaced by the following spike variation:

    u^ε(t) = { ū(t),    t ∈ [0, T] \ Eε,
             { u(t),    t ∈ Eε,                                         (1.44)

where u(·) ∈ Uad, ε ∈ (0, T], and Eε (⊆ [0, T]) is a measurable set with Lebesgue measure m(Eε) = ε.

Nevertheless, there are some essential difficulties in extending the results on optimal controls for deterministic problems to the stochastic situation. Firstly, similarly to the dual equation (1.27) for the controllability problem, in the study of stochastic optimal control problems, as a counterpart of the dual equation (1.12) (also (1.41)), one needs to introduce backward stochastic differential equations ([31]). Secondly, again similarly to the controllability


problems, since one may put controls on both the drift and diffusion terms in the stochastic setting, the corresponding necessary conditions for optimal controls (even in finite dimensions) are quite different from those for deterministic problems: for example, the first-order necessary condition when the diffusion term contains the control variable and the control set is allowed to be nonconvex ([273]), and the second-order necessary condition when the diffusion term contains the control variable even if the control set is convex ([378, 379, 380]). Thirdly, and more challengingly, since we are mainly concerned with optimal controls for stochastic evolution equations in infinite dimensions, we have to handle operator-valued backward stochastic evolution equations, for which no satisfactory theory on stochastic integration/evolution equations existed in the previous literature that could be employed to derive the well-posedness of such equations in the usual sense. In order to overcome this difficulty, we introduce a new concept of solution, i.e., the transposition solution to the desired operator-valued backward stochastic evolution equation, and prove the corresponding well-posedness (see [242, 248] and Chapters 12 and 13 for details). Note that, as we shall explain in the next section, our stochastic transposition method can also be viewed as a variant of the duality argument.
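The first-order condition (1.40), and hence (1.43), can be sanity-checked on a time discretization. The sketch below (assuming NumPy; the matrices A, B, M, R, G, the horizon and the grid size are illustrative choices of ours, not from the text) minimizes the Euler discretization of (1.37) exactly, which is possible since the discretized cost is quadratic in the stacked control values, and then verifies that the directional derivative of J vanishes at the minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N, T = 2, 1, 50, 1.0
h = T / N
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
M, R, G = np.eye(n), 0.1 * np.eye(m), np.eye(n)   # M, G >= 0, R > 0
y0 = np.array([1.0, 0.0])

def trajectory(u, y_init):
    # Explicit Euler scheme for (1.1); u has shape (N, m).
    y = np.empty((N + 1, n)); y[0] = y_init
    for k in range(N):
        y[k + 1] = y[k] + h * (A @ y[k] + B @ u[k])
    return y

def residual(u, y_init):
    # J(u) = 0.5 * ||residual(u, y0)||^2, the discretized cost (1.37).
    y = trajectory(u, y_init)
    parts = [np.sqrt(h) * np.linalg.cholesky(M).T @ y[k] for k in range(N)]
    parts += [np.sqrt(h) * np.linalg.cholesky(R).T @ u[k] for k in range(N)]
    parts.append(np.linalg.cholesky(G).T @ y[N])
    return np.concatenate(parts)

def J(u):
    r = residual(u, y0); return 0.5 * r @ r

# residual is affine in the stacked control: r(u) = S u + t; minimize exactly.
t = residual(np.zeros((N, m)), y0)
S = np.column_stack([residual(np.eye(N * m)[i].reshape(N, m), np.zeros(n))
                     for i in range(N * m)])
ustar = np.linalg.lstsq(S, -t, rcond=None)[0].reshape(N, m)

# Discrete analogue of (1.40): the directional derivative of J at ustar
# vanishes in every direction, and ustar is a minimizer.
for _ in range(5):
    d = rng.standard_normal((N, m)); eps = 1e-2
    deriv = (J(ustar + eps * d) - J(ustar - eps * d)) / (2 * eps)
    assert abs(deriv) < 1e-6
    assert J(ustar) <= J(ustar + eps * d)
```

The adjoint state ψ of (1.41) is deliberately avoided here: the vanishing directional derivative is precisely the content of (1.40), from which (1.43) follows once ψ is introduced.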

1.4 Two Basic Methods in This Book

In this section, we present two basic methods (via illuminating examples) that will be used systematically throughout this book.

The main method that we employ in this book for the analysis of the structure of stochastic distributed parameter systems is the global Carleman-type estimate. This method was introduced by T. Carleman ([47]) in 1939 to prove the uniqueness of solutions to second order elliptic partial differential equations in two variables. The key in [47] is an elementary energy estimate with an exponential weight. This type of weighted energy estimates, now referred to as Carleman estimates, has become one of the major tools in the study of the unique continuation property, inverse problems and control problems for many partial differential equations. However, it is only in the last ten-plus years that the power of the global Carleman estimate in the context of controllability of stochastic partial differential equations came to be realized. For the readers' convenience, we explain the main idea of the Carleman estimate by the following very simple example:

Example 1.14. Consider the following ordinary differential equation in Rn:

    { y_t(t) = a(t)y(t)    in [0, T],
    {                                                                   (1.45)
    { y(0) = y0.

It is well known that if a ∈ L∞(0, T), then there is a constant CT > 0 such that for all solutions of (1.45), it holds that


    max_{t∈[0,T]} |y(t)|_{Rn} ≤ CT |y0|_{Rn},    ∀ y0 ∈ Rn.             (1.46)

Now we give a slightly different proof of this result via a Carleman-type estimate. For any λ ∈ R, it is easy to see that

    (d/dt)(e^{−2λt}|y(t)|²_{Rn}) = −2λe^{−2λt}|y(t)|²_{Rn} + 2e^{−2λt}⟨ y_t(t), y(t) ⟩_{Rn}
                                 = 2(a(t) − λ)e^{−2λt}|y(t)|²_{Rn}.      (1.47)

Choosing λ = |a|_{L∞(0,T)}, we find that

    |y(t)|_{Rn} ≤ e^{λT}|y0|_{Rn},    t ∈ [0, T],

which proves (1.46).

It is easy to see that the equality (1.47) can be rewritten as the following pointwise identity:

    2e^{−2λt}⟨ y_t(t), y(t) ⟩_{Rn} = (d/dt)(e^{−2λt}|y(t)|²_{Rn}) + 2λe^{−2λt}|y(t)|²_{Rn}.    (1.48)

Note that d/dt is the principal operator of the first equation in (1.45). The main idea of (1.48) is to establish a pointwise identity (and/or estimate) on the principal operator d/dt in terms of the sum of a "divergence" term (d/dt)(e^{−2λt}|y(t)|²_{Rn}) and an "energy" term 2λe^{−2λt}|y(t)|²_{Rn}. One then chooses λ large enough to absorb the undesired terms.

In the above example, the weight function depends only on the time variable t. By suitably choosing time- and space-dependent weight functions, one can apply the method of Carleman-type estimates to handle many problems in both deterministic and stochastic partial differential equations. Nevertheless, for many control problems for stochastic partial differential equations, one has to establish Carleman-type estimates for backward stochastic partial differential equations (in which there are two unknowns in one equation), and therefore the results obtained so far are not fully satisfactory. It seems that one needs to introduce Carleman-type estimates whose weight functions are suitable stochastic processes (i.e., depending on the sample point as well), but this remains to be done.
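The mechanism of Example 1.14 can be observed numerically: the proof shows that for λ ≥ |a|_{L∞(0,T)} the weighted energy e^{−2λt}|y(t)|² is nonincreasing, which gives (1.46) with CT = e^{λT}. A one-dimensional sketch (assuming NumPy; the choice a(t) = sin t and the constants are our own):

```python
import numpy as np

T, y0 = 1.0, 3.0
# a(t) = sin t on [0, T], so lam = 1.0 is an upper bound for |a|_{L^inf(0,T)}.
lam = 1.0

t = np.linspace(0.0, T, 1001)
y = y0 * np.exp(1.0 - np.cos(t))        # exact solution of y' = sin(t) * y
w = np.exp(-2 * lam * t) * y**2         # weighted energy from (1.47)

assert np.all(np.diff(w) <= 1e-12)      # e^{-2 lam t}|y|^2 is nonincreasing
assert np.all(np.abs(y) <= np.exp(lam * T) * abs(y0))   # estimate (1.46)
```

The first assertion is exactly the sign information carried by the right-hand side of (1.47) once a(t) − λ ≤ 0.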

The main method that we employ in this book to deal with optimal control problems for stochastic distributed parameter systems is the stochastic transposition method (see Chapters 12 and 13 for details). More precisely, we solve the corresponding dual equations, including in particular operator-valued backward stochastic evolution equations, which are key tools for establishing the Pontryagin-type maximum principle, in the sense of transposition solutions. In the stochastic setting in finite dimensions, based on a new and delicate


Riesz-type Representation Theorem (characterizing the dual of the Banach space of certain vector-valued stochastic processes with different integrability in the time variable and the probability measure; see [239] and Theorem 2.73 in Chapter 2), the stochastic transposition method (for solving backward stochastic differential equations) was first introduced in our paper [241] (see [371, pp. 353–354] for a rudiment of this method). Our method was further developed in [242] (and also [244, 245]) for the stochastic setting in infinite dimensions (see also Chapters 4 and 12). Recently, in [248] we employed the stochastic transposition method to characterize the optimal feedbacks for general stochastic linear quadratic control problems (with random coefficients) in infinite dimensions, in which the point is to introduce a suitable notion of solutions (i.e., transposition solutions) to the corresponding operator-valued backward stochastic Riccati equations (see also Chapter 13).

Although our stochastic transposition method was developed to handle backward stochastic evolution equations, it is stimulated by the classical transposition method for solving non-homogeneous boundary value problems for deterministic partial differential equations (see [208] for a systematic introduction to this topic). For the readers' convenience, as in [244], let us recall the main idea of the classical transposition method for the following wave equation with non-homogeneous Dirichlet boundary conditions:

    { y_tt − ∆y = 0                in Q,
    { y = u                        on Σ,                                (1.49)
    { y(0) = y0, y_t(0) = y1       in G.

Here G ⊂ Rn is a nonempty open bounded domain with a C² boundary Γ, Q ≡ (0, T) × G, Σ ≡ (0, T) × Γ, (y0, y1) ∈ L²(G) × H^{−1}(G) and u ∈ L²((0, T) × Γ) are given, and y is the unknown. When u ≡ 0, one can use the standard semigroup theory to prove the well-posedness of (1.49) in the space C([0, T]; L²(G)) ∩ C¹([0, T]; H^{−1}(G)).

When u ≢ 0, y|Σ = u no longer makes sense by the usual trace theorem in the theory of Sobolev spaces, and one needs the transposition method. To this end, for any f ∈ L¹(0, T; L²(G)) and g ∈ L¹(0, T; H¹₀(G)), consider the following test equation:

    { ζ_tt − ∆ζ = f + g_t          in Q,
    { ζ = 0                        on Σ,                                (1.50)
    { ζ(T) = ζ_t(T) = 0            in G.

(1.50) admits a unique solution ζ ∈ C([0, T]; H¹₀(G)) ∩ C¹([0, T]; L²(G)), which enjoys a hidden regularity ∂ζ/∂ν ∈ L²(Σ) (e.g. [207]). Here and henceforth, ν(x) = (ν¹(x), · · ·, νⁿ(x)) stands for the unit outward normal vector of G at x ∈ Γ.

To give a reasonable definition for solutions to (1.49), we first consider the case where y is smooth. Assume that f ∈ L²(Q), g ∈ C₀^∞(0, T; H¹₀(G)),


y1 ∈ L²(G), and that y ∈ H²(Q). Then, multiplying the first equation in (1.49) by ζ, integrating over Q, and using integration by parts, we find that

    ∫_Q (fy − gy_t) dxdt = ∫_G (ζ(0)y1 − ζ_t(0)y0) dx − ∫_Σ (∂ζ/∂ν) u dΣ.    (1.51)

Clearly, (1.51) still makes sense even if the regularity of y is relaxed to y ∈ C([0, T]; L²(G)) ∩ C¹([0, T]; H^{−1}(G)). This motivates the following notion:

Definition 1.15. We call y ∈ C([0, T]; L²(G)) ∩ C¹([0, T]; H^{−1}(G)) a transposition solution to (1.49) if y(0) = y0, y_t(0) = y1, and for any f ∈ L¹(0, T; L²(G)) and g ∈ L¹(0, T; H¹₀(G)), it holds that

    ∫_Q fy dxdt − ∫_0^T ⟨ g, y_t ⟩_{H¹₀(G),H^{−1}(G)} dt
        = ⟨ ζ(0), y1 ⟩_{H¹₀(G),H^{−1}(G)} − ∫_G ζ_t(0)y0 dx − ∫_Σ (∂ζ/∂ν) u dΣ,

where ζ is the unique solution to (1.50).

One can show the well-posedness of (1.49) in the sense of Definition 1.15 by means of the classical transposition method ([208]). The point of this method is to interpret the solution to the forward wave equation (1.49) (with non-homogeneous Dirichlet boundary conditions) by means of the test equation (1.50), which is a backward wave equation with non-homogeneous source terms. In the deterministic setting, since the wave equation is time-reversible, there is no essential difference between the forward equation and the backward one. Nevertheless, this suggests interpreting backward stochastic differential/evolution equations in terms of forward stochastic differential/evolution equations, as we have done in [241, 242] and elsewhere.

Clearly, the transposition method is a variant of the classical duality argument, and in some sense it provides a way to see something which is not easy to detect directly. Indeed, the main idea of this method is to interpret the solution to a less understood equation by means of another, well understood one. Nevertheless, the philosophy of our stochastic transposition method is quite different. Indeed, the main purpose of introducing the stochastic transposition method in our works is not to solve non-homogeneous boundary value problems for stochastic partial differential equations (though it does work for this sort of problem; see, for example, Proposition 8.8 of Chapter 8 in this book) but to solve backward stochastic equations (especially the operator-valued backward stochastic evolution equations mentioned above), in which no boundary conditions appear explicitly (or even no boundary conditions at all)!

1.5 Notes and Comments

Sections 1.1 and 1.2 are partially based on [389] and [220], respectively.


Theorem 1.7 can be found in [202, Lemma 2.4 in Chapter 7], for example. Theorem 1.10 can be found in [291, Lemma 4.13, pp. 94–95 and Theorem 4.15, p. 97] and [330, Theorem 1.2 and Remark 1.3]. Theorem 1.12 is well-known (e.g., [377, Theorem 2.1 in Chapter 2 of Part IV, p. 207]). It seems that Example 1.14 was first introduced in [201] to explain the main idea of Carleman estimates (see [188] and also [110] for much more general pointwise weighted identities/inequalities). The material in Section 1.4 is partially based on [201, 244].

2 Some Preliminaries in Stochastic Calculus

This chapter is a concise introduction to stochastic calculus (in infinite dimensions in particular), which plays a fundamental role throughout this book. In particular, we collect the preliminaries most relevant to studying control problems for stochastic distributed parameter systems. We also fix some unified notations (which may differ from one paper/book to another) to be used in later chapters.

Presenting the proofs of all the results introduced in this chapter would considerably increase both the size and scope of this book. Thus, we distinguish two cases. When we think that a proof is important for understanding the subsequent material and/or when there is no immediate reference available, we provide the details. Otherwise, we omit the proofs but give standard and easily accessible references. Knowledgeable readers may skip this chapter or regard it as a quick reference.

2.1 Measures and Probability, Measurable Functions and Random Variables

Fix a nonempty set Ω and a family F of subsets of Ω. For any E ∈ F, denote by χE(·) the characteristic function of E, defined on Ω. F is called a σ-field on Ω if i) Ω ∈ F; ii) Ω \ A ∈ F for any A ∈ F; and iii) ∪_{i=1}^∞ Ai ∈ F whenever each Ai ∈ F. If F is a σ-field on Ω, then (Ω, F) is called a measurable space. An element A ∈ F is called a measurable set on (Ω, F), or simply a measurable set. A set function µ : F → [0, +∞] is called a measure on (Ω, F) if µ(∅) = 0 and µ is countably additive, i.e., µ(∪_{i=1}^∞ Ai) = Σ_{i=1}^∞ µ(Ai) whenever {Ai}_{i=1}^∞ is a sequence of mutually disjoint sets from F. The triple (Ω, F, µ) is called a measure space. The measure µ is called finite (resp. σ-finite) if µ(Ω) < ∞ (resp. there exists a sequence {Ai}_{i=1}^∞ ⊂ F so that Ω = ∪_{i=1}^∞ Ai and µ(Ai) < ∞ for each i ∈ N). We call A a µ-null (measurable) set if µ(A) = 0.

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_2


A probability space is a finite measure space (Ω, F, P) for which P(Ω) = 1. In this case, P is also called a probability measure; any ω ∈ Ω is called a sample point; any A ∈ F is called an event, and P(A) represents the probability of the event A. If an event A ∈ F is such that P(A) = 1, then we may alternatively say that A holds, P-a.s., or simply A holds a.s. (if the probability P is clear from the context).

For any measure space (Ω, F, µ), write

    N = {B ⊂ Ω | B ⊂ A for some µ-null set A}.

(Ω, F, µ) is said to be complete if N ⊂ F. In particular, one may define a complete probability space (Ω, F, P). If (Ω, F, µ) is incomplete, then the class F̄ of all sets of the form (E \ N) ∪ (N \ E), with E ∈ F and N ∈ N, is a σ-field which contains F as a proper sub-class, and the set function µ̄ defined by µ̄((E \ N) ∪ (N \ E)) = µ(E) is a complete measure on F̄. The measure µ̄ is called the completion of µ.

Let (Ω1, F1), · · ·, (Ωn, Fn) be measurable spaces, n ∈ N. Denote by F1 × · · · × Fn the σ-field¹ (on the Cartesian product Ω1 × · · · × Ωn) generated by the subsets of the form A1 × · · · × An, where Ai ∈ Fi, 1 ≤ i ≤ n. Let µi be a measure on (Ωi, Fi). We call µ a product measure on (Ω1 × · · · × Ωn, F1 × · · · × Fn) induced by µ1, · · ·, µn if

    µ(A1 × · · · × An) = ∏_{i=1}^n µi(Ai),    ∀ Ai ∈ Fi.

Theorem 2.1. Let (Ωi, Fi, µi) be a σ-finite measure space, 1 ≤ i ≤ n. Then there is one and only one product measure µ (denoted by µ1 × · · · × µn) on (Ω1 × · · · × Ωn, F1 × · · · × Fn) induced by {µi}_{i=1}^n.

Further, we consider product spaces and product measures for countably many measurable spaces and measures. In this case, for simplicity, we only present the definition and related result for probability spaces, which is enough for the rest of this book. Let {(Ωi, Fi)}_{i=1}^∞ be a sequence of measurable spaces. Denote by ∏_{i=1}^∞ Fi the σ-field on the Cartesian product ∏_{i=1}^∞ Ωi generated by the subsets of the form

    ∏_{i=1}^∞ Ai, where Ai ∈ Fi for all i ∈ N, and Ai = Ωi for all but a finite number of values of i.    (2.1)

Let Pi be a probability measure on (Ωi, Fi), i ∈ N. We call P a product probability measure on (∏_{i=1}^∞ Ωi, ∏_{i=1}^∞ Fi) if for any ∏_{i=1}^∞ Ai of the form (2.1),

    P(∏_{i=1}^∞ Ai) = ∏_{i=1}^∞ Pi(Ai).

¹ Note that here and henceforth the σ-field F1 × · · · × Fn does not stand for the Cartesian product of F1, · · ·, Fn.
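On finite sample spaces, the product measure of Theorem 2.1 can be computed directly from its defining property on rectangles. A minimal sketch (using Python's fractions module for exact arithmetic; the two toy measure spaces below are our own choices, not from the text):

```python
from fractions import Fraction as Fr
from itertools import product

# Two finite measure spaces, each measure given on singletons.
mu1 = {'a': Fr(1, 2), 'b': Fr(1, 2)}
mu2 = {0: Fr(1, 3), 1: Fr(2, 3)}

# Product measure on singletons of the Cartesian product.
mu = {(x, y): mu1[x] * mu2[y] for x, y in product(mu1, mu2)}

def measure(m, A):
    # Countable (here finite) additivity over the set A.
    return sum((m[w] for w in A), Fr(0))

# mu(A1 x A2) = mu1(A1) * mu2(A2) for every rectangle A1 x A2.
for A1 in [set(), {'a'}, {'b'}, {'a', 'b'}]:
    for A2 in [set(), {0}, {1}, {0, 1}]:
        rect = set(product(A1, A2))
        assert measure(mu, rect) == measure(mu1, A1) * measure(mu2, A2)

assert measure(mu, set(mu)) == 1    # total mass = mu1(Omega1) * mu2(Omega2)
```

Since the rectangles generate the product σ-field, this rectangle property already pins the measure down uniquely, which is the content of the theorem.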


Theorem 2.2. Let {(Ωi, Fi, Pi)}_{i=1}^∞ be a sequence of probability spaces. Then, there is a unique product probability measure P (denoted by ∏_{i=1}^∞ Pi) on (∏_{i=1}^∞ Ωi, ∏_{i=1}^∞ Fi) induced by {Pi}_{i=1}^∞.

If (G, J) is a topological space (on a given nonempty set G), then the smallest σ-field containing all open sets of G is called the Borel σ-field of G, denoted by B(G). Any set A ∈ B(G) is called a Borel set (in (G, J)). Let H be a metric space with metric d. We call any A ∈ B(H) a Borel set in H.

Let (Ω, F) and (Ω′, F′) be two measurable spaces and f : Ω → Ω′ be a map. The map f is said to be F/F′-measurable, or simply F-measurable, or even measurable (when no confusion can occur) if f^{−1}(F′) ⊂ F. Particularly, if (Ω′, F′) = (H, B(H)), then f is said to be an (H-valued) F-measurable (or simply measurable) function. In the context of probability theory, f is called an (H-valued) F-random variable or simply a random variable (if the meaning is clear from the context). Note that measurable maps and random variables are defined without reference to measures. Also, it is clear that a random variable is a special case of a measurable map.

For a measurable map f : (Ω, F) → (Ω′, F′), it is easy to show that f^{−1}(F′) is a sub-σ-field of F. We call it the σ-field generated by f, and denote it by σ(f). Further, for a given index set Λ and a family of measurable maps {fλ}_{λ∈Λ} (defined on (Ω, F), with possibly different ranges), we denote by σ(fλ; λ ∈ Λ) the σ-field generated by ∪_{λ∈Λ} σ(fλ).

Now, let us introduce an important notion, independence, which distinguishes probability theory from the usual measure theory.

Definition 2.3. Let (Ω, F, P) be a probability space.

1) We say that two events A and B are independent if P(A ∩ B) = P(A)P(B);

2) Let J1 and J2 be two subsets of F.
We say that J1 and J2 are independent if P(A ∩ B) = P(A)P(B) for any A ∈ J1 and B ∈ J2;

3) Let f and g be two random variables defined on (Ω, F) (with possibly different ranges), and J ⊂ F. We say that f and g are independent if σ(f) and σ(g) are independent. We say that f and J are independent if σ(f) and J are independent;

4) Let {Aλ}_{λ∈Λ} ⊂ F. We say that {Aλ}_{λ∈Λ} is a class of mutually independent events if

    P(Aλ1 ∩ · · · ∩ Aλk) = P(Aλ1) · · · P(Aλk)

for any k ∈ N and λ1, · · ·, λk ∈ Λ satisfying λi ≠ λj whenever i ≠ j, i, j = 1, · · ·, k.

The following result is known as the Borel-Cantelli lemma.

Theorem 2.4. 1) Suppose that {Ai}_{i=1}^∞ is a sequence of measurable sets in a measure space (Ω, F, µ). If Σ_{i=1}^∞ µ(Ai) < +∞, then µ(lim_{i→∞} Ai) = 0. Here and henceforth,

    lim_{i→∞} Ai ≜ ∩_{k=1}^∞ ∪_{i=k}^∞ Ai.


2) If (Ω, F, P) is a probability space and {Ai}_{i=1}^∞ is a sequence of mutually independent events, then the condition Σ_{i=1}^∞ P(Ai) = +∞ implies that P(lim_{i→∞} Ai) = 1.

Remark 2.5. In the probability context, the first conclusion in Theorem 2.4 gives a condition under which (almost surely) an event occurs only finitely often; the second one states that, in the independence case, if that condition fails, the event almost surely occurs infinitely often.

Let f be a map from Ω to Ω′ and P be a property concerning the map f at some elements in Ω. In the rest of this book, we shall simply denote by {P} the subset {ω ∈ Ω | P holds for f(ω)}. For example, when Ω′ = R we use {f ≥ 0} to denote {ω ∈ Ω | f(ω) ≥ 0}.

The following two notions of convergence are used frequently.

Definition 2.6. Let (Ω, F, µ) be a measure space and {fi}_{i=0}^∞ be a sequence of H-valued functions defined on Ω.

1) The sequence {fk}_{k=1}^∞ is said to converge to f0 in measure, denoted by f0 = µ-lim_{k→∞} fk, if d(fk, f0) is F-measurable and for every ε > 0,

    lim_{k→∞} µ({d(fk, f0) ≥ ε}) = 0.

(In the context of probability theory, {fk}_{k=1}^∞ is said to converge to f0 in probability.)

2) The sequence {fk}_{k=1}^∞ is said to converge to f0 almost everywhere, denoted by lim_{k→∞} fk = f0, µ-a.e., if {lim_{k→∞} fk ≠ f0} ∈ F and

    µ({lim_{k→∞} fk ≠ f0}) = 0.

(Clearly, in the context of probability theory, {fk}_{k=1}^∞ converges to f0 a.s.)

The relationship between convergence in measure and convergence almost everywhere is as follows.

Theorem 2.7. Suppose that (Ω, F, µ) is a measure space and {fi}_{i=0}^∞ is a sequence of H-valued functions defined on Ω.

1) If {fk}_{k=1}^∞ converges to f0 in measure, then there is a subsequence {fnk}_{k=1}^∞ of {fk}_{k=1}^∞ such that fnk → f0, µ-a.e. as k → ∞.

2) Suppose in addition that µ is finite, and d(fk, f0) is F-measurable for each k ∈ N. Then the µ-a.e. convergence of {fk}_{k=1}^∞ to f0 implies that {fk}_{k=1}^∞ converges to f0 in measure.

Clearly, the H-valued measurable functions/random variables introduced above are natural generalizations of the usual real-valued measurable functions/random variables. However, as pointed out in [264], when H is a non-separable Banach space, the sum of two H-valued measurable functions is not necessarily measurable. Because of this, one needs to introduce other notions of measurability for functions (or random variables, in the context of probability) valued in metric spaces.
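The gap between the two notions in Theorem 2.7 is visible already for the classical "typewriter" sequence of indicators of dyadic intervals on [0, 1) with Lebesgue measure — a standard textbook example, not taken from the text. A sketch (assuming NumPy): the supports shrink, so fk → 0 in measure, yet every point is covered by one interval in each dyadic generation, so fk(x) converges at no x.

```python
import numpy as np

def block(k):
    # Enumerate dyadic intervals: k -> (n, j) with k + 1 = 2^n + j, 0 <= j < 2^n.
    n = int(np.floor(np.log2(k + 1)))
    return n, k + 1 - 2**n

def f(k, x):
    # k-th typewriter function: indicator of [j/2^n, (j+1)/2^n) at points x.
    n, j = block(k)
    return ((j / 2**n <= x) & (x < (j + 1) / 2**n)).astype(float)

# Convergence in measure: the support of f_k has measure 2^{-n} -> 0.
supports = [2.0 ** (-block(k)[0]) for k in range(1023)]
assert supports[-1] < 1e-2 and min(supports) == supports[-1]

# No pointwise convergence at x = 0.3: within each generation n, exactly one
# interval contains x, so f_k(x) takes the value 1 (and 0) infinitely often.
x = np.array([0.3])
vals = np.array([f(k, x)[0] for k in range(1023)])
for n in range(9):
    row = vals[2**n - 1: 2**(n + 1) - 1]   # generation n
    assert row.sum() == 1.0
```

Part 1) of the theorem is reflected here too: extracting one interval per generation that avoids x gives a subsequence converging at x.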


Definition 2.8. Let f(·) be an H-valued function defined on a measurable space (Ω, F).

1) The function f(·) is called weakly F-measurable (or simply, weakly measurable) if H is a Banach space² and, for any φ ∈ H∗, the (scalar) function φ(f(·)) is measurable;

2) We call f(·) an F-simple function if³

    f(·) = Σ_{i=1}^k χ_{Ei}(·)hi,                                       (2.2)

for some k ∈ N, hi ∈ H, and mutually disjoint sets E1, · · ·, Ek ∈ F satisfying ∪_{i=1}^k Ei = Ω;

3) We call f(·) an F-countably valued function if f(·) = Σ_{i=1}^∞ χ_{Ei}(·)hi for some h1, h2, · · · ∈ H and mutually disjoint sets E1, E2, · · · ∈ F satisfying ∪_{i=1}^∞ Ei = Ω;

4) The function f(·) is said to be strongly F-measurable or simply strongly measurable if there exists a sequence of (H-valued) F-simple functions {fk}_{k=1}^∞ converging pointwise to f;

5) The function f(·) is said to be separably valued if its range {f(ω) | ω ∈ Ω} is separable.

The following result indicates that when H is separable, the above defined strong F-measurability, F-measurability and weak F-measurability are equivalent.

Theorem 2.9. Let f(·) be an H-valued function defined on a measurable space (Ω, F). Then, the following two statements are equivalent:

1) f(·) is strongly F-measurable;
2) f(·) is F-measurable and separably valued.

Further, if H is a Banach space, then the above two statements are equivalent to:

3) f(·) is weakly F-measurable and separably valued.

For vector-valued functions defined on measure spaces, one may slightly change/weaken the requirements in Definition 2.8 4)–5) as follows.

Definition 2.10. Let f(·) be an H-valued function defined on a σ-finite measure space (Ω, F, µ).

1) The function f(·) is said to be strongly F-measurable or simply strongly measurable (w.r.t. µ) if there exists a sequence of (H-valued) F-simple functions {fk}_{k=1}^∞ such that lim_{k→∞} fk = f, µ-a.e.;

2) The function f(·) is called µ-almost separably valued if there exists an E0 ∈ F such that µ(E0) = 0 and {f(ω) | ω ∈ Ω \ E0} is separable.

² When H is a Banach space, we denote by H∗ the dual space of H.
³ Since E1, · · ·, Ek are mutually disjoint, the function f(·) in (2.2) is simply defined by f(x) = hi when x ∈ Ei for some i = 1, · · ·, k.


Note that, for any measurable space (Ω, F), one may define a trivial measure µ0 by µ0(∅) = 0 and µ0(A) = ∞ for any nonempty A ∈ F. Hence, the strongly F-measurable functions and separably valued functions in Definition 2.8 are special cases of those in Definition 2.10.

Clearly, the norm of an (H-valued) strongly F-measurable function (w.r.t. µ) is measurable; any linear combination of two strongly F-measurable functions (w.r.t. µ) is strongly measurable whenever H is a Banach space; and the (µ-almost everywhere) strong limit of a sequence of strongly F-measurable functions is strongly measurable.

Similar to Theorem 2.9, one has the following result.

Theorem 2.11. Let f(·) be an H-valued function defined on a complete σ-finite measure space (Ω, F, µ). Then, the following two statements are equivalent:

1) f(·) is strongly F-measurable w.r.t. µ;
2) f(·) is F-measurable and µ-almost separably valued.

Further, if H is a Banach space, then the above two statements are equivalent to:

3) f(·) is weakly F-measurable and µ-almost separably valued.

By Theorem 2.11, it is clear that when (Ω, F, µ) is a complete σ-finite measure space and H is a separable Banach space, the strong F-measurability w.r.t. µ, the F-measurability and the weak F-measurability are equivalent. We remark that the completeness assumption on (Ω, F, µ) in the second part of Theorem 2.11 cannot be dropped (indeed, without this assumption, the strong F-measurability w.r.t. µ does not imply the weak F-measurability, even in finite dimensions).

Also, one has the following characterization of the strong F-measurability w.r.t. µ.

Proposition 2.12. Let (Ω, F, µ) be a σ-finite measure space. Then, f(·) is an H-valued, strongly F-measurable (w.r.t. µ) function on (Ω, F, µ) if and only if there exist a µ-null set Ω0 and a sequence of F-countably (H-)valued functions {fi(·)}_{i=1}^∞ such that {fi(·)} converges to f(·) uniformly on Ω \ Ω0.
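The uniform approximation by countably valued functions in Proposition 2.12 can be made concrete in one dimension (assuming NumPy; the function f and the truncation fk(ω) = ⌊k f(ω)⌋ / k are our own illustrative choices): each fk takes countably many values, and the uniform error is at most 1/k.

```python
import numpy as np

f = lambda w: np.sin(3 * w) + w          # a measurable function on Omega = [0, 2]
omega = np.linspace(0.0, 2.0, 10001)     # sample points of Omega

for k in [1, 10, 100, 1000]:
    fk = np.floor(k * f(omega)) / k      # countably valued approximation of f
    err = np.max(np.abs(fk - f(omega)))
    assert err <= 1.0 / k                # uniform error at most 1/k
```

Since ⌊kx⌋/k ∈ (x − 1/k, x], the error bound holds at every sample point, which is exactly the uniform convergence the proposition asks for (here with Ω0 = ∅).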
We shall use the following result, known as the Doob-Dynkin lemma.

Theorem 2.13. Let (Ω, F) be a measurable space, H1 and H2 be two separable Banach spaces, and f_k : Ω → H_k (k = 1, 2) be F-measurable functions. Then f2 = g(f1) for some B(H1)-measurable function g : H1 → H2 if and only if σ(f2) ⊂ σ(f1).

Proof: We need only to prove the "if" part. Put

H ≜ { η(f1(·)) | η : H1 → H2 is B(H1)-measurable }.

It suffices to show that

H contains all H2-valued, σ(f1)-measurable functions.   (2.3)


Now we prove (2.3). Clearly, H is a linear space. Let

F′ ≜ { A ⊂ Ω | χ_A(·)h ∈ H, ∀ h ∈ H2 }.

If A ∈ σ(f1), then A = f1^{−1}(B) for some B ∈ B(H1), which leads to χ_A(·)h = χ_B(f1(·))h ∈ H for any h ∈ H2. Hence, σ(f1) ⊂ F′.

For any H2-valued, σ(f1)-measurable function ζ, we select a sequence of H2-valued, σ(f1)-simple functions {ζ_k}_{k=1}^∞ such that lim_{k→∞} ζ_k(ω) = ζ(ω) for every ω ∈ Ω. Without loss of generality, we assume that ζ_k(·) = ∑_{i=1}^{ℓ_k} χ_{f1^{−1}(B_i)}(·) h_i for some ℓ_k ∈ N with ℓ_k → ∞ as k → ∞, where h_i ∈ H2, B_i ∈ B(H1) and B_i ∩ B_j = ∅ when i ≠ j (i, j = 1, · · · , ℓ_k). Write

η_k(·) = ∑_{i=1}^{ℓ_k} χ_{B_i}(·) h_i,    B = ∪_{i=1}^∞ B_i.

It is easy to see that η_k(f1(·)) = ζ_k(·). Define

η(x) = lim_{k→∞} η_k(x) for x ∈ B,  and  η(x) = 0 for x ∈ H1 \ B.

Clearly, η : H1 → H2 is B(H1)-measurable and ζ(·) = η(f1(·)). Thus, ζ(·) ∈ H. This completes the proof of (2.3).
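On a finite sample space the construction in the proof above can be tabulated explicitly: one records η on the range of f1, and the table is consistent precisely when σ(f2) ⊂ σ(f1). A minimal sketch (the particular f1 and f2 below are illustrative choices, not from the text):

```python
# The Doob-Dynkin lemma on a finite sample space: if sigma(f2) is contained
# in sigma(f1), the proof's construction of eta can be tabulated directly.
Omega = [0, 1, 2, 3, 4, 5]

f1 = lambda w: w % 3          # generates the partition {0,3}, {1,4}, {2,5}
f2 = lambda w: (w % 3) ** 2   # constant on each cell, so sigma(f2) c sigma(f1)

# Build eta on the range of f1, as in the proof; the table is consistent
# precisely because f2 is sigma(f1)-measurable.
eta = {}
for w in Omega:
    x = f1(w)
    assert eta.get(x, f2(w)) == f2(w)
    eta[x] = f2(w)

recovered = [eta[f1(w)] for w in Omega]   # equals [f2(w) for w in Omega]
```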

2.2 Integrals and Expectation

In this section, we recall the definitions and some basic results for the Bochner integral and the Pettis integral. We omit the proofs and refer the readers to [74, 143]. Let us fix a σ-finite measure space (Ω, F, µ), a probability space (Ω, F, P) and a Banach space H.

Let f(·) be an (H-valued) F-simple function of the form (2.2). We call f(·) Bochner integrable if µ(E_i) < ∞ for each i = 1, · · · , k. In this case, for any E ∈ F, the Bochner integral of f(·) over E is defined by

∫_E f(s) dµ = ∑_{i=1}^k µ(E ∩ E_i) h_i.

In general, we have the following notion.

Definition 2.14. A strongly F-measurable function f(·) : Ω → H is said to be Bochner integrable (w.r.t. µ) if there exists a sequence of Bochner integrable F-simple functions {f_i(·)}_{i=1}^∞ converging strongly to f(·), µ-a.e. in Ω, such that

lim_{i,j→∞} ∫_Ω |f_i(s) − f_j(s)|_H dµ = 0.




For any E ∈ F, the Bochner integral of f(·) over E is defined by

∫_E f(s) dµ = lim_{i→∞} ∫_Ω χ_E(s) f_i(s) dµ(s)  in H.   (2.4)

It is easy to verify that the limit in the right hand side of (2.4) exists and that its value is independent of the choice of the sequence {f_i(·)}_{i=1}^∞. Clearly, when H = Rⁿ (for some n ∈ N), the above Bochner integral coincides with the usual integral for Rⁿ-valued functions. In particular, if (Ω, F, P) is a probability space and f : Ω → H is Bochner integrable (w.r.t. P), then we say that f has a mean, denoted by

Ef = ∫_Ω f dP.   (2.5)
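For an F-simple function, the Bochner integral is just the finite sum displayed before Definition 2.14. A minimal numerical sketch with H = R² (the measure, the partition and the vectors are arbitrary choices):

```python
# Bochner integral of a simple function: for f = sum_i chi_{E_i} h_i with
# values h_i in R^2, the integral over E is sum_i mu(E cap E_i) h_i.
mu = {0: 0.5, 1: 1.0, 2: 2.0, 3: 0.25}        # finite measure on 4 points
E_parts = [[0, 1], [2, 3]]                     # E_1, E_2 partition Omega
h = [[1.0, -1.0], [2.0, 0.5]]                  # h_1, h_2 in H = R^2

E = [1, 2, 3]                                  # integrate over this set

integral = [0.0, 0.0]
for Ei, hi in zip(E_parts, h):
    mass = sum(mu[w] for w in Ei if w in E)    # mu(E cap E_i)
    integral = [integral[k] + mass * hi[k] for k in range(2)]
```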

We also call Ef the (mathematical) expectation of f. The following result reveals the relationship between the Bochner integral (for vector-valued functions) and the usual integral (for scalar functions).

Theorem 2.15. Let f(·) : Ω → H be strongly F-measurable w.r.t. µ. Then, f(·) is Bochner integrable w.r.t. µ if and only if the scalar function |f(·)|_H : Ω → R is integrable w.r.t. µ.

Further properties of the Bochner integral are collected as follows.

Theorem 2.16. Let f(·), g(·) : (Ω, F, µ) → H be Bochner integrable. Then,
1) For any a, b ∈ C, the function af(·) + bg(·) is Bochner integrable, and

∫_E (af(s) + bg(s)) dµ = a ∫_E f(s) dµ + b ∫_E g(s) dµ,  ∀ E ∈ F.

2) For any E ∈ F,

| ∫_E f(s) dµ |_H ≤ ∫_E |f(s)|_H dµ.

3) The Bochner integral is µ-absolutely continuous, that is,

lim_{E∈F, µ(E)→0} ∫_E f(s) dµ = 0  in H.

4) If F ∈ L(H; H̃), then F f(·) is an H̃-valued Bochner integrable function, and for any E ∈ F,

∫_E F f(s) dµ = F ∫_E f(s) dµ.

It is easy to show the following result.


Proposition 2.17. Let H be a Hilbert space. If f and g are independent, Bochner integrable random variables on (Ω, F, P), valued in H, then (f, g)_H is integrable, and

E(f, g)_H = (Ef, Eg)_H.   (2.6)

The following result, known as the Dominated Convergence Theorem, will be used frequently in the sequel.

Theorem 2.18. Let f : (Ω, F, µ) → H be strongly measurable, and let g : (Ω, F, µ) → R be a real valued nonnegative integrable function. Assume that f_i : (Ω, F, µ) → H is Bochner integrable so that |f_i|_H ≤ g, µ-a.e. for each i ∈ N, and lim_{i→∞} f_i = f, µ-a.e. Then, f is Bochner integrable, and

lim_{i→∞} ∫_E f_i(s) dµ = ∫_E f(s) dµ,  ∀ E ∈ F.

The following result, known as the Monotone Convergence Theorem (also called Levi's lemma), will be used in the sequel.

Theorem 2.19. Let f : Ω → H be strongly measurable. Assume that f_i : Ω → H is Bochner integrable w.r.t. µ so that |f_i|_H ≤ |f_{i+1}|_H, µ-a.e. for each i ∈ N, sup_{i∈N} ∫_Ω |f_i|_H dµ < ∞ and lim_{i→∞} f_i = f, µ-a.e. Then the same conclusion as that in Theorem 2.18 holds.

The following result, known as Fatou's lemma, will be useful in the sequel.

Theorem 2.20. Let f, f_i : (Ω, F, µ) → R be real valued integrable functions (i ∈ N). Assume that f ≤ f_i, µ-a.e. for each i ∈ N. Then,

∫_E liminf_{i→∞} f_i(s) dµ ≤ liminf_{i→∞} ∫_E f_i(s) dµ,  ∀ E ∈ F.

Also, one has the following Fubini Theorem (on Bochner integrals).

Theorem 2.21. Let (Ω1, F1, µ1) and (Ω2, F2, µ2) be σ-finite measure spaces. If f(·, ·) : Ω1 × Ω2 → H is Bochner integrable, then the functions y(·) ≜ ∫_{Ω1} f(t, ·) dµ1(t) and z(·) ≜ ∫_{Ω2} f(·, s) dµ2(s) are a.e. defined and Bochner integrable on Ω2 and Ω1, respectively. Moreover,

∫_{Ω1×Ω2} f(t, s) d(µ1 × µ2) = ∫_{Ω1} z(t) dµ1(t) = ∫_{Ω2} y(s) dµ2(s).

For any p ∈ (0, ∞), denote by L^p_F(Ω; H) ≜ L^p(Ω, F, µ; H) the set of all (equivalence classes of) strongly measurable functions f : (Ω, F, µ) → H such that ∫_Ω |f|_H^p dµ < ∞. When p ∈ [1, ∞), this is a Banach space with the norm

|f|_{L^p_F(Ω;H)} = ( ∫_Ω |f|_H^p dµ )^{1/p}.   (2.7)


(In particular, when H is a Hilbert space, so is L²_F(Ω; H).) We denote by L^∞_F(Ω; H) ≜ L^∞(Ω, F, µ; H) the set of all (equivalence classes of) strongly measurable (H-valued) functions f such that ess sup_{ω∈Ω} |f(ω)|_H < ∞. This is also a Banach space with the norm

|f|_{L^∞_F(Ω;H)} = ess sup_{ω∈Ω} |f(ω)|_H.   (2.8)

For 1 ≤ p ≤ ∞ and T > 0, we shall simply denote L^p((0, T), L, m; H) by L^p(0, T; H), where L is the family of Lebesgue measurable sets in (0, T) and m is the Lebesgue measure on (0, T). Also, we simply denote L^p_F(Ω; R) and L^p(0, T; R) by L^p_F(Ω) and L^p(0, T), respectively.

The following easy result is known as Chebyshev's inequality.

Theorem 2.22. For any p ∈ [1, +∞), f ∈ L^p_F(Ω; H) and λ > 0, it holds that

µ({|f|_H ≥ λ}) ≤ (1/λ^p) ∫_Ω |f|_H^p dµ.   (2.9)

The following simple result is sometimes useful.

Lemma 2.23. If p ∈ (0, ∞) and f ∈ L^p_F(Ω; H), then

∫_Ω |f|_H^p dµ = p ∫_0^{+∞} λ^{p−1} µ({|f|_H ≥ λ}) dλ.   (2.10)

Proof: Since p ∈ (0, +∞), we have

∫_Ω |f|_H^p dµ = p ∫_Ω dµ ∫_0^{|f|_H} λ^{p−1} dλ = p ∫_Ω dµ ∫_0^{+∞} χ_{{|f|_H ≥ λ}} λ^{p−1} dλ = p ∫_0^{+∞} λ^{p−1} µ({|f|_H ≥ λ}) dλ,

which gives (2.10).

In the sequel, we will need the following concept of uniform integrability.

Definition 2.24. Suppose K ⊂ L¹_F(Ω; H). We call K a uniformly integrable subset (of L¹_F(Ω; H)) if ∫_{{|f|_H ≥ s}} |f|_H dµ converges to 0 uniformly (for f ∈ K) as s → +∞.

Obviously, if µ(Ω) < ∞ and K is a uniformly integrable subset of L¹_F(Ω; H), then it is also bounded in L¹_F(Ω; H). The following result characterizes L¹-convergence in terms of uniform integrability.

Theorem 2.25. Let {f_i}_{i=0}^∞ ⊂ L¹_F(Ω; H). Then f_i → f_0 in L¹_F(Ω; H) as i → ∞ if and only if {f_i}_{i=1}^∞ converges to f_0 in measure and {f_i}_{i=1}^∞ is uniformly integrable.


For nonnegative measurable functions, one has the following simple characterization of L¹-convergence.

Theorem 2.26. Let {f_i}_{i=0}^∞ be a sequence of nonnegative integrable functions on (Ω, F, µ). Then f_i → f_0 in L¹_F(Ω) as i → ∞ if and only if {f_i}_{i=1}^∞ converges to f_0 in measure and lim_{i→∞} ∫_Ω f_i dµ = ∫_Ω f_0 dµ.

Let (Ω′, F′) be another measurable space, and Φ : (Ω, F) → (Ω′, F′) be a measurable map. Then Φ induces a measure ν on (Ω′, F′) via

ν(A′) = µ(Φ^{−1}(A′)),  ∀ A′ ∈ F′.   (2.11)

The following is a change-of-variable formula.

Theorem 2.27. A function f(·) : Ω′ → H is Bochner integrable w.r.t. ν if and only if f(Φ(·)) (defined on (Ω, F)) is Bochner integrable w.r.t. µ. Furthermore,

∫_{Ω′} f(ω′) dν(ω′) = ∫_Ω f(Φ(ω)) dµ(ω).   (2.12)
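The change-of-variable formula (2.12) can be illustrated on a finite space, where the induced measure (2.11) is computed by summing the masses of preimages (the map Φ and the weights below are arbitrary choices):

```python
# Change of variables (2.12) on a finite space: integrating f w.r.t. the
# pushforward measure nu = mu o Phi^{-1} equals integrating f o Phi w.r.t. mu.
Omega = ['a', 'b', 'c', 'd']
mu = {'a': 0.1, 'b': 0.4, 'c': 0.2, 'd': 0.3}
Phi = {'a': 0, 'b': 1, 'c': 0, 'd': 2}        # measurable map Omega -> {0,1,2}

# Induced measure (2.11): nu({x}) = mu(Phi^{-1}({x})).
nu = {}
for w, m in mu.items():
    nu[Phi[w]] = nu.get(Phi[w], 0.0) + m

f = {0: 5.0, 1: -1.0, 2: 2.0}                 # integrand on the target space

lhs = sum(f[x] * nu[x] for x in nu)           # integral of f d(nu)
rhs = sum(f[Phi[w]] * mu[w] for w in Omega)   # integral of f(Phi(.)) d(mu)
assert abs(lhs - rhs) < 1e-12
```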



2.3 Signed/Vector Measures, Conditional Expectation

We fix a measurable space (Ω, F) and a Banach space H.

2.3.1 Signed Measures

Let us begin with the following notion.

Definition 2.28. A function µ : F → [−∞, +∞] is called a signed measure on (Ω, F) if
1) µ(∅) = 0;
2) µ(∪_{j=1}^∞ A_j) = ∑_{j=1}^∞ µ(A_j) for any sequence {A_j} of mutually disjoint sets from F; and
3) µ assumes at most one of the values +∞ and −∞.

Example 2.29. Let ν be a measure on (Ω, F) and f be a real valued integrable function defined on (Ω, F, ν). Then

µ(A) = ∫_A f dν,  ∀ A ∈ F,

defines a signed measure on (Ω, F). More generally, the above µ is still a signed measure if f is a measurable function on (Ω, F) and one of f⁺ and f⁻, the positive and negative parts of f, is integrable on (Ω, F, ν).

If µ is a signed measure on (Ω, F), we call a set E ⊂ Ω positive (resp. negative) (w.r.t. µ) if for every F ∈ F, E ∩ F is measurable and µ(E ∩ F) ≥ 0 (resp. µ(E ∩ F) ≤ 0).


Theorem 2.30. If µ is a signed measure on (Ω, F), then there exist two disjoint sets A and B, whose union is Ω, such that A is positive and B is negative w.r.t. µ.

The sets A and B in Theorem 2.30 are called a Hahn decomposition of Ω w.r.t. µ. It is not difficult to construct examples showing that the Hahn decomposition is not unique. However, if

Ω = A1 ∪ B1  and  Ω = A2 ∪ B2

are two Hahn decompositions of Ω, then it is easy to show that, for every measurable set E,

µ(E ∩ A1) = µ(E ∩ A2)  and  µ(E ∩ B1) = µ(E ∩ B2).

From this fact, we may define unambiguously two set functions µ⁺ and µ⁻ on (Ω, F) as follows:

µ⁺(E) = µ(E ∩ A),  µ⁻(E) = −µ(E ∩ B),  ∀ E ∈ F.

We call µ⁺ and µ⁻ respectively the upper variation and the lower variation of µ. The set function |µ|, defined for every E ∈ F by |µ|(E) = µ⁺(E) + µ⁻(E), is called the total variation of µ. Obviously,

µ(E) = µ⁺(E) − µ⁻(E),  |µ|(E) ≥ |µ(E)|,  ∀ E ∈ F.
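For the signed measure of Example 2.29, a Hahn decomposition is simply A = {f ≥ 0}, B = {f < 0}, and µ⁺, µ⁻, |µ| have densities f⁺, f⁻, |f|. A discrete sketch (the measure and the density are arbitrary data):

```python
# For the signed measure mu(A) = integral_A f dnu of Example 2.29, a Hahn
# decomposition is A = {f >= 0}, B = {f < 0}; then the upper and lower
# variations have densities f+ and f-, and |mu| has density |f|.
Omega = [0, 1, 2, 3, 4]
nu = {w: 0.2 for w in Omega}                  # a finite measure (uniform here)
f = {0: 1.5, 1: -2.0, 2: 0.0, 3: 3.0, 4: -0.5}

A = [w for w in Omega if f[w] >= 0]           # positive set
B = [w for w in Omega if f[w] < 0]            # negative set

def mu(E):
    return sum(f[w] * nu[w] for w in E)

mu_plus = mu(A)                                # upper variation of Omega
mu_minus = -mu(B)                              # lower variation of Omega
total_var = sum(abs(f[w]) * nu[w] for w in Omega)

assert abs(mu(Omega) - (mu_plus - mu_minus)) < 1e-12
assert abs(total_var - (mu_plus + mu_minus)) < 1e-12
```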

Also, it is easy to show that the upper, lower, and total variations of a signed measure µ are measures.

Definition 2.31. A signed measure µ on (Ω, F) is said to be finite if |µ(E)| < +∞ for any E ∈ F; µ is said to be σ-finite if for any E ∈ F, there exists a sequence {E_i}_{i=1}^∞ ⊂ F such that |µ(E_i)| < +∞ for every i ∈ N and E ⊂ ∪_{i=1}^∞ E_i.

Let µ be a σ-finite signed measure on (Ω, F). If f : (Ω, F) → H is Bochner integrable w.r.t. |µ| = µ⁺ + µ⁻, then f is said to be Bochner integrable w.r.t. µ. The Bochner integral of f in Ω w.r.t. µ is defined by

∫_Ω f dµ ≜ ∫_Ω f dµ⁺ − ∫_Ω f dµ⁻.

For any E ∈ F, we define

∫_E f dµ = ∫_Ω χ_E f dµ.

The properties of the Bochner integral w.r.t. the above signed measure µ are similar to those of the usual (Bochner) integral, except that all conditions such as "null measure sets" or "almost everywhere" w.r.t. µ should be understood w.r.t. |µ|.


Definition 2.32. Let µ and ν be two signed measures on (Ω, F). We say that µ is absolutely continuous w.r.t. ν, denoted by µ ≪ ν, if µ(E) = 0 for every E ∈ F with |ν|(E) = 0.

We recall the following fundamental result (known as the Radon-Nikodým Theorem) concerning absolute continuity.

Theorem 2.33. Let µ and ν be two σ-finite signed measures on (Ω, F) with µ ≪ ν. Then there exists a real valued measurable function f on (Ω, F) such that µ(A) = ∫_A f dν for every A ∈ F.

2.3.2 Distribution, Density and Characteristic Functions

If det Q > 0, then we say that X has a normal distribution with parameter (λ, Q), denoted by X ∼ N(λ, Q). We call X a Gaussian random variable (valued in Rᵐ) if X has a normal distribution or X is constant.

Remark 2.35. Gaussian random variables appear in many physical models. Indeed, under some mild conditions, the mean of many independent and identically distributed random variables is close to a Gaussian random variable (we refer to [38] for more details).

For Gaussian random variables, one has the following result.

Theorem 2.36. Let X = (X1, · · · , Xm) be a Gaussian random variable valued in Rᵐ. Then X1, · · · , Xm are independent if and only if the matrix Q (in (2.17)) is diagonal.

In the rest of this subsection, we assume that H and G are two Hilbert spaces. For any x ∈ H and y ∈ G, the tensor product x ⊗ y (of x and y) is a bounded linear operator from H to G, defined by

(x ⊗ y)h = ⟨h, x⟩_H y,  ∀ h ∈ H.   (2.18)

Let Q ∈ L(H; G). We call Q a trace-class operator from H to G if

trQ ≜ sup{ ∑_{µ∈Λ} |⟨Qh_µ, g_µ⟩_G| : {h_µ}_{µ∈Λ} and {g_µ}_{µ∈Λ} are respectively orthonormal bases of H and G } < ∞.

Denote by L₁(H; G) (or simply L₁(H) if H = G) the space of trace-class operators, which is a subset of the space of all compact operators (from H to G). It is well known that L₁(H; G) equipped with the trace norm trQ is a Banach space, and the following result holds (e.g. [299, Chapter III] and [355, Chapter 6]):


Proposition 2.37. Let H be a separable Hilbert space. Then, a nonnegative operator Q ∈ L(H) is of trace-class if and only if for an orthonormal basis {h_i}_{i=1}^∞ of H,

∑_{i=1}^∞ ⟨Qh_i, h_i⟩_H < ∞.

Moreover, in this case trQ = ∑_{i=1}^∞ ⟨Qh_i, h_i⟩_H.

For any X, Y ∈ L²_F(Ω; H), we define the covariance operator of X and Y by

Cov(X, Y) ≜ E[(X − EX) ⊗ (Y − EY)].   (2.19)

It is easy to show that Cov(X, Y) ∈ L₁(H). In particular,

Var X ≜ Cov(X, X)   (2.20)

is called the variance operator of X. It is not difficult to show that for a normal distribution N(λ, Q) (defined above), λ is the mean and Q is the variance operator. When H is finite dimensional, the covariance/variance operator is usually called the covariance/variance matrix.

For any H-valued, strongly measurable random variable X and ξ ∈ H, when e^{i⟨ξ, X(·)⟩_H} is integrable w.r.t. P (which is always the case when H is a real Hilbert space), write

φ_X(ξ) = E e^{i⟨ξ, X⟩_H} = ∫_H e^{i⟨ξ, x⟩_H} dP_X(x).   (2.21)

We call φ_X(·) the characteristic function of X. It can be regarded as the Fourier transform of the probability measure induced by X(·). The characteristic function is unique in the following sense.

Theorem 2.38. Assume that X and Y are two H-valued, strongly measurable random variables. If φ_X(·) = φ_Y(·), then P_X = P_Y.

We will need the following notion.

Definition 2.39. An H-valued random variable X is said to be Gaussian, denoted by X ∼ N(λ, Q), if for some λ ∈ H and a nonnegative symmetric operator Q ∈ L₁(H),

φ_X(ξ) = exp{ i⟨λ, ξ⟩_H − (1/2)⟨Qξ, ξ⟩_H },  ∀ ξ ∈ H.   (2.22)

If an H-valued random variable X ∼ N(λ, Q), then it is easy to check that EX = λ and Var X = Q.
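In H = Rᵐ the tensor product (2.18) is the outer product, so the variance operator (2.19)-(2.20) built from empirical data is the usual covariance matrix, and the trace of Proposition 2.37 equals E|X − EX|². A small sketch with made-up samples:

```python
# In R^m the tensor product (2.18) is the outer product, so the variance
# operator (2.19)-(2.20) is the usual covariance matrix, and the trace of
# Proposition 2.37 equals E|X - EX|^2. Sample data are arbitrary.
samples = [[1.0, 2.0], [3.0, 0.0], [2.0, 4.0], [0.0, 2.0]]  # X(omega) in R^2
n, m = len(samples), 2

mean = [sum(x[i] for x in samples) / n for i in range(m)]   # EX

Q = [[0.0] * m for _ in range(m)]   # Var X = E[(X-EX) tensor (X-EX)]
for x in samples:
    d = [x[i] - mean[i] for i in range(m)]
    for i in range(m):
        for j in range(m):
            Q[i][j] += d[i] * d[j] / n

trace = sum(Q[i][i] for i in range(m))          # sum_i <Q h_i, h_i>
second_moment = sum(sum((x[i] - mean[i]) ** 2 for i in range(m))
                    for x in samples) / n       # E|X - EX|^2
assert abs(trace - second_moment) < 1e-12
```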


2.3.3 Vector Measures

The results of this subsection can be found in [74]. In this subsection, we fix a finite measure ν on (Ω, F). We begin with the following definition.

Definition 2.40. A function µ : F → H is called a vector measure (on (Ω, F)) if µ(∅) = 0 and µ is countably additive, i.e., for any sequence {A_k}_{k=1}^∞ of mutually disjoint sets in F,

µ( ∪_{k=1}^∞ A_k ) = ∑_{k=1}^∞ µ(A_k)  in H.

An example of vector measures is as follows.

Example 2.41. For p ∈ [1, ∞), define µ : F → L^p(Ω, F, ν) by µ(A) = χ_A(·), ∀ A ∈ F. It is easy to see that µ is a vector measure on (Ω, F).

Definition 2.42. Let µ be an (H-valued) vector measure on (Ω, F). The variation of µ is the nonnegative function |µ|(·) on F defined by

|µ|(A) = sup{ ∑_{i=1}^k |µ(A_i)|_H : A = ∪_{i=1}^k A_i for some k ∈ N and mutually disjoint sets A1, · · · , A_k ∈ F },  ∀ A ∈ F.

If |µ|(Ω) < +∞, then we call µ a vector measure of bounded variation. It is easy to check that the vector measure µ given in Example 2.41 is of bounded variation.

Theorem 2.43. Let µ be a vector measure of bounded variation on (Ω, F). Then the variation |µ| is a finite measure on (Ω, F).

The following result is sometimes useful.

Proposition 2.44. If g ∈ L¹(Ω, F, ν; H) and µ(A) = ∫_A g dν for all A ∈ F, then µ is a vector measure of bounded variation on (Ω, F), and

|µ|(A) = ∫_A |g|_H dν,  ∀ A ∈ F.

Definition 2.45. An (H-valued) vector measure µ on (Ω, F) is called ν-continuous, denoted by µ ≪ ν, if for any ε > 0, there is a δ > 0 such that |µ(A)|_H < ε whenever A ∈ F satisfies ν(A) < δ.

The result below gives a criterion for ν-continuity of a vector measure.


Theorem 2.46. An (H-valued) vector measure µ on (Ω, F) is ν-continuous if and only if µ(A) = 0 for any A ∈ F with ν(A) = 0.

We shall need the following notion.

Definition 2.47. We say that H has the Radon-Nikodým property w.r.t. (Ω, F, ν) if for each ν-continuous (H-valued) vector measure µ of bounded variation on (Ω, F), there exists a g ∈ L¹(Ω, F, ν; H) such that

µ(A) = ∫_A g dν,  ∀ A ∈ F.

H is said to have the Radon-Nikodým property if it has the Radon-Nikodým property w.r.t. every finite measure space.

It is well known that there exist Banach spaces for which the Radon-Nikodým property fails. The following result is due to R. S. Phillips.

Theorem 2.48. Reflexive Banach spaces (and hence Hilbert spaces) have the Radon-Nikodým property.

2.3.4 Conditional Expectation

In this subsection, we fix a probability measure P on (Ω, F) and a function f ∈ L¹_F(Ω; H) ≜ L¹(Ω, F, P; H).

Definition 2.49. Let B ∈ F with P(B) > 0. For any event A ∈ F, put

P(A | B) = P(A ∩ B) / P(B).

Then P(· | B) is a probability measure on (Ω, F), called the conditional probability given the event B, and denoted by P_B(·). For any given A ∈ F, P(A | B) is called the conditional probability of A given B. The conditional expectation of f given the event B is defined by

E(f | B) = ∫_Ω f dP_B = (1/P(B)) ∫_B f dP.

Clearly, the conditional expectation of f given the event B represents the average value of f on B. In many concrete problems, it is not enough to consider the conditional expectation given only one event. Instead, it is quite useful to define the conditional expectation as a suitable random variable. For example, when considering the two conditional expectations E(f | B) and E(f | B^c) simultaneously, we simply define it as the function E(f | B)χ_B(ω) + E(f | B^c)χ_{B^c}(ω) rather than regarding it as two numbers. Before considering the general setting, we begin with the following special case.


Definition 2.50. Let {B_i}_{i=1}^∞ ⊂ F be a sequence of mutually disjoint sets in Ω such that P(B_i) > 0 for all i = 1, 2, · · · and Ω = ∪_{i=1}^∞ B_i. Put J = σ{B1, B2, · · · }. Then the J-measurable function

E(f | J)(ω) ≜ ∑_{k=1}^∞ E(f | B_k) χ_{B_k}(ω)

is called the conditional expectation of f given the σ-field J.

Clearly, when J = σ{B1, B2, · · · }, the conditional expectation E(f | J) of f takes its average value on every B_k. Also, it is easy to check that

∫_{B_k} E(f | J) dP = ∫_{B_k} f dP,  k = 1, 2, · · · .   (2.23)

In the rest of this subsection, we assume that H has the Radon-Nikodým property.

Now, let us consider the more general case where J is a given sub-σ-field of F. Stimulated by (2.23), one defines a set function on J by

µ(B) ≜ ∫_B f dP,  ∀ B ∈ J.

It is easy to see that µ is an (H-valued) vector measure of bounded variation on (Ω, J), and µ ≪ P. Hence, one may find a (unique) function in L¹_J(Ω; H) ≜ L¹(Ω, J, P; H), denoted by E(f | J), such that

∫_B E(f | J) dP = ∫_B f dP,  ∀ B ∈ J.   (2.24)

This leads to the following notion.

Definition 2.51. The (H-valued) function E(f | J), determined by (2.24), is called the conditional expectation of f given the σ-field J.

We collect some basic properties of the conditional expectation as follows.

Theorem 2.52. Let J be a sub-σ-field of F. It holds that:
1) The map E(· | J) : L¹_F(Ω; H) → L¹_J(Ω; H) is linear and continuous;
2) E(a | J) = a, P|_J-a.s., ∀ a ∈ H;
3) If f1, f2 ∈ L¹_F(Ω) with f1 ≥ f2, then

E(f1 | J) ≥ E(f2 | J),  P|_J-a.s.;

4) Let m, n, k ∈ N, f1 ∈ L¹_J(Ω; R^{m×n}) and f2 ∈ L¹_F(Ω; R^{n×k}) be such that f1 f2 ∈ L¹_F(Ω; R^{m×k}). Then

E(f1 f2 | J) = f1 E(f2 | J),  P|_J-a.s.


In particular, E(f1 | J) = f1, P|_J-a.s. Also, for any f3 ∈ L¹_F(Ω; R^{m×n}),

E(E(f3 | J) f2 | J) = E(f3 | J) E(f2 | J),  P|_J-a.s.;

5) If f is independent of J, then

E(f | J) = Ef,  P|_J-a.s.;

6) Let J′ be a sub-σ-field of J. Then

E(E(f | J) | J′) = E(E(f | J′) | J) = E(f | J′),  P|_{J′}-a.s.;

7) (Jensen's inequality) Let ϕ : H → R be a convex function such that ϕ(f) ∈ L¹_F(Ω). Then

ϕ(E(f | J)) ≤ E(ϕ(f) | J),  P|_J-a.s.

In particular, for any p ≥ 1, we have

|E(f | J)|_H^p ≤ E(|f|_H^p | J),  P|_J-a.s.,

provided that E|f|_H^p exists.

Remark 2.53. Given two sub-σ-fields J^k (k = 1, 2) of F, in general,

E(E(f | J¹) | J²) ≠ E(E(f | J²) | J¹) ≠ E(f | J¹ ∩ J²),  P|_{J¹∩J²}-a.s.

Proposition 2.54. Let {F_α}_{α∈Λ} be a family of sub-σ-fields of F, where Λ is a given index set. Then {E(f | F_α)}_{α∈Λ} is uniformly integrable.

Proof: By Theorem 2.22, for any s > 0,

P(|E(f | F_α)|_H ≥ s) ≤ s^{−1} E|E(f | F_α)|_H ≤ s^{−1} E|f|_H,  ∀ α ∈ Λ.

Hence, using Jensen's inequality (Theorem 2.52, 7)) and (2.24),

∫_{{|E(f|F_α)|_H ≥ s}} |E(f | F_α)|_H dP ≤ ∫_{{|E(f|F_α)|_H ≥ s}} |f|_H dP
≤ √s P(|E(f | F_α)|_H ≥ s) + ∫_{{|f|_H ≥ √s}} |f|_H dP
≤ (1/√s) E|f|_H + ∫_{{|f|_H ≥ √s}} |f|_H dP,

which yields the uniform integrability of {E(f | F_α)}_{α∈Λ}.
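For a finite partition as in Definition 2.50, the conditional expectation is a piecewise average, and both the defining relation (2.23)-(2.24) and the tower property (Theorem 2.52, item 6) can be verified directly. A sketch on an eight-point space (the partition and f are arbitrary choices):

```python
# Conditional expectation w.r.t. a finite partition (Definition 2.50):
# E(f|J) is the piecewise average of f over the cells, and (2.23) holds.
Omega = list(range(8))
P = {w: 1 / 8 for w in Omega}                  # uniform probability
f = {w: float(w * w) for w in Omega}           # an integrable random variable

partition = [[0, 1], [2, 3, 4], [5, 6, 7]]     # cells B_1, B_2, B_3

def cond_exp(g, cells):
    """E(g | sigma(cells)) as a function on Omega (piecewise averages)."""
    out = {}
    for B in cells:
        avg = sum(g[w] * P[w] for w in B) / sum(P[w] for w in B)
        for w in B:
            out[w] = avg
    return out

g = cond_exp(f, partition)
# Defining relation (2.23): integrals over every cell B_k agree.
for B in partition:
    assert abs(sum((g[w] - f[w]) * P[w] for w in B)) < 1e-12

# Tower property (Theorem 2.52, item 6) for a coarser sub-sigma-field J':
coarse = [[0, 1, 2, 3, 4], [5, 6, 7]]          # each cell is a union of B_k's
h = cond_exp(cond_exp(f, partition), coarse)
hc = cond_exp(f, coarse)
for w in Omega:
    assert abs(h[w] - hc[w]) < 1e-9
```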


2.4 A Riesz-Type Representation Theorem

In this section, we shall prove a Riesz-type representation theorem, which will play important roles in the study of both controllability and optimal control problems for stochastic evolution equations.

Let (X1, M1, µ1) and (X2, M2, µ2) be two finite measure spaces, and let H be a Banach space. Let M be a sub-σ-field of M1 × M2, and for any 1 ≤ p, q < ∞, let

L^p_M(X1; L^q(X2; H)) = { φ : X1 × X2 → H | φ(·) is strongly M-measurable w.r.t. µ1 × µ2 and ∫_{X1} ( ∫_{X2} |φ(x1, x2)|_H^q dµ2 )^{p/q} dµ1 < ∞ }.   (2.25)

Likewise, let

L^∞_M(X1; L^q(X2; H)) = { φ : X1 × X2 → H | φ(·) is strongly M-measurable w.r.t. µ1 × µ2 and ess sup_{x1∈X1} ( ∫_{X2} |φ(x1, x2)|_H^q dµ2 )^{1/q} < ∞ }.   (2.26)

For any ε > 0, there exists a nonnegative function φ ∈ L^p_M(X1; L^q(X2)) such that

0 < |φ|_{p,q} ≤ 1,  |g|_{p′,q′;H*} − ε ≤ ∫_{X1×X2} |g|_{H*} φ dµ1 dµ2.

Further, choose an h_i ∈ H with |h_i|_H = 1 such that ⟨h_i, h*_i⟩_{H,H*} is nonnegative and

|h*_i|_{H*} − ε/|φ|_{1,1} ≤ ⟨h_i, h*_i⟩_{H,H*},

and define

f = ∑_{i=1}^∞ χ_{E_i} φ h_i ∈ L^p_M(X1; L^q(X2; H)).

It is easy to check that |f(·)|_H = φ(·) in X1 × X2. Hence, |f|_{p,q;H} = |φ|_{p,q} ≤ 1, and


∫_{X1×X2} ⟨f(x1, x2), g(x1, x2)⟩_{H,H*} dµ1 dµ2
= ∫_{X1×X2} φ ∑_{i=1}^∞ ⟨h_i, h*_i⟩_{H,H*} χ_{E_i} dµ1 dµ2
≥ ∫_{X1×X2} φ ∑_{i=1}^∞ ( |h*_i|_{H*} − ε/|φ|_{1,1} ) χ_{E_i} dµ1 dµ2
≥ ∫_{X1×X2} |g|_{H*} φ dµ1 dµ2 − (ε/|φ|_{1,1}) ∫_{X1×X2} φ dµ1 dµ2 ≥ |g|_{p′,q′;H*} − 2ε.

This gives |F_g|_{L^p_M(X1;L^q(X2;H))*} ≥ |g|_{p′,q′;H*}, and therefore

|F_g|_{L^p_M(X1;L^q(X2;H))*} = |g|_{p′,q′;H*},

whenever g ∈ L^{p′}_M(X1; L^{q′}(X2; H*)) is countably valued.

For the general case, by Proposition 2.12, we may choose a sequence {g_j}_{j=1}^∞ ⊂ L^{p′}_M(X1; L^{q′}(X2; H*)) such that each g_j is countably valued and

lim_{j→∞} |g_j − g|_{p′,q′;H*} = 0.   (2.44)

We have obtained that

|F_{g_j}|_{L^p_M(X1;L^q(X2;H))*} = |g_j|_{p′,q′;H*},   (2.45)

and by virtue of (2.42),

|F_{g_j} − F_g|_{L^p_M(X1;L^q(X2;H))*} = |F_{g_j−g}|_{L^p_M(X1;L^q(X2;H))*} ≤ |g_j − g|_{p′,q′;H*}.

Therefore, noting (2.44) and (2.45), we end up with

|F_g|_{L^p_M(X1;L^q(X2;H))*} = lim_{j→∞} |F_{g_j}|_{L^p_M(X1;L^q(X2;H))*} = lim_{j→∞} |g_j|_{p′,q′;H*} = |g|_{p′,q′;H*}.



Hence, L^{p′}_M(X1; L^{q′}(X2; H*)) is isometrically isomorphic to H.

Step 2. We show that the subspace H (defined in (2.43)) is equal to L^p_M(X1; L^q(X2; H))*. To this end, for F ∈ L^p_M(X1; L^q(X2; H))*, we define

G(E)(h) = F(hχ_E),  ∀ E ∈ M, h ∈ H.   (2.46)

By


|F(hχ_E)| ≤ |F|_{L^p_M(X1;L^q(X2;H))*} |hχ_E|_{p,q;H} ≤ |F|_{L^p_M(X1;L^q(X2;H))*} |h|_H |χ_E|_{p,q},

we see that G : M → H* and it is countably additive. Let E1, · · · , E_k (k ∈ N) be mutually disjoint sets from M such that X1 × X2 = ∪_{i=1}^k E_i. Then G(E_i) ∈ H*. For any ε > 0, one can find an h_i ∈ H with |h_i|_H = 1 such that G(E_i)(h_i) is nonnegative and

|G(E_i)|_{H*} − ε/k < G(E_i)(h_i).

It follows that

∑_{i=1}^k |G(E_i)|_{H*} − ε
0 on each A_j. It follows from (2.51) that

|G|(A) ≤ k_j (µ1 × µ2)(A),  ∀ A ∈ M with A ⊂ A_j.   (2.52)

For each j ∈ N, define a linear functional ℓ_j on the subspace S of M-simple functions in L^p_M(X1; L^q(X2; H)) as follows:

ℓ_j(f) = ∑_{i=1}^{i0} G(E_i ∩ A_j)(x_i),

where i0 ∈ N, and f = ∑_{i=1}^{i0} x_i χ_{E_i} for some x_i ∈ H and mutually disjoint sets E1, · · · , E_{i0} from M so that ∪_{i=1}^{i0} E_i = X1 × X2. Then, by (2.52),

|ℓ_j(f)| = | ∑_{i=1}^{i0} G(E_i ∩ A_j)(x_i) | ≤ ∑_{i=1}^{i0} |G(E_i ∩ A_j)|_{H*} |x_i|_H ≤ ∑_{i=1}^{i0} |G|(E_i ∩ A_j) |x_i|_H
≤ k_j ∑_{i=1}^{i0} (µ1 × µ2)(E_i ∩ A_j) |x_i|_H ≤ k_j |f|_{L¹(X1×X2;H)}
≤ k_j µ1(X1)^{1/p′} µ2(X2)^{1/q′} |f|_{L^p_M(X1;L^q(X2;H))}.

Thus, ℓ_j is a bounded linear functional on S. By the Hahn-Banach Theorem, ℓ_j can be extended to a bounded linear functional on L^p_M(X1; L^q(X2; H)) (the extension is still denoted by ℓ_j). Hence there exists a g_j ∈ L^{p′}_M(X1; L^{q′}(X2; H*)) such that

ℓ_j(f) = ∫_{X1×X2} ⟨f, g_j⟩_{H,H*} dµ1 dµ2,  ∀ f ∈ L^p_M(X1; L^q(X2; H)).

We have

G(E ∩ A_j)(x) = ℓ_j(xχ_E) = ∫_E ⟨x, g_j⟩_{H,H*} dµ1 dµ2,  ∀ x ∈ H, E ∈ M.

Since g_j ∈ L^{p′}_M(X1; L^{q′}(X2; H*)) is Bochner integrable, we see that

G(E ∩ A_j)(x) = ( ∫_E g_j dµ1 dµ2 )(x),  ∀ x ∈ H, E ∈ M.

Consequently,

G(E ∩ A_j) = ∫_E g_j dµ1 dµ2,  ∀ E ∈ M.   (2.53)

Noting that A_j ∈ M, and therefore replacing E in (2.53) by E ∩ A_j, we see that

G(E ∩ A_j) = ∫_{E∩A_j} g_j dµ1 dµ2,  ∀ E ∈ M, j ∈ N.

Define g̃ : X1 × X2 → H* by g̃(x1, x2) = g_j(x1, x2) if (x1, x2) ∈ A_j. It is obvious that g̃ is strongly M-measurable w.r.t. µ1 × µ2. Moreover, for each E ∈ M and all k ∈ N, it holds that

G( E ∩ (∪_{j=1}^k A_j) ) = ∫_{E∩(∪_{j=1}^k A_j)} g̃ dµ1 dµ2 = ∫_E g̃ χ_{∪_{j=1}^k A_j} dµ1 dµ2.   (2.54)

Consequently,


G(E) = lim_{k→∞} ∫_E g̃ χ_{∪_{j=1}^k A_j} dµ1 dµ2,  ∀ E ∈ M.   (2.55)

It remains to show that g̃ ∈ L¹_M(X1 × X2; H*). Since |G|(X1 × X2) is finite, by (2.54) and using Proposition 2.44, we see that

∫_{X1×X2} |g̃|_{H*} χ_{∪_{j=1}^k A_j} dµ1 dµ2 ≤ |G|(X1 × X2),  ∀ k ∈ N.

By the Monotone Convergence Theorem, |g̃|_{H*} ∈ L¹_M(X1 × X2). Hence g̃ is Bochner integrable. By (2.55) and Theorem 2.18, we obtain (2.50), which proves the Radon-Nikodým property of H* w.r.t. (X1 × X2, M, µ1 × µ2).

2.5 A Sequential Banach-Alaoglu-Type Theorem in the Operator Version

The classical Banach-Alaoglu Theorem (e.g. [60, p. 130]) states that the closed unit ball of the dual space of a normed vector space is compact in the weak* topology. This theorem has an important special (sequential) version, asserting that the closed unit ball of the dual space of a separable normed vector space (resp. the closed unit ball of a reflexive Banach space) is sequentially compact in the weak* topology (resp. the weak topology).

In this section, we shall present a sequential Banach-Alaoglu-type theorem for uniformly bounded linear operators (between suitable Banach spaces). As we shall see later, this result will play crucial roles in the study of the optimal control problems for stochastic evolution equations.

Let X and Y be two Banach spaces. Let {y_n}_{n=1}^∞ ⊂ Y, y ∈ Y, {z_n}_{n=1}^∞ ⊂ Y* and z ∈ Y*. In the sequel, we denote by

(w)-lim_{n→∞} y_n = y  in Y

when {y_n}_{n=1}^∞ converges weakly to y in Y; and by

(w*)-lim_{n→∞} z_n = z  in Y*

when {z_n}_{n=1}^∞ converges weakly* to z in Y*.

Let us first show the following simple result.

Lemma 2.61. Let X be a separable Banach space and let Y be a reflexive Banach space. Assume that {G_n}_{n=1}^∞ ⊂ L(X; Y) is a sequence of bounded linear operators such that {G_n x}_{n=1}^∞ is bounded in Y for any given x ∈ X. Then, there exist a subsequence {G_{n_k}}_{k=1}^∞ and a G ∈ L(X; Y) satisfying

(w)-lim_{k→∞} G_{n_k} x = Gx  in Y,  ∀ x ∈ X,


(w*)-lim_{k→∞} G*_{n_k} y* = G*y*  in X*,  ∀ y* ∈ Y*,

and

|G|_{L(X;Y)} ≤ sup_{n∈N} |G_n|_{L(X;Y)} (< ∞).   (2.56)

Proof: Noting that X is separable, we can find a countable subset {x_i}_{i=1}^∞ of X such that {x1, x2, · · · } is dense in X. Since {G_n x1}_{n=1}^∞ is bounded in Y and Y is reflexive, there exists a subsequence {n_k^{(1)}}_{k=1}^∞ ⊂ {n}_{n=1}^∞ such that (w)-lim_{k→∞} G_{n_k^{(1)}} x1 = y1. Since the sequence {G_{n_k^{(1)}} x2}_{k=1}^∞ is still bounded in Y, one can find a subsequence {n_k^{(2)}}_{k=1}^∞ ⊂ {n_k^{(1)}}_{k=1}^∞ such that (w)-lim_{k→∞} G_{n_k^{(2)}} x2 = y2. Generally, for each m ∈ N, we can find a subsequence {n_k^{(m+1)}}_{k=1}^∞ ⊂ {n_k^{(m)}}_{k=1}^∞ such that (w)-lim_{k→∞} G_{n_k^{(m+1)}} x_{m+1} = y_{m+1}. We now use the classical diagonalisation argument. Write n_m = n_m^{(m)}, m = 1, 2, · · · . Then, {G_{n_m} x_i}_{m=1}^∞ converges weakly to y_i in Y.

Let us define an operator G (from X to Y) as follows: For any x ∈ X,

Gx = lim_{k→∞} y_{i_k} = lim_{k→∞} ( (w)-lim_{m→∞} G_{n_m} x_{i_k} ),

where {x_{i_k}}_{k=1}^∞ is any subsequence of {x_i}_{i=1}^∞ such that lim_{k→∞} x_{i_k} = x in X.

We shall show below that G ∈ L(X; Y).

First, we show that G is well-defined. By the Principle of Uniform Boundedness, it is clear that {G_n}_{n=1}^∞ is uniformly bounded in L(X; Y). We choose an M > 0 such that |G_n|_{L(X;Y)} ≤ M for all n ∈ N. Since {x_{i_k}}_{k=1}^∞ is a Cauchy sequence in X, for any ε > 0, there is an N > 0 such that |x_{i_{k1}} − x_{i_{k2}}|_X < ε/M whenever k1, k2 > N. Hence, |G_n(x_{i_{k1}} − x_{i_{k2}})|_Y < ε for any n ∈ N. Then, by the weak sequential lower semicontinuity of the norm (of a Banach space), we deduce that

|y_{i_{k1}} − y_{i_{k2}}|_Y ≤ liminf_{m→∞} |G_{n_m}(x_{i_{k1}} − x_{i_{k2}})|_Y < ε,

which implies that {y_{i_k}}_{k=1}^∞ is a Cauchy sequence in Y. Therefore, lim_{k→∞} y_{i_k} exists in Y. On the other hand, assume that there is another subsequence {x′_{i_k}}_{k=1}^∞ ⊂ {x_i}_{i=1}^∞ such that lim_{k→∞} x′_{i_k} = x. Let y′_{i_k} be the corresponding weak limit of G_{n_m} x′_{i_k} in Y as m → ∞. Then,

| lim_{k→∞} y_{i_k} − lim_{k→∞} y′_{i_k} |_Y ≤ liminf_{k→∞} liminf_{m→∞} |G_{n_m}(x_{i_k} − x′_{i_k})|_Y ≤ M lim_{k→∞} |x_{i_k} − x′_{i_k}|_X ≤ M lim_{k→∞} |x_{i_k} − x|_X + M lim_{k→∞} |x − x′_{i_k}|_X = 0.

Hence, G is well-defined.

k→∞

2.5 A Sequential Banach-Alaoglu-Type Theorem in the Operator Version

57

Next, we prove that G is a bounded linear operator. For any x ∈ X and the above sequence {xik }∞ k=1 , it follows that |Gx|Y = lim |yik |Y ≤ lim k→∞

lim |Gnm xik |Y ≤ M lim |xik |X ≤ M |x|X .

k→∞ m→∞

k→∞

Hence, G is a bounded operator. Further, it is easy to see that G is linear. Therefore, G ∈ L(X; Y ). Also, for any x ∈ X and y ∗ ∈ Y ∗ , it holds that (x, G∗ y ∗ )X,X ∗ = (Gx, y ∗ )Y,Y ∗ = lim (Gnk x, y ∗ )Y,Y ∗ = lim (x, G∗nk y ∗ )X,X ∗ . k→∞

Hence,

k→∞

(w*)- lim G∗nk y ∗ = G∗ y ∗ in X ∗ . k→∞

Finally, from the above proof, (2.56) is obvious. This completes the proof of Lemma 2.61. Remark 2.62. Lemma 2.61 is not a direct consequence of the classical sequential Banach-Alaoglu Theorem. Indeed, the Banach space L(X; Y ) is neither reflexive nor separable even if both X and Y are (infinite dimensional) separable Hilbert spaces (See Problem 99 in [135]). Let (Ω1 , M1 , µ1 ) and (Ω2 , M2 , µ2 ) be two finite measure spaces. Let M be a sub-σ-field of M1 × M2 . For any 1 ≤ p, q ≤ ∞, we define a Banach space LpM (Ω1 ; Lq (Ω2 ; X))( as that in (2.25)–(2.28). Fix any r1 , r2 , r)3 , r4 ∈ r1 r3 r2 r4 [1, ∞], ( wer3 denote rby Lpd L ) M (Ω1 ; L (Ω2 ; X)); LM (Ω1 ; L (Ω2 ; Y )) (resp. Lpd X; LM (Ω1 ; L 4 (Ω; Y )) ) the vector space of all bounded, pointwise de1 3 fined linear operators L from LrM (Ω1 ; Lr2 (Ω2 ; X)) (resp. X) to LrM (Ω1 ; r4 L (Ω2 ; Y )), i.e., for a.e. (ω , ω ) ∈ Ω × Ω , there exists an L(ω , ω 1) 2 1 2 1 2) ∈ ( r1 L(X; Y ) satisfying that Lu(·) (ω , ω ) = L(ω , ω )u(ω , ω ), ∀ u(·) ∈ L 1 2 1 2 1 2 M (Ω1 ; ( ) Lr2 (Ω2 ; X)) (resp. Lx (ω1 , ω2 ) = ( L(ω1 , ω2 )x, ∀ x ∈ X). In a similar ) 1 2 way, one can define the spaces Lpd LrM (Ω1 ; X); LrM (Ω1 ; Lr3 (Ω2 ; Y )) and 1 ( r1 ) 2 Lpd LM1 (Ω1 ; X); LrM (Ω2 ; Y ) , etc. In the sequel, in the case that no confu2 sion would occur, sometimes we identify L with L(·, ·). We now show the following sequential Banach-Alaoglu-type theorem in the operator version. Theorem 2.63. Let X be a separable Banach space, and let Y be a reflexive Banach space. Let 1 ≤ p1 , p2 < ∞ and 1 < q1 , q2 < ∞, and 1 let LpM (Ω1 ; Lp2 (Ω2 ; C)) be separable. Let Condition 2.1 hold with X1 , X2 , p and q being replaced respectively by Ω1 , Ω2 , q1 and q2). Assume that ( p1 q1 p2 q2 {Gn }∞ are uniformn=1 ⊂ Lpd LM (Ω1 ; L (Ω2 ; X)); LM (Ω1 ; L (Ω2 ; Y )) ∞ ly bounded. 
Then, there exist a subsequence {G } ⊂ {G }∞ n n n=1 and a k k=1 ( p1 ) q1 p2 q2 G ∈ Lpd LM (Ω1 ; L (Ω2 ; X)); LM (Ω1 ; L (Ω2 ; Y )) such that, for all u(·) ∈ 1 LpM (Ω1 ; Lp2 (Ω2 ; X)),

Gu(·) = (w)-lim_{k→∞} G_{n_k}u(·) in L^{q1}_M(Ω1; L^{q2}(Ω2; Y)).

Moreover,

|G|_{L(L^{p1}_M(Ω1;L^{p2}(Ω2;X)); L^{q1}_M(Ω1;L^{q2}(Ω2;Y)))} ≤ sup_{n∈N} |G_n|_{L(L^{p1}_M(Ω1;L^{p2}(Ω2;X)); L^{q1}_M(Ω1;L^{q2}(Ω2;Y)))}.
Proof: We divide the proof into several steps.

Step 1. Since G_n ∈ L_pd(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))) (for each n ∈ N), for a.e. (ω1, ω2) ∈ Ω1 × Ω2 there exists a G_n(ω1, ω2) ∈ L(X; Y) satisfying, for all u(·) ∈ L^{p1}_M(Ω1; L^{p2}(Ω2; X)),

(G_n u(·))(ω1, ω2) = G_n(ω1, ω2)u(ω1, ω2).    (2.57)

Write

M = sup_{n∈N} |G_n|_{L(L^{p1}_M(Ω1;L^{p2}(Ω2;X)); L^{q1}_M(Ω1;L^{q2}(Ω2;Y)))}.

By Lemma 2.61, we conclude that there exist a bounded linear operator G from L^{p1}_M(Ω1; L^{p2}(Ω2; X)) to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)) and a subsequence {G_{n_k}}_{k=1}^∞ ⊂ {G_n}_{n=1}^∞ such that, for all u(·) ∈ L^{p1}_M(Ω1; L^{p2}(Ω2; X)),

Gu(·) = (w)-lim_{k→∞} G_{n_k}u(·) in L^{q1}_M(Ω1; L^{q2}(Ω2; Y)),    (2.58)

and

|Gu(·)|_{L^{q1}_M(Ω1;L^{q2}(Ω2;Y))} ≤ M |u(·)|_{L^{p1}_M(Ω1;L^{p2}(Ω2;X))}.    (2.59)

We claim that

∑_{i=1}^m f_i G u_i = G(∑_{i=1}^m f_i u_i) = (w)-lim_{k→∞} ∑_{i=1}^m f_i G_{n_k} u_i in L^{q1}_M(Ω1; L^{q2}(Ω2; Y))    (2.60)

and

|∑_{i=1}^m f_i G u_i|_{L^{q1}_M(Ω1;L^{q2}(Ω2;Y))} ≤ M |∑_{i=1}^m f_i u_i|_{L^{p1}_M(Ω1;L^{p2}(Ω2;X))},    (2.61)

where m ∈ N, f_i ∈ L^∞_M(Ω1 × Ω2) and u_i ∈ L^{p1}_M(Ω1; L^{p2}(Ω2; X)), i = 1, 2, ···, m. To show this, write q1′ = q1/(q1 − 1) and q2′ = q2/(q2 − 1). Denote by µ the product measure of µ1 and µ2. By Theorems 2.48 and 2.55, from (2.57)–(2.58), we conclude that for any v(·) ∈ L^{q1′}_M(Ω1; L^{q2′}(Ω2; Y*)),

∫_{Ω1×Ω2} ⟨∑_{i=1}^m f_i(ω1, ω2)(Gu_i)(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ
= ∫_{Ω1×Ω2} ∑_{i=1}^m ⟨(Gu_i)(ω1, ω2), f_i(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ∑_{i=1}^m ⟨(G_{n_k}u_i)(ω1, ω2), f_i(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ∑_{i=1}^m ⟨(f_i G_{n_k}u_i)(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ,    (2.62)

and

lim_{k→∞} ∫_{Ω1×Ω2} ∑_{i=1}^m ⟨(G_{n_k}u_i)(ω1, ω2), f_i(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ∑_{i=1}^m ⟨G_{n_k}(ω1, ω2)u_i(ω1, ω2), f_i(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ⟨G_{n_k}(ω1, ω2)(∑_{i=1}^m f_i(ω1, ω2)u_i(ω1, ω2)), v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ⟨(G_{n_k}(∑_{i=1}^m f_i u_i))(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ
= ∫_{Ω1×Ω2} ⟨(G(∑_{i=1}^m f_i u_i))(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ.    (2.63)

By (2.62)–(2.63) and noting (2.59), we obtain (2.60)–(2.61).

Step 2. It is easy to see that each x ∈ X can be regarded as an element (i.e., χ_{Ω1×Ω2}(·)x) of L^{p1}_M(Ω1; L^{p2}(Ω2; X)). Hence, Gx makes sense and belongs to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)), and G is a bounded linear operator from X to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)).

Write B_X = {x ∈ X | |x|_X ≤ 1}. By the separability of X, it is easy to see that the real valued function sup_{x∈B_X} |(Gx)(·)|_Y is M-measurable. We claim that

sup_{x∈B_X} |(Gx)(ω1, ω2)|_Y < ∞,  a.e. (ω1, ω2) ∈ Ω1 × Ω2.    (2.64)

In the rest of this step, we shall prove (2.64) by a contradiction argument. Assume that (2.64) were not true. By the measurability of the function sup_{x∈B_X} |(Gx)(·)|_Y w.r.t. M, there would exist a set A ∈ M such that µ(A) > 0 and that

sup_{x∈B_X} |(Gx)(ω1, ω2)|_Y = ∞,  for (ω1, ω2) ∈ A.

Let {x_i}_{i=1}^∞ be a sequence in B_X which is dense in B_X. Then

sup_{i∈N} |(Gx_i)(ω1, ω2)|_Y = sup_{x∈B_X} |(Gx)(ω1, ω2)|_Y = ∞,  for (ω1, ω2) ∈ A.

For any n ∈ N and i ∈ N \ {1}, we define a sequence of subsets of Ω1 × Ω2 in the following way:

A_1^{(n)} = {(ω1, ω2) ∈ Ω1 × Ω2 | |(Gx_1)(ω1, ω2)|_Y ≥ n},
A_i^{(n)} = {(ω1, ω2) ∈ (Ω1 × Ω2) \ (∪_{k=1}^{i−1} A_k^{(n)}) | |(Gx_i)(ω1, ω2)|_Y ≥ n}.    (2.65)

It follows from the measurability of |(Gx)(·)|_Y that A_i^{(n)} ∈ M for every i ∈ N and n ∈ N. It is clear that A ⊂ ∪_{i=1}^∞ A_i^{(n)} for any n ∈ N and A_i^{(n)} ∩ A_j^{(n)} = ∅

for i ≠ j. Hence, we see that

∑_{i=1}^∞ µ(A_i^{(n)}) = µ(∪_{i=1}^∞ A_i^{(n)}) ≥ µ(A) > 0,  ∀ n ∈ N.

Thus, for each n ∈ N, there is an N_n ∈ N such that

∑_{i=1}^{N_n} µ(A_i^{(n)}) = µ(∪_{i=1}^{N_n} A_i^{(n)}) ≥ µ(A)/2 > 0.    (2.66)

Write

x^{(n)}(ω1, ω2) = ∑_{i=1}^{N_n} χ_{A_i^{(n)}}(ω1, ω2)x_i.    (2.67)

Obviously, x^{(n)}(·) is an X-valued, strongly M-measurable function. By G ∈ L(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))) and |x^{(n)}(ω1, ω2)|_X ≤ 1 for a.e. (ω1, ω2) ∈ Ω1 × Ω2, and noting (2.60)–(2.61), we find that

|Gx^{(n)}|_{L^{q1}_M(Ω1;L^{q2}(Ω2;Y))} ≤ M {∫_{Ω1} [∫_{Ω2} |x^{(n)}(ω1, ω2)|_X^{p2} dµ2(ω2)]^{p1/p2} dµ1(ω1)}^{1/p1} ≤ M (µ1(Ω1))^{1/p1} (µ2(Ω2))^{1/p2},  ∀ n ∈ N.    (2.68)

Now, let us choose an integer n > (2M/µ(A)) (µ1(Ω1))^{1/p1 + 1/q1′} (µ2(Ω2))^{1/p2 + 1/q2′}. By (2.65)–(2.67), it follows that

|Gx^{(n)}|_{L^{q1}_M(Ω1;L^{q2}(Ω2;Y))}
≥ (µ1(Ω1))^{−1/q1′} (µ2(Ω2))^{−1/q2′} |Gx^{(n)}|_{L^1_M(Ω1;L^1(Ω2;Y))}
= (µ1(Ω1))^{−1/q1′} (µ2(Ω2))^{−1/q2′} ∑_{i=1}^{N_n} ∫_{A_i^{(n)}} |Gx_i|_Y dµ
≥ (µ1(Ω1))^{−1/q1′} (µ2(Ω2))^{−1/q2′} n ∑_{i=1}^{N_n} µ(A_i^{(n)})
≥ (µ1(Ω1))^{−1/q1′} (µ2(Ω2))^{−1/q2′} n µ(A)/2 > M (µ1(Ω1))^{1/p1} (µ2(Ω2))^{1/p2},

which contradicts the inequality (2.68). Therefore, (2.64) holds.

Step 3. By (2.64), for a.e. (ω1, ω2) ∈ Ω1 × Ω2, we may define an operator G(ω1, ω2) ∈ L(X; Y) by

X ∋ x ↦ G(ω1, ω2)x = (Gx)(ω1, ω2).    (2.69)

Further, we introduce the following subspace of L^{p1}_M(Ω1; L^{p2}(Ω2; X)):

𝒳 = {u(·) = ∑_{i=1}^m χ_{A_i}(·)h_i | m ∈ N, A_i ∈ M, h_i ∈ X}.

It is clear that 𝒳 is dense in L^{p1}_M(Ω1; L^{p2}(Ω2; X)). We now define a linear operator G̃ from 𝒳 to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)) by

𝒳 ∋ u(·) = ∑_{i=1}^m χ_{A_i}(·)h_i ↦ (G̃u)(ω1, ω2) = ∑_{i=1}^m χ_{A_i}(ω1, ω2)G(ω1, ω2)h_i.    (2.70)

We claim that

(G̃u)(·) = (Gu)(·),  ∀ u(·) ∈ 𝒳.    (2.71)

Indeed, it follows from (2.57)–(2.58) that for any v(·) ∈ L^{q1′}_M(Ω1; L^{q2′}(Ω2; Y*)) and u(·) of the form in (2.70),

∫_{Ω1×Ω2} ⟨(G̃u)(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ
= ∫_{Ω1×Ω2} ⟨∑_{i=1}^m χ_{A_i}(ω1, ω2)G(ω1, ω2)h_i, v(ω1, ω2)⟩_{Y,Y*} dµ
= ∫_{Ω1×Ω2} ∑_{i=1}^m χ_{A_i}(ω1, ω2)⟨(Gh_i)(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ
= ∑_{i=1}^m ∫_{Ω1×Ω2} ⟨(Gh_i)(ω1, ω2), χ_{A_i}(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∑_{i=1}^m ∫_{Ω1×Ω2} ⟨G_{n_k}(ω1, ω2)h_i, χ_{A_i}(ω1, ω2)v(ω1, ω2)⟩_{Y,Y*} dµ
= lim_{k→∞} ∫_{Ω1×Ω2} ⟨G_{n_k}(ω1, ω2)(∑_{i=1}^m χ_{A_i}(ω1, ω2)h_i), v(ω1, ω2)⟩_{Y,Y*} dµ
= ∫_{Ω1×Ω2} ⟨(Gu)(ω1, ω2), v(ω1, ω2)⟩_{Y,Y*} dµ.

This gives (2.71).

Recall that G is a bounded linear operator from L^{p1}_M(Ω1; L^{p2}(Ω2; X)) to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)). Hence, it is also a bounded linear operator from 𝒳 to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)). By (2.71), we see that G̃ is a bounded linear operator from 𝒳 to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)). Since 𝒳 is dense in L^{p1}_M(Ω1; L^{p2}(Ω2; X)), it is clear that G̃ can be uniquely extended to a bounded linear operator from L^{p1}_M(Ω1; L^{p2}(Ω2; X)) to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)) (we still denote the extension by G̃). By (2.71) again, we conclude that

G̃ = G.    (2.72)

It remains to show that

(G̃u(·))(ω1, ω2) = G(ω1, ω2)u(ω1, ω2),  a.e. (ω1, ω2) ∈ Ω1 × Ω2,    (2.73)

for all u ∈ L^{p1}_M(Ω1; L^{p2}(Ω2; X)). For this purpose, by the fact that 𝒳 is dense in L^{p1}_M(Ω1; L^{p2}(Ω2; X)), we may assume that

u(·) = ∑_{i=1}^∞ χ_{A_i}(·)h_i,    (2.74)

for some A_i ∈ M and h_i ∈ X, i = 1, 2, ··· (note that here we assume neither A_i ∩ A_j = ∅ nor h_i ≠ h_j for i, j = 1, 2, ···). For each n ∈ N, write u^n(·) = ∑_{i=1}^n χ_{A_i}(·)h_i. From (2.74), it is clear that

u(·) = lim_{n→∞} u^n(·) in L^{p1}_M(Ω1; L^{p2}(Ω2; X)).    (2.75)

By (2.61), (2.69)–(2.70) and (2.75), it is easy to see that

(G̃u^n(·))(ω1, ω2) = ∑_{i=1}^n χ_{A_i}(ω1, ω2)G(ω1, ω2)h_i    (2.76)

is a Cauchy sequence in L^{q1}_M(Ω1; L^{q2}(Ω2; Y)). Hence, by (2.76) and recalling that G̃ is a bounded linear operator from L^{p1}_M(Ω1; L^{p2}(Ω2; X)) to L^{q1}_M(Ω1; L^{q2}(Ω2; Y)), we conclude that

(G̃u(·))(ω1, ω2) = ∑_{i=1}^∞ χ_{A_i}(ω1, ω2)G(ω1, ω2)h_i.    (2.77)

Combining (2.74) and (2.77), we obtain (2.73). Finally, by (2.72) and (2.73), the desired result follows. This completes the proof of Theorem 2.63.

Remark 2.64. i) Clearly, the most difficult part in the proof of Theorem 2.63 is to show that the weak limit operator G ∈ L_pd(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))). Note that a simple application of Lemma 2.61 to the operators {G_n}_{n=1}^∞ does not guarantee this point but only that G ∈ L(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))).

ii) By Theorem 2.63, it is easy to deduce that L_pd(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))) is a closed linear subspace of L(L^{p1}_M(Ω1; L^{p2}(Ω2; X)); L^{q1}_M(Ω1; L^{q2}(Ω2; Y))).

2.6 Stochastic Processes

In this section, we recall some elements of the theory of stochastic processes. We fix a probability space (Ω, F, P), a Banach space H and a nonempty index set I. We begin with the following definition:

Definition 2.65. A family of (strongly F-measurable) random variables X = X(·) = {X(t)}_{t∈I} from (Ω, F) → (H, B(H)) is called an (H-valued) stochastic process. For any ω ∈ Ω, the map t ↦ X(t, ω) is called a sample path (of X).

In what follows, we will choose I = [0, T ] with T > 0, or I = [0, +∞), unless otherwise stated. Also, a stochastic process will simply be called a process if no ambiguity arises. An (H-valued) process X(·) is said to be continuous (resp. càdlàg, i.e., right-continuous with left limits) if there is a P-null set N ∈ F such that for any ω ∈ Ω \ N, the sample path X(·, ω) is continuous (resp. càdlàg) in H. In a similar way, one can define right-continuous stochastic processes, etc.


In the case that H is R^m (for some m ∈ N) with the standard topology, for a given stochastic process {X(t)}_{t∈I} we set

F_{t1,···,tj}(x1, ···, xj) = P({X(t1) ≤ x1, ···, X(tj) ≤ xj}),    (2.78)

where j ∈ N, t_i ∈ I, x_i ∈ R^m and X(t_i) ≤ x_i stands for componentwise inequalities (i = 1, ···, j). The functions defined in (2.78) are called the finite dimensional distributions of X. As with the distribution function of a random variable, the finite dimensional distributions F_{t1,···,tj}(x1, ···, xj) of X encode the main probabilistic properties of the process.

When H is a Hilbert space, an (H-valued) stochastic process {X(t)}_{t∈I} is called Gaussian if any finite linear combination ∑_{i=1}^k a_i X(t_i) (with a_i ∈ R, t_i ∈ I (i = 1, ···, k) and k ∈ N) is an H-valued Gaussian random variable.

We shall use the following two notions in the sequel.

Definition 2.66. A class of stochastic processes {X_λ(·)}_{λ∈Λ} is called independent if for any n, ℓ ∈ N, {λ1, ···, λn} ⊂ Λ and {t1, ···, tℓ} ⊂ I, the random variables (X_{λ1}(t1), ···, X_{λ1}(tℓ)), ···, (X_{λn}(t1), ···, X_{λn}(tℓ)) are independent.

Definition 2.67. Two (H-valued) processes X(·) and X̄(·) are said to be stochastically equivalent if P({X(t) = X̄(t)}) = 1 for any t ∈ I. In this case, each is said to be a modification of the other.

Obviously, if X(·) and X̄(·) are stochastically equivalent, then their finite dimensional distributions coincide. However, in general the P-null set N_t = {X(t) ≠ X̄(t)} depends on t. Therefore, the sample paths of X(·) and X̄(·) can differ significantly. The following is a simple example.

Example 2.68. Let Ω = [0, 1], I = [0, 1], P be the Lebesgue measure on [0, 1], X(t, ω) ≡ 0, and

X̄(t, ω) = 0 if t ≠ ω;  X̄(t, ω) = 1 if t = ω.

Then X(·) and X̄(·) are stochastically equivalent. But each sample path X(·, ω) is continuous, and none of the sample paths X̄(·, ω) is continuous. In the present case, we actually have ∪_{t∈I} N_t = Ω.

We call a family of sub-σ-fields {F_t}_{t∈I} of F a filtration if F_{t1} ⊂ F_{t2} for all t1, t2 ∈ I with t1 ≤ t2. For any t ∈ I, we put

F_{t+} = ∩_{s∈(t,+∞)∩I} F_s,  F_{t−} = ∪_{s∈[0,t)∩I} F_s.


If F_{t+} = F_t (resp. F_{t−} = F_t), then {F_t}_{t∈I} is said to be right (resp. left) continuous. In the sequel, for simplicity, we write F = {F_t}_{t∈I} unless we want to emphasize what F_t or I exactly is. We call (Ω, F, F) a filtered measurable space and (Ω, F, F, P) a filtered probability space. We say that (Ω, F, F, P) satisfies the usual condition if (Ω, F, P) is complete, F_0 contains all P-null sets in F, and F is right continuous.

Remark 2.69. For each t ∈ I, the σ-field F_t can be interpreted as the set of information that one has collected up to time t.

In the rest of this section, we fix a filtration F on (Ω, F).

Definition 2.70. Let X(·) be an H-valued process.
1) X(·) is said to be measurable if the map (t, ω) ↦ X(t, ω) is strongly (B(I) × F)/B(H)-measurable;
2) X(·) is said to be F-adapted if it is measurable and, for each t ∈ I, the map ω ↦ X(t, ω) is strongly F_t/B(H)-measurable;
3) X(·) is said to be F-progressively measurable if, for each t ∈ I, the map (s, ω) ↦ X(s, ω) from [0, t] × Ω to H is strongly (B([0, t]) × F_t)/B(H)-measurable.

Definition 2.71. A set A ⊂ I × Ω is called progressively measurable w.r.t. F if the process χ_A(·) is F-progressively measurable. The class of all progressively measurable sets is a σ-field, called the progressive σ-field w.r.t. F, denoted by F.

The following result is sometimes useful.

Lemma 2.72. An (H-valued) process X(·) : [0, T ] × Ω → H is F-progressively measurable if and only if it is strongly F-measurable.

It is clear that if X(·) is F-progressively measurable, then it is F-adapted. Conversely, it can be proved that for any F-adapted process X(·) there is an F-progressively measurable process X̃(·) which is stochastically equivalent to X(·) (see e.g. [261, pp. 68] for the case dim H < ∞; the case dim H = ∞ can be handled similarly). For this reason, unless otherwise indicated, by saying in the sequel that a process is F-adapted we mean that it is F-progressively measurable.
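In a finite discrete-time analogue, adaptedness in the sense of Definition 2.70 can be checked mechanically: X(t) is F_t-measurable iff it is constant on every atom of F_t. The sketch below (Ω = {0, 1}³ with the filtration generated by the coordinate maps; all names are ad hoc choices for this illustration) verifies this for the partial-sum process:

```python
from itertools import product

# Omega = all 3 coin flips; F_t is generated by the first t coordinates, so
# its atoms are the cylinder sets {omega : omega[:t] fixed}. A map f is
# F_t-measurable iff it is constant on each such atom (Definition 2.70, 2)).

Omega = list(product([0, 1], repeat=3))

def X(t, omega):          # partial sums: X(t) depends only on omega[:t]
    return sum(omega[:t])

def is_Ft_measurable(f, t):
    atoms = {}
    for omega in Omega:
        atoms.setdefault(omega[:t], set()).add(f(omega))
    return all(len(values) == 1 for values in atoms.values())

adapted = all(is_Ft_measurable(lambda w: X(t, w), t) for t in range(4))
lookahead = is_Ft_measurable(lambda w: X(3, w), 1)  # X(3) peeks at the future
print(adapted, lookahead)   # True False
```

The failing second check shows why adaptedness is a genuine restriction: a quantity depending on future coordinates is not measurable w.r.t. an earlier σ-field.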
The notation introduced below will be used in the rest of this book. For any p, q ∈ [1, ∞), write L^p_{F_t}(Ω; H) = L^p(Ω, F_t, P; H) (t ∈ [0, T ]), and define

L^p_F(Ω; L^q(0, T; H)) = {φ : (0, T) × Ω → H | φ(·) is F-adapted and E(∫_0^T |φ(t)|_H^q dt)^{p/q} < ∞}.

Let f ∈ L^r_F(0, T; L^s(Ω; H)) and ε > 0. Then there is a g ∈ C_{0,F}((0, T); L^∞(Ω; H)) such that |f − g|_{L^r_F(0,T;L^s(Ω;H))} < ε. By Lemma 2.75, we can find an f_n = ∑_{i=1}^n χ_{[t_i,t_{i+1})}(t)ξ_i, where n ∈ N, 0 = t_1 < t_2 < ··· < t_n < t_{n+1} = T and ξ_i ∈ L^∞_{F_{t_i}}(Ω; H), such that |g − f_n|_{L^r_F(0,T;L^s(Ω;H))} < ε, and hence |f − f_n|_{L^r_F(0,T;L^s(Ω;H))} < 2ε.

2.7 Stopping Times

A random variable τ : Ω → [0, +∞] is called an (F-)stopping time if {τ ≤ t} ∈ F_t for all t ≥ 0. For a stopping time τ, the σ-field F_τ = {A ∈ F | A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0} collects the events determined by time τ. A stopping time may be thought of as the first time when some physical event occurs.

Example 2.83. Let X(·) be an F-adapted and continuous process with values in H. Let D ⊂ H be an open set. Then both the first entrance time of the process X(·) into D, i.e.,

σ_D(ω) = inf{t ≥ 0 | X(t, ω) ∈ D},

and the first exit time of the process X(·) from D, i.e.,

τ_D(ω) = inf{t ≥ 0 | X(t, ω) ∉ D},

are stopping times. (Here, we agree that inf ∅ = +∞.)
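Example 2.83 can be made concrete for a discrete path. The sketch below (the helper names, the hand-picked path and the set D = (−2, 2) are all ad hoc choices) computes the first entrance and first exit times, with the convention inf ∅ = +∞:

```python
# First entrance/exit times of a discrete path, as in Example 2.83 with
# H = R and D = (-2, 2); time runs over the indices of the path.

INF = float("inf")

def first_entrance(path, in_D):
    return next((t for t, x in enumerate(path) if in_D(x)), INF)

def first_exit(path, in_D):
    return next((t for t, x in enumerate(path) if not in_D(x)), INF)

in_D = lambda x: -2 < x < 2
path = [5, 3, 1, 0, -1, -3, -1, 4]   # X(0), X(1), ...

print(first_entrance(path, in_D))    # 2: first t with X(t) in D
print(first_exit(path, in_D))        # 0: X(0) already lies outside D
```

Note that deciding whether either time is ≤ t only requires the path up to time t, which is exactly the stopping-time property {τ ≤ t} ∈ F_t.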


Some basic properties of stopping times are listed in the following proposition.

Proposition 2.84. Let σ, τ and σ_i (i = 1, 2, ···) be stopping times. Then:
1) The random variables σ + τ, sup_{i∈N} σ_i, inf_{i∈N} σ_i, lim sup_{i→∞} σ_i and lim inf_{i→∞} σ_i are stopping times. Also, the events {σ > τ}, {σ ≥ τ} and {σ = τ} belong to F_{σ∧τ};
2) The process Y(·) = τ ∧ · is F-progressively measurable;
3) For any A ∈ F_σ, it holds that A ∩ {σ ≤ τ} ∈ F_τ. In particular, if σ ≤ τ a.s., then F_σ ⊂ F_τ;
4) Let σ̂ = inf_{i∈N} σ_i. Then ∩_{i=1}^∞ F_{σ_i} = F_{σ̂}; in particular, F_{σ1} ∩ F_{σ2} = F_{σ1∧σ2}.

The following result will sometimes be technically useful.

Proposition 2.85. Every stopping time is the decreasing limit of a sequence of stopping times taking only finitely many values.

Proof: For a stopping time τ, write

τ_k = ∑_{ℓ=1}^{k2^k} (ℓ/2^k) χ_{{(ℓ−1)/2^k ≤ τ < ℓ/2^k}} + (+∞)χ_{{τ ≥ k}},  k = 1, 2, ··· .

It is easy to check that τ_k is a stopping time and that {τ_k}_{k=1}^∞ decreases to τ.

The following result provides a characterization of F_τ-measurable random variables.

Proposition 2.86. Let τ be a stopping time and ξ be a random variable with values in H. Then ξ is F_τ-measurable if and only if ξχ_{{τ≤t}} is F_t-measurable for all t ≥ 0.

Proof: For simplicity, we only consider the case H = R. If ξ is F_τ-measurable, then there is a sequence of F_τ-measurable simple functions

ξ_j = ∑_{i=1}^{n_j} ξ_j^i χ_{A_j^i} → ξ,  as j → ∞, P-a.s.,

where ξ_j^i ∈ R, n_j ∈ N, A_j^i ∈ F_τ and j = 1, 2, ··· . Obviously, ξ_j χ_{{τ≤t}} = ∑_{i=1}^{n_j} ξ_j^i χ_{A_j^i ∩ {τ≤t}} is F_t-measurable. Letting j → ∞, we see that ξχ_{{τ≤t}} is also F_t-measurable for all t ≥ 0.

Conversely, if ξχ_{{τ≤t}} is F_t-measurable for all t ≥ 0, then

{ξ ≤ a} ∩ {τ ≤ t} = {ξχ_{{τ≤t}} ≤ a} ∈ F_t,  ∀ a < 0,
{ξ > a} ∩ {τ ≤ t} = {ξχ_{{τ≤t}} > a} = ({ξχ_{{τ≤t}} ≤ a})^c ∈ F_t,  ∀ a ≥ 0.

Therefore, {ξ ≤ a} ∈ F_τ for all a ∈ R, which implies that ξ is F_τ-measurable.
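The dyadic construction in the proof of Proposition 2.85 can be checked numerically. The sketch below (with τ evaluated at one fixed sample point, i.e., treated as a number) computes τ_k and confirms that the values are dyadic levels decreasing to τ:

```python
import math

def tau_k(tau, k):
    """Value of the k-th dyadic approximation at a sample point where the
    stopping time equals tau; tau_k takes only the values l/2^k and +inf."""
    if tau >= k:
        return math.inf
    # the unique l with (l-1)/2^k <= tau < l/2^k is l = floor(tau * 2^k) + 1
    return (math.floor(tau * 2 ** k) + 1) / 2 ** k

tau = 0.3
approx = [tau_k(tau, k) for k in range(1, 30)]
print(approx[:3])                                        # [0.5, 0.5, 0.375]
assert all(a >= b for a, b in zip(approx, approx[1:]))   # decreasing in k
assert all(a > tau for a in approx)                      # dominates tau
assert abs(approx[-1] - tau) < 1e-8                      # converges to tau
```

Monotonicity holds because the dyadic grid of order k + 1 refines that of order k, so the smallest grid point strictly above τ can only move down.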


Proposition 2.87. Let σ and τ be two stopping times, and X be an integrable random variable with values in H. Then

χ_{{σ>τ}} E(X | F_τ) = E(χ_{{σ>τ}} X | F_τ) = χ_{{σ>τ}} E(X | F_{σ∧τ}),
χ_{{σ≥τ}} E(X | F_τ) = E(χ_{{σ≥τ}} X | F_τ) = χ_{{σ≥τ}} E(X | F_{σ∧τ}),
E(E(X | F_τ) | F_σ) = E(X | F_{σ∧τ}).

Proof: The first equalities in the first two assertions are obvious. To prove the second equality in the first assertion, we note that

χ_{{σ>τ}} E(X | F_τ) χ_{{σ∧τ≤t}} = E(X | F_τ) χ_{{τ≤t}} χ_{{σ>τ, σ∧τ≤t}}.

Since E(X | F_τ) is F_τ-measurable, by Proposition 2.86, E(X | F_τ)χ_{{τ≤t}} is F_t-measurable. Recall also that {σ > τ} ∈ F_{σ∧τ}. Thus χ_{{σ>τ, σ∧τ≤t}} is F_t-measurable. Hence χ_{{σ>τ}} E(X | F_τ) χ_{{σ∧τ≤t}} is F_t-measurable for all t ≥ 0. By Proposition 2.86 again, χ_{{σ>τ}} E(X | F_τ) is F_{σ∧τ}-measurable. Then,

χ_{{σ>τ}} E(X | F_τ) = E(χ_{{σ>τ}} E(X | F_τ) | F_{σ∧τ}) = χ_{{σ>τ}} E(E(X | F_τ) | F_{σ∧τ}) = χ_{{σ>τ}} E(X | F_{σ∧τ}),

which proves the first assertion. The second one can be proved similarly. Finally,

E(E(X | F_τ) | F_σ)
= E(χ_{{σ>τ}} E(X | F_τ) | F_σ) + E(χ_{{τ≥σ}} E(X | F_τ) | F_σ)
= E(χ_{{σ>τ}} E(X | F_{σ∧τ}) | F_σ) + χ_{{τ≥σ}} E(E(X | F_τ) | F_{τ∧σ})
= χ_{{σ>τ}} E(X | F_{σ∧τ}) + χ_{{τ≥σ}} E(X | F_{τ∧σ})
= E(X | F_{τ∧σ}),

which gives the third assertion.

Finally, we show the following result.

Proposition 2.88. Let X(·) be an F-adapted process with values in H, and let τ be a stopping time. Then the random variable X(τ) is F_τ-measurable and the process X(τ ∧ ·) is F-adapted.

Proof: We first prove that X(τ ∧ ·) is F-adapted. By the second conclusion in Proposition 2.84, the process τ ∧ · is F-progressively measurable. Thus, for each t ≥ 0, the map (s, ω) ↦ (τ(ω) ∧ s, ω) is measurable from ([0, t] × Ω, B([0, t]) × F_t) into itself. On the other hand, by the progressive measurability of X(·), the map (s, ω) ↦ X(s, ω) is measurable from ([0, t] × Ω, B([0, t]) × F_t) into (H, B(H)). Hence, the map (s, ω) ↦ X(τ(ω) ∧ s, ω) is measurable from ([0, t] × Ω, B([0, t]) × F_t) into (H, B(H)), which yields the F-progressive measurability of X(τ ∧ ·). In particular, X(τ ∧ t) is F_t-measurable for all t ≥ 0.

Next, for any B ∈ B(H),

{X(τ) ∈ B} ∩ {τ ≤ t} = {X(τ ∧ t) ∈ B} ∩ {τ ≤ t} ∈ F_t,  ∀ t ≥ 0.

Therefore, {X(τ) ∈ B} ∈ F_τ. Thus X(τ) is F_τ-measurable.


2.8 Martingales

Let (Ω, F, F, P) (with F = {F_t}_{t∈I}) be a filtered probability space satisfying the usual condition, and denote by F the progressive σ-field w.r.t. F. We first consider real valued martingales and then turn to vector-valued martingales.

2.8.1 Real Valued Martingales

Let I be a time parameter set: I = {0, 1, 2, ···} (Î = {0, 1, 2, ···, ∞}) in the discrete time case, or I = [0, ∞) (Î = [0, ∞]) in the continuous time case. We begin with the following notions.

Definition 2.89. A real valued process X = {X(t)}_{t∈I} is called an {F_t}_{t∈I}-martingale, or simply a martingale (resp. supermartingale, submartingale) when the filtration {F_t}_{t∈I} is clear from the context, if
1) X(t) is F_t-measurable and integrable (w.r.t. the probability measure P) for each t ∈ I; and
2) E(X(t) | F_s) = X(s) (resp. ≤ X(s), ≥ X(s)) a.s. for any t, s ∈ I with s < t.

In the sequel, for any real numbers b and c, write b ∨ c ≡ max(b, c) and b ∧ c ≡ min(b, c).

Example 2.90. 1) Suppose that X is an integrable random variable. Then {E(X | F_t)}_{t∈I} is a martingale.
2) Let X(·) be a martingale (resp. submartingale) and f be a convex function (resp. non-decreasing convex function) such that f(X(t)) is integrable for each t ∈ I. Then f(X(·)) is a submartingale. In particular, for any a ∈ R, the processes X(·)^+ and X(·) ∨ a are submartingales whenever X(·) is.

Obviously, X(·) is a martingale if and only if it is both a sub- and a supermartingale. Martingales are an important class of stochastic processes, for which many quantities can be computed or estimated explicitly.

First, we consider the discrete time case. In this case, we write X_n instead of X(n), and all stopping times are assumed to take values in {0, 1, 2, ···, ∞}. The following result, called Doob's stopping theorem, is the basis for the martingale inequalities established below.

Theorem 2.91. Let {X_n}_{n∈I} be a martingale (resp. supermartingale, submartingale), and let σ and τ be two bounded stopping times with σ ≤ τ a.s. Then

E(X_τ | F_σ) = X_σ (resp. ≤ X_σ, ≥ X_σ),  a.s.

Proof: It suffices to consider the case of a supermartingale. Suppose σ ∨ τ ≤ M a.s. We need to show that for every A ∈ F_σ,




∫_A X_σ dP ≥ ∫_A X_τ dP.

Suppose first that σ ≤ τ ≤ σ + 1. Put

B_n = A ∩ {σ = n} ∩ {τ > σ} = A ∩ {σ = n} ∩ {τ > n} ∈ F_n.

It is clear that A ∩ {τ > σ} = ∪_{n=0}^∞ B_n. Therefore,

∫_A (X_σ − X_τ) dP = ∫_{A∩{τ≥σ}} (X_σ − X_τ) dP = ∫_{A∩{τ>σ}} (X_σ − X_τ) dP = ∑_{n=0}^∞ ∫_{B_n} (X_σ − X_τ) dP = ∑_{n=0}^∞ ∫_{B_n} (X_n − X_{n+1}) dP ≥ 0.

In the general case, write γ_k = τ ∧ (σ + k) for k = 0, 1, 2, ···, M. Then the γ_k are stopping times, and

0 ≤ γ_{j+1} − γ_j ≤ 1,  j = 0, 1, 2, ···, M − 1.

Note that γ_0 = σ and γ_M = τ. Thus, from the case discussed above,

∫_A X_σ dP = ∫_A X_{γ_0} dP ≥ ∫_A X_{γ_1} dP ≥ ··· ≥ ∫_A X_{γ_M} dP = ∫_A X_τ dP.
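Theorem 2.91 can be verified exactly on a small example. The sketch below (a fair ±1 random walk martingale and the bounded stopping time τ = (first hitting time of 1) ∧ 4; all choices are ad hoc) enumerates the 2⁴ equally likely sign paths and computes E X_τ as an exact rational, which by the theorem must equal E X_0 = 0:

```python
from fractions import Fraction
from itertools import product

m = 4
total = Fraction(0)
for signs in product([1, -1], repeat=m):
    X, X_tau = 0, None
    for n, s in enumerate(signs, start=1):
        X += s
        if X == 1 and X_tau is None:   # tau = first hitting time of 1 (<= m)
            X_tau = X
    if X_tau is None:                  # never hit 1 before time m: tau = m
        X_tau = X
    total += Fraction(X_tau, 2 ** m)

print(total)   # E X_tau = 0 = E X_0, as Theorem 2.91 asserts
```

The boundedness of τ matters: for the unbounded first hitting time of 1 (with no cap), X_τ = 1 a.s., so E X_τ = 1 ≠ 0 and the conclusion fails.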

The following result is called Doob’s inequality. Theorem 2.92. Let {Xn }n∈I be a submartingale. Then for every λ > 0 and m ∈ I, ∫ λP({ max Xn ≥ λ}) ≤ Xm dP ≤ E|Xm |, 0≤n≤m { max Xn ≥ λ} 0≤n≤m

and λP({ min Xn ≤ −λ}) ≤ E(|X0 | + |Xm |). 0≤n≤m

Proof : We define a stopping time as follows { min{n ≤ m | Xn ≥ λ}, if {n ≤ m | Xn ≥ λ} ̸= ∅, σ= m, if {n ≤ m | Xn ≥ λ} = ∅. Obviously, σ ≤ m. Therefore ∫ ∫ EXm ≥ EXσ = Xσ dP + Xσ dP { max Xn ≥ λ} { max Xn < λ} 0≤n≤m 0≤n≤m ∫ ≥ λP({ max Xn ≥ λ}) + Xm dP, 0≤n≤m { max Xn < λ} 0≤n≤m


which yields the first inequality. The second one is obtained from E X_0 ≤ E X_τ, where

τ = min{n ≤ m | X_n ≤ −λ} if {n ≤ m | X_n ≤ −λ} ≠ ∅;  τ = m otherwise.

This completes the proof of Theorem 2.92.

As a consequence of Theorem 2.92, one has the following result.

Corollary 2.93. Let {X_n}_{n∈I} be a martingale or a nonnegative submartingale such that E|X_n|^p < ∞ for some p ≥ 1 and all n ∈ I. Then, for every m ∈ I and λ > 0,

P({max_{0≤n≤m} |X_n| ≥ λ}) ≤ λ^{−p} E|X_m|^p,

and, if p > 1,

E(max_{0≤n≤m} |X_n|^p) ≤ (p/(p − 1))^p E|X_m|^p.

Proof: Obviously, {|X_n|^p}_{n∈I} is a submartingale, and the first assertion follows from Theorem 2.92. As for the second one, we set Y = max_{0≤n≤m} |X_n|. Then, by Theorem 2.92, we have

λ P({Y ≥ λ}) ≤ ∫_{{Y≥λ}} |X_m| dP.    (2.81)

Hence, by (2.81) and Lemma 2.23,

E Y^p ≤ p ∫_0^∞ λ^{p−2} ∫_Ω χ_{{Y≥λ}} |X_m| dP dλ = p ∫_Ω |X_m| ∫_0^Y λ^{p−2} dλ dP = (p/(p − 1)) ∫_Ω |X_m| Y^{p−1} dP,

which yields the desired result.

For a real valued, F-adapted process X = {X_n}_{n∈I} and an interval [a, b] with −∞ < a < b < ∞, we set

τ_0 = 0,
τ_1 = min{n | X_n ≤ a},
τ_2 = min{n ≥ τ_1 | X_n ≥ b},
··· ,
τ_{2k+1} = min{n ≥ τ_{2k} | X_n ≤ a},
τ_{2k+2} = min{n ≥ τ_{2k+1} | X_n ≥ b},
··· ,    (2.82)

where min ∅ = +∞ unless otherwise stated. It is clear that {τ_n} is an increasing sequence of stopping times. For any m ∈ I, set

U_m^X(a, b)(ω) = max{k | τ_{2k}(ω) ≤ m}.

Obviously, U_m^X(a, b) is the number of upcrossings of [a, b] by {X_n}_{n=0}^m. The following result, called Doob's upcrossing inequality, is used to derive convergence results for martingales.

Theorem 2.94. Let X = {X_n}_{n∈I} be a submartingale. Then

E(U_m^X(a, b)) ≤ (1/(b − a)) E[(X_m − a)^+ − (X_0 − a)^+].
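Before turning to the proof, the bound can be checked exactly on a small example: the sketch below (a fair ±1 random walk, the interval [a, b] = [0, 1] and the horizon m = 6 are all ad hoc choices; note X_0 = 0, so (X_0 − a)^+ = 0) enumerates every sign path and compares E U_m^X(a, b) with the right-hand side as exact rationals:

```python
from fractions import Fraction
from itertools import product

def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: the largest k with tau_2k <= m
    for the stopping times of (2.82)."""
    count, seeking_low = 0, True
    for x in path:
        if seeking_low and x <= a:
            seeking_low = False
        elif not seeking_low and x >= b:
            count += 1
            seeking_low = True
    return count

m, a, b = 6, 0, 1
e_up, e_gain = Fraction(0), Fraction(0)
for signs in product([1, -1], repeat=m):
    walk = [0]
    for s in signs:
        walk.append(walk[-1] + s)
    e_up += Fraction(upcrossings(walk, a, b), 2 ** m)
    e_gain += Fraction(max(walk[-1] - a, 0) - max(walk[0] - a, 0), 2 ** m)

print(e_up, e_gain)   # E U_m  and  E[(X_m - a)^+ - (X_0 - a)^+]; here b - a = 1
assert e_up <= e_gain / (b - a)
```

Since each completed upcrossing earns at least b − a for the "buy low, sell high" strategy in the proof below, the inequality is exactly the statement that this strategy cannot beat the submartingale's expected gain.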

Proof: Write Y_n = (X_n − a)^+ and Y = {Y_n}_{n∈I}. Obviously, Y is a submartingale and U_m^X(a, b) = U_m^Y(0, b − a). Let τ_i be defined as in (2.82) with X, a and b replaced by Y, 0 and b − a, respectively. Set τ_n′ = τ_n ∧ m. Then, if 2j > m,

Y_m − Y_0 = ∑_{n=1}^{2j} (Y_{τ_n′} − Y_{τ_{n−1}′}) = ∑_{n=1}^{j} (Y_{τ_{2n}′} − Y_{τ_{2n−1}′}) + ∑_{n=0}^{j−1} (Y_{τ_{2n+1}′} − Y_{τ_{2n}′}).

It is easy to see that the first term on the right-hand side is greater than or equal to (b − a)U_m^Y(0, b − a). Also, E Y_{τ_{2n+1}′} ≥ E Y_{τ_{2n}′}. These two facts yield the desired result.

The following result characterizes the convergence of submartingales.

Theorem 2.95. If X = {X_n}_{n∈I} is a submartingale such that sup_{n∈I} E X_n^+ < ∞, or equivalently sup_{n∈I} E|X_n| < ∞, then X_∞ = lim_{n→∞} X_n exists a.s., and X_∞ is integrable. In order that X̂ = {X_n}_{n∈Î} is a submartingale, i.e.,

X_n ≤ E(X_m | F_n),  ∀ 0 ≤ n < m ≤ ∞,

it is necessary and sufficient that {X_n^+}_{n∈I} is uniformly integrable.

Proof: Since X is a submartingale, the equivalence of sup_{n∈I} E X_n^+ < ∞ and sup_{n∈I} E|X_n| < ∞ follows from

sup_{n∈I} E X_n^+ ≤ sup_{n∈I} E|X_n| = sup_{n∈I} (2E X_n^+ − E X_n) ≤ −E X_0 + 2 sup_{n∈I} E X_n^+.

Set U_∞^X(a, b) = lim_{m→∞} U_m^X(a, b). Clearly,

{lim inf_{n→∞} X_n < lim sup_{n→∞} X_n} ⊂ ∪_{r,r′∈Q, r<r′} {U_∞^X(r, r′) = ∞}.

By Theorem 2.94, E(U_∞^X(r, r′)) ≤ sup_{m∈I} E(X_m − r)^+/(r′ − r) < ∞ for any r < r′, and hence U_∞^X(r, r′) < ∞ a.s. Consequently, X_∞ = lim_{n→∞} X_n exists a.s., and by Fatou's lemma X_∞ is integrable. Now, suppose that {X_n^+}_{n∈I} is uniformly integrable. Since |X_n ∨ (−a)| ≤ X_n^+ + a for every a > 0,

one concludes that {X_n ∨ (−a)}_{n∈I} is uniformly integrable. Thus, by Theorem 2.25, we see that

lim_{n→∞} X_n ∨ (−a) = X_∞ ∨ (−a) in L^1_F(Ω).

Since {X_n ∨ (−a)}_{n∈I} is a submartingale,

E(X_∞ ∨ (−a) | F_n) = lim_{m→∞} E(X_m ∨ (−a) | F_n) ≥ X_n ∨ (−a),

and so, letting a → +∞, one gets E(X_∞ | F_n) ≥ X_n.

For martingales, one has the following further result.

Theorem 2.96. Let X = {X_n}_{n∈I} be a martingale. Then the following conditions are equivalent:
1) lim_{n→∞} X_n = X_∞ exists a.s. and in L^1_F(Ω);
2) X_n = E(X_∞ | F_n) for all n ∈ I, a.s.;
3) {X_n}_{n∈I} is uniformly integrable.
Furthermore, under any one of the above conditions, X̂ = {X_n}_{n∈Î} is a martingale.

Proof: "1)⇒2)". For any m ∈ I, we have E(X_n | F_m) = X_m for all n ≥ m. Letting n → ∞, we obtain that X_m = E(X_∞ | F_m), a.s.

"2)⇒3)". Obvious.

"3)⇒1)". The uniform integrability of {X_n}_{n∈I} implies the boundedness of {X_n}_{n∈I} in L^1_F(Ω) and the uniform integrability of {X_n^+}_{n∈I}. Now, the desired result follows from Theorems 2.95 and 2.25.

Next, we consider, for a moment, martingales with "reversed" time. Let {F_n}_{n=−∞}^0 be a family of sub-σ-fields of F such that F_0 ⊃ F_{−1} ⊃ F_{−2} ⊃ ··· ⊃ F_{−∞}. We say that X = {X_n}_{n=−∞}^0 is a martingale (resp. supermartingale, submartingale) if X_n is an F_n-measurable integrable random variable such that

E(X_n | F_m) = X_m

(resp. ≤ X_m, ≥ X_m)

for every n, m ∈ {0, −1, −2, ···} with n > m. One has the following convergence result.

Theorem 2.97. Let X = {X_n}_{n=−∞}^0 be a submartingale. Then:
1) X_{−∞} = lim_{n→−∞} X_n exists a.s.;
2) X is uniformly integrable and lim_{n→−∞} X_n = X_{−∞} in L^1_F(Ω), provided that lim_{n→−∞} E X_n > −∞.

Proof: The first assertion can be proved similarly to that of Theorem 2.95. For the second one, in view of Theorem 2.25, it suffices to show the uniform integrability of X. For this, we fix any ε > 0 and choose an integer k such that

|E X_{k_1} − E X_{k_2}| < ε,  ∀ integers k_1, k_2 ≤ k.

Then, for any λ > 0 and integer n ≤ k,

∫_{{|X_n|>λ}} |X_n| dP = ∫_{{X_n>λ}} X_n dP + ∫_{{X_n≥−λ}} X_n dP − E X_n
≤ ∫_{{X_n>λ}} X_k dP + ∫_{{X_n≥−λ}} X_k dP − E X_k + ε
≤ 2 ∫_{{|X_n|>λ}} |X_k| dP + ε.

Also,

P({|X_n| > λ}) ≤ (1/λ) E|X_n| = (1/λ)(2E X_n^+ − E X_n) ≤ (1/λ)(2E X_0^+ − lim_{n→−∞} E X_n).

Therefore, the uniform integrability of {X_n}_{n=−∞}^0 follows.

Now, we consider the continuous time case, i.e., I = [0, ∞). We begin with the following result.

Proposition 2.98. Let X = {X(t)}_{t∈I} be a submartingale. Then, Q ∩ I ∋ t ↦ X(t) is finitely valued a.s. and possesses the limits

lim_{Q∩I∋s→t+} X(s) and lim_{Q∩I∋s→t−} X(s),  a.s., ∀ t ≥ 0.

Proof: Let T > 0 be given and {r_1, r_2, ···} be an enumeration of the set Q ∩ [0, T ]. For every n ∈ N, if {s_1, s_2, ···, s_n} is the rearrangement of the set {r_1, r_2, ···, r_n} according to the natural order, then Y_0 = X(0), Y_1 = X(s_1), ···, Y_n = X(s_n), Y_{n+1} = X(T) defines a submartingale. Write Y = {Y_i}_{i∈{0,1,···,n+1}}. By Theorems 2.92 and 2.94, for each λ > 0, we get

P({max_{0≤i≤n+1} |Y_i| > λ}) ≤ (2/λ)(E|X(0)| + E|X(T)|),

and, for any a, b ∈ Q with a < b,

E(U_n^Y(a, b)) ≤ (1/(b − a)) E(X(T) − a)^+.

Since this holds for every n, we have

P({sup_{t∈Q∩[0,T]} |X(t)| > λ}) ≤ (2/λ)(E|X(0)| + E|X(T)|),

and

E(U_∞^{X|_{Q∩[0,T]}}(a, b)) ≤ (1/(b − a)) E(X(T) − a)^+,

where X|_{Q∩[0,T]} denotes the restriction of X to the set Q ∩ [0, T ]. By letting λ and a < b run over, respectively, the positive integers and pairs of rationals, the assertion of the proposition follows.

The following result is quite useful.

Theorem 2.99. Let X = {X(t)}_{t∈I} be a submartingale. Then:
1) X̃(t) = lim_{Q∩I∋r→t+} X(r) exists a.s. for any t ∈ I, and the stochastic process X̃ = {X̃(t)}_{t∈I} is a càdlàg submartingale;
2) X(t) ≤ X̃(t) a.s., ∀ t ∈ I;
3) P({X(t) = X̃(t)}) = 1 for every t ∈ I if and only if EX(·) is right-continuous.

Proof: 1) By Proposition 2.98, we see that X̃(t) is well-defined and F-adapted. To prove that {X̃(t)}_{t∈I} is a submartingale, we fix any s > t and choose arbitrarily a sequence {ε_n}_{n=1}^∞ decreasing to 0 such that t + ε_n ∈ Q ∩ I for all n ∈ N. Then, by Theorem 2.97, we see that

lim_{n→∞} |X(t + ε_n) − X̃(t)|_{L^1_F(Ω)} = 0.

Similarly, we choose a sequence {ε_n′}_{n=1}^∞ decreasing to 0 such that s + ε_n′ ∈ Q ∩ I for all n ∈ N. Then

lim_{n→∞} |X(s + ε_n′) − X̃(s)|_{L^1_F(Ω)} = 0.

Hence, for any B ∈ F_t,

∫_B X̃(t) dP = lim_{n→∞} ∫_B X(t + ε_n) dP ≤ lim_{n→∞} ∫_B X(s + ε_n′) dP = ∫_B X̃(s) dP.

This implies that {X̃(t)}_{t∈I} is a submartingale.


Now, using Proposition 2.98 again, we see that for any t_0 ∈ I and ε > 0, there is a δ > 0 such that for any s ∈ (t_0, t_0 + δ) ∩ Q,

|X̃(t_0) − X(s)| < ε, a.s.

Therefore, for any r ∈ (t_0, t_0 + δ),

|X̃(t_0) − X̃(r)| = lim_{Q∩I∋s→r+} |X̃(t_0) − X(s)| ≤ ε, a.s.

Hence, lim_{r→t_0+} X̃(r) = X̃(t_0) a.s. This yields the right-continuity of X̃.

To show the existence of lim_{t→t_0−} X̃(t), we apply Proposition 2.98 (to the submartingale {X̃(t)}_{t∈I}) to conclude that

lim_{Q∩I∋t→t_0−} X̃(t) exists, a.s.

Thus, for any ε > 0, there is a δ > 0 such that

|X̃(t_1) − X̃(t_2)| < ε,  ∀ t_1, t_2 ∈ (t_0 − δ, t_0) ∩ Q ∩ I.

By the right-continuity of X̃(·), the above inequality can be strengthened to

|X̃(t_1) − X̃(t_2)| < ε,  ∀ t_1, t_2 ∈ (t_0 − δ, t_0) ∩ I,

which implies the existence of the desired left limit.

2) It is easy to see that

∫_B X(t) dP ≤ ∫_B X̃(t) dP,  ∀ B ∈ F_t,

and hence X(t) ≤ X̃(t), a.s.

3) The "if" part. By the right-continuity of EX(·), we get

EX(t) = lim_{Q∩I∋s→t+} EX(s) = EX̃(t).

Note that X̃(t) ≥ X(t), a.s. Hence P({X(t) = X̃(t)}) = 1.

The "only if" part. Choose any sequence {s_n} decreasing to t. Then, from the proof of Proposition 2.98, we see that

lim_{s_n→t+} X(s_n) = lim_{Q∩I∋s→t+} X(s) = X̃(t),  a.s. ∀ t ∈ I.

Thus, by Theorem 2.97,

lim_{s_n→t+} EX(s_n) = EX̃(t) = EX(t),

which gives the right-continuity of EX(·).

An immediate consequence of Theorem 2.99 is as follows.


Corollary 2.100. Let X = {X(t)}_{t∈I} be a submartingale such that EX(·) is right-continuous. Then the process X̃ = {X̃(t)}_{t∈I} in Theorem 2.99 is a càdlàg modification of X.

The following result is a continuous counterpart of Corollary 2.93.

Theorem 2.101. If X = {X(t)}_{t∈I} is a right-continuous martingale or nonnegative submartingale such that E|X(T)|^p < ∞ for some p ≥ 1 and T > 0, then for each λ > 0,

  P( sup_{t∈[0,T]} |X(t)| ≥ λ ) ≤ λ^{−p} E|X(T)|^p,  p ≥ 1,
                                                             (2.83)
  E( sup_{t∈[0,T]} |X(t)|^p ) ≤ (p/(p−1))^p E|X(T)|^p,  p > 1.

Proof: Let ∆ = (Q ∩ [0,T]) ∪ {T} and let {∆ₙ}_{n=1}^∞ be a family of subsets of ∆ such that ∆ₙ has n elements and ∪_{n=1}^∞ ∆ₙ = ∆. From Corollary 2.93, we find that

  P( sup_{t∈∆ₙ} |X(t)| ≥ λ ) ≤ λ^{−p} E|X(T)|^p,  p ≥ 1.

Hence,

  P( sup_{t∈(Q∩[0,T])∪{T}} |X(t)| ≥ λ ) ≤ λ^{−p} E|X(T)|^p,  p ≥ 1.

This, together with the right-continuity of |X(·)|, implies the first inequality in (2.83). The proof of the second inequality in (2.83) is similar.

Corollary 2.102. Let T > 0, p ∈ (1, ∞] and let X(·) ∈ L^p_F(0,T) be a martingale. Then there is an X̃(·) ∈ L^p_F(Ω; D([0,T])) such that X̃(·) is a modification of X(·).

Proof: By Corollary 2.100, there is a càdlàg martingale X̃(·) such that P({X̃(t) = X(t)}) = 1 for all t ∈ [0,T]. Thus, X̃(·) ∈ L^p_F(0,T). This, together with the fact that the stochastic process X̃(·) is càdlàg, implies X̃(T) ∈ L^p_{F_T}(Ω). By Theorem 2.101, we conclude that E(sup_{t∈[0,T]} |X̃(t)|^p) < ∞. Hence, X̃(·) ∈ L^p_F(Ω; D([0,T])).

Similar to the proof of Theorem 2.101, by Theorems 2.95–2.96, one can show the following two results.

Theorem 2.103. Let X = {X(t)}_{t∈I} be a right-continuous submartingale such that sup_{t∈I} EX(t)⁺ < ∞. Then,
1) lim_{t→∞} X(t) = X_∞ a.s., for some X_∞ ∈ L¹_F(Ω);
2) If {X(t)⁺}_{t∈I} is uniformly integrable, then X̂ = {X̂(t)}_{t∈I} is a submartingale;
3) If {X(t)}_{t∈I} is uniformly integrable, then lim_{t→∞} X(t) = X_∞ in L¹_F(Ω).
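Doob's maximal inequality (2.83) is easy to check numerically. The following Monte Carlo sketch (assuming NumPy; the discretization and sample sizes are our choices, not the book's) takes the martingale X(t) = W(t), a one-dimensional Brownian motion, and compares the two sides of the L² case of (2.83).

```python
import numpy as np

# Monte Carlo illustration of Doob's maximal inequality (2.83) with p = 2 for the
# martingale X(t) = W(t):  E(sup_{t<=T} |X(t)|^2) <= 4 E|X(T)|^2.
rng = np.random.default_rng(1)
n_paths, n_steps, T = 20_000, 200, 1.0
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(T / n_steps)
W = np.cumsum(dW, axis=1)              # W(t_k) on a uniform grid of (0, T]
lhs = np.mean(np.max(W**2, axis=1))    # discretized E sup_t |W(t)|^2
rhs = 4 * np.mean(W[:, -1]**2)         # 4 E|W(T)|^2, approximately 4T
```

On this grid `lhs` comes out well below `rhs`, as (2.83) predicts; the bound with constant (p/(p−1))^p = 4 is not sharp for Brownian motion.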


Theorem 2.104. Let X = {X(t)}_{t∈I} be a martingale. Then the following conditions are equivalent:
1) lim_{t→∞} X(t) = X_∞ in L¹_F(Ω);
2) X(t) = E(X_∞ | F_t) a.s., ∀ t ∈ I;
3) {X(t)}_{t∈I} is uniformly integrable.
Furthermore, under any one of the above conditions, X̂ = {X̂(t)}_{t∈I} is a martingale.

The following result is a continuous version of Theorem 2.91.

Theorem 2.105. (Doob's stopping theorem) Let X = {X(t)}_{t∈I} be a right-continuous submartingale, and let σ and τ be two bounded stopping times such that P({σ ≤ τ}) = 1. Then X(σ) and X(τ) are integrable, and

  E(X(τ) | F_σ) ≥ X(σ),  a.s.

Proof: Since σ and τ are bounded, one can find a positive integer N such that σ ∨ τ ≤ N. Put

  σₙ = Σ_{k=1}^{2ⁿN} (k/2ⁿ) χ_{{(k−1)/2ⁿ ≤ σ < k/2ⁿ}},  τₙ = Σ_{k=1}^{2ⁿN} (k/2ⁿ) χ_{{(k−1)/2ⁿ ≤ τ < k/2ⁿ}}.

Then σₙ ≤ τₙ ≤ N, a.s. By Theorem 2.91, we see that E(X(τₙ) | F_{σₙ}) ≥ X(σₙ), a.s. This means that, for any A ∈ F_σ,

  ∫_A X(τₙ) dP ≥ ∫_A X(σₙ) dP,  ∀ n.

Hence, by Theorem 2.97, we get

  lim_{n→∞} X(τₙ) = X(τ),  lim_{n→∞} X(σₙ) = X(σ),  in L¹_F(Ω).

Consequently, ∫_A X(τ) dP ≥ ∫_A X(σ) dP, which gives the desired result.

Some consequences of Theorem 2.105 are in order.

Corollary 2.106. Let X(·) be a right-continuous F-submartingale, and let {σ_t}_{t∈I} be a family of right-continuous, bounded stopping times such that P(σ_t ≤ σ_s) = 1 when t < s. Set

  X̄(t) = X(σ_t),  F̄_t = F_{σ_t},  ∀ t ∈ I.

Then X̄ = {X̄(t)}_{t∈I} is a right-continuous {F̄_t}_{t∈I}-submartingale.

Corollary 2.107. Let X = {X(t)}_{t∈I} be a martingale, and let σ and τ be two stopping times satisfying σ ≤ τ, a.s. Then

  E(X(t ∧ τ) − X(t ∧ σ) | F_σ) = 0,  ∀ t ∈ I.

Proof: By Proposition 2.88, X(t ∧ τ) is F_t-measurable. Hence, by Corollary 2.106, for any t ≥ 0,

  X(t ∧ σ) = E(X(t ∧ τ) | F_{t∧σ}) = E(E(X(t ∧ τ) | F_t) | F_σ) = E(X(t ∧ τ) | F_σ).
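The dyadic approximation σₙ used in the proof of Theorem 2.105 is a purely deterministic rounding of each value of σ, so it can be sketched in a few lines (assuming NumPy; the function name is ours). The key properties used in the proof — σ < σₙ ≤ σ + 2⁻ⁿ and σₙ decreasing in n — are easy to verify.

```python
import numpy as np

def dyadic_round_up(sigma, n):
    """sigma_n from the proof of Theorem 2.105: on {(k-1)/2^n <= sigma < k/2^n},
    set sigma_n = k/2^n, i.e. round sigma up to the next dyadic point of level n."""
    return (np.floor(np.asarray(sigma) * 2**n) + 1) / 2**n
```

Since σₙ takes only the values k/2ⁿ and {σₙ = k/2ⁿ} = {(k−1)/2ⁿ ≤ σ < k/2ⁿ} ∈ F_{k/2ⁿ}, each σₙ is again a stopping time, and σₙ ↓ σ as n → ∞.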


2.8.2 Vector-Valued Martingales

Let T > 0 and let H be a separable Hilbert space.

Definition 2.108. An H-valued, F-adapted process X = {X(t)}_{t∈[0,T]} is called an F-martingale, or simply a martingale when the filtration F is clear from the context, if
1) X(t) is Bochner integrable for each t ∈ [0,T]; and
2) E(X(t) | F_s) = X(s) a.s., for every t, s ∈ [0,T] with s < t.

We have the following result.

Theorem 2.109. If X(·) is an H-valued martingale such that |X(·)|_H is right-continuous and X(T) ∈ L^p_{F_T}(Ω; H) for some p ≥ 1, then

  P( sup_{t∈[0,T]} |X(t)|_H > λ ) ≤ λ^{−p} E|X(T)|^p_H,  p ≥ 1, λ > 0,
                                                               (2.84)
  E( sup_{t∈[0,T]} |X(t)|^p_H ) ≤ (p/(p−1))^p E|X(T)|^p_H,  p > 1.

Proof: Noting that the process |X(·)|_H is a real valued submartingale, by Theorem 2.101, we obtain the desired results.

As an immediate corollary of Corollary 2.107, we have the following result.

Corollary 2.110. Let X = {X(t)}_{t∈I} be an H-valued martingale, and let σ and τ be two stopping times satisfying σ ≤ τ, a.s. Then

  E(X(t ∧ τ) − X(t ∧ σ) | F_σ) = 0,  ∀ t ∈ I.

We give below a generalization of Corollary 2.102 to the case of H-valued martingales. This result will play a key role in the sequel.

Theorem 2.111. Let X(·) be an H-valued martingale such that X(T) ∈ L^p_{F_T}(Ω; H) for some p ∈ (1, ∞]. Then there is an X̃(·) ∈ L^p_F(Ω; D([0,T]; H)) such that X̃(·) is a modification of X(·).

Proof: We consider only the case that p ∈ (1, ∞). Assume that X(T) = Σ_{i=1}^∞ ξᵢeᵢ, where {eᵢ}_{i=1}^∞ is an orthonormal basis of H. Obviously,

  X(t) = E(X(T) | F_t) = Σ_{i=1}^∞ E(ξᵢ | F_t) eᵢ,  ∀ t ∈ [0,T].

Put Xᵢ(t) = E(ξᵢ | F_t). Since Xᵢ(·) is a C-valued martingale for each i ∈ N, by Corollary 2.100, one can find a C-valued càdlàg process X̃ᵢ(·) which is a modification of Xᵢ(·). Write

  X̃(t) = Σ_{i=1}^∞ X̃ᵢ(t) eᵢ,  ∀ t ∈ [0,T].


Then X̃(·) is an H-valued martingale and X̃(T) ∈ L^p_{F_T}(Ω; H).

Let us prove that X̃(·) is càdlàg. For each n ∈ N, put X̃ₙ(·) = Σ_{i=1}^n X̃ᵢ(·)eᵢ. Clearly, X̃ₙ(·) is càdlàg. Since X̃(·) and X̃ₙ(·) are martingales, X̃(·) − X̃ₙ(·) is also a martingale. From the definition of X̃ₙ(·), there is a subsequence {n_k}_{k=1}^∞ ⊂ {n}_{n=1}^∞ such that, for any k ∈ N,

  |X̃(T) − X̃_{n_k}(T)|_{L^p_{F_T}(Ω;H)} ≤ 1/k³.                     (2.85)

Noting that |X̃(·) − X̃_{n_k}(·)|_H is a non-negative submartingale, by Theorem 2.109, we find that

  P( sup_{t∈[0,T]} |X̃(t) − X̃_{n_k}(t)|_H ≥ 1/k )
  ≤ k E|X̃(T) − X̃_{n_k}(T)|_H ≤ k |X̃(T) − X̃_{n_k}(T)|_{L^p_{F_T}(Ω;H)} ≤ 1/k².

Since Σ_{k=1}^∞ 1/k² < +∞, by the Borel-Cantelli lemma (Theorem 2.4), we conclude that

  P( limsup_{k→∞} { sup_{t∈[0,T]} |X̃(t) − X̃_{n_k}(t)|_H ≥ 1/k } ) = 0.   (2.86)

This implies that

  P( liminf_{k→∞} { sup_{t∈[0,T]} |X̃(t) − X̃_{n_k}(t)|_H < 1/k } ) = 1.   (2.87)

Hence there exists an event Ω₀ such that P(Ω₀) = 1 and, for each ω ∈ Ω₀, there exists a positive integer N(ω) such that

  sup_{t∈[0,T]} |X̃(t) − X̃_{n_k}(t)|_H < 1/k,  ∀ k ≥ N(ω).           (2.88)

Thus, for ω ∈ Ω₀, the sequence of functions {X̃_{n_k}(·, ω)}_{k=1}^∞ converges uniformly to X̃(·, ω) on [0,T]. Since for each k ∈ N the stochastic process X̃_{n_k}(·) is càdlàg, there exists an event Ω_k with P(Ω_k) = 1 such that, for any ω ∈ Ω_k, the function X̃_{n_k}(·, ω) is càdlàg.

Put Ω̃ = ∩_{k=0}^∞ Ω_k. Then P(Ω̃) = 1 and, for each ω ∈ Ω̃, {X̃_{n_k}(·, ω)}_{k=1}^∞ is a sequence of càdlàg functions that converges uniformly to X̃(·, ω) on [0,T]. It follows that X̃(·, ω) is a càdlàg function for each ω ∈ Ω̃. Hence X̃(·) is an H-valued, càdlàg stochastic process.

Now, by Theorem 2.109, one can show that X̃(·) ∈ L^p_F(Ω; D([0,T]; H)). This completes the proof of Theorem 2.111.

Finally, we introduce some Hilbert spaces which will be useful later.


  M²([0,T]; H) = {X ∈ L²_F(0,T; H) | X is a right-continuous F-martingale with X(0) = 0, a.s.},
  M²_c([0,T]; H) = {X ∈ M²([0,T]; H) | X is continuous}.

We identify X, Y ∈ M²([0,T]; H) if there exists a set N ∈ F with P(N) = 0 such that X(t, ω) = Y(t, ω) for all t ≥ 0 and ω ∉ N. Define

  |X|_{M²([0,T];H)} = √(E|X(T)|²_H),  ∀ X ∈ M²([0,T]; H).

We shall simply denote M²([0,T]; R) and M²_c([0,T]; R) by M²[0,T] and M²_c[0,T], respectively. We have the following result.

Lemma 2.112. (M²([0,T]; H), |·|_{M²([0,T];H)}) is a Hilbert space, and M²_c([0,T]; H) is a closed subspace of M²([0,T]; H).

Proof: First we note that if |X − Y|_{M²([0,T];H)} = 0, then X = Y. Indeed, |X − Y|_{M²([0,T];H)} = 0 implies X(T) = Y(T) a.s. Therefore, X(t) = E(X(T) | F_t) = E(Y(T) | F_t) = Y(t) for t ∈ [0,T], a.s. By the right-continuity of X(·) and Y(·), we conclude that X = Y.

Next, suppose that {Xⁿ}_{n=1}^∞ is a Cauchy sequence in M²([0,T]; H), i.e., lim_{n,m→∞} |Xⁿ − Xᵐ|_{M²([0,T];H)} = 0. Then, by the second assertion in Theorem 2.109, we conclude that

  E( sup_{0≤t≤T} |Xⁿ(t) − Xᵐ(t)|²_H ) ≤ 4E|Xⁿ(T) − Xᵐ(T)|²_H.

Hence, there is an X = {X(t)}_{t∈[0,T]} such that E|Xⁿ(t) − X(t)|²_H → 0 as n → ∞ uniformly for t ∈ [0,T], and we see from this that X ∈ M²([0,T]; H) and |Xⁿ − X|_{M²([0,T];H)} → 0 as n → ∞. It is also clear from this proof that if Xⁿ ∈ M²_c([0,T]; H) for each n, then X ∈ M²_c([0,T]; H).

Finally, we give the notion of a local martingale.

Definition 2.113. An H-valued, F-adapted process X = {X(t)}_{t∈[0,T]} is called an F-local martingale, or simply a local martingale when the filtration F is clear from the context, if there exists a sequence of stopping times {τ_k}_{k=1}^∞ such that τ_k increases monotonically to T a.s. as k → +∞, and for each k ∈ N, X(· ∧ τ_k) is an F-martingale.

2.9 Brownian Motions

Let (Ω, F, F, P) (with F = {F_t}_{t∈[0,∞)}) be a filtered probability space satisfying the usual condition. In this section, we present a class of important examples of martingales, namely Brownian motions.


2.9.1 Brownian Motions in Finite Dimensions

The Brownian motion of pollen particles in a liquid, which owes its name to its discoverer, the Scottish botanist R. Brown, in 1827, is due to the incessant hitting of the pollen by the much smaller molecules of the liquid. The hits occur a large number of times in any small interval of time, independently of each other, and the effect of each particular hit is small compared to the total effect. This motion has the following properties:
1) The displacements of a pollen particle over disjoint time intervals are independent random variables;
2) The displacements are Gaussian random variables;
3) The motion is continuous.

In this subsection, we address Brownian motions in Rᵐ (m ∈ N). Denote by I_m the identity matrix of order m. We begin with the following notion.

Definition 2.114. A continuous, Rᵐ-valued, F-adapted process {W(t)}_{t≥0} is called an m-dimensional (standard) Brownian motion if
1) P({W(0) = 0}) = 1; and
2) For any s, t ∈ [0, ∞) with 0 ≤ s < t < ∞, the random variable W(t) − W(s) is independent of F_s, and W(t) − W(s) ∼ N(0, (t − s)I_m).

Similarly, one can define Rᵐ-valued Brownian motions over any time interval [a, b] or [a, b) with 0 ≤ a < b ≤ ∞. If W(·) is a Brownian motion on (Ω, F, F, P), we may define

  F_t^W ≜ σ(W(s); s ∈ [0, t]) ⊂ F_t,  ∀ t ≥ 0.                      (2.89)

Generally, the filtration {F_t^W}_{t≥0} is left-continuous, but not necessarily right-continuous. Nevertheless, the augmentation {F̂_t^W}_{t≥0} of {F_t^W}_{t≥0} by adding all P-null sets is continuous, and W(·) is still a Brownian motion on the (augmented) filtered probability space (Ω, F, {F̂_t^W}_{t≥0}, P) (see [164, pp. 89 and 122] for a detailed discussion). In the sequel, by saying that F is the natural filtration generated by (the Brownian motion) W(·), we mean that F is generated as in (2.89) with the above augmentation, and hence in this case F is continuous.

The following result shows that the sample path of a Brownian motion is highly irregular, almost surely.

Proposition 2.115. Let W(·) be a real valued Brownian motion. Then, for almost all ω ∈ Ω, the map t ↦ W(t, ω) is nowhere differentiable.

Proof: It suffices to show that

  P({ limsup_{s→0+} |W(t+s) − W(t)|/s = +∞ }) = 1,  ∀ t ≥ 0.

For this purpose, for any t ≥ 0, s > 0 and positive integer n, denote

  A_{t,n,s} ≜ {|W(t+s) − W(t)| < ns}.

Noting that W(t+s) − W(t) ∼ N(0, s), we get

  P(A_{t,n,s}) = (1/√(2πs)) ∫_{−ns}^{ns} e^{−x²/(2s)} dx ≤ C√s,

where C = C(n) > 0 is a generic constant. We choose a sequence {s_k}_{k=1}^∞ so that lim_{k→∞} s_k = 0 and Σ_{k=1}^∞ √(s_k) < ∞. Then Σ_{k=1}^∞ P(A_{t,n,s_k}) < ∞. Denote

  A_{t,n} = liminf_{k→∞} (A_{t,n,s_k})^c.

By Theorem 2.4, we conclude that P(A_{t,n}) = 1. Then for any ω ∈ A_{t,n}, there is a k(ω) such that

  |W(t+s_k, ω) − W(t, ω)|/s_k ≥ n,  ∀ k ≥ k(ω).

Therefore,

  limsup_{s→0+} |W(t+s, ω) − W(t, ω)|/s ≥ n,  ∀ ω ∈ A_{t,n}.

Put B = ∩_{n≥1} A_{t,n}. Then P(B) = 1 and

  limsup_{s→0+} |W(t+s, ω) − W(t, ω)|/s = +∞,  ∀ ω ∈ B.

This completes the proof.

2.9.2 Construction of Brownian Motions in one Dimension

In this subsection, we present the Lévy construction of real valued Brownian motions. For a systematic introduction to the construction of Brownian motions in finite dimensions, we refer the readers to [164, pp. 49–66].

We begin with the following assumption.

(H) There exists a sequence of independent random variables {X_k}_{k=1}^∞ on the probability space (Ω, F, P), with X_k ∼ N(0, 1) for each k ∈ N.

We now present a probability space (Ω, F, P) satisfying the assumption (H). For i ∈ N, let (Ω_i, F_i, P_i) be a probability space on which there is a random variable Y_i ∼ N(0, 1). Let (Π_{i=1}^∞ Ω_i, Π_{i=1}^∞ F_i) be the product measurable space of {(Ω_i, F_i)}_{i=1}^∞, and let P be the product measure induced by {P_i}_{i=1}^∞. We define a sequence of random variables {X_k}_{k=1}^∞ on (Π_{i=1}^∞ Ω_i, Π_{i=1}^∞ F_i, P) as follows:


  X_k(ω₁, ···, ω_k, ···) = Y_k(ω_k),  for (ω₁, ···, ω_k, ···) ∈ Π_{i=1}^∞ Ω_i.

One can check that {X_k}_{k=1}^∞ are independent, and X_k ∼ N(0, 1).

We first construct a Brownian motion on [0, 1]. Let us recall the Haar functions {h_k(·)}_{k=1}^∞ on [0, 1]: h₁(·) ≡ 1 on [0, 1];

  h₂(t) = 1 if t ∈ [0, 1/2),  h₂(t) = −1 if t ∈ [1/2, 1];

while for n ∈ N and k ∈ I(n) ≜ [2ⁿ + 1, 2ⁿ⁺¹ + 1) ∩ N,

  h_k(t) = 2^{n/2}   if t ∈ [ (k − 2ⁿ − 1)/2ⁿ, (k − 2ⁿ − 1/2)/2ⁿ ),
  h_k(t) = −2^{n/2}  if t ∈ [ (k − 2ⁿ − 1/2)/2ⁿ, (k − 2ⁿ)/2ⁿ ),
  h_k(t) = 0         otherwise.

It is well-known that {h_k}_{k=1}^∞ forms a complete, orthonormal basis for the Hilbert space L²(0, 1) (with the standard inner product ⟨·,·⟩). Hence,

  ⟨f, g⟩ = Σ_{k=1}^∞ ⟨f, h_k⟩⟨g, h_k⟩,  f, g ∈ L²(0, 1).            (2.90)

Let

  s_k(t) = ∫₀ᵗ h_k(τ) dτ,  t ∈ [0, 1],  for k = 1, 2, ···.

Applying (2.90) to f(·) = χ_{[0,t]}(·) and g(·) = χ_{[0,s]}(·) with t, s ∈ [0, 1], we find

  Σ_{k=1}^∞ s_k(t) s_k(s) = min(t, s).                               (2.91)

For n ∈ N and the sequence {X_k}_{k=1}^∞ given in the assumption (H), put

  Wₙ(t, ω) = Σ_{k=1}^n s_k(t) X_k(ω),  (t, ω) ∈ [0, 1] × Ω.          (2.92)

One has the following result.

Lemma 2.116. The sequence {Wₙ(·)}_{n=1}^∞ (given by (2.92)) converges uniformly w.r.t. t ∈ [0, 1], a.s.


Proof: For any x > 0 and k ≥ 2,

  P({|X_k| > x}) = (2/√(2π)) ∫ₓ^∞ e^{−s²/2} ds ≤ (2/√(2π)) e^{−x²/4} ∫ₓ^∞ e^{−s²/4} ds ≤ 2e^{−x²/4}.   (2.93)

Let us choose x = 4√(ln k) in (2.93). Then

  P({|X_k| ≥ 4√(ln k)}) ≤ 2e^{−4 ln k} = 2/k⁴.

From Σ_{k=2}^∞ 1/k⁴ < ∞ and the Borel-Cantelli lemma (i.e., Theorem 2.4), we find that

  P( limsup_{k→∞} {|X_k| ≥ 4√(ln k)} ) = 0.

Thus, for a.e. ω ∈ Ω, there is a constant K = K(ω) such that

  |X_k(ω)| ≤ 4√(ln k),  for k ≥ K(ω).                                (2.94)

Put Yₙ ≜ max_{k∈I(n)} |X_k|. By (2.94), for a.e. ω ∈ Ω there is a constant C = C(ω) such that Yₙ(ω) ≤ C√(n+1) for all n. For any ε > 0, from the definition of s_k(·), one can find an m ∈ N large enough such that, for any t ∈ [0, 1],

  Σ_{k=2^m}^∞ |X_k||s_k(t)| ≤ Σ_{n=m}^∞ Yₙ max_{k∈I(n), t∈[0,1]} |s_k(t)| ≤ C Σ_{n=m}^∞ √(n+1)/2^{n/2−1} < ε.

This completes the proof of Lemma 2.116.

For t ∈ [0, 1], write

  W(t) = Σ_{k=1}^∞ s_k(t) X_k,  F_t = σ(W(s); s ∈ [0, t]).           (2.95)
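The Lévy construction is entirely explicit, so it is easy to code. The following sketch (assuming NumPy; the function names and grid size are our choices, not the book's) builds the Haar functions h_k, their primitives s_k (the Schauder functions), and the partial sums (2.92), and can be used to check the key identity (2.91) numerically.

```python
import numpy as np

GRID = 2**12   # dyadic grid on [0, 1]; exact for h_k up to level n = 11

def haar(k, t):
    """Haar function h_k on [0, 1] for k = 1, 2, ... (vectorized in t)."""
    t = np.asarray(t, dtype=float)
    if k == 1:
        return np.ones_like(t)
    if k == 2:
        return np.where(t < 0.5, 1.0, -1.0)
    n = int(np.floor(np.log2(k - 1)))      # k lies in I(n) = [2^n + 1, 2^{n+1} + 1)
    j = k - 2**n - 1                       # position of the support within level n
    left, mid, right = j / 2**n, (j + 0.5) / 2**n, (j + 1) / 2**n
    return np.where((t >= left) & (t < mid), 2.0**(n / 2),
                    np.where((t >= mid) & (t < right), -2.0**(n / 2), 0.0))

def schauder(k, t):
    """s_k(t) = integral of h_k over [0, t], via a Riemann sum on the dyadic grid."""
    tt = np.arange(GRID) / GRID
    cum = np.concatenate(([0.0], np.cumsum(haar(k, tt)) / GRID))
    idx = np.clip((np.asarray(t) * GRID).astype(int), 0, GRID)
    return cum[idx]

def levy_partial_sum(N, t, rng):
    """W_N(t) = sum_{k=1}^N s_k(t) X_k with X_k iid N(0, 1) -- formula (2.92)."""
    X = rng.standard_normal(N)
    return sum(X[k - 1] * schauder(k, t) for k in range(1, N + 1))
```

Summing s_k(t)s_k(s) over the first few levels already reproduces min(t, s) up to discretization error, which is the computation behind the covariance of W in Theorem 2.117.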

We have the following result.

Theorem 2.117. Let the assumption (H) hold. Then the real valued stochastic process W (given by (2.95)) is a Brownian motion on (Ω, F, {F_t}_{t∈[0,1]}, P).

Proof: Clearly, W(0) = 0 a.s. By Lemma 2.116, it is easy to see that W(·) is continuous on [0, 1], a.s.

For any s, t ∈ [0, 1] satisfying 0 ≤ s ≤ t ≤ 1, in view of (2.91), we have

  E(e^{iξ(W(t)−W(s))}) = E(e^{iξ Σ_{k=1}^∞ [s_k(t)−s_k(s)]X_k}) = Π_{k=1}^∞ E(e^{iξ[s_k(t)−s_k(s)]X_k})
  = Π_{k=1}^∞ e^{−(ξ²/2)[s_k(t)−s_k(s)]²} = e^{−(ξ²/2) Σ_{k=1}^∞ [s_k(t)−s_k(s)]²}
  = e^{−(ξ²/2) Σ_{k=1}^∞ [s_k(t)² − 2s_k(t)s_k(s) + s_k(s)²]} = e^{−(ξ²/2)(t−s)},  ∀ ξ ∈ R.


Hence, by Definition 2.39, W(t) − W(s) ∼ N(0, t − s).

We now prove that for any m ∈ N and any {tᵢ}_{i=0}^m satisfying 0 = t₀ < t₁ < ··· < t_m ≤ 1, the random variables W(t₁), W(t₂) − W(t₁), ···, W(t_m) − W(t_{m−1}) are independent. For simplicity, we only consider the case that m = 2. Using (2.91) again, we obtain that

  E(e^{i[ξ₁W(t₁)+ξ₂(W(t₂)−W(t₁))]}) = E(e^{i[(ξ₁−ξ₂)W(t₁)+ξ₂W(t₂)]})
  = E(e^{i[(ξ₁−ξ₂) Σ_{k=1}^∞ s_k(t₁)X_k + ξ₂ Σ_{k=1}^∞ s_k(t₂)X_k]}) = Π_{k=1}^∞ E(e^{i[(ξ₁−ξ₂)s_k(t₁)+ξ₂s_k(t₂)]X_k})
  = Π_{k=1}^∞ e^{−(1/2)[(ξ₁−ξ₂)s_k(t₁)+ξ₂s_k(t₂)]²} = e^{−(1/2)[(ξ₁−ξ₂)²t₁ + 2(ξ₁−ξ₂)ξ₂t₁ + ξ₂²t₂]}
  = e^{−(1/2)[ξ₁²t₁ + ξ₂²(t₂−t₁)]},  ∀ ξ₁, ξ₂ ∈ R.

This, together with Definition 2.39 and Theorem 2.36, implies that W(t₁) and W(t₂) − W(t₁) are independent. Consequently, W(t) − W(s) is independent of F_s.

One can patch together copies of the Brownian motion (on [0, 1]) constructed in Theorem 2.117 to get a Brownian motion on [0, ∞).

2.9.3 Vector-Valued Brownian Motions

Let V be a separable, real Hilbert space. In this subsection, we address V-valued Brownian motions.

Let Q ∈ L₁(V) be positive definite. Then there is an orthonormal basis {e_j}_{j=1}^∞ of V and a sequence {λ_j}_{j=1}^∞ of positive numbers satisfying Σ_{j=1}^∞ λ_j < ∞ and Qe_j = λ_j e_j for j = 1, 2, ··· (hence λ_j and e_j are, respectively, an eigenvalue and an eigenvector of Q). When V is an m-dimensional Hilbert space (for some m ∈ N), the sequence {λ_j}_{j=1}^∞ reduces to {λ_j}_{j=1}^m.

Similar to Definition 2.114, one introduces the following notion.

Definition 2.118. A continuous, V-valued, F-adapted process {W(t)}_{t≥0} is called a (standard) Q-Brownian motion if
1) P({W(0) = 0}) = 1; and
2) For any s, t ∈ [0, ∞) satisfying 0 ≤ s < t < ∞, the random variable W(t) − W(s) is independent of F_s, and W(t) − W(s) ∼ N(0, (t − s)Q).

In the sequel, we call ∆ₙ ≜ {t₀, t₁, ···, tₙ} (for n ∈ N) a partition of [0, T] if 0 = t₀ < t₁ < ··· < t_{n−1} < tₙ = T, and write |∆ₙ| ≜ max_{0≤i≤n−1} {t_{i+1} − tᵢ}. The following result reveals the relationship between a Q-Brownian motion and its finite dimensional projections.


Proposition 2.119. Assume that {W(t)}_{t≥0} is a Q-Brownian motion. Then, for any t ≥ 0,

  W(t) = Σ_{j=1}^∞ √(λ_j) W_j(t) e_j,                                (2.96)

where W_j(t) = (1/√(λ_j)) ⟨W(t), e_j⟩_V. Furthermore, W₁(·), W₂(·), ··· are independent real valued Brownian motions, and the series in (2.96) converges in L²_F(Ω; C([0,T]; V)) for each T > 0.

Proof: We only prove that W₁(·), W₂(·), ··· are independent, because the other conclusions in Proposition 2.119 can be easily verified.

Assume that n ∈ N and j₁, ···, jₙ ∈ N satisfy jᵢ ≠ j_k for i, k ∈ {1, 2, ···, n} with i ≠ k. Let ∆_ℓ = {t₀, t₁, ···, t_ℓ}, ℓ ∈ N, be a partition of [0, T]. By Definition 2.66, it remains to prove that

  σ(W_{j₁}(t₁), ···, W_{j₁}(t_ℓ)), ···, σ(W_{jₙ}(t₁), ···, W_{jₙ}(t_ℓ)) are independent.   (2.97)

We use an induction argument (w.r.t. ℓ). For the case ℓ = 1, it is clear that (W_{j₁}(t₁), ···, W_{jₙ}(t₁)) is an n-dimensional Gaussian random variable. Further,

  E(W_{jᵢ}(t₁) W_{j_k}(t₁)) = (1/√(λ_{jᵢ}λ_{j_k})) E( ⟨W(t₁), e_{jᵢ}⟩_V ⟨W(t₁), e_{j_k}⟩_V )
  = (1/√(λ_{jᵢ}λ_{j_k})) t₁ (Qe_{jᵢ}, e_{j_k})_V = √(λ_{jᵢ}/λ_{j_k}) t₁ δ_{jᵢj_k}.

Hence, the covariance matrix of (W_{j₁}(t₁), ···, W_{jₙ}(t₁)) is diagonal. By Theorem 2.36, it follows that W_{j₁}(t₁), ···, W_{jₙ}(t₁) are independent.

We now assume that (2.97) holds for ℓ and prove that it also holds for ℓ + 1. Similar to the above argument, one can show that W_{j₁}(t_{ℓ+1}) − W_{j₁}(t_ℓ), ···, W_{jₙ}(t_{ℓ+1}) − W_{jₙ}(t_ℓ) are independent. Further, for 1 ≤ i ≤ n, it holds that

  σ(W_{jᵢ}(t₁), ···, W_{jᵢ}(t_ℓ), W_{jᵢ}(t_{ℓ+1})) = σ(W_{jᵢ}(t₁), ···, W_{jᵢ}(t_ℓ), W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ)).

Let B_{i,k} ∈ B(R), 1 ≤ i ≤ n, 1 ≤ k ≤ ℓ + 1. Utilizing the fact that σ(W(s); s ∈ [0, t_ℓ]) and σ(W(t_{ℓ+1}) − W(t_ℓ)) are independent, we obtain that

  P( ∩_{i=1}^n {W_{jᵢ}(t₁) ∈ B_{i,1}, ···, W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ}, W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ+1}} )
  = P( [ ∩_{i=1}^n ∩_{k=1}^ℓ {W_{jᵢ}(t_k) ∈ B_{i,k}} ] ∩ [ ∩_{i=1}^n {W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ+1}} ] )
  = P( ∩_{i=1}^n ∩_{k=1}^ℓ {W_{jᵢ}(t_k) ∈ B_{i,k}} ) P( ∩_{i=1}^n {W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ+1}} )
  = [ Π_{i=1}^n P( ∩_{k=1}^ℓ {W_{jᵢ}(t_k) ∈ B_{i,k}} ) ][ Π_{i=1}^n P( {W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ+1}} ) ]
  = Π_{i=1}^n P( [ ∩_{k=1}^ℓ {W_{jᵢ}(t_k) ∈ B_{i,k}} ] ∩ {W_{jᵢ}(t_{ℓ+1}) − W_{jᵢ}(t_ℓ) ∈ B_{i,ℓ+1}} ).

Hence the independence of W₁(·), W₂(·), ··· follows.

In the rest of this subsection, we consider another kind of vector-valued Brownian motion. Let {w_j(·)}_{j=1}^∞ be a sequence of independent real valued, standard Brownian motions. We need the following notion.

Definition 2.120. The formal series (in V)

  W(t) = Σ_{j=1}^∞ w_j(t) e_j,  t ≥ 0,                               (2.98)

is called a (V-valued) cylindrical Brownian motion.

Clearly, the series in (2.98) does not converge in V. Nevertheless, this series does converge in a larger space, as we shall see below.

Let H be another separable Hilbert space. An operator F ∈ L(V; H) is said to be a Hilbert-Schmidt operator if Σ_{j=1}^∞ |Fe_j|²_H < ∞. It is easy to show that the number Σ_{j=1}^∞ |Fe_j|²_H is independent of the choice of the orthonormal basis {e_j}_{j=1}^∞ of V. Denote by L₂(V; H) the space of all Hilbert-Schmidt operators from V into H. When V and H are clear from the context, we simply write L₂⁰ for L₂(V; H). One can show that L₂⁰, equipped with the inner product

  ⟨F, G⟩_{L₂⁰} = Σ_{j=1}^∞ ⟨Fe_j, Ge_j⟩_H,  ∀ F, G ∈ L₂⁰,            (2.99)

is a separable Hilbert space. For any v ∈ V and h ∈ H, it is easy to show that the tensor product v ⊗ h ∈ L₂⁰ (recall (2.18) for the definition of v ⊗ h). Generally, for any bounded bilinear functional ψ(·,·) on H, one can define an indefinite inner product on L₂⁰ as follows:

  ψ(F, G)_{L₂⁰} = Σ_{j=1}^∞ ψ(Fe_j, Ge_j),  ∀ F, G ∈ L₂⁰.            (2.100)

It is easy to see that

  |ψ(F, G)_{L₂⁰}| ≤ ∥ψ∥ |F|_{L₂⁰} |G|_{L₂⁰},                        (2.101)

where

  ∥ψ∥ = sup{ |ψ(x, y)| : x, y ∈ H, |x|_H ≤ 1, |y|_H ≤ 1 }.           (2.102)
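The basis-independence of the sum Σ_j |Fe_j|²_H behind the inner product (2.99) is easy to see in finite dimensions, where the Hilbert-Schmidt norm is the Frobenius norm of the matrix of F. A minimal sketch (assuming NumPy; the matrix and rotation angle are arbitrary choices of ours):

```python
import numpy as np

# The Hilbert-Schmidt sum does not depend on which orthonormal basis is used.
F = np.array([[1.0, 2.0], [0.0, 3.0]])                     # an operator on R^2
hs_standard = sum(np.sum((F @ e)**2) for e in np.eye(2))   # sum_j |F e_j|^2, standard basis
theta = 0.3                                                # any rotation gives another ONB
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
hs_rotated = sum(np.sum((F @ U[:, j])**2) for j in range(2))
```

Both sums equal tr(F*F), which is why the value is independent of the basis.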

For more details on Hilbert-Schmidt operators, we refer to [299], for example.

Fix any sequence {μ_j}_{j=1}^∞ of positive numbers such that Σ_{j=1}^∞ μ_j² < ∞. Let V₁ be the completion of V w.r.t. the following norm:

  |f|_{V₁} = √( Σ_{j=1}^∞ μ_j² |⟨f, e_j⟩_V|² ),  ∀ f ∈ V.

Then V₁ is a separable Hilbert space, V ⊂ V₁, and the embedding map J : V → V₁ is a Hilbert-Schmidt operator. Let Q₁ = JJ*. Then Q₁ ∈ L₁(V₁) is positive definite and symmetric.

Proposition 2.121. The series

  W(t) = Σ_{j=1}^∞ w_j(t) Je_j,  t ≥ 0,                              (2.103)

converges in L²_F(Ω; C([0,T]; V₁)) for any T > 0 and defines a V₁-valued, standard Q₁-Brownian motion.

Proof: We first prove that the series in (2.103) converges. Let Wₙ(·) = Σ_{j=1}^n w_j(·) Je_j for n ∈ N. Then Wₙ(·) is a continuous V₁-valued martingale. By Theorem 2.109, for m ∈ N with n < m, we have that

  E sup_{t∈[0,T]} |W_m(t) − Wₙ(t)|²_{V₁} ≤ 4E| Σ_{j=n+1}^m w_j(T) Je_j |²_{V₁} = 4T Σ_{j=n+1}^m |Je_j|²_{V₁}.

This, together with the fact that |J|²_{L₂(V;V₁)} = Σ_{j=1}^∞ |Je_j|²_{V₁} < ∞, implies that {Wₙ(·)}_{n=1}^∞ is a Cauchy sequence in L²_F(Ω; C([0,T]; V₁)). Hence its limit W(·) ∈ L²_F(Ω; C([0,T]; V₁)) for any T > 0.

Next, for any f ∈ V₁, it is easy to see that ⟨W(t) − W(s), f⟩_{V₁} is a Gaussian random variable for any s, t ∈ [0,T] with s < t. Further,


  E⟨W(t) − W(s), f⟩_{V₁} = E⟨ Σ_{j=1}^∞ [w_j(t) − w_j(s)] Je_j, f ⟩_{V₁} = 0

and

  E( ⟨W(t) − W(s), f₁⟩_{V₁} ⟨W(t) − W(s), f₂⟩_{V₁} )
  = Σ_{j=1}^∞ (t − s) ⟨Je_j, f₁⟩_{V₁} ⟨Je_j, f₂⟩_{V₁} = (t − s) Σ_{j=1}^∞ ⟨e_j, J*f₁⟩_V ⟨e_j, J*f₂⟩_V
  = (t − s) ⟨J*f₁, J*f₂⟩_V = (t − s) ⟨JJ*f₁, f₂⟩_{V₁} = (t − s) ⟨Q₁f₁, f₂⟩_{V₁}.

Hence, W(t) − W(s) ∼ N(0, (t − s)Q₁). Also, it is easy to show that the random variable W(t) − W(s) is independent of F_s. Thus, W(·) is a V₁-valued, standard Q₁-Brownian motion.
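A truncation of the expansion (2.96) gives a simple way to sample Q-Brownian motions. The following Monte Carlo sketch (assuming NumPy; the eigenvalues λ_j = j⁻² and all sample sizes are hypothetical choices of ours) identifies V with R^J via the basis {e_j}, simulates independent scalar Brownian motions W_j, and checks that E|W(T)|² ≈ T·tr Q = T·Σ_j λ_j.

```python
import numpy as np

def q_brownian_paths(lam, n_paths, T=1.0, n_steps=100, seed=42):
    """Sample paths of the truncated expansion (2.96):
    W(t) = sum_j sqrt(lam_j) W_j(t) e_j, with e_j the standard basis of R^J
    and W_j independent scalar Brownian motions. Shape: (n_paths, n_steps+1, J)."""
    rng = np.random.default_rng(seed)
    J, dt = len(lam), T / n_steps
    dW = rng.standard_normal((n_paths, n_steps, J)) * np.sqrt(dt)
    W = np.concatenate([np.zeros((n_paths, 1, J)), np.cumsum(dW, axis=1)], axis=1)
    return W * np.sqrt(np.asarray(lam))    # scale coordinate j by sqrt(lam_j)

lam = 1.0 / np.arange(1, 21)**2            # summable "eigenvalues" of Q (our choice)
paths = q_brownian_paths(lam, n_paths=2000)
mean_sq = np.mean(np.sum(paths[:, -1, :]**2, axis=1))   # estimates E|W(T)|^2 = T tr Q
```

The summability Σ_j λ_j < ∞ is exactly what makes the truncation error controllable; for a cylindrical Brownian motion (λ_j ≡ 1) this sum diverges, which is the numerical face of the failure of (2.98) to converge in V.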

2.10 Stochastic Integrals

Let T > 0, let (Ω, F, F, P) (with F = {F_t}_{t∈[0,T]}) be a fixed filtered probability space satisfying the usual condition, and denote by F the progressive σ-field w.r.t. F. In this section, we shall define the integral

  ∫₀ᵀ X(t) dW(t)                                                    (2.104)

of a stochastic process X(·) w.r.t. a Brownian motion W(·). Such an integral will play an essential role in the sequel.

Note that if, for ω ∈ Ω, the map t ↦ W(t, ω) were of bounded variation, then a natural definition of (2.104) would be a Lebesgue-Stieltjes type integral, simply regarding ω as a parameter. Unfortunately, as shown in Proposition 2.115, the map t ↦ W(t, ω) is not of bounded variation for a.e. ω ∈ Ω, even in the simplest case of real valued Brownian motions. Thus, the integral ∫₀ᵀ f(s, ω) dW(s, ω) cannot be defined pointwise. However, one can define the integral for a large class of processes by means of the martingale property of Brownian motion. This was first done by K. Itô ([155]), and the resulting object is now known as Itô's integral.

2.10.1 Itô's Integrals w.r.t. Brownian Motions in Finite Dimensions

To begin with, we assume that W(·) is a one dimensional standard Brownian motion. We shall define Itô's integral as a mapping f ∈ L²_F(0,T) ↦ I(f) ∈ M²_c[0,T]. As in the construction of the Lebesgue integral, we will first define Itô's integral for simple integrands.


For the moment, assume that f ∈ L_{S,F}(0,T) takes the form

  f(t) = Σ_{j=0}^n χ_{[t_j, t_{j+1})}(t) f_j,  t ∈ [0,T],             (2.105)

where n ∈ N, 0 = t₀ < t₁ < ··· < t_{n+1} = T, f_j is F_{t_j}-measurable, and sup{|f_j(ω)| : j ∈ {0, ···, n}, ω ∈ Ω} < ∞. We set

  I(f)(t, ω) = Σ_{j=0}^n f_j(ω) [W(t ∧ t_{j+1}, ω) − W(t ∧ t_j, ω)].  (2.106)

We have the following key preliminary result.

Proposition 2.122. Let f ∈ L_{S,F}(0,T). Then
1) I(f) ∈ M²_c[0,T];
2) For any t ∈ [0,T], E[I(f)(t)] = 0;
3) For any t ∈ (0,T], |I(f)|_{M²_c[0,t]} = |f|_{L²_F(0,t)}; and
4) (I(f)(t))² − ∫₀ᵗ f²(s) ds is a martingale.

Proof: Suppose that f takes the form (2.105). From (2.106), it is clear that I(f) is continuous. Further, for 0 ≤ s ≤ t ≤ T,

  E(I(f)(t) | F_s) = E( I(f)(s) + I(f)(t) − I(f)(s) | F_s )
  = I(f)(s) + E( Σ_{j=0}^n f_j[W(t ∧ t_{j+1}) − W(t ∧ t_j)] − Σ_{j=0}^n f_j[W(s ∧ t_{j+1}) − W(s ∧ t_j)] | F_s )
  = I(f)(s),  a.s.

Hence, conclusion 1) holds.

For each j ∈ {0, 1, ···, n},

  E{ f_j[W(t ∧ t_{j+1}) − W(t ∧ t_j)] } = E{ E( f_j[W(t ∧ t_{j+1}) − W(t ∧ t_j)] | F_{t_j} ) }
  = E{ f_j E( W(t ∧ t_{j+1}) − W(t ∧ t_j) | F_{t_j} ) } = E{ f_j E[W(t ∧ t_{j+1}) − W(t ∧ t_j)] } = 0.

Hence, E[I(f)(t)] = 0, which proves the second conclusion.

Clearly,

  |I(f)(t)|² = Σ_{i,j=0}^n f_i f_j [W(t ∧ t_{i+1}) − W(t ∧ t_i)][W(t ∧ t_{j+1}) − W(t ∧ t_j)].

For i < j,

  E{ f_i f_j [W(t ∧ t_{i+1}) − W(t ∧ t_i)][W(t ∧ t_{j+1}) − W(t ∧ t_j)] }
  = E{ E( f_i f_j [W(t ∧ t_{i+1}) − W(t ∧ t_i)][W(t ∧ t_{j+1}) − W(t ∧ t_j)] | F_{t_j} ) }
  = E{ f_i f_j [W(t ∧ t_{i+1}) − W(t ∧ t_i)] E( W(t ∧ t_{j+1}) − W(t ∧ t_j) | F_{t_j} ) } = 0.

For i = j,

  E{ f_j² [W(t ∧ t_{j+1}) − W(t ∧ t_j)]² } = E{ E( f_j² [W(t ∧ t_{j+1}) − W(t ∧ t_j)]² | F_{t_j} ) }
  = E{ f_j² E( [W(t ∧ t_{j+1}) − W(t ∧ t_j)]² | F_{t_j} ) } = (t ∧ t_{j+1} − t ∧ t_j) E f_j².

Hence, conclusion 3) holds.

For any 0 ≤ s ≤ t,

  E( [I(f)(t)]² − ∫₀ᵗ f²(r) dr | F_s )
  = E( [I(f)(t) − I(f)(s) + I(f)(s)]² − ∫₀ᵗ f²(r) dr | F_s )
  = (I(f)(s))² − ∫₀ˢ f²(r) dr
    + E( [I(f)(t) − I(f)(s)]² + 2 I(f)(s)[I(f)(t) − I(f)(s)] | F_s ) − E( ∫ₛᵗ f²(r) dr | F_s )
  = (I(f)(s))² − ∫₀ˢ f²(r) dr,

since E([I(f)(t) − I(f)(s)]² | F_s) = E(∫ₛᵗ f²(r) dr | F_s) and E(I(f)(s)[I(f)(t) − I(f)(s)] | F_s) = 0. This proves the last conclusion.

Next, for any given f ∈ L²_F(0,T), by Lemma 2.75, we can find a sequence {fₙ}_{n=1}^∞ ⊂ L_{S,F}(0,T) such that |fₙ − f|_{L²_F(0,T)} → 0 as n → ∞. Since |I(fₙ) − I(f_m)|_{M²_c[0,T]} = |fₙ − f_m|_{L²_F(0,T)} for m, n ∈ N, one deduces that {I(fₙ)}_{n=1}^∞ is a Cauchy sequence in M²_c[0,T], and therefore it converges to a unique element X ∈ M²_c[0,T]. Clearly, the process X is determined uniquely by f, and it is independent of the particular choice of {fₙ}_{n=1}^∞. This process is called Itô's integral of f ∈ L²_F(0,T) (w.r.t. the Brownian motion W(·)). We shall denote it by

  ∫₀ᵗ f(s, ω) dW(s, ω),  or simply ∫₀ᵗ f(s) dW(s),  or even ∫₀ᵗ f dW.

It is easy to show the following result.
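The definition (2.106) for simple integrands translates directly into code. The following Monte Carlo sketch (assuming NumPy; the choice of integrand f_j = sign(W(t_j)) and all sample sizes are ours, not the book's) computes I(f)(T) and checks conclusions 2) and 3) of Proposition 2.122: the integral has mean zero, and E|I(f)(T)|² equals E∫₀ᵀ f² dt.

```python
import numpy as np

def ito_integral_simple(fvals, W):
    """I(f)(T) for a simple integrand, per (2.106): f is frozen at each t_j,
    so the integral is sum_j f_j (W(t_{j+1}) - W(t_j))."""
    return sum(fvals[j] * (W[:, j + 1] - W[:, j]) for j in range(len(fvals)))

rng = np.random.default_rng(7)
n_paths, n_int = 100_000, 10
tgrid = np.linspace(0.0, 1.0, n_int + 1)
dW = rng.standard_normal((n_paths, n_int)) * np.sqrt(np.diff(tgrid))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
# an adapted, bounded simple integrand (our choice): f_j = sign(W(t_j))
fvals = [np.sign(W[:, j]) for j in range(n_int)]
I = ito_integral_simple(fvals, W)
# E I(f)(T) = 0, and by the isometry E I(f)^2 = E int_0^1 f^2 dt = 0.9 here
# (f_0 = sign(W(0)) = 0 on [0, 0.1), and f_j^2 = 1 a.s. for j >= 1)
```

Note that freezing f at the left endpoint t_j is what makes f_j independent of the increment W(t_{j+1}) − W(t_j); evaluating at the right endpoint instead would destroy the martingale property.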


Theorem 2.123. For each deterministic f ∈ L²(a,b) (i.e., f does not depend on ω ∈ Ω) with 0 ≤ a < b ≤ T, the Itô integral ∫_a^b f(t) dW(t) is a Gaussian random variable with mean 0 and variance ∫_a^b |f(t)|² dt.

Now, fix m₁, m₂ ∈ N. For an f = (f^{ij})_{1≤i≤m₁, 1≤j≤m₂} ∈ L²_F(0,T; R^{m₁×m₂}) and an R^{m₂}-valued Brownian motion W(·) = (W¹(·), ···, W^{m₂}(·))ᵀ, we define

  ∫₀ᵗ f(s) dW(s) = ( Σ_{j=1}^{m₂} ∫₀ᵗ f^{1j}(s) dW^j(s), ···, Σ_{j=1}^{m₂} ∫₀ᵗ f^{m₁j}(s) dW^j(s) )ᵀ.

Further, for 0 < s < t, we define

  ∫ₛᵗ f(r) dW(r) = ∫₀ᵗ f(r) dW(r) − ∫₀ˢ f(r) dW(r).

Itˆ o’s integral (w.r.t. the Rm2 -valued Brownian motion W (·)) has the properties: Theorem 2.124. Let f, g ∈ L2F (0, T ; Rm1 ×m2 ), a, b ∈ R and T ≥ t > s ≥ 0. Then 1) ∫ t ∫ t ∫ t (af + bg)dW = a f dW + b gdW, a.s.; 0

0

2) E

(∫

t

0

) f dW Fs = 0,

a.s.;

s

E

3) (⟨∫



t

f dW, s



t

gdW s

R m1

) ) (∫ t ⟨f (r, ·), g(r, ·)⟩Rm1 ×m2dr Fs , a.s.; Fs = E

4) The stochastic process

s

∫· 0

f dW ∈ M2c ([0, T ]; Rm1 ).

Proof : It is easy to show that the conclusions in Theorem 2.124 hold for simple processes. The proof for the general f and g is based on these facts and the definition of stochastic integral, and hence we omit it here. The following result shows that Itˆo’s integral enjoys some local property though it is not defined pointwise w.r.t. the sample point. Lemma 2.125. Let f, g ∈ L2F (0, T ; Rm1 ×m2 ) and W (·) be an Rm2 -valued Brownian motion. Let Ω1 = {ω ∈ Ω | f (t, ω) = g(t, ω) for a.e. t ∈ [0, T ]}. Then, ∫ T ∫ T f (t)dW (t) = g(t)dW (t), a.e. ω ∈ Ω1 . 0

0


Proof: Without loss of generality, we may assume that g(t, ω) = 0 for a.e. (t, ω) ∈ [0,T] × Ω. Let

  τ(ω) ≜ inf{ t ∈ [0,T) : ∫₀ᵗ |f(s, ω)|² ds ≠ 0 }.

Here we agree that inf ∅ = T. Then ∫₀ᵀ χ_{[0,τ]}(t)|f(t)|² dt = 0, a.s. For any t ∈ (0,T],

  {τ < t} = ∪_{r∈Q∩(0,t)} { ∫₀ʳ |f(s)|² ds > 0 } ∈ F_t.

For t = 0, we have {τ ≤ 0} = ∅ ∈ F₀. Hence, τ is a stopping time. Define a random variable

  Y(τ) = ∫₀^τ f(s) dW(s) = ∫₀ᵀ χ_{[0,τ]}(s) f(s) dW(s).

Since χ_{[0,τ]} f ∈ L²_F(0,T), we see that Y(τ) is well defined and

  E|Y(τ)|² = E ∫₀ᵀ χ_{[0,τ]}(t)|f(t)|² dt = 0,

which implies that Y(τ) = 0, a.s.

If ω ∈ Ω₁, then τ(ω) = T, and hence Y(τ(ω)) = ∫₀ᵀ f(s) dW(s) for such ω. This shows that ∫₀ᵀ f(s) dW(s) = 0 for a.e. ω ∈ Ω₁.

The following result will play a crucial role later.

Lemma 2.126. Let f ∈ L²_F(0,T; R^{m₁×m₂}) and let W(·) be an R^{m₂}-valued Brownian motion. Then, for any positive numbers ε and λ,

  P({ |∫₀ᵀ f(t) dW(t)|_{R^{m₁}} ≥ ε }) ≤ λ/ε² + P({ ∫₀ᵀ |f(t)|²_{R^{m₁×m₂}} dt > λ }).   (2.107)

Proof: For simplicity, we only consider the case that m₁ = m₂ = 1. Define a stochastic process f_λ(t, ω) by

  f_λ(t, ω) = f(t, ω) if ∫₀ᵗ |f(s, ω)|² ds ≤ λ,  f_λ(t, ω) = 0 otherwise.   (2.108)

Clearly,

  { |∫₀ᵀ f(t) dW(t)| ≥ ε } ⊂ { |∫₀ᵀ f_λ(t) dW(t)| ≥ ε } ∪ { ∫₀ᵀ f(t) dW(t) ≠ ∫₀ᵀ f_λ(t) dW(t) }.   (2.109)

Further, by Lemma 2.125 and the definition of f_λ(·), we see that

  { ∫₀ᵀ f(t) dW(t) ≠ ∫₀ᵀ f_λ(t) dW(t) } ⊂ { ∫₀ᵀ |f(t)|² dt > λ }.   (2.110)

From (2.109) and (2.110), we see that

  P({ |∫₀ᵀ f(t) dW(t)| ≥ ε }) ≤ P({ |∫₀ᵀ f_λ(t) dW(t)| ≥ ε }) + P({ ∫₀ᵀ |f(t)|² dt > λ }).   (2.111)

By (2.108), we have ∫₀ᵀ |f_λ(t)|² dt ≤ λ, a.s. Hence, E∫₀ᵀ |f_λ(t)|² dt ≤ λ. This, together with Chebyshev's inequality (in Theorem 2.22) and (2.111), implies that

  P({ |∫₀ᵀ f(t) dW(t)| ≥ ε }) ≤ (1/ε²) E|∫₀ᵀ f_λ(t) dW(t)|² + P({ ∫₀ᵀ |f(t)|² dt > λ })
  = (1/ε²) E∫₀ᵀ |f_λ(t)|² dt + P({ ∫₀ᵀ |f(t)|² dt > λ }) ≤ λ/ε² + P({ ∫₀ᵀ |f(t)|² dt > λ }),

which gives (2.107). This completes the proof of Lemma 2.126.

The above Itô integral ∫₀ᵀ f(t) dW(t) is defined for F-adapted stochastic processes f(·) satisfying the condition E∫₀ᵀ |f(t)|²_{R^{m₁×m₂}} dt < ∞. Next, we will

(2.111) ∫T ∫T By (2.108), we have 0 |fλ (t)|2 dt ≤ λ, a.s. Hence, E 0 |fλ (t)|2 dt ≤ λ. This, together with Chebyshev’s inequality (in Theorem 2.22) and (2.111), implies that ∫ 2 { ∫ T } {∫ T } 1 T P f (t)dW (t) ≥ ε ≤ 2 E fλ (t)dW (t) + P |f (t)|2 dt > λ ε 0 0 0 ∫ T ∫ ∫ T T { } { } 1 λ = 2E |fλ (t)|2 dt + P |f (t)|2 dt > λ ≤ 2 + P |f (t)|2 dt > λ , ε ε 0 0 0 which gives (2.107). This completes the proof of Lemma 2.126. ∫T The above Itˆo integral 0 f (t)dW (t) is defined for F-adapted stochastic ∫T processes f (t) satisfying the condition E 0 |f (t)|2Rm1 ×m2 dt < ∞. Next, we will ∫T extend the Itˆo integral 0 f (t)dW (t) to more general F-adapted stochastic processes f (·). Let Y be a Banach space. For any p ∈ [1, ∞), denote by Lp,loc (0, T ; Y ) the F ∫T set of all Y -valued, F-adapted stochastic processes f (·) satisfying 0 |f (t)|pY dt < ∞, a.s. ∫T Now, we extend the Itˆo integral 0 f (t)dW (t) to the case that the integrand f (·) ∈ L2,loc (0, T ; Rm1 ×m2 ). For simplicity, we consider only the case F that m1 = m2 = 1, and simply denote Lp,loc (0, T ; R) by Lp,loc (0, T ). The F F reasons for this generalization are as follows. First, generally speaking, it is very hard to check whether an F-adapted stochastic process f (·) belongs to L2F (0, T ) and it is easier to see whether f ∈ L2,loc (0, T ). For example, when F f (·) is a continuous, F-adapted process, then it belongs to L2,loc (0, T ). SecF ond, we need to consider the integral of processes f ∈ LrF (Ω; L2 (0, T )) for r ∈ [1, 2) in the study of stochastic differential/evolution equations. We need the following technical result. Lemma 2.127. Let f ∈ L2,loc (0, T ). Then there exists a sequence {fn }∞ n=1 ⊂ F 2 LF (0, T ) such that ∫

T

|fn (t) − f (t)|2 dt = 0,

lim

n→∞

0

a.s.

(2.112)

100

2 Some Preliminaries in Stochastic Calculus

Proof: For each $n\in\mathbb N$, let us set
$$f_n(t,\omega)=\begin{cases} f(t,\omega), & \text{if } \int_0^t|f(s,\omega)|^2ds\leq n,\\ 0, & \text{otherwise.}\end{cases} \qquad(2.113)$$
Clearly, $f_n$ is F-adapted and $\int_0^T|f_n(t,\omega)|^2dt\leq n$, a.s., which implies that $E\int_0^T|f_n(t)|^2dt\leq n$. Hence, $f_n\in L^2_{\mathbb F}(0,T)$.

Fix an $\omega\in\Omega$. When $n>\int_0^T|f(t,\omega)|^2dt$, by the definition of $f_n$, we have $f_n(t,\omega)=f(t,\omega)$ for all $t\in[0,T]$. Thus, (2.112) holds.

Now, for any given $f\in L^{2,loc}_{\mathbb F}(0,T)$, by Lemma 2.127, one can find a sequence $\{f_n\}_{n=1}^\infty\subset L^2_{\mathbb F}(0,T)$ such that (2.112) holds. Applying Lemma 2.126 to $f=f_n-f_m$ (for $m,n\in\mathbb N$) with $\varepsilon>0$ and $\lambda=\frac{\varepsilon^3}{2}$, we obtain that
$$P\Big\{\Big|\int_0^T f_n(t)dW(t)-\int_0^T f_m(t)dW(t)\Big|>\varepsilon\Big\}\leq\frac{\varepsilon}{2}+P\Big\{\int_0^T|f_n(t)-f_m(t)|^2dt>\frac{\varepsilon^3}{2}\Big\}. \qquad(2.114)$$
Hence, $\{\int_0^T f_n(t)dW(t)\}_{n=1}^\infty$ is a Cauchy sequence in probability. This indicates that the sequence $\{\int_0^T f_n(t)dW(t)\}_{n=1}^\infty$ converges in probability. Therefore, we can define
$$\int_0^T f(t)dW(t)=\text{P-}\lim_{n\to\infty}\int_0^T f_n(t)dW(t). \qquad(2.115)$$
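The truncation (2.113) and the limit (2.115) can be illustrated numerically. The following is a minimal discretized sketch (not from the book; the grid size, seed and the choice of integrand are assumptions): for a path-continuous adapted integrand, once the truncation level $n$ exceeds the pathwise quadratic integral, the truncated process coincides with the original one, so the truncated Itô sums stabilize at the limit.

```python
import numpy as np

# Discrete sketch of Lemma 2.127 / (2.113)-(2.115): truncate f at level n
# and integrate; for n large the truncation is inactive on a given path.
rng = np.random.default_rng(0)
T, m = 1.0, 2000
dt = T / m
dW = rng.normal(0.0, np.sqrt(dt), m)
W = np.concatenate([[0.0], np.cumsum(dW)])

f = W[:-1]                      # continuous adapted integrand, so in L^{2,loc}
running = np.cumsum(f**2) * dt  # pathwise \int_0^t |f|^2 ds on the grid

def f_n(n):
    # (2.113): keep f(t) while \int_0^t |f|^2 ds <= n, zero afterwards
    return np.where(running <= n, f, 0.0)

def ito_sum(g):
    # left-point (Ito) Riemann sum approximating \int_0^T g dW
    return float(np.sum(g * dW))

full = ito_sum(f)
n_star = running[-1]                       # pathwise \int_0^T |f|^2 dt
stabilized = ito_sum(f_n(int(n_star) + 1)) # truncation inactive: equals full
```

On each fixed path the sequence of truncated integrals is eventually constant, which is the pathwise mechanism behind the convergence in probability in (2.115).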

One can easily see that $\int_0^T f(t)dW(t)$ is independent of the choice of $\{f_n\}_{n=1}^\infty$. Hence, the integral "$\int_0^T f(t)dW(t)$" is well-defined. If $f\in L^p_{\mathbb F}(\Omega;L^2(0,T))$ for $p\geq1$, then one can show that $\int_0^T f(t)dW(t)\in L^p(\Omega)$.

Remark 2.128. If $f\in L^2_{\mathbb F}(0,T)$, then we can take $f_n=f$ for all $n\in\mathbb N$. In this case, the stochastic integral defined by (2.115) is obviously the same as the one defined for $f\in L^2_{\mathbb F}(0,T)$. This shows that the new stochastic integral defined by (2.115) reduces to the old stochastic integral when $f\in L^2_{\mathbb F}(0,T)$.

We list some basic properties of the Itô integral of $L^{2,loc}_{\mathbb F}(0,T;\mathbb R^{m_1\times m_2})$-processes as follows.

Theorem 2.129. Let $f,g\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^{m_1\times m_2})$ and $W(\cdot)$ be an $\mathbb R^{m_2}$-valued Brownian motion. Then:

1) For any F-stopping time $\sigma$, it holds that
$$\int_0^{t\wedge\sigma}fdW=\int_0^t f\chi_{[0,\sigma]}(s)dW(s);$$

2) For any bounded F-stopping times $\tau$ and $\sigma$ with $\tau\leq\sigma$ a.s., and any bounded $F_\tau$-measurable random variables $\xi_1$ and $\xi_2$, it holds that
$$\int_\tau^\sigma(\xi_1f+\xi_2g)dW=\xi_1\int_\tau^\sigma fdW+\xi_2\int_\tau^\sigma gdW;$$

3) The stochastic process $X(\cdot)\stackrel{\Delta}{=}\int_0^\cdot f(s)dW(s)$ is a continuous, local martingale;

4) For any $\varepsilon>0$ and $\lambda>0$,
$$P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^s fdW\Big|_{\mathbb R^{m_1}}\geq\varepsilon\Big\}\Big)\leq\frac{\lambda}{\varepsilon^2}+P\Big(\Big\{\int_0^T|f(t)|^2_{\mathbb R^{m_1\times m_2}}dt>\lambda\Big\}\Big). \qquad(2.116)$$

Proof: Let us prove the last two conclusions 3) and 4). We only consider the case that $m_1=m_2=1$.

To prove 3), for any $n\in\mathbb N$, put
$$\tau_n(\omega)=\inf\Big\{t\in[0,T)\ \Big|\ \int_0^t|f(s,\omega)|^2ds>n\Big\}. \qquad(2.117)$$
Then, $\tau_n$ is a stopping time and $\lim_{n\to\infty}\tau_n=T$, a.s. Clearly, $\chi_{[0,\tau_n]}f\in L^2_{\mathbb F}(0,T)$ and
$$X(t\wedge\tau_n)=\int_0^{t\wedge\tau_n}f(s)dW(s)=\int_0^t\chi_{[0,\tau_n]}(s)f(s)dW(s).$$
Thus, $\{X(t\wedge\tau_n)\}_{t\in[0,T]}$ is a martingale, which implies that $\{X(t)\}_{t\in[0,T]}$ is a local martingale.

For each $n\in\mathbb N$, let us set
$$f_n(t,\omega)=\begin{cases}f(t,\omega), & \text{if }\int_0^t|f(s,\omega)|^2ds\leq n,\\ 0, & \text{otherwise.}\end{cases} \qquad(2.118)$$
It is easy to see that $f_n\in L^2_{\mathbb F}(0,T)$. Let $X^{(n)}(t)\stackrel{\Delta}{=}\int_0^tf_n(s)dW(s)$. By the fourth conclusion in Theorem 2.124, $X^{(n)}(\cdot)$ is continuous. Let $\Omega_n\stackrel{\Delta}{=}\{\int_0^T|f(t)|^2dt\leq n\}$. The sequence $\{\Omega_n\}_{n=1}^\infty$ is increasing. Let $\widetilde\Omega\stackrel{\Delta}{=}\bigcup_{n=1}^\infty\Omega_n$. Since $\int_0^T|f(t)|^2dt<\infty$, a.s., we find that $P(\widetilde\Omega)=1$. If $\omega\in\Omega_n$, then $f_n(t,\omega)=f_m(t,\omega)$ for any $t\in[0,T]$ and integer $m\geq n$. Thus, by Lemma 2.125, for almost all $\omega\in\Omega_n$,
$$X^{(m)}(t,\omega)=X^{(n)}(t,\omega),\qquad\forall\ m\geq n\ \text{and}\ t\in[0,T]. \qquad(2.119)$$
This implies that for a.e. $\omega\in\widetilde\Omega$, $\lim_{m\to\infty}X^{(m)}(t,\omega)$ exists for all $t\in[0,T]$. We define a stochastic process $Y(t,\omega)$ by
$$Y(t,\omega)\stackrel{\Delta}{=}\begin{cases}\lim_{m\to\infty}X^{(m)}(t,\omega), & \text{if }\omega\in\widetilde\Omega,\\ 0, & \text{if }\omega\notin\widetilde\Omega.\end{cases}$$

Then $Y(\cdot)$ is a continuous stochastic process. On the other hand, from the definition of the stochastic integral, we have that $X(t)=\text{P-}\lim_{m\to\infty}X^{(m)}(t)$. Therefore, for every $t\in[0,T]$, $X(t)=Y(t)$, a.s. Hence $Y(\cdot)$ is a continuous modification of $X(\cdot)$.

We now prove 4). We choose $f^\lambda(\cdot)\ (\in L^2_{\mathbb F}(0,T))$ as that in (2.108). Then, thanks to Chebyshev's inequality (in Theorem 2.22) and Doob's inequality (in Theorem 2.101), we have
$$\begin{aligned}
&P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^s fdW\Big|\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^s(f-f^\lambda)dW\Big|>0\Big\}\Big)+P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^s f^\lambda dW\Big|\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\int_0^T|f(r)|^2dr\geq\lambda\Big\}\Big)+\varepsilon^{-2}E\Big(\Big|\int_0^T f^\lambda dW\Big|\Big)^2\\
&=P\Big(\Big\{\int_0^T|f(r)|^2dr\geq\lambda\Big\}\Big)+\varepsilon^{-2}E\int_0^T|f^\lambda(r)|^2dr\\
&\leq P\Big(\Big\{\int_0^T|f(r)|^2dr\geq\lambda\Big\}\Big)+\frac{\lambda}{\varepsilon^2},
\end{aligned} \qquad(2.120)$$

which gives (2.116).

2.10.2 Itô's Integrals w.r.t. Vector-Valued Brownian Motions

Let $H$ be a separable Hilbert space with an orthonormal basis $\{h_k\}_{k=1}^\infty$. Let $V$ be another separable Hilbert space and $L_2^0\equiv L_2(V;H)$.

We consider first the case of $Q$-Brownian motion $W(\cdot)$, where $Q\in\mathcal L_1(V)$ is given as that in Subsection 2.9.3 (recall that $\{\lambda_k\}_{k=1}^\infty$ and $\{e_k\}_{k=1}^\infty$ are respectively sequences of eigenvalues and eigenvectors of $\sqrt Q$). By Proposition 2.119, for any $t$, $W(t)$ has the expansion $W(t)=\sum_{j=1}^\infty\sqrt{\lambda_j}W_j(t)e_j$, where
$$W_j=\frac{1}{\sqrt{\lambda_j}}\langle W(t),e_j\rangle_V,\qquad j=1,2,\cdots,$$
are mutually independent real valued Brownian motions.

For each $N\in\mathbb N$, we define $\widetilde W_N(t)=\sum_{j=1}^N\sqrt{\lambda_j}W_j(t)e_j$. Let $\Phi\in L^2_{\mathbb F}(0,T;L(V;H))$. Put
$$X_{n,N}=\sum_{k=1}^n\sum_{j=1}^N\sqrt{\lambda_j}h_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s).$$

Then, for any $m,n\in\mathbb N$ with $m<n$, we have that
$$\begin{aligned}
E|X_{n,N}-X_{m,N}|_H^2
&=E\Big|\sum_{k=m+1}^n\sum_{j=1}^N\sqrt{\lambda_j}h_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s)\Big|_H^2\\
&=\sum_{k=m+1}^nE\Big|\sum_{j=1}^N\sqrt{\lambda_j}\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s)\Big|^2\\
&=\sum_{k=m+1}^n\sum_{j=1}^N\lambda_jE\int_0^T\big|\langle\Phi(s)e_j,h_k\rangle_H\big|^2ds,
\end{aligned} \qquad(2.121)$$
which implies that $\{X_{n,N}\}_{n=1}^\infty$ is a Cauchy sequence in $L^2_{F_T}(\Omega;H)$. We define
$$\int_0^T\Phi(s)d\widetilde W_N(s)=\sum_{k=1}^\infty\sum_{j=1}^N\sqrt{\lambda_j}h_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s)\ \in L^2_{F_T}(\Omega;H).$$

For $t\in[0,T]$, put $Y_N(t)=\int_0^t\Phi(s)d\widetilde W_N(s)$. Then, for any $M,N\in\mathbb N$ with $M>N$, we have that
$$\begin{aligned}
E|Y_N(T)-Y_M(T)|_H^2
&=E\Big|\sum_{k=1}^\infty\sum_{j=N+1}^M\sqrt{\lambda_j}h_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s)\Big|_H^2\\
&=\sum_{k=1}^\infty E\Big|\sum_{j=N+1}^M\sqrt{\lambda_j}\int_0^T\langle\Phi(s)e_j,h_k\rangle_HdW_j(s)\Big|^2\\
&=\sum_{k=1}^\infty\sum_{j=N+1}^M\lambda_jE\int_0^T\big|\langle\Phi(s)e_j,h_k\rangle_H\big|^2ds\\
&=E\sum_{j=N+1}^M\lambda_j\int_0^T\big|\Phi(s)e_j\big|_H^2ds\leq|\Phi|^2_{L^2_{\mathbb F}(0,T;L(V;H))}\sum_{j=N+1}^M\lambda_j.
\end{aligned} \qquad(2.122)$$





T

T

Φ(s)dW (s) = lim

N →∞

0

fN (s), Φ(s)dW

0

in L2FT (Ω; H).

Further, we define ∫



t

T

Φ(s)dW (s) = 0

and

χ[0,t] Φ(s)dW (s) 0

for t ∈ [0, T ],

(2.123)






$$\int_s^t\Phi(r)dW(r)=\int_0^t\Phi(r)dW(r)-\int_0^s\Phi(r)dW(r)\qquad\text{for }0\leq s<t\leq T.$$
Similar to the case of finite dimensions, for any $p\in[1,\infty)$, one may introduce the space $L^{p,loc}_{\mathbb F}(0,T;L(V;H))$ and define the stochastic integral of $\Phi\in L^{2,loc}_{\mathbb F}(0,T;L(V;H))$ (especially $\Phi\in L^p_{\mathbb F}(\Omega;L^2(0,T;L(V;H)))$) w.r.t. $W(\cdot)$.

Similar to Theorems 2.124 and 2.129, one can show that the Itô integral (with respect to a $Q$-Brownian motion $W(\cdot)$ valued in $V$) has the following elementary properties:

Theorem 2.130. Let $f,g\in L^{2,loc}_{\mathbb F}(0,T;L(V;H))$, $a,b\in\mathbb R$ and $T\geq t>s\geq0$. The following results hold:

1) $\int_0^t(af+bg)dW=a\int_0^tfdW+b\int_0^tgdW$, a.s.;

2) The stochastic process $\int_0^\cdot fdW$ is an $H$-valued, continuous, local martingale;

3) If $f\in L^p_{\mathbb F}(\Omega;L^2(0,T;L(V;H)))$, then
$$E\Big(\int_s^tfdW\ \Big|\ F_s\Big)=0,\qquad a.s.;$$

4) If $f,g\in L^2_{\mathbb F}(0,T;L(V;H))$, then $\int_0^\cdot fdW\in M_c^2([0,T];H)$, and
$$E\Big(\Big\langle\int_s^tfdW,\int_s^tgdW\Big\rangle_H\ \Big|\ F_s\Big)=E\Big(\int_s^t\big\langle f(r,\cdot)Q^{\frac12},g(r,\cdot)Q^{\frac12}\big\rangle_{L_2^0}dr\ \Big|\ F_s\Big),\qquad a.s.$$
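Conclusion 4) with $s=0$ is the Itô isometry for $Q$-Brownian motions. The following is a minimal Monte Carlo sketch in a finite-dimensional proxy $V=H=\mathbb R^3$ (not from the book; the eigenvalues, the matrix standing in for $\Phi$, the seed and the sample sizes are all assumptions): for a constant $\Phi$, $E|\int_0^T\Phi\,dW|^2=T\,|\Phi Q^{1/2}|^2_{L_2^0}$.

```python
import numpy as np

# Monte Carlo sketch of the Ito isometry (Theorem 2.130, conclusion 4 with
# s = 0) for a Q-Brownian motion W(t) = sum_j sqrt(lambda_j) W_j(t) e_j.
rng = np.random.default_rng(3)
d, T, n_paths, m = 3, 1.0, 20000, 50
lam = np.array([1.0, 0.5, 0.25])          # eigenvalues of Q (assumed)
Phi = np.array([[1.0, 0.0, 2.0],
                [0.0, 3.0, 0.0],
                [1.0, 1.0, 0.0]])         # constant "operator" Phi (assumed)
dt = T / m

# increments of W: sqrt(lambda_j) times independent scalar BM increments
dW = np.sqrt(lam)[None, None, :] * rng.normal(0.0, np.sqrt(dt), (n_paths, m, d))
ito = np.einsum('ij,pmj->pi', Phi, dW)     # int_0^T Phi dW for constant Phi

empirical = float(np.mean(np.sum(ito**2, axis=1)))
# |Phi Q^{1/2}|_{HS}^2 = sum_{i,j} Phi_{ij}^2 lambda_j
theoretical = T * float(np.sum((Phi * np.sqrt(lam)[None, :])**2))
```

The Hilbert–Schmidt norm of $\Phi Q^{1/2}$ is exactly what survives of the operator norm data, mirroring why $L_2^0$ (rather than $L(V;H)$) carries the isometry.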

As a consequence of the conclusion 4) in Theorem 2.130, we have the following result (recall (2.100) for the notation $\psi(\cdot,\cdot)_{L_2^0}$).

Corollary 2.131. For any bounded bilinear functional $\psi(\cdot,\cdot)$ on $H$ and $f,g\in L^2_{\mathbb F}(0,T;L(V;H))$, it holds that
$$E\Big(\psi\Big(\int_s^tfdW(r),\int_s^tgdW(r)\Big)\ \Big|\ F_s\Big)=E\Big(\int_s^t\psi\big(f(r,\cdot)Q^{\frac12},g(r,\cdot)Q^{\frac12}\big)_{L_2^0}dr\ \Big|\ F_s\Big),\qquad a.s.$$

Proof: Since $\psi(\cdot,\cdot)$ is a bounded bilinear functional on $H$, there is a $P\in L(H)$ such that $\psi(x,y)=\langle Px,y\rangle_H$ for any $x,y\in H$. Hence, by (2.99), (2.100) and the fourth conclusion in Theorem 2.130, it follows that
$$\begin{aligned}
E\Big(\psi\Big(\int_s^tfdW(r),\int_s^tgdW(r)\Big)\ \Big|\ F_s\Big)
&=E\Big(\Big\langle\int_s^tPfdW(r),\int_s^tgdW(r)\Big\rangle_H\ \Big|\ F_s\Big)\\
&=E\Big(\int_s^t\big\langle Pf(r,\cdot)Q^{\frac12},g(r,\cdot)Q^{\frac12}\big\rangle_{L_2^0}dr\ \Big|\ F_s\Big)\\
&=E\Big(\int_s^t\psi\big(f(r,\cdot)Q^{\frac12},g(r,\cdot)Q^{\frac12}\big)_{L_2^0}dr\ \Big|\ F_s\Big),\qquad a.s.
\end{aligned}$$

This completes the proof of Corollary 2.131.

We now proceed with the definition of the stochastic integral w.r.t. a cylindrical Wiener process. Let $\{w_k(\cdot)\}_{k=1}^\infty$ be a sequence of independent real valued standard Brownian motions, and $W(\cdot)$ be the corresponding $V$-valued cylindrical Brownian motion (given in Definition 2.120). Recall that $L^2_{\mathbb F}(0,T;L_2^0)$ is the Hilbert space consisting of all $L_2^0$-valued, F-adapted processes $\Phi$ such that
$$|\Phi|^2_{L^2_{\mathbb F}(0,T;L_2^0)}\stackrel{\Delta}{=}E\int_0^T|\Phi(t)|^2_{L_2^0}dt=\sum_{k=1}^\infty E\int_0^T|\Phi(t)e_k|_H^2dt<\infty. \qquad(2.124)$$
For each $N\in\mathbb N$, write $\widehat W_N(t)=\sum_{j=1}^Nw_j(t)e_j$. For any $\Phi\in L^2_{\mathbb F}(0,T;L_2^0)$, put
$$X_{n,N}=\sum_{k=1}^n\sum_{j=1}^Nh_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_Hdw_j(s).$$

Then, similarly to (2.121), for any $m,n\in\mathbb N$ with $m<n$, we have that
$$E|X_{n,N}-X_{m,N}|_H^2=\sum_{k=m+1}^n\sum_{j=1}^NE\int_0^T\big|\langle\Phi(s)e_j,h_k\rangle_H\big|^2ds,$$
which implies that $\{X_{n,N}\}_{n=1}^\infty$ is a Cauchy sequence in $L^2_{F_T}(\Omega;H)$. Hence, we define
$$\int_0^T\Phi(s)d\widehat W_N(s)=\sum_{k=1}^\infty\sum_{j=1}^Nh_k\int_0^T\langle\Phi(s)e_j,h_k\rangle_Hdw_j(s)\ \in L^2_{F_T}(\Omega;H).$$

For any $t\in[0,T]$, put $Y_N(t)=\int_0^t\Phi(s)d\widehat W_N(s)$. Then, similarly to (2.122), for any $M,N\in\mathbb N$ with $M>N$, we have that
$$E|Y_N(T)-Y_M(T)|_H^2=E\sum_{j=N+1}^M\int_0^T\big|\Phi(s)e_j\big|_H^2ds. \qquad(2.125)$$


By (2.124) and (2.125), $\{Y_N(T)\}_{N=1}^\infty$ is a Cauchy sequence in $L^2_{F_T}(\Omega;H)$. Hence, we may define the integral of $\Phi$ w.r.t. $W(\cdot)$ as follows:
$$\int_0^T\Phi(s)dW(s)=\lim_{N\to\infty}\int_0^T\Phi(s)d\widehat W_N(s),\qquad\text{in }L^2_{F_T}(\Omega;H).$$

Similarly as in the case of finite dimensions, for any $p\in[1,\infty)$, one may introduce the space $L^{p,loc}_{\mathbb F}(0,T;L_2^0)$ and define the stochastic integral of $\Phi\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$ (especially $\Phi\in L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$) w.r.t. $W(\cdot)$. Further, we define
$$\int_0^t\Phi(s)dW(s)=\int_0^T\chi_{[0,t]}\Phi(s)dW(s),\qquad\text{for }t\in[0,T],$$
and
$$\int_s^t\Phi(r)dW(r)=\int_0^t\Phi(r)dW(r)-\int_0^s\Phi(r)dW(r),\qquad\text{for }0\leq s<t\leq T.$$

Remark 2.132. As we mentioned before, the series (for the definition of cylindrical Brownian motion $W(\cdot)$) in (2.98) does not converge in $V$. Nevertheless, for any operator $\Phi_0\in L_2^0$, one can show that $\Phi_0W(t)\in H$ for any $t\in[0,T]$, and
$$\int_0^T\Phi_0dW(s)=\Phi_0W(T). \qquad(2.126)$$
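The identity (2.126) is purely algebraic for a constant integrand: the Itô sums telescope. Here is a minimal finite-dimensional sketch (not from the book; the matrix standing in for $\Phi_0$, the dimensions and the seed are assumptions):

```python
import numpy as np

# Pathwise sketch of (2.126): for a constant operator Phi0 (here a fixed
# matrix, a finite-dimensional stand-in for an element of L_2^0), the Ito
# sums telescope, so int_0^T Phi0 dW = Phi0 W(T) exactly on each path.
rng = np.random.default_rng(4)
d, m, T = 4, 500, 1.0
dt = T / m
Phi0 = rng.normal(size=(2, d))                 # fixed 2 x d "operator" (assumed)
dW = rng.normal(0.0, np.sqrt(dt), (m, d))      # increments of the noise
W_T = dW.sum(axis=0)

ito = sum(Phi0 @ dW[i] for i in range(m))      # sum_i Phi0 (W(t_{i+1}) - W(t_i))
direct = Phi0 @ W_T
```

Linearity of $\Phi_0$ is the only ingredient, which is why (2.126) holds even though $W(t)$ itself does not live in $V$.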

Similar to the proof of Theorems 2.124 and 2.129, we can show that the Itô integral (w.r.t. the cylindrical Brownian motion $W(\cdot)$) has the following properties:

Theorem 2.133. Let $f,g\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, $a,b\in\mathbb R$ and $T\geq t>s\geq0$. The following results hold:

1) $\int_0^t(af+bg)dW=a\int_0^tfdW+b\int_0^tgdW$, a.s.;

2) The stochastic process $\int_0^\cdot fdW$ is an $H$-valued, continuous, local martingale;

3) If $f\in L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$, then
$$E\Big(\int_s^tfdW\ \Big|\ F_s\Big)=0,\qquad a.s.;$$

4) If $f,g\in L^2_{\mathbb F}(0,T;L_2^0)$, then $\int_0^\cdot fdW\in M_c^2([0,T];H)$, and
$$E\Big(\Big\langle\int_s^tfdW,\int_s^tgdW\Big\rangle_H\ \Big|\ F_s\Big)=E\Big(\int_s^t\big\langle f(r,\cdot),g(r,\cdot)\big\rangle_{L_2^0}dr\ \Big|\ F_s\Big),\qquad a.s.$$

Similar to Corollary 2.131, we have the following result.


Corollary 2.134. For any bounded bilinear functional $\psi(\cdot,\cdot)$ on $H$ and $f,g\in L^2_{\mathbb F}(0,T;L_2^0)$, it holds that
$$E\Big(\psi\Big(\int_s^tfdW(\tau),\int_s^tgdW(\tau)\Big)\ \Big|\ F_s\Big)=E\Big(\int_s^t\psi\big(f(\tau,\cdot),g(\tau,\cdot)\big)_{L_2^0}d\tau\ \Big|\ F_s\Big),\qquad a.s.$$

As an easy consequence of Theorem 2.109 and Corollary 2.110, one has the following Doob inequalities for stochastic integrals.

Theorem 2.135. Let $W(\cdot)$ be a $V$-valued, $Q$-Brownian motion (resp. cylindrical Brownian motion) and $f\in L^2_{\mathbb F}(0,T;L(V,H))$ (resp. $f\in L^2_{\mathbb F}(0,T;L_2^0)$). Let $\tau$ be a stopping time. Then, for any $p\geq1$ and $\lambda>0$,
$$P\Big(\sup_{t\in[0,T]}\Big|\int_0^{t\wedge\tau}f(s)dW(s)\Big|_H\geq\lambda\Big)\leq\frac{1}{\lambda^p}E\Big|\int_0^{T\wedge\tau}f(s)dW(s)\Big|_H^p,$$
and for any $p>1$,
$$E\Big(\sup_{t\in[0,T]}\Big|\int_0^{t\wedge\tau}f(s)dW(s)\Big|_H^p\Big)\leq\Big(\frac{p}{p-1}\Big)^pE\Big|\int_0^{T\wedge\tau}f(s)dW(s)\Big|_H^p.$$

The following result provides a useful link between Itô's and Lebesgue's integrals.

Lemma 2.136. Let $W(\cdot)$ be a $V$-valued, $Q$-Brownian motion (resp. cylindrical Brownian motion) and $f\in L^{2,loc}_{\mathbb F}(0,T;L(V,H))$ (resp. $f\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$). Then, for any positive numbers $\varepsilon$ and $\lambda$,
$$P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^sfdW\Big|_H\geq\varepsilon\Big\}\Big)\leq\frac{1}{\varepsilon^2}E\Big(\lambda\wedge\int_0^T|f(t)Q^{\frac12}|^2_{L_2^0}dt\Big)+P\Big(\Big\{\int_0^T|f(t)Q^{\frac12}|^2_{L_2^0}dt>\lambda\Big\}\Big) \qquad(2.127)$$
(resp.
$$P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^sfdW\Big|_H\geq\varepsilon\Big\}\Big)\leq\frac{1}{\varepsilon^2}E\Big(\lambda\wedge\int_0^T|f(t)|^2_{L_2^0}dt\Big)+P\Big(\Big\{\int_0^T|f(t)|^2_{L_2^0}dt>\lambda\Big\}\Big)\ \Big). \qquad(2.128)$$
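For a deterministic integrand the second term of (2.128) vanishes when $\lambda$ exceeds the quadratic integral, leaving a pure Chebyshev-type tail bound. A minimal Monte Carlo sketch in the scalar case (not from the book; $f\equiv1$, and the grid, seed and $\varepsilon$ are assumptions):

```python
import numpy as np

# Monte Carlo sketch of (2.128) with f = 1, so int_0^T |f|^2 dt = T a.s.
# Taking lambda > T, the bound reads P(sup_s |W(s)| >= eps) <= T/eps^2.
rng = np.random.default_rng(6)
n_paths, m, T, eps = 4000, 400, 1.0, 2.0
dt = T / m
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, m)), axis=1)

p_emp = float(np.mean(np.max(np.abs(W), axis=1) >= eps))
bound = T / eps**2       # right-hand side of (2.128) for lambda > T
```

The bound is crude (here roughly $0.25$ against an empirical probability near $0.05$), but it needs only square-integrability information, which is exactly its role for $L^{2,loc}$ integrands.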

Proof: We only consider the case that $W(\cdot)$ is a ($V$-valued) $Q$-Brownian motion. Similar to the proof of Lemma 2.126, we define a stochastic process $f_\lambda(t,\omega)$ by
$$f_\lambda(t,\omega)=\begin{cases}f(t,\omega), & \text{if }\int_0^t|f(s,\omega)Q^{\frac12}|^2_{L_2^0}ds\leq\lambda,\\ 0, & \text{otherwise.}\end{cases} \qquad(2.129)$$

By Chebyshev's inequality (in Theorem 2.22) and Doob's inequality (in Theorem 2.135), similarly to (2.120), we have that
$$\begin{aligned}
&P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^sfdW\Big|_H\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^sf_\lambda dW\Big|_H\geq\varepsilon\Big\}\Big)+P\Big(\Big\{\sup_{s\in[0,T]}\Big|\int_0^s(f-f_\lambda)dW\Big|_H>0\Big\}\Big)\\
&\leq\varepsilon^{-2}E\Big|\int_0^Tf_\lambda dW\Big|_H^2+P\Big(\Big\{\int_0^T|f(r)Q^{\frac12}|^2_{L_2^0}dr>\lambda\Big\}\Big)\\
&=\varepsilon^{-2}E\int_0^T|f_\lambda(r)Q^{\frac12}|^2_{L_2^0}dr+P\Big(\Big\{\int_0^T|f(r)Q^{\frac12}|^2_{L_2^0}dr>\lambda\Big\}\Big)\\
&\leq\frac{1}{\varepsilon^2}E\Big(\lambda\wedge\int_0^T|f(r)Q^{\frac12}|^2_{L_2^0}dr\Big)+P\Big(\Big\{\int_0^T|f(r)Q^{\frac12}|^2_{L_2^0}dr>\lambda\Big\}\Big),
\end{aligned} \qquad(2.130)$$
which gives (2.127).

Remark 2.137. In principle, one should introduce the stochastic integral w.r.t. a Brownian sheet. This integral plays an important role in the study of stochastic partial differential equations perturbed by white noise depending on both the time and spatial variables (see [335]). One can show that such a kind of integral equals the integral w.r.t. a cylindrical Brownian motion for the same integrand (e.g., [68]). Therefore, we omit the details here.

2.11 Properties of Stochastic Integrals

Let $T>0$ and $(\Omega,\mathcal F,\mathbb F,P)$ (with $\mathbb F=\{F_t\}_{t\in[0,T]}$) be a fixed filtered probability space satisfying the usual condition, and denote by $\mathbf F$ the progressive $\sigma$-field w.r.t. $\mathbb F$. Let $H$ and $V$ be two separable Hilbert spaces, $Q\in\mathcal L_1(V)$ be given as that in Subsection 2.9.3 and write $L_2^0\stackrel{\Delta}{=}L_2(V;H)$. Denote by $I$ the identity operator on $H$.

2.11.1 Itô's Formula for Itô's Processes (in a Strong Form)

In this subsection, we present a stochastic version of the chain rule, called Itô's formula, which plays a fundamental role in stochastic calculus. We first give the notion of ($H$-valued) Itô process (in a strong form).


Definition 2.138. For any $b(\cdot)\in L^{1,loc}_{\mathbb F}(0,T;H)$, $V$-valued $Q$-Brownian motion (resp. cylindrical Brownian motion) $W(\cdot)$ and $\sigma(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;L(V;H))$ (resp. $\sigma(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$), the following form of F-adapted process
$$X(t)=X(0)+\int_0^tb(s)ds+\int_0^t\sigma(s)dW(s),\qquad t\in[0,T], \qquad(2.131)$$
is called an $H$-valued Itô process, or simply an Itô process (if the meaning is clear from the context).

For any $h\in H$, $h^*\in H^*$ and $f\in L(V;H)$, we define two maps $\langle\!\langle f,h\rangle\!\rangle_H,\ \langle\!\langle h^*,f\rangle\!\rangle_H:V\to\mathbb R$ as follows:
$$\langle\!\langle f,h\rangle\!\rangle_H(v)=\langle f(v),h\rangle_H,\qquad\langle\!\langle h^*,f\rangle\!\rangle_H(v)=\langle h^*,f(v)\rangle_{H^*,H},\qquad\forall\ v\in V. \qquad(2.132)$$
Clearly, $\langle\!\langle f,h\rangle\!\rangle_H,\ \langle\!\langle h^*,f\rangle\!\rangle_H\in L(V;\mathbb R)$ (it is easy to check that $L(V;\mathbb R)\equiv L_2(V;\mathbb R)$).

Let $\mathcal X$, $\mathcal Y$ and $\mathcal Z$ be three Banach spaces, and $O$ and $O'$ be respectively nonempty open subsets of $\mathcal X$ and $\mathcal Y$. Let us recall that a function $f:O\to\mathcal Y$ is called Fréchet differentiable at $x_0\in O$ if there exists an $A(x_0)\in L(\mathcal X;\mathcal Y)$ such that
$$\lim_{h\to0}\frac{|f(x_0+h)-f(x_0)-A(x_0)h|_{\mathcal Y}}{|h|_{\mathcal X}}=0.$$
In this case, we write $f_x(x_0)=A(x_0)$ and call it the Fréchet derivative of $f$ at $x_0$. A function $f$ that is Fréchet differentiable at any point of $O$ is said to be $C^1$ in $O$, denoted by $f\in C^1(O;\mathcal Y)$, if the function
$$f_x:O\to L(\mathcal X;\mathcal Y),\qquad x\mapsto f_x(x)$$
is continuous. Similarly, one can define the Fréchet derivatives of higher orders and the space $C^m(O;\mathcal Y)$ ($m\in\mathbb N$). This is done by induction: $f\in C^m(O;\mathcal Y)$ if $f\in C^1(O;\mathcal Y)$ and $f_x\in C^{m-1}(O;L(\mathcal X;\mathcal Y))$. Especially, the Fréchet derivative $f_{xx}$ of $f_x$ is called the second derivative of $f$, and $f_{xx}(x_0)\in L(\mathcal X;L(\mathcal X;\mathcal Y))$. Note that $L(\mathcal X;L(\mathcal X;\mathcal Y))$ is isomorphic (as a Banach space) to the Banach space $L(\mathcal X,\mathcal X;\mathcal Y)$ of all bilinear maps from $\mathcal X$ to $\mathcal Y$, with the canonical norm. Similarly, for a function $g:O\times O'\to\mathcal Z$ and $y_0\in O'$, one can define the (second) mixed Fréchet derivative of $g$, i.e., $g_{xy}(x_0,y_0)\in L(\mathcal X,\mathcal Y;\mathcal Z)$, where $L(\mathcal X,\mathcal Y;\mathcal Z)$ stands for the Banach space of all maps $M$ from $\mathcal X\times\mathcal Y$ to $\mathcal Z$ such that, for any $a_1,a_2\in\mathbb C$, $x,x_1,x_2\in\mathcal X$ and $y,y_1,y_2\in\mathcal Y$,
$$\begin{cases}M(a_1x_1+a_2x_2,y)=a_1M(x_1,y)+a_2M(x_2,y),\\ M(x,a_1y_1+a_2y_2)=a_1M(x,y_1)+a_2M(x,y_2),\end{cases}$$
with the following norm:
$$|M|_{L(\mathcal X,\mathcal Y;\mathcal Z)}=\sup\big\{|M(x,y)|_{\mathcal Z}\ \big|\ x\in\mathcal X,\ y\in\mathcal Y,\ |x|_{\mathcal X}\leq1,\ |y|_{\mathcal Y}\leq1\big\}.$$
The following result is known as Itô's formula.


Theorem 2.139. Let $X(\cdot)$ be given by (2.131). Let $F:[0,T]\times H\to\mathbb R$ be a function such that its partial (Fréchet) derivatives $F_t$, $F_x$ and $F_{xx}$ are uniformly continuous on any bounded subset of $[0,T]\times H$. Then, a.s., for all $t\in[0,T]$ (recall (2.100) for the notation $F_{xx}(s,X(s))(\cdot,\cdot)_{L_2^0}$),
$$\begin{aligned}
F(t,X(t))=\ &F(0,X(0))+\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s)\\
&+\int_0^t\Big[F_t(s,X(s))+\big\langle F_x(s,X(s)),b(s)\big\rangle_{H^*,H}+\frac12F_{xx}(s,X(s))\big(\sigma(s)Q^{\frac12},\sigma(s)Q^{\frac12}\big)_{L_2^0}\Big]ds
\end{aligned} \qquad(2.133)$$
for the case that $\{W(t)\}_{t\in[0,T]}$ is a $Q$-Brownian motion, and
$$\begin{aligned}
F(t,X(t))=\ &F(0,X(0))+\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s)\\
&+\int_0^t\Big[F_t(s,X(s))+\big\langle F_x(s,X(s)),b(s)\big\rangle_{H^*,H}+\frac12F_{xx}(s,X(s))\big(\sigma(s),\sigma(s)\big)_{L_2^0}\Big]ds
\end{aligned} \qquad(2.134)$$
for the case that $\{W(t)\}_{t\in[0,T]}$ is a cylindrical Brownian motion.

Proof: We only consider the case that $W(\cdot)$ is a ($V$-valued) cylindrical Brownian motion. We divide the proof into several steps. The main idea is to use Taylor's formula. We fix any $t\in[0,T]$.

Step 1. We claim that it suffices to prove (2.134) under the additional assumptions that $|X(\cdot)|_H$, $|\int_0^\cdot\sigma dW|_H$, $\int_0^\cdot|\sigma(r)|^2_{L_2^0}dr$ and $\int_0^\cdot|b(r)|_Hdr$ are uniformly bounded in $[0,t]\times\Omega$, a.s.

Indeed, for the general case, for $n\in\mathbb N$, we set
$$\tau^n\stackrel{\Delta}{=}\inf\Big\{s\in[0,t]\ \Big|\ \Big|\int_0^s\sigma dW\Big|_H\vee\int_0^s|\sigma(r)|^2_{L_2^0}dr\vee\int_0^s|b(r)|_Hdr\geq n\Big\},$$

$(\inf\emptyset=t)$. It is easy to see that $\{\tau^n\}_{n=1}^\infty$ is a sequence of F-stopping times. Put
$$X^n(t)\stackrel{\Delta}{=}X_0^n+\int_0^t\sigma^ndW+\int_0^tb^nds,$$
where $X_0^n=X(0)\chi_{(|X(0)|_H\leq n)}$, $\sigma^n(s)=\sigma(s)\chi_{[0,\tau^n]}(s)$ and $b^n(s)=b(s)\chi_{[0,\tau^n]}(s)$. Hence, $|X_0^n|_H$, $|\int_0^\cdot\sigma^ndW|_H$, $\int_0^\cdot|\sigma^n(s)|^2_{L_2^0}ds$ and $\int_0^\cdot|b^n(s)|_Hds$ are uniformly bounded almost surely. If the formula (2.134) holds for $X^n$, then we have

$$\begin{aligned}
F(t,X^n(t))=\ &F(0,X_0^n)+\int_0^t\langle\!\langle F_x(s,X^n(s)),\sigma^n(s)\rangle\!\rangle_HdW(s)\\
&+\int_0^t\Big[F_t(s,X^n(s))+\big\langle F_x(s,X^n(s)),b^n(s)\big\rangle_{H^*,H}+\frac12F_{xx}(s,X^n(s))\big(\sigma^n(s),\sigma^n(s)\big)_{L_2^0}\Big]ds,\quad a.s.
\end{aligned} \qquad(2.135)$$

Clearly, $F(t,X^n(t))\to F(t,X(t))$ and $F(0,X_0^n)\to F(0,X(0))$, a.s., as $n\to\infty$.

Now, for any fixed $\omega\in\Omega$, $F_x(s,X(s))$ is continuous w.r.t. $s\in[0,t]$ and therefore it is bounded by a constant $C$. Hence,
$$|\langle\!\langle F_x(s,X^n(s)),\sigma^n(s)\rangle\!\rangle_H-\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_H|^2_{L_2(V;\mathbb R)}\leq C|\sigma(s)|^2_{L_2^0}\in L^{1,loc}_{\mathbb F}(0,T),$$
which implies that
$$\int_0^t|\langle\!\langle F_x(s,X^n(s)),\sigma^n(s)\rangle\!\rangle_H-\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_H|^2_{L_2(V;\mathbb R)}ds\to0,\qquad a.s.$$
as $n\to\infty$. Thus, by the third conclusion in Theorem 2.133,
$$\text{P-}\lim_{n\to+\infty}\int_0^t\langle\!\langle F_x(s,X^n(s)),\sigma^n(s)\rangle\!\rangle_HdW(s)=\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s). \qquad(2.136)$$

Similarly,
$$\int_0^t|\langle F_x(s,X^n(s)),b^n(s)\rangle_{H^*,H}-\langle F_x(s,X(s)),b(s)\rangle_{H^*,H}|ds\to0,\qquad a.s.$$
as $n\to\infty$. Also, using the boundedness for any fixed $\omega$, we obtain that
$$\int_0^t\Big[F_s(s,X^n(s))+\frac12F_{xx}(s,X^n(s))\big(\sigma^n(s),\sigma^n(s)\big)_{L_2^0}\Big]ds\to\int_0^t\Big[F_s(s,X(s))+\frac12F_{xx}(s,X(s))\big(\sigma(s),\sigma(s)\big)_{L_2^0}\Big]ds,\quad a.s.\ \text{as }n\to\infty.$$
From (2.136), we conclude that there is a subsequence $\{n_k\}_{k=1}^\infty$ such that
$$\lim_{k\to+\infty}\int_0^t\langle\!\langle F_x(s,X^{n_k}(s)),\sigma^{n_k}(s)\rangle\!\rangle_HdW(s)=\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s),\qquad a.s.$$


Now, replacing $n$ in (2.135) by $n_k$ and then letting $k\to\infty$, one gets (2.134).

Step 2. Now, let us show (2.134) under the additional conditions that $|X(\cdot)|_H$, $|\int_0^\cdot\sigma dW|_H$, $\int_0^\cdot|\sigma(r)|^2_{L_2^0}dr$ and $\int_0^\cdot|b(r)|_Hdr$ are uniformly bounded in $[0,t]\times\Omega$ by some positive constant $K$. Denote by $C$ the upper bound of $F_t(t,x)$, $F_x(t,x)$ and $F_{xx}(t,x)$ over $L\stackrel{\Delta}{=}[0,t]\times\{x\in H\mid|x|_H\leq K\}$. Using Taylor's expansion, we see that there is a (scalar) function $\varepsilon(r)$ (defined on $[0,\infty)$), decreasing to 0 as $r\to0$, such that
$$\Big|F(s,x_2)-F(s,x_1)-\langle F_x(s,x_1),x_2-x_1\rangle_{H^*,H}-\frac12F_{xx}(s,x_1)(x_2-x_1,x_2-x_1)\Big|\leq\varepsilon(|x_2-x_1|_H)|x_2-x_1|_H^2,\qquad\forall\ (s,x_1),(s,x_2)\in L,$$
and
$$|F(s_2,x)-F(s_1,x)-F_s(s_1,x)(s_2-s_1)|\leq\varepsilon(|s_2-s_1|)|s_2-s_1|,\qquad\forall\ (s_1,x),(s_2,x)\in L.$$

For any $h>0$ and $k=0,1,2,\cdots$, put
$$\begin{cases}
\tau_0=0,\\
\hat\tau_{k+1}=\inf\Big\{s\in[\tau_k,t]\ \Big|\ \Big|\int_{\tau_k}^s\sigma dW\Big|_H\vee\int_{\tau_k}^s|\sigma(r)|^2_{L_2^0}dr\vee\int_{\tau_k}^s|b(r)|_Hdr\geq h\Big\},\\
\tau_{k+1}=\hat\tau_{k+1}\wedge(\tau_k+h)\wedge t,
\end{cases}$$
$(\inf\emptyset=t)$. One can show that $\{\tau_k\}$ is a sequence of F-stopping times, and
$$\tau_k\leq\tau_{k+1}\leq\tau_k+h,\qquad\Big|\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big|_H\leq h,\qquad\int_{\tau_k}^{\tau_{k+1}}|b(s)|_Hds\leq h.$$
It is easy to see that
$$I\stackrel{\Delta}{=}F(t,X(t))-F(0,X(0))=\sum_{k=0}^\infty[F(\tau_{k+1},X(\tau_{k+1}))-F(\tau_k,X(\tau_{k+1}))]+\sum_{k=0}^\infty[F(\tau_k,X(\tau_{k+1}))-F(\tau_k,X(\tau_k))].$$
Put $\Delta\tau_k=\tau_{k+1}-\tau_k$, $\Delta X(\tau_k)=X(\tau_{k+1})-X(\tau_k)$, and
$$I_h=\sum_{k=0}^\infty\Big[F_s(\tau_k,X(\tau_{k+1}))\Delta\tau_k+\langle F_x(\tau_k,X(\tau_k)),\Delta X(\tau_k)\rangle_{H^*,H}+\frac12F_{xx}(\tau_k,X(\tau_k))(\Delta X(\tau_k),\Delta X(\tau_k))\Big].$$


Then
$$|I-I_h|\leq\sum_{k=0}^\infty\big(\varepsilon(h)|\Delta\tau_k|+\varepsilon(2h)|\Delta X(\tau_k)|_H^2\big)\leq\varepsilon(2h)\Big[t+2\sum_{k=0}^\infty\Big(\Big|\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big|_H^2+\Big|\int_{\tau_k}^{\tau_{k+1}}b(u)du\Big|_H^2\Big)\Big].$$
Note that
$$E\sum_{k=0}^\infty\Big|\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big|_H^2=E\sum_{k=0}^\infty\int_{\tau_k}^{\tau_{k+1}}|\sigma(s)|^2_{L_2^0}ds=E\int_0^t|\sigma(s)|^2_{L_2^0}ds.$$
Hence,
$$\varepsilon(2h)E\sum_{k=0}^\infty\Big|\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big|_H^2\to0\qquad\text{as }h\to0.$$
On the other hand,
$$\sum_{k=0}^\infty\Big|\int_{\tau_k}^{\tau_{k+1}}b(r)dr\Big|_H^2\leq h\sum_{k=0}^\infty\int_{\tau_k}^{\tau_{k+1}}|b(r)|_Hdr\leq h\int_0^t|b(r)|_Hdr.$$
Hence,
$$\lim_{h\to0}E|I_h-I|=0.$$

Step 3. In what follows, we will analyze the limit of $I_h$ in probability as $h\to0$. Clearly, the first term in $I_h$ tends to $\int_0^tF_s(s,X(s))ds$ a.s. as $h\to0$. Since
$$\Delta X(\tau_k)=\int_{\tau_k}^{\tau_{k+1}}\sigma dW+\int_{\tau_k}^{\tau_{k+1}}b(u)du,$$
the term $\sum_{k=0}^\infty\langle F_x(\tau_k,X(\tau_k)),\Delta X(\tau_k)\rangle_{H^*,H}$ in $I_h$ can be split into two terms. The last term tends to $\int_0^t\langle F_x(s,X(s)),b(s)\rangle_{H^*,H}ds$ a.s. as $h\to0$; while the previous term reads

$$\begin{aligned}
\sum_{k=0}^\infty\Big\langle F_x(\tau_k,X(\tau_k)),\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big\rangle_{H^*,H}
&=\sum_{k=0}^\infty\int_{\tau_k}^{\tau_{k+1}}\langle\!\langle F_x(\tau_k,X(\tau_k)),\sigma(s)\rangle\!\rangle_HdW(s)\\
&=\int_0^t\sum_{k=0}^\infty\langle\!\langle F_x(\tau_k,X(\tau_k))\chi_{(\tau_k,\tau_{k+1}]}(s),\sigma(s)\rangle\!\rangle_HdW(s).
\end{aligned}$$

However, it follows from the dominated convergence theorem that
$$E\int_0^t\Big|\sum_{k=0}^\infty F_x(\tau_k,X(\tau_k))\chi_{(\tau_k,\tau_{k+1}]}(s)-F_x(s,X(s))\Big|^2|\sigma(s)|^2_{L_2^0}ds\to0$$
as $h\to0$. Hence,
$$\sum_{k=0}^\infty\Big\langle F_x(\tau_k,X(\tau_k)),\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big\rangle_{H^*,H}\to\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s)$$

in probability as $h\to0$.

Similarly, the term $\frac12\sum_{k=0}^\infty F_{xx}(\tau_k,X(\tau_k))(\Delta X(\tau_k),\Delta X(\tau_k))$ in $I_h$ can be split into three terms. The last two terms tend to 0 a.s. as $h\to0$; while the first one reads
$$\frac12\sum_{k=0}^\infty F_{xx}(\tau_k,X(\tau_k))\Big(\int_{\tau_k}^{\tau_{k+1}}\sigma dW,\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big)\equiv I_1+I_2,$$
where
$$I_1=\frac12\sum_{k=0}^\infty\int_{\tau_k}^{\tau_{k+1}}F_{xx}(\tau_k,X(\tau_k))\big(\sigma(s),\sigma(s)\big)_{L_2^0}ds,$$
$$I_2=\frac12\sum_{k=0}^\infty\Big[F_{xx}(\tau_k,X(\tau_k))\Big(\int_{\tau_k}^{\tau_{k+1}}\sigma dW,\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big)-\int_{\tau_k}^{\tau_{k+1}}F_{xx}(\tau_k,X(\tau_k))\big(\sigma(s),\sigma(s)\big)_{L_2^0}ds\Big].$$

Obviously, $I_1$ tends to $\frac12\int_0^tF_{xx}(s,X(s))\big(\sigma(s),\sigma(s)\big)_{L_2^0}ds$ a.s. as $h\to0$. On the other hand, by Corollary 2.134 and using (2.101), we have
$$\begin{aligned}
E(I_2^2)&=\frac14E\sum_{k=0}^\infty\Big[F_{xx}(\tau_k,X(\tau_k))\Big(\int_{\tau_k}^{\tau_{k+1}}\sigma dW,\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big)-\int_{\tau_k}^{\tau_{k+1}}F_{xx}(\tau_k,X(\tau_k))\big(\sigma(s),\sigma(s)\big)_{L_2^0}ds\Big]^2\\
&\leq\frac{C^2}{2}\sum_{k=0}^\infty E\Big[\Big|\int_{\tau_k}^{\tau_{k+1}}\sigma dW\Big|_H^4+\Big(\int_{\tau_k}^{\tau_{k+1}}|\sigma(s)|^2_{L_2^0}ds\Big)^2\Big]\\
&\leq\frac{C^2h}{2}\Big(hE\int_0^t|\sigma(s)|^2_{L_2^0}ds+E\int_0^t|\sigma(s)|^2_{L_2^0}ds\Big)\to0,\qquad\text{as }h\to0.
\end{aligned}$$

Hence, $\lim_{h\to0}I_2=0$ in probability. Combining the above analysis, we conclude that
$$I_h\to\int_0^t\Big\{F_s(s,X(s))+\langle F_x(s,X(s)),b(s)\rangle_{H^*,H}+\frac12F_{xx}(s,X(s))\big(\sigma(s),\sigma(s)\big)_{L_2^0}\Big\}ds+\int_0^t\langle\!\langle F_x(s,X(s)),\sigma(s)\rangle\!\rangle_HdW(s)$$
in probability as $h\to0$, which gives the desired result.
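In one dimension with $b=0$, $\sigma=1$ and $F(x)=x^2$, Itô's formula reduces to $W(T)^2=2\int_0^TW\,dW+T$, and its discrete counterpart is an exact telescoping identity. A minimal sketch (not from the book; grid size and seed are assumptions):

```python
import numpy as np

# Discrete sketch of Ito's formula for F(x) = x^2, X = W in one dimension:
#   W(T)^2 = sum_i [ 2 W(t_i) dW_i + (dW_i)^2 ]        (exact telescoping)
# mirroring W(T)^2 = 2 int_0^T W dW + T, since sum_i (dW_i)^2 -> T.
rng = np.random.default_rng(7)
m, T = 100000, 1.0
dt = T / m
dW = rng.normal(0.0, np.sqrt(dt), m)
W = np.concatenate([[0.0], np.cumsum(dW)])

lhs = W[-1]**2
rhs = float(np.sum(2.0 * W[:-1] * dW + dW**2))  # telescopes exactly
quad_var = float(np.sum(dW**2))                 # quadratic variation, ~ T
```

The second-order term $\sum_i(dW_i)^2$ is precisely what survives as the correction $\frac12F_{xx}(\sigma,\sigma)_{L_2^0}\,ds$ in (2.134); it does not vanish because increments scale like $\sqrt{dt}$.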


2.11.2 Burkholder-Davis-Gundy Inequality

In this subsection, we shall prove the following Burkholder-Davis-Gundy inequality (for stochastic integrals), which will be quite useful in the sequel (compared to Lemma 2.136, this inequality can be regarded as a further link between Itô's and Lebesgue's integrals).

Theorem 2.140. For any $p>0$, there exists a constant $C_p>0$ such that for any $T>0$, $V$-valued $Q$-Brownian motion (resp. cylindrical Brownian motion) $W(\cdot)$ and $f\in L^p_{\mathbb F}(\Omega;L^2(0,T;L(V;H)))$ (resp. $f\in L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$),
$$E\Big(\sup_{t\in[0,T]}\Big|\int_0^tf(s)dW(s)\Big|_H^p\Big)\leq C_pE\Big(\int_0^T|f(s)Q^{\frac12}|^2_{L_2^0}ds\Big)^{\frac p2} \qquad(2.137)$$
(resp.
$$E\Big(\sup_{t\in[0,T]}\Big|\int_0^tf(s)dW(s)\Big|_H^p\Big)\leq C_pE\Big(\int_0^T|f(s)|^2_{L_2^0}ds\Big)^{\frac p2}\ \Big). \qquad(2.138)$$

Proof: We shall only deal with the case that $W(\cdot)$ is a cylindrical Brownian motion. The case that $W(\cdot)$ is a $Q$-Brownian motion can be handled similarly.

If $p=2$, then (2.138) follows from Theorem 2.135. Hence, we only need to deal with the case that $p\neq2$. Put
$$X(t)=\int_0^tf(s)dW(s),\qquad t\in[0,T]. \qquad(2.139)$$

We first consider the case that $X$ is bounded.

Step 1. In this step, we prove (2.138) for $p>2$. Let $g(x)=|x|_H^p$ for $x\in H$. Since
$$g_{xx}(x)=\begin{cases}p(p-2)|x|_H^{p-4}x\otimes x+p|x|_H^{p-2}I, & x\neq0,\\ 0, & x=0,\end{cases}$$
and recalling the definition of $\|\cdot\|$ in (2.102), we have that
$$\|g_{xx}(x)\|\leq p(p-1)|x|_H^{p-2},\qquad\forall\ x\in H.$$
By Itô's formula,
$$\begin{aligned}
E|X(t)|_H^p&=\frac12E\int_0^tg_{xx}(X)(f,f)_{L_2^0}ds\leq\frac12p(p-1)E\int_0^t|X|_H^{p-2}|f|^2_{L_2^0}ds\\
&\leq\frac12p(p-1)E\Big(\sup_{s\in[0,t]}|X(s)|_H^{p-2}\int_0^t|f(s)|^2_{L_2^0}ds\Big)\\
&\leq\frac12p(p-1)\Big[E\sup_{s\in[0,t]}|X(s)|_H^p\Big]^{\frac{p-2}{p}}\Big[E\Big(\int_0^t|f(s)|^2_{L_2^0}ds\Big)^{\frac p2}\Big]^{\frac2p}.
\end{aligned}$$


This, together with Theorem 2.135, implies that
$$E|X(t)|_H^p\leq\frac12p(p-1)\Big[\Big(\frac{p}{p-1}\Big)^pE|X(t)|_H^p\Big]^{\frac{p-2}{p}}\Big[E\Big(\int_0^t|f(s)|^2_{L_2^0}ds\Big)^{\frac p2}\Big]^{\frac2p}.$$
Hence,
$$\big(E|X(t)|_H^p\big)^{\frac2p}\leq\frac12p(p-1)\Big(\frac{p}{p-1}\Big)^{p-2}\Big[E\Big(\int_0^t|f(s)|^2_{L_2^0}ds\Big)^{\frac p2}\Big]^{\frac2p}. \qquad(2.140)$$

It follows from (2.140) and Theorem 2.135 again that
$$E\sup_{s\in[0,t]}|X(s)|_H^p\leq\Big(\frac{p}{p-1}\Big)^pE|X(t)|_H^p\leq C_pE\Big[\Big(\int_0^t|f(s)|^2_{L_2^0}ds\Big)^{\frac p2}\Big]. \qquad(2.141)$$

Step 2. In this step, we consider the case that $p\in(0,2)$. By Lemma 2.23, it is easy to see that
$$E\Big(\sup_{s\in[0,t]}|X(s)|_H^p\Big)=p\int_0^\infty\lambda^{p-1}P\Big(\sup_{s\in[0,t]}|X(s)|_H\geq\lambda\Big)d\lambda. \qquad(2.142)$$

By (2.142) and Lemma 2.136, we get that
$$E\Big(\sup_{s\in[0,t]}|X(s)|_H^p\Big)\leq p\int_0^\infty\lambda^{p-1}P\Big(\int_0^t|f|^2_{L_2^0}ds>\lambda^2\Big)d\lambda+p\int_0^\infty\lambda^{p-3}E\Big(\int_0^t|f|^2_{L_2^0}ds\wedge\lambda^2\Big)d\lambda. \qquad(2.143)$$
Write $a=\sqrt{\int_0^t|f|^2_{L_2^0}ds}$. A direct computation yields that
$$p\int_0^\infty\lambda^{p-1}P\Big(\int_0^t|f|^2_{L_2^0}ds>\lambda^2\Big)d\lambda=E\Big(\int_0^t|f|^2_{L_2^0}ds\Big)^{\frac p2} \qquad(2.144)$$
and
$$\begin{aligned}
p\int_0^\infty\lambda^{p-3}E\Big(\int_0^t|f|^2_{L_2^0}ds\wedge\lambda^2\Big)d\lambda
&=pE\int_0^a\lambda^{p-1}d\lambda+pE\Big(a^2\int_a^{+\infty}\lambda^{p-3}d\lambda\Big)\\
&=E(a^p)+\frac{p}{2-p}E(a^p)=\frac{2}{2-p}E\Big(\int_0^t|f|^2_{L_2^0}ds\Big)^{\frac p2}.
\end{aligned} \qquad(2.145)$$

From (2.143), (2.144) and (2.145), we get that
$$E\Big(\sup_{s\in[0,t]}|X(s)|_H^p\Big)\leq E\Big(\int_0^t|f|^2_{L_2^0}ds\Big)^{\frac p2}+\frac{2}{2-p}E\Big(\int_0^t|f|^2_{L_2^0}ds\Big)^{\frac p2}. \qquad(2.146)$$


This concludes that (2.138) holds for $p\in(0,2)$.

Step 3. In this step, we consider the general $X$. For each $k\in\mathbb N$, put
$$\tau_k\stackrel{\Delta}{=}\begin{cases}\inf\{t\in[0,T]\mid|X(t)|_H\geq k\}, & \text{if }\{t\in[0,T]\mid|X(t)|_H\geq k\}\neq\emptyset,\\ T, & \text{otherwise.}\end{cases}$$
Write $f_k=\chi_{[0,\tau_k]}f$. Then, by the result in Step 2, we have that
$$E\Big(\sup_{t\in[0,T]}\Big|\int_0^tf_k(s)dW(s)\Big|_H^p\Big)\leq C_pE\Big(\int_0^T|f_k(s)|^2_{L_2^0}ds\Big)^{\frac p2}. \qquad(2.147)$$

Letting $k\to+\infty$ in both sides of (2.147), by Fatou's lemma (see Theorem 2.20), we arrive at the desired inequality (2.138). This completes the proof of Theorem 2.140.

2.11.3 Stochastic Fubini Theorem

The following result is a stochastic version of Fubini's theorem.

Theorem 2.141. Let $W(\cdot)$ be a $V$-valued, $Q$-Brownian motion (resp. cylindrical Brownian motion). Let $(G,\mathcal G,\mu)$ be a finite measure space and $\Phi:(G\times(0,T)\times\Omega,\mathcal G\times\mathbf F)\to(L(V;H),\mathcal B(L(V;H)))$ (resp. $\Phi:(G\times(0,T)\times\Omega,\mathcal G\times\mathbf F)\to(L_2^0,\mathcal B(L_2^0))$) be a measurable mapping. If $\int_G\big(\int_0^T|\Phi(x,t,\cdot)|^2_{L(V;H)}dt\big)^{1/2}\mu(dx)<\infty$ (resp. $\int_G\big(\int_0^T|\Phi(x,t,\cdot)|^2_{L_2^0}dt\big)^{1/2}\mu(dx)<\infty$), a.s., then $\int_G\Phi(x,\cdot,\cdot)\mu(dx)\in L^{2,loc}_{\mathbb F}(0,T;L(V;H))$ (resp. $\int_G\Phi(x,\cdot,\cdot)\mu(dx)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$), and a.s., $\int_G\big|\int_0^T\Phi(x,t,\cdot)dW(t)\big|_H\mu(dx)<\infty$ and



$$\int_G\int_0^T\Phi(x,t,\cdot)dW(t)\mu(dx)=\int_0^T\int_G\Phi(x,t,\cdot)\mu(dx)dW(t). \qquad(2.148)$$
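When $G$ is a finite set, (2.148) is just linearity of the Itô sum, which makes it easy to check pathwise. A minimal discretized sketch (not from the book; the 3-point measure, the integrands and the seed are assumptions):

```python
import numpy as np

# Pathwise sketch of the stochastic Fubini theorem (2.148) for a finite
# measure mu on a 3-point set G: integrating each Phi(x,.) against dW and
# then averaging in x equals integrating the mu-average of Phi(x,.).
rng = np.random.default_rng(9)
m, T = 500, 1.0
dt = T / m
dW = rng.normal(0.0, np.sqrt(dt), m)
t = np.linspace(0.0, T, m, endpoint=False)

mu = np.array([0.5, 1.0, 2.0])                           # weights of mu on G
Phi = np.stack([np.sin((x + 1) * t) for x in range(3)])  # Phi(x, t), deterministic

lhs = float(sum(mu[x] * np.sum(Phi[x] * dW) for x in range(3)))
avg = np.einsum('x,xt->t', mu, Phi)                      # int_G Phi(x, t) mu(dx)
rhs = float(np.sum(avg * dW))
```

The analytical content of Theorem 2.141 is precisely that this finite-sum interchange survives the passage to a general finite measure space and $L^{2,loc}$ integrands.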

Proof: We shall only deal with the case that $W(\cdot)$ is a cylindrical Brownian motion. Let us first show that (2.148) holds for the functions in the following set:
$$\mathcal S\stackrel{\Delta}{=}\Big\{\Phi_n=\sum_{j=1}^n\sum_{k=1}^n\beta_j^n\chi_{(s_k^n,s_{k+1}^n]}\alpha_k^n\ \Big|\ n\in\mathbb N,\ \alpha_k^n\in L^2_{F_{s_k^n}}(\Omega;L_2^0),\ \beta_j^n\ \text{are real valued, bounded}\ \mathcal G\text{-measurable functions, and}\ \{s_1^n,\cdots,s_{n+1}^n\}\ \text{is a partition of}\ [0,T]\Big\}.$$
By Remark 2.132, for any $\Phi_n$ in the above form,


$$\begin{aligned}
\int_G\int_0^T\Phi_n(x,t,\cdot)dW(t)d\mu(x)
&=\int_G\sum_{j=1}^n\sum_{k=1}^n\beta_j^n(x)\alpha_k^n\big[W(s_{k+1}^n)-W(s_k^n)\big]d\mu(x)\\
&=\sum_{k=1}^n\Big[\sum_{j=1}^n\chi_{(s_k^n,s_{k+1}^n]}\int_G\beta_j^n(x)d\mu(x)\Big]\alpha_k^n\big[W(s_{k+1}^n)-W(s_k^n)\big]\\
&=\int_0^T\int_G\Phi_n(x,t,\cdot)d\mu(x)dW(t).
\end{aligned} \qquad(2.149)$$

Next, we consider the general case. We claim that there exist $\{\Phi_n\}_{n=1}^\infty\subset\mathcal S$ such that
$$\text{P-}\lim_{n\to\infty}\int_G\Big(\int_0^T|\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)=0. \qquad(2.150)$$
To see this, for each $n\in\mathbb N$, let us set
$$\Lambda_n(\cdot,\cdot,\omega)=\begin{cases}\Phi(\cdot,\cdot,\omega), & \text{if }\int_G\Big(\int_0^T|\Phi(x,t,\omega)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\leq n,\\ 0, & \text{otherwise.}\end{cases} \qquad(2.151)$$
It is easy to see that $\int_G\big(\int_0^T|\Lambda_n(x,t,\omega)|^2_{L_2^0}dt\big)^{1/2}\mu(dx)\leq n$, a.s. Hence, $\Lambda_n\in L^1_{\mathcal G}(G;L^1_{\mathbb F}(\Omega;L^2(0,T;L_2^0)))$. Fix an $\omega\in\Omega$. By (2.151), it is easy to see that
$$\lim_{n\to\infty}\int_G\Big(\int_0^T|\Lambda_n(x,t,\cdot)-\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)=0,\qquad a.s. \qquad(2.152)$$
By the definition of Bochner's integral, for each $m\in\mathbb N$, one can find $q_i(\cdot,\cdot)\in L^1_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$ $(i=1,2,\cdots,m)$ and $E_i\in\mathcal G$ such that
$$\lim_{m\to\infty}\int_G\Big|\Lambda_n(\cdot,\cdot,x)-\sum_{i=1}^mq_i(\cdot,\cdot)\chi_{E_i}(x)\Big|_{L^1_{\mathbb F}(\Omega;L^2(0,T;L_2^0))}\mu(dx)=0. \qquad(2.153)$$

2.11 Properties of Stochastic Integrals

$$\begin{aligned}
&P\Big(\Big\{\int_G\Big|\int_0^T\Phi(x,t,\cdot)dW(t)\Big|_H\mu(dx)\geq n^2\Big\}\Big)\\
&\leq P\Big(\Big\{\int_G\Big|\int_0^T(\Phi(x,t,\cdot)-\Lambda_n(x,t,\cdot))dW(t)\Big|_H\mu(dx)>0\Big\}\Big)+P\Big(\Big\{\int_G\Big|\int_0^T\Lambda_n(x,t,\cdot)dW(t)\Big|_H\mu(dx)\geq n^2\Big\}\Big)\\
&\leq P\Big(\Big\{\int_G\Big(\int_0^T|\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\geq n\Big\}\Big)+n^{-2}\int_GE\Big|\int_0^T\Lambda_n(x,t,\cdot)dW(t)\Big|_H\mu(dx)\\
&\leq P\Big(\Big\{\int_G\Big(\int_0^T|\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\geq n\Big\}\Big)+Cn^{-2}\int_GE\Big(\int_0^T|\Lambda_n(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\\
&\leq P\Big(\Big\{\int_G\Big(\int_0^T|\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\geq n\Big\}\Big)+\frac{C}{n}.
\end{aligned} \qquad(2.154)$$
Hence, $P\big(\big\{\int_G\big|\int_0^T\Phi(x,t,\cdot)dW(t)\big|_H\mu(dx)=\infty\big\}\big)=0$.

By (2.30) and $\int_G\big(\int_0^T|\Phi(x,t,\cdot)|^2_{L_2^0}dt\big)^{1/2}\mu(dx)<\infty$, a.s., it is easy to show that $\int_G\Phi(x,\cdot,\cdot)\mu(dx)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$.

Similar to (2.154), one can show that, for any $\varepsilon>0$ and $\delta>0$,

$$\begin{aligned}
&P\Big(\Big\{\Big|\int_G\int_0^T\Phi_n(x,t,\cdot)dW(t)\mu(dx)-\int_G\int_0^T\Phi(x,t,\cdot)dW(t)\mu(dx)\Big|_H\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\int_G\Big|\int_0^T\big(\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)\big)dW(t)\Big|_H\mu(dx)\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\int_G\Big(\int_0^T|\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\geq\delta\Big\}\Big)+\varepsilon^{-1}\delta.
\end{aligned} \qquad(2.155)$$

This, together with (2.150), yields that
$$\text{P-}\lim_{n\to\infty}\int_G\int_0^T\Phi_n(x,t,\cdot)dW(t)\mu(dx)=\int_G\int_0^T\Phi(x,t,\cdot)dW(t)\mu(dx). \qquad(2.156)$$
On the other hand, similarly to (2.155), one can show that

$$\begin{aligned}
&P\Big(\Big\{\Big|\int_0^T\int_G\Phi_n(x,t,\cdot)\mu(dx)dW(t)-\int_0^T\int_G\Phi(x,t,\cdot)\mu(dx)dW(t)\Big|_H\geq\varepsilon\Big\}\Big)\\
&=P\Big(\Big\{\Big|\int_0^T\Big(\int_G\big(\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)\big)\mu(dx)\Big)dW(t)\Big|_H\geq\varepsilon\Big\}\Big)\\
&\leq P\Big(\Big\{\int_0^T\Big|\int_G\big(\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)\big)\mu(dx)\Big|^2_{L_2^0}dt\geq\delta\Big\}\Big)+\varepsilon^{-1}\delta\\
&\leq P\Big(\Big\{\int_G\Big(\int_0^T\big|\Phi_n(x,t,\cdot)-\Phi(x,t,\cdot)\big|^2_{L_2^0}dt\Big)^{1/2}\mu(dx)\geq\delta^{1/2}\Big\}\Big)+\varepsilon^{-1}\delta.
\end{aligned}$$
This, together with (2.150), yields that
$$\text{P-}\lim_{n\to\infty}\int_0^T\int_G\Phi_n(x,t,\cdot)\mu(dx)dW(t)=\int_0^T\int_G\Phi(x,t,\cdot)\mu(dx)dW(t). \qquad(2.157)$$

Finally, by (2.156) and (2.157), and noting (2.149), we obtain the desired equality (2.148).

2.11.4 Itô's Formula for Itô Processes in a Weak Form

Theorem 2.139 works well for Itô processes in the (strong) form (2.131). However, this is usually too restrictive in the study of stochastic differential equations in infinite dimensions. Indeed, in the infinite dimensional setting one sometimes has to handle Itô processes in a weaker form, to be presented below.
Let $V$ be a Hilbert space such that the embedding $V\subset H$ is continuous and dense. Denote by $V^*$ the dual space of $V$ with respect to the pivot space $H$. Hence $V\subset H=H^*\subset V^*$, continuously and densely, and
\[
\langle z,v\rangle_{V,V^*}=\langle z,v\rangle_H,\qquad\forall\,v\in H,\ z\in V.
\]
We have the following Itô formula for Itô processes in the weak form (recall (2.132) for the notation $\langle\langle\cdot,\cdot\rangle\rangle_H$).

Theorem 2.142. Suppose that $X_0\in L^2_{\mathcal F_0}(\Omega;H)$, $W(\cdot)$ is a $V$-valued $Q$-Brownian motion (resp. cylindrical Brownian motion), $\phi(\cdot)\in L^2_{\mathbb F}(0,T;V^*)$, and $\Phi(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;\mathcal L(V;H)))$ (resp. $\Phi(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$) for some $p\ge 1$. Let
\[
X(t)=X_0+\int_0^t\phi(s)ds+\int_0^t\Phi(s)dW(s),\qquad t\in[0,T]. \tag{2.158}
\]
If $X\in L^2_{\mathbb F}(0,T;V)$, then $X(\cdot)\in C([0,T];H)$, a.s., and for any $t\in[0,T]$,
\[
|X(t)|^2_H=|X_0|^2_H+2\int_0^t\langle\phi(s),X(s)\rangle_{V^*,V}ds+2\int_0^t\langle\langle\Phi(s),X(s)\rangle\rangle_HdW(s)+\int_0^t|\Phi(s)Q^{1/2}|^2_{L_2^0}ds,\quad a.s. \tag{2.159}
\]
(resp.
\[
|X(t)|^2_H=|X_0|^2_H+2\int_0^t\langle\phi(s),X(s)\rangle_{V^*,V}ds+2\int_0^t\langle\langle\Phi(s),X(s)\rangle\rangle_HdW(s)+\int_0^t|\Phi(s)|^2_{L_2^0}ds,\quad a.s.\big). \tag{2.160}
\]

Proof: We only consider the case that $W(\cdot)$ is a $V$-valued, cylindrical Brownian motion.
Let us introduce the infinitesimal generators of some $C_0$-semigroups. Denote by $A$ the Riesz isometry from $V$ to $V^*$, defined by
\[
\langle u,Av\rangle_{V,V^*}=\langle u,v\rangle_V,\qquad\forall\,u,v\in V. \tag{2.161}
\]
From (2.161), we deduce that
\[
\langle u,Av\rangle_H=\langle u,v\rangle_V,\qquad\forall\,u,v\in V\ \text{such that}\ Av\in H. \tag{2.162}
\]
Write $D=\{v\in V\,|\,Av\in H\}$. Clearly, $A:D(\subset H)\to H$ is a densely defined, closed operator. Also, one can check that $A$ is self-adjoint and $\langle v,Av\rangle_H=|v|^2_V$ for any $v\in D$. Hence, by the Lumer-Phillips theorem, $-A$ generates a contraction $C_0$-semigroup on $H$. Similarly, $-A$ also generates contraction $C_0$-semigroups on $V$ and $V^*$ (here, to simplify the notation, we denote by $-A$ the "same" infinitesimal generator of three different $C_0$-semigroups).
For the identity operator $I$ on $H$ (also on $V$ and $V^*$) and any $\lambda>0$, put $I_\lambda=\lambda(\lambda I+A)^{-1}$. It is clear that $I_\lambda\to I$ strongly on $H$ (also on $V$ and $V^*$) as $\lambda\to+\infty$. Write $X_\lambda(\cdot)=I_\lambda X(\cdot)$, $X_{0,\lambda}=I_\lambda X_0$, $\phi_\lambda(\cdot)=I_\lambda\phi(\cdot)$ and $\Phi_\lambda(\cdot)=I_\lambda\Phi(\cdot)$. Clearly, $\phi_\lambda(\cdot)\in L^2_{\mathbb F}(0,T;H)$ and $\Phi_\lambda(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0))$. We claim that
\[
\begin{cases}
\lim_{\lambda\to\infty}X_{0,\lambda}=X_0 &\text{in }L^2_{\mathcal F_0}(\Omega;H),\\
\lim_{\lambda\to\infty}\phi_\lambda(\cdot)=\phi(\cdot) &\text{in }L^2_{\mathbb F}(0,T;V^*),\\
\lim_{\lambda\to\infty}\Phi_\lambda(\cdot)=\Phi(\cdot) &\text{in }L^p_{\mathbb F}(\Omega;L^2(0,T;L_2^0)).
\end{cases}\tag{2.163}
\]
Indeed, by $A\ge 0$, we see that for any $\lambda>0$, $|\lambda(\lambda+A)^{-1}|_{\mathcal L(H)}\le 1$. Hence $|\lambda(\lambda+A)^{-1}X_0-X_0|_H\le 2|X_0|_H$. This, together with Lebesgue's dominated convergence theorem, implies that $\lim_{\lambda\to\infty}\mathbb E|\lambda(\lambda+A)^{-1}X_0-X_0|^2_H=0$, which gives the first equality in (2.163). Similarly, we can prove that the second conclusion in (2.163) is true. Now, for any $\lambda>0$, $v\in V$ and $K\in L_2^0$, $|\lambda(\lambda+A)^{-1}Kv|_H\le|Kv|_H$. Thus $|\lambda(\lambda+A)^{-1}K|_{L_2^0}\le|K|_{L_2^0}$, and therefore $|\Phi_\lambda-\Phi|_{L_2^0}\le 2|\Phi|_{L_2^0}$. Hence, using Lebesgue's dominated convergence theorem again, we obtain the third equality in (2.163).
It follows from (2.158) that
\[
X_\lambda(t)=X_{0,\lambda}+\int_0^t\phi_\lambda(s)ds+\int_0^t\Phi_\lambda(s)dW(s),\qquad t\in[0,T]. \tag{2.164}
\]
According to (2.163), $X_\lambda(\cdot)$ is an Itô process in $H$. Similar to the proof of (2.163), we can show that
\[
\lim_{\lambda\to\infty}X_\lambda(\cdot)=X(\cdot)\quad\text{in }L^2_{\mathbb F}(0,T;V). \tag{2.165}
\]

Applying Theorem 2.139 to $X_\lambda(\cdot)$, we find that, for any $t\in[0,T]$, a.s.,
\[
|X_\lambda(t)|^2_H=|X_{0,\lambda}|^2_H+2\int_0^t\langle\phi_\lambda(s),X_\lambda(s)\rangle_{V^*,V}ds+2\int_0^t\langle\langle\Phi_\lambda(s),X_\lambda(s)\rangle\rangle_HdW(s)+\int_0^t|\Phi_\lambda(s)|^2_{L_2^0}ds. \tag{2.166}
\]
By the Burkholder-Davis-Gundy inequality (in Theorem 2.140), for any $\varepsilon>0$, we see that
\[
\begin{aligned}
\mathbb E\sup_{t\in[0,T]}\Big|\int_0^t\langle\langle\Phi_\lambda(s),X_\lambda(s)\rangle\rangle_HdW(s)\Big|
&\le C\mathbb E\Big(\int_0^T|\Phi_\lambda(s)|^2_{L_2^0}|X_\lambda(s)|^2_Hds\Big)^{1/2}\\
&\le C\mathbb E\Big[\sup_{t\in[0,T]}|X_\lambda(t)|_H\Big(\int_0^T|\Phi_\lambda(s)|^2_{L_2^0}ds\Big)^{1/2}\Big]\\
&\le \varepsilon\,\mathbb E\sup_{t\in[0,T]}|X_\lambda(t)|^2_H+\frac{C}{\varepsilon}\,\mathbb E\int_0^T|\Phi_\lambda(s)|^2_{L_2^0}ds.
\end{aligned}\tag{2.167}
\]
Further,
\[
\mathbb E\sup_{t\in[0,T]}\Big|\int_0^t\langle\phi_\lambda(s),X_\lambda(s)\rangle_{V^*,V}ds\Big|
\le \mathbb E\int_0^T|\phi_\lambda(s)|_{V^*}|X_\lambda(s)|_Vds
\le |\phi_\lambda(\cdot)|_{L^2_{\mathbb F}(0,T;V^*)}|X_\lambda(\cdot)|_{L^2_{\mathbb F}(0,T;V)}\le C. \tag{2.168}
\]
By (2.166)--(2.168), we find that $\mathbb E\sup_{t\in[0,T]}|X_\lambda(t)|^2_H\le C$, where the constant $C$ is independent of $\lambda$. Hence, there is a subsequence $\{\lambda_n\}_{n=1}^\infty\subset(0,+\infty)$ and an $\hat X\in L^2_{\mathbb F}(\Omega;L^\infty(0,T;H))$ such that $\lim_{n\to\infty}X_{\lambda_n}=\hat X$ weakly star in $L^2_{\mathbb F}(\Omega;L^\infty(0,T;H))$. On the other hand, since $X(\cdot)\in L^2_{\mathbb F}(0,T;V)$, we see that $\lim_{\lambda\to\infty}X_\lambda=X$ in $L^2_{\mathbb F}(0,T;V)\subset L^2_{\mathbb F}(0,T;H)$. Thus $X=\hat X\in L^2_{\mathbb F}(\Omega;L^\infty(0,T;H))$.
Letting $\lambda\to+\infty$ in (2.166) and noting (2.165), we obtain (2.160). Hence $|X(\cdot)|_H$ is continuous. This, together with the fact that $X(\cdot)$ is continuous in $V^*$, implies that $X(\cdot)$ is continuous in $H$.

Remark 2.143. When $V$ is only assumed to be a reflexive Banach space such that the embedding $V\subset H$ is continuous and dense, one can prove a result similar to Theorem 2.142, provided that there is a $C_0$-semigroup $\{S(t)\}_{t\ge 0}$ on $H$ such that the restriction of $\{S(t)\}_{t\ge 0}$ to $V$ is a $C_0$-semigroup on $V$.


2.11.5 Martingale Representation Theorem

In this subsection, we always assume that $\mathbb F=\{\mathcal F_t\}_{t\in[0,T]}$ is the natural filtration generated by the underlying Brownian motion $W(\cdot)$, which may be an $\mathbb R^m$-valued (for some $m\in\mathbb N$) Brownian motion, a $V$-valued $Q$-Brownian motion, or a $V$-valued cylindrical Brownian motion; which case is meant will be clear from the context. The goal of this subsection is to prove that any $\eta\in L^2_{\mathcal F_T}(\Omega;H)$ with mean $0$ can be represented as an Itô integral of a suitable process w.r.t. $W(\cdot)$.
We begin with some auxiliary results.

Lemma 2.144. Let $W(\cdot)=(W_1(\cdot),\cdots,W_m(\cdot))^\top$ be an $\mathbb R^m$-valued Brownian motion. Then the set
\[
\big\{\phi(W_1(t_1),\cdots,W_1(t_n),\cdots,W_m(t_1),\cdots,W_m(t_n))\;\big|\;0<t_1<\cdots<t_n\le T,\ \phi\in C_0^\infty(\mathbb R^{mn}),\ n\in\mathbb N\big\}
\]
is dense in $L^2_{\mathcal F_T}(\Omega)$.

Proof: Without loss of generality, we only consider the case $m=1$; the proof for the general case is similar.
Let $\{s_i\}_{i=1}^\infty$ be a dense subset of $(0,T]$ and, for each $n=1,2,\cdots$, let $\{s_i\}_{i=1}^n=\{t_i\}_{i=1}^n$ with $0<t_1<\cdots<t_n\le T$, and
\[
\mathcal F^n=\sigma\big(W(t_1),W(t_2)-W(t_1),\cdots,W(t_n)-W(t_{n-1})\big).
\]
Then $\mathcal F^n\subset\mathcal F^{n+1}$ and $\mathcal F_T$ is the smallest $\sigma$-algebra containing all of the $\mathcal F^n$'s. Therefore, $\mathcal F_T=\sigma\big(\bigcup_{n=1}^\infty\mathcal F^n\big)$.
Fix any $g\in L^2_{\mathcal F_T}(\Omega)$. By Theorem 2.96, we have that
\[
g=\mathbb E(g\,|\,\mathcal F_T)=\lim_{n\to\infty}\mathbb E(g\,|\,\mathcal F^n)\quad\text{in }L^2_{\mathcal F_T}(\Omega). \tag{2.169}
\]

By the Doob-Dynkin lemma (i.e., Theorem 2.13), for each $n$ we have that
\[
\mathbb E(g\,|\,\mathcal F^n)=g_n\big(W(t_1),W(t_2)-W(t_1),\cdots,W(t_n)-W(t_{n-1})\big)
\]
for some Borel measurable function $g_n:\mathbb R^n\to\mathbb R$. It suffices to prove that there is a sequence of functions $\{\phi_k^n\}_{k=1}^\infty\subset C_0^\infty(\mathbb R^n)$ such that $\phi_k^n(W(t_1),W(t_2)-W(t_1),\cdots,W(t_n)-W(t_{n-1}))\to g_n(W(t_1),W(t_2)-W(t_1),\cdots,W(t_n)-W(t_{n-1}))$ in $L^2_{\mathcal F_T}(\Omega)$ as $k\to\infty$.
To prove this, we fix an $n\in\mathbb N$ and an $\varepsilon>0$. Write $t_0=0$. By
\[
\big|g_n(W(t_1),\cdots,W(t_n)-W(t_{n-1}))\big|^2_{L^2_{\mathcal F_T}(\Omega)}
=\int_{\mathbb R^n}g_n^2(x_1,\cdots,x_n)\prod_{i=1}^n\frac{e^{-\frac{x_i^2}{2(t_i-t_{i-1})}}}{\sqrt{2\pi(t_i-t_{i-1})}}\,dx_1\cdots dx_n<\infty,
\]
we conclude that there exists a bounded, open subset $G\subset\mathbb R^n$ such that
\[
\int_{\mathbb R^n\setminus G}g_n^2(x_1,\cdots,x_n)\prod_{i=1}^n\frac{e^{-\frac{x_i^2}{2(t_i-t_{i-1})}}}{\sqrt{2\pi(t_i-t_{i-1})}}\,dx_1\cdots dx_n<\frac{\varepsilon^2}{4}. \tag{2.170}
\]
On the other hand, one can find a $\varphi\in C_0^\infty(G)$ such that
\[
\int_G|g_n(x_1,\cdots,x_n)-\varphi(x_1,\cdots,x_n)|^2\prod_{i=1}^n\frac{e^{-\frac{x_i^2}{2(t_i-t_{i-1})}}}{\sqrt{2\pi(t_i-t_{i-1})}}\,dx_1\cdots dx_n<\frac{\varepsilon^2}{4}. \tag{2.171}
\]
From (2.170) and (2.171), we obtain that $|g_n-\varphi|_{L^2_{\mathcal F_T}(\Omega)}<\varepsilon$. This completes the proof of Lemma 2.144.

Lemma 2.145. Let $F:\mathbb C^m\to\mathbb C$ be an analytic function. If $F(z_1,\cdots,z_m)=0$ on the set $\{(z_1,\cdots,z_m)\in\mathbb C^m\,|\,\operatorname{Im}z_1=\cdots=\operatorname{Im}z_m=0\}$, then $F\equiv 0$ in $\mathbb C^m$.

Proof: For simplicity, we assume that $m=1$; the proof for the general case is similar. Write $z_1=x+iy$ for $x,y\in\mathbb R$, and denote by $u$ and $v$ respectively the real and imaginary parts of $F$. Clearly, both $u$ and $v$ are real analytic functions. Since $F$ is analytic and vanishes on the real axis, the Cauchy-Riemann equations give
\[
\frac{\partial u(x,0)}{\partial x}=\frac{\partial v(x,0)}{\partial y}=0,\qquad
\frac{\partial u(x,0)}{\partial y}=-\frac{\partial v(x,0)}{\partial x}=0,\qquad\forall\,x\in\mathbb R. \tag{2.172}
\]
Thus,
\[
\frac{\partial^2 u(x,0)}{\partial x^2}=\frac{\partial^2 v(x,0)}{\partial y\partial x}=0,\qquad
\frac{\partial^2 u(x,0)}{\partial y\partial x}=-\frac{\partial^2 v(x,0)}{\partial x^2}=0,\qquad\forall\,x\in\mathbb R. \tag{2.173}
\]
From (2.172) and (2.173), we get that
\[
\frac{\partial^2 u(x,0)}{\partial y^2}=-\frac{\partial^2 v(x,0)}{\partial y\partial x}=0,\qquad
\frac{\partial^2 v(x,0)}{\partial y^2}=\frac{\partial^2 u(x,0)}{\partial y\partial x}=0,\qquad\forall\,x\in\mathbb R.
\]
Inductively, for any $j,k\in\mathbb N$,
\[
\frac{\partial^{j+k}u(x,0)}{\partial x^j\partial y^k}=\frac{\partial^{j+k}v(x,0)}{\partial x^j\partial y^k}=0,\qquad\forall\,x\in\mathbb R.
\]
Therefore, $u=v\equiv 0$. This completes the proof of Lemma 2.145.

Lemma 2.146. Let $W(\cdot)=(W_1(\cdot),\cdots,W_m(\cdot))^\top$ be an $\mathbb R^m$-valued Brownian motion. Then the set
\[
\mathcal X\stackrel{\Delta}{=}\operatorname{span}\Big\{\exp\Big(\int_0^Th(s)^\top dW(s)-\frac12\int_0^T|h(s)|^2_{\mathbb R^m}ds\Big)\ \Big|\ h\in L^2(0,T;\mathbb R^m)\Big\} \tag{2.174}
\]
is dense in $L^2_{\mathcal F_T}(\Omega)$.


Proof: Suppose that $g\in L^2_{\mathcal F_T}(\Omega)\setminus\{0\}$ is orthogonal (in $L^2_{\mathcal F_T}(\Omega)$) to all functions in $\mathcal X$. For any given $t_1,\cdots,t_n\in[0,T]$ with $0<t_1<\cdots<t_n\le T$, we define an analytic function on the complex space $\mathbb C^{mn}$ as follows:
\[
F(z)=\int_\Omega\exp\Big\{\sum_{i=1}^n\sum_{j=1}^mz_{ij}W_j(t_i)\Big\}g\,d\mathbb P,\qquad\forall\,z=(z_{11},\cdots,z_{1n},\cdots,z_{m1},\cdots,z_{mn})\in\mathbb C^{mn}. \tag{2.175}
\]
For any given $\lambda_1^0,\cdots,\lambda_n^0\in\mathbb R^m$, let us choose
\[
h=\sum_{i=1}^n\lambda_i^0\chi_{[0,t_i]}\in L^2(0,T;\mathbb R^m). \tag{2.176}
\]
By a suitable choice of $(\lambda_1^0,\cdots,\lambda_n^0)$ in (2.176), we see that
\[
F(\lambda)=\int_\Omega\exp\big\{\lambda_1^\top W(t_1)+\cdots+\lambda_n^\top W(t_n)\big\}g\,d\mathbb P=0 \tag{2.177}
\]
for all $\lambda=(\lambda_1,\cdots,\lambda_n)$ with $\lambda_1,\cdots,\lambda_n\in\mathbb R^m$ and $t_1,\cdots,t_n\in[0,T]$ with $0<t_1<\cdots<t_n\le T$.
Since $F=0$ on $\mathbb R^{mn}$ and $F$ is analytic, by Lemma 2.145 we conclude that $F\equiv 0$ in $\mathbb C^{mn}$. In particular, $F(iy)=0$ for all $y=(y_{11},\cdots,y_{1n},\cdots,y_{m1},\cdots,y_{mn})\in\mathbb R^{mn}$. But then, for any $\phi\in C_0^\infty(\mathbb R^{mn})$,
\[
\begin{aligned}
&\int_\Omega\phi\big(W_1(t_1),\cdots,W_1(t_n),\cdots,W_m(t_1),\cdots,W_m(t_n)\big)g\,d\mathbb P\\
&=(2\pi)^{-mn}\int_\Omega\Big(\int_{\mathbb R^{mn}}\hat\phi(y)\exp\Big\{\sum_{i=1}^n\sum_{j=1}^miy_{ji}W_j(t_i)\Big\}dy\Big)g\,d\mathbb P\\
&=(2\pi)^{-mn}\int_{\mathbb R^{mn}}\hat\phi(y)\Big(\int_\Omega\exp\Big\{\sum_{i=1}^n\sum_{j=1}^miy_{ji}W_j(t_i)\Big\}g\,d\mathbb P\Big)dy\\
&=(2\pi)^{-mn}\int_{\mathbb R^{mn}}\hat\phi(y)F(iy)\,dy=0,
\end{aligned}\tag{2.178}
\]

where $\hat\phi(y)=\int_{\mathbb R^{mn}}\phi(x)e^{-i\langle x,y\rangle_{\mathbb R^{mn}}}dx$ is the Fourier transform of $\phi$.
By (2.178) and Lemma 2.144, $g$ is orthogonal to a dense subset of $L^2_{\mathcal F_T}(\Omega)$ and hence $g=0$. Therefore $\mathcal X$ is dense in $L^2_{\mathcal F_T}(\Omega)$.

The following result is known as the Martingale Representation Theorem.

Theorem 2.147. Let $W(\cdot)$ be a $V$-valued $Q$-Brownian motion (resp. cylindrical Brownian motion). If $X\in L^2_{\mathcal F_T}(\Omega;H)$, then there exists one and only one $\mathbb F$-adapted process $\Phi(\cdot)\in L^2_{\mathbb F}(0,T;\mathcal L(V;H))$ (resp. $\Phi(\cdot)\in L^2_{\mathbb F}(0,T;L_2^0)$) such that
\[
X=\mathbb EX+\int_0^T\Phi(s)dW(s). \tag{2.179}
\]


Proof: We only consider the case of cylindrical Brownian motion. We divide the proof into three steps.

Step 1. We consider the case that $V=\mathbb R^m$, $H=\mathbb R$ and $W(\cdot)$ is an $\mathbb R^m$-valued, standard Brownian motion.
Assume first that $X=\exp\big\{\int_0^Th(s)^\top dW(s)-\frac12\int_0^T|h(s)|^2_{\mathbb R^m}ds\big\}$ for some $h(\cdot)\in L^2(0,T;\mathbb R^m)$. Define
\[
Y(t)=\exp\Big\{\int_0^th(s)^\top dW(s)-\frac12\int_0^t|h(s)|^2_{\mathbb R^m}ds\Big\},\qquad t\in[0,T].
\]
Then, by Itô's formula, it follows that
\[
dY(t)=Y(t)\Big[h(t)^\top dW(t)-\frac12|h(t)|^2_{\mathbb R^m}dt\Big]+\frac12Y(t)|h(t)|^2_{\mathbb R^m}dt=Y(t)h(t)^\top dW(t),
\]
which gives $Y(t)=1+\int_0^tY(s)h(s)^\top dW(s)$. This leads to
\[
X=Y(T)=1+\int_0^TY(s)h(s)^\top dW(s)
\]
and $\mathbb EX=1$. Hence, (2.179) holds for such kind of $X$'s. By linearity, (2.179) also holds for any element of $\mathcal X$ given by (2.174).
Now, for any $X\in L^2_{\mathcal F_T}(\Omega)$, by Lemma 2.146, we may approximate it in $L^2_{\mathcal F_T}(\Omega)$ by a sequence $\{X_n\}_{n=1}^\infty\subset\mathcal X$. Then, for each $n\in\mathbb N$, we have
\[
X_n=\mathbb EX_n+\int_0^T\Phi_n(s)^\top dW(s),\qquad\text{for some }\Phi_n\in L^2_{\mathbb F}(0,T;\mathbb R^m).
\]
By the third conclusion in Theorem 2.124, for any $k\in\mathbb N$,
\[
\mathbb E(X_n-X_k)^2=\mathbb E\Big[\mathbb E(X_n-X_k)+\int_0^T(\Phi_n-\Phi_k)(s)^\top dW(s)\Big]^2
=\big[\mathbb E(X_n-X_k)\big]^2+\int_0^T\mathbb E|\Phi_n-\Phi_k|^2_{\mathbb R^m}ds\to 0
\]
as $k,n\to\infty$. Hence, $\{\Phi_n\}_{n=1}^\infty$ is a Cauchy sequence in $L^2_{\mathbb F}(0,T;\mathbb R^m)$, and it converges to some $\Phi\in L^2_{\mathbb F}(0,T;\mathbb R^m)$. Hence,
\[
X=\lim_{n\to\infty}X_n=\lim_{n\to\infty}\Big[\mathbb EX_n+\int_0^T\Phi_n(s)^\top dW(s)\Big]=\mathbb EX+\int_0^T\Phi(s)^\top dW(s),
\]
the limit being taken in $L^2_{\mathcal F_T}(\Omega)$. Hence the representation (2.179) holds for all $X\in L^2_{\mathcal F_T}(\Omega)$. The uniqueness of $\Phi$ is obvious.
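A concrete one-dimensional instance of the representation (2.179) can be checked numerically: by Itô's formula, $X=W(T)^2$ has $\mathbb EX=T$ and integrand $\Phi(s)=2W(s)$. The sketch below assumes numpy; the specific $X$ is an illustrative example, not taken from the text.

```python
import numpy as np

# Martingale representation check for X = W(T)^2 with m = 1, H = R:
# X = EX + int_0^T 2 W(s) dW(s), where EX = T (by Ito's formula).
rng = np.random.default_rng(3)
T, N = 1.0, 100_000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)
W = np.concatenate(([0.0], np.cumsum(dW)))   # W on the grid, W[0] = 0

X = W[-1] ** 2
ito_sum = np.sum(2.0 * W[:-1] * dW)          # left-point (Ito) Riemann sum
gap = abs(X - (T + ito_sum))
```

The residual `gap` equals $|\sum(\Delta W_i)^2-T|$, which shrinks like $N^{-1/2}$ as the partition is refined.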


Step 2. In this step, we consider the case that $H=\mathbb R$ and $W(\cdot)$ is a $V$-valued, cylindrical Brownian motion.
By Definition 2.120, $W(\cdot)$ can be formally written as $W(\cdot)=\sum_{i=1}^\infty w_i(\cdot)e_i$, where $\{w_i(\cdot)\}_{i=1}^\infty$ is a sequence of independent real valued, standard Brownian motions, and $\{e_i\}_{i=1}^\infty$ is an orthonormal basis of $V$. Clearly, $\ell^2$ can be regarded as a subspace of $L_2(V;\mathbb R)\equiv\mathcal L(V;\mathbb R)$. Indeed, any $a=(a_1,a_2,\cdots)\in\ell^2$ defines an element of $L_2(V;\mathbb R)$ by $a(v)=\sum_{i=1}^\infty a_iv_i$ for $v=\sum_{i=1}^\infty v_ie_i$.
For each $k\in\mathbb N$, denote by $\{\mathcal F_t^k\}_{t\in[0,T]}$ the natural filtration generated by $\{w_i(\cdot)\}_{i=1}^k$. For any $X\in L^2_{\mathcal F_T}(\Omega)$, similarly to (2.169), there exists an $X_k\in L^2_{\mathcal F_T^k}(\Omega)$ such that
\[
\lim_{k\to\infty}X_k=X\quad\text{in }L^2_{\mathcal F_T}(\Omega). \tag{2.180}
\]
For every $k\in\mathbb N$, by Step 1, there is a $\Phi_k\in L^2_{\mathbb F}(0,T;\mathbb R^k)\subset L^2_{\mathbb F}(0,T;\ell^2)\subset L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))$ such that
\[
X_k=\mathbb EX_k+\int_0^T\Phi_k(s)^\top dW(s)=\mathbb EX_k+\int_0^T\Phi_k(s)dW(s). \tag{2.181}
\]
In the last term of the equality (2.181), we have regarded $\Phi_k(\cdot)$ as an element of $L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))$.
From (2.181), using conclusion 4) in Theorem 2.133, we find that, for any $k,n\in\mathbb N$,
\[
|\Phi_k-\Phi_n|^2_{L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))}
=\mathbb E\Big|\int_0^T\big(\Phi_k(s)-\Phi_n(s)\big)dW(s)\Big|^2
=\mathbb E\Big|\int_0^T\Phi_k(s)dW(s)-\int_0^T\Phi_n(s)dW(s)\Big|^2
\le\mathbb E|X_k-X_n|^2. \tag{2.182}
\]
This, together with (2.180), indicates that $\{\Phi_k\}_{k=1}^\infty$ is a Cauchy sequence in $L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))$. Denote by $\Phi$ the limit of $\{\Phi_k\}_{k=1}^\infty$. Then,
\[
X=\lim_{k\to\infty}X_k=\lim_{k\to\infty}\Big[\mathbb EX_k+\int_0^T\Phi_k(s)dW(s)\Big]=\mathbb EX+\int_0^T\Phi(s)dW(s)\quad\text{in }L^2_{\mathcal F_T}(\Omega).
\]

Step 3. In this step, we consider the general separable Hilbert space $H$.
Let $\{h_j\}_{j=1}^\infty$ be an orthonormal basis of $H$. Then $\langle X,h_j\rangle_H\in L^2_{\mathcal F_T}(\Omega)$ for $j\in\mathbb N$. Thus, by Step 2, we can find a $\phi_j\in L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))$ such that
\[
\langle X,h_j\rangle_H=\mathbb E\langle X,h_j\rangle_H+\int_0^T\phi_j(s)dW(s). \tag{2.183}
\]
Similar to (2.182), it holds that
\[
|\phi_j|^2_{L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))}=\mathbb E\Big|\int_0^T\phi_j(s)dW(s)\Big|^2\le\mathbb E|\langle X,h_j\rangle_H|^2. \tag{2.184}
\]
Therefore,
\[
X=\mathbb EX+\sum_{j=1}^\infty\Big(\int_0^T\phi_j(s)dW(s)\Big)h_j. \tag{2.185}
\]
Hence, we only need to prove that there is a $\Phi\in L^2_{\mathbb F}(0,T;L_2^0)$ such that
\[
\int_0^T\Phi(s)dW(s)=\sum_{j=1}^\infty\Big(\int_0^T\phi_j(s)dW(s)\Big)h_j. \tag{2.186}
\]
First, by (2.184), we have that
\[
\mathbb E\Big|\sum_{j=1}^\infty\Big(\int_0^T\phi_j(s)dW(s)\Big)h_j\Big|^2_H=\sum_{j=1}^\infty\mathbb E\int_0^T|\phi_j(s)|^2_{L_2(V;\mathbb R)}ds\le\mathbb E|X|^2_H.
\]
Let us define $\Phi(\cdot)$ by $\Phi(\cdot)v=\sum_{j=1}^\infty(\phi_jv)h_j$ for any $v\in V$. Then,
\[
|\Phi(\cdot)|^2_{L^2_{\mathbb F}(0,T;L_2^0)}
=\mathbb E\int_0^T\sum_{i=1}^\infty\Big|\sum_{j=1}^\infty(\phi_je_i)h_j\Big|^2_Hdt
=\mathbb E\int_0^T\sum_{i=1}^\infty\sum_{j=1}^\infty|\phi_je_i|^2\,dt
=\sum_{j=1}^\infty|\phi_j|^2_{L^2_{\mathbb F}(0,T;L_2(V;\mathbb R))}<+\infty,
\]
and the equality (2.186) holds. This completes the proof of Theorem 2.147.

Remark 2.148. By (2.179), it is easy to see that
\[
\mathbb E(X\,|\,\mathcal F_t)=\mathbb EX+\int_0^t\Phi(s)dW(s) \tag{2.187}
\]
and
\[
\mathbb E|X-\mathbb EX|^2_H=\mathbb E\int_0^T|\Phi(s)Q^{1/2}|^2_{L_2^0}ds \tag{2.188}
\]
(resp.
\[
\mathbb E|X-\mathbb EX|^2_H=\mathbb E\int_0^T|\Phi(s)|^2_{L_2^0}ds\big). \tag{2.189}
\]

As a consequence of Theorem 2.147, we have the following result, which will play a crucial role in the study of backward stochastic evolution equations.


Corollary 2.149. Let $W(\cdot)$ be a $V$-valued $Q$-Brownian motion (resp. cylindrical Brownian motion). Then, for any $f\in L^1_{\mathbb F}(0,T;L^2(\Omega;H))$, there is a unique $K(\cdot,\cdot,\cdot)\in L^1(0,T;L^2_{\mathbb F}(0,T;\mathcal L(V;H)))$ (resp. $K(\cdot,\cdot,\cdot)\in L^1(0,T;L^2_{\mathbb F}(0,T;L_2^0))$) satisfying the following conditions:
1) $K(s,\sigma,\cdot)=0$ for $\sigma>s$;
2) For a.e. $s\in[0,T]$,
\[
f(s)=\mathbb Ef(s)+\int_0^sK(s,\sigma)dW(\sigma),\qquad a.s.; \tag{2.190}
\]
3)
\[
|K(\cdot,\cdot,\cdot)|_{L^1(0,T;L^2_{\mathbb F}(0,T;\mathcal L(V;H)))}\le|f|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}\quad
\big(\text{resp. }|K(\cdot,\cdot,\cdot)|_{L^1(0,T;L^2_{\mathbb F}(0,T;L_2^0))}\le|f|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}\big). \tag{2.191}
\]

Proof: We only consider the case that $W(\cdot)$ is a cylindrical Brownian motion here.
For $f\in L^1_{\mathbb F}(0,T;L^2(\Omega;H))$, we can find a sequence $\{f_n\}_{n=1}^\infty$ of simple processes such that $f_n$ tends to $f$ in $L^1_{\mathbb F}(0,T;L^2(\Omega;H))$ as $n\to\infty$, where $f_n(s,\omega)=\sum_{i=0}^{n-1}\chi_{[t_{n,i},t_{n,i+1})}(s)\xi_{n,i}(\omega)$ for some partition $\{t_{n,0},t_{n,1},\cdots,t_{n,n}\}$ of $[0,T]$ and $\xi_{n,i}\in L^2_{\mathcal F_{t_{n,i}}}(\Omega;H)$.
By Theorem 2.147 and Remark 2.148, for every $\xi_{n,i}$, there is a $k_{n,i}\in L^2_{\mathbb F}(0,T;L_2^0)$ such that
\[
\xi_{n,i}=\mathbb E\xi_{n,i}+\int_0^{t_{n,i}}k_{n,i}(\sigma)dW(\sigma),\qquad k_{n,i}(\sigma)=0\ \text{for }\sigma>t_{n,i}. \tag{2.192}
\]
Put $K_n(s,\sigma)=\sum_{i=0}^{n-1}\chi_{[t_{n,i},t_{n,i+1})}(s)\chi_{[0,s]}(\sigma)k_{n,i}(\sigma)$. Then,
\[
f_n(s)=\mathbb Ef_n(s)+\int_0^sK_n(s,\sigma)dW(\sigma). \tag{2.193}
\]

From (2.193), we see that, for any $n,k\in\mathbb N$,
\[
\begin{aligned}
&\int_0^T\Big(\mathbb E\int_0^T|K_n(s,\sigma)-K_k(s,\sigma)|^2_{L_2^0}d\sigma\Big)^{1/2}ds
=\int_0^T\Big(\mathbb E\int_0^s|K_n(s,\sigma)-K_k(s,\sigma)|^2_{L_2^0}d\sigma\Big)^{1/2}ds\\
&=\int_0^T\Big(\mathbb E\Big|\int_0^sK_n(s,\sigma)dW(\sigma)-\int_0^sK_k(s,\sigma)dW(\sigma)\Big|^2_H\Big)^{1/2}ds\\
&=\int_0^T\Big(\mathbb E\big|f_n(s)-f_k(s)-\big(\mathbb Ef_n(s)-\mathbb Ef_k(s)\big)\big|^2_H\Big)^{1/2}ds
\le|f_n-f_k|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}.
\end{aligned}\tag{2.194}
\]
Since $\{f_n\}_{n=1}^\infty$ is a Cauchy sequence in $L^1_{\mathbb F}(0,T;L^2(\Omega;H))$, by (2.194), we find that $\{K_n\}_{n=1}^\infty$ is a Cauchy sequence in $L^1(0,T;L^2_{\mathbb F}(0,T;L_2^0))$. Thus, there is a $K\in L^1(0,T;L^2_{\mathbb F}(0,T;L_2^0))$ so that $\lim_{n\to\infty}K_n=K$ in $L^1(0,T;L^2_{\mathbb F}(0,T;L_2^0))$. This, together with (2.193), implies (2.190).
Similar to (2.194), one has
\[
\int_0^T\Big(\mathbb E\int_0^T|K_n(s,\sigma)|^2_{L_2^0}d\sigma\Big)^{1/2}ds\le|f_n|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}. \tag{2.195}
\]
Taking $n\to\infty$ in (2.195), we obtain (2.191).

2.12 Notes and Comments

Stochastic analysis is a huge subject which is still growing and flourishing. In this chapter, we only present a brief introduction to the relevant results that will be used in the rest of this book, and we omit the proofs of some classical results, which can be found in almost all standard books (e.g., [57, 164, 287]). Interested readers can find more material in references such as [58, 68, 129, 140].
Theorem 2.55 was proved in [239]. Section 2.5 was based on [242].
The proof of Theorem 2.140 (Burkholder-Davis-Gundy inequality) can be found in [68], for example. Note that the reverse of the inequality (2.137) (resp. (2.138)) in Theorem 2.140 is also true, i.e., one can prove that
\[
\mathbb E\Big(\sup_{t\in[0,T]}\Big|\int_0^tf(s)dW(s)\Big|^p_H\Big)\ge\frac{1}{C_p}\,\mathbb E\Big(\int_0^T|f(s)Q^{1/2}|^2_{L_2^0}ds\Big)^{p/2}
\]
(resp.
\[
\mathbb E\Big(\sup_{t\in[0,T]}\Big|\int_0^tf(s)dW(s)\Big|^p_H\Big)\ge\frac{1}{C_p}\,\mathbb E\Big(\int_0^T|f(s)|^2_{L_2^0}ds\Big)^{p/2}\big),
\]
but we will not use it in this book.
The readers are referred to [327] and the references therein for the general stochastic Fubini theorem. The proof of Theorem 2.142 can be found in [282, 286]. In this book, we present an elementary but long proof of the Martingale Representation Theorem (there exist shorter proofs which need more preliminaries on stochastic analysis, e.g., [160, 175]).
In this book, we use the Itô integral, which is the most common and useful way to define a stochastic integral w.r.t. a Brownian motion. There are two other typical ways to define stochastic integrals: the Stratonovich integral (e.g., [176]) and the rough path integral (e.g., [105, 250]). Stochastic integrals (including multiple stochastic integrals) w.r.t. more general stochastic processes can be found in [129, 176, 182].

3 Stochastic Evolution Equations

This book is mainly addressed to studying the control problems governed by stochastic evolution equations. In this chapter, we shall present a short introduction to the well-posedness and regularity of solutions to this sort of equations, valued in Hilbert spaces.
Throughout this chapter, $T>0$, $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ (with $\mathbb F=\{\mathcal F_t\}_{t\in[0,T]}$) is a fixed filtered probability space satisfying the usual condition, and we denote by $\mathbb F$ the progressive $\sigma$-field w.r.t. $\mathbb F$; $H$ and $V$ are two separable Hilbert spaces; we denote by $I$ the identity operator on $H$, and write $L_2^0\stackrel{\Delta}{=}L_2(V;H)$.

3.1 Stochastic Evolution Equations in Finite Dimensions

In this section, we shall consider the well-posedness of stochastic evolution equations in finite dimensions, i.e., stochastic (ordinary) differential equations.
Fix $d,n\in\mathbb N$ and $p\ge 1$. Let $W(\cdot)=(W^1(\cdot),\cdots,W^d(\cdot))^\top$ be a $d$-dimensional standard Brownian motion. We consider the following stochastic differential equation:
\[
\begin{cases}
dX(t)=F(t,X)dt+\widetilde F(t,X)dW(t) &\text{in }[0,T],\\
X(0)=X_0.
\end{cases}\tag{3.1}
\]
In (3.1), $X$ is the unknown, and the initial datum $X_0$ is a given $\mathcal F_0$-measurable $\mathbb R^n$-valued function; $F(\cdot,\cdot,\cdot):[0,T]\times\Omega\times\mathbb R^n\to\mathbb R^n$ and $\widetilde F(\cdot,\cdot,\cdot):[0,T]\times\Omega\times\mathbb R^n\to\mathbb R^{n\times d}$ are two given functions. Here and in what follows, when the context is clear, we omit the $\omega(\in\Omega)$ argument in the functions involved. In the sequel, we call $F(t,X)dt$ and $\widetilde F(t,X)dW(t)$ the drift and diffusion terms of (3.1), respectively.
To begin with, as for ordinary differential equations, one needs to define the solution to (3.1). The following is the most natural one.

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_3


Definition 3.1. An $\mathbb R^n$-valued, $\mathbb F$-adapted, continuous process $X(\cdot)$ is called a solution to (3.1) if $F(\cdot,X(\cdot))\in L^1(0,T;\mathbb R^n)$ a.s., $\widetilde F(\cdot,X(\cdot))\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^{n\times d})$, and for each $t\in[0,T]$,
\[
X(t)=X_0+\int_0^tF(s,X(s))ds+\int_0^t\widetilde F(s,X(s))dW(s),\qquad a.s. \tag{3.2}
\]

The solution to (3.1) is said to be unique if, for any other solution $Y(\cdot)$ to (3.1), one has $\mathbb P(\{X(t)=Y(t),\ t\in[0,T]\})=1$.
In the rest of this section, we assume that:
1) Both $F(\cdot,x)$ and $\widetilde F(\cdot,x)$ are $\mathbb F$-adapted for each $x\in\mathbb R^n$; and
2) $F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))$ and $\widetilde F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times d}))$, and there exist two nonnegative functions $L_1(\cdot)\in L^1(0,T)$ and $L_2(\cdot)\in L^2(0,T)$ such that for any $x,y\in\mathbb R^n$ and a.e. $t\in[0,T]$,
\[
\begin{cases}
|F(t,x)-F(t,y)|_{\mathbb R^n}\le L_1(t)|x-y|_{\mathbb R^n},\\
|\widetilde F(t,x)-\widetilde F(t,y)|_{\mathbb R^{n\times d}}\le L_2(t)|x-y|_{\mathbb R^n},
\end{cases}\qquad a.s.
\]
The fundamental existence and uniqueness result for the equation (3.1) is as follows.

Theorem 3.2. For any $X_0\in L^p_{\mathcal F_0}(\Omega;\mathbb R^n)$, the equation (3.1) admits a unique solution $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ satisfying
\[
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))}\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;\mathbb R^n)}+|F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))}+|\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times d}))}\big). \tag{3.3}
\]
Moreover, if $\widetilde X_0\in L^p_{\mathcal F_0}(\Omega;\mathbb R^n)$ is another initial datum and $\widetilde X(\cdot)$ is the corresponding solution to (3.1), then
\[
|X(\cdot)-\widetilde X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))}\le C|X_0-\widetilde X_0|_{L^p_{\mathcal F_0}(\Omega;\mathbb R^n)}. \tag{3.4}
\]

Proof: For any $T_1\in(0,T)$ and $x(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))$, we define
\[
\mathcal J(x(\cdot))(t)=X_0+\int_0^tF(s,x(s))ds+\int_0^t\widetilde F(s,x(s))dW(s),\qquad t\in[0,T_1]. \tag{3.5}
\]
By Hölder's inequality, it follows that
\[
\begin{aligned}
\mathbb E\sup_{0\le t\le T_1}\Big|\int_0^tF(s,x(s))ds\Big|^p_{\mathbb R^n}
&\le C\Big(\mathbb E\sup_{0\le t\le T_1}\Big|\int_0^tF(s,0)ds\Big|^p_{\mathbb R^n}+\mathbb E\sup_{0\le t\le T_1}\Big|\int_0^tL_1(s)|x(s)|_{\mathbb R^n}ds\Big|^p\Big)\\
&\le C\Big[\mathbb E\Big(\int_0^{T_1}|F(s,0)|_{\mathbb R^n}ds\Big)^p+|L_1(\cdot)|^p_{L^1(0,T_1)}|x(\cdot)|^p_{L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))}\Big],
\end{aligned}\tag{3.6}
\]
where $C$ is independent of $T_1$. Similarly, by the Burkholder-Davis-Gundy inequality (i.e., Theorem 2.140), we have
\[
\begin{aligned}
\mathbb E\sup_{0\le t\le T_1}\Big|\int_0^t\widetilde F(s,x(s))dW(s)\Big|^p_{\mathbb R^n}
&\le C\,\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,x(s))|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}\\
&\le C\Big[\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,x(s))-\widetilde F(s,0)|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}+\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,0)|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}\Big]\\
&\le C\Big[\mathbb E\Big(\int_0^{T_1}|L_2(s)|^2|x(s)|^2_{\mathbb R^n}ds\Big)^{p/2}+\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,0)|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}\Big]\\
&\le C\Big[|L_2(\cdot)|^p_{L^2(0,T_1)}|x(\cdot)|^p_{L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))}+\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,0)|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}\Big].
\end{aligned}\tag{3.7}
\]
Hence, by (3.5)--(3.7), we arrive at
\[
\begin{aligned}
\mathbb E\sup_{0\le t\le T_1}|\mathcal J(x(\cdot))(t)|^p_{\mathbb R^n}
\le C\Big[&\big(|L_1(\cdot)|^p_{L^1(0,T_1)}+|L_2(\cdot)|^p_{L^2(0,T_1)}\big)|x(\cdot)|^p_{L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))}\\
&+\mathbb E|X_0|^p_{\mathbb R^n}+\mathbb E\Big(\int_0^{T_1}|F(s,0)|_{\mathbb R^n}ds\Big)^p+\mathbb E\Big(\int_0^{T_1}|\widetilde F(s,0)|^2_{\mathbb R^{n\times d}}ds\Big)^{p/2}\Big].
\end{aligned}\tag{3.8}
\]
By (3.8), it is easy to see that $\mathcal J(x(\cdot))\in L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))$. Further, from the proof of (3.8), we see that
\[
|\mathcal J(x(\cdot))-\mathcal J(y(\cdot))|_{L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))}\le C\big(|L_1(\cdot)|_{L^1(0,T_1)}+|L_2(\cdot)|_{L^2(0,T_1)}\big)|x(\cdot)-y(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))} \tag{3.9}
\]
for any $x(\cdot),y(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))$. We choose a $T_1\in(0,T)$ such that
\[
C\big(|L_1(\cdot)|_{L^1(0,T_1)}+|L_2(\cdot)|_{L^2(0,T_1)}\big)<1. \tag{3.10}
\]
By (3.9), the map $\mathcal J:L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))\to L^p_{\mathbb F}(\Omega;C([0,T_1];\mathbb R^n))$ is contractive. Thus, $\mathcal J$ admits a unique fixed point, which gives a solution $X(\cdot)$ to (3.1) on $[0,T_1]$. Further, by (3.8) and noting (3.10), we see that $X(\cdot)$ satisfies the estimate (3.3) with $T$ replaced by $T_1$. Repeating this procedure, we obtain a solution $X(\cdot)$ to (3.1) on $[0,T]$ satisfying (3.3). The other conclusions in Theorem 3.2 also follow from the above argument.

Now, we consider the following linear equation:
\[
\begin{cases}
dX(t)=\big(A(t)X(t)+f(t)\big)dt+\displaystyle\sum_{i=1}^d\big(C_i(t)X(t)+g_i(t)\big)dW^i(t) &\text{in }[s,T],\\
X(s)=\xi.
\end{cases}\tag{3.11}
\]
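Before analyzing the linear equation, the fixed-point construction $\mathcal J$ used in the proof of Theorem 3.2 can be illustrated numerically. The following sketch (assuming numpy; the equation $dX=\sin(X)dt+0.1X\,dW$, horizon and step count are illustrative choices) iterates a discretization of (3.5) on one sampled Brownian path and records the sup-norm gaps between successive iterates.

```python
import numpy as np

# Picard iteration x_{k+1} = J(x_k) for the scalar SDE dX = sin(X)dt + 0.1 X dW
# (so L1 = 1, L2 = 0.1) on a short horizon, along one fixed Brownian path.
rng = np.random.default_rng(4)
T1, N, X0 = 0.25, 1000, 1.0
dt = T1 / N
dW = rng.normal(0.0, np.sqrt(dt), N)

def J(x):
    # Discretization of (3.5): X0 + int_0^t F(s,x)ds + int_0^t Ftilde(s,x)dW(s),
    # with left-point Riemann/Ito sums.
    drift = np.concatenate(([0.0], np.cumsum(np.sin(x[:-1]) * dt)))
    stoch = np.concatenate(([0.0], np.cumsum(0.1 * x[:-1] * dW)))
    return X0 + drift + stoch

x = np.full(N + 1, X0)   # initial guess: the constant path
gaps = []
for _ in range(12):
    x_new = J(x)
    gaps.append(np.max(np.abs(x_new - x)))
    x = x_new
```

Because each update at time $t_i$ only uses the iterate on $[0,t_i)$, the gaps decay factorially fast, mirroring the contraction argument above.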


Here $s\in[0,T)$, $\xi\in L^2_{\mathcal F_s}(\Omega;\mathbb R^n)$, and
\[
\begin{cases}
A(\cdot)\in L^1_{\mathbb F}(0,T;L^\infty(\Omega;\mathbb R^{n\times n})),\quad C_i(\cdot)\in L^2_{\mathbb F}(0,T;L^\infty(\Omega;\mathbb R^{n\times n})),\\
f(\cdot)\in L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n)),\quad g_i(\cdot)\in L^2_{\mathbb F}(0,T;\mathbb R^n).
\end{cases}\tag{3.12}
\]

In the rest of this section, we shall derive a representation formula for solutions to the equation (3.11). For this purpose, let us consider the following two ($\mathbb R^{n\times n}$-valued) stochastic differential equations:
\[
\begin{cases}
d\Phi(t)=A(t)\Phi(t)dt+\displaystyle\sum_{i=1}^dC_i(t)\Phi(t)dW^i(t) &\text{in }[s,T],\\
\Phi(s)=I_n,
\end{cases}\tag{3.13}
\]
and
\[
\begin{cases}
d\Psi(t)=\Psi(t)\Big(-A(t)+\displaystyle\sum_{i=1}^dC_i(t)^2\Big)dt-\displaystyle\sum_{i=1}^d\Psi(t)C_i(t)dW^i(t) &\text{in }[s,T],\\
\Psi(s)=I_n,
\end{cases}\tag{3.14}
\]
where $I_n$ is the identity matrix in $\mathbb R^{n\times n}$. By Itô's formula, we have that
\[
\begin{aligned}
d\big(\Psi(t)\Phi(t)\big)&=d\Psi(t)\,\Phi(t)+\Psi(t)\,d\Phi(t)+d\Psi(t)\,d\Phi(t)\\
&=-\Psi(t)A(t)\Phi(t)dt+\Psi(t)\sum_{i=1}^dC_i(t)^2\Phi(t)dt-\sum_{i=1}^d\Psi(t)C_i(t)\Phi(t)dW^i(t)\\
&\quad+\Psi(t)A(t)\Phi(t)dt+\sum_{i=1}^d\Psi(t)C_i(t)\Phi(t)dW^i(t)-\Psi(t)\sum_{i=1}^dC_i(t)^2\Phi(t)dt\\
&=0.
\end{aligned}
\]
This, together with $\Psi(s)\Phi(s)=I_n$, implies that $\Psi(\cdot)\Phi(\cdot)=I_n$ in $[s,T]$. Hence, $\Psi(t)=\Phi(t)^{-1}$ for all $t\in[s,T]$.
We have the following variation of constants formula for solutions to (3.11).

Theorem 3.3. Let the assumption (3.12) hold. Then, for any $\xi\in L^2_{\mathcal F_s}(\Omega;\mathbb R^n)$, the equation (3.11) admits a unique solution $X(\cdot)$, which can be represented by the following formula:
\[
X(t)=\Phi(t)\xi+\Phi(t)\int_s^t\Phi(r)^{-1}\Big(f(r)-\sum_{i=1}^dC_i(r)g_i(r)\Big)dr+\sum_{i=1}^d\Phi(t)\int_s^t\Phi(r)^{-1}g_i(r)dW^i(r),\qquad\forall\,t\in[s,T]. \tag{3.15}
\]

Proof: Write
\[
Y(t)=\xi+\int_s^t\Phi(r)^{-1}\Big(f(r)-\sum_{i=1}^dC_i(r)g_i(r)\Big)dr+\sum_{i=1}^d\int_s^t\Phi(r)^{-1}g_i(r)dW^i(r).
\]
By (3.13) and Itô's formula, it follows that
\[
d\big(\Phi(t)Y(t)\big)=\big(A(t)\Phi(t)Y(t)+f(t)\big)dt+\sum_{i=1}^d\big(C_i(t)\Phi(t)Y(t)+g_i(t)\big)dW^i(t).
\]
Also, $\Phi(s)Y(s)=\xi$. By the uniqueness of solutions to the equation (3.11), we conclude that $X(t)=\Phi(t)Y(t)$, which gives the desired result.

Remark 3.4. As we shall see later (in Theorem 3.20), the conditions $A(\cdot)\in L^1_{\mathbb F}(0,T;L^\infty(\Omega;\mathbb R^{n\times n}))$ and $C_i(\cdot)\in L^2_{\mathbb F}(0,T;L^\infty(\Omega;\mathbb R^{n\times n}))$ in (3.12) (for Theorem 3.3) can be weakened to $A(\cdot)\in L^\infty_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^{n\times n}))$ and $C_i(\cdot)\in L^\infty_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times n}))$, respectively.
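The formula (3.15) can be checked numerically in the scalar case $n=d=1$ with constant coefficients, where $\Phi(t)=\exp\{(a-c^2/2)t+cW(t)\}$ solves (3.13). The sketch below (assuming numpy; all coefficient values are illustrative) compares (3.15), with its integrals discretized on the grid, against an Euler-Maruyama solution of (3.11) driven by the same increments.

```python
import numpy as np

# Scalar check of (3.15) for dX = (aX + f)dt + (cX + g)dW, X(0) = xi, s = 0.
rng = np.random.default_rng(0)
a, c, f, g, xi = -1.0, 0.3, 1.0, 0.2, 2.0
T, N = 1.0, 200_000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, N + 1)

# Euler-Maruyama solution of the linear equation (3.11).
X = np.empty(N + 1); X[0] = xi
for i in range(N):
    X[i + 1] = X[i] + (a * X[i] + f) * dt + (c * X[i] + g) * dW[i]

# (3.15): X(t) = Phi(t)[xi + int Phi^{-1}(f - c g)dr + int Phi^{-1} g dW],
# with Phi(t) = exp((a - c^2/2)t + c W(t)) the solution of (3.13).
Phi = np.exp((a - 0.5 * c**2) * t + c * W)
drift_int = np.cumsum(np.concatenate(([0.0], (f - c * g) / Phi[:-1] * dt)))
stoch_int = np.cumsum(np.concatenate(([0.0], g / Phi[:-1] * dW)))
X_formula = Phi * (xi + drift_int + stoch_int)

err = np.max(np.abs(X - X_formula))
```

Both paths are $O(\sqrt{\Delta t})$-accurate discretizations of the same solution, so `err` is small and shrinks as the grid is refined.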

3.2 Well-Posedness of Stochastic Evolution Equations

In the rest of this chapter, $W(\cdot)$ may be a $V$-valued $Q$-Brownian motion or a cylindrical Brownian motion, but we only consider the case of cylindrical Brownian motion. We consider the following stochastic evolution equation:
\[
\begin{cases}
dX(t)=\big(AX(t)+F(t,X(t))\big)dt+\widetilde F(t,X(t))dW(t) &\text{in }(0,T],\\
X(0)=X_0.
\end{cases}\tag{3.16}
\]
Here $X_0:\Omega\to H$ is an $\mathcal F_0$-random variable, $A$ generates a $C_0$-semigroup $\{S(t)\}_{t\ge 0}$ on $H$, and $F(\cdot,\cdot):[0,T]\times\Omega\times H\to H$ and $\widetilde F(\cdot,\cdot):[0,T]\times\Omega\times H\to L_2^0$ are two given functions. Similarly to the case of finite dimensions, in the sequel we call $\big(AX(t)+F(t,X(t))\big)dt$ and $\widetilde F(t,X(t))dW(t)$ the drift and diffusion terms of (3.16), respectively.
It is easy to show the following result (and hence we omit the details).

Proposition 3.5. For any $g\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, the $H$-valued stochastic process $\int_0^\cdot S(\cdot-s)g(s)dW(s)$ has a continuous modification.¹

3.2.1 Notions of Solutions

Similar to the case of deterministic evolution equations, one has several ways to define solutions to (3.16). The most natural way seems to be the following one.

¹ In the sequel, we shall always use such a modification for $\int_0^\cdot S(\cdot-s)g(s)dW(s)$.


Definition 3.6. An $H$-valued, $\mathbb F$-adapted, continuous stochastic process $X(\cdot)$ is called a strong solution to (3.16) if
1) $X(t)\in D(A)$ for a.e. $(t,\omega)\in[0,T]\times\Omega$ and $AX(\cdot)\in L^1(0,T;H)$ a.s.;
2) $F(\cdot,X(\cdot))\in L^1(0,T;H)$ a.s., $\widetilde F(\cdot,X(\cdot))\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$; and
3) For all $t\in[0,T]$,
\[
X(t)=X_0+\int_0^t\big(AX(s)+F(s,X(s))\big)ds+\int_0^t\widetilde F(s,X(s))dW(s),\qquad a.s.
\]

Example 3.7. Let $V=\mathbb R$ and let $\{T(t)\}_{t\ge 0}$ be another $C_0$-semigroup on $H$, with infinitesimal generator $B$, such that $D(A)\subset D(B^2)$ and $S(\cdot)T(\cdot)=T(\cdot)S(\cdot)$ (in particular, we may take $B=I$). Then, by Itô's formula, a strong solution to the equation
\[
\begin{cases}
dX(t)=AX(t)dt+\dfrac12B^2X(t)dt+BX(t)dW(t) &\text{in }[0,\infty),\\
X(0)=X_0\in D(A),
\end{cases}
\]
is given by $X(t)=S(t)T(W(t))X_0$.

Unfortunately, in general the conditions which guarantee the well-posedness of (3.16) in the sense of Definition 3.6 are very restrictive. Therefore, following the idea of weak and mild solutions to deterministic evolution equations, one introduces similar notions for the stochastic counterpart (recall (2.132) for the notation $\langle\langle\cdot,\cdot\rangle\rangle_H$).

Definition 3.8. An $H$-valued, $\mathbb F$-adapted, continuous stochastic process $X(\cdot)$ is called a weak solution to (3.16) if $F(\cdot,X(\cdot))\in L^1(0,T;H)$ a.s., $\widetilde F(\cdot,X(\cdot))\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, and for any $t\in[0,T]$ and $\xi\in D(A^*)$,
\[
\langle X(t),\xi\rangle_H=\langle X_0,\xi\rangle_H+\int_0^t\big(\langle X(s),A^*\xi\rangle_H+\langle F(s,X(s)),\xi\rangle_H\big)ds+\int_0^t\langle\langle\widetilde F(s,X(s)),\xi\rangle\rangle_HdW(s),\quad a.s. \tag{3.17}
\]

Definition 3.9. An $H$-valued, $\mathbb F$-adapted, continuous stochastic process $X(\cdot)$ is called a mild solution to (3.16) if $F(\cdot,X(\cdot))\in L^1(0,T;H)$ a.s., $\widetilde F(\cdot,X(\cdot))\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, and for any $t\in[0,T]$,
\[
X(t)=S(t)X_0+\int_0^tS(t-s)F(s,X(s))ds+\int_0^tS(t-s)\widetilde F(s,X(s))dW(s),\qquad a.s. \tag{3.18}
\]
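A minimal numerical illustration of the mild-solution formula (3.18): for the one-dimensional Ornstein-Uhlenbeck equation $dX=-Xdt+dW$ (so $A=-1$, $S(t)=e^{-t}$, $F=0$, $\widetilde F=I$), the stochastic convolution $\int_0^tS(t-s)dW(s)$ is Gaussian with variance $(1-e^{-2t})/2$. The sketch below assumes numpy; all problem data are illustrative choices.

```python
import numpy as np

# Monte Carlo check of the stochastic convolution in (3.18) for the scalar
# OU equation: Var[int_0^T e^{-(T-s)} dW(s)] = (1 - e^{-2T})/2.
rng = np.random.default_rng(1)
T, N, M = 1.0, 500, 5000          # horizon, time steps, sample paths
dt = T / N
s = np.linspace(0.0, T, N, endpoint=False)        # left endpoints of the grid
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))    # independent increments
conv = (np.exp(-(T - s)) * dW).sum(axis=1)        # discretized convolution
var_est = conv.var()
var_exact = (1.0 - np.exp(-2.0 * T)) / 2.0
```

The estimate carries both a Monte Carlo error of order $M^{-1/2}$ and a discretization bias of order $\Delta t$, so the agreement improves as $M$ and $N$ grow.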


It is easy to see that, if an $H$-valued, $\mathbb F$-adapted, continuous stochastic process $X(\cdot)$ is a strong solution to (3.16), then it is also a weak solution to the same equation. The following result shows the equivalence between weak and mild solutions to (3.16).

Theorem 3.10. An $H$-valued, $\mathbb F$-adapted, continuous stochastic process $X(\cdot)$ is a weak solution to (3.16) if and only if it is a mild solution to the same equation.

Proof: The "only if" part. Since $X(\cdot)$ is a weak solution to (3.16), for any fixed $\phi\in D((A^*)^2)$, $t\in[0,T]$ and $r\in[t,T]$, we choose $\xi=S^*(r-t)A^*\phi$ in (3.17), and obtain that
\[
\begin{aligned}
\langle X(t),S^*(r-t)A^*\phi\rangle_H
&=\langle X_0,S^*(r-t)A^*\phi\rangle_H+\int_0^t\langle X(s),A^*S^*(r-t)A^*\phi\rangle_Hds\\
&\quad+\int_0^t\langle F(s,X(s)),S^*(r-t)A^*\phi\rangle_Hds+\int_0^t\langle\langle\widetilde F(s,X(s)),S^*(r-t)A^*\phi\rangle\rangle_HdW(s).
\end{aligned}\tag{3.19}
\]

Integrating (3.19) w.r.t. t from 0 to r, we obtain that ∫ r ⟨ ⟩ X(t), S ∗ (r − t)A∗ ϕ H dt 0



⟨ ⟩ X0 , S ∗ (r − t)A∗ ϕ H dt +

r

= 0



r



t

+ ∫

0 r



r



0

t 0

t



X(s), A∗ S ∗ (r − t)A∗ ϕ

0

F (s, X(s)), S ∗ (r − t)A∗ ϕ

0

+ 0





⟩ H

⟩ H

dsdt

dsdt

⟨⟨ Fe (s, X(s)), S ∗ (r − t)A∗ ϕ ⟩⟩H dW (s)dt. (3.20)

A direct computation shows that
$$
\int_0^r \big\langle X_0,S^*(r-t)A^*\phi\big\rangle_H dt
= \int_0^r \big\langle AS(r-t)X_0,\phi\big\rangle_H dt
= \big\langle S(r)X_0,\phi\big\rangle_H - \big\langle X_0,\phi\big\rangle_H. \eqno(3.21)
$$
By Fubini's theorem (i.e., Theorem 2.21), we find that


3 Stochastic Evolution Equations



$$
\begin{aligned}
\int_0^r\!\!\int_0^t \big\langle X(s),A^*S^*(r-t)A^*\phi\big\rangle_H ds\,dt
&= \int_0^r\!\!\int_s^r \big\langle X(s),A^*S^*(r-t)A^*\phi\big\rangle_H dt\,ds\\
&= \int_0^r \big\langle X(s),S^*(r-s)A^*\phi\big\rangle_H ds - \int_0^r \big\langle X(s),A^*\phi\big\rangle_H ds
\end{aligned} \eqno(3.22)
$$

and
$$
\int_0^r\!\!\int_0^t \big\langle F(s,X(s)),S^*(r-t)A^*\phi\big\rangle_H ds\,dt
= \int_0^r \big\langle F(s,X(s)),S^*(r-s)\phi\big\rangle_H ds - \int_0^r \big\langle F(s,X(s)),\phi\big\rangle_H ds. \eqno(3.23)
$$

By the stochastic Fubini theorem (i.e., Theorem 2.141), we see that
$$
\begin{aligned}
\int_0^r\!\!\int_0^t \big\langle\!\big\langle \widetilde F(s,X(s)),S^*(r-t)A^*\phi\big\rangle\!\big\rangle_H dW(s)\,dt
&= \int_0^r\!\!\int_s^r \big\langle\!\big\langle \widetilde F(s,X(s)),S^*(r-t)A^*\phi\big\rangle\!\big\rangle_H dt\,dW(s)\\
&= \int_0^r \big\langle\!\big\langle \widetilde F(s,X(s)),S^*(r-s)\phi\big\rangle\!\big\rangle_H dW(s) - \int_0^r \big\langle\!\big\langle \widetilde F(s,X(s)),\phi\big\rangle\!\big\rangle_H dW(s)\\
&= \Big\langle \int_0^r S(r-s)\widetilde F(s,X(s))\,dW(s),\,\phi\Big\rangle_H - \int_0^r \big\langle\!\big\langle \widetilde F(s,X(s)),\phi\big\rangle\!\big\rangle_H dW(s).
\end{aligned} \eqno(3.24)
$$

From (3.20)–(3.24) and using (3.17) again, we end up with
$$
\Big\langle X(r) - S(r)X_0 - \int_0^r S(r-s)F(s,X(s))\,ds - \int_0^r S(r-s)\widetilde F(s,X(s))\,dW(s),\ \phi\Big\rangle_H = 0. \eqno(3.25)
$$
Since $D((A^*)^2)$ is dense in $H$, (3.25) also holds for any $\phi\in H$, which indicates that $X(\cdot)$ is a mild solution to (3.16).

The "if" part. Assume that $X(\cdot)$ is a mild solution to (3.16). Then for any $\xi\in D(A^*)$ and $r\in[0,T]$, we have that
$$
\Big\langle X(r) - S(r)X_0 - \int_0^r S(r-s)F(s,X(s))\,ds - \int_0^r S(r-s)\widetilde F(s,X(s))\,dW(s),\ A^*\xi\Big\rangle_H = 0. \eqno(3.26)
$$

Fix t ∈ [0, T ]. Integrating (3.26) from 0 to t w.r.t. r, we find that




$$
\begin{aligned}
\int_0^t \big\langle X(r),A^*\xi\big\rangle_H dr
&= \int_0^t \big\langle S(r)X_0,A^*\xi\big\rangle_H dr
+ \int_0^t\!\!\int_0^r \big\langle S(r-s)F(s,X(s)),A^*\xi\big\rangle_H ds\,dr\\
&\quad + \int_0^t\!\!\int_0^r \big\langle\!\big\langle S(r-s)\widetilde F(s,X(s)),A^*\xi\big\rangle\!\big\rangle_H dW(s)\,dr.
\end{aligned} \eqno(3.27)
$$
Clearly,
$$
\int_0^t \big\langle S(r)X_0,A^*\xi\big\rangle_H dr = \big\langle S(t)X_0,\xi\big\rangle_H - \big\langle X_0,\xi\big\rangle_H. \eqno(3.28)
$$

By Fubini's theorem, we find that
$$
\begin{aligned}
\int_0^t\!\!\int_0^r \big\langle S(r-s)F(s,X(s)),A^*\xi\big\rangle_H ds\,dr
&= \int_0^t\!\!\int_s^t \big\langle AS(r-s)F(s,X(s)),\xi\big\rangle_H dr\,ds\\
&= \int_0^t \big\langle S(t-s)F(s,X(s)),\xi\big\rangle_H ds - \int_0^t \big\langle F(s,X(s)),\xi\big\rangle_H ds.
\end{aligned} \eqno(3.29)
$$

Thanks to the stochastic Fubini theorem, we see that
$$
\begin{aligned}
\int_0^t\!\!\int_0^r \big\langle\!\big\langle S(r-s)\widetilde F(s,X(s)),A^*\xi\big\rangle\!\big\rangle_H dW(s)\,dr
&= \int_0^t\!\!\int_s^t \big\langle\!\big\langle AS(r-s)\widetilde F(s,X(s)),\xi\big\rangle\!\big\rangle_H dr\,dW(s)\\
&= \int_0^t \big\langle\!\big\langle S(t-s)\widetilde F(s,X(s)),\xi\big\rangle\!\big\rangle_H dW(s) - \int_0^t \big\langle\!\big\langle \widetilde F(s,X(s)),\xi\big\rangle\!\big\rangle_H dW(s).
\end{aligned} \eqno(3.30)
$$
From (3.27)–(3.30) and noting (3.26), we see that $X(\cdot)$ satisfies (3.17). Hence, $X(\cdot)$ is a weak solution to (3.16).

The following result provides a sufficient condition for a mild solution to be a strong solution to (3.16).

Lemma 3.11. A mild solution $X(\cdot)$ to (3.16) is a strong solution (to the same equation) if the following three conditions hold for all $x\in H$ and $0\le s\le t\le T$, a.s.:
1) $X_0\in D(A)$, $S(t-s)F(s,x)\in D(A)$ and $S(t-s)\widetilde F(s,x)\in L_2(V;D(A))$;
2) $|AS(t-s)F(s,x)|_H \le \alpha(t-s)|x|_H$ for some real-valued stochastic process $\alpha(\cdot)$ with $\alpha(\cdot)\in L^1(0,T)$ a.s.; and
3) $|AS(t-s)\widetilde F(s,x)|_{L^0_2} \le \beta(t-s)|x|_H$ for some real-valued stochastic process $\beta(\cdot)\in L^{2,loc}_{\mathbb F}(0,T)$.
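A side remark (not from the text, and assuming additionally that $A$ generates an analytic semigroup with fractional powers of $-A$ available): conditions 2)–3) are a genuine restriction, since for analytic semigroups only a non-integrable blow-up bound on $AS(t)$ is available, so $F$ and $\widetilde F$ must in practice take values in intermediate spaces.

```latex
% For an analytic semigroup one only has the generator bound
\[
  |AS(t)|_{\mathcal L(H)} \le \frac{C}{t}, \qquad t\in(0,T],
  \qquad \text{and } \frac{C}{t}\notin L^1(0,T),
\]
% so an H-valued F would force alpha(t) = C/t, violating condition 2).
% If instead F(s,x) takes values in D((-A)^gamma) for some gamma in (0,1],
% condition 2) holds with the integrable weight t^{gamma-1}:
\[
  |AS(t-s)F(s,x)|_H
  \le \big|AS(t-s)(-A)^{-\gamma}\big|_{\mathcal L(H)}\,
      \big|(-A)^{\gamma}F(s,x)\big|_H
  \le C\,(t-s)^{\gamma-1}\,\big|(-A)^{\gamma}F(s,x)\big|_H .
\]
```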


Proof: Since $X(\cdot)$ is an $H$-valued, continuous stochastic process, by the conditions in Lemma 3.11, we have
$$
\int_0^T\!\!\int_0^t \big|AS(t-r)F(r,X(r))\big|_H\,dr\,dt < \infty, \quad \text{a.s.}
$$
and
$$
\int_0^T\!\!\int_0^t \big|AS(t-r)\widetilde F(r,X(r))\big|^2_{L^0_2}\,dr\,dt < \infty, \quad \text{a.s.}
$$
By Fubini's theorem, we have
$$
\int_0^t\!\!\int_0^s AS(s-r)F(r,X(r))\,dr\,ds = \int_0^t\!\!\int_r^t AS(s-r)F(r,X(r))\,ds\,dr
= \int_0^t S(t-r)F(r,X(r))\,dr - \int_0^t F(r,X(r))\,dr.
$$
On the other hand, by Theorem 2.141, we have that
$$
\int_0^t\!\!\int_0^s AS(s-r)\widetilde F(r,X(r))\,dW(r)\,ds = \int_0^t\!\!\int_r^t AS(s-r)\widetilde F(r,X(r))\,ds\,dW(r)
= \int_0^t S(t-r)\widetilde F(r,X(r))\,dW(r) - \int_0^t \widetilde F(r,X(r))\,dW(r).
$$

Therefore, by (3.18), $AX(\cdot)$ is integrable a.s. and
$$
\begin{aligned}
\int_0^t AX(s)\,ds
&= S(t)X_0 - X_0 + \int_0^t S(t-r)F(r,X(r))\,dr - \int_0^t F(r,X(r))\,dr\\
&\quad + \int_0^t S(t-r)\widetilde F(r,X(r))\,dW(r) - \int_0^t \widetilde F(r,X(r))\,dW(r)\\
&= X(t) - X_0 - \int_0^t F(r,X(r))\,dr - \int_0^t \widetilde F(r,X(r))\,dW(r).
\end{aligned}
$$

This indicates that $X(\cdot)$ is a strong solution to (3.16).

3.2.2 Well-Posedness in the Sense of Mild Solution

In the rest of this section, we will focus on the well-posedness of stochastic evolution equations in the sense of mild solution. Let us begin with the following result, which is an easy consequence of the Burkholder-Davis-Gundy inequality (in Theorem 2.140).

Proposition 3.12. For any $p\ge 1$, there exists a constant $C_p>0$ such that for any $g\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$ and $t\in[0,T]$,
$$
\mathbb E\,\Big|\int_0^t S(t-s)g(s)\,dW(s)\Big|_H^p \le C_p\,\mathbb E\Big(\int_0^t |g(s)|^2_{L^0_2}\,ds\Big)^{\frac p2}. \eqno(3.31)
$$
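A Monte Carlo sanity check of (3.31) is easy to run in the simplest scalar case. The choices below are illustrative assumptions: the contraction semigroup $S(t)=e^{-t}$ on $H=\mathbb{R}$, the constant integrand $g\equiv1$, and $p=2$, for which the Itô isometry shows $C_2=1$ suffices.

```python
import numpy as np

# Check E |int_0^t e^{-(t-s)} dW(s)|^2 <= int_0^t |g|^2 ds with g = 1.
rng = np.random.default_rng(2)
t, n, paths = 1.0, 400, 20_000
dt = t / n
s = (np.arange(n) + 0.5) * dt
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
# discretized stochastic convolution int_0^t e^{-(t-s)} dW(s)
Z = (np.exp(-(t - s)) * dW).sum(axis=1)

lhs = (Z**2).mean()                     # Monte Carlo left-hand side
exact = (1.0 - np.exp(-2.0 * t)) / 2.0  # Ito isometry value
rhs = t                                 # int_0^t |g|^2 ds, so C_2 = 1 works
print(lhs, exact, rhs)
```

For $p\ne2$ the constant $C_p$ comes from the Burkholder-Davis-Gundy inequality rather than the isometry, but the same experiment applies.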

We consider first the following linear equation:
$$
\begin{cases}
dX(t) = \big(AX(t) + \mathcal A X(t) + f(t)\big)dt + \big(\mathcal B X(t) + g(t)\big)dW(t) & \text{in } (0,T],\\
X(0) = X_0.
\end{cases} \eqno(3.32)
$$
Here $X_0$, $\mathcal A$, $\mathcal B$, $f(\cdot)$ and $g(\cdot)$ are suitable vector/operator valued functions to be specified later. We have the following well-posedness result for (3.32) with $\mathcal A=0$ and $\mathcal B=0$.

Theorem 3.13. Let $X_0\in L^p_{\mathcal F_0}(\Omega;H)$ for some $p\ge 1$, $\mathcal A=0$, $\mathcal B=0$, $f(\cdot)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $g(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$. Then, the equation (3.32) admits a unique mild solution $X(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;H))$. Moreover,
$$
|X(\cdot)|_{C_{\mathbb F}([0,T];L^p(\Omega;H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.33)
$$

Proof: Clearly,
$$
X(t) = S(t)X_0 + \int_0^t S(t-s)f(s)\,ds + \int_0^t S(t-s)g(s)\,dW(s) \eqno(3.34)
$$
is a mild solution to (3.32). Now, we prove that $X(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;H))$. First, by Proposition 3.12, for any $t\in[0,T]$,
$$
\begin{aligned}
|X(t)|_{L^p_{\mathcal F_t}(\Omega;H)}
&= \Big|S(t)X_0 + \int_0^t S(t-s)f(s)\,ds + \int_0^t S(t-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_t}(\Omega;H)}\\
&\le \big|S(\cdot)X_0\big|_{L^p_{\mathcal F_t}(\Omega;H)} + \Big|\int_0^t S(t-s)f(s)\,ds\Big|_{L^p_{\mathcal F_t}(\Omega;H)} + \Big|\int_0^t S(t-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_t}(\Omega;H)}\\
&\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,t;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,t;L^0_2))}\big).
\end{aligned} \eqno(3.35)
$$
Hence,
$$
|X(\cdot)|_{L^\infty_{\mathbb F}(0,T;L^p(\Omega;H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.36)
$$
Obviously, $S(\cdot)X_0 + \int_0^\cdot S(\cdot-s)f(s)\,ds \in C_{\mathbb F}([0,T];L^p(\Omega;H))$. It remains to show that $\int_0^\cdot S(\cdot-s)g(s)\,dW(s) \in C_{\mathbb F}([0,T];L^p(\Omega;H))$. For this purpose, for any $t_1$ and $t_2$ satisfying $0\le t_1\le t_2\le T$, we have




$$
\begin{aligned}
\Big|\int_0^{t_2} S(t_2-s)g(s)\,dW(s) - \int_0^{t_1} S(t_1-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)}
&\le \Big|\int_{t_1}^{t_2} S(t_2-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)}\\
&\quad + \Big|\int_0^{t_1} S(t_1-s)\big[S(t_2-t_1)-I\big]g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)}.
\end{aligned}
$$
By Proposition 3.12, it follows that
$$
\Big|\int_{t_1}^{t_2} S(t_2-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)} \le C\,|g|_{L^p_{\mathbb F}(\Omega;L^2(t_1,t_2;L^0_2))}.
$$

Since $\int_{t_1}^{t_2}|g(s)|^2_{L^0_2}\,ds \le \int_0^{t_2}|g(s)|^2_{L^0_2}\,ds$, by Lebesgue's dominated convergence theorem, we find that
$$
\lim_{t_1\to t_2^-}\Big|\int_{t_1}^{t_2} S(t_2-s)g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)}
\le C\lim_{t_1\to t_2^-}\Big[\mathbb E\Big(\int_{t_1}^{t_2}|g(s)|^2_{L^0_2}\,ds\Big)^{\frac p2}\Big]^{\frac1p} = 0. \eqno(3.37)
$$
Likewise,
$$
\lim_{t_1\to t_2^-}\Big|\int_0^{t_1} S(t_1-s)\big[S(t_2-t_1)-I\big]g(s)\,dW(s)\Big|_{L^p_{\mathcal F_{t_2}}(\Omega;H)} = 0. \eqno(3.38)
$$

By (3.34) and (3.37)–(3.38), we conclude that $\lim_{t\to t_0^-}|X(t)-X(t_0)|_{L^p(\Omega;H)} = 0$ for any $t_0\in[0,T]$. Similarly, $\lim_{t\to t_0^+}|X(t)-X(t_0)|_{L^p(\Omega;H)} = 0$. By (3.36), we obtain (3.33). This completes the proof of Theorem 3.13.

In the sequel, we need the following assumption on the nonlinearities in (3.16).

Condition 3.1. $F(\cdot,\cdot):[0,T]\times\Omega\times H\to H$ and $\widetilde F(\cdot,\cdot):[0,T]\times\Omega\times H\to L^0_2$ are two given functions satisfying:
1) Both $F(\cdot,x)$ and $\widetilde F(\cdot,x)$ are $\mathbf F$-adapted for each $x\in H$; and
2) There exist two nonnegative (real-valued) functions $L_1(\cdot)\in L^1(0,T)$ and $L_2(\cdot)\in L^2(0,T)$ such that, for any $x,y\in H$ and a.e. $t\in[0,T]$,
$$
\begin{cases}
|F(t,x)-F(t,y)|_H \le L_1(t)|x-y|_H,\\
|\widetilde F(t,x)-\widetilde F(t,y)|_{L^0_2} \le L_2(t)|x-y|_H,
\end{cases} \quad \text{a.s.} \eqno(3.39)
$$
We have the following well-posedness result for (3.16).


Theorem 3.14. Let Condition 3.1 hold, and let $F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $\widetilde F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$ for some $p\ge 2$. Then, for any $X_0\in L^p_{\mathcal F_0}(\Omega;H)$, the equation (3.16) admits a unique mild solution $X(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;H))$. Moreover,
$$
|X(\cdot)|_{C_{\mathbb F}([0,T];L^p(\Omega;H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.40)
$$

Proof: Fix any $T_1\in[0,T]$. For any $Y\in C_{\mathbb F}([0,T_1];L^p(\Omega;H))$, we consider the following equation:
$$
\begin{cases}
dX(t) = \big(AX(t) + F(t,Y(t))\big)dt + \widetilde F(t,Y(t))\,dW(t) & \text{in } (0,T_1],\\
X(0) = X_0.
\end{cases} \eqno(3.41)
$$
By Condition 3.1 and noting $L^1_{\mathbb F}(0,T_1;L^p(\Omega;H))\subset L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))$, we see that
$$
\begin{aligned}
|F(\cdot,Y(\cdot))|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))}
&\le |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + C\,\big|L_1(\cdot)|Y(\cdot)|_H\big|_{L^1_{\mathbb F}(0,T_1;L^p(\Omega))}\\
&\le |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + C\,|L_1(\cdot)|_{L^1(0,T_1)}|Y(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}.
\end{aligned} \eqno(3.42)
$$
Using Condition 3.1 again and noting $p\ge2$ (and hence $L^2_{\mathbb F}(0,T_1;L^p(\Omega;L^0_2))\subset L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))$), we see that
$$
\begin{aligned}
|\widetilde F(\cdot,Y(\cdot))|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}
&\le |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))} + C\,\big|L_2(\cdot)|Y(\cdot)|_H\big|_{L^2_{\mathbb F}(0,T_1;L^p(\Omega))}\\
&\le |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))} + C\,|L_2(\cdot)|_{L^2(0,T_1)}|Y(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}.
\end{aligned} \eqno(3.43)
$$
By (3.42)–(3.43) and Theorem 3.13, the equation (3.41) admits a unique mild solution $X(\cdot)\in C_{\mathbb F}([0,T_1];L^p(\Omega;H))$. We define a map $\mathcal J: C_{\mathbb F}([0,T_1];L^p(\Omega;H))\to C_{\mathbb F}([0,T_1];L^p(\Omega;H))$ by $\mathcal J(Y)=X$.

Now we show that the map $\mathcal J$ is contractive provided that $T_1$ is small enough. Indeed, for another $\widehat Y\in C_{\mathbb F}([0,T_1];L^p(\Omega;H))$, we define $\widehat X=\mathcal J(\widehat Y)$. Write $\widetilde X(\cdot)=X(\cdot)-\widehat X(\cdot)$, $F_1(\cdot)=F(\cdot,Y(\cdot))-F(\cdot,\widehat Y(\cdot))$ and $\widetilde F_1(\cdot)=\widetilde F(\cdot,Y(\cdot))-\widetilde F(\cdot,\widehat Y(\cdot))$. Clearly, $\widetilde X(\cdot)$ solves the following equation:
$$
\begin{cases}
d\widetilde X(t) = \big(A\widetilde X(t) + F_1(t)\big)dt + \widetilde F_1(t)\,dW(t) & \text{in } (0,T_1],\\
\widetilde X(0) = 0.
\end{cases} \eqno(3.44)
$$
Similar to (3.42)–(3.43), we have
$$
|F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} \le C\,|L_1(\cdot)|_{L^1(0,T_1)}|Y(\cdot)-\widehat Y(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))} \eqno(3.45)
$$


and
$$
|\widetilde F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))} \le C\,|L_2(\cdot)|_{L^2(0,T_1)}|Y(\cdot)-\widehat Y(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}. \eqno(3.46)
$$
Applying Theorem 3.13 to (3.44) and noting (3.45)–(3.46), we find that
$$
\begin{aligned}
|\widetilde X(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}
&\le C\big(|F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big)\\
&\le C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|Y(\cdot)-\widehat Y(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}.
\end{aligned}
$$
Choose $T_1$ so that $C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big) < 1$. Then, $\mathcal J$ is contractive. By means of the Banach fixed point theorem, $\mathcal J$ enjoys a unique fixed point $X(\cdot)\in C_{\mathbb F}([0,T_1];L^p(\Omega;H))$. It is clear that $X(\cdot)$ is a mild solution to the following equation:
$$
\begin{cases}
dX(t) = \big(AX(t) + F(t,X(t))\big)dt + \widetilde F(t,X(t))\,dW(t) & \text{in } (0,T_1],\\
X(0) = X_0.
\end{cases} \eqno(3.47)
$$
By (3.42)–(3.43), we see that $F(\cdot,X(\cdot))\in L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))$, $\widetilde F(\cdot,X(\cdot))\in L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))$, and
$$
\begin{aligned}
&|F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\\
&\quad\le |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}
+ C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|X(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}.
\end{aligned}
$$
Therefore, using Theorem 3.13 again, we find that
$$
\begin{aligned}
|X(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}
&\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big)\\
&\le C\Big[|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\\
&\qquad + \big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|X(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}\Big].
\end{aligned} \eqno(3.48)
$$
Since $C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big) < 1$, it follows from (3.48) that
$$
|X(\cdot)|_{C_{\mathbb F}([0,T_1];L^p(\Omega;H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big). \eqno(3.49)
$$
Repeating the above argument, we obtain a mild solution to the equation (3.16) on the whole interval $[0,T]$. The uniqueness of such a solution to (3.16) is obvious. The desired estimate (3.40) follows from (3.49). This completes the proof of Theorem 3.14.
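The Picard (Banach fixed point) iteration underlying the proof of Theorem 3.14 can be watched converging numerically. The sketch below is an illustrative assumption, not the book's Hilbert-space setting: a scalar SDE $dX = \sin(X)\,dt + 0.5X\,dW$ along one fixed Brownian path, with the fixed-point map $\mathcal J$ discretized by left-point Riemann/Itô sums.

```python
import numpy as np

# Picard iteration for the mild (integral) form of
#   X(t) = x0 + int_0^t sin(X) ds + int_0^t 0.5 X dW
# along one fixed Brownian path (equation and grid are assumptions).
rng = np.random.default_rng(3)
T, n, x0 = 0.5, 2000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

Y = np.full(n + 1, x0)   # initial guess: the constant path
gaps = []
for _ in range(6):
    drift = np.concatenate(([0.0], np.cumsum(np.sin(Y[:-1]) * dt)))
    noise = np.concatenate(([0.0], np.cumsum(0.5 * Y[:-1] * dW)))
    Y_new = x0 + drift + noise
    gaps.append(float(np.max(np.abs(Y_new - Y))))
    Y = Y_new

print(gaps)  # successive sup-norm gaps between iterates shrink
```

In the proof the contraction is obtained in the norm of $C_{\mathbb F}([0,T_1];L^p(\Omega;H))$ for $T_1$ small; here the same mechanism appears pathwise.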


Remark 3.15. In the above proof (of Theorem 3.14), $p\ge 2$ is a key assumption. Otherwise, we cannot show that
$$
\widetilde F(\cdot,X(\cdot))\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2)), \qquad \forall\, X(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;H)).
$$
As far as we know, it is an unsolved problem to establish a well-posedness result (as that in Theorem 3.14) for the general stochastic evolution equation (3.16) without the assumption $p\ge 2$.

3.3 Regularity of Mild Solutions to Stochastic Evolution Equations

In this section, we shall consider the time/space regularities of mild solutions to the equation (3.16).

3.3.1 Burkholder-Davis-Gundy Type Inequality and Time Regularity

By comparing Theorems 3.2 and 3.14, we see that the time regularity in the solution space $L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ for finite dimensions is a little better than that in $C_{\mathbb F}([0,T];L^p(\Omega;H))$ for the general stochastic evolution equation (3.16). It is easy to see that the difference comes from the use of a Burkholder-Davis-Gundy type inequality for the stochastic convolution $\int_0^\cdot S(\cdot-s)g(s)\,dW(s)$.

We consider first the linear equation (3.32). For this, we introduce some notations. Fix $p\ge1$ and $q\ge1$. Let $H_1$ and $H_2$ be two Hilbert spaces. Write
$$
\Upsilon_{p,q}(H_1;H_2) \stackrel{\Delta}{=} \Big\{ J(\cdot,\cdot)\in L_{pd}\big(L^p_{\mathbb F}(\Omega;L^\infty(0,T;H_1));\,L^p_{\mathbb F}(\Omega;L^q(0,T;H_2))\big) \ \Big|\ |J(\cdot,\cdot)|_{\mathcal L(H_1;H_2)}\in L^\infty_{\mathbb F}(\Omega;L^q(0,T)) \Big\}. \eqno(3.50)
$$
In the sequel, we shall simply denote $\Upsilon_{p,p}(H_1;H_2)$ (resp. $\Upsilon_{p,p}(H;H)$) by $\Upsilon_p(H_1;H_2)$ (resp. $\Upsilon_p(H)$).

Remark 3.16. Note that, for any $J(\cdot,\cdot)\in\Upsilon_{p,q}(H_1;H_2)$, one does not need to have $J(\cdot,\cdot)\in L^\infty_{\mathbb F}(\Omega;L^p(0,T;\mathcal L(H_1;H_2)))$. Nevertheless, in some sense $\Upsilon_{p,q}(H_1;H_2)$ is a nice "replacement" of the space $L^\infty_{\mathbb F}(\Omega;L^q(0,T;\mathcal L(H_1;H_2)))$.

Also, we introduce the following assumption:

Condition 3.2. There is a constant $C_p>0$ such that the following Burkholder-Davis-Gundy type inequality holds (for $\int_0^\cdot S(\cdot-s)g(s)\,dW(s)$):
$$
\mathbb E\Big(\sup_{t\in[0,T]}\Big|\int_0^t S(t-s)g(s)\,dW(s)\Big|_H^p\Big) \le C_p\,|g|^p_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}, \qquad \forall\, g\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2)). \eqno(3.51)
$$


Remark 3.17. For any two stopping times $\sigma$ and $\tau$ with $0\le\sigma\le\tau\le T$ a.s., similarly to $L^p_{\mathbb F}(\Omega;C([0,T];H))$ and $L^p_{\mathbb F}(\Omega;L^q(0,T;H))$, one can define the Banach spaces $L^p_{\mathbb F}(\Omega;C([\sigma,\tau];H))$ and $L^p_{\mathbb F}(\Omega;L^q(\sigma,\tau;H))$. Under Condition 3.2, one can show that
$$
\mathbb E\Big(\sup_{t\in[\sigma,\tau]}\Big|\int_\sigma^t S(t-s)g(s)\,dW(s)\Big|_H^p\Big) \le C_p\,|g|^p_{L^p_{\mathbb F}(\Omega;L^2(\sigma,\tau;L^0_2))}, \qquad \forall\, g\in L^p_{\mathbb F}(\Omega;L^2(\sigma,\tau;L^0_2)). \eqno(3.52)
$$
We believe that Condition 3.2 holds for any $C_0$-semigroup $S(\cdot)$ and $p\ge1$. However, as far as we know, this is a longstanding unsolved problem. So far, the best known result is as follows:

Theorem 3.18. Condition 3.2 holds whenever one of the following conditions is satisfied:
1) $S(\cdot)$ is a $C_0$-group on $H$;
2) $S(\cdot)$ is a generalized contractive $C_0$-semigroup, i.e., $|S(s)|_{\mathcal L(H)}\le e^{cs}$ for some constant $c\in\mathbb R$ and any $s\ge0$.

Proof: The first case is obvious. We now consider the second case. Without loss of generality, we assume $c=0$, i.e., $S(\cdot)$ is a contractive semigroup. For this case, by Sz.-Nagy's theorem on unitary dilations (e.g. [312, p. 29, Theorem 8.1 in Chapter I]), there exist a Hilbert space $\mathbb H$ and a unitary $C_0$-group $\{U(t)\}_{t\in\mathbb R}$ on $\mathbb H$ such that $H$ embeds isometrically into $\mathbb H$ and $\Gamma U(t)=S(t)$ on $H$ for all $t\ge0$, where $\Gamma$ is the orthogonal projection from $\mathbb H$ onto $H$. Therefore, taking any $g\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$, we get
$$
\begin{aligned}
\mathbb E\sup_{t\in[0,T]}\Big|\int_0^t S(t-s)g(s)\,dW(s)\Big|_H^p
&= \mathbb E\sup_{t\in[0,T]}\Big|\int_0^t \Gamma U(t-s)g(s)\,dW(s)\Big|_H^p
= \mathbb E\sup_{t\in[0,T]}\Big|\Gamma U(t)\int_0^t U(-s)g(s)\,dW(s)\Big|_H^p\\
&\le \mathbb E\sup_{t\in[0,T]}\Big|\int_0^t U(-s)g(s)\,dW(s)\Big|_{\mathbb H}^p
\le C_p\,\mathbb E\Big(\int_0^T |U(-s)g(s)|^2_{L_2(V;\mathbb H)}\,ds\Big)^{p/2}\\
&\le C_p\,\mathbb E\Big(\int_0^T |g(s)|^2_{L_2(V;\mathbb H)}\,ds\Big)^{p/2}
= C_p\,\mathbb E\Big(\int_0^T |g(s)|^2_{L^0_2}\,ds\Big)^{p/2},
\end{aligned}
$$
where we have used the fact that the $L_2(V;\mathbb H)$- and ($L_2(V;H)=$) $L^0_2$-norms of $g(\cdot)$ coincide, because $g(\cdot)$ is $L^0_2$-valued.

Remark 3.19. It is easy to see that, when $H$ is finite dimensional, $S(\cdot)$ is always a generalized contractive $C_0$-semigroup. Hence, Condition 3.2 is always satisfied when $\dim H<\infty$.


We have the following time-regularity result for mild solutions to the equation (3.32).

Theorem 3.20. Let Condition 3.2 hold, $X_0\in L^p_{\mathcal F_0}(\Omega;H)$, $\mathcal A\in\Upsilon_1(H)$, $\mathcal B\in\Upsilon_2(H;L^0_2)$, $f(\cdot)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $g(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$. Then, the equation (3.32) admits a unique mild solution $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T];H))$. Moreover,
$$
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T];H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.53)
$$

Proof: We first consider the case that $\mathcal A=0$ and $\mathcal B=0$. For such a case, the mild solution $X(\cdot)$ to (3.32) is still given by (3.34). Using Condition 3.2 (instead of Proposition 3.12), similarly to (3.35)–(3.36), we have
$$
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.54)
$$

By Proposition 3.5, it is easy to see that $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T];H))$.

Now we handle the general case. For any $a\in\mathbb R$, denote by $\lfloor a\rfloor$ the biggest integer less than or equal to $a$. Write
$$
N = \Big\lfloor \frac1\varepsilon\Big(\big\||\mathcal A|_{\mathcal L(H)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^1(0,T))} + \big\||\mathcal B|_{\mathcal L(H;L^0_2)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^2(0,T))}\Big)\Big\rfloor + 1,
$$
where $\varepsilon>0$ is a constant to be determined later. Define a sequence of stopping times $\{\tau_{j,\varepsilon}\}_{j=1}^N$ as follows:
$$
\begin{cases}
\tau_{1,\varepsilon}(\omega) = \inf\Big\{ t\in[0,T] \,\Big|\, \Big(\int_0^t |\mathcal A(s,\omega)|_{\mathcal L(H)}\,ds\Big)^p + \Big(\int_0^t |\mathcal B(s,\omega)|^2_{\mathcal L(H;L^0_2)}\,ds\Big)^{\frac p2} = \varepsilon \Big\},\\[2mm]
\tau_{k,\varepsilon}(\omega) = \inf\Big\{ t\in[\tau_{k-1,\varepsilon},T] \,\Big|\, \Big(\int_{\tau_{k-1,\varepsilon}}^t |\mathcal A(s,\omega)|_{\mathcal L(H)}\,ds\Big)^p + \Big(\int_{\tau_{k-1,\varepsilon}}^t |\mathcal B(s,\omega)|^2_{\mathcal L(H;L^0_2)}\,ds\Big)^{\frac p2} = \varepsilon \Big\},\quad k=2,\cdots,N.
\end{cases}
$$
Here, we agree that $\inf\emptyset = T$. Consider the following stochastic evolution equation:
$$
\begin{cases}
dX = \big(AX + \hat f\,\big)dt + \hat g\,dW(t) & \text{in } (0,T],\\
X(0) = X_0.
\end{cases} \eqno(3.55)
$$


Here $\hat f\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $\hat g\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$. Clearly, (3.55) admits a unique mild solution $X\in L^p_{\mathbb F}(\Omega;C([0,T];H))$. Define a map $\mathcal J: L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))\to L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))$ as follows:
$$
L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H)) \ni \widetilde X \mapsto X = \mathcal J(\widetilde X),
$$
where $X$ is the solution to (3.55) with $\hat f$ and $\hat g$ being replaced by $\mathcal A\widetilde X+f$ and $\mathcal B\widetilde X+g$, respectively. We claim that $\mathcal J$ is contractive. Indeed, by means of (3.52), for any $\widetilde X_1,\widetilde X_2\in L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))$,
$$
\begin{aligned}
\big|\mathcal J(\widetilde X_1)-\mathcal J(\widetilde X_2)\big|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}
&\le \mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|\int_0^t S(t-r)\mathcal A(\widetilde X_1-\widetilde X_2)\,dr + \int_0^t S(t-r)\mathcal B(\widetilde X_1-\widetilde X_2)\,dW(r)\Big|_H^p\\
&\le C\,\mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|\int_0^t S(t-r)\mathcal A(\widetilde X_1-\widetilde X_2)\,dr\Big|_H^p
+ C\,\mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|\int_0^t S(t-r)\mathcal B(\widetilde X_1-\widetilde X_2)\,dW(r)\Big|_H^p\\
&\le C\Big(\sup_{t\in[0,T]}|S(t)|^p_{\mathcal L(H)}\Big)\big\||\mathcal A|_{\mathcal L(H)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^1(0,\tau_{1,\varepsilon}))}\,\big|\widetilde X_1-\widetilde X_2\big|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}\\
&\quad + C\Big(\sup_{t\in[0,T]}|S(t)|^p_{\mathcal L(H)}\Big)\big\||\mathcal B|_{\mathcal L(H;L^0_2)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^2(0,\tau_{1,\varepsilon}))}\,\big|\widetilde X_1-\widetilde X_2\big|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}.
\end{aligned} \eqno(3.56)
$$
Let us choose $\displaystyle\varepsilon = \frac{1}{4C\sup_{t\in[0,T]}|S(t)|^p_{\mathcal L(H)}}$. Then, from (3.56), we find that
$$
\big|\mathcal J(\widetilde X_1)-\mathcal J(\widetilde X_2)\big|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}
\le \frac12\,\big|\widetilde X_1-\widetilde X_2\big|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}. \eqno(3.57)
$$
This implies that $\mathcal J$ is contractive. Thus, it has a unique fixed point $X\in L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))$, which solves (3.32) in $[0,\tau_{1,\varepsilon}]$ (in the sense of mild solution). Inductively, we conclude that (3.32) admits a mild solution $X$ in $[\tau_{k-1,\varepsilon},\tau_{k,\varepsilon}]$ for $k=2,\cdots,N$. Furthermore,


$$
\begin{aligned}
|X|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}
&\le \mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|S(t)X_0 + \int_0^t S(t-r)(\mathcal AX+f)\,dr + \int_0^t S(t-r)(\mathcal BX+g)\,dW(r)\Big|_H^p\\
&\le C\,\mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|\int_0^t S(t-r)\mathcal AX\,dr\Big|_H^p
+ C\,\mathbb E\sup_{t\in[0,\tau_{1,\varepsilon}]}\Big|\int_0^t S(t-r)\mathcal BX\,dW(r)\Big|_H^p\\
&\quad + C\,\mathbb E\sup_{t\in[0,T]}\Big|S(t)X_0 + \int_0^t S(t-r)f\,dr + \int_0^t S(t-r)g\,dW(r)\Big|_H^p\\
&\le C\Big(\sup_{t\in[0,T]}|S(t)|^p_{\mathcal L(H)}\Big)\big\||\mathcal A|_{\mathcal L(H)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^1(0,\tau_{1,\varepsilon}))}\,|X|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}\\
&\quad + C\Big(\sup_{t\in[0,T]}|S(t)|^p_{\mathcal L(H)}\Big)\big\||\mathcal B|_{\mathcal L(H;L^0_2)}\big\|^p_{L^\infty_{\mathbb F}(\Omega;L^2(0,\tau_{1,\varepsilon}))}\,|X|^p_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}\\
&\quad + C\big(\mathbb E|X_0|^p_H + |f|^p_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|^p_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big).
\end{aligned} \eqno(3.58)
$$
This, together with the choice of $\tau_{1,\varepsilon}$, implies that
$$
|X|_{L^p_{\mathbb F}(\Omega;C([0,\tau_{1,\varepsilon}];H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |g|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.59)
$$
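The localization by the stopping times $\tau_{k,\varepsilon}$ used above can be sketched concretely on one sample path. In the sketch below the coefficient paths, $p$, and $\varepsilon$ are all illustrative assumptions; the point is only the mechanism: $[0,T]$ is cut so that each piece accumulates at most $\varepsilon$ of the quantity $(\int|\mathcal A|\,ds)^p + (\int|\mathcal B|^2\,ds)^{p/2}$.

```python
import numpy as np

# One-path sketch of the stopping-time partition {tau_{k,eps}}.
T, n, p, eps = 1.0, 10_000, 2.0, 0.05
t = np.linspace(0.0, T, n + 1)
a = 1.0 + np.sin(5 * t)        # stands in for |A(t, omega)|_{L(H)}
b = 0.5 + 0.5 * np.cos(3 * t)  # stands in for |B(t, omega)|_{L(H; L_2^0)}
dt = T / n

taus, int_a, int_b2 = [], 0.0, 0.0
for i in range(n):
    int_a += a[i] * dt          # running int |A| ds on current piece
    int_b2 += b[i] ** 2 * dt    # running int |B|^2 ds on current piece
    if int_a**p + int_b2 ** (p / 2) >= eps:
        taus.append(float(t[i + 1]))  # this is tau_{k, eps} on this path
        int_a, int_b2 = 0.0, 0.0      # restart accumulation
taus.append(T)                        # convention inf(empty set) = T

print(len(taus), taus[:3])
```

Each subinterval then carries a small enough contribution of $\mathcal A$ and $\mathcal B$ for the fixed-point map to contract, which is exactly the role of the choice of $\varepsilon$ in (3.56)–(3.57).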

Repeating the above argument, we obtain (3.53). The uniqueness of solutions is obvious.

Theorem 3.21. Let Conditions 3.1 and 3.2 hold, $F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $\widetilde F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$. Then, for any $X_0\in L^p_{\mathcal F_0}(\Omega;H)$, the equation (3.16) admits a unique mild solution $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T];H))$, and
$$
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T];H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.60)
$$

Proof: The proof is very similar to that of Theorem 3.14. Fix any $T_1\in[0,T]$. Similar to the proof of Theorem 3.14, we define a map $\mathcal J: L^p_{\mathbb F}(\Omega;C([0,T_1];H))\to L^p_{\mathbb F}(\Omega;C([0,T_1];H))$ by $\mathcal J(Y)=X$ for any $Y\in L^p_{\mathbb F}(\Omega;C([0,T_1];H))$, where $X$ solves the equation (3.41). To show that the map $\mathcal J$ is contractive for $T_1$ small enough, we choose arbitrarily another $\widehat Y\in L^p_{\mathbb F}(\Omega;C([0,T_1];H))$, and define $\widehat X=\mathcal J(\widehat Y)$. Write $\widetilde X(\cdot)$, $F_1(\cdot)$ and $\widetilde F_1(\cdot)$ as in the proof of Theorem 3.14. Then $\widetilde X(\cdot)$ solves (3.44). By Conditions 3.1–3.2 and $p\ge1$, similarly to (3.45)–(3.46), we see that


$$
|F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} \le C\,|L_1(\cdot)|_{L^1(0,T_1)}|Y(\cdot)-\widehat Y(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))} \eqno(3.61)
$$
and
$$
|\widetilde F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))} \le C\,|L_2(\cdot)|_{L^2(0,T_1)}|Y(\cdot)-\widehat Y(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}. \eqno(3.62)
$$

Applying the estimate (3.54) (for the linear problem) to the equation (3.44) (for $\widetilde X(\cdot)$) and noting (3.61)–(3.62), we find that
$$
\begin{aligned}
|\widetilde X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}
&\le C\big(|F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F_1(\cdot)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big)\\
&\le C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|Y(\cdot)-\widehat Y(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}.
\end{aligned} \eqno(3.63)
$$
Choose $T_1$ so that $C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big) < 1$. Then, $\mathcal J$ is contractive. By means of the Banach fixed point theorem, $\mathcal J$ enjoys a unique fixed point $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T_1];H))$. Clearly, $X(\cdot)$ is a mild solution to the following equation:
$$
\begin{cases}
dX(t) = \big(AX(t) + F(t,X(t))\big)dt + \widetilde F(t,X(t))\,dW(t) & \text{in } (0,T_1],\\
X(0) = X_0.
\end{cases} \eqno(3.64)
$$
Using Condition 3.1 again and noting $X(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T_1];H))$, similarly to (3.61)–(3.62), we see that $F(\cdot,X(\cdot))\in L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))$, $\widetilde F(\cdot,X(\cdot))\in L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))$, and
$$
\begin{aligned}
&|F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\\
&\quad\le |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}
+ C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}.
\end{aligned}
$$
Therefore, similarly to (3.54), we have
$$
\begin{aligned}
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}
&\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,X(\cdot))|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big)\\
&\le C\Big[|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\\
&\qquad + \big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big)|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}\Big].
\end{aligned} \eqno(3.65)
$$
Since $C\big(|L_1(\cdot)|_{L^1(0,T_1)} + |L_2(\cdot)|_{L^2(0,T_1)}\big) < 1$, it follows from (3.65) that
$$
|X(\cdot)|_{L^p_{\mathbb F}(\Omega;C([0,T_1];H))}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T_1;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T_1;L^0_2))}\big). \eqno(3.66)
$$


Repeating the above argument, we obtain a mild solution to (3.16) on $[0,T]$. The uniqueness of mild solutions to (3.16) is obvious. The desired estimate (3.60) follows from (3.66). This completes the proof of Theorem 3.21.

3.3.2 Space Regularity

Sometimes we will need the space regularity of mild/weak solutions to (3.16). Generally speaking, the stochastic convolution $\int_0^t S(t-s)\widetilde F(s,X(s))\,dW(s)$ is no longer a martingale. Hence, one cannot apply Itô's formula directly to mild solutions to (3.16). For example, when establishing the pointwise identities (for Carleman estimates) for stochastic partial differential equations of second order, we need the function to be twice differentiable, in the sense of weak derivatives, w.r.t. the space variables. Nevertheless, this sort of problem can be solved by the following strategy:
1. Introduce suitable approximating equations with strong solutions such that the limit of these strong solutions is the mild/weak solution to the original equation;
2. Obtain the desired properties for the above strong solutions;
3. Utilize a density argument to establish the desired properties for the mild/weak solutions.

There are many methods to implement the above three steps in the setting of deterministic partial differential equations. Roughly speaking, any of these methods which does not destroy the adaptedness of solutions can be applied to stochastic partial differential equations. Here we only present an approach based on Lemma 3.11. To this end, we introduce the following approximating equations for (3.16):
$$
\begin{cases}
dX_\lambda(t) = \big(AX_\lambda(t) + R(\lambda)F(t,X_\lambda(t))\big)dt + R(\lambda)\widetilde F(t,X_\lambda(t))\,dW(t) & \text{in } (0,T],\\
X_\lambda(0) = R(\lambda)X_0.
\end{cases} \eqno(3.67)
$$
Here, $\lambda$ belongs to $\rho(A)$, the resolvent set of $A$, and $R(\lambda) \stackrel{\Delta}{=} \lambda(\lambda I-A)^{-1}$.
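The regularizing role of $R(\lambda)=\lambda(\lambda I-A)^{-1}$ in (3.67) can be seen in a toy diagonal model. The sketch below is an illustrative assumption, not a statement about the abstract setting: it takes $A=\mathrm{diag}(-k^2)$ on $\ell^2$, on which $R(\lambda)$ acts as the multiplier $\lambda/(\lambda+k^2)$, so $R(\lambda)x\to x$ as $\lambda\to\infty$ while $AR(\lambda)=\lambda^2(\lambda I-A)^{-1}-\lambda I$ stays bounded for each fixed $\lambda$.

```python
import numpy as np

# Diagonal stand-in for an unbounded generator: A = diag(-k^2).
k2 = np.arange(1, 200, dtype=float) ** 2   # spectrum of -A
x = 1.0 / np.arange(1, 200, dtype=float)   # a vector in H = l^2

def R(lam):
    # multiplier of R(lam) = lam (lam I - A)^{-1} on the k-th mode
    return lam / (lam + k2)

errs = [float(np.linalg.norm((R(lam) - 1.0) * x)) for lam in (1e2, 1e4, 1e6)]
print(errs)  # decreasing: R(lambda) x -> x in H as lambda grows
```

This is precisely why (3.67) has strong solutions (the smoothed data lie in $D(A)$) while still approximating the mild solution of (3.16).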

Theorem 3.22. Let Condition 3.1 hold and $p\ge2$ (resp. Conditions 3.1 and 3.2 hold for some $p\ge1$), $F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $\widetilde F(\cdot,0)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$. Then, for each $X_0\in L^p_{\mathcal F_0}(\Omega;H)$ and $\lambda\in\rho(A)$, the equation (3.67) admits a unique strong solution $X_\lambda(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;D(A)))$ (resp. $X_\lambda(\cdot)\in L^p_{\mathbb F}(\Omega;C([0,T];D(A)))$). Moreover, as $\lambda\to\infty$, the solution $X_\lambda(\cdot)$ converges to $X(\cdot)$ in $C_{\mathbb F}([0,T];L^p(\Omega;H))$ (resp. $L^p_{\mathbb F}(\Omega;C([0,T];H))$), where $X(\cdot)$ solves (3.16) in the sense of the mild solution.

Proof: We only consider the case that Condition 3.1 holds and $p\ge2$ (the other case can be treated similarly).


Since $AR(\lambda) = \lambda^2(\lambda I-A)^{-1} - \lambda I$ is a bounded operator, by Lemma 3.11, we obtain the existence of a unique strong solution $X_\lambda(\cdot)$ to the equation (3.67).

Now we prove the convergence conclusion in Theorem 3.22. For any $t\ge0$,
$$
X(t)-X_\lambda(t) = S(t)\big(X_0-R(\lambda)X_0\big)
+ \int_0^t S(t-s)\big(F(s,X(s))-R(\lambda)F(s,X_\lambda(s))\big)ds
+ \int_0^t S(t-s)\big(\widetilde F(s,X(s))-R(\lambda)\widetilde F(s,X_\lambda(s))\big)dW(s).
$$
For any $r\in(0,T]$, we have that
$$
\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p \le 3^p(I_1+I_2+I_3),
$$
where
$$
\begin{aligned}
I_1 &= \sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)R(\lambda)\big(F(s,X(s))-F(s,X_\lambda(s))\big)ds\Big|_H^p,\\
I_2 &= \sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)R(\lambda)\big(\widetilde F(s,X(s))-\widetilde F(s,X_\lambda(s))\big)dW(s)\Big|_H^p,\\
I_3 &= \sup_{t\in[0,r]}\mathbb E\Big|S(t)\big(X_0-R(\lambda)X_0\big) + \int_0^t S(t-s)\big(I-R(\lambda)\big)F(s,X(s))\,ds + \int_0^t S(t-s)\big(I-R(\lambda)\big)\widetilde F(s,X(s))\,dW(s)\Big|_H^p.
\end{aligned}
$$
Since $|R(\lambda)|_{\mathcal L(H)}\le2$ for $\lambda>0$ large enough, by (2.30) and Condition 3.1, similarly to (3.45), it follows that
$$
\begin{aligned}
I_1 &\le \sup_{t\in[0,r]}\mathbb E\Big[\int_0^t \big|S(t-s)R(\lambda)\big(F(s,X(s))-F(s,X_\lambda(s))\big)\big|_H\,ds\Big]^p
\le C\sup_{t\in[0,r]}\mathbb E\Big(\int_0^t \big|F(s,X(s))-F(s,X_\lambda(s))\big|_H\,ds\Big)^p\\
&\le C\sup_{t\in[0,r]}\Big[\int_0^t \big(\mathbb E\big|F(s,X(s))-F(s,X_\lambda(s))\big|_H^p\big)^{1/p}ds\Big]^p
\le C\,|L_1(\cdot)|^p_{L^1(0,r)}\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p.
\end{aligned}
$$
Further, by means of the Burkholder-Davis-Gundy inequality (i.e., Proposition 3.12), similarly to (3.46), we have that
$$
I_2 \le C\,|L_2(\cdot)|^p_{L^2(0,r)}\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p.
$$


Hence,
$$
\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p
\le C\Big[\big(|L_1(\cdot)|^p_{L^1(0,r)} + |L_2(\cdot)|^p_{L^2(0,r)}\big)\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p + I_3\Big].
$$
Choose $r$ so that $C\big(|L_1(\cdot)|^p_{L^1(0,r)} + |L_2(\cdot)|^p_{L^2(0,r)}\big) < 1$. Then,
$$
\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p \le C\,I_3. \eqno(3.68)
$$

On the other hand,
$$
\begin{aligned}
I_3 \le 3^p\Big[& \sup_{t\in[0,r]}\mathbb E\big|S(t)(X_0-R(\lambda)X_0)\big|_H^p
+ \sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)\big(I-R(\lambda)\big)F(s,X(s))\,ds\Big|_H^p\\
&+ \sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)\big(I-R(\lambda)\big)\widetilde F(s,X(s))\,dW(s)\Big|_H^p\Big].
\end{aligned} \eqno(3.69)
$$
We now estimate each term in (3.69). First,
$$
\sup_{t\in[0,r]}\mathbb E\big|S(t)(X_0-R(\lambda)X_0)\big|_H^p \le C\,\mathbb E\big|X_0-R(\lambda)X_0\big|_H^p \to 0, \quad \text{as } \lambda\to\infty.
$$
Next, by Condition 3.1 again and Theorem 3.14, we obtain that
$$
\sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)\big(I-R(\lambda)\big)F(s,X(s))\,ds\Big|_H^p
\le C\,\mathbb E\Big(\int_0^r \big|\big(I-R(\lambda)\big)F(s,X(s))\big|_H\,ds\Big)^p \to 0, \quad \text{as } \lambda\to\infty.
$$
Similarly, by means of Proposition 3.12 again, we have that
$$
\sup_{t\in[0,r]}\mathbb E\Big|\int_0^t S(t-s)\big(I-R(\lambda)\big)\widetilde F(s,X(s))\,dW(s)\Big|_H^p
\le C\,\mathbb E\Big(\int_0^r \big|\big(I-R(\lambda)\big)\widetilde F(s,X(s))\big|^2_{L^0_2}\,ds\Big)^{p/2} \to 0, \quad \text{as } \lambda\to\infty.
$$
Hence, by (3.68), it follows that
$$
\sup_{t\in[0,r]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p \le o(1),
$$
where $o(1)\to0$ as $\lambda\to\infty$. Repeating the above argument, we deduce that


$$
\sup_{t\in[0,T]}\mathbb E\big|X(t)-X_\lambda(t)\big|_H^p \le o(1) \to 0, \quad \text{as } \lambda\to\infty,
$$
which gives the desired convergence.

Likewise, for any $\lambda\in\rho(A)$, we introduce the following approximating equation of (3.32) with $\mathcal A=0$ and $\mathcal B=0$:
$$
\begin{cases}
dX_\lambda(t) = \big(AX_\lambda(t) + R(\lambda)f(t)\big)dt + R(\lambda)\tilde f(t)\,dW(t) & \text{in } (0,T],\\
X_\lambda(0) = R(\lambda)X_0.
\end{cases} \eqno(3.70)
$$
By Theorem 3.13 and the proof of Theorem 3.22, it is easy to show the following result.

Corollary 3.23. For each $p\ge1$, $X_0\in L^p_{\mathcal F_0}(\Omega;H)$, $f(\cdot)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ and $\tilde f(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))$, the equation (3.70) admits a unique strong solution $X_\lambda(\cdot)\in C_{\mathbb F}([0,T];L^p(\Omega;D(A)))$. Moreover, as $\lambda\to\infty$, the solution $X_\lambda(\cdot)$ converges to $X(\cdot)$ in $C_{\mathbb F}([0,T];L^p(\Omega;H))$, where $X(\cdot)$ solves (3.32) with $\mathcal A=0$ and $\mathcal B=0$ in the sense of the mild solution.

The following result shows the space smoothing effect of mild solutions to a class of stochastic evolution equations, for example, the stochastic parabolic equation.

0

1

2 2 Lp F (Ω;L (0,T ;D((−A) )))

)

e

+|F (·, 0)|LpF (Ω;L2 (0,T ;L02 )) 1 (Ω;H)+|F (·, 0)|Lp F (Ω;L (0,T ;H))

(3.71) .

Proof : Clearly, A generates a contractive C0 -semigroup. Hence, Condition 3.2 hold. By Theorem 3.22, for each λ ∈ ρ(A), the equation (3.67) admits a unique strong solution Xλ (·) ∈ LpF (Ω; C([0, T ]; D(A))), and lim Xλ (·) = X(·) in LpF (Ω; C([0, T ]; H)). Moreover,

λ→∞

|Xλ (·)|LpF (Ω;C([0,T ];H)) ( ) ≤ C |X0 |LpF (Ω;H)+|F (·, 0)|LpF (Ω;L1 (0,T ;H))+|Fe (·, 0)|LpF (Ω;L2 (0,T ;L02 )) ,

(3.72)

0

for some constant C > 0, independent of λ. Applying Itˆo’s formula to |Xλ (·)|2H and using (3.67), we obtain that

3.3 Regularity of Mild Solutions to Stochastic Evolution Equations

|Xλ (T )|2H − |Xλ (0)|2H ∫ T ∫ ⟨ ⟩ =2 AXλ (s), Xλ (s) H ds + 2 0



T

+2 0





T

R(λ)F (s, Xλ (s)), Xλ (s)



0

H

155

ds (3.73)

⟨⟨ R(λ)Fe (s, Xλ (s)), Xλ (s) ⟩⟩H dW (s)

R(λ)Fe (s, Xλ (s)) 2 0 ds.

T

+

L2

0

By (3.73), we see that
$$
\begin{aligned}
\mathbb E\Big(\int_0^T \big|(-A)^{\frac12}X_\lambda(s)\big|^2_H\,ds\Big)^{\frac p2}
\le C\Big[& \mathbb E|X_\lambda(0)|^p_H
+ \mathbb E\Big(\int_0^T \big|\big\langle R(\lambda)F(s,X_\lambda(s)),X_\lambda(s)\big\rangle_H\big|\,ds\Big)^{\frac p2}\\
&+ \mathbb E\Big|\int_0^T \big\langle\!\big\langle R(\lambda)\widetilde F(s,X_\lambda(s)),X_\lambda(s)\big\rangle\!\big\rangle_H dW(s)\Big|^{\frac p2}
+ \mathbb E\Big(\int_0^T \big|R(\lambda)\widetilde F(s,X_\lambda(s))\big|^2_{L^0_2}\,ds\Big)^{\frac p2}\Big].
\end{aligned} \eqno(3.74)
$$

It follows from the Burkholder-Davis-Gundy inequality that
$$
\begin{aligned}
\mathbb E\Big|\int_0^T \big\langle\!\big\langle R(\lambda)\widetilde F(s,X_\lambda(s)),X_\lambda(s)\big\rangle\!\big\rangle_H dW(s)\Big|^{\frac p2}
&\le C\,\mathbb E\Big(\int_0^T \big|\widetilde F(s,X_\lambda(s))\big|^2_{L^0_2}|X_\lambda(s)|^2_H\,ds\Big)^{\frac p4}\\
&\le C\,\mathbb E\Big[\Big(\int_0^T \big|\widetilde F(s,X_\lambda(s))\big|^2_{L^0_2}\,ds\Big)^{\frac p4}\sup_{t\in[0,T]}|X_\lambda(t)|^{\frac p2}_H\Big]\\
&\le C\,\mathbb E\Big[\Big(\int_0^T \big|\widetilde F(s,X_\lambda(s))\big|^2_{L^0_2}\,ds\Big)^{\frac p2} + \sup_{t\in[0,T]}|X_\lambda(t)|^p_H\Big].
\end{aligned} \eqno(3.75)
$$

By Condition 3.1, it holds that
$$
\begin{aligned}
\mathbb E\Big(\int_0^T \big|\widetilde F(s,X_\lambda(s))\big|^2_{L^0_2}\,ds\Big)^{\frac p2}
&\le C\,\mathbb E\Big[\Big(\int_0^T \big|\widetilde F(s,0)\big|^2_{L^0_2}\,ds\Big)^{\frac p2} + \Big(\int_0^T L_2^2(s)\big|X_\lambda(s)\big|^2_H\,ds\Big)^{\frac p2}\Big]\\
&\le C\Big[|\widetilde F(\cdot,0)|^p_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))} + |L_2(\cdot)|^p_{L^2(0,T)}\sup_{t\in[0,T]}\mathbb E|X_\lambda(t)|^p_H\Big].
\end{aligned} \eqno(3.76)
$$
Combining (3.72), (3.74) and (3.75)–(3.76), we end up with


$$
\mathbb E\Big(\int_0^T \big|(-A)^{\frac12}X_\lambda(s)\big|^2_H\,ds\Big)^{\frac p2}
\le C\big(|X_0|_{L^p_{\mathcal F_0}(\Omega;H)} + |F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))} + |\widetilde F(\cdot,0)|_{L^p_{\mathbb F}(\Omega;L^2(0,T;L^0_2))}\big). \eqno(3.77)
$$
Similarly, one can further show that
$$
\lim_{\lambda,\mu\to\infty}\big|(-A)^{\frac12}X_\lambda(\cdot) - (-A)^{\frac12}X_\mu(\cdot)\big|_{L^p_{\mathbb F}(\Omega;L^2(0,T;H))} = 0. \eqno(3.78)
$$
By (3.78) and noting that $(-A)^{\frac12}$ is a closed operator, we conclude that $X(\cdot)\in L^p_{\mathbb F}(\Omega;L^2(0,T;D((-A)^{\frac12})))$. By (3.77), it is easy to see that the estimate (3.71) holds.

3.4 Notes and Comments

Since the seminal work in [156], stochastic differential/evolution equations have been studied extensively in the literature and still attract great interest today. Detailed introductions to stochastic differential equations (resp. stochastic evolution equations) can be found in [267, 284] (resp. [68, 217, 223]), among others.

Throughout this book, the stochastic integrals appearing in stochastic differential/evolution equations are understood in Itô's sense. Different definitions of the stochastic integral lead to different types of stochastic differential/evolution equations. By a simple transformation, a stochastic differential/evolution equation in Stratonovich's sense can be reduced to one in Itô's sense. On the other hand, we refer the interested reader to [105, 136, 249] for the study of stochastic differential/evolution equations in which the Brownian motion is regarded as a rough path.

In this chapter, we only consider stochastic evolution equations with nonlinearities satisfying (3.39). Assumptions of this kind are essential in the proofs of Theorems 3.14 and 3.21. To drop them, generally speaking, one has to assume that the nonlinearities have "good" signs (e.g., [68]).

When studying stochastic evolution equations driven by cylindrical Brownian motions, we assume that the integrand is a Hilbert-Schmidt operator-valued process. Without this assumption, one usually needs some conditions on the operator $A$, for example some smoothing effect of the $C_0$-semigroup generated by $A$, to show the well-posedness of the corresponding stochastic evolution equation (e.g., [72]).

Almost all results presented in this chapter are either well known (e.g., [68]) or small modifications of known ones. The very simple proof of the second case in Theorem 3.18 is due to E. Hausenblas and J. Seidler ([138]).
The space Υp,q (H1 ; H2 ) introduced in (3.50) seems to be new; while Theorem 3.20 was taken from [248], which can be viewed as an infinite dimensional counterpart of the related result appeared in [284, Chapter V, Section 3].

4 Backward Stochastic Evolution Equations

In this chapter, we present an introduction to backward stochastic evolution equations (valued in Hilbert spaces), which appear naturally in the study of control problems for stochastic distributed parameter systems. In the case of the natural filtration, by means of the Martingale Representation Theorem, these equations are proved to be well-posed in the sense of mild solutions; for a general filtration, using our stochastic transposition method, we also establish their well-posedness.

Throughout this chapter, $T>0$, $(\Omega,\mathcal F,\mathbf F,\mathbb P)$ (with $\mathbf F=\{\mathcal F_t\}_{t\in[0,T]}$) is a fixed filtered probability space satisfying the usual condition, and we denote by $\mathbb F$ the progressive $\sigma$-field w.r.t. $\mathbf F$; $H$ and $V$ are two separable Hilbert spaces; we denote by $I$ the identity operator on $H$, and write $L_2^0\triangleq\mathcal L_2(V;H)$.

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_4

4.1 The Case of Finite Dimensions and Natural Filtration

In this section, we consider backward stochastic differential equations, i.e., backward stochastic evolution equations in $\mathbb R^n$ (for some $n\in\mathbb N$). It is well known that, for a deterministic ordinary differential equation, the terminal value problem on $[0,T]$ is equivalent to an initial value problem on $[0,T]$ under the simple time transformation $t\mapsto T-t$. However, things are completely different in the stochastic setting. Indeed, for stochastic problems one cannot simply reverse "time" when looking for solutions that are adapted to the given filtration $\mathbf F$. As a consequence, a new idea is needed to treat the terminal value problem for stochastic differential equations. This leads to a new class of differential equations in the stochastic setting: backward stochastic differential equations.

To begin with, let us analyze how to formulate backward stochastic differential equations correctly. Let $W(\cdot)=(W^1(\cdot),\cdots,W^m(\cdot))^\top$ (for some $m\in\mathbb N$) be an $m$-dimensional standard Brownian motion, and let $\mathbf F=\{\mathcal F_t\}_{t\in[0,T]}$ be the natural filtration




generated by $W(\cdot)$. Consider the following simple terminal value problem for a stochastic differential equation:
\[
\begin{cases} dy(t)=0, & t\in[0,T],\\ y(T)=y_T\in L^2_{\mathcal F_T}(\Omega). \end{cases} \tag{4.1}
\]
We want to find an $\mathbf F$-adapted solution $y(\cdot)$ to (4.1). However, this is impossible, since the only possible solution to (4.1) is $y(\cdot)\equiv y_T$, which is not necessarily $\mathbf F$-adapted (unless $y_T$ itself is $\mathcal F_0$-measurable). Namely, the equation (4.1) is not well formulated if one expects to find an $\mathbf F$-adapted solution. In what follows, we will see that the following modified version of (4.1):
\[
\begin{cases} dy(t)=Y(t)\,dW(t), & t\in[0,T],\\ y(T)=y_T \end{cases} \tag{4.2}
\]
is an appropriate reformulation of (4.1). Comparing (4.2) with (4.1), we see that an additional term, $Y(t)\,dW(t)$, has been added. The process $Y(\cdot)$ is not known a priori but is part of the solution! As a matter of fact, the presence of the term $Y(t)\,dW(t)$ "corrects" the "non-adaptedness" of the original $y(\cdot)$ in (4.1).

Generally, we consider the following backward stochastic differential equation:
\[
\begin{cases} dy(t)=h(t,y(t),Y(t))\,dt+Y(t)\,dW(t), & t\in[0,T],\\ y(T)=y_T. \end{cases} \tag{4.3}
\]
Here $y_T\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$ and $h:[0,T]\times\mathbb R^n\times\mathbb R^{n\times m}\times\Omega\to\mathbb R^n$ is a given function. The main goal of this section is to find a pair of $\mathbf F$-adapted processes $y:[0,T]\times\Omega\to\mathbb R^n$ and $Y:(0,T)\times\Omega\to\mathbb R^{n\times m}$ satisfying (4.3) in a suitable sense.

Similarly to Definition 3.1, the following notion of solution to (4.3) seems to be the most natural one.

Definition 4.1. A pair of stochastic processes $(y(\cdot),Y(\cdot))$ is called a solution to (4.3) if $y(\cdot)$ is an $\mathbb R^n$-valued, $\mathbf F$-adapted, continuous process, $Y(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^{n\times m})$, $h(\cdot,y(\cdot),Y(\cdot))\in L^1(0,T;\mathbb R^n)$ a.s., and for each $t\in[0,T]$,
\[
y(t)=y_T-\int_t^T h(s,y(s),Y(s))\,ds-\int_t^T Y(s)\,dW(s),\quad\text{a.s.} \tag{4.4}
\]
Equation (4.3) is said to admit a unique solution if for any two solutions $(y(\cdot),Y(\cdot))$ and $(\tilde y(\cdot),\widetilde Y(\cdot))$ to (4.3), it holds that
\[
\mathbb P\big(\{y(t)=\tilde y(t),\ \forall\,t\in[0,T]\}\big)=1,\qquad \mathbb P\big(\{Y(t)=\widetilde Y(t)\}\big)=1,\ \text{a.e. } t\in[0,T].
\]
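To see concretely how the extra term $Y(t)\,dW(t)$ restores adaptedness, the following Python sketch (an illustration added here, not part of the book) takes $y_T=W(T)$, for which the adapted solution of (4.2) is $y(t)=\mathbb E(y_T\,|\,\mathcal F_t)=W(t)$ with $Y(\cdot)\equiv1$; on each discretized path, $y_T$ is recovered from the adapted value $y(t)$ plus the stochastic-integral correction over $[t,T]$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)        # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # W(t_k) on the time grid

# Terminal condition y_T = W(T).  The naive candidate y(t) ≡ y_T from (4.1)
# uses the whole path and is not F_t-adapted.  The adapted solution of (4.2)
# is y(t) = E[y_T | F_t] = W(t), with correction process Y ≡ 1:
#     y_T = y(t) + ∫_t^T Y(s) dW(s).
k = n // 2                       # an intermediate time t = t_k
y_t = W[k]                       # adapted solution at time t
correction = np.sum(dW[k:])      # ∫_t^T 1 dW(s) = W(T) - W(t)
reconstructed_yT = y_t + correction
```

Note that $y(t)$ above depends only on the path up to time $t$, while the "future" information needed to match $y_T$ is carried entirely by the stochastic integral of $Y$.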

In the rest of this section, we assume that: 1) h(·, y, z) is F-adapted for each y ∈ Rn and z ∈ Rn×m ; and



2) $h(\cdot,0,0)\in L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))$, and there exist two nonnegative functions $L_1(\cdot)\in L^1(0,T)$ and $L_2(\cdot)\in L^2(0,T)$ such that
\[
|h(t,y,z)-h(t,\hat y,\hat z)|_{\mathbb R^n}\le L_1(t)|y-\hat y|_{\mathbb R^n}+L_2(t)|z-\hat z|_{\mathbb R^{n\times m}},\quad\forall\,y,\hat y\in\mathbb R^n,\ z,\hat z\in\mathbb R^{n\times m},\ \text{a.e. } t\in[0,T],\ \text{a.s.} \tag{4.5}
\]
One has the following well-posedness result for the equation (4.3).

Theorem 4.2. For any given $y_T\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$, the equation (4.3) admits a unique solution $(y(\cdot),Y(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^2_{\mathbb F}(0,T;\mathbb R^{n\times m})$, and
\[
|(y(\cdot),Y(\cdot))|_{L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^2_{\mathbb F}(0,T;\mathbb R^{n\times m})}\le C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;\mathbb R^n)}+|h(\cdot,0,0)|_{L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))}\big). \tag{4.6}
\]

Proof: For $s\in[0,T)$, put
\[
\mathcal V(s,T)\triangleq L^2_{\mathbb F}(\Omega;C([s,T];\mathbb R^n))\times L^2_{\mathbb F}(s,T;\mathbb R^{n\times m}).
\]
We divide the proof into two steps.

Step 1. In this step, we shall show that, for any $h(\cdot)\in L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))$ and $y_T\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$, there is a pair $(y(\cdot),Y(\cdot))\in\mathcal V(0,T)$ such that
\[
y(t)=y_T-\int_t^T h(s)\,ds-\int_t^T Y(s)\,dW(s),\quad\forall\,t\in[0,T],\ \text{a.s.} \tag{4.7}
\]
Indeed, put
\[
M(t)=\mathbb E\Big(y_T-\int_0^T h(s)\,ds\ \Big|\ \mathcal F_t\Big),\qquad y(t)=\mathbb E\Big(y_T-\int_t^T h(s)\,ds\ \Big|\ \mathcal F_t\Big). \tag{4.8}
\]
Then, $M(0)=y(0)=\mathbb E M(T)$. By the Martingale Representation Theorem (i.e., Theorem 2.147 and Remark 2.148), one can find a $Y(\cdot)\in L^2_{\mathbb F}(0,T;\mathbb R^{n\times m})$ such that
\[
M(t)=M(0)+\int_0^t Y(s)\,dW(s). \tag{4.9}
\]

Hence,
\[
\mathbb E\int_0^T|Y(t)|^2_{\mathbb R^{n\times m}}\,dt\le\mathbb E|M(T)|^2_{\mathbb R^n}\le2\,\mathbb E\Big[|y_T|^2_{\mathbb R^n}+\Big(\int_0^T|h(t)|_{\mathbb R^n}\,dt\Big)^2\Big]. \tag{4.10}
\]
From (4.9), we see that
\[
y_T-\int_0^T h(s)\,ds=M(T)=M(0)+\int_0^T Y(s)\,dW(s)=y(0)+\int_0^T Y(s)\,dW(s),
\]
and
\[
y(t)=M(t)+\int_0^t h(s)\,ds=y_T-\int_t^T h(s)\,ds-\int_t^T Y(s)\,dW(s)=y(0)+\int_0^t h(s)\,ds+\int_0^t Y(s)\,dW(s).
\]
Hence, $y(\cdot)=y(0)+\int_0^\cdot Y(s)\,dW(s)+\int_0^\cdot h(s)\,ds$. By the fourth conclusion in Theorem 2.124, it is easy to see that $y(\cdot)\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$. From (4.8), it follows that
\[
|y(\cdot)|^2_{L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))}\le C\,\mathbb E\Big[|y_T|^2_{\mathbb R^n}+\Big(\int_0^T|h(t)|_{\mathbb R^n}\,dt\Big)^2\Big]. \tag{4.11}
\]
By (4.10) and (4.11), we see that $(y(\cdot),Y(\cdot))$ satisfies
\[
|(y(\cdot),Y(\cdot))|^2_{\mathcal V(0,T)}\le C\,\mathbb E\Big[|y_T|^2+\Big(\int_0^T|h(t)|\,dt\Big)^2\Big]. \tag{4.12}
\]

Step 2. Fix any $T_1\in[0,T)$. For any fixed $(z(\cdot),Z(\cdot))\in\mathcal V(T_1,T)$, it is easy to see that $h(\cdot)\equiv h(\cdot,z(\cdot),Z(\cdot))\in L^2_{\mathbb F}(\Omega;L^1(T_1,T;\mathbb R^n))$. Consider the following backward stochastic differential equation:
\[
\begin{cases} dy(t)=h(t,z(t),Z(t))\,dt+Y(t)\,dW(t), & t\in[T_1,T),\\ y(T)=y_T. \end{cases} \tag{4.13}
\]
By the result in Step 1, the equation (4.13) admits a unique solution $(y(\cdot),Y(\cdot))\in\mathcal V(T_1,T)$. This defines a map $\mathcal J:\mathcal V(T_1,T)\to\mathcal V(T_1,T)$ by $(z(\cdot),Z(\cdot))\mapsto(y(\cdot),Y(\cdot))$. We claim that, for $T_1$ sufficiently close to $T$,
\[
\big|\mathcal J(z,Z)-\mathcal J(\tilde z,\widetilde Z)\big|_{\mathcal V(T_1,T)}\le\frac12\big|(z,Z)-(\tilde z,\widetilde Z)\big|_{\mathcal V(T_1,T)},\qquad\forall\,(z,Z),(\tilde z,\widetilde Z)\in\mathcal V(T_1,T). \tag{4.14}
\]
To show (4.14), put $(\hat y(\cdot),\widehat Y(\cdot))=\mathcal J(z,Z)-\mathcal J(\tilde z,\widetilde Z)$ and $\hat h(\cdot)=h(\cdot,z(\cdot),Z(\cdot))-h(\cdot,\tilde z(\cdot),\widetilde Z(\cdot))$. Then, $(\hat y(\cdot),\widehat Y(\cdot))$ solves
\[
\begin{cases} d\hat y(t)=\hat h(t)\,dt+\widehat Y(t)\,dW(t), & t\in[T_1,T),\\ \hat y(T)=0. \end{cases} \tag{4.15}
\]
From (4.12) and using (4.5), we find that
\[
\begin{aligned}
|(\hat y(\cdot),\widehat Y(\cdot))|^2_{\mathcal V(T_1,T)}&\le C\,\mathbb E\Big(\int_{T_1}^T|\hat h(t)|_{\mathbb R^n}\,dt\Big)^2=C\,\mathbb E\Big(\int_{T_1}^T\big|h(t,z(t),Z(t))-h(t,\tilde z(t),\widetilde Z(t))\big|_{\mathbb R^n}\,dt\Big)^2\\
&\le C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\,\big|(z(\cdot)-\tilde z(\cdot),Z(\cdot)-\widetilde Z(\cdot))\big|^2_{\mathcal V(T_1,T)}
\end{aligned} \tag{4.16}
\]



and
\[
\begin{aligned}
|(y(\cdot),Y(\cdot))|^2_{\mathcal V(T_1,T)}&\le C\,\mathbb E\Big[|y_T|^2_{\mathbb R^n}+\Big(\int_{T_1}^T|h(t,z(t),Z(t))|_{\mathbb R^n}\,dt\Big)^2\Big]\\
&\le C\big(|y_T|^2_{L^2_{\mathcal F_T}(\Omega;\mathbb R^n)}+|h(\cdot,0,0)|^2_{L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))}\big)\\
&\quad+C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\,|(z(\cdot),Z(\cdot))|^2_{\mathcal V(T_1,T)}. 
\end{aligned} \tag{4.17}
\]
Let us choose $T_1$ such that
\[
C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\le\frac14. \tag{4.18}
\]
Then, by (4.16), we get (4.14). This implies that the map $\mathcal J$ is contractive. Hence, it admits a unique fixed point, which is a solution to (4.3) on $[T_1,T]$. Moreover, by (4.17)-(4.18), it follows that
\[
|(y(\cdot),Y(\cdot))|^2_{\mathcal V(T_1,T)}\le C\big(|y_T|^2_{L^2_{\mathcal F_T}(\Omega;\mathbb R^n)}+|h(\cdot,0,0)|^2_{L^2_{\mathbb F}(\Omega;L^1(0,T;\mathbb R^n))}\big). \tag{4.19}
\]
Repeating the above argument, we obtain the existence of solutions to (4.3). The uniqueness and the estimate (4.6) follow from (4.19).
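Step 1 of the proof is explicit enough to check numerically. The following Python sketch (an added illustration, not from the book) takes the driver $h$ deterministic, $h(s)=\cos s$, and the hypothetical terminal datum $y_T=W(T)$; then (4.8) gives $y(t)=W(t)-\int_t^T h(s)\,ds$, $Y\equiv1$, and in particular $y(0)=\mathbb E y_T-\int_0^T h(s)\,ds=-\sin T$.

```python
import numpy as np

# Linear BSDE from Step 1:  dy = h(t) dt + Y dW,  y(T) = y_T, with solution
#     y(t) = E(y_T - ∫_t^T h(s) ds | F_t)   (formula (4.8)).
# Illustrative choices: y_T = W(T), h(s) = cos(s).
T, n = 1.0, 100_000
h = lambda s: np.cos(s)
mid = (np.arange(n) + 0.5) * (T / n)        # midpoint-rule nodes
integral = np.sum(h(mid)) * (T / n)         # ∫_0^T h(s) ds ≈ sin(T)

rng = np.random.default_rng(1)
dW = rng.normal(0.0, np.sqrt(T / n), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

k = n // 4                                  # t = T/4
tail = np.sum(h(mid[k:])) * (T / n)         # ∫_t^T h(s) ds
y_t = W[k] - tail                           # adapted solution at time t
y_0 = 0.0 - integral                        # E[W(T)] = 0, so y(0) = -sin(T)
```

Replacing the fixed $h(\cdot)$ by a driver depending on $(y,Y)$ is exactly what Step 2 handles through the contraction-mapping argument.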

4.2 The Case of Infinite Dimensions

In the rest of this chapter, $W(\cdot)$ is a $V$-valued $Q$-Brownian motion or cylindrical Brownian motion, but we only consider the case of a cylindrical Brownian motion. We consider the following backward stochastic evolution equation:
\[
\begin{cases} dy(t)=-Ay(t)\,dt+F(t,y(t),Y(t))\,dt+Y(t)\,dW(t) & \text{in } [0,T),\\ y(T)=y_T. \end{cases} \tag{4.20}
\]
Here and in the rest of this chapter, $y_T:\Omega\to H$ is an $\mathcal F_T$-random variable, $A$ generates a $C_0$-semigroup $\{S(t)\}_{t\ge0}$ on $H$, and $F:[0,T]\times\Omega\times H\times L_2^0\to H$ is a given function.

Remark 4.3. The diffusion term $Y(t)\,dW(t)$ in (4.20) may be replaced by a more general form $\big(\widetilde F(t,y(t))+Y(t)\big)\,dW(t)$ (for a function $\widetilde F:[0,T]\times\Omega\times H\to L_2^0$). Clearly, the latter can be reduced to the former by means of the simple transformation
\[
\tilde y(\cdot)=y(\cdot),\qquad\widetilde Y(\cdot)=\widetilde F(\cdot,y(\cdot))+Y(\cdot),
\]
under suitable assumptions on the function $\widetilde F$.



It is easy to see that, for any $f(\cdot)\in L^{1,loc}_{\mathbb F}(0,T;H)$, the following equation
\[
\begin{cases} dy(t)=-Ay(t)\,dt+f(t)\,dt+Y(t)\,dW(t) & \text{in } [0,T),\\ y(T)=y_T \end{cases} \tag{4.21}
\]
is a special case of (4.20), in which the function $F(\cdot,\cdot,\cdot)$ is independent of $y$ and $Y$.

4.2.1 Notions of Solutions

Similarly to the case of stochastic evolution equations considered in the last chapter, we introduce below the notions of strong/weak/mild solutions to the equation (4.20) (recall (2.132) for the notation $\langle\langle\cdot,\cdot\rangle\rangle_H$).

Definition 4.4. An $H\times L_2^0$-valued process $(y(\cdot),Y(\cdot))$ is called a strong solution to (4.20) if
1) $y(\cdot)$ is $\mathbf F$-adapted and continuous, and $Y(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$;
2) $y(t)\in D(A)$ for a.e. $(t,\omega)\in[0,T]\times\Omega$, and $Ay(\cdot)\in L^1(0,T;H)$ and $F(\cdot,y(\cdot),Y(\cdot))\in L^1(0,T;H)$ a.s.; and
3) for any $t\in[0,T]$,
\[
y(t)=y_T+\int_t^T\big(Ay(s)-F(s,y(s),Y(s))\big)\,ds-\int_t^T Y(s)\,dW(s),\quad\text{a.s.} \tag{4.22}
\]

Definition 4.5. An $H\times L_2^0$-valued process $(y(\cdot),Y(\cdot))$ is called a weak solution to (4.20) if
1) $y(\cdot)$ is $\mathbf F$-adapted and continuous, $Y(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, and $F(\cdot,y(\cdot),Y(\cdot))\in L^1(0,T;H)$ a.s.; and
2) for any $t\in[0,T]$ and $\eta\in D(A^*)$,
\[
\langle y(t),\eta\rangle_H=\langle y_T,\eta\rangle_H+\int_t^T\langle y(s),A^*\eta\rangle_H\,ds-\int_t^T\langle F(s,y(s),Y(s)),\eta\rangle_H\,ds-\int_t^T\langle\langle Y(s),\eta\rangle\rangle_H\,dW(s),\quad\text{a.s.} \tag{4.23}
\]

Definition 4.6. An $H\times L_2^0$-valued process $(y(\cdot),Y(\cdot))$ is called a mild solution to (4.20) if
1) $y(\cdot)$ is $\mathbf F$-adapted and continuous, $Y(\cdot)\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)$, and $F(\cdot,y(\cdot),Y(\cdot))\in L^1(0,T;H)$ a.s.; and
2) for any $t\in[0,T]$,
\[
y(t)=S(T-t)y_T-\int_t^T S(s-t)F(s,y(s),Y(s))\,ds-\int_t^T S(s-t)Y(s)\,dW(s),\quad\text{a.s.} \tag{4.24}
\]
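In the deterministic, scalar case the mild formula (4.24) can be checked directly. The Python sketch below (an added illustration; the values of $a$, $y_T$ and the function $F$ are arbitrary choices) takes $A=a\in\mathbb R$ with $S(t)=e^{at}$ and $Y\equiv0$, so (4.24) reduces to $y(t)=e^{a(T-t)}y_T-\int_t^T e^{a(s-t)}F(s)\,ds$, which must solve $y'=-ay+F$ with $y(T)=y_T$.

```python
import numpy as np

# Scalar, noiseless instance of (4.24):  y(t) = e^{a(T-t)} yT - ∫_t^T e^{a(s-t)} F(s) ds.
a, T, yT = 0.7, 1.0, 2.0
F = lambda s: np.sin(3.0 * s)

def y_mild(t, m=20_000):
    if t >= T:
        return yT
    s = t + (np.arange(m) + 0.5) * ((T - t) / m)      # midpoint-rule nodes
    return np.exp(a * (T - t)) * yT \
        - np.sum(np.exp(a * (s - t)) * F(s)) * (T - t) / m

# The formula should satisfy the terminal condition and the ODE y' = -a y + F,
# i.e. the residual y'(t) + a y(t) - F(t) should vanish.
t0, eps = 0.5, 1e-4
residual = (y_mild(t0 + eps) - y_mild(t0 - eps)) / (2 * eps) \
    + a * y_mild(t0) - F(t0)
```

The same variation-of-constants structure, with the stochastic integral term added back, is what Theorem 4.8 below shows to be equivalent to the weak formulation (4.23).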



It is easy to show the following result.

Proposition 4.7. If $(y(\cdot),Y(\cdot))$ is a strong solution to (4.20), then it is also a weak solution to the same equation. Conversely, if a weak solution $(y(\cdot),Y(\cdot))$ to (4.20) satisfies $y(t)\in D(A)$ for a.e. $(t,\omega)\in[0,T]\times\Omega$ and $Ay(\cdot)\in L^1(0,T;H)$ a.s., then $(y(\cdot),Y(\cdot))$ is also a strong solution to (4.20).

Similarly to Theorem 3.10, we have the following result concerning the equivalence between weak and mild solutions to the equation (4.20).

Theorem 4.8. An $H\times L_2^0$-valued process $(y(\cdot),Y(\cdot))$ is a weak solution to (4.20) if and only if it is a mild solution to the same equation.

Proof: The "only if" part. Let $(y(\cdot),Y(\cdot))$ be a weak solution to (4.20). For any $\phi\in D((A^*)^2)$, $t\in[0,T]$ and $r\in[t,T]$, by (4.23), we obtain that
\[
\begin{aligned}
\langle y(r),S(r-t)^*A^*\phi\rangle_H&=\langle y_T,S(r-t)^*A^*\phi\rangle_H+\int_r^T\langle y(s),A^*S(r-t)^*A^*\phi\rangle_H\,ds\\
&\quad-\int_r^T\langle F(s,y(s),Y(s)),S(r-t)^*A^*\phi\rangle_H\,ds-\int_r^T\langle\langle Y(s),S(r-t)^*A^*\phi\rangle\rangle_H\,dW(s).
\end{aligned}
\]
Integrating the above equality w.r.t. $r$ from $t$ to $T$, we obtain that
\[
\int_t^T\langle y(r),S(r-t)^*A^*\phi\rangle_H\,dr=I_1+I_2-I_3-I_4, \tag{4.25}
\]
where
\[
\begin{aligned}
I_1&\triangleq\int_t^T\langle y_T,S(r-t)^*A^*\phi\rangle_H\,dr,\\
I_2&\triangleq\int_t^T\int_r^T\langle y(s),A^*S(r-t)^*A^*\phi\rangle_H\,ds\,dr,\\
I_3&\triangleq\int_t^T\int_r^T\langle F(s,y(s),Y(s)),S(r-t)^*A^*\phi\rangle_H\,ds\,dr,\\
I_4&\triangleq\int_t^T\int_r^T\langle\langle Y(s),S(r-t)^*A^*\phi\rangle\rangle_H\,dW(s)\,dr.
\end{aligned}
\]
We have
\[
I_1=\Big\langle y_T,\int_t^T S(r-t)^*A^*\phi\,dr\Big\rangle_H=\langle y_T,S(T-t)^*\phi-\phi\rangle_H. \tag{4.26}
\]
By Fubini's theorem (i.e., Theorem 2.21), we obtain that
\[
\begin{aligned}
I_2&=\int_t^T\int_t^s\langle y(s),A^*S(r-t)^*A^*\phi\rangle_H\,dr\,ds\\
&=\int_t^T\langle y(s),S(s-t)^*A^*\phi\rangle_H\,ds-\int_t^T\langle y(s),A^*\phi\rangle_H\,ds
\end{aligned} \tag{4.27}
\]
and
\[
\begin{aligned}
I_3&=\int_t^T\int_t^s\langle F(s,y(s),Y(s)),S(r-t)^*A^*\phi\rangle_H\,dr\,ds\\
&=\int_t^T\langle F(s,y(s),Y(s)),S(s-t)^*\phi\rangle_H\,ds-\int_t^T\langle F(s,y(s),Y(s)),\phi\rangle_H\,ds.
\end{aligned} \tag{4.28}
\]
Further, by the stochastic Fubini theorem (i.e., Theorem 2.141), we conclude that
\[
\begin{aligned}
I_4&=\int_t^T\int_t^s\langle\langle Y(s),S(r-t)^*A^*\phi\rangle\rangle_H\,dr\,dW(s)\\
&=\int_t^T\langle\langle Y(s),S(s-t)^*\phi\rangle\rangle_H\,dW(s)-\int_t^T\langle\langle Y(s),\phi\rangle\rangle_H\,dW(s).
\end{aligned} \tag{4.29}
\]
From (4.25)-(4.29) and using (4.23) again, we obtain that
\[
\langle y(t),\phi\rangle_H=\langle y_T,S(T-t)^*\phi\rangle_H-\int_t^T\langle F(s,y(s),Y(s)),S(s-t)^*\phi\rangle_H\,ds-\int_t^T\langle\langle Y(s),S(s-t)^*\phi\rangle\rangle_H\,dW(s),
\]
which implies that
\[
\langle y(t),\phi\rangle_H=\langle S(T-t)y_T,\phi\rangle_H-\int_t^T\langle S(s-t)F(s,y(s),Y(s)),\phi\rangle_H\,ds-\int_t^T\langle\langle S(s-t)Y(s),\phi\rangle\rangle_H\,dW(s). \tag{4.30}
\]
Since $D((A^*)^2)$ is dense in $H$, we see that (4.30) holds for any $\phi\in H$. Therefore, $(y(\cdot),Y(\cdot))$ is a mild solution to (4.20).

The "if" part. Assume that $(y(\cdot),Y(\cdot))$ is a mild solution to (4.20). By (4.24), for any $\eta\in D(A^*)$, $t\in[0,T]$ and $s\in[t,T)$, it holds that
\[
\begin{aligned}
\langle y(s),A^*\eta\rangle_H&=\langle y_T,S(T-s)^*A^*\eta\rangle_H-\int_s^T\langle F(r,y(r),Y(r)),S(r-s)^*A^*\eta\rangle_H\,dr\\
&\quad-\int_s^T\langle\langle Y(r),S(r-s)^*A^*\eta\rangle\rangle_H\,dW(r).
\end{aligned}
\]


Integrating the above equality w.r.t. $s$ from $t$ to $T$, we find that
\[
\int_t^T\langle y(s),A^*\eta\rangle_H\,ds=I_5-I_6-I_7, \tag{4.31}
\]
where
\[
\begin{aligned}
I_5&\triangleq\int_t^T\langle y_T,S(T-s)^*A^*\eta\rangle_H\,ds,\\
I_6&\triangleq\int_t^T\int_s^T\langle F(r,y(r),Y(r)),S(r-s)^*A^*\eta\rangle_H\,dr\,ds,\\
I_7&\triangleq\int_t^T\int_s^T\langle\langle Y(r),S(r-s)^*A^*\eta\rangle\rangle_H\,dW(r)\,ds.
\end{aligned}
\]
Clearly,
\[
I_5=\Big\langle y_T,\int_t^T S(T-s)^*A^*\eta\,ds\Big\rangle_H=\langle y_T,S(T-t)^*\eta\rangle_H-\langle y_T,\eta\rangle_H. \tag{4.32}
\]
By means of Fubini's theorem, we get that
\[
\begin{aligned}
I_6&=\int_t^T\int_t^r\langle F(r,y(r),Y(r)),S(r-s)^*A^*\eta\rangle_H\,ds\,dr\\
&=\int_t^T\langle F(r,y(r),Y(r)),S(r-t)^*\eta\rangle_H\,dr-\int_t^T\langle F(r,y(r),Y(r)),\eta\rangle_H\,dr.
\end{aligned} \tag{4.33}
\]
By virtue of the stochastic Fubini theorem, we obtain that
\[
\begin{aligned}
I_7&=\int_t^T\int_t^r\langle\langle Y(r),S(r-s)^*A^*\eta\rangle\rangle_H\,ds\,dW(r)\\
&=\int_t^T\langle\langle Y(r),S(r-t)^*\eta\rangle\rangle_H\,dW(r)-\int_t^T\langle\langle Y(r),\eta\rangle\rangle_H\,dW(r).
\end{aligned} \tag{4.34}
\]
From (4.31)-(4.34) and noting (4.24), we find that $(y(\cdot),Y(\cdot))$ satisfies (4.23). This implies that $(y(\cdot),Y(\cdot))$ is a weak solution to (4.20).

Similarly to Lemma 3.11, we have the following result.

Lemma 4.9. A mild solution $(y(\cdot),Y(\cdot))$ to (4.20) is a strong solution (to the same equation) if the following three conditions hold:
1) $y_T\in D(A)$, $S(s-t)F(s,y(s),Y(s))\in D(A)$ and $S(s-t)Y(s)\in\mathcal L_2(V;D(A))$ for any $t\in[0,T]$ and $s\in[t,T]$, a.s.;
2) $\displaystyle\int_0^T\int_t^T|AS(s-t)F(s,y(s),Y(s))|_H\,ds\,dt<\infty$; and
3) $\displaystyle\int_0^T\int_t^T|AS(s-t)Y(s)|^2_{L_2^0}\,ds\,dt<\infty$.



Proof: Recall that for every $t\in[0,T]$, $y(t)$ satisfies (4.24). By the conditions in Lemma 4.9, we see that, for any $t\in[0,T]$ and a.s., $y(t)\in D(A)$,
\[
A\int_t^T S(s-t)F(s,y(s),Y(s))\,ds=\int_t^T AS(s-t)F(s,y(s),Y(s))\,ds, \tag{4.35}
\]
and
\[
A\int_t^T S(s-t)Y(s)\,dW(s)=\int_t^T AS(s-t)Y(s)\,dW(s). \tag{4.36}
\]
From (4.24) and (4.35)-(4.36), we find that for any $t\in[0,T]$,
\[
Ay(t)=AS(T-t)y_T-\int_t^T AS(s-t)F(s,y(s),Y(s))\,ds-\int_t^T AS(s-t)Y(s)\,dW(s). \tag{4.37}
\]
Hence, $Ay(\cdot)\in L^1(0,T;H)$ a.s. This, together with Proposition 4.7 and Theorem 4.8, implies that $(y(\cdot),Y(\cdot))$ is a strong solution to (4.20).

In the sequel, we will need the following assumption on the nonlinearity in (4.20).

Condition 4.1. $F:[0,T]\times\Omega\times H\times L_2^0\to H$ is a given function satisfying:
1) $F(\cdot,y,z)$ is $\mathbf F$-adapted for each $(y,z)\in H\times L_2^0$; and
2) there exist two nonnegative functions $L_1(\cdot)\in L^1(0,T)$ and $L_2(\cdot)\in L^2(0,T)$ such that, for any $y_1,y_2\in H$, $z_1,z_2\in L_2^0$ and a.e. $t\in[0,T]$,
\[
|F(t,y_1,z_1)-F(t,y_2,z_2)|_H\le L_1(t)|y_1-y_2|_H+L_2(t)|z_1-z_2|_{L_2^0},\quad\text{a.s.} \tag{4.38}
\]
Generally speaking, it is impossible to find mild solutions to (4.20) under Condition 4.1, except in the case of the natural filtration (see the next subsection). Because of this, in Section 4.3 we shall introduce a weaker concept of solution to (4.20) for the case of a general filtration.

4.2.2 Well-Posedness in the Sense of Mild Solution for the Case of Natural Filtration

In this subsection, we assume that $\mathbf F$ is the natural filtration generated by the underlying Brownian motion $W(\cdot)$. Similarly to Theorem 3.14, we have the following well-posedness of (4.20) in the sense of mild solution.

Theorem 4.10. Let Condition 4.1 hold and $F(\cdot,0,0)\in L^1_{\mathbb F}(0,T;L^2(\Omega;H))$. Then, for any $y_T\in L^2_{\mathcal F_T}(\Omega;H)$, the equation (4.20) admits a unique mild solution $(y(\cdot),Y(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];H))\times L^2_{\mathbb F}(0,T;L_2^0)$. Further,
\[
|(y(\cdot),Y(\cdot))|_{L^2_{\mathbb F}(\Omega;C([0,T];H))\times L^2_{\mathbb F}(0,T;L_2^0)}\le C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;H)}+|F(\cdot,0,0)|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}\big). \tag{4.39}
\]





Proof : For any s ∈ [0, T ), let X [s, T ] = L2F (Ω; C([s, T ]; H)) × L2F (s, T ; L02 ). In what follows, we divide the proof into two steps. Step 1. In this step, we consider the linear equation (4.21) with f (·) ∈ L1F (0, T ; L2 (Ω; H)) and yT ∈ L2FT (Ω; H). By the Martingale Representation Theorem (Theorem 2.147), there is an l(·) ∈ L2F (0, T ; L02 ) such that ∫

t

E(yT | Ft ) = EyT +

l(σ)dW (σ).

(4.40)

0

By Corollary 2.149, there is an h(·, ·) ∈ L1 (0, T ; L2F (0, T ; L02 )) such that for a.e. s ∈ [0, T ], ∫ s f (s) = Ef (s) + h(s, σ)dW (σ) (4.41) 0

and |h(·, ·)|L1 (0,T ;L2F (0,T ;L02 )) ≤ |f (·)|L1F (0,T ;L2 (Ω;H)) . Put



(

T

y(t) = E S(T − t)yT −

) S(s − t)f (s)ds Ft .

(4.42)

(4.43)

t

This, together with (4.40) and (4.41), implies that (



∫ −

l(σ)dW (σ)

(

T

)

T

y(t) = S(T − t) yT − t



)

s

S(s − t) f (s) +

(4.44)

h(s, σ)dW (σ) ds.

t

t

From the stochastic Fubini Theorem, we get that ∫ ∫

T





T

y(t) = S(T − t)yT −

T

S(s − t)f (s)ds −

S(T − t)l(σ)dW (σ)

t

t

T



S(s − t)h(s, σ)dsdW (σ) t

σ





T

= S(T − t)yT −

(4.45) T

S(s − t)f (s)ds − t

S(s − t)Y (s)dW (s), t

where





T

Y (s) = S(T − s)l(s) +

S(η − s)h(η, s)dη.

(4.46)

s

Hence, (y(·), Y (·)) is a mild solution to (4.21). From (4.43) and (4.46), and noting (4.42), we find that ( ) |(y(·), Y (·))|X [0,T ] ≤ C |yT |L2F (Ω;H) + |f (·)|L1F (0,T ;L2 (Ω;H)) . T

(4.47)



Step 2. Fix any $T_1\in[0,T)$. For any fixed $(z(\cdot),Z(\cdot))\in\mathcal X[T_1,T]$, it is easy to see that $F(\cdot,z(\cdot),Z(\cdot))\in L^1_{\mathbb F}(T_1,T;L^2(\Omega;H))$. Consider the following equation:
\[
\begin{cases} dy(t)=-Ay(t)\,dt+F(t,z(t),Z(t))\,dt+Y(t)\,dW(t), & t\in[T_1,T),\\ y(T)=y_T. \end{cases} \tag{4.48}
\]
By the result in Step 1, the equation (4.48) admits a unique mild solution $(y(\cdot),Y(\cdot))\in\mathcal X[T_1,T]$. This defines a map $\mathcal J:\mathcal X[T_1,T]\to\mathcal X[T_1,T]$ by $(z(\cdot),Z(\cdot))\mapsto(y(\cdot),Y(\cdot))$. We claim that, for $T_1$ sufficiently close to $T$,
\[
\big|\mathcal J(z,Z)-\mathcal J(\tilde z,\widetilde Z)\big|_{\mathcal X[T_1,T]}\le\frac12\big|(z,Z)-(\tilde z,\widetilde Z)\big|_{\mathcal X[T_1,T]},\qquad\forall\,(z,Z),(\tilde z,\widetilde Z)\in\mathcal X[T_1,T]. \tag{4.49}
\]
To show (4.49), put $\hat f(\cdot)=F(\cdot,z(\cdot),Z(\cdot))-F(\cdot,\tilde z(\cdot),\widetilde Z(\cdot))$ and $(\hat y(\cdot),\widehat Y(\cdot))=\mathcal J(z,Z)-\mathcal J(\tilde z,\widetilde Z)$. Then, $(\hat y(\cdot),\widehat Y(\cdot))$ satisfies
\[
\begin{cases} d\hat y(t)=-A\hat y(t)\,dt+\hat f(t)\,dt+\widehat Y(t)\,dW(t), & t\in[T_1,T),\\ \hat y(T)=0. \end{cases} \tag{4.50}
\]
From (4.47), we find that
\[
\begin{aligned}
|(\hat y(\cdot),\widehat Y(\cdot))|^2_{\mathcal X[T_1,T]}&\le C\big|F(\cdot,z(\cdot),Z(\cdot))-F(\cdot,\tilde z(\cdot),\widetilde Z(\cdot))\big|^2_{L^1_{\mathbb F}(T_1,T;L^2(\Omega;H))}\\
&\le C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\,|(z(\cdot)-\tilde z(\cdot),Z(\cdot)-\widetilde Z(\cdot))|^2_{\mathcal X[T_1,T]}
\end{aligned} \tag{4.51}
\]
and
\[
\begin{aligned}
|(y(\cdot),Y(\cdot))|^2_{\mathcal X[T_1,T]}&\le C\big(|y_T|^2_{L^2_{\mathcal F_T}(\Omega;H)}+|F(\cdot,z(\cdot),Z(\cdot))|^2_{L^1_{\mathbb F}(T_1,T;L^2(\Omega;H))}\big)\\
&\le C\big(|y_T|^2_{L^2_{\mathcal F_T}(\Omega;H)}+|F(\cdot,0,0)|^2_{L^1_{\mathbb F}(T_1,T;L^2(\Omega;H))}\big)\\
&\quad+C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\,|(z(\cdot),Z(\cdot))|^2_{\mathcal X[T_1,T]}. 
\end{aligned} \tag{4.52}
\]
Let us choose $T_1$ such that
\[
C\big(|L_1(\cdot)|^2_{L^1(T_1,T)}+|L_2(\cdot)|^2_{L^2(T_1,T)}\big)\le\frac14. \tag{4.53}
\]
Then, by (4.51), we get (4.49). This implies that the map $\mathcal J$ is contractive. Hence, it admits a unique fixed point, which is a mild solution to (4.20) on $[T_1,T]$. Moreover, by (4.52)-(4.53), it follows that
\[
|(y(\cdot),Y(\cdot))|^2_{\mathcal X[T_1,T]}\le C\big(|y_T|^2_{L^2_{\mathcal F_T}(\Omega;H)}+|F(\cdot,0,0)|^2_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}\big). \tag{4.54}
\]



Repeating the above argument, we obtain the existence of mild solutions to (4.20). The uniqueness and the estimate (4.39) follow from (4.54).

Similarly to the proof of Theorem 3.24, we can show the following result, which describes the smoothing effect of mild solutions to a class of backward stochastic evolution equations.

Theorem 4.11. Let Condition 4.1 hold, $F(\cdot,0,0)\in L^1_{\mathbb F}(0,T;L^2(\Omega;H))$, and let $A$ be a self-adjoint, negative definite (unbounded linear) operator on $H$. Then, for any $y_T\in L^2_{\mathcal F_T}(\Omega;H)$, the equation (4.20) admits a unique mild solution $(y(\cdot),Y(\cdot))\in\big(L^2_{\mathbb F}(\Omega;C([0,T];H))\cap L^2_{\mathbb F}(0,T;D((-A)^{1/2}))\big)\times L^2_{\mathbb F}(0,T;L_2^0)$. Moreover,
\[
|y(\cdot)|_{L^2_{\mathbb F}(\Omega;C([0,T];H))}+|y(\cdot)|_{L^2_{\mathbb F}(0,T;D((-A)^{1/2}))}+|Y(\cdot)|_{L^2_{\mathbb F}(0,T;L_2^0)}\le C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;H)}+|F(\cdot,0,0)|_{L^1_{\mathbb F}(0,T;L^2(\Omega;H))}\big). \tag{4.55}
\]

As in the case of stochastic evolution equations in infinite dimensions, on most occasions we cannot apply Itô's formula directly to mild solutions of backward stochastic evolution equations. Indeed, the mild solution usually does not have enough regularity. Nevertheless, similarly to the last chapter, the problem can be solved by the following strategy.
1. Introduce suitable approximating (backward stochastic evolution) equations with strong solutions, such that the limit of these strong solutions is the mild or weak solution to the equation (4.20).
2. Obtain the desired properties for the above strong solutions.
3. Use a density argument to obtain the desired properties for mild/weak solutions to (4.20).

As in the case of stochastic evolution equations, there are many ways to implement the above three steps for backward stochastic evolution equations. For example, for each $\lambda\in\rho(A)$, we may introduce the following approximating equation of (4.20):
\[
\begin{cases} dy_\lambda(t)=-Ay_\lambda(t)\,dt+R(\lambda)F(t,y_\lambda(t),Y_\lambda(t))\,dt+Y_\lambda(t)\,dW(t) & \text{in } [0,T),\\ y_\lambda(T)=R(\lambda)y_T, \end{cases} \tag{4.56}
\]
where $R(\lambda)=\lambda(\lambda I-A)^{-1}$. By Lemma 4.9 and similarly to the proof of Theorem 3.22, one can show the following result.

Theorem 4.12. Let Condition 4.1 hold and $F(\cdot,0,0)\in L^1_{\mathbb F}(0,T;L^2(\Omega;H))$. Then, for each $y_T\in L^2_{\mathcal F_T}(\Omega;H)$ and $\lambda\in\rho(A)$, (4.56) admits a unique strong solution $(y_\lambda(\cdot),Y_\lambda(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];D(A)))\times L^2_{\mathbb F}(0,T;L_2^0)$. Moreover, as $\lambda\to\infty$, $(y_\lambda(\cdot),Y_\lambda(\cdot))$ converges in $L^2_{\mathbb F}(\Omega;C([0,T];H))\times L^2_{\mathbb F}(0,T;L_2^0)$ to $(y(\cdot),Y(\cdot))$, the mild solution to (4.20).
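The regularizing operator $R(\lambda)=\lambda(\lambda I-A)^{-1}$ used in (4.56) converges strongly to the identity as $\lambda\to\infty$, which is what drives the approximation. The Python sketch below (an added illustration; the finite-dimensional matrix $A$ stands in for an unbounded generator and is an arbitrary choice) shows this convergence numerically.

```python
import numpy as np

# Approximation property behind (4.56):  R(λ) = λ(λI - A)^{-1} satisfies
# R(λ)x → x as λ → ∞.  A diagonal negative definite matrix mimics a
# self-adjoint negative definite generator.
A = np.diag([-1.0, -10.0, -100.0])   # illustrative "generator"
x = np.array([1.0, 2.0, 3.0])
I = np.eye(3)

def R(lam):
    return lam * np.linalg.inv(lam * I - A)

# Error |R(λ)x - x| decreases as λ grows; componentwise it equals
# |a_i/(λ - a_i)| |x_i| for the eigenvalues a_i of A.
err = [np.linalg.norm(R(lam) @ x - x) for lam in (1e1, 1e3, 1e5)]
```

For the diagonal $A$ above, the slowest-converging component is the one with the largest $|a_i|$, mirroring the fact that $R(\lambda)$ smooths most strongly the "roughest" modes of an unbounded operator.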

As the case of stochastic evolution equations in infinite dimensions, we cannot employ Itˆo’s formula for mild solutions to backward stochastic evolution equations directly on most occasions. Indeed, usually the mild solution does not have enough regularity. Nevertheless, similarly to the last chapter, the problem can also be solved by the following strategy. 1. Introduce suitable approximating (backward stochastic evolution) equations with strong solutions such that the limit of these strong solutions is the mild or weak solution to the equation (4.20). 2. Obtain the desired properties for the above strong solutions. 3. Utilize the density argument to obtain the desired properties for mild/weak solutions to (4.20). Similar to the case of stochastic evolution equations, there are many mays to implement the above three steps for backward stochastic evolution equations. For example, for each λ ∈ ρ(A), we may introduce an approximating equation of (4.20) as follows: { dyλ (t) = −Ayλ (t)dt + R(λ)F (t, yλ (t), Yλ (t))dt + Yλ (t)dW (t) in (0, T ], yλ (T ) = R(λ)yT , (4.56) where R(λ) = λ(λI − A)−1 . By Lemma 4.9, similarly to the proof of Theorem 3.22, one can show the following result. Theorem 4.12. Let Condition 4.1 hold and F (·, 0, 0) ∈ L1F (0, T ; L2 (Ω; H)). Then, for each yT ∈ L2FT (Ω; H) and λ ∈ ρ(A), (4.56) admits a unique strong solution (yλ (·), Yλ (·)) ∈ L2F (Ω; C([0, T ]; D(A))) × L2F (0, T ; L02 ). Moreover, as λ → ∞, (yλ (·), Yλ (·)) converges to (y(·), Y (·)) (in L2F (Ω; C([0, T ]; H)) × L2F (0, T ; L02 )), the mild solution to (4.20).

170

4 Backward Stochastic Evolution Equations

4.3 The Case of General Filtration In this section, we shall employ the stochastic transposition method (developed first in our paper [241] for backward stochastic differential equations) to introduce another concept of solution, i.e., the transposition solution, to the equation (4.20). This sort of equation will play an important role in establishing the Pontryagin type maximum principle for optimal control problems of stochastic evolution equations in infinite dimensions. It is notable that we do not assume the condition that F is the natural filtration, and therefore we cannot use the Martingale Representation Theorem (Theorem 2.147) anymore. Instead, as we shall see later, the Riesz-type representation theorem presented in Chapter 2 (i.e., Theorem 2.55 or more precisely Theorem 2.73, established in [239]) will play a fundamental role to derive the well-posedness of (4.20) in the sense of transposition solution. Throughout this section, we assume that yT ∈ LpFT (Ω; H)) for a given p ∈ (1, 2] unless other stated. In order to define the transposition solution to (4.20), we introduce the following (forward) stochastic evolution equation: { dz = (A∗ z + v1 )ds + v2 dW (s) in (t, T ], (4.57) z(t) = η, where t ∈ [0, T ], v1 ∈ L1F (t, T ; Lq (Ω; H)), v2 ∈ LqF (Ω; L2 (t, T ; L02 )), η ∈ p LqFt (Ω; H), and q = p−1 . By Theorem 3.13 and noting (2.30), the equation (4.57) admits a unique mild solution z ∈ CF ([t, T ]; Lq (Ω; H)), and |z|CF ([t,T ];Lq (Ω;H)) ( ) ≤ C |η|LqF (Ω;H) + |v1 |L1F (t,T ;Lq (Ω;H)) + |v2 |LqF (Ω;L2 (t,T ;L02 )) .

(4.58)

t

We now introduce the following notion. Definition 4.13. We call (y(·), Y (·)) ∈ DF ([0, T ]; Lp (Ω; H)) × L2F (0, T ; Lp (Ω; L02 )) a transposition solution to the equation (4.20) if for any t ∈ [0, T ], v1 (·) ∈ L1F (t, T ; Lq (Ω; H)), v2 (·) ∈ LqF (Ω; L2 (t, T ; L02 )), η ∈ LqFt (Ω; H) and the z ∈ CF ([t, T ]; Lq (Ω; H)) to (4.57), it holds that ⟨ corresponding solution ⟩ 1 z(·), F (·, y(·), Y (·)) H ∈ LF (Ω; L1 (t, T )), and ∫ T ⟨ ⟩ ⟨ ⟩ E z(T ), yT H − E z(s), F (s, y(s), Y (s)) H ds t (4.59) ∫ T ∫ T ⟨ ⟩ ⟨ ⟩ ⟨ ⟩ = E η, y(t) H + E v1 (s), y(s) H ds + E v2 (s), Y (s) L0 ds. t

t

2

Before showing the well-posedness of (4.20) in the sense of transposition solution, we collect some preliminaries. First, we need the following technical result, which can be viewed as a variant of the classical Lebesgue Theorem (on the Lebesgue points).


Lemma 4.14. Assume that $\alpha\in[1,\infty)$, with $\alpha'=\frac{\alpha}{\alpha-1}$ if $\alpha\in(1,\infty)$ and $\alpha'=\infty$ if $\alpha=1$; $r\in(1,\infty)$, $r'=\frac{r}{r-1}$; $f_1\in L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))$ and $f_2\in L^{r'}_{\mathbb F}(0,T;L^{\alpha'}(\Omega;H))$. Then there exists a monotonic sequence $\{h_n\}_{n=1}^\infty$ of positive numbers with $\lim_{n\to\infty}h_n=0$ such that
\[
\lim_{n\to\infty}\frac1{h_n}\int_t^{t+h_n}\mathbb E\langle f_1(t),f_2(\tau)\rangle_H\,d\tau=\mathbb E\langle f_1(t),f_2(t)\rangle_H,\quad\text{a.e. } t\in[0,T].
\]

Proof: Write
\[
\tilde f_2=\begin{cases} f_2, & t\in[0,T],\\ 0, & t\in(T,2T]. \end{cases}
\]
Then, $\tilde f_2\in L^{r'}_{\mathbb F}(0,2T;L^{\alpha'}(\Omega;H))$ and $|\tilde f_2|_{L^{r'}_{\mathbb F}(0,2T;L^{\alpha'}(\Omega;H))}=|\tilde f_2|_{L^{r'}_{\mathbb F}(0,T;L^{\alpha'}(\Omega;H))}=|f_2|_{L^{r'}_{\mathbb F}(0,T;L^{\alpha'}(\Omega;H))}$. By Lemma 2.76, for any $\varepsilon>0$, one can find an $f_2^0\in C_{\mathbb F}([0,2T];L^{\alpha'}(\Omega;H))$ such that
\[
|\tilde f_2-f_2^0|_{L^{r'}_{\mathbb F}(0,2T;L^{\alpha'}(\Omega;H))}\le\varepsilon. \tag{4.60}
\]
Thanks to (4.60), we find that
\[
\int_0^T\big|\mathbb E\langle f_1(t),\tilde f_2(t)\rangle_H-\mathbb E\langle f_1(t),f_2^0(t)\rangle_H\big|\,dt\le|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}\,|\tilde f_2-f_2^0|_{L^{r'}_{\mathbb F}(0,2T;L^{\alpha'}(\Omega;H))}\le\varepsilon|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}. \tag{4.61}
\]
Utilizing (4.60) again, for $h\in(0,T)$, we see that
\[
\begin{aligned}
&\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),\tilde f_2(\tau)\rangle_H\,d\tau-\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),f_2^0(\tau)\rangle_H\,d\tau\Big|\,dt\\
&=\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),\tilde f_2(\tau)-f_2^0(\tau)\rangle_H\,d\tau\Big|\,dt\\
&\le\frac1h\int_0^T\int_t^{t+h}|f_1(t)|_{L^\alpha_{\mathcal F_T}(\Omega;H)}\,|\tilde f_2(\tau)-f_2^0(\tau)|_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,d\tau\,dt\\
&\le\Big(\frac1h\int_0^T\int_t^{t+h}|f_1(t)|^r_{L^\alpha_{\mathcal F_T}(\Omega;H)}\,d\tau\,dt\Big)^{1/r}\Big(\frac1h\int_0^T\int_t^{t+h}|\tilde f_2(\tau)-f_2^0(\tau)|^{r'}_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,d\tau\,dt\Big)^{1/r'}\\
&=|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}\Big(\frac1h\int_0^T\int_0^h|\tilde f_2(t+\tau)-f_2^0(t+\tau)|^{r'}_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,d\tau\,dt\Big)^{1/r'}\\
&=|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}\Big(\frac1h\int_0^h\int_\tau^{T+\tau}|\tilde f_2(t)-f_2^0(t)|^{r'}_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,dt\,d\tau\Big)^{1/r'}\\
&\le|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}\Big(\frac1h\int_0^h\int_0^{2T}|\tilde f_2(t)-f_2^0(t)|^{r'}_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,dt\,d\tau\Big)^{1/r'}\\
&\le\varepsilon|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}. 
\end{aligned} \tag{4.62}
\]
On the other hand, by the uniform continuity of $f_2^0(\cdot)$ in $L^{\alpha'}_{\mathcal F_T}(\Omega;H)$, one can find a $\delta=\delta(\varepsilon)>0$ such that



\[
|f_2^0(s_1)-f_2^0(s_2)|_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\le\varepsilon,\qquad\forall\,s_1,s_2\in[0,2T]\ \text{satisfying}\ |s_1-s_2|\le\delta.
\]
Hence, for each $h\in(0,\delta]$, we have
\[
\begin{aligned}
&\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),f_2^0(\tau)\rangle_H\,d\tau-\mathbb E\langle f_1(t),f_2^0(t)\rangle_H\Big|\,dt\\
&=\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),f_2^0(\tau)-f_2^0(t)\rangle_H\,d\tau\Big|\,dt\\
&\le\frac1h\int_0^T\int_t^{t+h}|f_1(t)|_{L^\alpha_{\mathcal F_T}(\Omega;H)}\,|f_2^0(\tau)-f_2^0(t)|_{L^{\alpha'}_{\mathcal F_T}(\Omega;H)}\,d\tau\,dt\\
&\le\varepsilon\int_0^T|f_1(t)|_{L^\alpha_{\mathcal F_T}(\Omega;H)}\,dt\le C\varepsilon|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}. 
\end{aligned} \tag{4.63}
\]
By (4.61), (4.62) and (4.63), we conclude that, for $h\in(0,\delta]$,
\[
\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),\tilde f_2(\tau)\rangle_H\,d\tau-\mathbb E\langle f_1(t),\tilde f_2(t)\rangle_H\Big|\,dt\le C\varepsilon|f_1|_{L^r_{\mathbb F}(0,T;L^\alpha(\Omega;H))}.
\]
This leads to
\[
\lim_{h\to0}\int_0^T\Big|\frac1h\int_t^{t+h}\mathbb E\langle f_1(t),\tilde f_2(\tau)\rangle_H\,d\tau-\mathbb E\langle f_1(t),\tilde f_2(t)\rangle_H\Big|\,dt=0.
\]
Hence, one can find a monotonic sequence $\{h_n\}_{n=1}^\infty$ of positive numbers with $\lim_{n\to\infty}h_n=0$ such that
\[
\lim_{n\to\infty}\frac1{h_n}\int_t^{t+h_n}\mathbb E\langle f_1(t),\tilde f_2(\tau)\rangle_H\,d\tau=\mathbb E\langle f_1(t),\tilde f_2(t)\rangle_H,\quad\text{a.e. } t\in[0,T].
\]
By this and the definition of $\tilde f_2(\cdot)$, we conclude that
\[
\lim_{n\to\infty}\frac1{h_n}\int_t^{t+h_n}\mathbb E\langle f_1(t),f_2(\tau)\rangle_H\,d\tau=\lim_{n\to\infty}\frac1{h_n}\int_t^{t+h_n}\mathbb E\langle f_1(t),\tilde f_2(\tau)\rangle_H\,d\tau=\mathbb E\langle f_1(t),\tilde f_2(t)\rangle_H=\mathbb E\langle f_1(t),f_2(t)\rangle_H,\quad\text{a.e. } t\in[0,T].
\]
This completes the proof of Lemma 4.14.

We also need the following technical result.

Lemma 4.15. If $p\in[1,\infty]$, $y_T\in L^p_{\mathcal F_T}(\Omega;H)$ and $f(\cdot)\in L^p_{\mathbb F}(\Omega;L^1(0,T;H))$, then the process
\[
\zeta^\cdot\triangleq\mathbb E\Big(S(T-\cdot)y_T-\int_\cdot^T S(s-\cdot)f(s)\,ds\ \Big|\ \mathcal F_\cdot\Big) \tag{4.64}
\]
has a càdlàg modification in $L^p_{\mathbb F}(\Omega;D([0,T];H))$.
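Lemma 4.14 is an averaged form of the Lebesgue differentiation theorem: the sliding averages $\frac1h\int_t^{t+h}(\cdot)\,d\tau$ converge to the pointwise value along a sequence $h_n\downarrow0$. The following Python sketch (an added deterministic illustration; the function and evaluation point are arbitrary choices) shows this averaging converging numerically.

```python
import numpy as np

# Lebesgue-point behaviour: (1/h) ∫_t^{t+h} f(τ) dτ → f(t) as h → 0.
f = lambda tau: np.exp(-tau) * np.sin(5.0 * tau)
t = 0.3

def average(h, m=10_000):
    # midpoint-rule approximation of (1/h) ∫_t^{t+h} f(τ) dτ
    tau = t + (np.arange(m) + 0.5) * (h / m)
    return np.mean(f(tau))

gaps = [abs(average(h) - f(t)) for h in (0.1, 0.01, 0.001)]
```

For smooth $f$ the gap shrinks linearly in $h$ (it is $\approx|f'(t)|h/2$); Lemma 4.14 asserts the analogous convergence, a.e. in $t$ and along a subsequence, for merely integrable integrands paired in duality.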



Proof: We consider only the case $p\in[1,\infty)$. Recall that for any $\lambda\in\rho(A)$, the Yosida approximation $A_\lambda\triangleq\lambda^2(\lambda I-A)^{-1}-\lambda I$ of $A$ generates a $C_0$-group $\{S_\lambda(t)\}_{t\in\mathbb R}$ on $H$ (e.g., [372]). For $t\in[0,T]$, put
\[
\zeta_\lambda^t\triangleq\mathbb E\Big(S_\lambda(T-t)y_T-\int_t^T S_\lambda(s-t)f(s)\,ds\ \Big|\ \mathcal F_t\Big) \tag{4.65}
\]
and
\[
X_\lambda(t)=S_\lambda(t)\zeta_\lambda^t-\int_0^t S_\lambda(s)f(s)\,ds. \tag{4.66}
\]
It is easy to see that $X_\lambda(\cdot)=\mathbb E\big(S_\lambda(T)y_T-\int_0^T S_\lambda(s)f(s)\,ds\,\big|\,\mathcal F_\cdot\big)$ is an $H$-valued $\mathbf F$-martingale. By Theorem 2.111, it enjoys a càdlàg modification in $L^p_{\mathbb F}(\Omega;D([0,T];H))$, and hence so does the process
\[
\zeta_\lambda^t=S_\lambda(-t)\Big[X_\lambda(t)+\int_0^t S_\lambda(s)f(s)\,ds\Big].
\]
Here we have used the fact that $\{S_\lambda(t)\}_{t\in\mathbb R}$ is a $C_0$-group on $H$. We still use $\zeta_\lambda^\cdot$ to stand for its càdlàg modification. From (4.64) and (4.65), it follows that
\[
\begin{aligned}
\lim_{\lambda\to\infty}|\zeta^\cdot-\zeta_\lambda^\cdot|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}&\le\lim_{\lambda\to\infty}\big|S(T-\cdot)y_T-S_\lambda(T-\cdot)y_T\big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}\\
&\quad+\lim_{\lambda\to\infty}\Big|\int_\cdot^T S(s-\cdot)f(s)\,ds-\int_\cdot^T S_\lambda(s-\cdot)f(s)\,ds\Big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}. 
\end{aligned} \tag{4.67}
\]
Let us prove that the right-hand side of (4.67) equals zero. For this purpose, noting that the set of simple functions is dense in $L^p_{\mathcal F_T}(\Omega;H)$, we conclude that there exists a sequence $\{y^m\}_{m=1}^\infty\subset L^p_{\mathcal F_T}(\Omega;H)$ satisfying the following two conditions:
1) $y^m=\sum_{k=1}^{N_m}\alpha_k^m\chi_{\Omega_k^m}(\omega)$, where $N_m\in\mathbb N$, $\alpha_k^m\in H$ and $\Omega_k^m\in\mathcal F_T$, with $\{\Omega_k^m\}_{k=1}^{N_m}$ a partition of $\Omega$; and
2) $\lim_{m\to\infty}|y^m-y_T|_{L^p_{\mathcal F_T}(\Omega;H)}=0$.

Likewise, since the set of simple adapted processes is dense in $L^p_{\mathbb F}(\Omega;L^1(0,T;H))$, there exists a sequence $\{f^m\}_{m=1}^\infty\subset L^p_{\mathbb F}(\Omega;L^1(0,T;H))$ satisfying the following two conditions:
i) $f^m=\sum_{j=1}^{L_m}\sum_{k=1}^{M_j^m}\alpha_{j,k}^m\chi_{\Omega_{j,k}^m}(\omega)\chi_{[t_j^m,t_{j+1}^m)}(t)$, where $L_m\in\mathbb N$, $M_j^m\in\mathbb N$, $\alpha_{j,k}^m\in H$, $\Omega_{j,k}^m\in\mathcal F_{t_j^m}$ with $\{\Omega_{j,k}^m\}_{k=1}^{M_j^m}$ a partition of $\Omega$, and $0=t_1^m<t_2^m<\cdots<t_{L_m}^m<t_{L_m+1}^m=T$; and
ii) $\lim_{m\to\infty}|f^m-f|_{L^p_{\mathbb F}(\Omega;L^1(0,T;H))}=0$.

Now, let us prove that
\[
\lim_{\lambda\to\infty}\big|S(T-\cdot)y_T-S_\lambda(T-\cdot)y_T\big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}=0. \tag{4.68}
\]
Since $\{S(t)\}_{t\ge0}$ is a $C_0$-semigroup, for any $\varepsilon>0$, there is an $M>0$ such that for any $m>M$, it holds that
\[
|S(T-\cdot)y_T-S(T-\cdot)y^m-S_\lambda(T-\cdot)y_T+S_\lambda(T-\cdot)y^m|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}<\frac\varepsilon2.
\]
On the other hand, since $\lim_{\lambda\to\infty}S_\lambda(t)x=S(t)x$ uniformly in $t\in[0,T]$ for each $x\in H$, for each fixed $m$ there is a $\Lambda=\Lambda(m)>0$ such that for any $\lambda>\Lambda$, it holds that
\[
|S(T-\cdot)\alpha_k^m-S_\lambda(T-\cdot)\alpha_k^m|_{L^\infty(0,T;H)}<\frac\varepsilon{2N_m},\qquad k=1,2,\cdots,N_m,
\]
and hence $|S(T-\cdot)y^m-S_\lambda(T-\cdot)y^m|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}<\frac\varepsilon2$. Consequently, for $m>M$ and $\lambda>\Lambda(m)$, it holds that
\[
\begin{aligned}
\big|S(T-\cdot)y_T-S_\lambda(T-\cdot)y_T\big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}&\le|S(T-\cdot)y_T-S_\lambda(T-\cdot)y_T-S(T-\cdot)y^m+S_\lambda(T-\cdot)y^m|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}\\
&\quad+|S(T-\cdot)y^m-S_\lambda(T-\cdot)y^m|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}<\frac\varepsilon2+\frac\varepsilon2=\varepsilon.
\end{aligned}
\]
This gives (4.68).

Further, we show that
\[
\lim_{\lambda\to\infty}\Big|\int_\cdot^T S(s-\cdot)f(s)\,ds-\int_\cdot^T S_\lambda(s-\cdot)f(s)\,ds\Big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}=0. \tag{4.69}
\]
For any $\varepsilon>0$, there is an $M^*>0$ such that for any $m>M^*$,
\[
\Big|\int_\cdot^T S(s-\cdot)\big(f(s)-f^m(s)\big)\,ds-\int_\cdot^T S_\lambda(s-\cdot)\big(f(s)-f^m(s)\big)\,ds\Big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}<\frac\varepsilon2.
\]
Meanwhile, for each fixed $m$, there is a $\Lambda^*=\Lambda^*(m)>0$ such that for any $\lambda>\Lambda^*$,
\[
\Big|\int_\cdot^T S(s-\cdot)\alpha_{j,k}^m\,ds-\int_\cdot^T S_\lambda(s-\cdot)\alpha_{j,k}^m\,ds\Big|_{L^\infty(0,T;H)}<\frac\varepsilon{2L_m\max(M_1^m,M_2^m,\cdots,M_{L_m}^m)},\quad j=1,\cdots,L_m,\ k=1,\cdots,M_j^m.
\]
This implies that
\[
\Big|\int_\cdot^T S(s-\cdot)f^m(s)\,ds-\int_\cdot^T S_\lambda(s-\cdot)f^m(s)\,ds\Big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}\le\sum_{j=1}^{L_m}\sum_{k=1}^{M_j^m}\Big|\int_\cdot^T S(s-\cdot)\alpha_{j,k}^m\,ds-\int_\cdot^T S_\lambda(s-\cdot)\alpha_{j,k}^m\,ds\Big|_{L^\infty(0,T;H)}<\frac\varepsilon2.
\]
Hence, for $m>M^*$ and $\lambda>\Lambda^*(m)$, we have
\[
\Big|\int_\cdot^T S(s-\cdot)f(s)\,ds-\int_\cdot^T S_\lambda(s-\cdot)f(s)\,ds\Big|_{L^p_{\mathbb F}(\Omega;L^\infty(0,T;H))}<\varepsilon,
\]
which gives (4.69).


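The convergence S_λ(t) → S(t) invoked above can be observed numerically in finite dimensions. The following sketch is illustrative only: it assumes S_λ denotes the semigroup generated by the Yosida approximation A_λ = λA(λI − A)^{-1}, takes a symmetric negative definite matrix A as generator (all matrices and parameters are our choices, not from the text), and checks that e^{tA_λ} approaches e^{tA} as λ grows:

```python
import numpy as np

def expm_sym(A, t):
    # matrix exponential e^{tA} for a symmetric matrix A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(t * w)) @ V.T

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = -(B @ B.T) - np.eye(5)          # symmetric, negative definite generator
I = np.eye(5)
t = 1.0
S = expm_sym(A, t)                  # S(t) = e^{tA}

errs = []
for lam in [1.0, 10.0, 100.0, 1000.0]:
    A_lam = lam * A @ np.linalg.inv(lam * I - A)   # Yosida approximation of A
    errs.append(np.linalg.norm(S - expm_sym(A_lam, t)))

# the approximation error decreases monotonically as lambda grows
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-2
```

Since A_λ is a bounded operator (here a matrix), e^{tA_λ} is defined by the usual exponential series; the per-eigenvalue convergence λμ/(λ − μ) → μ drives the decay of the error.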
5 Control Problems for Stochastic Distributed Parameter Systems

5.1 An Example of Controlled Stochastic Differential Equations

Throughout this chapter, T > 0, and (Ω, F, F, P) (with F = {F_t}_{t∈[0,T]}) is a fixed filtered probability space (satisfying the usual condition), on which a one-dimensional standard Brownian motion W(·) is defined (unless otherwise stated). Before introducing control problems for stochastic distributed parameter systems, we first present some control problems for a linear stochastic differential equation arising in the financial market (e.g., [371]).

Consider a market where a bond and n stocks are traded continuously, where n ∈ ℕ. The price process P_0(t) of the bond is governed by the following ordinary differential equation:

  dP_0(t) = r(t)P_0(t)dt in [0, T],
  P_0(0) = p_0 > 0,      (5.1)

where r(t) > 0 is called the interest rate of the bond and p_0 is its initial price. The price processes P_1(t), ···, P_n(t) of the stocks satisfy the following stochastic differential equations:

  dP_j(t) = a_j(t)P_j(t)dt + b_j(t)P_j(t)dW(t) in [0, T],
  P_j(0) = p_j > 0,      (5.2)

© Springer Nature Switzerland AG 2021. Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_5


where a_j : [0, T] × Ω → ℝ is called the appreciation rate and b_j : [0, T] × Ω → ℝ the volatility of the j-th stock, and p_j is its initial price, j = 1, ···, n. All a_j and b_j are assumed to be F-adapted. The diffusion term in (5.2) reflects the fluctuation of the stock prices.

Denote by x(t) the total wealth of an investor at time t ≥ 0. Suppose that he/she holds N_j(t) shares of the j-th asset (j = 0, 1, ···, n) at time t. Clearly,

  x(t) = ∑_{j=0}^n N_j(t)P_j(t),   t ≥ 0.      (5.3)

Assume that the trading of shares and the payment of dividends (at the rate μ_j(t) per unit time and per unit of money invested in the j-th stock) take place continuously, while for the bond the rate μ_0(t) ≡ 0. Write c(t) for the rate of withdrawal for consumption. Then, with time increment ∆t, the change of this investor's total wealth is given by

  x(t+∆t) − x(t) = ∑_{j=0}^n N_j(t)(P_j(t+∆t) − P_j(t)) + ∑_{j=1}^n μ_j(t)N_j(t)P_j(t)∆t − c(t)∆t.      (5.4)

Put

  u_j(t) = N_j(t)P_j(t),   j = 0, 1, 2, ···, n,

which stands for the market value of the investor's wealth in the j-th bond/stock. By letting ∆t → 0, we obtain that

  dx(t) = ∑_{j=0}^n N_j(t)dP_j(t) + ∑_{j=1}^n μ_j(t)N_j(t)P_j(t)dt − c(t)dt
        = [r(t)N_0(t)P_0(t) + ∑_{j=1}^n (a_j(t) + μ_j(t))N_j(t)P_j(t) − c(t)]dt + ∑_{j=1}^n N_j(t)P_j(t)b_j(t)dW(t)
        = [r(t)x(t) + ∑_{j=1}^n (a_j(t) + μ_j(t) − r(t))u_j(t) − c(t)]dt + ∑_{j=1}^n b_j(t)u_j(t)dW(t).

Hence, x(·) satisfies the following stochastic differential equation:

  dx(t) = [r(t)x(t) + ∑_{j=1}^n (a_j(t) + μ_j(t) − r(t))u_j(t) − c(t)]dt + ∑_{j=1}^n b_j(t)u_j(t)dW(t),
  x(0) = x_0,      (5.5)

where x_0 represents the investor's initial wealth.

Remark 5.1. For simplicity, we assume that the investor can do short-selling and borrow money. In this case, each u_j(·) can be negative. When u_j(t) < 0 (for some j = 1, 2, ···, n), the investor is short-selling the j-th stock; when u_0(t) < 0, the investor borrows the amount |u_0(t)| at the interest rate r(t). On the other hand, it is clear that c(·) ≥ 0.

The investor can change the "allocation" of his/her wealth in these assets by changing u(·) = (u_0(·), u_1(·), ···, u_n(·))^⊤ and/or c(·), both of which can be viewed as controls. Clearly, there is an implicit constraint on the portfolio u(·) due to the stochastic environment: at any time the investor can only utilize the past information. Therefore, the controls must be nonanticipative, i.e., adapted to the filtration F.

For a given initial wealth x_0 > 0 and an expected return x_T ∈ L²_{F_T}(Ω), a natural objective of the investor is to choose an investment portfolio u(·) and a consumption plan c(·) such that

  x(T) = x_T,   a.s.      (5.6)

This is typically a controllability problem. Usually, the requirement (5.6) is too strong, particularly in the stochastic setting. A more realistic requirement is to relax (5.6) as follows:

  x(T) ∈ Λ,   a.s.,      (5.7)

for some given subset Λ ⊂ L²_{F_T}(Ω). This is also a sort of controllability problem.

On the other hand, given an initial wealth x_0 > 0, in order to prevent bankruptcy and guarantee a rewarding investment, the investor needs to choose u(·) and c(·) so that

  x(t) ≥ 0,   ∀ t ∈ [0, T],   a.s.,      (5.8)

  E x(T) ≥ θx_0,      (5.9)

for some given θ > 1, and maximizes the stream of discounted utility:

  J(u(·), c(·)) = E(∫_0^T e^{−γt} φ(c(t))dt + e^{−γT} h(x(T))),      (5.10)


where γ > 0 is the discount rate, φ(·) is the instantaneous utility from consumption and h(·) is the utility that comes from bequests. This is typically an optimal control problem. Obviously, in order that this optimal control problem makes sense, one needs the nonemptiness of the set of all u(·) and c(·) such that both (5.8) and (5.9) hold and the integral in (5.10) is meaningful. Such a nonemptiness condition can also be viewed as a controllability problem.

In summary, the above investment problem has the following features:

(1) The system is governed by a stochastic evolution equation;
(2) There are usually alternative decisions that can affect the dynamics of the system;
(3) The aim is to drive the solutions of the system to a given state (in some sense) by selecting a nonanticipative decision;
(4) There are some constraints that the decisions and/or the state are subject to;
(5) There exists a criterion which measures the performance of the decisions, and the goal is to optimize (maximize or minimize) this criterion by selecting a nonanticipative decision among those satisfying all the constraints.

In the next section, we shall present some control problems governed by stochastic evolution equations in infinite dimensions.
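Before moving on, the wealth dynamics (5.5) can be sketched numerically by the Euler–Maruyama scheme. In the snippet below, the constant coefficients, the constant-proportion portfolio u_j(t) = π_j x(t) and the proportional consumption c(t) = κx(t) are illustrative assumptions of ours, not part of the model above; under them (5.5) reduces to a geometric Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, paths = 1.0, 500, 2000
dt = T / N
r, kappa, x0 = 0.03, 0.02, 1.0      # interest rate, consumption fraction, initial wealth
a  = np.array([0.08, 0.06])         # appreciation rates a_j (assumed constants)
mu = np.array([0.01, 0.00])         # dividend rates mu_j
b  = np.array([0.20, 0.15])         # volatilities b_j
pi = np.array([0.40, 0.30])         # fractions of wealth held in each stock (assumption)

# with u_j = pi_j * x and c = kappa * x, (5.5) becomes dx = drift*x dt + vol*x dW
drift = r + pi @ (a + mu - r) - kappa
vol = pi @ b
x = np.full(paths, x0)
for _ in range(N):
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    x = x + drift * x * dt + vol * x * dW

# for this reduced model E x(T) = x0 * exp(drift * T); the Monte-Carlo mean matches
assert abs(x.mean() - x0 * np.exp(drift * T)) < 0.02
assert (x > 0).all()    # at this step size the discrete paths remain positive
```

Such a simulation says nothing about the controllability questions (5.6)-(5.9); it only shows how one candidate nonanticipative decision acts on the state.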

5.2 Control Systems Governed by Stochastic Partial Differential Equations

Generally speaking, any ordinary/partial differential equation can be regarded as a stochastic ordinary/partial differential equation provided that its coefficients, forcing terms, initial and boundary conditions, or at least one of them, are random. The terminology "stochastic partial differential equation" is a little abused, as it may mean different types of equations in different places. The analysis of equations with random coefficients differs dramatically from that of equations with random noises. In this book, we focus on the latter.

The study of stochastic partial differential equations was motivated by two aspects:

• One is the rapid recent development of stochastic analysis. The topic of stochastic partial differential equations is an interdisciplinary area involving both stochastic processes (random fields) and partial differential equations, and it has quickly become a discipline in itself. In the last two decades, stochastic partial differential equations have been one of the most active areas in stochastic analysis.

• The other is the requirement from some branches of science. In many phenomena in physics, biology, economics and Control Theory (including filtering theory in particular), stochastic effects play a central role. Thus, stochastic corrections to the deterministic models are indispensable.

These backgrounds have remarkably influenced the development of stochastic partial differential equations.

In this section, we shall present some physical/biological models described by stochastic partial differential equations, including in particular the derivation of a refined stochastic wave equation in one space dimension, motivated by the study of some control problems. The readers are referred to [70, 96, 145, 161, 169, 171, 257, 290, 316, 332] for typical references on systems governed by stochastic equations in finite/infinite dimensions.

Example 5.2. Stochastic parabolic equations

The following equation was introduced to describe the evolution of the density of a bacteria population (e.g., [71]):

  dy = ν∂_{xx}y dt + α√y dW(t) in (0, T) × (0, L),
  y_x = 0 on (0, T) × {0, L},      (5.11)
  y(0) = y_0 in (0, L),

where ν > 0, α > 0 and L > 0 are given constants, and y_0 ∈ L²(0, L).

The derivation of (5.11) is as follows. Suppose that such a bacteria population is distributed in the interval [0, L]. Denote by y(t, x) the density of this population at time t ∈ [0, T] and position x ∈ [0, L]. If there is no large-scale migration, the variation of the density is governed by the following equation:

  dy(t, x) = νy_{xx}(t, x)dt + dξ(t, x, y),      (5.12)

where νy_{xx}(t, x) describes the population's diffusion from places of high density to places of low density, while ξ(t, x, y) is a random perturbation caused by many small independent random disturbances. Suppose that the random perturbation ξ(t, x, y) at time t and position x can be approximated by a Gaussian stochastic process whose variance Var ξ is monotone w.r.t. y. When L is small, which is usual in the study of bacteria, we may assume that ξ is independent of x. Further, when y is small, the variance of ξ can be approximated by α²y, where α² is the derivative of Var ξ at y = 0. Under these assumptions, it follows that

  dξ(t, x, y) = α√y dW(t).      (5.13)

Combining (5.12) and (5.13), we arrive at the first equation of (5.11). If no bacteria enter or leave [0, L] through its boundary, we have y_x(t, 0) = y_x(t, L) = 0. This yields the second equation of (5.11).

To control the population's density, people can put in and/or draw out some species. Under such actions, the equation (5.11) becomes the following controlled stochastic parabolic equation:

  dy = (ν∂_{xx}y + u)dt + (α√y + v)dW(t) in (0, T) × (0, L),
  y_x = 0 on (0, T) × {0, L},      (5.14)
  y(0) = y_0 in (0, L),

where u and v denote the ways of putting in and/or drawing out species.

The following equation has been proposed in [335] to study the propagation of an electric potential in a neuron:

  dy = (ν∂_{xx}y − y)dt + g(y)dW(t) in (0, T) × (0, L),
  y_x = 0 on (0, T) × {0, L},      (5.15)
  y(0) = y_0 in (0, L),

in which g(·) is a suitable operator-valued function, and {W(t)}_{t≥0} is a Q-Brownian motion. To control the electric potential, one puts some charge into the system. In this way, the equation (5.15) becomes a controlled stochastic parabolic equation as follows:

  dy = (ν∂_{xx}y − y + u)dt + (g(y) + v)dW(t) in (0, T) × (0, L),
  y_x = 0 on (0, T) × {0, L},      (5.16)
  y(0) = y_0 in (0, L),

where u and v stand for the ways of putting charges. Similarly to Section 5.1, one may pose controllability and optimal control problems for (5.14) and/or (5.16), but we omit the details here. Some other models governed by stochastic parabolic equations can be found in [68, 356], for example.

Example 5.3. A refined stochastic wave equation

To study the vibration of a thin string/membrane perturbed by a random force, people introduced the following stochastic wave equation (e.g., [115]):

  dy_t = ∆y dt + α(t)y dW(t) in (0, T) × G,
  y = 0 on (0, T) × ∂G,      (5.17)
  y(0) = y_0, y_t(0) = y_1 in G.

Here G ⊂ ℝ^m (m ∈ ℕ) is a bounded domain, α(·) is a suitable function and (y_0, y_1) ∈ H_0^1(G) × L²(G).

Let us recall below a derivation of (5.17) for m = 1 by studying the motion of a DNA molecule in a fluid (e.g., [115, 300]). Compared with its length, the diameter of such a DNA molecule is very small, and therefore it can be viewed as a thin and long elastic string. One can describe its position by an ℝ³-valued function y = (y_1, y_2, y_3) defined on [0, L] × [0, +∞) for some L > 0. Usually, a DNA molecule floats in a fluid. Thus, it is constantly struck by fluid molecules, just as a particle of pollen floating in a fluid.


Let the mass of the string per unit length equal 1. The acceleration at position x ∈ [0, L] along the string at time t ∈ [0, +∞) is y_{tt}(t, x). There are mainly four kinds of forces acting on the string: the elastic force F_1(t, x), the friction F_2(t, x) due to the viscosity of the fluid, the impact force F_3(t, x) from the flow of the fluid, and the random impulse F_4(t, x) from the impacts of the fluid molecules. By Newton's second law, it follows that

  y_{tt}(t, x) = F_1(t, x) + F_2(t, x) + F_3(t, x) + F_4(t, x).      (5.18)

Similar to the derivation of deterministic wave equations, the elastic force is F_1(t, x) = y_{xx}(t, x). The friction depends on the property of the fluid and on the velocity and shape of the string. When y, y_x and y_t are small, F_2(t, x) depends on them approximately linearly, that is, F_2 = a_1 y_x + a_2 y_t + a_3 y for some suitable functions a_1, a_2 and a_3. From the classical theory of Statistical Mechanics (e.g., [306, Chapter VI]), the random impulse F_4(t, x) at time t and position x can be approximated by a Gaussian stochastic process with a covariance k(·, y), which also depends on the property of the fluid. More precisely,

  F_4(t, x) = ∫_0^t k(x, y(s, x))dW(s).

Thus, the equation (5.18) can be rewritten as

  dy_t(t, x) = y_{xx}(t, x)dt + F_2(t, x)dt + F_3(t, x)dt + k(x, y(t, x))dW(t).      (5.19)

When y is small, we may assume that k(·, y) is linear w.r.t. y. In this case, k(x, y(t, x)) = k_1(x)y(t, x) for a suitable k_1(·). More generally, one may assume that k_1 depends on t (and even on the sample point ω ∈ Ω). Thus, (5.19) reduces to the following:

  dy_t(t, x) = y_{xx}(t, x)dt + F_2(t, x)dt + F_3(t, x)dt + k_1(t, x)y(t, x)dW(t).      (5.20)

Many biological behaviors are related to the motions of DNA molecules. Hence, there is a strong motivation for controlling these motions. In (5.20), F_3(·, ·) in the drift term and a part of the diffusion term can be designed as controls on the fluid. Furthermore, one can apply forces at the endpoints of the molecule. In this way, we arrive at the following controlled stochastic wave equation:

  dy_t = (y_{xx} + a_1 y_x + a_2 y_t + a_3 y + f)dt + (a_4 y + g)dW(t) in (0, T) × (0, L),
  y = h_1 on (0, T) × {0},
  y = h_2 on (0, T) × {L},      (5.21)
  y(0) = y_0, y_t(0) = y_1 in (0, L).

In (5.21), (f, g, h_1, h_2) are controls belonging to some suitable spaces, a_4 is a suitable function, and (y_0, y_1) is an initial datum.
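Sample paths of the uncontrolled model (5.17) with m = 1 can be produced with a minimal finite-difference scheme. In the sketch below, the grid, the symplectic-Euler time stepping and all numerical values are our illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
L_len, T, J, N, alpha = 1.0, 0.5, 50, 20000, 0.5
dx, dt = L_len / J, T / N                      # dt well below dx, so the scheme is stable
x = np.linspace(0.0, L_len, J + 1)

y = np.sin(np.pi * x)                          # initial displacement y_0
v = np.zeros_like(y)                           # initial velocity y_1 = 0

def laplacian(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return out

for _ in range(N):
    dW = np.sqrt(dt) * rng.standard_normal()   # one scalar Brownian increment, as in (5.17)
    v = v + laplacian(y) * dt + alpha * y * dW # velocity update: dy_t = y_xx dt + alpha y dW
    y = y + v * dt                             # displacement update
    y[0] = y[-1] = 0.0                         # Dirichlet boundary condition y = 0

assert np.isfinite(y).all() and y[0] == 0.0 and y[-1] == 0.0
```

The same scalar Brownian increment multiplies the state at every spatial point, matching the single one-dimensional W(·) in (5.17).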


A natural question is as follows: can we drive the DNA molecule (governed by (5.21)) from any (y_0, y_1) to a given configuration by choosing suitable controls (f, g, h_1, h_2)? Clearly, this is a controllability problem for the stochastic wave equation (5.21). Although there are four controls in the system (5.21), the answer to this question is negative (see Theorem 10.2 in Chapter 10). This differs completely from the well-known controllability property of the deterministic version of (5.21). Indeed, for the latter, under some mild assumptions, one can prove the corresponding exact controllability (e.g., [110, 207, 388, 396]).

Although (5.20) is a generalization of the classical wave equation to the stochastic setting, as remarked in our paper [247], from the viewpoint of Control Theory some key feature has been ignored in the derivation of this equation. Motivated by the above-mentioned negative controllability result, in what follows we shall propose a refined model to describe the DNA molecule. For this purpose, we partially employ a dynamical theory of Brownian motions, developed in [265], to describe the motion of a particle perturbed by random forces. In our opinion, the essence of the theory in [265] is a stochastic Newton's law, at least in some sense. According to [265, Chapter 11], we may suppose that

  y(t, x) = ∫_0^t ỹ(s, x)ds + ∫_0^t F(s, x, y(s))dW(s).      (5.22)

Here ỹ(·, ·) is the expected velocity and F(·, ·, ·) is the random perturbation from the fluid molecules. When y is small, one can assume that F(·, ·, ·) is linear in the third argument, i.e.,

  F(t, x, y(t, x)) = b_1(t, x)y(t, x)      (5.23)

for a suitable b_1(·, ·). Formally, the acceleration at position x along the string at time t is ỹ_t(t, x). By Newton's second law, it follows that

  ỹ_t(t, x) = F_1(t, x) + F_2(t, x) + F_3(t, x) + F_4(t, x).

Similar to (5.20), we obtain that

  dỹ(t, x) = y_{xx}(t, x)dt + F_2(t, x)dt + F_3(t, x)dt + k_1(t, x)y(t, x)dW(t).      (5.24)

Combining (5.22), (5.23) and (5.24), we arrive at the following modified version of (5.20):

  dy = ỹdt + b_1 ydW(t) in (0, T) × (0, L),
  dỹ = (y_{xx} + a_1 y_x + a_2 y_t + a_3 y)dt + a_4 ydW(t) in (0, T) × (0, L).      (5.25)

Then, similar to (5.21), we obtain the following control system:

  dy = ỹdt + (b_1 y + f)dW(t) in (0, T) × (0, L),
  dỹ = (y_{xx} + a_1 y_x + a_2 y_t + a_3 y)dt + (a_4 y + g)dW(t) in (0, T) × (0, L),
  y = h_1 on (0, T) × {0},
  y = h_2 on (0, T) × {L},      (5.26)
  y(0) = y_0, ỹ(0) = y_1 in (0, L),

where (f, g, h_1, h_2) are controls belonging to some suitable spaces. Under some assumptions, we shall show the exact controllability of the above system (see Theorem 10.12 in Chapter 10). This, in turn, justifies our modification (5.25).

Example 5.4. Stochastic Schrödinger equation

To investigate open quantum systems, people introduced the following stochastic Schrödinger equation (e.g., [169]):

  dy(t, x) = ((i/2)∆y(t, x) − iV(x)y(t, x) − (α/2)|x|²y(t, x))dt + √α y(t, x) x·dW(t) in (0, T) × G,
  y = 0 on (0, T) × ∂G,      (5.27)
  y(0) = y_0 in G.

Here G is a domain in ℝ^m (m ∈ ℕ), y(t, x) stands for the wave function at time t ∈ [0, T] and x ∈ G, W(·) is an m-dimensional Brownian motion, α is a positive constant, V is a suitable real function and y_0 ∈ L²(G; ℂ) is the initial state.

A derivation of (5.27) is given below. For simplicity, we consider the continuous measurement of the position of a one-dimensional quantum particle (m = 1) governed by the standard Hamiltonian

  H = iV(x) − (i/2)∂_{xx}.

Let us first recall the following postulates in Quantum Mechanics (e.g., [184]).

1. Non-ideal measurement principle. After a measurement, the wave function y(t, x) is transformed into a new one which, up to a normalization, reads

  y(t, x)e^{−α(x−q(t))²},

where q(t) ∈ ℝ is the result of the measurement at time t and α characterizes the accuracy of the measurement.

2. Continuous limit of discrete observations. Fix a time t and denote by n the number of measurements. Take the above non-ideal measurements of the position of the quantum particle at


moments t_k = kδ, where δ = t/n, n ∈ ℕ. As n → +∞, the accuracy of each measurement will be proportional to the time duration δ between successive measurements, i.e., α = λδ, where λ reflects the properties of the measuring apparatus.

The above two postulates are standard in Quantum Mechanics. Next, we introduce a postulate about the errors of the measurement.

3. Gaussian model of errors. Suppose that q(t) obeys the following relation:

  ∫_0^t q(s)ds = (1/(2√λ)) W(t) for a constant λ > 0.      (5.28)

Note that the evolution of the quantum system between measurements is governed by the law of the free Hamiltonian. Hence, after n measurements, the resulting wave function reads

  y(nδ) = ∏_{k=1}^n exp{−δλ(x − q(kδ))²} exp(−δH) y_0.      (5.29)

Letting n → +∞ in (5.29), we obtain that

  y = e^{−(tH + ∫_0^t λ(x−q(s))² ds)} y_0.

Hence,

  y_t = −(H + λ(x − q(t))²)y.      (5.30)

The term λq(t)²y can be dropped in (5.30) because solutions to the equation with and without this term are proportional, and the difference is irrelevant in Quantum Mechanics (in which people are only interested in normalized states). In this way, we obtain the following equation:

  y_t + (H + λx²)y = 2λxqy.      (5.31)

Noting (5.28), we then replace the equation (5.31) by the following one:

  dy + (H + λx²)ydt = 2√λ xydW(t).      (5.32)

This, after some obvious scaling, gives the first equation of (5.27) in one space dimension.

There are two possible ways to put controls on (5.32). One is absorbing/delivering the wave on the boundary, which leads to a boundary control in the drift term. The other is to apply some external force when performing the measurement, which leads to an internal control in the diffusion term. With such controls, the equation (5.27) becomes the following control system:

  dy = ((i/2)∆y − iV(x)y − (α/2)|x|²y)dt + (√α y + v)x dW(t) in (0, T) × G,
  y = f on (0, T) × ∂G,      (5.33)
  y(0) = y_0 in G.
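The role of the −(α/2)|x|² term in (5.27) can be checked numerically: by Itô's formula it exactly compensates the noise, so the squared L²-norm of y is a martingale and its expectation is conserved. The sketch below (one space dimension, V ≡ 0, a semi-implicit Euler scheme; the grid and all parameter values are our illustrative choices, not from the text) verifies this:

```python
import numpy as np

J, alpha = 80, 1.0
x = np.linspace(-4.0, 4.0, J + 2)[1:-1]        # interior grid; y = 0 at the endpoints
dx = x[1] - x[0]
dt, N, paths = 1e-4, 1000, 100

# drift operator A = (i/2) D2 - (alpha/2) x^2 with the Dirichlet Laplacian D2
D2 = (np.diag(-2.0 * np.ones(J)) + np.diag(np.ones(J - 1), 1)
      + np.diag(np.ones(J - 1), -1)) / dx**2
A = 0.5j * D2 - 0.5 * alpha * np.diag(x**2)
M = np.linalg.inv(np.eye(J) - dt * A)          # backward-Euler drift step

rng = np.random.default_rng(3)
y0 = np.exp(-x**2).astype(complex)
y0 /= np.linalg.norm(y0) * np.sqrt(dx)         # normalize: sum |y0|^2 dx = 1

norms = np.empty(paths)
for p in range(paths):
    y = y0.copy()
    for _ in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()
        y = M @ (y + np.sqrt(alpha) * x * y * dW)   # noise term sqrt(alpha) x y dW
    norms[p] = np.sum(np.abs(y)**2) * dx

# -(alpha/2)|x|^2 y is exactly the Ito compensation of the noise, so
# E |y(t)|_{L^2}^2 stays equal to |y_0|_{L^2}^2 = 1
assert abs(norms.mean() - 1.0) < 0.1
```

Individual paths are not normalized (their norms fluctuate), which reflects that (5.27) describes an unnormalized conditional state; only the expectation of the squared norm is preserved.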


Example 5.5. Stochastic Navier-Stokes equation

To study turbulence, people introduced the statistical approach (e.g., [257, 262, 296, 331]). In this framework, the velocity field of an incompressible fluid is described by a space-time vector-valued random field, which is governed by the following Navier-Stokes equation with a random force:

  u_t = ν∆u − u·∇u − ∇p + f in (0, T) × G,
  div u = 0 in (0, T) × G,      (5.34)
  u(0) = u_0 in G.

Here G ⊂ ℝ^m (m = 2 or 3) is a domain, ν is a positive real number, u(t, x) = (u_1(t, x), ···, u_m(t, x)) is the velocity of the fluid at time t and point x, u_0 is the initial datum, p(t, x) is the pressure of the fluid, and f = f(t, x, u) is the external force acting on the fluid, which is caused by many small independent random disturbances. In many situations, it is reasonable to assume that the random force f(t, x, u) at time t and position x can be approximated by a Gaussian stochastic process as follows:

  df(t, x, u) = g(u)dW(t),

where g is a suitable process depending on u, and W(·) is a cylindrical Brownian motion. Consequently, instead of (5.34), we obtain the following stochastic Navier-Stokes equation:

  du = (ν∆u − u·∇u − ∇p)dt + g(u)dW(t) in (0, T) × G,
  div u = 0 in (0, T) × G,      (5.35)
  u(0) = u_0 in G.

To control the velocity of the fluid, one can put a force h_1 in the drift term and a force h_2 in the diffusion term. Then the equation (5.35) becomes

  du = (ν∆u − u·∇u − ∇p + h_1)dt + (g(u) + h_2)dW(t) in (0, T) × G,
  div u = 0 in (0, T) × G,
  u(0) = u_0 in G.

Although the control problems for stochastic Navier-Stokes equations are very important both theoretically and in applications, due to space limitations we will not consider them in this book (see [128, 132, 301, 304] for some interesting works on this topic).

5.3 Some Control Problems for Stochastic Distributed Parameter Systems

Let H and V be two separable Hilbert spaces, and write L_2^0 ≜ L_2(V; H). Let {W(t)}_{t∈[0,T]} be a V-valued, F-adapted, standard Q-Brownian motion (with Q being a given positive definite, trace-class operator on V) or a cylindrical Brownian motion on (Ω, F, F, P). Denote by F the progressive σ-field (in [0, T] × Ω) w.r.t. F. In the sequel, to simplify the presentation, we only consider the case of a cylindrical Brownian motion.

Suppose that A generates a C_0-semigroup {S(t)}_{t≥0} on H, U is a given separable metric space, and a(·): [0, T] × Ω × H × U → H and b(·): [0, T] × Ω × H × U → L_2^0 are two suitably given functions. Let us consider the following controlled stochastic evolution equation:

  dx(t) = (Ax(t) + a(t, x(t), u(t)))dt + b(t, x(t), u(t))dW(t) in (0, T],
  x(0) = x_0.      (5.36)

Here x_0 is the initial state (in a suitable set H_0 consisting of some H-valued, F_0-measurable random variables), x(·) is the state variable (valued in H), and u(·) is the control variable (valued in U); hence H and U serve as the state and control spaces, respectively. Write

  U[0, T] ≜ {u: [0, T] × Ω → U | u(·) is F-adapted}.

This is the (biggest) control class that we shall use. Such a choice of controls means that one can only use the information (as specified by the filtration F) of what has happened up to the present moment, rather than what is going to happen afterwards, due to the uncertainty of the system under consideration (as a consequence, for any t ∈ [0, T] one cannot perform his/her action u(t) before the time t really comes). This nonanticipative restriction is the reason for choosing u(·) to be F-adapted.

Let U^0[0, T] be a suitable subset of U[0, T], and H_T a suitable set consisting of some H-valued, F_T-measurable random variables.
We suppose that the choices of the functions a(·) and b(·), the set H_0 and the control class U^0[0, T] are such that, for any x_0 ∈ H_0 and u(·) ∈ U^0[0, T], the system (5.36) admits a mild solution¹ x(·) in some solution space S[0, T] (in which solutions are understood in the sense of Definition 3.9), say C_F([0, T]; L^p(Ω; H)) (for some p ≥ 2) as that in Theorem 3.14.

¹ Here we do not need to assume that the solution is unique. Solutions to (5.36) can also be understood in other senses, say transposition solutions (for the case of unbounded control operators), which will be introduced in Definition 7.11 of Chapter 7.

Let Z be a nonempty set, and Γ a nonempty subset of Z. Suppose F: S[0, T] × U^0[0, T] → Z is a given map. Motivated by [370, p. 14] and [50], we introduce the following notions of controllability:

Definition 5.6. 1) The system (5.36) is said to be (F, Γ)-exactly controllable (or simply exactly controllable in the case that (F, Γ) is clear from the context)


if for any x_0 ∈ H_0 and x_1 ∈ Γ, there is a control u(·) ∈ U^0[0, T] such that (5.36) admits a mild solution x(·) satisfying

  F(x(·), u(·)) = x_1;      (5.37)

2) The system (5.36) is said to be (F, Γ)-approximately controllable (or simply approximately controllable in the case that (F, Γ) is clear from the context) if Z is a metric space with a metric d_Z(·, ·), and for any x_0 ∈ H_0, x_1 ∈ Γ and ε > 0, there is a control u(·) ∈ U^0[0, T] such that (5.36) admits a mild solution x(·) satisfying

  d_Z(F(x(·), u(·)), x_1) < ε;

3) The system (5.36) is said to be (F, Γ)-controllable (or simply controllable in the case that (F, Γ) is clear from the context) if for any x_0 ∈ H_0, there is a control u(·) ∈ U^0[0, T] such that (5.36) admits a mild solution x(·) satisfying

  F(x(·), u(·)) ∈ Γ.      (5.38)

Clearly, the above controllability notions are quite general, and they include the usual exact/null/approximate controllability studied in the literature. For example, let us choose H_1 to be a metric space consisting of suitable H-valued, F_T-measurable random variables so that 0 ∈ H_1. For Z = H_1, Γ = Z and F(x(·), u(·)) = x(T), we get the exact/approximate controllability; for Z = H_1, Γ = {0} and F(x(·), u(·)) = x(T), we get the null controllability. Further, for Z = H_1 × H_1, Γ = {0} × {0} and

  F(x(·), u(·)) = (x(T), ∫_0^T M(T − s)x(s)ds),
in which M(·): [0, T] → ℝ is a given function, we get the memory-type null controllability of (5.36) (see [50] for examples in the deterministic setting).

Clearly, the (F, Γ)-controllability concept is too general to obtain meaningful results, and therefore in the rest of this book we will focus mainly on exact/null controllability. Nevertheless, as we shall see later, (F, Γ)-controllability is a basis for the study of optimal control problems for (5.36). This is quite natural because, as we mentioned in the first chapter, controllability actually means "feasibility" for many activities (at least in some broad sense).

Clearly, the map F: S[0, T] × U^0[0, T] → Z can also be viewed as an input-output operator, while Γ can be regarded as a ("measurable") observation. Similarly to Definition 5.6, we introduce the following notions of observability:

Definition 5.7. 1) The system (5.36) is said to be (F, Γ)-exactly observable (or simply exactly observable in the case that (F, Γ) is clear from the context)


if for any control u(·) ∈ U^0[0, T] and observation γ ∈ Γ, there is a unique initial datum x_0 ∈ H_0 such that the corresponding mild solution x(·) to (5.36) is unique and satisfies

  F(x(·), u(·)) = γ;      (5.39)

2) The system (5.36) is said to be (F, Γ)-continuously observable (or simply continuously observable in the case that (F, Γ) is clear from the context) if S[0, T], U^0[0, T] and Γ are metric spaces, and for any control û(·) ∈ U^0[0, T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique initial datum x̂_0 ∈ H_0 such that the corresponding mild solution x̂(·) to (5.36) (with (x_0, u) replaced by (x̂_0, û)) is unique and satisfies

  F(x̂(·), û(·)) = γ̂,      (5.40)

and for any control u(·) ∈ U^0[0, T] and observation γ ∈ Γ satisfying

  d_{U^0[0,T]}(u(·), û(·)) + d_Γ(γ, γ̂) < ε,      (5.41)

there is a unique initial datum x_0 ∈ H_0 such that the corresponding mild solution x(·) to (5.36) is unique, satisfies (5.39) and

  d_{S[0,T]}(x(·), x̂(·)) ≤ δ;

3) The system (5.36) is said to be (F, Γ)-continuously initially observable (or simply continuously initially observable in the case that (F, Γ) is clear from the context) if H_0, U^0[0, T] and Γ are metric spaces, and for any control û(·) ∈ U^0[0, T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique initial datum x̂_0 ∈ H_0 such that the corresponding mild solution x̂(·) to (5.36) (with (x_0, u) replaced by (x̂_0, û)) is unique and satisfies (5.40), and for any control u(·) ∈ U^0[0, T] and observation γ ∈ Γ satisfying (5.41), there is a unique initial datum x_0 ∈ H_0 such that the corresponding mild solution x(·) to (5.36) is unique, satisfies (5.39) and

  d_{H_0}(x_0, x̂_0) ≤ δ;

4) The system (5.36) is said to be (F, Γ)-continuously finally observable (or simply continuously finally observable in the case that (F, Γ) is clear from the context) if H_T, U^0[0, T] and Γ are metric spaces, and for any control û(·) ∈ U^0[0, T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique initial datum x̂_0 ∈ H_0 such that the corresponding mild solution x̂(·) to (5.36) (with (x_0, u) replaced by (x̂_0, û)) is unique and satisfies (5.40), and for any control u(·) ∈ U^0[0, T] and observation γ ∈ Γ satisfying (5.41), there is a unique initial datum x_0 ∈ H_0 such that the corresponding mild solution x(·) to (5.36) is unique, satisfies (5.39) and

  d_{H_T}(x(T), x̂(T)) ≤ δ.
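In finite dimensions, exact observability in the spirit of Definition 5.7 can be made concrete through the observability Gramian: for x' = Ax with observation γ(t) = Cx(t), the initial datum is x_0 = G^{-1} ∫_0^T e^{A^⊤t} C^⊤ γ(t)dt, where G = ∫_0^T e^{A^⊤t} C^⊤C e^{At}dt is invertible. The sketch below (a double integrator with position observation; the example and all numbers are our illustrative choices, not from the text) recovers x_0 this way:

```python
import numpy as np

T, N = 1.0, 2000
t = np.linspace(0.0, T, N + 1)

# double integrator: A = [[0,1],[0,0]], so e^{As} = [[1,s],[0,1]]; observe the
# first (position) component only: C = [1, 0]
def Phi(s):                      # e^{As}
    return np.array([[1.0, s], [0.0, 1.0]])

x0_true = np.array([0.3, -1.2])
gamma = np.array([(Phi(s) @ x0_true)[0] for s in t])   # gamma(s) = C e^{As} x0

# observability Gramian and the matching right-hand side, by trapezoidal rule
G = np.zeros((2, 2))
rhs = np.zeros(2)
w = np.full(N + 1, 1.0); w[0] = w[-1] = 0.5; w *= T / N   # trapezoid weights
for s, g, wi in zip(t, gamma, w):
    CPhi = Phi(s)[0]                                      # row vector C e^{As}
    G += wi * np.outer(CPhi, CPhi)
    rhs += wi * CPhi * g

x0_rec = np.linalg.solve(G, rhs)                          # recovered initial datum
assert np.allclose(x0_rec, x0_true, atol=1e-6)
```

Here invertibility of G plays the role of the uniqueness requirement in Definition 5.7, and the bound ‖G^{-1}‖ quantifies the continuous dependence of x_0 on the observation.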


The above observability notions are also quite general, and they include the usual exact/internal/boundary observability studied in the literature. In particular, the above continuous observability and continuous initial observability include various observability estimates which have been extensively studied for many partial differential equations.

Roughly speaking, observability means that one may recognize the whole state under consideration through the observed part. In the deterministic setting, the observation problems are strongly related to the controllability problems, and the two are actually dual to each other, at least for linear systems. However, in the stochastic situation, as we shall see in the next two chapters, the relationship between the controllability problems and the observation problems is much more complicated. Note also that the observation problems make sense even if there is no control variable in the system (5.36) (the corresponding problems can easily be formulated as special cases of those in Definition 5.7). In this way, the observation problems may also contain various inverse problems, which are likewise extensively studied in the literature for deterministic differential equations. Nevertheless, in this book we shall focus more on control problems, and therefore the observation problems are considered only as auxiliary tools for studying the controllability problems.

More importantly, in order to solve the controllability problem for the forward system (5.36), as we shall see in Section 7.3 of Chapter 7, stimulated also by [275], one needs to analyze suitable controllability and/or observability problems for some backward stochastic evolution equations. Because of this, we shall give below a general formulation of backward controllability/observability problems in the stochastic setting.

Suppose that a(·): [0, T] × Ω × H × L_2^0 × U → H and b(·): [0, T] × Ω × H × U → L_2^0 are two suitably given functions.
Let us consider the following backward stochastic equation evolved in H:

dy(t) = −(A^∗ y(t) + a(t, y(t), u(t), Y(t)))dt + (b(t, u(t), y(t)) + Y(t))dW(t) in [0, T),
y(T) = y_T.   (5.42)

Here u(·) ∈ U^0[0,T], and y_T ∈ H_T is the final state. We suppose that the choices of the functions a(·) and b(·) and the sets U^0[0,T] and H_T are such that, for any y_T ∈ H_T and u(·) ∈ U^0[0,T], the system (5.42) admits a (mild/transposition) solution² (y(·), Y(·)) in some solution space S[0,T] × T[0,T] (in which solutions are understood in the sense of Definition 4.6 or Definition 4.13), say (y(·), Y(·)) ∈ L^p_F(Ω;C([0,T];H)) × L^2_F(0,T;L^p(Ω;L02)) (for some p ∈ (1,2]) as in Theorem 4.24.

² Here, similarly to the system (5.36), we do not need to assume that the solution to (5.42) is unique. Also, solutions to (5.42) can be understood in other senses, say the transposition sense (for the case of unbounded control operators, even if the filtration F is natural), which will be introduced in Definition 7.13 of Chapter 7.

204

5 Control Problems for Stochastic Distributed Parameter Systems

Suppose G : S[0,T] × T[0,T] × U^0[0,T] → Z is a given map. Similarly to Definition 5.6, we introduce the following notions of controllability:

Definition 5.8. 1) The system (5.42) is said to be (G,Γ)-exactly controllable (or simply exactly controllable when (G,Γ) is clear from the context) if for any y_T ∈ H_T and y_0 ∈ Γ, there is a control u(·) ∈ U^0[0,T] such that (5.42) admits a mild/transposition solution (y(·), Y(·)) satisfying G(y(·), Y(·), u(·)) = y_0;

2) The system (5.42) is said to be (G,Γ)-approximately controllable (or simply approximately controllable when (G,Γ) is clear from the context) if Z is a metric space and, for any y_T ∈ H_T, y_0 ∈ Γ and ε > 0, there is a control u(·) ∈ U^0[0,T] such that (5.42) admits a mild/transposition solution (y(·), Y(·)) satisfying d_Z(G(y(·), Y(·), u(·)), y_0) < ε;

3) The system (5.42) is said to be (G,Γ)-controllable (or simply controllable when (G,Γ) is clear from the context) if for any y_T ∈ H_T, there is a control u(·) ∈ U^0[0,T] such that (5.42) admits a mild/transposition solution (y(·), Y(·)) satisfying

G(y(·), Y(·), u(·)) ∈ Γ.   (5.43)

Also, similarly to Definition 5.7, we introduce the following notions of observability:

Definition 5.9. 1) The system (5.42) is said to be (G,Γ)-exactly observable (or simply exactly observable when (G,Γ) is clear from the context) if for any control u(·) ∈ U^0[0,T] and observation γ ∈ Γ, there is a unique final datum y_T ∈ H_T such that the corresponding solution (y(·), Y(·)) to (5.42) is unique and satisfies

G(y(·), Y(·), u(·)) = γ;   (5.44)

2) The system (5.42) is said to be (G,Γ)-continuously observable (or simply continuously observable when (G,Γ) is clear from the context) if S[0,T], T[0,T], U^0[0,T] and Γ are metric spaces and, for any control û(·) ∈ U^0[0,T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique final datum ŷ_T ∈ H_T such that the corresponding solution (ŷ(·), Ŷ(·)) to (5.42) (with (y_T, u) replaced by (ŷ_T, û)) is unique and satisfies

G(ŷ(·), Ŷ(·), û(·)) = γ̂,   (5.45)

and, for any control u(·) ∈ U^0[0,T] and observation γ ∈ Γ satisfying

d_{U^0[0,T]}(u(·), û(·)) + d_Γ(γ, γ̂) < ε,   (5.46)


there is a unique final datum y_T ∈ H_T such that the corresponding solution (y(·), Y(·)) to (5.42) is unique and satisfies (5.44) and

d_{S[0,T]}(y(·), ŷ(·)) + d_{T[0,T]}(Y(·), Ŷ(·)) ≤ δ;

3) The system (5.42) is said to be (G,Γ)-continuously finally observable (or simply continuously finally observable when (G,Γ) is clear from the context) if H_T, U^0[0,T] and Γ are metric spaces and, for any control û(·) ∈ U^0[0,T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique final datum ŷ_T ∈ H_T such that the corresponding solution (ŷ(·), Ŷ(·)) to (5.42) (with (y_T, u) replaced by (ŷ_T, û)) is unique and satisfies (5.45), and, for any control u(·) ∈ U^0[0,T] and observation γ ∈ Γ satisfying (5.46), there is a unique final datum y_T ∈ H_T such that the corresponding solution (y(·), Y(·)) to (5.42) is unique and satisfies (5.44) and

d_{H_T}(y_T, ŷ_T) ≤ δ;

4) The system (5.42) is said to be (G,Γ)-continuously initially observable (or simply continuously initially observable when (G,Γ) is clear from the context) if H_0, U^0[0,T] and Γ are metric spaces and, for any control û(·) ∈ U^0[0,T], observation γ̂ ∈ Γ and ε > 0, there exist δ = δ(ε) > 0 and a unique final datum ŷ_T ∈ H_T such that the corresponding solution (ŷ(·), Ŷ(·)) to (5.42) (with (y_T, u) replaced by (ŷ_T, û)) is unique and satisfies (5.45), and, for any control u(·) ∈ U^0[0,T] and observation γ ∈ Γ satisfying (5.46), there is a unique final datum y_T ∈ H_T such that the corresponding solution (y(·), Y(·)) to (5.42) is unique and satisfies (5.44) and

d_{H_0}(y(0), ŷ(0)) ≤ δ.

In this book, as a key step in solving stochastic controllability problems, we shall mainly employ global Carleman estimates to derive observability estimates for some backward/forward stochastic evolution equations in infinite dimensions.
In the rest of this section, we fix a nonempty set Z, a nonempty subset Γ of Z and a map F : S[0,T] × U^0[0,T] → Z, and we suppose that the system (5.36) is (F,Γ)-controllable. For any fixed functions f(·,·,·) : [0,T] × S[0,T] × U^0[0,T] → R and h(·) : H → R, we introduce a cost functional J(·,·) : S[0,T] × U^0[0,T] → R for the control system (5.36) as follows:

J(x(·), u(·)) = E(∫_0^T f(t, x(·), u(·))dt + h(x(T))),   ∀ (x(·), u(·)) ∈ S[0,T] × U^0[0,T].   (5.47)

Denote by U_ad[0,T] the set of all controls u(·) ∈ U^0[0,T] such that for some x_0 ∈ H_0, (5.36) admits a mild solution x(·) satisfying (5.38) and


f(·, x(·), u(·)) ∈ L^1_F(0,T) and h(x(T)) ∈ L^1_{F_T}(Ω).   (5.48)

Also, denote by P_ad[0,T] the set of all pairs (x(·), u(·)) ∈ S[0,T] × U^0[0,T] such that for some x_0 ∈ H_0, x(·) is a mild solution to (5.36) satisfying (5.38) and (5.48). Clearly, under our (F,Γ)-controllability assumption on the system (5.36), both U_ad[0,T] and P_ad[0,T] are nonempty. Any control u(·) ∈ U_ad[0,T] (resp. pair (x(·), u(·)) ∈ P_ad[0,T]) is called an admissible control (resp. admissible pair). The optimal control problem for the system (5.36) with the cost functional (5.47) is formulated as follows:

Problem (SOP). Find a pair (x̄(·), ū(·)) ∈ P_ad[0,T] such that

J(x̄(·), ū(·)) = inf_{(x(·),u(·)) ∈ P_ad[0,T]} J(x(·), u(·)).   (5.49)
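For intuition, the value of a cost functional of the form (5.47) can be estimated by discretizing the state equation and averaging over sample paths. The following is a minimal sketch for an illustrative scalar SDE with a linear feedback control; all coefficients, the feedback gain, and the running/terminal costs are assumptions made for this example, not data from the text.

```python
import numpy as np

# Monte Carlo estimate of J(x,u) = E[∫_0^T f(t,x,u)dt + h(x(T))] for the
# hypothetical dynamics dx = (a·x + b·u)dt + sigma·x dW with feedback
# u = -k·x, running cost f = x² + u², and terminal cost h(x) = x².
rng = np.random.default_rng(1)
T, N, paths = 1.0, 200, 20_000
dt = T / N
a, b, sigma, k, x0 = 0.5, 1.0, 0.2, 1.0, 1.0

x = np.full(paths, x0)
running = np.zeros(paths)
for _ in range(N):
    u = -k * x                                    # feedback control
    running += (x**2 + u**2) * dt                 # accumulate running cost f
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    x += (a * x + b * u) * dt + sigma * x * dW    # Euler-Maruyama step
J = float(np.mean(running + x**2))                # add terminal cost h(x(T))
print(f"estimated cost J ~ {J:.3f}")
```

Re-running with a different gain k gives a crude way to compare admissible controls; Problem (SOP) asks for the minimizing pair over all of P_ad[0,T].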

Any (x̄(·), ū(·)) ∈ P_ad[0,T] satisfying (5.49) is called an optimal pair (for Problem (SOP)), and x̄(·) and ū(·) are called, respectively, an optimal state and an optimal control.

Similarly, we fix a map G : S[0,T] × T[0,T] × U^0[0,T] → Z, and we suppose that the system (5.42) is (G,Γ)-controllable. For any fixed functions g(·,·,·,·) : [0,T] × S[0,T] × T[0,T] × U^0[0,T] → R and h(·) : H → R, we introduce a cost functional J(·,·,·) : S[0,T] × T[0,T] × U^0[0,T] → R for the control system (5.42) as follows:

J(y(·), Y(·), u(·)) = E∫_0^T g(t, y(·), Y(·), u(·))dt + h(y(0)),   ∀ (y(·), Y(·), u(·)) ∈ S[0,T] × T[0,T] × U^0[0,T].   (5.50)

Denote by U_ad[0,T] the set of all controls u(·) ∈ U^0[0,T] such that for some y_T ∈ H_T, (5.42) admits a solution (y(·), Y(·)) satisfying (5.43) and

g(·, y(·), Y(·), u(·)) ∈ L^1_F(0,T).   (5.51)

Also, denote by P_ad[0,T] the set of all triples (y(·), Y(·), u(·)) ∈ S[0,T] × T[0,T] × U^0[0,T] such that for some y_T ∈ H_T, (y(·), Y(·)) is a solution to (5.42) satisfying (5.43) and (5.51). Clearly, under our (G,Γ)-controllability assumption on the system (5.42), both U_ad[0,T] and P_ad[0,T] are nonempty. Any control u(·) ∈ U_ad[0,T] is called an admissible control. The optimal control problem for the system (5.42) with the cost functional (5.50) is formulated as follows:

Problem (BSOP). Find a triple (ȳ(·), Ȳ(·), ū(·)) ∈ P_ad[0,T] such that

J(ȳ(·), Ȳ(·), ū(·)) = inf_{(y(·),Y(·),u(·)) ∈ P_ad[0,T]} J(y(·), Y(·), u(·)).   (5.52)


Any (ȳ(·), Ȳ(·), ū(·)) ∈ P_ad[0,T] satisfying (5.52) is called an optimal triple (for Problem (BSOP)), and (ȳ(·), Ȳ(·)) and ū(·) are called, respectively, an optimal state and an optimal control.

In this book, we shall address only Problem (SOP) (actually, Problem (BSOP) is much easier to handle than Problem (SOP)). We shall derive the first and second order necessary conditions for optimal controls for Problem (SOP), and also characterize the optimal feedback operators for a special version of this problem. The main difficulty in doing all of this is to establish the well-posedness of the operator-valued backward stochastic evolution equations (in particular, of operator-valued backward stochastic Lyapunov/Riccati equations in infinite dimensions). To this end, we shall systematically employ our stochastic transposition method.

5.4 Notes and Comments

The stochastic control systems presented in this chapter are well known, except the system (5.26) (and its higher dimensional counterpart (10.3) in Chapter 10), which was introduced in [247]. Our formulations of the (F,Γ)- (resp. (G,Γ)-) controllability/observability problems for the system (5.36) (resp. (5.42)) are quite general, even when the system reduces to a deterministic equation in finite dimensions. The same can be said of the optimal control problems for the systems (5.36) and (5.42), i.e., Problems (SOP) and (BSOP). Of course, in this book (as in any other work on Control Theory), we have to focus on more specific controllability and optimal control problems. Also, we emphasize again that controllability is the basis for optimal control problems, though the controllability for the stochastic optimal control problems to be studied in this book is obvious. There are many other control problems for stochastic distributed parameter systems which deserve careful study (but are not addressed in this book), for example:

• In the field of control theory for stochastic finite dimensional systems, stochastic approximation, system identification and adaptive control ([52, 53, 181, 340]) are important topics. It would be quite interesting to study their infinite dimensional counterparts, but it seems that not much is known in this respect (see, however, [83] and the two books [15, 167] for some very interesting related works).

• There exist numerous works on inverse problems for deterministic equations. However, very little is known about the relatively unexplored but important subject of inverse problems for stochastic partial differential equations. By means of the tools that we developed for solving the stochastic controllability problems, in [226, 229, 243] we initiated the study of inverse problems for stochastic partial differential equations (see [198, 238, 353, 358, 366, 373, 374, 375] for further progress), in which the

point is that, unlike most of the previous works on this topic, the problem is genuinely stochastic and therefore cannot be reduced to any deterministic one. In particular, it was found in [243] that both the formulation of stochastic inverse problems and the tools to solve them differ considerably from their deterministic counterparts. Indeed, as the counterexample in [243, Remark 2.7] shows, the inverse problem considered therein makes sense only in the stochastic setting!

• There exist numerous works on stability/stabilization problems for deterministic equations. However, very little is known about the same problems for stochastic partial differential equations (we refer to [216] for some results on this topic). Further, we remark that it would be quite interesting to extend the famous backstepping method for handling the stabilization problems of some deterministic partial differential equations ([172]) to the stochastic setting, but this remains to be done.

• It is well known that Game Theory is strongly related to Control Theory. In recent years, there have been many studies on stochastic differential games (e.g., [39, 48, 99]). It would be quite interesting to develop game theory for stochastic distributed parameter systems but, as far as we know, nothing has been done in this direction.

• In recent years, there have been many works on time-inconsistent optimal control problems for stochastic differential equations (see [369] and the references cited therein). It would be quite interesting to study the same problems for stochastic evolution equations in infinite dimensions. As far as we know, [76] is the only publication on this topic.

• In recent years, there have been some works on control problems for stochastic systems with fractional Brownian motions (e.g., [82]). It would be quite interesting to study this sort of problem carefully (especially in infinite dimensions).
• It seems that numerical methods for solving control problems of stochastic distributed parameter systems are almost untouched (see [283] for the only work that we know of). Very likely, completely new algorithms need to be developed for these problems.

• The control problems presented in this chapter and mentioned above also make sense for forward-backward stochastic evolution equations, stochastic evolution equations driven by G-Brownian motions, forward-backward doubly stochastic evolution equations, stochastic Volterra integral equations in infinite dimensions, and stochastic evolution equations driven by general martingales (even with incomplete information). Also, it seems interesting to study recursive optimal control problems for stochastic evolution equations and/or the above mentioned stochastic equations. As far as we know, all of these problems remain open, and very likely new tools need to be developed to obtain interesting new results.

6 Controllability for Stochastic Differential Equations in Finite Dimensions

In this chapter, we are concerned with the controllability problem for the simplest stochastic linear evolution equations, i.e., stochastic linear differential equations (with constant coefficients) in finite dimensions. A Kalman-type rank condition characterizing exact controllability is given for the special case that the control matrix in the diffusion term is of full rank. Nevertheless, generally speaking, the controllability of stochastic linear differential equations in finite dimensions is surprisingly different from its deterministic counterpart. Indeed, it will be shown further that:

• There is a very simple stochastic control system (in one dimension) which is exactly controllable by means of L^1-in-time controls, but the same system is NOT exactly controllable anymore provided that one chooses L^p-in-time controls for any p ∈ (1, ∞];

• One can construct a class of very simple stochastic control systems (in two dimensions) for which neither null nor approximate controllability is robust w.r.t. small perturbations.

6.1 The Control Systems With Controls in Both Drift and Diffusion Terms

Let T > 0 and (Ω, F, F, P) be a complete filtered probability space on which a one dimensional Brownian motion W(·) is defined, where F = {F_t}_{t∈[0,T]} is the natural filtration generated by W(·). Denote by F the progressive σ-field w.r.t. F. Fix m, n ∈ N. Let us consider the following linear controlled stochastic differential equation in R^n:

dy = (Ay + Bu)dt + (Cy + Du)dW(t) in [0, T],
y(0) = y_0,   (6.1)

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_6


where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{n×n} and D ∈ R^{n×m}; u(·) ∈ L^2_F(0,T;R^m) is the control and y(·) ∈ L^2_F(Ω;C([0,T];R^n)) is the state. We introduce the following notion of exact controllability for (6.1).

Definition 6.1. The system (6.1) is called exactly controllable (at time T) if for any y_0 ∈ R^n and y_1 ∈ L^2_{F_T}(Ω;R^n), one can find a control u(·) ∈ L^2_F(0,T;R^m) such that the corresponding solution y to (6.1) satisfies y(T) = y_1.

Define a (deterministic) function η(·) on [0, T] by

η(t) = 1 for t ∈ [(1 − 2^{−2j})T, (1 − 2^{−2j−1})T), j = 0, 1, 2, · · ·, and η(t) = −1 otherwise.   (6.2)

One can check ([275]) that there exists a constant β > 0 such that

∫_t^T |η(s) − c|^2 ds ≥ 4β(T − t),   ∀ (c, t) ∈ R × [0, T].   (6.3)
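The function (6.2) and the lower bound (6.3) can be probed numerically. The sketch below is only a sanity check: the grid size, the truncation jmax of the intervals in (6.2), and the resulting numerical constant are all illustrative choices, not values from the text.

```python
import numpy as np

def eta(t, T=1.0, jmax=20):
    """The function (6.2): +1 on [(1 - 2^(-2j))T, (1 - 2^(-2j-1))T), else -1."""
    t = np.asarray(t, dtype=float)
    out = -np.ones_like(t)
    for j in range(jmax):
        lo, hi = (1 - 2.0**(-2 * j)) * T, (1 - 2.0**(-2 * j - 1)) * T
        out[(t >= lo) & (t < hi)] = 1.0
    return out

# For each t, the constant c minimizing the integral in (6.3) is the mean of
# eta on [t, T]; the minimum value divided by (T - t) then bounds 4*beta.
T, M = 1.0, 200_000
s = (np.arange(M) + 0.5) * (T / M)       # midpoint grid on [0, T]
e = eta(s, T)
ratios = []
for t in np.linspace(0.0, 0.99 * T, 50):
    tail = e[s >= t]
    c = tail.mean()                      # minimizer over constants c
    ratios.append(np.sum((tail - c)**2) * (T / M) / (T - t))
beta = min(ratios) / 4                   # numerical candidate for beta in (6.3)
print(f"numerical candidate: beta ~ {beta:.3f}")
```

On every dyadic tail the time fraction where η = +1 stays away from 0 and 1, which is why this ratio, and hence β, stays bounded away from zero.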

The following result is useful later.

Proposition 6.2. Let ξ = ∫_0^T η(t)dW(t), where η(·) is given by (6.2). Then it is impossible to find (ϱ_1, ϱ_2) ∈ L^2_F(0,T) × C_F([0,T];L^2(Ω)) and y_0 ∈ R such that

ξ = y_0 + ∫_0^T ϱ_1(t)dt + ∫_0^T ϱ_2(t)dW(t).   (6.4)

Proof: We use the contradiction argument. Assume that (6.4) held for some (ϱ_1, ϱ_2) ∈ L^2_F(0,T) × C_F([0,T];L^2(Ω)) and y_0 ∈ R. Then,

∫_0^T (η(t) − ϱ_2(t))dW(t) = y_0 + ∫_0^T ϱ_1(t)dt.

Taking the conditional expectation with respect to F_t, we get

∫_0^t (η(s) − ϱ_2(s))dW(s) = y_0 + ∫_0^t ϱ_1(s)ds + E(∫_t^T ϱ_1(s)ds | F_t).

This gives

∫_t^T (η(s) − ϱ_2(s))dW(s) = ∫_t^T ϱ_1(s)ds − E(∫_t^T ϱ_1(s)ds | F_t),

which, by Itô's isometry and the Cauchy-Schwarz inequality, yields

E∫_t^T |η(s) − ϱ_2(s)|^2 ds = E[∫_t^T ϱ_1(s)ds − E(∫_t^T ϱ_1(s)ds | F_t)]^2 ≤ E(∫_t^T ϱ_1(s)ds)^2 ≤ (T − t)E∫_t^T |ϱ_1(s)|^2 ds.   (6.5)


On the other hand, it follows from the inequality (6.3) that

E∫_t^T |η(s) − ϱ_2(s)|^2 ds ≥ (1/2)E∫_t^T |η(s) − ϱ_2(T)|^2 ds − E∫_t^T |ϱ_2(T) − ϱ_2(s)|^2 ds ≥ 2β(T − t) − E∫_t^T |ϱ_2(T) − ϱ_2(s)|^2 ds.   (6.6)

Since ϱ_2 ∈ C_F([0,T];L^2(Ω)), there exists t̃ ∈ [0,T) such that

E|ϱ_2(T) − ϱ_2(s)|^2 ≤ β,   ∀ s ∈ [t̃, T].

This, together with (6.6), implies that

E∫_t^T |η(s) − ϱ_2(s)|^2 ds ≥ β(T − t),   ∀ t ∈ [t̃, T].   (6.7)

It follows from (6.5) and (6.7) that

β(T − t) ≤ E∫_t^T |η(s) − ϱ_2(s)|^2 ds ≤ (T − t)E∫_t^T |ϱ_1(s)|^2 ds,   ∀ t ∈ [t̃, T],

hence β ≤ E∫_t^T |ϱ_1(s)|^2 ds for t ∈ [t̃, T); letting t → T, the right-hand side tends to 0,

a contradiction.

The following result gives some necessary conditions for the exact controllability of (6.1).

Proposition 6.3. If the system (6.1) is exactly controllable, then
1) rank D = n;
2) (A, B) fulfills the Kalman rank condition (1.3).

Proof: We use the contradiction argument to prove conclusion 1). Assume that the system (6.1) were exactly controllable for some matrix D with rank D < n. Then we could find a vector v ∈ R^n with |v|_{R^n} = 1 such that v^⊤ Dw = 0 for any w ∈ R^m. Let y_1 = ξv, where ξ is given in Proposition 6.2. By the exact controllability of (6.1), one can find a control u ∈ L^2_F(0,T;R^m) such that

y_1 = y_0 + ∫_0^T (Ay(t) + Bu(t))dt + ∫_0^T (Cy(t) + Du(t))dW(t),

which implies that

∫_0^T η(t)dW(t) = v^⊤ y_0 + ∫_0^T v^⊤(Ay(t) + Bu(t))dt + ∫_0^T v^⊤ Cy(t)dW(t).

This contradicts Proposition 6.2.


In order to prove conclusion 2), write ỹ = Ey, where y is a solution to (6.1) for some y_0 ∈ R^n and u(·) ∈ L^2_F(0,T;R^m). Then ỹ solves

dỹ/dt = Aỹ + BEu in [0, T],   ỹ(0) = y_0.   (6.8)

For any y_1 ∈ R^n, since (6.1) is exactly controllable at time T, we can find a control u ∈ L^2_F(0,T;R^m) such that the corresponding solution to (6.1) satisfies y(T) = y_1. Then ỹ(T) = E(y(T)) = Ey_1 = y_1. This implies that (6.8) is exactly controllable at time T. Thus, in view of Theorem 1.2, (A, B) fulfills the Kalman rank condition.

By Proposition 6.3, we should assume that rank D = n and that (A, B) fulfills the Kalman rank condition if we expect exact controllability of the system (6.1). In the rest of this section, we keep this assumption. Since rank D = n, we have n ≤ m. Thus, there exist two matrices K_1 ∈ R^{m×m} and K_2 ∈ R^{m×n} such that

DK_1 = (I_n, 0),   DK_2 = −C,   (6.9)

where I_n is the identity matrix in R^{n×n}. Introduce the simple linear transformation

u = K_1 (Y^⊤, v^⊤)^⊤ + K_2 y,   (6.10)

where Y ∈ L^2_F(0,T;R^n) and v ∈ L^2_F(0,T;R^{m−n}). Then the system (6.1) is reduced to

dy = (A_1 y + A_2 Y + B_1 v)dt + Y dW(t) in [0, T],   y(0) = y_0,   (6.11)

where

A_1 = A + BK_2,   A_2 ∈ R^{n×n},   B_1 ∈ R^{n×(m−n)},   A_2 Y + B_1 v = BK_1 (Y^⊤, v^⊤)^⊤.   (6.12)

Clearly, (6.11) can be viewed as a controlled system, in which (Y, v) is the control variable and y is still the state variable. Similarly to Definition 6.1, one can define the exact controllability (at time T) of (6.11). It is easy to show the following result.

Proposition 6.4. If rank D = n and (6.11) is exactly controllable, then so is the system (6.1).
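The construction in (6.9)–(6.12) can be checked numerically: when D has full row rank, a right inverse yields admissible K_1 and K_2, and the substitution (6.10) converts the drift and diffusion of (6.1) into those of (6.11). The matrices below are randomly generated (so rank D = n almost surely); this is an illustrative verification, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 2, 3
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, m))
C, D = rng.normal(size=(n, n)), rng.normal(size=(n, m))
assert np.linalg.matrix_rank(D) == n      # the standing assumption rank D = n

Dp = np.linalg.pinv(D)                    # right inverse: D @ Dp = I_n
K1 = Dp @ np.hstack([np.eye(n), np.zeros((n, m - n))])   # so D K1 = (I_n, 0)
K2 = -Dp @ C                                             # so D K2 = -C
A1 = A + B @ K2
A2, B1 = (B @ K1)[:, :n], (B @ K1)[:, n:]                # blocks from (6.12)

# Check that u = K1 (Y; v) + K2 y turns (6.1) into (6.11):
y, Y, v = rng.normal(size=n), rng.normal(size=n), rng.normal(size=m - n)
u = K1 @ np.concatenate([Y, v]) + K2 @ y
assert np.allclose(C @ y + D @ u, Y)                         # diffusion becomes Y
assert np.allclose(A @ y + B @ u, A1 @ y + A2 @ Y + B1 @ v)  # drift matches
print("transformation (6.10) verified")
```

The pseudoinverse gives one convenient choice of K_1 and K_2; any matrices satisfying (6.9) would do.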


Remark 6.5. At this moment, it seems unclear whether the converse of Proposition 6.4 is true or not. Indeed, suppose that the system (6.1) is exactly controllable. Then, by Proposition 6.3, rank D = n; moreover, for any y_0 ∈ R^n and y_1 ∈ L^2_{F_T}(Ω;R^n), there is a control u(·) ∈ L^2_F(0,T;R^m) so that the corresponding solution y to (6.1) verifies y(T) = y_1. Write

Y = Du − DK_2 y,   B_1 v = B(u − K_2 y) − A_2 Y = (B − A_2 D)(u − K_2 y).

Then, by (6.9)–(6.12), it is easy to see that y, Y and B_1 v satisfy (6.11) and y(T) = y_1. However, we do not know whether it is possible to find v from the algebraic equation B_1 v = (B − A_2 D)(u − K_2 y). If v can be found in this way, then (6.11) is exactly controllable at time T as well.

In order to deal with the exact controllability problem for (6.11), we consider the following controlled backward stochastic differential system:

dy = (A_1 y + A_2 Y + B_1 v)dt + Y dW(t) in [0, T],   y(T) = y_1,   (6.13)

where y_1 ∈ L^2_{F_T}(Ω;R^n) and v ∈ L^2_F(0,T;R^{m−n}) is the control variable.

Definition 6.6. The system (6.13) is called exactly controllable (at time 0) if for any y_1 ∈ L^2_{F_T}(Ω;R^n) and y_0 ∈ R^n, there is a control v ∈ L^2_F(0,T;R^{m−n}) such that the corresponding solution (y(·), Y(·)) ∈ L^2_F(Ω;C([0,T];R^n)) × L^2_F(0,T;R^n) to (6.13) satisfies y(0) = y_0.

It is easy to show the following result.

Proposition 6.7. The system (6.11) is exactly controllable if and only if so is the system (6.13).

The dual equation of the system (6.13) is the following (forward) stochastic ordinary differential equation:

dz = −A_1^⊤ z dt − A_2^⊤ z dW(t) in [0, T],   z(0) = z_0 ∈ R^n.   (6.14)

Similar to Theorem 1.2, one can show the following result.

Theorem 6.8. The following statements are equivalent:

1) The system (6.13) is exactly controllable;

2) Solutions to (6.14) satisfy the following observability estimate:

|z_0|^2 ≤ C E∫_0^T |B_1^⊤ z(t)|^2 dt,   ∀ z_0 ∈ R^n;   (6.15)


3) Solutions to (6.14) enjoy the following observability property:

B_1^⊤ z(·) ≡ 0 in (0, T), a.s. ⇒ z_0 = 0;   (6.16)

4) The following rank condition holds:

rank [B_1, A_1B_1, A_2B_1, A_1^2B_1, A_1A_2B_1, A_2^2B_1, A_2A_1B_1, · · ·] = n.   (6.17)

Proof: "1)⇐⇒2)". Similarly to the deterministic setting, it suffices to consider the special case that the final datum y_1 in the system (6.13) is equal to 0. We define an operator G : L^2_F(0,T;R^{m−n}) → R^n by

G(v(·)) = y(0),   ∀ v(·) ∈ L^2_F(0,T;R^{m−n}),   (6.18)

where (y(·), Y(·)) is the corresponding solution to (6.13) with y_1 = 0. Then, applying Itô's formula to (6.13) (with y_1 = 0) and (6.14), we obtain

⟨y(0), z_0⟩_{R^n} = −E∫_0^T ⟨B_1 v(t), z(t)⟩_{R^n} dt,   ∀ z_0 ∈ R^n,   (6.19)

where z(·) solves (6.14). By (6.18)–(6.19), it is clear that

(G^∗ z_0)(t) = −B_1^⊤ z(t),   a.e. t ∈ [0, T].   (6.20)

Now, the exact controllability of (6.13) is equivalent to R(G) = R^n; the latter, by (6.20) and Theorem 1.7 (with G defined by (6.18) and F being the identity operator in R^n), is equivalent to the estimate (6.15).

The proof of "2)⇐⇒3)" is easy, as in Theorem 1.13 (because R^n is finite dimensional).

"4)=⇒3)". We use an idea from the proof of [275, Theorem 3.2]. Assume that B_1^⊤ z(·) ≡ 0 in (0, T), a.s., for some z_0 ∈ R^n. Since z(·) solves (6.14),

B_1^⊤ z(t) = B_1^⊤ z_0 − ∫_0^t B_1^⊤ A_1^⊤ z(s)ds − ∫_0^t B_1^⊤ A_2^⊤ z(s)dW(s) = 0,   ∀ t ∈ (0, T).

By the uniqueness of the semimartingale decomposition, this forces

B_1^⊤ z_0 = 0,   B_1^⊤ A_1^⊤ z ≡ 0,   B_1^⊤ A_2^⊤ z ≡ 0.   (6.21)

Hence B_1^⊤ A_1^⊤ z_0 = B_1^⊤ A_2^⊤ z_0 = 0. Noticing that z(·) solves (6.14), by (6.21), we have

B_1^⊤ A_1^⊤ z(t) = B_1^⊤ A_1^⊤ z_0 − ∫_0^t B_1^⊤ A_1^⊤ A_1^⊤ z(s)ds − ∫_0^t B_1^⊤ A_1^⊤ A_2^⊤ z(s)dW(s) = 0

and

B_1^⊤ A_2^⊤ z(t) = B_1^⊤ A_2^⊤ z_0 − ∫_0^t B_1^⊤ A_2^⊤ A_1^⊤ z(s)ds − ∫_0^t B_1^⊤ A_2^⊤ A_2^⊤ z(s)dW(s) = 0.


Hence,

B_1^⊤ A_1^⊤ A_1^⊤ z ≡ B_1^⊤ A_1^⊤ A_2^⊤ z ≡ B_1^⊤ A_2^⊤ A_1^⊤ z ≡ B_1^⊤ A_2^⊤ A_2^⊤ z ≡ 0,

which implies

B_1^⊤ A_1^⊤ A_1^⊤ z_0 = B_1^⊤ A_1^⊤ A_2^⊤ z_0 = B_1^⊤ A_2^⊤ A_1^⊤ z_0 = B_1^⊤ A_2^⊤ A_2^⊤ z_0 = 0.

Repeating the above argument, we conclude that

z_0^⊤ [B_1, A_1B_1, A_2B_1, A_1^2B_1, A_1A_2B_1, A_2^2B_1, A_2A_1B_1, · · ·] = 0.   (6.22)

By (6.17) and (6.22), it follows that z_0 = 0.

"3)=⇒4)". We use the contradiction argument. Assume that (6.17) were false. Then we could find a nonzero z_0 ∈ R^n satisfying (6.22). For this z_0, denote by z(·) the corresponding solution to (6.14). Clearly, z(·) can be approximated (in L^2_F(Ω;C([0,T];R^n))) by the Picard sequence {z_k(·)}_{k=0}^∞ defined by

z_0(t) = z_0,
z_k(t) = z_0 − ∫_0^t A_1^⊤ z_{k−1}(s)ds − ∫_0^t A_2^⊤ z_{k−1}(s)dW(s),   k ∈ N,   (6.23)

for any t ∈ [0, T]. By (6.22) and (6.23), via a direct computation, one can show that

B_1^⊤ z_k(·) ≡ 0,   k = 0, 1, 2, · · ·.   (6.24)

By (6.24), we deduce that B_1^⊤ z(·) ≡ 0 in (0, T). Hence, by (6.16), it follows that z_0 = 0, which is a contradiction.

As a consequence of Theorem 6.8, we have the following characterization of the exact controllability of (6.11).

Corollary 6.9. ([275]) The system (6.11) is exactly controllable at time T if and only if the rank condition (6.17) holds.

Note that the rank condition (6.17) is not easy to verify because the matrix (B_1, A_1B_1, A_2B_1, A_1^2B_1, A_1A_2B_1, A_2^2B_1, A_2A_1B_1, · · ·) has infinitely many columns. To simplify this condition, we introduce two sequences of matrices {M_{1,k}}_{k=1}^∞ and {M_{2,k}}_{k=1}^∞ inductively as follows:

M_{1,1} = A_1B_1,   M_{2,1} = A_2B_1,   (6.25)

and, for k ∈ N,

M_{1,k+1} = (A_1M_{1,k}, A_1M_{2,k}),   M_{2,k+1} = (A_2M_{1,k}, A_2M_{2,k}).   (6.26)
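The recursion (6.25)–(6.26) is straightforward to implement, which makes the forthcoming rank condition easy to test numerically; the following sketch (the matrix pair is an illustrative assumption) assembles the blocks up to level n − 1 and computes the rank.

```python
import numpy as np

def rank_condition(A1, A2, B1):
    """Rank of (B1, M_{1,1}, M_{2,1}, ..., M_{1,n-1}, M_{2,n-1}),
    with the blocks built by the recursion (6.25)-(6.26)."""
    n = A1.shape[0]
    blocks = [B1]
    M1, M2 = A1 @ B1, A2 @ B1                     # level 1, from (6.25)
    for _ in range(n - 1):
        blocks += [M1, M2]
        M1, M2 = (np.hstack([A1 @ M1, A1 @ M2]),  # next level, from (6.26)
                  np.hstack([A2 @ M1, A2 @ M2]))
    return np.linalg.matrix_rank(np.hstack(blocks))

# A toy pair: an integrator chain with no coupling in the diffusion (A2 = 0).
A1 = np.array([[0., 1.], [0., 0.]])
A2 = np.zeros((2, 2))
B1 = np.array([[0.], [1.]])
print(rank_condition(A1, A2, B1))  # → 2, so the rank condition holds with n = 2
```

Note that the number of columns grows like 2^k, so this direct check is only practical for small n.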

We have the following Kalman-type rank condition to characterize the exact controllability of (6.11):


Theorem 6.10. The rank condition (6.17) holds if and only if

rank (B_1, M_{1,1}, M_{2,1}, M_{1,2}, M_{2,2}, · · ·, M_{1,n−1}, M_{2,n−1}) = n.   (6.27)

Proof: The "if" part is obvious. It suffices to prove the "only if" part. For any matrix M, we denote by span{M} the vector space spanned by the column vectors of M. If for each k ∈ {1, · · ·, n−1} there exists an η_k ∈ span{M_{1,k}, M_{2,k}} such that

η_k ∉ span{B_1, M_{1,1}, M_{2,1}, M_{1,2}, M_{2,2}, · · ·, M_{1,k−1}, M_{2,k−1}},

then {B_1, η_1, · · ·, η_{n−1}} is linearly independent, and therefore (6.27) holds. Now, suppose that there is a k ∈ {1, · · ·, n−1} such that

span{M_{1,k}, M_{2,k}} ⊂ span{B_1, M_{1,1}, M_{2,1}, M_{1,2}, M_{2,2}, · · ·, M_{1,k−1}, M_{2,k−1}}.   (6.28)

Then we claim that, for any ℓ > k,

span{M_{1,ℓ}, M_{2,ℓ}} ⊂ span{B_1, M_{1,1}, M_{2,1}, M_{1,2}, M_{2,2}, · · ·, M_{1,k−1}, M_{2,k−1}}.   (6.29)

We first consider the case ℓ = k + 1. Let η ∈ span{M_{1,k+1}}. Then η = A_1 η_1, where η_1 ∈ span{M_{1,k}, M_{2,k}}. By (6.28), we find that η_1 ∈ span{B_1, M_{1,1}, M_{2,1}, · · ·, M_{1,k−1}, M_{2,k−1}}. Thus,

A_1 η_1 ∈ span{A_1B_1, A_1M_{1,1}, A_1M_{2,1}, · · ·, A_1M_{1,k−1}, A_1M_{2,k−1}}
⊂ span{B_1, M_{1,1}, M_{2,1}, · · ·, M_{1,k−1}, M_{2,k−1}, M_{1,k}, M_{2,k}}
⊂ span{B_1, M_{1,1}, M_{2,1}, · · ·, M_{1,k−1}, M_{2,k−1}},

where the last inclusion uses (6.28). This implies that span{M_{1,k+1}} ⊂ span{B_1, M_{1,1}, M_{2,1}, · · ·, M_{1,k−1}, M_{2,k−1}}. Similarly, span{M_{2,k+1}} ⊂ span{B_1, M_{1,1}, M_{2,1}, · · ·, M_{1,k−1}, M_{2,k−1}}. Hence, (6.29) holds for ℓ = k + 1. By induction, (6.29) holds for all ℓ > k. Hence,


6.2 Control System With a Control in the Drift Term In this subsection, we study a special case for which the control is only acted in the drift term. Let us consider the following special case of the system (6.1): { dy = (Ay + Bu)dt + CydW (t) in [0, T ], (6.30) y(0) = y0 . In (6.30), u(·) ∈ L1F (0, T ; L2 (Ω; Rm )) is the control variable and y(·) is the state. Clearly, y(·) ∈ L2F (Ω; C([0, T ]; Rn )). The exact controllability for (6.30) is defined as follows: Definition 6.12. The system (6.30) is called exactly controllable (at time T ) if for any y0 ∈ Rn and y1 ∈ L2FT (Ω; Rn ), there exists a control u(·) ∈ L1F (0, T ; L2 (Ω; Rm )) so that the corresponding solution y to (6.30) verifies that y(T ) = y1 , a.s. Let us first show the following preliminary result. Theorem 6.13. Let H be a Hilbert space and p ∈ [1, ∞). For any ξ ∈ LpFT (Ω; H), there is an f ∈ L1F (0, T ; Lp (Ω; H)) such that




T

ξ=

f (t)dt

(6.31)

0

and |f (·)|L1F (0,T ;Lp (Ω;H)) ≤ |ξ|LpF

T

(Ω;H)) .

(6.32)
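Before the proof, note its key discrete intuition: the adjoint of the time-integration operator turns out to be conditional expectation (this is the identity (6.35) derived below). On a finite probability space the analogue can be checked directly; the four-scenario space and two-step filtration in the following sketch are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.full(4, 0.25)                 # four equally likely scenarios
# Filtration via partitions: F0 trivial, F1 = {{0,1},{2,3}}, F2 discrete.
partitions = [[[0, 1, 2, 3]], [[0, 1], [2, 3]], [[0], [1], [2], [3]]]

def cond_exp(x, partition):
    """Conditional expectation of x given the sigma-field of a partition
    (uniform probabilities)."""
    out = np.empty_like(x, dtype=float)
    for cell in partition:
        out[cell] = x[cell].mean()
    return out

eta = rng.normal(size=4)                 # a "final-time" random variable
# An adapted process f: each f_t is made measurable by projecting noise.
f = np.stack([cond_exp(rng.normal(size=4), partitions[t]) for t in range(3)])

lhs = np.dot(probs, f.sum(axis=0) * eta)   # E[(sum_t f_t) * eta], i.e. <L_T f, eta>
rhs = sum(np.dot(probs, f[t] * cond_exp(eta, partitions[t])) for t in range(3))
assert abs(lhs - rhs) < 1e-12              # discrete analogue of (6.35)
print("discrete analogue of (6.35) verified")
```

The equality holds because each f_t is measurable with respect to the t-th partition, exactly the step used in the duality computation below.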

Proof: For any p ∈ [1, ∞), define an operator L_T : L^1_F(0,T;L^p(Ω;H)) → L^p_{F_T}(Ω;H) by

L_T f(·) = ∫_0^T f(t)dt,   ∀ f(·) ∈ L^1_F(0,T;L^p(Ω;H)).

Clearly, to prove (6.31), we only need to prove that

R(L_T) = L^p_{F_T}(Ω;H).   (6.33)

Note that L^p_{F_T}(Ω;H) = (L^{p′}_{F_T}(Ω;H))′, where p′ denotes the Hölder conjugate of p, and, by Theorem 2.73,

(L^1_F(0,T;L^p(Ω;H)))′ = L^∞_F(0,T;L^{p′}(Ω;H)).

Hence, in order to prove (6.33) and (6.32), by Theorem 1.10, it suffices to derive the following inequality:

|L_T^∗ η|_{L^∞_F(0,T;L^{p′}(Ω;H))} ≥ |η|_{L^{p′}_{F_T}(Ω;H)},   ∀ η ∈ L^{p′}_{F_T}(Ω;H).   (6.34)

In order to prove (6.34), let us first find the dual operator L_T^∗ of L_T. For any f(·) ∈ L^1_F(0,T;L^p(Ω;H)) and η ∈ L^{p′}_{F_T}(Ω;H), we have

⟨L_T f, η⟩_{L^p_{F_T}(Ω;H), L^{p′}_{F_T}(Ω;H)} = E⟨∫_0^T f(t)dt, η⟩_H = ∫_0^T E⟨f(t), η⟩_H dt = ∫_0^T E⟨f(t), E(η | F_t)⟩_H dt,

where the last equality holds because f(t) is F_t-measurable. Consequently, L_T^∗ : L^{p′}_{F_T}(Ω;H) → L^∞_F(0,T;L^{p′}(Ω;H)) is given by

(L_T^∗ η)(t) = E(η | F_t),   a.e. t ∈ [0, T], ∀ η ∈ L^{p′}_{F_T}(Ω;H).   (6.35)

For p > 1, making use of (6.35), we find that

|L_T^∗ η|_{L^∞_F(0,T;L^{p′}(Ω;H))} = sup_{t∈[0,T]} (E|E(η | F_t)|_H^{p′})^{1/p′} ≥ (E|E(η | F_T)|_H^{p′})^{1/p′} = (E|η|_H^{p′})^{1/p′} = |η|_{L^{p′}_{F_T}(Ω;H)}.

Therefore, (6.34) holds for p > 1.


Next, for p = 1, we have that ∞ |L∗T η|L∞ = sup F (0,T ;L (Ω;H))

(

ess sup ω∈Ω |E(η | Ft )|H

)

t∈[0,T ]

≥ ess sup ω∈Ω (|E(η | FT )|H ) = ess sup ω∈Ω |η(ω)|H = |η|L∞ F

T

(Ω;H) .

This implies that (6.34) also holds for p = 1. Remark 6.14. Theorem 6.13 is also true when H is a Banach space having the Radon-Nikod´ ym property. We refer the readers to [239, Theorem 3.1] for the proof. By Theorem 6.13, it is easy to show that the system (6.30) is exactly controllable whenever C = 0 rank B = n. Generally, we have the following result. Theorem 6.15. System (6.30) is exactly controllable if and only if rank B = n. Proof : The “only if” part. We use the contradiction argument. If rank B < n, then there would exist an η ∈ Rn with |η|Rn = 1 such that η ⊤ B = 0. Let ξ ∈ L2FT (Ω) be the random variable given in Proposition 6.2 and put y1 = ξη. Since the system (6.30) is exactly controllable, one can find a control u ∈ L1F (0, T ; L2 (Ω; Rm )) such that ∫

T

ξη = y0 +

(



)

T

Ay(t) + Bu(t) dt +

0

Cy(t)dW (t).

(6.36)

0

Multiplying both sides of (6.36) by $\eta^\top$, we obtain

$$\xi = \eta^\top y_0 + \int_0^T \eta^\top A\,y(t)\,dt + \int_0^T \eta^\top C\,y(t)\,dW(t),$$

which contradicts Proposition 6.2.

The "if" part. Similarly to the proof of Theorem 1.13 (for "1)$\Leftrightarrow$2)"), it suffices to consider the special case in which the initial datum $y_0$ in the system (6.30) equals $0$. We define an operator $G: L^1_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))\to L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$ by

$$G(u(\cdot)) = y(T), \qquad \forall\,u(\cdot)\in L^1_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m)), \tag{6.37}$$

where $y(\cdot)$ is the corresponding solution to (6.30) with $y_0=0$. We need to give an explicit form of the dual operator $G^*: L^2_{\mathcal F_T}(\Omega;\mathbb R^n)\to L^\infty_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))$. For this purpose, we introduce the following backward stochastic differential equation:

$$\begin{cases} dz(t) = -\big(A^\top z(t) + C^\top Z(t)\big)\,dt + Z(t)\,dW(t) & \text{in } [0,T),\\ z(T) = z_1, \end{cases} \tag{6.38}$$

where $z_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$. Clearly, the equation (6.38) admits a unique mild solution $(z(\cdot),Z(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^2_{\mathbb F}(0,T;\mathbb R^n)$. Applying Itô's formula to (6.30) (with $y_0=0$) and (6.38), we obtain

$$\mathbb E\langle y(T),z_1\rangle_{\mathbb R^n} = \mathbb E\int_0^T\langle Bu(t),z(t)\rangle_{\mathbb R^n}\,dt, \qquad \forall\,z_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n), \tag{6.39}$$

where $(z(\cdot),Z(\cdot))$ solves (6.38). By (6.37) and (6.39), it is clear that

$$(G^*z_1)(t) = B^\top z(t), \qquad \text{a.e. } t\in[0,T]. \tag{6.40}$$

Now, the exact controllability of (6.30) is equivalent to $R(G)=L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$; the latter, by (6.40) and Theorem 1.10 (with $G$ defined by (6.18) and $F$ being the identity operator on $L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$), is equivalent to the statement that all solutions $(z(\cdot),Z(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^2_{\mathbb F}(0,T;\mathbb R^n)$ to (6.38) satisfy the following estimate:

$$|z_1|_{L^2_{\mathcal F_T}(\Omega;\mathbb R^n)} \le C\,|B^\top z|_{L^\infty_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))}, \qquad \forall\,z_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n). \tag{6.41}$$
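As an aside, the duality identity (6.39) derived above is a finite-dimensional Itô-formula computation that can be sanity-checked by simulation. The sketch below (with illustrative parameters $a$, $b$, $c$ and control $u$, none of which come from the text) uses the scalar case $n=m=1$: for a deterministic final datum $z_1$, the solution of (6.38) is $z(t)=e^{a(T-t)}z_1$ with $Z\equiv 0$, so both sides of (6.39) are computable.

```python
import numpy as np

# Scalar case of (6.30)/(6.38): dy = (a y + b u) dt + c y dW, y(0) = 0,
# and z(t) = e^{a (T - t)} z1, Z = 0, for a deterministic final datum z1.
a, b, c = 0.5, 2.0, 0.8          # illustrative coefficients
T, steps, paths = 1.0, 400, 100_000
dt = T / steps
rng = np.random.default_rng(1)
u = lambda t: np.cos(t)          # an illustrative deterministic control
z1 = 1.7

y = np.zeros(paths)
rhs = 0.0
for k in range(steps):
    t = k * dt
    rhs += b * u(t) * np.exp(a * (T - t)) * z1 * dt      # E int <B u, z> dt
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    y += (a * y + b * u(t)) * dt + c * y * dW            # Euler-Maruyama step

lhs = (y * z1).mean()                                     # E <y(T), z1>
print(abs(lhs - rhs) / abs(rhs) < 0.02)                   # the two sides of (6.39) agree
```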

Now, by $\operatorname{rank}B=n$, $z(\cdot)\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ and the second equation in (6.38), the desired estimate (6.41) is obvious. This completes the proof of Theorem 6.15.

Remark 6.16. Fix any $q\in[1,2)$. By means of Theorem 6.13, we may give another "proof" of the "if" part of Theorem 6.15. Indeed, without loss of generality, we assume that $B\in\mathbb R^{n\times n}$ (and hence $B$ is invertible). Fix any $y_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$. Since $\Phi(T)^{-1}\big(y_1-\Phi(T)y_0\big)\in L^{\frac{2+q}{2}}_{\mathcal F_T}(\Omega;\mathbb R^n)$, it follows from Theorem 6.13 that there is an $f\in L^1_{\mathbb F}\big(0,T;L^{\frac{2+q}{2}}(\Omega;\mathbb R^n)\big)$ such that

$$\Phi(T)^{-1}\big(y_1-\Phi(T)y_0\big) = \int_0^T f(s)\,ds. \tag{6.42}$$

Here $\Phi(\cdot)$ is the solution to (3.13) with $A(t)=A$, $d=1$ and $C_1(t)=C$. Let $u(\cdot)=B^{-1}\Phi(\cdot)f(\cdot)$. It is easy to see that $u(\cdot)\in L^1_{\mathbb F}(0,T;L^q(\Omega;\mathbb R^m))$. By Theorem 3.3, we have

$$y(T) = \Phi(T)y_0 + \Phi(T)\int_0^T \Phi(s)^{-1}Bu(s)\,ds = y_1. \tag{6.43}$$

Hence, we conclude the exact controllability of the system (6.30) provided that the control class is chosen as follows:

$$\mathcal U \stackrel{\Delta}{=} \Big\{\,u(\cdot)\in L^1_{\mathbb F}(0,T;L^q(\Omega;\mathbb R^m))\ \Big|\ \text{the corresponding solution to (6.30) with } y_0=0 \text{ satisfies } y(T)\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)\Big\}. \tag{6.44}$$

In this case, $u(\cdot)\in\mathcal U$ and $y(\cdot)$ are respectively the control and the state variables for the system (6.30). Then, clearly, $y(\cdot)\in L^q_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$. Note that, in the above "proof", $q$ cannot be chosen to be $2$, and therefore the control class $\mathcal U$ is a proper subset of $L^1_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))$.

Next, we show that, for any $r\in(1,\infty)$ and $p\in(1,\infty)$, the system (6.30) with $C=0$ and $m=n$ is not exactly controllable if the control class is chosen to be $L^r_{\mathbb F}(0,T;L^p(\Omega;\mathbb R^m))$. For simplicity, we only consider the case $m=n=1$.

Theorem 6.17. For any $r,p\in(1,\infty)$, there exists an $\eta\in L^p_{\mathcal F_T}(\Omega)$ such that for any $f\in L^r_{\mathbb F}(0,T;L^p(\Omega))$,

$$\eta \ne \int_0^T f(t)\,dt. \tag{6.45}$$

Proof: For any $r,p\in(1,\infty)$, define an operator $L_T: L^r_{\mathbb F}(0,T;L^p(\Omega))\to L^p_{\mathcal F_T}(\Omega)$ by

$$L_T f(\cdot) = \int_0^T f(t)\,dt, \qquad \forall\,f(\cdot)\in L^r_{\mathbb F}(0,T;L^p(\Omega)).$$

Denote by $r'$ and $p'$ the Hölder conjugates of $r$ and $p$, respectively. Similarly to (6.35), by Theorem 2.73, $L_T^*: L^{p'}_{\mathcal F_T}(\Omega)\to L^{r'}_{\mathbb F}(0,T;L^{p'}(\Omega))$ is given by

$$(L_T^*\eta)(t) = \mathbb E(\eta\,|\,\mathcal F_t), \qquad \text{a.e. } t\in[0,T],\ \forall\,\eta\in L^{p'}_{\mathcal F_T}(\Omega). \tag{6.46}$$

Let us use a contradiction argument to prove Theorem 6.17. If $R(L_T)=L^p_{\mathcal F_T}(\Omega)$ for some $r,p\in(1,\infty)$, then, in view of Theorem 1.7, we could find a constant $C>0$ such that

$$|\eta|_{L^{p'}_{\mathcal F_T}(\Omega)} \le C\,|L_T^*\eta|_{L^{r'}_{\mathbb F}(0,T;L^{p'}(\Omega))}, \qquad \forall\,\eta\in L^{p'}_{\mathcal F_T}(\Omega). \tag{6.47}$$

Consider a sequence of random variables $\{\eta_n\}_{n=1}^\infty$ defined by

$$\eta_n = \int_0^T e^{nt}\,dW(t), \qquad n\in\mathbb N.$$

It is obvious that $\eta_n\in L^{p'}_{\mathcal F_T}(\Omega)$ for any $n\in\mathbb N$. By Theorem 2.123, the integral $\int_0^T e^{nt}\,dW(t)$ is a Gaussian random variable with mean $0$ and variance $\frac{e^{2nT}-1}{2n}$. Hence,

$$\Big(\mathbb E\Big|\int_0^T e^{nt}\,dW(t)\Big|^{p'}\Big)^{\frac{1}{p'}} = \Big[\int_{-\infty}^{\infty}\frac{|x|^{p'}}{\sqrt{(e^{2nT}-1)\pi/n}}\,e^{-\frac{nx^2}{e^{2nT}-1}}\,dx\Big]^{\frac{1}{p'}} = \Big[\int_{-\infty}^{\infty}\Big(\frac{e^{2nT}-1}{n}\Big)^{\frac{p'}{2}}\frac{|x|^{p'}}{\sqrt{\pi}}\,e^{-x^2}\,dx\Big]^{\frac{1}{p'}} = \sqrt{\frac{e^{2nT}-1}{n}}\,\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}|x|^{p'}e^{-x^2}\,dx\Big)^{\frac{1}{p'}}. \tag{6.48}$$

Now, by (6.48), it is easy to see that

$$|\eta_n|_{L^{p'}_{\mathcal F_T}(\Omega)} = \Big(\mathbb E\Big|\int_0^T e^{nt}\,dW(t)\Big|^{p'}\Big)^{\frac{1}{p'}} = \Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}|x|^{p'}e^{-x^2}\,dx\Big)^{\frac{1}{p'}}\sqrt{\frac{e^{2nT}-1}{n}}. \tag{6.49}$$
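The Gaussian mean and variance used in (6.48)–(6.49) — $\operatorname{Var}\big(\int_0^T e^{nt}\,dW(t)\big)=\frac{e^{2nT}-1}{2n}$, by the Itô isometry — can be confirmed by a Monte Carlo sketch (illustrative $n$ and $T$, not from the text):

```python
import numpy as np

T, n, steps, paths = 1.0, 3, 500, 20_000
dt = T / steps
rng = np.random.default_rng(0)

t_left = np.linspace(0.0, T, steps, endpoint=False)   # left endpoints of the partition
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
eta = (np.exp(n * t_left) * dW).sum(axis=1)           # Ito sum for int_0^T e^{nt} dW(t)

var_exact = (np.exp(2 * n * T) - 1) / (2 * n)         # (e^{2nT} - 1) / (2n)
print(abs(eta.mean()) < 0.05 * np.sqrt(var_exact))    # mean approximately 0
print(abs(eta.var() / var_exact - 1) < 0.05)          # variance matches the formula
```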

Using (6.48) again (with $T$ replaced by $t$), we have

$$\big|\mathbb E(\eta_n\,|\,\mathcal F_\cdot)\big|_{L^{r'}_{\mathbb F}(0,T;L^{p'}(\Omega))} = \Big[\int_0^T\Big(\mathbb E\Big|\int_0^t e^{n\tau}\,dW(\tau)\Big|^{p'}\Big)^{\frac{r'}{p'}}\,dt\Big]^{\frac{1}{r'}} = \Big[\int_0^T\Big(\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}|x|^{p'}e^{-x^2}\,dx\Big)^{\frac{1}{p'}}\sqrt{\frac{e^{2nt}-1}{n}}\Big)^{r'}\,dt\Big]^{\frac{1}{r'}}$$
$$\le \frac{1}{\sqrt n}\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}|x|^{p'}e^{-x^2}\,dx\Big)^{\frac{1}{p'}}\Big(\int_0^T e^{nr't}\,dt\Big)^{\frac{1}{r'}} \le \frac{1}{\sqrt n}\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}|x|^{p'}e^{-x^2}\,dx\Big)^{\frac{1}{p'}}\frac{e^{nT}}{(nr')^{\frac{1}{r'}}}. \tag{6.50}$$

From (6.49) and (6.50), it follows that

$$\lim_{n\to\infty}\frac{\big|\mathbb E(\eta_n\,|\,\mathcal F_\cdot)\big|_{L^{r'}_{\mathbb F}(0,T;L^{p'}(\Omega))}}{|\eta_n|_{L^{p'}_{\mathcal F_T}(\Omega)}} \le \lim_{n\to\infty}\frac{e^{nT}}{(nr')^{\frac{1}{r'}}\sqrt{e^{2nT}-1}} = 0.$$

This, combined with (6.46), gives

$$\lim_{n\to\infty}\frac{|L_T^*\eta_n|_{L^{r'}_{\mathbb F}(0,T;L^{p'}(\Omega))}}{|\eta_n|_{L^{p'}_{\mathcal F_T}(\Omega)}} = 0,$$

which contradicts the inequality (6.47). This completes the proof of Theorem 6.17.

Remark 6.18. At this moment, it is unclear whether the same result in Theorem 6.17 holds for $p=1$ or not.
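The vanishing of the quotient above can also be seen numerically: the bound $e^{nT}\big/\big((nr')^{1/r'}\sqrt{e^{2nT}-1}\big)$ behaves like $(nr')^{-1/r'}$ for large $n$. A quick check (illustrative $T$ and $r'$, chosen here, not in the text):

```python
import numpy as np

T, rp = 1.0, 2.0   # rp plays the role of r'

def ratio(n):
    # e^{nT} / ((n r')^{1/r'} sqrt(e^{2nT} - 1)) ~ (n r')^{-1/r'} as n -> infinity
    return np.exp(n * T) / ((n * rp) ** (1 / rp) * np.sqrt(np.exp(2 * n * T) - 1.0))

vals = [ratio(n) for n in (1, 10, 100, 300)]
print(all(x > y for x, y in zip(vals, vals[1:])) and vals[-1] < 0.05)  # decreasing to 0
```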


For any $r\in[1,\infty)$, let us "embed" the controlled ordinary equation (1.1) with $m=n$ into our stochastic setting; i.e., we consider the following controlled stochastic differential equation:

$$\begin{cases} dy = (Ay+Bu)\,dt & \text{in } [0,T],\\ y(0)=y_0, \end{cases} \tag{6.51}$$

where $A\in\mathbb R^{n\times n}$, $B\in\mathbb R^{n\times n}$, $u(\cdot)\in L^r_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^n))$ is the control variable and $y(\cdot)\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ is the state variable. Similarly to Definition 6.12, the system (6.51) is called exactly controllable (at time $T$) if for any $y_0\in\mathbb R^n$ and $y_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$, one can find a control $u(\cdot)\in L^r_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^n))$ such that the corresponding solution $y$ to (6.51) satisfies $y(T)=y_1$. Then, as an easy consequence of Theorems 6.15 and 6.17, the following result holds:

Corollary 6.19. The system (6.51) is exactly controllable if and only if $\operatorname{rank}B=n$ and $r=1$.

To the best of our knowledge, unlike the deterministic case, so far there exists no universally accepted notion of controllability in the stochastic setting, even for stochastic differential equations in finite dimensions. Motivated by Theorem 6.15 and Corollary 6.19, we introduce the following corrected formulation of exact controllability for the system (6.1):

Definition 6.20. The system (6.1) is called exactly controllable if for any $y_0\in\mathbb R^n$ and $y_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$, one can find a control $u(\cdot)\in L^1_{\mathbb F}(0,T;\mathbb R^m)$ such that $Du\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^n)$ and the corresponding solution $y(\cdot)$ to (6.1) satisfies $y(T)=y_1$, a.s.

Remark 6.21. Clearly, the control class in Definition 6.20 is the largest one for which the solutions to (6.1) make sense. The above definition seems to be a reasonable notion of exact controllability for stochastic differential equations. Nevertheless, a complete study of this problem is still under consideration, and it does not seem easy, even for the very simple case $n=2$. On the other hand, from Corollary 6.19 it is easy to see that exact controllability seems to be too strong a controllability requirement for stochastic evolution equations.
Naturally, one hopes to see what happens if such a controllability requirement is weakened, say to null or approximate controllability; this is the main concern of the next section.
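The algebraic part of the condition in Corollary 6.19 (and Theorem 6.15) — $\operatorname{rank}B=n$ — is easy to test numerically. The helper below is a hypothetical illustration (the function name and examples are ours, not from the text):

```python
import numpy as np

def has_full_row_rank(B: np.ndarray) -> bool:
    """Check rank B = n for B in R^{n x m}: the algebraic part of the test."""
    return np.linalg.matrix_rank(B) == B.shape[0]

print(has_full_row_rank(np.array([[1.0, 0.0], [0.0, 2.0]])))  # True: rank 2
print(has_full_row_rank(np.array([[1.0, 2.0], [2.0, 4.0]])))  # False: rank 1 < 2
```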

6.3 Lack of Robustness for Null/Approximate Controllability

In this section, we shall show the lack of robustness of null and approximate controllability for stochastic differential equations.

We begin with the following two notions of controllability for the system (6.1).

Definition 6.22. The system (6.1) is called null controllable (at time $T$) if for any $y_0\in\mathbb R^n$, there exists a control $u(\cdot)\in L^1_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))$ such that $Du\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^n)$ and the corresponding solution $y(\cdot)$ to (6.1) satisfies $y(T)=0$, a.s.

Definition 6.23. The system (6.1) is called approximately controllable (at time $T$) if for any $y_0\in\mathbb R^n$, $y_1\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$ and $\varepsilon>0$, there exists a control $u(\cdot)\in L^1_{\mathbb F}(0,T;L^2(\Omega;\mathbb R^m))$ such that $Du\in L^{2,loc}_{\mathbb F}(0,T;\mathbb R^n)$ and the corresponding solution $y(\cdot)$ to (6.1) satisfies $y(T)\in L^2_{\mathcal F_T}(\Omega;\mathbb R^n)$ and $|y(T)-y_1|_{L^2_{\mathcal F_T}(\Omega;\mathbb R^n)}<\varepsilon$.

To show the lack of robustness of null/approximate controllability for (6.1), we consider the following very simple stochastic control system (in two dimensions):

$$\begin{cases} dy_1 = y_2\,dt + \varepsilon y_2\,dW(t) & \text{in } [0,T],\\ dy_2 = u\,dt & \text{in } [0,T],\\ y_1(0)=y_1^0,\quad y_2(0)=y_2^0, \end{cases} \tag{6.52}$$

where $(y_1^0,y_2^0)\in\mathbb R^2$, $u(\cdot)\in L^1_{\mathbb F}(0,T;L^2(\Omega))$ is the control variable and $\varepsilon$ is a parameter.

Clearly, if $\varepsilon=0$, then (6.52) is null controllable for any $T>0$. However, this system is NOT null controllable anymore whenever $\varepsilon\ne 0$. To see this, we use a contradiction argument. Assume that there was an $\varepsilon_0>0$ such that for all $\varepsilon\in[-\varepsilon_0,\varepsilon_0]$ and $T>0$, (6.52) would be null controllable. Let us take $y_1^0=0$, $y_2^0=1$, $\varepsilon=\varepsilon_0$ and $T=\frac{\varepsilon_0^2}{2}$. By the null controllability of (6.52) at $T=\frac{\varepsilon_0^2}{2}$, we have

$$y_1\Big(\frac{\varepsilon_0^2}{2}\Big) = \int_0^{\varepsilon_0^2/2} y_2\,dt + \varepsilon_0\int_0^{\varepsilon_0^2/2} y_2\,dW(t) = 0.$$

Thus,

$$\mathbb E\Big|\int_0^{\varepsilon_0^2/2} y_2\,dt\Big|^2 = \mathbb E\Big|\varepsilon_0\int_0^{\varepsilon_0^2/2} y_2\,dW(t)\Big|^2 = \varepsilon_0^2\int_0^{\varepsilon_0^2/2}\mathbb E|y_2|^2\,dt. \tag{6.53}$$

On the other hand, by the Cauchy–Schwarz inequality,

$$\mathbb E\Big|\int_0^{\varepsilon_0^2/2} y_2\,dt\Big|^2 \le \mathbb E\Big[\Big(\int_0^{\varepsilon_0^2/2} 1\,dt\Big)\Big(\int_0^{\varepsilon_0^2/2}|y_2|^2\,dt\Big)\Big] = \frac{\varepsilon_0^2}{2}\int_0^{\varepsilon_0^2/2}\mathbb E|y_2|^2\,dt. \tag{6.54}$$

It follows from (6.53) and (6.54) that $\int_0^{\varepsilon_0^2/2}\mathbb E|y_2|^2\,dt=0$, which contradicts the choice of $y_2(0)$.

Next, we consider the approximate controllability of (6.52). For this purpose, we introduce the following backward stochastic differential equation:

$$\begin{cases} dz_1 = Z_1\,dW(t) & \text{in } [0,T],\\ dz_2 = -(z_1+\varepsilon Z_1)\,dt + Z_2\,dW(t) & \text{in } [0,T],\\ z_1(T)=z_1^T,\quad z_2(T)=z_2^T, \end{cases} \tag{6.55}$$

where $(z_1^T,z_2^T)\in L^2_{\mathcal F_T}(\Omega;\mathbb R^2)$. We define an operator $\mathcal L: L^1_{\mathbb F}(0,T;L^2(\Omega))\to L^2_{\mathcal F_T}(\Omega;\mathbb R^2)$ by

$$\mathcal L(u(\cdot)) = (y_1(T),y_2(T)), \qquad \forall\,u(\cdot)\in L^1_{\mathbb F}(0,T;L^2(\Omega)), \tag{6.56}$$

where $(y_1(\cdot),y_2(\cdot))$ is the corresponding solution to (6.52) with $y_1^0=y_2^0=0$. Then, applying Itô's formula to (6.52) (with $y_1^0=y_2^0=0$) and (6.55), we obtain

$$\big\langle (y_1(T),y_2(T)),\,(z_1^T,z_2^T)\big\rangle_{L^2_{\mathcal F_T}(\Omega;\mathbb R^2)} = \mathbb E\int_0^T u(t)\,z_2(t)\,dt, \qquad \forall\,(z_1^T,z_2^T)\in L^2_{\mathcal F_T}(\Omega;\mathbb R^2). \tag{6.57}$$

By (6.56)–(6.57), it is clear that $\mathcal L^*: L^2_{\mathcal F_T}(\Omega;\mathbb R^2)\to L^\infty_{\mathbb F}(0,T;L^2(\Omega))$ is given as follows:

$$\big(\mathcal L^*(z_1^T,z_2^T)\big)(t) = z_2(t), \qquad \text{a.e. } t\in[0,T]. \tag{6.58}$$

Obviously, the system (6.52) is approximately controllable at time $T$ if and only if $\overline{R(\mathcal L)}=L^2_{\mathcal F_T}(\Omega;\mathbb R^2)$. Hence, by Theorem 1.12, the approximate controllability of (6.52) is equivalent to the following observability property for (solutions to) (6.55): if $z_2(\cdot)=0$ in $(0,T)$, then $z_1^T=z_2^T=0$, a.s.

If $\varepsilon=0$ and $z_2(\cdot)=0$, then by the second equation in (6.55), we obtain

$$z_2(t) = -\int_0^t z_1(s)\,ds + \int_0^t Z_2(s)\,dW(s) = 0, \qquad \forall\,t\in[0,T]. \tag{6.59}$$

By Itô's formula, we see that

$$\mathbb E|z_2(T)|^2 - \mathbb E|z_2(0)|^2 = \mathbb E\int_0^T |Z_2(s)|^2\,ds = 0, \tag{6.60}$$

which implies that $Z_2(t)=0$ for a.e. $(t,\omega)\in[0,T]\times\Omega$. Thus, it follows from (6.59) that $\int_0^t z_1(s)\,ds=0$ for any $t\in[0,T]$. This implies that $z_1(t)=0$ for a.e. $(t,\omega)\in[0,T]\times\Omega$. Then, by $(z_1(\cdot),z_2(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^2))$, we deduce that $z_1^T=z_2^T=0$, a.s. Therefore, we conclude that (6.52) is approximately controllable if $\varepsilon=0$.
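Both (6.53) and (6.60) rest on the Itô isometry $\mathbb E\big|\int_0^S\varphi\,dW\big|^2=\int_0^S\mathbb E|\varphi|^2\,dt$. A Monte Carlo sketch with the hypothetical adapted integrand $\varphi(t)=W(t)+1$ (so that $\int_0^S\mathbb E|\varphi|^2\,dt=S^2/2+S$), chosen purely for illustration:

```python
import numpy as np

S, steps, paths = 0.5, 500, 100_000
dt = S / steps
rng = np.random.default_rng(6)

W = np.zeros(paths)
ito = np.zeros(paths)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    ito += (W + 1.0) * dW        # integrand evaluated at the left endpoint (adapted)
    W += dW

exact = S**2 / 2 + S             # int_0^S E|W(t) + 1|^2 dt = int_0^S (t + 1) dt
print(abs((ito**2).mean() / exact - 1) < 0.03)   # Ito isometry holds
```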

However, if $\varepsilon\ne 0$, then it is easy to check that

$$(z_1(t),z_2(t),Z_1(t),Z_2(t)) = \Big(\exp\Big\{-\frac{W(t)}{\varepsilon}-\frac{t}{2\varepsilon^2}\Big\},\ 0,\ -\frac{1}{\varepsilon}\exp\Big\{-\frac{W(t)}{\varepsilon}-\frac{t}{2\varepsilon^2}\Big\},\ 0\Big)$$

is a nonzero solution to (6.55) with the final datum

$$(z_1^T,z_2^T) = \Big(\exp\Big\{-\frac{W(T)}{\varepsilon}-\frac{T}{2\varepsilon^2}\Big\},\ 0\Big).$$
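That this quadruple solves (6.55) comes down to two facts: $z_1$ is the exponential martingale solving $dz_1=-\frac1\varepsilon z_1\,dW$, and the drift $z_1+\varepsilon Z_1$ vanishes, forcing $z_2\equiv 0$. Both are easy to confirm numerically (a sketch with an illustrative $\varepsilon\ne 0$, not from the text):

```python
import numpy as np

eps, T, paths = 0.8, 1.0, 400_000    # illustrative epsilon != 0
rng = np.random.default_rng(3)

WT = rng.normal(0.0, np.sqrt(T), paths)
z1T = np.exp(-WT / eps - T / (2 * eps**2))     # z1(T) of the claimed solution
Z1T = -z1T / eps                               # Z1 = -(1/eps) z1

print(abs(z1T.mean() - 1.0) < 0.02)            # exponential martingale: E z1(t) = 1
print(np.max(np.abs(z1T + eps * Z1T)) < 1e-9)  # drift z1 + eps Z1 = 0, so z2 stays 0
```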

Hence, the above observability property for (6.55) does not hold. Therefore, (6.52) is not approximately controllable whenever $\varepsilon\ne 0$.

Remark 6.24. By the above two examples, we deduce that both null and approximate controllability for stochastic differential equations in finite dimensions are unstable under small perturbations. This differs significantly from the controllability properties of deterministic problems. Note that a similar phenomenon was first observed in [219] for control systems governed by stochastic parabolic equations (see also Section 9.3).

6.4 Notes and Comments

Except for Theorem 6.10 (which was communicated privately to us by Qida Xie), the results in Section 6.1 were first proven in [275] (see also [246, Section 3] for a slightly different presentation). The results in Section 6.2 are modifications of those in [239]. Section 6.3 is taken from [246]. We refer to [13, 40, 51, 125, 127, 141, 347, 348, 376, 382] for some other controllability results on stochastic differential equations in finite dimensions.

Compared to stochastic optimal control theory, the controllability theory for stochastic differential equations is surprisingly far from satisfactory, even in finite dimensions. Actually, in our opinion (as remarked in [246]), compared to the deterministic case, the controllability/observability theory for stochastic differential equations (even in finite dimensions) is still in its infancy. Numerous problems remain open, for example:

• Generally speaking, when $n>1$, the exact/null/approximate controllability of the linear system (6.1) (in the sense of Definitions 6.20, 6.22 and 6.23) is far from well understood;
• In the deterministic setting, at least in finite dimensions, controllability is closely related to another important notion in control theory, namely stabilization. It seems that so far there exist no studies on stochastic stabilization via controllability;
• It is well known that the classical controllability theory in the deterministic setting is the basis for linear filtering and prediction problems ([162, 163]). Now, since the stochastic controllability theory is far from satisfactory (as mentioned above), it seems quite interesting to revisit the corresponding filtering and prediction problems;
• Similarly to the deterministic setting ([197, 350]), it would be quite interesting to give a topological classification of stochastic linear control systems, at least for the relatively simple system (6.11); as far as we know, no results have been published for the stochastic problems;
• To the best of our knowledge, there exist no nontrivial results about the controllability of stochastic nonlinear control systems. It would be quite interesting to give some Lie bracket conditions (as people did for the deterministic setting, e.g., [61, 311]) for stochastic controllability problems, again at least for the nonlinear version of the relatively simple system (6.11).

7 Controllability for Stochastic Linear Evolution Equations

This chapter is devoted to presenting some results on the controllability of forward and backward stochastic linear evolution equations with possibly unbounded control operators. By the duality argument, these controllability problems can be reduced to suitable observability estimates for the dual equations. Explicit forms of controls for the controllability problems are also provided. Finally, the controllability of some forward stochastic evolution equations is shown to be equivalent to that of suitably chosen backward stochastic evolution equations.

© Springer Nature Switzerland AG 2021. Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_7

7.1 Formulation of the Problems

The $(F,\Gamma)$-controllability concept introduced in Definition 5.6 (in Section 5.3 of Chapter 5) is quite general and very hard to study (even in the deterministic setting and in finite dimensions). In this chapter, we shall focus mainly on exact/null/approximate controllability problems for stochastic linear evolution equations in an abstract setting.

Throughout this chapter, $H$, $V$ and $U$ are three Hilbert spaces which are identified with their dual spaces, $L_2^0\stackrel{\Delta}{=}L_2(V;H)$, and $A$ is the generator of a $C_0$-semigroup $\{S(t)\}_{t\ge 0}$ on $H$. In this chapter, $H$ and $U$ will serve as the state space and the control space, respectively, for the systems under consideration unless otherwise stated.

For any fixed $T>0$ and a control operator $B\in\mathcal L(U;H)$, we begin with the following deterministic controlled evolution equation:

$$\begin{cases} y_t(t) = Ay(t) + Bu(t) & \text{in } (0,T],\\ y(0)=y_0, \end{cases} \tag{7.1}$$

where $y_0\in H$, $y$ is the state variable, and $u\ (\in L^2(0,T;U))$ is the control variable. The exact/null/approximate controllability of (7.1) can be defined


as that in Definition 5.6. For example, the system (7.1) is called exactly (resp. null) controllable at time $T$ if for any $y_0,y_T\in H$ (resp. $y_0\in H$), one can find a control $u\in L^2(0,T;U)$ so that the corresponding solution to (7.1) satisfies $y(T)=y_T$ (resp. $y(T)=0$).

Similarly to the equation (1.27) for the exact controllability of (1.1), one introduces the following dual equation (of (7.1)), which is a (backward) equation evolving in $H$:

$$\begin{cases} z_t = -A^*z & \text{in } [0,T),\\ z(T)=z_T. \end{cases} \tag{7.2}$$

One can show the following result.

Theorem 7.1. The following statements hold:

1) The system (7.1) is exactly controllable at time $T$ if and only if solutions of (7.2) satisfy

$$|z_T|_H^2 \le C\int_0^T |B^*z(t)|_U^2\,dt, \qquad \forall\,z_T\in H; \tag{7.3}$$

2) The system (7.1) is null controllable at time $T$ if and only if solutions of (7.2) satisfy

$$|z(0)|_H^2 \le C\int_0^T |B^*z(t)|_U^2\,dt, \qquad \forall\,z_T\in H; \tag{7.4}$$

3) The system (7.1) is approximately controllable at time $T$ if and only if solutions to (7.2) enjoy the following property:

$$B^*z(\cdot)\equiv 0 \ \text{in } (0,T)\ \Longrightarrow\ z_T=0. \tag{7.5}$$

The proof of Theorem 7.1 can be found, say, in that of [377, Theorems 2.4–2.6 of Chapter 2].

Remark 7.2. 1) It is easy to see that, when $A$ generates a $C_0$-group $\{S(t)\}_{t\in\mathbb R}$ on $H$ (hence both (7.1) and (7.2) are time reversible), the estimates (7.3) and (7.4) are equivalent, and therefore in this case the exact controllability and the null controllability of the deterministic system (7.1) are equivalent. At the end of this section, we shall see that this is NOT true anymore in the stochastic setting.

2) From (7.4) and (7.5), it is clear that, if the dual equation (7.2) enjoys the property that $z(0)=0$ implies $z_T=0$, then the null controllability of (7.1) implies the approximate controllability of the same system. This is the case, for example, for deterministic parabolic equations. However, as we shall see in Chapter 9, this is NOT true for stochastic parabolic equations (see Theorems 9.7 and 9.9 for more details).
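Conclusion 1) of Remark 7.2 can be illustrated in two dimensions: for a skew-adjoint $A$ (which generates a group of rotations), the backward flow $z(0)=e^{A^\top T}z_T$ preserves the norm, so the left-hand sides of (7.3) and (7.4) coincide. A small sketch (the generator and data below are illustrative, not from the text):

```python
import numpy as np

theta = 0.9
A = np.array([[0.0, -theta], [theta, 0.0]])   # skew-symmetric: A^T = -A

def rot(t):
    # Closed form of e^{tA} for this rotation generator.
    c, s = np.cos(theta * t), np.sin(theta * t)
    return np.array([[c, -s], [s, c]])

T = 1.0
zT = np.array([3.0, -4.0])
z0 = rot(T).T @ zT    # z(t) = e^{A^T (T - t)} z_T solves z_t = -A^T z, z(T) = z_T
print(abs(np.linalg.norm(z0) - np.linalg.norm(zT)) < 1e-12)   # the group preserves norms
```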


A main goal in this chapter is to extend Theorem 7.1 to the stochastic setting. For this purpose, suppose that $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ (with $\mathbb F=\{\mathcal F_t\}_{t\in[0,T]}$) is a fixed filtered probability space satisfying the usual condition. Let $\{W(t)\}_{t\in[0,T]}$ be a $V$-valued, $\mathbb F$-adapted, standard $Q$-Brownian motion (with $Q$ being a given positive definite, trace-class operator on $V$) or a cylindrical Brownian motion on $(\Omega,\mathcal F,\mathbb F,\mathbb P)$. Denote by $\mathbf F$ the progressive $\sigma$-field (in $[0,T]\times\Omega$) w.r.t. $\mathbb F$. In the sequel, to simplify the presentation, we only consider the case of a cylindrical Brownian motion, and $\mathbb F$ is the corresponding natural filtration (generated by $W(\cdot)$). Also, in the rest of this chapter, to simplify the notation, we write $\mathcal U_T\stackrel{\Delta}{=}L^2_{\mathbb F}(0,T;U)$.

We consider the following forward stochastic control system evolving in $H$:

$$\begin{cases} dy(t) = \big(Ay(t)+F(t)y(t)+f(t)+Bu(t)\big)\,dt + \big(G(t)y(t)+Du(t)\big)\,dW(t) & \text{in } (0,T],\\ y(0)=y_0, \end{cases} \tag{7.6}$$

where $F\in L^\infty_{\mathbb F}(0,T;\mathcal L(H))$, $f\in L^2_{\mathbb F}(\Omega;L^1(0,T;H))$, $G\in L^\infty_{\mathbb F}(0,T;\mathcal L(H;L_2^0))$ and $D\in\mathcal L(U;L_2^0)$. In (7.6), $y_0\ (\in H)$ is the initial state, $y$ is the state variable, and $u\ (\in\mathcal U_T)$ is the control variable. By Theorem 3.14, the system (7.6) admits a unique mild solution $y(\cdot)\in C_{\mathbb F}([0,T];L^2(\Omega;H))$. Some notions of controllability of (7.6) are given as follows:

Definition 7.3. 1) The system (7.6) is called exactly controllable at time $T$ if for any $y_0\in H$ and $y_T\in L^2_{\mathcal F_T}(\Omega;H)$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.6) satisfies $y(T)=y_T$, a.s.

2) The system (7.6) is called null controllable at time $T$ if for any $y_0\in H$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.6) satisfies $y(T)=0$, a.s.

3) The system (7.6) is called approximately controllable at time $T$ if for any $y_0\in H$, $y_T\in L^2_{\mathcal F_T}(\Omega;H)$ and $\varepsilon>0$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.6) satisfies $|y(T)-y_T|_{L^2_{\mathcal F_T}(\Omega;H)}\le\varepsilon$.

In view of conclusion 1) in Proposition 6.3, and also motivated by the controlled backward stochastic differential equation (6.13), we introduce the following controlled backward stochastic evolution equation:

$$\begin{cases} dy(t) = -A^*y(t)\,dt + \big(J(t)y(t)+K(t)Y(t)+g(t)+Bu(t)\big)\,dt + Y(t)\,dW(t) & \text{in } [0,T),\\ y(T)=y_T, \end{cases} \tag{7.7}$$

where $J\in L^\infty_{\mathbb F}(0,T;\mathcal L(H))$, $K\in L^\infty_{\mathbb F}(0,T;\mathcal L(L_2^0;H))$ and $g\in L^2_{\mathbb F}(\Omega;L^1(0,T;H))$. In (7.7), $y_T\ (\in L^2_{\mathcal F_T}(\Omega;H))$ is the final state, and $y$ and $u\ (\in\mathcal U_T)$ are viewed as the state and control variables, respectively. By Theorem 4.10, the system (7.7) admits a unique mild solution $(y(\cdot),Y(\cdot))\in L^2_{\mathbb F}(\Omega;C([0,T];H))\times L^2_{\mathbb F}(0,T;L_2^0)$.


Similarly to Definition 7.3, some notions of controllability of (7.7) are introduced as follows:

Definition 7.4. 1) The system (7.7) is called exactly controllable at time $0$ if for any $y_T\in L^2_{\mathcal F_T}(\Omega;H)$ and $y_0\in H$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.7) satisfies $y(0)=y_0$.

2) The system (7.7) is called null controllable at time $0$ if for any $y_T\in L^2_{\mathcal F_T}(\Omega;H)$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.7) satisfies $y(0)=0$.

3) The system (7.7) is called approximately controllable at time $0$ if for any $y_T\in L^2_{\mathcal F_T}(\Omega;H)$, $y_0\in H$ and $\varepsilon>0$, there exists $u\in\mathcal U_T$ such that the corresponding mild solution to (7.7) satisfies $|y(0)-y_0|_H\le\varepsilon$.

As in the deterministic setting, for control problems of stochastic partial differential equations there are many cases in which the control operators are unbounded, say the Dirichlet boundary controls for stochastic wave/heat/Schrödinger equations. For such problems, one cannot obtain the existence of $H$-valued mild solutions to (7.6) or (7.7). Nevertheless, as we shall see in the next section, when the control operators enjoy some further property, one can establish the well-posedness of the corresponding control systems in the sense of transposition solutions.

In the rest of this chapter, $O$ is another Hilbert space which is dense in $H$, and we denote by $O'$ its dual space w.r.t. the pivot space $H$. Hence, $O\subset H\equiv H'\subset O'$. Let $A$ generate also a $C_0$-semigroup $\{S(t)\}_{t\ge 0}$ on $O$ and on $O'$, respectively. Denote by $H_{-1}$ the completion of $H$ w.r.t. the norm

$$|x|_{H_{-1}}\stackrel{\Delta}{=}|(\beta I-A)^{-1}x|_H,$$

where $\beta$ is a fixed real number which belongs to the resolvent set of $A$. We fix two unbounded operators $B\in\mathcal L(U;H_{-1})$ and $D\in\mathcal L(U;L_2(V;H_{-1}))$.

Similarly to (7.6), we consider the following forward stochastic control system evolving in the state space $O'$:

$$\begin{cases} dy(t) = \big(Ay(t)+F(t)y(t)+f(t)+Bu(t)\big)\,dt + \big(G(t)y(t)+Du(t)\big)\,dW(t) & \text{in } (0,T],\\ y(0)=y_0, \end{cases} \tag{7.8}$$

where $F\in L^\infty_{\mathbb F}(0,T;\mathcal L(O'))$, $f\in L^2_{\mathbb F}(\Omega;L^1(0,T;O'))$ and $G\in L^\infty_{\mathbb F}(0,T;\mathcal L(O';L_2(V;O')))$. In (7.8), $y_0\in O'$ is the initial state, $y$ is the state variable, while $u\ (\in\mathcal U_T)$ is the control variable.

Also, similarly to (7.7), we consider the following backward stochastic control system evolving in the state space $O'$:

$$\begin{cases} dy(t) = -A^*y(t)\,dt + \big(\mathcal J(t)y(t)+\mathcal K(t)Y(t)+g(t)+Bu(t)\big)\,dt + Y(t)\,dW(t) & \text{in } [0,T),\\ y(T)=y_T, \end{cases} \tag{7.9}$$


where $\mathcal J\in L^\infty_{\mathbb F}(0,T;\mathcal L(O'))$, $\mathcal K\in L^\infty_{\mathbb F}(0,T;\mathcal L(L_2(V;O');O'))$ and $g\in L^1_{\mathbb F}(0,T;L^2(\Omega;O'))$. In (7.9), $y_T\ (\in L^2_{\mathcal F_T}(\Omega;O'))$ is the final state, and $y$ and $u\ (\in\mathcal U_T)$ are viewed as the state and control variables, respectively.

Clearly, unlike in (7.6) and (7.7), the control operator $B$ in both (7.8) and (7.9), and the control operator $D$ in (7.8), are allowed to be unbounded. As we shall see in the next section, under some assumptions, the systems (7.8) and (7.9) are well-posed in the sense of transposition solutions, in the solution spaces $C_{\mathbb F}([0,T];L^2(\Omega;O'))$ and $C_{\mathbb F}([0,T];L^2(\Omega;O'))\times L^2_{\mathbb F}(0,T;L_2(V;O'))$, respectively. The notions of controllability for the systems (7.8) and (7.9) can be defined very similarly to those for (7.6) and (7.7) (the only two differences being that the Hilbert space $H$ and the mild solutions used in Definitions 7.3 and 7.4 are replaced by the Hilbert space $O'$ and transposition solutions, respectively), and therefore we omit the details.

Remark 7.5. 1) In the system (7.6) (and also (7.8)), the control $u$ enters both the drift and the diffusion terms. This is natural in the stochastic setting. Indeed, as we shall see in Chapters 8–11 (see also the last chapter), for many stochastic controllability problems one has to introduce controls in this way. On the other hand, even if in some cases it is enough to have only one control in the drift (or diffusion) term, this control may affect the diffusion (or drift) term one way or another.

2) In view of Definition 6.20, it is more natural to replace the control class $\mathcal U_T$ employed in the definition of controllability for the system (7.6) (resp. (7.8)) by the set $\big\{u\in L^1_{\mathbb F}(0,T;U)\ \big|\ Du\in L^{2,loc}_{\mathbb F}(0,T;L_2^0)\big\}$ (resp. $\big\{u\in L^1_{\mathbb F}(0,T;U)\ \big|\ Du\in L^{2,loc}_{\mathbb F}(0,T;L_2(V;O'))\big\}$). Nevertheless, so far it seems difficult to obtain useful results using such a control class.

3) The control operator $D$ in (7.8) is also allowed to be unbounded. Compared to deterministic problems, this is completely new in the stochastic setting, though the analysis for concrete problems remains to be done and there might exist some essential difficulties.

In the sequel, the controllability of the forward (resp. backward) stochastic evolution equations (7.6) and (7.8) (resp. (7.7) and (7.9)) is simply referred to as forward (resp. backward) controllability. In the deterministic setting, it is easy to see that any backward controllability problem can be reduced to a forward one by simply reversing the time variable. However, such a technique does not work anymore for stochastic problems because of the adaptedness requirement on stochastic processes, and therefore one has to analyze the forward and the backward controllability separately. In the rest of this section, we shall present more new phenomena for controllability problems in the stochastic setting.
For simplicity, we only consider the case of bounded control operators and the special case $V=\mathbb R$, i.e., $W(\cdot)$ is a one-dimensional standard Brownian motion. First, as a simple generalization of conclusion 1) in Proposition 6.3, we have the following necessary condition for the exact controllability of (7.6).


Proposition 7.6. If $f(\cdot)\in L^2_{\mathbb F}(0,T;H)$, $G(\cdot)\in C_{\mathbb F}([0,T];L^\infty(\Omega;\mathcal L(H)))$ and the system (7.6) is exactly controllable at time $T$, then

$$D(A^*)\cap R(D)^{\perp} = \{0\},$$

where $R(D)^{\perp}$ denotes the orthogonal complement in $H$ of the range of $D$.

Proof: We use a contradiction argument. Assume that $D(A^*)\cap R(D)^{\perp}\ne\{0\}$. Then, we could find $\varphi\in D(A^*)$ with $|\varphi|_H=1$ such that $\langle Dv,\varphi\rangle_H=0$ for any $v\in U$. Hence, for any $u\in\mathcal U_T$, by (7.6),

$$\langle y(T),\varphi\rangle_H - \langle y_0,\varphi\rangle_H = \int_0^T\big(\langle y(t),A^*\varphi\rangle_H + \langle F(t)y(t)+f(t)+Bu(t),\varphi\rangle_H\big)\,dt + \int_0^T\langle G(t)y(t),\varphi\rangle_H\,dW(t). \tag{7.10}$$

Let us choose $y_0=0$ and $y_T=\Big(\int_0^T\eta(t)\,dW(t)\Big)\varphi$, where $\eta(\cdot)$ is given by (6.2). Then, by the exact controllability of the system (7.6) at time $T$, it follows from (7.10) that, for some $u\in\mathcal U_T$,

$$\int_0^T\eta(t)\,dW(t) = \int_0^T\big(\langle y(t),A^*\varphi\rangle_H + \langle F(t)y(t)+f(t)+Bu(t),\varphi\rangle_H\big)\,dt + \int_0^T\langle G(t)y(t),\varphi\rangle_H\,dW(t). \tag{7.11}$$

Since $y(\cdot)\in C_{\mathbb F}([0,T];L^2(\Omega;H))$ and $u(\cdot)\in\mathcal U_T$, we see that $\langle y(\cdot),A^*\varphi\rangle_H + \langle F(\cdot)y(\cdot)+f(\cdot)+Bu(\cdot),\varphi\rangle_H\in L^2_{\mathbb F}(0,T)$ and $\langle G(\cdot)y(\cdot),\varphi\rangle_H\in C_{\mathbb F}([0,T];L^2(\Omega))$. Hence, it is clear that (7.11) contradicts Proposition 6.2.

Remark 7.7. By Proposition 7.6 (see also conclusion 1) in Proposition 6.3), we see that, in order for the system (7.6) to be exactly controllable at time $T$, one has to choose the control operator $D$ in the diffusion term of (7.6) to be sufficiently "effective" (or even active everywhere in some sense). This is because, in the stochastic setting, the "target" set (i.e., $L^2_{\mathcal F_T}(\Omega;H)$) is too large compared with the set of initial states (i.e., $H$).

Next, we consider the following control system with deterministic coefficients:

$$\begin{cases} dy(t) = \big(Ay(t)+F(t)y(t)+f(t)+Bu(t)\big)\,dt + G(t)y(t)\,dW(t) & \text{in } (0,T],\\ y(0)=y_0, \end{cases} \tag{7.12}$$

and its deterministic counterpart:

$$\begin{cases} \dfrac{d\hat y(t)}{dt} = A\hat y(t) + F(t)\hat y(t) + f(t) + B\hat u(t) & \text{in } (0,T],\\ \hat y(0)=y_0, \end{cases} \tag{7.13}$$

where $y_0\in H$, $F\in L^\infty(0,T;\mathcal L(H))$, $f\in L^2(0,T;H)$ and $G\in L^\infty_{\mathbb F}(0,T)$ (clearly, any $G\in L^\infty_{\mathbb F}(0,T)$ can be viewed as an element of $L^\infty_{\mathbb F}(0,T;\mathcal L(H))$). In (7.12) (resp. (7.13)), $y$ (resp. $\hat y$) is the state variable and $u\in\mathcal U_T$ (resp. $\hat u\in L^2(0,T;U)$) is the control variable. We have the following simple result:

Proposition 7.8. The system (7.12) is null controllable at time $T$ if and only if so is the system (7.13).

Proof: The "only if" part. For arbitrary $y_0\in H$, let $u(\cdot)\in\mathcal U_T$ be a control which transfers the state $y(\cdot)$ of (7.12) from $y_0$ to zero at time $T$. Then, clearly, $\mathbb Ey$ is the solution to (7.13) with the control $\mathbb Eu$. Since $y(T)=0$, a.s., we see that $\mathbb Ey(T)=0$, which implies that (7.13) is null controllable.

The "if" part. For arbitrary $y_0\in H$, let $\hat u(\cdot)\in L^2(0,T;U)$ be a control which transfers the state $\hat y(\cdot)$ of (7.13) from $y_0$ to zero at time $T$. Let

$$y(t) = e^{\int_0^t G(s)\,dW(s)-\frac12\int_0^t G(s)^2\,ds}\,\hat y(t).$$

Then, it is easy to see that $y(\cdot)$ is the unique solution to (7.12) with the control $u(\cdot)=e^{\int_0^\cdot G(s)\,dW(s)-\frac12\int_0^\cdot G(s)^2\,ds}\,\hat u(\cdot)$. Since $\hat y(T)=0$, we see that $y(T)=e^{\int_0^T G(s)\,dW(s)-\frac12\int_0^T G(s)^2\,ds}\,\hat y(T)=0$. Further,

$$\mathbb E\int_0^T |u(t)|_U^2\,dt = \int_0^T |\hat u(t)|_U^2\,\mathbb E\,e^{2\int_0^t G(s)\,dW(s)-\int_0^t G(s)^2\,ds}\,dt \le C\int_0^T |\hat u(t)|_U^2\,dt < \infty.$$

This completes the proof of Proposition 7.8.

Remark 7.9. The same technique used to prove Proposition 7.8 cannot be applied to obtain the exact or approximate controllability of the system (7.12). Indeed, let us assume that (7.13) is exactly controllable. Put

$$\mathcal A_T \stackrel{\Delta}{=} \Big\{\,y(T)\ \Big|\ y(\cdot) \text{ solves (7.12) for some } y_0\in H \text{ and } u(\cdot)=e^{\int_0^\cdot G(s)\,dW(s)-\frac12\int_0^\cdot G(s)^2\,ds}\,\hat u(\cdot) \text{ for some } \hat u(\cdot)\in L^2(0,T;U)\Big\}.$$

Then, by the exact controllability of (7.13), it is easy to see that

$$\mathcal A_T = \Big\{\,e^{\int_0^T G(s)\,dW(s)-\frac12\int_0^T G(s)^2\,ds}\,y_1\ \Big|\ y_1\in H\Big\}.$$

Clearly, $\mathcal A_T\ne L^2_{\mathcal F_T}(\Omega;H)$ and $\mathcal A_T$ is not dense in $L^2_{\mathcal F_T}(\Omega;H)$, which means that this class of controls yields neither the exact nor the approximate controllability of the system (7.12). By Remark 7.2, it is clear that this is significantly different from the deterministic setting, at least when $A$ generates a $C_0$-group on $H$.
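The key estimate in the "if" part — $\mathbb E\,e^{2\int_0^t G\,dW-\int_0^t G^2\,ds}=e^{\int_0^t G^2\,ds}\le C$ for bounded $G$ — follows from the exponential-martingale identity. It is easy to confirm by Monte Carlo for a constant illustrative $G\equiv g$ (our choice, not from the text):

```python
import numpy as np

g, T, paths = 0.6, 1.0, 400_000
rng = np.random.default_rng(4)

WT = rng.normal(0.0, np.sqrt(T), paths)
factor_sq = np.exp(2 * g * WT - g**2 * T)   # square of the stochastic factor in u(.)
exact = np.exp(g**2 * T)                    # E e^{2 g W(T) - g^2 T} = e^{g^2 T}

print(abs(factor_sq.mean() / exact - 1) < 0.02)   # the moment bound is finite, = e^{g^2 T}
```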


7 Controllability for Stochastic Linear Evolution Equations

7.2 Well-Posedness of Stochastic Systems With Unbounded Control Operators

In this section, we shall use the transposition method to establish the well-posedness of the stochastic control systems (7.8) and (7.9).

In order to define transposition solutions to (7.8), we introduce the following (backward stochastic) test equation evolved in $O$:
$$ \begin{cases} d\xi(t) = -\big(A^*\xi(t) + F(t)^*\xi(t) + G(t)^*\Xi(t)\big)\,dt + \Xi(t)\,dW(t) & \text{in } [0,\tau),\\ \xi(\tau) = \xi_\tau. \end{cases} \tag{7.14} $$
In (7.14), $\tau \in [0,T]$ and $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$. Recall that we assume $\mathbb F$ is the natural filtration. Hence, by Theorem 4.10, the equation (7.14) admits a unique mild solution $(\xi,\Xi) \in L^2_{\mathbb F}(\Omega;C([0,\tau];O)) \times L^2_{\mathbb F}(0,\tau;L_2(V;O))$ such that
$$ \big|(\xi,\Xi)\big|_{L^2_{\mathbb F}(\Omega;C([0,\tau];O)) \times L^2_{\mathbb F}(0,\tau;L_2(V;O))} \le C(F,G)\,|\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)}. \tag{7.15} $$

We make the following additional assumptions.

Condition 7.1 There exists a sequence $\{u_n\}_{n=1}^{\infty} \subset U_T$ such that $Bu_n \in L^2_{\mathbb F}(0,T;O')$ and $Du_n \in L^2_{\mathbb F}(0,T;L_2(V;O'))$ for each $n \in \mathbb N$, and
$$ \lim_{n\to\infty} u_n = u \quad \text{in } U_T. \tag{7.16} $$

Condition 7.2 There exists a constant $C > 0$ such that for any $\tau \in [0,T]$ and $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$, the solution $(\xi,\Xi)$ to (7.14) satisfies
$$ \big|B^*\xi + D^*\Xi\big|_{L^2_{\mathbb F}(0,\tau;U)} \le C\,|\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)}. \tag{7.17} $$

Remark 7.10. When $D$ is bounded, i.e., $D \in \mathcal L(U;L_2(V;O'))$, the condition $Du_n \in L^2_{\mathbb F}(0,T;L_2(V;O'))$ in Condition 7.1 holds automatically, and the inequality (7.17) in Condition 7.2 is equivalent to the following estimate:
$$ \big|B^*\xi\big|_{L^2_{\mathbb F}(0,\tau;U)} \le C\,|\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)}. \tag{7.18} $$

Under Condition 7.2, the solutions to the system (7.8) are understood in the following sense:

Definition 7.11. A process $y \in C_{\mathbb F}([0,T];L^2(\Omega;O'))$ is called a transposition solution to the system (7.8) if for every $\tau \in (0,T]$ and $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$, it holds that
$$ \mathbb E\langle y(\tau),\xi_\tau\rangle_{O',O} - \langle y_0,\xi(0)\rangle_{O',O} = \mathbb E\int_0^{\tau} \langle u(t), B^*\xi(t) + D^*\Xi(t)\rangle_U\,dt + \mathbb E\int_0^{\tau} \langle f(t),\xi(t)\rangle_{O',O}\,dt, \tag{7.19} $$
where $(\xi,\Xi)$ solves the equation (7.14).


We have the following well-posedness result for the system (7.8).

Theorem 7.12. Let Conditions 7.1 and 7.2 hold. For each $y_0 \in O'$ and $u \in U_T$, the system (7.8) admits one and only one transposition solution $y \in C_{\mathbb F}([0,T];L^2(\Omega;O'))$. Moreover,
$$ |y|_{C_{\mathbb F}([0,T];L^2(\Omega;O'))} \le C\big(|y_0|_{O'} + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;O'))} + |u|_{U_T}\big). \tag{7.20} $$

Proof: Uniqueness of solutions. Suppose that $y(\cdot)$ and $\hat y(\cdot)$, both belonging to $C_{\mathbb F}([0,T];L^2(\Omega;O'))$, satisfy (7.19). Then,
$$ \mathbb E\langle y(\tau),\xi_\tau\rangle_{O',O} = \mathbb E\langle \hat y(\tau),\xi_\tau\rangle_{O',O} \quad \text{for all } \tau \in [0,T] \text{ and } \xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O), $$
which implies that $y(\cdot) = \hat y(\cdot)$.

Existence of solutions. Let $\{u_n\}_{n=1}^{\infty} \subset U_T$ be the sequence given in Condition 7.1. Consider the following equation:
$$ \begin{cases} dy_n(t) = \big(Ay_n(t) + F(t)y_n(t) + f(t) + Bu_n(t)\big)\,dt + \big(G(t)y_n(t) + Du_n(t)\big)\,dW(t) & \text{in } (0,T],\\ y_n(0) = y_0. \end{cases} \tag{7.21} $$
By Theorem 3.14, the equation (7.21) admits a unique mild solution $y_n \in C_{\mathbb F}([0,T];L^2(\Omega;O'))$. By Itô's formula, for each $n \in \mathbb N$, $\tau \in [0,T]$ and $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$, we obtain that
$$ \begin{aligned} &\mathbb E\langle y_n(\tau),\xi_\tau\rangle_{O',O} - \langle y_0,\xi(0)\rangle_{O',O}\\ &= \mathbb E\int_0^{\tau} \langle Bu_n(t),\xi(t)\rangle_{O',O}\,dt + \mathbb E\int_0^{\tau} \langle f(t),\xi(t)\rangle_{O',O}\,dt + \mathbb E\int_0^{\tau} \langle Du_n(t),\Xi(t)\rangle_{L_2(V;O'),L_2(V;O)}\,dt\\ &= \mathbb E\int_0^{\tau} \langle u_n(t), B^*\xi(t) + D^*\Xi(t)\rangle_U\,dt + \mathbb E\int_0^{\tau} \langle f(t),\xi(t)\rangle_{O',O}\,dt, \end{aligned} \tag{7.22} $$
where $(\xi,\Xi)$ solves (7.14). Therefore, for any $n,m \in \mathbb N$, it holds that
$$ \mathbb E\langle y_n(\tau) - y_m(\tau),\xi_\tau\rangle_{O',O} = \mathbb E\int_0^{\tau} \langle u_n(t) - u_m(t), B^*\xi(t) + D^*\Xi(t)\rangle_U\,dt. \tag{7.23} $$
Let us choose $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ such that
$$ |\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)} = 1, \qquad \mathbb E\langle y_n(\tau) - y_m(\tau),\xi_\tau\rangle_{O',O} \ge \frac12\,|y_n(\tau) - y_m(\tau)|_{L^2_{\mathcal F_\tau}(\Omega;O')}. $$
These, together with (7.23) and Condition 7.2, imply that for any $\tau \in (0,T]$,


$$ \frac12\,|y_n(\tau) - y_m(\tau)|_{L^2_{\mathcal F_\tau}(\Omega;O')} \le |u_n - u_m|_{U_T}\,\big|B^*\xi(\cdot) + D^*\Xi(\cdot)\big|_{L^2_{\mathbb F}(0,\tau;U)} \le C\,|\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)}\,|u_n - u_m|_{U_T} \le C\,|u_n - u_m|_{U_T}, $$
where $C$ is independent of $\tau$. Hence,
$$ |y_n - y_m|_{C_{\mathbb F}([0,T];L^2(\Omega;O'))} \le C\,|u_n - u_m|_{U_T}. \tag{7.24} $$
By Condition 7.1, $\{u_n\}_{n=1}^{\infty}$ is a Cauchy sequence in $U_T$. Combining this with (7.24), we conclude that $\{y_n\}_{n=1}^{\infty}$ is a Cauchy sequence in $C_{\mathbb F}([0,T];L^2(\Omega;O'))$. Hence, there exists $y \in C_{\mathbb F}([0,T];L^2(\Omega;O'))$ such that $\lim_{n\to\infty} y_n = y$ in $C_{\mathbb F}([0,T];L^2(\Omega;O'))$. Letting $n \to \infty$ in (7.22) and noting Condition 7.1, we see that for each $\tau \in [0,T]$ and $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$, the equality (7.19) holds. This shows that $y$ is a transposition solution to (7.8).

Choose $\xi_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ such that
$$ |\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)} = 1, \qquad \mathbb E\langle y(\tau),\xi_\tau\rangle_{O',O} \ge \frac12\,|y(\tau)|_{L^2_{\mathcal F_\tau}(\Omega;O')}. $$
These, together with (7.19), imply that for any $\tau \in (0,T]$,
$$ \begin{aligned} \frac12\,|y(\tau)|_{L^2_{\mathcal F_\tau}(\Omega;O')} &\le |y_0|_{O'}|\xi(0)|_O + |u|_{U_T}\big|B^*\xi + D^*\Xi\big|_{L^2_{\mathbb F}(0,\tau;U)} + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;O'))}|\xi|_{L^2_{\mathbb F}(\Omega;C([0,\tau];O))}\\ &\le C\,|\xi_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)}\big(|y_0|_{O'} + |u|_{U_T} + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;O'))}\big)\\ &\le C\big(|y_0|_{O'} + |u|_{U_T} + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;O'))}\big), \end{aligned} $$
where $C$ is independent of $\tau$. Hence, the desired estimate (7.20) holds. This completes the proof of Theorem 7.12.

Similarly, in order to define transposition solutions to (7.9), for any $\tau \in [0,T]$, we introduce the following (forward stochastic) test equation:
$$ \begin{cases} d\eta(t) = \big(A\eta(t) - \mathcal J(t)^*\eta(t)\big)\,dt + \big(-\mathcal K(t)^*\eta(t) + h(t)\big)\,dW(t) & \text{in } (\tau,T],\\ \eta(\tau) = \eta_\tau, \end{cases} \tag{7.25} $$
where $\eta_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ and $h \in L^2_{\mathbb F}(0,T;L_2(V;O))$. By Theorem 3.14, the equation (7.25) admits a unique mild solution $\eta \in C_{\mathbb F}([\tau,T];L^2(\Omega;O))$ such that


$$ |\eta|_{C_{\mathbb F}([\tau,T];L^2(\Omega;O))} \le C\big(|\eta_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)} + |h|_{L^2_{\mathbb F}(0,T;L_2(V;O))}\big). \tag{7.26} $$

We need to introduce the following assumption:

Condition 7.3 There exists a constant $C > 0$ such that for any $\tau \in [0,T]$, $\eta_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ and $h \in L^2_{\mathbb F}(0,T;L_2(V;O))$, the corresponding mild solution $\eta$ to (7.25) satisfies
$$ |B^*\eta|_{L^2_{\mathbb F}(\tau,T;U)} \le C\big(|\eta_\tau|_{L^2_{\mathcal F_\tau}(\Omega;O)} + |h|_{L^2_{\mathbb F}(0,T;L_2(V;O))}\big). \tag{7.27} $$

Under Condition 7.3, the solutions to the system (7.9) are understood in the following sense:

Definition 7.13. A pair of processes $(y,Y) \in C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))$ is called a transposition solution to the system (7.9) if for every $\tau \in [0,T)$, $\eta_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ and $h \in L^2_{\mathbb F}(0,T;L_2(V;O))$, it holds that
$$ \begin{aligned} &\mathbb E\langle y_T,\eta(T)\rangle_{O',O} - \mathbb E\langle y(\tau),\eta_\tau\rangle_{O',O}\\ &= \mathbb E\int_\tau^T \langle u(s),B^*\eta(s)\rangle_U\,ds + \mathbb E\int_\tau^T \langle g(s),\eta(s)\rangle_{O',O}\,ds + \mathbb E\int_\tau^T \langle Y(s),h(s)\rangle_{L_2(V;O'),L_2(V;O)}\,ds, \end{aligned} \tag{7.28} $$
where $\eta(\cdot)$ solves (7.25).

Similarly to Theorem 7.12, we have the following well-posedness result for the system (7.9).

Theorem 7.14. Under Conditions 7.1 (with $D = 0$) and 7.3, for each $y_T \in L^2_{\mathcal F_T}(\Omega;O')$ and $u \in U_T$, the system (7.9) admits one and only one transposition solution $(y,Y) \in C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))$. Moreover,
$$ |y|_{C_{\mathbb F}([0,T];L^2(\Omega;O'))} + |Y|_{L^2_{\mathbb F}(0,T;L_2(V;O'))} \le C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;O')} + |g|_{L^1_{\mathbb F}(0,T;L^2(\Omega;O'))} + |u|_{U_T}\big). \tag{7.29} $$

Proof: The proof is very similar to that of Theorem 7.12; hence we only sketch the existence part. Let $\{u_n\}_{n=1}^{\infty} \subset U_T$ be the sequence given in Condition 7.1 (with $D = 0$). Consider the following equation:
$$ \begin{cases} dy_n(t) = \big(-A^*y_n(t) + \mathcal J(t)y_n(t) + \mathcal K(t)Y_n(t) + g(t) + Bu_n(t)\big)\,dt + Y_n(t)\,dW(t) & \text{in } [0,T),\\ y_n(T) = y_T. \end{cases} \tag{7.30} $$


Since $\mathbb F$ is assumed to be the natural filtration, by Theorem 4.10, the system (7.30) admits a unique mild solution $(y_n(\cdot),Y_n(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))$. By Itô's formula, for each $n \in \mathbb N$, $\tau \in [0,T)$, $\eta_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ and $h \in L^2_{\mathbb F}(0,T;L_2(V;O))$, we obtain that
$$ \begin{aligned} &\mathbb E\langle y_T,\eta(T)\rangle_{O',O} - \mathbb E\langle y_n(\tau),\eta_\tau\rangle_{O',O}\\ &= \mathbb E\int_\tau^T \langle u_n(s),B^*\eta(s)\rangle_U\,ds + \mathbb E\int_\tau^T \langle g(s),\eta(s)\rangle_{O',O}\,ds + \mathbb E\int_\tau^T \langle Y_n(s),h(s)\rangle_{L_2(V;O'),L_2(V;O)}\,ds, \end{aligned} \tag{7.31} $$
where $\eta(\cdot)$ solves (7.25). Therefore, for any $n,m \in \mathbb N$, it holds that
$$ \mathbb E\langle y_n(\tau) - y_m(\tau),\eta_\tau\rangle_{O',O} + \mathbb E\int_\tau^T \langle Y_n(s) - Y_m(s),h(s)\rangle_{L_2(V;O'),L_2(V;O)}\,ds = \mathbb E\int_\tau^T \langle u_m(s) - u_n(s),B^*\eta(s)\rangle_U\,ds. \tag{7.32} $$
By the arbitrariness of $\tau$, $\eta_\tau$ and $h$ in (7.32), similarly to (7.24) and using Condition 7.3, we can show that
$$ \big|(y_n - y_m,\, Y_n - Y_m)\big|_{C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))} \le C\,|u_n - u_m|_{U_T}. \tag{7.33} $$
By Condition 7.1 (with $D = 0$), $\{u_n\}_{n=1}^{\infty}$ is a Cauchy sequence in $U_T$. Combining this with (7.33), we conclude that $\{(y_n,Y_n)\}_{n=1}^{\infty}$ is a Cauchy sequence in $C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))$. Hence, there exists $(y,Y)$ in this space such that
$$ \lim_{n\to\infty}(y_n,Y_n) = (y,Y) \quad \text{in } C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O')). $$
Letting $n \to \infty$ in (7.31) and noting Condition 7.1 (with $D = 0$), we see that for each $\tau \in [0,T)$, $\eta_\tau \in L^2_{\mathcal F_\tau}(\Omega;O)$ and $h \in L^2_{\mathbb F}(0,T;L_2(V;O))$, the equality (7.28) holds. This shows that $(y,Y)$ is a transposition solution to (7.9). Finally, proceeding as in the proof of Theorem 7.12, we can show that the desired estimate (7.29) holds. This completes the proof of Theorem 7.14.

Remark 7.15. In view of Theorem 4.24, it seems that the regularity of the first component of the transposition solution $(y,Y) \in C_{\mathbb F}([0,T];L^2(\Omega;O')) \times L^2_{\mathbb F}(0,T;L_2(V;O'))$ to the system (7.9) could be improved to $y \in L^2_{\mathbb F}(\Omega;C([0,T];O'))$, but this remains to be proved.
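The defining identity (7.19) is a stochastic version of a classical integration-by-parts relation, whose deterministic finite-dimensional analogue is easy to test numerically. The sketch below (illustrative matrices and data, with $D = 0$ and real coefficients) integrates $y' = Ay + Bu + f$ forward and the test equation $\xi' = -A^{\top}\xi$ backward, then checks $\langle y(\tau),\xi_\tau\rangle - \langle y_0,\xi(0)\rangle = \int_0^\tau \big(\langle u, B^{\top}\xi\rangle + \langle f,\xi\rangle\big)\,dt$.

```python
import numpy as np

# Finite-dimensional deterministic analogue (illustrative data) of the
# transposition identity (7.19) with D = 0:
#   <y(tau), xi_tau> - <y0, xi(0)> = int_0^tau ( <u, B^T xi> + <f, xi> ) dt,
# where y' = A y + B u + f and xi' = -A^T xi with xi(tau) = xi_tau.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
B = np.array([[0.0], [1.0]])
y0 = np.array([1.0, 0.0])
xi_tau = np.array([0.7, -1.2])
tau, n = 1.0, 2000
ts = np.linspace(0.0, tau, n + 1)
dt = tau / n

u = lambda t: np.array([np.sin(3.0 * t)])   # scalar control
f = lambda t: np.array([np.cos(t), 1.0])    # forcing term

def rk4(rhs, x, t, h):
    """One classical Runge-Kutta step for x' = rhs(t, x)."""
    k1 = rhs(t, x); k2 = rhs(t + h/2, x + h/2*k1)
    k3 = rhs(t + h/2, x + h/2*k2); k4 = rhs(t + h, x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# forward state y on the grid
y = y0.copy(); ys = [y.copy()]
for t in ts[:-1]:
    y = rk4(lambda t, x: A @ x + (B @ u(t)) + f(t), y, t, dt)
    ys.append(y.copy())
ys = np.array(ys)

# adjoint state xi via time reversal: zeta(s) = xi(tau - s) solves zeta' = A^T zeta
zeta = xi_tau.copy(); xis = [zeta.copy()]
for s in ts[:-1]:
    zeta = rk4(lambda s, x: A.T @ x, zeta, s, dt)
    xis.append(zeta.copy())
xis = np.array(xis[::-1])                    # xi at t = ts[i], increasing t

lhs = ys[-1] @ xi_tau - y0 @ xis[0]
integrand = np.array([u(t) @ (B.T @ xis[i]) + f(t) @ xis[i]
                      for i, t in enumerate(ts)])
rhs = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-4
```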


7.3 Reduction to the Observability of Dual Problems

In this section, we shall reduce the controllability problems presented in Section 7.1 to the observability of their dual problems.

To begin with, let us introduce respectively the following dual equations of (7.6) and (7.7):
$$ \begin{cases} dz(t) = -\big(A^*z(t) + F(t)^*z(t) + G(t)^*Z(t)\big)\,dt + Z(t)\,dW(t) & \text{in } [0,T),\\ z(T) = z_T \end{cases} \tag{7.34} $$
and
$$ \begin{cases} dx(t) = \big(Ax(t) - J(t)^*x(t)\big)\,dt - K(t)^*x(t)\,dW(t) & \text{in } (0,T],\\ x(0) = x_0, \end{cases} \tag{7.35} $$
where $z_T \in L^2_{\mathcal F_T}(\Omega;H)$ and $x_0 \in H$. By Theorem 4.10, the backward stochastic evolution equation (7.34) admits a unique mild solution $(z,Z) \in L^2_{\mathbb F}(\Omega;C([0,T];H)) \times L^2_{\mathbb F}(0,T;L_2^0)$; while by Theorem 3.14, the stochastic evolution equation (7.35) admits a unique mild solution $x(\cdot) \in C_{\mathbb F}([0,T];L^2(\Omega;H))$.

For the exact controllability of (7.6), we have the following result.

Theorem 7.16. The system (7.6) is exactly controllable at time $T$ if and only if (7.34) is continuously finally observable on $[0,T]$, in the sense that all of its solutions satisfy the following (final time) observability estimate:
$$ |z_T|_{L^2_{\mathcal F_T}(\Omega;H)} \le C\,|B^*z + D^*Z|_{U_T}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \tag{7.36} $$

Proof: Clearly, it suffices to consider the special case where the initial datum $y_0$ and the nonhomogeneous term $f$ in the system (7.6) both vanish. Similarly to the proof of "1) $\Longleftrightarrow$ 2)" in Theorem 6.8, we define an operator $L : U_T \to L^2_{\mathcal F_T}(\Omega;H)$ by
$$ L(u(\cdot)) = y(T), \qquad \forall\, u(\cdot) \in U_T, \tag{7.37} $$
where $y(\cdot)$ is the corresponding solution to (7.6) with $y_0 = 0$ and $f = 0$. Then, applying Itô's formula to (7.6) (with $y_0 = 0$ and $f = 0$) and (7.34), we obtain that
$$ \langle y(T), z_T\rangle_{L^2_{\mathcal F_T}(\Omega;H)} = \mathbb E\int_0^T \langle u(t), B^*z(t) + D^*Z(t)\rangle_U\,dt, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H), \tag{7.38} $$
where $(z(\cdot),Z(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];H)) \times L^2_{\mathbb F}(0,T;L_2^0)$ solves (7.34). By (7.37)–(7.38), it is clear that
$$ (L^*z_T)(t) = B^*z(t) + D^*Z(t), \qquad \text{a.e. } t \in [0,T]. \tag{7.39} $$


Now, the exact controllability of (7.6) is equivalent to $R(L) = L^2_{\mathcal F_T}(\Omega;H)$; the latter, by (7.39) and Theorem 1.7, is equivalent to the estimate (7.36).

Next, for the null controllability of (7.6), we have the following result.

Theorem 7.17. The system (7.6) with $f = 0$ is null controllable at time $T$ if and only if (7.34) is continuously initially observable on $[0,T]$, in the sense that all of its solutions satisfy the following (initial time) observability estimate:
$$ |z(0)|_H \le C\,|B^*z + D^*Z|_{U_T}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \tag{7.40} $$

Proof: Similarly to the proof of Theorem 7.16, we define an operator $L : U_T \to L^2_{\mathcal F_T}(\Omega;H)$ by (7.37), and an operator $M : H \to L^2_{\mathcal F_T}(\Omega;H)$ by
$$ M y_0 = -y(T), \qquad \forall\, y_0 \in H, \tag{7.41} $$

where $y(\cdot)$ is the corresponding solution to (7.6) with $u(\cdot) = 0$ and $f = 0$. Then, $L^*$ can be expressed as in (7.39). On the other hand, applying Itô's formula to (7.6) (with $u(\cdot) = 0$ and $f = 0$) and (7.34), we obtain that
$$ \langle y(T), z_T\rangle_{L^2_{\mathcal F_T}(\Omega;H)} = \langle y_0, z(0)\rangle_H, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H), \tag{7.42} $$

where $(z(\cdot),Z(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];H)) \times L^2_{\mathbb F}(0,T;L_2^0)$ solves (7.34). By (7.41)–(7.42), it is clear that
$$ M^*z_T = -z(0). \tag{7.43} $$
Now, the null controllability of (7.6) is equivalent to $R(M) \subset R(L)$; the latter, by (7.43) and (7.39), and in view of Theorem 1.7, is equivalent to the estimate (7.40).

Remark 7.18. Compared with Theorem 7.16, we assume that $f = 0$ in Theorem 7.17. This condition is necessary: Theorem 7.17 may fail if $f \neq 0$. Such a phenomenon has already been found in the study of null controllability of deterministic heat equations.

Further, for the approximate controllability of (7.6), we have the following result.

Theorem 7.19. The system (7.6) is approximately controllable at time $T$ if and only if solutions to (7.34) satisfy the following observability property:
$$ B^*z + D^*Z \equiv 0 \text{ in } (0,T), \text{ a.s.} \;\Longrightarrow\; z_T = 0. \tag{7.44} $$

Proof: Recall the operator $L : U_T \to L^2_{\mathcal F_T}(\Omega;H)$ defined by (7.37) (in the proof of Theorem 7.16). It is easy to show that the system (7.6) is approximately controllable at time $T$ if and only if $\overline{R(L)} = L^2_{\mathcal F_T}(\Omega;H)$. Note that $L^*$ can be expressed as in (7.39). Hence, by Theorem 1.12, $\overline{R(L)} = L^2_{\mathcal F_T}(\Omega;H)$ if and only if solutions to (7.34) satisfy the property (7.44).


In the deterministic setting, by Theorem 7.1, if the control operator $B$ is invertible, then it is easy to obtain controllability results (especially the null and approximate controllability of (7.1)). The same holds in the stochastic framework for this special case. Indeed, we have the following result.

Theorem 7.20. If $B \in \mathcal L(U;H)$ is invertible, $D = 0$ and $f = 0$, then the system (7.6) is both null controllable and approximately controllable at any time $T > 0$.

Proof: By Theorem 7.17, to prove that (7.6) (with $D = 0$ and $f = 0$) is null controllable, it suffices to show that solutions to (7.34) satisfy
$$ |z(0)|_H^2 \le C\,\mathbb E\int_0^T |B^*z(t)|_U^2\,dt, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \tag{7.45} $$
By Theorem 4.39, there is a constant $C > 0$ such that
$$ |z(0)|_H^2 \le C\,\mathbb E|z(t)|_H^2, \qquad \forall\, t \in [0,T]. \tag{7.46} $$
Since $B$ is invertible, we have
$$ |y|_H \le C\,|B^*y|_U, \qquad \forall\, y \in H. $$
This, together with (7.46), implies the inequality (7.45).

By Theorem 7.19, in order to prove that (7.6) (with $D = 0$ and $f = 0$) is approximately controllable, it suffices to show that solutions to (7.34) satisfy
$$ B^*z \equiv 0 \text{ in } (0,T), \text{ a.s.} \;\Longrightarrow\; z_T = 0. \tag{7.47} $$
Since $B$ is invertible, $B^*z \equiv 0$ yields that $z(\cdot) = 0$ in $L^2_{\mathbb F}(0,T;H)$. Noting that $z(\cdot) \in C_{\mathbb F}([0,T];L^2(\Omega;H))$, we find that $z_T = 0$.

By Proposition 7.6, it is easy to see that, generally speaking, the same result as in Theorem 7.20 does NOT hold for the exact controllability of (7.6). This is because the controls in (7.6) are chosen to be $L^2$-in-time. The situation is completely different if we choose $L^1$-in-time controls. More precisely, similarly to Definition 6.12, we introduce the following notion:

Definition 7.21. The system (7.6) with $D = 0$ is called exactly controllable at time $T$ by means of controls in the space $L^1_{\mathbb F}(0,T;L^2(\Omega;U))$ if for any $y_0 \in H$ and $y_1 \in L^2_{\mathcal F_T}(\Omega;H)$, there exists a control $u(\cdot) \in L^1_{\mathbb F}(0,T;L^2(\Omega;U))$ such that the corresponding solution $y$ to (7.6) with $D = 0$ satisfies $y(T) = y_1$, a.s.

Proceeding as in the proof of the "if" part of Theorem 6.15, one can show the following result (hence we do not prove it here).

Theorem 7.22. If $B \in \mathcal L(U;H)$ is invertible and $D = 0$, then the system (7.6) is exactly controllable at time $T$ by means of controls in the space $L^1_{\mathbb F}(0,T;L^2(\Omega;U))$.
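In finite dimensions, the equivalences of this section reduce to a statement about the controllability Gramian: for the deterministic system $x' = Ax + Bu$, the adjoint flow $z(t) = e^{A^{\top}(T-t)}z_T$ satisfies $\int_0^T |B^{\top}z(t)|^2\,dt = z_T^{\top}W_T z_T$ with $W_T = \int_0^T e^{As}BB^{\top}e^{A^{\top}s}\,ds$, so the observability estimate of Theorem 7.16 holds with best constant $1/\lambda_{\min}(W_T)$. A hedged numerical sketch (all matrices illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Deterministic finite-dimensional analogue of the duality behind Theorems
# 7.16 and 7.20: positive-definiteness of the controllability Gramian
#   W_T = int_0^T e^{As} B B^T e^{A^T s} ds
# is equivalent to the observability estimate |z_T|^2 <= C int |B^T z|^2 dt,
# with best constant C = 1 / lambda_min(W_T).
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
T, n = 2.0, 4000
ss = np.linspace(0.0, T, n + 1)
h = T / n

kern = np.stack([(expm(A * s) @ B)[:, 0] for s in ss])     # e^{As}B, shape (n+1, 2)
vals = kern[:, :, None] * kern[:, None, :]                 # e^{As}BB^T e^{A^T s}
W = h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))    # trapezoidal Gramian

lam_min = np.linalg.eigvalsh(W).min()
assert lam_min > 1e-6        # positive definite, so the pair (A, B) is controllable

# adjoint output: B^T z(t) = B^T e^{A^T(T-t)} z_T = (e^{A(T-t)} B)^T z_T
zT = np.array([0.3, -1.1])
obs = (kern[::-1] @ zT) ** 2
int_obs = h * (obs.sum() - 0.5 * (obs[0] + obs[-1]))       # int_0^T |B^T z|^2 dt

print(lam_min, int_obs)
assert abs(int_obs - zT @ W @ zT) < 1e-9                   # duality identity
assert zT @ zT <= (1.0 / lam_min) * int_obs + 1e-9         # observability estimate
```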


Similarly to Theorems 7.16, 7.17 and 7.19, we can prove the following controllability results for the system (7.7):

Theorem 7.23. 1) The system (7.7) is exactly controllable at time 0 if and only if (7.35) is continuously initially observable on $[0,T]$, in the sense that all of its solutions satisfy the following (initial time) observability estimate:
$$ |x_0|_H \le C\,|B^*x|_{U_T}, \qquad \forall\, x_0 \in H. \tag{7.48} $$
2) The system (7.7) with $g = 0$ is null controllable at time 0 if and only if (7.35) is continuously finally observable on $[0,T]$, in the sense that all of its solutions satisfy the following (final time) observability estimate:
$$ |x(T)|_{L^2_{\mathcal F_T}(\Omega;H)} \le C\,|B^*x|_{U_T}, \qquad \forall\, x_0 \in H. \tag{7.49} $$
3) The system (7.7) is approximately controllable at time 0 if and only if solutions to (7.35) satisfy the following observability property:
$$ B^*x \equiv 0 \text{ in } (0,T), \text{ a.s.} \;\Longrightarrow\; x_0 = 0. \tag{7.50} $$

Now, let us introduce respectively the following dual equations of (7.8) and (7.9):
$$ \begin{cases} dz(t) = -\big(A^*z(t) + F(t)^*z(t) + G(t)^*Z(t)\big)\,dt + Z(t)\,dW(t) & \text{in } [0,T),\\ z(T) = z_T \end{cases} \tag{7.51} $$
and
$$ \begin{cases} dx(t) = \big(Ax(t) - \mathcal J(t)^*x(t)\big)\,dt - \mathcal K(t)^*x(t)\,dW(t) & \text{in } (0,T],\\ x(0) = x_0, \end{cases} \tag{7.52} $$
where $z_T \in L^2_{\mathcal F_T}(\Omega;O)$ and $x_0 \in O$.

Similarly to Theorems 7.16, 7.17 and 7.19, we have the following controllability results for the system (7.8):

Theorem 7.24. Under Conditions 7.1 and 7.2, the following assertions hold:

1) The system (7.8) is exactly controllable at time $T$ if and only if (7.51) is continuously finally observable on $[0,T]$, in the sense that all of its solutions satisfy the following (final time) observability estimate:
$$ |z_T|_{L^2_{\mathcal F_T}(\Omega;O)} \le C\,|B^*z + D^*Z|_{U_T}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;O). \tag{7.53} $$
2) The system (7.8) with $f = 0$ is null controllable at time $T$ if and only if (7.51) is continuously initially observable on $[0,T]$, in the sense that all of its solutions satisfy the following (initial time) observability estimate:
$$ |z(0)|_O \le C\,|B^*z + D^*Z|_{U_T}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;O). \tag{7.54} $$


3) The system (7.8) is approximately controllable at time $T$ if and only if solutions to (7.51) satisfy the following observability property:
$$ B^*z + D^*Z \equiv 0 \text{ in } (0,T), \text{ a.s.} \;\Longrightarrow\; z_T = 0. \tag{7.55} $$

Proof: The proof of Theorem 7.24 is very similar to that of Theorems 7.16–7.19, and it is even simpler: the only difference is that the equality (7.19) is used instead of Itô's formula to derive the duality relation. Hence, we only prove assertion 1) below.

Similarly to the proof of Theorem 7.16, we define an operator $L : U_T \to L^2_{\mathcal F_T}(\Omega;O')$ by
$$ L(u(\cdot)) = y(T), \qquad \forall\, u(\cdot) \in U_T, \tag{7.56} $$

where $y(\cdot)$ is the corresponding transposition solution to (7.8) with $y_0 = 0$ and $f = 0$. Then, by (7.19) (with $\tau = T$) in Definition 7.11, using (7.51) as a test equation for the transposition solution to (7.8) with $y_0 = 0$ and $f = 0$, we obtain that
$$ \mathbb E\langle y(T), z_T\rangle_{O',O} = \mathbb E\int_0^T \langle u(t), B^*z(t) + D^*Z(t)\rangle_U\,dt, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;O), \tag{7.57} $$
where $(z(\cdot),Z(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];O)) \times L^2_{\mathbb F}(0,T;L_2(V;O))$ solves (7.51). By (7.56)–(7.57), it is clear that
$$ (L^*z_T)(t) = B^*z(t) + D^*Z(t), \qquad \text{a.e. } t \in [0,T]. \tag{7.58} $$
Now, the exact controllability of (7.8) is equivalent to $R(L) = L^2_{\mathcal F_T}(\Omega;O')$; the latter, by (7.58) and Theorem 1.7, is equivalent to the estimate (7.53).
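The adjoint formula (7.58) can be illustrated in a deterministic finite-dimensional analogue: with $L(u) = y(T)$ for $y' = Ay + Bu$, $y(0) = 0$, the pairing $\langle Lu, z_T\rangle = \int_0^T \langle u(t), B^{\top}z(t)\rangle\,dt$ holds for $z(t) = e^{A^{\top}(T-t)}z_T$. The sketch below (illustrative data) checks this numerically on one common quadrature grid:

```python
import numpy as np
from scipy.linalg import expm

# Check of the finite-dimensional analogue of the adjoint formula (7.58):
#   <L u, z_T> = int_0^T <u(t), B^T z(t)> dt,  (L* z_T)(t) = B^T e^{A^T(T-t)} z_T,
# where L u = y(T) = int_0^T e^{A(T-t)} B u(t) dt (variation of constants, y(0)=0).
A = np.array([[0.0, 2.0], [-0.5, 0.1]])
B = np.array([[1.0], [0.3]])
T, n = 1.0, 2000
ts = np.linspace(0.0, T, n + 1)
h = T / n
u = np.sin(2.0 * ts) + 0.5 * np.cos(5.0 * ts)     # a smooth scalar control
zT = np.array([0.8, -0.4])

kern = np.stack([(expm(A * (T - t)) @ B)[:, 0] for t in ts])   # e^{A(T-t)}B

# L u = y(T), by trapezoidal quadrature of the variation-of-constants formula
integrand = kern * u[:, None]
Lu = h * (integrand.sum(axis=0) - 0.5 * (integrand[0] + integrand[-1]))

# <u, L* z_T> with (L* z_T)(t) = B^T e^{A^T(T-t)} z_T = kern(t) . z_T
Lstar = kern @ zT
pair = h * ((u * Lstar).sum() - 0.5 * (u[0]*Lstar[0] + u[-1]*Lstar[-1]))

print(Lu @ zT, pair)
assert abs(Lu @ zT - pair) < 1e-9
```

The two sides agree to rounding error because they are the same quadrature sum reorganized, which is precisely the content of the adjoint identity.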

Similarly to Theorems 7.23 and 7.24, we can easily prove the following controllability results for the system (7.9) (hence we do not prove them here):

Theorem 7.25. Under Conditions 7.1 (with $D = 0$) and 7.3, the following assertions hold:

1) The system (7.9) is exactly controllable at time 0 if and only if (7.52) is continuously initially observable on $[0,T]$, in the sense that all of its solutions satisfy the following (initial time) observability estimate:
$$ |x_0|_O \le C\,|B^*x|_{U_T}, \qquad \forall\, x_0 \in O. \tag{7.59} $$
2) The system (7.9) with $g = 0$ is null controllable at time 0 if and only if (7.52) is continuously finally observable on $[0,T]$, in the sense that all of its solutions satisfy the following (final time) observability estimate:
$$ |x(T)|_{L^2_{\mathcal F_T}(\Omega;O)} \le C\,|B^*x|_{U_T}, \qquad \forall\, x_0 \in O. \tag{7.60} $$
3) The system (7.9) is approximately controllable at time 0 if and only if solutions to (7.52) satisfy the following observability property:
$$ B^*x \equiv 0 \text{ in } (0,T), \text{ a.s.} \;\Longrightarrow\; x_0 = 0. \tag{7.61} $$


7.4 Explicit Forms of Controls for the Controllability Problems

We have shown the existence of controls for the exact/null/approximate controllability of the systems (7.6) and (7.7) (as well as of the systems (7.8) and (7.9)), provided that their dual systems satisfy suitable observability properties. These results tell us whether a system is controllable, but they give no information about the control which drives the state to the destination. In this section, stimulated by the simple control formula (1.5) in Theorem 1.3 for the exact controllability of linear ordinary differential equations, and also by [123] for a systematic development of similar problems for deterministic partial differential equations, we shall present explicit forms of controls for the controllability problems of stochastic linear evolution equations. Such results not only provide variational characterizations of the desired controls but also serve as a basis for the numerical analysis of the controllability problems.

In the rest of this section, we fix any given $f,g \in L^2_{\mathbb F}(\Omega;L^1(0,T;H))$, $y_0 \in H$, $y_T \in L^2_{\mathcal F_T}(\Omega;H)$ and $\varepsilon \ge 0$. Define a functional on $L^2_{\mathcal F_T}(\Omega;H)$ as follows:
$$ J_1(z_T) = \frac12\,\mathbb E\int_0^T |B^*z(t) + D^*Z(t)|_U^2\,dt - \mathbb E\langle y_T,z_T\rangle_H + \langle y_0,z(0)\rangle_H + \mathbb E\int_0^T \langle f(t),z(t)\rangle_H\,dt + \varepsilon\,|z_T|_{L^2_{\mathcal F_T}(\Omega;H)}, \quad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H), \tag{7.62} $$
where $(z,Z)$ solves (7.34). Also, we define a functional on $H$ as follows:
$$ J_2(x_0) = \frac12\,\mathbb E\int_0^T |B^*x(t)|_U^2\,dt - \mathbb E\langle x(T),y_T\rangle_H + \langle x_0,y_0\rangle_H + \mathbb E\int_0^T \langle g(t),x(t)\rangle_H\,dt + \varepsilon\,|x_0|_H, \quad \forall\, x_0 \in H, \tag{7.63} $$
where $x(\cdot)$ solves (7.35).

In what follows, we shall give the following explicit form of controls for the exact/null/approximate controllability of (7.6):
$$ u(t) = B^*z(t;\hat z_T) + D^*Z(t;\hat z_T), \tag{7.64} $$
where $(z(\cdot;\hat z_T), Z(\cdot;\hat z_T))$ is the corresponding solution to (7.34) with the final datum $z_T$ replaced by a suitably chosen $\hat z_T \in L^2_{\mathcal F_T}(\Omega;H)$, namely the minimizer of the functional $J_1(\cdot)$ under some assumptions.

First, for the exact controllability of (7.6), we have the following result:

Theorem 7.26. If the equation (7.34) is continuously finally observable on $[0,T]$, then, among all controls transferring the state of (7.6) from $y_0$ to $y_T$ at time $T$, the one given by (7.64) has the minimal $U_T$-norm, where $\hat z_T \in L^2_{\mathcal F_T}(\Omega;H)$ is the minimizer of the functional $J_1(\cdot)$ (defined by (7.62)) with $\varepsilon = 0$.


Proof: Note that in Theorem 7.26 the parameter $\varepsilon$ in the functional $J_1(\cdot)$ is taken to be 0. Clearly, in this case the functional $J_1(\cdot) : L^2_{\mathcal F_T}(\Omega;H) \to \mathbb R$ is continuous and strictly convex. Since the equation (7.34) is continuously finally observable on $[0,T]$ (recall Theorem 7.16 for the meaning of continuous final observability), we have
$$ \begin{aligned} J_1(z_T) &\ge C\,\mathbb E|z_T|_H^2 - |y_T|_{L^2_{\mathcal F_T}(\Omega;H)}|z_T|_{L^2_{\mathcal F_T}(\Omega;H)} - |y_0|_H|z(0)|_H - |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;H))}|z|_{L^2_{\mathbb F}(\Omega;C([0,T];H))}\\ &\ge C\,\mathbb E|z_T|_H^2 - C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;H)} + |y_0|_H + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;H))}\big)|z_T|_{L^2_{\mathcal F_T}(\Omega;H)}. \end{aligned} $$

This implies that $J_1(\cdot)$ is coercive. Hence, $J_1(\cdot)$ admits a unique minimizer $\hat z_T$. Computing the first-order variation of $J_1(\cdot)$ (with $\varepsilon = 0$), we obtain that, for any $z_T \in L^2_{\mathcal F_T}(\Omega;H)$,
$$ \mathbb E\int_0^T \langle B^*z(t;\hat z_T) + D^*Z(t;\hat z_T),\, B^*z(t) + D^*Z(t)\rangle_U\,dt = \mathbb E\langle y_T,z_T\rangle_H - \langle y_0,z(0)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t)\rangle_H\,dt. \tag{7.65} $$

On the other hand, applying Itô's formula to (7.6) and (7.34), we obtain that
$$ \mathbb E\langle y(T),z_T\rangle_H - \langle y_0,z(0)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t)\rangle_H\,dt = \mathbb E\int_0^T \langle u(t), B^*z(t) + D^*Z(t)\rangle_U\,dt, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \tag{7.66} $$
From (7.65) and (7.66), if we choose the control $u(\cdot)$ as in (7.64), then
$$ \mathbb E\langle y_T,z_T\rangle_H = \mathbb E\langle y(T),z_T\rangle_H, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H), $$
which implies that $y(T) = y_T$, a.s. Hence, the control $u(\cdot)$ transfers the state of (7.6) from $y_0$ to $y_T$ at time $T$.

Next, let us prove that $u(\cdot)$ has the minimal $U_T$-norm among all controls which transfer the state of (7.6) from $y_0$ to $y_T$ at time $T$. Choosing $z_T = \hat z_T$ in (7.65), we obtain that
$$ \mathbb E\int_0^T \big|B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\big|_U^2\,dt = \mathbb E\langle y_T,\hat z_T\rangle_H - \langle y_0,z(0;\hat z_T)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t;\hat z_T)\rangle_H\,dt. \tag{7.67} $$
Let $u_1(\cdot)$ be another control which transfers the state of (7.6) from $y_0$ to $y_T$ at time $T$. Then, it follows from (7.66) that


$$ \mathbb E\langle y_T,\hat z_T\rangle_H - \langle y_0,z(0;\hat z_T)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t;\hat z_T)\rangle_H\,dt = \mathbb E\int_0^T \langle u_1(t), B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\rangle_U\,dt. \tag{7.68} $$
Combining (7.67) and (7.68), we arrive at
$$ \begin{aligned} \mathbb E\int_0^T \big|B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\big|_U^2\,dt &= \mathbb E\int_0^T \langle u_1(t), B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\rangle_U\,dt\\ &\le \big|B^*z(\cdot;\hat z_T) + D^*Z(\cdot;\hat z_T)\big|_{U_T}\,|u_1(\cdot)|_{U_T}, \end{aligned} $$

which implies that $|u|_{U_T} \le |u_1|_{U_T}$. This completes the proof of Theorem 7.26.

Remark 7.27. Theorem 7.26 provides a way to compute the control $u$ (with minimal $U_T$-norm) which transfers the state of the system (7.6) from $y_0$ to $y_T$ at time $T$, by means of the solution to its dual equation (7.34) with the final datum given by the minimizer of the functional $J_1(\cdot)$ (with $\varepsilon = 0$) on $L^2_{\mathcal F_T}(\Omega;H)$. In the deterministic case, such controls are computed numerically, for instance by gradient-type iterative algorithms (e.g., [395]). However, in the present stochastic setting, one meets many more difficulties when employing similar methods to compute the desired controls, and a careful study of these problems remains to be done.

Next, similarly to Theorem 7.26, for the null controllability of the system (7.6), we have the following result:

Theorem 7.28. If the system (7.34) is continuously initially observable on $[0,T]$, then, among all controls transferring the state of (7.6) with $f = 0$ from $y_0$ to 0 at time $T$, the one given by (7.64) has the minimal $U_T$-norm, where $\hat z_T \in L^2_{\mathcal F_T}(\Omega;H)$ is the minimizer of the functional $J_1(\cdot)$ with $\varepsilon = 0$ and $f = 0$.

Further, similarly to Theorem 7.26, for the approximate controllability of the system (7.6), we have the following result:

Theorem 7.29. If the system (7.34) is finally observable on $[0,T]$, then for any given $\varepsilon > 0$, among all controls transferring the state of (7.6) from $y_0$ to the $\varepsilon$-neighborhood of $y_T$ at time $T$, i.e., $|y(T) - y_T|_{L^2_{\mathcal F_T}(\Omega;H)} < \varepsilon$, the one given by (7.64) has the minimal $U_T$-norm, where $\hat z_T \in L^2_{\mathcal F_T}(\Omega;H)$ is the minimizer of the functional $J_1(\cdot)$.

Proof: It is easy to see that the functional $J_1(\cdot) : L^2_{\mathcal F_T}(\Omega;H) \to \mathbb R$ is continuous and strictly convex. We claim that it is coercive under the assumption that


the system (7.34) is finally observable on $[0,T]$. Indeed, choose any sequence $\{z_{T,k}\}_{k=1}^{\infty} \subset L^2_{\mathcal F_T}(\Omega;H)$ such that
$$ |z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)} \to +\infty \quad \text{as } k \to +\infty. $$
For each $k \in \mathbb N$, put
$$ \tilde z_{T,k} = \frac{z_{T,k}}{|z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)}}. $$
Denote by $(z(\cdot;\tilde z_{T,k}), Z(\cdot;\tilde z_{T,k}))$ the corresponding solution to (7.34) with the final datum $z_T$ replaced by $\tilde z_{T,k}$. Then,
$$ \begin{aligned} \frac{J_1(z_{T,k})}{|z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)}} ={}& \frac{|z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)}}{2}\,\mathbb E\int_0^T |B^*z(t;\tilde z_{T,k}) + D^*Z(t;\tilde z_{T,k})|_U^2\,dt\\ & - \mathbb E\langle y_T,\tilde z_{T,k}\rangle_H + \langle y_0, z(0;\tilde z_{T,k})\rangle_H + \mathbb E\int_0^T \langle f(t), z(t;\tilde z_{T,k})\rangle_H\,dt + \varepsilon\\ \ge{}& \frac{|z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)}}{2}\,\mathbb E\int_0^T |B^*z(t;\tilde z_{T,k}) + D^*Z(t;\tilde z_{T,k})|_U^2\,dt\\ & - C\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;H)} + |y_0|_H + |f|_{L^2_{\mathbb F}(\Omega;L^1(0,T;H))}\big) + \varepsilon. \end{aligned} \tag{7.69} $$
If
$$ \liminf_{k\to+\infty}\,\mathbb E\int_0^T |B^*z(t;\tilde z_{T,k}) + D^*Z(t;\tilde z_{T,k})|_U^2\,dt > 0, \tag{7.70} $$
then it follows from (7.69) that
$$ J_1(z_{T,k}) \to +\infty \quad \text{as } |z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)} \to +\infty. \tag{7.71} $$
Hence, we only need to consider the case where the left-hand side of (7.70) equals 0. Since $|\tilde z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)} = 1$ for all $k \in \mathbb N$, there are a subsequence $\{\tilde z_{T,k_j}\}_{j=1}^{\infty}$ of $\{\tilde z_{T,k}\}_{k=1}^{\infty}$ and a $\tilde z_T \in L^2_{\mathcal F_T}(\Omega;H)$ such that $\tilde z_{T,k_j}$ converges weakly to $\tilde z_T$ in $L^2_{\mathcal F_T}(\Omega;H)$ and the corresponding integrals in (7.70) tend to 0 along this subsequence. Denote by $(z(\cdot;\tilde z_T), Z(\cdot;\tilde z_T))$ the solution to (7.34) with the final datum $z_T$ replaced by $\tilde z_T$. By the definition of the weak solution to (7.34), $z(\cdot;\tilde z_{T,k_j})$ converges weak-star to $z(\cdot;\tilde z_T)$ in $L^2_{\mathbb F}(\Omega;L^\infty(0,T;H))$. Then,
$$ \mathbb E\int_0^T |B^*z(t;\tilde z_T) + D^*Z(t;\tilde z_T)|_U^2\,dt \le \liminf_{j\to+\infty}\,\mathbb E\int_0^T |B^*z(t;\tilde z_{T,k_j}) + D^*Z(t;\tilde z_{T,k_j})|_U^2\,dt = 0. $$


Since the system (7.34) is finally observable on $[0,T]$, we conclude that $\tilde z_T = 0$, a.s. Thus, $z(\cdot;\tilde z_T) = 0$. It then follows from the first equality in (7.69) that
$$ \liminf_{k\to+\infty}\,\frac{J_1(z_{T,k})}{|z_{T,k}|_{L^2_{\mathcal F_T}(\Omega;H)}} \ge \varepsilon, $$
which again gives (7.71). Hence, $J_1(\cdot)$ is coercive, and it admits a unique minimizer $\hat z_T$. Computing the first variation of $J_1(\cdot)$, we get that
$$ \begin{aligned} &\mathbb E\int_0^T \langle B^*z(t;\hat z_T) + D^*Z(t;\hat z_T),\, B^*z(t) + D^*Z(t)\rangle_U\,dt\\ &= \mathbb E\langle y_T,z_T\rangle_H - \langle y_0,z(0)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t)\rangle_H\,dt - \frac{\varepsilon\,\mathbb E\langle z_T,\hat z_T\rangle_H}{|\hat z_T|_{L^2_{\mathcal F_T}(\Omega;H)}}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \end{aligned} \tag{7.72} $$
On the other hand, we still have (7.66). Hence, by (7.72) and (7.66), if we choose the control as in (7.64), then
$$ \mathbb E\langle y_T - y(T), z_T\rangle_H = \frac{\varepsilon\,\mathbb E\langle z_T,\hat z_T\rangle_H}{|\hat z_T|_{L^2_{\mathcal F_T}(\Omega;H)}} \le \varepsilon\,|z_T|_{L^2_{\mathcal F_T}(\Omega;H)}, \qquad \forall\, z_T \in L^2_{\mathcal F_T}(\Omega;H). \tag{7.73} $$
Taking $z_T = y_T - y(T)$, we find from (7.73) that
$$ |y_T - y(T)|_{L^2_{\mathcal F_T}(\Omega;H)} \le \varepsilon. $$

Now we prove that $u(\cdot)$ given by (7.64) is the control with the minimal $U_T$-norm which transfers the state of the system (7.6) from $y_0$ to the $\varepsilon$-neighborhood of $y_T$ at time $T$. Choosing $z_T = \hat z_T$ in (7.72), we find that
$$ \mathbb E\int_0^T \big|B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\big|_U^2\,dt = \mathbb E\langle y_T,\hat z_T\rangle_H - \langle y_0,z(0;\hat z_T)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t;\hat z_T)\rangle_H\,dt - \varepsilon\,|\hat z_T|_{L^2_{\mathcal F_T}(\Omega;H)}. \tag{7.74} $$
Let $u_1(\cdot)$ be another control which transfers the state of the system (7.6) from $y_0$ to the $\varepsilon$-neighborhood of $y_T$ at time $T$, i.e., the corresponding solution $y(\cdot;u_1)$ to (7.6) satisfies
$$ |y(T;u_1) - y_T|_{L^2_{\mathcal F_T}(\Omega;H)} < \varepsilon. \tag{7.75} $$

Then, similarly to (7.68), we have
$$ \mathbb E\langle y(T;u_1),\hat z_T\rangle_H - \langle y_0,z(0;\hat z_T)\rangle_H - \mathbb E\int_0^T \langle f(t),z(t;\hat z_T)\rangle_H\,dt = \mathbb E\int_0^T \langle u_1(t), B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\rangle_U\,dt. \tag{7.76} $$


It follows from (7.74)–(7.76) that
$$ \begin{aligned} \mathbb E\int_0^T \big|B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\big|_U^2\,dt &= \mathbb E\int_0^T \langle u_1(t), B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\rangle_U\,dt + \mathbb E\langle y_T - y(T;u_1),\hat z_T\rangle_H - \varepsilon\,|\hat z_T|_{L^2_{\mathcal F_T}(\Omega;H)}\\ &\le \mathbb E\int_0^T \langle u_1(t), B^*z(t;\hat z_T) + D^*Z(t;\hat z_T)\rangle_U\,dt\\ &\le \big|B^*z(\cdot;\hat z_T) + D^*Z(\cdot;\hat z_T)\big|_{U_T}\,|u_1(\cdot)|_{U_T}, \end{aligned} $$
which implies $|u|_{U_T} \le |u_1|_{U_T}$. This completes the proof of Theorem 7.29.

Similarly to (7.64), we shall give the following explicit form of controls for the exact/null/approximate controllability of (7.7):
$$ u(t) = B^*x(t;\hat x_0), \tag{7.77} $$

where $x(\cdot;\hat x_0)$ is the corresponding solution to (7.35) with the initial datum $x_0$ replaced by a suitably chosen $\hat x_0$, namely the minimizer of the functional $J_2$ (defined by (7.63)) under some assumptions. Proceeding exactly as in the proofs of Theorems 7.26, 7.28 and 7.29, one can prove the following result for the exact/null/approximate controllability of (7.7) (hence we omit the proof).

Theorem 7.30. 1) If the equation (7.35) is continuously initially observable on $[0,T]$, then, among all controls transferring the state of (7.7) from $y_T \in L^2_{\mathcal F_T}(\Omega;H)$ to $y_0 \in H$ at time 0, the one given by (7.77) has the minimal $U_T$-norm, where $\hat x_0 \in H$ is the minimizer of the functional $J_2(\cdot)$ with $\varepsilon = 0$.

2) If the system (7.35) is continuously finally observable on $[0,T]$, then, among all controls transferring the state of (7.7) with $g = 0$ from $y_T$ to 0 at time 0, the one given by (7.77) has the minimal $U_T$-norm, where $\hat x_0 \in H$ is the minimizer of the functional $J_2(\cdot)$ with $\varepsilon = 0$ and $g = 0$.

3) If the system (7.35) is initially observable on $[0,T]$, then for any given $\varepsilon > 0$, among all controls transferring the state of (7.7) from $y_T$ to the $\varepsilon$-neighborhood of $y_0$ at time 0, i.e., $|y(0) - y_0|_H < \varepsilon$, the one given by (7.77) has the minimal $U_T$-norm, where $\hat x_0 \in H$ is the minimizer of the functional $J_2(\cdot)$.

Finally, for the case of unbounded control operators, using arguments similar to those in Theorems 7.26, 7.28, 7.29 and 7.30, one may obtain explicit forms of controls for the controllability problems of the systems (7.8) and (7.9) (hence we do not give the details here).
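As a deterministic finite-dimensional sketch of the variational construction above (and of the gradient-type iteration mentioned in Remark 7.27; all matrices and data are illustrative): with $z(t) = e^{A^{\top}(T-t)}z_T$, the analogue of $J_1$ (with $f = 0$, $\varepsilon = 0$) is the quadratic $J(z_T) = \tfrac12 z_T^{\top}W_T z_T - \langle y_T - e^{AT}y_0, z_T\rangle$, whose minimizer $\hat z$ solves $W_T\hat z = y_T - e^{AT}y_0$; the control $u(t) = B^{\top}z(t;\hat z)$ then steers $y_0$ to $y_T$.

```python
import numpy as np
from scipy.linalg import expm

# Minimal-norm control via minimization of the quadratic functional J
# (finite-dimensional, deterministic analogue of Theorem 7.26):
#   J(z_T) = (1/2) z_T^T W z_T - <y_T - e^{AT} y_0, z_T>,  grad J = W z - b.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
y0 = np.array([1.0, 0.0]); yT = np.array([0.0, 1.0])
T, n = 2.0, 4000
ss = np.linspace(0.0, T, n + 1); h = T / n

kern = np.stack([(expm(A * s) @ B)[:, 0] for s in ss])        # e^{As}B
vals = kern[:, :, None] * kern[:, None, :]
W = h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))       # Gramian
b = yT - expm(A * T) @ y0

# conjugate gradient on the strictly convex quadratic J (echoing Remark 7.27)
z = np.zeros(2); r = b - W @ z; p = r.copy()
for _ in range(10):
    if np.linalg.norm(r) < 1e-12:
        break
    Wp = W @ p
    alpha = (r @ r) / (p @ Wp)
    z = z + alpha * p
    r_new = r - alpha * Wp
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
assert np.linalg.norm(W @ z - b) < 1e-8                        # minimizer of J

# control u(t) = B^T e^{A^T(T-t)} z_hat steers y0 to yT, since
# y(T) = e^{AT} y0 + int e^{A(T-t)} B u(t) dt = e^{AT} y0 + W z_hat = yT
u = np.array([kern[n - i] @ z for i in range(n + 1)])          # u at t = ts[i]
contrib = kern[::-1] * u[:, None]                              # e^{A(T-t)}B u(t)
yT_num = expm(A * T) @ y0 + h * (contrib.sum(axis=0) - 0.5*(contrib[0] + contrib[-1]))
print(yT_num, yT)
assert np.linalg.norm(yT_num - yT) < 1e-6
```

In this deterministic setting the minimizer is available in closed form; as Remark 7.27 notes, the genuinely stochastic problem is much harder, since the minimization runs over final data in $L^2_{\mathcal F_T}(\Omega;H)$ rather than over a finite-dimensional space.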


7 Controllability for Stochastic Linear Evolution Equations

7.5 Relationship Between the Forward and the Backward Controllability

In this section, we present a relationship between the forward and the backward controllability, i.e., between the controllability of forward and of backward stochastic evolution equations.

7.5.1 The Case of Bounded Control Operators

In this subsection, we assume that A generates a C0-group on H, and that F(·), f(·), B and G(·) are the same as in the system (7.6). Let 𝒰 be another Hilbert space, which will serve as another control space, let D ∈ L(𝒰; L⁰₂) be invertible, B1 ∈ L(U; L⁰₂) and D1 ∈ L(𝒰; H). We consider the following (forward) stochastic control system evolving on H:

  dy(t) = (Ay(t) + F(t)y(t) + f(t) + Bu(t) + D1v(t)) dt
          + (G(t)y(t) + Dv(t) + B1u(t)) dW(t)   in (0, T],     (7.78)
  y(0) = y0,

and the following backward stochastic control system evolving on H:

  dy(t) = (Ay(t) + F(t)y(t) + f(t) + Bu(t) + D1D⁻¹Y(t)) dt
          + (G(t)y(t) + Y(t) + B1u(t)) dW(t)   in [0, T),     (7.79)
  y(T) = yT.

Here, y(·) is the state variable, u(·) ∈ UT and v(·) ∈ L²_F(0,T;𝒰) are the control variables, and y0 ∈ H and yT ∈ L²_FT(Ω;H) are respectively the initial and the final states. Since A is assumed to generate a C0-group on H, the system (7.78) admits a unique mild solution y(·) ∈ L²_F(Ω;C([0,T];H)), while the system (7.79) admits a unique mild solution (y(·), Y(·)) ∈ L²_F(Ω;C([0,T];H)) × L²_F(0,T;L⁰₂). In (7.78), we emphasize that the control u(·) (resp. v(·)) in the drift (resp. diffusion) term affects the diffusion (resp. drift) term as well (nevertheless, if we regard (u(·), v(·)) as "one" control, then (7.78) is still of the form (7.6)). Inspired by [275], we shall show below that the controllability problems for the forward stochastic control system (7.78) (with "two" controls u(·) and v(·)) can be reduced to those for the backward stochastic control system (7.79) (with only one control u(·)).

For the exact controllability, we have the following result.

Theorem 7.31. The system (7.78) is exactly controllable at time T if and only if the system (7.79) is exactly controllable at time 0.

Proof: We fix any y0 ∈ H and yT ∈ L²_FT(Ω;H).

The "only if" part. Since (7.78) is exactly controllable at time T, one can find a pair of controls (u, v) ∈ UT × L²_F(0,T;𝒰) such that the solution


to (7.78) satisfies y(T) = yT. Let Y = Dv. Then (y, Y) is the solution to (7.79) corresponding to the final datum yT and the control u ∈ UT, and y(0) = y0. This implies that (7.79) is exactly controllable at time 0.

The "if" part. Fix any y0 ∈ H and yT ∈ L²_FT(Ω;H). Since (7.79) is exactly controllable at time 0, one can find a control u ∈ UT such that the solution (y(·), Y(·)) ∈ L²_F(Ω;C([0,T];H)) × L²_F(0,T;L⁰₂) to (7.79) satisfies y(0) = y0. Let v = D⁻¹Y. Then y is the solution to (7.78) corresponding to the initial datum y0 and the controls (u, v) ∈ UT × L²_F(0,T;𝒰), and y(T) = yT. This implies that (7.78) is exactly controllable at time T. □

Next, we give the relationship between the null controllability of (7.78) and the exact controllability of (7.79).

Theorem 7.32. The system (7.78) is null controllable at time T if and only if the system (7.79) is exactly controllable at time 0.

Proof: By Theorem 7.31, it suffices to prove the "only if" part. Fix any y0 ∈ H and yT ∈ L²_FT(Ω;H). Denote by (ỹ, Ỹ) the solution to (7.79) with the final datum yT and the control u(·) = 0. Since (7.78) is null controllable, one can find a pair of controls (u, v) ∈ UT × L²_F(0,T;𝒰) such that the corresponding solution ŷ to (7.78) with the initial datum y0 replaced by y0 − ỹ(0) satisfies ŷ(T) = 0. Then (y, Y) = (ŷ + ỹ, Dv + Ỹ) is the solution to (7.79) corresponding to the final datum yT and the control u(·) ∈ UT obtained above for (7.78), and it satisfies y(0) = y0. This implies that (7.79) is exactly controllable at time 0. □

Now, as an immediate consequence of Theorem 7.32, we have the following result:

Corollary 7.33. The system (7.78) is null controllable at time T if and only if it is exactly controllable at time T.
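The substitution Y = Dv used in the proof is purely algebraic, so it can be sanity-checked in finite dimensions. The following sketch (all matrices are illustrative stand-ins for A, F, G, B, B1, D, D1, with a one-dimensional Brownian motion so that L⁰₂ may be identified with H) verifies that the drift and diffusion terms of (7.78) and of (7.79) with Y = Dv coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # state and control dimension (D must be square and invertible)

# Illustrative coefficient matrices (stand-ins, not from the text).
A, F, G = (rng.standard_normal((n, n)) for _ in range(3))
B, B1, D1 = (rng.standard_normal((n, n)) for _ in range(3))
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # invertible

y = rng.standard_normal(n)   # current state
u = rng.standard_normal(n)   # control in the drift
v = rng.standard_normal(n)   # control in the diffusion
f = rng.standard_normal(n)   # inhomogeneous term

# Forward system (7.78): drift and diffusion with controls (u, v).
drift_fwd = A @ y + F @ y + f + B @ u + D1 @ v
diff_fwd = G @ y + D @ v + B1 @ u

# Backward system (7.79) with Y := D v (the substitution from the proof).
Y = D @ v
drift_bwd = A @ y + F @ y + f + B @ u + D1 @ np.linalg.solve(D, Y)
diff_bwd = G @ y + Y + B1 @ u

assert np.allclose(drift_fwd, drift_bwd)
assert np.allclose(diff_fwd, diff_bwd)
print("drift and diffusion of (7.78) and (7.79) coincide under Y = D v")
```

Of course, this only checks the pointwise algebra of the substitution; the analytic content of Theorem 7.31 is that the two well-posedness frameworks match.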
In view of Theorem 7.31, similarly to Theorem 7.23, the exact controllability problem of (7.78) can be reduced to a suitable observability estimate for the following stochastic evolution equation on H:

  dx = −(A* + F* − G*(D⁻¹)*D1*) x dt − (D⁻¹)*D1* x dW(t)   in (0, T],     (7.80)
  x(0) = x0.

More precisely, we have the following result:

Theorem 7.34. The system (7.78) is exactly controllable at time T if and only if all solutions to (7.80) satisfy the following observability estimate:

  |x0|²_H ≤ C E ∫₀ᵀ |(B − D1D⁻¹B1)* x(t)|²_U dt,   ∀ x0 ∈ H.


It follows from Theorem 7.34 that, when D1 = 0, the exact controllability of (7.78) can be reduced to an observability estimate for a random evolution equation. Indeed, let us consider the following random evolution equation on H:

  dx/dt = −(A* + F*) x   in (0, T],     (7.81)
  x(0) = x0.

As a special case of Theorem 7.34, we have the following result:

Theorem 7.35. If D1 = 0, then the following statements are equivalent:
1) The system (7.78) is exactly controllable at time T;
2) All solutions to (7.81) satisfy the following observability estimate:

  |x0|²_H ≤ C E ∫₀ᵀ |B* x(t)|²_U dt,   ∀ x0 ∈ H.     (7.82)

Remark 7.36. The equation (7.81) can be regarded as an evolution equation with a parameter ω. Since F ∈ L^∞_F(0,T;L(H)), the estimate (7.82) can sometimes be obtained immediately from existing observability results for deterministic evolution equations.

Remark 7.37. Since D is invertible, one may expect that the randomness of the system (7.78) can be eliminated simply by taking v(·) = −D⁻¹(G(·)y(·) + B1u(·)), so that the system reduces to the following controlled random evolution equation:

  dy(t)/dt = Ay(t) + (F(t) − D1D⁻¹G(t)) y(t) + f(t) + (B − D1D⁻¹B1) u(t)   in (0, T],     (7.83)
  y(0) = y0.

However, this does not help us to achieve our goal of finding a control transferring the state of the system (7.83) (or equivalently (7.78)) from a given y0 to an arbitrary element of L²_FT(Ω;H) at time T. Indeed, the dual equation of (7.83) is still a backward stochastic evolution equation, given below:

  dz(t) = −[A*z(t) + (F(t) − D1D⁻¹G(t))* z(t)] dt + Z(t) dW(t)   in [0, T),     (7.84)
  z(T) = zT.

By Theorem 7.16, the system (7.83) (and hence (7.78)) is exactly controllable at time T if and only if solutions to (7.84) satisfy the following observability estimate:

  |zT|_{L²_FT(Ω;H)} ≤ C |(B − D1D⁻¹B1)* z|_UT,   ∀ zT ∈ L²_FT(Ω;H).     (7.85)
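The reduction at the beginning of Remark 7.37 is a direct substitution; writing it out (this is only the elementary computation behind (7.83), not an additional assumption):

```latex
% Substituting v = -D^{-1}\bigl(G(t)y + B_1 u\bigr) into (7.78):
\begin{aligned}
\text{diffusion:}\quad & G(t)y + Dv + B_1 u
   = G(t)y - \bigl(G(t)y + B_1 u\bigr) + B_1 u = 0,\\
\text{drift:}\quad & Ay + F(t)y + f(t) + Bu + D_1 v
   = Ay + \bigl(F(t) - D_1 D^{-1} G(t)\bigr)y + f(t)
     + \bigl(B - D_1 D^{-1} B_1\bigr)u,
\end{aligned}
```

so the dW(t) term vanishes and (7.78) becomes the random evolution equation (7.83).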

As far as we know, generally speaking, the observability estimate (7.85) cannot be proved directly by existing controllability/observability results for deterministic evolution equations, even in the special case D1 = 0.

Remark 7.38. We have proved that the system (7.78) is null controllable if and only if it is exactly controllable, provided that A generates a C0-group and D is invertible. For deterministic systems, the condition that A generates a C0-group is enough to establish this equivalence. However, by Proposition 6.3, it is easy to see that, in the stochastic context, the condition that D is invertible cannot be dropped.

Next, we give the relationship between the approximate controllability of (7.78) and that of (7.79).

Theorem 7.39. The system (7.78) is approximately controllable at time T if and only if the system (7.79) is approximately controllable at time 0.

Proof: We fix any ε > 0, y0 ∈ H and yT ∈ L²_FT(Ω;H).

The "only if" part. Since (7.78) is approximately controllable, for any ε1 > 0, one can find a pair of controls (u¹, v¹) ∈ UT × L²_F(0,T;𝒰) such that the corresponding mild solution y¹(·) ∈ L²_F(Ω;C([0,T];H)) to (7.78) satisfies y¹(0) = y0 and |y¹(T) − yT|_{L²_FT(Ω;H)} < ε1. Denote by (y, Y) the corresponding solution to (7.79) with the control u(·) replaced by u¹(·). Then (ŷ, Ŷ) ≜ (y, Y) − (y¹, Dv¹) solves the following backward stochastic evolution equation:

  dŷ(t) = (Aŷ(t) + F(t)ŷ(t) + D1D⁻¹Ŷ(t)) dt + (G(t)ŷ(t) + Ŷ(t)) dW(t)   in [0, T),     (7.86)
  ŷ(T) = yT − y¹(T).

By Theorem 4.55, it follows that

  |ŷ(0)|_H ≤ C |y¹(T) − yT|_{L²_FT(Ω;H)} < C ε1.

Let us choose ε1 = ε/C. Then

  |y(0) − y0|_H = |y(0) − y¹(0)|_H = |ŷ(0)|_H < C ε1 = ε.

This implies that (7.79) is approximately controllable.

The "if" part. Since (7.79) is approximately controllable, for any ε1 > 0, one can find a control u¹ ∈ UT such that the corresponding mild solution (y¹(·), Y¹(·)) ∈ L²_F(Ω;C([0,T];H)) × L²_F(0,T;L⁰₂) to (7.79) with the control u(·) replaced by u¹(·) satisfies y¹(T) = yT and |y¹(0) − y0|_H < ε1. Denote by y the corresponding solution to (7.78) with the controls (u(·), v(·)) replaced by (u¹(·), D⁻¹Y¹(·)). Then ŷ ≜ y − y¹ solves the following equation:

  dŷ(t) = (Aŷ(t) + F(t)ŷ(t)) dt + G(t)ŷ(t) dW(t)   in (0, T],     (7.87)
  ŷ(0) = y0 − y¹(0).

By Theorem 3.14, we have

  |ŷ(T)|_{L²_FT(Ω;H)} ≤ C |y¹(0) − y0|_H < C ε1.

Let us choose ε1 = ε/C. Then

  |y(T) − yT|_{L²_FT(Ω;H)} = |ŷ(T)|_{L²_FT(Ω;H)} < C ε1 = ε.

Therefore, y is the solution to (7.78) such that |y(T) − yT|_{L²_FT(Ω;H)} < ε. This implies that (7.78) is approximately controllable. □

7.5.2 The Case of Unbounded Control Operators

In this subsection, O, O′, F(·), f(·), B and G(·) are the same as for the system (7.8), and we assume that A generates a C0-group on both O and O′. Let 𝒰 be the same control space as in Subsection 7.5.1, let D ∈ L(𝒰; L₂(V;O′)) be invertible, B1 ∈ L(U; L₂(V;O′)) and D1 ∈ L(𝒰; O′). We consider the following (forward) stochastic control system evolving on O′:

  dy(t) = (Ay(t) + F(t)y(t) + f(t) + Bu(t) + D1v(t)) dt
          + (G(t)y(t) + Dv(t) + B1u(t)) dW(t)   in (0, T],     (7.88)
  y(0) = y0,

and the following backward stochastic control system evolving on O′:

  dy(t) = (Ay(t) + F(t)y(t) + f(t) + Bu(t) + D1D⁻¹Y(t)) dt
          + (G(t)y(t) + Y(t) + B1u(t)) dW(t)   in [0, T),     (7.89)
  y(T) = yT.

Here, y(·) is the state variable, u(·) ∈ UT and v(·) ∈ L²_F(0,T;𝒰) are the control variables, and y0 ∈ O′ and yT ∈ L²_FT(Ω;O′) are respectively the initial and the final states.

Note that in both (7.88) and (7.89) there is only one unbounded control operator, namely B. For this unbounded control operator, we need assumptions similar to Conditions 7.1, 7.2 and 7.3. More precisely, in this subsection we assume that:

• there exists a sequence {un}∞_{n=1} ⊂ UT such that Bun ∈ L²_F(0,T;O′) for each n ∈ ℕ and (7.16) holds;

• there exists a constant C > 0 such that for any τ ∈ [0,T], ξτ ∈ L²_Fτ(Ω;O), ητ ∈ L²_Fτ(Ω;O) and h ∈ L²_F(0,T;L₂(V;O)), the solution (ξ, Ξ) to (7.14) satisfies (7.18), and the solution η to the following equation

  dη(t) = −[A*η(t) + (F(t) − D1D⁻¹G(t))* η(t) + G(t)*h(t)] dt
          + (−(D⁻¹)*D1* η(t) + h(t)) dW(t)   in (τ, T],     (7.90)
  η(τ) = ητ

satisfies the inequality (7.27).

Under these assumptions, similarly to Definition 7.11, a process y ∈ CF([0,T]; L²(Ω;O′)) is called a transposition solution to the system (7.88) if for any τ ∈ [0,T] and ξτ ∈ L²_Fτ(Ω;O), it holds that

  E⟨y(τ), ξτ⟩_{O′,O} − ⟨y0, ξ(0)⟩_{O′,O}
  = E ∫₀^τ ⟨u(t), B*ξ(t)⟩_U dt + E ∫₀^τ ⟨f(t) + D1v(t), ξ(t)⟩_{O′,O} dt
    + E ∫₀^τ ⟨Dv(t) + B1u(t), Ξ(t)⟩_{L₂(V;O′),L₂(V;O)} dt,     (7.91)

where (ξ, Ξ) solves (7.14). Also, similarly to Definition 7.13, a pair of processes (y, Y) ∈ CF([0,T]; L²(Ω;O′)) × L²_F(0,T; L₂(V;O′)) is called a transposition solution to the system (7.89) if for every τ ∈ [0,T), ητ ∈ L²_Fτ(Ω;O) and h ∈ L²_F(0,T; L₂(V;O)), it holds that

  E⟨yT, η(T)⟩_{O′,O} − E⟨y(τ), ητ⟩_{O′,O}
  = E ∫_τ^T ⟨u(t), B*η(t)⟩_U dt + E ∫_τ^T ⟨f(t) − D1D⁻¹B1u(t), η(t)⟩_{O′,O} dt
    + E ∫_τ^T ⟨Y(t) + B1u(t), h(t)⟩_{L₂(V;O′),L₂(V;O)} dt,     (7.92)

where η solves (7.90). Proceeding exactly as in Theorems 7.12 and 7.14, one can show that the systems (7.88) and (7.89) are well-posed in the sense of transposition solutions.

Similarly to Theorem 7.31, we have the following result.

Theorem 7.40. The system (7.88) is exactly controllable at time T if and only if the system (7.89) is exactly controllable at time 0.

Proof: We only prove the "if" part (the "only if" part can be proved similarly). Let y0 ∈ O′ and yT ∈ L²_FT(Ω;O′) be arbitrarily given. Since (7.89) is exactly controllable at time 0, one can find a control u ∈ UT such that the transposition solution (y, Y) ∈ CF([0,T]; L²(Ω;O′)) × L²_F(0,T; L₂(V;O′))

to (7.89) satisfies y(0) = y0, and, by (7.92), for any τ′ ∈ [0,T], ητ′ ∈ L²_{Fτ′}(Ω;O) and h ∈ L²_F(0,T;L₂(V;O)), it holds that

  E⟨yT, η(T)⟩_{O′,O} − E⟨y(τ′), ητ′⟩_{O′,O}
  = E ∫_{τ′}^T ⟨u(t), B*η(t)⟩_U dt + E ∫_{τ′}^T ⟨f(t) − D1D⁻¹B1u(t), η(t)⟩_{O′,O} dt
    + E ∫_{τ′}^T ⟨Y(t) + B1u(t), h(t)⟩_{L₂(V;O′),L₂(V;O)} dt,     (7.93)

where η solves (7.90) with τ replaced by τ′.

We claim that the same y as above is the transposition solution to (7.88) with the above u and with v = D⁻¹Y. Indeed, we only need to show that (7.91) holds for any τ ∈ [0,T] and ξτ ∈ L²_Fτ(Ω;O), with the corresponding mild solution (ξ, Ξ) to (7.14). For this purpose, applying (7.93) with τ′ ∈ [0,T) and with

  ξ(t) = η(t),   Ξ(t) = −(D⁻¹)*D1* η(t) + h(t),   a.e. t ∈ (τ′, T),

where (ξ, Ξ) solves (7.14) with τ = T and ξT ∈ L²_FT(Ω;O) given arbitrarily, we obtain that

  E⟨yT, ξT⟩_{O′,O} − E⟨y(τ′), ξ(τ′)⟩_{O′,O}
  = E ∫_{τ′}^T ⟨u(t), B*ξ(t)⟩_U dt + E ∫_{τ′}^T ⟨f(t) − D1D⁻¹B1u(t), ξ(t)⟩_{O′,O} dt
    + E ∫_{τ′}^T ⟨Y(t) + B1u(t), Ξ(t) + (D⁻¹)*D1* ξ(t)⟩_{L₂(V;O′),L₂(V;O)} dt     (7.94)
  = E ∫_{τ′}^T ⟨u(t), B*ξ(t)⟩_U dt + E ∫_{τ′}^T ⟨f(t) + D1v(t), ξ(t)⟩_{O′,O} dt
    + E ∫_{τ′}^T ⟨Dv(t) + B1u(t), Ξ(t)⟩_{L₂(V;O′),L₂(V;O)} dt.

In particular, by (7.94) and recalling that y(0) = y0, we have

  E⟨yT, ξT⟩_{O′,O} − ⟨y0, ξ(0)⟩_{O′,O}
  = E ∫₀^T ⟨u(t), B*ξ(t)⟩_U dt + E ∫₀^T ⟨f(t) + D1v(t), ξ(t)⟩_{O′,O} dt
    + E ∫₀^T ⟨Dv(t) + B1u(t), Ξ(t)⟩_{L₂(V;O′),L₂(V;O)} dt.     (7.95)

Combining (7.94) and (7.95), we get

  E⟨y(τ′), ξ(τ′)⟩_{O′,O} − ⟨y0, ξ(0)⟩_{O′,O}
  = E ∫₀^{τ′} ⟨u(t), B*ξ(t)⟩_U dt + E ∫₀^{τ′} ⟨f(t) + D1v(t), ξ(t)⟩_{O′,O} dt
    + E ∫₀^{τ′} ⟨Dv(t) + B1u(t), Ξ(t)⟩_{L₂(V;O′),L₂(V;O)} dt.     (7.96)

Since ξT ∈ L²_FT(Ω;O) is given arbitrarily, by (7.96) we obtain (7.91). Noting that y(0) = y0 and y(T) = yT, we conclude the exact controllability of (7.88) at time T. □

Similarly to Theorem 7.32, Corollary 7.33 and Theorem 7.39, we can prove the following result.

Theorem 7.41. 1) The system (7.88) is null controllable at time T if and only if the system (7.89) is exactly controllable at time 0;
2) The system (7.88) is null controllable at time T if and only if it is exactly controllable at time T;
3) The system (7.88) is approximately controllable at time T if and only if the system (7.89) is approximately controllable at time 0.

Also, the exact controllability problem of (7.88) can be reduced to a suitable observability estimate for the following stochastic evolution equation on O:

  dx = −(A* + F* − G*(D⁻¹)*D1*) x dt − (D⁻¹)*D1* x dW(t)   in (0, T],     (7.97)
  x(0) = x0.

More precisely, similarly to Theorem 7.34, we have the following result:

Theorem 7.42. The system (7.88) is exactly controllable at time T if and only if all solutions to (7.97) satisfy the following observability estimate:

  |x0|²_O ≤ C E ∫₀ᵀ |(B − D1D⁻¹B1)* x(t)|²_U dt,   ∀ x0 ∈ O.

Remark 7.43. Clearly, similarly to Theorem 7.35, if D1 = 0, then, by Theorem 7.42, the exact controllability of (7.88) at time T is equivalent to the fact that all solutions of the following random evolution equation on O:

  dx/dt = −(A* + F*) x   in (0, T],
  x(0) = x0

satisfy the observability estimate:

  |x0|²_O ≤ C E ∫₀ᵀ |B* x(t)|²_U dt,   ∀ x0 ∈ O.


7.6 Notes and Comments

Since the seminal paper [163], controllability and observability have become the basis of Control Theory. Controllability and observability problems for deterministic systems have attracted many studies; it is impossible for us to list them comprehensively even if we restrict ourselves to systems governed by abstract evolution equations. Interested readers are referred to [8, 67, 325] and the rich references therein. The relationships between controllability and observability for deterministic linear control systems are well known and can be established by several different methods (e.g. [123, 202, 207]). Similar relationships in the stochastic setting have been studied in [40, 125, 126, 303].

The main concern in this chapter is to reduce the controllability problems for both forward and backward stochastic evolution equations to the observability or observability estimates of their dual equations, particularly in the case of unbounded control operators. To do this, as in [40, 125, 126, 202, 303], we use the classical duality argument. Note that, for concrete stochastic partial differential equations, it is far from easy to establish the desired observability or observability estimates. Actually, in the next four chapters, the main task is to derive the needed observability estimates for stochastic transport equations, stochastic parabolic equations, stochastic wave equations and stochastic Schrödinger equations, respectively.

In the deterministic setting, the variational characterization of the forms of controls for the controllability problems provides a useful way to compute the desired controls numerically (e.g. [123]). It would be quite interesting to extend these results to the stochastic setting. Nevertheless, as far as we know, there is no published result for the stochastic counterpart, especially on the numerical solution of stochastic controllability problems, even for stochastic evolution equations in finite dimensions.
There are several other concepts of controllability for stochastic evolution equations (e.g. [51, 75, 309, 310, 376]). Among them, we mention the so-called ε-controllability, which we recall below. For any ε > 0, p ∈ [0,1] and y0 ∈ H, put

  S(ε, p; y0) ≜ { yT ∈ H : P(|y(T) − yT|²_H > ε) ≤ 1 − p for some u ∈ UT },

where y(·) solves (7.6). Following [309], we introduce the following notion of controllability:

Definition 7.44. The system (7.6) is called completely ε-controllable in probability p (in the normed square sense) if S(ε, p; y0) = H for any y0 ∈ H.

Roughly speaking, if the system (7.6) is completely ε-controllable, then it can transfer any given initial state y0 into the ε-neighborhood of an arbitrary point in H with probability not less than p. This kind of controllability and some generalizations have been studied by several authors in the case that D = 0,


G = 0 and F is deterministic (in the system (7.6)) (e.g., [16, 17, 253, 254, 255]). In [17], a stronger controllability concept was introduced for the system (7.6); that is, not only S(ε, p; y0) = H for any y0 ∈ H, but also S̄(ε, p; y0) = H for any y0 ∈ H, where

  S̄(ε, p; y0) ≜ { yT ∈ H : P(|y(T) − yT|²_H > ε) ≤ 1 − p and Ey(T) = yT for some u ∈ UT },

and y(·) solves (7.6). The above assumptions on D, F and G are essential for the method used in [16, 17, 253, 254, 255], which reduces the ε-controllability problem of a stochastic control system to the exact/approximate controllability of a deterministic control system. To the best of our knowledge, there are no published works on the ε-controllability problem for general stochastic control systems. Note that, unlike our definitions of controllability, when the stochastic control system reduces to a deterministic one, the above complete ε-controllability does not coincide with the usual notion of exact controllability. Because of this, it seems that our definitions are more physically meaningful.

In Section 4.3 of Chapter 4, we introduced the notion of transposition solution for backward stochastic evolution equations with general filtration, to overcome the difficulty that there is no Martingale Representation Theorem. In Section 7.2, we also used the notion of transposition solution, but there the purpose was to overcome the difficulty introduced by the unbounded control operators. Note, however, that the idea behind solving these two different problems is very similar, namely to study the well-posedness of a "less-understood" equation by means of another, "well-understood" one. For clarity, it might be better to use two different notions for them, but we prefer to keep the present presentation, since the reader should be able to detect the differences easily from the context. Note that, in this chapter, we assume that the filtration is natural.
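As a concrete toy illustration of the ε-controllability in Definition 7.44 (a hypothetical scalar example, not taken from the text): for dy = u dt + σ dW(t) with the open-loop control u ≡ (yT − y0)/T, one has y(T) − yT = σW(T), so P(|y(T) − yT|²_H > ε) is available in closed form and can be checked by Monte Carlo:

```python
import numpy as np
from math import erf, sqrt

# Illustrative scalar system: dy = u dt + sigma dW, y(0) = y0.  The control
# u = (yT - y0)/T steers the mean to yT, and y(T) - yT = sigma*W(T).
rng = np.random.default_rng(1)
y0, yT, T, sigma, eps = 0.0, 1.0, 1.0, 0.3, 0.05
u = (yT - y0) / T

N = 200_000
WT = rng.normal(0.0, sqrt(T), size=N)       # W(T) is exactly N(0, T) here
yT_samples = y0 + u * T + sigma * WT
p_exceed = np.mean((yT_samples - yT) ** 2 > eps)

# Closed form: P(sigma^2 W(T)^2 > eps) = 2*(1 - Phi(sqrt(eps)/(sigma*sqrt(T)))).
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
p_exact = 2.0 * (1.0 - Phi(sqrt(eps) / (sigma * sqrt(T))))

print(f"Monte Carlo P(|y(T)-yT|^2 > eps) = {p_exceed:.4f}, exact = {p_exact:.4f}")
assert abs(p_exceed - p_exact) < 0.01
```

The system is thus completely ε-controllable in probability p for any p ≤ 1 − p_exact; note that no choice of u can make the probability zero, which is exactly the gap between ε-controllability and the exact controllability used in this book.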
It would be interesting to consider the same problems (studied in this chapter) under a general filtration, for which one needs the notion of transposition solutions to the backward stochastic evolution equations (7.7), (7.9), (7.14), (7.34), (7.51), (7.79) and (7.89) (especially for (7.9) and (7.89), with unbounded control operators); this remains to be done.

In many concrete control systems, it is very common that both the control and the observation operators are unbounded w.r.t. the state spaces. Typical examples are systems governed by partial differential equations in which the actuators and the sensors act on lower-dimensional hypersurfaces or on the boundary of a spatial domain. The unboundedness of the control and the observation operators leads to substantial technical difficulties, even for the formulation of the state spaces. In the deterministic setting, to overcome this sort of difficulty, the notions of admissible control operator and admissible observation operator were introduced (e.g. [298]). On the other hand, the notion of a well-posed linear system (with possibly unbounded control and observation operators) was introduced, which satisfies, roughly speaking, that the map from the input space to the output one is bounded (e.g. [298, 305]). The well-posed linear systems form a very general class whose basic properties (such as those for feedback control, dynamic stabilization, tracking/disturbance rejection and so on) are rich enough to develop a theory parallel to that for control systems with bounded control and observation operators. The concept of a well-posed linear system was generalized to the stochastic setting in [232], but it seems that much remains to be done to develop a satisfactory theory of stochastic well-posed linear systems.

8 Exact Controllability for Stochastic Transport Equations

In this chapter, we are concerned with the exact boundary controllability of stochastic transport equations. By the duality argument, the controllability problem is reduced to a suitable observability estimate for backward stochastic transport equations, and we employ a stochastic version of the global Carleman estimate to derive such an estimate.

8.1 Formulation of the Problem and the Main Result

In this chapter and the next three, we apply our general controllability results (for stochastic evolution equations) presented in the last chapter to some typical, concrete stochastic partial differential equations. As we shall see later, this is by no means an easy task; indeed, generally speaking, it is usually highly nontrivial to establish the needed observability estimates for the dual equations. In some sense, the transport equation is one of the simplest partial differential equations, and therefore in this chapter we focus on the exact controllability of stochastic transport equations.

Let T > 0 and let (Ω, F, F, P) be a complete filtered probability space on which a one-dimensional Brownian motion W(·) is defined, where F = {Ft}_{t∈[0,T]} is the natural filtration generated by W(·). Denote by 𝔽 the progressive σ-field (in [0,T] × Ω) w.r.t. F. Let G ⊂ Rⁿ (n ∈ ℕ) be a bounded domain with a C¹ boundary Γ, and denote by ν(x) the unit outward normal vector of G at x ∈ Γ. Put

  Q ≡ (0,T) × G,   Σ ≡ (0,T) × Γ,   R = max_{x∈Γ} |x|_{Rⁿ}.

Fix any O ∈ Rⁿ satisfying |O|_{Rⁿ} = 1, and set

  Γ⁻ ≜ {x ∈ Γ : O · ν(x) < 0},   Γ⁺ = Γ \ Γ⁻,
  Σ⁻ = (0,T) × Γ⁻,   Σ⁺ = (0,T) × Γ⁺.
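For intuition, the inflow part Γ⁻ can be computed explicitly for simple domains. A small sketch for the unit disk in R² (an illustrative choice, not from the text), where the outward unit normal at a boundary point x is ν(x) = x:

```python
import numpy as np

# Illustrative check of Γ⁻ and Γ⁺ for the unit disk G ⊂ R², where the
# outward unit normal at x ∈ Γ is ν(x) = x.
O = np.array([1.0, 0.0])                                   # direction with |O| = 1

theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
boundary = np.column_stack([np.cos(theta), np.sin(theta)])  # points of Γ
nu = boundary                                               # ν(x) = x on the unit circle

on_gamma_minus = nu @ O < 0.0          # Γ⁻ = {x ∈ Γ : O·ν(x) < 0}
# For this O, Γ⁻ is exactly the half circle where cos(θ) < 0.
assert np.array_equal(on_gamma_minus, np.cos(theta) < 0.0)
print(f"fraction of Γ lying on Γ⁻: {on_gamma_minus.mean():.2f}")
```

For the transport operator ∂t + O·∇, the set Γ⁻ is the part of the boundary through which characteristics enter G, which is why the boundary control in (8.4) below acts on Σ⁻.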


© Springer Nature Switzerland AG 2021 Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_8


Define a Hilbert space L²_O(Γ⁻) as the completion of all h ∈ C₀^∞(Γ⁻) w.r.t. the norm

  |h|_{L²_O(Γ⁻)} ≜ ( −∫_{Γ⁻} O · ν |h|² dΓ )^{1/2}.

Clearly, the usual L²(Γ⁻) is dense in L²_O(Γ⁻).

Let us first recall the controllability problem for the following deterministic transport equation:

  yt + O · ∇y = a01 y   in Q,
  y = u                 on Σ⁻,     (8.1)
  y(0) = y0             in G,

where a01 ∈ L^∞(0,T; L^∞(G)). In (8.1), y is the state variable and u is the control variable. The state and control spaces are chosen to be L²(G) and L²(0,T; L²_O(Γ⁻)), respectively. It is easy to show that, for any y0 ∈ L²(G) and u ∈ L²(0,T; L²_O(Γ⁻)), (8.1) admits one and only one (transposition) solution y(·) ∈ C([0,T]; L²(G)). The system (8.1) is said to be exactly controllable at time T if for any given y0, y1 ∈ L²(G), one can find a control u ∈ L²(0,T; L²_O(Γ⁻)) such that the corresponding solution y(·) ∈ C([0,T]; L²(G)) to (8.1) satisfies y(T) = y1.

In order to solve the above exact controllability problem for (8.1), one introduces the following dual equation:

  zt + O · ∇z = −a01 z   in Q,
  z = 0                  on Σ⁺,     (8.2)
  z(T) = zT              in G.

By means of the classical duality argument (similarly to the conclusion 1) in Theorems 7.1 and 7.24), it is easy to show the following result.

Proposition 8.1. The equation (8.1) is exactly controllable at time T if and only if solutions to the equation (8.2) satisfy the following observability estimate:

  |zT|_{L²(G)} ≤ C |z|_{L²_O(Σ⁻)},   ∀ zT ∈ L²(G).     (8.3)

Controllability and observability problems for deterministic transport equations are now well understood. Indeed, one can use the global Carleman estimate to prove the observability inequality (8.3) for T > 2R (cf. [166]), and the exact controllability of (8.1) then follows. In the rest of this chapter, we shall see that things are different in the stochastic setting.

Now, we consider the following controlled stochastic transport equation:

  dy + O · ∇y dt = (a1 y + a3 v) dt + (a2 y + v) dW(t)   in Q,
  y = u                                                  on Σ⁻,     (8.4)
  y(0) = y0                                              in G.

Here y0 ∈ L²(G) and a1, a2, a3 ∈ L^∞_F(0,T; L^∞(G)). In (8.4), y is the state, while u ∈ L²_F(0,T; L²_O(Γ⁻)) and v ∈ L²_F(0,T; L²(G)) are two controls.

Remark 8.2. We introduce two controls into the system (8.4). Moreover, the control v acts on the whole domain, and it also affects the system through the drift term (in the form a3 v). Compared with deterministic transport equations, it may seem that we choose too many controls. However, by Proposition 7.6, it is easy to see that two controls are really necessary for our exact controllability problem for (8.4), to be defined below.

Solutions to (8.4) are also understood in the transposition sense. For this, we introduce the following backward stochastic transport equation:

  dz + O · ∇z dt = −[(a1 + a2a3)z + a2Z] dt + (Z − a3z) dW(t)   in Qτ,
  z = 0                                                          on Στ⁺,     (8.5)
  z(τ) = zτ                                                      in G,

where τ ∈ (0,T], Qτ = (0,τ) × G, Στ⁺ = (0,τ) × Γ⁺ and zτ ∈ L²_Fτ(Ω; L²(G)).

Define an unbounded operator A on L²(G) as follows:

  D(A) = {h ∈ H¹(G) : h|_{Γ⁻} = 0},
  Ah = −O · ∇h,   ∀ h ∈ D(A).     (8.6)

Clearly, D(A) is dense in L²(G) and A is closed. Furthermore, for every h ∈ D(A),

  ⟨Ah, h⟩_{L²(G)} = −∫_G h O · ∇h dx = −(1/2) ∫_{Γ⁺} O · ν |h|² dΓ ≤ 0.

One can easily check that the adjoint operator of A is given by

  D(A*) = {h ∈ H¹(G) : h = 0 on Γ⁺},
  A*h = O · ∇h,   ∀ h ∈ D(A*).     (8.7)

For every h ∈ D(A*), it holds that

  ⟨A*h, h⟩_{L²(G)} = ∫_G h O · ∇h dx = (1/2) ∫_{Γ⁻} O · ν |h|² dΓ ≤ 0.

Hence, both A and A* are dissipative operators. By standard operator semigroup theory (e.g., [87, Page 84]), A generates a contractive C0-semigroup on L²(G). By Theorem 4.10, we have the following well-posedness result for the backward stochastic partial differential equation (8.5).


Proposition 8.3. For any τ ∈ (0,T] and zτ ∈ L²_Fτ(Ω; L²(G)), the equation (8.5) admits a unique mild solution (z, Z) ∈ L²_F(Ω; C([0,τ]; L²(G))) × L²_F(0,τ; L²(G)) such that

  |z|_{L²_F(Ω;C([0,τ];L²(G)))} + |Z|_{L²_F(0,τ;L²(G))} ≤ C |zτ|_{L²_Fτ(Ω;L²(G))},     (8.8)

where C is a constant independent of τ and zτ.

Note that, by Proposition 8.3, the first component z (of the solution to (8.5)) belongs to L²_F(Ω; C([0,τ]; L²(G))); hence, it is not obvious that z|_{Γ⁻} ∈ L²_F(0,τ; L²_O(Γ⁻)). The latter is guaranteed by Proposition 8.6 in the next section. This fact will be used in the following notion:

Definition 8.4. A stochastic process y ∈ CF([0,T]; L²(Ω; L²(G))) is called a transposition solution to (8.4) if for any τ ∈ (0,T] and zτ ∈ L²_Fτ(Ω; L²(G)), it holds that

  E⟨y(τ), zτ⟩_{L²(G)} − ⟨y0, z(0)⟩_{L²(G)}
  = E ∫₀^τ ⟨v, Z⟩_{L²(G)} dt − E ∫₀^τ ∫_{Γ⁻} O · ν u z dΓ dt.     (8.9)

Here (z, Z) solves (8.5).

The main result of this chapter is the following theorem on the exact controllability of (8.4).

Theorem 8.5. If T > 2R, then the system (8.4) is exactly controllable at time T; i.e., for any y0 ∈ L²(G) and yT ∈ L²_FT(Ω; L²(G)), one can find a pair of controls (u, v) ∈ L²_F(0,T; L²_O(Γ⁻)) × L²_F(0,T; L²(G)) such that the corresponding (transposition) solution y to (8.4) satisfies y(T) = yT in G, a.s.

Similarly to the conclusion 1) in Theorem 7.24 (in Section 7.3), in order to prove Theorem 8.5, it suffices to establish a suitable observability estimate for (8.5) with τ = T. The latter will be done in Theorem 8.11 (in Section 8.3) by means of a stochastic version of the global Carleman estimate.
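The threshold T > 2R in Theorem 8.5 has a simple geometric meaning: solutions of the transport operator travel along the characteristics x + tO, which sweep every point of G across the boundary within time 2R. A toy 1-D numerical sketch (G = (−1, 1) and O = 1, so R = 1; the initial profile is a hypothetical choice):

```python
import numpy as np

# 1-D analogue: solutions of y_t + y_x = 0 are constant along the
# characteristics x0 + t, i.e. a pure shift of the initial profile; every
# characteristic crosses G = (-1, 1) within time 2 = 2R.
y0 = lambda x: np.exp(-10.0 * x ** 2)   # hypothetical initial profile

x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
dt = dx                                  # CFL number 1: upwind transport is an exact shift
steps = 100                              # evolve to t = 0.5
y = y0(x)
for _ in range(steps):
    y[1:] -= (dt / dx) * (y[1:] - y[:-1])   # first-order upwind scheme
    y[0] = 0.0                              # zero inflow datum at x = -1

t = steps * dt
# Where the characteristic x - t started inside G, the computed solution is
# the shifted initial profile y0(x - t); the rest has been flushed out.
inside = x - t >= -1.0
assert np.max(np.abs(y[inside] - y0(x - t)[inside])) < 1e-10
print(f"after t = {t:.2f}, the profile is transported along characteristics")
```

After time 2R the whole initial state has crossed the boundary, which is the geometric reason why an observation (or control) horizon T > 2R suffices in the deterministic case; the stochastic Carleman estimate of Section 8.3 recovers the same threshold.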

8.2 Hidden Regularity and a Weighted Identity

This section presents some preliminary results. We first prove the following result:

Proposition 8.6. Solutions to the equation (8.5) satisfy

  |z|²_{L²_F(0,τ;L²_O(Γ⁻))} ≤ C E|zτ|²_{L²(G)},   ∀ τ ∈ (0,T], zτ ∈ L²_Fτ(Ω; L²(G)).


Proof: For simplicity, we only consider the case τ = T. By Theorem 4.10 and Proposition 4.7, one can show that if zT ∈ L²_FT(Ω; D(A*)), then the solution (z, Z) ∈ (L²_F(Ω; C([0,T]; L²(G))) ∩ L²_F(0,T; D(A*))) × L²_F(0,T; L²(G)). It follows from Itô's formula that

  E|zT|²_{L²(G)} − |z(0)|²_{L²(G)}
  = −2E ∫₀ᵀ ∫_G z O · ∇z dx dt
    + E ∫₀ᵀ ∫_G { −2z[(a1 + a2a3)z + a2Z] + |Z − a3z|² } dx dt.

Therefore,

  −E ∫₀ᵀ ∫_{Γ⁻} O · ν z² dΓ dt
  ≤ E|zT|²_{L²(G)} + 2E ∫₀ᵀ ∫_G z[(a1 + a2a3)z + a2Z] dx dt ≤ C E|zT|²_{L²(G)}.     (8.10)

For any zT ∈ L²_FT(Ω; L²(G)), we can find a sequence {z_T^(k)}∞_{k=1} ⊂ L²_FT(Ω; D(A*)) such that lim_{k→∞} z_T^(k) = zT in L²_FT(Ω; L²(G)). Hence, the inequality (8.10) also holds for zT ∈ L²_FT(Ω; L²(G)). □

Remark 8.7. The fact that z|_{Γ⁻} ∈ L²_F(0,τ; L²_O(Γ⁻)) (as shown in Proposition 8.6) is sometimes called a hidden regularity property, because it does not follow from the classical trace theorem for Sobolev spaces.

We have the following well-posedness result for (8.4).

Proposition 8.8. For any y0 ∈ L²(G), u ∈ L²_F(0,T; L²_O(Γ⁻)) and v ∈ L²_F(0,T; L²(G)), the equation (8.4) admits a unique (transposition) solution y ∈ CF([0,T]; L²(Ω; L²(G))) such that

  |y|_{CF([0,T];L²(Ω;L²(G)))} ≤ C ( |y0|_{L²(G)} + |u|_{L²_F(0,T;L²_O(Γ⁻))} + |v|_{L²_F(0,T;L²(G))} ).     (8.11)

Proof: Proposition 8.8 is actually a consequence of Theorem 7.12 (for abstract stochastic evolution equations). Here, for the reader's convenience, we give a "concrete" proof (without using the abstract result in Theorem 7.12). We only prove the existence.

Since u ∈ L²_F(0,T; L²_O(Γ⁻)), there exists a sequence {um}∞_{m=1} ⊂ CF([0,T]; H₀^{3/2}(Γ⁻)) with um(0) = 0 for all m ∈ ℕ such that

  lim_{m→∞} um = u in L²_F(0,T; L²_O(Γ⁻)).     (8.12)

For each m ∈ ℕ, we can find ũm ∈ CF([0,T]; H²(G)) such that ũm|_{Γ⁻} = um and ũm(0) = 0.


8 Exact Controllability for Stochastic Transport Equations

Consider the following stochastic transport equation:

    dỹ_m + O·∇ỹ_m dt = (a₁ỹ_m + a₃v + ζ_m) dt + [a₂(ỹ_m + ũ_m) + v] dW(t) in Q,
    ỹ_m = 0 on Σ,
    ỹ_m(0) = y₀ in G,   (8.13)

where ζ_m = −ũ_{m,t} − O·∇ũ_m + a₁ũ_m. By Theorem 3.13, the system (8.13) admits a unique mild solution ỹ_m ∈ C_F([0, T]; L²(Ω; L²(G))). Let y_m = ỹ_m + ũ_m. For any τ ∈ (0, T] and z_τ ∈ L²_{F_τ}(Ω; L²(G)), by Itô's formula and integration by parts, we have

    E⟨y_m(τ), z_τ⟩_{L²(G)} − ⟨y₀, z(0)⟩_{L²(G)} = E ∫_0^τ ⟨v, Z⟩_{L²(G)} dt − E ∫_0^τ ∫_{Γ⁻} O·ν u_m z dΓ⁻ dt,   (8.14)

where (z, Z) is the mild solution to (8.5). Consequently, for any m₁, m₂ ∈ N, it holds that

    E⟨y_{m₁}(τ) − y_{m₂}(τ), z_τ⟩_{L²(G)} = −E ∫_0^τ ∫_{Γ⁻} O·ν (u_{m₁} − u_{m₂}) z dΓ⁻ dt.   (8.15)

Let us choose z_τ ∈ L²_{F_τ}(Ω; L²(G)) such that |z_τ|_{L²_{F_τ}(Ω;L²(G))} = 1 and

    E⟨y_{m₁}(τ) − y_{m₂}(τ), z_τ⟩_{L²(G)} ≥ ½ |y_{m₁}(τ) − y_{m₂}(τ)|_{L²_{F_τ}(Ω;L²(G))}.   (8.16)

It follows from (8.15), (8.16) and Proposition 8.6 that

    |y_{m₁}(τ) − y_{m₂}(τ)|_{L²_{F_τ}(Ω;L²(G))}
      ≤ 2 | E ∫_0^τ ∫_{Γ⁻} O·ν (u_{m₁} − u_{m₂}) z dΓ⁻ ds |
      ≤ C |u_{m₁} − u_{m₂}|_{L²_F(0,T;L²_O(Γ⁻))} |z_τ|_{L²_{F_τ}(Ω;L²(G))}
      ≤ C |u_{m₁} − u_{m₂}|_{L²_F(0,T;L²_O(Γ⁻))},

where the constant C is independent of τ. Consequently, it holds that

    |y_{m₁} − y_{m₂}|_{C_F([0,T];L²(Ω;L²(G)))} ≤ C |u_{m₁} − u_{m₂}|_{L²_F(0,T;L²_O(Γ⁻))}.

Hence, {y_m}_{m=1}^∞ is a Cauchy sequence in C_F([0, T]; L²(Ω; L²(G))). Denote by y the limit of {y_m}_{m=1}^∞ in C_F([0, T]; L²(Ω; L²(G))). Letting m → ∞ in (8.14), we see that y satisfies (8.9). Thus, y is a transposition solution to (8.4).

Let us choose z_τ ∈ L²_{F_τ}(Ω; L²(G)) such that |z_τ|_{L²_{F_τ}(Ω;L²(G))} = 1 and

    E⟨y(τ), z_τ⟩_{L²(G)} ≥ ½ |y(τ)|_{L²_{F_τ}(Ω;L²(G))}.   (8.17)


Combining (8.9), (8.17) and using Proposition 8.6 again, we obtain that

    |y(τ)|_{L²_{F_τ}(Ω;L²(G))}
      ≤ 2 ( ⟨y₀, z(0)⟩_{L²(G)} + E ∫_0^τ ⟨v, Z⟩_{L²(G)} dt + E ∫_0^τ ∫_{Γ⁻} O·ν u z dΓ⁻ dt )
      ≤ C ( |y₀|_{L²(G)} + |u|_{L²_F(0,T;L²_O(Γ⁻))} + |v|_{L²_F(0,T;L²(G))} ),

where the constant C is independent of τ. Therefore, we obtain the desired estimate (8.11). This completes the proof of Proposition 8.8.

Next, we present a weighted identity for the stochastic transport operator d + O·∇ dt, which will play an important role in establishing the global Carleman estimate for (8.5). Let λ > 0, and let c ∈ (0, 1) be such that cT > 2R. Put

    ℓ = λ[ |x|²_{Rⁿ} − c(t − T/2)² ]   and   θ = e^ℓ.   (8.18)

Similarly to (1.48) (at least in spirit), we have the following key weighted identity:

Proposition 8.9. Let u be an H¹(Rⁿ)-valued Itô process, and let v = θu. Then

    −θ(ℓ_t + O·∇ℓ) v (du + O·∇u dt)
      = −½ d[(ℓ_t + O·∇ℓ)v²] − ½ O·∇[(ℓ_t + O·∇ℓ)v²] dt
        + ½ [ℓ_{tt} + O·∇(O·∇ℓ) + 2O·∇ℓ_t] v² dt
        + ½ (ℓ_t + O·∇ℓ)(dv)² + (ℓ_t + O·∇ℓ)² v² dt.   (8.19)

Proof: Clearly,

    θ(du + O·∇u dt) = θ d(θ⁻¹v) + θ O·∇(θ⁻¹v) dt = dv + O·∇v dt − (ℓ_t + O·∇ℓ)v dt.   (8.20)

Thus,

    −θ(ℓ_t + O·∇ℓ) v (du + O·∇u dt)
      = −(ℓ_t + O·∇ℓ) v [dv + O·∇v dt − (ℓ_t + O·∇ℓ)v dt]
      = −(ℓ_t + O·∇ℓ) v (dv + O·∇v dt) + (ℓ_t + O·∇ℓ)² v² dt.

It is easy to see that


 1 1 1   −ℓt vdv = − d(ℓt v2 ) + ℓtt v2 dt + ℓt (dv)2 ,   2 2 2     1 1 1    −O · ∇ℓvdv = − d(O · ∇ℓv2 ) + (O · ∇ℓ)t v2 dt + O · ∇ℓ(dv)2 , 2 2 2  1 1  −ℓ vO · ∇vdt = − O · ∇(ℓ v2 )dt + O · ∇ℓ v2 dt,  t t t   2 2       −O · ∇ℓvO · ∇vdt = − 1 O · ∇(O · ∇ℓv2 )dt + 1 O · ∇(O · ∇ℓ)v2 dt. 2 2 This, together with (8.20), implies (8.19). This completes the proof of Proposition 8.9. Remark 8.10. It is easy to see that, the spirit for proving the identity (8.19) in Proposition 8.9 is very close to that for the identity (1.48).

8.3 Observability Estimate for Backward Stochastic Transport Equations

In this section, we shall establish the following observability estimate for the equation (8.5).

Theorem 8.11. If T > 2R, then solutions to the equation (8.5) with τ = T satisfy

    |z_T|_{L²_{F_T}(Ω;L²(G))} ≤ C ( |z|_{L²_F(0,T;L²_O(Γ⁻))} + |Z|_{L²_F(0,T;L²(G))} ),   ∀ z_T ∈ L²_{F_T}(Ω; L²(G)).   (8.21)

Proof: Applying Proposition 8.9 to the equation (8.5) with u = z, integrating (8.19) over Q, using integration by parts, noting (8.18) and taking expectation, we get

    −2E ∫_Q θ²(ℓ_t + O·∇ℓ) z (dz + O·∇z dt) dx
      = λE ∫_G (cT − 2O·x) θ²(T) z²(T) dx + λE ∫_G (cT + 2O·x) θ²(0) z²(0) dx
        + 2(1 − c)λ E ∫_Q θ² z² dx dt + λE ∫_{Σ⁻} O·ν [c(T − 2t) − 2O·x] θ² z² dΣ⁻
        + E ∫_Q θ²(ℓ_t + O·∇ℓ)(dz)² dx + 2E ∫_Q θ²(ℓ_t + O·∇ℓ)² z² dx dt.   (8.22)

Since (z, Z) solves (8.5) with τ = T, we see that


    −2E ∫_Q θ²(ℓ_t + O·∇ℓ) z (dz + O·∇z dt) dx
      = 2E ∫_Q θ²(ℓ_t + O·∇ℓ) z [(a₁ + a₂a₃)z + a₂Z] dx dt
      ≤ ½ E ∫_Q θ²(ℓ_t + O·∇ℓ)² z² dx dt + C E ∫_Q θ²(z² + Z²) dx dt   (8.23)

and

    E ∫_Q θ²(ℓ_t + O·∇ℓ)(dz)² dx
      = E ∫_Q θ²(ℓ_t + O·∇ℓ)|Z − a₃z|² dx dt
      ≤ ½ E ∫_Q θ²(ℓ_t + O·∇ℓ)² z² dx dt + C E ∫_Q θ² z² dx dt + 4E ∫_Q θ² |ℓ_t + O·∇ℓ| Z² dx dt.   (8.24)

By (8.22)–(8.24) and z(T) = z_T in G, it follows that

    λE ∫_G (cT − 2O·x) θ²(T) z_T² dx + λE ∫_G (cT + 2O·x) θ²(0) z²(0) dx
      + 2(1 − c)λ E ∫_Q θ² z² dx dt + E ∫_Q θ²(ℓ_t + O·∇ℓ)² z² dx dt
    ≤ C E ∫_Q θ²(z² + λ²Z²) dx dt − λE ∫_{Σ⁻} O·ν [c(T − 2t) − 2O·x] θ² z² dΣ⁻,   (8.25)

where the constant C > 0 is independent of λ. Finally, recalling that R = max_{x∈Γ} |x|_{Rⁿ} and that c ∈ (0, 1) is such that cT > 2R, by choosing λ large enough in (8.25) we obtain the desired estimate (8.21). This completes the proof of Theorem 8.11.

8.4 Notes and Comments

The main result in this chapter is a modification and generalization of that in [231]. Generally speaking, there are three methods to establish the exact controllability of deterministic transport equations. The first is to utilize the explicit formula for their solutions. By this method, for some simple transport equations, one can explicitly construct a control steering the system from a given initial state to another given final state, provided that the time is large enough. It seems that this method cannot be used to solve our stochastic problem, since in general we do not have an explicit formula for solutions to the system (8.4). The second is the extension method, introduced in [294] to prove the exact controllability of the classical wave equation. It is also effective for the exact controllability problem for many hyperbolic-type equations, including transport equations. However, it seems to be valid only for time-reversible systems, and therefore it is not suitable for stochastic problems in the framework of Itô's integral. The third method is based on the duality between controllability and observability, via which the exact controllability problem is reduced to a suitable observability estimate for the dual equation, and the desired observability estimate is obtained by some global Carleman estimate (e.g., [43, 166]). In the literature, in order to obtain the observability estimate, one usually combines the Carleman estimate with the usual energy estimate (e.g., [166, 384]). It deserves mentioning that the desired observability inequality (8.21) in this chapter is established directly by means of the global Carleman estimate (without using the energy estimate). Indeed, even for the observability estimate for deterministic transport equations, such a method provides a new proof which is simpler than that in [166].

As mentioned in Remark 8.2, in order to study the exact controllability of the system (8.4), one does need to introduce two controls u and v (with the control v acting on the whole domain, in particular). Nevertheless, it seems unnecessary to introduce the extra control v if one is only concerned with the null/approximate controllability of stochastic transport equations. It would be quite interesting to study such weak versions of controllability for (8.4) (with the control v = 0), but it seems that some new tools should be developed to obtain nontrivial results. As far as we know, controllability problems for nonlinear stochastic transport equations are completely open.

9 Controllability and Observability of Stochastic Parabolic Systems

This chapter is devoted to studying the null/approximate controllability and observability of stochastic parabolic systems. The main results can be described as follows:

• When the coefficients of the underlying system are space-independent, based on the spectral method, we show the null/approximate controllability using only one control, applied to the drift term;
• For a class of coupled stochastic parabolic systems with only one control (applied to only one equation), the null controllability is derived when the system is effectively coupled by the drift terms and uncoupled by the diffusion terms;
• By means of the global Carleman estimate, we establish the observability estimate for stochastic parabolic systems with considerably general coefficients;
• The null/approximate controllability of general stochastic parabolic systems with two controls is shown by means of the duality argument.

In each of the above cases, we shall explain the main differences between the deterministic problem and its stochastic counterpart.

In this chapter, T > 0 is given, and (Ω, F, F, P) (with F ≜ {F_t}_{t∈[0,T]}) is a fixed filtered probability space, on which a one-dimensional standard Brownian motion W(·) is defined and F is the corresponding natural filtration. We denote by F the progressive σ-field w.r.t. F.

9.1 Formulation of the Problems

Parabolic equations (and their special case, the heat equation) form one of the most typical classes of partial differential equations. In the deterministic setting, the controllability theory for parabolic equations is among the most thoroughly studied. It is then very natural to ask what happens for its stochastic counterpart.

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_9


Throughout this chapter, G ⊂ Rⁿ (n ∈ N) is a given bounded domain with a C^∞ boundary Γ, and G₀ is a given nonempty open subset of G. Denote by χ_{G₀} the characteristic function of G₀. Put

    Q ≜ (0, T) × G,   Σ ≜ (0, T) × Γ,   Q₀ ≜ (0, T) × G₀.

Also, we assume that a^{jk}: G → R (j, k = 1, 2, ···, n) satisfy a^{jk} ∈ C²(G), a^{jk} = a^{kj}, and, for some s₀ > 0,

    ∑_{j,k=1}^n a^{jk}(x) ξ_j ξ_k ≥ s₀|ξ|²,   ∀ (x, ξ) ≡ (x₁, ···, x_n, ξ₁, ···, ξ_n) ∈ G × Rⁿ.   (9.1)

We shall denote by C a generic positive constant depending only on G, G₀, T and (a^{jk})_{n×n} (unless otherwise stated), which may change from one place to another.

We begin with the following controlled deterministic parabolic equation:

    y_t − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} = a₀₁·∇y + a₀₂ y + χ_{G₀} u in Q,
    y = 0 on Σ,
    y(0) = y₀ in G,   (9.2)

where a₀₁ ∈ L^∞(0, T; W^{1,∞}(G; Rⁿ)) and a₀₂ ∈ L^∞(0, T; L^∞(G)). In (9.2), y and u are the state and control variables; the state space and the control space are chosen to be L²(G) and L²(Q₀), respectively.

The equation (9.2) is said to be null controllable (resp. approximately controllable) if for any given y₀ ∈ L²(G) (resp. for any given ε > 0 and y₀, y₁ ∈ L²(G)), one can find a control u ∈ L²(Q₀) such that the solution y(·) ∈ C([0, T]; L²(G)) ∩ L²(0, T; H¹₀(G)) to (9.2) satisfies y(T) = 0 (resp. |y(T) − y₁|_{L²(G)} ≤ ε). Note that, due to the smoothing effect of solutions to parabolic equations, exact controllability for (9.2) is impossible, i.e., the above ε cannot be zero.

The dual equation of (9.2) reads

    z_t + ∑_{j,k=1}^n (a^{jk} z_{x_j})_{x_k} = −∇·(a₀₁ z) + a₀₂ z in Q,
    z = 0 on Σ,
    z(T) = z₀ in G.   (9.3)

Write w(t, x) = z(T − t, x) and w₀ = z₀. By (9.3), it is easy to check that

    w_t − ∑_{j,k=1}^n (a^{jk} w_{x_j})_{x_k} = −∇·(a₀₁ w) + a₀₂ w in Q,
    w = 0 on Σ,
    w(0) = w₀ in G.   (9.4)


By means of the standard duality argument (similar to Theorem 7.17), it is easy to show the following result.

Proposition 9.1. i) The equation (9.2) is null controllable if and only if solutions to the equation (9.4) satisfy the following observability estimate:

    |w(T)|_{L²(G)} ≤ C|w|_{L²(Q₀)},   ∀ w₀ ∈ L²(G);   (9.5)

ii) The equation (9.2) is approximately controllable if and only if solutions to the equation (9.4) satisfy the following unique continuation property: w = 0 in Q₀ implies w₀ = 0.

Controllability and observability of deterministic parabolic equations are now well understood. Indeed, one can use the global Carleman estimate to prove the observability inequality (9.5) (cf. [117, 357]), from which the null controllability of (9.2) follows. On the other hand, by Proposition 8.1 and the backward uniqueness of solutions to the equation (9.4), it is easy to deduce the approximate controllability of (9.2) from the null controllability of the same equation. In the rest of this chapter, we shall see a quite different picture in the stochastic setting.

We now fix m ∈ N and consider the following controlled stochastic parabolic system:

    dy − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} dt = ( ∑_{j=1}^n a_{1j} y_{x_j} + a₂ y + χ_{G₀} u + a₄ v ) dt + (a₃ y + v) dW(t) in Q,
    y = 0 on Σ,
    y(0) = y₀ in G,   (9.6)

where

    a_{1j} ∈ L^∞_F(0, T; W^{1,∞}(G; R^{m×m})), j = 1, 2, ···, n;   a_k ∈ L^∞_F(0, T; L^∞(G; R^{m×m})), k = 2, 3, 4.   (9.7)

In the system (9.6), the initial state y₀ ∈ L²_{F₀}(Ω; L²(G; R^m)), y is the state variable, and the control variable consists of the pair (u, v) ∈ L²_F(0, T; L²(G₀; R^m)) × L²_F(0, T; L²(G; R^m)). The main goals of this chapter are to study the null and approximate controllability of (9.6).

Remark 9.2. Similarly to the control system (8.4), the term a₄v reflects the effect of the control v in the diffusion term on the drift term. One can also consider the case where the control u in the drift term influences the diffusion term. Nevertheless, since the control v in the diffusion term acts on the whole domain, one can cancel such an effect by a suitable choice of v.


From the proof of Theorem 3.24 and using Theorem 3.10, it is easy to show the following well-posedness result for the equation (9.6).

Proposition 9.3. For any y₀ ∈ L²_{F₀}(Ω; L²(G; R^m)) and (u, v) ∈ L²_F(0, T; L²(G₀; R^m)) × L²_F(0, T; L²(G; R^m)), the system (9.6) admits a unique weak solution y ∈ L²_F(Ω; C([0, T]; L²(G; R^m))) ∩ L²_F(0, T; H¹₀(G; R^m)). Moreover,

    |y|_{L²_F(Ω;C([0,T];L²(G;R^m)))∩L²_F(0,T;H¹₀(G;R^m))}
      ≤ C e^{Cκ₁} ( |y₀|_{L²_{F₀}(Ω;L²(G;R^m))} + |(u, v)|_{L²_F(0,T;L²(G₀;R^m))×L²_F(0,T;L²(G;R^m))} ),   (9.8)

where

    κ₁ = ∑_{j=1}^n |a_{1j}|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + |a₂|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + |a₃|²_{L^∞_F(0,T;L^∞(G;R^{m×m}))}.

Note that we introduce two controls u and v in (9.6). In view of the controllability result for the deterministic parabolic equation (9.2), it is more natural to use only one control and consider the following controlled stochastic parabolic system (a special case of (9.6) with v ≡ 0):

    dy − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} dt = ( ∑_{j=1}^n a_{1j} y_{x_j} + a₂ y + χ_{G₀} u ) dt + a₃ y dW(t) in Q,
    y = 0 on Σ,
    y(0) = y₀ in G.   (9.9)

It is easy to see that the dual systems of (9.6) and (9.9) are, respectively, the following backward stochastic parabolic systems:

    dz + ∑_{j,k=1}^n (a^{jk} z_{x_j})_{x_k} dt = [ ∑_{j=1}^n (a_{1j}^⊤ z)_{x_j} − (a₂^⊤ − a₃^⊤ a₄^⊤) z − a₃^⊤ Z ] dt + (−a₄^⊤ z + Z) dW(t) in Q,
    z = 0 on Σ,
    z(T) = z_T in G,   (9.10)

and

    dz + ∑_{j,k=1}^n (a^{jk} z_{x_j})_{x_k} dt = [ ∑_{j=1}^n (a_{1j}^⊤ z)_{x_j} − a₂^⊤ z − a₃^⊤ Z ] dt + Z dW(t) in Q,
    z = 0 on Σ,
    z(T) = z_T in G.   (9.11)

Similarly to the proof of Theorem 3.24 and using Theorem 4.11, one can show the following well-posedness result for the equations (9.10) and (9.11):

Proposition 9.4. Under the condition (9.7), for any z_T ∈ L²_{F_T}(Ω; L²(G; R^m)), the system (9.10) (resp. (9.11)) admits a unique weak solution (z, Z) ∈ (C_F([0, T]; L²(Ω; L²(G; R^m))) ∩ L²_F(0, T; H¹₀(G; R^m))) × L²_F(0, T; L²(G; R^m)). Moreover, for any t ∈ [0, T],

    |(z(·), Z(·))|_{(C_F([0,t];L²(Ω;L²(G;R^m)))∩L²_F(0,t;H¹₀(G;R^m)))×L²_F(0,t;L²(G;R^m))} ≤ C e^{Cκ̃₁} |z(t)|_{L²_{F_t}(Ω;L²(G;R^m))},   (9.12)

where

    κ̃₁ = ∑_{j=1}^n |a_{1j}|_{L^∞_F(0,T;W^{1,∞}(G;R^{m×m}))} + |a₂|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + ∑_{k=3}^4 |a_k|²_{L^∞_F(0,T;L^∞(G;R^{m×m}))}   (9.13)

(resp. κ̃₁ = ∑_{j=1}^n |a_{1j}|_{L^∞_F(0,T;W^{1,∞}(G;R^{m×m}))} + |a₂|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + |a₃|²_{L^∞_F(0,T;L^∞(G;R^{m×m}))}).

In order to obtain the null controllability of (9.9), by Theorem 7.17 we need to show that solutions to the system (9.11) satisfy the following observability estimate:

    |z(0)|_{L²_{F₀}(Ω;L²(G;R^m))} ≤ C |χ_{G₀} z|_{L²_F(0,T;L²(G₀;R^m))},   ∀ z_T ∈ L²_{F_T}(Ω; L²(G; R^m)).   (9.14)

Unfortunately, at this moment we are not able to prove the observability estimate (9.14). Instead, we obtain a weak version of (9.14), i.e., a weak observability estimate (for the system (9.10)), in Theorem 9.37 (see Section 9.6). By duality, Theorem 9.37 implies the null controllability of (9.6).
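The duality between observability for the adjoint system and null controllability that is used throughout this chapter can be illustrated in finite dimensions. The following sketch (our own toy example, not taken from the book; the matrices A, B and the horizon T are arbitrary choices) steers a linear ODE system y' = Ay + Bu to zero via the controllability Gramian, which is the finite-dimensional counterpart of constructing a control from an observability inequality for the dual system:

```python
# Toy finite-dimensional analogue of the duality argument: steer
# y' = A y + B u from y0 to 0 at time T with the minimal-norm control
#   u(t) = -B^T exp(A^T (T - t)) W^{-1} exp(A T) y0,
# where W is the controllability Gramian. Illustrative sketch only.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
T, N = 1.0, 4000
dt = T / N
ts = np.arange(N) * dt

# Controllability Gramian W = ∫_0^T e^{As} B B^T e^{A^T s} ds (Riemann sum).
W = sum(expm(A * s) @ B @ B.T @ expm(A.T * s) for s in ts) * dt

y0 = np.array([1.0, -1.0])
lam = np.linalg.solve(W, expm(A * T) @ y0)        # W^{-1} e^{AT} y0
u = lambda t: -(B.T @ expm(A.T * (T - t)) @ lam)  # minimal-norm control

# Integrate y' = A y + B u(t) with a fine explicit Euler scheme.
y = y0.copy()
for t in ts:
    y = y + dt * (A @ y + (B @ u(t)).ravel())

print(np.linalg.norm(y))   # small: the state is driven to ~0
```

Exactly as in the infinite-dimensional setting, invertibility of the Gramian W (a quantitative observability property of the dual system) is what makes the control formula work; the final state vanishes up to discretization error.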


There are two main difficulties in establishing (9.14). The first is that, unlike in the deterministic setting (see (9.3)–(9.4)), it is impossible to reduce the backward stochastic parabolic system (9.10) to a (forward) stochastic parabolic system. The second is that, although the correction term Z plays a "coercive" role for the well-posedness of (9.10), it seems to be a "bad" (nonhomogeneous) term when one tries to prove (9.14) via the global Carleman estimate.

At least at this moment, generally speaking, we do technically need to introduce the second control v in (9.6). Indeed, it is unclear whether or not it is possible to use only one (F-adapted) control u such that the following random parabolic system

    y_t − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} = ∑_{j=1}^n a_{1j} y_{x_j} + a₂ y + χ_{G₀} u in Q,
    y = 0 on Σ,
    y(0) = y₀ in G   (9.15)

is null controllable. The answer seems to be "yes", but we do not know how to prove it in the general case.

Of course, it is quite interesting to study the controllability of (9.9), in which only one control u is introduced. So far, this problem is much less understood. Indeed, positive results are available only for some special cases of (9.9), in which the spectral method can be applied (see Sections 9.2–9.3). As we shall see later, even for these special cases, some new phenomena appear in the stochastic setting.

Although it does not imply any controllability for the same sort of equations, the observability of stochastic parabolic equations is still an interesting control problem in its own right. Because of this, as another main goal of this chapter, we shall study observability estimates for the following stochastic parabolic equation:

    dz − ∑_{j,k=1}^n (a^{jk} z_{x_j})_{x_k} dt = ( ∑_{j=1}^n b_{1j} z_{x_j} + b₂ z ) dt + b₃ z dW(t) in Q,
    z = 0 on Σ,
    z(0) = z₀ in G,   (9.16)

where

    b_{1j} ∈ L^∞_F(0, T; L^∞(G; R^{m×m})), j = 1, 2, ···, n;   b₂, b₃ ∈ L^∞_F(0, T; L^∞(G; R^{m×m})).   (9.17)

As a consequence of Proposition 9.3, it is easy to show the following:

Corollary 9.5. For any z₀ ∈ L²(G; R^m), the system (9.16) admits a unique weak solution z ∈ L²_F(Ω; C([0, T]; L²(G; R^m))) ∩ L²_F(0, T; H¹₀(G; R^m)). Moreover, for any t ∈ [0, T],

    |z(·)|_{L²_F(Ω;C([t,T];L²(G;R^m)))∩L²_F(t,T;H¹₀(G;R^m))} ≤ C e^{Cκ₂} |z(t)|_{L²_{F_t}(Ω;L²(G;R^m))},   (9.18)

where

    κ₂ = ∑_{j=1}^n |b_{1j}|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + |b₂|_{L^∞_F(0,T;L^∞(G;R^{m×m}))} + |b₃|²_{L^∞_F(0,T;L^∞(G;R^{m×m}))}.

We shall give an observability estimate for (9.16) in Theorem 9.28 (of Section 9.5). Since we only assume that b₃ ∈ L^∞_F(0, T; L^∞(G; R^{m×m})) (see (9.17)), one needs to overcome some difficulties to derive the observability estimate (9.93) for the system (9.16).

When m = 1 and b₃ ∈ L^∞_F(0, T; W^{2,∞}(G)), (9.16) can be reduced to a random parabolic equation. To see this, we write ϑ = e^{−∫_0^t b₃(s) dW(s)} and introduce the following simple transformation:

    z̃ = ϑz.   (9.19)

Then one can check that z̃ satisfies

    z̃_t − ∑_{j,k=1}^n (a^{jk} z̃_{x_j})_{x_k} = ∑_{j=1}^n b̃_{1j} z̃_{x_j} + b̃₂ z̃ in Q,
    z̃ = 0 on Σ,
    z̃(0) = z₀ in G,   (9.20)

where

    b̃_{1j} = b_{1j} − 2 ∑_{k=1}^n a^{jk} ∫_0^t b_{3,x_k}(s) dW(s),   j = 1, 2, ···, n,

    b̃₂ = b₂ − ½ b₃² − ∑_{j=1}^n b_{1j} ∫_0^t b_{3,x_j}(s) dW(s) − ∑_{j,k=1}^n ( a^{jk} ∫_0^t b_{3,x_j}(s) dW(s) )_{x_k}
          + ∑_{j,k=1}^n a^{jk} ( ∫_0^t b_{3,x_j}(s) dW(s) ) ( ∫_0^t b_{3,x_k}(s) dW(s) ).

One may expect that the observability estimate (9.93) (in Theorem 9.28) for (9.16) would follow from a similar observability estimate for the random parabolic equation (9.20). But usually this is not the case. Indeed, it is easy to see that, in general, neither b̃_{1j} nor b̃₂ is guaranteed to be bounded (w.r.t. the sample point ω) unless b₃ is space-independent. Because of this, it seems that one cannot apply Corollary 9.32 (a global Carleman estimate for random parabolic equations) directly to derive any useful observability estimate for (9.20).
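To see where the −½b₃² term in b̃₂ comes from and why the noise disappears, consider the simplest situation m = 1 with b₃ space-independent, so that ϑ carries no x-dependence (a sketch of the computation; the a^{jk}-terms in b̃_{1j} and b̃₂ above arise in exactly the same way from ∇∫_0^t b₃(s) dW(s) when b₃ depends on x):

```latex
% Write X(t) = -\int_0^t b_3(s)\,dW(s), so that \vartheta = e^{X}. Itô's formula gives
%   d\vartheta = \vartheta\,dX + \tfrac12\,\vartheta\,(dX)^2
%             = \vartheta\bigl(-b_3\,dW + \tfrac12 b_3^2\,dt\bigr).
% For \tilde z = \vartheta z, with dz = (\text{second-order terms} + b_2 z)\,dt + b_3 z\,dW,
\[
  d\tilde z
    = \vartheta\,dz + z\,d\vartheta + d\vartheta\,dz
    = \vartheta(\cdots)\,dt
      + \underbrace{\vartheta b_3 z\,dW - \vartheta b_3 z\,dW}_{=\,0}
      + \tfrac12 b_3^2\,\tilde z\,dt - b_3^2\,\tilde z\,dt .
\]
% The stochastic terms cancel, and the drift picks up exactly
% -\tfrac12 b_3^2\,\tilde z\,dt, i.e. the term -\tfrac12 b_3^2 in \tilde b_2.
```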


Remark 9.6. When m > 1, generally speaking, one cannot use the above technique (i.e., introducing a suitable transformation like (9.19)) to reduce (9.16) to a random parabolic system, even if b₃ ∈ L^∞_F(0, T; W^{2,∞}(G; R^{m×m})).

9.2 Controllability of a Class of Stochastic Parabolic Systems

This section is based on [225]. Throughout this section, we consider the following stochastic parabolic system:

    dy − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} dt = [a(t) y + χ_E(t) χ_{G₀}(x) u] dt + b(t) y dW(t) in Q,
    y = 0 on Σ,
    y(0) = y₀ in G,   (9.21)

where a(·) ∈ L^∞_F(0, T; R^{m×m}) and b(·) ∈ L^∞_F(0, T; R^{m×m}) are given, and E is a fixed Lebesgue measurable subset of (0, T) with positive Lebesgue measure (i.e., m(E) > 0). In (9.21), y is the state variable (valued in L²(G; R^m)), y₀ ∈ L²(G; R^m) is the initial state, u is the control variable, and the control space is L^∞_F(0, T; L²(Ω; L²(G; R^m))).

We have the following null controllability result for the system (9.21).

Theorem 9.7. The system (9.21) is null controllable at time T.

We shall give a proof of Theorem 9.7 in Subsection 9.2.2.

Remark 9.8. As mentioned in Section 9.1, when E = (0, T), one can use the global Carleman estimate to prove the corresponding null controllability result for the deterministic counterpart of (9.21). However, at least at this moment, we do not know how to use a similar method to prove Theorem 9.7, even in the same case E = (0, T).

Next, we consider the approximate controllability of the system (9.21), under a stronger assumption on the controller E × G₀ than that for the null controllability.

Theorem 9.9. The system (9.21) is approximately controllable at time T if and only if m((s, T) ∩ E) > 0 for any s ∈ [0, T).

We refer to Subsection 9.2.3 for a proof of Theorem 9.9.

At first glance, Theorem 9.9 may seem unreasonable. If b(·) ≡ 0, then the system (9.21) is like a deterministic parabolic equation with a random parameter, and the reader may guess that one could obtain the approximate controllability by assuming only that m(E) > 0. However, this is NOT the case. The reason comes from our definition of the approximate controllability for the system (9.21): we expect that any element of L²_{F_T}(Ω; L²(G)), rather than of L²_{F_s}(Ω; L²(G)) with s < T, can be attained as closely as one wants. Hence we need the control u to remain active until the time T.

In some sense, it is surprising that one needs a slightly stronger assumption in Theorem 9.9 for the approximate controllability of (9.21) than in Theorem 9.7 for the null controllability. Indeed, it is well known that, in the deterministic setting, the null controllability is usually stronger than the approximate controllability. But this does NOT remain true in the stochastic case. Actually, from Theorem 9.9 we see that the additional condition (compared to the null controllability) that m((s, T) ∩ E) > 0 for any s ∈ [0, T) is not only sufficient but also necessary for the approximate controllability of (9.21). Therefore, in the setting of stochastic distributed parameter systems, the null controllability does NOT imply the approximate controllability. This indicates an essential difference between the controllability theories for deterministic parabolic equations and for their stochastic counterparts (see also Remark 9.8).

9.2.1 Preliminaries

In this subsection, we collect some preliminary results that will be used subsequently. To begin with, we recall the following known property of Lebesgue measurable sets.

Lemma 9.10. ([205, pp. 256–257]) For a.e. t̃ ∈ E, there exists a sequence of numbers {t_i}_{i=1}^∞ ⊂ (0, T) such that

    t₁ < t₂ < ··· < t_i < t_{i+1} < ··· < t̃,   t_i → t̃ as i → ∞,   (9.22)
    m(E ∩ [t_i, t_{i+1}]) ≥ ρ₁ (t_{i+1} − t_i),   i = 1, 2, ···,   (9.23)
    (t_{i+1} − t_i) / (t_{i+2} − t_{i+1}) ≤ ρ₂,   i = 1, 2, ···,   (9.24)

where ρ₁ and ρ₂ are two positive constants independent of i.

Next, let us define an unbounded operator A on L²(G) as follows:

    D(A) = H²(G) ∩ H¹₀(G),   A h = −∑_{j,k=1}^n (a^{jk} h_{x_j})_{x_k},   ∀ h ∈ D(A).   (9.25)

Let {λ_i}_{i=1}^∞ be the eigenvalues of A, and {e_i}_{i=1}^∞ the corresponding eigenfunctions satisfying |e_i|_{L²(G)} = 1, i = 1, 2, 3, ···. It holds that 0 < λ₁ ≤ λ₂ ≤ λ₃ ≤ ··· ≤ λ_k ≤ ··· → ∞. For any r ≥ λ₁, write Λ_r = {i ∈ N | λ_i ≤ r}. We


recall the following observability estimate (for partial sums of the eigenfunctions of A), established in [230, Theorem 1.2] (see also [192, Theorem 3] for a special case of this result).

Lemma 9.11. There exist two positive constants C₁ ≥ 1 and C₂ ≥ 1 such that

    ∑_{i∈Λ_r} |a_i|² ≤ C₁ e^{C₂√r} ∫_{G₀} | ∑_{i∈Λ_r} a_i e_i(x) |² dx   (9.26)

holds for any r ≥ λ₁ and a_i ∈ C with i ∈ Λ_r.

Further, for any s₁ and s₂ satisfying 0 ≤ s₁ < s₂ ≤ T, we introduce the following backward stochastic parabolic system:

    dz + ∑_{j,k=1}^n (a^{jk} z_{x_j})_{x_k} dt = −[a(t)^⊤ z + b(t)^⊤ Z] dt + Z dW(t) in (s₁, s₂) × G,
    z = 0 on (s₁, s₂) × Γ,
    z(s₂) = η in G,   (9.27)

where η ∈ L²_{F_{s₂}}(Ω; L²(G; R^m)). Put

    r₀ = 2|a|_{L^∞_F(0,T;R^{m×m})} + |b|²_{L^∞_F(0,T;R^{m×m})}.
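The content of the spectral inequality (9.26) can be illustrated numerically in one space dimension, where A = −d²/dx² on G = (0, 1) has eigenfunctions e_i(x) = √2 sin(iπx) with λ_i = i²π². For a fixed cutoff r, the best constant in (9.26) is 1/μ_min, where μ_min is the smallest eigenvalue of the Gram matrix of {e_i}_{i∈Λ_r} restricted to G₀ (this is our own illustrative computation; the subdomain G₀ below is an arbitrary choice):

```python
# Numerical check of the spectral observability inequality (9.26) in 1-D:
#   Σ |a_i|² ≤ (1/μ_min) ∫_{G0} |Σ a_i e_i|² dx   for all i with λ_i ≤ r,
# where μ_min > 0 is the smallest eigenvalue of the (discretized) Gram
# matrix M_ij = ∫_{G0} e_i e_j dx, with e_i(x) = √2 sin(iπx) on G = (0,1).
import numpy as np

G0 = (0.3, 0.7)                         # observation subdomain (arbitrary)
r = 50.0 * np.pi**2                     # spectral cutoff: λ_i = i²π² ≤ r
K = int(np.floor(np.sqrt(r) / np.pi))   # number of modes in Λ_r

x = np.linspace(G0[0], G0[1], 20001)
dx = x[1] - x[0]
E = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, K + 1), np.pi * x))
M = (E @ E.T) * dx                      # Gram matrix on G0 (Riemann sum)

mu_min = np.linalg.eigvalsh(M).min()
print(K, mu_min)                        # μ_min > 0: observability constant 1/μ_min

# Check the inequality for random coefficients a_i:
rng = np.random.default_rng(0)
a = rng.standard_normal(K)
lhs = np.sum(a**2)                      # Σ |a_i|²
rhs = np.sum((a @ E)**2) * dx           # ∫_{G0} |Σ a_i e_i|² dx
print(lhs <= rhs / mu_min + 1e-9)
```

Strict positivity of μ_min encodes the fact that finitely many eigenfunctions are linearly independent on any open subdomain; what Lemma 9.11 adds is the quantitative growth C₁e^{C₂√r} of this constant as r → ∞.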

For each r ≥ λ₁, we set H_r = span{e_i | λ_i ≤ r} and denote by Π_r the orthogonal projection from L²(G) onto H_r. Write

    H_r^m = H_r × H_r × ··· × H_r (m times).   (9.28)

To simplify the notation, we also denote by Π_r the orthogonal projection from L²(G; R^m) onto H_r^m. We need the following observability result for (9.27) with final data belonging to L²_{F_{s₂}}(Ω; H_r^m), a proper subspace of L²_{F_{s₂}}(Ω; L²(G; R^m)).

Proposition 9.12. For each r ≥ λ₁, the solution to the system (9.27) with η ∈ L²_{F_{s₂}}(Ω; H_r^m) satisfies

    E|z(s₁)|²_{L²(G;R^m)} ≤ [ C₁ e^{C₂√r + r₀(s₂−s₁)} / (m(E ∩ [s₁, s₂]))² ] |χ_E χ_{G₀} z|²_{L¹_F(s₁,s₂;L²(Ω;L²(G;R^m)))},   (9.29)

whenever m(E ∩ [s₁, s₂]) ≠ 0.

Proof: Each η ∈ L²_{F_{s₂}}(Ω; H_r^m) can be written as η = ∑_{i∈Λ_r} η_i e_i(x) for some η_i ∈ L²_{F_{s₂}}(Ω; R^m) with i ∈ Λ_r. The solution (z, Z) to (9.27) can be expressed as

ηi ∈ L2Fs (Ω; Rm ) with i ∈ Λr . The solution (z, Z) to (9.27) can be expressed 2 as

9.2 Controllability of a Class of Stochastic Parabolic Systems

z=



zi (t)ei ,



Z=

i∈Λr

283

Zi (t)ei ,

i∈Λr

where zi (·) ∈ CF ([s1 , s2 ]; L2 (Ω; Rm )) and Zi (·) ∈ L2F (s1 , s2 ; Rm ), and satisfy the following equation { dzi − λi zi dt = −[a(t)⊤ zi + b(t)⊤ Zi ]dt + Zi dW (t) in [s1 , s2 ], zi (T ) = ηi . By Lemma 9.11, for any t ∈ [s1 , s2 ], we have ∫ ∫ ∑ √ E |z(t)|2Rm dx = E |zi (t)|2Rm ≤ C1 eC2 r E G

i∈Λr

= C1 e

C2



∫ r

G0

E G0

∑ 2 zi (t)ei m dx R

i∈Λr

(9.30)

|z(t)|2Rm dx.

By Itˆo’s formula, we find that ( ) d(er0 t |z|2Rm ) = r0 er0 t |z|2Rm + er0 t ⟨ dz, z ⟩Rm + ⟨ z, dz ⟩Rm + er0 t |dz|2Rm . Hence, ∫ ∫ ( ) ( ) E er0 t |z(t)|2Rm dx − E er0 s1 |z(s1 )|2Rm dx G

G

∫ t∫ = r0 E

r0 s

e s1

G

∫ t∫ +E s1

G

|z(s)|2Rm dxds

+2





t

E

i∈Λr

er0 s λi |zi (s)|2Rm ds

s1

( er0 s − ⟨a(s)⊤ z(s) + b(s)⊤ Z(s), z(s)⟩Rm

(9.31)

) −⟨z(s), a(s)⊤ z(s) + b(s)⊤ Z(s)⟩Rm + |Z(s)|2Rm dxds ∑ ∫ t ≥2 E er0 s λi |zi (s)|2Rm ds ≥ 0. i∈Λr

s1

From (9.30) and (9.31), we obtain that, for any t ∈ [s1 , s2 ], ∫ ∫ √ E |z(s1 , x)|2Rm dx ≤ C1 eC2 r+r0 (s2 −s1 ) E |z(t, x)|2Rm dx. G

G0

By (9.32), it follows that ∫ [ ∫ ] 12 E |z(s1 , x)|2Rm dx dt E∩[s1 ,s2 ]

(

≤ C1 e

G

√ )1 C2 r+r0 (s2 −s1 ) 2



[ ∫ E E∩[s1 ,s2 ]

G0

|z(t, x)|2Rm dx

] 12

dt.

(9.32)

284

9 Controllability and Observability of Stochastic Parabolic Systems

Hence, when m(E ∩ [s1 , s2 ]) ̸= 0, we obtain that for each η ∈ L2Fs (Ω; Hrm ), 2

∫ E G

|z(s1 , x)|2Rm dx √

C1 eC2 r+r0 (s2 −s1 ) { ≤ (m(E ∩ [s1 , s2 ]))2 C2

=

√ r+r0 (s2 −s1 )



s2 s1

[ ∫ ] 12 }2 E |χE (t)χG0 (x)z(t, x)|2Rm dx dt G

C1 e |χE χG0 z|2L1 (s1 ,s2 ;L2 (Ω;L2 (G;Rm ))) , F (m(E ∩ [s1 , s2 ]))2

which gives (9.29). By means of the usual duality argument, Proposition 9.12 yields a partial controllability result for the following controlled system:  n ∑    dy − (ajk yxj )xk dt     j,k=1   (9.33) = [a(t)y + χE χG0 u]dt + b(t)ydW (t) in (s1 , s2 ) × G,     y=0 on (s1 , s2 ) × Γ,      y(s1 ) = ys1 in G, where ys1 ∈ L2Fs (Ω; L2 (G; Rm )). That is, we have the following result. 1

Proposition 9.13. If m(E ∩ [s1 , s2 ]) ̸= 0, then for every r ≥ λ1 and ys1 ∈ 2 2 m L2Fs (Ω; L2 (G; Rm )), there exists a control ur ∈ L∞ F (s1 , s2 ; L (Ω; L (G; R ))) 1 such that the solution y to the system (9.33) with u = ur satisfies that Πr (y(s2 )) = 0, a.s. Moreover, ur verifies that √

|ur |2L∞ (s1 ,s2 ;L2 (Ω;L2 (G;Rm ))) F

C1 eC2 r+r0 (s2 −s1 ) ≤ |ys |2 2 2 m . (9.34) (m(E ∩ [s1 , s2 ]))2 1 LFs1 (Ω;L (G;R ))

Proof : Define a subspace H of L1F (s1 , s2 ; L2 (Ω; L2 (G; Rm ))): { } H = f = χE χG0 z (z, Z) solves (9.27) for some η ∈ L2Fs2 (Ω; Hrm ) and a linear functional L on H:



L(f ) = −E G

⟨ ys1 , z(s1 ) ⟩Rm dx.

By Proposition 9.12, it is easy to check that L is a bounded linear functional on H and √

|L|2L(H;R)

C1 eC2 r+r0 (s2 −s1 ) ≤ |ys |2 2 2 m . (m(E ∩ [s1 , s2 ]))2 1 LFs1 (Ω;L (G;R ))

9.2 Controllability of a Class of Stochastic Parabolic Systems

285

By the Hahn-Banach Theorem, L can be extended to a bounded linear func tional Le (satisfying Le L(L1 (s1 ,s2 ;L2 (Ω;L2 (G;Rm )));R) = |L|L(H;R) ) on L1F (s1 , s2 ; F

L2 (Ω; L2 (G; Rm ))). By Theorem 2.73, there exists a control ur ∈ L∞ F (s1 , s2 ; L2 (Ω; L2 (G; Rm ))) such that ∫ s2 ∫ e ), ∀ f ∈ L1 (s1 , s2 ; L2 (Ω; L2 (G; Rm ))). E ⟨ ur , f ⟩Rm dxdt = L(f F s1

G

Particularly, for any η ∈ L2Fs (Ω; Hrm ), the corresponding solution (z, Z) to 2 (9.27) satisfies ∫ s2 ∫ ∫ E ⟨ ur , χE χG0 z ⟩Rm dxdt = −E ⟨ ys1 , z(s1 ) ⟩Rm dx. (9.35) s1

G

G

Applying Itˆo’s formula to ⟨ y, z ⟩Rm , where y solves the system (9.33) with u = ur , we obtain that ∫ ∫ E ⟨ y(s2 ), η ⟩Rm dx − E ⟨ ys1 , z(s1 ) ⟩Rm dx G



=E

s2 s1

G

∫ G

(9.36)

⟨ χE χG0 ur , z ⟩Rm dxdt.

Combining (9.35) and (9.36), we arrive at ∫ E ⟨ y(s2 ), η ⟩Rm dx = 0, ∀ η ∈ L2Fs2 (Ω; Hrm ), G

which implies that Πr (y(s2 )) = 0, a.s. Moreover, 2 2 m |ur |L∞ = |L|L(H;R) , F (s1 ,s2 ;L (Ω;L (G;R )))

which yields (9.34). Finally, for any s ∈ [0, T ), we consider the following equation:  n ∑    dy − (ajk yxj )xk dt = a(t)ydt + b(t)ydW (t) in (s, T ) × G,    j,k=1  y=0      y(s) = ys

on (s, T ) × Γ,

(9.37)

in G,

where ys ∈ L2Fs (Ω; L2 (G; Rm )). Let us show the following decay result for the system (9.37). Proposition 9.14. Let r ≥ λ1 . Then, for any ys ∈ L2Fs (Ω; L2 (G; Rm )) with Πr (ys ) = 0, a.s., the corresponding solution y to (9.37) satisfies that E|y(t)|2L2 (G;Rm ) ≤ e−(2r−r0 )(t−s) |ys |2L2

Fs (Ω;L

2 (G;Rm ))

,

∀ t ∈ [s, T ].

(9.38)
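On a single eigenmode, the decay asserted in Proposition 9.14 is elementary: for $dy+\lambda y\,dt=ay\,dt+by\,dW(t)$ with scalar constants, Itô's formula gives $(\mathbb Ey^2)'=(-2\lambda+2a+b^2)\,\mathbb Ey^2\le-(2r-r_0)\,\mathbb Ey^2$ whenever $\lambda>r$ and $r_0=2|a|+|b|^2$. A quick numerical check of this scalar inequality (all constants here are illustrative):

```python
import math

lam, a, b, r = 5.0, 0.3, 0.4, 4.0      # illustrative constants with lam > r
r0 = 2 * abs(a) + b ** 2

# Exact second moment of the mode: E y(t)^2 = y0^2 * exp((-2*lam + 2*a + b^2) * t)
y0sq, t = 1.0, 0.7
second_moment = y0sq * math.exp((-2 * lam + 2 * a + b ** 2) * t)

# The bound of (9.38) restricted to this mode:
bound = y0sq * math.exp(-(2 * r - r0) * t)

# Since -2*lam + 2*a + b^2 <= -(2*r - r0) when lam > r, the bound holds.
assert second_moment <= bound
```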


9 Controllability and Observability of Stochastic Parabolic Systems

Proof: Since $y_s\in L^2_{\mathcal F_s}(\Omega;L^2(G;\mathbb R^m))$ satisfies $\Pi_r(y_s)=0$, we see that $y_s=\sum_{i\in\mathbb N\setminus\Lambda_r}y_s^ie_i$ for some $y_s^i\in L^2_{\mathcal F_s}(\Omega;\mathbb R^m)$ with $i\in\mathbb N\setminus\Lambda_r$. Clearly, the solution $y$ to (9.37) can be expressed as $y=\sum_{i\in\mathbb N\setminus\Lambda_r}y^i(t)e_i$, where $y^i(\cdot)\in C_{\mathbb F}([s,T];L^2(\Omega;\mathbb R^m))$ solves the following stochastic differential equation:
$$\begin{cases}dy^i+\lambda_iy^i\,dt=a(t)y^i\,dt+b(t)y^i\,dW(t)&\text{in }[s,T],\\ y^i(s)=y_s^i.\end{cases}$$
By Itô's formula, we have
$$d\big(e^{(2r-r_0)(t-s)}|y|^2_{\mathbb R^m}\big)=e^{(2r-r_0)(t-s)}\big(\langle dy,y\rangle_{\mathbb R^m}+\langle y,dy\rangle_{\mathbb R^m}+|dy|^2_{\mathbb R^m}\big)+(2r-r_0)e^{(2r-r_0)(t-s)}|y|^2_{\mathbb R^m}\,dt.$$
Hence, since $\lambda_i>r$ for each $i\in\mathbb N\setminus\Lambda_r$, and recalling that $r_0=2|a|_{L^\infty_{\mathbb F}(0,T;\mathbb R^{m\times m})}+|b|^2_{L^\infty_{\mathbb F}(0,T;\mathbb R^{m\times m})}$, we arrive at
$$\begin{aligned}&\mathbb E\int_Ge^{(2r-r_0)(t-s)}|y(t)|^2_{\mathbb R^m}\,dx-\mathbb E\int_G|y(s)|^2_{\mathbb R^m}\,dx\\ &=-2\sum_{i\in\mathbb N\setminus\Lambda_r}\lambda_i\,\mathbb E\int_s^te^{(2r-r_0)(\sigma-s)}|y^i(\sigma)|^2_{\mathbb R^m}\,d\sigma+\mathbb E\int_s^t\!\!\int_Ge^{(2r-r_0)(\sigma-s)}\big(\langle ay,y\rangle_{\mathbb R^m}+\langle y,ay\rangle_{\mathbb R^m}\big)dx\,d\sigma\\ &\quad+\mathbb E\int_s^t\!\!\int_Ge^{(2r-r_0)(\sigma-s)}|b(\sigma)y(\sigma)|^2_{\mathbb R^m}\,dx\,d\sigma+(2r-r_0)\,\mathbb E\int_s^t\!\!\int_Ge^{(2r-r_0)(\sigma-s)}|y(\sigma)|^2_{\mathbb R^m}\,dx\,d\sigma\\ &\le0,\end{aligned}$$
which gives the desired estimate (9.38) immediately.

9.2.2 Proof of the Null Controllability

This subsection is devoted to proving the null controllability result for the system (9.21), i.e., Theorem 9.7. For simplicity, we assume that $m=1$; the proof for general $m$ is similar.

By Lemma 9.10, we may take a number $\tilde t\in E$ with $\tilde t<T$ and a sequence $\{t_N\}_{N=1}^\infty\subset(0,T)$ such that (9.22)–(9.24) hold for some positive numbers $\rho_1$ and $\rho_2$. Write $\tilde y_0=\psi(t_1)$, where $\psi(\cdot)$ solves the following stochastic parabolic system:

$$\begin{cases}d\psi-\sum\limits_{j,k=1}^n(a^{jk}\psi_{x_j})_{x_k}\,dt=a(t)\psi\,dt+b(t)\psi\,dW(t)&\text{in }(0,t_1)\times G,\\ \psi=0&\text{on }(0,t_1)\times\Gamma,\\ \psi(0)=y_0&\text{in }G.\end{cases}$$
Let us consider the following controlled stochastic parabolic system:
$$\begin{cases}d\tilde y-\sum\limits_{j,k=1}^n(a^{jk}\tilde y_{x_j})_{x_k}\,dt=[a(t)\tilde y+\chi_E\chi_{G_0}\tilde u]\,dt+b(t)\tilde y\,dW(t)&\text{in }(t_1,\tilde t\,)\times G,\\ \tilde y=0&\text{on }(t_1,\tilde t\,)\times\Gamma,\\ \tilde y(t_1)=\tilde y_0&\text{in }G.\end{cases}\tag{9.39}$$
It suffices to find a control $\tilde u\in L^\infty_{\mathbb F}(t_1,\tilde t\,;L^2(\Omega;L^2(G)))$ with
$$|\tilde u|^2_{L^\infty_{\mathbb F}(t_1,\tilde t\,;L^2(\Omega;L^2(G)))}\le C\,\mathbb E|\tilde y_0|^2_{L^2(G)},\tag{9.40}$$
such that the solution $\tilde y$ to (9.39) satisfies $\tilde y(\tilde t\,)=0$ in $G$, a.s.

Set $I_N=[t_{2N-1},t_{2N}]$ and $J_N=[t_{2N},t_{2N+1}]$ for $N\in\mathbb N$. Then $[t_1,\tilde t\,)=\bigcup_{N=1}^\infty(I_N\cup J_N)$. Clearly, $m(E\cap I_N)>0$ and $m(E\cap J_N)>0$. We will introduce a suitable control on each $I_N$ and allow the system to evolve freely on every $J_N$. Also, we fix a suitable, strictly increasing sequence $\{r_N\}_{N=1}^\infty$ of positive integers (to be given later) satisfying $\lambda_1\le r_1<r_2<\cdots<r_N\to\infty$ as $N\to\infty$.

We consider first the controlled stochastic parabolic system on the interval $I_1=[t_1,t_2]$:
$$\begin{cases}dy_1-\sum\limits_{j,k=1}^n(a^{jk}y_{1,x_j})_{x_k}\,dt=[a(t)y_1+\chi_E\chi_{G_0}u_1]\,dt+b(t)y_1\,dW(t)&\text{in }(t_1,t_2)\times G,\\ y_1=0&\text{on }(t_1,t_2)\times\Gamma,\\ y_1(t_1)=\tilde y_0&\text{in }G.\end{cases}\tag{9.41}$$
By Proposition 9.13, there exists a control $u_1\in L^\infty_{\mathbb F}(t_1,t_2;L^2(\Omega;L^2(G)))$ with the estimate
$$|u_1|^2_{L^\infty_{\mathbb F}(t_1,t_2;L^2(\Omega;L^2(G)))}\le\frac{C_1e^{C_2\sqrt{r_1}+r_0T}}{\big(m(E\cap[t_1,t_2])\big)^2}\,\mathbb E|\tilde y_0|^2_{L^2(G)},$$
such that $\Pi_{r_1}(y_1(t_2))=0$ in $G$, a.s. By (9.23), we see that
$$|u_1|^2_{L^\infty_{\mathbb F}(t_1,t_2;L^2(\Omega;L^2(G)))}\le\frac{C_1e^{C_2\sqrt{r_1}+r_0T}}{\rho_1^2(t_2-t_1)^2}\,\mathbb E|\tilde y_0|^2_{L^2(G)}.\tag{9.42}$$

Applying Itô's formula to $e^{-(r_0+1)t}|y_1(t)|^2_{L^2(G)}$, similarly to the proofs of (9.31) and Theorem 3.24, we obtain
$$\begin{aligned}&e^{-(r_0+1)t_2}\,\mathbb E|y_1(t_2)|^2_{L^2(G)}\\ &=e^{-(r_0+1)t_1}\,\mathbb E|y_1(t_1)|^2_{L^2(G)}-2\sum_{j,k=1}^n\mathbb E\int_{t_1}^{t_2}\!\!\int_Ge^{-(r_0+1)s}a^{jk}y_{1,x_j}y_{1,x_k}\,dx\,ds\\ &\quad+\mathbb E\int_{t_1}^{t_2}\!\!\int_Ge^{-(r_0+1)s}\big[2a(s)|y_1|^2+|b(s)y_1|^2\big]dx\,ds-(r_0+1)\,\mathbb E\int_{t_1}^{t_2}\!\!\int_Ge^{-(r_0+1)s}|y_1|^2\,dx\,ds\\ &\quad+2\,\mathbb E\int_{t_1}^{t_2}\!\!\int_Ge^{-(r_0+1)s}\chi_E\chi_{G_0}u_1y_1\,dx\,ds\\ &\le e^{-(r_0+1)t_1}\,\mathbb E|y_1(t_1)|^2_{L^2(G)}+\mathbb E\int_{t_1}^{t_2}\!\!\int_Ge^{-(r_0+1)s}|u_1|^2\,dx\,ds\\ &\le e^{-(r_0+1)t_1}\,\mathbb E|\tilde y_0|^2_{L^2(G)}+\frac{e^{-(r_0+1)t_1}-e^{-(r_0+1)t_2}}{r_0+1}\,|u_1|^2_{L^\infty_{\mathbb F}(t_1,t_2;L^2(\Omega;L^2(G)))}.\end{aligned}$$
Hence, in view of (9.42),
$$\mathbb E|y_1(t_2)|^2_{L^2(G)}\le\frac{C_3e^{C_3\sqrt{r_1}}}{(t_2-t_1)^2}\,\mathbb E|\tilde y_0|^2_{L^2(G)},\tag{9.43}$$
where $C_3=\max\big(2\rho_1^{-2}C_1e^{(2r_0+1)T},\,C_2\big)$.

Then, on the interval $J_1\equiv[t_2,t_3]$, we consider the following stochastic parabolic system without control:
$$\begin{cases}dz_1-\sum\limits_{j,k=1}^n(a^{jk}z_{1,x_j})_{x_k}\,dt=a(t)z_1\,dt+b(t)z_1\,dW(t)&\text{in }(t_2,t_3)\times G,\\ z_1=0&\text{on }(t_2,t_3)\times\Gamma,\\ z_1(t_2)=y_1(t_2)&\text{in }G.\end{cases}$$
Since $\Pi_{r_1}(y_1(t_2))=0$, a.s., by Proposition 9.14, we have
$$\mathbb E|z_1(t_3)|^2_{L^2(G)}\le e^{(-2r_1+r_0)(t_3-t_2)}\,\mathbb E|y_1(t_2)|^2_{L^2(G)}\le\frac{C_3e^{C_3\sqrt{r_1}}}{(t_2-t_1)^2}\,e^{(-2r_1+r_0)(t_3-t_2)}\,\mathbb E|\tilde y_0|^2_{L^2(G)}.\tag{9.44}$$
Generally, on the interval $I_N$ with $N\in\mathbb N\setminus\{1\}$, we consider a controlled stochastic parabolic system as follows:

$$\begin{cases}dy_N-\sum\limits_{j,k=1}^n(a^{jk}y_{N,x_j})_{x_k}\,dt=[a(t)y_N+\chi_E\chi_{G_0}u_N]\,dt+b(t)y_N\,dW(t)&\text{in }(t_{2N-1},t_{2N})\times G,\\ y_N=0&\text{on }(t_{2N-1},t_{2N})\times\Gamma,\\ y_N(t_{2N-1})=z_{N-1}(t_{2N-1})&\text{in }G.\end{cases}$$
Similarly to the above argument (see the proofs of (9.42) and (9.43)), one can find a control $u_N\in L^\infty_{\mathbb F}(t_{2N-1},t_{2N};L^2(\Omega;L^2(G)))$ with the estimate
$$|u_N|^2_{L^\infty_{\mathbb F}(t_{2N-1},t_{2N};L^2(\Omega;L^2(G)))}\le\frac{C_1e^{C_2\sqrt{r_N}+r_0T}}{\rho_1^2(t_{2N}-t_{2N-1})^2}\,\mathbb E|z_{N-1}(t_{2N-1})|^2_{L^2(G)},\tag{9.45}$$
such that $\Pi_{r_N}(y_N(t_{2N}))=0$ in $G$, a.s. Moreover,
$$\mathbb E|y_N(t_{2N})|^2_{L^2(G)}\le\frac{C_3e^{C_3\sqrt{r_N}}}{(t_{2N}-t_{2N-1})^2}\,\mathbb E|z_{N-1}(t_{2N-1})|^2_{L^2(G)}.\tag{9.46}$$
On the interval $J_N$, we consider the following stochastic parabolic system without control:
$$\begin{cases}dz_N-\sum\limits_{j,k=1}^n(a^{jk}z_{N,x_j})_{x_k}\,dt=a(t)z_N\,dt+b(t)z_N\,dW(t)&\text{in }(t_{2N},t_{2N+1})\times G,\\ z_N=0&\text{on }(t_{2N},t_{2N+1})\times\Gamma,\\ z_N(t_{2N})=y_N(t_{2N})&\text{in }G.\end{cases}$$
Since $\Pi_{r_N}(y_N(t_{2N}))=0$, a.s., by Proposition 9.14 and similarly to (9.44), and recalling that $y_N(t_{2N-1})=z_{N-1}(t_{2N-1})$ in $G$, we have
$$\begin{aligned}\mathbb E|z_N(t_{2N+1})|^2_{L^2(G)}&\le\frac{C_3e^{C_3\sqrt{r_N}}}{(t_{2N}-t_{2N-1})^2}\,e^{(-2r_N+r_0)(t_{2N+1}-t_{2N})}\,\mathbb E|y_N(t_{2N-1})|^2_{L^2(G)}\\ &\le\frac{C_4e^{C_4\sqrt{r_N}}}{(t_{2N}-t_{2N-1})^2}\,e^{-2r_N(t_{2N+1}-t_{2N})}\,\mathbb E|z_{N-1}(t_{2N-1})|^2_{L^2(G)},\end{aligned}\tag{9.47}$$
where $C_4=C_3e^{r_0T}$. Inductively, by (9.24) and (9.47), we conclude that, for all $N\ge1$,

$$\begin{aligned}&\mathbb E|z_N(t_{2N+1})|^2_{L^2(G)}\\ &\le\frac{C_4^Ne^{C_4(\sqrt{r_N}+\sqrt{r_{N-1}}+\cdots+\sqrt{r_1})}}{(t_{2N}-t_{2N-1})^2(t_{2N-2}-t_{2N-3})^2\cdots(t_2-t_1)^2}\\ &\qquad\times\exp\big\{-2r_N(t_{2N+1}-t_{2N})-2r_{N-1}(t_{2N-1}-t_{2N-2})-\cdots-2r_1(t_3-t_2)\big\}\,\mathbb E|\tilde y_0|^2_{L^2(G)}\\ &\le\frac{C_4^N\exp\big\{C_4N\sqrt{r_N}-2r_N(t_{2N+1}-t_{2N})\big\}}{(t_{2N}-t_{2N-1})^2(t_{2N-2}-t_{2N-3})^2\cdots(t_2-t_1)^2}\,\mathbb E|\tilde y_0|^2_{L^2(G)}\\ &\le\frac{C_4^N\rho_2^{2N(N-1)}\exp\big\{C_4N\sqrt{r_N}-2(t_2-t_1)\rho_2^{1-2N}r_N\big\}}{(t_2-t_1)^{2N}}\,\mathbb E|\tilde y_0|^2_{L^2(G)}.\end{aligned}\tag{9.48}$$
By (9.24), (9.45)–(9.46) and (9.48), we see that
$$|u_N|^2_{L^\infty_{\mathbb F}(t_{2N-1},t_{2N};L^2(\Omega;L^2(G)))}\le\frac{C_1C_4^{N-1}\rho_2^{2N(N-1)}}{\rho_1^2(t_2-t_1)^{2N}}\exp\big\{C_2\sqrt{r_N}+r_0T+C_4(N-1)\sqrt{r_{N-1}}-2(t_2-t_1)\rho_2^{3-2N}r_{N-1}\big\}\,\mathbb E|\tilde y_0|^2_{L^2(G)}\tag{9.49}$$
and
$$\mathbb E|y_N(t_{2N})|^2_{L^2(G)}\le\frac{C_3C_4^{N-1}\rho_2^{2N(N-1)}}{(t_2-t_1)^{2N}}\exp\big\{C_3\sqrt{r_N}+C_4(N-1)\sqrt{r_{N-1}}-2(t_2-t_1)\rho_2^{3-2N}r_{N-1}\big\}\,\mathbb E|\tilde y_0|^2_{L^2(G)}.\tag{9.50}$$
We now choose $r_N=\max\big(2^{2N},[\lambda_1]+1\big)$. From (9.49)–(9.50), it is easy to see that, whenever $N$ is large enough,
$$|u_N|^2_{L^\infty_{\mathbb F}(t_{2N-1},t_{2N};L^2(\Omega;L^2(G)))}\le\frac1{2^N}\,\mathbb E|\tilde y_0|^2_{L^2(G)}\tag{9.51}$$
and
$$\mathbb E|y_N(t_{2N})|^2_{L^2(G)}\le\frac1{2^N}\,\mathbb E|\tilde y_0|^2_{L^2(G)}.\tag{9.52}$$
We now construct a control $\tilde u$ by setting
$$\tilde u(t,x)=\begin{cases}u_N(t,x),&(t,x)\in I_N\times G,\ N\ge1,\\ 0,&(t,x)\in J_N\times G,\ N\ge1.\end{cases}\tag{9.53}$$
By (9.51), we see that $\tilde u\in L^\infty_{\mathbb F}(t_1,\tilde t\,;L^2(\Omega;L^2(G)))$ satisfies (9.40). Let $\tilde y$ be the solution to the system (9.39) corresponding to the control constructed in (9.53). Then $\tilde y(\cdot)=y_N(\cdot)$ on $I_N\times G$. By (9.52) and recalling that $t_{2N}\to\tilde t$ as $N\to\infty$, we deduce that $\tilde y(\tilde t\,)=0$, a.s. This completes the proof of Theorem 9.7.
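The bookkeeping behind this iteration — pay a cost growing like $e^{C\sqrt{r_N}}$ to kill the modes below $r_N$, then let the free dissipation $e^{-2r_N\tau}$ act on what remains — can be illustrated on a toy diagonal system. All constants below are illustrative, not those of the theorem:

```python
import math

# Toy Lebeau-Robbiano-type iteration: on each control interval, remove all
# modes below r_N at a cost ~ exp(C*sqrt(r_N)) times the current norm, then
# let the system decay freely by exp(-2*r_N*tau) on the next interval.
C, tau = 2.0, 0.1


def r(N):
    return 2 ** (2 * N)   # r_N = 4^N, echoing the geometric choice of r_N


norm, costs = 1.0, []
for N in range(1, 9):
    costs.append(math.exp(C * math.sqrt(r(N))) * norm)  # control cost on I_N
    norm *= math.exp(-2 * r(N) * tau)                   # free decay on J_N

# The dissipation rate 4^N eventually beats the cost growth 2^N: the state
# norm collapses and the per-step costs tend to zero, so the total is finite.
assert norm < 1e-300
assert costs[-1] < costs[2]
assert sum(costs) < float("inf")
```

The same competition between $C_4N\sqrt{r_N}$ and $2(t_2-t_1)\rho_2^{1-2N}r_N$ is what makes (9.51)–(9.52) hold for large $N$.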


9.2.3 Proof of the Approximate Controllability

This subsection is devoted to giving a proof of the approximate controllability result for the system (9.21), i.e., Theorem 9.9.

To begin with, we show the following two preliminary results, which are of independent interest.

Proposition 9.15. If $m((s,T)\cap E)>0$ for any $s\in[0,T)$, then for any given $\eta\in L^2_{\mathcal F_T}(\Omega;L^2(G;\mathbb R^m))$, the corresponding solution to (9.27) with $s_1=0$ and $s_2=T$ satisfies
$$|z(s)|_{L^2_{\mathcal F_s}(\Omega;L^2(G;\mathbb R^m))}\le C(s)\,|\chi_E\chi_{G_0}z|_{L^1_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))}.\tag{9.54}$$
Here and henceforth, $C(s)>0$ is a generic constant depending on $s$.

Proof: The proof of Proposition 9.15 is very similar to that of Theorem 7.17. We consider the following controlled stochastic parabolic system:
$$\begin{cases}dy-\sum\limits_{j,k=1}^n(a^{jk}y_{x_j})_{x_k}\,dt=[a(t)y+\chi_{(s,T)\cap E}\chi_{G_0}u]\,dt+b(t)y\,dW(t)&\text{in }(s,T)\times G,\\ y=0&\text{on }(s,T)\times\Gamma,\\ y(s)=y_s&\text{in }G,\end{cases}\tag{9.55}$$
where $y$ is the state variable, $u$ is the control variable, the initial state $y_s\in L^2_{\mathcal F_s}(\Omega;L^2(G;\mathbb R^m))$ and the control $u(\cdot)\in L^\infty_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))$. By the proof of Theorem 9.7, it is easy to show that the system (9.55) is null controllable, i.e., for any $y_s\in L^2_{\mathcal F_s}(\Omega;L^2(G;\mathbb R^m))$, there exists a control $u\in L^\infty_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))$ such that $y(T)=0$ in $G$, a.s., and
$$|u|^2_{L^\infty_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))}\le C(s)\,|y_s|^2_{L^2_{\mathcal F_s}(\Omega;L^2(G;\mathbb R^m))}.\tag{9.56}$$
Applying Itô's formula to $\langle y,z\rangle_{\mathbb R^m}$, where $y$ and $(z,Z)$ solve respectively (9.55) and (9.27) with $s_1=0$ and $s_2=T$, and noting that $y(T)=0$ in $G$, a.s., we obtain
$$-\mathbb E\int_G\langle y_s,z(s)\rangle_{\mathbb R^m}\,dx=\mathbb E\int_{(s,T)\cap E}\int_{G_0}\langle u,z\rangle_{\mathbb R^m}\,dx\,dt.$$
Choosing $y_s=-z(s)$ in (9.55), we then have
$$\begin{aligned}\mathbb E\int_G|z(s)|^2_{\mathbb R^m}\,dx&=\mathbb E\int_{(s,T)\cap E}\int_{G_0}\langle u,z\rangle_{\mathbb R^m}\,dx\,dt\\ &\le|u|_{L^\infty_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))}\,|\chi_{(s,T)\cap E}\chi_{G_0}z|_{L^1_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))}\\ &\le C(s)\Big(\mathbb E\int_G|z(s)|^2_{\mathbb R^m}\,dx\Big)^{\frac12}|\chi_{(s,T)\cap E}\chi_{G_0}z|_{L^1_{\mathbb F}(s,T;L^2(\Omega;L^2(G;\mathbb R^m)))},\end{aligned}$$
which immediately gives the desired estimate (9.54).

As an easy consequence of Proposition 9.15, we have the following unique continuation property for solutions to (9.27) with $s_1=0$ and $s_2=T$.

Corollary 9.16. If $m((s,T)\cap E)>0$ for any $s\in[0,T)$, then any solution $(z,Z)$ to (9.27) with $s_1=0$ and $s_2=T$ vanishes identically in $Q$, a.s., provided that $z=0$ in $G_0\times E$, a.s.

Proof: Since $z=0$ in $G_0\times E$, a.s., by Proposition 9.15, we see that $z(s)=0$ in $G$, a.s., for any $s\in[0,T)$. Therefore, $z\equiv0$ in $Q$, a.s.

Remark 9.17. If the condition that $m((s,T)\cap E)>0$ for any $s\in[0,T)$ were not assumed, the conclusion in Corollary 9.16 might fail. This can be shown by the following counterexample. Let $E$ satisfy $m(E)>0$ and $m((s_0,T)\cap E)=0$ for some $s_0\in[0,T)$. Let $(z_1,Z_1)=0$ in $(0,s_0)\times G$, a.s., and let $\xi_2$ be a nonzero process in $L^2_{\mathbb F}(s_0,T;\mathbb R^m)$ (then $Z_2\equiv\xi_2e_1$ is a nonzero process in $L^2_{\mathbb F}(s_0,T;L^2(G;\mathbb R^m))$). Solving the following forward stochastic differential equation:
$$\begin{cases}d\zeta_1-\lambda_1\zeta_1\,dt=-[a(t)^\top\zeta_1+b(t)^\top\xi_2]\,dt+\xi_2\,dW(t)&\text{in }[s_0,T],\\ \zeta_1(s_0)=0,\end{cases}$$
we find a nonzero $\zeta_1\in L^2_{\mathbb F}(\Omega;C([s_0,T];\mathbb R^m))$. In this way, we find a nonzero solution $(z_2,Z_2)\equiv(\zeta_1e_1,\xi_2e_1)\in L^2_{\mathbb F}(\Omega;C([s_0,T];L^2(G;\mathbb R^m)))\times L^2_{\mathbb F}(s_0,T;L^2(G;\mathbb R^m))$ to the following forward stochastic partial differential equation:
$$\begin{cases}dz_2+\sum\limits_{j,k=1}^n(a^{jk}z_{2,x_j})_{x_k}\,dt=-[a(t)^\top z_2+b(t)^\top Z_2]\,dt+Z_2\,dW(t)&\text{in }(s_0,T)\times G,\\ z_2=0&\text{on }(s_0,T)\times\Gamma,\\ z_2(s_0)=0&\text{in }G.\end{cases}\tag{9.57}$$
(Note, however, that one cannot solve the system (9.57) directly, because this system is not well-posed.) Put
$$(z,Z)=\begin{cases}(z_1,Z_1)&\text{in }(0,s_0)\times G,\\ (z_2,Z_2)&\text{in }(s_0,T)\times G.\end{cases}$$
Then $(z,Z)$ is a nonzero solution to (9.27) with $s_1=0$ and $s_2=T$, and $z=0$ in $G_0\times E$, a.s. Note also that the nonzero solution constructed for the system (9.57) indicates that, in general, forward uniqueness does NOT hold for backward stochastic differential equations.
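The heart of the counterexample is that $\zeta_1$ is nonzero even though $\zeta_1(s_0)=0$, because the martingale term $\xi_2\,dW$ keeps injecting variance. In the simplest scalar case ($a=b=0$, $\xi_2\equiv1$), the second moment $m(t)=\mathbb E\,\zeta_1(t)^2$ obeys $m'=2\lambda_1m+1$, $m(s_0)=0$, hence $m(T)=(e^{2\lambda_1(T-s_0)}-1)/(2\lambda_1)>0$. A quick check of this moment computation (the concrete values are illustrative):

```python
import math

lam1, s0, T, dt = 1.5, 0.2, 1.0, 1e-5

# Integrate m' = 2*lam1*m + 1 with m(s0) = 0 by the explicit Euler scheme.
m, t = 0.0, s0
while t < T - 1e-12:
    m += dt * (2 * lam1 * m + 1.0)
    t += dt

# Closed form of the second moment at time T: it is strictly positive,
# so zeta_1 cannot vanish identically on [s0, T].
closed_form = (math.exp(2 * lam1 * (T - s0)) - 1.0) / (2 * lam1)
assert m > 0.0
assert abs(m - closed_form) < 1e-3 * closed_form
```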


We are now in a position to prove Theorem 9.9.

Proof of Theorem 9.9: Similarly to the proof of Theorem 7.19, the "if" part follows from Corollary 9.16. To prove the "only if" part, we argue by contradiction. Assume that $m((s_0,T)\cap E)=0$ for some $s_0\in[0,T)$. Since the system (9.21) is approximately controllable at time $T$, again similarly to the proof of Theorem 7.19, we deduce that any solution $(z,Z)$ to (9.27) with $s_1=0$ and $s_2=T$ vanishes identically in $Q$ provided that $z=0$ in $G_0\times E$, a.s. This contradicts the counterexample in Remark 9.17.

9.3 Controllability of a Class of Stochastic Parabolic Systems by One Control

In this section, we consider the controllability problem for a class of coupled stochastic parabolic systems with only one control. We shall see that, quite differently from the deterministic setting, the controllability of this sort of stochastic partial differential equations is NOT robust with respect to the coefficients of the lower order terms. This section is based on the paper [219].

We are concerned with the following controlled linear coupled stochastic parabolic system:
$$\begin{cases}dy-\sum\limits_{j,k=1}^n(a^{jk}y_{x_j})_{x_k}\,dt=\big[a_1(t)y+a_2(t)z+\chi_{G_0}u\big]dt+\big[a_3(t)y+a_4(t)z\big]dW(t)&\text{in }Q,\\ dz-\sum\limits_{j,k=1}^n(a^{jk}z_{x_j})_{x_k}\,dt=\big[a_5(t)y+a_6(t)z\big]dt+\big[a_7(t)z+a_8(t)y\big]dW(t)&\text{in }Q,\\ y=z=0&\text{on }\Sigma,\\ y(0)=y_0,\ z(0)=z_0&\text{in }G.\end{cases}\tag{9.58}$$
Here, $u$ is the control variable, $(y,z)$ is the state variable, $(y_0,z_0)\in L^2(G)\times L^2(G)$ is any given initial value, and $a_i\in L^\infty_{\mathbb F}(0,T)$ $(i=1,\cdots,8)$. In (9.58), the control $u$ acts only on the first equation directly, and the control space is $L^2_{\mathbb F}(0,T;L^2(G_0))$; the state variable $y$ may be regarded as an indirect control entering the second equation through both the drift term $a_5y$ and the diffusion term $a_8y$. By means of Theorem 3.24, it is easy to check that for any $(y_0,z_0)\in L^2(G)\times L^2(G)$ and $u\in L^2_{\mathbb F}(0,T;L^2(G_0))$, (9.58) admits a unique weak solution $(y,z)\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G;\mathbb R^2)))\cap L^2_{\mathbb F}(0,T;H^1_0(G;\mathbb R^2))$.

We suppose that the coefficients $a_5$ and $a_8$ satisfy, respectively, the following conditions:


Condition 9.1 There exist a constant $l_0>0$ and a nonempty interval $I\subseteq[0,T]$ such that $a_5(\cdot)\ge l_0$ or $a_5(\cdot)\le-l_0$ on $I$, a.e.

Condition 9.2 There exists a nonempty interval $\tilde I\subseteq I$ such that $a_8(\cdot)=0$ on $\tilde I$, a.e.

Write
$$r_1\;\overset{\Delta}{=}\;\sum_{i=1}^2|a_i|_{L^\infty_{\mathbb F}(0,T)}+\sum_{i=3}^4|a_i|^2_{L^\infty_{\mathbb F}(0,T)}+\sum_{i=5}^6|a_i|_{L^\infty_{\mathbb F}(0,T)}+\sum_{i=7}^8|a_i|^2_{L^\infty_{\mathbb F}(0,T)}.\tag{9.59}$$
To simplify the presentation, we suppose that $\tilde I=(t_1,t_2)$ for some $t_1$ and $t_2$ satisfying $0\le t_1<t_2\le T$.

We have the following positive null controllability result for the system (9.58).

Theorem 9.18. Under Conditions 9.1–9.2, the system (9.58) is null controllable at time $T$.

On the other hand, we have the following negative result for the null controllability of (9.58).

Theorem 9.19. The system (9.58) is not null controllable at time $T$ provided that one of the following conditions is satisfied:
1) $a_5(\cdot)=0$ in $(0,T)\times\Omega$, a.e.;
2) $a_8(\cdot)\neq0$ in $(0,T)\times\Omega$, a.e., and $\frac{a_5(\cdot)}{a_8(\cdot)}\in L^\infty_{\mathbb F}(0,T)$.

In some sense, Theorem 9.19 means that Conditions 9.1–9.2 are also necessary for the null controllability of (9.58).

Remark 9.20. Condition 9.1 is used to guarantee that the action of the control in the first equation of (9.58) can be transferred to the second one through the coupling in the drift term. Theorem 9.19 indicates that the action transferred from the first equation to the second one should be good enough to control the second one. Indeed, the coupling in the diffusion terms may destroy the controllability of (9.58).

To prove Theorem 9.18, we consider the following backward stochastic parabolic system:
$$\begin{cases}d\alpha+\sum\limits_{j,k=1}^n(a^{jk}\alpha_{x_j})_{x_k}\,dt=-\big[a_6(t)\alpha+a_2(t)\beta+a_7(t)K+a_4(t)R\big]dt+K\,dW(t)&\text{in }Q,\\ d\beta+\sum\limits_{j,k=1}^n(a^{jk}\beta_{x_j})_{x_k}\,dt=-\big[a_5(t)\alpha+a_1(t)\beta+a_3(t)R+a_8(t)K\big]dt+R\,dW(t)&\text{in }Q,\\ \alpha=\beta=0&\text{on }\Sigma,\\ \alpha(T)=\alpha_T,\ \beta(T)=\beta_T&\text{in }G,\end{cases}\tag{9.60}$$

where $(\alpha_T,\beta_T)\in L^2_{\mathcal F_T}(\Omega;L^2(G;\mathbb R^2))$. Clearly, (9.60) is the dual system of (9.58). By Theorem 4.11, the equation (9.60) admits a unique weak solution $(\alpha,\beta,K,R)\in\big(L^2_{\mathbb F}(\Omega;C([0,T];L^2(G;\mathbb R^2)))\cap L^2_{\mathbb F}(0,T;H^1_0(G;\mathbb R^2))\big)\times L^2_{\mathbb F}(0,T;L^2(G;\mathbb R^2))$.

By Theorem 7.17, we have the following equivalence between the controllability of (9.58) and a suitable observability estimate for (9.60).

Proposition 9.21. For any $T>0$, the system (9.58) is null controllable at time $T$ if and only if solutions of (9.60) satisfy the estimate
$$|\alpha(0)|^2_{L^2(G)}+|\beta(0)|^2_{L^2(G)}\le C\,\mathbb E\int_0^T\!\!\int_{G_0}\beta^2(t,x)\,dx\,dt\tag{9.61}$$
for any $(\alpha_T,\beta_T)\in L^2_{\mathcal F_T}(\Omega;L^2(G;\mathbb R^2))$.

The corresponding controllability problem might be quite difficult when the coefficients $a_i(t)$ $(i=1,\cdots,8)$ in the equation (9.58) are replaced by $a_i(t,x)$ $(i=1,\cdots,8)$, respectively. The reason is that, as we explained before, the controllability of (9.58) (to say nothing of the above general case) is not robust, and therefore it seems impossible to use a Carleman-type estimate to treat this problem. On the other hand, it is clear that the usual spectral method does not work for the general form of (9.58) (when its coefficients are both time- and space-dependent), either.

9.3.1 Proof of the Null Controllability Result

This subsection is devoted to a proof of the null controllability result, i.e., Theorem 9.18. Similarly to the proof of Theorem 9.7, it suffices to establish the analogues of Propositions 9.12–9.14 (for the system (9.58) or (9.60)).

In the sequel of this subsection, $\{\lambda_i\}_{i=1}^\infty$, $\{e_i\}_{i=1}^\infty$, $\Lambda_r$ (with $r\ge\lambda_1$) and $\Pi_r$ are the same as those in Subsection 9.2.1. First, let $t_1\le s_1<s_2\le t_2$ and $m\in\mathbb N$. Consider the following backward stochastic parabolic system:
$$\begin{cases}d\alpha+\sum\limits_{j,k=1}^n(a^{jk}\alpha_{x_j})_{x_k}\,dt=-\big[a_6(t)\alpha+a_2(t)\beta+a_7(t)K+a_4(t)R\big]dt+K\,dW(t)&\text{in }(s_1,s_2)\times G,\\ d\beta+\sum\limits_{j,k=1}^n(a^{jk}\beta_{x_j})_{x_k}\,dt=-\big[a_5(t)\alpha+a_1(t)\beta+a_3(t)R+a_8(t)K\big]dt+R\,dW(t)&\text{in }(s_1,s_2)\times G,\\ \alpha=\beta=0&\text{on }(s_1,s_2)\times\Gamma,\\ \alpha(s_2)=\alpha_{s_2},\ \beta(s_2)=\beta_{s_2}&\text{in }G,\end{cases}\tag{9.62}$$
where $(\alpha_{s_2},\beta_{s_2})\in L^2_{\mathcal F_{s_2}}(\Omega;H_r^2)$ (recall (9.28) for the definition of $H_r^2$).

Similarly to Proposition 9.12, we have the following result.

Proposition 9.22. Under Conditions 9.1–9.2, for each $r\ge\lambda_1$ and $(\alpha_{s_2},\beta_{s_2})\in L^2_{\mathcal F_{s_2}}(\Omega;H_r^2)$, the solution $(\alpha,\beta,K,R)$ to (9.62) satisfies
$$\mathbb E|\alpha(s_1)|^2_{L^2(G)}+\mathbb E|\beta(s_1)|^2_{L^2(G)}\le\frac{Ce^{C\sqrt r}}{(s_2-s_1)^5}\,\mathbb E\int_{s_1}^{s_2}\!\!\int_{G_0}\beta(t,x)^2\,dx\,dt.\tag{9.63}$$

Proof: We divide the proof into several steps.

Step 1. For any $(\alpha_{s_2},\beta_{s_2})\in L^2_{\mathcal F_{s_2}}(\Omega;H_r^2)$, write
$$\alpha_{s_2}=\sum_{i\in\Lambda_r}\alpha^i_{s_2}e_i,\qquad\beta_{s_2}=\sum_{i\in\Lambda_r}\beta^i_{s_2}e_i,$$
where $\alpha^i_{s_2}$ and $\beta^i_{s_2}$ are suitable $\mathcal F_{s_2}$-measurable random variables. Then the solution $(\alpha,\beta,K,R)$ to (9.62) can be represented as follows:
$$\alpha=\sum_{i\in\Lambda_r}\alpha^i(t)e_i,\quad K=\sum_{i\in\Lambda_r}K^i(t)e_i,\quad\beta=\sum_{i\in\Lambda_r}\beta^i(t)e_i,\quad R=\sum_{i\in\Lambda_r}R^i(t)e_i,$$
where $(\alpha^i,\beta^i,K^i,R^i)\in L^2_{\mathbb F}(\Omega;C([s_1,s_2];\mathbb R^2))\times L^2_{\mathbb F}(s_1,s_2;\mathbb R^2)$ solves the following backward stochastic differential equation:
$$\begin{cases}d\alpha^i-\lambda_i\alpha^i\,dt=\big[-a_6(t)\alpha^i-a_2(t)\beta^i-a_7(t)K^i-a_4(t)R^i\big]dt+K^i\,dW(t)&\text{in }(s_1,s_2),\\ d\beta^i-\lambda_i\beta^i\,dt=\big[-a_5(t)\alpha^i-a_1(t)\beta^i-a_3(t)R^i-a_8(t)K^i\big]dt+R^i\,dW(t)&\text{in }(s_1,s_2),\\ \alpha^i(s_2)=\alpha^i_{s_2},\quad\beta^i(s_2)=\beta^i_{s_2}.\end{cases}$$
From Lemma 9.11, it follows that, for a.e. $t\in(0,T)$ and $\omega\in\Omega$,

$$\int_G\beta(t,x)^2\,dx=\sum_{i\in\Lambda_r}|\beta^i(t)|^2\le Ce^{C\sqrt r}\int_{G_0}\Big|\sum_{i\in\Lambda_r}\beta^i(t)e_i(x)\Big|^2dx=Ce^{C\sqrt r}\int_{G_0}\beta(t,x)^2\,dx.\tag{9.64}$$
On the other hand, note that (9.62) is a special case of (9.27). Hence, similarly to (9.31), and recalling (9.59) for the definition of $r_1$, we see that, for any $t\in[s_1,s_2]$,
$$\mathbb E\int_G\big[\alpha(s_1,x)^2+\beta(s_1,x)^2\big]dx\le e^{r_1(s_2-s_1)}\,\mathbb E\int_G\big[\alpha(t,x)^2+\beta(t,x)^2\big]dx.\tag{9.65}$$

Step 2. We establish a local estimate for $\alpha$ (see (9.66) below). To this aim, we choose a function $\xi\in C^\infty[s_1,s_2]$ with $\xi(s_1)=\xi(s_2)=0$ as follows:
$$\xi(t)=(t-s_1)^2(s_2-t)^2.$$
By (9.62), we have
$$\begin{aligned}d(\xi\alpha\beta)&=\xi_t\alpha\beta\,dt-\xi\alpha\Big[\sum_{j,k=1}^n(a^{jk}\beta_{x_j})_{x_k}+a_5(t)\alpha+a_1(t)\beta+a_3(t)R+a_8(t)K\Big]dt\\ &\quad-\xi\beta\Big[\sum_{j,k=1}^n(a^{jk}\alpha_{x_j})_{x_k}+a_6(t)\alpha+a_2(t)\beta+a_7(t)K+a_4(t)R\Big]dt\\ &\quad+\xi KR\,dt+\xi\alpha R\,dW(t)+\xi\beta K\,dW(t).\end{aligned}$$
Therefore, by Condition 9.2 (which yields $a_8(\cdot)\equiv0$ on $(t_1,t_2)$), for any (small) positive constant $\varepsilon$, we find that
$$\begin{aligned}\mathbb E\int_{s_1}^{s_2}\!\!\int_Ga_5(t)\xi\alpha^2\,dx\,dt&=\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi_t\alpha\beta\,dx\,dt+2\,\mathbb E\int_{s_1}^{s_2}\xi\sum_{i\in\Lambda_r}\lambda_i\alpha^i(t)\beta^i(t)\,dt\\ &\quad+\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi\big(KR-a_1\alpha\beta-a_3\alpha R-a_6\alpha\beta-a_2\beta^2-a_7\beta K-a_4\beta R\big)dx\,dt\\ &\le\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi\alpha^2\,dx\,dt+\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\Big(\xi^{\frac32}K^2+\xi^{\frac32}\sum_{i\in\Lambda_r}\lambda_i|\alpha^i(t)|^2\Big)dx\,dt\\ &\quad+\frac C\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta^2\,dx\,dt+\frac C\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\Big(\xi^{\frac12}R^2+\xi^{\frac12}\sum_{i\in\Lambda_r}\lambda_i|\beta^i(t)|^2\Big)dx\,dt.\end{aligned}$$
This, together with Condition 9.1, implies that

$$\begin{aligned}\frac{l_0}2\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi\alpha^2\,dx\,dt&\le\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\Big(\xi^{\frac32}K^2+\xi^{\frac32}\sum_{i\in\Lambda_r}\lambda_i|\alpha^i(t)|^2\Big)dx\,dt\\ &\quad+\frac C\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta^2\,dx\,dt+\frac C\varepsilon\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\Big(\xi^{\frac12}R^2+\xi^{\frac12}\sum_{i\in\Lambda_r}\lambda_i|\beta^i(t)|^2\Big)dx\,dt.\end{aligned}\tag{9.66}$$

Step 3. In this step, we estimate the first and the third integrals on the right-hand side of (9.66). Noticing that
$$\begin{aligned}d\big(\xi^{\frac32}\alpha^2\big)&=\frac32\xi^{\frac12}\xi_t\alpha^2\,dt+2\xi^{\frac32}\alpha K\,dW(t)+\xi^{\frac32}K^2\,dt\\ &\quad-2\xi^{\frac32}\alpha\Big[\sum_{j,k=1}^n(a^{jk}\alpha_{x_j})_{x_k}+a_6(t)\alpha+a_2(t)\beta+a_7(t)K+a_4(t)R\Big]dt,\end{aligned}$$
we obtain
$$\begin{aligned}&\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac32}K^2\,dx\,dt+2\,\mathbb E\int_{s_1}^{s_2}\xi^{\frac32}\sum_{i\in\Lambda_r}\lambda_i|\alpha^i(t)|^2\,dt\\ &=-\frac32\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac12}\xi_t\alpha^2\,dx\,dt+2\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac32}\alpha\big(a_6\alpha+a_2\beta+a_7K+a_4R\big)dx\,dt.\end{aligned}$$
Hence,
$$\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac32}K^2\,dx\,dt+\mathbb E\int_{s_1}^{s_2}\xi^{\frac32}\sum_{i\in\Lambda_r}\lambda_i|\alpha^i(t)|^2\,dt\le C\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\big(\xi\alpha^2+\beta^2+\xi^{\frac12}R^2\big)dx\,dt.\tag{9.67}$$
On the other hand, by
$$\begin{aligned}d\big(\xi^{\frac12}\beta^2\big)&=\frac12\xi^{-\frac12}\xi_t\beta^2\,dt+2\xi^{\frac12}\beta R\,dW(t)+\xi^{\frac12}R^2\,dt\\ &\quad-2\xi^{\frac12}\beta\Big[\sum_{j,k=1}^n(a^{jk}\beta_{x_j})_{x_k}+a_5(t)\alpha+a_1(t)\beta+a_3(t)R+a_8(t)K\Big]dt,\end{aligned}$$
and using Condition 9.2 again, we obtain
$$\begin{aligned}&\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac12}R^2\,dx\,dt+2\,\mathbb E\int_{s_1}^{s_2}\xi^{\frac12}\sum_{i\in\Lambda_r}\lambda_i|\beta^i(t)|^2\,dt\\ &=-\frac12\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{-\frac12}\xi_t\beta^2\,dx\,dt+2\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac12}\beta\big[a_5(t)\alpha+a_1(t)\beta+a_3(t)R\big]dx\,dt.\end{aligned}$$


Hence, for any (small) positive constant $\delta$, it holds that
$$\begin{aligned}&\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi^{\frac12}R^2\,dx\,dt+\mathbb E\int_{s_1}^{s_2}\xi^{\frac12}\sum_{i\in\Lambda_r}\lambda_i|\beta^i(t)|^2\,dt\\ &\le\delta\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi\alpha^2\,dx\,dt+\frac C\delta\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta^2\,dx\,dt.\end{aligned}\tag{9.68}$$
Combining (9.66), (9.67) and (9.68), by suitably choosing $\varepsilon$ and $\delta$, we find that
$$\mathbb E\int_{s_1}^{s_2}\!\!\int_G\xi\alpha^2\,dx\,dt\le C\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta^2\,dx\,dt.\tag{9.69}$$
Choose $s_3=s_1+(s_2-s_1)/4$ and $s_4=s_2-(s_2-s_1)/4$. Then $s_1<s_3<s_4<s_2$. From (9.69), we see that
$$\mathbb E\int_{s_3}^{s_4}\!\!\int_G\alpha^2\,dx\,dt\le\frac{C}{(s_2-s_1)^4}\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta^2\,dx\,dt.\tag{9.70}$$
Step 4. Integrating (9.65) on $(s_3,s_4)$ with respect to the variable $t$, we obtain
$$\mathbb E\int_G\alpha(s_1,x)^2\,dx+\mathbb E\int_G\beta(s_1,x)^2\,dx\le\frac{C}{s_2-s_1}\,\mathbb E\int_{s_3}^{s_4}\!\!\int_G\big[\alpha(t,x)^2+\beta(t,x)^2\big]dx\,dt.\tag{9.71}$$
By (9.70)–(9.71) and (9.64), we obtain
$$\begin{aligned}\mathbb E\int_G\alpha(s_1,x)^2\,dx+\mathbb E\int_G\beta(s_1,x)^2\,dx&\le\frac{C}{(s_2-s_1)^5}\,\mathbb E\int_{s_1}^{s_2}\!\!\int_G\beta(t,x)^2\,dx\,dt\\ &\le\frac{Ce^{C\sqrt r}}{(s_2-s_1)^5}\,\mathbb E\int_{s_1}^{s_2}\!\!\int_{G_0}\beta(t,x)^2\,dx\,dt,\end{aligned}$$
which gives (9.63). This completes the proof of Proposition 9.22.

Next, we consider the following control system:


$$\begin{cases}dy-\sum\limits_{j,k=1}^n(a^{jk}y_{x_j})_{x_k}\,dt=\big[a_1(t)y+a_2(t)z+\chi_{G_0}u\big]dt+\big[a_3(t)y+a_4(t)z\big]dW(t)&\text{in }(s_1,s_2)\times G,\\ dz-\sum\limits_{j,k=1}^n(a^{jk}z_{x_j})_{x_k}\,dt=\big[a_5(t)y+a_6(t)z\big]dt+\big[a_7(t)z+a_8(t)y\big]dW(t)&\text{in }(s_1,s_2)\times G,\\ y=z=0&\text{on }(s_1,s_2)\times\Gamma,\\ y(s_1)=y_{s_1},\ z(s_1)=z_{s_1}&\text{in }G.\end{cases}\tag{9.72}$$
Here $y_{s_1},z_{s_1}\in L^2_{\mathcal F_{s_1}}(\Omega;H_r)$.

Proposition 9.22 implies the following partial controllability result for (9.72).

Proposition 9.23. Let Conditions 9.1–9.2 hold. Then, for every $r\ge\lambda_1$ and $y_{s_1},z_{s_1}\in L^2_{\mathcal F_{s_1}}(\Omega;H_r)$, there is a control $u_r\in L^2_{\mathbb F}(s_1,s_2;L^2(G_0))$ such that the corresponding solution $(y,z)$ to (9.72) with $u=u_r$ satisfies
$$\Pi_r(y(s_2))=\Pi_r(z(s_2))=0\ \text{in }G,\quad\text{a.s.}$$
Moreover, $u_r$ verifies
$$|u_r|^2_{L^2_{\mathbb F}(s_1,s_2;L^2(G_0))}\le\frac{Ce^{C\sqrt r}}{(s_2-s_1)^5}\,\mathbb E\big(|y(s_1)|^2_{L^2(G)}+|z(s_1)|^2_{L^2(G)}\big).$$
The proof of Proposition 9.23 is very similar to that of Proposition 9.13 (it follows from Theorem 7.17), and hence we omit it here.

Further, as a special case of Proposition 9.14, we have the following decay result for (9.72).

Proposition 9.24. Let $r\ge\lambda_1$. Then, for any $(y_{s_1},z_{s_1})\in L^2_{\mathcal F_{s_1}}(\Omega;L^2(G;\mathbb R^2))$ with $\Pi_r(y_{s_1})=\Pi_r(z_{s_1})=0$, a.s., the corresponding solution $(y,z)$ to (9.72) with $u\equiv0$ satisfies that, for all $t\in[s_1,s_2]$,
$$\mathbb E\big(|y(t)|^2_{L^2(G)}+|z(t)|^2_{L^2(G)}\big)\le e^{-(2r-r_1)(t-s_1)}\,\mathbb E\big(|y_{s_1}|^2_{L^2(G)}+|z_{s_1}|^2_{L^2(G)}\big).$$
Based on Propositions 9.22–9.24, one can give a proof of Theorem 9.18 similar to that of Theorem 9.7, and therefore we omit the details here.

9.3.2 Proof of the Negative Null Controllability Result

In this subsection, we show the lack of null controllability of the system (9.58), as stated in Theorem 9.19.


Proof of Theorem 9.19: First, we prove that the system (9.58) is not null controllable if $a_5(\cdot)=0$ in $(0,T)\times\Omega$, a.e.

Without loss of generality, we may assume that the coefficient $a_6(\cdot)$ in the system (9.58) equals $0$ (otherwise, we introduce the simple transformation $\tilde y=y$, $\tilde z(t)=e^{-\int_0^ta_6(s)ds}z(t)$ and $\tilde u=u$, and consider the system for the new state variable $(\tilde y,\tilde z)$ and the control variable $\tilde u$). Then, by the system (9.58), and noting that $a_5(\cdot)=a_6(\cdot)=0$ in $(0,T)\times\Omega$, a.e., we find that $(\mathbb Ey,\mathbb Ez)$ solves
$$\begin{cases}(\mathbb Ey)_t-\sum\limits_{j,k=1}^n\big(a^{jk}(\mathbb Ey)_{x_j}\big)_{x_k}=\mathbb E\big(a_1(t)y+a_2(t)z+\chi_{G_0}u\big)&\text{in }Q,\\ (\mathbb Ez)_t-\sum\limits_{j,k=1}^n\big(a^{jk}(\mathbb Ez)_{x_j}\big)_{x_k}=0&\text{in }Q,\\ \mathbb Ey=\mathbb Ez=0&\text{on }\Sigma,\\ (\mathbb Ey)(0)=y_0,\ (\mathbb Ez)(0)=z_0&\text{in }G.\end{cases}\tag{9.73}$$
Since there is no control in the second equation of (9.73), $\mathbb Ez$ cannot be driven to rest at any time $T$ if $z_0\neq0$ in $G$.

Next, we prove that the system (9.58) is not null controllable if $a_8(\cdot)\neq0$ in $(0,T)\times\Omega$, a.e., and $\frac{a_5(\cdot)}{a_8(\cdot)}\in L^\infty_{\mathbb F}(0,T)$. In the following, we construct a nontrivial solution $(\alpha,\beta,K,R)$ to (9.60) such that $a_5(t)\alpha+a_8(t)K=0$ and $(\beta,R)=0$ in $Q$, a.s. For this purpose, we consider the following linear stochastic differential equation:
$$\begin{cases}d\zeta-\lambda_1\zeta\,dt=a_6(t)\zeta\,dt-\dfrac{a_5(t)}{a_8(t)}\zeta\,dW(t)&\text{in }(0,T),\\ \zeta(0)=1.\end{cases}\tag{9.74}$$
Denote by $\zeta\in L^2_{\mathbb F}(\Omega;C([0,T]))$ the unique solution to (9.74). Write $q=\zeta(t)e_1$; then it is easy to check that $q\in L^2_{\mathbb F}(\Omega;C([0,T];H^2(G)\cap H^1_0(G)))$ solves
$$\begin{cases}dq+\sum\limits_{j,k=1}^n\big(a^{jk}q_{x_j}\big)_{x_k}\,dt=a_6(t)q\,dt-\dfrac{a_5(t)}{a_8(t)}q\,dW(t)&\text{in }Q,\\ q=0&\text{on }\Sigma,\\ q(0)=e_1&\text{in }G.\end{cases}\tag{9.75}$$
(Similarly to (9.57), one cannot solve the system (9.75) directly because it is not well-posed.) Furthermore, consider the following backward stochastic parabolic equation:


$$\begin{cases}dv+\sum\limits_{j,k=1}^n\big(a^{jk}v_{x_j}\big)_{x_k}\,dt=a_6(t)v\,dt+V\,dW(t)&\text{in }Q,\\ v=0&\text{on }\Sigma,\\ v(T)=\zeta(T)e_1&\text{in }G.\end{cases}\tag{9.76}$$

By the well-posedness of the equation (9.76), it is easy to check that the solution $(v,V)$ to (9.76) satisfies $v=q$ and $V=-\frac{a_5(t)}{a_8(t)}q$ in $Q$, a.s. It is then easy to check that $(\alpha,\beta,K,R)=(v,0,V,0)$ solves the equation (9.60) with $(\alpha_T,\beta_T)=(\zeta(T)e_1,0)$. However, $\mathbb E|\alpha(0)|^2_{L^2(G)}=1$. This implies that (9.61) does not hold for the solution to (9.60) associated with the above $(\alpha_T,\beta_T)$. By means of Proposition 9.21, we deduce that the system (9.58) is not null controllable.

Remark 9.25. Notice that, by Theorem 9.18, when $a_8(\cdot)\equiv0$ and $a_5(\cdot)\equiv1$ on $(0,T)$, a.s., the system (9.58) is null controllable at any time. On the other hand, Theorem 9.19 shows that no matter how small the nonzero bounded function $a_8(\cdot)$ is, the corresponding system is no longer null controllable. This indicates that the controllability of the stochastic coupled system (9.58) is not robust with respect to the coupling coefficient in the diffusion term. Moreover, by Condition 9.2, we see that if the coefficient $a_8(\cdot)$ equals zero on some time interval of arbitrarily small (but still positive) measure, then (9.58) is null controllable again. Therefore, the controllability of stochastic coupled parabolic systems is strongly influenced by the coupling coefficients in the diffusion terms. It also indicates that it is very hard (perhaps even impossible) to apply the Carleman estimate to prove Theorem 9.18.

9.4 Carleman Estimate for a Stochastic Parabolic-Like Operator

This section is devoted to deriving a Carleman estimate for a stochastic parabolic-like operator (to be given below), which will play a key role in the sequel. The main content of this section is taken from [315].

Throughout this section, we assume that
$$b^{jk}=b^{kj}\in L^2_{\mathbb F}(\Omega;C^1([0,T];W^{2,\infty}(G))),\qquad j,k=1,2,\cdots,n,\tag{9.77}$$
$\ell\in C^{1,3}(Q)$ and $\Psi\in C^{1,2}(Q)$. Write


303

(9.78)

j ,k =1
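The identities in this section are obtained by pointwise calculus after the substitution $v=\theta u$ with $\theta=e^\ell$. As a symbolic sanity check of this kind of substitution in one space dimension (an illustration only, not part of the text), one can verify that $\theta(bu_x)_x=(bv_x)_x-2b\ell_xv_x+(b\ell_x^2-b_x\ell_x-b\ell_{xx})v$:

```python
import sympy as sp

x = sp.symbols("x")
b, l, v = (sp.Function(s)(x) for s in ("b", "l", "v"))
theta = sp.exp(l)
u = v / theta   # u = theta^{-1} v

# Left-hand side: theta * (b u_x)_x
lhs = theta * sp.diff(b * sp.diff(u, x), x)

# Right-hand side: (b v_x)_x - 2 b l_x v_x + (b l_x^2 - b_x l_x - b l_xx) v
rhs = (sp.diff(b * sp.diff(v, x), x)
       - 2 * b * sp.diff(l, x) * sp.diff(v, x)
       + (b * sp.diff(l, x) ** 2 - sp.diff(b, x) * sp.diff(l, x)
          - b * sp.diff(l, x, 2)) * v)

assert sp.simplify(lhs - rhs) == 0
```

The multi-dimensional computation below is the same expansion carried out for the full sum over $j,k$.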

First, we establish a fundamental weighted identity for the stochastic parabolic-like operator "$d-\sum_{j,k=1}^n\partial_{x_k}(b^{jk}\partial_{x_j})\,dt$"¹, in the spirit of the pointwise identities (1.48) and (8.19).

¹ Since only the symmetry condition (9.77) is assumed for the coefficient matrix $(b^{jk})$, we call "$d-\sum_{j,k=1}^n\partial_{x_k}(b^{jk}\partial_{x_j})\,dt$" a stochastic parabolic-like operator.

Theorem 9.26. Let $u$ be an $H^2(G)$-valued Itô process. Set $\theta=e^\ell$ and $v=\theta u$. Then, for any $t\in[0,T]$ and a.e. $(x,\omega)\in G\times\Omega$,
$$\begin{aligned}&2\theta\Big[-\sum_{j,k=1}^n(b^{jk}v_{x_j})_{x_k}+\mathcal Av\Big]\Big[du-\sum_{j,k=1}^n(b^{jk}u_{x_j})_{x_k}\,dt\Big]\\ &\quad+2\sum_{j,k=1}^n\big(b^{jk}v_{x_j}dv\big)_{x_k}+2\sum_{j,k=1}^n\Big[\sum_{j',k'=1}^n\big(2b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}-b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\big)+\Psi b^{jk}v_{x_j}v-\Big(b^{jk}\mathcal A\ell_{x_j}+\frac{b^{jk}\Psi_{x_j}}2\Big)v^2\Big]_{x_k}dt\\ &=2\sum_{j,k=1}^nc^{jk}v_{x_j}v_{x_k}\,dt+\mathcal Bv^2\,dt+d\Big(\sum_{j,k=1}^nb^{jk}v_{x_j}v_{x_k}+\mathcal Av^2\Big)\\ &\quad+2\Big[-\sum_{j,k=1}^n(b^{jk}v_{x_j})_{x_k}+\mathcal Av\Big]^2dt\\ &\quad-\theta^2\sum_{j,k=1}^nb^{jk}(du_{x_j}+\ell_{x_j}du)(du_{x_k}+\ell_{x_k}du)-\theta^2\mathcal A(du)^2.\end{aligned}\tag{9.79}$$

Proof: The proof is divided into four steps.

Step 1. Recalling $\theta=e^\ell$ and $v=\theta u$, one has $du=\theta^{-1}(dv-\ell_tv\,dt)$ and $u_{x_j}=\theta^{-1}(v_{x_j}-\ell_{x_j}v)$ for $j=1,2,\cdots,n$. By (9.77), it is easy to see that $\sum_{j,k=1}^nb^{jk}(\ell_{x_j}v_{x_k}+\ell_{x_k}v_{x_j})=2\sum_{j,k=1}^nb^{jk}\ell_{x_j}v_{x_k}$. Hence,

304

9 Controllability and Observability of Stochastic Parabolic Systems n ∑

$$
\begin{aligned}
\theta\sum_{j,k=1}^{n}(b^{jk}u_{x_j})_{x_k}
&=\theta\sum_{j,k=1}^{n}\bigl[\theta^{-1}b^{jk}(v_{x_j}-\ell_{x_j}v)\bigr]_{x_k}\\
&=\sum_{j,k=1}^{n}\bigl[b^{jk}(v_{x_j}-\ell_{x_j}v)\bigr]_{x_k}
-\sum_{j,k=1}^{n}b^{jk}(v_{x_j}-\ell_{x_j}v)\ell_{x_k}\\
&=\sum_{j,k=1}^{n}\bigl[(b^{jk}v_{x_j})_{x_k}-b^{jk}(\ell_{x_j}v_{x_k}+\ell_{x_k}v_{x_j})
+(b^{jk}\ell_{x_j}\ell_{x_k}-b^{jk}_{x_k}\ell_{x_j}-b^{jk}\ell_{x_jx_k})v\bigr]\\
&=\sum_{j,k=1}^{n}\bigl[(b^{jk}v_{x_j})_{x_k}-2b^{jk}\ell_{x_j}v_{x_k}
+(b^{jk}\ell_{x_j}\ell_{x_k}-b^{jk}_{x_k}\ell_{x_j}-b^{jk}\ell_{x_jx_k})v\bigr].
\end{aligned}
\tag{9.80}
$$

Put
$$
I\stackrel{\Delta}{=}-\sum_{j,k=1}^{n}(b^{jk}v_{x_j})_{x_k}+Av,\qquad
I_1\stackrel{\Delta}{=}\Bigl[-\sum_{j,k=1}^{n}(b^{jk}v_{x_j})_{x_k}+Av\Bigr]dt,
$$
$$
I_2\stackrel{\Delta}{=}dv+2\sum_{j,k=1}^{n}b^{jk}\ell_{x_j}v_{x_k}dt,\qquad
I_3\stackrel{\Delta}{=}\Psi v\,dt.
\tag{9.81}
$$
By (9.80) and (9.81), it follows that
$$
\theta\Bigl[du-\sum_{j,k=1}^{n}(b^{jk}u_{x_j})_{x_k}dt\Bigr]=I_1+I_2+I_3.
$$
Hence,
$$
2\theta\Bigl[-\sum_{j,k=1}^{n}(b^{jk}v_{x_j})_{x_k}+Av\Bigr]\Bigl[du-\sum_{j,k=1}^{n}(b^{jk}u_{x_j})_{x_k}dt\Bigr]
=2I(I_1+I_2+I_3).
\tag{9.82}
$$

Step 2. Let us compute $2II_2$. Utilizing (9.77) again, and noting that

$$
\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_k}\bigr)_{x_{k'}}
=\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\bigr)_{x_k},
$$
we get
$$
\begin{aligned}
2\sum_{j,k,j',k'=1}^{n}b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_kx_{k'}}
&=\sum_{j,k,j',k'=1}^{n}b^{jk}b^{j'k'}\ell_{x_{j'}}\bigl(v_{x_j}v_{x_kx_{k'}}+v_{x_k}v_{x_jx_{k'}}\bigr)\\
&=\sum_{j,k,j',k'=1}^{n}b^{jk}b^{j'k'}\ell_{x_{j'}}\bigl(v_{x_j}v_{x_k}\bigr)_{x_{k'}}\\
&=\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_k}\bigr)_{x_{k'}}
-\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}\bigr)_{x_{k'}}v_{x_j}v_{x_k}\\
&=\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\bigr)_{x_k}
-\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}\bigr)_{x_{k'}}v_{x_j}v_{x_k}.
\end{aligned}
\tag{9.83}
$$
Hence, by (9.83), and noting that
$$
\sum_{j,k,j',k'=1}^{n}b^{jk}\bigl(b^{j'k'}\ell_{x_{j'}}\bigr)_{x_k}v_{x_j}v_{x_{k'}}
=\sum_{j,k,j',k'=1}^{n}b^{jk'}\bigl(b^{j'k}\ell_{x_{j'}}\bigr)_{x_{k'}}v_{x_j}v_{x_k},
$$

we obtain that
$$
\begin{aligned}
&4\Bigl[-\sum_{j,k=1}^{n}(b^{jk}v_{x_j})_{x_k}+Av\Bigr]\sum_{j,k=1}^{n}b^{jk}\ell_{x_j}v_{x_k}\\
&=-4\sum_{j,k,j',k'=1}^{n}\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}\bigr)_{x_k}
+4\sum_{j,k,j',k'=1}^{n}b^{jk}\bigl(b^{j'k'}\ell_{x_{j'}}\bigr)_{x_k}v_{x_j}v_{x_{k'}}\\
&\qquad+4\sum_{j,k,j',k'=1}^{n}b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_kx_{k'}}
+2A\sum_{j,k=1}^{n}b^{jk}\ell_{x_j}(v^2)_{x_k}\\
&=-2\sum_{j,k=1}^{n}\Bigl[\sum_{j',k'=1}^{n}\bigl(2b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}
-b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\bigr)-Ab^{jk}\ell_{x_j}v^2\Bigr]_{x_k}\\
&\qquad+2\sum_{j,k,j',k'=1}^{n}\Bigl[2b^{jk'}\bigl(b^{j'k}\ell_{x_{j'}}\bigr)_{x_{k'}}
-\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}\bigr)_{x_{k'}}\Bigr]v_{x_j}v_{x_k}
-2\sum_{j,k=1}^{n}\bigl(Ab^{jk}\ell_{x_j}\bigr)_{x_k}v^2.
\end{aligned}
\tag{9.84}
$$

Using Itô's formula, we have
$$
\begin{aligned}
2\Bigl[-\sum_{j,k=1}^{n}(b^{jk}v_{x_j})_{x_k}+Av\Bigr]dv
&=-2\sum_{j,k=1}^{n}(b^{jk}v_{x_j}dv)_{x_k}+2\sum_{j,k=1}^{n}b^{jk}v_{x_j}dv_{x_k}+2Av\,dv\\
&=-2\sum_{j,k=1}^{n}(b^{jk}v_{x_j}dv)_{x_k}
+d\Bigl(\sum_{j,k=1}^{n}b^{jk}v_{x_j}v_{x_k}+Av^2\Bigr)\\
&\qquad-\sum_{j,k=1}^{n}b^{jk}_tv_{x_j}v_{x_k}dt-A_tv^2dt
-\sum_{j,k=1}^{n}b^{jk}dv_{x_j}dv_{x_k}-A(dv)^2.
\end{aligned}
\tag{9.85}
$$


Now, from (9.81), (9.84) and (9.85), we arrive at
$$
\begin{aligned}
2II_2&=-2\sum_{j,k=1}^{n}\Bigl[\sum_{j',k'=1}^{n}\bigl(2b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}
-b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\bigr)-Ab^{jk}\ell_{x_j}v^2\Bigr]_{x_k}dt\\
&\quad-2\sum_{j,k=1}^{n}(b^{jk}v_{x_j}dv)_{x_k}
+d\Bigl(\sum_{j,k=1}^{n}b^{jk}v_{x_j}v_{x_k}+Av^2\Bigr)\\
&\quad+2\sum_{j,k=1}^{n}\Bigl[\sum_{j',k'=1}^{n}\Bigl(2b^{jk'}\bigl(b^{j'k}\ell_{x_{j'}}\bigr)_{x_{k'}}
-\bigl(b^{jk}b^{j'k'}\ell_{x_{j'}}\bigr)_{x_{k'}}\Bigr)-\frac{b^{jk}_t}{2}\Bigr]v_{x_j}v_{x_k}dt\\
&\quad-\Bigl[A_t+2\sum_{j,k=1}^{n}\bigl(Ab^{jk}\ell_{x_j}\bigr)_{x_k}\Bigr]v^2dt
-\sum_{j,k=1}^{n}b^{jk}dv_{x_j}dv_{x_k}-A(dv)^2.
\end{aligned}
\tag{9.86}
$$

(9.86) Step 3. Let us compute 2II3 . By (9.81), we get n [ ] ∑ 2II3 = 2 − (bjk vxj )xk + Av Ψ vdt j,k=1

[ = −2

n ∑

(

Ψ bjk vxj v

) xk

+ 2Ψ

j,k=1

+

n ∑

bjk vxj vxk

j,k=1

] bjk Ψxk (v2 )xj + 2AΨ v2 dt

n ∑

(9.87)

j,k=1

{ =



n ( ∑

2Ψ bjk vxj v − bjk Ψxj v2

j,k=1

[

+ −

n ∑

(bjk Ψxk )xj

) + 2Ψ xk

n ∑

bjk vxj vxk

j,k=1

] } + 2AΨ v2 dt.

j,k=1

Step 4. Finally, combining the equalities (9.82), (9.86) and (9.87), and noting that
$$
\sum_{j,k=1}^{n}b^{jk}dv_{x_j}dv_{x_k}+A(dv)^2
=\theta^2\sum_{j,k=1}^{n}b^{jk}(du_{x_j}+\ell_{x_j}du)(du_{x_k}+\ell_{x_k}du)+\theta^2A(du)^2,
$$
we conclude the desired equality (9.79) immediately. This completes the proof of Theorem 9.26.
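The conjugation computation behind Step 1 — identity (9.80) — can be checked symbolically in the one-dimensional case ($n=1$, a single coefficient $b=b^{11}$, and a space-dependent weight exponent $\ell$). The following sketch verifies that $\theta(bu_x)_x=(bv_x)_x-2b\ell_xv_x+(b\ell_x^2-b_x\ell_x-b\ell_{xx})v$ for $v=\theta u$, $\theta=e^{\ell}$; the symbols `u`, `l`, `b` are illustrative function names, not notation from the book.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)   # the Ito process (frozen in time for this check)
l = sp.Function('l')(x)   # the weight exponent ell
b = sp.Function('b')(x)   # the single coefficient b^{11}

theta = sp.exp(l)
v = theta * u             # v = theta * u as in Theorem 9.26

# left-hand side: theta * (b u_x)_x
lhs = theta * sp.diff(b * sp.diff(u, x), x)

# right-hand side of (9.80) with n = 1
rhs = (sp.diff(b * sp.diff(v, x), x)
       - 2 * b * sp.diff(l, x) * sp.diff(v, x)
       + (b * sp.diff(l, x)**2
          - sp.diff(b, x) * sp.diff(l, x)
          - b * sp.diff(l, x, 2)) * v)

assert sp.simplify(sp.expand(lhs - rhs)) == 0   # the identity holds pointwise
```

The same substitution-and-cancel pattern is what makes all the exponential factors disappear in Step 1 of the proof.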


Next, we shall derive a Carleman estimate for the stochastic parabolic-like operator "$d-\sum_{j,k=1}^{n}\partial_{x_k}(b^{jk}\partial_{x_j})dt$". For any fixed nonnegative and nonzero function $\psi\in C^4(\overline G)$, and (large) parameters $\lambda>1$ and $\mu>1$, we choose
$$
\theta=e^{\ell},\qquad \ell=\lambda\alpha,\qquad
\alpha(t,x)=\frac{e^{\mu\psi(x)}-e^{2\mu|\psi|_{C(\overline G)}}}{t(T-t)},\qquad
\varphi(t,x)=\frac{e^{\mu\psi(x)}}{t(T-t)},
\tag{9.88}
$$
and
$$
\Psi=2\sum_{j,k=1}^{n}b^{jk}\ell_{x_jx_k}.
\tag{9.89}
$$
In what follows, for a positive integer $r$, we denote by $O(\mu^r)$ a function of order $\mu^r$ for large $\mu$ (which is independent of $\lambda$), and by $O_\mu(\lambda^r)$ a function of order $\lambda^r$ for fixed $\mu$ and large $\lambda$. In a similar way, we use the notation $O(e^{\mu|\psi|_{C(\overline G)}})$ and so on. For $j,k=1,2,\cdots,n$, it is easy to check that
$$
\ell_t=\lambda\alpha_t,\qquad \ell_{x_j}=\lambda\mu\varphi\psi_{x_j},\qquad
\ell_{x_jx_k}=\lambda\mu^2\varphi\psi_{x_j}\psi_{x_k}+\lambda\mu\varphi\psi_{x_jx_k}
\tag{9.90}
$$
and that
$$
\alpha_t=\varphi^2O(e^{2\mu|\psi|_{C(\overline G)}}),\qquad
\varphi_t=\varphi^2O(e^{\mu|\psi|_{C(\overline G)}}).
\tag{9.91}
$$
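Since $\psi\le|\psi|_{C(\overline G)}$, the numerator of $\alpha$ is strictly negative, so $\alpha\to-\infty$ as $t\to0^+$ or $t\to T^-$; consequently $\theta=e^{\lambda\alpha}$ and even $\theta^2\varphi^3$ vanish at both endpoints, which is what later permits integrations in time without boundary contributions. A quick numerical check at a sample point (the values `psi_x`, `psi_sup` for $\psi(x)$ and $|\psi|_{C(\overline G)}$, and `T`, `lam`, `mu`, are hypothetical), done in logarithms to avoid floating-point underflow:

```python
import math

T, lam, mu = 2.0, 10.0, 2.0
psi_x, psi_sup = 0.7, 1.0          # sample values of psi(x) and |psi|_{C(G)}

def alpha(t):
    return (math.exp(mu * psi_x) - math.exp(2 * mu * psi_sup)) / (t * (T - t))

def log_phi(t):
    return mu * psi_x - math.log(t * (T - t))

def log_weight(t):                 # log( theta^2 * phi^3 ), theta = e^{lam*alpha}
    return 2 * lam * alpha(t) + 3 * log_phi(t)

assert alpha(T / 2) < 0                                   # alpha < 0 on (0,T)
# theta^2 phi^3 is largest in the interior and collapses near t = 0 and t = T:
assert log_weight(1e-3) < log_weight(T / 2) > log_weight(T - 1e-3)
# near the endpoints theta^2 underflows to exactly 0.0 in double precision:
assert math.exp(2 * lam * alpha(1e-3)) == 0.0
```

The exponential decay of $\theta$ always beats the polynomial blow-up of $\varphi^3$, so the weighted norms in the Carleman estimates below are finite.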

We have the following result.

Theorem 9.27. Assume that either $(b^{jk})_{n\times n}$ or $-(b^{jk})_{n\times n}$ is a uniformly positive definite matrix whose smallest eigenvalue is bounded from below by a constant $s_0>0$. Let $u$ and $v=\theta u$ be as in Theorem 9.26, with $\theta$ given by (9.88). Then the equality (9.79) holds for any $t\in[0,T]$ and a.e. $(x,\omega)\in G\times\Omega$. Moreover, for the $A$, $B$ and $c^{jk}$ appearing in (9.79) (and given by (9.78)), when $|\nabla\psi(x)|>0$ and $\lambda$ and $\mu$ are large enough, it holds that
$$
\begin{cases}
\displaystyle A=-\lambda^2\mu^2\varphi^2\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}
+\lambda\varphi^2O(e^{2\mu|\psi|_{C(\overline G)}}),\\[2mm]
\displaystyle B\ge 2s_0^2\lambda^3\mu^4\varphi^3|\nabla\psi|^4+\lambda^3\varphi^3O(\mu^3)
+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda\varphi^3O(e^{2\mu|\psi|_{C(\overline G)}}),\\[2mm]
\displaystyle \sum_{j,k=1}^{n}c^{jk}v_{x_j}v_{x_k}
\ge\bigl[s_0^2\lambda\mu^2\varphi|\nabla\psi|^2+\lambda\varphi O(\mu)\bigr]|\nabla v|^2
\end{cases}
\tag{9.92}
$$
for any $t\in[0,T]$, a.s.

Proof: By Theorem 9.26, it remains to prove the estimates in (9.92). Noting (9.89)–(9.90), from (9.78), we have $\ell_{x_jx_k}=\lambda\mu^2\varphi\psi_{x_j}\psi_{x_k}+\lambda\varphi O(\mu)$ and


$$
\begin{aligned}
\sum_{j,k=1}^{n}c^{jk}v_{x_j}v_{x_k}
&=\sum_{j,k=1}^{n}\Bigl\{\sum_{j',k'=1}^{n}\Bigl[2b^{jk'}b^{j'k}\ell_{x_{j'}x_{k'}}
+2b^{jk'}b^{j'k}_{x_{k'}}\ell_{x_{j'}}
-\bigl(b^{jk}b^{j'k'}\bigr)_{x_{k'}}\ell_{x_{j'}}
+b^{jk}b^{j'k'}\ell_{x_{j'}x_{k'}}\Bigr]-\frac{b^{jk}_t}{2}\Bigr\}v_{x_j}v_{x_k}\\
&=\sum_{j,k=1}^{n}\Bigl\{\sum_{j',k'=1}^{n}\Bigl[2\lambda\mu^2\varphi\,b^{jk'}b^{j'k}\psi_{x_{j'}}\psi_{x_{k'}}
+\lambda\mu^2\varphi\,b^{jk}b^{j'k'}\psi_{x_{j'}}\psi_{x_{k'}}\Bigr]
+\lambda\varphi O(\mu)\Bigr\}v_{x_j}v_{x_k}\\
&=2\lambda\mu^2\varphi\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}v_{x_k}\Bigr)^2
+\lambda\mu^2\varphi\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)\Bigl(\sum_{j,k=1}^{n}b^{jk}v_{x_j}v_{x_k}\Bigr)
+\lambda\varphi|\nabla v|^2O(\mu)\\
&\ge\bigl[s_0^2\lambda\mu^2\varphi|\nabla\psi|^2+\lambda\varphi O(\mu)\bigr]|\nabla v|^2,
\end{aligned}
$$
which gives the last inequality in (9.92).

Similarly, by the definition of $A$ in (9.78), and noting (9.91), we see that
$$
\begin{aligned}
A&=-\sum_{j,k=1}^{n}\bigl(b^{jk}\ell_{x_j}\ell_{x_k}-b^{jk}_{x_k}\ell_{x_j}+b^{jk}\ell_{x_jx_k}\bigr)-\ell_t\\
&=-\lambda\mu\sum_{j,k=1}^{n}\bigl[\lambda\mu\varphi^2b^{jk}\psi_{x_j}\psi_{x_k}
-b^{jk}_{x_k}\varphi\psi_{x_j}
+b^{jk}\bigl(\mu\varphi\psi_{x_j}\psi_{x_k}+\varphi\psi_{x_jx_k}\bigr)\bigr]
+\lambda\varphi^2O(e^{2\mu|\psi|_{C(\overline G)}})\\
&=-\lambda^2\mu^2\varphi^2\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}
+\lambda\varphi^2O(e^{2\mu|\psi|_{C(\overline G)}}).
\end{aligned}
$$

Hence, we get the first estimate in (9.92).

Now, let us estimate $B$ (recall (9.78) for the definition of $B$). For this, by (9.90), and recalling the definition of $\Psi$ in (9.89), we see that
$$
\begin{aligned}
\Psi&=2\lambda\mu\sum_{j,k=1}^{n}b^{jk}\bigl(\mu\varphi\psi_{x_j}\psi_{x_k}+\varphi\psi_{x_jx_k}\bigr)
=2\lambda\mu^2\varphi\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}+\lambda\varphi O(\mu),\\
\ell_{x_{j'}x_{k'}x_k}&=\lambda\mu^3\varphi\psi_{x_{j'}}\psi_{x_{k'}}\psi_{x_k}+\lambda\varphi O(\mu^2),
\end{aligned}
$$


$$
\begin{aligned}
\ell_{x_{j'}x_{k'}x_jx_k}&=\lambda\mu^4\varphi\psi_{x_{j'}}\psi_{x_{k'}}\psi_{x_j}\psi_{x_k}+\lambda\varphi O(\mu^3),\\
\Psi_{x_k}&=2\sum_{j',k'=1}^{n}\bigl(b^{j'k'}\ell_{x_{j'}x_{k'}}\bigr)_{x_k}
=2\sum_{j',k'=1}^{n}\bigl(b^{j'k'}_{x_k}\ell_{x_{j'}x_{k'}}+b^{j'k'}\ell_{x_{j'}x_{k'}x_k}\bigr)\\
&=2\lambda\mu^3\varphi\sum_{j',k'=1}^{n}b^{j'k'}\psi_{x_{j'}}\psi_{x_{k'}}\psi_{x_k}+\lambda\varphi O(\mu^2),\\
\Psi_{x_jx_k}&=2\sum_{j',k'=1}^{n}\bigl(b^{j'k'}_{x_jx_k}\ell_{x_{j'}x_{k'}}
+b^{j'k'}\ell_{x_{j'}x_{k'}x_jx_k}+2b^{j'k'}_{x_k}\ell_{x_{j'}x_{k'}x_j}\bigr)\\
&=2\lambda\mu^4\varphi\sum_{j',k'=1}^{n}b^{j'k'}\psi_{x_{j'}}\psi_{x_{k'}}\psi_{x_j}\psi_{x_k}+\lambda\varphi O(\mu^3),\\
-\sum_{j,k=1}^{n}\bigl(b^{jk}\Psi_{x_k}\bigr)_{x_j}
&=-\sum_{j,k=1}^{n}\bigl(b^{jk}_{x_j}\Psi_{x_k}+b^{jk}\Psi_{x_jx_k}\bigr)
=-2\lambda\mu^4\varphi\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2+\lambda\varphi O(\mu^3).
\end{aligned}
$$

Hence, recalling the definition of $A$ (in (9.78)), and using (9.90) and (9.91), we have that
$$
\begin{aligned}
A\Psi&=-2\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}}),\\
A_{x_k}&=-\sum_{j',k'=1}^{n}\bigl(b^{j'k'}_{x_k}\ell_{x_{j'}}\ell_{x_{k'}}
+2b^{j'k'}\ell_{x_{j'}}\ell_{x_{k'}x_k}
-b^{j'k'}_{x_{k'}x_k}\ell_{x_{j'}}-b^{j'k'}_{x_{k'}}\ell_{x_{j'}x_k}
+b^{j'k'}_{x_k}\ell_{x_{j'}x_{k'}}+b^{j'k'}\ell_{x_{j'}x_{k'}x_k}\bigr)-\ell_{tx_k}\\
&=-\sum_{j',k'=1}^{n}\bigl(2b^{j'k'}\ell_{x_{j'}}\ell_{x_{k'}x_k}
+b^{j'k'}\ell_{x_{j'}x_{k'}x_k}\bigr)-\ell_{tx_k}+\bigl(\lambda\varphi+\lambda^2\varphi^2\bigr)O(\mu^2)\\
&=-2\lambda^2\mu^3\varphi^2\sum_{j',k'=1}^{n}b^{j'k'}\psi_{x_{j'}}\psi_{x_{k'}}\psi_{x_k}
+\lambda^2\varphi^2O(\mu^2)+\lambda\varphi^2O(\mu e^{2\mu|\psi|_{C(\overline G)}}),\\
\sum_{j,k=1}^{n}A_{x_k}b^{jk}\ell_{x_j}
&=-2\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}}),\\
\sum_{j,k=1}^{n}\bigl(Ab^{jk}\ell_{x_j}\bigr)_{x_k}
&=\sum_{j,k=1}^{n}A_{x_k}b^{jk}\ell_{x_j}
+A\sum_{j,k=1}^{n}\bigl(b^{jk}_{x_k}\ell_{x_j}+b^{jk}\ell_{x_jx_k}\bigr)\\
&=-3\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}}),
\end{aligned}
$$


and that
$$
\begin{aligned}
A_t&=-\sum_{j,k=1}^{n}\bigl[\bigl(b^{jk}\ell_{x_j}\ell_{x_k}\bigr)_t-\bigl(b^{jk}_{x_k}\ell_{x_j}\bigr)_t
+\bigl(b^{jk}\ell_{x_jx_k}\bigr)_t\bigr]-\ell_{tt}\\
&=\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda\varphi^3O(e^{2\mu|\psi|_{C(\overline G)}}).
\end{aligned}
$$
From the definition of $B$ (see (9.78)), we have that
$$
\begin{aligned}
B&=-4\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})\\
&\quad+6\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})\\
&\quad+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda\varphi^3O(e^{2\mu|\psi|_{C(\overline G)}})
-2\lambda\mu^4\varphi\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2+\lambda\varphi O(\mu^3)\\
&=2\lambda^3\mu^4\varphi^3\Bigl(\sum_{j,k=1}^{n}b^{jk}\psi_{x_j}\psi_{x_k}\Bigr)^2
+\lambda^3\varphi^3O(\mu^3)+\lambda^2\varphi^3O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})
+\lambda\varphi^3O(e^{2\mu|\psi|_{C(\overline G)}}),
\end{aligned}
$$
which leads to the second estimate in (9.92).

9.5 Observability Estimate for Stochastic Parabolic Equations

In this section, we shall establish an observability estimate for the stochastic parabolic equation (9.16). The main content of this section is taken from [218, 315].

We have the following result.

Theorem 9.28. Let the condition (9.17) be satisfied. Then, solutions to (9.16) satisfy, for any $z_0\in L^2_{\mathcal F_0}(\Omega;L^2(G;\mathbb R^m))$,
$$
|z(T)|_{L^2_{\mathcal F_T}(\Omega;L^2(G;\mathbb R^m))}
\le Ce^{CR_2^2}|z|_{L^2_{\mathbb F}(0,T;L^2(G_0;\mathbb R^m))},
\tag{9.93}
$$
where $R_2$ is given in Corollary 9.5.

We shall give a proof of Theorem 9.28 in Subsection 9.5.3. For simplicity, we consider only the case $m=1$.
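Observability estimates of this type are dual to null controllability statements (cf. Theorem 7.17, used in Section 9.6). In finite dimensions the duality is elementary and can be tested numerically: for $dx=Ax\,dt+Bu\,dt$, positivity of the controllability Gramian is exactly the observability inequality for the adjoint, and the Gramian produces an explicit null control. A minimal `numpy` sketch on a toy double integrator (all names and numbers here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Toy finite-dimensional analogue: dx = Ax dt + Bu dt (double integrator).
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
T, N = 1.0, 2000
dt = T / N
ts = np.linspace(0., T, N + 1)

def Phi(s):                 # e^{As} in closed form for this nilpotent A
    return np.array([[1., s], [0., 1.]])

# Controllability Gramian W = int_0^T Phi(T-t) B B^T Phi(T-t)^T dt.
# W > 0 is the finite-dimensional "observability inequality".
W = sum(Phi(T - t) @ B @ B.T @ Phi(T - t).T for t in ts) * dt

x0 = np.array([1., -2.])
eta = np.linalg.solve(W, Phi(T) @ x0)

# Null control suggested by duality: u(t) = -B^T Phi(T-t)^T W^{-1} Phi(T) x0.
x = x0.copy()
for t in ts[:-1]:
    u = -(B.T @ Phi(T - t).T @ eta)
    x = x + dt * (A @ x + (B @ u).ravel())   # explicit Euler step

assert np.all(np.linalg.eigvalsh(W) > 0)     # Gramian positive definite
assert np.linalg.norm(x) < 1e-2              # x(T) ~ 0: steered to rest
```

In the stochastic PDE setting the same mechanism is at work, with the Carleman estimate replacing the explicit Gramian computation.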


9.5.1 Global Carleman Estimate for Stochastic Parabolic Equations, I

In this subsection, as a preliminary to the proof of Theorem 9.28, we shall derive a global Carleman estimate for the following stochastic parabolic equation:
$$
\begin{cases}
\displaystyle dh-\sum_{j,k=1}^{n}(a^{jk}h_{x_j})_{x_k}dt=f\,dt+g\,dW(t)&\mbox{in }Q,\\
h=0&\mbox{on }\Sigma,\\
h(0)=h_0&\mbox{in }G,
\end{cases}
\tag{9.94}
$$
where $h_0\in L^2_{\mathcal F_0}(\Omega;L^2(G))$, while $f$ and $g$ are suitable stochastic processes to be given later.

We begin with the following known technical result (see [117, p. 4, Lemma 1.1] and [337, Lemma 2.1] for its proof), which shows the existence of a nonnegative function with an arbitrarily given critical point location in $G$.

Lemma 9.29. For any nonempty open subset $G_1$ of $G$, there is a $\psi\in C^\infty(\overline G)$ such that $\psi>0$ in $G$, $\psi=0$ on $\Gamma$, and $|\nabla\psi(x)|>0$ for all $x\in G\setminus G_1$.

In the rest of this section, we choose $\theta$ and $\ell$ as in (9.88), and $\psi$ given by Lemma 9.29, with $G_1$ being any fixed nonempty open subset of $G$ such that $\overline{G_1}\subset G_0$. The desired Carleman estimate for (9.94) is stated as follows:

Theorem 9.30. There is a constant $\mu_0=\mu_0(G,G_0,(a^{jk})_{n\times n},T)>0$ such that for all $\mu\ge\mu_0$, one can find two constants $C=C(\mu)>0$ and $\lambda_0=\lambda_0(\mu)>0$ such that for any $\lambda\ge\lambda_0$, $h_0\in L^2_{\mathcal F_0}(\Omega;L^2(G))$, $f\in L^2_{\mathbb F}(0,T;L^2(G))$ and $g\in L^2_{\mathbb F}(0,T;H^1(G))$, the corresponding solution $h\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))$ to (9.94) satisfies
$$
\begin{aligned}
&\lambda^3\mu^4\mathbb E\int_Q\theta^2\varphi^3h^2dxdt+\lambda\mu^2\mathbb E\int_Q\theta^2\varphi|\nabla h|^2dxdt\\
&\le C\mathbb E\Bigl[\int_Q\theta^2\bigl(f^2+|\nabla g|^2+\lambda^2\mu^2\varphi^2g^2\bigr)dxdt
+\lambda^3\mu^4\int_{Q_0}\theta^2\varphi^3h^2dxdt\Bigr].
\end{aligned}
\tag{9.95}
$$

Proof: We borrow some ideas from [117]. We shall use Theorem 9.27 with $b^{jk}$ replaced by $a^{jk}$ (and hence $u=h$). The proof is divided into three steps.

Step 1. Integrating the equality (9.79) (with $b^{jk}$ replaced by $a^{jk}$) on $Q$, taking expectation on both sides, and noting (9.92), we conclude that
$$
\begin{aligned}
&2\mathbb E\int_Q\theta\Bigl[-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr]\Bigl[dh-\sum_{j,k=1}^{n}(a^{jk}h_{x_j})_{x_k}dt\Bigr]dx
+2\mathbb E\int_Q\sum_{j,k=1}^{n}(a^{jk}v_{x_j}dv)_{x_k}dx\\
&\quad+2\mathbb E\int_Q\sum_{j,k=1}^{n}\Bigl[\sum_{j',k'=1}^{n}\bigl(2a^{jk}a^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}
-a^{jk}a^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\bigr)
+\Psi a^{jk}v_{x_j}v-a^{jk}\Bigl(A\ell_{x_j}+\frac{\Psi_{x_j}}{2}\Bigr)v^2\Bigr]_{x_k}dxdt\\
&\ge2s_0^2\mathbb E\int_Q\Bigl[\varphi\bigl(\lambda\mu^2|\nabla\psi|^2+\lambda O(\mu)\bigr)|\nabla v|^2
+\varphi^3\bigl(\lambda^3\mu^4|\nabla\psi|^4+\lambda^3O(\mu^3)\\
&\qquad\qquad+\lambda^2O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda O(e^{2\mu|\psi|_{C(\overline G)}})\bigr)v^2\Bigr]dxdt
+2\mathbb E\int_Q\Bigl|-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr|^2dxdt\\
&\quad-\mathbb E\int_Q\theta^2\sum_{j,k=1}^{n}a^{jk}(dh_{x_j}+\ell_{x_j}dh)(dh_{x_k}+\ell_{x_k}dh)dx
-\mathbb E\int_Q\theta^2A(dh)^2dx,
\end{aligned}
\tag{9.96}
$$
where
$$
A=-\sum_{j,k=1}^{n}\bigl(a^{jk}\ell_{x_j}\ell_{x_k}-a^{jk}_{x_k}\ell_{x_j}+a^{jk}\ell_{x_jx_k}\bigr)-\ell_t,
\qquad
\Psi=2\sum_{j,k=1}^{n}a^{jk}\ell_{x_jx_k}.
$$
By (9.94), we find that
$$
\begin{aligned}
&2\mathbb E\int_Q\theta\Bigl[-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr]\Bigl[dh-\sum_{j,k=1}^{n}(a^{jk}h_{x_j})_{x_k}dt\Bigr]dx\\
&=2\mathbb E\int_Q\theta\Bigl[-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr]\bigl(f\,dt+g\,dW(t)\bigr)dx
=2\mathbb E\int_Q\theta\Bigl[-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr]f\,dtdx\\
&\le\mathbb E\int_Q\Bigl|-\sum_{j,k=1}^{n}(a^{jk}v_{x_j})_{x_k}+Av\Bigr|^2dtdx+\mathbb E\int_Q\theta^2f^2dtdx.
\end{aligned}
\tag{9.97}
$$
Since $h|_\Sigma=0$, we have $v|_\Sigma=0$ and
$$
v_{x_j}\big|_\Sigma=\theta h_{x_j}\big|_\Sigma=\theta\frac{\partial h}{\partial\nu}\nu^j\Big|_\Sigma.
$$
Similarly, by Lemma 9.29, we get $\ell_{x_j}=\lambda\mu\varphi\psi_{x_j}=\lambda\mu\varphi\frac{\partial\psi}{\partial\nu}\nu^j$ on $\Sigma$, and

$\frac{\partial\psi}{\partial\nu}\le0$ on $\Sigma$. Hence, there is a $\mu_0>0$ such that for all $\mu\ge\mu_0$, one can find a constant $\lambda_0=\lambda_0(\mu)$ so that for any $\lambda\ge\lambda_0$, it holds that
$$
\begin{aligned}
&2\mathbb E\int_0^T\!\!\int_{G\setminus G_1}\Bigl[\lambda\varphi\Bigl(\mu^2\min_{x\in G\setminus G_1}|\nabla\psi|^2+O(\mu)\Bigr)|\nabla v|^2\\
&\qquad+\varphi^3\Bigl(\lambda^3\mu^4\min_{x\in G\setminus G_1}|\nabla\psi|^4+\lambda^3O(\mu^3)
+\lambda^2O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda O(e^{2\mu|\psi|_{C(\overline G)}})\Bigr)v^2\Bigr]dxdt\\
&\ge\lambda\mu^2c_1\mathbb E\int_0^T\!\!\int_{G\setminus G_1}\varphi\bigl(|\nabla v|^2+\lambda^2\mu^2\varphi^2v^2\bigr)dxdt,
\end{aligned}
\tag{9.102}
$$
where $c_1\stackrel{\Delta}{=}\min\bigl(\min_{x\in G\setminus G_1}|\nabla\psi|^2,\ \min_{x\in G\setminus G_1}|\nabla\psi|^4\bigr)$.

By $h_{x_j}=\theta^{-1}(v_{x_j}-\ell_{x_j}v)=\theta^{-1}(v_{x_j}-\lambda\mu\varphi\psi_{x_j}v)$ and $v_{x_j}=\theta(h_{x_j}+\ell_{x_j}h)=\theta(h_{x_j}+\lambda\mu\varphi\psi_{x_j}h)$, we obtain that
$$
\frac1C\theta^2\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)
\le|\nabla v|^2+\lambda^2\mu^2\varphi^2v^2
\le C\theta^2\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr).
$$
Therefore, it follows from (9.102) that
$$
\begin{aligned}
&\lambda\mu^2\mathbb E\int_Q\theta^2\varphi\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)dxdt\\
&=\lambda\mu^2\mathbb E\Bigl(\int_0^T\!\!\int_{G\setminus G_1}+\int_0^T\!\!\int_{G_1}\Bigr)
\theta^2\varphi\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)dxdt\\
&\le C\Bigl\{\mathbb E\int_Q\Bigl[\varphi\bigl(\lambda\mu^2|\nabla\psi|^2+\lambda O(\mu)\bigr)|\nabla v|^2
+\varphi^3\bigl(\lambda^3\mu^4|\nabla\psi|^4+\lambda^3O(\mu^3)\\
&\qquad+\lambda^2O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda O(e^{2\mu|\psi|_{C(\overline G)}})\bigr)v^2\Bigr]dxdt
+\lambda\mu^2\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)dxdt\Bigr\}.
\end{aligned}
\tag{9.103}
$$
Now, combining (9.101) and (9.103), we end up with
$$
\begin{aligned}
&\lambda\mu^2\mathbb E\int_Q\theta^2\varphi\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)dxdt\\
&\le C\Bigl[\mathbb E\int_Q\theta^2\bigl(f^2+|\nabla g|^2+\lambda^2\mu^2\varphi^2g^2\bigr)dxdt
+\lambda\mu^2\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi\bigl(|\nabla h|^2+\lambda^2\mu^2\varphi^2h^2\bigr)dxdt\Bigr].
\end{aligned}
\tag{9.104}
$$


Step 3. We choose a cut-off function $\zeta\in C_0^\infty(G_0;[0,1])$ so that $\zeta\equiv1$ in $G_1$. By
$$
d(\theta^2\varphi h^2)=(\theta^2\varphi)_th^2dt+2\theta^2\varphi h\,dh+\theta^2\varphi(dh)^2,
$$
recalling $\lim_{t\to0^+}(\theta^2\varphi)(t,\cdot)=\lim_{t\to T^-}(\theta^2\varphi)(t,\cdot)\equiv0$ and using (9.94), we find that
$$
\begin{aligned}
0&=\mathbb E\int_{Q_0}\zeta^2\bigl[(\theta^2\varphi)_th^2dt+2\theta^2\varphi h\,dh+\theta^2\varphi(dh)^2\bigr]dx\\
&=\mathbb E\int_{Q_0}\theta^2\Bigl[\zeta^2h^2(\varphi_t+2\lambda\varphi\alpha_t)
+2\varphi h\zeta^2\Bigl(\sum_{j,k=1}^{n}(a^{jk}h_{x_j})_{x_k}+f\Bigr)+\zeta^2\varphi g^2\Bigr]dxdt\\
&=\mathbb E\int_{Q_0}\theta^2\Bigl[\zeta^2h^2(\varphi_t+2\lambda\varphi\alpha_t)
-2\zeta^2\varphi\sum_{j,k=1}^{n}a^{jk}h_{x_j}h_{x_k}
-2\mu\zeta^2\varphi(1+2\lambda\varphi)h\sum_{j,k=1}^{n}a^{jk}h_{x_j}\psi_{x_k}\\
&\qquad-4\zeta\varphi h\sum_{j,k=1}^{n}a^{jk}h_{x_j}\zeta_{x_k}
+2\zeta^2\varphi fh+\zeta^2\varphi g^2\Bigr]dxdt.
\end{aligned}
\tag{9.105}
$$
From (9.105), we deduce that, for any $\varepsilon>0$,
$$
\begin{aligned}
&2\mathbb E\int_{Q_0}\theta^2\zeta^2\varphi\sum_{j,k=1}^{n}a^{jk}h_{x_j}h_{x_k}dxdt\\
&=\mathbb E\int_{Q_0}\theta^2\Bigl[\zeta^2h^2(\varphi_t+2\lambda\varphi\alpha_t)
-2\mu\zeta^2\varphi(1+2\lambda\varphi)h\sum_{j,k=1}^{n}a^{jk}h_{x_j}\psi_{x_k}\\
&\qquad-4\zeta\varphi h\sum_{j,k=1}^{n}a^{jk}h_{x_j}\zeta_{x_k}+2\zeta^2\varphi fh+\zeta^2\varphi g^2\Bigr]dxdt\\
&\le\varepsilon\mathbb E\int_{Q_0}\theta^2\zeta^2\varphi|\nabla h|^2dxdt
+\frac C\varepsilon\mathbb E\int_{Q_0}\theta^2\Bigl(\frac1{\lambda^2\mu^2}f^2+\lambda^2\mu^2\varphi^3h^2\Bigr)dxdt
+\mathbb E\int_{Q_0}\theta^2\varphi g^2dxdt.
\end{aligned}
\tag{9.106}
$$
From (9.106), we conclude that
$$
\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi|\nabla h|^2dxdt
\le C\Bigl[\mathbb E\int_Q\theta^2\Bigl(\frac1{\lambda^2\mu^2}f^2+\varphi g^2\Bigr)dxdt
+\lambda^2\mu^2\mathbb E\int_{Q_0}\theta^2\varphi^3h^2dxdt\Bigr].
\tag{9.107}
$$
Finally, combining (9.104) and (9.107), we obtain (9.95).

Remark 9.31. In Theorem 9.30, it is assumed that $g\in L^2_{\mathbb F}(0,T;H^1(G))$. This assumption is not natural. In the next subsection, we shall relax it to $g\in L^2_{\mathbb F}(0,T;L^2(G))$.


As an easy consequence of Theorem 9.30, we have the following result.

Corollary 9.32. For $\mu=\mu_0$ and any $\lambda\ge\lambda_0$ ($=\lambda_0(\mu_0)$) given in Theorem 9.30, $h_0\in L^2_{\mathcal F_0}(\Omega;L^2(G))$ and $f\in L^2_{\mathbb F}(0,T;L^2(G))$, the corresponding solution $h\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))$ to (9.94) with $g\equiv0$ satisfies
$$
\lambda^3\mathbb E\int_Q\theta^2\varphi^3h^2dxdt+\lambda\mathbb E\int_Q\theta^2\varphi|\nabla h|^2dxdt
\le C\Bigl(\mathbb E\int_Q\theta^2f^2dxdt+\lambda^3\mathbb E\int_{Q_0}\theta^2\varphi^3h^2dxdt\Bigr).
$$

Since the equation (9.94) with $g\equiv0$ is a random parabolic equation, Corollary 9.32 follows also from the known global Carleman estimate for deterministic parabolic equations (e.g. [117]). Nevertheless, it seems that Theorem 9.30 has some independent interest.

9.5.2 Global Carleman Estimate for Stochastic Parabolic Equations, II

In this subsection, we derive an improved global Carleman estimate for the forward stochastic parabolic equation (9.94). Throughout this subsection, $\mu=\mu_0$ and $\lambda\ge\lambda_0$ are given as in Theorem 9.30, and $\theta$ and $\varphi$ are the same as in the last subsection. For any fixed $f,g\in L^2_{\mathbb F}(0,T;L^2(G))$ and $h_0\in L^2_{\mathcal F_0}(\Omega;L^2(G))$, let $h$ denote the corresponding solution to the equation (9.94).

Based on the Carleman estimate in Corollary 9.32, we give below a "partial" null controllability result for the following controlled backward stochastic parabolic equation:
$$
\begin{cases}
\displaystyle dr+\sum_{j,k=1}^{n}\bigl(a^{jk}r_{x_j}\bigr)_{x_k}dt
=\bigl(\lambda^3\theta^2\varphi^3h+\chi_{G_1}u\bigr)dt+R\,dW(t)&\mbox{in }Q,\\
r=0&\mbox{on }\Sigma,\\
r(T)=0&\mbox{in }G,
\end{cases}
\tag{9.108}
$$
where $u\in L^2_{\mathbb F}(0,T;L^2(G_1))$ is the control variable and $(r,R)$ is the state variable.

Proposition 9.33. There exists a control $\hat u\in L^2_{\mathbb F}(0,T;L^2(G_1))$ such that the corresponding solution $(\hat r,\hat R)\in\bigl(L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))\bigr)\times L^2_{\mathbb F}(0,T;L^2(G))$ to (9.108) with $u=\hat u$ satisfies $\hat r(0)=0$ in $G$, a.s. Moreover,
$$
\mathbb E\int_Q\theta^{-2}\bigl(\hat r^2+\lambda^{-3}\varphi^{-3}\hat u^2+\lambda^{-2}\varphi^{-2}\hat R^2\bigr)dxdt
\le C\lambda^3\mathbb E\int_Q\theta^2\varphi^3h^2dxdt.
\tag{9.109}
$$


Proof: We divide the proof into several steps.

Step 1. Write
$$
\mathcal U=\Bigl\{u\in L^2_{\mathbb F}(0,T;L^2(G_1))\ \Big|\ \mathbb E\int_0^T\!\!\int_{G_1}\theta^{-2}\varphi^{-3}u^2dxdt<\infty\Bigr\}.
$$
For any $\varepsilon>0$, write
$$
\alpha_\varepsilon\equiv\alpha_\varepsilon(t,x)
=\frac{e^{\mu\psi(x)}-e^{2\mu|\psi|_{C(\overline G)}}}{(t+\varepsilon)(T-t+\varepsilon)},
$$
and construct the following optimal control problem for the equation (9.108):
$$
\inf\bigl\{\mathcal J(u)\ \big|\ u\in\mathcal U\bigr\},
$$
where
$$
\mathcal J(u)\stackrel{\Delta}{=}\frac12\Bigl(\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}r^2dxdt
+\lambda^{-3}\mathbb E\int_0^T\!\!\int_{G_1}\theta^{-2}\varphi^{-3}u^2dxdt
+\frac1\varepsilon\mathbb E\int_Gr^2(0)dx\Bigr).
$$
It is easy to check that, for each $\varepsilon>0$, the above optimal control problem admits a unique optimal solution $(u_\varepsilon,r_\varepsilon,R_\varepsilon)\in\mathcal U\times L^2_{\mathbb F}(0,T;H^1_0(G))\times L^2_{\mathbb F}(0,T;L^2(G))$. Moreover,
$$
u_\varepsilon=\chi_{G_1}\lambda^3\theta^2\varphi^3z_\varepsilon\quad\mbox{in }Q,\ \mbox{a.s.},
\tag{9.110}
$$
where $z_\varepsilon$ satisfies
$$
\begin{cases}
\displaystyle z_{\varepsilon,t}-\sum_{j,k=1}^{n}\bigl(a^{jk}z_{\varepsilon,x_j}\bigr)_{x_k}=e^{-2\lambda\alpha_\varepsilon}r_\varepsilon&\mbox{in }Q,\\
z_\varepsilon=0&\mbox{on }\Sigma,\\
\displaystyle z_\varepsilon(0)=\frac1\varepsilon r_\varepsilon(0)&\mbox{in }G.
\end{cases}
\tag{9.111}
$$
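The mechanism of this penalization — the state at the targeted time is charged with weight $1/\varepsilon$, and the uniform estimate then shows that it vanishes in the limit while the control cost stays bounded — can be illustrated on a scalar ODE analogue. A sketch under stated assumptions (forward scalar dynamics $\dot x=ax+u$, target $x(T)=0$, midpoint quadrature; all names and numbers hypothetical):

```python
import numpy as np

# Minimize J_eps(u) = dt*sum(u^2) + (1/eps)*x(T)^2  subject to  x' = a x + u,
# x(0) = x0. Here x(T) = c + w.u with c = e^{aT} x0 and w_k = dt*e^{a(T-t_k)}.
a, x0, T, N = 1.0, 1.0, 1.0, 1000
dt = T / N
t = (np.arange(N) + 0.5) * dt           # midpoint grid
w = dt * np.exp(a * (T - t))
c = np.exp(a * T) * x0

finals, costs = [], []
for eps in [1.0, 1e-2, 1e-4, 1e-6]:
    # closed-form minimizer of the quadratic J_eps:
    #   grad = 2*dt*u + (2/eps)*(c + w.u)*w = 0  =>  u = -((c+s)/(eps*dt))*w
    s = -c * (w @ w) / (eps * dt + w @ w)
    u = -((c + s) / (eps * dt)) * w
    finals.append(abs(c + w @ u))       # |x_eps(T)|, the penalized endpoint
    costs.append(dt * (u @ u))          # control cost, uniformly bounded

assert finals[-1] < 1e-4 * finals[0]    # endpoint driven to ~0 as eps -> 0
assert max(costs) < 10 * costs[-1]      # cost does not blow up
```

This mirrors the role of $\frac1\varepsilon\mathbb E\int_Gr_\varepsilon^2(0)dx$ in $\mathcal J$ and of the uniform estimate (9.113): weak limits of $(u_\varepsilon)$ then furnish the exact steering control.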

Step 2. We now establish a uniform estimate for the optimal solutions $(u_\varepsilon,r_\varepsilon,R_\varepsilon)$ w.r.t. $\varepsilon>0$. By (9.108), (9.111), Itô's formula and (9.110), it follows that
$$
\begin{aligned}
-\mathbb E\int_Gr_\varepsilon(0)z_\varepsilon(0)dx
&=\mathbb E\int_Q\Bigl\{r_\varepsilon\Bigl[\sum_{j,k=1}^{n}\bigl(a^{jk}z_{\varepsilon,x_j}\bigr)_{x_k}
+e^{-2\lambda\alpha_\varepsilon}r_\varepsilon\Bigr]
+z_\varepsilon\Bigl[-\sum_{j,k=1}^{n}\bigl(a^{jk}r_{\varepsilon,x_j}\bigr)_{x_k}
+\lambda^3\theta^2\varphi^3h+\chi_{G_1}u_\varepsilon\Bigr]\Bigr\}dxdt\\
&=\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2
+\lambda^3\theta^2\varphi^3z_\varepsilon h
+\chi_{G_1}\lambda^3\theta^2\varphi^3z_\varepsilon^2\bigr)dxdt.
\end{aligned}
$$
This, together with the last equality of (9.111) and Corollary 9.32, indicates that for any $\rho>0$,
$$
\begin{aligned}
&\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2
+\chi_{G_1}\lambda^3\theta^2\varphi^3z_\varepsilon^2\bigr)dxdt
+\frac1\varepsilon\mathbb E\int_Gr_\varepsilon^2(0)dx
=-\lambda^3\mathbb E\int_Q\theta^2\varphi^3z_\varepsilon h\,dxdt\\
&\le\rho\lambda^3\mathbb E\int_Q\theta^2\varphi^3z_\varepsilon^2dxdt
+\frac{\lambda^3}{4\rho}\mathbb E\int_Q\theta^2\varphi^3h^2dxdt\\
&\le\rho C\Bigl[\lambda^3\mathbb E\int_{Q_0}\theta^2\varphi^3z_\varepsilon^2dxdt
+\mathbb E\int_0^T\!\!\int_G\theta^2\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon\bigr)^2dxdt\Bigr]
+\frac{\lambda^3}{4\rho}\mathbb E\int_Q\theta^2\varphi^3h^2dxdt.
\end{aligned}
\tag{9.112}
$$
Notice that $\theta^2e^{-2\lambda\alpha_\varepsilon}\le1$. Therefore, by (9.110) and (9.112), we see that
$$
\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2
+\lambda^{-3}\theta^{-2}\varphi^{-3}u_\varepsilon^2\bigr)dxdt
+\frac1\varepsilon\mathbb E\int_Gr_\varepsilon^2(0)dx
\le C\lambda^3\mathbb E\int_Q\theta^2\varphi^3h^2dxdt.
\tag{9.113}
$$
On the other hand, by the first equation of (9.108), it follows that
$$
\begin{aligned}
d\bigl(e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}r_\varepsilon^2\bigr)
&=\bigl(e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}\bigr)_tr_\varepsilon^2dt
+e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}(dr_\varepsilon)^2\\
&\quad+2e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}r_\varepsilon
\Bigl[-\sum_{j,k=1}^{n}\bigl(a^{jk}r_{\varepsilon,x_j}\bigr)_{x_k}dt
+\lambda^3\theta^2\varphi^3h\,dt+\chi_{G_1}u_\varepsilon dt+R_\varepsilon dW(t)\Bigr].
\end{aligned}
$$
This implies that
$$
\begin{aligned}
&\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}R_\varepsilon^2dxdt
+2\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}\sum_{j,k=1}^{n}a^{jk}r_{\varepsilon,x_j}r_{\varepsilon,x_k}dxdt\\
&=-\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}\bigr)_tr_\varepsilon^2dxdt
-2\mathbb E\int_Q\sum_{j,k=1}^{n}a^{jk}\bigl(e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}\bigr)_{x_k}r_{\varepsilon,x_j}r_\varepsilon dxdt\\
&\quad-2\lambda^3\mathbb E\int_Q\theta^2e^{-2\lambda\alpha_\varepsilon}\varphi r_\varepsilon h\,dxdt
-2\mathbb E\int_Q\chi_{G_1}e^{-2\lambda\alpha_\varepsilon}\varphi^{-2}r_\varepsilon u_\varepsilon dxdt.
\end{aligned}
$$
Notice that $e^{-2\lambda\alpha_\varepsilon}\le\theta^{-2}$. By a simple calculation, we find that
$$
\begin{aligned}
&\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}R_\varepsilon^2dxdt
+2\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}|\nabla r_\varepsilon|^2dxdt\\
&\le C\Bigl(\lambda^{-1}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2dxdt
+\lambda^{-1}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-1}|\nabla r_\varepsilon||r_\varepsilon|dxdt\\
&\qquad+\lambda\mathbb E\int_Q\theta e^{-\lambda\alpha_\varepsilon}\varphi|r_\varepsilon h|dxdt
+\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}|r_\varepsilon u_\varepsilon|dxdt\Bigr)\\
&\le\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}|\nabla r_\varepsilon|^2dxdt
+C\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2
+\lambda^{-3}\theta^{-2}\varphi^{-3}u_\varepsilon^2
+\lambda^3\theta^2\varphi^3h^2\bigr)dxdt.
\end{aligned}
\tag{9.114}
$$
Combining (9.114) with (9.113), we conclude that
$$
\begin{aligned}
&\mathbb E\int_Q\bigl(e^{-2\lambda\alpha_\varepsilon}r_\varepsilon^2
+\lambda^{-3}\theta^{-2}\varphi^{-3}u_\varepsilon^2\bigr)dxdt
+\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}|\nabla r_\varepsilon|^2dxdt\\
&\quad+\frac1\varepsilon\mathbb E\int_Gr_\varepsilon^2(0)dx
+\lambda^{-2}\mathbb E\int_Qe^{-2\lambda\alpha_\varepsilon}\varphi^{-2}R_\varepsilon^2dxdt
\le C\lambda^3\mathbb E\int_Q\theta^2\varphi^3h^2dxdt.
\end{aligned}
\tag{9.115}
$$

By (9.115), and noting (9.108), we deduce that there exists a triple $(\hat u,\hat r,\hat R)\in L^2_{\mathbb F}(0,T;L^2(G_1))\times L^2_{\mathbb F}(0,T;H^1_0(G))\times L^2_{\mathbb F}(0,T;L^2(G))$ such that, as $\varepsilon\to0$,
$$
\begin{cases}
u_\varepsilon\to\hat u&\mbox{weakly in }L^2((0,T)\times\Omega;L^2(G_1)),\\
r_\varepsilon\to\hat r&\mbox{weakly in }L^2((0,T)\times\Omega;H^1_0(G)),\\
R_\varepsilon\to\hat R&\mbox{weakly in }L^2((0,T)\times\Omega;L^2(G)).
\end{cases}
\tag{9.116}
$$

Step 3. We conclude that $(\hat r,\hat R)$ is the solution to (9.108) with $u=\hat u$. In fact, by Theorem 4.11, the equation (9.108) with $u=\hat u$ admits one and only one solution $(\tilde r,\tilde R)\in\bigl(L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))\bigr)\times L^2_{\mathbb F}(0,T;L^2(G))$. Then, for any $f_i\in L^2_{\mathbb F}(0,T;L^2(G))$ ($i=1,2$), consider the following forward stochastic parabolic equation:
$$
\begin{cases}
\displaystyle d\phi-\sum_{j,k=1}^{n}\bigl(a^{jk}\phi_{x_j}\bigr)_{x_k}dt=f_1dt+f_2dW(t)&\mbox{in }Q,\\
\phi=0&\mbox{on }\Sigma,\\
\phi(0)=0&\mbox{in }G.
\end{cases}
\tag{9.117}
$$
By (9.108), (9.117) and Itô's formula, we see that






E

e 2 )dxdt = 0. (˜ rf1 + Rf

(λ θ φ h + χG1 u ˆ)ϕdxdt + E 3 2

3

Q

(9.118)

Q

Likewise, ∫



E

(λ3 θ2 φ3 h + χG1 uε )ϕdxdt + E Q

(rε f1 + Rε f2 )dxdt = 0, Q

which, together with (9.116), indicates that ∫ ∫ b 2 )dxdt = 0. E (λ3 θ2 φ3 h + χG1 u ˆ)ϕdxdt + E (ˆ rf1 + Rf Q

(9.119)

Q

e=R b in Q, a.s. Combining (9.118) and (9.119), we see that r˜ = rˆ and R Finally, by (9.115), we obtain that rˆ(0) = 0 in G, a.s. and the estimate (9.109) holds. Now, based on the above null controllability result for the backward parabolic equation (9.108), we show the following improved global Carleman estimate for (9.94) (compared to Theorem 9.30): Theorem 9.34. For µ = µ0 and any λ ≥ λ0 given in Theorem 9.30, h0 ∈ L2F0 (Ω; L2 (G)) and f, g∩∈ L2F (0, T ; L2 (G)), the corresponding solution h ∈ L2F (Ω; C([0, T ]; L2 (G))) L2F (0, T ; H01 (G)) to (9.94) satisfies ∫ ( ) E θ2 λ3 φ3 h2 + λφ|∇h|2 dxdt Q ∫ ∫ (9.120) [ ] ( ) ≤ CE θ2 f 2 + λ2 φ2 g 2 dxdt + λ3 θ2 φ3 h2 dxdt . Q

Q0

b Proof : For any h0 ∈ L2F0 (Ω; L2 (G)) and f, g ∈ L2F (0, T ; L2 (G)), let (ˆ r, R) be the solution to (9.108) with u = u ˆ, which was given in Proposition 9.33. Then, by Itˆo’s formula, we see that ∫ ∫ b λ3 E θ2 φ3 h2 dxdt = −E (χG1 u ˆh + rˆf + Rg)dxdt. Q

Q

It follows that for any ρ > 0, ∫ 3 λ E θ2 φ3 h2 dxdt Q ∫ ( ) b2 dxdt ≤ ρE θ−2 λ−3 φ−3 u ˆ2 + rˆ2 + λ−2 φ−2 R Q ∫ T∫ ∫ ] ( ) 1[ 3 + λ E θ2 φ3 h2 dxdt + E θ2 f 2 + λ2 φ2 g 2 dxdt . 4ρ 0 G1 Q By (9.109), this implies that


$$
\lambda^3\mathbb E\int_Q\theta^2\varphi^3h^2dxdt
\le C\Bigl[\lambda^3\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi^3h^2dxdt
+\mathbb E\int_Q\theta^2\bigl(f^2+\lambda^2\varphi^2g^2\bigr)dxdt\Bigr].
\tag{9.121}
$$
On the other hand, by Itô's formula, we have that
$$
d(\theta^2\varphi h^2)=(\theta^2\varphi)_th^2dt
+2\theta^2\varphi h\Bigl(\sum_{j,k=1}^{n}(a^{jk}h_{x_j})_{x_k}dt+f\,dt+g\,dW(t)\Bigr)
+\theta^2\varphi(dh)^2.
$$
Hence, for any $\rho>0$,
$$
\begin{aligned}
2\mathbb E\int_Q\theta^2\varphi\sum_{j,k=1}^{n}a^{jk}h_{x_j}h_{x_k}dxdt
&=\mathbb E\int_Q\Bigl[(\theta^2\varphi)_th^2
-2\sum_{j,k=1}^{n}a^{jk}(\theta^2\varphi)_{x_k}h_{x_j}h
+2\theta^2\varphi hf+\theta^2\varphi g^2\Bigr]dxdt\\
&\le C\mathbb E\int_Q\bigl(\lambda\theta^2\varphi^3h^2+\lambda\theta^2\varphi^2|\nabla h||h|
+\theta^2\varphi|hf|+\theta^2\varphi g^2\bigr)dxdt\\
&\le\rho\mathbb E\int_Q\theta^2\varphi|\nabla h|^2dxdt
+\frac C\rho\mathbb E\int_Q\theta^2\bigl(\lambda^2\varphi^3h^2+\lambda^{-1}f^2+\varphi g^2\bigr)dxdt.
\end{aligned}
$$
By (9.121), this implies that
$$
\lambda\mathbb E\int_Q\theta^2\varphi|\nabla h|^2dxdt
\le C\Bigl[\lambda^3\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi^3h^2dxdt
+\mathbb E\int_Q\theta^2\bigl(f^2+\lambda^2\varphi^2g^2\bigr)dxdt\Bigr].
\tag{9.122}
$$
Combining (9.121) and (9.122), we obtain the desired estimate (9.120).

Remark 9.35. By a method similar to that used in the proof of Theorem 9.34, one can show the following global Carleman estimate for (9.94) in the $H^{-1}$-space: for $\mu=\mu_0$ and any $\lambda\ge\lambda_0$ given in Theorem 9.30, $h_0\in L^2_{\mathcal F_0}(\Omega;L^2(G))$, $f\in L^2_{\mathbb F}(0,T;H^{-1}(G))$ and $g\in L^2_{\mathbb F}(0,T;L^2(G))$, the corresponding solution $h\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))$ to (9.94) satisfies
$$
\begin{aligned}
&\mathbb E\int_Q\theta^2\bigl(\lambda\varphi^3h^2+\lambda^{-1}\varphi|\nabla h|^2\bigr)dxdt\\
&\le C\Bigl(\lambda\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi^3h^2dxdt
+\mathbb E\int_0^T\varphi^2|\theta f|^2_{H^{-1}(G)}dt
+\mathbb E\int_Q\theta^2\varphi^2g^2dxdt\Bigr).
\end{aligned}
$$


9.5.3 Proof of the Observability Result

We are now in a position to prove Theorem 9.28.

Proof of Theorem 9.28: Applying Theorem 9.34 to the equation (9.16), recalling (9.90) and the definition of $\kappa_2$, and noting also that we assume $m=1$, we obtain that, for $\mu=\mu_0$ and any $\lambda\ge\lambda_0$ given in Theorem 9.30,
$$
\begin{aligned}
&\lambda^3\mathbb E\int_Q\theta^2\varphi^3z^2dxdt+\lambda\mathbb E\int_Q\theta^2\varphi|\nabla z|^2dxdt\\
&\le C\mathbb E\Bigl\{\int_Q\theta^2\Bigl[\Bigl(\sum_{j=1}^{n}b_{1j}z_{x_j}+b_2z\Bigr)^2
+\lambda^2\varphi^2(b_3z)^2\Bigr]dxdt
+\lambda^3\int_{Q_0}\theta^2\varphi^3z^2dxdt\Bigr\}\\
&\le C\Bigl[\kappa_2^2\mathbb E\int_Q\theta^2\bigl(|\nabla z|^2+\lambda^2\varphi^3z^2\bigr)dxdt
+\lambda^3\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt\Bigr].
\end{aligned}
\tag{9.123}
$$
Choosing $\lambda=C(1+\kappa_2^2)$, from (9.123), we deduce that
$$
\mathbb E\int_Q\theta^2\varphi^3z^2dxdt
\le Ce^{C\kappa_2^2}\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt.
\tag{9.124}
$$
Noting that
$$
\mathbb E\int_Q\theta^2\varphi^3z^2dxdt
\ge\mathbb E\int_{T/4}^{3T/4}\!\!\int_G\theta^2\varphi^3z^2dxdt
\ge\Bigl(\min_{x\in\overline G}\theta^2(T/4,x)\varphi^3(T/2,x)\Bigr)
\mathbb E\int_{T/4}^{3T/4}\!\!\int_Gz^2dxdt
$$
and that
$$
\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt
\le\Bigl(\max_{(t,x)\in Q}\theta^2(t,x)\varphi^3(t,x)\Bigr)\mathbb E\int_{Q_0}z^2dxdt,
$$
recalling (9.88) and $\theta=e^{\ell}$, we deduce from (9.124) that
$$
\mathbb E\int_{T/4}^{3T/4}\!\!\int_Gz^2dxdt
\le Ce^{C\kappa_2^2}\,
\frac{\max_{(t,x)\in Q}\theta^2(t,x)\varphi^3(t,x)}
{\min_{x\in\overline G}\theta^2(T/4,x)\varphi^3(T/2,x)}\,
\mathbb E\int_{Q_0}z^2dxdt
\le Ce^{C\kappa_2^2}\mathbb E\int_{Q_0}z^2dxdt.
\tag{9.125}
$$
By Corollary 9.5, the inequality (9.125) implies the desired estimate (9.93). This completes the proof of Theorem 9.28.
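The final step above divides a maximum of $\theta^2\varphi^3$ over $Q$ by its minimum over $x$ with $\theta$ evaluated at $T/4$ and $\varphi$ at $T/2$; the point is that this quotient is finite, since $\theta^2\varphi^3$ degenerates only as $t\to0^+$ or $t\to T^-$. This can be checked numerically in one space dimension (working with logarithms to avoid under/overflow; the sample $\psi$, $\lambda$, $\mu$ below are hypothetical choices for illustration):

```python
import numpy as np

# Check that the constant absorbed into C e^{C kappa^2} in (9.125) is finite.
T, lam, mu = 2.0, 5.0, 2.0
xs = np.linspace(0.05, 0.95, 91)
psi = xs * (1 - xs) + 0.1                # a sample positive psi on G (hypothetical)
psi_max = psi.max()
ts = np.linspace(1e-3, T - 1e-3, 2001)

tt, pp = np.meshgrid(ts, psi, indexing='ij')
alpha = (np.exp(mu * pp) - np.exp(2 * mu * psi_max)) / (tt * (T - tt))
logphi = mu * pp - np.log(tt * (T - tt))
logw = 2 * lam * alpha + 3 * logphi       # log( theta^2 phi^3 ) on the grid

logM = logw.max()                         # log of max_Q theta^2 phi^3

a14 = (np.exp(mu * psi) - np.exp(2 * mu * psi_max)) / ((T / 4) * (3 * T / 4))
logphi_T2 = mu * psi - np.log((T / 2) * (T / 2))
logm = (2 * lam * a14 + 3 * logphi_T2).min()   # log of min_x theta^2(T/4) phi^3(T/2)

assert np.isfinite(logM) and np.isfinite(logm)
assert logM >= logm                       # the quotient in (9.125) is >= 1 and finite
```

Since $\alpha(T/4,x)\le\alpha(T/2,x)$ while $\varphi$ in the denominator is taken at $T/2$, the mixed-time minimum is indeed dominated by the interior maximum, which the assertion reflects.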


9.6 Null and Approximate Controllability of Stochastic Parabolic Equations

In this section, based on [315], we deal with the null and approximate controllability of (9.6). First of all, we have the following result.

Theorem 9.36. Under the condition (9.7), the system (9.6) is null controllable at any time $T$.

In order to prove Theorem 9.36, in view of Theorem 7.17, we need the following observability result for (9.10) (recall (9.13) for $\tilde\kappa_1$):

Theorem 9.37. Under the condition (9.7), all solutions $(z,Z)\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G;\mathbb R^m)))\times L^2_{\mathbb F}(0,T;L^2(G;\mathbb R^m))$ to the system (9.10) satisfy
$$
|z(0)|_{L^2_{\mathcal F_0}(\Omega;L^2(G;\mathbb R^m))}
\le Ce^{C\tilde\kappa_1^2}\bigl(|\chi_{G_0}z|_{L^2_{\mathbb F}(0,T;L^2(G;\mathbb R^m))}
+|Z|_{L^2_{\mathbb F}(0,T;L^2(G;\mathbb R^m))}\bigr),
\quad\forall\,z_T\in L^2_{\mathcal F_T}(\Omega;L^2(G;\mathbb R^m)).
\tag{9.126}
$$

Remark 9.38. In Theorem 9.37, we assume that $a_{1j}\in L^\infty_{\mathbb F}(0,T;W^{1,\infty}(G;\mathbb R^{m\times m}))$ for $j=1,2,\cdots,n$ (see the condition (9.7)). It seems that this assumption can be weakened to $a_{1j}\in L^\infty_{\mathbb F}(0,T;L^\infty(G;\mathbb R^{m\times m}))$.

The rest of this section is mainly devoted to the proof of Theorem 9.37. For simplicity, we consider only the case $m=1$.

9.6.1 Global Carleman Estimate for Backward Stochastic Parabolic Equations

As a key preliminary to the proof of Theorem 9.37, in this subsection we establish a global Carleman estimate for the following backward stochastic parabolic equation:
$$
\begin{cases}
\displaystyle dz+\sum_{j,k=1}^{n}(a^{jk}z_{x_j})_{x_k}dt=f\,dt+Z\,dW(t)&\mbox{in }Q,\\
z=0&\mbox{on }\Sigma,\\
z(T)=z_T&\mbox{in }G.
\end{cases}
\tag{9.127}
$$
In the rest of this section, similarly to Section 9.5, we choose $\theta$ and $\ell$ as in (9.88), and $\psi$ given by Lemma 9.29, with $G_1$ being any fixed nonempty open subset of $G$ such that $\overline{G_1}\subset G_0$. The desired global Carleman estimate for (9.127) is stated as follows:


Theorem 9.39. There is a constant $\mu_0=\mu_0(G,G_0,(a^{jk})_{n\times n},T)>0$ such that for all $\mu\ge\mu_0$, one can find two constants $C=C(\mu)>0$ and $\lambda_0=\lambda_0(\mu)>0$ such that for all $\lambda\ge\lambda_0$, $f\in L^2_{\mathbb F}(0,T;L^2(G))$ and $z_T\in L^2_{\mathcal F_T}(\Omega;L^2(G))$, the solution $(z,Z)\in\bigl[L^2_{\mathbb F}(\Omega;C([0,T];L^2(G)))\cap L^2_{\mathbb F}(0,T;H^1_0(G))\bigr]\times L^2_{\mathbb F}(0,T;L^2(G))$ to (9.127) satisfies
$$
\begin{aligned}
&\lambda^3\mu^4\mathbb E\int_Q\theta^2\varphi^3z^2dxdt
+\lambda\mu^2\mathbb E\int_Q\theta^2\varphi|\nabla z|^2dxdt\\
&\le C\Bigl(\lambda^3\mu^4\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt
+\mathbb E\int_Q\theta^2f^2dxdt
+\lambda^2\mu^2\mathbb E\int_Q\theta^2\varphi^2Z^2dxdt\Bigr).
\end{aligned}
\tag{9.128}
$$

Proof: The proof is similar to that of Theorem 9.30. We shall use Theorem 9.27 with $b^{jk}$ and $u$ replaced by $-a^{jk}$ and $z$, respectively (and hence $w=\theta z$). Integrating the equality (9.79) (with $b^{jk}$ replaced by $-a^{jk}$) on $Q$, taking expectation on both sides, and noting (9.92), we conclude that
$$
\begin{aligned}
&2\mathbb E\int_Q\theta\Bigl[\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr]
\Bigl[dz+\sum_{j,k=1}^{n}(a^{jk}z_{x_j})_{x_k}dt\Bigr]dx
-2\mathbb E\int_Q\sum_{j,k=1}^{n}(a^{jk}w_{x_j}dw)_{x_k}dx\\
&\quad+2\mathbb E\int_Q\sum_{j,k=1}^{n}\Bigl[\sum_{j',k'=1}^{n}\bigl(2a^{jk}a^{j'k'}\ell_{x_{j'}}w_{x_j}w_{x_{k'}}
-a^{jk}a^{j'k'}\ell_{x_j}w_{x_{j'}}w_{x_{k'}}\bigr)
-\Psi a^{jk}w_{x_j}w+a^{jk}\Bigl(A\ell_{x_j}+\frac{\Psi_{x_j}}{2}\Bigr)w^2\Bigr]_{x_k}dxdt\\
&\ge2s_0^2\mathbb E\int_Q\Bigl[\varphi\bigl(\lambda\mu^2|\nabla\psi|^2+\lambda O(\mu)\bigr)|\nabla w|^2
+\varphi^3\bigl(\lambda^3\mu^4|\nabla\psi|^4+\lambda^3O(\mu^3)\\
&\qquad\qquad+\lambda^2O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda O(e^{2\mu|\psi|_{C(\overline G)}})\bigr)w^2\Bigr]dxdt
+2\mathbb E\int_Q\Bigl|\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr|^2dxdt\\
&\quad+\mathbb E\int_Q\theta^2\sum_{j,k=1}^{n}a^{jk}(dz_{x_j}+\ell_{x_j}dz)(dz_{x_k}+\ell_{x_k}dz)dx
-\mathbb E\int_Q\theta^2A(dz)^2dx,
\end{aligned}
\tag{9.129}
$$
where
$$
A=\sum_{j,k=1}^{n}\bigl(a^{jk}\ell_{x_j}\ell_{x_k}-a^{jk}_{x_k}\ell_{x_j}+a^{jk}\ell_{x_jx_k}\bigr)-\ell_t,
\qquad
\Psi=-2\sum_{j,k=1}^{n}a^{jk}\ell_{x_jx_k}.
$$
By (9.127), it follows that


$$
\begin{aligned}
&2\mathbb E\int_Q\theta\Bigl[\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr]
\Bigl[dz+\sum_{j,k=1}^{n}(a^{jk}z_{x_j})_{x_k}dt\Bigr]dx\\
&=2\mathbb E\int_Q\theta\Bigl[\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr]\bigl(f\,dt+Z\,dW(t)\bigr)dx
=2\mathbb E\int_Q\theta\Bigl[\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr]f\,dtdx\\
&\le\mathbb E\int_Q\Bigl|\sum_{j,k=1}^{n}(a^{jk}w_{x_j})_{x_k}+Aw\Bigr|^2dtdx
+\mathbb E\int_Q\theta^2f^2dtdx.
\end{aligned}
\tag{9.130}
$$
It is clear that the term $\mathbb E\int_Q\theta^2\sum_{j,k=1}^{n}a^{jk}(dz_{x_j}+\ell_{x_j}dz)(dz_{x_k}+\ell_{x_k}dz)dx$ in (9.129) is nonnegative. Hence, by (9.129)–(9.130), similarly to the proof of (9.101), one can show that
$$
\begin{aligned}
&2s_0^2\mathbb E\int_Q\Bigl[\varphi\bigl(\lambda\mu^2|\nabla\psi|^2+\lambda O(\mu)\bigr)|\nabla w|^2
+\varphi^3\bigl(\lambda^3\mu^4|\nabla\psi|^4+\lambda^3O(\mu^3)\\
&\qquad+\lambda^2O(\mu^2e^{2\mu|\psi|_{C(\overline G)}})+\lambda O(e^{2\mu|\psi|_{C(\overline G)}})\bigr)w^2\Bigr]dxdt
\le\mathbb E\int_Q\theta^2\bigl(f^2+AZ^2\bigr)dxdt.
\end{aligned}
\tag{9.131}
$$
Similarly to (9.104), from (9.131), we conclude that there is a $\mu_0>0$ such that for all $\mu\ge\mu_0$, one can find a constant $\lambda_0=\lambda_0(\mu)$ so that for any $\lambda\ge\lambda_0$, it holds that
$$
\begin{aligned}
&\lambda\mu^2\mathbb E\int_Q\theta^2\varphi\bigl(|\nabla z|^2+\lambda^2\mu^2\varphi^2z^2\bigr)dxdt\\
&\le C\Bigl[\mathbb E\int_Q\theta^2\bigl(f^2+\lambda^2\mu^2\varphi^2Z^2\bigr)dxdt
+\lambda\mu^2\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi\bigl(|\nabla z|^2+\lambda^2\mu^2\varphi^2z^2\bigr)dxdt\Bigr].
\end{aligned}
\tag{9.132}
$$
Choose a cut-off function $\zeta\in C_0^\infty(G_0;[0,1])$ so that $\zeta\equiv1$ in $G_1$. Proceeding as in (9.105), from (9.127), we obtain that
$$
\begin{aligned}
0&=\mathbb E\int_{Q_0}\theta^2\Bigl[\zeta^2z^2(\varphi_t+2\lambda\varphi\alpha_t)
+2\zeta^2\varphi\sum_{j,k=1}^{n}a^{jk}z_{x_j}z_{x_k}
+2\mu\zeta^2\varphi(1+2\lambda\varphi)z\sum_{j,k=1}^{n}a^{jk}z_{x_j}\psi_{x_k}\\
&\qquad+4\zeta\varphi z\sum_{j,k=1}^{n}a^{jk}z_{x_j}\zeta_{x_k}
+2\zeta^2\varphi fz+\zeta^2\varphi Z^2\Bigr]dxdt.
\end{aligned}
$$
Therefore, for any $\varepsilon>0$, one has
$$
\begin{aligned}
&2\mathbb E\int_{Q_0}\theta^2\zeta^2\varphi\sum_{j,k=1}^{n}a^{jk}z_{x_j}z_{x_k}dxdt
+\mathbb E\int_{Q_0}\theta^2\zeta^2\varphi Z^2dxdt\\
&\le\varepsilon\mathbb E\int_{Q_0}\theta^2\zeta^2\varphi|\nabla z|^2dxdt
+\frac C\varepsilon\mathbb E\int_{Q_0}\theta^2\Bigl(\frac1{\lambda^2\mu^2}f^2
+\lambda^2\mu^2\varphi^3z^2\Bigr)dxdt.
\end{aligned}
\tag{9.133}
$$
Since the matrix $(a^{jk})_{1\le j,k\le n}$ is uniformly positive definite, we conclude from (9.133) that
$$
\mathbb E\int_0^T\!\!\int_{G_1}\theta^2\varphi|\nabla z|^2dxdt
\le C\mathbb E\int_{Q_0}\theta^2\Bigl(\frac1{\lambda^2\mu^2}f^2+\lambda^2\mu^2\varphi^3z^2\Bigr)dxdt.
\tag{9.134}
$$
Combining (9.132) and (9.134), we obtain (9.128). This completes the proof of Theorem 9.39.

9.6.2 Proof of the Observability Estimate for Backward Stochastic Parabolic Equations

We are now in a position to prove Theorem 9.37, which is the key observability estimate for the backward stochastic parabolic equation (9.10).

Proof of Theorem 9.37: Applying Theorem 9.39 to the equation (9.10), recalling the definition of $\tilde\kappa_1$ in (9.13), we deduce that, for all $\mu\ge\mu_0$ and $\lambda\ge\lambda_0(\mu)$,

$$
\begin{aligned}
&\lambda^3\mu^4\mathbb E\int_Q\theta^2\varphi^3z^2dxdt
+\lambda\mu^2\mathbb E\int_Q\theta^2\varphi|\nabla z|^2dxdt\\
&\le C\Bigl\{\lambda^3\mu^4\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt
+\lambda^2\mu^2\mathbb E\int_Q\theta^2\varphi^2Z^2dxdt\\
&\qquad+\mathbb E\int_Q\theta^2\Bigl[\sum_{j=1}^{n}\bigl(a_{1j}z\bigr)_{x_j}
-(a_2-a_3a_4)z-a_3Z\Bigr]^2dxdt\Bigr\}\\
&\le C\Bigl[\lambda^3\mu^4\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt
+\tilde\kappa_1^2\mathbb E\int_Q\theta^2\bigl(|\nabla z|^2+\lambda^2\mu^2\varphi^2z^2+Z^2\bigr)dxdt
+\lambda^2\mu^2\mathbb E\int_Q\theta^2\varphi^2Z^2dxdt\Bigr].
\end{aligned}
\tag{9.135}
$$

Choosing $\mu=\mu_0$ and $\lambda=C(1+\tilde\kappa_1^2)$, from (9.135), we obtain that
$$
\mathbb E\int_Q\theta^2\varphi^3z^2dxdt
\le Ce^{C\tilde\kappa_1^2}\Bigl(\mathbb E\int_{Q_0}\theta^2\varphi^3z^2dxdt
+\mathbb E\int_Q\theta^2\varphi^2Z^2dxdt\Bigr).
\tag{9.136}
$$
Recalling (9.88), it follows from (9.136) that
$$
\begin{aligned}
\mathbb E\int_{T/4}^{3T/4}\!\!\int_Gz^2dxdt
&\le Ce^{C\tilde\kappa_1^2}\,
\frac{\max_{(t,x)\in Q}\bigl(\theta^2(t,x)\varphi^3(t,x)+\theta^2(t,x)\varphi^2(t,x)\bigr)}
{\min_{x\in\overline G}\theta^2(T/4,x)\varphi^3(T/2,x)}
\Bigl(\mathbb E\int_{Q_0}z^2dxdt+\mathbb E\int_QZ^2dxdt\Bigr)\\
&\le Ce^{C\tilde\kappa_1^2}\Bigl(\mathbb E\int_{Q_0}z^2dxdt+\mathbb E\int_QZ^2dxdt\Bigr).
\end{aligned}
\tag{9.137}
$$
By (9.12) in Proposition 9.4, it follows that
$$
\mathbb E\int_Gz^2(0)dx\le e^{C\tilde\kappa_1}\mathbb E\int_Gz^2(t)dx,\qquad\forall\,t\in[0,T].
\tag{9.138}
$$
Finally, combining (9.137) and (9.138), we conclude that the solution $(z,Z)$ to the equation (9.10) satisfies (9.126). This completes the proof of Theorem 9.37.

By slightly modifying the proof of Theorem 9.37, one can easily prove the following result (hence we omit the proof):

Proposition 9.40. Under the condition (9.7), any solution $(z,Z)\in L^2_{\mathbb F}(\Omega;C([0,T];L^2(G;\mathbb R^m)))\times L^2_{\mathbb F}(0,T;L^2(G;\mathbb R^m))$ to the system (9.10) vanishes identically, provided that $z=0$ in $Q_0$ and $Z=0$ in $Q$, a.s.

Remark 9.41. Since $Z=0$ in $Q$, a.s., the system (9.10) becomes a (backward) random parabolic equation. Hence Proposition 9.40 follows also from the known observability estimate for deterministic parabolic equations (e.g. [117]).

By Theorem 7.19 and Proposition 9.40, one immediately obtains the following approximate controllability result for (9.6):

Corollary 9.42. Under the condition (9.7), the system (9.6) is approximately controllable at any time $T$.


9 Controllability and Observability of Stochastic Parabolic Systems

9.7 Notes and Comments

There are numerous studies on the controllability theory of deterministic parabolic equations (see [77, 117, 154, 191, 336] and the references cited therein). Generally speaking, most of these results are based on either the global Carleman estimate introduced in [117] or the time iteration method introduced in [191]. Each method has its advantages. In this chapter, we employ both of them to study the null controllability of stochastic parabolic equations. In order to establish the key global Carleman estimates for stochastic parabolic equations (or even other stochastic partial differential equations), so far there exist two approaches in the literature:

• The first is the approach introduced in [315], which is based on a pointwise weighted identity for the stochastic parabolic-like operator, i.e., the identity (9.79) in Theorem 9.26.
• The second is the approach introduced in [218], which, perhaps surprisingly, uses a known global Carleman estimate (see Corollary 9.32) for a random parabolic equation (which may follow from the existing results for deterministic parabolic equations) together with a careful duality argument.

Each of the above two approaches has its own merits. Indeed, each of them may require different conditions for different equations (see, for example, [218, 222, 315] for more details).

By virtue of the counterexamples in Section 9.3 (taken from [219]), it turns out that the controllability of the coupled system (9.58) is not robust w.r.t. the coupling coefficient in the diffusion terms. Indeed, when this coefficient vanishes on the whole time interval [0, T], the system is null controllable (see Theorem 9.18). However, if it is a nonzero bounded function, no matter how small, the corresponding system is no longer controllable. Hence, Carleman-type estimates cannot be used to study the controllability of (9.58). Notice that in [390] a similar phenomenon was found in studying the unique continuation for the characteristic Cauchy problem of (deterministic) linearized Benjamin-Bona-Mahony equations. In fact, the uniqueness for this equation depends strongly on the zero sets of its potentials. Hence, the desired unique continuation results in [390] were proved by means of spectral analysis and the eigenvector expansion of the solution rather than by Carleman-type estimates. Similarly, the controllability of the stochastic coupled system (9.58) was also proved by the spectral method.

One can find some other interesting works related to controllability, observability, unique continuation, stabilization, Hardy's uncertainty principle, insensitizing controls and so on for stochastic parabolic equations, say [11, 12, 54, 94, 142, 195, 196, 215, 237, 358, 359, 360, 361, 362, 365, 387].

There are some other interesting techniques for proving the controllability of deterministic parabolic equations, which should be extended to the stochastic setting, for example:


• In [93], the null controllability of the heat equation in one space dimension was obtained by solving a moment problem, based on the classical result on the linear independence of a suitably chosen family of real exponentials in L²(0, T). This method was generalized to obtain controllability results for heat equations in several space dimensions (e.g., [7]). However, it seems very hard to employ the same idea to prove the null controllability of stochastic partial differential equations. Indeed, so far it is unclear how to reduce the latter stochastic null controllability problem to a suitable moment problem. For example, let us consider the following equation:

\[
\begin{cases}
dy(t,x)-y_{xx}(t,x)\,dt=f(x)u(t)\,dt+a(x)y(t,x)\,dW(t) & \text{in } (0,T]\times(0,1),\\
y(t,0)=y(t,1)=0 & \text{on } (0,T),\\
y(0,x)=y_0(x) & \text{in } (0,1).
\end{cases} \tag{9.139}
\]

Here y₀ ∈ L²(0,1), a ∈ L^∞(0,1), f ∈ L²(0,1), y is the state variable and u ∈ L²_F(0,T) is the control variable. One can see that it is not easy to reduce the null controllability problem for the system (9.139) to the usual moment problem (but, under some conditions, it may still be possible to reduce it to a stochastic moment problem; this remains to be done).

• In [293], it was shown that if the wave equation is exactly controllable at some time T > 0 with controls supported in a subdomain G₀ of G, then the heat equation is null controllable at any time T > 0 with controls supported in G₀. This result is not sharp when applied to the heat equation with smooth coefficients, since geometric restrictions are needed on the subset G₀ where the control applies. It seems that one could follow this idea to establish a connection between the null controllability of stochastic heat equations and that of stochastic wave equations. Nevertheless, at this moment the null controllability of stochastic wave equations is still open, and it seems even more difficult than that of stochastic heat equations.
• In this chapter, we assume that the principal part coefficients satisfy a^{jk} ∈ W^{2,∞}(G). This assumption can be weakened. Indeed, one can easily see that the Carleman estimate approach works for Lipschitz continuous coefficients. Nevertheless, at this moment, we do not know the minimal regularity condition on the coefficients a^{jk}. A first controllability result for deterministic parabolic equations with piecewise constant principal part coefficients (by means of Carleman inequalities) was established in [78], but under some monotonicity conditions on these coefficients at the interfaces. In [189], based on a modification of tools from pseudo-differential operators, it was proved that these monotonicity conditions can be dropped for scalar coefficients. It seems that one could follow their methods to study Carleman and observability estimates for stochastic parabolic equations with discontinuous principal part coefficients, but this remains to be done.
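To make the moment-method reduction from the first item above concrete, consider the deterministic case of (9.139) (i.e., a ≡ 0); the following is a formal sketch only:

```latex
% Expand along the Dirichlet eigenfunctions e_k(x)=\sqrt{2}\sin(k\pi x):
y(t,x)=\sum_{k\ge 1} y_k(t)\,e_k(x),\qquad
y_k'(t)+k^2\pi^2\,y_k(t)=f_k\,u(t),\quad f_k=\langle f,e_k\rangle_{L^2(0,1)}.
% By the variation of constants formula, y(T)=0 is equivalent to the moment problem
f_k\int_0^T e^{k^2\pi^2 t}\,u(t)\,dt=-\,y_{0,k},\qquad \forall\,k\ge 1.
```

This moment problem is solvable (when f_k ≠ 0 for every k) thanks to the existence of a family biorthogonal to the exponentials {e^{−k²π²t}}_{k≥1} in L²(0,T), as in [93]. When a ≠ 0 in (9.139), the noise couples the modes and u must in addition be adapted to the filtration, which is precisely where this reduction breaks down.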


There are many other unsolved problems related to the topic of this chapter. Here, we mention only two of them. One is to prove the null controllability of (9.6) with only one control, i.e., v = 0, for which we have explained the difficulty before. The other is the null and approximate controllability of stochastic semi-linear parabolic equations. For deterministic semi-linear parabolic equations, one employs the Carleman estimate and suitable fixed point theorems to derive null and approximate controllability (e.g., [77, 95, 117]). Since at this moment it is unclear whether these fixed point theorems can be generalized to the stochastic setting, we do not know how to use a similar method to solve the null and approximate controllability problems for stochastic semi-linear parabolic equations.

10 Exact Controllability for a Refined Stochastic Wave Equation

In this chapter, we shall prove that the usual stochastic wave equation, i.e., the classical wave equation perturbed by a term of Itô's integral, is not exactly controllable even if the controls are effective everywhere in both the drift and diffusion terms, which means that some key feature is ignored in this model. Then, by means of a global Carleman estimate, we establish the exact controllability of a refined stochastic wave equation with three controls. Moreover, we give a result on the lack of exact controllability, which shows that the action of three controls is necessary. Our analysis indicates that, at least from the viewpoint of control theory, the new stochastic wave equation introduced in our work is more reasonable than the one in the existing literature.

10.1 Formulation of the Problem

Similarly to parabolic equations, hyperbolic equations (and their special case, the wave equations) are another class of typical partial differential equations. In the deterministic setting, the controllability theory for hyperbolic equations has also been studied extensively. Hence, one would like to know what happens for their stochastic counterparts. As we shall see in this chapter, so far the stochastic situation is far from satisfactory.

For T > 0, n ∈ ℕ, a bounded domain G ⊂ ℝⁿ with a C² boundary Γ, a nonempty subset Γ₀ of Γ and a nonempty open subset G₀ of G, satisfying suitable assumptions to be given later, write

\[
Q=(0,T)\times G,\qquad \Sigma=(0,T)\times\Gamma,\qquad \Sigma_0=(0,T)\times\Gamma_0,\qquad Q_0=(0,T)\times G_0.
\]

In the rest of this chapter, we assume that (a^{jk})_{1≤j,k≤n} ∈ C³(Ḡ; ℝ^{n×n}) satisfies a^{jk} = a^{kj} (j, k = 1, 2, ⋯, n) and, for some constant s₀ > 0,

\[
\sum_{j,k=1}^n a^{jk}\xi^j\xi^k\ge s_0|\xi|^2,\qquad
\forall\,(x,\xi)=(x,\xi^1,\cdots,\xi^n)\in \overline G\times\mathbb R^n.
\]

© Springer Nature Switzerland AG 2021 Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_10


We begin with the following controlled (deterministic) wave equation:

\[
\begin{cases}
y_{tt}-\displaystyle\sum_{j,k=1}^n (a^{jk}y_{x_j})_{x_k}=a_1\cdot\nabla y+a_2y & \text{in } Q,\\
y=\chi_{\Sigma_0}h & \text{on } \Sigma,\\
y(0)=y_0,\quad y_t(0)=y_1 & \text{in } G.
\end{cases} \tag{10.1}
\]

Here (y₀, y₁) ∈ L²(G) × H⁻¹(G), a₁ ∈ L^∞(0,T; W^{1,∞}(G; ℝⁿ)), a₂ ∈ L^∞(Q), (y, y_t) is the state variable, and h ∈ L²(Σ₀) is the control variable. It is well known (e.g., [209]) that the system (10.1) admits one and only one transposition solution y ∈ C([0,T]; L²(G)) ∩ C¹([0,T]; H⁻¹(G)).

Controllability (and also observability) for deterministic wave equations is now well understood. Indeed, it is known (e.g., [14, 80, 110, 207, 388, 396]) that, under some assumptions on (T, G, Γ₀) and the coefficients (a^{jk})_{1≤j,k≤n}, a₁ and a₂ (for example, in the case that (a^{jk})_{1≤j,k≤n} is the n × n identity matrix, a₁ = 0, a₂ = 0, Γ₀ = {x ∈ Γ | (x − x₀)·ν(x) > 0} for some x₀ ∈ ℝⁿ, and T > 2 max_{x∈Ḡ}|x − x₀|), the system (10.1) is exactly controllable, i.e., for any given (y₀, y₁), (z₀, z₁) ∈ L²(G) × H⁻¹(G), one can find a control h ∈ L²(Σ₀) such that the solution to (10.1) satisfies y(T) = z₀ and y_t(T) = z₁.

The main goal of this chapter is to study what happens when (10.1) is replaced by stochastic models. We shall see that the corresponding stochastic controllability problems are much less understood.

As before, (Ω, F, F, P) (with F = {F_t}_{t∈[0,T]}) is a fixed filtered probability space, on which a one dimensional standard Brownian motion W(·) is defined, and F is the corresponding natural filtration. We denote by 𝔽 the progressive σ-field w.r.t. F. Also, in the sequel we fix some stochastic coefficients a₁ ∈ L^∞_F(0,T; W^{1,∞}(G; ℝⁿ)), a₂, a₃, a₄ ∈ L^∞_F(0,T; L^∞(G)) and a₅ ∈ L^∞_F(0,T; W₀^{1,∞}(G)).

First, let us consider the following controlled stochastic wave equation:

\[
\begin{cases}
dy_t-\displaystyle\sum_{j,k=1}^n (a^{jk}y_{x_j})_{x_k}\,dt=(a_1\cdot\nabla y+a_2y+f)\,dt+(a_3y+g)\,dW(t) & \text{in } Q,\\
y=\chi_{\Sigma_0}h & \text{on } \Sigma,\\
y(0)=y_0,\quad y_t(0)=y_1 & \text{in } G,
\end{cases} \tag{10.2}
\]

where the initial datum (y₀, y₁) ∈ L²(G) × H⁻¹(G), (y, y_t) is the state variable, and f, g ∈ L²_F(0,T; H⁻¹(G)) and h ∈ L²_F(0,T; L²(Γ₀)) are three controls. As we shall see in Section 10.2, the equation (10.2) admits one and only one transposition solution y ∈ C_F([0,T]; L²(Ω; L²(G))) ∩ C¹_F([0,T]; L²(Ω; H⁻¹(G))).

In some sense, (10.2) looks quite natural because it is the classical (controlled) wave equation perturbed by a term of Itô's integral. We introduce the following notion of exact controllability for (10.2).


Definition 10.1. The system (10.2) is called exactly controllable at time T if for any (y₀, y₁) ∈ L²(G) × H⁻¹(G) and (y₀′, y₁′) ∈ L²_{F_T}(Ω; L²(G)) × L²_{F_T}(Ω; H⁻¹(G)), one can find a triple of controls (f, g, h) ∈ L²_F(0,T; H⁻¹(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)) such that the corresponding solution y to (10.2) satisfies (y(T), y_t(T)) = (y₀′, y₁′).

Since three controls are introduced in (10.2), one may guess that the desired exact controllability should hold trivially. Surprisingly, as we shall show below, the exact controllability of (10.2) fails for any T > 0 and Γ₀ ⊂ Γ, even if the controls f and g act everywhere on the domain G.

Theorem 10.2. The system (10.2) is not exactly controllable for any T > 0 and Γ₀ ⊂ Γ.

Proof: We use a contradiction argument. Choose ψ ∈ H₀¹(G) satisfying |ψ|_{L²(G)} = 1 and let ỹ₀ = ξψ, where ξ is the random variable given in Proposition 6.2. Assume that (10.2) were exactly controllable for some T > 0 and Γ₀ ⊂ Γ. Then, for any y₀ ∈ L²(G), we could find a triple of controls (f, g, h) ∈ L²_F(0,T; H⁻¹(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)) such that the solution y ∈ C_F([0,T]; L²(Ω; L²(G))) ∩ C¹_F([0,T]; L²(Ω; H⁻¹(G))) to the equation (10.2) satisfies y(T) = ỹ₀. Clearly,

\[
\int_G \tilde y_0\psi\,dx-\int_G y_0\psi\,dx=\int_0^T\langle y_t,\psi\rangle_{H^{-1}(G),H_0^1(G)}\,dt,
\]

which leads to

\[
\xi=\int_G y_0\psi\,dx+\int_0^T\langle y_t,\psi\rangle_{H^{-1}(G),H_0^1(G)}\,dt.
\]

This contradicts Proposition 6.2.

Note that the above controls are the strongest possible ones that can be introduced into (10.2). Obviously, this differs significantly from the well-known controllability property (mentioned before) of deterministic wave equations. Motivated by this negative controllability result for (10.2), similarly to (5.26), in what follows we consider the following refined version of the controlled stochastic wave equation:

\[
\begin{cases}
dy=\hat y\,dt+(a_4y+f)\,dW(t) & \text{in } Q,\\
d\hat y-\displaystyle\sum_{j,k=1}^n (a^{jk}y_{x_j})_{x_k}\,dt=(a_1\cdot\nabla y+a_2y+a_5g)\,dt+(a_3y+g)\,dW(t) & \text{in } Q,\\
y=\chi_{\Sigma_0}h & \text{on } \Sigma,\\
y(0)=y_0,\quad \hat y(0)=\hat y_0 & \text{in } G.
\end{cases} \tag{10.3}
\]


Here (y₀, ŷ₀) ∈ L²(G) × H⁻¹(G), (y, ŷ) is the state variable, and f ∈ L²_F(0,T; L²(G)), g ∈ L²_F(0,T; H⁻¹(G)) and h ∈ L²_F(0,T; L²(Γ₀)) are three controls. As we shall see in Section 10.2, the equation (10.3) admits one and only one transposition solution (y, ŷ) ∈ C_F([0,T]; L²(Ω; L²(G))) × C_F([0,T]; L²(Ω; H⁻¹(G))).

Remark 10.3. We put two controls f and g in the diffusion terms of (10.3). The first equation in this system can be regarded as a family of stochastic differential equations with a parameter x ∈ G, while the second one is a stochastic partial differential equation. Usually, if we put a control in the diffusion term, it may affect the drift term in one way or another. Here we assume that the effect is linear and of the form "a₅g dt", as in the second equation of (10.3). One may consider more general cases, say, adding a term like "a₆f dt" (in which a₆ ∈ L^∞_F(0,T; L^∞(G))) to the first equation of (10.3). However, the corresponding controllability problem (like the one studied in this chapter) is still unsolved.

The exact controllability for (10.3) is defined as follows:

Definition 10.4. The system (10.3) is called exactly controllable at time T if for any (y₀, ŷ₀) ∈ L²(G) × H⁻¹(G) and (y₁, ŷ₁) ∈ L²_{F_T}(Ω; L²(G)) × L²_{F_T}(Ω; H⁻¹(G)), one can find a triple of controls (f, g, h) ∈ L²_F(0,T; L²(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)) such that the corresponding solution (y, ŷ) to (10.3) satisfies (y(T), ŷ(T)) = (y₁, ŷ₁).

In this chapter, under some assumptions, we shall show that (10.3) is exactly controllable (see Theorem 10.12). Hence, from the viewpoint of controllability theory, the system (10.3) is a more reasonable model than (10.2). Note that we also introduce three controls into (10.3), which may seem too many. However, we shall prove that none of these three controls can be dropped; moreover, both f and g (the two internal controls) have to be effective everywhere in the domain G (see Theorem 10.9).

10.2 Well-Posedness of Stochastic Wave Equations With Boundary Controls

In order to define solutions to both (10.2) and (10.3) in the sense of transposition, we need to introduce the following backward stochastic wave equation:

\[
\begin{cases}
dz=\hat z\,dt+(b_5z+Z)\,dW(t) & \text{in } Q_\tau,\\
d\hat z-\displaystyle\sum_{j,k=1}^n (a^{jk}z_{x_j})_{x_k}\,dt=(b_1\cdot\nabla z+b_2z+b_3Z+b_4\widehat Z)\,dt+\widehat Z\,dW(t) & \text{in } Q_\tau,\\
z=0 & \text{on } \Sigma_\tau,\\
z(\tau)=z^\tau,\quad \hat z(\tau)=\hat z^\tau & \text{in } G,
\end{cases} \tag{10.4}
\]

where τ ∈ (0,T], Q_τ ≜ (0,τ) × G, Σ_τ ≜ (0,τ) × Γ, (z^τ, ẑ^τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)), b₁ ∈ L^∞_F(0,T; W^{1,∞}(G; ℝⁿ)), b_i ∈ L^∞_F(0,T; L^∞(G)) (i = 2, 3, 4) and b₅ ∈ L^∞_F(0,T; W₀^{1,∞}(G)).

By Theorem 4.10, for any (z^τ, ẑ^τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)), the equation (10.4) admits a unique mild solution (z, ẑ, Z, Ẑ). Moreover,

\[
|z|_{C_F([0,\tau];H_0^1(G))}+|\hat z|_{C_F([0,\tau];L^2(G))}+|Z|_{L_F^2(0,\tau;H_0^1(G))}+|\widehat Z|_{L_F^2(0,\tau;L^2(G))}
\le Ce^{Cr_1}\big(|z^\tau|_{L_{F_\tau}^2(\Omega;H_0^1(G))}+|\hat z^\tau|_{L_{F_\tau}^2(\Omega;L^2(G))}\big), \tag{10.5}
\]

where

\[
r_1\triangleq |b_1|^2_{L_F^\infty(0,T;W^{1,\infty}(G;\mathbb R^n))}+\sum_{i=2}^4|b_i|^2_{L_F^\infty(0,T;L^\infty(G))}+|b_5|^2_{L_F^\infty(0,T;W_0^{1,\infty}(G))}.
\]

Let us recall the following known result (see [207, p. 29] for its proof).

Proposition 10.5. There exists a vector field ξ = (ξ¹, ⋯, ξⁿ) ∈ C¹(ℝⁿ; ℝⁿ) such that ξ = ν on Γ.

The following regularity result for solutions to (10.4) will be needed later.

Proposition 10.6. Let (z^τ, ẑ^τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)). Then the (mild) solution (z, ẑ, Z, Ẑ) to (10.4) satisfies ∂z/∂ν|_Γ ∈ L²_F(0,τ; L²(Γ)). Furthermore,

\[
\Big|\frac{\partial z}{\partial\nu}\Big|_{L_F^2(0,\tau;L^2(\Gamma))}
\le Ce^{Cr_1}\big(|z^\tau|_{L_{F_\tau}^2(\Omega;H_0^1(G))}+|\hat z^\tau|_{L_{F_\tau}^2(\Omega;L^2(G))}\big), \tag{10.6}
\]

where the constant C is independent of τ.

Proof: For any η ≜ (η¹, ⋯, ηⁿ) ∈ C¹(ℝ_t × ℝⁿ_x; ℝⁿ), by Itô's formula and the first equation of (10.4), we have

\[
\begin{aligned}
d(\hat z\,\eta\cdot\nabla z)
&=d\hat z\,\eta\cdot\nabla z+\hat z\,\eta_t\cdot\nabla z\,dt+\hat z\,\eta\cdot\nabla dz+d\hat z\,\eta\cdot\nabla dz\\
&=d\hat z\,\eta\cdot\nabla z+\hat z\,\eta_t\cdot\nabla z\,dt+\hat z\,\eta\cdot\nabla\big(\hat z\,dt+(b_5z+Z)\,dW(t)\big)+d\hat z\,\eta\cdot\nabla dz\\
&=d\hat z\,\eta\cdot\nabla z+\hat z\,\eta_t\cdot\nabla z\,dt+\frac12\big[\operatorname{div}(\hat z^2\eta)-(\operatorname{div}\eta)\hat z^2\big]dt
+\hat z\,\eta\cdot\nabla(b_5z+Z)\,dW(t)+d\hat z\,\eta\cdot\nabla dz.
\end{aligned}
\]

It follows from a direct computation that

\[
\begin{aligned}
&\sum_{k=1}^n\Big(2(\eta\cdot\nabla z)\sum_{j=1}^n a^{jk}z_{x_j}-\eta^k\sum_{i,j=1}^n a^{ij}z_{x_i}z_{x_j}\Big)_{x_k}\\
&=2\sum_{j,k=1}^n(a^{jk}z_{x_j})_{x_k}\,\eta\cdot\nabla z
+2\sum_{i,j,k=1}^n a^{ij}z_{x_i}z_{x_k}\eta^k_{x_j}
-\sum_{j,k=1}^n z_{x_j}z_{x_k}\operatorname{div}(a^{jk}\eta).
\end{aligned}
\]

Hence,

\[
\begin{aligned}
&-\sum_{k=1}^n\Big[2(\eta\cdot\nabla z)\sum_{j=1}^n a^{jk}z_{x_j}+\eta^k\Big(\hat z^2-\sum_{i,j=1}^n a^{ij}z_{x_i}z_{x_j}\Big)\Big]_{x_k}dt\\
&=2\Big[-d(\hat z\,\eta\cdot\nabla z)+\Big(d\hat z-\sum_{j,k=1}^n(a^{jk}z_{x_j})_{x_k}dt\Big)\eta\cdot\nabla z
+\hat z\,\eta_t\cdot\nabla z\,dt-\sum_{i,j,k=1}^n a^{ij}z_{x_i}z_{x_k}\eta^k_{x_j}\,dt\Big]\\
&\quad-(\operatorname{div}\eta)\hat z^2\,dt+\sum_{j,k=1}^n z_{x_j}z_{x_k}\operatorname{div}(a^{jk}\eta)\,dt
+2\,d\hat z\,\eta\cdot\nabla dz+2\hat z\,\eta\cdot\nabla(b_5z+Z)\,dW(t). \tag{10.7}
\end{aligned}
\]

Let ξ ∈ C¹(ℝⁿ; ℝⁿ) be given in Proposition 10.5. Setting η = ξ in (10.7), integrating over Q_τ, and taking expectation on Ω, we obtain that

\[
\begin{aligned}
&-\mathbb E\int_{\Sigma_\tau}\sum_{k=1}^n\Big[2(\eta\cdot\nabla z)\sum_{j=1}^n a^{jk}z_{x_j}+\eta^k\Big(\hat z^2-\sum_{i,j=1}^n a^{ij}z_{x_i}z_{x_j}\Big)\Big]\nu^k\,d\Gamma dt\\
&=-2\mathbb E\int_G\hat z^\tau\,\eta\cdot\nabla z^\tau\,dx+2\mathbb E\int_G\hat z(0)\,\eta\cdot\nabla z(0)\,dx\\
&\quad+2\mathbb E\int_{Q_\tau}\Big[\big(b_1\cdot\nabla z+b_2z+b_3Z+b_4\widehat Z\big)\eta\cdot\nabla z
+\hat z\,\eta_t\cdot\nabla z-\sum_{i,j,k=1}^n a^{ij}z_{x_i}z_{x_k}\eta^k_{x_j}
+\widehat Z\,\eta\cdot\nabla(b_5z+Z)\Big]dxdt\\
&\quad+\mathbb E\int_{Q_\tau}\Big[\sum_{j,k=1}^n z_{x_j}z_{x_k}\operatorname{div}(a^{jk}\eta)-(\operatorname{div}\eta)\hat z^2\Big]dxdt. \tag{10.8}
\end{aligned}
\]

Noting that z = 0 on (0,τ) × Γ (hence ∇z = (∂z/∂ν)ν and ẑ = 0 on Σ_τ), and that η·ν = ξ·ν = 1 on Γ, we have

\[
\begin{aligned}
&\mathbb E\int_{\Sigma_\tau}\sum_{k=1}^n\Big[2(\eta\cdot\nabla z)\sum_{j=1}^n a^{jk}z_{x_j}+\eta^k\Big(\hat z^2-\sum_{i,j=1}^n a^{ij}z_{x_i}z_{x_j}\Big)\Big]\nu^k\,d\Gamma dt\\
&=\mathbb E\int_{\Sigma_\tau}\Big[2(\eta\cdot\nu)\Big|\frac{\partial z}{\partial\nu}\Big|^2\sum_{j,k=1}^n a^{jk}\nu^j\nu^k
-(\eta\cdot\nu)\Big|\frac{\partial z}{\partial\nu}\Big|^2\sum_{i,j=1}^n a^{ij}\nu^i\nu^j\Big]d\Gamma dt\\
&=\mathbb E\int_{\Sigma_\tau}\Big|\frac{\partial z}{\partial\nu}\Big|^2\sum_{j,k=1}^n a^{jk}\nu^j\nu^k\,d\Gamma dt
\ \ge\ s_0\,\mathbb E\int_{\Sigma_\tau}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Gamma dt. \tag{10.9}
\end{aligned}
\]

It follows from the estimate (10.5) that

\[
\big|\text{the right hand side of (10.8)}\big|
\le Ce^{Cr_1}\big(|z^\tau|_{L_{F_\tau}^2(\Omega;H_0^1(G))}+|\hat z^\tau|_{L_{F_\tau}^2(\Omega;L^2(G))}\big)^2.
\]

This, together with (10.8) and (10.9), yields the desired estimate (10.6). This completes the proof of Proposition 10.6.


Remark 10.7. Proposition 10.6 shows that solutions to (10.4) enjoy a better regularity on the boundary than the one provided by the classical trace theorem on Sobolev spaces. Such results are called hidden regularities (for solutions to the equations under consideration). There are many studies on this topic for deterministic partial differential equations (e.g., [208]).

Systems (10.2) and (10.3) are nonhomogeneous boundary value problems. Their solutions are understood in the sense of the transposition solution introduced in Section 7.2. In view of Definition 7.11, a stochastic process y ∈ C_F([0,T]; L²(Ω; L²(G))) ∩ C¹_F([0,T]; L²(Ω; H⁻¹(G))) is called a transposition solution to (10.2) if for any τ ∈ (0,T] and (z^τ, ẑ^τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)), it holds that

\[
\begin{aligned}
&\mathbb E\langle y_t(\tau),z^\tau\rangle_{H^{-1}(G),H_0^1(G)}-\mathbb E\langle y(\tau),\hat z^\tau\rangle_{L^2(G)}
-\langle y_1,z(0)\rangle_{H^{-1}(G),H_0^1(G)}+\langle y_0,\hat z(0)\rangle_{L^2(G)}\\
&=\mathbb E\int_0^\tau\langle f,z\rangle_{H^{-1}(G),H_0^1(G)}\,dt+\mathbb E\int_0^\tau\langle g,Z\rangle_{H^{-1}(G),H_0^1(G)}\,dt
-\mathbb E\int_0^\tau\!\!\int_{\Gamma_0}h\,\frac{\partial z}{\partial\nu}\,d\Gamma ds, \tag{10.10}
\end{aligned}
\]

where (z, ẑ, Z, Ẑ) solves (10.4) with

b₁ = −a₁,  b₂ = −div a₁ + a₂,  b₃ = a₃,  b₄ = 0,  b₅ = 0.

Likewise, a pair of stochastic processes (y, ŷ) ∈ C_F([0,T]; L²(Ω; L²(G))) × C_F([0,T]; L²(Ω; H⁻¹(G))) is called a transposition solution to (10.3) if for any τ ∈ (0,T] and (z^τ, ẑ^τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)), it holds that

\[
\begin{aligned}
&\mathbb E\langle \hat y(\tau),z^\tau\rangle_{H^{-1}(G),H_0^1(G)}-\mathbb E\langle y(\tau),\hat z^\tau\rangle_{L^2(G)}
-\langle \hat y_0,z(0)\rangle_{H^{-1}(G),H_0^1(G)}+\langle y_0,\hat z(0)\rangle_{L^2(G)}\\
&=-\mathbb E\int_0^\tau\langle f,\widehat Z\rangle_{L^2(G)}\,dt+\mathbb E\int_0^\tau\langle g,Z\rangle_{H^{-1}(G),H_0^1(G)}\,dt
-\mathbb E\int_0^\tau\!\!\int_{\Gamma_0}h\,\frac{\partial z}{\partial\nu}\,d\Gamma ds, \tag{10.11}
\end{aligned}
\]

where (z, ẑ, Z, Ẑ) solves (10.4) with

b₁ = −a₁,  b₂ = −div a₁ + a₂ − a₃a₅,  b₃ = a₃,  b₄ = −a₄,  b₅ = −a₅.

Note that, by Proposition 10.6, the solution (z, ẑ, Z, Ẑ) to (10.4) satisfies ∂z/∂ν|_Γ ∈ L²_F(0,τ; L²(Γ)); hence the term "E∫₀^τ∫_{Γ₀} h (∂z/∂ν) dΓds" in both (10.10) and (10.11) makes sense. By Theorem 7.12, one can easily deduce the following well-posedness results for (10.2) and (10.3):


Proposition 10.8. 1) For each (y₀, y₁) ∈ L²(G) × H⁻¹(G) and (f, g, h) ∈ L²_F(0,T; H⁻¹(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)), the equation (10.2) admits a unique transposition solution y ∈ C_F([0,T]; L²(Ω; L²(G))) ∩ C¹_F([0,T]; L²(Ω; H⁻¹(G))). Furthermore,

\[
\begin{aligned}
&|y|_{C_F([0,T];L^2(\Omega;L^2(G)))\cap C_F^1([0,T];L^2(\Omega;H^{-1}(G)))}\\
&\le Ce^{Cr_3}\big(|y_0|_{L^2(G)}+|y_1|_{H^{-1}(G)}+|f|_{L_F^2(0,T;H^{-1}(G))}
+|g|_{L_F^2(0,T;H^{-1}(G))}+|h|_{L_F^2(0,T;L^2(\Gamma_0))}\big). \tag{10.12}
\end{aligned}
\]

Here

\[
r_3\triangleq |a_1|^2_{L_F^\infty(0,T;W^{1,\infty}(G;\mathbb R^n))}+\sum_{k=2}^3|a_k|^2_{L_F^\infty(0,T;L^\infty(G))};
\]

2) For each (y₀, ŷ₀) ∈ L²(G) × H⁻¹(G) and (f, g, h) ∈ L²_F(0,T; L²(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)), the equation (10.3) admits one and only one transposition solution (y, ŷ) ∈ C_F([0,T]; L²(Ω; L²(G))) × C_F([0,T]; L²(Ω; H⁻¹(G))). Moreover,

\[
\begin{aligned}
&|(y,\hat y)|_{C_F([0,T];L^2(\Omega;L^2(G)))\times C_F([0,T];L^2(\Omega;H^{-1}(G)))}\\
&\le Ce^{Cr_2}\big(|y_0|_{L^2(G)}+|\hat y_0|_{H^{-1}(G)}+|f|_{L_F^2(0,T;L^2(G))}
+|g|_{L_F^2(0,T;H^{-1}(G))}+|h|_{L_F^2(0,T;L^2(\Gamma_0))}\big). \tag{10.13}
\end{aligned}
\]

Here

\[
r_2\triangleq |a_1|^2_{L_F^\infty(0,T;W^{1,\infty}(G;\mathbb R^n))}+\sum_{k=2}^4|a_k|^2_{L_F^\infty(0,T;L^\infty(G))}
+|a_5|^2_{L_F^\infty(0,T;W_0^{1,\infty}(G))}.
\]

10.3 Main Controllability Results

We have introduced three controls (f, g and h) in the system (10.3). At first glance this may seem unreasonable, especially since the controls f and g in the diffusion terms of (10.3) act on the whole domain G. One may ask whether localized controls are enough, or whether the boundary control can be dropped. However, the answer is "no". More precisely, we have the following negative controllability result for the system (10.3).

Theorem 10.9. For any open subset Γ₀ of Γ and open subset G₀ of G, the system (10.3) is not exactly controllable at any time T > 0, provided that one of the following three conditions is satisfied:
1) a₄ ∈ C_F([0,T]; L^∞(G)), G \ G₀ ≠ ∅ and f is supported in G₀;
2) a₃ ∈ C_F([0,T]; L^∞(G)), G \ G₀ ≠ ∅ and g is supported in G₀;
3) h = 0.


Proof: We employ a contradiction argument, and divide the proof into three cases.

Case 1) a₄ ∈ C_F([0,T]; L^∞(G)) and f is supported in G₀. Since G₀ ⊂ G is an open subset and G \ G₀ ≠ ∅, we can find ρ ∈ C₀^∞(G \ G₀) satisfying |ρ|_{L²(G)} = 1. Assume that (10.3) were exactly controllable. Then, for (y₀, ŷ₀) = (0, 0), one could find controls (f, g, h) ∈ L²_F(0,T; L²(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)) with supp f ⊂ G₀ for a.e. (t, ω) ∈ (0,T) × Ω, such that the corresponding solution to (10.3) fulfills (y(T), ŷ(T)) = (ρξ, 0), where ξ is given in Proposition 6.2. Thus,

\[
\rho\xi=\int_0^T\hat y\,dt+\int_0^T(a_4y+f)\,dW(t). \tag{10.14}
\]

Multiplying both sides of (10.14) by ρ and integrating over G (noting that ⟨f, ρ⟩_{L²(G)} = 0 since supp f ⊂ G₀), we obtain that

\[
\xi=\int_0^T\langle\hat y,\rho\rangle_{H^{-1}(G),H_0^1(G)}\,dt+\int_0^T\langle a_4y,\rho\rangle_{L^2(G)}\,dW(t). \tag{10.15}
\]

Since the pair (y, ŷ) ∈ C_F([0,T]; L²(Ω; L²(G))) × C_F([0,T]; L²(Ω; H⁻¹(G))) solves (10.3), we have ⟨ŷ, ρ⟩_{H⁻¹(G),H₀¹(G)} ∈ C_F([0,T]; L²(Ω)) and ⟨a₄y, ρ⟩_{L²(G)} ∈ C_F([0,T]; L²(Ω)), which, together with (10.15), contradicts Proposition 6.2.

Case 2) a₃ ∈ C_F([0,T]; L^∞(G)) and g is supported in G₀. Choose ρ as in Case 1). If (10.3) were exactly controllable, then, for (y₀, ŷ₀) = (0, 0), one could find controls (f, g, h) ∈ L²_F(0,T; L²(G)) × L²_F(0,T; H⁻¹(G)) × L²_F(0,T; L²(Γ₀)) with supp g ⊂ G₀ for a.e. (t, ω) ∈ (0,T) × Ω, such that the corresponding solution to (10.3) fulfills (y(T), ŷ(T)) = (0, ξ). It is clear that (ϕ, ϕ̂) = (ρy, ρŷ) solves the following equation:

\[
\begin{cases}
d\phi=\hat\phi\,dt+(a_4\phi+\rho f)\,dW(t) & \text{in } Q,\\
d\hat\phi-\displaystyle\sum_{j,k=1}^n (a^{jk}\phi_{x_j})_{x_k}\,dt=\zeta\,dt+a_3\phi\,dW(t) & \text{in } Q,\\
\phi=0 & \text{on } \Sigma,\\
\hat\phi=0 & \text{on } \Sigma,\\
\phi(0)=0,\quad \hat\phi(0)=0 & \text{in } G,
\end{cases} \tag{10.16}
\]

where ζ = −∑_{j,k=1}^n [(a^{jk}ρ_{x_j}y)_{x_k} + a^{jk}y_{x_j}ρ_{x_k}] + ρa₁·∇y + ρa₂y. Further, we have ϕ(T) = 0 and ϕ̂(T) = ρξ. Noting that (ϕ, ϕ̂) is the weak solution to (10.16), we see that


\[
\begin{aligned}
\big\langle\rho\xi,\rho\big\rangle_{H^{-2}(G),H_0^2(G)}
&=\int_0^T\Big[\Big\langle\sum_{j,k=1}^n(a^{jk}\phi_{x_j})_{x_k},\rho\Big\rangle_{H^{-2}(G),H_0^2(G)}
+\big\langle\zeta,\rho\big\rangle_{H^{-1}(G),H_0^1(G)}\Big]dt\\
&\quad+\int_0^T\big\langle a_3\phi,\rho\big\rangle_{L^2(G)}\,dW(t),
\end{aligned}
\]

which implies that

\[
\xi=\int_0^T\Big[\Big\langle\sum_{j,k=1}^n(a^{jk}\phi_{x_j})_{x_k},\rho\Big\rangle_{H^{-2}(G),H_0^2(G)}
+\big\langle\zeta,\rho\big\rangle_{H^{-1}(G),H_0^1(G)}\Big]dt
+\int_0^T\big\langle a_3\phi,\rho\big\rangle_{L^2(G)}\,dW(t). \tag{10.17}
\]

Since (ϕ, ϕ̂) ∈ C_F([0,T]; L²(Ω; L²(G))) × C_F([0,T]; L²(Ω; H⁻¹(G))), it follows that

\[
\Big\langle\sum_{j,k=1}^n(a^{jk}\phi_{x_j})_{x_k},\rho\Big\rangle_{H^{-2}(G),H_0^2(G)}
+\big\langle\zeta,\rho\big\rangle_{H^{-1}(G),H_0^1(G)}\in L_F^2(0,T)
\]

and ⟨a₃ϕ, ρ⟩_{L²(G)} ∈ C_F([0,T]; L²(Ω)). These, together with (10.17), contradict Proposition 6.2.

Case 3) h = 0. Assume that the system (10.3) were exactly controllable. Then, by Theorem 7.16, solutions (z, ẑ, Z, Ẑ) to (10.4) (with τ = T) would satisfy, for all (z^T, ẑ^T) ∈ L²_{F_T}(Ω; H₀¹(G)) × L²_{F_T}(Ω; L²(G)),

\[
|(z^T,\hat z^T)|_{L_{F_T}^2(\Omega;H_0^1(G))\times L_{F_T}^2(\Omega;L^2(G))}
\le C\big(|Z|_{L_F^2(0,T;H_0^1(G))}+|\widehat Z|_{L_F^2(0,T;L^2(G))}\big). \tag{10.18}
\]

For any nonzero (η₀, η₁) ∈ H₀¹(G) × L²(G), let us consider the following random wave equation:

\[
\begin{cases}
\eta_{tt}-\displaystyle\sum_{j,k=1}^n(a^{jk}\eta_{x_j})_{x_k}=b_1\cdot\nabla\eta+b_2\eta & \text{in } (0,T)\times G,\\
\eta=0 & \text{on } (0,T)\times\Gamma,\\
\eta(0)=\eta_0,\quad \eta_t(0)=\eta_1 & \text{in } G.
\end{cases} \tag{10.19}
\]

Clearly, (z, ẑ, Z, Ẑ) = (η, η_t, 0, 0) solves (10.4) with the final datum (z^T, ẑ^T) = (η(T), η_t(T)), a contradiction to the inequality (10.18).

Now, similarly to the deterministic setting, in order to give a positive controllability result for the system (10.3), we need the following additional assumptions on the coefficients (a^{jk})_{1≤j,k≤n}:


Condition 10.1 There exists a positive function φ(·) ∈ C²(Ḡ) satisfying the following:

1) For some constant µ₀ > 0,

\[
\sum_{j,k=1}^n\sum_{j',k'=1}^n\Big(2a^{jk'}\big(a^{j'k}\varphi_{x_{j'}}\big)_{x_{k'}}
-a^{jk}_{x_{k'}}a^{j'k'}\varphi_{x_{j'}}\Big)\xi^j\xi^k
\ge \mu_0\sum_{j,k=1}^n a^{jk}\xi^j\xi^k,\qquad
\forall\,(x,\xi^1,\cdots,\xi^n)\in\overline G\times\mathbb R^n. \tag{10.20}
\]

2) The function φ(·) has no critical point in Ḡ, i.e.,

\[
\min_{x\in\overline G}|\nabla\varphi(x)|>0. \tag{10.21}
\]

In the rest of this chapter, we shall choose the set Γ₀ as follows:

\[
\Gamma_0\triangleq\Big\{x\in\Gamma\,\Big|\,\sum_{j,k=1}^n a^{jk}\varphi_{x_j}(x)\nu^k(x)>0\Big\}. \tag{10.22}
\]

Also, write

\[
R_1\triangleq\max_{x\in\overline G}\sqrt{\varphi(x)},\qquad
R_0\triangleq\min_{x\in\overline G}\sqrt{\varphi(x)}.
\]

It is easy to check that if φ(·) satisfies Condition 10.1, then for any given constants α ≥ 1 and β ∈ ℝ, φ̃ = αφ + β still satisfies Condition 10.1 with µ₀ replaced by αµ₀. Therefore we may choose φ, µ₀, c₀ > 0, c₁ > 0 and T such that the following holds:

Condition 10.2 The following inequalities hold:

1) \(\displaystyle\frac14\sum_{j,k=1}^n a^{jk}(x)\varphi_{x_j}(x)\varphi_{x_k}(x)\ge R_1^2,\ \forall\,x\in\overline G\);

2) \(T>T_0=2R_1\);

3) \(\displaystyle\Big(\frac{2R_1}{T}\Big)^2<c_1<\frac{2R_1}{T}\) and \(\displaystyle c_1<\min\Big\{1,\ \frac{1}{16\big(1+|a_5|^2_{L_F^\infty(0,T;L^\infty(G))}\big)^2}\Big\}\);

4) \(\mu_0-4c_1-c_0>R_1\).

Remark 10.10. Conditions 10.1-10.2 can be regarded as modifications of similar conditions introduced in [153] (see also [111, 109]). As we have explained, since ∑_{j,k=1}^n a^{jk}φ_{x_j}φ_{x_k} > 0 and one can choose µ₀ in Condition 10.1 large enough, Condition 10.2 can obviously be satisfied. We state it here merely to emphasize the relationship among 0 < c₀ < c₁ < 1, µ₀ and T. In other words, once Condition 10.1 is fulfilled, Condition 10.2 can always be satisfied.


To be clearer, we give an example of the choice of φ when (a^{jk})_{1≤j,k≤n} is the identity matrix. Let x₀ ∈ ℝⁿ \ Ḡ be such that |x − x₀| ≥ 1 for all x ∈ Ḡ, and let α₀ = max_{x∈Ḡ}|x − x₀|². Then for every constant α ≥ max{α₀, 1},

\[
\alpha\ge\sqrt{\alpha}\,\max_{x\in\overline G}|x-x_0|.
\]

Let φ(x) = α|x − x₀|². Then the left hand side of (10.20) specializes to

\[
\sum_{j=1}^n\varphi_{x_jx_j}\xi_j^2=2\alpha|\xi|^2,\qquad
\forall\,(x,\xi^1,\cdots,\xi^n)\in\overline G\times\mathbb R^n. \tag{10.23}
\]

Then (10.20) holds with µ₀ = 2α. Further, it is obvious that (10.21) is true, and

\[
\frac14\sum_{j,k=1}^n a^{jk}(x)\varphi_{x_j}(x)\varphi_{x_k}(x)
=\frac14\sum_{j=1}^n\varphi_{x_j}(x)^2=\alpha^2|x-x_0|^2\ge\max_{x\in\overline G}\varphi(x).
\]

Hence, the first inequality in Condition 10.2 holds. Next, one can choose T large enough such that the second and third inequalities in Condition 10.2 hold and

\[
c_1<\min\Big\{1,\ \frac{1}{16\big(1+|a_5|^2_{L_F^\infty(0,T;L^\infty(G))}\big)^2}\Big\}.
\]

Let α ≥ max{α₀, 1, 4c₁ + c₀}. Then µ₀ − 4c₁ − c₀ > R₁.

Remark 10.11. To ensure that 3) in Condition 10.2 holds, the larger the number |a₅|_{L^∞_F(0,T;L^∞(G))} is, the smaller c₁ and the longer the time T we should choose. In some sense, this seems reasonable because a₅ stands for the effect of the control in the diffusion term on the drift term. One needs time to get rid of such an effect.

Our exact controllability result for the system (10.3) is stated as follows:

Theorem 10.12. Let Conditions 10.1 and 10.2 hold, and let Γ₀ be given by (10.22). Then the system (10.3) is exactly controllable at time T.

The proof of Theorem 10.12 is rather long and technical. Indeed, the main body of the rest of this chapter is devoted to proving this theorem.

Remark 10.13. Although it is necessary to put the controls f and g on the whole domain, one may suspect that Theorem 10.12 is trivial. For instance, one may give a possible "proof" of Theorem 10.12 as follows: choosing f = −a₄y and g = −a₃y, the system (10.3) becomes

\[
\begin{cases}
dy=\hat y\,dt & \text{in } Q,\\
d\hat y-\displaystyle\sum_{j,k=1}^n(a^{jk}y_{x_j})_{x_k}\,dt=(a_1\cdot\nabla y+a_2y-a_5a_3y)\,dt & \text{in } Q,\\
y=\chi_{\Sigma_0}h & \text{on } \Sigma,\\
y(0)=y_0,\quad \hat y(0)=\hat y_0 & \text{in } G.
\end{cases} \tag{10.24}
\]


This is actually a wave equation with random coefficients. If one regards the sample point ω as a parameter, then for every given ω ∈ Ω there is a control u(·, ·, ω) such that the solution to (10.24) fulfills (y(T, x, ω), ŷ(T, x, ω)) = (y₁(x, ω), ŷ₁(x, ω)). It is easy to see that the control constructed in this way belongs to L²_{F_T}(Ω; L²(0,T; L²(Γ₀))). However, we do not know whether it is adapted to the filtration F or not. If it is not, then determining the value of the control at the present time would require information from the future, which is meaningless in the stochastic framework chosen in this book.

10.4 A Reduction of the Exact Controllability Problem

In this section, we shall reduce the exact controllability problem for the system (10.3) to the same problem for the following controlled backward stochastic wave equation:

\[
\begin{cases}
dy=\hat y\,dt+(a_4y+Y)\,dW(t) & \text{in } Q,\\
d\hat y-\displaystyle\sum_{j,k=1}^n(a^{jk}y_{x_j})_{x_k}\,dt=(a_1\cdot\nabla y+a_2y+a_5\widehat Y)\,dt+(a_3y+\widehat Y)\,dW(t) & \text{in } Q,\\
y=\chi_{\Sigma_0}h & \text{on } \Sigma,\\
\hat y=0 & \text{on } \Sigma,\\
y(T)=y_T,\quad \hat y(T)=\hat y_T & \text{in } G.
\end{cases} \tag{10.25}
\]

Here (y_T, ŷ_T) ∈ L²_{F_T}(Ω; L²(G)) × L²_{F_T}(Ω; H⁻¹(G)), (y, ŷ, Y, Ŷ) is the state variable and h ∈ L²_F(0,T; L²(Γ₀)) is the control variable. Note that there is only one control in the system (10.25).

Solutions to (10.25) are defined in the sense of transposition as well. For this, we need to introduce the following equation:

\[
\begin{cases}
dz=\hat z\,dt+(f-a_5z)\,dW(t) & \text{in } Q^\tau,\\
d\hat z-\displaystyle\sum_{j,k=1}^n(a^{jk}z_{x_j})_{x_k}\,dt
=\big[-a_1\cdot\nabla z+(-\operatorname{div}a_1+a_2-a_3a_5)z+a_3f-a_4\hat f\,\big]dt+\hat f\,dW(t) & \text{in } Q^\tau,\\
z=0 & \text{on } \Sigma^\tau,\\
z(\tau)=z_\tau,\quad \hat z(\tau)=\hat z_\tau & \text{in } G.
\end{cases} \tag{10.26}
\]

Here τ ∈ [0,T), Q^τ ≜ (τ,T) × G, Σ^τ ≜ (τ,T) × Γ, (z_τ, ẑ_τ) ∈ L²_{F_τ}(Ω; H₀¹(G)) × L²_{F_τ}(Ω; L²(G)), f ∈ L²_F(0,T; H₀¹(G)) and f̂ ∈ L²_F(0,T; L²(G)). Note that, unlike the backward stochastic wave equation (10.4), the system (10.26) is


a forward stochastic wave equation. By means of Theorem 3.13, for any $(z_\tau,\hat z_\tau)\in L^2_{\mathcal F_\tau}(\Omega;H_0^1(G))\times L^2_{\mathcal F_\tau}(\Omega;L^2(G))$, $f\in L^2_{\mathbb F}(0,T;H_0^1(G))$ and $\hat f\in L^2_{\mathbb F}(0,T;L^2(G))$, the system (10.26) admits a unique weak solution $(z,\hat z)\in C_{\mathbb F}([\tau,T];H_0^1(G))\times C_{\mathbb F}([\tau,T];L^2(G))$. Moreover,
\[ |z|_{C_{\mathbb F}([\tau,T];H_0^1(G))}+|\hat z|_{C_{\mathbb F}([\tau,T];L^2(G))}\le Ce^{Cr_4}\big(|z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;H_0^1(G))}+|\hat z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;L^2(G))}+|f|_{L^2_{\mathbb F}(0,T;H_0^1(G))}+|\hat f|_{L^2_{\mathbb F}(0,T;L^2(G))}\big), \tag{10.27} \]
where
\[ r_4\overset{\Delta}{=}|a_1|^2_{L^\infty_{\mathbb F}(0,T;W^{1,\infty}(G;\mathbb R^n))}+\sum_{i=2}^5|a_i|^2_{L^\infty_{\mathbb F}(0,T;L^\infty(G))}+\sum_{i=3}^5|a_i|^4_{L^\infty_{\mathbb F}(0,T;L^\infty(G))}+|a_5|^2_{L^\infty_{\mathbb F}(0,T;W^{1,\infty}_0(G))} \]
and the constant C is independent of τ.

Similar to the proof of Proposition 10.6, one can show the following result:

Proposition 10.14. The solution $(z,\hat z)$ to (10.26) satisfies $\frac{\partial z}{\partial\nu}\big|_\Gamma\in L^2_{\mathbb F}(\tau,T;L^2(\Gamma))$. Furthermore,
\[ \Big|\frac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(\tau,T;L^2(\Gamma))}\le Ce^{Cr_4}\big(|z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;H_0^1(G))}+|\hat z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;L^2(G))}+|f|_{L^2_{\mathbb F}(0,T;H_0^1(G))}+|\hat f|_{L^2_{\mathbb F}(0,T;L^2(G))}\big), \tag{10.28} \]
where the constant C is independent of τ.

In view of Definition 7.13, a quadruple of stochastic processes $(y,\hat y,Y,\widehat Y)\in C_{\mathbb F}([0,T];L^2(\Omega;L^2(G)))\times C_{\mathbb F}([0,T];L^2(\Omega;H^{-1}(G)))\times L^2_{\mathbb F}(0,T;L^2(G))\times L^2_{\mathbb F}(0,T;H^{-1}(G))$ is called a transposition solution to (10.25) if, for any $(z_\tau,\hat z_\tau)\in L^2_{\mathcal F_\tau}(\Omega;H_0^1(G))\times L^2_{\mathcal F_\tau}(\Omega;L^2(G))$, $f\in L^2_{\mathbb F}(0,T;H_0^1(G))$ and $\hat f\in L^2_{\mathbb F}(0,T;L^2(G))$, it holds that
\[
\begin{aligned}
&E\langle\hat y_T,z(T)\rangle_{H^{-1}(G),H_0^1(G)}-E\langle y_T,\hat z(T)\rangle_{L^2(G)}-E\langle\hat y(\tau),z_\tau\rangle_{H^{-1}(G),H_0^1(G)}+E\langle y(\tau),\hat z_\tau\rangle_{L^2(G)}\\
&=-E\int_\tau^T\langle Y,\hat f\rangle_{L^2(G)}dt+E\int_\tau^T\langle\widehat Y,f\rangle_{H^{-1}(G),H_0^1(G)}dt-E\int_\tau^T\int_{\Gamma_0}h\frac{\partial z}{\partial\nu}\,d\Gamma ds,
\end{aligned}\tag{10.29}
\]
where $(z,\hat z)$ solves (10.26). Note that, by Proposition 10.14, the solution $(z,\hat z)$ to (10.26) satisfies $\frac{\partial z}{\partial\nu}\big|_\Gamma\in L^2_{\mathbb F}(\tau,T;L^2(\Gamma))$; hence the term $E\int_\tau^T\int_{\Gamma_0}h\frac{\partial z}{\partial\nu}\,d\Gamma ds$ in (10.29) makes sense.


By Theorem 7.14, for each $(y_T,\hat y_T)\in L^2_{\mathcal F_T}(\Omega;L^2(G))\times L^2_{\mathcal F_T}(\Omega;H^{-1}(G))$ and $h\in L^2_{\mathbb F}(0,T;L^2(\Gamma_0))$, the system (10.25) admits a unique transposition solution $(y,\hat y,Y,\widehat Y)$. Moreover,
\[
\begin{aligned}
&|(y,\hat y)|_{C_{\mathbb F}([0,T];L^2(\Omega;L^2(G)))\times C_{\mathbb F}([0,T];L^2(\Omega;H^{-1}(G)))}+|(Y,\widehat Y)|_{L^2_{\mathbb F}(0,T;L^2(G))\times L^2_{\mathbb F}(0,T;H^{-1}(G))}\\
&\le Ce^{Cr_2}\big(|y_T|_{L^2_{\mathcal F_T}(\Omega;L^2(G))}+|\hat y_T|_{L^2_{\mathcal F_T}(\Omega;H^{-1}(G))}+|h|_{L^2_{\mathbb F}(0,T;L^2(\Gamma_0))}\big),
\end{aligned}\tag{10.30}
\]
where $r_2$ is given by (10.13).

Similarly to (7.9), the system (10.25) is called exactly controllable at time T if, for any $(y_T,\hat y_T)\in L^2_{\mathcal F_T}(\Omega;L^2(G))\times L^2_{\mathcal F_T}(\Omega;H^{-1}(G))$ and $(y_0,\hat y_0)\in L^2(G)\times H^{-1}(G)$, one can find a control $h\in L^2_{\mathbb F}(0,T;L^2(\Gamma_0))$ such that the corresponding solution $(y,\hat y)$ to (10.25) satisfies $(y(0),\hat y(0))=(y_0,\hat y_0)$.

Clearly, the following two results hold.

Proposition 10.15. Let $\tau=T$ in (10.4) and $\tau=0$ in (10.26). If $(z,\hat z,Z,\widehat Z)$ is a solution to (10.4), then $(z,\hat z)$ is a solution to (10.26) with the initial data $(z_0,\hat z_0)=(z(0),\hat z(0))$ and the nonhomogeneous terms $(f,\hat f)=(Z,\widehat Z)$. Conversely, if $(z,\hat z)$ is a solution to (10.26), then $(z,\hat z,Z,\widehat Z)=(z,\hat z,f,\hat f)$ is a solution to (10.4) with the final data $(z^T,\hat z^T)=(z(T),\hat z(T))$.

Proposition 10.16. If $(y,\hat y)$ is a transposition solution to (10.3) for some $f\in L^2_{\mathbb F}(0,T;L^2(G))$, $g\in L^2_{\mathbb F}(0,T;H^{-1}(G))$ and $h\in L^2_{\mathbb F}(0,T;L^2(\Gamma_0))$, then $(y,\hat y,Y,\widehat Y)=(y,\hat y,f,g)$ is a transposition solution to (10.25) with the final data $(y_T,\hat y_T)=(y(T),\hat y(T))$ and the nonhomogeneous term $h=h$. Conversely, if $(y,\hat y,Y,\widehat Y)$ is a transposition solution to (10.25) for some $h\in L^2_{\mathbb F}(0,T;L^2(\Gamma_0))$, then $(y,\hat y)$ is a transposition solution to (10.3) with the initial data $(y_0,\hat y_0)=(y(0),\hat y(0))$ and the nonhomogeneous terms $(f,g,h)=(Y,\widehat Y,h)$.

By Propositions 10.15 and 10.16, and borrowing an idea from [275], we obtain the following result:

Proposition 10.17. The system (10.3) is exactly controllable at time T if and only if so is the system (10.25).

Proof: The "if" part. Let $(y_0,\hat y_0)\in L^2(G)\times H^{-1}(G)$ and $(y^T,\hat y^T)\in L^2_{\mathcal F_T}(\Omega;L^2(G))\times L^2_{\mathcal F_T}(\Omega;H^{-1}(G))$ be arbitrarily given. Since (10.25) is exactly controllable at time T, there exists $h\in L^2_{\mathbb F}(0,T;L^2(\Gamma_0))$ such that the corresponding solution $(y,\hat y,Y,\widehat Y)$ of (10.25) with $(y_T,\hat y_T)=(y^T,\hat y^T)$ satisfies $(y(0),\hat y(0))=(y_0,\hat y_0)$. Hence $(y,\hat y)$ is a solution to (10.3) with the triple of controls $(f,g,h)=(Y,\widehat Y,h)$ such that $(y(T),\hat y(T))=(y^T,\hat y^T)$. Hence, the system (10.3) is exactly controllable at time T. The proof of the "only if" part is similar.

By Proposition 10.17, the exact controllability of (10.3) is reduced to that of (10.25), which is easier to handle, and therefore in the rest of this chapter we shall focus on the latter.


10.5 A Fundamental Identity for Stochastic Hyperbolic-Like Operators

As a key preliminary to proving the exact controllability of (10.25), in this section we derive a fundamental identity for stochastic hyperbolic-like operators; this identity has independent interest and may be applied elsewhere.

Throughout this section, we assume that $b^{jk}\in C^3((0,T)\times\mathbb R^n)$ satisfies $b^{jk}=b^{kj}$ for $j,k=1,2,\cdots,n$, and that $\ell,\Psi\in C^2((0,T)\times\mathbb R^n)$. Write
\[
\begin{cases}
c^{jk}\overset{\Delta}{=}(b^{jk}\ell_t)_t+\displaystyle\sum_{j',k'=1}^n\big(2b^{jk'}(b^{j'k}\ell_{x_{j'}})_{x_{k'}}-(b^{jk}b^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)+\Psi b^{jk},\\[2mm]
A\overset{\Delta}{=}(\ell_t^2-\ell_{tt})-\displaystyle\sum_{j,k=1}^n\big(b^{jk}\ell_{x_j}\ell_{x_k}-(b^{jk}\ell_{x_j})_{x_k}\big)-\Psi,\\[2mm]
B\overset{\Delta}{=}A\Psi+(A\ell_t)_t-\displaystyle\sum_{j,k=1}^n(Ab^{jk}\ell_{x_j})_{x_k}+\frac12\Big(\Psi_{tt}-\sum_{j,k=1}^n(b^{jk}\Psi_{x_j})_{x_k}\Big).
\end{cases}\tag{10.31}
\]
Here and below, $\sum_{j,k}$ and $\sum_{j',k'}$ denote sums from 1 to n.

Very similarly to the identities (1.48), (8.19) and (9.26), we have the following result.

Lemma 10.18. Let z be an $H^2(\mathbb R^n)$-valued Itô process and $\hat z$ be an $L^2(\mathbb R^n)$-valued Itô process such that, in $(0,T)\times\mathbb R^n$,
\[ dz=\hat z\,dt+Z\,dW(t) \tag{10.32} \]
for some $Z\in L^2_{\mathbb F}(0,T;H^1(\mathbb R^n))$. Set $\theta=e^\ell$, $v=\theta z$ and $\hat v=\theta\hat z+\ell_tv$. Then, for a.e. $x\in\mathbb R^n$,
\[
\begin{aligned}
&\theta\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\Big(d\hat z-\sum_{j,k}(b^{jk}z_{x_j})_{x_k}dt\Big)\\
&\ +\sum_{j,k}\Big[\sum_{j',k'}\big(2b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}-b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\big)-2\ell_tb^{jk}v_{x_j}\hat v+b^{jk}\ell_{x_j}\hat v^2+\Psi b^{jk}v_{x_j}v-\frac{\Psi_{x_j}}2b^{jk}v^2-Ab^{jk}\ell_{x_j}v^2\Big]_{x_k}dt\\
&\ +d\Big[\ell_t\sum_{j,k}b^{jk}v_{x_j}v_{x_k}+\ell_t\hat v^2-2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}\hat v-\Psi\hat vv+\Big(A\ell_t+\frac{\Psi_t}2\Big)v^2\Big]\\
&=\Big[\Big(\ell_{tt}+\sum_{j,k}(b^{jk}\ell_{x_j})_{x_k}-\Psi\Big)\hat v^2+\sum_{j,k}c^{jk}v_{x_j}v_{x_k}+Bv^2-2\sum_{j,k}\big((b^{jk}\ell_{x_k})_t+b^{jk}\ell_{tx_k}\big)v_{x_j}\hat v+\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)^2\Big]dt\\
&\ +\ell_t(d\hat v)^2-2\sum_{j,k}b^{jk}\ell_{x_j}\,dv_{x_k}\,d\hat v-\Psi\,dv\,d\hat v+\ell_t\sum_{j,k}b^{jk}(dv_{x_j})(dv_{x_k})+A\ell_t(dv)^2\\
&\ +\Big[-\theta\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\ell_tZ-\Big(2\sum_{j,k}b^{jk}(\theta Z)_{x_k}\ell_{x_j}\hat v-\theta\Psi_tvZ+\theta\Psi\hat vZ\Big)\\
&\qquad+2\Big(\sum_{j,k}b^{jk}v_{x_j}(\theta Z)_{x_k}+\theta AvZ\Big)\ell_t\Big]dW(t),\qquad\text{a.s.},
\end{aligned}\tag{10.33}
\]

where $(dv)^2$ and $(d\hat v)^2$ denote the quadratic variation processes of v and $\hat v$, respectively.

Proof: By (10.32), and recalling $v=\theta z$ and $\hat v=\theta\hat z+\ell_tv$, we obtain that
\[ dv=d(\theta z)=\theta_tz\,dt+\theta\,dz=\ell_t\theta z\,dt+\theta\hat z\,dt+\theta Z\,dW(t)=\hat v\,dt+\theta Z\,dW(t). \tag{10.34} \]
Hence,
\[ d\hat z=d\big[\theta^{-1}(\hat v-\ell_tv)\big]=\theta^{-1}\big[d\hat v-\ell_{tt}v\,dt-\ell_t\,dv-\ell_t(\hat v-\ell_tv)\,dt\big]=\theta^{-1}\big[d\hat v-\big(2\ell_t\hat v+\ell_{tt}v-\ell_t^2v\big)dt-\theta\ell_tZ\,dW(t)\big]. \tag{10.35} \]
Similarly, by $b^{jk}=b^{kj}$ for $j,k=1,2,\cdots,n$, we have
\[ \sum_{j,k}(b^{jk}z_{x_j})_{x_k}=\theta^{-1}\sum_{j,k}\big[(b^{jk}v_{x_j})_{x_k}-2b^{jk}\ell_{x_j}v_{x_k}+\big(b^{jk}\ell_{x_j}\ell_{x_k}-b^{jk}_{x_k}\ell_{x_j}-b^{jk}\ell_{x_jx_k}\big)v\big]. \tag{10.36} \]

Therefore, from (10.35)–(10.36) and the definition of A in (10.31), we obtain that
\[
\begin{aligned}
&\theta\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\Big(d\hat z-\sum_{j,k}(b^{jk}z_{x_j})_{x_k}dt\Big)\\
&=\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\Big[d\hat v-\sum_{j,k}(b^{jk}v_{x_j})_{x_k}dt+Av\,dt+\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)dt-\theta\ell_tZ\,dW(t)\Big]\\
&=\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)d\hat v+\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)^2dt\\
&\quad+\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\Big(-\sum_{j,k}(b^{jk}v_{x_j})_{x_k}+Av\Big)dt-\theta\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\ell_tZ\,dW(t).
\end{aligned}\tag{10.37}
\]

We now analyze the first and third terms on the right-hand side of (10.37). Using Itô's formula and noting (10.34), we have
\[
\begin{aligned}
&\Big(-2\ell_t\hat v+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)d\hat v\\
&=d\Big(-\ell_t\hat v^2+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}\hat v+\Psi\hat vv\Big)-2\sum_{j,k}b^{jk}\ell_{x_j}\hat v\,dv_{x_k}-\Psi\hat v\,dv\\
&\quad-\Big(-\ell_{tt}\hat v^2+2\sum_{j,k}(b^{jk}\ell_{x_j})_tv_{x_k}\hat v+\Psi_t\hat vv\Big)dt+\ell_t(d\hat v)^2-2\sum_{j,k}b^{jk}\ell_{x_j}\,dv_{x_k}\,d\hat v-\Psi\,dv\,d\hat v\\
&=d\Big(-\ell_t\hat v^2+2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}\hat v+\Psi\hat vv-\frac{\Psi_t}2v^2\Big)-\sum_{j,k}\big(b^{jk}\ell_{x_j}\hat v^2\big)_{x_k}dt\\
&\quad+\Big[\Big(\ell_{tt}+\sum_{j,k}(b^{jk}\ell_{x_j})_{x_k}-\Psi\Big)\hat v^2-2\sum_{j,k}(b^{jk}\ell_{x_k})_tv_{x_j}\hat v+\frac{\Psi_{tt}}2v^2\Big]dt\\
&\quad+\ell_t(d\hat v)^2-2\sum_{j,k}b^{jk}\ell_{x_j}\,dv_{x_k}\,d\hat v-\Psi\,dv\,d\hat v-\Big(2\sum_{j,k}b^{jk}(\theta Z)_{x_k}\ell_{x_j}\hat v-\theta\Psi_tvZ+\theta\Psi\hat vZ\Big)dW(t).
\end{aligned}\tag{10.38}
\]

Next,
\[
\begin{aligned}
&-2\ell_t\hat v\Big(-\sum_{j,k}(b^{jk}v_{x_j})_{x_k}+Av\Big)dt\\
&=2\sum_{j,k}\big(\ell_tb^{jk}v_{x_j}\hat v\big)_{x_k}dt-2\sum_{j,k}\ell_{tx_k}b^{jk}v_{x_j}\hat v\,dt-2\ell_t\sum_{j,k}b^{jk}v_{x_j}\hat v_{x_k}dt-2A\ell_t\hat vv\,dt\\
&=2\sum_{j,k}\big(\ell_tb^{jk}v_{x_j}\hat v\big)_{x_k}dt-2\sum_{j,k}\ell_{tx_k}b^{jk}v_{x_j}\hat v\,dt-2\ell_t\sum_{j,k}b^{jk}v_{x_j}\big(dv-\theta Z\,dW(t)\big)_{x_k}-2A\ell_tv\big(dv-\theta Z\,dW(t)\big)\\
&=2\sum_{j,k}\big(\ell_tb^{jk}v_{x_j}\hat v\big)_{x_k}dt-2\sum_{j,k}\ell_{tx_k}b^{jk}v_{x_j}\hat v\,dt-d\Big(\ell_t\sum_{j,k}b^{jk}v_{x_j}v_{x_k}+A\ell_tv^2\Big)\\
&\quad+\sum_{j,k}\big(\ell_tb^{jk}\big)_tv_{x_j}v_{x_k}dt+(A\ell_t)_tv^2dt+\ell_t\sum_{j,k}b^{jk}(dv_{x_j})(dv_{x_k})+A\ell_t(dv)^2\\
&\quad+2\Big(\sum_{j,k}b^{jk}v_{x_j}(\theta Z)_{x_k}+\theta AvZ\Big)\ell_t\,dW(t).
\end{aligned}\tag{10.39}
\]

Further, by some direct computation, one may check that
\[
\begin{aligned}
&2\sum_{j,k}b^{jk}\ell_{x_j}v_{x_k}\Big(-\sum_{j',k'}(b^{j'k'}v_{x_{j'}})_{x_{k'}}+Av\Big)\\
&=-\sum_{j,k}\Big[\sum_{j',k'}\big(2b^{jk}b^{j'k'}\ell_{x_{j'}}v_{x_j}v_{x_{k'}}-b^{jk}b^{j'k'}\ell_{x_j}v_{x_{j'}}v_{x_{k'}}\big)-Ab^{jk}\ell_{x_j}v^2\Big]_{x_k}\\
&\quad+\sum_{j,k}\sum_{j',k'}\big(2b^{jk'}(b^{j'k}\ell_{x_{j'}})_{x_{k'}}-(b^{jk}b^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)v_{x_j}v_{x_k}-\sum_{j,k}(Ab^{jk}\ell_{x_j})_{x_k}v^2
\end{aligned}\tag{10.40}
\]

and
\[
\begin{aligned}
&\Psi v\Big(-\sum_{j,k}(b^{jk}v_{x_j})_{x_k}+Av\Big)\\
&=-\sum_{j,k}\Big(\Psi b^{jk}v_{x_j}v-\frac{\Psi_{x_j}}2b^{jk}v^2\Big)_{x_k}+\sum_{j,k}\Psi b^{jk}v_{x_j}v_{x_k}+\Big(-\frac12\sum_{j,k}(b^{jk}\Psi_{x_j})_{x_k}+A\Psi\Big)v^2.
\end{aligned}\tag{10.41}
\]
Finally, combining (10.37)–(10.41), we obtain the desired equality (10.33). This completes the proof of Lemma 10.18.

10.6 Observability Estimate for the Stochastic Wave Equation In order to prove Theorem 10.12, by Proposition 10.17 and Theorem 7.24, it suffices to establish the following observability estimate for the stochastic wave equation (10.26).
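The mechanism "observability estimate for the dual equation implies exact controllability" can be illustrated in finite dimensions (this sketch is not from the book; the double-integrator matrices, the grid size, and the Gramian-based control formula are standard textbook ingredients used only as an analogue): for $x'=Ax+Bu$, the control $u(t)=B^\top e^{A^\top(T-t)}G^{-1}\big(x_1-e^{AT}x_0\big)$, with $G=\int_0^T e^{A(T-s)}BB^\top e^{A^\top(T-s)}ds$, steers $x_0$ to $x_1$ exactly.

```python
import numpy as np

def expA(t):
    # Matrix exponential of A = [[0, 1], [0, 0]] (nilpotent double integrator).
    return np.array([[1.0, t], [0.0, 1.0]])

B = np.array([[0.0], [1.0]])
T, N = 1.0, 2000
ts = (np.arange(N) + 0.5) * (T / N)          # midpoint quadrature nodes

# Controllability Gramian G = \int_0^T e^{A(T-s)} B B^T e^{A^T(T-s)} ds.
G = sum(expA(T - s) @ B @ B.T @ expA(T - s).T for s in ts) * (T / N)

x0, x1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w = np.linalg.solve(G, x1 - expA(T) @ x0)     # invert the Gramian once

def u(s):                                     # Gramian-based steering control
    return float(B.T @ expA(T - s).T @ w)

# Variation of constants: x(T) = e^{AT} x0 + \int_0^T e^{A(T-s)} B u(s) ds.
xT = expA(T) @ x0 + sum(expA(T - s) @ B[:, 0] * u(s) for s in ts) * (T / N)
print(np.round(xT, 8))                        # ≈ x1
```

Invertibility of G is the finite-dimensional counterpart of an observability estimate for the dual system; in the stochastic PDE setting this role is played by the estimate (10.42) below.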


Theorem 10.19. Under the assumptions of Theorem 10.12, all solutions to the equation (10.26) with $\tau=0$, $f=0$ and $\hat f=0$ satisfy
\[ |(z_0,\hat z_0)|_{H_0^1(G)\times L^2(G)}\le Ce^{Cr_4}\Big|\frac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(0,T;L^2(\Gamma_0))},\qquad\forall\,(z_0,\hat z_0)\in H_0^1(G)\times L^2(G). \tag{10.42} \]

Proof: The proof is split into three steps.

Step 1. For each parameter $\lambda>0$, let us choose
\[ \ell(t,x)=\lambda\Big[\varphi(x)-c_1\Big(t-\frac T2\Big)^2\Big], \tag{10.43} \]
where $\varphi(\cdot)$ and $c_1$ are given in Conditions 10.1 and 10.2, respectively. Put
\[ \Lambda_i\overset{\Delta}{=}\Big\{(t,x)\in Q\ \Big|\ \varphi(x)-c_1\Big(t-\frac T2\Big)^2>\frac{R_0^2}{2(i+2)}\Big\},\qquad i=0,1,2. \tag{10.44} \]
Let
\[ T_i\overset{\Delta}{=}\frac T2-\varepsilon_iT,\qquad T_i'\overset{\Delta}{=}\frac T2+\varepsilon_iT,\qquad Q_i\overset{\Delta}{=}(T_i,T_i')\times G,\qquad i=0,1, \tag{10.45} \]
where $\varepsilon_0$ and $\varepsilon_1$ are given below. From Condition 10.2 and (10.43), it follows that
\[ \ell(0,x)=\ell(T,x)\le\lambda\Big(R_1-\frac{c_1T^2}4\Big)<0,\qquad\forall\,x\in G. \tag{10.46} \]
Hence, there exists $\varepsilon_1\in(0,\tfrac12)$ such that
\[ \Lambda_2\subset Q_1 \tag{10.47} \]
and
\[ \ell(t,x)<0,\qquad\forall\,(t,x)\in\big((0,T_1)\cup(T_1',T)\big)\times G. \tag{10.48} \]
Next, since $\{T/2\}\times G\subset\Lambda_0$, one can find $\varepsilon_0\in(0,\varepsilon_1)$ such that
\[ Q_0\subset\Lambda_0. \tag{10.49} \]

Step 2. Recall Condition 10.2 for $c_0$. For each $(z_0,\hat z_0)\in H_0^1(G)\times L^2(G)$, let us apply Lemma 10.18 with $(b^{jk})_{1\le j,k\le n}=(a^{jk})_{1\le j,k\le n}$ and
\[ \Psi=\ell_{tt}+\sum_{j,k}(a^{jk}\ell_{x_j})_{x_k}-c_0\lambda \tag{10.50} \]
to the corresponding solution $(z,\hat z)\in C_{\mathbb F}([0,T];H_0^1(G))\times C_{\mathbb F}([0,T];L^2(G))$ to (10.26) with $\tau=0$, $f=0$ and $\hat f=0$, and then analyze the resulting terms in (10.33) one by one. In what follows, we use the notations $v=\theta z$ and $\hat v=\theta\hat z+\ell_tv$, where $\theta=e^\ell$ for $\ell$ given by (10.43).

Let us first consider the terms which stand for the "energy" of the solutions. To this end, we need to compute the orders of $\lambda$ (when $\lambda$ is sufficiently large) in the coefficients of $\hat v^2$, $|\nabla v|^2$ and $v^2$. Clearly, the term for $\hat v^2$ reads
\[ \Big(\ell_{tt}+\sum_{j,k}(a^{jk}\ell_{x_j})_{x_k}-\Psi\Big)\hat v^2=c_0\lambda\hat v^2. \tag{10.51} \]

Noting that $\ell_{tx_k}=\ell_{x_kt}=0$, we get
\[ 2\sum_{j,k}\big((a^{jk}\ell_{x_k})_t+a^{jk}\ell_{tx_k}\big)v_{x_j}\hat v=0. \tag{10.52} \]

From (10.50), we see that
\[
\begin{aligned}
&(a^{jk}\ell_t)_t+\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\ell_{x_{j'}})_{x_{k'}}-(a^{jk}a^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)+\Psi a^{jk}\\
&=a^{jk}\ell_{tt}+\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\ell_{x_{j'}})_{x_{k'}}-(a^{jk}a^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)+a^{jk}\Big(\ell_{tt}+\sum_{j',k'}(a^{j'k'}\ell_{x_{j'}})_{x_{k'}}-c_0\lambda\Big)\\
&=2a^{jk}\ell_{tt}+\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\ell_{x_{j'}})_{x_{k'}}-a^{jk}_{x_{k'}}a^{j'k'}\ell_{x_{j'}}\big)-a^{jk}c_0\lambda\\
&=2a^{jk}\ell_{tt}+\lambda\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\varphi_{x_{j'}})_{x_{k'}}-a^{jk}_{x_{k'}}a^{j'k'}\varphi_{x_{j'}}\big)-a^{jk}c_0\lambda.
\end{aligned}
\]
This, together with Condition 10.1, implies that
\[
\begin{aligned}
\sum_{j,k}c^{jk}v_{x_j}v_{x_k}&=\sum_{j,k}\Big[(a^{jk}\ell_t)_t+\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\ell_{x_{j'}})_{x_{k'}}-(a^{jk}a^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)+\Psi a^{jk}\Big]v_{x_j}v_{x_k}\\
&\ge\lambda\big(\mu_0-4c_1-c_0\big)\sum_{j,k}a^{jk}v_{x_j}v_{x_k}.
\end{aligned}\tag{10.53}
\]


Now we compute the coefficient of $v^2$. By (10.31), it is easy to obtain that
\[ A=\ell_t^2-\ell_{tt}-\sum_{j,k}\big(a^{jk}\ell_{x_j}\ell_{x_k}-(a^{jk}\ell_{x_j})_{x_k}\big)-\Psi=\lambda^2\Big[c_1^2(2t-T)^2-\sum_{j,k}a^{jk}\varphi_{x_j}\varphi_{x_k}\Big]+4c_1\lambda+c_0\lambda. \tag{10.54} \]

By the definition of B in (10.31), we see that
\[
\begin{aligned}
B&=A\Psi+(A\ell_t)_t-\sum_{j,k}(Aa^{jk}\ell_{x_j})_{x_k}+\frac12\Big(\Psi_{tt}-\sum_{j,k}(a^{jk}\Psi_{x_j})_{x_k}\Big)\\
&=2A\ell_{tt}-\lambda c_0A-\sum_{j,k}a^{jk}\ell_{x_j}A_{x_k}+A_t\ell_t-\frac12\sum_{j,k}\sum_{j',k'}\Big[a^{jk}\big((a^{j'k'}\ell_{x_{j'}})_{x_{k'}}\big)_{x_j}\Big]_{x_k}\\
&=2\lambda^3\Big[-2c_1^3(2t-T)^2+2c_1\sum_{j,k}a^{jk}\varphi_{x_j}\varphi_{x_k}\Big]-\lambda^3c_0c_1^2(2t-T)^2+\lambda^3c_0\sum_{j,k}a^{jk}\varphi_{x_j}\varphi_{x_k}\\
&\quad+\lambda^3\sum_{j,k}\sum_{j',k'}a^{jk}\varphi_{x_j}\big(a^{j'k'}\varphi_{x_{j'}}\varphi_{x_{k'}}\big)_{x_k}-4\lambda^3c_1^3(2t-T)^2+O(\lambda^2)\\
&=(4c_1+c_0)\lambda^3\sum_{j,k}a^{jk}\varphi_{x_j}\varphi_{x_k}+\lambda^3\sum_{j,k}\sum_{j',k'}a^{jk}\varphi_{x_j}\big(a^{j'k'}\varphi_{x_{j'}}\varphi_{x_{k'}}\big)_{x_k}-(8c_1^3+c_0c_1^2)\lambda^3(2t-T)^2+O(\lambda^2).
\end{aligned}\tag{10.55}
\]

Now we estimate $\sum_{j,k}\sum_{j',k'}a^{jk}\varphi_{x_j}(a^{j'k'}\varphi_{x_{j'}}\varphi_{x_{k'}})_{x_k}$. From Condition 10.1, we have
\[
\begin{aligned}
\mu_0\sum_{j,k}a^{jk}\varphi_{x_j}\varphi_{x_k}
&\le\sum_{j,k}\sum_{j',k'}\big(2a^{jk'}(a^{j'k}\varphi_{x_{j'}})_{x_{k'}}-a^{jk}_{x_{k'}}a^{j'k'}\varphi_{x_{j'}}\big)\varphi_{x_j}\varphi_{x_k}\\
&=\sum_{j,k}\sum_{j',k'}\big(2a^{jk'}a^{j'k}_{x_{k'}}\varphi_{x_{j'}}+2a^{jk'}a^{j'k}\varphi_{x_{j'}x_{k'}}-a^{jk}_{x_{k'}}a^{j'k'}\varphi_{x_{j'}}\big)\varphi_{x_j}\varphi_{x_k}\\
&=\sum_{j,k}\sum_{j',k'}\big(a^{jk}_{x_{k'}}a^{j'k'}\varphi_{x_{j'}}\varphi_{x_j}\varphi_{x_k}+a^{jk}a^{j'k'}\varphi_{x_{j'}x_k}\varphi_{x_j}\varphi_{x_{k'}}+a^{jk}a^{j'k'}\varphi_{x_{k'}x_k}\varphi_{x_j}\varphi_{x_{j'}}\big)\\
&=\sum_{j,k}\sum_{j',k'}a^{jk}\varphi_{x_j}\big(a^{j'k'}\varphi_{x_{j'}}\varphi_{x_{k'}}\big)_{x_k},
\end{aligned}\tag{10.56}
\]
where the third equality follows from suitable relabelings of the indices and the symmetry $a^{jk}=a^{kj}$.

n ∑

ajk φxj φxk + λ3 µ0

j,k=1

−(8c31

+

ajk φxj φxk

j,k=1

2c0 c21 )λ3 (2t

≥ (µ0 + 4c1 + c0 )λ3

n ∑

− T ) + O(λ )

n ∑ j,k=1

2

2

(10.57)

( T )2 ajk φxj φxk − 8c21 (4c1 +c0 ) t− λ3 + O(λ2 ). 2

Since the diffusion term in the second equation of (10.26) (with τ = 0, f = 0 and ˆf = 0) is zero, it follows that   n ∑ ( ) 2 jk E ℓt (dˆ v ) = 0, E  a ℓxj dvxk dˆ v  = 0, E (Ψ dvdˆ v ) = 0. (10.58) j,k=1

ˆ)dt − a5 vdW (t), From (10.43) and noting that dv = θℓt zdt + θdz = θ(ℓt z + z we see that, for some constant C0 > 0,   n ∑ E ℓ t ajk (dvxj )(dvxk ) j,k=1 n ( ) T ) ( ∑ jk = −2c1 λ t − E a (a5 v)xj (a5 v)xk 2 j,k=1 n ∑ 2

[ ≥ −c1 T λE (1 + |a5 | )

] ajk vxj vxk + C0 |v|2 .

j,k=1

Next, from (10.43) and (10.54), we find that

(10.59)

354

10 Exact Controllability for a Refined Stochastic Wave Equation

[ ] E Aℓt (dv)2 n [ ] ( ∑ ) = −λ3 c1 (2t − T ) c21 (2t − T )2 − ajk φxj φxk E |a5 |2 v 2

(10.60)

j,k=1

( ) −(4c1 + c0 )c1 (2t − T )λ2 E |a5 |2 v 2 . From (10.53), (10.57), (10.59) and (10.60), and noting the fourth inequality in Condition 10.2, we conclude that there is c2 > 0 such that for all (t, x) ∈ Λ2 , n n [ ∑ ] ∑ E cjk vxj vxk + Bv 2 + ℓt ajk (dvxj )(dvxk ) + Aℓt (dv)2 j,k=1

j,k=1

n { [ ] ∑ = E λ µ0 − 4c1 − c0 − c1 T (1 + |a5 |2 ) ajk vxj vxk j,k=1 n ∑

+(µ0 + 4c1 + c0 )λ3 (

+4c1 λ3 t − 2

2

T) 2

|a5 |

( T )2 2 ajk φxj φxk |v|2 − 8c21 (4c1 + c0 )λ3 t − |v| 2

j,k=1 n ∑ 2

ajk φxj φxk |v|2 − c31 λ3 (2t − T )3 |a5 |2 |v|2

j,k=1

}

+O(λ )|v| [ ] ≥ E c2 λ|∇v|2 + c2 λ3 |v|2 + O(λ2 )|v|2 . Thus, there exist λ1 > 0 and c3 > 0 such that for all λ ≥ λ1 and for every (t, x) ∈ Λ2 , n n ( ∑ ) ∑ E cjk vxj vxk + Bv 2 + ℓt ajk (dvxj )(dvxk ) + Aℓt (dv)2 (10.61) j,k=1 j,k=1 ( ) 2 3 2 ≥ E c3 λ|∇v| + c3 λ |v| . Step 3. For the boundary terms, by v|Σ = 0, we have the following equality: ∫ ∑ n n ( ) ∑ ′ ′ ′ ′ E 2ajk aj k ℓxj ′ vxj vxk′ − ajk aj k ℓxj vxj′ vxk′ ν k dΣ Σ j,k=1 j ′ ,k′ =1



n ∑

= λE

n ( ∑

2ajk aj

Σ j,k=1 j ′ ,k′ =1

= λE

∫ ∑ n n ( ∑

jk j ′ k′

2a a

Σ j,k=1j ′ ,k′ =1

= λE

∫ ( ∑ n Σ

j,k=1

ajk ν j ν k

′ ′

k

φxj′ vxj vxk′ − ajk aj

′ ′

k

) φxj vxj′ vxk′ ν k dΣ

(10.62) ∂v j ∂v k′ jk j ′ k′ ∂v j ′ ∂v k′ ) φx j ′ ν ν −a a φxj ν ν νk dΣ ∂ν ∂ν ∂ν ∂ν

n )( ∑ j ′ ,k′ =1

aj

′ ′

k

) ′ ∂v 2 φxj′ ν k dΣ. ∂ν

10.6 Observability Estimate for the Stochastic Wave Equation

355

For any τ ∈ (0, T1 ) and τ ′ ∈ (T1′ , T ), put ′

Qττ =(τ, τ ′ ) × G. ∆

(10.63)



Integrating (10.33) in Qττ , taking expectation and by (10.51), (10.52), (10.58) and (10.61), we obtain that ∫ n n ( )( ) ∑ ∑ E θ − 2ℓt vˆ + 2 ajk ℓxj vxk + Ψ v dˆ z− (ajk zxj )xk dt dx Qττ ′

i=1



(

n ∑

+λE Σ0

∫ +E

Qττ ′

ajk ν j ν k

[ d ℓt

+E

k

2

j,k=1

(

∫ (

′ ′

n ∑

ℓt

) ′ ∂v 2 φxj ′ ν k dΣ ∂ν

a vxj vxk + ℓt vˆ − 2 jk

Qττ ′

Qττ ′

aj

j ′ ,k′ =1

n ∑

Ψt ) 2 ] + Aℓt + v dx 2 ∫ ∫ ≥ c0 λE vˆ2 dxdt + E ∫

)(

j,k=1

(

+E

j,k=1 n ∑

n ∑

ajk ℓxj vxk vˆ − Ψ vˆ v

j,k=1

n ∑ Qττ ′ j,k=1

∫ cjk vxj vxk dxdt + E

Qττ ′

Bv 2 dxdt

) ajk (dvxj )(dvxk ) + Aℓt (dv)2 dx

j,k=1

− 2ℓt vˆ + 2

Q

n ∑

ajk ℓxj vxk + Ψ v

)2 dxdt.

j,k=1

(10.64) Clearly, ∫ n n [ ∑ ∑ E d ℓt ajk vxj vxk + ℓt vˆ2 − 2 ajk ℓxj vxk vˆ − Ψ vˆ v Qττ ′

j,k=1

j,k=1

(

Ψt ) 2 ] + Aℓt + v dx 2 ∫ [ ∑ n n ∑ =E ℓt ajk vxj (τ ′ )vxk (τ ′ ) + ℓt vˆ(τ ′ )2 − 2 ajk ℓxj vxk (τ ′ )ˆ v (τ ′ ) G

j,k=1

j,k=1

Ψt ) ′ 2 ] −Ψ v(τ ′ )ˆ v (τ ′ ) + Aℓt + v(τ ) dx (10.65) 2 ∫ [ ∑ n n ∑ −E ℓt ajk vxj (τ )vxk (τ ) + ℓt vˆ(τ )2 − 2 ajk ℓxj vxk (τ )ˆ v (τ ) G

j,k=1

(

j,k=1

] Ψt ) −Ψ v(τ )ˆ v (τ ) + Aℓt + v(τ )2 dx 2 ∫ [( ) ( )] ≤ Cλ3 E vˆ(τ )2 + |∇v(τ )|2 + v(τ )2 + vˆ(τ ′ )2 + |∇v(τ ′ )|2 + v(τ ′ )2 dx. G

(


By $\theta=e^\ell$ and (10.48), there is a $\lambda_1>0$ such that for any $\lambda>\lambda_1$,
\[ \lambda^3\theta(\tau)\le1,\qquad\lambda^3\theta(\tau')\le1. \tag{10.66} \]
Since $v=\theta z$ and $\hat v=\theta\hat z+\ell_tv$, it follows from (10.66) that
\[
\begin{aligned}
&\lambda^3E\int_G\big[\big(\hat v(\tau)^2+|\nabla v(\tau)|^2+v(\tau)^2\big)+\big(\hat v(\tau')^2+|\nabla v(\tau')|^2+v(\tau')^2\big)\big]dx\\
&\le E\int_G\big[\big(\hat z(\tau)^2+|\nabla z(\tau)|^2+z(\tau)^2\big)+\big(\hat z(\tau')^2+|\nabla z(\tau')|^2+z(\tau')^2\big)\big]dx.
\end{aligned}\tag{10.67}
\]

From (10.44), (10.47) and (10.63), we obtain that $\Lambda_2\subset Q_\tau^{\tau'}$. Thus,
\[ \lambda E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\hat v^2dxdt=\lambda E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\big(\theta\hat z+\ell_t\theta z\big)^2dxdt\le2\lambda E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2\hat z^2dxdt+2c_1^2\lambda^3E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}(2t-T)^2\theta^2z^2dxdt \tag{10.68} \]
and
\[
\begin{aligned}
E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\sum_{j,k}c^{jk}v_{x_j}v_{x_k}dxdt&=E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\sum_{j,k}c^{jk}\theta^2\big(\lambda\varphi_{x_j}z+z_{x_j}\big)\big(\lambda\varphi_{x_k}z+z_{x_k}\big)dxdt\\
&\le C\lambda E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2|\nabla z|^2dxdt+C\lambda^3E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2|z|^2dxdt.
\end{aligned}\tag{10.69}
\]

Furthermore, it follows from (10.55) that
\[ E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}Bv^2dxdt\le C\lambda^3E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2z^2dxdt. \tag{10.70} \]

Next, by (10.59) and (10.60), we obtain that
\[ E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\Big[\ell_t\sum_{j,k}a^{jk}(dv_{x_j})(dv_{x_k})+A\ell_t(dv)^2\Big]dx\le C\lambda E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2|\nabla z|^2dxdt+C\lambda^3E\int_{Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2|z|^2dxdt. \tag{10.71} \]



From (10.44), we deduce that $\theta\le e^{\lambda R_0^2/8}$ in $Q_\tau^{\tau'}\setminus\Lambda_2$. Consequently, there exists $\lambda_2\ge\max\{\lambda_0,\lambda_1\}$ such that for all $\lambda\ge\lambda_2$,
\[ C\lambda\max_{(x,t)\in Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2\le e^{\lambda R_0^2/3},\qquad C\lambda^3\max_{(x,t)\in Q_\tau^{\tau'}\setminus\Lambda_2}\theta^2\le e^{\lambda R_0^2/3}. \tag{10.72} \]


It follows from (10.61) and (10.68)–(10.72) that
\[
\begin{aligned}
&\lambda E\int_{Q_\tau^{\tau'}}\hat v^2dxdt+E\int_{Q_\tau^{\tau'}}\sum_{j,k}c^{jk}v_{x_j}v_{x_k}dxdt+E\int_{Q_\tau^{\tau'}}Bv^2dxdt+E\int_{Q_\tau^{\tau'}}\Big(\ell_t\sum_{j,k}a^{jk}(dv_{x_j})(dv_{x_k})+A\ell_t(dv)^2\Big)dx\\
&\ge\lambda E\int_{\Lambda_2}\hat v^2dxdt+c_2\lambda E\int_{\Lambda_2}|\nabla v|^2dxdt+c_2\lambda^3E\int_{\Lambda_2}|v|^2dxdt-e^{\lambda R_0^2/3}E\int_Q\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt.
\end{aligned}\tag{10.73}
\]

Noting that $(z,\hat z)$ solves the equation (10.26) with $\tau=0$, $f=0$ and $\hat f=0$, we deduce that
\[
\begin{aligned}
&E\int_{Q_\tau^{\tau'}}\theta\Big(-2\ell_t\hat v+2\sum_{j,k}a^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\Big(d\hat z-\sum_{j,k}(a^{jk}z_{x_j})_{x_k}dt\Big)dx\\
&=E\int_{Q_\tau^{\tau'}}\theta\Big(-2\ell_t\hat v+2\sum_{j,k}a^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)\big[-a_1\cdot\nabla z+(-\operatorname{div}a_1+a_2-a_3a_5)z\big]dxdt\\
&\le E\int_{Q_\tau^{\tau'}}\Big(-2\ell_t\hat v+2\sum_{j,k}a^{jk}\ell_{x_j}v_{x_k}+\Psi v\Big)^2dxdt+r_2E\int_{Q_\tau^{\tau'}}\theta^2\big(|\nabla z|^2+z^2\big)dxdt.
\end{aligned}\tag{10.74}
\]
Combining (10.64), (10.67), (10.73) and (10.74), we conclude that there exists $\lambda_3\ge\max\{\lambda_2,Cr_5+1\}$ such that for any $\lambda\ge\lambda_3$,
\[
\begin{aligned}
&E\int_{\Lambda_1}\theta^2\big(|\hat z|^2+|\nabla z|^2\big)dxdt+E\int_{\Lambda_1}\theta^2|z|^2dxdt\\
&\le C\Big[e^{\lambda R_0^2/3}E\int_Q\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt+e^{\lambda R_1^2}E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Sigma\\
&\qquad+E\int_G\big(\hat z(\tau)^2+|\nabla z(\tau)|^2+z(\tau)^2+\hat z(\tau')^2+|\nabla z(\tau')|^2+z(\tau')^2\big)dx\Big].
\end{aligned}\tag{10.75}
\]
Integrating (10.75) with respect to $\tau$ and $\tau'$ on $[T_2,T_1]$ and $[T_1',T_2']$, respectively, we get
\[
\begin{aligned}
&E\int_{\Lambda_1}\theta^2\big(|\hat z|^2+|\nabla z|^2\big)dxdt+E\int_{\Lambda_1}\theta^2|z|^2dxdt\\
&\le Ce^{\lambda R_0^2/3}E\int_Q\big(|\hat z|^2+|\nabla z|^2\big)dxdt+Ce^{\lambda R_0^2/3}E\int_Q|z|^2dxdt\\
&\quad+Ce^{\lambda R_1^2}E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Sigma+CE\int_Q\big(\hat z^2+|\nabla z|^2+z^2\big)dxdt.
\end{aligned}\tag{10.76}
\]

From (10.44), we find that
\[ E\int_{\Lambda_1}\theta^2\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt\ge E\int_{\Lambda_0}\theta^2\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt\ge e^{\lambda R_0^2/2}E\int_{Q_0}\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt. \tag{10.77} \]

Combining (10.76) and (10.77), we arrive at
\[
\begin{aligned}
&E\int_{Q_0}\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt\\
&\le C\Big[e^{-\lambda R_0^2/6}E\int_Q\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt+\lambda^2e^{-\lambda R_0^2/6}E\int_Q|z|^2dxdt\\
&\qquad+e^{\lambda R_1^2}E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Sigma+e^{-\lambda R_0^2/2}E\int_Q\big(\hat z^2+|\nabla z|^2+z^2\big)dxdt\Big].
\end{aligned}\tag{10.78}
\]
Applying the standard energy method to the equation (10.26) with $\tau=0$, $f=0$ and $\hat f=0$, we obtain that
\[ |(z_0,\hat z_0)|^2_{H_0^1(G)\times L^2(G)}\le Ce^{Cr_4}E\int_{Q_0}\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt \tag{10.79} \]
and that
\[ |(z_0,\hat z_0)|^2_{H_0^1(G)\times L^2(G)}\ge Ce^{-Cr_4}E\int_Q\big(|\hat z|^2+|\nabla z|^2+|z|^2\big)dxdt. \tag{10.80} \]

Now, by (10.78)–(10.80), we end up with
\[ |(z_0,\hat z_0)|^2_{H_0^1(G)\times L^2(G)}\le Ce^{Cr_4}e^{-\lambda R_0^2/6}|(z_0,\hat z_0)|^2_{H_0^1(G)\times L^2(G)}+Ce^{\lambda R_1^2}E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Sigma. \tag{10.81} \]
Let us choose $\lambda_4\ge\lambda_3$ such that $Ce^{Cr_4}e^{-\lambda_4R_0^2/6}<1$. Then, from (10.81) with $\lambda\ge\lambda_4$, we obtain the desired inequality (10.42) immediately. This completes the proof of Theorem 10.19.


10.7 Notes and Comments

There are numerous studies on the controllability and observability of deterministic hyperbolic equations (e.g., [7, 14, 63, 80, 88, 110, 123, 146, 153, 165, 199, 207, 295, 388, 395, 396] and the references cited therein). Generally speaking, there are five main methods to solve the exact controllability/observability problem for deterministic wave equations:
• The first is the method of characteristics ([199, 295]). This method applies only to wave equations in one space dimension.
• The second is based on Ingham-type inequalities ([7, 295]). This method works well for many partial differential equations on some special domains, e.g. intervals and rectangles. However, it seems very hard to apply it to equations on general domains.
• The third is the classical Rellich-type multiplier approach ([207]). Usually, it is used to treat partial differential equations with time-independent lower order terms.
• The fourth is the microlocal analysis approach ([14]). It is useful for solving controllability problems for many kinds of partial differential equations, such as wave equations, Schrödinger equations and plate equations. Further, it yields sharp sufficient conditions for the exact controllability of wave equations. However, many difficulties arise if one tries to adopt this approach to study stochastic control problems.
• The last is the global Carleman estimate ([384]). This approach has the advantage of being more flexible and of allowing variable coefficients. Further, it is robust with respect to lower order terms and can be used to obtain explicit bounds on observability constants/control costs in terms of the coefficients involved.

So far, among the five methods mentioned above, only the global Carleman estimate has been successfully extended to the study of controllability/observability problems for stochastic wave equations ([109, 224, 227, 238, 247, 386]). This chapter is based on our recent work [247].

There are many open problems related to the topic of this chapter. We list below some of them which, in our opinion, are particularly interesting.

1) Null and approximate controllability for stochastic wave equations. In Theorem 10.12, the exact controllability for (10.3) is established. As an immediate consequence, we obtain the null and approximate controllability for the same system. Nevertheless, to prove this weaker result there seems to be no reason to introduce three controls. By Theorem 10.9, only one control applied in the diffusion term is not enough. However, it seems that one boundary control should suffice for the null and approximate controllability of both (10.2) and (10.3). Unfortunately, some essential difficulties appear when we try to prove such a result by following the method


used to prove Theorem 10.12. For example, for the null controllability, we should prove the following inequality for solutions to (10.4) with $\tau=T$:
\[ |z(0)|^2_{H_0^1(G)}+|\hat z(0)|^2_{L^2(G)}\le C\,E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Gamma dt,\qquad\forall\,(z^T,\hat z^T)\in L^2_{\mathcal F_T}(\Omega;H_0^1(G)\times L^2(G)). \]
However, if we utilize the method developed in this chapter, we only obtain that, for all $(z^T,\hat z^T)\in L^2_{\mathcal F_T}(\Omega;H_0^1(G)\times L^2(G))$,
\[ |z(0)|^2_{H_0^1(G)}+|\hat z(0)|^2_{L^2(G)}\le C\Big(E\int_{\Sigma_0}\Big|\frac{\partial z}{\partial\nu}\Big|^2d\Gamma dt+E\int_0^T\big(|Z|^2_{H_0^1(G)}+|\widehat Z|^2_{L^2(G)}\big)dt\Big). \]
There are two additional terms containing $Z$ and $\widehat Z$ on the right-hand side of the above inequality. These terms come from the fact that, in the Carleman estimate, we regard $Z$ and $\widehat Z$ simply as nonhomogeneous terms rather than as part of the solution. Therefore, we believe that one should introduce some new technique (for example, a Carleman estimate in which the fact that $Z$ and $\widehat Z$ are part of the solution is used in an essential way) to get rid of the additional terms containing $Z$ and $\widehat Z$. However, we do not know how to do this at the moment.

2) Exact controllability for stochastic wave equations with sharp conditions. In Theorem 10.12, we obtain the exact controllability of (10.3) for $\Gamma_0$ given by (10.22). It is well known that a sharp sufficient condition for the exact controllability of deterministic wave equations with time-invariant coefficients is that the triple $(G,\Gamma_0,T)$ satisfies the Geometric Control Condition introduced in [14]. It would be quite interesting and challenging to extend this result to the stochastic setting, and it seems that much remains to be done before this problem can be solved. Indeed, the propagation of singularities for stochastic partial differential equations, at least for stochastic hyperbolic equations, should first be established. As far as we know, however, this topic is completely open.

3) Exact controllability for stochastic wave equations with more regular controls.
In Theorem 10.12, we show the exact controllability of (10.3) by means of a triple of controls $(f,g,h)$, where $g\in L^2_{\mathbb F}(0,T;H^{-1}(G))$ is very irregular. It is very interesting to see whether (10.3) is exactly controllable with a control $g\in L^2_{\mathbb F}(0,T;L^2(G))$. By the standard duality argument, one can show that this is equivalent to the following observability estimate:
\[ |(z^T,\hat z^T)|_{L^2_{\mathcal F_T}(\Omega;H_0^1(G)\times L^2(G))}\le C\Big(\Big|\frac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(0,T;L^2(\Gamma_0))}+|Z|_{L^2_{\mathbb F}(0,T;L^2(G))}+|\widehat Z|_{L^2_{\mathbb F}(0,T;L^2(G))}\Big),\qquad\forall\,(z^T,\hat z^T)\in L^2_{\mathcal F_T}(\Omega;H_0^1(G)\times L^2(G)), \tag{10.82} \]


where $(z,\hat z,Z,\widehat Z)$ is the solution to (10.4) with $\tau=T$. By Lemma 10.18, similarly to the proof of Theorem 10.19, one can prove that the inequality (10.82) holds if the term $|Z|_{L^2_{\mathbb F}(0,T;L^2(G))}$ is replaced by $|Z|_{L^2_{\mathbb F}(0,T;H_0^1(G))}$. However, at this moment we do not know whether (10.82) itself is true or not.

11 Exact Controllability for Stochastic Schrödinger Equations

This chapter is devoted to showing the exact controllability of stochastic Schrödinger equations by means of two controls: one is a boundary control and the other is an internal control acting everywhere in the diffusion term. Based on the duality argument, we solve this controllability problem by employing the global Carleman estimate to derive a suitable observability estimate for the dual equation.

11.1 Formulation of the Problem and the Main Result

In the previous three chapters, we studied the controllability/observability of stochastic transport, heat and wave equations. Another typical equation for which controllability problems are extensively studied in the deterministic setting is the Schrödinger equation. It is notable that many properties of the Schrödinger equation lie between those of the heat and the wave equations. In this chapter, we shall analyze the exact controllability of stochastic Schrödinger equations.

Let $T>0$, $n\in\mathbb N$, and let $G\subset\mathbb R^n$ be a given bounded domain with a $C^2$ boundary $\Gamma$. Fix any $x_0\in\mathbb R^n\setminus G$ and put
\[ \Gamma_0\overset{\Delta}{=}\big\{x\in\Gamma\ \big|\ (x-x_0)\cdot\nu(x)>0\big\},\qquad Q\overset{\Delta}{=}(0,T)\times G,\quad\Sigma\overset{\Delta}{=}(0,T)\times\Gamma,\quad\Sigma_0\overset{\Delta}{=}(0,T)\times\Gamma_0. \tag{11.1} \]
To begin with, let us recall the controllability problem for the (deterministic) Schrödinger equation:
\[
\begin{cases}
iy_t+\Delta y=a_1\cdot\nabla y+a_2y &\text{in }Q,\\
y=\chi_{\Sigma_0}u &\text{on }\Sigma,\\
y(0)=y_0 &\text{in }G,
\end{cases}\tag{11.2}
\]

© Springer Nature Switzerland AG 2021 Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_11


where the initial state y0 ∈ H −1 (G; C), the control u ∈ L2 (Σ0 ; C), and the coefficients ak (k = 1, 2) satisfy a1 ∈ L∞ (0, T ; W 2,∞ (G; Rn ) ∩ W01,∞ (G; Rn )) and a2 ∈ L∞ (0, T ; W 1,∞ (G; C)). One can show that, for any y0 ∈ H −1 (G; C) and u ∈ L2 (Σ0 ; C), (11.2) admits one and only one (transposition) solution y(·) ∈ C([0, T ]; H −1 (G; C)). The system (11.2) is called exactly controllable at time T if for any given y0 , y1 ∈ H −1 (G; C), one can find a control u ∈ L2 (Σ0 ; C) such that the corresponding solution y(·) to (11.2) satisfies y(T ) = y1 . Controllability/observability problems for deterministic Schr¨odinger equation are extensively studied (e.g., [4, 35, 18, 19, 41, 42, 187, 190, 252, 260, 280, 289, 317, 394]). It is well-known that under some assumptions on the controller Γ0 (for example, Γ0 is given in (11.1)) and the coefficients a1 and a2 , the system (11.2) is exactly controllable at time T . The main goal of this chapter is to study what happens when (11.2) is replaced by a stochastic model. Compared to the deterministic setting, there exist only very few results (in [224, 228, 229]) on the corresponding controllability problems for the stochastic Schr¨odinger equations. ∆ As in the previous three chapters, let (Ω, F , F, P) (with F ={Ft }t∈[0,T ] ) be a filtered probability space on which a one dimensional Brownian motion {W (t)}t∈[0,T ] is defined, and F be the natural filtration generated by W (·). Denote by F the progressive σ-field (in [0, T ] × Ω) w.r.t. F. Let us consider the following controlled stochastic Schr¨odinger equation:  ( ) ( ) idy + ∆ydt = a1 · ∇y + a2 y + a4 v dt + a3 y + v dW (t) in Q,    χ Σ0 y = u on Σ, (11.3)    y(0) = y0 in G. 
Here, y_0 ∈ H^{-1}(G;C), y is the state variable, both u ∈ L^2_𝔽(0,T; L^2(Γ_0;C)) and v ∈ L^2_𝔽(0,T; H^{-1}(G;C)) are the control variables, and the coefficients a_k (k = 1, 2, 3, 4) satisfy a_1 ∈ L^∞_𝔽(0,T; W^{2,∞}(G;R^n) ∩ W_0^{1,∞}(G;R^n)), a_2, a_3 ∈ L^∞_𝔽(0,T; W^{1,∞}(G;C)) and a_4 ∈ L^∞_𝔽(0,T; W_0^{1,∞}(G;C)). It is easy to see that the control system (11.3) is a generalization of the system (5.33). As we shall see in Section 11.2, the system (11.3) admits one and only one transposition solution y ∈ C_𝔽([0,T]; L^2(Ω; H^{-1}(G;C))).

Remark 11.1. Similarly to Remark 8.2, we introduce two controls into the system (11.3). Moreover, the control v acts on the whole domain, and it also affects the system through the drift term (in the form a_4 v). Compared with the deterministic Schrödinger equation, it may seem that we use too many controls. However, similarly to Proposition 7.6, one can show that two controls are really necessary for the exact controllability problem for (11.3) formulated below.

We now introduce the following notion of exact controllability for (11.3).


Definition 11.2. The system (11.3) is called exactly controllable at time T if for any y_0 ∈ H^{-1}(G;C) and y_1 ∈ L^2_{F_T}(Ω; H^{-1}(G;C)), one can find a pair of controls (u,v) ∈ L^2_𝔽(0,T; L^2(Γ_0;C)) × L^2_𝔽(0,T; H^{-1}(G;C)) such that the corresponding (transposition) solution y to (11.3) satisfies y(T) = y_1, a.s.

The main result in this chapter is stated as follows:

Theorem 11.3. The system (11.3) is exactly controllable at (any given) time T > 0.

One may follow the idea in Section 7.5 of Chapter 7 (or, more precisely, that in Chapter 10 for the exact controllability of stochastic wave equations) to prove Theorem 11.3 in the following two steps:
1. Reduce the exact controllability problem for the system (11.3) to another exact controllability problem for a suitable backward stochastic Schrödinger equation;
2. Solve the exact controllability problem for that backward stochastic Schrödinger equation by establishing a suitable observability estimate for some forward stochastic Schrödinger equation.

Nevertheless, in this chapter we do not proceed in this way. Instead, in order to prove Theorem 11.3, we shall derive a key observability estimate for a suitable backward stochastic Schrödinger equation. This will be done in Theorem 11.14 by means of a global Carleman estimate.
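The reduction just mentioned rests on the duality (range inclusion) argument in the spirit of Section 1.3. Schematically, and with the precise spaces as in Definition 11.2, the target is an observability inequality for the dual equation (a sketch of the logic only, not a new result):

```latex
% Duality sketch: exact controllability of (11.3) at time T is equivalent
% to an observability estimate for the dual equation (11.4) with \tau = T,
% of the form
\[
  |z_T|_{L^2_{\mathcal F_T}(\Omega; H^1_0(G;\mathbb C))}
  \;\le\; C\Big( |a_4 z + Z|_{L^2_{\mathbb F}(0,T; H^1_0(G;\mathbb C))}
        + \Big|\tfrac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(0,T; L^2(\Gamma_0;\mathbb C))} \Big).
\]
% The right-hand side contains exactly the quantities that pair with the two
% controls (u, v) in the transposition identity; this is the estimate
% established below as (11.56) in Theorem 11.14.
```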

11.2 Well-Posedness of the Control System

The control system (11.3) is a nonhomogeneous boundary value problem, and hence the notions of solutions introduced in Section 3.2 cannot be applied to it. Instead, as in the last chapter, we need to employ the notion of transposition solution presented in Section 7.2. For this, we first introduce the dual equation of (11.3). For any τ ∈ (0,T], let us consider the following backward stochastic Schrödinger equation:
\[
\left\{
\begin{aligned}
& i\,dz + \Delta z\,dt = \big(b_1\cdot\nabla z + b_2 z + b_3 Z\big)dt + Z\,dW(t) && \text{in } (0,\tau)\times G,\\
& z = 0 && \text{on } (0,\tau)\times\Gamma,\\
& z(\tau) = z_\tau && \text{in } G,
\end{aligned}
\right.
\tag{11.4}
\]
where z_τ ∈ L^2_{F_τ}(Ω; H^1_0(G;C)), and the coefficients b_j (j = 1, 2, 3) satisfy b_1 ∈ L^∞_𝔽(0,T; W_0^{1,∞}(G;R^n)) and b_2, b_3 ∈ L^∞_𝔽(0,T; W^{1,∞}(G;C)).

Remark 11.4. The regularity of the coefficients can be suitably relaxed. We do not do this here because we hope to present the key idea in a simple way.


Similarly to the proofs of Theorems 4.8 and 4.10, by means of the usual energy estimate and noting that the coefficient b_1 is R^n-valued, it is easy to show the following well-posedness result for (11.4) (hence we do not prove it here).

Lemma 11.5. For any z_τ ∈ L^2_{F_τ}(Ω; H^1_0(G;C)), the equation (11.4) admits a unique weak solution (z,Z) ∈ L^2_𝔽(Ω; C([0,τ]; H^1_0(G;C))) × L^2_𝔽(0,τ; H^1_0(G;C)). Moreover,
\[
|z|_{L^2_{\mathbb F}(\Omega;C([0,\tau];H^1_0(G;\mathbb C)))} + |Z|_{L^2_{\mathbb F}(0,\tau;H^1_0(G;\mathbb C))}
\le e^{C r_1}\,|z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;H^1_0(G;\mathbb C))},
\tag{11.5}
\]
where
\[
r_1 \,\stackrel{\Delta}{=}\, |b_1|^2_{L^\infty_{\mathbb F}(0,T;W^{1,\infty}_0(G;\mathbb R^n))}
+ \sum_{j=2}^{3} |b_j|^2_{L^\infty_{\mathbb F}(0,T;W^{1,\infty}(G;\mathbb C))} + 1.
\]

Next, for the sake of completeness, we give an energy estimate for the equation (11.4).

Proposition 11.6. For all solutions to the equation (11.4), it holds that
\[
\mathbb E|z(t)|^2_{H^1_0(G;\mathbb C)}
\le e^{C\sqrt{r_1}}\Big( \mathbb E|z(s)|^2_{H^1_0(G;\mathbb C)} + |Z|^2_{L^2_{\mathbb F}(s,t;H^1_0(G;\mathbb C))} \Big),
\tag{11.6}
\]

for any 0 ≤ s ≤ t ≤ τ.

Proof: By Itô's formula, we have
\[
\begin{aligned}
& \mathbb E|z(t)|^2_{L^2(G;\mathbb C)} - \mathbb E|z(s)|^2_{L^2(G;\mathbb C)}
= \mathbb E\int_s^t\!\!\int_G \big( z\,d\bar z + \bar z\,dz + |Z|^2 d\sigma \big)dx\\
&= \mathbb E\int_s^t\!\!\int_G \Big\{ 2\,\mathrm{Re}\big[ i\bar z\big( \Delta z - b_1\cdot\nabla z - b_2 z - b_3 Z \big) \big] + |Z|^2 \Big\}dx\,d\sigma\\
&= \mathbb E\int_s^t\!\!\int_G \Big\{ 2\,\mathrm{Im}\big[ \mathrm{div}(\bar z\nabla z) - |\nabla z|^2 - \mathrm{div}(|z|^2 b_1) + (\mathrm{div}\,b_1)|z|^2 - b_2|z|^2 - i b_3\bar z Z \big] + |Z|^2 \Big\}dx\,d\sigma\\
&\le \mathbb E\int_s^t \Big[ \big( 2|b_1|_{W^{1,\infty}(G;\mathbb R^n)} + |b_2|_{L^\infty(G;\mathbb C)} + |b_3|_{L^\infty(G;\mathbb C)} + 1 \big)|z|^2_{L^2(G;\mathbb C)} + |Z|^2_{L^2(G;\mathbb C)} \Big]d\sigma
\end{aligned}
\tag{11.7}
\]
and
\[
\begin{aligned}
& \mathbb E|\nabla z(t)|^2_{L^2(G;\mathbb C)} - \mathbb E|\nabla z(s)|^2_{L^2(G;\mathbb C)}
= \mathbb E\int_s^t\!\!\int_G \big( \nabla z\cdot d\nabla\bar z + \nabla\bar z\cdot d\nabla z + |\nabla Z|^2 d\sigma \big)dx\\
&= \mathbb E\int_s^t\!\!\int_G \big[ \mathrm{div}(\nabla z\,d\bar z) - \Delta z\,d\bar z + \mathrm{div}(\nabla\bar z\,dz) - \Delta\bar z\,dz + |\nabla Z|^2 d\sigma \big]dx\\
&= \mathbb E\int_s^t\!\!\int_G \Big\{ 2\,\mathrm{Re}\big[ i\Delta z\big( \Delta\bar z - b_1\cdot\nabla\bar z - \bar b_2\bar z - \bar b_3\bar Z \big) \big] + |\nabla Z|^2 \Big\}dx\,d\sigma\\
&\le 2\,\mathbb E\int_s^t \Big[ \big( |b_1|_{W^{1,\infty}(G;\mathbb R^n)} + |b_3|_{W^{1,\infty}(G;\mathbb C)} + 1 \big)|\nabla z|^2_{L^2(G;\mathbb C)}\\
&\qquad\qquad + \big( |b_2|_{W^{1,\infty}(G;\mathbb C)} + |b_3|_{W^{1,\infty}(G;\mathbb C)} + 1 \big)|z|^2_{L^2(G;\mathbb C)} + |Z|^2_{H^1_0(G;\mathbb C)} \Big]d\sigma.
\end{aligned}
\tag{11.8}
\]
It follows from (11.7) and (11.8) that
\[
\mathbb E|z(t)|^2_{H^1_0(G;\mathbb C)} - \mathbb E|z(s)|^2_{H^1_0(G;\mathbb C)}
\le 4\sqrt{r_1}\,\mathbb E\int_s^t |z(\sigma)|^2_{H^1_0(G;\mathbb C)}\,d\sigma + 4\,\mathbb E\int_s^t |Z(\sigma)|^2_{H^1_0(G;\mathbb C)}\,d\sigma.
\]
This, together with Gronwall's inequality, implies that
\[
\mathbb E|z(t)|^2_{H^1_0(G;\mathbb C)}
\le e^{C\sqrt{r_1}}\Big( \mathbb E|z(s)|^2_{H^1_0(G;\mathbb C)} + \mathbb E\int_s^t |Z(\sigma)|^2_{H^1_0(G;\mathbb C)}\,d\sigma \Big),
\]
which gives the desired inequality (11.6) immediately. This completes the proof of Proposition 11.6.

Remark 11.7. The proof of Proposition 11.6 is almost standard. Indeed, if we regard z (in (11.4)) as a solution to a forward stochastic Schrödinger equation with a nonhomogeneous term Z, then an estimate such as (11.6) follows from a standard energy estimate. Nevertheless, we give a proof here to exhibit how the constant in (11.6) depends on r_1.

Further, similarly to Proposition 8.6 in Chapter 8 and Proposition 10.6 in Chapter 10, we need the following hidden regularity result for solutions to (11.4) (note that this sort of regularity result does NOT follow from the trace theorem for Sobolev spaces).

Proposition 11.8. For any τ ∈ (0,T] and z_τ ∈ L^2_{F_τ}(Ω; H^1_0(G;C)), the unique weak solution (z,Z) ∈ L^2_𝔽(Ω; C([0,τ]; H^1_0(G;C))) × L^2_𝔽(0,τ; H^1_0(G;C)) to (11.4) satisfies ∂z/∂ν|_Γ ∈ L^2_𝔽(0,τ; L^2(Γ;C)). Furthermore,
\[
\Big|\frac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(0,\tau;L^2(\Gamma;\mathbb C))}
\le e^{C r_1}\,|z_\tau|_{L^2_{\mathcal F_\tau}(\Omega;H^1_0(G;\mathbb C))},
\tag{11.9}
\]
where the constant C is independent of τ.
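Before turning to the proof of Proposition 11.8, recall the elementary geometric fact behind the boundary computation used there (standard, since z vanishes on Γ): the tangential derivatives of z vanish on the boundary, so only the normal derivative survives.

```latex
% On \Gamma one has z = 0, hence \nabla z is parallel to the unit outward
% normal \nu:
\[
  \nabla z\big|_\Gamma = \frac{\partial z}{\partial\nu}\,\nu,
  \qquad
  (h\cdot\nabla z)\big|_\Gamma = \frac{\partial z}{\partial\nu}\,(h\cdot\nu)
  = \frac{\partial z}{\partial\nu} \quad\text{when } h = \nu \text{ on } \Gamma.
\]
% This is why the multiplier h\cdot\nabla\bar z produces exactly
% |\partial z/\partial\nu|^2 on the boundary: the "hidden regularity"
% mechanism of Proposition 11.8.
```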


Proof: Since Γ is C^2, by Proposition 10.5 one can find a vector field h = (h^1, …, h^n) ∈ C^1(Ḡ;R^n) such that h = ν on Γ. By a direct computation, we obtain that
\[
\begin{aligned}
& \sum_{j,k=1}^n h^k \bar z_{x_k} z_{x_j x_j} + \sum_{j,k=1}^n h^k z_{x_k} \bar z_{x_j x_j}\\
&= \sum_{j,k=1}^n \Big[ \big( h^k \bar z_{x_k} z_{x_j} \big)_{x_j} + \big( h^k z_{x_k} \bar z_{x_j} \big)_{x_j} + h^k_{x_k}|z_{x_j}|^2
 - \big( h^k |z_{x_j}|^2 \big)_{x_k} - 2 h^k_{x_j} \bar z_{x_k} z_{x_j} \Big]
\end{aligned}
\tag{11.10}
\]
and
\[
\begin{aligned}
i\sum_{k=1}^n \big( h^k \bar z_{x_k}\,dz - h^k z_{x_k}\,d\bar z \big)
&= i\sum_{k=1}^n \Big[ d\big( h^k \bar z_{x_k} z \big) - h^k z\,d\bar z_{x_k} - h^k d\bar z_{x_k}\,dz
 - \big( h^k z\,d\bar z \big)_{x_k} + h^k z\,d\bar z_{x_k} + h^k_{x_k} z\,d\bar z \Big]\\
&= i\sum_{k=1}^n \Big[ d\big( h^k \bar z_{x_k} z \big) - h^k d\bar z_{x_k}\,dz
 - \big( h^k z\,d\bar z \big)_{x_k} + h^k_{x_k} z\,d\bar z \Big].
\end{aligned}
\tag{11.11}
\]
Combining (11.10) and (11.11), we have
\[
\begin{aligned}
& h\cdot\nabla\bar z\,\big( i\,dz + \Delta z\,dt \big) + h\cdot\nabla z\,\big( -i\,d\bar z + \Delta\bar z\,dt \big)\\
&= \nabla\cdot\big[ (h\cdot\nabla\bar z)\nabla z + (h\cdot\nabla z)\nabla\bar z - i(z\,d\bar z)h - |\nabla z|^2 h \big]dt
 + d\big( i\,h\cdot\nabla\bar z\,z \big)\\
&\quad - 2\sum_{j,k=1}^n h^k_{x_j} z_{x_j}\bar z_{x_k}\,dt + (\nabla\cdot h)|\nabla z|^2 dt
 + i(\nabla\cdot h)\,z\,d\bar z - i\,(h\cdot\nabla d\bar z)\,dz.
\end{aligned}
\tag{11.12}
\]
Finally, integrating (11.12) over (0,τ) × G and then taking expectation on both sides, noting that z = 0 on (0,τ) × Γ (so that ∇z = (∂z/∂ν)ν on Γ) and hence
\[
\begin{aligned}
& \mathbb E\int_0^\tau\!\!\int_G \nabla\cdot\big[ (h\cdot\nabla\bar z)\nabla z + (h\cdot\nabla z)\nabla\bar z - i(z\,d\bar z)h - |\nabla z|^2 h \big]dx\,dt\\
&= \mathbb E\int_0^\tau\!\!\int_\Gamma \Big[ \frac{\partial\bar z}{\partial\nu}\frac{\partial z}{\partial\nu}
 + \frac{\partial\bar z}{\partial\nu}\frac{\partial z}{\partial\nu} - \Big|\frac{\partial z}{\partial\nu}\Big|^2 \Big]d\Gamma\,dt
= \mathbb E\int_0^\tau\!\!\int_\Gamma \Big|\frac{\partial z}{\partial\nu}\Big|^2 d\Gamma\,dt,
\end{aligned}
\]
and using Lemma 11.5, we obtain (11.9).

With the aid of Lemma 11.5 and Proposition 11.8, proceeding as in Definition 7.13, we may define transposition solutions to (11.3) as follows:


Definition 11.9. A process y ∈ C_𝔽([0,T]; L^2(Ω; H^{-1}(G;C))) is called a transposition solution to the system (11.3) if for every τ ∈ [0,T] and z_τ ∈ L^2_{F_τ}(Ω; H^1_0(G;C)), it holds that
\[
\begin{aligned}
& \mathbb E\langle y(\tau,\cdot), z_\tau(\cdot)\rangle_{H^{-1}(G;\mathbb C),H^1_0(G;\mathbb C)}
 - \langle y_0(\cdot), z(0,\cdot)\rangle_{H^{-1}(G;\mathbb C),H^1_0(G;\mathbb C)}\\
&= \mathbb E\int_0^\tau\!\!\int_{\Gamma_0} u\,\frac{\partial z}{\partial\nu}\,d\Gamma\,ds
 + \mathbb E\int_0^\tau \langle v,\, a_4 z + Z\rangle_{H^{-1}(G;\mathbb C),H^1_0(G;\mathbb C)}\,ds.
\end{aligned}
\tag{11.13}
\]
Here (z,Z) solves (11.4) with
\[
b_1 = a_1, \qquad b_2 = \mathrm{div}\,a_1 - a_2, \qquad b_3 = -a_3.
\]

Now, similarly to the proof of Theorem 7.12, one can show the following well-posedness result for (11.3) (hence we omit the proof).

Proposition 11.10. For each y_0 ∈ H^{-1}(G;C), the system (11.3) admits a unique transposition solution y ∈ C_𝔽([0,T]; L^2(Ω; H^{-1}(G;C))). Moreover,
\[
|y|_{C_{\mathbb F}([0,T];L^2(\Omega;H^{-1}(G;\mathbb C)))}
\le e^{C r_2}\Big( |y_0|_{H^{-1}(G;\mathbb C)} + |u|_{L^2_{\mathbb F}(0,T;L^2(\Gamma_0;\mathbb C))} + |v|_{L^2_{\mathbb F}(0,T;H^{-1}(G;\mathbb C))} \Big).
\tag{11.14}
\]
Here
\[
r_2 \,\stackrel{\Delta}{=}\, |a_1|^2_{L^\infty_{\mathbb F}(0,T;W^{2,\infty}(G;\mathbb R^n))}
+ \sum_{k=2}^{4} |a_k|^2_{L^\infty_{\mathbb F}(0,T;W^{1,\infty}(G;\mathbb C))} + 1.
\]

Proceeding exactly as in the proof of Theorem 7.24, one can show that the exact controllability of (11.3) can be reduced to an observability estimate for its dual equation, that is, the equation (11.4) with τ = T in the present case. The main task in the rest of this chapter is to derive such an estimate (see the inequality (11.56) in Theorem 11.14), via which the desired exact controllability result (for (11.3)) in Theorem 11.3 follows.
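The specific choice b_2 = div a_1 − a_2, b_3 = −a_3 in Definition 11.9 can be motivated by a formal computation (a sketch assuming enough regularity to apply Itô's formula directly to the duality pairing; the rigorous route is the transposition argument above):

```latex
% Formal derivation of (11.13): apply It\^o's formula to
% t \mapsto \langle y(t), z(t)\rangle_{H^{-1}(G;\mathbb C), H^1_0(G;\mathbb C)},
% using (11.3) for y and (11.4) for z. Integrating by parts in x, the choice
%   b_1 = a_1, \quad b_2 = \operatorname{div} a_1 - a_2, \quad b_3 = -a_3
% makes all interior terms containing y cancel; the quadratic covariation of
% the two diffusion terms contributes the pairing of v with Z. What survives is
\[
  \mathbb E\langle y(\tau), z_\tau\rangle - \langle y_0, z(0)\rangle
  = \mathbb E\int_0^\tau\!\!\int_{\Gamma_0} u\,\frac{\partial z}{\partial\nu}\,d\Gamma\,ds
  + \mathbb E\int_0^\tau \langle v,\, a_4 z + Z\rangle\,ds,
\]
% i.e., exactly the identity (11.13): only the controls u and v appear on the
% right-hand side, which is what makes (11.4) the adjoint of (11.3).
```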

11.3 A Fundamental Identity for Stochastic Schrödinger-Like Operators

In this section, we shall establish a fundamental identity for a stochastic Schrödinger-like operator, which is similar in spirit to the identities (1.48), (8.19), (9.26) and (10.18). Besides its key role in proving observability estimates for backward stochastic Schrödinger equations, it can also be applied to the study of observability and unique continuation problems for forward stochastic Schrödinger equations.

Let β(t,x) ∈ C^2(R^{1+n}). For j, k = 1, 2, …, n, let b^{jk}(t,x) ∈ C^{1,2}(R^{1+n}) satisfy


\[
b^{jk} = b^{kj}.
\tag{11.15}
\]
Define a (formal) second order stochastic Schrödinger-like operator P as
\[
\mathcal P z \,\stackrel{\Delta}{=}\, i\beta(t,x)\,dz + \sum_{j,k=1}^n \big( b^{jk}(t,x)\,z_{x_j} \big)_{x_k} dt.
\tag{11.16}
\]

We have the following result:

Theorem 11.11. Let ℓ, Ψ ∈ C^2(R^{1+n}). Assume that u is an H^2(R^n;C)-valued Itô process. Put θ = e^ℓ and v = θu. Then for a.e. x ∈ R^n, it holds that
\[
\begin{aligned}
& \theta\big( \mathcal P u\,\overline{I_1} + \overline{\mathcal P u}\,I_1 \big) + dM + \mathrm{div}\,V\\
&= 2|I_1|^2 dt + \sum_{j,k=1}^n c^{jk}\big( v_{x_j}\bar v_{x_k} + \bar v_{x_j} v_{x_k} \big)dt + D|v|^2 dt\\
&\quad + i\sum_{j,k=1}^n \Big[ \big( \beta b^{jk}\ell_{x_j} \big)_t + \big( b^{jk}\beta\ell_t \big)_{x_j} \Big]\big( \bar v_{x_k} v - v_{x_k}\bar v \big)dt\\
&\quad + i\sum_{j,k=1}^n \Big[ \beta\Psi + \big( \beta b^{jk}\ell_{x_j} \big)_{x_k} \Big]\big( \bar v\,dv - v\,d\bar v \big)\\
&\quad + \beta^2\ell_t\,|dv|^2 + i\sum_{j,k=1}^n \beta b^{jk}\ell_{x_j}\big( dv\,d\bar v_{x_k} - dv_{x_k}\,d\bar v \big), \qquad \text{a.s.},
\end{aligned}
\tag{11.17}
\]
where
\[
\left\{
\begin{aligned}
I_1 &\,\stackrel{\Delta}{=}\, -i\beta\ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_{x_j} v_{x_k} + \Psi v,\\
A &\,\stackrel{\Delta}{=}\, \sum_{j,k=1}^n b^{jk}\ell_{x_j}\ell_{x_k} - \sum_{j,k=1}^n \big( b^{jk}\ell_{x_j} \big)_{x_k} - \Psi,\\
M &\,\stackrel{\Delta}{=}\, \beta^2\ell_t|v|^2 + i\beta\sum_{j,k=1}^n b^{jk}\ell_{x_j}\big( \bar v_{x_k} v - v_{x_k}\bar v \big),
\end{aligned}
\right.
\tag{11.18}
\]
\[
\left\{
\begin{aligned}
V &\,\stackrel{\Delta}{=}\, \big( V^1, \cdots, V^k, \cdots, V^n \big),\\
V^k &\,\stackrel{\Delta}{=}\, -i\beta\sum_{j=1}^n \Big[ b^{jk}\ell_{x_j}\big( v\,d\bar v - \bar v\,dv \big) + b^{jk}\ell_t\big( v_{x_j}\bar v - \bar v_{x_j} v \big)dt \Big]\\
&\qquad - \Psi\sum_{j=1}^n b^{jk}\big( v_{x_j}\bar v + \bar v_{x_j} v \big)dt + \sum_{j=1}^n b^{jk}\big( 2A\ell_{x_j} + \Psi_{x_j} \big)|v|^2 dt\\
&\qquad + \sum_{j,j',k'=1}^n \big( 2 b^{jk'} b^{j'k} - b^{jk} b^{j'k'} \big)\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big)dt,
\end{aligned}
\right.
\tag{11.19}
\]
and
\[
\left\{
\begin{aligned}
c^{jk} &\,\stackrel{\Delta}{=}\, \sum_{j',k'=1}^n \Big[ 2\big( b^{j'k}\ell_{x_{j'}} \big)_{x_{k'}} b^{jk'} - \big( b^{jk} b^{j'k'}\ell_{x_{j'}} \big)_{x_{k'}} \Big] - b^{jk}\Psi,\\
D &\,\stackrel{\Delta}{=}\, \big( \beta^2\ell_t \big)_t + \sum_{j,k=1}^n \big( b^{jk}\Psi_{x_j} \big)_{x_k}
 + 2\Big[ \sum_{j,k=1}^n \big( b^{jk}\ell_{x_j} A \big)_{x_k} + A\Psi \Big].
\end{aligned}
\right.
\tag{11.20}
\]

Proof: The proof is divided into three steps.

Step 1. By θ = e^ℓ and v = θu, a straightforward computation shows that
\[
\begin{aligned}
\theta\,\mathcal P u &= i\beta\,dv - i\beta\ell_t v\,dt + \sum_{j,k=1}^n \big( b^{jk} v_{x_j} \big)_{x_k} dt
 + \sum_{j,k=1}^n b^{jk}\ell_{x_j}\ell_{x_k} v\,dt\\
&\quad - 2\sum_{j,k=1}^n b^{jk}\ell_{x_j} v_{x_k}\,dt - \sum_{j,k=1}^n \big( b^{jk}\ell_{x_j} \big)_{x_k} v\,dt\\
&= I_1\,dt + I_2,
\end{aligned}
\tag{11.21}
\]
where I_1 is given in (11.18) and
\[
I_2 = i\beta\,dv + \sum_{j,k=1}^n \big( b^{jk} v_{x_j} \big)_{x_k} dt + A v\,dt.
\tag{11.22}
\]
Hence,
\[
\theta\big( \mathcal P u\,\overline{I_1} + \overline{\mathcal P u}\,I_1 \big) = 2|I_1|^2 dt + \big( I_1\overline{I_2} + \overline{I_1} I_2 \big).
\tag{11.23}
\]

Step 2. In this step, we compute I_1 \overline{I_2} + \overline{I_1} I_2. Denote the three terms in I_1 and I_2 by I_{1j} and I_{2j} (j = 1, 2, 3), respectively. Then,
\[
\begin{aligned}
I_{11}\overline{I_{21}} + \overline{I_{11}}\,I_{21}
&= -i\beta\ell_t v\,\overline{(i\beta\,dv)} + i\beta\,dv\,\overline{(-i\beta\ell_t v)}\\
&= -d\big( \beta^2\ell_t|v|^2 \big) + \big( \beta^2\ell_t \big)_t|v|^2 dt + \beta^2\ell_t|dv|^2.
\end{aligned}
\tag{11.24}
\]
Noting that
\[
\left\{
\begin{aligned}
2v\,d\bar v &= d(|v|^2) - \big( \bar v\,dv - v\,d\bar v \big) - |dv|^2,\\
2v\,\bar v_{x_k} &= (|v|^2)_{x_k} - \big( \bar v v_{x_k} - v\bar v_{x_k} \big),
\end{aligned}
\right.
\tag{11.25}
\]
we find first

\[
\begin{aligned}
& 2i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j}\,v\,d\bar v \big)_{x_k}
= i\sum_{j,k=1}^n \Big\{ \beta b^{jk}\ell_{x_j}\big[ d(|v|^2) - (\bar v\,dv - v\,d\bar v) - |dv|^2 \big] \Big\}_{x_k}\\
&= i\sum_{j,k=1}^n \Big\{ \big( \beta b^{jk}\ell_{x_j} \big)_{x_k} d(|v|^2) + \beta b^{jk}\ell_{x_j}\big[ d(|v|^2) \big]_{x_k}
 - \big[ \beta b^{jk}\ell_{x_j}\big( \bar v\,dv - v\,d\bar v \big) \big]_{x_k}\\
&\qquad\quad - \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}|dv|^2 - \beta b^{jk}\ell_{x_j}\,dv_{x_k}\,d\bar v - \beta b^{jk}\ell_{x_j}\,dv\,d\bar v_{x_k} \Big\},
\end{aligned}
\tag{11.26}
\]
next
\[
\begin{aligned}
& -2i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}\,v\,d\bar v
= -i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}\big[ d(|v|^2) - (\bar v\,dv - v\,d\bar v) - |dv|^2 \big]\\
&= -i\sum_{j,k=1}^n \Big[ \big( \beta b^{jk}\ell_{x_j} \big)_{x_k} d(|v|^2)
 - \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}\big( \bar v\,dv - v\,d\bar v \big)
 - \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}|dv|^2 \Big],
\end{aligned}
\tag{11.27}
\]
then
\[
\begin{aligned}
& -2i\sum_{j,k=1}^n d\big( \beta b^{jk}\ell_{x_j}\,v\,\bar v_{x_k} \big)
= -i\sum_{j,k=1}^n d\Big\{ \beta b^{jk}\ell_{x_j}\big[ (|v|^2)_{x_k} - (\bar v v_{x_k} - v\bar v_{x_k}) \big] \Big\}\\
&= -i\sum_{j,k=1}^n \Big\{ \big( \beta b^{jk}\ell_{x_j} \big)_t (|v|^2)_{x_k}\,dt + \beta b^{jk}\ell_{x_j}\,d\big[ (|v|^2)_{x_k} \big]
 - d\big[ \beta b^{jk}\ell_{x_j}\big( \bar v v_{x_k} - v\bar v_{x_k} \big) \big] \Big\},
\end{aligned}
\tag{11.28}
\]
and
\[
\begin{aligned}
2i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j} \big)_t\,v\,\bar v_{x_k}\,dt
&= i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j} \big)_t\big[ (|v|^2)_{x_k} - (\bar v v_{x_k} - v\bar v_{x_k}) \big]dt\\
&= i\sum_{j,k=1}^n \Big[ \big( \beta b^{jk}\ell_{x_j} \big)_t (|v|^2)_{x_k}\,dt
 - \big( \beta b^{jk}\ell_{x_j} \big)_t\big( \bar v v_{x_k} - v\bar v_{x_k} \big)dt \Big].
\end{aligned}
\tag{11.29}
\]


It follows from (11.26)–(11.29) that
\[
\begin{aligned}
& \big( I_{12} + I_{13} \big)\overline{I_{21}} + \overline{\big( I_{12} + I_{13} \big)}\,I_{21}\\
&= \Big( -2\sum_{j,k=1}^n b^{jk}\ell_{x_j} v_{x_k} + \Psi v \Big)\overline{(i\beta\,dv)}
 + i\beta\,dv\,\overline{\Big( -2\sum_{j,k=1}^n b^{jk}\ell_{x_j} v_{x_k} + \Psi v \Big)}\\
&= 2i\sum_{j,k=1}^n \beta b^{jk}\ell_{x_j}\big( v_{x_k}\,d\bar v - \bar v_{x_k}\,dv \big) + i\beta\Psi\big( \bar v\,dv - v\,d\bar v \big)\\
&= 2i\sum_{j,k=1}^n \Big[ \big( \beta b^{jk}\ell_{x_j}\,v\,d\bar v \big)_{x_k} - \big( \beta b^{jk}\ell_{x_j} \big)_{x_k}\,v\,d\bar v - \beta b^{jk}\ell_{x_j}\,v\,d\bar v_{x_k} \Big]\\
&\quad - 2i\sum_{j,k=1}^n \Big[ d\big( \beta b^{jk}\ell_{x_j}\,v\,\bar v_{x_k} \big) - \big( \beta b^{jk}\ell_{x_j} \big)_t\,v\,\bar v_{x_k}\,dt
 - \beta b^{jk}\ell_{x_j}\,v\,d\bar v_{x_k} - \beta b^{jk}\ell_{x_j}\,d\bar v_{x_k}\,dv \Big]\\
&\quad + i\beta\Psi\big( \bar v\,dv - v\,d\bar v \big)\\
&= -i\sum_{j,k=1}^n \Big[ \beta b^{jk}\ell_{x_j}\big( \bar v\,dv - v\,d\bar v \big) \Big]_{x_k}
 + i\sum_{j,k=1}^n d\Big[ \beta b^{jk}\ell_{x_j}\big( \bar v\,v_{x_k} - v\,\bar v_{x_k} \big) \Big]\\
&\quad - i\sum_{j,k=1}^n \big( \beta b^{jk}\ell_{x_j} \big)_t\big( \bar v\,v_{x_k} - v\,\bar v_{x_k} \big)dt
 + i\sum_{j,k=1}^n \Big[ \beta\Psi + \big( \beta b^{jk}\ell_{x_j} \big)_{x_k} \Big]\big( \bar v\,dv - v\,d\bar v \big)\\
&\quad + i\sum_{j,k=1}^n \beta b^{jk}\ell_{x_j}\big( dv\,d\bar v_{x_k} - dv_{x_k}\,d\bar v \big).
\end{aligned}
\tag{11.30}
\]

Noting (11.15), we have that
\[
\begin{aligned}
I_{11}\overline{I_{22}} + \overline{I_{11}}\,I_{22}
&= -i\beta\ell_t v\sum_{j,k=1}^n \big( b^{jk}\bar v_{x_j} \big)_{x_k} dt
 + \sum_{j,k=1}^n \big( b^{jk} v_{x_j} \big)_{x_k}\,\overline{\big( -i\beta\ell_t v \big)}\,dt\\
&= \sum_{j,k=1}^n \Big[ i\beta b^{jk}\ell_t\big( v_{x_j}\bar v - \bar v_{x_j} v \big) \Big]_{x_k} dt
 + i\sum_{j,k=1}^n b^{jk}\big( \beta\ell_t \big)_{x_j}\big( \bar v_{x_k} v - v_{x_k}\bar v \big)dt.
\end{aligned}
\tag{11.31}
\]
Utilizing (11.15) again, we find that
\[
\sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_k x_{k'}} + \bar v_{x_{j'}} v_{x_k x_{k'}} \big)
= \sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'} x_k}\bar v_{x_{k'}} + \bar v_{x_{j'} x_k} v_{x_{k'}} \big).
\]
Hence,

\[
\begin{aligned}
& 2\sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_k x_{k'}} + \bar v_{x_{j'}} v_{x_k x_{k'}} \big)dt\\
&= \sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_k x_{k'}} + \bar v_{x_{j'}} v_{x_k x_{k'}} \big)dt
 + \sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'} x_k}\bar v_{x_{k'}} + \bar v_{x_{j'} x_k} v_{x_{k'}} \big)dt\\
&= \sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big[ v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big]_{x_k} dt\\
&= \sum_{j,k,j',k'=1}^n \Big[ b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big) \Big]_{x_k} dt
 - \sum_{j,k,j',k'=1}^n \big( b^{jk} b^{j'k'}\ell_{x_j} \big)_{x_k}\big( v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big)dt.
\end{aligned}
\tag{11.32}
\]

By the equality (11.32), we get that
\[
\begin{aligned}
& I_{12}\overline{I_{22}} + \overline{I_{12}}\,I_{22}\\
&= -2\sum_{j,k=1}^n b^{jk}\ell_{x_j} v_{x_k}\sum_{j',k'=1}^n \big( b^{j'k'}\bar v_{x_{j'}} \big)_{x_{k'}} dt
 - 2\sum_{j',k'=1}^n \big( b^{j'k'} v_{x_{j'}} \big)_{x_{k'}}\sum_{j,k=1}^n b^{jk}\ell_{x_j}\bar v_{x_k}\,dt\\
&= -2\sum_{j,k,j',k'=1}^n \Big[ b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_k}\bar v_{x_{j'}} + \bar v_{x_k} v_{x_{j'}} \big) \Big]_{x_{k'}} dt
 + 2\sum_{j,k,j',k'=1}^n \big( b^{jk}\ell_{x_j} \big)_{x_{k'}} b^{j'k'}\big( v_{x_k}\bar v_{x_{j'}} + \bar v_{x_k} v_{x_{j'}} \big)dt\\
&\quad + 2\sum_{j,k,j',k'=1}^n b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_k x_{k'}} + \bar v_{x_{j'}} v_{x_k x_{k'}} \big)dt\\
&= -2\sum_{j,k,j',k'=1}^n \Big[ b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_k}\bar v_{x_{j'}} + \bar v_{x_k} v_{x_{j'}} \big) \Big]_{x_{k'}} dt
 + \sum_{j,k,j',k'=1}^n \Big[ b^{jk} b^{j'k'}\ell_{x_j}\big( v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big) \Big]_{x_k} dt\\
&\quad + 2\sum_{j,k,j',k'=1}^n \big( b^{jk}\ell_{x_j} \big)_{x_{k'}} b^{j'k'}\big( v_{x_k}\bar v_{x_{j'}} + \bar v_{x_k} v_{x_{j'}} \big)dt
 - \sum_{j,k,j',k'=1}^n \big( b^{jk} b^{j'k'}\ell_{x_j} \big)_{x_k}\big( v_{x_{j'}}\bar v_{x_{k'}} + \bar v_{x_{j'}} v_{x_{k'}} \big)dt.
\end{aligned}
\tag{11.33}
\]

Further, it holds that
\[
\begin{aligned}
I_{13}\overline{I_{22}} + \overline{I_{13}}\,I_{22}
&= \Psi v\sum_{j,k=1}^n \big( b^{jk}\bar v_{x_j} \big)_{x_k} dt + \sum_{j,k=1}^n \big( b^{jk} v_{x_j} \big)_{x_k}\,\Psi\bar v\,dt\\
&= \sum_{j,k=1}^n \Big[ \Psi b^{jk}\big( v_{x_j}\bar v + \bar v_{x_j} v \big) \Big]_{x_k} dt
 - \sum_{j,k=1}^n \Psi b^{jk}\big( v_{x_j}\bar v_{x_k} + \bar v_{x_j} v_{x_k} \big)dt
 - \sum_{j,k=1}^n \Psi_{x_j} b^{jk}\big( v_{x_k}\bar v + \bar v_{x_k} v \big)dt\\
&= \sum_{j,k=1}^n \Big[ \Psi b^{jk}\big( v_{x_j}\bar v + \bar v_{x_j} v \big) \Big]_{x_k} dt
 - \sum_{j,k=1}^n \Psi b^{jk}\big( v_{x_j}\bar v_{x_k} + \bar v_{x_j} v_{x_k} \big)dt\\
&\quad - \sum_{j,k=1}^n \big( b^{jk}\Psi_{x_j} |v|^2 \big)_{x_k} dt + \sum_{j,k=1}^n \big( b^{jk}\Psi_{x_j} \big)_{x_k} |v|^2\,dt.
\end{aligned}
\tag{11.34}
\]

Finally,
\[
\begin{aligned}
I_1\overline{I_{23}} + \overline{I_1}\,I_{23}
&= -i\beta\ell_t v\,A\bar v\,dt + A v\,\overline{\big( -i\beta\ell_t v \big)}\,dt
 - 2\sum_{j,k=1}^n b^{jk}\ell_{x_j} A\big( v_{x_k}\bar v + \bar v_{x_k} v \big)dt + 2A\Psi|v|^2 dt\\
&= -2\sum_{j,k=1}^n \big( b^{jk}\ell_{x_j} A|v|^2 \big)_{x_k} dt
 + 2\Big[ \sum_{j,k=1}^n \big( b^{jk}\ell_{x_j} A \big)_{x_k} + A\Psi \Big]|v|^2 dt,
\end{aligned}
\tag{11.35}
\]
where we used that the two terms involving ℓ_t cancel each other, and that v_{x_k}\bar v + \bar v_{x_k} v = (|v|^2)_{x_k}.

Step 3. Combining (11.23)–(11.35), we conclude the desired formula (11.17). This completes the proof of Theorem 11.11.

Remark 11.12. Since we do not assume that the matrix (b^{jk})_{1≤j,k≤n} is positive definite, similarly to [106] and based on the identity (11.17) in Theorem 11.11, we can deduce observability estimates not only for the backward stochastic Schrödinger equation, but also for deterministic hyperbolic, Schrödinger and plate equations, which had previously been derived via Carleman estimates (see [111], [187] and [385], respectively).


11.4 Observability Estimate for Backward Stochastic Schrödinger Equations

In this section, we shall prove the desired observability estimate for the backward stochastic Schrödinger equation (11.4) with τ = T (see Theorem 11.14 below). To this end, put
\[
\psi(x) = |x - x_0|^2 + \sigma,
\tag{11.36}
\]
where σ is a positive constant such that
\[
\min_{x\in\overline G}\psi(x) \ge \frac{5}{6}\,|\psi|_{L^\infty(G)}.
\]
Let λ > 0 and µ > 0. Put
\[
\ell = \lambda\,\frac{e^{4\mu\psi} - e^{5\mu|\psi|_{L^\infty(G)}}}{t^2(T-t)^2}, \qquad
\varphi = \frac{e^{4\mu\psi}}{t^2(T-t)^2}, \qquad \theta = e^{\ell}.
\tag{11.37}
\]
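Such a constant σ always exists because G is bounded: writing m = min_{x∈Ḡ}|x − x₀|² and M = max_{x∈Ḡ}|x − x₀|², the requirement min ψ ≥ (5/6)|ψ|_{L∞(G)} reads m + σ ≥ (5/6)(M + σ), i.e., σ ≥ 5M − 6m. The following small numerical check illustrates this (an illustration only, not from the book; the sample domain G = (0,1)², the point x₀ = (−1,0) and the helper name `check_weight` are our own choices):

```python
def check_weight(x0, corners, sigma):
    """Check min psi >= (5/6) max psi for psi(x) = |x - x0|^2 + sigma
    over the given sample points."""
    d2 = [sum((xi - x0i) ** 2 for xi, x0i in zip(x, x0)) for x in corners]
    psi = [v + sigma for v in d2]
    return min(psi) >= (5.0 / 6.0) * max(psi)

# Corners of G = (0,1)^2; for this particular x0 (outside the closure of G)
# the extrema of the convex function |x - x0|^2 over the square happen to be
# attained at corners, so checking corners suffices.
corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
x0 = (-1.0, 0.0)

d2 = [sum((xi - x0i) ** 2 for xi, x0i in zip(x, x0)) for x in corners]
# Algebraic threshold is 5*max - 6*min; add a small margin for strictness.
sigma = 5.0 * max(d2) - 6.0 * min(d2) + 1.0

assert check_weight(x0, corners, sigma)    # admissible sigma works
assert not check_weight(x0, corners, 0.0)  # sigma = 0 is too small here
```

The point of the condition is quantitative: it guarantees e^{5µ|ψ|_{L∞(G)}} ≤ e^{6µψ(x)} for all x, which is what produces the powers φ^{3/2} and φ² in the estimates of ℓ_t and ℓ_tt below.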

We first establish the following global Carleman estimate for backward stochastic Schrödinger equations:

Theorem 11.13. There are λ_1 > 0 (depending on r_1) and µ_1 > 0 such that for each λ ≥ λ_1 and µ ≥ µ_1, any solution to the equation (11.4) with τ = T satisfies
\[
\begin{aligned}
& \mathbb E\int_Q \theta^2\big( \lambda^3\mu^4\varphi^3|z|^2 + \lambda\mu\varphi|\nabla z|^2 \big)dx\,dt\\
&\le C\Big[ \mathbb E\int_Q \theta^2\lambda^2\mu^2\varphi^2\big( |Z|^2 + |\nabla Z|^2 \big)dx\,dt
 + \lambda\mu\,\mathbb E\int_{\Sigma_0} \theta^2\varphi\,\Big|\frac{\partial z}{\partial\nu}\Big|^2 d\Gamma\,dt \Big].
\end{aligned}
\tag{11.38}
\]

Proof: The proof is divided into three steps.

Step 1. Let β = 1 and let (b^{jk})_{1≤j,k≤n} be the n × n identity matrix. Put
\[
\delta^{jk} = \begin{cases} 1, & \text{if } j = k,\\ 0, & \text{if } j \neq k. \end{cases}
\]
For any solution (z,Z) to the equation (11.4) with τ = T, applying Theorem 11.11 with θ given by (11.37), u replaced by z and v = θz, we obtain that
\[
\begin{aligned}
& \theta\,\mathcal P z\,\overline{\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big)}
 + \theta\,\overline{\mathcal P z}\,\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big)
 + dM + \mathrm{div}\,V\\
&= 2\Big| -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big|^2 dt
 + \sum_{j,k=1}^n c^{jk}\big( v_{x_j}\bar v_{x_k} + \bar v_{x_j} v_{x_k} \big)dt + D|v|^2 dt\\
&\quad + 2i\sum_{j=1}^n \big( \ell_{x_j t} + \ell_{t x_j} \big)\big( \bar v_{x_j} v - v_{x_j}\bar v \big)dt
 + i(\Psi + \Delta\ell)\big( \bar v\,dv - v\,d\bar v \big) + \ell_t|dv|^2\\
&\quad + i\sum_{j=1}^n \ell_{x_j}\big( d\bar v_{x_j}\,dv - dv_{x_j}\,d\bar v \big).
\end{aligned}
\tag{11.39}
\]
Here
\[
\left\{
\begin{aligned}
M &= \ell_t|v|^2 + i\sum_{j=1}^n \ell_{x_j}\big( \bar v_{x_j} v - v_{x_j}\bar v \big),\\
A &= \sum_{j=1}^n \big( \ell_{x_j}^2 - \ell_{x_j x_j} \big) - \Psi,\\
D &= \ell_{tt} + \sum_{j=1}^n \Psi_{x_j x_j} + 2\sum_{j=1}^n \big( \ell_{x_j} A \big)_{x_j} + 2A\Psi,\\
c^{jk} &= 2\ell_{x_j x_k} - \delta^{jk}\Delta\ell - \delta^{jk}\Psi,
\end{aligned}
\right.
\]
and
\[
\begin{aligned}
V^k &= -i\Big[ \ell_{x_k}\big( v\,d\bar v - \bar v\,dv \big) + \ell_t\big( v_{x_k}\bar v - \bar v_{x_k} v \big)dt \Big]
 - \Psi\big( v_{x_k}\bar v + \bar v_{x_k} v \big)dt + \big( 2A\ell_{x_k} + \Psi_{x_k} \big)|v|^2 dt\\
&\quad + 2\sum_{j=1}^n \ell_{x_j}\big( \bar v_{x_j} v_{x_k} + v_{x_j}\bar v_{x_k} \big)dt
 - 2\sum_{j=1}^n \ell_{x_k}\,v_{x_j}\bar v_{x_j}\,dt.
\end{aligned}
\]

Step 2. In this step, we estimate the terms in the right-hand side of the equality (11.39) one by one. First, from the definitions of ℓ and φ in (11.37) and the choice of ψ in (11.36), we have that
\[
|\ell_t| = \lambda\,\frac{|2(2t-T)|}{t^3(T-t)^3}\,\big| e^{4\mu\psi} - e^{5\mu|\psi|_{L^\infty(G)}} \big|
\le C\lambda\,\frac{e^{5\mu|\psi|_{L^\infty(G)}}}{t^3(T-t)^3}
\le C\lambda\,\frac{e^{6\mu\psi}}{t^3(T-t)^3} = C\lambda\varphi^{\frac32}
\tag{11.40}
\]
and
\[
|\ell_{tt}| = \lambda\,\frac{|20t^2 - 20tT + 6T^2|}{t^4(T-t)^4}\,\big| e^{4\mu\psi} - e^{5\mu|\psi|_{L^\infty(G)}} \big|
\le C\lambda\,\frac{e^{5\mu|\psi|_{L^\infty(G)}}}{t^4(T-t)^4}
\le C\lambda\,\frac{e^{8\mu\psi}}{t^4(T-t)^4} = C\lambda\varphi^2 \le C\lambda\varphi^3.
\tag{11.41}
\]


Let us choose Ψ = −∆ℓ. Since
\[
A = \sum_{j=1}^n \ell_{x_j}^2 = \sum_{j=1}^n \big( 4\lambda\mu\varphi\psi_{x_j} \big)^2 = 16\lambda^2\mu^2\varphi^2|\nabla\psi|^2,
\]
we find that
\[
\begin{aligned}
D &= \ell_{tt} + \sum_{j=1}^n \Psi_{x_j x_j} + 2\sum_{j=1}^n \big( \ell_{x_j} A \big)_{x_j} + 2A\Psi\\
&= \ell_{tt} - \Delta(\Delta\ell)
 + 2\sum_{j=1}^n \big( 4\lambda\mu\varphi\psi_{x_j}\cdot 16\lambda^2\mu^2\varphi^2|\nabla\psi|^2 \big)_{x_j}
 - 32\lambda^2\mu^2\varphi^2|\nabla\psi|^2\,\Delta\ell\\
&= 384\,\lambda^3\mu^4\varphi^3|\nabla\psi|^4 - \mu^4\varphi\,O(\lambda) - \lambda^3\varphi^3 O(\mu^3) + \ell_{tt}.
\end{aligned}
\tag{11.42}
\]
Recalling that x_0 ∈ R^n \ G, we have |∇ψ| > 0 in G. This, together with (11.41)–(11.42), implies that there exists µ_0 > 0 such that for all µ > µ_0, one can find a constant λ_0 = λ_0(µ_0) so that for any λ > λ_0, it holds that
\[
D|v|^2 \ge \lambda^3\mu^4\varphi^3|\nabla\psi|^4\,|v|^2.
\tag{11.43}
\]
Since
\[
c^{jk} = 2\ell_{x_j x_k} - \delta^{jk}\Delta\ell - \delta^{jk}\Psi
= 32\lambda\mu^2\varphi\,\psi_{x_j}\psi_{x_k} + 16\lambda\mu\varphi\,\psi_{x_j x_k},
\]
we deduce that
\[
\begin{aligned}
& \sum_{j,k=1}^n c^{jk}\big( v_{x_j}\bar v_{x_k} + \bar v_{x_j} v_{x_k} \big)
= 16\lambda\mu\varphi\sum_{j,k=1}^n \big( 2\mu\psi_{x_j}\psi_{x_k} + \psi_{x_j x_k} \big)\big( v_{x_j}\bar v_{x_k} + v_{x_k}\bar v_{x_j} \big)\\
&= 32\lambda\mu^2\varphi\Big[ \sum_{j=1}^n \big( \psi_{x_j} v_{x_j} \big)\sum_{k=1}^n \big( \psi_{x_k}\bar v_{x_k} \big)
 + \sum_{k=1}^n \big( \psi_{x_k} v_{x_k} \big)\sum_{j=1}^n \big( \psi_{x_j}\bar v_{x_j} \big) \Big]
 + 32\lambda\mu\varphi\sum_{j=1}^n \big( v_{x_j}\bar v_{x_j} + \bar v_{x_j} v_{x_j} \big)\\
&= 64\lambda\mu^2\varphi\,|\nabla\psi\cdot\nabla v|^2 + 64\lambda\mu\varphi\,|\nabla v|^2 \ge 64\lambda\mu\varphi\,|\nabla v|^2.
\end{aligned}
\tag{11.44}
\]
Now we estimate the other terms in the right-hand side of the equality (11.39). The first one reads
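The computations above repeatedly use pointwise properties of the weight ψ from (11.36); for the record (direct differentiation of ψ(x) = |x − x₀|² + σ):

```latex
\[
  \nabla\psi(x) = 2(x - x_0), \qquad
  \psi_{x_j x_k} = 2\,\delta^{jk}, \qquad
  |\nabla\psi(x)| = 2|x - x_0| > 0 \ \text{in } G,
\]
% the strict positivity holding because x_0 \in \mathbb{R}^n \setminus G;
% it is what makes the leading coefficients in (11.43) and (11.44)
% uniformly positive on G.
```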


\[
\Big| 2i\sum_{j=1}^n \big( \ell_{x_j t} + \ell_{t x_j} \big)\big( \bar v_{x_j} v - v_{x_j}\bar v \big) \Big|
= \Big| 4i\sum_{j=1}^n \lambda\mu\psi_{x_j}\,\ell_t\big( \bar v_{x_j} v - v_{x_j}\bar v \big) \Big|
\le 2\lambda\varphi|\nabla v|^2 + 2\lambda\mu^2\varphi^3|\nabla\psi|^2|v|^2.
\tag{11.45}
\]
The second one satisfies
\[
i(\Psi + \Delta\ell)\big( \bar v\,dv - v\,d\bar v \big) = 0.
\tag{11.46}
\]
To estimate the third one, we take the mean value and get
\[
\ell_t\,\mathbb E|dv|^2 = \ell_t\,\mathbb E\big| \theta\ell_t z\,dt + \theta\,dz \big|^2
= \ell_t\theta^2\,\mathbb E|dz|^2 \le 2\lambda\theta^2\varphi^{\frac32}\,\mathbb E|Z|^2\,dt.
\tag{11.47}
\]
Here we utilized the inequality (11.40). Further,
\[
\begin{aligned}
\mathbb E\big( d\bar v_{x_k}\,dv \big)
&= \mathbb E\Big[ \overline{\big( \theta\ell_t z\,dt + \theta\,dz \big)_{x_k}}\,\big( \theta\ell_t z\,dt + \theta\,dz \big) \Big]
= \mathbb E\big[ \overline{(\theta\,dz)_{x_k}}\,\theta\,dz \big]\\
&= \mathbb E\big[ \overline{\big( \lambda\mu\varphi\psi_{x_k}\theta\,dz + \theta\,dz_{x_k} \big)}\,\theta\,dz \big]
= \lambda\mu\varphi\psi_{x_k}\theta^2\,\mathbb E|dz|^2 + \theta^2\,\mathbb E\big( d\bar z_{x_k}\,dz \big)\\
&= \lambda\mu\varphi\psi_{x_k}\theta^2\,\mathbb E|Z|^2\,dt + \theta^2\,\mathbb E\big( \bar Z_{x_k} Z \big)dt.
\end{aligned}
\]
Similarly,
\[
\mathbb E\big( dv_{x_k}\,d\bar v \big) = \lambda\mu\varphi\psi_{x_k}\theta^2\,\mathbb E|Z|^2\,dt + \theta^2\,\mathbb E\big( \bar Z Z_{x_k} \big)dt.
\]
Therefore, the fourth one satisfies
\[
i\,\mathbb E\sum_{j=1}^n \ell_{x_j}\big( d\bar v_{x_j}\,dv - dv_{x_j}\,d\bar v \big)
= i\lambda\mu\varphi\sum_{j=1}^n \psi_{x_j}\,\theta^2\,\mathbb E\big( \bar Z_{x_j} Z - \bar Z Z_{x_j} \big)dt.
\tag{11.48}
\]

Step 3. Integrating the equality (11.39) over Q, taking mean values on both sides, and noting (11.40), (11.41) and (11.43)–(11.48), we obtain that
\[
\begin{aligned}
& \mathbb E\int_Q \big( \lambda^3\mu^4\varphi^3|v|^2 + \lambda\mu^2\varphi|\nabla v|^2 \big)dx\,dt
 + 2\,\mathbb E\int_Q \Big| -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big|^2 dx\,dt\\
&\le \mathbb E\int_Q \Big[ \theta\,\mathcal P z\,\overline{\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big)}
 + \theta\,\overline{\mathcal P z}\,\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big) \Big]dx\\
&\quad + C\,\mathbb E\int_Q \theta^2\lambda^2\mu^2\varphi^2\big( |Z|^2 + |\nabla Z|^2 \big)dx\,dt
 + \mathbb E\int_Q dM\,dx + \mathbb E\int_Q \mathrm{div}\,V\,dx.
\end{aligned}
\tag{11.49}
\]


Now we analyze the terms in the right-hand side of (11.49) one by one. The first one reads
\[
\begin{aligned}
& \mathbb E\int_Q \Big[ \theta\,\mathcal P z\,\overline{\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big)}
 + \theta\,\overline{\mathcal P z}\,\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big) \Big]dx\\
&= \mathbb E\int_Q \Big[ \theta\big( b_1\cdot\nabla z + b_2 z + b_3 Z \big)\,
 \overline{\Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big)}\\
&\qquad\quad + \theta\,\overline{\big( b_1\cdot\nabla z + b_2 z + b_3 Z \big)}\,
 \Big( -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big) \Big]dx\,dt\\
&\le \mathbb E\int_Q \Big( \theta^2\big| b_1\cdot\nabla z + b_2 z + b_3 Z \big|^2
 + \Big| -i\ell_t v - 2\sum_{j=1}^n \ell_{x_j} v_{x_j} + \Psi v \Big|^2 \Big)dx\,dt.
\end{aligned}
\tag{11.50}
\]
From the choice of θ, it is clear that v(0) = v(T) = 0. Hence, we obtain that
\[
\mathbb E\int_Q dM\,dx = 0.
\tag{11.51}
\]
Further, noting that v = z = 0 on Σ, we find
\[
\begin{aligned}
\mathbb E\int_Q \mathrm{div}\,V\,dx
&= \mathbb E\int_\Sigma 2\sum_{k=1}^n\sum_{j=1}^n \Big[ \ell_{x_j}\big( \bar v_{x_j} v_{x_k} + v_{x_j}\bar v_{x_k} \big)\nu^k
 - \ell_{x_k}\nu^k\,v_{x_j}\bar v_{x_j} \Big]d\Sigma\\
&= \mathbb E\int_\Sigma \Big( 4\sum_{j=1}^n \ell_{x_j}\nu^j\,\Big|\frac{\partial v}{\partial\nu}\Big|^2
 - 2\sum_{j=1}^n \ell_{x_j}\nu^j\,\Big|\frac{\partial v}{\partial\nu}\Big|^2 \Big)d\Sigma\\
&= \mathbb E\int_\Sigma 2\sum_{j=1}^n \ell_{x_j}\nu^j\,\Big|\frac{\partial v}{\partial\nu}\Big|^2 d\Sigma
 \le C\lambda\mu\,\mathbb E\int_{\Sigma_0} \theta^2\varphi\,\Big|\frac{\partial z}{\partial\nu}\Big|^2 d\Gamma\,dt.
\end{aligned}
\tag{11.52}
\]
It follows from (11.49)–(11.52) that
\[
\begin{aligned}
& \mathbb E\int_Q \big( \lambda^3\mu^4\varphi^3|v|^2 + \lambda\mu\varphi|\nabla v|^2 \big)dx\,dt\\
&\le C\,\mathbb E\int_Q \theta^2\big| b_1\cdot\nabla z + b_2 z + b_3 Z \big|^2 dx\,dt
 + C\lambda\mu\,\mathbb E\int_{\Sigma_0} \theta^2\varphi\,\Big|\frac{\partial z}{\partial\nu}\Big|^2 d\Gamma\,dt\\
&\quad + C\,\mathbb E\int_Q \theta^2\lambda^2\mu^2\varphi^2\big( |Z|^2 + |\nabla Z|^2 \big)dx\,dt.
\end{aligned}
\tag{11.53}
\]
Noting that z_{x_k} = θ^{-1}(v_{x_k} − ℓ_{x_k} v) = θ^{-1}(v_{x_k} − λµφψ_{x_k} v), we get

\[
\theta^2\big( |\nabla z|^2 + \lambda^2\mu^2\varphi^2|z|^2 \big)
\le C\big( |\nabla v|^2 + \lambda^2\mu^2\varphi^2|v|^2 \big).
\tag{11.54}
\]
Therefore, it follows from (11.53) that
\[
\begin{aligned}
& \mathbb E\int_Q \theta^2\big( \lambda^3\mu^4\varphi^3|z|^2 + \lambda\mu\varphi|\nabla z|^2 \big)dx\,dt\\
&\le C\,\mathbb E\int_Q \theta^2\big( |b_1|^2|\nabla z|^2 + |b_2|^2|z|^2 + |b_3|^2|Z|^2 \big)dx\,dt\\
&\quad + C\lambda\mu\,\mathbb E\int_{\Sigma_0} \theta^2\varphi\,\Big|\frac{\partial z}{\partial\nu}\Big|^2 d\Gamma\,dt
 + C\,\mathbb E\int_Q \theta^2\lambda^2\mu^2\varphi^2\big( |Z|^2 + |\nabla Z|^2 \big)dx\,dt.
\end{aligned}
\tag{11.55}
\]
Taking µ_1 = µ_0 and λ_1 = max(λ_0, C r_1^2), and utilizing the inequality (11.55), we conclude the desired inequality (11.38).

Now we are in a position to establish the following key observability estimate for the backward stochastic Schrödinger equation (11.4) with τ = T.

Theorem 11.14. All solutions to the backward stochastic Schrödinger equation (11.4) with τ = T satisfy
\[
|z_T|_{L^2_{\mathcal F_T}(\Omega;H^1_0(G;\mathbb C))}
\le e^{C r_1}\Big( \big| a_4 z + Z \big|_{L^2_{\mathbb F}(0,T;H^1_0(G;\mathbb C))}
 + \Big|\frac{\partial z}{\partial\nu}\Big|_{L^2_{\mathbb F}(0,T;L^2(\Gamma_0;\mathbb C))} \Big).
\tag{11.56}
\]

Proof : Noting that ∫ ( ) θ2 λ2 µ2 φ2 |Z|2 + |∇Z|2 dxdt Q



( ) θ2 λ2 µ2 φ2 |Z + a4 z|2 + |∇(Z + a4 z)|2 dxdt

≤2 Q



( ) θ2 λ2 µ2 φ2 |a4 z|2 + |∇(a4 z)|2 dxdt,

+2 Q

from (11.38), we obtain that ∫ ( ) E θ2 λ3 µ4 φ3 |z|2 + λµφ|∇z|2 dxdt Q [∫ ( ) ≤ CE θ2 λ2 µ2 φ2 |Z + a4 z|2 + |∇(Z + a4 z)|2 dxdt (11.57) Q ∫ ∫ ∂z 2 ( ) ] + θ2 λ2 µ2 φ2 |a4 z|2 + |∇(a4 z)|2 dxdt + λµ θ2 φ dΓ dt . ∂ν Q Σ0 Let λ2 = max{λ1 , C|a4 |L∞ (0,T ;W 1,∞ (Ω)) }. Then for all λ ≥ λ2 , we deduce from 0 F (11.57) that

382

11 Exact Controllability for Stochastic Schr¨ odinger Equations



( ) θ2 λ3 µ4 φ3 |z|2 + λµφ|∇z|2 dxdt

E Q

≤ CE

[∫

( ) θ2 λ2 µ2 φ2 |Z + a4 zˆ|2 + |∇(Z + a4 z)|2 dxdt Q

+λµ



(11.58)

∂z 2 ] θ2 φ dΓ dt . ∂ν Σ0

It follows from the definition of ℓ and θ (in (11.37)) that ∫ ( ) E θ2 φ3 |z|2 + φ|∇z|2 dxdt Q

∫ ( ( T ) ( T )) ∫ 3T 4 ( 2 ) 2 ≥ min φ ,x θ ,x E |z| + |∇z|2 dxdt, T 2 4 x∈G G 4



( ) θ2 φ2 |Z + a4 z|2 + |∇(Z + a4 z)|2 dxdt

E Q

≤ max

(

(x,t)∈Q

and



E

(11.59)

) φ (t, x)θ (t, x) E 2



(

2

) |Z + a4 z| + |∇(Z + a4 z)| dxdt 2

(11.60)

2

Q

∫ 2 ∂z 2 ( ) ∂z θ2 φ dΓ dt ≤ max φ(t, x)θ2 (t, x) E dΓ dt. ∂ν (x,t)∈Q Σ0 Σ0 ∂ν

(11.61)

Combining (11.38) and (11.59)–(11.61), we arrive at ∫ E

3T 4 T 4

∫ (|z|2 + |∇z|2 )dxdt G

( ) max(x,t)∈Q φ2 (t, x)θ2 (t, x) ( ) ≤ Cr1 minx∈G φ( T2 , x)θ2 ( T4 , x) ∫ [ ∫ ( ) 2 2 × E |Z + a4 z| + |∇(Z + a4 z)| dxdt + E

(11.62) ∂z 2 ] dΓ dt Q Σ0 ∂ν ∫ 2 [ ∫ ( ] ) ∂z ≤ eCr1 E |Z + a4 z|2 + |∇(Z + a4 z)|2 dxdt + E dΓ dt . Q Σ0 ∂ν Utilizing (11.62) and (11.6), we obtain that ∫ ( ) E |zT |2 + |∇zT |2 dx G

∫ [ ∫ ( ) ≤ eCr1 E |Z + a4 z|2 + |∇(Z + a4 z)|2 dxdt + E Q

which concludes Theorem 11.14 immediately.

∂z 2 ] dΓ dt , Σ0 ∂ν

11.5 Notes and Comments

383

Remark 11.15. The constant eCr1 in (11.56) can be improved. Nevertheless, as we explained before, since we want to present the key idea in a simple way, we do not pursue the sharpest result. 2

11.5 Notes and Comments This chapter is an improved version of the results presented in [228, 229]. There are numerous studies on the controllability and observability of deterministic Schr¨ odinger equations (e.g., [4, 35, 18, 19, 41, 42, 187, 190, 252, 260, 280, 289, 317, 394]). Generally speaking, there are five useful methods to solve the exact controllability/observability problem for deterministic Schr¨odinger equations: •

• • • •

The first one is based on the Ingham type inequality ([317]). It can be used to solve controllability/observability problem for Schr¨odinger equations involved in some special domains, i.e., intervals and rectangles. However, people do not know how to apply it to the equations evolved in general domains. The second one is the classical Rellich-type multiplier approach ([252]). It can be applied to Schr¨odinger equations with (small) time-invariant lower order terms. The third one is the microlocal analysis approach ([4, 190]), which gives sharp sufficient conditions for the exact controllability of Schr¨odinger equations with time independent lower order terms. The fourth one is the transmutation method ([280]). This method reduces the controllability problem for Schr¨odinger equations to the same problem but for parabolic equations. The last one is based on the global Carleman estimate ([187, 260]), which applies to Schr¨odinger equations with quite general lower order terms but the controller is not as sharp as those given by the first and the third methods (mentioned above).

Until now, as far as we know, only the last method was extended to studying the controllability/observability problem of stochastic Schr¨odinger equations (See [224, 228, 229]). Recently, in a very interesting paper [107], a significant weighted identity for quite general stochastic partial differential operators was established, via which one can obtain a unified treatment in establishing the weighted identities (including in particular (8.19), (9.26) and (11.17)) for some deterministic/stochastic partial differential operators. Hence, based on this identity, some known global Carleman estimates for stochastic transport equations, stochastic parabolic equations, stochastic Schr¨odinger equations and so on can be deduced. It seems that people will face many essential difficulties to apply the other four methods mentioned above to study the controllability problem for stochastic Schr¨ odinger equations. On the other hand, there are lots of open

384

11 Exact Controllability for Stochastic Schr¨ odinger Equations

problems related to the topic of this chapter. In our opinion, the following are particularly interesting: 1) Observability estimate and unique continuation for stochastic Schr¨ odinger equations. Consider the following stochastic Schr¨odinger equation:  ( )   idy + ∆ydt = a1 · ∇y + a2 y dt + a3 ydW (t) in Q, (11.63) y=0 on Σ,   y(0) = y0 in G. As an easy consequence of Theorem 11.14, we can show that, the solution y to (11.63) equals zero in Q, a.s., provided that y = 0 in a neighborhood of Γ0 , a.e. t ∈ (0, T ). Compared with the classical unique continuation result for deterministic Schr¨odinger equations with time independent coefficients (e.g., [4, 190] for example), it seems too restrictive to assume that y vanishes in a neighborhood of Γ0 , a.e. t ∈ (0, T ). It would be quite interesting but may be challenging to prove whether the results in [4, 190] are true or not for stochastic Schr¨ odinger equations. To do this, several new tools should be developed in the stochastic setting, such as the stochastic counterpart of microlocal defect measure and semiclassical defect measure. 2) Null and approximate controllability for stochastic Schr¨ odinger equations. As immediate consequences of the main result of this chapter, we can obtain the null and approximate controllability for the same system. However, in order to get these two kinds of controllability, there are no reasons to employ two controls. By Proposition 7.8, it is possible to put only one control in the drift term to get the null controllability, at least for the system with deterministic coefficients. On the other hand, as suggested by the result in Section 9.2, we believe that one boundary control can guarantee the null and approximate controllability of (11.3). However, we will meet some essential difficulties to prove such a result. For example, to get the null controllability, we should prove that ∫ 2 ∂z 2 E|z(0)|H 1 (G;C) ≤ CE (11.64) dΣ0 . 
0 Σ0 ∂ν If we utilize the method in this chapter, we only get that E|z(0)|2H 1 (G;C) 0

≤ CE

(∫

∫ T ∂z 2 ) |Z|2H 1 (G;C) dt . dΣ0 + 0 Σ0 ∂ν 0

An extra term Z appears in the right hand side of the above inequality. We believe that one should be able to introduce some new technique to get rid of this additional term. However, we do not know how to achieve this goal at this moment.

11.5 Notes and Comments

385

3) Controllability for stochastic Schr¨ odinger equations with bilinear controls. In practice, it is more convenient to add controls in the coefficients of stochastic Schr¨odinger equations. This leads to the control system with bilinear controls. An example is as follows:  idy + ∆ydt = (a1 + u)ydt + (a1 + v)ydW (t) in Q,    y=0 on Σ, (11.65)    y(0) = y0 in G. Here both u and v are controls belong to suitable function spaces. Bilinear control for deterministic Schr¨odinger equations are studied extensively in the literature (e.g., [19]). However, at this moment we have no idea on how to study the bilinear controllability problem for stochastic Schr¨odinger equations. In this book, we consider only controllability/obseverbility problems for four kind of stochastic partial differential equations, i.e., stochastic transport equations, stochastic parabolic equations, stochastic wave equations and stochastic Schr¨ odinger equations. In recent years, there are some interesting works on the same problems but for other stochastic partial differential equations, say [108] for stochastic complex Ginzburg-Landau equations, [118] for stochastic Korteweg-de Vries equations, [119] for stochastic Kawahara equations, [120] for stochastic fourth order Schr¨odinger equations, [121] for stochastic Kuramoto-Sivashinsky equations, [142] for stochastic coupled systems of fourth and second-order parabolic equations, [222] for stochastic degenerate parabolic equations, [358] for stochastic Grushin equations, and [391] for stochastic beam equations. As far as we know, there are no results on the controllability/obseverbility problems for stochastic plate equations, stochastic Maxwell equations, stochastic Benjamin-Bona-Mahony equations, stochastic elasticity/thermoelasticity equations, stochastic hyperbolic-parabolic coupled systems, stochastic Navier-Stokes equations and atmospheric equations in particular, etc. 
In our opinion, the direction of controllability/observability for stochastic partial differential equations is full of open problems, and very likely new tools will need to be developed to solve some of them.

12 Pontryagin-Type Stochastic Maximum Principle and Beyond

The main purpose of this chapter is to derive first order necessary conditions, i.e., a Pontryagin-type maximum principle, for optimal controls of general nonlinear stochastic evolution equations in infinite dimensions, in which both the drift and the diffusion terms may contain the control variable, and the control regions are allowed to be nonconvex. To do this, quite differently from the deterministic infinite dimensional setting and the stochastic finite dimensional case, one has to introduce a suitable operator-valued backward stochastic evolution equation, serving as the second order adjoint equation. It is very difficult to prove the existence of solutions to this equation in the general case. Indeed, in the infinite dimensional setting, the previous literature contains no satisfactory stochastic integration/evolution equation theory that can be employed to establish the well-posedness of such an equation. To overcome this difficulty, we employ our stochastic transposition method to introduce the concepts of transposition solution and relaxed transposition solution to the desired operator-valued backward stochastic evolution equation, and develop carefully a way to study the corresponding well-posedness. We shall also briefly consider first order sufficient conditions and second order necessary conditions (for optimal controls); the latter relies essentially on the above well-posedness result, in particular on the characterization of the correction part in the operator-valued backward stochastic evolution equation.

12.1 Formulation of the Optimal Control Problem

Throughout this chapter, T > 0, and (Ω, 𝓕, 𝐅, ℙ) (with 𝐅 ≜ {𝓕_t}_{t∈[0,T]}) is a fixed filtered probability space satisfying the usual conditions; we denote by 𝔽 the progressive σ-field w.r.t. 𝐅; H and V are two separable (complex) Hilbert spaces (unless otherwise stated); {h_j}_{j=1}^∞ and {e_j}_{j=1}^∞ are orthonormal bases of H and V, respectively; we denote by I the identity operator on H, and write L₂⁰ ≜ L₂(V; H). Unless otherwise specified, W(·) is a V-valued Q-Brownian motion or cylindrical Brownian motion; in this chapter we consider only the case of a cylindrical Brownian motion.

Let A be an unbounded linear operator (with domain D(A) on H) which generates a C₀-semigroup {S(t)}_{t≥0}. Denote by A* the dual operator of A. Clearly, D(A) is a Hilbert space with the usual graph norm, and A* is the infinitesimal generator of {S*(t)}_{t≥0}, the dual C₀-semigroup of {S(t)}_{t≥0}. For any λ in ρ(A), the resolvent set of A, denote by A_λ the Yosida approximation of A and by {S_λ(t)}_{t∈ℝ} the C₀-group generated by A_λ.

Let U be a separable metric space with a metric d(·,·). Put
$$\mathcal U[0,T] \triangleq \big\{ u(\cdot):[0,T]\times\Omega \to U \;\big|\; u(\cdot) \text{ is } \mathbb F\text{-adapted}\big\}. \qquad (12.1)$$

© Springer Nature Switzerland AG 2021
Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_12

Throughout this chapter, C_L > 0 is a generic constant, and we assume the following condition:

(S1) Suppose that a(·,·,·): [0,T] × Ω × H × U → H and b(·,·,·): [0,T] × Ω × H × U → L₂⁰ are two (vector-valued) functions satisfying:
i) For any (x,u) ∈ H × U, the functions a(·,x,u): [0,T] × Ω → H and b(·,x,u): [0,T] × Ω → L₂⁰ are 𝔽-measurable;
ii) For any x ∈ H and a.e. (t,ω) ∈ (0,T) × Ω, the functions a(t,x,·): U → H and b(t,x,·): U → L₂⁰ are continuous; and
iii) For any (x₁,x₂,u) ∈ H × H × U and a.e. (t,ω) ∈ (0,T) × Ω,
$$\begin{cases} |a(t,x_1,u)-a(t,x_2,u)|_H + |b(t,x_1,u)-b(t,x_2,u)|_{L_2^0} \le C_L |x_1-x_2|_H,\\ |a(t,0,u)|_H + |b(t,0,u)|_{L_2^0} \le C_L. \end{cases}$$

Let us consider the following controlled stochastic evolution equation:
$$\begin{cases} dx(t) = \big(Ax(t)+a(t,x(t),u(t))\big)dt + b(t,x(t),u(t))\,dW(t) & \text{in } (0,T],\\ x(0)=x_0, \end{cases} \qquad (12.2)$$
where u(·) ∈ 𝒰[0,T] is the control variable, x(·) is the state variable, and the (given) initial state x₀ ∈ L^{p₀}_{𝓕₀}(Ω;H) for some given p₀ ≥ 2.

In the rest of this chapter, we shall denote by C a generic constant, depending on T, A, p₀ (or p, to be introduced later) and C_L (or J and K, to be introduced later), which may differ from one place to another.

By Theorem 3.14, it is easy to show the following result:

Proposition 12.1. Let the assumption (S1) hold. Then, for any x₀ ∈ L^{p₀}_{𝓕₀}(Ω;H) and u(·) ∈ 𝒰[0,T], the equation (12.2) admits a unique mild solution x(·) ≡ x(·; x₀, u) ∈ C_𝔽([0,T]; L^{p₀}(Ω;H)). Furthermore,
$$|x(\cdot)|_{C_{\mathbb F}([0,T];L^{p_0}(\Omega;H))} \le C\big(1+|x_0|_{L^{p_0}_{\mathcal F_0}(\Omega;H)}\big).$$
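For intuition, the state map u(·) ↦ x(·; x₀, u) of (12.2) can be approximated numerically in the finite-dimensional special case A = 0, V = ℝ by an Euler–Maruyama scheme. The sketch below is illustrative only: the drift a, diffusion b and control u are hypothetical globally Lipschitz choices in the spirit of (S1), not data from the text.

```python
import numpy as np

def euler_maruyama(a, b, x0, u, T, N, rng):
    """Approximate the state of dx = a(t, x, u(t)) dt + b(t, x, u(t)) dW(t)
    on [0, T] (finite-dimensional case A = 0, one-dimensional Brownian motion)."""
    dt = T / N
    x = np.empty((N + 1, len(x0)))
    x[0] = x0
    for k in range(N):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment, variance dt
        x[k + 1] = x[k] + a(t, x[k], u(t)) * dt + b(t, x[k], u(t)) * dW
    return x

# Hypothetical Lipschitz data satisfying (S1)-type bounds, scalar control:
a = lambda t, x, u: -x + u          # drift
b = lambda t, x, u: 0.5 * x         # diffusion
u = lambda t: np.sin(t)             # an admissible control in U[0, T]

rng = np.random.default_rng(0)
path = euler_maruyama(a, b, np.array([1.0]), u, T=1.0, N=1000, rng=rng)
```

Under the Lipschitz condition in (S1), this scheme converges to the (mild) solution of Proposition 12.1 as N → ∞; in infinite dimensions one would additionally have to discretize the semigroup generated by A.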


Also, we need the following condition:

(S2) Suppose that g(·,·,·): [0,T] × Ω × H × U → ℝ and h(·): Ω × H → ℝ are two functions satisfying:
i) For any (x,u) ∈ H × U, the function g(·,x,u): [0,T] × Ω → ℝ is 𝔽-measurable and the function h(x): Ω → ℝ is 𝓕_T-measurable;
ii) For any x ∈ H and a.e. (t,ω) ∈ (0,T) × Ω, the function g(t,x,·): U → ℝ is continuous; and
iii) For any (x₁,x₂,u) ∈ H × H × U and a.e. (t,ω) ∈ (0,T) × Ω,
$$\begin{cases} |g(t,x_1,u)-g(t,x_2,u)| + |h(x_1)-h(x_2)| \le C_L |x_1-x_2|_H,\\ |g(t,0,u)| + |h(0)| \le C_L. \end{cases}$$

Define a cost functional 𝒥(·) (for the controlled system (12.2)) as follows:
$$\mathcal J(u(\cdot)) \triangleq \mathbb E\Big(\int_0^T g(t,x(t),u(t))\,dt + h(x(T))\Big), \qquad \forall\, u(\cdot)\in\mathcal U[0,T], \qquad (12.3)$$
where x(·) is the corresponding solution to (12.2).

We consider the following optimal control problem for the controlled equation (12.2) with the cost functional (12.3):

Problem (OP). Find a ū(·) ∈ 𝒰[0,T] such that
$$\mathcal J(\bar u(\cdot)) = \inf_{u(\cdot)\in\mathcal U[0,T]} \mathcal J(u(\cdot)). \qquad (12.4)$$

Any ū(·) ∈ 𝒰[0,T] satisfying (12.4) is called an optimal control. The corresponding state x̄(·) (of (12.2)) is called an optimal state, and (x̄(·), ū(·)) is called an optimal pair (of Problem (OP)).

The main goal of this chapter is to establish a first order necessary condition for optimal pairs of Problem (OP), in the spirit of the classical Pontryagin maximum principle for deterministic finite dimensional controlled systems ([281]). Also, we shall briefly consider the related first order sufficient (optimality) condition and second order necessary (optimality) condition. Stimulated by [273], for the case of general control regions, we need to assume the following further conditions¹:

(S3) For any u ∈ U and a.e. (t,ω) ∈ (0,T) × Ω, the functions a(t,·,u): H → H, b(t,·,u): H → L₂⁰, g(t,·,u): H → ℝ and h(·): H → ℝ are C². For any x ∈ H and a.e. (t,ω) ∈ (0,T) × Ω, the functions a_x(t,x,·): U → L(H), b_x(t,x,·): U → L(H;L₂⁰), g_x(t,x,·): U → H, a_{xx}(t,x,·): U → L(H,H;H), b_{xx}(t,x,·): U → L(H,H;L₂⁰) and g_{xx}(t,x,·): U → L(H) are continuous. Moreover, for any (x,u) ∈ H × U and a.e. (t,ω) ∈ (0,T) × Ω,
$$\begin{cases} |a_x(t,x,u)|_{L(H)} + |b_x(t,x,u)|_{L(H;L_2^0)} + |g_x(t,x,u)|_H + |h_x(x)|_H \le C_L,\\ |a_{xx}(t,x,u)|_{L(H,H;H)} + |b_{xx}(t,x,u)|_{L(H,H;L_2^0)} + |g_{xx}(t,x,u)|_{L(H)} + |h_{xx}(x)|_{L(H)} \le C_L. \end{cases}$$

¹ See Subsection 2.11.1 for the notations L(H,H;H) and L(H,H;L₂⁰).

Problem (OP) is now well understood for the case of finite dimensions (i.e., dim H < ∞) and the natural filtration. In this case, a Pontryagin-type maximum principle was obtained in [273] for general stochastic control systems with control-dependent diffusion coefficients and possibly non-convex control regions, and it was found that the corresponding result differs significantly from its deterministic counterpart.

At first glance, one might think that the study of Problem (OP) is simply a routine extension of that in [273]. However, the infinite dimensional setting leads to significant new difficulties. To illustrate this, we first recall the main idea and result of [273]. Suppose that (x̄(·), ū(·)) is a given optimal pair for the special case that A = 0, H = ℝⁿ (for some n ∈ ℕ), V = ℝ, p₀ = 2 and 𝐅 is the natural filtration generated by W(·). First, similarly to the deterministic setting, one introduces the following first order adjoint equation (which, in the stochastic case, is a backward stochastic differential equation):
$$\begin{cases} dy(t) = -\big(a_x(t,\bar x(t),\bar u(t))^\top y(t) + b_x(t,\bar x(t),\bar u(t))^\top Y(t) - g_x(t,\bar x(t),\bar u(t))\big)dt\\ \qquad\qquad + Y(t)\,dW(t) \quad \text{in } [0,T),\\ y(T) = -h_x(\bar x(T)). \end{cases} \qquad (12.5)$$
In (12.5), the unknown is a pair of 𝐅-adapted processes (y(·), Y(·)) ∈ L²_𝐅(Ω; C([0,T]; ℝⁿ)) × L²_𝐅(0,T; ℝⁿ).

Next, to establish the desired maximum principle for stochastic controlled systems with control-dependent diffusion and possibly nonconvex control regions, it was found in [273] that, in addition to the first order adjoint equation (12.5), one needs to introduce a second order adjoint equation as follows:
$$\begin{cases} dP(t) = -\big(a_x(t,\bar x(t),\bar u(t))^\top P(t) + P(t)\,a_x(t,\bar x(t),\bar u(t)) + b_x(t,\bar x(t),\bar u(t))^\top P(t)\,b_x(t,\bar x(t),\bar u(t))\\ \qquad\qquad + b_x(t,\bar x(t),\bar u(t))^\top Q(t) + Q(t)\,b_x(t,\bar x(t),\bar u(t)) + \mathbb H_{xx}(t,\bar x(t),\bar u(t),y(t),Y(t))\big)dt\\ \qquad\qquad + Q(t)\,dW(t) \quad \text{in } [0,T),\\ P(T) = -h_{xx}(\bar x(T)). \end{cases} \qquad (12.6)$$
In (12.6), the Hamiltonian ℍ(·,·,·,·,·) is defined by
$$\begin{aligned} \mathbb H(t,x,u,y_1,y_2) &= \langle y_1, a(t,x,u)\rangle_{\mathbb R^n} + \langle y_2, b(t,x,u)\rangle_{\mathbb R^n} - g(t,x,u),\\ &\quad (t,\omega,x,u,y_1,y_2)\in[0,T]\times\Omega\times\mathbb R^n\times U\times\mathbb R^n\times\mathbb R^n. \end{aligned}$$


Clearly, the equation (12.6) is an ℝ^{n×n}-valued backward stochastic differential equation in which the unknown is a pair of processes (P(·), Q(·)) ∈ L²_𝐅(Ω; C([0,T]; ℝ^{n×n})) × L²_𝐅(0,T; ℝ^{n×n}). Then, associated with the 6-tuple (x̄(·), ū(·), y(·), Y(·), P(·), Q(·)), define
$$\mathcal H(t,x,u) \triangleq \mathbb H(t,x,u,y(t),Y(t)) + \frac12\big\langle P(t)b(t,x,u),\, b(t,x,u)\big\rangle_{\mathbb R^n} - \big\langle P(t)b(t,\bar x(t),\bar u(t)),\, b(t,x,u)\big\rangle_{\mathbb R^n}.$$
The main result in [273] asserts that any optimal pair (x̄(·), ū(·)) satisfies the following maximum condition:
$$\mathcal H(t,\bar x(t),\bar u(t)) = \max_{u\in U}\mathcal H(t,\bar x(t),u), \qquad \text{a.e. } t\in[0,T],\ \text{a.s.}$$
It is easy to see that, in order to establish the Pontryagin-type necessary conditions for an optimal pair (x̄(·), ū(·)) of Problem (OP), we first need to introduce the following H-valued backward stochastic evolution equation²:
$$\begin{cases} dy(t) = -A^* y(t)\,dt - \big(a_x(t,\bar x(t),\bar u(t))^* y(t) + b_x(t,\bar x(t),\bar u(t))^* Y(t) - g_x(t,\bar x(t),\bar u(t))\big)dt\\ \qquad\qquad + Y(t)\,dW(t) \quad \text{in } [0,T),\\ y(T) = -h_x\big(\bar x(T)\big), \end{cases} \qquad (12.7)$$
which will serve as the first order adjoint equation. By Theorem 4.16, the equation (12.7) is well-posed in the sense of transposition solution (note that we do not assume that the filtration 𝐅 is the natural one).

Next, inspired by [273], to deal with the case of a possibly non-convex control region U, for any p ∈ (1,2], we need to introduce the following, formally L(H)-valued, backward stochastic evolution equation:
$$\begin{cases} dP = -(A^*+J^*)P\,dt - P(A+J)\,dt - K^*PK\,dt - (K^*Q+QK)\,dt + F\,dt + Q\,dW(t) & \text{in } [0,T),\\ P(T) = P_T, \end{cases} \qquad (12.8)$$
where (with q = p/(p−1))
$$J \in L^{2q}_{\mathbb F}(0,T;L^\infty(\Omega;L(H))), \qquad K \in L^{2q}_{\mathbb F}(0,T;L^\infty(\Omega;L(H;L_2^0))), \qquad (12.9)$$
$$F \in L^1_{\mathbb F}(0,T;L^p(\Omega;L(H))), \qquad P_T \in L^p_{\mathcal F_T}(\Omega;L(H)). \qquad (12.10)$$

For the special case when H = ℝⁿ, similarly to (12.6), it is easy to see that (12.8) is an ℝ^{n×n} (matrix)-valued backward stochastic differential equation

² Throughout this book, for any operator-valued process (resp. random variable) R, we denote by R* its pointwise dual operator-valued process (resp. random variable). For example, if R ∈ L¹_𝐅(0,T;L²(Ω;L(H))), then R* ∈ L¹_𝐅(0,T;L²(Ω;L(H))), and |R|_{L¹_𝐅(0,T;L²(Ω;L(H)))} = |R*|_{L¹_𝐅(0,T;L²(Ω;L(H)))}.


(which can be regarded as an ℝ^{n²} (vector)-valued backward stochastic differential equation), and therefore the desired well-posedness follows from that for backward stochastic evolution equations valued in Hilbert spaces.

One has to face a real challenge in the study of (12.8) when dim H = ∞, without further assumptions on the data F and P_T. Indeed, as we mentioned in Remark 2.62, in the infinite dimensional setting, although L(H) is still a Banach space, it is neither reflexive (let alone a Hilbert space) nor separable. To the best of the authors' knowledge, the previous literature contains no stochastic integration/evolution equation theory in general Banach spaces that can be employed to treat the well-posedness of (12.8) in the usual sense, even if the filtration 𝐅 is the natural one. For example, the existing results on stochastic integration/evolution equations in UMD Banach spaces (e.g., [328, 329]) do not fit the present case because every UMD Banach space is reflexive.

From the above analysis, it is clear that the key to establishing the Pontryagin-type stochastic maximum principle for optimal pairs of Problem (OP) is to give a suitable notion of solution to the equation (12.8) and to prove the corresponding well-posedness. For this purpose, we shall adopt the stochastic transposition method developed in our previous work [241], which addressed backward stochastic differential equations in ℝⁿ (see also Section 4.3 in Chapter 4 for the well-posedness of H-valued backward stochastic evolution equations in the sense of transposition solutions).

The rest of this chapter is organized as follows. In Section 12.2, we review the Pontryagin maximum principle for optimal control problems of stochastic differential equations. In Section 12.3, a necessary condition for optimal controls for convex control domains is given.
Section 12.4 is devoted to the study of operator-valued backward stochastic evolution equations. In Section 12.5, we establish the Pontryagin-type maximum principle for optimal controls for general control domains. In Section 12.6, we give a sufficient condition for optimal controls. In Section 12.7, we give some integral-type second order necessary conditions for optimal controls. Finally, some remarks and open problems are given in Section 12.8.

12.2 The Case of Finite Dimensions

In this section, we consider the special case that A = 0, H = ℝⁿ and V = ℝ (and hence {W(t)}_{t∈[0,T]} is a one dimensional standard Brownian motion). As we mentioned before, for this case the well-posedness of the equation (12.8) follows from that for ℝ^{n²} (vector)-valued backward stochastic differential equations.

Let (x̄(·), ū(·)) be an optimal pair for Problem (OP). Note that we do not assume that 𝐅 is the natural filtration in this section. Hence, the solution (y(·), Y(·)) ∈ D_𝐅([0,T]; L²(Ω;ℝⁿ)) × L²_𝐅(0,T;ℝⁿ) to (12.5) is understood in the sense of transposition solution. Taking the inner product in ℝ^{n×n} to be ⟨P₁,P₂⟩_{ℝ^{n×n}} = tr(P₁P₂ᵀ) for any n × n matrices P₁ and P₂, as a direct consequence of Theorem 4.16, we see that the equation (12.6) admits a unique transposition solution
$$(P(\cdot),Q(\cdot)) \in D_{\mathbf F}([0,T];L^8(\Omega;\mathbb R^{n\times n}))\times L^8_{\mathbf F}(\Omega;L^2(0,T;\mathbb R^{n\times n})).$$

We have the following Pontryagin-type maximum principle for optimal pairs of Problem (OP).

Theorem 12.2. Let (S1)–(S3) hold (for H = ℝⁿ and V = ℝ) and x₀ ∈ L⁸_{𝓕₀}(Ω;ℝⁿ). Then, for any optimal pair (x̄(·), ū(·)) of Problem (OP), it holds that
$$\begin{aligned} &\mathbb H(t,\bar x(t),\bar u(t),y(t),Y(t)) - \mathbb H(t,\bar x(t),u,y(t),Y(t))\\ &\quad - \frac12\big\langle P(t)\big(b(t,\bar x(t),\bar u(t)) - b(t,\bar x(t),u)\big),\ b(t,\bar x(t),\bar u(t)) - b(t,\bar x(t),u)\big\rangle_{\mathbb R^n} \ge 0,\\ &\qquad\qquad \forall\, u\in U,\ \text{a.e. } t\in[0,T],\ \text{a.s.} \end{aligned} \qquad (12.11)$$

Sketch of the proof of Theorem 12.2: The detailed proof of this theorem is very close to that of [273, Theorem 3] and [371, Theorem 3.2 in Chapter 3] (addressed to the stochastic maximum principle for controlled stochastic differential equations with the natural filtration generated by W(·)). Hence, we only sketch the proof.

Fix any u(·) ∈ 𝒰[0,T] and ε > 0. Similarly to (1.44), let
$$u^\varepsilon(t) = \begin{cases} \bar u(t), & t\in[0,T]\setminus E_\varepsilon,\\ u(t), & t\in E_\varepsilon, \end{cases}$$
where E_ε ⊆ [0,T] is a measurable set with Lebesgue measure m(E_ε) = ε. For ψ = a, b, f and g, we write
$$\begin{cases} \psi_1(t) = \psi_x(t,\bar x(t),\bar u(t)),\\ \psi_{11}(t) = \psi_{xx}(t,\bar x(t),\bar u(t)),\\ \delta\psi(t) = \psi(t,\bar x(t),u(t)) - \psi(t,\bar x(t),\bar u(t)),\\ \delta\psi_1(t) = \psi_x(t,\bar x(t),u(t)) - \psi_x(t,\bar x(t),\bar u(t)). \end{cases}$$
Suppose that x₁^ε(·) and x₂^ε(·) solve respectively the following stochastic differential equations:
$$\begin{cases} dx_1^\varepsilon(t) = a_1(t)x_1^\varepsilon(t)\,dt + \big(b_1(t)x_1^\varepsilon(t) + \chi_{E_\varepsilon}(t)\,\delta b(t)\big)dW(t) & \text{in } (0,T],\\ x_1^\varepsilon(0)=0, \end{cases}$$
and


$$\begin{cases} dx_2^\varepsilon(t) = \Big(a_1(t)x_2^\varepsilon(t) + \chi_{E_\varepsilon}(t)\,\delta a(t) + \frac12 a_{11}(t)\big(x_1^\varepsilon(t),x_1^\varepsilon(t)\big)\Big)dt\\ \qquad\qquad + \Big(b_1(t)x_2^\varepsilon(t) + \chi_{E_\varepsilon}(t)\,\delta b_1(t)x_1^\varepsilon(t) + \frac12 b_{11}(t)\big(x_1^\varepsilon(t),x_1^\varepsilon(t)\big)\Big)dW(t) \quad \text{in } (0,T],\\ x_2^\varepsilon(0)=0. \end{cases}$$
Then, as ε → 0, by an argument very similar to the proof of [371, Theorem 4.4 in Chapter 3] (see [371, pp. 128–134]), one can obtain that
$$\begin{aligned} \mathcal J(u^\varepsilon(\cdot)) - \mathcal J(\bar u(\cdot)) &= \mathbb E\big\langle h_x(\bar x(T)),\, x_1^\varepsilon(T)+x_2^\varepsilon(T)\big\rangle_{\mathbb R^n} + \frac12\,\mathbb E\big\langle h_{xx}(\bar x(T))x_1^\varepsilon(T),\, x_1^\varepsilon(T)\big\rangle_{\mathbb R^n}\\ &\quad + \mathbb E\int_0^T\Big(\big\langle g_1(t),\, x_1^\varepsilon(t)+x_2^\varepsilon(t)\big\rangle_{\mathbb R^n} + \frac12\big\langle g_{11}(t)x_1^\varepsilon(t),\, x_1^\varepsilon(t)\big\rangle_{\mathbb R^n}\\ &\qquad\qquad + \chi_{E_\varepsilon}(t)\,\delta g(t)\Big)dt + o(\varepsilon). \end{aligned} \qquad (12.12)$$
Since (y(·), Y(·)) is the transposition solution to the equation (12.5), we find that
$$-\mathbb E\big\langle h_x(\bar x(T)),\, x_1^\varepsilon(T)\big\rangle_{\mathbb R^n} = \mathbb E\int_0^T\Big(\big\langle g_1(t),\, x_1^\varepsilon(t)\big\rangle_{\mathbb R^n} + \chi_{E_\varepsilon}(t)\big\langle \delta b(t),\, Y(t)\big\rangle_{\mathbb R^n}\Big)dt \qquad (12.13)$$
and
$$\begin{aligned} -\mathbb E\big\langle h_x(\bar x(T)),\, x_2^\varepsilon(T)\big\rangle_{\mathbb R^n} &= \mathbb E\int_0^T\Big[\big\langle g_1(t),\, x_2^\varepsilon(t)\big\rangle_{\mathbb R^n} + \frac12\Big(\big\langle y(t),\, a_{11}(t)\big(x_1^\varepsilon(t),x_1^\varepsilon(t)\big)\big\rangle_{\mathbb R^n}\\ &\qquad + \big\langle Y(t),\, b_{11}(t)\big(x_1^\varepsilon(t),x_1^\varepsilon(t)\big)\big\rangle_{\mathbb R^n}\Big)\\ &\qquad + \chi_{E_\varepsilon}(t)\Big(\big\langle y(t),\,\delta a(t)\big\rangle_{\mathbb R^n} + \big\langle Y(t),\,\delta b_1(t)x_1^\varepsilon(t)\big\rangle_{\mathbb R^n}\Big)\Big]dt. \end{aligned} \qquad (12.14)$$

Further, put x₃^ε(t) = x₁^ε(t)x₁^ε(t)ᵀ (∈ ℝ^{n×n}). By Itô's formula, we get that x₃^ε(·) solves
$$\begin{cases} dx_3^\varepsilon(t) = \Big[a_1(t)x_3^\varepsilon(t) + x_3^\varepsilon(t)a_1(t)^\top + b_1(t)x_3^\varepsilon(t)b_1(t)^\top + \chi_{E_\varepsilon}(t)\,\delta b(t)\delta b(t)^\top\\ \qquad\qquad + \chi_{E_\varepsilon}(t)\big(b_1(t)x_1^\varepsilon(t)\delta b(t)^\top + \delta b(t)x_1^\varepsilon(t)^\top b_1(t)^\top\big)\Big]dt\\ \qquad\quad + \Big[b_1(t)x_3^\varepsilon(t) + x_3^\varepsilon(t)b_1(t)^\top + \chi_{E_\varepsilon}(t)\big(\delta b(t)x_1^\varepsilon(t)^\top + x_1^\varepsilon(t)\delta b(t)^\top\big)\Big]dW(t) \quad \text{in } (0,T],\\ x_3^\varepsilon(0)=0. \end{cases} \qquad (12.15)$$


Using the fact that (P(·), Q(·)) is the transposition solution to the equation (12.6) and noting that the inner product in ℝ^{n×n} is tr(P₁P₂ᵀ) for P₁, P₂ ∈ ℝ^{n×n}, we find that
$$-\mathbb E\,\mathrm{tr}\big(h_{xx}(\bar x(T))x_3^\varepsilon(T)\big) = \mathbb E\int_0^T \mathrm{tr}\Big(\chi_{E_\varepsilon}(t)\,\delta b(t)^\top P(t)\,\delta b(t) - \mathbb H_{xx}(t,\bar x(t),\bar u(t),y(t),Y(t))\,x_3^\varepsilon(t)\Big)dt + o(\varepsilon),$$
which gives that
$$\begin{aligned} -\mathbb E\big\langle h_{xx}(\bar x(T))x_1^\varepsilon(T),\, x_1^\varepsilon(T)\big\rangle_{\mathbb R^n} &= \mathbb E\int_0^T\Big(\chi_{E_\varepsilon}(t)\big\langle P(t)\,\delta b(t),\,\delta b(t)\big\rangle_{\mathbb R^n}\\ &\quad - \big\langle \mathbb H_{xx}(t,\bar x(t),\bar u(t),y(t),Y(t))\,x_1^\varepsilon(t),\, x_1^\varepsilon(t)\big\rangle_{\mathbb R^n}\Big)dt + o(\varepsilon). \end{aligned} \qquad (12.16)$$

From (12.12)–(12.16), we obtain that
$$\begin{aligned} \mathcal J(u^\varepsilon(\cdot)) - \mathcal J(\bar u(\cdot)) = \mathbb E\int_0^T \chi_{E_\varepsilon}(t)\Big(&\mathbb H(t,\bar x(t),\bar u(t),y(t),Y(t)) - \mathbb H(t,\bar x(t),u(t),y(t),Y(t))\\ &- \frac12\big\langle P(t)\,\delta b(t),\,\delta b(t)\big\rangle_{\mathbb R^n}\Big)dt + o(\varepsilon). \end{aligned} \qquad (12.17)$$

Since ū(·) is an optimal control, we have 𝒥(u^ε(·)) − 𝒥(ū(·)) ≥ 0. This, together with (12.17), yields that
$$\mathbb E\int_0^T \chi_{E_\varepsilon}(t)\Big(\mathbb H(t,\bar x(t),\bar u(t),y(t),Y(t)) - \mathbb H(t,\bar x(t),u(t),y(t),Y(t)) - \frac12\big\langle P(t)\,\delta b(t),\,\delta b(t)\big\rangle_{\mathbb R^n}\Big)dt \ge o(\varepsilon),$$
which gives (12.11).

12.3 Necessary Condition for Optimal Controls for Convex Control Regions

In this section, we shall present a necessary condition for optimal controls of Problem (OP) under the following condition:


(S4) The control region U is a convex subset of a separable Hilbert space H̃, and the metric of U is induced by the norm of H̃, i.e., d(u₁,u₂) = |u₁−u₂|_{H̃}.

We begin with the following preliminary result.

Lemma 12.3. If F(·) ∈ L²_𝐅(0,T;H̃) and ū(·) ∈ 𝒰[0,T] are such that
$$\mathrm{Re}\,\mathbb E\int_0^T\big\langle F(t,\cdot),\, u(t,\cdot)-\bar u(t,\cdot)\big\rangle_{\widetilde H}\,dt \le 0 \qquad (12.18)$$
holds for any u(·) ∈ 𝒰[0,T] satisfying u(·) − ū(·) ∈ L²_𝐅(0,T;H̃), then
$$\mathrm{Re}\,\big\langle F(t,\omega),\, u-\bar u(t,\omega)\big\rangle_{\widetilde H} \le 0, \qquad \text{a.e. } (t,\omega)\in[0,T]\times\Omega,\ \forall\, u\in U. \qquad (12.19)$$

Proof: We use a contradiction argument. Suppose that the inequality (12.19) did not hold. Then, there would exist a u₀ ∈ U and an ε > 0 such that
$$\alpha_\varepsilon \triangleq \int_\Omega\int_0^T \chi_{\Lambda_\varepsilon}(t,\omega)\,dt\,d\mathbb P > 0,$$
where
$$\Lambda_\varepsilon \triangleq \big\{(t,\omega)\in[0,T]\times\Omega \;\big|\; \mathrm{Re}\,\langle F(t,\omega),\, u_0-\bar u(t,\omega)\rangle_{\widetilde H} \ge \varepsilon\big\}.$$
For any m ∈ ℕ, define
$$\Lambda_{\varepsilon,m} \triangleq \Lambda_\varepsilon \cap \big\{(t,\omega)\in[0,T]\times\Omega \;\big|\; |\bar u(t,\omega)|_{\widetilde H} \le m\big\}.$$
It is clear that lim_{m→∞} Λ_{ε,m} = Λ_ε. Hence, there is an m_ε ∈ ℕ such that
$$\int_\Omega\int_0^T \chi_{\Lambda_{\varepsilon,m}}(t,\omega)\,dt\,d\mathbb P > \frac{\alpha_\varepsilon}{2} > 0, \qquad \forall\, m\ge m_\varepsilon.$$
Since ⟨F(·), u₀ − ū(·)⟩_{H̃} is 𝐅-adapted, so is the process χ_{Λ_{ε,m}}(·). Define
$$\hat u_{\varepsilon,m}(t,\omega) = u_0\,\chi_{\Lambda_{\varepsilon,m}}(t,\omega) + \bar u(t,\omega)\,\chi_{\Lambda_{\varepsilon,m}^c}(t,\omega), \qquad (t,\omega)\in[0,T]\times\Omega.$$
Noting that |ū(·)|_{H̃} ≤ m on Λ_{ε,m}, we see that û_{ε,m}(·) ∈ 𝒰[0,T] satisfies û_{ε,m}(·) − ū(·) ∈ L²_𝐅(0,T;H̃). Hence, for any m ≥ m_ε, we obtain that
$$\begin{aligned} \mathrm{Re}\,\mathbb E\int_0^T\big\langle F(t),\, \hat u_{\varepsilon,m}(t)-\bar u(t)\big\rangle_{\widetilde H}\,dt &= \int_\Omega\int_0^T \chi_{\Lambda_{\varepsilon,m}}(t,\omega)\,\mathrm{Re}\,\big\langle F(t,\omega),\, u_0-\bar u(t,\omega)\big\rangle_{\widetilde H}\,dt\,d\mathbb P\\ &\ge \varepsilon\int_\Omega\int_0^T \chi_{\Lambda_{\varepsilon,m}}(t,\omega)\,dt\,d\mathbb P \ge \frac{\varepsilon\alpha_\varepsilon}{2} > 0, \end{aligned}$$


which contradicts (12.18). This completes the proof of Lemma 12.3.

In this section, we introduce the following further assumptions on a(·,·,·), b(·,·,·), g(·,·,·) and h(·):

(S5) For a.e. (t,ω) ∈ (0,T) × Ω, the functions a(t,·,·): H × U → H, b(t,·,·): H × U → L₂⁰, g(t,·,·): H × U → ℝ and h(·): H → ℝ are C¹. Moreover, for any (x,u) ∈ H × U and a.e. (t,ω) ∈ (0,T) × Ω,
$$\begin{cases} |a_x(t,x,u)|_{L(H)} + |b_x(t,x,u)|_{L(H;L_2^0)} + |g_x(t,x,u)|_H + |h_x(x)|_H \le C_L,\\ |a_u(t,x,u)|_{L(\widetilde H;H)} + |b_u(t,x,u)|_{L(\widetilde H;L_2^0)} + |g_u(t,x,u)|_{\widetilde H} \le C_L. \end{cases}$$

We have the following necessary condition for optimal controls for Problem (OP) with convex control domains.

Theorem 12.4. Assume that the assumptions (S1)–(S2) and (S4)–(S5) hold. Let x₀ ∈ L²_{𝓕₀}(Ω;H) and let (x̄(·), ū(·)) be an optimal pair for Problem (OP). Then,
$$\begin{aligned} &\mathrm{Re}\,\big\langle a_u(t,\bar x(t),\bar u(t))^* y(t) + b_u(t,\bar x(t),\bar u(t))^* Y(t) - g_u(t,\bar x(t),\bar u(t)),\; u-\bar u(t)\big\rangle_{\widetilde H} \le 0,\\ &\qquad\qquad \text{a.e. } (t,\omega)\in[0,T]\times\Omega,\ \forall\, u\in U, \end{aligned} \qquad (12.20)$$
where (y(·), Y(·)) is the transposition solution to the equation (12.7).

Proof: We use the convex perturbation technique and divide the proof into several steps.

Step 1. For the optimal pair (x̄(·), ū(·)), we fix arbitrarily a control u(·) ∈ 𝒰[0,T] satisfying u(·) − ū(·) ∈ L²_𝐅(0,T;H̃). Since U is convex, we see that
$$u^\varepsilon(\cdot) = \bar u(\cdot) + \varepsilon\big(u(\cdot)-\bar u(\cdot)\big) = (1-\varepsilon)\bar u(\cdot) + \varepsilon u(\cdot) \in \mathcal U[0,T], \qquad \forall\, \varepsilon\in[0,1].$$
Denote by x^ε(·) the state process of (12.2) corresponding to the control u^ε(·). By Proposition 12.1, it follows that
$$|x^\varepsilon|_{C_{\mathbb F}([0,T];L^2(\Omega;H))} \le C\big(1+|x_0|_{L^2_{\mathcal F_0}(\Omega;H)}\big), \qquad \forall\, \varepsilon\in[0,1]. \qquad (12.21)$$
Write x₁^ε(·) = (x^ε(·) − x̄(·))/ε and δu(·) = u(·) − ū(·). Since (x̄(·), ū(·)) satisfies (12.2), it is easy to see that x₁^ε(·) satisfies the following stochastic evolution equation:
$$\begin{cases} dx_1^\varepsilon = \big(Ax_1^\varepsilon + a_1^\varepsilon x_1^\varepsilon + a_2^\varepsilon\,\delta u\big)dt + \big(b_1^\varepsilon x_1^\varepsilon + b_2^\varepsilon\,\delta u\big)dW(t) & \text{in } (0,T],\\ x_1^\varepsilon(0)=0, \end{cases} \qquad (12.22)$$
where for ψ = a, b,


 ∫  ε  ψ (t) =   1 ∫     ψ2ε (t) =

1

ψx (t, x ¯(t) + σεxε1 (t), uε (t))dσ, 0

(12.23)

1

ψu (t, x ¯(t), u ¯(t) + σεδu(t))dσ. 0

Consider the following stochastic evolution equation: ( ) ( ) { dx2 = Ax2 + a1 (t)x2 + a2 (t)δu dt + b1 (t)x2 + b2 (t)δu dW (t) in (0, T ], x2 (0) = 0, (12.24) where for ψ = a, b, ψ1 (t) = ψx (t, x ¯(t), u ¯(t)),

ψ2 (t) = ψu (t, x ¯(t), u ¯(t)).

(12.25)

Step 2. In this step, we shall prove that
$$\lim_{\varepsilon\to 0^+}\big|x_1^\varepsilon - x_2\big|_{L^\infty_{\mathbb F}(0,T;L^2(\Omega;H))} = 0. \qquad (12.26)$$
First, applying Theorem 3.14 to (12.22) and using the assumption (S1), we find that
$$\sup_{t\in[0,T]}\mathbb E|x_1^\varepsilon(t)|_H^2 \le C\,\mathbb E\Big[\Big(\int_0^T |a_2^\varepsilon(s)\,\delta u(s)|_H\,ds\Big)^2 + \int_0^T |b_2^\varepsilon(s)\,\delta u(s)|_{L_2^0}^2\,ds\Big] \le C\,|\bar u-u|^2_{L^2_{\mathbb F}(0,T;\widetilde H)}. \qquad (12.27)$$
By a similar computation, we obtain that
$$\sup_{t\in[0,T]}\mathbb E|x_2(t)|_H^2 \le C\,|\bar u-u|^2_{L^2_{\mathbb F}(0,T;\widetilde H)}. \qquad (12.28)$$
On the other hand, put x₃^ε = x₁^ε − x₂. Then, x₃^ε solves the following equation:
$$\begin{cases} dx_3^\varepsilon = \Big[Ax_3^\varepsilon + a_1^\varepsilon(t)x_3^\varepsilon + \big(a_1^\varepsilon(t)-a_1(t)\big)x_2 + \big(a_2^\varepsilon(t)-a_2(t)\big)\delta u\Big]dt\\ \qquad\quad + \Big[b_1^\varepsilon(t)x_3^\varepsilon + \big(b_1^\varepsilon(t)-b_1(t)\big)x_2 + \big(b_2^\varepsilon(t)-b_2(t)\big)\delta u\Big]dW(t) \quad \text{in } (0,T],\\ x_3^\varepsilon(0)=0. \end{cases} \qquad (12.29)$$
Applying Theorem 3.14 to (12.29), we obtain that
$$\begin{aligned} \max_{t\in[0,T]}\mathbb E|x_3^\varepsilon(t)|_H^2 \le C\,\mathbb E\int_0^T\Big[&\big(|a_1^\varepsilon(s)-a_1(s)|^2_{L(H)} + |b_1^\varepsilon(s)-b_1(s)|^2_{L(H;L_2^0)}\big)|x_2(s)|_H^2\\ &+ \big(|a_2^\varepsilon(s)-a_2(s)|^2_{L(\widetilde H;H)} + |b_2^\varepsilon(s)-b_2(s)|^2_{L(\widetilde H;L_2^0)}\big)|u(s)-\bar u(s)|^2_{\widetilde H}\Big]ds. \end{aligned} \qquad (12.30)$$


Note that (12.27) implies x^ε(·) → x̄(·) (in H) in probability, as ε → 0. Hence, by (12.23), (12.25) and the continuity of a_x(t,·,·), b_x(t,·,·), a_u(t,·,·) and b_u(t,·,·), and noting (12.28), we deduce that
$$\lim_{\varepsilon\to 0^+}\mathbb E\int_0^T\Big[\big(|a_1^\varepsilon(s)-a_1(s)|^2_{L(H)} + |b_1^\varepsilon(s)-b_1(s)|^2_{L(H;L_2^0)}\big)|x_2(s)|_H^2 + \big(|a_2^\varepsilon(s)-a_2(s)|^2_{L(\widetilde H;H)} + |b_2^\varepsilon(s)-b_2(s)|^2_{L(\widetilde H;L_2^0)}\big)|u(s)-\bar u(s)|^2_{\widetilde H}\Big]ds = 0.$$
This, combined with (12.30), gives (12.26).

Step 3. Since (x̄(·), ū(·)) is an optimal pair for Problem (OP), from (12.26), we find that
$$\begin{aligned} 0 &\le \lim_{\varepsilon\to 0^+}\frac{\mathcal J(u^\varepsilon(\cdot)) - \mathcal J(\bar u(\cdot))}{\varepsilon}\\ &= \mathrm{Re}\,\Big[\mathbb E\int_0^T\Big(\big\langle g_1(t),\, x_2(t)\big\rangle_H + \big\langle g_2(t),\, \delta u(t)\big\rangle_{\widetilde H}\Big)dt + \mathbb E\big\langle h_x(\bar x(T)),\, x_2(T)\big\rangle_H\Big], \end{aligned} \qquad (12.31)$$
where g₁(t) = g_x(t,x̄(t),ū(t)) and g₂(t) = g_u(t,x̄(t),ū(t)). By the definition of the transposition solution to (12.7), it follows that
$$-\mathbb E\big\langle h_x(\bar x(T)),\, x_2(T)\big\rangle_H - \mathbb E\int_0^T\big\langle g_1(t),\, x_2(t)\big\rangle_H\,dt = \mathbb E\int_0^T\Big(\big\langle a_2(t)\,\delta u(t),\, y(t)\big\rangle_H + \big\langle b_2(t)\,\delta u(t),\, Y(t)\big\rangle_{L_2^0}\Big)dt. \qquad (12.32)$$
Combining (12.31) and (12.32), we find that
$$\mathrm{Re}\,\mathbb E\int_0^T\big\langle a_2(t)^* y(t) + b_2(t)^* Y(t) - g_2(t),\; u(t)-\bar u(t)\big\rangle_{\widetilde H}\,dt \le 0 \qquad (12.33)$$
holds for any u(·) ∈ 𝒰[0,T] satisfying u(·) − ū(·) ∈ L²_𝐅(0,T;H̃). Hence, by means of Lemma 12.3, from (12.33), we conclude the desired inequality (12.20). This completes the proof of Theorem 12.4.
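The convex perturbation of Step 1 and the difference quotient of Step 3 can be illustrated on a deterministic toy problem. The quadratic cost and the scalar state equation dx/dt = u below are hypothetical choices, not the system of the text; the point is only that at an optimal control the one-sided directional derivative of the cost along u − ū is nonnegative (here zero), so the difference quotients shrink with ε.

```python
import numpy as np

def state(u, T=1.0, N=1000):
    """Toy deterministic state: dx/dt = u(t), x(0) = 0 (explicit Euler)."""
    dt = T / N
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = x[k] + u(k * dt) * dt
    return x

def J(u, T=1.0, N=1000):
    """Toy cost: integral of x(t)^2 + u(t)^2 over [0, T] (Riemann sum)."""
    dt = T / N
    x = state(u, T, N)
    t = np.arange(N) * dt
    return float(np.sum((x[:N] ** 2 + np.vectorize(u)(t) ** 2) * dt))

u_bar = lambda t: 0.0            # optimal control of this toy problem (J >= 0 = J(u_bar))
u     = lambda t: 1.0            # any other admissible control

# Convex perturbation u^eps = u_bar + eps (u - u_bar); the difference quotient
# (J(u^eps) - J(u_bar)) / eps approximates the directional derivative of J
# at u_bar in the direction u - u_bar.
quotients = []
for eps in (0.1, 0.01, 0.001):
    u_eps = lambda t, e=eps: u_bar(t) + e * (u(t) - u_bar(t))
    quotients.append((J(u_eps) - J(u_bar)) / eps)
```

Since the toy cost is quadratic, the quotient equals (4/3)ε up to discretization error and decreases to 0, consistent with the first-order condition that the directional derivative at an optimum is ≥ 0.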

12.4 Operator-Valued Backward Stochastic Evolution Equations

This section is devoted to the well-posedness and regularity of solutions to the operator-valued backward stochastic evolution equation (12.8).


12.4.1 Notions of Solutions

In this subsection, we shall define the solution to (12.8) in the transposition sense. We begin with some simple observations. Any Q ∈ L(L₂⁰;H) induces a formal bilinear functional Ψ(·,·) on H × L₂⁰ as follows:
$$\Psi(h,v) \triangleq \sum_{j=1}^\infty \big\langle Q(e_j\otimes h),\, v e_j\big\rangle_H, \qquad \forall\, h\in H,\ v\in L_2^0. \qquad (12.34)$$
Here, e_j ⊗ h is defined by (2.18) (clearly, e_j ⊗ h ∈ L₂⁰, and therefore Q(e_j ⊗ h) ∈ H). Recall that L₂⁰ = L₂(V;H). It is easy to show that Ψ(·,·) is a bounded bilinear functional on H × L₂⁰ provided that either dim V < ∞ or Q ∈ L₂(L₂⁰;H). When Ψ(·,·) is a bounded bilinear functional on H × L₂⁰, it determines uniquely a bounded linear operator Q̃: H → L₂⁰ satisfying
$$\Psi(h,v) = \big\langle \widetilde Q h,\, v\big\rangle_{L_2^0}, \qquad \forall\, h\in H,\ v\in L_2^0.$$
In this case, we say that Q induces the bounded linear operator Q̃: H → L₂⁰, and by (12.34) we have
$$\big\langle \widetilde Q h,\, v\big\rangle_{L_2^0} = \sum_{j=1}^\infty \big\langle Q(e_j\otimes h),\, v e_j\big\rangle_H, \qquad \forall\, h\in H,\ v\in L_2^0. \qquad (12.35)$$

On the other hand, any Q ∈ L(H;L₂⁰) induces a formal linear operator Q̃: L₂⁰ → H, defined by
$$\widetilde Q\Big(\sum_{i,j=1}^\infty a_{ij}\, e_i\otimes h_j\Big) \triangleq \sum_{i,j=1}^\infty a_{ij}\,(Q h_j)e_i, \qquad \forall\, a_{ij}\in\mathbb C \text{ with } \sum_{i,j=1}^\infty |a_{ij}|^2 < \infty. \qquad (12.36)$$
It is easy to show that, if Q̃ is a bounded linear operator from L₂⁰ to H, then
$$\widetilde Q(v\otimes h) = (Qh)v, \qquad \forall\, v\in V,\ h\in H. \qquad (12.37)$$
In this case, we say that Q induces the bounded linear operator Q̃: L₂⁰ → H. It is easy to show that Q̃ is a bounded linear operator provided that either dim V < ∞ or Q ∈ L₂(H;L₂⁰).

Now, suppose that Q ∈ L(L₂⁰;H) induces a bounded linear operator Q̃: H → L₂⁰ and that Q̃ in turn induces a bounded linear operator Q̂: L₂⁰ → H. Then Q̂ = Q. Conversely, suppose that Q ∈ L(H;L₂⁰) induces a bounded linear operator Q̃: L₂⁰ → H and that Q̃ in turn induces a bounded linear operator Q̂: H → L₂⁰. Then Q̂ = Q.

We shall need the following result.


Proposition 12.5. Any Λ ∈ L₂(V;L₂(H)) induces (uniquely) a bounded linear operator Q ∈ L₂(L₂⁰;H) and a bounded linear operator Q̃ ∈ L₂(H;L₂⁰) satisfying (12.35) and
$$Q(v\otimes h) = (\Lambda v)h, \qquad \forall\, v\in V,\ h\in H. \qquad (12.38)$$
Moreover,
$$|Q|_{L_2(L_2^0;H)} + |\widetilde Q|_{L_2(H;L_2^0)} \le C\,|\Lambda|_{L_2(V;L_2(H))}.$$

Proof: As in (12.36), we define a linear operator Q from L₂⁰ to H by
$$Q\Big(\sum_{i,j=1}^\infty a_{ij}\, e_i\otimes h_j\Big) \triangleq \sum_{i,j=1}^\infty a_{ij}\,(\Lambda e_i)h_j, \qquad \forall\, a_{ij}\in\mathbb C \text{ with } \sum_{i,j=1}^\infty |a_{ij}|^2 < \infty.$$
Since Λ ∈ L₂(V;L₂(H)), it is easy to check that Q ∈ L₂(L₂⁰;H) and
$$|Q|_{L_2(L_2^0;H)} \le C\,|\Lambda|_{L_2(V;L_2(H))}.$$
Then, Q induces a bounded linear operator Q̃ ∈ L₂(H;L₂⁰) satisfying (12.35) and
$$|\widetilde Q|_{L_2(H;L_2^0)} \le C\,|Q|_{L_2(L_2^0;H)}.$$
This completes the proof of Proposition 12.5.

Let us introduce the following two (forward) stochastic evolution equations:
$$\begin{cases} dx_1 = (A+J)x_1\,ds + u_1\,ds + Kx_1\,dW(s) + v_1\,dW(s) & \text{in } (t,T],\\ x_1(t) = \xi_1, \end{cases} \qquad (12.39)$$
and
$$\begin{cases} dx_2 = (A+J)x_2\,ds + u_2\,ds + Kx_2\,dW(s) + v_2\,dW(s) & \text{in } (t,T],\\ x_2(t) = \xi_2. \end{cases} \qquad (12.40)$$
Here ξ₁, ξ₂ ∈ L^{2q}_{𝓕_t}(Ω;H), u₁, u₂ ∈ L²_𝐅(t,T;L^{2q}(Ω;H)), and v₁, v₂ ∈ L²_𝐅(t,T;L^{2q}(Ω;L₂⁰)).

Also, we need to introduce the solution space for (12.8). For this purpose, write

$$\begin{aligned} &D_{\mathbb F,w}([0,T];L^p(\Omega;L(H)))\\ &\;\triangleq \Big\{P(\cdot,\cdot)\;\Big|\; P(\cdot,\cdot)\in\mathcal L_{pd}\big(L^2_{\mathbb F}(0,T;L^{2q}(\Omega;H));\, L^2_{\mathbb F}(0,T;L^{\frac{2p}{p+1}}(\Omega;H))\big),\\ &\qquad\quad P(\cdot,\cdot)\xi\in D_{\mathbb F}([t,T];L^{\frac{2p}{p+1}}(\Omega;H)) \text{ and } |P(\cdot,\cdot)\xi|_{D_{\mathbb F}([t,T];L^{\frac{2p}{p+1}}(\Omega;H))} \le C\,|\xi|_{L^{2q}_{\mathcal F_t}(\Omega;H)}\\ &\qquad\quad \text{for every } t\in[0,T] \text{ and } \xi\in L^{2q}_{\mathcal F_t}(\Omega;H)\Big\} \end{aligned} \qquad (12.41)$$


and
$$\begin{aligned} &L^2_{\mathbb F,w}(0,T;L^p(\Omega;L(L_2^0;H)))\\ &\;\triangleq \Big\{Q(\cdot,\cdot)\;\Big|\; Q(\cdot,\cdot)\in\mathcal L_{pd}\big(L^2_{\mathbb F}(0,T;L^{2q}(\Omega;L_2^0));\, L^1_{\mathbb F}(0,T;L^{\frac{2p}{p+1}}(\Omega;H))\big),\\ &\qquad\quad Q(t,\omega) \text{ induces a bounded linear operator } \widetilde Q(t,\omega): H\to L_2^0 \text{ for a.e. } (t,\omega)\in[0,T]\times\Omega\Big\}. \end{aligned} \qquad (12.42)$$

We now introduce the notion of transposition solution to (12.8) as follows:

Definition 12.6. We call (P(·), Q(·)) ∈ D_{𝔽,w}([0,T];L^p(Ω;L(H))) × L²_{𝔽,w}(0,T;L^p(Ω;L(L₂⁰;H))) a transposition solution to (12.8) if for any t ∈ [0,T], ξ₁, ξ₂ ∈ L^{2q}_{𝓕_t}(Ω;H), u₁(·), u₂(·) ∈ L²_𝐅(t,T;L^{2q}(Ω;H)) and v₁(·), v₂(·) ∈ L²_𝐅(t,T;L^{2q}(Ω;L₂⁰)), it holds that³
$$\begin{aligned} &\mathbb E\big\langle P_T x_1(T),\, x_2(T)\big\rangle_H - \mathbb E\int_t^T\big\langle F(s)x_1(s),\, x_2(s)\big\rangle_H\,ds\\ &= \mathbb E\big\langle P(t)\xi_1,\,\xi_2\big\rangle_H + \mathbb E\int_t^T\big\langle P(s)u_1(s),\, x_2(s)\big\rangle_H\,ds + \mathbb E\int_t^T\big\langle P(s)x_1(s),\, u_2(s)\big\rangle_H\,ds\\ &\quad + \mathbb E\int_t^T\big\langle P(s)K(s)x_1(s),\, v_2(s)\big\rangle_{L_2^0}\,ds + \mathbb E\int_t^T\big\langle P(s)v_1(s),\, K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds\\ &\quad + \mathbb E\int_t^T\big\langle Q(s)v_1(s),\, x_2(s)\big\rangle_H\,ds + \mathbb E\int_t^T\big\langle \widetilde Q(s)x_1(s),\, v_2(s)\big\rangle_{L_2^0}\,ds. \end{aligned} \qquad (12.43)$$
Here, x₁(·) and x₂(·) solve (12.39) and (12.40), respectively.

We have the following uniqueness result for the transposition solution to (12.8).

Theorem 12.7. If p ∈ (1,2], and J, K, F and P_T satisfy (12.9)–(12.10), then the equation (12.8) admits at most one transposition solution (P(·), Q(·)) ∈ D_{𝔽,w}([0,T];L^p(Ω;L(H))) × L^p_{𝔽,w}(Ω;L²(0,T;L(L₂⁰;H))).

We shall give a proof of Theorem 12.7 in Subsection 12.4.3. Clearly, Definition 12.6 looks quite natural. However, the corresponding well-posedness of (12.8) remains unsolved. Because of this, we introduce below a weaker concept of solution to (12.8). Write
$$\mathbb H_t \triangleq L^{2q}_{\mathcal F_t}(\Omega;H)\times L^2_{\mathbb F}(t,T;L^{2q}(\Omega;H))\times L^2_{\mathbb F}(t,T;L^{2q}(\Omega;L_2^0)), \qquad \forall\, t\in[0,T),$$

³ In (12.43), Q̃(·) is the (pointwise defined) operator induced uniquely by Q(·).


and⁴
$$\begin{aligned} \mathcal Q_p[0,T] \triangleq \Big\{\big(Q^{(\cdot)},\, \widehat Q^{(\cdot)}\big)\;\Big|\;\ &Q^{(t)},\, \widehat Q^{(t)}\in\mathcal L\big(\mathbb H_t;\; L^2_{\mathbb F}(t,T;L^{\frac{2p}{p+1}}(\Omega;L_2^0))\big)\\ &\text{and } Q^{(t)}(0,0,\cdot)^* = \widehat Q^{(t)}(0,0,\cdot) \text{ for any } t\in[0,T)\Big\}. \end{aligned} \qquad (12.44)$$

We now define the notion of relaxed transposition solution to (12.8) as follows:

Definition 12.8. We call (P(·), Q^{(·)}, Q̂^{(·)}) ∈ D_{𝔽,w}([0,T];L^p(Ω;L(H))) × 𝒬_p[0,T] a relaxed transposition solution to the equation (12.8) if for any t ∈ [0,T], ξ₁, ξ₂ ∈ L^{2q}_{𝓕_t}(Ω;H), u₁(·), u₂(·) ∈ L²_𝐅(t,T;L^{2q}(Ω;H)) and v₁(·), v₂(·) ∈ L²_𝐅(t,T;L^{2q}(Ω;L₂⁰)), it holds that
$$\begin{aligned} &\mathbb E\big\langle P_T x_1(T),\, x_2(T)\big\rangle_H - \mathbb E\int_t^T\big\langle F(s)x_1(s),\, x_2(s)\big\rangle_H\,ds\\ &= \mathbb E\big\langle P(t)\xi_1,\,\xi_2\big\rangle_H + \mathbb E\int_t^T\big\langle P(s)u_1(s),\, x_2(s)\big\rangle_H\,ds + \mathbb E\int_t^T\big\langle P(s)x_1(s),\, u_2(s)\big\rangle_H\,ds\\ &\quad + \mathbb E\int_t^T\big\langle P(s)K(s)x_1(s),\, v_2(s)\big\rangle_{L_2^0}\,ds + \mathbb E\int_t^T\big\langle P(s)v_1(s),\, K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds\\ &\quad + \mathbb E\int_t^T\big\langle v_1(s),\, \widehat Q^{(t)}(\xi_2,u_2,v_2)(s)\big\rangle_{L_2^0}\,ds + \mathbb E\int_t^T\big\langle Q^{(t)}(\xi_1,u_1,v_1)(s),\, v_2(s)\big\rangle_{L_2^0}\,ds, \end{aligned} \qquad (12.45)$$
where x₁(·) and x₂(·) solve respectively (12.39) and (12.40).

We refer to Remark 12.11 for the relationship between transposition solutions and relaxed transposition solutions. We have the following well-posedness result for the equation (12.8) in the sense of relaxed transposition solution.

Theorem 12.9. Assume that p ∈ (1,2] and that the Banach space L^p_{𝓕_T}(Ω;ℂ) is separable. Then, for any J, K, F and P_T satisfying (12.9)–(12.10), the equation (12.8) admits one and only one relaxed transposition solution (P(·), Q^{(·)}, Q̂^{(·)}) ∈ D_{𝔽,w}([0,T];L^p(Ω;L(H))) × 𝒬_p[0,T]. Furthermore,

⁴ By Theorem 2.73 and noting that L₂⁰ is a Hilbert space, we see that Q^{(t)}(0,0,·)* is a bounded linear operator from L²_𝐅(t,T;L^{2q/(2q−1)}(Ω;L₂⁰))* = L²_𝐅(t,T;L^{2q}(Ω;L₂⁰)) to L²_𝐅(t,T;L^{2q}(Ω;L₂⁰))* = L²_𝐅(t,T;L^{2q/(2q−1)}(Ω;L₂⁰)). Hence, Q^{(t)}(0,0,·)* = Q̂^{(t)}(0,0,·) makes sense.


12 Pontryagin-Type Stochastic Maximum Principle and Beyond

$$\begin{aligned}
&|P|_{L(L^2_F(0,T;L^{2q}(\Omega;H));\,L^2_F(0,T;L^{2p/(p+1)}(\Omega;H)))} + \sup_{t\in[0,T)}\big|\big(Q^{(t)},\widehat Q^{(t)}\big)\big|_{L(\mathcal{H}_t;\,L^2_F(t,T;L^{2p/(p+1)}(\Omega;L^0_2)))^2}\\
&\le C\big(|F|_{L^1_F(0,T;L^p(\Omega;L(H)))} + |P_T|_{L^p_{F_T}(\Omega;L(H))}\big).
\end{aligned}\quad(12.46)$$

The proof of Theorem 12.9 is quite long. We shall prove its “Uniqueness” part in Subsection 12.4.3, and its “Existence and Stability” part in Subsection 12.4.5.

12.4.2 Preliminaries

In this subsection, we present two preliminary results which will be used in the sequel.

Lemma 12.10. Let $p\in(1,2]$, and let $J$ and $K$ satisfy (12.9). Write

$$M_{J,K,q}(\cdot) \triangleq |J(\cdot)|^{2q}_{L^\infty(\Omega;L(H))} + |K(\cdot)|^{2q}_{L^\infty(\Omega;L(H;L^0_2))}.\quad(12.47)$$

Then, for each $t\in[0,T]$, the following three conclusions hold:

1) If $u_2=v_2=0$ in the equation (12.40), then there exists an operator $\mathcal{U}(\cdot,t)\in L\big(L^{2q}_{F_t}(\Omega;H);\,C_F([t,T];L^{2q}(\Omega;H))\big)$ such that the solution to (12.40) can be represented as $x_2(\cdot)=\mathcal{U}(\cdot,t)\xi_2$. Further, for any $t\in[0,T)$, $\xi\in L^{2q}_{F_t}(\Omega;H)$ and $\varepsilon>0$, there is a $\delta\in(0,T-t)$ such that for any $s\in[t,t+\delta]$, it holds that

$$|\mathcal{U}(\cdot,t)\xi-\mathcal{U}(\cdot,s)\xi|_{L^\infty_F(s,T;L^{2q}(\Omega;H))}<\varepsilon.\quad(12.48)$$

2) If $\xi_2=0$ and $v_2=0$ in the equation (12.40), then there exists an operator $\mathcal{V}(\cdot,t)\in L\big(L^{2q}_F(\Omega;L^2(t,T;H));\,C_F([t,T];L^{2q}(\Omega;H))\big)$ such that the solution to (12.40) can be represented as $x_2(\cdot)=\mathcal{V}(\cdot,t)u_2$.

3) If $\xi_2=0$ and $u_2=0$ in the equation (12.40), then there exists an operator $\Xi(\cdot,t)\in L\big(L^{2q}_F(\Omega;L^2(t,T;L^0_2));\,C_F([t,T];L^{2q}(\Omega;H))\big)$ such that the solution to (12.40) can be represented as $x_2(\cdot)=\Xi(\cdot,t)v_2$.

Proof: We prove only the first conclusion. Define $\mathcal{U}(\cdot,t)$ as follows:

$$\mathcal{U}(\cdot,t):L^{2q}_{F_t}(\Omega;H)\to C_F([t,T];L^{2q}(\Omega;H)),\qquad \mathcal{U}(s,t)\xi_2=x_2(s),\quad\forall\,s\in[t,T],$$

where $x_2(\cdot)$ is the mild solution to (12.40) with $u_2=v_2=0$. By Proposition 3.12 and Hölder’s inequality, and noting that $J\in L^{2q}_F(0,T;L^\infty(\Omega;L(H)))$ and $K\in L^{2q}_F(0,T;L^\infty(\Omega;L(H;L^0_2)))$, we obtain that for any $s\in[t,T]$,

$$\begin{aligned}
E|x_2(s)|^{2q}_H &= E\Big|S(s-t)\xi_2+\int_t^sS(s-\sigma)J(\sigma)x_2(\sigma)\,d\sigma+\int_t^sS(s-\sigma)K(\sigma)x_2(\sigma)\,dW(\sigma)\Big|^{2q}_H\\
&\le C\Big[E\big|S(s-t)\xi_2\big|^{2q}_H + E\Big|\int_t^sS(s-\sigma)J(\sigma)x_2(\sigma)\,d\sigma\Big|^{2q}_H + E\Big(\int_t^s\big|S(s-\sigma)K(\sigma)x_2(\sigma)\big|^2_{L^0_2}\,d\sigma\Big)^q\Big]\\
&\le C\Big(E|\xi_2|^{2q}_H + \int_t^sM_{J,K,q}(\sigma)E|x_2(\sigma)|^{2q}_H\,d\sigma\Big).
\end{aligned}$$

This, together with Gronwall’s inequality, implies that

$$|x_2(\cdot)|_{C_F([t,T];L^{2q}(\Omega;H))}\le C|\xi_2|_{L^{2q}_{F_t}(\Omega;H)}.$$

Hence, $\mathcal{U}(\cdot,t)$ is a bounded linear operator from $L^{2q}_{F_t}(\Omega;H)$ to $C_F([t,T];L^{2q}(\Omega;H))$ and $\mathcal{U}(\cdot,t)\xi_2$ solves the equation (12.40) with $u_2=v_2=0$.

On the other hand, by the definition of $\mathcal{U}(\cdot,t)$ and $\mathcal{U}(\cdot,s)$, for each $s\in[t,T]$ and $r\in[s,T]$, we see that

$$\mathcal{U}(r,t)\xi=S(r-t)\xi+\int_t^rS(r-\tau)J(\tau)\mathcal{U}(\tau,t)\xi\,d\tau+\int_t^rS(r-\tau)K(\tau)\mathcal{U}(\tau,t)\xi\,dW(\tau)$$

and

$$\mathcal{U}(r,s)\xi=S(r-s)\xi+\int_s^rS(r-\tau)J(\tau)\mathcal{U}(\tau,s)\xi\,d\tau+\int_s^rS(r-\tau)K(\tau)\mathcal{U}(\tau,s)\xi\,dW(\tau).$$

Hence,

$$\begin{aligned}
&E\big|\mathcal{U}(r,s)\xi-\mathcal{U}(r,t)\xi\big|^{2q}_H\\
&\le CE\Big[\big|S(r-s)\xi-S(r-t)\xi\big|^{2q}_H + \Big|\int_s^rS(r-\tau)J(\tau)\big(\mathcal{U}(\tau,s)\xi-\mathcal{U}(\tau,t)\xi\big)\,d\tau\Big|^{2q}_H\\
&\qquad\quad + \Big|\int_s^rS(r-\tau)K(\tau)\big(\mathcal{U}(\tau,s)\xi-\mathcal{U}(\tau,t)\xi\big)\,dW(\tau)\Big|^{2q}_H\\
&\qquad\quad + \Big|\int_t^sS(r-\tau)J(\tau)\mathcal{U}(\tau,t)\xi\,d\tau\Big|^{2q}_H + \Big|\int_t^sS(s-\tau)K(\tau)\mathcal{U}(\tau,t)\xi\,dW(\tau)\Big|^{2q}_H\Big]\\
&\le CE\big|S(r-s)\xi-S(r-t)\xi\big|^{2q}_H + C\int_s^rM_{J,K,q}(\tau)E\big|\mathcal{U}(\tau,s)\xi-\mathcal{U}(\tau,t)\xi\big|^{2q}_H\,d\tau + C\int_t^sM_{J,K,q}(\tau)E\big|\mathcal{U}(\tau,t)\xi\big|^{2q}_H\,d\tau\\
&\le CE\big|S(r-s)\xi-S(r-t)\xi\big|^{2q}_H + C\int_s^rM_{J,K,q}(\tau)E\big|\mathcal{U}(\tau,s)\xi-\mathcal{U}(\tau,t)\xi\big|^{2q}_H\,d\tau + C\int_t^sM_{J,K,q}(\tau)\,d\tau\,E|\xi|^{2q}_H.
\end{aligned}$$

Then, by Gronwall’s inequality, we find that

$$E\big|\mathcal{U}(r,s)\xi-\mathcal{U}(r,t)\xi\big|^{2q}_H\le C\Big(h(r,s,t)+\int_s^rh(\sigma,s,t)\,d\sigma\Big),$$

where

$$h(r,s,t)=E\big|S(r-s)\xi-S(r-t)\xi\big|^{2q}_H+\int_t^sM_{J,K,q}(\tau)\,d\tau\,E|\xi|^{2q}_H.$$

Further, it is easy to see that $\big|\xi-S(s-t)\xi\big|^{2q}_H\le C|\xi|^{2q}_H$. By Lebesgue’s dominated convergence theorem, we have

$$\lim_{s\to t+0}E\big|\xi-S(s-t)\xi\big|^{2q}_H=0.$$

Hence, there is a $\delta\in(0,T-t)$ such that (12.48) holds for any $s\in[t,t+\delta]$. This completes the proof of Lemma 12.10.

Remark 12.11. It is easy to see that, if $(P(\cdot),Q(\cdot))$ is a transposition solution to (12.8), then $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$ is a relaxed transposition solution to the same equation, where (recall Lemma 12.10 for $\mathcal{U}(\cdot,t)$, $\mathcal{V}(\cdot,t)$ and $\Xi(\cdot,t)$)

$$\begin{cases}
Q^{(t)}(\xi,u,v)=\widetilde Q(\cdot)\mathcal{U}(\cdot,t)\xi+\widetilde Q(\cdot)\mathcal{V}(\cdot,t)u+\dfrac{\widetilde Q(\cdot)\Xi(\cdot,t)+\big(Q(\cdot)^*\Xi(\cdot,t)\big)^*}{2}\,v,\\[2mm]
\widehat Q^{(t)}(\xi,u,v)=Q(\cdot)^*\mathcal{U}(\cdot,t)\xi+Q(\cdot)^*\mathcal{V}(\cdot,t)u+\dfrac{Q(\cdot)^*\Xi(\cdot,t)+\big(\widetilde Q(\cdot)\Xi(\cdot,t)\big)^*}{2}\,v,
\end{cases}$$

for any $(\xi,u,v)\in\mathcal{H}_t$. However, it is unclear how to obtain a transposition solution $(P(\cdot),Q(\cdot))$ to (12.8) by means of its relaxed transposition solution $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$. It seems that this is possible (at least under some mild assumptions) but we cannot do it at this moment.

For any $t\in[0,T]$ and $\lambda\in\rho(A)$, consider the following two forward stochastic evolution equations:

$$\begin{cases}
dx^\lambda_1=(A_\lambda+J)x^\lambda_1\,ds+u_1\,ds+Kx^\lambda_1\,dW(s)+v_1\,dW(s) &\text{in }(t,T],\\
x^\lambda_1(t)=\xi_1
\end{cases}\quad(12.49)$$

and

$$\begin{cases}
dx^\lambda_2=(A_\lambda+J)x^\lambda_2\,ds+u_2\,ds+Kx^\lambda_2\,dW(s)+v_2\,dW(s) &\text{in }(t,T],\\
x^\lambda_2(t)=\xi_2.
\end{cases}\quad(12.50)$$
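The equations (12.49)–(12.50) involve the Yosida approximation of $A$; the facts about it used in the arguments below are standard semigroup theory, recalled here for the reader's convenience (a summary of classical results, not specific to this book):

```latex
% For \lambda \in \rho(A), the Yosida approximation of A and the
% associated (uniformly continuous) semigroup are
A_\lambda \;=\; \lambda A(\lambda I - A)^{-1} \,\in\, L(H),
\qquad
S_\lambda(t) \;=\; e^{tA_\lambda}.
% Standard properties: for every x \in H,
\lim_{\lambda\to\infty} S_\lambda(t)x \;=\; S(t)x
\quad \text{uniformly in } t \text{ on compact intervals},
% and A_\lambda x \to Ax for x \in D(A).
```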

Here $(\xi_1,u_1,v_1)$ (resp. $(\xi_2,u_2,v_2)$) is the same as that in (12.39) (resp. (12.40)). We have the following result:

Lemma 12.12. Under the assumptions in Lemma 12.10, the solutions of (12.49) and (12.50) satisfy

$$\begin{cases}
\lim_{\lambda\to\infty}x^\lambda_1(\cdot)=x_1(\cdot) &\text{in }C_F([t,T];L^{2q}(\Omega;H)),\\
\lim_{\lambda\to\infty}x^\lambda_2(\cdot)=x_2(\cdot) &\text{in }C_F([t,T];L^{2q}(\Omega;H)).
\end{cases}\quad(12.51)$$

Here $x_1(\cdot)$ and $x_2(\cdot)$ are solutions of (12.39) and (12.40), respectively.

Proof: Clearly, for any $s\in[t,T]$, it holds that

$$\begin{aligned}
E\big|x_1(s)-x^\lambda_1(s)\big|^{2q}_H = E\Big|&\big(S(s-t)-S_\lambda(s-t)\big)\xi_1+\int_t^s\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)u_1(\sigma)\,d\sigma\\
&+\int_t^s\big(S(s-\sigma)J(\sigma)x_1(\sigma)-S_\lambda(s-\sigma)J(\sigma)x^\lambda_1(\sigma)\big)\,d\sigma\\
&+\int_t^s\big(S(s-\sigma)K(\sigma)x_1(\sigma)-S_\lambda(s-\sigma)K(\sigma)x^\lambda_1(\sigma)\big)\,dW(\sigma)\\
&+\int_t^s\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)v_1(\sigma)\,dW(\sigma)\Big|^{2q}_H.
\end{aligned}$$

Since $A_\lambda$ is the Yosida approximation of $A$, one can find a positive constant $C=C(A,T)$, independent of $\lambda$, such that

$$|S_\lambda(\cdot)|_{L^\infty(0,T;L(H))}\le C.\quad(12.52)$$

Hence,

$$\begin{aligned}
&E\Big|\int_t^s\big(S(s-\sigma)J(\sigma)x_1(\sigma)-S_\lambda(s-\sigma)J(\sigma)x^\lambda_1(\sigma)\big)\,d\sigma\Big|^{2q}_H\\
&\le CE\Big|\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)J(\sigma)x_1(\sigma)\big|_H\,d\sigma\Big|^{2q} + CE\Big|\int_t^s\big|S_\lambda(s-\sigma)J(\sigma)\big[x_1(\sigma)-x^\lambda_1(\sigma)\big]\big|_H\,d\sigma\Big|^{2q}\\
&\le CE\Big|\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)J(\sigma)x_1(\sigma)\big|_H\,d\sigma\Big|^{2q} + CE\int_t^s|J(\sigma)|^{2q}_{L^\infty(\Omega;L(H))}\big|x_1(\sigma)-x^\lambda_1(\sigma)\big|^{2q}_H\,d\sigma.
\end{aligned}$$

It follows from Proposition 3.12 that

$$\begin{aligned}
&E\Big|\int_t^s\big(S(s-\sigma)K(\sigma)x_1(\sigma)-S_\lambda(s-\sigma)K(\sigma)x^\lambda_1(\sigma)\big)\,dW(\sigma)\Big|^{2q}_H\\
&\le CE\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)K(\sigma)x_1(\sigma)\big|^{2q}_{L^0_2}\,d\sigma + CE\int_t^s\big|S_\lambda(s-\sigma)K(\sigma)\big(x_1(\sigma)-x^\lambda_1(\sigma)\big)\big|^{2q}_{L^0_2}\,d\sigma\\
&\le CE\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)K(\sigma)x_1(\sigma)\big|^{2q}_{L^0_2}\,d\sigma + CE\int_t^s|K(\sigma)|^{2q}_{L^\infty(\Omega;L(H;L^0_2))}\big|x_1(\sigma)-x^\lambda_1(\sigma)\big|^{2q}_H\,d\sigma.
\end{aligned}$$

Hence, for $t\le s\le T$ (recall (12.47) for $M_{J,K,q}(\cdot)$),

$$E\big|x_1(s)-x^\lambda_1(s)\big|^{2q}_H\le\Lambda(\lambda,s)+C\int_t^sM_{J,K,q}(\sigma)E\big|x_1(\sigma)-x^\lambda_1(\sigma)\big|^{2q}_H\,d\sigma.$$

Here

$$\begin{aligned}
\Lambda(\lambda,s)&=CE\big|\big(S(s-t)-S_\lambda(s-t)\big)\xi_1\big|^{2q}_H + CE\Big|\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)u_1(\sigma)\big|_H\,d\sigma\Big|^{2q}\\
&\quad+CE\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)v_1(\sigma)\big|^{2q}_H\,d\sigma + CE\Big|\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)J(\sigma)x_1(\sigma)\big|_H\,d\sigma\Big|^{2q}\\
&\quad+CE\int_t^s\big|\big(S(s-\sigma)-S_\lambda(s-\sigma)\big)K(\sigma)x_1(\sigma)\big|^{2q}_H\,d\sigma.
\end{aligned}$$

By Gronwall’s inequality, it follows that

$$E\big|x_1(s)-x^\lambda_1(s)\big|^{2q}_H\le\Lambda(\lambda,s)+C\int_t^se^{C(s-\tau)}\Lambda(\lambda,\tau)\,d\tau,\qquad t\le s\le T.$$

Since $A_\lambda$ is the Yosida approximation of $A$, we see that $\lim_{\lambda\to\infty}\Lambda(\lambda,s)=0$, which implies that

$$\lim_{\lambda\to\infty}\big|x^\lambda_1(\cdot)-x_1(\cdot)\big|_{C_F([t,T];L^{2q}(\Omega;H))}=0.$$

This leads to the first equality in (12.51). The second equality in (12.51) can be proved similarly. This completes the proof of Lemma 12.12.


12.4.3 Proof of the Uniqueness Results

This subsection is devoted to proving the uniqueness of both the transposition solution and the relaxed transposition solution to the equation (12.8). To begin with, let us prove the following lemma.

Lemma 12.13. The set

$$R\triangleq\big\{x_2(\cdot)\ \big|\ x_2(\cdot)\text{ solves (12.40) with }t=0,\ \xi_2=0,\ v_2=0\text{ and some }u_2\in L^2_F(0,T;L^{2q}(\Omega;H))\big\}$$

is dense in $L^{2q}_F(\Omega;L^4(0,T;H))$.

Proof: Let us prove Lemma 12.13 by contradiction. If this is not the case, then we could find a nonzero $\rho\in L^{\frac{2p}{1+p}}_F(\Omega;L^{\frac43}(0,T;H))$ such that

$$E\int_0^T\langle\rho,x_2\rangle_H\,ds=0,\qquad\text{for any }x_2\in R.\quad(12.53)$$

Let us consider the following $H$-valued backward stochastic evolution equation:

$$\begin{cases}
dy=-A^*y\,dt+\big(\rho-J^*y-K^*Y\big)\,dt+Y\,dW(t) &\text{in }[0,T),\\
y(T)=0.
\end{cases}\quad(12.54)$$

The solution to (12.54) is understood in the transposition sense. By Theorem 4.16, the equation (12.54) admits a unique transposition solution $(y(\cdot),Y(\cdot))\in D_F([0,T];L^{\frac{2p}{p+1}}(\Omega;H))\times L^2_F(0,T;L^{\frac{2p}{p+1}}(\Omega;L^0_2))$. Hence, for any $\varphi_1(\cdot)\in L^1_F(0,T;L^{2q}(\Omega;H))$ and $\varphi_2(\cdot)\in L^{2q}_F(\Omega;L^2(0,T;L^0_2))$, it holds that

$$-E\int_0^T\big\langle z(s),\rho(s)-J(s)^*y(s)-K(s)^*Y(s)\big\rangle_H\,ds = E\int_0^T\langle\varphi_1(s),y(s)\rangle_H\,ds+E\int_0^T\langle\varphi_2(s),Y(s)\rangle_{L^0_2}\,ds,\quad(12.55)$$

where $z(\cdot)$ solves

$$\begin{cases}
dz=(Az+\varphi_1)\,dt+\varphi_2\,dW(t) &\text{in }(0,T],\\
z(0)=0.
\end{cases}\quad(12.56)$$

In particular, for any $x_2(\cdot)$ solving (12.40) with $t=0$, $\xi_2=0$, $v_2=0$ and an arbitrarily given $u_2\in L^2_F(0,T;L^{2q}(\Omega;H))$, we choose $z=x_2$, $\varphi_1=Jx_2+u_2$ and $\varphi_2=Kx_2$. By (12.55), it follows that for all $u_2\in L^2_F(0,T;L^{2q}(\Omega;H))$,

$$-E\int_0^T\langle x_2(s),\rho(s)\rangle_H\,ds=E\int_0^T\langle u_2(s),y(s)\rangle_H\,ds.\quad(12.57)$$

By (12.57) and recalling (12.53), we conclude that $y(\cdot)=0$. Hence, (12.55) is reduced to

$$-E\int_0^T\big\langle z(s),\rho(s)-K(s)^*Y(s)\big\rangle_H\,ds=E\int_0^T\langle\varphi_2(s),Y(s)\rangle_{L^0_2}\,ds.\quad(12.58)$$

Choosing $\varphi_2(\cdot)=0$ in (12.56) and (12.58), we obtain that for every $\varphi_1(\cdot)\in L^1_F(0,T;L^{2q}(\Omega;H))$,

$$E\int_0^T\Big\langle\int_0^sS(s-\sigma)\varphi_1(\sigma)\,d\sigma,\ \rho(s)-K(s)^*Y(s)\Big\rangle_H\,ds=0.\quad(12.59)$$

Hence,

$$\int_\sigma^TS(s-\sigma)\big(\rho(s)-K(s)^*Y(s)\big)\,ds=0,\qquad\forall\,\sigma\in[0,T].\quad(12.60)$$

Then, for any given $\lambda_0\in\rho(A)$ and $\sigma\in[0,T]$, we have

$$\int_\sigma^TS(s-\sigma)(\lambda_0-A)^{-1}\big(\rho(s)-K(s)^*Y(s)\big)\,ds=(\lambda_0-A)^{-1}\int_\sigma^TS(s-\sigma)\big(\rho(s)-K(s)^*Y(s)\big)\,ds=0.\quad(12.61)$$

Differentiating the equality (12.61) with respect to $\sigma$, and noting (12.60), we see that

$$\begin{aligned}
(\lambda_0-A)^{-1}\big(\rho(\sigma)-K(\sigma)^*Y(\sigma)\big)
&=-\int_\sigma^TS(s-\sigma)A(\lambda_0-A)^{-1}\big(\rho(s)-K(s)^*Y(s)\big)\,ds\\
&=\int_\sigma^TS(s-\sigma)\big(\rho(s)-K(s)^*Y(s)\big)\,ds-\lambda_0\int_\sigma^TS(s-\sigma)(\lambda_0-A)^{-1}\big(\rho(s)-K(s)^*Y(s)\big)\,ds\\
&=0,\qquad\forall\,\sigma\in[0,T].
\end{aligned}$$

Therefore,

$$\rho(\cdot)=K(\cdot)^*Y(\cdot).\quad(12.62)$$

By (12.62), the equation (12.54) is reduced to

$$\begin{cases}
dy=-A^*y\,dt-J^*y\,dt+Y\,dW(t) &\text{in }[0,T),\\
y(T)=0.
\end{cases}\quad(12.63)$$

It is clear that the unique transposition solution to (12.63) is $(y(\cdot),Y(\cdot))=(0,0)$. Hence, by (12.62), we conclude that $\rho(\cdot)=0$, which is a contradiction. Therefore, $R$ is dense in $L^{2q}_F(\Omega;L^4(0,T;H))$.


Now we give the proof of Theorem 12.7.

Proof of Theorem 12.7: Assume that $(P(\cdot),Q(\cdot))$ and $(\overline P(\cdot),\overline Q(\cdot))$ are two transposition solutions to the equation (12.8). Then, by Definition 12.6, it follows that, for any $t\in[0,T]$,

$$\begin{aligned}
0&=E\big\langle\big(\overline P(t)-P(t)\big)\xi_1,\xi_2\big\rangle_H + E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)u_1(s),x_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)x_1(s),u_2(s)\big\rangle_H\,ds + E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)K(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)v_1(s),K(s)x_2(s)+v_2(s)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle\big(\overline Q(s)-Q(s)\big)v_1(s),x_2(s)\big\rangle_H\,ds + E\int_t^T\big\langle\big(\widetilde{\overline Q}(s)-\widetilde Q(s)\big)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds,
\end{aligned}\quad(12.64)$$

where $\widetilde Q(\cdot)$ and $\widetilde{\overline Q}(\cdot)$ are respectively the (pointwise defined) operators induced uniquely by $Q(\cdot)$ and $\overline Q(\cdot)$.

Choosing $u_1=v_1=0$ and $u_2=v_2=0$ in the equations (12.39) and (12.40), respectively, by (12.64), we obtain that, for any $t\in[0,T]$,

$$0=E\big\langle\big(\overline P(t)-P(t)\big)\xi_1,\xi_2\big\rangle_H,\qquad\forall\,\xi_1,\xi_2\in L^{2q}_{F_t}(\Omega;H).$$

Hence, we find that $\overline P(\cdot)=P(\cdot)$. By this, it is easy to see that (12.64) becomes the following:

$$0=E\int_t^T\big\langle\big(\overline Q(s)-Q(s)\big)v_1(s),x_2(s)\big\rangle_H\,ds + E\int_t^T\big\langle\big(\widetilde{\overline Q}(s)-\widetilde Q(s)\big)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds,\qquad\forall\,t\in[0,T].\quad(12.65)$$

Choosing $t=0$, $\xi_2=0$ and $v_2=0$ in the equation (12.40), we see that (12.65) becomes

$$0=E\int_0^T\big\langle\big(\overline Q(s)-Q(s)\big)v_1(s),x_2(s)\big\rangle_H\,ds.\quad(12.66)$$

Fix any $v_1(\cdot)\in L^2_F(0,T;L^{2q}(\Omega;L^0_2))$ and $N>0$. Write

$$v^N_1(t,\omega)=\chi_{\{|(\overline Q(\cdot)-Q(\cdot))v_1(\cdot)|_H\le N\}}(t,\omega)\,v_1(t,\omega).$$

Replacing $v_1$ in (12.66) by $v^N_1$ and noting that both $Q(\cdot)$ and $\overline Q(\cdot)$ are pointwisely defined, we obtain that

$$0=E\int_0^T\big\langle\chi_{\{|(\overline Q(\cdot)-Q(\cdot))v_1(\cdot)|_H\le N\}}(s)\big(\overline Q(s)-Q(s)\big)v_1(s),x_2(s)\big\rangle_H\,ds.$$

This, together with Lemma 12.13, yields that, for any $v_1(\cdot)\in L^2_F(0,T;L^{2q}(\Omega;L^0_2))$ and $N>0$,

$$\chi_{\{|(\overline Q(\cdot)-Q(\cdot))v_1(\cdot)|_H\le N\}}(\cdot)\big(\overline Q(\cdot)-Q(\cdot)\big)v_1(\cdot)=0,\qquad\text{a.e. }(t,\omega)\in(0,T)\times\Omega.$$

Hence $\overline Q(\cdot)=Q(\cdot)$. This completes the proof of Theorem 12.7.

Next, we prove the uniqueness result in Theorem 12.9.

Proof of the “Uniqueness” Part in Theorem 12.9: Assume that $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$ and $\big(\overline P(\cdot),\overline Q^{(\cdot)},\widehat{\overline Q}^{(\cdot)}\big)$ are two relaxed transposition solutions to the equation (12.8). Then, by (12.45), for any $t\in[0,T]$, it holds that

$$\begin{aligned}
0&=E\big\langle\big(\overline P(t)-P(t)\big)\xi_1,\xi_2\big\rangle_H + E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)u_1(s),x_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)x_1(s),u_2(s)\big\rangle_H\,ds + E\int_t^T\big\langle\big(\overline P(s)-P(s)\big)K(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle v_1(s),\big(\overline P(s)-P(s)\big)\big(K(s)x_2(s)+v_2(s)\big)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle v_1(s),\big(\widehat{\overline Q}^{(t)}-\widehat Q^{(t)}\big)(\xi_2,u_2,v_2)(s)\big\rangle_{L^0_2}\,ds + E\int_t^T\big\langle\big(\overline Q^{(t)}-Q^{(t)}\big)(\xi_1,u_1,v_1)(s),v_2(s)\big\rangle_{L^0_2}\,ds.
\end{aligned}\quad(12.67)$$

Choosing $u_1=u_2=0$ and $v_1=v_2=0$ respectively in the test equations (12.39) and (12.40), by (12.67), we obtain that, for any $t\in[0,T]$,

$$0=E\big\langle\big(\overline P(t)-P(t)\big)\xi_1,\xi_2\big\rangle_H,\qquad\forall\,\xi_1,\xi_2\in L^{2q}_{F_t}(\Omega;H).$$

Hence, we find that $\overline P(\cdot)=P(\cdot)$. By this, it is easy to see that (12.67) becomes

$$0=E\int_t^T\big\langle v_1(s),\big(\widehat{\overline Q}^{(t)}-\widehat Q^{(t)}\big)(\xi_2,u_2,v_2)(s)\big\rangle_{L^0_2}\,ds + E\int_t^T\big\langle\big(\overline Q^{(t)}-Q^{(t)}\big)(\xi_1,u_1,v_1)(s),v_2(s)\big\rangle_{L^0_2}\,ds,\qquad\forall\,t\in[0,T].\quad(12.68)$$

Choosing $v_2=0$ in the test equation (12.40), we see that (12.68) becomes

$$0=E\int_t^T\big\langle v_1(s),\big(\widehat{\overline Q}^{(t)}-\widehat Q^{(t)}\big)(\xi_2,u_2,0)(s)\big\rangle_{L^0_2}\,ds.\quad(12.69)$$

Noting that $v_1$ is arbitrary in $L^2_F(0,T;L^{2q}(\Omega;L^0_2))$, we conclude from (12.69) that $\widehat{\overline Q}^{(t)}(\cdot,\cdot,0)=\widehat Q^{(t)}(\cdot,\cdot,0)$. Similarly, $\overline Q^{(t)}(\cdot,\cdot,0)=Q^{(t)}(\cdot,\cdot,0)$. Hence,

$$0=E\int_t^T\big\langle v_1(s),\big(\widehat{\overline Q}^{(t)}-\widehat Q^{(t)}\big)(0,0,v_2)(s)\big\rangle_{L^0_2}\,ds + E\int_t^T\big\langle\big(\overline Q^{(t)}-Q^{(t)}\big)(0,0,v_1)(s),v_2(s)\big\rangle_{L^0_2}\,ds.\quad(12.70)$$

Since $Q^{(t)}(0,0,\cdot)^*=\widehat Q^{(t)}(0,0,\cdot)$ and $\overline Q^{(t)}(0,0,\cdot)^*=\widehat{\overline Q}^{(t)}(0,0,\cdot)$, from (12.70), we find that

$$0=2E\int_t^T\big\langle v_1(s),\big(\widehat{\overline Q}^{(t)}-\widehat Q^{(t)}\big)(0,0,v_2)(s)\big\rangle_{L^0_2}\,ds,\quad(12.71)$$

which implies that $\overline Q^{(t)}(0,0,\cdot)=Q^{(t)}(0,0,\cdot)$ and $\widehat{\overline Q}^{(t)}(0,0,\cdot)=\widehat Q^{(t)}(0,0,\cdot)$. Hence $\overline Q^{(t)}(\cdot,\cdot,\cdot)=Q^{(t)}(\cdot,\cdot,\cdot)$ and $\widehat{\overline Q}^{(t)}(\cdot,\cdot,\cdot)=\widehat Q^{(t)}(\cdot,\cdot,\cdot)$. This completes the proof of the “Uniqueness” part in Theorem 12.9.

12.4.4 Well-Posedness Result for a Special Case

This subsection is addressed to proving the following well-posedness result for the transposition solutions to the operator-valued backward stochastic evolution equation (12.8) with some special data $P_T$ and $F$.

Theorem 12.14. If $p\in(1,2]$, $F\in L^1_F(0,T;L^p(\Omega;L_2(H)))$, $P_T\in L^p_{F_T}(\Omega;L_2(H))$ and $J$, $K$ satisfy (12.9), then the equation (12.8) admits a unique transposition solution $(P(\cdot),Q(\cdot))$ with the following regularity:

$$(P(\cdot),Q(\cdot))\in D_F([0,T];L^p(\Omega;L_2(H)))\times L^2_F(0,T;L^p(\Omega;L_2(L^0_2;H))).$$

Furthermore,

$$|(P,Q)|_{D_F([0,T];L^p(\Omega;L_2(H)))\times L^2_F(0,T;L^p(\Omega;L_2(L^0_2;H)))}\le C\big(|F|_{L^1_F(0,T;L^p(\Omega;L_2(H)))}+|P_T|_{L^p_{F_T}(\Omega;L_2(H))}\big).\quad(12.72)$$

Proof: We consider only the case that $H$ is a real Hilbert space (the case of complex Hilbert spaces can be treated similarly). We divide the proof into several steps.

Step 1. Define a family of operators $\{\mathcal{T}(t)\}_{t\ge0}$ on $L_2(H)$ as follows:

$$\mathcal{T}(t)O=S(t)OS^*(t),\qquad\forall\,O\in L_2(H).$$

We claim that $\{\mathcal{T}(t)\}_{t\ge0}$ is a $C_0$-semigroup on $L_2(H)$. Indeed, for any $O\in L_2(H)$ and nonnegative $s$ and $t$, we have

$$\mathcal{T}(t+s)O=S(t+s)OS^*(t+s)=S(t)S(s)OS^*(s)S^*(t)=\mathcal{T}(t)\mathcal{T}(s)O.$$

Hence, $\{\mathcal{T}(t)\}_{t\ge0}$ is a semigroup on $L_2(H)$. Next, we choose an orthonormal basis $\{h_i\}^\infty_{i=1}$ of $H$. For any $O\in L_2(H)$ and $t\in[0,\infty)$,

$$\begin{aligned}
\lim_{s\to t^+}\big|\mathcal{T}(s)O-\mathcal{T}(t)O\big|^2_{L_2(H)}
&\le|S(t)|^2_{L(H)}\lim_{s\to t^+}\big|S(s-t)OS^*(s-t)-O\big|^2_{L_2(H)}|S^*(t)|^2_{L(H)}\\
&\le|S(t)|^4_{L(H)}\lim_{s\to t^+}\sum^\infty_{i=1}\big|S(s-t)OS^*(s-t)h_i-Oh_i\big|^2_H\\
&\le2|S(t)|^4_{L(H)}\lim_{s\to t^+}\sum^\infty_{i=1}\Big(\big|S(s-t)OS^*(s-t)h_i-S(s-t)Oh_i\big|^2_H+\big|S(s-t)Oh_i-Oh_i\big|^2_H\Big).
\end{aligned}\quad(12.73)$$

For the first series in the right hand side of (12.73), we have

$$\begin{aligned}
\sum^\infty_{i=1}\big|S(s-t)OS^*(s-t)h_i-S(s-t)Oh_i\big|^2_H
&\le C\sum^\infty_{i=1}\big|OS^*(s-t)h_i-Oh_i\big|^2_H=C\big|OS^*(s-t)-O\big|^2_{L_2(H)}\\
&=C\big|S(s-t)O^*-O^*\big|^2_{L_2(H)}=C\sum^\infty_{i=1}\big|\big(S(s-t)O^*-O^*\big)h_i\big|^2_H.
\end{aligned}$$

For each $i\in\mathbb{N}$,

$$\big|\big(S(s-t)O^*-O^*\big)h_i\big|^2_H\le2\big(\big|S(s-t)O^*h_i\big|^2_H+\big|O^*h_i\big|^2_H\big)\le C\big|O^*h_i\big|^2_H.$$

It is clear that

$$\sum^\infty_{i=1}\big|O^*h_i\big|^2_H=|O^*|^2_{L_2(H)}=|O|^2_{L_2(H)}.$$

Hence,

$$\lim_{s\to t^+}\sum^\infty_{i=1}\big|S(s-t)OS^*(s-t)h_i-S(s-t)Oh_i\big|^2_H\le C\lim_{s\to t^+}\sum^\infty_{i=1}\big|OS^*(s-t)h_i-Oh_i\big|^2_H=C\sum^\infty_{i=1}\lim_{s\to t^+}\big|OS^*(s-t)h_i-Oh_i\big|^2_H=0.\quad(12.74)$$

Similarly, it follows that

$$\lim_{s\to t^+}\sum^\infty_{i=1}\big|S(s-t)Oh_i-Oh_i\big|^2_H=0.\quad(12.75)$$

From (12.73)–(12.75), we find that

$$\lim_{s\to t^+}\big|\mathcal{T}(s)O-\mathcal{T}(t)O\big|^2_{L_2(H)}=0,\qquad\forall\,t\in[0,\infty)\text{ and }O\in L_2(H).$$

Similarly,

$$\lim_{s\to t^-}\big|\mathcal{T}(s)O-\mathcal{T}(t)O\big|^2_{L_2(H)}=0,\qquad\forall\,t\in(0,\infty)\text{ and }O\in L_2(H).$$

Hence, $\{\mathcal{T}(t)\}_{t\ge0}$ is a $C_0$-semigroup on $L_2(H)$.

Step 2. Denote by $\mathcal{A}$ the infinitesimal generator of $\{\mathcal{T}(t)\}_{t\ge0}$. Then, $\mathcal{A}^*$ generates a $C_0$-semigroup on $L_2(H)$. We consider the following $L_2(H)$-valued backward stochastic evolution equation⁵:

$$\begin{cases}
dP=-\mathcal{A}^*P\,dt+f(t,P,\Lambda)\,dt+\Lambda\,dW &\text{in }[0,T),\\
P(T)=P_T,
\end{cases}\quad(12.76)$$

where

$$f(t,P,\Lambda)=-J^*P-PJ-K^*PK-K^*\Lambda-\Lambda K+F.\quad(12.77)$$

Note that $L_2(H)$ is a Hilbert space, and the Hilbert spaces $L_2(V;L_2(H))$ and $L_2(L^0_2;H)\,(=L_2(L_2(V;H);H))$ are isomorphic. Hence, the composition operator $\Lambda K$ in (12.77) makes sense, and the function $f(\cdot,\cdot,\cdot)$ satisfies Condition 4.1, in which the Hilbert space $H$ is replaced by $L_2(H)$. By Theorem 4.19, the equation (12.76) admits one and only one transposition solution $(P,\Lambda)\in D_F([0,T];L^p(\Omega;L_2(H)))\times L^2_F(0,T;L^p(\Omega;L_2(V;L_2(H))))$ (in the sense of Definition 4.13). Further, $(P,\Lambda)$ satisfies

$$|(P,\Lambda)|_{D_F([0,T];L^p(\Omega;L_2(H)))\times L^2_F(0,T;L^p(\Omega;L_2(V;L_2(H))))}\le C\big(|F|_{L^1_F(0,T;L^p(\Omega;L_2(H)))}+|P_T|_{L^p_{F_T}(\Omega;L_2(H))}\big).\quad(12.78)$$

Denote by $\mathbb{T}(\cdot)$ the tensor product of $x_1(\cdot)$ and $x_2(\cdot)$, i.e., $\mathbb{T}(\cdot)=x_1(\cdot)\otimes x_2(\cdot)$, where $x_1$ and $x_2$ solve respectively (12.39) and (12.40). Clearly, $\mathbb{T}(t,\omega)\in L_2(H)$. We shall derive below the stochastic evolution equation (valued in $L_2(H)$) for which $\mathbb{T}(\cdot)$ is the mild solution. For any $\lambda\in\rho(A)$, define a family of operators $\{\mathcal{T}_\lambda(t)\}_{t\ge0}$ on $L_2(H)$ as follows:

⁵ Generally, in the equation (12.76) we do NOT have $\mathcal{A}P=AP+PA^*$ (actually, we do not need this equality for the definition of the mild solution to (12.76)). Nevertheless, this equality does hold when $A\in L(H)$.


$$\mathcal{T}_\lambda(t)O=S_\lambda(t)OS^*_\lambda(t),\qquad\forall\,O\in L_2(H).$$

By the result proved in Step 1, it follows that $\{\mathcal{T}_\lambda(t)\}_{t\ge0}$ is a $C_0$-semigroup on $L_2(H)$. Further, for any $O\in L_2(H)$, we have

$$\lim_{t\to0^+}\frac{\mathcal{T}_\lambda(t)O-O}{t}=\lim_{t\to0^+}\frac{S_\lambda(t)OS^*_\lambda(t)-O}{t}=\lim_{t\to0^+}\frac{S_\lambda(t)OS^*_\lambda(t)-OS^*_\lambda(t)+OS^*_\lambda(t)-O}{t}=A_\lambda O+OA^*_\lambda.$$

Hence, the infinitesimal generator $\mathcal{A}_\lambda$ of $\{\mathcal{T}_\lambda(t)\}_{t\ge0}$ is given as follows:

$$\mathcal{A}_\lambda O=A_\lambda O+OA^*_\lambda,\qquad\forall\,O\in L_2(H).\quad(12.79)$$

Now, for any $O\in L_2(H)$, it holds that

$$\lim_{\lambda\to\infty}\big|\mathcal{T}(t)O-\mathcal{T}_\lambda(t)O\big|_{L_2(H)}=\lim_{\lambda\to\infty}\big|S(t)OS^*(t)-S_\lambda(t)OS^*_\lambda(t)\big|_{L_2(H)}\le\lim_{\lambda\to\infty}\big|S(t)OS^*(t)-S(t)OS^*_\lambda(t)\big|_{L_2(H)}+\lim_{\lambda\to\infty}\big|S(t)OS^*_\lambda(t)-S_\lambda(t)OS^*_\lambda(t)\big|_{L_2(H)}.$$

Let us compute each term in the right hand side of the above inequality. First,

$$\big|S(t)OS^*(t)-S(t)OS^*_\lambda(t)\big|^2_{L_2(H)}\le C\big|OS^*(t)-OS^*_\lambda(t)\big|^2_{L_2(H)}=C\big|S(t)O^*-S_\lambda(t)O^*\big|^2_{L_2(H)}=C\sum^\infty_{i=1}\big|\big(S(t)-S_\lambda(t)\big)O^*h_i\big|^2_H.$$

Since

$$\big|\big(S(t)-S_\lambda(t)\big)O^*h_i\big|^2_H\le C|O^*h_i|^2_H\quad(12.80)$$

and

$$\sum^\infty_{i=1}|O^*h_i|^2_H=|O|^2_{L_2(H)}<\infty,$$

by means of Lebesgue’s dominated convergence theorem and (12.80), we find that

$$\lim_{\lambda\to\infty}\big|S(t)OS^*(t)-S(t)OS^*_\lambda(t)\big|^2_{L_2(H)}=0.$$

Similarly, we get that

$$\lim_{\lambda\to\infty}\big|S(t)OS^*_\lambda(t)-S_\lambda(t)OS^*_\lambda(t)\big|^2_{L_2(H)}=0.$$

Hence,

$$\lim_{\lambda\to\infty}\big|\mathcal{T}(t)O-\mathcal{T}_\lambda(t)O\big|_{L_2(H)}=0,\qquad\text{for any }t\ge0.\quad(12.81)$$

Write $\mathbb{T}_\lambda=x^\lambda_1\otimes x^\lambda_2$, where $x^\lambda_1$ and $x^\lambda_2$ solve accordingly (12.49) and (12.50). Let $\{e_n\}^\infty_{n=1}$ be an orthonormal basis of $V$. For any $f,g\in L^0_2$, denote by $f\otimes_{1,1}g$ the bounded bilinear functional on $H$ defined by

$$(f\otimes_{1,1}g)(x,y)=\sum^\infty_{j=1}\langle x,fe_j\rangle_H\langle ge_j,y\rangle_H,\qquad\forall\,x,y\in H.$$

Also, for any $h\in H$, denote respectively by $f\otimes_{1,0}h$ and $h\otimes_{0,1}g$ the bounded linear operators from $V$ to $L(H)$, defined by

$$(f\otimes_{1,0}h)v=(fv)\otimes h\quad\text{and}\quad(h\otimes_{0,1}g)v=h\otimes(gv),\qquad\forall\,v\in V.$$

Recall that any bounded linear operator on $H$ can be viewed as a bounded bilinear functional on $H$, and vice versa. In what follows, we will also view $x^\lambda_1\otimes x^\lambda_2$, $f\otimes_{1,1}g$ and so on as bounded bilinear functionals on $H$. By Itô’s formula, we have

$$\begin{aligned}
d\mathbb{T}_\lambda&=d\big(x^\lambda_1\otimes x^\lambda_2\big)=\big(dx^\lambda_1\big)\otimes x^\lambda_2+x^\lambda_1\otimes\big(dx^\lambda_2\big)+\big(dx^\lambda_1\big)\otimes\big(dx^\lambda_2\big)\\
&=\big[(A_\lambda+J)x^\lambda_1\big]\otimes x^\lambda_2\,ds+x^\lambda_1\otimes\big[(A_\lambda+J)x^\lambda_2\big]\,ds\\
&\quad+\big[u_1\otimes x^\lambda_2+x^\lambda_1\otimes u_2+(Kx^\lambda_1)\otimes_{1,1}(Kx^\lambda_2)+(Kx^\lambda_1)\otimes_{1,1}v_2+v_1\otimes_{1,1}(Kx^\lambda_2)+v_1\otimes_{1,1}v_2\big]\,ds\\
&\quad+\big[(Kx^\lambda_1)\otimes_{1,0}x^\lambda_2+x^\lambda_1\otimes_{0,1}(Kx^\lambda_2)+v_1\otimes_{1,0}x^\lambda_2+x^\lambda_1\otimes_{0,1}v_2\big]\,dW(s).
\end{aligned}\quad(12.82)$$

On the other hand, for any $h\in H$, we find

$$\big((A_\lambda x^\lambda_1)\otimes x^\lambda_2\big)(h)=\langle h,A_\lambda x^\lambda_1\rangle_H\,x^\lambda_2=\langle A^*_\lambda h,x^\lambda_1\rangle_H\,x^\lambda_2=\big(x^\lambda_1\otimes x^\lambda_2\big)A^*_\lambda h.$$

Thus,

$$(A_\lambda x^\lambda_1)\otimes x^\lambda_2=\mathbb{T}_\lambda A^*_\lambda.\quad(12.83)$$

Similarly, we have the following equalities:

$$\begin{cases}
x^\lambda_1\otimes(A_\lambda x^\lambda_2)=A_\lambda\mathbb{T}_\lambda,\\
(Jx^\lambda_1)\otimes x^\lambda_2+x^\lambda_1\otimes(Jx^\lambda_2)=\mathbb{T}_\lambda J^*+J\mathbb{T}_\lambda,\\
(Kx^\lambda_1)\otimes_{1,1}(Kx^\lambda_2)=K\mathbb{T}_\lambda K^*,\\
(Kx^\lambda_1)\otimes_{1,0}x^\lambda_2+x^\lambda_1\otimes_{0,1}(Kx^\lambda_2)=\mathbb{T}_\lambda K^*+K\mathbb{T}_\lambda.
\end{cases}\quad(12.84)$$
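The algebra in Step 1 and in (12.82)–(12.84) can be sanity-checked in finite dimensions. The sketch below (illustrative only; the matrices, the truncated exponential and all helper names are our own, not from the book) verifies, for $H=\mathbb{R}^2$, the semigroup property $\mathcal{T}(t+s)O=\mathcal{T}(t)\mathcal{T}(s)O$ of $\mathcal{T}(t)O=S(t)OS(t)^*$ and the rank-one identity $(Aa)\otimes b=(a\otimes b)A^*$ behind (12.83) (in the real case $S^*=S^{\mathsf T}$, $A^*=A^{\mathsf T}$):

```python
def mat_mul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_exp(A, t, terms=30):
    # exp(tA) via truncated power series (adequate for small matrices, |t| <= 1)
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    total = [row[:] for row in term]
    for k in range(1, terms):
        term = mat_mul(term, [[t * a / k for a in row] for row in A])
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return total

def semigroup_T(t, A, O):
    # T(t)O = S(t) O S(t)^*, with S(t) = exp(tA)
    St = mat_exp(A, t)
    return mat_mul(mat_mul(St, O), transpose(St))

def outer(a, b):
    # Convention of the text: (a ⊗ b)h = <h, a> b, so entry (i, j) is b[i]*a[j]
    return [[bi * aj for aj in a] for bi in b]

A = [[0.0, 1.0], [-1.0, 0.0]]        # a sample generator (rotation)
O = [[1.0, 2.0], [3.0, 4.0]]         # a sample "Hilbert-Schmidt" operator
s, t = 0.3, 0.7

# Semigroup property of T: T(t+s)O == T(t)(T(s)O)
lhs = semigroup_T(s + t, A, O)
rhs = semigroup_T(t, A, semigroup_T(s, A, O))
err_semigroup = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))

# Rank-one identity behind (12.83): (Aa) ⊗ b == (a ⊗ b) A^*
a, b = [1.0, -2.0], [0.5, 3.0]
lhs2 = outer(mat_vec(A, a), b)
rhs2 = mat_mul(outer(a, b), transpose(A))
err_tensor = max(abs(lhs2[i][j] - rhs2[i][j]) for i in range(2) for j in range(2))
```

Both errors are at floating-point level, as the two identities are exact.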

By (12.79), (12.82)–(12.84), we see that Tλ solves the following equation:

$$\begin{cases}
d\mathbb{T}_\lambda=\mathcal{A}_\lambda\mathbb{T}_\lambda\,ds+\alpha^\lambda\,ds+\beta^\lambda\,dW(s) &\text{in }(t,T],\\
\mathbb{T}_\lambda(t)=\xi_1\otimes\xi_2,
\end{cases}\quad(12.85)$$

where

$$\begin{cases}
\alpha^\lambda=J\mathbb{T}_\lambda+\mathbb{T}_\lambda J^*+u_1\otimes x^\lambda_2+x^\lambda_1\otimes u_2+K\mathbb{T}_\lambda K^*+(Kx^\lambda_1)\otimes_{1,1}v_2+v_1\otimes_{1,1}(Kx^\lambda_2)+v_1\otimes_{1,1}v_2,\\
\beta^\lambda=K\mathbb{T}_\lambda+\mathbb{T}_\lambda K^*+v_1\otimes_{1,0}x^\lambda_2+x^\lambda_1\otimes_{0,1}v_2.
\end{cases}\quad(12.86)$$

Hence, for any $s\in[t,T]$,

$$\mathbb{T}_\lambda(s)=\mathcal{T}_\lambda(s-t)(\xi_1\otimes\xi_2)+\int_t^s\mathcal{T}_\lambda(\tau-t)\alpha^\lambda(\tau)\,d\tau+\int_t^s\mathcal{T}_\lambda(\tau-t)\beta^\lambda(\tau)\,dW(\tau).\quad(12.87)$$

We claim that

$$\lim_{\lambda\to\infty}\big|\mathbb{T}_\lambda(\cdot)-\mathbb{T}(\cdot)\big|_{C_F([t,T];L^q(\Omega;L_2(H)))}=0,\qquad\forall\,t\in[0,T].\quad(12.88)$$

Indeed, for any $s\in[t,T]$, we have

$$\begin{aligned}
\big|\mathbb{T}_\lambda(s)-\mathbb{T}(s)\big|^2_{L_2(H)}
&=\sum^\infty_{i=1}\big|\mathbb{T}_\lambda(s)h_i-\mathbb{T}(s)h_i\big|^2_H=\sum^\infty_{i=1}\big|\langle h_i,x^\lambda_1(s)\rangle_Hx^\lambda_2(s)-\langle h_i,x_1(s)\rangle_Hx_2(s)\big|^2_H\\
&\le2\sum^\infty_{i=1}\big|\langle h_i,x^\lambda_1(s)\rangle_Hx^\lambda_2(s)-\langle h_i,x^\lambda_1(s)\rangle_Hx_2(s)\big|^2_H+2\sum^\infty_{i=1}\big|\langle h_i,x^\lambda_1(s)\rangle_Hx_2(s)-\langle h_i,x_1(s)\rangle_Hx_2(s)\big|^2_H\\
&\le2\sum^\infty_{i=1}\big|\langle h_i,x^\lambda_1(s)\rangle_H\big|^2\,\big|x^\lambda_2(s)-x_2(s)\big|^2_H+2\sum^\infty_{i=1}\big|\langle h_i,x^\lambda_1(s)-x_1(s)\rangle_H\big|^2\,\big|x_2(s)\big|^2_H\\
&=2\big|x^\lambda_1(s)\big|^2_H\big|x^\lambda_2(s)-x_2(s)\big|^2_H+2\big|x_2(s)\big|^2_H\big|x^\lambda_1(s)-x_1(s)\big|^2_H.
\end{aligned}$$

This, together with Lemma 12.12, implies that (12.88) holds. Write

$$\begin{cases}
\alpha=J\mathbb{T}+\mathbb{T}J^*+u_1\otimes x_2+x_1\otimes u_2+K\mathbb{T}K^*+(Kx_1)\otimes_{1,1}v_2+v_1\otimes_{1,1}(Kx_2)+v_1\otimes_{1,1}v_2,\\
\beta=K\mathbb{T}+\mathbb{T}K^*+v_1\otimes_{1,0}x_2+x_1\otimes_{0,1}v_2.
\end{cases}\quad(12.89)$$


Similar to the proof of (12.88), we have

$$\lim_{\lambda\to\infty}\big|\alpha^\lambda(\cdot)-\alpha(\cdot)\big|_{L^2_F(t,T;L^q(\Omega;L_2(H)))}=0,\qquad\lim_{\lambda\to\infty}\big|\beta^\lambda(\cdot)-\beta(\cdot)\big|_{L^2_F(t,T;L^q(\Omega;L_2(V;L_2(H))))}=0.\quad(12.90)$$

By (12.81), using (12.88), (12.90) and Proposition 3.12, one can show that, for any $t\in[0,T]$, it holds that

$$\begin{cases}
\displaystyle\int_t^\cdot\mathcal{T}_\lambda(\tau-t)\alpha^\lambda(\tau)\,d\tau\to\int_t^\cdot\mathcal{T}(\tau-t)\alpha(\tau)\,d\tau,\\[2mm]
\displaystyle\int_t^\cdot\mathcal{T}_\lambda(\tau-t)\beta^\lambda(\tau)\,dW(\tau)\to\int_t^\cdot\mathcal{T}(\tau-t)\beta(\tau)\,dW(\tau)
\end{cases}\quad\text{in }C_F([t,T];L^q(\Omega;L_2(H))),\quad(12.91)$$

as $\lambda\to\infty$. Hence, $\mathbb{T}(\cdot)$ is the (unique) mild solution to the following stochastic evolution equation (valued in $L_2(H)$):

$$\begin{cases}
d\mathbb{T}(s)=\mathcal{A}\mathbb{T}(s)\,ds+\alpha\,ds+\beta\,dW(s) &\text{in }(t,T],\\
\mathbb{T}(t)=\xi_1\otimes\xi_2.
\end{cases}\quad(12.92)$$

Step 3. Since $(P,\Lambda)$ solves (12.76) in the transposition sense and by (12.92), it follows that

$$\begin{aligned}
&E\big\langle\mathbb{T}(T),P_T\big\rangle_{L_2(H)}-E\int_t^T\big\langle\mathbb{T}(s),f(s,P(s),\Lambda(s))\big\rangle_{L_2(H)}\,ds\\
&=E\big\langle\xi_1\otimes\xi_2,P(t)\big\rangle_{L_2(H)}+E\int_t^T\big\langle\alpha(s),P(s)\big\rangle_{L_2(H)}\,ds+E\int_t^T\big\langle\beta(s),\Lambda(s)\big\rangle_{L_2(V;L_2(H))}\,ds.
\end{aligned}\quad(12.93)$$

By (12.77) and recalling that $\mathbb{T}(\cdot)=x_1(\cdot)\otimes x_2(\cdot)$, we find that

$$E\int_t^T\big\langle\mathbb{T}(s),f(s,P(s),\Lambda(s))\big\rangle_{L_2(H)}\,ds=E\int_t^T\big\langle\big(-J(s)^*P(s)-P(s)J(s)-K^*(s)P(s)K(s)-K^*(s)\Lambda(s)-\Lambda(s)K(s)+F(s)\big)x_1(s),x_2(s)\big\rangle_H\,ds.\quad(12.94)$$

Further, by the first equality in (12.89), we have

$$\begin{aligned}
E\int_t^T\big\langle\alpha(s),P(s)\big\rangle_{L_2(H)}\,ds
&=E\int_t^T\big\langle P(s)x_1(s),J(s)x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle P(s)J(s)x_1(s),x_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle P(s)u_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle P(s)x_1(s),u_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle K^*(s)P(s)K(s)x_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle P(s)K(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle P(s)v_1(s),K(s)x_2(s)\big\rangle_{L^0_2}\,ds+E\int_t^T\big\langle P(s)v_1(s),v_2(s)\big\rangle_{L^0_2}\,ds.
\end{aligned}\quad(12.95)$$

Further, by Proposition 12.5, $\Lambda(\cdot)$ induces a bounded linear operator $Q(\cdot)\in L_2(L^0_2;H)$ and a bounded linear operator $\widetilde Q(\cdot)\in L_2(H;L^0_2)$ satisfying (12.35) and (12.38) pointwise. Hence,

$$\begin{aligned}
\big\langle v_1(\cdot)\otimes_{1,0}x_2(\cdot),\Lambda(\cdot)\big\rangle_{L_2(V;L_2(H))}
&=\sum^\infty_{j=1}\big\langle\big(v_1(\cdot)\otimes_{1,0}x_2(\cdot)\big)e_j,\Lambda(\cdot)e_j\big\rangle_{L_2(H)}=\sum^\infty_{j=1}\big\langle\big(v_1(\cdot)e_j\big)\otimes x_2(\cdot),\Lambda(\cdot)e_j\big\rangle_{L_2(H)}\\
&=\sum^\infty_{j=1}\big\langle\big(\Lambda(\cdot)e_j\big)\big(v_1(\cdot)e_j\big),x_2(\cdot)\big\rangle_H=\sum^\infty_{j=1}\big\langle Q(\cdot)\big(e_j\otimes v_1(\cdot)e_j\big),x_2(\cdot)\big\rangle_H\\
&=\sum^\infty_{j=1}\big\langle e_j\otimes v_1(\cdot)e_j,Q(\cdot)^*x_2(\cdot)\big\rangle_{L^0_2}=\sum^\infty_{j=1}\big\langle\big(Q(\cdot)^*x_2(\cdot)\big)e_j,v_1(\cdot)e_j\big\rangle_H\\
&=\big\langle Q(\cdot)^*x_2(\cdot),v_1(\cdot)\big\rangle_{L^0_2}=\big\langle Q(\cdot)v_1(\cdot),x_2(\cdot)\big\rangle_H,
\end{aligned}$$

and

$$\begin{aligned}
\big\langle x_1(\cdot)\otimes_{0,1}v_2(\cdot),\Lambda(\cdot)\big\rangle_{L_2(V;L_2(H))}
&=\sum^\infty_{j=1}\big\langle\big(x_1(\cdot)\otimes_{0,1}v_2(\cdot)\big)e_j,\Lambda(\cdot)e_j\big\rangle_{L_2(H)}=\sum^\infty_{j=1}\big\langle x_1(\cdot)\otimes\big(v_2(\cdot)e_j\big),\Lambda(\cdot)e_j\big\rangle_{L_2(H)}\\
&=\sum^\infty_{j=1}\big\langle\big(\Lambda(\cdot)e_j\big)x_1(\cdot),v_2(\cdot)e_j\big\rangle_H=\sum^\infty_{j=1}\big\langle\widetilde Q(\cdot)\big(e_j\otimes x_1(\cdot)\big),v_2(\cdot)e_j\big\rangle_H=\big\langle\widetilde Q(\cdot)x_1(\cdot),v_2(\cdot)\big\rangle_{L^0_2}.
\end{aligned}$$

Thus, by the second equality in (12.89), we have

$$\begin{aligned}
E\int_t^T\big\langle\beta(s),\Lambda(s)\big\rangle_{L_2(V;L_2(H))}\,ds
&=E\int_t^T\big\langle K^*(s)\Lambda(s)x_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle\Lambda(s)K(s)x_1(s),x_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle Q(s)v_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle\widetilde Q(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds.
\end{aligned}\quad(12.96)$$

By (12.93)–(12.96), it follows that $(P(\cdot),Q(\cdot))$ satisfies (12.43) and thus is a transposition solution of (12.8) in the sense of Definition 12.6. The uniqueness follows from Theorem 12.7. This completes the proof of Theorem 12.14.

12.4.5 Proof of the Existence and Stability for the General Case

This subsection is devoted to proving the existence and stability result (for the equation (12.8) in the sense of relaxed transposition solution) in Theorem 12.9.

Proof of the “Existence and Stability” part in Theorem 12.9: We consider only the case that $H$ is a real Hilbert space (the case of complex Hilbert spaces can be treated similarly). The proof is divided into several steps.

Step 1. In this step, we introduce a suitable approximation to the equation (12.8). Let $\{h_n\}^\infty_{n=1}$ be an orthonormal basis of $H$ and $\{\Gamma_n\}^\infty_{n=1}$ be the standard projection operators from $H$ onto its subspaces $\mathrm{span}\,\{h_1,h_2,\cdots,h_n\}$, that is,

$$\Gamma_nx=\sum^n_{i=1}x_ih_i\quad\text{for any }x=\sum^\infty_{i=1}x_ih_i\in H.$$

Write

$$F^n=\Gamma_nF\Gamma_n,\qquad P^n_T=\Gamma_nP_T\Gamma_n.\quad(12.97)$$

Obviously, $F^n\in L^1_F(0,T;L^p(\Omega;L_2(H)))$ and $P^n_T\in L^p_{F_T}(\Omega;L_2(H))$. Furthermore,

$$|F^n|_{L^1_F(0,T;L^p(\Omega;L(H)))}+|P^n_T|_{L^p_{F_T}(\Omega;L(H))}\le C\big(|F|_{L^1_F(0,T;L^p(\Omega;L(H)))}+|P_T|_{L^p_{F_T}(\Omega;L(H))}\big).\quad(12.98)$$

Here and henceforth, $C$ denotes a generic constant, independent of $n$. Let us consider the following $L_2(H)$-valued backward stochastic evolution equation:

$$\begin{cases}
dP^n=-(A^*+J^*)P^n\,dt-P^n(A+J)\,dt-K^*P^nK\,dt-(K^*Q^n+Q^nK)\,dt+F^n\,dt+Q^n\,dW(t) &\text{in }[0,T),\\
P^n(T)=P^n_T.
\end{cases}\quad(12.99)$$

Clearly, for each $n\in\mathbb{N}$, (12.99) can be regarded as an approximation of the equation (12.8). In the rest of the proof, we shall construct the desired solution to the equation (12.8) by means of the solution to (12.99).
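The point of the truncation (12.97) is that it produces data in the Hilbert–Schmidt class required by Theorem 12.14 while keeping $n$-independent operator-norm bounds; a short check (our own summary of the standard facts):

```latex
% Since \Gamma_n is an orthogonal projection,
|\Gamma_n|_{L(H)} = 1
\;\Longrightarrow\;
|F^n(t,\omega)|_{L(H)} \le |F(t,\omega)|_{L(H)},
\qquad
|P^n_T(\omega)|_{L(H)} \le |P_T(\omega)|_{L(H)},
% while the finite rank of \Gamma_n gives membership in L_2(H):
|F^n(t,\omega)|_{L_2(H)}^2
= \sum_{i=1}^{n}\big|\Gamma_n F(t,\omega)\Gamma_n h_i\big|_H^2
\;\le\; n\,|F(t,\omega)|_{L(H)}^2 .
```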


According to Theorem 12.14, the equation (12.99) admits one and only one transposition solution

$$\big(P^n(\cdot),Q^n(\cdot)\big)\in D_F([0,T];L^p(\Omega;L_2(H)))\times L^2_F(0,T;L^p(\Omega;L_2(V;L_2(H)))),\quad(12.100)$$

in the sense of Definition 12.6. Hence, for any $t\in[0,T]$, $\xi_1,\xi_2\in L^{2q}_{F_t}(\Omega;H)$, $u_1(\cdot),u_2(\cdot)\in L^2_F(t,T;L^{2q}(\Omega;H))$ and $v_1(\cdot),v_2(\cdot)\in L^2_F(t,T;L^{2q}(\Omega;L^0_2))$, it holds that

$$\begin{aligned}
&E\big\langle P^n_Tx_1(T),x_2(T)\big\rangle_H-E\int_t^T\big\langle F^n(s)x_1(s),x_2(s)\big\rangle_H\,ds\\
&=E\big\langle P^n(t)\xi_1,\xi_2\big\rangle_H+E\int_t^T\big\langle P^n(s)u_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle P^n(s)x_1(s),u_2(s)\big\rangle_H\,ds\\
&\quad+E\int_t^T\big\langle P^n(s)K(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds+E\int_t^T\big\langle P^n(s)v_1(s),K(s)x_2(s)+v_2(s)\big\rangle_{L^0_2}\,ds\\
&\quad+E\int_t^T\big\langle Q^n(s)v_1(s),x_2(s)\big\rangle_H\,ds+E\int_t^T\big\langle\widetilde Q^n(s)x_1(s),v_2(s)\big\rangle_{L^0_2}\,ds.
\end{aligned}\quad(12.101)$$

Here, $\widetilde Q^n(s)$ is the operator induced by $Q^n(s)$, and $x_1(\cdot)$ (resp. $x_2(\cdot)$) solves (12.39) (resp. (12.40)). As we shall see below, the variational equality (12.101) can be viewed as an approximation of (12.45).

Step 2. In this step, we take $n\to\infty$ in (12.101). For this purpose, we need to establish some a priori estimates for $P^n(\cdot)$ and $Q^n(\cdot)$ (in a suitable sense).

Let $u_1=v_1=0$ in (12.39) and $u_2=v_2=0$ in (12.40). By (12.101), we obtain that, for any $t\in[0,T]$ (recall Lemma 12.10 for $\mathcal{U}(\cdot,\cdot)$),

$$E\big\langle P^n_T\mathcal{U}(T,t)\xi_1,\mathcal{U}(T,t)\xi_2\big\rangle_H-E\int_t^T\big\langle F^n(s)\mathcal{U}(s,t)\xi_1,\mathcal{U}(s,t)\xi_2\big\rangle_H\,ds=E\big\langle P^n(t)\xi_1,\xi_2\big\rangle_H,\qquad\forall\,\xi_1,\xi_2\in L^{2q}_{F_t}(\Omega;H).\quad(12.102)$$

Hence,

$$E\Big\langle\mathcal{U}^*(T,t)P^n_T\mathcal{U}(T,t)\xi_1-\int_t^T\mathcal{U}^*(s,t)F^n(s)\mathcal{U}(s,t)\xi_1\,ds,\ \xi_2\Big\rangle_H=E\big\langle P^n(t)\xi_1,\xi_2\big\rangle_H,\qquad\forall\,\xi_1,\xi_2\in L^{2q}_{F_t}(\Omega;H).$$

This leads to

$$E\Big(\mathcal{U}^*(T,t)P^n_T\mathcal{U}(T,t)\xi_1-\int_t^T\mathcal{U}^*(s,t)F^n(s)\mathcal{U}(s,t)\xi_1\,ds\ \Big|\ \mathcal{F}_t\Big)=P^n(t)\xi_1,\qquad\text{a.s., }\forall\,t\in[0,T],\ \xi_1\in L^{2q}_{F_t}(\Omega;H).\quad(12.103)$$

By (12.98) and (12.102), it follows that

$$\big|E\big\langle P^n(t)\xi_1,\xi_2\big\rangle_H\big|\le C\big(|P_T|_{L^p_{F_T}(\Omega;L(H))}+|F|_{L^1_F(0,T;L^p(\Omega;L(H)))}\big)\,|\xi_1|_{L^{2q}_{F_t}(\Omega;H)}|\xi_2|_{L^{2q}_{F_t}(\Omega;H)}.\quad(12.104)$$

Here and henceforth C denotes a generic constant, independent of n and t. For P n (t), we can find a ξ1,n ∈ L2q Ft (Ω; H) with |ξ1,n |L2q (Ω;H) = 1 such that Ft

n P (t)ξ1,n

2q 2q−1 LF (Ω;H) t

1 ≥ P n (t) Lp (Ω;L(H)) . Ft 2

(12.105)

Moreover, we can find a ξ2,n ∈ L2q Ft (Ω; H) with |ξ2,n |L2q (Ω;H) = 1 such that Ft

⟨ ⟩ 1 E P n (t)ξ1,n , ξ2,n H ≥ P n (t)ξ1,n 2q . 2q−1 2 LF (Ω;H)

(12.106)

t

From (12.104)–(12.106), we obtain that, for all n ∈ N, p |P n |L∞ F (0,T ;L (Ω;L(H))) ( ) ≤ C |PT |LpF (Ω;L(H)) + |F |L1F (0,T ;Lp (Ω;L(H))) .

(12.107)

T

By Theorem 2.77, one can find
$P\in\mathcal{L}_{pd}\big(L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(0,T;L^{\frac{2q}{2q-1}}(\Omega;H))\big)$ such that
\[
|P|_{\mathcal{L}(L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(0,T;L^{\frac{2q}{2q-1}}(\Omega;H)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big),
\tag{12.108}
\]
and a subsequence $\{n_k^{(1)}\}_{k=1}^\infty\subset\{n\}_{n=1}^\infty$ so that, for any $u_1\in L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;H))$,
\[
(w)\text{-}\lim_{k\to\infty} P^{n_k^{(1)}}u_1 = Pu_1 \quad\text{in } L^2_{\mathbb{F}}(0,T;L^{\frac{2q}{2q-1}}(\Omega;H)).
\tag{12.109}
\]

Further, by (12.107) and Theorem 2.79, for each fixed $t\in[0,T)$, there exist an
$R^{(t)}\in\mathcal{L}_{pd}\big(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^{\frac{2q}{2q-1}}_{\mathcal{F}_t}(\Omega;H)\big)$
and a subsequence $\{n_k^{(2)}\}_{k=1}^\infty\subset\{n_k^{(1)}\}_{k=1}^\infty$ (generally speaking, each $n_k^{(2)}$ may depend on $t$) such that, for all $\xi\in L^{2q}_{\mathcal{F}_t}(\Omega;H)$,
\[
(w)\text{-}\lim_{k\to\infty} P^{n_k^{(2)}}(t)\xi = R^{(t)}\xi \quad\text{in } L^{\frac{2q}{2q-1}}_{\mathcal{F}_t}(\Omega;H).
\tag{12.110}
\]
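The extractions (12.109)–(12.110) produce limits only in the weak topology, along subsequences, with no norm convergence. As a finite-dimensional (sequence-space) illustration of this phenomenon, not part of the proof, the standard basis of $\ell^2$ converges weakly to $0$ while every basis vector has unit norm; the sketch below truncates $\ell^2$ to $\mathbb{R}^N$, where $N$ and $y$ are assumptions of the sketch only.

```python
# Illustration (not part of the proof): in l^2, the standard basis e_n
# satisfies <e_n, y> -> 0 for every fixed y (weak convergence to 0),
# while |e_n| = 1 for all n, so no subsequence converges in norm.

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

N = 2000
y = [1.0 / (k + 1) for k in range(N)]          # a fixed (truncated) l^2 vector

def e(n):
    return [1.0 if k == n else 0.0 for k in range(N)]

pairings = [inner(e(n), y) for n in range(N)]  # <e_n, y> = 1/(n+1), tends to 0
norms = [inner(e(n), e(n)) ** 0.5 for n in range(N)]
```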

Next, let u1 = v1 = 0 and ξ2 = 0, u2 = 0 in (12.39) and (12.40), respectively. From (12.101), we find that

12 Pontryagin-Type Stochastic Maximum Principle and Beyond

\[
\mathbb{E}\int_t^T \big\langle \widetilde Q^n(s)\mathcal{U}(s,t)\xi_1,\ v_2(s)\big\rangle_{L_2^0}\,ds
= \mathbb{E}\big\langle P_T^n x_1(T),\,x_2(T)\big\rangle_H
- \mathbb{E}\int_t^T \big\langle F^n(s)x_1(s),\,x_2(s)\big\rangle_H\,ds
- \mathbb{E}\int_t^T \big\langle P^n(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds.
\]

This implies that
\[
\Big|\mathbb{E}\int_t^T \big\langle \widetilde Q^n(s)\mathcal{U}(s,t)\xi_1,\ v_2(s)\big\rangle_{L_2^0}\,ds\Big|
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big)\,
|\xi_1|_{L^{2q}_{\mathcal{F}_t}(\Omega;H)}\,|v_2|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))},
\tag{12.111}
\]
for all $\xi_1\in L^{2q}_{\mathcal{F}_t}(\Omega;H)$ and $v_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$.

We define two operators $Q_1^{n,t}$ and $\widehat Q_1^{n,t}$ from $L^{2q}_{\mathcal{F}_t}(\Omega;H)$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$ as follows (recall Lemma 12.10 for $\mathcal{U}(\cdot,\cdot)$):
\[
\begin{cases}
Q_1^{n,t}\xi = \widetilde Q^n(\cdot)\,\mathcal{U}(\cdot,t)\xi,\\[1mm]
\widehat Q_1^{n,t}\xi = Q^n(\cdot)^*\,\mathcal{U}(\cdot,t)\xi,
\end{cases}
\qquad \forall\,\xi\in L^{2q}_{\mathcal{F}_t}(\Omega;H).
\]
It is easy to see that $Q_1^{n,t},\widehat Q_1^{n,t}\in\mathcal{L}\big(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))\big)$. By (12.111), it follows that
\[
\big|Q_1^{n,t}\big|_{\mathcal{L}(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.112}
\]
Similarly,
\[
\big|\widehat Q_1^{n,t}\big|_{\mathcal{L}(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.113}
\]

By Lemma 2.61, for each $t$, there exist two bounded linear operators $Q_1^t$ and $\widehat Q_1^t$ from $L^{2q}_{\mathcal{F}_t}(\Omega;H)$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$, and a subsequence $\{n_k^{(3)}\}_{k=1}^\infty\subset\{n_k^{(2)}\}_{k=1}^\infty$ such that, for any $\xi\in L^{2q}_{\mathcal{F}_t}(\Omega;H)$,
\[
\begin{cases}
(w)\text{-}\lim\limits_{k\to\infty} Q_1^{n_k^{(3)},t}\xi = Q_1^t\xi &\text{in } L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)),\\[2mm]
(w)\text{-}\lim\limits_{k\to\infty} \widehat Q_1^{n_k^{(3)},t}\xi = \widehat Q_1^t\xi &\text{in } L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)).
\end{cases}
\tag{12.114}
\]
In view of (12.112)--(12.113), we have
\[
\big|Q_1^t\big|_{\mathcal{L}(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
+ \big|\widehat Q_1^t\big|_{\mathcal{L}(L^{2q}_{\mathcal{F}_t}(\Omega;H);\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.115}
\]

Further, we choose $\xi_1=0$, $v_1=0$ in (12.39) and $\xi_2=0$, $u_2=0$ in (12.40). From (12.101), we obtain that
\[
\begin{aligned}
&\mathbb{E}\big\langle P_T^n x_1(T),\,x_2(T)\big\rangle_H - \mathbb{E}\int_t^T \big\langle F^n(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&= \mathbb{E}\int_t^T \big\langle P^n(s)u_1(s),\,x_2(s)\big\rangle_H\,ds
+ \mathbb{E}\int_t^T \big\langle P^n(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
+ \mathbb{E}\int_t^T \big\langle \widetilde Q^n(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds.
\end{aligned}
\tag{12.116}
\]
Define an operator $Q_2^{n,t}$ from $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$ as follows (recall Lemma 12.10 for $\mathcal{V}(\cdot,\cdot)$):
\[
\big(Q_2^{n,t}u_2\big)(\cdot) = \widetilde Q^n(\cdot)\,\mathcal{V}(\cdot,t)u_2,
\qquad \forall\,u_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H)).
\]

From (12.116), we get that
\[
\begin{aligned}
\Big|\mathbb{E}\int_t^T \big\langle \big(Q_2^{n,t}u_1\big)(s),\,v_2(s)\big\rangle_{L_2^0}\,ds\Big|
&= \Big|\mathbb{E}\big\langle P_T^n x_1(T),\,x_2(T)\big\rangle_H
- \mathbb{E}\int_t^T \big\langle F^n(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&\qquad - \mathbb{E}\int_t^T \big\langle P^n(s)u_1(s),\,x_2(s)\big\rangle_H\,ds
- \mathbb{E}\int_t^T \big\langle P^n(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds\Big|\\
&\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big)\,
|u_1|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))}\,|v_2|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))},
\end{aligned}
\tag{12.117}
\]
for all $u_1\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))$ and $v_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$. From (12.117), we see that
\[
\big|Q_2^{n,t}\big|_{\mathcal{L}(L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.118}
\]

Also, we define a linear operator $\widehat Q_2^{n,t}$ from $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$ by
\[
\big(\widehat Q_2^{n,t}u_2\big)(\cdot) = Q^n(\cdot)^*\,\mathcal{V}(\cdot,t)u_2,
\qquad \forall\,u_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H)).
\]
By an argument similar to the one used to derive (12.118), we find that
\[
\big|\widehat Q_2^{n,t}\big|_{\mathcal{L}(L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.119}
\]

By Lemma 2.61, for each $t$, there exist two bounded linear operators $Q_2^t$ and $\widehat Q_2^t$ from $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$, and a subsequence $\{n_k^{(4)}\}_{k=1}^\infty\subset\{n_k^{(3)}\}_{k=1}^\infty$ such that, for all $u_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))$,
\[
\begin{cases}
(w)\text{-}\lim\limits_{k\to\infty} Q_2^{n_k^{(4)},t}u_2 = Q_2^t u_2 &\text{in } L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)),\\[2mm]
(w)\text{-}\lim\limits_{k\to\infty} \widehat Q_2^{n_k^{(4)},t}u_2 = \widehat Q_2^t u_2 &\text{in } L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)).
\end{cases}
\tag{12.120}
\]
In terms of (12.118) and (12.119), we obtain that
\[
\big|Q_2^t\big|_{\mathcal{L}(L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
+ \big|\widehat Q_2^t\big|_{\mathcal{L}(L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H));\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.121}
\]

Now, we choose $\xi_1=0$ and $u_1=0$ in (12.39), and $\xi_2=0$ and $u_2=0$ in (12.40). From (12.101), we obtain that
\[
\begin{aligned}
&\mathbb{E}\big\langle P_T^n x_1(T),\,x_2(T)\big\rangle_H - \mathbb{E}\int_t^T \big\langle F^n(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&= \mathbb{E}\int_t^T \big\langle P^n(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
+ \mathbb{E}\int_t^T \big\langle P^n(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds\\
&\quad + \mathbb{E}\int_t^T \big\langle Q^n(s)v_1(s),\,x_2(s)\big\rangle_H\,ds
+ \mathbb{E}\int_t^T \big\langle \widetilde Q^n(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds.
\end{aligned}
\tag{12.122}
\]

We define a bilinear functional $\mathcal{B}^{n,t}(\cdot,\cdot)$ on $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ as follows:
\[
\mathcal{B}^{n,t}(v_1,v_2)
= \mathbb{E}\int_t^T \big\langle Q^n(s)v_1(s),\,x_2(s)\big\rangle_H\,ds
+ \mathbb{E}\int_t^T \big\langle \widetilde Q^n(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds,
\qquad \forall\,v_1,v_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0)).
\tag{12.123}
\]

It is easy to check that $\mathcal{B}^{n,t}(\cdot,\cdot)$ is a bounded bilinear functional. From (12.122), it follows that
\[
\begin{aligned}
\mathcal{B}^{n_k^{(1)},t}(v_1,v_2)
&= \mathbb{E}\big\langle P_T^{n_k^{(1)}} x_1(T),\,x_2(T)\big\rangle_H
- \mathbb{E}\int_t^T \big\langle F^{n_k^{(1)}}(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&\quad - \mathbb{E}\int_t^T \big\langle P^{n_k^{(1)}}(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
- \mathbb{E}\int_t^T \big\langle P^{n_k^{(1)}}(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds.
\end{aligned}
\tag{12.124}
\]
It is easy to show that
\[
\begin{cases}
\lim\limits_{k\to\infty}\mathbb{E}\big\langle P_T^{n_k^{(1)}}x_1(T),\,x_2(T)\big\rangle_H
= \mathbb{E}\big\langle P_T x_1(T),\,x_2(T)\big\rangle_H,\\[2mm]
\lim\limits_{k\to\infty}\mathbb{E}\displaystyle\int_t^T \big\langle F^{n_k^{(1)}}(s)x_1(s),\,x_2(s)\big\rangle_H\,ds
= \mathbb{E}\int_t^T \big\langle F(s)x_1(s),\,x_2(s)\big\rangle_H\,ds,\\[2mm]
\lim\limits_{k\to\infty}\mathbb{E}\displaystyle\int_t^T \big\langle P^{n_k^{(1)}}(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
= \mathbb{E}\int_t^T \big\langle P(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds,\\[2mm]
\lim\limits_{k\to\infty}\mathbb{E}\displaystyle\int_t^T \big\langle P^{n_k^{(1)}}(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds
= \mathbb{E}\int_t^T \big\langle P(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds,
\end{cases}
\]

where $x_1$ (resp. $x_2$) solves the equation (12.39) (resp. (12.40)) with $\xi_1=0$ and $u_1=0$ (resp. $\xi_2=0$ and $u_2=0$). This, together with (12.124), implies that
\[
\begin{aligned}
\mathcal{B}^t(v_1,v_2)
&\triangleq \lim_{k\to\infty}\mathcal{B}^{n_k^{(1)},t}(v_1,v_2)\\
&= \mathbb{E}\big\langle P_T x_1(T),\,x_2(T)\big\rangle_H
- \mathbb{E}\int_t^T \big\langle F(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&\quad - \mathbb{E}\int_t^T \big\langle P(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
- \mathbb{E}\int_t^T \big\langle P(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds.
\end{aligned}
\tag{12.125}
\]
Noting that the solution of (12.39) (with $\xi_1=0$ and $u_1=0$) satisfies

\[
x_1(s) = \int_t^s S(s-\tau)J(\tau)x_1(\tau)\,d\tau
+ \int_t^s S(s-\tau)K(\tau)x_1(\tau)\,dW(\tau)
+ \int_t^s S(s-\tau)v_1(\tau)\,dW(\tau),
\]
by means of Proposition 3.12 and Gronwall's inequality, we conclude that
\[
|x_1|_{L^\infty_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))} \le C\,|v_1|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}.
\tag{12.126}
\]
Similarly,
\[
|x_2|_{L^\infty_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))} \le C\,|v_2|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}.
\tag{12.127}
\]
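The Gronwall step used for (12.126)–(12.127) turns an integral inequality into an exponential bound. Its discrete counterpart can be checked directly; the parameters below are toy assumptions for the sketch only.

```python
# Discrete Gronwall sketch (illustration only, assumed toy parameters):
# if f_i = a + C*h*sum_{j<i} f_j, then f_i = a*(1 + C*h)**i <= a*exp(C*t_i),
# the same mechanism that yields (12.126)-(12.127) from the integral inequality.
import math

a, C, h, n = 2.0, 3.0, 0.001, 1000   # final time t_n = n*h = 1.0

f = []
for i in range(n + 1):
    f.append(a + C * h * sum(f))     # discrete integral relation (with equality)

gronwall_ok = all(f[i] <= a * math.exp(C * i * h) + 1e-9 for i in range(n + 1))
```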

Combining (12.125), (12.126), (12.127) and (12.108), we obtain that
\[
|\mathcal{B}^t(v_1,v_2)|
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big)\,
|v_1|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}\,|v_2|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}.
\]
Hence, $\mathcal{B}^t(\cdot,\cdot)$ is a bounded bilinear functional on $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$. Now, for any fixed $v_1\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$, it is easy to see that $\mathcal{B}^t(v_1,\cdot)$ is a bounded linear functional on $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$. Therefore, by Theorem 2.73, we can find a unique $\tilde v_1\in L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$ such that, for all $v_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$,
\[
\mathcal{B}^t(v_1,v_2) = \big\langle \tilde v_1,\,v_2\big\rangle_{L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)),\ L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}.
\]
Define an operator $\widehat Q_3^t$ from $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$ as follows:
\[
\widehat Q_3^t v_1 = \tilde v_1.
\]
From the uniqueness of $\tilde v_1$, it is clear that $\widehat Q_3^t$ is well defined. Further,
\[
\big|\widehat Q_3^t v_1\big|_{L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))}
= |\tilde v_1|_{L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big)\,|v_1|_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}.
\tag{12.128}
\]

This shows that $\widehat Q_3^t$ is a bounded operator. For any $\alpha,\beta\in\mathbb{R}$ and $v_2,v_3,v_4\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$,
\[
\big\langle \widehat Q_3^t(\alpha v_3+\beta v_4),\,v_2\big\rangle_{L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)),\ L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}
= \mathcal{B}^t(\alpha v_3+\beta v_4,\,v_2)
= \alpha\,\mathcal{B}^t(v_3,v_2) + \beta\,\mathcal{B}^t(v_4,v_2),
\]
which indicates that
\[
\widehat Q_3^t(\alpha v_3+\beta v_4) = \alpha\,\widehat Q_3^t v_3 + \beta\,\widehat Q_3^t v_4.
\]

Hence, $\widehat Q_3^t$ is a bounded linear operator from $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$. Put $Q_3^t = \frac12\widehat Q_3^t$. Then, for any $v_1,v_2\in L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$, it holds that
\[
\mathcal{B}^t(v_1,v_2)
= \big\langle Q_3^t v_1,\,v_2\big\rangle_{L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)),\ L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))}
+ \big\langle v_1,\,\big(Q_3^t\big)^* v_2\big\rangle_{L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0)),\ L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))}.
\tag{12.129}
\]
Here, by Theorem 2.73, $\big(Q_3^t\big)^*$ is a bounded linear operator from $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))^* = L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ to $L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))^* = L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$. It follows from (12.128) that
\[
\big|Q_3^t\big|_{\mathcal{L}(L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0));\,L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0)))}
\le C\big(|P_T|_{L^p_{\mathcal{F}_T}(\Omega;L(H))} + |F|_{L^1_{\mathbb{F}}(0,T;L^p(\Omega;L(H)))}\big).
\tag{12.130}
\]
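The splitting $Q_3^t=\frac12\widehat Q_3^t$ in (12.129) writes one bilinear form as a symmetric sum of two operator pairings. A finite-dimensional sketch of this step, with an assumed toy matrix $M$ standing in for the bilinear functional and plain lists standing in for the function spaces:

```python
# Finite-dimensional sketch of (12.129): a bounded bilinear form
# B(v1, v2) = <M v1, v2> is represented by a single operator, and taking
# Q3 = M/2 splits it as B(v1, v2) = <Q3 v1, v2> + <v1, Q3^T v2>.
# M, v1, v2 below are assumed toy data.

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

M = [[2.0, 1.0, 0.0],
     [0.0, 3.0, -1.0],
     [4.0, 0.0, 1.0]]
Q3 = [[x / 2.0 for x in row] for row in M]

v1, v2 = [1.0, -2.0, 0.5], [0.0, 1.0, 2.0]

B = dot(matvec(M, v1), v2)                                   # <M v1, v2>
split = dot(matvec(Q3, v1), v2) + dot(v1, matvec(transpose(Q3), v2))
```

The identity holds because $\langle v_1, Q_3^{\top}v_2\rangle = \langle Q_3 v_1, v_2\rangle$, so the two halves reassemble $B$ exactly.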

For any $t\in[0,T]$, we define two operators $Q^{(t)}$ and $\widehat Q^{(t)}$ on $L^{2q}_{\mathcal{F}_t}(\Omega;H)\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ as follows:
\[
\begin{cases}
Q^{(t)}(\xi,u,v) = Q_1^t\xi + Q_2^t u + Q_3^t v,\\[1mm]
\widehat Q^{(t)}(\xi,u,v) = \widehat Q_1^t\xi + \widehat Q_2^t u + \big(Q_3^t\big)^* v,
\end{cases}
\qquad \forall\,(\xi,u,v)\in L^{2q}_{\mathcal{F}_t}(\Omega;H)\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0)).
\tag{12.131}
\]
Thanks to the definitions of $Q_1^t$, $Q_2^t$ and $Q_3^t$ (resp. $\widehat Q_1^t$, $\widehat Q_2^t$ and $\big(Q_3^t\big)^*$), we find that $Q^{(t)}(\cdot,\cdot,\cdot)$ (resp. $\widehat Q^{(t)}(\cdot,\cdot,\cdot)$) is a bounded linear operator from $L^{2q}_{\mathcal{F}_t}(\Omega;H)\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$ to $L^2_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;L_2^0))$, and $Q^{(t)}(0,0,\cdot)^* = \widehat Q^{(t)}(0,0,\cdot)$.

For any $t\in[0,T]$, by means of (12.101), (12.109), (12.110), (12.114), (12.120), (12.123), (12.125), (12.129) and (12.131), we see that, for all $(\xi_1,u_1,v_1),(\xi_2,u_2,v_2)\in L^{2q}_{\mathcal{F}_t}(\Omega;H)\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;H))\times L^2_{\mathbb{F}}(t,T;L^{2q}(\Omega;L_2^0))$,
\[
\begin{aligned}
&\mathbb{E}\big\langle P_T x_1(T),\,x_2(T)\big\rangle_H - \mathbb{E}\int_t^T \big\langle F(s)x_1(s),\,x_2(s)\big\rangle_H\,ds\\
&= \mathbb{E}\big\langle R^{(t)}\xi_1,\,\xi_2\big\rangle_H
+ \mathbb{E}\int_t^T \big\langle P(s)u_1(s),\,x_2(s)\big\rangle_H\,ds
+ \mathbb{E}\int_t^T \big\langle P(s)x_1(s),\,u_2(s)\big\rangle_H\,ds\\
&\quad + \mathbb{E}\int_t^T \big\langle P(s)K(s)x_1(s),\,v_2(s)\big\rangle_{L_2^0}\,ds
+ \mathbb{E}\int_t^T \big\langle P(s)v_1(s),\,K(s)x_2(s)+v_2(s)\big\rangle_{L_2^0}\,ds\\
&\quad + \mathbb{E}\int_t^T \big\langle v_1(s),\,\widehat Q^{(t)}(\xi_2,u_2,v_2)(s)\big\rangle_{L_2^0}\,ds
+ \mathbb{E}\int_t^T \big\langle Q^{(t)}(\xi_1,u_1,v_1)(s),\,v_2(s)\big\rangle_{L_2^0}\,ds.
\end{aligned}
\tag{12.132}
\]

Step 3. In this step, we shall show that $P(\cdot)\in D_{\mathbb{F},w}([0,T];L^p(\Omega;\mathcal{L}(H)))$ (recall (12.41) for the definition of this space) and
\[
P(t) = R^{(t)},\qquad \text{a.e. } t\in[0,T].
\tag{12.133}
\]
Similar to the proof of (12.103), by (12.132), one can show that
\[
\mathbb{E}\Big(\mathcal{U}^*(T,t)P_T\,\mathcal{U}(T,t)\xi_1 - \int_t^T \mathcal{U}^*(s,t)F(s)\mathcal{U}(s,t)\xi_1\,ds\ \Big|\ \mathcal{F}_t\Big) = R^{(t)}\xi_1
\quad\text{a.s.},\ \forall\,t\in[0,T],\ \xi_1\in L^{2q}_{\mathcal{F}_t}(\Omega;H).
\tag{12.134}
\]
Now we show that $R^{(\cdot)}\xi\in D_{\mathbb{F}}([t,T];L^{\frac{2p}{p+1}}(\Omega;H))$ for any $\xi\in L^{2q}_{\mathcal{F}_t}(\Omega;H)$. By (12.100), it remains to show that
\[
\lim_{n\to\infty}\big|R^{(\cdot)}\xi - P^n(\cdot)\xi\big|_{L^\infty_{\mathbb{F}}(t,T;L^{\frac{2q}{2q-1}}(\Omega;H))} = 0.
\tag{12.135}
\]
For this purpose, by (12.103) and (12.134), for any $\tau\in[t,T]$, we see that
\[
\mathbb{E}\big|R^{(\tau)}\xi - P^n(\tau)\xi\big|_H^{\frac{2q}{2q-1}}
\le C\,\mathbb{E}\Big|\int_\tau^T \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau)\xi\,ds\Big|_H^{\frac{2q}{2q-1}}
+ C\,\mathbb{E}\big|\mathcal{U}^*(T,\tau)\big(P_T-P_T^n\big)\mathcal{U}(T,\tau)\xi\big|_H^{\frac{2q}{2q-1}}.
\]
By the first conclusion in Lemma 12.10, we deduce that for any $\varepsilon_1>0$, there is a $\delta_1>0$ so that for all $\tau\in[t,T]$ and $\tau\le\sigma\le\tau+\delta_1$,
\[
\mathbb{E}\big|\mathcal{U}(r,\tau)\xi - \mathcal{U}(r,\sigma)\xi\big|_H^{\frac{2q}{2q-1}} < \varepsilon_1,
\qquad \forall\,r\in[\sigma,T].
\tag{12.136}
\]

Now, we choose a monotonically increasing sequence $\{\tau_i\}_{i=1}^{N_1}\subset[0,T]$, with $N_1$ sufficiently large, such that $\tau_{i+1}-\tau_i\le\delta_1$, $\tau_1=t$, $\tau_{N_1}=T$, and
\[
\Big(\int_{\tau_i}^{\tau_{i+1}} \big(\mathbb{E}|F(s)|^p_{\mathcal{L}(H)}\big)^{\frac1p}\,ds\Big)^{\frac{2q}{2q-1}} < \varepsilon_1,
\qquad \text{for all } i=1,\cdots,N_1-1.
\tag{12.137}
\]
For any $\tau\in(\tau_i,\tau_{i+1}]$, recalling $F^n=\Gamma_n F\Gamma_n$, we conclude that
\[
\begin{aligned}
&\mathbb{E}\Big|\int_\tau^T \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau)\xi\,ds\Big|_H^{\frac{2q}{2q-1}}\\
&\le C\,\mathbb{E}\Big|\int_{\tau_i}^T \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau_i)\xi\,ds\Big|_H^{\frac{2q}{2q-1}}
+ C\,\mathbb{E}\Big|\int_{\tau_i}^\tau \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau_i)\xi\,ds\Big|_H^{\frac{2q}{2q-1}}\\
&\quad + C\,\mathbb{E}\Big|\int_\tau^T \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\big(\mathcal{U}(s,\tau_i)-\mathcal{U}(s,\tau)\big)\xi\,ds\Big|_H^{\frac{2q}{2q-1}}\\
&\le C\int_{\tau_i}^T \mathbb{E}\big|\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau_i)\xi\big|_H^{\frac{2q}{2q-1}}\,ds
+ C\Big(\int_{\tau_i}^{\tau_{i+1}} \big(\mathbb{E}|F(s)|^p_{\mathcal{L}(H)}\big)^{\frac1p}\,ds\Big)^{\frac{2q}{2q-1}}\\
&\quad + C\max_{s\in[\tau,T]} \mathbb{E}\big|\big(\mathcal{U}(s,\tau_i)-\mathcal{U}(s,\tau)\big)\xi\big|_H^{\frac{2q}{2q-1}}.
\end{aligned}
\tag{12.138}
\]

By the choice of $F^n$, there is an integer $N_2(\varepsilon_1)>0$ so that for all $n>N_2$ and $i=1,\cdots,N_1-1$,
\[
\int_{\tau_i}^T \mathbb{E}\big|\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau_i)\xi\big|_H^{\frac{2q}{2q-1}}\,ds \le \varepsilon_1.
\tag{12.139}
\]
By (12.136)--(12.139), we conclude that for all $n>N_2$ and $\tau\in[t,T]$,
\[
\mathbb{E}\Big|\int_\tau^T \mathcal{U}^*(s,\tau)\big(F(s)-F^n(s)\big)\mathcal{U}(s,\tau)\xi\,ds\Big|_H^{\frac{2q}{2q-1}} \le C_1\varepsilon_1.
\tag{12.140}
\]
Here the constant $C_1$ is independent of $\varepsilon_1$, $n$ and $\tau$. Similarly, there is an integer $N_3(\varepsilon_1)>0$ such that for every $n>N_3$,
\[
\mathbb{E}\big|\mathcal{U}^*(T,\tau)\big(P_T-P_T^n\big)\mathcal{U}(T,\tau)\xi\big|_H^{\frac{2q}{2q-1}} \le C_2\varepsilon_1,
\tag{12.141}
\]
for a constant $C_2$ which is independent of $\varepsilon_1$, $n$ and $\tau$. Now, for any $\varepsilon>0$, let us choose $\varepsilon_1=\dfrac{\varepsilon}{C_1+C_2}$. Then,
\[
\mathbb{E}\big|R^{(\tau)}\xi - P^n(\tau)\xi\big|_H^{\frac{2q}{2q-1}} \le \varepsilon,
\qquad \forall\,n>\max\{N_2(\varepsilon_1),N_3(\varepsilon_1)\},\ \tau\in[t,T].
\]

Therefore, we obtain the desired result (12.135).

To show (12.133), for any $0\le t_1<t_2<T$ and $\eta_1,\eta_2\in L^{2q}_{\mathcal{F}_{t_1}}(\Omega;H)$, we choose $\xi_1=\eta_1$ and $u_1=v_1=0$ in the equation (12.39), and $\xi_2=0$, $u_2(\cdot)=\dfrac{\chi_{[t_1,t_2]}(\cdot)}{t_2-t_1}\,\eta_2$ and $v_2=0$ in the equation (12.40). By (12.132), and recalling the definition of the evolution operator $\mathcal{U}(\cdot,\cdot)$ (in Lemma 12.10), we see that
\[
\frac{1}{t_2-t_1}\,\mathbb{E}\int_{t_1}^{t_2} \big\langle P(s)\mathcal{U}(s,t_1)\eta_1,\,\eta_2\big\rangle_H\,ds
= \mathbb{E}\big\langle P_T\,\mathcal{U}(T,t_1)\eta_1,\,x_{2,t_2}(T)\big\rangle_H
- \mathbb{E}\int_{t_1}^T \big\langle F(s)\mathcal{U}(s,t_1)\eta_1,\,x_{2,t_2}(s)\big\rangle_H\,ds,
\tag{12.142}
\]

where $x_{2,t_2}(\cdot)$ stands for the solution to the equation (12.40) with the above choice of $\xi_2$, $u_2$ and $v_2$. It is clear that
\[
x_{2,t_2}(s) =
\begin{cases}
\displaystyle\int_{t_1}^s S(s-\tau)J(\tau)x_{2,t_2}(\tau)\,d\tau
+ \frac{1}{t_2-t_1}\int_{t_1}^s S(s-\tau)\eta_2\,d\tau
+ \int_{t_1}^s S(s-\tau)K(\tau)x_{2,t_2}(\tau)\,dW(\tau), & s\in[t_1,t_2],\\[2mm]
\mathcal{U}(s,t_2)\,x_{2,t_2}(t_2), & s\in[t_2,T].
\end{cases}
\tag{12.143}
\]
Then, by Proposition 3.12, we see that for all $s\in[t_1,t_2]$ (recall (12.47) for $M_{J,K,q}(\cdot)$),
\[
\mathbb{E}\big|x_{2,t_2}(s)\big|_H^{2q}
\le C\Big(\int_{t_1}^s M_{J,K,q}(\tau)\,\mathbb{E}\big|x_{2,t_2}(\tau)\big|_H^{2q}\,d\tau + \mathbb{E}|\eta_2|_H^{2q}\Big).
\]
By Gronwall's inequality, it follows that
\[
\big|x_{2,t_2}(\cdot)\big|_{L^\infty_{\mathbb{F}}(t_1,t_2;L^{2q}(\Omega;H))} \le C\,|\eta_2|_{L^{2q}_{\mathcal{F}_{t_1}}(\Omega;H)},
\tag{12.144}
\]
where the constant $C$ is independent of $t_2$. On the other hand, by (12.143), we have
\[
\mathbb{E}\big|x_{2,t_2}(t_2)-\eta_2\big|_H^{2q}
\le C\Big(\int_{t_1}^{t_2} M_{J,K,q}(\tau)\,\mathbb{E}\big|x_{2,t_2}(\tau)\big|_H^{2q}\,d\tau
+ \mathbb{E}\Big|\frac{1}{t_2-t_1}\int_{t_1}^{t_2} S(t_2-\tau)\eta_2\,d\tau - \eta_2\Big|_H^{2q}\Big).
\]
This, together with (12.144), implies that
\[
\lim_{t_2\to t_1+0}\mathbb{E}\big|x_{2,t_2}(t_2)-\eta_2\big|_H^{2q}
\le C\lim_{t_2\to t_1+0}\mathbb{E}\Big|\frac{1}{t_2-t_1}\int_{t_1}^{t_2} S(t_2-\tau)\eta_2\,d\tau - \eta_2\Big|_H^{2q} = 0.
\]

Therefore, for any $s\in[t_2,T]$,
\[
\begin{aligned}
&\lim_{t_2\to t_1+0}\mathbb{E}\big|\mathcal{U}(s,t_2)x_{2,t_2}(t_2)-\mathcal{U}(s,t_1)\eta_2\big|_H^{2q}\\
&\le C\lim_{t_2\to t_1+0}\Big(\mathbb{E}\big|\mathcal{U}(s,t_2)x_{2,t_2}(t_2)-\mathcal{U}(s,t_2)\eta_2\big|_H^{2q}
+ \mathbb{E}\big|\mathcal{U}(s,t_2)\eta_2-\mathcal{U}(s,t_1)\eta_2\big|_H^{2q}\Big)\\
&\le C\lim_{t_2\to t_1+0}\Big(\mathbb{E}\big|x_{2,t_2}(t_2)-\eta_2\big|_H^{2q}
+ \mathbb{E}\big|\mathcal{U}(s,t_2)\eta_2-\mathcal{U}(s,t_1)\eta_2\big|_H^{2q}\Big) = 0.
\end{aligned}
\]
Hence,
\[
\lim_{t_2\to t_1+0} x_{2,t_2}(s) = \mathcal{U}(s,t_1)\eta_2 \quad\text{in } L^{2q}_{\mathcal{F}_s}(\Omega;H),
\qquad \forall\,s\in[t_2,T].
\tag{12.145}
\]

By (12.144) and (12.145), we conclude that
\[
\begin{aligned}
&\lim_{t_2\to t_1+0}\Big(\mathbb{E}\big\langle P_T\,\mathcal{U}(T,t_1)\eta_1,\,x_{2,t_2}(T)\big\rangle_H
- \mathbb{E}\int_{t_1}^T \big\langle F(s)\mathcal{U}(s,t_1)\eta_1,\,x_{2,t_2}(s)\big\rangle_H\,ds\Big)\\
&= \mathbb{E}\big\langle P_T\,\mathcal{U}(T,t_1)\eta_1,\,\mathcal{U}(T,t_1)\eta_2\big\rangle_H
- \mathbb{E}\int_{t_1}^T \big\langle F(s)\mathcal{U}(s,t_1)\eta_1,\,\mathcal{U}(s,t_1)\eta_2\big\rangle_H\,ds.
\end{aligned}
\tag{12.146}
\]
Choosing $\xi_1=\eta_1$ and $u_1=v_1=0$ in (12.39), and $\xi_2=\eta_2$ and $u_2=v_2=0$ in (12.40), by (12.132), we find that
\[
\mathbb{E}\big\langle R^{(t_1)}\eta_1,\,\eta_2\big\rangle_H
= \mathbb{E}\big\langle P_T\,\mathcal{U}(T,t_1)\eta_1,\,\mathcal{U}(T,t_1)\eta_2\big\rangle_H
- \mathbb{E}\int_{t_1}^T \big\langle F(s)\mathcal{U}(s,t_1)\eta_1,\,\mathcal{U}(s,t_1)\eta_2\big\rangle_H\,ds.
\tag{12.147}
\]

Combining (12.142), (12.146) and (12.147), we obtain that
\[
\lim_{t_2\to t_1+0}\frac{1}{t_2-t_1}\,\mathbb{E}\int_{t_1}^{t_2} \big\langle P(s)\mathcal{U}(s,t_1)\eta_1,\,\eta_2\big\rangle_H\,ds
= \mathbb{E}\big\langle R^{(t_1)}\eta_1,\,\eta_2\big\rangle_H.
\tag{12.148}
\]
On the other hand, by Lemma 4.14, there is a monotonically decreasing sequence $\{t_2^{(n)}\}_{n=1}^\infty$ with $t_2^{(n)}>t_1$ for every $n$, such that for a.e. $t_1\in[0,T)$,
\[
\lim_{t_2^{(n)}\to t_1+0}\frac{1}{t_2^{(n)}-t_1}\,\mathbb{E}\int_{t_1}^{t_2^{(n)}} \big\langle P(s)\mathcal{U}(s,t_1)\eta_1,\,\eta_2\big\rangle_H\,ds
= \mathbb{E}\big\langle P(t_1)\eta_1,\,\eta_2\big\rangle_H.
\]
This, together with (12.148), implies that
\[
\mathbb{E}\big\langle R^{(t_1)}\eta_1,\,\eta_2\big\rangle_H = \mathbb{E}\big\langle P(t_1)\eta_1,\,\eta_2\big\rangle_H,
\qquad \text{for a.e. } t_1\in[0,T).
\]
Since $\eta_1$ and $\eta_2$ are arbitrary elements in $L^{2q}_{\mathcal{F}_{t_1}}(\Omega;H)$, we conclude (12.133).

By (12.132) and (12.133), we see that $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$ satisfies (12.45). Hence, $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$ is a relaxed transposition solution to (12.8), and by (12.108), (12.115), (12.121), (12.130) and (12.131), it satisfies the estimate (12.46). This completes the proof of Theorem 12.9.

12.4.6 A Regularity Result

In this subsection, we shall derive a regularity result for relaxed transposition solutions to the equation (12.8), which will play a key role in the proof of the general Pontryagin-type stochastic maximum principle in Section 12.5.

Let $\{\Delta_m\}_{m=1}^\infty$ be a sequence of partitions of $[0,T]$, that is,

\[
\Delta_m = \big\{t_i^m \ \big|\ i=0,1,\cdots,m\big\},
\qquad 0 = t_0^m < t_1^m < \cdots < t_m^m = T,
\]
such that $\Delta_m\subset\Delta_{m+1}$ and $|\Delta_m| \triangleq \max_{0\le i\le m-1}(t_{i+1}^m-t_i^m)\to 0$ as $m\to\infty$. Recall that $\{e_k\}_{k=1}^\infty$ is an orthonormal basis of $V$. For $m,n\in\mathbb{N}$, we introduce the following subspaces of $L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))$:

\[
\mathcal{H}_{m,n} = \Big\{\sum_{i=0}^{m-1}\sum_{k=1}^{n} \chi_{[t_i^m,t_{i+1}^m)}(\cdot)\,a_{ki}\,e_k\otimes\big(\mathcal{U}(\cdot,t_i^m)f_i\big)
\ \Big|\ a_{ki}\in\mathbb{C},\ f_i\in L^{2q}_{\mathcal{F}_{t_i^m}}(\Omega;H)\Big\}.
\tag{12.149}
\]
Here $\mathcal{U}(\cdot,\cdot)$ is the operator introduced in Lemma 12.10. In the sequel, we shall use the following norm on the linear space $\mathcal{H}_{m,n}$:
\[
\Big|\sum_{i=0}^{m-1}\sum_{k=1}^{n} \chi_{[t_i^m,t_{i+1}^m)}(\cdot)\,a_{ki}\,e_k\otimes f_i\Big|_{L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))},
\]
for any $v = \displaystyle\sum_{i=0}^{m-1}\sum_{k=1}^{n} \chi_{[t_i^m,t_{i+1}^m)}(\cdot)\,a_{ki}\,e_k\otimes\big(\mathcal{U}(\cdot,t_i^m)f_i\big)\ (\in\mathcal{H}_{m,n})$ with $a_{ki}\in\mathbb{C}$

and $f_i\in L^{2q}_{\mathcal{F}_{t_i^m}}(\Omega;H)$.

We have the following result.

Proposition 12.15. The set $\bigcup_{m,n=1}^\infty\mathcal{H}_{m,n}$ is dense in $L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))$.

Proof: We introduce the following subspace of $L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))$:
\[
\widetilde{\mathcal{H}}_{m,n} = \Big\{\sum_{i=0}^{m-1}\sum_{k=1}^{n} \chi_{[t_i^m,t_{i+1}^m)}(\cdot)\,a_{ki}\,e_k\otimes f_i
\ \Big|\ a_{ki}\in\mathbb{C},\ f_i\in L^{2q}_{\mathcal{F}_{t_i^m}}(\Omega;H)\Big\}.
\]
It is clear that $\bigcup_{m,n=1}^\infty\widetilde{\mathcal{H}}_{m,n}$ is dense in $L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))$.

For any $m,n\in\mathbb{N}$, $f_i\in L^{2q}_{\mathcal{F}_{t_i^m}}(\Omega;H)$ and $a_{ki}\in\mathbb{C}$ ($i\in\{0,1,\cdots,m-1\}$, $k\in\{1,2,\cdots,n\}$), put
\[
\tilde v_{m,n} \triangleq \sum_{i=0}^{m-1}\sum_{k=1}^{n} \chi_{[t_i^m,t_{i+1}^m)}(\cdot)\,a_{ki}\,e_k\otimes f_i.
\]
Clearly, $\tilde v_{m,n}\in\widetilde{\mathcal{H}}_{m,n}$. We claim that for any $\varepsilon>0$, there exist an $m_\varepsilon\in\mathbb{N}$ and a $v_{m_\varepsilon,n}\in\mathcal{H}_{m_\varepsilon,n}$ such that
\[
\big|\tilde v_{m,n} - v_{m_\varepsilon,n}\big|_{L^2_{\mathbb{F}}(0,T;L^{2q}(\Omega;L_2^0))} < \varepsilon.
\tag{12.150}
\]
Indeed, by Lemma 12.10, for each $f_i$, there is a $\delta_i^m>0$ such that for all $t\in[t_i^m,\,T-\delta_i^m)$ and $s\in[t,\,t+\delta_i^m]$, it holds that
$\big|\mathcal{U}(s,t)f_i - f_i\big|_{L^{2q}_{\mathcal{F}_T}(\Omega;H)}$
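The claim (12.150) rests on approximating a process by one that is piecewise constant (up to the factor $\mathcal{U}(\cdot,t_i^m)$) on the intervals of a refining partition. A purely deterministic sketch of this approximation idea, with the assumed example $f(t)=t$ on $[0,1)$ and left-endpoint values:

```python
# Deterministic sketch of the approximation idea behind (12.150): a continuous
# function is replaced by its left-endpoint values on the intervals of a
# refining partition, and the error shrinks as the mesh |Delta_m| -> 0.

def approx_error(m, samples=10000):
    err = 0.0
    for k in range(samples):
        t = k / samples
        i = int(t * m)                  # interval [i/m, (i+1)/m) containing t
        err = max(err, abs(t - i / m))  # |f(t) - f(t_i)| for f(t) = t
    return err

errs = [approx_error(m) for m in (2, 4, 8, 16)]   # decreases with the mesh
```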

$\delta>0$ such that $B_\delta(x) \triangleq \big\{z\in H\ \big|\ |z-x|_H\le\delta\big\}\subseteq O$, there exists a constant $C_L>0$ so that
\[
|\varphi(z)-\varphi(\hat z)| \le C_L\,|z-\hat z|_H,\qquad \forall\,z,\hat z\in B_\delta(x).
\]
Thus, for any fixed $y\in H$ and $t>0$ small enough,
\[
\frac{|\varphi(z+ty)-\varphi(z)|}{t} \le C_L\,|y|_H,\qquad \forall\,z\in B_{\delta/2}(x).
\]

Consequently, the following functional is well defined:
\[
\varphi^0(x;y) \triangleq \limsup_{\substack{z\to x,\,z\in O\\ t\downarrow 0}} \frac{\varphi(z+ty)-\varphi(z)}{t}.
\tag{12.233}
\]
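As an assumed concrete example of (12.233) (with $H=\mathbb{R}$ and $O$ a neighborhood of $0$): for $\varphi(x)=|x|$ one has $\varphi^0(0;y)=|y|$, and the generalized gradient $\partial\varphi(0)$ is the interval $[-1,1]$. The sketch below approximates $\varphi^0(0;\pm 1)$ by sampling difference quotients over $z$ near $0$ and small $t>0$:

```python
# Numerical sketch (assumed example, H = R): for phi(x) = |x|, sampled
# difference quotients (phi(z + t*y) - phi(z))/t approximate the generalized
# directional derivative phi0(0; y) = |y| of (12.233).

def phi(x):
    return abs(x)

def sup_quotient(x, y, eps=1e-3):
    qs = []
    for i in range(-50, 51):
        z = x + i * eps / 50.0            # points z near x
        for t in (eps, eps / 2.0, eps / 4.0):
            qs.append((phi(z + t * y) - phi(z)) / t)
    return max(qs)

q_plus = sup_quotient(0.0, 1.0)    # approximates phi0(0; 1) = 1
q_minus = sup_quotient(0.0, -1.0)  # approximates phi0(0; -1) = 1
```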

It is an easy matter to see that
\[
\varphi^0(x;\lambda y) = \lambda\,\varphi^0(x;y),\qquad \forall\,y\in H,\ \lambda\ge 0,
\tag{12.234}
\]

12.6 Sufficient Condition for Optimal Controls

and
\[
\varphi^0(x;y+z) \le \varphi^0(x;y) + \varphi^0(x;z),\qquad \forall\,y,z\in H.
\tag{12.235}
\]

Thus, the map $y\mapsto\varphi^0(x;y)$ is convex. Further, the inequality (12.235) implies that
\[
-\varphi^0(x;y) \le \varphi^0(x;-y),\qquad \forall\,y\in H.
\tag{12.236}
\]
Next, we fix a $z\in H$ and define $F:\{\lambda z\ |\ \lambda\in\mathbb{R}\}\to\mathbb{R}$ by
\[
F(\lambda z) = \lambda\,\varphi^0(x;z),\qquad \forall\,\lambda\in\mathbb{R}.
\]
Then, for $\lambda\ge 0$,
\[
\begin{cases}
F(\lambda z) \equiv \lambda\,\varphi^0(x;z) = \varphi^0(x;\lambda z),\\[1mm]
F(-\lambda z) \equiv -\lambda\,\varphi^0(x;z) = -\varphi^0(x;\lambda z) \le \varphi^0(x;-\lambda z),
\end{cases}
\]
which yields
\[
F(\lambda z) \le \varphi^0(x;\lambda z),\qquad \forall\,\lambda\in\mathbb{R}.
\tag{12.237}
\]
Therefore, $F$ is a linear functional defined on the linear space spanned by $z$, and it is dominated by the convex function $\varphi^0(x;\cdot)$. By the Hahn-Banach theorem, $F$ can be extended to a bounded linear functional on $H$. Then, by the classical Riesz representation theorem, there exists $\xi\in H$ such that
\[
\begin{cases}
\langle\xi,\lambda z\rangle_H = F(\lambda z) \equiv \lambda\,\varphi^0(x;z),&\forall\,\lambda\in\mathbb{R},\\[1mm]
\langle\xi,y\rangle_H \le \varphi^0(x;y),&\forall\,y\in H.
\end{cases}
\tag{12.238}
\]
This implies $\xi\in\partial\varphi(x)$. Consequently, $\partial\varphi(x)$ is nonempty.

2) It follows from (12.233) that
\[
(-\varphi)^0(x;y)
= \limsup_{\substack{z\to x,\,z\in O\\ t\downarrow 0}} \frac{-\varphi(z+ty)+\varphi(z)}{t}
= \limsup_{\substack{z'\to x,\,z'\in O\\ t\downarrow 0}} \frac{-\varphi(z')+\varphi(z'-ty)}{t}
= \varphi^0(x;-y).
\]
Thus, $\xi\in\partial(-\varphi)(x)$ if and only if
\[
\langle -\xi,\,y\rangle_H = \langle \xi,\,-y\rangle_H \le (-\varphi)^0(x;-y) = \varphi^0(x;y),\qquad \forall\,y\in H,
\]

which is equivalent to $-\xi\in\partial\varphi(x)$.

3) Suppose $\varphi$ attains a local minimum at $x$. Then
\[
\varphi^0(x;y)
= \limsup_{\substack{z\to x,\,z\in O\\ t\downarrow 0}} \frac{\varphi(z+ty)-\varphi(z)}{t}
\ge \limsup_{t\to 0^+} \frac{\varphi(x+ty)-\varphi(x)}{t}
\ge 0 = \langle 0,y\rangle_H,\qquad \forall\,y\in H,
\]

which implies $0\in\partial\varphi(x)$. If $\varphi$ attains a local maximum at $x$, then the conclusion follows from the fact that $-\varphi$ attains a local minimum at $x$.

By fixing some arguments in a function, one may naturally define its partial generalized gradient. For example, if $\psi(\cdot,\cdot):H\times\widetilde H\to\mathbb{R}$ is locally Lipschitz, by $\partial_x\psi(x,u)$ (resp. $\partial_u\psi(x,u)$) we mean the partial generalized gradient of $\psi$ in $x$ (resp. in $u$) at $(x,u)\in H\times U$.

In the rest of this subsection, we give two technical lemmas.

Lemma 12.19. Let $\psi$ be a convex or concave function on $H\times\widetilde H$. Assume that $\psi(x,u)$ is differentiable in $x$ and that $\psi_x(x,u)$ is continuous in $(x,u)$. Then
\[
\big\{(\psi_x(\hat x,\hat u),\,r)\ \big|\ r\in\partial_u\psi(\hat x,\hat u)\big\} \subseteq \partial_{x,u}\psi(\hat x,\hat u),
\qquad \forall\,(\hat x,\hat u)\in H\times U.
\tag{12.239}
\]
Proof: Once (12.239) is proved for convex $\psi$, it is also true for concave $\psi$, by noting that $-\psi$ is convex and using assertion 2) in Lemma 12.18. Hence, we only handle the case that $\psi$ is convex.

For any $\xi\in H$ and $u\in\widetilde H$, we choose a sequence $\{(x_j,\delta_j)\}_{j=1}^\infty\subset H\times\mathbb{R}$ as follows:
\[
(x_j,\hat u)\in H\times U,\quad (x_j+\delta_j\xi,\,\hat u+\delta_j u)\in H\times U,\quad
\delta_j\to 0^+\ \text{as } j\to\infty,\quad |x_j-\hat x|_H\le\delta_j^2.
\]
By the convexity of $\psi$, we have
\[
\lim_{j\to\infty}\frac{\psi(x_j+\delta_j\xi,\,\hat u+\delta_j u)-\psi(\hat x,\,\hat u+\delta_j u)}{\delta_j}
\ge \lim_{j\to\infty}\frac{\big\langle\psi_x(\hat x,\hat u+\delta_j u),\,x_j-\hat x+\delta_j\xi\big\rangle_H}{\delta_j}
= \big\langle\psi_x(\hat x,\hat u),\,\xi\big\rangle_H.
\tag{12.240}
\]
Similarly,
\[
\lim_{j\to\infty}\frac{\psi(\hat x,\,\hat u+\delta_j u)-\psi(\hat x,\hat u)}{\delta_j} \ge \langle r,u\rangle_{\widetilde H}.
\tag{12.241}
\]

Also,
\[
\lim_{j\to\infty}\frac{\psi(\hat x,\hat u)-\psi(x_j,\hat u)}{\delta_j}
\ge \lim_{j\to\infty}\frac{\big\langle\psi_x(x_j,\hat u),\,\hat x-x_j\big\rangle_H}{\delta_j} = 0.
\tag{12.242}
\]
It follows from (12.240)--(12.242) that
\[
\lim_{j\to\infty}\frac{\psi(x_j+\delta_j\xi,\,\hat u+\delta_j u)-\psi(x_j,\hat u)}{\delta_j}
\ge \big\langle\psi_x(\hat x,\hat u),\,\xi\big\rangle_H + \langle r,u\rangle_{\widetilde H}.
\]
This, together with the definition of the generalized gradient, implies that
\[
(\psi_x(\hat x,\hat u),\,r)\in\partial_{x,u}\psi(\hat x,\hat u).
\]

12.6.2 A Sufficient Condition for Optimal Controls

In this subsection, following [371, 393] (for stochastic controls in finite dimensions), we shall derive a sufficient condition for optimal controls for Problem (OP); i.e., for a given admissible control $\bar u(\cdot)\in\mathcal{U}[0,T]$ and the corresponding state $\bar x(\cdot)$, we hope to find a suitable condition guaranteeing that (12.4) holds. For this purpose, for the above admissible pair $(\bar x(\cdot),\bar u(\cdot))$, denote by $(y(\cdot),Y(\cdot))$ the corresponding transposition solution to (12.7), and by $(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)})$ the corresponding relaxed transposition solution to the equation (12.8), in which $F(\cdot)$, $J(\cdot)$, $K(\cdot)$ and $P_T$ are given by (12.165)⁷. Recall (12.164) for the definition of $H(\cdot,\cdot,\cdot,\cdot,\cdot)$, and write
\[
\mathbb{H}(t,x,u) \triangleq H\big(t,x,u,y(t),Y(t)\big)
+ \frac12\big\langle P(t)b(t,x,u),\,b(t,x,u)\big\rangle_{L_2^0}
- \big\langle P(t)b(t,\bar x(t),\bar u(t)),\,b(t,x,u)\big\rangle_{L_2^0},
\qquad \forall\,(t,x,u)\in[0,T]\times H\times U.
\]
We need the following intermediate result.

Lemma 12.20. Let (S1)--(S4) hold. Then, for a.e. $(t,\omega)\in[0,T]\times\Omega$,
\[
\partial_u H\big(t,\bar x(t),\bar u(t),y(t),Y(t)\big) = \partial_u\mathbb{H}\big(t,\bar x(t),\bar u(t)\big).
\tag{12.243}
\]

Proof: Fix a $t\in[0,T]$. Denote
\[
H(u) \triangleq H\big(t,\bar x(t),u,y(t),Y(t)\big),\qquad
\mathbb{H}(u) \triangleq \mathbb{H}\big(t,\bar x(t),u\big),\qquad
b(u) \triangleq b\big(t,\bar x(t),u\big),
\]
\[
\psi(u) \triangleq \frac12\big\langle P(t)b(u),\,b(u)\big\rangle_{L_2^0} - \big\langle P(t)b(\bar u(t)),\,b(u)\big\rangle_{L_2^0}.
\]
Then $\mathbb{H}(u) = H(u) + \psi(u)$. Note that for any $r\to 0^+$ and $u,v\in U$ with $u\to\bar u(t)$,
\[
\psi(u+rv) - \psi(u)
= \frac12\big\langle P(t)\big[b(u+rv)+b(u)-2b(\bar u(t))\big],\ b(u+rv)-b(u)\big\rangle_{L_2^0}
= o(r).
\]
Thus,
\[
\limsup_{\substack{u\to\bar u(t),\,u\in U\\ r\downarrow 0}} \frac{\mathbb{H}(u+rv)-\mathbb{H}(u)}{r}
= \limsup_{\substack{u\to\bar u(t),\,u\in U\\ r\downarrow 0}} \frac{H(u+rv)-H(u)}{r}.
\]

Consequently, by (12.232), the desired result (12.243) follows.

Now, let us prove the following sufficient optimality condition for Problem (OP).

⁷ Clearly, the data in both (12.7) and (12.8) can be well defined for any given admissible pair $(\bar x(\cdot),\bar u(\cdot))$, not only for the optimal ones.

Theorem 12.21. Let (S1)--(S4) hold. Suppose that the function $h(\cdot)$ is convex, that $H(t,\cdot,\cdot,y(t),Y(t))$ is concave for a.e. $t\in[0,T]$, a.s., and that
\[
\mathbb{H}\big(t,\bar x(t),\bar u(t)\big) = \max_{u\in U}\mathbb{H}\big(t,\bar x(t),u\big),
\qquad \text{a.e. } (t,\omega)\in[0,T]\times\Omega.
\tag{12.244}
\]

Then $(\bar x(\cdot),\bar u(\cdot))$ is an optimal pair for Problem (OP).

Proof: By the maximum condition (12.244), Lemma 12.20 and assertion 3) in Lemma 12.18, we have
\[
0\in\partial_u\mathbb{H}\big(t,\bar x(t),\bar u(t)\big) = \partial_u H\big(t,\bar x(t),\bar u(t),y(t),Y(t)\big).
\]
Hence, by Lemma 12.19, it follows that
\[
\big(H_x(t,\bar x(t),\bar u(t),y(t),Y(t)),\,0\big) \in \partial_{x,u}H\big(t,\bar x(t),\bar u(t),y(t),Y(t)\big).
\]
This, combined with the concavity of $H(t,\cdot,\cdot,y(t),Y(t))$, yields that
\[
\int_0^T \big(H(t,x(t),u(t),y(t),Y(t)) - H(t,\bar x(t),\bar u(t),y(t),Y(t))\big)\,dt
\le \int_0^T \big\langle H_x(t,\bar x(t),\bar u(t),y(t),Y(t)),\,x(t)-\bar x(t)\big\rangle_H\,dt,
\tag{12.245}
\]



for any admissible pair (x(·), u(·)). Let ξ(t) = x(t) − x ¯(t). Then ξ(t) satisfies the following equation:  ( ) dξ(t) = Aξ(t) + ax (t, x(t), u(t))ξ(t) + α(t) dt    ( ) + bx (t, x(t), u(t))ξ(t) + β(t) dW (t) in (0, T ], (12.246)    ξ(0) = 0, where

 ( ) ( ) ( ) ∆  α(t) = −ax t, x ¯(t), u ¯(t) ξ(t) + a t, x(t), u(t) − a t, x ¯(t), u ¯(t) , 

( ) ( ) ( ) ∆ β(t) = −bx t, x ¯(t), u ¯(t) ξ(t) + b t, x(t), u(t) − b t, x ¯(t), u ¯(t) .
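The terms $\alpha(t)$ and $\beta(t)$ are exactly first-order Taylor remainders, hence quadratically small in the increment when $u=\bar u$. A scalar sketch with the assumed example $a(x)=x^2$:

```python
# Scalar sketch (assumed example) of why alpha(t) is a Taylor remainder:
# with a(x) = x**2, a reference point xbar and xi = x - xbar,
#   alpha = a(x) - a(xbar) - a'(xbar)*xi = xi**2,
# i.e. alpha is quadratically small in xi.

def a(x):
    return x ** 2

def a_x(x):
    return 2.0 * x

xbar = 1.5
xis = (1.0, 0.1, 0.01)
alphas = [a(xbar + xi) - a(xbar) - a_x(xbar) * xi for xi in xis]
```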

By (12.245)--(12.246), it follows from the definition of the transposition solution to (12.7) that
\[
\begin{aligned}
\mathbb{E}\big\langle h_x(\bar x(T)),\,\xi(T)\big\rangle_H
&= -\mathbb{E}\big\langle y(T),\,\xi(T)\big\rangle_H + \mathbb{E}\big\langle y(0),\,\xi(0)\big\rangle_H\\
&= -\mathbb{E}\int_0^T \Big[\big\langle g_x(t,\bar x(t),\bar u(t)),\,\xi(t)\big\rangle_H
+ \big\langle y(t),\,\alpha(t)\big\rangle_H + \big\langle Y(t),\,\beta(t)\big\rangle_{L_2^0}\Big]\,dt\\
&= \mathbb{E}\int_0^T \big\langle H_x(t,\bar x(t),\bar u(t),y(t),Y(t)),\,\xi(t)\big\rangle_H\,dt\\
&\quad - \mathbb{E}\int_0^T \Big(\big\langle y(t),\,a(t,x(t),u(t))-a(t,\bar x(t),\bar u(t))\big\rangle_H
+ \big\langle Y(t),\,b(t,x(t),u(t))-b(t,\bar x(t),\bar u(t))\big\rangle_{L_2^0}\Big)\,dt\\
&\ge \mathbb{E}\int_0^T \big(H(t,x(t),u(t),y(t),Y(t)) - H(t,\bar x(t),\bar u(t),y(t),Y(t))\big)\,dt\\
&\quad - \mathbb{E}\int_0^T \Big(\big\langle y(t),\,a(t,x(t),u(t))-a(t,\bar x(t),\bar u(t))\big\rangle_H
+ \big\langle Y(t),\,b(t,x(t),u(t))-b(t,\bar x(t),\bar u(t))\big\rangle_{L_2^0}\Big)\,dt\\
&= -\mathbb{E}\int_0^T \big(g(t,x(t),u(t)) - g(t,\bar x(t),\bar u(t))\big)\,dt.
\end{aligned}
\]

On the other hand, the convexity of $h$ implies that
\[
\mathbb{E}\big\langle h_x(\bar x(T)),\,\xi(T)\big\rangle_H \le \mathbb{E}h(x(T)) - \mathbb{E}h(\bar x(T)).
\]
Combining the above two inequalities, we arrive at
\[
\mathcal{J}(\bar u(\cdot)) \le \mathcal{J}(u(\cdot)).
\tag{12.247}
\]
Since $u(\cdot)\in\mathcal{U}[0,T]$ is arbitrary, (12.247) means that $\bar u(\cdot)$ is an optimal control for Problem (OP). This completes the proof of Theorem 12.21.

12.7 Second Order Necessary Condition for Optimal Controls

In Section 12.5, we have given in Theorem 12.17 a Pontryagin-type maximum principle for Problem (OP), which is a first order necessary condition for optimal controls. Similarly to the calculus of variations (or even elementary calculus), in addition to first order necessary conditions, some second order necessary conditions are needed to distinguish optimal controls from the candidates satisfying the first order necessary conditions, especially when the controls are singular, i.e., when these controls satisfy the first order necessary conditions trivially. In this section, we shall give an integral-type second order necessary condition for optimal controls.

For simplicity, as in Section 12.6, we assume that $H$ is a real separable Hilbert space, that the control region $U$ is a convex subset of the separable Hilbert space $\widetilde H$ appearing in (S4), and that $L^p_{\mathcal{F}_T}(\Omega)$ ($p\ge 1$) is separable. We need to introduce the following assumption.⁸

(S6) For a.e. $(t,\omega)\in(0,T)\times\Omega$, the functions $a(t,\cdot,\cdot):H\times U\to H$, $b(t,\cdot,\cdot):H\times U\to L_2^0$ and $g(t,\cdot,\cdot):H\times U\to\mathbb{R}$ are $C^2$. Moreover, for a.e. $(t,\omega)\in(0,T)\times\Omega$ and any $(x,u)\in H\times U$,
\[
\begin{cases}
|a_{xu}(t,x,u)|_{\mathcal{L}(H,\widetilde H;H)} + |b_{xu}(t,x,u)|_{\mathcal{L}(H,\widetilde H;L_2^0)}
+ |a_{uu}(t,x,u)|_{\mathcal{L}(\widetilde H,\widetilde H;H)} + |b_{uu}(t,x,u)|_{\mathcal{L}(\widetilde H,\widetilde H;L_2^0)} \le C_L,\\[1mm]
|g_{xu}(t,x,u)|_{\mathcal{L}(H;\widetilde H)} + |g_{uu}(t,x,u)|_{\mathcal{L}(\widetilde H)} \le C_L.
\end{cases}
\]
Let $(\bar x(\cdot),\bar u(\cdot))$ be an optimal pair for Problem (OP). Let us introduce some notation. For $\psi=a,b,g$, put
\[
\begin{cases}
\psi_1(t) = \psi_x(t,\bar x(t),\bar u(t)),\qquad \psi_2(t) = \psi_u(t,\bar x(t),\bar u(t)),\\[1mm]
\psi_{11}(t) = \psi_{xx}(t,\bar x(t),\bar u(t)),\qquad \psi_{22}(t) = \psi_{uu}(t,\bar x(t),\bar u(t)),\qquad
\psi_{12}(t) = \psi_{xu}(t,\bar x(t),\bar u(t)).
\end{cases}
\tag{12.248}
\]
Let $(y(\cdot),Y(\cdot))$ be the transposition solution of the equation (12.7). Put
\[
\begin{cases}
H_1(t) = H_x(t,\bar x(t),\bar u(t),y(t),Y(t)),\qquad H_2(t) = H_u(t,\bar x(t),\bar u(t),y(t),Y(t)),\\[1mm]
H_{11}(t) = H_{xx}(t,\bar x(t),\bar u(t),y(t),Y(t)),\qquad H_{12}(t) = H_{xu}(t,\bar x(t),\bar u(t),y(t),Y(t)),\\[1mm]
H_{22}(t) = H_{uu}(t,\bar x(t),\bar u(t),y(t),Y(t)).
\end{cases}
\]
Let $(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)})$ be the relaxed transposition solution to the equation (12.8) in which $F(\cdot)$, $J(\cdot)$, $K(\cdot)$ and $P_T$ are given by (12.165), i.e.,
\[
P_T = -h_{xx}\big(\bar x(T)\big),\qquad J(t) = a_1(t),\qquad K(t) = b_1(t),\qquad F(t) = -H_{11}(t).
\]
For any admissible control $u(\cdot)\in\mathcal{U}[0,T]$, let $x_1(\cdot)$ be the corresponding solution to the following equation:
\[
\begin{cases}
dx_1(t) = \big(Ax_1(t) + a_1(t)x_1(t) + a_2(t)\,\delta u(t)\big)dt
+ \big(b_1(t)x_1(t) + b_2(t)\,\delta u(t)\big)dW(t) &\text{in } (0,T],\\[1mm]
x_1(0) = 0,
\end{cases}
\tag{12.249}
\]

⁸ See Subsection 2.11.1 for the notations $\mathcal{L}(H,\widetilde H;H)$, $\mathcal{L}(H,\widetilde H;L_2^0)$, $\mathcal{L}(\widetilde H,\widetilde H;H)$ and $\mathcal{L}(\widetilde H,\widetilde H;L_2^0)$.


where $\delta u(\cdot) = u(\cdot)-\bar u(\cdot)$. The desired integral-type second order necessary condition for optimal controls for Problem (OP) is stated as follows.

Theorem 12.22. Let (S1)--(S6) hold, and let $\bar u(\cdot)\in L^4_{\mathbb{F}}(0,T;\widetilde H)\cap\mathcal{U}[0,T]$ be an optimal control for Problem (OP). Then, for any $u(\cdot)\in L^4_{\mathbb{F}}(0,T;\widetilde H)\cap\mathcal{U}[0,T]$ with
\[
\mathbb{E}\int_0^T \big\langle H_2(t),\,u(t)-\bar u(t)\big\rangle_{\widetilde H}\,dt = 0,
\tag{12.250}
\]
the following second order necessary condition holds:
\[
\begin{aligned}
&\mathbb{E}\int_0^T \Big[\big\langle H_{22}(t)\big(u(t)-\bar u(t)\big),\,u(t)-\bar u(t)\big\rangle_{\widetilde H}
+ \big\langle b_2(t)^*P(t)b_2(t)\big(u(t)-\bar u(t)\big),\,u(t)-\bar u(t)\big\rangle_{\widetilde H}\Big]\,dt\\
&\quad + 2\,\mathbb{E}\int_0^T \big\langle \big(H_{12}(t) + a_2(t)^*P(t) + b_2(t)^*P(t)b_1(t)\big)x_1(t),\,u(t)-\bar u(t)\big\rangle_{\widetilde H}\,dt\\
&\quad + \mathbb{E}\int_0^T \Big\langle \big(\widehat Q^{(0)}+Q^{(0)}\big)\Big(0,\ a_2(t)\big(u(t)-\bar u(t)\big),\ b_2(t)\big(u(t)-\bar u(t)\big)\Big),\ b_2(t)\big(u(t)-\bar u(t)\big)\Big\rangle_{L_2^0}\,dt\\
&\le 0.
\end{aligned}
\tag{12.251}
\]
Proof: The proof will be divided into four steps.

Step 1. In this step, we introduce some notation.

Let $\delta x(\cdot) = x(\cdot)-\bar x(\cdot)$. Obviously, $\delta u(\cdot) = u(\cdot)-\bar u(\cdot)\in L^4_{\mathbb{F}}(0,T;\widetilde H)$. Since $U$ is convex, we see that, for any $\varepsilon\in[0,1]$,
\[
u^\varepsilon(\cdot) = \bar u(\cdot)+\varepsilon\,\delta u(\cdot) = (1-\varepsilon)\bar u(\cdot)+\varepsilon u(\cdot) \in L^4_{\mathbb{F}}(0,T;\widetilde H)\cap\mathcal{U}[0,T].
\]
Denote by $x^\varepsilon(\cdot)$ the corresponding solution to (12.2) with $u(\cdot)$ replaced by $u^\varepsilon(\cdot)$. Let $\delta x^\varepsilon(\cdot) = x^\varepsilon(\cdot)-\bar x(\cdot)$ and, for $\psi=a,b,g$, put
\[
\begin{cases}
\tilde\psi_{11}^\varepsilon(t) \triangleq \displaystyle\int_0^1 (1-\theta)\,\psi_{xx}\big(t,\bar x(t)+\theta\,\delta x^\varepsilon(t),\,\bar u(t)+\theta\varepsilon\,\delta u(t)\big)\,d\theta,\\[2mm]
\tilde\psi_{12}^\varepsilon(t) \triangleq \displaystyle\int_0^1 (1-\theta)\,\psi_{xu}\big(t,\bar x(t)+\theta\,\delta x^\varepsilon(t),\,\bar u(t)+\theta\varepsilon\,\delta u(t)\big)\,d\theta,\\[2mm]
\tilde\psi_{22}^\varepsilon(t) \triangleq \displaystyle\int_0^1 (1-\theta)\,\psi_{uu}\big(t,\bar x(t)+\theta\,\delta x^\varepsilon(t),\,\bar u(t)+\theta\varepsilon\,\delta u(t)\big)\,d\theta.
\end{cases}
\]

Also, we define
\[
\tilde h_{xx}^\varepsilon(T) \triangleq \int_0^1 (1-\theta)\,h_{xx}\big(\bar x(T)+\theta\,\delta x^\varepsilon(T)\big)\,d\theta.
\]

Obviously, $\delta x^\varepsilon$ solves the following stochastic evolution equation:
$$\begin{cases}
d\delta x^\varepsilon=\big[A\delta x^\varepsilon+a_1(t)\delta x^\varepsilon+\varepsilon a_2(t)\delta u+\widetilde a^\varepsilon_{11}(t)(\delta x^\varepsilon,\delta x^\varepsilon)\\
\qquad\qquad+2\varepsilon\,\widetilde a^\varepsilon_{12}(t)(\delta x^\varepsilon,\delta u)+\varepsilon^2\widetilde a^\varepsilon_{22}(t)(\delta u,\delta u)\big]dt\\
\qquad+\big[b_1(t)\delta x^\varepsilon+\varepsilon b_2(t)\delta u+\widetilde b^\varepsilon_{11}(t)(\delta x^\varepsilon,\delta x^\varepsilon)\\
\qquad\qquad+2\varepsilon\,\widetilde b^\varepsilon_{12}(t)(\delta x^\varepsilon,\delta u)+\varepsilon^2\widetilde b^\varepsilon_{22}(t)(\delta u,\delta u)\big]dW(t)\quad\text{in }(0,T],\\
\delta x^\varepsilon(0)=0.
\end{cases}\eqno(12.252)$$

Consider the following linearized stochastic evolution equation:
$$\begin{cases}
dx_2=\big[Ax_2+a_1(t)x_2+a_{11}(t)(x_1,x_1)+2a_{12}(t)(x_1,\delta u)+a_{22}(t)(\delta u,\delta u)\big]dt\\
\qquad+\big[b_1(t)x_2+b_{11}(t)(x_1,x_1)+2b_{12}(t)(x_1,\delta u)+b_{22}(t)(\delta u,\delta u)\big]dW(t)\quad\text{in }(0,T],\\
x_2(0)=0.
\end{cases}\eqno(12.253)$$

Similar to Step 2 in the proof of Theorem 12.4, we can prove the following estimates (for some constant $C$, independent of $\varepsilon$):
$$\begin{cases}
|\delta x^\varepsilon|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}\le C\varepsilon|\delta u|_{L^2_{\mathbb F}(0,T;\widetilde H)},\\
|x_1|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}\le C|\delta u|_{L^2_{\mathbb F}(0,T;\widetilde H)},\\
|x_2|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}\le C|\delta u|^2_{L^4_{\mathbb F}(0,T;\widetilde H)},\\
|\delta x^\varepsilon-\varepsilon x_1|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}\le C\varepsilon^2|\delta u|^2_{L^4_{\mathbb F}(0,T;\widetilde H)}.
\end{cases}\eqno(12.254)$$

Step 2. We claim that there exists a subsequence $\{\varepsilon_n\}_{n=1}^\infty\subset(0,1]$ such that $\lim_{n\to\infty}\varepsilon_n=0$ and
$$\Big|\delta x^{\varepsilon_n}-\varepsilon_n x_1-\frac{\varepsilon_n^2}{2}\,x_2\Big|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}=o(\varepsilon_n^2),\quad\text{as }n\to\infty.\eqno(12.255)$$

To show this, we write $r_2^\varepsilon(\cdot)=\varepsilon^{-2}\big(\delta x^\varepsilon(\cdot)-\varepsilon x_1(\cdot)-\frac{\varepsilon^2}{2}x_2(\cdot)\big)$. Then, $r_2^\varepsilon(\cdot)$ fulfills

$$\begin{cases}
dr_2^\varepsilon=\Big\{Ar_2^\varepsilon+a_1(t)r_2^\varepsilon+\Big[\widetilde a^\varepsilon_{11}(t)\Big(\dfrac{\delta x^\varepsilon}{\varepsilon},\dfrac{\delta x^\varepsilon}{\varepsilon}\Big)-\dfrac12 a_{11}(t)(x_1,x_1)\Big]\\
\qquad+\Big[2\widetilde a^\varepsilon_{12}(t)\Big(\dfrac{\delta x^\varepsilon}{\varepsilon},\delta u\Big)-a_{12}(t)(x_1,\delta u)\Big]+\Big(\widetilde a^\varepsilon_{22}(t)-\dfrac12 a_{22}(t)\Big)(\delta u,\delta u)\Big\}dt\\
\qquad+\Big\{b_1(t)r_2^\varepsilon+\Big[\widetilde b^\varepsilon_{11}(t)\Big(\dfrac{\delta x^\varepsilon}{\varepsilon},\dfrac{\delta x^\varepsilon}{\varepsilon}\Big)-\dfrac12 b_{11}(t)(x_1,x_1)\Big]\\
\qquad+\Big[2\widetilde b^\varepsilon_{12}(t)\Big(\dfrac{\delta x^\varepsilon}{\varepsilon},\delta u\Big)-b_{12}(t)(x_1,\delta u)\Big]+\Big(\widetilde b^\varepsilon_{22}(t)-\dfrac12 b_{22}(t)\Big)(\delta u,\delta u)\Big\}dW(t)\ \ \text{in }(0,T],\\
r_2^\varepsilon(0)=0.
\end{cases}\eqno(12.256)$$

Put
$$\begin{aligned}
\Psi_{1,\varepsilon}(t)=\;&\Big[\widetilde a^\varepsilon_{11}(t)\Big(\frac{\delta x^\varepsilon(t)}{\varepsilon},\frac{\delta x^\varepsilon(t)}{\varepsilon}\Big)-\frac12 a_{11}(t)(x_1(t),x_1(t))\Big]\\
&+\Big[2\widetilde a^\varepsilon_{12}(t)\Big(\frac{\delta x^\varepsilon(t)}{\varepsilon},\delta u(t)\Big)-a_{12}(t)(x_1(t),\delta u(t))\Big]\\
&+\Big(\widetilde a^\varepsilon_{22}(t)-\frac12 a_{22}(t)\Big)(\delta u(t),\delta u(t))
\end{aligned}$$

and

$$\begin{aligned}
\Psi_{2,\varepsilon}(t)=\;&\Big[\widetilde b^\varepsilon_{11}(t)\Big(\frac{\delta x^\varepsilon(t)}{\varepsilon},\frac{\delta x^\varepsilon(t)}{\varepsilon}\Big)-\frac12 b_{11}(t)(x_1(t),x_1(t))\Big]\\
&+\Big[2\widetilde b^\varepsilon_{12}(t)\Big(\frac{\delta x^\varepsilon(t)}{\varepsilon},\delta u(t)\Big)-b_{12}(t)(x_1(t),\delta u(t))\Big]\\
&+\Big(\widetilde b^\varepsilon_{22}(t)-\frac12 b_{22}(t)\Big)(\delta u(t),\delta u(t)).
\end{aligned}$$
We have that
$$\begin{aligned}
\mathbb E\,|r_2^\varepsilon(t)|_H^2&=\mathbb E\,\Big|\int_0^t S(t-s)a_1(s)r_2^\varepsilon(s)\,ds+\int_0^t S(t-s)b_1(s)r_2^\varepsilon(s)\,dW(s)\\
&\qquad+\int_0^t S(t-s)\Psi_{1,\varepsilon}(s)\,ds+\int_0^t S(t-s)\Psi_{2,\varepsilon}(s)\,dW(s)\Big|_H^2\\
&\le C\Big(\int_0^t\mathbb E\,|r_2^\varepsilon(s)|_H^2\,ds+\mathbb E\int_0^t|\Psi_{1,\varepsilon}(s)|_H^2\,ds+\mathbb E\int_0^t|\Psi_{2,\varepsilon}(s)|_{L_2^0}^2\,ds\Big).
\end{aligned}\eqno(12.257)$$
By (12.254), there exists a subsequence $\{\varepsilon_n\}_{n=1}^\infty\subset(0,1]$ such that $\varepsilon_n\to0$ and $x^{\varepsilon_n}(\cdot)\to\bar x(\cdot)$ (in $H$) a.e. in $[0,T]\times\Omega$, as $n\to\infty$. Then, by (12.254), the condition (S4) and Lebesgue's dominated convergence theorem, we deduce that

$$\begin{aligned}
&\lim_{n\to\infty}\mathbb E\int_0^T|\Psi_{1,\varepsilon_n}(t)|_H^2\,dt\\
&=\lim_{n\to\infty}\mathbb E\int_0^T\Big|\Big[\widetilde a^{\varepsilon_n}_{11}(t)\Big(\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n},\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n}\Big)-\frac12 a_{11}(t)(x_1(t),x_1(t))\Big]\\
&\qquad\qquad+\Big[2\widetilde a^{\varepsilon_n}_{12}(t)\Big(\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n},\delta u(t)\Big)-a_{12}(t)(x_1(t),\delta u(t))\Big]\\
&\qquad\qquad+\Big(\widetilde a^{\varepsilon_n}_{22}(t)-\frac12 a_{22}(t)\Big)(\delta u(t),\delta u(t))\Big|_H^2\,dt\\
&\le C\lim_{n\to\infty}\mathbb E\int_0^T\Big[\Big|\widetilde a^{\varepsilon_n}_{11}(t)\Big(\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n},\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n}\Big)-\widetilde a^{\varepsilon_n}_{11}(t)(x_1(t),x_1(t))\Big|_H^2\\
&\qquad\qquad+\Big|\widetilde a^{\varepsilon_n}_{11}(t)-\frac12 a_{11}(t)\Big|^2_{\mathcal L(H,H;H)}|x_1(t)|_H^4\\
&\qquad\qquad+2\Big|\widetilde a^{\varepsilon_n}_{12}(t)\Big(\frac{\delta x^{\varepsilon_n}(t)}{\varepsilon_n},\delta u(t)\Big)-\widetilde a^{\varepsilon_n}_{12}(t)(x_1(t),\delta u(t))\Big|_H^2\\
&\qquad\qquad+\big|2\widetilde a^{\varepsilon_n}_{12}(t)-a_{12}(t)\big|^2_{\mathcal L(H,\widetilde H;H)}|x_1(t)|_H^2\,|\delta u(t)|_{\widetilde H}^2\\
&\qquad\qquad+\Big|\widetilde a^{\varepsilon_n}_{22}(t)-\frac12 a_{22}(t)\Big|^2_{\mathcal L(\widetilde H,\widetilde H;H)}|\delta u(t)|_{\widetilde H}^4\Big]\,dt\\
&=0.
\end{aligned}\eqno(12.258)$$

Similarly,

$$\lim_{n\to\infty}\mathbb E\int_0^T|\Psi_{2,\varepsilon_n}(t)|^2_{L_2^0}\,dt=0.\eqno(12.259)$$

Combining (12.257), (12.258) with (12.259) and using Gronwall's inequality, we obtain (12.255).

Step 3. By Taylor's formula, we see that
$$\begin{aligned}
&g(t,x^\varepsilon(t),u^\varepsilon(t))-g(t,\bar x(t),\bar u(t))\\
&=\big\langle g_1(t),\delta x^\varepsilon(t)\big\rangle_H+\varepsilon\big\langle g_2(t),\delta u(t)\big\rangle_{\widetilde H}+\big\langle\widetilde g^\varepsilon_{11}(t)\delta x^\varepsilon(t),\delta x^\varepsilon(t)\big\rangle_H\\
&\quad+2\varepsilon\big\langle\widetilde g^\varepsilon_{12}(t)\delta x^\varepsilon(t),\delta u(t)\big\rangle_{\widetilde H}+\varepsilon^2\big\langle\widetilde g^\varepsilon_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}
\end{aligned}\eqno(12.260)$$
and
$$h(x^\varepsilon(T))-h(\bar x(T))=\big\langle h_x(\bar x(T)),\delta x^\varepsilon(T)\big\rangle_H+\big\langle\widetilde h^\varepsilon_{xx}(T)\delta x^\varepsilon(T),\delta x^\varepsilon(T)\big\rangle_H.\eqno(12.261)$$

Using a similar argument as in the proof of (12.255), we can show that, for the subsequence $\{\varepsilon_n\}_{n=1}^\infty$ such that $x^{\varepsilon_n}(\cdot)\to\bar x(\cdot)$ (in $H$) a.e. in $[0,T]\times\Omega$ (as $n\to\infty$), the following equalities hold:
$$\lim_{n\to\infty}\frac{1}{\varepsilon_n^2}\,\mathbb E\int_0^T\Big(\big\langle\widetilde g^{\varepsilon_n}_{11}(t)\delta x^{\varepsilon_n}(t),\delta x^{\varepsilon_n}(t)\big\rangle_H-\frac{\varepsilon_n^2}{2}\big\langle g_{11}(t)x_1(t),x_1(t)\big\rangle_H\Big)dt=0,$$
$$\lim_{n\to\infty}\frac{1}{\varepsilon_n^2}\,\mathbb E\int_0^T\Big(2\big\langle\widetilde g^{\varepsilon_n}_{12}(t)\delta x^{\varepsilon_n}(t),\varepsilon_n\delta u(t)\big\rangle_{\widetilde H}-\varepsilon_n^2\big\langle g_{12}(t)x_1(t),\delta u(t)\big\rangle_{\widetilde H}\Big)dt=0,$$
$$\lim_{n\to\infty}\mathbb E\int_0^T\Big(\big\langle\widetilde g^{\varepsilon_n}_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}-\frac12\big\langle g_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}\Big)dt=0$$
and
$$\lim_{n\to\infty}\frac{1}{\varepsilon_n^2}\,\mathbb E\Big(\big\langle\widetilde h^{\varepsilon_n}_{xx}(T)\delta x^{\varepsilon_n}(T),\delta x^{\varepsilon_n}(T)\big\rangle_H-\frac{\varepsilon_n^2}{2}\big\langle h_{xx}(\bar x(T))x_1(T),x_1(T)\big\rangle_H\Big)=0.$$
These, together with (12.255), imply that
$$\begin{aligned}
&\mathcal J(u^{\varepsilon_n})-\mathcal J(\bar u)\\
&=\mathbb E\int_0^T\Big[\varepsilon_n\big\langle g_1(t),x_1(t)\big\rangle_H+\frac{\varepsilon_n^2}{2}\big\langle g_1(t),x_2(t)\big\rangle_H+\varepsilon_n\big\langle g_2(t),\delta u(t)\big\rangle_{\widetilde H}\\
&\qquad+\frac{\varepsilon_n^2}{2}\Big(\big\langle g_{11}(t)x_1(t),x_1(t)\big\rangle_H+2\big\langle g_{12}(t)x_1(t),\delta u(t)\big\rangle_{\widetilde H}+\big\langle g_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}\Big)\Big]dt\\
&\quad+\mathbb E\Big(\varepsilon_n\big\langle h_x(\bar x(T)),x_1(T)\big\rangle_H+\frac{\varepsilon_n^2}{2}\big\langle h_x(\bar x(T)),x_2(T)\big\rangle_H\\
&\qquad+\frac{\varepsilon_n^2}{2}\big\langle h_{xx}(\bar x(T))x_1(T),x_1(T)\big\rangle_H\Big)+o(\varepsilon_n^2),\quad\text{as }n\to\infty.
\end{aligned}\eqno(12.262)$$

Step 4. By the definition of transposition solution to (10.4), we have that
$$\mathbb E\big\langle h_x(\bar x(T)),x_1(T)\big\rangle_H=-\mathbb E\int_0^T\Big(\big\langle y(t),a_2(t)\delta u(t)\big\rangle_H+\big\langle Y(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}+\big\langle g_1(t),x_1(t)\big\rangle_H\Big)dt\eqno(12.263)$$
and
$$\begin{aligned}
&\mathbb E\big\langle h_x(\bar x(T)),x_2(T)\big\rangle_H\\
&=-\mathbb E\int_0^T\Big(\big\langle y(t),a_{11}(t)(x_1(t),x_1(t))\big\rangle_H+2\big\langle y(t),a_{12}(t)(x_1(t),\delta u(t))\big\rangle_H\\
&\qquad+\big\langle y(t),a_{22}(t)(\delta u(t),\delta u(t))\big\rangle_H+\big\langle Y(t),b_{11}(t)(x_1(t),x_1(t))\big\rangle_{L_2^0}\\
&\qquad+2\big\langle Y(t),b_{12}(t)(x_1(t),\delta u(t))\big\rangle_{L_2^0}+\big\langle Y(t),b_{22}(t)(\delta u(t),\delta u(t))\big\rangle_{L_2^0}\\
&\qquad+\big\langle g_1(t),x_2(t)\big\rangle_H\Big)dt.
\end{aligned}\eqno(12.264)$$

On the other hand, by the definition of relaxed transposition solution to (12.8), we obtain that
$$\begin{aligned}
&\mathbb E\big\langle h_{xx}(\bar x(T))x_1(T),x_1(T)\big\rangle_H\\
&=-\mathbb E\int_0^T\Big(\big\langle P(t)x_1(t),a_2(t)\delta u(t)\big\rangle_H+\big\langle x_1(t),P(t)a_2(t)\delta u(t)\big\rangle_H\\
&\qquad+\big\langle P(t)b_1(t)x_1(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}+\big\langle b_1(t)x_1(t),P(t)b_2(t)\delta u(t)\big\rangle_{L_2^0}\\
&\qquad+\big\langle P(t)b_2(t)\delta u(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}+\big\langle\widehat Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}\\
&\qquad+\big\langle Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}-\big\langle H_{11}(t)x_1(t),x_1(t)\big\rangle_H\Big)dt.
\end{aligned}\eqno(12.265)$$

Combining (12.262)--(12.265) with (12.250), we obtain that, for sufficiently large $n$,
$$\begin{aligned}
0&\le\frac{\mathcal J(u^{\varepsilon_n})-\mathcal J(\bar u)}{\varepsilon_n^2}\\
&=-\mathbb E\int_0^T\Big[\frac{1}{\varepsilon_n}\Big(\big\langle y(t),a_2(t)\delta u(t)\big\rangle_H+\big\langle Y(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}-\big\langle g_2(t),\delta u(t)\big\rangle_{\widetilde H}\Big)\\
&\qquad+\frac12\Big(\big\langle y(t),a_{22}(t)(\delta u(t),\delta u(t))\big\rangle_H+\big\langle Y(t),b_{22}(t)(\delta u(t),\delta u(t))\big\rangle_{L_2^0}\\
&\qquad\qquad-\big\langle g_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}\Big)+\frac12\big\langle P(t)b_2(t)\delta u(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}\\
&\qquad+\Big(-\big\langle g_{12}(t)x_1(t),\delta u(t)\big\rangle_{\widetilde H}+\big\langle y(t),a_{12}(t)(x_1,\delta u)\big\rangle_H+\big\langle Y(t),b_{12}(t)(x_1,\delta u)\big\rangle_{L_2^0}\\
&\qquad\qquad+\big\langle a_2(t)^*P(t)x_1(t),\delta u(t)\big\rangle_{\widetilde H}+\big\langle b_2(t)^*P(t)b_1(t)x_1(t),\delta u(t)\big\rangle_{\widetilde H}\Big)\\
&\qquad+\frac12\big\langle\widehat Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t)+Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}\Big]dt+o(1)\\
&=-\mathbb E\int_0^T\Big[\frac{1}{\varepsilon_n}\big\langle\mathbb H_2(t),\delta u(t)\big\rangle_{\widetilde H}+\frac12\big\langle\mathbb H_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}+\frac12\big\langle b_2(t)^*P(t)b_2(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}\Big]dt\\
&\quad-\mathbb E\int_0^T\big\langle\big[\mathbb H_{12}(t)+a_2(t)^*P(t)+b_2(t)^*P(t)b_1(t)\big]x_1(t),\delta u(t)\big\rangle_{\widetilde H}\,dt\\
&\quad-\frac12\,\mathbb E\int_0^T\big\langle\widehat Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t)+Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}\,dt+o(1)\\
&=-\mathbb E\int_0^T\Big(\frac12\big\langle\mathbb H_{22}(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}+\frac12\big\langle b_2(t)^*P(t)b_2(t)\delta u(t),\delta u(t)\big\rangle_{\widetilde H}\Big)dt\\
&\quad-\mathbb E\int_0^T\big\langle\big(\mathbb H_{12}(t)+a_2(t)^*P(t)+b_2(t)^*P(t)b_1(t)\big)x_1(t),\delta u(t)\big\rangle_{\widetilde H}\,dt\\
&\quad-\frac12\,\mathbb E\int_0^T\big\langle\widehat Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t)+Q^{(0)}(0,a_2(t)\delta u,b_2(t)\delta u)(t),b_2(t)\delta u(t)\big\rangle_{L_2^0}\,dt+o(1).
\end{aligned}$$

Finally, letting $n\to\infty$ in the above inequality, we obtain the desired second order necessary condition (12.251). This completes the proof of Theorem 12.22.

Remark 12.23. It is easy to see that the correction part $(Q^{(\cdot)},\widehat Q^{(\cdot)})$ of the relaxed transposition solution $(P(\cdot),(Q^{(\cdot)},\widehat Q^{(\cdot)}))$ to the equation (12.8) appears explicitly in the second order necessary condition (12.251) in Theorem 12.22.

12.8 Notes and Comments

The main body of this chapter is an improved version of the results in [242, 244], except that Sections 12.2 and 12.7 are based on [241] and [240], respectively, while the sufficient optimality condition in Section 12.6 can be regarded as an infinite dimensional version of the similar result (for stochastic controls in finite dimensions) in [371, 393].

Pontryagin's maximum principle for deterministic optimal control problems in finite dimensions is one of the three milestones of modern optimal control theory. Since the classical work [34], the theory of the maximum principle for controlled ordinary differential equations has been studied extensively, and many versions of the maximum principle have been established for more complex systems, such as control systems governed by ordinary differential equations on manifolds, by partial differential equations, and in particular by stochastic differential equations. We refer the readers to [1, 202, 371] for the Pontryagin-type maximum principle for these three kinds of control systems, respectively.

Soon after the work [34], the maximum principle was extended to controlled stochastic differential equations. To the best of our knowledge, the first paper concerning the stochastic maximum principle is [180]. In those early works, it was usually assumed that the diffusion term is nondegenerate, so that one can apply the Girsanov transformation to obtain the desired necessary conditions for stochastic optimal controls (e.g., [139]). A breakthrough was made in [31], in which the backward stochastic differential equation was introduced to study the maximum principle for stochastic control systems, and the nondegeneracy condition on the diffusion terms was dropped. Further developments in this respect can be found in [25, 33], etc. Until 1988, all the works on the maximum principle for controlled stochastic differential equations were obtained under one of the following assumptions:

• The diffusion term is independent of the controls (e.g., [25, 31]);
• The coefficients of the diffusion term are nondegenerate (e.g., [139]);
• The control region is convex (e.g., [25]).

In [273], a new maximum principle was obtained for general controlled stochastic differential equations without the above mentioned assumptions, and it was found that the corresponding result in the general case differs essentially from its deterministic counterpart.

Naturally, one expects to establish the Pontryagin-type maximum principle for control systems governed by stochastic evolution equations. A pioneering work along this line is [26]. Further progress is available in the literature [148, 314, 324, 392] and so on. Similarly to the study of controlled stochastic differential equations, all of the published works before 2012 on necessary conditions for optimal controls of infinite dimensional stochastic evolution equations addressed only the case that one of the following conditions holds:

• The diffusion term does NOT depend on the control variable, or the control region is convex;
• For a.e. (t, ω) ∈ (0, T)×Ω and any (x, u, k₁, k₂) ∈ H × U × H × L⁰₂, the operators ⟨k₁, axx(t, x, u)⟩_H, ⟨k₂, bxx(t, x, u)⟩_{L⁰₂}, gxx(t, x, u) and hxx(x) are Hilbert–Schmidt operators.

The main difficulty in the infinite dimensional setting is how to handle the well-posedness of (12.8). There were several works ([79, 112, 242], of which the arXiv versions were all posted in 2012) aimed at overcoming this difficulty. The main idea is to solve the equation (12.8) in a weak sense. In [79, 112], a partial well-posedness result for (12.8) was established; that is, only P was obtained, by means of the Riesz Representation Theorem (see [113, 114] for further related works). In [242], a complete well-posedness result for (12.8) was derived by means of the stochastic transposition method. Results in [242] have been further improved in [244, 245]. Although the correction part Q (in the equation (12.8)) is not needed for establishing the Pontryagin-type maximum principle for controlled stochastic evolution equations, as mentioned in Remark 12.23 (for a weak version of Q), it plays an important role in the study of second order necessary conditions for optimal controls (see [103, 104, 240] for more details).

As far as we know, the sufficiency of the Pontryagin-type maximum principle was first studied in [33] for controlled stochastic differential equations. In [393], it was proved that the maximum condition in [273] is also sufficient, provided that some convexity conditions on the control regions and the cost functionals are imposed. Some further developments in this respect can be found in [269] and the references cited therein. To the best of our knowledge, there exist only very few published works on the sufficiency of the Pontryagin-type maximum principle for controlled stochastic evolution equations in infinite dimensions (e.g., [194, 268]).

There are many open problems related to the topic of this chapter. We list below some of them which, in our opinion, are particularly interesting and/or important:

1) Well-posedness of (12.8) in the sense of transposition solution. In Theorem 12.9, we established the well-posedness of the equation (12.8) in the sense of relaxed transposition solution. It would be quite interesting (and also very important for some problems) to prove that this equation is also well-posed in the sense of transposition solution (see Definition 12.6). Nevertheless, so far the well-posedness of (12.8) in the sense of transposition solution is known only in some very special cases (see Theorem 12.14). As far as we know, it is a challenging (unsolved) problem to prove the existence of a transposition solution to (12.8) (see Theorem 12.7 for the uniqueness result), even for the following special case:
$$\begin{cases} dP=-A^*P\,dt-PA\,dt+F\,dt+Q\,dW(t)&\text{in }[0,T),\\ P(T)=P_T,\end{cases}\eqno(12.266)$$
where $F\in L^1_{\mathbb F}(0,T;L^2(\Omega;\mathcal L(H)))$ and $P_T\in L^2_{\mathcal F_T}(\Omega;\mathcal L(H))$. The same can be said for (12.266) even for concrete problems, say when $A=-\Delta$, the Laplacian with the usual homogeneous Dirichlet boundary condition (even in the case of one space dimension!).

2) Optimal control problems with endpoint/state constraints. In this chapter, we have not considered endpoint/state constraints on the control systems, since this topic is quite difficult even for the case H = ℝⁿ. For some special constraints, such as 𝔼x(T) > 0, one can use the Ekeland variational principle to establish a similar Pontryagin-type maximum principle with nontrivial Lagrange multipliers. However, for the general case, one does need some further conditions to obtain nontrivial results in this respect, as shown in [202] (even for deterministic optimal control problems in infinite dimensions).

In the study of the aforementioned deterministic optimal control problems, people introduced the so-called finite codimension condition to guarantee the nontriviality of the Lagrange multiplier (e.g., [202, 220]). There have been some attempts to generalize this condition to the stochastic framework (e.g., [221]), but so far the results are still not so satisfactory. Another way is to use tools from set-valued analysis, as developed in the recent paper [103].

3) Higher order necessary conditions for optimal controls. As mentioned at the very beginning of Section 12.7, similarly to the Calculus of Variations or even elementary Calculus, in addition to the first order necessary conditions, sometimes higher order necessary conditions should be established to distinguish optimal controls from the candidates which satisfy the first order necessary conditions trivially. In this chapter, we only gave one result of integral-type second order necessary conditions for optimal controls of Problem (OP). Some further results in this direction can be found in [103, 104,


240]. However, the theory of higher order necessary conditions for stochastic optimal control problems is far from satisfactory. Indeed, as far as we know, so far only second order necessary conditions for stochastic optimal controls have been considered, and pointwise second order necessary conditions (which are more useful in applications than the integral ones) have been obtained only under very strong assumptions, even for stochastic control problems in finite dimensions.

4) Necessary conditions for optimal controls of partially observed control systems. In this chapter, we actually assumed that the system state is completely observed. For many practical problems, it often happens that only a part of the state can be observed, and there might also exist noises in the observation. A typical example is as follows: Let $W_1$ and $W_2$ be two independent cylindrical Brownian motions on the filtered probability space $(\Omega,\mathcal F,\mathbb F,\mathbb P)$, with values in two Hilbert spaces $V_1$ and $V_2$, respectively. For a given control $u(\cdot)\in\mathcal U[0,T]$ (see (12.1) for the definition of $\mathcal U[0,T]$), let us consider the following stochastic evolution equation:
$$\begin{cases} dx(t)=\big(Ax(t)+a(t,x(t),u(t))\big)dt+\displaystyle\sum_{j=1}^2 b_j(t,x(t),u(t))\,dW_j(t)&\text{in }(0,T],\\ x(0)=x_0,\end{cases}\eqno(12.267)$$
and the following (partial) observation equation:
$$\begin{cases} dy(t)=c(t,x(t),y(t),u(t))\,dt+f(t,x(t),y(t),u(t))\,dW_2(t)&\text{in }[0,T],\\ y(0)=y_0,\end{cases}\eqno(12.268)$$
with suitably given initial values $x_0$ and $y_0$, and functions $a$, $b_1$, $b_2$, $c$ and $f$. For any $t\in[0,T]$, write
$$\mathcal Y_t\triangleq\sigma\big\{y(s)\ \big|\ s\in[0,t]\big\}.$$
We call the above control $u(\cdot)$ an admissible control for the above partially observed system if it is $\{\mathcal Y_t\}_{t\in[0,T]}$-adapted. Denote by $\mathcal V[0,T]$ the class of all admissible controls for the above partially observed system. One then hopes to find a control $\bar u(\cdot)\in\mathcal V[0,T]$ which minimizes the following cost functional:
$$\mathcal J(u(\cdot))=\mathbb E\Big[\int_0^T g(t,x(t),y(t),u(t))\,dt+h(x(T),y(T))\Big],\qquad u(\cdot)\in\mathcal V[0,T],$$
in which the functions $g$ and $h$ are suitably given.

Motivated by the works on the same problems in finite dimensions (e.g., [27]), one may follow two main steps to study the above optimal control problems for partially observed systems governed by stochastic evolution equations:

• Step 1. Computing the filtering of the state;
• Step 2. Solving a complete information optimal control problem driven by the filtering and obtaining the optimal control.

Nevertheless, compared with partially observed systems in finite dimensions (e.g., [27, 339, 363]), there exist some essential difficulties in carrying out the above two steps. For example, in Step 1, one has to solve a stochastic partial differential equation with infinitely many variables to compute the filtering of the state. Unfortunately, at this moment the well-posedness of such an equation is far from well understood.

13 Linear Quadratic Optimal Control Problems

In this chapter, we are concerned with linear quadratic optimal control problems (LQ problems for short) for stochastic evolution equations, in which the diffusion terms depend on the control variables and the coefficients are stochastic. In such a general setting, one has to introduce suitable operator-valued backward stochastic evolution equations (to characterize the optimal controls in the form of a Pontryagin-type maximum principle or in feedback form), which serve as the second order adjoint equations or the Riccati-type equations. As in the previous chapter, it is very difficult to show the existence of solutions to these equations. We shall use the stochastic transposition method to overcome this difficulty.

Compared with general optimal control problems, the main novelty of LQ problems is that the optimal controls can be represented in feedback form, which keeps the corresponding control strategies robust w.r.t. (small) perturbations/disturbances. This is particularly important in many practical applications, for which perturbations/disturbances are usually unavoidable. Nevertheless, it is actually very difficult to find feedback controls for general control problems. So far, the most successful attempt in this respect is that for various LQ problems in the deterministic setting.

13.1 Formulation of the Problem

Throughout this chapter, $T>0$, $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ (with $\mathbb F\triangleq\{\mathcal F_t\}_{t\in[0,T]}$) is a fixed filtered probability space satisfying the usual condition, and we denote by $\mathbf F$ the progressive $\sigma$-field w.r.t. $\mathbb F$; $H$, $V$ and $U$ are three separable Hilbert spaces. We denote by $I$ the identity operator on $H$ and by $I_n$ the identity matrix on $\mathbb R^n$ ($n\in\mathbb N$), and put $L_2^0\triangleq L_2(V;H)$. Unless otherwise specified, $W(\cdot)$ is a $V$-valued $Q$-Brownian motion or cylindrical Brownian motion, but we only consider the case of a cylindrical Brownian motion. Let $A$ be an unbounded linear operator (with domain $D(A)$) on $H$, which generates a $C_0$-semigroup $\{S(t)\}_{t\ge0}$. Denote by $A^*$ the dual operator of $A$.

© Springer Nature Switzerland AG 2021. Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3_13

Denote by $\mathcal S(H)$ the Banach space of all self-adjoint (linear bounded) operators on $H$. For $M,N\in\mathcal S(H)$, we use the notation $M\ge N$ (resp. $M>N$) to indicate that $M-N$ is positive semi-definite (resp. positive definite). For any $\mathcal S(H)$-valued stochastic process $F$ on $[0,T]$, we write $F\ge0$ (resp. $F>0$, $F\gg0$) if $F(t,\omega)\ge0$ (resp. $F(t,\omega)>0$, $F(t,\omega)\ge\delta I$ for some $\delta>0$) for a.e. $(t,\omega)\in[0,T]\times\Omega$. One can define $F\ll0$ and so on in a similar way.

For any given $\eta\in H$, we consider a control system governed by the following linear stochastic evolution equation:
$$\begin{cases} dx(t)=\big(Ax(t)+B(t)u(t)\big)dt+\big(C(t)x(t)+D(t)u(t)\big)dW(t)&\text{in }(0,T],\\ x(0)=\eta,\end{cases}\eqno(13.1)$$
where $B(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathcal L(U;H))$, $C(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathcal L(H;L_2^0))$ and $D(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathcal L(U;L_2^0))$. In (13.1), $u(\cdot)\ (\in L^2_{\mathbb F}(0,T;U))$ is the control variable and $x(\cdot)$ is the state variable. In view of Theorem 3.13, for any $u\in L^2_{\mathbb F}(0,T;U)$, the system (13.1) admits a unique (mild) solution $x(\cdot)\equiv x(\cdot;\eta,u)\in C_{\mathbb F}([0,T];L^2(\Omega;H))$ such that
$$|x(\cdot)|_{C_{\mathbb F}([0,T];L^2(\Omega;H))}\le C\big(|\eta|_H+|u(\cdot)|_{L^2_{\mathbb F}(0,T;U)}\big).\eqno(13.2)$$

Associated with the system (13.1), we consider the following quadratic cost functional:
$$\mathcal J(\eta;u(\cdot))=\frac12\,\mathbb E\Big[\int_0^T\big(\langle M(t)x(t),x(t)\rangle_H+\langle R(t)u(t),u(t)\rangle_U\big)dt+\langle Gx(T),x(T)\rangle_H\Big],\eqno(13.3)$$
where $M(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathcal S(H))$, $R(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathcal S(U))$ and $G\in L^\infty_{\mathcal F_T}(\Omega;\mathcal S(H))$. In what follows, to simplify the notations, the time variable $t$ will be suppressed in $B(\cdot)$, $C(\cdot)$, $D(\cdot)$, $M(\cdot)$ and $R(\cdot)$, and we shall simply write them as $B$, $C$, $D$, $M$ and $R$, respectively (if there is no confusion).

Remark 13.1. In this chapter, we assume that $B\in L^\infty_{\mathbb F}(0,T;\mathcal L(U;H))$ and $D\in L^\infty_{\mathbb F}(0,T;\mathcal L(U;L_2^0))$. Thus, our results can only be applied to controlled stochastic partial differential equations with internal controls. To study systems with boundary/pointwise controls, one needs to introduce some further assumptions, such as that the semigroup $\{S(t)\}_{t\in[0,T]}$ has some smoothing effect. Under such conditions, many results of this chapter can be generalized to systems with unbounded control operators. More details will be discussed in Section 13.10.

The main concern of this chapter is to study the following stochastic linear quadratic optimal control problem:

Problem (SLQ). For each $\eta\in H$, find a $\bar u(\cdot)\in L^2_{\mathbb F}(0,T;U)$ such that
$$\mathcal J(\eta;\bar u(\cdot))=\inf_{u(\cdot)\in L^2_{\mathbb F}(0,T;U)}\mathcal J(\eta;u(\cdot)).\eqno(13.4)$$


Remark 13.2. Unlike Chapter 12, we write $\mathcal J(\eta;u)$ rather than $\mathcal J(u)$ for the cost functional. The reason is that we want to emphasize the relationship among the initial data, the optimal controls and the optimal feedback operators.

Let us begin with the following notions.

Definition 13.3. 1) Problem (SLQ) is called a standard LQ problem if
$$M(\cdot)\ge0,\qquad R\gg0,\qquad G\ge0;$$
2) Problem (SLQ) is said to be finite at $\eta\in H$ if the right hand side of (13.4) is finite;
3) Problem (SLQ) is said to be (uniquely) solvable at $\eta\in H$ if there exists a (unique) control $\bar u(\cdot)\in L^2_{\mathbb F}(0,T;U)$ satisfying (13.4). In this case, $\bar u(\cdot)$ is called an (the) optimal control, and the corresponding $\bar x(\cdot)$ and $(\bar x(\cdot),\bar u(\cdot))$ are called an (the) optimal state and an (the) optimal pair, respectively;
4) Problem (SLQ) is said to be finite (resp. (uniquely) solvable) if it is finite (resp. (uniquely) solvable) at every $\eta\in H$.

Let us briefly review the deterministic LQ problems in finite dimensions (see [193, 370] for more materials). For any $m,n\in\mathbb N$ and $\eta\in\mathbb R^n$, we consider the following control system:
$$\begin{cases} \dot x(t)=A(t)x(t)+B(t)u(t)&\text{in }[0,T],\\ x(0)=\eta,\end{cases}\eqno(13.5)$$
where $A(\cdot)\in L^\infty(0,T;\mathbb R^{n\times n})$, $B(\cdot)\in L^\infty(0,T;\mathbb R^{n\times m})$ and the control $u(\cdot)\in L^2(0,T;\mathbb R^m)$. The cost functional is
$$\mathcal J(\eta;u(\cdot))=\frac12\Big[\int_0^T\big(\langle M(t)x(t),x(t)\rangle_{\mathbb R^n}+\langle R(t)u(t),u(t)\rangle_{\mathbb R^m}\big)dt+\langle Gx(T),x(T)\rangle_{\mathbb R^n}\Big],\eqno(13.6)$$
with $M(\cdot)\in L^\infty(0,T;\mathcal S(\mathbb R^n))$, $R(\cdot)\in L^\infty(0,T;\mathcal S(\mathbb R^m))$ and $G\in\mathcal S(\mathbb R^n)$. Similarly to Problem (SLQ), we consider the following (deterministic) linear quadratic optimal control problem:

Problem (DLQ). For each $\eta\in\mathbb R^n$, find a control $\bar u(\cdot)\in L^2(0,T;\mathbb R^m)$ such that
$$\mathcal J(\eta;\bar u(\cdot))=\inf_{u(\cdot)\in L^2(0,T;\mathbb R^m)}\mathcal J(\eta;u(\cdot)).$$

A special case of the above problem has already been introduced in Section 1.3. Similarly to Problem (SLQ), one can define the so-called standard Problem (DLQ), and the finiteness and (unique) solvability of Problem (DLQ) (and hence we omit the details). The following result (e.g., [370, p. 227]) shows that $R\ge0$ is a necessary condition for Problem (DLQ) to be finite.

A special case of the above problem has already been introduced in Section 1.3. Similarly to that for Problem (SLQ), one can define the so-called standard Problem (DLQ), and the finiteness and (unique) solvability of Problem (DLQ) (and hence we omit the details). The following result (e.g., [370, p. 227]) shows that R ≥ 0 is a necessary condition for Problem (DLQ) to be finite.

480

13 Linear Quadratic Optimal Control Problems

Proposition 13.4. If Problem (DLQ) is finite, then R ≥ 0. Remark 13.5. Surprisingly, it was found in [55] that in the stochastic setting R ≥ 0 is NOT a necessary condition anymore for Problem (SLQ) to be finite even for the case of finite dimensions (See also Example 13.16). The following example shows that Problem (DLQ) may not be finite (needless to say solvable) even if R ≫ 0 (but Q and G are indefinite). Example 13.6. For m = n = 1, let us consider the control system   dx = u in [0, T ], dt  x(0) = η ∈ R, with the cost functional 1[ J (η; u(·)) = 2



T

u(t)2 dt − 0

] 2 x(T )2 . T

(13.7)

(13.8)

Let λ ∈ R and take uλ (t) ≡ λ for t ∈ [0, T ]. Then J (η; uλ (·)) =

)2 1 2 1( λ T− λT + η → −∞ as λ → ∞. 2 T

Thus, the corresponding LQ problem is not finite. Because of Example 13.6, in order that Problem (DLQ) is solvable, one needs more assumptions besides R ≫ 0. For any u(·) ∈ L2 (0, T ; Rm ), define two operators L : L2 (0, T ; Rm ) → L2 (0, T ; Rn ) and LT : L2 (0, T ; Rm ) → Rn as follows: ( ) Lu(·) (t) ≡ x(t; 0, u(·)), LT u ≡ x(T ; 0, u(·)), where x(·; 0, u(·)) solves the equation (13.5) with η = 0. Further, define N = L∗ M L + L∗T GLT + R. The following result (e.g., [370, p. 229]) characterizes the unique solvability of Problem (DLQ) under suitable assumptions: Theorem 13.7. Suppose that N ≥ 0 and R(·) ≫ 0. Then, Problem (DLQ) is uniquely solvable if and only if the following coupled forward-backward system    ¯˙ (t) = A(t)¯ x(t) − B(t)R−1 (t)B ⊤ (t)¯ y (t), t ∈ [0, T ],  x y¯˙ (t) = −A(t)⊤ y¯(t) + M (t)¯ x(t), t ∈ [0, T ],     x ¯(0) = η, y¯(T ) = −G¯ x(T )

(13.9)

admits a unique solution (¯ x(·), y¯(·)). In this case, u ¯(t) = R−1 (t)B ⊤ (t)¯ y (t), is the optimal control.

t ∈ [0, T ],

(13.10)
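The blow-up in Example 13.6 is easy to reproduce numerically. The following sketch (ours, not from the book; the function name `cost` and the discretization are illustrative assumptions) discretizes the scalar system and evaluates the cost (13.8) for the constant controls $u_\lambda$:

```python
# Numerical check of Example 13.6: for the constant control u_lambda(t) = lambda,
# J(eta; u_lambda) = (1/2) lambda^2 T - (1/T)(lambda T + eta)^2, which tends to
# -infinity as lambda grows, so the LQ problem is not finite.

def cost(eta, lam, T=1.0, n=10_000):
    """Discretize x' = u, x(0) = eta, and evaluate the cost (13.8)."""
    dt = T / n
    x = eta
    run = 0.0
    for _ in range(n):
        run += lam * lam * dt   # accumulate integral of u(t)^2
        x += lam * dt           # Euler step (exact here, since u is constant)
    return 0.5 * (run - (2.0 / T) * x * x)

if __name__ == "__main__":
    vals = [cost(eta=1.0, lam=float(k)) for k in (1, 10, 100, 1000)]
    print(vals)  # strictly decreasing, unbounded below
```

Since $M=0$ here, the only positive term in the cost is the control energy, and the negative terminal weight $G=-2/T$ grows quadratically in $\lambda$ with a larger coefficient, which is exactly why no additional assumption such as $\mathcal N\ge0$ can hold for this data.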


One task of this chapter is to establish a characterization of the unique solvability of Problem (SLQ), similar to that in Theorem 13.7 for Problem (DLQ). This will be done in Corollary 13.33 (in Section 13.5).

Note that the optimal control given by (13.10) is not in a feedback form. In Control Theory, another fundamental issue is to find feedback controls, which are particularly important in practical applications. Indeed, the main advantage of feedback controls is that they keep the corresponding control strategy robust w.r.t. small perturbations/disturbances, which are usually unavoidable in realistic settings. In order to find a feedback form of $\bar u(\cdot)$, formally, we assume that, for some $\mathbb R^{n\times n}$-valued function $P(\cdot)$ (to be determined later),
$$\bar y(t)=-P(t)\bar x(t),\qquad t\in[0,T].\eqno(13.11)$$
Combining this with (13.10), we obtain an optimal feedback control as follows:
$$\bar u(t)=-R^{-1}B^\top P(t)\bar x(t),\qquad t\in[0,T].\eqno(13.12)$$

In order to find the above $P(\cdot)$, proceeding as in [370, pp. 230–231], we differentiate (13.11) w.r.t. $t$; using (13.9) and (13.12), we obtain
$$\begin{aligned}
A(t)^\top P\bar x+M\bar x=\frac{d\bar y}{dt}&=-\frac{dP}{dt}\bar x-P\big(A(t)\bar x+B\bar u\big)\\
&=-\frac{dP}{dt}\bar x-P\big(A(t)\bar x-BR^{-1}B^\top P\bar x\big),
\end{aligned}$$
which yields
$$0=\Big(\frac{dP}{dt}+PA(t)+A(t)^\top P-PBR^{-1}B^\top P+M\Big)\bar x.$$
Thus, if we choose $P(\cdot)$ to solve the following Riccati equation (associated with Problem (DLQ)):
$$\begin{cases}\dfrac{dP}{dt}+PA(t)+A(t)^\top P-PBR^{-1}B^\top P+M=0&\text{in }[0,T],\\ P(T)=G,\end{cases}\eqno(13.13)$$
then (13.11) and (13.12) hold. Clearly, (13.13) is a matrix-valued nonlinear ordinary differential equation. The following result (e.g., [370, p. 234]) shows the equivalence between the unique solvability of Problem (DLQ) and the global solvability of the equation (13.13) under suitable assumptions:

Theorem 13.8. Suppose that $R\gg0$. Then, for any $\eta\in\mathbb R^n$, Problem (DLQ) is uniquely solvable if and only if the Riccati equation (13.13) admits a global solution $P(\cdot)\in C([0,T];\mathcal S(\mathbb R^n))$. In this case, the optimal feedback control is given by (13.12), and
$$\inf_{u(\cdot)\in L^2(0,T;\mathbb R^m)}\mathcal J(\eta;u(\cdot))=\frac12\langle P(0)\eta,\eta\rangle_{\mathbb R^n}.\eqno(13.14)$$
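In the scalar case, the Riccati equation (13.13) can be integrated backward from $P(T)=G$. For $A=M=0$ and $B=R=1$ it reduces to $\dot P=P^2$, whose solution $P(t)=G/(1+G(T-t))$ is global when $G\ge0$ but blows up at $t=T/2$ when $G=-2/T$ — exactly the data of Example 13.6, in accordance with Theorem 13.8. A minimal sketch (ours; the RK4 stepper and function names are illustrative assumptions):

```python
# Backward RK4 integration of the scalar Riccati equation (13.13) with
# A = M = 0, B = R = 1:  P'(t) = P(t)^2,  P(T) = G.
# Closed form: P(t) = G / (1 + G*(T - t)); for G = -2/T the denominator
# vanishes at t = T/2, so no global solution exists.

def solve_riccati(G, T=1.0, n=1000):
    f = lambda p: p * p          # P' = P^2 in this reduced case
    h = T / n
    p = G
    for _ in range(n):           # step backward from t = T to t = 0
        k1 = f(p)
        k2 = f(p - 0.5 * h * k1)
        k3 = f(p - 0.5 * h * k2)
        k4 = f(p - h * k3)
        p -= (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p                     # approximation of P(0)

if __name__ == "__main__":
    G, T = 0.5, 1.0
    print(solve_riccati(G), G / (1.0 + G * T))  # numerical vs exact P(0)
```

By (13.14), the returned $P(0)$ gives the optimal cost $\frac12 P(0)\eta^2$ for this toy data.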


It is easy to show that any standard LQ problem (even in the stochastic setting) is uniquely solvable. Hence, thanks to Theorem 13.8, one can find a unique optimal feedback control for any standard Problem (DLQ). Nevertheless, things are completely different in the stochastic situation. Indeed, as shown in [236, Example 6.2] (see also Example 13.22 in Section 13.3), a standard Problem (SLQ) (even in finite dimensions, i.e., when both H and U are Euclidean spaces) does NOT necessarily admit feedback controls. This is a significant difference between stochastic LQ problems and their deterministic counterparts. Indeed, Theorem 13.8 means that, under the assumption R ≫ 0, one can always find the desired feedback control through the corresponding Riccati equation whenever Problem (DLQ) is uniquely solvable.

Because of the difference mentioned above, in the present stochastic setting it is quite natural to ask the following question: Is it possible to study directly the existence of optimal feedback controls (rather than the solvability) for Problem (SLQ)? Clearly, from the viewpoint of applications, it is more desirable to study the existence of feedback controls for Problem (SLQ) than its solvability. Another task of this chapter is to give an affirmative answer to the above question under sharp assumptions on the coefficients appearing in (13.1) and (13.3). For this purpose, let us introduce some more notions.¹

Definition 13.9. Any Θ(·) ∈ Υ₂(H; U) is called an admissible feedback operator for Problem (SLQ).

Remark 13.10. In order to keep the feedback controls robust w.r.t. small perturbations, we should choose the feedback operators to be bounded linear operators from L²_F(Ω; C([0,T]; H)) to L²_F(0,T; U). This is the reason for introducing Υ₂(H; U).

Now, motivated by [236, 307, 308], we introduce the following notion of optimal feedback operator for Problem (SLQ):

Definition 13.11.
An admissible feedback operator Θ(·) ∈ Υ2 (H; U ) is called an optimal feedback operator for Problem (SLQ) if J (η; Θ(·)¯ x(·)) ≤ J (η; u(·)),

∀ (η, u(·)) ∈ H × L2F (0, T ; U ),

(13.15)

where x ¯(·) = x ¯(·; η, Θ(·)¯ x(·)) solves ( ) ( ) { d¯ x = A¯ x + BΘ¯ x dt + C x ¯ + DΘ¯ x dW (t) in (0, T ], (13.16) x ¯(0) = η. Clearly, for a fixed η ∈ H, the inequality (13.15) implies that the control u ¯(·) ≡ Θ(·)¯ x(·) ∈ L2F (0, T ; U ) is optimal for Problem (SLQ). Therefore, for Problem (SLQ), the existence of an optimal feedback operator implies the existence of an optimal control for any initial state η ∈ H, but not vice versa. 1

Recall (3.50) for the definition of Υ2 (H; U ).

13.2 Optimal Feedback for Deterministic LQ Problem in Finite Dimensions

483

In the study of Problem (DLQ), people introduced the Riccati equation (13.13) to construct the desired feedback controls. Stimulated by this and especially by the pioneer work [32] (for stochastic LQ problems in finite dimensions), in order to study the optimal feedback operator for Problem (SLQ), we introduce the following operator-valued backward stochastic Riccati equation:  ( dP = − P A + A∗ P + ΛC + C ∗ Λ + C ∗ P C + M    ) (13.17) −L∗ K −1 L dt + ΛdW (t) in [0, T ),    P (T ) = G, where

K = R + D∗ P D,

L = B ∗ P + D∗ (P C + Λ).

(13.18)

Note that, the presence of the control variable in the diffusion term of (13.1) and the infinite dimensional setting make Problem (SLQ) (and in particular the study of (13.17)) significantly different from LQ problems for ordinary differential equations or stochastic differential equations. To illustrate this, we present in the next two sections some basic results for LQ problems for the latter two cases, respectively. Then, we go back to the study of Problem (SLQ). One can see the difference caused by the infinite dimensional setting. The rest of this chapter is organized as follows. In Sections 13.2 and 13.3, we revisit some important aspects in the LQ problems for ordinary differential equations and stochastic differential equations, respectively. In Section 13.4, we give the finiteness and solvability of LQ problems for stochastic evolution equations. In Section 13.5, we establish Pontryagin-type maximum principle for optimal controls. In Section 13.6 optimal feedback operator and the stochastic Riccati equation are introduced. In Section 13.7, the optimal feedback operator is represented through the solution to the stochastic Riccati equation. Section 13.8 is devoted to the solvability of the stochastic Riccati equation. In Section 13.9, we present some examples for LQ problems of stochastic partial differential equations which satisfies the assumptions in this chapter. Finally, some remarks and open problems are given in Section 13.10.

13.2 Optimal Feedback for Deterministic LQ Problem in Finite Dimensions

In this section, we focus on the relationship between the existence of an optimal feedback operator for Problem (DLQ) (introduced in the last section) and the global solvability of the Riccati equation (13.13). All contents in this section are classical. We refer the readers to [193, 371] for a comprehensive introduction to deterministic LQ problems in finite dimensions. The following concept is actually a special case of that in Definition 13.11.


13 Linear Quadratic Optimal Control Problems

Definition 13.12. We call Θ(·) ∈ L²(0,T; R^{m×n}) an optimal feedback operator for Problem (DLQ) if

  J(η; Θ(·)x̄(·)) ≤ J(η; u(·)),   ∀ (η, u(·)) ∈ Rⁿ × L²(0,T; Rᵐ),        (13.19)

where x̄(·) solves the following equation:

  dx̄/dt = (A(t) + BΘ)x̄  in [0, T],
  x̄(0) = η.                                                             (13.20)

We have the following result:

Theorem 13.13. Assume R ≫ 0. If the Riccati equation (13.13) admits a global solution P(·) ∈ C([0,T]; S(Rⁿ)), then Problem (DLQ) has an optimal feedback operator of the form

  Θ(·) = −R⁻¹(·)B⊤(·)P(·),                                              (13.21)

and (13.14) holds.

Proof: Let x̄(·) satisfy

  dx̄/dt = (A(t) − BR⁻¹B⊤P)x̄  in [0, T],
  x̄(0) = η.                                                             (13.22)

Define ū(·) by (13.12). Then x̄(·) is the corresponding state. For any u(·) ∈ L²(0,T; Rᵐ), let x(·) = x(·; η, u(·)). By (13.13), we have

  ⟨P(T)x(T), x(T)⟩_{Rⁿ} − ⟨P(0)η, η⟩_{Rⁿ}
  = ∫₀ᵀ [⟨(PBR⁻¹B⊤P − M)x(t), x(t)⟩_{Rⁿ} + 2⟨B⊤P(t)x(t), u(t)⟩_{Rᵐ}] dt.

Consequently,

  J(η; u(·)) − ½⟨P(0)η, η⟩_{Rⁿ}
  = ½ ∫₀ᵀ (⟨Ru, u⟩_{Rᵐ} + ⟨PBR⁻¹B⊤Px, x⟩_{Rⁿ} + 2⟨B⊤Px, u⟩_{Rᵐ}) dt     (13.23)
  = ½ ∫₀ᵀ ⟨R(u + R⁻¹B⊤Px), u + R⁻¹B⊤Px⟩_{Rᵐ} dt.

This implies

  J(η; ū(·)) = ½⟨P(0)η, η⟩_{Rⁿ} ≤ J(η; u(·)),


which concludes that ū(·) is an optimal control, Θ(·) given by (13.21) is an optimal feedback operator, and (13.14) holds. This completes the proof of Theorem 13.13.

Since (13.13) is locally Lipschitz in the unknown P(·), it is locally solvable; that is, there exists s < T such that (13.13) admits a solution on [s, T]. Nevertheless, due to the quadratic term, global well-posedness is not guaranteed without additional conditions. The following result, together with Theorem 13.13, gives an equivalence between the solvability of the Riccati equation (13.13) and the existence of an optimal feedback operator for Problem (DLQ).

Theorem 13.14. Assume R ≫ 0. If there exists an optimal feedback operator for Problem (DLQ), then the Riccati equation (13.13) admits a unique global solution P(·) ∈ C([0,T]; S(Rⁿ)).

Proof: The uniqueness of the solution follows from the boundedness of the coefficients along with Gronwall's inequality. Consequently, we only need to show the existence of the solution.

Let Θ be an optimal feedback operator for Problem (DLQ). By (1.41) and (1.43), the following de-coupled forward-backward (ordinary differential) system:

  dx̄/dt = A(t)x̄ + BΘx̄  in [0, T],
  dy/dt = −A(t)⊤y + Mx̄  in [0, T],                                      (13.24)
  x̄(0) = η,   y(T) = −Gx̄(T),

admits a (unique) solution (x̄(·), y(·))⊤ ∈ C([0,T]; R²ⁿ), and

  ū(t) = R⁻¹B⊤y(t) = Θx̄(t),   t ∈ [0, T]                                (13.25)

gives an optimal control, where x̄(·) is the corresponding optimal state. Define R^{n×n}-valued functions X(·) and Y(·) by

  X(t)η = x̄(t; η),   Y(t)η = y(t; η),   ∀ (t, η) ∈ [0, T] × Rⁿ.         (13.26)

Clearly, (X(·), Y(·)) solves the following matrix-valued forward-backward ordinary differential equation:

  dX/dt = A(t)X + BΘX  in [0, T],
  dY/dt = −A(t)⊤Y + MX  in [0, T],                                      (13.27)
  X(0) = Iₙ,   Y(T) = −GX(T).


Consider the following ordinary differential equation:

  dX̃/dt = (−A(t) − BΘ)⊤X̃  in [0, T],
  X̃(0) = Iₙ.                                                            (13.28)

Then,

  d(X̃(t)⊤X(t))/dt = 0,   ∀ t ∈ [0, T].

This implies that X(t)⁻¹ = X̃(t)⊤ for all t ∈ [0, T]. From (13.25), it follows that

  R⁻¹B⊤Y(t) = ΘX(t),   t ∈ [0, T].                                      (13.29)

Let

  P(t) = −Y(t)X(t)⁻¹,   t ∈ [0, T].                                     (13.30)

From (13.27), (13.28) and (13.29), we have

  dP/dt = −(MX − A(t)⊤Y)X⁻¹ + YX⁻¹A(t) + YX⁻¹BR⁻¹B⊤YX⁻¹
        = −M − A(t)⊤P − PA(t) + PBR⁻¹B⊤P.

This, together with P(T) = G, proves that P(·) is the solution to (13.13). This completes the proof of Theorem 13.14.

Remark 13.15. Compared with the coupled forward-backward system (13.9), thanks to the assumed existence of an optimal feedback operator for Problem (DLQ), we obtain the de-coupled forward-backward system (13.24), which plays a key role in the proof of Theorem 13.14. As we shall see in the sequel, this point is also the key for us to characterize the optimal feedback operator for Problem (SLQ).
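The content of Theorems 13.13 and 13.14 can be illustrated numerically. The sketch below (scalar case n = m = 1; the coefficient values are made up for illustration and are not taken from the text) integrates the Riccati ODE dP/dt = −M − 2AP + B²P²/R backward from P(T) = G and checks that the closed-loop cost of the feedback (13.21) matches ½P(0)η²:

```python
import numpy as np

# Scalar deterministic LQ data (illustrative values, not from the text)
A, B, M, R, G, T, eta = 0.3, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
N = 20_000
dt = T / N

def f(p):                       # Riccati ODE (13.13):  dP/dt = -M - 2*A*P + B^2*P^2/R
    return -M - 2 * A * p + (B * p) ** 2 / R

P = np.empty(N + 1)
P[N] = G
for k in range(N, 0, -1):       # RK4, stepping backward from P(T) = G
    h = -dt
    k1 = f(P[k]); k2 = f(P[k] + h * k1 / 2)
    k3 = f(P[k] + h * k2 / 2); k4 = f(P[k] + h * k3)
    P[k - 1] = P[k] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Closed loop with the feedback u = -R^{-1} B P x of (13.21)
x, J = eta, 0.0
for k in range(N):
    u = -B * P[k] * x / R
    J += 0.5 * (M * x * x + R * u * u) * dt
    x += (A * x + B * u) * dt
J += 0.5 * G * x * x

print(J, 0.5 * P[0] * eta ** 2)   # the two numbers should agree closely
```

For these (arbitrary) data the Riccati solution stays bounded on [0, 1], so the global solvability assumed in Theorem 13.13 holds and the simulated cost reproduces ½⟨P(0)η, η⟩ up to discretization error.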

13.3 Optimal Feedback for Stochastic LQ Problem in Finite Dimensions

In this section, we recall the main results on optimal feedback for stochastic LQ problems in finite dimensions. To this end, we set H = Rⁿ, V = R and U = Rᵐ. The main content of this section is taken from [236, 371].


13.3.1 Differences Between Deterministic and Stochastic LQ Problems in Finite Dimensions

For any η ∈ Rⁿ, we consider the following control system:

  dx(t) = (A(t)x + Bu)dt + (Cx + Du)dW(t)  in [0, T],
  x(0) = η,                                                             (13.31)

with the quadratic cost functional

  J(η; u(·)) = ½ E[∫₀ᵀ (⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}) dt + ⟨Gx(T), x(T)⟩_{Rⁿ}],   (13.32)

where u(·) ∈ L²_F(0,T; Rᵐ) is the control variable, and

  A(·), C(·) ∈ L^∞_F(0,T; R^{n×n}),   B(·), D(·) ∈ L^∞_F(0,T; R^{n×m}),
  M(·) ∈ L^∞_F(0,T; S(Rⁿ)),   R(·) ∈ L^∞_F(0,T; S(Rᵐ)),   G ∈ L^∞_{F_T}(Ω; S(Rⁿ)).

It is easy to see that the system (13.31) admits a unique solution x(·) ∈ L²_F(Ω; C([0,T]; Rⁿ)) and hence (13.32) is well-defined. The following optimal control problem can be viewed as a finite dimensional version of Problem (SLQ):

Problem (FSLQ). For each η ∈ Rⁿ, find a ū(·) ∈ L²_F(0,T; Rᵐ) such that

  J(η; ū(·)) = inf_{u(·) ∈ L²_F(0,T;Rᵐ)} J(η; u(·)).

Similarly to Problem (SLQ), one can define the so-called standard Problem (FSLQ), and the finiteness and (unique) solvability of Problem (FSLQ); we omit the details. Let us recall that Proposition 13.4 shows the necessity of the condition R ≫ 0 for Problem (DLQ) to be finite. The following example, however, reveals a completely different phenomenon in the stochastic setting.

Example 13.16. For m = n = 1, consider the stochastic control system

  dx = u dW(t)  in [0, T],
  x(0) = η,                                                             (13.33)

with the cost functional

  J(η; u(·)) = ½ E(−∫₀ᵀ u² dt + 2x(T)²).                                (13.34)

Then, since E x(T)² = η² + E∫₀ᵀ u² dt,

  J(η; u(·)) = −½ E∫₀ᵀ u² dt + E∫₀ᵀ u² dt + η² = ½ E∫₀ᵀ u² dt + η² ≥ 0.

Hence, this LQ problem is finite even though, in this example, the corresponding R = −1!
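The identity used above can be checked by a quick Monte Carlo experiment; the constant control u ≡ 1 below is an arbitrary illustrative choice, for which J = η² + T/2:

```python
import numpy as np

rng = np.random.default_rng(0)
T, eta, n_paths = 1.0, 1.0, 200_000

# For the constant control u = 1:  x(T) = eta + W(T), and
# J = 0.5*E[-int_0^T u^2 dt + 2 x(T)^2] = eta^2 + T/2  (>= 0, despite R = -1)
xT = eta + np.sqrt(T) * rng.standard_normal(n_paths)
J_mc = 0.5 * np.mean(-T + 2.0 * xT ** 2)

print(J_mc, eta ** 2 + T / 2)   # Monte Carlo estimate vs. exact value
```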


A necessary condition for the optimal control is given below.

Theorem 13.17. Let Problem (FSLQ) be solvable at η ∈ Rⁿ with (x̄(·), ū(·)) being an optimal pair. Then there exists a transposition solution (ȳ(·), Y(·)) to

  dȳ = −(A(t)⊤ȳ + C⊤Y − Mx̄)dt + Y dW(t)  in [0, T],
  ȳ(T) = −Gx̄(T),                                                       (13.35)

and a transposition solution (P(·), Q(·)) to

  dP = −(A(t)⊤P + PA(t) + C⊤PC + QC + C⊤Q − M)dt + Q dW(t)  in [0, T],
  P(T) = −G,                                                            (13.36)

such that

  R(t)ū(t) − B(t)⊤ȳ(t) − D(t)⊤Y(t) = 0,   a.e. (t, ω) ∈ [0, T] × Ω,     (13.37)

and

  R(t) − D(t)⊤P(t)D(t) ≥ 0,   a.e. (t, ω) ∈ [0, T] × Ω.                 (13.38)

Theorem 13.17 can be viewed as a special case of Theorem 13.30, and hence we will not prove it here.

Remark 13.18. Compared with the deterministic case (Proposition 13.4), we see from (13.38) that R may be negative. Namely, the necessary condition might still be satisfied even if R is negative, provided that the term −D⊤PD is sufficiently positive definite.

The following concept can be viewed as a special case of that in Definition 13.11.

Definition 13.19. Any Θ(·) ∈ L^∞_F(Ω; L²(0,T; R^{m×n})) is called an admissible feedback operator for Problem (FSLQ). Further, it is called an optimal feedback operator for Problem (FSLQ) if

  J(η; Θ(·)x̄(·)) ≤ J(η; u(·)),   ∀ (η, u(·)) ∈ Rⁿ × U[0, T],            (13.39)

where x̄(·) = x̄(·; η, Θ(·)x̄(·)) solves

  dx̄(t) = (A(t) + BΘ)x̄ dt + (C + DΘ)x̄ dW(t)  in [0, T],
  x̄(0) = η.                                                             (13.40)

Remark 13.20. In the above, we choose Θ ∈ L^∞_F(Ω; L²(0,T; R^{m×n})) rather than Θ ∈ L^p_F(Ω; L²(0,T; R^{m×n})) for some p ∈ [1, ∞), because we want the control strategy to be robust with respect to small perturbations. Indeed, under the latter assumption on Θ, if there is a measurement error δx ∈ L²_F(Ω; C([0,T]; Rⁿ)) with |δx|_{L²_F(Ω;C([0,T];Rⁿ))} = ε > 0 (for ε small enough) in the observation of the state, then one cannot conclude that Θ(x̄ + δx) is an admissible control.


As in the deterministic case, we shall construct the optimal feedback operator by means of a suitable Riccati equation. Let us first derive such an equation formally. For simplicity, in the rest of this section, we assume that F is the natural filtration generated by W(·).

Suppose that Problem (FSLQ) is uniquely solvable. For any η ∈ Rⁿ, let (x̄(·), ū(·)) be the optimal pair and (ȳ(·), Y(·)) the classical adapted solution to (13.35). It follows from (13.37) that

  Rū(t) − B⊤ȳ(t) − D⊤Y(t) = 0,   a.e. (t, ω) ∈ [0, T] × Ω.              (13.41)

Assume that there is an Itô process

  dP = F dt + Λ dW(t)                                                   (13.42)

with suitable F and Λ such that

  ȳ(t) = −P(t)x̄(t),   t ∈ [0, T].                                      (13.43)

Then, by Itô's formula,

  d(Px̄) = (dP)x̄ + P(A(t)x̄ + Bū)dt + P(Cx̄ + Dū)dW(t) + Λ(Cx̄ + Dū)dt
         = −dȳ = (A(t)⊤ȳ + C⊤Y − Mx̄)dt − Y dW(t).                      (13.44)

Consequently,

  (dP)x̄ = (−PA(t)x̄ − PBū − Mx̄ + A(t)⊤ȳ + C⊤Y)dt − Λ(Cx̄ + Dū)dt
          − P(Cx̄ + Dū)dW(t) − Y dW(t).                                 (13.45)

From (13.42) and (13.45), we see that Λx̄ = −P(Cx̄ + Dū) − Y. This, together with (13.41) and (13.43), implies that

  ū = −(R + D⊤PD)⁻¹(B⊤P + D⊤PC + D⊤Λ)x̄,
  Y = −Λx̄ − PCx̄ + PD(R + D⊤PD)⁻¹(B⊤P + D⊤PC + D⊤Λ)x̄.                  (13.46)

By (13.45) and (13.46), we have

  dP = −(PA(t) + A(t)⊤P + ΛC + C⊤Λ + C⊤PC + M − L⊤K⁻¹L)dt + Λ dW(t)  in [0, T],
  P(T) = G,                                                             (13.47)

where

  K = R + D⊤PD,   L = B⊤P + D⊤(PC + Λ).

By (13.46), we see that, formally, the optimal feedback operator is

  Θ = −K⁻¹L.                                                            (13.48)

We call (13.47) the matrix-valued backward stochastic Riccati equation associated with Problem (FSLQ). It is a nonlinear backward stochastic differential equation, and its well-posedness is highly nontrivial. Compared with Problem (DLQ), Problem (FSLQ) is clearly much more complicated. Both the well-posedness of (13.47) and the relationship between the solution (P, Λ) and the solvability of Problem (FSLQ) differ markedly from the deterministic problem. For example:

1) The uniqueness of the solution to the Riccati equation (13.47) is not guaranteed;
2) The Θ given by (13.48) may not be an admissible feedback operator, i.e., it may not belong to L^∞_F(Ω; L²(0,T; R^{m×n})).

Let us illustrate these two differences by the following two examples, respectively.

Example 13.21. For m = n = 1, consider the control system

  dx = (A(t)x + Bu)dt + u dW(t)  in [0, 1],
  x(0) = η,

with the cost functional

  J(η; u(·)) = ½ E[−x(1)² + ∫₀¹ (Mx² + Ru²)dt],

where

  R(t) = (t − 3/2)² + 3/4 > 0,   M(t) = −1/R(t),
  A(t) = ½[(R(t) − 1)²/R(t)² − 1],   B(t) = (R(t) − 1)/R(t).            (13.49)

In this case, the corresponding stochastic Riccati equation becomes an ordinary differential equation:

  dP/dt + 2A(t)P + M − B²P²/(R + P) = 0  in [0, 1],
  P(1) = −1.                                                            (13.50)

It follows from (13.49) that

  B²P²/(R + P) − 2A(t)P − M = [(B² − 2A(t))P² − (M + 2A(t)R)P − MR]/(R + P)
                            = (P² + 2P + 1)/(R + P).

This, together with (13.50), implies that

  dP/dt = (P² + 2P + 1)/(R + P)  in [0, 1],
  P(1) = −1.                                                            (13.51)

Clearly, both

  P₁(t) = −1,   t ∈ [0, 1],   and   P₂(t) = t − 2,   t ∈ [0, 1],

are solutions to (13.51).

Example 13.22. Define one-dimensional stochastic processes M(·), ζ(·) and a stopping time τ as follows:

  M(t) = ∫₀ᵗ (T − s)^{−1/2} dW(s),   t ∈ [0, T),
  τ = inf{t ∈ [0, T) : |M(t)| > 1} ∧ T,                                 (13.52)
  ζ(t) = [π/(2√2)] (T − t)^{−1/2} χ_{[0,τ]}(t),   t ∈ [0, T).

By some direct computations, we have

  ∫₀ᵀ ζ(s)dW(s) = [π/(2√2)] ∫₀^τ (T − t)^{−1/2} dW(t)
                = [π/(2√2)] M(τ) ≤ π/(2√2),                             (13.53)

and

  E[exp(∫₀ᵀ |ζ(t)|² dt)] = ∞.                                          (13.54)

Consider the following backward stochastic differential equation:

  z(t) = ∫₀ᵀ ζ(s)dW(s) + π/(2√2) + 1 − ∫ₜᵀ Z(s)dW(s),   t ∈ [0, T].

This equation admits a unique solution

  z(t) = ∫₀ᵗ ζ(s)dW(s) + π/(2√2) + 1,   Z(t) = ζ(t),   a.e. t ∈ [0, T].
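That this pair solves the equation is just the telescoping identity ∫₀ᵀ − ∫ₜᵀ = ∫₀ᵗ for the stochastic integrals; a discrete sketch (with an arbitrary integrand and made-up increments standing in for ζ and dW):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000
c0 = np.pi / (2 * np.sqrt(2)) + 1.0          # the constant pi/(2*sqrt(2)) + 1
dW = rng.standard_normal(N) * 0.03           # Brownian increments on a grid
zeta = rng.standard_normal(N)                # arbitrary integrand in place of zeta

s = zeta * dW                                # summands of the discrete integral
z = c0 + np.concatenate(([0.0], np.cumsum(s)))            # z(t_k) = c0 + sum_{j<k} s_j

tail = np.concatenate((np.cumsum(s[::-1])[::-1], [0.0]))  # sum_{j>=k} s_j
rhs = s.sum() + c0 - tail                    # int_0^T + c0 - int_t^T

print(np.max(np.abs(z - rhs)))               # identical up to rounding
```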


From (13.52)–(13.54), it is easy to see that

  1 ≤ z(·) ≤ π/√2 + 1,   Z(·) ∉ L^∞_F(Ω; L²(0, T)).                     (13.55)

Consider a standard Problem (FSLQ) with the following setting:

  m = k = 1,   A(·) = B = C = 0,   D = 1,
  M = 0,   R = 1/4 > 0,   G = z(T)⁻¹ − 1/4 > 0.                         (13.56)

For this problem, the corresponding Riccati equation reads

  dP = (R + P)⁻¹Λ² dt + Λ dW(t)  in [0, T],
  P(T) = G,                                                            (13.57)

and Θ(·) = −(R + P(·))⁻¹Λ(·). Put

  P̃(·) = P(·) + R,   Λ̃ = Λ.

It follows from (13.57) that

  dP̃ = P̃⁻¹Λ̃² ds + Λ̃ dW(s)  in [0, T],
  P̃(T) = z(T)⁻¹.                                                       (13.58)

Applying Itô's formula to z(·)⁻¹, we deduce that (P̃(·), Λ̃(·)) = (z(·)⁻¹, −z(·)⁻²Z(·)) is the unique solution to (13.58). As a result,

  (P(·), Λ(·)) = (z(·)⁻¹ − R, −z(·)⁻²Z(·))

is the unique solution to the Riccati equation (13.57). Moreover, Θ(·) = z(·)⁻¹Z(·). By (13.55), we see that Θ(·) does not belong to L^∞_F(Ω; L²(0, T)). Hence, it is not an admissible feedback operator.

13.3.2 Characterization of Optimal Feedbacks for Stochastic LQ Problems in Finite Dimensions

From Examples 13.21 and 13.22, we see that neither the well-posedness of the matrix-valued backward stochastic Riccati equation (13.47) nor the existence of the optimal feedback operator is guaranteed, even for a standard Problem (FSLQ). Nevertheless, we have the following result, which gives the equivalence between the well-posedness of the equation (13.47) and the existence of an optimal feedback operator for Problem (FSLQ).
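The non-uniqueness in Example 13.21 can be verified directly: with R(t) = (t − 3/2)² + 3/4, both P₁ ≡ −1 and P₂(t) = t − 2 satisfy dP/dt = (P + 1)²/(R + P). A short numerical check (the grid is an arbitrary choice, kept away from t = 1, where R + P₂ vanishes):

```python
import numpy as np

def R(t):
    return (t - 1.5) ** 2 + 0.75

def rhs(t, P):                       # right-hand side of (13.51)
    return (P + 1.0) ** 2 / (R(t) + P)

t = np.linspace(0.01, 0.99, 99)      # avoid t = 1, where R + P2 -> 0

# P1(t) = -1:  dP1/dt = 0 and the right-hand side vanishes identically
res1 = rhs(t, -np.ones_like(t)) - 0.0

# P2(t) = t - 2:  dP2/dt = 1 and the right-hand side equals (t-1)^2/(t-1)^2 = 1
res2 = rhs(t, t - 2.0) - 1.0

print(np.max(np.abs(res1)), np.max(np.abs(res2)))  # both residuals vanish
```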


Theorem 13.23. Assume that Problem (FSLQ) is standard. Then, this problem admits an optimal feedback operator Θ(·) ∈ L^∞_F(Ω; L²(0,T; R^{m×n})) if and only if the matrix-valued backward stochastic Riccati equation (13.47) admits a solution (P(·), Λ(·)) ∈ L^∞_F(Ω; C([0,T]; S(Rⁿ))) × L²_F(0,T; S(Rⁿ)) such that

  K ≫ 0                                                                (13.59)

and

  K(·)⁻¹L(·) ∈ L^∞_F(Ω; L²(0,T; R^{m×n})).                              (13.60)

In this case, the optimal feedback operator Θ(·) is given by

  Θ(·) = −K(·)⁻¹L(·).                                                  (13.61)

Furthermore,

  inf_{u(·) ∈ L²_F(0,T;Rᵐ)} J(η; u) = ½⟨P(0)η, η⟩_{Rⁿ}.                 (13.62)

From the definition of Θ in (13.138), we derive that, Θ⊤ KΘ = −Θ⊤ KK −1 L = L⊤ K −1 L.


This, together with (13.63), implies that

  d⟨Px, x⟩_{Rⁿ} = −⟨(M − Θ⊤KΘ)x, x⟩_{Rⁿ} dr + 2⟨L⊤u, x⟩_{Rⁿ} dr
                 + ⟨D⊤PDu, u⟩_{Rᵐ} dr
                 + [2⟨P(Cx + Du), x⟩_{Rⁿ} + ⟨Λx, x⟩_{Rⁿ}] dW(r).        (13.64)

To handle the stochastic integral above, we introduce a sequence of stopping times {τⱼ}_{j=1}^∞ by

  τⱼ = inf{t ≥ 0 : ∫₀ᵗ |Λ(r)|² dr ≥ j} ∧ T.

It is easy to see that τⱼ → T, a.s., as j → ∞. Using (13.64), we obtain that

  E⟨P(τⱼ)x(τⱼ), x(τⱼ)⟩_{Rⁿ} + E∫₀ᵀ χ_{[0,τⱼ]}[⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}] dr
  = E⟨P(0)η, η⟩_{Rⁿ} + E∫₀ᵀ χ_{[0,τⱼ]}[⟨Θ⊤KΘx, x⟩_{Rⁿ} + 2⟨L⊤u, x⟩_{Rⁿ}] dr
    + E∫₀ᵀ χ_{[0,τⱼ]}⟨(R + D⊤PD)u, u⟩_{Rᵐ} dr.

Clearly,

  |⟨P(τⱼ)x(τⱼ), x(τⱼ)⟩_{Rⁿ}| ≤ |P|_{L^∞_F(Ω;C([0,T];R^{n×n}))} |x|²_{C([0,T];Rⁿ)}.

By the Dominated Convergence Theorem, we obtain that

  lim_{j→∞} E⟨P(τⱼ)x(τⱼ), x(τⱼ)⟩_{Rⁿ} = E⟨P(T)x(T), x(T)⟩_{Rⁿ}.         (13.65)

Furthermore,

  χ_{[0,τⱼ]}[⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}] ≤ ⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ} ∈ L¹_F(0, T; R).

By the Dominated Convergence Theorem again, we get that

  lim_{j→∞} E∫₀ᵀ χ_{[0,τⱼ]}[⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}] dr
  = E∫₀ᵀ [⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}] dr.                              (13.66)

Similarly, we can obtain that

  lim_{j→∞} E∫₀ᵀ χ_{[0,τⱼ]}[⟨Θ⊤KΘx, x⟩_{Rⁿ} + 2⟨L⊤u, x⟩_{Rⁿ}] dr
    + lim_{j→∞} E∫₀ᵀ χ_{[0,τⱼ]}⟨(R + D⊤PD)u, u⟩_{Rᵐ} dr
  = E∫₀ᵀ [⟨Θ⊤KΘx, x⟩_{Rⁿ} + 2⟨L⊤u, x⟩_{Rⁿ}] dr
    + E∫₀ᵀ ⟨(R + D⊤PD)u, u⟩_{Rᵐ} dr.                                    (13.67)

It follows from (13.65)–(13.67) that

  2J(η; u(·))
  = E⟨Gx(T), x(T)⟩_{Rⁿ} + E∫₀ᵀ [⟨Mx, x⟩_{Rⁿ} + ⟨Ru, u⟩_{Rᵐ}] dr
  = E⟨P(0)η, η⟩_{Rⁿ} + E∫₀ᵀ [⟨Θ⊤KΘx, x⟩_{Rⁿ} + 2⟨L⊤u, x⟩_{Rⁿ} + ⟨Ku, u⟩_{Rᵐ}] dr
  = E[⟨P(0)η, η⟩_{Rⁿ} + ∫₀ᵀ (⟨KΘx, Θx⟩_{Rᵐ} − 2⟨KΘx, u⟩_{Rᵐ} + ⟨Ku, u⟩_{Rᵐ}) dr]
  = 2J(η; Θx̄) + E∫₀ᵀ ⟨K(u − Θx), u − Θx⟩_{Rᵐ} dr,

where we have used the fact that L⊤ = −Θ⊤K. Hence, by K ≫ 0, we get

  J(η; Θx̄) ≤ J(η; u),   ∀ u(·) ∈ L²_F(0, T; Rᵐ).

Thus, the function Θ(·) given by (13.61) is an optimal feedback operator for Problem (FSLQ). This completes the proof of the "if" part of Theorem 13.23.

The "only if" part. Let us divide the proof into three steps.

Step 1. Let Θ(·) ∈ L^∞_F(Ω; L²(0,T; R^{m×n})) be an optimal feedback operator for Problem (FSLQ). Then, by Theorem 13.17, for any ζ ∈ Rⁿ, the following forward-backward stochastic differential equation

  dx = (A(t) + BΘ)x dt + (C + DΘ)x dW(t)  in [0, T],
  dy = −(A(t)⊤y + C⊤Y + Mx) dt + Y dW(t)  in [0, T],                    (13.68)
  x(0) = ζ,   y(T) = Gx(T)

admits a unique solution (x(·), y(·), Y(·)) ∈ L²_F(Ω; C([0,T]; Rⁿ)) × L²_F(Ω; C([0,T]; Rⁿ)) × L²_F(0,T; Rⁿ) such that

  RΘx + B⊤y + D⊤Y = 0,   a.e. (t, ω) ∈ (0, T) × Ω.                      (13.69)

Also, consider the following stochastic differential equation:

  dx̃ = −[A(t) + BΘ − (C + DΘ)²]⊤x̃ dt − (C + DΘ)⊤x̃ dW(t)  in [0, T],
  x̃(0) = ζ,                                                            (13.70)

which has a unique solution x̃ ∈ L²_F(Ω; C([0,T]; Rⁿ)). Further, consider the following R^{n×n}-valued forward-backward stochastic differential equation:

  dX = (A(t) + BΘ)X dt + (C + DΘ)X dW(t)  in [0, T],
  dY = −(A(t)⊤Y + C⊤𝕐 + MX) dt + 𝕐 dW(t)  in [0, T],                    (13.71)
  X(0) = Iₙ,   Y(T) = GX(T),

and the R^{n×n}-valued stochastic differential equation:

  dX̃ = −[A(t) + BΘ − (C + DΘ)²]⊤X̃ dt − (C + DΘ)⊤X̃ dW(t)  in [0, T],
  X̃(0) = Iₙ.                                                           (13.72)

∀ t ∈ [0, T ], (13.73) a.e. t ∈ [0, T ].

By (13.69) and (13.73), we find that RΘX + B ⊤ Y + D⊤ Y = 0,

a.e. (t, ω) ∈ [0, T ] × Ω.

For any ζ, ρ ∈ R and t ∈ [0, T ], by Itˆo’s formula, we have ⟨ ⟩ ⟨ ⟩ x(t; ζ), x ˜(t; ρ) Rn − ζ, ρ Rn ∫ t ⟨( ) ⟩ = A(t) + BΘ x(r; ζ), x ˜(r; ρ) Rn dr n

0



t

⟨(

t

⟨ [ ( )2 ]⊤ ⟩ x(r; ζ), − A(t) − BΘ + C + DΘ x ˜(r; ρ) Rn dr

t

⟨ ( )⊤ ⟩ x(r; ζ), C + DΘ x ˜(r; ρ) Rn dW (r)

t

⟨(

+ ∫

0



0



0

+ − − 0

= 0.

) ⟩ C + DΘ x(r; ζ), x ˜(r; ρ) Rn dW (r)

) ( )⊤ ⟩ C + DΘ x(r; ζ), C + DΘ x ˜(r; ρ) Rn dr

(13.74)

13.3 Optimal Feedback for Stochastic LQ Problem in Finite Dimensions

Consequently, ⟨ ⟩ ⟨ ⟩ ⟨ ⟩ e X(t)ζ, X(t)ρ = x(t; ζ), x ˜(t; ρ) Rn = ζ, ρ Rn , Rn

497

a.s.

e ⊤ = In for all t ∈ [0, T ], a.s., that is, X(t) e ⊤ = X(t)−1 This implies that X(t)X(t) for all t ∈ [0, T ], a.s. Step 2. Put P (t, ω) = Y(t, ω)X(t, ω)−1 , Π(t, ω) = Y(t, ω)X(t, ω)−1 , ∆



t ∈ [0, T ]. (13.75)

By Itô's formula,

  dP = {−(A(t)⊤Y + C⊤𝕐 + MX)X⁻¹ + YX⁻¹[(C + DΘ)² − A(t) − BΘ] − 𝕐X⁻¹(C + DΘ)} dt
       + [𝕐X⁻¹ − YX⁻¹(C + DΘ)] dW(t)
     = {−A(t)⊤P − C⊤Π − M + P[(C + DΘ)² − A(t) − BΘ] − Π(C + DΘ)} dt
       + [Π − P(C + DΘ)] dW(t).

Let

  Λ = Π − P(C + DΘ).                                                   (13.76)

Then, (P(·), Λ(·)) solves the following R^{n×n}-valued backward stochastic differential equation:

  dP = −[PA(t) + A(t)⊤P + ΛC + C⊤Λ + C⊤PC + (PB + C⊤PD + ΛD)Θ + M] dt
       + Λ dW(t)  in [0, T],                                            (13.77)
  P(T) = G.

By Theorem 4.2, we conclude that (P, Λ) ∈ L^∞_F(Ω; C([0,T]; R^{n×n})) × L²_F(0,T; R^{n×n}).

For any t ∈ [0, T) and η ∈ L²_{F_t}(Ω; Rⁿ), let us consider the following forward-backward stochastic differential equation:

  dxᵗ(r) = (A(t) + BΘ)xᵗ dr + (C + DΘ)xᵗ dW(r)  in [t, T],
  dyᵗ(r) = −(A(t)⊤yᵗ + C⊤Yᵗ + Mxᵗ) dr + Yᵗ dW(r)  in [t, T],            (13.78)
  xᵗ(t) = η,   yᵗ(T) = Gxᵗ(T).

Clearly, the equation (13.78) admits a unique solution (xᵗ(·), yᵗ(·), Yᵗ(·)) ∈ L²_F(Ω; C([t,T]; Rⁿ)) × L²_F(Ω; C([t,T]; Rⁿ)) × L²_F(t,T; Rⁿ). Also, we consider the following forward-backward stochastic differential equation:

  dXᵗ(r) = (A(t) + BΘ)Xᵗ dr + (C + DΘ)Xᵗ dW(r)  in [t, T],
  dYᵗ(r) = −(A(t)⊤Yᵗ + C⊤𝕐ᵗ + MXᵗ) dr + 𝕐ᵗ dW(r)  in [t, T],            (13.79)
  Xᵗ(t) = Iₙ,   Yᵗ(T) = GXᵗ(T).


Likewise, the equation (13.79) admits a unique solution (Xᵗ(·), Yᵗ(·), 𝕐ᵗ(·)) ∈ L²_F(Ω; C([t,T]; R^{n×n})) × L²_F(Ω; C([t,T]; R^{n×n})) × L²_F(t,T; R^{n×n}). It follows from (13.78) and (13.79) that, for any η ∈ L²_{F_t}(Ω; Rⁿ),

  xᵗ(r) = Xᵗ(r)η,   yᵗ(r) = Yᵗ(r)η,   ∀ r ∈ [t, T],
  Yᵗ(r) = 𝕐ᵗ(r)η,   a.e. r ∈ [t, T].                                   (13.80)

By the uniqueness of the solution to (13.68), for any ζ ∈ Rⁿ and t ∈ [0, T], we have that

  Xᵗ(r)X(t)ζ = xᵗ(r; X(t)ζ) = x(r; ζ),   a.s.

Thus,

  Yᵗ(t)X(t)ζ = yᵗ(t; X(t)ζ) = Y(t)ζ,   a.s.

This implies that, for all t ∈ [0, T],

  Yᵗ(t) = Y(t)X(t)⁻¹ = P(t),   a.s.                                    (13.81)

Let η, ξ ∈ L²_{F_t}(Ω; Rⁿ). Since Yᵗ(r)η = yᵗ(r; η) and Xᵗ(r)ξ = xᵗ(r; ξ), applying Itô's formula to ⟨xᵗ(·), yᵗ(·)⟩_{Rⁿ}, we get that

  E⟨ξ, P(t)η⟩_{Rⁿ}
  = E⟨GXᵗ(T)η, Xᵗ(T)ξ⟩_{Rⁿ} + E∫ₜᵀ ⟨MXᵗη, Xᵗξ⟩_{Rⁿ} dr
    − E∫ₜᵀ ⟨BΘXᵗξ, Yᵗη⟩_{Rⁿ} dr − E∫ₜᵀ ⟨DΘXᵗξ, 𝕐ᵗη⟩_{Rⁿ} dr.            (13.82)

This implies that

  E⟨P(t)η, ξ⟩_{Rⁿ} = E⟨GXᵗ(T)η, Xᵗ(T)ξ⟩_{Rⁿ}
                    + E∫ₜᵀ (⟨MXᵗη, Xᵗξ⟩_{Rⁿ} + ⟨RΘXᵗη, ΘXᵗξ⟩_{Rᵐ}) dr.

Therefore,

  E⟨P(t)η, ξ⟩_{Rⁿ}
  = E⟨ξ, Xᵗ(T)⊤GXᵗ(T)η + ∫ₜᵀ [(Xᵗ)⊤MXᵗη + (Xᵗ)⊤Θ⊤RΘXᵗη] dr⟩_{Rⁿ},

which concludes that

  P(t) = E(Xᵗ(T)⊤GXᵗ(T) + ∫ₜᵀ [(Xᵗ)⊤MXᵗ + (Xᵗ)⊤Θ⊤RΘXᵗ] dr | F_t).

This, together with the assumption that Problem (FSLQ) is standard, proves that

  P(t) ≥ 0,   a.s.,   ∀ t ∈ [0, T].                                    (13.83)



Clearly, (P⊤, Λ⊤) satisfies

  dP⊤ = −[P⊤A(t) + A(t)⊤P⊤ + Λ⊤C + C⊤Λ⊤ + C⊤P⊤C
          + Θ⊤(PB + C⊤PD + ΛD)⊤ + M] dt + Λ⊤ dW(t)  in [0, T],          (13.84)
  P(T)⊤ = G.

According to (13.77) and (13.84), and noting that P(·) is symmetric, we find that, for any t ∈ [0, T],

  0 = −∫₀ᵗ {[ΛC + C⊤Λ + (PB + C⊤PD + ΛD)Θ] − [ΛC + C⊤Λ + (PB + C⊤PD + ΛD)Θ]⊤} dτ
      + ∫₀ᵗ (Λ − Λ⊤) dW(τ).                                            (13.85)

The first term on the right-hand side of (13.85) is absolutely continuous with respect to t, a.s. Hence, ∫₀ᵗ (Λ − Λ⊤) dW(τ) is absolutely continuous with respect to t, a.s. Consequently, its quadratic variation is 0, a.s. Namely,

  ∫₀ᵀ |Λ − Λ⊤|²_{R^{n×n}} ds = 0,   a.s.

This concludes that

  Λ(t, ω) = Λ(t, ω)⊤,   a.e. (t, ω) ∈ (0, T) × Ω.                       (13.86)

Step 3. In this step, we show that (P, Λ) is a pair of stochastic processes satisfying (13.47), (13.59) and (13.60).

It follows from R ≫ 0 and (13.83) that (13.59) holds. By (13.74), we get that

  B⊤P + D⊤Π + RΘ = 0,   a.e. (t, ω) ∈ [0, T] × Ω.                       (13.87)

This implies that

  0 = B⊤P + D⊤[Λ + P(C + DΘ)] + RΘ = B⊤P + D⊤PC + D⊤Λ + KΘ = L + KΘ.    (13.88)

From (13.59) and (13.88), we obtain

  Θ = −K⁻¹L.

This, together with the fact that Θ ∈ L^∞_F(Ω; L²(0,T; R^{m×n})), leads to (13.60). Therefore, it follows from (13.86) and (13.88) that

  (PB + C⊤PD + ΛD)Θ = L⊤Θ = −Θ⊤KΘ = −L⊤K⁻¹L.                            (13.89)

Hence, by (13.77), we conclude that (P, Λ) is a solution to (13.47). This completes the proof of the necessity in Theorem 13.23.

Remark 13.24. By Example 13.22, we see that sometimes it is impossible to construct the desired optimal feedback operator by solving the Riccati equation (13.47). Because of this, we need to introduce the condition (13.60). Clearly, this is quite different from the deterministic case.

Remark 13.25. Actually, the condition that Problem (FSLQ) is standard can be dropped (e.g., [236]). Nevertheless, in order to present the key idea in the simplest way, we do not pursue the full generality in this section.

Remark 13.26. One can see that the main ideas in the proof of Theorem 13.23 are similar to those in the proofs of Theorems 13.13 and 13.14. Nevertheless, the control dependent diffusion term (Cx + Du)dW(t) leads to some new difficulties.
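When the coefficients are deterministic, the unknown Λ in (13.47) vanishes and the Riccati equation reduces to an ODE. The scalar sketch below (with illustrative constants, not taken from the text) integrates this ODE, simulates the closed loop under the feedback Θ = −K⁻¹L of (13.61), and checks the optimal-cost formula (13.62) by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C, D, M, R, G = 0.0, 1.0, 0.5, 0.5, 2.0, 1.0, 1.0  # illustrative scalars
T, eta, N, n_paths = 1.0, 1.0, 400, 50_000
dt = T / N

# With deterministic coefficients, Lambda = 0 and (13.47) becomes the ODE
#   dP/dt = -(2*A*P + C^2*P + M - L^2/K),  K = R + D^2*P,  L = B*P + D*P*C.
def f(p):
    K = R + D * D * p
    L = B * p + D * p * C
    return -(2 * A * p + C * C * p + M - L * L / K)

P = np.empty(N + 1)
P[N] = G
for k in range(N, 0, -1):
    P[k - 1] = P[k] - dt * f(P[k])               # backward Euler step

Theta = -(B * P + D * P * C) / (R + D * D * P)   # feedback (13.61), scalar case

# Monte Carlo of the closed loop dx = (Ax + Bu)dt + (Cx + Du)dW with u = Theta*x
x = np.full(n_paths, eta)
J = np.zeros(n_paths)
for k in range(N):
    u = Theta[k] * x
    J += 0.5 * (M * x * x + R * u * u) * dt
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += (A * x + B * u) * dt + (C * x + D * u) * dW
J += 0.5 * G * x * x

print(J.mean(), 0.5 * P[0] * eta ** 2)   # optimal-cost formula (13.62)
```

Note that here K ≫ 0 and K⁻¹L is bounded, so conditions (13.59) and (13.60) hold trivially for this choice of data.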

13.4 Finiteness and Solvability of Problem (SLQ)

In this section, similarly to [371], we study the finiteness and solvability of Problem (SLQ). Since the state equation (13.1) is linear, the state process can be explicitly expressed in terms of the initial state and the control in a linear form. Substituting this formula into the cost functional (which is quadratic in the state and control), one obtains a quadratic functional of the initial state and the control. Thus, Problem (SLQ) can be transformed into a quadratic optimization problem in the Hilbert space L²_F(0,T; U). This leads to some necessary and sufficient conditions for the finiteness and solvability of Problem (SLQ). Let us explain the details below.

To this end, we define the following operators: for any η ∈ H and u(·) ∈ L²_F(0,T; U),

  F₁ : L²_F(0,T; U) → L²_F(0,T; H),   (F₁u(·))(·) = x(·; 0, u),   ∀ u(·) ∈ L²_F(0,T; U);

  F̂₁ : L²_F(0,T; U) → L²_{F_T}(Ω; H),   F̂₁u(·) = x(T; 0, u),   ∀ u(·) ∈ L²_F(0,T; U);

  F₂ : H → L²_F(0,T; H),   (F₂η)(·) = x(·; η, 0),   ∀ η ∈ H;

  F̂₂ : H → L²_{F_T}(Ω; H),   F̂₂η = x(T; η, 0),   ∀ η ∈ H,

where x(·) ≡ x(·; η, u(·)) solves the equation (13.1). Then, for any η ∈ H and u(·) ∈ L²_F(0,T; U), the corresponding state process x(·) and its terminal value x(T) (for the equation (13.1)) are given respectively by

  x(·) = (F₂η)(·) + (F₁u)(·),   x(T) = F̂₂η + F̂₁u.                      (13.90)

Next, we compute the adjoint operators of F₁, F̂₁, F₂ and F̂₂:

  F₁* : L²_F(0,T; H) → L²_F(0,T; U),
  F̂₁* : L²_{F_T}(Ω; H) → L²_F(0,T; U),
  F₂* : L²_F(0,T; H) → H,
  F̂₂* : L²_{F_T}(Ω; H) → H.

For this purpose, we introduce the following backward stochastic evolution equation:

  dy = −(A*y + C*Y + ξ)dt + Y dW(t)  in (0, T],
  y(T) = y_T,                                                          (13.91)

where y_T ∈ L²_{F_T}(Ω; H) and ξ(·) ∈ L²_F(0,T; H).

Proposition 13.27. For any ξ(·) ∈ L²_F(0,T; H), let (y₀(·), Y₀(·)) be the transposition solution to (13.91) with y_T = 0. Then

  (F₁*ξ)(t) = B*y₀(t) + D*Y₀(t),  a.e. t ∈ [0, T];   F₂*ξ = y₀(0).      (13.92)

On the other hand, for any y_T ∈ L²_{F_T}(Ω; H), let (y₁(·), Y₁(·)) be the transposition solution to (13.91) with ξ(·) = 0. Then

  (F̂₁*y_T)(t) = B*y₁(t) + D*Y₁(t),  a.e. t ∈ [0, T];   F̂₂*y_T = y₁(0).  (13.93)

Proof: For any y_T ∈ L²_{F_T}(Ω; H) and ξ(·) ∈ L²_F(0,T; H), there exists a unique transposition solution (y(·), Y(·)) ∈ D_F([0,T]; L²(Ω; H)) × L²_F(0,T; L₂⁰) to (13.91), which satisfies

  sup_{0≤t≤T} E|y(t)|²_H + E∫₀ᵀ |Y(t)|²_{L₂⁰} dt ≤ C E(|y_T|²_H + ∫₀ᵀ |ξ(t)|²_H dt)

for some constant C > 0. Therefore, all the operators defined in (13.92) and (13.93) are bounded.

Next, for any η ∈ H and u(·) ∈ L²_F(0,T; U), let x(·) be the solution of (13.1). From the definition of the transposition solution to the equation (13.91), we find

  E⟨x(T), y_T⟩_H − E⟨η, y(0)⟩_H = E∫₀ᵀ (⟨u(t), B*y(t) + D*Y(t)⟩_U − ⟨x(t), ξ(t)⟩_H) dt,

which implies that

  E(⟨F̂₂η + F̂₁u, y_T⟩_H − ⟨η, y(0)⟩_H)                                  (13.94)
  = E∫₀ᵀ (⟨u(t), B*y(t) + D*Y(t)⟩_U − ⟨(F₂η)(t) + (F₁u)(t), ξ(t)⟩_H) dt.

Let y_T = 0 and η = 0 in (13.94). We get

  E∫₀ᵀ ⟨(F₁u)(t), ξ(t)⟩_H dt = E∫₀ᵀ ⟨u(t), B*y₀(t) + D*Y₀(t)⟩_U dt.

This proves the first equality in (13.92). Letting u(·) = 0 and y_T = 0 in (13.94), we obtain

  E⟨η, F₂*ξ⟩_H = E∫₀ᵀ ⟨(F₂η)(t), ξ(t)⟩_H dt = E⟨η, y₀(0)⟩_H.

This gives the second equality in (13.92). Let η = 0 and ξ(·) = 0 in (13.94). We find that

  E∫₀ᵀ ⟨u(t), (F̂₁*y_T)(t)⟩_U dt = E⟨F̂₁u, y_T⟩_H = E∫₀ᵀ ⟨u(t), B*y₁(t) + D*Y₁(t)⟩_U dt.

This demonstrates the first equality in (13.93). At last, let u(·) = 0 and ξ(·) = 0 in (13.94). We see that

  E⟨η, F̂₂*y_T⟩_H = E⟨F̂₂η, y_T⟩_H = E⟨η, y₁(0)⟩_H.

This verifies the second equality in (13.93).

From Proposition 13.27, we immediately obtain the following result, which gives a representation for the cost functional (13.3).


Proposition 13.28. The cost functional (13.3) can be represented as

  J(η; u(·)) = ½ [E∫₀ᵀ (⟨Nu, u⟩_U + 2⟨H(η), u⟩_U) dt + M(η)].            (13.95)

Here

  N = R + F₁*MF₁ + F̂₁*GF̂₁,
  H(η) = (F₁*MF₂η)(·) + (F̂₁*GF̂₂η)(·),                                  (13.96)
  M(η) = ⟨MF₂η, F₂η⟩_{L²_F(0,T;H)} + ⟨GF̂₂η, F̂₂η⟩_{L²_{F_T}(Ω;H)}.
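A finite-dimensional, deterministic caricature of this reduction illustrates the mechanism (the discrete dynamics x_{k+1} = A x_k + B u_k and weights below are made-up stand-ins for (13.1) and (13.3), not the actual stochastic evolution equation): build the linear maps u ↦ x, assemble N and H(η) from them, and check that the quadratic form (13.95) reproduces the cost computed directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, Nsteps = 3, 2, 10
A = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
M, R, G = np.eye(n), np.eye(m), np.eye(n)   # weights (illustrative)
eta = rng.standard_normal(n)
u = rng.standard_normal(Nsteps * m)

def states(eta0, uvec):
    """Stacked x_k, k = 1..Nsteps, for x_{k+1} = A x_k + B u_k, x_0 = eta0."""
    x, out = eta0.copy(), []
    for k in range(Nsteps):
        x = A @ x + B @ uvec[k * m:(k + 1) * m]
        out.append(x.copy())
    return np.concatenate(out)

def cost(eta0, uvec):
    """Direct cost: 1/2 sum_k (<M x_k, x_k> + <R u_k, u_k>) + 1/2 <G x_N, x_N>."""
    xs = states(eta0, uvec).reshape(Nsteps, n)
    J = 0.5 * sum(xs[k] @ M @ xs[k] + uvec[k*m:(k+1)*m] @ R @ uvec[k*m:(k+1)*m]
                  for k in range(Nsteps))
    return J + 0.5 * xs[-1] @ G @ xs[-1]

# Matrices of the discrete analogues of F1, F1hat, F2, F2hat
F1 = np.column_stack([states(np.zeros(n), e) for e in np.eye(Nsteps * m)])
F1h = F1[-n:, :]
F2 = np.column_stack([states(e, np.zeros(Nsteps * m)) for e in np.eye(n)])
F2h = F2[-n:, :]

Mbig = np.kron(np.eye(Nsteps), M)
Nop = np.kron(np.eye(Nsteps), R) + F1.T @ Mbig @ F1 + F1h.T @ G @ F1h
H = F1.T @ Mbig @ F2 @ eta + F1h.T @ G @ F2h @ eta
Mfun = eta @ F2.T @ Mbig @ F2 @ eta + eta @ F2h.T @ G @ F2h @ eta

J_quadratic = 0.5 * (u @ Nop @ u + 2 * H @ u + Mfun)
print(abs(J_quadratic - cost(eta, u)))   # representation (13.95): difference ~ 0
```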

As a corollary of Proposition 13.28, we have the following result on the finiteness and solvability of Problem (SLQ).

Theorem 13.29. 1) If Problem (SLQ) is finite at some η ∈ H, then

  N ≥ 0.                                                               (13.97)

2) Problem (SLQ) is solvable at η ∈ H if and only if N ≥ 0 and there exists ū(·) ∈ L²_F(0,T; U) such that

  Nū(·) + H(η) = 0.                                                    (13.98)

In this case, ū(·) is an optimal control.

3) If N ≫ 0, then for any η ∈ H, J(η; ·) admits a unique minimizer

  ū(·) = −N⁻¹H(η).                                                     (13.99)

In this case, it holds that

  inf_{u(·) ∈ L²_F(0,T;U)} J(η; u(·)) = J(η; ū(·))
  = ½ (M(η) − ⟨N⁻¹H(η), H(η)⟩_{L²_F(0,T;U)}),   ∀ η ∈ H.                (13.100)

Proof: Proof of the assertion 1). Suppose that (13.97) does not hold. Then there is u₀ ∈ L²_F(0,T; U) such that

  E∫₀ᵀ ⟨Nu₀(·), u₀(·)⟩_U ds < 0.

Define a sequence {u_k}_{k=1}^∞ ⊂ L²_F(0,T; U) by u_k(·) = k u₀(·) in [0, T]. Then

  J(η; u_k(·)) → −∞  as k → ∞,


which contradicts that Problem (SLQ) is finite at η. “Proof of the assertion 2)”. The “if” part. Let u ¯(·) ∈ L2F (0, T ; U ) be an optimal control of Problem (SLQ) for η ∈ H. By the optimality of u ¯(·), for any u(·) ∈ L2F (0, T ; U ), it holds that ) 1( J (η; u ¯(·) + λu(·)) − J (η; u ¯(·)) λ→0 λ ∫ T =E ⟨N u ¯(·) + H(η), u(·)⟩U dt.

0 ≤ lim

0

Consequently, Nu ¯(·) + H(η) = 0. The “only if” part. Let (η, u ¯(·)) ∈ H × L2F (0, T ; U ) satisfy (13.98). For 2 any u ∈ LF (0, T ; U ), from (13.97), we see ( ) J (η; u(·)) − J (η; u ¯(·)) = J η; u ¯(·) + u(·) − u ¯(·) − J (η; u ¯(·)) ∫ T ⟨ ⟩ =E Nu ¯(·) + H(η), u(·) − u ¯(·) U dt 0

∫ T ⟨ ( ) ⟩ 1 + E N u(·) − u ¯(·) , u(·) − u ¯(·) U dt 2 0 ∫ T ⟨ ( ) ⟩ 1 = E N u(·) − u ¯(·) , u(·) − u ¯(·) U dt ≥ 0. 2 0 This concludes that u ¯(·) is an optimal control. “Proof of the assertion 3)”. Since all optimal controls should satisfy (13.98) and N is invertible, we get assertion 3) immediately. Clearly, if Problem (SLQ) is standard, then N ≫ 0, and by Theorem 13.29, it is uniquely solvable and the optimal control is given by (13.99). There is a main drawback of the formula (13.99), that is, it is very difficult to compute the inverse of the operator N .
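In finite dimensions the content of assertion 3) is elementary linear algebra, and the difficulty with N⁻¹ disappears. The following minimal sketch (all matrices and vectors invented for illustration; numpy assumed) checks that ū = −N⁻¹H(η) minimizes a quadratic functional of the form (13.95) and attains the value given by (13.100).

```python
import numpy as np

# Finite-dimensional sketch of Theorem 13.29, assertion 3): if N is a positive
# definite matrix, the quadratic functional
#     J(u) = 0.5 * (<N u, u> + 2 <H_eta, u> + M_eta)
# has the unique minimizer u_bar = -N^{-1} H_eta, with minimum value
#     0.5 * (M_eta - <N^{-1} H_eta, H_eta>),
# the counterparts of (13.99)-(13.100).  All data below are illustrative.

def J(u, N, H_eta, M_eta):
    """Quadratic cost, cf. the representation (13.95)."""
    return 0.5 * (u @ N @ u + 2.0 * H_eta @ u + M_eta)

def minimizer(N, H_eta, M_eta):
    """Optimal control and optimal value, cf. (13.99)-(13.100)."""
    u_bar = -np.linalg.solve(N, H_eta)
    return u_bar, 0.5 * (M_eta + H_eta @ u_bar)

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
N = S @ S.T + 4.0 * np.eye(4)        # N >> 0
H_eta = rng.standard_normal(4)
M_eta = 1.7

u_bar, J_min = minimizer(N, H_eta, M_eta)
```

Any perturbation of ū strictly increases J, since the Hessian N is positive definite.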

13.5 Pontryagin-Type Maximum Principle for Problem (SLQ)

In this section, we shall derive the following Pontryagin-type maximum principle for Problem (SLQ).

Theorem 13.30. Let Problem (SLQ) be solvable at η ∈ H with (x̄(·), ū(·)) being an optimal pair. Then, for the transposition solution (y(·), Y(·)) to

  dy = −(A*y + C*Y − Mx̄)dt + Y dW(t) in [0,T),  y(T) = −Gx̄(T),   (13.101)

and the relaxed transposition solution (P(·), Q^{(·)}, Q̂^{(·)}) to

  dP = −(A*P + PA + C*PC + QC + C*Q − M)dt + Q dW(t) in [0,T),  P(T) = −G,   (13.102)

it holds that

  R(t)ū(t) − B(t)*y(t) − D(t)*Y(t) = 0,  a.e. (t,ω) ∈ [0,T]×Ω   (13.103)

and

  R(t) − D(t)*P(t)D(t) ≥ 0,  a.e. (t,ω) ∈ [0,T]×Ω.   (13.104)

Proof: Since the assumption (S2) in Chapter 12 is not satisfied, we cannot apply Theorem 12.17 to Problem (SLQ) directly. The proof is divided into two steps.

Step 1. In this step, we prove (13.103) by the convex perturbation technique. For the optimal pair (x̄(·), ū(·)) and a control u(·) ∈ L²_F(0,T;U), we have

  u^ε(·) = ū(·) + ε(u(·) − ū(·)) = (1−ε)ū(·) + εu(·) ∈ L²_F(0,T;U),  ∀ ε ∈ [0,1].

Denote by x^ε(·) the solution of (13.1) corresponding to the control u^ε(·). It is easy to see that

  |x^ε|_{C_F([0,T];L²(Ω;H))} ≤ C(1 + |η|_H),  ∀ ε ∈ [0,1].   (13.105)

Write x₁^ε(·) = (1/ε)(x^ε(·) − x̄(·)) and δu(·) = u(·) − ū(·). Then x₁^ε(·) solves the following stochastic evolution equation:

  dx₁^ε = (Ax₁^ε + Bδu)dt + (Cx₁^ε + Dδu)dW(t) in (0,T],  x₁^ε(0) = 0.   (13.106)

Since (x̄(·), ū(·)) is an optimal pair of Problem (SLQ), we have

  0 ≤ lim_{ε→0} (1/ε)( J(η; u^ε(·)) − J(η; ū(·)) )
   = E∫₀ᵀ( ⟨Mx̄, x₁^ε⟩_H + ⟨Rū, δu⟩_U )dt + E⟨Gx̄(T), x₁^ε(T)⟩_H.   (13.107)

It follows from the definition of the transposition solution to (13.101) that

  −E⟨Gx̄(T), x₁^ε(T)⟩_H − E∫₀ᵀ⟨Mx̄, x₁^ε⟩_H dt = E∫₀ᵀ( ⟨Bδu, y⟩_H + ⟨Dδu, Y⟩_{L⁰₂} )dt.   (13.108)

Combining (13.107) and (13.108), we find that

  E∫₀ᵀ⟨Rū − B*y − D*Y, u − ū⟩_U dt ≥ 0,  ∀ u(·) ∈ L²_F(0,T;U).

Hence, by Lemma 12.3, we conclude that

  ⟨R(t)ū(t) − B(t)*y(t) − D(t)*Y(t), u − ū(t)⟩_U ≥ 0,  a.e. [0,T]×Ω, ∀ u ∈ U.   (13.109)

This implies (13.103).

Step 2. In this step, we prove (13.104) by the spike variation method. For each ε > 0 and τ ∈ [0, T−ε), let E_ε = [τ, τ+ε]. Let u ∈ L²_F(0,T;U) be such that |ū − u|_{L^∞_F(0,T;U)} < +∞. Put

  u^ε(t) = ū(t), if t ∈ [0,T]∖E_ε;  u^ε(t) = u(t), if t ∈ E_ε.   (13.110)

Let x^ε(·) be the solution to (13.1) corresponding to the control u^ε(·). Consider the following two stochastic evolution equations:

  dx₂^ε = Ax₂^ε dt + (Cx₂^ε + χ_{E_ε}Dδu)dW(t) in (0,T],  x₂^ε(0) = 0

and

  dx₃^ε = (Ax₃^ε + χ_{E_ε}Bδu)dt + Cx₃^ε dW(t) in (0,T],  x₃^ε(0) = 0.

Clearly, x^ε − x̄ = x₂^ε + x₃^ε. Similarly to the proof of (12.181), we can obtain that

  |x₂^ε(·)|_{L^∞_F(0,T;L²(Ω;H))} ≤ C√ε,  |x₃^ε(·)|_{L^∞_F(0,T;L²(Ω;H))} ≤ Cε.   (13.111)

Similarly to the proof of (12.208), we can get that

  J(η; u^ε(·)) − J(η; ū(·))
   = E∫₀ᵀ[ ⟨Mx̄, x₂^ε + x₃^ε⟩_H + (1/2)⟨Mx₂^ε, x₂^ε⟩_H + χ_{E_ε}( ⟨Rū, δu⟩_U + (1/2)⟨Rδu, δu⟩_U ) ]dt
    + E⟨Gx̄(T), x₂^ε(T) + x₃^ε(T)⟩_H + (1/2)E⟨Gx₂^ε(T), x₂^ε(T)⟩_H + o(ε).   (13.112)


It follows from the definition of the transposition solution to (13.101) that

  −E⟨Gx̄(T), x₂^ε(T) + x₃^ε(T)⟩_H − E∫₀ᵀ⟨Mx̄, x₂^ε + x₃^ε⟩_H dt
   = E∫₀ᵀχ_{E_ε}( ⟨Bδu, y⟩_H + ⟨Dδu, Y⟩_{L⁰₂} )dt.   (13.113)

By the definition of the relaxed transposition solution to (13.102), we obtain that

  −E⟨Gx₂^ε(T), x₂^ε(T)⟩_H − E∫₀ᵀ⟨Mx₂^ε, x₂^ε⟩_H dt
   = E∫₀ᵀ⟨χ_{E_ε}Cx₂^ε, P*Dδu⟩_{L⁰₂}dt + E∫₀ᵀ⟨χ_{E_ε}PDδu, Cx₂^ε⟩_{L⁰₂}dt
    + E∫₀ᵀ⟨χ_{E_ε}PDδu, Dδu⟩_{L⁰₂}dt + E∫₀ᵀ⟨χ_{E_ε}Dδu, Q̂⁽⁰⁾(0,0,χ_{E_ε}Dδu)⟩_{L⁰₂}dt   (13.114)
    + E∫₀ᵀ⟨χ_{E_ε}Q⁽⁰⁾(0,0,χ_{E_ε}Dδu), Dδu⟩_{L⁰₂}dt.

From (13.111), we have that

  E∫₀ᵀ⟨χ_{E_ε}Cx₂^ε, P*χ_{E_ε}Dδu⟩_{L⁰₂}dt + E∫₀ᵀ⟨χ_{E_ε}Pχ_{E_ε}Dδu, Cx₂^ε⟩_{L⁰₂}dt = o(ε).   (13.115)

Similarly to the proof of (12.227), one can show that

  E∫₀ᵀ⟨χ_{E_ε}Dδu, Q̂⁽⁰⁾(0,0,χ_{E_ε}Dδu)⟩_{L⁰₂}dt + E∫₀ᵀ⟨χ_{E_ε}Q⁽⁰⁾(0,0,χ_{E_ε}Dδu), Dδu⟩_{L⁰₂}dt = o(ε),   (13.116)

as ε → 0. By (13.112)–(13.116), we obtain

  J(η; u^ε(·)) − J(η; ū(·))
   = E∫₀ᵀχ_{E_ε}(t)⟨Rū − B*y − D*Y, δu⟩_U dt + (1/2)E∫₀ᵀχ_{E_ε}( ⟨Rδu, δu⟩_U − ⟨PDδu, Dδu⟩_{L⁰₂} )dt + o(ε)
   = (1/2)E∫₀ᵀχ_{E_ε}(t)⟨(R − D*PD)δu, δu⟩_U dt + o(ε).


Since ū(·) is an optimal control, J(η; u^ε(·)) − J(η; ū(·)) ≥ 0. Thus,

  (1/2)E∫₀ᵀχ_{E_ε}⟨(R − D*PD)δu, δu⟩_U dt ≥ o(ε),   (13.117)

as ε → 0. By (13.117), similarly to Step 7 in the proof of Theorem 12.17, we can show that for all u ∈ U,

  ⟨(R − D*PD)u, u⟩_U ≥ 0,  a.e. (t,ω) ∈ [0,T]×Ω,

which gives (13.104). This completes the proof of Theorem 13.30.

Next, we introduce the following decoupled forward-backward stochastic evolution equation:

  dx = (Ax + Bu)dt + (Cx + Du)dW(t) in (0,T],
  dy = −(A*y − Mx + C*Y)dt + Y dW(t) in [0,T),   (13.118)
  x(0) = η,  y(T) = −Gx(T).

We call (x(·), y(·), Y(·)) a transposition solution to the equation (13.118) if x(·) is the mild solution to the forward stochastic evolution equation and (y(·), Y(·)) is the transposition solution of the backward one. Since the equation (13.118) is decoupled, its well-posedness is easy to obtain: given η ∈ H and u(·) ∈ L²_F(0,T;U), one can first solve the forward equation to get x(·), and then solve the backward one. Consequently, the equation (13.118) admits a unique transposition solution (x(·), y(·), Y(·)) corresponding to η and u(·).

The following result is a consequence of Proposition 13.27.

Proposition 13.31. For any (η, u(·)) ∈ H × L²_F(0,T;U), let (x(·), y(·), Y(·)) be the transposition solution to (13.118). Then

  (Nu + H(η))(t) = Ru(t) − B*y(t) − D*Y(t),  a.e. (t,ω) ∈ [0,T]×Ω.   (13.119)

In particular, if (x₀(·), y₀(·), Y₀(·)) is the transposition solution to (13.118) with η = 0, then

  (Nu)(t) = R(t)u(t) − B(t)*y₀(t) − D(t)*Y₀(t),  a.e. (t,ω) ∈ [0,T]×Ω.   (13.120)
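Before the proof, a deterministic finite-dimensional sketch of Proposition 13.31 (a simplification: C = D = 0, so Y ≡ 0 and the noise drops out; all matrices invented): given u, solve the forward equation, then the backward one, and read off (Nu)(t) = Ru(t) − Bᵀy(t). Since H(0) = 0 and M(0) = 0, the representation (13.95) with η = 0 gives the verifiable identity ∫⟨Nu,u⟩dt = ∫(xᵀMx + uᵀRu)dt + x(T)ᵀGx(T).

```python
import numpy as np

# Deterministic sketch of Proposition 13.31 with C = D = 0 (hence Y = 0):
# forward   x' = A x + B u,        x(0) = 0,
# backward  y' = -A^T y + M x,     y(T) = -G x(T),
# then (N u)(t) = R u(t) - B^T y(t), cf. (13.120), and by (13.95) with eta = 0,
#   int <N u, u> dt = int (x^T M x + u^T R u) dt + x(T)^T G x(T).

n_t, T = 4000, 1.0
dt = T / n_t
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)
R = np.array([[1.0]])
G = 0.5 * np.eye(2)

t = np.linspace(0.0, T, n_t + 1)
u = np.sin(2.0 * np.pi * t)[:, None]          # an arbitrary control path

x = np.zeros((n_t + 1, 2))                    # forward Euler
for k in range(n_t):
    x[k + 1] = x[k] + dt * (A @ x[k] + B @ u[k])

y = np.zeros((n_t + 1, 2))                    # backward Euler
y[n_t] = -G @ x[n_t]
for k in range(n_t, 0, -1):
    y[k - 1] = y[k] - dt * (-A.T @ y[k] + M @ x[k])

Nu = u @ R.T - y @ B                          # (N u)(t) = R u(t) - B^T y(t)

lhs = dt * np.sum(Nu * u)
rhs = dt * (np.einsum('ki,ij,kj->', x, M, x)
            + np.einsum('ki,ij,kj->', u, R, u)) + x[n_t] @ G @ x[n_t]
```

The two Riemann sums agree up to the Euler discretization error, which numerically confirms the finite-dimensional analogue of (13.120).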

Proof: Let (x(·), y(·), Y(·)) be the transposition solution of (13.118). From (13.96), we obtain that

  (Nu + H(η))(·) = [(R + F₁*MF₁ + F̂₁*GF̂₁)u](·) + (F₁*MF₂η)(·) + (F̂₁*GF̂₂η)(·)
   = (Ru)(·) + F₁*M[(F₂η)(·) + F₁u(·)] + F̂₁*G(F̂₂η + F̂₁u)   (13.121)
   = (Ru)(·) + (F₁*Mx)(·) + (F̂₁*Gx(T))(·).


By (13.92) and (13.93), we know that

  (F₁*Mx)(·) + (F̂₁*Gx(T))(·) = −B(·)*y(·) − D(·)*Y(·).

This, together with (13.121), implies (13.119) immediately. Finally, (13.120) follows from (13.119) as a special case.

The following result (for which the conditions are given in terms of the transposition solution to the forward-backward stochastic evolution equation (13.118)) is an immediate corollary of Theorem 13.29.

Theorem 13.32. Problem (SLQ) is solvable at η ∈ H with an optimal pair (x̄(·), ū(·)) if and only if the following two conditions hold:

1) The unique transposition solution (x(·), y(·), Y(·)) to (13.118) with u(·) = ū(·) verifies that x(·) = x̄(·) and

  R(t)ū(t) − B(t)*y(t) − D(t)*Y(t) = 0,  a.e. (t,ω) ∈ [0,T]×Ω.   (13.122)

2) For any u(·) ∈ L²_F(0,T;U), the unique transposition solution (x₀(·), y₀(·), Y₀(·)) to (13.118) with η = 0 satisfies

  E∫₀ᵀ⟨Ru − B*y₀ − D*Y₀, u⟩_U dt ≥ 0.   (13.123)

Proof: The “only if” part. The equality (13.122) follows from Theorem 13.30, while (13.123) follows from Theorem 13.29 and (13.120).

The “if” part. By Proposition 13.31, the inequality (13.123) is equivalent to N ≥ 0. Now, let (x̄(·), y(·), Y(·)) be a transposition solution to the equation (13.118) such that (13.122) holds. Then, by Proposition 13.31, we see that (13.122) is the same as (13.98). Hence, by Theorem 13.29, Problem (SLQ) is solvable.

Theorem 13.32 is nothing but a restatement of Theorem 13.29. However, when R(t) is invertible for all t and

  R(·)⁻¹ ∈ L^∞_F(0,T;L(U)),   (13.124)

it gives us a way to find the optimal control by solving the following coupled forward-backward stochastic evolution equation:

  dx̄ = (Ax̄ + BR⁻¹B*y + BR⁻¹D*Y)dt + (Cx̄ + DR⁻¹B*y + DR⁻¹D*Y)dW(t) in (0,T],
  dy = −(A*y − Mx̄ + C*Y)dt + Y dW(t) in [0,T),   (13.125)
  x̄(0) = η,  y(T) = −Gx̄(T).

As a direct consequence of Theorem 13.32, we have the following result.


Corollary 13.33. Let (13.124) hold and N ≥ 0. Then Problem (SLQ) is uniquely solvable at η ∈ H if and only if the forward-backward stochastic evolution equation (13.125) admits a unique transposition solution (x̄(·), y(·), Y(·)). In this case, the optimal control is given by

  ū(t) = R(t)⁻¹[B(t)*y(t) + D(t)*Y(t)],  a.e. (t,ω) ∈ [0,T]×Ω.   (13.126)

When Problem (SLQ) is a standard SLQ problem, (13.124) holds and N ≫ 0. Consequently, we have the following result.

Corollary 13.34. If Problem (SLQ) is standard, then the equation (13.125) admits a unique transposition solution (x̄(·), y(·), Y(·)) and Problem (SLQ) is uniquely solvable, with the optimal control given by (13.126).
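A deterministic finite-dimensional sketch of the decoupling behind Corollary 13.33 (a simplification only: C = D = 0, so (13.125) loses its stochastic terms and Y ≡ 0; all matrices invented): the coupled system x' = Ax + BR⁻¹Bᵀy, y' = −Aᵀy + Mx, y(T) = −Gx(T) is decoupled by the classical ansatz y = −Px with P solving the matrix Riccati equation −P' = AᵀP + PA − PBR⁻¹BᵀP + M, P(T) = G; the feedback form of (13.126) becomes u = R⁻¹Bᵀy = −R⁻¹BᵀPx.

```python
import numpy as np

# Integrate the Riccati equation backward, close the loop with
# u = -R^{-1} B^T P x (the deterministic reduction of (13.126)), and compare
# the resulting cost with the predicted value 0.5 <P(0) eta, eta> and with the
# zero control.  All data are illustrative.

n_t, T = 4000, 1.0
dt = T / n_t
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)
G = 0.5 * np.eye(2)
eta = np.array([1.0, 0.0])

P = np.zeros((n_t + 1, 2, 2))                 # backward Euler for Riccati
P[n_t] = G
for k in range(n_t, 0, -1):
    Pk = P[k]
    P[k - 1] = Pk + dt * (A.T @ Pk + Pk @ A - Pk @ B @ Rinv @ B.T @ Pk + M)

def cost(feedback):
    """Simulate x' = A x + B u and accumulate the quadratic cost."""
    x, c = eta.copy(), 0.0
    for k in range(n_t):
        u = -Rinv @ B.T @ P[k] @ x if feedback else np.zeros(1)
        c += 0.5 * dt * (x @ M @ x + u @ R @ u)
        x = x + dt * (A @ x + B @ u)
    return c + 0.5 * x @ G @ x

J_fb, J_zero = cost(True), cost(False)
value = 0.5 * eta @ P[0] @ eta                # predicted optimal value
```

Up to discretization error, the closed-loop cost matches ½⟨P(0)η,η⟩ and beats the open-loop zero control.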

13.6 Transposition Solutions to Operator-Valued Backward Stochastic Riccati Equations

From this section to Section 13.8, we shall study the relationship between the existence of optimal feedback controls for Problem (SLQ) and the global solvability of the operator-valued backward stochastic Riccati equation (13.17).

Though the equations (13.17) and (13.47) take the same form, there is an essential difference between them. Indeed, similarly to the relationship between the equations (12.8) and (12.6) (in Chapter 12), (13.47) is an ℝ^{n×n} (matrix)-valued backward stochastic differential equation (which can easily be regarded as an ℝ^{n²}-valued backward stochastic differential equation), and therefore the desired well-posedness follows from that for backward stochastic differential equations valued in ℝ^{n²}. Also, as mentioned before, there exists no stochastic integration/evolution equation theory in general Banach spaces that can be employed to treat the well-posedness of (13.17) in the usual sense. On the other hand, compared with the linear operator-valued backward stochastic evolution equation (12.8), it is clear that (13.17) is a nonlinear operator-valued backward stochastic evolution equation with quadratic nonlinearities.

Generally speaking, in order to study difficult (deterministic or stochastic) nonlinear partial differential equations, one needs to introduce suitable new concepts of solution, such as viscosity solutions for Hamilton-Jacobi equations ([64]) and fully nonlinear second-order equations ([213]), and renormalized solutions for the KPZ equation ([136]). In order to overcome the difficulties mentioned above, similarly to Chapter 12, we employ the transposition method to study (13.17). More precisely, we introduce another type of solution to this equation, i.e., the transposition solution.
To this end, let us introduce the following assumptions:

(AS1) The eigenvectors {e_j}_{j=1}^∞ of A, with |e_j|_H = 1 for all j ∈ ℕ, constitute an orthonormal basis of H.


Let {µ_j}_{j=1}^∞ (corresponding to {e_j}_{j=1}^∞) be the eigenvalues of A. Let {λ_j}_{j=1}^∞ ∈ ℓ² be an arbitrarily given sequence of positive real numbers. Define a norm |·|_{H_λ} on H as follows:

  |h|_{H_λ} = ( Σ_{j=1}^∞ λ_j²|e_j|_{D(A)}^{-2} h_j² )^{1/2},  ∀ h = Σ_{j=1}^∞ h_j e_j ∈ H.

Denote by H_λ the completion of H with respect to this norm. Clearly, H_λ is a Hilbert space, H ⊂ H_λ, and {λ_j^{-1}|e_j|_{D(A)} e_j}_{j=1}^∞ is an orthonormal basis of H_λ. Write V_H for the set of all such Hilbert spaces.

Write H_λ' for the dual space of H_λ with respect to the pivot space H ≡ H'. For any H_λ ∈ V_H, from the definition of H_λ, it is easy to see that {λ_j|e_j|_{D(A)}^{-1} e_j}_{j=1}^∞ ⊂ H_λ' is an orthonormal basis of H_λ' and the norm on H_λ' is given by

  |ξ|_{H_λ'} = ( Σ_{j=1}^∞ ξ_j²|e_j|_{D(A)}² λ_j^{-2} )^{1/2},  ∀ ξ ∈ H_λ',

where ξ_j = ⟨ξ, e_j⟩_H.

We also need the following technical condition:

(AS2) There exists H_λ ∈ V_H such that C ∈ L^∞_F(0,T;L(H_λ;L₂(V;H_λ))), M ∈ L^∞_F(0,T;L(H_λ)), G ∈ L^∞_{F_T}(Ω;L(H_λ)) and C ∈ L^∞_F(0,T;L(H_λ';L₂(V;H_λ'))), M ∈ L^∞_F(0,T;L(H_λ')), G ∈ L^∞_{F_T}(Ω;L(H_λ')).

Remark 13.35. In (AS2), by C ∈ L^∞_F(0,T;L(H_λ;L₂(V;H_λ))) we mean that for a.e. (t,ω) ∈ [0,T]×Ω, C(t,ω) can be extended to a bounded linear operator C_{H_λ}(t,ω) from H_λ to L₂(V;H_λ) and, after the extension, C_{H_λ} ∈ L^∞_F(0,T;L(H_λ;L₂(V;H_λ))). For simplicity of notation, we still denote the extension by C if there is no confusion. On the other hand, C ∈ L^∞_F(0,T;L(H_λ';L₂(V;H_λ'))) means that for a.e. (t,ω) ∈ [0,T]×Ω, the restriction of C(t,ω) to H_λ' (denoted by C_{H_λ'}(t,ω)) belongs to L(H_λ';L₂(V;H_λ')) and C_{H_λ'} ∈ L^∞_F(0,T;L(H_λ';L₂(V;H_λ'))). The other notations in (AS2) can be understood in a similar way.

Lemma 13.36. Let H_λ ∈ V_H and (AS2) hold. If {S(t)}_{t∈ℝ} is a C₀-group (resp. C₀-semigroup) on H, then it is a C₀-group (resp. C₀-semigroup) on H_λ', and it can be uniquely extended to a C₀-group (resp. C₀-semigroup) (also denoted by itself) on H_λ.

Proof: We only prove that {S(t)}_{t∈ℝ} is a C₀-group on H_λ'. The proofs of the other conclusions are similar.

Let ξ = Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1} e_j ∈ H_λ' with {ξ_j}_{j=1}^∞ ∈ ℓ². Then ξ̃ = Σ_{j=1}^∞ ξ_j e_j ∈ H and |ξ̃|_H = |ξ|_{H_λ'}. Clearly,


  S(t)ξ = Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1} e^{µ_j t} e_j.

For any t₁, t₂ ∈ ℝ,

  S(t₂)S(t₁)ξ = Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1} S(t₂)S(t₁)e_j
   = Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1} e^{µ_j(t₁+t₂)} e_j = S(t₂+t₁)ξ.   (13.127)

This indicates that {S(t)}_{t∈ℝ} is a group on H_λ'. For any t₂ > t₁ > 0,

  |(S(t₂) − S(t₁))ξ|_{H_λ'} = | Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1}(S(t₂) − S(t₁))e_j |_{H_λ'}
   = | Σ_{j=1}^∞ ξ_j λ_j|e_j|_{D(A)}^{-1}(e^{µ_j t₂} − e^{µ_j t₁})e_j |_{H_λ'}
   = [ Σ_{j=1}^∞ ξ_j²(e^{µ_j t₂} − e^{µ_j t₁})² ]^{1/2} = |(S(t₂) − S(t₁))ξ̃|_H.

This, together with the strong continuity of {S(t)}_{t∈ℝ} on H, implies that {S(t)}_{t∈ℝ} is strongly continuous on H_λ'.

Next, similarly to Subsection 12.4.1, we need to introduce some more concepts/notations. Beginners may skip this part and only consider the case V = ℝ in the rest of this chapter.

Let {ẽ_j}_{j=1}^∞ be an orthonormal basis of V. Any Λ ∈ L(L₂(V;H_λ');H_λ) induces a bounded bilinear functional Ψ(·,·) on H_λ' × L₂(V;H_λ') as follows:

  Ψ(h,v) ≜ Σ_{j=1}^∞ ⟨Λ(ẽ_j ⊗ h), vẽ_j⟩_{H_λ,H_λ'},  ∀ h ∈ H_λ', v ∈ L₂(V;H_λ').   (13.128)

Here, ẽ_j ⊗ h is defined by (2.18) (clearly, ẽ_j ⊗ h ∈ L₂(V;H_λ'), and therefore Λ(ẽ_j ⊗ h) ∈ H_λ). Define a bounded linear operator Λ̃: H_λ' → L₂(V;H_λ) by

  Ψ(h,v) = ⟨Λ̃h, v⟩_{L₂(V;H_λ),L₂(V;H_λ')},  ∀ h ∈ H_λ', v ∈ L₂(V;H_λ').

It follows from (12.34) that

  ⟨Λ̃h, v⟩_{L₂(V;H_λ),L₂(V;H_λ')} = Σ_{j=1}^∞ ⟨Λ(ẽ_j ⊗ h), vẽ_j⟩_{H_λ,H_λ'},  ∀ h ∈ H_λ', v ∈ L₂(V;H_λ').   (13.129)


On the other hand, any Λ ∈ L(H_λ';L₂(V;H_λ)) induces a bounded linear operator Λ̃: L₂(V;H_λ') → H_λ as follows:

  Λ̃( Σ_{i,j=1}^∞ a_{ij} ẽ_i ⊗ h_j ) ≜ Σ_{i,j=1}^∞ a_{ij}(Λh_j)ẽ_i,  ∀ a_{ij} ∈ ℂ with Σ_{i,j=1}^∞ |a_{ij}|² < ∞.   (13.130)

Clearly, if Λ̃ ∈ L(L₂(V;H_λ');H_λ), then

  Λ̃(v ⊗ h) = (Λh)v,  ∀ (v,h) ∈ V × H_λ'.   (13.131)

In this case, we say that Λ induces an operator Λ̃ ∈ L(L₂(V;H_λ');H_λ).

Now, suppose that Λ ∈ L(L₂(V;H_λ');H_λ) induces an operator Λ̃ ∈ L(H_λ';L₂(V;H_λ)) and that Λ̃ in turn induces an operator Λ̄ ∈ L(L₂(V;H_λ');H_λ). Then Λ̄ = Λ. Conversely, suppose that Λ ∈ L(H_λ';L₂(V;H_λ)) induces an operator Λ̃ ∈ L(L₂(V;H_λ');H_λ) and that Λ̃ in turn induces an operator Λ̄ ∈ L(H_λ';L₂(V;H_λ)). Then Λ̄ = Λ.

We shall need the following result.

Proposition 13.37. Any Ξ ∈ L₂(V;L₂(H_λ';H_λ)) induces (uniquely) an operator Λ ∈ L₂(L₂(V;H_λ');H_λ) and an operator Λ̃ ∈ L₂(H_λ';L₂(V;H_λ)) satisfying (13.130) and

  Λ(v ⊗ h) = (Ξv)h,  ∀ (v,h) ∈ V × H_λ'.   (13.132)

Moreover,

  |Λ|_{L₂(L₂(V;H_λ');H_λ)} + |Λ̃|_{L₂(H_λ';L₂(V;H_λ))} ≤ C|Ξ|_{L₂(V;L₂(H_λ';H_λ))}.

Proof: As in (12.36), we define a linear operator Λ from L₂(V;H_λ') to H_λ by

  Λ( Σ_{i,j=1}^∞ a_{ij} ẽ_i ⊗ h_j ) ≜ Σ_{i,j=1}^∞ a_{ij}(Ξẽ_i)h_j,  ∀ a_{ij} ∈ ℂ with Σ_{i,j=1}^∞ |a_{ij}|² < ∞.

Since Ξ ∈ L₂(V;L₂(H_λ';H_λ)), it is easy to check that Λ ∈ L₂(L₂(V;H_λ');H_λ) and

  |Λ|_{L₂(L₂(V;H_λ');H_λ)} ≤ C|Ξ|_{L₂(V;L₂(H_λ';H_λ))}.

Then Λ induces a bounded linear operator Λ̃ ∈ L₂(H_λ';L₂(V;H_λ)) satisfying (13.130) and

  |Λ̃|_{L₂(H_λ';L₂(V;H_λ))} ≤ C|Λ|_{L₂(L₂(V;H_λ');H_λ)}.

This completes the proof of Proposition 13.37.

Now, let us consider the following two (forward) stochastic evolution equations:


  dx₁ = (Ax₁ + u₁)dτ + (Cx₁ + v₁)dW(τ) in (t,T],  x₁(t) = ξ₁   (13.133)

and

  dx₂ = (Ax₂ + u₂)dτ + (Cx₂ + v₂)dW(τ) in (t,T],  x₂(t) = ξ₂.   (13.134)

Here t ∈ [0,T), ξ₁, ξ₂ are suitable random variables and u₁, u₂, v₁, v₂ are suitable stochastic processes. By Lemma 13.36 and Theorem 3.20, we obtain the following result immediately.

Corollary 13.38. Let (AS1)–(AS2) hold. Then, for j = 1, 2 and for any ξ_j ∈ L⁴_{F_t}(Ω;H_λ'), u_j(·) ∈ L⁴_F(Ω;L²(t,T;H_λ')) and v_j(·) ∈ L⁴_F(Ω;L²(t,T;L₂(V;H_λ'))), the mild solution x₁(·) (resp. x₂(·)) to (13.133) (resp. (13.134)) belongs to C_F([t,T];L⁴(Ω;H_λ')).

Put

  C_{F,w}([0,T];L^∞(Ω;L(H)))
   ≜ { P ∈ Υ₂(H) | P(t,ω) ∈ S(H) a.e. (t,ω) ∈ [0,T]×Ω, |P(·)|_{L(H)} ∈ L^∞_F(0,T),
      and P(·)ζ ∈ C_F([0,T];L^∞(Ω;H)) for all ζ ∈ H }

and

  L²_{F,w}(0,T;L(H))
   ≜ { Λ ∈ L²_F(0,T;L₂(L₂(V;H_λ');H_λ)) | D*Λ ∈ Υ₂(U;H), a.e. (t,ω) ∈ [0,T]×Ω }.

Now, we introduce the notion of transposition solution to (13.17):

Definition 13.39. A pair of operator-valued stochastic processes (P(·), Λ(·)) ∈ C_{F,w}([0,T];L^∞(Ω;L(H))) × L²_{F,w}(0,T;L(H)) is called a transposition solution to (13.17) if the following three conditions hold:

1) K(t,ω) ≡ R(t,ω) + D(t,ω)*P(t,ω)D(t,ω) > 0 and its left inverse K(t,ω)⁻¹ is a densely defined closed operator for a.e. (t,ω) ∈ [0,T]×Ω;

2) For any t ∈ [0,T], ξ₁, ξ₂ ∈ L⁴_{F_t}(Ω;H_λ'), u₁(·), u₂(·) ∈ L⁴_F(Ω;L²(t,T;H_λ')) and v₁(·), v₂(·) ∈ L⁴_F(Ω;L²(t,T;L₂(V;H_λ'))), it holds that

  E⟨Gx₁(T), x₂(T)⟩_H + E∫_t^T⟨M(τ)x₁(τ), x₂(τ)⟩_H dτ − E∫_t^T⟨K(τ)⁻¹L(τ)x₁(τ), L(τ)x₂(τ)⟩_U dτ
   = E⟨P(t)ξ₁, ξ₂⟩_H + E∫_t^T⟨P(τ)u₁(τ), x₂(τ)⟩_H dτ + E∫_t^T⟨P(τ)x₁(τ), u₂(τ)⟩_H dτ
    + E∫_t^T⟨P(τ)C(τ)x₁(τ), v₂(τ)⟩_{L⁰₂} dτ + E∫_t^T⟨P(τ)v₁(τ), C(τ)x₂(τ) + v₂(τ)⟩_{L⁰₂} dτ   (13.135)
    + E∫_t^T⟨Λ(τ)v₁(τ), x₂(τ)⟩_{H_λ,H_λ'} dτ + E∫_t^T⟨Λ̃(τ)x₁(τ), v₂(τ)⟩_{L₂(V;H_λ),L₂(V;H_λ')} dτ,

where Λ̃(·) is the operator induced by Λ(·), and x₁(·) (resp. x₂(·)) solves (13.133) (resp. (13.134)); by Theorem 3.20, one has x₁(·), x₂(·) ∈ L⁴_F(Ω;C([t,T];H_λ')); and

e where Λ(·) is the operator induced by Λ(·), and x1 (·)(resp. x2 (·)) solves (13.133)(resp. (13.134)); 2 and 3) For any t ∈ [0, T ], ξ1 , ξ2 ∈ L2Ft (Ω; H), u1 (·), u2 (·) ∈ L2F (t, T ; H) and v1 (·), v2 (·) ∈ L2F (t, T ; U ), it holds that ∫ T ⟨ ⟩ E⟨Gx1 (T ), x2 (T )⟩H + E M (τ )x1 (τ ), x2 (τ ) H dτ t



⟨ ⟩ K(τ )−1 L(τ )x1 (τ ), L(τ )x2 (τ ) H dτ

T

−E t

⟨ ⟩ = E P (t)ξ1 , ξ2 H + E ∫

T⟨

+E

t T⟨

∫ +E



T



P (τ )u1 (τ ), x2 (τ )

t

P (τ )x1 (τ ), u2 (τ )





dτ + E H

T⟨





Λ(τ )D(τ )v1 (τ ), x2 (τ )



(13.136) ⟩

t

P (τ )D(τ )v1 (τ ), C(τ )x2 (τ )+D(τ )v2 (τ )

T⟨ t

H

P (τ)C(τ )x1 (τ ), D(τ )v2 (τ )

t

+E



dτ + E H



⟩ L02

L02





T⟨

e )x1 (τ ), v2 (τ ) D(τ )∗ Λ(τ

t

⟩ U

dτ.

Here, x1 (·) and x2 (·) solve (13.133) and (13.134) with v1 and v2 replaced by Dv1 and Dv2 , respectively.
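To make Definition 13.39 concrete, here is a deterministic finite-dimensional sketch of the identity (13.135), under the simplifying assumptions C = 0 and v₁ = v₂ = 0 (so the Λ-terms drop out, K = R and L = BᵀP; all matrices invented). In that reduction P solves the matrix Riccati equation P' + AᵀP + PA + M − LᵀK⁻¹L = 0, P(T) = G, and for x_i' = Ax_i + u_i, x_i(0) = ξ_i, (13.135) reduces to ⟨Gx₁(T),x₂(T)⟩ + ∫⟨Mx₁,x₂⟩ − ∫⟨K⁻¹Lx₁,Lx₂⟩ = ⟨P(0)ξ₁,ξ₂⟩ + ∫⟨Pu₁,x₂⟩ + ∫⟨Px₁,u₂⟩.

```python
import numpy as np

# Deterministic sketch of the transposition identity (13.135) with C = 0 and
# v1 = v2 = 0, checked by direct numerical integration (Euler, fine grid).

n_t, T = 4000, 1.0
dt = T / n_t
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)
R = np.array([[1.0]])                          # K = R since D = 0
Rinv = np.linalg.inv(R)
G = 0.5 * np.eye(2)

P = np.zeros((n_t + 1, 2, 2))                  # Riccati, backward Euler
P[n_t] = G
for k in range(n_t, 0, -1):
    Pk = P[k]
    P[k - 1] = Pk + dt * (A.T @ Pk + Pk @ A + M - Pk @ B @ Rinv @ B.T @ Pk)

t = np.linspace(0.0, T, n_t + 1)
u1 = np.stack([np.cos(3.0 * t), np.sin(t)], axis=1)    # H-valued forcings
u2 = np.stack([t, np.ones_like(t)], axis=1)
xi1, xi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

x1 = np.zeros((n_t + 1, 2)); x1[0] = xi1
x2 = np.zeros((n_t + 1, 2)); x2[0] = xi2
for k in range(n_t):
    x1[k + 1] = x1[k] + dt * (A @ x1[k] + u1[k])
    x2[k + 1] = x2[k] + dt * (A @ x2[k] + u2[k])

Lx1 = np.einsum('ui,kij,kj->ku', B.T, P, x1)   # L x = B^T P x
Lx2 = np.einsum('ui,kij,kj->ku', B.T, P, x2)

lhs = (x1[n_t] @ G @ x2[n_t]
       + dt * np.einsum('ki,ij,kj->', x1, M, x2)
       - dt * np.sum((Lx1 @ Rinv.T) * Lx2))
rhs = (xi1 @ P[0] @ xi2
       + dt * np.einsum('kij,kj,ki->', P, u1, x2)
       + dt * np.einsum('kij,kj,ki->', P, x1, u2))
```

Both sides agree up to discretization error, which is exactly what the transposition formulation asks of P.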

13.7 Existence of Optimal Feedback Operator for Problem (SLQ)

In this section, we shall prove the existence of an optimal feedback operator for Problem (SLQ), provided that the operator-valued backward stochastic Riccati equation (13.17) admits a transposition solution. (Recall that, by Theorem 3.20, the solutions x₁(·), x₂(·) in Definition 13.39 belong to L⁴_F(Ω;C([0,T];H_λ')).) To this end, we introduce the following assumption.


(AS3) Let {φ_j}_{j=1}^∞ be an orthonormal basis of U. There is a Ũ ⊂ U such that Ũ is dense in U, {φ_j}_{j=1}^∞ ⊂ Ũ, R ∈ L^∞_F(0,T;L(Ũ)), B ∈ L^∞_F(0,T;L(Ũ;H_λ')) and D ∈ L^∞_F(0,T;L(Ũ;L₂(V;H_λ'))), where H_λ is given in Assumption (AS2).

Theorem 13.40. Let (AS1)–(AS3) hold. If the operator-valued backward stochastic Riccati equation (13.17) admits a transposition solution (P(·), Λ(·)) ∈ C_{F,w}([0,T];L^∞(Ω;L(H))) × L²_{F,w}(0,T;L(H)) such that

  K(·)⁻¹[B(·)*P(·) + D(·)*P(·)C(·) + D(·)*Λ̃(·)] ∈ Υ₂(H;U) ∩ Υ₂(H_λ';Ũ),   (13.137)

then Problem (SLQ) is uniquely solvable and admits an optimal feedback operator Θ(·) ∈ Υ₂(H;U) ∩ Υ₂(H_λ';Ũ). In this case, the optimal feedback operator Θ(·) is given by

  Θ(·) = −K(·)⁻¹[B(·)*P(·) + D(·)*P(·)C(·) + D(·)*Λ̃(·)].   (13.138)

Furthermore,

  inf_{u(·)∈L²_F(0,T;U)} J(η; u) = (1/2)⟨P(0)η, η⟩_H.   (13.139)

Proof: Let us assume that the equation (13.17) admits a transposition solution (P(·), Λ(·)) ∈ C_{F,w}([0,T];L^∞(Ω;L(H))) × L²_{F,w}(0,T;L(H)) such that (13.137) holds. Then

  Θ ≜ −K⁻¹(B*P + D*PC + D*Λ̃) ∈ Υ₂(H;U) ∩ Υ₂(H_λ';Ũ).   (13.140)

For any t ∈ [0,T), η ∈ L²_{F_t}(Ω;H) and u(·) ∈ L²_F(0,T;U), choose ξ₁ = ξ₂ = η, u₁ = u₂ = Bu and v₁ = v₂ = Du in (13.133)–(13.134). From (13.18), (13.136) and the pointwise self-adjointness of K(·), we obtain that

  E⟨Gx(T), x(T)⟩_H + E∫_t^T⟨M(r)x(r), x(r)⟩_H dr − E∫_t^T⟨Θ(r)*K(r)Θ(r)x(r), x(r)⟩_H dr
   = E⟨P(t)η, η⟩_H + E∫_t^T⟨P(r)B(r)u(r), x(r)⟩_H dr + E∫_t^T⟨P(r)x(r), B(r)u(r)⟩_H dr
    + E∫_t^T⟨P(r)C(r)x(r), D(r)u(r)⟩_{L⁰₂} dr + E∫_t^T⟨P(r)D(r)u(r), C(r)x(r) + D(r)u(r)⟩_{L⁰₂} dr   (13.141)
    + E∫_t^T⟨Λ(r)D(r)u(r), x(r)⟩_{H_λ,H_λ'} dr + E∫_t^T⟨D(r)*Λ̃(r)x(r), u(r)⟩_U dr.


Then, by (13.3) and (13.141), recalling the definitions of L(·) and K(·), we arrive at

  E[ ∫_t^T( ⟨Mx(r), x(r)⟩_H + ⟨Ru(r), u(r)⟩_U )dr + ⟨Gx(T), x(T)⟩_H ]
   = E⟨P(t)η, η⟩_H + E∫_t^T( ⟨Θ*KΘx(r), x(r)⟩_H + 2⟨Lx(r), u(r)⟩_U + ⟨Ku(r), u(r)⟩_U )dr.

This, together with (13.138) (which gives L = −KΘ), implies that

  E[ ∫_t^T( ⟨Mx(r), x(r)⟩_H + ⟨Ru(r), u(r)⟩_U )dr + ⟨Gx(T), x(T)⟩_H ]
   = E⟨P(t)η, η⟩_H + E∫_t^T( ⟨KΘx, Θx⟩_U − 2⟨KΘx, u⟩_U + ⟨Ku, u⟩_U )dr   (13.142)
   = E⟨P(t)η, η⟩_H + E∫_t^T⟨K(u − Θx), u − Θx⟩_U dr.

By taking t = 0 in (13.142) and recalling (13.3), we get that

  J(η; u(·)) = (1/2)E[ ⟨P(0)η, η⟩_H + ∫_0^T⟨K(u − Θx), u − Θx⟩_U dr ]   (13.143)
   = J(η; Θx̄) + (1/2)E∫_0^T⟨K(u − Θx), u − Θx⟩_U dr.

Hence,

  J(η; Θx̄) ≤ J(η; u),  ∀ u(·) ∈ L²_F(0,T;U).

Consequently, Θ(·) is an optimal feedback operator for Problem (SLQ), and (13.139) holds. Further, since K > 0, the optimal control is unique. This completes the proof of Theorem 13.40.
518

13 Linear Quadratic Optimal Control Problems

13.8 Global Solvability of Operator-Valued Backward Stochastic Riccati Equations

Theorem 13.40 in the last section concludes the existence of an optimal feedback operator for Problem (SLQ) under the assumption that the operator-valued backward stochastic Riccati equation (13.17) admits a transposition solution satisfying (13.137). In this section, we shall show that the converse of this theorem is also true under some further conditions.

The main result of this section (together with Theorem 13.40), which reveals the relationship between the existence of an optimal feedback operator for Problem (SLQ) and the global solvability of (13.17) in the sense of transposition solution (see Definition 13.39), is stated as follows:

Theorem 13.41. Let (AS1)–(AS3) hold, let A generate a C₀-group on H, and let F be the natural filtration generated by W(·). If Problem (SLQ) is uniquely solvable and admits an optimal feedback operator Θ(·) ∈ Υ₂(H;U) ∩ Υ₂(H_λ';Ũ), then the equation (13.17) admits a unique transposition solution (P(·), Λ(·)) ∈ C_{F,w}([0,T];L^∞(Ω;L(H))) × L²_{F,w}(0,T;L(H)) satisfying (13.137), and the optimal feedback operator Θ(·) is given by (13.138). Furthermore, (13.139) holds.

The proof of Theorem 13.41 is quite long, and will be given in Subsection 13.8.2, after some careful preliminaries presented in Subsection 13.8.1. Several remarks are in order.

Remark 13.42. In Theorem 13.41, we only conclude that K(t,ω) has a left inverse for a.e. (t,ω) ∈ (0,T)×Ω, and therefore K(t,ω)⁻¹ may be unbounded. Nevertheless, this result cannot be improved. Let us show this by the following example. Let O ⊂ ℝᵏ (for some k ∈ ℕ) be a bounded domain with a smooth boundary ∂O. Let H = H₀¹(O)×L²(O), V = ℝ, U = L²(O) and

  A = ( 0  I
        Δ  0 ),

where Δ is the Laplacian on O with the usual homogeneous Dirichlet boundary condition. Let

  B = ( 0
        I ),   C = ( 0  0
                     I  0 ),

D = 0, M = 0, R = (−Δ)⁻¹ and G = 0. Then (13.1) specializes to

  dx = (Ax + Bu)dt + Cx dW(t) in (0,T],  x(0) = η.   (13.144)

The cost functional reads

  J(η; u(·)) = (1/2)E∫₀ᵀ⟨(−Δ)⁻¹u(t), u(t)⟩_{L²(O)} dt.   (13.145)

Clearly, for any η ∈ H₀¹(O)×L²(O), there is a unique optimal control u ≡ 0. For the present case, it is easy to check that (P(·), Λ(·)) = (0, 0) is the unique transposition solution to (13.17). However, K = (−Δ)⁻¹ is not surjective and K⁻¹ is unbounded.

Remark 13.43. In Theorem 13.41, we assume that A generates a C₀-group on H and that F is the natural filtration generated by W(·). These assumptions are used to guarantee the well-posedness of the stochastic evolution equation (13.219) in the sequel. Actually, the C₀-group condition can be dropped (see [235] for the details).

Remark 13.44. By Theorem 3.20, if ξ_j ∈ L²_{F_t}(Ω;H_λ'), u_j(·) ∈ L²_F(t,T;H_λ') and v_j(·) ∈ L²_F(t,T;L₂(V;H_λ')), then the solutions x_j (j = 1,2) to (13.133)–(13.134) belong to L²_F(Ω;C([t,T];H_λ')). This plays a key role in Step 5 of the proof of Theorem 13.41. We believe that this assumption can be dropped; however, we do not know how to do so at this moment.

Remark 13.45. In Theorem 13.41, the most natural choice of optimal feedback operator would be an element of Υ₂(H;U) rather than of Υ₂(H;U) ∩ Υ₂(H_λ';Ũ). Nevertheless, at this moment, in the proof of Theorem 13.41 we do need to suppose that Θ(·) ∈ Υ₂(H;U) ∩ Υ₂(H_λ';Ũ).

In the rest of this section, unless otherwise stated, we assume the assumptions in Theorem 13.41.

13.8.1 Some Preliminary Results

In this subsection, we present some results on the approximation of stochastic evolution equations and backward stochastic evolution equations by stochastic differential equations and backward stochastic differential equations, respectively, which will be useful in this section. Besides their role in proving Theorem 13.41, we believe that these results are of independent interest.

First, for any η ∈ H, consider the following stochastic evolution equation:

  dx = [(A + 𝒜)x + f]dt + (Bx + g)dW(t) in (0,T],  x(0) = η.   (13.146)
Here 𝒜 ∈ Υ₁(H), B ∈ Υ₂(H;L⁰₂), η ∈ H, f ∈ L²_F(Ω;L¹(0,T;H)) and g ∈ L²_F(0,T;L⁰₂). By Theorem 3.20, the equation (13.146) admits a unique mild solution x(·) ∈ L²_F(Ω;C([0,T];H)), and

  |x(·)|_{L²_F(Ω;C([0,T];H))} ≤ C( |η|_H + |f|_{L²_F(Ω;L¹(0,T;H))} + |g|_{L²_F(0,T;L⁰₂)} ).   (13.147)

Next, we consider the following backward stochastic evolution equation:


  dy = −(A*y + DY + h)dt + Y dW(t) in [0,T),  y(T) = ξ.   (13.148)

Here ξ ∈ L²_{F_T}(Ω;H), D ∈ L^∞_F(0,T;L(L⁰₂;H)) and h ∈ L²_F(0,T;H). By Theorem 4.10, the equation (13.148) admits a unique mild solution (y(·), Y(·)) ∈ L²_F(Ω;C([0,T];H)) × L²_F(0,T;L⁰₂), and

  |(y(·), Y(·))|_{L²_F(Ω;C([0,T];H))×L²_F(0,T;L⁰₂)} ≤ C( |ξ|_{L²_{F_T}(Ω;H)} + |h|_{L²_F(0,T;H)} ).

For each n ∈ ℕ, denote by Γ_n the projection operator from H onto the finite dimensional space H_n ≜ span_{1≤j≤n}{e_j} (recall that {e_j}_{j=1}^∞ is an orthonormal basis of H). Let

  A_n = Γ_n A Γ_n,  𝒜_n = Γ_n 𝒜 Γ_n,  B_n = Γ_n B Γ_n,  D_n = Γ_n D Γ_n,
  G_n = Γ_n G Γ_n,  f_n = Γ_n f,  g_n = Γ_n g,  h_n = Γ_n h.   (13.149)
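A quick numerical illustration of the projections (13.149), under the simplifying assumption (consistent with (AS1)) that A is diagonal in the basis {e_j}: Γ_nAΓ_n then simply truncates the symbol, A_nζ → Aζ strongly for ζ ∈ D(A), and the Hilbert-Schmidt gap |A_n − A|_{L₂(H;H_λ)} appearing in Lemma 13.46 below is controlled by the tail of {λ_j}. All concrete sequences here are invented.

```python
import numpy as np

# Truncated model of the Galerkin projections (13.149): H = R^200 with basis
# e_j, A = diag(mu_j), so Gamma_n A Gamma_n zeroes every mode beyond n.
# We check (i) |A_n zeta - A zeta|_H -> 0 for zeta in D(A), and (ii) the bound
# |A_n - A|_{L2(H;H_lambda)}^2 = sum_{j>n} lambda_j^2 mu_j^2 |e_j|_{D(A)}^{-2}
#                              <= sum_{j>n} lambda_j^2
# from (the proof of) Lemma 13.46, with graph norms |e_j|_{D(A)}^2 = 1 + mu_j^2.

Nmodes = 200
j = np.arange(1, Nmodes + 1)
mu = -j.astype(float) ** 2                    # eigenvalues of A
lam = 1.0 / j                                 # {lambda_j} in l^2
w = lam / np.sqrt(1.0 + mu ** 2)              # lambda_j |e_j|_{D(A)}^{-1}

zeta = 1.0 / j ** 3                           # in D(A): sum (mu_j zeta_j)^2 < oo

def err_strong(n):
    """|A_n zeta - A zeta|_H with A_n = Gamma_n A Gamma_n."""
    return np.sqrt(np.sum((mu[n:] * zeta[n:]) ** 2))

def gap_HS(n):
    """|A_n - A|_{L2(H;H_lambda)}."""
    return np.sqrt(np.sum((w[n:] * mu[n:]) ** 2))

ns = (5, 20, 80)
strong = [err_strong(n) for n in ns]
gaps = [gap_HS(n) for n in ns]
tails = [np.sqrt(np.sum(lam[n:] ** 2)) for n in ns]
```

Both error sequences decrease with n, and each Hilbert-Schmidt gap is dominated by the corresponding ℓ²-tail of {λ_j}.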

It is easy to show that

  lim_{n→∞} 𝒜_n ζ = 𝒜ζ,  lim_{n→∞} B_n ζ = Bζ,  for all ζ ∈ H and a.e. (t,ω) ∈ [0,T]×Ω,   (13.150)

  lim_{n→∞} D_n ζ = Dζ,  for all ζ ∈ L⁰₂ and a.e. (t,ω) ∈ [0,T]×Ω,   (13.151)

  lim_{n→∞} G_n ζ = Gζ,  for all ζ ∈ H and a.e. ω ∈ Ω,   (13.152)

and

  lim_{n→∞} f_n = f and lim_{n→∞} h_n = h in L²_F(0,T;H),  lim_{n→∞} g_n = g in L²_F(0,T;L⁰₂).   (13.153)

For any ξ ∈ D(A),

  lim_{n→∞}|A_n ξ − Aξ|_H = lim_{n→∞}|Γ_n A Γ_n ξ − Aξ|_H
   ≤ lim_{n→∞}|Γ_n(AΓ_n ξ − Aξ)|_H + lim_{n→∞}|(Γ_n − I)Aξ|_H
   ≤ lim_{n→∞}|AΓ_n ξ − Aξ|_H + lim_{n→∞}|(Γ_n − I)Aξ|_H
   ≤ lim_{n→∞}|A|_{L(D(A);H)}|Γ_n ξ − ξ|_{D(A)} + lim_{n→∞}|(Γ_n − I)Aξ|_H = 0.

By the Trotter-Kato approximation theorem (e.g., [87, page 209]), we have that, for any ζ ∈ H,

  lim_{n→∞} e^{A_n t}ζ = S(t)ζ in H, uniformly for t ∈ [0,T].   (13.154)

Similarly, we have that, for any ζ ∈ H_λ,

  lim_{n→∞} e^{A_n t}ζ = S(t)ζ in H_λ, uniformly for t ∈ [0,T].   (13.155)


Lemma 13.46. Let (AS1) hold and H_λ ∈ V_H. Then A ∈ L₂(H;H_λ) and lim_{n→∞}|A_n − A|_{L₂(H;H_λ)} = 0.

Proof: Since the {e_j}_{j=1}^∞ are eigenvectors of A, it holds that

  Σ_{j=1}^∞|Ae_j|²_{H_λ} = Σ_{j=1}^∞|µ_j e_j|²_{H_λ} = Σ_{j=1}^∞ λ_j²µ_j²|e_j|_{D(A)}^{-2} ≤ Σ_{j=1}^∞ λ_j² < ∞.

Hence, A ∈ L₂(H;H_λ). Next,

  lim_{n→∞}|A_n − A|²_{L₂(H;H_λ)} = lim_{n→∞} Σ_{j=1}^∞|(A_n − A)e_j|²_{H_λ}
   = lim_{n→∞} Σ_{j=n+1}^∞ λ_j²µ_j²|e_j|_{D(A)}^{-2} ≤ lim_{n→∞} Σ_{j=n+1}^∞ λ_j² = 0.

This completes the proof of Lemma 13.46.

Now we introduce sequences of stochastic differential equations and backward stochastic differential equations which approximate the stochastic evolution equation (13.146) and the backward stochastic evolution equation (13.148), respectively. For any η ∈ H, ξ ∈ L²_{F_T}(Ω;H) and n ∈ ℕ, consider the following two equations:
This completes the proof of Lemma 13.46. Now we introduce sequences of stochastic differential equations and backward stochastic differential equations, which approximate the stochastic evolution equation (13.146) and backward stochastic evolution equation (13.148), respectively. For any η ∈ H, ξ ∈ L2FT (Ω; H) and n ∈ N, consider the following two equations: [ ] { dxn = (An + An )xn + fn dt + (Bn xn + gn )dW (t) in [0, T ], (13.156) xn (0) = Γn η, and {

( ) dyn = − A∗n yn + Dn Yn + hn dt + Yn dW (t) in [0, T ],

(13.157)

yn (T ) = Γn ξ. Lemma 13.47. For any η ∈ H and ξ ∈ L2FT (Ω; H), it holds that  ( )  lim E sup |xn (t) − x(t)|2H = 0,  n→∞ t∈[0,T ]     ( ) lim E sup |yn (t) − y(t)|2H = 0, n→∞  t∈[0,T ]       lim |Yn (·) − Y (·)|L2 (0,T ;L0 ) = 0. 2 F

(13.158)

n→∞

Proof : Step 1. In this step, we prove the first equality in (13.158). Put


13 Linear Quadratic Optimal Control Problems

\[
M \triangleq \sup_{t\in[0,T]} |S(t)|_{L(H)}\bigl(|\mathcal A|_{L(H)} + |\mathcal B|_{L(H;L_2^0)}\bigr)
\]
and let $t_0 \in [0,T]$ satisfy
\[
t_0 \triangleq \max\Bigl\{t \in [0,T] \Bigm| \max\{t, t^2\} \le \frac{1}{4M^2}\Bigr\}. \tag{13.159}
\]
From (13.146) and (13.156), we have that
\[
\begin{aligned}
&E\Bigl(\sup_{t\in[0,t_0]} |x_n(t)-x(t)|_H^2\Bigr)\\
&\le C E \sup_{t\in[0,t_0]}\Bigl(\bigl|S(t)\eta - e^{A_n t}\Gamma_n\eta\bigr|_H^2
+ \Bigl|\int_0^t S(t-r)\mathcal A x(r)\,dr - \int_0^t e^{A_n(t-r)}\mathcal A_n x_n(r)\,dr\Bigr|_H^2\\
&\quad + \Bigl|\int_0^t S(t-r)f(r)\,dr - \int_0^t e^{A_n(t-r)}f_n(r)\,dr\Bigr|_H^2
+ \Bigl|\int_0^t S(t-r)\mathcal B x(r)\,dW(r) - \int_0^t e^{A_n(t-r)}\mathcal B_n x_n(r)\,dW(r)\Bigr|_H^2\\
&\quad + \Bigl|\int_0^t S(t-r)g(r)\,dW(r) - \int_0^t e^{A_n(t-r)}g_n(r)\,dW(r)\Bigr|_H^2\Bigr). 
\end{aligned} \tag{13.160}
\]
Let us estimate the terms on the right-hand side of (13.160) one by one. First,
\[
\begin{aligned}
\sup_{t\in[0,t_0]} \bigl|S(t)\eta - e^{A_n t}\Gamma_n\eta\bigr|_H^2
&\le 2\sup_{t\in[0,t_0]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2 + 2\sup_{t\in[0,t_0]} \bigl|e^{A_n t}\eta - e^{A_n t}\Gamma_n\eta\bigr|_H^2\\
&\le 2\sup_{t\in[0,t_0]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2 + C E\bigl|\eta - \Gamma_n\eta\bigr|_H^2.
\end{aligned} \tag{13.161}
\]

Next, from the definitions of $A_n$ and $\mathcal A_n$ (note that $\mathcal A_n$ takes values in $H_n$, on which $S(t)$ and $e^{A_n t}$ coincide), we know that
\[
\bigl(S(t) - e^{A_n t}\bigr)\mathcal A_n = 0, \qquad \forall t \in [0,T]. \tag{13.162}
\]
Thus, we have
\[
\begin{aligned}
&E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)\mathcal A x(r)\,dr - \int_0^t e^{A_n(t-r)}\mathcal A_n x_n(r)\,dr\Bigr|_H^2\\
&\le 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)\mathcal A x(r)\,dr - \int_0^t e^{A_n(t-r)}\mathcal A_n x(r)\,dr\Bigr|_H^2
+ 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t e^{A_n(t-r)}\mathcal A_n\bigl(x(r) - x_n(r)\bigr)\,dr\Bigr|_H^2\\
&\le 2E\Bigl(\int_0^{t_0} \bigl|\bigl[(S(t-r)-e^{A_n(t-r)})\mathcal A + (S(t-r)-e^{A_n(t-r)})\mathcal A_n\bigr]x(r)\bigr|_H\,dr\Bigr)^2
+ M^2 t_0^2\, E\sup_{t\in[0,t_0]} |x(t)-x_n(t)|_H^2\\
&\le 2E\Bigl(\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)\mathcal A x(r)\bigr|_H\,dr\Bigr)^2
+ M^2 t_0^2\, E\sup_{t\in[0,t_0]} |x(t)-x_n(t)|_H^2.
\end{aligned} \tag{13.163}
\]

Using (13.162) again, we get that
\[
\begin{aligned}
&E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)f(r)\,dr - \int_0^t e^{A_n(t-r)}f_n(r)\,dr\Bigr|_H^2\\
&\le 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)f(r)\,dr - \int_0^t e^{A_n(t-r)}f(r)\,dr\Bigr|_H^2
+ 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t e^{A_n(t-r)}\bigl(f(r) - f_n(r)\bigr)\,dr\Bigr|_H^2\\
&\le 2E\int_0^{t_0} \bigl|\bigl(S(t-r)-e^{A_n(t-r)}\bigr)f(r)\bigr|_H^2\,dr
+ CE\int_0^{t_0} |f(r)-f_n(r)|_H^2\,dr.
\end{aligned} \tag{13.164}
\]

By Theorem 3.18 (Burkholder-Davis-Gundy inequality) and (13.162), we have that
\[
\begin{aligned}
&E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)\mathcal B x(r)\,dW(r) - \int_0^t e^{A_n(t-r)}\mathcal B_n x_n(r)\,dW(r)\Bigr|_H^2\\
&\le 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)\mathcal B x(r)\,dW(r) - \int_0^t e^{A_n(t-r)}\mathcal B_n x(r)\,dW(r)\Bigr|_H^2\\
&\quad + 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t e^{A_n(t-r)}\mathcal B_n\bigl(x(r)-x_n(r)\bigr)\,dW(r)\Bigr|_H^2\\
&\le 2E\int_0^{t_0} \bigl|\bigl(S(t-r)-e^{A_n(t-r)}\bigr)\mathcal B x(r)\bigr|_{L_2^0}^2\,dr
+ M^2 t_0\, E\sup_{t\in[0,t_0]} |x(t)-x_n(t)|_H^2
\end{aligned} \tag{13.165}
\]
and
\[
\begin{aligned}
&E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)g(r)\,dW(r) - \int_0^t e^{A_n(t-r)}g_n(r)\,dW(r)\Bigr|_H^2\\
&\le 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t S(t-r)g(r)\,dW(r) - \int_0^t e^{A_n(t-r)}g(r)\,dW(r)\Bigr|_H^2
+ 2E\sup_{t\in[0,t_0]} \Bigl|\int_0^t e^{A_n(t-r)}\bigl(g(r)-g_n(r)\bigr)\,dW(r)\Bigr|_H^2\\
&\le 2E\int_0^{t_0} \bigl|\bigl(S(t-r)-e^{A_n(t-r)}\bigr)g(r)\bigr|_{L_2^0}^2\,dr
+ CE\int_0^{t_0} |g(r)-g_n(r)|_{L_2^0}^2\,dr.
\end{aligned} \tag{13.166}
\]


From (13.159)–(13.166), we find that
\[
\begin{aligned}
&E\Bigl(\sup_{t\in[0,t_0]} |x_n(t)-x(t)|_H^2\Bigr)\\
&\le C\Bigl(E\sup_{t\in[0,t_0]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2 + E\bigl|\eta - \Gamma_n\eta\bigr|_H^2
+ E\Bigl(\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)\mathcal A x(r)\bigr|_H\,dr\Bigr)^2\\
&\quad + E\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)f(r)\bigr|_H^2\,dr + E\int_0^{t_0} |f(r)-f_n(r)|_H^2\,dr\\
&\quad + E\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)\mathcal B x(r)\bigr|_{L_2^0}^2\,dr\\
&\quad + E\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)g(r)\bigr|_{L_2^0}^2\,dr + E\int_0^{t_0} |g(r)-g_n(r)|_{L_2^0}^2\,dr\Bigr). 
\end{aligned} \tag{13.167}
\]
For all $n \in \mathbb N$,
\[
\sup_{t\in[0,T]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2 \le C|\eta|_H^2.
\]
This, together with Lebesgue's dominated convergence theorem and (13.150), implies that
\[
\lim_{n\to\infty} E\sup_{t\in[0,T]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2
= E\lim_{n\to\infty} \sup_{t\in[0,T]} \bigl|S(t)\eta - e^{A_n t}\eta\bigr|_H^2 = 0. \tag{13.168}
\]

Similarly, we can prove that
\[
\lim_{n\to\infty} E\bigl|\eta - \Gamma_n\eta\bigr|_H^2 = 0. \tag{13.169}
\]
Next, noting that for all $n \in \mathbb N$,
\[
\bigl|\bigl(S(t-r)-e^{A_n(t-r)}\bigr)\mathcal A x(r)\bigr|_H^2 \le C\,|x(r)|_H^2, \qquad \text{a.e. } (t,\omega) \in [0,t_0]\times\Omega,
\]
it follows from Lebesgue's dominated convergence theorem and (13.150) that
\[
\lim_{n\to\infty} E\int_0^{t_0} \bigl|\bigl(S(t_0-r)-e^{A_n(t_0-r)}\bigr)\mathcal A x(r)\bigr|_H^2\,dr = 0. \tag{13.170}
\]
Similar to the above arguments, we can show that all the terms on the right-hand side of (13.167) tend to zero as $n$ tends to $\infty$. Consequently, we obtain that
\[
\lim_{n\to\infty} E\Bigl(\sup_{t\in[0,t_0]} |x_n(t)-x(t)|_H^2\Bigr) = 0.
\]
If $t_0 = T$, then we complete our proof. Otherwise, let

\[
t_1 \triangleq \max\Bigl\{t \in [t_0,T] \Bigm| \max\{t-t_0, (t-t_0)^2\} \le \frac{1}{4M^2}\Bigr\}. \tag{13.171}
\]
Repeating the above argument, we get that
\[
\lim_{n\to\infty} E\Bigl(\sup_{t\in[0,t_1]} |x_n(t)-x(t)|_H^2\Bigr) = 0.
\]
By an induction argument, we can obtain that
\[
\lim_{n\to\infty} E\Bigl(\sup_{t\in[0,T]} |x_n(t)-x(t)|_H^2\Bigr) = 0.
\]
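The first equality in (13.158) can be checked numerically in a toy diagonal model (our own construction; all parameters below are illustrative, not from the book): each mode of the state equation is taken as a scalar geometric Brownian motion $dx_j = \mu_j x_j\,dt + \sigma x_j\,dW$ with exact mode-wise solution, and the Galerkin approximation $x_n$ keeps the first $n$ modes started from $\Gamma_n\eta$, so the approximation error is exactly the tail of the solution:

```python
import numpy as np

rng = np.random.default_rng(0)
J, sigma, T, N = 200, 0.5, 1.0, 256
mu = -np.arange(1, J + 1) ** 2.0        # eigenvalues of the generator, mode-wise
zeta = 1.0 / np.arange(1, J + 1)        # initial datum eta in H = l^2
t = np.linspace(0.0, T, N + 1)

def paths(n_paths=50):
    # exact mode-wise solution of dx_j = mu_j x_j dt + sigma x_j dW (one common W)
    dW = rng.normal(0.0, np.sqrt(T / N), size=(n_paths, N))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    drift = (mu[None, None, :] - 0.5 * sigma**2) * t[None, :, None]
    return zeta * np.exp(drift + sigma * W[:, :, None])   # shape: (paths, time, mode)

x = paths()

def E_sup_err(n):
    # the Galerkin solution x_n keeps the first n modes and starts from Gamma_n zeta,
    # so x - x_n is exactly the tail over modes j > n
    tail = x[:, :, n:]
    err2 = np.sum(tail**2, axis=2).max(axis=1)   # sup_t |x(t) - x_n(t)|_H^2, pathwise
    return err2.mean()                           # Monte Carlo E sup_t |x - x_n|^2

vals = [E_sup_err(n) for n in (5, 20, 80)]
print(vals)   # decreasing toward 0
```

Since the tails are nested, the error here decreases monotonically pathwise in $n$ — a stronger statement than the limit in (13.158), available only because the toy model diagonalizes.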

Step 2. In this step, we prove that the second and third equalities in (13.158) hold. Denote by $\sigma(A^*)$ the spectrum of $A^*$. Since $A^*$ generates a $C_0$-semigroup on $H$, there exists $C_0 > 0$ such that
\[
\sup_{\mu \in \sigma(A^*)} \operatorname{Re}\mu \le C_0. \tag{13.172}
\]
Recall that the eigenvectors $\{e_k\}_{k=1}^\infty$ of $A$ constitute an orthonormal basis of $H$. For any $\xi = \sum_{k=1}^\infty \xi_k e_k \in D(A)$, we have
\[
\operatorname{Re}\langle A\xi, \xi\rangle_H
= \operatorname{Re}\Bigl\langle \sum_{k=1}^\infty \mu_k \xi_k e_k, \sum_{k=1}^\infty \xi_k e_k \Bigr\rangle_H
\le C_0 |\xi|_H^2. \tag{13.173}
\]
For each $\mu \in \rho(A^*)$, let $R(\mu) = \mu(\mu I - A^*)^{-1}$. Introduce respectively the approximating equations of (13.148) and (13.157) as follows:
\[
\begin{cases}
dy_\mu = -A^* y_\mu\,dt - R(\mu)\bigl(DY + h\bigr)dt + Y_\mu\,dW(t) & \text{in } [0,T),\\
y_\mu(T) = R(\mu)\xi,
\end{cases} \tag{13.174}
\]
\[
\begin{cases}
dy_{n,\mu} = -\bigl[A_n^* y_{n,\mu} + \Gamma_n R(\mu)\bigl(D_n Y_n + h_n\bigr)\bigr]dt + Y_{n,\mu}\,dW(t) & \text{in } [0,T],\\
y_{n,\mu}(T) = \Gamma_n R(\mu)\Gamma_n\xi.
\end{cases} \tag{13.175}
\]
By Theorem 4.12, we obtain that
\[
\lim_{\mu\to\infty}\bigl(|y_{n,\mu} - y_n|_{L^2_{\mathbb F}(\Omega;C([0,T];H))} + |Y_{n,\mu} - Y_n|_{L^2_{\mathbb F}(0,T;L_2^0)}\bigr) = 0 \tag{13.176}
\]
and
\[
\lim_{\mu\to\infty}\bigl(|y_\mu - y|_{L^2_{\mathbb F}(\Omega;C([0,T];H))} + |Y_\mu - Y|_{L^2_{\mathbb F}(0,T;L_2^0)}\bigr) = 0. \tag{13.177}
\]


By Itô's formula, (13.173) and noting $A^* y_{n,\mu} = A_n^* y_{n,\mu}$ as well as (13.172), we have
\[
\begin{aligned}
&|y_\mu(t) - y_{n,\mu}(t)|_H^2 + \int_t^T |Y_\mu(r) - Y_{n,\mu}(r)|_{L_2^0}^2\,dr\\
&= |R(\mu)\xi - \Gamma_n R(\mu)\Gamma_n\xi|_H^2
+ \int_t^T \langle A^*(y_\mu(r) - y_{n,\mu}(r)), y_\mu(r) - y_{n,\mu}(r)\rangle_H\,dr\\
&\quad + \int_t^T \langle R(\mu)DY(r) - \Gamma_n R(\mu)D_n Y_n(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H\,dr\\
&\quad + \int_t^T \langle R(\mu)h(r) - \Gamma_n R(\mu)h_n(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H\,dr
+ \int_t^T \langle (Y_\mu(r) - Y_{n,\mu}(r))\,dW(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H\\
&\le |R(\mu)\xi - \Gamma_n R(\mu)\Gamma_n\xi|_H^2 + C_0\int_t^T |y_\mu(r) - y_{n,\mu}(r)|_H^2\,dr\\
&\quad + \int_t^T \langle R(\mu)DY(r) - \Gamma_n R(\mu)D_n Y_n(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H\,dr\\
&\quad + \int_t^T \langle R(\mu)h(r) - \Gamma_n R(\mu)h_n(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H\,dr
+ \int_t^T \langle (Y_\mu(r) - Y_{n,\mu}(r))\,dW(r), y_\mu(r) - y_{n,\mu}(r)\rangle_H.
\end{aligned} \tag{13.178}
\]

Letting $\mu \to \infty$ in (13.178), from (13.176) and (13.177), and using the Burkholder-Davis-Gundy inequality, we have
\[
\begin{aligned}
&E|y(t) - y_n(t)|_H^2 + E\int_t^T |Y - Y_n|_{L_2^0}^2\,dr\\
&\le E\sup_{t\in[0,T]} |y(t) - y_n(t)|_H^2 + E\int_t^T |Y - Y_n|_{L_2^0}^2\,dr\\
&\le E|\xi - \Gamma_n\xi|_H^2 + CE\int_t^T |y - y_n|_H^2\,dr + \frac12 E\int_t^T |Y - Y_n|_{L_2^0}^2\,dr + E\int_t^T |h - h_n|_H^2\,dr.
\end{aligned} \tag{13.179}
\]

This, together with Gronwall's inequality, implies that
\[
E|y(t) - y_n(t)|_H^2 + E\int_t^T |Y - Y_n|_{L_2^0}^2\,dr
\le C\Bigl(E|\xi - \Gamma_n\xi|_H^2 + E\int_t^T |h - h_n|_H^2\,dr\Bigr). \tag{13.180}
\]
From (13.180), we see that
\[
\lim_{n\to\infty}\Bigl(\sup_{t\in[0,T]} E|y(t) - y_n(t)|_H^2 + E\int_0^T |Y - Y_n|_{L_2^0}^2\,dr\Bigr) = 0. \tag{13.181}
\]

By the second and third lines of (13.179), and using (13.181), we obtain the second and third equalities in (13.158).

Consider the following stochastic evolution equation:
\[
\begin{cases}
dx_{1,n} = \bigl(Ax_{1,n} + \Gamma_n u_1\bigr)d\tau + \bigl(\Gamma_n C x_{1,n} + \Gamma_n v_1\bigr)dW(\tau) & \text{in } (0,T],\\
x_{1,n}(s) = \Gamma_n\xi_1.
\end{cases} \tag{13.182}
\]
Similar to Lemma 13.47, we can establish the following result.

Lemma 13.48. For any $\xi_1 \in L^4_{\mathcal F_t}(\Omega;H_{\lambda'})$, $u_1(\cdot) \in L^4_{\mathbb F}(\Omega;L^2(0,T;H_{\lambda'}))$ and $v_1(\cdot) \in L^4_{\mathbb F}(\Omega;L^2(0,T;L_2(V;H_{\lambda'})))$, the solution $x_{1,n}(\cdot) \in L^4_{\mathbb F}(\Omega;C([0,T];H_{\lambda'}))$ to (13.182) satisfies
\[
\lim_{n\to\infty} |x_{1,n}(\cdot) - x_1(\cdot)|_{L^4_{\mathbb F}(\Omega;C([0,T];H_{\lambda'}))} = 0, \tag{13.183}
\]
where $x_1(\cdot)$ is the solution to (13.133).

For a.e. $\tau \in [0,T]$, define six operators $\Phi$, $\Phi_n$, $\Psi$, $\Psi_n$, $\Xi$ and $\Xi_n$ as follows:
\[
\begin{cases}\Phi: H \to L^2_{\mathbb F}(\Omega;C([0,T];H)),\\ (\Phi\eta)(\tau) = x(\tau), \ \forall\eta\in H,\end{cases}
\qquad
\begin{cases}\Phi_n: H \to L^2_{\mathbb F}(\Omega;C([0,T];H)),\\ (\Phi_n\eta)(\tau) = x_n(\tau), \ \forall\eta\in H,\end{cases}
\]
\[
\begin{cases}\Psi: H \to L^2_{\mathbb F}(\Omega;C([0,T];H)),\\ (\Psi\eta)(\tau) = y(\tau), \ \forall\eta\in H,\end{cases}
\qquad
\begin{cases}\Psi_n: H \to L^2_{\mathbb F}(\Omega;C([0,T];H)),\\ (\Psi_n\eta)(\tau) = y_n(\tau), \ \forall\eta\in H,\end{cases}
\]
\[
\begin{cases}\Xi: H \to L^2_{\mathbb F}(0,T;L_2^0),\\ (\Xi\eta)(\tau) = Y(\tau), \ \forall\eta\in H,\end{cases}
\qquad
\begin{cases}\Xi_n: H \to L^2_{\mathbb F}(0,T;L_2^0),\\ (\Xi_n\eta)(\tau) = Y_n(\tau), \ \forall\eta\in H.\end{cases}
\]
Here $x(\cdot)$ (resp. $x_n(\cdot)$) is the solution to (13.146) (resp. (13.156)) with $f = g = 0$, and $(y(\cdot),Y(\cdot))$ (resp. $(y_n(\cdot),Y_n(\cdot))$) is the solution to (13.148) (resp. (13.157))


with $h$ and $\xi$ replaced by $Kx$ for some $K \in L^\infty_{\mathbb F}(0,T;L(H))$ and $Gx(T)$ (resp. $h_n$ and $\xi_n$ replaced by $K_n x_n$ with $K_n = \Gamma_n K\Gamma_n$, and $G_n x_n(T)$), respectively.

Denote by $I_{HH_\lambda}$ the embedding operator from $H$ to $H_\lambda$. We have the following result.

Lemma 13.49. Suppose $\mathcal A \in \Upsilon_1(H_\lambda)$, $\mathcal B \in \Upsilon_2(H_\lambda;L_2(V;H_\lambda))$, $D \in L^\infty_{\mathbb F}(0,T;L(L_2(V;H_\lambda);H_\lambda))$ and $K \in L^\infty_{\mathbb F}(0,T;L(H_\lambda))$. Then
\[
\lim_{n\to\infty} |I_{HH_\lambda}\Phi_n - I_{HH_\lambda}\Phi|_{L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda)))} = 0, \tag{13.184}
\]
\[
\begin{cases}
\lim_{n\to\infty} |I_{HH_\lambda}\Psi_n - I_{HH_\lambda}\Psi|_{L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda)))} = 0,\\
\lim_{n\to\infty} |I_{HH_\lambda}\Xi_n - I_{HH_\lambda}\Xi|_{L^4_{\mathbb F}(\Omega;L^2(0,T;L_2(H;L_2(V;H_\lambda))))} = 0.
\end{cases} \tag{13.185}
\]

Proof: We first prove (13.184). It is easy to show that, for any $\varrho \in L^\infty_{\mathbb F}(0,T;L^2(\Omega;H_\lambda))$,
\[
\begin{cases}
\lim_{n\to\infty} |\mathcal A_n\varrho - \mathcal A\varrho|_{L^4_{\mathbb F}(\Omega;L^1(0,T;H_\lambda))} = 0,\\
\lim_{n\to\infty} |\mathcal B_n\varrho - \mathcal B\varrho|_{L^4_{\mathbb F}(\Omega;L^2(0,T;L_2(H_\lambda;L_2(V;H_\lambda))))} = 0.
\end{cases} \tag{13.186}
\]
From the definitions of $\Phi$ and $\Phi_n$, we see that, for any $\eta \in H$ and $t \in [0,T]$,
\[
\Phi(t)\eta = S(t)\eta + \int_0^t S(t-r)\mathcal A(r)\Phi(r)\eta\,dr + \int_0^t S(t-r)\mathcal B(r)\Phi(r)\eta\,dW(r) \quad \text{in } H, \ \text{a.s.},
\]
and
\[
\Phi_n(t)\eta = e^{A_n t}\Gamma_n\eta + \int_0^t e^{A_n(t-r)}\mathcal A_n(r)\Phi_n(r)\eta\,dr + \int_0^t e^{A_n(t-r)}\mathcal B_n(r)\Phi_n(r)\eta\,dW(r) \quad \text{in } H, \ \text{a.s.}
\]
Noting $e^{A_n t}\Gamma_n\eta = e^{A_n t}\eta$, we have
\[
I_{HH_\lambda}\Phi(t)\eta = I_{HH_\lambda}S(t)\eta + \int_0^t I_{HH_\lambda}S(t-r)\mathcal A(r)\Phi(r)\eta\,dr + \int_0^t I_{HH_\lambda}S(t-r)\mathcal B(r)\Phi(r)\eta\,dW(r) \quad \text{in } H_\lambda, \ \text{a.s.} \tag{13.187}
\]
and
\[
I_{HH_\lambda}\Phi_n(t)\eta = I_{HH_\lambda}e^{A_n t}\eta + \int_0^t I_{HH_\lambda}e^{A_n(t-r)}\mathcal A_n(r)\Phi_n(r)\eta\,dr + \int_0^t I_{HH_\lambda}e^{A_n(t-r)}\mathcal B_n(r)\Phi_n(r)\eta\,dW(r) \quad \text{in } H_\lambda, \ \text{a.s.}
\]


Since $I_{HH_\lambda}e_j = e_j$ for all $j \in \mathbb N$, if $O \in L(H)$ can be extended to a bounded linear operator on $H_\lambda$, then
\[
\bigl|I_{HH_\lambda}O - O I_{HH_\lambda}\bigr|_{L_2(H;H_\lambda)}^2
= \sum_{j=1}^{\infty} \bigl|I_{HH_\lambda}Oe_j - O I_{HH_\lambda}e_j\bigr|_{H_\lambda}^2
= \sum_{j=1}^{\infty} \bigl|Oe_j - Oe_j\bigr|_{H_\lambda}^2 = 0.
\]
Consequently, $I_{HH_\lambda}O = O I_{HH_\lambda}$. This, together with (13.187), implies that
\[
I_{HH_\lambda}\Phi(t)\eta = I_{HH_\lambda}S(t)\eta + \int_0^t S(t-r)\mathcal A(r)I_{HH_\lambda}\Phi(r)\eta\,dr + \int_0^t S(t-r)\mathcal B(r)I_{HH_\lambda}\Phi(r)\eta\,dW(r) \quad \text{in } H_\lambda, \ \text{a.s.}
\]
Since $L_2(H;H_\lambda)$ is a Hilbert space, for any $t \in [0,T]$ and a.s.,
\[
I_{HH_\lambda}\Phi(t) = I_{HH_\lambda}S(t) + \int_0^t S(t-r)\mathcal A(r)I_{HH_\lambda}\Phi(r)\,dr + \int_0^t S(t-r)\mathcal B(r)I_{HH_\lambda}\Phi(r)\,dW(r) \quad \text{in } L_2(H;H_\lambda). \tag{13.188}
\]
Similarly, we can prove that for any $t \in [0,T]$ and a.s.,
\[
I_{HH_\lambda}\Phi_n(t) = I_{HH_\lambda}e^{A_n t} + \int_0^t e^{A_n(t-r)}\mathcal A_n(r)I_{HH_\lambda}\Phi_n(r)\,dr + \int_0^t e^{A_n(t-r)}\mathcal B_n(r)I_{HH_\lambda}\Phi_n(r)\,dW(r) \quad \text{in } L_2(H;H_\lambda). \tag{13.189}
\]

In what follows, to simplify notation, we omit the operator $I_{HH_\lambda}$ when there is no confusion. It follows from (13.188) and (13.189) that for any stopping time $\tau_0$ with $\tau_0(\omega) \in (0,T]$, a.s.,
\[
\begin{aligned}
&E\sup_{r\in[0,\tau_0]} \bigl|\Phi(r) - \Phi_n(r)\bigr|_{L_2(H;H_\lambda)}^4\\
&\le CE\Bigl[\sup_{r\in[0,\tau_0]} \bigl|S(r) - e^{A_n r}\bigr|_{L_2(H;H_\lambda)}^4
+ \Bigl|\int_0^r \bigl(S(r-\tau)\mathcal A(\tau)\Phi(\tau) - e^{A_n(r-\tau)}\mathcal A_n(\tau)\Phi_n(\tau)\bigr)d\tau\Bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + \Bigl|\int_0^r \bigl(S(r-\tau)\mathcal B(\tau)\Phi(\tau) - e^{A_n(r-\tau)}\mathcal B_n(\tau)\Phi_n(\tau)\bigr)dW(\tau)\Bigr|_{L_2(H;H_\lambda)}^4\Bigr].
\end{aligned} \tag{13.190}
\]


Hence, by the Burkholder-Davis-Gundy inequality, and noting that $|\mathcal A_n(\cdot)|_{L(H_\lambda)} \le |\mathcal A(\cdot)|_{L(H_\lambda)}$ as well as $|\mathcal B_n(\cdot)|_{L(H_\lambda;L_2(V;H_\lambda))} \le |\mathcal B(\cdot)|_{L(H_\lambda;L_2(V;H_\lambda))}$, we deduce that
\[
\begin{aligned}
&E\sup_{r\in[0,\tau_0]} \bigl|\Phi(r) - \Phi_n(r)\bigr|_{L_2(H;H_\lambda)}^4\\
&\le C\Bigl[E\sup_{r\in[0,\tau_0]} \bigl|S(r) - e^{A_n r}\bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + E\Bigl(\int_0^{\tau_0} \bigl|\bigl(S(\tau_0-\tau)\mathcal A(\tau) - e^{A_n(\tau_0-\tau)}\mathcal A_n(\tau)\bigr)\Phi(\tau)
+ e^{A_n(\tau_0-\tau)}\mathcal A_n(\tau)\bigl(\Phi(\tau) - \Phi_n(\tau)\bigr)\bigr|_{L_2(H;H_\lambda)}\,d\tau\Bigr)^4\\
&\quad + E\Bigl(\int_0^{\tau_0} \bigl|\bigl(S(\tau_0-\tau)\mathcal B(\tau) - e^{A_n(\tau_0-\tau)}\mathcal B_n(\tau)\bigr)\Phi(\tau)
+ e^{A_n(\tau_0-\tau)}\mathcal B_n(\tau)\bigl(\Phi(\tau) - \Phi_n(\tau)\bigr)\bigr|_{L_2(H;L_2(V;H_\lambda))}^2\,d\tau\Bigr)^2\Bigr]\\
&\le C\Bigl[E\sup_{r\in[0,\tau_0]} \bigl|S(r) - e^{A_n r}\bigr|_{L_2(H;H_\lambda)}^4
+ E\Bigl(\int_0^T \bigl|\bigl(S(T-\tau)\mathcal A(\tau) - e^{A_n(T-\tau)}\mathcal A_n(\tau)\bigr)\Phi(\tau)\bigr|_{L_2(H;H_\lambda)}\,d\tau\Bigr)^4\\
&\quad + \Bigl(\bigl||\mathcal A(\cdot)|_{L(H_\lambda)}\bigr|_{L^\infty_{\mathbb F}(\Omega;L^1(0,\tau_0))}^4
+ \bigl||\mathcal B(\cdot)|_{L(H_\lambda;L_2(V;H_\lambda))}\bigr|_{L^\infty_{\mathbb F}(\Omega;L^2(0,\tau_0))}^4\Bigr)
E\sup_{r\in[0,\tau_0]} \bigl|\Phi(r) - \Phi_n(r)\bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + E\Bigl(\int_0^T \bigl|\bigl(S(T-\tau)\mathcal B(\tau) - e^{A_n(T-\tau)}\mathcal B_n(\tau)\bigr)\Phi(\tau)\bigr|_{L_2(H;L_2(V;H_\lambda))}^2\,d\tau\Bigr)^2\Bigr].
\end{aligned} \tag{13.191}
\]
Noting that $\mathcal A \in \Upsilon_1(H_\lambda)$ and $\mathcal B \in \Upsilon_2(H_\lambda;L_2(V;H_\lambda))$, we conclude that there is a stopping time $\tau_0 \in (0,T]$, a.s., such that
\[
C\Bigl(\Bigl|\int_0^{\tau_0} |\mathcal A(\tau)|_{L(H_\lambda)}\,d\tau\Bigr|_{L^\infty(\Omega)}^4
+ \Bigl|\int_0^{\tau_0} |\mathcal B(\tau)|_{L(H_\lambda;L_2(V;H_\lambda))}^2\,d\tau\Bigr|_{L^\infty(\Omega)}^2\Bigr) \le \frac12.
\]
For this $\tau_0$, it follows from (13.190) that
\[
\begin{aligned}
&E\Bigl(\sup_{r\in[0,\tau_0]} \bigl|\Phi(r) - \Phi_n(r)\bigr|_{L_2(H;H_\lambda)}^4\Bigr)\\
&\le C\Bigl[E\Bigl(\sup_{r\in[0,\tau_0]} \bigl|S(r) - e^{A_n r}\bigr|_{L_2(H;H_\lambda)}^4\Bigr)
+ E\Bigl|\int_0^T \bigl(S(T-\tau)\mathcal A(\tau) - e^{A_n(T-\tau)}\mathcal A_n(\tau)\bigr)\Phi(\tau)\,d\tau\Bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + E\Bigl(\int_0^T \bigl|\bigl(S(T-\tau)\mathcal B(\tau) - e^{A_n(T-\tau)}\mathcal B_n(\tau)\bigr)\Phi(\tau)\bigr|_{L_2(H;L_2(V;H_\lambda))}^2\,d\tau\Bigr)^2\Bigr].
\end{aligned} \tag{13.192}
\]


From (13.154), we have
\[
\lim_{n\to\infty} E\Bigl(\sup_{r\in[0,\tau_0]} \bigl|S(r) - e^{A_n r}\bigr|_{L_2(H;H_\lambda)}^4\Bigr) = 0.
\]
By (13.154) and (13.186), we get that
\[
\lim_{n\to\infty} E\Bigl(\int_0^T \bigl|\bigl(S(T-\tau)\mathcal A(\tau) - e^{A_n(T-\tau)}\mathcal A_n(\tau)\bigr)\Phi(\tau)\bigr|_{L_2(H;H_\lambda)}\,d\tau\Bigr)^4 = 0
\]
and
\[
\lim_{n\to\infty} E\Bigl(\int_0^T \bigl|\bigl(S(T-\tau)\mathcal B(\tau) - e^{A_n(T-\tau)}\mathcal B_n(\tau)\bigr)\Phi(\tau)\bigr|_{L_2(H;L_2(V;H_\lambda))}^2\,d\tau\Bigr)^2 = 0.
\]
These, together with (13.192), imply that
\[
\lim_{n\to\infty} |\Phi_n - \Phi|_{L^4_{\mathbb F}(\Omega;L^\infty(0,T;L_2(H;H_\lambda)))} = 0. \tag{13.193}
\]

Repeating the above argument gives (13.184).

Next, we prove (13.185). Clearly, for any $\varrho \in L^\infty_{\mathbb F}(0,T;L^2(\Omega;H_\lambda))$ and $\vartheta \in L^\infty_{\mathbb F}(0,T;L^2(\Omega;L_2(V;H_\lambda)))$,
\[
\begin{cases}
\lim_{n\to\infty} |K_n\varrho - K\varrho|_{L^\infty_{\mathbb F}(0,T;L^2(\Omega;H_\lambda))} = 0,\\
\lim_{n\to\infty} |D_n\vartheta - D\vartheta|_{L^\infty_{\mathbb F}(0,T;L^2(\Omega;H_\lambda))} = 0.
\end{cases} \tag{13.194}
\]
Similar to the proof of (13.188), we obtain that for any $t \in [0,T]$ and a.s.,
\[
\Psi(t) = S(T-t)^* G\Phi(T) + \int_t^T S(r-t)^*\bigl(K(r)\Psi(r) + D(r)\Xi(r)\bigr)dr - \int_t^T S(r-t)^*\Xi(r)\,dW(r) \quad \text{in } L_2(H;H_\lambda) \tag{13.195}
\]
and
\[
\Psi_n(t) = e^{A_n^*(T-t)} G_n\Phi_n(T) + \int_t^T e^{A_n^*(r-t)}\bigl(K_n(r)\Psi_n(r) + D_n(r)\Xi_n(r)\bigr)dr - \int_t^T e^{A_n^*(r-t)}\Xi_n(r)\,dW(r) \quad \text{in } L_2(H;H_\lambda). \tag{13.196}
\]
Since $L_2(H;H_\lambda)$ is a Hilbert space, by (13.195)–(13.196), it is easy to see that $(\Psi,\Xi)$ and $(\Psi_n,\Xi_n)$ are respectively weak solutions of the following $L_2(H;H_\lambda)$-valued backward stochastic evolution equations:
\[
\begin{cases}
d\Psi = -(A^*\Psi + K\Psi + D\Xi)\,dt + \Xi\,dW(t) & \text{in } [0,T),\\
\Psi(T) = G\Phi(T)
\end{cases} \tag{13.197}
\]
and
\[
\begin{cases}
d\Psi_n = -(A_n^*\Psi_n + K_n\Psi_n + D_n\Xi_n)\,dt + \Xi_n\,dW(t) & \text{in } [0,T),\\
\Psi_n(T) = G_n\Phi_n(T).
\end{cases} \tag{13.198}
\]
Then, for any $t \in (0,T]$, by Itô's formula and noting that $(A^* - A_n^*)\Psi_n = 0$ (by our assumption (AS1)),
\[
\begin{aligned}
&|\Psi(t) - \Psi_n(t)|_{L_2(H;H_\lambda)}^2 + \int_t^T |\Xi(r) - \Xi_n(r)|_{L_2(H;L_2(V;H_\lambda))}^2\,dr\\
&= |G\Phi(T) - G_n\Phi_n(T)|_{L_2(H;H_\lambda)}^2
+ 2\int_t^T \Bigl[\bigl\langle A^*(\Psi - \Psi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\\
&\quad + \bigl\langle (K - K_n)\Psi, \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}
+ \bigl\langle K_n(\Psi - \Psi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\\
&\quad + \bigl\langle (D - D_n)\Xi, \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}
+ \bigl\langle D_n(\Xi - \Xi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\Bigr]d\tau\\
&\quad - 2\int_t^T \bigl\langle (\Xi - \Xi_n)\,dW(\tau), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}. 
\end{aligned} \tag{13.199}
\]

Since $A^*$ generates a $C_0$-group on $H_\lambda$, we have that for any $\varrho \in L_2(H;H_\lambda)$,
\[
\langle A^*\varrho, \varrho\rangle_{L_2(H;H_\lambda)}
= \sum_{k=1}^{\infty} \langle A^*(\varrho e_k), \varrho e_k\rangle_{H_\lambda}
\le C\sum_{k=1}^{\infty} |\varrho e_k|_{H_\lambda}^2
= C|\varrho|_{L_2(H;H_\lambda)}^2.
\]
Thus,
\[
\int_t^T \bigl\langle A^*(\Psi - \Psi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\,d\tau
\le C\int_t^T |\Psi - \Psi_n|_{L_2(H;H_\lambda)}^2\,d\tau. \tag{13.200}
\]
Clearly,
\[
\begin{aligned}
&\int_t^T \bigl\langle K_n(\Psi - \Psi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\,d\tau
+ \int_t^T \bigl\langle D_n(\Xi - \Xi_n), \Psi - \Psi_n\bigr\rangle_{L_2(H;H_\lambda)}\,d\tau\\
&\le |K_n|_{L^\infty_{\mathbb F}(0,T;L(H_\lambda))}\int_t^T |\Psi - \Psi_n|_{L_2(H;H_\lambda)}^2\,d\tau
+ |D_n|_{L^\infty_{\mathbb F}(0,T;L(L_2(V;H_\lambda);H_\lambda))}\int_t^T |\Psi - \Psi_n|_{L_2(H;H_\lambda)}\,|\Xi - \Xi_n|_{L_2(H;L_2(V;H_\lambda))}\,d\tau\\
&\le C\int_t^T |\Psi - \Psi_n|_{L_2(H;H_\lambda)}^2\,d\tau
+ \frac12\int_t^T |\Xi - \Xi_n|_{L_2(H;L_2(V;H_\lambda))}^2\,d\tau.
\end{aligned} \tag{13.201}
\]

From (13.199)–(13.201), we find that
\[
\begin{aligned}
&E|\Psi(t) - \Psi_n(t)|_{L_2(H;H_\lambda)}^4
+ E\Bigl(\int_t^T |\Xi(r) - \Xi_n(r)|_{L_2(H;L_2(V;H_\lambda))}^2\,dr\Bigr)^2\\
&\le CE\Bigl[|G\Phi(T) - G_n\Phi_n(T)|_{L_2(H;H_\lambda)}^4
+ \Bigl(\int_t^T \bigl|(K - K_n)\Psi\bigr|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2\\
&\quad + \Bigl(\int_t^T \bigl|(D - D_n)\Xi\bigr|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2
+ \Bigl(\int_t^T |\Psi - \Psi_n|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2\Bigr]. 
\end{aligned} \tag{13.202}
\]
Since
\[
\lim_{n\to\infty} E\Bigl[\Bigl(\int_0^T \bigl|(K - K_n)\Psi\bigr|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2
+ \Bigl(\int_0^T \bigl|(D - D_n)\Xi\bigr|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2\Bigr] = 0,
\]
the estimate (13.202), together with Gronwall's inequality, implies that
\[
\lim_{n\to\infty}\Bigl[\sup_{t\in[0,T]} E|\Psi(t) - \Psi_n(t)|_{L_2(H;H_\lambda)}^4
+ E\Bigl(\int_0^T |\Xi(r) - \Xi_n(r)|_{L_2(H;L_2(V;H_\lambda))}^2\,dr\Bigr)^2\Bigr] = 0. \tag{13.203}
\]

This gives the second equality in (13.185).

It remains to prove the first equality in (13.185). From (13.195) and (13.196), it follows that, for any $s_0 \in [0,T)$,
\[
\begin{aligned}
&E\sup_{r\in[s_0,T]} |\Psi(r) - \Psi_n(r)|_{L_2(H;H_\lambda)}^4\\
&\le CE\sup_{r\in[s_0,T]}\Bigl[\bigl|S(T-r)^* G\Phi(T) - e^{A_n^*(T-r)} G_n\Phi_n(T)\bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + \Bigl|\int_r^T \bigl(S(\tau-r)^* K(\tau)\Psi(\tau) - e^{A_n^*(\tau-r)} K_n(\tau)\Psi_n(\tau)\bigr)d\tau\Bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + \Bigl|\int_r^T \bigl(S(\tau-r)^* D(\tau)\Xi(\tau) - e^{A_n^*(\tau-r)} D_n(\tau)\Xi_n(\tau)\bigr)d\tau\Bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + \Bigl|\int_r^T \bigl(S(\tau-r)^* \Xi(\tau) - e^{A_n^*(\tau-r)} \Xi_n(\tau)\bigr)dW(\tau)\Bigr|_{L_2(H;H_\lambda)}^4\Bigr].
\end{aligned} \tag{13.204}
\]

Therefore, by the Burkholder-Davis-Gundy inequality, similar to (13.191), we obtain that
\[
\begin{aligned}
&E\sup_{r\in[s_0,T]} |\Psi(r) - \Psi_n(r)|_{L_2(H;H_\lambda)}^4\\
&\le C\Bigl[E\sup_{r\in[0,T]} \bigl|S(T-r)^* G\Phi(T) - e^{A_n^*(T-r)} G_n\Phi_n(T)\bigr|_{L_2(H;H_\lambda)}^4\\
&\quad + E\Bigl(\int_0^T \bigl|\bigl(S(\tau)^* K(\tau) - e^{A_n^*\tau} K_n(\tau)\bigr)\Psi(\tau)\bigr|_{L_2(H;H_\lambda)}\,d\tau\Bigr)^4\\
&\quad + (T-s_0)^3 \bigl|K\bigr|_{L^\infty_{\mathbb F}(0,T;L(H_\lambda))}^4 E\sup_{r\in[s_0,T]} |\Psi(r) - \Psi_n(r)|_{L_2(H;H_\lambda)}^4\\
&\quad + E\Bigl(\int_0^T \bigl|\bigl(S(\tau)^* D(\tau) - e^{A_n^*\tau} D_n(\tau)\bigr)\Xi(\tau)\bigr|_{L_2(H;H_\lambda)}^2\,d\tau\Bigr)^2\\
&\quad + \bigl(\bigl|D\bigr|_{L^\infty_{\mathbb F}(0,T;L(L_2(V;H_\lambda);H_\lambda))}^4 + 1\bigr)\,|\Xi - \Xi_n|_{L^2_{\mathbb F}(0,T;L_2(H;L_2(V;H_\lambda)))}^4\Bigr].
\end{aligned} \tag{13.205}
\]
Let $s_0 = T - \bigl(2C|K|_{L^\infty_{\mathbb F}(0,T;L(H_\lambda))}^4\bigr)^{-1/3}$. Then $C(T-s_0)^3 |K|_{L^\infty_{\mathbb F}(0,T;L(H_\lambda))}^4 = \frac12$. By (13.203), (13.184) and (13.205), we conclude that
\[
\lim_{n\to\infty} |\Psi_n - \Psi|_{L^4_{\mathbb F}(\Omega;L^\infty(s_0,T;L_2(H;H_\lambda)))} = 0.
\]

Repeating this argument gives the first equality in (13.185).

To end this subsection, we provide below a controllability result concerning the trajectories of solutions to (13.134), which is a variant of Lemma 12.13 and will play an important role in the proof of the uniqueness of transposition solutions to (13.17).

Lemma 13.50. The set
\[
\mathcal R \triangleq \bigl\{x_2(\cdot) \bigm| x_2(\cdot) \text{ solves (13.134) with } t = 0,\ \xi_2 = 0,\ v_2 = 0 \text{ and } u_2 \in L^4_{\mathbb F}(\Omega;L^2(0,T;H_{\lambda'}))\bigr\}
\]
is dense in $L^2_{\mathbb F}(0,T;H_{\lambda'})$.

Proof: The proof of Lemma 13.50 is very similar to the proof of Lemma 12.13. We give it here for the sake of completeness. If Lemma 13.50 were not true, then there would be a nonzero $\rho \in L^2_{\mathbb F}(0,T;H_\lambda)$ such that
\[
E\int_0^T \bigl\langle \rho, x_2\bigr\rangle_{H_\lambda,H_{\lambda'}}\,ds = 0, \qquad \text{for every } x_2 \in \mathcal R. \tag{13.206}
\]
Consider the following backward stochastic evolution equation:
\[
\begin{cases}
dy = -A^* y\,dt + \bigl(\rho - C^* Y\bigr)dt + Y\,dW(t) & \text{in } [0,T),\\
y(T) = 0,
\end{cases} \tag{13.207}
\]
which admits a unique solution $(y(\cdot),Y(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];H_\lambda)) \times L^2_{\mathbb F}(0,T;H_\lambda)$. Hence, for any $\phi_1(\cdot) \in L^1_{\mathbb F}(0,T;L^4(\Omega;H_{\lambda'}))$ and $\phi_2(\cdot) \in L^2_{\mathbb F}(0,T;L^4(\Omega;H_{\lambda'}))$, we have
\[
-E\int_0^T \bigl\langle z(s), \rho(s) - C^* Y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds
= E\int_0^T \bigl\langle \phi_1(s), y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds
+ E\int_0^T \bigl\langle \phi_2(s), Y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds, \tag{13.208}
\]


where $z(\cdot)$ solves
\[
\begin{cases}
dz = (Az + \phi_1)\,dt + \phi_2\,dW(t) & \text{in } (0,T],\\
z(0) = 0.
\end{cases} \tag{13.209}
\]
In particular, for any $x_2(\cdot)$ solving (12.40) with $t = 0$, $\xi_2 = 0$, $v_2 = 0$ and an arbitrarily given $u_2 \in L^4_{\mathbb F}(0,T;H_{\lambda'})$, we choose $z = x_2$, $\phi_1 = u_2$ and $\phi_2 = Cx_2$. It follows from (13.208) that
\[
-E\int_0^T \bigl\langle x_2(s), \rho(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds
= E\int_0^T \bigl\langle u_2(s), y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds, \qquad \forall u_2 \in L^4_{\mathbb F}(0,T;H_{\lambda'}). \tag{13.210}
\]

By (13.210) and recalling (13.206), we get that $y(\cdot) = 0$. Consequently, (13.208) reduces to
\[
-E\int_0^T \bigl\langle z(s), \rho(s) - C^* Y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds
= E\int_0^T \bigl\langle \phi_2(s), Y(s)\bigr\rangle_{H_{\lambda'},H_\lambda}\,ds. \tag{13.211}
\]
Choosing $\phi_2(\cdot) = 0$ in (13.209), by (13.211), we obtain that
\[
E\int_0^T \Bigl\langle \int_0^s S(s-\sigma)\phi_1(\sigma)\,d\sigma,\ \rho(s) - C^* Y(s)\Bigr\rangle_{H_{\lambda'},H_\lambda}\,ds = 0,
\qquad \forall \phi_1(\cdot) \in L^1_{\mathbb F}(0,T;L^4(\Omega;H_{\lambda'})). \tag{13.212}
\]
Hence,
\[
\int_\sigma^T S(s-\sigma)\bigl[\rho(s) - C^* Y(s)\bigr]\,ds = 0, \qquad \forall\sigma \in [0,T]. \tag{13.213}
\]

Then, for any given $\lambda_0 \in \rho(A)$ and $\sigma \in [0,T]$, it holds that
\[
\int_\sigma^T S(s-\sigma)(\lambda_0 - A)^{-1}\bigl(\rho(s) - C^* Y(s)\bigr)ds
= (\lambda_0 - A)^{-1}\int_\sigma^T S(s-\sigma)\bigl(\rho(s) - C^* Y(s)\bigr)ds = 0. \tag{13.214}
\]
Differentiating the equality (13.214) with respect to $\sigma$, and noting (13.213), we obtain that
\[
\begin{aligned}
(\lambda_0 - A)^{-1}\bigl(\rho(\sigma) - C^* Y(\sigma)\bigr)
&= -\int_\sigma^T S(s-\sigma)A(\lambda_0 - A)^{-1}\bigl(\rho(s) - C^* Y(s)\bigr)ds\\
&= \int_\sigma^T S(s-\sigma)\bigl[\rho(s) - C^* Y(s)\bigr]ds
- \lambda_0\int_\sigma^T S(s-\sigma)(\lambda_0 - A)^{-1}\bigl(\rho(s) - C^* Y(s)\bigr)ds\\
&= 0, \qquad \forall\sigma \in [0,T].
\end{aligned}
\]
Therefore,
\[
\rho(\cdot) = C(\cdot)^* Y(\cdot). \tag{13.215}
\]

By (13.215), the equation (13.207) reduces to
\[
\begin{cases}
dy = -A^* y\,dt + Y\,dW(t) & \text{in } [0,T),\\
y(T) = 0.
\end{cases} \tag{13.216}
\]
It is clear that the unique solution of (13.216) is $(y(\cdot),Y(\cdot)) = (0,0)$. Hence, by (13.215), we conclude that $\rho(\cdot) = 0$, a contradiction. This completes the proof of Lemma 13.50.

13.8.2 Proof of the Main Solvability Result

In this subsection, we shall prove Theorem 13.41. To avoid trapping beginners in technical details, we handle only the case $V = \mathbb R$, that is, $W(\cdot)$ is a standard one-dimensional Brownian motion. (Note that, in this case, $L_2^0 = \mathbb R \otimes H$ is isometrically isomorphic to $H$; hence, in the rest of this chapter, we simply regard $L_2^0$ as $H$.) With the preliminary results in Subsection 13.8.1, interested readers can follow the proof in this subsection to deal with the general case. The proof (even for the case $V = \mathbb R$) is so long that we have to divide it into several steps.

Step 1. In this step, we introduce some operators $X(\cdot)$,³ $Y(\cdot)$, $\mathcal Y(\cdot)$ and $\widetilde X(\cdot)$.

Let $\Theta(\cdot) \in \Upsilon_2(H;U) \cap \Upsilon_2(H_{\lambda'};\widetilde U)$ be an optimal feedback operator of Problem (SLQ). Then, by Theorems 3.20 and 4.10, for any $\zeta \in H$, the following forward-backward stochastic evolution equation
\[
\begin{cases}
d\hat x = (A + B\Theta)\hat x\,dt + (C + D\Theta)\hat x\,dW(t) & \text{in } (0,T],\\
dy = -\bigl(A^* y + C^* Y + M\hat x\bigr)dt + Y\,dW(t) & \text{in } [0,T),\\
\hat x(0) = \zeta, \qquad y(T) = G\hat x(T)
\end{cases} \tag{13.217}
\]

³ In the sequel, we shall interchangeably use $X(t)$, $X(t,\cdot)$, or even $X$ to denote the operator $X(\cdot)$. The same can be said for $Y(\cdot)$, $\mathcal Y(\cdot)$ and $\widetilde X(\cdot)$.


admits a unique mild solution $(\hat x(\cdot),y(\cdot),Y(\cdot)) \in L^2_{\mathbb F}(\Omega;C([0,T];H)) \times L^2_{\mathbb F}(\Omega;C([0,T];H)) \times L^2_{\mathbb F}(0,T;H)$ such that
\[
R\Theta\hat x + B^* y + D^* Y = 0, \qquad \text{a.e. } (t,\omega) \in (0,T)\times\Omega. \tag{13.218}
\]
Further, consider the following stochastic evolution equation:
\[
\begin{cases}
d\tilde x = -\bigl[A + B\Theta - (C + D\Theta)^2\bigr]^* \tilde x\,dt - (C + D\Theta)^* \tilde x\,dW(t) & \text{in } (0,T],\\
\tilde x(0) = \zeta.
\end{cases} \tag{13.219}
\]

Note that $A$ generates a $C_0$-group, and hence so does $-A^*$. By Theorem 3.20, the equation (13.219) admits a unique mild solution $\tilde x(\cdot) \in L^2_{\mathbb F}(\Omega;C([0,T];H))$.

For each $n \in \mathbb N$, denote by $\widetilde\Gamma_n$ the projection operator from $U$ onto $U_n \triangleq \operatorname{span}_{1\le j\le n}\{\varphi_j\}$ (recall that $\{\varphi_j\}_{j=1}^\infty$ is an orthonormal basis of $U$). Write
\[
B_n = \Gamma_n B\widetilde\Gamma_n, \quad C_n = \Gamma_n C\Gamma_n, \quad D_n = \Gamma_n D\widetilde\Gamma_n, \quad M_n = \Gamma_n M\Gamma_n, \quad R_n = \widetilde\Gamma_n R\widetilde\Gamma_n, \quad \Theta_n = \widetilde\Gamma_n\Theta\Gamma_n.
\]
Clearly,
\[
\lim_{n\to+\infty} C_n\zeta = C\zeta \ \text{in } H, \qquad \lim_{n\to+\infty} M_n\zeta = M\zeta \ \text{in } H, \qquad \lim_{n\to+\infty} \Theta_n\zeta = \Theta\zeta \ \text{in } U, \tag{13.220}
\]
for all $\zeta \in H$ and a.e. $(t,\omega) \in [0,T]\times\Omega$, and
\[
\lim_{n\to+\infty} B_n\varsigma = B\varsigma \ \text{in } H, \qquad \lim_{n\to+\infty} D_n\varsigma = D\varsigma \ \text{in } H, \qquad \lim_{n\to+\infty} R_n\varsigma = R\varsigma \ \text{in } U, \tag{13.221}
\]
for all $\varsigma \in U$ and a.e. $(t,\omega) \in [0,T]\times\Omega$.

Consider the following forward-backward stochastic differential equation:
\[
\begin{cases}
d\hat x_n = (A_n + B_n\Theta_n)\hat x_n\,dt + (C_n + D_n\Theta_n)\hat x_n\,dW(t) & \text{in } [0,T],\\
dy_n = -\bigl(A_n^* y_n + C_n^* Y_n + M_n\hat x_n\bigr)dt + Y_n\,dW(t) & \text{in } [0,T],\\
\hat x_n(0) = \Gamma_n\zeta, \qquad y_n(T) = G_n\hat x_n(T)
\end{cases} \tag{13.222}
\]
and the following stochastic differential equation:


 [ ( )2 ]⊤  d˜ xn = − An − Bn Θn + Cn + Dn Θn x ˜n dt     

−(Cn + Dn Θn )⊤ x ˜n dW (t)

in [0, T ],

(13.223)

x ˜n (0) = Γn ζ,

where An and Gn are given in (13.149). For each t ∈ [0, T ], define three e n,t on Hn as follows: operators Xn,t , Yn,t and X  ∆  ˆn (t; Γn ζ),  Xn,t Γn ζ = x   ∆ ∀ ζ ∈ H. (13.224) Yn,t Γn ζ = yn (t; Γn ζ),    e ∆  Xn,t Γn ζ = x ˜n (t; Γn ζ), For a.e. t ∈ [0, T ], define an operator Yn,t on Hn by ∆

Yn,t Γn ζ = Yn (t; Γn ζ),

∀ ζ ∈ H.

(13.225)

By the well-posedness results for the equations (13.222) and (13.223), and the fact that both $A$ and $-A^*$ generate $C_0$-semigroups on $H$ (because $A$ generates a $C_0$-group on $H$), we see that
\[
|X_{n,t}\Gamma_n\zeta|_{L^2_{\mathcal F_t}(\Omega;H)} \le C|\zeta|_H, \quad
|Y_{n,t}\Gamma_n\zeta|_{L^2_{\mathcal F_t}(\Omega;H)} \le C|\zeta|_H, \quad
|\mathcal Y_{n,\cdot}\Gamma_n\zeta|_{L^2_{\mathbb F}(0,T;H)} \le C|\zeta|_H, \quad
|\widetilde X_{n,t}\Gamma_n\zeta|_{L^2_{\mathcal F_t}(\Omega;H)} \le C|\zeta|_H,
\]

LF (Ω;H) t

H

  |Yn,· Γn ζ|L2F (0,T ;H) ≤ C|ζ|H ,       |X e n,t Γn ζ|L2 (Ω;H) ≤ C|ζ|H , F t

where the constant $C$ is independent of $n$. This implies that
\[
|X_{n,t}\Gamma_n|_{L(H;L^2_{\mathcal F_t}(\Omega;H))} \le C, \quad
|Y_{n,t}\Gamma_n|_{L(H;L^2_{\mathcal F_t}(\Omega;H))} \le C, \quad
|\mathcal Y_{n,\cdot}\Gamma_n|_{L(H;L^2_{\mathbb F}(0,T;H))} \le C, \quad
|\widetilde X_{n,t}\Gamma_n|_{L(H;L^2_{\mathcal F_t}(\Omega;H))} \le C. \tag{13.226}
\]

  |Yn,· Γn |L(H;L2F (0,T ;H)) ≤ C,       |X e n,t Γn |L(H;L2 (Ω;H)) ≤ C. F

(13.226)

t

Consider the following equations:
\[
\begin{cases}
dX_n = (A_n + B_n\Theta_n)X_n\,dt + (C_n + D_n\Theta_n)X_n\,dW(t) & \text{in } [0,T],\\
dY_n = -\bigl(A_n^* Y_n + C_n^* \mathcal Y_n + M_n X_n\bigr)dt + \mathcal Y_n\,dW(t) & \text{in } [0,T],\\
X_n(0) = I_n, \qquad Y_n(T) = G_n X_n(T)
\end{cases} \tag{13.227}
\]
and

(13.227)


\[
\begin{cases}
d\widetilde X_n = -\bigl[A_n + B_n\Theta_n - (C_n + D_n\Theta_n)^2\bigr]^\top \widetilde X_n\,dt - (C_n + D_n\Theta_n)^\top \widetilde X_n\,dW(t) & \text{in } [0,T],\\
\widetilde X_n(0) = I_n.
\end{cases} \tag{13.228}
\]


(13.228)

2

Clearly, both (13.227) and (13.228) can be viewed as $\mathbb R^{n\times n} \equiv \mathbb R^{n^2}$-valued equations. By Theorems 3.20 and 4.10, the equations (13.227) and (13.228) admit solutions $(X_n, Y_n, \mathcal Y_n) \in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^{n\times n})) \times L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^{n\times n})) \times L^2_{\mathbb F}(0,T;\mathbb R^{n\times n})$ and $\widetilde X_n \in L^2_{\mathbb F}(\Omega;C([0,T];\mathbb R^{n\times n}))$, respectively. It follows from (13.224)–(13.228) that, for a.e. $t \in [0,T]$,
\[
X_{n,t}\Gamma_n\zeta = X_n(t)\Gamma_n\zeta, \quad Y_{n,t}\Gamma_n\zeta = Y_n(t)\Gamma_n\zeta, \quad \mathcal Y_{n,t}\Gamma_n\zeta = \mathcal Y_n(t)\Gamma_n\zeta, \quad \widetilde X_{n,t}\Gamma_n\zeta = \widetilde X_n(t)\Gamma_n\zeta, \qquad \forall\zeta \in H. \tag{13.229}
\]

 Xn,t Γn ∈ L2Ft (Ω; L(H)),        Yn,t Γn ∈ L2F (Ω; L(H)), t  e n,t Γn ∈ L2 (Ω; L(H)),  X  Ft     Yn,· Γn ∈ L2F (0, T ; L(H)).

Clearly,

 Xn,t Γn ∈ Lpd (H; L2Ft (Ω; H)) for a.e. t ∈ [0, T ],        Yn,t Γn ∈ Lpd (H; L2F (Ω; H)) for a.e. t ∈ [0, T ], t  e n,t Γn ∈ Lpd (H; L2 (Ω; H)) for a.e. t ∈ [0, T ],  X  Ft     2 Yn,· Γn ∈ Lpd (H; LF (0, T ; H)).

By (13.226) and using Theorem 2.63, we deduce that there exist suitable subsequences $\{X_{n_k,t}\}_{k=1}^\infty \subset \{X_{n,t}\}_{n=1}^\infty$, $\{Y_{n_k,t}\}_{k=1}^\infty \subset \{Y_{n,t}\}_{n=1}^\infty$, $\{\mathcal Y_{n_k,t}\}_{k=1}^\infty \subset \{\mathcal Y_{n,t}\}_{n=1}^\infty$ and $\{\widetilde X_{n_k,t}\}_{k=1}^\infty \subset \{\widetilde X_{n,t}\}_{n=1}^\infty$ (these sequences may depend on $t$), and (pointwise defined) operators $X(t,\cdot)$, $Y(t,\cdot)$, $\widetilde X(t,\cdot) \in L_{pd}(H;L^2_{\mathcal F_t}(\Omega;H))$ (for each $t \in [0,T]$) and $\mathcal Y(\cdot,\cdot) \in L_{pd}(H;L^2_{\mathbb F}(0,T;H))$ such that
\[
\begin{cases}
\lim_{k\to+\infty} X_{n_k,t}\Gamma_{n_k}\zeta = X(t,\cdot)\zeta & \text{weakly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{k\to+\infty} Y_{n_k,t}\Gamma_{n_k}\zeta = Y(t,\cdot)\zeta & \text{weakly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{k\to+\infty} \mathcal Y_{n_k,t}\Gamma_{n_k}\zeta = \mathcal Y(\cdot,\cdot)\zeta & \text{weakly in } L^2_{\mathbb F}(0,T;H),\\
\lim_{k\to+\infty} \widetilde X_{n_k,t}\Gamma_{n_k}\zeta = \widetilde X(t,\cdot)\zeta & \text{weakly in } L^2_{\mathcal F_t}(\Omega;H),
\end{cases} \tag{13.230}
\]


 |X(t, ·)ζ|L2F (Ω;H) ≤ C|ζ|H ,   t      |Y(t, ·)ζ|L2 (Ω;H) ≤ C|ζ|H , F

and that

t

  |Y(·, ·)ζ|L2F (0,T ;H) ≤ C|ζ|H ,      |X(t, e ·)ζ|L2 (Ω;H) ≤ C|ζ|H . F

(13.231)

t

On the other hand, from the definition of $(\hat x_n(\cdot;\Gamma_n\zeta), y_n(\cdot;\Gamma_n\zeta), Y_n(\cdot;\Gamma_n\zeta))$ and $\tilde x_n(\cdot;\Gamma_n\zeta)$, by Lemma 13.47, we have that
\[
\begin{cases}
\lim_{n\to\infty} \hat x_n(\cdot;\Gamma_n\zeta) = \hat x(\cdot;\zeta) & \text{in } L^2_{\mathbb F}(\Omega;C([0,T];H)),\\
\lim_{n\to\infty} y_n(\cdot;\Gamma_n\zeta) = y(\cdot;\zeta) & \text{in } L^2_{\mathbb F}(\Omega;C([0,T];H)),\\
\lim_{n\to\infty} Y_n(\cdot;\Gamma_n\zeta) = Y(\cdot;\zeta) & \text{in } L^2_{\mathbb F}(0,T;H),\\
\lim_{n\to\infty} \tilde x_n(\cdot;\Gamma_n\zeta) = \tilde x(\cdot;\zeta) & \text{in } L^2_{\mathbb F}(\Omega;C([0,T];H)).
\end{cases} \tag{13.232}
\]

Hence, in view of (13.224), we find that
\[
\begin{cases}
\lim_{n\to\infty} X_{n,t}\Gamma_n\zeta = \hat x(t;\zeta) & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{n\to\infty} Y_{n,t}\Gamma_n\zeta = y(t;\zeta) & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{n\to\infty} \mathcal Y_{n,t}\Gamma_n\zeta = Y(t;\zeta) & \text{strongly in } L^2_{\mathbb F}(0,T;H),\\
\lim_{n\to\infty} \widetilde X_{n,t}\Gamma_n\zeta = \tilde x(t;\zeta) & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H).
\end{cases} \tag{13.233}
\]

 lim Yn,t Γn ζ = Y (t; ζ) strongly in L2F (0, T ; H),   n→∞      lim X e n,t Γn ζ = x ˜(t; ζ) strongly in L2Ft (Ω; H).

(13.233)

n→∞

According to (13.230) and (13.233), we obtain that
\[
X(t,\cdot)\zeta = \hat x(t;\zeta), \quad Y(t,\cdot)\zeta = y(t;\zeta), \quad \mathcal Y(t,\cdot)\zeta = Y(t;\zeta), \quad \widetilde X(t,\cdot)\zeta = \tilde x(t;\zeta). \tag{13.234}
\]

Y(t, ·)ζ = y(t; ζ), e ·)ζ = x X(t, ˜(t; ζ).

(13.234)

Also, from the equality (13.218) and noting (13.234), we find that
\[
R\Theta X + B^* Y + D^* \mathcal Y = 0, \qquad \text{for a.e. } (t,\omega) \in [0,T]\times\Omega. \tag{13.235}
\]

for a.e. (t, ω) ∈ [0, T ] × Ω.

(13.235)

Combining (13.232) and (13.234), we get that
\[
\begin{cases}
\lim_{n\to\infty} X_{n,t}\Gamma_n\zeta = X(t,\cdot)\zeta & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{n\to\infty} Y_{n,t}\Gamma_n\zeta = Y(t,\cdot)\zeta & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H),\\
\lim_{n\to\infty} \mathcal Y_{n,t}\Gamma_n\zeta = \mathcal Y(\cdot,\cdot)\zeta & \text{strongly in } L^2_{\mathbb F}(0,T;H),\\
\lim_{n\to\infty} \widetilde X_{n,t}\Gamma_n\zeta = \widetilde X(t,\cdot)\zeta & \text{strongly in } L^2_{\mathcal F_t}(\Omega;H).
\end{cases} \tag{13.236}
\]

 lim Yn,t Γn ζ = Y(·, ·)ζ strongly in L2F (0, T ; H),   n→∞      lim X e n,t Γn ζ = X(t, e ·)ζ strongly in L2 (Ω; H). Ft n→∞

(13.236)


Moreover, from Lemma 13.49, it follows that
\[
X, Y, \widetilde X \in L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda))), \qquad \mathcal Y \in L^4_{\mathbb F}(\Omega;L^2(0,T;L_2(H;H_\lambda))) \tag{13.237}
\]
and
\[
\begin{cases}
\lim_{n\to\infty} X_n = X & \text{in } L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda))),\\
\lim_{n\to\infty} Y_n = Y & \text{in } L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda))),\\
\lim_{n\to\infty} \mathcal Y_n = \mathcal Y & \text{in } L^4_{\mathbb F}(\Omega;L^2(0,T;L_2(H;H_\lambda))),\\
\lim_{n\to\infty} \widetilde X_n = \widetilde X & \text{in } L^4_{\mathbb F}(\Omega;C([0,T];L_2(H;H_\lambda))).
\end{cases} \tag{13.238}
\]

and

(13.237)

in L4F (Ω; C([0, T ]; L2 (H; Hλ ))), in L4F (Ω; C([0, T ]; L2 (H; Hλ ))),

n→∞

  lim Yn = Y   n→∞      lim X en = X e n→∞

in L4F (Ω; L2 (0, T ; L2 (H; Hλ ))),

(13.238)

in L4F (Ω; C([0, T ]; L2 (H; Hλ ))).

Step 2. In this step, we shall prove that
\[
X(t,\cdot)\widetilde X(t,\cdot)^* = I, \qquad \forall t \in [0,T], \ \text{a.s.} \tag{13.239}
\]

∀t ∈ [0, T ], a.s.

(13.239)

For any $\zeta, \rho \in H$ and $t \in [0,T]$, by Itô's formula, we have
\[
\begin{aligned}
&\bigl\langle \hat x_n(t;\Gamma_n\zeta), \tilde x_n(t;\Gamma_n\rho)\bigr\rangle_{H_n} - \bigl\langle \Gamma_n\zeta, \Gamma_n\rho\bigr\rangle_{H_n}\\
&= \int_0^t \bigl\langle (A_n + B_n\Theta_n)\hat x_n(r;\Gamma_n\zeta), \tilde x_n(r;\Gamma_n\rho)\bigr\rangle_{H_n}\,d\tau\\
&\quad + \int_0^t \bigl\langle \hat x_n(r;\Gamma_n\zeta), -\bigl[A_n + B_n\Theta_n - (C_n + D_n\Theta_n)^2\bigr]^\top \tilde x_n(r;\Gamma_n\rho)\bigr\rangle_{H_n}\,d\tau\\
&\quad - \int_0^t \bigl\langle \hat x_n(r;\Gamma_n\zeta), (C_n + D_n\Theta_n)^\top \tilde x_n(r;\Gamma_n\rho)\bigr\rangle_{H_n}\,dW(\tau)
+ \int_0^t \bigl\langle (C_n + D_n\Theta_n)\hat x_n(r;\Gamma_n\zeta), \tilde x_n(r;\Gamma_n\rho)\bigr\rangle_{H_n}\,dW(\tau)\\
&\quad - \int_0^t \bigl\langle (C_n + D_n\Theta_n)\hat x_n(r;\Gamma_n\zeta), (C_n + D_n\Theta_n)^\top \tilde x_n(r;\Gamma_n\rho)\bigr\rangle_{H_n}\,d\tau = 0.
\end{aligned}
\]



t

+

⟨(

0 t⟨

∫ +

[ ( )2 ]⊤ ⟩ x ˆn (r; Γn ζ), −An −Bn Θn + Cn +Dn Θn x ˜n (r; Γn ρ) Hn dτ

0



t

⟨ ( )⊤ ⟩ x ˆn (r; Γn ζ), Cn + Dn Θn x ˜n (r; Γn ρ) Hn dW (τ )

t

⟨(

− ∫

0

− 0

) ⟩ Cn + D n Θn x ˆn (r; Γn ζ), x ˜n (r; Γn ρ) Hn dW (τ )

) ( )⊤ ⟩ Cn + D n Θn x ˆn (r; Γn ζ), Cn + Dn Θn x ˜n (r; Γn ρ) Hn dτ = 0.

Hence, for every t ∈ [0, T ], ⟨ ⟩ ⟨ ⟩ e n,t Γn ρ e n (t)Γn ρ Xn,t Γn ζ, X = Xn (t)Γn ζ, X Hn Hn ⟨ ⟩ ⟨ ⟩ = x ˆn (t; Γn ζ), x ˜n (t; Γn ρ) Hn = Γn ζ, Γn ρ Hn , e n (t)⊤ = In , a.s. Namely, for all t ∈ [0, T ], This implies that Xn (t)X e n (t)⊤ = Xn (t)−1 , X

a.s.

By (13.229), (13.232) and (13.234), for any ζ ∈ H and t ∈ [0, T ],

a.s.

542

13 Linear Quadratic Optimal Control Problems

e ·)∗ ζ = ζ X(t, ·)X(t,

in Hλ ,

a.s.

e ·) ∈ L(H), a.s. that Furthermore, it follows from X(t, ·), X(t, e ·)∗ ζ = ζ X(t, ·)X(t,

in H,

a.s.,

which implies (13.239). Step 3. Put  e ∗, P (·) = Y(·)X(·)    e ∗, Π(·) = Y(·)X(·)   ( )  Λ(·) = Π(·) − P (·) C(·) + D(·)Θ(·) .

(13.240)

e ∗ ∈ L4 (Ω; C([0, T ]; L2 (H ′ ; H))). This, toFrom (13.237), it follows that X(·) F λ gether with (13.240), implies that  P (·) ∈ L2F (Ω; C([0, T ]; L2 (Hλ′ ; Hλ ))), (13.241) Λ(·) ∈ L2 (0, T ; L (H ′ ; H )). 2 λ F λ Put

 e n (·)⊤ , Pn (·) = Yn (·)X

e n (·)⊤ , Πn (·) = Yn (·)X

Λ (·) = Π (·) − P (·)(C (·) + D (·)Θ (·)). n n n n n n Similar to the proof of (13.83) and (13.86), we can show that  Pn (t) = Pn (t)⊤ , a.s., ∀t ∈ [0, T ], Λ (t, ω) = Λ (t, ω)⊤ , n n

a.e. (t, ω) ∈ (0, T ) × Ω.

(13.242)

(13.243)

It follows from Lemma 13.49 that e⊤ − X e ∗ |L4 (Ω;C([0,T ];L (H ′ ;H))) lim |X n 2 F λ

n→∞

e n − X| e L4 (Ω;C([0,T ];L (H;H ))) = 0, = lim |X 2 λ F

(13.244)

n→∞

lim |Yn − Y|L4F (Ω;C([0,T ];L2 (H;Hλ ))) = 0,

(13.245)

lim |Yn − Y|L4F (Ω;L2 (0,T ;L2 (H;Hλ ))) = 0.

(13.246)

n→∞

and n→∞

Now, by (13.240), (13.242) and (13.244)–(13.246), we deduce that lim |Pn − P |L2F (Ω;C([0,T ];L2 (Hλ′ ;Hλ ))) = 0,

n→∞

(13.247)

13.8 Solvability of Operator-Valued Backward Stochastic Riccati Equations

543

and lim |Λn − Λ|L2F (0,T ;L2 (Hλ′ ;Hλ )) = 0.

n→∞

Combining (13.247), (13.248) and (13.243), we obtain that  P (t, ω) = P (t, ω)∗ , a.e. (t, ω) ∈ (0, T ) × Ω. Λ(t, ω) = Λ(t, ω)∗ ,

(13.248)

(13.249)

By It\^o's formula, and noting (13.227)–(13.228), we obtain that
\[
\begin{aligned}
dP_n
&=\Big\{-\big[A_n^{\top}Y_n+C_n^{\top}\mathcal Y_n+M_nX_n\big]X_n^{-1}
+Y_nX_n^{-1}\big[(C_n+D_n\Theta_n)^{2}-A_n-B_n\Theta_n\big]\\
&\qquad\quad-\mathcal Y_nX_n^{-1}(C_n+D_n\Theta_n)\Big\}dt
+\big[\mathcal Y_nX_n^{-1}-Y_nX_n^{-1}(C_n+D_n\Theta_n)\big]dW(t)\\
&=\Big\{-A_n^{\top}P_n-C_n^{\top}\Pi_n-M_n
+P_n\big[(C_n+D_n\Theta_n)^{2}-A_n-B_n\Theta_n\big]
-\Pi_n(C_n+D_n\Theta_n)\Big\}dt\\
&\qquad+\big[\Pi_n-P_n(C_n+D_n\Theta_n)\big]dW(t).
\end{aligned}
\]
Hence, by (13.242), $(P_n(\cdot),\Lambda_n(\cdot))$ solves the following $\mathbb R^{n\times n}$-valued backward stochastic differential equation:
\[
\begin{cases}
dP_n=-\big[P_nA_n+A_n^{\top}P_n+\Lambda_nC_n+C_n^{\top}\Lambda_n+C_n^{\top}P_nC_n\\
\qquad\qquad+(P_nB_n+C_n^{\top}P_nD_n+\Lambda_nD_n)\Theta_n+M_n\big]dt+\Lambda_n\,dW(t) &\text{in }[0,T],\\
P_n(T)=G_n.
\end{cases}
\tag{13.250}
\]
For $\xi_k\in L^{4}_{\mathcal F_t}(\Omega;H_\lambda')$, $u_k\in L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;H_\lambda'))$ and $v_k\in L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;H_\lambda'))$ ($k=1,2$), denote by $x_1(\cdot)$ and $x_2(\cdot)$ respectively the mild solutions to the equations (13.133) and (13.134). For $k=1,2$, let us introduce the following stochastic differential equations:
\[
\begin{cases}
dx_{k,n}=\big(A_nx_{k,n}+u_{k,n}\big)dr+\big(C_nx_{k,n}+v_{k,n}\big)dW(r) &\text{in }[t,T],\\
x_{k,n}(t)=\xi_{k,n},
\end{cases}
\tag{13.251}
\]
where $u_{k,n}=\Gamma_nu_k$, $v_{k,n}=\Gamma_nv_k$ and $\xi_{k,n}=\Gamma_n\xi_k$. Clearly,
\[
\begin{cases}
\lim\limits_{n\to\infty}\xi_{k,n}=\xi_k &\text{in }L^{4}_{\mathcal F_t}(\Omega;H_\lambda'),\\
\lim\limits_{n\to\infty}u_{k,n}=u_k &\text{in }L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;H_\lambda')),\\
\lim\limits_{n\to\infty}v_{k,n}=v_k &\text{in }L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;H_\lambda')).
\end{cases}
\tag{13.252}
\]
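To make the structure of (13.250) concrete, consider its simplest deterministic scalar reduction: take constant scalar coefficients, $C_n=D_n=0$ and the feedback $\Theta=-R^{-1}BP$, so that $\Lambda_n\equiv0$ and (13.250) collapses to the classical Riccati ODE $dP/dt=-(2AP-B^{2}P^{2}/R+M)$, $P(T)=G$. The following backward-Euler sketch integrates this ODE; all numerical coefficient values are illustrative assumptions, not taken from the text.

```python
# Scalar deterministic reduction of the Riccati equation (13.250):
# with C = D = 0 and Theta = -B*P/R, the drift becomes
#   dP/dt = -(2*A*P - B**2 * P**2 / R + M),  P(T) = G.
# All coefficient values below are illustrative assumptions.

A, B, M, R, G, T = -1.0, 1.0, 1.0, 1.0, 0.5, 1.0

def solve_riccati(steps: int = 100_000) -> float:
    """Integrate the scalar Riccati ODE backward from t = T by explicit Euler."""
    dt = T / steps
    p = G
    for _ in range(steps):
        dp = -(2.0 * A * p - (B * p) ** 2 / R + M)
        p -= dt * dp  # stepping backward in time
    return p

p0 = solve_riccati()
assert p0 > 0.0  # P stays positive when M >= 0 and G >= 0
```

With $M\ge0$ and $G\ge0$ the solution stays nonnegative, mirroring the positivity of $P$ established later in the proof.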


13 Linear Quadratic Optimal Control Problems

From Lemma 13.48 and (13.252), for $k=1,2$, we get that
\[
\lim_{n\to\infty}x_{k,n}(\cdot)=x_k(\cdot)\quad\text{in }L^{4}_{\mathbb F}(\Omega;C([t,T];H_\lambda')).
\tag{13.253}
\]

By It\^o's formula, and using (13.250)–(13.251), we arrive at
\[
\begin{aligned}
&d\langle P_nx_{1,n},x_{2,n}\rangle_{H_n}\\
&=\langle dP_n\,x_{1,n},x_{2,n}\rangle_{H_n}+\langle P_n\,dx_{1,n},x_{2,n}\rangle_{H_n}+\langle P_nx_{1,n},dx_{2,n}\rangle_{H_n}\\
&\quad+\langle dP_n\,dx_{1,n},x_{2,n}\rangle_{H_n}+\langle dP_n\,x_{1,n},dx_{2,n}\rangle_{H_n}+\langle P_n\,dx_{1,n},dx_{2,n}\rangle_{H_n}\\
&=-\big\langle\big[P_nA_n+A_n^{\top}P_n+\Lambda_nC_n+C_n^{\top}\Lambda_n+C_n^{\top}P_nC_n\\
&\qquad\qquad+(P_nB_n+C_n^{\top}P_nD_n+\Lambda_nD_n)\Theta_n+M_n\big]x_{1,n},x_{2,n}\big\rangle_{H_n}dr\\
&\quad+\langle\Lambda_nx_{1,n},x_{2,n}\rangle_{H_n}dW(r)+\langle P_n(A_nx_{1,n}+u_{1,n}),x_{2,n}\rangle_{H_n}dr\\
&\quad+\langle P_n(C_nx_{1,n}+v_{1,n}),x_{2,n}\rangle_{H_n}dW(r)+\langle P_nx_{1,n},A_nx_{2,n}+u_{2,n}\rangle_{H_n}dr\\
&\quad+\langle P_nx_{1,n},C_nx_{2,n}+v_{2,n}\rangle_{H_n}dW(r)+\langle\Lambda_n(C_nx_{1,n}+v_{1,n}),x_{2,n}\rangle_{H_n}dr\\
&\quad+\langle\Lambda_nx_{1,n},C_nx_{2,n}+v_{2,n}\rangle_{H_n}dr
+\langle P_n(C_nx_{1,n}+v_{1,n}),C_nx_{2,n}+v_{2,n}\rangle_{H_n}dr\\
&=-\big\langle\big[(P_nB_n+C_n^{\top}P_nD_n+\Lambda_nD_n)\Theta_n+M_n\big]x_{1,n},x_{2,n}\big\rangle_{H_n}dr\\
&\quad+\langle P_nu_{1,n},x_{2,n}\rangle_{H_n}dr+\langle P_nx_{1,n},u_{2,n}\rangle_{H_n}dr\\
&\quad+\langle P_nC_nx_{1,n},v_{2,n}\rangle_{H_n}dr+\langle P_nv_{1,n},C_nx_{2,n}+v_{2,n}\rangle_{H_n}dr\\
&\quad+\langle\Lambda_nv_{1,n},x_{2,n}\rangle_{H_n}dr+\langle\Lambda_nx_{1,n},v_{2,n}\rangle_{H_n}dr\\
&\quad+\langle\Lambda_nx_{1,n},x_{2,n}\rangle_{H_n}dW(r)+\langle P_n(C_nx_{1,n}+v_{1,n}),x_{2,n}\rangle_{H_n}dW(r)\\
&\quad+\langle P_nx_{1,n},C_nx_{2,n}+v_{2,n}\rangle_{H_n}dW(r).
\end{aligned}
\]
This implies that, for any $t\in[0,T]$,
\[
\begin{aligned}
&\mathbb E\langle G_nx_{1,n}(T),x_{2,n}(T)\rangle_{H_n}
+\mathbb E\int_t^T\langle M_n(\tau)x_{1,n}(\tau),x_{2,n}(\tau)\rangle_{H_n}\,d\tau\\
&\quad+\mathbb E\int_t^T\big\langle\big[P_n(\tau)B_n(\tau)+C_n(\tau)^{\top}P_n(\tau)D_n(\tau)+\Lambda_n(\tau)D_n(\tau)\big]\Theta_n(\tau)x_{1,n}(\tau),x_{2,n}(\tau)\big\rangle_{H_n}\,d\tau\\
&=\mathbb E\langle P_n(t)\xi_{1,n},\xi_{2,n}\rangle_{H_n}
+\mathbb E\int_t^T\langle P_n(\tau)u_{1,n}(\tau),x_{2,n}(\tau)\rangle_{H_n}\,d\tau\\
&\quad+\mathbb E\int_t^T\langle P_n(\tau)x_{1,n}(\tau),u_{2,n}(\tau)\rangle_{H_n}\,d\tau
+\mathbb E\int_t^T\langle P_n(\tau)C_n(\tau)x_{1,n}(\tau),v_{2,n}(\tau)\rangle_{H_n}\,d\tau\\
&\quad+\mathbb E\int_t^T\langle P_n(\tau)v_{1,n}(\tau),C_n(\tau)x_{2,n}(\tau)+v_{2,n}(\tau)\rangle_{H_n}\,d\tau\\
&\quad+\mathbb E\int_t^T\langle\Lambda_n(\tau)v_{1,n}(\tau),x_{2,n}(\tau)\rangle_{H_n}\,d\tau
+\mathbb E\int_t^T\langle\Lambda_n(\tau)x_{1,n}(\tau),v_{2,n}(\tau)\rangle_{H_n}\,d\tau.
\end{aligned}
\tag{13.254}
\]
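The identity (13.254) can be checked numerically in the simplest deterministic scalar case ($W\equiv0$, $C_n=D_n=0$, constant coefficients), where it reduces to $G\,x_1(T)x_2(T)+\int_t^T(M+BP\Theta)x_1x_2\,dr=P(t)\xi_1\xi_2+\int_t^T P(u_1x_2+x_1u_2)\,dr$ with $P'=-(2AP+BP\Theta+M)$, $P(T)=G$, and $x_k'=Ax_k+u_k$. The sketch below discretizes both sides by explicit Euler; all numerical values are illustrative assumptions, not taken from the text.

```python
# Deterministic scalar check of the duality identity (13.254) with W = 0,
# C = D = 0:  P solves dP/dt = -(2*A*P + B*P*Theta + M), P(T) = G, and
# x_k solves dx_k/dt = A*x_k + u_k.  Coefficients are illustrative.

A, B, M, G, Theta = -0.5, 1.0, 1.0, 2.0, -0.3
u1, u2 = 1.0, -1.0
xi1, xi2 = 1.0, 0.5
T, N = 1.0, 100_000
dt = T / N

# Backward pass for P.
P = [0.0] * (N + 1)
P[N] = G
for k in range(N, 0, -1):
    P[k - 1] = P[k] + dt * (2 * A * P[k] + B * P[k] * Theta + M)

# Forward passes for x1 and x2 starting at t = 0.
x1, x2 = [xi1], [xi2]
for k in range(N):
    x1.append(x1[-1] + dt * (A * x1[-1] + u1))
    x2.append(x2[-1] + dt * (A * x2[-1] + u2))

lhs = G * x1[N] * x2[N] + dt * sum(
    (M + B * P[k] * Theta) * x1[k] * x2[k] for k in range(N))
rhs = P[0] * xi1 * xi2 + dt * sum(
    P[k] * (u1 * x2[k] + x1[k] * u2) for k in range(N))
assert abs(lhs - rhs) < 1e-3  # equality up to discretization error
```

The two sides agree up to the $O(\Delta t)$ Euler and quadrature errors, which is exactly the product-rule computation carried out above in operator form.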

Step 4. In this step, we derive some properties of $P$ and $\Lambda$.

Let $t\in[0,T)$ and $\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$. Consider the following forward–backward stochastic evolution equation:
\[
\begin{cases}
dx^{t}(r)=\big(A+B\Theta\big)x^{t}\,dr+\big(C+D\Theta\big)x^{t}\,dW(r) &\text{in }(t,T],\\
dy^{t}(r)=-\big(A^{*}y^{t}+C^{*}Y^{t}+Mx^{t}\big)dr+Y^{t}\,dW(r) &\text{in }[t,T),\\
x^{t}(t)=\eta,\qquad y^{t}(T)=Gx^{t}(T).
\end{cases}
\tag{13.255}
\]
By Theorems 3.20 and 4.10, we deduce that (13.255) admits a unique mild solution $\big(x^{t}(\cdot),y^{t}(\cdot),Y^{t}(\cdot)\big)\equiv\big(x^{t}(\cdot;\eta),y^{t}(\cdot;\eta),Y^{t}(\cdot;\eta)\big)\in L^{2}_{\mathbb F}(\Omega;C([t,T];H))\times L^{2}_{\mathbb F}(\Omega;C([t,T];H))\times L^{2}_{\mathbb F}(t,T;H)$ such that
\[
\big|\big(x^{t}(\cdot),y^{t}(\cdot),Y^{t}(\cdot)\big)\big|_{L^{2}_{\mathbb F}(\Omega;C([t,T];H))\times L^{2}_{\mathbb F}(\Omega;C([t,T];H))\times L^{2}_{\mathbb F}(t,T;H)}\le C|\eta|_{L^{2}_{\mathcal F_t}(\Omega;H)}.
\tag{13.256}
\]
For every $r\in[t,T]$, define two families of operators $X^{t}_{r}$ and $Y^{t}_{r}$ on $L^{2}_{\mathcal F_t}(\Omega;H)$ as follows:
\[
X^{t}_{r}\eta\overset{\Delta}{=}x^{t}(r;\eta),\qquad Y^{t}_{r}\eta\overset{\Delta}{=}y^{t}(r;\eta).
\]
For a.e.\ $r\in[t,T]$, define a family of operators $\mathcal Y^{t}(r)$ on $L^{2}_{\mathcal F_t}(\Omega;H)$ by
\[
\mathcal Y^{t}(r)\eta\overset{\Delta}{=}Y^{t}(r;\eta).
\]
It follows from (13.256) that for any $r\in[t,T]$ and $\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$,
\[
\begin{cases}
|X^{t}_{r}\eta|_{L^{2}_{\mathcal F_r}(\Omega;H)}\le C|\eta|_{L^{2}_{\mathcal F_t}(\Omega;H)},\\
|Y^{t}_{r}\eta|_{L^{2}_{\mathcal F_r}(\Omega;H)}\le C|\eta|_{L^{2}_{\mathcal F_t}(\Omega;H)},\\
|\mathcal Y^{t}(\cdot)\eta|_{L^{2}_{\mathbb F}(t,T;H)}\le C|\eta|_{L^{2}_{\mathcal F_t}(\Omega;H)}.
\end{cases}
\tag{13.257}
\]
This indicates that for every $r\in[t,T]$,
\[
X^{t}_{r}\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathcal F_r}(\Omega;H)\big),\qquad
Y^{t}_{r}\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathcal F_r}(\Omega;H)\big)
\]
and $\mathcal Y^{t}(\cdot)\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathbb F}(t,T;H)\big)$.

By (13.217) and (13.255), it is easy to see that, for any $\zeta\in H$, $X^{t}_{r}X(t)\zeta=x^{t}(r;X(t)\zeta)=\hat x(r;\zeta)$. Thus,
\[
Y^{t}_{t}X(t)\zeta=y^{t}(t;X(t)\zeta)=\mathcal Y(t)\zeta
\]


and
\[
\mathcal Y^{t}(\tau)X(t)\zeta=Y^{t}(\tau;X(t)\zeta)=\mathcal Y(\tau)\zeta,\qquad\text{for a.e. }\tau\in[t,T].
\]
This implies that
\[
Y^{t}_{t}=\mathcal Y(t)\widetilde X(t)^{*}\qquad\text{for all }t\in[0,T],\ \text{a.s.},
\tag{13.258}
\]
and
\[
\mathcal Y^{t}(\tau)=\mathcal Y(\tau)\widetilde X(t)^{*}\qquad\text{for a.e. }t\in[0,T],\ \tau\in[t,T],\ \text{a.s.}
\tag{13.259}
\]

Further, the inequality (13.257) implies that for all $t\in[0,T]$ and $\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$,
\[
\mathbb E|Y^{t}_{t}\eta|_{H}^{2}\le C\,\mathbb E|\eta|_{H}^{2},
\tag{13.260}
\]
where $C$ is independent of $t\in[0,T]$. According to (13.260), we find that
\[
\big|\mathcal Y(t)\widetilde X(t)^{*}\big|_{\mathcal L(L^{2}_{\mathcal F_t}(\Omega;H);\,L^{2}_{\mathcal F_t}(\Omega;H))}\le C.
\tag{13.261}
\]
Thus, from (13.240), (13.258) and (13.261), it follows that, for some positive constant $C_0$,
\[
|P(t)|_{\mathcal L(L^{2}_{\mathcal F_t}(\Omega;H);\,L^{2}_{\mathcal F_t}(\Omega;H))}\le C_{0},\qquad\forall\,t\in[0,T].
\tag{13.262}
\]

We claim that
\[
|P(t)|_{\mathcal L(H)}\le C_{0},\qquad\forall\,t\in[0,T],\ \text{a.s.}
\tag{13.263}
\]
Otherwise, there would exist $\varepsilon_0>0$ and $\widetilde\Omega\in\mathcal F_t$ with $\mathbb P(\widetilde\Omega)>0$ such that
\[
|P(t,\omega)|_{\mathcal L(H)}>C_{0}+\varepsilon_{0},\qquad\text{for a.e. }\omega\in\widetilde\Omega.
\]
Let $\{\eta_k\}_{k=1}^{\infty}$ be a dense subset of the unit sphere of $H$. Then, for a.e.\ $\omega\in\widetilde\Omega$, there is an $\eta_\omega\in\{\eta_k\}_{k=1}^{\infty}$ such that
\[
|P(t,\omega)\eta_{\omega}|_{H}\ge|P(t,\omega)|_{\mathcal L(H)}-\frac{\varepsilon_{0}}{2}>C_{0}+\frac{\varepsilon_{0}}{2}.
\]
Write
\[
\begin{cases}
\Omega_{1}=\Big\{\omega\in\widetilde\Omega\ \Big|\ |P(t,\omega)\eta_{1}|_{H}>|P(t,\omega)|_{\mathcal L(H)}-\dfrac{\varepsilon_{0}}{2}\Big\},\\[2mm]
\Omega_{j}=\Big\{\omega\in\widetilde\Omega\ \Big|\ |P(t,\omega)\eta_{j}|_{H}>|P(t,\omega)|_{\mathcal L(H)}-\dfrac{\varepsilon_{0}}{2}\Big\}\Big\backslash\Big(\bigcup_{k=1}^{j-1}\Omega_{k}\Big),
\quad\text{for }j=2,3,\cdots
\end{cases}
\]
Since $P(t,\omega)\eta_j\in L^{2}_{\mathcal F_t}(\Omega;H)$ for all $j\in\mathbb N$, we see that $\{\Omega_j\}_{j=1}^{\infty}\subset\mathcal F_t$ and $\mathbb P(\widetilde\Omega)=\sum_{j=1}^{\infty}\mathbb P(\Omega_j)$. Hence,

\[
\Big|P(t,\omega)\sum_{j=1}^{\infty}\chi_{\Omega_{j}}\eta_{j}\Big|_{H}\ge C_{0}+\frac{\varepsilon_{0}}{2},\qquad\text{for a.e. }\omega\in\widetilde\Omega.
\]
Therefore,
\[
\mathbb E\Big|P(t,\omega)\sum_{j=1}^{\infty}\chi_{\Omega_{j}}\eta_{j}\Big|_{H}^{2}\ge\Big(C_{0}+\frac{\varepsilon_{0}}{2}\Big)^{2}\mathbb P(\widetilde\Omega).
\]
On the other hand, it follows from (13.262) that
\[
\mathbb E\Big|P(t,\omega)\sum_{j=1}^{\infty}\chi_{\Omega_{j}}\eta_{j}\Big|_{H}^{2}
\le C_{0}^{2}\,\mathbb E\Big|\sum_{j=1}^{\infty}\chi_{\Omega_{j}}\eta_{j}\Big|_{H}^{2}
=C_{0}^{2}\sum_{j=1}^{\infty}\mathbb P(\Omega_{j})
=C_{0}^{2}\,\mathbb P(\widetilde\Omega).
\]
The above two inequalities contradict each other. Hence, (13.263) holds. Since the constant $C_0$ (in (13.262)) is independent of $t\in[0,T]$, it holds that
\[
|P(t,\omega)|_{\mathcal L(H)}\le C_{0},\qquad\text{for a.e. }(t,\omega)\in[0,T]\times\Omega.
\tag{13.264}
\]
Similar to the proof of (13.262), we can show that
\[
|P(t)|_{\mathcal L(L^{2}_{\mathcal F_t}(\Omega;H_\lambda');\,L^{2}_{\mathcal F_t}(\Omega;H_\lambda'))}\le C.
\]
Then, similar to the proof of (13.264), we obtain that
\[
|P(t,\omega)|_{\mathcal L(H_\lambda')}\le C,\qquad\text{for a.e. }(t,\omega)\in[0,T]\times\Omega.
\tag{13.265}
\]

From (13.235), it follows that
\[
R\Theta+B^{*}P+D^{*}\Pi=0,\qquad\text{a.e. }(t,\omega)\in[0,T]\times\Omega.
\tag{13.266}
\]
This, together with (13.262), (13.265) and (AS3), implies that
\[
D^{*}\Pi\in\Upsilon_{2}(H;U)\cap\Upsilon_{2}(H_\lambda';\widetilde U).
\tag{13.267}
\]
According to (13.240), (13.267) and (AS3), it holds that
\[
D^{*}\Lambda=D^{*}\Pi-D^{*}P(C+D\Theta)\in\Upsilon_{2}(H;U)\cap\Upsilon_{2}(H_\lambda';\widetilde U).
\tag{13.268}
\]
From (13.249) and (13.268), we see that
\[
\Lambda D\in\Upsilon_{2}(U;H)\cap\Upsilon_{2}(\widetilde U;H_\lambda').
\tag{13.269}
\]

Step 5. In this step, we prove that a variant of (13.135) holds. We shall do this by taking $n\to\infty$ in (13.254).

Denote by $\widetilde U'$ the dual space of $\widetilde U$ with respect to the pivot space $U$. From (AS3), (13.247), (13.243), (13.248) and (13.253), we obtain that




\[
\begin{aligned}
&\lim_{n\to\infty}\mathbb E\int_t^T\big\langle\big[P_n(\tau)B_n(\tau)+C_n(\tau)^{\top}P_n(\tau)D_n(\tau)+\Lambda_n(\tau)D_n(\tau)\big]\Theta_n(\tau)x_{1,n}(\tau),x_{2,n}(\tau)\big\rangle_{H_n}\,d\tau\\
&=\mathbb E\int_t^T\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H_\lambda,H_\lambda'}\,d\tau.
\end{aligned}
\tag{13.270}
\]
By (13.248) and (13.253), we see that, for $k=1,2$,
\[
\begin{aligned}
&\lim_{n\to\infty}\big|\Lambda_n(\cdot)x_{k,n}(\cdot)-\Lambda(\cdot)x_k(\cdot)\big|_{L^{4/3}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda))}\\
&\le\lim_{n\to\infty}\Big[\big|\Lambda_n(\cdot)\big(x_{k,n}(\cdot)-x_k(\cdot)\big)\big|_{L^{4/3}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda))}
+\big|\big(\Lambda_n(\cdot)-\Lambda(\cdot)\big)x_k(\cdot)\big|_{L^{4/3}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda))}\Big]\\
&\le\lim_{n\to\infty}\big|\Lambda(\cdot)\big|_{L^{2}_{\mathbb F}(0,T;\mathcal L_2(H_\lambda';H_\lambda))}\big|x_{k,n}(\cdot)-x_k(\cdot)\big|_{L^{4}_{\mathbb F}(\Omega;L^{\infty}(0,T;H_\lambda'))}\\
&\quad+\lim_{n\to\infty}\big|\Lambda_n(\cdot)-\Lambda(\cdot)\big|_{L^{2}_{\mathbb F}(0,T;\mathcal L_2(H_\lambda';H_\lambda))}\big|x_k(\cdot)\big|_{L^{4}_{\mathbb F}(\Omega;L^{\infty}(0,T;H_\lambda'))}=0.
\end{aligned}
\tag{13.271}
\]

By (13.270)–(13.271), using a similar argument for the other terms in (13.254), we can take $n\to\infty$ on both sides of this equality to get that
\[
\begin{aligned}
&\mathbb E\langle Gx_{1}(T),x_{2}(T)\rangle_{H}+\mathbb E\int_{t}^{T}\langle M(\tau)x_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)^{*}D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&=\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H_\lambda,H_\lambda'}
+\mathbb E\int_{t}^{T}\langle u_{1}(\tau),P(\tau)^{*}x_{2}(\tau)\rangle_{H_\lambda',H_\lambda}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle\Lambda(\tau)x_{2}(\tau),v_{1}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau
+\mathbb E\int_{t}^{T}\langle\Lambda(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau.
\end{aligned}
\tag{13.272}
\]
Noting that $P(t)\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathcal F_t}(\Omega;H)\big)$, we have
\[
\begin{aligned}
&\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H_\lambda,H_\lambda'}
+\mathbb E\int_{t}^{T}\langle u_{1}(\tau),P(\tau)^{*}x_{2}(\tau)\rangle_{H_\lambda',H_\lambda}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&=\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H}
+\mathbb E\int_{t}^{T}\langle u_{1}(\tau),P(\tau)^{*}x_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+v_{2}(\tau)\rangle_{H}\,d\tau.
\end{aligned}
\tag{13.273}
\]
From (13.269), and noting again that $P(t)\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathcal F_t}(\Omega;H)\big)$, we obtain that
\[
\begin{aligned}
&\mathbb E\int_{t}^{T}\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H_\lambda,H_\lambda'}\,d\tau\\
&=\mathbb E\int_{t}^{T}\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H}\,d\tau.
\end{aligned}
\tag{13.274}
\]
Combining (13.272)–(13.274), we conclude that
\[
\begin{aligned}
&\mathbb E\langle Gx_{1}(T),x_{2}(T)\rangle_{H}+\mathbb E\int_{t}^{T}\langle M(\tau)x_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)^{*}D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H}\,d\tau\\
&=\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H}
+\mathbb E\int_{t}^{T}\langle P(\tau)u_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle\Lambda(\tau)v_{1}(\tau),x_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau
+\mathbb E\int_{t}^{T}\langle\Lambda(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{H_\lambda,H_\lambda'}\,d\tau.
\end{aligned}
\tag{13.275}
\]

Step 6. In this step, we prove that a variant of (13.136) holds.

For any $\xi_k\in L^{2}_{\mathcal F_t}(\Omega;H)$, $u_k\in L^{2}_{\mathbb F}(t,T;H)$ and $v_k\in L^{2}_{\mathbb F}(t,T;U)$ ($k=1,2$), denote by $x_1(\cdot)$ and $x_2(\cdot)$ respectively the mild solutions to the equations (13.133) and (13.134). We can find six sequences $\{\xi_k^j\}_{j=1}^{\infty}\subset L^{4}_{\mathcal F_t}(\Omega;H_\lambda')$, $\{u_k^j\}_{j=1}^{\infty}\subset L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;H_\lambda'))$ and $\{v_k^j\}_{j=1}^{\infty}\subset L^{4}_{\mathbb F}(\Omega;L^{2}(t,T;\widetilde U))$, such that
\[
\begin{cases}
\lim\limits_{j\to\infty}\xi_k^j=\xi_k &\text{in }L^{2}_{\mathcal F_t}(\Omega;H),\\
\lim\limits_{j\to\infty}u_k^j=u_k &\text{in }L^{2}_{\mathbb F}(t,T;H),\\
\lim\limits_{j\to\infty}v_k^j=v_k &\text{in }L^{2}_{\mathbb F}(t,T;U).
\end{cases}
\tag{13.276}
\]


Denote by $x_1^j(\cdot)$ (resp.\ $x_2^j(\cdot)$) the mild solution to the equation (13.133) (resp.\ (13.134)) with $\xi_1$, $u_1$ and $v_1$ (resp.\ $\xi_2$, $u_2$ and $v_2$) replaced by $\xi_1^j$, $u_1^j$ and $v_1^j$ (resp.\ $\xi_2^j$, $u_2^j$ and $v_2^j$), and by $x_{k,n}^j$ the solution to (13.251) with $\xi_{k,n}$, $u_{k,n}$ and $v_{k,n}$ replaced respectively by $\Gamma_n\xi_k^j$, $\Gamma_nu_k^j$ and $D_nv_k^j$. It follows from (13.254) that
\[
\begin{aligned}
&\mathbb E\langle G_nx_{1,n}^j(T),x_{2,n}^j(T)\rangle_{H_n}
+\mathbb E\int_t^T\langle M_n(\tau)x_{1,n}^j(\tau),x_{2,n}^j(\tau)\rangle_{H_n}\,d\tau\\
&\quad+\mathbb E\int_t^T\big\langle\big[P_n(\tau)B_n(\tau)+C_n(\tau)^{\top}P_n(\tau)D_n(\tau)+\Lambda_n(\tau)D_n(\tau)\big]\Theta_n(\tau)x_{1,n}^j(\tau),x_{2,n}^j(\tau)\big\rangle_{H_n}\,d\tau\\
&=\mathbb E\langle P_n(t)\xi_1^j,\xi_2^j\rangle_{H_n}
+\mathbb E\int_t^T\Big[\big\langle P_n(\tau)C_n(\tau)x_{1,n}^j(\tau),D_n(\tau)v_2^j(\tau)\big\rangle_{H_n}\\
&\quad+\big\langle P_n(\tau)u_1^j(\tau),x_{2,n}^j(\tau)\big\rangle_{H_n}
+\big\langle P_n(\tau)x_{1,n}^j(\tau),u_2^j(\tau)\big\rangle_{H_n}\\
&\quad+\big\langle P_n(\tau)D_n(\tau)v_1^j(\tau),C_n(\tau)x_{2,n}^j(\tau)+D_n(\tau)v_2^j(\tau)\big\rangle_{H_n}\\
&\quad+\big\langle D_n(\tau)v_1^j(\tau),\Lambda_n(\tau)x_{2,n}^j(\tau)\big\rangle_{H_n}
+\big\langle\Lambda_n(\tau)x_{1,n}^j(\tau),D_n(\tau)v_2^j(\tau)\big\rangle_{H_n}\Big]\,d\tau.
\end{aligned}
\tag{13.277}
\]
From Assumption (AS3) and (13.271), for $k=1,2$ and $j\in\mathbb N$, we have that
\[
\lim_{n\to\infty}D(\cdot,\cdot)^{*}\Lambda_n(\cdot,\cdot)x_{k,n}^j(\cdot,\cdot)
=D(\cdot,\cdot)^{*}\Lambda(\cdot,\cdot)x_k^j(\cdot,\cdot)
\quad\text{in }L^{4/3}_{\mathbb F}(\Omega;L^{2}(t,T;\widetilde U')).
\]
Consequently, we get that
\[
\lim_{n\to\infty}\mathbb E\int_t^T\big\langle D_n(\tau)v_1^j(\tau),\Lambda_n(\tau)x_{2,n}^j(\tau)\big\rangle_{H_n}\,d\tau
=\mathbb E\int_t^T\big\langle v_1^j(\tau),D(\tau)^{*}\Lambda(\tau)x_2^j(\tau)\big\rangle_{\widetilde U,\widetilde U'}\,d\tau.
\tag{13.278}
\]

By (13.268), we see that $D^{*}\Lambda x_2^j\in L^{2}_{\mathbb F}(t,T;U)$. Hence,
\[
\lim_{j\to\infty}\mathbb E\int_t^T\big\langle v_1^j(\tau),D(\tau)^{*}\Lambda(\tau)x_2^j(\tau)\big\rangle_{\widetilde U,\widetilde U'}\,d\tau
=\mathbb E\int_t^T\big\langle v_1(\tau),D(\tau)^{*}\Lambda(\tau)x_2(\tau)\big\rangle_{U}\,d\tau.
\tag{13.279}
\]
Similarly,
\[
\lim_{j\to\infty}\lim_{n\to\infty}\mathbb E\int_t^T\big\langle\Lambda_n(\tau)x_{1,n}^j(\tau),D_n(\tau)v_2^j(\tau)\big\rangle_{H_n}\,d\tau
=\mathbb E\int_t^T\big\langle D(\tau)^{*}\Lambda(\tau)x_1(\tau),v_2(\tau)\big\rangle_{U}\,d\tau.
\tag{13.280}
\]


Noting that $P(t)\in\mathcal L\big(L^{2}_{\mathcal F_t}(\Omega;H);L^{2}_{\mathcal F_t}(\Omega;H)\big)$, we have
\[
\lim_{j\to\infty}P(t)\xi_1^j=P(t)\xi_1\quad\text{in }L^{2}_{\mathcal F_t}(\Omega;H).
\]
Thus, we get that
\[
\lim_{j\to\infty}\mathbb E\langle P(t)\xi_1^j,\xi_2^j\rangle_{H_\lambda,H_\lambda'}
=\lim_{j\to\infty}\mathbb E\langle P(t)\xi_1^j,\xi_2^j\rangle_{H}
=\mathbb E\langle P(t)\xi_1,\xi_2\rangle_{H}.
\]
Since $P(\cdot)\in\Upsilon_2(H)$, we see that
\[
\lim_{j\to\infty}P(\cdot)x_2^j=P(\cdot)x_2\quad\text{in }L^{2}_{\mathbb F}(t,T;H).
\]
Consequently,
\[
\begin{aligned}
\lim_{j\to\infty}\mathbb E\int_t^T\big\langle u_1^j(\tau),P(\tau)x_2^j(\tau)\big\rangle_{H_\lambda',H_\lambda}\,d\tau
&=\lim_{j\to\infty}\mathbb E\int_t^T\big\langle u_1^j(\tau),P(\tau)x_2^j(\tau)\big\rangle_{H}\,d\tau\\
&=\mathbb E\int_t^T\big\langle u_1(\tau),P(\tau)x_2(\tau)\big\rangle_{H}\,d\tau
=\mathbb E\int_t^T\big\langle P(\tau)u_1(\tau),x_2(\tau)\big\rangle_{H}\,d\tau.
\end{aligned}
\]

By (13.279)–(13.280) and a similar proof of (13.135), we can get that
\[
\begin{aligned}
&\mathbb E\langle Gx_{1}(T),x_{2}(T)\rangle_{H}+\mathbb E\int_{t}^{T}\langle M(\tau)x_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\big\langle\big[P(\tau)B(\tau)+C(\tau)^{*}P(\tau)D(\tau)+\Lambda(\tau)D(\tau)\big]\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\big\rangle_{H}\,d\tau\\
&=\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H}
+\mathbb E\int_{t}^{T}\langle P(\tau)u_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),D(\tau)v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)D(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+D(\tau)v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle v_{1}(\tau),D(\tau)^{*}\Lambda(\tau)x_{2}(\tau)\rangle_{U}\,d\tau
+\mathbb E\int_{t}^{T}\langle D(\tau)^{*}\Lambda(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{U}\,d\tau.
\end{aligned}
\tag{13.281}
\]

Step 7. In this step, we prove that the assertion 1) in Definition 13.39 holds.

We first show that $K\ge0$ for a.e.\ $(t,\omega)\in[0,T]\times\Omega$.

Let us replace $v_1$ in (13.133) and $v_2$ in (13.134) by $Dv_1$ and $Dv_2$, respectively. From (13.266), we see that
\[
0=B^{*}P+D^{*}\Lambda+D^{*}P(C+D\Theta)+R\Theta
=B^{*}P+D^{*}PC+D^{*}\Lambda+K\Theta.
\tag{13.282}
\]


Thus,
\[
PB+C^{*}PD+\Lambda D=-\Theta^{*}K^{*}.
\tag{13.283}
\]



Thanks to (13.281) and (13.283), and noting $K(\cdot)^{*}=K(\cdot)$, we obtain that
\[
\begin{aligned}
&\mathbb E\langle Gx_{1}(T),x_{2}(T)\rangle_{H}+\mathbb E\int_{t}^{T}\langle M(\tau)x_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad-\mathbb E\int_{t}^{T}\langle\Theta(\tau)^{*}K(\tau)\Theta(\tau)x_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau\\
&=\mathbb E\langle P(t)\xi_{1},\xi_{2}\rangle_{H}
+\mathbb E\int_{t}^{T}\langle P(\tau)u_{1}(\tau),x_{2}(\tau)\rangle_{H}\,d\tau
+\mathbb E\int_{t}^{T}\langle P(\tau)x_{1}(\tau),u_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)C(\tau)x_{1}(\tau),D(\tau)v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle P(\tau)D(\tau)v_{1}(\tau),C(\tau)x_{2}(\tau)+D(\tau)v_{2}(\tau)\rangle_{H}\,d\tau\\
&\quad+\mathbb E\int_{t}^{T}\langle v_{1}(\tau),D(\tau)^{*}\Lambda(\tau)x_{2}(\tau)\rangle_{U}\,d\tau
+\mathbb E\int_{t}^{T}\langle D(\tau)^{*}\Lambda(\tau)x_{1}(\tau),v_{2}(\tau)\rangle_{U}\,d\tau.
\end{aligned}
\tag{13.284}
\]
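The completing-the-square computation used in the next step (passing from (13.284) to (13.285)) rests on the elementary identity $\langle K\Theta x,\Theta x\rangle-2\langle K\Theta x,u\rangle+\langle Ku,u\rangle=\langle K(u-\Theta x),u-\Theta x\rangle$, valid for self-adjoint $K$. A finite-dimensional check (the $2\times2$ matrix and the vectors below are illustrative assumptions, not from the text):

```python
# Check of the completing-the-square identity for a symmetric K:
#   <K t, t> - 2 <K t, u> + <K u, u> = <K (u - t), (u - t)>,  t = Theta x.
# The matrix and vectors are illustrative assumptions.

def matvec(K, x):
    return [sum(K[i][j] * x[j] for j in range(len(x))) for i in range(len(K))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

K = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite
t = [0.7, -0.2]                # plays the role of Theta x
u = [1.5, 0.4]

lhs = dot(matvec(K, t), t) - 2 * dot(matvec(K, t), u) + dot(matvec(K, u), u)
d = [a - b for a, b in zip(u, t)]
rhs = dot(matvec(K, d), d)
assert abs(lhs - rhs) < 1e-12
```

The symmetry of $K$ is what lets the two cross terms merge into $-2\langle K\Theta x,u\rangle$.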

For any $t\in[0,T)$, $\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$ and $u(\cdot)\in L^{2}_{\mathbb F}(0,T;U)$, choose $\xi_1=\xi_2=\eta$, $u_1=u_2=Bu$ and $v_1=v_2=Du$ in (13.133)–(13.134). Similar to the proof of (13.143), thanks to (13.284), and noting (13.18) and (13.283), we can show that
\[
\begin{aligned}
\mathcal J(\eta;u(\cdot))
&=\frac12\,\mathbb E\Big[\langle P(0)\eta,\eta\rangle_{H}
+\int_{0}^{T}\big(\langle K\Theta x,\Theta x\rangle_{U}-2\langle K\Theta x,u\rangle_{U}+\langle Ku,u\rangle_{U}\big)dr\Big]\\
&=\frac12\,\mathbb E\Big[\langle P(0)\eta,\eta\rangle_{H}
+\int_{0}^{T}\langle K(u-\Theta x),u-\Theta x\rangle_{U}\,dr\Big].
\end{aligned}
\tag{13.285}
\]
Hence,
\[
\frac12\,\mathbb E\langle P(0)\eta,\eta\rangle_{H}
=\mathcal J(\eta;\Theta(\cdot)\bar x(\cdot))\le\mathcal J(\eta;u(\cdot)),
\qquad\forall\,u(\cdot)\in L^{2}_{\mathbb F}(0,T;U),
\tag{13.286}
\]
if and only if $K\ge0$ for a.e.\ $(t,\omega)\in[0,T]\times\Omega$.

Put
\[
\mathcal U_{1}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ K(t,\omega)h=0\ \text{for some nonzero }h\in U\big\}
\]
and
\[
\mathcal U_{2}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ |K(t,\omega)h|_{U}>0\ \text{for all }h\in B_{U}\big\},
\]
where $B_{U}\overset{\Delta}{=}\{h\in U\mid|h|_{U}=1\}$.


Clearly, $\mathcal U_1\cap\mathcal U_2=\emptyset$ and $\mathcal U_1\cup\mathcal U_2=(0,T)\times\Omega$. By the definition of $\mathcal U_2$, we see that
\[
\mathcal U_{2}=\bigcup_{m=1}^{\infty}\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ |K(t,\omega)h|_{U}>\frac1m\ \text{for all }h\in B_{U}\Big\}.
\]
Let $B^{0}_{U}$ be a countable dense subset of $B_{U}$. Then
\[
\begin{aligned}
\mathcal U_{2}
&=\bigcup_{m=1}^{\infty}\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ |K(t,\omega)h|_{U}>\frac1m\ \text{for all }h\in B^{0}_{U}\Big\}\\
&=\bigcup_{m=1}^{\infty}\bigcap_{h\in B^{0}_{U}}\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ |K(t,\omega)h|_{U}>\frac1m\Big\}.
\end{aligned}
\tag{13.287}
\]
Since $K(\cdot,\cdot)h\in L^{2}_{\mathbb F}(0,T;U)$, we get that, for any $h\in U$,
\[
\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ |K(t,\omega)h|_{U}>\frac1m\Big\}\in\mathcal F.
\]
This, together with (13.287), implies that $\mathcal U_2\in\mathcal F$, and hence $\mathcal U_1\in\mathcal F$.

We now show that $K>0$ for a.e.\ $(t,\omega)\in[0,T]\times\Omega$. We use a contradiction argument and assume that this were untrue. Then the measure (given by the product of the Lebesgue measure on $[0,T]$ and the probability measure $\mathbb P$) of $\mathcal U_1$ would be positive. For a.e.\ $(t,\omega)\in\mathcal U_1$, put
\[
\Upsilon(t,\omega)\overset{\Delta}{=}\big\{h\in B_{U}\ \big|\ K(t,\omega)h=0\big\}.
\]
Clearly, $\Upsilon(t,\omega)$ is closed in $U$. Define a map $F:(0,T)\times\Omega\to2^{U}$ as follows:
\[
F(t,\omega)=\begin{cases}\Upsilon(t,\omega), &\text{if }(t,\omega)\in\mathcal U_{1},\\ \{0\}, &\text{if }(t,\omega)\in\mathcal U_{2}.\end{cases}
\]
Then $F(t,\omega)$ is closed for a.e.\ $(t,\omega)\in(0,T)\times\Omega$. We now prove that $F$ is $\mathcal F$-measurable. Let $O$ be a closed subset of $U$ and $O_1=O\cap B_U$. Put
\[
\Sigma_{1}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ F(t,\omega)\cap O\neq\emptyset\big\},\qquad
\Sigma_{2}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ F(t,\omega)\cap O_{1}\neq\emptyset\big\}.
\]
Clearly, $\Sigma_1\supset\Sigma_2$. Moreover,
\[
\Sigma_{1}=\begin{cases}\Sigma_{2}\cup\mathcal U_{2}, &\text{if }0\in O,\\ \Sigma_{2}, &\text{if }0\notin O.\end{cases}
\tag{13.288}
\]


Write
\[
\Sigma_{3}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ |K(t,\omega)h|_{U}>0\ \text{for all }h\in O_{1}\big\}.
\]
Clearly, $\Sigma_2\cap\Sigma_3=\emptyset$ and $(0,T)\times\Omega=\Sigma_2\cup\Sigma_3$. Similar to the above (for the proof of $\mathcal U_2\in\mathcal F$), we can show that $\Sigma_3\in\mathcal F$. Hence, $\Sigma_2\in\mathcal F$, and therefore $\Sigma_1\in\mathcal F$ as well.

Before continuing, let us recall the following known measurable selection result (e.g.\ [334]).

Lemma 13.51. Let $(\widetilde\Omega,\widetilde{\mathcal F})$ be a measurable space. Let $F:(\widetilde\Omega,\widetilde{\mathcal F})\to2^{H}$ be a closed-valued set mapping such that $F(\tilde\omega)\neq\emptyset$ for every $\tilde\omega\in\widetilde\Omega$, and for each open set $O\subset H$,
\[
F^{-1}(O)\overset{\Delta}{=}\big\{\tilde\omega\in\widetilde\Omega\ \big|\ F(\tilde\omega)\cap O\neq\emptyset\big\}\in\widetilde{\mathcal F}.
\]
Then $F$ has a measurable selection $f:\widetilde\Omega\to H$, i.e., there is an $H$-valued, $\widetilde{\mathcal F}$-measurable function $f$ such that $f(\tilde\omega)\in F(\tilde\omega)$ for every $\tilde\omega\in\widetilde\Omega$.

Now we apply Lemma 13.51 to $F(\cdot,\cdot)$ with $(\widetilde\Omega,\widetilde{\mathcal F})=((0,T)\times\Omega,\mathcal F)$ to find an $\mathbb F$-adapted process $f$ such that $Kf=0$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$. Noting that $|f(t,\omega)|_U\le1$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$, we find that $f\in L^{2}_{\mathbb F}(0,T;U)$. Furthermore, we have $|f(t,\omega)|_U=1$ for a.e.\ $(t,\omega)\in\mathcal U_1$, which implies that $|f|_{L^{2}_{\mathbb F}(0,T;U)}>0$.

By (13.285), we see that $\Theta\bar x+f$ is also an optimal control. This contradicts the uniqueness of the optimal control. Hence, $K(t,\omega)$ is invertible for a.e.\ $(t,\omega)\in[0,T]\times\Omega$ (but $K(t,\omega)^{-1}$ need not be bounded).

Further, we show that the domain of $K(t,\omega)^{-1}$ is dense in $U$ for a.e.\ $(t,\omega)\in[0,T]\times\Omega$. Denote by $R(K(t,\omega))$ the range of $K(t,\omega)$. Clearly, $R(K(t,\omega))\subset U$. Put
\[
\widetilde{\mathcal U}_{1}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ R(K(t,\omega))^{\perp}\neq\{0\}\big\}
\]
and
\[
\widetilde{\mathcal U}_{2}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ R(K(t,\omega))^{\perp}=\{0\}\big\}.
\]

Clearly, $\widetilde{\mathcal U}_1\cup\widetilde{\mathcal U}_2=(0,T)\times\Omega$. By the definition of $\widetilde{\mathcal U}_2$, we see that
\[
\widetilde{\mathcal U}_{2}=\bigcup_{m=1}^{\infty}\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ \forall\,\tilde h\in B^{0}_{U},\ \text{there is an }h\in B^{0}_{U}\ \text{such that}\ \langle K(t,\omega)h,\tilde h\rangle_{U}>\frac1m\Big\}.
\]
Then
\[
\widetilde{\mathcal U}_{2}=\bigcup_{m=1}^{\infty}\bigcap_{\tilde h\in B^{0}_{U}}\bigcup_{h\in B^{0}_{U}}\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ \langle K(t,\omega)h,\tilde h\rangle_{U}>\frac1m\Big\}.
\tag{13.289}
\]
Since $K(\cdot,\cdot)h\in L^{2}_{\mathbb F}(0,T;U)$, it follows that, for any $h,\tilde h\in U$,
\[
\Big\{(t,\omega)\in(0,T)\times\Omega\ \Big|\ \langle K(t,\omega)h,\tilde h\rangle_{U}>\frac1m\Big\}\in\mathcal F.
\tag{13.290}
\]
From (13.289) and (13.290), we see that $\widetilde{\mathcal U}_2\in\mathcal F$. Hence, $\widetilde{\mathcal U}_1\in\mathcal F$.

It suffices to prove that $R(K(t,\omega))$ is dense in $U$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$. To show this, we use a contradiction argument. If $R(K(t,\omega))$ were not dense in $U$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$, then the measure of $\widetilde{\mathcal U}_1$ would be positive. For a.e.\ $(t,\omega)\in\widetilde{\mathcal U}_1$, put
\[
\widetilde\Upsilon(t,\omega)\overset{\Delta}{=}\big\{\tilde h\in B_{U}\ \big|\ \langle K(t,\omega)h,\tilde h\rangle_{U}=0,\ \forall\,h\in U\big\}.
\]
Clearly, $\widetilde\Upsilon(t,\omega)$ is closed in $U$. Define a map $\widetilde F:(0,T)\times\Omega\to2^{U}$ as follows:
\[
\widetilde F(t,\omega)=\begin{cases}\widetilde\Upsilon(t,\omega), &\text{if }(t,\omega)\in\widetilde{\mathcal U}_{1},\\ \{0\}, &\text{if }(t,\omega)\in\widetilde{\mathcal U}_{2}.\end{cases}
\]
Then $\widetilde F(t,\omega)$ is closed for a.e.\ $(t,\omega)\in(0,T)\times\Omega$. We now prove that $\widetilde F$ is $\mathcal F$-measurable. Similar to (13.288), put
\[
\widetilde\Sigma_{1}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ \widetilde F(t,\omega)\cap O\neq\emptyset\big\},\qquad
\widetilde\Sigma_{2}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ \widetilde F(t,\omega)\cap O_{1}\neq\emptyset\big\}.
\]
If $0\in O$, then $\widetilde\Sigma_1=\widetilde\Sigma_2\cup\widetilde{\mathcal U}_2$; if $0\notin O$, then $\widetilde\Sigma_1=\widetilde\Sigma_2$. Hence, we only need to show that $\widetilde\Sigma_2\in\mathcal F$. Write
\[
\widetilde\Sigma_{3}\overset{\Delta}{=}\big\{(t,\omega)\in(0,T)\times\Omega\ \big|\ \forall\,\tilde h\in O_{1},\ \text{there is an }h\in B^{0}_{U}\ \text{so that}\ \langle K(t,\omega)h,\tilde h\rangle_{U}>0\big\}.
\]
Then $(0,T)\times\Omega=\widetilde\Sigma_2\cup\widetilde\Sigma_3$ and $\widetilde\Sigma_2\cap\widetilde\Sigma_3=\emptyset$. Hence, it suffices to show that $\widetilde\Sigma_3\in\mathcal F$. Let $O^{0}$ be a countable dense subset of $O_1$. Clearly,

556

13 Linear Quadratic Optimal Control Problems ∞ { ∪

e3 = Σ

˜ 0 (t, ω) ∈ (0, T ) × Ω ∀ h ∈ O1 , there is an h ∈ BU such that

m=1

∞ { ∪

=

(13.291)

˜ 0 (t, ω) ∈ (0, T ) × Ω ∀ h ∈ O0 , there is an h ∈ BU such that

m=1

∞ ∪ ∩

=

} ˜ U > 1 ⟨K(t, ω)h, h⟩ m

} ˜ U > 1 ⟨K(t, ω)h, h⟩ m } ∪ { ˜ U > 1 . (t, ω) ∈ (0, T ) × Ω ⟨K(t, ω)h, h⟩ m 0

m=1 h∈O ˜ 0 h∈BU

˜ ∈ O0 and h ∈ B 0 , noting that K(·, ·)h ∈ L2 (0, T ; U ), we For any m ∈ N, h U F deduce that { } ˜ U > 1 ∈ F. (t, ω) ∈ (0, T ) × Ω ⟨K(t, ω)h, h⟩ (13.292) m e3 ∈ F. Hence, Σ e2 ∈ F. From (13.291) and (13.292), it follows that Σ e Fe) = ((0, T ) × Ω, F) to find Now we apply Lemma 13.51 to Fe (·, ·) with (Ω, an F-adapted process f˜ such that ⟨K(t, ω)h, f˜(t, ω)⟩U = 0,

∀ h ∈ U, a.e. (t, ω) ∈ (0, T ) × Ω.

Since $|\tilde f(t,\omega)|_U\le1$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$, it holds that $\tilde f\in L^{2}_{\mathbb F}(0,T;U)$. Furthermore, we have
\[
|\tilde f(t,\omega)|_{U}=1\qquad\text{for a.e. }(t,\omega)\in\widetilde{\mathcal U}_{1},
\]
which implies that $|\tilde f|_{L^{2}_{\mathbb F}(0,T;U)}>0$.

We claim that $\Theta\bar x+\tilde f$ is also an optimal control. Indeed, by the choice of $\tilde f$, it holds that
\[
\mathbb E\int_{0}^{T}\big\langle K(u-\Theta\bar x-\tilde f),\tilde f\big\rangle_{U}\,dr=0
\]
and
\[
\mathbb E\int_{0}^{T}\big\langle K\tilde f,\,u-\Theta\bar x-\tilde f\big\rangle_{U}\,dr
=\mathbb E\int_{0}^{T}\big\langle\tilde f,\,K(u-\Theta\bar x-\tilde f)\big\rangle_{U}\,dr=0.
\]
Therefore, for any $u(\cdot)\in L^{2}_{\mathbb F}(0,T;U)$,
\[
\mathbb E\int_{0}^{T}\big\langle K(u-\Theta\bar x-\tilde f),\,u-\Theta\bar x-\tilde f\big\rangle_{U}\,dr
=\mathbb E\int_{0}^{T}\big\langle K(u-\Theta\bar x),\,u-\Theta\bar x\big\rangle_{U}\,dr.
\tag{13.293}
\]
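The cancellation behind (13.293) can be checked in a finite-dimensional toy model (the $3\times3$ matrix and the vectors below are illustrative assumptions, not from the text): if $K$ is symmetric and $Kf=0$, the cross terms $\langle Kw,f\rangle=\langle w,Kf\rangle$ vanish, so subtracting $f$ from $w$ does not change $\langle Kw,w\rangle$.

```python
# Toy check of the identity behind (13.293): for symmetric PSD K with K f = 0,
#   <K (w - f), (w - f)> = <K w, w>   for every w,
# since <K w, f> = <w, K f> = 0.  Matrix and vectors are illustrative.

def matvec(K, x):
    return [sum(K[i][j] * x[j] for j in range(len(x))) for i in range(len(K))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Symmetric positive semidefinite matrix with kernel spanned by f = (0, 0, 1).
K = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 0.0, 0.0]]
f = [0.0, 0.0, 1.0]
assert matvec(K, f) == [0.0, 0.0, 0.0]

w = [1.0, -2.0, 3.0]                 # plays the role of u - Theta x
wf = [a - b for a, b in zip(w, f)]
assert abs(dot(matvec(K, wf), wf) - dot(matvec(K, w), w)) < 1e-12
```

This is exactly why perturbing the optimal control by a kernel direction leaves the cost (13.285) unchanged and contradicts uniqueness.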


According to (13.285) and (13.293), we obtain that
\[
\mathcal J(\eta;\Theta(\cdot)\bar x(\cdot)+\tilde f)\le\mathcal J(\eta;u),\qquad\forall\,u(\cdot)\in L^{2}_{\mathbb F}(0,T;U),
\]
which indicates that $\Theta\bar x+\tilde f$ is also an optimal control. This contradicts the uniqueness of the optimal control. Hence, $K(t,\omega)^{-1}$ is densely defined in $U$ for a.e.\ $(t,\omega)\in(0,T)\times\Omega$.

Now, let us prove that $K(t,\omega)^{-1}$ is a closed operator for a.e.\ $(t,\omega)\in(0,T)\times\Omega$. Let $\{\rho_j\}_{j=1}^{\infty}\subset D(K(t,\omega)^{-1})$, $\rho\in U$ and $\hat\rho\in U$ satisfy
\[
\lim_{j\to\infty}\rho_{j}=\rho\quad\text{in }U
\tag{13.294}
\]
and
\[
\lim_{j\to\infty}K(t,\omega)^{-1}\rho_{j}=\hat\rho\quad\text{in }U.
\tag{13.295}
\]
From (13.295), we obtain that
\[
\lim_{j\to\infty}\rho_{j}=K(t,\omega)\hat\rho\quad\text{in }U.
\]
This, together with (13.294), implies that $\rho=K(t,\omega)\hat\rho$. Consequently, $\rho\in D(K(t,\omega)^{-1})$ and $K(t,\omega)^{-1}\rho=\hat\rho$. This indicates that the operator $K(t,\omega)^{-1}$ is closed. Therefore, the assertion 1) in Definition 13.39 holds.

By (13.282), we find that
\[
\Theta=-K^{-1}\big(B^{*}P+D^{*}\Lambda+D^{*}PC\big).
\tag{13.296}
\]

This implies (13.137). Moreover, from (13.275), (13.284) and (13.296), we see that the assertions 2) and 3) in Definition 13.39 hold.

Step 8. In this step, we prove the uniqueness of the transposition solution to (13.17).

Assume that $(P_1(\cdot),\Lambda_1(\cdot))$, $(P_2(\cdot),\Lambda_2(\cdot))\in C_{\mathbb F,w}([0,T];L^{\infty}(\Omega;\mathcal L(H)))\times L^{2}_{\mathbb F,w}(0,T;\mathcal L(H))$ are two transposition solutions to (13.17). Similar to the proof of (13.142), we can obtain that
\[
\begin{aligned}
&\mathbb E\Big[\int_{t}^{T}\big(\langle Mx(r),x(r)\rangle_{H}+\langle Ru(r),u(r)\rangle_{U}\big)dr+\langle Gx(T),x(T)\rangle_{H}\Big]\\
&=\frac12\,\mathbb E\Big(\langle P_{1}(t)\eta,\eta\rangle_{H}+\int_{t}^{T}\langle K_{1}(u-\Theta x),u-\Theta x\rangle_{U}\,dr\Big)\\
&=\frac12\,\mathbb E\Big(\langle P_{2}(t)\eta,\eta\rangle_{H}+\int_{t}^{T}\langle K_{2}(u-\Theta x),u-\Theta x\rangle_{U}\,dr\Big),
\end{aligned}
\tag{13.297}
\]

where $K_j=R+D^{*}P_jD$ for $j=1,2$. By taking $u=\Theta x$ in (13.297), we get that for any $t\in[0,T)$ and $\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$,
\[
\mathbb E\langle P_{1}(t)\eta,\eta\rangle_{H}=\mathbb E\langle P_{2}(t)\eta,\eta\rangle_{H}.
\tag{13.298}
\]


Thus, for any $\xi,\eta\in L^{2}_{\mathcal F_t}(\Omega;H)$, we have that
\[
\mathbb E\langle P_{1}(t)(\eta+\xi),\eta+\xi\rangle_{H}=\mathbb E\langle P_{2}(t)(\eta+\xi),\eta+\xi\rangle_{H}
\]
and
\[
\mathbb E\langle P_{1}(t)(\eta-\xi),\eta-\xi\rangle_{H}=\mathbb E\langle P_{2}(t)(\eta-\xi),\eta-\xi\rangle_{H}.
\]
These, together with $P_1(\cdot)=P_1(\cdot)^{*}$ and $P_2(\cdot)=P_2(\cdot)^{*}$, imply that
\[
\mathbb E\langle P_{1}(t)\eta,\xi\rangle_{H}=\mathbb E\langle P_{2}(t)\eta,\xi\rangle_{H},\qquad\forall\,\xi,\eta\in L^{2}_{\mathcal F_t}(\Omega;H).
\tag{13.299}
\]
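The polarization step can be spelled out as follows (in the real-scalar case for brevity, with $Q\overset{\Delta}{=}P_1(t)-P_2(t)$, which is self-adjoint since $P_1(t)$ and $P_2(t)$ are):

```latex
% Polarization identity behind (13.299): Q = P_1(t) - P_2(t) is self-adjoint
% and E<Q zeta, zeta>_H = 0 for every zeta in L^2_{F_t}(Omega; H), hence
\begin{aligned}
4\,\mathbb{E}\langle Q\eta,\xi\rangle_H
&= \mathbb{E}\langle Q(\eta+\xi),\eta+\xi\rangle_H
 - \mathbb{E}\langle Q(\eta-\xi),\eta-\xi\rangle_H = 0.
\end{aligned}
```

Expanding both quadratic forms, the $\langle Q\eta,\eta\rangle$ and $\langle Q\xi,\xi\rangle$ terms cancel, leaving $2\langle Q\eta,\xi\rangle+2\langle Q\xi,\eta\rangle=4\langle Q\eta,\xi\rangle$ by self-adjointness.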

Hence, $P_1(t)=P_2(t)$ for any $t\in[0,T]$, a.s.

Let $\xi_1=\xi_2=0$, $u_1=0$ and $v_2=0$. By (13.135), we see that for any $u_2\in L^{4}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda'))$ and $v_1\in L^{4}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda'))$,
\[
0=\mathbb E\int_{0}^{T}\big\langle v_{1}(r),\big(\Lambda_{1}(r)-\Lambda_{2}(r)\big)x_{2}(r)\big\rangle_{H_\lambda',H_\lambda}\,dr,
\]
which implies that
\[
\big(\Lambda_{1}-\Lambda_{2}\big)x_{2}=0\quad\text{in }L^{4/3}_{\mathbb F}(\Omega;L^{2}(0,T;H_\lambda)).
\]
This, together with Lemma 13.50, concludes that $\Lambda_1=\Lambda_2$ in $L^{2}_{\mathbb F}(0,T;\mathcal L_2(H_\lambda';H_\lambda))$. Consequently, the desired uniqueness follows.

13.9 Some Examples

In this section, we shall give two illuminating examples of LQ problems governed respectively by stochastic wave and stochastic Schrödinger equations. One can also consider LQ problems for other controlled stochastic partial differential equations; we leave this to interested readers.

Throughout this section, $n\in\mathbb N$ and $O\subset\mathbb R^{n}$ is a bounded domain with a $C^{\infty}$ boundary $\partial O$.

13.9.1 LQ Problems for Stochastic Wave Equations

Consider the following controlled stochastic wave equation:
\[
\begin{cases}
dy_{t}-\Delta y\,dt=b_{1}u\,dt+(ay+b_{2}u)\,dW(t) &\text{in }O\times(0,T),\\
y=0 &\text{on }\partial O\times(0,T),\\
y(0)=y_{0},\quad y_{t}(0)=y_{1} &\text{in }O,
\end{cases}
\tag{13.300}
\]
with the cost functional

\[
\mathcal J(y_{0},y_{1};u)\overset{\Delta}{=}\mathbb E\int_{0}^{T}\!\!\int_{O}\big(q|y|^{2}+r|u|^{2}\big)dx\,dt+\mathbb E\int_{O}g|y(T)|^{2}dx.
\]
Here $(y_0,y_1)\in H^{1}_{0}(O)\times L^{2}(O)$, $u\in L^{2}_{\mathbb F}(0,T;L^{2}(O))$, and
\[
a,b_{1},b_{2},q,r\in L^{\infty}_{\mathbb F}(0,T;C^{2n}(\overline O)),\quad
g\in L^{\infty}_{\mathcal F_T}(\Omega;C^{2n}(\overline O)),\quad
q\ge0,\ r\ge1,\ g\ge0.
\tag{13.301}
\]

Problem (wSLQ) is a concrete example of Problem (SLQ) in the following setting: •

H = H01 (O) × L2 (O) and U = L2 (O);



The operator A is defined as follows:   D(A) = [H 2 (O) ∩ H01 (O)] × H01 (O),    ( ) ( ) ( ) φ φ φ1 1 2   = , ∀ ∈ D(A);  A φ ∆φ φ2 2 1



Bu = (0, b1 u)⊤ , C(y, yt )⊤ = (0, ay)⊤ and Du = (0, b2 u)⊤ ;



The operators M , R and G are given by ∫  ⊤   ⟨M (y, yt ), (y, yt ) ⟩H = q(|∇y|2 + |yt |2 )dx,    O   ∫  ⟨Ru, u⟩U = r|u|2 dx,  O   ∫      ⟨G(y(T ), yt (T ))⊤ , (y(T ), yt (T ))⊤ ⟩H = g(|∇y(T )|2 + |yt (T )|2 )dx. O

Since q ≥ 0, r ≥ 1 and g ≥ 0, we have that Problem (wSLQ) is uniquely solvable. b as follows: Define an operator A  b = H 2 (O) ∩ H 1 (O), D(A) 0 b Aφ = −∆φ,

b ∀φ ∈ D(A).


Denote by $\{\hat\lambda_j\}_{j=1}^{\infty}$ the eigenvalues of $\widehat A$ and by $\{\hat e_j\}_{j=1}^{\infty}$ the corresponding eigenvectors with $|\hat e_j|_{L^{2}(O)}=1$ for $j\in\mathbb N$. Clearly, $\big\{\pm i\hat\lambda_j^{1/2}\big\}_{j=1}^{\infty}$ are the eigenvalues of $A$ and $\Big\{\Big(\pm\dfrac{1}{i\hat\lambda_j^{1/2}}\hat e_j,\ \hat e_j\Big)\Big\}_{j=1}^{\infty}$ are the corresponding eigenvectors. It is well known that $\Big\{\Big(\pm\dfrac{1}{i\hat\lambda_j^{1/2}}\hat e_j,\ \hat e_j\Big)\Big\}_{j=1}^{\infty}$ constitutes an orthonormal basis of $H^{1}_{0}(O)\times L^{2}(O)$. Hence, (AS1) holds.

Denote by $H_\lambda$ the completion of the Hilbert space $H^{1}_{0}(O)\times L^{2}(O)$ under the norm
\[
|(f_{1},f_{2})|_{H_\lambda}^{2}=\sum_{j=1}^{\infty}|\hat\lambda_{j}|^{-n}\big(|f_{1,j}|^{2}+|f_{2,j}|^{2}\big),
\]
for all $f_{1}=\sum_{j=1}^{\infty}f_{1,j}\hat\lambda_{j}^{-1/2}\hat e_{j}\in H^{1}_{0}(O)$ and $f_{2}=\sum_{j=1}^{\infty}f_{2,j}\hat e_{j}\in L^{2}(O)$.

By the asymptotic distribution of the eigenvalues of $\widehat A$ (e.g., [297, Chapter 1, Theorem 1.2.1]), we see that
\[
|I|_{\mathcal L_2(H;H_\lambda)}^{2}
=\sum_{j=1}^{\infty}\Big\langle\Big(\pm i\hat\lambda_{j}^{-1/2}\hat e_{j},\hat e_{j}\Big)^{\top},\Big(\pm i\hat\lambda_{j}^{-1/2}\hat e_{j},\hat e_{j}\Big)^{\top}\Big\rangle_{H_\lambda}
\le C\sum_{j=1}^{\infty}|\hat\lambda_{j}|^{-n}<\infty.
\]
Hence, the embedding from $H^{1}_{0}(O)\times L^{2}(O)$ into $H_\lambda$ is Hilbert–Schmidt.

From the definition of $H_\lambda$, it follows that $H_\lambda'=D(A^{n})$. Let $\alpha\in C^{2n}(\overline O)$. For any $f\in H_\lambda'$, one has
\[
|\alpha f|_{H_\lambda'}=|\alpha f|_{D(A^{n})}=|A^{n}(\alpha f)|_{H^{1}_{0}(O)\times L^{2}(O)}
\le|\alpha|_{C^{2n}(\overline O)}|f|_{D(A^{n})}=|\alpha|_{C^{2n}(\overline O)}|f|_{H_\lambda'}.
\]
From this, we conclude that $C\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(H_\lambda'))$, $G\in L^{\infty}_{\mathcal F_T}(\Omega;\mathcal L(H_\lambda'))$ and $M\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(H_\lambda'))$. Thus, (AS2) holds.

Let $\widetilde U=D(\widehat A^{n})$. Clearly, $\widetilde U$ is dense in $L^{2}(O)$. From the definitions of $B$, $D$ and $R$, we find that $R\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(\widetilde U))$ and $B,D\in L^{2}_{\mathbb F}(0,T;\mathcal L(\widetilde U;H_\lambda'))$. Therefore, (AS3) holds.

13.9.2 LQ Problems for Stochastic Schrödinger Equations

In this section, we consider the LQ problem for the following controlled stochastic Schrödinger equation:
\[
\begin{cases}
dy-i\Delta y\,dt=b_{1}u\,dt+(ay+b_{2}u)\,dW(t) &\text{in }O\times(0,T),\\
y=0 &\text{on }\partial O\times(0,T),\\
y(0)=y_{0} &\text{in }O,
\end{cases}
\tag{13.303}
\]
with the cost functional
\[
\mathcal J(y_{0};u)\overset{\Delta}{=}\mathbb E\int_{0}^{T}\!\!\int_{O}\big(q|y|^{2}+r|u|^{2}\big)dx\,dt+\mathbb E\int_{O}g|y(T)|^{2}dx.
\]
Here $y_0\in L^{2}(O)$, $u\in L^{2}_{\mathbb F}(0,T;L^{2}(O))$, and the coefficients fulfill (13.301).

Our optimal control problem is as follows:

Problem (sSLQ). For each $y_0\in L^{2}(O)$, find a $\bar u(\cdot)\in L^{2}_{\mathbb F}(0,T;L^{2}(O))$ such that
\[
\mathcal J\big(y_{0};\bar u(\cdot)\big)=\inf_{u(\cdot)\in L^{2}_{\mathbb F}(0,T;L^{2}(O))}\mathcal J\big(y_{0};u(\cdot)\big).
\tag{13.304}
\]

Problem (sSLQ) is a concrete example of Problem (SLQ) in the following setting:

Problem (sSLQ) is a concrete example of Problem (SLQ) in the following setting: • •

H = U = L2 (O); The operator A is defined as follows:  D(A) = H 2 (O) ∩ H01 (O), 

• •

Aφ = i∆φ,

∀φ ∈ D(A);

Bu = b1 u, Cy = ay and Du = b2 u; The operators M , R and G are given by ∫ ∫  2  ⟨M y, y⟩ = q|y| dx, ⟨Ru, u⟩ = r|u|2 dx,  H U  O O ∫   ⟨Gy(T ), y(T )⟩H = g|y(T )|2 dx. O

Since q ≥ 0, r ≥ 1 and g ≥ 0, Problem (sSLQ) is uniquely solvable. ∞ Write {µj }∞ j=1 for the eigenvalues of A and {ej }j=1 the corresponding eigenvectors with |ej |L2 (O) = 1 for j ∈ N. It is well known that {ej }∞ j=1 constitutes an orthonormal basis of L2 (O). Hence, (AS1) holds. Denote by Hλ the completion of the Hilbert space L2 (O) with the norm |f |2Hλ

=

∞ ∑ j=1

|µj |

−n

|fj | , 2

for all f =

∞ ∑ j=1

fj ej ∈ L2 (O).

By the asymptotic distribution of the eigenvalues of $A$ (e.g., [297, Chapter 1, Theorem 1.2.1]), we see that
\[
|I|_{\mathcal L_2(H;H_\lambda)}^{2}=\sum_{j=1}^{\infty}\langle e_{j},e_{j}\rangle_{H_\lambda}=\sum_{j=1}^{\infty}|\mu_{j}|^{-n}<\infty.
\]
Hence, the embedding from $L^{2}(O)$ into $H_\lambda$ is Hilbert–Schmidt.

From the definition of $H_\lambda$, it follows that $H_\lambda'=D((iA)^{n/2})$. Let $\alpha\in C^{2n}(\overline O)$. For any $f\in H_\lambda'$, one has
\[
|\alpha f|_{H_\lambda'}=|\alpha f|_{D((iA)^{n/2})}=|(iA)^{n/2}(\alpha f)|_{L^{2}(O)}
\le|\alpha|_{C^{2n}(\overline O)}|f|_{D((iA)^{n/2})}=|\alpha|_{C^{2n}(\overline O)}|f|_{H_\lambda'}.
\]
From this, we conclude that
\[
C\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(H_\lambda')),\qquad
G\in L^{\infty}_{\mathcal F_T}(\Omega;\mathcal L(H_\lambda')),\qquad
M\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(H_\lambda')).
\]
Thus, (AS2) holds.

Let $\widetilde U=D((iA)^{n})$. Clearly, $\widetilde U$ is dense in $L^{2}(O)$. From the definitions of $B$, $D$ and $R$, we find that
\[
R\in L^{\infty}_{\mathbb F}(0,T;\mathcal L(\widetilde U)),\qquad
B,D\in L^{2}_{\mathbb F}(0,T;\mathcal L(\widetilde U;H_\lambda')).
\]
Therefore, (AS3) holds.

13.10 Notes and Comments

Unless otherwise stated, the material of this chapter is mainly based on [248]. Other works related to this chapter can be found in [23, 76, 150, 235], etc.

The LQ problem for (deterministic) finite dimensional systems is one of the three milestones in modern control theory and has been extensively studied for different kinds of control systems. It is fundamentally important for the following reasons:

• It can be used to model many problems in applications;
• Many optimal control problems for nonlinear control systems can be reasonably approximated by LQ problems.

To the best of our knowledge, the deterministic LQ problem was first studied in [22]. In [163], the optimal feedback control was found via the matrix-valued Riccati equation, and hence a systematic LQ control theory was established. This theory was extended to infinite dimensions as well (e.g., [185, 202]).

LQ problems for stochastic finite dimensional systems were first studied in [177] by means of the dynamic programming method. In [351, 352], the Riccati


equation was first introduced to study LQ problems for stochastic differential equations with deterministic coefficients. LQ problems for stochastic differential equations with random coefficients were first studied in [32], in which a matrix-valued nonlinear backward stochastic differential equation of the form (13.47) was derived. These works have inspired much further progress on, and many applications of, LQ problems for stochastic differential equations (e.g., [55, 236, 285, 307, 308, 313] and the rich references cited therein).

Compared with LQ problems for stochastic differential equations, there exist only a quite limited number of works dwelling on some special cases of LQ problems for stochastic distributed parameter systems (e.g., [2, 76, 130, 131, 134, 152, 233, 318]). We list below some of these typical works:

• In [152, 318], Problem (SLQ) was studied under the condition that the diffusion term in (13.1) is specialized as $Cx(t)dW_1(t)+Du(t)dW_2(t)$, where $W_1(\cdot)$ and $W_2(\cdot)$ are mutually independent Brownian motions. With this assumption, the corresponding Riccati equation reads
\[
\begin{cases}
dP=-\big[P(A+A_{1})+(A+A_{1})^{*}P+C^{*}PC+Q-PBK^{-1}B^{*}P\big]dt &\text{in }[0,T),\\
P(T)=G,
\end{cases}
\tag{13.305}
\]
which is a random operator-valued Riccati equation (rather than an operator-valued backward stochastic Riccati equation).
• In [2], under the assumption that the diffusion term in (13.1) is $\sigma\,dW(t)$ with $\sigma$ a suitable $\mathbb F$-adapted $H$-valued process independent of both the state and the control, Problem (SLQ) was studied and the optimal feedback control was obtained by solving a random operator-valued Riccati equation (similar to (13.305)) and a backward stochastic evolution equation.
• In [130], Problem (SLQ) was considered for the case that $R=I_U$, the identity operator on $U$, and $D=0$. In this case, the equation (13.17) is specialized as
\[
\begin{cases}
dP=-\big(PA+A^{*}P+\Lambda C+C^{*}\Lambda+C^{*}PC+Q-PBB^{*}P\big)dt+\Lambda\,dW(t) &\text{in }[0,T),\\
P(T)=G.
\end{cases}
\tag{13.306}
\]
Although (13.306) looks much simpler than (13.17), it is also an operator-valued backward stochastic evolution equation (because the "bad" term "$\Lambda\,dW(t)$" is still in (13.306)). Nevertheless, [130] considered a kind of generalized solution to (13.306), which was a "weak limit" of solutions to suitable finite dimensional approximations of (13.306). A little more precisely, it was shown in [130] that the finite dimensional approximations $P_n$ of $P$ converge in some weak sense, through which $P$ may be obtained as a suitable generalized solution to the equation (13.306), although


nothing can be said about Λ (for the same equation). This is enough for the case D = 0, since the corresponding optimal feedback operator in (13.138) is then specialized as
\[
\Theta(\cdot) = -K(\cdot)^{-1}B(\cdot)^{*}P(\cdot),
\]
which is independent of Λ.
• In [131], the well-posedness of (13.306) was further studied when A is a self-adjoint operator on H and there exist an orthonormal basis {e_j}_{j=1}^∞ of H and an increasing sequence of positive numbers {μ_j}_{j=1}^∞ such that Ae_j = −μ_j e_j for j ∈ ℕ and ∑_{j=1}^∞ μ_j^{−r} < ∞ for some r ∈ (1/4, 1/2). Clearly, this assumption is not satisfied by many controlled stochastic partial differential equations, such as stochastic wave equations, stochastic Schrödinger equations, stochastic KdV equations, stochastic beam equations, etc. It is not even fulfilled by the classical m-dimensional stochastic heat equation for m ≥ 2. The well-posedness result (in [131]) for (13.306) was then applied to Problem (SLQ) in the case that R = I_U and D = 0.
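The structure of a random Riccati equation of the type (13.305) is easiest to see in finite dimensions. The following sketch (with hypothetical constant, deterministic matrices A, A1, B, C, Q, G, K, purely for illustration and not taken from the text) integrates the matrix Riccati equation backward in time by an explicit Euler step and then forms a feedback gain of the shape Θ = −K⁻¹B*P:

```python
import numpy as np

# Finite-dimensional, deterministic-coefficient illustration only; all the
# matrices below are hypothetical data chosen for the demonstration.
def solve_riccati_backward(A, A1, B, C, Q, G, K, T=1.0, steps=2000):
    """Backward Euler for
        dP = -[P(A+A1) + (A+A1)^T P + C^T P C + Q - P B K^{-1} B^T P] dt,
        P(T) = G,
    the pattern of (13.305) with constant matrices; returns P(0)."""
    h = T / steps
    Abar = A + A1
    Kinv = np.linalg.inv(K)
    P = G.astype(float).copy()
    for _ in range(steps):
        rhs = P @ Abar + Abar.T @ P + C.T @ P @ C + Q - P @ B @ Kinv @ B.T @ P
        P = P + h * rhs          # stepping backward in time: P(t-h) ≈ P(t) + h*rhs
        P = 0.5 * (P + P.T)      # re-symmetrize against round-off
    return P

n, m = 3, 2
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.3],
              [0.0, 0.0, -1.5]])
A1 = 0.1 * np.eye(n)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
C = 0.2 * np.eye(n)
Q, G, K = np.eye(n), np.eye(n), np.eye(m)

P0 = solve_riccati_backward(A, A1, B, C, Q, G, K)
Theta = -np.linalg.inv(K) @ B.T @ P0   # feedback gain of the shape -K^{-1} B* P
print(np.allclose(P0, P0.T))
print(np.linalg.eigvalsh(P0).min() > 0)
```

With Q, G positive definite, the computed P(0) stays symmetric and positive definite, mirroring the structure that the abstract theory guarantees.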

In this chapter, we do not assume that R = I_U or D = 0. If D = 0, then the nonlinear term L*K⁻¹L is specialized as PBB*P, which enjoys a "good" sign in the energy estimate; therefore it is not very hard to obtain the well-posedness of the linearized equation of (13.17), at least in some special situations. Once the well-posedness of this linearized equation is established, the well-posedness of (13.17) follows from a fixed point argument (e.g., [130, 131]). However, if D ≠ 0, it is not an easy task even to derive the well-posedness of the linearized version of (13.17). In order to study the well-posedness of (13.17), we introduce a sequence of finite dimensional approximate equations and prove that the limit of the solutions to these approximate equations is the transposition solution to (13.17). To this end, we need to assume that A generates a C₀-group on H. As pointed out in Remark 13.43, this assumption can be dropped (see [235]). Nevertheless, we keep the present method for the following two reasons:



• This method is rather elementary and provides an intrinsic connection between the well-posedness of (13.17) and the forward-backward stochastic evolution equation (13.217). Consequently, it shows the relationship between the Pontryagin-type maximum principle (Theorem 13.32 and Corollary 13.33) and the optimal feedback operator.
• It provides a numerical scheme to compute the solution to (13.17). Of course, since the convergence is very weak, it is not easy to compute the numerical solution to (13.17) by this method. However, as far as we know, it is so far the only convergent numerical scheme available.
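As a caricature of the finite dimensional approximation idea (in a drastically simplified, deterministic setting with hypothetical data: a diagonal generator Ae_j = −j e_j, B = R = identity, and a mode-coupling cost Q, none of which come from the text), one can project onto the first n modes, solve each truncated backward Riccati equation, and watch the leading block of Pₙ(0) settle down as n grows:

```python
import numpy as np

# Hypothetical Galerkin-truncation demo: project onto the first n modes of a
# self-adjoint diagonal generator, solve the truncated backward Riccati
# equation, and compare successive truncations of P_n(0).
def truncated_P0(n, T=1.0, steps=4000):
    h = T / steps
    j = np.arange(1, n + 1, dtype=float)
    A = np.diag(-j)                                    # truncated generator
    Q = 1.0 / (1.0 + np.abs(j[:, None] - j[None, :]))  # mode-coupling cost
    P = Q.copy()                                       # terminal datum G = Q
    for _ in range(steps):
        rhs = P @ A + A.T @ P + Q - P @ P              # B = R = I here
        P = P + h * rhs                                # backward Euler step
        P = 0.5 * (P + P.T)
    return P

blocks = [truncated_P0(n)[:2, :2] for n in (4, 8, 16)]
errs = [np.linalg.norm(blocks[i + 1] - blocks[i]) for i in range(2)]
print(errs[1] < errs[0])   # successive truncations change less and less
```

The shrinking differences between successive truncations illustrate, in this toy setting, why the limit of the finite dimensional approximations can serve as a (generalized) solution; the convergence established in the chapter is of course much weaker and holds in the operator-valued stochastic setting.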

There are many open problems related to the main topic of this chapter. We list below some of them which, in our opinion, are particularly interesting:
1) Control systems with unbounded control operators.


As we have pointed out in Remark 13.1, in this chapter we assume that both B(·) and D(·) are bounded operator-valued processes. As a result, the system (13.1) does not cover controlled stochastic partial differential equations with boundary/pointwise controls. To drop this restriction, one needs to make some further assumptions, such as that the semigroup {S(t)}_{t≥0} has some smoothing effect. When D = 0, some results along this line have been obtained (e.g., [134]). On the other hand, when D ≠ 0, there is no published result in the literature. We believe that one can combine the transposition methods introduced in Section 7.2 and in this chapter to deal with that case, but several technical difficulties must be overcome.
2) LQ problems with incomplete state information.
In this chapter, we only consider LQ problems with complete state information. When the control system is only partially observed, that is, when all we know is z = Ox + ξ, where O is a suitable operator and ξ is the observation noise, the problem becomes considerably harder. LQ problems for partially observed stochastic differential equations have been extensively studied (e.g., [27, 339]). However, as far as we know, except for some very particular cases, such as when the coefficients are deterministic and the noise is additive, there exist no results on LQ problems for stochastic distributed parameter systems with incomplete state information.

References

1. A. A. Agrachev and Y. L. Sachkov. Control theory from the geometric viewpoint. Encyclopaedia of Mathematical Sciences, 87. Control Theory and Optimization, II. Springer-Verlag, Berlin, 2004.
2. N. U. Ahmed. Stochastic control on Hilbert space for linear evolution equations with random operator-valued coefficients. SIAM J. Control Optim. 19 (1981), 401–430.
3. A. Al-Hussein. Strong, mild and weak solutions of backward stochastic evolution equations. Random Oper. Stochastic Equations. 13 (2005), 129–138.
4. N. Anantharaman, M. Léautaud and F. Macià. Wigner measures and observability for the Schrödinger equation on the disk. Invent. Math. 206 (2016), 485–599.
5. A. Arapostathis, V. S. Borkar and M. K. Ghosh. Ergodic control of diffusion processes. Encyclopedia of Mathematics and its Applications, 143. Cambridge University Press, Cambridge, 2012.
6. K. J. Åström. Introduction to stochastic control theory. Mathematics in Science and Engineering, 70. Academic Press, New York-London, 1970.
7. S. A. Avdonin and S. A. Ivanov. Families of exponentials. The method of moments in controllability problems for distributed parameter systems. Translated from the Russian and revised by the authors. Cambridge University Press, Cambridge, 1995.
8. K. Balachandran and J. P. Dauer. Controllability of nonlinear systems in Banach spaces: a survey. J. Optim. Theory Appl. 115 (2002), 7–28.
9. A. V. Balakrishnan. Optimal Control Problems in Banach Spaces. J. Soc. Indust. Appl. Math. Ser. A Control. 3 (1965), 152–180.
10. H. T. Banks and K. Kunisch. Estimation techniques for distributed parameter systems. Systems & Control: Foundations & Applications, 1. Birkhäuser Boston, Inc., Boston, MA, 1989.
11. V. Barbu. Note on the internal stabilization of stochastic parabolic equations with linearly multiplicative Gaussian noise. ESAIM Control Optim. Calc. Var. 19 (2013), 1055–1063.
12. V. Barbu, A. Rășcanu and G. Tessitore.
Carleman estimate and controllability of linear stochastic heat equations. Appl. Math. Optim. 47 (2003), 97–120.
13. V. Barbu and L. Tubaro. Exact controllability of stochastic differential equations with multiplicative noise. Systems Control Lett. 122 (2018), 19–23.
© Springer Nature Switzerland AG 2021. Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3

14. C. Bardos, G. Lebeau and J. Rauch. Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary. SIAM J. Control Optim. 30 (1992), 1024–1065.
15. A. Bashirov. Partially observable linear systems under dependent noises. Birkhäuser Verlag, Basel-Boston-Berlin, 2003.
16. A. Bashirov and K. R. Kerimov. On controllability conception for stochastic systems. SIAM J. Control Optim. 35 (1997), 384–398.
17. A. Bashirov and N. I. Mahmudov. On concepts of controllability for deterministic and stochastic systems. SIAM J. Control Optim. 37 (1999), 1808–1821.
18. K. Beauchard, Y. Chitour, D. Kateb and R. Long. Spectral controllability for 2D and 3D linear Schrödinger equations. J. Funct. Anal. 256 (2009), 3916–3976.
19. K. Beauchard and C. Laurent. Local controllability of 1D linear and nonlinear Schrödinger equations with bilinear control. J. Math. Pures Appl. 94 (2010), 520–554.
20. R. Bellman. Dynamic Programming. Princeton Univ. Press, Princeton, New Jersey, 1957.
21. R. Bellman. Dynamic programming and stochastic control process. Inform. & Control. 1 (1958), 228–239.
22. R. Bellman, I. Glicksberg and O. A. Gross. Some aspects of the mathematical theory of control processes. Rand Corporation, Santa Monica, Calif., Rep. No. R-313, 1958.
23. P. Benner and C. Trautwein. A linear quadratic control problem for the stochastic heat equation driven by Q-Wiener processes. J. Math. Anal. Appl. 457 (2018), 776–802.
24. A. Bensoussan. Contrôle optimal stochastique de systèmes gouvernés par des équations aux dérivées partielles de type parabolique. Rend. Mat. 2 (1969), 135–173.
25. A. Bensoussan. Lecture on Stochastic Control. In: Nonlinear Filtering and Stochastic Control. Edited by S. K. Mitter and A. Moro. Lecture Notes in Mathematics, 972. Springer-Verlag, Berlin, 1982, 1–62.
26. A. Bensoussan. Stochastic maximum principle for distributed parameter systems. J. Franklin Inst. 315 (1983), 387–406.
27. A. Bensoussan.
Stochastic control of partially observable systems. Cambridge University Press, Cambridge, 1992.
28. L. D. Berkovitz. Optimal control theory. Springer-Verlag, New York, 1974.
29. D. P. Bertsekas. Dynamic programming and stochastic control. Mathematics in Science and Engineering, 125. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1976.
30. M. Kh. Bikchantayev. Optimal control of systems with distributed parameters and stochastic signals. Engrg. Cybernetics. 1969 (1969), 147–157.
31. J.-M. Bismut. Analyse convexe et probabilités. Ph.D. thesis, Faculté des Sciences de Paris, Paris, France, 1973.
32. J.-M. Bismut. Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim. 14 (1976), 419–444.
33. J.-M. Bismut. An introductory approach to duality in optimal stochastic control. SIAM Rev. 20 (1978), 62–78.
34. V. G. Boltyanski, R. V. Gamkrelidze and L. S. Pontryagin. The theory of optimal processes, I: The maximum principle. Izv. Akad. Nauk SSSR Ser. Mat.

24 (1960), 3–42 (in Russian); English transl. in Amer. Math. Soc. Transl. 18 (1961), 341–382.
35. U. Boscain, M. Caponigro, T. Chambrion and M. Sigalotti. A weak spectral condition for the controllability of the bilinear Schrödinger equation with application to the control of a rotating planar molecule. Comm. Math. Phys. 311 (2012), 423–455.
36. R. Bouldin. A counterexample in the factorization of Banach space operators. Proc. Amer. Math. Soc. 68 (1978), 327–327.
37. A. Bressan and B. Piccoli. Introduction to the mathematical theory of control. AIMS Series on Applied Mathematics, 2. American Institute of Mathematical Sciences (AIMS), Springfield, MO, 2007.
38. W. Bryc. The normal distribution. Characterizations with applications. Lecture Notes in Statistics, 100. Springer-Verlag, New York, 1995.
39. R. Buckdahn and J. Li. Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM J. Control Optim. 47 (2008), 444–475.
40. R. Buckdahn, M. Quincampoix and G. Tessitore. A characterization of approximately controllable linear stochastic differential equations. Stochastic partial differential equations and applications VII. Lect. Notes Pure Appl. Math., 245, Chapman & Hall/CRC, Boca Raton, FL, 2006, 53–60.
41. J. Bourgain, N. Burq and M. Zworski. Control for Schrödinger equations on 2-tori: rough potentials. J. Eur. Math. Soc. 15 (2013), 1597–1628.
42. N. Burq and M. Zworski. Geometric control in the presence of a black box. J. Amer. Math. Soc. 17 (2004), 443–471.
43. A. L. Bukhgeĭm and M. V. Klibanov. Uniqueness in the large of a class of multidimensional inverse problems. Dokl. Akad. Nauk SSSR. 260 (1981), 269–272.
44. A. G. Butkovskiĭ. Distributed Control Systems. Translated from the Russian by S. Technica, Inc. Translation Editor: George M. Kranc. Modern Analytic and Computational Methods in Science and Mathematics, No. 11. American Elsevier Publishing Co., Inc., New York, 1969.
45. P. Cannarsa and C. Sinestrari. Semiconcave functions, Hamilton-Jacobi equations, and optimal control. Birkhäuser, Boston, 2004.
46. P. Cannarsa, P. Martinez and J. Vancostenoble. Global Carleman estimates for degenerate parabolic operators with applications. Mem. Amer. Math. Soc. 239 (2016), no. 1133, ix+209 pp.
47. T. Carleman. Sur un problème d'unicité pour les systèmes d'équations aux dérivées partielles à deux variables indépendantes. Ark. Mat. Astr. Fys. 26 B. 17 (1939), 1–9.
48. R. Carmona. Lectures on BSDEs, stochastic control, and stochastic differential games with financial applications. Financial Mathematics, 1. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2016.
49. E. Casas and F. Tröltzsch. Recent advances in the analysis of pointwise state-constrained elliptic optimal control problems. ESAIM Control Optim. Calc. Var. 16 (2010), 581–600.
50. F. W. Chaves-Silva, X. Zhang and E. Zuazua. Controllability of evolution equations with memory. SIAM J. Control Optim. 55 (2017), 2437–2459.
51. H. F. Chen. On stochastic observability and controllability. Automatica. 16 (1980), 179–190.

52. H. F. Chen. Stochastic approximation and its applications. Nonconvex Optimization and its Applications, 64. Kluwer Academic Publishers, Dordrecht, 2002.
53. H. F. Chen and L. Guo. Identification and stochastic adaptive control. Systems & Control: Foundations & Applications. Birkhäuser Boston, Inc., Boston, MA, 1991.
54. M. Chen. Null controllability with constraints on the state for stochastic heat equation. J. Dyn. Control Syst. 24 (2018), 39–50.
55. S. Chen, X. Li and X. Zhou. Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 36 (1998), 1685–1702.
56. Z. Chen and L. Epstein. Ambiguity, risk, and asset returns in continuous time. Econometrica. 70 (2002), 1403–1443.
57. K. Chung. A course in probability theory. Second edition. Probability and Mathematical Statistics, 21. Academic Press, New York-London, 1974.
58. K. Chung and R. J. Williams. Introduction to stochastic integration. Modern Birkhäuser Classics. Birkhäuser/Springer, New York, 2014.
59. F. Clarke. Functional analysis, calculus of variations and optimal control. Graduate Texts in Mathematics, 264. Springer, London, 2013.
60. J. B. Conway. A Course in Functional Analysis. Second edition. Springer-Verlag, New York, 1994.
61. J.-M. Coron. Control and nonlinearity. Mathematical Surveys and Monographs, 136. American Mathematical Society, Providence, RI, 2007.
62. J.-M. Coron. On the controllability of nonlinear partial differential equations. Proceedings of the International Congress of Mathematicians. Vol. I, 238–264, Hindustan Book Agency, New Delhi, 2010.
63. J.-M. Coron, O. Glass and Z. Wang. Exact boundary controllability for 1-D quasilinear hyperbolic systems with a vanishing characteristic speed. SIAM J. Control Optim. 48 (2009/10), 3105–3122.
64. M. G. Crandall and P.-L. Lions. Viscosity solutions of Hamilton-Jacobi equations. Trans. Amer. Math. Soc. 277 (1983), 1–42.
65. R. Curtain and K. Morris.
Transfer functions of distributed parameter systems: a tutorial. Automatica J. IFAC. 45 (2009), 1101–1116.
66. R. F. Curtain and A. J. Pritchard. Infinite dimensional linear systems theory. Lecture Notes in Control and Information Sciences, 8. Springer-Verlag, Berlin-New York, 1978.
67. R. F. Curtain and H. Zwart. An introduction to infinite-dimensional linear systems theory. Texts in Applied Mathematics, 21. Springer-Verlag, New York, 1995.
68. G. Da Prato and J. Zabczyk. Stochastic equations in infinite dimensions. Second edition. Encyclopedia of Mathematics and Its Applications, 152. Cambridge University Press, Cambridge, 2014.
69. L. Dai, Y. Zhang and J. Zou. Numerical schemes for forward-backward stochastic differential equations using transposition solutions. Preprint.
70. R. Dalang, D. Khoshnevisan, C. Mueller, D. Nualart and Y. Xiao. A minicourse on stochastic partial differential equations. Lecture Notes in Mathematics, 1962. Springer-Verlag, Berlin, 2009.
71. D. A. Dawson. Stochastic evolution equations. Math. Biosci. 15 (1972), 287–316.
72. A. de Bouard and A. Debussche. A stochastic nonlinear Schrödinger equation with multiplicative noise. Comm. Math. Phys. 205 (1999), 161–181.

73. F. Delbaen. BSDE and risk measures. Proceedings of the International Congress of Mathematicians, Vol. IV. Hyderabad, India, 2010, 3054–3060.
74. J. Diestel and J. J. Uhl, Jr. Vector Measures. Math. Surveys, vol. 15. American Mathematical Society, Providence, R.I., 1977.
75. F. Dou and Q. Lü. Partial approximate controllability for linear stochastic control systems. SIAM J. Control Optim. 57 (2019), 1209–1229.
76. F. Dou and Q. Lü. Time-inconsistent linear quadratic optimal control problems for stochastic evolution equations. SIAM J. Control Optim. 58 (2020), 485–509.
77. A. Doubova, E. Fernández-Cara, M. González-Burgos and E. Zuazua. On the controllability of parabolic systems with a nonlinear term involving the state and the gradient. SIAM J. Control Optim. 41 (2002), 798–819.
78. A. Doubova, A. Osses and J.-P. Puel. Exact controllability to trajectories for semilinear heat equations with discontinuous diffusion coefficients. ESAIM Control Optim. Calc. Var. 8 (2002), 621–661.
79. K. Du and Q. Meng. A maximum principle for optimal control of stochastic evolution equations. SIAM J. Control Optim. 51 (2013), 4343–4362.
80. T. Duyckaerts, X. Zhang and E. Zuazua. On the optimality of the observability inequalities for parabolic and hyperbolic systems with potentials. Ann. Inst. H. Poincaré Anal. Non Linéaire. 25 (2008), 1–41.
81. D. Duffie and L. G. Epstein. Stochastic differential utility. Econometrica. 60 (1992), 353–394.
82. T. E. Duncan. Some topics in stochastic control. Discrete Contin. Dyn. Syst. Ser. B. 14 (2010), 1361–1373.
83. T. E. Duncan, B. Maslowski and B. Pasik-Duncan. Adaptive boundary and point control of linear stochastic distributed parameter system. SIAM J. Control Optim. 32 (1994), 648–672.
84. T. Dunst and A. Prohl. The forward-backward stochastic heat equation: numerical analysis and simulation. SIAM J. Sci. Comput. 38 (2016), A2725–A2755.
85. Y. V. Egorov. Some problems in the theory of optimal control. Ž. Vyčisl. Mat. i Mat.
Fiz. 3 (1963), 887–904. (In Russian.)
86. N. El Karoui, S. Peng and M. C. Quenez. Backward stochastic differential equations in finance. Math. Finance. 7 (1997), 1–71.
87. K. J. Engel and R. Nagel. One-parameter semigroups for linear evolution equations. Graduate Texts in Mathematics, 194. Springer-Verlag, New York, 2000.
88. S. Ervedoza and E. Zuazua. Numerical approximation of exact controls for waves. Springer Briefs in Mathematics. Springer, New York, 2013.
89. G. Fabbri, F. Gozzi and A. Święch. Stochastic optimal control in infinite dimension. Dynamic programming and HJB equations. With a contribution by M. Fuhrman and G. Tessitore. Probability Theory and Stochastic Modelling, 82. Springer, Cham, 2017.
90. P. L. Falb. Infinite-dimensional filtering: The Kalman-Bucy filter in Hilbert space. Information and Control. 11 (1967), 102–137.
91. H. O. Fattorini. Control in finite time of differential equations in Banach space. Comm. Pure Appl. Math. 19 (1966), 17–34.
92. H. O. Fattorini. Infinite-dimensional optimization and control theory. Encyclopedia of Mathematics and its Applications, 62. Cambridge University Press, Cambridge, 1999.

93. H. O. Fattorini and D. L. Russell. Exact controllability theorems for linear parabolic equations in one space dimension. Arch. Rational Mech. Anal. 43 (1971), 272–292.
94. A. Fernández-Bertolin and J. Zhong. Hardy's uncertainty principle and unique continuation property for stochastic heat equations. ESAIM Control Optim. Calc. Var. 26 (2020), no. 9.
95. E. Fernández-Cara. The control of PDEs: some basic concepts, recent results and open problems. Topics in mathematical modeling and analysis, 49–107. Jindřich Nečas Cent. Math. Model. Lect. Notes, 7. Matfyzpress, Prague, 2012.
96. W. H. Fleming. Distributed parameter stochastic systems in population biology. Control theory, numerical methods and computer systems modelling (Internat. Sympos., IRIA LABORIA, Rocquencourt, 1974). Lecture Notes in Econom. and Math. Systems, 107, Springer, Berlin, 1975, 179–191.
97. W. H. Fleming and M. Nisio. On the existence of optimal stochastic controls. J. Math. Mech. 15 (1966), 777–794.
98. W. H. Fleming and H. M. Soner. Controlled Markov processes and viscosity solutions. Second edition. Stochastic Modelling and Applied Probability, 25. Springer, New York, 2006.
99. W. H. Fleming and P. E. Souganidis. On the existence of value functions of two-player, zero-sum stochastic differential games. Indiana Univ. Math. J. 38 (1989), 293–314.
100. J. J. Florentin. Optimal control of continuous time, Markov, stochastic systems. J. Electronics Control. 10 (1961), 473–488.
101. C. Foias, H. Özbay and A. Tannenbaum. Robust control of infinite-dimensional systems. Frequency domain methods. Lecture Notes in Control and Information Sciences, 209. Springer-Verlag London, Ltd., London, 1996.
102. H. Frankowska. Optimal control under state constraints. Proceedings of the International Congress of Mathematicians, Vol. IV. Hyderabad, India, 2010, 2915–2943.
103. H. Frankowska and Q. Lü.
First and second order necessary optimality conditions for controlled stochastic evolution equations with control and state constraints. J. Differential Equations. 268 (2020), 2949–3015.
104. H. Frankowska and X. Zhang. Necessary conditions for stochastic optimal control problems in infinite dimensions. Stochastic Process. Appl. 130 (2020), 4081–4103.
105. P. K. Friz and M. Hairer. A course on rough paths. With an introduction to regularity structures. Universitext. Springer, Cham, 2014.
106. X. Fu. A weighted identity for partial differential operators of second order and its applications. C. R. Math. Acad. Sci. Paris. 342 (2006), 579–584.
107. X. Fu and X. Liu. A weighted identity for stochastic partial differential operators and its applications. J. Differential Equations. 262 (2017), 3551–3582.
108. X. Fu and X. Liu. Controllability and observability of some stochastic complex Ginzburg-Landau equations. SIAM J. Control Optim. 55 (2017), 1102–1127.
109. X. Fu, X. Liu, Q. Lü and X. Zhang. An internal observability estimate for stochastic hyperbolic equations. ESAIM Control Optim. Calc. Var. 22 (2016), 1382–1411.
110. X. Fu, Q. Lü and X. Zhang. Carleman Estimates for Second Order Partial Differential Operators and Applications, a Unified Approach. Springer, Cham, 2019.

111. X. Fu, J. Yong and X. Zhang. Exact controllability for the multidimensional semilinear hyperbolic equations. SIAM J. Control Optim. 46 (2007), 1578–1614.
112. M. Fuhrman, Y. Hu and G. Tessitore. Stochastic maximum principle for optimal control of SPDEs. Appl. Math. Optim. 68 (2013), 181–217.
113. M. Fuhrman, Y. Hu and G. Tessitore. Stochastic maximum principle for optimal control of partial differential equations driven by white noise. Stoch. Partial Differ. Equ. Anal. Comput. 6 (2018), 255–285.
114. M. Fuhrman and C. Orrier. Stochastic maximum principle for optimal control of a class of nonlinear SPDEs with dissipative drift. SIAM J. Control Optim. 54 (2016), 341–371.
115. T. Funaki. Random motion of strings and related stochastic evolution equations. Nagoya Math. J. 89 (1983), 129–193.
116. A. V. Fursikov. Optimal control of distributed systems. Theory and applications. Translations of Mathematical Monographs, 187. American Mathematical Society, Providence, RI, 2000.
117. A. V. Fursikov and O. Yu. Imanuvilov. Controllability of Evolution Equations. Lecture Notes Series 34, Research Institute of Mathematics, Seoul National University, Seoul, Korea, 1994.
118. P. Gao. Carleman estimate and unique continuation property for the linear stochastic Korteweg-de Vries equation. Bull. Aust. Math. Soc. 90 (2014), 283–294.
119. P. Gao. Global Carleman estimates for linear stochastic Kawahara equation and their applications. Math. Control Signals Systems. 28 (2016), Art. 21, 22 pp.
120. P. Gao. Carleman estimates for forward and backward stochastic fourth order Schrödinger equations and their applications. Evol. Equ. Control Theory. 7 (2018), 465–499.
121. P. Gao, M. Chen and Y. Li. Observability estimates and null controllability for forward and backward linear stochastic Kuramoto-Sivashinsky equations. SIAM J. Control Optim. 53 (2015), 475–500.
122. J. Glimm. Nonlinear and stochastic phenomena: the grand challenge for partial differential equations. SIAM Rev.
33 (1991), 626–643.
123. R. Glowinski, J.-L. Lions and J. He. Exact and approximate controllability for distributed parameter systems. A numerical approach. Encyclopedia of Mathematics and its Applications, 117. Cambridge University Press, Cambridge, 2008.
124. Ĭ. Ī. Gīhman and A. V. Skorohod. Controlled stochastic processes. Springer-Verlag, New York-Heidelberg, 1979.
125. D. Goreac. A Kalman-type condition for stochastic approximate controllability. C. R. Math. Acad. Sci. Paris. 346 (2008), 183–188.
126. D. Goreac. Approximate controllability for linear stochastic differential equations in infinite dimensions. Appl. Math. Optim. 60 (2009), 105–132.
127. D. Goreac. Approximately reachable directions for piecewise linear switched systems. Math. Control Signals Systems. 31 (2019), 333–362.
128. F. Gozzi, S. S. Sritharan and A. Święch. Bellman equations associated to the optimal feedback control of stochastic Navier-Stokes equations. Comm. Pure Appl. Math. 58 (2005), 671–700.
129. W. Grecksch and C. Tudor. Stochastic evolution equations. A Hilbert space approach. Mathematical Research, 85. Akademie-Verlag, Berlin, 1995.

130. G. Guatteri and G. Tessitore. On the backward stochastic Riccati equation in infinite dimensions. SIAM J. Control Optim. 44 (2005), 159–194.
131. G. Guatteri and G. Tessitore. Well posedness of operator valued backward stochastic Riccati equations in infinite dimensional spaces. SIAM J. Control Optim. 52 (2014), 3776–3806.
132. M. Gunzburger and J. Ming. Optimal control of stochastic flow over a backward-facing step using reduced-order modeling. SIAM J. Sci. Comput. 33 (2011), 2641–2663.
133. B. Guo and J. Wang. Control of wave and beam PDEs. The Riesz basis approach. Communications and Control Engineering Series. Springer, Cham, 2019.
134. C. Hafizoglu, I. Lasiecka, T. Levajković, H. Mena and A. Tuffaha. The stochastic linear quadratic control problem with singular estimates. SIAM J. Control Optim. 55 (2017), 595–626.
135. P. R. Halmos. A Hilbert space problem book. Springer-Verlag, New York-Berlin, 1982.
136. M. Hairer. Solving the KPZ equation. Ann. of Math. 178 (2013), 559–664.
137. M. Hairer. A theory of regularity structures. Invent. Math. 198 (2014), 269–504.
138. E. Hausenblas and J. Seidler. A note on maximal inequality for stochastic convolutions. Czechoslovak Math. J. 51 (2001), 785–790.
139. U. G. Haussmann. General necessary conditions for optimal control of stochastic system. Math. Prog. Study. 6 (1976), 34–48.
140. S. He, J. Wang and J. Yan. Semimartingale theory and stochastic calculus. Kexue Chubanshe (Science Press), Beijing; CRC Press, Boca Raton, FL, 1992.
141. Y. He. Switching controls for linear stochastic differential systems. Math. Control Relat. Fields. Published online, doi:10.3934/mcrf.2020005.
142. V. Hernández-Santamaría and L. Peralta. Controllability results for stochastic coupled systems of fourth and second-order parabolic equations. arXiv:2003.01334v1.
143. E. Hille and R. S. Phillips. Functional analysis and semi-groups. Revised edition. American Mathematical Society Colloquium Publications, 31.
American Mathematical Society, Providence, R. I., 1957.
144. M. Hinze and A. Rösch. Discretization of optimal control problems. Constrained optimization and optimal control for partial differential equations, 391–430. Internat. Ser. Numer. Math., 160, Birkhäuser/Springer Basel AG, Basel, 2012.
145. H. Holden, B. Øksendal, J. Ubøe and T. Zhang. Stochastic partial differential equations. A modeling, white noise functional approach. Second edition. Universitext. Springer, New York, 2010.
146. L. Hu. Sharp time estimates for exact boundary controllability of quasilinear hyperbolic systems. SIAM J. Control Optim. 53 (2015), 3383–3410.
147. M. Hu, S. Ji, S. Peng and Y. Song. Backward stochastic differential equations driven by G-Brownian motion. Stochastic Process. Appl. 124 (2014), 759–784.
148. Y. Hu and S. Peng. Maximum principle for semilinear stochastic evolution control systems. Stoch. & Stoch. Rep. 33 (1990), 159–180.
149. Y. Hu and S. Peng. Adapted solution of a backward semilinear stochastic evolution equation. Stochastic Anal. Appl. 9 (1991), 445–459.
150. Y. Hu and S. Tang. Nonlinear backward stochastic evolutionary equations driven by a space-time white noise. Math. Control Relat. Fields. 8 (2018), 739–751.

151. T. Hytönen, J. van Neerven, M. Veraar and L. Weis. Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, 63. Springer, Cham, 2016.
152. A. Ichikawa. Dynamic programming approach to stochastic evolution equations. SIAM J. Control Optim. 17 (1979), 152–174.
153. O. Yu. Imanuvilov. On Carleman estimates for hyperbolic equations. Asymptot. Anal. 32 (2002), 185–220.
154. O. Yu. Imanuvilov and M. Yamamoto. Carleman inequalities for parabolic equations in Sobolev spaces of negative order and exact controllability for semilinear parabolic equations. Publ. RIMS Kyoto Univ. 39 (2003), 227–274.
155. K. Itô. Stochastic integral. Proc. Imp. Acad. Tokyo. 20 (1944), 519–524.
156. K. Itô. On a stochastic integral equation. Proc. Japan Acad. 22 (1946), 32–35.
157. K. Itô. Foundations of stochastic differential equations in infinite-dimensional spaces. CBMS-NSF Regional Conference Series in Applied Mathematics, 47. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1984.
158. K. Itô and K. Kunisch. Novel concepts for nonsmooth optimization and their impact on science and technology. Proceedings of the International Congress of Mathematicians. Volume IV, 3061–3090, Hindustan Book Agency, New Delhi, 2010.
159. K. Itô, Y. Zhang and J. Zou. Fully discrete schemes and their analyses for forward-backward stochastic differential equations. arXiv:1804.10944v1.
160. J. Jacod, S. Méléard and P. Protter. Explicit form and robustness of martingale representations. Ann. Probab. 28 (2000), 1747–1780.
161. G. Kallianpur and J. Xiong. Stochastic differential equations in infinite dimensional spaces. Institute of Mathematical Statistics, Lecture Notes-Monograph Series, 26, Institute of Mathematical Statistics, United States, 1995.
162. R. E. Kalman. A new approach to linear filtering and prediction problems. J. Fluids Eng. 82 (1960), 35–45.
163. R. E. Kalman. On the general theory of control systems. Proceedings of the First IFAC Congress. Moscow, 1960; Butterworth, London, 1961, vol. 1, 481–492.
164. I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus. Springer-Verlag, New York, 1988.
165. M. A. Kazemi and M. V. Klibanov. Stability estimates for ill-posed Cauchy problems involving hyperbolic equations and inequalities. Appl. Anal. 50 (1993), 93–102.
166. M. V. Klibanov and M. Yamamoto. Exact controllability for the time dependent transport equation. SIAM J. Control Optim. 46 (2007), 2071–2195.
167. P. S. Knopov and O. N. Deriyeva. Estimation and control problems for stochastic partial differential equation. Springer, New York-Heidelberg-Dordrecht-London, 2013.
168. P. I. Kogut and G. R. Leugering. Optimal control problems for partial differential equations on reticulated domains. Approximation and asymptotic analysis. Systems & Control: Foundations & Applications. Birkhäuser/Springer, New York, 2011.
169. V. N. Kolokoltsov. Semiclassical analysis for diffusions and stochastic processes. Lecture Notes in Mathematics, 1724, Springer-Verlag, Berlin, 2000.
170. V. Komornik and P. Loreti. Fourier series in control theory. Springer Monographs in Mathematics. Springer-Verlag, New York, 2005.


References

171. P. Kotelenez. Stochastic ordinary and stochastic partial differential equations. Transition from microscopic to macroscopic equations. Stochastic Modelling and Applied Probability, 58. Springer, New York, 2008.
172. M. Krstic and A. Smyshlyaev. Boundary control of PDEs. A course on backstepping designs. Advances in Design and Control, 16. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008.
173. N. V. Krylov. Controlled diffusion processes. Reprint of the 1980 edition. Stochastic Modelling and Applied Probability, 14. Springer-Verlag, Berlin, 2009.
174. H. Kunita. On backward stochastic differential equations. Stochastics. 6 (1982), 293–313.
175. H. Kunita and S. Watanabe. On square integrable martingales. Nagoya Math. J. 30 (1967), 209–245.
176. H.-H. Kuo. Introduction to stochastic integration. Universitext. Springer, New York, 2006.
177. H. J. Kushner. Optimal stochastic control. IRE Trans. Auto. Control. 7 (1962), 120–122.
178. H. J. Kushner. On the optimal control of a system governed by a linear parabolic equation with white noise inputs. SIAM J. Control. 6 (1968), 596–614.
179. H. J. Kushner. Introduction to stochastic control. Holt, Rinehart and Winston, Inc., New York-Montreal, Que.-London, 1971.
180. H. J. Kushner and F. C. Schweppe. A maximum principle for stochastic control systems. J. Math. Anal. Appl. 8 (1964), 287–302.
181. H. J. Kushner and G. Yin. Stochastic approximation and recursive algorithms and applications. Second edition. Applications of Mathematics (New York), 35. Stochastic Modelling and Applied Probability. Springer-Verlag, New York, 2003.
182. D. F. Kuznetsov. Multiple Itô and Stratonovich stochastic integrals: Fourier-Legendre and trigonometric expansions, approximations, formulas. Differ. Uravn. Protsessy Upr. 2017, no. 1, 385 pp.
183. C. C. Kwan and K. N. Wang. Sur la stabilisation de la vibration élastique. Sci. Sinica. 17 (1974), 446–467.
184. L. D. Landau and E. M. Lifshitz. Quantum mechanics: non-relativistic theory. Course of Theoretical Physics, Vol. 3. Addison-Wesley Series in Advanced Physics. Pergamon Press Ltd., London-Paris; for U.S.A. and Canada: Addison-Wesley Publishing Co., Inc., Reading, Mass., 1958.
185. I. Lasiecka and R. Triggiani. Differential and algebraic Riccati equations with application to boundary/point control problems: continuous theory and approximation theory. Lecture Notes in Control and Information Sciences, 164. Springer-Verlag, Berlin, 1991.
186. I. Lasiecka and R. Triggiani. Control Theory for Partial Differential Equations: Continuous and Approximation Theories. I. Abstract Parabolic Systems. Encyclopedia of Mathematics and its Applications, 74. Cambridge University Press, Cambridge, 2000.
187. I. Lasiecka, R. Triggiani and X. Zhang. Global uniqueness, observability and stabilization of nonconservative Schrödinger equations via pointwise Carleman estimates: Part I. H^1-estimates. J. Inv. Ill-posed Problems. 11 (2004), 43–123.
188. M. M. Lavrent'ev, V. G. Romanov and S. P. Shishatskii. Ill-posed problems of mathematical physics and analysis. Translations of Mathematical Monographs, vol. 64. American Mathematical Society, Providence, RI, 1986.


189. J. Le Rousseau and L. Robbiano. Local and global Carleman estimates for parabolic operators with coefficients with jumps at interfaces. Invent. Math. 183 (2011), 245–336.
190. G. Lebeau. Contrôle de l'équation de Schrödinger. J. Math. Pures Appl. 71 (1992), 267–291.
191. G. Lebeau and L. Robbiano. Contrôle exact de l'équation de la chaleur. Comm. Partial Differential Equations. 20 (1995), 335–356.
192. G. Lebeau and E. Zuazua. Null controllability of a system of linear thermoelasticity. Arch. Rational Mech. Anal. 141 (1998), 297–329.
193. E. B. Lee and L. Markus. Foundations of optimal control theory. John Wiley, New York, 1967.
194. S. Lenhart, J. Xiong and J. Yong. Optimal controls for stochastic partial differential equations with an application in population modeling. SIAM J. Control Optim. 54 (2016), 495–535.
195. H. Li and Q. Lü. Null controllability for some systems of two backward stochastic heat equations with one control force. Chin. Ann. Math. Ser. B. 33 (2012), 909–918.
196. H. Li and Q. Lü. A quantitative boundary unique continuation for stochastic parabolic equations. J. Math. Anal. Appl. 402 (2013), 518–526.
197. J. Li and Z. Zhang. Topological classification of linear control systems: an elementary analytic approach. J. Math. Anal. Appl. 402 (2013), 84–102.
198. J. Li, H. Liu and S. Ma. Determining a random Schrödinger equation with unknown source and potential. SIAM J. Math. Anal. 51 (2019), 3465–3491.
199. T.-T. Li. Controllability and observability for quasilinear hyperbolic systems. AIMS Series on Applied Mathematics, vol. 3. American Institute of Mathematical Sciences, Springfield, MO, 2010.
200. T.-T. Li and B. Rao. Boundary synchronization for hyperbolic systems. Progress in Nonlinear Differential Equations and their Applications, 94. Subseries in Control. Birkhäuser/Springer, Cham, 2019.
201. W. Li and X. Zhang. Controllability of parabolic and hyperbolic equations: towards a unified theory. In Control Theory of Partial Differential Equations, Lect. Notes Pure Appl. Math., 242, Chapman & Hall/CRC, Boca Raton, FL, 2005, 157–174.
202. X. Li and J. Yong. Optimal control theory for infinite dimensional systems. Systems & Control: Foundations & Applications. Birkhäuser Boston, Inc., Boston, MA, 1995.
203. G. Liang, T. Lyons and Z. Qian. Backward stochastic dynamics on a filtered probability space. Ann. Probab. 39 (2011), 1422–1448.
204. A. Lindquist and G. Picci. Linear stochastic systems. A geometric approach to modeling, estimation and identification. Series in Contemporary Mathematics, 1. Springer, Heidelberg, 2015.
205. J.-L. Lions. Optimal control of systems governed by partial differential equations. Springer-Verlag, Berlin, Heidelberg, New York, 1971.
206. J.-L. Lions. Sur la théorie du contrôle. Actes du Congrès International des Mathématiciens 1974. Vol. 1, 139–154, Vancouver, Canada, Canadian Mathematical Congress, 1975.
207. J.-L. Lions. Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués. Tome 1. Contrôlabilité exacte. Rech. Math. Appl. 8, Masson, Paris, 1988.


208. J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. I. Die Grundlehren der mathematischen Wissenschaften, Band 181. Springer-Verlag, New York-Heidelberg, 1972.
209. J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. II. Die Grundlehren der mathematischen Wissenschaften, Band 181. Springer-Verlag, New York-Heidelberg, 1972.
210. P.-L. Lions. Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. I. The dynamic programming principle and applications. Comm. Partial Differential Equations. 8 (1983), 1101–1174.
211. P.-L. Lions. Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. II. Viscosity solutions and uniqueness. Comm. Partial Differential Equations. 8 (1983), 1229–1276.
212. P.-L. Lions. Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. III. Regularity of the optimal cost function. Nonlinear partial differential equations and their applications. Collège de France seminar, Vol. V (Paris, 1981/1982), 95–205, Res. Notes in Math., 93, Pitman, Boston, MA, 1983.
213. P.-L. Lions. Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. I. The case of bounded stochastic evolutions. Acta Math. 161 (1988), 243–278.
214. P.-L. Lions. Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. III. Uniqueness of viscosity solutions for general second-order equations. J. Funct. Anal. 86 (1989), 1–18.
215. L. Liu and X. Liu. Controllability and observability of some coupled stochastic parabolic systems. Math. Control Relat. Fields. 8 (2018), 829–854.
216. K. Liu. Stability of infinite dimensional stochastic differential equations with applications. Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 135. Chapman & Hall/CRC, 2006.
217. W. Liu and M. Röckner. Stochastic partial differential equations: an introduction. Universitext. Springer, Cham, 2015.
218. X. Liu. Global Carleman estimate for stochastic parabolic equations, and its application. ESAIM Control Optim. Calc. Var. 20 (2014), 823–839.
219. X. Liu. Controllability of some coupled stochastic parabolic systems with fractional order spatial differential operators by one control in the drift. SIAM J. Control Optim. 52 (2014), 836–860.
220. X. Liu, Q. Lü and X. Zhang. Finite codimensional controllability and optimal control problems with endpoint state constraints. J. Math. Pures Appl. 138 (2020), 164–203.
221. X. Liu, Q. Lü, H. Zhang and X. Zhang. Finite codimensionality technique in optimization and optimal control problems. Preprint.
222. X. Liu and Y. Yu. Carleman estimates of some stochastic degenerate parabolic equations and application. SIAM J. Control Optim. 57 (2019), 3527–3552.
223. S. V. Lototsky and B. L. Rozovsky. Stochastic partial differential equations. Universitext. Springer, Cham, 2017.
224. Q. Lü. Control and observation of stochastic partial differential equations. Ph.D. thesis, Sichuan University, 2010.
225. Q. Lü. Some results on the controllability of forward stochastic parabolic equations with control on the drift. J. Funct. Anal. 260 (2011), 832–851.
226. Q. Lü. Carleman estimate for stochastic parabolic equations and inverse stochastic parabolic problems. Inverse Problems. 28 (2012), 045008.


227. Q. Lü. Observability estimate and state observation problems for stochastic hyperbolic equations. Inverse Problems. 29 (2013), 095011.
228. Q. Lü. Exact controllability for stochastic Schrödinger equations. J. Differential Equations. 255 (2013), 2484–2504.
229. Q. Lü. Observability estimate for stochastic Schrödinger equations and its applications. SIAM J. Control Optim. 51 (2013), 121–144.
230. Q. Lü. A lower bound on local energy of partial sum of eigenfunctions for Laplace-Beltrami operators. ESAIM Control Optim. Calc. Var. 19 (2013), 255–273.
231. Q. Lü. Exact controllability for stochastic transport equations. SIAM J. Control Optim. 52 (2014), 397–419.
232. Q. Lü. Stochastic well-posed systems and well-posedness of some stochastic partial differential equations with boundary control and observation. SIAM J. Control Optim. 53 (2015), 3457–3482.
233. Q. Lü. Well-posedness of stochastic Riccati equations and closed-loop solvability for stochastic linear quadratic optimal control problems. J. Differential Equations. 267 (2019), 180–227.
234. Q. Lü and J. van Neerven. Conditional expectations in L^p(μ; L^q(ν; X)). Positivity. 23 (2019), 11–19.
235. Q. Lü and T. Wang. Characterization of optimal feedback for linear quadratic control problems of stochastic evolution equations. Preprint.
236. Q. Lü, T. Wang and X. Zhang. Characterization of optimal feedback for stochastic linear quadratic control problems. Probab. Uncertain. Quant. Risk. 2 (2017), Paper No. 11, 20 pp.
237. Q. Lü and Z. Yin. Unique continuation for stochastic heat equations. ESAIM Control Optim. Calc. Var. 21 (2015), 378–398.
238. Q. Lü and Z. Yin. Local state observation for stochastic hyperbolic equations. ESAIM Control Optim. Calc. Var. Published online, doi: 10.1051/cocv/2019049.
239. Q. Lü, J. Yong and X. Zhang. Representation of Itô integrals by Lebesgue/Bochner integrals. J. Eur. Math. Soc. 14 (2012), 1795–1823. (Erratum: J. Eur. Math. Soc. 20 (2018), 259–260.)
240. Q. Lü, H. Zhang and X. Zhang. Second order necessary conditions for optimal control problems of stochastic evolution equations. arXiv:1811.07337.
241. Q. Lü and X. Zhang. Well-posedness of backward stochastic differential equations with general filtration. J. Differential Equations. 254 (2013), 3200–3227.
242. Q. Lü and X. Zhang. General Pontryagin-type stochastic maximum principle and backward stochastic evolution equations in infinite dimensions. SpringerBriefs in Mathematics. Springer, Cham, 2014.
243. Q. Lü and X. Zhang. Global uniqueness for an inverse stochastic hyperbolic problem with three unknowns. Comm. Pure Appl. Math. 68 (2015), 948–963.
244. Q. Lü and X. Zhang. Transposition method for backward stochastic evolution equations revisited, and its application. Math. Control Relat. Fields. 5 (2015), 529–555.
245. Q. Lü and X. Zhang. Operator-valued backward stochastic Lyapunov equations in infinite dimensions, and its application. Math. Control Relat. Fields. 8 (2018), 337–381.
246. Q. Lü and X. Zhang. A mini-course on stochastic control. Control and Inverse Problems for Partial Differential Equations. Edited by G. Bao, J.-M. Coron and T.-T. Li. Higher Education Press, Beijing, 2019, 171–254.


247. Q. Lü and X. Zhang. Exact controllability for a refined stochastic wave equation. arXiv:1901.06074.
248. Q. Lü and X. Zhang. Optimal feedback for stochastic linear quadratic control and backward stochastic Riccati equations in infinite dimensions. arXiv:1901.00978.
249. T. J. Lyons, M. Caruana and T. Lévy. Differential equations driven by rough paths. Lecture Notes in Mathematics, 1908. Springer, Berlin, 2007.
250. T. J. Lyons and Z. Qian. System control and rough paths. Oxford Mathematical Monographs. Oxford Science Publications. Oxford University Press, Oxford, 2002.
251. J. Ma and J. Yong. Forward-backward stochastic differential equations and their applications. Lecture Notes in Mathematics, vol. 1702. Springer, Berlin, 1999.
252. E. Machtyngier. Exact controllability for the Schrödinger equation. SIAM J. Control Optim. 32 (1994), 24–34.
253. N. I. Mahmudov. Controllability of linear stochastic systems. IEEE Trans. Automat. Control. 46 (2001), 724–731.
254. N. I. Mahmudov. Controllability of linear stochastic systems in Hilbert spaces. J. Math. Anal. Appl. 259 (2001), 64–82.
255. N. I. Mahmudov. Controllability of semilinear stochastic systems in Hilbert spaces. J. Math. Anal. Appl. 288 (2003), 197–211.
256. N. I. Mahmudov and M. A. McKibben. On backward stochastic evolution equations in Hilbert spaces and optimal control. Nonlinear Anal. 67 (2007), 1260–1274.
257. A. J. Majda, I. Timofeyev and E. Vanden-Eijnden. A mathematical framework for stochastic climate models. Comm. Pure Appl. Math. 54 (2001), 891–974.
258. A. M. Márquez-Durán and J. Real. Some results on nonlinear backward stochastic evolution equations. Stochastic Anal. Appl. 22 (2004), 1273–1293.
259. J. Martínez-Frutos and F. Periago Esparza. Optimal control of PDEs under uncertainty. An introduction with application to optimal shape design of structures. SpringerBriefs in Mathematics. BCAM SpringerBriefs. Springer, Cham, 2018.
260. A. Mercado, A. Osses and L. Rosier. Inverse problems for the Schrödinger equation via Carleman inequalities with degenerate weights. Inverse Problems. 24 (2008), 015017.
261. P. A. Meyer. Probability and potentials. Blaisdell Publishing Co. Ginn and Co., Waltham, Mass.-Toronto, Ont.-London, 1966.
262. A. S. Monin and A. M. Yaglom. Statistical fluid mechanics: mechanics of turbulence. Vol. I. Translated from the 1965 Russian original. Edited and with a preface by John L. Lumley. English edition updated, augmented and revised by the authors. Reprinted from the 1971 edition. Dover Publications, Inc., Mineola, NY, 2007.
263. R. M. Murray et al. Control in an information rich world. Report of the Panel on Future Directions in Control, Dynamics, and Systems. Papers from the meeting held in College Park, MD, July 16-17, 2000. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2003.
264. J. Nedoma. Note on the generalized random variables. Trans. of the First Prague Conference on Information Theory, etc. 1957, 139–141.
265. E. Nelson. Dynamical theories of Brownian motion. Princeton University Press, Princeton, N.J., 1967.


266. D. Ocone and É. Pardoux. A generalized Itô-Ventzell formula. Application to a class of anticipating stochastic differential equations. Ann. Inst. H. Poincaré Probab. Statist. 25 (1989), 39–71.
267. B. Øksendal. Stochastic differential equations. An introduction with applications. Sixth edition. Universitext. Springer-Verlag, Berlin, 2003.
268. B. Øksendal. Optimal control of stochastic partial differential equations. Stoch. Anal. Appl. 23 (2005), 165–179.
269. B. Øksendal and A. Sulem. Maximum principles for optimal control of forward-backward stochastic differential equations with jumps. SIAM J. Control Optim. 48 (2009/10), 2945–2976.
270. S. Omatu and J. H. Seinfeld. Distributed parameter systems. Theory and applications. Oxford Mathematical Monographs. Clarendon Press, Oxford, 1989.
271. É. Pardoux and S. Peng. Adapted solution of a backward stochastic differential equation. Systems Control Lett. 14 (1990), 55–61.
272. E. Pardoux and A. Răşcanu. Stochastic differential equations, backward SDEs, partial differential equations. Stochastic Modelling and Applied Probability, 69. Springer, Cham, 2014.
273. S. Peng. A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28 (1990), 966–979.
274. S. Peng. Probabilistic interpretation for systems of quasilinear parabolic partial differential equations. Stoch. Stoch. Rep. 37 (1991), 61–74.
275. S. Peng. Backward stochastic differential equation and exact controllability of stochastic control systems. Progr. Natur. Sci. (English Ed.). 4 (1994), 274–284.
276. S. Peng. G-expectation, G-Brownian motion and related stochastic calculus of Itô type. Stochastic analysis and applications. 541–567, Abel Symp., 2, Springer, Berlin, 2007.
277. S. Peng. Nonlinear expectations and stochastic calculus under uncertainty. With robust CLT and G-Brownian motion. Probability Theory and Stochastic Modelling, 95. Springer, Berlin, 2019.
278. S. Peng and Y. Shi. A type of time-symmetric forward-backward stochastic differential equations. C. R. Math. Acad. Sci. Paris. 336 (2003), 773–778.
279. H. Pham. Continuous-time stochastic control and optimization with financial applications. Stochastic Modelling and Applied Probability, 61. Springer-Verlag, Berlin, 2009.
280. K. D. Phung. Observability and control of Schrödinger equations. SIAM J. Control Optim. 40 (2001), 211–230.
281. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mischenko. Mathematical theory of optimal processes. Wiley, New York, 1962.
282. C. Prévôt and M. Röckner. A concise course on stochastic partial differential equations. Lecture Notes in Mathematics, 1905. Springer, Berlin, 2007.
283. A. Prohl and Y. Wang. Strong rates of convergence for space-time discretization of the backward stochastic heat equation, and of a linear-quadratic control problem for the stochastic heat equation. In submission.
284. P. E. Protter. Stochastic integration and differential equations. Second edition. Version 2.1. Corrected third printing. Stochastic Modelling and Applied Probability, 21. Springer-Verlag, Berlin, 2005.
285. Z. Qian and X. Y. Zhou. Existence of solutions to a class of indefinite stochastic Riccati equations. SIAM J. Control Optim. 51 (2013), 221–229.
286. J. Ren, M. Röckner and F. Wang. Stochastic generalized porous media and fast diffusion equations. J. Differential Equations. 238 (2007), 118–152.


287. D. Revuz and M. Yor. Continuous martingales and Brownian motion. Second edition. Grundlehren der Mathematischen Wissenschaften, 293. Springer-Verlag, Berlin, 1994.
288. L. Robbiano. Carleman estimates, results on control and stabilization for partial differential equations. International Congress of Mathematicians. Vol. IV, 897–920, Kyung Moon Sa Co. Ltd., Seoul, Korea, 2014.
289. L. Rosier and B. Zhang. Exact boundary controllability of the nonlinear Schrödinger equation. J. Differential Equations. 246 (2009), 4129–4153.
290. P. Rouchon. Models and feedback stabilization of open quantum systems. International Congress of Mathematicians. Vol. IV, 921–946, Kyung Moon Sa Co. Ltd., Seoul, Korea, 2014.
291. W. Rudin. Functional analysis. McGraw-Hill Series in Higher Mathematics. McGraw-Hill Book Co., New York-Düsseldorf-Johannesburg, 1973.
292. D. L. Russell. Nonharmonic Fourier series in the control theory of distributed parameter systems. J. Math. Anal. Appl. 18 (1967), 542–560.
293. D. L. Russell. A unified boundary controllability theory for hyperbolic and parabolic partial differential equations. Studies in Appl. Math. 52 (1973), 189–211.
294. D. L. Russell. Exact boundary value controllability theorems for wave and heat processes in star-complemented regions. Differential games and control theory. Proc. NSF-CBMS Regional Res. Conf., Univ. Rhode Island, Kingston, R.I., Dekker, New York, 1974, 291–319.
295. D. L. Russell. Controllability and stabilizability theory for linear partial differential equations: recent progress and open problems. SIAM Rev. 20 (1978), 639–739.
296. K. K. Sabelfeld. Random fields and stochastic Lagrangian models. Analysis and applications in turbulence and porous media. Walter de Gruyter & Co., Berlin, 2013.
297. Yu. Safarov and D. Vassiliev. The asymptotic distribution of eigenvalues of partial differential operators. American Mathematical Society, Providence, RI, 1997.
298. D. Salamon. Infinite-dimensional linear systems with unbounded control and observation: a functional analytic approach. Trans. Amer. Math. Soc. 300 (1987), 383–431.
299. R. Schatten. Norm ideals of completely continuous operators. Second printing. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 27. Springer-Verlag, Berlin-New York, 1970.
300. T. Schlick. Modeling superhelical DNA: recent analytical and dynamical approaches. Curr. Opin. Struct. Biol. 5 (1995), 245–262.
301. A. Shirikyan. Control and mixing for 2D Navier-Stokes equations with space-time localised noise. Ann. Sci. Éc. Norm. Supér. 48 (2015), 253–280.
302. T. K. Sirazetdinov. Optimal control of stochastic processes with distributed parameters. Trudy Kazan. Aviacion. Inst. Vyp. 89 (1965), 88–101. (In Russian)
303. M. Sîrbu and G. Tessitore. Null controllability of an infinite dimensional SDE with state- and control-dependent noise. Systems Control Lett. 44 (2001), 385–394.
304. S. S. Sritharan. Stochastic Navier-Stokes equations: solvability, control, and filtering. Stochastic partial differential equations and applications–VII, 273–280, Lect. Notes Pure Appl. Math., 245, Chapman & Hall/CRC, Boca Raton, FL, 2006.


305. O. Staffans. Well-posed linear systems. Encyclopedia of Mathematics and its Applications, 103. Cambridge University Press, Cambridge, 2005.
306. K. Stowe. An introduction to thermodynamics and statistical mechanics. Second edition. Cambridge University Press, Cambridge, 2007.
307. J. Sun and J. Yong. Linear quadratic stochastic differential games: open-loop and closed-loop saddle points. SIAM J. Control Optim. 52 (2014), 4082–4121.
308. J. Sun, X. Li and J. Yong. Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems. SIAM J. Control Optim. 54 (2016), 2274–2308.
309. Y. Sunahara, T. Kabeuchi, Y. Asada, S. Aihara and K. Kishino. On stochastic controllability for nonlinear systems. IEEE Trans. Automatic Control. 19 (1974), 49–54.
310. Y. Sunahara, S. Aihara, K. Kamejima and K. Kishino. On stochastic observability and controllability for nonlinear distributed parameter systems. Information and Control. 34 (1977), 348–371.
311. H. J. Sussmann. Orbits of families of vector fields and integrability of distributions. Trans. Amer. Math. Soc. 180 (1973), 171–188.
312. B. Sz.-Nagy, C. Foias, H. Bercovici and L. Kérchy. Harmonic analysis of operators on Hilbert space. Revised and enlarged edition. Springer, New York, 2010.
313. S. Tang. General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and backward stochastic Riccati equations. SIAM J. Control Optim. 42 (2003), 53–75.
314. S. Tang and X. Li. Maximum principle for optimal control of distributed parameter stochastic systems with random jumps. In: Differential Equations, Dynamical Systems, and Control Science. Lecture Notes in Pure and Appl. Math., vol. 152. Dekker, New York, 1994, 867–890.
315. S. Tang and X. Zhang. Null controllability for forward and backward stochastic parabolic equations. SIAM J. Control Optim. 48 (2009), 2191–2216.
316. R. Temam. Équations aux dérivées partielles stochastiques. Problems in nonlinear analysis (Centro Internaz. Mat. Estivo (C.I.M.E.), IV Ciclo, Varenna, 1970), 431–462. Edizioni Cremonese, Rome, 1971.
317. G. Tenenbaum and M. Tucsnak. Fast and strongly localized observation for the Schrödinger equation. Trans. Amer. Math. Soc. 361 (2009), 951–977.
318. G. Tessitore. Some remarks on the Riccati equation arising in an optimal control problem with state- and control-dependent noise. SIAM J. Control Optim. 30 (1992), 717–744.
319. G. Tessitore. Existence, uniqueness and space regularity of the adapted solutions of a backward SPDE. Stochastic Anal. Appl. 14 (1996), 461–486.
320. E. Trélat. Optimal shape and location of sensors or actuators in PDE models. Proceedings of the International Congress of Mathematicians-Rio de Janeiro 2018. Vol. IV. Invited lectures, World Sci. Publ., Hackensack, NJ, 2018, 3843–3863.
321. N. Touzi. Second order backward SDEs, fully nonlinear PDEs, and applications in finance. Proceedings of the International Congress of Mathematicians, Vol. IV. Hyderabad, India, 2010, 3132–3150.
322. N. Touzi. Optimal stochastic control, stochastic target problems, and backward SDE. Fields Institute Monographs, 29. Springer, New York; Fields Institute for Research in Mathematical Sciences, Toronto, ON, 2013.


323. F. Tröltzsch. Optimal control of partial differential equations. Theory, methods and applications. Graduate Studies in Mathematics, 112. American Mathematical Society, Providence, RI, 2010.
324. C. Tudor. Optimal control for semilinear stochastic evolution equations. Appl. Math. Optim. 20 (1989), 319–331.
325. M. Tucsnak and G. Weiss. Observation and control for operator semigroups. Birkhäuser Advanced Texts: Basler Lehrbücher. Birkhäuser Verlag, Basel, 2009.
326. G. Tzafestas and J. M. Nightingale. Optimal control of a class of linear stochastic distributed-parameter systems. Proc. Inst. Elec. Engrs. 115 (1968), 1213–1220.
327. J. van Neerven and M. C. Veraar. On the stochastic Fubini theorem in infinite dimensions. Stochastic partial differential equations and applications–VII, 323–336, Lect. Notes Pure Appl. Math., 245, Chapman & Hall/CRC, Boca Raton, FL, 2006.
328. J. M. A. M. van Neerven, M. C. Veraar and L. W. Weis. Stochastic integration in UMD Banach spaces. Ann. Probab. 35 (2007), 1438–1478.
329. J. M. A. M. van Neerven, M. C. Veraar and L. W. Weis. Stochastic evolution equations in UMD Banach spaces. J. Funct. Anal. 255 (2008), 940–993.
330. A. Vieru. On null controllability of linear systems in Banach spaces. Systems Control Lett. 54 (2005), 331–337.
331. M. I. Vishik and A. V. Fursikov. Mathematical problems in statistical hydromechanics. Kluwer, Dordrecht, 1988.
332. J. vom Scheidt. Stochastic equations of mathematical physics. Mathematische Lehrbücher und Monographien, II. Abteilung: Mathematische Monographien, 76. Akademie-Verlag, Berlin, 1990.
333. G. Wachsmuth and D. Wachsmuth. Convergence and regularization results for optimal control problems with sparsity functional. ESAIM Control Optim. Calc. Var. 17 (2011), 858–886.
334. D. H. Wagner. Survey of measurable selection theorems. SIAM J. Control Optim. 15 (1977), 859–903.
335. J. B. Walsh. An introduction to stochastic partial differential equations. École d'été de probabilités de Saint-Flour, XIV-1984, 265–439, Lecture Notes in Math., 1180, Springer, Berlin, 1986.
336. G. Wang. L∞-null controllability for the heat equation and its consequences for the time optimal control problem. SIAM J. Control Optim. 47 (2008), 1701–1720.
337. G. Wang and L. Wang. The Carleman inequality and its application to periodic optimal control governed by semilinear parabolic differential equations. J. Optim. Theory Appl. 118 (2003), 249–461.
338. G. Wang, L. Wang, Y. Xu and Y. Zhang. Time optimal control of evolution equation. Progress in Nonlinear Differential Equations and Their Applications: Subseries in Control, 92. Birkhäuser, Springer International Publishing AG, part of Springer Nature, 2018.
339. G. Wang, Z. Wu and J. Xiong. An introduction to optimal control of FBSDE with incomplete information. SpringerBriefs in Mathematics. Springer, Cham, 2018.
340. L. Y. Wang, G. Yin, J.-F. Zhang and Y. Zhao. System identification with quantized observations. Systems & Control: Foundations & Applications. Birkhäuser Boston, Ltd., Boston, MA, 2010.


341. P. Wang, Y. Wang and X. Zhang. Numerical analysis on backward stochastic differential equations by finite transposition method. Preprint.
342. P. Wang and X. Zhang. Numerical solutions of backward stochastic differential equations: a finite transposition method. C. R. Math. Acad. Sci. Paris. 349 (2011), 901–903.
343. P. K. C. Wang. Control of distributed parameter systems. Advances in Control Systems, Theory and Applications, Vol. 1. Edited by C. T. Leondes, 75–172. Academic Press, New York, 1964.
344. Y. Wang. Transposition solutions of backward stochastic differential equations and numerical schemes. Ph.D. thesis, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, 2013.
345. Y. Wang. A semidiscrete Galerkin scheme for backward stochastic parabolic differential equations. Math. Control Relat. Fields. 6 (2016), 489–515.
346. Y. Wang. L^2-regularity of solutions to linear backward stochastic heat equations, and a numerical application. J. Math. Anal. Appl. 486 (2020), 123870, 18 pp.
347. Y. Wang, D. Yang, J. Yong and Z. Yu. Exact controllability of linear stochastic differential equations and related problems. Math. Control Relat. Fields. 7 (2017), 305–345.
348. Y. Wang and Z. Yu. On the partial controllability of SDEs and the exact controllability of FBSDEs. ESAIM Control Optim. Calc. Var. In press.
349. N. Wiener. Cybernetics or control and communication in the animal and the machine. The MIT Press, Cambridge, Massachusetts, 1948.
350. J. C. Willems. Topological classification and structural stability of linear systems. J. Differential Equations. 35 (1980), 306–318.
351. W. M. Wonham. On the separation theorem of stochastic control. SIAM J. Control. 6 (1968), 312–326.
352. W. M. Wonham. On a matrix Riccati equation of stochastic control. SIAM J. Control. 6 (1968), 681–697.
353. B. Wu, Q. Chen and Z. Wang. Carleman estimates for a stochastic degenerate parabolic equation and applications to null controllability and an inverse random source problem. Inverse Problems. Published online, doi: 10.1088/1361-6420/ab89c3.
354. P. Wu and G. Wang. Representation theorems for generators of BSDEs and the extended g-expectations in probability spaces with general filtration. J. Math. Anal. Appl. 487 (2020), 124010.
355. D. Xia, S. Li, S. Yan and W. Shu. Theory of real functions and functional analysis, Vol. 2 (Second edition). Gaodeng Jiaoyu Chubanshe (Higher Education Press), Beijing, 1985. (In Chinese)
356. J. Xiong. An introduction to stochastic filtering theory. Oxford Graduate Texts in Mathematics, 18. Oxford University Press, Oxford, 2008.
357. M. Yamamoto. Carleman estimates for parabolic equations and applications. Inverse Problems. 25 (2009), 123013.
358. L. Yan, B. Wu, S. Lu and Y. Wang. Null controllability and inverse source problem for stochastic Grushin equation with boundary degeneracy and singularity. arXiv:2001.01877.
359. Y. Yan. Carleman estimates for stochastic parabolic equations with Neumann boundary conditions and applications. J. Math. Anal. Appl. 457 (2018), 248–272.
360. Y. Yan and F. Sun. Insensitizing controls for a forward stochastic heat equation. J. Math. Anal. Appl. 384 (2011), 138–150.


References

361. D. Yang and J. Zhong. Observability inequality of backward stochastic heat equations for measurable sets and its applications. SIAM J. Control Optim. 54 (2016), 1157–1175.
362. D. Yang and J. Zhong. Optimal actuator location of the minimum norm controls for stochastic heat equations. Math. Control Relat. Fields. 8 (2018), 1081–1095.
363. Y. Yavin. Feedback strategies for partially observable stochastic systems. Lecture Notes in Control and Information Sciences, 48. Springer-Verlag, Berlin, 1983.
364. G. Yin and Q. Zhang. Continuous-time Markov chains and applications. A singular perturbation approach. Applications of Mathematics (New York), 37. Springer-Verlag, New York, 1998.
365. Z. Yin. A quantitative internal unique continuation for stochastic parabolic equations. Math. Control Relat. Fields. 5 (2015), 165–176.
366. Z. Yin. Lipschitz stability for a semi-linear inverse stochastic transport problem. J. Inverse Ill-Posed Probl. 28 (2020), 185–193.
367. J. Yong. Backward stochastic Volterra integral equations and some related problems. Stoch. Proc. Appl. 116 (2006), 779–795.
368. J. Yong. Well-posedness and regularity of backward stochastic Volterra integral equations. Prob. Theory Rel. Fields. 142 (2008), 21–77.
369. J. Yong. Time-inconsistent optimal control problems. Proceedings of the International Congress of Mathematicians 2014, Vol. IV. Seoul, Korea, 2014, 947–969.
370. J. Yong and H. Lou. A Concise Course on Optimal Control Theory. Higher Education Press, Beijing, 2006. (In Chinese)
371. J. Yong and X. Y. Zhou. Stochastic controls: Hamiltonian systems and HJB equations. Springer-Verlag, New York, 1999.
372. K. Yosida. Functional analysis. Classics in Mathematics. Springer-Verlag, Berlin, 1995.
373. G. Yuan. Determination of two kinds of sources simultaneously for a stochastic wave equation. Inverse Problems. 31 (2015), 085003.
374. G. Yuan. Conditional stability in determination of initial data for stochastic parabolic equations. Inverse Problems. 33 (2017), 035014.
375. G. Yuan. Determination of two unknowns simultaneously for stochastic Euler-Bernoulli beam equations. J. Math. Anal. Appl. 450 (2017), 137–151.
376. J. Zabczyk. Controllability of stochastic linear systems. Systems Control Lett. 1 (1981/82), 25–31.
377. J. Zabczyk. Mathematical control theory. An introduction. Reprint of the 1995 edition. Modern Birkhäuser Classics. Birkhäuser Boston, Inc., Boston, MA, 2008.
378. H. Zhang. Second-order necessary conditions for stochastic optimal controls. Ph.D. thesis, Sichuan University, 2014.
379. H. Zhang and X. Zhang. Pointwise second-order necessary conditions for stochastic optimal controls, Part I: The case of convex control constraint. SIAM J. Control Optim. 53 (2015), 2267–2296.
380. H. Zhang and X. Zhang. Second-order necessary conditions for stochastic optimal control problems. SIAM Rev. 60 (2018), 139–178.
381. J. Zhang. Backward stochastic differential equations. From linear to fully nonlinear theory. Probability Theory and Stochastic Modelling, 86. Springer, New York, 2017.


382. R. Zhang and L. Guo. Controllability of stochastic game-based control systems. SIAM J. Control Optim. 57 (2019), 3799–3826.
383. W. Zhang, L. Xie and B.-S. Chen. Stochastic H2/H∞ control. A Nash game approach. CRC Press, Boca Raton, FL, 2017.
384. X. Zhang. Explicit observability estimate for the wave equation with potential and its application. R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 456 (2000), 1101–1115.
385. X. Zhang. Exact controllability of the semilinear plate equations. Asymptot. Anal. 27 (2001), 95–125.
386. X. Zhang. Carleman and observability estimates for stochastic wave equations. SIAM J. Math. Anal. 40 (2008), 851–868.
387. X. Zhang. Unique continuation for stochastic parabolic equations. Differential Integral Equations. 21 (2008), 81–93.
388. X. Zhang. A unified controllability/observability theory for some stochastic and deterministic partial differential equations. Proceedings of the International Congress of Mathematicians. Volume IV, 3008–3034, Hindustan Book Agency, New Delhi, 2010.
389. X. Zhang. A brief talk on mathematical control theory. Lectures at Institute of Mathematics 2014. Edited by N. Xi, Q. Feng, X. Zhang and B. Fu. Science Press, Beijing, 2017, 1–38. (In Chinese)
390. X. Zhang and E. Zuazua. Unique continuation for the linearized Benjamin-Bona-Mahony equation with space dependent potential. Math. Ann. 325 (2003), 543–582.
391. M. Zhen, J. He, H. Xu and M. Yang. Carleman and observability estimates for stochastic beam equation. arXiv:1710.02910v4.
392. X. Y. Zhou. On the necessary conditions of optimal controls for stochastic partial differential equations. SIAM J. Control Optim. 31 (1993), 1462–1478.
393. X. Y. Zhou. Sufficient conditions of optimality for stochastic systems with controllable diffusions. IEEE Trans. Automat. Control. 41 (1996), 1176–1179.
394. E. Zuazua. Remarks on the controllability of the Schrödinger equation. Quantum control: mathematical and numerical challenges, 193–211, CRM Proc. Lecture Notes, 33, Amer. Math. Soc., Providence, RI, 2003.
395. E. Zuazua. Propagation, observation, and control of waves approximated by finite difference methods. SIAM Rev. 47 (2005), 197–243.
396. E. Zuazua. Controllability and observability of partial differential equations: some results and open problems. In Handbook of Differential Equations: Evolutionary Equations, vol. 3, Elsevier Science, 2006, 527–621.

Index

L^q_F(0, T; L^p(Ω; H)), 65
(F, Γ)-approximate controllability, 201
(F, Γ)-continuous final observability, 202
(F, Γ)-continuous initial observability, 202
(F, Γ)-continuous observability, 202
(F, Γ)-controllability, 201
(F, Γ)-exact controllability, 200
(F, Γ)-exact observability, 201
(G, Γ)-approximate controllability, 204
(G, Γ)-continuous final observability, 205
(G, Γ)-continuous initial observability, 205
(G, Γ)-continuous observability, 204
(G, Γ)-controllability, 204
(G, Γ)-exact controllability, 204
(G, Γ)-exact observability, 204
(Ω, F, F), 65
(Ω, F, F, P), 65
C^1(O; Y), 109
C^m(O; Y), 109
C^k_F([0, T]; L^p(Ω; H)), 67
C^∞_{0,F}((0, T); L^s(Ω; H)), 68
C_F([0, T]; L^p(Ω; H)), 66
D_F([0, T]; L^p(Ω; H)), 67
H^*, 29
I, 108
I_m, 86
L^∞(Ω, F, µ; H), 34
L^p(0, T), 34
L^p(0, T; H), 34
L^p(G; C_F([0, T]; L^q(Ω; H))), 67
L^p(G; L^r_F(0, T; L^q(Ω; H))), 67
L^p(G; L^q_F(Ω; C([0, T]; H))), 67
L^p(G; L^q_F(Ω; L^r(0, T; H))), 67
L^p(Ω, F, µ; H), 33
L^p_M(X_1; L^q(X_2)), 44
L^p_M(X_1; L^q(X_2; H)), 44
L^p_F(0, T), 66
L^p_F(0, T; H), 66
L^p_F(Ω; C([σ, τ]; H)), 146
L^p_F(Ω; L^q(σ, τ; H)), 146
L^p_F(Ω; L^q(0, T; H)), 65
L^p_F(Ω; C([0, T])), 67
L^p_F(Ω; C^k([0, T]; H)), 66
L^p_F(Ω; D([0, T])), 67
L^p_F(Ω; C([0, T]; H)), 66
L^p_F(Ω; D([0, T]; H)), 67
L^∞_F(Ω; H), 34
L^p_F(Ω), 34
L^p_F(Ω; H), 33
L^{p,loc}_F(0, T), 99
L^{p,loc}_F(0, T; Y), 99
L_{S,F}(0, T), 67
L_{S,F}(0, T; H), 67
Q-Brownian motion, 90
X ∼ N(λ, Q), 39
Cov(X, Y), 39
Υ_2(H; U), 482
Υ_{p,q}(H), 145
Υ_p(H), 145
Υ_p(H_1; H_2), 145
Var X, 39
B(G), 27

© Springer Nature Switzerland AG 2021 Q. Lü and X. Zhang, Mathematical Control Theory for Stochastic Partial Differential Equations, Probability Theory and Stochastic Modelling 101, https://doi.org/10.1007/978-3-030-82331-3


F_1 × ··· × F_n, 26
F_τ, 70
L(X), 10
L(X; Y), 10
L(X, X; Y), 109
L(X, Y; Z), 109
L_1(H), 38
L_1(H; G), 38
L_2(V; H), 92
L^0_2, 92
L_{pd}(L^{r_1}_M(Ω_1; L^{r_2}(Ω_2; X)); L^{r_3}_M(Ω_1; L^{r_4}(Ω_2; Y))), 57
L_{pd}(L^{r_1}_{M_1}(Ω_1; X); L^{r_2}_M(Ω_1; L^{r_3}(Ω_2; Y))), 57
L_{pd}(L^{r_1}_{M_1}(Ω_1; X); L^{r_2}_{M_2}(Ω_2; Y)), 57
L_{pd}(X; L^{r_3}_{M_2}(Ω_1; L^{r_4}(Ω; Y))), 57
M([0, T]; H), 84
M^2[0, T], 85
M^2_c([0, T]; H), 84
M^2_c[0, T], 85
N(λ, Q), 38, 39
R(L), 10
χ_E(·), 25
Ef, 32
F, 65
P-a.s., 26
P_X, 37
lim_{i→∞} A_i, 27
(w)-lim_{n→∞}, 55
(w*)-lim_{n→∞}, 55
φ_X(·), 39
dµ/dν, 37
µ ≪ ν, 37, 40
µ^+, µ^−, 36
µ_1 × ··· × µ_n, 26
ν(x), 21
∏^∞_{i=1} F_i, 26
∏^∞_{i=1} P_i, 27
ρ(A), 151
σ-field, 25
σ-finite measure, 25
σ-finite signed measure, 36
σ(f), 27
σ(f_λ; λ ∈ Λ), 27
{P}, 28
b ∨ c, 73
b ∧ c, 73
f^+, f^−, 35
p′, the Hölder conjugate of p, 45
tr Q, 38
x ⊗ y, 38
a.s., 26
absolute continuity of a signed measure, 37
absolute continuity of Bochner integral, 32
adapted stochastic process, 65
admissible control, 206
admissible feedback operator, 482, 488
admissible pair, 206
almost separably valued function, 29
approximate controllability, 201, 204, 231, 232
backward controllability, 233
Bochner integrable function, 31
Bochner integral, 32
Bochner integral w.r.t. a signed measure, 36
Borel σ-field, 27
Borel set, 27
Borel-Cantelli lemma, 27
Brownian motion, 86
Burkholder-Davis-Gundy inequality for stochastic integral, 115
Burkholder-Davis-Gundy type inequality, 145
càdlàg modification of a submartingale, 81
càdlàg stochastic process, 63
Carleman estimates, 19
change-of-variable formula, 35
characteristic function, 39
Chebyshev's inequality, 34
Clarke's generalized gradient, 458
complete ε-controllability, 260
complete measure space, 26
complete probability space, 26
completion of a measure, 26
conditional expectation, 41, 42
conditional probability, 41
continuity of a vector measure, 40
continuous final observability, 202, 205, 241, 244, 245
continuous initial observability, 202, 205, 242, 244, 245
continuous observability, 202, 204
continuous stochastic process, 63
control class, 6, 200, 233
control operator, 229
control space, 200, 229
controllability, 201, 204
convergence almost everywhere, 28
convergence in measure, 28
countably additive, 25
countably valued function, 29
covariance matrix, 39
covariance operator, 39
cylindrical Brownian motion, 92
density of a random variable, 38
distribution function of a random variable, 38
distribution of a random variable, 37
Dominated Convergence Theorem, 33
Doob inequality for stochastic integral, 107
Doob's inequality, 74
Doob's stopping theorem, 73, 82
Doob's upcrossing inequality, 76
Doob-Dynkin lemma, 30
duality argument, 14
event, 26
exact controllability, 6, 200, 204, 231, 232
exact observability, 201, 204
expectation, 32
Fatou's lemma, 33
filtered measurable space, 65
filtered probability space, 65
filtered probability space satisfying the usual condition, 65
filtration, 64
finite dimensional distribution, 64
finite measure, 25
finite signed measure, 36
finiteness of Problem (SLQ), 479
forward controllability, 233
forward-backward stochastic differential equation, 495


forward-backward stochastic evolution equation, 508
Fréchet derivative, 109
Fréchet differentiable, 109
Fubini Theorem, 33
Gaussian random variable, 38, 39
Gaussian stochastic process, 64
generalized contractive C_0-semigroup, 146
global Carleman type estimate, 19
Hölder conjugate, 45
Hahn decomposition, 36
Hilbert-Schmidt operator, 92
independence, 27
independent processes, 64
Itô process, 108
Itô's formula, 109
Itô's formula for weak Itô processes, 120
Jensen's inequality, 43
left continuous filtration, 65
Levi's lemma, 33
linear quadratic optimal control problem, 479
local martingale, 85
lower variation of a signed measure, 36
martingale, 73
Martingale Representation Theorem, 125
matrix-valued backward stochastic Riccati equation, 490
mean, 32
measurable function, 27
measurable map, 27
measurable set, 25
measurable space, 25
measurable stochastic process, 65
measure, 25
measure space, 25
mild solution, 136, 162
modification of (a stochastic process), 64
Monotone Convergence Theorem, 33
natural filtration, 86


negative part of a signed measure, 35
normal distribution, 38, 39
null controllability, 231, 232
null measurable set, 25
observability, 242, 244, 245
observability estimate, 241, 242, 244, 245
operator-valued backward stochastic Riccati equation, 483
optimal control, 206, 207
optimal feedback operator, 482, 484, 488
optimal pair, 206
optimal state, 206, 207
partition of a time interval, 90
pointwise defined linear operator, 57
positive part of a signed measure, 35
probability, 26
probability measure, 26
probability space, 26
process, 63
product measure, 26
product probability measure, 26
progressive σ-field, 65
progressively measurable set, 65
progressively measurable stochastic process, 65
Radon-Nikodým derivative, 37
Radon-Nikodým property, 41
Radon-Nikodým Theorem, 37
random variable, 27
range inclusion theorem, 10
relaxed transposition solution, 403
Riccati equation, 481
Riesz-type representation theorem, 45
right continuous filtration, 65
right-continuous stochastic process, 63
sample path of a stochastic process, 63
sample point, 26
separably valued function, 29
signed measure, 35
simple function, 29
solvability of Problem (SLQ), 479
spike variation, 18
standard LQ problem, 479
state space, 200, 229
state space method, 1
stochastic equivalence, 64
stochastic Fubini theorem, 117
stochastic linear quadratic optimal control problem, 478
stochastic process, 63
stochastic transposition method, 20, 46, 66, 170
stopping time, 70
strong solution, 136, 162
strongly measurable vector-valued function, 29
submartingale, 73
supermartingale, 73
tensor product, 38
total variation of a signed measure, 36
trace-class operator, 38
transposition solution, 22, 170, 236, 239, 402, 514
uniform integrability, 34
unit outward normal vector, 21
upper variation of a signed measure, 36
variance matrix, 39
variance operator, 39
variation of a vector measure, 40
variation of constants formula, 134
vector measure, 40
vector measure of bounded variation, 40
vector-valued Gaussian random variable, 39
vector-valued martingale, 83
weak solution, 136, 162
weakly measurable vector-valued function, 29