Stochastic Control of Hereditary Systems and Applications 038775816X, 9780387758169



English · 418 pages · Year 2008


Stochastic Modelling and Applied Probability (formerly: Applications of Mathematics)

Stochastic Mechanics · Random Media · Signal Processing and Image Synthesis · Mathematical Economics and Finance · Stochastic Optimization · Stochastic Control · Stochastic Models in Life Sciences

Volume 59

Edited by B. Rozovskiĭ, G. Grimmett

Advisory Board: D. Dawson, D. Geman, I. Karatzas, F. Kelly, Y. Le Jan, B. Øksendal, G. Papanicolaou, E. Pardoux

Mou-Hsiung Chang

Stochastic Control of Hereditary Systems and Applications

Mou-Hsiung Chang, U.S. Army Research Office, 4300 S. Miami Blvd., Durham, NC 27703-9142, USA, [email protected]

Managing Editors: B. Rozovskiĭ, Division of Applied Mathematics, 182 George St., Providence, RI 02912, USA, [email protected]

G. Grimmett, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB, UK, [email protected]

ISBN: 978-0-387-75805-3
e-ISBN: 978-0-387-75816-9
DOI: 10.1007/978-0-387-75816-9
ISSN: 0172-4568 Stochastic Modelling and Applied Probability
Library of Congress Control Number: 2007941276
Mathematics Subject Classification (2000): 93E20, 34K50, 90C15

© 2008 Springer Science+Business Media, LLC

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper.

springer.com

This book is dedicated to my wife, Yuen-Man Chang.

Preface

This research monograph develops the Hamilton-Jacobi-Bellman (HJB) theory, via the dynamic programming principle, for a class of optimal control problems for stochastic hereditary differential equations (SHDEs) driven by a standard Brownian motion and with a bounded, or an unbounded but fading, memory. These equations represent a class of infinite-dimensional stochastic systems that have become increasingly important and have a wide range of applications in physics, chemistry, biology, engineering, and economics/finance. The wide applicability of these systems stems from the fact that the reaction of real-world systems to exogenous effects/signals is never "instantaneous"; it takes some time, and that time can be translated into mathematical language by delay terms. To describe these delayed effects, the drift and diffusion coefficients of these stochastic equations therefore depend not only on the current state but also explicitly on the past history of the state variable.

The theory developed herein extends the finite-dimensional HJB theory of controlled diffusion processes to its infinite-dimensional counterpart for controlled SHDEs, in which a certain infinite-dimensional Banach or Hilbert space is critically involved in order to account for the bounded or unbounded memory. Another type of infinite-dimensional HJB theory, not treated in this monograph but also arising from real-world application problems, concerns systems modeled by controlled stochastic partial differential equations. Although both are infinite dimensional in nature and both theories are in their infancy, SHDEs exhibit many characteristics that are not shared by stochastic partial differential equations. Consequently, the HJB theory for controlled SHDEs is parallel to, and cannot be treated as a subset of, the theory developed for controlled stochastic partial differential equations. The effort of writing this monograph is therefore well warranted.
The stochastic control problems treated herein include discounted optimal classical control and optimal stopping for SHDEs with a bounded memory over a finite time horizon. Applications of the dynamic programming principles developed specifically for the control of stochastic hereditary equations yield an infinite-dimensional Hamilton-Jacobi-Bellman equation (HJBE) for the finite-time-horizon discounted optimal classical control problem, an HJB variational inequality (HJBVI) for optimal stopping problems, and an HJB quasi-variational inequality (HJBQVI) for combined optimal classical-impulse control problems. As an application of these theoretical developments, characterizations of pricing functions in terms of the infinite-dimensional Black-Scholes equation and an infinite-dimensional HJBVI are derived for European and American option pricing problems in a financial market that consists of a riskless bank account and a stock whose price dynamics depend explicitly on past prices rather than on the current price alone. To further illustrate the role that the theory of stochastic control of hereditary differential systems plays in real-world applications, a chapter is devoted to the development of a theory of combined optimal classical-impulse control that is specifically applicable to an infinite-time-horizon discounted optimal investment-consumption problem in which capital-gains taxes and fixed-plus-proportional transaction costs are taken into consideration. To address some computational issues, a chapter is devoted to Markov chain approximations and finite difference approximations of the viscosity solution of infinite-dimensional HJBEs and HJBVIs.

It is well known that the value functions of most optimal control problems, deterministic or stochastic, are not smooth enough to be classical solutions of HJBEs, HJBVIs, or HJBQVIs. Therefore, the theme of this monograph centers on characterizing the value function as the unique viscosity solution of these equations or inequalities. This monograph can be used as an introduction and/or a research reference for researchers and advanced graduate students who have a special interest in the theory and applications of optimal control of SHDEs. The monograph is intended to be as self-contained as possible.
Some knowledge of measure theory, real analysis, and functional analysis will be helpful. However, no background material is assumed beyond the basic theory of Itô integration and stochastic (ordinary) differential equations driven by a standard Brownian motion. Although the theory developed in this monograph can be extended, with additional effort, to hereditary differential equations driven by semimartingales such as Lévy processes, we restrict our treatment to systems driven by Brownian motion for the sake of clarity.

This monograph is largely based on the current account of relevant research results contributed by many researchers on controlled SHDEs and on research done by the author during his tenure as a faculty member at the University of Alabama in Huntsville and, more recently, as a member of the scientific staff at the U.S. Army Research Office. Most of the material is the product of recently published or not-yet-published results obtained by the author and his collaborators, Roger Youree, Tao Pang, and Moustapha Pemy. The list of references is certainly not exhaustive and is likely to have omitted works by other researchers. The author apologizes for any inadvertent omissions of their works.


The author would like to thank Boris Rozovskiĭ for motivating the submission of the manuscript to Springer and for his encouragement. Sincere thanks also go to Achi Dosanjh and Donna Lukiw, Mathematics Editors at Springer, for their editorial assistance, and to Frank Holzwarth, Frank Ganz, and Frank McGuckin for their help on matters related to svmono.cls and LaTeX. The author acknowledges partial support by a staff research grant (W911NF-04-D-0003) from the U.S. Army Research Office for the development of some of the recent research results published in journals and contained in this monograph. However, the views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Army.

Research Triangle Park, North Carolina, USA

Mou-Hsiung Chang October 2007

Contents

Preface
Notation
Introduction and Summary
   A. Basic Notation
   B. Stochastic Control Problems and Summary
      B1. Optimal Classical Control Problem
      B2. Optimal Stopping Problem
      B3. Discrete Approximations
      B4. Option Pricing
      B5. Hereditary Portfolio Optimization
   C. Organization of Monograph

1 Stochastic Hereditary Differential Equations
   1.1 Probabilistic Preliminaries
      1.1.1 Gronwall Inequality
      1.1.2 Stopping Times
      1.1.3 Regular Conditional Probability
   1.2 Brownian Motion and Itô Integrals
      1.2.1 Brownian Motion
      1.2.2 Itô Integrals
      1.2.3 Itô's Formula
      1.2.4 Girsanov Transformation
   1.3 SHDE with Bounded Memory
      1.3.1 Memory Maps
      1.3.2 The Assumptions
      1.3.3 Strong Solution
      1.3.4 Weak Solution
   1.4 SHDE with Unbounded Memory
      1.4.1 Memory Maps
   1.5 Markovian Properties
   1.6 Conclusions and Remarks

2 Stochastic Calculus
   2.1 Preliminary Analysis on Banach Spaces
      2.1.1 Bounded Linear and Bilinear Functionals
      2.1.2 Fréchet Derivatives
      2.1.3 C0-Semigroups
      2.1.4 Bounded and Continuous Functionals on Banach Spaces
   2.2 The Space C
   2.3 The Space M
      2.3.1 The Weighting Function ρ
      2.3.2 The S-Operator
   2.4 Itô and Dynkin Formulas
      2.4.1 {x_s, s ∈ [t, T]}
      2.4.2 {(S(s), S_s), s ≥ 0}
   2.5 Martingale Problem
   2.6 Conclusions and Remarks

3 Optimal Classical Control
   3.1 Problem Formulation
      3.1.1 The Controlled SHDE
      3.1.2 Admissible Controls
      3.1.3 Statement of the Problem
   3.2 Existence of Optimal Classical Control
      3.2.1 Admissible Relaxed Controls
      3.2.2 Existence Result
   3.3 Dynamic Programming Principle
      3.3.1 Some Probabilistic Results
      3.3.2 Continuity of the Value Function
      3.3.3 The DDP
   3.4 The Infinite-Dimensional HJB Equation
   3.5 Viscosity Solution
   3.6 Uniqueness
   3.7 Verification Theorems
   3.8 Finite-Dimensional HJB Equation
      3.8.1 Special Form of HJB Equation
      3.8.2 Finite Dimensionality of HJB Equation
      3.8.3 Examples
   3.9 Conclusions and Remarks

4 Optimal Stopping
   4.1 The Optimal Stopping Problem
   4.2 Existence of Optimal Stopping
      4.2.1 The Infinitesimal Generator
      4.2.2 An Alternate Formulation
      4.2.3 Existence and Uniqueness
   4.3 HJB Variational Inequality
   4.4 Verification Theorem
   4.5 Viscosity Solution
   4.6 A Sketch of a Proof of Theorem 4.5.7
   4.7 Conclusions and Remarks

5 Discrete Approximations
   5.1 Preliminaries
      5.1.1 Temporal and Spatial Discretizations
      5.1.2 Some Lemmas
   5.2 Semidiscretization Scheme
      5.2.1 First Approximation Step: Piecewise Constant Segments
      5.2.2 Second Approximation Step: Piecewise Constant Strategies
      5.2.3 Overall Discretization Error
   5.3 Markov Chain Approximation
      5.3.1 Controlled Markov Chains
      5.3.2 Optimal Control of Markov Chains
      5.3.3 Embedding the Controlled Markov Chain
      5.3.4 Convergence of Approximations
   5.4 Finite Difference Approximation
      5.4.1 Finite Difference Scheme
      5.4.2 Discretization of Segment Functions
      5.4.3 A Computational Algorithm
   5.5 Conclusions and Remarks

6 Option Pricing
   6.1 Pricing with Hereditary Structure
      6.1.1 The Financial Market
      6.1.2 Contingent Claims
   6.2 Admissible Trading Strategies
   6.3 Risk-Neutral Martingale Measures
   6.4 Pricing of Contingent Claims
      6.4.1 The European Contingent Claims
      6.4.2 The American Contingent Claims
   6.5 Infinite-Dimensional Black-Scholes Equation
      6.5.1 Equation Derivation
      6.5.2 Viscosity Solution
   6.6 HJB Variational Inequality
   6.7 Series Solution
      6.7.1 Derivations
      6.7.2 An Example
      6.7.3 Convergence of the Series
      6.7.4 The Algorithm
   6.8 Conclusions and Remarks

7 Hereditary Portfolio Optimization
   7.1 The Hereditary Portfolio Optimization Problem
      7.1.1 Hereditary Price Structure with Unbounded Memory
      7.1.2 The Stock Inventory Space
      7.1.3 Consumption-Trading Strategies
      7.1.4 Solvency Region
      7.1.5 Portfolio Dynamics and Admissible Strategies
      7.1.6 The Problem Statement
   7.2 The Controlled State Process
      7.2.1 The Properties of the Stock Prices
      7.2.2 Dynkin's Formula for the Controlled State Process
   7.3 The HJBQVI
      7.3.1 The Dynamic Programming Principle
      7.3.2 Derivation of the HJBQVI
      7.3.3 Boundary Values of the HJBQVI
   7.4 The Verification Theorem
   7.5 Properties of Value Function
      7.5.1 Some Simple Properties
      7.5.2 Upper Bounds of Value Function
   7.6 The Viscosity Solution
   7.7 Uniqueness
   7.8 Conclusions and Remarks

References
Index

Notation

• ≡ denotes "is defined as".
• Z, ℵ0, and ℵ denote the sets of integers, nonnegative integers, and positive integers, respectively.
• ℝ and ℝ_+ denote the sets of real numbers and nonnegative real numbers, respectively.
• Q denotes the set of all rational numbers and Q_+ the set of all nonnegative rational numbers.
• ℝ^n denotes the n-dimensional Euclidean space equipped with the inner product x · y = Σ_{i=1}^n x_i y_i and the Euclidean norm |x| = (x · x)^{1/2} for x = (x_1, x_2, ..., x_n), y = (y_1, y_2, ..., y_n) ∈ ℝ^n.
• ℝ^n_+ = {(x_1, x_2, ..., x_n) ∈ ℝ^n | x_i ≥ 0, i = 1, 2, ..., n}.
• ℝ^{n×m} denotes the space of all n × m real matrices A = [a_ij], equipped with the norm |A| = (Σ_{i=1}^n Σ_{j=1}^m |a_ij|²)^{1/2}.
• A^T denotes the transpose of the matrix A.
• S^n denotes the set of all n × n symmetric matrices.
• trace A = Σ_{i=1}^n a_ii, the trace of the matrix A = [a_ij] ∈ S^n.
• a ∨ b = max{a, b}, a ∧ b = min{a, b}, a⁺ = a ∨ 0, and a⁻ = −(a ∧ 0) for all real numbers a and b.
• m : [0, T] × C([t − r, T]; ℝ^n) → C denotes the memory map.
• Ξ denotes a generic real separable Banach or Hilbert space equipped with the norm ‖·‖_Ξ.
• B(Ξ) denotes the Borel σ-algebra of subsets of Ξ, i.e., the smallest σ-algebra of subsets of Ξ that contains all open (and hence all closed) subsets of Ξ.
• Ξ* denotes the class of bounded linear functionals on Ξ, equipped with the operator norm ‖·‖*_Ξ.
• Ξ† denotes the class of bounded bilinear functionals on Ξ, equipped with the operator norm ‖·‖†_Ξ.
• L(Ξ, Θ) denotes the collection of bounded linear transformations (maps) Φ : Ξ → Θ, equipped with the operator norm ‖Φ‖_{L(Ξ,Θ)}.
• L(Ξ) = L(Ξ, Ξ).
• C_b(Ξ) denotes the class of bounded continuous functions Φ : Ξ → ℝ, equipped with the norm ‖Φ‖_{C_b(Ξ)} = sup_{x∈Ξ} |Φ(x)|.
• DΦ(φ) ∈ Ξ* denotes the first-order Fréchet derivative of Φ : Ξ → ℝ at φ ∈ Ξ.
• D²Φ(φ) ∈ Ξ† denotes the second-order Fréchet derivative of Φ : Ξ → ℝ at φ ∈ Ξ.
• C²(Ξ) denotes the space of twice continuously Fréchet differentiable functions Φ : Ξ → ℝ.
• C²_lip(Ξ) denotes the class of Φ ∈ C²(Ξ) for which there exists a constant K > 0 such that ‖D²Φ(x) − D²Φ(y)‖†_Ξ ≤ K ‖x − y‖_Ξ for all x, y ∈ Ξ.
• C([a, b]; ℝ^n) (−∞ < a < b < ∞) denotes the separable Banach space of continuous functions φ : [a, b] → ℝ^n equipped with the sup-norm ‖φ‖_∞ = sup_{t∈[a,b]} |φ(t)|.
• C^{1,2}_lip([0, T] × Ξ) denotes the class of functions Φ : [0, T] × Ξ → ℝ that are continuously differentiable with respect to the first variable t ∈ [0, T], twice continuously Fréchet differentiable with respect to the second variable x ∈ Ξ, and such that Φ(t, ·) ∈ C²_lip(Ξ) uniformly for all t ∈ [0, T].
• ∂_t Φ(t, x_1, ..., x_n) = ∂Φ/∂t, ∂_i Φ(t, x_1, ..., x_n) = ∂Φ/∂x_i, and ∂_ij Φ(t, x_1, ..., x_n) = ∂²Φ/∂x_i∂x_j for i, j = 1, 2, ..., n.
• L²(a, b; ℝ^n) (−∞ < a < b < ∞) denotes the separable Hilbert space of Lebesgue square-integrable functions φ : [a, b] → ℝ^n equipped with the L²-norm ‖φ‖_2 = (∫_a^b |φ(t)|² dt)^{1/2}.
• D([a, b]; ℝ^n) (−∞ < a < b < ∞) denotes the space of functions φ : [a, b] → ℝ^n that are continuous from the right on [a, b) and have finite left-hand limits on (a, b]. The space D([a, b]; ℝ^n) is a complete metric space when equipped with the Skorohod metric.
• r > 0 is the duration of the bounded memory or delay.
• C = C([−r, 0]; ℝ^n).
• M ≡ ℝ × L²_ρ((−∞, 0]; ℝ) denotes the ρ-weighted separable Hilbert space equipped with the inner product ⟨(x, φ), (y, ϕ)⟩_M = xy + ∫_{−∞}^0 ρ(θ)φ(θ)ϕ(θ) dθ for all (x, φ), (y, ϕ) ∈ M, and the Hilbertian norm ‖(x, φ)‖_ρ = ⟨(x, φ), (x, φ)⟩_ρ^{1/2}. In the above, ρ : (−∞, 0] → [0, ∞) is a certain given function.
• (Ω, F, P, F) (where F = {F(s), s ≥ 0}) denotes a complete filtered probability space that satisfies the usual conditions.
• E[X|G] denotes the conditional expectation of the Ξ-valued random variable X (defined on (Ω, F, P)) given the sub-σ-algebra G ⊂ F. Denote E[X] = E[X|F].
• L²(Ω, Ξ; G) denotes the collection of G-measurable (G ⊂ F), Ξ-valued random variables X such that ∫_Ω ‖X(ω)‖²_Ξ dP(ω) < ∞, with the norm ‖X‖_{L²(Ω,Ξ)} ≡ (∫_Ω ‖X(ω)‖²_Ξ dP(ω))^{1/2}. Denote L²(Ω, Ξ) = L²(Ω, Ξ; F).
• T_a^b(F) denotes the collection of F-stopping times τ such that 0 ≤ a ≤ τ ≤ b, P-a.s.; T_a^b(F) = T(F) when a = 0 and b = ∞.
• (Ω, F, P, F, W(·)) (or simply W(·)) denotes the standard Brownian motion of appropriate dimension.
• L_w Φ(x_1, ..., x_n) = (1/2) Σ_{j=1}^m ∂²_j Φ(x_1, ..., x_n).
• ∇_x Φ(x) and ∇²_x Φ(x) denote, respectively, the gradient and the Hessian matrix of Φ : ℝ^n → ℝ.
• 1_{0} : [−r, 0] → ℝ, where 1_{0}(0) = 1 and 1_{0}(θ) = 0 for −r ≤ θ < 0.
• C ⊕ B = {φ + v1_{0} | φ ∈ C, v ∈ ℝ^n}, equipped with the norm ‖φ + v1_{0}‖_{C⊕B} = ‖φ‖_C + |v|.
• Γ̄ is the continuous isometric extension of Γ from C* (respectively, C†) to (C ⊕ B)* (respectively, (C ⊕ B)†).
• φ̃ : [−r, T] → ℝ^n (respectively, φ̃ : (−∞, T] → ℝ^n) is the extension of φ : [−r, 0] → ℝ^n (respectively, φ : (−∞, 0] → ℝ^n), where φ̃(t) = φ(0) if t ≥ 0 and φ̃(t) = φ(t) if t ≤ 0.
• {T(t), t ≥ 0} ⊂ L(Ξ) denotes a C0-semigroup of bounded linear operators on Ξ.
• J(t, ψ; u(·)) denotes the discounted objective functional for the optimal classical control problem.
• U[t, T] denotes the class of admissible controls for the optimal classical control problem.
• Û[t, T] denotes the class of admissible relaxed controls for the optimal classical control problem.
• J(t, ψ; τ) denotes the discounted objective functional for the optimal stopping problem.
• V(t, ψ) denotes the value function for the optimal classical control problem, the optimal stopping problem, the pricing function, etc.
• A denotes the weak infinitesimal generator of a Ξ-valued Markov process, with domain D(A), where Ξ = C or Ξ = M.
• S denotes the shift operator with domain D(S).
• ⌊a⌋ denotes the integral part of a ∈ ℝ.
• h(N) = r/N for N ∈ ℵ.
• T(N) = ⌊T/h(N)⌋ h(N).
• t_N = ⌊t/h(N)⌋ h(N).
• G_κ denotes the liquidating function for the hereditary portfolio optimization problem.
• S_κ denotes the solvency region for the hereditary portfolio optimization problem.
• M_κ denotes the intervention operator for the hereditary portfolio optimization problem.
• ∂S_κ denotes the boundary of S_κ.
• (C, T) denotes the class of admissible consumption-trading strategies.
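The mesh notation at the end of this list (h(N) = r/N, T(N) = ⌊T/h(N)⌋ h(N), and t_N = ⌊t/h(N)⌋ h(N)) can be illustrated with a short numerical sketch. The helper names and the sample values of r, N, T, and t below are illustrative assumptions, not taken from the monograph.

```python
import math

def h(N, r):
    """Mesh size h(N) = r / N for a memory of duration r."""
    return r / N

def grid_point(t, N, r):
    """Project t onto the mesh: floor(t / h(N)) * h(N)."""
    step = h(N, r)
    return math.floor(t / step) * step

# Assumed values: memory duration r = 1.0 split into N = 4 steps.
r, N = 1.0, 4
print(h(N, r))                # 0.25
print(grid_point(2.0, N, r))  # T(N) for T = 2.0 gives 2.0
print(grid_point(0.6, N, r))  # t_N for t = 0.6 gives 0.5
```

The point of T(N) and t_N is visible in the last line: a generic time 0.6 is pulled back to the nearest grid point 0.5 below it, so discretized quantities always live on the mesh of width h(N).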

Introduction and Summary

This monograph develops the Hamilton-Jacobi-Bellman theory via the dynamic programming principle for a class of optimal control problems for stochastic hereditary differential equations (SHDEs) driven by a standard Brownian motion and with a bounded memory of duration r > 0 or an unbounded (r = ∞) but fading memory. One characteristic of these controlled stochastic equations is that their drift and diffusion coefficients at any time s > 0 depend not only on the present state x(s) but also explicitly on the finite past history of the state, x(s + θ), −r ≤ θ ≤ 0, over the time interval [s − r, s], or on the infinite past history of the state, x(s + θ), −∞ < θ ≤ 0, over the time interval (−∞, s]. In the literature, these stochastic hereditary differential equations are often called stochastic delay differential equations (SDDEs), stochastic functional differential equations (SFDEs), or stochastic differential equations with aftereffects (especially among Russian and/or Eastern European mathematicians). The mathematical models described by SHDEs are ubiquitous and represent a class of stochastic infinite-dimensional systems with a wide range of applications in physics (see, e.g., Frank [Fra02]), chemistry (see, e.g., Beta et al. [BBMRE03], Singh and Fogler [SF98]), biology (see, e.g., Longtin et al. [LMBM90], Vasilakos and Beuter [VB93], Frank and Beek [FB01], Peterka [Pet00]), engineering and communication (see, e.g., Kushner [Kus05], [Kus06], Ramezani et al. [RBCY06], Yang et al. [YBCR06]), economics/finance (see, e.g., Chang and Youree [CY99], [CY07], Ivanov and Swishchuk [IS04], Chang et al. [CPP07d]), advertising (see, e.g., Gozzi and Marinelli [GM04], Gozzi et al. [GMS06]), and vintage capital (see Fabbri et al. [FFG06], Fabbri and Gozzi [FG06], Fabbri [Fab06]), just to mention a few.
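As a concrete, purely illustrative instance of such an equation, consider a scalar SDDE with a single point delay, dx(s) = f(x(s), x(s − r)) ds + g(x(s), x(s − r)) dW(s), whose coefficients read the state one memory-length in the past. The following Euler-Maruyama sketch is an invention for illustration — the linear delayed drift, the noise level, and the constant initial segment are all assumptions, not taken from the monograph.

```python
import math
import random

def simulate_sdde(T=5.0, r=1.0, dt=0.01, seed=0):
    """Euler-Maruyama for dx = f(x(s), x(s-r)) ds + g(x(s), x(s-r)) dW(s).

    The constant initial segment x(theta) = 1 for theta in [-r, 0] plays
    the role of the initial datum psi in C([-r, 0]; R).
    """
    rng = random.Random(seed)
    lag = int(round(r / dt))      # number of grid points in one delay
    steps = int(round(T / dt))
    path = [1.0] * (lag + 1)      # initial segment on [-r, 0]
    for _ in range(steps):
        x_now = path[-1]          # current state x(s)
        x_past = path[-1 - lag]   # delayed state x(s - r)
        drift = -0.5 * x_past     # assumed delayed mean reversion
        diffusion = 0.2 * x_now   # assumed multiplicative noise
        dW = rng.gauss(0.0, math.sqrt(dt))
        path.append(x_now + drift * dt + diffusion * dW)
    return path

path = simulate_sdde()
print(len(path))  # 601 grid points: 101 for [-r, 0] plus 500 for (0, T]
```

The indexing is the whole point: each increment reads both path[-1] (the current value x(s)) and path[-1 - lag] (the past value x(s − r)), which is precisely the explicit dependence on past history that makes the system infinite dimensional.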
The wide applicability of these systems is mainly due to the fact that the reaction of real-world systems to exogenous signals/forces is never "instantaneous"; it takes some time, and that time can be translated into mathematical language by delay terms. To illustrate some of these examples and to describe the class of optimal control problems for the SHDEs treated in this monograph, we first adopt the following conventional notation.


A. Basic Notation

Throughout the volume, we adopt the following conventional notation:

1. All (time) intervals I will be interpreted as I ∩ ℝ. In particular, the interval [a, b] will be interpreted as [a, b] ∩ ℝ for all −∞ ≤ a < b ≤ ∞. Therefore, if b = ∞ and a > −∞ (respectively, b < ∞ and a = −∞), then [a, b] will be interpreted as [a, ∞) (respectively, (−∞, b]). Other intervals such as (a, b], [a, b), and (a, b) are interpreted similarly.

2. For 0 < r < ∞, let C ≡ C([−r, 0]; ℝ^n) be the space of continuous functions φ : [−r, 0] → ℝ^n. The space C is a real separable Banach space equipped with the uniform topology, that is,

   ‖φ‖ = sup_{θ ∈ [−r, 0]} |φ(θ)|,   φ ∈ C.

3. For the unbounded memory case r = ∞, we will work with M ≡ ℝ × L²_ρ((−∞, 0]; ℝ), the separable ρ-weighted Hilbert space equipped with the inner product

   ⟨(x, φ), (y, ϕ)⟩_M = xy + ⟨φ, ϕ⟩_{ρ,2} = xy + ∫_{−∞}^0 φ(θ)ϕ(θ)ρ(θ) dθ,   ∀(x, φ), (y, ϕ) ∈ M,

and the Hilbertian norm ‖(x, φ)‖_M = ⟨(x, φ), (x, φ)⟩_M^{1/2}. In the above, ρ : (−∞, 0] → [0, ∞) is a given function that satisfies the following two conditions:

Condition (A1). ρ is summable on (−∞, 0], that is,

   0 < ∫_{−∞}^0 ρ(θ) dθ < ∞.

Condition (A2). For every t ≤ 0, one has

   K̄(t) = ess sup_{θ ∈ (−∞, 0]} ρ(t + θ)/ρ(θ) ≤ K̄ < ∞,
   K(t) = ess sup_{θ ∈ (−∞, 0]} ρ(θ)/ρ(t + θ) < ∞.

Note that ρ will be referred to as the fading function for the case of infinite memory r = ∞.

4. For both bounded (0 < r < ∞) and unbounded (r = ∞) memory, we adopt the following notation, commonly used for (deterministic) functional differential equations (see, e.g., Hale [Hal77], Hale and Lunel [HL93]):


If 0 < T ≤ ∞ and ψ : [−r, T] → ℝ^n (r ≤ ∞) is a measurable function, we define, for each s ∈ [0, T], the function ψ_s : [−r, 0] → ℝ^n by

ψ_s(θ) = ψ(s + θ),  θ ∈ [−r, 0].
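In a discrete-time computation, the segment ψ_s is simply a time window of the sample path re-indexed by θ ∈ [−r, 0]. A minimal sketch (the helper `segment` is our own illustration, not notation from the book):

```python
def segment(path, times, s, r):
    """Return the segment x_s as a list of (theta, x(s+theta)) pairs,
    where theta ranges over the grid points of [-r, 0] available in `times`.
    `path[i]` is the value of x at `times[i]`."""
    return [(t - s, x) for t, x in zip(times, path) if s - r <= t <= s]

# A path observed on the grid {-2, -1, 0, 1, 2} with delay r = 2:
times = [-2, -1, 0, 1, 2]
path = [5.0, 6.0, 7.0, 8.0, 9.0]

# The segment x_1 is the piece of the path over [1 - 2, 1] = [-1, 1],
# re-indexed by theta: x_1(theta) = x(1 + theta).
seg = segment(path, times, s=1, r=2)
assert seg == [(-2, 6.0), (-1, 7.0), (0, 8.0)]
# Distinguish x(1) (a point value) from x_1 (a function on [-r, 0]):
assert path[times.index(1)] == 8.0
```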

In particular, if ψ ∈ C([−r, T]; ℝ^n) for r < ∞, then ψ_s ∈ C, and if ψ ∈ L²(−r, T; ℝ^n), then ψ_s ∈ L²(−r, 0; ℝ^n). The same notation is applied to any stochastic process x(·) = {x(s), s ∈ [−r, T]}: for each s ∈ [0, T], x_s(θ) = x(s + θ), θ ∈ [−r, 0], represents the segment of the process x(·) over the finite time interval [s − r, s] or the infinite time interval (−∞, s]. Throughout the monograph, the reader is cautioned to distinguish between x(s), the value of x(·) at time s, and x_s, the segment of x(·) over the time interval [s − r, s] or (−∞, s].

5. Throughout, K > 0 denotes a generic positive constant whose value may change from line to line. We sometimes write K(p) if the constant depends explicitly on the parameter or expression p.

6. The numbering system of this monograph is as follows: Section x.y stands for Section y of Chapter x, and Subsection x.y.z means Subsection z in Section y of Chapter x. Definitions, lemmas, theorems, and remarks are numbered in the same way.

Remarks. 1. In studying finite time horizon control problems, we normally denote by T ≥ 0 the (fixed) terminal time and by t ∈ [0, T] a variable initial time. For infinite time horizon control problems (i.e., T = ∞), we normally work with an autonomous controlled or uncontrolled SHDE, and in this case we can and will assume the initial time t = 0.

2. The state variable for each of the control problems will be either x_s ∈ C, s ∈ [t, T] (mainly in Chapters 3, 4, 5, and 6) or (S(s), S_s) ∈ M, s ≥ 0 (mainly in Chapter 7), depending on whether the initial data are x_t = ψ ∈ C or (S(0), S_0) = (ψ(0), ψ) ∈ M, as well as on the dynamics of the controlled SHDEs involved.

B. Brief Descriptions of the Stochastic Control Problems and Summary of Results

This monograph treats the discounted optimal classical control, optimal stopping, and optimal classical-impulse control problems for state systems described by a certain SHDE over a finite or an infinite time horizon. To illustrate the concepts and simplify the treatment, we consider only SHDEs driven by a standard Brownian motion of an appropriate dimension. Although they are increasingly important in real-world applications, the controlled SHDEs


driven by Lévy processes or fractional Brownian motions are not the subjects of investigation in this monograph. The omission of this type of SHDE is due to the fact that the treatment of SHDEs driven by Lévy processes, although possible by using the concept of stochastic integration with respect to a semimartingale, requires additional technical background and the introduction of complicated notation that often unnecessarily obscures the presentation of the main ideas. On the other hand, a general and complete theory of SHDEs driven by fractional Brownian motion, together with the corresponding optimal control problems, has yet to be developed. Interested readers are referred to Protter [Pro95] and Bensoussan and Sulem [BS05] and the references contained therein for a theory of jump SDEs (without delay) and the corresponding optimal control problems. The theory of SDEs without delay (r = 0) but driven by a fractional Brownian motion currently draws a great deal of research attention, due to its applications in mathematical finance (see, e.g., Øksendal [Øks04]) and to the recently discovered phenomena of long-range dependence and self-similarity in modern data networks. The literature on optimal control of SDEs driven by fractional Brownian motion, however, is rather scarce (see, e.g., Mazliak and Nourdin [MN06], Kleptsyna et al. [KLV04]), and it is even scarcer for SHDEs driven by fractional Brownian motion. To the best of the author's knowledge, there are only two existing papers (see Ferrante and Rovira [FR05], Prakasa-Rao [Pra03]) that treat SHDEs driven by a fractional Brownian motion, and neither addresses the corresponding optimal control problems.

Remark. In this monograph, we choose to treat the maximization of an expected objective functional instead of the minimization of an expected cost functional for each of the optimal control problems described below. This is because the maximization and minimization problems are related via the simple relation

sup_{u∈U} J(u) = − inf_{u∈U} [−J(u)]

for any real-valued function J(u) depending on the control variable u in a control set U. Therefore, any minimization problem of practical interest can easily be reformulated as a maximization problem through the above relation.

As an introduction and a summary, each of the aforementioned optimal control problems and their corresponding solutions is briefly described below. The rigorous formulation of the problems and a detailed description of the results obtained can be found in Chapters 3 through 7. The results on the existence and uniqueness of strong and weak solutions of the SHDEs and the Markov properties of their segmented solution processes are the subjects of Chapter 1. The stochastic calculus, including the available Dynkin and Itô formulas for the segmented solution processes (in C or in M), is given in Chapter 2.
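As a quick numerical sanity check of the sup–inf relation in the remark above, consider a toy objective J over a finite control grid (both the quadratic J and the grid are hypothetical choices of ours):

```python
# Toy objective J(u) on a finite control set U; the specific J is illustrative.
U = [k / 100.0 for k in range(-100, 101)]   # U = {-1.00, -0.99, ..., 1.00}
J = lambda u: 1.0 - (u - 0.3) ** 2          # maximized at u = 0.3

sup_J = max(J(u) for u in U)
neg_inf_negJ = -min(-J(u) for u in U)

# sup_u J(u) = -inf_u [-J(u)]
assert sup_J == neg_inf_negJ
assert abs(sup_J - 1.0) < 1e-12  # attained at u = 0.3, which lies on the grid
```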


This introduction and summary chapter is intended to serve as a preview of the material to follow throughout the monograph. Readers who feel that such a crash course is not helpful are recommended to go directly to Chapter 1 without loss of reading continuity.

B1. The Optimal Classical Control Problem

The controlled SHDE for the discounted optimal control problem with bounded memory 0 < r < ∞ and over a finite time horizon is normally described as follows:

dx(s) = f(s, x_s, u(s)) ds + g(s, x_s, u(s)) dW(s),  s ∈ [t, T],  (0.1)

where
(i) 0 < T < ∞ is a fixed terminal time;
(ii) W(·) = {W(s) = (W_1(s), W_2(s), . . . , W_m(s)), s ≥ 0} is an m-dimensional standard Brownian motion defined on a complete filtered probability space (Ω, F, P; F), with F = {F(s), s ≥ 0} being the P-augmented natural filtration generated by the Brownian motion W(·); therefore, F(s) = σ{W(t), 0 ≤ t ≤ s} ∨ N, where N contains all P-null sets, that is, N = {A ⊂ Ω | there exists B ∈ F such that A ⊂ B and P(B) = 0};
(iii) the controlled drift coefficient f and the controlled diffusion coefficient g are appropriate deterministic functions from [0, T] × C × U into ℝ^n and ℝ^{n×m}, respectively;
(iv) the process u(·) = {u(s), s ∈ [t, T]} is a (classical) control process taking values in a control set U in a certain Euclidean space.

Given initial data (t, ψ) ∈ [0, T] × C, the main objective of the control problem is to find an admissible control u*(·) ∈ U[t, T] (see Section 3.1 for the definition of an admissible control and the class of admissible controls U[t, T]) that maximizes the following discounted objective functional:

J(t, ψ; u(·)) = E[ ∫_t^T e^{−α(s−t)} L(s, x_s, u(s)) ds + e^{−α(T−t)} Ψ(x_T) ],  (0.2)

where α ≥ 0 denotes a discount factor, and L : [0, T] × C × U → ℝ and Ψ : C → ℝ are, respectively, the instantaneous and terminal reward functions, which satisfy the polynomial growth conditions stated in Section 3.1.
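To make (0.1)–(0.2) concrete, the sketch below simulates a scalar SHDE with a single point delay, dx(s) = u·x(s − r) ds + σ dW(s), under a constant control, and estimates the discounted objective by Monte Carlo with a naive Euler scheme. All model choices (f, g, L, Ψ, and the parameters) are illustrative assumptions of ours; the rigorous discretization theory is the subject of Chapter 5.

```python
import math, random

random.seed(0)

r, T, dt = 1.0, 2.0, 0.01          # delay, horizon, Euler step (r/dt integer)
alpha, sigma, u = 0.05, 0.2, 0.5   # discount, noise level, constant control
lag = int(round(r / dt))           # number of grid points in one delay length

def simulate_J():
    """One Euler path of dx = u*x(s-r) ds + sigma dW, initial segment psi = 1,
    returning the discounted objective with L(s, x_s, u) = -x(s)^2, Psi = -x(T)^2."""
    n = int(round(T / dt))
    x = [1.0] * (lag + 1)          # initial segment psi(theta) = 1 on [-r, 0]
    J = 0.0
    for k in range(n):
        s = k * dt
        J += math.exp(-alpha * s) * (-x[-1] ** 2) * dt      # running reward
        drift = u * x[-1 - lag]                              # f uses x(s - r)
        x.append(x[-1] + drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    return J + math.exp(-alpha * T) * (-x[-1] ** 2)          # terminal reward

est = sum(simulate_J() for _ in range(200)) / 200            # Monte Carlo J
assert est < 0.0   # rewards are nonpositive by construction
```

Note that the drift at time s depends on the path value one delay length earlier, which is exactly why the segment x_s, not the point x(s), is the natural state variable.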


The value function (as a function of the initial datum (t, ψ) ∈ [0, T] × C) V : [0, T] × C → ℝ of the control problem is defined by

V(t, ψ) = sup_{u(·)∈U[t,T]} J(t, ψ; u(·)).  (0.3)

Note that the optimal control problem over a finite time horizon briefly described above is referred to as an optimal classical control problem. This is because, although complicated by the bounded memory (0 < r < ∞), it is among the first type of continuous-time stochastic control problems treated in the literature (see, e.g., Fleming and Rishel [FR75] and Fleming and Soner [FS93] for controlled diffusion processes without delay), in which the effects of the control process u(·) on the dynamics of the state process x(·) enter through the controlled drift f and diffusion g and, therefore, do not cause any jump discontinuity of the controlled state trajectories. This is in contrast to recent studies of singular or impulse control problems for SDEs in the literature (see, e.g., Bensoussan and Lions [BL84], Zhu [Zhu91] and the references contained therein). The treatment and results of the optimal classical control problem are the main objectives of Chapter 3.

In addition to the optimal classical control problem described above, we also consider the optimal relaxed control problem, in which one wants to find an admissible relaxed control µ(·, ·) that maximizes the expected objective functional

Ĵ(t, ψ; µ(·, ·)) = E[ ∫_t^T ∫_U e^{−α(s−t)} L(s, x_s, u) µ̇(s, du) ds + e^{−α(T−t)} Ψ(x_T) ]  (0.4)

among the class of admissible relaxed controls Û[t, T], subject to the following controlled equation for all s ∈ [t, T]:

dx(s) = ( ∫_U f(s, x_s, u) µ̇(s, du) ) ds + ( ∫_U g(s, x_s, u) µ̇(s, du) ) dW(s).  (0.5)

We again define the value function V̂ : [0, T] × C → ℝ as

V̂(t, ψ) = sup_{µ(·,·)∈Û[t,T]} Ĵ(t, ψ; µ(·, ·)).

It is known that any admissible classical control u(·) ∈ U[t, T] can be written as an admissible relaxed control via

µ_{u(·)}(B) = ∫_t^T ∫_U 1_{{(s,u)∈B}} δ_{u(s)}(du) ds,  B ∈ B([t, T] × U),


where δ_u is the Dirac measure at u ∈ U. However, the converse is not true in general.

Since Û[t, T] is compact under weak convergence, by extending the class of admissible classical controls U[t, T] to the class of admissible relaxed controls Û[t, T], one can construct a weakly convergent sequence of admissible relaxed controls {µ^(k)(·, ·)}_{k=1}^∞ such that

lim_{k→∞} sup_{µ^(k)∈Û[t,T]} Ĵ(t, ψ; µ^(k)(·, ·)) = V̂(t, ψ) = V(t, ψ).

This establishes the existence of an optimal classical control. The approach described for the existence of an optimal classical control is also a main ingredient of the semidiscretization scheme and the Markov chain approximation of the optimal control problem in Chapter 5.

The main results obtained include the heuristic derivation, via the dynamic programming principle (see Larssen [Lar02]) and under smoothness conditions on the value function V : [0, T] × C → ℝ, of an infinite-dimensional Hamilton-Jacobi-Bellman equation (HJBE) over a finite time horizon. The HJBE can be briefly described as follows (see Section 3.4 for details):

αV(t, ψ) − ∂_t V(t, ψ) − max_{u∈U} [A_u V(t, ψ) + L(t, ψ, u)] = 0,  (0.6)

with the terminal condition V(T, ·) = Ψ(·) on C, where the operator A_u, u ∈ U, is given by

A_u V(t, ψ) ≡ SV(t, ψ) + DV(t, ψ)(f(t, ψ, u)1_{0}) + (1/2) Σ_{j=1}^m D²V(t, ψ)(g(t, ψ, u)e_j 1_{0}, g(t, ψ, u)e_j 1_{0}),

where e_j is the jth unit vector of the standard basis in ℝ^m. The above two expressions contain terms that involve the infinitesimal generator SV of the semigroup of shift operators and the extensions DV(t, ψ) and D²V(t, ψ) of the first- and second-order Fréchet derivatives of V with respect to its second variable, which takes values in C. Brief explanations of these terms in the differential operator A_u, u ∈ U, are given below; the precise definitions can be found in Sections 2.2 and 2.3.

First, SV(t, ψ) is defined as

SV(t, ψ) = lim_{ε↓0} ( V(t, ψ̃_ε) − V(t, ψ) ) / ε,  (0.7)

where ψ̃ : [−r, T] → ℝ^n is the extension of ψ ∈ C defined by

ψ̃(t) = ψ(0) for t ≥ 0, and ψ̃(t) = ψ(t) for t ∈ [−r, 0).  (0.8)


Second, DV(t, ψ) ∈ C* and D²V(t, ψ) ∈ C† are the first- and second-order Fréchet derivatives of V with respect to its second argument ψ ∈ C, where C* and C† are the spaces of bounded linear and bilinear functionals on C, respectively. In addition, DV(t, ψ) ∈ (C ⊕ B)* denotes the extension of DV(t, ψ) from C* to (C ⊕ B)*, and D²V(t, ψ) ∈ (C ⊕ B)† denotes the extension of D²V(t, ψ) from C† to (C ⊕ B)† (see Section 2.4 for both). Finally, the function 1_{0} : [−r, 0] → ℝ is defined by

1_{0}(θ) = 0 for θ ∈ [−r, 0), and 1_{0}(θ) = 1 for θ = 0.

These terms are unique to SHDEs, and the HJBE described above is a nontrivial extension of its counterparts for finite-dimensional controlled diffusion processes without delay (r = 0) (see, e.g., [FS93]) and for controlled stochastic partial differential equations. For comparison purposes, the optimal control problem for finite-dimensional diffusion processes is described below.

The finite-dimensional optimal control problem consists of a controlled SDE described by

dx(s) = f̄(s, x(s), u(s)) ds + ḡ(s, x(s), u(s)) dW(s),  s ∈ [t, T],

with the initial datum (t, x) ∈ [0, T] × ℝ^n and the objective functional

J̄(t, x; u(·)) = E[ ∫_t^T e^{−α(s−t)} L̄(s, x(s), u(s)) ds + e^{−α(T−t)} Ψ̄(x(T)) ],

where f̄, ḡ, L̄, and Ψ̄ are appropriate functions defined on [0, T] × ℝ^n × U (respectively, ℝ^n) instead of the infinite-dimensional spaces [0, T] × C × U (respectively, C). In this case, the value function V̄ : [0, T] × ℝ^n → ℝ is characterized by the following well-known finite-dimensional HJBE:

αV̄(t, x) − ∂_t V̄(t, x) − max_{u∈U} [L_u V̄(t, x) + L̄(t, x, u)] = 0,

with the terminal condition V̄(T, x) = Ψ̄(x) for all x ∈ ℝ^n, where

L_u V̄(t, x) = ∇_x V̄(t, x) · f̄(t, x, u) + (1/2) ḡ(t, x, u) · ∇²_x V̄(t, x) ḡ(t, x, u).

As in most optimal control problems, deterministic or stochastic, finite- or infinite-dimensional, it is not known whether the value function is smooth enough to be a solution of the HJBE in the classical sense. Therefore, the concept of a viscosity solution is developed for the infinite-dimensional HJBE (0.6). It is also shown in Sections 3.5 and 3.6 that the value function V : [0, T] × C → ℝ is the unique viscosity solution of the HJBE (0.6). To characterize the optimal state-control pair (x*(·), u*(·)), a generalized verification


theorem in the viscosity solution framework is conjectured without proof in Section 3.7. Some special cases of the HJBE and a couple of application examples are illustrated in Section 3.8.

B2. The Optimal Stopping Problem

To describe the optimal stopping problem, we consider the following uncontrolled SHDE with bounded memory 0 < r < ∞:

dx(s) = f(s, x_s) ds + g(s, x_s) dW(s),  s ∈ [t, T],  (0.9)

where, again, W(·) = {W(s) = (W_1(s), W_2(s), . . . , W_m(s)), s ≥ t} is an m-dimensional standard Brownian motion as described in B1, and f : [0, T] × C → ℝ^n and g : [0, T] × C → ℝ^{n×m} are functions that represent, respectively, the drift and diffusion components of the equation.

Let G(t) = {G(t, s), t ≤ s ≤ T} be the filtration of the solution process {x(s), s ∈ [t − r, T]} of (0.9), that is, G(t, s) = σ{x(λ), t ≤ λ ≤ s ≤ T}. For each initial time t ∈ [0, T], let T_t^T(G) (or simply T_t^T when there is no danger of ambiguity) be the class of G(t)-stopping times τ : Ω → ℝ̄ such that t ≤ τ ≤ T. Let L : [0, T] × C → ℝ and Ψ : C → ℝ be functions that represent the instantaneous reward rate and the terminal reward of the optimal stopping problem, respectively. Given initial data (t, ψ) ∈ [0, T] × C for the SHDE (0.9), the main objective of the optimal stopping problem is to find a stopping time τ* ∈ T_t^T that maximizes the following discounted objective functional:

J(t, ψ; τ) = E[ ∫_t^τ e^{−α(s−t)} L(s, x_s) ds + e^{−α(τ−t)} Ψ(x_τ) ].

In this case, the value function V : [0, T] × C → ℝ is defined to be

V(t, ψ) = sup_{τ∈T_t^T} J(t, ψ; τ).  (0.10)

In Chapter 4, the above optimal stopping problem is approached via two different methods: (1) the construction of the least superharmonic majorant of the terminal reward functional of an equivalent optimal stopping problem, and (2) characterization of the value function of the optimal stopping problem in terms of an infinite-dimensional HJB variational inequality (HJBVI). Method (1) is presented in Section 4.2; the main result of that section establishes the existence of an optimal stopping time.
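For intuition about method (1), the discrete-time analogue of the least superharmonic majorant is the Snell envelope, computed here by iterating the dynamic programming operator to its fixed point. The random-walk example below is a toy illustration of ours, not the construction of Section 4.2 (which works on the segment space C):

```python
# Discrete Snell envelope via value iteration: the least function V with
# V >= Psi (majorant) and V >= disc * one-step average of V (superharmonic).
# The stopping reward Psi and all parameters are illustrative choices.
disc = 0.9                            # one-period discount factor
Psi = lambda x: max(3 - abs(x), 0)    # stopping reward
states = range(-6, 7)

V = {x: float(Psi(x)) for x in states}
for _ in range(200):                  # iterate the DP operator to its fixed point
    V = {x: max(Psi(x), disc * 0.5 * (V.get(x + 1, 0.0) + V.get(x - 1, 0.0)))
         for x in states}

# The envelope dominates the reward (majorant property) ...
assert all(V[x] >= Psi(x) for x in states)
# ... and is superharmonic: V >= disc * one-step average of V.
assert all(V[x] >= disc * 0.5 * (V.get(x + 1, 0.0) + V.get(x - 1, 0.0)) - 1e-6
           for x in states)
# It is optimal to stop where V = Psi, e.g., at the reward's peak x = 0.
assert abs(V[0] - Psi(0)) < 1e-6
```

Stopping as soon as the envelope touches the reward is exactly the discrete version of the optimal stopping rule.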


It is shown in Section 4.3 that the value function V : [0, T] × C → ℝ satisfies the following infinite-dimensional HJBVI if it is sufficiently smooth (see the Smoothness Conditions defined in Section 2.6):

max{Ψ − V, ∂_t V + AV + L − αV} = 0,  (0.11)

where

AV(t, ψ) ≡ SV(t, ψ) + DV(t, ψ)(f(t, ψ)1_{0}) + (1/2) Σ_{j=1}^m D²V(t, ψ)(g(t, ψ)e_j 1_{0}, g(t, ψ)e_j 1_{0}),

e_j is the jth unit vector of the standard basis in ℝ^m, and SV(t, ψ), DV(t, ψ), D²V(t, ψ), and 1_{0} are as defined in B1. The above variational inequality is interpreted as follows:

∂_t V + AV + L − αV ≤ 0 and V ≥ Ψ  (0.12)

and

(∂_t V + AV + L − αV)(V − Ψ) = 0  (0.13)

on [0, T] × C, with the terminal condition V(T, ψ) = Ψ(ψ) for all ψ ∈ C. Again, since the value function V : [0, T] × C → ℝ is in general not smooth enough to satisfy the above equation in the classical sense, the concept of a viscosity solution is introduced in Section 4.4. It is shown that the value function V is the unique viscosity solution of the HJBVI (0.11). The details of these results are the subject of Sections 4.5 and 4.6.

B3. Discrete Approximations

In B1 we described the value function V : [0, T] × C → ℝ of the finite time horizon optimal classical control problem as the unique viscosity solution of the following HJBE:

αV(t, ψ) − ∂_t V(t, ψ) − max_{u∈U} [A_u V(t, ψ) + L(t, ψ, u)] = 0  (0.14)

on [0, T] × C, with V(T, ψ) = Ψ(ψ) for all ψ ∈ C. The main objective of this subsection is to explore computational issues for the optimal classical control problem described in B1 under appropriate assumptions. In particular, we present in Chapter 5 three different discrete approximations, namely (1) a semidiscretization scheme, (2) a Markov chain approximation, and (3) a finite difference approximation for the problem.

Let N be a positive integer. We set h^(N) ≡ r/N and define the discretization map

⌊·⌋_N : [0, T] → I^(N) ≡ {k h^(N) | k = 1, 2, . . .} ∩ [0, T]

by ⌊t⌋_N = h^(N) ⌊t/h^(N)⌋ for t ∈ [0, T], where ⌊a⌋ is the integer part of the real number a ∈ ℝ. As T is the terminal time horizon for the original control problem, ⌊T⌋_N will be the terminal time for the Nth approximating problem. It is clear that ⌊T⌋_N → T and ⌊t⌋_N → t for any t ∈ [0, T]. The set I^(N) is the time grid of discretization degree N. Let π^(N) be the partition of the interval [−r, 0], that is,

π^(N): −r = −N h^(N) < (−N + 1) h^(N) < · · · < −h^(N) < 0.

Define π̃^(N) : C → (ℝ^n)^{N+1} as the (N + 1)-point mass projection of a continuous function φ ∈ C based on the partition π^(N), that is,

π̃^(N) φ = (φ(−N h^(N)), φ((−N + 1) h^(N)), . . . , φ(−h^(N)), φ(0)).  (0.15)

Define Π^(N) : (ℝ^n)^{N+1} → C by Π^(N) x = x̃ for each

x = (x(−N h^(N)), x((−N + 1) h^(N)), . . . , x(−h^(N)), x(0)) ∈ (ℝ^n)^{N+1},

where x̃ ∈ C is obtained by linear interpolation between each pair of consecutive time-space points (kh, x(kh)) and ((k + 1)h, x((k + 1)h)). With a slight abuse of notation, we also denote by Π^(N) : C → C the operator that maps a function ϕ ∈ C to its piecewise linear interpolation Π^(N) ϕ on the grid π^(N).

B3.1. Semidiscrete Approximation Scheme

The semidiscrete approximation scheme presented in Section 5.2 of Chapter 5 consists of two steps. Step one temporally discretizes the state process x(·) = {x(s; t, ψ, u(·)), s ∈ [t − r, T]} but not the control process; step two further temporally discretizes the control process u(·) = {u(s), s ∈ [t, T]} as well. The scheme presented in that section is mainly due to Fischer and Nappo [FN07].

In the first step of the semidiscrete approximation, the controlled state process x(·), the objective functional J(t, ψ; u(·)), and the value function V(t, ψ) are approximated by their temporally discretized counterparts z^(N)(·) = {z^(N)(s; t, ψ, u(·)), s ∈ [t − r, T]}, J^(N)(t, ψ; u(·)), and V^(N)(t, ψ), respectively, where

z(s) = ψ(0) + ∫_t^s f(λ, Π^(N) z_{⌊λ⌋_N}, u(λ)) dλ + ∫_t^s g(λ, Π^(N) z_{⌊λ⌋_N}, u(λ)) dW(λ),  s ∈ [t, ⌊T⌋_N];  (0.16)


J^(N)(t, ψ; u(·)) = E[ ∫_t^{⌊T⌋_N} e^{−α(s−t)} L(s, Π^(N) z_{⌊s⌋_N}, u(s)) ds + e^{−α(⌊T⌋_N − t)} Ψ^(N)(Π^(N) z_{⌊T⌋_N}) ];  (0.17)

and

V^(N)(t, ψ) = sup_{u(·)∈U[t,T]} J^(N)(t, ψ; u(·)).  (0.18)
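The two projections used in (0.16)–(0.17) can be sketched directly: π̃^(N) samples a segment at the grid points, and Π^(N) rebuilds a continuous segment by linear interpolation. The scalar sketch below uses our own function names and is purely illustrative:

```python
def point_mass_projection(phi, r, N):
    """pi_tilde^(N): sample a function phi on [-r, 0] at the N+1 grid points."""
    h = r / N
    return [phi(-r + k * h) for k in range(N + 1)]

def linear_interpolation(values, r, N):
    """Pi^(N): return the piecewise linear function on [-r, 0] through
    the grid values (the continuous segment fed to f and g in (0.16))."""
    h = r / N
    def interp(theta):
        k = min(int((theta + r) / h), N - 1)   # grid interval containing theta
        w = (theta - (-r + k * h)) / h         # interpolation weight in [0, 1]
        return (1 - w) * values[k] + w * values[k + 1]
    return interp

r, N = 1.0, 4
phi = lambda theta: theta ** 2
grid = point_mass_projection(phi, r, N)        # phi at theta = -1, -0.75, ..., 0
seg = linear_interpolation(grid, r, N)

assert grid == [1.0, 0.5625, 0.25, 0.0625, 0.0]
assert seg(-1.0) == 1.0 and seg(0.0) == 0.0    # interpolation matches at nodes
assert abs(seg(-0.875) - 0.78125) < 1e-12      # midpoint of the first interval
```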

Assuming, in addition to the assumptions made in B1, the global boundedness of |f(t, φ, u)|, |g(t, φ, u)|, |L(t, φ, u)|, and |Ψ(φ)| by K_b and the Lipschitz continuity of these functions (with Lipschitz constant K_lip), we have the following error bound for the approximation.

Theorem B3.1.1. Let the initial segment φ ∈ C be γ-Hölder continuous with 0 < γ ≤ K_H < ∞ for some constant K_H > 0. Then there is a constant K̃ depending only on γ, K_H, K_lip, K_b, T, and the dimensions n and m such that for all N = 1, 2, . . . with N ≥ 2r and all initial times t ∈ I^(N), we have

|V(t, φ) − V^(N)(t, ψ)| ≤ sup_{u(·)∈U[t,T]} |J(t, φ; u(·)) − J^(N)(t, ψ; u(·))| ≤ K̃ [ (h^(N))^γ ∨ √(h^(N) ln(1/h^(N))) ],

where ψ ∈ C^(N) is such that ψ|_{[−r,0]} = φ and C^(N) is the space of n-dimensional continuous functions defined on the extended interval [−r − h^(N), 0].

The second step of the semidiscretization scheme is to also discretize the temporal variable of the control process and consider piecewise constant, right-continuous, F-adapted control processes u(·) as follows. For M = 1, 2, . . ., set

U^(M)[t, T] = {u(·) ∈ U[t, T] | u(s) is F(⌊s⌋_M)-measurable and u(s) = u(⌊s⌋_M) for each s ∈ [t, T]}.  (0.19)

For the purpose of approximating the control problem of degree N, we will use strategies in U^(NM)[t, T]; we write U^(N,M)[t, T] for U^(NM)[t, T]. Note that u(·) ∈ U^(N,M)[t, T] has a time discretization M times finer than the discretization of degree N. With the same dynamics and the same discounted objective functional as above, for each N = 1, 2, . . . we introduce a family of value functions {V^(N,M), M = 1, 2, . . .} defined on [t, ⌊T⌋_N] × C^(N) by setting

V^(N,M)(t, ψ) := sup_{u(·)∈U^(N,M)[t,T]} J^(N)(t, ψ; u(·)).  (0.20)


We will refer to V^(N,M) as the value function of degree (N, M). Note that, by construction, V^(N,M)(t, ψ) ≤ V^(N)(t, ψ) for all (t, ψ) ∈ [0, ⌊T⌋_N] × C^(N), since U^(N,M)[t, T] ⊂ U[t, T]. We have the following results on the overall discretization error.

Theorem B3.1.2. Let 0 < γ ≤ K_H < ∞. Then there is a constant K̄ depending on γ, K_H, K_lip, K_b, T, and the dimensions n and m such that for all β > 3, all N = 1, 2, . . . with N ≥ 2r, and all initial data (t, φ) ∈ I^(N) × C with φ γ-Hölder continuous, it holds that, with h = r/N^{1+β},

|V^(N)(t, φ) − V^(N,N^β)(t, φ)| ≤ K̄ [ r^{γβ/(1+β)} h^{(2γ−1)/(2(1+β))} ∨ r^{β/(2(1+β))} h^{(β−3)/(4(1+β))} (ln(1/h))^{1/(2(1+β))} + r^{−β/(1+β)} h^{1/(2(1+β))} ].

In particular, with β = 5 and h = r/N^6, it holds that

|V^(N)(t, φ) − V^(N,N^5)(t, φ)| ≤ K̄ [ r^{5γ/6} h^{(2γ−1)/12} ∨ r^{5/12} h^{1/12} (ln(1/h))^{1/12} + r^{−5/6} h^{1/12} ].

Theorem B3.1.3. Let 0 < γ ≤ K_H. Then there is a constant K̄(r) depending on γ, K_H, K_lip, K_b, T, the dimensions n and m, and the delay duration r such that for all β > 3, all N, M = 1, 2, . . . with N ≥ 2r and M ≥ N^β, and all initial data (t, φ) ∈ I^(N) × C with φ γ-Hölder continuous, the following holds: if ū(·) ∈ U^(N,M)[t, T] is such that

V^(N,M)(t, φ) − J^(N)(t, φ; ū(·)) ≤ ε,

then, with h = r/N^{1+β},

V(t, φ) − J(t, φ; ū(·)) ≤ K̄(r) [ h^{γ/(1+β)} ∨ h^{(β−3)/(4(1+β))} (ln(1/h))^{1/(2(1+β))} + h^{1/(1+β)} + ε ].

B3.2. Markov Chain Approximations

For notational simplicity, we assume n = m = 1 and consider the following autonomous one-dimensional controlled equation:

dx(s) = f(x_s, u(s)) ds + g(x_s) dW(s),  s ∈ [0, T],  (0.21)

with the initial segment ψ ∈ C (here C = C[−r, 0] throughout B3.2) at the initial time t = 0.

Using the spatial discretization S^(N) ≡ {k √h^(N) | k = 0, ±1, ±2, . . .} and letting (S^(N))^{N+1} = S^(N) × · · · × S^(N) be the (N + 1)-fold Cartesian product of S^(N), we call a one-dimensional discrete-time process

{ζ(kh^(N)), k = −N, −N + 1, . . . , 0, 1, 2, . . . , T^(N)},


defined on a complete filtered probability space (Ω, F, P; F), a discrete chain of degree N if it takes values in S^(N) and ζ(kh^(N)) is F(kh^(N))-measurable for all k = 0, 1, 2, . . . , T^(N), where T^(N) = ⌊T/h^(N)⌋. We define the (S^(N))^{N+1}-valued discrete process {ζ_{kh^(N)}, k = 0, 1, 2, . . . , T^(N)} by setting, for each k = 0, 1, 2, . . . , T^(N),

ζ_{kh^(N)} = ( ζ((k − N)h^(N)), ζ((k − N + 1)h^(N)), . . . , ζ((k − 1)h^(N)), ζ(kh^(N)) ).

We note here that the S^(N)-valued process ζ(·) is not Markovian, but it is desirable that, under appropriate conditions, the corresponding (S^(N))^{N+1}-valued segment process {ζ_{kh^(N)}, k = 0, 1, . . . , T^(N)} be Markovian with respect to the discrete-time filtration {F(kh^(N)), k = 0, 1, 2, . . .}. A sequence u^(N)(·) = {u^(N)(kh^(N)), k = 0, 1, 2, . . . , T^(N)} is said to be a discrete admissible control if u^(N)(kh^(N)) is F(kh^(N))-measurable for each k = 0, 1, 2, . . . , T^(N) and

E[ Σ_{k=0}^{T^(N)} |u^(N)(kh^(N))|² ] < ∞.

As described earlier in B3.1, we let U^(N)[0, T] be the class of continuous-time admissible control processes ū(·) = {ū(s), s ∈ [0, T]}, where for each s ∈ [0, T], ū(s) = ū(⌊s⌋_N) is F(⌊s⌋_N)-measurable and takes only finitely many different values in U. We are given a one-step Markov transition function p^(N) : (S^(N))^{N+1} × U × (S^(N))^{N+1} → [0, 1], where p^(N)(x, u; y) is interpreted as the probability that ζ_{(k+1)h} = y ∈ (S^(N))^{N+1} given that ζ_{kh} = x and u(kh) = u, where h = h^(N). We define a sequence of controlled Markov chains associated with the initial segment ψ and ū(·) ∈ U^(N)[0, T] as a sequence {ζ^(N)(·)}_{N=1}^∞ of processes such that ζ^(N)(·) is an S^(N)-valued discrete chain of degree N defined on the same filtered probability space (Ω, F, P; F) as u^(N)(·), provided the following conditions are satisfied:

(i) Initial Condition: ζ(−kh) = ψ(−kh) ∈ S^(N) for k = 0, 1, . . . , N.

(ii) Extended Markov Property: For any k = 1, 2, . . . and y = (y(−Nh), y((−N + 1)h), . . . , y(0)) ∈ (S^(N))^{N+1},

P{ζ_{(k+1)h} = y | ζ(ih), u(ih), i ≤ k} = p^(N)(ζ_{kh}, u(kh); y) if y(−ih) = ζ((k − i + 1)h) for 1 ≤ i ≤ N, and = 0 otherwise.


(iii) Local Consistency with the Drift Coefficient:

b(kh) ≡ E_ψ^{N,u(·)}[ζ((k + 1)h) − ζ(kh)] = h f(Π^(N)(ζ_{kh}), u(kh)) + o(h) ≡ h f^(N)(ζ_{kh}, u(kh)),

where f^(N) : ℝ^{N+1} × U → ℝ is defined by

f^(N)(x, u) = f(Π^(N)(x), u),  ∀(x, u) ∈ ℝ^{N+1} × U,

E_ψ^{N,u(·)} is the conditional expectation given the discrete admissible control u^(N)(·) = {u(kh), k = 0, 1, 2, . . . , T^(N)} and the initial function π^(N)ψ ≡ (ψ(−Nh), ψ((−N + 1)h), . . . , ψ(−h), ψ(0)).

(iv) Local Consistency with the Diffusion Coefficient:

E_ψ^{N,u(·)}[(ζ((k + 1)h) − ζ(kh) − b(kh))²] = h g²(Π^(N)(ζ_{kh}), u(kh)) + o(h) ≡ h (g^(N))²(ζ_{kh}, u(kh)),

where g^(N) : ℝ^{N+1} → ℝ is defined by

g^(N)(x) = g(Π^(N)(x)),  ∀x ∈ ℝ^{N+1}.

(v) Jump Heights: There is a positive number K̃ independent of N such that

sup_k |ζ((k + 1)h) − ζ(kh)| ≤ K̃ √h.

Note that in the above and below, h and S are abbreviations for h^(N) and S^(N), respectively.

Using the notation and concepts developed above, we assume that the (S^(N))^{N+1}-valued process {ζ_{kh}, k = 0, 1, . . . , T^(N)} is a controlled (S^(N))^{N+1}-valued Markov chain with initial datum ζ_0 = π^(N)ψ ∈ (S^(N))^{N+1} and a Markov probability transition function p^(N) : (S^(N))^{N+1} × U × (S^(N))^{N+1} → [0, 1] satisfying the local consistency conditions (iii) and (iv) above. The objective functional J^(N) : (S^(N))^{N+1} × U^(N)[0, T] → ℝ and the value function V^(N) : (S^(N))^{N+1} → ℝ of the approximating optimal control problem are defined as follows. Define the objective functional of degree N by

J^(N)(ψ^(N); u^(N)(·)) = E[ Σ_{k=0}^{T^(N)−1} e^{−αkh} L(Π^(N)(ζ_{kh}), u(kh)) h + e^{−α⌊T⌋_N} Ψ(ζ_{⌊T⌋_N}) ]  (0.22)


and the value function

V^(N)(ψ^(N)) = sup_{u^(N)(·)} J^(N)(ψ^(N); u^(N)(·)),  ψ ∈ C,  (0.23)

with the terminal condition V^(N)(ζ_{⌊T⌋_N}) = Ψ(ζ_{⌊T⌋_N}).

For each N = 1, 2, . . ., we have the following dynamic programming principle (DPP) (see Section 5.3.2 of Chapter 5) for the controlled Markov chain {ζ_{kh}, k = 0, 1, . . . , T^(N)} and discrete admissible control processes u(·) ∈ U^(N)[0, T].

Proposition B3.2.1. Let ψ ∈ C and let {ζ_{kh}, k = 0, 1, 2, . . . , T^(N)} be an (S^(N))^{N+1}-valued Markov chain determined by the probability transition function p^(N). Then for each k = 0, 1, 2, . . . , T^(N) − 1,

V^(N)(ψ^(N)) = sup_{u^(N)(·)} E[ e^{−α(k+1)h} V^(N)(ζ_{(k+1)h}) + h Σ_{i=0}^{k} e^{−αih} L(Π^(N)(ζ_{ih}), u^(N)(ih)) ].
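The DPP above is what makes the approximating problem computable: one sweeps backward in time, maximizing over the control at each step. The backward-induction sketch below runs on a tiny controlled chain of our own design (transition law, rewards, and sizes are all illustrative, and the state here is a point rather than a segment):

```python
import math

# Controlled chain on states {0, 1, 2}: control u in {-1, +1} biases the move.
states, controls = [0, 1, 2], [-1, 1]
h, alpha, Kh = 0.1, 0.5, 5               # step size, discount rate, number of steps
L = lambda x, u: float(x)                 # running reward rate
Psi = lambda x: float(x)                  # terminal reward

def p(x, u, y):
    """One-step transition probability: deterministic drift toward x+u, clipped."""
    target = min(max(x + u, 0), 2)
    return 1.0 if y == target else 0.0

# Backward induction (the discrete DPP): V_K = Psi, then
# V_k(x) = max_u [ L(x,u) h + e^{-alpha h} sum_y p(x,u,y) V_{k+1}(y) ].
V = {x: Psi(x) for x in states}
for k in range(Kh - 1, -1, -1):
    V = {x: max(L(x, u) * h + math.exp(-alpha * h)
                * sum(p(x, u, y) * V[y] for y in states)
                for u in controls)
         for x in states}

# With a reward increasing in x, pushing upward (u = +1) is optimal everywhere,
# so the value is monotone in the starting state.
assert V[2] >= V[1] >= V[0]
```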

The optimal control problem for the Markov chain can then be stated as follows: given N = 1, 2, . . . and ψ ∈ C, find an admissible discrete control process u^(N)(·) that maximizes the objective functional J^(N)(ψ^(N); u^(N)(·)) in (0.22). The algorithm for computing the optimal discrete control u^(N)(·) and its corresponding value function V^(N)(ψ^(N)) is provided in Subsection 5.3.2, using the dynamic programming principle for the controlled Markov chain. Under some reasonable assumptions, we prove the following main convergence result:

lim_{N→∞} V^(N)(ψ^(N)) = V̂(ψ),

where ψ ∈ C is the initial segment, ψ^(N) = π̃^(N)(ψ) is the point-mass projection of ψ into (S^(N))^{N+1}, and V̂(ψ) is the value function of the optimal relaxed control problem mentioned earlier in B1. Since V̂(ψ) = V(ψ) for every initial segment ψ, the convergence result also holds for the optimal classical control problem. We note that a similar Markov chain approximation approach was investigated in Kushner [Kus05, Kus06] and Fischer and Reiss [FR06].

B3.3. Finite Difference Approximation

In Section 5.4 of Chapter 5, a finite difference approximation of the viscosity solution of the HJBE (0.14) is presented. The method detailed in that section and described below is based on results obtained in Chang et al. [CPP07] and is an extension of results in Barles and Souganidis [BS91]. Given a positive integer M, we consider the truncated optimal control problem whose value function V_M : [0, T] × C → ℝ is defined via the following truncated objective functional:

V_M(t, ψ) = sup_{u(·)∈U[t,T]} E[ ∫_t^T e^{−α(s−t)} (L(s, x_s, u(s)) ∧ M) ds + e^{−α(T−t)} (Ψ(x_T) ∧ M) ],  (0.24)

where a ∧ b = min{a, b} for all a, b ∈ ℝ. The corresponding HJBE for the truncated V_M : [0, T] × C → ℝ defined in (0.24) is given by

αV_M(t, ψ) − ∂_t V_M(t, ψ) − max_{u∈U} [A_u V_M(t, ψ) + L(t, ψ, u) ∧ M] = 0  (0.25)

on [0, T] × C, with V_M(T, ψ) = Ψ(ψ) ∧ M for all ψ ∈ C. Similarly to the proof that V : [0, T] × C → ℝ is the unique viscosity solution of the HJBE (0.14) (see Section 3.6 of Chapter 3 for details), one can show that the value function V_M is the unique viscosity solution of (0.25). Moreover, it is easy to see that V_M → V pointwise on [0, T] × C as M → ∞. To obtain a computational algorithm, we need only find a numerical solution for V_M : [0, T] × C → ℝ for each M.

Let ε (0 < ε < 1) be the step size for the variable ψ and η (0 < η < 1) the step size for t. We consider the finite difference operators Δ_η, Δ_ε, and Δ²_ε defined by

Δ_η Φ(t, ψ) = ( Φ(t + η, ψ) − Φ(t, ψ) ) / η,

Δ_ε Φ(t, ψ)(φ + v1_{0}) = ( Φ(t, ψ + ε(φ + v1_{0})) − Φ(t, ψ) ) / ε,

Δ²_ε Φ(t, ψ)(φ + v1_{0}, ϕ + w1_{0}) = ( Φ(t, ψ + ε(φ + v1_{0})) − Φ(t, ψ) ) / ε² + ( Φ(t, ψ − ε(ϕ + w1_{0})) − Φ(t, ψ) ) / ε²,

where φ, ϕ ∈ C and v, w ∈ ℝ^n. Recall that

SΦ(t, ψ) = lim_{ε→0+} (1/ε) ( Φ(t, ψ̃_ε) − Φ(t, ψ) ).

Therefore, we define

S_ε Φ(t, ψ) = (1/ε) ( Φ(t, ψ̃_ε) − Φ(t, ψ) ).

It is clear that S_ε Φ is an approximation of SΦ. Let C^{1,2}([0, T] × C) be the space of continuous functions Φ : [0, T] × C → ℝ that are continuously differentiable with respect to the first variable t ∈ [0, T] and twice continuously Fréchet differentiable with respect to the second variable

18

Introduction and Summary

ψ ∈ C. The following preliminary results hold true (see Section 5.4 of Chapter 5): For any Φ : [0, T ] × C → , Φ ∈ C 1,2 ([0, T ] × C), such that Φ can be smoothly extended to [0, T ] × (C ⊕ B), we have lim ∆ Φ(t, ψ)(φ + v1{0} ) = DΦ(t, ψ)(φ + v1{0} )

→0

(0.26)

and lim ∆2 Φ(t, ψ)(φ + v1{0} ) = D2 Φ(t, ψ)(φ + v1{0} , ϕ + w1{0} ).

→0

(0.27)
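The first-order convergence in (0.26) can be checked numerically on a toy smooth functional of a sampled segment. The functional Φ and the direction φ below are invented for illustration only; in the text, Δ_ε acts on functionals defined on C:

```python
import numpy as np

# Toy check of (0.26): for Phi(psi) = (mean of psi)^2, the Frechet directional
# derivative is DPhi(psi)(phi) = 2 * mean(psi) * mean(phi).  The difference
# quotient Delta_eps should approach it linearly as eps -> 0.
def Phi(psi):
    return psi.mean() ** 2

psi = np.linspace(0.0, 1.0, 101)            # sampled "segment"
phi = np.sin(np.linspace(0.0, 3.0, 101))    # direction of differentiation
exact = 2.0 * psi.mean() * phi.mean()

errors = []
for eps in (1e-1, 1e-3, 1e-5):
    approx = (Phi(psi + eps * phi) - Phi(psi)) / eps   # Delta_eps Phi(psi)(phi)
    errors.append(abs(approx - exact))
```

The errors shrink proportionally to ε, consistent with a first-order difference quotient.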

With the discrete approximating scheme described earlier, the discretized version of (0.25) for the truncated V_M can be rewritten in the following form:

\[
V_M(t, \psi) = T_{\epsilon,\eta} V_M(t, \psi), \tag{0.28}
\]

where T_{ε,η} is the operator on C_b([0, T] × (C ⊕ B)) (the space of bounded continuous functions from [0, T] × (C ⊕ B) to ℝ) defined by

\[
T_{\epsilon,\eta} \Phi(t, \psi) \equiv \max_{u \in U} \frac{1}{\alpha + \frac{1}{\eta} + \frac{2}{\epsilon} + \frac{m}{\epsilon^2}} \Bigl[ \frac{1}{\epsilon} \Phi(t, \tilde{\psi}_\epsilon) + \frac{1}{\epsilon} \Phi(t, \psi + \epsilon f(t, \psi, u) 1_{\{0\}}) + \frac{1}{2\epsilon^2} \sum_{i=1}^m \bigl( \Phi(t, \psi + \epsilon g(t, \psi, u) e_i 1_{\{0\}}) + \Phi(t, \psi - \epsilon g(t, \psi, u) e_i 1_{\{0\}}) \bigr) + \frac{1}{\eta} \Phi(t + \eta, \psi) + L(t, \psi, u) \wedge M \Bigr]. \tag{0.29}
\]

By showing that T_{ε,η} is a contraction map for each ε and η, one proves by the Banach fixed point theorem that the strict contraction T_{ε,η} has a unique fixed point, denoted by Φ^M_{ε,η}. Given any function Φ_0 ∈ C_b([0, T] × (C ⊕ B)), we construct a sequence by Φ_{n+1} = T_{ε,η} Φ_n for n ≥ 0. It is clear that

\[
\lim_{n \to \infty} \Phi_n = \Phi^M_{\epsilon,\eta}. \tag{0.30}
\]
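The Banach fixed-point argument behind (0.30) can be illustrated on a finite-state stand-in for T_{ε,η}. The dynamics, costs, and constants below are invented; the actual operator acts on C_b([0, T] × (C ⊕ B)):

```python
import numpy as np

# Finite-state stand-in for T_{eps,eta}: a discounted Bellman-type operator.
# The prefactor (1/eta) / (alpha + 1/eta) < 1 makes it a strict contraction,
# so the iteration Phi_{n+1} = T Phi_n converges to the unique fixed point.
alpha, eta = 0.1, 0.5
n_states, n_controls = 20, 5
rng = np.random.default_rng(0)
L = rng.uniform(0.0, 1.0, (n_controls, n_states))       # running cost L(s, u)
P = rng.uniform(0.0, 1.0, (n_controls, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)                       # transition weights

def T(phi):
    q = L + (1.0 / eta) * (P @ phi)                     # shape (n_controls, n_states)
    return q.max(axis=0) / (alpha + 1.0 / eta)

phi = np.zeros(n_states)
for _ in range(2000):                                   # Phi_{n+1} = T Phi_n
    phi_next = T(phi)
    converged = np.max(np.abs(phi_next - phi)) < 1e-12
    phi = phi_next
    if converged:
        break
```

The stopping rule mirrors the δ_1 tolerance used in Step 3 of the algorithm below.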

We have the following as one of the main theorems of Section 5.4 of Chapter 5.

Theorem B3.3.1. Let Φ^M_{ε,η} denote the solution to (0.30). Then, as (ε, η) → 0, the sequence Φ^M_{ε,η} converges uniformly on [0, T] × C to the unique viscosity solution V_M of (0.25).

Based on the results described above, combined with the semidiscretization scheme summarized in B3.1, we can construct a computational algorithm for each N = 1, 2, . . . to obtain a numerical solution of the HJBE (0.14). For example, one such algorithm is the following:

Step 0. Let (t, ψ) ∈ [0, T] × C be the given initial datum.
(i) Compute t_N and π^(N)ψ.
(ii) Choose any function Φ^(0) ∈ C_b([0, T] × (C ⊕ B)).
(iii) Compute Φ̂^(0,N)(t_N, π^(N)ψ) = Φ^(0)(t_N, π^(N)ψ).

Step 1. Pick the starting values ε(1), η(1). For example, we can choose ε(1) = 10^(-2) and η(1) = 10^(-3).

Step 2. For the given ε, η > 0, compute the function Φ̂^(1,N)_{ε(1),η(1)} ∈ C_b([0, T] × (S^(N))^(N+1)) by the following formula:

\[
\hat{\Phi}^{(1,N)}_{\epsilon(1),\eta(1)} = T^{(N)}_{\epsilon(1),\eta(1)} \hat{\Phi}^{(0,N)}_{\epsilon(1),\eta(1)},
\]

where T^(N)_{ε(1),η(1)}, the semidiscretized version of T_{ε,η}, is defined on C_b([0, T] × (S^(N))^(N+1)) and can be found in Subsection 5.4.2 of Chapter 5.

Step 3. Repeat Step 2 for i = 2, 3, . . . using

\[
\hat{\Phi}^{(i,N)}_{\epsilon(1),\eta(1)}(t_N, \pi^{(N)}\psi) = T^{(N)}_{\epsilon(1),\eta(1)} \hat{\Phi}^{(i-1,N)}_{\epsilon(1),\eta(1)}(t_N, \pi^{(N)}\psi).
\]

Stop the iteration when

\[
|\hat{\Phi}^{(i+1,N)}_{\epsilon(1),\eta(1)}(t, \psi) - \hat{\Phi}^{(i,N)}_{\epsilon(1),\eta(1)}(t, \psi)| \leq \delta_1,
\]

where δ_1 is a preselected number small enough to achieve the desired accuracy. Denote the final solution by Φ̂_{ε(1),η(1)}(t_N, π^(N)ψ).

Step 4. Choose two sequences ε(k) and η(k) such that lim_{k→∞} ε(k) = lim_{k→∞} η(k) = 0. For example, we may choose ε(k) = η(k) = 10^(-(2+k)). Now, repeat Step 2 and Step 3 for each ε(k) and η(k) until

\[
|\hat{\Phi}^{(i,N)}_{\epsilon(k+1),\eta(k+1)}(t_N, \pi^{(N)}\psi) - \hat{\Phi}^{(i,N)}_{\epsilon(k),\eta(k)}(t_N, \pi^{(N)}\psi)| \leq \delta_2,
\]

where δ_2 is chosen to obtain the desired accuracy.

B4. Option Pricing

As an application of the optimal stopping problem outlined in B2 and detailed in Chapter 4, we consider the pricing problems for American and European options briefly described below. The details of this particular application are the subject of Chapter 6.


An American option is a contract conferred by the contract writer on the contract holder giving the holder the right (but not the obligation) to buy from or to sell to the writer a share of the stock at a prespecified price prior to or at the contract expiry time 0 < T < ∞. The right for the holder to buy from the writer a share of the stock will be called a call option, and the right to sell to the writer a share of the stock will be called a put option. If an option is purchased at time t ∈ [0, T] and is exercised by the holder at a stopping time τ ∈ [t, T], then he will receive a discounted payoff of the amount e^(−α(τ−t))Ψ(S_τ) from the writer, where Ψ : C → [0, ∞) is the payoff function and {S(s), s ∈ [t, T]} is the unit price of the underlying stock, whose dynamics is described by the following one-dimensional SHDE:

\[
\frac{dS(s)}{S(s)} = f(S_s)\, ds + g(S_s)\, dW(s), \quad s \in [t, T], \tag{0.31}
\]

with initial data S_t = ψ ∈ C at time t ∈ [0, T], where the mean growth rate f(S_s) and the volatility rate g(S_s) of the stock at time s ∈ [t, T] depend explicitly on the stock prices S_s over the time interval [s − r, s] instead of on the stock price S(s) at time s alone. In order to secure such a contract, the contract holder has to pay the writer, at the contract purchase time t, a fee that is mutually agreeable to both parties. The determination of such a fee is called the pricing of the American option. In determining a price for the American option, the writer of the option seeks to invest in the (B, S)-market the fee x received from the holder and trades over the time interval [t, T] between the bank account and the underlying stock in an optimal and prudent manner so that his total wealth will replicate or exceed the discounted payoff e^(−α(τ−t))Ψ(S_τ) he has to pay the holder if and when the option is exercised at τ. The smallest such x is called the fair price of the option.
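A sample path of the delayed dynamics (0.31) can be generated with an Euler-Maruyama scheme. The drift and volatility functionals below are invented stand-ins for f and g, which the text only requires to depend on the segment S_s over [s − r, s]:

```python
import numpy as np

# Euler-Maruyama sketch for dS(s)/S(s) = f(S_s) ds + g(S_s) dW(s).
rng = np.random.default_rng(1)
r, T, dt = 1.0, 2.0, 1e-3            # delay, horizon, time step
n_lag, n_steps = int(r / dt), int(T / dt)

def f(seg):                           # hypothetical mean growth rate functional
    return 0.05 * seg.mean() / seg[-1]

def g(seg):                           # hypothetical volatility functional
    return 0.2 + 0.1 * seg.std() / seg.mean()

S = np.empty(n_lag + n_steps + 1)
S[: n_lag + 1] = 100.0               # constant initial segment psi on [t - r, t]
for k in range(n_lag, n_lag + n_steps):
    seg = S[k - n_lag : k + 1]       # the segment S_s over [s - r, s]
    dW = np.sqrt(dt) * rng.standard_normal()
    S[k + 1] = S[k] * (1.0 + f(seg) * dt + g(seg) * dW)
```

Note that the initial segment over [t − r, t] must be supplied before the first step, which is exactly the role of the initial datum ψ ∈ C.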
Note that the fair price x of the option is a function of the initial data (t, ψ) ∈ [0, T] × C and will be expressed as the rational pricing function V : [0, T] × C → [0, ∞). Under the same scenario described above, the contract is called a European option, which can be viewed as a special case of an American option, if the option can only be exercised at the expiry time T instead of at any time prior to or at the expiry time T as is stipulated in an American option. In the (B, S)-market, it is assumed that the bank account {B(t), t ≥ −r} grows according to the following linear (deterministic) functional differential equation:

\[
dB(t) = L(B_t)\, dt, \tag{0.32}
\]

where

\[
L(B_t) \equiv \int_{-r}^{0} B(t + \theta)\, d\eta(\theta), \quad t \geq 0,
\]

and η : [−r, 0] → ℝ is a nondecreasing function (and therefore of bounded variation) such that η(0) − η(−r) > 0.


The following characterization of the pricing function of the American option V : [0, T] × C → ℝ₊ as an optimal stopping problem is obtained in Chang and Youree [CY99], [CY07] and Chang et al. [CPP07a], [CPP07d], and will be treated in detail in Chapter 6.

Theorem B4.1. Given a (globally convex) payoff function Ψ : C → ℝ₊ satisfying the polynomial growth condition |Ψ(ψ)| ≤ K(1 + ‖ψ‖₂^k) for some constants K > 0 and k ≥ 1, the pricing function V : [0, T] × C → ℝ₊ is given as follows:

\[
V(t, \psi) = \sup_{\tau \in \mathcal{T}_t^T} \tilde{E}\bigl[ e^{-\alpha(\tau - t)} \Psi(S_\tau) \mid S_t = \psi \bigr],
\]

where Ẽ[···] represents the expectation with respect to a suitable probability measure P̃ defined in Section 6.3 and α > 0 is the discount factor.

As a consequence of the optimal stopping problem, we also have the following result.

Theorem B4.2. Assume that the payoff function Ψ : C → ℝ₊ satisfies the polynomial growth condition described in Theorem B4.1. Then the pricing function is the unique viscosity solution of the following HJBVI:

\[
\max\{ \partial_t V + A V - \alpha V, \ \Psi - V \} = 0, \tag{0.33}
\]

with the terminal condition V(T, ψ) = Ψ(ψ) for all ψ ∈ C, where the operator A is defined by

\[
A\Phi(\psi) = S\Phi(\psi) + D\Phi(\psi)(\lambda \psi(0) 1_{\{0\}}) + \frac{1}{2} D^2 \Phi(\psi)(\psi(0) g(\psi) 1_{\{0\}}, \psi(0) g(\psi) 1_{\{0\}}), \tag{0.34}
\]

and λ > 0 is the effective interest rate of the bank account, which satisfies the equation

\[
\lambda = \int_{-r}^{0} e^{\lambda\theta}\, d\eta(\theta).
\]
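The effective rate λ can be computed from λ = ∫_{−r}^0 e^{λθ} dη(θ) by a one-dimensional root search; for instance, by bisection when dη is a discrete measure. The support points and masses below are illustrative:

```python
import math

# Solve lambda = sum_j w_j * exp(lambda * theta_j) (a discrete d-eta) by bisection.
# Since exp(lambda * theta) <= 1 for theta <= 0 and lambda >= 0, h(0) > 0 and
# h(total mass) <= 0, so a root lies in [0, total mass].
thetas = [-1.0, -0.5, 0.0]            # support points of d-eta in [-r, 0]
weights = [0.02, 0.01, 0.03]          # nonnegative masses (eta nondecreasing)

def h(lam):
    return sum(w * math.exp(lam * th) for w, th in zip(weights, thetas)) - lam

lo, hi = 0.0, sum(weights)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if h(mid) > 0.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)                 # effective interest rate lambda
```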

As a special case of the two results described in Theorem B4.1 and Theorem B4.2, we obtain the following infinite-dimensional Black-Scholes equation for the European option.

Theorem B4.3. Assume that the payoff function Ψ : C → ℝ₊ satisfies the polynomial growth condition described in Theorem B4.1. Then the pricing function for the European option is the unique viscosity solution of the following infinite-dimensional Black-Scholes equation:

\[
\partial_t V + A V - \alpha V = 0. \tag{0.35}
\]


A computational algorithm for solving the above Black-Scholes equation is also developed in Section 6.7 of Chapter 6. The payoff function Ψ : C → ℝ₊ considered here includes the standard call/put options that are traded in option exchanges around the world.

Standard Options

The payoff function for the standard call option is given by Ψ(S_s) = max{S(s) − q, 0}, where S(s) is the stock price at time s and q > 0 is the strike price of the standard call option. If the standard American option is exercised by the holder at the G-stopping time τ ∈ T_t^T (G = {G(t), t ≥ 0} is the filtration generated by {S(t), t ≥ 0}), the holder will receive the payoff of the amount Ψ(S_τ). Of course, the option will be called the standard European option if τ ≡ T. In this case, the amount of the reward will be Ψ(S_T) = max{S(T) − q, 0}. We offer a financial interpretation of the standard American (or European) call option as follows. If the option has not been exercised and the current stock price is higher than the strike price, then the holder can exercise the option, buy a share of the stock from the writer at the strike price q, and immediately sell it on the open market, making an instant profit of the amount S(τ) − q > 0. If the strike price is higher than the current stock price, then the option is worthless to the holder. In this case, the holder will not exercise the option and, therefore, the payoff will be zero. The following path-dependent exotic options are special cases of our option pricing problem and give justification for the formulation with bounded memory.

Modified Russian Option

The payoff of a Russian call option can be expressed as

\[
\max\Bigl\{ \sup_{s \in [\tau - r, \tau]} S(s) - q, \ 0 \Bigr\} = \max\bigl\{ \|S_\tau\| - q, \ 0 \bigr\}
\]

for some strike price q > 0. Note that the payoff of a Russian option depends on the highest price over the time interval [τ − r, τ] if the option is exercised at τ ∈ T_t^T.

Modified Asian Option

The payoff for the Asian call option is given by

\[
\max\Bigl\{ \frac{1}{r} \int_{\tau - r}^{\tau} S(s)\, ds - q, \ 0 \Bigr\}.
\]

This is similar to a European call option with strike price q > 0, except that the "averaged stock price" over the interval [τ − r, τ] is now used in place of the stock price S(τ) at the option exercise time τ.
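On a price segment sampled over [τ − r, τ], the three payoffs can be written as simple functionals of the segment. This is a sketch with illustrative numbers; seg[-1] plays the role of S(τ), and the trapezoidal rule is one way to discretize the average-price integral:

```python
import numpy as np

# seg: samples of S over [tau - r, tau] on a uniform grid with spacing dt.
def standard_call(seg, q):
    return max(seg[-1] - q, 0.0)                       # depends on S(tau) only

def russian_call(seg, q):
    return max(seg.max() - q, 0.0)                     # running maximum payoff

def asian_call(seg, q, dt, r):
    avg = dt * (seg[:-1] + seg[1:]).sum() / 2.0 / r    # trapezoidal average price
    return max(avg - q, 0.0)

seg = np.array([100.0, 104.0, 99.0, 102.0, 101.0])     # 5 samples, dt = 0.25, r = 1
```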


It is clear that when r = 0, (0.31) reduces to the following nonlinear stochastic ordinary differential equation:

\[
\frac{dS(t)}{S(t)} = f(S(t))\, dt + g(S(t))\, dW(t), \quad t \geq 0, \tag{0.36}
\]

of which the Black-Scholes (B, S)-market (i.e., f(S(t)) ≡ µ ∈ ℝ and g(S(t)) ≡ σ > 0) is a special case. When r > 0, (0.31) is general enough to include the following linear model considered by Chang and Youree [CY99] and the pure discrete delay models considered by Arriojas et al. [AHMP07] and Kazmerchuk et al. [KSW04a, KSW04b, KSW04c]:

\[
dS(t) = M(S_t)\, dt + N(S_t)\, dW(t) = \Bigl( \int_{-r}^{0} S(t + \theta)\, d\xi(\theta) \Bigr) dt + \Bigl( \int_{-r}^{0} S(t + \theta)\, d\zeta(\theta) \Bigr) dW(t), \quad t \geq 0 \quad (\text{[CY99]}),
\]

where ξ, ζ : [−r, 0] → ℝ are certain functions of bounded variation;

\[
dS(t) = f(S_t)\, dt + g(S(t - b)) S(t)\, dW(t), \quad t \geq 0 \quad (\text{[AHMP07]}), \tag{0.37}
\]

where a, b > 0 and f(S_t) = µS(t − a)S(t) or f(S_t) = µS(t − a); and

\[
\frac{dS(t)}{S(t)} = \mu S(t - a)\, dt + \sigma(S(t - b))\, dW(t), \quad t \geq 0 \quad (\text{[KSW04a]}).
\]

B5. Hereditary Portfolio Optimization Problem

As an illustration of a new class of combined optimal classical-impulse control problems that involve SHDEs with unbounded but fading memory (r = ∞), we briefly describe the hereditary portfolio optimization problem with capital gains taxes and fixed plus proportional transaction costs. This new optimal classical-impulse control problem is motivated by a realistic mathematical finance problem in consumption and investment and is not covered by any of the control problems outlined in B1-B4. The hereditary portfolio optimization problem and its solution will be detailed in Chapter 7, which is based on Chang [Cha07a], [Cha07b].

Consider a small investor who has assets in a financial market that consists of one savings account and one stock account. In this market, it is assumed that the savings account compounds continuously at a constant interest rate λ > 0 and that {S(t), t ∈ (−∞, ∞)}, the unit price process of the stock, satisfies the following nonlinear SHDE with infinite but fading memory:

\[
dS(t) = S(t)\bigl[ f(S_t)\, dt + g(S_t)\, dW(t) \bigr], \quad t \geq 0. \tag{0.38}
\]

In the above equation, the process {W(t), t ≥ 0} is a one-dimensional standard Brownian motion. Note that f(S_t) and g(S_t) in (0.38) represent, respectively, the mean growth rate and the volatility rate of the stock price at time t ≥ 0. The stock is said to have a hereditary price structure with infinite but fading memory because both the drift term S(t)f(S_t) and the diffusion term S(t)g(S_t) on the right-hand side of (0.38) depend explicitly on the entire past history of prices (S(t), S_t) ∈ ℝ₊ × L²_{ρ,+} (where L²_{ρ,+} = L²_ρ((−∞, 0]; ℝ₊)) in a weighted fashion through the function ρ satisfying Conditions 2(i)-2(ii) of Subsection A. The main purpose of the stock account is to keep track of the inventories (i.e., the time instants and the base prices at which shares were purchased or short sold) of the underlying stock for the purpose of calculating the capital gains taxes and so forth. The space of stock inventories, N, will be the space of bounded functions ξ : (−∞, 0] → ℝ of the form

\[
\xi(\theta) = \sum_{k=0}^{\infty} n(-k) 1_{\{\tau(-k)\}}(\theta), \quad \theta \in (-\infty, 0], \tag{0.39}
\]

where {n(−k), k = 0, 1, 2, . . .} is a sequence in ℝ with n(−k) = 0 for all but finitely many k, −∞ < · · · < τ(−k) < · · · < τ(−1) < τ(0) = 0, and 1_{τ(−k)} is the indicator function at τ(−k). Note that n(−k) > 0 (respectively, n(−k) < 0) represents the number of shares of the stock purchased (respectively, short sold) at time τ(−k). The assumption that n(−k) = 0 for all but finitely many k implies that the investor can only have finitely many open long or short positions in his stock account. However, the number of open long and/or short positions may increase from time to time. The investor is said to have an open long (respectively, short) position at time τ if he still owns (respectively, owes) all or part of the stock shares that were originally purchased (respectively, short sold) at a previous time τ. The only way to close a position is to sell all of what he owns and buy back all of what he owes. Within the solvency region S_κ (to be defined later and in Subsection 7.1.4 of Chapter 7), and under the requirements of paying fixed plus proportional transaction costs and capital gains taxes, the investor is allowed to consume from his savings account in accordance with a consumption rate process C = {C(t), t ≥ 0} and can make transactions between his savings and stock accounts according to a trading strategy T = {(τ(i), ζ(i)), i = 1, 2, . . .}, where τ(i), i = 0, 1, 2, . . ., denotes the sequence of transaction times and ζ(i) stands for the quantities of the transaction at time τ(i). The investor will follow the following set of consumption, transaction, and taxation rules (Rules (B5.1)-(B5.6)). Note that an action of the investor in the market is called a transaction if it involves trading of shares of the stock, such as buying and selling.


Rule (B5.1). At the time of each transaction, the investor has to pay a transaction cost that consists of a fixed cost κ > 0 and a proportional transaction cost with cost rate µ ≥ 0 for both selling and buying shares of the stock. All purchases and sales of any number of stock shares are considered one transaction if they are executed at the same time instant, and they therefore incur only one fixed fee κ > 0 (in addition to a proportional transaction cost).

Rule (B5.2). Within the solvency region S_κ, the investor is allowed to consume and to borrow money from his savings account for stock purchases. He can also sell and/or buy back, at the current price, some or all shares of the stock he bought and/or short sold at a previous time.

Rule (B5.3). The proceeds from sales of the stock, minus the transaction costs and capital gains taxes, will be deposited in his savings account, and the purchases of stock shares, together with the associated transaction costs and capital gains taxes (if short shares of the stock are bought back at a profit), will be financed from his savings account.

Rule (B5.4). Without loss of generality, it is assumed that the interest income in the savings account is tax-free, by using the effective interest rate λ > 0, where the effective interest rate equals the interest rate paid by the bank minus the tax rate for the interest income.

Rule (B5.5). At the time of a transaction (say, t ≥ 0), the investor is required to pay a capital gains tax (respectively, be paid a capital loss credit) in an amount that is proportional to the amount of profit (respectively, loss). A sale of stock shares is said to result in a profit if the current stock price S(t) is higher than the base price B(t) of the stock, and in a loss otherwise.
The base price B(t) is defined to be the price at which the stock shares were previously bought or short sold; that is, B(t) = S(t − τ(t)), where τ(t) > 0 is the length of time for which those shares (long or short) have been held at time t. The investor will also pay capital gains taxes (respectively, be paid capital loss credits) on the amount of profit (respectively, loss) made by short-selling shares of the stock and then buying back the shares at a lower (respectively, higher) price at a later time. The tax will be paid (or the credit given) at the buying-back time. Throughout, a negative amount of tax will be interpreted as a capital loss credit. For simplicity, the capital gains tax and capital loss credit rates are assumed to be the same, β > 0. Therefore, if |m| shares of the stock (m > 0 stands for buying, m < 0 for selling) are traded at the current price S(t) with base price B(t) = S(t − τ(t)), then the amount of tax due at the transaction time is |m|β(S(t) − S(t − τ(t))).

Rule (B5.6). The tax and/or credit will not exceed all other gross proceeds and/or total costs of the stock shares; that is,

\[
m(1 - \mu)S(t) \geq \beta m |S(t) - S(t - \tau(t))| \quad \text{if } m \geq 0
\]


and

\[
m(1 + \mu)S(t) \leq \beta m |S(t) - S(t - \tau(t))| \quad \text{if } m < 0,
\]

where m ∈ ℝ denotes the number of shares of the stock traded, with m ≥ 0 being the number of shares purchased and m < 0 being the number of shares sold.

Convention (B5.7). Throughout, we assume that µ + β < 1.

Under the above assumption and Rules (B5.1)-(B5.6), the investor's objective is to seek an optimal consumption-trading strategy (C*, T*) in order to maximize

\[
E\left[ \int_0^\infty e^{-\alpha t} \frac{C^\gamma(t)}{\gamma}\, dt \right],
\]

the expected utility from the total discounted consumption over the infinite time horizon, where α > 0 represents the discount rate and 0 < γ < 1 represents the investor's risk aversion factor. Due to the fixed plus proportional transaction costs and the hereditary nature of the stock dynamics and inventories, the problem will be formulated as a combination of a classical control problem (for consumption) and an impulse control problem (for the transactions) in infinite dimensions. A classical-impulse control problem in finite dimensions is treated in Øksendal and Sulem [ØS02] for a Black-Scholes market with fixed and proportional transaction costs but without consideration of capital gains taxes. The problem treated here extends existing work on optimal consumption-investment problems in the following sense: (i) the stock price dynamics is described by a nonlinear SHDE with unbounded but fading memory instead of an ordinary stochastic differential equation or a geometric Brownian motion; (ii) the tax basis for computing capital gains taxes or capital loss credits is the actual purchase price of the shares of the stock. The details of the treatment of this infinite-dimensional consumption-investment problem are given in Chapter 7. There, the Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI) for the value function together with its boundary conditions is derived, and the verification theorem for the optimal investment-trading strategy is proved.
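The objective can be evaluated along any given consumption path by direct quadrature; a minimal sketch for a piecewise-constant path, with illustrative constants:

```python
import math

# Discounted HARA utility  integral of e^{-alpha t} C(t)^gamma / gamma dt
# for a piecewise-constant consumption path, via a left Riemann sum.
alpha, gamma = 0.05, 0.5

def discounted_utility(consumption, dt):
    return sum(math.exp(-alpha * k * dt) * c**gamma / gamma * dt
               for k, c in enumerate(consumption))
```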
It is also shown that the value function is a viscosity solution of the HJBQVI. Due to the complexity of the analysis involved, the uniqueness result and finite-dimensional approximations for the viscosity solution of the HJBQVI are not included in the monograph. To describe the problem and the results obtained, let

\[
(X(0-), N_{0-}, S(0), S_0) = (x, \xi, \psi(0), \psi) \in \mathbb{R} \times \mathbf{N} \times \mathbb{R}_+ \times L^2_{\rho,+} \equiv \mathbf{S}
\]

be the investor's initial portfolio immediately prior to t = 0; that is, the investor starts with x ∈ ℝ dollars in his savings account, the initial stock inventory

\[
\xi(\theta) = \sum_{k=0}^{\infty} n(-k) 1_{\{\tau(-k)\}}(\theta), \quad \theta \in (-\infty, 0),
\]

and the initial profile of historical stock prices (ψ(0), ψ) ∈ ℝ₊ × L²_{ρ,+}. Within the solvency region S_κ (see (0.43)), the investor is allowed to consume from his savings account and can make transactions between his savings and stock accounts under Rules (B5.1)-(B5.6) and according to a consumption-trading strategy π = (C, T), where the following hold:

(i) The consumption rate process C = {C(t), t ≥ 0} is a non-negative G-progressively measurable process such that

\[
\int_0^T C(t)\, dt < \infty \quad P\text{-a.s.} \ \forall T > 0.
\]

(ii) T = {(τ(i), ζ(i)), i = 1, 2, . . .} is a trading strategy, with τ(i), i = 1, 2, . . ., being a sequence of trading times that are G-stopping times such that

\[
0 = \tau(0) \leq \tau(1) < \cdots < \tau(i) < \cdots \quad \text{and} \quad \lim_{i \to \infty} \tau(i) = \infty \ P\text{-a.s.},
\]

and, for each i = 0, 1, . . ., ζ(i) = (. . . , m(i − k), . . . , m(i − 2), m(i − 1), m(i)) is an N-valued G(τ(i))-measurable random vector (instead of a random variable in ℝ) that represents the trading quantities at the trading time τ(i). In the above, G = {G(t), t ≥ 0} is the filtration generated by the stock prices S(·) = {S(t), t ≥ 0}, and m(i) > 0 (respectively, m(i) < 0) is the number of stock shares newly purchased (respectively, short sold) at the current time τ(i) and at the current price S(τ(i)); for k = 1, 2, . . ., m(i − k) > 0 (respectively, m(i − k) < 0) is the number of stock shares bought back (respectively, sold) at the current time τ(i) and the current price S(τ(i)) from his open short (respectively, long) position created at the previous time τ(i − k) with base price S(τ(i − k)).

For each stock inventory ξ of the form expressed in (0.39), Rules (B5.1)-(B5.6) also dictate that the investor can purchase or short sell new shares and/or buy back (respectively, sell) all or part of what he owes (respectively, owns). Therefore, the trading quantities {m(−k), k = 0, 1, . . .} must satisfy the constraint set R(ξ) ⊂ N defined by

\[
R(\xi) = \Bigl\{ \zeta \in \mathbf{N} \ \Big| \ \zeta = \sum_{k=0}^{\infty} m(-k) 1_{\{\tau(-k)\}}, \ -\infty < m(0) < \infty, \ \text{and either } n(-k) > 0, \ m(-k) \leq 0 \ \& \ n(-k) + m(-k) \geq 0, \ \text{or } n(-k) < 0, \ m(-k) \geq 0 \ \& \ n(-k) + m(-k) \leq 0 \ \text{for } k \geq 1 \Bigr\}. \tag{0.40}
\]
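The constraint set R(ξ) in (0.40) says that a closing trade may only offset part or all of an existing position. A dictionary-based sketch of the membership test (the representation of inventories and trades as dictionaries is our own, purely illustrative choice):

```python
# Inventory and trade as {tau_k: shares}; tau = 0 denotes the current trading time.
def in_constraint_set(inventory, trade):
    for tau, m in trade.items():
        if tau == 0:
            continue                      # m(0) is unrestricted in (0.40)
        n = inventory.get(tau, 0)
        if n > 0 and not (m <= 0 and n + m >= 0):
            return False                  # long position: sell at most n shares
        if n < 0 and not (m >= 0 and n + m <= 0):
            return False                  # short position: buy back at most -n
        if n == 0 and m != 0:
            return False                  # no position at tau to offset
    return True
```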


Define the function H_κ : S → ℝ as follows:

\[
H_\kappa(x, \xi, \psi(0), \psi) = \max\Bigl\{ G_\kappa(x, \xi, \psi(0), \psi), \ \min\{x, n(-k), k = 0, 1, 2, \ldots\} \Bigr\}, \tag{0.41}
\]

where G_κ : S → ℝ is the liquidating function defined by

\[
G_\kappa(x, \xi, \psi(0), \psi) = x - \kappa + \sum_{k=0}^{\infty} \Bigl[ \min\{(1 - \mu)n(-k), (1 + \mu)n(-k)\}\psi(0) - n(-k)\beta\bigl(\psi(0) - \psi(\tau(-k))\bigr) \Bigr]. \tag{0.42}
\]
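A direct transcription of the liquidating function (0.42), again using an illustrative dictionary representation, with base prices looked up from the historical price profile:

```python
def liquidating_value(x, inventory, price, base_price, kappa, mu, beta):
    # G_kappa in (0.42): close every position at the current price, paying the
    # fixed cost kappa once, plus proportional costs and capital gains taxes.
    total = x - kappa
    for tau, n in inventory.items():
        total += min((1.0 - mu) * n, (1.0 + mu) * n) * price
        total -= n * beta * (price - base_price[tau])
    return total
```

For example, closing a single long position of 10 shares (base price 90, current price 100) with κ = 1, µ = 0.01, β = 0.2 yields 990 in net sale proceeds minus 20 in tax minus the fixed fee.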

The solvency region S_κ of the portfolio optimization problem is defined as

\[
\mathbf{S}_\kappa = \bigl\{ (x, \xi, \psi(0), \psi) \in \mathbf{S} \mid H_\kappa(x, \xi, \psi(0), \psi) \geq 0 \bigr\} = \bigl\{ (x, \xi, \psi(0), \psi) \in \mathbf{S} \mid G_\kappa(x, \xi, \psi(0), \psi) \geq 0 \bigr\} \cup \mathbf{S}_+, \tag{0.43}
\]

where S₊ = ℝ₊ × N₊ × ℝ₊ × L²_{ρ,+} and N₊ = {ξ ∈ N | ξ(θ) ≥ 0, ∀θ ∈ (−∞, 0]}. Note that within the solvency region S_κ there are positions that cannot be closed at all, namely those (x, ξ, ψ(0), ψ) ∈ S_κ such that (x, ξ, ψ(0), ψ) ∈ S₊ and G_κ(x, ξ, ψ(0), ψ) < 0. This is due to the insufficiency of funds to pay for the transaction costs and/or taxes. Observe that the solvency region S_κ is an unbounded and nonconvex subset of the state space S. At time t ≥ 0, the investor's portfolio in the financial market will be denoted by the quadruplet (X(t), N_t, S(t), S_t), where X(t) denotes the investor's holdings in his savings account, N_t ∈ N is the inventory of his stock account, and (S(t), S_t) describes the profile of the unit prices of the stock over the past history (−∞, t] as described in (0.38). Given the initial portfolio (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ S_κ and applying a consumption-trading strategy π = (C, T), the portfolio dynamics of {Z(t) = (X(t), N_t, S(t), S_t), t ≥ 0} can then be described as follows. First, the savings account holdings {X(t), t ≥ 0} satisfy the following differential equation between trading times:

\[
dX(t) = [\lambda X(t) - C(t)]\, dt, \quad \tau(i) \leq t < \tau(i + 1), \ i = 0, 1, 2, \ldots, \tag{0.44}
\]

and X(t) has the following jump at the trading time τ(i):

\[
\begin{aligned}
X(\tau(i)) = X(\tau(i)-) - \kappa &- \sum_{k=0}^{\infty} m(i - k)\bigl[ (1 - \mu)S(\tau(i)) - \beta\bigl(S(\tau(i)) - S(\tau(i - k))\bigr) \bigr] 1_{\{n(i-k) > 0,\, -n(i-k) \leq m(i-k) \leq 0\}} \\
&- \sum_{k=0}^{\infty} m(i - k)\bigl[ (1 + \mu)S(\tau(i)) - \beta\bigl(S(\tau(i)) - S(\tau(i - k))\bigr) \bigr] 1_{\{n(i-k) < 0,\, 0 \leq m(i-k) \leq -n(i-k)\}}, \tag{0.45}
\end{aligned}
\]

where m(i) > 0 (respectively, m(i) < 0) means buying (respectively, selling) new stock shares at τ(i), and m(i − k) > 0 (respectively, m(i − k) < 0) means buying back (respectively, selling) some or all of what he owes (respectively, owns). Second, the inventory of the investor's stock account at time t ≥ 0, N_t ∈ N, does not change between trading times and can be expressed as follows:

\[
N_t = N_{\tau(i)} = \sum_{k=-\infty}^{Q(t)} n(k) 1_{\{\tau(k)\}} \quad \text{if } \tau(i) \leq t < \tau(i + 1), \ i = 0, 1, \ldots, \tag{0.46}
\]

where Q(t) = sup{k ≥ 0 | τ(k) ≤ t}. It has the following jump at the trading time τ(i):

\[
N_{\tau(i)} = N_{\tau(i)-} \oplus \zeta(i), \tag{0.47}
\]

where N_{τ(i)−} ⊕ ζ(i) : (−∞, 0] → N is defined, for θ ∈ (−∞, 0], by

\[
\begin{aligned}
(N_{\tau(i)-} \oplus \zeta(i))(\theta) &= \sum_{k=0}^{\infty} \hat{n}(i - k) 1_{\{\tau(i-k)\}}(\tau(i) + \theta) \\
&= m(i) 1_{\{\tau(i)\}}(\tau(i) + \theta) + \sum_{k=1}^{\infty} \Bigl[ n(i - k) + m(i - k)\bigl( 1_{\{n(i-k) < 0,\, 0 \leq m(i-k) \leq -n(i-k)\}} + 1_{\{n(i-k) > 0,\, -n(i-k) \leq m(i-k) \leq 0\}} \bigr) \Bigr] 1_{\{\tau(i-k)\}}(\tau(i) + \theta). \tag{0.48}
\end{aligned}
\]
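For an admissible trade, the update N_{τ(i)−} ⊕ ζ(i) in (0.47)-(0.48) amounts to adding each trade quantity to the position it targets and opening a new position at the current time. A dictionary-based sketch (times keyed relative to τ(i); the representation is illustrative):

```python
def apply_trade(inventory, trade):
    # Each m(i-k) offsets the position opened at tau(i-k); m(i) (key 0) opens a
    # new position at the current trading time.  Assumes 'trade' is admissible.
    updated = dict(inventory)
    for tau, m in trade.items():
        updated[tau] = updated.get(tau, 0) + m
    return updated
```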

Third, since the investor is small, the unit stock price process {S(t), t ≥ 0} will not be in any way affected by the investor's actions in the market and is again described as in (0.38). If the investor starts with an initial portfolio (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ S_κ, the consumption-trading strategy π = (C, T) is said to be admissible at (x, ξ, ψ(0), ψ) if


\[
\zeta(i) \in R(N_{\tau(i)-}) \quad \forall i = 1, 2, \ldots, \quad \text{and} \quad (X(t), N_t, S(t), S_t) \in \mathbf{S}_\kappa \quad \forall t \geq 0.
\]

The class of consumption-investment strategies admissible at (x, ξ, ψ(0), ψ) ∈ S_κ will be denoted by U_κ(x, ξ, ψ(0), ψ). Given the initial state (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ S_κ, the investor's objective is to find an admissible consumption-trading strategy π* ∈ U_κ(x, ξ, ψ(0), ψ) that maximizes the following expected utility from the total discounted consumption:

\[
J_\kappa(x, \xi, \psi(0), \psi; \pi) = E^{x,\xi,\psi(0),\psi;\pi}\left[ \int_0^\infty e^{-\alpha t} \frac{C^\gamma(t)}{\gamma}\, dt \right] \tag{0.49}
\]

among the class of admissible consumption-trading strategies U_κ(x, ξ, ψ(0), ψ), where E^{x,ξ,ψ(0),ψ;π}[···] is the expectation with respect to P^{x,ξ,ψ(0),ψ;π}{···}, the probability measure induced by the state process {(X(t), N_t, S(t), S_t), t ≥ 0} controlled by π and conditioned on the initial state (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ). In the above, α > 0 denotes the discount factor, and 0 < γ < 1 indicates that the utility function U(c) = c^γ/γ, for c > 0, is of the HARA (hyperbolic absolute risk aversion) type considered in most of the optimal consumption-trading literature. The admissible (consumption-trading) strategy π* ∈ U_κ(x, ξ, ψ(0), ψ) that maximizes J_κ(x, ξ, ψ(0), ψ; π) is called an optimal (consumption-trading) strategy, and the function V_κ : S_κ → ℝ₊ defined by

\[
V_\kappa(x, \xi, \psi(0), \psi) = \sup_{\pi \in U_\kappa(x,\xi,\psi(0),\psi)} J_\kappa(x, \xi, \psi(0), \psi; \pi) = J_\kappa(x, \xi, \psi(0), \psi; \pi^*) \tag{0.50}
\]

is called the value function of the hereditary portfolio optimization problem. Extending a standard technique for deriving the variational HJB inequality for stochastic classical-singular and classical-impulse control problems (see Bensoussan and Lions [BL84] for stochastic impulse controls, Brekke and Øksendal [BØ98] and Øksendal and Sulem [ØS02] for stochastic classical-impulse controls of finite-dimensional diffusion processes, and Larssen [Lar02] and Larssen and Risebro [LR03] for classical and singular controls of a special class of stochastic delay equations), one can show that on the set {(x, ξ, ψ(0), ψ) ∈ S°_κ | M_κ V_κ(x, ξ, ψ(0), ψ) < V_κ(x, ξ, ψ(0), ψ)} we have 𝐀V_κ = 0, and on the set {(x, ξ, ψ(0), ψ) ∈ S°_κ | 𝐀V_κ(x, ξ, ψ(0), ψ) < 0},


we have M_κ V_κ = V_κ. Therefore, we have the following HJBQVI on S°_κ:

\[
\max\bigl\{ \mathbf{A}V_\kappa, \ M_\kappa V_\kappa - V_\kappa \bigr\} = 0 \quad \text{on } \mathbf{S}^\circ_\kappa, \tag{0.51}
\]

where

\[
\mathbf{A}\Phi = (A + \lambda x \partial_x - \alpha)\Phi + \sup_{c \geq 0}\Bigl( \frac{c^\gamma}{\gamma} - c\,\partial_x \Phi \Bigr), \tag{0.52}
\]

where S is the infinitesimal generator given by

\[
S\Phi(x, \xi, \psi(0), \psi) = \lim_{\epsilon \downarrow 0} \frac{\Phi(x, \xi, \psi(0), \tilde{\psi}_\epsilon) - \Phi(x, \xi, \psi(0), \psi)}{\epsilon}, \tag{0.53}
\]

and ψ̃ : (−∞, ∞) → ℝ is the extension of ψ : (−∞, 0] → ℝ defined by

\[
\tilde{\psi}(t) = \begin{cases} \psi(0) & \text{for } t \geq 0 \\ \psi(t) & \text{for } t < 0. \end{cases} \tag{0.54}
\]

A is a certain operator that involves second-order Fréchet derivatives (with respect to ψ(0) and ψ), which will be described in detail in Section 7.3 of Chapter 7, and M_κ Φ is given by

\[
M_\kappa \Phi(x, \xi, \psi(0), \psi) = \sup\bigl\{ \Phi(\hat{x}, \hat{\xi}, \hat{\psi}(0), \hat{\psi}) \ \big| \ \zeta \in R(\xi) - \{0\}, \ (\hat{x}, \hat{\xi}, \hat{\psi}(0), \hat{\psi}) \in \mathbf{S}_\kappa \bigr\}, \tag{0.55}
\]

where (x̂, ξ̂, ψ̂(0), ψ̂) are defined as follows:

\[
\begin{aligned}
\hat{x} = x - \kappa - (m(0) + \mu|m(0)|)\psi(0) - \sum_{k=1}^{\infty} \Bigl[ &\bigl( (1 + \mu)m(-k)\psi(0) - \beta m(-k)(\psi(0) - \psi(\tau(-k))) \bigr) 1_{\{n(-k) < 0,\, 0 \leq m(-k) \leq -n(-k)\}} \\
+ &\bigl( (1 - \mu)m(-k)\psi(0) - \beta m(-k)(\psi(0) - \psi(\tau(-k))) \bigr) 1_{\{n(-k) > 0,\, -n(-k) \leq m(-k) \leq 0\}} \Bigr], \tag{0.56}
\end{aligned}
\]

and, for all θ ∈ (−∞, 0],

\[
\hat{\xi}(\theta) = (\xi \oplus \zeta)(\theta) = m(0) 1_{\{\tau(0)\}}(\theta) + \sum_{k=1}^{\infty} \Bigl[ n(-k) + m(-k)\bigl( 1_{\{n(-k) < 0,\, 0 \leq m(-k) \leq -n(-k)\}} + 1_{\{n(-k) > 0,\, -n(-k) \leq m(-k) \leq 0\}} \bigr) \Bigr] 1_{\{\tau(-k)\}}(\theta), \tag{0.57}
\]

and

\[
(\hat{\psi}(0), \hat{\psi}) = (\psi(0), \psi). \tag{0.58}
\]

The boundary values for the HJBQVI on ∂S_κ are given as follows. First, the boundary of the solvency region ∂S_κ can be decomposed as follows. For each (x, ξ, ψ(0), ψ) ∈ S_κ, let I ≡ I(x, ξ, ψ(0), ψ) = {i ∈ {0, 1, 2, . . .} | n(−i) < 0}. For I ⊂ {0, 1, 2, . . .}, let S_{κ,I} ⊂ S_κ be defined as S_{κ,I} = {(x, ξ, ψ(0), ψ) ∈ S_κ | n(−i) < 0 for all i ∈ I and n(−i) ≥ 0 for all i ∉ I}. With this interpretation,

\[
\partial \mathbf{S}_\kappa = \bigcup_{I \subset \aleph} (\partial_{-,I} \mathbf{S}_\kappa \cup \partial_{+,I} \mathbf{S}_\kappa), \tag{0.59}
\]

where

\[
\partial_{-,I} \mathbf{S}_\kappa = \partial_{-,I,1} \mathbf{S}_\kappa \cup \partial_{-,I,2} \mathbf{S}_\kappa, \tag{0.60}
\]
\[
\partial_{+,I} \mathbf{S}_\kappa = \partial_{+,I,1} \mathbf{S}_\kappa \cup \partial_{+,I,2} \mathbf{S}_\kappa, \tag{0.61}
\]

with

∂_{+,I,1}S_κ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) = 0, x ≥ 0, n(−i) < 0 for all i ∈ I & n(−i) ≥ 0 for all i ∉ I},  (0.62)

∂_{+,I,2}S_κ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) < 0, x ≥ 0, n(−i) = 0 for all i ∈ I & n(−i) ≥ 0 for all i ∉ I},  (0.63)

∂_{−,I,1}S_κ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) = 0, x < 0, n(−i) < 0 for all i ∈ I & n(−i) ≥ 0 for all i ∉ I},  (0.64)

and

∂_{−,I,2}S_κ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) < 0, x = 0, n(−i) = 0 for all i ∈ I & n(−i) ≥ 0 for all i ∉ I}.  (0.65)

The HJBQVI together with the boundary conditions can then be expressed as follows:

\[
\text{HJBQVI}(*) \quad
\begin{cases}
\max\{\mathbf{A}\Phi, \ M_\kappa \Phi - \Phi\} = 0 & \text{on } \mathbf{S}^\circ_\kappa \\
\mathbf{A}\Phi = 0 & \text{on } \bigcup_{I \subset \aleph} \partial_{+,I,2} \mathbf{S}_\kappa \\
L^0 \Phi = 0 & \text{on } \bigcup_{I \subset \aleph} \partial_{-,I,2} \mathbf{S}_\kappa \\
M_\kappa \Phi - \Phi = 0 & \text{on } \bigcup_{I \subset \aleph} \partial_{I,1} \mathbf{S}_\kappa,
\end{cases}
\]


where 𝐀Φ, L⁰Φ (L^cΦ with c = 0), and M_κ Φ are given by

\[
\mathbf{A}\Phi = (A + \lambda x \partial_x - \alpha)\Phi + \sup_{c \geq 0}\Bigl( \frac{c^\gamma}{\gamma} - c\,\partial_x \Phi \Bigr), \tag{0.66}
\]
\[
L^0 \Phi(x, \xi, \psi(0), \psi) \equiv (A + \lambda x \partial_x - \alpha)\Phi, \tag{0.67}
\]

and

\[
M_\kappa \Phi(x, \xi, \psi(0), \psi) = \sup\bigl\{ \Phi(\hat{x}, \hat{\xi}, \hat{\psi}(0), \hat{\psi}) \ \big| \ \zeta \in R(\xi) - \{0\}, \ (\hat{x}, \hat{\xi}, \hat{\psi}(0), \hat{\psi}) \in \mathbf{S}_\kappa \bigr\}. \tag{0.68}
\]

We have the following verification theorem for the value function V_κ : S_κ → ℝ of our hereditary portfolio optimization problem.

Theorem (B5.1). (The Verification Theorem)
(a) Let U_κ = S_κ − ∪_{I⊂ℵ} ∂_{I,1}S_κ. Suppose that there exists a locally bounded non-negative-valued function Φ ∈ C^{1,0,2}_{lip}(S_κ) ∩ D(S) such that

\[
\tilde{\mathbf{A}}\Phi \leq 0 \quad \text{on } \mathbf{U}_\kappa, \tag{0.69}
\]
\[
\Phi \geq M_\kappa \Phi \quad \text{on } \mathbf{U}_\kappa, \tag{0.70}
\]

where

\[
\tilde{\mathbf{A}}\Phi = \begin{cases} \mathbf{A}\Phi & \text{on } \mathbf{S}^\circ_\kappa \cup \bigcup_{I \subset \aleph} \partial_{+,I,2} \mathbf{S}_\kappa \\ L^0 \Phi & \text{on } \bigcup_{I \subset \aleph} \partial_{-,I,2} \mathbf{S}_\kappa. \end{cases}
\]

Then Φ ≥ V_κ on U_κ.

(b) Define D ≡ {(x, ξ, ψ(0), ψ) ∈ U_κ | Φ(x, ξ, ψ(0), ψ) > M_κ Φ(x, ξ, ψ(0), ψ)}. Suppose

\[
\tilde{\mathbf{A}}\Phi(x, \xi, \psi(0), \psi) = 0 \quad \text{on } \mathbf{D} \tag{0.71}
\]

and that ζ̂(x, ξ, ψ(0), ψ) = ζ̂_Φ(x, ξ, ψ(0), ψ) exists for all (x, ξ, ψ(0), ψ) ∈ S_κ. Define

\[
c^* = \begin{cases} (\partial_x \Phi)^{\frac{1}{\gamma - 1}} & \text{on } \mathbf{S}^\circ_\kappa \cup \bigcup_{I \subset \aleph} \partial_{+,I,2} \mathbf{S}_\kappa; \\ 0 & \text{on } \bigcup_{I \subset \aleph} \partial_{-,I,2} \mathbf{S}_\kappa, \end{cases} \tag{0.72}
\]

and define the impulse control T* = {(τ*(i), ζ*(i)), i = 1, 2, . . .} inductively as follows. First, put τ*(0) = 0 and inductively

\[
\tau^*(i + 1) = \inf\bigl\{ t > \tau^*(i) \mid (X^{(i)}(t), N^{(i)}_t, S(t), S_t) \notin \mathbf{D} \bigr\}, \tag{0.73}
\]
\[
\zeta^*(i + 1) = \hat{\zeta}\bigl(X^{(i)}(\tau^*(i + 1)-), N^{(i)}_{\tau^*(i+1)-}, S(\tau^*(i + 1)), S_{\tau^*(i+1)}\bigr), \tag{0.74}
\]

where {(X^{(i)}(t), N^{(i)}_t, S(t), S_t), t ≥ 0} is the controlled state process obtained by applying the combined control

34

Introduction and Summary

π ∗ (i) = (c∗ , (τ ∗ (1), τ ∗ (2), . . . , τ ∗ (i); ζ ∗ (1), ζ ∗ (2), · · · , ζ ∗ (i))), ∗



i = 1, 2, . . . .



Suppose π = (C , T ) ∈ Uκ (x, ξ, ψ(0), ψ) and that e−δt Φ(X ∗ (t), Nt∗ , S(t), St ) → 0, as t → ∞ a.s. and that the family {e−ατ Φ(X ∗ (τ ), Nτ∗ , S(τ ), Sτ )) | τ G-stopping times}

(0.75)

is uniformly integrable. Then Φ(x, ξ, ψ(0), ψ) = Vκ (x, ξ, ψ(0), ψ) and π ∗ obtained in (0.72)-(0.74) is optimal. Theorem (B5.2). The value function Vκ : Sκ →  for the hereditary portfolio optimization is a viscosity solution of the HJBQVI (*). C. Organization of the Monograph This monograph consists of seven chapters. The content of each of these seven chapters are summarized as follows. Chapter 1. This chapter collects and extends some of the known results on the existence, uniqueness, and Markovian properties of the strong and weak solution process for SHDEs treated in Mohammed [Moh84, Moh98] for bounded memory 0 < r < ∞ and for SHDEs with infinite but fading memory r = ∞ treated in Mizel and Trutzer [MT84]. The concept of the weak solution process and its uniqueness are also introduced in this chapter. It is also shown in there that a uniqueness of the strong solution of the SHDE implies uniqueness of the weak solution. This fact will be used in proving the DDP in Chapter 3. The Markovian and strong Markovian properties for the segment processes corresponding to the SHDE with bounded and unbounded memory are reviewed in Section 1.5. Chapter 2. This chapter reviews some of the known results and develops some of the analytic tools on a generic Banach space as well as on the specific function spaces C and M ≡  × L2ρ ((−∞, 0]; ) that are needed in the subsequent chapters. In particular, an infinitesimal generator of a semigroup of shift operators as well as Fr´echet differentiation of a real-valued function defined on C and/or on M are described in detail. The main tool established in this chapter include the weak infinitesimal generators of the C-valued Markovian solution process {xs , s ∈ [t, T ]} for an equation with bounded memory and for the M-valued Markovian solution process {(S(s), Ss ), s ≥ 0} for an equation with infinite but fading memory. 
These two weak infinitesimal generators, along with Dynkin's and Itô's formulas for quasi-tame functions of the C- and M-valued segment processes, are the main ingredients for establishing the Hamilton-Jacobi-Bellman theory for the optimal control problems treated later.


Chapter 3. This chapter formulates the optimal classical control problem briefly described in B1 and provides a characterization of its value function. The existence of an optimal classical control is proved in Section 3.2 via the extended class of admissible relaxed controls. The characterization is written in terms of an infinite-dimensional HJBE. The Bellman-type DDP is stated and proved in Section 3.3. Based on the DDP, the HJBE for the value function is heuristically derived in Section 3.4 under the condition that the value function is sufficiently smooth. However, in most optimal control problems, deterministic or stochastic, the value function is known not to meet these smoothness conditions and therefore cannot be a solution of the HJBE in the strong sense. The concept of a viscosity solution is introduced in Section 3.5. In that section and in Section 3.6, it is shown that the value function is the unique viscosity solution of the HJBE. A generalized verification theorem in the framework of viscosity solutions is conjectured in Section 3.7. Finally, some special optimal control problems that yield a finite-dimensional HJBE, along with a couple of application examples, are given in Section 3.8.

Chapter 4. This chapter formulates the optimal stopping problem briefly described in B2 and provides a characterization of its value function. Section 4.1 gives the problem formulation. Section 4.2 addresses the existence of an optimal stopping time via the construction of the least harmonic majorant of the terminal reward function of an equivalent optimal stopping problem. The characterization is written in terms of an infinite-dimensional HJBVI, which is given in Section 4.3. A verification theorem that provides a method for finding the optimal stopping time is given in Section 4.4. The value function being the unique viscosity solution of the HJBVI is addressed in Sections 4.5 and 4.6. The organization within this chapter parallels that of Chapter 3.

Chapter 5.
This chapter deals with computational issues for the viscosity solution of the HJBE arising from the finite-horizon optimal classical control problem described in B1 and detailed in Chapter 3. Three approximation schemes are presented: (1) a semidiscretization scheme; (2) Markov chain approximation; and (3) finite difference approximation. Preliminaries and discretization notation are given in Section 5.1. The semidiscretization scheme, which discretizes the temporal variable (but not the spatial variable) of the state process and then the control process, is the subject of Section 5.2; the section also provides the rate of convergence of the scheme. The well-known Markov chain approximation method for controlled diffusion processes is extended to controlled SHDEs in Section 5.3. The finite difference approximation is given in Section 5.4. The results are based on the recent works by Chang et al. [CPP07a, CPP07b, CPP07c]. The method extends the finite difference scheme obtained by Barles and Souganidis [BS91] for controlled diffusion processes in finite dimensions to the viscosity solution of the infinite-dimensional HJBE. The convergence of the schemes is proved using the Banach fixed-point theorem. Computational algorithms based on the schemes obtained are also provided.


Chapter 6. This chapter treats the pricing problems of American and European options as an application of the optimal stopping theory described in Chapter 4. It is shown that the pricing function is the unique viscosity solution of an HJBVI. A computational algorithm is given for an infinite-dimensional Black-Scholes equation. Specifically, the formulation of the option pricing problem in the (B, S)-market with hereditary structure is given in Section 6.1. Section 6.2 summarizes the definition of the HJBVI. This section also contains the optimal strategy the option writer should use in order to replicate the pricing function if and when the contract holder exercises his option. Section 6.3 explores the concept of a risk-neutral martingale measure under which the (B, S)-market is complete and arbitrage-free. Section 6.4 treats specifically the HJBVI characterization of the American option. As a special case, the infinite-dimensional Black-Scholes equation for the European option is also derived under a differentiability assumption on the pricing function; this is treated in Sections 6.5 and 6.6. A series solution technique for the Black-Scholes equation is developed in Section 6.7 for computational purposes.

Chapter 7. This chapter treats an infinite-time-horizon hereditary portfolio optimization problem in a market that consists of one savings account and one stock account. The savings account compounds continuously at a fixed interest rate, and the stock account keeps track of the inventories of one underlying stock whose unit price follows a nonlinear SHDE. Within the solvency region, the investor is allowed to consume from the savings account and can make transactions between the two assets, subject to paying capital-gains taxes as well as a fixed plus proportional transaction cost. The investor seeks an optimal consumption-trading strategy in order to maximize the expected utility from the total discounted consumption.
The portfolio optimization problem is formulated as an infinite-dimensional stochastic classical-impulse control problem, due to the hereditary nature of the stock price dynamics and inventories. The HJBQVI for the value function, together with its boundary conditions, is derived under certain smoothness conditions. A verification theorem for the optimal strategy is given. It is also shown that the value function is a viscosity solution of the HJBQVI. However, the uniqueness and finite-dimensional approximations of the viscosity solution are not included here.
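The inductive construction of intervention times in such classical-impulse problems (cf. (0.73)-(0.74) above) amounts to a simulate-until-exit loop: diffuse while the state stays in the continuation region, intervene when it leaves. The sketch below is purely illustrative, with a hypothetical one-dimensional state, an assumed continuation region D = (lo, hi), and an assumed intervention map `apply_impulse` standing in for ζ̂; none of these objects are taken from the text.

```python
import random

def simulate_impulse_policy(x0, lo, hi, apply_impulse, T=10.0, dt=0.01, seed=42):
    """One discretized path under an impulse policy: diffuse while the state
    stays in the continuation region D = (lo, hi); when it exits D, record
    the intervention time and apply the (hypothetical) impulse map."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    interventions = []                  # pairs (intervention time, pre-impulse state)
    while t < T:
        if not (lo < x < hi):           # exit from D: intervene
            interventions.append((t, x))
            x = apply_impulse(x)
        x += rng.gauss(0.0, dt ** 0.5)  # unit-diffusion Brownian step inside D
        t += dt
    return interventions

# Hypothetical intervention map: reset the state to the center of D.
ivs = simulate_impulse_policy(x0=0.0, lo=-1.0, hi=1.0, apply_impulse=lambda x: 0.0)
```

Each recorded pair is an intervention time together with the pre-jump state, mirroring the arguments of ζ̂ in (0.74).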

1 Stochastic Hereditary Differential Equations

The main purpose of this chapter is to study the strong and weak solutions x(·) = {x(s), s ∈ [t − r, T]} of the class of stochastic hereditary differential equations (SHDEs)

dx(s) = f(s, x_s) ds + g(s, x_s) dW(s),  s ∈ [t, T],

with a bounded memory (i.e., 0 < r < ∞). We also study the one-dimensional SHDE

dS(s)/S(s) = f(S_s) ds + g(S_s) dW(s),  s ≥ 0,

with an infinite but fading memory (i.e., r = ∞), which describes the stock price dynamics involved in treating the hereditary portfolio optimization problem in Chapter 7. It is assumed that both equations are driven by a standard Brownian motion W(·) = {W(s), s ≥ 0} of appropriate dimension. In the above two systems, we use the conventional notation

x_s(θ) = x(s + θ), θ ∈ [−r, 0], for bounded memory 0 < r < ∞,
S_s(θ) = S(s + θ), θ ∈ (−∞, 0], for unbounded memory r = ∞.

The first equation, which is studied in detail later in this chapter (see (1.27)), represents the uncontrolled version of the SHDE for the stochastic control problems that are the subject of investigation in Chapters 3 and 4. The state process S(·) = {S(s), s ∈ (−∞, ∞)} in the second equation (see (1.43)) represents the price of the underlying stock, which plays an important role in the hereditary portfolio optimization problem to be studied in Chapter 7. In particular, we are interested in the existence and uniqueness of both the strong and weak solutions, and in the Markovian properties of the corresponding segment processes {x_s, s ∈ [t, T]} in the bounded memory case and {(S(s), S_s), s ≥ 0} in the infinite memory case. In order to make sense of a solution (strong or weak) of the above two equations, it is necessary that an appropriate initial segment be given on


the initial time interval [t − r, t] and (−∞, 0], respectively. This requirement is well documented in the study of deterministic functional differential equations (FDEs) (see, e.g., Hale [Hal77] and Hale and Lunel [HL93]). The fundamental theory of the first equation was systematically developed in Mohammed [Moh84, Moh98] with the state space C ≡ C([−r, 0]; ℝⁿ). However, the state space for (1.43), studied by Mizel and Trutzer [MT84], will be the ρ-weighted Hilbert space M ≡ ℝ × L²ρ((−∞, 0]; ℝ) instead of the Banach space C, in order to take the unbounded but fading memory into account.

This chapter is organized as follows. In Section 1.1, we either collect or prove some preliminary results in the theory of stochastic processes that are needed for the development of SHDEs and of stochastic control theory. These preliminary results include (i) the Gronwall inequality; (ii) the regular conditional probability measure associated with a Banach-space-valued random variable given a sub-σ-algebra G of the underlying σ-algebra F; and (iii) a summary of Itô integration of an appropriate process with respect to the Brownian motion W(·) and its relevant properties, such as the Itô formula, the Girsanov transformation, and so forth; the Itô theory is established in Section 1.2. The existence and uniqueness of the solutions x(·) of (1.27) and S(·) of (1.43) are developed in Sections 1.3 and 1.4, respectively. Since the existence and uniqueness proofs are given in Chapter II of Mohammed [Moh84, Moh98] (for the bounded memory case 0 ≤ r < ∞) and in Mizel and Trutzer [MT84] (for the unbounded but fading memory case r = ∞) using the standard Picard iteration technique, we will only outline the important steps in these two sections and refer the interested reader to [Moh84], [Moh98], and [MT84] for the details.
The Markovian and strong Markovian properties of the C-valued segment process {x_s, s ∈ [t, T]} for (1.27) and of the M-valued segment process {(S(s), S_s), s ∈ [0, ∞)} for (1.43) will be established in Section 1.5. We note here that the results obtained for (1.43) carry over to the more general equation

dx(s) = f(s, x(s), x_s) ds + g(s, x(s), x_s) dW(s),  s ≥ 0,

where W(·) = {W(s), s ≥ 0} is an m-dimensional standard Brownian motion, and f and g are appropriate ℝⁿ- and ℝ^{n×m}-valued functions, respectively, both defined on [0, ∞) × ℝⁿ × L²ρ((−∞, 0], ℝⁿ).

The following are some real-world problems whose quantitative models call for a stochastic hereditary differential equation with bounded memory.

Example A. (Logistic Population Growth) Consider a single population x(s) at time s ≥ 0 evolving logistically with incubation period r > 0. Suppose there is an instantaneous migration rate at the molecular level that contributes γ dW(s) to the growth rate per capita at time s. Then the evolution of the population is governed by the nonlinear


logistic stochastic delay differential equation (SDDE)

dx(s) = [a − b x(s − r)] x(s) ds + γ x(s) dW(s),  s ≥ 0,

with a given continuous initial segment

x(s) = ψ(s),  s ∈ [−r, 0].
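A sample path of this logistic SDDE can be generated by an Euler-Maruyama scheme that reads the delayed value x(s − r) from the stored grid; the parameter values below are illustrative assumptions, not taken from the text.

```python
import math, random

def euler_maruyama_logistic_sdde(a, b, gamma, r, psi, T, dt, seed=0):
    """Simulate dx(s) = [a - b x(s-r)] x(s) ds + gamma x(s) dW(s), s >= 0,
    with initial segment x(s) = psi(s) on [-r, 0], on the grid s = k*dt."""
    rng = random.Random(seed)
    lag = int(round(r / dt))                           # grid points per delay length
    x = [psi(-r + k * dt) for k in range(lag + 1)]     # initial segment on [-r, 0]
    for k in range(int(round(T / dt))):
        xk, x_delay = x[-1], x[k]                      # x(k*dt) and x(k*dt - r)
        dW = rng.gauss(0.0, math.sqrt(dt))
        x.append(xk + (a - b * x_delay) * xk * dt + gamma * xk * dW)
    return x

# Illustrative parameters: equilibrium a/b = 2, delay r = 1, small noise.
path = euler_maruyama_logistic_sdde(a=1.0, b=0.5, gamma=0.1, r=1.0,
                                    psi=lambda s: 1.0, T=5.0, dt=0.01)
```

Setting gamma = 0 recovers the deterministic delayed logistic equation, whose path oscillates toward the equilibrium a/b.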

The above SDDE is a special case of (1.27),

dx(s) = f(x_s) ds + g(x_s) dW(s),

with f(φ) = [a − b φ(−r)] φ(0) and g(φ) = γ φ(0) for every continuous φ : [−r, 0] → ℝ.

The following is an example of a physical model that can be described by a two-dimensional system of SHDEs with unbounded but fading memory. The model describes the vertical motion of a dangling spider hanging from a ceiling by a massless viscoelastic string (see Mizel and Trutzer [MT84]).

Example B. (A Dangling Spider) A spider of mass m > 0 is hanging by a massless but extensible filament of length x. The forces acting on the spider are the tension T and a body force f in the x-direction. It is assumed that there exists a time-independent, twice continuously differentiable potential h(x) for the force f:

f(x) = −dh(x)/dx.

For example, if the force acting on the spider is gravity, then

h = m ḡ x,  f = −ḡ m,

with ḡ > 0 being the gravitational constant. It is assumed that the filament is composed of a simple viscoelastic material at constant and uniform temperature. The value of the tension T(s) at time s is given by a functional T(s) = F(x_s) of the history up to s, where x_s(θ) = x(s + θ), θ ∈ (−∞, 0], and F : L²ρ((−∞, 0], ℝ) → ℝ is continuous. Then, by Newton's second law of motion, we have

m ẍ(s) = f(x(s)) − F(x_s).

One can take m = 1 and

F(x_s) = −∫_{−∞}^{0} k(θ) g(x_s(θ) − x(s)) dθ.

For a stochastic version of the equation, we can take

ẍ(s) = f(x(s)) − F(x_s) + σ(x_s) Ẇ(s),

where W(·) is a one-dimensional standard Brownian motion and the stochastic term σ(x_s)Ẇ(s) can be interpreted as irregularity in the fiber. Letting y(s) = ẋ(s), the above second-order equation can be rewritten as the following system of two equations:

dx(s) = y(s) ds,
dy(s) = (f(x(s)) − F(x_s)) ds + σ(x_s) dW(s),  s ≥ 0.

Of course, in order to make sense of the above system, an initial segment (x(s), y(s)) has to be given for s ∈ (−∞, 0].
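A minimal simulation sketch of this system is given below, under purely illustrative assumptions: gravity f(x) = −ḡ (with m = 1), memory kernel k(θ) = e^θ truncated to a finite window, g(u) = u, and a small constant σ. The fading-memory integral F(x_s) is approximated by a Riemann sum over the stored history; all numerical values are assumptions, not taken from the text.

```python
import math, random

def simulate_spider(T=2.0, dt=0.005, mem=2.0, seed=1):
    """Euler-Maruyama for dx = y ds, dy = (f(x) - F(x_s)) ds + sigma dW, with
    illustrative choices: f(x) = -g_bar, kernel k(theta) = e^theta truncated
    to [-mem, 0], g(u) = u, and constant sigma."""
    rng = random.Random(seed)
    g_bar, sigma = 9.8, 0.05
    lag = int(mem / dt)
    xs = [1.0] * (lag + 1)           # constant initial segment x(s) = 1 for s <= 0
    y = 0.0                          # initial velocity taken to be 0
    for _ in range(int(T / dt)):
        x = xs[-1]
        # F(x_s) = -int k(theta) g(x_s(theta) - x(s)) dtheta, truncated Riemann sum
        F = -sum(math.exp(-j * dt) * (xs[-1 - j] - x) * dt for j in range(lag + 1))
        y += (-g_bar - F) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
        xs.append(x + y * dt)
        xs.pop(0)                    # slide the truncated history window
    return xs[-1], y

x_end, y_end = simulate_spider()
```

Over this short horizon the weak viscoelastic restoring force cannot offset gravity, so the simulated filament length drops below its initial value.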

1.1 Probabilistic Preliminaries

To make this volume accessible to readers who have been exposed to but are not very familiar with stochastic differential equations (SDEs), we include the following basic material from the theory of stochastic processes, for the purpose of being self-contained. These concepts can be found in any measure-theoretic book in this area, such as Karatzas and Shreve [KS91] and Øksendal [Øks98].

1.1.1 Gronwall Inequality

The following Gronwall inequality will be used in many places in this chapter and in our subsequent developments in other chapters.

Lemma 1.1.1 Suppose that h ∈ L¹([t, T]; ℝ) and α ∈ L^∞([t, T]; ℝ) satisfy, for some β ≥ 0,

0 ≤ h(s) ≤ α(s) + β ∫_t^s h(λ) dλ  for a.e. s ∈ [t, T].  (1.1)

Then

h(s) ≤ α(s) + β ∫_t^s α(λ) e^{β(s−λ)} dλ  for a.e. s ∈ [t, T].  (1.2)

If, in addition, α is nondecreasing, then

h(s) ≤ α(s) e^{β(s−t)}  for a.e. s ∈ [t, T].  (1.3)

Proof. Assume that β > 0; otherwise, the lemma is trivial. Define

z(s) := e^{−β(s−t)} ∫_t^s h(λ) dλ

and note that z is absolutely continuous on [t, T]. After multiplying (1.1) by e^{−β(s−t)}, we get

ż(s) ≤ α(s) e^{−β(s−t)}  for a.e. s ∈ [t, T].

We integrate both sides and multiply by e^{β(s−t)} to obtain

∫_t^s h(λ) dλ ≤ ∫_t^s α(λ) e^{β(s−λ)} dλ  for a.e. s ∈ [t, T].

This inequality and (1.1) give (1.2). □
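Lemma 1.1.1 can be checked numerically. For instance, with t = 0, constant α ≡ 1, and β = 2, the function h(s) = e^{2s} satisfies (1.1) with equality and attains the bound (1.3) exactly. The sketch below verifies both inequalities on a grid; a right-endpoint-style Riemann sum is used so that, for nondecreasing h, the discretized integral dominates the true one and the hypothesis check cannot fail spuriously.

```python
import math

def check_gronwall(h, alpha, beta, T=1.0, n=1000):
    """Verify on a grid that h satisfies the hypothesis (1.1) with a constant,
    nondecreasing alpha, and that the conclusion (1.3) h(s) <= alpha*e^{beta s}
    holds on [0, T]."""
    dt = T / n
    integral = 0.0
    for k in range(n + 1):
        s = k * dt
        integral += h(s) * dt   # dominates the true integral for nondecreasing h
        assert h(s) <= alpha + beta * integral, "hypothesis (1.1) fails"
        assert h(s) <= alpha * math.exp(beta * s) + 1e-9, "bound (1.3) fails"
    return True

ok = check_gronwall(lambda s: math.exp(2 * s), alpha=1.0, beta=2.0)
```

Any smaller function satisfying (1.1), e.g. h(s) = 1 + s, passes the same check with room to spare.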

Let (Ω, F, P, F) be a complete filtered probability space, where F = {F(s), s ≥ 0} is a filtration of sub-σ-algebras of F that satisfies the following usual conditions:
(i) F is complete, that is, N ≡ {A ⊂ Ω | ∃B ∈ F such that A ⊂ B and P(B) = 0} ⊂ F(0);
(ii) F is nondecreasing, that is, F(s) ⊂ F(s̃) for all 0 ≤ s ≤ s̃;
(iii) F is right-continuous, that is, F(s+) ≡ ∩_{ε↓0} F(s + ε) = F(s) for all s ≥ 0,
where ≡ stands for "is defined as".

For a generic Banach (or Hilbert) space (Ξ, ‖·‖_Ξ) and a sub-σ-algebra G of F, let L²(Ω, Ξ; G) be the collection of all Ξ-valued random variables X : (Ω, F, P) → (Ξ, B(Ξ)) that are G-measurable and satisfy

‖X‖²_{L²(Ω,Ξ)} ≡ E[‖X‖²_Ξ] ≡ ∫_Ω ‖X(ω)‖²_Ξ dP(ω) < ∞.

If G = F, we simply write L²(Ω, Ξ) for L²(Ω, Ξ; F). For 0 < T ≤ ∞, let C([0, T]; L²(Ω, Ξ)) be the space of all square-integrable continuous Ξ-valued processes X(·) = {X(s), s ∈ [0, T]} such that

‖X(·)‖²_{C([0,T];L²(Ω,Ξ))} ≡ sup_{s∈[0,T]} E[‖X(s)‖²_Ξ] < ∞.

Let C([0, T], L²(Ω, Ξ); G) be the set of those X(·) ∈ C([0, T]; L²(Ω, Ξ)) that are G-measurable. A process X(·) ∈ C([0, T]; L²(Ω, Ξ)) is said to be adapted to the filtration F = {F(s), s ≥ 0} if X(s) is F(s)-measurable for every s ∈ [0, T]. Denote by C([0, T], L²(Ω, Ξ); F) the set of such processes.

Lemma 1.1.2 C([0, T], L²(Ω, Ξ); F) is closed in C([0, T], L²(Ω, Ξ)).

Proof. Since for each s ∈ [0, T] the evaluation map π(s) : C([0, T], L²(Ω, Ξ)) → L²(Ω, Ξ) defined by π(s)(X(·)) = X(s) is continuous, it suffices to observe that the space C([0, T], L²(Ω, Ξ); F(s)) is closed in C([0, T], L²(Ω, Ξ)) for each s ∈ [0, T]. This proves the lemma. □


1.1.2 Stopping Times

A random variable τ : Ω → [0, ∞] is said to be an F-stopping time (or simply a stopping time if there is no possibility of confusion) if, for each t ≥ 0, {τ ≤ t} ∈ F(t). The following properties of stopping times can be found in Karatzas and Shreve [KS91] and Øksendal [Øks98]; the proofs of the trivial ones are therefore omitted here.

Proposition 1.1.3 The random variable τ : Ω → [0, ∞] is an F-stopping time if and only if

{τ < s} ∈ F(s)  for all s ≥ 0.

Proof. Since {τ ≤ s} = ∩_{s<λ<s+ε} {τ < λ} for any ε > 0, we have {τ ≤ s} ∈ ∩_{λ>s} F(λ) = F(s), so τ is a stopping time. For the converse, {τ < s} = ∪_{0<ε<s} {τ ≤ s − ε} and {τ ≤ s − ε} ∈ F(s − ε), hence {τ < s} ∈ F(s). □

Proposition 1.1.4 Let τ and τ′ be two F-stopping times. Then the following are also stopping times:
(i) τ ∧ τ′ ≡ min{τ, τ′};
(ii) τ ∨ τ′ ≡ max{τ, τ′};
(iii) τ + τ′;
(iv) aτ, where a > 1.

Proof. The proof is omitted. □

Let X(·) = {X(s), s ≥ 0} be a Ξ-valued stochastic process defined on the complete filtered probability space (Ω, F, P, F). We define the hitting time and the exit time τ : Ω → [0, ∞] as follows.

Definition 1.1.5 Let X(t) ∉ Λ for some t ∈ [0, T]. The hitting time after time t of the Borel set Λ ∈ B(Ξ) for X(·) is defined by

τ = inf{s > t | X(s) ∈ Λ} if {s > t | X(s) ∈ Λ} ≠ ∅,  and τ = ∞ otherwise.

Definition 1.1.6 Let X(t) ∈ Λ for some t ∈ [0, T]. The exit time after time t of the Borel set Λ ∈ B(Ξ) for X(·) is defined by

τ = inf{s > t | X(s) ∉ Λ} if {s > t | X(s) ∉ Λ} ≠ ∅,  and τ = ∞ otherwise.
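On a discretized sample path, Definitions 1.1.5 and 1.1.6 translate directly into first-passage searches over the grid. The following sketch, with an arbitrary deterministic example path, is only meant to illustrate the definitions (an exit time is just a hitting time of the complement).

```python
def hitting_time(path, dt, in_set, t=0.0):
    """First time after t at which the gridded path enters the set
    (Definition 1.1.5); returns float('inf') if it never does."""
    for k, x in enumerate(path):
        s = k * dt
        if s > t and in_set(x):
            return s
    return float('inf')

def exit_time(path, dt, in_set, t=0.0):
    """First time after t at which the path leaves the set (Definition 1.1.6):
    the hitting time of the complement."""
    return hitting_time(path, dt, lambda x: not in_set(x), t)

# Deterministic example path x(s) = s on [0, 3] with dt = 0.25.
path = [k * 0.25 for k in range(13)]
tau = hitting_time(path, 0.25, lambda x: x > 1.0)   # first time x(s) > 1
```

The step dt = 0.25 is chosen to be exactly representable in binary floating point, so the returned times are exact.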


When t = 0, the random time τ defined in Definition 1.1.5 or Definition 1.1.6 will be called the hitting time or the exit time, respectively, for simplicity. We have the following theorem.

Theorem 1.1.7 If X(·) is a continuous Ξ-valued process and if Λ is an open subset of Ξ, then the hitting time of Λ for X(·) after time t ≥ 0 is a stopping time.

Proof. Let Q denote the set of all rational numbers. By Proposition 1.1.3, it suffices to show that {τ < s} ∈ F(s) for 0 ≤ t ≤ s < ∞. However,

{τ < s} = ∪_{u ∈ Q ∩ [t,s)} {X(u) ∈ Λ},

since Λ ⊂ Ξ is open and X(·) = {X(s), s ∈ [t, T]} has right-continuous paths. Since {X(u) ∈ Λ} ∈ F(u), the result of the theorem follows. □

Theorem 1.1.8 If Λ ⊂ Ξ is a closed set, then the exit time τ = inf{s > t | X(s) ∈ Λ or X(s−) ∈ Λ} of Λ after time t is a stopping time.

Proof. By X(s−, ω) we mean lim_{u→s, u<s} X(u, ω).

1.2 BM and Itô Integrals

Recall the Gaussian density

p(s, x) = (2πs)^{−m/2} exp(−|x|²/(2s)),  s > 0, x ∈ ℝ^m.

For convenience, we define p(s, x, y) = p(s, y − x) for s ≥ 0 and x, y ∈ ℝ^m. From time to time we will assume, for convenience, that the filtration F = {F(s), s ≥ 0} for the Brownian motion (Ω, F, P, F; W(·)) is the P-augmented natural filtration generated by W(·), that is,

F(s) = F^W(s) ∨ N,  s ≥ 0,

where

F^W(s) = σ(W(s̃), 0 ≤ s̃ ≤ s)  for each s ≥ 0,

N = {A ⊂ Ω | there exists a B ∈ F such that P(B) = 0 and A ⊂ B},

and the symbol ∨ denotes the smallest σ-algebra containing F^W(s) and N. In this case, the five-tuple (Ω, F, P, F; W(·)) will be referred to as an m-dimensional Brownian stochastic basis. Note that Condition (ii) of Definition 1.2.1 indicates that all increments of W(·) are independent of the Brownian past. Therefore, the Brownian motion has independent, stationary, and Gaussian increments. Although it is not a part of the definition, it can be shown (see, e.g., Karatzas and Shreve [KS91] and Øksendal [Øks98]) that an equivalent version of {W(s), s ≥ 0} can be chosen so that almost all sample paths are continuous, that is,

P{lim_{s̃→s} W(s̃) = W(s) for all s ≥ 0} = 1.

We often drop (Ω, F, P, F) and write W(·) instead of the more formal description (Ω, F, P, F, W(·)) whenever there is no danger of ambiguity. We recall the definitions of a martingale, sub-martingale, and super-martingale as follows.

Definition 1.2.2 Given a complete filtered probability space (Ω, F, P, F) that satisfies the usual conditions, an F-adapted real-valued process ζ(·) = {ζ(s), s ∈ [0, T]} is said to be an F-martingale (respectively, super-martingale, sub-martingale) if the following hold:
(i) E[|ζ(s)|] < +∞ for each s ∈ [0, T];
(ii) E[ζ(λ) | F(s)] = ζ(s) (respectively, E[ζ(λ) | F(s)] ≤ ζ(s), E[ζ(λ) | F(s)] ≥ ζ(s)), P-a.s. for every 0 ≤ s ≤ λ ≤ T.

An F-adapted ℝⁿ-valued process ζ(·) = {ζ(s), s ≥ 0} is said to be an F-martingale if each of its n components is an F-martingale. An n-dimensional sub-martingale or super-martingale is defined similarly.

Definition 1.2.3 An n-dimensional process ζ(·) = {ζ(s), s ≥ 0} is said to be
(i) an F-local martingale if there exists a nondecreasing sequence of F-stopping times {τ_k}_{k=1}^∞ with


P{lim_{k→∞} τ_k = ∞} = 1,

such that {ζ(s ∧ τ_k), s ≥ 0} is an F-martingale for each k;
(ii) a continuous F-martingale if it is an F-martingale and is continuous P-a.s., that is, P{lim_{s→t} ζ(s) = ζ(t), ∀t ∈ [0, T]} = 1;
(iii) a square-integrable F-martingale if it is a martingale and

P{∫_0^T |ζ(s)|² ds < ∞} = 1.

The collections of n-dimensional martingales, continuous martingales, local martingales, and square-integrable martingales will be denoted by M([0, T], ℝⁿ), M_c([0, T], ℝⁿ), M_loc([0, T], ℝⁿ), and M²([0, T], ℝⁿ), respectively. The following is a characterization of a standard Brownian motion.

Proposition 1.2.4 (Ω, F, P, F, W(·)) is an m-dimensional Brownian stochastic basis if and only if M_Φ(·) = {M_Φ(s), s ≥ 0} is an F-martingale, where

M_Φ(s) = Φ(W(s)) − Φ(0) − ∫_0^s L_w Φ(W(t)) dt  (1.7)

for Φ ∈ C²₀(ℝ^m) (the space of real-valued twice continuously differentiable functions on ℝ^m with compact support), and L_w denotes the differential operator defined by

L_w Φ(x) = (1/2) Σ_{j=1}^m ∂²_j Φ(x),  (1.8)

with ∂²_j Φ(x) being the second-order partial derivative of Φ(x) = Φ(x₁, …, x_m) with respect to x_j, j = 1, 2, …, m.

Proof. See [KS91] or [Øks98]. □
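Proposition 1.2.4 can be illustrated by Monte Carlo: for Φ(x) = x² in one dimension (ignoring the compact-support requirement on a short horizon), L_wΦ = 1, so M_Φ(s) = W(s)² − s, whose expectation should stay at M_Φ(0) = 0 for every s. The estimate below is a sketch with arbitrarily chosen sample sizes and seed.

```python
import math, random

def mean_M_phi(s, n_paths=20000, seed=0):
    """Monte Carlo estimate of E[M_Phi(s)] for Phi(x) = x^2, for which
    L_w Phi = 1 and hence M_Phi(s) = W(s)^2 - s."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(s))  # W(s) ~ N(0, s)
        total += w * w - s                # one sample of M_Phi(s)
    return total / n_paths

m1 = mean_M_phi(1.0)
```

The estimate should vanish up to Monte Carlo error of order s·√(2/n_paths).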

On some occasions, such as in Section 3.2, it is desirable to show that a continuous process ξ(·) = {ξ(s), s ≥ 0} is an F-Brownian motion. In this case, we first need to show that ξ(s) − ξ(t) is independent of F(t) for all 0 ≤ t < s < ∞ by showing that

E[H(y)(ξ(s) − ξ(t))] = 0,

where y = (ξ(t₁), ξ(t₂), …, ξ(t_l)) with 0 ≤ t₁ ≤ t₂ ≤ ⋯ ≤ t_l ≤ t, for all l = 1, 2, …, and

E[H(y)(ξ(s) − ξ(t))(ξ(s) − ξ(t))^⊤] = (s − t)I^{(m)}.

This is due to the following result.


Proposition 1.2.5 Let ζ⁽ⁱ⁾ : (Ω, F) → (ℝ^{m_i}, B(ℝ^{m_i})), i = 1, 2, …, be a sequence of random variables, and let G ≡ ∨_i σ(ζ⁽ⁱ⁾) and X ∈ L²(Ω, ℝ^m; F). Then E[X | G] = 0 if and only if E[H(ζ⁽¹⁾, ζ⁽²⁾, …, ζ⁽ⁱ⁾)X] = 0 for any i and any H ∈ C_b(ℝ^N), where N = Σ_{j=1}^i m_j.

Proof. See Proposition 1.12 of Chapter 1 in Yong and Zhou [YZ99]. □

The following result is Lévy's famous theorem regarding the modulus of continuity of the Brownian path.

Theorem 1.2.6 (Modulus of Continuity)

P{ lim_{δ↓0} sup_{0 ≤ t < s ≤ 1, s−t ≤ δ} |W(s) − W(t)| / √(2δ log(1/δ)) = 1 } = 1.

The following is the Burkholder-Davis-Gundy inequality: there exists a constant K(p) > 0 such that, for any F-stopping time τ,

(1/K(p)) E[ (∫_0^τ |g(s)|² ds)^p ] ≤ E[ sup_{0≤t≤τ} |∫_0^t g(s) dW(s)|^{2p} ] ≤ K(p) E[ (∫_0^τ |g(s)|² ds)^p ].  (1.17)
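A rough Monte Carlo illustration of the two-sided bound (1.17): taking g ≡ 1 and τ = T deterministic, the middle term becomes E[sup_{0≤t≤T} |W(t)|^{2p}], which for p = 2 and T = 1 must lie between T^p/K(p) and K(p)·T^p. The sample sizes, step count, and seed below are arbitrary choices for the sketch.

```python
import math, random

def sup_moment(p=2, T=1.0, n_steps=200, n_paths=2000, seed=3):
    """Monte Carlo estimate of E[sup_{0<=t<=T} |W(t)|^(2p)], the middle
    term of (1.17) when g = 1 (so the right/left terms involve T^p)."""
    rng = random.Random(seed)
    dt = T / n_steps
    acc = 0.0
    for _ in range(n_paths):
        w = m = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
            m = max(m, abs(w))                   # running supremum of |W|
        acc += m ** (2 * p)
    return acc / n_paths

est = sup_moment()   # for p = 2, T = 1: between E[W(1)^4] = 3 and Doob's (4/3)^4 * 3
```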


Corollary 1.2.12 (Doob's Maximal Inequality) Let g(·) ∈ L^{2,loc}_F(0, ∞; ℝ^{n×m}). For each T > 0, there exists a constant K that depends on the dimension m such that

E[ sup_{s∈[0,T]} |∫_0^s g(t) dW(t)|² ] ≤ K ∫_0^T E[|g(s)|²] ds.  (1.18)
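A numerical sanity check of (1.18) with the illustrative choice g(t) = t, for which ∫_0^T E[|g(s)|²] ds = T³/3; by Doob's L² inequality one may take K = 4 for this scalar case. The integrand, sample sizes, and seed are assumptions for the sketch, not taken from the text.

```python
import math, random

def doob_check(T=1.0, n_steps=200, n_paths=2000, seed=7):
    """Estimate E[ sup_{s<=T} | int_0^s t dW(t) |^2 ] by Monte Carlo and
    return it together with the Doob bound 4 * T^3 / 3 (K = 4 in (1.18))."""
    rng = random.Random(seed)
    dt = T / n_steps
    acc = 0.0
    for _ in range(n_paths):
        integ = m = 0.0
        for k in range(n_steps):
            integ += (k * dt) * rng.gauss(0.0, math.sqrt(dt))  # Ito sum for int t dW
            m = max(m, integ * integ)                          # running sup of |M|^2
        acc += m
    return acc / n_paths, 4 * T ** 3 / 3

est, bound = doob_check()
```

The estimate should sit comfortably below the bound, since Doob's constant is rarely tight.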

The following two theorems state the martingale property of the indefinite Itô integral process and its converse.

Theorem 1.2.13 For each g(·) ∈ L^{2,loc}_F(0, ∞; ℝ^{n×m}), the n-dimensional indefinite Itô integral process I(g)(·) defined by

I(g)(·) = {I(g)(s) = ∫_0^s g(t) dW(t), s ≥ 0}

is a continuous, square-integrable F-martingale; that is,
(i) I(g)(s) is F(s)-measurable for each s ≥ 0;
(ii) E[|I(g)(s)|] < ∞ for each s ≥ 0;
(iii) E[I(g)(s) | F(t)] = I(g)(t) for all 0 < t < s.
Furthermore, I(g)(·) has a continuous version.

The following theorem, a converse of the previous one, is often referred to as the martingale representation theorem.

Theorem 1.2.14 (Martingale Representation Theorem) Let (Ω, F, P, F, W(·)) be an m-dimensional standard Brownian motion. Suppose M(·) = {M(s), s ≥ 0} is a square-integrable n-dimensional martingale with respect to (Ω, F, P, F). Then there exists a unique g(·) ∈ L^{2,loc}_F(0, ∞; ℝ^{n×m}) such that

M(s) = E[M(0)] + ∫_0^s g(t) dW(t),  P-a.s., ∀s ≥ 0.

1.2.3 Itô's Formula

Consider the n-dimensional Itô process x(·) = {x(s), s ≥ 0} defined by

x(s) = x(0) + ∫_0^s f(t) dt + ∫_0^s g(t) dW(t),  s ≥ 0,  (1.19)

where f(·) ∈ L^{2,loc}_F(0, ∞; ℝⁿ) and g(·) ∈ L^{2,loc}_F(0, ∞; ℝ^{n×m}). Let C^{1,2}([0, T] × ℝⁿ; ℝ^l) be the space of functions Φ : [0, T] × ℝⁿ → ℝ^l (Φ = (Φ₁, Φ₂, …, Φ_l)) that are continuously differentiable with respect to the first variable t ∈ [0, T] and twice continuously differentiable with respect to the second variable x = (x₁, x₂, …, x_n) ∈ ℝⁿ. The Itô formula is basically a change-of-variable formula for the process x(·) = {x(s), s ≥ 0} and can be stated as follows.


Theorem 1.2.15 Let x(·) = {x(s), s ≥ 0} be the Itô process defined by (1.19). If Φ ∈ C^{1,2}([0, T] × ℝⁿ; ℝ^l), then

dΦ_k(t, x(t)) = ∂_t Φ_k(t, x(t)) dt + Σ_{i=1}^n ∂_i Φ_k(t, x(t)) f_i(t) dt
  + Σ_{i=1}^n Σ_{j=1}^m ∂_i Φ_k(t, x(t)) g_ij(t) dW_j(t)
  + (1/2) Σ_{i,j=1}^n ∂²_{ij} Φ_k(t, x(t)) a_ij(t) dt,  (1.20)

where a(·) = [a_ij(·)] = g(·)g^⊤(·), ∂_t Φ_k(t, x) and ∂_i Φ_k(t, x) are the partial derivatives with respect to t and x_i, respectively, and ∂²_{ij} Φ_k(t, x) is the second partial derivative first with respect to x_j and then with respect to x_i. Extensions of Theorem 1.2.15 to more general (such as convex) functions Φ : [0, T] × ℝⁿ → ℝ^l are possible but will not be dealt with here.

1.2.4 Girsanov Transformation

Consider the m-dimensional standard Brownian motion (Ω, F, P, F; W(·)). Let ζ(·) = (ζ₁(·), …, ζ_m(·)) ∈ L^{2,loc}_F(0, ∞; ℝ^m) be such that

P{ ∫_0^T |ζ_i(s)|² ds < ∞ } = 1,  1 ≤ i ≤ m, 0 ≤ T < ∞.  (1.21)

Then the one-dimensional Itô integral process

I(ζ)(·) = {I(ζ)(s) ≡ ∫_0^s ζ(t) · dW(t), s ≥ 0}

is well defined and is a member of M^{2,loc}. We set

ξ(s) = exp( ∫_0^s ζ(t) · dW(t) − (1/2) ∫_0^s |ζ(t)|² dt ),  s ≥ 0.  (1.22)

From the classical Itô formula (see (1.20)), we have

ξ(s) = 1 + ∫_0^s ξ(t) ζ(t) · dW(t),  s ≥ 0,  (1.23)

which shows that ξ(·) = {ξ(s), s ≥ 0} is a one-dimensional continuous local martingale (a member of M^{c,loc}) with ξ(0) = 1. Under certain conditions on ζ(·), to be discussed later, ξ(·) will in fact be a martingale, and so E[ξ(s)] = 1, 0 ≤ s < ∞. In this case, we can define, for each 0 ≤ T < ∞, a probability measure P̃(T) on the measurable space (Ω, F(T)) by

P̃(T)(A) ≡ E[χ_A ξ(T)],  A ∈ F(T).  (1.24)


The martingale property shows that the family of probability measures {P̃(T), 0 ≤ T < ∞} satisfies the following consistency condition:

P̃(T)(A) = P̃(s)(A),  A ∈ F(s), 0 ≤ s ≤ T.  (1.25)

We have the following Girsanov theorem, whose proof can be found in Theorem 5.1 of [KS91].

Theorem 1.2.16 (Girsanov-Cameron-Martin) Assume that ξ(·) defined by (1.23) is a martingale. For each fixed T > 0, define an m-dimensional process W̃(·) = {W̃(s) = (W̃₁(s), W̃₂(s), …, W̃_m(s)), 0 ≤ s ≤ T} by

W̃(s) ≡ W(s) − ∫_0^s ζ(t) dt,  0 ≤ s ≤ T.  (1.26)

Then the process (Ω, F(T), P̃(T), {F(s), 0 ≤ s ≤ T}, W̃(·)) is an m-dimensional standard Brownian motion.
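Theorem 1.2.16 can be checked in the simplest case of a constant ζ in one dimension, where ξ(T) = exp(ζW(T) − ζ²T/2) is an explicit lognormal weight: that W̃(T) = W(T) − ζT is centered under P̃(T) is equivalent to the two identities E_P[ξ(T)] = 1 and E_P[ξ(T)(W(T) − ζT)] = 0. The Monte Carlo sketch below (arbitrary ζ, T, sample size, and seed) estimates both.

```python
import math, random

def girsanov_check(zeta=0.5, T=1.0, n_paths=50000, seed=11):
    """For constant zeta, xi(T) = exp(zeta*W(T) - zeta^2*T/2).  Estimate
    E_P[xi(T)] (should be 1) and E_P[xi(T)*(W(T) - zeta*T)] (should be 0)."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(T))                 # W(T) ~ N(0, T)
        xi = math.exp(zeta * w - 0.5 * zeta * zeta * T)  # Girsanov density
        s1 += xi
        s2 += xi * (w - zeta * T)                        # weighted W~(T)
    return s1 / n_paths, s2 / n_paths

m_xi, m_tilt = girsanov_check()
```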

1.3 SHDE with Bounded Memory

Let (Ω, F, P, F, W(·)) be an m-dimensional standard Brownian motion. In this section, we study the strong and weak solutions x(·) = {x(s), s ∈ [t − r, T]} of the following n-dimensional stochastic hereditary differential equation (SHDE) with bounded memory 0 < r < ∞:

dx(s) = f(s, x_s) ds + g(s, x_s) dW(s),  s ∈ [t, T],  (1.27)

with the initial datum (t, x_t) = (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)), where f : [0, T] × L²(Ω, C) → ℝⁿ and g : [0, T] × L²(Ω, C) → ℝ^{n×m} are continuous functions that are to be specified later.

Remark 1.3.1 Although the relevant concepts can easily be extended or generalized to systems driven by semi-martingales such as Lévy processes, in this monograph we restrict ourselves to differential systems driven by an m-dimensional standard Brownian motion W(·), for the sake of clarity in illustrating the theory and applications of optimal control of these systems.

1.3.1 Memory Maps

We have the following lemma regarding the memory map for the SHDE (1.27) with bounded memory 0 ≤ r < ∞. We will work extensively with the Banach space C ≡ C([−r, 0]; ℝⁿ) in this case. Recall that C is a real separable Banach space equipped with the uniform topology, that is,

‖φ‖ = sup_{θ∈[−r,0]} |φ(θ)|,  φ ∈ C.


Lemma 1.3.2 For each 0 < T < ∞, the memory map m : [0, T] × C([−r, T]; ℝ^n) → C defined by

    m(t, φ) = φ_t,   (t, φ) ∈ [0, T] × C([−r, T]; ℝ^n),

is jointly continuous, where φ_t : [−r, 0] → ℝ^n is defined by φ_t(θ) = φ(t + θ), θ ∈ [−r, 0].

Proof. Let (t, φ), (s, ϕ) ∈ [0, T] × C([−r, T]; ℝ^n) and let ε > 0 be given. We will find δ > 0 such that ‖m(t, φ) − m(s, ϕ)‖ = ‖φ_t − ϕ_s‖ < ε whenever |t − s| < δ and sup_{u∈[−r,T]} |φ(u) − ϕ(u)| < δ. Since φ is continuous on the compact interval [−r, T], it is uniformly continuous there; hence there exists δ₁ > 0 such that |φ(t + θ) − φ(s + θ)| < ε/2 for all θ ∈ [−r, 0] whenever |t − s| = |(t + θ) − (s + θ)| < δ₁. Therefore, for all θ ∈ [−r, 0],

    |φ(t + θ) − ϕ(s + θ)| ≤ |φ(t + θ) − φ(s + θ)| + |φ(s + θ) − ϕ(s + θ)| < ε/2 + ε/2 = ε,

whenever |t − s| < δ ≡ δ₁ ∧ (ε/2) and sup_{u∈[−r,T]} |φ(u) − ϕ(u)| < δ. This completes the proof. □
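For a computational picture of the memory map, the sketch below (discretization choices are my own, not from the text) represents a path φ ∈ C([−r, T]; ℝ) on a grid, extracts segments φ_t by interpolation, and checks the sup-norm continuity in t numerically.

```python
import numpy as np

def segment(path_t, path_x, t, r, n_theta=201):
    """Return the segment phi_t(theta) = phi(t + theta), theta in [-r, 0],
    of a continuous path sampled at (path_t, path_x), via linear interpolation."""
    theta = np.linspace(-r, 0.0, n_theta)
    return np.interp(t + theta, path_t, path_x)

r, T = 1.0, 2.0
grid = np.linspace(-r, T, 2001)
phi = np.sin(grid)                       # a sample path phi in C([-r, T]; R)

seg1 = segment(grid, phi, 1.0, r)        # phi_1
seg2 = segment(grid, phi, 1.001, r)      # segment at a nearby time
# sup-norm distance between nearby segments is small (joint continuity of m)
print(float(np.max(np.abs(seg1 - seg2))) < 1e-2)   # True
```

The segment map is the discrete analogue of m(t, φ) = φ_t; the smallness of the sup-norm gap between seg1 and seg2 mirrors the ε–δ argument in the proof above.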

We have the following corollary.

Corollary 1.3.3 The stochastic memory map m* : [0, T] × L²(Ω, C([−r, T]; ℝ^n)) → L²(Ω, C) defined by (t, x(·)) ↦ x_t is a continuous map.

Proof. The stochastic memory map m* is related to the memory map m by

    m*(t, x(·))(ω) = m(t, x(·)(ω)),   ∀t ∈ [0, T], x(·) ∈ L²(Ω, C([−r, T]; ℝ^n)), ω ∈ Ω.

Now, m is continuous, and m(t, ·) is continuous and linear for each t ∈ [0, T]. Therefore, the above relation between m* and m yields that m*(t, ·) is continuous and linear for each t ∈ [0, T] and that m* is continuous. Indeed,

    ‖m*(t, x(·))‖_{L²(Ω,C)} = ‖x_t‖_{L²(Ω,C)} ≤ ‖x(·)‖_{L²(Ω,C([−r,T];ℝ^n))},   ∀x(·) ∈ L²(Ω, C([−r, T]; ℝ^n)), t ∈ [0, T]. □

1 Stochastic Hereditary Systems

Lemma 1.3.4 Suppose x(·) = {x(s), s ∈ [t−r, T]} is an n-dimensional process defined on (Ω, F, P, F) with almost all sample paths continuous. Assume that the restricted process x(·)|_{[t,T]} = {x(s), s ∈ [t, T]} is F-adapted and that x(θ) is F(t)-measurable for all θ ∈ [t−r, t]. Then the C-valued process {x_s, s ∈ [t, T]} defined by x_s(θ) = x(s + θ), θ ∈ [−r, 0], is F-adapted, with almost all sample paths continuous; that is,

    P( lim_{ε→0} ‖x_{s+ε} − x_s‖ = 0, ∀s ∈ [t, T] ) = 1.

Proof. Without loss of generality, we can and will assume t = 0 for simplicity. Fix s ∈ [0, T]. Since the restricted process x(·)|_{[−r,s]} = {x(u), u ∈ [−r, s]} has continuous sample paths, it induces an F-measurable map x(·)|_{[−r,s]} : Ω → C([−r, s]; ℝ^n). This map is in fact F(s)-measurable; to see this, we observe that B(C([−r, s]; ℝ^n)) is generated by cylinder sets of the form

    ∩_{i=1}^k π^{−1}(t_i)(B_i) = {ϕ ∈ C([−r, s]; ℝ^n) | ϕ(t_i) ∈ B_i, i = 1, 2, ..., k},

where k is a positive integer, B_i ∈ B(ℝ^n), t_i ∈ [−r, s], and π(t_i) : C([−r, s]; ℝ^n) → ℝ^n is the point-evaluation map defined by π(t_i)(ϕ) = ϕ(t_i) for each i = 1, 2, ..., k. Therefore, it is sufficient to check the F(s)-measurability of x(·)|_{[−r,s]} on such cylinder sets. With the above notation,

    (x(·)|_{[−r,s]})^{−1}[ ∩_{i=1}^k π^{−1}(t_i)(B_i) ] = ∩_{i=1}^k {ω ∈ Ω | x(t_i, ω) ∈ B_i}.

By hypothesis, if t_i ∈ [−r, 0], then {ω ∈ Ω | x(t_i, ω) ∈ B_i} ∈ F(0); if t_i ∈ [0, s], then {ω ∈ Ω | x(t_i, ω) ∈ B_i} ∈ F(t_i). Hence {ω ∈ Ω | x(t_i, ω) ∈ B_i} ∈ F(s) for 1 ≤ i ≤ k, and so

    (x(·)|_{[−r,s]})^{−1}[ ∩_{i=1}^k π^{−1}(t_i)(B_i) ] ∈ F(s).

Now, x_s = m(s, ·) ∘ (x(·)|_{[−r,s]}) and the deterministic memory map m(s, ·) : C([−r, s]; ℝ^n) → C is continuous, so the C-valued random variable x_s must be F(s)-measurable. Therefore, the C-valued process {x_s, s ∈ [0, T]} is F-adapted. □

The following corollary is a simple consequence of the above lemma.


Corollary 1.3.5 Let (Ω, F, P, F, W(·)) be an m-dimensional standard Brownian motion with W(θ) ≡ 0 for θ ∈ [−r, 0] (so that W_0 ≡ 0 ∈ C). Then, for each s ≥ 0,

    σ(W(u), −r ≤ u ≤ s) = σ(W_u, 0 ≤ u ≤ s),

where W_u : [−r, 0] → ℝ^m is again defined by W_u(θ) = W(u + θ) for all u ≥ 0 and all θ ∈ [−r, 0].

The above corollary implies that the historical information of the m-dimensional standard Brownian motion {W(s), s ≥ −r} coincides with that of the C-valued process {W_s, s ≥ 0}. The same conclusion holds for any finite-dimensional continuous process x(·) = {x(s), s ∈ [t−r, T]} defined on (Ω, F, P, F) for which x_t is F(t)-measurable and the restriction x(·)|_{[t,T]} is F-adapted.
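Numerically, one can see the content of Corollary 1.3.5 — the segment process {W_s} carries the same information as the path {W(s)} — by noting that each segment ends at the current value, W_s(0) = W(s), so the path is recoverable from the segments. The discretization below is an illustrative sketch of mine, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
r, T, dt = 0.5, 2.0, 0.001
tgrid = np.arange(-r, T + dt, dt)
# Brownian path with W(theta) = 0 for theta <= 0, so that W_0 ≡ 0 in C
n_neg = int(np.sum(tgrid <= 0))
n_pos = len(tgrid) - n_neg
W = np.concatenate([np.zeros(n_neg),
                    np.cumsum(rng.normal(0.0, np.sqrt(dt), n_pos))])

def W_segment(s):
    """C-valued segment W_s(theta) = W(s + theta), theta in [-r, 0]."""
    theta = np.arange(-r, dt / 2, dt)
    return np.interp(s + theta, tgrid, W)

# Each segment ends at the current value: W_s(0) = W(s)
s = 1.25
print(abs(W_segment(s)[-1] - np.interp(s, tgrid, W)) < 1e-9)   # True
```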

1.3.2 The Assumptions

In (1.27), f : [0, T] × L²(Ω; C) → L²(Ω; ℝ^n) and g : [0, T] × L²(Ω; C) → L²(Ω; ℝ^{n×m}) are continuous functions that satisfy the following Lipschitz continuity and linear growth conditions.

Assumption 1.3.6 (Lipschitz Continuity) There exists a constant K_lip > 0 such that

    E[ |f(s, ϕ) − f(t, φ)| + |g(s, ϕ) − g(t, φ)| ] ≤ K_lip ( |s − t| + ‖ϕ − φ‖_{L²(Ω;C)} ),   ∀(s, ϕ), (t, φ) ∈ [0, T] × L²(Ω; C).

Assumption 1.3.7 (Linear Growth) There exists a constant K_grow > 0 such that

    E[ |f(t, φ)| + |g(t, φ)| ] ≤ K_grow E[ 1 + ‖φ‖ ],   ∀(t, φ) ∈ [0, T] × L²(Ω, C).

Remark 1.3.8 Equation (1.27) is general enough to include equations that contain discrete delays, such as

    dx(s) = f̂(s, x(s), x(s − r₁), x(s − r₂), ..., x(s − r_k)) ds + ĝ(s, x(s), x(s − r₁), x(s − r₂), ..., x(s − r_k)) dW(s),   s ∈ [t, T],

and continuous delays, such as

    dx(s) = f̂(s, x(s), y₁(s), y₂(s), ..., y_k(s)) ds + ĝ(s, x(s), y₁(s), y₂(s), ..., y_k(s)) dW(s),   s ∈ [t, T],   (1.28)

where f̂ and ĝ are ℝ^n- and ℝ^{n×m}-valued functions, respectively, defined on the (k+1)-fold product [0, T] × ℝ^n × ··· × ℝ^n, 0 < r₁ < r₂ < ··· < r_k = r, y_i(s) = ∫_{−r}^0 h_i(θ) x(s + θ) dθ, and h_i : [−r, 0] → ℝ^{n×n}, i = 1, 2, ..., k, are some continuous functions.
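As a computational companion to the discrete-delay form above, here is a minimal Euler–Maruyama sketch. The scheme, step sizes, and the linear coefficients f̂ and ĝ are illustrative assumptions of mine, not prescribed by the text.

```python
import numpy as np

def euler_maruyama_sdde(fhat, ghat, psi, r, T, dt, rng):
    """Euler-Maruyama scheme for
         dx(s) = fhat(s, x(s), x(s-r)) ds + ghat(s, x(s), x(s-r)) dW(s),
    with initial segment psi on [-r, 0] (a function of theta)."""
    lag = int(round(r / dt))
    n = int(round(T / dt))
    x = np.empty(lag + n + 1)
    # fill the initial segment x(theta) = psi(theta), theta in [-r, 0]
    x[: lag + 1] = psi(np.linspace(-r, 0.0, lag + 1))
    for k in range(n):
        s = k * dt
        xk, xk_lag = x[lag + k], x[k]          # x(s) and x(s - r)
        dW = rng.normal(0.0, np.sqrt(dt))
        x[lag + k + 1] = xk + fhat(s, xk, xk_lag) * dt + ghat(s, xk, xk_lag) * dW
    return x

# hypothetical linear-delay example: dx = -0.5 x(s - r) ds + 0.1 dW
rng = np.random.default_rng(1)
path = euler_maruyama_sdde(lambda s, x, xr: -0.5 * xr,
                           lambda s, x, xr: 0.1,
                           psi=lambda th: 1.0 + 0.0 * th,
                           r=1.0, T=5.0, dt=0.01, rng=rng)
print(path.shape)   # (601,)
```

The continuous-delay form (1.28) fits the same loop after replacing the lagged value by a quadrature of h_i(θ) x(s + θ) over the stored history window.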


1.3.3 Strong Solution

The definition of a strong solution process of (1.27) and of its pathwise uniqueness is given below.

Definition 1.3.9 Let (Ω, F, P, F, W(·)) be an m-dimensional standard Brownian motion. A process {x(s; t, ψ_t), s ∈ [t−r, T]} defined on (Ω, F, P, F) is said to be a (strong) solution of (1.27) on the interval [t−r, T] and through the initial datum (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)) if it satisfies the following conditions:
1. x_t(·; t, ψ_t) = ψ_t.
2. x(s; t, ψ_t) is F(s)-measurable for each s ∈ [t, T].
3. For 1 ≤ i ≤ n and 1 ≤ j ≤ m we have

    P( ∫_t^T [ |f_i(s, x_s)| + g_{ij}²(s, x_s) ] ds < ∞ ) = 1.

4. The process {x(s; t, ψ_t), s ∈ [t, T]} is continuous and satisfies the following stochastic integral equation P-a.s.:

    x(s) = ψ_t(0) + ∫_t^s f(λ, x_λ) dλ + ∫_t^s g(λ, x_λ) dW(λ),   s ∈ [t, T].   (1.29)

In addition, the solution process {x(s; t, ψ_t), s ∈ [t−r, T]} is said to be (strongly) unique if, whenever {x̃(s; t, ψ_t), s ∈ [t−r, T]} is also a solution of (1.27) on [t−r, T] and through the same initial datum (t, ψ_t), then

    P{ x(s; t, ψ_t) = x̃(s; t, ψ_t), ∀s ∈ [t, T] } = 1.

The following two lemmas provide intermediate steps for establishing the existence and uniqueness of the strong solution of (1.27).

Lemma 1.3.10 Assume that f : [0, T] × L²(Ω; C) → L²(Ω; ℝ^n) and g : [0, T] × L²(Ω; C) → L²(Ω; ℝ^{n×m}) are continuous functions that satisfy the Lipschitz continuity Assumption 1.3.6. Let x(·) = {x(s), s ∈ [t−r, T]} and y(·) = {y(s), s ∈ [t−r, T]} be two processes in L²(Ω, C([t−r, T], ℝ^n)) such that x_t, y_t ∈ L²(Ω, C; F(t)) and their restrictions to the interval [t, T], x(·)|_{[t,T]} and y(·)|_{[t,T]}, are F-adapted. Denote

    R(s, x(·)) = ∫_t^s f(λ, x_λ) dλ + ∫_t^s g(λ, x_λ) dW(λ) ≡ R̄(s, x(·)) + Q(s, x(·)),   (1.30)

and, similarly,

    R(s, y(·)) = ∫_t^s f(λ, y_λ) dλ + ∫_t^s g(λ, y_λ) dW(λ) ≡ R̄(s, y(·)) + Q(s, y(·)),

where R̄ denotes the drift integral and Q the stochastic integral.


Let δ(·) = x(·) − y(·). Then the following inequality holds for each T > 0:

    E[ sup_{s∈[t,T]} |R(s, x(·)) − R(s, y(·))|² ] ≤ A E[ ‖δ_t‖² ] + B E[ ∫_t^T ‖δ_s‖² ds ],   (1.31)

where A and B are positive constants that depend only on K_lip and T.

Proof. By Lemma 1.3.2 it is clear that x_s, y_s ∈ C for each s ∈ [t, T]. The C-valued processes {x_s, s ∈ [t, T]} and {y_s, s ∈ [t, T]} are continuous and progressively measurable with respect to B([t, T]) × F. Using the Burkholder–Davis–Gundy inequality (Theorem 1.2.11) and the elementary inequalities

    |a + b|^r ≤ k_r (|a|^r + |b|^r),   r ≥ 0, k_r = 2^{r−1} ∨ 1;
    |a + b + c|^r ≤ k̄_r (|a|^r + |b|^r + |c|^r),   r ≥ 0, k̄_r = 3^{r−1} ∨ 1,

we have (with R̄ and Q denoting the drift and stochastic-integral parts of R)

    E[ sup_{s∈[t,T]} |R(s, x(·)) − R(s, y(·))|² ]
      ≤ k₂ E[ sup_{s∈[t,T]} |R̄(s, x(·)) − R̄(s, y(·))|² + sup_{s∈[t,T]} |Q(s, x(·)) − Q(s, y(·))|² ]
      ≤ k₂ E[ (T − t) ∫_t^T |f(s, x_s) − f(s, y_s)|² ds ] + k₂ K E[ ∫_t^T |g(s, x_s) − g(s, y_s)|² ds ]
      ≤ k₂ K²_lip (T − t) E[ ∫_t^T ‖x_s − y_s‖² ds ] + k₂ K K²_lip E[ ∫_t^T ‖x_s − y_s‖² ds ]
      = k₂ K²_lip ((T − t) + K) E[ ∫_t^T ‖δ_s‖² ds ],

where K is the constant of the Burkholder–Davis–Gundy inequality. This proves the lemma. □

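The stochastic-integral step in the proof rests on a maximal inequality. As a numerical sanity check (my own illustration; the constant K above comes from the Burkholder–Davis–Gundy inequality, of which Doob's L² maximal inequality is the p = 2 special case with constant 4), the sketch below Monte Carlo-estimates both sides of E[sup_{t≤T} |Q(t)|²] ≤ 4 E[|Q(T)|²] for a simple deterministic integrand.

```python
import numpy as np

# Monte Carlo check of Doob's L^2 maximal inequality
#   E[ sup_{t<=T} |Q(t)|^2 ] <= 4 E[ |Q(T)|^2 ]
# for the martingale Q(t) = int_0^t g(s) dW(s) with g(s) = cos(s).
rng = np.random.default_rng(4)
T, n, n_paths = 1.0, 500, 20000
dt = T / n
s = np.linspace(0.0, T, n, endpoint=False)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
Q = np.cumsum(np.cos(s) * dW, axis=1)          # discretized stochastic integral

lhs = float(np.mean(np.max(Q**2, axis=1)))     # E[ sup |Q(t)|^2 ]
rhs = float(np.mean(Q[:, -1]**2))              # E[|Q(T)|^2] ~ int_0^T cos^2 s ds
print(lhs <= 4.0 * rhs)                        # True
```

By the Itô isometry, rhs should be close to ∫₀¹ cos²(s) ds = 1/2 + sin(2)/4 ≈ 0.727, and the estimated ratio lhs/rhs stays well below the constant 4.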


Lemma 1.3.11 Assume that f : [0, T] × L²(Ω; C) → L²(Ω; ℝ^n) and g : [0, T] × L²(Ω; C) → L²(Ω; ℝ^{n×m}) are continuous functions that satisfy the linear growth Assumption 1.3.7. Let x(·) = {x(s), s ∈ [t−r, T]} be a process in L²(Ω, C([t−r, T], ℝ^n)) such that x_t ∈ L²(Ω, C; F(t)) and its restriction to the interval [t, T], x(·)|_{[t,T]}, is F-adapted. Denote

    R(s, x(·)) = ∫_t^s f(λ, x_λ) dλ + ∫_t^s g(λ, x_λ) dW(λ) ≡ R̄(s, x(·)) + Q(s, x(·)).   (1.32)

Then the following estimate holds for each T > 0:

    E[ sup_{s∈[t,T]} |R(s, x(·))|² ] ≤ A + B E[ ∫_t^T ‖x_s‖² ds ],   (1.33)

where A and B are positive constants that depend only on K_grow and T.

Proof. Consider

    E[ sup_{s∈[t,T]} |R(s, x(·))|² ]
      ≤ k₂ E[ sup_{s∈[t,T]} |R̄(s, x(·))|² + sup_{s∈[t,T]} |Q(s, x(·))|² ]
      ≤ k₂ E[ (T − t) ∫_t^T |f(s, x_s)|² ds ] + k₂ K E[ ∫_t^T |g(s, x_s)|² ds ]
      ≤ k₂ K²_grow (T − t) E[ ∫_t^T (1 + ‖x_s‖)² ds ] + k₂ K²_grow K E[ ∫_t^T (1 + ‖x_s‖)² ds ]
      = k₂ K²_grow ((T − t) + K) E[ ∫_t^T (1 + ‖x_s‖)² ds ].

The lemma follows with appropriate constants A and B. □

We have the following existence and uniqueness result for the strong solution of (1.27).

Theorem 1.3.12 Suppose Assumptions 1.3.6 and 1.3.7 hold. Then the SHDE (1.27) has a unique strong solution {x(s; t, ψ_t), s ∈ [t−r, T]}.

Sketch of Proof. The existence and uniqueness of the strong solution process {x(s; t, ψ_t), s ∈ [t−r, T]} of (1.27) can be obtained by standard Picard iterations together with applications of Lemma 1.3.10 and Lemma 1.3.11. We


only outline the important steps below and refer the reader to Chapter 2 of [Moh84] for details.

Step 1. We assume that the initial process (segment) ψ_t ∈ L²(Ω, C) is F(t)-measurable. Note that this assumption is equivalent to saying that ψ_t(θ) is F(t)-measurable for all θ ∈ [−r, 0], because ψ_t has almost all sample paths continuous.

Step 2. We define inductively a sequence of continuous processes x^{(k)}(·), k = 1, 2, ..., where

    x^{(1)}(s, ω) = ψ_t(0) if s ∈ [t, T],   x^{(1)}(s, ω) = ψ_t(s − t) if s ∈ [t−r, t],

and, for k = 1, 2, ...,

    x^{(k+1)}(s, ω) = ψ_t(0) + ∫_t^s f(λ, x_λ^{(k)}) dλ + ∫_t^s g(λ, x_λ^{(k)}) dW(λ) if s ∈ [t, T],   x^{(k+1)}(s, ω) = ψ_t(s − t) if s ∈ [t−r, t].

Step 3. Verify the following three properties:

(i) x_t^{(k)} is F(t)-measurable, and x^{(k)}(·) ∈ L²(Ω, C([t−r, T]; ℝ^n)) with its restriction x^{(k)}(·)|_{[t,T]} adapted to F.
(ii) For each s ∈ [t, T], x_s^{(k)} ∈ L²(Ω, C) and is F(s)-measurable.
(iii) For each k = 1, 2, ..., by applying Lemma 1.3.10, there exists a constant K > 0 such that

    ‖x^{(k+1)}(·) − x^{(k)}(·)‖_{L²(Ω,C([t−r,T],ℝ^n))} ≤ (K K_lip)^{k−1} [(T − t)^{k−1}/(k − 1)!] ‖x^{(1)} − x^{(0)}‖_{L²(Ω,C)}

and

    ‖x_s^{(k+1)} − x_s^{(k)}‖_{L²(Ω,C)} ≤ (K K_lip)^{k−1} [(s − t)^{k−1}/(k − 1)!] ‖x^{(1)} − x^{(0)}‖_{L²(Ω,C)}.

We comment here that since ψ_t ∈ L²(Ω, C) is F(t)-measurable, x^{(k)}(·)|_{[t,T]} ∈ L²(Ω, C([t, T], ℝ^n)) is trivially F-adapted for all k = 1, 2, ....

Step 4. By applying Lemma 1.3.11, we prove that the sequence {x^{(k)}(·)}_{k=1}^∞ converges to some x(·) ∈ L²_F(Ω, C([t−r, T], ℝ^n)) and that the limiting process x(·) satisfies (1.29).


Step 5. Prove the uniqueness of the limit process x(·) by showing that, for any other process x̃(·) satisfying (1.29),

    ‖x_s − x̃_s‖²_{L²(Ω,C)} ≤ K K²_lip ∫_t^s ‖x_λ − x̃_λ‖²_{L²(Ω,C)} dλ

for all s ∈ [t, T]. Applying the Gronwall inequality (Lemma 1.1.1) with h(s) = ‖x_s − x̃_s‖²_{L²(Ω,C)}, α = 0, and β = K K²_lip then yields x_s = x̃_s for all s ∈ [t, T]; therefore x(·) = x̃(·) a.s. in L²(Ω, C([t−r, T], ℝ^n)).

This proves the existence and uniqueness of the strong solution. □
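The successive approximations of Steps 2–5 can be imitated numerically along a frozen Brownian path. The sketch below uses made-up linear coefficients (my assumptions, not the book's general f and g) and records the sup-gap between consecutive Picard iterates, which decays rapidly in the spirit of the factorial bound of Step 3(iii).

```python
import numpy as np

# Picard (successive-approximation) iterates for a delay SDE, with a frozen
# Brownian path; coefficients are illustrative, not from the text.
rng = np.random.default_rng(2)
r, T, dt = 0.5, 1.0, 0.002
lag, n = int(r / dt), int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), n)
psi = np.ones(lag + 1)                      # constant initial segment

def f(x_lag): return -x_lag                 # drift uses the lagged value x(s - r)
def g(x): return 0.2 * x                    # diffusion uses the current value x(s)

def picard_step(x):
    """One Picard iterate: x_new(s) = psi(0) + int_0^s f dt + int_0^s g dW,
    with the integrals evaluated along the PREVIOUS iterate."""
    out = x.copy()
    drift = diff = 0.0
    for k in range(n):
        drift += f(x[k]) * dt               # x[k] is x(s_k - r)
        diff += g(x[lag + k]) * dW[k]       # x[lag + k] is x(s_k)
        out[lag + k + 1] = psi[-1] + drift + diff
    return out

x = np.concatenate([psi, np.full(n, psi[-1])])   # first iterate: frozen at psi(0)
gaps = []
for _ in range(6):
    x_new = picard_step(x)
    gaps.append(float(np.max(np.abs(x_new - x))))
    x = x_new
# successive sup-gaps shrink fast, mirroring (K K_lip)^{k-1} (T-t)^{k-1}/(k-1)!
print(gaps[-1] < gaps[0])
```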

The strong solution process of (1.27) with initial datum (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)) will be denoted by x(·) = {x(s; t, ψ_t), s ∈ [t−r, T]}. The corresponding C-valued process will be denoted by {x_s(·; t, ψ_t), s ∈ [t, T]} (or, simply, {x_s(t, ψ_t), s ∈ [t, T]}), where

    x_s(θ; t, ψ_t) = x(s + θ; t, ψ_t),   ∀θ ∈ [−r, 0].

For each s ∈ [t, T], we can treat x_s(t, ·) as a map from L²(Ω, C; F(t)) to L²(Ω, C; F(s)) and prove the following continuous dependence (on the initial segment process) result.

Theorem 1.3.13 Suppose Assumption 1.3.6 holds. Then the map

    x_s(t, ·) : L²(Ω, C; F(t)) → L²(Ω, C; F(s)),   ∀s ∈ [t, T],

is Lipschitz continuous. Indeed, for all s ∈ [t, T] and ψ_t^{(1)}, ψ_t^{(2)} ∈ L²(Ω, C; F(t)),

    ‖x_s(t, ψ_t^{(1)}) − x_s(t, ψ_t^{(2)})‖_{L²(Ω,C)} ≤ √2 ‖ψ_t^{(1)} − ψ_t^{(2)}‖_{L²(Ω,C)} exp(K K²_lip (s − t)),   (1.34)

where K is the constant that appeared in Step 3 of the proof of Theorem 1.3.12 and K_lip > 0 is the Lipschitz constant of Assumption 1.3.6.

Proof. The proof depends heavily on the Lipschitz condition on f and g. It is straightforward but tedious and is therefore omitted here. Interested readers are referred to Theorem (3.1) on page 41 of [Moh84] for details. □

1.3.4 Weak Solution

The following is the definition of a weak solution of (1.27) and of its corresponding uniqueness.

Definition 1.3.14 A weak solution of (1.27) is a six-tuple (Ω, F, P, F, W(·), x(·)), where the following hold:
1. (Ω, F, P, F, W(·)) is an m-dimensional standard Brownian motion.

where K is the constant that appeared in the proof (Step 3) of Theorem 1.3.12 and Klip > 0 is the Lipschitz constant listed in Assumption 1.3.6. Proof. The proof is heavily dependent on the Lipschitz condition of f and g. It is straight forward but tedious and is therefore omitted here. Interested readers are referred to Theorem (3.1) on page 41 of [Moh84] for detail. 2 1.3.4 Weak Solution The following is the definition of a weak solution for (1.27) and its corresponding uniqueness. Definition 1.3.14 A weak solution process of (1.27) is a six-tuple (Ω, F, P, F, W (·), x(·)), where the following hold: 1. (Ω, F, P, F, W (·)) is an m-dimensional standard Brownian motion.


2. x(·) = {x(s); s ∈ [t−r, T]} is a continuous process that is F(s)-measurable for each s ∈ [t, T] and F(t)-measurable for s ∈ [t−r, t].
3. For 1 ≤ i ≤ n and 1 ≤ j ≤ m, we have

    P( ∫_t^T [ |f_i(s, x_s)| + g_{ij}²(s, x_s) ] ds < ∞ ) = 1.

4. The process x(·) = {x(s); t−r ≤ s ≤ T} satisfies the following stochastic integral equation P-a.s.:

    x(s) = ψ_t(0) + ∫_t^s f(λ, x_λ) dλ + ∫_t^s g(λ, x_λ) dW(λ) for s ∈ [t, T].   (1.35)

Note that Definition 1.3.14 is weaker than Definition 1.3.9, because one has a choice of the m-dimensional standard Brownian motion (Ω, F, P, F, W(·)) on which the solution process x(·) is defined. This is in contrast to the definition of a strong solution, in which the m-dimensional standard Brownian motion (Ω, F, P, F, W(·)) is given a priori. When no confusion arises, we sometimes refer to (W(·), x(·)) as the weak solution instead of the formal expression (Ω, F, P, F, W(·), x(·)), for notational simplicity. The probability measure

    µ(t, B) ≡ P[x_t ∈ B],   B ∈ B(C),

is called the initial distribution of the solution.

Definition 1.3.15 Weak uniqueness is said to hold for (1.27) if, for any two weak solutions (Ω, F, P, F, W(·), x(·)) and (Ω̃, F̃, P̃, F̃, W̃(·), x̃(·)) with the same initial distribution,

    P[x_t ∈ B] = P̃[x̃_t ∈ B],   ∀B ∈ B(C),

the two processes x(·)|_{[t,T]} ≡ {x(s), s ∈ [t, T]} and x̃(·)|_{[t,T]} ≡ {x̃(s), s ∈ [t, T]} also have the same probability law; that is,

    P[x(·)|_{[t,T]} ∈ B] = P̃[x̃(·)|_{[t,T]} ∈ B],   ∀B ∈ B(C([t, T], ℝ^n)).

Theorem 1.3.16 If (1.27) has a strongly unique solution, then it has a weakly unique solution.

Proof. The theorem follows from the definitions of strong and weak solutions of (1.27). □

It is well known that strong uniqueness implies weak uniqueness for SDEs (see Yamada and Watanabe [YW71] and Zvonkin and Krylov [ZK81]). The same implication has been shown to hold for the SHDE (1.27) by Larssen [Lar02].

Theorem 1.3.17 Strong uniqueness implies weak uniqueness.


Proof. Assume that (Ω^{(i)}, F^{(i)}, P^{(i)}, F^{(i)}, W^{(i)}(·), x^{(i)}(·)), i = 1, 2, are two weak solutions of (1.27) that have the same initial distribution:

    P^{(1)}[x_t^{(1)} ∈ B] = P^{(2)}[x_t^{(2)} ∈ B],   B ∈ B(C).   (1.36)

Since these two solutions are defined on different probability spaces, one cannot use the assumption of strong uniqueness directly. First, we have to bring the solutions together on the same canonical space (Ω, F, P) while preserving their joint distributions. Once this is done, the result follows easily from strong uniqueness. We proceed as follows. First, set y^{(i)}(s) ≡ x^{(i)}|_{[t,T]}(s) − x^{(i)}(t) for s ∈ [t, T]. Then

    x^{(i)}(s) = x_t^{(i)}(t ∧ s) + y^{(i)}(t ∨ s) for s ∈ [t−r, T],

and we may regard the ith solution as consisting of three parts: x_t^{(i)}, W^{(i)}(·), and y^{(i)}(·). The sample paths of this triple constitute the canonical measurable space (Θ, B(Θ)) defined as follows:

    Θ = C × C([t, T]; ℝ^m) × C([t, T]; ℝ^n),   (1.37)
    B(Θ) = B(C) × B(C([t, T]; ℝ^m)) × B(C([t, T]; ℝ^n)).   (1.38)

For i = 1, 2, the process x^{(i)}(·) induces on (Θ, B(Θ)) a probability measure π^{(i)} according to

    π^{(i)}(A) = µ^{(i)}[(x_t^{(i)}(·), W^{(i)}(·), y^{(i)}(·)) ∈ A],   A ∈ B(Θ).   (1.39)

We denote the generic element of Θ by θ = (ψ, w, y). The marginal of each π^{(i)} on the ψ-coordinate is the initial distribution µ, the marginal on the w-coordinate is the Wiener measure P*, and the distribution of the (ψ, w) pair is the product measure µ × P*. This is because x_t^{(i)} is F^{(i)}(t)-measurable (see Lemma II.2.1 of Mohammed [Moh84]) and W^{(i)}(·) − W^{(i)}(t) is independent of F^{(i)}(t) (see Problem 2.5.5 of Karatzas and Shreve [KS91]). Also, under π^{(i)}, the initial value of the y-coordinate is zero, almost surely.

Next, we note that on (Θ, B(Θ), π^{(i)}) there exists a regular conditional probability (see Proposition 1.1.10 for its properties) for B(Θ) given (ψ, w) ∈ C × C([t, T]; ℝ^m). We will be interested only in conditional probabilities of sets in B(Θ) of the form C × C([t, T]; ℝ^m) × F for F ∈ B(C([t, T]; ℝ^n)). Thus, with a slight abuse of terminology, we speak of

    Q^{(i)}(ψ, w; F) : C × C([t, T]; ℝ^m) × B(C([t, T]; ℝ^n)) → [0, 1],   i = 1, 2,

as the regular conditional probability for B(C([t, T]; ℝ^n)) given (ψ, w). According to Proposition 1.1.10, this regular conditional probability has the following properties:

as the regular conditional probability for B(C([t, T ]; n )) given (ψ, ω). According to Proposition 1.1.10, this regular conditional probability has the following properties:


(RCP1) For each fixed ψ ∈ C and w ∈ C([t, T]; ℝ^m), the mapping F ↦ Q^{(i)}(ψ, w; F) is a probability measure on (C([t, T]; ℝ^n), B(C([t, T]; ℝ^n))).
(RCP2) For each fixed F ∈ B(C([t, T]; ℝ^n)), the mapping (ψ, w) ↦ Q^{(i)}(ψ, w; F) is B(C) ⊗ B(C([t, T]; ℝ^m))-measurable.
(RCP3) For every F ∈ B(C([t, T]; ℝ^n)) and G ∈ B(C) ⊗ B(C([t, T]; ℝ^m)), we have

    π^{(i)}(G × F) = ∫_G Q^{(i)}(ψ, w; F) µ(dψ) P*(dw).

Next, we construct the probability space (Ω, F, P). Set Ω ≡ Θ × C([t, T]; ℝ^n) and denote the generic element of Ω by ω = (ψ, w, y₁, y₂). Let F be the completion of B(Θ) ⊗ B(C([t, T]; ℝ^n)) by the collection N of null sets under the probability measure

    P(dω) ≡ Q^{(1)}(ψ, w; dy₁) Q^{(2)}(ψ, w; dy₂) µ(dψ) P*(dw).   (1.40)

In the following, we endow (Ω, F, P) with a filtration {F(t), t ∈ [0, T]} that satisfies the usual conditions:
(i) F(0) is complete; that is, F(0) contains all subsets of A ∈ F with P(A) = 0.
(ii) {F(s), s ≥ 0} is increasing; that is, F(s) ⊂ F(s̃) if s < s̃.
(iii) {F(s), s ≥ 0} is right-continuous; that is, F(s) = F(s+) ≡ ∩_{s̃>s} F(s̃).

We first take

    G(0) := σ(x(s), −r ≤ s ≤ 0),
    G(t) := G(0) ∨ σ(w(s), y₁(s), y₂(s), 0 ≤ s ≤ t),
    G̃(t) := G(t) ∨ N,

and then define the filtration {F(t), t ∈ [0, T]} by F(t) := ∩_{ε>0} G̃(t + ε) for t ∈ [0, T]. Now, for any A ∈ B(Θ),

    P{ω ∈ Ω | (ψ, w, y_i) ∈ A} = ∫_A Q^{(i)}(ψ, w; dy_i) µ(dψ) P*(dw)   (by (1.40))
      = π^{(i)}(A)   (by RCP3 and a monotone class argument)
      = µ^{(i)}[(x_t^{(i)}(·), W^{(i)}(·), y^{(i)}(·)) ∈ A],   i = 1, 2.   (1.41)

This means that the distribution of (x(0 ∧ ·) + y_i(0 ∨ ·), w) under P is the same as the distribution of (x_t^{(i)}(0 ∧ ·) + y^{(i)}(0 ∨ ·), W^{(i)}(·)) under µ^{(i)}. In particular, the w-coordinate process {w(t), G(t), 0 ≤ t ≤ T} is an m-dimensional Brownian motion on (Ω, F, P), and by Lemma IV.1.2 in Ikeda and Watanabe [IW81], the same is true for {w(t), F(t), 0 ≤ t ≤ T}.


This means that the distribution of (x(0 ∧ ·) + yi (0 ∨ ·), ω) under P is the same (i) as the distribution of (X0 (0∧·)+Y (i) (0∨·), W (i) (·)) under µ(i) . In particular, the ω-coordinate process {ω(t), G(t), 0 ≤ t ≤ T } is an m-dimensional Brownian motion on (Ω, F , P ), and by the following lemma (see Lemma IV.1.2 in Ikeda and Watanabe [IW81]), the same is true for {ω(t), F (t), 0 ≤ t ≤ T }. To sum up, we started from two weak solutions (Ω (i) , F (i) , P (i) , W (i) (·), x(i) (·), F(i) ),

i = 1, 2,

of (1.27) having the same initial distribution (i.e., (1.36) holds). We have constructed, on the filtered probability space (Ω, F , P ; F (t), t ∈ [0, T ]}, two weak solutions (x(0∧·)+y (i) (0∨·), ω(·)), i = 1, 2, having the same probability laws as the ones with which we started. Moreover, these weak solutions are driven by the same Brownian motion and they have the same initial path. Consequently, we may and will regard them as two different strong solutions to the SHDE dx(t) = f (t, xt ) dt + g(t, xt ) dW (t) on the filtered probability space (Ω, F , P ; F (t), t ∈ [0, T ]} and with the same initial path x(·). Then strong uniqueness (see Definition 1.3.9) implies that P [(x(0 ∧ ·) + y (1) (0 ∨ ·) = (x(0 ∧ ·) + y (2) (0 ∨ ·), −r ≤ t ≤ T ] = 1, or, equivalently, P [{ω = (ψ(·), ω(·), y (1) (·), y (2) (·)) ∈ Ω | y (1) (·) = y (2) (·)}] = 1.

(1.42)

From (1.40) and (1.41), we see for all A ∈ B(Θ), that (1)

(1)

µ(1) [(X0 (·), W0 (·), W (1) (·)) ∈ A] = P [{ω ∈ Ω | (x(·), w(·), y (1) (·)) ∈ A] = P [{ω ∈ Ω | (x(·), w(·), y (2) (·)) ∈ A] (2)

(2)

= µ(2) [(X0 (·), W0 (·), W (2) (·)) ∈ A], and this implies weak uniqueness. We thus have proved the theorem.

2

1.4 SHDE with Unbounded Memory

We first note that Lemma 1.3.2 can be extended to the memory map with infinite but fading memory (r = ∞). In this case, we will work with the ρ-weighted Hilbert space M ≡ ℝ × L²_ρ((−∞, 0]; ℝ) equipped with the inner product

    ⟨(x, φ), (y, ϕ)⟩_ρ = xy + ⟨φ, ϕ⟩_2 = xy + ∫_{−∞}^0 ρ(θ) φ(θ) ϕ(θ) dθ,   ∀(x, φ), (y, ϕ) ∈ ℝ × L²_ρ((−∞, 0]; ℝ),

and the Hilbertian norm ‖(x, φ)‖_ρ = ⟨(x, φ), (x, φ)⟩_ρ^{1/2}. Here ρ : (−∞, 0] → [0, ∞) is an influence function with the relaxation property, that is, a function satisfying the following conditions.

Assumption 1.4.1 The function ρ satisfies the following two conditions:
1. ρ is summable on (−∞, 0]; that is,

    0 < ∫_{−∞}^0 ρ(θ) dθ < ∞.

2. For every λ ≤ 0, one has

    K̄(λ) = ess sup_{θ∈(−∞,0]} ρ(θ + λ)/ρ(θ) ≤ K̄ < ∞,   K(λ) = ess sup_{θ∈(−∞,0]} ρ(θ)/ρ(θ + λ) < ∞.
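A standard concrete choice satisfying Assumption 1.4.1 is the exponential weight ρ(θ) = e^{γθ} with γ > 0 (my example, not prescribed by the text): then ∫_{−∞}^0 ρ(θ) dθ = 1/γ and ρ(θ + λ)/ρ(θ) = e^{γλ}, so K̄(λ) ≤ 1 for λ ≤ 0 and K(λ) = e^{−γλ} < ∞. The sketch below checks both conditions numerically.

```python
import numpy as np

gamma = 2.0
rho = lambda th: np.exp(gamma * th)      # influence function on (-inf, 0]

# Condition 1: summability, int_{-inf}^0 rho = 1/gamma (trapezoid rule,
# truncating the negligible tail below -40)
theta = np.linspace(-40.0, 0.0, 400001)
y = rho(theta)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)))
print(abs(integral - 1.0 / gamma) < 1e-5)        # True

# Condition 2: rho(theta + lam)/rho(theta) = exp(gamma*lam), constant in theta,
# so Kbar(lam) = exp(gamma*lam) <= 1 for lam <= 0 and K(lam) = exp(-gamma*lam).
lam = -0.7
ratios = rho(theta + lam) / rho(theta)
print(bool(np.allclose(ratios, np.exp(gamma * lam))))   # True
```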

1.4.1 Memory Maps

Lemma 1.4.2 For each 0 < T < ∞, the memory map m : [0, T] × L²_ρ((−∞, T]; ℝ) → M defined by

    m(t, ψ) = (ψ(t), ψ_t),   (t, ψ) ∈ [0, T] × L²_ρ((−∞, T]; ℝ),

is jointly continuous.

Proof. For 0 < T < ∞, we first extend the function ρ : (−∞, 0] → [0, ∞) to the larger domain (−∞, T] by setting ρ(t) = ρ(0) for all t ∈ [0, T]. Let C_ρ((−∞, T]; ℝ) be the space of continuous functions φ : (−∞, T] → ℝ such that

    lim_{θ→−∞} ρ(θ) φ(θ) = 0.

It is clear (see Rudin [Rud71]) that the space C_ρ((−∞, T]; ℝ) is dense in, and can be continuously embedded into, L²_ρ((−∞, T]; ℝ).

To show that the memory map m(t, ψ) = (ψ(t), ψ_t) is jointly continuous, let t, s ∈ [0, T] and φ^{(i)} ∈ L²_ρ((−∞, T]; ℝ) for i = 1, 2. For any ε > 0, we want to find δ > 0 such that, if |t − s| < δ and ‖φ^{(1)} − φ^{(2)}‖_{ρ,2} < δ, then

    |φ^{(1)}(t) − φ^{(2)}(s)|² + ‖φ_t^{(1)} − φ_s^{(2)}‖_{ρ,2} < ε.

Let ϕ^{(i)} ∈ C_ρ((−∞, T]; ℝ), i = 1, 2, be such that

    ‖φ^{(i)} − ϕ^{(i)}‖_{ρ,2} < ε₁,   i = 1, 2,

where ε₁ > 0 can be made arbitrarily small. Then, for t ∈ [0, T],

    ‖φ_t^{(i)} − ϕ_t^{(i)}‖_{ρ,2} = ( ∫_{−∞}^0 ρ(θ) |φ^{(i)}(t+θ) − ϕ^{(i)}(t+θ)|² dθ )^{1/2}
      = ( ∫_{−∞}^0 [ρ(θ)/ρ(t+θ)] ρ(t+θ) |φ^{(i)}(t+θ) − ϕ^{(i)}(t+θ)|² dθ )^{1/2}
      ≤ √K̄ ( ∫_{−∞}^t ρ(θ) |φ^{(i)}(θ) − ϕ^{(i)}(θ)|² dθ )^{1/2}
      ≤ √K̄ ( ∫_{−∞}^T ρ(θ) |φ^{(i)}(θ) − ϕ^{(i)}(θ)|² dθ )^{1/2}
      = √K̄ ‖φ^{(i)} − ϕ^{(i)}‖_{ρ,2} < √K̄ ε₁;

replacing ε₁ by ε₁/√K̄ in the choice of ϕ^{(i)} if necessary, we may assume ‖φ_t^{(i)} − ϕ_t^{(i)}‖_{ρ,2} < ε₁ for all t ∈ [0, T]. Taking δ < ε₁,

    ‖ϕ^{(2)} − ϕ^{(1)}‖_{ρ,2} ≤ ‖ϕ^{(2)} − φ^{(2)}‖_{ρ,2} + ‖φ^{(2)} − φ^{(1)}‖_{ρ,2} + ‖φ^{(1)} − ϕ^{(1)}‖_{ρ,2} < 3ε₁.

Since √ρ ϕ^{(i)}, i = 1, 2, are uniformly continuous on (−∞, T], we can choose δ₀ > 0 such that, if t, s ∈ (−∞, T] and |t − s| < δ₀, then |√ρ(t) ϕ^{(i)}(t) − √ρ(s) ϕ^{(i)}(s)| < ε₁. Suppose t, s ∈ [0, T] are such that |t − s| < δ₀. Then, using the summability of ρ and absorbing a constant depending on ρ into the choice of δ₀,

    ‖ϕ_t^{(i)} − ϕ_s^{(i)}‖_{ρ,2} = ( ∫_{−∞}^0 ρ(θ) |ϕ^{(i)}(t+θ) − ϕ^{(i)}(s+θ)|² dθ )^{1/2} ≤ ε₁.

Suppose |t − s| < δ < δ₀ ∧ ε₁. Then the above two inequalities imply that

    ‖ϕ_t^{(2)} − ϕ_s^{(1)}‖_{ρ,2} ≤ ‖ϕ_t^{(2)} − ϕ_t^{(1)}‖_{ρ,2} + ‖ϕ_t^{(1)} − ϕ_s^{(1)}‖_{ρ,2} < 3ε₁ + ε₁ = 4ε₁.

Therefore,

    ‖φ_t^{(2)} − φ_s^{(1)}‖_{ρ,2} ≤ ‖φ_t^{(2)} − ϕ_t^{(2)}‖_{ρ,2} + ‖ϕ_t^{(2)} − ϕ_s^{(1)}‖_{ρ,2} + ‖ϕ_s^{(1)} − φ_s^{(1)}‖_{ρ,2} < ε₁ + 4ε₁ + ε₁ = 6ε₁.


By taking ε₁ < ε/6, the result follows. The above analysis also implies that |φ^{(1)}(t) − φ^{(2)}(s)|² can be made arbitrarily small. This proves the lemma. □

The following corollary is similar to Corollary 1.3.3.

Corollary 1.4.3 The stochastic memory map m* : [0, T] × L²(Ω, L²_ρ((−∞, T]; ℝ)) → L²(Ω, M) defined by (t, φ(·)) ↦ (φ(t), φ_t) is a continuous map.

Proof. The proof is similar to that of Corollary 1.3.3 and is therefore omitted. □

Notation. In this section, we adopt the notation introduced at the beginning of the previous section but with Ξ = M ≡ ℝ × L²_ρ((−∞, 0]; ℝ). For example, L²(Ω, M) is the space of M-valued random variables (x, Υ) defined on the probability space (Ω, F, P) such that

    ‖(x, Υ)‖²_{L²(Ω,M)} ≡ E[ |x|² + ‖Υ‖²_ρ ] = E[ |x|² + ∫_{−∞}^0 |Υ(θ)|² ρ(θ) dθ ] < ∞.

Let (Ω, F, P, F, W(·)) be a one-dimensional standard Brownian motion. The main purpose of this section is to establish the existence and uniqueness of the strong solution process S(·) = {S(s), s ∈ (−∞, ∞)} of the following very special type of one-dimensional autonomous SHDE with infinite but fading memory (r = ∞):

    dS(s)/S(s) = f(S_s) ds + g(S_s) dW(s),   s ∈ [0, ∞),   (1.43)

with the initial datum (S(0), S_0) = (ψ(0), ψ) ∈ L²(Ω, M). Here the solution process S(·) = {S(s), s ∈ (−∞, ∞)} is the stock price process, and (1.43) represents the price dynamics of the underlying stock considered in Chapter 7, where we treat a hereditary portfolio optimization problem with fixed plus proportional transaction costs and capital-gains taxes. On the left-hand side of (1.43), dS(s)/S(s) represents the instantaneous return of the stock at time s > 0. On the right-hand side, f(S_s) and g(S_s) represent, respectively, the growth rate and the volatility of the stock at time s ≥ 0. Note that both f(S_s) and g(S_s) depend explicitly on the historical prices S_s over the infinite time history (−∞, s] instead of only the current price S(s). The stock dynamics described by (1.43) is therefore referred to as a stock with a hereditary price structure. The growth function f and the volatility function g are real-valued functions that are continuous on the space L²_ρ((−∞, 0]; ℝ).
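A discretized sketch can make the hereditary price structure concrete. Everything below — the exponentially weighted history functionals f and g, the truncation of the infinite history, and the Euler step — is an illustrative assumption of mine, not the model of Chapter 7.

```python
import numpy as np

# Euler-Maruyama sketch of dS/S = f(S_s) ds + g(S_s) dW with fading memory.
rng = np.random.default_rng(3)
gamma, T, dt = 1.0, 1.0, 0.001
n = int(T / dt)
hist_len = 2000                                    # truncate the infinite history
weights = np.exp(gamma * dt * np.arange(-hist_len, 0))
weights /= weights.sum()                           # discretized rho-weighting

def f(hist):   # growth rate: mean-reverts toward the weighted historical average
    return 0.05 + 0.5 * (weights @ hist - hist[-1])

def g(hist):   # volatility: constant plus a small history-dependent term
    return 0.2 + 0.1 * abs(weights @ hist - hist[-1])

S = np.empty(hist_len + n)
S[:hist_len] = 1.0                                 # constant initial segment psi
for k in range(n):
    hist = S[k : hist_len + k]                     # truncated segment S_s
    s_now = hist[-1]
    dW = rng.normal(0.0, np.sqrt(dt))
    S[hist_len + k] = s_now * (1.0 + f(hist) * dt + g(hist) * dW)

print(S.min() > 0.0)   # True: the simulated price stays positive on this run
```

The positivity observed here is the discrete counterpart of the nonnegativity of the solution established in Theorem 1.4.11 below.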


Assumption 1.4.4 (Lipschitz Continuity) There exists a constant K_lip > 0 such that

    |ϕ(0)f(ϕ) − φ(0)f(φ)| + |ϕ(0)g(ϕ) − φ(0)g(φ)| ≤ K_lip ‖(ϕ(0), ϕ) − (φ(0), φ)‖_M,   ∀(ϕ(0), ϕ), (φ(0), φ) ∈ M.

Assumption 1.4.5 (Linear Growth) There exists a constant K_G > 0 such that

    |φ(0)f(φ)| + |φ(0)g(φ)| ≤ K_G (1 + ‖(φ(0), φ)‖_M),   ∀(φ(0), φ) ∈ M.

Strong and Weak Solutions

The definitions of strong and weak solutions of the SHDE (1.43), together with their corresponding uniqueness, are given as follows.

Definition 1.4.6 Let (ψ(0), ψ) ∈ M ≡ ℝ × L²_ρ((−∞, 0]; ℝ). A one-dimensional process S(·) = {S(s), s ∈ (−∞, ∞)} is said to be a strong solution of (1.43) on the infinite interval (−∞, ∞) and through the initial datum (ψ(0), ψ) ∈ M if it satisfies the following conditions:
1. S(θ) = ψ(θ), ∀θ ∈ (−∞, 0].
2. {S(s), s ∈ [0, ∞)} is F-adapted on [0, ∞).
3.
    P( ∫_0^∞ [ |S(t)f(S_t)| + S²(t)g²(S_t) ] dt < ∞ ) = 1.
4. The process {S(s), s ∈ [0, ∞)} satisfies the following stochastic integral equation P-a.s.:

    S(s) = ψ(0) + ∫_0^s S(t)f(S_t) dt + ∫_0^s S(t)g(S_t) dW(t).   (1.44)

Definition 1.4.7 A strong solution {S(s), s ∈ (−∞, ∞)} of the SHDE (1.43) is said to be unique if, whenever {S̃(s), s ∈ (−∞, ∞)} is also a strong solution of (1.43) on the interval (−∞, ∞) and through the same initial datum (ψ(0), ψ) ∈ M, then

    P{ S(s) = S̃(s) ∀s ∈ [0, ∞) } = 1.

Definition 1.4.8 A weak solution of (1.43) is a six-tuple (Ω, F, P, F, W(·), S(·)), where:
1. (Ω, F, P, F, W(·)) is a one-dimensional standard Brownian motion;
2. S(·) = {S(s); s ∈ [0, ∞)} is a continuous process that is F(s)-measurable for each s ∈ [0, ∞);
3.
    P( ∫_0^∞ [ |S(s)f(S_s)| + S²(s)g²(S_s) ] ds < ∞ ) = 1;
4. the process S(·) = {S(s); s ∈ [0, ∞)} satisfies the following stochastic integral equation P-a.s.:

    S(s) = ψ(0) + ∫_0^s S(t)f(S_t) dt + ∫_0^s S(t)g(S_t) dW(t).   (1.45)

First, the following lemma is needed for establishing the existence and uniqueness of the strong solution process {S(s), s ∈ (−∞, ∞)} of (1.43).

Lemma 1.4.9 Assume that f, g : L²_ρ((−∞, 0]; ℝ) → ℝ satisfy the Lipschitz condition (Assumption 1.4.4). Let x(·) = {x(t), t ∈ ℝ} and y(·) = {y(t), t ∈ ℝ} be two F-adapted processes that are continuous for t ≥ 0 and such that (x(0), x_0), (y(0), y_0) ∈ M a.s. Denote

    R(s, x(·)) = ∫_0^s x(t)f(x_t) dt + ∫_0^s x(t)g(x_t) dW(t) ≡ R̄(s, x(·)) + Q(s, x(·)),   (1.46)

and, similarly,

    R(s, y(·)) = ∫_0^s y(t)f(y_t) dt + ∫_0^s y(t)g(y_t) dW(t) ≡ R̄(s, y(·)) + Q(s, y(·)),

where R̄ denotes the drift integral and Q the stochastic integral. Let δ(·) = x(·) − y(·). Then the following inequality holds for each T > 0:

    E[ sup_{s∈[0,T]} |R(s, x(·)) − R(s, y(·))|² ] ≤ A E[ ‖(δ(0), δ_0)‖²_ρ ] + B E[ ∫_0^T |δ(s)|² ds ],   (1.47)

where A and B are positive constants that depend only on K_lip, T, and ‖ρ‖₁.

Proof. By Lemma 1.4.2 it is clear that (x(s), x_s), (y(s), y_s) ∈ M for each s ∈ [0, T]. The M-valued processes {(x(s), x_s), s ≥ 0} and {(y(s), y_s), s ≥ 0} are continuous and progressively measurable with respect to G(0) ∨ F(s).


Using the Burkholder–Davis–Gundy inequality (Theorem 1.2.11), we have

    E[ sup_{0≤t≤T} |R(t, x(·)) − R(t, y(·))|² ]
      ≤ k₂ E[ sup_{0≤t≤T} |R̄(t, x(·)) − R̄(t, y(·))|² + sup_{0≤t≤T} |Q(t, x(·)) − Q(t, y(·))|² ]
      ≤ k₂ E[ T ∫_0^T |x(s)f(x_s) − y(s)f(y_s)|² ds + C ∫_0^T |x(s)g(x_s) − y(s)g(y_s)|² ds ]
      ≤ k₂ K²_lip (T + C) E[ ∫_0^T ‖(x(s), x_s) − (y(s), y_s)‖²_ρ ds ]
      = k₂ K²_lip (T + C) E[ ∫_0^T ‖(δ(s), δ_s)‖²_ρ ds ],   (1.48)

where C is the Burkholder–Davis–Gundy constant. Now,

    ‖(δ(s), δ_s)‖²_ρ = |δ(s)|² + ∫_{−∞}^0 |δ(s + θ)|² ρ(θ) dθ

and

    ∫_{−∞}^0 |δ(s + θ)|² ρ(θ) dθ = ∫_{−∞}^{−s} |δ(s + θ)|² ρ(θ) dθ + ∫_{−s}^0 |δ(s + θ)|² ρ(θ) dθ
      = ∫_{−∞}^{−s} |δ(s + θ)|² ρ(s + θ) [ρ(θ)/ρ(s + θ)] dθ + ∫_0^s |δ(v)|² ρ(v − s) dv
      ≤ K̄ ∫_{−∞}^{−s} |δ(s + θ)|² ρ(s + θ) dθ + ∫_0^s |δ(v)|² ρ(v − s) dv
      = K̄ ∫_{−∞}^0 |δ(θ)|² ρ(θ) dθ + ∫_0^s |δ(v)|² ρ(v − s) dv,

where K̄ is the constant specified in Assumption 1.4.1. Therefore,

    ‖(δ(s), δ_s)‖²_ρ ≤ |δ(s)|² + K̄ ∫_{−∞}^0 |δ(θ)|² ρ(θ) dθ + ∫_0^s |δ(v)|² ρ(v − s) dv.

Note that

    ∫_0^T ( ∫_0^s |δ(v)|² ρ(v − s) dv ) ds ≤ ( ∫_{−∞}^0 ρ(θ) dθ ) ∫_0^T |δ(s)|² ds = ‖ρ‖₁ ∫_0^T |δ(s)|² ds.

Integrating the inequality for ‖(δ(s), δ_s)‖²_ρ with respect to s from 0 to T therefore yields

    E[ sup_{0≤t≤T} |R(t, x(·)) − R(t, y(·))|² ]
      ≤ k₂ K²_lip (T + C)(1 + ‖ρ‖₁) E[ ∫_0^T |δ(s)|² ds ] + k₂ K²_lip (T + C) K̄ T E[ ‖(δ(0), δ_0)‖²_ρ ]
      ≤ A E[ ‖(δ(0), δ_0)‖²_ρ ] + B E[ ∫_0^T |δ(s)|² ds ],

where A = k₂ K²_lip T (T + C) K̄ and B = k₂ K²_lip (T + C)(1 + ‖ρ‖₁). This proves the lemma. □

We will mainly use the following consequence of the previous lemma.

Corollary 1.4.10 Under the conditions of Lemma 1.4.9, the following inequality holds for each T > 0:

    E[ sup_{0≤s≤T} |R(s, x(·)) − R(s, y(·))|² ] ≤ A E[ ‖(δ(0), δ_0)‖²_ρ ] + M(T) E[ sup_{0≤s≤T} |δ(s)|² ],   (1.49)

where A is the same constant as above, M(T) depends only on T and B, and

    M(T) = o(1) as T → 0.   (1.50)

Proof. Inserting |δ(s)| ≤ sup_{0≤u≤T} |δ(u)| for 0 ≤ s ≤ T into (1.47), one obtains (1.49) with the appropriate constants A and M(T). □


Theorem 1.4.11 Assume Assumptions 1.4.4 and 1.4.5 hold. Then for each nonnegative initial datum (ψ(0), ψ) ∈ M, there exists a unique nonnegative strong solution process {S(t), t ∈ (−∞, ∞)} of (1.43) through the initial datum (ψ(0), ψ) ∈ M.

Sketch of Proof. The existence and uniqueness of the strong solution process {S(s), s ∈ (−∞, ∞)} follow from the standard method of successive approximations (see Mizel and Trutzer [MT84]). A sketch of the proof is given below.

Existence. For each fixed T > 0, define a sequence of processes {S^{(k)}(s), s ∈ (−∞, T]}, k = 0, 1, 2, ..., as follows:

    S^{(0)}(s, ω) = ψ(0) if s ∈ [0, T],   S^{(0)}(s, ω) = ψ(s) if s ∈ (−∞, 0],

and, for k = 1, 2, ...,

    S^{(k)}(s, ω) = ψ(0) + R(s, S^{(k−1)}(·)) if s ∈ [0, T],   S^{(k)}(s, ω) = ψ(s) if s ∈ (−∞, 0],

where R(s, S^{(k−1)}(·)) is defined in (1.46) and, again, S_s^{(k)}(θ) = S^{(k)}(s + θ), θ ∈ (−∞, 0]. From this definition, the conditions on the drift and diffusion coefficients φ(0)f(φ) and φ(0)g(φ) (see Assumptions 1.4.4 and 1.4.5), and Lemma 1.4.2, it follows that the processes {S^{(k)}(s), s ∈ [0, T]}, k = 0, 1, ..., are nonanticipative, measurable, and continuous, and that (S^{(k)}(s), S_s^{(k)}) ∈ M for s ≥ 0. By induction we will prove below that for any T > 0,

    E[ sup_{s∈[0,T]} |S^{(k)}(s)|^p ] < ∞.

First, for k = 0, we have (S^{(0)}(0), S_0^{(0)}) ∈ M, and by the above construction,

    ‖(S^{(0)}(s), S_s^{(0)})‖_M ≤ K ‖(S(0), S_0)‖_M,   ∀s ∈ [0, T],

for a constant K > 0 depending on K̄. Hence

    E[ sup_{s∈[0,T]} ‖(S^{(0)}(s), S_s^{(0)})‖_M^p ] ≤ K^p E[ ‖(S(0), S_0)‖_M^p ] < ∞.

For induction purposes, suppose that

    E[ sup_{s∈[0,T]} |S^{(k−1)}(s)|^p ] < ∞.

1.4 SHDE with Unbounded Memory


Estimating as in Lemma 1.4.9 and using Hölder's inequality, we have

$$
E\Big[\sup_{t\in[0,T]}\big\|(S^{(k)}(t),S^{(k)}_t)\big\|_M^p\Big]
\le \bar k_p E\Big[|\psi(0)|^p+T^{p-1}\int_0^T\big|S^{(k-1)}(t)f(S^{(k-1)}_t)\big|^p\,dt
+c_p\Big(\int_0^T\big|S^{(k-1)}(t)g(S^{(k-1)}_t)\big|^2\,dt\Big)^{p/2}\Big]
$$
$$
\le \bar k_p E\Big[|\psi(0)|^p+T^{p-1}c_2^p\int_0^T\big(1+\|(S^{(k-1)}(t),S^{(k-1)}_t)\|_M\big)^p\,dt
+c_2^pc_p\Big(\int_0^T\big(1+\|(S^{(k-1)}(t),S^{(k-1)}_t)\|_M\big)^2\,dt\Big)^{p/2}\Big]
$$
$$
\le \bar k_p E\Big[|\psi(0)|^p+c_2^p\,(T^p+c_pT^{p/2})\Big(1+\sup_{t\in[0,T]}\|(S^{(k-1)}(t),S^{(k-1)}_t)\|_M\Big)^p\Big]<\infty.
$$

In particular, using the notation from Lemma 1.4.9 and Corollary 1.4.10,

$$
E\Big[\sup_{0\le t\le T}|S^{(k)}(t)|^p\Big]
\le \bar k_p E\Big[|\psi(0)|^p+T^{p-1}\int_0^T\big|S^{(k-1)}(t)f(S^{(k-1)}_t)\big|^p\,dt
+c_p\Big(\int_0^T\big|S^{(k-1)}(t)g(S^{(k-1)}_t)\big|^2\,dt\Big)^{p/2}\Big]<\infty,
$$

which completes the induction step.

The strong solution {S(s), s ∈ (−∞, T]} so constructed can be extended to the interval (−∞, ∞) by the standard continuation method in the existence theory of differential equations. Therefore, it only remains to show that, for each initial datum (S(0), S₀) = (ψ(0), ψ) ∈ M, we have S(s) ≥ 0 for each s ≥ 0. To see this, note that {s ≥ 0 | S(s) ≥ 0} ≠ ∅ by the sample-path continuity of the solution process and the nonnegativity of the initial datum (ψ(0), ψ) ∈ M. Now, let τ = inf{t ≥ 0 | S(t) = 0}. If τ < ∞, then dS(t) = 0 for all t ≥ τ with S(τ) = 0. This implies that S(t) = 0 for all t ≥ τ and, hence, S(t) ≥ 0 for all t ≥ 0. The same conclusion holds trivially if τ = ∞, since then the solution never reaches zero.

Uniqueness. Let {S(s), s ∈ (−∞, ∞)} and {S̃(s), s ∈ (−∞, ∞)} be two (strong) solution processes of (1.43) through the initial datum (ψ(0), ψ) ∈ M. We need to show that

$$
P\{S(s)=\tilde S(s),\ \forall s\ge 0\}=1.
$$


Let δ(s) = S(s) − S̃(s), s ≥ 0. Then δ(s) = 0 for all s ≤ 0, and from Lemma 1.4.9 we have, for all T > 0,

$$
E\Big[\sup_{s\in[0,T]}|\delta(s)|^2\Big]\le K_T\,E\Big[\int_0^T|\delta(s)|^2\,ds\Big]
$$

for some constant K_T > 0. By the Gronwall inequality of Subsection 1.1.1, this shows that for all T > 0,

$$
P\{S(s)=\tilde S(s),\ \forall s\in[0,T]\}=1.
$$

Therefore,

$$
P\{S(s)=\tilde S(s),\ \forall s\ge 0\}=\lim_{T\uparrow\infty}P\{S(s)=\tilde S(s),\ \forall s\in[0,T]\}=1. \qquad\Box
$$
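The successive-approximation construction in the existence proof can be watched numerically. The sketch below is ours, not from the text: the scalar delay equation dx(t) = −x(t − r) dt + σ dW(t) with constant initial segment, the step size, and all constants are illustrative assumptions. On a single fixed Brownian path, the discretized Picard iterates x_k(t) = x(0) + ∫₀ᵗ f(x_{k−1}(s − r)) ds + σW(t) are computed and their sup-norm gaps recorded; because the drift depends only on the delayed value, the iterates coincide exactly on [0, T] after roughly T/r + 1 steps.

```python
import numpy as np

# Successive approximations (the Picard scheme behind the existence proof)
# for the scalar delay equation  dx(t) = -x(t - r) dt + sigma dW(t)  with
# x(t) = 1 for t <= 0, run on one fixed Brownian path.  All numbers
# (r, sigma, T, dt) are illustrative choices, not taken from the text.
rng = np.random.default_rng(0)
r, sigma, T, dt = 1.0, 0.3, 2.0, 0.01
n = int(T / dt)                  # Euler steps on [0, T]
lag = int(r / dt)                # steps corresponding to the delay r
dW = rng.normal(0.0, np.sqrt(dt), n)

def picard_step(x_prev):
    """x_k(t) = x(0) + int_0^t f(x_{k-1}(s - r)) ds + sigma W(t), discretized."""
    x = np.empty(n + lag + 1)
    x[:lag + 1] = 1.0                        # initial segment on [-r, 0]
    for i in range(n):
        j = lag + i
        x[j + 1] = x[j] - x_prev[j - lag] * dt + sigma * dW[i]
    return x

x = np.full(n + lag + 1, 1.0)                # zeroth iterate: frozen at x(0)
gaps = []                                    # sup-norm gaps between iterates
for _ in range(6):
    x_new = picard_step(x)
    gaps.append(float(np.max(np.abs(x_new - x))))
    x = x_new
# With a pure-delay drift the iterates agree exactly on [0, T] after
# about T/r + 1 steps, so the later gaps collapse to zero.
```

The collapse of the gaps mirrors the contraction estimate of Lemma 1.4.9 that drives both existence and uniqueness.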


1.5 Markovian Properties

In the following, let Ξ represent either the state space Ξ = C for (1.27) or the state space Ξ = M ≡ ℝ × L²ρ((−∞, 0]; ℝ) for (1.43). Let X(·) = {X(s), s ∈ [t, T]} be the Ξ-valued solution process of (1.27) or of (1.43) with the initial state X(t) ∈ Ξ at the initial time t ∈ [0, T], where T < ∞ or T = ∞. In the case of (1.27), X(s) = xₛ for s ∈ [t, T] and X(t) = ψ ∈ C. In the case of (1.43), the initial time is t = 0, the initial datum is X(0) = (ψ(0), ψ) ∈ M, and X(s) = (S(s), Sₛ) for s ≥ 0. In either case, wherever appropriate, x(·) is the unique strong solution process of (1.27) with xₛ(θ) = x(s + θ) for θ ∈ [−r, 0], and S(·) = {S(s), s ∈ (−∞, ∞)} is the strong solution of (1.43) with Sₛ(θ) = S(s + θ) for θ ∈ (−∞, 0].

The purpose of this section is to establish the Markovian and strong Markovian properties of the Ξ-valued solution processes of (1.27) and (1.43). Let x(·) = {x(s), s ∈ [t − r, T]} be the strong solution process of (1.27) on the interval [t − r, T] through the initial datum (t, ψ) ∈ [0, T] × C. Hoping that no ambiguity will arise, we use the same notation and let S(·) = {S(s), s ∈ (−∞, ∞)} be the strong solution of (1.43) on the interval (−∞, ∞) through the initial datum (ψ(0), ψ) ∈ M; the only distinction between the two is the time interval and the initial segment involved. We state and prove the Markovian and strong Markovian properties of the corresponding C-valued segment process {xₛ, s ∈ [t, T]} (where xₛ(θ) = x(s + θ) for each θ ∈ [−r, 0]) for (1.27) and of the M-valued segment process {(S(s), Sₛ), s ≥ 0} (where Sₛ(θ) = S(s + θ) for each θ ∈ (−∞, 0]) for (1.43).


Theorem 1.5.1 Assume that the functions f : [0, T] × C → ℝⁿ and g : [0, T] × C → ℝⁿˣᵐ satisfy Assumptions 1.3.6 and 1.3.7. Then the C-valued segment process {xₛ, s ∈ [t, T]} of (1.27) describes a C-valued Markov process with probability transition function p : [0, T] × C × [0, T] × B(C) → [0, 1], where p(t, xₜ, s, B) is given by

$$
p(t,x_t,s,B)\equiv P\{x_s\in B\mid x_t\}\equiv P^{t,x_t}\{x_s\in B\},\qquad s\in[t,T],\ B\in\mathcal B(\mathbf C), \qquad (1.51)
$$

where P^{t,xₜ}{·} is the probability law of the C-valued Markov process {xₛ, s ∈ [t, T]} given the initial datum (t, xₜ) ∈ [0, T] × C, and B(C) is the Borel σ-algebra of subsets of C. Throughout, let E^{t,xₜ}[·] be the expectation taken with respect to the probability law P^{t,xₜ}. The probability transition function p(t, xₜ, s, B) defined in (1.51) has the following properties:
(a) For any s ≥ t and B ∈ B(C), the function (t, xₜ) ↦ p(t, xₜ, s, B) is B([0, T]) × B(C)-measurable.
(b) For any s̃ ≥ s ≥ t and B ∈ B(C),

$$
P\{x_{\tilde s}\in B\mid\mathcal F(s)\}=P\{x_{\tilde s}\in B\mid x_s\}=p(s,x_s,\tilde s,B)
$$

holds a.s. on Ω.

Proof. See Theorem (1.1) on page 51 of Mohammed [Moh84]. □

When the coefficient functions f and g are time-independent, that is, f(t, φ) ≡ f(φ) and g(t, φ) ≡ g(φ), then (1.27) reduces to the following autonomous system:

$$
dx(s)=f(x_s)\,ds+g(x_s)\,dW(s). \qquad (1.52)
$$

In this case, we usually take the initial time t = 0. The corresponding C-valued process {xₛ, s ∈ [0, T]} is a Markov process with time-homogeneous transition probabilities

$$
p(x_0,s,B)\equiv p(0,x_0,s,B)=p(t,x_t,t+s,B),\qquad \forall s,t\ge 0,\ x_0\in\mathbf C,\ B\in\mathcal B(\mathbf C).
$$

For a nonautonomous equation, one can consider the modified solution process {(s, xₛ), s ∈ [t, T]} instead of {xₛ, s ∈ [t, T]} as a solution of a modified autonomous equation. We will not dwell upon this point here.

Under Assumptions 1.3.6 and 1.3.7, we can also show that the C-valued strong solution process of (1.27) satisfies the following strong Markov property.

Theorem 1.5.2 (Strong Markovian Property 1) Under Assumptions 1.3.6 and 1.3.7, the C-valued solution process {xₛ, s ∈ [t, T]} of (1.27) satisfies the strong Markov property

$$
P\{x_{\tilde s}\in B\mid\mathcal F(\tau)\}=P\{x_{\tilde s}\in B\mid x_\tau\}
\quad\text{for all } \mathcal G(t)\text{-stopping times } t\le\tau\le\tilde s. \qquad (1.53)
$$
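Why the Markov state must be the whole segment xₛ ∈ C, rather than the point x(s), can be seen in a toy computation. The sketch below is illustrative and not from the text: for the deterministic delay equation dx(t)/dt = −x(t − r) (the drift −x(t − r), the delay r, and the grid are all our own choices), two initial segments with the same present value but different histories produce different futures.

```python
import numpy as np

# For the deterministic delay equation  dx(t)/dt = -x(t - r),  two initial
# segments with the SAME present value x(0) = 1 but DIFFERENT histories
# produce different futures, so the point x(t) alone cannot serve as a
# Markov state: the state must be the whole segment x_t in C([-r, 0]; R).
# (The delay r and the grid below are arbitrary illustrative choices.)
r, dt, steps = 1.0, 0.001, 1000             # integrate on [0, 1]
lag = int(r / dt)
theta = np.linspace(-r, 0.0, lag + 1)       # grid for the initial segment

def solve(phi):
    """Euler scheme for dx/dt = -x(t - r); phi samples the segment on [-r, 0]."""
    x = np.empty(lag + 1 + steps)
    x[:lag + 1] = phi
    for i in range(steps):
        j = lag + i
        x[j + 1] = x[j] - x[j - lag] * dt   # drift reads a PAST value
    return x[-1]

phi1 = np.ones(lag + 1)                     # history frozen at 1
phi2 = np.cos(2 * np.pi * theta)            # oscillating history, phi2(0) = 1

same_now = abs(phi1[-1] - phi2[-1]) < 1e-12 # identical present values
diff_later = abs(solve(phi1) - solve(phi2)) # futures differ
```

Since the two futures diverge despite equal present values, no transition function on ℝⁿ alone can describe the dynamics; lifting to the segment space C restores the Markov property asserted in Theorem 1.5.1.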


The strong Markovian property also holds for the M-valued process {(S(s), Sₛ), s ≥ 0} corresponding to (1.43).

Theorem 1.5.3 (Strong Markovian Property 2) Under Assumptions 1.4.4 and 1.4.5, the M-valued solution process {(S(s), Sₛ), s ∈ [0, T]} of (1.43) satisfies the following strong Markov property: For all stopping times τ ∈ T₀^{s̃},

$$
P\{(S(\tilde s),S_{\tilde s})\in B\mid\mathcal G(\tau)\}=P\{(S(\tilde s),S_{\tilde s})\in B\mid(S(\tau),S_\tau)\}. \qquad (1.54)
$$

1.6 Conclusions and Remarks

This chapter reviewed the stochastic tools and probabilistic preliminaries that are needed for the developments of the subsequent chapters. The existing results on stochastic hereditary differential systems (mainly taken from Mohammed [Moh84] and Mizel and Trutzer [MT84]), including existence and uniqueness of strong and weak solutions as well as Markovian and strong Markovian properties of the segment processes, were presented. Proofs of the known results are given here only when they shed light on the subject and are needed in the later developments. The theory of SHDEs is still at an early stage and requires additional research efforts for many years to come. We have not presented any results on stability theory, a better-developed area of SHDEs that is, however, not relevant to stochastic control over a finite time horizon. Interested readers are referred to Mao [Mao97] for an introduction and results on this topic.

2 Stochastic Calculus

The main purpose of this chapter is to study stochastic calculus for the C-valued Markovian solution process {xₛ, s ∈ [t, T]} of the SHDE (1.27) and the M-valued Markovian solution process {(S(s), Sₛ), s ≥ 0} of the SHDE (1.43). In particular, Dynkin's formulas, which play an important role in the Hamilton-Jacobi-Bellman theory of optimal control problems, will be derived for these two solution processes.

This chapter is organized as follows. As a preparation for the developments of stochastic calculus, the concepts and preliminary results on bounded linear and bilinear functionals, Fréchet derivatives, and C₀-semigroups on a generic separable Banach space (Ξ, ‖·‖_Ξ) are introduced in Section 2.1. In Sections 2.2 and 2.3, those concepts and results that are specific to the spaces Ξ = C and Ξ = M are presented; in particular, the properties of the S operator are established in these two sections. Dynkin's formulas, in terms of a weak infinitesimal generator of the segment processes for both (1.27) and (1.43), are obtained in Section 2.4. Dynkin's formulas can be computed explicitly when quasi-tame functions are involved; moreover, Itô's formulas for these two systems can be obtained for quasi-tame functions. Additional conclusions for the chapter and relevant remarks are given in Section 2.5. This chapter serves as the foundation for the construction of the Hamilton-Jacobi-Bellman theory for our optimal control problems.

2.1 Preliminary Analysis on Banach Spaces

In dealing with optimal control problems that involve the C-valued Markovian solution process {xₛ, s ∈ [t, T]} of (1.27) and the M-valued Markovian solution process {(S(s), Sₛ), s ≥ 0} of (1.43), we often encounter analysis on these two infinite-dimensional spaces. Instead of introducing this analysis separately for each of the two spaces, we review or prove some of the preliminary analytic results on a generic Banach space (Ξ, ‖·‖_Ξ) in this section. The analysis includes the concepts of Fréchet derivatives and bounded linear


and bilinear functionals as well as the concept of a C₀-semigroup of bounded linear operators and its generator. The special results that are applicable to Ξ = C and Ξ = M will be discussed in Sections 2.2 and 2.3, respectively.

2.1.1 Bounded Linear and Bilinear Functionals

For the time being, let Ξ be a generic Banach space with the norm ‖·‖_Ξ, and let B(Ξ) be the Borel σ-algebra of subsets of Ξ; that is, B(Ξ) is the smallest σ-algebra of subsets of Ξ that contains all open (and hence all closed) subsets of Ξ. The pair (Ξ, B(Ξ)) denotes a Borel measurable space. If Ξ is a Hilbert space (a special case of a Banach space), then the Hilbertian inner product ⟨·,·⟩_Ξ : Ξ × Ξ → ℝ, which gives rise to the norm ‖·‖_Ξ, will normally be given or specified.

Let (Ξ, ‖·‖_Ξ) and (Θ, ‖·‖_Θ) be two Banach spaces or Hilbert spaces. A mapping L : Ξ → Θ is said to be a bounded linear operator from Ξ to Θ if it satisfies the following linearity and boundedness (continuity) conditions:
(i) (Linearity Condition) For all a, b ∈ ℝ and all x, y ∈ Ξ, L(ax + by) = aLx + bLy.
(ii) (Boundedness Condition) There exists a constant K(L) > 0, depending on L only, such that ‖Lx‖_Θ ≤ K(L)‖x‖_Ξ for all x ∈ Ξ.

∀x ∈ Ξ.

Let L(Ξ, Θ) denote the Banach space of bounded linear operators from Ξ into Θ equipped with the operator norm · L(Ξ,Θ) defined by Lx Θ x=0 x Ξ

L L(Ξ,Θ) = sup

= sup Lx Θ , L ∈ L(Ξ, Θ). xΞ =1

In this case, Lx Θ ≤ L L(Ξ,Θ) x Ξ ,

∀L ∈ L(Ξ, Θ), x ∈ Ξ.

If Ξ = Θ, we simply write L(Ξ) for L(Ξ, Ξ) and any L ∈ L(Ξ) is called a bounded linear operator on Ξ. If Θ = , then L ∈ L(Ξ, ) will be called a bounded linear functional on Ξ. The space of bounded linear functionals (or the topological dual of Ξ ∗ ) will be denoted by Ξ ∗ and its corresponding operator norm will be denoted by · ∗Ξ . The mapping L : Ξ × Ξ →  is said to be a bounded bilinear functional if L(x, ·) and L(·, y) are both in Ξ ∗ for each x and y in Ξ. The space of bounded bilinear functionals on Ξ will be denoted by Ξ † . Again, Ξ † is a real separable Banach space under the operator norm · †Ξ defined by L(·, y) ∗Ξ L(x, ·) ∗Ξ = sup . y Ξ x Ξ y=0 x=0

L †Ξ = sup

2.1 Preliminary Analysis on Banach Spaces

81

2.1.2 Fr´ echet Derivatives Let Ψ : Ξ →  be a Borel measurable function. The function Ψ is said to be Fr´echet differentiable at x ∈ Ξ if for each y ∈ Ξ, Ψ (x + y) − Ψ (x) = DΨ (x)(y) + o(y), where DΨ : Ξ → Ξ ∗ and o : Ξ →  is a function such that o(x) → 0 as x Ξ → 0. x Ξ In this case, DΨ (x) ∈ Ξ ∗ is called the Fr´echet derivative of Ψ at x ∈ Ξ. The function Ψ is said to be continuously Fr´echet differentiable at x ∈ Ξ if its Fr´echet derivative DΨ : Ξ → Ξ ∗ is continuous under the operator norm · ∗Ξ . The function Ψ is said to be twice Fr´echet differentiable at x ∈ Ξ if its Fr´echet derivative DΨ (x) : Ξ →  exists and there exists D2 Ψ (x) : Ξ × Ξ →  such that for each y, z ∈ Ξ, D2 Ψ (x)(·, z), D2 Ψ (x)(y, ·) ∈ Ξ ∗ and

DΨ (x + y)(z) − DΨ (x)(z) = D2 Ψ (x)(y, z) + o(y, z).

Here, o : Ξ × Ξ →  is such that o(y, z) → 0 as y Ξ → 0 y Ξ and

o(y, z) → 0 as z Ξ → 0. z Ξ

In this case, the bounded bilinear functional D2 Ψ (x) : Ξ × Ξ →  is the second Fr´echet derivative of Ψ at x ∈ Ξ. As usual, the function Ψ : Ξ →  is said to be Fr´echet differentiable (respectively, twice Fr´echet differentiable) if Ψ is Fr´echet differentiable (twice Fr´echet differentiable) at every x ∈ Ξ. For computation of the Fr´echet derivatives of the function Ψ : Ξ → , we introduce the concept of Gˆ ateaux derivatives as follows. The function Ψ : Ξ →  is said to be k-times Gˆateaux differentiable at x ∈ Ξ for k = 1, 2, . . . if the following derivatives exist: Ψ (1) (x)(y) ≡

d Ψ (x + ty)|t=0 , y ∈ Ξ dt

and for i = 2, 3, . . . , k, Ψ (i) (x)(y1 , y2 , . . . , yi ) =

d (i−1) Ψ (x + tyi )(y1 , y2 , . . . , yi−1 )|t=0 . dt

82

2 Stochastic Calculus

Note that every Fr´echet derivative DΨ (φ) is also a Gˆateaux derivative and Ψ (1) (x)(y) = DΨ (x)(y), y ∈ Ξ . Conversely, if the Gˆ ateaux derivative Ψ (1) (x) of Ψ exists for all x in a neighborhood U (x) of x ∈ Ξ and if Ψ (1) : U (x) ⊂ Ξ →  is continuous, then Ψ (1) (x) is also the Fr´echet derivative at x. Throughout, let C 1,2 ([a, b] × Ξ) be the space of functions Ψ (t, x) that are continuously differentiable with respect to t ∈ [a, b] and twice continuously Fr´echet differentiable with respect to x ∈ Ξ. Denote the derivative of Ψ with respect to t by ∂t Ψ . Of course, ∂t Ψ at the end points a (respectively, b) will be understood to be the right-hand (respectively, the left-hand) derivative. The first and second Fr´echet derivatives of Ψ with respect to x will be denoted by Dx Ψ (t, x) and Dx2 Ψ (t, x) (or simply DΨ (t, x) and D2 Ψ (t, x) when there is no danger of ambiguity), respectively. D2 Ψ is said to be globally Lipschitz on [a, b] × Ξ on [a, b] if there exists a constant K > 0 that depends on Ψ only, such that D2 Ψ (t, x) − D2 Ψ (t, y) †Ξ ≤ K x − y Ξ ,

∀t ∈ [a, b], x, y ∈ Ξ.

The space of Ψ ∈ C 1,2 ([a, b] × Ξ) with D2 Ψ being globally Lipshitz on 1,2 ([a, b] × Ξ). [a, b] × Ξ will be denoted by Clip 1,2 If Φ ∈ C ([a, b] × Ξ), the following version of Taylor’s formula holds (see, e.g., Lang [Lan83]): Theorem 2.1.1 If Φ ∈ C 1,2 ([a, b] × Ξ), then Φ(t, y) = Φ(s, x) + ∂s Φ(s, x)(t − s) + DΦ(s, x)(y − x)  1 (1 − u)D2 Φ(s, x + u(y − x))(y − x, y − x) du. + 0

2.1.3 C0 -Semigroups Again, let Ξ be a generic real separable Banach or Hilbert space with the norm · Ξ and let L(Ξ) be the space of bounded linear operators on Ξ. Definition 2.1.2 A family {T (t), t ≥ 0} ⊂ L(Ξ) is said to be a strongly continuous semigroups of bounded linear operators on Ξ or, in short, a C0 (Ξ)semigroup if it satisfies the following conditions: (i) T (0) = I (the identity operator). (ii) T (s) ◦ T (t) = T (t) ◦ T (s) = T (s + t) for s, t ≥ 0. (iii) For any x ∈ Ξ, the map t → T (t)x is continuous for all t ≥ 0. (iv) there exist constants M ≥ 1 and β > 0 such that T (t) L(Ξ) ≤ M eβt ,

∀t ≥ 0.

(2.1)

2.1 Preliminary Analysis on Banach Spaces

83

The C0 (Ξ)-semigroup is said to be a contraction if there exists a constant k ∈ [0, 1) such that T (t)x − T (t)y Ξ ≤ k x − y Ξ ,

∀t ≥ 0, x, y ∈ Ξ.

One can associate such a C0 -semigroup {T (t), t ≥ 0} ⊂ L(Ξ) with its generator A defined by the following relation: 1 Ax = lim (T (t)x − x), t ≥ 0. t↓0 t

(2.2)

The domain D(A) of A is defined by the set   1 D(A) = x ∈ Ξ | lim (T (t)x − x) exists . t↓0 t

(2.3)

We have the following results regarding the generator A and its other relationship to the C0 -semigroup {T (t), t ≥ 0}. Theorem 2.1.3 Let A : D(A) ⊂ Ξ → Ξ be the generator of a C0 -semigroup {T (t), t ≥ 0} of bounded linear operators on a Banach space (Ξ, · Ξ ). Then the following hold: (i) For any t > 0 and x ∈ Ξ,  0



t

T (s)x ds ∈ D(A) and T (t)x − x = A

t

T (s)x ds.

(2.4)

0

(ii) If x ∈ D(A), then T (t)x ∈ D(A) and AT (t)x = T (t)Ax. Furthermore,  T (t)x − x =



t

AT (s)x ds = 0

t

T (s)Ax ds. 0

Proof. (i) It follows from the definition that  T (h) − I t T (s)x ds (by linearity) h↓0 h 0  1 t (T (s + h) − T (s))x ds (by (ii) of Definition 2.1.2) = lim h↓0 h 0  1 t+h (T (s) − I)x ds (by change of integration variable) = lim h↓0 h t = T (t)x − x (by continuity of T (t)x with respect to t). lim

t t This shows that 0 T (s)x ds ∈ D(A) and T (t)x − x = A 0 T (s)x ds for any t > 0 and x ∈ Ξ. (ii) Let x ∈ D(A) and t > 0. Then

84

2 Stochastic Calculus

AT (t)x = lim h↓0

 (T (h) − I)x  (T (h) − I)T (t)x = lim T (t) = T (t)Ax. h↓0 h h

This shows that T (t)x ∈ D(A) and the right-hand derivative

d dt

+ T (t)x = AT (t)x = T (t)Ax.

On the other hand, for t > 0 . T (t − h)x − T (t)x . . . − T (t)Ax. . −h Ξ . .  T (h)x − x  . . − Ax + T (t − h)Ax − T (t)Ax. = .T (t − h) h Ξ . . T (h)x − x . . − Ax. + T (t − h)Ax − T (t)Ax Ξ ≤ T (t − h) L(Ξ) . h Ξ → 0 as h ↓ 0. This proves (ii).

2

Corollary 2.1.4 If A : D(A) ⊂ Ξ → Ξ is the generator of a C0 -semigroup {T (t), t ≥ 0} of bounded linear operators on a Banach space (Ξ, · Ξ ), then A is a linear operator from the dense subspace D(A) ⊂ Ξ to Ξ. Proof. It is easy to see that D(A) is a subspace of Ξ and A is a linear operator, since 0 ∈ D(A) and x, y ∈ D(A) imply that ax + by ∈ D(A) for all a, b ∈ . For any x ∈ Ξ, we show that there exists a sequence {xk } ⊂ D(A) such that xk → x as k → ∞. In fact, for any t > 0,   1 t 1 t T (s)x ds ∈ D(A) and T (s)x ds → T (0)x = x as t ↓ 0. t 0 t 0 Hence, D(A) is a dense subset of Ξ.

2

Definition 2.1.5 The operator A : D(A) ⊂ Ξ → Ξ defined by (2.2) is called a closed operator if for {xk } ⊂ D(A) such that xk → x, Axk → y we have x ∈ D(A) and y = Ax. Corollary 2.1.6 If A : D(A) ⊂ Ξ → Ξ is the generator of a C0 -semigroup {T (t), t ≥ 0} of bounded linear operators on a Banach space (Ξ, · Ξ ), then A is a closed operator. Proof. By (2.4), we have  T (t)xk − xk =

0

t

T (s)Axk ds.

2.1 Preliminary Analysis on Banach Spaces

85

Hence, lim T (t)xk − xk = T (t)x − x  t T (s)Axk ds = lim k→∞ 0  t T (s)Ax ds. =

k→∞

0

This proves that x ∈ D(A) and y = Ax.

2

Definition 2.1.7 Let A : D(A) ⊂ Ξ → Ξ be a closed operator on Ξ. The resolvent set ρ(A) of A is the collection of all λ ∈  such that λI − A (I is the identity operator) is invertible, its range R(λI −A) = Ξ, and R(λ) ≡ (λ − A)−1 ∈ L(Ξ). For each λ ∈ ρ(A), R(λ) is called the resolvent of A at λ. The proofs of the following two theorems can be found in Theorem 1.2.10 and Theorem 1.2.11 of Kallingpur and Xiong [KX95]. Theorem 2.1.8 Let A : D(A) ⊂ Ξ → Ξ be a C0 -semigroup {T (t), t ≥ 0} of bounded linear operators on a Banach space (Ξ, · Ξ ). Let M and β be given by (2.1); then (β, ∞) ⊂ ρ(A),  ∞ R(λ) = e−λt T (t) dt, (2.5) 0

and

(R(λ))k L(Ξ) ≤ M (λ − β)−k ,

k = 1, 2, . . . , λ > β.

(2.6)

Theorem 2.1.9 Let A : D(A) ⊂ Ξ → Ξ be a generator of a densely defined closed linear operator on a Banach space Ξ such that the interval (β, ∞) ⊂ ρ(A) and (λI − A)−k L(Ξ) ≤ M (λ − β)−k ,

k = 1, 2, . . . , λ > β.

Then there exists a unique C0 -semigroup {T (t), t ≥ 0} of bounded linear operators on Ξ with generator A such that (2.1) holds. 2.1.4 Bounded and Continuous Functionals on Banach Spaces In this subsection we will explore some properties of bounded and real-valued continuous functions on a Banach space (Ξ, · Ξ ) and some strong and weak convergence results for a family or sequence of such functions. Let Cb (Ξ) be the real Banach space of bounded and continuous (but not necessarily linear) functions Φ : Ξ →  under the sup-norm · Cb (Ξ) defined by Φ Cb (Ξ) = sup |Φ(x)|. x∈Ξ

The topology on the space Cb (Ξ) generated by the sup-norm · Cb (Ξ) is referred to as the strong topology.

86

2 Stochastic Calculus

We also define a weak topology on Cb (Ξ) as follows: Let M(Ξ) be the Banach space of all finite regular measures on the Borel measurable space (Ξ, B(Ξ)) equipped with the total variation norm defined by µ M(Ξ) = sup{µ(B) | B ∈ B(Ξ)},

µ ∈ M(Ξ).

Then it can be shown that (see, e.g., Rudin [Rud71]) there is a continuous bilinear pairing ·, ·(Cb ,M) : Cb (Ξ) × M(Ξ) →  defined by  Φ(x) dµ(x), Φ ∈ Cb (Ξ), µ ∈ M(Ξ).

Φ, µ(Cb ,M) = Ξ

A family {Φ(t), t ≥ 0} in Cb (Ξ) is said to converge weakly to Φ ∈ Cb (Ξ) as t ↓ 0 if lim Φ(t), µ(Cb ,M) = Φ, µ(Cb ,M) , ∀µ ∈ M(Ξ). t↓0

In this case, we write Φ = (w) lim Φ(t). t↓0

The following result is due originally to Dynkin [Dyn65]: Proposition 2.1.10 For each t > 0, let Φ(t) ∈ Cb (Ξ) and let Φ ∈ Cb (Ξ). Then Φ = (w) lim Φ(t) t↓0

if and only if the set { Φ(t) Cb (Ξ) , t > 0} is bounded in  and lim Φ(t)(x) = Φ(x), t↓0

∀x ∈ Ξ.

Proof. For each x ∈ Ξ, let δx ∈ M(Ξ) be the Dirac measure concentrated at x and defined by  1 if x ∈ B δx (B) = 0 if x ∈ /B ˆ ∈ M∗ (Ξ) by for all B ∈ B(Ξ). For each t > 0, define Φ(t) ˆ Φ(t)(µ) = Φ(t), µ(Cb ,M) ,

∀µ ∈ M(Ξ),

where M∗ (Ξ) is the topological dual of M(Ξ). If Φ = (w) limt↓0 Φ(t), then Φ(x) = Φ, δx (Cb ,M) = lim Φ(t), δx (Cb ,M) = lim Φ(t)(x), t↓0

t↓0

∀x ∈ Ξ,

ˆ and the set {Φ(t)(µ), t > 0} is bounded for each µ ∈ M(Ξ). By the uniˆ form boundedness principle (see, e.g., Siddiqi [Sid04]) { Φ(t) M∗ (Ξ) , t > ˆ ∗ 0} is bounded. However, Φ(t) M (Ξ) = Φ(t) Cb (Ξ) for each t > 0, so { Φ(t) Cb (Ξ) , t > 0} is bounded.


Conversely, suppose {‖Φ(t)‖_{C_b(Ξ)}, t > 0} is bounded and

$$
\lim_{t\downarrow 0}\Phi(t)(x)=\Phi(x),\qquad \forall x\in\Xi.
$$

By the dominated convergence theorem,

$$
\langle\Phi,\mu\rangle_{(C_b,M)}=\int_\Xi\Phi(x)\,d\mu(x)=\lim_{t\downarrow 0}\int_\Xi\Phi(t)(x)\,d\mu(x)=\lim_{t\downarrow 0}\langle\Phi(t),\mu\rangle_{(C_b,M)},\qquad \forall\mu\in M(\Xi).
$$

Thus, Φ = (w)lim_{t↓0} Φ(t). This proves the proposition. □
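Proposition 2.1.10 is easy to see in action on Ξ = ℝ with an example of our own (not from the text): Φₜ(x) = e^{−tx²} is bounded by 1 and converges pointwise to the constant function 1 as t ↓ 0, hence converges weakly to 1; yet sup_x |Φₜ(x) − 1| = 1 for every t > 0, so there is no sup-norm convergence. The probe measure below is a standard Gaussian, for which ⟨Φₜ, μ⟩ = 1/√(1 + 2t).

```python
import numpy as np

# Weak but not strong convergence in C_b(R):  Phi_t(x) = exp(-t x^2)
# is bounded by 1 and converges pointwise to 1 as t -> 0 (so weakly, by
# Proposition 2.1.10), but sup_x |Phi_t(x) - 1| = 1 for every t > 0.
# The example and the Gaussian probe measure are illustrative choices.
def Phi(t, x):
    return np.exp(-t * x ** 2)

x = np.linspace(-50.0, 50.0, 400001)
dx = x[1] - x[0]
gauss = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # probe measure mu

pairings = []       # <Phi_t, mu>, which should approach <1, mu> = 1
sup_gaps = []       # ||Phi_t - 1|| on the grid, which stays near 1
for t in [1.0, 0.1, 0.01, 0.001]:
    f = Phi(t, x)
    pairings.append(float(np.sum(f * gauss) * dx))
    sup_gaps.append(float(np.max(np.abs(f - 1.0))))
```

The pairings tend to 1 while the sup-norm gaps do not shrink, which is exactly the gap between the weak topology and the strong topology on C_b(Ξ).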

Lifting from the space Ξ to the higher level C_b(Ξ) (i.e., with a slight abuse of notation, letting the Ξ above be the Banach space C_b(Ξ)), let us consider a C₀-semigroup of bounded linear operators {Γ(t), t ≥ 0} on the space C_b(Ξ) (i.e., Γ(t) ∈ L(C_b(Ξ)) for each t ≥ 0). The definition of a C₀-semigroup given in Definition 2.1.2 still applies in the context of the Banach space C_b(Ξ):
(i) Γ(0) = I (the identity operator on C_b(Ξ)).
(ii) Γ(s) ∘ Γ(t) = Γ(t) ∘ Γ(s) = Γ(s + t) for s, t ≥ 0.
(iii) For any Φ ∈ C_b(Ξ), ‖Γ(t)Φ − Φ‖_{C_b(Ξ)} → 0 as t ↓ 0.

Note that Condition (iii) implies that Γ(t) converges to Γ(0) in the operator norm ‖·‖_{L(C_b(Ξ))}:

$$
\lim_{t\downarrow 0}\|\Gamma(t)-\Gamma(0)\|_{L(C_b(\Xi))}=\lim_{t\downarrow 0}\|\Gamma(t)-I\|_{L(C_b(\Xi))}
\equiv\lim_{t\downarrow 0}\sup_{\Phi\ne 0}\frac{\|\Gamma(t)\Phi-\Phi\|_{C_b(\Xi)}}{\|\Phi\|_{C_b(\Xi)}}
=\lim_{t\downarrow 0}\sup_{\|\Phi\|_{C_b(\Xi)}=1}\|\Gamma(t)\Phi-\Phi\|_{C_b(\Xi)}.
$$

Therefore, by the dominated convergence theorem, it is easy to see that

$$
(w)\lim_{t\downarrow 0}\Gamma(t)(\Phi)=\Phi,\qquad \forall\Phi\in C_b(\Xi),
$$

or, equivalently,

$$
\lim_{t\downarrow 0}\langle\Gamma(t)(\Phi)-\Phi,\mu\rangle_{(C_b(\Xi),M(\Xi))}=0,\qquad \mu\in M(\Xi).
$$

The weak infinitesimal generator Γ of the C₀-semigroup of bounded linear operators {Γ(t), t ≥ 0} on C_b(Ξ) is defined by

$$
\Gamma(\Phi)=(w)\lim_{t\downarrow 0}\frac{\Gamma(t)\Phi-\Phi}{t}.
$$


Let D(Γ), the domain of the weak infinitesimal generator Γ, be the set of all Φ ∈ C_b(Ξ) for which the above weak limit exists. The following properties can be found in [Dyn65] (Vol. 1, Chapter I, §6, pp. 36-43).

Theorem 2.1.11 Let {Γ(t), t ≥ 0} ⊂ L(C_b(Ξ)), and let C_b⁰(Ξ) ⊂ C_b(Ξ) be defined by

$$
C_b^0(\Xi)=\Big\{\Phi\in C_b(\Xi)\ \Big|\ \lim_{t\downarrow 0}\Gamma(t)(\Phi)=\Phi\Big\}.
$$

Then the following hold:
(i) D(Γ) ⊂ C_b⁰(Ξ) and D(Γ) is weakly dense in C_b(Ξ); that is, for each Φ ∈ C_b(Ξ), there exists a sequence {Φₖ} ⊂ D(Γ) such that

$$
\lim_{k\to\infty}\langle\Phi_k,\mu\rangle_{(C_b(\Xi),M(\Xi))}=\langle\Phi,\mu\rangle_{(C_b(\Xi),M(\Xi))},\qquad \forall\mu\in M(\Xi);
$$

moreover, Γ(t)(D(Γ)) ⊂ D(Γ) for all t ≥ 0.
(ii) If Φ ∈ D(Γ) and t > 0, then the weak derivative

$$
(w)\frac{d}{dt}\Gamma(t)(\Phi)\equiv(w)\lim_{h\to 0}\frac{1}{h}\big[\Gamma(t+h)(\Phi)-\Gamma(t)(\Phi)\big]
$$

exists, and

$$
(w)\frac{d}{dt}\Gamma(t)(\Phi)=\Gamma(t)(\Gamma(\Phi)),
\qquad
\Gamma(t)(\Phi)-\Phi=(w)\int_0^t\Gamma(u)(\Gamma(\Phi))\,du,\qquad \forall t>0.
$$

(iii) Γ is weakly closed; that is, if {Φ(k)}ₖ₌₁^∞ ⊂ D(Γ) is weakly convergent and {Γ(Φ(k))}ₖ₌₁^∞ is also weakly convergent, then (w)limₖ→∞ Φ(k) ∈ D(Γ) and

$$
(w)\lim_{k\to\infty}\Gamma(\Phi(k))=\Gamma\big((w)\lim_{k\to\infty}\Phi(k)\big).
$$

(iv) For each λ > 0, the operator λI − Γ (I denotes the identity operator) is a bijection of D(Γ) onto C_b(Ξ). Indeed, the resolvent R(λ; Γ) ≡ (λI − Γ)⁻¹ satisfies the relation

$$
(\lambda I-\Gamma)^{-1}(\Phi)=\int_0^\infty e^{-\lambda t}\Gamma(t)(\Phi)\,dt,\qquad \forall\Phi\in C_b(\Xi).
$$

The resolvent R(λ; Γ) = (λI − Γ)⁻¹ is bounded and linear, and its operator norm satisfies ‖R(λ; Γ)‖_{L(C_b(Ξ))} ≤ 1/λ for all λ > 0.
(v) (w)lim_{λ→∞} λR(λ)(Φ) = Φ for all Φ ∈ C_b(Ξ).

In dealing with the SHDE (1.27) with bounded memory (0 < r < ∞) and the SHDE (1.43) with infinite but fading memory (r = ∞), and their associated stochastic control problems, we will, in the following two sections, apply the results obtained in the preceding subsections specifically to the Banach space (Ξ, ‖·‖_Ξ) with Ξ = C and Ξ = M.


2.2 The Space C

The analysis reviewed for a generic Banach space (Ξ, ‖·‖_Ξ) in the previous section will be specialized to the space C ≡ C([−r, 0]; ℝⁿ) in this section. For −∞ < a < b ≤ ∞, the interval [a, b] will be interpreted as [a, b] ∩ ℝ; the intervals (a, b], (a, b), and [a, b) will be interpreted similarly. We use the notation ℝ̄⁺ for the compactified interval [0, ∞].

For −∞ < a < b ≤ ∞, let C([a, b]; ℝⁿ) be the space of continuous functions φ : [a, b] → ℝⁿ. Note that when b < ∞, the space C([a, b]; ℝⁿ) is a real separable Banach space equipped with the uniform topology defined by the sup-norm ‖·‖_∞, where

$$
\|\phi\|_\infty=\sup_{t\in[a,b]}|\phi(t)|.
$$

If b = ∞, the space C([0, ∞); ℝⁿ) will be equipped with the topology defined by the metric

$$
d(\omega_1,\omega_2)=\sum_{k=1}^{\infty}\frac{1}{2^k}\max_{0\le t\le k}\big(|\omega_1(t)-\omega_2(t)|\wedge 1\big). \qquad (2.7)
$$

Let L²(a, b; ℝⁿ) be the space of Lebesgue square-integrable functions φ : [a, b] → ℝⁿ. It is clear that L²(a, b; ℝⁿ) is a real separable Hilbert space with the inner product ⟨·,·⟩₂ defined by

$$
\langle\phi,\varphi\rangle_2=\int_a^b\phi(t)\cdot\varphi(t)\,dt
$$

and the Hilbertian norm defined by ‖φ‖₂ = ⟨φ, φ⟩₂^{1/2}. We have the following theorem.

Theorem 2.2.1 For −∞ < a < b < ∞, the space C([a, b]; ℝⁿ) can be continuously and densely embedded into the space L²(a, b; ℝⁿ).

Proof. Consider the linear injection i : C([a, b]; ℝⁿ) → L²(a, b; ℝⁿ) defined by i(φ) = φ ∈ L²(a, b; ℝⁿ) for φ ∈ C([a, b]; ℝⁿ). Then

$$
\|i(\varphi)\|_2=\Big(\int_a^b|\varphi(t)|^2\,dt\Big)^{1/2}\le\sqrt{b-a}\,\|\varphi\|_\infty.
$$

This shows that the injection i is continuous. The fact that C([a, b]; ℝⁿ) is dense in L²(a, b; ℝⁿ) is a well-known result that can be found in any standard real analysis text (see, e.g., [Rud71]); its proof will be omitted here. □
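The embedding inequality ‖φ‖₂ ≤ √(b − a) ‖φ‖_∞ from the proof above can be checked numerically. The sketch below is our illustration (the interval, the trigonometric test functions, and the grid are arbitrary choices): it evaluates both norms for random continuous functions on [a, b] = [−1, 0] and confirms the bound.

```python
import numpy as np

# Numerical check of the continuous embedding of Theorem 2.2.1: for
# continuous phi on [a, b],   ||phi||_2 <= sqrt(b - a) * ||phi||_infty.
# Random trigonometric polynomials on [-1, 0] serve as test functions.
rng = np.random.default_rng(2)
a, b = -1.0, 0.0
t = np.linspace(a, b, 5001)
dt = t[1] - t[0]

holds = []
for _ in range(50):
    coef = rng.normal(size=5)
    phi = sum(c * np.cos((k + 1) * np.pi * t) for k, c in enumerate(coef))
    sup_norm = float(np.max(np.abs(phi)))
    l2_norm = float(np.sqrt(np.sum(0.5 * (phi[1:] ** 2 + phi[:-1] ** 2)) * dt))
    holds.append(l2_norm <= np.sqrt(b - a) * sup_norm + 1e-9)
```

The constant √(b − a) is why the embedding fails for b = ∞, which is exactly the reason the fading-memory weight ρ is needed for the space M.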


Readers are reminded of the notation for these two spaces as follows. For 0 < r < ∞, let C ≡ C([−r, 0]; ℝⁿ) be the space of continuous functions φ : [−r, 0] → ℝⁿ equipped with the uniform topology, that is,

$$
\|\phi\|_{\mathbf C}=\sup_{\theta\in[-r,0]}|\phi(\theta)|,\qquad \phi\in\mathbf C.
$$

To avoid the frequent usage of the cumbersome subscript of the sup-norm ‖·‖_C throughout this monograph, we simply write ‖·‖ for ‖·‖_C, that is,

$$
\|\phi\|=\sup_{\theta\in[-r,0]}|\phi(\theta)|,\qquad \phi\in\mathbf C.
$$

Similarly, we denote by ‖·‖* = ‖·‖*_C and ‖·‖† = ‖·‖†_C the operator norms in C* and C†, respectively. Let BV([a, b]; ℝⁿ) be the space of functions φ : [a, b] → ℝⁿ that are of bounded variation on the interval [a, b].

In the following, we study C* and C† in a little more detail than the general Ξ* and Ξ†. We first note that C* has the following representation (see, e.g., Dunford and Schwartz [DS58]).

Theorem 2.2.2 Φ ∈ C* if and only if there exists a unique regular finite measure η : B([−r, 0]) → ℝ such that

$$
\Phi(\phi)=\int_{-r}^0\phi(\theta)\cdot d\eta(\theta),\qquad \forall\phi\in\mathbf C,
$$

where the above integral is to be interpreted as a Lebesgue-Stieltjes integral.

Define the function 1_{0} : [−r, 0] → ℝ by

$$
1_{\{0\}}(\theta)=\begin{cases}0 & \theta\in[-r,0)\\ 1 & \theta=0.\end{cases}
$$

Let B be the vector space of all simple functions of the form v1_{0}, where v ∈ ℝⁿ. Clearly, C ∩ B = {0}, where 0 is the zero function in C. We form the direct sum C ⊕ B and equip it with the norm, also denoted by ‖·‖_{C⊕B} when there is no danger of ambiguity, where

$$
\|\phi+v1_{\{0\}}\|_{\mathbf C\oplus\mathbf B}=\|\phi\|_{\mathbf C}+|v|=\sup_{\theta\in[-r,0]}|\phi(\theta)|+|v|,\qquad \phi\in\mathbf C,\ v\in\mathbb R^n.
$$

Note that C ⊕ B is the space of all functions ξ : [−r, 0] → ℝⁿ that are continuous on [−r, 0) and possibly have a jump discontinuity at 0 of vector amount v ∈ ℝⁿ. The following two lemmas are due to Mohammed [Moh84].


Lemma 2.2.3 Let Γ ∈ C*; that is, Γ : C → ℝ is a bounded linear functional on C. Then Γ has a unique (continuous) linear extension Γ̄ : C ⊕ B → ℝ satisfying the following weak continuity property:
(W1) If {ξ⁽ᵏ⁾}ₖ₌₁^∞ is a bounded sequence in C such that ξ⁽ᵏ⁾(θ) → ξ(θ) as k → ∞ for each θ ∈ [−r, 0], for some ξ ∈ C ⊕ B, then Γ(ξ⁽ᵏ⁾) → Γ̄(ξ) as k → ∞.
Furthermore, the extension map C* → (C ⊕ B)*, Γ ↦ Γ̄, is a linear isometric injection.

Proof. We prove the lemma first for n = 1. Suppose Γ ∈ C*[−r, 0]. By the Riesz representation theorem (Theorem 2.2.2), there is a unique regular finite measure η : B([−r, 0]) → ℝ such that

$$
\Gamma(\phi)=\int_{-r}^0\phi(\theta)\,d\eta(\theta),\qquad \forall\phi\in C[-r,0].
$$

Define Γ̄ : C[−r, 0] ⊕ B₁ → ℝ by

$$
\bar\Gamma(\phi+v1_{\{0\}})=\Gamma(\phi)+v\,\eta(\{0\}),\qquad \phi\in C[-r,0],\ v\in\mathbb R,
$$

where B₁ = {v1_{0} | v ∈ ℝ}. Note that Γ̄ is clearly a continuous linear extension of Γ.

Let {ξ⁽ᵏ⁾}ₖ₌₁^∞ be a bounded sequence in C[−r, 0] such that ξ⁽ᵏ⁾(θ) → ξ(θ) as k → ∞ for each θ ∈ [−r, 0], for some ξ ≡ φ + v1_{0} ∈ C[−r, 0] ⊕ B₁. By the dominated convergence theorem,

$$
\lim_{k\to\infty}\Gamma(\xi^{(k)})=\lim_{k\to\infty}\int_{-r}^0\xi^{(k)}(\theta)\,d\eta(\theta)
=\int_{-r}^0\phi(\theta)\,d\eta(\theta)+\int_{-r}^0 v1_{\{0\}}(\theta)\,d\eta(\theta)
=\Gamma(\phi)+v\,\eta(\{0\})=\bar\Gamma(\phi+v1_{\{0\}}).
$$

The map Π : C* → (C ⊕ B)*, Γ ↦ Γ̄, is clearly linear.

The higher-dimensional case n > 1 can be reduced to the one-dimensional situation as follows. Write Γ ∈ C* in the form

$$
\Gamma(\phi)=\sum_{i=1}^n\Gamma^{(i)}(\phi_i),
$$

where φ = (φ₁, φ₂, …, φₙ) ∈ C, φᵢ ∈ C[−r, 0], i = 1, 2, …, n, and Γ⁽ⁱ⁾(ϕ) = Γ(0, …, 0, ϕ, 0, …, 0) with ϕ ∈ C[−r, 0] occupying the ith place.


Hence, Γ⁽ⁱ⁾ ∈ C*[−r, 0] for i = 1, 2, …, n. Write B as the n-fold Cartesian product of B₁, B = B₁ × ⋯ × B₁, by taking

$$
v1_{\{0\}}=(v_11_{\{0\}},v_21_{\{0\}},\ldots,v_n1_{\{0\}}),\qquad \forall v=(v_1,v_2,\ldots,v_n)\in\mathbb R^n.
$$

Let Γ̄⁽ⁱ⁾ ∈ (C[−r, 0] ⊕ B₁)* be the extension of Γ⁽ⁱ⁾ described earlier and satisfying (W1). It is easy to verify that C ⊕ B = (C[−r, 0] ⊕ B₁) × ⋯ × (C[−r, 0] ⊕ B₁); that is, φ + v1_{0} = (φ₁ + v₁1_{0}, φ₂ + v₂1_{0}, …, φₙ + vₙ1_{0}). Define Γ̄ ∈ (C ⊕ B)* by

$$
\bar\Gamma(\phi+v1_{\{0\}})=\sum_{i=1}^n\bar\Gamma^{(i)}(\phi_i+v_i1_{\{0\}})
$$

when φ = (φ₁, φ₂, …, φₙ) and v = (v₁, v₂, …, vₙ). Since each Γ̄⁽ⁱ⁾ is a continuous linear extension of Γ⁽ⁱ⁾, Γ̄ is a continuous linear extension of Γ. Let {ξ⁽ᵏ⁾}ₖ₌₁^∞ be a bounded sequence in C such that ξ⁽ᵏ⁾(θ) → ξ(θ) as k → ∞ for each θ ∈ [−r, 0], for some ξ ≡ φ + v1_{0} ∈ C ⊕ B. Write ξ⁽ᵏ⁾ = (ξ₁⁽ᵏ⁾, ξ₂⁽ᵏ⁾, …, ξₙ⁽ᵏ⁾) and ξ = (ξ₁, ξ₂, …, ξₙ), with ξᵢ⁽ᵏ⁾ ∈ C[−r, 0] and ξᵢ ∈ C[−r, 0] ⊕ B₁, i = 1, 2, …, n. Hence, {ξᵢ⁽ᵏ⁾}ₖ₌₁^∞ is bounded in C[−r, 0] and ξᵢ⁽ᵏ⁾(θ) → ξᵢ(θ) as k → ∞ for θ ∈ [−r, 0], i = 1, 2, …, n. Therefore,

$$
\lim_{k\to\infty}\Gamma(\xi^{(k)})=\lim_{k\to\infty}\sum_{i=1}^n\Gamma^{(i)}(\xi_i^{(k)})=\sum_{i=1}^n\bar\Gamma^{(i)}(\xi_i)=\bar\Gamma(\xi).
$$

Therefore, Γ̄ satisfies (W1).

To prove uniqueness, let Γ̃ ∈ (C ⊕ B)* be any continuous extension of Γ satisfying (W1). For any v1_{0} ∈ B, choose a bounded sequence {ξ₀⁽ᵏ⁾}ₖ₌₁^∞ in C[−r, 0] such that ξ₀⁽ᵏ⁾(θ) → v1_{0}(θ) as k → ∞ for all θ ∈ [−r, 0]. For example, take

$$
\xi_0^{(k)}(\theta)=\begin{cases}(k\theta+1)v & -\tfrac1k\le\theta\le 0\\ 0 & -r\le\theta<-\tfrac1k.\end{cases}
$$

Note that ‖ξ₀⁽ᵏ⁾‖ = |v| for all k ≥ 1. Also, by (W1), one has

$$
\tilde\Gamma(\phi+v1_{\{0\}})=\tilde\Gamma(\phi)+\tilde\Gamma(v1_{\{0\}})
=\Gamma(\phi)+\lim_{k\to\infty}\Gamma(\xi_0^{(k)})
=\Gamma(\phi)+\bar\Gamma(v1_{\{0\}})=\bar\Gamma(\phi+v1_{\{0\}}),\qquad \forall\phi\in\mathbf C.
$$

Thus, Γ̃ = Γ̄.

Define Π : C* → (C ⊕ B)*, Γ ↦ Γ̄. Since the extension map is linear in the one-dimensional case, it follows from the representation of Γ̄ in terms of the Γ̄⁽ⁱ⁾ that Π is also linear. Moreover, Γ̄ is an extension of Γ, so ‖Γ̄‖* ≥ ‖Γ‖*. Conversely, let ξ = φ + v1_{0} ∈ C ⊕ B and construct {ξ₀⁽ᵏ⁾}ₖ₌₁^∞ in C[−r, 0] as earlier. Then

$$
\bar\Gamma(\xi)=\lim_{k\to\infty}\Gamma(\phi+\xi_0^{(k)}).
$$

However, (k)

(k)

|Γ (φ + ξ0 )| ≤ Γ ∗ φ + ξ0 (k)

≤ Γ ∗ [ φ + ξ0 ] = Γ ∗ [ φ + |v|] = Γ ∗ ξ , ∀k ≥ 1. Hence,

(k) |Γ¯ (ξ)| = lim |Γ (φ + ξ0 )| ≤ Γ ∗ ξ , k→∞

∀ξ ∈ C ⊕ B.

Thus, Γ¯ ∗ ≤ Γ ∗ . So Γ¯ ∗ = Γ ∗ and Π is an isometry map.

2
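The approximating sequence ξ₀^(k) used in the uniqueness argument is easy to check numerically. A minimal sketch (the helper `xi0` and all sample values are illustrative, not from the text); it verifies that ‖ξ₀^(k)‖ = |v| for every k while the pointwise limit is v1_{0}:

```python
import numpy as np

def xi0(k, v, theta):
    """Piecewise-linear approximant from the proof: equals v at theta = 0,
    vanishes for theta < -1/k, and is linear in between."""
    theta = np.asarray(theta, dtype=float)
    return np.where(theta >= -1.0 / k, (k * theta + 1.0) * v, 0.0)

v, r = 2.5, 1.0
thetas = np.linspace(-r, 0.0, 2001)
for k in (1, 10, 100):
    vals = xi0(k, v, thetas)
    assert abs(np.max(np.abs(vals)) - abs(v)) < 1e-12  # sup-norm equals |v|
assert xi0(10**6, v, -0.01) == 0.0   # pointwise limit vanishes for theta < 0
assert xi0(10**6, v, 0.0) == v       # ... and equals v at theta = 0
```

The constant sup-norm with a pointwise (non-uniform) limit is exactly why the extension must be pinned down by the weak continuity property (W1) rather than by norm convergence.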



Lemma 2.2.4 Let Γ ∈ C†; that is, Γ : C × C → ℝ is a bounded bilinear functional on C. Then Γ has a unique (continuous) bilinear extension Γ̄ : (C ⊕ B) × (C ⊕ B) → ℝ satisfying the following weak continuity property (W2): if {ξ^(k)}_{k=1}^∞ and {ζ^(k)}_{k=1}^∞ are bounded sequences in C such that ξ^(k)(θ) → ξ(θ) and ζ^(k)(θ) → ζ(θ) as k → ∞ for every θ ∈ [−r, 0], for some ξ, ζ ∈ C ⊕ B, then Γ(ξ^(k), ζ^(k)) → Γ̄(ξ, ζ) as k → ∞.

Proof. Here, we again deal first with the one-dimensional case. Write the continuous bilinear map Γ : C[−r, 0] × C[−r, 0] → ℝ as a continuous linear map Γ : C[−r, 0] → C*[−r, 0]. Since C*[−r, 0] is weakly complete (see §IV.13.22 of Dunford and Schwartz [DS58]), Γ : C[−r, 0] → C*[−r, 0] is weakly compact (see Theorem (I.4.2) of [DS58]). Hence, there is a unique measure

η : B([−r, 0]) → C*[−r, 0] (of finite semivariation ‖η‖([−r, 0]) < ∞) such that for all φ ∈ C[−r, 0],

  Γ(φ) = ∫_{−r}^{0} φ(θ) dη(θ)  (see Theorem (I.4.1) of [DS58]).

Using an argument similar to that used in the proof of Lemma 2.2.3, the above integral representation of Γ implies the existence of a unique continuous linear extension Γ̂ : C[−r, 0] ⊕ B₁ → C*[−r, 0] satisfying (W1). To prove this, one needs the dominated convergence theorem for vector-valued measures (see §IV.10.10 and Theorem (I.3.1)(iv) of [DS58]). Define Γ̄ : C[−r, 0] ⊕ B₁ → (C[−r, 0] ⊕ B₁)* by Γ̄ = Π ∘ Γ̂, where Π is the extension isometry of Lemma 2.2.3:

  Γ̄(φ + v1_{0}) = Π(Γ̂(φ + v1_{0})),  φ ∈ C[−r, 0], v ∈ ℝ.

Clearly, Γ̄ gives a continuous bilinear extension of Γ to (C[−r, 0] ⊕ B₁) × (C[−r, 0] ⊕ B₁). To prove that Γ̄ satisfies (W2), let {ξ^(k)}_{k=1}^∞ and {ζ^(k)}_{k=1}^∞ be bounded sequences in C[−r, 0] such that ξ^(k)(θ) → ξ(θ) and ζ^(k)(θ) → ζ(θ) as k → ∞ for every θ ∈ [−r, 0], for some ξ, ζ ∈ C[−r, 0] ⊕ B₁. By Lemma 2.2.3 applied to Γ̂, we get Γ̂(ξ) = lim_{k→∞} Γ(ξ^(k)). Now, for any k = 1, 2, …,

  |Γ(ξ^(k))(ζ^(k)) − Γ̄(ξ)(ζ)| ≤ |Γ(ξ^(k))(ζ^(k)) − Γ̂(ξ)(ζ^(k))| + |Γ̂(ξ)(ζ^(k)) − Γ̂(ξ)(ζ)|
   ≤ ‖Γ(ξ^(k)) − Γ̂(ξ)‖_* ‖ζ^(k)‖ + |Γ̂(ξ)(ζ^(k)) − Γ̂(ξ)(ζ)|.

However, by Lemma 2.2.3 applied to Γ̂(ξ), we have

  lim_{k→∞} |Γ̂(ξ)(ζ^(k)) − Γ̂(ξ)(ζ)| = 0.

Since {‖ζ^(k)‖}_{k=1}^∞ is bounded, it follows from the last inequality that

  lim_{k→∞} |Γ(ξ^(k))(ζ^(k)) − Γ̄(ξ)(ζ)| = 0.

When n > 1, we use coordinates as in Lemma 2.2.3 to reduce to the one-dimensional case. Indeed, write any continuous bilinear map Γ : C × C → ℝ as a sum of continuous bilinear maps C[−r, 0] × C[−r, 0] → ℝ in the following way:

  Γ((φ₁, …, φ_n), (ϕ₁, …, ϕ_n)) = Σ_{i,j=1}^{n} Γ^(ij)(φᵢ, ϕⱼ),

where (φ₁, ⋯, φ_n), (ϕ₁, ⋯, ϕ_n) ∈ C, φᵢ, ϕᵢ ∈ C[−r, 0], i = 1, 2, ⋯, n,

and Γ^(ij) : C[−r, 0] × C[−r, 0] → ℝ is the continuous bilinear map defined by

  Γ^(ij)(ς, γ) = Γ((0, …, 0, ς, 0, …, 0), (0, …, 0, γ, 0, …, 0))

for ς, γ ∈ C[−r, 0] occupying the ith and jth places, respectively, 1 ≤ i, j ≤ n. Now, extend each Γ^(ij) continuously to a bilinear map Γ̄^(ij) : (C[−r, 0] ⊕ B₁) × (C[−r, 0] ⊕ B₁) → ℝ satisfying (W2). Then define Γ̄ : (C ⊕ B) × (C ⊕ B) → ℝ by

  Γ̄(ξ, ζ) = Σ_{i,j=1}^{n} Γ̄^(ij)(ξᵢ, ζⱼ),

where ξ = (ξ₁, …, ξ_n), ζ = (ζ₁, …, ζ_n) ∈ C ⊕ B, ξᵢ, ζᵢ ∈ C[−r, 0] ⊕ B₁, 1 ≤ i ≤ n. It is then easy to see that Γ̄ is a continuous bilinear extension of Γ satisfying (W2).

Finally, we prove uniqueness. Let Γ̃ : C ⊕ B → (C ⊕ B)* be any continuous bilinear extension of Γ satisfying (W2). Take ξ = φ + v1_{0}, ζ = ϕ + w1_{0} ∈ C ⊕ B, where φ, ϕ ∈ C, v, w ∈ ℝⁿ.

Choose bounded sequences {ξ₀^(k)}_{k=1}^∞, {ζ₀^(k)}_{k=1}^∞ in C[−r, 0] such that

  ξ₀^(k)(θ) → v1_{0}(θ), ζ₀^(k)(θ) → w1_{0}(θ) as k → ∞, θ ∈ [−r, 0];
  ‖ξ₀^(k)‖ = |v|, ‖ζ₀^(k)‖ = |w|,  ∀k ≥ 1.

Let ξ^(k) = φ + ξ₀^(k) and ζ^(k) = ϕ + ζ₀^(k). Then {ξ^(k)}_{k=1}^∞ and {ζ^(k)}_{k=1}^∞ are bounded sequences in C such that

  ξ^(k)(θ) → ξ(θ) and ζ^(k)(θ) → ζ(θ) as k → ∞, ∀θ ∈ [−r, 0].

Therefore, by (W2) for Γ̃ and Γ̄, one gets

  Γ̃(ξ)(ζ) = lim_{k→∞} Γ(ξ^(k))(ζ^(k)) = Γ̄(ξ)(ζ).

This proves that Γ̃ = Γ̄. □

We first define, for each φ ∈ C and for all 0 < a ≤ ∞, the function φ̃ : [−r, a] → ℝⁿ, an extension of φ from the interval [−r, 0] to the interval [−r, a], by

  φ̃(t) = { φ(0) for t ≥ 0;  φ(t) for t ∈ [−r, 0) }.   (2.8)

Again using this convention, we define, for each t ≥ 0, φ̃_t : [−r, 0] → ℝⁿ by φ̃_t(θ) = φ̃(t + θ), θ ∈ [−r, 0], and

  T̃(t)φ = φ̃_t,  t ≥ 0.   (2.9)
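Definitions (2.8) and (2.9) translate directly into code. A minimal sketch (the sample φ and all names are illustrative) that checks the semigroup identity T̃(s) ∘ T̃(t) = T̃(t + s) on a grid:

```python
import numpy as np

def T(t, phi):
    """Shift operator (2.9) built on the extension (2.8):
    (T(t)phi)(theta) = phi_tilde(t + theta), with phi frozen at phi(0) past 0."""
    return lambda theta: phi(min(t + theta, 0.0))

r = 1.0
phi = lambda theta: np.cos(3.0 * theta)          # an element of C[-1, 0]
thetas = np.linspace(-r, 0.0, 101)
s, t = 0.3, 0.4
lhs = [T(s, T(t, phi))(th) for th in thetas]     # T(s) applied after T(t)
rhs = [T(s + t, phi)(th) for th in thetas]       # T(s + t) directly
assert np.allclose(lhs, rhs)                     # semigroup property
assert max(abs(v) for v in rhs) <= 1.0 + 1e-12   # sup |T(t)phi| <= sup |phi|
```

The `min(t + theta, 0.0)` clamp is exactly the "freeze at the right endpoint" convention of (2.8), which is what makes the family a semigroup rather than a group.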

Theorem 2.2.5 The family of bounded linear operators {T̃(t), t ≥ 0} defined in (2.9) forms a C₀-semigroup of operators on C.

Proof. It is clear that for each t ≥ 0, T̃(t) : C → C defined in (2.9) is a linear operator. To show that it is bounded, note that T̃(t)φ(θ) = φ(0) for all θ ∈ [−r, 0] with t + θ ≥ 0, and T̃(t)φ(θ) = φ(t + θ) for all θ ∈ [−r, 0] with t + θ ∈ [−r, 0). Therefore, for all t ≥ 0,

  ‖T̃(t)φ‖ = sup_{θ∈[−r,0]} |T̃(t)φ(θ)| ≤ sup_{θ∈[−r,0]} |φ(θ)| = ‖φ‖,  ∀φ ∈ C.

We then show that the family {T̃(t), t ≥ 0} satisfies Conditions (i)-(iii) of Definition 2.1.2. It is clear that T̃(0)φ = φ for all φ ∈ C, i.e., T̃(0) = I (the identity operator). Now, for s, t ≥ 0, φ ∈ C, and θ ∈ [−r, 0], we have

  T̃(s) ∘ T̃(t)φ(θ) = T̃(s)(φ̃_t)(θ)
   = { (φ̃_s)(0) = φ(0) for t + θ ≥ 0;  (φ̃_s)(t + θ) for −r ≤ t + θ < 0 }
   = { φ(0) for t + s + θ ≥ 0;  φ(t + s + θ) for −r ≤ t + s + θ < 0 }
   = (φ̃_{t+s})(θ);

that is, T̃(s) ∘ T̃(t) = T̃(t) ∘ T̃(s) = T̃(t + s).

Let φ ∈ C be given. For each a > 0, the function φ̃ : [−r, a] → ℝⁿ as defined in (2.8) is uniformly continuous. Therefore, for each ε > 0, there exists a δ > 0 such that 0 < t < δ implies

  |φ̃(t + θ) − φ̃(θ)| < ε,  ∀θ ∈ [−r, 0].

Therefore, ‖T̃(t)φ − φ‖ < ε whenever 0 < t < δ. This shows that lim_{t↓0} ‖T̃(t)φ − φ‖ = 0, and proves that the family of bounded linear operators {T̃(t), t ≥ 0} forms a C₀-semigroup of operators on C. □

The infinitesimal generator Ã and its domain D(Ã) for the semigroup {T̃(t), t ≥ 0} are given as follows.

Theorem 2.2.6 The domain D(Ã) of the infinitesimal generator Ã of {T̃(t), t ≥ 0} is given by

  D(Ã) = {φ | φ̇ ∈ C, φ̇(0) = 0},  and Ãφ = φ̇.

Proof. Let φ ∈ D(Ã) and put ϕ = Ã(φ). From the definition of T̃(t) and

  0 = lim_{t↓0} ‖(1/t)(T̃(t)φ − φ) − ϕ‖ = lim_{t↓0} sup_{θ∈[−r,0]} |(1/t)(T̃(t)φ(θ) − φ(θ)) − ϕ(θ)|,

it follows that φ is right-differentiable on [−r, 0) with φ̇(θ+) = ϕ(θ) and, moreover, that necessarily ϕ(0) = 0. The continuity of ϕ implies that φ is actually differentiable and φ̇ = ϕ.

Conversely, suppose that φ is continuously differentiable on [−r, 0] and φ̇(0) = 0. Define φ(t) = φ(0) for t ≥ 0; then

  |(1/t)(φ(t + θ) − φ(θ)) − φ̇(θ)| = |(1/t) ∫₀^t (φ̇(θ + s) − φ̇(θ)) ds|

converges to zero, as t ↓ 0, uniformly for θ ∈ [−r, 0]. □
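Theorem 2.2.6 can be spot-checked with finite differences: for a C¹ function φ with φ̇(0) = 0, the quotient (T̃(t)φ − φ)/t approaches φ̇ uniformly. A small sketch (the sample φ and the error tolerance are illustrative assumptions):

```python
import numpy as np

r = 1.0
phi = lambda th: np.cos(np.pi * th)              # phi_dot(0) = 0, as required
dphi = lambda th: -np.pi * np.sin(np.pi * th)    # its derivative

def T_shift(t, th):
    """(T~(t)phi)(theta) = phi(min(t + theta, 0)), cf. (2.8)-(2.9)."""
    return phi(np.minimum(t + th, 0.0))

thetas = np.linspace(-r, 0.0, 1001)
for t in (1e-2, 1e-3, 1e-4):
    quotient = (T_shift(t, thetas) - phi(thetas)) / t
    err = np.max(np.abs(quotient - dphi(thetas)))
    assert err < 20.0 * t + 1e-8                 # error shrinks linearly in t
```

If φ̇(0) ≠ 0 the quotient near θ = 0 stays bounded away from φ̇, which is why the boundary condition φ̇(0) = 0 appears in D(Ã).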

Define the family of shift operators {T(t), t ≥ 0} on C_b(C) by

  T(t)Ψ(φ) = Ψ(T̃(t)φ),  Ψ ∈ C_b(C), φ ∈ C.   (2.10)

The following result establishes that the family of shift operators {T(t), t ≥ 0} forms a contractive C₀-semigroup on C_b(C).

Proposition 2.2.7 The family of shift operators {T(t), t ≥ 0} forms a contractive C₀-semigroup on C_b(C) such that for each φ ∈ C,

  lim_{t→0+} T(t)(Φ)(φ) = Φ(φ),  ∀Φ ∈ C_b(C).

Proof. Let s, t ≥ 0, φ ∈ C, Φ ∈ C_b(C), and θ ∈ [−r, 0]. Then from the definition of {T(t), t ≥ 0} and Theorem 2.2.5, we have

  T(t)(T(s)(Φ))(φ) = T(t)Φ(T̃(s)φ) = Φ(T̃(t) ∘ T̃(s)φ) = Φ(T̃(t + s)φ).

Hence,

  T(t)(T(s)(Φ))(φ) = T(t + s)(Φ)(φ),  ∀s, t ≥ 0, φ ∈ C.

Therefore, T(t) ∘ T(s) = T(t + s). Since lim_{t↓0} T̃(t)φ = φ, it is clear that lim_{t↓0} T(t)(Φ)(φ) = Φ(φ) for each φ ∈ C, Φ ∈ C_b(C). The fact that T(t) : C_b(C) → C_b(C) is a contractive map for each t ≥ 0 can be easily proved and is therefore omitted here. □

In the following, we define the infinitesimal generator S : C_b(C) → C_b(C) of the C₀-semigroup {T(t), t ≥ 0} by

  (SΦ)(φ) = lim_{t↓0} [T(t)Φ(φ) − Φ(φ)]/t = lim_{t↓0} [Φ(T̃(t)φ) − Φ(φ)]/t = lim_{t↓0} [Φ(φ̃_t) − Φ(φ)]/t,  φ ∈ C.   (2.11)

Let D̂(S) be the set of all Φ ∈ C_b(C) such that the above limit exists for each φ ∈ C.

Remark 2.2.8 Dynkin's formula (see Theorem 2.4.1) for a functional Φ : C → ℝ of the C-valued Markovian solution process {x_s, s ∈ [0, T]} of (1.27) requires that Φ ∈ C²_lip(C). It is important to point out here that this condition, although smooth enough, is not sufficient to ensure that Φ ∈ D̂(S). To illustrate this, we consider only the case where n = 1 and choose Φ : C[−r, 0] → ℝ such that Φ(φ) = φ(−r̄) for any φ ∈ C[−r, 0], where r̄ ∈ (0, r). Then

  DΦ(φ)(ϕ) = ϕ(−r̄),  D²Φ(φ)(ψ, ϕ) = 0,

and D²Φ : C[−r, 0] → C†[−r, 0] is globally Lipschitz (i.e., Φ ∈ C²_lip(C[−r, 0])). However,

  S(Φ)(φ) = lim_{t↓0} [Φ(φ̃_t) − Φ(φ)]/t = lim_{t↓0} [φ̃(t − r̄) − φ(−r̄)]/t,

which exists only if φ has a right derivative at −r̄.

Lemma 2.2.9 For Fréchet differentiable Φ ∈ C_b(C) and φ ∈ D(Ã), the infinitesimal generator S : D̂(S) ⊂ C_b(C) → C_b(C) of the contractive C₀-semigroup {T(t), t ≥ 0} is given by

  S(Φ)(φ) = DΦ(φ)(Ã(φ)).


Proof. The proof is straightforward and given as follows:

  (SΦ)(φ) = lim_{t↓0} [Φ(φ̃_t) − Φ(φ)]/t
   = lim_{t↓0} (1/t)[Φ(φ + (φ̃_t − φ)) − Φ(φ)]
   = lim_{t↓0} (1/t)[DΦ(φ)(φ̃_t − φ) + o(‖φ̃_t − φ‖)]
   = lim_{t↓0} [DΦ(φ)((φ̃_t − φ)/t) + (o(‖φ̃_t − φ‖)/‖φ̃_t − φ‖)(‖φ̃_t − φ‖/t)]
   = DΦ(φ)(lim_{t↓0} (φ̃_t − φ)/t) + lim_{t↓0} (o(‖φ̃_t − φ‖)/‖φ̃_t − φ‖)(‖φ̃_t − φ‖/t)
   = DΦ(φ)(Ã(φ)). □

Example. As an example of a function Ψ : C → ℝ with Ψ ∈ D̂(S), let

  Ψ(φ) = aφ²(0) + b ∫_{−r}^{0} φ²(θ) dθ,

where a and b are real-valued constants. Then Ψ ∈ D̂(S), with

  (SΨ)(φ) = lim_{t↓0} [Ψ(φ̃_t) − Ψ(φ)]/t = b[φ²(0) − φ²(−r)].

More examples of (SΦ)(φ) can be found in Section 3.6 of Chapter 3, where we prove that the value function is a unique viscosity solution of the HJBE.
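This limit can be verified by a finite-difference computation. A minimal sketch (the constants a, b and the sample φ are illustrative assumptions):

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b, r = 2.0, 3.0, 1.0
phi = lambda th: np.sin(th) + 1.0
grid = np.linspace(-r, 0.0, 4001)

def Psi(f):
    """Psi(f) = a f(0)^2 + b * integral of f^2 over [-r, 0]."""
    return a * f(0.0) ** 2 + b * trap(f(grid) ** 2, grid)

t = 1e-4
phi_t = lambda th: phi(np.minimum(th + t, 0.0))   # the shifted segment
fd = (Psi(phi_t) - Psi(phi)) / t                  # difference quotient
exact = b * (phi(0.0) ** 2 - phi(-r) ** 2)        # the claimed (S Psi)(phi)
assert abs(fd - exact) < 5e-2
```

The a-term drops out because the shifted segment keeps the same value at 0; only the sliding integral window contributes, producing the boundary terms b[φ²(0) − φ²(−r)].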

2.3 The Space M

Recall that M ≡ ℝ × L²_ρ((−∞, 0]; ℝ) is the ρ-weighted Hilbert space equipped with the Hilbertian inner product ⟨·, ·⟩_M : M × M → ℝ defined by

  ⟨(x, φ), (y, ϕ)⟩_M = xy + ∫_{−∞}^{0} φ(θ)ϕ(θ)ρ(θ) dθ.

As usual, the Hilbertian norm ‖·‖_M : M → [0, ∞) is defined by ‖(x, φ)‖_M = √⟨(x, φ), (x, φ)⟩_M.

Remark 2.3.1 Since we are working with M, in which two functions in L²_ρ((−∞, 0]; ℝ) are equal if they differ only on a set of Lebesgue measure zero, we can and will assign φ(0) = x for any x ∈ ℝ and write a typical element of M as (φ(0), φ) instead of (x, φ).


In the following, we consider the differential calculus outlined in Section 2.2 for the space Ξ = M. First, for the benefit of readers who are not familiar with the theory of infinite-dimensional Hilbert spaces, we note that M*, the space of bounded linear functionals on M, can be identified with M by the Riesz representation theorem, restated here.

Theorem 2.3.2 Φ ∈ M* if and only if there exists a unique (ϕ(0), ϕ) ∈ M such that

  Φ(φ(0), φ) = ⟨(ϕ(0), ϕ), (φ(0), φ)⟩_M ≡ ϕ(0)φ(0) + ∫_{−∞}^{0} ϕ(θ)φ(θ)ρ(θ) dθ,  ∀(φ(0), φ) ∈ M.

Second, if Φ ∈ C²(M), then the actions of the first-order Fréchet derivative DΦ(φ(0), φ) and the second-order Fréchet derivative D²Φ(φ(0), φ) can be expressed as

  DΦ(φ(0), φ)(ϕ(0), ϕ) = ϕ(0)∂ₓΦ(φ(0), φ) + D_φΦ(φ(0), φ)ϕ

and

  D²Φ(φ(0), φ)((ϕ(0), ϕ), (ς(0), ς)) = ϕ(0)∂ₓ²Φ(φ(0), φ)ς(0) + ς(0)∂ₓD_φΦ(φ(0), φ)ϕ + ϕ(0)∂ₓD_φΦ(φ(0), φ)ς + D_φ²Φ(φ(0), φ)(ϕ, ς),

where ∂ₓΦ and ∂ₓ²Φ are, respectively, the first- and second-order partial derivatives of Φ with respect to its first variable x ∈ ℝ; D_φΦ and D_φ²Φ are, respectively, the first- and second-order Fréchet derivatives with respect to its second variable φ ∈ L²_ρ((−∞, 0]; ℝ); and ∂ₓD_φΦ is the second-order derivative taken first with respect to φ ∈ L²_ρ((−∞, 0]; ℝ) in the Fréchet sense and then with respect to the first variable x ∈ ℝ.

2.3.1 The Weighting Function ρ

Through the end of this monograph, let ρ : (−∞, 0] → [0, ∞) be an influence function with the relaxation property, satisfying Assumption 1.4.1 of Chapter 1, which is repeated here for convenience.

Assumption 2.3.3
1. ρ is summable on (−∞, 0], that is,

  0 < ∫_{−∞}^{0} ρ(θ) dθ < ∞.

2. For every λ ≤ 0, one has

  K̄(λ) = ess sup_{θ∈(−∞,0]} ρ(θ + λ)/ρ(θ) ≤ K̄ < ∞,
  K(λ) = ess sup_{θ∈(−∞,0]} ρ(θ)/ρ(θ + λ) < ∞.


Proposition 2.3.4 If ρ : (−∞, 0] → [0, ∞) satisfies Assumption 2.3.3, then the following conditions are satisfied:
1. ρ is essentially bounded on (−∞, 0].
2. ρ is essentially strictly positive on (−∞, 0).
3. θρ(θ) → 0 as θ → −∞.

Proof. See Coleman and Mizel [CM68]. □

Example 2.3.5 The following functions ρ : (−∞, 0] → [0, ∞) satisfy Assumption 2.3.3:

(i) Let ρ(θ) = e^θ. Then

  0 < ∫_{−∞}^{0} ρ(θ) dθ = ∫_{−∞}^{0} e^θ dθ = 1 < ∞,
  K̄(λ) = ess sup_{θ∈(−∞,0]} e^{θ+λ}/e^θ = e^λ ≤ 1, ∀λ ≤ 0,
  K(λ) = ess sup_{θ∈(−∞,0]} e^θ/e^{θ+λ} = e^{−λ} < ∞, ∀λ ≤ 0.

(ii) Let ρ(θ) = 1/(1 + θ²), −∞ < θ ≤ 0. Then

  0 < ∫_{−∞}^{0} ρ(θ) dθ = π/2 < ∞,

and K̄(λ) and K(λ) are likewise finite for every λ ≤ 0.

To show the semigroup property of the shift operators {T(t), t ≥ 0} on C_b(M), let t, s ≥ 0, (φ(0), φ) ∈ M, and Ψ ∈ C_b(M) be given. Then

  T(t)(T(s)(Ψ))(φ(0), φ) = T(t)(Ψ)(T̃(s)(φ(0), φ)) = Ψ(T̃(t) ∘ T̃(s)(φ(0), φ)) = Ψ(T̃(t + s)(φ(0), φ)) = T(t + s)(Ψ)(φ(0), φ). □
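Assumption 2.3.3 for the two weighting functions of Example 2.3.5 can be spot-checked numerically. A sketch (the essential suprema are approximated on a finite grid, the integral is truncated at −60, and the shift λ = −1 is an arbitrary sample):

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

thetas = np.linspace(-60.0, 0.0, 600001)
lam = -1.0                                    # a sample shift, lambda <= 0

for rho in (lambda th: np.exp(th), lambda th: 1.0 / (1.0 + th ** 2)):
    w = rho(thetas)
    assert 0.0 < trap(w, thetas) < np.inf     # condition 1: summable
    ratio_bar = rho(thetas + lam) / w         # rho(theta+lam)/rho(theta)
    ratio = w / rho(thetas + lam)             # rho(theta)/rho(theta+lam)
    assert np.max(ratio_bar) <= 1.0 + 1e-12   # K-bar(lam) <= 1 for both here
    assert np.isfinite(np.max(ratio))         # K(lam) finite
```

For e^θ the two ratios are the constants e^λ and e^{−λ}; for 1/(1 + θ²) they vary with θ but remain bounded, which is the content of condition 2.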

We denote the infinitesimal generator associated with the semigroup by

  S(Ψ)(φ(0), φ) = lim_{t↓0} [Ψ(φ(0), φ̃_t) − Ψ(φ(0), φ)]/t,  (φ(0), φ) ∈ M.   (2.14)

Let D̂(S) be the set of all Ψ ∈ C_b(M) such that the above limit exists for each (φ(0), φ) ∈ M.

2.4 Itô and Dynkin Formulas

Let D̂(S) be the collection of Φ : [0, T] × Ξ → ℝ such that Φ(t, ·) ∈ D̂(S) for each t ∈ [0, T], where Ξ is either Ξ = C or Ξ = M. In this section, we investigate Itô's and Dynkin's formulas for Φ(s, x_s), where Φ ∈ C^{1,2}_lip([0, T] × C) ∩ D̂(S) and {x_s, s ∈ [t, T]} is the C-valued segment process for (1.27). Similar formulas will be developed for Φ(S(s), S_s), where Φ ∈ C²_lip(M) ∩ D̂(S) and {(S(s), S_s), s ≥ 0} is the M-valued segment process for (1.43). The details of the formulas are given in the next two subsections.

2.4.1 {x_s, s ∈ [t, T]}

Let {x_s, s ∈ [t, T]} be the C-valued segment process for (1.27) with the initial datum (t, ψ) ∈ [0, T] × C. In this subsection, we establish the infinitesimal generator that characterizes the (strong) Markovian property of {x_s, s ∈ [t, T]}. If Ψ ∈ C²_lip(C), we recall from Lemmas 2.2.3 and 2.2.4 that its first- and second-order Fréchet derivatives DΨ(ψ) ∈ C* and D²Ψ(ψ) ∈ C† at ψ ∈ C can be extended to DΨ(ψ) ∈ (C ⊕ B)* and D²Ψ(ψ) ∈ (C ⊕ B)†. With this in mind, we state the following.

Theorem 2.4.1 Let {x_s(·; t, ψ), s ∈ [t, T]} be the C-valued segment process for (1.27) through the initial datum (t, ψ) ∈ [0, T] × C. If Φ ∈ C^{1,2}_lip([0, T] × C) ∩ D̂(S), then

  lim_{ε↓0} (1/ε)(E[Φ(t + ε, x_{t+ε}(·; t, ψ))] − Φ(t, ψ)) = (∂_t + A)Φ(t, ψ),   (2.15)

where

  AΦ(t, ψ) = SΦ(t, ψ) + DΦ(t, ψ)(f(t, ψ)1_{0}) + (1/2) Σ_{j=1}^{m} D²Φ(t, ψ)(g(t, ψ)e_j1_{0}, g(t, ψ)e_j1_{0}),   (2.16)

e_j, j = 1, 2, …, m, is the jth unit vector of the standard basis in ℝᵐ, and SΦ(t, ψ) is given by (2.11).

Sketch of Proof. In the following, we only give a sketch of the proof. The detailed proof for the autonomous SHDE with bounded drift and diffusion can be found in Theorem (3.2) of Mohammed [Moh84]. Note that

  lim_{ε↓0} (1/ε)E[Φ(t + ε, x_{t+ε}(·; t, ψ)) − Φ(t, ψ)]
   = lim_{ε↓0} (1/ε)E[Φ(t + ε, x_{t+ε}(·; t, ψ)) − Φ(t, x_{t+ε}(·; t, ψ))] + lim_{ε↓0} (1/ε)E[Φ(t, x_{t+ε}(·; t, ψ)) − Φ(t, ψ)].

Since x_{t+ε}(·; t, ψ) → x_t(·; t, ψ) = ψ in C, the first limit on the right-hand side of the last expression becomes

  lim_{ε↓0} (1/ε)E[Φ(t + ε, x_{t+ε}(·; t, ψ)) − Φ(t, x_{t+ε}(·; t, ψ))] = ∂_tΦ(t, ψ).

We therefore need to establish that

  lim_{ε↓0} (1/ε)E[Φ(t, x_{t+ε}(·; t, ψ)) − Φ(t, ψ)] = AΦ(t, ψ).

By Taylor's formula (see Theorem 2.1.1) and using the simpler notation x_{t+ε} = x_{t+ε}(·; t, ψ), we have P-a.s.

  Φ(t, x_{t+ε}) − Φ(t, ψ) = Φ(t, ψ̃_{t+ε}) − Φ(t, ψ) + DΦ(t, ψ̃_{t+ε})(x_{t+ε} − ψ̃_{t+ε}) + R₂(t + ε),

where

  R₂(t + ε) = ∫₀¹ (1 − λ) D²Φ(ψ̃_{t+ε} + λ(x_{t+ε} − ψ̃_{t+ε}))(x_{t+ε} − ψ̃_{t+ε}, x_{t+ε} − ψ̃_{t+ε}) dλ.

Taking the expectation, we obtain

  (1/ε)E[Φ(t, x_{t+ε}) − Φ(t, ψ)] = (1/ε)[Φ(t, ψ̃_{t+ε}) − Φ(t, ψ)] + (1/ε)E[DΦ(t, ψ̃_{t+ε})(x_{t+ε} − ψ̃_{t+ε})] + (1/ε)E[R₂(t + ε)].

The rest of the proof is sketched in the following steps:

Step 1. It is clear from the definition of the operator S (see (2.11)) that

  lim_{ε↓0} (1/ε)[Φ(t, ψ̃_{t+ε}) − Φ(t, ψ)] = SΦ(t, ψ).

Step 2. By the continuity and local boundedness of DΦ, we have

  lim_{ε↓0} (1/ε)E[DΦ(t, ψ̃_{t+ε})(x_{t+ε} − ψ̃_{t+ε})] = lim_{ε↓0} (1/ε)E[DΦ(t, ψ)(x_{t+ε} − ψ̃_{t+ε})] = DΦ(t, ψ)(f(t, ψ)1_{0}).

Step 3. From Lemma (3.5) and the proof of Theorem (3.2) of [Moh84], one can prove that

  lim_{ε↓0} (1/ε)E[R₂(t + ε)] = ∫₀¹ (1 − λ) lim_{ε↓0} (1/ε)E[D²Φ(t, ψ)(x_{t+ε} − ψ̃_{t+ε}, x_{t+ε} − ψ̃_{t+ε})] dλ
   = (1/2) Σ_{j=1}^{m} D²Φ(t, ψ)(g(t, ψ)e_j1_{0}, g(t, ψ)e_j1_{0}).

This completes the sketch of the proof. □

We define tame and quasi-tame functions Φ : C → ℝ next.

Definition 2.4.2 A function Φ : C → ℝ is said to be tame if there is an integer k > 0 such that

  Φ(η) = h(η(−r_k), η(−r_{k−1}), …, η(−r₁))  for all η ∈ C,

where h : (ℝⁿ)ᵏ → ℝ is continuous and 0 ≤ r₁ < r₂ < ⋯ < r_k ≤ r.

The following theorem is due to Mohammed [Moh84].

Theorem 2.4.3 The class of tame functions Φ : C → ℝ defined by Definition 2.4.2 is weakly dense in C_b(C).

Proof. See Theorem IV.4.1 of Mohammed [Moh84]. □


Definition 2.4.4 A function Φ : C → ℝ is said to be quasi-tame if there is an integer k > 0 such that

  Φ(η) = h(∫_{−r}^{0} α₁(η(θ))β₁(θ) dθ, …, ∫_{−r}^{0} α_{k−1}(η(θ))β_{k−1}(θ) dθ, η(0))

for all η ∈ C, where the following hold:
(i) h : (ℝⁿ)ᵏ ≡ ℝⁿ × ⋯ × ℝⁿ → ℝ is twice continuously differentiable and bounded.
(ii) αᵢ : ℝⁿ → ℝⁿ, i = 1, 2, …, k − 1, is C^∞ and bounded.
(iii) βᵢ : [−r, 0] → ℝ, i = 1, 2, …, k − 1, is piecewise C¹ and each derivative β̇ᵢ is absolutely integrable over [−r, 0].

We will use the function q : C → (ℝⁿ)ᵏ defined by

  q(φ) = (∫_{−r}^{0} α₁(φ(θ))β₁(θ) dθ, …, ∫_{−r}^{0} α_{k−1}(φ(θ))β_{k−1}(θ) dθ, φ(0)),

so that a quasi-tame function takes the form Φ(φ) = h ∘ q(φ) = h(q(φ)). The collection of quasi-tame functions Φ : C → ℝ defined in Definition 2.4.4 will be denoted by Q(C, ℝ).

Proposition 2.4.5 Every tame function Φ : C → ℝ can be approximated by a sequence of quasi-tame functions {Φ^(k)}_{k=1}^∞ ⊂ Q(C, ℝ).

Proof. Without loss of generality and to simplify the notation, we assume that the tame function Φ : C → ℝ takes the form Φ(φ) = h(φ(−r̄), φ(0)), where h : ℝⁿ × ℝⁿ → ℝ and 0 < r̄ ≤ r; that is, there is only one delay. We approximate Φ(φ) = h(φ(−r̄), φ(0)) by a sequence of quasi-tame functions {Φ^(k)}_{k=1}^∞ ⊂ Q(C, ℝ) as follows. Through the end of this proof, we define for each φ ∈ C and each k = 1, 2, … the function φ^(k)(−r̄; γ) by

  φ^(k)(−r̄; γ) = k ∫_{−r}^{0} γ(k(−r̄ − ς))φ(ς) dς,

where γ : ℝ → [0, ∞) is the mollifier defined by

  γ(ς) = { c exp(1/((ς − 1)² − 1)) if 0 < ς < 2;  0 otherwise },

and c > 0 is the constant chosen so that ∫₀² γ(ς) dς = 1. It is clear that γ is C^∞ and that

  lim_{k→∞} Φ^(k)(φ) ≡ lim_{k→∞} h(φ^(k)(−r̄; γ), φ(0)) = h(φ(−r̄), φ(0)) = Φ(φ),

since lim_{k→∞} φ^(k)(−r̄; γ) = φ(−r̄). This proves the proposition. □
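The mollification step of the proof can be illustrated numerically: the smoothed point evaluation φ^(k)(−r̄; γ) converges to φ(−r̄) as k grows. A sketch (a symmetric bump kernel, normalized on the grid, stands in for γ; the sample φ is an illustrative assumption):

```python
import numpy as np

r, rbar = 1.0, 0.5
grid = np.linspace(-r, 0.0, 100001)
dg = grid[1] - grid[0]
phi = lambda th: np.exp(th) * np.cos(2.0 * th)    # a sample path segment

def phi_k(k):
    """phi^(k)(-rbar; gamma) = k * int gamma(k(-rbar - s)) phi(s) ds."""
    u = k * (-rbar - grid)
    gamma = np.zeros_like(u)
    inside = np.abs(u) < 0.999                    # bump support, edge excluded
    gamma[inside] = np.exp(1.0 / (u[inside] ** 2 - 1.0))
    kernel = gamma / (np.sum(gamma) * dg)         # unit mass on the grid
    return float(np.sum(kernel * phi(grid)) * dg)

errs = [abs(phi_k(k) - phi(-rbar)) for k in (5, 20, 80)]
assert errs[0] > errs[-1]                         # error decreases with k
assert errs[-1] < 1e-3                            # close to phi(-rbar)
```

Each φ^(k) is an averaged (hence quasi-tame) functional of the segment, while its limit is the tame point evaluation at −r̄.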


If Φ : [0, T] × C → ℝ is a quasi-tame function as defined in Definition 2.4.4, then S_qΦ(t, ψ) and A_qΦ(t, ψ) can be explicitly expressed as follows.

Theorem 2.4.6 Every quasi-tame function on C is in D(S_q). Indeed, if Φ : C → ℝ is of the form Φ(ψ) = h(q(ψ)), ψ ∈ C, where h : (ℝⁿ)ᵏ → ℝ and q : C → (ℝⁿ)ᵏ are as given in Definition 2.4.4, then

  S_qΦ(ψ) = Σ_{j=1}^{k−1} ∇ⱼh(q(ψ))(αⱼ(ψ(0))βⱼ(0) − αⱼ(ψ(−r))βⱼ(−r) − ∫_{−r}^{0} αⱼ(ψ(θ))β̇ⱼ(θ) dθ),

where ∇ⱼ is the gradient operator of h(x₁, …, x_k) with respect to xⱼ ∈ ℝⁿ for j = 1, 2, …, k.

Proof. In proving the statement, we assume that Φ = h ∘ q, where h : (ℝⁿ)² → ℝ is C^∞-bounded and

  q(ψ) = (∫_{−r}^{0} α(ψ(θ))β(θ) dθ, ψ(0))

for some C^∞-bounded map α : ℝⁿ → ℝⁿ and a (piecewise) C¹ function β : [−r, 0] → ℝ. Let 0 < t < r and consider the expression

  (1/t)[Φ(ψ̃_t) − Φ(ψ)] = (1/t)[h(q(ψ̃_t)) − h(q(ψ))] = ∫₀¹ Dh((1 − λ)q(ψ) + λq(ψ̃_t))((1/t)(q(ψ̃_t) − q(ψ))) dλ,

using the mean value theorem for h, with 0 ≤ λ ≤ 1. However,

  (1/t)(q(ψ̃_t) − q(ψ)) = ((1/t)∫_{−r}^{−t} α(ψ(t + θ))β(θ) dθ + (1/t)∫_{−t}^{0} α(ψ(0))β(θ) dθ − (1/t)∫_{−r}^{0} α(ψ(θ))β(θ) dθ, 0)
   = ((1/t)∫_{t−r}^{0} α(ψ(θ))(β(θ − t) − β(θ)) dθ + α(ψ(0))(1/t)∫_{−t}^{0} β(θ) dθ − (1/t)∫_{−r}^{t−r} α(ψ(θ))β(θ) dθ, 0).

Now, α is bounded and β is (piecewise) C¹, so all three terms in the above expression are bounded in t and ψ. Moreover, letting t ↓ 0, we obtain

  lim_{t↓0} (1/t)(q(ψ̃_t) − q(ψ)) = (α(ψ(0))β(0) − α(ψ(−r))β(−r) − ∫_{−r}^{0} α(ψ(θ))β̇(θ) dθ, 0).


As the quantity (1 − λ)q(ψ) + λq(ψ̃_t) is bounded in ψ and continuous in (t, λ), it follows from the above three expressions and the dominated convergence theorem that Φ ∈ D(S_q) and

  S_qΦ(ψ) = DΦ(ψ)(lim_{t↓0} (1/t)(q(ψ̃_t) − q(ψ)))
   = ∇₁h(q(ψ))(α(ψ(0))β(0) − α(ψ(−r))β(−r) − ∫_{−r}^{0} α(ψ(θ))β̇(θ) dθ). □

Theorem 2.4.7 Every quasi-tame function on C is in D(A_q). Indeed, if Φ : C → ℝ is of the form Φ(ψ) = h(q(ψ)), ψ ∈ C, where h : (ℝⁿ)ᵏ → ℝ and q : C → (ℝⁿ)ᵏ are as given in Definition 2.4.4, then

  A_qΦ(ψ) = Σ_{j=1}^{k−1} ∇ⱼh(q(ψ))(αⱼ(ψ(0))βⱼ(0) − αⱼ(ψ(−r))βⱼ(−r) − ∫_{−r}^{0} αⱼ(ψ(θ))β̇ⱼ(θ) dθ)
   + ∇ₖh(q(ψ))(f(t, ψ)) + (1/2) trace[∇²ₖh(q(ψ)) ∘ (g(t, ψ) × g(t, ψ))],

where ∇ⱼ is the gradient operator of h(x₁, …, x_k) with respect to xⱼ ∈ ℝⁿ for j = 1, 2, …, k.

Sketch of Proof. To prove Φ = h ∘ q ∈ D(A_q), we prove Φ = h ∘ q ∈ C²_lip(C) ∩ D(S_q). First, it is not hard to see that Φ ∈ Q(C, ℝ) is C^∞. Also, by applying the chain rule and differentiating under the integral sign, one gets

  DΦ(ψ)(φ) = ∇h(q(ψ))(∫_{−r}^{0} ∇α₁(ψ(θ))(φ(θ))β₁(θ) dθ, …, ∫_{−r}^{0} ∇α_{k−1}(ψ(θ))(φ(θ))β_{k−1}(θ) dθ, φ(0)),

  D²Φ(ψ)(φ, ϕ) = ∇²h(q(ψ))(∇q(ψ)φ, ∇q(ψ)ϕ) + ∇h(q(ψ))(∫_{−r}^{0} ∇²α₁(ψ(θ))(φ(θ), ϕ(θ))β₁(θ) dθ, …, ∫_{−r}^{0} ∇²α_{k−1}(ψ(θ))(φ(θ), ϕ(θ))β_{k−1}(θ) dθ, 0)

for all ψ, φ, ϕ ∈ C.


Since all derivatives of h and αⱼ, 1 ≤ j ≤ k − 1, are bounded, it is easy to see from the above formulas that Φ ∈ C²_lip(C) ∩ D(S_q). From the above two formulas it is also easy to see that the unique weakly continuous extensions of DΦ(ψ) and D²Φ(ψ) to C ⊕ B are given by

  DΦ(ψ)(v1_{0}) = ∇h(q(ψ))(0, 0, …, v) = ∇ₖh(q(ψ)) · v,
  D²Φ(ψ)(v1_{0}, w1_{0}) = ∇²h(q(ψ))((0, 0, …, v), (0, 0, …, w)) = ∇²ₖh(q(ψ))(v, w)

for all v, w ∈ ℝⁿ. The given formula for A_qΦ(ψ) now follows directly from Theorem 2.4.1 and Theorem 2.4.6. □

Theorem 2.4.8 If Φ : C → ℝ is a quasi-tame function as defined in Definition 2.4.4, then we have the following Itô formula:

  Φ(x_s) = Φ(ψ_t) + ∫_t^s A_qΦ(x_λ) dλ + ∫_t^s ∇ₖh(q(x_λ)) · g(λ, x_λ) dW(λ),   (2.17)

where A_q is as defined in Theorem 2.4.7.

Proof. Let y(·) = {y(s), s ∈ [t, T]} be defined as y(s) = q(x_s). Applying the Itô formula (see Theorem 1.2.15), we obtain (2.17). □

2.4.2 {(S(s), S_s), s ≥ 0}

Again, consider the M-valued segment process {(S(t), S_t), t ≥ 0}, where S_t(θ) = S(t + θ), θ ∈ (−∞, 0], for the nonlinear SHDE (1.43) with the initial historical price function (S(0), S₀) = (ψ(0), ψ) ∈ M. We have the following result for its weak infinitesimal generator A. Other results for equations of a similar nature can be found in Arriojas [Arr97] and Mizel and Trutzer [MT84].

Theorem 2.4.9 If Φ ∈ C²_lip(M) ∩ D̂(S) and {(S(s), S_s), s ≥ 0} is the M-valued segment process for (1.43) with an initial historical price function (ψ(0), ψ) ∈ M, then

  lim_{t↓0} (1/t)E[Φ(S(t), S_t) − Φ(ψ(0), ψ)] = AΦ(ψ(0), ψ),   (2.18)

where

  AΦ(ψ(0), ψ) = SΦ(ψ(0), ψ) + (1/2)∂ₓ²Φ(ψ(0), ψ)ψ²(0)g²(ψ) + ∂ₓΦ(ψ(0), ψ)ψ(0)f(ψ),   (2.19)

S(Φ)(ψ(0), ψ) is as defined in (2.14), and ∂ₓΦ(ψ(0), ψ) and ∂ₓ²Φ(ψ(0), ψ) are the first and second partial derivatives of Φ(ψ(0), ψ) with respect to its first variable x = ψ(0).


From a glance at (2.19), it seems that AΦ(ψ(0), ψ) requires only the existence of the first- and second-order partial derivatives ∂ₓΦ and ∂ₓ²Φ. However, a detailed derivation of the formula reveals that a condition stronger than Φ ∈ C²_lip(M) is required.

We have the following Dynkin formula (see Mizel and Trutzer [MT84], Arriojas [Arr97], and Kolmanovskii and Shaikhet [KS96] for similar derivations).

Theorem 2.4.10 Let Φ ∈ C²_lip(M) ∩ D̂(S). Then

  E[e^{−ατ}Φ(S(τ), S_τ)] = Φ(ψ(0), ψ) + E[∫₀^τ e^{−αt}(A − αI)Φ(S(t), S_t) dt]   (2.20)

for every P-a.s. finite G-stopping time τ, where the operators A and S are as described in (2.19) and (2.14).

A function Φ ∈ C²_lip(M) ∩ D̂(S) that has the following special form is referred to as a quasi-tame function:

  Φ(φ(0), φ) = h(q(φ(0), φ)),   (2.21)

where h : ℝ^{k+1} → ℝ is a function h(x, y₁, y₂, …, y_k) such that its partial derivatives ∂ₓh, ∂ₓ²h, ∂_{yᵢ}h, i = 1, 2, …, k, exist and are continuous, and, for all (φ(0), φ) ∈ M,

  q(φ(0), φ) = (φ(0), ∫_{−∞}^{0} η₁(φ(θ))λ₁(θ) dθ, ⋯, ∫_{−∞}^{0} η_k(φ(θ))λ_k(θ) dθ)   (2.22)

for some positive integer k and some functions q ∈ C(M, ℝ^{k+1}), ηᵢ ∈ C^∞(ℝ), λᵢ ∈ C¹((−∞, 0]) with

  lim_{θ→−∞} λᵢ(θ) = λᵢ(−∞) = 0

for i = 1, 2, …, k, and h ∈ C^∞(ℝ^{k+1}) of the form h(x, y₁, y₂, …, y_k). The collection of quasi-tame functions Φ : M → ℝ will be denoted by Q(M, ℝ). Again, the infinitesimal generators S and A restricted to Q(M, ℝ) will be denoted by S_q and A_q, respectively. We have the following Itô formula in the case where Φ ∈ C_b(M) is a quasi-tame function in the sense defined above.

Theorem 2.4.11 Let {(S(s), S_s), s ≥ 0} be the M-valued solution process corresponding to (1.43) with an initial historical price function (ψ(0), ψ) ∈ M. If Φ ∈ C(M) is a quasi-tame function, then Φ ∈ D(A_q) and

  e^{−ατ}Φ(S(τ), S_τ) = Φ(ψ(0), ψ) + ∫₀^τ e^{−αt}(A_q − αI)Φ(S(t), S_t) dt + ∫₀^τ e^{−αt}∂ₓΦ(S(t), S_t)S(t)g(S_t) dW(t)   (2.23)


for all finite G-stopping times τ, where I is the identity operator. Moreover, if Φ ∈ C(M) is of the form described in (2.21) and (2.22), then

  A_qΦ(ψ(0), ψ) = Σ_{i=1}^{k} ∂_{yᵢ}h(q(ψ(0), ψ))(ηᵢ(ψ(0))λᵢ(0) − ∫_{−∞}^{0} ηᵢ(ψ(θ))λ̇ᵢ(θ) dθ)
   + ∂ₓh(q(ψ(0), ψ))ψ(0)f(ψ) + (1/2)∂ₓ²h(q(ψ(0), ψ))ψ²(0)g²(ψ),   (2.24)

where ∂_{yᵢ}h(x, y₁, …, y_k), ∂ₓh(x, y₁, …, y_k), and ∂ₓ²h(x, y₁, …, y_k) are the partial derivatives of h(x, y₁, ⋯, y_k) with respect to the indicated variables.

Proof. An Itô formula for a quasi-tame function Φ : ℝ × L²([−r, 0]) → ℝ of the ℝ × L²([−r, 0])-valued solution process {(x(t), x_t), t ≥ 0} of a stochastic functional differential equation with a bounded delay r > 0 is obtained in Arriojas [Arr97] (the same result can also be obtained from Mohammed [Moh84, Moh98] with some modifications). The same arguments extend easily to the stochastic hereditary differential equation (1.43) with unbounded memory considered in this monograph. To avoid further lengthening the exposition, we omit the proof here. □

In the following, we prove that the above Itô formula also holds for any tame function Φ : C(−∞, 0] → ℝ of the form

  Φ(φ) = h(π(φ)) = h(φ(0), φ(−θ₁), …, φ(−θ_k)),   (2.25)

where C(−∞, 0] is the space of continuous functions φ : (−∞, 0] → ℝ equipped with the uniform topology (see (2.7)), 0 < θ₁ < θ₂ < ⋯ < θ_k < ∞, and h(x, y₁, …, y_k) is such that h ∈ C^∞(ℝ^{k+1}).

Theorem 2.4.12 Let {(S(s), S_s), s ≥ 0} be the M-valued segment process for (1.43) with an initial historical price function (ψ(0), ψ) ∈ M. If Φ : C(−∞, 0] → ℝ is a tame function defined by (2.25), then Φ ∈ D(A_q) and

  e^{−ατ}h(S(τ), S(τ − θ₁), …, S(τ − θ_k)) = h(ψ(0), ψ(−θ₁), …, ψ(−θ_k))
   + ∫₀^τ e^{−αt}(A_q − αI)h(S(t), S(t − θ₁), …, S(t − θ_k)) dt
   + ∫₀^τ e^{−αt}∂ₓh(S(t), S(t − θ₁), …, S(t − θ_k))S(t)g(S_t) dW(t)   (2.26)

for all finite G-stopping times τ, where

  A_qh(ψ(0), ψ(−θ₁), ⋯, ψ(−θ_k)) = ∂ₓh(ψ(0), ψ(−θ₁), ⋯, ψ(−θ_k))ψ(0)f(ψ) + (1/2)∂ₓ²h(ψ(0), ψ(−θ₁), ⋯, ψ(−θ_k))ψ²(0)g²(ψ),   (2.27)


with ∂ₓh and ∂ₓ²h being the first- and second-order derivatives of h(x, y₁, …, y_k) with respect to x.

Proof. Without loss of generality, and to simplify the notation, we assume that h : ℝ × ℝ → ℝ with h(x, y) and that there is only one delay in the function h(φ(0), φ(−θ̄)), for some fixed θ̄ ∈ (0, ∞); write r̄ = θ̄. We approximate the tame function Φ(φ) = h(φ(0), φ(−r̄)) by a sequence of quasi-tame functions {Φ^(k)}_{k=1}^∞ ⊂ Q(C, ℝ) as follows. Through the end of this proof, we define for each φ ∈ C and each k = 1, 2, … the function φ^(k)(−r̄; γ) by

  φ^(k)(−r̄; γ) = k ∫_{−r}^{0} γ(k(−r̄ − ς))φ(ς) dς,

where γ : ℝ → [0, ∞) is the mollifier defined by

  γ(ς) = { c exp(1/(ς² − 1)) if |ς| < 1;  0 if |ς| ≥ 1 },

and c > 0 is the constant chosen so that ∫_{−∞}^{∞} γ(ς) dς = 1. It is clear that γ is C^∞ and that

  lim_{k→∞} Φ^(k)(φ) ≡ lim_{k→∞} h(φ(0), φ^(k)(−r̄; γ)) = h(φ(0), φ(−r̄)) = Φ(φ),

since lim_{k→∞} φ^(k)(−r̄; γ) = φ(−r̄). Moreover,

  S_qh(φ(0), φ^(k)(−r̄; γ)) = ∂_yh(φ(0), φ^(k)(−r̄; γ))(φ(0)kγ(−kr̄) − 2k³ ∫_{−∞}^{0} φ(θ)γ(k(r̄ + θ)) (r̄ + θ)/((k²(r̄ + θ)² − 1)²) dθ),

and by the Lebesgue dominated convergence theorem, we have

  lim_{k→∞} S_qh(φ(0), φ^(k)(−r̄; γ)) = 0.

Therefore, for any almost surely finite G-stopping time τ, we have from Theorem 2.4.11 and the sample-path convergence property of the Itô integrals (see Section 1.2 of Chapter 1) that

  e^{−ατ}h(S(τ), S(τ − r̄)) = lim_{k→∞} e^{−ατ}h(S(τ), S_τ^(k)(−r̄; γ)).


Therefore,

  e^{−ατ}h(S(τ), S(τ − r̄)) = lim_{k→∞} [h(ψ(0), ψ^(k)(−r̄; γ)) + ∫₀^τ e^{−αs}(A_q − αI)h(S(s), S_s^(k)(−r̄; γ)) ds + ∫₀^τ e^{−αs}∂ₓh(S(s), S_s^(k)(−r̄; γ))S(s)g(S_s) dW(s)]
   = h(ψ(0), ψ(−r̄)) + ∫₀^τ e^{−αs}[∂ₓh(S(s), S(s − r̄))S(s)f(S_s) + (1/2)∂ₓ²h(S(s), S(s − r̄))S²(s)g²(S_s) − αh(S(s), S(s − r̄))] ds
   + ∫₀^τ e^{−αs}∂ₓh(S(s), S(s − r̄))S(s)g(S_s) dW(s). □

2.5 Martingale Problem

In this section, we consider a C-valued martingale problem. Much of the material presented here is due to Kallianpur and Mandal [KM02]. Without loss of generality for formulating the martingale problem, we can and will consider the following autonomous SHDE through the end of this section:

  dx(s) = f(x_s) ds + g(x_s) dW(s),  s ≥ 0,   (2.28)

with the initial segment ψ ∈ L²(Ω, C; F(0)) at the initial time t = 0. As before, we assume that the functions f : C → ℝⁿ and g : C → ℝ^{n×m} satisfy the Lipschitz condition: there exists a constant K_lip > 0 such that

  |f(φ) − f(ϕ)| + |g(φ) − g(ϕ)| ≤ K_lip ‖φ − ϕ‖,  ∀φ, ϕ ∈ C.   (2.29)
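A minimal Euler–Maruyama discretization of (2.28) in the scalar case can make the segment process concrete. A sketch (the coefficients f(φ) = sin(φ(−r)) with constant g = 0.2, the constant initial segment, and all step sizes are illustrative assumptions, not the text's):

```python
import numpy as np

rng = np.random.default_rng(0)
r, dt, T = 1.0, 0.01, 2.0
lag = int(round(r / dt))                      # grid points per delay length

f = lambda seg: np.sin(seg[0])                # f(x_s) uses x(s - r) only
g = lambda seg: 0.2                           # constant diffusion (Lipschitz)

n_steps = int(round(T / dt))
x = np.empty(lag + 1 + n_steps)               # path values on [-r, T]
x[: lag + 1] = 1.0                            # initial segment psi == 1

for i in range(n_steps):
    seg = x[i : i + lag + 1]                  # the segment x_s on the grid
    dW = rng.normal(0.0, np.sqrt(dt))         # Brownian increment
    x[lag + i + 1] = x[lag + i] + f(seg) * dt + g(seg) * dW
```

Refining dt (and averaging over many sample paths for weak-error checks) recovers the segment process {x_s} whose law the martingale problem below characterizes.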

Recall from Proposition 2.1.10 that a sequence {Φ^(k)}_{k=1}^∞ ⊂ C_b(Ξ) converges weakly to Φ ∈ C_b(Ξ) if and only if the sequence {Φ^(k)}_{k=1}^∞ is bounded pointwise, that is,

  sup_k ‖Φ^(k)‖_{C_b(Ξ)} < ∞ and lim_{k→∞} Φ^(k)(x) = Φ(x),  ∀x ∈ Ξ.

In this case, we write Φ = (bp) lim_{k→∞} Φ^(k).


Let us consider an operator Γ : D(Γ) ⊂ C_b(C) → C(C). We list the conditions that may be imposed on the operator Γ:

C1. There exists Φ ∈ C(C) such that

  |ΓΨ(φ)| ≤ K_Ψ Φ(φ),  ∀Ψ ∈ D(Γ), φ ∈ C,

where K_Ψ is a constant depending on Ψ.

C2. There exists a countable subset {Ψ^(k)} ⊂ D(Γ) such that

  {(Ψ, Φ^{−1}ΓΨ) | Ψ ∈ D(Γ)} ⊂ closure{(Ψ^(k), Φ^{−1}ΓΨ^(k)) | k ≥ 1},

where the closure is taken in the bounded pointwise (bp) convergence topology.

C3. D(Γ) is an algebra that separates points in C and contains the constant functions.

Let µ be a probability measure on (C, B(C)). We define a solution to the martingale problem (see Mandal and Kallianpur [MK02]) as follows.

Definition 2.5.1 A C-valued process X(·) = {X(s), s ∈ [0, T]} defined on some complete filtered probability space (Ω, F, P, F) is said to be a solution to the martingale problem for (Γ, µ) if:

(i) P ∘ X^{−1}(0) = µ;
(ii) ∫₀^s E[Φ(X(t))] dt < ∞ for all s ∈ [0, T];
(iii) for all Ψ ∈ D(Γ),

  M^Ψ(s) ≡ Ψ(X(s)) − Ψ(X(0)) − ∫₀^s (ΓΨ)(X(t)) dt

is an F-martingale.

The martingale problem for (Γ, µ) is said to be well posed in a class C of C-valued processes if, whenever X^(i)(·), i = 1, 2, are two solutions to the martingale problem for (Γ, µ), X^(1)(·) and X^(2)(·) have the same probability laws.

The following result states that uniqueness of the solution of a martingale problem implies the Markov property.

Lemma 2.5.2 Suppose that the operator Γ satisfies Conditions C1 and C2. Furthermore, assume that the martingale problem for (Γ, δ_ψ) is well posed in the class of r.c.l.l. (right-continuous with left-hand limits) solutions for every ψ ∈ C, where δ_ψ is the Dirac measure at ψ. Then the solution X(·) to the martingale problem for (Γ, µ) is an F-Markov process. Furthermore, if A is the infinitesimal generator of X(·), then D(Γ) ⊂ D(A), and Γ and A coincide on D(Γ).

Proof. This is a very specialized result. We therefore omit the proof here and refer interested readers to Theorems IV.4.2 and IV.4.6 of Ethier and Kurtz [EK86] and Remark 2.1 of Horowitz and Karandikar [HK90]. □


2 Stochastic Calculus

We recall below the definition (see Definition 2.4.4) of a quasi-tame function Φ ∈ Q(C, ℝ) and its infinitesimal generator A_q as given in Theorem 2.4.7:

    A_qΦ(ψ) = S_qΦ(ψ) + ∇_k h(q(ψ))(f(t, ψ))
              + (1/2) trace[ ∇²_k h(q(ψ)) ◦ (g(t, ψ)g^T(t, ψ)) ],          (2.30)

where

    S_qΦ(ψ) = Σ_{j=1}^{k−1} ∇_j h(q(ψ)) ( α_j(ψ(0))β_j(0) − α_j(ψ(−r))β_j(−r)
              − ∫_{−r}^0 α_j(ψ(θ)) β̇_j(θ) dθ )          (2.31)

and ∇_j is the gradient operator of h(x_1, ..., x_k) with respect to x_j ∈ ℝⁿ for j = 1, 2, ..., k.

Proposition 2.5.3 Suppose that the operator A_q is as defined in (2.30). Then A_q satisfies Conditions C1-C3.

Proof. Suppose that Φ ∈ D(A_q) is a quasi-tame function given by Φ(φ) = h(q(φ)), where h(·) : (ℝⁿ)^k → ℝ is twice continuously differentiable and bounded,

    q(φ) = ( ∫_{−r}^0 α_1(φ(θ))β_1(θ) dθ, ..., ∫_{−r}^0 α_{k−1}(φ(θ))β_{k−1}(θ) dθ, φ(0) ),

and
(i) α_i : ℝⁿ → ℝⁿ, i = 1, 2, ..., k − 1, is C^∞ and bounded;
(ii) β_i : [−r, 0] → ℝ, i = 1, 2, ..., k − 1, is piecewise C¹ and each of its derivatives β̇_i is absolutely integrable over [−r, 0].

Then for φ ∈ C,

    |A_qΦ(φ)| ≤ Σ_{j=1}^{k−1} |∇_j h(q(φ))|
                × | α_j(φ(0))β_j(0) − α_j(φ(−r))β_j(−r) − ∫_{−r}^0 α_j(φ(θ)) β̇_j(θ) dθ |
                + |∇_k h(q(φ))(f(t, φ))| + (1/2) |∇²_k h(q(φ))| |g(t, φ)|²
              ≤ K Ψ(φ),

2.5 Martingale Problem


where K is a constant depending on h, α_j, β_j, j = 1, 2, ..., k − 1, and Ψ(φ) = 1 + |f(φ)| + |g(φ)|². Therefore, C1 is satisfied by A_q.

To see that C2 holds, note that C_b^∞((ℝⁿ)^k) is separable in the |h| + |∇h| + |∇²h| norm, C_b^∞(ℝⁿ) is separable in the |α| norm, and C¹([−r, 0], ℝⁿ) is separable in the |β| + |β̇| norm. This implies the existence of a countable set of quasi-tame functions {Φ^(j)}_{j=1}^∞ ⊂ D(A_q) such that

    {(Φ, Ψ^{−1}(A_qΦ)) | Φ ∈ D(A_q)} ⊂ bp-closure({(Φ^(j), Ψ^{−1}(A_qΦ^(j))) | j ≥ 1}).

This proves that C2 holds. That D(A_q) is an algebra follows from Mohammed [Moh84]. It is also easy to check that D(A_q) separates points in C and contains the constant functions, which implies that A_q satisfies C3. □

We therefore have the following theorem from Lemma 2.5.2 and the previous proposition.

Theorem 2.5.4 Let {x_s, s ∈ [0, T]} be the C-valued segment process for (2.28), and let Φ : C → ℝ be a quasi-tame function as defined in Definition 2.4.4. Then

    M^Φ(s) ≡ Φ(x_s) − Φ(ψ) − ∫_0^s (A_qΦ)(x_t) dt

is an F-martingale.

We establish the C-valued segment process {x_s, s ∈ [0, T]} as the unique solution to the martingale problem below.

Theorem 2.5.5 Suppose ψ ∈ L²(Ω, C) and the operator A_q is as defined in (2.30). Then the martingale problem for (A_q, ψ) is well posed.

Proof. Let {X(s), s ∈ [0, T]} be a C-valued process defined on a complete filtered probability space (Ω̂, F̂, P̂, F̂) and a progressively measurable solution to the (A_q, ψ)-martingale problem; that is, for a quasi-tame function Φ as in Definition 2.4.4, {Φ(X(s)), s ∈ [0, T]} is a semimartingale, given by

    Φ(X(s)) = Φ(X(0)) + ∫_0^s (A_qΦ)(X(t)) dt + M^Φ(s)          (2.32)

and X(0) = ψ. We shall show that X(s) = x̂_s for some n-dimensional continuous process {x̂(s), s ∈ [−r, T]} satisfying a SHDE of the form (2.28) in the weak sense. Then, by the uniqueness of the solution to (2.28), the distribution of X(s) is the same as that of x_s, proving the well-posedness of the martingale problem for (A_q, ψ).

From (2.32) it follows that

    ⟨M^Φ⟩_s = ∫_0^s Γ(Φ, Φ)(X(t)) dt,          (2.33)

where

    Γ(Φ, Φ) = A_q(Φ²) − 2Φ(A_qΦ).          (2.34)
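The operator Γ(Φ, Φ) in (2.34) is the classical carré du champ. As a consistency check (a one-dimensional diffusion analogue, not the C-valued setting), for a generator AΦ = f Φ′ + ½g²Φ″ one computes

```latex
\begin{aligned}
A(\Phi^2) - 2\Phi\,A\Phi
  &= f\,(\Phi^2)' + \tfrac12 g^2 (\Phi^2)'' - 2\Phi\bigl(f\,\Phi' + \tfrac12 g^2 \Phi''\bigr)\\
  &= 2f\,\Phi\Phi' + \tfrac12 g^2\bigl(2(\Phi')^2 + 2\Phi\Phi''\bigr) - 2f\,\Phi\Phi' - g^2\,\Phi\Phi''\\
  &= g^2\,(\Phi')^2 \;\ge\; 0,
\end{aligned}
```

so Γ(Φ, Φ) captures exactly the quadratic-variation (diffusion) contribution; this is why the vanishing of Γ(Φ, Φ) below forces the martingale part M^Φ to vanish identically.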

Applying Itô's formula (see Theorem 2.4.8) to the semimartingale process {Φ(X(s)), s ∈ [0, T]}, we have, for γ ∈ C¹[0, T],

    γ(s)Φ(X(s)) = γ(0)Φ(X(0)) + ∫_0^s γ̇(t)Φ(X(t)) dt
                  + ∫_0^s γ(t)(A_qΦ)(X(t)) dt + ∫_0^s γ(t) dM^Φ(t).          (2.35)

Now, suppose α ∈ C_b^∞(ℝⁿ) and β ∈ C¹[−r, 0]. Let ∆ = ∆(α, β) be a bound for the integral ∫_{−r}^0 α(φ(θ))β(θ) dθ, φ ∈ C. Suppose ς_∆ : ℝ → [0, 1] is a C_b^∞-bump function such that

    ς_∆(x) = 1 if |x| ≤ ∆,    0 < ς_∆(x) ≤ 1 if ∆ < |x| < ∆ + 1,    ς_∆(x) = 0 if |x| ≥ ∆ + 1.

Let h(x) = Σ_{i=1}^n x_i ς_∆(x_i) for x = (x_1, x_2, ..., x_n) ∈ ℝⁿ, so that h ∈ C_b^∞(ℝⁿ) and h(x) = Σ_{i=1}^n x_i for x = (x_1, x_2, ..., x_n) ∈ [−∆, ∆]ⁿ. Let h* ∈ C_b^∞(ℝⁿ × ℝⁿ) be defined by h*(x, y) = h(x). Consider a quasi-tame function Φ of the form defined in Definition 2.4.4 with k = 2, given by

    Φ(φ) = h*( ∫_{−r}^0 α(φ(θ))β(θ) dθ, φ(0) )
         = h( ∫_{−r}^0 α(φ(θ))β(θ) dθ )
         = Σ_{i=1}^n ∫_{−r}^0 α_i(φ(θ))β(θ) dθ.          (2.36)

Then, from Theorem 2.4.7, we have

    A_qΦ(φ) = Σ_{i=1}^n ( α_i(φ(0))β(0) − α_i(φ(−r))β(−r) − ∫_{−r}^0 α_i(φ(θ)) β̇(θ) dθ ),

and similarly, by the chain rule,

    (A_qΦ²)(φ) = 2h( ∫_{−r}^0 α(φ(θ))β(θ) dθ )
                 × Σ_{j=1}^n ∂_j h( ∫_{−r}^0 α(φ(θ))β(θ) dθ )
                 × ( α_j(φ(0))β(0) − α_j(φ(−r))β(−r) − ∫_{−r}^0 α_j(φ(θ)) β̇(θ) dθ )

               = 2Φ(φ)(A_qΦ(φ)).

This shows that Γ(Φ, Φ)(φ) = (A_qΦ²)(φ) − 2Φ(φ)(A_qΦ(φ)) = 0 and hence, from (2.33), ⟨M^Φ⟩_s = 0. Therefore, M^Φ(s) = 0 a.s. for all s ∈ [0, T]. From (2.35), we then have, for 0 ≤ t ≤ s ≤ T,

    γ(s)Φ(X(s)) = γ(t)Φ(X(t)) + ∫_t^s γ̇(λ)Φ(X(λ)) dλ + ∫_t^s γ(λ)(A_qΦ)(X(λ)) dλ.

Since X(s) is C-valued, we write X(s, ·) : [−r, 0] → ℝⁿ as a continuous function. Using the special forms of Φ and A_qΦ given above (the sums over components being implicit), we have

    ∫_{−r}^0 α(X(s, θ))γ(s)β(θ) dθ − ∫_{−r}^0 α(X(t, θ))γ(t)β(θ) dθ
      = ∫_t^s ∫_{−r}^0 α(X(λ, θ))γ̇(λ)β(θ) dθ dλ
        + ∫_t^s γ(λ)( α(X(λ, 0))β(0) − α(X(λ, −r))β(−r) − ∫_{−r}^0 α(X(λ, θ))β̇(θ) dθ ) dλ
      = ∫_t^s α(X(λ, 0))γ(λ)β(0) dλ − ∫_t^s α(X(λ, −r))γ(λ)β(−r) dλ
        + ∫_t^s ∫_{−r}^0 α(X(λ, θ))[γ̇(λ)β(θ) − γ(λ)β̇(θ)] dθ dλ.

Letting

    G(s, θ) = γ(s)β(θ),   ∀(s, θ) ∈ [0, T] × [−r, 0],

we may rewrite the above equation in the following form:

    ∫_{−r}^0 α(X(s, θ))G(s, θ) dθ − ∫_{−r}^0 α(X(t, θ))G(t, θ) dθ
      = ∫_t^s α(X(λ, 0))G(λ, 0) dλ − ∫_t^s α(X(λ, −r))G(λ, −r) dλ
        + ∫_t^s ∫_{−r}^0 α(X(λ, θ))(∂_λG − ∂_θG)(λ, θ) dθ dλ.          (2.37)


By linearity of (2.37), this identity holds for all functions G of the form G(s, θ) = Σ_{i=1}^l γ_i(s)β_i(θ), where γ_i ∈ C¹[0, T] and β_i ∈ C¹[−r, 0], i = 1, 2, ..., l. Then, by standard limiting arguments, it can be shown that (2.37) holds for all G ∈ C^{1,1}([0, T] × [−r, 0]).

Define the process x̂(·) = {x̂(s), s ∈ [−r, T]} by

    x̂(s) = X(s, 0) for s ∈ [0, T],    x̂(s) = ψ(s) for s ∈ [−r, 0].

To show that X(s) = x̂_s, it suffices to show that, for s_1, s_2 ∈ [0, T] and θ_1, θ_2 ∈ [−r, 0],

    X(s_1, θ_1) = X(s_2, θ_2)  if s_1 + θ_1 = s_2 + θ_2.

For, if s ≥ 0 and −r ≤ θ ≤ 0, then x̂_s(θ) = x̂(s + θ) equals X(s + θ, 0) for 0 ≤ s + θ ≤ T and equals X(0, s + θ) for −r ≤ s + θ ≤ 0; in either case x̂_s(θ) = X(s, θ).

First, let us consider the case −r < θ_2 < θ_1 < 0. It suffices to show that, for some 0 < δ < θ_2 + r,

    X(s, θ) = X(s′, θ + s − s′),   θ_2 ≤ θ ≤ θ_1,  whenever s < s′ < s + δ.          (2.38)

For, if (2.38) holds, then letting j be the largest integer smaller than 2(s_2 − s_1)/δ, we have

    X(s_1, θ_1) = X(s_1 + δ/2, θ_1 − δ/2) = X(s_1 + δ, θ_1 − δ) = ··· = X(s_1 + jδ/2, θ_1 − jδ/2) = X(s_2, θ_2).

Note that θ_2 ≤ θ_1 − iδ/2 ≤ θ_1 for all i = 1, 2, ..., j.

To prove (2.38), suppose that ε > 0 is such that −r < θ_2 − ε and θ_1 + ε < 0. Let δ = min{r + θ_2 − ε, −(θ_1 + ε)} and let β* ∈ C¹[−r, 0] be supported on the interval [θ_2 − ε, θ_1 + ε]. Fix an s ∈ [0, T] and let s < s′ < s + δ. Taking G(λ, θ) = β*(λ + θ − s) in (2.37), we see that for s ≤ λ ≤ s′:

    G(λ, 0) = β*(λ − s) = 0, since λ − s ≥ 0 > θ_1 + ε;
    G(λ, −r) = β*(λ − r − s) = 0, since λ − r − s ≤ s′ − s − r < δ − r ≤ θ_2 − ε;
    (∂_λ − ∂_θ)G = 0.

Hence, from (2.37) applied on [s, s′],

    ∫_{−r}^0 α(X(s′, θ))G(s′, θ) dθ = ∫_{−r}^0 α(X(s, θ))G(s, θ) dθ
      ⇒ ∫_{−r}^0 α(X(s′, θ))β*(s′ + θ − s) dθ = ∫_{−r}^0 α(X(s, θ))β*(θ) dθ
      ⇒ ∫_{−r}^0 α(X(s′, t + s − s′))β*(t) dt = ∫_{−r}^0 α(X(s, t))β*(t) dt,          (2.39)


where we have put t = s′ + θ − s. Note that during the change of variable of integration in (2.39), the boundary points for t lie outside the support of β*, because s′ − r − s < δ − r ≤ θ_2 − ε and s′ − s ≥ 0 > θ_1 + ε. Since (2.39) holds for all β* ∈ C¹ supported in [θ_2 − ε, θ_1 + ε], we have, for any t ∈ (θ_2 − ε, θ_1 + ε),

    α(X(s′, t + s − s′)) = α(X(s, t)).

But this being true for all α ∈ C_b^∞(ℝⁿ, ℝⁿ), we have X(s′, t + s − s′) = X(s, t) for all t ∈ [θ_2, θ_1], which is (2.38). Hence, we have proved (2.38) when −r < θ_2 < θ_1 < 0.

If −r ≤ θ_2 < θ_1 < 0, take a sequence θ_2^(j) > −r that decreases to θ_2. Then, for j large enough that θ_1 − θ_2 + θ_2^(j) < 0, we have

    X(s_2, θ_2^(j)) = X(s_1, θ_1 − θ_2 + θ_2^(j)),

since s_1 + θ_1 − θ_2 + θ_2^(j) = s_2 + θ_2 − θ_2 + θ_2^(j) = s_2 + θ_2^(j). Taking the limit as j → ∞, by the continuity of X(s, ·) : [−r, 0] → ℝⁿ, we then have X(s_2, θ_2) = X(s_1, θ_1).

If −r < θ_2 < θ_1 ≤ 0, then, taking a sequence θ_1^(j) < 0 increasing to θ_1, we have, for j large enough that θ_2 − θ_1 + θ_1^(j) > −r,

    X(s_2, θ_2 − θ_1 + θ_1^(j)) = X(s_1, θ_1^(j)),

since s_2 + θ_2 − θ_1 + θ_1^(j) = s_1 + θ_1 − θ_1 + θ_1^(j) = s_1 + θ_1^(j). Again by the continuity of X(s, ·), taking the limit as j → ∞, we get X(s_2, θ_2) = X(s_1, θ_1). Finally, if θ_2 = −r and θ_1 = 0, combining the two approximation arguments above gives X(s_2, −r) = X(s_1, 0). Thus, we have proved that X(s, ·) = x̂_s.

It is easy to check that {x̂(s), s ∈ [−r, T]} is a continuous process, and hence so is {X(s, ·), s ∈ [0, T]}. Now, it remains to show that the process x̂(·) satisfies a SHDE of the form (2.28). For F ∈ C_b^∞(ℝⁿ), taking Φ(φ) = F(φ(0)) in (2.32), we have that

    F(X(s, 0)) − F(X(0, 0)) − ∫_0^s ∇F(X(t, 0)) · f(X(t)) dt − (1/2) ∫_0^s g(X(t)) · ∇²F(X(t, 0))g(X(t)) dt

is a martingale. That is,

    F(x̂(s)) − F(x̂(0)) − ∫_0^s ∇F(x̂(t)) · f(x̂_t) dt − (1/2) ∫_0^s g(x̂_t) · ∇²F(x̂(t))g(x̂_t) dt

is a martingale for all F ∈ C_b^∞(ℝⁿ). Then, using standard arguments (see, e.g., Theorems 13.55 and 14.80 of Jacod [Jac79] and Theorem 4.5.2 of Stroock


and Varadhan [SV79]), we conclude that x̂(·) satisfies a SHDE of the form (2.28) for some Brownian motion W̃(·). This implies that the law of x̂(·), and hence that of X(s) = x̂_s, is uniquely determined. Thus, the martingale problem for (A_q, ψ) is well posed. □

Remark 2.5.6 In the course of proving Theorem 2.5.5, we have proved that, for any probability measure µ on (C, B(C)), the martingale problem for (A_q, µ) is well posed in the class of progressively measurable solutions and that any progressively measurable solution for (A_q, µ) has a continuous modification. In particular, the following two conditions (i) and (ii) hold for A_q:
(i) The martingale problem for (A_q, δ_ψ) is well posed in the class of r.c.l.l. solutions for every ψ ∈ C.
(ii) For every probability measure µ on (C, B(C)), any progressively measurable solution to the martingale problem for (A_q, µ) admits a r.c.l.l. modification, where r.c.l.l. stands for right-continuous with left limits.

We have the following theorem.

Theorem 2.5.7 Consider the C-valued segment process {x_s, s ∈ [0, T]} for (2.28), with f and g satisfying the Lipschitz condition (2.29). Then:
(i) {x_s, s ∈ [0, T]} is a Markov process.
(ii) Q(C, ℝ) ⊂ D(A), the domain of the weak infinitesimal generator A of the C-valued Markov process {x_s, s ∈ [0, T]}, and the restriction of A to Q(C, ℝ) coincides with A_q.
(iii) Q(C, ℝ) is weakly dense in C_b(C).

Proof. By Theorem 2.5.5 the martingale problem for (A_q, ψ) is well posed, and by Theorem 2.5.4 the segment process {x_s, s ∈ [0, T]} solves it. Therefore, Conclusions (i) and (ii) follow from Lemma 2.5.2 and Proposition 2.5.3. Since the class of tame functions is weakly dense in C_b(C) and every tame function can be approximated by a sequence of quasi-tame functions, the class of quasi-tame functions Q(C, ℝ) is weakly dense in C_b(C). This proves (iii). □
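The cutoff function ς_∆ invoked in the proof of Theorem 2.5.5 can be written down explicitly. The following Python sketch (our own illustration; the function names and the choice ∆ = 1 are assumptions, not from the text) uses the standard exp(−1/x) construction to produce a smooth bump that equals 1 on [−∆, ∆] and 0 off (−∆ − 1, ∆ + 1):

```python
import math

def smooth_step(x: float) -> float:
    """C^infinity function equal to 0 for x <= 0 and positive for x > 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def sigma_delta(x: float, delta: float = 1.0) -> float:
    """Smooth cutoff: 1 on |x| <= delta, 0 on |x| >= delta + 1,
    strictly between 0 and 1 in the transition band."""
    a = smooth_step(delta + 1.0 - abs(x))   # vanishes once |x| >= delta + 1
    b = smooth_step(abs(x) - delta)         # vanishes once |x| <= delta
    return a / (a + b)                      # denominator is never zero

# On [-delta, delta] the function h(x) = sum_i x_i * sigma_delta(x_i)
# then agrees with sum_i x_i, exactly as required in the proof.
print(sigma_delta(0.5), sigma_delta(2.5), sigma_delta(1.5))
```

The denominator a + b never vanishes because a = 0 requires |x| ≥ ∆ + 1 while b = 0 requires |x| ≤ ∆, which cannot hold simultaneously.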

2.6 Conclusions and Remarks

The main results presented in this chapter are Dynkin's formulas for Φ(s, x_s) for the C-valued segment process {x_s, s ∈ [t, T]} for (1.27) and for Φ(S(s), S_s) for the M-valued segment process {(S(s), S_s), s ≥ 0} for (1.43), provided that Φ satisfies appropriate smoothness conditions. When Φ is a quasi-tame function in the appropriate spaces, SΦ and AΦ can be explicitly computed. These concepts and results are needed for treating problems in optimal classical control (Chapter 3), optimal stopping (Chapter 4), option pricing (Chapter 6), and hereditary portfolio optimization (Chapter 7).

We note here that Itô's formula for Φ(x(s), x_s), Φ : ℝⁿ × L²([−r, 0]; ℝⁿ) → ℝ, for the solution process x(·) of the equation


    dx(s) = f(x(s), x_s) ds + g(x(s), x_s) dW(s)

in the L²-setting has only recently been obtained by Yan and Mohammed [YM05], using the Malliavin calculus for anticipating integrands (see, e.g., Malliavin [Mal97], Nualart and Pardoux [NP88], and Nualart [Nua95]). Itô's formula for stochastic systems with pure delays has been established by Hu et al. [HMY04] for applications to discrete-time approximations of stochastic delay equations. However, Itô's formula for the C-valued segment process for (1.27) is not yet available, and it remains an open problem for the community of stochastic analysts. The source of difficulty in obtaining such a formula is the nondifferentiability, in the Fréchet sense, of the sup-norm ‖·‖ of the Banach space C. It is conjectured that Itô's formula in the L²-setting can be generalized to the M-valued segment process {(S(s), S_s), s ≥ 0} for (1.43).

3 Optimal Classical Control

Let 0 < T < ∞ be the given and fixed terminal time, and let t ∈ [0, T] be a variable initial time for the optimal classical control problem treated in this chapter. The main theme of this chapter is the infinite-dimensional optimal classical control problem over a finite time horizon [t, T]. The dynamics of the process x(·) = {x(s), s ∈ [t − r, T]} being controlled are governed by a stochastic hereditary differential equation (SHDE) with a bounded memory of duration 0 < r < ∞, and the state takes values in the Banach space C = C([−r, 0]; ℝⁿ).

The formulation of the control problem is given in Section 3.1. The value function V : [0, T] × C → ℝ of the optimal classical control problem is written as a function of the initial datum (t, ψ) ∈ [0, T] × C. The existence of an optimal control is proved in Section 3.2. There, we consider an optimizing sequence of stochastic relaxed control problems whose corresponding sequence of value functions converges to the value function of our original optimal control problem. Since a regular optimal control is a special case of an optimal relaxed control, the existence of an optimal control is thereby established. The Bellman-type dynamic programming principle (DPP), originally due to Larssen [Lar02], is derived and proved in Section 3.3. Based on the DPP, an infinite-dimensional Hamilton-Jacobi-Bellman equation (HJBE) is heuristically derived in Section 3.4 for the value function under the condition that it is sufficiently smooth. This HJBE involves first- and second-order Fréchet derivatives with respect to the spatial variable ψ ∈ C, as well as an S-operator that is unique to SHDEs. However, it is known that in most optimal control problems, deterministic or stochastic, the value functions, although they can be proven to be continuous, do not meet these smoothness conditions and, therefore, cannot be a solution to the HJBE in the classical sense.

To overcome this difficulty, the concept of a viscosity solution to the infinite-dimensional HJBE is introduced in Section 3.5. Section 3.6 concerns the comparison principle between a viscosity supersolution and a viscosity subsolution. Based on this comparison principle, it is shown that the value function is the unique viscosity solution to the HJBE. Due to the lack of smoothness of the value function, a classical verification theorem will


not be useful in characterizing the optimal control. A generalized verification theorem in the framework of viscosity solutions is stated without proof in Section 3.7. In Section 3.8, we prove that, under some special conditions on the controlled SHDE and the value function, the HJBE can take a finite-dimensional form in which only regular partial derivatives, and no Fréchet derivatives, are involved. Two application examples in this special form are also illustrated in that section.

We give the following example as a motivation for studying optimal classical control problems. Two other completely worked-out examples are given in Subsection 3.8.3.

Example. (Optimal Advertising Problem) (see Gossi and Marinelli [GM04] and Gossi et al. [GMS06]) Let y(·) = {y(s), s ∈ [0, T]} denote the stock of advertising goodwill of the product to be launched. The process y(·) is described by the following one-dimensional controlled stochastic hereditary differential equation:

    dy(s) = ( a_0 y(s) + ∫_{−r}^0 a_1(θ)y(s + θ) dθ + b_0 u(s) + ∫_{−r}^0 b_1(θ)u(s + θ) dθ ) ds + σ dW(s),   s ∈ [0, T],

with the initial conditions y0 = ψ ∈ C[−r, 0] and u0 = φ ∈ L2 ([−r, 0]) at initial time t = 0. In the above (Ω, F, P, F, W (·)) denotes an one-dimensional standard Brownian motion and the control process u(·) = {u(s), s ∈ [0, T ]} denotes the advertising expenditures as a process in L2 ([0, T ], + ; F), the space of square integrable non-negative processes adapted to F. Moreover, it is assumed that the following conditions are satisfied: (i) a0 ≤ 0 denotes a constant factor of image deterioration of the product in absence of advertising. (ii) a1 (·) ∈ L2 ([−r, 0], ) is the distribution of the forgetting time. (iii) b0 ≥ 0 denotes the effective constant of instantaneous advertising effect. (iv) b1 (·) ∈ L2 ([−r, 0], + ) is the density function of the time lag between the advertising expenditure u(·) and the corresponding effect on the goodwill level. (v) ψ(·) and φ(·) are non-negative and represent, respectively, the histories of goodwill level and the advertising expenditure before time zero. The objective of this optimal advertising problem is to seek an advertising strategy u(·) that maximizes the objective functional    T

J(ψ, φ; u(·)) = E Ψ (y(T )) −

L(u(s)) ds , 0

where Ψ : [0, ∞) → [0, ∞) is a concave utility function with polynomial growth at infinity, L : [0, ∞) → [0, ∞) is a convex cost function which is superlinear at infinity, that is,

3.1 Problem Formulation

129

L(u) = ∞. u The above objective functional accounts for the balance between an utility of terminal goodwill Ψ (y(T )) and overall functional of advertising expenditures T L(u(s)) ds over the period. Note that this model example involves the his0 tories of both the state and control processes. The general theory for optimal control of stochastic systems with delays in both state and control processes has yet to be developed. If b1 (·) = 0, then there is no aftereffect of previous advertising expenditures on the goodwill level. In this case, it is a special case of what to be developed in this chapter. lim

u→∞
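The goodwill dynamics above can be simulated with a straightforward Euler-Maruyama scheme in which the delay integrals are approximated by Riemann sums over the stored history. The following Python sketch uses illustrative parameter choices of our own (constant kernels a_1, b_1 and r = 1); it is not taken from [GM04]:

```python
import numpy as np

def simulate_goodwill(psi, phi, u, a0=-0.1, a1=0.2, b0=0.5, b1=0.3,
                      sigma=0.05, r=1.0, T=2.0, dt=0.01, seed=0):
    """Euler-Maruyama for the goodwill SDDE with constant kernels a1, b1.
    psi, phi : callables on [-r, 0] (initial goodwill / spending histories)
    u        : callable on [0, T] (advertising expenditure)."""
    rng = np.random.default_rng(seed)
    m = int(round(r / dt))                  # grid points spanning the memory
    n = int(round(T / dt))
    y = np.empty(m + n + 1)                 # grid covers [-r, T]; index m is time 0
    uu = np.empty(m + n + 1)
    for k in range(m + 1):                  # load the histories on [-r, 0]
        y[k] = psi(-r + k * dt)
        uu[k] = phi(-r + k * dt)
    for k in range(m, m + n):
        uu[k] = u((k - m) * dt)
        delay_y = a1 * np.sum(y[k - m:k]) * dt    # ~ int_{-r}^0 a1 y(s+theta) dtheta
        delay_u = b1 * np.sum(uu[k - m:k]) * dt   # ~ int_{-r}^0 b1 u(s+theta) dtheta
        drift = a0 * y[k] + delay_y + b0 * uu[k] + delay_u
        y[k + 1] = y[k] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return y[m:]                            # sampled path of y on [0, T]

path = simulate_goodwill(psi=lambda t: 1.0, phi=lambda t: 0.0, u=lambda t: 0.2)
```

As a sanity check, switching off the noise, the delay terms, and the control (σ = 0, a_1 = b_0 = b_1 = 0, u ≡ 0) reduces the scheme to the linear ODE ẏ = a_0 y, so the terminal value should be close to ψ(0)e^{a_0 T}.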

3.1 Problem Formulation

3.1.1 The Controlled SHDE

In the following, let (Ω, F, P, F, W(·)) be a certain m-dimensional Brownian stochastic basis. Consider the following controlled SHDE with a bounded memory (or delay) of duration 0 < r < ∞:

    dx(s) = f(s, x_s, u(s)) ds + g(s, x_s, u(s)) dW(s),   s ∈ [t, T],          (3.1)

with the given initial data (t, x_t) = (t, ψ) ∈ [0, T] × C, defined on a certain m-dimensional Brownian stochastic basis (Ω, F, P, F, W(·)) that is yet to be determined. In (3.1), the following is understood:
(i) The drift f : [0, T] × C × U → ℝⁿ and the diffusion coefficient g : [0, T] × C × U → ℝ^{n×m} are deterministic continuous functions.
(ii) U, the control set, is a complete metric space and is typically a subset of a Euclidean space.
(iii) u(·) = {u(s), s ∈ [t, T]} is a U-valued F-progressively measurable process that satisfies

    E[ ∫_t^T |u(s)|² ds ] < ∞.          (3.2)
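On a discrete time grid, the C-valued segment x_s(θ) = x(s + θ), θ ∈ [−r, 0], is simply a sliding window over the stored path. A small Python sketch of this bookkeeping (our own illustration, with r = 1 and a stand-in path):

```python
import numpy as np

r, T, dt = 1.0, 3.0, 0.1
m = int(round(r / dt))                               # grid points spanning [-r, 0]
grid = np.round(np.arange(-r, T + dt / 2, dt), 10)   # time grid on [-r, T]
x = np.sin(grid)                                     # a stand-in path x(.) on [-r, T]

def segment(s):
    """Return the segment x_s, i.e., x(s + theta) for theta on the grid of [-r, 0]."""
    k = int(round((s + r) / dt))                     # index of time s in the grid
    return x[k - m : k + 1]

x_s = segment(1.5)
# x_s[-1] is x_s(0) = x(s); x_s[0] is x_s(-r) = x(s - r)
```

This is only the data layout; the analytical point of the section is that (3.1) is driven by this whole window x_s rather than by the current value x(s) alone.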

Note that the control process u(·) = {u(s), s ∈ [t, T]} defined on (Ω, F, P, F, W(·)) is said to be progressively measurable if u(·) = {u(s), t ≤ s ≤ T} in U is F-adapted (i.e., u(s) is F(s)-measurable for every s ∈ [t, T]) and, for each a ∈ [t, T] and A ∈ B(U), the set {(s, ω) | t ≤ s ≤ a, ω ∈ Ω, u(s, ω) ∈ A} belongs to the product σ-field B([t, a]) ⊗ F(a); that is, if the mapping

    (s, ω) → u(s, ω) : ([t, a] × Ω, B([t, a]) ⊗ F(a)) → (U, B(U))

is measurable for each a ∈ [t, T].

Let L : [0, T] × C × U → ℝ and Ψ : C → ℝ be two deterministic continuous functions that represent the instantaneous and terminal reward functions, respectively, for the optimal classical control problem.

Assumption 3.1.1 The assumptions on the functions f, g, L, and Ψ are stated as follows:

(A3.1.1) (Lipschitz Continuity) The maps f(t, φ, u), g(t, φ, u), L(t, φ, u), and Ψ(φ) are Lipschitz on [0, T] × C × U and Hölder continuous in t ∈ [0, T]: There is a constant K_lip > 0 such that

    |f(t, φ, u) − f(s, ϕ, v)| + |g(t, φ, u) − g(s, ϕ, v)| + |L(t, φ, u) − L(s, ϕ, v)| + |Ψ(φ) − Ψ(ϕ)|
      ≤ K_lip ( √|t − s| + ‖φ − ϕ‖ + |u − v| ),   ∀s, t ∈ [0, T], u, v ∈ U, and φ, ϕ ∈ C.

(A3.1.2) (Linear Growth) There exists a constant K_grow > 0 such that

    |f(t, φ, u)| + |g(t, φ, u)| ≤ K_grow (1 + ‖φ‖)

and

    |L(t, φ, u)| + |Ψ(φ)| ≤ K_grow (1 + ‖φ‖₂²)^k,   ∀(t, φ) ∈ [0, T] × C, u ∈ U.

(A3.1.3) The initial function ψ belongs to the space L²(Ω, C; F(t)) of F(t)-measurable elements in L²(Ω, C), so that ‖ψ‖²_{L²(Ω;C)} ≡ E[‖ψ‖²] < ∞.

Condition (A3.1.2) in Assumption 3.1.1 stipulates that both L and Ψ satisfy a polynomial growth condition in φ ∈ C under the L²-norm ‖·‖₂ instead of the sup-norm ‖·‖. This stronger requirement is needed in order to show the uniqueness of the viscosity solution of the HJBE in Section 3.6. The solution process of the controlled SHDE (3.1) is given next.

Definition 3.1.2 Given an m-dimensional Brownian stochastic basis (Ω, F, P, F, W(·)) and the control process u(·) = {u(s), s ∈ [t, T]}, a process x(·; t, ψ_t, u(·)) = {x(s; t, ψ_t, u(·)), s ∈ [t − r, T]} is said to be a (strong) solution of the controlled SHDE (3.1) on the interval [t − r, T] and through the initial datum (t, ψ) ∈ [0, T] × L²(Ω, C; F(t)) if it satisfies the following conditions:
1. x_t(·; t, ψ_t, u(·)) = ψ_t(·), P-a.s.
2. x(s; t, ψ_t, u(·)) is F(s)-measurable for each s ∈ [t, T].


3. For 1 ≤ i ≤ n and 1 ≤ j ≤ m,

    P{ ∫_t^T ( |f_i(s, x_s, u(s))| + g²_ij(s, x_s, u(s)) ) ds < ∞ } = 1.

4. The process {x(s; t, ψ_t, u(·)), s ∈ [t, T]} is continuous and satisfies the following stochastic integral equation P-a.s.:

    x(s) = ψ_t(0) + ∫_t^s f(λ, x_λ, u(λ)) dλ + ∫_t^s g(λ, x_λ, u(λ)) dW(λ).

In addition, the solution process {x(s; t, ψ_t, u(·)), s ∈ [t − r, T]} is said to be (strongly) unique if, whenever {y(s; t, ψ, u(·)), s ∈ [t − r, T]} is also a solution of (3.1) on [t − r, T] with the control process u(·) and through the same initial datum (t, ψ_t), then

    P{x(s; t, ψ, u(·)) = y(s; t, ψ, u(·)), ∀s ∈ [t, T]} = 1.

Theorem 3.1.3 Let (Ω, F, P, F, W(·)) be an m-dimensional Brownian stochastic basis and let u(·) = {u(s), s ∈ [t, T]} be a control process. Then, for each initial datum (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)), the controlled SHDE (3.1) has a unique strong solution process x(·; t, ψ_t, u(·)) = {x(s; t, ψ_t, u(·)), s ∈ [t, T]} under Assumption 3.1.1. Moreover, the following holds:
1. The map (s, ω) → x(s; t, ψ_t, u(·)) belongs to the space L²(Ω, C([t − r, T]; ℝⁿ); F(s)), and the map (s, ω) → x_s(·; t, ψ_t, u(·)) belongs to the space L²(Ω, C; F(s)). Moreover, there exist constants K_b > 0 and k ≥ 1 such that

    E[‖x_s(·; t, ψ_t, u(·))‖²] ≤ K_b (1 + E[‖ψ_t‖²])^k,   ∀s ∈ [t, T] and u(·) ∈ U[t, T].          (3.3)

2. The map ψ_t → x_s(·; t, ψ_t, u(·)) is Lipschitz; that is, there is a constant K > 0 such that for all s ∈ [t, T] and ψ_t^(1), ψ_t^(2) ∈ L²(Ω, C; F(t)),

    E[‖x_s(·; t, ψ_t^(1), u(·)) − x_s(·; t, ψ_t^(2), u(·))‖] ≤ K E[‖ψ_t^(1) − ψ_t^(2)‖].          (3.4)

Proof. Let the random functions f̃ : [0, T] × C × Ω → ℝⁿ and g̃ : [0, T] × C × Ω → ℝ^{n×m} be defined by

    f̃(s, φ, ω) = f(s, φ, u(s, ω)) and g̃(s, φ, ω) = g(s, φ, u(s, ω)),

where u(·) = {u(s), s ∈ [t, T]} is the control process. If the functions f and g satisfy Assumption 3.1.1, then the random functions f̃ and g̃ defined above


satisfy Assumptions 1.3.6 and 1.3.7 of Chapter 1, and therefore the controlled system (3.1) has a unique strong solution process on [t − r, T] through the initial datum (t, ψ) ∈ [0, T] × L²(Ω, C; F(t)), denoted by x(·; t, ψ_t, u(·)) = {x(s; t, ψ_t, u(·)), s ∈ [t, T]} (or simply x(·) when there is no danger of ambiguity), according to Theorem 1.3.12 of Chapter 1. □

Remark 3.1.4 It is clear from the appearance of (3.1) that the use of the term "classical control" (as opposed to "impulse control" in Chapter 7) is self-explanatory. This is due to the fact that an application of the control u(s) at time s ∈ [t, T] will only change the rate of the drift and the diffusion coefficient; therefore, the pathwise continuity of the controlled state process x(·) = {x(s), s ∈ [t − r, T]} will not be affected by this action.

Definition 3.1.5 The (classical) control process u(·) is a Markov (or feedback) control if there exists a Borel measurable function η : [0, T] × C → U such that u(s) = η(s, x_s), where {x_s, s ∈ [t, T]} is the C-valued Markov process corresponding to the solution process {x(s), s ∈ [t − r, T]} of the following feedback equation:

    dx(s) = f(s, x_s, η(s, x_s)) ds + g(s, x_s, η(s, x_s)) dW(s)

(3.5)

with the initial data (t, x_t) = (t, ψ) ∈ [0, T] × C.

3.1.2 Admissible Controls

Definition 3.1.6 For each t ∈ [0, T], a six-tuple (Ω, F, P, F, W(·), u(·)) is said to be an admissible control if it satisfies the following conditions:
1. (Ω, F, P, F, W(·)) is a certain m-dimensional Brownian stochastic basis.
2. u : [t, T] × Ω → U is F-adapted and right-continuous at the initial time t; that is, lim_{s↓t} u(s) = u(t) (say, = u ∈ U).
3. Under the control process u(·) = {u(s), s ∈ [t, T]}, (3.1) admits a unique strong solution x(·; t, ψ, u(·)) = {x(s; t, ψ, u(·)), s ∈ [t, T]} on (Ω, F, P, F, W(·)) and through each initial datum (t, ψ) ∈ [0, T] × C.
4. The C-valued segment process {x_s(t, ψ, u(·)), s ∈ [t, T]} defined by

    x_s(θ; t, ψ, u(·)) = x(s + θ; t, ψ, u(·)),   θ ∈ [−r, 0],

is a strong Markov process with respect to the Brownian stochastic basis (Ω, F, P, F, W(·)).
5. The control process u(·) is such that

    E[ ∫_t^T |L(s, x_s(·; t, ψ, u(·)), u(s))| ds + |Ψ(x_T(·; t, ψ, u(·)))| ] < ∞,


where L : [0, T] × C × U → ℝ and Ψ : C → ℝ represent the instantaneous and the terminal reward functions, respectively.

The collection of admissible controls (Ω, F, P, F, W(·), u(·)) over the interval [t, T] will be denoted by U[t, T]. We will write u(·) ∈ U[t, T], or formally the six-tuple (Ω, F, P, F, W(·), u(·)) ∈ U[t, T], interchangeably whenever there is no danger of ambiguity.

Remark 3.1.7 Definition 3.1.6 defines a weak formulation of an admissible control, in that the Brownian stochastic basis (Ω, F, P, F, W(·)) is not predetermined and is in fact a part of the ingredients that constitute an admissible control. This is contrary to the strong formulation of an admissible control, in which the Brownian stochastic basis (Ω, F, P, F, W(·)) is predetermined and given.

Remark 3.1.8 To avoid using the yet-to-be-developed Itô formula for the C-valued process {x_s(t, ψ, u(·)), s ∈ [t, T]} in the development of the HJB theory, we make the additional requirement in Condition 4 of Definition 3.1.6 that it be a strong Markov process. This requirement is not a stringent one. In fact, the class of admissible controls U[t, T] defined in Definition 3.1.6 includes all Markov (or feedback) controls (see Definition 3.1.5) for which η : [0, T] × C → U is Lipschitz with respect to the segment variable; that is, for which there exists a constant K > 0 such that

    |η(t, φ) − η(t, ϕ)| ≤ K‖φ − ϕ‖,   ∀(t, φ), (t, ϕ) ∈ [0, T] × C.

Throughout, we assume that the functions f : [0, T] × C × U → ℝⁿ, g : [0, T] × C × U → ℝ^{n×m}, L : [0, T] × C × U → ℝ, and Ψ : C → ℝ satisfy Assumption 3.1.1. Given an admissible control u(·) ∈ U[t, T], let x(·; t, ψ, u(·)) = {x(s; t, ψ, u(·)), s ∈ [t − r, T]} be the solution of (3.1) through the initial datum (t, ψ) ∈ [0, T] × C. We again consider the corresponding C-valued segment process {x_s(·; t, ψ, u(·)), s ∈ [t, T]}. For notational simplicity, we often write x(s) = x(s; t, ψ, u(·)) and x_s = x_s(·; t, ψ, u(·)) for s ∈ [t, T] whenever there is no danger of ambiguity.

3.1.3 Statement of the Problem

Given any initial datum (t, ψ) ∈ [0, T] × C and any admissible control u(·) ∈ U[t, T], we define the objective functional

    J(t, ψ; u(·)) ≡ E[ ∫_t^T e^{−α(s−t)} L(s, x_s(·; t, ψ, u(·)), u(s)) ds + e^{−α(T−t)} Ψ(x_T(·; t, ψ, u(·))) ],          (3.6)

where α ≥ 0 denotes a discount factor.


For each initial datum (t, ψ) ∈ [0, T] × C, the optimal control problem is to find u(·) ∈ U[t, T] so as to maximize the objective functional J(t, ψ; u(·)) defined above. In this case, the value function V : [0, T] × C → ℝ is defined to be

    V(t, ψ) = sup_{u(·)∈U[t,T]} J(t, ψ; u(·)).          (3.7)

The control process u*(·) = {u*(s), s ∈ [t, T]} ∈ U[t, T] is said to be an optimal control for the optimal classical control problem if

    V(t, ψ) = J(t, ψ; u*(·)).          (3.8)

The (strong) solution process x*(·; t, ψ, u*(·)) = {x*(s; t, ψ, u*(·)), s ∈ [t − r, T]} of (3.1) corresponding to the optimal control u*(·) will be called the optimal state process corresponding to u*(·). The pair (u*(·), x*(·)) will be called the optimal control-state pair. The characterizations of the value function V : [0, T] × C → ℝ and of the optimal control-state pair (u*(·), x*(·)) will normally constitute a solution to the control problem.

The optimal classical control problem, Problem (OCCP), is now formally formulated as follows.

Problem (OCCP). For each initial datum (t, ψ) ∈ [0, T] × C:
1. Find a u*(·) ∈ U[t, T] that maximizes J(t, ψ; u(·)) defined in (3.6) over U[t, T].
2. Characterize the value function V : [0, T] × C → ℝ defined in (3.7).
3. Identify the optimal control-state pair (u*(·), x*(·)).

3.2 Existence of Optimal Classical Control

In the class U[t, T] of admissible controls, it may happen that there does not exist an optimal control. The following artificial example of Kushner and Dupuis [KD01, p. 86] shows that an optimal control may fail to exist even for a controlled deterministic equation without delay.

Example. Consider the following one-dimensional controlled deterministic equation:

    ẋ(s) = f(x(s), u(s)) ≡ u(s),   s ≥ 0,

with the control set U = [−1, 1]. Starting from the initial state x(0) = x ∈ ℝ, the objective is to find an admissible (deterministic) control u(·) = {u(s), s ≥ 0} that minimizes the following discounted cost functional over the infinite time horizon:

    J(x; u(·)) = ∫_0^∞ e^{−αs} [x²(s) + (u²(s) − 1)²] ds.

Again, let V : ℝ → ℝ be the value function of the control problem, defined by

    V(x) = inf_{u(·)∈U[0,∞)} J(x; u(·)).

Note that V(0) = 0 and, more generally, V(x) ≤ x²/α for all x ∈ ℝ. To see this, define the sequence of controls u^(k)(·) by u^(k)(s) = (−1)^j on the half-open interval [j/k, (j + 1)/k), j = 0, 1, 2, .... Under u^(k)(·) the state remains within 1/k of its initial value and (u^(k))²(s) ≡ 1, so that J(x; u^(k)(·)) → ∫_0^∞ e^{−αs} x² ds = x²/α as k → ∞; in particular, J(0; u^(k)(·)) → 0. In a sense, when x(0) = 0, the optimal control u*(·) "wants" to take the values ±1. However, it is easy to check that no such u*(·) satisfies Definition 3.1.6. Therefore, an optimal control u*(·) does not exist as defined.

In order to establish the existence of an optimal control for Problem (OCCP), we will enlarge the class of controls, allowing the so-called relaxed controls, so that the existence of an optimal (relaxed) control is guaranteed and the supremum of the expected objective functional over this new class of controls coincides with the value function V : [0, T] × C → ℝ of the original optimal classical control problem defined by (3.7). The idea for showing the existence of an optimal relaxed control is to consider a maximizing sequence of admissible relaxed controls {µ^(k)(·, ·)}_{k=1}^∞ on the Borel measurable space ([0, T] × U, B([0, T] × U)) and the corresponding sequence of objective functionals {Ĵ(t, ψ; µ^(k)(·, ·))}_{k=1}^∞. By the fact that [0, T] × U is compact (and hence the maximizing sequence of admissible relaxed controls {µ^(k)(·, ·)}_{k=1}^∞ is compact in the Prohorov metric) and the fact that Ĵ(t, ψ; µ(·, ·)) is upper semicontinuous in the admissible relaxed control µ(·, ·), one can show that the sequence {µ^(k)(·, ·)}_{k=1}^∞ converges weakly to an admissible relaxed control µ*(·, ·). This µ*(·, ·) can be shown to be optimal among the class of admissible relaxed controls, and its value coincides with the value function of Problem (OCCP). We also prove that an optimal (classical) control exists if the value function V(t, ψ) is finite for each initial datum (t, ψ) ∈ [0, T] × C.
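The chattering sequence above is easy to evaluate numerically. The sketch below (our own illustration, with α = 1 and the integral truncated at a finite horizon) integrates the state under u^(k) and confirms that J(0; u^(k)(·)) decreases toward 0 as k grows:

```python
import math

def cost_chattering(k: int, alpha: float = 1.0, T: float = 20.0,
                    dt: float = 1e-3) -> float:
    """Approximate J(0; u^(k)) for u^(k)(s) = (-1)^j on [j/k, (j+1)/k).
    The running cost is x^2(s) + (u^2(s) - 1)^2; the second term vanishes
    since |u^(k)| = 1, and the integral is truncated at horizon T."""
    x, J, s = 0.0, 0.0, 0.0
    n = int(T / dt)
    for _ in range(n):
        j = int(s * k)                  # index of the current switching interval
        u = -1.0 if j % 2 else 1.0
        J += math.exp(-alpha * s) * (x * x) * dt
        x += u * dt                     # Euler step of x' = u
        s += dt
    return J

costs = [cost_chattering(k) for k in (1, 2, 4, 8, 16)]
# the costs decrease roughly like 1/k^2 toward 0
```

The state is a triangular wave of amplitude 1/k, so the cost is bounded by roughly 1/(αk²), consistent with J(0; u^(k)(·)) → 0.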
We now recall, without proofs, the concept and characterizations of weak convergence of probability measures; details can be found in Billingsley [Bil99]. Let (Ξ, d) be a generic metric space with Borel σ-algebra B(Ξ). Let P(Ξ) (or simply P whenever there is no ambiguity) be the collection of probability measures defined on (Ξ, B(Ξ)). We equip the space P(Ξ) with the Prohorov metric π : P(Ξ) × P(Ξ) → [0,∞) defined by
\[
\pi(\mu,\nu) = \inf\{\varepsilon > 0 \mid \mu(A^{\varepsilon}) \le \nu(A) + \varepsilon \ \text{for all closed}\ A \in \mathcal B(\Xi)\}, \tag{3.9}
\]
where A^ε is the ε-neighborhood of A ∈ B(Ξ), that is, A^ε = {y ∈ Ξ | d(x,y) < ε for some x ∈ A}.


3 Optimal Classical Control

If µ^(k), k = 1, 2, ..., is a sequence in P(Ξ), we say that the sequence converges weakly to µ ∈ P(Ξ), denoted by µ^(k) ⇒ µ as k → ∞, if
\[
\lim_{k\to\infty}\int_{\Xi}\Phi(x)\,\mu^{(k)}(dx) = \int_{\Xi}\Phi(x)\,\mu(dx), \qquad \forall \Phi \in C_b(\Xi), \tag{3.10}
\]
where C_b(Ξ) is the space of bounded continuous functions Φ : Ξ → ℝ equipped with the sup-norm
\[
\|\Phi\|_{C_b(\Xi)} = \sup_{x\in\Xi}|\Phi(x)|, \qquad \Phi \in C_b(\Xi).
\]
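For a concrete (added, hypothetical) instance of weak convergence in the sense of (3.10), take Ξ = [0,1] and let µ^(k) place mass 1/k at each of the points j/k, j = 1, ..., k. These discrete measures converge weakly to Lebesgue measure on [0,1], and the sketch below checks ∫Φ dµ^(k) → ∫Φ dµ for the bounded continuous test function Φ(x) = x².

```python
def integral_discrete(phi, k):
    # ∫ Φ dµ^(k) for µ^(k) = uniform measure on {1/k, 2/k, ..., k/k}
    return sum(phi(j / k) for j in range(1, k + 1)) / k

phi = lambda x: x * x       # a bounded continuous test function on [0, 1]
exact = 1.0 / 3.0           # ∫_0^1 x^2 dx under the limit (Lebesgue) measure
for k in (10, 100, 1000):
    print(k, abs(integral_discrete(phi, k) - exact))   # error shrinks like 1/k
```

Note that pointwise sets are not respected by the limit (µ^(k) charges rationals only), which is exactly why (3.10) tests only continuous bounded functions.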

In the case where µ^(k) and µ ∈ P(Ξ) are the probability measures induced by Ξ-valued random variables X^(k) and X, respectively, we often say that X^(k) converges weakly to X, denoted by X^(k) ⇒ X as k → ∞. A direct consequence of the definition of weak convergence is that X^(k) ⇒ X implies Φ(X^(k)) ⇒ Φ(X) for any continuous function Φ from Ξ to another metric space.

We state the following results without proofs; the details can be found in [Bil99].

Theorem 3.2.1 If Ξ is complete and separable, then P(Ξ) is complete and separable under the Prohorov metric. Furthermore, if Ξ is compact, then P(Ξ) is compact.

Let {µ^(λ), λ ∈ Λ} ⊂ P(Ξ), where Λ is an arbitrary index set.

Definition 3.2.2 The collection of probability measures {µ^(λ), λ ∈ Λ} is called tight if for each ε > 0 there exists a compact set K_ε ⊂ Ξ such that
\[
\inf_{\lambda\in\Lambda}\mu^{(\lambda)}(K_{\varepsilon}) \ge 1 - \varepsilon. \tag{3.11}
\]
If the measures µ^(λ) are the measures induced by some random variables X^(λ), then we also refer to the collection {X^(λ)} as tight. Condition (3.11) then reads (in the special case where all the random variables are defined on the same space)
\[
\inf_{\lambda\in\Lambda} P\{X^{(\lambda)} \in K_{\varepsilon}\} \ge 1 - \varepsilon.
\]

Theorem 3.2.3 (Prohorov's Theorem) If Ξ is complete and separable, then a set {µ^(λ), λ ∈ Λ} ⊂ P(Ξ) has compact closure in the Prohorov metric if and only if {µ^(λ), λ ∈ Λ} is tight.

Remark 3.2.4 Let Ξ₁ and Ξ₂ be two complete and separable metric spaces, and consider the space Ξ = Ξ₁ × Ξ₂ with the usual product topology. For {µ^(λ), λ ∈ Λ} ⊂ P(Ξ), let {µ_i^(λ), λ ∈ Λ} ⊂ P(Ξ_i), i = 1, 2, be defined by taking µ_i^(λ) to be the marginal distribution of µ^(λ) on Ξ_i. Then {µ^(λ), λ ∈ Λ} is tight if and only if {µ₁^(λ), λ ∈ Λ} and {µ₂^(λ), λ ∈ Λ} are tight.
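Definition 3.2.2 can be illustrated numerically. The sketch below is an added example, not from the text: it takes the family {N(0, σ²) : σ ≤ 1} and the compact sets K_ε = [−1/√ε, 1/√ε] suggested by Chebyshev's inequality (P(|X| > c) ≤ σ²/c² ≤ ε), and estimates the tail probabilities by Monte Carlo.

```python
import random

def tail_prob(sigma, c, n=100_000, seed=0):
    # Monte Carlo estimate of P(|X| > c) for X ~ N(0, sigma^2)
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, sigma)) > c for _ in range(n)) / n

eps = 0.05
c = (1.0 / eps) ** 0.5      # Chebyshev: P(|X| > c) <= sigma^2 / c^2 <= eps
for sigma in (0.2, 0.5, 1.0):
    # each measure puts mass >= 1 - eps on the same compact set [-c, c]
    print(sigma, tail_prob(sigma, c) <= eps)
```

The same compact set works uniformly over the whole family, which is the content of (3.11); a family with escaping mass (e.g. point masses at λ → ∞) would fail this test for every fixed compact set.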

3.2 Existence of Optimal Classical Control


Theorem 3.2.5 Let Ξ be a metric space, and let µ^(k), k = 1, 2, ..., and µ be probability measures in P(Ξ) satisfying µ^(k) ⇒ µ. Let Φ be a real-valued measurable function on Ξ, and define D(Φ) to be the measurable set of points at which Φ is not continuous. Let X^(k) and X be Ξ-valued random variables defined on a probability space (Ω, F, P) that induce the measures µ^(k) and µ on Ξ, respectively. Then Φ(X^(k)) ⇒ Φ(X) whenever P{X ∈ D(Φ)} = 0.

Theorem 3.2.6 (Aldous Criterion) Let X^(k)(·) = {X^(k)(t), t ∈ [0,T]}, k = 1, 2, ..., be a sequence of Ξ-valued continuous processes (defined on the same filtered probability space (Ω, F, P, F)). Then the sequence {X^(k)(·)}_{k=1}^∞ converges weakly if and only if the following condition is satisfied: given k = 1, 2, ..., any bounded F-stopping time τ, and δ > 0, we have
\[
E^{(k)}\big[\|X^{(k)}(\tau+\delta) - X^{(k)}(\tau)\|_{\Xi}^{2} \mid \mathcal F^{(k)}(\tau)\big] \le 2K^{2}\delta(\delta+1).
\]

We also recall the following Skorokhod representation theorem, which is often used to prove convergence with probability 1. The proof can be found in Ethier and Kurtz [EK86].

Theorem 3.2.7 (Skorokhod Representation Theorem) Let Ξ be a separable metric space, and assume that the probability measures {µ^(k)}_{k=1}^∞ ⊂ P(Ξ) converge weakly to µ ∈ P(Ξ). Then there exists a probability space (Ω̃, F̃, P̃) on which are defined Ξ-valued random variables {X̃^(k)}_{k=1}^∞ and X̃ such that, for all Borel sets B ∈ B(Ξ) and all k = 1, 2, ...,
\[
\tilde P\{\tilde X^{(k)} \in B\} = \mu^{(k)}(B), \qquad \tilde P\{\tilde X \in B\} = \mu(B),
\]
and such that
\[
\tilde P\Big\{\lim_{k\to\infty}\tilde X^{(k)} = \tilde X\Big\} = 1.
\]

3.2.1 Admissible Relaxed Controls

We first define a deterministic relaxed control as follows.

Definition 3.2.8 A deterministic relaxed control is a positive measure µ on the Borel σ-algebra B([0,T] × U) such that
\[
\mu([0,t] \times U) = t, \qquad t \in [0,T]. \tag{3.12}
\]

For each G ∈ B(U), the function t ↦ µ([0,t] × G) is absolutely continuous with respect to Lebesgue measure on ([0,T], B([0,T])) by virtue of (3.12). Denote by µ̇(·,G) = (d/dt) µ([0,t] × G) any Lebesgue density of µ([0,t] × G). The family of densities {µ̇(t,G), G ∈ B(U)} is a probability measure on B(U) for each t ∈ [0,T], and
\[
\mu_{u(\cdot)}(B) = \int_0^T\!\!\int_U 1_{\{(t,u)\in B\}}\,\mu(dt,du)
= \int_0^T\!\!\int_U 1_{\{(t,u)\in B\}}\,\dot\mu(t,du)\,dt, \qquad \forall B \in \mathcal B([0,T]\times U). \tag{3.13}
\]
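Relation (3.13) assigns to each classical control u(·) the relaxed control whose density is the point mass µ̇(t,·) = δ_{u(t)}. The sketch below is an added illustration (it assumes T = 1 and U = {−1, +1}, which are not the text's data): it integrates a test function γ against the relaxed representations of the chattering controls u^(k)(t) = (−1)^⌊kt⌋ and compares the result with the relaxed control whose density is ½(δ_{−1} + δ_{+1}) at every t, their weak limit.

```python
import math

def relaxed_integral(gamma, k, n=100_000):
    # ∫ γ dµ_{u^(k)} = ∫_0^1 γ(t, u^(k)(t)) dt  with u^(k)(t) = (-1)^⌊kt⌋
    h = 1.0 / n
    return sum(gamma(i * h, (-1) ** math.floor(k * i * h)) * h for i in range(n))

def limit_integral(gamma, n=100_000):
    # ∫ γ dµ for the relaxed control with density (δ_{-1} + δ_{+1})/2 at every t
    h = 1.0 / n
    return sum(0.5 * (gamma(i * h, -1) + gamma(i * h, 1)) * h for i in range(n))

gamma = lambda t, u: t * u + u * u   # a continuous test function on [0,1] x {-1,1}
for k in (2, 20, 200):
    print(k, abs(relaxed_integral(gamma, k) - limit_integral(gamma)))
```

The discrepancy shrinks like 1/k: although no classical control has density ½(δ_{−1} + δ_{+1}), the relaxed limit does, which is exactly how relaxation restores existence in the chattering example.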


Denote by R the space of deterministic relaxed controls, equipped with the weak compact topology induced by the following notion of convergence: a sequence {µ^(k), k = 1, 2, ...} of relaxed controls converges (weakly) to µ ∈ R if
\[
\lim_{k\to\infty}\int_{[0,T]\times U}\gamma(t,u)\,d\mu^{(k)}(t,u) = \int_{[0,T]\times U}\gamma(t,u)\,d\mu(t,u), \qquad \forall \gamma \in C_c([0,T]\times U), \tag{3.14}
\]
where C_c([0,T] × U) is the space of all real-valued continuous functions on [0,T] × U having compact support. Note that if U is compact, then C_c([0,T] × U) = C([0,T] × U). Under the weak compact topology defined above, R is a (sequentially) compact space; that is, every sequence in R has a subsequence that converges to an element of R in the sense of (3.14).

Now we introduce a suitable filtration for R. We first identify each µ ∈ R with a linear functional on C([0,T] × U) in the following way:
\[
\mu(\varsigma) \equiv \int_0^T\!\!\int_U \varsigma(t,u)\,\mu(dt,du), \qquad \forall \varsigma \in C([0,T]\times U).
\]
For any ς ∈ C([0,T] × U) and t ∈ [0,T], define ς^t ∈ C([0,T] × U) by ς^t(s,u) ≡ ς(s ∧ t, u). Since C([0,T] × U) is separable (and therefore has a countable dense subset), we may let {ς^(k)}_{k=1}^∞ be a countable dense subset (with respect to the uniform norm). It is easy to see that {ς^{(k),t}}_{k=1}^∞ is dense in the set {ς^t | ς ∈ C([0,T] × U)}. Define
\[
\mathcal B_s(R) \equiv \sigma\{\{\mu\in R \mid \mu(\varsigma^{t}) \in B\} \mid \varsigma \in C([0,T]\times U),\ t\in[0,s],\ B\in\mathcal B(\mathbb R)\}.
\]
One can easily show that B_s(R) can also be generated by cylinder sets of the following form:
\[
\sigma\{\{\mu\in R \mid \mu(\varsigma^{(k),t}) \in (a,b)\} \mid s \ge t \in \mathbb Q,\ k = 1,2,\ldots,\ a,b \in \mathbb Q\}. \tag{3.15}
\]

Definition 3.2.9 A relaxed control process is an R-valued random variable µ, defined on a Brownian stochastic basis (Ω, F, P, F, W(·)), such that the mapping ω ↦ µ([0,t] × G)(ω) is F(t)-measurable for all t ∈ [0,T] and G ∈ B(U).

Using a relaxed control process µ(·,·) ∈ R, the controlled state equation can be written as
\[
dx(s) = \int_U f(s,x_s,u)\,\dot\mu(s,du)\,ds + \int_U g(s,x_s,u)\,\dot\mu(s,du)\,dW(s), \qquad s\in[t,T],
\]
or, equivalently, in the form of the stochastic integral equation
\[
x(s) = \psi(0) + \int_t^s\!\!\int_U f(\lambda,x_\lambda,u)\,\dot\mu(\lambda,du)\,d\lambda
+ \int_t^s\!\!\int_U g(\lambda,x_\lambda,u)\,\dot\mu(\lambda,du)\,dW(\lambda), \qquad s\in[t,T], \tag{3.16}
\]
with the initial datum (t,ψ) ∈ [0,T] × C. The objective functional can be written as
\[
\hat J(t,\psi;\mu(\cdot,\cdot)) = E\Big[\int_t^T\!\!\int_U L(s, x_s(\cdot;t,\psi,\mu(\cdot,\cdot)), u)\,\dot\mu(s,du)\,ds
+ \Psi\big(x_T(\cdot;t,\psi,\mu(\cdot,\cdot))\big)\Big], \tag{3.17}
\]
where {x(s;t,ψ,µ(·,·)), s ∈ [t,T]} is the solution process of (3.16) when the relaxed control process µ(·,·) ∈ R is applied. We now define an admissible relaxed control as follows.

Definition 3.2.10 For each initial datum (t,ψ) ∈ [0,T] × C, a six-tuple (Ω, F, P, F, W(·), µ(·,·)) is said to be an admissible relaxed control at (t,ψ) ∈ [0,T] × C if it satisfies the following conditions:
1. (Ω, F, P, F, W(·)) is a certain m-dimensional Brownian stochastic basis.
2. µ(·,·) ∈ R is a relaxed control process defined on the Brownian stochastic basis (Ω, F, P, F, W(·)).
3. Under the relaxed control process µ(·,·), equation (3.16) admits a unique strong solution x(·;t,ψ,µ(·,·)) = {x(s;t,ψ,µ(·,·)), s ∈ [t,T]} through each initial datum (t,ψ) ∈ [0,T] × C.
4. The control process µ(·,·) is such that
\[
E\Big[\int_t^T\!\!\int_U |L(s, x_s(\cdot;t,\psi,\mu(\cdot,\cdot)), u)|\,\dot\mu(s,du)\,ds
+ |\Psi\big(x_T(\cdot;t,\psi,\mu(\cdot,\cdot))\big)|\Big] < \infty.
\]

The collection of admissible relaxed controls (Ω, F, P, F, W(·), µ(·,·)) over the interval [t,T] will be denoted by Û[t,T]. Again, when there is no ambiguity, we often write µ(·,·) ∈ Û[t,T] instead of (Ω, F, P, F, W(·), µ(·,·)).

The optimal relaxed control problem can be stated as follows.

Problem (ORCP) Find an optimal relaxed control µ*(·,·) ∈ Û[t,T] that maximizes (3.17) subject to (3.16).

We again define the value function V̂ : [0,T] × C → ℝ for Problem (ORCP) by
\[
\hat V(t,\psi) = \sup_{\mu(\cdot,\cdot)\in\hat{\mathcal U}[t,T]} \hat J(t,\psi;\mu(\cdot,\cdot)). \tag{3.18}
\]

We have the following existence theorem for Problem (ORCP).


Theorem 3.2.11 Let Assumption 3.1.1 hold. Then for any initial datum (t,ψ) ∈ [0,T] × C, Problem (ORCP) admits an optimal relaxed control µ*(·,·), and its value function V̂ coincides with the value function V of Problem (OCCP).

We postpone the proof of Theorem 3.2.11 until the end of the next subsection.

3.2.2 Existence Result

For the existence of an optimal control for Problem (OCCP), we need the following Roxin condition.

(Roxin's Condition) For every (t,ψ) ∈ [0,T] × C, the set
\[
(f, gg^{\top}, L)(t,\psi,U) \equiv \big\{\big(f_i(t,\psi,u),\ (gg^{\top})_{ij}(t,\psi,u),\ L(t,\psi,u)\big) \ \big|\ u \in U,\ i,j = 1,2,\ldots,n\big\}
\]
is convex in ℝ^{n+n²+1}.

The main purpose of this subsection is to prove the following existence theorem.

Theorem 3.2.12 Let Assumption 3.1.1 and the Roxin condition hold. Then for any initial datum (t,ψ) ∈ [0,T] × C, Problem (OCCP) admits an optimal classical control u*(·) ∈ U[t,T] provided the value function V(t,ψ) is finite.

Proof. Without loss of generality, we can and will assume that t = 0 in the following, for notational simplicity. The proof is similar to that of Theorem 2.5.3 in Yong and Zhou [YZ99] and is carried out in the following steps.

Step 1. Since V(0,ψ) is finite, we can find a maximizing sequence of admissible controls {(Ω^(k), F^(k), P^(k), F^(k), W^(k)(·), u^(k)(·))}_{k=1}^∞ in U[0,T] such that
\[
\lim_{k\to\infty} J(0,\psi;u^{(k)}(\cdot)) = V(0,\psi). \tag{3.19}
\]
Let x^(k)(·) = {x(s;0,ψ,u^(k)(·)), s ∈ [0,T]} be the solution of (3.1) corresponding to u^(k)(·). Define
\[
X^{(k)}(\cdot) \equiv (x^{(k)}(\cdot),\ F^{(k)}(\cdot),\ G^{(k)}(\cdot),\ L^{(k)}(\cdot),\ W^{(k)}(\cdot)),
\]
where the processes F^(k)(·), G^(k)(·), and L^(k)(·) are defined as follows:
\[
F^{(k)}(s) = \int_0^s f(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\,dt,
\]
\[
G^{(k)}(s) = \int_0^s g(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\,dW^{(k)}(t), \tag{3.20}
\]
and
\[
L^{(k)}(s) = \int_0^s e^{-\alpha t} L(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\,dt, \qquad s\in[0,T].
\]

Step 2. We prove the following lemma.

Lemma 3.2.13 Assume Assumption 3.1.1 holds. Then there exists a constant K > 0 such that
\[
E^{(k)}\big[\|X^{(k)}(s_1) - X^{(k)}(s_2)\|^4\big] \le K|s_1-s_2|^2, \qquad \forall s_1,s_2\in[0,T],\ \forall k = 1,2,\ldots,
\]
where E^(k)[···] denotes expectation with respect to the probability measure P^(k).

Proof of the Lemma. Fix k and 0 ≤ s₁ ≤ s₂ ≤ T, and consider
\[
E^{(k)}\big[|F^{(k)}(s_1) - F^{(k)}(s_2)|^4\big]
\le E^{(k)}\bigg[\Big|\int_{s_1}^{s_2} f(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\,dt\Big|^4\bigg]
\]
\[
\le |s_1-s_2|^2\, E^{(k)}\bigg[\Big(\int_{s_1}^{s_2} \big|f(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\big|^2\,dt\Big)^2\bigg]
\]
\[
\le |s_1-s_2|^2\, K_{grow}^4\, E^{(k)}\bigg[\Big(\int_{s_1}^{s_2} \big(1 + \|x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot))\|\big)^2\,dt\Big)^2\bigg].
\]
Since x_t^(k)(·;0,ψ,u^(k)(·)) is continuous P^(k)-a.s. in t on the compact interval [0,T], it can be shown that there exists a constant K > 0 such that
\[
K_{grow}^4\, E^{(k)}\bigg[\Big(\int_{s_1}^{s_2} \big(1 + \|x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot))\|\big)^2\,dt\Big)^2\bigg] < K.
\]
Therefore,
\[
E^{(k)}\big[|F^{(k)}(s_1) - F^{(k)}(s_2)|^4\big] \le K|s_1-s_2|^2, \qquad \forall s_1,s_2\in[0,T],\ \forall k = 1,2,\ldots.
\]
A similar conclusion holds for L^(k); that is,
\[
E^{(k)}\big[|L^{(k)}(s_1) - L^{(k)}(s_2)|^4\big] \le K|s_1-s_2|^2, \qquad \forall s_1,s_2\in[0,T],\ \forall k = 1,2,\ldots.
\]
Next consider
\[
E^{(k)}\big[|G^{(k)}(s_1) - G^{(k)}(s_2)|^4\big]
\le E^{(k)}\bigg[\Big|\int_{s_1}^{s_2} g(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\,dW^{(k)}(t)\Big|^4\bigg]
\]
\[
\le K_1(s_2-s_1)\int_{s_1}^{s_2} E^{(k)}\Big[\big|g(t, x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot)), u^{(k)}(t))\big|^4\Big]\,dt
\quad\text{for some constant}\ K_1 > 0,
\]
\[
\le |s_1-s_2|^2\, K_1 K_{grow}^4\, \sup_{t\in[0,T]} E^{(k)}\Big[\big(1 + \|x_t^{(k)}(\cdot;0,\psi,u^{(k)}(\cdot))\|\big)^4\Big]
\le K|s_1-s_2|^2
\]
for some constant K > 0. It is clear that
\[
E^{(k)}\big[|W^{(k)}(s_1) - W^{(k)}(s_2)|^4\big] = E^{(k)}\big[|W^{(k)}(s_2-s_1)|^4\big] \le K|s_1-s_2|^2,
\]
since W^(k)(s₂−s₁) is Gaussian with mean zero and covariance I^(m)(s₂−s₁). The above estimates give
\[
E^{(k)}\big[\|X^{(k)}(s_1) - X^{(k)}(s_2)\|^4\big] \le K|s_1-s_2|^2, \qquad \forall s_1,s_2\in[0,T],\ \forall k = 1,2,\ldots.
\]
This completes the proof of the lemma. □
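The Gaussian fourth-moment bound used for W^(k) above can be checked by simulation. The sketch below is an added illustration: a scalar Brownian increment over a step of length dt is N(0, dt), so E|W(t+dt) − W(t)|⁴ = 3 dt², consistent with the K|s₁ − s₂|² bound in the lemma.

```python
import random

def fourth_moment(dt, n=300_000, seed=1):
    # Monte Carlo estimate of E|W(t+dt) - W(t)|^4 for scalar Brownian motion:
    # the increment is N(0, dt), so the exact value is 3 * dt^2.
    rng = random.Random(seed)
    sd = dt ** 0.5
    return sum(rng.gauss(0.0, sd) ** 4 for _ in range(n)) / n

dt = 0.5
print(fourth_moment(dt), 3 * dt * dt)   # the estimate should be close to 0.75
```

The quadratic growth in dt (rather than linear) is exactly what feeds the Kolmogorov-type tightness criterion of Proposition 3.2.14 below, with a = 4 and b = 1.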

From the above lemma we use the following well-known results to conclude that {(X^(k)(·), µ_{u^(k)}(·,·))}_{k=1}^∞ is tight as a sequence of C([0,T]; ℝ^{3n+m+1}) × R-valued random variables, since R is compact.

Proposition 3.2.14 Let {ζ^(k)(·)}_{k=1}^∞ be a sequence of d-dimensional continuous processes over [0,T] on a probability space (Ω, F, P) satisfying the following conditions:
\[
\sup_{k\ge 1} E[|\zeta^{(k)}(0)|^c] < \infty
\]
and
\[
\sup_{k\ge 1} E[|\zeta^{(k)}(t) - \zeta^{(k)}(s)|^a] \le K|t-s|^{1+b}, \qquad \forall t,s\in[0,T],
\]
for some constants a, b, c > 0. Then {ζ^(k)(·)}_{k=1}^∞ is tight as a sequence of C([0,T]; ℝ^d)-valued random variables. As a consequence, there exist a subsequence {k_j} and d-dimensional continuous processes {ζ̂^{(k_j)}(·)}_{j=1}^∞ and ζ̂(·) defined on a probability space (Ω̂, F̂, P̂) such that
\[
P(\zeta^{(k_j)}(\cdot) \in A) = \hat P(\hat\zeta^{(k_j)}(\cdot) \in A), \qquad \forall A \in \mathcal B(C([0,T];\mathbb R^d)),
\]
and
\[
\lim_{j\to\infty} \hat\zeta^{(k_j)}(\cdot) = \hat\zeta(\cdot) \quad\text{in}\ C([0,T];\mathbb R^d),\ \hat P\text{-a.s.}
\]

Corollary 3.2.15 Let ζ(·) be a d-dimensional process over [0,T] such that
\[
E[|\zeta(t) - \zeta(s)|^a] \le K|t-s|^{1+b}, \qquad \forall t,s\in[0,T],
\]
for some constants a, b > 0. Then there exists a d-dimensional continuous process that is stochastically equivalent to ζ(·).


We refer the reader to Ikeda and Watanabe [IW81, pp. 17-20] for proofs of the above proposition and corollary.

Step 3. Since {(X^(k)(·), µ_{u^(k)}(·,·))}_{k=1}^∞ is tight as a sequence in C([0,T]; ℝ^{3n+m+1}) × R, by the Skorokhod representation theorem (see Theorem 3.2.7) one can choose a subsequence (still labeled {k}) and obtain
\[
\{(\bar X^{(k)}(\cdot), \bar\mu^{(k)}(\cdot,\cdot))\} \equiv \{(\bar x^{(k)}(\cdot), \bar F^{(k)}(\cdot), \bar G^{(k)}(\cdot), \bar L^{(k)}(\cdot), \bar W^{(k)}(\cdot), \bar\mu^{(k)}(\cdot,\cdot))\}
\]
and
\[
(\bar X(\cdot), \bar\mu(\cdot,\cdot)) \equiv (\bar x(\cdot), \bar F(\cdot), \bar G(\cdot), \bar L(\cdot), \bar W(\cdot), \bar\mu(\cdot,\cdot))
\]
on a suitable common probability space (Ω̄, F̄, P̄) such that
\[
\text{law of}\ (\bar X^{(k)}(\cdot), \bar\mu^{(k)}(\cdot,\cdot)) = \text{law of}\ (X^{(k)}(\cdot), \mu^{(k)}(\cdot,\cdot)), \qquad \forall k \ge 1, \tag{3.21}
\]
and, P̄-a.s.,
\[
\bar X^{(k)}(\cdot) \to \bar X(\cdot) \quad\text{uniformly on}\ [0,T] \tag{3.22}
\]
and
\[
\bar\mu^{(k)}(\cdot,\cdot) \to \bar\mu(\cdot,\cdot) \quad\text{weakly on}\ R. \tag{3.23}
\]

Step 4. Construct the filtrations F̄^(k) = {F̄^(k)(s), s ≥ 0} and F̄ = {F̄(s), s ≥ 0}, where
\[
\bar{\mathcal F}^{(k)}(s) = \sigma\{(\bar W^{(k)}(t), \bar x^{(k)}(t)),\ t \le s\} \vee (\bar\mu^{(k)})^{-1}(\mathcal B_s(R))
\]
and
\[
\bar{\mathcal F}(s) = \sigma\{(\bar W(t), \bar x(t)),\ t \le s\} \vee (\bar\mu)^{-1}(\mathcal B_s(R)), \qquad s \ge 0.
\]
By the definition of B_s(R) and the fact that the σ-algebra generated by the cylinder sets of C([0,T]; ℝ^d) coincides with B(C([0,T]; ℝ^d)), F̄^(k)(s) is the σ-algebra generated by
\[
\bar W^{(k)}(t_1),\ldots,\bar W^{(k)}(t_l),\quad \bar x^{(k)}(t_1),\ldots,\bar x^{(k)}(t_l),\quad \bar\mu^{(k)}(\varsigma^{(j),t_1}),\ldots,\bar\mu^{(k)}(\varsigma^{(j),t_l}),
\]
with t₁, ..., t_l ≤ s, ς^(j) ∈ C([0,T] × U), and j, l = 1, 2, .... A similar statement holds for F̄(s).

We need to show that W̄^(k)(·) = {W̄^(k)(s), s ≥ 0} is an F̄^(k)-Brownian motion. We first note that W^(k)(·) is a Brownian motion with respect to
\[
\big\{\sigma\{(W^{(k)}(t), x^{(k)}(t)),\ t \le s\} \vee (\mu_{u^{(k)}})^{-1}(\mathcal B_s(R)),\ s \ge 0\big\}.
\]
This implies that for any 0 ≤ t ≤ s ≤ T and any bounded function H on ℝ^{(m+n+b)l} (b a positive integer), we have
\[
E^{(k)}\big[H(y^{(k)})(W^{(k)}(s) - W^{(k)}(t))\big] = 0,
\]
where
\[
y^{(k)} = \{W^{(k)}(t_i),\ x^{(k)}(t_i),\ \mu^{(k)}(\varsigma^{(j_a),t_i})\}, \qquad 0 \le t_1 \le t_2 \le \cdots \le t_l \le t,\ a = 1,2,\ldots,b.
\]
Since the law of (X̄^(k)(·), µ̄^(k)(·,·)) equals the law of (X^(k)(·), µ^(k)(·,·)) for all k ≥ 1, we also have
\[
\bar E^{(k)}\big[H(\bar y^{(k)})(\bar W^{(k)}(s) - \bar W^{(k)}(t))\big] = 0,
\]
where
\[
\bar y^{(k)} = \{\bar W^{(k)}(t_i),\ \bar x^{(k)}(t_i),\ \bar\mu^{(k)}(\varsigma^{(j_a),t_i})\}, \qquad 0 \le t_1 \le t_2 \le \cdots \le t_l \le t,\ a = 1,2,\ldots,b.
\]
We therefore have Ē^(k)[(W̄^(k)(s) − W̄^(k)(t)) | F̄^(k)(t)] = 0. In order to show that W̄^(k)(·) is an F̄^(k)-Brownian motion, we also need
\[
\bar E^{(k)}\big[(\bar W^{(k)}(s) - \bar W^{(k)}(t))(\bar W^{(k)}(s) - \bar W^{(k)}(t))^{\top} \mid \bar{\mathcal F}^{(k)}(t)\big] = (s-t)I^{(m)},
\]
which can be shown similarly.

Step 5. Again, since

∀k ≥ 1,

¯ (k) , P¯ )) holds: ¯ F¯ , F the following stochastic integral equation (defined on (Ω,  s (k) x ¯(k) (s) = ψ(0) + ¯(k) ) dt f˜(t, x ¯t , µ  s 0 (k) ¯ (k) (t), g˜(t, x ¯t , µ ¯(k) ) dW + 0

where (k) ¯(k) ) = f˜(t, x ¯t , µ



(k)

f (t, x ¯t , u)µ ¯˙ (k) (t, du)

U

and (k)

¯(k) ) = g˜(t, x ¯t , µ



(k) g(t, x ¯t , u)µ ¯˙ (k) (t, du).

U

¯ (k) Brownian ¯ (k) (·) is a F Note that the above integrals are well defined, since W motion. Moreover, for each s ∈ [0, T ],  s  s (k) (k) ˜ lim ¯ ) dt = ¯) dt f (t, x ¯t , µ f˜(t, x ¯t , µ k→∞

0

0

3.2 Existence of Optimal Classical Control

 lim

k→∞

0

s

(k)

˜ x ¯t , µ e−αt L(t, ¯(k) ) dt =

 0

s

˜ x ¯t , µ e−αt L(t, ¯) dt,

145

P¯ -a.s.,

(k) e−α(T −t) Ψ (¯ xT ) → e−α(T −t) Ψ (¯ xT ), P¯ -a.s.,

and

 lim k→∞  s

s

(k)

¯ (k) (t) g˜(t, x ¯t , µ ¯(k) ) dW

0

¯ (t), g˜(t, x ¯t , µ ¯ ) dW

= 0

P¯ -a.s.,

U



where ¯) = f˜(t, x ¯t , µ

f (t, x ¯t , u)µ ¯˙ (t, du), U



(k) ˜ x L(t, ¯t , µ ¯(k) ) =

(k) L(t, x ¯t , u)µ ¯˙ (k) (t, du),

U

 ˜ x L(t, ¯t , µ ¯) =

L(t, x ¯t , u)µ ¯˙ (t, du), U



and

g(t, x ¯t , u)µ ¯˙ (t, du).

g˜(t, x ¯t , µ ¯) = U

We have by taking the limit k → ∞  s x ¯(s) = ψ(0) + ¯) dt f˜(t, x ¯t , µ 0  s ¯ (t), ∀s ∈ [0, T ], P¯ − a.s. g˜(t, x ¯t , µ ¯ ) dW + 0

Moreover,
\[
J(0,\psi;u^{(k)}(\cdot)) = \bar E\Big[\int_0^T e^{-\alpha t}\tilde L(t, \bar x_t^{(k)}, \bar\mu^{(k)})\,dt + e^{-\alpha T}\Psi(\bar x_T^{(k)})\Big]
\ \to\ \sup_{u(\cdot)\in\mathcal U[0,T]} J(0,\psi;u(\cdot))
\]
as k → ∞.

Step 6. Consider the sequence
\[
\tilde A^{(k)}(s) \equiv (\tilde g\tilde g^{\top})(s, \bar x_s^{(k)}, \bar\mu^{(k)}), \qquad s\in[0,T].
\]
By the Lipschitz continuity and linear growth conditions on f and g, it is tedious but straightforward to show that
\[
\sup_k \bar E\Big[\int_0^T |\tilde A^{(k)}(s)|^2\,ds\Big] < \infty.
\]


Hence the sequence {Ã^(k)}_{k=1}^∞ is weakly relatively compact in the space L²([0,T] × Ω̄; Sⁿ), where Sⁿ is the space of symmetric n × n matrices. We can then find a subsequence (still labeled {k}) and a function Ã ∈ L²([0,T] × Ω̄; Sⁿ) such that
\[
\tilde A^{(k)} \to \tilde A \quad\text{weakly in}\ L^2([0,T]\times\bar\Omega;\,S^n). \tag{3.24}
\]
Denoting by Ã_ij the (i,j) entry of the matrix Ã, we claim that for almost all (s,ω),
\[
\liminf_{k\to\infty} \tilde A_{ij}^{(k)}(s,\omega) \ \le\ \tilde A_{ij}(s,\omega) \ \le\ \limsup_{k\to\infty} \tilde A_{ij}^{(k)}(s,\omega), \qquad i,j = 1,2,\ldots,n. \tag{3.25}
\]
Indeed, if (3.25) were not true and, on a set A ⊂ [0,T] × Ω̄ of positive measure,
\[
\liminf_{k\to\infty} \tilde A_{ij}^{(k)}(s,\omega) > \tilde A_{ij}(s,\omega),
\]
then, by Fatou's lemma, we would have
\[
\liminf_{k\to\infty} \int_A \tilde A_{ij}^{(k)}(s,\omega)\,ds\,d\bar P(\omega) > \int_A \tilde A_{ij}(s,\omega)\,ds\,d\bar P(\omega),
\]
which contradicts (3.24). The same argument applies to the lim sup, which proves (3.25). Moreover, by the Lipschitz continuity and linear growth of f and g, and the fact that X̄^(k)(·) → X̄(·) uniformly on [0,T], for almost all (s,ω) we have
\[
\liminf_{k\to\infty} \tilde A^{(k)}(s) = \liminf_{k\to\infty} (\tilde g\tilde g^{\top})(s, \bar x_s^{(k)}, \bar\mu^{(k)}), \qquad (s,\omega)\in[0,T]\times\bar\Omega, \tag{3.26}
\]
and
\[
\limsup_{k\to\infty} \tilde A^{(k)}(s) = \limsup_{k\to\infty} (\tilde g\tilde g^{\top})(s, \bar x_s^{(k)}, \bar\mu^{(k)}), \qquad (s,\omega)\in[0,T]\times\bar\Omega. \tag{3.27}
\]
Then, combining (3.25), (3.26), (3.27), and the Roxin condition, we have
\[
\tilde A(s,\omega) \in (gg^{\top})(s, \bar x_s(\omega), U), \qquad\text{a.e.}\ (s,\omega). \tag{3.28}
\]
Modify Ã on a null set, if necessary, so that (3.28) holds for all (s,ω) ∈ [0,T] × Ω̄. One can similarly prove that there are f̃ and L̃ ∈ L²([0,T] × Ω̄) such that
\[
\tilde f^{(k)} \to \tilde f, \quad \tilde L^{(k)} \to \tilde L \qquad\text{weakly in}\ L^2([0,T]\times\bar\Omega), \tag{3.29}
\]
and
\[
\tilde f(s,\omega) \in f(s, \bar x_s(\omega), U), \qquad \tilde L(s,\omega) \in L(s, \bar x_s(\omega), U), \tag{3.30}
\]

¯ ∀(s, ω) ∈ [0, T ] × Ω. By (3.28), (3.30) the Roxin condition, and a measurable selection theorem (see ¯ Corollary 2.26 of Li and Yong [LY91, p102]), there is a U -valued, F-adapted process u ¯(·) such that ˜ L)(s, ˜ (f˜, A, ω) = (f, gg  , L)(s, x ¯s (ω), u ¯(s, ω)),

(3.31)

for all (s,ω) ∈ [0,T] × Ω̄.

By (3.28), (3.30), the Roxin condition, and a measurable selection theorem (see Corollary 2.26 of Li and Yong [LY91, p. 102]), there is a U-valued, F̄-adapted process ū(·) such that
\[
(\tilde f, \tilde A, \tilde L)(s,\omega) = (f, gg^{\top}, L)(s, \bar x_s(\omega), \bar u(s,\omega)), \qquad \forall (s,\omega)\in[0,T]\times\bar\Omega. \tag{3.31}
\]

Step 7. The last step is to show that there exists an m-dimensional Brownian motion Ŵ(·) defined on a filtered probability space (Ω̂, F̂, P̂, F̂) that extends the filtered probability space (Ω̄, F̄, P̄, F̄), and then to conclude that (Ω̂, F̂, P̂, F̂, Ŵ(·), ū(·)) ∈ U[0,T] is an optimal control.

We first prove that the Itô integral process Ī(g̃)(·) = {Ī(g̃)(s), s ∈ [0,T]} is an F̄-martingale, where
\[
\bar I(\tilde g)(s) = \int_0^s \tilde g(t, \bar x_t, \bar\mu)\,d\bar W(t), \qquad s\in[0,T].
\]
To see this, once again let 0 ≤ t ≤ s ≤ T and define
\[
\bar y^{(k)} \equiv \{\bar W^{(k)}(t_i),\ \bar x^{(k)}(t_i),\ \bar\mu^{(k)}(\varsigma^{(j_a),t_i})\}
\quad\text{and}\quad
\bar y \equiv \{\bar W(t_i),\ \bar x(t_i),\ \bar\mu(\varsigma^{(j_a),t_i})\},
\]
with 0 ≤ t₁ ≤ t₂ ≤ ··· ≤ t_l ≤ s and a = 1, 2, ..., b. Since Ī^(k)(g̃)(·) is an F̄^(k)-martingale, for any bounded continuous function H : ℝ^{(m+n+b)l} → ℝ we have
\[
0 = \bar E\big[H(\bar y^{(k)})(\bar I^{(k)}(\tilde g)(s) - \bar I^{(k)}(\tilde g)(t))\big]
\ \to\ \bar E\big[H(\bar y)(\bar I(\tilde g)(s) - \bar I(\tilde g)(t))\big], \tag{3.32}
\]
since X̄^(k)(·) → X̄(·) uniformly on [0,T], µ̄^(k) → µ̄ weakly on R, and by the dominated convergence theorem. This proves that Ī(g̃)(·) is an F̄-martingale. Furthermore,
\[
\langle \bar I^{(k)}(\tilde g)\rangle(s) = \int_0^s \tilde A^{(k)}(t)\,dt,
\]
where ⟨Ī^(k)(g̃)⟩ is the quadratic variation of Ī^(k)(g̃)(·). Hence
\[
(\bar I^{(k)}(\tilde g))(\bar I^{(k)}(\tilde g))^{\top} - \int_0^s \tilde A^{(k)}(t)\,dt
\]
is an F̄^(k)-martingale. Recalling that Ã^(k)(·) → Ã(·) weakly in L²([0,T] × Ω̄), we have, for any t, s ∈ [0,T],
\[
\int_t^s \tilde A^{(k)}(\lambda)\,d\lambda \ \to\ \int_t^s (gg^{\top})(\lambda, \bar x_\lambda, \bar u(\lambda))\,d\lambda \quad\text{weakly in}\ L^2(\bar\Omega).
\]
On the other hand, by the dominated convergence theorem, we have, for real-valued H of appropriate dimension,
\[
H(\bar y^{(k)}) \to H(\bar y) \quad\text{strongly in}\ L^2(\bar\Omega).
\]
Thus,
\[
\bar E\Big[H(\bar y^{(k)})\int_t^s \tilde A^{(k)}(\lambda)\,d\lambda\Big] \ \to\ \bar E\Big[H(\bar y)\int_t^s (gg^{\top})(\lambda, \bar x_\lambda, \bar u(\lambda))\,d\lambda\Big].
\]
Therefore, using an argument similar to the above, we obtain that M̄(·) = {M̄(s), s ∈ [0,T]} is an F̄-martingale, where
\[
\bar M(s) \equiv (\bar I(g))(\bar I(g))^{\top}(s) - \int_0^s (gg^{\top})(t, \bar x_t, \bar u(t))\,dt.
\]
This implies that
\[
\langle \bar I(g)\rangle(s) = \int_0^s (gg^{\top})(t, \bar x_t, \bar u(t))\,dt.
\]
By a martingale representation theorem (see Chapter 1), there is an extension (Ω̂, F̂, F̂, P̂) of (Ω̄, F̄, F̄, P̄) on which lives an m-dimensional F̂-Brownian motion Ŵ(·) = {Ŵ(s), s ≥ 0} such that
\[
\bar I(g)(s) = \int_0^s g(t, \bar x_t, \bar u(t))\,d\hat W(t).
\]
Similarly, one can show that
\[
\bar F(s) = \int_0^s f(t, \bar x_t, \bar u(t))\,dt.
\]
Substituting these into
\[
\bar x(s) = \psi(0) + \bar F(s) + \bar I(g)(s), \qquad \forall s\in[0,T],\ \bar P\text{-a.s.},
\]
together with
\[
\bar E\Big[\int_0^T e^{-\alpha t} L(t, \bar x_t, \bar u(t))\,dt + e^{-\alpha T}\Psi(\bar x_T)\Big]
= \sup_{u(\cdot)\in\mathcal U[0,T]} J(0,\psi;u(\cdot)),
\]
we arrive at the conclusion that (Ω̂, F̂, P̂, F̂, Ŵ(·), ū(·)) ∈ U[0,T] is an optimal control. This proves the theorem. □


Proof of Theorem 3.2.11. The idea in proving the existence of an optimal relaxed control µ*(·,·) is to (1) observe that the space of relaxed controls R is sequentially compact, since [0,T] × U is compact (see Theorem 3.2.1), and hence every sequence in R has a convergent subsequence under the weak compact topology defined by (3.14); (2) check that Ĵ(t,ψ;·) : R → ℝ is a (sequentially) upper semicontinuous function on the sequentially compact space R; (3) invoke the classical theorem (see, e.g., Rudin [Rud71]) stating that any (sequentially) upper semicontinuous real-valued function on a (sequentially) compact space attains its maximum, so that Ĵ(t,ψ;·) attains its maximum at some point µ*(·,·) ∈ R (see, e.g., Yong and Zhou [YZ99, p. 65]); and (4) show that the value function for Problem (ORCP) coincides with that of the original Optimal Classical Control Problem (OCCP).

First, the following proposition is analogous to Theorem 10.1.1 of Kushner and Dupuis [KD01, pp. 271-275] in our setting. The details of the proof are very similar to those of Theorem 3.2.12, and therefore only a sketch is provided here.

Proposition 3.2.16 Let Assumption 3.1.1 hold. Let
\[
\{(\Omega^{(k)}, \mathcal F^{(k)}, P^{(k)}, \mathbf F^{(k)}, W^{(k)}(\cdot), \mu^{(k)}(\cdot,\cdot))\}_{k=1}^{\infty}
\]
be any sequence of admissible relaxed controls in Û[t,T]. For each k = 1, 2, ..., let {x^(k)(s; t, ψ^(k), µ^(k)(·,·)), s ∈ [t−r, T]} be the corresponding strong solution of (3.16) through the initial datum (t, ψ^(k)) ∈ [0,T] × C, and assume that the sequence of initial functions {ψ^(k), k = 1, 2, ...} converges to ψ ∈ C. Then the sequence {(x^(k)(·), W^(k)(·), µ^(k)(·,·)), k = 1, 2, ...} is tight. Denote by (x(·), W(·), µ(·,·)) a limit point of this sequence, and define the filtration {H(s), s ∈ [t,T]} by
\[
\mathcal H(s) = \sigma\big((x(\lambda), W(\lambda), \mu(\lambda,G)),\ t \le \lambda \le s,\ G\in\mathcal B(U)\big).
\]
Then W(·) is an {H(s), s ∈ [t,T]}-adapted Brownian motion, the six-tuple (Ω, F, P, F, W(·), µ(·,·)) is an admissible relaxed control, and the process x(·) = {x(s; t, ψ, µ(·,·)), s ∈ [t−r, T]} is the strong solution process of (3.16) defined on (Ω, F, P, F, W(·), µ(·,·)) with the initial datum (t,ψ) ∈ [0,T] × C.


A Sketch of Proof. Without loss of generality, we can and will assume that {(Ω^(k), F^(k), P^(k), F^(k), W^(k)(·), µ^(k)(·,·)), k = 1, 2, ...} is the maximizing sequence of admissible relaxed controls for Problem (ORCP). We claim that the sequence of triplets
\[
\{(W^{(k)}(\cdot),\ \mu^{(k)}(\cdot,\cdot),\ x^{(k)}(\cdot)),\ k = 1,2,\ldots\} \tag{3.33}
\]
is tight and therefore has a subsequence, also denoted by (3.33), that converges weakly to some triplet (W(·), µ(·,·), x(·)), where W(·) is a standard Brownian motion, µ(·,·) is the optimal relaxed control process, and x(·) is the optimal state process (corresponding to the optimal relaxed control process) that satisfies (3.16). Componentwise tightness implies tightness of the product (cf. Remark 3.2.4 or [Bil99, p. 65]). We therefore prove the following componentwise results.

First, we observe that the sequence {W^(k)(·)}_{k=1}^∞ is tight, because all of its members have the same (Wiener) probability measure. Note that W^(k)(·) is continuous for each k = 1, 2, ..., and so is its limit W(·). To show that W(·) is an m-dimensional standard Brownian motion, we use the martingale characterization theorem of Section 1.2.1 of Chapter 1 by showing that if Φ ∈ C₀²(ℝ^m) (the space of real-valued twice continuously differentiable functions on ℝ^m with compact support), then M_Φ(·) is an F-martingale, where
\[
M_\Phi(s) \equiv \Phi(W(s)) - \Phi(0) - \int_0^s \mathcal L_w\Phi(W(t))\,dt, \qquad s \ge 0,
\]
and L_w is the differential operator defined by (1.8). To prove this, we have, by the fact that W^(k)(·) is an F^(k)-Brownian motion,
\[
E\Big[H\big(x^{(k)}(t_i), W^{(k)}(t_i), \mu^{(k)}(t_i),\ i \le p\big)
\Big(\Phi(W^{(k)}(t+\lambda)) - \Phi(W^{(k)}(t)) - \int_t^{t+\lambda}\mathcal L_w\Phi(W^{(k)}(s))\,ds\Big)\Big] = 0. \tag{3.34}
\]

By the probability-1 convergence implied by the Skorokhod representation theorem,
\[
E\Big[\Big|\int_t^{t+\lambda}\mathcal L_w\Phi(W^{(k)}(s))\,ds - \int_t^{t+\lambda}\mathcal L_w\Phi(W(s))\,ds\Big|\Big] \to 0.
\]
Using this result and taking limits in (3.34) yields
\[
E\Big[H\big(x(t_i), W(t_i), \mu(t_i),\ i \le p\big)
\Big(\Phi(W(t+\lambda)) - \Phi(W(t)) - \int_t^{t+\lambda}\mathcal L_w\Phi(W(s))\,ds\Big)\Big] = 0. \tag{3.35}
\]
The set of random variables H(x(t_i), W(t_i), µ(t_i), i ≤ p), as H(·), p, and the t_i vary over all possibilities, induces the σ-algebra F(t). Thus (3.35) implies that
\[
\Phi(W(s)) - \Phi(0) - \int_0^s \mathcal L_w\Phi(W(t))\,dt
\]
is an F-martingale for all Φ of the chosen class. Thus W(·) is a standard F-Brownian motion.

Second, the sequence {µ^(k)(·,·), k = 1, 2, ...} is tight, because the space R is (sequentially) weakly compact. Furthermore, its weak limit µ(·,·) ∈ R satisfies µ([0,t] × U) = t for all t ∈ [0,T].

Third, the tightness of the sequence of processes {x^(k)(·), k = 1, 2, ...} follows from the Aldous criterion (cf. Theorem 3.2.6 or [Bil99, pp. 176-179]): given k = 1, 2, ..., any bounded F-stopping time τ, and δ > 0, we have
\[
E^{(k)}\big[|x^{(k)}(\tau+\delta) - x^{(k)}(\tau)|^2 \mid \mathcal F^{(k)}(\tau)\big] \le 2K^2\delta(\delta+1)
\]
as a consequence of Assumption 3.1.1 and Itô's isometry.

To show that the limit process x(·) = {x(s), s ∈ [t,T]} satisfies (3.16), we note that it has been shown in the proof of Theorem 3.2.12 that we have both the pathwise convergence of the Lebesgue integral
\[
\lim_{k\to\infty}\int_t^s\!\!\int_U f(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,d\lambda
= \int_t^s\!\!\int_U f(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,d\lambda, \qquad P\text{-a.s.},
\]
and of the stochastic integral
\[
\lim_{k\to\infty}\int_t^s\!\!\int_U g(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,dW^{(k)}(\lambda)
= \int_t^s\!\!\int_U g(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,dW(\lambda), \qquad P\text{-a.s.}
\]

We assume that the probability spaces are chosen as required by the Skorokhod representation theorem (Theorem 3.2.7), so that we may suppose that the convergence of {(W^(k)(·), µ^(k)(·,·), x^(k)(·))}_{k=1}^∞ to its limit is with probability 1 in the topology of the path spaces of the processes. Thus
\[
\int_t^s\!\!\int_U f(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,d\lambda
\ \to\ \int_t^s\!\!\int_U f(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,d\lambda
\]
and
\[
\int_t^s\!\!\int_U g(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,dW^{(k)}(\lambda)
\ \to\ \int_t^s\!\!\int_U g(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,dW(\lambda)
\]
as k → ∞, uniformly on [t,T] with probability 1. The sequence {µ^(k)(·,·)}_{k=1}^∞ converges weakly; in particular, for any Φ ∈ C_b([0,T] × U),
\[
\int_t^T\!\!\int_U \Phi(\lambda,u)\,\mu^{(k)}(d\lambda,du) \ \to\ \int_t^T\!\!\int_U \Phi(\lambda,u)\,\mu(d\lambda,du).
\]
Since ψ^(k) ∈ C converges to ψ ∈ C, we therefore conclude that
\[
x(s) = \psi(0) + \int_t^s\!\!\int_U f(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,d\lambda
+ \int_t^s\!\!\int_U g(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,dW(\lambda), \qquad s\in[t,T]. \tag{3.36}
\]

We next claim that the weak limit (x(·), W(·), µ(·,·)) is continuous on the time interval [t,T]. First, x(·) is a continuous process; this is because both the pathwise Lebesgue integral
\[
\lim_{k\to\infty}\int_t^s\!\!\int_U f(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,d\lambda
= \int_t^s\!\!\int_U f(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,d\lambda, \qquad P\text{-a.s.},
\]
and the stochastic integral
\[
\lim_{k\to\infty}\int_t^s\!\!\int_U g(\lambda, x_\lambda^{(k)}, u)\,\dot\mu^{(k)}(\lambda,du)\,dW^{(k)}(\lambda)
= \int_t^s\!\!\int_U g(\lambda, x_\lambda, u)\,\dot\mu(\lambda,du)\,dW(\lambda), \qquad P\text{-a.s.},
\]
are continuous in s. Similarly,
\[
\int_t^T\!\!\int_U e^{-\alpha(s-t)} L(s, x_s^{(k)}, u)\,\dot\mu^{(k)}(s,du)\,ds
\ \to\ \int_t^T\!\!\int_U e^{-\alpha(s-t)} L(s, x_s, u)\,\dot\mu(s,du)\,ds
\]
and
\[
e^{-\alpha(T-t)}\Psi(x_T^{(k)}) \ \to\ e^{-\alpha(T-t)}\Psi(x_T)
\]
as k → ∞ with probability 1. □

We have therefore proved the following two propositions.


Proposition 3.2.17 Let Assumption 3.1.1 hold. Suppose the sequence of initial segment functions {ψ^(k)}_{k=1}^∞ ⊂ C converges to ψ ∈ C. Then
\[
\lim_{k\to\infty} \hat V(t,\psi^{(k)}) = \hat V(t,\psi).
\]

Proposition 3.2.18 Let Assumption 3.1.1 hold, and let µ(·,·) ∈ Û[t,T] be the relaxed control representation of u(·) ∈ U[t,T] via (3.13). Then
\[
V(t,\psi) := \sup_{u(\cdot)\in\mathcal U[t,T]} J(t,\psi;u(\cdot))
= \hat V(t,\psi) := \sup_{\mu(\cdot,\cdot)\in\hat{\mathcal U}[t,T]} \hat J(t,\psi;\mu(\cdot,\cdot)).
\]

Proof of Theorem 3.2.11. The theorem follows immediately from Propositions 3.2.17 and 3.2.18. □

3.3 Dynamic Programming Principle

3.3.1 Some Probabilistic Results

To establish and prove the dynamic programming principle (DPP), we need the following probabilistic results. First, recall that if O is a nonempty set and O is a collection of subsets of O, the collection O is called a π-system if it is closed under finite intersection; that is, A, B ∈ O imply A ∩ B ∈ O. It is a λ-system if the following three conditions are satisfied: (i) O ∈ O; (ii) A, B ∈ O and A ⊂ B imply B − A ∈ O; and (iii) A_i ∈ O, A_i ↑ A, i = 1, 2, ..., implies A ∈ O. The following lemmas will be used later.

Lemma 3.3.1 Let O and Õ be two collections of subsets of O with O ⊂ Õ. Suppose O is a π-system and Õ is a λ-system. Then σ(O) ⊂ Õ, where σ(O) is the smallest σ-algebra containing O.

Proof. This is the well-known monotone class theorem, a proof of which can be found in Lemma 1.1.2 of [YZ99].

Lemma 3.3.2 Let O be a π-system on O. Let H be a linear space of functions from O to ℝ such that 1 ∈ H; I_A ∈ H for all A ∈ O; and
\[
\varphi^{(i)} \in \mathcal H \ \text{with}\ 0 \le \varphi^{(i)} \uparrow \varphi,\ \varphi\ \text{finite}\ \Rightarrow\ \varphi \in \mathcal H.
\]
Then H contains all σ(O)-measurable functions from O to ℝ, where 1 is the constant function of value 1 and I_A is the indicator function of A.


Proof. Let
\[
\tilde{\mathcal{O}} = \{A \subset O \mid I_A \in \mathcal{H}\}.
\]
Then $\tilde{\mathcal{O}}$ is a $\lambda$-system containing $\mathcal{O}$. From Lemma 3.3.1 it can be shown that $\sigma(\mathcal{O}) \subset \tilde{\mathcal{O}}$. Now, for any $\sigma(\mathcal{O})$-measurable function $\varphi : O \to \Re$, we set, for $i = 1, 2, \ldots$,
\[
\varphi^{(i)} = \sum_{j} j 2^{-i} I_{\{j 2^{-i} \le \varphi^{+} < (j+1) 2^{-i}\}},
\]
so that $0 \le \varphi^{(i)} \uparrow \varphi^{+}$; hence $\varphi^{+} \in \mathcal{H}$, and similarly $\varphi^{-} \in \mathcal{H}$, so $\varphi = \varphi^{+} - \varphi^{-} \in \mathcal{H}$. $\Box$

For $t \in [0,T]$, let $C_t([0,T]; \Re^m)$ denote the subspace of paths in $C([0,T]; \Re^m)$ stopped at time $t$ (i.e., constant on $[t,T]$), and set
\[
\mathcal{B}_t(C([0,T]; \Re^m)) \equiv \sigma(\mathcal{B}(C_t([0,T]; \Re^m))), \qquad
\mathcal{B}_{t+}(C([0,T]; \Re^m)) \equiv \bigcap_{s > t} \mathcal{B}_s(C([0,T]; \Re^m)), \quad t \in [0,T],
\]
where $\sigma(\mathcal{B}(C_t([0,T]; \Re^m)))$ denotes the smallest $\sigma$-algebra in $C([0,T]; \Re^m)$ that contains $\mathcal{B}(C_t([0,T]; \Re^m))$. Clearly, both of the following are filtered measurable spaces:
\[
(C([0,T]; \Re^m), \mathcal{B}(C([0,T]; \Re^m)), \{\mathcal{B}_t(C([0,T]; \Re^m))\}_{t \ge 0})
\]
and
\[
(C([0,T]; \Re^m), \mathcal{B}(C([0,T]; \Re^m)), \{\mathcal{B}_{t+}(C([0,T]; \Re^m))\}_{t \ge 0}).
\]
However, in general,
\[
\mathcal{B}_{t+}(C([0,T]; \Re^m)) \ne \mathcal{B}_t(C([0,T]; \Re^m)), \quad \forall t \in [0,T].
\]
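The dyadic approximation used in the proof of Lemma 3.3.2 is simply the function truncated down to a grid of spacing $2^{-i}$; a small numerical sketch (the sample function below is arbitrary, not from the text):

```python
import numpy as np

def dyadic_approx(phi_vals, i):
    """The i-th dyadic approximation sum_j j*2^{-i} * 1{j*2^{-i} <= phi < (j+1)*2^{-i}},
    which equals phi rounded down to the grid of spacing 2^{-i}."""
    return np.floor(phi_vals * 2 ** i) / 2 ** i

phi = np.abs(np.sin(np.linspace(0, 10, 1000)))   # a sample nonnegative function phi^+
approx5 = dyadic_approx(phi, 5)
approx10 = dyadic_approx(phi, 10)
# monotone convergence from below: 0 <= approx5 <= approx10 <= phi, error <= 2^{-i}
print(np.max(phi - approx10))
```

Refining $i$ by one halves the grid, so the approximations increase pointwise to $\varphi^{+}$, which is exactly what the closure property of $\mathcal{H}$ under monotone limits requires.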

A set $B \subset C([0,T]; \Re^m)$ is called a Borel cylinder if there exist a partition $\pi = \{0 \le t_1 < t_2 < \cdots < t_j \le T\}$ of $[0,T]$ and $A \in \mathcal{B}((\Re^m)^j)$ such that
\[
B = \pi^{-1}(A) \equiv \{\xi \in C([0,T]; \Re^m) \mid (\xi(t_1), \xi(t_2), \ldots, \xi(t_j)) \in A\}. \tag{3.37}
\]
For $s \in [0,T]$, let $\mathcal{C}(s)$ be the set of all Borel cylinders in $C_s([0,T]; \Re^m)$ of the form (3.37) with partition $\pi \subset [0,s]$.

Lemma 3.3.3 The $\sigma$-algebra $\sigma(\mathcal{C}(T))$ generated by $\mathcal{C}(T)$ coincides with the Borel $\sigma$-algebra $\mathcal{B}(C([0,T]; \Re^m))$ of $C([0,T]; \Re^m)$.

Proof. Let the partition $\pi = \{0 \le t_1 < t_2 < \cdots < t_j \le T\}$ of $[0,T]$ be given. We define a point-mass projection map $\Pi : C([0,T]; \Re^m) \to (\Re^m)^j$ associated with the partition $\pi$ as follows:
\[
\Pi(\xi) = (\xi(t_1), \xi(t_2), \ldots, \xi(t_j)), \quad \forall \xi \in C([0,T]; \Re^m).
\]


Clearly, $\Pi$ is continuous. Consequently, for any $A \in \mathcal{B}((\Re^m)^j)$, it follows that $\Pi^{-1}(A) \in \mathcal{B}(C([0,T]; \Re^m))$. This implies that
\[
\sigma(\mathcal{C}(T)) \subset \mathcal{B}(C([0,T]; \Re^m)). \tag{3.38}
\]
Next, for any $\hat\xi \in C([0,T]; \Re^m)$ and $\epsilon > 0$, the closed $\epsilon$-ball $B(\hat\xi; \epsilon)$ in $C([0,T]; \Re^m)$ can be written as
\[
B(\hat\xi; \epsilon) \equiv \Big\{\xi \in C([0,T]; \Re^m) \ \Big|\ \sup_{t \in [0,T]} |\xi(t) - \hat\xi(t)| \le \epsilon\Big\}
= \bigcap_{t \in \mathbf{Q} \cap [0,T]} \{\xi \in C([0,T]; \Re^m) \mid |\xi(t) - \hat\xi(t)| \le \epsilon\} \in \sigma(\mathcal{C}(T)), \tag{3.39}
\]
since $\{\xi \in C([0,T]; \Re^m) \mid |\xi(t) - \hat\xi(t)| \le \epsilon\}$ is a Borel cylinder and $\mathbf{Q}$ is the set of all rational numbers (which is countable). Because the collection of all sets of the form of the left-hand side of (3.39) is a basis of the closed (and therefore open) sets in $C([0,T]; \Re^m)$, we have
\[
\mathcal{B}(C([0,T]; \Re^m)) \subset \sigma(\mathcal{C}(T)). \tag{3.40}
\]
Combining (3.38) and (3.40), we obtain the conclusion of the lemma. $\Box$
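The key identity behind (3.39) is that, for continuous paths, the sup over $[0,T]$ is already determined by countably many (dense) time points. A small sketch (the two paths below are arbitrary examples):

```python
import numpy as np

def sup_dist_on_grid(xi, xi_hat, ts):
    """sup_t |xi(t) - xi_hat(t)| evaluated only at the times in ts."""
    return max(abs(xi(t) - xi_hat(t)) for t in ts)

xi = lambda t: np.sin(3 * t)          # sample continuous paths on [0, 1]
xi_hat = lambda t: t ** 2

fine = np.linspace(0, 1, 20001)       # dense grid standing in for Q intersect [0, 1]
coarse = np.linspace(0, 1, 101)
d_fine = sup_dist_on_grid(xi, xi_hat, fine)
d_coarse = sup_dist_on_grid(xi, xi_hat, coarse)
print(d_coarse, "<=", d_fine)         # refining the countable grid recovers the sup norm
```

For continuous paths the grid values converge to the true sup norm as the grid becomes dense, which is exactly why the countable intersection in (3.39) captures the closed ball.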

Lemma 3.3.4 Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\zeta : [0,T] \times \Omega \to \Re^m$ be a continuous process. Then there exists an $\Omega_0 \in \mathcal{F}$ with $P(\Omega_0) = 1$ such that $\zeta : \Omega_0 \to C([0,T]; \Re^m)$, and for any $s \in [0,T]$,
\[
\Omega_0 \cap \mathcal{F}^{\zeta}(s) = \Omega_0 \cap \zeta^{-1}(\mathcal{B}_s(C([0,T]; \Re^m))), \tag{3.41}
\]
where $\mathbf{F}^{\zeta} = \{\mathcal{F}^{\zeta}(s), s \in [0,T]\}$ is the filtration of sub-$\sigma$-algebras generated by the process $\zeta(\cdot)$; that is, for all $s \in [0,T]$, $\mathcal{F}^{\zeta}(s) = \sigma\{\zeta(t), 0 \le t \le s\}$.

Proof. Let $t \in [0,s]$ and $A \in \mathcal{B}(\Re^m)$ be fixed. Then
\[
B(t) \equiv \{\xi \in C([0,T]; \Re^m) \mid \xi(t) \in A\} \in \mathcal{C}(s)
\]
and
\[
\omega \in \zeta^{-1}(B(t)) \iff \zeta(\cdot, \omega) \in B(t) \iff \zeta(t, \omega) \in A \iff \omega \in \zeta^{-1}(t, \cdot)(A).
\]
Thus,
\[
\zeta^{-1}(t, \cdot)(A) = \zeta^{-1}(B(t)).
\]
Then by Lemma 3.3.3, we obtain (3.41). $\Box$


Lemma 3.3.5 Let $(\Omega, \mathcal{F})$ and $(\tilde\Omega, \tilde{\mathcal{F}})$ be two measurable spaces and let $(\Xi, d)$ be a Polish (complete and separable) metric space. Let $\zeta : \Omega \to \tilde\Omega$ and $\varphi : \Omega \to \Xi$ be two random variables. Then $\varphi$ is $\sigma(\zeta)$-measurable, that is,
\[
\varphi^{-1}(\mathcal{B}(\Xi)) \subset \zeta^{-1}(\tilde{\mathcal{F}}), \tag{3.42}
\]
if and only if there exists a measurable map $\eta : \tilde\Omega \to \Xi$ such that
\[
\varphi(\omega) = \eta(\zeta(\omega)), \quad \forall \omega \in \Omega. \tag{3.43}
\]

Proof. We only need to prove the necessity. First, we assume that $\Xi = \Re$. For this case, set
\[
\mathcal{H} \equiv \{\eta(\zeta(\cdot)) \mid \eta : \tilde\Omega \to \Xi \text{ is measurable}\}.
\]
Then $\mathcal{H}$ is a linear space and $1 \in \mathcal{H}$. We note here that $\zeta^{-1}(\tilde{\mathcal{F}})$ forms a $\pi$-system; that is, it is closed under finite intersections. Also, if $A \in \sigma(\zeta) \equiv \zeta^{-1}(\tilde{\mathcal{F}})$, then for some $B \in \tilde{\mathcal{F}}$, $I_A(\cdot) = I_B(\zeta(\cdot)) \in \mathcal{H}$. Now, suppose $\eta^{(i)} : \tilde\Omega \to \Xi$ is measurable for $i = 1, 2, \ldots$ and $\eta^{(i)}(\zeta(\cdot)) \in \mathcal{H}$ is such that $0 \le \eta^{(i)}(\zeta(\cdot)) \uparrow \xi(\cdot)$, which is finite. Set
\[
A = \{\tilde\omega \in \tilde\Omega \mid \sup_i \eta^{(i)}(\tilde\omega) < \infty\}.
\]
Then $A \in \tilde{\mathcal{F}}$ and $\zeta(\Omega) \subset A$. Define
\[
\eta(\tilde\omega) = \begin{cases} \sup_i \eta^{(i)}(\tilde\omega) & \text{if } \tilde\omega \in A, \\ 0 & \text{if } \tilde\omega \in \tilde\Omega - A. \end{cases}
\]
Clearly, $\eta : \tilde\Omega \to \Xi$ is measurable and $\xi(\cdot) = \eta(\zeta(\cdot))$. Thus, $\xi(\cdot) \in \mathcal{H}$. By Lemma 3.3.2, $\mathcal{H}$ contains all $\sigma(\zeta)$-measurable random variables; in particular, $\varphi \in \mathcal{H}$, which leads to (3.43). This proves our conclusion for the case $\Xi = \Re$.

Now, let $(\Xi, d)$ be an uncountable Polish space. Then it is known that $(\Xi, d)$ is Borel isomorphic to the Borel measurable space of real numbers $(\Re, \mathcal{B}(\Re))$; that is, there exists a bijection $h : \Xi \to \Re$ such that $h(\mathcal{B}(\Xi)) = \mathcal{B}(\Re)$. Consider the map $\tilde\varphi = h \circ \varphi : \Omega \to \Re$, which satisfies
\[
\tilde\varphi^{-1}(\mathcal{B}(\Re)) = \varphi^{-1} \circ h^{-1}(\mathcal{B}(\Re)) = \varphi^{-1}(\mathcal{B}(\Xi)) \subset \zeta^{-1}(\tilde{\mathcal{F}}).
\]
Thus, there exists an $\tilde\eta : \tilde\Omega \to \Re$ such that
\[
\tilde\varphi(\omega) = \tilde\eta(\zeta(\omega)), \quad \forall \omega \in \Omega.
\]
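Lemma 3.3.5 is a Doob-Dynkin-type factorization. In a finite sample space the factoring map $\eta$ can be built directly as a lookup table; a toy sketch (all names below are hypothetical):

```python
def factorize(zeta, phi, omegas):
    """Given phi constant on the level sets of zeta (i.e. sigma(zeta)-measurable),
    build eta with phi = eta o zeta as a lookup table; raise otherwise."""
    eta = {}
    for w in omegas:
        key = zeta(w)
        if key in eta and eta[key] != phi(w):
            raise ValueError("phi is not sigma(zeta)-measurable")
        eta[key] = phi(w)
    return lambda z: eta[z]

omegas = range(12)
zeta = lambda w: w % 4          # zeta partitions Omega into 4 level sets
phi = lambda w: (w % 4) ** 2    # phi depends on w only through zeta(w)
eta = factorize(zeta, phi, omegas)
print(all(phi(w) == eta(zeta(w)) for w in omegas))   # True
```

A function that distinguishes points inside one level set of $\zeta$ cannot be factored, which is the measurability condition (3.42) in miniature.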


Later we will take $\Xi$ to be the control set $U$. Let $\mathcal{A}^m_T(U)$ be the set of all $\mathcal{B}_{t+}(C([0,T]; \Re^m))$-progressively measurable processes
\[
\eta : [0,T] \times C([0,T]; \Re^m) \to U,
\]
where $U$ is the control set, which is assumed to be a Polish (complete and separable) metric space.

Proposition 3.3.6 Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and let $U$ be a Polish space. Let $\zeta : [0,T] \times \Omega \to \Re^m$ be a continuous process and $\mathcal{F}^{\zeta}(s) = \sigma(\zeta(t); 0 \le t \le s)$. Then $\varphi : [0,T] \times \Omega \to U$ is $\{\mathcal{F}^{\zeta}(s)\}$-adapted if and only if there exists an $\eta \in \mathcal{A}^m_T(U)$ such that
\[
\varphi(t, \omega) = \eta(t, \zeta(\cdot \wedge t, \omega)), \quad t \in [0,T],\ P\text{-a.s. } \omega \in \Omega.
\]

Proof. We prove only the "only if" part; the "if" part is clear. For any $s \in [0,T]$, we consider the mapping
\[
\theta_s(t, \omega) \equiv (t \wedge s, \zeta(\cdot \wedge s, \omega)) : [0,T] \times \Omega \to [0,s] \times C_s([0,T]; \Re^m).
\]
By Lemma 3.3.4, we have $\mathcal{B}([0,s]) \times \mathcal{F}^{\zeta}(s) = \sigma(\theta_s)$. On the other hand, $(t, \omega) \mapsto \varphi(t \wedge s, \omega)$ is $(\mathcal{B}([0,s]) \times \mathcal{F}^{\zeta}(s))/\mathcal{B}(U)$-measurable. Thus, by Lemma 3.3.5, there exists a measurable map
\[
\eta^{s} : ([0,T] \times C_s([0,T]; \Re^m), \mathcal{B}([0,s]) \times \mathcal{B}_s(C([0,T]; \Re^m))) \to U
\]
such that
\[
\varphi(t \wedge s, \omega) = \eta^{s}(t \wedge s, \zeta(\cdot \wedge s, \omega)), \quad \forall \omega \in \Omega,\ t \in [0,T]. \tag{3.44}
\]
Now, for any $i \ge 1$, let $0 = t^i_0 < t^i_1 < \cdots$ be a partition of $[0,T]$ (with the mesh $\max_{j \ge 1} |t^i_j - t^i_{j-1}| \to 0$ as $i \to \infty$) and define
\[
\eta^{(i)}(t, \xi) = \eta^{0}(0, \xi(\cdot \wedge 0)) I_{\{0\}}(t) + \sum_{j \ge 1} \eta^{t^i_j}(t, \xi(\cdot \wedge t^i_j)) I_{(t^i_{j-1}, t^i_j]}(t), \quad \forall (t, \xi) \in [0,T] \times C([0,T]; \Re^m). \tag{3.45}
\]
For any $t \in [0,T]$, there exists $j$ such that $t^i_{j-1} < t \le t^i_j$. Then
\[
\eta^{(i)}(t, \zeta(\cdot \wedge t^i_j, \omega)) = \eta^{t^i_j}(t, \zeta(\cdot \wedge t^i_j, \omega)) = \varphi(t, \omega). \tag{3.46}
\]
Now, in the case $U = \Re$, $\mathbf{N}$, or $\{1, 2, \ldots, n\}$, we may define
\[
\eta(t, \xi) = \lim_{i \to \infty} \eta^{(i)}(t, \xi) \tag{3.47}
\]
to get the desired result. In the case where $U$ is a general Polish space, we need to amend the proof in the same fashion as in that of Lemma 3.3.5. $\Box$
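Proposition 3.3.6 says that an adapted control is a nonanticipative functional of the stopped path $\zeta(\cdot \wedge t)$. The causality property this encodes is easy to illustrate numerically (the functional below is an arbitrary example, not from the text):

```python
import numpy as np

def eta(t_idx, path):
    """A nonanticipative functional: depends on the path only through
    its values up to index t_idx (here, a running average of the past)."""
    return np.mean(path[: t_idx + 1])

rng = np.random.default_rng(0)
w = np.cumsum(rng.normal(size=100))      # a discretized driving path
w_future_changed = w.copy()
w_future_changed[51:] += 100.0           # perturb the path strictly after "time" 50

u_t = eta(50, w)
u_t_perturbed = eta(50, w_future_changed)
print(u_t == u_t_perturbed)              # True: the control cannot see the future
```

Evaluating $\eta$ on $\zeta(\cdot \wedge t)$ rather than on the whole path is precisely what makes the resulting control process adapted.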


3.3.2 Continuity of the Value Function

For each $t \in [0,T]$, the following lemma shows that the value function $V(t, \cdot) : \mathbf{C} \to \Re$ is Lipschitz.

Lemma 3.3.7 Assume Assumptions (A3.1.1)-(A3.1.3) hold. The value function $V$ satisfies the following property: there is a constant $K_V \ge 0$, not greater than $3 K_{\mathrm{lip}} (T+1) e^{3T(T+4m)K_{\mathrm{lip}}^2}$, such that for all $t \in [0,T]$ and $\phi, \varphi \in \mathbf{C}$, we have
\[
|V(t, \phi) - V(t, \varphi)| \le K_V \|\phi - \varphi\|.
\]

Proof. Let $t \in [0,T]$ and $\phi, \varphi \in \mathbf{C}$. We have, by definition,
\[
|V(t, \phi) - V(t, \varphi)| \le \sup_{u(\cdot) \in \mathcal{U}[t,T]} |J(t, \phi; u(\cdot)) - J(t, \varphi; u(\cdot))|.
\]
Let $x(\cdot)$ and $y(\cdot)$ be the solution processes of (3.1) under the control process $u(\cdot)$ but with the two different initial data $(t, \phi)$ and $(t, \varphi) \in [0,T] \times \mathbf{C}$, respectively. Then by (A3.1.3) of Assumption 3.1.1, we have for all $u(\cdot) \in \mathcal{U}[t,T]$,
\[
|J(t, \phi; u(\cdot)) - J(t, \varphi; u(\cdot))|
\le E\left[\int_t^T |L(s, x_s, u(s)) - L(s, y_s, u(s))|\,ds + |\Psi(x_T) - \Psi(y_T)|\right]
\le K_{\mathrm{lip}} (1 + T - t)\, E\Big[\sup_{s \in [-r,T]} |x(s) - y(s)|\Big].
\]
Now, by the Cauchy-Schwarz inequality,
\[
\Big(E\Big[\sup_{s \in [-r,T]} |x(s) - y(s)|\Big]\Big)^2 \le E\Big[\sup_{s \in [-r,T]} |x(s) - y(s)|^2\Big] \le 2 E\Big[\sup_{s \in [0,T]} |x(s) - y(s)|^2\Big] + 2\|\phi - \varphi\|^2,
\]
while Hölder's inequality, Doob's maximal inequality, Itô's isometry, Fubini's theorem, and (A3.1.3) of Assumption 3.1.1 together yield
\begin{align*}
E\Big[\sup_{s\in[0,T]}|x(s)-y(s)|^2\Big]
&\le 3|\phi(0)-\varphi(0)|^2 + 3T\,E\int_t^T |f(s,x_s,u(s))-f(s,y_s,u(s))|^2\,ds \\
&\quad + 3m \sum_{i=1}^n \sum_{j=1}^m E\Big[\sup_{v\in[t,T]}\Big|\int_t^v \big(g_{ij}(s,x_s,u(s))-g_{ij}(s,y_s,u(s))\big)\,dW_j(s)\Big|^2\Big] \\
&\le 3|\phi(0)-\varphi(0)|^2 + 3TK_{\mathrm{lip}}^2 \int_t^T E\big[\|x_s-y_s\|^2\big]\,ds
 + 12m\,E\int_t^T \sum_{i=1}^n \sum_{j=1}^m \big|g_{ij}(s,x_s,u(s))-g_{ij}(s,y_s,u(s))\big|^2\,ds \\
&\le 3|\phi(0)-\varphi(0)|^2 + 3(T+4m)K_{\mathrm{lip}}^2 \int_t^T E\Big[\sup_{\lambda\in[t-r,s]}|x(\lambda)-y(\lambda)|^2\Big]\,ds.
\end{align*}
Since $|\phi(0) - \varphi(0)| \le \|\phi - \varphi\|$, Gronwall's lemma gives
\[
E\Big[\sup_{s \in [t-r,T]} |x(s) - y(s)|^2\Big] \le 8\|\phi - \varphi\|^2 e^{6T(T+4m)K_{\mathrm{lip}}^2}.
\]
Combining the above estimates, we obtain the assertion. $\Box$
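The Gronwall argument above (continuous dependence of the trajectory on the initial segment) can be sketched for a deterministic delay equation, where the same estimate is purely mechanical. Everything in this sketch is hypothetical (an arbitrary Lipschitz right-hand side and step sizes), not the book's system:

```python
import numpy as np

def solve_dde(phi, T=2.0, r=0.5, h=0.005):
    """Euler scheme for x'(t) = 0.8*sin(x(t)) + 0.5*sin(x(t-r)) on [0, T],
    with initial segment phi on [-r, 0]; returns the trajectory on [-r, T]."""
    n_hist, n = int(r / h), int(T / h)
    x = np.empty(n_hist + 1 + n)
    x[: n_hist + 1] = [phi(-r + k * h) for k in range(n_hist + 1)]
    for k in range(n):
        i = n_hist + k               # index of time t_k = k*h
        x[i + 1] = x[i] + h * (0.8 * np.sin(x[i]) + 0.5 * np.sin(x[i - n_hist]))
    return x

phi1 = lambda s: np.cos(s)
phi2 = lambda s: np.cos(s) + 0.01 * (1 + s)     # a perturbed initial segment
x1, x2 = solve_dde(phi1), solve_dde(phi2)

seg_dist = max(abs(phi1(s) - phi2(s)) for s in np.linspace(-0.5, 0.0, 101))
sup_dist = np.max(np.abs(x1 - x2))
K = 0.8 + 0.5                                   # total Lipschitz constant of f
print(sup_dist, "<=", seg_dist * np.exp(K * 2.0))   # the Gronwall-type bound
```

The sup distance between the two trajectories stays within $e^{KT}$ times the distance between the initial segments, the deterministic analogue of the Lipschitz estimate in Lemma 3.3.7.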

3.3.3 The DPP

The advantage of the weak formulation of the control problem will be apparent in the following lemmas and their use in the proof of the DPP. Let $t \in [0,T]$ and $u(\cdot) \in \mathcal{U}[t,T]$. Then under Assumption 3.1.1, for any $\mathbf{F}$-stopping time $\tau \in [t,T)$ and $\mathcal{F}(\tau)$-measurable random variable $\xi : \Omega \to \mathbf{C}$, we can solve the following controlled SHDE:
\[
dx(s) = f(s, x_s, u(s))\,ds + g(s, x_s, u(s))\,dW(s), \quad s \in [\tau, T], \tag{3.48}
\]
with the initial function $x_\tau = \xi$ at the stopping time $\tau$.

Lemma 3.3.8 Let $t \in [0,T]$ and $(\Omega, \mathcal{F}, P, \mathbf{F}, W(\cdot), u(\cdot)) \in \mathcal{U}[t,T]$. Then for any $\mathbf{F}$-stopping time $\tau \in [t,T)$ and any $\mathcal{F}(\tau)$-measurable random variable $\xi : \Omega \to \mathbf{C}$,
\[
J(\tau, \xi(\omega); u(\cdot)) = E\left[\int_\tau^T e^{-\alpha(s-\tau)} L(s, x_s(\cdot; \tau, \xi, u(\cdot)), u(s))\,ds + e^{-\alpha(T-\tau)} \Psi(x_T(\cdot; \tau, \xi, u(\cdot))) \,\Big|\, \mathcal{F}(\tau)\right](\omega) \quad P\text{-a.s. } \omega.
\]

Proof. Since $u(\cdot)$ is $\mathbf{F}$-adapted, where $\mathbf{F}$ is the $P$-augmented natural filtration generated by $W(\cdot)$, by Proposition 3.3.6 there is a function $\eta \in \mathcal{A}^m_T(U)$ such that
\[
u(s, \omega) = \eta(s, W(\cdot \wedge s, \omega)), \quad P\text{-a.s. } \omega \in \Omega,\ \forall s \in [t,T].
\]
Then (3.48) can be written as
\[
dx(s) = f(s, x_s, \eta(s, W(\cdot \wedge s)))\,ds + g(s, x_s, \eta(s, W(\cdot \wedge s)))\,dW(s), \quad s \in [\tau, T], \tag{3.49}
\]
with $x_\tau = \xi$. Due to Assumption 3.1.1, Theorem 1.3.12, and Theorem 1.3.16, this equation has a strongly unique strong solution and, therefore, weak uniqueness holds. In addition, we may write, for $s \ge \tau$,
\[
u(s, \omega) = \eta(s, W(\cdot \wedge s, \omega)) = \eta(s, \tilde W(\cdot \wedge s, \omega) + W(\tau, \omega)),
\]
where $\tilde W(s) = W(s) - W(\tau)$. Since $\tau$ is random, $\tilde W(\cdot)$ is not a Brownian motion under the probability measure $P$. However, we may, under the weak formulation of the control problem, change the probability measure $P$ as follows. Note first that
\[
P\{\omega' \mid \tau(\omega') = \tau(\omega) \mid \mathcal{F}(\tau)\}(\omega) = E[1_{\{\omega' : \tau(\omega') = \tau(\omega)\}} \mid \mathcal{F}(\tau)](\omega) = 1_{\{\omega' : \tau(\omega') = \tau(\omega)\}}(\omega) = 1, \quad P\text{-a.s. } \omega \in \Omega.
\]
This means that there is an $\Omega_0 \in \mathcal{F}$ with $P(\Omega_0) = 1$ so that, for any fixed $\omega_0 \in \Omega_0$, $\tau$ becomes a deterministic time $\tau(\omega_0)$; that is, $\tau = \tau(\omega_0)$ almost surely in the new probability space $(\Omega, \mathcal{F}, P(\cdot \mid \mathcal{F}(\tau)))$, where $P(\cdot \mid \mathcal{F}(\tau))$ denotes the probability measure $P$ restricted to the sub-$\sigma$-algebra $\mathcal{F}(\tau)$. A similar argument shows that $W(\tau)$ almost surely equals a constant $W(\tau(\omega_0), \omega_0)$ and also that $\xi$ almost surely equals a constant $\xi(\omega_0)$ when we work in the probability space $(\Omega, \mathcal{F}, P(\cdot \mid \mathcal{F}(\tau))(\omega_0))$. So, under the measure $P(\cdot \mid \mathcal{F}(\tau))(\omega_0)$, the process
\[
\tilde W(s) = W(s) - W(\tau(\omega_0))
\]
will be a standard Brownian motion for $s \ge \tau(\omega_0)$, and for any $s \ge \tau(\omega_0)$,
\[
u(s, \omega) = \eta(s, \tilde W(\cdot \wedge s, \omega) + W(\tau(\omega_0), \omega)).
\]
It follows then that $u(\cdot)$ is adapted to the filtration $\mathbf{F}(\tau(\omega_0))$ generated by the standard Brownian motion $\tilde W(s)$ for $s \ge \tau(\omega_0)$. Hence, by the definition of admissible controls,
\[
(\Omega, \mathcal{F}, P(\cdot \mid \mathcal{F}(\tau))(\omega_0), \tilde W(\cdot), u|_{[\tau(\omega_0), T]}) \in \mathcal{U}[\tau(\omega_0), T].
\]
Note that for $A \in \mathcal{B}(\mathbf{C})$,
\[
P[\xi \in A \mid \mathcal{F}(\tau)](\omega_0) = E[1_{\{\xi \in A\}} \mid \mathcal{F}(\tau)](\omega_0) = E[1_{\{\xi \in A\}}] = P\{\xi \in A\}.
\]
This means that the two weak solutions
\[
(\Omega, \mathcal{F}, P, \mathbf{F}, x(\cdot), W(\cdot)) \quad \text{and} \quad (\Omega, \mathcal{F}, P(\cdot \mid \mathcal{F}(\tau))(\omega_0), \tilde{\mathbf{F}}, x(\cdot), \tilde W(\cdot))
\]
of (3.1) have the same initial distribution. Then, by the weak uniqueness,
\begin{align*}
J(\tau, \xi(\omega); u(\cdot)) &= E\left[\int_\tau^T e^{-\alpha(s-\tau)} L(s, x^{\tau,\xi(\omega),u(\cdot)}_s, u(s))\,ds + e^{-\alpha(T-\tau)} \Psi(x_T)\right] \\
&= E\left[\int_\tau^T e^{-\alpha(s-\tau)} L(s, x_s(\tau, \xi, u(\cdot)), u(s))\,ds + e^{-\alpha(T-\tau)} \Psi(x_T(\tau, \xi, u(\cdot))) \,\Big|\, \mathcal{F}(\tau)\right](\omega), \quad P\text{-a.s. } \omega. \qquad \Box
\end{align*}


The following dynamic programming principle (DPP) is due to Larssen [Lar02].

Theorem 3.3.9 (Dynamic Programming Principle) Let Assumption 3.1.1 hold. Then for any initial datum $(t, \psi) \in [0,T] \times \mathbf{C}$ and $\mathbf{F}$-stopping time $\tau \in [t,T]$,
\[
V(t, \psi) = \sup_{u(\cdot) \in \mathcal{U}[t,T]} E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(\tau-t)} V(\tau, x_\tau(\cdot; t, \psi, u(\cdot)))\right]. \tag{3.50}
\]

Proof. Denote the right-hand side of (3.50) by $\bar V(t, \psi)$. Given any $\epsilon > 0$, there exists an $(\Omega, \mathcal{F}, P, \mathbf{F}, W(\cdot), u(\cdot)) \in \mathcal{U}[t,T]$ such that $V(t, \psi) - \epsilon \le J(t, \psi; u(\cdot))$. Equivalently,
\begin{align*}
V(t, \psi) - \epsilon &\le J(t, \psi; u(\cdot))
= E\left[\int_t^T e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(T-t)} \Psi(x_T(\cdot; t, \psi, u(\cdot)))\right] \\
&= E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + \int_\tau^T e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(T-t)} \Psi(x_T(\cdot; t, \psi, u(\cdot)))\right].
\end{align*}
Therefore,
\begin{align*}
V(t, \psi) - \epsilon &\le E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds\right] \\
&\quad + E\left[e^{-\alpha(\tau-t)} E\left[\int_\tau^T e^{-\alpha(s-\tau)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(T-\tau)} \Psi(x_T(\cdot; t, \psi, u(\cdot))) \,\Big|\, \mathcal{F}(\tau)\right]\right] \\
&= E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(\tau-t)} J(\tau, x_\tau(\cdot; t, \psi, u(\cdot)); u(\cdot))\right] \quad \text{(by Lemma 3.3.8)} \\
&\le E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(\tau-t)} V(\tau, x_\tau(\cdot; t, \psi, u(\cdot)))\right],
\end{align*}
so by taking the supremum over $u(\cdot) \in \mathcal{U}[t,T]$, we have
\[
V(t, \psi) - \epsilon \le \bar V(t, \psi), \quad \forall \epsilon > 0.
\]
This shows that
\[
V(t, \psi) \le \bar V(t, \psi), \quad \forall (t, \psi) \in [0,T] \times \mathbf{C}. \tag{3.51}
\]

Conversely, we want to show that
\[
V(t, \psi) \ge \bar V(t, \psi), \quad \forall (t, \psi) \in [0,T] \times \mathbf{C}.
\]
Let $\epsilon > 0$; by Lemma 3.3.7 and its proof, there is a $\tilde\delta = \tilde\delta(\epsilon)$ such that whenever $\|\psi - \hat\psi\| < \tilde\delta$,
\[
|J(\tau, \psi; u(\cdot)) - J(\tau, \hat\psi; u(\cdot))| + |V(\tau, \psi) - V(\tau, \hat\psi)| \le \epsilon, \quad \forall u(\cdot) \in \mathcal{U}[\tau, T]. \tag{3.52}
\]
Now, let $\{D_j\}_{j \ge 1}$ be a Borel partition of $\mathbf{C}$. This means that $D_j \in \mathcal{B}(\mathbf{C})$ for each $j$, $\cup_{j \ge 1} D_j = \mathbf{C}$, and $D_i \cap D_j = \emptyset$ if $i \ne j$. We also assume that the $D_j$ are chosen so that $\|\phi - \varphi\| < \tilde\delta$ whenever $\phi$ and $\varphi$ are both in $D_j$. Choose $\psi^{(j)} \in D_j$. For each $j$, there exists an $(\Omega^{(j)}, \mathcal{F}^{(j)}, P^{(j)}, \mathbf{F}^{(j)}, W^{(j)}(\cdot), u^{(j)}(\cdot)) \in \mathcal{U}[\tau, T]$ such that
\[
J(\tau, \psi^{(j)}; u^{(j)}(\cdot)) \ge V(\tau, \psi^{(j)}) - \epsilon. \tag{3.53}
\]
For any $\psi \in D_j$, (3.52) implies in particular that
\[
J(\tau, \psi; u^{(j)}(\cdot)) \ge J(\tau, \psi^{(j)}; u^{(j)}(\cdot)) - \epsilon \quad \text{and} \quad V(\tau, \psi^{(j)}) \ge V(\tau, \psi) - \epsilon. \tag{3.54}
\]
Combining the above inequalities, we see that
\[
J(\tau, \psi; u^{(j)}(\cdot)) \ge J(\tau, \psi^{(j)}; u^{(j)}(\cdot)) - \epsilon \ge V(\tau, \psi^{(j)}) - 2\epsilon \ge V(\tau, \psi) - 3\epsilon. \tag{3.55}
\]
By the definition of the tuple $(\Omega^{(j)}, \mathcal{F}^{(j)}, P^{(j)}, \mathbf{F}^{(j)}, W^{(j)}(\cdot), u^{(j)}(\cdot)) \in \mathcal{U}[\tau, T]$, there is a function $\varphi^{(j)} \in \mathcal{A}^m_T(U)$ such that
\[
u^{(j)}(s, \omega) = \varphi^{(j)}(s, W^{(j)}(\cdot \wedge s, \omega)), \quad P^{(j)}\text{-a.s. } \omega \in \Omega^{(j)},\ \forall s \in [\tau, T].
\]
Now, let $(\Omega, \mathcal{F}, P, \mathbf{F}, W(\cdot), u(\cdot)) \in \mathcal{U}[t,T]$ be arbitrary. Define the new control $\tilde u(\cdot) = \{\tilde u(s), s \in [t,T]\}$, where
\[
\tilde u(s, \omega) = u(s, \omega) \quad \text{if } s \in [t, \tau),
\]
and
\[
\tilde u(s, \omega) = \varphi^{(j)}(s, W(\cdot \wedge s, \omega) - W(\tau, \omega)) \quad \text{if } s \in [\tau, T] \text{ and } x_\tau(\cdot; t, \psi, u(\cdot)) \in D_j.
\]
Then $(\Omega, \mathcal{F}, P, \mathbf{F}, W(\cdot), \tilde u(\cdot)) \in \mathcal{U}[t,T]$. Thus,
\begin{align*}
V(t, \psi) \ge J(t, \psi; \tilde u(\cdot)) &= E\left[\int_t^T e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, \tilde u(\cdot)), \tilde u(s))\,ds + e^{-\alpha(T-t)} \Psi(x_T(\cdot; t, \psi, \tilde u(\cdot)))\right] \\
&\ge E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds\right] \\
&\quad + E\left[e^{-\alpha(\tau-t)} E\left[\int_\tau^T e^{-\alpha(s-\tau)} L(s, x_s(\cdot; \tau, x_\tau(t, \psi, u(\cdot)), \tilde u(\cdot)), \tilde u(s))\,ds + e^{-\alpha(T-\tau)} \Psi(x_T(\cdot; \tau, x_\tau(t, \psi, u(\cdot)), \tilde u(\cdot))) \,\Big|\, \mathcal{F}(\tau)\right]\right].
\end{align*}
Therefore,
\begin{align*}
V(t, \psi) &\ge E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(\tau-t)} J(\tau, x_\tau(\cdot; t, \psi, u(\cdot)); \tilde u(\cdot))\right] \quad \text{(by Lemma 3.3.8)} \\
&\ge E\left[\int_t^\tau e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha(\tau-t)} V(\tau, x_\tau(\cdot; t, \psi, u(\cdot)))\right] - 3\epsilon \quad \text{(by (3.55))}.
\end{align*}
Since this holds for arbitrary $(\Omega, \mathcal{F}, P, \mathbf{F}, W(\cdot), u(\cdot)) \in \mathcal{U}[t,T]$, by taking the supremum over $\mathcal{U}[t,T]$ we obtain
\[
V(t, \psi) \ge \bar V(t, \psi) - 3\epsilon, \quad \forall \epsilon > 0.
\]
Letting $\epsilon \downarrow 0$, we conclude that
\[
V(t, \psi) \ge \bar V(t, \psi), \quad \forall (t, \psi) \in [0,T] \times \mathbf{C}. \tag{3.56}
\]
This proves the DPP. $\Box$
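The two-sided identity (3.50) can be sketched numerically in a toy deterministic setting where the value function is known in closed form. The toy problem below is hypothetical (chosen to mirror the Fleming-Soner example cited later in this chapter, posed as a minimization): dynamics $\dot x = u$, $|u| \le a$, zero running cost, terminal cost $x$, with absorption at the level $-1$ at cost $-1$; its value function is $V(t,x) = \max(-1,\ x - a(1-t))$, and one backward step of the DPP reproduces it exactly:

```python
import numpy as np

A = 1.0                                   # control bound: u in [-A, A]

def V(t, x):
    """Closed-form value function of the toy minimization problem:
    drift left at full speed; stop at -1 with cost -1, else pay x(1)."""
    return max(-1.0, x - A * (1.0 - t))

def dpp_rhs(t, x, dt, n_controls=21):
    """One-step DPP: inf over constant controls u of V(t + dt, x + u*dt)."""
    controls = np.linspace(-A, A, n_controls)
    return min(V(t + dt, x + u * dt) for u in controls)

errs = [abs(V(t, x) - dpp_rhs(t, x, 0.05))
        for t in (0.0, 0.3, 0.6) for x in (-1.5, -0.5, 0.0, 0.8)]
print(max(errs))    # essentially zero: the one-step DPP recursion holds exactly
```

The optimal one-step control is always $u = -A$, matching the optimal feedback of the continuous-time problem, so the recursion incurs no error here.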


Assumption 3.3.10 There exists a constant $K > 0$ such that
\[
|f(t, \phi, u)| + |g(t, \phi, u)| + |L(t, \phi, u)| + |\Psi(\phi)| \le K, \quad \forall (t, \phi, u) \in [0,T] \times \mathbf{C} \times U.
\]

In addition to its continuity in the initial function $\psi \in \mathbf{C}$ proved in Lemma 3.3.7, the value function $V$ has some regularity in the time variable as well. Using Theorem 3.3.9, it is shown below that the value function $V$ is Hölder continuous in time with exponent $\gamma$ for any $\gamma \le \frac12$, provided that the initial segment is at least $\gamma$-Hölder continuous. Notice that the coefficients $f$, $g$, and $L$ need not be Hölder continuous in time. Except for the role of the initial segment, the statement and proof of the following lemma are analogous to the nondelay case (see, e.g., Krylov [Kry80, p. 167]). See also Proposition 2 of Fischer and Nappo [FN06] for the delay case.

Lemma 3.3.11 Assume Assumptions 3.1.1 and 3.3.10 hold. Let the initial function $\psi \in \mathbf{C}$ be $\gamma$-Hölder continuous with Hölder constant $K(H)$, where $\gamma \le \frac12$. Then the function $V(\cdot, \psi) : [0,T] \to \Re$ is Hölder continuous; that is, there is a constant $K(V) > 0$ depending only on $K(H)$, $K$ (the Lipschitz constant in Assumption 3.1.1), $T$, and the dimensions such that for all $t, \tilde t \in [0,T]$,
\[
|V(t, \psi) - V(\tilde t, \psi)| \le K(V) \big( |t - \tilde t|^{\gamma} \vee |t - \tilde t|^{1/2} \big).
\]

Proof. Let the initial function $\psi \in \mathbf{C}$ be $\gamma$-Hölder continuous with Hölder constant $K(H)$. Without loss of generality, we assume that $s = t + h$ for some $h > 0$. We may also assume that $h \le \frac12$, because we can choose $K(V) \ge 4K(T+1)$ so that the asserted inequality holds for $|t - s| > \frac12$. By the DPP (Theorem 3.3.9), we see that
\begin{align*}
|V(t, \psi) - V(s, \psi)| &= |V(t, \psi) - V(t+h, \psi)| \\
&= \Big| \sup_{u(\cdot) \in \mathcal{U}[t,T]} E\Big[\int_t^{t+h} e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,ds + e^{-\alpha h} V(t+h, x_{t+h}(t, \psi, u(\cdot)))\Big] - V(t+h, \psi) \Big| \\
&\le \sup_{u(\cdot) \in \mathcal{U}[t,T]} E\Big[\int_t^{t+h} e^{-\alpha(s-t)} \big|L(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\big|\,ds\Big] \\
&\quad + \sup_{u(\cdot) \in \mathcal{U}[t,T]} E\big[|e^{-\alpha h} V(t+h, x_{t+h}(\cdot; t, \psi, u(\cdot))) - V(t+h, \psi)|\big] \\
&\le Kh + \sup_{u(\cdot) \in \mathcal{U}[t,T]} L(V)\, E\big[\|x_{t+h}(\cdot; t, \psi, u(\cdot)) - \psi\|\big],
\end{align*}
where $K$ is the largest Lipschitz constant from Assumption 3.1.1 and $L(V)$ is the Lipschitz constant for $V$ in the segment variable according to Lemma 3.3.7. Notice that $\psi = x_t(t, \psi, u(\cdot))$ for all $u(\cdot) \in \mathcal{U}[t,T]$. By the linear growth condition (Assumption 3.1.1) on $f$ and $g$, the Hölder inequality, the Doob-Davis-Gundy maximal inequality (Theorem 1.2.11), and Itô's isometry, we have for arbitrary $u(\cdot)$,
\begin{align*}
E\big[\|x_{t+h}(\cdot; t, \psi, u(\cdot)) - \psi\|\big]
&\le \sup_{\theta \in [-r,-h]} |\psi(\theta + h) - \psi(\theta)| + \sup_{\theta \in [-h,0]} |\psi(0) - \psi(\theta)| \\
&\quad + E\Big[\int_t^{t+h} |f(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))|\,ds\Big] + E\Big[\Big|\int_t^{t+h} g(s, x_s(\cdot; t, \psi, u(\cdot)), u(s))\,dW(s)\Big|^2\Big]^{1/2} \\
&\le 2K(H) h^{\gamma} + Kh + 4Km\sqrt{h}.
\end{align*}
Putting everything together, we have the assertion of the lemma. $\Box$
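The $\sqrt{h}$ term in this estimate comes from the modulus of continuity of the stochastic integral. For Brownian motion itself, the scaling $E[\sup_{s \le h} |W(s)|] \propto \sqrt{h}$ can be checked by simulation; a rough sketch with arbitrary discretization parameters:

```python
import numpy as np

def mean_sup_abs_bm(h, n_paths=4000, n_steps=64, seed=1):
    """Monte Carlo estimate of E[sup_{0<=s<=h} |W(s)|] for scalar Brownian motion."""
    rng = np.random.default_rng(seed)
    dw = rng.normal(scale=np.sqrt(h / n_steps), size=(n_paths, n_steps))
    w = np.cumsum(dw, axis=1)
    return np.abs(w).max(axis=1).mean()

e1 = mean_sup_abs_bm(0.04, seed=1)
e2 = mean_sup_abs_bm(0.01, seed=2)   # quartering h should roughly halve the modulus
print(e2 / e1)                        # close to 0.5, reflecting the sqrt(h) scaling
```

This is why, even with bounded coefficients, the value function inherits only $\frac12$-Hölder regularity in time from the diffusion term, regardless of how smooth $f$, $g$, and $L$ are.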

3.4 The Infinite-Dimensional HJB Equation

We will use the dynamic programming principle (Theorem 3.3.9) to derive the HJBE. Recall that $\{x_s(\cdot; t, \psi, u(\cdot)), s \in [t,T]\}$ is a $\mathbf{C}$-valued (strong) Markov process whenever $u(\cdot) \in \mathcal{U}[t,T]$. Therefore, using Theorem 2.4.1, we have the following.

Theorem 3.4.1 Suppose that $\Phi \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$. Let $u(\cdot) \in \mathcal{U}[t,T]$ with $\lim_{s \downarrow t} u(s) = u \in U$, and let $\{x_s(\cdot; t, \psi, u(\cdot)), s \in [t,T]\}$ be the $\mathbf{C}$-valued Markov process of (3.1) with the initial datum $(t, \psi) \in [0,T] \times \mathbf{C}$. Then
\[
\lim_{\epsilon \downarrow 0} \frac{E[\Phi(t + \epsilon, x_{t+\epsilon}(t, \psi, u(\cdot)))] - \Phi(t, \psi)}{\epsilon} = \partial_t \Phi(t, \psi) + \mathcal{A}^u \Phi(t, \psi), \tag{3.57}
\]
where
\[
\mathcal{A}^u \Phi(t, \psi) = \mathcal{S}\Phi(t, \psi) + \bar D\Phi(t, \psi)(f(t, \psi, u)1_{\{0\}}) + \frac12 \sum_{j=1}^{m} \bar D^2\Phi(t, \psi)(g(t, \psi, u)(e_j)1_{\{0\}}, g(t, \psi, u)(e_j)1_{\{0\}}), \tag{3.58}
\]
and $e_j$, $j = 1, 2, \ldots, m$, is the $j$th unit vector of the standard basis in $\Re^m$.

Heuristic Derivation of the HJB Equation

Let $u \in U$. We define the Fréchet differential operator $\mathcal{A}^u$ as follows:


\[
\mathcal{A}^u \Phi(t, \psi) \equiv \mathcal{S}(\Phi)(t, \psi) + \bar D\Phi(t, \psi)(f(t, \psi, u)1_{\{0\}}) + \frac12 \sum_{j=1}^{m} \bar D^2\Phi(t, \psi)(g(t, \psi, u)e_j 1_{\{0\}}, g(t, \psi, u)e_j 1_{\{0\}}) \tag{3.59}
\]
for any $\Phi \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$, where $e_j$ is the $j$th vector of the standard basis in $\Re^m$. To recall the meaning of the terms involved in this differential operator, we remind the reader of the following definitions given earlier in this volume. First, $\mathcal{S}(\Phi)(t, \psi)$ is defined (see (2.11) of Chapter 2) as
\[
\mathcal{S}(\Phi)(t, \psi) = \lim_{\epsilon \downarrow 0} \frac{\Phi(t, \tilde\psi_{t+\epsilon}) - \Phi(t, \psi)}{\epsilon}, \tag{3.60}
\]
where $\tilde\psi : [-r, T] \to \Re^n$ is the extension of $\psi \in \mathbf{C}$ from $[-r, 0]$ to $[-r, T]$ defined by
\[
\tilde\psi(t) = \begin{cases} \psi(0) & \text{for } t \ge 0, \\ \psi(t) & \text{for } t \in [-r, 0). \end{cases}
\]
Second, $D\Phi(t, \psi) \in \mathbf{C}^*$ and $D^2\Phi(t, \psi) \in \mathbf{C}^\dagger$ are the first- and second-order Fréchet derivatives of $\Phi$ with respect to its second argument $\psi \in \mathbf{C}$. In addition, $\bar D\Phi(t, \psi) \in (\mathbf{C} \oplus \mathbf{B})^*$ is the extension of $D\Phi(t, \psi)$ from $\mathbf{C}^*$ to $(\mathbf{C} \oplus \mathbf{B})^*$ (see Lemma 2.2.3 of Chapter 2), and $\bar D^2\Phi(t, \psi) \in (\mathbf{C} \oplus \mathbf{B})^\dagger$ is the extension of $D^2\Phi(t, \psi)$ from $\mathbf{C}^\dagger$ to $(\mathbf{C} \oplus \mathbf{B})^\dagger$ (see Lemma 2.2.4 of Chapter 2). Finally, the function $1_{\{0\}} : [-r, 0] \to \Re$ is defined by
\[
1_{\{0\}}(\theta) = \begin{cases} 0 & \text{for } \theta \in [-r, 0), \\ 1 & \text{for } \theta = 0. \end{cases}
\]
Without loss of generality, we can and will assume that for every $u \in U$, the domain of the generator $\mathcal{A}^u$ is large enough to contain $C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$.

From the DPP (Theorem 3.3.9), if we take a constant control $u(\cdot) \equiv u \in \mathcal{U}[t,T]$, then for all $\delta \ge 0$,
\[
V(t, \psi) \ge E\left[\int_t^{t+\delta} e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u), u)\,ds + e^{-\alpha\delta} V(t+\delta, x_{t+\delta}(t, \psi, u))\right].
\]
From this principle, we have

\begin{align*}
0 &\ge \lim_{\delta \downarrow 0} \frac{1}{\delta} E\left[\int_t^{t+\delta} e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u), u)\,ds + e^{-\alpha\delta} V(t+\delta, x_{t+\delta}(\cdot; t, \psi, u)) - V(t, \psi)\right] \\
&= \lim_{\delta \downarrow 0} \frac{1}{\delta} E\left[\int_t^{t+\delta} e^{-\alpha(s-t)} L(s, x_s(\cdot; t, \psi, u), u)\,ds\right] \\
&\quad + \lim_{\delta \downarrow 0} \frac{1}{\delta} E\big[e^{-\alpha\delta} V(t+\delta, x_{t+\delta}(\cdot; t, \psi, u)) - e^{-\alpha\delta} V(t, x_{t+\delta}(\cdot; t, \psi, u))\big] \\
&\quad + \lim_{\delta \downarrow 0} \frac{1}{\delta} E\big[e^{-\alpha\delta} V(t, x_{t+\delta}(t, \psi, u)) - e^{-\alpha\delta} V(t, \psi)\big] + \lim_{\delta \downarrow 0} \frac{1}{\delta} \big[(e^{-\alpha\delta} - 1) V(t, \psi)\big] \\
&= -\alpha V(t, \psi) + \partial_t V(t, \psi) + \mathcal{A}^u V(t, \psi) + L(t, \psi, u) \tag{3.61}
\end{align*}

for all $(t, \psi) \in [0,T] \times \mathbf{C}$, provided that $V \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$. Moreover, if $u^*(\cdot) \in \mathcal{U}[t,T]$ is the optimal control policy that satisfies $\lim_{s \downarrow t} u^*(s) = v^*$, we should have, for all $\delta \ge 0$, that
\[
V(t, \psi) = E\left[\int_t^{t+\delta} e^{-\alpha(s-t)} L(s, x^*_s(\cdot; t, \psi, u^*(\cdot)), u^*(s))\,ds + e^{-\alpha\delta} V(t+\delta, x^*_{t+\delta}(\cdot; t, \psi, u^*(\cdot)))\right], \tag{3.62}
\]
where $x^*_s(t, \psi, u^*(\cdot))$ is the $\mathbf{C}$-valued solution process corresponding to the initial datum $(t, \psi)$ and the optimal control $u^*(\cdot) \in \mathcal{U}[t,T]$. Similarly, under the strong assumption on $u^*(\cdot)$ (including right-continuity at the initial time $t$), we can get
\[
0 = -\alpha V(t, \psi) + \partial_t V(t, \psi) + \mathcal{A}^{v^*} V(t, \psi) + L(t, \psi, v^*). \tag{3.63}
\]
Together, (3.61) and (3.63) are equivalent to the HJBE
\[
0 = -\alpha V(t, \psi) + \partial_t V(t, \psi) + \max_{u \in U} \big[\mathcal{A}^u V(t, \psi) + L(t, \psi, u)\big].
\]
We therefore have the following result.

Theorem 3.4.2 Let $V : [0,T] \times \mathbf{C} \to \Re$ be the value function defined by (3.7). Suppose $V \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$. Then the value function $V$ satisfies the following HJBE:
\[
\alpha V(t, \psi) - \partial_t V(t, \psi) - \max_{u \in U} \big[\mathcal{A}^u V(t, \psi) + L(t, \psi, u)\big] = 0 \tag{3.64}
\]
on $[0,T] \times \mathbf{C}$, and $V(T, \psi) = \Psi(\psi)$, $\forall \psi \in \mathbf{C}$, where


\[
\mathcal{A}^u V(t, \psi) \equiv \mathcal{S}V(t, \psi) + \bar DV(t, \psi)(f(t, \psi, u)1_{\{0\}}) + \frac12 \sum_{j=1}^{m} \bar D^2 V(t, \psi)(g(t, \psi, u)e_j 1_{\{0\}}, g(t, \psi, u)e_j 1_{\{0\}}).
\]
Note that it is not known whether the value function $V$ satisfies the required smoothness condition $V \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$. In fact, the following simple example shows that the value function may fail to possess the smoothness needed to be a classical solution of the HJBE (3.64), even for a very simple deterministic control problem. The example is taken from Example 2.3 of [FS93, p. 57].

Example. Consider a one-dimensional deterministic control problem described by
\[
\dot x(s) = u(s), \quad s \in [0, 1),
\]
with the running cost function $L \equiv 0$, the terminal cost function $\Psi(x) = x$, $x \in \Re$, and a control set $U = [-a, a]$ for some constant $a > 0$. Since the boundary data are increasing and the running cost is zero, the optimal control is $u^*(s) \equiv -a$. Hence, the value function is
\[
V(t, x) = \begin{cases} -1 & \text{if } x + at \le a - 1, \\ x + at - a & \text{if } x + at \ge a - 1, \end{cases} \tag{3.65}
\]
for $(t, x) \in [0,1] \times \Re$. Note that the value function is differentiable except on the set $\{(t, x) \mid x + at = a - 1\}$, and it is a generalized solution of
\[
-\partial_t V(t, x) + a\,|\partial_x V(t, x)| = 0, \tag{3.66}
\]
with the corresponding terminal boundary condition
\[
V(1, x) = \Psi(x) = x, \quad x \in [-1, 1]. \tag{3.67}
\]
The above example shows that in general we need to seek a weaker notion of solution for the HJBE (3.64), such as a viscosity solution, instead of a solution in the classical sense. In fact, it will be shown that the value function is the unique viscosity solution of the HJBE (3.64). These results are given in Sections 3.5 and 3.6.
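The closed form (3.65) can be checked by discretizing the dynamic programming recursion directly; a minimal backward-induction sketch (grid sizes are arbitrary; we take $a = 1$, so (3.65) reads $V(t,x) = \max(-1,\ x + at - a)$):

```python
import numpy as np

a, T, dt = 1.0, 1.0, 0.01
xs = np.round(np.arange(-2.0, 2.0 + 1e-9, a * dt), 10)   # grid aligned with steps a*dt

V = xs.copy()                      # terminal cost Psi(x) = x
V[xs <= -1.0] = -1.0               # absorbed at the boundary -1 with cost -1
for _ in range(int(T / dt)):
    # one backward DP step over the bang-bang controls u = -a and u = +a
    down = np.interp(xs - a * dt, xs, V)
    up = np.interp(xs + a * dt, xs, V)
    V = np.minimum(down, up)
    V[xs <= -1.0] = -1.0           # re-impose absorption

V_exact = np.maximum(-1.0, xs - a)     # (3.65) evaluated at t = 0
err = np.max(np.abs(V - V_exact))
print(err)    # tiny: backward induction reproduces (3.65)
```

Because the optimal control is constant ($u^* \equiv -a$) and the grid is aligned with the step $a\,dt$, the discrete recursion reproduces the piecewise-linear value function with no discretization error beyond floating point; the kink at $x + at = a - 1$ survives every backward step, which is exactly why $V$ cannot be a classical solution.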

3.5 Viscosity Solution

In this section, we shall show that the value function $V : [0,T] \times \mathbf{C} \to \Re$ defined by Equation (3.7) is actually a viscosity solution of the HJBE (3.64).

Definitions of Viscosity Solution

First, let us define the viscosity solution of (3.64) as follows.


Definition 3.5.1 An upper semicontinuous (respectively, lower semicontinuous) function $w : (0,T] \times \mathbf{C} \to \Re$ is said to be a viscosity subsolution (respectively, supersolution) of the HJBE (3.64) if
\[
w(T, \phi) \le (\ge)\ \Psi(\phi), \quad \forall \phi \in \mathbf{C},
\]
and if, for every $\Gamma \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$ and every $(t, \psi) \in [0,T] \times \mathbf{C}$ satisfying $\Gamma \ge (\le)\ w$ on $[0,T] \times \mathbf{C}$ and $\Gamma(t, \psi) = w(t, \psi)$, we have
\[
\alpha \Gamma(t, \psi) - \partial_t \Gamma(t, \psi) - \max_{v \in U} \big[\mathcal{A}^v \Gamma(t, \psi) + L(t, \psi, v)\big] \le (\ge)\ 0. \tag{3.68}
\]
We say that $w$ is a viscosity solution of the HJBE (3.64) if it is both a viscosity supersolution and a viscosity subsolution of the HJBE (3.64).

Definition 3.5.2 Let a function $\Phi : [0,T] \times \mathbf{C} \to \Re$ be given. We say that $(p, q, Q) \in \Re \times \mathbf{C}^* \times \mathbf{C}^\dagger$ belongs to $D^{1,2,+}_{t+,\phi} \Phi(t, \phi)$, the second-order one-sided parabolic superdifferential of $\Phi$ at $(t, \phi) \in (0,T) \times \mathbf{C}$, if, for all $\psi \in \mathbf{C}$ and $s \ge t$,
\[
\Phi(s, \psi) \le \Phi(t, \phi) + p(s - t) + q(\psi - \phi) + \frac12 Q(\psi - \phi, \psi - \phi) + o(s - t + \|\psi - \phi\|^2). \tag{3.69}
\]
The second-order one-sided parabolic subdifferential of $\Phi$ at $(t, \phi) \in (0,T) \times \mathbf{C}$, $D^{1,2,-}_{t+,\phi} \Phi(t, \phi)$, is defined as the set of those $(p, q, Q) \in \Re \times \mathbf{C}^* \times \mathbf{C}^\dagger$ for which the above inequality is reversed; that is,
\[
\Phi(s, \psi) \ge \Phi(t, \phi) + p(s - t) + q(\psi - \phi) + \frac12 Q(\psi - \phi, \psi - \phi) + o(s - t + \|\psi - \phi\|^2). \tag{3.70}
\]

Remark 3.5.3 The second-order one-sided parabolic subdifferential and superdifferential are related by
\[
D^{1,2,-}_{t+,\phi} \Phi(t, \phi) = -D^{1,2,+}_{t+,\phi} (-\Phi)(t, \phi).
\]

Lemma 3.5.4 Let $w : (0,T] \times \mathbf{C} \to \Re$ and let $(t, \psi) \in (0,T) \times \mathbf{C}$. Then $(p, q, Q) \in D^{1,2,+}_{t+,\psi} w(t, \psi)$ if and only if there exists a function $\Phi \in C^{1,2}((0,T) \times \mathbf{C})$ such that $w - \Phi$ attains a strict global maximum at $(t, \psi)$ relative to the set of $(s, \phi)$ with $s \ge t$, and
\[
(\Phi(t, \psi), \partial_t \Phi(t, \psi), D\Phi(t, \psi), D^2\Phi(t, \psi)) = (w(t, \psi), p, q, Q). \tag{3.72}
\]
Moreover, if $w$ has polynomial growth (i.e., there exist positive constants $K_p$ and $k \ge 1$ such that
\[
|w(s, \phi)| \le K_p (1 + \|\phi\|^2)^k, \quad \forall (s, \phi) \in (0,T) \times \mathbf{C}), \tag{3.73}
\]
then $\Phi$ can be chosen so that $\Phi$, $\partial_t \Phi$, $D\Phi$, and $D^2\Phi$ satisfy (3.73) (with possibly different constants $K_p$).
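For the deterministic example of Section 3.4, one can verify numerically that the closed-form value function satisfies the HJB equation (3.66) classically away from the kink $\{x + at = a - 1\}$ (at the kink, only the one-sided viscosity inequalities hold). A small sketch with $a = 1$ (so the kink sits on $x + t = 0$) and central finite differences:

```python
import numpy as np

a = 1.0
V = lambda t, x: np.maximum(-1.0, x + a * t - a)   # the value function (3.65)

def hjb_residual(t, x, h=1e-5):
    """Residual of -dV/dt + a*|dV/dx| = 0 via central differences."""
    vt = (V(t + h, x) - V(t - h, x)) / (2 * h)
    vx = (V(t, x + h) - V(t, x - h)) / (2 * h)
    return -vt + a * abs(vx)

# points strictly inside each smooth region (the kink is at x + t = 0 here)
flat = [(0.2, -1.0), (0.5, -1.5)]       # region x + at < a - 1, where V = -1
slope = [(0.2, 0.5), (0.5, 0.2)]        # region x + at > a - 1, where V = x + at - a
print([hjb_residual(t, x) for (t, x) in flat + slope])   # all ~0
```

In the flat region both derivatives vanish; in the sloped region $\partial_t V = a$ and $\partial_x V = 1$, so $-a + a \cdot 1 = 0$. Only across the kink do the classical derivatives fail, which is exactly the situation Definitions 3.5.1-3.5.2 are built to handle.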


Lemma 3.5.5 Let $v$ be an upper semicontinuous function on $(0,T) \times \mathbf{C}$ and let $(\bar t, \bar\psi) \in (0,T) \times \mathbf{C}$. Then $(p, q, Q) \in \bar D^{1,2,+}_{t+,\psi} v(\bar t, \bar\psi)$ if and only if there exists a function $\Phi \in C^{1,2}((0,T) \times \mathbf{C})$ such that $v - \Phi \in C((0,T) \times \mathbf{C})$ attains a strict global maximum at $(\bar t, \bar\psi)$ relative to the set of $(t, \psi) \in [\bar t, T) \times \mathbf{C}$, and
\[
(\Phi(\bar t, \bar\psi), \partial_t \Phi(\bar t, \bar\psi), D\Phi(\bar t, \bar\psi), D^2\Phi(\bar t, \bar\psi)) = (v(\bar t, \bar\psi), p, q, Q). \tag{3.74}
\]
Moreover, if $v$ has polynomial growth (i.e., if there exists a constant $k \ge 1$ such that
\[
|v(t, \psi)| \le C(1 + \|\psi\|^k), \quad \forall (t, \psi) \in [0,T] \times \mathbf{C}), \tag{3.75}
\]
then $\Phi$ can be chosen so that $\Phi$, $\partial_t \Phi$, $D\Phi$, and $D^2\Phi$ satisfy (3.75) under the appropriate norms and with possibly different constants $C$.

Proof. The proof of this lemma is an extension of Lemma 5.4 in Yong and Zhou [YZ99, Chap. 4] from a Euclidean space to the infinite-dimensional space $\mathbf{C}$. Suppose $(p, q, Q) \in \bar D^{1,2,+}_{t+,\psi} v(\bar t, \bar\psi)$. Define the function $\gamma : (0,T) \times \mathbf{C} \to \Re$ by
\[
\gamma(t, \psi) = \frac{\big[v(t, \psi) - v(\bar t, \bar\psi) - p(t - \bar t) - q(\psi - \bar\psi) - \frac12 Q(\psi - \bar\psi, \psi - \bar\psi)\big] \vee 0}{t - \bar t + \|\psi - \bar\psi\|^2} \quad \text{for } (t, \psi) \ne (\bar t, \bar\psi),
\]
and $\gamma(t, \psi) = 0$ otherwise. We also define the function $\varepsilon : \Re \to \Re$ by
\[
\varepsilon(r) = \sup\{\gamma(t, \psi) \mid (t, \psi) \in (\bar t, T] \times \mathbf{C},\ t - \bar t + \|\psi - \bar\psi\|^2 \le r\} \quad \text{if } r > 0,
\]
and $\varepsilon(r) = 0$ if $r \le 0$. Then it follows from the definition of $\bar D^{1,2,+}_{t+,\psi} v(\bar t, \bar\psi)$ that
\[
v(t, \psi) - \big[v(\bar t, \bar\psi) + p(t - \bar t) + q(\psi - \bar\psi) + \tfrac12 Q(\psi - \bar\psi, \psi - \bar\psi)\big]
\le \varepsilon\big(t - \bar t + \|\psi - \bar\psi\|^2\big)\,\big(t - \bar t + \|\psi - \bar\psi\|^2\big), \quad \forall (t, \psi) \in [\bar t, T] \times \mathbf{C}.
\]
Define the function $\alpha : \Re_+ \to \Re$ by
\[
\alpha(\rho) = \frac{2}{\rho} \int_0^{2\rho} \int_0^r \varepsilon(\theta)\,d\theta\,dr, \quad \rho > 0. \tag{3.76}
\]
Then it is easy to see that its first-order derivative is
\[
\dot\alpha(\rho) = -\frac{2}{\rho^2} \int_0^{2\rho} \int_0^r \varepsilon(\theta)\,d\theta\,dr + \frac{4}{\rho} \int_0^{2\rho} \varepsilon(\theta)\,d\theta
\]
and its second-order derivative is
\[
\ddot\alpha(\rho) = \frac{4}{\rho^3} \int_0^{2\rho} \int_0^r \varepsilon(\theta)\,d\theta\,dr - \frac{8}{\rho^2} \int_0^{2\rho} \varepsilon(\theta)\,d\theta + \frac{8}{\rho}\,\varepsilon(2\rho).
\]
Consequently,
\[
|\alpha(\rho)| \le 4\rho\,\varepsilon(2\rho), \quad |\dot\alpha(\rho)| \le 12\,\varepsilon(2\rho), \quad |\ddot\alpha(\rho)| \le \frac{32\,\varepsilon(2\rho)}{\rho}.
\]
Now, we define the function $\beta : [0,T] \times \mathbf{C} \to \Re$ by
\[
\beta(t, \psi) = \begin{cases} \alpha(\rho(t, \psi)) + \rho^2(t, \psi) & \text{if } (t, \psi) \ne (\bar t, \bar\psi), \\ 0 & \text{otherwise}, \end{cases}
\]
where $\rho(t, \psi) = t - \bar t + \|\psi - \bar\psi\|^2$. Finally, we define the function $\Phi : [\bar t, T] \times \mathbf{C} \to \Re$ by
\[
\Phi(t, \psi) = v(\bar t, \bar\psi) + p(t - \bar t) + q(\psi - \bar\psi) + \frac12 Q(\psi - \bar\psi, \psi - \bar\psi) + \beta(t, \psi), \quad \forall (t, \psi) \in [\bar t, T] \times \mathbf{C}. \tag{3.77}
\]
We claim that $\Phi \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C})$ and that it satisfies the following three conditions:
(i) $v(\bar t, \bar\psi) = \Phi(\bar t, \bar\psi)$;
(ii) $v(t, \psi) < \Phi(t, \psi)$ for all $(t, \psi) \ne (\bar t, \bar\psi)$;
(iii) $(\Phi(\bar t, \bar\psi), \partial_t \Phi(\bar t, \bar\psi), D\Phi(\bar t, \bar\psi), D^2\Phi(\bar t, \bar\psi)) = (v(\bar t, \bar\psi), p, q, Q)$.
Note that (i) is trivial by the definition of $\Phi$. The proofs of (ii) and (iii) are very similar to those of Lemma 2.7 and Lemma 2.8 of Yong and Zhou [YZ99] and are omitted here. $\Box$

Proposition 3.5.6 A function $w \in C([0,T] \times \mathbf{C})$ is a viscosity solution of the HJBE (3.64) if
\[
-p - \sup_{u \in U} G(t, \phi, u, q, Q) \le 0, \quad \forall (p, q, Q) \in D^{1,2,+}_{t+,\phi} w(t, \phi),\ \forall (t, \phi) \in [0,T) \times \mathbf{C},
\]
\[
-p - \sup_{u \in U} G(t, \phi, u, q, Q) \ge 0, \quad \forall (p, q, Q) \in D^{1,2,-}_{t+,\phi} w(t, \phi),\ \forall (t, \phi) \in [0,T) \times \mathbf{C},
\]
and $w(T, \phi) = \Psi(\phi)$, $\forall \phi \in \mathbf{C}$, where the function $G$ is defined as
\[
G(t, \phi, u, q, Q) = \mathcal{S}(\Phi)(t, \phi) + q(f(t, \phi, u)1_{\{0\}}) + \frac12 \sum_{j=1}^{m} Q(g(t, \phi, u)(e_j)1_{\{0\}}, g(t, \phi, u)(e_j)1_{\{0\}}). \tag{3.78}
\]

Proof. The proposition follows immediately from Lemma 3.5.4 and Lemma 3.5.5. $\Box$

For our value function $V$ defined by (3.7), we now show that it has the following property.


Lemma 3.5.7 Let $V : [0,T] \times \mathbf{C} \to \Re$ be the value function defined in (3.7). Then there exist a constant $K > 0$ and a positive integer $k$ such that for every $(t, \psi) \in [0,T] \times \mathbf{C}$,
\[
|V(t, \psi)| \le K(1 + \|\psi\|^2)^k. \tag{3.79}
\]
Proof. It is clear that $V$ has at most polynomial growth, since $L$ and $\Psi$ have at most polynomial growth. This proves the lemma. $\Box$

Theorem 3.5.8 The value function $V : [0,T] \times \mathbf{C} \to \Re$ defined in (3.7) is a viscosity solution of the HJBE
\[
\alpha V(t, \psi) - \partial_t V(t, \psi) - \max_{v \in U} \big[\mathcal{A}^v V(t, \psi) + L(t, \psi, v)\big] = 0 \tag{3.80}
\]
on $[0,T] \times \mathbf{C}$, with $V(T, \psi) = \Psi(\psi)$, $\forall \psi \in \mathbf{C}$, where
\[
\mathcal{A}^v V(t, \psi) = \mathcal{S}V(t, \psi) + \bar DV(t, \psi)(f(t, \psi, v)1_{\{0\}}) + \frac12 \sum_{j=1}^{m} \bar D^2 V(t, \psi)(g(t, \psi, v)(e_j)1_{\{0\}}, g(t, \psi, v)(e_j)1_{\{0\}}), \tag{3.81}
\]
and $e_j$, $j = 1, 2, \ldots, m$, is the $j$th unit vector of the standard basis in $\Re^m$.

Proof. Let $\Gamma \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$. For $(t, \psi) \in [0,T] \times \mathbf{C}$ such that $\Gamma \le V$ on $[0,T] \times \mathbf{C}$ and $\Gamma(t, \psi) = V(t, \psi)$, we want to prove the viscosity supersolution inequality, that is,
\[
\alpha \Gamma(t, \psi) - \partial_t \Gamma(t, \psi) - \max_{v \in U} \big[\mathcal{A}^v \Gamma(t, \psi) + L(t, \psi, v)\big] \ge 0. \tag{3.82}
\]

Let $u(\cdot) \in \mathcal{U}[t,T]$. Since $\Gamma \in C^{1,2}_{\mathrm{lip}}([0,T] \times \mathbf{C}) \cap \mathcal{D}(\mathcal{S})$, by virtue of Theorem 3.4.1 we have, for $t \le s \le T$,
\[
E\big[e^{-\alpha(s-t)} \Gamma(s, x_s(\cdot; t, \psi, u(\cdot)))\big] - \Gamma(t, \psi)
= E\left[\int_t^s e^{-\alpha(\xi-t)} \Big(\partial_\xi \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) + \mathcal{A}^{u(\xi)} \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) - \alpha \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot)))\Big)\,d\xi\right]. \tag{3.83}
\]
On the other hand, for any $s \in [t,T]$, the DPP (Theorem 3.3.9) gives
\[
V(t, \psi) = \sup_{u(\cdot) \in \mathcal{U}[t,T]} E\left[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\,d\xi + e^{-\alpha(s-t)} V(s, x_s(\cdot; t, \psi, u(\cdot)))\right]. \tag{3.84}
\]
Therefore, we have

\[
V(t, \psi) \ge E\left[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\,d\xi\right] + E\big[e^{-\alpha(s-t)} V(s, x_s(\cdot; t, \psi, u(\cdot)))\big]. \tag{3.85}
\]
By virtue of (3.83) and using $\Gamma \le V$ and $\Gamma(t, \psi) = V(t, \psi)$, we can get
\begin{align*}
0 &\ge E\left[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\,d\xi\right] + E\big[e^{-\alpha(s-t)} V(s, x_s(\cdot; t, \psi, u(\cdot)))\big] - V(t, \psi) \\
&\ge E\left[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\,d\xi\right] + E\big[e^{-\alpha(s-t)} \Gamma(s, x_s(\cdot; t, \psi, u(\cdot)))\big] - \Gamma(t, \psi) \\
&= E\left[\int_t^s e^{-\alpha(\xi-t)} \Big(-\alpha \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) + \partial_\xi \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) + \mathcal{A}^{u(\xi)} \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) + L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\Big)\,d\xi\right].
\end{align*}
Dividing both sides by $(s - t)$, we have
\[
0 \le E\left[\frac{1}{s-t} \int_t^s e^{-\alpha(\xi-t)} \Big(\alpha \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) - \partial_\xi \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) - \mathcal{A}^{u(\xi)} \Gamma(\xi, x_\xi(\cdot; t, \psi, u(\cdot))) - L(\xi, x_\xi(\cdot; t, \psi, u(\cdot)), u(\xi))\Big)\,d\xi\right]. \tag{3.86}
\]
Now, letting $s \downarrow t$ in (3.86) with $\lim_{s \downarrow t} u(s) = v$, we obtain
\[
\alpha \Gamma(t, \psi) - \partial_t \Gamma(t, \psi) - \big[\mathcal{A}^v \Gamma(t, \psi) + L(t, \psi, v)\big] \ge 0. \tag{3.87}
\]

Since $v \in U$ is arbitrary, this proves that $V$ is a viscosity supersolution.

Next, we want to prove that $V$ is a viscosity subsolution. Let $\Gamma \in C^{1,2}_{\mathrm{lip}}([0,T] \times C) \cap \mathcal{D}(\mathcal{S})$. For $(t,\psi) \in [0,T] \times C$ satisfying $\Gamma \ge V$ on $[0,T] \times C$ and $\Gamma(t,\psi) = V(t,\psi)$, we want to prove that
\[
\alpha\Gamma(t,\psi) - \partial_t\Gamma(t,\psi) - \max_{v \in U}\big[\mathcal{A}^v\Gamma(t,\psi) + L(t,\psi,v)\big] \le 0. \tag{3.88}
\]
We assume the contrary and derive a contradiction. Suppose that there exist $(t,\psi) \in [0,T] \times C$, $\Gamma \in C^{1,2}_{\mathrm{lip}}([0,T] \times C) \cap \mathcal{D}(\mathcal{S})$ with $\Gamma \ge V$ on $[0,T] \times C$ and $\Gamma(t,\psi) = V(t,\psi)$, and $\delta > 0$ such that for every control $u(\cdot) \in U[t,T]$ with $\lim_{s\downarrow t}u(s) = v$,
\[
\alpha\Gamma(\tau,\phi) - \partial_t\Gamma(\tau,\phi) - \mathcal{A}^v\Gamma(\tau,\phi) - L(\tau,\phi,v) \ge \delta \tag{3.89}
\]
for all $(\tau,\phi) \in N(t,\psi)$, where $N(t,\psi)$ is a neighborhood of $(t,\psi)$. Let $u(\cdot) \in U[t,T]$ with $\lim_{s\downarrow t}u(s) = v$, and let $t_1$ be such that for $t \le s \le t_1$ the solution satisfies $x_s(\cdot;t,\psi,u(\cdot)) \in N(t,\psi)$. Therefore, for any $s \in [t,t_1]$, we have almost surely
\[
\alpha\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) - \partial_t\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) - \mathcal{A}^v\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) - L(s, x_s(\cdot;t,\psi,u(\cdot)), u(s)) \ge \delta. \tag{3.90}
\]
On the other hand, since $\Gamma \ge V$, using the definitions of $J$ and $V$, we get
\[
\begin{aligned}
J(t,\psi;u(\cdot)) &\le E\left[\int_t^{t_1} e^{-\alpha(s-t)} L(s, x_s, u(s))\, ds + e^{-\alpha(t_1-t)}V(t_1, x_{t_1})\right] \\
&\le E\left[\int_t^{t_1} e^{-\alpha(s-t)} L(s, x_s, u(s))\, ds + e^{-\alpha(t_1-t)}\Gamma(t_1, x_{t_1}(\cdot;t,\psi,u(\cdot)))\right].
\end{aligned}
\]
Using (3.90), we have
\[
\begin{aligned}
J(t,\psi;u(\cdot)) \le E\bigg[\int_t^{t_1} e^{-\alpha(s-t)}\Big(-\delta &+ \alpha\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) - \partial_t\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) \\
&- \mathcal{A}^{u(s)}\Gamma(s, x_s(\cdot;t,\psi,u(\cdot)))\Big)\, ds + e^{-\alpha(t_1-t)}\Gamma(t_1, x_{t_1}(\cdot;t,\psi,u(\cdot)))\bigg]. \tag{3.91}
\end{aligned}
\]

In addition, similar to (3.83), we have
\[
E\Big[e^{-\alpha(t_1-t)}\Gamma(t_1, x_{t_1}(\cdot;t,\psi,u(\cdot))) - \Gamma(t,\psi)\Big]
= E\left[\int_t^{t_1} e^{-\alpha(s-t)}\Big(\partial_s\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) + \mathcal{A}^{u(s)}\Gamma(s, x_s(\cdot;t,\psi,u(\cdot))) - \alpha\Gamma(s, x_s(\cdot;t,\psi,u(\cdot)))\Big)\, ds\right]. \tag{3.92}
\]
Therefore,
\[
J(t,\psi;u(\cdot)) \le -\int_t^{t_1} e^{-\alpha(s-t)}\delta\, ds + \Gamma(t,\psi) = -\frac{\delta}{\alpha}\big(1 - e^{-\alpha(t_1-t)}\big) + V(t,\psi).
\]
Taking the supremum over all admissible controls $u(\cdot) \in U[t,T]$, we have
\[
V(t,\psi) \le -\frac{\delta}{\alpha}\big(1 - e^{-\alpha(t_1-t)}\big) + V(t,\psi).
\]
This contradicts the fact that $\delta > 0$. Therefore, $V$ is a viscosity subsolution. This completes the proof of the theorem. $\Box$


3.6 Uniqueness

In this section, we show that the value function $V : [0,T] \times C \to \Re$ of Problem (OCCP) is the unique viscosity solution of the HJBE (3.64). We first need the following comparison principle.

Theorem 3.6.1 (Comparison Principle) Assume that $V_1(t,\psi)$ and $V_2(t,\psi)$ are both continuous in $(t,\psi) \in [0,T] \times C$ and are, respectively, a viscosity subsolution and supersolution of (3.64) with at most polynomial growth (Lemma 3.5.7); in other words, there exist a real number $\Lambda > 0$ and a positive integer $k \ge 1$ such that
\[
|V_i(t,\psi)| \le \Lambda(1 + \|\psi\|_2)^k, \quad \forall (t,\psi) \in [0,T] \times C,\ i = 1,2.
\]
Then
\[
V_1(t,\psi) \le V_2(t,\psi), \quad \forall (t,\psi) \in [0,T] \times C. \tag{3.93}
\]

Before we proceed to the proof of Theorem 3.6.1, we will use the following result, proven in Ekeland and Lebourg [EL76] and, in a more general form, in Stegall [Ste78] and Bourgain [Bou79]. The reader is also referred to Crandall et al. [CIL92] and Lions [Lio82, Lio89] for an application of this result in a setting similar to, but significantly different in details from, the one presented below. A similar proof of uniqueness of the viscosity solution is given in Chang et al. [CPP07a].

Lemma 3.6.2 Let $\Phi$ be a bounded and upper-semicontinuous real-valued function on a closed ball $B$ of a Banach space $\Xi$ that has the Radon-Nikodym property. Then for any $\epsilon > 0$, there exists an element $u^* \in \Xi^*$ with norm at most $\epsilon$, where $\Xi^*$ is the topological dual of $\Xi$, such that $\Phi + u^*$ attains its maximum on $B$.

Note that every separable Hilbert space $(\Xi, \|\cdot\|_\Xi)$ satisfies the Radon-Nikodym property (see, e.g., [EL76]). In order to apply Lemma 3.6.2, we therefore restrict ourselves to a subspace $\Xi$ of the product space $C \times C$ (with $C = C([-r,0];\Re^n)$) that is a separable Hilbert space and dense in $C \times C$. A good candidate is the product space $W^{1,2}((-r,0);\Re^n) \times W^{1,2}((-r,0);\Re^n)$, where $W^{1,2}((-r,0);\Re^n)$ is the Sobolev space
\[
W^{1,2}((-r,0);\Re^n) = \{\phi \in C \mid \phi \text{ is absolutely continuous on } (-r,0) \text{ and } \|\phi\|_{1,2} < \infty\},
\]
where $\|\phi\|_{1,2}^2 \equiv \|\phi\|_2^2 + \|\dot{\phi}\|_2^2$ and $\dot{\phi}$ is the derivative of $\phi$ in the distributional sense. Note that the sup-norm $\|\cdot\|$ is weaker than the Hilbertian norm $\|\cdot\|_{1,2}$ on this space; that is, there exists a constant $K > 0$ such that
\[
\|\phi\| \le K\|\phi\|_{1,2}, \quad \forall \phi \in W^{1,2}((-r,0);\Re^n).
\]


From the Sobolev embedding theorems (see, e.g., Adams [Ada75]), it is known that $W^{1,2}((-r,0);\Re^n) \subset C$ and that $W^{1,2}((-r,0);\Re^n)$ is dense in $C$. For more on Sobolev spaces and related results, one may refer to [Ada75].

Before we proceed to the proof of the comparison principle, let us first establish some results that will be needed. Let $V_1$ and $V_2$ be, respectively, a viscosity subsolution and supersolution of (3.64). For any $0 < \delta, \gamma < 1$ and for all $\psi, \phi \in C$ and $t,s \in [0,T]$, define
\[
\Theta_{\delta\gamma}(t,s,\psi,\phi) \equiv \frac{1}{\delta}\Big[\|\psi-\phi\|_2^2 + \|\psi^0-\phi^0\|_2^2 + |t-s|^2\Big] + \gamma\Big[\exp(1+\|\psi\|_2^2+\|\psi^0\|_2^2) + \exp(1+\|\phi\|_2^2+\|\phi^0\|_2^2)\Big] \tag{3.94}
\]
and
\[
\Phi_{\delta\gamma}(t,s,\psi,\phi) \equiv V_1(t,\psi) - V_2(s,\phi) - \Theta_{\delta\gamma}(t,s,\psi,\phi), \tag{3.95}
\]
where $\psi^0, \phi^0 \in C$ are given by $\psi^0(\theta) = \frac{\theta}{-r}\psi(-r-\theta)$ and $\phi^0(\theta) = \frac{\theta}{-r}\phi(-r-\theta)$ for $\theta \in [-r,0]$.

Moreover, using the polynomial growth condition for $V_1$ and $V_2$, we have
\[
\lim_{\|\psi\|_2, \|\phi\|_2 \to \infty} \Phi_{\delta\gamma}(t,s,\psi,\phi) = -\infty. \tag{3.96}
\]
The function $\Phi_{\delta\gamma}$ is real-valued, bounded above, and continuous on $[0,T] \times [0,T] \times W^{1,2}((-r,0);\Re^n) \times W^{1,2}((-r,0);\Re^n)$ (since convergence in the Hilbertian norm $\|\cdot\|_{1,2}$ implies convergence in the sup-norm $\|\cdot\|$). Therefore, by Lemma 3.6.2 (which is applicable by virtue of (3.96)), for any $0 < \epsilon < 1$ there exists a continuous linear functional $T_\epsilon$ in the topological dual of $W^{1,2}((-r,0);\Re^n) \times W^{1,2}((-r,0);\Re^n)$, with norm at most $\epsilon$, such that $\Phi_{\delta\gamma} + T_\epsilon$ attains its maximum on $[0,T] \times [0,T] \times W^{1,2}((-r,0);\Re^n) \times W^{1,2}((-r,0);\Re^n)$.

Let $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ denote this maximum point. Without loss of generality, we assume that for any given $\delta$, $\gamma$, and $\epsilon$, there exists a constant $M_{\delta\gamma\epsilon}$ such that the maximum value of $\Phi_{\delta\gamma} + T_\epsilon + M_{\delta\gamma\epsilon}$ is zero; in other words,
\[
\Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + M_{\delta\gamma\epsilon} = 0. \tag{3.97}
\]

We have the following lemmas.

Lemma 3.6.3 $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ is the global maximum of $\Phi_{\delta\gamma} + T_\epsilon$ in $[0,T] \times [0,T] \times C \times C$.

Proof. Let $(t,s,\psi,\phi) \in [0,T] \times [0,T] \times C \times C$. By the density of $W^{1,2}((-r,0);\Re^n)$ in $C$, we can find a sequence $(t_k, s_k, \psi_k, \phi_k)$ in $[0,T] \times [0,T] \times W^{1,2}((-r,0);\Re^n) \times W^{1,2}((-r,0);\Re^n)$ such that
\[
(t_k, s_k, \psi_k, \phi_k) \to (t,s,\psi,\phi) \quad \text{as } k \to \infty.
\]
We know that
\[
\Phi_{\delta\gamma}(t_k, s_k, \psi_k, \phi_k) + T_\epsilon(\psi_k, \phi_k) \le \Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}).
\]
Taking the limit as $k \to \infty$ in the last inequality, we obtain
\[
\Phi_{\delta\gamma}(t,s,\psi,\phi) + T_\epsilon(\psi,\phi) \le \Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}).
\]
This shows that $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ is the global maximum over $[0,T] \times [0,T] \times C \times C$. $\Box$

Lemma 3.6.4 For each fixed $\gamma > 0$, there exists a constant $\Lambda_\gamma > 0$ such that
\[
\|\psi_{\delta\gamma\epsilon}\|_2 + \|\psi^0_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2 + \|\phi^0_{\delta\gamma\epsilon}\|_2 \le \Lambda_\gamma \tag{3.98}
\]
and
\[
\lim_{\epsilon\downarrow 0,\, \delta\downarrow 0}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] = 0. \tag{3.99}
\]

Proof. Since $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ is the global maximum of $\Phi_{\delta\gamma} + T_\epsilon$, we have
\[
\Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) + \Phi_{\delta\gamma}(s_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\phi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \le 2\Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + 2T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}).
\]
This implies that
\[
\begin{aligned}
&V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - 2\gamma\exp(1+\|\psi_{\delta\gamma\epsilon}\|_2^2+\|\psi^0_{\delta\gamma\epsilon}\|_2^2) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) \\
&\qquad + V_1(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - 2\gamma\exp(1+\|\phi_{\delta\gamma\epsilon}\|_2^2+\|\phi^0_{\delta\gamma\epsilon}\|_2^2) + T_\epsilon(\phi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \\
&\le 2V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - 2V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \frac{2}{\delta}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] \\
&\qquad - 2\gamma\Big[\exp(1+\|\psi_{\delta\gamma\epsilon}\|_2^2+\|\psi^0_{\delta\gamma\epsilon}\|_2^2) + \exp(1+\|\phi_{\delta\gamma\epsilon}\|_2^2+\|\phi^0_{\delta\gamma\epsilon}\|_2^2)\Big] + 2T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}).
\end{aligned} \tag{3.100}
\]
From the above inequality it follows that
\[
\frac{2}{\delta}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] \le \big[V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_1(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big] + \big[V_2(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big] + 2T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \big[T_\epsilon(\psi_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) + T_\epsilon(\phi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big]. \tag{3.101}
\]
From the polynomial growth condition on $V_1$ and $V_2$, and the fact that the norm of $T_\epsilon$ is $\epsilon \in (0,1)$, we can find a constant $\Lambda > 0$ and a positive integer $k \ge 1$ such that
\[
\frac{2}{\delta}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] \le \Lambda(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k. \tag{3.102}
\]
So
\[
\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2 \le \delta\Lambda(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k. \tag{3.103}
\]
On the other hand, because $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ is the global maximum of $\Phi_{\delta\gamma} + T_\epsilon$, we obtain
\[
\Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, 0, 0) + T_\epsilon(0,0) \le \Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}). \tag{3.104}
\]
In addition, by the definition of $\Phi_{\delta\gamma}$ and the polynomial growth of $V_1$ and $V_2$, we can find $\Lambda > 0$ and a positive integer $k \ge 1$ such that
\[
|\Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, 0, 0) + T_\epsilon(0,0)| \le \Lambda(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k
\]
and
\[
V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \le \Lambda(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k.
\]
Therefore, by virtue of (3.104) and the definition of $\Phi_{\delta\gamma}$, we have
\[
\begin{aligned}
\gamma\Big[\exp(1+\|\psi_{\delta\gamma\epsilon}\|_2^2+\|\psi^0_{\delta\gamma\epsilon}\|_2^2) + \exp(1+\|\phi_{\delta\gamma\epsilon}\|_2^2+\|\phi^0_{\delta\gamma\epsilon}\|_2^2)\Big]
&\le V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \\
&\quad - \frac{1}{\delta}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] \\
&\quad - \Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, 0, 0) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - T_\epsilon(0,0) \\
&\le 3\Lambda(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k.
\end{aligned}
\]
It follows that
\[
\frac{\gamma\big[\exp(1+\|\psi_{\delta\gamma\epsilon}\|_2^2+\|\psi^0_{\delta\gamma\epsilon}\|_2^2) + \exp(1+\|\phi_{\delta\gamma\epsilon}\|_2^2+\|\phi^0_{\delta\gamma\epsilon}\|_2^2)\big]}{(1 + \|\psi_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2)^k} \le 3\Lambda. \tag{3.105}
\]
Consequently, since the exponential dominates the polynomial, there exists $\Lambda_\gamma < \infty$ such that
\[
\|\psi_{\delta\gamma\epsilon}\|_2 + \|\psi^0_{\delta\gamma\epsilon}\|_2 + \|\phi_{\delta\gamma\epsilon}\|_2 + \|\phi^0_{\delta\gamma\epsilon}\|_2 \le \Lambda_\gamma. \tag{3.106}
\]
To obtain (3.99), we let $\delta \downarrow 0$ in (3.103) and use the above bound. $\Box$

Now, let us introduce the functional $F : C \to \Re$ defined by
\[
F(\psi) \equiv \|\psi\|_2^2 \tag{3.107}
\]

and the linear map $H : C \to C$ defined by
\[
H(\psi)(\theta) \equiv \frac{\theta}{-r}\psi(-r-\theta) = \psi^0(\theta), \quad \theta \in [-r,0]. \tag{3.108}
\]
Note that $H(\psi)(0) = \psi^0(0) = 0$ and $H(\psi)(-r) = \psi^0(-r) = \psi(0)$. It is not hard to show that the map $F$ is Fréchet differentiable with derivative
\[
DF(\psi)h = 2\int_{-r}^0 \psi(\theta)\cdot h(\theta)\, d\theta \equiv 2\langle\psi, h\rangle_2,
\]
where $\langle\cdot,\cdot\rangle_2$ and $\|\cdot\|_2$ are the inner product and the Hilbertian norm of the Hilbert space $L^2([-r,0];\Re^n)$. This follows from the fact that
\[
\|\psi+h\|_2^2 - \|\psi\|_2^2 = 2\langle\psi,h\rangle_2 + \|h\|_2^2,
\]
and we can always find a constant $\Lambda > 0$ such that
\[
\frac{\big|\,\|\psi+h\|_2^2 - \|\psi\|_2^2 - 2\langle\psi,h\rangle_2\,\big|}{\|h\|} = \frac{\|h\|_2^2}{\|h\|} \le \Lambda\|h\|. \tag{3.109}
\]
Moreover, $2\langle\psi+h,\cdot\rangle_2 - 2\langle\psi,\cdot\rangle_2 = 2\langle h,\cdot\rangle_2$, so $F$ is twice differentiable with $D^2F(\psi)(h,k) = 2\langle h,k\rangle_2$. In addition, the map $H$ is linear, hence twice Fréchet differentiable, with $DH(\psi)(h) = H(h)$ and $D^2H(\psi)(h,k) = 0$ for all $\psi, h, k \in C$.

From the definitions of $\Theta_{\delta\gamma}$ and $F$, we obtain
\[
\Theta_{\delta\gamma}(t,s,\psi,\phi) = \frac{1}{\delta}\Big[F(\psi-\phi) + F(\psi^0-\phi^0) + |t-s|^2\Big] + \gamma\Big[e^{1+F(\psi)+F(H(\psi))} + e^{1+F(\phi)+F(H(\phi))}\Big].
\]
The following chain rule, quoted from Theorem 5.2.5 in Siddiqi [Sid04], is needed to compute the Fréchet derivatives of $\Theta_{\delta\gamma}$:

Theorem 3.6.5 (Chain Rule) Let $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$ be real Banach spaces. If $S : \mathcal{X} \to \mathcal{Y}$ and $T : \mathcal{Y} \to \mathcal{Z}$ are Fréchet differentiable at $x$ and $y = S(x) \in \mathcal{Y}$, respectively, then $U = T \circ S$ is Fréchet differentiable at $x$ and its Fréchet derivative is given by $D_x U(x) = D_y T(S(x)) D_x S(x)$.

By the chain rule, $\Theta_{\delta\gamma}$ is Fréchet differentiable. In fact, for $h, k \in C$, we get

\[
D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h) = \frac{2}{\delta}\Big[\langle\psi-\phi, h\rangle_2 + \langle H(\psi-\phi), H(h)\rangle_2\Big] + 2\gamma e^{1+F(\psi)+F(H(\psi))}\Big[\langle\psi,h\rangle_2 + \langle H(\psi),H(h)\rangle_2\Big]. \tag{3.110}
\]
Similarly,
\[
D_\phi\Theta_{\delta\gamma}(t,s,\psi,\phi)(k) = \frac{2}{\delta}\Big[\langle\phi-\psi, k\rangle_2 + \langle H(\phi-\psi), H(k)\rangle_2\Big] + 2\gamma e^{1+F(\phi)+F(H(\phi))}\Big[\langle\phi,k\rangle_2 + \langle H(\phi),H(k)\rangle_2\Big]. \tag{3.111}
\]
Furthermore,
\[
\begin{aligned}
D^2_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h,k) = {}& \frac{2}{\delta}\Big[\langle h,k\rangle_2 + \langle H(h),H(k)\rangle_2\Big] \\
&+ 2\gamma e^{1+F(\psi)+F(H(\psi))}\Big[2\big(\langle\psi,k\rangle_2 + \langle H(\psi),H(k)\rangle_2\big)\big(\langle\psi,h\rangle_2 + \langle H(\psi),H(h)\rangle_2\big) + \langle k,h\rangle_2 + \langle H(k),H(h)\rangle_2\Big].
\end{aligned} \tag{3.112}
\]
Similarly,
\[
\begin{aligned}
D^2_\phi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h,k) = {}& \frac{2}{\delta}\Big[\langle h,k\rangle_2 + \langle H(h),H(k)\rangle_2\Big] \\
&+ 2\gamma e^{1+F(\phi)+F(H(\phi))}\Big[2\big(\langle\phi,k\rangle_2 + \langle H(\phi),H(k)\rangle_2\big)\big(\langle\phi,h\rangle_2 + \langle H(\phi),H(h)\rangle_2\big) + \langle k,h\rangle_2 + \langle H(k),H(h)\rangle_2\Big].
\end{aligned} \tag{3.113}
\]

By the Hahn-Banach theorem (see, e.g., [Sid04]), we can extend the continuous linear functional $T_\epsilon$ to the space $C \times C$ with its norm preserved. Since $T_\epsilon$ is linear, its first-order Fréchet derivatives are given by $T_\epsilon$ itself, that is,
\[
D_\psi T_\epsilon(\psi,\phi)h = T_\epsilon(h,\phi), \quad D_\phi T_\epsilon(\psi,\phi)k = T_\epsilon(\psi,k), \quad \forall \psi,\phi,h,k \in C.
\]
For the second derivatives, we have
\[
D^2_\psi T_\epsilon(\psi,\phi)(h,k) = 0, \quad D^2_\phi T_\epsilon(\psi,\phi)(h,k) = 0, \quad \forall \psi,\phi,h,k \in C. \tag{3.115}
\]

Observe that we can extend $D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)$ and $D^2_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)$, the first- and second-order Fréchet derivatives of $\Theta_{\delta\gamma}$ with respect to $\psi$, to the space $C \oplus B$ (see Lemma 2.2.3 and Lemma 2.2.4 in Chapter 2) by setting
\[
D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h + v\mathbf{1}_{\{0\}}) = \frac{2}{\delta}\Big[\langle\psi-\phi,\, h+v\mathbf{1}_{\{0\}}\rangle_2 + \langle H(\psi-\phi),\, H(h+v\mathbf{1}_{\{0\}})\rangle_2\Big] + 2\gamma e^{1+F(\psi)+F(H(\psi))}\Big[\langle\psi,\, h+v\mathbf{1}_{\{0\}}\rangle_2 + \langle H(\psi),\, H(h+v\mathbf{1}_{\{0\}})\rangle_2\Big] \tag{3.116}
\]
and
\[
\begin{aligned}
D^2_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)&(h+v\mathbf{1}_{\{0\}},\, k+w\mathbf{1}_{\{0\}}) \\
= {}& \frac{2}{\delta}\Big[\langle h+v\mathbf{1}_{\{0\}},\, k+w\mathbf{1}_{\{0\}}\rangle_2 + \langle H(h+v\mathbf{1}_{\{0\}}),\, H(k+w\mathbf{1}_{\{0\}})\rangle_2\Big] \\
&+ 2\gamma e^{1+F(\psi)+F(H(\psi))}\Big[2\big(\langle\psi,\, k+w\mathbf{1}_{\{0\}}\rangle_2 + \langle H(\psi),\, H(k+w\mathbf{1}_{\{0\}})\rangle_2\big)\big(\langle\psi,\, h+v\mathbf{1}_{\{0\}}\rangle_2 + \langle H(\psi),\, H(h+v\mathbf{1}_{\{0\}})\rangle_2\big) \\
&\qquad + \langle k+w\mathbf{1}_{\{0\}},\, h+v\mathbf{1}_{\{0\}}\rangle_2 + \langle H(k+w\mathbf{1}_{\{0\}}),\, H(h+v\mathbf{1}_{\{0\}})\rangle_2\Big]
\end{aligned} \tag{3.117}
\]
for $v,w \in \Re^n$ and $h,k \in C$. Moreover, it is easy to see that these extensions are continuous, in that there exists a constant $\Lambda > 0$ such that
\[
|\langle\psi-\phi,\, h+v\mathbf{1}_{\{0\}}\rangle_2| \le \|\psi-\phi\|_2\,\|h+v\mathbf{1}_{\{0\}}\|_2 \le \Lambda\|\psi-\phi\|_2(\|h\|+|v|), \tag{3.118}
\]
\[
|\langle\psi,\, h+v\mathbf{1}_{\{0\}}\rangle_2| \le \|\psi\|_2\,\|h+v\mathbf{1}_{\{0\}}\|_2 \le \Lambda\|\psi\|_2(\|h\|+|v|), \tag{3.119}
\]
\[
|\langle\psi,\, k+w\mathbf{1}_{\{0\}}\rangle_2| \le \|\psi\|_2\,\|k+w\mathbf{1}_{\{0\}}\|_2 \le \Lambda\|\psi\|_2(\|k\|+|w|), \tag{3.120}
\]
and
\[
|\langle k+w\mathbf{1}_{\{0\}},\, h+v\mathbf{1}_{\{0\}}\rangle_2| \le \|k+w\mathbf{1}_{\{0\}}\|_2\,\|h+v\mathbf{1}_{\{0\}}\|_2 \le \Lambda(\|k\|+|w|)(\|h\|+|v|). \tag{3.121}
\]
Similarly, we can extend the first- and second-order Fréchet derivatives of $\Theta_{\delta\gamma}$ with respect to $\phi$ to the space $C \oplus B$ and obtain analogous expressions for $D_\phi\Theta_{\delta\gamma}(t,s,\psi,\phi)(k+w\mathbf{1}_{\{0\}})$ and $D^2_\phi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h+v\mathbf{1}_{\{0\}},\, k+w\mathbf{1}_{\{0\}})$. The same holds for the bounded linear functional $T_\epsilon$, whose extension is still written $T_\epsilon$. In addition, it is easy to verify that for any $\phi \in C$ and $v,w \in \Re^n$, we have
\[
\langle\phi, v\mathbf{1}_{\{0\}}\rangle_2 = \int_{-r}^0 \phi(\theta)\cdot v\mathbf{1}_{\{0\}}(\theta)\, d\theta = 0, \tag{3.122}
\]
\[
\langle w\mathbf{1}_{\{0\}}, v\mathbf{1}_{\{0\}}\rangle_2 = \int_{-r}^0 w\mathbf{1}_{\{0\}}(\theta)\cdot v\mathbf{1}_{\{0\}}(\theta)\, d\theta = 0, \tag{3.123}
\]
\[
H(v\mathbf{1}_{\{0\}}) = v\mathbf{1}_{\{-r\}}, \qquad \langle H(\psi), H(v\mathbf{1}_{\{0\}})\rangle_2 = 0, \tag{3.124}
\]
\[
\langle H(w\mathbf{1}_{\{0\}}), H(v\mathbf{1}_{\{0\}})\rangle_2 = 0. \tag{3.125}
\]
These observations will be used later. Next, we need several lemmas about the operator $\mathcal{S}$.

Lemma 3.6.6 Given $\phi \in C$, we have
\[
\mathcal{S}(F)(\phi) = |\phi(0)|^2 - |\phi(-r)|^2, \tag{3.126}
\]
\[
\mathcal{S}(F)(\phi^0) = -|\phi(0)|^2, \tag{3.127}
\]
where $F$ is the functional defined in (3.107) and $\mathcal{S}$ is the operator defined in (3.60).

Proof. Recall that
\[
\mathcal{S}(F)(\phi) = \lim_{t\downarrow 0}\frac{1}{t}\big[F(\tilde{\phi}_t) - F(\phi)\big] \tag{3.128}
\]
for all $\phi \in C$, where $\tilde{\phi} : [-r,T] \to \Re^n$ is the extension of $\phi$ defined by
\[
\tilde{\phi}(t) = \begin{cases}\phi(t) & \text{if } t \in [-r,0)\\ \phi(0) & \text{if } t \ge 0,\end{cases} \tag{3.129}
\]
and $\tilde{\phi}_t \in C$ is defined by $\tilde{\phi}_t(\theta) = \tilde{\phi}(t+\theta)$, $\theta \in [-r,0]$.

Therefore, we have  1 ˜ 2 φt 2 − φ 22 t→0+ t  0  0 1 = lim |φ˜t (θ)|2 dθ − |φ(θ)|2 dθ t→0+ t −r −r  0  0 1 2 2 ˜ = lim |φ(θ + t)| dθ − |φ(θ)| dθ t→0+ t −r −r  t  0 1 2 ˜ = lim |φ(θ)| dθ − |φ(θ)|2 dθ t→0+ t −r+t −r  0  t  0 1 2 2 ˜ ˜ = lim |φ(θ)| dθ + |φ(θ)| dθ − |φ(θ)|2 dθ t→0+ t −r+t 0 −r  0  t  0 1 2 2 2 = lim |φ(θ)| dθ + |φ(0)| dθ − |φ(θ)| dθ t→0+ t −r+t 0 −r  0  0 1 = lim |φ(θ)|2 dθ − |φ(θ)|2 dθ t→0+ t −r+t −r  t 1 + lim |φ(0)|2 dθ t→0+ t 0

S(F )(φ) = lim

3.6 Uniqueness

1 t→0+ t



= lim

0

t

1 t→0+ t

|φ(0)|2 dθ − lim



−r+t

−r

|φ(θ)|2 dθ

183



= |φ(0)|2 − |φ(−r)|2 .

(3.130)

Similarly, we have S(F )(φ0 ) = |φ0 (0)|2 − |φ0 (−r)|2 = −|φ(0)|2 .

2

Let $\mathcal{S}_\psi$ and $\mathcal{S}_\phi$ denote the operator $\mathcal{S}$ applied in the variables $\psi$ and $\phi$, respectively. We have the following lemma.

Lemma 3.6.7 Given $\psi, \phi \in C$,
\[
\mathcal{S}_\psi(F)(\psi-\phi) + \mathcal{S}_\phi(F)(\psi-\phi) = |\psi(0)-\phi(0)|^2 - |\psi(-r)-\phi(-r)|^2 \tag{3.131}
\]
and
\[
\mathcal{S}_\psi(F)(\psi^0-\phi^0) + \mathcal{S}_\phi(F)(\psi^0-\phi^0) = -|\psi(0)-\phi(0)|^2, \tag{3.132}
\]
where $F$ is the functional defined in (3.107) and $\mathcal{S}$ is the operator defined in (3.60).

Proof. To prove the lemma, we need the following result, which is easily proved from the definitions provided that $\psi \in \mathcal{D}(\tilde{S})$:
\[
\mathcal{S}(F)(\psi) = DF(\psi)\tilde{S}(\psi), \tag{3.133}
\]
where $DF(\psi)$ is the Fréchet derivative of $F$ at $\psi$ and $\tilde{S} : \mathcal{D}(\tilde{S}) \subset C \to C$ is defined by
\[
\tilde{S}(\psi) = \lim_{t\downarrow 0}\frac{\tilde{\psi}_t - \psi}{t}.
\]
We first assume that $\psi \in \mathcal{D}(\tilde{S})$, the domain of the operator $\tilde{S}$, which consists of those $\psi \in C$ for which the above limit exists. It can be shown that
\[
\mathcal{D}(\tilde{S}) = \{\psi \in C \mid \psi \text{ is absolutely continuous and } \dot{\psi}(0+) = 0\}.
\]
In this case, we have
\[
\mathcal{S}(F)(\psi) = DF(\psi)\tilde{S}(\psi) = 2(\psi\,|\,\tilde{S}(\psi)).
\]
On the other hand, by virtue of Lemma 3.6.6, $\mathcal{S}(F)(\psi) = |\psi(0)|^2 - |\psi(-r)|^2$. Therefore,
\[
(\psi\,|\,\tilde{S}(\psi)) = \frac{1}{2}\big[|\psi(0)|^2 - |\psi(-r)|^2\big].
\]
Since $\tilde{S}$ is a linear operator, we have
\[
(\psi-\phi\,|\,\tilde{S}(\psi)-\tilde{S}(\phi)) = (\psi-\phi\,|\,\tilde{S}(\psi-\phi)) = \frac{1}{2}\big[|\psi(0)-\phi(0)|^2 - |\psi(-r)-\phi(-r)|^2\big]. \tag{3.134}
\]


Given the above results, we can now compute
\[
\begin{aligned}
\mathcal{S}_\psi(F)(\psi-\phi) + \mathcal{S}_\phi(F)(\psi-\phi) &= \lim_{t\downarrow 0}\frac{1}{t}\Big[\|\tilde{\psi}_t-\phi\|_2^2 - \|\psi-\phi\|_2^2 + \|\psi-\tilde{\phi}_t\|_2^2 - \|\psi-\phi\|_2^2\Big] \\
&= \lim_{t\downarrow 0}\frac{1}{t}\Big[\|\tilde{\psi}_t\|_2^2 - \|\psi\|_2^2 + \|\tilde{\phi}_t\|_2^2 - \|\phi\|_2^2 - 2\big[(\tilde{\psi}_t|\phi) - (\psi|\phi) + (\psi|\tilde{\phi}_t) - (\psi|\phi)\big]\Big] \\
&= \mathcal{S}(F)(\psi) + \mathcal{S}(F)(\phi) - 2\big[(\tilde{S}(\psi)|\phi) + (\psi|\tilde{S}(\phi))\big] \\
&= 2(\psi|\tilde{S}(\psi)) + 2(\phi|\tilde{S}(\phi)) - 2\big[(\tilde{S}(\psi)|\phi) + (\psi|\tilde{S}(\phi))\big] \\
&= 2(\psi-\phi\,|\,\tilde{S}(\psi-\phi)) \\
&= |\psi(0)-\phi(0)|^2 - |\psi(-r)-\phi(-r)|^2,
\end{aligned}
\]
provided that $\psi,\phi \in \mathcal{D}(\tilde{S})$.

For arbitrary $\psi,\phi \in C$, one can construct sequences $\{\psi_k\}_{k=1}^\infty$ and $\{\phi_k\}_{k=1}^\infty$ in $\mathcal{D}(\tilde{S})$ such that
\[
\lim_{k\to\infty}\|\psi_k-\psi\| = 0 \quad \text{and} \quad \lim_{k\to\infty}\|\phi_k-\phi\| = 0.
\]
Consequently, by the linearity of $\mathcal{S}$ and the continuity of $F : C \to \Re$, we have
\[
\begin{aligned}
\mathcal{S}_\psi(F)(\psi-\phi) + \mathcal{S}_\phi(F)(\psi-\phi) &= \lim_{k\to\infty}\big[\mathcal{S}_\psi(F)(\psi_k-\phi_k) + \mathcal{S}_\phi(F)(\psi_k-\phi_k)\big] \\
&= \lim_{k\to\infty}\big[|\psi_k(0)-\phi_k(0)|^2 - |\psi_k(-r)-\phi_k(-r)|^2\big] \\
&= |\psi(0)-\phi(0)|^2 - |\psi(-r)-\phi(-r)|^2.
\end{aligned}
\]
By the same argument, we have
\[
\mathcal{S}_\psi(F)(\psi^0-\phi^0) + \mathcal{S}_\phi(F)(\psi^0-\phi^0) = |\psi^0(0)-\phi^0(0)|^2 - |\psi^0(-r)-\phi^0(-r)|^2 = -|\psi(0)-\phi(0)|^2. \qquad \Box
\]

Lemma 3.6.8 Given $\phi \in C$, define the operator $G$ by
\[
G(\phi) = e^{1+F(\phi)+F(\phi^0)}. \tag{3.135}
\]
Then
\[
\mathcal{S}(G)(\phi) = -|\phi(-r)|^2\, e^{1+F(\phi)+F(\phi^0)}, \tag{3.136}
\]
where $F$ is the functional defined in (3.107) and $\mathcal{S}$ is the operator defined in (3.60).

Proof. Recall that
\[
\mathcal{S}(G)(\phi) = \lim_{t\downarrow 0}\frac{1}{t}\big[G(\tilde{\phi}_t) - G(\phi)\big] \tag{3.137}
\]

for all $\phi \in C$, where $\tilde{\phi} : [-r,T] \to \Re^n$ is the extension of $\phi$ defined by
\[
\tilde{\phi}(t) = \begin{cases}\phi(t) & \text{if } t \in [-r,0)\\ \phi(0) & \text{if } t \ge 0,\end{cases} \tag{3.138}
\]
and, again, $\tilde{\phi}_t \in C$ is defined by $\tilde{\phi}_t(\theta) = \tilde{\phi}(t+\theta)$, $\theta \in [-r,0]$. Writing $\tilde{\phi}^0$ for the analogous extension of $\phi^0$ (note that $\phi^0(0) = 0$), we have
\[
\begin{aligned}
\mathcal{S}(G)(\phi) &= \lim_{t\downarrow 0}\frac{1}{t}\Big[e^{\,1+\int_{-r}^0|\tilde{\phi}(\theta+t)|^2 d\theta+\int_{-r}^0|\tilde{\phi}^0(\theta+t)|^2 d\theta} - e^{\,1+\int_{-r}^0|\phi(\theta)|^2 d\theta+\int_{-r}^0|\phi^0(\theta)|^2 d\theta}\Big] \\
&= \lim_{t\downarrow 0}\frac{1}{t}\Big[e^{\,1+\int_{-r+t}^t|\tilde{\phi}(\theta)|^2 d\theta+\int_{-r+t}^t|\tilde{\phi}^0(\theta)|^2 d\theta} - e^{\,1+\int_{-r}^0|\phi(\theta)|^2 d\theta+\int_{-r}^0|\phi^0(\theta)|^2 d\theta}\Big] \\
&= \lim_{t\downarrow 0}\frac{1}{t}\Big[e^{\,1+\int_{-r+t}^0|\phi(\theta)|^2 d\theta + t|\phi(0)|^2 + \int_{-r+t}^0|\phi^0(\theta)|^2 d\theta + t|\phi^0(0)|^2} - e^{\,1+\int_{-r}^0|\phi(\theta)|^2 d\theta+\int_{-r}^0|\phi^0(\theta)|^2 d\theta}\Big].
\end{aligned} \tag{3.139}
\]
Using L'Hôpital's rule on the last expression, we obtain
\[
\begin{aligned}
\mathcal{S}(G)(\phi) &= \lim_{t\downarrow 0} e^{\,1+\int_{-r+t}^0|\phi(\theta)|^2 d\theta + t|\phi(0)|^2 + \int_{-r+t}^0|\phi^0(\theta)|^2 d\theta + t|\phi^0(0)|^2}\Big[|\phi(0)|^2 - |\phi(-r+t)|^2 + |\phi^0(0)|^2 - |\phi^0(-r+t)|^2\Big] \\
&= \big(|\phi(0)|^2 - |\phi(-r)|^2 + |\phi^0(0)|^2 - |\phi^0(-r)|^2\big)\, e^{\,1+\int_{-r}^0|\phi(\theta)|^2 d\theta+\int_{-r}^0|\phi^0(\theta)|^2 d\theta} \\
&= \big(|\phi(0)|^2 - |\phi(-r)|^2 - |\phi(0)|^2\big)\, e^{\,1+F(\phi)+F(\phi^0)} \\
&= -|\phi(-r)|^2\, e^{\,1+F(\phi)+F(\phi^0)},
\end{aligned} \tag{3.140}
\]
since $\phi^0(0) = 0$ and $|\phi^0(-r)|^2 = |\phi(0)|^2$. $\Box$
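Formula (3.136) can be checked the same way, by differencing $G$ along the shifted extension. As before, the test segment and step sizes below are arbitrary illustrative choices, not from the text.

```python
import numpy as np

r = 1.0
grid = np.linspace(-r, 0.0, 20001)

def trap(y, x):
    # trapezoidal rule (kept local for portability)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

F = lambda vals: trap(vals**2, grid)                # F(phi) = ||phi||_2^2

def shifted(phi, t):
    # phi~_t(theta) = phi~(t + theta), extension by the constant phi(0)
    theta = grid + t
    return np.where(theta < 0.0, phi(np.minimum(theta, 0.0)), phi(0.0))

phi = lambda th: np.cos(2.0 * th)                   # arbitrary smooth test segment
phi0 = lambda th: (th / -r) * phi(-r - th)          # theta -> (theta/-r) phi(-r-theta)

G = lambda p, p0: np.exp(1.0 + F(p) + F(p0))        # G(phi) = e^{1 + F(phi) + F(phi^0)}

t = 1e-5
S_G = (G(shifted(phi, t), shifted(phi0, t)) - G(phi(grid), phi0(grid))) / t
S_G_exact = -phi(-r)**2 * np.exp(1.0 + F(phi(grid)) + F(phi0(grid)))   # eq. (3.136)
print(S_G, S_G_exact)
```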

Lemma 3.6.9 For any $\psi, \phi \in C$, we have
\[
\lim_{\epsilon\downarrow 0}|\mathcal{S}_\psi(T_\epsilon)(\psi,\phi)| = 0 \quad \text{and} \quad \lim_{\epsilon\downarrow 0}|\mathcal{S}_\phi(T_\epsilon)(\psi,\phi)| = 0. \tag{3.141}
\]


Proof. We only prove the first equality in the lemma, since the second can be proved similarly.

We first assume that $\psi \in \mathcal{D}(\tilde{S})$, where the operator $\tilde{S} : \mathcal{D}(\tilde{S}) \subset C \to C$ and its domain $\mathcal{D}(\tilde{S})$ are as in the proof of Lemma 3.6.7. In this case,
\[
\begin{aligned}
\lim_{\epsilon\downarrow 0}|\mathcal{S}_\psi(T_\epsilon)(\psi,\phi)| &= \lim_{\epsilon\downarrow 0}\left|\lim_{t\downarrow 0}\frac{T_\epsilon(\tilde{\psi}_t,\phi) - T_\epsilon(\psi,\phi)}{t}\right|
= \lim_{\epsilon\downarrow 0}\left|T_\epsilon\Big(\lim_{t\downarrow 0}\frac{\tilde{\psi}_t-\psi}{t},\,\phi\Big)\right| \\
&\le \lim_{\epsilon\downarrow 0}\|T_\epsilon\|\left(\Big\|\lim_{t\downarrow 0}\frac{\tilde{\psi}_t-\psi}{t}\Big\| + \|\phi\|\right)
\le \lim_{\epsilon\downarrow 0}\epsilon\big(\|\tilde{S}\psi\| + \|\phi\|\big) = 0,
\end{aligned} \tag{3.142}
\]
because $T_\epsilon$ is a bounded linear functional on $C \times C$ with norm equal to $\epsilon$.

For arbitrary $\psi, \phi \in C$, one can construct a sequence $\psi_k \in \mathcal{D}(\tilde{S})$, $k = 1,2,\ldots$, such that $\lim_{k\to\infty}\|\psi_k-\psi\| = 0$. We have
\[
\lim_{\epsilon\downarrow 0}|\mathcal{S}_\psi(T_\epsilon)(\psi_k,\phi)| = 0, \quad \forall k = 1,2,\ldots.
\]
Consequently,
\[
\lim_{\epsilon\downarrow 0}|\mathcal{S}_\psi(T_\epsilon)(\psi,\phi)| = 0
\]
by a limiting argument. $\Box$

Given all of the above results, we are now ready to prove Theorem 3.6.1.

Proof of Theorem 3.6.1. Define
\[
\Gamma_1(t,\psi) \equiv V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + \Theta_{\delta\gamma}(t, s_{\delta\gamma\epsilon}, \psi, \phi_{\delta\gamma\epsilon}) - T_\epsilon(\psi, \phi_{\delta\gamma\epsilon}) - M_{\delta\gamma\epsilon} \tag{3.143}
\]
and
\[
\Gamma_2(s,\phi) \equiv V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \Theta_{\delta\gamma}(t_{\delta\gamma\epsilon}, s, \psi_{\delta\gamma\epsilon}, \phi) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi) + M_{\delta\gamma\epsilon} \tag{3.144}
\]
for all $s,t \in [0,T]$ and $\psi,\phi \in C$. Recall that $\Phi_{\delta\gamma}(t,s,\psi,\phi) = V_1(t,\psi) - V_2(s,\phi) - \Theta_{\delta\gamma}(t,s,\psi,\phi)$ and that $\Phi_{\delta\gamma} + T_\epsilon + M_{\delta\gamma\epsilon}$ attains its maximum value zero at $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ in $[0,T] \times [0,T] \times C \times C$.

By the definitions of $\Gamma_1$ and $\Gamma_2$, it is easy to verify that
\[
\Gamma_1(t,\psi) \ge V_1(t,\psi), \quad \Gamma_2(s,\phi) \le V_2(s,\phi), \quad \forall t,s \in [0,T] \text{ and } \psi,\phi \in C,
\]
with $V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) = \Gamma_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon})$ and $V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = \Gamma_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$.

Applying the definition of viscosity subsolution to $V_1$ with the test function $\Gamma_1$, we have
\[
\alpha V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \partial_t\Gamma_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \sup_{v\in U}\big[\mathcal{A}^v(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) + L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\big] \le 0. \tag{3.145}
\]
By the definition of the operator $\mathcal{A}^v$ and of $\Gamma_1$, the fact that the second-order Fréchet derivative of $T_\epsilon$ vanishes, and combining (3.110), (3.111), (3.112), (3.113), (3.116), (3.117), and (3.122)-(3.125), we have
\[
\begin{aligned}
\mathcal{A}^v(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) &= \mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) + D_\psi\Theta_{\delta\gamma}(\cdots)\big(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}\big) \\
&\quad + \frac{1}{2}\sum_{j=1}^m D^2_\psi\Theta_{\delta\gamma}(\cdots)\big(g(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)(e_j)\mathbf{1}_{\{0\}},\, g(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)(e_j)\mathbf{1}_{\{0\}}\big) - D_\psi T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}\big) \\
&= \mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - T_\epsilon\big(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}},\, \phi_{\delta\gamma\epsilon}\big),
\end{aligned}
\]
where $\Theta_{\delta\gamma}(\cdots)$ abbreviates $\Theta_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ here and in what follows. Inequality (3.145) and the above identity together yield
\[
\alpha V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \partial_t\Gamma_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \sup_{v\in U}\big[-T_\epsilon(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}, \phi_{\delta\gamma\epsilon}) + L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\big] \le 0. \tag{3.146}
\]
Similarly, applying the definition of viscosity supersolution to $V_2$ with the test function $\Gamma_2$ and using the same techniques as for (3.146), we have
\[
\alpha V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \partial_s\Gamma_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \sup_{v\in U}\big[T_\epsilon(\psi_{\delta\gamma\epsilon}, f(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}) + L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\big] \ge 0. \tag{3.147}
\]
Since $\partial_t\Gamma_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) = \frac{2}{\delta}(t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon})$, inequality (3.146) is equivalent to
\[
\alpha V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \frac{2}{\delta}(t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}) - \sup_{v\in U}\big[-T_\epsilon(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}, \phi_{\delta\gamma\epsilon}) + L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\big] \le 0. \tag{3.148}
\]
Similarly, since $\partial_s\Gamma_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = \frac{2}{\delta}(t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon})$ as well, inequality (3.147) is equivalent to
\[
\alpha V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \frac{2}{\delta}(t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}) - \sup_{v\in U}\big[T_\epsilon(\psi_{\delta\gamma\epsilon}, f(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}) + L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\big] \ge 0. \tag{3.149}
\]
Subtracting (3.149) from (3.148) (note that the terms $\frac{2}{\delta}(t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon})$ cancel), we obtain
\[
\begin{aligned}
\alpha\big(V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big) \le {}& \mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \\
&+ \sup_{v\in U}\big[L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v) - T_\epsilon(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}, \phi_{\delta\gamma\epsilon})\big] \\
&- \sup_{v\in U}\big[L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v) + T_\epsilon(\psi_{\delta\gamma\epsilon}, f(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}})\big].
\end{aligned} \tag{3.150}
\]

From the definition (3.60) of $\mathcal{S}$, it is clear that $\mathcal{S}$ is linear and vanishes on constants. Recall that
\[
\Gamma_1(t,\psi) = V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + \Theta_{\delta\gamma}(t, s_{\delta\gamma\epsilon}, \psi, \phi_{\delta\gamma\epsilon}) - T_\epsilon(\psi, \phi_{\delta\gamma\epsilon}) - M_{\delta\gamma\epsilon} \tag{3.151}
\]
and
\[
\Gamma_2(s,\phi) = V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \Theta_{\delta\gamma}(t_{\delta\gamma\epsilon}, s, \psi_{\delta\gamma\epsilon}, \phi) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi) + M_{\delta\gamma\epsilon}. \tag{3.152}
\]
Thus, we have
\[
\mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) = \mathcal{S}_\psi(\Theta_{\delta\gamma})(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - \mathcal{S}_\psi(T_\epsilon)(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \tag{3.153}
\]
and
\[
\mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = -\mathcal{S}_\phi(\Theta_{\delta\gamma})(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + \mathcal{S}_\phi(T_\epsilon)(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}). \tag{3.154}
\]
Therefore,
\[
\mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = \mathcal{S}_\psi(\Theta_{\delta\gamma})(\cdots) + \mathcal{S}_\phi(\Theta_{\delta\gamma})(\cdots) - \big[\mathcal{S}_\psi(T_\epsilon)(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + \mathcal{S}_\phi(T_\epsilon)(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big]. \tag{3.155}
\]
Recall that
\[
\Theta_{\delta\gamma}(t,s,\psi,\phi) = \frac{1}{\delta}\big[F(\psi-\phi) + F(\psi^0-\phi^0) + |t-s|^2\big] + \gamma\big(G(\psi) + G(\phi)\big).
\]
Therefore, we have
\[
\begin{aligned}
\big[\mathcal{S}_\psi(\Theta_{\delta\gamma}) + \mathcal{S}_\phi(\Theta_{\delta\gamma})\big](t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = {}& \frac{1}{\delta}\big[\mathcal{S}_\psi(F)(\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}) + \mathcal{S}_\phi(F)(\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}) \\
&\qquad + \mathcal{S}_\psi(F)(\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}) + \mathcal{S}_\phi(F)(\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon})\big] \\
&+ \gamma\big[\mathcal{S}_\psi(G)(\psi_{\delta\gamma\epsilon}) + \mathcal{S}_\phi(G)(\phi_{\delta\gamma\epsilon})\big]. 
\end{aligned} \tag{3.156}
\]
Using Lemma 3.6.7 and Lemma 3.6.8, we deduce that
\[
\begin{aligned}
\big[\mathcal{S}_\psi(\Theta_{\delta\gamma}) + \mathcal{S}_\phi(\Theta_{\delta\gamma})\big](t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) = {}& -\frac{1}{\delta}\big|\psi_{\delta\gamma\epsilon}(-r) - \phi_{\delta\gamma\epsilon}(-r)\big|^2 \\
&- \gamma\Big[|\psi_{\delta\gamma\epsilon}(-r)|^2\, e^{1+F(\psi_{\delta\gamma\epsilon})+F(\psi^0_{\delta\gamma\epsilon})} + |\phi_{\delta\gamma\epsilon}(-r)|^2\, e^{1+F(\phi_{\delta\gamma\epsilon})+F(\phi^0_{\delta\gamma\epsilon})}\Big] \\
\le {}& 0.
\end{aligned} \tag{3.157}
\]

Thus, by virtue of (3.155) and Lemma 3.6.9, we have
\[
\limsup_{\delta\downarrow 0,\, \epsilon\downarrow 0}\big[\mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big] \le 0. \tag{3.158}
\]
Moreover, since the norm of $T_\epsilon$ is at most $\epsilon$, for any $\gamma > 0$, using (3.158) and taking the $\limsup$ on both sides of (3.150) as $\delta$ and $\epsilon$ go to zero, we obtain
\[
\begin{aligned}
\limsup_{\epsilon\downarrow 0,\, \delta\downarrow 0}\alpha\big(V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big)
&\le \limsup_{\epsilon\downarrow 0,\, \delta\downarrow 0}\Big[\mathcal{S}(\Gamma_1)(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - \mathcal{S}(\Gamma_2)(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \\
&\qquad + \sup_{v\in U}\big[L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v) - T_\epsilon(f(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}}, \phi_{\delta\gamma\epsilon})\big] \\
&\qquad - \sup_{v\in U}\big[L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v) + T_\epsilon(\psi_{\delta\gamma\epsilon}, f(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\mathbf{1}_{\{0\}})\big]\Big] \\
&\le \limsup_{\epsilon\downarrow 0,\, \delta\downarrow 0}\,\sup_{v\in U}\big[L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v) - L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\big].
\end{aligned} \tag{3.159}
\]
Using the Lipschitz continuity of $L$ and Lemma 3.6.4, we see that
\[
\limsup_{\delta\downarrow 0,\, \epsilon\downarrow 0}\,\sup_{v\in U}\big|L(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, v) - L(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}, v)\big| \le \limsup_{\delta\downarrow 0,\, \epsilon\downarrow 0} C\big(|t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}| + \|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2\big) = 0. \tag{3.160}
\]
By virtue of (3.160), we get
\[
\limsup_{\epsilon\downarrow 0,\, \delta\downarrow 0}\alpha\big(V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})\big) \le 0. \tag{3.161}
\]

Since $(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon})$ is the maximum of $\Phi_{\delta\gamma} + T_\epsilon$ in $[0,T] \times [0,T] \times C \times C$, for all $(t,\psi) \in [0,T] \times C$ we have
\[
\Phi_{\delta\gamma}(t,t,\psi,\psi) + T_\epsilon(\psi,\psi) \le \Phi_{\delta\gamma}(t_{\delta\gamma\epsilon}, s_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}). \tag{3.162}
\]
Hence
\[
\begin{aligned}
V_1(t,\psi) - V_2(t,\psi) \le {}& V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) \\
&- \frac{1}{\delta}\Big[\|\psi_{\delta\gamma\epsilon}-\phi_{\delta\gamma\epsilon}\|_2^2 + \|\psi^0_{\delta\gamma\epsilon}-\phi^0_{\delta\gamma\epsilon}\|_2^2 + |t_{\delta\gamma\epsilon}-s_{\delta\gamma\epsilon}|^2\Big] \\
&+ 2\gamma\exp(1+\|\psi\|_2^2+\|\psi^0\|_2^2) - \gamma\Big[\exp(1+\|\psi_{\delta\gamma\epsilon}\|_2^2+\|\psi^0_{\delta\gamma\epsilon}\|_2^2) + \exp(1+\|\phi_{\delta\gamma\epsilon}\|_2^2+\|\phi^0_{\delta\gamma\epsilon}\|_2^2)\Big] \\
&+ T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - T_\epsilon(\psi,\psi) \\[2pt]
\le {}& V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + 2\gamma\exp(1+\|\psi\|_2^2+\|\psi^0\|_2^2) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - T_\epsilon(\psi,\psi), 
\end{aligned} \tag{3.163--3.164}
\]
where the last inequality follows from $\delta > 0$ and $\gamma > 0$. By virtue of (3.161), taking the $\limsup$ in (3.164) as $\delta$, $\epsilon$, and $\gamma$ go to zero, we obtain
\[
V_1(t,\psi) - V_2(t,\psi) \le \limsup_{\gamma\downarrow 0,\, \epsilon\downarrow 0,\, \delta\downarrow 0}\Big[V_1(t_{\delta\gamma\epsilon}, \psi_{\delta\gamma\epsilon}) - V_2(s_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) + 2\gamma\exp(1+\|\psi\|_2^2+\|\psi^0\|_2^2) + T_\epsilon(\psi_{\delta\gamma\epsilon}, \phi_{\delta\gamma\epsilon}) - T_\epsilon(\psi,\psi)\Big] \le 0. \tag{3.165}
\]
Therefore,
\[
V_1(t,\psi) \le V_2(t,\psi), \quad \forall (t,\psi) \in [0,T] \times C. \tag{3.166}
\]
This completes the proof of Theorem 3.6.1. $\Box$

The uniqueness of the viscosity solution of (3.64) follows directly from this theorem, because any viscosity solution is both a viscosity subsolution and a supersolution.

Theorem 3.6.10 The value function $V : [0,T] \times C \to \Re$ of Problem (OCCP) defined by (3.7) is the unique viscosity solution of the HJBE (3.64).

Proof. Suppose $V_1, V_2 : [0,T] \times C \to \Re$ are two viscosity solutions of the HJBE (3.64). Then each is both a viscosity subsolution and a supersolution. By Theorem 3.6.1, we have
\[
V_2(t,\psi) \le V_1(t,\psi) \le V_2(t,\psi), \quad \forall (t,\psi) \in [0,T] \times C.
\]
This shows that $V_1(t,\psi) = V_2(t,\psi)$ for all $(t,\psi) \in [0,T] \times C$. Therefore, the value function $V : [0,T] \times C \to \Re$ of Problem (OCCP) is the unique viscosity solution of the HJBE (3.64). $\Box$

3.7 Verification Theorems

In this section, a conjecture on a version of the verification theorem in the framework of viscosity solutions is presented. The value function $V : [0,T] \times C \to \Re$ for Problem (OCCP) has been shown in Sections 3.5 and 3.6 to be the unique viscosity solution of the HJBE (3.64). The remaining question for completely solving Problem (OCCP) is the computation of the optimal state-control pair $(x^*(\cdot), u^*(\cdot))$. The classical verification theorem reads as follows.

Theorem 3.7.1 Let $\Phi \in C^{1,2}_{\mathrm{lip}}([0,T] \times C) \cap \mathcal{D}(\mathcal{S})$ be a (classical) solution of the HJBE (3.64). Then the following hold:
(i) $\Phi(t,\psi) \ge J(t,\psi;u(\cdot))$ for any $(t,\psi) \in [0,T] \times C$ and any $u(\cdot) \in U[t,T]$.
(ii) Suppose that a given admissible pair $(x^*(\cdot), u^*(\cdot))$ for the optimal classical control problem $(OCP)(t,\psi)$ satisfies
\[
0 = \partial_t\Phi(s, x^*_s) + \mathcal{A}^{u^*(s)}\Phi(s, x^*_s) + L(s, x^*_s, u^*(s)), \quad P\text{-a.s., a.e. } s \in [t,T];
\]
then $(x^*(\cdot), u^*(\cdot))$ is an optimal pair for $(OCP)(t,\psi)$.

Define the Hamiltonian function $H : [0,T] \times C \times C^* \times C^\dagger \times U \to \Re$ by
\[
H(t,\phi,q,Q,u) = \frac{1}{2}\sum_{j=1}^m \bar{Q}\big(g(t,\phi,u)\mathbf{1}_{\{0\}}e_j,\, g(t,\phi,u)\mathbf{1}_{\{0\}}e_j\big) + \bar{q}\big(f(t,\phi,u)\mathbf{1}_{\{0\}}\big) + L(t,\phi,u), \tag{3.167}
\]
where $\bar{q} \in (C \oplus B)^*$ is the continuous extension of $q$ from $C^*$ to $(C \oplus B)^*$ and $\bar{Q} \in (C \oplus B)^\dagger$ is the continuous extension of $Q$ from $C^\dagger$ to $(C \oplus B)^\dagger$ (see Lemma 2.2.3 and Lemma 2.2.4 for details). We make the following conjecture on a verification theorem in the viscosity framework.

Conjecture (The Generalized Verification Theorem). Let $\bar{V} \in C((0,T] \times C, \Re)$ be a viscosity supersolution of the HJBE (3.64) satisfying the polynomial growth condition
\[
|\bar{V}(t,\psi)| \le C(1 + \|\psi\|_2^k) \quad \text{for some } k \ge 1,\ (t,\psi) \in (0,T) \times C, \tag{3.168}
\]
and such that $\bar{V}(T,\psi) = \Psi(\psi)$. Then we have the following:
(i) $\bar{V}(t,\psi) \ge J(t,\psi;u(\cdot))$ for any $(t,\psi) \in (0,T] \times C$ and $u(\cdot) \in U[t,T]$.
(ii) Fix any $(t,\psi) \in (0,T) \times C$, and let $(x^*(\cdot), u^*(\cdot))$ be an admissible pair for Problem (OCCP). Suppose that there exists $(p^*(\cdot), q^*(\cdot), Q^*(\cdot)) \in L^2_{\mathcal{F}(t)}(t,T;\Re) \times L^2_{\mathcal{F}(t)}(t,T;C^*) \times L^2_{\mathcal{F}(t)}(t,T;C^\dagger)$ such that, for a.e. $s \in [t,T]$,
\[
(p^*(s), q^*(s), Q^*(s)) \in D^{1,2}_{s+,\psi}\bar{V}(s, x^*_s), \quad P\text{-a.s.}, \tag{3.169}
\]
and
\[
E\left[\int_t^T \big[p^*(s) + H(s, x^*_s, q^*(s), Q^*(s), u^*(s))\big]\, ds\right] \ge 0. \tag{3.170}
\]
Then $(x^*(\cdot), u^*(\cdot))$ is an optimal pair for Problem (OCCP).

3.8 Finite-Dimensional HJB Equation

It is clear that the HJBE described in (3.64) is infinite dimensional, in the sense that it is a generalized differential equation involving first- and second-order Fréchet derivatives of a real-valued function defined on the Banach space $C$, as well as the infinitesimal generator $\mathcal{S}$. Explicit solutions of this equation are not well understood in general. In this section, we investigate some special cases of the infinite-dimensional HJBE (3.64) in which only regular partial derivatives are involved and for which explicit solutions can be found. Much of the material presented in this section can be found in Larssen and Risebro [LR03]; however, it can be shown to be a special case of (3.1) and of the general HJBE (3.64) treated in the previous sections.

3.8.1 Special Form of HJB Equation

In (3.1), we consider the one-dimensional case and assume that $m = 1$ and $n = 1$. Let the controlled drift and diffusion $f, g : [0,T] \times C \times U \to \Re$ be defined by
\[
f(t,\phi,u) = b\Big(t,\, \phi(0),\, \int_{-r}^0 e^{\lambda\theta}\phi(\theta)\, d\theta,\, \phi(-r),\, u\Big) \tag{3.171}
\]
and
\[
g(t,\phi,u) = \sigma\Big(t,\, \phi(0),\, \int_{-r}^0 e^{\lambda\theta}\phi(\theta)\, d\theta,\, \phi(-r),\, u\Big) \tag{3.172}
\]
for all $(t,\phi,u) \in [0,T] \times C \times U$, where $b$ and $\sigma$ are real-valued functions defined on $[0,T] \times \Re \times \Re \times \Re \times U$ that satisfy the following two conditions.

s ∈ (t, T ],

(3.173)

with the initial data (t, ψ) ∈ [0, T ] × C[−r, 0], where  0 eλθ x(s + θ) dθ (λ > 0 is a given constant) y(s) = −r

represents a weighted (by the factor eλ· ) sliding average of x(·) over the time interval [s − r, s], and z(s) = x(s − r) represents the discrete delay of the state process x(·). The objective of the control problem is to maximize among U[t, T ] the following expected performance index:   T l(s, x(s), y(s), u(s)) ds + h(x(T ), y(T )) , (3.174) J(t, ψ; u(·)) = E t

where l : [0, T ]×××U →  and h : × →  are the instantaneous reward and the terminal reward functions, respectively, that satisfy the following assumptions: ¯ > 0 and k ≥ 1 such that Assumption 3.8.2 There exist constants K, K |l(t, x, y, u)| + |h(x, y)| ≤ K(1 + |x| + |y|)k and ¯ |l(t, x, y, u) − l(t, x ¯, y¯, u)| + |h(x, y) − h(¯ x, y¯)| ≤ K(|x −x ¯| + |y − y¯|), for all (t, x, y, u), (t, x ¯, y¯, u) ∈ [0, T ] ×  ×  × U . We again define the value function V : [0, T ]×C[−r, 0] →  for the optimal control problem (3.173) and (3.174) is defined by V (t, ψ) =

sup u(·)∈U[t,T ]

J(t, ψ; u(·)).

(3.175)
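The controlled system (3.173)-(3.174) can be explored numerically. The sketch below (the coefficients b, σ, l, h, the constant control, and all numerical values are illustrative assumptions, not taken from the text) advances (3.173) by an Euler–Maruyama scheme that carries the delay segment as a sliding window — y(s) is a Riemann sum for the weighted sliding average and z(s) is read off the back of the window — and estimates the performance index J of (3.174) by Monte Carlo for a fixed constant control.

```python
import numpy as np

# Illustrative coefficients for (3.173)-(3.174); these are assumptions,
# not taken from the text.
r, lam, T = 1.0, 0.5, 2.0           # delay r, weight lambda, horizon T
dt = 0.01
n_hist = int(r / dt)                # grid points in the segment [-r, 0]
theta = np.linspace(-r, 0.0, n_hist + 1)
weights = np.exp(lam * theta)       # e^{lambda*theta} on the grid

def b(t, x, y, z, u):     return -0.5 * x + 0.2 * y + 0.1 * z + u
def sigma(t, x, y, z, u): return 0.3 * x
def l(t, x, y, u):        return -x ** 2 - 0.1 * u ** 2
def h(x, y):              return -x ** 2

rng = np.random.default_rng(0)

def simulate_J(psi, u_const, n_paths=200):
    """Monte Carlo estimate of J(0, psi; u) for a constant control u."""
    n_steps = int(T / dt)
    total = 0.0
    for _ in range(n_paths):
        path = list(psi)                            # x on [-r, 0], oldest first
        reward = 0.0
        for k in range(n_steps):
            seg = np.asarray(path[-(n_hist + 1):])  # the segment x_s
            x, z = seg[-1], seg[0]                  # x(s), x(s - r)
            y = float(np.sum(weights * seg) * dt)   # Riemann sum for y(s)
            reward += l(k * dt, x, y, u_const) * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            path.append(x + b(k * dt, x, y, z, u_const) * dt
                          + sigma(k * dt, x, y, z, u_const) * dW)
        seg = np.asarray(path[-(n_hist + 1):])
        reward += h(seg[-1], float(np.sum(weights * seg) * dt))
        total += reward
    return total / n_paths

psi0 = np.ones(n_hist + 1)                          # constant initial segment
print(simulate_J(psi0, u_const=0.0))
```

For the negative running and terminal rewards chosen here, the estimate is a negative number; comparing estimates across several constant controls gives a crude lower envelope of the value function (3.175).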

194

3 Optimal Classical Control

As described in Problem (OCCP), the value function V may depend on the initial datum (t, ψ) ∈ [0, T] × C[−r, 0] in a very general and complicated way. In this section, we will show that for a certain class of systems of the form (3.173), the value function depends on the initial function only through the functionals x ≡ ψ(0) and y ≡ ∫_{−r}^{0} e^{λθ} ψ(θ) dθ. Let us therefore assume, with a little abuse of notation, that the value function takes the following form:

V(t, ψ) = Φ(t, ψ(0), ∫_{−r}^{0} e^{λθ} ψ(θ) dθ) = Φ(t, x, y),   (3.176)

where Φ : [0, T] × ℝ × ℝ → ℝ. Then the DPP (Theorem 3.3.9) takes the form

Φ(t, x, y) = sup_{u(·)∈U[t,T]} E[ ∫_{t}^{τ} e^{−α(s−t)} l(s, x(s), y(s), u(s)) ds + e^{−α(τ−t)} Φ(τ, x(τ), y(τ)) ]   (3.177)

for all F-stopping times τ ∈ T_t^T and initial data (t, ψ(0), ∫_{−r}^{0} e^{λθ} ψ(θ) dθ) ≡ (t, x, y) ∈ [0, T] × ℝ².

Lemma 3.8.3 (The Itô Formula) If Φ ∈ C^{1,2,1}([0, T] × ℝ × ℝ), then we have the following Itô formula:

dΦ(s, x(s), y(s)) = LᵘΦ(s, x(s), y(s)) ds + ∂_xΦ(s, x(s), y(s)) σ(s, x(s), y(s), z(s), u(s)) dW(s),   (3.178)

where Lᵘ is the differential operator defined by

LᵘΦ(t, x, y) = b(t, x, y, z, u) ∂_xΦ(t, x, y) + (1/2) σ²(t, x, y, z, u) ∂_x²Φ(t, x, y) + (x − e^{−λr} z − λy) ∂_yΦ(t, x, y).   (3.179)

Proof. Note that if φ ∈ C[t − r, T], then φ_s ∈ C[−r, 0] for each s ∈ [t, T]. Since

y(x_s) = ∫_{−r}^{0} e^{λθ} x(s + θ) dθ   (λ constant),

we have

∂_s y(x_s) = ∂_s ∫_{−r}^{0} e^{λθ} x(s + θ) dθ = ∂_s ∫_{s−r}^{s} e^{λ(t−s)} x(t) dt = x(s) − e^{−λr} x(s − r) − λ ∫_{s−r}^{s} e^{λ(t−s)} x(t) dt.

The result follows from the classical Itô formula (see Theorem 1.2.15). □

We have the following special form of the HJBE (3.64).
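The differentiation step in the proof can be checked numerically. The following sketch (the smooth test path x(t) = sin t and all numerical values are made-up choices) compares a central difference of y(x_s) with the claimed right-hand side x(s) − e^{−λr} x(s − r) − λ y(x_s).

```python
import numpy as np

# Check the identity d/ds y(x_s) = x(s) - e^{-lam*r} x(s-r) - lam * y(x_s)
# for the smooth test path x(t) = sin t (an arbitrary made-up choice).
r, lam = 1.0, 0.7

def x(t):
    return np.sin(t)

def y(s, n=4000):
    """Trapezoid approximation of int_{-r}^0 e^{lam*theta} x(s+theta) dtheta."""
    theta = np.linspace(-r, 0.0, n + 1)
    vals = np.exp(lam * theta) * x(s + theta)
    return float(np.sum((vals[:-1] + vals[1:]) * 0.5) * (r / n))

s0, eps = 2.0, 1e-5
lhs = (y(s0 + eps) - y(s0 - eps)) / (2 * eps)        # central difference in s
rhs = x(s0) - np.exp(-lam * r) * x(s0 - r) - lam * y(s0)
print(lhs, rhs)
```

The two numbers agree to the accuracy of the quadrature and difference scheme, which is what the Leibniz-rule computation in the proof asserts.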


Theorem 3.8.4 If we assume that (3.176) holds and that Φ ∈ C^{1,2,1}([0, T] × ℝ × ℝ), then Φ solves the following HJBE:

αΦ(t, x, y) − ∂_tΦ(t, x, y) − max_{u∈U} [LᵘΦ(t, x, y) + l(t, x, y, u)] = 0,   (3.180)

∀(t, x, y) ∈ [0, T] × ℝ × ℝ, with the terminal condition Φ(T, x, y) = h(x, y), where Lᵘ is the differential operator defined by (3.179). Note that (3.180) is also equivalent to the following:

αΦ(t, x, y) − ∂_tΦ(t, x, y) + min_{u∈U} [−LᵘΦ(t, x, y) − l(t, x, y, u)] = 0,   (3.181)

∀(t, x, y) ∈ [0, T] × ℝ × ℝ.

Proof. It is clear that Φ : [0, T] × ℝ × ℝ → ℝ defined above is a quasi-tame function. From Itô's formula (Lemma 3.8.3), we have

dΦ(s, x(s), y(s)) = LᵘΦ(s, x(s), y(s)) ds + σ(s, x(s), y(s), z(s), u(s)) ∂_xΦ(s, x(s), y(s)) dW(s),   (3.182)

where the differential operator Lᵘ is as defined in (3.179). We use the DPP (Theorem 3.3.9) and proceed exactly as in Subsection 3.4.1 to obtain (3.180). □

3.8.2 Finite Dimensionality of HJB Equation

Theorem 3.8.4 indicates that, under the assumption that the value function takes the form Φ ∈ C^{1,2,1}([0, T] × ℝ × ℝ), we have a finite-dimensional HJBE (3.180) in the sense that it involves only ordinary partial derivatives such as ∂_xΦ, ∂_yΦ, and ∂_x²Φ instead of the Fréchet derivatives and the S-operator required in (3.64). The question that remains to be answered is under what conditions we can have the finite-dimensional HJBE (3.180). This question will be answered in this subsection. We consider the following one-dimensional controlled SHDE:

dx(s) = [μ(x(s), y(s)) + β(x(s), y(s)) z(s) − g(s, x(s), y(s), u(s))] ds + σ(x(s), y(s)) dW(s),   s ∈ (t, T],   (3.183)

with the initial datum (t, ψ) ∈ [0, T] × C([−r, 0]; ℝ), where, again, y(s) = ∫_{−r}^{0} e^{λθ} x(s + θ) dθ and z(s) = x(s − r) are as described in the previous subsection, and μ, β, σ : ℝ × ℝ → ℝ and g : [0, T] × ℝ × ℝ × U → ℝ are given deterministic functions. Assume the discount rate α = 0 for simplicity. It will be shown in this subsection that the HJBE has a solution depending only on (t, x, y) provided that an auxiliary system of four first-order partial differential equations (PDEs) involving μ, β, g, σ, l, and h has a solution. When this is the case, the HJBE (3.180) reduces to an "effective" equation in only one spatial variable in addition to time. For this model, the HJBE (3.180) takes the form

−∂_tΦ + min_{u∈U} F − (x − e^{−λr} z − λy) ∂_yΦ = 0,   ∀z ∈ ℝ,   (3.184)

where

F = F(t, x, y, z, u, ∂_xΦ, ∂_yΦ, ∂_x²Φ) = −LᵘΦ(t, x, y) − l(t, x, y, u).   (3.185)

Let F* = inf_{u∈U} F. Then

−∂_tΦ + F* − (x − e^{−λr} z − λy) ∂_yΦ = 0,   ∀z ∈ ℝ.   (3.186)

Since this holds for all z, we must have ∂_z(F* − (x − e^{−λr} z − λy) ∂_yΦ) = 0. Now, ∂_uF = 0 at u = u* since u* ∈ U is a minimizer of the function F, so that ∂_zF* = (∂_zF)|_{u=u*}. With ∂_z(x − e^{−λr} z − λy) = −e^{−λr}, this leads to ∂_zF* + e^{−λr} ∂_yΦ = 0, or ∂_yΦ = −e^{λr} ∂_zF*, which we insert into (3.186) to obtain

−∂_tΦ + F* + (x − e^{−λr} z − λy) e^{λr} ∂_zF* = 0.   (3.187)

Here, F* + (x − e^{−λr} z − λy) e^{λr} ∂_zF* should not depend on z. In the following, let H and G denote generic functions that may depend on t, x, y, u*, ∂_xΦ, ∂_yΦ, and ∂_x²Φ but not on z. (H and G may change from line to line in a calculation.) With ξ ≡ x − e^{−λr} z − λy, so that ∂_zξ = −e^{−λr}, the following are equivalent:

F* + e^{λr} ξ ∂_zF* = H,
e^{−λr} F* + ξ ∂_zF* = H,
∂_z(F*/ξ) = (ξ ∂_zF* + e^{−λr} F*)/ξ² = H/ξ².

Integrating this yields

F*/ξ = ∫ (H/ξ²) dz = −H e^{λr} ∫ dξ/ξ² = H/ξ + G,

so that F* = H + Gξ, which implies that F* is linear in z; that is,


F* = H + Gz, where H and G are functions that do not depend on z. Motivated by the above reasoning, we investigate more closely a modified version of (3.173) and consider

dx(s) = [μ̄(x(s), y(s), z(s)) − g(s, x(s), y(s), u(s))] ds + σ̄(x(s), y(s), z(s)) dW(s),   s ∈ (t, T],   (3.188)

with the initial datum (t, ψ) ∈ [0, T] × C[−r, 0]. Recall the performance functional (3.174),

J(t, ψ; u(·)) = E[ ∫_{t}^{T} l(s, x(s), y(s), u(s)) ds + h(x(T), y(T)) ],   (3.189)

and the value function Φ : [0, T] × C[−r, 0] → ℝ defined by (3.176),

Φ(t, ψ) = Φ(t, ψ(0), ∫_{−r}^{0} e^{λθ} ψ(θ) dθ) = Φ(t, x, y).   (3.190)

It is known that if Φ = Φ(t, x, y), then Φ satisfies the HJBE

−∂_tΦ − μ̄ ∂_xΦ − (1/2) σ̄² ∂_x²Φ − (x − e^{−λr} z − λy) ∂_yΦ + F(∂_xΦ, x, y, t) = 0,   (3.191)

with the terminal condition

Φ(T, x, y) = h(x, y),   (3.192)

where

F(t, x, y, p) = inf_{u∈U} {g(t, x, y, u) p − l(t, x, y, u)}.   (3.193)

We wish to obtain conditions on μ̄, σ̄, and F that ensure that (3.191) has a solution independent of z. Differentiating (3.191) with respect to z, we obtain

∂_yΦ − e^{λr} ∂_zμ̄ ∂_xΦ = e^{λr} ∂_zγ̄ ∂_x²Φ,   (3.194)

where γ̄ = σ̄²/2. Inserting this into (3.191), this equation now takes the form

−∂_tΦ − [μ̄ − (z − e^{λr}(x − λy)) ∂_zμ̄] ∂_xΦ − [γ̄ − (z − e^{λr}(x − λy)) ∂_zγ̄] ∂_x²Φ + F(∂_xΦ, x, y, t) = 0.

If Φ is to be independent of z, then the coefficients of ∂_xΦ and ∂_x²Φ must be independent of z. By arguments analogous to the previous ones, we see that

μ̄(x, y, z) = μ(x, y) + β(x, y) z   and   γ̄(x, y, z) = γ(x, y) + ζ(x, y) z


for some functions μ, β, γ, and ζ depending on x and y only. Now, since γ̄ ≥ 0 for all (x, y, z), we must have ζ = 0 and, consequently, ∂_zγ̄ = 0. Also note that ∂_zμ̄ = β, so that (3.194) takes the form

∂_yΦ − e^{λr} β(x, y) ∂_xΦ = 0.   (3.195)

Using this in (3.191), we see that this equation now reads

−∂_tΦ − [μ(x, y) + e^{λr}(x − λy) β(x, y)] ∂_xΦ − (1/2) σ²(x, y) ∂_x²Φ + F(∂_xΦ, x, y, t) = 0.   (3.196)

Now, we introduce new variables x̃ and ỹ such that

∂/∂ỹ = ∂/∂y − e^{λr} β(x, y) ∂/∂x   and   ∂/∂x̃ = ∂/∂x.   (3.197)

Then (3.195) states that ∂_ỹΦ = 0. In order to be compatible with ∂_ỹΦ = 0, the coefficients of (3.196) and the function h must also be constant in ỹ, or

∂_yμ̂ − e^{λr} β ∂_xμ̂ = 0,   (3.198)

∂_yσ − e^{λr} β ∂_xσ = 0,   (3.199)

e^{λr} p ∂_pF ∂_xβ + ∂_yF − e^{λr} β ∂_xF = 0,   (3.200)

∂_yh − e^{λr} β ∂_xh = 0,   (3.201)

where μ̂(x, y) = μ(x, y) + e^{λr}(x − λy) β(x, y). To see why ∂_ỹF = 0 is equivalent to (3.200), note that

∂_ỹF = ∂_pF ∂_y(∂_xΦ) + ∂_yF − e^{λr} β (∂_pF ∂_x(∂_xΦ) + ∂_xF)
     = ∂_pF (∂²_{yx}Φ − e^{λr} β ∂_x²Φ) + ∂_yF − e^{λr} β ∂_xF
     = ∂_pF [∂_x(∂_yΦ − e^{λr} β ∂_xΦ) + e^{λr} ∂_xβ ∂_xΦ] + ∂_yF − e^{λr} β ∂_xF
     = e^{λr} p ∂_pF ∂_xβ + ∂_yF − e^{λr} β ∂_xF   by (3.195).   (3.202)

Conversely, if μ̄ = μ(x, y) + β(x, y) z and σ̄ = σ(x, y), and (3.198)-(3.200) hold, then we can find a solution of (3.191) that is independent of z. We collect this in the following theorem.

Theorem 3.8.5 The HJBE (3.191) with terminal condition Φ(T, x, y) = h(x, y) has a viscosity solution Φ = Φ(t, x, y) if and only if μ̄ = μ(x, y) + β(x, y) z and σ̄ = σ(x, y), and (3.198)-(3.201) hold. In this case, in the coordinates given by (3.197), the HJBE (3.191) reads

−∂_tΦ − μ̂(x̃) ∂_x̃Φ − (1/2) σ²(x̃) ∂_x̃²Φ + F(∂_x̃Φ, x̃, t) = 0   (3.203)

with the terminal condition

Φ(T, x̃) = h(x̃).   (3.204)
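The content of the coordinate change (3.197) can be illustrated numerically: when β(x, y) ≡ c is constant, the combined variable x̃ = x + c e^{λr} y is annihilated by ∂/∂ỹ, so any function of x̃ alone satisfies a condition of the form (3.198)-(3.201). The sketch below (the profile H and all constants are made-up illustrative choices) verifies (3.201) for h(x, y) = H(x + c e^{λr} y) by finite differences.

```python
import numpy as np

# With beta(x, y) = c constant, any h of the form h(x, y) = H(x + c e^{lam*r} y)
# should be annihilated by the operator d/dy - e^{lam*r} beta d/dx of (3.197).
# H and all constants below are made-up illustrative choices.
lam, r, c = 0.4, 1.0, 0.8
beta = c
H = np.tanh

def h(x, y):
    return H(x + c * np.exp(lam * r) * y)

x0, y0, eps = 0.3, -0.2, 1e-6
dy = (h(x0, y0 + eps) - h(x0, y0 - eps)) / (2 * eps)
dx = (h(x0 + eps, y0) - h(x0 - eps, y0)) / (2 * eps)
residual = dy - np.exp(lam * r) * beta * dx          # should vanish, cf. (3.201)
print(residual)
```

The residual is zero up to finite-difference error, and the same check applies verbatim to μ̂, σ, and l₁ in the examples below.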


3.8.3 Examples

In this subsection we present two examples that satisfy the requirements (3.198)-(3.201), and also indicate why it is difficult to find more general examples that are completely solvable.

Example 1 (Harvesting with Exponential Growth). Assume that the size x(·) of a population obeys the linear SHDE

dx(s) = (a x(s) + b y(s) + c z(s) − u(s)) ds + (σ1 x(s) + σ2 y(s)) dW(s),   s ∈ [t, T],   (3.205)

with the initial datum (t, ψ) ∈ [0, T] × C[−r, 0]. We assume that x(s) > 0. The population is harvested at a rate u(s) ≥ 0, and we are given the performance functional

J(t, ψ; u(·)) = E^{t,ψ,u(·)}[ ∫_{t}^{T} {l1(x(s), y(s)) + l2(u(s))} ds + h(x(T), y(T)) ],   (3.206)

where T is the stopping time defined by

T = T1 ∧ inf{s > t : x(s; t, ψ, u(·)) = 0}   (3.207)

and T1 > t is some finite deterministic time. If the value function Φ takes the form Φ(t, x, y), then Φ satisfies the HJBE

−∂_sΦ − (x − e^{−λr} z − λy) ∂_yΦ − (1/2)(σ1 x + σ2 y)² ∂_x²Φ + inf_{u} {−(a x + b y + c z − u) ∂_xΦ − l1(x, y) − l2(u)} = 0.   (3.208)

Using Theorem 3.8.5, from (3.198) and (3.199) we find that the parameters must satisfy the relations

σ2 = σ1 c e^{λr},   b − λ c e^{λr} = c e^{λr}(a + c e^{λr}).   (3.209)

The function F defined in (3.193) now has the form

F(p, x, y) = inf_{u∈U} {p u − l2(u)} − l1(x, y) = p u* − l2(u*) − l1(x, y),

where u* is the minimizer in U. Then from (3.200) we see that we must have

∂_y l1 − c e^{λr} ∂_x l1 = 0,   (3.210)

or l1 = l1(x + c e^{λr} y). Introducing the variable x̃ = x + c e^{λr} y and the constant κ = a + c e^{λr}, we find that the "effective" equations (3.203) and (3.204) in this case will be


−∂_sΦ − (κ x̃ − u*) ∂_x̃Φ − (1/2) σ1² x̃² ∂_x̃²Φ − l1(x̃) − l2(u*) = 0,   (3.211)

with the terminal condition

Φ(T, x̃) = h(x̃),   (3.212)

assuming h satisfies (3.201). This corresponds to the control problem without delay with system dynamics

dx̃(s) = (κ x̃(s) − u) ds + σ1 x̃(s) dW(s),   s ∈ (t, T],

and x̃(t) = x̃ ≥ 0. To close the discussion of this example, let us be specific and choose

l1(x, y) = −c0 |x + c e^{λr} y − m|,   l2(u) = c1 u − c2 u²,   h = 0,   (3.213)

where c0, c1, c2, and m are positive constants. Then (3.210) and (3.201) hold and

F(p, x, y) = inf_{u∈U} {c2 u² − (c1 − p) u} + c0 |x + c e^{λr} y − m|.

We solve for u and find that the optimal harvesting rate is given by

u* = max( (c1 − ∂_xΦ)/(2 c2), 0 ).   (3.214)

Inserting (3.213) and (3.214) into the HJBE (3.211)-(3.212), the resulting equation is a second-order PDE that may be solved numerically, and the optimal control can be found provided that the solution of the HJBE really is the value function.

Example 2 (Resource Allocation). Let x(·) = {x(s), s ∈ [t − r, T]} denote a population developing according to (3.205). One can think of x(·) as a wild population that can be caught and bred in captivity and then harvested. The population in captivity, x̂(·), develops according to

dx̂(s) = (γ x̂(s) + u(s) − v(s)) ds,   s ∈ (t, T],   (3.215)

with x̂(t) = x̂ ≥ 0, where v denotes the harvesting rate. The state and control processes for the control problem are (x(·), x̂(·)) and (u(·), v(·)), respectively. For this case, we consider the gain functional

J(t, ψ, x̂; u(·), v(·)) = E^{t,ψ,x̂;u(·),v(·)}[ ∫_{t}^{T} (l(v(s)) − c1 x̂(s) − c2 u²(s)) ds + h(x(T), y(T), x̂(T)) ],   (3.216)

where T is again given by (3.207), l(v) denotes the utility from consumption or sale of the animals, c1 x̂ models the cost of keeping the population, and c2 u² models the cost of catch and transfer. Setting


V(t, ψ, x̂) = sup_{u≥0, v} J(t, ψ, x̂; u(·), v(·)),

we find that if V takes the special form V = Φ(t, x, y, x̂), then

−∂_tΦ − (a x + b y + c z) ∂_xΦ − γ x̂ ∂_x̂Φ − (1/2)(σ1 x + σ2 y + σ3 z)² ∂_x²Φ − (x − e^{−λr} z − λy) ∂_yΦ + c1 x̂ + F(∂_xΦ, ∂_x̂Φ) = 0   (3.217)

and Φ(T, x, y, x̂) = h(x, y, x̂), where

F(p, q) = inf_{u≥0, v≤v_max} (c2 u² − u(q − p) + v q − l(v)).

If Φ is to be independent of z, we must demand that the parameters satisfy (3.209), and we introduce x̃ as before to find that Φ = Φ(t, x̃, x̂) satisfies

−∂_tΦ − κ x̃ ∂_x̃Φ − γ x̂ ∂_x̂Φ − (1/2) σ1² x̃² ∂_x̃²Φ + c1 x̂ + F(∂_x̃Φ, ∂_x̂Φ) = 0   for t < T,   (3.218)

and Φ(T, x̃, x̂) = h(x̃, x̂). Again, the above PDE with the terminal condition can be solved numerically.
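The remark that the effective equation can be solved numerically can be made concrete. The following sketch (the grid sizes, coefficient values, and the crude Neumann boundary treatment are all illustrative assumptions, not from the text) marches the effective HJBE (3.211) of Example 1 with the choices (3.213) backward in time by an explicit finite-difference scheme, recomputing the optimal harvesting rate (3.214) from the current gradient at each step.

```python
import numpy as np

# Explicit finite-difference sketch for (3.211)-(3.212) with the choices
# (3.213); every numerical value here is a made-up illustration.
kappa, sigma1 = 0.1, 0.3
c0, c1, c2, m = 1.0, 0.5, 1.0, 1.0
horizon = 0.5
nx, xmax = 81, 2.0
xg = np.linspace(0.0, xmax, nx)
dx = xg[1] - xg[0]
dt = 0.2 * dx ** 2 / (sigma1 ** 2 * xmax ** 2)       # explicit-scheme stability
nt = int(horizon / dt)

Phi = np.zeros(nx)                                   # terminal condition h = 0
l1 = -c0 * np.abs(xg - m)                            # running reward l1(x~)
for _ in range(nt):                                  # march from s = T back to t
    dPhi = np.gradient(Phi, dx)
    d2Phi = np.zeros(nx)
    d2Phi[1:-1] = (Phi[2:] - 2.0 * Phi[1:-1] + Phi[:-2]) / dx ** 2
    u = np.maximum((c1 - dPhi) / (2.0 * c2), 0.0)    # optimal rate (3.214)
    l2 = c1 * u - c2 * u ** 2
    rhs = (kappa * xg - u) * dPhi + 0.5 * sigma1 ** 2 * xg ** 2 * d2Phi + l1 + l2
    Phi = Phi + dt * rhs                             # Phi(s - dt) = Phi(s) + dt * rhs
    Phi[0], Phi[-1] = Phi[1], Phi[-2]                # crude Neumann boundaries

print(Phi[nx // 2])                                  # approximate value near x~ = m
```

A production solver would use an implicit or monotone upwind scheme and handle the absorbing boundary of (3.207) properly; the point here is only that, after the reduction of Theorem 3.8.5, the problem is a one-dimensional PDE of completely standard type.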

3.9 Conclusions and Remarks

This chapter develops the infinite-dimensional HJBE for the value function of the discounted optimal classical control problem over a finite time horizon. The HJBE involves extensions of first- and second-order Fréchet derivatives as well as the shift operator, which are unique to controlled SHDEs. This distinguishes them from all other infinite-dimensional stochastic control problems, such as the ones arising from stochastic partial differential equations. The main theme of this chapter is to show, under very reasonable assumptions, that the value function is the unique viscosity solution of the HJBE. Existence of an optimal control as well as special cases that lead to a finite-dimensional HJBE are demonstrated. There is no attempt to treat ergodic controls and/or the combined classical-singular control problem. However, a combined classical-impulse control problem arising from a hereditary portfolio optimization problem is treated in detail in Chapter 7.

4 Optimal Stopping

Optimal stopping problems over a finite or an infinite time horizon for Itô diffusion processes described by stochastic differential equations (SDEs) arise in many areas of science, engineering, and finance (see, e.g., Fleming and Soner [FS93], Øksendal [Øks00], Shiryaev [Shi78], Karatzas and Shreve [KS91], and references contained therein). The objective of the problem is to find a stopping time τ with respect to the filtration generated by the solution process of the SDE that maximizes or minimizes a certain expected reward or cost functional. The value functions of these problems are normally expressed as viscosity or generalized solutions of a Hamilton-Jacobi-Bellman (HJB) variational inequality that involves a second-order parabolic or elliptic partial differential equation in a finite-dimensional Euclidean space. In an attempt to achieve better accuracy and to account for the delayed effect of the state variables in the modeling of real-world problems, optimal stopping of the following stochastic delay differential equation:

dx(s) = b(s, x(s), x(s − r), ∫_{−r}^{0} e^{λθ} x(s + θ) dθ) ds + σ(s, x(s), x(s − r), ∫_{−r}^{0} e^{λθ} x(s + θ) dθ) dW(s),   s ∈ [t, T],

has been the subject of study in the unpublished dissertation of Elsanousi [Els00]. In the above equation, b and σ are ℝⁿ- and ℝ^{n×m}-valued functions, respectively, defined on [0, T] × ℝⁿ × ℝⁿ × ℝⁿ, and λ > 0 is a given constant. This chapter substantially extends the results obtained for the finite-dimensional diffusion processes and stochastic delay differential equations described above and investigates an optimal stopping problem over a finite time horizon [t, T] for a general system of stochastic hereditary differential equations


(SHDEs) with a bounded memory described in (4.1), where T > 0 and t ∈ [0, T] denote, respectively, the terminal time and an initial time of the optimal stopping problem. The consideration of such a system enables us to model many real-world problems that have aftereffects. This chapter develops its existence theory via a construction of the least superharmonic majorant of the terminal reward functional. It also derives an infinite-dimensional HJB variational inequality (HJBVI) for the value function via a dynamic programming principle (see, e.g., Theorem 3.3.9 in Chapter 3). In the context of the infinite-dimensional HJBVI, it is shown that the value function for the optimal stopping problem is the unique viscosity solution of the HJBVI. The proof of uniqueness is very similar to those in Section 3.6 of Chapter 3 and involves embedding the function space C = C([−r, 0]; ℝⁿ) into the Hilbert space L²([−r, 0]; ℝⁿ) and extending the concept of viscosity solution for the controlled Itô diffusion process (see, e.g., Fleming and Soner [FS93]) to an infinite-dimensional setting. As an application of the results obtained, a pricing problem is considered in Chapter 6 for American options in a financial market with one riskless bank account that grows according to a deterministic linear functional differential equation and one stock whose price dynamics follow a nonlinear stochastic functional differential equation. There, it is shown that the option pricing problem can be formulated as an optimal stopping problem of the type considered in this chapter, and therefore all results obtained are applicable under very realistic assumptions.

This chapter is organized as follows. The basic assumptions and preliminary results that are needed for formulating the optimal stopping problem, as well as the problem statement, are contained in Section 4.1. In Section 4.2, the existence and uniqueness of the optimal stopping time are developed via the concepts of excessive and superharmonic functions and a construction of the least superharmonic majorant of the terminal reward functional. In Section 4.3, the HJBVI for the value function is heuristically derived using a Bellman-type dynamic programming principle. In Section 4.4, the verification theorem is proved. Although continuous, the value function is not known to be smooth enough to be a classical solution of the HJBVI in general. It is shown in Section 4.5, however, that the value function is a viscosity solution of the HJBVI. The proof of the comparison principle, as well as the necessary lemmas, are given in Section 4.6. The result that the value function is the unique viscosity solution of the HJBVI follows immediately from the comparison principle.

4.1 The Optimal Stopping Problem

Let L²(Ω, C) be the space of C-valued random variables Ξ : Ω → C such that

‖Ξ‖_{L²} ≡ (E[‖Ξ‖²])^{1/2} ≡ (∫_Ω ‖Ξ(ω)‖² dP(ω))^{1/2} < ∞.


Let L²(Ω, C; F(t)) be the space of those Ξ ∈ L²(Ω, C) that are F(t)-measurable. Consider the uncontrolled SHDE described by

dx(s) = f(s, x_s) ds + g(s, x_s) dW(s),   s ∈ [t, T].   (4.1)

When there is no delay (i.e., r = 0), it is clear that SHDE (4.1) reduces to the Itô diffusion process described by

dx(s) = f(s, x(s)) ds + g(s, x(s)) dW(s),   s ∈ [t, T].

It is clear that (4.1) also includes the following stochastic delay differential equation (SDDE):

dx(s) = b(s, x(s), x(s − r), ∫_{−r}^{0} e^{λθ} x(s + θ) dθ) ds + σ(s, x(s), x(s − r), ∫_{−r}^{0} e^{λθ} x(s + θ) dθ) dW(s),   s ∈ [t, T],

as a special case, as well as many other equations that cannot be modeled in the SDDE form above. We repeat the definition of the strong solution of (4.1) as follows.

Definition 4.1.1 A process {x(s; t, ψ_t), s ∈ [t − r, T]} is said to be a (strong) solution of (4.1) on the interval [t − r, T] and through the initial datum (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)) if it satisfies the following conditions:
1. x_t(·; t, ψ_t) = ψ_t.
2. x(s; t, ψ_t) is F(s)-measurable for each s ∈ [t, T].
3. The process {x(s; t, ψ_t), s ∈ [t, T]} is continuous and satisfies the following stochastic integral equation P-a.s.:

x(s) = ψ_t(0) + ∫_{t}^{s} f(λ, x_λ) dλ + ∫_{t}^{s} g(λ, x_λ) dW(λ),   s ∈ [t, T].   (4.2)

In addition, the (strong) solution process {x(s; t, ψ_t), s ∈ [t − r, T]} of (4.1) is said to be (strongly) unique if, whenever {x̃(s; t, ψ_t), s ∈ [t − r, T]} is also a (strong) solution of (4.1) on [t − r, T] and through the same initial datum (t, ψ_t),

P{x(s; t, ψ_t) = x̃(s; t, ψ_t), ∀s ∈ [t, T]} = 1.

Throughout, we assume that the functions f : [0, T] × C → ℝⁿ and g : [0, T] × C → ℝ^{n×m} are continuous and satisfy the following conditions, which ensure the existence and uniqueness of a (strong) solution x(·) = {x(s; t, ψ_t), s ∈ [t − r, T]} for each (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)). (See Theorem 1.3.12 in Chapter 1.)


Assumption 4.1.2
1. (Lipschitz Continuity) There exists a constant K_lip > 0 such that

|f(t, ϕ) − f(s, φ)| + |g(t, ϕ) − g(s, φ)| ≤ K_lip(√(|t − s|) + ‖ϕ − φ‖),   ∀(t, ϕ), (s, φ) ∈ [0, T] × C.

2. (Linear Growth) There exists a constant K_grow > 0 such that

|f(t, φ)| + |g(t, φ)| ≤ K_grow(1 + ‖φ‖),   ∀(t, φ) ∈ [0, T] × C.

Let {x(s; t, ψ_t), s ∈ [t, T]} be the solution of (4.1) through the initial datum (t, ψ_t) ∈ [0, T] × L²(Ω, C; F(t)). We consider the corresponding C-valued process {x_s(t, ψ_t), s ∈ [t, T]} defined by x_s(θ; t, ψ_t) ≡ x(s + θ; t, ψ_t), θ ∈ [−r, 0]. For each t ∈ [0, T], let G(t) = {G(t, s), s ∈ [t, T]} be the filtration defined by

G(t, s) = σ(x(λ; t, ψ_t), t ≤ λ ≤ s).

Note that it can be shown that, for each s ∈ [t, T], G(t, s) = σ(x_λ(t, ψ_t), t ≤ λ ≤ s), due to the sample-path continuity of the process x(·) = {x(s; t, ψ_t), s ∈ [t, T]}. One can then establish the following Markov property (see Theorem 1.5.2 in Chapter 1).

Theorem 4.1.3 Let Assumption 4.1.2 hold. Then the corresponding C-valued solution process of (4.1) is a C-valued Markov process in the following sense: for any (t, ψ_t) ∈ [0, T] × L²(Ω, C), the Markovian property

P{x_s(t, ψ_t) ∈ B | G(t, t̃)} = P{x_s(t, ψ_t) ∈ B | x_t̃(t, ψ_t)} ≡ p(t̃, x_t̃(t, ψ_t), s, B)

holds a.s. for t ≤ t̃ ≤ s and B ∈ B(C), where B(C) is the Borel σ-algebra of subsets of C. In the above, the function p : [t, T] × C × [0, T] × B(C) → [0, 1] denotes the transition probabilities of the C-valued Markov process {x_s(t, ψ_t), s ∈ [t, T]}.

A random function τ : Ω → [0, ∞] is said to be a G(t)-stopping time if

{τ ≤ s} ∈ G(t, s),   ∀s ≥ t.

Let T_t^T(G) (or simply T_t^T) be the collection of all G(t)-stopping times τ such that t ≤ τ ≤ T a.s. For each τ ∈ T_t^T, let the sub-σ-algebra G(t, τ) of F be defined by

G(t, τ) = {A ∈ F | A ∩ {t ≤ τ ≤ s} ∈ G(t, s), ∀s ∈ [t, T]}.


The collection G(t, τ) can be interpreted as the collection of events that are measurable up to the stopping time τ ∈ T_t^T. With a little more effort, one can also show that the corresponding C-valued solution process {x_s(t, ψ_t), s ∈ [t, T]} of (4.1) is a strong Markov process in C; that is,

P{x_s(t, ψ_t) ∈ B | G(t, τ)} = P{x_s(t, ψ_t) ∈ B | x_τ(t, ψ_t)} ≡ p(τ, x_τ(t, ψ_t), s, B)

holds a.s. for all τ ∈ T_t^T, all deterministic s ∈ [τ, T], and B ∈ B(C).

If the drift f and the diffusion coefficient g are time independent (i.e., f(t, φ) ≡ f(φ) and g(t, φ) ≡ g(φ)), then (4.1) reduces to the following autonomous system:

dx(t) = f(x_t) dt + g(x_t) dW(t).   (4.3)

In this case, we usually take the initial datum (t, ψ_t) = (0, ψ) and denote the solution process of (4.3) through (0, ψ) and on the interval [−r, T] by {x(s; ψ), s ∈ [−r, T]}. The corresponding C-valued solution process {x_s(ψ), s ∈ [−r, T]} of (4.3) is then a strong Markov process with time-homogeneous transition probability function

p(ψ, s, B) ≡ p(0, ψ, s, B) = p(t, ψ, t + s, B)

for all s, t ≥ 0, ψ ∈ C, and B ∈ B(C).

Assume L and Ψ are two ‖·‖₂-Lipschitz-continuous real-valued functions on [0, T] × C with at most polynomial growth in L²([−r, 0]; ℝⁿ). In other words, we assume the following conditions.

Assumption 4.1.4
1. (Lipschitz Continuity) There exists a constant K_lip > 0 such that

|L(t, ψ) − L(s, φ)| + |Ψ(t, ψ) − Ψ(s, φ)| ≤ K_lip(√(|t − s|) + ‖ψ − φ‖₂),   ∀(t, ψ), (s, φ) ∈ [0, T] × C.

2. (Polynomial Growth) There exist constants K_p > 0 and k ≥ 1 such that

|L(t, φ)| + |Ψ(t, φ)| ≤ K_p(1 + ‖φ‖₂)^k,   ∀(t, φ) ∈ [0, T] × C.

3. (Integrability of L) For all initial ψ ∈ C,

E^ψ[ ∫_{t}^{T} e^{−α(λ−t)} |L(λ, x_λ(·; t, ψ))| dλ ] < ∞.

4. (Uniform Integrability of Ψ) The family {Ψ⁻(x_τ); τ ∈ T_t^T} is uniformly integrable w.r.t. P^{t,ψ} (the probability law of x(·) = {x(s; t, ψ), s ∈ [t, T]}), given the initial datum (t, ψ) ∈ [0, T] × C, where Ψ⁻ ≡ max{−Ψ, 0} is the negative part of the function Ψ.

Since the L²-norm ‖·‖₂ is weaker than the sup-norm ‖·‖, that is,

‖φ‖₂ ≤ √r ‖φ‖,   ∀φ ∈ C,

condition 2 of Assumption 4.1.4 implies that there exists a constant K > 0 such that

|L(t, φ)| + |Ψ(t, φ)| ≤ K(1 + ‖φ‖)^k,   ∀(t, φ) ∈ [0, T] × C.
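The inequality ‖φ‖₂ ≤ √r ‖φ‖ follows by bounding the integrand of ‖φ‖₂² by its supremum; a quick numerical confirmation on a sample path (an arbitrary made-up choice):

```python
import numpy as np

# Sample-path check of ||phi||_2 <= sqrt(r) * ||phi|| (phi is an arbitrary choice).
r = 2.0
theta = np.linspace(-r, 0.0, 10001)
phi = np.sin(3.0 * theta) + 0.5 * theta
sup_norm = float(np.max(np.abs(phi)))                # the sup-norm ||phi||
sq = phi ** 2
l2_norm = float(np.sqrt(                             # trapezoid rule for ||phi||_2
    np.sum((sq[:-1] + sq[1:]) * 0.5) * (theta[1] - theta[0])))
print(l2_norm, np.sqrt(r) * sup_norm)                # l2_norm is the smaller
```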

We state the optimal stopping problem (Problem (OSP1)) as follows.

Problem (OSP1). Given the initial datum (t, ψ) ∈ [0, T] × C, our objective is to find an optimal stopping time τ* ∈ T_t^T that maximizes the following expected performance index:

J(t, ψ; τ) ≡ E[ ∫_{t}^{τ} e^{−α(s−t)} L(s, x_s) ds + e^{−α(τ−t)} Ψ(x_τ) ],   (4.4)

where α > 0 denotes a discount factor. In this case, the value function V : [0, T] × C → ℝ is defined to be

V(t, ψ) ≡ sup_{τ∈T_t^T} J(t, ψ; τ).   (4.5)

For the autonomous case (i.e., (4.3))

dx(s) = f(x_s) ds + g(x_s) dW(s),   s ∈ [0, T],

the following optimal stopping problem (Problem (OSP2)) is a special case of Problem (OSP1).

Problem (OSP2). Find an optimal stopping time τ* ∈ T_0^T that maximizes the following discounted objective functional:

J(ψ; τ) ≡ E[ ∫_{0}^{τ} e^{−αs} L(x_s) ds + e^{−ατ} Ψ(x_τ) ].   (4.6)

In this case, the value function V : C → ℝ is defined to be

V(ψ) ≡ sup_{τ∈T_0^T} J(ψ; τ).   (4.7)
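For a fixed, generally suboptimal, stopping rule, the objective functional (4.6) can be estimated by simulation, which gives a lower bound for the value function (4.7). In the sketch below, the coefficients f, g, L, Ψ, the threshold rule τ = T ∧ inf{s : x(s) ≥ κ}, and all numerical values are made-up illustrative choices, not from the text.

```python
import numpy as np

# Monte Carlo estimate of J(psi; tau) in (4.6) for the threshold stopping rule
# tau = T ∧ inf{s : x(s) >= kap}; f, g, L, Psi and all values are illustrative.
r, T, alpha, kap = 1.0, 2.0, 0.05, 1.2
dt = 0.01
n_hist = int(r / dt)

def f(seg):   return 0.3 * (seg[0] - seg[-1])        # drift uses x(s-r) and x(s)
def g(seg):   return 0.25
def L(seg):   return -0.1 * seg[-1] ** 2
def Psi(seg): return seg[-1]

rng = np.random.default_rng(1)

def J_estimate(psi, n_paths=200):
    n_steps = int(T / dt)
    total = 0.0
    for _ in range(n_paths):
        path = list(psi)
        reward, stopped = 0.0, False
        for k in range(n_steps):
            seg = path[-(n_hist + 1):]               # the segment x_s
            if seg[-1] >= kap:                       # threshold hit: collect Psi
                reward += np.exp(-alpha * k * dt) * Psi(seg)
                stopped = True
                break
            reward += np.exp(-alpha * k * dt) * L(seg) * dt
            path.append(seg[-1] + f(seg) * dt
                        + g(seg) * rng.normal(0.0, np.sqrt(dt)))
        if not stopped:                              # never hit: tau = T
            reward += np.exp(-alpha * T) * Psi(path[-(n_hist + 1):])
        total += reward
    return total / n_paths

psi0 = [0.5] * (n_hist + 1)
print(J_estimate(psi0))
```

Maximizing such estimates over a parametric family of rules is a standard heuristic; the existence theory developed below is what guarantees that an exact maximizer τ* exists at all.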

4.2 Existence of Optimal Stopping

In this section we investigate the existence of optimal stopping for Problem (OSP1) via a construction of the least superharmonic majorant of an appropriate terminal reward functional.

4.2.1 The Infinitesimal Generator

We recall the following concepts, which were introduced earlier in Section 2.3 in Chapter 2. Let C* and C† be, respectively, the spaces of bounded linear functionals Φ : C → ℝ and bounded bilinear functionals Φ̃ : C × C → ℝ of the space C. They are equipped with the operator norms, which will be respectively denoted by ‖·‖* and ‖·‖†. Let B = {v1_{0} : v ∈ ℝⁿ}, where 1_{0} : [−r, 0] → ℝ is defined by

1_{0}(θ) = 0 for θ ∈ [−r, 0)   and   1_{0}(θ) = 1 for θ = 0.

We form the direct sum C ⊕ B = {φ + v1_{0} | φ ∈ C, v ∈ ℝⁿ} and equip it with the norm, also denoted by ‖·‖, defined by

‖φ + v1_{0}‖ = sup_{θ∈[−r,0]} |φ(θ)| + |v|,   φ ∈ C, v ∈ ℝⁿ.

Again, let (C ⊕ B)* and (C ⊕ B)† be the spaces of bounded linear and bilinear functionals of C ⊕ B, respectively. The following two results can be found in Lemma 2.2.3 and Lemma 2.2.4 in Chapter 2.

Lemma 4.2.1 If Γ ∈ C*, then Γ has a unique (continuous) linear extension Γ̄ : C ⊕ B → ℝ satisfying the following weak continuity property:
(W1) If {ξ^(k)}_{k=1}^∞ is a bounded sequence in C such that ξ^(k)(θ) → ξ(θ) as k → ∞ for all θ ∈ [−r, 0] for some ξ ∈ C ⊕ B, then Γ(ξ^(k)) → Γ̄(ξ) as k → ∞.
The extension map C* → (C ⊕ B)*, Γ ↦ Γ̄, is a linear isometric injection.

Lemma 4.2.2 If Γ ∈ C†, then Γ has a unique (continuous) extension Γ̄ ∈ (C ⊕ B)† satisfying the following weak continuity property:
(W2) If {ξ^(k)}_{k=1}^∞ and {ζ^(k)}_{k=1}^∞ are bounded sequences in C such that ξ^(k)(θ) → ξ(θ) and ζ^(k)(θ) → ζ(θ) as k → ∞ for all θ ∈ [−r, 0] for some ξ, ζ ∈ C ⊕ B, then Γ(ξ^(k), ζ^(k)) → Γ̄(ξ, ζ) as k → ∞.

For a sufficiently smooth functional Φ : C → ℝ, we can define its Fréchet derivatives with respect to φ ∈ C. It is clear from the results stated above that its first-order Fréchet derivative DΦ(φ) ∈ C* has a unique (continuous) linear extension in (C ⊕ B)*. Similarly, its second-order Fréchet derivative D²Φ(φ) ∈ C† has a unique continuous extension in (C ⊕ B)†. For a Borel measurable function Φ : C → ℝ, we also define

S(Φ)(φ) ≡ lim_{ε↓0} (1/ε)[Φ(φ̃_ε) − Φ(φ)]   (4.8)

for all φ ∈ C, where φ̃ : [−r, T] → ℝⁿ is the extension of φ : [−r, 0] → ℝⁿ defined by

φ̃(t) = φ(t) if t ∈ [−r, 0),   φ̃(t) = φ(0) if t ≥ 0,

and again φ̃_t ∈ C is defined by

φ̃_t(θ) = φ̃(t + θ),   θ ∈ [−r, 0].

lim ↓0

E[Φ(t + , xt+ )] − Φ(t, ψ) = ∂t Φ(t, ψ) + AΦ(t, ψ), 

(4.9)

where AΦ(t, ψ) = S(Φ)(t, ψ) + DΦ(t, ψ)(f (t, ψ)1{0} ) m 1 2 + D Φ(t, ψ)(g(t, ψ)(ej )1{0} , g(t, ψ)(ej )1{0} ) 2 j=1 and ej , j = 1, 2, · · · , m, is the jth unit vector of the standard basis in m . 4.2.2 An Alternate Formulation For convenience, we propose an alternate formulation of the Problem (OSP1) by considering an optimal stopping problem with only terminal reward function. Problem (OSP1) can be reformulated into this format by absorbing the time variable as well as the integral reward term as follows. First, we introduce the process {(s, xs , y(s)), s ∈ [t, T ]} with values in [0, T ] × C × , where  s y(s) = y + e−α(λ−t) L(λ, xλ ) dλ, ∀s ∈ [t, T ]. t

The dynamics of the process {(s, xs , y(s)), s ∈ [t, T ]} can be described by the following system of SHDEs: dx(s) = f (s, xs ) ds + g(s, xs ) dW (s); −α(s−t)

dy(s) = e

L(s, xs ) ds,

(4.10)

s ∈ [t, T ],

with the initial datum at (t, xt , y(t)) = (t, ψ, y) at the initial time s = t.

4.2 Existence of Optimal Stopping

211

Lemma 4.2.4 The process {(s, xs , y(s)), s ∈ [t, T ]} is a strong Markov process with respect to the filtration G(t). Proof. It is clear from Theorem 1.5.2 in Chapter 1 that {(s, xs ), s ∈ [t, T ]} is a strong Markov [0, T ]×C-valued process. We only need to note that the realvalued process {y(s), s ∈ [t, T ]} is strong Markov, because it is sample-path integral of a continuous functional of {(s, xs ), s ∈ [t, T ]} and is continuously dependent on its initial datum y ∈ . 2 1,2,1 ([0, T ] × C × ) denote the space of real-valued functions G deLet Clip fined on [0, T ] × C × , where G(t, φ, y) is continuously differentiable with respect to its first variable t ∈ [0, T ], twice continuously Fr´echet differentiable with respect to its second variable φ ∈ C, continuously differentiable with respect to its third variable y ∈ , and its second Fr´echet derivative D2 G(t, φ, y) ∈ C† satisfies the following global Lipschitz condition in operator norm · † : There exists a constant K > 0 such that for all t ∈ [0, T ], y ∈ , and ϕ, φ ∈ C,

D2 G(t, ϕ, y) − D2 G(t, φ, y) † ≤ K ϕ − φ . The weak infinitesimal generator L˜ for the strong Markov process 1,2,1 ([0, T ] × C × ) ∩ D(S) can be {(s, xs , y(s)), s ≥ 0} acting on any G ∈ Clip written as E[G(t + , xt+ , y(t + )) − G(t, ψ, y)] ˜ L(G)(t, ψ, y) ≡ lim ↓0  E[G(t + , xt+ , y(t + )) − G(t, ψ, y(t + ))] = lim ↓0  E[G(t, ψ, y(t + )) − G(t, ψ, y)] . + lim ↓0  From (4.9), it is clear that lim ↓0

E[G(t + , xt+ , y(t + )) − G(t, ψ, y)] = (∂t + A)G(t, ψ, y), 

since lim ↓0 y(t + ) = y. Now, by the mean-valued theorem, the second limit in (4.11) becomes lim ↓0

E[G(t, ψ, y(t + )) − G(t, ψ, y)] 

= lim ∂y G(t, ψ, y + (1 − λ)(y(t + ) − y)) ↓0

E[y(t + ) − y] (by the chain rule) 

= L(t, ψ)∂y G(t, ψ, y). Therefore, ˜ LG(t, ψ, y) = (∂t + A + L(t, ψ)∂y )G(t, ψ, y),

(4.11)

212

4 Optimal Stopping

where A is defined in (4.9). Define the new terminal reward functional Φ : [0, T ] × C ×  →  by Φ(s, φ, y) = e−α(s−t) Ψ (φ) + y,

∀(s, φ, y) ∈ [0, T ] × C × .

(4.12)

Remark 4.2.5 If Φ is the new terminal reward functional defined in (4.12) then 1,2,1 Φ ∈ Clip ([0, T ] × C × ) ∩ D(S) and ˜ L(Φ)(s, ψ, y) = AΨ (t, ψ) + L(t, ψ) − αΨ (t, ψ)

(4.13)

provided that Ψ , the terminal reward functional of Problem (OSP1), is 2 smooth enough, i.e., Ψ ∈ Clip ([0, T ] × C) ∩ D(S). Second, the reward functional (4.15) can be rewritten as ˜ ψ; τ ) = E t,ψ,0 [Φ(τ, xτ , y(τ ))] , J(t,

(4.14)

where Φ : [0, T ] × C ×  → is the new terminal reward functional that is related to the original terminal reward functional Ψ : C →  →  through the relation described by (4.12). We therefore have the following alternate optimal stopping problem with only terminal reward functional. Problem (OSP3). Given the initial datum (t, ψ, y) ∈ [0, T ] × C × , our objective is to find an optimal stopping time τ ∗ ∈ TtT that maximizes the following expected performance index: ˜ ψ, y; τ ) ≡ E t,ψ,y [Φ(τ, xτ , y(τ ))] J(t,

(4.15)

where Φ is the new terminal reward functional defined in (4.12). In this case, the value function V˜ : [0, T ] × C ×  →  is defined to be ˜ ψ, y; τ ) V˜ (t, ψ, y) ≡ sup J(t,

(4.16)

τ ∈TtT

and the stopping time τ ∗ ∈ TtT such that ˜ ψ, y; τ ∗ ) V˜ (t, ψ, y) = J(t, is called an optimal stopping time. Remark 4.2.6 It is clear that Problem (OSP3) is equivalent to Problem (OSP1) in the sense that V (t, ψ) = V˜ (t, ψ, 0),

∀(t, ψ) ∈ [0, T ] × C,

and the optimal stopping times for both problems coincide. This is because when the initial datum for {y(s), s ∈ [t, T ]} is y(t) = y = 0 at time s = t,


\begin{align*}
\tilde J(t,\psi,0;\tau) &= E^{t,\psi,0}[\Phi(\tau, x_\tau, y(\tau))]\\
&= E^{t,\psi,0}\big[e^{-\alpha(\tau-t)}\Psi(x_\tau) + y(\tau)\big]\\
&= E^{t,\psi,0}\Big[e^{-\alpha(\tau-t)}\Psi(x_\tau) + \int_t^{\tau} e^{-\alpha(s-t)}L(s,x_s)\,ds\Big]\\
&= J(t,\psi;\tau), \qquad \forall\tau\in\mathcal{T}_t^T.
\end{align*}

In the following, we develop the existence of an optimal stopping time for Problem (OSP3) via excessive functions, the Snell envelope, and so forth, parallel to the development in Øksendal [Øks98] but with an infinite-dimensional state space. First, we introduce the concept of semicontinuous envelopes. Let $\Xi$ be a metric space and let $F : \Xi\to\Re$ be a Borel measurable function. The upper semicontinuous (USC) envelope $\bar F : \Xi\to\Re$ and the lower semicontinuous (LSC) envelope $\underline F : \Xi\to\Re$ of $F$ are defined respectively by
\[
\bar F(x) = \limsup_{y\to x,\ y\in\Xi} F(y) \qquad\text{and}\qquad \underline F(x) = \liminf_{y\to x,\ y\in\Xi} F(y).
\]
We let $USC(\Xi)$ and $LSC(\Xi)$ denote the sets of USC and LSC functions on $\Xi$, respectively. Note that, in general, one has $\underline F \le F \le \bar F$, that $F$ is USC if and only if $F = \bar F$, and that $F$ is LSC if and only if $F = \underline F$. In particular, $F$ is continuous if and only if $\underline F = F = \bar F$. The following lemma will be useful later; its proof can be found in Wheeden and Zygmund [WZ77] and is omitted here.

Lemma 4.2.7 If $\{F^{(k)}\}_{k=1}^{\infty}\subset LSC(\Xi)$ and $F^{(k)}\uparrow F$ for some $F : \Xi\to\Re$, then $F\in LSC(\Xi)$. If $\{F^{(k)}\}_{k=1}^{\infty}\subset USC(\Xi)$ and $F^{(k)}\downarrow F$ for some $F : \Xi\to\Re$, then $F\in USC(\Xi)$.
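On the real line, the envelopes $\bar F$ and $\underline F$ can be approximated numerically by suprema and infima over shrinking neighborhoods. The following Python sketch is my own illustration (the helper functions and the step function are not from the text); it recovers the familiar fact that the USC envelope closes a jump from above and the LSC envelope from below:

```python
# Approximate the semicontinuous envelopes F_bar(x) = limsup_{y->x} F(y) and
# F_under(x) = liminf_{y->x} F(y) by sampling shrinking neighborhoods of x.

def usc_envelope(F, x, h0=1e-2, shrink=10, steps=4):
    """Approximate limsup_{y->x} F(y): sup over [x-h, x+h] for small h."""
    h, val = h0, None
    for _ in range(steps):
        ys = [x + h * (i / 50.0 - 1.0) for i in range(101)]  # grid on [x-h, x+h]
        val = max(F(y) for y in ys)
        h /= shrink
    return val  # value at the smallest neighborhood tried

def lsc_envelope(F, x, h0=1e-2, shrink=10, steps=4):
    """Approximate liminf_{y->x} F(y): inf over [x-h, x+h] for small h."""
    h, val = h0, None
    for _ in range(steps):
        ys = [x + h * (i / 50.0 - 1.0) for i in range(101)]
        val = min(F(y) for y in ys)
        h /= shrink
    return val

# A step function, discontinuous at 0.
F = lambda y: 1.0 if y >= 0 else 0.0

print(usc_envelope(F, 0.0))  # 1.0: the USC envelope closes the jump from above
print(lsc_envelope(F, 0.0))  # 0.0: the LSC envelope closes it from below
```

At points of continuity the two envelopes agree with $F$ itself, in line with the characterization $\underline F = F = \bar F$.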

In connection with Problem (OSP3), we now define the concepts of supermeanvalued and superharmonic functions $\Theta : [0,T]\times C\times\Re\to\Re$ as follows.

Definition 4.2.8 A measurable function $\Theta : [0,T]\times C\times\Re\to\Re$ is said to be supermeanvalued with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ if it satisfies
\[
\Theta(t,\psi,y) \ge E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))], \qquad \forall(t,\psi,y)\in[0,T]\times C\times\Re \text{ and all stopping times } \tau\in\mathcal{T}_t^T. \tag{4.17}
\]
If, in addition, $\Theta$ is LSC, then $\Theta : [0,T]\times C\times\Re\to\Re$ is said to be the following:


1. a superharmonic function with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ if it satisfies
\[
\Theta(t,\psi,y) \ge E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))], \qquad \forall(t,\psi,y)\in[0,T]\times C\times\Re \text{ and all stopping times } \tau\in\mathcal{T}_t^T; \tag{4.18}
\]
2. an excessive function with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ if, $\forall(t,\psi,y)\in[0,T]\times C\times\Re$ and $s\in[t,T]$, it satisfies
\[
\Theta(t,\psi,y) \ge E^{t,\psi,y}[\Theta(s, x_s, y(s))]. \tag{4.19}
\]

Remark 4.2.9
1. If $\Theta : [0,T]\times C\times\Re\to\Re$ is a superharmonic function with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$, then for any sequence of $G(t)$-stopping times $\{\tau_k\}_{k=1}^{\infty}$ such that $\lim_{k\to\infty}\tau_k = t$, we have, $\forall(t,\psi,y)\in[0,T]\times C\times\Re$,
\[
\Theta(t,\psi,y) = \lim_{k\to\infty} E^{t,\psi,y}[\Theta(\tau_k, x_{\tau_k}, y(\tau_k))]. \tag{4.20}
\]
This follows from Fatou's lemma and the defining property of superharmonic functions:
\[
\Theta(t,\psi,y) \le E^{t,\psi,y}\big[\varliminf_{k\to\infty}\Theta(\tau_k, x_{\tau_k}, y(\tau_k))\big] \le \varliminf_{k\to\infty} E^{t,\psi,y}[\Theta(\tau_k, x_{\tau_k}, y(\tau_k))] \le \Theta(t,\psi,y).
\]
2. If $\Theta\in C^{1,2,1}_{lip}([0,T]\times C\times\Re)\cap D(S)$, then it follows from Dynkin's formula that $\Theta$ is superharmonic with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ if and only if
\[
\tilde L\Theta \le 0,
\]
where $\tilde L$ is the infinitesimal generator given by (4.11).
3. It is clear that a superharmonic function is excessive. The converse, however, is not as trivial.

It should be understood in the following lemma that the domain of all functions involved is $[0,T]\times C\times\Re$ and that the words "supermeanvalued" and "superharmonic" are meant with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$.

Lemma 4.2.10
1. If $\{\Theta^{(j)}\}_{j\in J}$ is a family of supermeanvalued functions, then $\Theta(t,\psi,y) \equiv \inf_{j\in J}\{\Theta^{(j)}(t,\psi,y)\}$ is supermeanvalued if it is measurable ($J$ is any set).
2. If $\Theta^{(1)}, \Theta^{(2)}, \ldots$ are superharmonic (supermeanvalued) functions and $\Theta^{(k)}\uparrow\Theta$ pointwise, then $\Theta$ is superharmonic (supermeanvalued).
3. If $\Theta$ is supermeanvalued and $\tau\le\tilde\tau$ are $G(t)$-stopping times, then
\[
E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))] \ge E^{t,\psi,y}[\Theta(\tilde\tau, x_{\tilde\tau}, y(\tilde\tau))].
\]


4. If $\Theta$ is supermeanvalued and $H\in\mathcal{B}([0,T]\times C\times\Re)$, then $\tilde\Theta(t,\psi,y) \equiv E^{t,\psi,y}[\Theta(\tau_H, x_{\tau_H}, y(\tau_H))]$ is supermeanvalued, where $\tau_H$ is the first exit time from $H$, that is, $\tau_H = \inf\{s\ge t \mid (s,x_s,y(s))\notin H\}$.

Proof. 1. Suppose $\Theta^{(j)}$ is supermeanvalued for all $j\in J$. Then
\[
\Theta^{(j)}(t,\psi,y) \ge E^{t,\psi,y}[\Theta^{(j)}(\tau, x_\tau, y(\tau))] \ge E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))], \qquad \forall j\in J.
\]
So
\[
\Theta(t,\psi,y) = \inf_{j\in J}\Theta^{(j)}(t,\psi,y) \ge E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))],
\]
as required. Therefore $\Theta$ is supermeanvalued.

2. Suppose each $\Theta^{(k)}$ is supermeanvalued and $\Theta^{(k)}\uparrow\Theta$ pointwise. Then
\[
\Theta(t,\psi,y) \ge \Theta^{(k)}(t,\psi,y) \ge E^{t,\psi,y}[\Theta^{(k)}(\tau, x_\tau, y(\tau))], \qquad \forall k = 1,2,\ldots.
\]
Therefore,
\[
\Theta(t,\psi,y) \ge \lim_{k\to\infty} E^{t,\psi,y}[\Theta^{(k)}(\tau, x_\tau, y(\tau))] = E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))]
\]
by monotone convergence, so $\Theta$ is supermeanvalued. If, in addition, each $\Theta^{(k)}$ is LSC (hence superharmonic), then
\[
\Theta^{(k)}(t,\psi,y) = \lim_{j\to\infty} E^{t,\psi,y}[\Theta^{(k)}(\tau_j, x_{\tau_j}, y(\tau_j))] \le \varliminf_{j\to\infty} E^{t,\psi,y}[\Theta(\tau_j, x_{\tau_j}, y(\tau_j))]
\]
for any sequence of stopping times $\{\tau_j\}_{j=1}^{\infty}$ that converges to $t$. Therefore,
\[
\Theta(t,\psi,y) \le \varliminf_{j\to\infty} E^{t,\psi,y}[\Theta(\tau_j, x_{\tau_j}, y(\tau_j))],
\]
and $\Theta$ is superharmonic as well, being LSC as an increasing limit of LSC functions (Lemma 4.2.7).

3. If $\Theta$ is supermeanvalued, then by the Markov property of the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ we have, for all $t\le\tilde s\le s\le T$,
\[
E^{t,\psi,y}[\Theta(s, x_s, y(s)) \mid \mathcal{F}(\tilde s)] = E^{\tilde s, x_{\tilde s}, y(\tilde s)}[\Theta(s-\tilde s, x_{s-\tilde s}, y(s-\tilde s))] \le \Theta(\tilde s, x_{\tilde s}, y(\tilde s)); \tag{4.21}
\]
that is, the process $\{\Theta(s,x_s,y(s)),\ s\in[t,T]\}$ is a supermartingale with respect to the filtration $F = \{\mathcal{F}(s),\ s\ge 0\}$ generated by the Brownian motion


$W(\cdot) = \{W(s),\ s\ge t\}$. Therefore, by Doob's optional sampling theorem (see Karatzas and Shreve [KS91, pp. 19-20]),
\[
E^{t,\psi,y}[\Theta(\tilde\tau, x_{\tilde\tau}, y(\tilde\tau))] \ge E^{t,\psi,y}[\Theta(\tau, x_\tau, y(\tau))]
\]
for all stopping times $\tilde\tau, \tau\in\mathcal{T}_t^T$ with $\tilde\tau\le\tau$ $P^{t,\psi,y}$-a.s.

4. Suppose $\Theta$ is supermeanvalued. By the strong Markov property (Theorem 1.5.2 in Chapter 1), we have, for any stopping time $\sigma\in\mathcal{T}_t^T$,
\begin{align*}
E^{t,\psi,y}[\tilde\Theta(\sigma, x_\sigma, y(\sigma))] &= E^{t,\psi,y}\big[E^{\sigma, x_\sigma, y(\sigma)}[\Theta(\tau_H, x_{\tau_H}, y(\tau_H))]\big]\\
&= E^{t,\psi,y}\big[E^{t,\psi,y}[\theta(\sigma)\Theta(\tau_H, x_{\tau_H}, y(\tau_H)) \mid G(t,\sigma)]\big]\\
&= E^{t,\psi,y}[\theta(\sigma)\Theta(\tau_H, x_{\tau_H}, y(\tau_H))]\\
&= E^{t,\psi,y}[\Theta(\tau_H^{\sigma}, x_{\tau_H^{\sigma}}, y(\tau_H^{\sigma}))], \tag{4.22}
\end{align*}
where $\theta(\sigma)$ is the strong Markov shift operator defined by $\theta(\sigma)\Theta(s,x_s,y(s)) = \Theta(s+\sigma, x_{s+\sigma}, y(s+\sigma))$ and $\tau_H^{\sigma} = \inf\{s>\sigma \mid (s,x_s,y(s))\notin H\}$. Since $\tau_H^{\sigma}\ge\tau_H$, we have by item 3 of this lemma
\[
E^{t,\psi,y}[\tilde\Theta(\sigma, x_\sigma, y(\sigma))] \le E^{t,\psi,y}[\Theta(\tau_H, x_{\tau_H}, y(\tau_H))] = \tilde\Theta(t,\psi,y),
\]
so $\tilde\Theta$ is supermeanvalued. $\Box$

We have the following theorem.

Theorem 4.2.11 Let $\Theta : [0,T]\times C\times\Re\to\Re$. Then $\Theta$ is a superharmonic function with respect to the process $\{(s,x_s,y(s)),\ s\in[t,T]\}$ if and only if it is excessive with respect to the same process.

Proof in a Special Case. We prove the theorem only in the special case $\Theta\in C^{1,2,1}_{lip}([0,T]\times C\times\Re)\cap D(S)$. By Dynkin's formula (see Theorem 2.4.1), we have
\[
E^{t,\psi,y}[\Theta(s, x_s, y(s))] = \Theta(t,\psi,y) + E^{t,\psi,y}\Big[\int_t^s \tilde L(\Theta)(\lambda, x_\lambda, y(\lambda))\,d\lambda\Big]
\]
for all $(t,\psi,y)\in[0,T]\times C\times\Re$ and $s\in[t,T]$. Therefore, if $\Theta$ is excessive, then $\tilde L(\Theta)\le 0$. Consequently, if $\tau\in\mathcal{T}_t^T$, we get, for all $(t,\psi,y)\in[0,T]\times C\times\Re$ and $s\in[t,T]$,
\[
E^{t,\psi,y}[\Theta(s\wedge\tau, x_{s\wedge\tau}, y(s\wedge\tau))] \le \Theta(t,\psi,y).
\]
Letting $s\uparrow T$, we see that $\Theta$ is superharmonic. A proof in the general case can be found in Dynkin [Dyn65] and is omitted here. $\Box$


Definition 4.2.12 If $\tilde\Theta : [0,T]\times C\times\Re\to\Re$ is superharmonic (supermeanvalued) and $\tilde\Theta\ge\Theta$, then $\tilde\Theta$ is said to be a superharmonic (supermeanvalued) majorant of $\Theta$. The function $\hat\Theta : [0,T]\times C\times\Re\to\Re$ is said to be the least superharmonic (supermeanvalued) majorant of $\Theta$ if
(i) $\hat\Theta$ is a superharmonic (supermeanvalued) majorant of $\Theta$, and
(ii) any other superharmonic (supermeanvalued) majorant $\tilde\Theta$ of $\Theta$ satisfies $\tilde\Theta\ge\hat\Theta$.

Given an LSC function $\Theta : [0,T]\times C\times\Re\to\Re$, the following result provides an iterative construction of the least superharmonic majorant of $\Theta$.

Theorem 4.2.13 (Construction of the Least Superharmonic Majorant) Let $\Theta = \Theta^{(0)}$ be a non-negative LSC function on $[0,T]\times C\times\Re$ and define inductively, for $k = 1,2,\ldots$,
\[
\Theta^{(k)}(t,\psi,y) = \sup_{s\in S^{(k)}} E^{t,\psi,y}[\Theta^{(k-1)}(s, x_s, y(s))], \tag{4.23}
\]
where $S^{(k)} = \{(t + j\cdot 2^{-k})\wedge T \mid 0\le j\le 4^k\}$, $k = 1,2,\ldots$. Then $\Theta^{(k)}\uparrow\hat\Theta$, where $\hat\Theta$ is the least superharmonic majorant of $\Theta$, and $\hat\Theta = \bar\Theta$, where $\bar\Theta$ is the least excessive majorant of $\Theta$.

Proof. Note that $\{\Theta^{(k)}\}$ is increasing. Define $\check\Theta(t,\psi,y) = \lim_{k\to\infty}\Theta^{(k)}(t,\psi,y)$. Then
\[
\check\Theta(t,\psi,y) \ge \Theta^{(k)}(t,\psi,y) \ge E^{t,\psi,y}[\Theta^{(k-1)}(s, x_s, y(s))], \qquad \forall k \text{ and } \forall s\in S^{(k)}.
\]
Hence, for all $s\in S \equiv \cup_{k=1}^{\infty} S^{(k)}$,
\[
\check\Theta(t,\psi,y) \ge \lim_{k\to\infty} E^{t,\psi,y}[\Theta^{(k-1)}(s, x_s, y(s))] \ge E^{t,\psi,y}\big[\lim_{k\to\infty}\Theta^{(k-1)}(s, x_s, y(s))\big] = E^{t,\psi,y}[\check\Theta(s, x_s, y(s))]. \tag{4.24}
\]
Since $\check\Theta$ is an increasing limit of LSC functions (see Lemma 4.2.7), $\check\Theta$ is LSC. Fix $s\in[t,T]$ and choose $t_k\in S$ such that $t_k\to s$. Then, by (4.24), Fatou's lemma, and lower semicontinuity,
\[
\check\Theta(t,\psi,y) \ge \varliminf_{k\to\infty} E^{t,\psi,y}[\check\Theta(t_k, x_{t_k}, y(t_k))] \ge E^{t,\psi,y}\big[\varliminf_{k\to\infty}\check\Theta(t_k, x_{t_k}, y(t_k))\big] \ge E^{t,\psi,y}[\check\Theta(s, x_s, y(s))].
\]
So $\check\Theta$ is excessive. Therefore, $\check\Theta$ is superharmonic by Theorem 4.2.11 and, hence, $\check\Theta$ is a superharmonic majorant of $\Theta$. On the other hand, if $\tilde\Theta$ is any superharmonic majorant of $\Theta$, then clearly, by induction,
\[
\tilde\Theta(t,\psi,y) \ge \Theta^{(k)}(t,\psi,y), \qquad \forall k = 1,2,\ldots,
\]
and so $\tilde\Theta(t,\psi,y)\ge\check\Theta(t,\psi,y)$; this shows that $\check\Theta$ is the least superharmonic majorant of $\Theta$. So $\check\Theta = \hat\Theta = \bar\Theta$. $\Box$


4.2.3 Existence and Uniqueness

In this subsection, we establish the existence and uniqueness of the optimal stopping time for Problem (OSP3). In the following, we let $\Phi : [0,T]\times C\times\Re\to\Re$ be the terminal reward functional for Problem (OSP3) defined by (4.12). As a reminder,
\[
\Phi(s,\phi,y) = e^{-\alpha(s-t)}\Psi(\phi) + y, \qquad \forall(s,\phi,y)\in[0,T]\times C\times\Re,
\]
where $\Psi : C\to\Re$ is the terminal reward functional of Problem (OSP1).

Theorem 4.2.14 (Existence for Problem (OSP3)) Let $\tilde V : [0,T]\times C\times\Re\to\Re$ be the value function of Problem (OSP3) and let $\hat\Phi$ be the least superharmonic majorant of a continuous terminal reward functional $\Phi\ge 0$ of Problem (OSP3).
1. Then
\[
\tilde V(t,\psi,y) = \hat\Phi(t,\psi,y), \qquad \forall(t,\psi,y)\in[0,T]\times C\times\Re. \tag{4.25}
\]
2. For $\epsilon>0$, let
\[
D_\epsilon = \{(t,\psi,y)\in[0,T]\times C\times\Re \mid \Phi(t,\psi,y) < \hat\Phi(t,\psi,y) - \epsilon\}. \tag{4.26}
\]
Suppose $\Phi$ is bounded. Then stopping at the first time $\tau_\epsilon$ of exit from $D_\epsilon$, that is,
\[
\tau_\epsilon = \inf\{s\ge t \mid (s, x_s, y(s))\notin D_\epsilon\},
\]
is close to being optimal, in the sense that, $\forall(t,\psi,y)\in[0,T]\times C\times\Re$,
\[
|\tilde V(t,\psi,y) - E^{t,\psi,y}[\Phi(\tau_\epsilon, x_{\tau_\epsilon}, y(\tau_\epsilon))]| \le 2\epsilon. \tag{4.27}
\]
3. For a continuous terminal reward functional $\Phi\ge 0$, let the continuation region $D$ be defined as
\[
D = \{(t,\psi,y)\in[0,T]\times C\times\Re \mid \Phi(t,\psi,y) < \tilde V(t,\psi,y)\}. \tag{4.28}
\]
For $N = 1,2,\ldots$, define $\Phi_N = \Phi\wedge N$, $D_N = \{(t,\psi,y)\in[0,T]\times C\times\Re \mid \Phi_N(t,\psi,y) < \hat\Phi_N(t,\psi,y)\}$, and $\tau_N = \tau_{D_N}$. Then
\[
D_N\subset D_{N+1}, \qquad D_N\subset D\cap\Phi^{-1}([0,N]), \qquad D = \cup_N D_N.
\]
If $\tau_N < \infty$ $P^{t,\psi,y}$-a.s. for all $N$, then
\[
\tilde V(t,\psi,y) = \lim_{N\to\infty} E^{t,\psi,y}[\Phi(\tau_N, x_{\tau_N}, y(\tau_N))]. \tag{4.29}
\]


4. In particular, if $\tau_D < \infty$ $P^{t,\psi,y}$-a.s. and the family $\{\Phi(\tau_N, x_{\tau_N}, y(\tau_N))\}_{N=1}^{\infty}$ is uniformly integrable with respect to $P^{t,\psi,y}$, then
\[
\tilde V(t,\psi,y) = E^{t,\psi,y}[\Phi(\tau_D, x_{\tau_D}, y(\tau_D))]
\]
is the value function for Problem (OSP3), and $\tau^* = \tau_D$ is an optimal stopping time for Problem (OSP3).

Proof. First, assume that $\Phi$ is bounded and define
\[
\Phi_\epsilon(t,\psi,y) = E^{t,\psi,y}[\hat\Phi(\tau_\epsilon, x_{\tau_\epsilon}, y(\tau_\epsilon))], \qquad \epsilon>0. \tag{4.30}
\]
Then $\Phi_\epsilon$ is supermeanvalued by item 4 of Lemma 4.2.10. We claim that
\[
\Phi(t,\psi,y) \le \Phi_\epsilon(t,\psi,y) + \epsilon, \qquad \forall(t,\psi,y). \tag{4.31}
\]
We prove the claim by contradiction. Suppose
\[
\beta \equiv \sup_{(t,\psi,y)\in[0,T]\times C\times\Re}\{\Phi(t,\psi,y) - \Phi_\epsilon(t,\psi,y)\} > \epsilon. \tag{4.32}
\]
Then, for all $\eta>0$, we can find $(\bar t,\bar\psi,\bar y)$ such that
\[
\Phi(\bar t,\bar\psi,\bar y) - \Phi_\epsilon(\bar t,\bar\psi,\bar y) \ge \beta - \eta. \tag{4.33}
\]
On the other hand, since $\Phi_\epsilon + \beta$ is a supermeanvalued majorant of $\Phi$, we have
\[
\hat\Phi(\bar t,\bar\psi,\bar y) \le \Phi_\epsilon(\bar t,\bar\psi,\bar y) + \beta. \tag{4.34}
\]
Combining (4.33) and (4.34), we get
\[
\hat\Phi(\bar t,\bar\psi,\bar y) \le \Phi(\bar t,\bar\psi,\bar y) + \eta. \tag{4.35}
\]
Consider the following two possible cases.

Case 1. $\tau_\epsilon > 0$ $P^{\bar t,\bar\psi,\bar y}$-a.s. Then, by (4.35) and the definition of $D_\epsilon$,
\[
\hat\Phi(\bar t,\bar\psi,\bar y) \ge E^{\bar t,\bar\psi,\bar y}\big[\hat\Phi(s\wedge\tau_\epsilon,\ x_{s\wedge\tau_\epsilon},\ y(s\wedge\tau_\epsilon))\big] \ge E^{\bar t,\bar\psi,\bar y}\big[(\Phi(s, x_s, y(s)) + \epsilon)\,\chi_{\{s<\tau_\epsilon\}}\big] \cdots
\]

(v) $\tau_D = \inf\{t > 0 \mid x_t(\psi)\notin D\}$;
(vi) the family $\{\Phi(x_\tau(\psi)),\ \tau\le\tau_D\}$ is uniformly integrable, for all initial $\psi\in C$.

Then, for all $\psi\in C$,
\[
\Phi(\psi) = V(\psi) = \sup_{\tau\in\mathcal{T}_t^T} E\Big[\int_0^{\tau} e^{-\alpha t} L(x_t(\psi))\,dt + e^{-\alpha\tau}\Psi(x_\tau(\psi))\Big], \tag{4.53}
\]
and
\[
\tau^* = \tau_D \tag{4.54}
\]
is an optimal stopping time for Problem (OSP2).

4.4 Verification Theorem


Proof. For $R>0$ and the initial $\psi\in C$, put $T_R = R\wedge\inf\{t>0 \mid \|x_t\|\ge R\}$ and let $\tau\in\mathcal{T}_t^T$. Then, by Dynkin's formula in Subsection 2.4.1 of Chapter 2 and parts (i)-(iv),
\begin{align*}
\Phi(\psi) &= E^{\psi}\Big[-\int_0^{\tau\wedge T_R} e^{-\alpha t} A\Phi(x_t)\,dt + e^{-\alpha(\tau\wedge T_R)}\Phi(x_{\tau\wedge T_R})\Big]\\
&\ge E^{\psi}\Big[\int_0^{\tau\wedge T_R} e^{-\alpha t} L(x_t)\,dt + e^{-\alpha(\tau\wedge T_R)}\Psi(x_{\tau\wedge T_R})\Big]. \tag{4.55}
\end{align*}
Hence, by Fatou's lemma,
\begin{align*}
\Phi(\psi) &\ge \varliminf_{R\to\infty} E^{\psi}\Big[\int_0^{\tau\wedge T_R} e^{-\alpha t} L(x_t)\,dt + e^{-\alpha(\tau\wedge T_R)}\Psi(x_{\tau\wedge T_R})\Big]\\
&\ge E^{\psi}\Big[\int_0^{\tau} e^{-\alpha t} L(x_t)\,dt + e^{-\alpha\tau}\Psi(x_\tau)\Big].
\end{align*}
Since $\tau\in\mathcal{T}_t^T$ is arbitrary, we conclude that
\[
\Phi(\psi) \ge V(\psi), \qquad \forall\psi\in C. \tag{4.56}
\]
We consider the following two cases: (i) $\psi\notin D$ and (ii) $\psi\in D$. If $\psi\notin D$, then $\Phi(\psi) = \Psi(\psi)\le V(\psi)$; so by (4.56) we have
\[
\Phi(\psi) = V(\psi) \quad\text{and}\quad \hat\tau = \hat\tau(\psi,\omega)\equiv 0 \text{ is optimal.} \tag{4.57}
\]

If $\psi\in D$, we let $\{D_k\}_{k=1}^{\infty}$ be an increasing sequence of open sets $D_k$ such that $\bar D_k\subset D$, $\bar D_k$ is compact, and $D = \cup_{k=1}^{\infty} D_k$. Put $\tau_k = \inf\{t>0 \mid x_t\notin D_k\}$, $k = 1,2,\ldots$. By Dynkin's formula we have, for $\psi\in D_k$,
\begin{align*}
\Phi(\psi) &= E^{\psi}\Big[-\int_0^{\tau_k\wedge T_R} e^{-\alpha t} A\Phi(x_t)\,dt + e^{-\alpha(\tau_k\wedge T_R)}\Phi(x_{\tau_k\wedge T_R})\Big]\\
&\ge E^{\psi}\Big[\int_0^{\tau_k\wedge T_R} e^{-\alpha t} L(x_t)\,dt + e^{-\alpha(\tau_k\wedge T_R)}\Phi(x_{\tau_k\wedge T_R})\Big].
\end{align*}
So, by the uniform integrability assumption and parts (i), (iv), and (vi), we get
\begin{align*}
\Phi(\psi) &= \lim_{R,k\to\infty} E^{\psi}\Big[-\int_0^{\tau_k\wedge T_R} e^{-\alpha t} A\Phi(x_t)\,dt + e^{-\alpha(\tau_k\wedge T_R)}\Phi(x_{\tau_k\wedge T_R})\Big]\\
&= E^{\psi}\Big[\int_0^{\tau_D} e^{-\alpha t} L(x_t)\,dt + e^{-\alpha\tau_D}\Psi(x_{\tau_D})\Big]\\
&= J(\psi;\tau_D) \le V(\psi). \tag{4.58}
\end{align*}


Combining (4.56) and (4.58), we get $\Phi(\psi) \ge V(\psi) \ge J(\psi;\tau_D) = \Phi(\psi)$. Therefore,
\[
\Phi(\psi) = V(\psi) \quad\text{and}\quad \hat\tau(\psi,\omega)\equiv\tau_D \text{ is optimal.} \tag{4.59}
\]
From (4.57) and (4.59), we conclude that
\[
\Phi(\psi) = V(\psi), \qquad \forall\psi\in C.
\]
Moreover, the stopping time $\hat\tau\in\mathcal{T}_t^T$ defined by
\[
\hat\tau(\psi,\omega) = \begin{cases} 0 & \text{for } \psi\notin D,\\ \tau_D & \text{for } \psi\in D, \end{cases}
\]
is optimal. We therefore conclude that $\tau_D$ is optimal as well. This proves the verification theorem. $\Box$
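The two regimes of the verification theorem, stop immediately off the continuation region and run to $\tau_D$ inside it, can be illustrated in a discrete toy model. The sketch below (a binomial walk with discounted terminal reward; all model data are my own, not from the text) computes the value by backward induction, takes $D = \{V > \Psi\}$, and checks that the stopping rule "first exit from $D$" attains the value:

```python
import math

# Discrete-time analogue of the verification theorem for optimal stopping.
N, beta = 20, math.exp(-0.05)          # horizon and per-step discount e^{-alpha}
psi = lambda x: float(max(x, 0))       # terminal (stopping) reward Psi

# Backward induction for the value V of the stopping problem.
V = {(N, x): psi(x) for x in range(-N, N + 1)}
for n in range(N - 1, -1, -1):
    for x in range(-n, n + 1):
        cont = beta * 0.5 * (V[(n + 1, x - 1)] + V[(n + 1, x + 1)])
        V[(n, x)] = max(psi(x), cont)

# Policy evaluation of tau_D: continue while (n, x) lies in D = {V > Psi}.
W = {(N, x): psi(x) for x in range(-N, N + 1)}
for n in range(N - 1, -1, -1):
    for x in range(-n, n + 1):
        if V[(n, x)] > psi(x):         # inside the continuation region D
            W[(n, x)] = beta * 0.5 * (W[(n + 1, x - 1)] + W[(n + 1, x + 1)])
        else:                          # outside D: stop immediately (tau_D = 0)
            W[(n, x)] = psi(x)

assert abs(W[(0, 0)] - V[(0, 0)]) < 1e-12   # tau_D achieves the value
print(V[(0, 0)])
```

The equality $W = V$ holds by backward induction: at each node the two recursions take the same branch, mirroring the argument combining (4.56) and (4.58).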

4.5 Viscosity Solution

In this section, we show that the value function $V$ of Problem (OSP1) is the unique viscosity solution of the HJBVI (4.52). First, let us define viscosity solutions of (4.52).

Definition 4.5.1 Let $w\in C([0,T]\times C)$. We say that $w$ is a viscosity subsolution of (4.52) if, for every $(t,\psi)\in[0,T]\times C$ and every $\Gamma : [0,T]\times C\to\Re$ satisfying smoothness conditions (i) and (ii), $\Gamma\ge w$ on $[0,T]\times C$, and $\Gamma(t,\psi) = w(t,\psi)$, we have
\[
\min\{\Gamma(t,\psi) - \Psi(\psi),\ \alpha\Gamma(t,\psi) - \partial_t\Gamma(t,\psi) - A\Gamma(t,\psi) - L(t,\psi)\} \le 0.
\]
We say that $w$ is a viscosity supersolution of (4.52) if, for every $(t,\psi)\in[0,T]\times C$ and every $\Gamma : [0,T]\times C\to\Re$ satisfying smoothness conditions (i) and (ii), $\Gamma\le w$ on $[0,T]\times C$, and $\Gamma(t,\psi) = w(t,\psi)$, we have
\[
\min\{\Gamma(t,\psi) - \Psi(\psi),\ \alpha\Gamma(t,\psi) - \partial_t\Gamma(t,\psi) - A\Gamma(t,\psi) - L(t,\psi)\} \ge 0.
\]
We say that $w$ is a viscosity solution of (4.52) if it is both a viscosity subsolution and a viscosity supersolution of (4.52).

As the definition shows, a viscosity solution must be continuous, so we first show that the value function $V$ defined by (4.16) has this property. Actually, we have the following result.

Lemma 4.5.2 The value function $V : [0,T]\times C\to\Re$ is continuous, and there exist constants $K>0$ and $k\ge 1$ such that, for every $(t,\psi)\in[0,T]\times C$, we have
\[
|V(t,\psi)| \le K(1 + \|\psi\|_2)^k. \tag{4.60}
\]


Proof. It is clear that $V$ has at most polynomial growth, since $L$ and $\Psi$ have at most polynomial growth with the same $k\ge 1$ as in Assumption 4.1.4. Let $\{x_s(t,\psi),\ s\in[t,T]\}$ be the $C$-valued segment process of (4.1) with initial datum $(t,\psi)\in[0,T]\times C$. It has been shown in Chapter 1 that the trajectory map $(t,\psi)\mapsto x_s(t,\psi)$ from $[0,T]\times C$ to $L^2(\Omega, C)$ is globally Lipschitz in $\psi$, uniformly with respect to $t$ on compact sets, and continuous in $t$ for fixed $\psi$. Therefore, given two $C$-valued segment processes $\Xi_1(s) = x_s(t,\psi_1)$ and $\Xi_2(s) = x_s(t,\psi_2)$, $s\in[t,T]$, of (4.1) with initial data $(t,\psi_1)$ and $(t,\psi_2)$, respectively, we have
\[
E\|\Xi_1(s) - \Xi_2(s)\|^2 \le K_{lip}\|\psi_1 - \psi_2\|^2, \tag{4.61}
\]
where $K_{lip}$ is a positive constant that depends on the Lipschitz constant in Assumption 4.1.4 and on $T$. Using the Lipschitz continuity of $L$ and $\Psi$, there exists yet another constant $\Lambda>0$ such that
\[
|J(t,\psi_1;\tau) - J(t,\psi_2;\tau)| \le \Lambda E[\|\Xi_1(\tau) - \Xi_2(\tau)\|]. \tag{4.62}
\]
Therefore, using (4.62) and (4.61) (together with the Cauchy-Schwarz inequality, absorbing constants into $\Lambda$), we have
\begin{align*}
|V(t,\psi_1) - V(t,\psi_2)| &\le \sup_{\tau\in\mathcal{T}_t^T}|J(t,\psi_1;\tau) - J(t,\psi_2;\tau)|\\
&\le \Lambda\sup_{\tau\in\mathcal{T}_t^T} E[\|\Xi_1(\tau) - \Xi_2(\tau)\|]\\
&\le \Lambda\|\psi_1 - \psi_2\|. \tag{4.63}
\end{align*}
This implies the (uniform) continuity of $V(t,\psi)$ with respect to $\psi$.

We next show the continuity of $V(t,\psi)$ with respect to $t$. Let $\Xi_1(s) = x_s(t_1,\psi)$, $s\in[t_1,T]$, and $\Xi_2(s) = x_s(t_2,\psi)$, $s\in[t_2,T]$, be two $C$-valued solutions of (4.1) with initial data $(t_1,\psi)$ and $(t_2,\psi)$, respectively. Without loss of generality, we assume $t_1 < t_2$. Then we can get
\begin{align*}
J(t_1,\psi;\tau) - J(t_2,\psi;\tau) &= E\Big[\int_{t_1}^{t_2} e^{-\alpha(\xi-t_1)} L(\xi,\Xi_1(\xi))\,d\xi\\
&\quad + \int_{t_2}^{\tau} e^{-\alpha(\xi-t_2)}\big[L(\xi,\Xi_1(\xi)) - L(\xi,\Xi_2(\xi))\big]\,d\xi\\
&\quad + e^{-\alpha(\tau-t_1)}\Psi(\Xi_1(\tau)) - e^{-\alpha(\tau-t_2)}\Psi(\Xi_2(\tau))\Big]. \tag{4.64}
\end{align*}
Therefore, there exists a constant $\Lambda>0$ such that
\[
|J(t_1,\psi;\tau) - J(t_2,\psi;\tau)| \le \Lambda\Big(|t_1 - t_2|\,E[\|\Xi_1(\tau)\|] + E[\|\Xi_1(\tau) - \Xi_2(\tau)\|]\Big). \tag{4.65}
\]


Let $\epsilon>0$. Using the compactness of $[0,T]$ and the uniform continuity of the trajectory map in $t$, there exists $\eta>0$ such that if $|t_1 - t_2| < \eta$, then $E[\|\Xi_1(s) - \Xi_2(s)\|] \le \frac{\epsilon}{2\Lambda}$. In addition, there exists a constant $K>0$ such that
\[
E\Big[\sup_{s\in[t_1,T]}\|\Xi_1(s)\|\Big] \le K, \qquad \forall t_1\in[0,T].
\]
Then, for $|t_1 - t_2| < \min\big\{\eta, \frac{\epsilon}{2\Lambda K}\big\}$, we have
\[
|J(t_1,\psi;\tau) - J(t_2,\psi;\tau)| \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\]
Consequently,
\[
|V(t_1,\psi) - V(t_2,\psi)| \le \epsilon.
\]
This completes the proof. $\Box$
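The mechanism of the proof, a Lipschitz reward combined with Lipschitz dependence of the state on its initial datum, can be seen in a toy discrete model. In the sketch below (my own binomial construction, not from the text), the dynamics are additive in the starting point and the reward is Lipschitz-1, so the optimal-stopping value inherits the same Lipschitz constant:

```python
import math

# Lipschitz continuity of the value in the initial state, discrete toy model.
N, beta = 25, math.exp(-0.05)               # horizon, per-step discount
psi = lambda x: max(x, 0.0)                 # Lipschitz-1 terminal reward

def value(x0):
    """Backward induction on the recombining tree x0 + (#up - #down)."""
    V = [psi(x0 + (2 * j - N)) for j in range(N + 1)]      # level N
    for n in range(N - 1, -1, -1):
        V = [max(psi(x0 + (2 * j - n)),
                 beta * 0.5 * (V[j] + V[j + 1])) for j in range(n + 1)]
    return V[0]

x0, delta = 0.0, 0.3
# Max of Lipschitz-1 maps and discounted averaging preserve the constant:
assert abs(value(x0 + delta) - value(x0)) <= delta + 1e-12
print(value(x0), value(x0 + delta))
```

The same argument works for any additive perturbation of the start, which is the discrete shadow of estimate (4.63).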

Before we show that the value function is a viscosity solution of HJBVI (4.52), we need some results related to the dynamic programming principle. They are given next in Lemma 4.5.3 and Lemma 4.5.5.

Lemma 4.5.3 Let $\tau, \bar\tau\in\mathcal{T}_t^T$ be $G(t)$-stopping times such that $t\le\tau\le\bar\tau$ a.s. Then we have
\[
E\big[e^{-\alpha(\tau-t)}V(\tau, x_\tau)\big] \ge E\Big[\int_{\tau}^{\bar\tau} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] + E\big[e^{-\alpha(\bar\tau-t)}V(\bar\tau, x_{\bar\tau})\big]. \tag{4.66}
\]

Proof. It is known that
\begin{align*}
&E\Big[e^{-\alpha(\bar\tau-t)}V(\bar\tau, x_{\bar\tau}) + \int_{\tau}^{\bar\tau} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big]\\
&\quad= \sup_{\sigma\in\mathcal{T}_{\bar\tau}^T} E\Big[\int_{\bar\tau}^{\sigma} e^{-\alpha(\bar\tau-t)} e^{-\alpha(s-\bar\tau)} L(s, x_s)\,ds + e^{-\alpha(\bar\tau-t)} e^{-\alpha(\sigma-\bar\tau)}\Psi(x_\sigma)\Big] + E\Big[\int_{\tau}^{\bar\tau} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big]\\
&\quad= \sup_{\sigma\in\mathcal{T}_{\bar\tau}^T} E\Big[\int_{\tau}^{\sigma} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\sigma-t)}\Psi(x_\sigma)\Big]\\
&\quad\le \sup_{\sigma\in\mathcal{T}_{\tau}^T} E\Big[\int_{\tau}^{\sigma} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\sigma-t)}\Psi(x_\sigma)\Big]\\
&\quad= E\big[e^{-\alpha(\tau-t)}V(\tau, x_\tau)\big],
\end{align*}
where the inequality holds because $\mathcal{T}_{\bar\tau}^T\subset\mathcal{T}_{\tau}^T$. This completes the proof. $\Box$
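In discrete time, inequality (4.66) says that the discounted value plus accumulated running reward is a supermartingale along the state process. A minimal finite-horizon sketch (binomial toy model; all data are my own, not from the text) checks both the one-step and a multi-step form:

```python
import math

# Discrete-time analogue of the dynamic-programming inequality (4.66).
N, beta, run = 15, math.exp(-0.1), 0.1     # horizon, discount, running reward
psi = lambda x: float(max(x, 0))           # terminal reward

V = {(N, x): psi(x) for x in range(-N, N + 1)}
for n in range(N - 1, -1, -1):
    for x in range(-n, n + 1):
        cont = run + beta * 0.5 * (V[(n + 1, x - 1)] + V[(n + 1, x + 1)])
        V[(n, x)] = max(psi(x), cont)

# One-step form of (4.66): V_n(x) >= run + beta * E[V_{n+1}].
one_step = all(
    V[(n, x)] >= run + beta * 0.5 * (V[(n + 1, x - 1)] + V[(n + 1, x + 1)]) - 1e-12
    for n in range(N) for x in range(-n, n + 1)
)

# Multi-step form (tau = 0, tau_bar = m): propagate expected running reward plus
# discounted V backward from level m WITHOUT the max, and compare at level 0.
m = 5
w = {x: V[(m, x)] for x in range(-m, m + 1)}
for n in range(m - 1, -1, -1):
    w = {x: run + beta * 0.5 * (w[x - 1] + w[x + 1]) for x in range(-n, n + 1)}
multi_step = V[(0, 0)] >= w[0] - 1e-12

assert one_step and multi_step
print(one_step, multi_step)  # -> True True
```

The one-step inequality is immediate from the Bellman recursion; the multi-step form follows by iterating it, exactly as (4.66) iterates the strong Markov property between $\tau$ and $\bar\tau$.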

Now let us give the definition of an $\epsilon$-optimal stopping time, which will be used in the next lemma.

Definition 4.5.4 For each $\epsilon>0$, a $G(t)$-stopping time $\tau_\epsilon\in\mathcal{T}_t^T$ is said to be $\epsilon$-optimal if
\[
0 \le V(t,\psi) - E\Big[\int_t^{\tau_\epsilon} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\tau_\epsilon-t)}V(\tau_\epsilon, x_{\tau_\epsilon})\Big] \le \epsilon.
\]

Lemma 4.5.5 Let $\theta$ be a stopping time such that $\theta\le\tau_\epsilon$ a.s. for any $\epsilon>0$, where $\tau_\epsilon\in\mathcal{T}_t^T$ is $\epsilon$-optimal. Then
\[
V(t,\psi) = E\Big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\theta-t)}V(\theta, x_\theta)\Big]. \tag{4.67}
\]

Proof. Let $\theta$ be a stopping time such that $\theta\le\tau_\epsilon$ a.s. for any $\epsilon$-optimal $\tau_\epsilon\in\mathcal{T}_t^T$. Using Lemma 4.5.3, we have
\[
E\big[e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big] \ge E\Big[\int_{\theta}^{\tau_\epsilon} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] + E\big[e^{-\alpha(\tau_\epsilon-t)}V(\tau_\epsilon, x_{\tau_\epsilon})\big].
\]
This implies that
\[
E\big[e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big] + E\Big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] \ge E\Big[\int_t^{\tau_\epsilon} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] + E\big[e^{-\alpha(\tau_\epsilon-t)}V(\tau_\epsilon, x_{\tau_\epsilon})\big]. \tag{4.68}
\]
Note that $\tau_\epsilon$ is $\epsilon$-optimal; then
\[
0 \le V(t,\psi) - E\Big[\int_t^{\tau_\epsilon} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\tau_\epsilon-t)}V(\tau_\epsilon, x_{\tau_\epsilon})\Big] \le \epsilon.
\]
On the other hand, by virtue of (4.68), we can get
\[
V(t,\psi) - E\Big[e^{-\alpha(\theta-t)}V(\theta, x_\theta) + \int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] \le V(t,\psi) - E\Big[e^{-\alpha(\tau_\epsilon-t)}V(\tau_\epsilon, x_{\tau_\epsilon}) + \int_t^{\tau_\epsilon} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big]. \tag{4.69}
\]
Thus, since also $V(t,\psi) \ge E\big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big]$ by Lemma 4.5.3, we can get
\[
0 \le V(t,\psi) - E\Big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds + e^{-\alpha(\theta-t)}V(\theta, x_\theta)\Big] \le \epsilon.
\]


Now, letting $\epsilon\to 0$ in the above inequality, we get
\[
V(t,\psi) = E\Big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] + E\big[e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big].
\]
This completes the proof. $\Box$

Theorem 4.5.6 The value function $V$ is a viscosity solution of the HJBVI (4.52).

Proof. We need to prove that the value function $V$ is both a viscosity subsolution and a viscosity supersolution of (4.52).

First, we prove that $V$ is a viscosity supersolution. Let $(t,\psi)\in[0,T]\times C$ and $\Gamma\in C^{1,2}_{lip}([0,T]\times C)\cap D(S)$ satisfy $\Gamma\le V$ on $[0,T]\times C$ and $\Gamma(t,\psi) = V(t,\psi)$. We want to prove the supersolution inequality, that is,
\[
\min\{\Gamma(t,\psi) - \Psi(\psi),\ \alpha\Gamma(t,\psi) - \partial_t\Gamma(t,\psi) - A\Gamma(t,\psi) - L(t,\psi)\} \ge 0. \tag{4.70}
\]
We know that $V\ge\Psi$ and $\Gamma(t,\psi) = V(t,\psi)$, so we have $\Gamma(t,\psi) - \Psi(\psi)\ge 0$. Therefore, we just need to prove that
\[
\alpha\Gamma(t,\psi) - \partial_t\Gamma(t,\psi) - A\Gamma(t,\psi) - L(t,\psi) \ge 0.
\]
Since $\Gamma\in C^{1,2}_{lip}([0,T]\times C)\cap D(S)$, by virtue of Theorem 4.2.3, for $t\le s\le T$ we have
\[
E[e^{-\alpha(s-t)}\Gamma(s, x_s) - \Gamma(t,\psi)] = E\Big[\int_t^s e^{-\alpha(\xi-t)}\big(\partial_\xi\Gamma(\xi, x_\xi) + A\Gamma(\xi, x_\xi) - \alpha\Gamma(\xi, x_\xi)\big)\,d\xi\Big]. \tag{4.71}
\]
For any $s\in[t,T]$, from Lemma 4.5.3 we can get
\[
V(t,\psi) \ge E\Big[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi)\,d\xi\Big] + E\big[e^{-\alpha(s-t)}V(s, x_s)\big].
\]
By virtue of (4.71), $\Gamma\le V$, and $V(t,\psi) = \Gamma(t,\psi)$, we can get
\begin{align*}
0 &\ge E\Big[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi)\,d\xi\Big] + E\big[e^{-\alpha(s-t)}V(s, x_s)\big] - V(t,\psi)\\
&\ge E\Big[\int_t^s e^{-\alpha(\xi-t)} L(\xi, x_\xi)\,d\xi\Big] + E\big[e^{-\alpha(s-t)}\Gamma(s, x_s)\big] - \Gamma(t,\psi)\\
&\ge E\Big[\int_t^s e^{-\alpha(\xi-t)}\big(L(\xi, x_\xi) + \partial_\xi\Gamma(\xi, x_\xi) + A\Gamma(\xi, x_\xi) - \alpha\Gamma(\xi, x_\xi)\big)\,d\xi\Big]. \tag{4.72}
\end{align*}
Dividing both sides of the above inequality by $(s-t)$, we have
\[
0 \ge E\Big[\frac{1}{s-t}\int_t^s e^{-\alpha(\xi-t)}\big(L(\xi, x_\xi) + \partial_\xi\Gamma(\xi, x_\xi) + A\Gamma(\xi, x_\xi) - \alpha\Gamma(\xi, x_\xi)\big)\,d\xi\Big]. \tag{4.73}
\]
Now, letting $s\downarrow t$ in (4.73), we obtain
\[
\partial_t\Gamma(t,\psi) + A\Gamma(t,\psi) + L(t,\psi) - \alpha\Gamma(t,\psi) \le 0, \tag{4.74}
\]
which proves the inequality (4.70).

Next, we want to prove that $V$ is also a viscosity subsolution of (4.52). Let $(t,\psi)\in[0,T]\times C$ and $\Gamma\in C^{1,2}_{lip}([0,T]\times C)\cap D(S)$ satisfy $\Gamma\ge V$ on $[0,T]\times C$ and $\Gamma(t,\psi) = V(t,\psi)$; we want to prove that
\[
\max\{\Psi(\psi) - \Gamma(t,\psi),\ \partial_t\Gamma(t,\psi) + A\Gamma(t,\psi) + L(t,\psi) - \alpha\Gamma(t,\psi)\} \ge 0, \tag{4.75}
\]
which is equivalent to the subsolution inequality of Definition 4.5.1. Actually, it is sufficient to show that
\[
\partial_t\Gamma(t,\psi) + A\Gamma(t,\psi) + L(t,\psi) - \alpha\Gamma(t,\psi) \ge 0. \tag{4.76}
\]
Let $\theta\in\mathcal{T}_t^T$ be a stopping time such that $\theta\le\tau_\epsilon$ for every $\epsilon$-optimal stopping time $\tau_\epsilon$. Using Lemma 4.5.5, we can get
\[
V(t,\psi) = E\Big[\int_t^{\theta} e^{-\alpha(s-t)} L(s, x_s)\,ds\Big] + E\big[e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big]. \tag{4.77}
\]
Using Dynkin's formula (see Theorem 2.4.1), we have
\[
E\big[e^{-\alpha(\theta-t)}\Gamma(\theta, x_\theta) - \Gamma(t,\psi)\big] = E\Big[\int_t^{\theta} e^{-\alpha(s-t)}\big(\partial_s\Gamma(s, x_s) + A\Gamma(s, x_s) - \alpha\Gamma(s, x_s)\big)\,ds\Big].
\]
Since $\Gamma\ge V$ and $\Gamma(t,\psi) = V(t,\psi)$, we can get
\[
E\big[e^{-\alpha(\theta-t)}V(\theta, x_\theta)\big] - V(t,\psi) \le E\Big[\int_t^{\theta} e^{-\alpha(s-t)}\big(\partial_s\Gamma(s, x_s) + A\Gamma(s, x_s) - \alpha\Gamma(s, x_s)\big)\,ds\Big].
\]
Combining this with (4.77), the above inequality implies
\[
0 \le E\Big[\int_t^{\theta} e^{-\alpha(s-t)}\big(L(s, x_s) + \partial_s\Gamma(s, x_s) + A\Gamma(s, x_s) - \alpha\Gamma(s, x_s)\big)\,ds\Big]. \tag{4.78}
\]
Dividing (4.78) by $E[\theta - t]$ and sending $E[\theta]\to t$, we deduce
\[
\partial_t\Gamma(t,\psi) + A\Gamma(t,\psi) + L(t,\psi) - \alpha\Gamma(t,\psi) \ge 0, \tag{4.79}
\]
which proves (4.75). Therefore, $V$ is also a viscosity subsolution. This completes the proof of the theorem. $\Box$

The following comparison principle is crucial for our uniqueness result. Its proof is rather lengthy but similar to the proof of the comparison principle provided in Chapter 3. We therefore provide only an outline.

Theorem 4.5.7 (Comparison Principle) Suppose $V_1(t,\psi)$ and $V_2(t,\psi)$ are both continuous with respect to the argument $(t,\psi)\in[0,T]\times C$ and are, respectively, a viscosity subsolution and a viscosity supersolution of (4.52) with at most polynomial growth; i.e., there exist constants $\Lambda>0$ and $k\ge 1$ such that
\[
|V_i(t,\psi)| \le \Lambda(1 + \|\psi\|_2)^k, \qquad \text{for } (t,\psi)\in[0,T]\times C,\ i = 1,2. \tag{4.80}
\]
Then
\[
V_1(t,\psi) \le V_2(t,\psi) \qquad \text{for all } (t,\psi)\in[0,T]\times C. \tag{4.81}
\]

Since the value function $V : [0,T]\times C\to\Re$ of Problem (OSP1) is a viscosity solution (and, hence, both a subsolution and a supersolution) of HJBVI (4.52), the uniqueness of the viscosity solution follows immediately from the above comparison principle. We therefore have the following main result of this section.

Theorem 4.5.8 The value function $V : [0,T]\times C\to\Re$ for Problem (OSP1) is the unique viscosity solution of HJBVI (4.52).

Proof. Suppose $V_1, V_2 : [0,T]\times C\to\Re$ are two viscosity solutions of HJBVI (4.52). Then each is both a subsolution and a supersolution. From Theorem 4.5.7, we have
\[
V_1(t,\psi) \le V_2(t,\psi) \le V_1(t,\psi), \qquad \forall(t,\psi)\in[0,T]\times C.
\]
Therefore, $V_1 = V_2$. This shows that the viscosity solution is unique. $\Box$

4.6 A Sketch of a Proof of Theorem 4.5.7

With the exception of a few minor differences, the proof of the comparison principle, Theorem 4.5.7, is very similar to that of Theorem 3.6.1 in Chapter 3. Instead of a detailed proof, a sketch is provided below for completeness. Let $V_1$ and $V_2$ be, respectively, a viscosity subsolution and a viscosity supersolution of (4.52). For any $0 < \delta, \gamma < 1$ and for all $\psi, \phi\in C$ and $t, s\in[0,T]$, define


\[
\Theta_{\delta\gamma}(t,s,\psi,\phi) \equiv \frac{1}{\delta}\Big[\|\psi-\phi\|_2^2 + \|\psi^0-\phi^0\|_2^2 + |t-s|^2\Big] + \gamma\Big[\exp(1 + \|\psi\|_2^2 + \|\psi^0\|_2^2) + \exp(1 + \|\phi\|_2^2 + \|\phi^0\|_2^2)\Big] \tag{4.82}
\]
and
\[
\Phi_{\delta\gamma}(t,s,\psi,\phi) \equiv V_1(t,\psi) - V_2(s,\phi) - \Theta_{\delta\gamma}(t,s,\psi,\phi), \tag{4.83}
\]
where $\psi^0, \phi^0\in C$ are defined by $\psi^0(\theta) = \frac{\theta}{-r}\psi(-r-\theta)$ and $\phi^0(\theta) = \frac{\theta}{-r}\phi(-r-\theta)$ for $\theta\in[-r,0]$. It is desirable in the following proof that $\Phi_{\delta\gamma} : [0,T]\times[0,T]\times C\times C\to\Re$ have a global maximum. To achieve this goal, we first restrict its domain to the separable Hilbert space $[0,T]\times[0,T]\times W^{1,2}((-r,0);\Re^n)\times W^{1,2}((-r,0);\Re^n)$, where $W^{1,2}((-r,0);\Re^n)$ is the real separable Sobolev space defined by
\[
W^{1,2}((-r,0);\Re^n) = \{\phi\in C \mid \phi \text{ is absolutely continuous and } \|\phi\|_{1,2} < \infty\},
\]
with $\|\phi\|_{1,2}^2 \equiv \|\phi\|_2^2 + \|\dot\phi\|_2^2$, where $\dot\phi$ is the derivative of $\phi$ in the distributional sense. Note that it can be shown that the sup-norm $\|\cdot\|$ is weaker than the Hilbertian norm $\|\cdot\|_{1,2}$ on this space; that is, there exists a constant $K>0$ such that
\[
\|\phi\| \le K\|\phi\|_{1,2}, \qquad \forall\phi\in W^{1,2}((-r,0);\Re^n).
\]
Indeed, from the Sobolev embedding theorems it is known that $W^{1,2}((-r,0);\Re^n)\subset C$, and $W^{1,2}((-r,0);\Re^n)$ is dense in $C$. Using the polynomial growth condition for $V_1$ and $V_2$, we observe that
\[
\lim_{\|\psi\|_2 + \|\phi\|_2\to\infty}\Phi_{\delta\gamma}(t,s,\psi,\phi) = -\infty. \tag{4.84}
\]
The function $\Phi_{\delta\gamma}$ is real-valued, bounded above, and continuous on $[0,T]\times[0,T]\times W^{1,2}((-r,0);\Re^n)\times W^{1,2}((-r,0);\Re^n)$ (since the sup-norm is weaker than the Hilbertian norm $\|\cdot\|_{1,2}$). Therefore, from Lemma 3.6.2 in Chapter 3, for any $1 > \epsilon > 0$ there exists a continuous linear functional $T_\epsilon$ in the topological dual of $W^{1,2}((-r,0);\Re^n)\times W^{1,2}((-r,0);\Re^n)$, with norm at most $\epsilon$, such that the function $\Phi_{\delta\gamma} + T_\epsilon$ attains its maximum on $[0,T]\times[0,T]\times W^{1,2}((-r,0);\Re^n)\times W^{1,2}((-r,0);\Re^n)$. Let us denote by $(t_{\delta\gamma}, s_{\delta\gamma}, \psi_{\delta\gamma}, \phi_{\delta\gamma})$ the global maximum of $\Phi_{\delta\gamma} + T_\epsilon$ on this space. Without loss of generality, we assume that for any given $\delta$, $\gamma$, and $\epsilon$, there exists a constant $M_{\delta\gamma}$ such that the maximum value of $\Phi_{\delta\gamma} + T_\epsilon + M_{\delta\gamma}$ is zero. In other words, we have
\[
\Phi_{\delta\gamma}(t_{\delta\gamma}, s_{\delta\gamma}, \psi_{\delta\gamma}, \phi_{\delta\gamma}) + T_\epsilon(\psi_{\delta\gamma}, \phi_{\delta\gamma}) + M_{\delta\gamma} = 0. \tag{4.85}
\]
The lemmas stated in the remainder of this section are taken from Section 3.6. The reader is referred to that section for proofs.
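The norm comparison invoked here can be checked numerically. The inequality that actually holds on $W^{1,2}((-r,0);\Re^n)$ is $\|\phi\| \le K\|\phi\|_{1,2}$ (the reverse fails for highly oscillatory $\phi$, whose derivative norm blows up). The sketch below uses $r = 1$ and the admissible constant $K = \sqrt{2}$, which is my own choice derived from the standard one-dimensional embedding estimate; the sample functions are also mine:

```python
import numpy as np

# Check the 1-D Sobolev embedding on (-1, 0): sup|phi| <= sqrt(2) * ||phi||_{1,2},
# where ||phi||_{1,2}^2 = ||phi||_2^2 + ||phi'||_2^2.

theta = np.linspace(-1.0, 0.0, 4001)
d = theta[1] - theta[0]

def norms(phi):
    dphi = np.gradient(phi, d)                 # discrete derivative
    l2 = np.sqrt(np.sum(phi**2) * d)
    dl2 = np.sqrt(np.sum(dphi**2) * d)
    return np.max(np.abs(phi)), np.sqrt(l2**2 + dl2**2)

samples = [np.sin(7 * theta), theta**2 - 0.3, np.exp(theta) * np.cos(20 * theta)]
for phi in samples:
    sup, h12 = norms(phi)
    assert sup <= np.sqrt(2) * h12 + 1e-8      # ||phi|| <= K ||phi||_{1,2}
print("embedding inequality holds on all samples")
```

An inequality in the opposite direction, $\|\phi\|_{1,2} \le K\|\phi\|$, cannot hold: take $\phi(\theta) = \sin(n\theta)$, whose sup-norm stays bounded while $\|\dot\phi\|_2$ grows like $n$.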


Lemma 4.6.1 $(t_{\delta\gamma}, s_{\delta\gamma}, \psi_{\delta\gamma}, \phi_{\delta\gamma})$ is the global maximum of $\Phi_{\delta\gamma} + T_\epsilon$ in $[0,T]\times[0,T]\times C\times C$.

Lemma 4.6.2 For each fixed $\gamma>0$, we can find a constant $\Lambda_\gamma>0$ such that
\[
\|\psi_{\delta\gamma}\|_2 + \|\psi^0_{\delta\gamma}\|_2 + \|\phi_{\delta\gamma}\|_2 + \|\phi^0_{\delta\gamma}\|_2 \le \Lambda_\gamma \tag{4.86}
\]
and
\[
\lim_{\epsilon\downarrow 0,\ \delta\downarrow 0}\Big[\|\psi_{\delta\gamma} - \phi_{\delta\gamma}\|_2^2 + \|\psi^0_{\delta\gamma} - \phi^0_{\delta\gamma}\|_2^2 + |t_{\delta\gamma} - s_{\delta\gamma}|^2\Big] = 0. \tag{4.87}
\]

Now let us introduce the functional $F : C\to\Re$ defined by
\[
F(\psi) \equiv \|\psi\|_2^2 \tag{4.88}
\]
and the linear map $H : C\to C$ defined by
\[
H(\psi)(\theta) \equiv \frac{\theta}{-r}\,\psi(-r-\theta) = \psi^0(\theta), \qquad \theta\in[-r,0]. \tag{4.89}
\]
Note that $H(\psi)(0) = \psi^0(0) = 0$ and $H(\psi)(-r) = \psi^0(-r) = \psi(0)$. It is not hard to show that the map $F$ is Fréchet differentiable and its derivative is given by $DF(u)h = 2(u|h)$, where $(\cdot|\cdot)$ denotes the inner product on $L^2((-r,0);\Re^n)$. This comes from the fact that $\|\psi+h\|_2^2 - \|\psi\|_2^2 = 2(\psi|h) + \|h\|_2^2$, and we can always find a constant $\Lambda>0$ such that
\[
\frac{\big|\,\|\psi+h\|_2^2 - \|\psi\|_2^2 - 2(\psi|h)\,\big|}{\|h\|} = \frac{\|h\|_2^2}{\|h\|} \le \frac{\Lambda\|h\|^2}{\|h\|} = \Lambda\|h\| \to 0 \quad\text{as } \|h\|\to 0. \tag{4.90}
\]
Moreover, we have $DF(\psi+h) - DF(\psi) = 2(\psi+h|\cdot) - 2(\psi|\cdot) = 2(h|\cdot)$. We deduce that $F$ is twice Fréchet differentiable and $D^2F(u)(h,k) = 2(h|k)$. In addition, the map $H$ is linear and bounded, thus twice Fréchet differentiable; therefore, $DH(\psi)(h) = H(h)$ and $D^2H(\psi)(h,k) = 0$ for all $\psi, h, k\in C$. From the definition of $\Theta_{\delta\gamma}$ and the definition of $F$, we get
\[
\Theta_{\delta\gamma}(t,s,\psi,\phi) = \frac{1}{\delta}\big[F(\psi-\phi) + F(\psi^0-\phi^0) + |t-s|^2\big] + \gamma\big[e^{1+F(\psi)+F(H(\psi))} + e^{1+F(\phi)+F(H(\phi))}\big].
\]
Given the above chain rule, $\Theta_{\delta\gamma}$ is Fréchet differentiable. Actually, for $h, k\in C$, we get
\[
D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h) = \frac{2}{\delta}\big[(\psi-\phi|h) + (H(\psi-\phi)|H(h))\big] + 2\gamma e^{1+F(\psi)+F(H(\psi))}\big[(\psi|h) + (H(\psi)|H(h))\big]. \tag{4.91}
\]


Similarly,
\[
D_\phi\Theta_{\delta\gamma}(t,s,\psi,\phi)(k) = \frac{2}{\delta}\big[(\phi-\psi|k) + (H(\phi-\psi)|H(k))\big] + 2\gamma e^{1+F(\phi)+F(H(\phi))}\big[(\phi|k) + (H(\phi)|H(k))\big]. \tag{4.92}
\]
Furthermore,
\[
D_\psi^2\Theta_{\delta\gamma}(t,s,\psi,\phi)(h,k) = \frac{2}{\delta}\big[(h|k) + (H(h)|H(k))\big] + 2\gamma e^{1+F(\psi)+F(H(\psi))}\Big[2\big((\psi|k) + (H(\psi)|H(k))\big)\big((\psi|h) + (H(\psi)|H(h))\big) + (k|h) + (H(k)|H(h))\Big]. \tag{4.93}
\]
Similarly,
\[
D_\phi^2\Theta_{\delta\gamma}(t,s,\psi,\phi)(h,k) = \frac{2}{\delta}\big[(h|k) + (H(h)|H(k))\big] + 2\gamma e^{1+F(\phi)+F(H(\phi))}\Big[2\big((\phi|k) + (H(\phi)|H(k))\big)\big((\phi|h) + (H(\phi)|H(h))\big) + (k|h) + (H(k)|H(h))\Big]. \tag{4.94}
\]
By the Hahn-Banach theorem (see, e.g., Siddiqi [Sid04]), we can extend the continuous linear functional $T_\epsilon$ to the space $C\times C$ with its norm preserved. Since $T_\epsilon$ is linear, its first-order Fréchet derivative is just $T_\epsilon$, that is,
\[
D_\psi T_\epsilon(\psi,\phi)h = T_\epsilon(h,\phi), \qquad D_\phi T_\epsilon(\psi,\phi)k = T_\epsilon(\psi,k), \qquad \text{for all } \psi,\phi,h,k\in C.
\]
Also, for the second derivatives, we have
\[
D_\psi^2 T_\epsilon(\psi,\phi)(h,k) = 0, \qquad D_\phi^2 T_\epsilon(\psi,\phi)(h,k) = 0, \qquad \text{for all } \psi,\phi,h,k\in C.
\]
Observe that we can extend $D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)$ and $D_\psi^2\Theta_{\delta\gamma}(t,s,\psi,\phi)$, the first- and second-order Fréchet derivatives of $\Theta_{\delta\gamma}$ with respect to $\psi$, to the space $C\oplus B$ (see Lemma 2.2.3 and Lemma 2.2.4) by setting
\[
D_\psi\Theta_{\delta\gamma}(t,s,\psi,\phi)(h + v1_{\{0\}}) = \frac{2}{\delta}\big[(\psi-\phi|h+v1_{\{0\}}) + (H(\psi-\phi)|H(h+v1_{\{0\}}))\big] + 2\gamma e^{1+F(\psi)+F(H(\psi))}\big[(\psi|h+v1_{\{0\}}) + (H(\psi)|H(h+v1_{\{0\}}))\big] \tag{4.95}
\]


and

D²_ψΘ_{δγ}(t,s,ψ,φ)(h + v1_{{0}}, k + w1_{{0}}) = (2/δ)[(h + v1_{{0}}|k + w1_{{0}}) + (H(h + v1_{{0}})|H(k + w1_{{0}}))] + 2γe^{1+F(ψ)+F(H(ψ))}[2((ψ|k + w1_{{0}}) + (H(ψ)|H(k + w1_{{0}})))((ψ|h + v1_{{0}}) + (H(ψ)|H(h + v1_{{0}}))) + (k + w1_{{0}}|h + v1_{{0}}) + (H(k + w1_{{0}})|H(h + v1_{{0}}))]   (4.96)

for v, w ∈ ℝⁿ and h, k ∈ C. Moreover, it is easy to see that these extensions are continuous in that there exists a constant Λ > 0 such that

|(ψ−φ|h + v1_{{0}})| ≤ ‖ψ−φ‖₂ · ‖h + v1_{{0}}‖₂ ≤ Λ‖ψ−φ‖₂(‖h‖ + |v|),   (4.97)
|(ψ|h + v1_{{0}})| ≤ ‖ψ‖₂ · ‖h + v1_{{0}}‖₂ ≤ Λ‖ψ‖₂(‖h‖ + |v|),   (4.98)
|(ψ|k + w1_{{0}})| ≤ ‖ψ‖₂ · ‖k + w1_{{0}}‖₂ ≤ Λ‖ψ‖₂(‖k‖ + |w|),   (4.99)

and

|(k + w1_{{0}}|h + v1_{{0}})| ≤ ‖k + w1_{{0}}‖₂ · ‖h + v1_{{0}}‖₂ ≤ Λ(‖k‖ + |w|)(‖h‖ + |v|).   (4.100)

Similarly, we can extend the first- and second-order Fréchet derivatives of Θ_{δγ} with respect to φ to the space C ⊕ B and obtain similar expressions for D_φΘ_{δγ}(t,s,ψ,φ)(k + w1_{{0}}) and D²_φΘ_{δγ}(t,s,ψ,φ)(h + v1_{{0}}, k + w1_{{0}}). The same is also true for the bounded linear functional T, whose extension is still written as T. In addition, it is easy to verify that for any φ ∈ C and v, w ∈ ℝⁿ, we have

(φ|v1_{{0}}) = ∫_{−r}^{0} ⟨φ(s), v1_{{0}}(s)⟩ ds = 0,   (4.101)
(w1_{{0}}|v1_{{0}}) = ∫_{−r}^{0} ⟨w1_{{0}}(s), v1_{{0}}(s)⟩ ds = 0,   (4.102)
H(v1_{{0}}) = v1_{{−r}},   (4.103)
(H(ψ)|H(v1_{{0}})) = 0,  (H(w1_{{0}})|H(v1_{{0}})) = 0.   (4.104)

These observations will be used later. Next, we need several lemmas about the operator S.


Lemma 4.6.3 Given φ ∈ C, we have

S(F)(φ) = |φ(0)|² − |φ(−r)|²,   (4.105)
S(F)(φ⁰) = −|φ(0)|²,   (4.106)

where F is the functional defined in (4.88) and S is the operator defined in (4.8).

Let S_ψ and S_φ denote the operator S applied to ψ and φ, respectively. We have the following lemma.

Lemma 4.6.4 Given φ, ψ ∈ C,

S_ψ(F)(φ−ψ) + S_φ(F)(φ−ψ) = |ψ(0)−φ(0)|² − |ψ(−r)−φ(−r)|²   (4.107)

and

S_ψ(F)(φ⁰−ψ⁰) + S_φ(F)(φ⁰−ψ⁰) = −|ψ(0)−φ(0)|²,   (4.108)

where F is the functional defined in (4.88) and S is the operator defined in (4.8).

Lemma 4.6.5 Given φ ∈ C, we define a new operator G as follows:

G(φ) = e^{1+F(φ)+F(φ⁰)}.   (4.109)

We have

S(G)(φ) = −|φ(−r)|² e^{1+F(φ)+F(φ⁰)},   (4.110)

where F is the functional defined in (4.88) and S is the operator defined in (4.8).

Lemma 4.6.6 For any ψ, φ ∈ C, we have

lim_{ε↓0} |S_ψ(T)(ψ,φ)| = 0  and  lim_{ε↓0} |S_φ(T)(ψ,φ)| = 0.   (4.111)
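The identity (4.110) follows from Lemma 4.6.3 once one grants that S obeys the usual first-order chain rule on this class of exponential functionals (a step the text leaves implicit); a sketch of the computation, under that assumption, is:

```latex
% Sketch of (4.110) from (4.105)-(4.106). Assumption: S satisfies the
% first-order chain rule S(e^{u})(\phi) = S(u)(\phi)\, e^{u(\phi)}
% on functionals of the form u = 1 + F(\cdot) + F((\cdot)^{0}).
\begin{aligned}
S(G)(\phi)
  &= \bigl[\, S(F)(\phi) + S(F)(\phi^{0}) \,\bigr]\, e^{1+F(\phi)+F(\phi^{0})} \\
  &= \bigl[\, |\phi(0)|^{2} - |\phi(-r)|^{2} - |\phi(0)|^{2} \,\bigr]\, G(\phi)
   = -\,|\phi(-r)|^{2}\, G(\phi).
\end{aligned}
```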

Given all of the above results, we are now ready to prove Theorem 4.5.7.

Proof of Theorem 4.5.7. Define

Γ₁(t,ψ) ≡ V₂(s_{δγ}, φ_{δγ}) + Θ_{δγ}(t, s_{δγ}, ψ, φ_{δγ}) − T(ψ, φ_{δγ}) − M_{δγ}   (4.112)

and

Γ₂(s,φ) ≡ V₁(t_{δγ}, ψ_{δγ}) − Θ_{δγ}(t_{δγ}, s, ψ_{δγ}, φ) + T(ψ_{δγ}, φ) + M_{δγ}   (4.113)

for all s, t ∈ [0,T] and ψ, φ ∈ C. Recall that


Φ_{δγ}(t,s,ψ,φ) = V₁(t,ψ) − V₂(s,φ) − Θ_{δγ}(t,s,ψ,φ),

and Φ_{δγ} + T + M_{δγ} reaches its maximum value zero at (t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) in [0,T] × [0,T] × C × C. By the definition of Γ₁ and Γ₂, it is easy to verify that

Γ₁(t,ψ) ≥ V₁(t,ψ),  Γ₂(s,φ) ≤ V₂(s,φ),  ∀t, s ∈ [0,T] and φ, ψ ∈ C,

and V₁(t_{δγ}, ψ_{δγ}) = Γ₁(t_{δγ}, ψ_{δγ}) and V₂(s_{δγ}, φ_{δγ}) = Γ₂(s_{δγ}, φ_{δγ}). Using the definitions of the viscosity subsolution of V₁ and Γ₁, we have

min{ V₁(t_{δγ}, ψ_{δγ}) − Ψ(t_{δγ}, ψ_{δγ}), αV₁(t_{δγ}, ψ_{δγ}) − ∂_tΓ₁(t_{δγ}, ψ_{δγ}) − A(Γ₁)(t_{δγ}, ψ_{δγ}) − L(t_{δγ}, ψ_{δγ}) } ≤ 0.   (4.114)

By the definitions of the operator A and Γ₁ and the fact that the second-order Fréchet derivatives of T vanish, we have, by combining (4.91), (4.92), (4.93), (4.94), (4.95), (4.96), (4.101), (4.102), and (4.104),

A(Γ₁)(t_{δγ}, ψ_{δγ}) = S(Γ₁)(t_{δγ}, ψ_{δγ}) + D_ψΘ_{δγ}(···)(f(t_{δγ}, ψ_{δγ})1_{{0}}) + (1/2) Σ_{j=1}^{m} D²_ψΘ_{δγ}(···)(g(t_{δγ}, ψ_{δγ})(e_j)1_{{0}}, g(t_{δγ}, ψ_{δγ})(e_j)1_{{0}}) − D_ψT(ψ_{δγ}, φ_{δγ})(f(t_{δγ}, ψ_{δγ})1_{{0}})
= S(Γ₁)(t_{δγ}, ψ_{δγ}) − T(f(t_{δγ}, ψ_{δγ})1_{{0}}, φ_{δγ}).

Note that Θ_{δγ}(···) is an abbreviation for Θ_{δγ}(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) in the above equation and in what follows. Combining (4.114) with the above equality yields

min{ V₁(t_{δγ}, ψ_{δγ}) − Ψ(t_{δγ}, ψ_{δγ}), αV₁(t_{δγ}, ψ_{δγ}) − S(Γ₁)(t_{δγ}, ψ_{δγ}) − ∂_tΓ₁(t_{δγ}, ψ_{δγ}) − [−T(f(t_{δγ}, ψ_{δγ})1_{{0}}, φ_{δγ}) + L(t_{δγ}, ψ_{δγ})] } ≤ 0.   (4.115)

Similarly, using the definitions of the viscosity supersolution of V₂ and Γ₂ and techniques similar to those for (4.115), we have

min{ V₂(s_{δγ}, φ_{δγ}) − Ψ(s_{δγ}, φ_{δγ}), αV₂(s_{δγ}, φ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ}) − ∂_sΓ₂(s_{δγ}, φ_{δγ}) − [T(ψ_{δγ}, f(s_{δγ}, φ_{δγ})1_{{0}}) + L(s_{δγ}, φ_{δγ})] } ≥ 0.   (4.116)


Inequality (4.115) is equivalent to

V₁(t_{δγ}, ψ_{δγ}) − Ψ(t_{δγ}, ψ_{δγ}) ≤ 0   (4.117)

or

αV₁(t_{δγ}, ψ_{δγ}) − S(Γ₁)(t_{δγ}, ψ_{δγ}) − (2/δ)(t_{δγ} − s_{δγ}) − [−T(f(t_{δγ}, ψ_{δγ})1_{{0}}, φ_{δγ}) + L(t_{δγ}, ψ_{δγ})] ≤ 0.   (4.118)

Similarly, Inequality (4.116) is equivalent to

V₂(s_{δγ}, φ_{δγ}) − Ψ(s_{δγ}, φ_{δγ}) ≥ 0   (4.119)

or

αV₂(s_{δγ}, φ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ}) − (2/δ)(s_{δγ} − t_{δγ}) − [T(ψ_{δγ}, f(s_{δγ}, φ_{δγ})1_{{0}}) + L(s_{δγ}, φ_{δγ})] ≥ 0.   (4.120)

If (4.117) holds, then using (4.119) we get that there exists a constant Λ > 0 such that

V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ}) ≤ Ψ(t_{δγ}, ψ_{δγ}) − Ψ(s_{δγ}, φ_{δγ}) ≤ Λ[|t_{δγ} − s_{δγ}| + ‖ψ_{δγ} − φ_{δγ}‖₂].   (4.121)

Thus, applying Lemma 4.6.2 to (4.121), we have

lim sup_{δ↓0, ε↓0} (V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ})) ≤ 0.   (4.122)

On the other hand, by virtue of (4.118) and (4.120), we obtain

α(V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ})) ≤ S(Γ₁)(t_{δγ}, ψ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ}) + (4/δ)(t_{δγ} − s_{δγ}) + [L(t_{δγ}, ψ_{δγ}) − T(f(t_{δγ}, ψ_{δγ})1_{{0}}, φ_{δγ})] − [L(s_{δγ}, φ_{δγ}) + T(ψ_{δγ}, f(s_{δγ}, φ_{δγ})1_{{0}})].   (4.123)

From definition (4.8) of S, it is clear that S is linear and vanishes on constants. Recall that

Γ₁(t,ψ) = V₂(s_{δγ}, φ_{δγ}) + Θ_{δγ}(t, s_{δγ}, ψ, φ_{δγ}) − T(ψ, φ_{δγ}) − M_{δγ}   (4.124)

and

Γ₂(s,φ) = V₁(t_{δγ}, ψ_{δγ}) − Θ_{δγ}(t_{δγ}, s, ψ_{δγ}, φ) + T(ψ_{δγ}, φ) + M_{δγ}.   (4.125)


Thus we have

S(Γ₁)(t_{δγ}, ψ_{δγ}) = S_ψ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) − S_ψ(T)(ψ_{δγ}, φ_{δγ})   (4.126)

and

S(Γ₂)(s_{δγ}, φ_{δγ}) = −S_φ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) + S_φ(T)(ψ_{δγ}, φ_{δγ}).   (4.127)

Therefore,

S(Γ₁)(t_{δγ}, ψ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ}) = S_ψ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) + S_φ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) − [S_ψ(T)(ψ_{δγ}, φ_{δγ}) + S_φ(T)(ψ_{δγ}, φ_{δγ})].   (4.128)

Recall that

Θ_{δγ}(t,s,ψ,φ) = (1/δ)[F(ψ−φ) + F(ψ⁰−φ⁰) + |t−s|²] + γ(G(ψ) + G(φ)).

Therefore, we have

[S_ψ(Θ_{δγ}) + S_φ(Θ_{δγ})](t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) ≡ S_ψ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) + S_φ(Θ_{δγ})(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ})
= (1/δ)[S_ψ(F)(ψ_{δγ} − φ_{δγ}) + S_φ(F)(ψ_{δγ} − φ_{δγ}) + S_ψ(F)(ψ⁰_{δγ} − φ⁰_{δγ}) + S_φ(F)(ψ⁰_{δγ} − φ⁰_{δγ})] + γ[S_ψ(G)(ψ_{δγ}) + S_φ(G)(φ_{δγ})].   (4.129)

Using Lemma 4.6.4 and Lemma 4.6.5, we deduce

[S_ψ(Θ_{δγ}) + S_φ(Θ_{δγ})](t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) = −(1/δ)|ψ_{δγ}(−r) − φ_{δγ}(−r)|² − γ[|ψ_{δγ}(−r)|² e^{1+F(ψ_{δγ})+F(ψ⁰_{δγ})} + |φ_{δγ}(−r)|² e^{1+F(φ_{δγ})+F(φ⁰_{δγ})}] ≤ 0.   (4.130)

Thus, by virtue of (4.128) and Lemma 4.6.6, we have

lim sup_{δ↓0, ε↓0} [S(Γ₁)(t_{δγ}, ψ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ})] ≤ 0.   (4.131)

Moreover, we know that the norm of T is less than ε; thus, for any γ > 0, using (4.128) and taking the lim sup on both sides of (4.123) as δ and ε go to 0, we obtain


lim sup_{ε↓0, δ↓0} α(V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ}))
≤ lim sup_{ε↓0, δ↓0} { S(Γ₁)(t_{δγ}, ψ_{δγ}) − S(Γ₂)(s_{δγ}, φ_{δγ}) + [L(t_{δγ}, ψ_{δγ}) − T(f(t_{δγ}, ψ_{δγ})1_{{0}}, φ_{δγ})] − [L(s_{δγ}, φ_{δγ}) + T(ψ_{δγ}, f(s_{δγ}, φ_{δγ})1_{{0}})] }
≤ lim sup_{ε↓0, δ↓0} |L(t_{δγ}, ψ_{δγ}) − L(s_{δγ}, φ_{δγ})|.   (4.132)

Using the Lipschitz continuity of L and Lemma 4.6.2, we see that

lim sup_{δ↓0, ε↓0} |L(t_{δγ}, ψ_{δγ}) − L(s_{δγ}, φ_{δγ})| ≤ lim sup_{δ↓0, ε↓0} C[|t_{δγ} − s_{δγ}| + ‖ψ_{δγ} − φ_{δγ}‖₂] = 0;   (4.133)

moreover, by virtue of (4.133), we get

lim sup_{ε↓0, δ↓0} α(V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ})) ≤ 0.   (4.134)

Since (t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) is the maximum of Φ_{δγ} + T in [0,T] × [0,T] × C × C, for all (t,ψ) ∈ [0,T] × C we have

Φ_{δγ}(t,t,ψ,ψ) + T(ψ,ψ) ≤ Φ_{δγ}(t_{δγ}, s_{δγ}, ψ_{δγ}, φ_{δγ}) + T(ψ_{δγ}, φ_{δγ}).   (4.135)

Then we get

V₁(t,ψ) − V₂(t,ψ) ≤ V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ}) − (1/δ)[‖ψ_{δγ} − φ_{δγ}‖₂² + ‖ψ⁰_{δγ} − φ⁰_{δγ}‖₂² + |t_{δγ} − s_{δγ}|²] + 2γ exp(1 + ‖ψ‖₂² + ‖ψ⁰‖₂²) − γ[exp(1 + ‖ψ_{δγ}‖₂² + ‖ψ⁰_{δγ}‖₂²) + exp(1 + ‖φ_{δγ}‖₂² + ‖φ⁰_{δγ}‖₂²)] + T(ψ_{δγ}, φ_{δγ}) − T(ψ,ψ)
≤ V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ}) + 2γ exp(1 + ‖ψ‖₂² + ‖ψ⁰‖₂²) + T(ψ_{δγ}, φ_{δγ}) − T(ψ,ψ),   (4.136)

where the last inequality comes from the fact that δ > 0 and γ > 0. By virtue of (4.122) and (4.134), taking the lim sup in (4.136) as δ, ε, and γ go to zero, we obtain


V₁(t,ψ) − V₂(t,ψ) ≤ lim sup_{γ↓0, ε↓0, δ↓0} { V₁(t_{δγ}, ψ_{δγ}) − V₂(s_{δγ}, φ_{δγ}) + 2γ exp(1 + ‖ψ‖₂² + ‖ψ⁰‖₂²) + T(ψ_{δγ}, φ_{δγ}) − T(ψ,ψ) } ≤ 0.   (4.137)

Therefore, we have

V₁(t,ψ) ≤ V₂(t,ψ),  ∀(t,ψ) ∈ [0,T] × C.   (4.138)

This completes the proof of Theorem 4.5.7. □

4.7 Conclusions and Remarks

This chapter investigates an optimal stopping problem for a general system of SHDEs with a bounded delay. Using the concept and a construction of the least superharmonic majorant (see Shiryayev [Shi78] and Øksendal [Øks00]), the existence and uniqueness results are extended to the optimal stopping problem (OSP01). An infinite-dimensional HJBVI is derived using a Bellman-type dynamic programming principle, and it is shown that the value function is the unique viscosity solution of the HJBVI. Due to the length of the current chapter, computational issues for the HJBVI are not addressed here.

5 Discrete Approximations

In this chapter we address some computational issues and propose various discrete approximations for the optimal classical control problem considered in Chapter 3. Although these approximations may well be applicable to the optimal stopping problems outlined in Chapter 4, we do not attempt to deal with them here, for reasons of space. The methods of discrete approximation for the optimal classical control problem include (1) a two-step semidiscretization scheme; (2) a Markov chain approximation; and (3) a finite difference approximation. These three methods are described in Sections 5.2, 5.3, and 5.4, respectively. Roughly speaking, the two-step semidiscretization scheme discretizes only the time variable of the solution process, not the spatial variable, and is therefore not effective in providing an explicit numerical approximation of the solution to the problem. However, the scheme provides an error bound, or rate of convergence, that the other two methods fail to provide. The Markov chain approximation for controlled diffusions (without delay) originated in a series of works by Kushner and his collaborators (see [Kus77]) and was extended to various cases and summarized in Kushner and Dupuis [KD01]. The idea behind this method is to approximate the original optimal control problem by a sequence of discrete controlled Markov chain problems in which the mean and covariance of the controlled Markov chains satisfy the so-called local consistency condition. As an extension to controlled diffusions, the Markov chain approximation for a certain class of optimal control problems involving stochastic differential equations with delays has recently been developed by Kushner [Kus05, Kus06] and Fischer and Reiss [FR06].
The basic idea behind the two-step semidiscretization scheme and the Markov chain approximation is to approximate the original control problem by a control problem with a suitable discretization approximation, solve the Bellman equation for the approximation, and then prove the convergence of the value functions to that of the original control problem when the discretization mesh goes to zero.


The main ideas behind the two-step semidiscretization scheme presented in Section 5.2 are derived from Fischer and Nappo [FN07]. The materials in Section 5.3 are mainly due to [Kus05], [Kus06], and [FR06]. The finite difference scheme presented in Section 5.4 deviates from those of Sections 5.2 and 5.3. The result presented there is an extension of one obtained by Barles and Souganidis [BS91]; it approximates the viscosity solution of the infinite-dimensional HJBE directly. The convergence result for the approximation is given, and the computational algorithm based on it is also summarized.

5.1 Preliminaries

The optimal classical control problem treated in Chapter 3 is briefly restated for the readers' convenience.

Optimal Classical Control Problem. Given any initial datum (t,ψ) ∈ [0,T] × C, find an admissible control u*(·) ∈ U[t,T] that maximizes the objective functional J(t,ψ;u(·)) defined by

J(t,ψ;u(·)) = E[ ∫_t^T e^{−α(s−t)} L(s, x_s(·;t,ψ,u(·)), u(s)) ds + e^{−α(T−t)} Ψ(x_T(·;t,ψ,u(·))) ]   (5.1)

and subject to the following controlled SHDE with a bounded memory 0 < r < ∞:

dx(s) = f(s, x_s, u(s)) ds + g(s, x_s, u(s)) dW(s),  s ∈ [t,T].   (5.2)

Again, the value function V : [0,T] × C → ℝ is defined by

V(t,ψ) = sup_{u(·)∈U[t,T]} J(t,ψ;u(·)).   (5.3)

To obtain more explicit results on the error bound and/or rate of convergence for the semidiscretization scheme in Section 5.2, we often use the following set of conditions instead of Assumption 3.1.1 in Chapter 3.

Assumption 5.1.1 The functions f, g, L, and Ψ satisfy the following conditions:

(A5.1.1). (Measurability) f : [0,T] × C × U → ℝⁿ, g : [0,T] × C × U → ℝ^{n×m}, L : [0,T] × C × U → ℝ, and Ψ : C → ℝ are Borel measurable.

(A5.1.2). (Boundedness) The functions f, g, L, and Ψ are uniformly bounded by a constant K_b > 0, that is,

|f(t,φ,u)| + |g(t,φ,u)| + |L(t,φ,u)| + |Ψ(φ)| ≤ K_b,


∀(t,φ,u) ∈ [0,T] × C × U.

(A5.1.3). (Uniform Lipschitz and Hölder condition) There is a constant K_lip > 0 such that for all φ, ϕ ∈ C, t, s ∈ [0,T], and u ∈ U,

|f(s,φ,u) − f(t,ϕ,u)| + |g(s,φ,u) − g(t,ϕ,u)| ≤ K_lip(‖φ − ϕ‖ + √|t − s|),
|L(s,φ,u) − L(t,ϕ,u)| + |Ψ(φ) − Ψ(ϕ)| ≤ K_lip(‖φ − ϕ‖ + √|t − s|).

(A5.1.4). (Continuity in the Control) f(t,φ,·), g(t,φ,·), and L(t,φ,·) are continuous functions on U for any (t,φ) ∈ [0,T] × C.

Note that the linear growth condition of the previous chapters has been replaced by the uniform boundedness Assumption (A5.1.2) for convenience of the convergence analysis.

5.1.1 Temporal and Spatial Discretizations

Let N ∈ ℕ ≡ {1, 2, ...} (the set of all positive integers). In order to construct the Nth approximation of the optimal classical control problem, we set h^{(N)} := r/N and define ⌊·⌋_N by ⌊t⌋_N := h^{(N)} ⌊t/h^{(N)}⌋, where ⌊a⌋ is the integer part of the real number a ∈ ℝ. We also set T^{(N)} := h^{(N)} ⌊T/h^{(N)}⌋ (so that T^{(N)} = ⌊T⌋_N) and I^{(N)} := {k h^{(N)} | k = 0, 1, 2, ...} ∩ [0,T]. As T is the time horizon for the original control problem, T^{(N)} will be the time horizon for the Nth approximating problem. It is clear that T^{(N)} → T and ⌊t⌋_N → t for any t ∈ [0,T]. The set I^{(N)} is the time grid of discretization degree N.

Let π^{(N)} be the partition of the interval [−r, 0], that is,

π^{(N)} : −r = −N h^{(N)} < (−N+1) h^{(N)} < ··· < −h^{(N)} < 0.

Define π̃^{(N)} : C → (ℝⁿ)^{N+1} as the (N+1)-point-mass projection of a continuous function φ ∈ C based on the partition π^{(N)}, that is,

π̃^{(N)} φ = (φ(−N h^{(N)}), φ((−N+1) h^{(N)}), ..., φ(−h^{(N)}), φ(0)).

Define Π^{(N)} : (ℝⁿ)^{N+1} → C by Π^{(N)} x = x̃ for each

x = (x(−N h^{(N)}), x((−N+1) h^{(N)}), ..., x(−h^{(N)}), x(0)) ∈ (ℝⁿ)^{N+1},

where x̃ ∈ C is obtained by linear interpolation between the two consecutive time-space points (kh, x(kh)) and ((k+1)h, x((k+1)h)). Therefore, if θ ∈ [−r,0] and θ = kh for some k = −N, −N+1, ..., −1, 0, then x̃(kh) = x(kh). If θ ∈ [−r,0] is such that kh < θ < (k+1)h for some k = −N, −N+1, ..., −1, then

x̃(θ) = x(kh) + (x((k+1)h) − x(kh))(θ − kh)/h.

With a slight abuse of notation, we also denote by Π^{(N)} : C → C the operator that maps a function ϕ ∈ C to its piecewise linear interpolation Π^{(N)} ϕ on the grid π^{(N)}; that is, if θ ∈ [−(k+1)h^{(N)}, −k h^{(N)}] for some k = 0, 1, 2, ..., N−1,


(Π^{(N)} ϕ)(θ) = ϕ(−(k+1)h^{(N)}) + (ϕ(−k h^{(N)}) − ϕ(−(k+1)h^{(N)}))(θ + (k+1)h^{(N)}) / h^{(N)}.
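For n = 1, the two grid operators above can be sketched in a few lines of code (an illustration of the construction, not code from the book; the names `pi_tilde`, `Pi`, `r`, and `N` are mine):

```python
# Sketch of the grid operators of Section 5.1.1 for n = 1 (illustrative only):
# pi_tilde projects a continuous path phi on [-r, 0] to its N+1 grid values;
# Pi linearly interpolates grid values back to a function on [-r, 0].

def pi_tilde(phi, r, N):
    """(N+1)-point-mass projection: values of phi at -r, -r+h, ..., 0."""
    h = r / N
    return [phi(-N * h + k * h) for k in range(N + 1)]

def Pi(x, r, N):
    """Piecewise linear interpolation of grid values x back onto [-r, 0]."""
    h = r / N
    def x_tilde(theta):
        # grid cell index k in 0..N-1 with -r + k*h <= theta <= -r + (k+1)*h
        k = min(int((theta + r) // h), N - 1)
        t_k = -r + k * h
        lam = (theta - t_k) / h          # interpolation weight in [0, 1]
        return (1 - lam) * x[k] + lam * x[k + 1]
    return x_tilde

# Pi composed with pi_tilde reproduces any affine segment exactly:
phi = lambda s: 2.0 * s + 1.0
x = pi_tilde(phi, r=1.0, N=4)
phi_N = Pi(x, r=1.0, N=4)
```

Since Π^{(N)} ∘ π̃^{(N)} reproduces affine functions exactly, affine segments give a convenient sanity check of an implementation.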

In addition to the temporal discretization introduced earlier, we define the spatial discretization of ℝⁿ:

S^{(N)} = {(k₁, k₂, ..., k_n) h^{(N)} | k_i = 0, ±1, ±2, ..., for i = 1, 2, ..., n}.

When n = 1, we simply write S^{(N)} = {k h^{(N)} | k = 0, ±1, ±2, ...}. Given N, let (S)^{N+1} = S × ··· × S be the (N+1)-fold Cartesian product of S. The semidiscretization scheme presented in Section 5.2 involves temporal discretization of the controlled state process. The Markov chain and finite difference approximations presented in Sections 5.3 and 5.4 involve both temporal and spatial discretization.

For simplicity of notation, we sometimes omit N in the superscript and write h = h^{(N)}, π = π^{(N)}, Π = Π^{(N)}, π̃ = π̃^{(N)}, and so forth, whenever there is no danger of ambiguity. This is particularly true when we are working in the context of a fixed N. However, we will carry the full superscripts and/or subscripts when we are working with quantities that involve different Ns, such as the limiting quantity when N → ∞.

5.1.2 Some Lemmas

To prepare for what follows, we first recall the Gronwall lemma of Chapter 1 and two results for the value function V : [0,T] × C → ℝ.

Lemma 5.1.2 Suppose that h ∈ L¹([t,T]; ℝ) and α ∈ L^∞([t,T]; ℝ) satisfy, for some β ≥ 0,

0 ≤ h(s) ≤ α(s) + β ∫_t^s h(λ) dλ  for a.e. s ∈ [t,T].   (5.4)

Then

h(s) ≤ α(s) + β ∫_t^s α(λ) e^{β(s−λ)} dλ  for a.e. s ∈ [t,T].   (5.5)

If, in addition, α is nondecreasing, then

h(s) ≤ α(s) e^{β(s−t)}  for a.e. s ∈ [t,T].   (5.6)
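A quick numerical sanity check of the nondecreasing-α form (5.6) (my own illustration, not part of the text) takes the function h that saturates the hypothesis (5.4) with equality for constant α, namely h(s) = α e^{β(s−t)}:

```python
# Numerical sanity check of Gronwall's lemma, eq. (5.6), illustrative only:
# if 0 <= h(s) <= alpha + beta * int_t^s h(l) dl with constant alpha, then
# h(s) <= alpha * exp(beta * (s - t)).  We take h solving the equality case.
import math

t, T, alpha, beta = 0.0, 2.0, 1.5, 0.7
n = 2000
ds = (T - t) / n

h = lambda s: alpha * math.exp(beta * (s - t))   # saturates (5.4) with equality

integral = 0.0
for i in range(n):
    s = t + (i + 1) * ds
    integral += h(s) * ds   # right Riemann sum, an overestimate for increasing h
    assert h(s) <= alpha + beta * integral + 1e-12          # hypothesis (5.4)
    assert h(s) <= alpha * math.exp(beta * (s - t)) + 1e-9  # conclusion (5.6)
```

The right Riemann sum is used so that the discretized hypothesis is an honest overestimate of the integral for an increasing integrand.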

Proposition 5.1.3 Assume Assumptions (A5.1.1)-(A5.1.3) hold. The value function V satisfies the following properties: There is a constant K_V ≥ 0 not greater than 3K_lip(T+1)e^{3T(T+4m)K²_lip} such that for all t ∈ [0,T] and φ, ϕ ∈ C, we have

|V(t,φ)| ≤ K_b(T+1)  and  |V(t,φ) − V(t,ϕ)| ≤ K_V ‖φ − ϕ‖.


Proof. The first inequality is due to the global boundedness of L : [0,T] × C × U → ℝ and Ψ : C → ℝ; the second inequality is proven in Lemma 3.3.7 in Chapter 3. □

Proposition 5.1.4 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let the initial segment ψ ∈ C be γ-Hölder continuous with 0 < γ ≤ K_H < ∞. Then the function V(·,ψ) : [0,T] → ℝ is Hölder continuous; that is, there is a constant K̃_V > 0 depending only on K_H, K_lip, T, and the dimensions such that for all t, t̃ ∈ [0,T],

|V(t,ψ) − V(t̃,ψ)| ≤ K̃_V [ |t − t̃|^γ ∨ √|t − t̃| ].

Proof. See Lemma 3.3.11 in Chapter 3. □

We need the following result on moments of the modulus of continuity of Itô diffusions, due to Lemma A.4 of Slominski [Slo01].

Lemma 5.1.5 (Slominski's Lemma) Let (Ω, F, P, F, W(·)) be a given m-dimensional Brownian basis. Let y(·) = {y(s), s ∈ [0,T]} be an Itô diffusion of the form

y(s) = y(0) + ∫_0^s f̃(λ) dλ + ∫_0^s g̃(λ) dW(λ),  s ∈ [0,T],

where y(0) ∈ ℝⁿ and f̃(·) and g̃(·) are F-adapted processes with values in ℝⁿ and ℝ^{n×m}, respectively. If |f̃|(·) and |g̃|(·) are bounded by a constant K_b > 0, then for every k > 0 and every T > 0, there exists a constant K_{k,T} depending only on K_b, the dimensions n and m, k, and T such that

E[ sup_{t,s∈[0,T], |t−s|≤h} |y(s) − y(t)|^k ] ≤ K_{k,T} ( h ln(1/h) )^{k/2},  ∀h ∈ (0, 1/2].

Sketch of the Proof. To save space, we omit the details but point out that Lemma 5.1.5 can first be proved for the special case m = 1. The full statement is then derived by a componentwise estimate and a time-change argument; see Theorem 3.4.6 in [KS91], for example. One way of proving the assertion for Brownian motion is to follow the derivation of Lévy's exact modulus of continuity given in Exercise 2.4.8 of [SV79]. The main ingredient there is an inequality due to Garsia, Rodemich, and Rumsey; see Theorem 2.1.3 of [SV79, p. 47]. □
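The flavor of Lemma 5.1.5 for k = 1 can be checked by Monte Carlo for the simplest Itô diffusion, a Brownian motion (an illustration under hypothetical grid parameters, not from the book):

```python
# Monte Carlo illustration (not from the book) of Lemma 5.1.5 for k = 1:
# for Brownian motion, E[ sup_{|t-s|<=h} |W(t)-W(s)| ] is of order
# sqrt(h * ln(1/h)).  We estimate the expectation on a discrete grid.
import math
import random

random.seed(0)

def modulus(n_steps, lag, T=1.0):
    """sup of |W(t)-W(s)| over grid pairs at distance <= lag*dt, one path."""
    dt = T / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += random.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return max(abs(path[i + k] - path[i])
               for k in range(1, lag + 1)
               for i in range(n_steps + 1 - k))

n_steps, lag, n_paths = 256, 16, 200
h = lag / n_steps                                   # window length 1/16
est = sum(modulus(n_steps, lag) for _ in range(n_paths)) / n_paths
rate = math.sqrt(h * math.log(1.0 / h))             # the rate of Lemma 5.1.5
ratio = est / rate                                  # should be of order one
```

The ratio of the empirical mean to the rate √(h ln(1/h)) stays of order one, which is exactly what the lemma asserts (the constant K_{1,T} absorbs the ratio).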

k2 1 1 k E sup |y(s) − y(t)| ≤ Kk,T h ln , ∀h ∈ 0, . h 2 t,s∈[0,T ],|t−s|≤h Sketch of the Proof. To save space, we will omit the details of the proof but to point out here that Lemma 5.1.5 can be proved for the special case m = 1. The full statement is then derived by a componentwise estimate and a time-change argument; see Theorem 3.4.6 in [KS91], for example. One way of proving the assertion for Brownian motion is to follow the derivation of L´evy’s exact modulus of continuity given in Exercise 2.4.8 of [SV79]. The main ingredient there is an inequality due to Garsia, Rademich, and Rumsey; see Theorem 2.1.3, of [SV79, p.47]. 2.

5.2 Semidiscretization Scheme

In this section we study a semidiscretization scheme for Problem (OCCP) described in Section 3.1 of Chapter 3 and restated in Section 5.1. The discretization scheme consists of two steps: (1) piecewise linear interpolation of the C-valued segment process for the solution of the controlled SHDE and then (2) piecewise constant control processes. By discretizing time in these two steps, we construct a sequence of approximating finite-dimensional Markovian optimal control problems. It is shown that the value functions of these finite-dimensional Markovian optimal control problems converge to the value function of the original problem. An upper bound on the discretization error or, equivalently, an estimate of the rate of convergence is also given. Much of the material presented in this section can be found in Fischer and Nappo [FN06].

5.2.1 First Approximation Step: Piecewise Constant Segments

Throughout the rest of this section, the segment space will be extended from [−r, 0] to [−r − h^{(N)}, 0], and the domain of an admissible control u(·) ∈ U[t,T] will be enlarged from [t,T] to [t − h^{(N)}, T] by setting u(s) = u(t) for all s ∈ [t − h^{(N)}, t], according to the discretization degree N, if the initial time t ∈ [0,T] is such that t = ⌊t⌋_N. Denote by C(N) the space C([−r − h^{(N)}, 0]; ℝⁿ) of n-dimensional continuous functions defined on [−r − h^{(N)}, 0], and denote the class of admissible controls with the enlarged time domain [t − h^{(N)}, T] defined above by U[t − h^{(N)}, T]. For a continuous function or process z(·) defined on the interval [−r − h^{(N)}, ∞), let z̃_t^{(N)} denote the segment of z(·) at time t ≥ 0 of length r + h^{(N)}, that is,

z̃_t^{(N)}(θ) = z(t + θ),  θ ∈ [−r − h^{(N)}, 0].

Given the initial datum (t,ψ) ∈ [0,T] × C(N) and u(·) ∈ U[t − h^{(N)}, T], we define the Euler–Maruyama approximation z(·) = {z^{(N)}(s; t, ψ, u(·)), s ≥ t} of degree N of the solution x(·) = {x(s; t, ψ, u(·)), s ≥ t} to (5.2) under the control process u(·) with the initial datum (t,ψ) as the solution to

z(s) = ψ(0) + ∫_{⌊t⌋_N}^{s} f^{(N)}(λ, z̃_λ^{(N)}, u(λ)) dλ + ∫_{⌊t⌋_N}^{s} g^{(N)}(λ, z̃_λ^{(N)}, u(λ)) dW(λ),  s ∈ [t,T],   (5.7)

with the initial datum (t,ψ) ∈ [0,T] × C(N), where

f^{(N)}(t,φ,u) := f(⌊t⌋_N, Π^{(N)} φ_{⌊t⌋_N − t}, u),  g^{(N)}(t,φ,u) := g(⌊t⌋_N, Π^{(N)} φ_{⌊t⌋_N − t}, u),  (t,φ,u) ∈ [0,T] × C(N) × U.

Similarly to f^{(N)} and g^{(N)}, we also define the functions L^{(N)} and Ψ^{(N)} as

L^{(N)}(t,φ,u) := L(⌊t⌋_N, Π^{(N)} φ_{⌊t⌋_N − t}, u),  Ψ^{(N)}(φ) := Ψ(Π^{(N)} φ_{⌊t⌋_N − t}),  (t,φ,u) ∈ [0,T] × C(N) × U.
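For a scalar equation whose coefficients depend on the segment only through the delayed value x(s − r), the Euler–Maruyama recursion behind (5.7) can be sketched as follows (a minimal illustration under that simplifying assumption, not the book's code; all names are hypothetical):

```python
# Euler-Maruyama sketch for a scalar delay equation (illustrative only):
#   dx(s) = f(x(s), x(s - r)) ds + g(x(s), x(s - r)) dW(s),
# with the segment stored on the grid of mesh h = r/N.
import math
import random

def euler_maruyama_delay(f, g, psi, r, N, T, seed=0):
    """psi: initial segment on [-r, 0]; returns grid values of x on [0, T]."""
    rng = random.Random(seed)
    h = r / N
    n_steps = int(round(T / h))
    # path[k] approximates x(-r + k*h); the first N+1 entries come from psi
    path = [psi(-r + k * h) for k in range(N + 1)]
    for _ in range(n_steps):
        x_now, x_delay = path[-1], path[-1 - N]      # x(s) and x(s - r)
        dW = rng.gauss(0.0, math.sqrt(h))
        path.append(x_now + f(x_now, x_delay) * h + g(x_now, x_delay) * dW)
    return path[N:]                                  # values at 0, h, ..., T

# Deterministic check (g = 0): the linear delay equation dx = -x(s - r) ds.
f = lambda x, xd: -xd
g0 = lambda x, xd: 0.0
path = euler_maruyama_delay(f, g0, psi=lambda s: 1.0, r=1.0, N=10, T=2.0)
```

With the noise switched off the scheme reduces to the explicit Euler recursion for a delay differential equation, which makes the implementation easy to verify against a hand-computed recursion.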


Thus, f^{(N)}(t,φ,u), g^{(N)}(t,φ,u), L^{(N)}(t,φ,u), and Ψ^{(N)}(φ) are calculated by evaluating the functions f, g, and L at (⌊t⌋_N, φ̂, u) and Ψ at φ̂, where φ̂ is the segment in C that arises from the piecewise linear interpolation with mesh size r/N of the restriction of φ to the interval [⌊t⌋_N − t − r, ⌊t⌋_N − t]. Notice that the control action u ∈ U remains unchanged.

Note that for each N, the functions f^{(N)}(t,φ,u) and g^{(N)}(t,φ,u) satisfy Assumption 5.1.1. This guarantees that, given any admissible control u(·) ∈ U[t − h^{(N)}, T], (5.7) has a unique solution for each initial datum (t,ψ) ∈ [0,T] × C(N). Thus, the process z(·) = {z^{(N)}(s; t, ψ, u(·)), s ∈ [t,T]} of degree N is well defined.

Define the objective functional J^{(N)} : [0, T^{(N)}] × C(N) × U[t − h^{(N)}, T] → ℝ of discretization degree N by

J^{(N)}(t,ψ,u(·)) = E[ ∫_{⌊t⌋_N}^{T^{(N)}} L^{(N)}(s, z̃_s^{(N)}, u(s)) ds + Ψ^{(N)}(z̃_{T^{(N)}}^{(N)}) ].   (5.8)

As f^{(N)}, g^{(N)}, L^{(N)}, and Ψ^{(N)} are Lipschitz continuous in the segment variable φ (uniformly in (t,u) ∈ [0,T] × U) under the sup-norm on C(N), the value function V^{(N)} : [0, T^{(N)}] × C(N) → ℝ is determined by

V^{(N)}(t,ψ) = sup_{u(·)∈U[t−h^{(N)},T]} J^{(N)}(t,ψ;u(·)).   (5.9)

When the initial time t ∈ [0,T] is such that t = ⌊t⌋_N, then (5.7), (5.8), and (5.9) reduce to the following three equations, respectively:

z(s) = ψ(0) + ∫_t^s f(λ, Π^{(N)} z_{⌊λ⌋_N}, u(λ)) dλ + ∫_t^s g(λ, Π^{(N)} z_{⌊λ⌋_N}, u(λ)) dW(λ),  s ∈ [t, ⌊T⌋_N];   (5.10)

J^{(N)}(t,ψ,u(·)) = E[ ∫_t^{T^{(N)}} L(⌊s⌋_N, Π^{(N)} z_{⌊s⌋_N}, u(s)) ds + Ψ(Π^{(N)} z_{T^{(N)}}) ];   (5.11)

and

V^{(N)}(t,ψ) = sup_{u(·)∈U[t,T]} J^{(N)}(t,ψ;u(·)).   (5.12)

The following proposition estimates the difference between the solution x(·) of (5.2) and the solution z^{(N)}(·) of (5.7).

Proposition 5.2.1 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let x(·) be the solution to (5.2) under an admissible u(·) ∈ U[t,T] and with initial datum (t,φ), and let z^{(N)}(·) be the solution to (5.7) of discretization degree N under the same control process u(·) and with initial datum (t,ψ) ∈ [0,T] × C(N) such that ψ|_{[−r,0]} = φ. Let the initial segment ψ be γ-Hölder continuous with 0 ≤ γ ≤ K_H < ∞. Then there is a positive constant K̃ depending only on γ, K_H, K_lip, K_b, T, and the dimensions n and m such that for all N = 1, 2, ... with N ≥ 2r, all t ∈ I^{(N)}, and all u(·) ∈ U[t,T],

E[ sup_{s∈[t−r,T]} |x(s) − z^{(N)}(s)| ] ≤ K̃ [ (h^{(N)})^γ ∨ √( h^{(N)} ln(1/h^{(N)}) ) ].

Proof. Notice that h^{(N)} ≤ 1/2 since N ≥ 2r, and observe that z(·) := z^{(N)}(·) depends on the initial segment ψ only through ψ|_{[−r,0]} = φ. Using the Hölder inequality, we find that

E[ sup_{s∈[t−r,T]} |x(s) − z(s)|² ] = E[ sup_{s∈[t,T]} |x(s) − z(s)|² ]   (since ψ|_{[−r,0]} = φ)
≤ 2E[ ( ∫_t^T |f(s, x_s, u(s)) − f(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s))| ds )² ]
 + 2E[ sup_{s∈[t,T]} | ∫_t^s ( g(λ, x_λ, u(λ)) − g(⌊λ⌋_N, Π^{(N)}(z_{⌊λ⌋_N}), u(λ)) ) dW(λ) |² ].

Now, by the Hölder inequality, Assumption (A5.1.3), and the Fubini theorem, we have

E[ ( ∫_t^T |f(s, x_s, u(s)) − f(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s))| ds )² ]
≤ 2T E[ ∫_t^T |f(s, x_s, u(s)) − f(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s))|² ds ]
≤ 4T K²_lip [ ∫_t^T |s − ⌊s⌋_N| ds + E ∫_t^T ‖x_s − Π^{(N)}(z_{⌊s⌋_N})‖² ds ]
≤ 4T K²_lip [ T h^{(N)} + ∫_t^T E[ ‖x_s − Π^{(N)}(z_{⌊s⌋_N})‖² ] ds ].

Using Doob's maximal inequality (1.18),

E[ sup_{s∈[0,T]} | ∫_0^s g(λ) dW(λ) |² ] ≤ K ∫_0^T E[ |g(s)|² ] ds,  ∀T > 0 and ∀g(·) ∈ L²_{F,loc}(0, ∞; ℝ^{n×m}),

together with the Itô isometry, Assumption (A5.1.3), and the Fubini theorem, we have

E[ sup_{s∈[t,T]} | ∫_t^s ( g(λ, x_λ, u(λ)) − g(⌊λ⌋_N, Π^{(N)}(z_{⌊λ⌋_N}), u(λ)) ) dW(λ) |² ]
≤ 4E[ | ∫_t^T ( g(λ, x_λ, u(λ)) − g(⌊λ⌋_N, Π^{(N)}(z_{⌊λ⌋_N}), u(λ)) ) dW(λ) |² ]
≤ 8m E[ ∫_t^T |g(s, x_s, u(s)) − g(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s))|² ds ].

Therefore,

E[ sup_{s∈[t,T]} | ∫_t^s ( g(λ, x_λ, u(λ)) − g(⌊λ⌋_N, Π^{(N)}(z_{⌊λ⌋_N}), u(λ)) ) dW(λ) |² ]
≤ 16mT K²_lip [ ∫_t^T |s − ⌊s⌋_N| ds + E ∫_t^T ‖x_s − Π^{(N)}(z_{⌊s⌋_N})‖² ds ]
≤ 16mT K²_lip [ T h^{(N)} + ∫_t^T E[ ‖x_s − Π^{(N)}(z_{⌊s⌋_N})‖² ] ds ].

By the triangle inequality, we have

∫_t^T E[ ‖x_s − Π^{(N)}(z_{⌊s⌋_N})‖² ] ds ≤ 3 ∫_t^T E[ ‖x_s − x_{⌊s⌋_N}‖² ] ds + 3 ∫_t^T E[ ‖x_{⌊s⌋_N} − z_{⌊s⌋_N}‖² ] ds + 3 ∫_t^T E[ ‖z_{⌊s⌋_N} − Π^{(N)} z_{⌊s⌋_N}‖² ] ds.

Therefore,

E[ sup_{s∈[t−r,T]} |x(s) − z(s)|² ]
≤ 4(T + 4m)K²_lip [ T h^{(N)} + 3 ∫_t^T E[ ‖x_s − x_{⌊s⌋_N}‖² ] ds + 3 ∫_t^T E[ ‖z_{⌊s⌋_N} − Π^{(N)} z_{⌊s⌋_N}‖² ] ds ]
 + 12(T + 4m)K²_lip ∫_t^T E[ ‖x_{⌊s⌋_N} − z_{⌊s⌋_N}‖² ] ds.

In the last step of the above estimate, we use the following two preliminary estimates. First, for all s ∈ [t,T],

E[ ‖x_s − x_{⌊s⌋_N}‖² ] ≤ 2E[ sup_{θ,θ̃∈[−r,0], |θ−θ̃|≤h^{(N)}} |φ(θ) − φ(θ̃)|² ] + 2E[ sup_{s,s̃∈[t,T], |s−s̃|≤h^{(N)}} |x(s) − x(s̃)|² ]
≤ 2K²_H (h^{(N)})^{2γ} + 2K_{2,T} h^{(N)} ln(1/h^{(N)}).

Second,

E[ ‖z_{⌊s⌋_N} − Π^{(N)} z_{⌊s⌋_N}‖² ] = E[ sup_{s̃∈[⌊s⌋_N−r, ⌊s⌋_N]} |z(s̃) − (Π^{(N)} z_{⌊s⌋_N})(s̃ − ⌊s⌋_N)|² ]
≤ 2E[ sup_{θ∈[−r,0)} ( |φ(θ) − φ(⌊θ⌋_N)| + |φ(θ) − φ(⌊θ⌋_N + h^{(N)})| )² ] + 2E[ sup_{s̃∈[0,s)} ( |z(s̃) − z(⌊s̃⌋_N)| + |z(s̃) − z(⌊s̃⌋_N + h^{(N)})| )² ]
≤ 4K²_H (h^{(N)})^{2γ} + 4E[ sup_{t,t̃∈[0,s], |t−t̃|≤h^{(N)}} |z(t) − z(t̃)|² ]
≤ 4K²_H (h^{(N)})^{2γ} + 4K_{2,T} h^{(N)} ln(1/h^{(N)}),

since the value of the linear interpolant at s̃ lies between the path values at the two neighboring grid points. Therefore,

E[ sup_{s∈[t−r,T]} |x(s) − z(s)|² ] = E[ sup_{s∈[t,T]} |x(s) − z(s)|² ]
≤ 4T(T + 4m)K²_lip [ h^{(N)} + 18K²_H (h^{(N)})^{2γ} + 18K_{2,T} h^{(N)} ln(1/h^{(N)}) ]
 + 12(T + 4m)K²_lip ∫_t^T E[ sup_{λ∈[t,s]} |x(λ) − z(λ)|² ] ds.

Applying Gronwall's lemma (see Lemma 5.1.2) and then the Cauchy–Schwarz inequality, we have the assertion that

E[ sup_{s∈[t−r,T]} |x(s) − z^{(N)}(s)| ] ≤ K̃ [ (h^{(N)})^γ ∨ √( h^{(N)} ln(1/h^{(N)}) ) ]

for some constant K̃ > 0. □


The order of the approximation error, or rate of convergence, obtained in Proposition 5.2.1 for the underlying dynamics carries over to the approximation of the corresponding value function. This works thanks to the Lipschitz continuity of L and Ψ in the segment variable, the bound on the moments of the modulus of continuity from Lemma 5.1.5, and the fact that the error bound in Proposition 5.2.1 is uniform over all u(·) ∈ U[t,T]. By Proposition 5.1.4, it is known that the original value function V is Hölder continuous in time provided that the initial segment is Hölder continuous. It is therefore enough to compare V and V^{(N)} on the grid I^{(N)} × C. This is the content of the next proposition. Again, the order of the error will be uniform only over those initial segments that are γ-Hölder continuous for some γ > 0; the constant in the error bound also depends on the Hölder constant of the initial segment. We start with comparing solutions to (5.2) and (5.10) for initial times t ∈ I^{(N)}.

Theorem 5.2.2 Assume Assumption 5.1.1 holds. Let φ ∈ C be γ-Hölder continuous with 0 < γ ≤ K_H < ∞. Then there is a constant K̃ depending only on γ, K_H, K_lip, K_b, T, and the dimensions (n and m) such that for all N = 1, 2, ... with N ≥ 2r and all initial times t ∈ I^{(N)}, we have

|V(t,φ) − V^{(N)}(t,ψ)| ≤ sup_{u(·)∈U[t,T]} |J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·))| ≤ K̃ [ (h^{(N)})^γ ∨ √( h^{(N)} ln(1/h^{(N)}) ) ],

where ψ ∈ C(N) is such that ψ|_{[−r,0]} = φ, and C(N) is the space of n-dimensional continuous functions defined on the extended interval [−r − h^{(N)}, 0].

Proof. The proof of the first inequality is straightforward:

|V(t,φ) − V^{(N)}(t,ψ)| ≤ | sup_{u(·)∈U[t,T]} J(t,φ;u(·)) − sup_{u(·)∈U[t,T]} J^{(N)}(t,ψ;u(·)) |
≤ | sup_{u(·)∈U[t,T]} ( J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·)) ) |
≤ sup_{u(·)∈U[t,T]} |J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·))|.

Now, let u(·) ∈ U[t,T] be any admissible control. Let x(·) = {x(s) = x(s; t, φ, u(·)), s ∈ [t,T]} be the (strong) solution to (5.2) under u(·) and with the initial datum (t,φ) ∈ [0,T] × C, and let z(·) = {z(s) = z(s; t, ψ, u(·)), s ∈ [t,T]} be the (strong) solution to (5.7) under u(·) and with the initial datum (t,ψ) ∈ [0,T] × C(N). Using Assumption (A5.1.2) and the hypothesis that t ∈ I^{(N)}, the difference |J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·))| can be estimated as follows:

|J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·))|
= | E[ ∫_t^T L(s, x_s, u(s)) ds − ∫_t^{⌊T⌋_N} L(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s)) ds ] + E[ Ψ(x_T) − Ψ(Π^{(N)}(z_{⌊T⌋_N})) ] |.

Therefore,

|J(t,φ;u(·)) − J^{(N)}(t,ψ;u(·))|
≤ E[ ∫_t^{⌊T⌋_N} |L(s, x_s, u(s)) − L(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s))| ds ] + E[ ∫_{⌊T⌋_N}^T |L(s, x_s, u(s))| ds ] + E[ |Ψ(x_T) − Ψ(Π^{(N)}(z_{⌊T⌋_N}))| ]
≤ K_b |T − ⌊T⌋_N| + E[ |Ψ(Π^{(N)}(z_{⌊T⌋_N})) − Ψ(x_T)| ] + E[ ∫_t^{⌊T⌋_N} |L(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s)) − L(s, x_s, u(s))| ds ].

Recall that |T − ⌊T⌋_N| = T − ⌊T⌋_N ≤ h^{(N)}; hence K_b|T − ⌊T⌋_N| ≤ K_b h^{(N)}. Now, using Assumption (A5.1.3), we see that

E[ |Ψ(Π^{(N)}(z_{⌊T⌋_N})) − Ψ(x_T)| ]
≤ K_lip [ E[ ‖z_{⌊T⌋_N} − x_{⌊T⌋_N}‖ ] + E[ ‖Π^{(N)}(z_{⌊T⌋_N}) − z_{⌊T⌋_N}‖ ] + E[ ‖x_{⌊T⌋_N} − x_T‖ ] ]
≤ K_lip [ K̃ ( (h^{(N)})^γ ∨ √( h^{(N)} ln(1/h^{(N)}) ) ) + 3K_H (h^{(N)})^γ + 3K_{1,T} √( h^{(N)} ln(1/h^{(N)}) ) ],

where K̃ is a constant as in Proposition 5.2.1 and K_{1,T} is a constant as in Lemma 5.1.5. Notice that x(·) as well as z(·) are Itô diffusions with coefficients bounded by the constant K_b from Assumption (A5.1.2). In the same way, also using the Hölder continuity of L in time and recalling that |s − ⌊s⌋_N| ≤ h^{(N)} for all s ∈ [t,T], an estimate of

E[ ∫_t^{⌊T⌋_N} |L(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s)) − L(s, x_s, u(s))| ds ]

is given by

E[ ∫_t^{⌊T⌋_N} |L(⌊s⌋_N, Π^{(N)}(z_{⌊s⌋_N}), u(s)) − L(s, x_s, u(s))| ds ]
≤ K_lip (⌊T⌋_N − t) [ √(h^{(N)}) + 3K_{1,T} √( h^{(N)} ln(1/h^{(N)}) ) + (K̃ + 3K_H) ( (h^{(N)})^γ ∨ √( h^{(N)} ln(1/h^{(N)}) ) ) ].

Combining the above three estimates, we obtain the assertion. □

By virtue of the above theorem, we can replace the original classical control problem with the sequence of approximating control problems defined above. The error at approximation degree N, in terms of the difference between the corresponding value functions V and V^{(N)}, is not greater than a multiple of (r/N)^γ for γ-Hölder continuous initial segments if γ ∈ (0, 1/2), where the proportionality factor is affine in the Hölder constant, and is less than a multiple of √(ln(N)/N) if γ ≥ 1/2. Although we obtain an error bound for the approximation of V by the sequence of value functions {V^{(N)}, N = 1, 2, ...} only for Hölder continuous initial segments, the proofs of Proposition 5.2.1 and Theorem 5.2.2 show that pointwise convergence of the value functions holds true for all initial segments φ ∈ C. Recall that a function φ : [−r, 0] → ℝⁿ is continuous if and only if sup_{t,s∈[−r,0], |t−s|≤h} |φ(t) − φ(s)| tends to zero as h ↓ 0. We therefore have the following result.

Corollary 5.2.3 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Then for all (t,ψ) ∈ [0,T] × C,

lim_{N→∞} |V(t,ψ) − V^{(N)}(⌊t⌋_N, ψ)| = 0.
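The two regimes of the bound can be evaluated numerically; the helper below (an illustrative sketch with the multiplicative constants suppressed, not the book's notation) returns the dominating term for a given N and Hölder exponent γ:

```python
# Dominating term of the semidiscretization error bound, up to constants:
# (r/N)^gamma for gamma in (0, 1/2), and sqrt(ln(N)/N) for gamma >= 1/2.
# Illustrative helper only; the proportionality factors are suppressed.
import math

def error_rate(N, gamma, r=1.0):
    h = r / N
    if gamma < 0.5:
        return h ** gamma
    return math.sqrt(math.log(N) / N)

# The bound decreases as the discretization degree N grows:
rates_low = [error_rate(N, gamma=0.25) for N in (10, 100, 1000)]
rates_high = [error_rate(N, gamma=0.75) for N in (10, 100, 1000)]
```

Tabulating the two lists shows the familiar trade-off: for rough initial segments (small γ) the rate is limited by the segment's Hölder regularity, while for γ ≥ 1/2 the modulus of continuity of the diffusion dominates.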

Similarly to the value function of the original optimal classical control problem, we can also show that the function V^{(N)}(t, ·): C^{(N)} → ℝ is Lipschitz continuous uniformly in t ∈ I^{(N)}, with the Lipschitz constant not depending on the discretization degree N. For t ∈ I^{(N)}, we may interpret V^{(N)}(t, ·) as a function defined on C.

Proposition 5.2.4 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let V^{(N)} be the value function of discretization degree N. Then |V^{(N)}| is bounded by K_b(T + 1). Moreover, if t ∈ I^{(N)}, then V^{(N)}(t, ·), as a function on C, satisfies the following Lipschitz condition:

  |V^{(N)}(t, φ) − V^{(N)}(t, ϕ)| ≤ 3K_{lip}(T + 1) e^{3T(T+4m)K_{lip}^2} ‖φ − ϕ‖,  ∀φ, ϕ ∈ C.

5.2.2 Second Approximation Step: Piecewise Constant Strategies

Notice that in the previous subsection we only discretized the time variable and the segment space of the controlled SHDE and the objective functional, without discretizing the control process u(·). In this subsection, we also discretize the time variable of the admissible controls u(·) ∈ U[t, T]. To this end, for M = 1, 2, . . ., set

  U^{(M)}[t, T] = { u(·) ∈ U[t, T] | u(s) is F(s_M)-measurable and u(s) = u(s_M) for each s ∈ [t, T] }.   (5.13)

Recall that s_M = (r/M) ⌊(M/r) s⌋. Hence, U^{(M)}[t, T] is the set of all U-valued, F(t)-progressively measurable processes that are right-continuous and piecewise constant in time relative to the grid {k r/M | k = 0, 1, 2, . . .} and, in addition, {F(s_M), s ∈ [t, T]}-adapted. For the purpose of approximating the control problem of degree N, we will use strategies in U^{(NM)}[t, T]. We write U^{(N,M)}[t, T] for U^{(NM)}[t, T]. Note that u(·) ∈ U^{(N,M)}[t, T] has an M-times finer time discretization than that of degree N. With the same dynamics and the same performance criterion as in the previous subsection, for each N = 1, 2, . . ., we introduce a family of value functions {V^{(N,M)}, M = 1, 2, . . .} defined on [t, T^{(N)}] × C^{(N)} by setting

  V^{(N,M)}(t, ψ) := sup_{u(·)∈U^{(N,M)}[t,T]} J^{(N)}(t, ψ; u(·)).   (5.14)
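The projection of an arbitrary control path onto the piecewise constant class U^{(M)} in (5.13) can be sketched as follows (a minimal illustration with our own helper names; the grid rounding implements s_M = (r/M)⌊(M/r)s⌋):

```python
import numpy as np

def floor_to_grid(s, width):
    """s_M: round s down to the grid {0, width, 2*width, ...}."""
    return np.floor(s / width) * width

def piecewise_constant_control(u, r, M):
    """Project a control path u(.) onto U^(M): the projected path is
    right-continuous and constant on each interval [k r/M, (k+1) r/M)."""
    width = r / M
    return lambda s: u(floor_to_grid(s, width))

# Example: a continuously varying control and its degree-M projection.
r, M = 1.0, 4                 # grid width r/M = 0.25
u = lambda s: np.sin(s)
u_M = piecewise_constant_control(u, r, M)
print(u_M(0.3), u(0.25))      # both equal sin(0.25)
```

As M grows, u_M tracks u on ever finer grids, which is the mechanism behind the degree-(N, M) approximation below.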

We will refer to V^{(N,M)} as the value function of degree (N, M). Note that, by construction, we have V^{(N,M)}(t, ψ) ≤ V^{(N)}(t, ψ) for all (t, ψ) ∈ [0, T^{(N)}] × C^{(N)}, since U^{(N,M)}[t, T] ⊂ U[t, T]. Hence, in estimating the approximation error, we only need an upper bound for V^{(N)} − V^{(N,M)}. As with V^{(N)}, if the initial time t ∈ I^{(N)}, then V^{(N,M)}(t, ψ) depends on ψ only through its restriction ψ|_{[−r,0]} ∈ C to the interval [−r, 0]; we write V^{(N,M)}(t, ·) for the resulting function on C. The dynamics and costs can again be represented by (5.7) and (5.8), respectively. Again, if t ∈ I^{(N)}, we have V^{(N,M)}(t, φ) = V^{(N,M)}(t, Π^{(N)}(φ)) for all φ ∈ C.

The following two propositions state Bellman's dynamic programming principle (DPP) for the value functions V^{(N)} and V^{(N,M)}, respectively. The proofs of these two discrete versions of the DPP can be reproduced from that of Theorem 3.3.9 in Chapter 3 and are therefore omitted here.

Proposition 5.2.5 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let (t, ψ) ∈ [0, T^{(N)}] × C^{(N)}. Then for each s ∈ I^{(N)},

  V^{(N)}(t, ψ) = sup_{u(·)∈U[t,T]} E[ ∫_t^s L^{(N)}(λ, Π^{(N)}(z_λ), u(λ)) dλ + V^{(N)}(s, Π^{(N)}(z_s)) ],

where z(·) is the solution to (5.7) of degree N under u(·) ∈ U[t, T] and with initial condition (t, ψ). If t ∈ I^{(N)} and s ∈ I^{(N)} ∩ [0, T], then

  V^{(N)}(t, φ) = sup_{u(·)∈U[t,T]} E[ ∫_t^s L(λ_N, Π^{(N)}(z_{λ_N}), u(λ)) dλ + V^{(N)}(s, Π^{(N)}(z_s)) ],

where V^{(N)}(t, ·) and V^{(N)}(s, ·) are defined as functionals on C and φ is the restriction of ψ to the interval [−r, 0].

Proposition 5.2.6 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let (t, ψ) ∈ [0, T^{(N)}] × C^{(N)}. Then for s ∈ I^{(NM)} ∩ [0, T],

  V^{(N,M)}(t, ψ) = sup_{u(·)∈U^{(N,M)}[t,T]} E[ ∫_t^s L^{(N)}(λ_N, Π^{(N)}(z_λ), u(λ)) dλ + V^{(N)}(s, Π^{(N)}(z_s)) ],

where z(·) is the solution to (5.7) of degree N under control process u(·) and with the initial datum (t, ψ). If t ∈ I^{(N)} and s ∈ I^{(N)} ∩ [t, T], then

  V^{(N,M)}(t, φ) = sup_{u(·)∈U^{(N,M)}[t,T]} E[ ∫_t^s L(λ_N, Π^{(N)}(z_{λ_N}), u(λ)) dλ + V^{(N,M)}(s, Π^{(N)}(z_s)) ],

where V^{(N,M)}(t, ·) and V^{(N,M)}(s, ·) are defined as functions on C and φ is the restriction of ψ to the interval [−r, 0].

The next result gives a bound on the order of the global approximation error between the value functions of degree N and (N, M), provided that the local approximation error is of order greater than 1 in the discretization step.

Theorem 5.2.7 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let N, M = 1, 2, . . .. Suppose that for some positive constants K̂, δ > 0 the following holds: for any (t, φ) ∈ I^{(N)} × C and any u(·) ∈ U[t, T], there is a ū(·) ∈ U^{(N,M)}[t, T] such that

  E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}φ, u(s)) ds + V^{(N)}(t + h^{(N)}, z_{t+h^{(N)}}) ]
    ≤ E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}φ, ū(s)) ds + V^{(N)}(t + h^{(N)}, z̄_{t+h^{(N)}}) ] + K̂ (h^{(N)})^{1+δ},   (5.15)

where z(·) is the solution to (5.7) of degree N under control process u(·) and z̄(·) is the solution to (5.7) of degree N under control process ū(·), both with initial datum (t, ψ) ∈ I^{(N)} × C^{(N)} and ψ|_{[−r,0]} = φ. Then

  |V^{(N,M)}(t, φ) − V^{(N)}(t, φ)| ≤ T K̂ (h^{(N)})^δ,  ∀(t, φ) ∈ I^{(N)} × C.

Proof. Let N, M = 1, 2, . . .. Recall that V^{(N,M)} ≤ V^{(N)} by construction. It is therefore sufficient to prove the upper bound for V^{(N)} − V^{(N,M)}. Suppose


Condition (5.15) is satisfied for N, M and some constants K̂, δ > 0. Observe that V^{(N,M)}(T^N, ·) = V^{(N)}(T^N, ·) = Ψ(Π^{(N)} ·). Let the initial datum (t, φ) be such that t ∈ I^{(N)} \ {T^{(N)}} and φ ∈ C, and choose any ψ ∈ C^{(N)} such that ψ|_{[−r,0]} = φ. Given ε > 0, by virtue of Proposition 5.2.5 we can find a u(·) ∈ U[t, T] such that

  V^{(N)}(t, φ) − ε ≤ E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}(φ), u(s)) ds + V^{(N)}(t + h^{(N)}, Π^{(N)}(z_{h^{(N)}})) ],

where z(·) is the solution to (5.7) of degree N under control process u(·) with initial datum (t, ψ). For this u(·), choose ū(·) ∈ U^{(N,M)}[t, T^{(N)}] according to Condition (5.15) and let z̄(·) be the solution to (5.7) of degree N under control process ū(·) with the same initial datum as for z(·). Then, using the above inequality and Proposition 5.2.6, we have

  V^{(N)}(t, φ) − V^{(N,M)}(t, φ)
    ≤ E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}(φ), u(s)) ds + V^{(N)}(t + h^{(N)}, Π^{(N)}(z_{h^{(N)}})) ] + ε
      − E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}(φ), ū(s)) ds + V^{(N,M)}(t + h^{(N)}, Π^{(N)}(z̄_{h^{(N)}})) ]
    = E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}(φ), u(s)) ds + V^{(N)}(t + h^{(N)}, Π^{(N)}(z_{h^{(N)}})) ] + ε
      − E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}(φ), ū(s)) ds + V^{(N)}(t + h^{(N)}, Π^{(N)}(z̄_{h^{(N)}})) ]
      + E[ V^{(N)}(t + h^{(N)}, Π^{(N)}(z̄_{h^{(N)}})) − V^{(N,M)}(t + h^{(N)}, Π^{(N)}(z̄_{h^{(N)}})) ]
    ≤ K̂(h^{(N)})^{1+δ} + sup_{φ̃∈C} { V^{(N)}(t + h^{(N)}, φ̃) − V^{(N,M)}(t + h^{(N)}, φ̃) } + ε,

where in the last line Condition (5.15) has been exploited. Since ε > 0 is arbitrary and neither the first nor the last line of the above chain depends on u(·) or ū(·), it follows that for all t ∈ I^{(N)} \ {T^{(N)}},

  sup_{φ∈C} { V^{(N)}(t, φ) − V^{(N,M)}(t, φ) }
    ≤ sup_{φ∈C} { V^{(N)}(t + h^{(N)}, φ) − V^{(N,M)}(t + h^{(N)}, φ) } + K̂(h^{(N)})^{1+δ}.

Recalling the equality V^{(N)}(T^{(N)}, ·) = V^{(N,M)}(T^{(N)}, ·), we conclude that for all t ∈ I^{(N)},

  sup_{φ∈C} { V^{(N)}(t, φ) − V^{(N,M)}(t, φ) } ≤ (T^{(N)}/h^{(N)}) K̂(h^{(N)})^{1+δ} ≤ T K̂ (h^{(N)})^δ.

This proves the assertion. □
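The final counting step of the proof — a per-step error of K̂(h^{(N)})^{1+δ} accumulated over the T^{(N)}/h^{(N)} grid steps — is a simple telescoping identity that can be checked numerically (illustrative constants only):

```python
K_hat, delta, T = 2.0, 0.5, 1.0

for N in (10, 100, 1000):
    h = T / N                                   # grid width; T/h = N steps here
    accumulated = N * K_hat * h ** (1 + delta)  # sum of the per-step errors
    bound = T * K_hat * h ** delta              # the bound T * K_hat * h^delta
    assert abs(accumulated - bound) < 1e-12
print("telescoping bound verified")
```

The per-step error of order 1 + δ is what makes the global error of order δ, rather than blowing up as the number of steps grows.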


In order to apply Theorem 5.2.7, we must check whether Condition (5.15) is satisfied. Given a time grid of width r/N for the discretization in time and segment space, we would expect the condition to be satisfied provided we choose the subgrid for the piecewise constant controls fine enough; that is, the time discretization of the control processes should be of degree M, with M sufficiently large in comparison to N. This is the content of the next result.

Theorem 5.2.8 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let β > 3. Then there exists a constant K̂ > 0, depending only on K_b, r, K_{lip}, the dimensions n and m, and β, such that Condition (5.15) in Theorem 5.2.7 is satisfied with constants K̂ and δ = (β − 3)/4 for all N, M = 1, 2, . . . such that N ≥ r and M ≥ N^β.

Proof. Let N, M = 1, 2, . . . be such that N ≥ r and M ≥ N^β, and let the initial datum (t, φ) ∈ I^{(N)} × C. Define the following functions:

  f̃: U → ℝ^n,    f̃(u) = f(t, Π^{(N)}φ, u),
  g̃: U → ℝ^{n×m},  g̃(u) = g(t, Π^{(N)}φ, u),
  L̃: U → ℝ,     L̃(u) = L(t, Π^{(N)}φ, u),
  Ṽ: ℝ^n → ℝ,    Ṽ(x) = V^{(N)}(t + h^{(N)}, Π^{(N)}S(φ, x)),

where for each φ ∈ C and x ∈ ℝ^n, the segment S(φ, x) ∈ C is defined for each θ ∈ [−r, 0] by

  S(φ, x)(θ) = φ(θ + h^{(N)})                     if θ ∈ [−r, −h^{(N)}],
  S(φ, x)(θ) = φ(0) + ((θ + h^{(N)})/h^{(N)}) x    if θ ∈ (−h^{(N)}, 0].

As a consequence of Assumption (A5.1.4), the functions f̃, g̃, and L̃ defined above are continuous on U. By Assumption (A5.1.2), |f̃|, |g̃|, and |L̃| are all bounded by K_b. As a consequence of Proposition 5.2.1, the function Ṽ is Lipschitz continuous, with

  sup_{x,y∈ℝ^n, x≠y} |Ṽ(x) − Ṽ(y)| / |x − y| ≤ 3K_{lip}(T + 1) e^{3T(T+4m)K_{lip}^2}.

Let u(·) ∈ U[t, T] and let z(·; u(·)) be the solution to (5.7) of degree N under control process u(·) with the initial datum (t, ψ) ∈ I^{(N)} × C^{(N)}, where ψ|_{[−r,0]} = φ. As z(·) also satisfies (5.10), we see that

  z(s; u(·)) − φ(0) = ∫_t^s f̃(u(λ)) dλ + ∫_t^s g̃(u(λ)) dW(λ),  ∀s ∈ [t, t + h^{(N)}].

We need the following Krylov stochastic mean value theorem (see Theorem 2.7 in [Kry01]), which is restated as follows.


Theorem (Krylov). Let T̄ > 0. There is a constant K̄ > 0, depending only on K_b and the dimensions n and m, such that the following holds: for any N = 1, 2, . . . such that N ≥ r, any bounded continuous function L̃: U → ℝ, any bounded Lipschitz continuous function Ṽ: ℝ^n → ℝ, and any u(·) ∈ U[t, T], there exists u^{(N)}(·) ∈ U^{(N)}[t, T] such that

  E[ ∫_t^{T̄} L̃(u(s)) ds + Ṽ(z_{T̄}(u(·))) ] − E[ ∫_t^{T̄} L̃(u^{(N)}(s)) ds + Ṽ(z_{T̄}(u^{(N)}(·))) ]
    ≤ K̄(1 + T̄)(h^{(N)})^{1/4} ( (h^{(N)})^{1/4} sup_{u∈U} |L̃(u)| + sup_{x,y∈ℝ^n, x≠y} |Ṽ(x) − Ṽ(y)|/|x − y| ).

(Note that in the above theorem the difference between the two expectations may be inverted, since we can take −L̃ in place of L̃ and −Ṽ in place of Ṽ.)

By the above Krylov theorem, we find ū(·) ∈ U^{(N,M)}[t, T] such that

  E[ ∫_t^{t+h^{(N)}} L̃(u(s)) ds + Ṽ(z(t + h^{(N)}; u(·)) − φ(0)) ]
    − E[ ∫_t^{t+h^{(N)}} L̃(ū(s)) ds + Ṽ(x(t + h^{(N)}; ū(·))) ]
    ≤ K̄(1 + h^{(N)})(h^{(N)}/M)^{1/4} ( (h^{(N)}/M)^{1/4} sup_{u∈U} |L̃(u)| + sup_{x,y∈ℝ^n, x≠y} |Ṽ(x) − Ṽ(y)|/|x − y| ),

where x(·; ū(·)) satisfies

  x(s; ū(·)) = ∫_t^s f̃(ū(λ)) dλ + ∫_t^s g̃(ū(λ)) dW(λ),  ∀s ≥ t.

Notice that the constant K̄ above depends only on K_b and the dimensions n and m. Let z(·; ū(·)) be the solution to (5.7) of degree N under control process ū(·) with initial datum (t, ψ) ∈ I^{(N)} × C^{(N)}, where ψ|_{[−r,0]} = φ as above. Then, by construction, z(s; ū(·)) − φ(0) = x(s; ū(·)) for all s ∈ [t, t + h^{(N)}]. Set

  K̂ := 2K̄ r^{−β/4} ( K_b + 3K_{lip}(T + 1) e^{3T(T+4m)K_{lip}^2} ).

Since M ≥ N^β by hypothesis, (1 + β)/4 = 1 + δ > 1, and h^{(N)} = r/N, we have

  (h^{(N)}/M)^{1/4} = r^{1/4}(NM)^{−1/4} ≤ r^{1/4} N^{−(1+β)/4} = r^{−β/4} (h^{(N)})^{1+δ}.

Recalling the definitions of f̃, g̃, L̃, and Ṽ, we have thus found a piecewise constant strategy ū(·) ∈ U^{(N,M)}[t, T] such that

  E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}φ, u(s)) ds + V^{(N)}(t + h^{(N)}, z_{t+h^{(N)}}) ]
    ≤ E[ ∫_t^{t+h^{(N)}} L(s, Π^{(N)}φ, ū(s)) ds + V^{(N)}(t + h^{(N)}, z̄_{t+h^{(N)}}) ] + K̂(h^{(N)})^{1+δ}.

This proves the theorem. □

Corollary 5.2.9 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Then there is a positive constant K̄, depending only on K_b, K_{lip}, and the dimensions n and m, such that for all β > 3, all N = 1, 2, . . . with N ≥ r, all M = 1, 2, . . . with M ≥ N^β, and all initial datum (t, φ) ∈ I^{(N)} × C, it holds that

  |V^{(N)}(t, φ) − V^{(N,M)}(t, φ)| ≤ K̄ T r^{−β/(1+β)} ( r/N^{1+β} )^{(β−3)/(4(1+β))}.

In particular, with M = ⌈N^β⌉, where ⌈a⌉ is the least integer not smaller than a ∈ ℝ, the upper bound on the discretization error can be rewritten as

  |V^{(N)}(t, φ) − V^{(N,⌈N^β⌉)}(t, φ)| ≤ K̄ T r^{−β/(1+β)} ( r/N^{1+β} )^{(β−3)/(4(1+β))}.

From the above corollary we see that, in terms of the total number N⌈N^β⌉ of time steps, we can achieve any rate of convergence smaller than 1/4 by choosing the subdiscretization order β sufficiently large.

5.2.3 Overall Discretization Error

Theorem 5.2.10 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let 0 < γ ≤ 1 and K_H < ∞. Then there is a constant K̄ depending on γ, K_H, K_{lip}, K_b, T, and the dimensions n and m such that for all β > 3, all N = 1, 2, . . . with N ≥ 2r, and all initial datum (t, φ) ∈ I^{(N)} × C with φ being γ-Hölder continuous, it holds that, with h = r/N^{1+β},

  |V(t, φ) − V^{(N,⌈N^β⌉)}(t, φ)|
    ≤ K̄ ( r^{γβ/(1+β)} h^{γ/(1+β)} ∨ r^{β/(2(1+β))} \sqrt{\ln(1/h)} h^{1/(2(1+β))} + r^{−β/(1+β)} h^{(β−3)/(4(1+β))} ).

In particular, with β = 5 and h = r/N^6, it holds that

  |V(t, φ) − V^{(N,⌈N^5⌉)}(t, φ)| ≤ K̄ ( r^{5γ/6} h^{γ/6} ∨ r^{5/12} \sqrt{\ln(1/h)} h^{1/12} + r^{−5/6} h^{1/12} ).

Proof. Clearly, |V − V^{(N,⌈N^β⌉)}| ≤ |V − V^{(N)}| + |V^{(N)} − V^{(N,⌈N^β⌉)}|. The assertion now follows from Theorem 5.2.2 and Corollary 5.2.9, where we bound ln(1/h^{(N)}) = ln(N/r) by ln(1/h). □
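The h-exponents appearing in the overall bound are elementary functions of β and γ and can be tabulated directly. The helper below is ours (not the book's); it evaluates the three exponents γ/(1+β), 1/(2(1+β)), and (β−3)/(4(1+β)), and confirms that the choice β = 5 balances the last two at 1/12:

```python
from fractions import Fraction

def h_exponents(beta, gamma):
    """h-exponents of the three error terms in the overall bound."""
    holder = Fraction(gamma) / (1 + beta)         # gamma/(1+beta)
    log_term = Fraction(1, 2 * (1 + beta))        # 1/(2(1+beta))
    control = Fraction(beta - 3, 4 * (1 + beta))  # (beta-3)/(4(1+beta))
    return holder, log_term, control

# With beta = 5 (and gamma = 1), the log term and the control-discretization
# term both decay like h^(1/12).
holder, log_term, control = h_exponents(5, 1)
print(holder, log_term, control)   # 1/6 1/12 1/12
```

Larger β improves the control-discretization exponent toward 1/4 but weakens the semidiscretization exponents, which is the trade-off behind the particular choice β = 5.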


Theorem 5.2.11 Assume Assumptions (A5.1.1)-(A5.1.3) hold. Let 0 < γ ≤ 1 and K_H < ∞. Then there is a constant K̄(r), depending on γ, K_H, K_{lip}, K_b, T, the dimensions n and m, and the delay duration r, such that for all β > 3, all N, M = 1, 2, . . . with N ≥ 2r and M ≥ N^β, and all initial datum (t, φ) ∈ I^{(N)} × C with φ being γ-Hölder continuous, the following holds: if ū(·) ∈ U^{(N,M)}[t, T] is such that

  V^{(N,M)}(t, φ) − J^{(N)}(t, φ; ū(·)) ≤ ε,

then, with h = r/N^{1+β},

  V(t, φ) − J(t, φ; ū(·)) ≤ K̄(r) ( h^{γ/(1+β)} ∨ \sqrt{\ln(1/h)} h^{1/(2(1+β))} + h^{(β−3)/(4(1+β))} ) + ε.

Proof. Let ū(·) ∈ U^{(N,M)}[t, T] be such that V^{(N,M)}(t, φ) − J^{(N)}(t, φ; ū(·)) ≤ ε. Then

  V(t, φ) − J(t, φ; ū(·)) ≤ [V(t, φ) − V^{(N,M)}(t, φ)] + [V^{(N,M)}(t, φ) − J^{(N)}(t, φ; ū(·))] + [J^{(N)}(t, φ; ū(·)) − J(t, φ; ū(·))].

The assertion is now a consequence of Theorem 5.2.10 and the error estimates of Section 5.1. □

5.3 Markov Chain Approximation

To avoid complications arising from the discretization of the space ℝ^n and the degeneracy of the diffusion, we will assume without loss of generality that the controlled SHDE (5.2) is autonomous and one-dimensional (i.e., n = m = 1). Specifically, we consider the following autonomous one-dimensional controlled equation:

  dx(s) = f(x_s, u(s)) ds + g(x_s) dW(s),  s ∈ [0, T],   (5.16)

with the initial segment ψ ∈ C (here, C = C([−r, 0]; ℝ) throughout this section) at the initial time t = 0.

Assumption 5.3.1 In this section, we assume that the real-valued functions f, g, L, and Ψ defined on C × U, C, C × U, and C, respectively, satisfy the following conditions:

(A5.3.1) (Global Boundedness) The functions |f| and |g| are bounded by a constant K_b > 0; that is,

  |f(φ, u)| + |g(φ)| ≤ K_b,  ∀(φ, u) ∈ C × U.

(A5.3.2) (Uniform Lipschitz Condition) There is a constant K_{lip} > 0 such that for all φ, ϕ ∈ C and u ∈ U,

  |f(φ, u) − f(ϕ, u)| + |g(φ) − g(ϕ)| + |L(φ, u) − L(ϕ, u)| + |Ψ(φ) − Ψ(ϕ)| ≤ K_{lip} ‖φ − ϕ‖.


(A5.3.3) (Ellipticity of the Diffusion Coefficient) g(φ) ≥ σ for all φ ∈ C, where σ > 0 is a given positive constant.

The objective of the optimal classical control problem is to find an admissible control u(·) ∈ U[0, T] that maximizes the expected objective functional

  J(ψ; u(·)) = E[ ∫_0^T e^{−αs} L(s, x_s(·; ψ, u(·)), u(s)) ds + e^{−αT} Ψ(x_T(·; ψ, u(·))) ].   (5.17)

Again, the value function of the optimal control problem is defined by

  V(ψ) = sup_{u(·)∈U[0,T]} J(ψ; u(·)),  ψ ∈ C.   (5.18)

We also recall, in the following, the corresponding optimal relaxed control problem, in which the controlled state equation is described by

  dx(s) = ∫_U f(x_s, u) µ̇(s, du) ds + g(x_s) dW(s),  s ∈ [0, T],   (5.19)

and the objective functional is defined by

  J(ψ; µ(·, ·)) = E[ ∫_0^T ∫_U e^{−αs} L(s, x_s(·; ψ, u), u) µ̇(s, du) ds + e^{−αT} Ψ(x_T(·; ψ, u(·))) ].   (5.20)

The objective of the optimal relaxed control problem is to find an optimal relaxed control µ*(·, ·) ∈ R[0, T] that maximizes J(ψ; µ(·, ·)). Again, the value function (in slightly different notation from that of (5.18)) is described by

  V̂(ψ) = sup_{µ(·,·)∈R[0,T]} J(ψ; µ(·, ·)),  ψ ∈ C,   (5.21)

where R[0, T] is the class of admissible relaxed controls (see Section 3.2.2 in Chapter 3). Note that every u(·) ∈ U[0, T] can be represented as an admissible relaxed control µ(·, ·) via

  µ(A) = ∫_0^T ∫_U 1_{(t,u)∈A} δ_{u(t)}(du) dt,  A ∈ B([0, T] × U),   (5.22)


In the following subsections, we formulate optimal control problems for appropriate approximating Markov chains. It is shown that, as the approximation parameter N → ∞ (or, equivalently, h^{(N)} → 0), the value functions of the approximating optimal control problems converge to that of the optimal relaxed control problem and, hence, to the value function of the original optimal control problem. The difference between the Markov chain approximation and the semidiscretization scheme presented in Section 5.2 is that the Markov chain approximation also requires discretization of the spatial variable of the state process, to the nearest multiple of √h^{(N)}.

5.3.1 Controlled Markov Chains

Let N = 1, 2, . . . be given and recall that T^{(N)} = h^{(N)} ⌊T/h^{(N)}⌋. We often suppress the superscript (N) in the following to avoid complicating the notation when there is no danger of ambiguity. We call a one-dimensional discrete-time process

  {ζ(kh), k = −N, −N + 1, . . . , 0, 1, 2, . . . , T^{(N)}}

defined on a complete filtered probability space (Ω, F, P, F) a discrete chain of degree N if it takes values in S ≡ {k√h | k = 0, ±1, ±2, . . .} and ζ(kh) is F(kh)-measurable for all k = 0, 1, 2, . . . , T^{(N)}. We define the ℝ^{N+1}-valued discrete process {ζ_{kh}, k = 0, 1, 2, . . . , T^{(N)}} by setting, for each k = 0, 1, 2, . . . , T^{(N)},

  ζ_{kh} = (ζ((k − N)h), ζ((k − N + 1)h), . . . , ζ((k − 1)h), ζ(kh)).

Note that the one-dimensional discrete chain need not be Markovian. With a little abuse of notation, we also define Π(ζ_{kh}) ∈ C and Π(φ) ∈ C as the linear interpolations of ζ_{kh} ∈ ℝ^{N+1} and φ ∈ C, respectively. A sequence u^{(N)}(·) = {u^{(N)}(kh), k = 0, 1, 2, . . . , T^{(N)}} is said to be a discrete admissible control if u^{(N)}(kh) is F(kh)-measurable for each k = 0, 1, 2, . . . , T^{(N)} and

  E[ Σ_{k=0}^{T^{(N)}} |u^{(N)}(kh)|² ] < ∞.

As in Section 5.1.2, but with M = 1 for simplicity, we also let U^{(N)}[0, T] be the class of continuous-time admissible control processes ū(·) = {ū(s), s ∈ [0, T]} such that, for each s ∈ [0, T], ū(s) = ū(s_N) is F(s_N)-measurable and takes only finitely many different values in U.

Remark 5.3.2 We observe the following facts and conventions:

5 Discrete Approximations

1. Without loss of generality, we can and will assume that all real-valued discrete processes ζ(·) = {ζ(kh), k = −N, −N + 1, . . . , 0, 1, 2, · · · , T (N ) } discussed in this section are actually taking discrete values in the √ set S. This can be done by truncation down to the nearest multiple of h. Implicitly, we are also discretizing the value of the chain ζ(·), which is not required in Section 5.2 for a semidiscretization scheme. 2. Similar to the fact that the real-valued solution x(·) to (5.16) is not a Markov process and yet its corresponding C-valued segment process {xs , s ∈ [0, T ]} is. We note here that the S-valued process ζ(·) is not Markovian but it is desirable under appropriate conditions that its corresponding (S)N +1 -valued segment process {ζkh , k = 0, 1, 2 . . . , T (N ) } is Markovian with respect to the discrete filtration {F(kh), k = 0, 1, 2, . . .}. Given a one-step Markov transition function p(N ) : (S)N +1 ×U ×(S)N +1 → [0, 1], where p(N ) (x, u; y) will be interpreted as the probability that the ζ(k+1)h = y ∈ (S)N +1 given that ζkh = x and u(kh) = u. We define a sequence of controlled Markov chains associated with the initial segment ψ and u ¯(·) ∈ U (N ) [0, T ] as a family {ζ (N ) (·), N = 1, 2, . . .} of processes such that ζ (N ) (·) is a S(N ) -valued discrete chain of degree N defined on the same stochastic basis as u(N ) (·), provided the following conditions are satisfied: Assumption 5.3.3 (i) Initial Condition: ζ(−kh) = ψ(−kh) = S for k = −N, −N + 1, · · · , 0, 1, . . . , T (N ) . (ii). Extended Markov Property: For any k = 1, 2, . . ., and y = (y(−N h), y((−N + 1)h), · · · , y(0)) ∈ (S)N +1 , P {ζ(k+1)h = y | ζ(ih), u(ih), i ≤ k}  (N ) p (ζkh , u(kh); y), if y(−ih) = ζ((−i + 1)h) for 1 ≤ i ≤ N = 0, otherwise. (iii) Local Consistency with the Drift Coefficient : (N,u)

b(kh) ≡ Eψ [ζ((k + 1)h) − ζ(kh)] = hf (Π(ζkh ), u(kh)) + o(h) ≡ hf (N ) (ζkh , u(kh)), where f (N ) : N +1 × U →  is defined by f (N ) (x, u) = f (Π (N ) (x), u), N,u(·)

∀(x, u) ∈ N +1 × U,

Eψ is the conditional expectation given the discrete admissible control (N ) u (·) = {u(kh), k = 0, 1, 2, · · · , T (N ) }, and the initial function

5.3 Markov Chain Approximation

269

π (N ) ψ ≡ (ψ(−N h), ψ((−N + 1)h), · · · , ψ(−h), ψ(0)). (iv) Local Consistency with the Diffusion Coefficient: N,u(·)



[(ζ((k + 1)h) − ζ(kh) − b(kh))2 ] = hg 2 (ζkh , u(kh)) + o(h) ≡ h(g (N ) )2 (ζkh , u(kh)),

where g (N ) : N +1 →  is defined by g (N ) (x) = g(Π (N ) (x)),

∀x ∈ N +1 .

˜ independent of N , such that (v) Jump Heights: There is a positive number K √ ˜ h for some K ˜ > 0. sup |ζ((k + 1)h) − ζ(kh)| ≤ K k

It is straightforward, under Assumption 5.3.3, that a sequence of extended transition functions with the jump height and the local consistency conditions can be constructed. The following is a construction example of the extended Markov transition functions that satisfy the above conditions. Examples for the Markov Transition Function (A) We can define the Markov probability transition function p(N ) : (S)N +1 × U × (S)N +1 → [0, 1] as follows: p(N ) (x, u; x ⊕ h) =

g 2 (Π(x)) + hf (Π(x), u) 2g 2 (Π(x))

p(N ) (x, u; x $ h) =

g 2 (Π(x)) − hf (Π(x), u) , 2g 2 (Π(x))

and

for all x = (x(−N h), x((−N + 1)h), . . . , x(−h), x(0)) ∈ (S)N +1 , x ⊕ h = (x(−N ), x((−N + 1)h), . . . , x(−h), x(0) + h), and x $ h = (x(−N h), x(−N + 1), . . . , x(−1), x(0) − h), where Π(x) ∈ C is the linear interpolation of x. (B) If the diffusion coefficient g : C →  is such that g(φ) ≡ 1 for all φ ∈ C, then the controlled SHDE (5.16) reduces to the following equation: dx(s) = f (xs , u(s))ds + dW (s),

s ∈ [0, T ].

270

5 Discrete Approximations

The discrete chain ζ(·) = {ζ(kh), k = −N, −N + 1, . . . , −1, 0, 1 · · · , T (N ) } corresponding to the above controlled SHDE can be written as ζ((k+1)h) = ζ(kh)+hf (Π (N ) (ζkh ), u(kh))+∆W (kh),

k = 0, 1, 2, . . . , T (N ) ,

and (ζ(−N h), ζ((−N + 1)h), . . . , ζ(−h), ζ(0)) = (ψ(−N h), ψ((−N + 1)h), . . . , ψ(−h), ψ(0)). Since ∆W (kh) ≡ W ((k + 1)h) − W (kh) is Gaussian with E[∆W (kh)] = 0 and V ar[∆W (kh)] = kh, it is clear that the Markov transition function p(N ) in Assumption 5.3.3 can be similaryly defined for this special case by taking g(φ) ≡ 1. 5.3.2 Optimal Control of Markov Chains Let N = 1, 2, . . . be given. Using the notation and concept developed in the previous subsection, we assume that the (S)N +1 -valued process {ζkh , k = 0, 1, 2, . . .} is a controlled (S)N +1 -valued Markov chain with initial datum ζ0 = π (N ) ψ ∈ (S)N +1 and the Markov probability transition function p(N ) : (S)N +1 × U × (S)N +1 → [0, 1] that satisfies Assumption 5.3.3. In the following, we will define the objective function J (N ) : (S)N +1 × (N ) U [0, T ] →  and the value function V (N ) : (S)N +1 →  of the approximating optimal control problem as follows. Define the discrete objective functional of degree N by ⎡ (N ) ⎤ T −1 (N ) e−αkh L(Π (N ) (ζkh ), u(kh))h + Ψ (Ψ (ζT N )⎦ , J (N ) (π (N ) ; u(N ) (·)) = E ⎣ k=0

(5.23) where ψ (N ) = (ψ(−N h), ψ((−N + 1)h), . . . , ψ(−h), ψ(0)) ∈ (S)N +1 ,

for ψ ∈ C,

and {u(N ) (·) = {u(kh), k = 0, 1, 2, . . .} is adapted to {F (N ) (kh), k = 0, 1, 2, . . .} and taking values in U . V (N ) (ψ (N ) ) = sup J(ψ (N ) ; u(N ) (·)), u(N ) (·) (N )

(N )

ψ ∈ C,

(5.24)

with the terminal condition V (N ) (ζ T h ) = Ψ (ζ T h ). h h For each N = 1, 2, . . ., we state without proof, the following dynamic (N ) programming principle (DDP) for the controlled Markov chain {ζkh , k = 0, 1, . . . , T (N ) } and discrete admissible control process u(·) ∈ U (N ) [0, T ] as follows.

5.3 Markov Chain Approximation

271

Proposition 5.3.4 Assume Assumption 5.3.3 holds. Let ψ ∈ C and let {ζkh , k = 0, 1, 2, . . . , T (N ) } be an (S)N +1 -valued Markov chain determined by the probability transition function p(N ) . Then for each k = 0, 1, 2, . . . , T (N ) −1,  k  (N ) V (N ) (ψ (N ) ) = sup E V (N ) (ζ(k+1)h ) + hL(Π(ζih ), u(N ) (ih)) . u(N ) (·)

i=0

The optimal control of Markov chain problem (OCMCP) is stated below. Problem (OCMCP). Given N = 1, 2, . . . and ψ ∈ C, find an admissible discrete control process u(N ) (·) that maximizes the objective functional J (N ) (ψ (N ) ; u(N ) (·)) in (5.23). The detail of optimal control of Markov chain is an entire subject of study by itself and the space will not be well spent if we attempt to reproduce the theory here. The readers are referred to monographs by Dynkin and Yushkevich [DY79] and Berzekas [Ber76] for details. Given N = 1, 2, . . . and initial segment ψ ∈ C, we outline a backward-intime computational algorithm for Problem (OCMCP), based on Proposition 5.3.4, as follows. A Computational Algorithm (N )

Step 1. For M = T (N ) , set VM (ψ) = Ψ (ψ) for all φ ∈ C. Compute   (N ) (N ) JM (x, u) ≡ E VM (xM ) | xM −1 = x, u((M − 1)h) = u √ √ = Ψ (x ⊕ h)p(N ) (x, u, x ⊕ h) √ √ + Ψ (x $ h, u)p(N ) (x, u, x $ h), where x = (x(−N ), x(−N + 1), . . . , x(−1), x(0)) ∈ (S)N +1 , √ √ x ⊕ h = (x(−N ), x(−N + 1), . . . , x(−1), x(0) + h), and x$



h = (x(−N ), x(−N + 1), . . . , x(−1), x(0) −

√ h).

Step 2. Find u ∈ U that maximizes the following function by setting (N )

u∗ = arg max{L(x, u) + JM (x, u)} u∈U

Let u ¯∗ (s) = u∗ for all (M − 1)h ≤ s ≤ M h, and  (N ) JM −1 (x, u) ≡ E L(xM −1 , u((M − 1)h)) (N ) JM (xM , u∗)

| xM −1 = x, u((M − 1)h) = u √ √ (N ) = L(x, u) + JM −1 (x ⊕ h, u)p(N ) (x, u, x ⊕ h) √ √ (N ) + JM −1 (x $ h, u)p(N ) (x, u, x $ h), +



272

5 Discrete Approximations

where, again, x = (x(−N ), x(−N + 1), . . . , x(−1), x(0)) ∈ (S)N +1 , √ √ x ⊕ h = (x(−N ), x(−N + 1), . . . , x(−1), x(0) + h), and x$



h = (x(−N ), x(−N + 1), . . . , x(−1), x(0) −

√ h).

Step k. For k = 1, 2, . . . , M , find u ∈ U that maximizes the function L(x, u)+ (N ) JM −k (x, u) by setting (N )

u∗ = arg max {L(x, u) + JM −k (x, u)}. u∈U

Let u ¯∗ (s) = u∗ for all (M − k − 1)h ≤ s ≤ (M − k)h, and  (N ) JM −k−1 (x, u) ≡ E L(xM −k−1 , u((M − k − 1)h))



(N ) JM −k (xM , u∗)

| xM −k−1 = x, u((M − k − 1)h) = u √ √ (N ) = L(x, u) + JM −k (x ⊕ h, u∗ )p(N ) (x, u, x ⊕ h) √ √ (N ) + JM −k (x $ h, u∗ )p(N ) (x, u, x $ h). +

Step M. Given the initial segment ψ ∈ C, set  (N ) J1 (x, u) ≡ E L(x1 , u(h))



(N )

(xM , u∗) | xM −k−1 = x, u((M − k − 1)h) = u √ √ (N ) = L(x, u) + JM −k (x ⊕ h, u∗ )p(N ) (x, u, x ⊕ h) √ √ (N ) + JM −k (x $ h, u∗ )p(N ) (x, u, x $ h). + J1

(N )

Let u ∈ U that maximizes the function L(x, u) + J1 (N )

u∗ = arg max {L(x, u) + J1 u∈U

(x, u) by setting

(x, u)}.

Let u ¯∗ (s) = u∗ for all 0 ≤ s ≤ h. In this case, the value function of the optimal (N ) Markov chain control problem can be set as V (N ) (πψ) = J1 (πψ, u∗ ) and ∗ (N ) the optimal piecewise constant control u ¯ (·) ∈ U [0, T ] can be constructed. 5.3.3 Embedding the Controlled Markov Chain Given N = 1, 2, . . . (hence the superscript (N ) will be suppressed when appropriate), we will represent the discrete chain ζ(·) = ζ(kh), k = −N, −N + 1, . . . , −1, 0, 1, 2, . . . , T (N )

5.3 Markov Chain Approximation

273

as a solution to a discretized equation ζ(kh) = ψ(0) +

k−1 

h · f (N ) (ζih , u(ih)) + ξ(kh),

k = 1, 2, . . . , T (N ) , (5.25)

i=0

corresponding to (5.16) with admissible constant control process u(·) = u(N ) (·) ∈ U (N ) [0, T ] and the initial datum ψ (N ) ≡ (ψ(−N h), ψ((−N + 1)h), . . . , ψ(−h), ψ(0)) = (ζ(−N h), ζ((−N + 1)h), . . . , ζ(−h), ζ(0)) ∈ SN +1 . Define the discrete process {ξ(kh), k = 0, 1, 2, . . . , T (N ) } by ξ(0) = 0 and ξ(kh) = ζ(kh) − ψ(0) −

k−1 

h · f (N ) (Πζih , u(ih)),

k = 1, 2, . . . , T (N ) . (5.26)

i=0

We have the following two lemmas regarding ζ(·) and ξ(·). Lemma 5.3.5 The one-step Markov transition function p(N ) : SN +1 × U × SN +1 → [0, 1] for the controlled discrete chain ζ(·) described by (5.25) satisfies Assumption 5.3.3. Lemma 5.3.6 The process ξ(·) = {ξ(kh), k = 0, 1, 2, · · · } defined by (5.26) is a discrete-time martingale with respect to the filtration {F (N ) (kh), k = 0, 1, 2, . . .}. Proof. From the definition of ξ (N ) (·), we have E[ξ((k + 1)h) | F (N ) (kh)]   k  (N ) (N ) h · f (Π(ζih ), u(ih)) | F (kh) = E ζ((k + 1)h) − ψ(0) − i=0



= E ζ((k + 1)h) − ζ(kh) +

k 

 (ζ((i + 1)h) − ζ(ih)) || F

(N )

(kh)

i=0



k 

  E h · f (N ) (Π(ζih ), u(ih)) | F (N ) (kh)

i=0

= ξ(kh). Therefore, the process ξ(·) is a discrete-time martingale with respect to the 2 filtration {F (N ) (kh), k = 0, 1, 2, . . .}. Set (N ) 1 (s)

s h −1

:=

 i=0

h·f

(N )

(N ) (Πζih , u ¯(N ) (ih)) −



s 0

(N )

f (Πζt N , u ¯(N ) (t))dt,

s ≥ 0,

274

5 Discrete Approximations

With $T > 0$, for the error term we have
$$E^{(N)}\Big[\sup_{s\in[0,T]} \big|\epsilon_1^{(N)}(s)\big|\Big] \le \sum_{i=0}^{\lfloor T/h\rfloor - 1} h\, E^{(N)}\big|f^{(N)}(\Pi\zeta^{(N)}_{ih}, u^{(N)}(ih)) - f(\Pi\zeta^{(N)}_{ih}, u^{(N)}(ih))\big| + K_b h + \int_0^{\lfloor T/h\rfloor h} E^{(N)}\big|f(\Pi\zeta^{(N)}_{s_N}, \bar{u}^{(N)}(s)) - f(\Pi\zeta^{(N)}_{s}, \bar{u}^{(N)}(s))\big|\, ds,$$
which tends to zero as $N$ goes to infinity by Assumption 5.3.3, dominated convergence, and the defining properties of $\{\zeta^{(N)}(kh),\ k = -N, -N+1, \ldots, -1, 0, 1, 2, \ldots\}$. Moreover, $|\epsilon_1^{(N)}(s)|$ is bounded by $2K \cdot T$ for all $s \in [0,T]$ and all $N$ large enough. Therefore,
$$\lim_{N\to\infty} E^{(N)}\Big[\sup_{s\in[0,T]} \big|\epsilon_1^{(N)}(s)\big|^2\Big] = 0.$$

The discrete-time martingale $\{\xi^{(N)}(kh),\ k = 0, 1, 2, \ldots\}$ can be rewritten as a discrete stochastic integral. Define $\{W^{(N)}(kh),\ k = 0, 1, 2, \ldots\}$ by setting $W^{(N)}(0) = 0$ and
$$W^{(N)}(kh) = \sum_{i=0}^{k-1} \frac{1}{g(\Pi\zeta^{(N)}_{ih})}\big(\xi^{(N)}((i+1)h) - \xi^{(N)}(ih)\big), \qquad k = 1, 2, \ldots.$$
Using the piecewise constant interpolation $\bar{W}^{(N)}(\cdot)$ of $W^{(N)}(\cdot)$, the linearly interpolated process $\tilde{\zeta}^{(N)}(\cdot)$ can be expressed as a solution to
$$\begin{aligned}
\tilde{\zeta}^{(N)}(s) &= \psi^{(N)}(0) + \int_0^s f(\Pi\zeta^{(N)}_{t_N}, \bar{u}^{(N)}(t))\, dt + \int_0^s g(\Pi\zeta^{(N)}_{t_N})\, d\bar{W}^{(N)}(t) + \epsilon_2^{(N)}(s) \\
&\equiv C^{(N)}(s) + \int_0^s g(\Pi\zeta^{(N)}_{t_N})\, d\bar{W}^{(N)}(t), \qquad s \in [0,T], \qquad (5.27)
\end{aligned}$$
where the error term $\epsilon_2^{(N)}(\cdot)$ converges to $\epsilon_1^{(N)}(\cdot)$ in the following sense:
$$\lim_{N\to\infty} E^{(N)}\Big[\sup_{s\in[0,T]} \big|\epsilon_2^{(N)}(s) - \epsilon_1^{(N)}(s)\big|^2\Big] = 0.$$
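The construction above can be exercised in a small numerical sketch. The chain below is an Euler-type discretization of a scalar controlled equation with one discrete delay; the drift f, the diffusion g, and the constant control are illustrative assumptions, not the book's specific model. The compensated process plays the role of ξ in (5.26), and the normalized increments reconstruct the discrete driving noise W^(N) as in the discrete stochastic integral above.

```python
import numpy as np

# Illustrative sketch (not the book's exact chain): an Euler-type discrete
# chain for a scalar controlled equation with a single delay r = N*h,
#   dx(t) = f(x(t), x(t - r), u) dt + g(x(t)) dW(t),
# together with the compensated martingale xi(kh) as in (5.26) and the
# reconstructed discrete noise W^(N).
rng = np.random.default_rng(0)

N, h, T = 5, 0.1, 2.0                  # delay r = N*h, step h, horizon T
K = int(T / h)                         # number of steps, T^(N)

f = lambda x, x_delay, u: -x + 0.5 * x_delay + u   # hypothetical drift
g = lambda x: 1.0 + 0.1 * np.tanh(x)               # hypothetical diffusion
u = 0.0                                            # constant control

zeta = np.zeros(N + 1 + K)
zeta[: N + 1] = 1.0                    # initial segment psi = 1 on [-r, 0]

xi = np.zeros(K + 1)                   # martingale part, xi(0) = 0
W = np.zeros(K + 1)                    # discrete stochastic integral of 1/g
for k in range(K):
    i = N + k                          # index of the current time kh
    drift = h * f(zeta[i], zeta[i - N], u)
    noise = np.sqrt(h) * g(zeta[i]) * rng.standard_normal()
    zeta[i + 1] = zeta[i] + drift + noise
    xi[k + 1] = xi[k] + noise          # chain increment minus its drift
    W[k + 1] = W[k] + noise / g(zeta[i])

# The interpolation of this chain plays the role of zeta-tilde^(N) in (5.27).
print(zeta[N:].round(3))
```

The per-step conditional mean of the ξ-increments is zero by construction, which is the discrete martingale property proved in Lemma 5.3.6.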

5.3.4 Convergence of Approximations

In order to present convergence results, we retain the superscript $(N)$ in each of the applicable quantities in order to highlight its dependence on the discretization parameter $N$.

Let $\{\bar{u}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ be the sequence of maximizing piecewise constant controls; that is, for each $N = 1, 2, \ldots$,
$$\bar{u}^{(N)}(s) = u^{(N)}(kh) \qquad \text{for } kh \le s < (k+1)h \text{ and } k = 0, 1, 2, \ldots, T^{(N)} - 1,$$
for the optimal control of Problem (OCMCP), and let $\{\mu^{(N)}(\cdot,\cdot),\ N = 1, 2, \ldots\}$ be the corresponding sequence of admissible relaxed controls, where
$$\mu^{(N)}(A) = \int_0^T \int_U 1_{\{(t,u)\in A\}}\, \delta_{\bar{u}(t)}(du)\, dt, \qquad A \in \mathcal{B}([0,T] \times U),$$
with $\delta_{\bar{u}(t)}$ being the Dirac measure at $\bar{u}^{(N)}(t)$ and $1_{\{(t,u)\in A\}}$ the indicator function of the set $\{(t,u) \in A\}$. We have the following result, which looks very similar to Proposition 3.2.16 in Chapter 3. However, their proofs differ slightly and, therefore, are included for completeness.

Proposition 5.3.7 Assume Assumption 5.3.3 holds. Suppose the sequence $\{\psi^{(N)},\ N = 1, 2, \ldots\}$ in $\mathbf{C}$ converges to the initial function $\psi \in \mathbf{C}$. Then the sequence of processes
$$\{(\bar{\zeta}^{(N)}(\cdot), \bar{u}^{(N)}(\cdot), \bar{W}^{(N)}(\cdot)),\ N = 1, 2, \ldots\}$$
is tight. Let $(x(\cdot), \mu(\cdot,\cdot), W(\cdot))$ be a limit point of the above sequence and set $\mathcal{F}(s) = \sigma((x(t), \mu(t,A), W(t)),\ A \in \mathcal{B}(U),\ t \le s)$ for $s \in [0,T]$. Then we have the following:
(i) $W(\cdot)$ is an $\{\mathcal{F}(s),\ s \in [0,T]\}$-adapted Brownian motion.
(ii) $\mu(\cdot,\cdot)$ is an admissible relaxed control.
(iii) $x(\cdot)$ is a solution to (5.19) under $(\mu(\cdot,\cdot), W(\cdot))$ and with initial condition $\psi \in \mathbf{C}$.

We note here that since $\psi^{(N)}$ is taken to be $\pi^{(N)}\psi$ for the initial segment $\psi \in \mathbf{C}$,
$$\lim_{N\to\infty} \|\psi^{(N)} - \psi\| \equiv \lim_{N\to\infty} \sup_{\theta\in[-r,0]} |\psi^{(N)}(\theta) - \psi(\theta)| = 0,$$
by the classical Weierstrass theorem. We need the following lemmas for a proof of Proposition 5.3.7.

Lemma 5.3.8 The family of relaxed controls $\{\mu^{(N)}(\cdot,\cdot),\ N = 1, 2, \ldots\}$ corresponding to the family of maximizing piecewise constant controls $\{\bar{u}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ is tight and converges to an admissible relaxed control $\mu(\cdot,\cdot)$ of the optimal relaxed control problem.

Proof. Since the space of admissible relaxed controls $\mathcal{R}[0,T]$ is compact under the weak convergence topology, the sequence $\{\mu^{(N)}(\cdot,\cdot),\ N = 1, 2, \ldots\}$ has a subsequence (still denoted by $\{\mu^{(N)}(\cdot,\cdot),\ N = 1, 2, \ldots\}$ with a little abuse of notation) that converges weakly to $\mu(\cdot,\cdot) \in \mathcal{R}[0,T]$. □


Lemma 5.3.9 The family of piecewise constant processes $\{\bar{\zeta}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ is tight. Its limiting process $x(\cdot) = \{x(t),\ t \in [0,T]\}$ is a strong solution of (5.19) with the initial segment $\psi \in \mathbf{C}$ and under the admissible relaxed control $\mu(\cdot,\cdot) \in \mathcal{R}[0,T]$.

Proof. It is clear that $\psi^{(N)}$ converges to the initial datum $\psi \in \mathbf{C}$, since $\psi^{(N)} = \Pi^{(N)}\psi$ for each $N = 1, 2, \ldots$. Tightness of the sequence $\{\bar{\zeta}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ follows from the Aldous criterion (see Theorem 3.2.6 in Chapter 3): for any $\mathcal{F}^{(N)}$-stopping time $\tau$ and $\delta > 0$, we have
$$E^{(N)}\big[|\bar{\zeta}^{(N)}(\tau+\delta) - \bar{\zeta}^{(N)}(\tau)|^2 \mid \mathcal{F}^{(N)}(\tau)\big] \le 2K_b^2\,\delta(\delta + 1). \qquad \Box$$

Lemma 5.3.10 The family of one-dimensional piecewise constant processes $\{\bar{W}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ is tight, and it converges weakly to a one-dimensional standard Brownian motion $W(\cdot) = \{W(s),\ s \in [0,T]\}$.

Proof. In this proof we will (i) establish the tightness of $\bar{W}^{(N)}(\cdot)$ and (ii) identify the limit points. Note that
$$\langle W^{(N)}\rangle(kh) \equiv \sum_{i=0}^{k-1} E\big[|W^{(N)}((i+1)h) - W^{(N)}(ih)|^2 \mid \mathcal{F}^{(N)}(ih)\big] = \sum_{i=0}^{k-1} \frac{1}{g^2(\bar{\zeta}^{(N)}_{ih})}\, E\big[|\xi^{(N)}((i+1)h) - \xi^{(N)}(ih)|^2 \mid \mathcal{F}^{(N)}(ih)\big] = kh + o(h),$$
for all $N = 1, 2, \ldots$ and $k = 0, 1, \ldots, T^{(N)}$. Taking into account Assumption (A5.3.1) and the definition of the continuous-time piecewise constant processes $\{\bar{W}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$, we see that $\langle W^{(N)}\rangle(\cdot)$ converges in probability, uniformly on $[0,T]$, to the identity process $t \mapsto t$. Since the sequence $\{\bar{W}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ is of uniformly controlled variations, and hence a good sequence of integrators in the sense of Kurtz and Protter [KP91], because its jump heights are uniformly bounded and it is a martingale for each $N = 1, 2, \ldots$, we conclude, without going into the details of the proof, that $\{\bar{W}^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ converges weakly in $C([0,T];\mathbb{R})$ to a standard Brownian motion $W(\cdot)$. We also note that $W(\cdot)$ has independent increments with respect to the filtration $\{\mathcal{F}(s),\ s \in [0,T]\}$. This can be seen by considering the first and second conditional moments of the increments of $W^{(N)}(\cdot)$ for each $N$ and applying the conditions on local consistency and the jump heights of $\{\zeta^{(N)}(\cdot),\ N = 1, 2, \ldots\}$. In addition, the results of [KP91] guarantee weak convergence of the corresponding adapted quadratic variation processes; that is, the sequence $\{[\bar{W}^{(N)}, \bar{W}^{(N)}],\ N = 1, 2, \ldots\}$ converges weakly to $[W,W]$ in $D[0,T]$, the space of right-continuous functions with finite left-hand limits under the Skorokhod topology, where the square brackets indicate the adapted quadratic variation. Convergence also holds for the sequence of process pairs $(W^{(N)}, [W^{(N)}, W^{(N)}])$ in $D([0,T]; \mathbb{R}^2)$ (see Kurtz and Protter [KP04]). □
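The quadratic-variation behavior used above can be checked numerically. In the sketch below (with an illustrative bounded diffusion coefficient g, not from the text), the reconstructed increments ΔW^(N) = Δξ/g have conditional variance h, so the summed squares at the terminal time approximate T.

```python
import numpy as np

# Numerical check that the discrete quadratic variation of W^(N) over [0, T]
# is close to T. The diffusion coefficient g is an illustrative assumption.
rng = np.random.default_rng(1)

h, T = 0.001, 1.0
K = int(T / h)
g = lambda x: 1.0 + 0.5 * np.sin(x)     # hypothetical, bounded away from 0

x = 0.0
qv = 0.0                                 # running quadratic variation of W^(N)
for _ in range(K):
    noise = np.sqrt(h) * g(x) * rng.standard_normal()  # martingale increment
    dW = noise / g(x)                    # increment of W^(N)
    qv += dW**2
    x += -0.5 * x * h + noise            # illustrative scalar dynamics

print(qv)                                # close to T = 1 for small h
```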


Proof of Proposition 5.3.7. The proof of this proposition is a slight modification of that of Proposition 3.2.16 in Chapter 3, where we prove the existence of an optimal classical control. With a little abuse of notation, suppose $\{(\bar{\zeta}^{(N)}(\cdot), \bar{u}^{(N)}(\cdot), \bar{W}^{(N)}(\cdot)),\ N = 1, 2, \ldots\}$ is a weakly convergent subsequence with limit point $(x(\cdot), \mu(\cdot,\cdot), W(\cdot))$. The remaining part that needs to be verified is the identification of $x(\cdot)$ as a solution to (5.19) under the relaxed control $\mu(\cdot,\cdot)$ with the initial condition $\psi$. Define càdlàg and bounded (due to Assumption (A5.2.2)) processes $C^{(N)}(\cdot) = \{C^{(N)}(s),\ s \in [0,T]\}$ by
$$C^{(N)}(s) = \psi^{(N)}(0) + \int_0^s f(\Pi^{(N)}(\zeta^{(N)}_{t_N}), u^{(N)}(t))\, dt + \epsilon_2^{(N)}(s), \qquad s \in [0,T],$$
and $C(\cdot) = \{C(s),\ s \in [0,T]\}$ by
$$C(s) = \psi(0) + \int_0^s \int_U f(x_t, u)\, \dot{\mu}(du, t)\, dt, \qquad s \in [0,T].$$
Invoking Skorokhod's representation theorem in Section 3.2 of Chapter 3, one can establish weak convergence of $C^{(N)}(\cdot)$ to $C(\cdot)$. We now know that each of the sequences
$$\bar{\zeta}^{(N)}(\cdot), \quad C^{(N)}(\cdot), \quad \bar{W}^{(N)}(\cdot), \quad [\bar{W}^{(N)}, \bar{W}^{(N)}](\cdot), \qquad N = 1, 2, \ldots,$$
is weakly convergent in $D([0,T])$. Actually, we have weak convergence for the sequence of quadruples
$$(\bar{\zeta}^{(N)}(\cdot), C^{(N)}(\cdot), \bar{W}^{(N)}(\cdot), [\bar{W}^{(N)}, \bar{W}^{(N)}](\cdot)), \qquad N = 1, 2, \ldots,$$
in $D([0,T]; \mathbb{R}^4)$, since each of these four components converges weakly in $D[0,T]$. To see this, we notice that each of the four sequences
$$(\bar{\zeta}^{(N)} + C^{(N)})(\cdot), \quad (\bar{\zeta}^{(N)} + \bar{W}^{(N)})(\cdot), \quad (\bar{\zeta}^{(N)} + [\bar{W}^{(N)}, \bar{W}^{(N)}])(\cdot), \quad (\bar{W}^{(N)} + [\bar{W}^{(N)}, \bar{W}^{(N)}])(\cdot), \qquad N = 1, 2, \ldots,$$
is tight in $D[0,T]$, because the limit processes $C(\cdot)$, $x(\cdot)$, $W(\cdot)$, and $[W,W]$ are all continuous on $[0,T]$. This implies tightness of the sequence of quadruples in $D([0,T]; \mathbb{R}^4)$. Consequently, it has a unique limit point, namely $(x(\cdot), C(\cdot), W(\cdot), [W,W])$. By virtue of Skorokhod's theorem, the convergence is with probability 1. This proves the proposition. □

Proposition 5.3.11 Assume Assumptions (A5.4.1)-(A5.4.3) hold. If the sequence
$$(\bar{\zeta}^{(N)}(\cdot), \bar{u}^{(N)}(\cdot), \bar{W}^{(N)}(\cdot)), \qquad N = 1, 2, \ldots,$$
of interpolated processes converges weakly to a limit point $(x(\cdot), \mu(\cdot,\cdot), W(\cdot))$, then $x(\cdot)$ is a solution to (5.19) under the relaxed control $\mu(\cdot,\cdot)$ with the initial condition $\psi \in \mathbf{C}$, and
$$\lim_{N\to\infty} J^{(N)}(\psi^{(N)}; u^{(N)}(\cdot)) = J(\psi; \mu(\cdot,\cdot)).$$


Proof. The convergence assertion for the objective functionals is a consequence of Proposition 5.3.7, Assumption (A5.4.3), and the definitions of $J^{(N)}$ and $J$. □

Theorem 5.3.12 Assume Assumptions (A5.4.1)-(A5.4.3) hold. Then
$$\lim_{N\to\infty} V^{(N)}(\psi^{(N)}) = \hat{V}(\psi).$$

Proof. We first note that $\liminf_{N\to\infty} V^{(N)}(\psi^{(N)}) \ge \hat{V}(\psi)$ as a consequence of Propositions 5.3.7 and 5.3.11. In order to show that $\limsup_{N\to\infty} V^{(N)}(\psi^{(N)}) \le \hat{V}(\psi)$, we choose a relaxed control $\mu(\cdot,\cdot)$ so that $J(\psi; \mu(\cdot,\cdot)) = \hat{V}(\psi)$ according to Proposition 5.3.7. Given $\epsilon > 0$, one can construct a sequence of discrete admissible controls $\{u^{(N)}(\cdot),\ N = 1, 2, \ldots\}$ such that $\{(\bar{\zeta}^{(N)}(\cdot), \bar{u}^{(N)}(\cdot), \bar{W}^{(N)}(\cdot)),\ N = 1, 2, \ldots\}$ is weakly convergent, where each of $\bar{\zeta}^{(N)}(\cdot)$ and $\bar{W}^{(N)}(\cdot)$ is constructed as in Proposition 5.3.7, and
$$\lim_{N\to\infty} \big|J^{(N)}(\psi^{(N)}; u^{(N)}(\cdot)) - J(\psi; \mu(\cdot,\cdot))\big| \le \epsilon.$$
The existence of such a sequence of discrete admissible controls is guaranteed. By definition, there exists an admissible control $u^{(N)}(\cdot)$ such that
$$V^{(N)}(\psi^{(N)}) - \epsilon \le J^{(N)}(\psi^{(N)}; u^{(N)}(\cdot)).$$
Using Proposition 5.3.11, we find that
$$\limsup_{N\to\infty} V^{(N)}(\psi^{(N)}) - \epsilon \le \limsup_{N\to\infty} J^{(N)}(\psi^{(N)}; u^{(N)}(\cdot)) \le \hat{V}(\psi).$$
Since $\epsilon$ is arbitrary, the assertion follows. □
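The convergence of the discrete values can be observed numerically for a fixed admissible control: evaluate the discrete objective J^(N) by Monte Carlo for increasing N and watch the estimates stabilize. The dynamics, running cost, and terminal cost below are illustrative assumptions, not the book's specific data.

```python
import numpy as np

# Monte Carlo evaluation of a discrete objective J^(N) for a fixed constant
# control u, for increasing discretization level N (delay r fixed, h = r/N).
# f, L, Psi, and the noise intensity are illustrative assumptions.
rng = np.random.default_rng(2)

r, T, alpha, u = 1.0, 1.0, 0.1, 0.2

def J_N(N, n_paths=2000):
    h = r / N
    K = int(round(T / h))
    total = 0.0
    for _ in range(n_paths):
        x = np.ones(N + 1)             # initial segment psi = 1 on [-r, 0]
        cost = 0.0
        for k in range(K):
            cur, delayed = x[-1], x[0]
            drift = -cur + 0.5 * delayed + u
            cost += np.exp(-alpha * k * h) * (cur**2 + u**2) * h  # running cost
            new = cur + drift * h + 0.3 * np.sqrt(h) * rng.standard_normal()
            x = np.append(x[1:], new)  # slide the segment window
        cost += np.exp(-alpha * T) * x[-1]**2                     # terminal cost
        total += cost
    return total / n_paths

print([round(J_N(N), 3) for N in (2, 4, 8)])  # estimates stabilize as N grows
```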

5.4 Finite Difference Approximation

In this section we consider an explicit finite difference scheme and show that it converges to the unique viscosity solution of the HJBE
$$\alpha V(t,\psi) - \partial_t V(t,\psi) - \max_{v\in U}\big[A^v V(t,\psi) + L(t,\psi,v)\big] = 0 \qquad (5.28)$$
on $[0,T]\times\mathbf{C}$, with the terminal condition $V(T,\psi) = \Psi(\psi)$ for all $\psi\in\mathbf{C}$. In the above, we recall from Section 3.4.1, for the convenience of the reader, that


$$A^v\Phi(t,\psi) \equiv S\Phi(t,\psi) + \overline{D\Phi}(t,\psi)\big(f(t,\psi,v)1_{\{0\}}\big) + \frac{1}{2}\sum_{j=1}^m \overline{D^2\Phi}(t,\psi)\big(g(t,\psi,v)e_j 1_{\{0\}},\, g(t,\psi,v)e_j 1_{\{0\}}\big) \qquad (5.29)$$
for any $\Phi \in C^{1,2}_{lip}([0,T]\times\mathbf{C}) \cap \mathcal{D}(S)$, where $e_j$ is the $j$th vector of the standard basis in $\mathbb{R}^m$, the function $1_{\{0\}}: [-r,0] \to \mathbb{R}$ is defined by
$$1_{\{0\}}(\theta) = \begin{cases} 0 & \text{for } \theta \in [-r,0), \\ 1 & \text{for } \theta = 0, \end{cases}$$
and $S\Phi(t,\psi)$ is defined as
$$S\Phi(t,\psi) = \lim_{\epsilon\downarrow 0} \frac{\Phi(t,\tilde{\psi}_\epsilon) - \Phi(t,\psi)}{\epsilon}, \qquad (5.30)$$
where $\tilde{\psi}: [-r,T] \to \mathbb{R}^n$ is the extension of $\psi \in \mathbf{C}$ from $[-r,0]$ to $[-r,T]$ defined by
$$\tilde{\psi}(t) = \begin{cases} \psi(0) & \text{for } t \ge 0, \\ \psi(t) & \text{for } t \in [-r,0). \end{cases}$$
Note that $D\Phi(t,\psi) \in \mathbf{C}^*$ and $D^2\Phi(t,\psi) \in \mathbf{C}^\dagger$ are the first- and second-order Fréchet derivatives of $\Phi$ with respect to its second argument $\psi \in \mathbf{C}$. In addition, $\overline{D\Phi}(t,\psi) \in (\mathbf{C}\oplus\mathbf{B})^*$ is the extension of $D\Phi(t,\psi)$ from $\mathbf{C}^*$ to $(\mathbf{C}\oplus\mathbf{B})^*$, and $\overline{D^2\Phi}(t,\psi) \in (\mathbf{C}\oplus\mathbf{B})^\dagger$ is the extension of $D^2\Phi(t,\psi)$ from $\mathbf{C}^\dagger$ to $(\mathbf{C}\oplus\mathbf{B})^\dagger$. These results can be found in Lemma 2.2.3 and Lemma 2.2.4 in Chapter 2.

In this section, we assume the following conditions, which are the same as those of Assumption 3.1.1 in Chapter 3 and are repeated here for convenience.

Assumption 5.4.1 The functions $f$, $g$, $L$, and $\Psi$ satisfy the following conditions:

(A5.4.1) (Lipschitz Continuity) There exists a constant $K_{lip} > 0$ such that
$$|f(t,\phi,u) - f(s,\varphi,v)| + |g(t,\phi,u) - g(s,\varphi,v)| + |L(t,\phi,u) - L(s,\varphi,v)| + |\Psi(\phi) - \Psi(\varphi)| \le K_{lip}\big(\sqrt{|t-s|} + \|\phi-\varphi\| + |u-v|\big),$$
for all $s, t \in [0,T]$, $u, v \in U$, and $\phi, \varphi \in \mathbf{C}$.

(A5.4.2) (Linear and Polynomial Growth) There exists a constant $K_{grow} > 0$ such that
$$|f(t,\phi,u)| + |g(t,\phi,u)| \le K_{grow}(1 + \|\phi\|)$$
and
$$|L(t,\phi,u)| + |\Psi(\phi)| \le K_{grow}(1 + \|\phi\|^2)^k, \qquad \forall (t,\phi) \in [0,T]\times\mathbf{C} \text{ and } u \in U.$$


(A5.4.3) The initial function $\psi$ belongs to the space $L^2(\Omega, \mathbf{C}, \mathcal{F}(t))$ of $\mathcal{F}(t)$-measurable elements in $L^2(\Omega; \mathbf{C})$ such that $\|\psi\|^2_{L^2(\Omega;\mathbf{C})} \equiv E[\|\psi\|^2] < \infty$.

Based on Assumptions (A5.4.1)-(A5.4.3), we will extend a method introduced by Barles and Souganidis [BS91] to an infinite-dimensional setting that is suitable for the HJBE described in (5.28). The finite difference scheme and convergence results are described in Section 5.4.1. A computational algorithm based on the finite difference scheme is summarized in Section 5.4.3.

5.4.1 Finite Difference Scheme

Given a positive integer $M$, we consider the following truncated optimal control problem with value function $V_M: [0,T]\times\mathbf{C}\to\mathbb{R}$ defined by
$$V_M(t,\psi) = \sup_{u(\cdot)\in\mathcal{U}[t,T]} E\Big[\int_t^T e^{-\alpha(s-t)}\big(L(s,x_s,u(s)) \wedge M\big)\, ds + e^{-\alpha(T-t)}\big(\Psi(x_T) \wedge M\big)\Big], \qquad (5.31)$$
where $a \wedge b$ is defined by $a \wedge b = \min\{a,b\}$ for all $a, b \in \mathbb{R}$. The corresponding truncated HJBE is given by
$$\alpha V_M(t,\psi) - \partial_t V_M(t,\psi) - \max_{u\in U}\big[A^u V_M(t,\psi) + (L(t,\psi,u)\wedge M)\big] = 0 \qquad (5.32)$$
on $[0,T]\times\mathbf{C}$, with $V_M(T,\psi) = \Psi(\psi)\wedge M$ for all $\psi\in\mathbf{C}$. The corresponding truncated Hamiltonian is
$$\begin{aligned}
&H_M\big(t,\psi,V_M(t,\psi),\partial_t V_M(t,\psi), DV_M(t,\psi), D^2 V_M(t,\psi)\big) \\
&\quad = S(V_M)(t,\psi) + \partial_t V_M(t,\psi) + \sup_{u\in U}\Big[\overline{DV_M}(t,\psi)\big(f(t,\psi,u)1_{\{0\}}\big) + (L(t,\psi,u)\wedge M) \\
&\qquad\qquad + \frac{1}{2}\sum_{i=1}^m \overline{D^2V_M}(t,\psi)\big(g(t,\psi,u)e_i 1_{\{0\}},\, g(t,\psi,u)e_i 1_{\{0\}}\big)\Big]. \qquad (5.33)
\end{aligned}$$

Similar to what was presented in Sections 3.5 and 3.6, it can be shown that the value function $V_M: [0,T]\times\mathbf{C}\to\mathbb{R}$ is the unique viscosity solution (see Section 3.5 of Chapter 3 for the definition of a viscosity solution) of the HJBE
$$0 = \alpha V_M(t,\psi) - H_M\big(t,\psi,V_M(t,\psi),\partial_t V_M(t,\psi), DV_M(t,\psi), D^2 V_M(t,\psi)\big) \qquad (5.34)$$
on $[0,T]\times\mathbf{C}$ with $V_M(T,\psi) = \Psi(\psi)\wedge M$ for all $\psi\in\mathbf{C}$.

281

Moreover, it is easy to see that VM (t, ψ) → V (t, ψ) for each (t, ψ) ∈ [0, T ] × C as M → ∞. In view of these, we need only to find the numerical solution for VM (t, ψ) for each (t, ψ) ∈ [0, T ] × C. Let  with 0 <  < 1 be the step size for variable ψ and η, with 0 < η < 1 be the step size for t. We define the finite difference operators ∆η Φ, D Φ and ∆2 Φ for each Borel measurable function Φ : [0, T ] × C →  by Φ(t + η, ψ) − Φ(t, ψ) , η Φ(t, ψ + (φ + v1{0} )) − Φ(t, ψ) , D Φ(t, ψ)(φ + v1{0} ) =  Φ(t, ψ + (φ + v1{0} )) − Φ(t, ψ) D 2 Φ(t, ψ)(φ + v1{0} , ϕ + w1{0} ) = 2 Φ(t, ψ − (ϕ + w1{0} )) − Φ(t, ψ) + , 2 ∆η Φ(t, ψ) =

where φ, ϕ ∈ C and v, w ∈ n . Recall that  1 SΦ(t, ψ) = lim Φ(t, ψ˜ ) − Φ(t, ψ) . ↓0  Therefore, we define S Φ(t, ψ) =

 1 Φ(t, ψ˜ ) − Φ(t, ψ) . 

It is clear that lim S Φ(t, ψ) = SΦ(t, ψ), ↓0

∀(t, ψ) ∈ [0, T ] × C.

We have the following lemma. 1,2 Lemma 5.4.2 For any Φ ∈ Clip ([0, T ] × C) such that Φ can be smoothly extended on [0, T ] × (C ⊕ B), we have for every φ, ϕ ∈ C, v, w ∈ n , and t ∈ [0, T ], (5.35) lim D Φ(t, ψ)(φ + v1{0} ) = DΦ(t, ψ)(φ + v1{0} ) →0

and lim D 2 Φ(t, ψ)(φ+v1{0} , ϕ+w1{0} ) = D2 Φ(t, ψ)(φ+v1{0} , ϕ+w1{0} ). (5.36)

→0

Proof. Note that the function Φ can be extended from [0, T ] × C to [0, T ] × (C ⊕ B). Let us denote by Φ¯ the smooth extension of Φ on [0, T ] × (C ⊕ B). It is clear that lim ∆ Φ(t, ψ)(φ + v1{0} ) = Φ¯(1) (t, ψ)(φ + v1{0} )

→0

=

  d (1) Φ (t, ψ + (φ + v1{0} ) , d =0

282

5 Discrete Approximations

¯ with respect to its where Φ¯(1) denotes the first-order Gˆ ateau derivative of Φ second variable ψ ∈ C. Since Φ is smooth, then the Gˆ ateau derivative and the ¯ coincide and are a continuous extension of the DΦ, Fr´echet derivative of Φ the Fr´echet derivative of Φ. In other words, we have ¯ ψ)(φ + v1{0} ) = DΦ(t, ψ)(φ + v1{0} ). DΦ(t, By the uniqueness of the linear continuous extension, we have lim D Φ(t, ψ)(φ + v1{0} )

(5.37)

→0

¯ ψ)(φ + v1{0} ) = lim D Φ(t, →0

= DΦ(t, ψ)(φ + v1{0} ).

(5.38)

Similarly, the same argument can be used for the second-order finite difference approximation D 2 Φ(t, ψ)(φ + v1{0} , ϕ + w1{0} ) =

Φ(t, ψ + (φ + v1{0} )) − Φ(t, ψ) 2 Φ(t, ψ − (ϕ + w1{0} )) − Φ(t, ψ) + . 2

Therefore, lim D 2 Φ(t, ψ)(φ + v1{0} , ϕ + w1{0} ) = D2 Φ(t, ψ)(φ + v1{0} , ϕ + w1{0} ).

→0

This proves the lemma.

2

Let , η > 0. The corresponding discrete version of (5.32) is given by 0 = αVM (t, ψ) − ∆η VM (t, ψ)  − sup S (VM )(t, ψ) + D VM (t, ψ)(φ + (f (t, ψ, u)1{0} )) u∈U

=

m 1

2

D 2 VM (t, ψ)(g(t, ψ, u)ej 1{0} ), g(t, ψ, u)ej 1{0} )

i=1

+(L(t, ψ, u) ∧ M ) . Substituting the terms ∆η VM (t, ψ),

S (VM )(t, ψ),

D VM (t, ψ)(φ + (f (t, ψ, u)1{0} )), and

D 2 VM (t, ψ)(g(t, ψ, u)ej 1{0} ), g(t, ψ, u)ej 1{0} )

5.4 Finite Difference Approximation

283

into the equation above, we have equivalently, αVM (t, ψ) =

VM (t, ψ˜ ) − VM (t, ψ) VM (t + η, ψ) − VM (t, ψ) +  η  VM (t, ψ + (f (t, ψ, u)1{0} )) − VM (t, ψ) + sup  u∈U m 1  VM (t, ψ + (g(t, ψ, u)ei 1{0} )) − VM (t, ψ) + 2 i=1 2

VM (t, ψ − (g(t, ψ, u)ei 1{0} )) − VM (t, ψ) + 2 +(L(t, ψ, u) ∧ M ) . (5.39)

Rearranging terms, we obtain  VM (t, ψ˜ ) VM (t + η, ψ) VM (t, ψ + (f (t, ψ, u)1{0} )) + + sup  η  u∈U m  VM (t, ψ + (g(t, ψ, u)ei 1{0} )) + VM (t, ψ − (g(t, ψ, u)ei 1{0} )) 1 + 2 i=1 2

2 1 m + + 2 + α VM (t, ψ) = 0. + (L(t, ψ, u) ∧ M ) − (5.40)  η  Since the term ( 2 +  sup u∈U

2

+

1 η

1 +

1 η

+

m 2

+ α) is always positive, (5.39) is equivalent to

m 2

VM (t, ψ˜ ) VM (t, ψ + (f (t, ψ, u)1{0} )) +   +α

m 

VM (t, ψ + (g(t, ψ, u)ei 1{0} )) + VM (t, ψ − (g(t, ψ, u)ei 1{0} )) 2 i=1

VM (t + η, ψ) + (L(t, ψ, u) ∧ M ) − VM (t, ψ) = 0. + (5.41) η +

1 2

Let Cb ([0, T ] × (C ⊕ B)) denote the space of bounded continuous functions Φ from [0, T ] × (C ⊕ B) to  equipped with the sup-norm · Cb defined by Φ Cb =

sup (t,φ⊕v1{0} )

|Φ(t, φ ⊕ v1{0} )|,

∀Φ ∈ Cb ([0, T ] × (C ⊕ B)). In the following we make necessary preparations for establishing the fixed point for an appropriate mapping as the limiting point of finite difference approximation. Define a mapping SM : (0, 1)2 × [0, T ] × C ×  × Cb ([0, T ] × (C ⊕ B)) →  as follows:

284

5 Discrete Approximations

SM (, η, t, ψ, x, Φ)  Φ(t, ψ˜ ) Φ(t + η, ψ) =  sup +  η u∈U Φ(t, ψ + (f (t, ψ, u)1{0} )) + + (L(t, ψ, u) ∧ M )  m 1  Φ(t, ψ + (g(t, ψ, u)ei 1{0} )) + Φ(t, ψ − (g(t, ψ, u)ei 1{0} )) + 2 i=1 2

2 1 m + + 2 + α x. (5.42) −  η  Then, (5.39) is equivalent to SM (, η, t, ψ, VM (t, ψ), VM ) = 0. Moreover, note that the coefficient of x in SM is negative. This implies that SM is monotone: that is, for all x, y ∈ , , η ∈ (0, 1), t ∈ [0, T ], ψ ∈ C, and Φ ∈ Cb ([0, T ] × (C ⊕ B)) SM (, η, t, ψ, x, Φ) ≤ SM (, η, t, ψ, y, Φ)

whenever x ≥ y.

Definition 5.4.3 The scheme SM is said to be consistent if for every t ∈ [0, T ], ψ ∈ C ⊕ B, and for every test function Φ ∈ [0, T ] × (C ⊕ B) such that 1,2 ([0, T ] × (C ⊕ B)) ∩ D(S), Φ ∈ Clip αΦ(t, ψ) − HM (t, ψ, Φ(t, ψ), ∂t Φ(t, ψ), DΦ(t, ψ), D2 Φ(t, ψ)) SM (, η, τ, φ, Φ(τ, φ) + ξ, Φ + ξ) . = lim  (τ,φ)→(t,ψ), ,η↓0, ξ→0 In the above definition, Φ+ξ should be understood to be a real-valued function defined on [0, T ] × C such that (Φ + ξ)(t, ψ) = Φ(t, ψ) + ξ,

∀(t, ξ) ∈ [0, T ] × C.

Lemma 5.4.4 The scheme SM is consistent. 1,2 Proof. Let Φ ∈ Clip ([0, T ] × (C ⊕ B)) ∩ D(S). We write

SM (, η, τ, φ, Φ(τ, φ) + ξ, Φ + ξ)   Φ(τ, ψ + (f (τ, ψ, u)1{0} )) + ξ Φ(τ, ψ˜ ) + ξ + = sup   u∈U Φ(t + η, ψ) + ξ +(L(τ, ψ, u) ∧ M ) + η

m 1  Φ(τ, ψ + (g(t, ψ, u)ei 1{0} )) + 2ξ + Φ(τ, ψ − (g(t, ψ, u)ei 1{0} )) + 2 i=1 2 2 1 m −( + + 2 + α)(Φ(τ, φ) + ξ).  η 

5.4 Finite Difference Approximation

285

Sending ξ → 0, τ → t, φ → ψ, , η → 0, we have αΦ(t, ψ) − HM (t, ψ, Φ(t, ψ), ∂t Φ(t, ψ), DΦ(t, ψ), D2 Φ(t, ψ)) SM (, η, τ, φ, Φ(τ, φ) + ξ, Φ + ξ) . = lim  (τ,φ)→(t,ψ), ,η,↓0,ξ→0 2

This completes the proof.

Using (5.41), we see that the equation SM (, η, t, ψ, Φ(t, ψ), Φ) = 0 is equivalent to the equation  1 Φ(t, ψ˜ ) Φ(t, ψ + (f (t, ψ, u)1{0} )) Φ(t, ψ) = sup 2 1 m +   u∈U + η + 2 + α 1  Φ(t, ψ + (g(t, ψ, u)ei 1{0} )) + Φ(t, ψ − (g(t, ψ, u)ei 1{0} )) 2 i=1 2

Φ(t + η, ψ) + (L(t, ψ, u) ∧ M ) . (5.43) + η m

+

For each  > 0 and η > 0, we define an operator T ,η on Cb ([0, T ] × (C ⊕ B)) as follows: T ,η Φ(t, ψ)  1 ≡ sup 2 1 u∈U + η +



m 2

Φ(t, ψ˜ ) Φ(t, ψ + (f (t, ψ, u)1{0} )) +   +α

1  W (t, ψ + (g(t, ψ, u)ei 1{0} )) + Φ(t, ψ − (g(t, ψ, u)ei 1{0} )) 2 i=1 2

Φ(t + η, ψ) + L(t, ψ, u) ∧ M . (5.44) + η m

+

Note that to find a Φ ∈ Cb ([0, T ] × (C ⊕ B) that satisfies SM (, η, t, ψ, Φ(t, ψ), Φ) = 0

for each , η > 0 and (t, ψ) ∈ [0, T ] × C.

is equivalent to finding a fixed point of the map T ,η . Lemma 5.4.5 For each  > 0 and η > 0, T ,η is a contraction map. Proof. To prove that T ,η is a contraction, we need to show that there exists 0 < β < 1 such that T ,η Φ1 − T ,η Φ2 Cb ≤ β Φ1 − Φ2 Cb ,

∀Φ1 , Φ2 ∈ Cb ([0, T ] × (C ⊕ B)),

where · Cb is the sup-norm for the space Cb ([0, T ] × (C ⊕ B)). Let us define c ,η by

286

5 Discrete Approximations

c ,η =

2 1 m + + 2 + α.  η 

Note that |T ,η Φ1 (t, ψ) − T ,η Φ2 (t, ψ)|   1  Φ1 (t, ψ˜ ) Φ1 (t, ψ + (f (t, ψ, u)1{0} )) Φ1 (t + η, ψ) + + ≤ sup    η u∈U c ,η

m  Φ1 (t, ψ + (g(t, ψ, u)ei 1{0} )) + Φ1 (t, ψ − (g(t, ψ, u)ei 1{0} )) 1 + 2 i=1 2 Φ2 (t, ψ + (f (t, ψ, u)1{0} )) Φ2 (t + η, ψ) 1 − Φ2 (t, ψ˜ ) + +   η

 m 1  Φ2 (t, ψ + (g(t, ψ, u)ei 1{0} )) + Φ2 (t, ψ − (g(t, ψ, u)ei 1{0} ))  +  . 2 2 i=1

This implies that for all (t, ψ) ∈ [0, T ] × C, 2

1 η

+



|T ,η Φ1 (t, ψ) − T ,η Φ2 (t, ψ)| ≤

+

m 2



c ,η

Φ1 − Φ2 Cb .

(5.45)

In addition, note that 2

+

1 η

+

c ,η

m 2

=

Let β ,η =

2

+

1 η

+

m 2

2

+

1 η

+

m 2



2

+

1 η

+

m 2

c ,η

< 1.

.

Therefore, T ,η Φ1 − T ,η Φ2 Cb ≤ β ,η Φ1 − Φ2 Cb . This proves that the operator T ,η is a contraction map.

2

Definition 5.4.6 The scheme SM is said to be stable if for every , η ∈ (0, 1), there exists a bounded solution Φ ,η ∈ Cb ([0, T ] × (C ⊕ B)) to the equation SM (, η, t, ψ, Φ(t, ψ), Φ) = 0,

(5.46)

with the bound independent of  and η. Lemma 5.4.7 The scheme SM is said to be stable. Proof. By the Banach fixed-point theorem, the strict contraction T ,η has a unique fixed point that we denote by ΦM ,η . Given any function Φ0 ∈ Cb ([0, T ]× (C ⊕ B))b , we construct a sequence as follows: Φk+1 = T ,η Φk for k ≥ 0. It is clear that

5.4 Finite Difference Approximation

287

lim Φk = ΦM ,η .

k→∞

Moreover, note that Φk+1 (t, ψ)  1 = sup 2 1 u∈U + η +



m 2

Φk (t, ψ˜ ) Φk (t, ψ + (f (t, ψ, u)1{0} )) +   +α

1  Φk (t, ψ + (g(t, ψ, u)ei 1{0} )) + Φk (t, ψ − (g(t, ψ, u)ei 1{0} )) 2 i=1 2

Φk (t + η, ψ) + (L(t, ψ, u) ∧ M ) + η 1 ≤ β ,η Φk Cb + M. (5.47) c ,η m

+

In addition, we have β ,η =

c ,η − ρ < 1. c ,η

This implies that c ,η − ρ 1 Φk Cb + M. c ,η c ,η

Φk+1 Cb ≤

(5.48)

From (5.48), we deduce that Φk+1 Cb ≤

c ,η − ρ c ,η

k+1

i k M  c ,η − ρ . c ,η i=0 c ,η

Φ0 Cb +

Taking the limit as k → ∞, we obtain ΦM ,η Cb ≤

M 1 M . · c,η −α = c ,η 1 − c α ,η 2

This implies the stability of the scheme SM .

Theorem 5.4.8 Let ΦM ,η denote the solution to (5.46). Then, as , η ↓ 0, the sequence ΦM converges uniformly on [0, T ]×C to the unique viscosity solution ,η VM of (5.32). Proof. Define

Φ∗M (t, ψ) = Φ∗M (t, ψ) =

lim sup (τ,φ)→(t,ψ), ,η↓0

lim inf

ΦM ,η (τ, φ),

(τ,φ)→(t,ψ), ,η↓0

ΦM ,η (τ, φ).

We claim that Φ∗M and Φ∗M are the viscosity subsolution and supersolution of (5.32), respectively. To prove this claim, we only consider the case for Φ∗M . The argument for that of Φ∗M is similar. We want to show that

288

5 Discrete Approximations

αΓ (t, ψ) − HM (t, ψ, Γ (t, ψ), ∂t Γ (t, ψ), DΓ (t, ψ), D2 Γ (t, ψ)) ≤ 0 1,2 for any test function Γ ∈ Clip ([0, T ] × (C ⊕ B)) ∩ D(S) such that (t, ψ) is a strictly local maximum of Φ∗M − Γ . Without loss of generality, we may assume that Φ∗M ≤ Γ and Φ∗M (t, ψ) = Γ (t, ψ), and because of the stability of our scheme, we can also assume that Γ ≥ 2 sup ,η ΦM ,η outside of the ball B((t, ψ), l) centered at (t, ψ) with radius l, where l > 0 is such that

Φ∗M (τ, φ) − Γ (τ, φ) ≤ 0 = Φ∗M (t, ψ) − Γ (t, ψ)

for (τ, φ) ∈ B((t, ψ), l).

This implies that there exist sequences k > 0, ηk > 0, and (τk , φk ) ∈ [0, T ] × (C ⊕ B) such that as k → ∞, we have ∗ k → 0, ηk → 0, τk → t, φk → ψ, ΦM k ,ηk (τk , φk ) → ΦM (t, ψ),

(5.49)

M and (τk , φk ) is a global maximum ΦM k ,ηk − Γ . Denote γk = Φ k ,ηk (τk , φk ) − Γ (τk , φk ). Obviously, γk → 0 and

ΦM k ,ηk (τ, φ) ≤ Γ (τ, φ) + γk ,

∀(τ, φ) ∈ [0, T ] × (C ⊕ B).

(5.50)

We know that M SM (k , ηk , τk , φk , ΦM k ,ηk (τk , φk ), Φ k ,ηk ) = 0.

The monotonicity of SM implies SM (k , ηk , τk , φk , Γ (τk , φk ) + γk , Γ (τk , φk ) + γk ) M ≤ SM (k , ηk , τk , φk , ΦM k ,ηk (τk , φk ), Φ k ,ηk ) = 0.

(5.51)

Therefore, M SM (k , ηk , τk , φk , ΦM k ,ηk (τk , φk ), Φ k ,ηk ) ≤ 0, k→∞ k

lim

so αΦ∗M (t, ψ) − HM (t, ψ, Φ∗M (t, ψ), Dt Γ (t, ψ), DΓ (t, ψ), D2 Γ (t, ψ)) M SM (k , ηk , τk , φk , ΦM k ,ηk (τk , φk ), Φ k ,ηk ) = lim k→∞ k ≤ 0. This proves that Φ∗M : [0, T ] × C →  is a viscosity subsolution of (5.32) and, similarly, we can prove that Φ∗M is a viscosity supersolution. By the comparison principle (see Theorem 3.6.1 in Chapter 3), we can get that Φ∗M (t, ψ) ≥ Φ∗M (t, ψ),

∀(t, ψ) ∈ [0, T ] × C.

(5.52)

5.4 Finite Difference Approximation

289

On the other hand, by the definition, of Φ∗M and Φ∗M , it is easy to see that Φ∗M (t, ψ) ≤ Φ∗M (t, ψ),

∀(t, ψ) ∈ [0, T ] × C.

Combined with (5.52), the above implies Φ∗M (t, ψ) = Φ∗M (t, ψ),

∀(t, ψ) ∈ [0, T ] × C.

Since Φ∗M is a viscosity supersolution and Φ∗M is a viscosity subsolution, they are also viscosity solutions of (5.32). Now, using the uniqueness of the viscosity solution (5.32), we see that VM = Φ∗M = Φ∗M . Therefore, we conclude that 2 the sequence (ΦM ,η ) ,η converges locally uniformly to VM as desired. 5.4.2 Discretization of Segment Functions Although we are able to prove convergence in theory of the finite difference scheme described in the previous subsection, it is clear that we still will not be able to extract any useful computational algorithm for the value function V : [0, T ] × C →  from it. To overcome this shortcoming, we propose to further discretize segment functions in C as follows. We again use the same discretization notation in Section 5.1 and define for each N = 1, 2, · · · and any x = (x(−N h), x((−N + 1)h), . . . , x(−h), x(0)) ∈ (n )N +1 ˜ ∈ C = C([−r, 0]; n ) by making the linear a continuous-time function x interpolation between the time-space points (kh, x(kh)) and ((k + 1)h, x((k + 1)h). Therefore, if θ ∈ [−r, 0] and θ = kh for some k = −N, −N + 1, . . . , −1, 0, ˜ (kh) = x(kh). If θ ∈ [−r, 0] is such that kh < θ < (k + 1)h for some x k = −N, −N + 1, . . . , −1, then ˜ (θ) = x(kh) + x

$\dfrac{x((k+1)h) - x(kh)}{h}\,(\theta - kh).$

For the functions f : [0, T ] × C × U → n , g : [0, T ] × C × U → n×m , L : [0, T ] × C × U → , and Ψ : C → , we define respectively the functions ˆ : I × SN +1 × U → , fˆ : I × SN +1 × U → n , gˆ : I × SN +1 × U → n×m , L N +1 ˆ →  as follows: and Ψ : S fˆ(tN , πφ, u) = f (t, φ, u), ˆ L(t N , πφ, u) = L(t, φ, u),

gˆ(tN , πφ, u) = g(t, φ, u),

and Ψˆ (πφ) = Ψ (φ),

∀(t, φ, u) ∈ [0, T ]×C×U,

(N +1

is the truncated point-mass projection function defined where π : C → S by πφ = (φ(−N h), φ((−N + 1)h), . . . , φ(−h), φ(0)). Using these notations, we define the discrete version of S Φ, ∂t,η Φ, D Φ, and D 2 Φ in Section 5.4.1 by

290

5 Discrete Approximations

˜ ˜ Φ(t N + η, πψ) − Φ(tN , πψ) ˜ ˆ , ∂t,η Φ(t N , πψ) = η ˜ D Φ(t N , πψ)(πφ ⊕ v) =

˜ ˜ Φ(t N , πψ + (πφ ⊕ v)) − Φ(tN , πψ) , 

˜ D 2 Φ(t N , πψ)(πφ ⊕ v, πϕ ⊕ w) =

ˆ ˆ Φ(t N , πψ + (πφ ⊕ v)) − Φ(tN , πψ) 2  ˆ ˆ Φ(t , πψ − (πφ ⊕ v)) − Φ(t N N , πψ) + , 2 

and

˜ ˆ ˆ Φ(t N , φ ) − Φ(tN , πφ) ˜ S (Φ)(t , N , πφ) =  n where, φ, ϕ ∈ C and v, w ∈  , πφ ⊕ v = (φ(−N h), φ((−N + 1)h), . . . , φ(−h), φ(0) + v). Therefore, we define S (Φ)(t, ψ) =

 1 Φ(t, ψ˜ ) − Φ(t, ψ) .  (N )

The discrete version of the mappings SM and T ,η are given as SM : (0, 1)2 × [0, T N ] × (S)N +1 ×  × B([0, tN ] × (S)N +1 ⊕ n )) →  as follows: (N )

ˆ SM (, η, tN , πψ, x, Φ) ˆ ˆ Φ(tN , π ψ˜ ) Φ(t N + η, πψ) + =  max u∈U  η ˆ Φ(tN , πψ ⊕ f (tN , πψ, u)) + + (L(tN , πψ, u) ∧ M )  m ˆ ˆ 1  Φ(t N , πψ ⊕ (g(tN , πψ, u)) + Φ(tN , πψ $ (g(tN , πψ, u)) + 2 i=1 2

2 1 m + + 2 + α x. (5.53) −  η  and (N ) ˆ Φ(tN , πψ) T ,η  ˆ ˆ ˆ 1 Φ(tN , π ψ˜ ) Φ(t N , πψ ⊕ (f (tN , πψ, u))) ≡ max 2 1 m + u∈U   + η + 2 + α

ˆ ˆ 1  Φ(t g (tN , πψ, u))) + Φ(t g (tN , πψ, u))) N , πψ ⊕ (ˆ N , πψ $ (ˆ 2 2 i=1 

ˆ Φ(t N + η, πψ) ˆ + L(t + , πψ, u) ∧ M . (5.54) N η m

+

5.4 Finite Difference Approximation

291

The following two lemmas can be proven in ways similar to that of Lemma 5.4.5 and Lemma 5.4.7. The proofs are omitted here. (N )

Lemma 5.4.9 For each  > 0 and η > 0, T ,η (N )

Lemma 5.4.10 The scheme SM

is a contraction map.

is monotone, consistent, and stable.

We have the following convergence result. Theorem 5.4.11 For each (t, ψ) ∈ [0, T ] × C, (N )

lim VM (tN , π (N ) ψ) = VM (t, ψ).

N →∞

Proof. Since tN → t and π (N ) ψ → ψ (in C) as N → ∞, by the definition 2 of VM : [0, T ] × C → , the convergence follows very easily. 5.4.3 A Computational Algorithm Based on the results obtained in the last two subsections, we can construct a computational algorithm for each N ∈ ℵ to obtain a numerical solution. For example, one algorithm can be as follows: Step 0. Let (t, ψ) ∈ [0, T ] × C. (i) Compute tN and πψ. (ii) Choose any function Φ(0) ∈ Cb ([0, T ] × C ⊕ B). (iii) Compute Φˆ(0,N ) (tN , π (N ) ψ) = Φ(0) (π (N ) ψ). Step 1. Pick the starting values for (1), η(1). For example, we can choose (1) = 10−2 , η(1) = 10−3 . Step 2. For the given , η > 0, compute the function (1,N ) Φˆ (1),η(1) ∈ Cb ([0, T ] × (S(N) )N +1 )

by the formula

(0,N ) ˆ(1,N ) = T (N ) , Φ (1),η(1) (1),η(1) Φ

(N )

where T (1),η(1) , which is defined on Cb ([0, T ] × (S(N) )N +1 ), is given by (5.54). Step 3. Repeat Step 2 for i = 2, 3, . . . using (i,N ) (N ) (i−1,N ) Φˆ (1),η(1) (tN , πψ) = T (1),η(1) Φ (1),η(1) (tN , πψ).

Stop the iteration when (i+1,N ) (i,N ) |Φˆ (1),η(1) (t, ψ) − Φˆ (1),η(1) (t, ψ)| ≤ δ1 ,

where δ1 is a preselected number small enough to achieve the accuracy we ˆ (1),η(1) (tN , πψ). want. Denote the final solution by Φ

292

5 Discrete Approximations

Step 4. Choose two sequences of (k) and η(k), such that lim (k) = lim η(k) = 0.

k→∞

k→∞

For example, we may choose (k) = η(k) = 10−(2+k) . Now, repeat Step 2 and Step 3 for each (k), η(k) until (i,N ) (i,N ) |Φˆ (k+1),η(k+1) (tN , πψ) − Φˆ (k),η(k) (tN , πψ)| ≤ δ2 ,

where δ2 is chosen to obtain the expected accuracy.
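As a sanity check of the overall fixed-point strategy, the sketch below runs the analogous contraction iteration for a drastically simplified problem with no delay, so the segment ψ reduces to a scalar x and the shift term S_ε drops out. The functions f, g, L, Ψ and the small control grid are illustrative assumptions; the Step 2-Step 3 structure is the same: iterate the operator until successive iterates differ by less than δ₁.

```python
import numpy as np

# Toy analogue of the algorithm: contraction iteration for a scalar,
# no-delay truncated problem. All model data below are illustrative.
alpha, M = 0.5, 10.0
eps, eta = 0.1, 0.05
T = 1.0
xs = np.linspace(-2.0, 2.0, 41)
ts = np.arange(0.0, T + eta / 2, eta)
controls = [-1.0, 0.0, 1.0]

f = lambda x, u: -x + u
g = lambda x: 0.5 + 0.1 * x**2
L = lambda x, u: x**2 + 0.1 * u**2
Psi = lambda x: x**2

c = 1.0 / eps + 1.0 / eta + 1.0 / eps**2 + alpha   # normalizing constant

Phi = np.zeros((len(ts), len(xs)))
Phi[-1] = np.minimum(Psi(xs), M)                   # terminal condition

def apply_T(Phi):
    out = Phi.copy()
    for k in range(len(ts) - 1):                   # terminal slice stays fixed
        best = None
        for u in controls:
            val = (np.interp(xs + eps * f(xs, u), xs, Phi[k]) / eps
                   + 0.5 / eps**2 * (np.interp(xs + eps * g(xs), xs, Phi[k])
                                     + np.interp(xs - eps * g(xs), xs, Phi[k]))
                   + Phi[k + 1] / eta
                   + np.minimum(L(xs, u), M)) / c
            best = val if best is None else np.maximum(best, val)
        out[k] = best
    return out

delta1 = 1e-6
for i in range(20000):                             # Step 2-Step 3 loop
    nxt = apply_T(Phi)
    if np.max(np.abs(nxt - Phi)) < delta1:
        break
    Phi = nxt
print(i, float(Phi[0, len(xs) // 2]))              # iterations, value at (0, 0)
```

The iteration contracts with factor roughly (c − α)/c per sweep, mirroring the bound β established for the infinite-dimensional operator.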

5.5 Conclusions and Remarks

This chapter presents three different discrete approximation schemes for the viscosity solution of the infinite-dimensional HJBE: the semidiscrete scheme, the Markov chain approximation, and the finite difference approximation. The basic idea behind the semidiscrete scheme and the Markov chain approximation is to discretize the time and/or space variables of the controlled SHDE and the objective functional. Using the discrete version of Bellman's dynamic programming principle, the value functions of the discrete optimal control problems can be obtained and are shown to converge to the value function of the original optimal classical control problem. These two discrete approximations do not deal with the infinite-dimensional HJBE at all. The finite difference method, however, provides a discretization scheme for approximating the infinite-dimensional HJBE directly. A contraction map is set up so that the value function of the optimal classical control problem can be approximated by the fixed point of the contraction map. The finite difference method can be further discretized temporally and spatially for computational convenience, and a computational algorithm is provided based on the finite difference scheme obtained.

One important area of SHDEs that is not addressed in this monograph and that requires more research effort is the theory and applications of nonlinear filtering. We mention here some related computational results on this subject by Chang [Cha87] and, more recently, by Calzolari et al. [CFN06, CFN07].

6 Option Pricing

A contingent claim (or option) is a contract giving the buyer of the contract (or simply the buyer) the right to buy from or sell to the contract writer (or simply the writer) a share of an underlying stock at a predetermined price $q > 0$ (called the strike price) at or prior to a prespecified time $T > 0$ (called the expiration date) in the future. The right to buy (respectively, to sell) a share of the stock is called a call (respectively, a put) option. The European (call or put) option can only be exercised at the expiration date $T$, but the American (call or put) option can be exercised at any time prior to or at the expiration date.

The pricing problem of a contingent claim is, briefly, to determine the fee (called the rational price) that the writer should receive from the buyer for the rights of the contract and also to determine the trading strategy the writer should use to invest this fee in a $(B,S)$-market in such a way as to ensure that the writer will be able to cover the option if and when it is exercised. The fee should be large enough that the writer can, with riskless investing, cover the option, but small enough that the writer does not make an unfair (i.e., riskless) profit.

Throughout this chapter, any financial market consisting of one (riskless) bank account and one (risky) stock account will be referred to as a $(B,S)$-market, although the specifics of each $(B,S)$-market may depend on the characteristics of the bank account and the underlying stock along with their associated parameters. The characterization of the rational pricing function of the contingent claim therefore depends on the specifics of the $(B,S)$-market considered.

The pricing of European and American options in the continuous-time $(B,S)$-market has been a subject of extensive research in recent years. Explicit results for the European call option have been obtained (see, e.g., Black and Scholes [BS73], Harrison and Kreps [HK79], Harrison and Pliska [HP81], Merton [Mer73, Mer90], Shiryaev et al. [SKKM94]) for the idealized Black-Scholes market, often consisting of a riskless bank account $\{B(t),\ t\ge 0\}$ that grows continuously with a constant interest rate $\lambda > 0$, that is,

dB(t) = λB(t) dt,  B(0) = x,

and a stock whose price process {S(t), t ≥ 0} satisfies the following linear stochastic differential equation:

dS(t) = µS(t) dt + σS(t) dW(t),  S(0) = y,

or, equivalently, the geometric Brownian motion

S(t) = S(0) exp((µ − σ²/2)t + σW(t)),  ∀t ≥ 0.

In the above, W = {W(t), t ≥ 0} is a one-dimensional standard Brownian motion defined on a complete filtered probability space (Ω, F, P, F), and µ and σ are positive constants that represent, respectively, the stock appreciation rate and the stock volatility rate.
The objective of the standard European call option under the Black-Scholes market is to determine a rational price Q(t, y) (i.e., the fee the writer should receive) when the contract is sold to the buyer at time t ∈ [0, T] given the stock price S(t) = y. It is now well known that the pricing function Q(t, y), (t, y) ∈ [0, T] × [0, ∞), can be described by the following expression:

Q(t, y) = Ẽ[e^{−λ(T−t)} (S(T) − q)^+ | S(t) = y]
        = Ẽ[e^{−λ(T−t)} (S(T − t) − q)^+ | S(0) = y],

where q is the strike price, Ẽ is the expectation corresponding to the probability measure P̃ under which the process {W̃(t), t ≥ 0} described by

W̃(t) = W(t) + ((µ − λ)/σ) t,  t ≥ 0,

is a new standard Brownian motion, and

(S(T) − q)^+ ≡ max{S(T) − q, 0}

is the payoff function that represents the profit S(T) − q for the buyer at the expiration time T. This payoff function can be interpreted as follows. If S(T) > q, then the buyer shall exercise the option by buying (from the writer) a share of the stock at the strike price q and immediately selling it on the open market at the price S(T), thus making an instant profit of S(T) − q. On the other hand, if S(T) ≤ q, then the buyer shall not exercise the option, and therefore the payoff will be zero.
It has also been shown (see, e.g., [SKKM94]) that the pricing function Q(t, y) may be expressed via the following Black-Scholes formula:

Q(t, y) = (e^{−λ(T−t)}/√(2π)) ∫_{−∞}^{∞} Λ(y exp(σu√(T − t) + (λ − σ²/2)(T − t))) e^{−u²/2} du
        = (e^{−λ(T−t)}/y) ∫_{0}^{∞} Λ(u) ϕ(T − t, u/y, λ − σ²/2, σ) du,

where

Λ(y) = (y − q)^+

is the payoff function, with

ϕ(t, y, a, b) = (1/(by√(2πt))) exp(−(log y − at)²/(2b²t)).
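As a numerical sanity check (not part of the text), the Gaussian-integral form of Q(t, y) above can be evaluated by simple quadrature and compared against the familiar closed-form Black-Scholes price. The trapezoidal rule, integration cutoff, and parameter values below are illustrative choices.

```python
import math

def bs_call_closed_form(y, q, lam, sigma, tau):
    # Classical Black-Scholes European call price with rate lam, maturity tau
    d1 = (math.log(y / q) + (lam + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return y * N(d1) - q * math.exp(-lam * tau) * N(d2)

def bs_call_integral(y, q, lam, sigma, tau, n=4000, lim=8.0):
    # Trapezoidal evaluation of the Gaussian-integral representation:
    # Q = e^{-lam tau} (2 pi)^{-1/2} * integral of
    #     Lambda(y * exp(sigma*u*sqrt(tau) + (lam - sigma^2/2)*tau)) e^{-u^2/2} du
    h = 2.0 * lim / n
    total = 0.0
    for i in range(n + 1):
        u = -lim + i * h
        s_T = y * math.exp(sigma * u * math.sqrt(tau) + (lam - 0.5 * sigma ** 2) * tau)
        payoff = max(s_T - q, 0.0)            # Lambda(y) = (y - q)^+
        w = 0.5 if i in (0, n) else 1.0       # trapezoid end-point weights
        total += w * payoff * math.exp(-0.5 * u ** 2)
    return math.exp(-lam * tau) * total * h / math.sqrt(2.0 * math.pi)
```

For y = q = 100, λ = 0.05, σ = 0.2, and τ = 1, the two routines agree to several decimal places.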

Equivalently, if the rational pricing function for the European option Q : [0, T] × ℝ → ℝ is sufficiently smooth, then it satisfies the following well-known Black-Scholes equation:

λQ(t, y) = ∂_t Q(t, y) + λy ∂_y Q(t, y) + (1/2)σ²y² ∂_y² Q(t, y),  (t, y) ∈ [0, T] × [0, ∞),

with the terminal condition Q(T, y) = Λ(y) = (y − q)^+.
On the other hand, based on the same Black-Scholes financial market described above, it has been shown by many authors (see, e.g., Myneni [Myn92] and Karatzas [Kar88]) that the rational pricing function Q(t, y) for the standard American call option is the value function of the following optimal stopping problem:

Q(t, y) = sup_{τ ∈ T_t^T} Ẽ[e^{−λ(τ−t)} (S(τ) − q)^+ | S(t) = y]
        = sup_{τ ∈ T_0^{T−t}} Ẽ[e^{−λ(τ−t)} (S(τ − t) − q)^+ | S(0) = y],
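The sup-over-stopping-times characterization above is what makes American pricing computationally harder than the European case. A standard way to see it in action (a textbook binomial sketch, not an algorithm from this chapter) is Cox-Ross-Rubinstein backward induction, where each node takes the maximum of immediate exercise and the discounted continuation value:

```python
import math

def crr_american(y, q, lam, sigma, T, n=500, put=False):
    # Price an American option on a CRR binomial tree by backward induction.
    # The max(exercise, continuation) at each node mirrors the sup over
    # stopping times in the optimal stopping formulation.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(lam * dt) - d) / (u - d)     # risk-neutral up-probability
    disc = math.exp(-lam * dt)
    payoff = (lambda s: max(q - s, 0.0)) if put else (lambda s: max(s - q, 0.0))
    # terminal layer of option values
    vals = [payoff(y * u ** j * d ** (n - j)) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        vals = [max(payoff(y * u ** j * d ** (i - j)),
                    disc * (p * vals[j + 1] + (1 - p) * vals[j]))
                for j in range(i + 1)]
    return vals[0]
```

On a non-dividend-paying stock the American call value coincides with the European one, while the American put carries a strict early-exercise premium.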

where τ ∈ T_t^T is a stopping time. For both the European and American call options, the pricing function Q(t, y) for the Black-Scholes market given above is explicit and computable. However, its assumptions of constant stock appreciation and volatility rates are not always satisfied by real-life financial markets, as the probability distribution of an equity often has a fatter left tail and a thinner right tail than the geometric Brownian motion model of stock prices (see [Bat96]). In order to better model real-life options, various generalizations of the model have been made. These include the popular stochastic volatility models (see, e.g., Hull [Hul00], Fouque et al. [FPS00], and the references given therein) and stock markets with hereditary structure or delayed responses (see, e.g., Chang and Youree [CY99, CY07], Arriojas et al. [AHMP07], and Kazmerchuk et al. [KSW04a, KSW04b, KSW04c]), in which various versions of stochastic hereditary differential equations (SHDEs) were introduced to describe the dynamics of the bank account and the stock price. This chapter treats both European and American call options for a (B, S)-financial market in which the bank account follows a linear functional differential equation (see (6.1)) and the stock price follows a general nonlinear SHDE


with bounded memory (see (6.4)). For the European call option, the material presented here is based on results obtained in a recent paper of Chang and Youree [CY07]. For the American call option, the results presented here are consequences of the optimal stopping results obtained in Chapter 4. Under the (B, S)-market described by (6.1) and (6.4), and with a very general path-dependent payoff function Ψ(S_T) for the European option and Ψ(S_{τ∧T}) for the American option at expiration time T or exercise time τ ∧ T, we derive an infinite-dimensional Black-Scholes equation (see (6.28) and (6.29)) for the pricing function of the European call option and an infinite-dimensional HJB variational inequality (HJBVI) (see (6.46) and (6.47)) for the pricing function of the American call option. Both the infinite-dimensional Black-Scholes equation and the HJBVI involve extended Fréchet derivatives of functions on the function space C (see Section 2.2 in Chapter 2 for definitions). Under certain smoothness conditions, it can be shown that the pricing function of the European call option is a classical solution of the infinite-dimensional Black-Scholes equation and the pricing function of the American call option is a classical solution of the infinite-dimensional HJBVI, in the sense that all Fréchet derivatives and the infinitesimal generator of the shift operator of these two pricing functions exist, are continuous, and satisfy, respectively, the infinite-dimensional Black-Scholes equation and the HJBVI pointwise in [0, T] × C. Unfortunately, for a general payoff function, it is not known whether the pricing functions are smooth enough to solve these two infinite-dimensional equations and inequalities in the classical sense.
The main results of this chapter show that the pricing function for the European call option is the unique viscosity solution of the infinite-dimensional Black-Scholes equation, and that the pricing function for the American call option is the unique viscosity solution of the infinite-dimensional HJBVI, as a consequence of the optimal stopping results from Chapter 4. For computational purposes, we also present an algorithm for computing the viscosity solution of the infinite-dimensional Black-Scholes equation; the computational algorithm is adopted from Chang and Youree [CY07]. Although other types of infinite-dimensional Black-Scholes equations and HJBVIs for optimal stopping problems, and their applications to the pricing of American options, have been studied very recently by a few researchers, they either considered a stochastic delay equation of special form (see, e.g., Gapeev and Reiss [GR05a] and [GR05b]) or stochastic equations in Hilbert spaces (see, e.g., Gatarek and Świech [GS99] and Barbu and Marinelli [BM06]). The material presented in this chapter differs from the aforementioned papers in the following significant ways: (i) The segmented solution process {S_t, t ∈ [0, T]} for the stock price dynamics is a strong Markov process in the Banach space C which, equipped with the sup-norm, is not differentiable and is therefore more difficult to handle than any of the Hilbert spaces considered in [GS99] and [BM06]; (ii) the infinite-dimensional Black-Scholes equation and HJBVI that uniquely characterize the pricing functions of the European and American call options involve the extensions DV(t, ψ) and D²V(t, ψ) of the first- and second-order Fréchet


derivatives DV(t, ψ) and D²V(t, ψ) from C* and C† to (C ⊕ B)* and (C ⊕ B)† (see Sections 2.2 and 2.3 of Chapter 2 for definitions of these spaces), respectively; and (iii) the infinite-dimensional Black-Scholes equation and HJBVI also involve the infinitesimal generator SV(t, ψ) of the semigroup of shift operators of the value functions, which does not appear in the special classes of equations treated in the aforementioned papers.
This chapter is organized as follows. Section 6.1 describes the (B, S)-market that possesses a hereditary structure and gives definitions of the contingent claims with a general path-dependent reward function. Examples are given to justify consideration of the path-dependent reward function Ψ : C → [0, ∞). In Section 6.2, we introduce the concept of a self-financing trading strategy in the market. In Section 6.3, a risk-neutral martingale measure is constructed for the (B, S)-market, and the derivations of the pricing functions for European and American call options are given. The infinite-dimensional Black-Scholes equation is derived in Section 6.4, where it is shown that the pricing function for the European call option is its unique viscosity solution. As a consequence of the optimal stopping results in Chapter 4, the infinite-dimensional HJBVI is derived in Section 6.5, once it is recognized that the pricing function is actually the value function of an optimal stopping problem treated in Chapter 4; there it is also stated, without proof, that the pricing function is the unique viscosity solution of the infinite-dimensional HJBVI. Finally, in Section 6.6, we present an approximation algorithm for the solution of the infinite-dimensional Black-Scholes equation by considering a Taylor series solution of the equation. Section 6.7 contains conclusions and supplementary remarks for the chapter.

6.1 Pricing with Hereditary Structure

6.1.1 The Financial Market

To describe the (B, S)-market with hereditary structure, we start by defining appropriate function spaces as follows. Let 0 < r < ∞ be the duration of the bounded memory. Throughout this chapter, we let n = 1 for simplicity and consider C = C[−r, 0], the space of real-valued functions that are continuous on [−r, 0], and C+ = {φ ∈ C | φ(θ) ≥ 0, ∀θ ∈ [−r, 0]}. We again use the following convention, adopted throughout the monograph: If ψ ∈ C[−r, T] and t ∈ [0, T], let ψ_t ∈ C be defined by ψ_t(θ) = ψ(t + θ), θ ∈ [−r, 0].
The model for the (B, S)-market considered herein was first introduced in Chang and Youree [CY99, CY07]. The market is said to have a hereditary structure in the sense that the rates of change of the stock price and the


bank account depend not only on the current price but also on the entire history of prices over a time duration r > 0. Specifically, we assume that the bank (or savings) account B(·) and the stock price S(·) evolve according to the following two equations.

(A) The Bank (Savings) Account

To describe the bank account holdings B(·) = {B(t), t ∈ [−r, T]}, we assume that it satisfies the following linear (deterministic) functional differential equation:

dB(t) = L(B_t) dt,  t ∈ [0, T],   (6.1)

with an initial function φ ∈ C+, where L is a bounded linear functional on C that can and will be represented as the following Lebesgue-Stieltjes integral:

L(φ) = ∫_{−r}^{0} φ(θ) dη(θ)

for some function η : [−r, 0] → ℝ of bounded variation. Equivalently, (6.1) can be written as

dB(t) = (∫_{−r}^{0} B(t + θ) dη(θ)) dt,  t ≥ 0.

It is assumed that the function η : [−r, 0] → ℝ satisfies the following assumption.

Assumption 6.1.1 The function η : [−r, 0] → ℝ is nondecreasing (and hence of bounded variation) with η(0) − η(−r) > 0.

We can and will extend the domain of the above function to the entire ℝ by defining η(θ) = η(−r) for θ ≤ −r and η(θ) = η(0) for θ ≥ 0.
For the purpose of analyzing the discount rate for the bank account, let us assume that the solution process B(φ) = {B(t), t ∈ [−r, T]} of (6.1) with the initial function φ ∈ C+ takes the following form:

B(t) = φ(0)e^{λt},  t ∈ [0, T],   (6.2)

and B_0 = φ ∈ C+. Then the constant λ > 0 satisfies the following equation:

λ = ∫_{−r}^{0} e^{λθ} dη(θ).   (6.3)

Lemma 6.1.2 Under Assumption 6.1.1, there exists a unique λ > 0 that satisfies (6.3).

Proof. Since η : [−r, 0] → ℝ is nondecreasing with η(0) − η(−r) > 0, it is clear for all λ > 0 that

∫_{−r}^{0} θe^{λθ} dη(θ) < 0.

To prove the lemma, define γ : [0, ∞) → [0, ∞) by

γ(λ) = ∫_{−r}^{0} e^{λθ} dη(θ).

Then γ(0) = η(0) − η(−r) > 0 and

γ̇(λ) = ∫_{−r}^{0} θe^{λθ} dη(θ) < 0,  ∀λ > 0.

Therefore, (6.3) has a unique solution λ > 0. □
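The fixed-point characterization in Lemma 6.1.2 is easy to compute numerically. The sketch below (not from the text) takes a discrete measure dη consisting of atoms w_i at lags θ_i — an illustrative assumption — and finds λ by bisection, using the monotonicity argument of the proof.

```python
import math

def effective_rate(atoms, tol=1e-12):
    # Solve lam = gamma(lam) := sum_i w_i * exp(lam * theta_i) by bisection.
    # `atoms` lists pairs (theta_i, w_i) with theta_i in [-r, 0] and w_i >= 0,
    # a discrete stand-in for the measure d(eta) in (6.3).
    gamma = lambda lam: sum(w * math.exp(lam * th) for th, w in atoms)
    lo, hi = 0.0, 1.0
    while gamma(hi) > hi:      # bracket: gamma is nonincreasing, lam increasing
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma(mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a single atom at θ = 0 (no delay) the solver returns λ equal to the atom's weight; placing the atom at θ = −r yields a strictly smaller effective rate, as the proof's monotonicity suggests.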

Remark 6.1.3 The constant λ > 0 specified in Lemma 6.1.2 will be referred to as the effective interest rate of the bank account.

(B) Price Dynamics of the Stock

Let (Ω, F, P, F, W(·)) be a one-dimensional standard Brownian motion, where F = {F(t), t ≥ 0} is the P-augmented natural filtration of W(·); that is,

F(t) = σ(W(s), 0 ≤ s ≤ t) ∨ N,  t ≥ 0,

where N = {A ⊂ Ω | ∃B ∈ F such that A ⊂ B and P(B) = 0}. Assume that the stock price process {S(t), t ∈ [−r, T]} follows the following nonlinear SHDE:

dS(t)/S(t) = f(S_t) dt + g(S_t) dW(t),  t ∈ [0, T],   (6.4)

with initial price function ψ ∈ C+, where f and g are continuous real-valued functions defined on the real Banach space C. At any time t ∈ [0, T], the terms f(S_t) and g(S_t) (referred to, respectively, as the stock appreciation rate and the stock volatility rate at time t) in (6.4) are random functions that depend not only on the current stock price S(t) but also on the stock prices S_t over the time interval [t − r, t]. We make the following assumption on the functions f, g : C → ℝ.

Assumption 6.1.4 The functions f, g : C → ℝ are continuous and satisfy the following local Lipschitz and linear growth conditions: There exist positive constants K_{1,N}, N = 1, 2, ..., and K_2 such that

|φ(0)f(φ) − ϕ(0)f(ϕ)| + |φ(0)g(φ) − ϕ(0)g(ϕ)| ≤ K_{1,N} ‖φ − ϕ‖,  ∀φ, ϕ ∈ C with ‖φ‖, ‖ϕ‖ ≤ N,

and

0 ≤ |φ(0)f(φ)| + |φ(0)g(φ)| ≤ K_2(1 + ‖φ‖),  ∀φ ∈ C.
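Under Assumption 6.1.4, the SHDE (6.4) can be simulated with a straightforward Euler-Maruyama scheme that carries the segment S_t along with the path. The sketch below is illustrative only: the coefficients f_demo and g_demo are hypothetical choices satisfying the stated growth conditions, not coefficients from the text.

```python
import math
import random

def simulate_shde(psi, f, g, T, dt, rng):
    # Euler-Maruyama scheme for dS(t) = S(t) f(S_t) dt + S(t) g(S_t) dW(t).
    # `psi` holds the initial segment S(k*dt - r) for k = 0..m, so r = m*dt;
    # the segment S_t is passed to f and g as the list of the last m+1 values.
    m = len(psi) - 1
    path = list(psi)
    for _ in range(int(round(T / dt))):
        seg = path[-(m + 1):]              # current segment S_t, oldest first
        s = seg[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(s + s * f(seg) * dt + s * g(seg) * dw)
    return path

# Hypothetical hereditary coefficients (illustrative, not from the text):
# constant appreciation rate and a bounded volatility driven by the price
# one full delay-unit in the past.
mu, sigma0 = 0.05, 0.2
f_demo = lambda seg: mu
g_demo = lambda seg: sigma0 * seg[0] / (1.0 + seg[0])
```

Setting the volatility functional to zero recovers the deterministic compounding s ↦ s(1 + µ·dt) at every step, which gives a convenient correctness check for the scheme.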


Note that Assumption 6.1.4 is standard for the existence and uniqueness of a strong solution of (6.4) (see Theorem 1.3.12 in Chapter 1). As a reminder, we recall the meaning of a strong solution {S(t), t ∈ [−r, T]} of (6.4) as follows.

Definition 6.1.5 A process {S(t), t ∈ [−r, T]} in ℝ is said to be a strong solution of (6.4) through the initial function ψ ∈ C+ if it satisfies the following conditions:
1. S(t) = ψ(t), ∀t ∈ [−r, 0].
2. {S(t), t ∈ [0, T]} is F-adapted; that is, S(t) is F(t)-measurable for each t ∈ [0, T].
3. The process {S(t), t ∈ [0, T]} is continuous and satisfies the following stochastic integral equation P-a.s. for all t ∈ [0, T]:

S(t) = ψ(0) + ∫_0^t S(s)f(S_s) ds + ∫_0^t S(s)g(S_s) dW(s).

4. ∫_0^T |S(t)f(S_t)| dt < ∞ and ∫_0^T |S(t)g(S_t)|² dt < ∞, P-a.s.

Definition 6.1.6 The strong solution {S(t), t ∈ [−r, T]} of (6.4) through the initial datum ψ ∈ C+ and on the interval [−r, T] is said to be strongly (or pathwise) unique if, whenever {S̃(t), t ∈ [−r, T]} is another strong solution of (6.4) through the same initial function ψ and on the same time interval, then

P{S(t) = S̃(t), ∀t ∈ [0, T]} = 1.

Under Assumption 6.1.4, it can be shown that for each initial historical price function ψ ∈ C+, the price process {S(t), t ∈ [−r, T]} exists and is non-negative, continuous, and F-adapted.

Theorem 6.1.7 Assume Assumption 6.1.4 holds. Then for each ψ ∈ C+, there exists a unique non-negative strong solution process {S(t), t ∈ [−r, T]} through the initial datum ψ and on the time interval [0, T].

Proof. The existence and uniqueness of the strong solution process {S(t), t ∈ [−r, T]} follow from Theorem 1.3.12 in Chapter 1. Therefore, we only need to show that for each initial S_0 = ψ ∈ C+, S(t) ≥ 0 for each t ∈ [0, T]. To show this, we note that {t ∈ [0, T] | S(t) ≥ 0} ≠ ∅ by sample-path continuity of the solution process and nonnegativity of the initial datum ψ ∈ C+. Now, let τ = inf{t ∈ [0, T] | S(t) ≤ 0}. If τ < T, then dS(t) = 0 for all t ≥ τ, with S(τ) = 0. This implies that S(t) = 0 for all t ∈ [τ, T] and, hence, S(t) ≥ 0 for all t ∈ [0, T]. The same conclusion holds if τ = T. □

Using the convention that for each t ∈ [0, T], S_t(θ) = S(t + θ), ∀θ ∈ [−r, 0], we also consider the associated C-valued segment process {S_t, t ∈ [0, T]}.


Let G = {G(t), t ∈ [0, T]} be the subfiltration of F = {F(t), t ∈ [0, T]} generated by {S(t), t ∈ [−r, T]}; that is,

G(t) = σ(S(s), 0 ≤ s ≤ t) ∨ N,  t ≥ 0.

From Lemma 1.3.4 in Chapter 1, it can be shown that

G(t) = σ(S_s, 0 ≤ s ≤ t) ∨ N,  t ≥ 0.

If τ : Ω → [0, T] is a G-stopping time (i.e., the event {τ ≤ t} ∈ G(t) for each t ∈ [0, T]), denote the sub-σ-algebra G(τ) by

G(τ) = {A ∈ F | A ∩ {τ ≤ t} ∈ G(t), ∀t ∈ [0, T]}.

The following strong Markov property of the C-valued process {S_t, t ∈ [0, T]} also holds under Assumption 6.1.4.

Theorem 6.1.8 Under Assumption 6.1.4, the C-valued process {S_t, t ∈ [0, T]} satisfies the following strong Markov property: For each t ∈ [0, T] and each G-stopping time τ with P{τ ≤ t} = 1, we have

P{S_t ∈ A | G(τ)} = P{S_t ∈ A | S_τ},  ∀A ∈ B(C).

It is clear that when r = 0, (6.4) reduces to the nonlinear stochastic ordinary differential equation

dS(t)/S(t) = f(S(t)) dt + g(S(t)) dW(t),  t ∈ [0, T],   (6.5)

of which the Black-Scholes (B, S)-market is a special case, obtained by taking f(x) ≡ µ and g(x) ≡ σ. In addition, (6.4) is general enough to include the following linear model considered by Chang and Youree [CY99] and the pure discrete delay models considered by Arriojas et al. [AHMP07] and Kazmerchuk et al. [KSW04a, KSW04b, KSW04c]:

dS(t) = M(S_t) dt + N(S_t) dW(t)
      = (∫_{−r}^{0} S(t + θ) dξ(θ)) dt + (∫_{−r}^{0} S(t + θ) dζ(θ)) dW(t),  t ≥ 0  (see [CY99]),

where ξ, ζ : [−r, 0] → ℝ are certain functions of bounded variation;

dS(t) = f(S_t) dt + g(S(t − b))S(t) dW(t),  t ≥ 0  (see [AHMP07]),   (6.6)

where 0 < b ≤ r and f(S_t) = µS(t − a)S(t) or f(S_t) = µS(t − a); and

dS(t)/S(t) = µS(t − a) dt + σ(S(t − b)) dW(t),  t ≥ 0  (see [KSW04a]).


6.1.2 Contingent Claims

As briefly mentioned earlier, a contingent claim or option is a contract conferred by the contract writer on the contract holder, giving the holder the right (but not the obligation) to buy from or to sell to the writer a share of the stock at a prespecified price prior to or at the contract expiry time T > 0. The right of the holder to buy from the writer a share of the stock is called a call option, and the right to sell to the writer a share of the stock is called a put option. If an option is purchased at time t ≥ 0 and is exercised by the holder at time τ ∈ [t, T], then he will receive a payoff of the amount Ψ(S_τ) from the writer, where Ψ : C → [0, ∞) is the payoff function and {S(s), s ∈ [−r, T]} is the price of the underlying stock. To secure such a contract, the contract holder has to pay the writer, at the contract purchase time t, a fee that is mutually agreeable to both parties. The determination of such a fee is called the pricing of the contingent claim.
In determining a fair price for the contingent claim, the writer seeks to invest in the (B, S)-market the fee x received from the holder and to trade over the time interval [t, T] between the bank account and the stock account in an optimal and prudent manner, so that his total wealth will replicate or exceed the payoff Ψ(S_τ) he has to pay to the holder if and when the contingent claim is exercised. The smallest such x is called the fair price of the contingent claim.
Contingent claims can be classified according to their allowable exercise times τ. If the contingent claim can be exercised at any time between the option purchase time t and the option expiry time T (i.e., τ ∈ [t, T]), it is called a contingent claim of American type and will be denoted by ACC(Ψ).
In this case, the exercise time τ ∈ [t, T] of the ACC(Ψ) should satisfy the fundamental assumption that it is an F-stopping time (i.e., τ ∈ M_t^T), since the information on the stock prices up to the exercise time τ is available to both parties and neither the writer nor the holder of the contingent claim can anticipate the future price of the underlying stock. If the contingent claim can only be exercised at the expiry time T (i.e., τ ≡ T), then the contingent claim is called one of European type and will be denoted by ECC(Ψ). We make the following assumption on the payoff function.

Assumption 6.1.9 The payoff function Ψ : C → [0, ∞) is globally convex; that is, for all φ, ϕ ∈ C,

Ψ(αφ + (1 − α)ϕ) ≤ αΨ(φ) + (1 − α)Ψ(ϕ),  α ∈ [0, 1].

We state the following lemma without proof.

Lemma 6.1.10 Assuming the global convexity of the reward function Ψ : C → [0, ∞), Ψ is uniformly Lipschitz on C; that is, there exists a constant K > 0 such that

|Ψ(φ) − Ψ(ϕ)| ≤ K‖φ − ϕ‖,  ∀φ, ϕ ∈ C.


The following are some examples of contingent claims or options that have been traded in option exchanges around the world.

Example 6.1 (Standard Call Option). The payoff function for the standard call option is given by

Ψ(S_s) = max{S(s) − q, 0},

where S(s) is the stock price at time s and q > 0 is the strike price of the standard call option. If the standard American option is exercised by the holder at the F-stopping time τ ∈ T_t^T, the holder will receive the payoff Ψ(S_τ). Of course, the option is called the standard European call option if τ ≡ T. In this case, the amount of the reward will be Ψ(S_T) = max{S(T) − q, 0}.
We offer a financial interpretation of the standard American (or European) call option as follows. If the option has not been exercised and the current stock price is higher than the strike price, then the holder can exercise the option, buy a share of the stock from the writer at the strike price q, immediately sell it on the open market, and make an instant profit of the amount S(τ) − q > 0. If the strike price is higher than the current stock price, then the option is worthless to the holder. In this case, the holder will not exercise the option, and therefore the payoff will be zero.

Example 6.2 (Standard Put Option). The payoff function for the standard put option is given by

Ψ(S_s) = max{q − S(s), 0}.

This is similar to Example 6.1, except that now the holder of the contract has the option to sell to the writer a share of the underlying stock at the prespecified price q > 0, and the writer of the contract has the obligation to buy at this price if and when the contract is exercised at τ ∈ T_t^T. Clearly, the payoff for the standard American put option is Ψ(S_τ) = max{q − S(τ), 0}, and that of the European put option is Ψ(S_T) = max{q − S(T), 0}.
One characteristic of a path-dependent option, whether of American or European type, is that the payoff depends explicitly on the historical prices when the option is exercised. The following are some examples of path-dependent options.

Example 6.3 (Modified Russian Option). The payoff process of a Russian call option can be expressed as

Ψ(S_t) = max{ sup_{s ∈ [0∧(t−r), t]} S(s) − q, 0 }


for some strike price q > 0. The payoff for a Russian put option can easily be written as well. Note that the payoff Ψ(S_τ) of a Russian option depends on the highest price over the time interval [τ − r, τ] if the option is exercised at time τ.

Example 6.4 (Modified Barrier Options). The payoff for the barrier call option is given by

Ψ(S_t) = max{S(t) − q, 0} χ_{{τ(a) ≤ t}}

for some a > q > 0 with a > S(0), and with τ(a) = inf{s ∈ ℝ+ | S(s) ≥ a}. This is similar to the call option of Example 6.1, except that now the stock price has to reach a certain "barrier" level a > q ∨ S(0) for the option to become activated.

Example 6.5 (Modified Asian Option). The payoff for the Asian call option is given by

Ψ(S_t) = max{ (1/r) ∫_{(t−r)∨0}^{t} S(s) ds − q, 0 }.

This is similar to a European call option with strike price q > 0, except that now the "moving-average stock price" (1/r) ∫_{(t−r)∨0}^{t} S(s) ds over the interval [t − r, t] is used in place of the "terminal stock price" S(t).
The last three examples (Examples 6.3-6.5) demonstrate that the payoff Ψ(S_t) depends not only on the stock price S(t) at time t ∈ [0, T] but also explicitly on the prices of the underlying stock over the time interval [t − r, t]. In addition, the payoff function Ψ : C → [0, ∞) is globally convex in all five examples.
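The payoff functionals in Examples 6.1, 6.3, and 6.5 act on the whole segment S_t, which in a discretized setting is just the vector of sampled prices over the last memory window. A minimal sketch (sampled-path approximations of the functionals, not the text's exact integrals):

```python
def psi_call(seg, q):
    # Standard call (Example 6.1): only the current price S(t) = seg[-1] matters
    return max(seg[-1] - q, 0.0)

def psi_russian(seg, q):
    # Modified Russian call (Example 6.3): running maximum over [t - r, t]
    return max(max(seg) - q, 0.0)

def psi_asian(seg, q):
    # Modified Asian call (Example 6.5): average price over [t - r, t],
    # with the integral replaced by the sample mean of the discretized segment
    return max(sum(seg) / len(seg) - q, 0.0)
```

Each functional is globally convex in the segment, matching Assumption 6.1.9: a supremum or average of the path (convex or linear) composed with the convex nondecreasing map x ↦ max{x − q, 0} is again convex.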

6.2 Admissible Trading Strategies

Based on the (B, S)-market described by (6.1) and (6.4), the option holder has control over the trading between his bank account and the stock. It is assumed throughout this chapter that neither cost nor tax is incurred in any of the transactions made by the holder.

Definition 6.2.1 A trading strategy in the (B, S)-market is an F-progressively measurable two-dimensional vector process π(·) = {π(t) = (π1(t), π2(t)), t ∈ [0, T]} defined on (Ω, F, P, F) such that

∫_0^T E[|π(t)|²] dt = ∫_0^T E[Σ_{i=1}^{2} (πi(t))²] dt < ∞,


where π1(t) and π2(t) represent, respectively, the number of units of the bank account and the stock owned by the investor at time t ∈ [0, T].
With a trading strategy π(·) described above, the holder's total asset in the (B, S)-market is described by the wealth process {X^π(t), t ∈ [0, T]} defined by

X^π(t) = X1^π(t) + X2^π(t) = π1(t)B(t) + π2(t)S(t),  t ∈ [0, T],   (6.7)

where, since there are no transaction costs or taxes, X1^π(t) = π1(t)B(t) and X2^π(t) = π2(t)S(t) denote the investor's holdings at time t in the bank account and the stock, respectively, and the pair (X1^π(t), X2^π(t)) will be called the investor's portfolio in the (B, S)-market at time t. Once again, {B(t), t ∈ [−r, T]} and {S(t), t ∈ [−r, T]} satisfy (6.1) and (6.4) with initial functions φ and ψ ∈ C+, respectively.

Definition 6.2.2 Given the initial wealth X^π(0) = x, the trading strategy π(·) is said to be admissible at x if

X^π(t) ≥ −ζ,  P-a.s., ∀t ∈ [0, T],   (6.8)

for some non-negative random variable ζ with E[ζ^{1+δ}] < ∞, where δ > 0. Denote the collection of trading strategies π(·) that are admissible at x by A(x).

In a reasonable market, it is important that the constraint described by (6.8) be satisfied, for otherwise it is possible to create a doubling strategy (i.e., a trading strategy that attains arbitrarily large values of wealth with probability one at t = T, starting with zero initial capital X^π(0) = 0 at t = 0). The following example, due to Shreve, is taken from Karatzas [Kar96].

Example 6.6. In this example, we consider the continuous (B, S)-market described by

dB(t) = 0,  t ∈ [0, T] with B(0) = 0,

and

dS(t) = dW(t),  t ∈ [0, T] with S(0) = 0.

Let F = {F(t), t ∈ [0, T]}, where F(t) = σ{W(s), 0 ≤ s ≤ t}. Consider the F-martingale {M(t), t ∈ [0, T]} described by

M(t) ≡ ∫_0^t dW(s)/√(T − s),  t ∈ [0, T],

with E[M²(t)] = ∫_0^t ds/(T − s) = log(T/(T − t)) and E[M(t)] = 0 for all t ∈ [0, T]. From the Girsanov theorem (see Theorem 1.2.16 in Chapter 1), we have M(t) = M̃(log(T/(T − t))), where

M̃(·) ≡ {M(T − Te^{−u}), u ∈ [0, ∞)}


is a Brownian motion. Therefore, for b sufficiently large, the F-stopping time

τ(b) ≡ inf{t ∈ [0, T) | M(t) = b} ∧ T

satisfies P{0 < τ(b) < T} = 1. We choose the trading strategy π(·) = {π(t) = (π1(t), π2(t)), t ∈ [0, T]} as follows:

π1(t) = 0 and π2(t) = (1/√(T − t)) χ_{{t ≤ τ(b)}}.

In this case, the holding in the stock becomes X2^π(t) ≡ (1/√(T − t)) χ_{{t ≤ τ(b)}}, and the total wealth corresponding to this strategy becomes

X^π(t) = ∫_0^t (1/√(T − s)) χ_{{s ≤ τ(b)}} dW(s) = M(t ∧ τ(b)),  t ∈ [0, T].

This shows that X^π(T) = M(τ(b)) = b. Notice that the admissibility condition on π stated in Definition 6.2.2 fails, for otherwise, by Fatou's lemma, we should have X^π(T) ≤ lim_{t↑T} X^π(t) = 0. This is a contradiction.
For the option pricing problems under the (B, S)-market described by (6.1) and (6.4) and without transaction costs or taxes, we will make the following basic assumption through the end of this chapter.

Assumption 6.2.3 (Self-Financing Condition) In the general (B, S)-market described by (6.1) and (6.4), the trading strategy π(·) ∈ A(x) is said to satisfy the self-financing condition if the wealth process {X^π(t), t ∈ [0, T]} satisfies the following equation:

dX^π(t) = dX1^π(t) + dX2^π(t)   (6.9)
        = π1(t) dB(t) + π2(t) dS(t),  P-a.s., t ∈ [0, T].   (6.10)

Financially speaking, the trading strategy π(·) ∈ A(x) satisfies the self-financing condition if there is no net inflow or outflow of wealth into or from the portfolio, so that the increments of the investor's holdings are due only to the increments of the bond price and the stock price. The financial meaning of self-financing can be made clearer by considering the following discrete-time version:

X^π(t + Δt) − X^π(t) = π1(t)(B(t + Δt) − B(t)) + π2(t)(S(t + Δt) − S(t))

for sufficiently small Δt > 0. Denote the set of all self-financing trading strategies π(·) ∈ A(x) by SF(x).
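The discrete-time version of the self-financing condition can be turned into a small bookkeeping check (a sketch only; the prices and the rebalancing rule below are made up). Self-financing means every stock purchase is funded from the bank account and every sale is deposited there, so the identity X = π1·B + π2·S is preserved while wealth changes only through price increments:

```python
def wealth_from_selffinancing(B, S, pi1_0, pi2_rule):
    # B, S: positive price sequences sampled at times 0, 1, ..., n
    # pi2_rule(k): target number of stock units after the k-th observation
    pi2 = pi2_rule(0)
    pi1 = pi1_0
    X = pi1 * B[0] + pi2 * S[0]
    for k in range(len(B) - 1):
        # wealth moves only through price increments (self-financing)
        X += pi1 * (B[k + 1] - B[k]) + pi2 * (S[k + 1] - S[k])
        # rebalance at the new prices without injecting or withdrawing cash
        new_pi2 = pi2_rule(k + 1)
        pi1 += (pi2 - new_pi2) * S[k + 1] / B[k + 1]
        pi2 = new_pi2
        # the portfolio identity X = pi1*B + pi2*S must survive rebalancing
        assert abs(X - (pi1 * B[k + 1] + pi2 * S[k + 1])) < 1e-9
    return X
```

The internal assertion is exactly the discrete analogue of (6.9)-(6.10): rebalancing changes the split (π1, π2) but never the total wealth.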


Remark 6.2.4 To give a more precise description of X1^π(·), X2^π(·), and X^π(·) when π(·) ∈ SF(x), let α(t) and β(t) be, respectively, the cumulative number of shares of stock purchased and sold over the time interval [0, t]. Assuming that the proceeds from selling stock are deposited in the bank account and purchases of stock are paid for from the bank account, it is clear that the following relations hold:

π2(t) = π2(0) + α(t) − β(t),  t ∈ [0, T],
π1(t) = π1(0) + ∫_0^t (S(s)/B(s)) d(β(s) − α(s)),  t ∈ [0, T].

In this case,

dX1^π(t) = d(π1(t)B(t)) = π1(t) dB(t) + B(t) dπ1(t) = π1(t) dB(t) + S(t) d(β(t) − α(t)),
dX2^π(t) = d(π2(t)S(t)) = π2(t) dS(t) + S(t) dπ2(t) = π2(t) dS(t) + S(t) d(α(t) − β(t)),

and

dX^π(t) = dX1^π(t) + dX2^π(t) = π1(t) dB(t) + π2(t) dS(t),  t ∈ [0, T].

Definition 6.2.5 The trading strategy π(·) ∈ SF(x) for the European contingent claim ECC(Ψ) is called an arbitrage (or is said to realize an arbitrage possibility) if

X^π(0) ≤ 0 ⇒ X^π(T) ≥ 0 P-a.s. and P{X^π(T) > 0} > 0.

Definition 6.2.6 The trading strategy π(·) ∈ SF(x) for the American contingent claim ACC(Ψ) is called an arbitrage (or is said to realize an arbitrage possibility) if there exists τ ∈ T_0^T such that

X^π(0) ≤ 0 ⇒ X^π(τ) ≥ 0 P-a.s. and P{X^π(τ) > 0} > 0.

Financially speaking, a self-financing trading strategy is an arbitrage (or free lunch) if it provides an opportunity for making a profit without risk.

6.3 Risk-Neutral Martingale Measures

In this section, we study the existence and uniqueness (or nonuniqueness) question of a risk-neutral martingale measure for the (B, S)-market described by (6.1) and (6.4).


Assuming π(·) ∈ SF(x), we first study the concept of risk-neutral martingale measures. Let P be the collection of probability measures P̃ defined on the measurable space (Ω, F) that satisfy the following two conditions:
(i) P̃ is equivalent to the underlying probability measure P; that is, P̃ and P are mutually absolutely continuous.
(ii) The discounted wealth process {Y^π(t), t ∈ [0, T]} is an F-martingale with respect to P̃, where Y^π(t) = X^π(t)/B(t).

In the following, it will be shown that P ≠ ∅ for the (B, S)-market described by (6.1) and (6.4). For the unit price of the bank account B(·) = {B(t), t ∈ [0, T]} and the stock S(·) = {S(t), t ∈ [0, T]} described in (6.1) and (6.4), define

W̃(t) = W(t) + ∫_0^t γ(B_s, S_s) ds,  t ∈ [0, T],   (6.11)

where γ : C+ × C+ → ℝ is defined by

γ(φ, ψ) = (φ(0)f(ψ) − L(φ))/(φ(0)g(ψ)).   (6.12)

Define the process Z(·) = {Z(t), t ∈ [0, T]} by

Z(t) = exp(−∫_0^t γ(B_s, S_s) dW(s) − (1/2)∫_0^t |γ(B_s, S_s)|² ds).   (6.13)

Lemma 6.3.1 The process Z(·) defined by (6.13) is a martingale defined on (Ω, F, P; F).

Proof. It is clear from the assumptions on f, g, and L that we have

|γ(B_t, S_t)| ≤ (|B(t)f(S_t)| + |L(B_t)|)/|B(t)g(S_t)|
            ≤ (φ(0)e^{λt} K_b ‖S_t‖ + λφ(0)e^{λt})/(φ(0)e^{λt} σ)
            ≤ c‖S_t‖

for some positive constant c. Since E[∫_0^a ‖S_t‖² dt] < ∞ for each a ≥ 0, it follows that

E[exp((1/2)∫_0^a |γ(B_t, S_t)|² dt)] < ∞,  0 ≤ a < ∞.

Therefore, from the martingale characterization theorem in Section 1.2 of Chapter 1, the process is a martingale. □

Lemma 6.3.2 There exists a unique probability measure P̃ defined on the canonical measurable space (Ω, F) such that

P̃(A) = E[1_A Z(T)],  ∀A ∈ F(T),

where 1_A is the indicator function of A ∈ F(T).


Proof. This follows from Lemma 6.3.1, since Z(·) is a martingale on (Ω, F, P; F) and Z(0) = 1. □

Lemma 6.3.3 The process W̃(·) defined by (6.11) is a standard Brownian motion on the filtered probability space (Ω, F, P̃; F).

Proof. This follows from Lemmas 6.3.1 and 6.3.2 and the Girsanov transformation (Theorem 1.2.16). □

Consider the following nonlinear SHDE:
\[
\frac{dS(t)}{S(t)} = \lambda\,dt + g(S_t)\,d\tilde W(t), \qquad t \in [0, T], \tag{6.14}
\]
where λ > 0 is the effective interest rate of the bank account described in Lemma 6.1.2.

Theorem 6.3.4 The strong solution S(·) = {S(t), t ∈ [−r, T]} of (6.4) under the probability measure P is also the unique strong solution of (6.14) with S₀ = ψ ∈ C₊ under the new probability measure P̃, in the sense that they have the same distribution.

Proof. Since W̃(·) is a standard Brownian motion on the filtered probability space (Ω, F, P̃; F), we have
\[
\frac{dS(t)}{S(t)} = f(S_t)\,dt + g(S_t)\,dW(t)
= f(S_t)\,dt + g(S_t)\,d\tilde W(t) - g(S_t)\,\frac{B(t)f(S_t) - L(B_t)}{B(t)g(S_t)}\,dt
= \frac{L(B_t)}{B(t)}\,dt + g(S_t)\,d\tilde W(t)
= \lambda\,dt + g(S_t)\,d\tilde W(t). \qquad\square
\]
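Theorem 6.3.4 says that under P̃ the stock earns the bank rate λ, so the discounted price e^{−λt}S(t) is a P̃-martingale. The following sketch checks this numerically with an Euler scheme; the delay-dependent volatility functional g below is an assumed toy example for illustration, not the book's general g.

```python
import numpy as np

# Euler scheme for dS/S = lam*dt + g(S_t) dW~ under the risk-neutral measure.
# The delay-dependent volatility g is a toy assumption for illustration only.
rng = np.random.default_rng(0)
lam, r, T = 0.05, 0.25, 1.0            # bank rate, delay length, horizon
n_steps, n_paths = 200, 20000
dt = T / n_steps
lag = int(r / dt)                      # grid points covering the memory [-r, 0]

S = np.full((n_paths, n_steps + lag + 1), 100.0)     # flat initial segment psi
for k in range(n_steps):
    lagged, now = S[:, k], S[:, k + lag]
    sigma = 0.15 + 0.10 * np.abs(np.log(now / lagged))   # toy g(S_t)
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    S[:, k + lag + 1] = now * (1.0 + lam * dt + sigma * dW)

# Martingale check: the discounted terminal price should average to S(0).
discounted_mean = np.exp(-lam * T) * S[:, -1].mean()
print(discounted_mean)   # close to 100.0
```

The same simulation, with a payoff applied to the terminal value, gives Monte Carlo estimates of the European price formula in the next section.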

6.4 Pricing of Contingent Claims

An option pricing problem in the (B, S)-market is, briefly, to determine the rational price Υ(Ψ) for the European contingent claim ECC(Ψ) and the American contingent claim ACC(Ψ) described in Section 6.1, that is, the price that the writer of the contract should receive from the buyer for the rights of the contract. One must also determine the trading strategy π(·) = {(π₁(s), π₂(s)), s ∈ [0, T]} the writer should use to invest this fee in the (B, S)-market so as to ensure that the writer will be able to cover the option if and when it is exercised. The fee should be large enough that the writer can, with riskless investing, cover the option, but small enough that the writer does not make an unfair (i.e., riskless) profit.


Definition 6.4.1 A self-financing trading strategy π(·) is an ECC(Ψ)-hedge if X^π(0) = π₁(0)φ(0) + π₂(0)ψ(0) = x and X^π(T) ≥ Ψ(S_T) P-a.s. We say that an ECC(Ψ)-hedge self-financing trading strategy π*(·) is minimal for the European contingent claim if
\[
X^{\pi}(T) \ge X^{\pi^*}(T)
\]
for any ECC(Ψ)-hedge self-financing strategy π(·) for the European contingent claim.

Definition 6.4.2 Let Π_{ECC(Ψ)}(x) be the set of ECC(Ψ)-hedge strategies from SF(x) for the European contingent claim. Define
\[
\Upsilon(ECC(\Psi)) = \inf\{x \ge 0 : \Pi_{ECC(\Psi)}(x) \ne \emptyset\}. \tag{6.15}
\]

Definition 6.4.3 A trading strategy π(·) ∈ SF(x) is an ACC(Ψ)-hedge of the American contingent claim ACC(Ψ) if X^π(0) = π₁(0)φ(0) + π₂(0)ψ(0) = x and, P-a.s.,
\[
X^{\pi}(t) \ge \Psi(S_t), \qquad \forall t \in [0, T].
\]
Definition 6.4.4 Let Π_{ACC(Ψ)}(x) be the set of ACC(Ψ)-hedge strategies from SF(x) for the American contingent claim. Define
\[
\Upsilon(ACC(\Psi)) = \inf\{x \ge 0 : \Pi_{ACC(\Psi)}(x) \ne \emptyset\}. \tag{6.16}
\]

The values Υ(ECC(Ψ)) and Υ(ACC(Ψ)) defined above are called the rational prices of the European contingent claim and the American contingent claim, respectively. If the infimum in (6.15) (respectively, (6.16)) is achieved, then Υ(ECC(Ψ)) (respectively, Υ(ACC(Ψ))) is the minimal possible initial capital x for which there exists a trading strategy π(·) ∈ SF₊(x) possessing the property that, P̃-a.s., X^π(T) ≥ Ψ(S_T) (respectively, X^π(t) ≥ Ψ(S_t), t ∈ [0, T]). The main purpose of this section is twofold: (a) determine the rational prices Υ(ECC(Ψ)) and Υ(ACC(Ψ)); (b) find the ECC(Ψ)-hedging and ACC(Ψ)-hedging strategies π(·) ∈ SF(x) that attain them. We will treat the European option and the American option separately in the following two subsections.


6.4.1 The European Contingent Claims

Let Y(·) = {Y(t), t ∈ [0, T]} be defined by
\[
Y(t) = \tilde E\!\left[\left.\frac{\Psi(S_T)}{B(T)}\,\right|\,\mathcal F(t)\right], \qquad 0 \le t \le T.
\]
Then the process Y(·) is a martingale on (Ω, F, P̃; F), since B(T) = e^{λT}φ(0) > 0 for φ(0) > 0 and, consequently,
\[
\tilde E\!\left[\left|\frac{\Psi(S_T)}{B(T)}\right|\right] = \frac{1}{e^{\lambda T}\phi(0)}\,\tilde E[\Psi(S_T)] < \infty.
\]
By the well-known Itô-Clark theorem on martingale representation (Section 1.3 of Chapter 1), there exists an F-adapted process β(·) = {β(t), t ∈ [0, T]} with ∫₀ᵀ β²(t) dt < ∞ (P-a.s.) such that
\[
Y(t) = Y(0) + \int_0^t \beta(s)\,d\tilde W(s), \qquad t \in [0, T]. \tag{6.17}
\]
Furthermore, since F(t) = σ(S_s, 0 ≤ s ≤ t), there exists a nonanticipating functional β* : [0, T] × C₊[0, T] → ℝ (where C₊[0, T] is the set of all nonnegative continuous functions on [0, T]) such that β(t) = β*(t, S(·)), t ∈ [0, T]. Let π*(·) = {(π₁*(t), π₂*(t)), t ∈ [0, T]} be the trading strategy defined by
\[
\pi_2^*(t) = \frac{\beta(t)B(t)}{S(t)g(S_t)} \tag{6.18}
\]
and
\[
\pi_1^*(t) = Y(t) - \frac{S(t)}{B(t)}\,\pi_2^*(t), \qquad t \in [0, T]. \tag{6.19}
\]

We have the following lemma concerning the trading strategy π*(·) and the process Y(·).

Lemma 6.4.5 π*(·) ∈ SF(x) and, for each t ∈ [0, T], Y(t) = Y^{π*}(t), where Y^{π*}(·) is the discounted wealth process associated with the minimal strategy π*(·) defined in (6.18) and (6.19).

Proof. It is clear that
\[
X^{\pi^*}(t) = \pi_1^*(t)B(t) + \pi_2^*(t)S(t)
= \left(Y(t) - \frac{S(t)}{B(t)}\pi_2^*(t)\right)B(t) + \pi_2^*(t)S(t)
= Y(t)B(t).
\]
This shows that Y(t) = X^{π*}(t)/B(t), 0 ≤ t ≤ T. In this case,

\[
dX^{\pi^*}(t) = B(t)\,dY(t) + Y(t)\,dB(t)
= B(t)\beta(t)\,d\tilde W(t) + Y(t)\,dB(t)
= \left(Y(t) - \frac{\beta(t)S(t)}{N(S_t)}\right)dB(t) + \frac{B(t)\beta(t)}{N(S_t)}\big(L(S_t)\,dt + N(S_t)\,d\tilde W(t)\big)
= \pi_1^*(t)\,dB(t) + \pi_2^*(t)\,dS(t),
\]
where L(S_t) = λS(t) and N(S_t) = S(t)g(S_t) denote the drift and diffusion coefficients of (6.14). This shows that the trading strategy π*(·) is self-financing. □

In the following, we assume that a contingent claim is written at time t ∈ [0, T] instead of at time zero. Since the (B, S)-market is described by the two autonomous equations (6.1) and (6.4), all of the results obtained thus far can be restated with an appropriate translation of time by t.

Theorem 6.4.6 Let Ψ : C → [0, ∞) be the (convex) payoff function. Then the rational price Υ(Ψ; t, ψ) for ECC(Ψ), given S_t = ψ ∈ C at time t ∈ [0, T], is given by
\[
\Upsilon(\Psi; t, \psi) = \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid S_t = \psi\big]
= \tilde E\big[e^{-\lambda(T-t)}\Psi(S_{T-t}) \mid S_0 = \psi\big],
\]
where λ > 0 is the effective interest rate described by Lemma 6.1.2. Furthermore, there exists a minimal hedge π*(·) = {(π₁*(s), π₂*(s)), s ∈ [t, T]}, where
\[
\pi_2^*(s) = \frac{\beta(s)B(s)}{S(s)g(S_s)}, \qquad
\pi_1^*(s) = Y^{\pi^*}(s) - \pi_2^*(s)\,\frac{S(s)}{B(s)}, \qquad s \in [t, T], \tag{6.20}
\]
and the process β(·) = {β(t), t ∈ [0, T]} is given in (6.17).

Proof. Evidently,
\[
X^{\pi^*}(t) = \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid S_t = \psi\big], \qquad
X^{\pi^*}(T) = \Psi(S_T) \quad (P\text{-a.s.}).
\]
Therefore, π* is a (Ψ, x)-hedge with initial capital x = Ẽ[e^{−λT}Ψ(S_T)]. Moreover, the strategy π* is minimal. This shows that the rational price, given S_t = ψ at time t ∈ [0, T], is
\[
\Upsilon(\Psi; t, \psi) = \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid S_t = \psi\big]
= \tilde E\big[e^{-\lambda(T-t)}\Psi(S_{T-t}) \mid S_0 = \psi\big]. \tag{6.21}
\]
□


6.4.2 The American Contingent Claims

At this point, we will prove the following theorem for American contingent claims.

Theorem 6.4.7 For ACC(Ψ), we have the following:
1. The rational price of an American contingent claim, given S_t = ψ at time t ∈ [0, T], is
\[
\Upsilon(\Psi; t, \psi) = \sup_{\tau \in \mathcal M_t^T} \tilde E\big[e^{-\lambda(\tau - t)}\Psi(S_\tau) \mid S_t = \psi\big]
= \sup_{\tau \in \mathcal M_t^T} \tilde E\big[e^{-\lambda(\tau - t)}\Psi(S_{\tau - t}) \mid S_0 = \psi\big].
\]
2. In the class SF₊(x) there exists a minimal ACC(Ψ)-hedge π*(·) whose capital X^{π*}(·) = {X^{π*}(s), s ∈ [t, T]} is given by
\[
X^{\pi^*}(s) = \operatorname*{ess\,sup}_{\tau \in \mathcal M_s^T} \tilde E\big[e^{-\lambda(\tau - s)}\Psi(S_\tau) \mid \mathcal F(s)\big]. \tag{6.22}
\]
3. The minimal trading strategy π*(·) = {(π₁*(s), π₂*(s)), s ∈ [t, T]} is given by
\[
\pi_2^*(s) = \frac{\beta(s)B(s)}{S(s)g(S_s)} \tag{6.23}
\]
and, therefore,
\[
\pi_1^*(s) = \frac{X^{\pi^*}(s) - \pi_2^*(s)S(s)}{B(s)}, \tag{6.24}
\]
where β(·) is found from the martingale representation of the discounted wealth process, that is,
\[
Y^{\pi^*}(t) = \frac{X^{\pi^*}(t)}{B(t)} = Y^{\pi^*}(0) + \int_0^t \beta(s)\,d\tilde W(s). \tag{6.25}
\]
4. A stopping time τ* ∈ 𝒯₀ᵀ is a rational exercise time of the American option if and only if
\[
\Upsilon(\Psi; t, \psi) = \tilde E\big[e^{-\lambda(\tau^* - t)}\Psi(S_{\tau^*}) \mid S_t = \psi\big]
= \tilde E\big[e^{-\lambda(\tau^* - t)}\Psi(S_{\tau^* - t}) \mid S_0 = \psi\big].
\]
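The sup over stopping times in item 1 can be approximated by least-squares Monte Carlo (Longstaff-Schwartz regression). The sketch below is an illustration only: it takes the memoryless case r = 0, so that S is a geometric Brownian motion under P̃, a put payoff Ψ(ψ) = (K − ψ(0))⁺, and an assumed quadratic regression basis.

```python
import numpy as np

# Least-squares Monte Carlo estimate of sup_tau E~[e^{-lam*tau} Psi(S_tau)].
# Memoryless case with GBM dynamics and a put payoff; all parameters are
# illustrative assumptions.
rng = np.random.default_rng(1)
lam, sigma, S0, K, T = 0.05, 0.2, 100.0, 100.0, 1.0
n_steps, n_paths = 50, 50000
dt = T / n_steps
disc = np.exp(-lam * dt)

Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((lam - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
for k in range(n_steps - 2, -1, -1):          # backward induction
    cash *= disc
    itm = (K - S[:, k]) > 0.0                 # regress on in-the-money paths
    if itm.sum() > 10:
        x = S[itm, k]
        cont = np.polyval(np.polyfit(x, cash[itm], 2), x)  # continuation value
        exercise = (K - x) > cont
        cash[itm] = np.where(exercise, K - x, cash[itm])
price = disc * cash.mean()
print(price)   # near the American-put value for these inputs (about 6)
```

The regression step approximates the essential supremum in (6.22) pathwise; richer bases and more exercise dates tighten the estimate.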


6.5 Infinite-Dimensional Black-Scholes Equation

Suppose the (B, S)-market is described by (6.1) and (6.4). In this section, an infinitesimal equation for the pricing function V : [0, T] × C → ℝ will be derived for a given convex payoff function Ψ : C → ℝ, where
\[
V(t, \psi) = \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid S_t = \psi\big].
\]
Note that we use the simpler notation V(t, ψ) instead of Υ(Ψ; t, ψ), since the reward function Ψ is assumed to be given and fixed. The same substitution of Υ by V remains valid through the end of the chapter. This infinitesimal equation will be referred to as the infinite-dimensional Black-Scholes equation for the European contingent claim ECC(Ψ); it takes the form of a partial differential equation that involves Fréchet derivatives with respect to elements in C. The concepts of Fréchet derivatives, DV(t, ψ) ∈ C* and D²V(t, ψ) ∈ C†, and their extensions, DV(t, ψ) ∈ (C ⊕ B)* and D²V(t, ψ) ∈ (C ⊕ B)†, as well as the infinitesimal shift operator (SV)(t, ψ), were introduced in Chapter 2. They will be used in deriving the infinite-dimensional Black-Scholes equation in this section, and the infinite-dimensional HJBVI for the American contingent claim ACC(Ψ) in the next section, without repeating their definitions.

6.5.1 Equation Derivation

For the convex payoff function Ψ : C → ℝ, we have from Section 6.4.2 the (rational) pricing function V : [0, T] × C → ℝ given by
\[
V(t, \psi) = e^{-\lambda(T-t)}\,\tilde E[\Psi(S_T) \mid S_t = \psi]
= e^{-\lambda(T-t)}\,\tilde E[\Psi(S_{T-t}) \mid S_0 = \psi]. \tag{6.26}
\]
In the following, we explore the applicability of the infinitesimal generator
\[
\mathcal A V(t, \psi) = \lim_{\epsilon \downarrow 0} \frac{E[V(t + \epsilon, S_{t+\epsilon}) - V(t, \psi)]}{\epsilon},
\]
where V : [0, T] × C → ℝ is given in (6.26). Let P_t(Φ)(ψ) = E[Φ(S_t) | S₀ = ψ] for Φ : C → ℝ with ψ ∈ C, and let C_b be the Banach space of all bounded uniformly continuous functions Φ : C → ℝ. Here, we are using weak convergence. By Dynkin's formula (Theorem 2.4.1 in Chapter 2), if
\[
dx(t) = H(x_t)\,dt + G(x_t)\,dW(t)
\]
with x₀ = ψ, and Φ ∈ C²_lip(C) ∩ D(S), then Φ ∈ D(A) and, for each ψ ∈ C,
\[
A(\Phi)(\psi) = S(\Phi)(\psi) + D\Phi(\psi)\big(H(\psi)1_{\{0\}}\big)
+ \frac{1}{2} D^2\Phi(\psi)\big(G(\psi)1_{\{0\}},\,G(\psi)1_{\{0\}}\big). \tag{6.27}
\]

It is known (see, e.g., Black and Scholes [BS73] and [SKKM94]) that the classical Black-Scholes equation is a deterministic parabolic partial differential equation (with a suitable auxiliary condition) whose solution gives the value of the European option contract at a given time. Using the infinitesimal generator, a generalized version of the classical Black-Scholes equation can be derived when the (B, S)-market model uses (6.1) and (6.4). In the second part of the following theorem, we use Theorem 2.4.1 in Chapter 2, which states that for Λ ∈ D(A), u_t = P_t(Λ) = E[Λ(S_t)] is the unique solution of du_t = Au_t dt that satisfies the following:
(i) u_t is weakly continuous, du_t/dt is weakly continuous from the right, and du_t/dt is bounded on every finite interval.
(ii) ‖u_t‖ ≤ ce^{kt} for some constants c and k.
(iii) u₀ = (w) lim_{t↓0} u_t = Λ.



Theorem 6.5.1 Let X^{π*}(t) = V(t, ψ) = e^{−λ(T−t)} Ẽ[Ψ(S_{T−t}) | S₀ = ψ] be the wealth process for the minimal (Ψ, x)-hedge, where ψ ∈ C₊ and x = X^{π*}(0). Define Φ_t(ψ) = Ẽ[Ψ(S_t) | S₀ = ψ] and let Φ_t ∈ C²_lip(C) ∩ D(S) for t ∈ [0, T]. Finally, assume that Ψ ∈ C²_lip(C) ∩ D(S). Then
\[
\lambda V(t, \psi) = (\partial_t + \mathcal A)V(t, \psi)
= \partial_t V(t, \psi) + S(V)(t, \psi) + DV(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2 V(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big),
\quad \forall (t, \psi) \in [0, T) \times \mathbf C_+, \tag{6.28}
\]
with
\[
V(T, \psi) = \Psi(\psi), \qquad \forall \psi \in \mathbf C_+, \tag{6.29}
\]
and the trading strategy (π₁*(t), π₂*(t)) is defined by
\[
\pi_2^*(t) = \frac{\beta(t)B(t)}{S(t)g(S_t)}, \qquad
\pi_1^*(t) = \frac{1}{B(t)}\big[X^{\pi^*}(t) - S(t)\pi_2^*(t)\big],
\]
where β(t) is found from (6.17). Conversely, if (6.28) and (6.29) hold, then V(t, ψ) is the wealth process for the (Ψ, x)-hedge with π₂*(t) = β(t)B(t)/(S(t)g(S_t)) and π₁*(t) = [X^{π*}(t) − S(t)π₂*(t)]/B(t).


Note: Equations (6.28) and (6.29) constitute the infinite-dimensional Black-Scholes equation for the (B, S)-market with hereditary price structure described by (6.1) and (6.4).

Proof. Letting
\[
dS(t) = \lambda S(t)\,dt + S(t)g(S_t)\,d\tilde W(t)
\quad\text{and}\quad
P_t(\Psi)(\psi) = \tilde E[\Psi(S_t) \mid S_0 = \psi],
\]
we have
\[
e^{-\lambda(T-t)} P_{T-t}(\Psi)(\psi) = e^{-\lambda(T-t)}\,\tilde E[\Psi(S_{T-t}) \mid S_0 = \psi] = V(t, \psi).
\]
From the above, we therefore have
\[
\partial_t V(t, \psi) = \frac{\partial}{\partial t}\Big[e^{-\lambda(T-t)} P_{T-t}(\Psi)(\psi)\Big]
= \lambda e^{-\lambda(T-t)} P_{T-t}(\Psi)(\psi) - e^{-\lambda(T-t)}\,\mathcal A\big(P_{T-t}(\Psi)(\psi)\big).
\]
Using the linearity of the operator A and Ψ ∈ C²_lip(C) ∩ D(S), we have
\[
\lambda V(t, \psi) = (\partial_t + \mathcal A)V(t, \psi)
= \partial_t V(t, \psi) + SV(t, \psi) + DV(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2 V(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big).
\]
Notice that
\[
V(T, \psi) = e^{-\lambda(T-T)}\,\tilde E[\Psi(S_{T-T}) \mid S_0 = \psi] = \Psi(\psi).
\]
Since {S_t, t ≥ 0} is a C-valued Markov process (see [CY07]), the derivation above yields (6.28) and (6.29) with V(t, ψ) = e^{−λ(T−t)} Ẽ[Ψ(S_{T−t}) | S₀ = ψ]. However, V(t, ψ) = X^{π*}(t) = B(t)Y^{π*}(t), so
\[
dV(t, S_t) = B(t)\,dY^{\pi^*}(t) + Y^{\pi^*}(t)\,dB(t)
= B(t)\beta(t)\,d\tilde W(t) + \lambda Y^{\pi^*}(t)B(t)\,dt
= \pi_2^*(t)S(t)g(S_t)\,d\tilde W(t) + \lambda V(t, S_t)\,dt.
\]

6.5 Black-Scholes Equation

317

Therefore,
\[
V(t, S_t) - V(0, \psi)
= \lambda\int_0^t V(s, S_s)\,ds + \int_0^t \pi_2^*(s)S(s)g(S_s)\,d\tilde W(s)
\]
\[
= \lambda\int_0^t \big(\pi_1^*(s)B(s) + \pi_2^*(s)S(s)\big)\,ds + \int_0^t \pi_2^*(s)S(s)g(S_s)\,d\tilde W(s)
\]
\[
= \int_0^t \pi_1^*(s)\,dB(s) + \int_0^t \pi_2^*(s)\big(\lambda S(s)\,ds + S(s)g(S_s)\,d\tilde W(s)\big)
= \int_0^t \pi_1^*(s)\,dB(s) + \int_0^t \pi_2^*(s)\,dS(s).
\]
Therefore, V(t, ψ) = X^{π*}(t) is self-financing. □

Following Definition 2.4.4 in Chapter 2, a function Φ : C → ℝ is said to be quasi-tame if there are C^∞-bounded maps h : ℝᵏ → ℝ and f_j : ℝ → ℝ and piecewise C¹ functions g_j : [−r, 0] → ℝ, 1 ≤ j ≤ k − 1, such that
\[
\Phi(\eta) = h\left(\int_{-r}^0 f_1(\eta(s))g_1(s)\,ds,\ \ldots,\ \int_{-r}^0 f_{k-1}(\eta(s))g_{k-1}(s)\,ds,\ \eta(0)\right)
\]
for all η ∈ C. For a quasi-tame function Φ : C → ℝ, the following Itô formula holds (see Section 2.5 in Chapter 2):
\[
d\Phi(x_t) = S(\Phi)(x_t)\,dt + D\Phi(x_t)\big(H(x_t)1_{\{0\}}\big)\,dt
+ \frac{1}{2} D^2\Phi(x_t)\big(G(x_t)1_{\{0\}},\,G(x_t)1_{\{0\}}\big)\,dt
+ D\Phi(x_t)\big(G(x_t)1_{\{0\}}\big)\,dW(t), \tag{6.30}
\]
where dx(t) = H(x_t)dt + G(x_t)dW(t) and x₀ = ψ ∈ C. Using this formula, we have the following corollary.

Corollary 6.5.2 Let Φ(t, ψ) be a quasi-tame solution of (6.28)-(6.29); then π₂*(t) = DΦ(t, S_t)(1_{{0}}).

Proof. To find the trading strategy {π(t) = (π₁(t), π₂(t)), t ∈ [0, T]}, we start by applying Itô's formula for quasi-tame functions to e^{−λt}Φ(t, S_t), which gives


\[
e^{-\lambda t}\Phi(t, S_t) - \Phi(0, \psi)
= -\lambda\int_0^t e^{-\lambda s}\Phi(s, S_s)\,ds + \int_0^t e^{-\lambda s}\partial_s\Phi(s, S_s)\,ds
+ \int_0^t e^{-\lambda s} S(\Phi)(s, S_s)\,ds
\]
\[
+ \int_0^t e^{-\lambda s} D\Phi(s, S_s)\big(\lambda S(s)1_{\{0\}}\big)\,ds
+ \int_0^t e^{-\lambda s} D\Phi(s, S_s)\big(S(s)g(S_s)1_{\{0\}}\big)\,d\tilde W(s)
\]
\[
+ \frac{1}{2}\int_0^t e^{-\lambda s} D^2\Phi(s, S_s)\big(S(s)g(S_s)1_{\{0\}},\,S(s)g(S_s)1_{\{0\}}\big)\,ds.
\]
Since Φ satisfies the generalized Black-Scholes equation (6.28), the above equation gives
\[
e^{-\lambda t}\Phi(t, S_t) - \Phi(0, \psi) = \int_0^t e^{-\lambda s} D\Phi(s, S_s)\big(S(s)g(S_s)1_{\{0\}}\big)\,d\tilde W(s). \tag{6.31}
\]

Using the self-financing condition,
\[
dY^{\pi^*}(t) = \frac{B(t)\,dX^{\pi^*}(t) - X^{\pi^*}(t)\,dB(t)}{B^2(t)}
= \frac{B(t)\big(\pi_1^*(t)\,dB(t) + \pi_2^*(t)\,dS(t)\big) - \big(\pi_1^*(t)B(t) + \pi_2^*(t)S(t)\big)\lambda B(t)\,dt}{B^2(t)}
\]
\[
= \frac{\pi_1^*(t)\,dB(t) + \pi_2^*(t)\,dS(t) - \big(\pi_1^*(t)B(t) + \pi_2^*(t)S(t)\big)\lambda\,dt}{B(t)}
= \frac{\pi_2^*(t)\big(\lambda S(t)\,dt + S(t)g(S_t)\,d\tilde W(t)\big) - \lambda\pi_2^*(t)S(t)\,dt}{B(t)}
= \frac{\pi_2^*(t)}{B(t)}\,S(t)g(S_t)\,d\tilde W(t).
\]

Consequently,
\[
Y^{\pi^*}(t) = Y^{\pi^*}(0) + \int_0^t \frac{\pi_2^*(s)}{B(s)}\,S(s)g(S_s)\,d\tilde W(s),
\]
and, by (6.2),
\[
Y^{\pi^*}(t) = \frac{X^{\pi^*}(t)}{B(t)} = e^{-\lambda t}\,\frac{X^{\pi^*}(t)}{\phi(0)}.
\]


Therefore,
\[
e^{-\lambda t} X^{\pi^*}(t) - X^{\pi^*}(0)
= \phi(0)\int_0^t \frac{\pi_2^*(s)S(s)g(S_s)}{B(s)}\,d\tilde W(s)
= \int_0^t e^{-\lambda s}\pi_2^*(s)S(s)g(S_s)\,d\tilde W(s). \tag{6.32}
\]
Combining (6.31) and (6.32), we have
\[
\int_0^t e^{-\lambda s} D\Phi(s, S_s)\big(S(s)g(S_s)1_{\{0\}}\big)\,d\tilde W(s)
= \int_0^t e^{-\lambda s}\pi_2^*(s)S(s)g(S_s)\,d\tilde W(s), \qquad \forall t \in [0, T],
\]
and, by the uniqueness of the martingale representation, we have
\[
\pi_2^*(t) = D\Phi(t, S_t)(1_{\{0\}}) \quad \text{a.s.},
\]
and the proof is complete. □
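In the no-delay case, the hedge ratio π₂*(t) = DΦ(t, S_t)(1_{{0}}) of Corollary 6.5.2 is the classical "delta" ∂V/∂x. The following sketch (Black-Scholes parameters assumed purely for illustration) delta-hedges a call along simulated risk-neutral paths and checks that the self-financing wealth started at the rational price replicates the payoff up to a discretization error that is small relative to the option price.

```python
import math
import numpy as np

def N(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

lam, sigma, S0, K, T = 0.05, 0.2, 100.0, 100.0, 1.0
n_steps = 1000
dt = T / n_steps

def d1(t, x):
    tau = max(T - t, 1e-12)
    return (math.log(x / K) + (lam + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))

def bs_call(t, x):
    tau = max(T - t, 1e-12)
    return x * N(d1(t, x)) - K * math.exp(-lam * tau) * N(d1(t, x) - sigma * math.sqrt(tau))

rng = np.random.default_rng(2)
errors = []
for _ in range(100):                          # 100 independent hedged paths
    S, X = S0, bs_call(0.0, S0)               # start with the rational price
    for k in range(n_steps):
        pi2 = N(d1(k * dt, S))                # shares held: the "delta"
        bank = X - pi2 * S                    # remainder in the bank account
        S *= math.exp((lam - 0.5 * sigma**2) * dt
                      + sigma * math.sqrt(dt) * rng.standard_normal())
        X = bank * math.exp(lam * dt) + pi2 * S   # self-financing update
    errors.append(X - max(S - K, 0.0))        # replication error at maturity
rms = float(np.sqrt(np.mean(np.square(errors))))
print(rms)   # small compared with the option price bs_call(0, 100) ≈ 10.45
```

Refining the rebalancing grid drives the replication error to zero, which is the discrete counterpart of the exact hedge in the corollary.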

The following corollary shows that when there is no time delay (i.e., r = 0) and the stock price {S(t), t ≥ 0} satisfies the nonlinear stochastic differential equation
\[
\frac{dS(t)}{S(t)} = f(S(t))\,dt + g(S(t))\,dW(t), \qquad t \ge 0, \tag{6.33}
\]
with
\[
dB(t) = \lambda B(t)\,dt, \qquad t \ge 0, \tag{6.34}
\]
then the generalized Black-Scholes equation (6.28) reduces to the classical one.

Corollary 6.5.3 If Φ(t, S_t) = V(t, S(t)) and there is no hereditary structure, so that (6.1) and (6.4) reduce to (6.33) and (6.34), respectively, then the infinite-dimensional Black-Scholes equation (6.28)-(6.29) reduces to the classical Black-Scholes equation.

Proof. If there is no delay, r = 0, so Φ(t, S_t) = V(t, S(t)). Also, we have λΦ(t, S_t) = λV(t, S(t)) = λV(t, x), where S(t) = x, and
\[
\partial_t V(t, S_t) = \partial_t V(t, x), \qquad
DV(t, S_t)\big(\lambda S(t)1_{\{0\}}\big) = \partial_x V(t, x)\,\lambda x,
\]
\[
DV(t, S_t)\big(S(t)g(S_t)1_{\{0\}}\big) = \partial_x V(t, x)\,x g(x),
\]
and
\[
D^2 V(t, S_t)\big(S(t)g(S_t)1_{\{0\}},\,S(t)g(S_t)1_{\{0\}}\big) = \frac{\partial^2}{\partial x^2} V(t, x)\,x^2 g^2(x).
\]


Also, SV(t, ψ) = 0 if r = 0. Therefore,
\[
\lambda V(t, x) = \partial_t V(t, x) + \lambda x\,\partial_x V(t, x)
+ \frac{1}{2}\,x^2 g^2(x)\,\frac{\partial^2}{\partial x^2} V(t, x), \qquad \forall x \ge 0,\ \forall t \in [0, T].
\]
For g ≡ σ ≥ 0,
\[
\lambda V(t, x) = \frac{\partial}{\partial t} V(t, x) + \lambda x\,\frac{\partial}{\partial x} V(t, x)
+ \frac{1}{2}\sigma^2 x^2\,\frac{\partial^2}{\partial x^2} V(t, x), \tag{6.35}
\]
which is the classical Black-Scholes equation. □

6.5.2 Viscosity Solution

In this subsection, we show that the option price V : [0, T] × C → ℝ is actually a viscosity solution of equation (6.28)-(6.29). The general theory of viscosity solutions in various settings can be found in Chapters 3 and 4. First, let us define a viscosity solution of (6.28)-(6.29) as follows.

Definition 6.5.4 Let w ∈ C([0, T] × C). We say that w is a viscosity subsolution of (6.28)-(6.29) if, for every Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) and every (t, ψ) ∈ [0, T] × C satisfying Γ ≥ w on [0, T] × C and Γ(t, ψ) = w(t, ψ), we have
\[
\lambda\Gamma(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \le 0.
\]
We say that w is a viscosity supersolution of (6.28)-(6.29) if, for every Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) and every (t, ψ) ∈ [0, T] × C satisfying Γ ≤ w on [0, T] × C and Γ(t, ψ) = w(t, ψ), we have
\[
\lambda\Gamma(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \ge 0.
\]
We say that w is a viscosity solution of (6.28)-(6.29) if it is both a viscosity subsolution and a viscosity supersolution of (6.28)-(6.29).

Lemma 6.5.5 Let u, t ∈ [0, T] with t ≤ u. Then
\[
V(t, \psi) = \tilde E\big[e^{-\lambda(u-t)} V(u, S_u) \mid S_t = \psi\big]. \tag{6.36}
\]


Proof. Let u, t ∈ [0, T] with t ≤ u. Note that
\[
V(u, S_u) = \tilde E\big[e^{-\lambda(T-u)}\Psi(S_T) \mid S_u\big]
= \tilde E\big[e^{-\lambda(T-u)}\Psi(S_T) \mid \mathcal F(u)\big]. \tag{6.37}
\]
In view of this and using the fact that S_t(ψ) is Markovian, we have
\[
\tilde E\big[e^{-\lambda(u-t)} V(u, S_u) \mid S_t = \psi\big]
= \tilde E\big[e^{-\lambda(u-t)}\,\tilde E[e^{-\lambda(T-u)}\Psi(S_T) \mid \mathcal F(u)] \mid S_t = \psi\big]
\]
\[
= \tilde E\big[e^{-\lambda(T-t)}\,\tilde E[\Psi(S_T) \mid \mathcal F(u)] \mid \mathcal F(t)\big]
= \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid \mathcal F(t)\big]
= \tilde E\big[e^{-\lambda(T-t)}\Psi(S_T) \mid S_t = \psi\big] = V(t, \psi). \tag{6.38}
\]
This proves the lemma. □

Theorem 6.5.6 The pricing function V : [0, T] × C → ℝ is a viscosity solution of the infinite-dimensional Black-Scholes equation (6.28)-(6.29).

Proof. Let Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) and (t, ψ) ∈ [0, T] × C be such that Γ ≤ V on [0, T] × C and Γ(t, ψ) = V(t, ψ). We want to prove the viscosity supersolution inequality, that is,
\[
\lambda V(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \ge 0.
\]
Notation: Throughout the proof of the theorem we write
\[
\tilde E^\psi[Y] = \tilde E[Y \mid S_t = \psi]
\]
for any process Y.

For 0 ≤ t ≤ s ≤ T, by virtue of Theorem 2.4.1 in Chapter 2, we have
\[
\tilde E^\psi\big[e^{-\lambda(s-t)}\Gamma(s, S_s)\big] - \Gamma(t, \psi)
= \tilde E^\psi\bigg[\int_t^s e^{-\lambda(u-t)}\Big(\partial_t\Gamma(u, S_u) + S(\Gamma)(u, S_u)
+ D\Gamma(u, S_u)\big(\lambda S_u(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2\Gamma(u, S_u)\big(S_u(0)g(S_u)1_{\{0\}},\,S_u(0)g(S_u)1_{\{0\}}\big)
- \lambda\Gamma(u, S_u)\Big)\,du\bigg]. \tag{6.39}
\]
For any s ∈ [t, T], Lemma 6.5.5 gives
\[
V(t, \psi) \ge e^{-\lambda(s-t)}\,\tilde E^\psi[V(s, S_s)].
\]
Using the fact that Γ ≤ V, we can get

\[
0 \ge \tilde E^\psi\big[e^{-\lambda(s-t)} V(s, S_s)\big] - V(t, \psi)
\ge \tilde E^\psi\big[e^{-\lambda(s-t)}\Gamma(s, S_s)\big] - \Gamma(t, \psi)
\]
\[
= \tilde E^\psi\bigg[\int_t^s e^{-\lambda(u-t)}\Big(\partial_t\Gamma(u, S_u) + S(\Gamma)(u, S_u)
+ D\Gamma(u, S_u)\big(\lambda S_u(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2\Gamma(u, S_u)\big(S_u(0)g(S_u)1_{\{0\}},\,S_u(0)g(S_u)1_{\{0\}}\big)
- \lambda\Gamma(u, S_u)\Big)\,du\bigg]. \tag{6.40}
\]
Dividing by (s − t) and letting s ↓ t in the previous inequality, we obtain
\[
\lambda V(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \ge 0. \tag{6.41}
\]

So we have proved the supersolution inequality. Next, we prove that V is a viscosity subsolution. Let Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) and (t, ψ) ∈ [0, T] × C satisfy Γ ≥ V on [0, T] × C and Γ(t, ψ) = V(t, ψ). We want to prove that
\[
\lambda V(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \le 0. \tag{6.42}
\]
For any s ∈ [t, T], Lemma 6.5.5 gives
\[
V(t, \psi) \le e^{-\lambda(s-t)}\,\tilde E^\psi[V(s, S_s)],
\]
so we can get
\[
0 \le \tilde E^\psi\big[e^{-\lambda(s-t)} V(s, S_s)\big] - V(t, \psi)
\le \tilde E^\psi\big[e^{-\lambda(s-t)}\Gamma(s, S_s)\big] - \Gamma(t, \psi)
\]
\[
= \tilde E^\psi\bigg[\int_t^s e^{-\lambda(u-t)}\Big(\partial_t\Gamma(u, S_u) + S(\Gamma)(u, S_u)
+ D\Gamma(u, S_u)\big(\lambda S_u(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2\Gamma(u, S_u)\big(S_u(0)g(S_u)1_{\{0\}},\,S_u(0)g(S_u)1_{\{0\}}\big)
- \lambda\Gamma(u, S_u)\Big)\,du\bigg]. \tag{6.43}
\]
Dividing by (s − t) and letting s ↓ t in the previous inequality, we obtain
\[
\lambda V(t, \psi) - \partial_t\Gamma(t, \psi) - S(\Gamma)(t, \psi) - D\Gamma(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
- \frac{1}{2} D^2\Gamma(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big) \le 0. \tag{6.44}
\]
This proves the subsolution inequality and completes the proof of the theorem. □


Proposition 6.5.7 (Comparison Principle) If V₁(t, ψ) and V₂(t, ψ) are both continuous in (t, ψ) and are, respectively, a viscosity subsolution and a viscosity supersolution of (6.28)-(6.29) with at most polynomial growth, then
\[
V_1(t, \psi) \le V_2(t, \psi), \qquad \forall (t, \psi) \in [0, T] \times \mathbf C. \tag{6.45}
\]
Proof. The proof follows the same argument as that of Theorem 3.6.1 with some modifications and is therefore omitted here. □

Given the above comparison principle, it is easy to prove that (6.28)-(6.29) has a unique viscosity solution.

Theorem 6.5.8 The rational pricing function V : [0, T] × C → ℝ for ECC(Ψ) is the unique viscosity solution of the infinite-dimensional Black-Scholes equation (6.28)-(6.29).

6.6 HJB Variational Inequality

Since we are also interested in the secondary market for the American option sold at t ∈ [0, T], characterizations of the rational price of the option at time t are also very desirable. In the following, we state the HJBVI that characterizes the pricing function V : [0, T] × C → [0, ∞) of the American option under the assumption that it is sufficiently smooth (i.e., V ∈ C^{1,2}_lip([0, T] × C) ∩ D(S)). The derivation follows easily from Chapter 4 as a consequence of the optimal stopping problem. From Theorem 6.4.7, it is known that the pricing function V : [0, T] × C → ℝ is the value function of the following optimal stopping problem:
\[
V(t, \psi) = \sup_{\tau \in \mathcal M_t^T} \tilde E\big[e^{-\lambda(\tau - t)}\Psi(S_\tau) \mid S_t = \psi\big].
\]
It is clear that V(T, ψ) = Ψ(ψ) for all ψ ∈ C and V(0, ψ) = C*(Ψ). We have the following theorem.

Theorem 6.6.1 Assume that the reward function Ψ : C → [0, ∞) is globally convex. Then the pricing function V ∈ C([0, T] × C). Furthermore, V(t, ·) : C → ℝ is Lipschitz on C for each t ∈ [0, T]; that is, |V(t, ϕ) − V(t, φ)| ≤ K‖ϕ − φ‖ for all ϕ, φ ∈ C and some K > 0.

Proof. Since the reward function Ψ satisfies Assumption 6.1.9, it is also Lipschitz on C (see Lemma 6.1.10). By following the proof of Lemma 4.5.2 in Chapter 4, one can show that the pricing function V(t, ·) : C → [0, ∞) satisfies the Lipschitz condition for each t ∈ [0, T]. □


It will be shown in the next section that the pricing function V : [0, T] × C → ℝ is a viscosity solution of the infinite-dimensional HJBVI (6.46)-(6.47). To derive (6.46)-(6.47), it is necessary to introduce some notation and concepts as follows. Consider the following infinite-dimensional HJBVI with terminal condition:
\[
\max\{\Psi - V,\ \partial_t V + \mathcal A V - \lambda V\} = 0 \quad \text{on } [0, T] \times \mathbf C, \tag{6.46}
\]
and
\[
V(T, \psi) = \Psi(\psi), \qquad \forall \psi \in \mathbf C, \tag{6.47}
\]
where
\[
\mathcal A V(t, \psi) = SV(t, \psi) + DV(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2 V(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big). \tag{6.48}
\]
We can partition the domain of the pricing function into the continuation region 𝒞 and the stopping region (𝒞)ᶜ ≡ ([0, T] × C) − 𝒞 as follows:
\[
\mathcal C \equiv \{(t, \psi) \in [0, T] \times \mathbf C : V(t, \psi) > \Psi(\psi)\}
\]
and
\[
(\mathcal C)^c \equiv \{(t, \psi) \in [0, T] \times \mathbf C : V(t, \psi) = \Psi(\psi)\},
\]
since V ≥ Ψ. It is clear from the continuity of V : [0, T] × C → ℝ and Ψ : C → ℝ that (𝒞)ᶜ is closed and 𝒞 is open. The sections 𝒞(t) ≡ {ψ : (t, ψ) ∈ 𝒞} and (𝒞)ᶜ(t) ≡ {ψ : (t, ψ) ∈ (𝒞)ᶜ} are connected for every t ∈ [0, T). Detailed characterizations of these two regions can be found in Section 4.4 of Chapter 4. If the pricing function happens to be sufficiently smooth, then it satisfies the infinite-dimensional HJBVI (6.46)-(6.47), as stated in the following result.

Theorem 6.6.2 If the pricing function V is such that V ∈ C^{1,2}_lip([0, T] × C) ∩ D(S), then it satisfies (6.46)-(6.47) in the classical sense.

Proof. It is clear that V(T, ψ) = Ψ(ψ) for all ψ ∈ C. The rest of the theorem follows the derivation of the HJBVI in Section 4.3 of Chapter 4. □

Definition 6.6.3 Let Φ ∈ C([0, T] × C). We say that Φ is a viscosity subsolution of (6.46)-(6.47) if, for every (t, ψ) ∈ [0, T] × C and every Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) satisfying Γ ≥ Φ on [0, T] × C and Γ(t, ψ) = Φ(t, ψ), we have
\[
\max\{\Psi(\psi) - \Gamma(t, \psi),\ \partial_t\Gamma(t, \psi) + \mathcal A\Gamma(t, \psi) - \lambda\Gamma(t, \psi)\} \le 0.
\]


We say that Φ is a viscosity supersolution of (6.46)-(6.47) if, for every (t, ψ) ∈ [0, T] × C and every Γ ∈ C^{1,2}_lip([0, T] × C) ∩ D(S) satisfying Γ ≤ Φ on [0, T] × C and Γ(t, ψ) = Φ(t, ψ), we have
\[
\max\{\Psi(\psi) - \Gamma(t, \psi),\ \partial_t\Gamma(t, \psi) + \mathcal A\Gamma(t, \psi) - \lambda\Gamma(t, \psi)\} \ge 0.
\]
We say that Φ is a viscosity solution of (6.46)-(6.47) if it is both a viscosity subsolution and a viscosity supersolution of (6.46)-(6.47). We have the following theorem as a consequence of the results obtained in Section 4.5 of Chapter 4.

Theorem 6.6.4 The pricing function V : [0, T] × C → ℝ is the unique viscosity solution of (6.46)-(6.47).
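In the memoryless case, the HJBVI reduces to the classical obstacle problem for the American option and can be solved by an explicit finite-difference scheme that steps the Black-Scholes operator backward in time and projects onto the payoff at every step. The grid, put payoff, and parameters below are illustrative assumptions, not the book's infinite-dimensional setting.

```python
import numpy as np

# Explicit scheme for max{Psi - V, V_t + A V - lam V} = 0 with an American put
# obstacle: advance backward in time, then project onto the payoff Psi.
lam, sigma, K, T = 0.05, 0.2, 100.0, 1.0
x = np.linspace(0.0, 300.0, 301)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / (sigma**2 * x[-1]**2)   # explicit-scheme stability bound
n = int(T / dt) + 1
dt = T / n
psi = np.maximum(K - x, 0.0)               # obstacle Psi
V = psi.copy()                             # terminal condition V(T, x) = Psi(x)
for _ in range(n):
    Vx = (V[2:] - V[:-2]) / (2 * dx)
    Vxx = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    V[1:-1] += dt * (lam * x[1:-1] * Vx + 0.5 * sigma**2 * x[1:-1]**2 * Vxx
                     - lam * V[1:-1])
    V[0], V[-1] = K, 0.0                   # boundary values
    V = np.maximum(V, psi)                 # projection: early-exercise constraint
price = float(np.interp(100.0, x, V))
print(price)   # near the at-the-money American-put value (about 6.1)
```

The projection step is exactly the discrete analogue of the max in (6.46): wherever the obstacle binds, exercise is optimal; elsewhere the Black-Scholes equation holds.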

6.7 Series Solution

In this section, we consider a series solution of the infinite-dimensional Black-Scholes equation (6.28)-(6.29).

6.7.1 Derivations

We start from the infinite-dimensional Black-Scholes equation
\[
\lambda V(t, \psi) = \partial_t V(t, \psi) + S(V)(t, \psi) + DV(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2 V(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big),
\quad \forall (t, \psi) \in [0, T) \times \mathbf C_+,
\]
with
\[
V(T, \psi) = \Psi(\psi), \qquad \forall \psi \in \mathbf C_+.
\]
Since we know that the solution is of the form V(t, ψ) = e^{−λ(T−t)} Ẽ[Ψ(S_T) | S_t = ψ], we substitute this characterization into the differential equation and obtain
\[
0 = \partial_t\tilde\Phi(t, \psi) + S(\tilde\Phi)(t, \psi) + D\tilde\Phi(t, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2\tilde\Phi(t, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big),
\quad \forall (t, \psi) \in [0, T) \times \mathbf C_+,
\]
with
\[
\tilde\Phi(T, \psi) = \Psi(\psi), \qquad \forall \psi \in \mathbf C_+,
\]


where Φ̃(t, ψ) = Ẽ[Ψ(S_T) | S_t = ψ]. We complete the simplification by the change of variables τ = T − t, which gives the equation
\[
\partial_\tau\Phi(\tau, \psi) = S(\Phi)(\tau, \psi) + D\Phi(\tau, \psi)\big(\lambda\psi(0)1_{\{0\}}\big)
+ \frac{1}{2} D^2\Phi(\tau, \psi)\big(\psi(0)g(\psi)1_{\{0\}},\,\psi(0)g(\psi)1_{\{0\}}\big),
\quad \forall (\tau, \psi) \in [0, T) \times \mathbf C_+,
\]
with
\[
\Phi(0, \psi) = \Psi(\psi), \qquad \forall \psi \in \mathbf C_+.
\]
We now seek a solution of the form
\[
\Phi(\tau, \psi) = \sum_{i=0}^\infty \sum_{j=0}^\infty a_{ij}\,\tau^i\,\Theta^j(\psi),
\]
where Θ : C → ℝ is a bounded linear functional. Analogous to ordinary points for differential equations, we consider cases where
\[
\lambda\psi(0) = \sum_{k=0}^\infty r_k\,\Theta^k(\psi)
\qquad\text{and}\qquad
\frac{1}{2}\,\psi^2(0)\,g^2(\psi) = \sum_{k=0}^\infty g_k\,\Theta^k(\psi).
\]
The bounded linear functional Θ may be chosen so as to simplify the expressions for Ψ, λψ(0), and ½ψ²(0)g²(ψ). Using the series expression for Φ, we have
\[
\partial_\tau\Phi(\tau, \psi) = \sum_{i=1}^\infty \sum_{j=0}^\infty a_{ij}\,i\,\tau^{i-1}\,\Theta^j(\psi),
\]
\[
D\Phi(\tau, \psi)(1_{\{0\}}) = \sum_{i=0}^\infty \sum_{j=1}^\infty a_{ij}\,\tau^i\,j\,\Theta^{j-1}(\psi)\,\Theta(1_{\{0\}}),
\]
\[
D^2\Phi(\tau, \psi)(1_{\{0\}}, 1_{\{0\}}) = \sum_{i=0}^\infty \sum_{j=2}^\infty a_{ij}\,\tau^i\,j(j-1)\,\Theta^{j-2}(\psi)\,\Theta^2(1_{\{0\}}),
\]
and
\[
S(\Phi)(\tau, \psi) = \sum_{i=0}^\infty \sum_{j=1}^\infty a_{ij}\,\tau^i\,j\,\Theta^{j-1}(\psi)\,S(\Theta)(\psi).
\]

Therefore,
\[
\sum_{i=1}^\infty\sum_{j=0}^\infty a_{ij}\,i\,\tau^{i-1}\,\Theta^j(\psi)
= \sum_{i=0}^\infty\sum_{j=1}^\infty a_{ij}\,\tau^i\,j\,\Theta^{j-1}(\psi)\,S(\Theta)(\psi)
+ \left(\sum_{k=0}^\infty r_k\Theta^k(\psi)\right)\left(\sum_{i=0}^\infty\sum_{j=1}^\infty a_{ij}\,\tau^i\,j\,\Theta^{j-1}(\psi)\,\Theta(1_{\{0\}})\right)
\]
\[
+ \left(\sum_{k=0}^\infty g_k\Theta^k(\psi)\right)\left(\sum_{i=0}^\infty\sum_{j=2}^\infty a_{ij}\,\tau^i\,j(j-1)\,\Theta^{j-2}(\psi)\,\Theta^2(1_{\{0\}})\right),
\quad \forall (\tau, \psi) \in [0, T) \times \mathbf C_+.
\]
By reworking the indexes, we have
\[
\sum_{i=0}^\infty\sum_{j=0}^\infty a_{i+1,j}\,(i+1)\,\tau^i\,\Theta^j(\psi)
= \sum_{i=0}^\infty\sum_{j=0}^\infty a_{i,j+1}\,\tau^i\,(j+1)\,\Theta^j(\psi)\,S(\Theta)(\psi)
+ \left(\sum_{k=0}^\infty r_k\Theta^k(\psi)\right)\left(\sum_{i=0}^\infty\sum_{j=0}^\infty a_{i,j+1}\,\tau^i\,(j+1)\,\Theta^j(\psi)\,\Theta(1_{\{0\}})\right)
\]
\[
+ \left(\sum_{k=0}^\infty g_k\Theta^k(\psi)\right)\left(\sum_{i=0}^\infty\sum_{j=0}^\infty a_{i,j+2}\,\tau^i\,(j+2)(j+1)\,\Theta^j(\psi)\,\Theta^2(1_{\{0\}})\right),
\quad \forall (\tau, \psi) \in [0, T) \times \mathbf C_+.
\]
By collecting the coefficients of τ^i Θ^j(ψ), we have
\[
(i+1)\,a_{i+1,j} = (j+1)\,a_{i,j+1}\,S(\Theta)(\psi)
+ \big[(j+1)r_0\,a_{i,j+1} + j r_1\,a_{ij} + (j-1)r_2\,a_{i,j-1} + \cdots + (j-k+1)r_k\,a_{i,j-k+1} + \cdots + r_j\,a_{i1}\big]\,\Theta(1_{\{0\}})
\]
\[
+ \big[(j+2)(j+1)g_0\,a_{i,j+2} + (j+1)j g_1\,a_{i,j+1} + j(j-1)g_2\,a_{ij} + \cdots + (j-k+2)(j-k+1)g_k\,a_{i,j-k+2} + \cdots + 2g_j\,a_{i2}\big]\,\Theta^2(1_{\{0\}}),
\]
which, when rearranged, gives

328

6 Option Pricing

  $ % (i + 1)ai+1j = rj Θ(1{0} ) ai1 + 2rj−1 Θ(1{0} ) + 2gj Θ2 (1{0} ) ai2   + 3rj−2 Θ(10 ) + (3)(2)gj−1 Θ2 (1{0} ) ai3 + · · ·   + krj−k+1 Θ(1{0} ) + k(k − 1)gj−k+2 Θ2 (1{0} ) aik  + · · · + (j + 1)r0 Θ(1{0} ) + (j + 1)(j)g1 Θ2 (1{0} )    +(j + 1)S(Θ)(ψ) aij+1 + (j + 2)(j + 1)g0 Θ2 (1{0} ) aij+2 subject to Ψ (ψ) =

\[
(i+1)\,a_{i+1,j} = \big[r_j\,\Theta(1_{\{0\}})\big]\,a_{i1}
+ \big[2r_{j-1}\,\Theta(1_{\{0\}}) + 2g_j\,\Theta^2(1_{\{0\}})\big]\,a_{i2}
+ \big[3r_{j-2}\,\Theta(1_{\{0\}}) + (3)(2)g_{j-1}\,\Theta^2(1_{\{0\}})\big]\,a_{i3} + \cdots
\]
\[
+ \big[k r_{j-k+1}\,\Theta(1_{\{0\}}) + k(k-1)g_{j-k+2}\,\Theta^2(1_{\{0\}})\big]\,a_{ik} + \cdots
+ \big[(j+1)r_0\,\Theta(1_{\{0\}}) + (j+1)j g_1\,\Theta^2(1_{\{0\}}) + (j+1)S(\Theta)(\psi)\big]\,a_{i,j+1}
+ \big[(j+2)(j+1)g_0\,\Theta^2(1_{\{0\}})\big]\,a_{i,j+2},
\]
subject to
\[
\Psi(\psi) = \sum_{j=0}^\infty a_{0j}\,\Theta^j(\psi). \tag{6.49}
\]
Our procedure for finding the a_{ij} is then to find the a_{0j} using (6.49), then to find the a_{1j} using
\[
a_{1j} = b_{j1}\,a_{01} + b_{j2}\,a_{02} + \cdots + b_{j,j+2}\,a_{0,j+2},
\]
and so on, with
\[
a_{ij} = \frac{b_{j1}}{i}\,a_{i-1,1} + \frac{b_{j2}}{i}\,a_{i-1,2} + \cdots + \frac{b_{j,j+2}}{i}\,a_{i-1,j+2}, \tag{6.50}
\]
where
\[
b_{j1} = r_j\,\Theta(1_{\{0\}}), \qquad
b_{jk} = k r_{j-k+1}\,\Theta(1_{\{0\}}) + k(k-1)g_{j-k+2}\,\Theta^2(1_{\{0\}}) \quad (2 \le k \le j),
\]
\[
b_{j,j+1} = (j+1)r_0\,\Theta(1_{\{0\}}) + (j+1)j g_1\,\Theta^2(1_{\{0\}}) + (j+1)S(\Theta)(\psi),
\]
and
\[
b_{j,j+2} = (j+2)(j+1)g_0\,\Theta^2(1_{\{0\}}).
\]

(6.51)

6.7 Series Solution

329

Our modified problem becomes 1 ∂τ Φ(τ, S(τ )) = λS(τ )∂x Φ(τ, S(τ )) + σ 2 S 2 (τ )∂x2 Φ(τ, S(τ )), 2

(6.52)

and Φ(0, S(0)) = Ψ (S(0)). We let Θ(ψ) = S(0). The series solution coefficients now take the form a11 = b11 a01 , a12 = b22 a02 , and so on, with a1j = bjj a0j . In general, bjj ai−1j . aij = i Here, bjj = jr1 + j(j − 1)g2 , where r1 = λ and g2 = 12 σ 2 . Therefore, i

aij =

(bjj ) a0j , i!

with V (t, S(t)) = e−λ(T −t)

∞  ∞ 

aij (T − t)i S(0)j

i=0 j=0

= e−λ(T −t)

∞  ∞  (bjj )i

i!

i=0 j=0

a0j (T − t)i S(0)j .

6.7.3 Convergence of the Series Another consideration is that of convergence of the series solution. Again, assume a solution of the form Φ(τ, ψ) =

∞  ∞ 

aij τ i Θj (ψ).

i=0 j=0

Since ψ is fixed for a given problem, we can simplify the notation by letting x ≡ Θ(ψ) and defining u(t, x) =

∞  ∞ 

aij ti xj ,

i=0 j=0

with u(0, x) = Ψ (x) =

∞ 

a0j xj .

j=0

If we let

(1)

f0

= S(Θ)(ψ) + Θ(1{0} )r0

(6.53)

330

6 Option Pricing

and

(1)

fk for k = 1, 2, . . . , and

(2)

fk

= Θ(1{0} )rk = Θ2 (1{0} )gk

for k = 0, 1, 2, . . . , we can define f (1) (x) =

∞ 

(1)

fk xk

k=0

and f (2) (x) =

∞ 

(2)

fk xk

k=0

so that the equation to solve is ∂u ∂u ∂2u (t, x) = f (1) (x) (t, x) + f (2) (x) 2 (t, x). ∂t ∂x ∂x The functions f (1) and f (2) are clearly analytic, and so per the proof of the Cauchy-Kowalewski theorem found in Petrovsky [Pet91], there is a unique analytic solution of this partial differential equation. A final consideration is the size of the region of convergence of the solution. The proof in [Pet91] uses the method of majorants, and so the region of convergence of (6.53) is at least as large as that of the solution of the majorizing problem. Let the series representing f (1) converge in |x| ≤ Rf (1) , the series representing f (2) converge for |x| ≤ Rf (2) , and the series representing Ψ converge for |x| ≤ RΨ . Then |f (1) (x)| ≤

Mf (1) , 1 − R x(1) f

|f (2) (x)| ≤

Mf (2) , 1 − R x(2) f

and |Ψ (x)| ≤

MΨ . 1 − RxΨ

Let M = max{Mf (1) , Mf (2) , MΨ } and a = min{Rf (1) , Rf (2) , RΨ }, then M 1 − xa majorizes f (1) , f (2) , and λ. The proof in Petrovsky [Pet91] actually uses A(z) =

M , 1 − az

6.8 Conclusions and Remarks

where z =

t α

331

+ x and 0 < α < 1 is chosen so that α
0 and the unit price process, {S(t), t ≥ 0}, of the underlying stock follows a nonlinear stochastic hereditary differential equation (SHDE) (see (7.1)) with an infinite but fading memory. The main purpose of the stock account is to keep track of the inventories (i.e., the time instants and the base prices at which shares were purchased or short-sold) of the underlying stock for the purpose of calculating the capital gains taxes and so forth. In the stock price dynamics, we assume that both f(S_t) (the mean rate of return) and g(S_t) (the volatility coefficient) depend on the entire history of stock prices S_t over the time interval (−∞, t] instead of just the current stock price S(t) at time t ≥ 0 alone.

Within the solvency region Sκ (to be defined in (7.6)), and under the requirements of paying fixed plus proportional transaction costs and capital gains taxes, the investor is allowed to consume from his savings account in accordance with a consumption rate process C = {C(t), t ≥ 0} and to make transactions between his savings and stock accounts according to a trading strategy T = {(τ(i), ζ(i)), i = 1, 2, . . .}, where τ(i), i = 0, 1, 2, . . ., denote the sequence of transaction times and ζ(i) stands for the quantities of transactions at time τ(i) (see Definition 7.1.7). The investor will follow the following set of consumption, transaction, and taxation rules (Rules 7.1-7.6). Note that an action of the investor in the market is called a transaction if it involves trading of shares of the stock, such as buying and selling:

Rule 7.1. At the time of each transaction, the investor has to pay a transaction cost that consists of a fixed cost κ > 0 and a proportional transaction cost with the cost rate of µ ≥ 0 for both selling and buying shares of the stock. All of the purchases and sales of any number of stock shares will be considered one transaction if they are executed at the same time instant and therefore


incur only one fixed fee κ > 0 (in addition, of course, to the proportional transaction cost).

Rule 7.2. Within the solvency region Sκ, the investor is allowed to consume and to borrow money from his savings account for stock purchases. The investor can also sell and/or buy back at the current price shares of the stock he bought and/or short-sold at a previous time.

Rule 7.3. The proceeds from the sales of the stock minus the transaction costs and capital gains taxes will be deposited in his savings account, and the purchases of stock shares together with the associated transaction costs and capital gains taxes (if short shares of the stock are bought back at a profit) will be financed from his savings account.

Rule 7.4. Without loss of generality, it is assumed that the interest income in the savings account is tax-free by using the effective interest rate λ > 0, where the effective interest rate equals the interest rate paid by the bank minus the tax rate for the interest income.

Rule 7.5. At the time of a transaction (say t ≥ 0), the investor is required to pay a capital gains tax (respectively, be paid a capital loss credit) in the amount that is proportional to the amount of profit (respectively, loss). A sale of stock shares is said to result in a profit if the current stock price S(t) is higher than the base price B(t) of the stock, and in a loss otherwise. The base price B(t) is defined to be the price at which the stock shares were previously bought or short-sold; that is, B(t) = S(t − τ(t)), where τ(t) > 0 is the time duration for which those shares (long or short) have been held at time t. The investor will also pay capital gains taxes (respectively, be paid capital loss credits) for the amount of profit (respectively, loss) made by short-selling shares of the stock and then buying back the shares at a lower (respectively, higher) price at a later time. The tax will be paid (or the credit shall be given) at the buying-back time.
Throughout, a negative amount of tax will be interpreted as a capital-loss credit. The capital gains tax and capital loss credit rates are assumed to be the same, β > 0, for simplicity. Therefore, if |m| (m > 0 stands for buying and m < 0 stands for selling) shares of the stock having the base price B(t) = S(t − τ(t)) are traded at the current price S(t), then the amount of tax due at the transaction time is given by |m|β(S(t) − S(t − τ(t))).

Rule 7.6. The tax and/or credit will not exceed all other gross proceeds and/or total costs of the stock shares, that is, m(1 − µ)S(t) ≥ βm|S(t) − S(t − τ(t))| if m ≥ 0 and m(1 + µ)S(t) ≤ βm|S(t) − S(t − τ(t))| if m < 0, where m ∈ ℝ denotes the number of shares of the stock traded, with m ≥ 0 being the number of shares purchased and m < 0 being the number of shares of the stock sold.


Convention 7.7. Throughout, we assume that µ + β < 1. This implies that the expenses (proportional transaction cost and tax) associated with a trade will not exceed the proceeds.

Under the above assumptions and Rules 7.1-7.6, the investor's objective is to seek an optimal consumption-trading strategy (C*, T*) in order to maximize the expected utility from the total discounted consumption over the infinite time horizon

E[∫_0^∞ e^{−αt} (C^γ(t)/γ) dt],

where α > 0 represents the discount rate and 0 < γ < 1 represents the investor's risk aversion factor. Due to the fixed plus proportional transaction costs and the hereditary nature of the stock dynamics and inventories, the problem will be formulated as a combination of a classical control (for consumption) and an impulse control (for the transactions) problem in infinite dimensions. A combined classical-impulse control problem in finite dimensions is treated in Brekke and Øksendal [BØ98] for diffusion processes. In this chapter, a Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI) for the value function together with its boundary conditions is derived, and the verification theorem for the optimal investment-trading strategy is established. It is also shown that the value function is a viscosity solution of the HJBQVI (see HJBQVI (*) in Section 7.3.3 (D)). In recent years there has been an extensive amount of research on optimal consumption-trading problems with proportional transaction costs (see, e.g., Akian et al. [AMS96], Akian et al. [AST01], Davis and Norman [DN90], Shreve and Soner [SS94], and references contained therein) and fixed plus proportional transaction costs (see, e.g., Øksendal and Sulem [ØS02]), all within the geometric Brownian motion financial market.
In all of these papers, the objective has been to maximize the expected utility from the total discounted or averaged consumption over the infinite time horizon without considering the issues of capital gains taxes (respectively, capital loss credits) when stock shares are sold at a profit (respectively, loss). In different contexts, the issues of capital gains taxes have been studied in Cadenillas and Pliska [CP99], Constantinides [Con83, Con84], Dammon and Spatt [DS96], Leland [Lel99], Garlappi et al. [GNS01], Tahar and Touzi [TT03], Demiguel and Uppal [DU04], and references contained therein. In particular, [Con83] and [Con84] considered the effect of capital gains taxes and capital loss credits on capital market equilibrium without consumption and transaction costs. These two papers illustrated that, under some conditions, it may be more profitable to cut one's losses short and never to realize a gain because of capital loss credits and capital gains taxes, as some conventional wisdom would suggest. In [CP99], the optimal transaction time problem with proportional transaction costs and capital gains taxes was considered in order to maximize the long-run growth rate of the investment (the so-called Kelly criterion),


that is,

lim_{t→∞} (1/t) E[log V(t)],

where V(t) is the value of the investment measured at time t > 0. This paper is quite different from what is presented in this chapter in that the unit price of the stock is described by a geometric Brownian motion, and all shares of the stock owned by the investor are to be sold at a chosen transaction time, with all of the proceeds from the sale used to purchase new shares of the stock immediately after the sale, without consumption. Fortunately, due to the nature of the geometric Brownian motion market, the authors of that paper were able to obtain some explicit results.

In recent years, interest in stock price dynamics described by stochastic delay equations has increased tremendously (see, e.g., Chang and Youree [CY99] and [CY07]). To the best of the author's knowledge, Chang [Cha07a, Cha07b] was the first to treat the optimal consumption-trading problem in which the hereditary nature of the stock price dynamics and the issue of capital gains taxes are taken into consideration. Due to the drastically different nature of the problem and the techniques involved, the hereditary portfolio optimization problem with taxes and proportional transaction costs only (i.e., κ = 0 and µ > 0) remains to be solved. Much of the material presented in this chapter is taken from [Cha07a] and [Cha07b].

This chapter is organized as follows. The description of the stock price dynamics, the admissible consumption-trading strategies, and the formulation of the hereditary portfolio optimization problem are given in Section 7.1. In Section 7.2, the properties of the controlled state process are further explored and the corresponding infinite-dimensional Markovian solution of the price dynamics is investigated. Section 7.3 contains the derivations of the HJBQVI together with its boundary conditions (HJBQVI(*)) using a Bellman-type dynamic programming principle. The verification theorem for the optimal consumption-trading strategy and the proof that the value function is a viscosity solution of the HJBQVI(*) are contained in Sections 7.4 and 7.5, respectively.

Similar to the SHDE with bounded memory, we again use the following convention for systems with infinite but fading memory.

Convention 7.8. If t ≥ 0 and φ : ℝ → ℝ is a measurable function, define φ_t : (−∞, 0] → ℝ by φ_t(θ) = φ(t + θ), θ ∈ (−∞, 0].

7.1 The Hereditary Portfolio Optimization Problem

This section is devoted to the formulation of the hereditary portfolio optimization problem with capital gains taxes and fixed plus proportional transaction costs.


7.1.1 Hereditary Price Structure with Unbounded Memory

Let ρ : (−∞, 0] → [0, ∞) be the influence function with relaxation property that satisfies the following conditions.

Assumption 7.1.1 The function ρ : (−∞, 0] → [0, ∞) satisfies the following two conditions:

1. ρ is summable on (−∞, 0], that is,

   0 < ∫_{−∞}^{0} ρ(θ) dθ < ∞.

2. For every λ ≤ 0, one has

   K̄(λ) = ess sup_{θ∈(−∞,0]} ρ(θ + λ)/ρ(θ) ≤ K̄ < ∞,
   K(λ) = ess sup_{θ∈(−∞,0]} ρ(θ)/ρ(θ + λ) < ∞.

Under Assumption 7.1.1, it can be shown that ρ is essentially bounded and strictly positive on (−∞, 0]. Furthermore,

lim_{θ→−∞} θρ(θ) = 0.

Examples of ρ : (−∞, 0] → [0, ∞) that satisfy Assumption 7.1.1 are given in Section 2.4.1 of Chapter 2. Let M = ℝ × L²ρ(−∞, 0) (or simply ℝ × L²ρ for short) be the history space of the stock price dynamics, where L²ρ is the ρ-weighted Hilbert space of measurable functions φ : (−∞, 0) → ℝ such that

∫_{−∞}^{0} |φ(θ)|² ρ(θ) dθ < ∞.
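Both conditions of Assumption 7.1.1 are easy to verify for the standard exponential example ρ(θ) = e^{γθ}, γ > 0 (one of the examples referred to above). The sketch below is our own numerical illustration: the integral of ρ over (−∞, 0] is 1/γ, and for λ ≤ 0 the ratio ρ(θ + λ)/ρ(θ) is the constant e^{γλ} ≤ 1, so both essential suprema are finite.

```python
import math

gamma = 1.0
def rho(theta):                      # candidate influence function on (-inf, 0]
    return math.exp(gamma * theta)

# Condition 1 (summability): int_{-inf}^0 rho = 1/gamma; Riemann sum on [-50, 0].
dt = 1e-3
grid = [-50.0 + k * dt for k in range(50_000)]
integral = sum(rho(t) * dt for t in grid)
print(integral)                      # ≈ 1/gamma = 1.0

# Condition 2: for lam <= 0 the ratio rho(theta+lam)/rho(theta) is the constant
# exp(gamma*lam) <= 1, so K-bar(lam) = exp(gamma*lam), K(lam) = exp(-gamma*lam).
lam = -2.0
ratios = [rho(t + lam) / rho(t) for t in grid]
print(max(ratios) - math.exp(gamma * lam))   # ≈ 0
```

The truncation at θ = −50 and the step size are arbitrary numerical choices; the analytical values are exact for this ρ.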

Note that any constant function defined on (−∞, 0] is an element of L²ρ(−∞, 0). For t ∈ (−∞, ∞), let S(t) denote the unit price of the stock at time t. It is assumed that the unit stock price process {S(t), t ∈ (−∞, ∞)} satisfies the following SHDE with an unbounded but fading memory:

dS(t) = S(t)[f(S_t) dt + g(S_t) dW(t)], t ≥ 0.  (7.1)

In the above equation, the process {W (t), t ≥ 0} is a one-dimensional standard Brownian motion defined on a complete filtered probability space (Ω, F, P ; F), where F = {F(t), t ≥ 0} is the P -augmented natural filtration generated by the Brownian motion {W (t), t ≥ 0}. Note that f (St ) and g(St ) in (7.1) represent respectively the mean growth rate and the volatility rate of the stock price


at time t ≥ 0. Note that the stock is said to have a hereditary price structure with infinite but fading memory because both the drift term S(t)f(S_t) and the diffusion term S(t)g(S_t) on the right-hand side of (7.1) explicitly depend on the entire past history of prices (S(t), S_t) ∈ [0, ∞) × L²_{ρ,+} in a fashion weighted by the function ρ satisfying Assumption 7.1.1. Note that we have used the following notation in the above:

L²_{ρ,+} = {φ ∈ L²ρ | φ(θ) ≥ 0, ∀θ ∈ (−∞, 0)}.
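To make the role of the fading memory concrete, here is a minimal Euler-Maruyama sketch of an equation of the form (7.1). The particular drift and volatility below are our own illustrative stand-ins (a drift leaning toward an exponentially weighted historical mean, a constant volatility); they are not the book's f and g.

```python
import math, random

random.seed(0)

def weighted_mean(hist, dt, gamma=1.0):
    # Discrete stand-in for averaging the stored history against
    # rho(theta) = exp(gamma*theta); hist[-1] is the current price.
    n = len(hist)
    w = [math.exp(-gamma * (n - 1 - k) * dt) for k in range(n)]
    return sum(wi * xi for wi, xi in zip(w, hist)) / sum(w)

def f(hist, dt):       # hereditary mean rate of return (illustrative)
    return 0.05 + 0.1 * (weighted_mean(hist, dt) / hist[-1] - 1.0)

def g(hist, dt):       # volatility coefficient (illustrative, constant)
    return 0.2

# Euler-Maruyama for dS = S [ f(S_t) dt + g(S_t) dW ], constant initial history.
dt, n_steps = 1e-3, 1000
S = [100.0]
for _ in range(n_steps):
    dW = random.gauss(0.0, math.sqrt(dt))
    S.append(S[-1] * (1.0 + f(S, dt) * dt + g(S, dt) * dW))
print(len(S) - 1, S[-1] > 0.0)   # 1000 steps, price stayed positive
```

The entire simulated path is kept and re-weighted at every step, which is the discrete analogue of the segment process (S(t), S_t) carrying the whole history; a production scheme would truncate or recursively update the weighted average.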

It is assumed, for simplicity and to guarantee the existence and uniqueness of a strong solution {S(t), t ≥ 0}, that the initial price function (S(0), S_0) = (ψ(0), ψ) ∈ ℝ_+ × L²_{ρ,+} is given and that the functions f, g : L²ρ → [0, ∞) are continuous and satisfy the following Lipschitz and linear growth conditions (see, e.g., Sections 1.4 and 1.5 in Chapter 1 and Sections 2.4 and 2.5 in Chapter 2 for the theory of SHDEs with an unbounded but fading memory).

Assumption 7.1.2 The functions f and g satisfy the following conditions:

(A7.2.1) (Linear Growth Condition) There exists a constant K_grow > 0 such that

0 ≤ |φ(0)f(φ) + φ(0)g(φ)| ≤ K_grow(1 + ||(φ(0), φ)||_M), ∀(φ(0), φ) ∈ M.

(A7.2.2) (Lipschitz Condition) There exists a constant K_lip > 0 such that

|φ(0)f(φ) − ϕ(0)f(ϕ)| + |φ(0)g(φ) − ϕ(0)g(ϕ)| ≤ K_lip ||(φ(0), φ) − (ϕ(0), ϕ)||_M, ∀(φ(0), φ), (ϕ(0), ϕ) ∈ M,

where ||(φ(0), φ)||_M is the norm of the ρ-weighted Hilbert space M, defined by

||(φ(0), φ)||_M = (|φ(0)|² + ∫_{−∞}^{0} |φ(θ)|² ρ(θ) dθ)^{1/2}.

(A7.2.3) (Upper and Lower Bounds) There exist positive constants b, b̄, and σ such that

0 < λ < b ≤ f(φ) ≤ b̄ and 0 < σ ≤ g(φ), ∀φ ∈ L²_{ρ,+}.

Note that the lower bound on the mean rate of return f in Assumption (A7.2.3) is imposed to make sure that the stock account has a higher mean growth rate than the interest rate λ > 0 for the savings account. Otherwise, it would be more profitable and less risky for the investor to put all his money in the savings account for the purpose of optimizing the expected utility from the total consumption. As a reminder, let us repeat the meaning of the strong solution process {S(t), t ∈ (−∞, ∞)} of (7.1) as follows.


Definition 7.1.3 A process {S(t), t ∈ (−∞, ∞)} in ℝ is said to be a strong solution of (7.1) through the initial price function (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+} if it satisfies the following conditions:

1. S(t) = ψ(t), ∀t ∈ (−∞, 0].
2. {S(t), t ≥ 0} is F-adapted; that is, S(t) is F(t)-measurable for each t ≥ 0.
3. The process {S(t), t ≥ 0} is continuous and satisfies the following stochastic integral equation P-a.s. for all t ≥ 0:

   S(t) = ψ(0) + ∫_0^t S(s)f(S_s) ds + ∫_0^t S(s)g(S_s) dW(s).

4. ∫_0^T |S(t)f(S_t)| dt < ∞ P-a.s. for each T > 0.
5. ∫_0^T |S(t)g(S_t)|² dt < ∞ P-a.s. for each T > 0.

Definition 7.1.4 The strong solution {S(t), t ∈ (−∞, ∞)} of (7.1) through the initial datum (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+} is said to be strongly (or pathwise) unique if, whenever {S̃(t), t ∈ (−∞, ∞)} is another strong solution of (7.1) through the same initial function, then

P{S(t) = S̃(t), ∀t ≥ 0} = 1.

Under Assumption 7.1.1 and Assumptions (A7.2.1)-(A7.2.3), it can be shown that for each initial historical price function (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+}, the price process {S(t), t ≥ 0} exists and is a positive, continuous, and F-adapted process defined on (Ω, F, P; F).

Theorem 7.1.5 Assume that Assumption 7.1.1 and Assumptions (A7.2.1)-(A7.2.3) hold. Then, for each (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+}, there exists a unique non-negative strong solution process {S(t), t ∈ (−∞, ∞)} through the initial datum (ψ(0), ψ). Furthermore, the [0, ∞) × L²_{ρ,+}-valued segment process {(S(t), S_t), t ≥ 0} is strong Markovian with respect to the filtration G, where G = {G(t), t ≥ 0} is the filtration generated by {S(t), t ≥ 0}; that is, G(t) = σ(S(s), 0 ≤ s ≤ t) (= σ((S(s), S_s), 0 ≤ s ≤ t)), ∀t ≥ 0.

We also note here that since security exchanges have only existed in a finite past, it is realistic, though not technically required, to assume that the initial historical price function (ψ(0), ψ) has the property that ψ(θ) = 0 for all θ ≤ θ̄ for some θ̄ < 0. Although the modeling of stock prices is still under intensive investigation, it is not the intention here to address the validity of the model stock price dynamics described in (7.1), but to illustrate a hereditary optimization problem that depends explicitly on the entire past history of the stock prices for computing capital gains taxes or capital loss credits. The term "hereditary portfolio optimization" was coined for the first time by Chang [Cha07a, Cha07b]. We mention, however, that stochastic hereditary equations similar to (7.1) were first used to model the behavior of elastic materials with infinite memory, and that some special forms of stochastic functional differential equations with bounded memory have been used to model stock price dynamics in option pricing problems (see, e.g., Chang and Youree [CY99, CY07]).

7.1.2 The Stock Inventory Space

The space of stock inventories, N, will be the space of bounded functions ξ : (−∞, 0] → ℝ of the following form:

ξ(θ) = Σ_{k=0}^{∞} n(−k)1_{τ(−k)}(θ), θ ∈ (−∞, 0],  (7.2)

where {n(−k), k = 0, 1, 2, . . .} is a sequence in ℝ with n(−k) = 0 for all but finitely many k, −∞ < · · · < τ(−k) < · · · < τ(−1) < τ(0) = 0, and 1_{τ(−k)} is the indicator function at τ(−k). Let ||·||_N (the norm of the space N) be defined by

||ξ||_N = sup_{θ∈(−∞,0]} |ξ(θ)|, ∀ξ ∈ N.
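Concretely, since ξ is a sum of indicator spikes at distinct times, an inventory can be stored as the double sequence of pairs (n(−k), τ(−k)) and its norm is just the largest position size in absolute value. The sketch below is our own illustration with hypothetical position sizes and opening times.

```python
# An inventory xi in N as the double sequence {(n(-k), tau(-k))}: n > 0 is an
# open long position, n < 0 an open short position, opened at time tau <= 0.
xi = [(100, 0.0), (-40, -0.5), (25, -2.0)]        # hypothetical positions

# xi(theta) = sum_k n(-k) 1_{tau(-k)}(theta) with distinct spike times, so
# ||xi||_N = sup_theta |xi(theta)| = max_k |n(-k)|.
norm = max(abs(n) for n, _ in xi)
print(norm)                                        # 100

# The defining assumption on N: only finitely many open positions.
assert all(n != 0 for n, _ in xi)
```

Storing base prices alongside (n, τ) pairs is what makes the capital-gains computations of Section 7.1.4 possible.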

As illustrated in Sections 7.1.3 and 7.1.5, N is the space in which the investor's stock inventory lives. The assumption that n(−k) = 0 for all but finitely many k implies that the investor can only have finitely many open positions in his stock account. However, the number of open positions may increase from time to time. Note that the investor is said to have an open long (respectively, short) position at time τ if he still owns (respectively, owes) all or part of the stock shares that were originally purchased (respectively, short-sold) at a previous time τ. The only way to close a position is to sell what he owns and buy back what he owes.

Remark 7.1.6 The inventory at time t = 0 described in (7.2) can also be equivalently expressed as the following double sequence:

ξ = {(n(−k), τ(−k)), k = 0, 1, 2, . . .},

where n(−k) > 0 (respectively, n(−k) < 0) denotes the number of shares of the stock purchased (respectively, short-sold) by the investor at time τ(−k) for k = 0, 1, 2, . . ..

If η : ℝ → ℝ is a bounded function of the form

η(t) = Σ_{k=−∞}^{∞} n(k)1_{τ(k)}(t), −∞ < t < ∞,

where −∞ < · · · < τ(−k) < · · · < 0 = τ(0) < τ(1) < · · · < τ(k) < · · · < ∞, then for each t ≥ 0 we define, using Convention 7.8, the function η_t : (−∞, 0] → ℝ by η_t(θ) = η(t + θ), θ ∈ (−∞, 0]. In this case,

η_t(θ) = Σ_{k=−∞}^{∞} n(k)1_{τ(k)}(t + θ) = Σ_{k=−∞}^{Q(t)} n(k)1_{τ(k)}(θ), θ ∈ (−∞, 0],

where Q(t) = sup{k ≥ 0 | τ(k) ≤ t}. Note that if η_t represents the inventory of the investor's stock account, then η_t can also be expressed as the following double sequence (see Remark 7.1.6):

η_t = {(n(k), τ(k)), k = . . . , −2, −1, 0, 1, 2, . . . , Q(t)}.

7.1.3 Consumption-Trading Strategies

Let (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ ℝ × N × [0, ∞) × L²_{ρ,+} be the investor's initial portfolio immediately prior to t = 0; that is, the investor starts with x ∈ ℝ dollars in his savings account, the initial stock inventory

ξ(θ) = Σ_{k=0}^{∞} n(−k)1_{τ(−k)}(θ), θ ∈ (−∞, 0),

and the initial profile of historical stock prices (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+}, where n(−k) > 0 (respectively, n(−k) < 0) represents an open long (respectively, short) position at τ(−k). Within the solvency region Sκ (see (7.6)), the investor is allowed to consume from his savings account and can make transactions between his savings and stock accounts under Rules 7.1-7.6 and according to a consumption-trading strategy π = (C, T) defined below.

Definition 7.1.7 The pair π = (C, T) is said to be a consumption-trading strategy if the following hold:

(i) The consumption rate process C = {C(t), t ≥ 0} is a non-negative G-progressively measurable process such that

∫_0^T C(t) dt < ∞, P-a.s. ∀T > 0.

(ii) T = {(τ(i), ζ(i)), i = 1, 2, . . .} is a trading strategy with τ(i), i = 1, 2, . . ., being a sequence of trading times that are G-stopping times such that


0 = τ(0) ≤ τ(1) < · · · < τ(i) < · · · and lim_{i→∞} τ(i) = ∞ P-a.s.,

and for each i = 0, 1, . . ., ζ(i) = (. . . , m(i − k), . . . , m(i − 2), m(i − 1), m(i)) is an N-valued G(τ(i))-measurable random vector (instead of a random variable in ℝ) that represents the trading quantities at the trading time τ(i).

In the above, m(i) > 0 (respectively, m(i) < 0) is the number of stock shares newly purchased (respectively, short-sold) at the current time τ(i) and at the current price of S(τ(i)), and, for k = 1, 2, . . ., m(i − k) > 0 (respectively, m(i − k) < 0) is the number of stock shares bought back (respectively, sold) at the current time τ(i) and the current price of S(τ(i)) in his open short (respectively, long) position opened at the previous time τ(i − k) with the base price of S(τ(i − k)).

For each stock inventory ξ of the form expressed in (7.2), Rules 7.1-7.6 also dictate that the investor can purchase or short-sell new shares (i.e., −∞ < m(0) < ∞), can sell all or part of what he owns, that is, m(−k) ≤ 0 and n(−k) + m(−k) ≥ 0 if n(−k) > 0, and/or can buy back all or part of what he owes, that is, m(−k) ≥ 0 and n(−k) + m(−k) ≤ 0 if n(−k) < 0, all at the same time instant. Therefore, the trading quantities {m(−k), k = 0, 1, . . .} must satisfy the constraint set R(ξ) ⊂ N defined by

R(ξ) = {ζ ∈ N | ζ = Σ_{k=0}^{∞} m(−k)1_{τ(−k)}, −∞ < m(0) < ∞, and either n(−k) > 0, m(−k) ≤ 0, and n(−k) + m(−k) ≥ 0, or n(−k) < 0, m(−k) ≥ 0, and n(−k) + m(−k) ≤ 0, for k ≥ 1}.  (7.3)

7.1.4 Solvency Region

Throughout, the investor's state space S is taken to be S = ℝ × N × [0, ∞) × L²_{ρ,+}. An element (x, ξ, ψ(0), ψ) ∈ S is called a portfolio, where x ∈ ℝ is the investor's holding in his savings account, ξ is the investor's stock inventory, and (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+} is the profile of historical stock prices. Define the function Hκ : S → ℝ as follows:

Hκ(x, ξ, ψ(0), ψ) = max{Gκ(x, ξ, ψ(0), ψ), min{x, n(−k), k = 0, 1, 2, · · · }},  (7.4)


where Gκ : S → ℝ is the liquidating function defined by

Gκ(x, ξ, ψ(0), ψ) = x − κ + Σ_{k=0}^{∞} [min{(1 − µ)n(−k), (1 + µ)n(−k)}ψ(0) − n(−k)β(ψ(0) − ψ(τ(−k)))].  (7.5)

On the right-hand side of the above expression, x − κ is the amount in the savings account after deducting the fixed transaction cost κ, and, for each k = 0, 1, . . .,

min{(1 − µ)n(−k), (1 + µ)n(−k)}ψ(0) = the proceeds for selling n(−k) > 0 or buying back n(−k) < 0 shares of the stock, net of the proportional transaction cost;

−n(−k)β(ψ(0) − ψ(τ(−k))) = the capital gains tax to be paid for selling the n(−k) shares of the stock with the current price of ψ(0) and base price of ψ(τ(−k)).

Therefore, Gκ(x, ξ, ψ(0), ψ) defined in (7.5) represents the cash value (if the assets can be liquidated at all) after closing all open positions and paying all transaction costs (fixed plus proportional) and taxes. The solvency region Sκ of the portfolio optimization problem is defined as

Sκ = {(x, ξ, ψ(0), ψ) ∈ S | Hκ(x, ξ, ψ(0), ψ) ≥ 0} = {(x, ξ, ψ(0), ψ) ∈ S | Gκ(x, ξ, ψ(0), ψ) ≥ 0} ∪ S_+,  (7.6)

where S_+ = [0, ∞) × N_+ × [0, ∞) × L²_{ρ,+} and N_+ = {ξ ∈ N | ξ(θ) ≥ 0, ∀θ ∈ (−∞, 0]}. Note that within the solvency region Sκ there are positions that cannot be closed at all, namely those (x, ξ, ψ(0), ψ) ∈ Sκ such that (x, ξ, ψ(0), ψ) ∈ S_+ and Gκ(x, ξ, ψ(0), ψ) < 0. This is due to the insufficiency of funds to pay for the transaction costs and/or taxes. Observe that the solvency region Sκ is an unbounded and nonconvex subset of the state space S. The boundary ∂Sκ will be described in detail in Section 7.3.3.
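The liquidating function (7.5), the function Hκ of (7.4), and the trading constraint (7.3) translate directly into code. The sketch below is our own illustration; the parameter values and positions are hypothetical, and base prices are stored with each open position.

```python
kappa, mu, beta = 10.0, 0.01, 0.2       # fixed cost, proportional rate, tax rate

def G(x, inventory, price):
    # Liquidating function (7.5): cash after closing every open position and
    # paying kappa, the proportional costs, and the capital gains taxes.
    # inventory: list of (n, base) pairs, base = price when the position opened.
    total = x - kappa
    for n, base in inventory:
        total += min((1 - mu) * n, (1 + mu) * n) * price   # net proceeds
        total -= n * beta * (price - base)                  # capital gains tax
    return total

def H(x, inventory, price):
    # (7.4): max of the liquidation value and min{x, n(-k), k = 0, 1, ...}.
    return max(G(x, inventory, price), min([x] + [n for n, _ in inventory]))

def solvent(x, inventory, price):
    return H(x, inventory, price) >= 0

def admissible(inventory, m):
    # Constraint set R(xi) of (7.3): m[0] (the new trade) is unconstrained;
    # m[k], k >= 1, may only reduce the matching open position toward zero.
    return all((-n <= mk <= 0) if n > 0 else (0 <= mk <= -n)
               for (n, _), mk in zip(inventory, m[1:]))

inv = [(100, 40.0)]                     # one open long position, base price 40
print(G(0.0, inv, 50.0))                # 0 - 10 + 99*50 - 100*0.2*10 = 4740.0
print(solvent(0.0, inv, 50.0))          # True
print(admissible(inv, [5, -30]))        # sell 30 of the 100 held: True
print(admissible(inv, [5, 30]))         # cannot "sell" +30 into a long: False
```

Note how min{(1 − µ)n, (1 + µ)n} automatically charges the proportional cost on both sides: it picks (1 − µ) for longs (n > 0) and (1 + µ) for shorts (n < 0).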


7.1.5 Portfolio Dynamics and Admissible Strategies

At time t ≥ 0, the investor's portfolio in the financial market will be denoted by the quadruplet (X(t), N_t, S(t), S_t), where X(t) denotes the investor's holdings in his savings account, N_t ∈ N is the inventory of his stock account, and (S(t), S_t) describes the profile of the unit prices of the stock over the past history (−∞, t], as described in Section 7.1.1. Given the initial portfolio (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ S and applying a consumption-trading strategy π = (C, T) (see Definition 7.1.7), the portfolio dynamics of {Z(t) = (X(t), N_t, S(t), S_t), t ≥ 0} can then be described as follows.

First, the savings account holdings {X(t), t ≥ 0} satisfy the following differential equation between the trading times:

dX(t) = [λX(t) − C(t)] dt, τ(i) ≤ t < τ(i + 1), i = 0, 1, 2, . . .,  (7.7)

and the following jump at the trading time τ(i):

X(τ(i)) = X(τ(i)−) − κ
  − Σ_{k=0}^{∞} m(i − k)[(1 − µ)S(τ(i)) − β(S(τ(i)) − S(τ(i − k)))] 1_{{n(i−k)>0, −n(i−k)≤m(i−k)≤0}}
  − Σ_{k=0}^{∞} m(i − k)[(1 + µ)S(τ(i)) − β(S(τ(i)) − S(τ(i − k)))] 1_{{n(i−k)<0, 0≤m(i−k)≤−n(i−k)}}.  (7.8)

In the above, m(i) > 0 (respectively, m(i) < 0) means buying (respectively, selling) new stock shares at τ(i), and m(i − k) > 0 (respectively, m(i − k) < 0) means buying back (respectively, selling) some or all of what he owed (respectively, owned).

Second, the inventory of the investor's stock account at time t ≥ 0, N_t ∈ N, does not change between the trading times and can be expressed as

N_t = N_{τ(i)} = Σ_{k=−∞}^{Q(t)} n(k)1_{τ(k)}  if τ(i) ≤ t < τ(i + 1), i = 0, 1, 2, . . .,  (7.9)

where Q(t) = sup{k ≥ 0 | τ(k) ≤ t}. It has the following jump at the trading time τ(i):

N_{τ(i)} = N_{τ(i)−} ⊕ ζ(i),  (7.10)

where N_{τ(i)−} ⊕ ζ(i) : (−∞, 0] → N is defined by

(N_{τ(i)−} ⊕ ζ(i))(θ) = Σ_{k=0}^{∞} n̂(i − k)1_{τ(i−k)}(τ(i) + θ)
  = m(i)1_{τ(i)}(τ(i) + θ) + Σ_{k=1}^{∞} [n(i − k) + m(i − k)(1_{{n(i−k)<0, 0≤m(i−k)≤−n(i−k)}} + 1_{{n(i−k)>0, −n(i−k)≤m(i−k)≤0}})] 1_{τ(i−k)}(τ(i) + θ),  (7.11)

for θ ∈ (−∞, 0].

Third, since the investor is small, the unit stock price process {S(t), t ≥ 0} will not be in any way affected by the investor's actions in the market and is again described as in (7.1).

Definition 7.1.8 If the investor starts with an initial portfolio (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ Sκ, the consumption-trading strategy π = (C, T) defined in Definition 7.1.7 is said to be admissible at (x, ξ, ψ(0), ψ) if

ζ(i) ∈ R(N_{τ(i)−}),

∀i = 1, 2, . . .

and (X(t), N_t, S(t), S_t) ∈ Sκ, ∀t ≥ 0. The class of consumption-investment strategies admissible at (x, ξ, ψ(0), ψ) ∈ Sκ will be denoted by Uκ(x, ξ, ψ(0), ψ).

7.1.6 The Problem Statement

Given the initial state (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ) ∈ Sκ, the investor's objective is to find an admissible consumption-trading strategy π* ∈ Uκ(x, ξ, ψ(0), ψ) that maximizes the following expected utility from the total discounted consumption:

Jκ(x, ξ, ψ(0), ψ; π) = E^{x,ξ,ψ(0),ψ;π}[∫_0^∞ e^{−αt} (C^γ(t)/γ) dt]  (7.12)

among the class of admissible consumption-trading strategies Uκ(x, ξ, ψ(0), ψ), where E^{x,ξ,ψ(0),ψ;π}[· · ·] is the expectation with respect to P^{x,ξ,ψ(0),ψ;π}{· · ·}, the probability measure induced by the controlled (by π) state process {(X(t), N_t, S(t), S_t), t ≥ 0} and conditioned on the initial state (X(0−), N_{0−}, S(0), S_0) = (x, ξ, ψ(0), ψ). In the above, α > 0 denotes the discount factor, and 0 < γ < 1 indicates that the utility function U(c) = c^γ/γ, for c > 0, is of HARA (hyperbolic absolute risk aversion) type, as considered in most of the optimal


consumption-trading literature (see, e.g., Davis and Norman [DN90], Akian et al. [AMS96], Akian et al. [AST01], Shreve and Soner [SS94], and Øksendal and Sulem [ØS02]) with or without a fixed transaction cost. The admissible (consumption-trading) strategy π* ∈ Uκ(x, ξ, ψ(0), ψ) that maximizes Jκ(x, ξ, ψ(0), ψ; π) is called an optimal (consumption-trading) strategy, and the function Vκ : Sκ → ℝ_+ defined by

Vκ(x, ξ, ψ(0), ψ) = sup_{π∈Uκ(x,ξ,ψ(0),ψ)} Jκ(x, ξ, ψ(0), ψ; π) = Jκ(x, ξ, ψ(0), ψ; π*)  (7.13)

is called the value function of the hereditary portfolio optimization problem. The hereditary portfolio optimization problem considered in this chapter is then formalized as the following combined classical-impulse control problem.

Problem (HPOP). For each given initial state (x, ξ, ψ(0), ψ) ∈ Sκ, identify the optimal strategy π* and its corresponding value function Vκ : Sκ → [0, ∞).
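A single trading-time jump of the controlled state can be sketched from (7.8) and (7.10)-(7.11). The code below is our own illustration with hypothetical parameters; in particular, how a brand-new trade m(i) is charged (bought at (1 + µ)S, a short sale credited at (1 − µ)S) is our reading of Rules 7.1-7.3, since the displayed form of (7.8) shows only the position-closing terms.

```python
kappa, mu, beta = 10.0, 0.01, 0.2      # illustrative cost and tax parameters

def apply_transaction(x, inventory, price, m_new, closes):
    # One jump of the portfolio at a trading time: update the savings account
    # in the spirit of (7.8) and the inventory in the spirit of (7.10)-(7.11).
    # inventory: list of (n, base); closes: m(i-k) aligned with inventory.
    x -= kappa
    new_inv = []
    for (n, base), m in zip(inventory, closes):
        if n > 0 and -n <= m <= 0:      # sell part of a long position
            x -= m * ((1 - mu) * price - beta * (price - base))
        elif n < 0 and 0 <= m <= -n:    # buy back part of a short position
            x -= m * ((1 + mu) * price - beta * (price - base))
        else:
            raise ValueError("trade violates the constraint set R(xi)")
        if n + m != 0:
            new_inv.append((n + m, base))
    if m_new > 0:                        # new purchase financed from savings
        x -= m_new * (1 + mu) * price
    elif m_new < 0:                      # new short sale credited to savings
        x -= m_new * (1 - mu) * price
    if m_new != 0:
        new_inv.insert(0, (m_new, price))
    return x, new_inv

# Sell an entire long position of 100 shares (base price 40) at price 50:
x1, inv1 = apply_transaction(1000.0, [(100, 40.0)], 50.0, 0, [-100])
print(x1, inv1)   # 1000 - 10 + 100*(49.5 - 2.0) = 5740.0, inventory now empty
```

Because m < 0 for sales, subtracting m·[(1 − µ)S − β(S − B)] adds the after-cost, after-tax proceeds to the savings account, exactly as the sign convention of (7.8) intends.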

7.2 The Controlled State Process

Given an initial state (x, ξ, ψ(0), ψ) ∈ Sκ and an admissible consumption-investment strategy π = (C, T) ∈ Uκ(x, ξ, ψ(0), ψ), the Sκ-valued controlled state process will be denoted by {Z(t) = (X(t), N_t, S(t), S_t), t ≥ 0}. Note that the dependence of the controlled state process on the initial state (x, ξ, ψ(0), ψ) and the admissible consumption-trading strategy π will be suppressed for notational simplicity. The main purpose of this section is to establish the Markov property and Dynkin's formula for the controlled state process {Z(t), t ≥ 0}. Note that the [0, ∞) × L²_{ρ,+}(= M_+)-valued process {(S(t), S_t), t ≥ 0} described by (7.1) is uncontrollable by the investor and is therefore independent of the consumption-trading strategy π ∈ Uκ(x, ξ, ψ(0), ψ), but is dependent on the initial historical price function (S(0), S_0) = (ψ(0), ψ) ∈ [0, ∞) × L²_{ρ,+}.

7.2.1 The Properties of the Stock Prices

To study the Markovian properties of the [0, ∞) × L²_{ρ,+}-valued solution process {(S(t), S_t), t ≥ 0}, where S_t(θ) = S(t + θ), θ ∈ (−∞, 0], and (S(0), S_0) = (ψ(0), ψ), we need the following notation and ancillary results. Let M* be the space of bounded linear functionals on M (or the topological dual of the space M), equipped with the operator norm ||·||*_M defined by

||Φ||*_M = sup_{(φ(0),φ)≠(0,0)} |Φ(φ(0), φ)| / ||(φ(0), φ)||_M, Φ ∈ M*.

Note that M* can be identified with M = ℝ × L²ρ by the well-known Riesz representation theorem.


Let M† be the space of bounded bilinear functionals Φ : M × M → ; that is, Φ((φ(0), φ), (·, ·)), Φ((·, ·), (φ(0), φ)) ∈ M∗ for each (φ(0), φ) ∈ M, equipped with the operator norm · †M defined by Φ †M = =

Φ((·, ·), (φ(0), φ)) ∗M (φ(0), φ) M (φ(0),φ)=(0,0) sup

Φ((φ(0), φ), (·, ·)) ∗M . (φ(0), φ) M (φ(0),φ)=(0,0) sup

Let Φ : M → ℝ. The function Φ is said to be Fréchet differentiable at (φ(0), φ) ∈ M if for each (ϕ(0), ϕ) ∈ M,

\[
\Phi((\phi(0),\phi)+(\varphi(0),\varphi)) - \Phi(\phi(0),\phi)
= D\Phi(\phi(0),\phi)(\varphi(0),\varphi) + o(\|(\varphi(0),\varphi)\|_{\mathbf M}),
\]

where DΦ : M → M* and o : ℝ → ℝ is a function such that

\[
\frac{o(\|(\varphi(0),\varphi)\|_{\mathbf M})}{\|(\varphi(0),\varphi)\|_{\mathbf M}} \to 0
\quad\text{as}\quad \|(\varphi(0),\varphi)\|_{\mathbf M} \to 0.
\]

In this case, DΦ(φ(0), φ) ∈ M* is called the (first-order) Fréchet derivative of Φ at (φ(0), φ) ∈ M. The function Φ is said to be continuously Fréchet differentiable if its Fréchet derivative DΦ : M → M* is continuous under the operator norm ‖·‖*_M. The function Φ is said to be twice Fréchet differentiable at (φ(0), φ) ∈ M if its Fréchet derivative DΦ(φ(0), φ) : M → ℝ exists and there exists a bounded bilinear functional D²Φ(φ(0), φ) : M × M → ℝ, where for each (ϕ(0), ϕ), (ς(0), ς) ∈ M, D²Φ(φ(0), φ)((·, ·), (ϕ(0), ϕ)), D²Φ(φ(0), φ)((ς(0), ς), (·, ·)) ∈ M* and where

\[
\big( D\Phi((\phi(0),\phi)+(\varphi(0),\varphi)) - D\Phi(\phi(0),\phi) \big)(\varsigma(0),\varsigma)
= D^{2}\Phi(\phi(0),\phi)((\varsigma(0),\varsigma),(\varphi(0),\varphi))
+ o(\|(\varsigma(0),\varsigma)\|_{\mathbf M}, \|(\varphi(0),\varphi)\|_{\mathbf M}).
\]

Here, o : ℝ × ℝ → ℝ is such that

\[
\frac{o(\,\cdot\,, \|(\varphi(0),\varphi)\|_{\mathbf M})}{\|(\varphi(0),\varphi)\|_{\mathbf M}} \to 0
\quad\text{and}\quad
\frac{o(\|(\varphi(0),\varphi)\|_{\mathbf M}, \,\cdot\,)}{\|(\varphi(0),\varphi)\|_{\mathbf M}} \to 0
\quad\text{as}\quad \|(\varphi(0),\varphi)\|_{\mathbf M} \to 0.
\]

In this case, the bounded bilinear functional D²Φ(φ(0), φ) : M × M → ℝ is the second-order Fréchet derivative of Φ at (φ(0), φ) ∈ M. The second-order Fréchet derivative D²Φ is said to be globally Lipschitz on M if there exists a constant K > 0 such that

\[
\|D^{2}\Phi(\phi(0),\phi) - D^{2}\Phi(\varphi(0),\varphi)\|^{\dagger}_{\mathbf M}
\le K\,\|(\phi(0),\phi)-(\varphi(0),\varphi)\|_{\mathbf M},
\qquad \forall (\phi(0),\phi),\,(\varphi(0),\varphi) \in \mathbf M.
\]

Assuming all the partial and/or Fréchet derivatives below exist, the actions of the first-order Fréchet derivative DΦ(φ(0), φ) and the second-order Fréchet derivative D²Φ(φ(0), φ) can be expressed as

\[
D\Phi(\phi(0),\phi)(\varphi(0),\varphi) = \varphi(0)\,\partial_{\phi(0)}\Phi(\phi(0),\phi) + D_{\phi}\Phi(\phi(0),\phi)\varphi
\]

and

\[
D^{2}\Phi(\phi(0),\phi)((\varphi(0),\varphi),(\varsigma(0),\varsigma))
= \varphi(0)\,\partial^{2}_{\phi(0)}\Phi(\phi(0),\phi)\,\varsigma(0)
+ \varsigma(0)\,\partial_{\phi(0)}D_{\phi}\Phi(\phi(0),\phi)\varphi
+ \varphi(0)\,D_{\phi}\partial_{\phi(0)}\Phi(\phi(0),\phi)\varsigma
+ D^{2}_{\phi}\Phi(\phi(0),\phi)(\varphi,\varsigma),
\]

where ∂_{φ(0)}Φ and ∂²_{φ(0)}Φ are the first- and second-order partial derivatives of Φ with respect to its first variable φ(0) ∈ ℝ, D_φΦ and D²_φΦ are the first- and second-order Fréchet derivatives with respect to its second variable φ ∈ L²ρ, ∂_{φ(0)}D_φΦ is the second-order derivative taken first with respect to φ in the Fréchet sense and then with respect to φ(0), and so forth. Let C^{2,2}(ℝ × L²ρ) ≡ C²(M) be the space of functions Φ : ℝ × L²ρ (≡ M) → ℝ that are twice continuously differentiable with respect to both the first and the second variable. The space of Φ ∈ C^{2,2}(ℝ × L²ρ) (= C²(M)) with D²Φ globally Lipschitz will be denoted by C^{2,2}_{lip}(ℝ × L²ρ) (or, equivalently, C²_{lip}(M)).

The Weak Infinitesimal Generator S

For each (φ(0), φ) ∈ ℝ × L²ρ, define φ̃ : ℝ → ℝ by

\[
\tilde\phi(t) =
\begin{cases}
\phi(0) & \text{for } t \in [0,\infty) \\
\phi(t) & \text{for } t \in (-\infty,0).
\end{cases}
\]

Then for each θ ∈ (−∞, 0] and t ∈ [0, ∞),

\[
\tilde\phi_t(\theta) = \tilde\phi(t+\theta) =
\begin{cases}
\phi(0) & \text{for } t+\theta \ge 0 \\
\phi(t+\theta) & \text{for } t+\theta < 0.
\end{cases}
\]

A bounded measurable function Φ : ℝ × L²ρ → ℝ (i.e., Φ ∈ C_b(ℝ × L²ρ)) is said to belong to D(S), the domain of the weak infinitesimal operator S, if the following limit exists for each fixed (φ(0), φ) ∈ ℝ × L²ρ:

\[
S\Phi(\phi(0),\phi) \equiv \lim_{t\downarrow 0} \frac{\Phi(\phi(0),\tilde\phi_t) - \Phi(\phi(0),\phi)}{t}. \tag{7.14}
\]
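The limit (7.14) can be checked numerically for a simple quasi-tame functional of the kind introduced below in (7.18)-(7.19). The sketch below is illustrative only: it assumes the concrete choices η(x) = x², λ(θ) = e^θ, and the history φ(θ) = e^θ, for which SΦ = η(φ(0))λ(0) − ∫η(φ(θ))λ̇(θ)dθ = 1 − 1/3 = 2/3, and compares that with the one-sided difference quotient built from the frozen segment φ̃_t.

```python
import math

def Phi(phi, theta_grid):
    """Quasi-tame functional Phi(phi) = int_{-inf}^0 phi(t)^2 e^t dt
    (eta(x) = x^2, lambda(t) = e^t), evaluated by the trapezoid rule."""
    vals = [phi(t) ** 2 * math.exp(t) for t in theta_grid]
    h = theta_grid[1] - theta_grid[0]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def shifted(phi, t):
    """The segment phi~_t of (7.14): frozen at phi(0) on [-t, 0]."""
    return lambda u: phi(0.0) if u >= -t else phi(t + u)

# history phi(t) = e^t on (-inf, 0], truncated at -30 where the weight is tiny
grid = [-30.0 + k * 1e-3 for k in range(30001)]
phi = math.exp

t = 0.05
quotient = (Phi(shifted(phi, t), grid) - Phi(phi, grid)) / t
exact = 2.0 / 3.0   # closed form: eta(phi(0))*lambda(0) - int eta(phi)*lambda'
```

The difference quotient agrees with the closed form up to the O(t) discretization of the limit; by contrast, no such limit exists for the tame functional of Remark 7.2.1 below.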


Remark 7.2.1 Note that Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) does not guarantee that Φ ∈ D(S). For example, let θ̄ > 0 and define a simple tame function Φ : ℝ × L²ρ → ℝ by

\[
\Phi(\phi(0),\phi) = \phi(-\bar\theta), \qquad \forall (\phi(0),\phi) \in \mathbb R \times L^{2}_{\rho}.
\]

Then it can be shown that Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) and yet Φ ∉ D(S).

It is shown in Section 2.5 of Chapter 2, however, that any tame function of the above form can be approximated by a sequence of quasi-tame functions that are in D(S). Again, consider the associated Markovian ℝ × L²ρ-valued process {(S(t), St), t ≥ 0} described by (7.1) with the initial historical price function (S(0), S0) = (ψ(0), ψ) ∈ ℝ × L²ρ. The following four theorems are repetitions of those in Section 2.5 of Chapter 2. Proofs can be found in that section and are therefore omitted here.

Theorem 7.2.2 If Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) ∩ D(S), then

\[
\lim_{t\downarrow 0} \frac{E[\Phi(S(t),S_t) - \Phi(\psi(0),\psi)]}{t} = \mathbf A\Phi(\psi(0),\psi), \tag{7.15}
\]

where

\[
\mathbf A\Phi(\psi(0),\psi) = S\Phi(\psi(0),\psi)
+ \partial_x\Phi(\psi(0),\psi)\,\psi(0)f(\psi)
+ \tfrac{1}{2}\,\partial_x^{2}\Phi(\psi(0),\psi)\,\psi^{2}(0)g^{2}(\psi) \tag{7.16}
\]

and SΦ(ψ(0), ψ) is as defined in (7.14). A glance at (7.16) seems to suggest that AΦ(ψ(0), ψ) requires only the existence of the first- and second-order partial derivatives ∂_{φ(0)}Φ and ∂²_{φ(0)}Φ of Φ(ψ(0), ψ) with respect to its first variable ψ(0) ∈ ℝ. However, a detailed derivation of the formula reveals that a condition stronger than Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) is required. We have the following Dynkin formula (see Section 2.5 in Chapter 2):

Theorem 7.2.3 Let Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) ∩ D(S). Then

\[
E[e^{-\alpha\tau}\Phi(S(\tau),S_\tau)] - \Phi(\psi(0),\psi)
= E\Big[ \int_0^{\tau} e^{-\alpha t}(\mathbf A - \alpha I)\Phi(S(t),S_t)\,dt \Big], \tag{7.17}
\]

for all P-a.s. finite G-stopping times τ. A function Φ ∈ C^{2,2}_{lip}(ℝ × L²ρ) ∩ D(S) that has the following special form is referred to as a quasi-tame function:

\[
\Phi(\phi(0),\phi) = h(q(\phi(0),\phi)), \tag{7.18}
\]

where

\[
q(\phi(0),\phi) = \Big( \phi(0),\ \int_{-\infty}^{0}\eta_1(\phi(\theta))\lambda_1(\theta)\,d\theta,\ \ldots,\ \int_{-\infty}^{0}\eta_n(\phi(\theta))\lambda_n(\theta)\,d\theta \Big),
\qquad \forall (\phi(0),\phi) \in \mathbb R \times L^{2}_{\rho}, \tag{7.19}
\]

for some positive integer n and some functions q ∈ C(ℝ × L²ρ; ℝ^{n+1}), η_i ∈ C^∞(ℝ), λ_i ∈ C¹((−∞, 0]) with

\[
\lim_{\theta\to-\infty}\lambda_i(\theta) = \lambda_i(-\infty) = 0
\]

for i = 1, 2, ..., n, and h ∈ C^∞(ℝ^{n+1}) of the form h(x, y_1, y_2, ..., y_n). We have the following Itô formula (see Section 2.6 in Chapter 2) for the case in which Φ is a quasi-tame function in the sense defined above.

Theorem 7.2.4 Let {(S(t), St), t ≥ 0} be the ℝ × L²ρ-valued solution process corresponding to (7.1) with an initial historical price function (ψ(0), ψ) ∈ ℝ × L²ρ. If Φ ∈ C(ℝ × L²ρ) is a quasi-tame function, then Φ ∈ D(A) and

\[
e^{-\alpha\tau}\Phi(S(\tau),S_\tau) = \Phi(\psi(0),\psi)
+ \int_0^\tau e^{-\alpha t}(\mathbf A-\alpha I)\Phi(S(t),S_t)\,dt
+ \int_0^\tau e^{-\alpha t}\,\partial_{\phi(0)}\Phi(S(t),S_t)\,S(t)\,g(S_t)\,dW(t) \tag{7.20}
\]

for all finite G-stopping times τ, where I is the identity operator. Moreover, if Φ ∈ C(ℝ × L²ρ) is of the form described in (7.18) and (7.19), then

\[
\mathbf A\Phi(\psi(0),\psi)
= \sum_{i=1}^{n} h_{y_i}(q(\psi(0),\psi))\Big( \eta_i(\psi(0))\lambda_i(0) - \int_{-\infty}^{0}\eta_i(\psi(\theta))\dot\lambda_i(\theta)\,d\theta \Big)
+ h_x(q(\psi(0),\psi))\,\psi(0)f(\psi)
+ \tfrac12\,h_{xx}(q(\psi(0),\psi))\,\psi^{2}(0)g^{2}(\psi), \tag{7.21}
\]

where h_x, h_{y_i}, and h_{xx} denote the partial derivatives of h(x, y_1, ..., y_n) with respect to the indicated variables.

In the following, we state that the above Itô formula also holds for any tame function Φ : ℝ × C(−∞, 0] → ℝ of the form

\[
\Phi(\phi(0),\phi) = h(q(\phi(0),\phi)) = h(\phi(0), \phi(-\theta_1), \ldots, \phi(-\theta_k)), \tag{7.22}
\]

where C(−∞, 0] is the space of continuous functions φ : (−∞, 0] → ℝ equipped with the uniform topology, 0 < θ_1 < θ_2 < ... < θ_k < ∞, and h(x, y_1, ..., y_k) is such that h ∈ C^∞(ℝ^{k+1}).

Theorem 7.2.5 Let {(S(t), St), t ≥ 0} be the ℝ × L²ρ-valued process corresponding to Equation (7.1) with an initial historical price function (ψ(0), ψ) ∈ ℝ × L²ρ. If Φ : ℝ × C(−∞, 0] → ℝ is a tame function defined by (7.22), then Φ ∈ D(A) and

\[
e^{-\alpha\tau} h(S(\tau), S(\tau-\theta_1), \ldots, S(\tau-\theta_k))
= h(\psi(0), \psi(-\theta_1), \ldots, \psi(-\theta_k))
+ \int_0^\tau e^{-\alpha t}(\mathbf A-\alpha I)\,h(S(t), S(t-\theta_1), \ldots, S(t-\theta_k))\,dt
+ \int_0^\tau e^{-\alpha t}\,\partial_x h(S(t), S(t-\theta_1), \ldots, S(t-\theta_k))\,S(t)\,g(S_t)\,dW(t) \tag{7.23}
\]

for all finite G-stopping times τ, where

\[
\mathbf A h(\psi(0), \psi(-\theta_1), \ldots, \psi(-\theta_k))
= \partial_x h(\psi(0), \psi(-\theta_1), \ldots, \psi(-\theta_k))\,\psi(0)f(\psi)
+ \tfrac12\,\partial_x^{2} h(\psi(0), \psi(-\theta_1), \ldots, \psi(-\theta_k))\,\psi^{2}(0)g^{2}(\psi), \tag{7.24}
\]

with ∂_x h and ∂²_x h being the first- and second-order derivatives of h(x, y_1, ..., y_k) with respect to x.

7.2.2 Dynkin's Formula for the Controlled State Process

Similar to the processes {X(t), t ≥ 0} and {(S(t), St), t ≥ 0}, the N-valued controlled inventory process {Nt, t ≥ 0} of the investor's stock account described by (7.9) and (7.10) also satisfies the following change-of-variable formula:

\[
e^{-\alpha\tau}\Phi(N_\tau) = \Phi(\xi) + \sum_{0\le t\le \tau} e^{-\alpha t}\,[\Phi(N_t) - \Phi(N_{t-})], \tag{7.25}
\]

for all Φ ∈ C_b(N) (the space of bounded and continuous functions from N to ℝ) and all finite G-stopping times τ, where N_{τ−} = lim_{t↓0} N_{τ−t}. The above change-of-variable formula is rather self-explanatory.

Notation. In the following, we use the convention that C^{1,0,2,2}_{lip}(O) is the collection of continuous functions Φ : O → ℝ (Sκ ⊂ O) such that Φ(·, ξ, ψ(0), ψ) ∈ C¹(ℝ) for each (ξ, ψ(0), ψ), and Φ(x, ξ, ·, ·) ∈ C^{2,2}_{lip}(ℝ × L²ρ) ∩ D(S) for each (x, ξ).

Combining the above results of this section, we have the following Dynkin formula for the Sκ-valued state process {Z(t) = (X(t), Nt, S(t), St), t ≥ 0} controlled by the admissible strategy π:

\[
E[e^{-\alpha\tau}\Phi(Z(\tau))]
= \Phi(Z(0-))
+ E\Big[ \int_0^\tau e^{-\alpha t}\,\mathcal L^{C(t)}\Phi(Z(t))\,dt \Big]
+ E\Big[ \sum_{0\le t\le\tau} e^{-\alpha t}\big( \Phi(Z(t)) - \Phi(Z(t-)) \big) \Big], \tag{7.26}
\]


for all Φ : ℝ × N × ℝ × L²ρ → ℝ such that Φ(·, ξ, ψ(0), ψ) ∈ C¹(ℝ) for each (ξ, ψ(0), ψ) ∈ N × ℝ × L²ρ and Φ(x, ξ, ·, ·) ∈ C^{2,2}_{lip}(ℝ × L²ρ) ∩ D(S) for each (x, ξ) ∈ ℝ × N, where

\[
\mathcal L^{c}\Phi(x,\xi,\psi(0),\psi) = (\mathbf A - \alpha I + (rx - c)\partial_x)\Phi(x,\xi,\psi(0),\psi) \tag{7.27}
\]

and A is as defined in (7.16). Note that E[···] in the above stands for E^{x,ξ,ψ(0),ψ;π}[···]. In the case that Φ ∈ C(ℝ × N × ℝ × L²ρ) is such that Φ(x, ξ, ·, ·) : ℝ × L²ρ → ℝ is a quasi-tame (respectively, tame) function on ℝ × L²ρ of the form described in (7.18) and (7.19) (respectively, (7.22)), then the following Itô formula for the controlled state process {Z(t) = (X(t), Nt, S(t), St), t ≥ 0} also holds.

Theorem 7.2.6 If Φ ∈ C(ℝ × N × ℝ × L²ρ) is such that Φ(x, ξ, ·, ·) : ℝ × L²ρ → ℝ is a quasi-tame (respectively, tame) function on ℝ × L²ρ, then

\[
e^{-\alpha\tau}\Phi(Z(\tau)) = \Phi(Z(0-))
+ \int_0^\tau e^{-\alpha t}\,\mathcal L^{C(t)}\Phi(Z(t))\,dt
+ \int_0^\tau e^{-\alpha t}\,\partial_{\psi(0)}\Phi(Z(t))\,S(t)\,g(S_t)\,dW(t)
+ \sum_{0\le t\le\tau} e^{-\alpha t}\big( \Phi(Z(t)) - \Phi(Z(t-)) \big), \tag{7.28}
\]

for every P-a.s. finite G-stopping time τ. Moreover, if Φ(x, ξ, ψ(0), ψ) = h(x, ξ, q(ψ(0), ψ)), where h ∈ C(ℝ × N × ℝ^{n+1}) and q(ψ(0), ψ) is given by (7.18) and (7.19) (respectively, (7.22)), then L^cΦ(x, ξ, ψ(0), ψ) = (A − αI + (rx − c)∂_x)h(x, ξ, q(ψ(0), ψ)) and Ah(x, ξ, q(ψ(0), ψ)) is as given in (7.21) (respectively, (7.24)) for each fixed (x, ξ) ∈ ℝ × N.
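The hereditary price dynamics underlying all of the above, dS(t) = S(t)f(S_t)dt + S(t)g(S_t)dW(t), where the coefficients f and g look at the entire past segment S_t, can be simulated by a straightforward Euler-Maruyama scheme. The sketch below is illustrative: the specific mean-reverting drift `f` and constant volatility `g` are assumptions for demonstration, not the coefficients of (7.1).

```python
import math, random

def simulate_price(history, f, g, dt, n_steps, seed=0):
    """Euler-Maruyama sketch for dS = S f(S_t) dt + S g(S_t) dW, where f and g
    may depend on the whole past path (`history`, oldest value first)."""
    rng = random.Random(seed)
    path = list(history)                 # path[-1] is the current price S(t)
    for _ in range(n_steps):
        s = path[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(s + s * f(path) * dt + s * g(path) * dw)
    return path

# illustrative hereditary coefficients: drift reverts toward a trailing average
def f(seg):
    window = seg[-50:]
    return 0.05 + 0.1 * (sum(window) / len(window) - seg[-1])

g = lambda seg: 0.2
```

With g ≡ 0 and constant drift r the scheme reduces to compounding, S(T) ≈ S(0)e^{rT}, which gives a quick sanity check of the discretization.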

7.3 The HJBQVI

The main objective of this section is to derive the dynamic programming equation for the value function in the form of an infinite-dimensional Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI) (see HJBQVI (*) in Section 7.3.3(D)).

7.3.1 The Dynamic Programming Principle

The following Bellman-type dynamic programming principle (DPP) was established in Section 3.3.3 in Chapter 3 and still holds true in our problem by combining it with that obtained in Theorem 3.3.9 in Section 3.3.3 (see also Kolmanovskii and Shaikhet [KS96]). For the sake of saving space, we take the following result, without proof, as the starting point for deriving our dynamic programming equation:

Proposition 7.3.1 Let (x, ξ, ψ(0), ψ) ∈ Sκ be given and let O be an open subset of Sκ containing (x, ξ, ψ(0), ψ). For π = (C, T) ∈ Uκ(x, ξ, ψ(0), ψ), let {(X(t), Nt, S(t), St), t ≥ 0} be given by (7.7)-(7.11) and (7.1). Define

\[
\tau = \inf\{ t \ge 0 \mid (X(t), N_t, S(t), S_t) \notin \bar O \},
\]

where Ō is the closure of O. Then, for each t ∈ [0, ∞), we have the following optimality equation:

\[
V_\kappa(x,\xi,\psi(0),\psi) = \sup_{\pi \in \mathcal U_\kappa(x,\xi,\psi(0),\psi)}
E\Big[ \int_0^{t\wedge\tau} e^{-\alpha s}\,\frac{C^{\gamma}(s)}{\gamma}\,ds
+ 1_{\{t\wedge\tau < \infty\}}\, e^{-\alpha(t\wedge\tau)}\, V_\kappa\big(X(t\wedge\tau), N_{t\wedge\tau}, S(t\wedge\tau), S_{t\wedge\tau}\big) \Big]. \tag{7.29}
\]

When there is proportional but no fixed transaction cost (i.e., κ = 0 and µ > 0), the solvency region can be written as


\[
\mathbf S_0 = \{ (x,\xi,\psi(0),\psi) \mid G_0(x,\xi,\psi(0),\psi) \ge 0 \} \cup \mathbf S_+,
\]

where G₀ is the liquidating function given in (7.5) with κ = 0, that is,

\[
G_0(x,\xi,\psi(0),\psi) = x + \sum_{k=0}^{\infty} \Big( \min\{(1-\mu)n(-k),\,(1+\mu)n(-k)\}\,\psi(0)
- n(-k)\,\beta\,\big(\psi(0) - \psi(\tau(-k))\big) \Big). \tag{7.44}
\]

In the case κ = 0, we claim that S+ ⊂ {(x, ξ, ψ(0), ψ) | G₀(x, ξ, ψ(0), ψ) ≥ 0}. This is because

\[
x \ge 0 \ \text{and}\ n(-i) \ge 0,\ \forall i = 0, 1, 2, \ldots \ \Longrightarrow\ G_0(x,\xi,\psi(0),\psi) \ge 0.
\]

In this case, all shares of the stock owned or owed can be liquidated because of the absence of a fixed transaction cost (κ = 0). Therefore, S₀ = {(x, ξ, ψ(0), ψ) | G₀(x, ξ, ψ(0), ψ) ≥ 0}. We easily observe that S₀ is an unbounded convex set.

(B) Decomposition of ∂Sκ

For I ⊂ ℵ₀ ≡ {0, 1, 2, ...}, the boundary ∂Sκ of Sκ can be decomposed as follows:

\[
\partial \mathbf S_\kappa = \bigcup_{I \subset \aleph_0} \big( \partial_{-,I}\mathbf S_\kappa \cup \partial_{+,I}\mathbf S_\kappa \big), \tag{7.45}
\]

where

\[
\partial_{-,I}\mathbf S_\kappa = \partial_{-,I,1}\mathbf S_\kappa \cup \partial_{-,I,2}\mathbf S_\kappa, \tag{7.46}
\]
\[
\partial_{+,I}\mathbf S_\kappa = \partial_{+,I,1}\mathbf S_\kappa \cup \partial_{+,I,2}\mathbf S_\kappa, \tag{7.47}
\]
\[
\partial_{+,I,1}\mathbf S_\kappa = \{ (x,\xi,\psi(0),\psi) \mid G_\kappa(x,\xi,\psi(0),\psi) = 0,\ x \ge 0,\ n(-i) < 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}, \tag{7.48}
\]
\[
\partial_{+,I,2}\mathbf S_\kappa = \{ (x,\xi,\psi(0),\psi) \mid G_\kappa(x,\xi,\psi(0),\psi) < 0,\ x \ge 0,\ n(-i) = 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}, \tag{7.49}
\]
\[
\partial_{-,I,1}\mathbf S_\kappa = \{ (x,\xi,\psi(0),\psi) \mid G_\kappa(x,\xi,\psi(0),\psi) = 0,\ x < 0,\ n(-i) < 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}, \tag{7.50}
\]


and

\[
\partial_{-,I,2}\mathbf S_\kappa = \{ (x,\xi,\psi(0),\psi) \mid G_\kappa(x,\xi,\psi(0),\psi) < 0,\ x = 0,\ n(-i) = 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}. \tag{7.51}
\]

The interface (intersection) between ∂₊,I,1Sκ and ∂₊,I,2Sκ is denoted by

\[
Q_{+,I} = \{ (x,\xi,\psi(0),\psi) \mid G_\kappa(x,\xi,\psi(0),\psi) = 0,\ x \ge 0,\ n(-i) = 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}, \tag{7.52}
\]

whereas the interface between ∂₋,I,1Sκ and ∂₋,I,2Sκ is denoted by

\[
Q_{-,I} = \{ (0,\xi,\psi(0),\psi) \mid G_\kappa(0,\xi,\psi(0),\psi) = 0,\ n(-i) = 0\ \forall i \in I \ \text{and}\ n(-i) \ge 0\ \forall i \notin I \}. \tag{7.53}
\]

For example, if I = ℵ₀, then n(−i) < 0 for all i = 0, 1, 2, ..., and

G_κ(x, ξ, ψ(0), ψ) ≥ 0 ⇒ x ≥ κ.

In this case, ∂₋,ℵ₀Sκ = ∅ (the empty set),

∂₊,ℵ₀,1Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) = 0, x ≥ 0, and n(−i) < 0 for all i ∈ ℵ₀},

and

∂₊,ℵ₀,2Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) < 0, x ≥ 0, and n(−i) = 0 for all i ∈ ℵ₀} = {(x, 0, ψ(0), ψ) | 0 ≤ x ≤ κ}.

On the other hand, if I = ∅ (the empty set), that is, n(−i) ≥ 0 for all i ∈ ℵ₀, then

∂₊,∅,1Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) = 0, x ≥ 0, and n(−i) ≥ 0 for all i ∈ ℵ₀},

∂₊,∅,2Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) < 0, x ≥ 0, and n(−i) ≥ 0 for all i ∈ ℵ₀},

∂₋,∅,1Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) = 0, x < 0, and n(−i) ≥ 0 for all i ∈ ℵ₀},

and

∂₋,∅,2Sκ = {(x, ξ, ψ(0), ψ) | G_κ(x, ξ, ψ(0), ψ) < 0, x = 0, and n(−i) ≥ 0 for all i ∈ ℵ₀}.
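All of the boundary sets above are expressed through the liquidating function. A minimal sketch of its evaluation, in the spirit of (7.44): `inventory`, `base`, and the rule that the fixed fee κ is charged once whenever any open position is closed are modeling assumptions for this illustration, not definitions taken from the book.

```python
def liquidating_value(x, inventory, psi0, kappa, mu, beta):
    """Sketch of the liquidating function G_kappa.
    `inventory` is a list of (n, base) pairs: n shares held (n < 0 for a short
    position), each lot carrying its base price for capital-gains purposes;
    psi0 is the current stock price."""
    open_positions = any(n != 0 for n, _ in inventory)
    total = x - (kappa if open_positions else 0.0)
    for n, base in inventory:
        proceeds = min((1.0 - mu) * n, (1.0 + mu) * n) * psi0  # mu paid either way
        tax = n * beta * (psi0 - base)                          # capital-gains tax
        total += proceeds - tax
    return total
```

With κ = 0 and a nonnegative portfolio (x ≥ 0, all n(−i) ≥ 0) the value is nonnegative, matching the inclusion S₊ ⊂ {G₀ ≥ 0} claimed after (7.44).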


(C) Boundary Conditions for the Value Function

Let us now examine the conditions of the value function Vκ : Sκ → ℝ₊ on the boundary ∂Sκ of the solvency region Sκ defined in (7.45)-(7.51). We make the following observations regarding the behavior of the value function Vκ on the boundary ∂Sκ.

Lemma 7.3.3 Let (x, ξ, ψ(0), ψ) ∈ Sκ and let x̂, ξ̂, and (ψ̂(0), ψ̂) be as defined in (7.36)-(7.38). Then

\[
G_0(\hat x, \hat\xi, \hat\psi(0), \hat\psi) = G_0(x,\xi,\psi(0),\psi) - \kappa. \tag{7.54}
\]

Proof. Suppose the investor's current portfolio is (x, ξ, ψ(0), ψ) ∈ Sκ; then an instantaneous transaction of the quantity ζ = {m(−k), k = 0, 1, 2, ...} ∈ R(ξ) will facilitate an instantaneous jump of the state from (x, ξ, ψ(0), ψ) to the new state (x̂, ξ̂, ψ̂(0), ψ̂). The result follows immediately by substituting the new state (x̂, ξ̂, ψ̂(0), ψ̂) into G₀ defined by (7.44). This proves the lemma. □

Lemma 7.3.4 If there is no fixed transaction cost (i.e., κ = 0 and µ > 0) and if (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}S₀, that is, G₀(x, ξ, ψ(0), ψ) = 0, then the only admissible strategy is to consume nothing and to close all open positions in order to bring the portfolio to {0} × {0} × [0, ∞) × L²ρ,+ after paying proportional transaction costs, capital-gains taxes, and so forth.

Proof. For a fixed (x, ξ, ψ(0), ψ) ∈ S₀, let I ⊂ ℵ₀ ≡ {0, 1, 2, ...} be such that i ∈ I ⇒ n(−i) < 0 and i ∉ I ⇒ n(−i) ≥ 0. To guarantee that (X(t), Nt, S(t), St) ∈ S₀, we require that G₀(X(t), Nt, S(t), St) ≥ 0 for all t ≥ 0. Applying Theorem 7.2.5 to the process {e^{−λt}G₀(X(t), Nt, S(t), St), t ≥ 0}, we obtain

\[
\begin{aligned}
e^{-\lambda\tau} G_0(X(\tau), N_\tau, S(\tau), S_\tau)
={}& G_0(x,\xi,\psi(0),\psi)
+ \int_0^\tau (\partial_t + \mathbf A)\big[e^{-\lambda t}G_0(X(t),N_t,S(t),S_t)\big]\,dt \\
&+ \int_0^\tau \partial_{\psi(0)}\big[e^{-\lambda t}G_0(X(t),N_t,S(t),S_t)\big]\,S(t)\,g(S_t)\,dW(t) \\
&+ \int_0^\tau \partial_x\big[e^{-\lambda t}G_0(X(t),N_t,S(t),S_t)\big]\,\big(\lambda X(t) - C(t)\big)\,dt \\
&+ \sum_{0\le t\le\tau} e^{-\lambda t}\,\big[G_0(X(t),N_t,S(t),S_t) - G_0(X(t-),N_{t-},S(t),S_t)\big],
\end{aligned}
\tag{7.55}
\]


for all almost surely finite G-stopping times τ, where X(t) and Nt are given in (7.8), and (7.9) and (7.10), respectively, with κ = 0. Taking (7.8)-(7.10) into account and substituting into the function G₀, we have G₀(X(t), Nt, S(t), St) = G₀(X(t−), N_{t−}, S(t), St). Intuitively, this is because a transaction leaves the liquidated value of the assets invariant when the stock price does not move. Hence, (7.55) becomes the following after grouping the terms n(Q(t) − i) according to i ∈ I and i ∉ I:

\[
\begin{aligned}
d\big[e^{-\lambda t} G_0(X(t),N_t,S(t),S_t)\big]
={}& e^{-\lambda t}\Big[ -C(t) + \sum_{i\in I}(1+\mu-\beta)\,n(Q(t)-i)\,S(t)\,\big(f(S_t)-\lambda\big) \\
&\qquad + \sum_{i\notin I}(1-\mu-\beta)\,n(Q(t)-i)\,S(t)\,\big(f(S_t)-\lambda\big) \\
&\qquad - r\beta \sum_{i\in I} n(Q(t)-i)\,S(\tau(Q(t)-i))
- \lambda\beta \sum_{i\notin I} n(Q(t)-i)\,S(\tau(Q(t)-i)) \Big]\,dt \\
&+ e^{-\lambda t}\Big[ \sum_{i\in I}(1+\mu-\beta)\,n(Q(t)-i)
+ \sum_{i\notin I}(1-\mu-\beta)\,n(Q(t)-i) \Big]\,S(t)\,g(S_t)\,dW(t).
\end{aligned}
\tag{7.56}
\]

Now, the first exit time τ̂ (τ̂ is a G-stopping time) is defined by

\[
\hat\tau \equiv 1 \wedge \inf\big\{ t \ge 0 \mid
n(Q(t)-i)\,S(\tau(Q(t)-i)) \notin \big(n(-i)\psi(\tau(-i)) - 1,\ 0\big) \text{ for } i \in I,
\ \text{and}\
n(Q(t)-i)\,S(\tau(Q(t)-i)) \notin \big(0,\ n(-i)\psi(\tau(-i)) + 1\big) \text{ for } i \notin I \big\}.
\]

We can integrate (7.56) from 0 to τ̂, keeping in mind that (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}S₀ (or, equivalently, G₀(x, ξ, ψ(0), ψ) = 0), to obtain


\[
\begin{aligned}
0 \le e^{-\lambda\hat\tau} G_0(X(\hat\tau),N_{\hat\tau},S(\hat\tau),S_{\hat\tau})
={}& \int_0^{\hat\tau} e^{-\lambda s}\Big[ -C(s) + \sum_{i\in I}(1+\mu-\beta)\,n(Q(s)-i)\,S(s)\,\big(f(S_s)-\lambda\big) \\
&\qquad + \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i)\,S(s)\,\big(f(S_s)-\lambda\big) \\
&\qquad - \lambda\beta \sum_{i\in I} n(Q(s)-i)\,S(\tau(Q(s)-i))
- r\beta \sum_{i\notin I} n(Q(s)-i)\,S(\tau(Q(s)-i)) \Big]\,ds \\
&+ \int_0^{\hat\tau} e^{-\lambda s}\Big[ \sum_{i\in I}(1+\mu-\beta)\,n(Q(s)-i)
+ \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i) \Big]\,S(s)\,g(S_s)\,dW(s).
\end{aligned}
\tag{7.57}
\]

Now, use the facts that 0 < µ + β < 1, C(t) ≥ 0, α ≥ f(S_t) > r > 0, n(−i) < 0 for i ∈ I and n(−i) ≥ 0 for i ∉ I, together with Rule 7.6, to obtain the following inequality:

\[
\begin{aligned}
0 \le e^{-\lambda\hat\tau} G_0(X(\hat\tau),N_{\hat\tau},S(\hat\tau),S_{\hat\tau})
\le{}& \int_0^{\hat\tau} e^{-\lambda s}\,\sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i)\,S(s)\,(\alpha-\lambda)\,ds \\
&+ \int_0^{\hat\tau} e^{-\lambda s}\Big[ \sum_{i\in I}(1+\mu-\beta)\,n(Q(s)-i)
+ \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i) \Big]\,S(s)\,g(S_s)\,dW(s).
\end{aligned}
\tag{7.58}
\]

It is clear that

\[
E\Big[ \int_0^{\hat\tau} e^{-\lambda s}\Big( \sum_{i\in I}(1+\mu-\beta)\,n(Q(s)-i) \Big) S(s)\,g(S_s)\,dW(s) \Big] = 0.
\]

Now, define the process

\[
\tilde W(t) = \frac{\sigma-\lambda}{g(S_t)}\,t + W(t), \qquad t \ge 0.
\]

Then by the Girsanov transformation (see Theorem 1.2.16 in Chapter 1), {W̃(t), t ≥ 0} is a Brownian motion defined on a new probability space (Ω, F, P̃; F), where P̃ and P are equivalent probability measures, and hence

\[
\begin{aligned}
0 &= E^{\tilde P}\Big[ \int_0^{\hat\tau} e^{-\lambda s}\Big( \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i) \Big) S(s)\,g(S_s)\,d\tilde W(s) \Big] \\
&= E\Big[ \int_0^{\hat\tau} e^{-\lambda s}\Big( \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i)\Big) S(s)\,(\sigma-\lambda)\,ds
+ \int_0^{\hat\tau} e^{-\lambda s}\Big( \sum_{i\notin I}(1-\mu-\beta)\,n(Q(s)-i)\Big) S(s)\,g(S_s)\,dW(s) \Big].
\end{aligned}
\]

Therefore,

\[
\int_0^{\hat\tau} e^{-\lambda s}\Big( \sum_{k\notin I}(1-\mu-\beta)\,n(Q(s)-k)\Big) S(s)\,g(S_s)\,d\tilde W(s) = 0, \qquad \tilde P\text{-a.s.}
\]
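The measure change invoked above can be illustrated by simulation. Assume, for this sketch only, that the ratio θ = (σ − λ)/g is a constant; then dP̃/dP = exp(−θW_T − θ²T/2) on F_T, and reweighting paths by this density should center the drifted process W̃(T) = θT + W(T).

```python
import math, random

theta, T, n_paths = 0.5, 1.0, 200_000   # illustrative constants
rng = random.Random(0)

acc = 0.0
for _ in range(n_paths):
    w_T = rng.gauss(0.0, math.sqrt(T))
    density = math.exp(-theta * w_T - 0.5 * theta ** 2 * T)   # dP~/dP
    acc += density * (theta * T + w_T)                        # W~(T), reweighted
mean_tilde = acc / n_paths   # Monte Carlo estimate of E~[W~(T)], exactly 0
```

The exact expectation is zero, since E[e^{−θW − θ²T/2} W] = −θT cancels the θT drift; the Monte Carlo estimate should be zero up to sampling error.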

Since G₀(X(t), Nt, S(t), St) ≥ 0 for all t ≥ 0, this implies that τ̂ = 0 a.s., that is,

(X(τ̂), N_{τ̂}, S(τ̂), S_{τ̂}) = (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}S₀.

We need to determine the conditions under which the exit occurred. Let k be the index of the shares of the stock at which the state process violated the condition defining the stopping time τ̂. In other words, if k ∈ I, then n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ∉ (n(−k)ψ(τ(−k)) − 1, 0), whereas if k ∉ I, then n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ∉ (0, n(−k)ψ(τ(−k)) + 1). We will examine the two cases separately.

Case 1. Suppose k ∈ I. Then either

n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ≤ n(−k)ψ(τ(−k)) − 1 or n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ≥ 0.

We have established that (X(τ̂), N_{τ̂}, S(τ̂), S_{τ̂}) ∈ ∂_{I,1}S₀, and this is inconsistent with both n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ≤ n(−k)ψ(τ(−k)) − 1 and n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) > 0. Therefore, we know n(Q(τ̂) − k) = 0.

Case 2. Suppose k ∉ I. Then


either n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ≥ n(−k)ψ(τ(−k)) + 1 or n(Q(τ̂) − k)S(τ(Q(τ̂) − k)) ≤ 0.

Again, since (X(τ̂), N_{τ̂}, S(τ̂), S_{τ̂}) ∈ ∂_{I,1}S₀, we see that n(Q(τ̂) − k) = 0.

We conclude from both cases that (X(τ̂), N_{τ̂}) = (0, {0}). This means the only admissible strategy is to bring the portfolio from (x, ξ, ψ(0), ψ) to (0, 0, ψ(0), ψ) by the transaction specified in the lemma. This proves the lemma. □

We have the following result.

Theorem 7.3.5 Let κ > 0 and µ > 0. On ∂_{I,1}Sκ for I ⊂ ℵ₀, the investor should not consume but should close all open positions in order to bring the portfolio to {0} × {0} × ℝ₊ × L²ρ,+. In this case, the value function Vκ : ∂_{I,1}Sκ → ℝ₊ satisfies the following equation:

(MκΦ − Φ)(x, ξ, ψ(0), ψ) = 0.

(7.59)

Proof. Suppose the investor's current portfolio is (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}Sκ for some I ⊂ ℵ₀. A transaction of the quantity ζ = {m(−k), k = 0, 1, 2, ...} ∈ R(ξ) − {0} will facilitate an instantaneous jump of the state from (x, ξ, ψ(0), ψ) to the new state (x̂, ξ̂, ψ̂(0), ψ̂) as given in (7.36)-(7.38). We observe that, since ζ = (m(−k), k = 0, 1, 2, ...) ∈ R(ξ) − {0}, n(−k) < 0 implies n̂(−k) = n(−k) + m(−k) ≤ 0 and n(−k) > 0 implies n̂(−k) = n(−k) + m(−k) ≥ 0 for k = 0, 1, 2, .... Taking into account the new portfolio (x̂, ξ̂, ψ̂(0), ψ̂), we have from Lemma 7.3.3 that

G₀(x̂, ξ̂, ψ̂(0), ψ̂) = G₀(x, ξ, ψ(0), ψ) − κ.

(7.60)

Therefore, if (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}Sκ for some I ⊂ ℵ₀, then

G_κ(x, ξ, ψ(0), ψ) = G₀(x, ξ, ψ(0), ψ) − κ = 0 = G₀(x̂, ξ̂, ψ̂(0), ψ̂).

This implies (x̂, ξ̂, ψ̂(0), ψ̂) ∈ ∂_{I,1}S₀. From Lemma 7.3.4, the only admissible strategy is to make no consumption but to make another trade from the new state (x̂, ξ̂, ψ̂(0), ψ̂) ∈ ∂_{I,1}S₀. Therefore, starting from (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}Sκ we make two immediate instantaneous transactions (which are counted as only one transaction), with the total amount specified by the following two equations:

\[
0 = x - \kappa + \sum_{i\in I} \big[ n(-i)\psi(0)(1+\mu-\beta) + \beta\,n(-i)\psi(\tau(-i)) \big]
+ \sum_{i\in I^{c}} \big[ n(-i)\psi(0)(1-\mu-\beta) + \beta\,n(-i)\psi(\tau(-i)) \big], \tag{7.61}
\]

\[
0 = \xi \oplus \zeta, \tag{7.62}
\]

to reach the final destination (0, 0, ψ(0), ψ). This proves the theorem. □

We conclude the following boundary conditions from some simple observations and Theorem 7.3.5.

Boundary Condition (i). On the hyperplane

∂₋,∅,₂Sκ = {(0, ξ, ψ(0), ψ) ∈ Sκ | Gκ(0, ξ, ψ(0), ψ) < 0, n(−i) ≥ 0 ∀i},

the only strategy for the investor is to make no transaction and no consumption, since x = 0 and Gκ(0, ξ, ψ(0), ψ) < 0 (hence, there is no money to consume and not enough money to pay the transaction costs, etc.), but to let the stock prices grow according to (7.1). Thus, the value function Vκ on ∂₋,∅,₂Sκ satisfies the equation

\[
\mathcal L^{0}\Phi \equiv (\mathbf A - \alpha + rx\,\partial_x)\Phi = 0, \tag{7.63}
\]

provided that it is smooth enough.

Boundary Condition (ii). On ∂_{I,1}Sκ for I ⊂ ℵ₀, the investor should not consume but should buy back n(−i) shares for i ∈ I and sell n(−i) shares for i ∈ I^c in order to bring the portfolio to {0} × {0} × [0, ∞) × L²ρ,+ after paying transaction costs, capital-gains taxes, and so forth. In other words, the investor brings the portfolio from the position (x, ξ, ψ(0), ψ) ∈ ∂_{I,1}Sκ to (0, 0, ψ(0), ψ) by the quantity that satisfies (7.61) and (7.62). In this case, the value function Vκ : ∂_{I,1}Sκ → ℝ satisfies the following equation:

\[
(M_\kappa\Phi - \Phi)(x,\xi,\psi(0),\psi) = 0. \tag{7.64}
\]

Note that this is a restatement of Theorem 7.3.5.

Boundary Condition (iii). On ∂₊,I,₂Sκ for I ⊂ ℵ₀, the only optimal strategy is to make no transaction but to consume optimally according to the optimal consumption rate function

\[
c^{*}(x,\xi,\psi(0),\psi) = \Big( \frac{\partial V_\kappa}{\partial x}(x,\xi,\psi(0),\psi) \Big)^{\frac{1}{\gamma-1}},
\]

which is obtained via

\[
c^{*}(x,\xi,\psi(0),\psi) = \arg\max_{c\ge 0}\Big( \mathcal L^{c} V_\kappa(x,\xi,\psi(0),\psi) + \frac{c^{\gamma}}{\gamma} \Big),
\]

where L^c is the differential operator defined by

\[
\mathcal L^{c}\Phi(x,\xi,\psi(0),\psi) \equiv (\mathbf A - \alpha + (rx - c)\partial_x)\Phi. \tag{7.65}
\]

This is because the cash in the savings account is not sufficient to buy back any shares of the stock, so the investor can only consume optimally. In this case, the value function Vκ : ∂₊,I,₂Sκ → ℝ₊ satisfies the following equation, provided that it is smooth enough:

\[
\mathbb A\Phi \equiv (\mathbf A - \alpha + rx\,\partial_x)\Phi + \frac{1-\gamma}{\gamma}\,(\partial_x\Phi)^{\frac{\gamma}{\gamma-1}} = 0. \tag{7.66}
\]
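The nonlinear term in (7.66) is exactly the maximized consumption term: the only c-dependent part of L^cΦ + c^γ/γ is c^γ/γ − c∂_xΦ, whose maximizer over c ≥ 0 is c* = (∂_xΦ)^{1/(γ−1)}. This first-order condition is easy to spot-check numerically; the values of γ and the marginal value p below are illustrative.

```python
gamma, p = 0.5, 2.0          # p stands in for the marginal value dPhi/dx > 0

def objective(c):
    # the only c-dependent part of L^c Phi + c^gamma/gamma
    return c ** gamma / gamma - c * p

candidate = p ** (1.0 / (gamma - 1.0))       # c* = p^{1/(gamma-1)} = 2^{-2}

grid = [k * 1e-4 for k in range(1, 20001)]   # scan c in (0, 2]
best = max(grid, key=objective)
```

Substituting c* back gives c*^γ/γ − c*·p = ((1 − γ)/γ)·p^{γ/(γ−1)}, which is precisely the added term in (7.66).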

Boundary Condition (iv). On ∂₋,I,₂Sκ, the only admissible consumption-investment strategy is to make no consumption and no transaction but to let the stock price grow, as in Boundary Condition (i).

Boundary Condition (v). On ∂₊,ℵ₀,₂Sκ = {(x, ξ, ψ(0), ψ) | 0 ≤ x ≤ κ, n(−i) = 0, ∀i = 0, 1, ...}, the only admissible consumption-investment strategy is to make no transaction but to consume optimally, as in Boundary Condition (iii).

Remark 7.3.6 From Boundary Conditions (i)-(v), it is clear that the value function Vκ is discontinuous on the interfaces Q₊,I and Q₋,I for all I ⊂ ℵ₀.

(D) The HJBQVI with Boundary Conditions

We conclude from the above subsections that the HJBQVI (together with the boundary conditions) can be expressed as

\[
\mathrm{HJBQVI}(*) =
\begin{cases}
\max\{ \mathbb A\Phi,\ M_\kappa\Phi - \Phi \} = 0 & \text{on } \mathbf S^{\circ}_\kappa \\
\mathbb A\Phi = 0 & \text{on } \bigcup_{I\subset\aleph_0} \partial_{+,I,2}\mathbf S_\kappa \\
\mathcal L^{0}\Phi = 0 & \text{on } \bigcup_{I\subset\aleph_0} \partial_{-,I,2}\mathbf S_\kappa \\
M_\kappa\Phi - \Phi = 0 & \text{on } \bigcup_{I\subset\aleph_0} \partial_{I,1}\mathbf S_\kappa,
\end{cases}
\]

where 𝔸Φ, L⁰Φ (that is, L^cΦ with c = 0), and Mκ are as defined in (7.43), (7.63), and (7.35), respectively.

7.4 The Verification Theorem

Let

\[
\tilde{\mathbb A}\Phi =
\begin{cases}
\mathbb A\Phi & \text{on } \mathbf S^{\circ}_\kappa \cup \bigcup_{I\subset\aleph_0} \partial_{+,I,2}\mathbf S_\kappa \\
\mathcal L^{0}\Phi & \text{on } \bigcup_{I\subset\aleph_0} \partial_{-,I,2}\mathbf S_\kappa.
\end{cases}
\]

We have the following verification theorem for the value function Vκ : Sκ → ℝ of our hereditary portfolio optimization problem.

Theorem 7.4.1 (The Verification Theorem) (a) Let Uκ = Sκ − ⋃_{I⊂ℵ₀} ∂_{I,1}Sκ. Suppose there exists a locally bounded, nonnegative-valued function Φ ∈ C^{1,0,2,2}_{lip}(Sκ) ∩ D(S) such that

\[
\tilde{\mathbb A}\Phi \le 0 \ \text{on } \mathbf U_\kappa \tag{7.67}
\]

and

\[
\Phi \ge M_\kappa\Phi \ \text{on } \mathbf U_\kappa. \tag{7.68}
\]

Then Φ ≥ Vκ on Uκ.

367

(b) Define D ≡ {(x, ξ, ψ(0), ψ) ∈ Uκ | Φ(x, ξ, ψ(0), ψ) > Mκ Φ(x, ξ, ψ(0), ψ)}. Suppose ˜ AΦ(x, ξ, ψ(0), ψ) = 0 on D (7.69) ˆ ξ, ψ(0), ψ) = ζˆΦ (x, ξ, ψ(0), ψ) exists for all (x, ξ, ψ(0), ψ) ∈ Sκ and that ζ(x, by Assumption 7.3.2. Let  ∗

c =

# 1 (∂x Φ) γ−1 on S◦κ # ∪ I⊂ℵ0 ∂+,I,2 Sκ 0 on I⊂ℵ0 ∂−,I,2 Sκ .

(7.70)

Define the impulse control T ∗ = {(τ ∗ (i), ζ ∗ (i)), i = 1, 2, . . .} inductively as follows: First, put τ ∗ (0) = 0 and inductively (i)

τ ∗ (i + 1) = inf{t > τ ∗ (i) | (X (i) (t), Nt , S(t), St )) ∈ / D}, (i)

∗ ˆ (i) (τ ∗ (i + 1)−), N ∗ ∗ ζ ∗ (i + 1) = ζ(X τ (i+1)− , S(τ (i + 1)), Sτ (i+1) ),

(7.71) (7.72)

(i)

{(X (i) (t), Nt , S(t), St ), t ≥ 0} is the controlled state process obtained by applying the combined control π ∗ (i) = (c∗ , (τ ∗ (1), τ ∗ (2), . . . τ ∗ (i); ζ ∗ (1), ζ ∗ (2), · · · , ζ ∗ (i))),

i = 1, 2, . . . .

Suppose π ∗ = (C ∗ , T ∗ ) ∈ Uκ (x, ξ, ψ(0), ψ), e−αt Φ(X ∗ (t), Nt∗ , S(t), St ) → 0, as t → ∞ a.s. and that the family {e−ατ Φ(X ∗ (τ ), Nτ∗ , S(τ ), Sτ )) | τ is a G-stopping time}

(7.73)

is uniformly integrable. Then Φ(x, ξ, ψ(0), ψ) = Vκ (x, ξ, ψ(0), ψ) and π ∗ obtained in (7.71) and (7.72) is optimal. Proof. (a) Suppose π = (C, T ) ∈ Uκ (x, ξ, ψ(0), ψ), where C = {C(t), t ≥ 0} is a consumption rate process and T = {(τ (i), ζ(i)), i = 1, 2, . . .} is a trading strategy. Denote the controlled state processes (by π) with the initial state by (x, ξ, ψ(0), ψ) by {Z(t) = (X(t), Nt , S(t), St ), t ≥ 0}. For R > 0, put T (R) = R ∧ inf{t > 0 | Z(t) ≥ R}

368

7 Hereditary Portfolio Optimization

and set θ(i + 1) = θ(i + 1; R) = τ (i) ∨ (τ (i + 1) ∧ T (R)), where Z(t) is the norm of Z(t) in  × N ×  × L2ρ in the product topology. Then by the generalized Dynkin formula (see (7.26)), we have E[e−αθ(i+1) Φ(Z(θ(i + 1)−)] = E[e−ατ (i) Φ(Z(τ (i))]  θ(i+1)− e−αt LC(t) Φ(Z(t)) dt] + τ (i)

−ατ (i)

≤ E[e

−E



Φ(Z(τ (i))]

θ(i+1)−

e−δt

τ (i)

C γ (t)  dt , γ

(7.74)

˜ ≤ 0. since AΦ Equivalently, we have E[e−ατ (i) Φ(Z(τ (i)))] − E[e−αθ(i+1) Φ(Z(θ(i + 1)−))]   θ(i+1)− C γ (t)  dt . e−αt ≥E γ τ (i) Letting R → ∞, using the Fatou lemma, and then summing from i = 0 to i = k gives Φ(x, ξ, ψ(0), ψ) +

k     E e−ατ (i) Φ(Z(τ (i)) − Φ(Z(τ (i−)) i=1

− E[e−ατ (k+1) Φ(Z(τ (k + 1)−))]   θ(k+1) C γ (t)  ≥E dt . e−δt γ 0

(7.75)

Now, Φ(Z(τ (i)) ≤ Mκ Φ(Z(τ (i)−) for i = 1, 2, . . .

(7.76)

and, therefore, Φ(x, ξ, ψ(0), ψ) + ≥E



k     E e−ατ (i) Mκ Φ(Z(τ (i)−) − Φ(Z(τ (i)−)) i=1

θ(k+1)−

0

e−δt

 C γ (t) dt + e−ατ (k+1) Φ(Z(τ (k + 1)−)) . γ

(7.77)

It is clear that Mκ Φ(Z(τ (i)−) − Φ(Z(τ (i)−)) ≤ 0 and, hence,

(7.78)

7.5 Properties of Value Function

Φ(x, ξ, ψ(0), ψ) ≥ E



θ(k+1)−

0 −ατ (k+1)

+e Letting k → ∞, we get

Φ(x, ξ, ψ(0), ψ) ≥ E





e−αt

C γ (t) dt γ

 Φ(Z(τ (k + i)−)) .

e−αt

0

369

C γ (t)  dt , γ

(7.79)

(7.80)

since Φ is a locally bounded non-negative function. Hence, Φ(x, ξ, ψ(0), ψ) ≥ Jκ (x, ξ, ψ(0), ψ; π),

∀π ∈ Uκ (x, ξ, ψ(0), ψ).

(7.81)

Therefore, Φ(x, ξ, ψ(0), ψ) ≥ Vκ (x, ξ, ψ(0), ψ). (b) Define π ∗ = (C ∗ , T ∗ ), where T ∗ = {(τ ∗ (i), ζ ∗ (i)), i = 1, 2, . . .} by (7.71) and (7.72). Then repeat the argument in part (a) for π = π ∗ . It is clear that the inequalities (7.79)-(7.81) become equalities. So we conclude that 

τ ∗ (k+1)

Φ(x, ξ, ψ(0), ψ) = E

e−αt

0 −ατ ∗ (k+1)

+e

C γ (t) dt γ

 Φ(Z(τ ∗ (k + 1)−)) ,

∀k = 1, 2, . . . . (7.82)

Letting k → ∞ in (7.82), by (7.73) we get Φ(x, ξ, ψ(0), ψ) = Jκ (x, ξ, ψ(0), ψ; π ∗ ).

(7.83)

Combining this with (7.81), we obtain Φ(x, ξ, ψ(0), ψ) ≥

sup π∈Uκ (x,ξ,ψ(0),ψ)

Jκ (x, ξ, ψ(0), ψ; π)

≥ Jκ (x, ξ, ψ(0), ψ; π ∗ ) = Φ(x, ξ, ψ(0), ψ).

(7.84)

Hence, Φ(x, ξ, ψ(0), ψ) = Vκ (x, ξ, ψ(0), ψ) and π ∗ is optimal. This proves the verification theorem. 2

7.5 Properties of Value Function 7.5.1 Some Simple Properties Some basic properties of the value function Vκ : Sκ → + defined by (7.13) are investigated in this section. Suppose the investor’s portfolio is at (x, ξ, ψ(0), ψ) ∈ Sκ and an instantaneous transaction of the N-valued quantity

370

7 Hereditary Portfolio Optimization

ζ = m(0)1{τ (0)} +

∞ 

m(−k)1{τ (−k)} (χ{n(−k)0,−n(−k)≤m(−k)≤0} ) ∈ R(ξ) ˆ ψ(0), ˆ ˆ where x leads to a new state of the portfolio (ˆ x, ξ, ψ), ˆ (the investor’s ˆ new holdings in the savings account), ξ (the investor’s new inventory in the ˆ ˆ (the new profile of stock prices) are given in stock account), and (ψ(0), ψ) (7.36)-(7.38) and are repeated below for the convenience of the readers. x ˆ = x − κ − (m(0) + µ|m(0)|)ψ(0) ∞   (1 − µ − β)m(−k)ψ(0) − k=1

 + βm(−k)ψ(τ (−k)) χ{n(−k)>,−n(−k)≤m(−k)≤0} −

∞  

 (1 + µ − β)m(−k)ψ(0) + βm(−k)ψ(τ (−k))

k=1

×χ{n(−k) Vκ (x, ξ, ψ(0), ψ). x, ξ, ψ) Vκ (ˆ If this were true, then one would start at such a position and then make ˆ ψ(0), ˆ ˆ to achieve a higher value without one immediate transaction to (ˆ x, ξ, ψ) making any consumption. This contradicts the definition of the value function Vκ . Therefore, ˆ ψ(0), ˆ ˆ ≤ Vκ (x, ξ, ψ(0), ψ). x, ξ, ψ) Vκ (ˆ The second statement follows from the first immediately.

2

7.5.2 Upper Bounds of Value Function In reality, the solvency region Sκ , the liquidating function Gκ : S → , and the value function Vκ defined in (7.6), (7.5), and (7.13), respectively, depend not only on the fixed transaction cost κ ≥ 0 but also on the proportional transaction and tax rates µ ≥ 0 and β ≥ 0. In this section, we will express Sκ as Sκ,µ,β , Gκ as Gκ,µ,β , and Vκ as Vκ,µ,β to reflect such effects. Therefore, S0,µ,β , G0,µ,β , and V0,µ,β denote respectively the solvency region, the liquidating function, and the value function, when there is no fixed transaction cost (κ = 0 and µ, β > 0) but there are positive proportional transaction costs and taxes. All other expressions will be interpreted similarly. For example, S0,µ,0 , G0,µ,0 , and V0,µ,0 will be interpreted respectively as the solvency region, the liquidating function, and the value function, when there are no fixed transaction cost and tax but there are proportional transaction costs (κ = β = 0 and µ > 0) and so forth. When κ = 0, the solvency region S0,µ,β reduces from (7.6) to   (7.90) S0,µ,β = (x, ξ, ψ(0), ψ) ∈ S | G0,µ,β (x, ξ, ψ(0), ψ) ≥ 0 ∪ S+ , where G0,µ,β (x, ξ, ψ(0), ψ) = x +

∞  

min{(1 − µ)n(−k), (1 + µ)n(−k)}ψ(0)

k=0

 −n(−k)β(ψ(0) − ψ(τ (−k))) ,

(7.91)

372

7 Hereditary Portfolio Optimization

S+ = [0, ∞) × N+ × [0, ∞) × + × L2ρ,+ , and N+ = {ξ ∈ N | ξ(θ) ≥ 0, ∀θ ∈ (−∞, 0]}. Similarly, when κ = β = 0, the solvency region S0,µ,0 can be described by   (7.92) S0,µ,0 = (x, ξ, ψ(0), ψ) ∈ S | G0,µ,0 (x, ξ, ψ(0), ψ) ≥ 0 ∪ S+ , where G0,µ,0 (x, ξ, ψ(0), ψ) = x +

∞ 

[min{(1 − µ)n(−k), (1 + µ)n(−k)}ψ(0)] .

k=0

(7.93) For each (ψ(0), ψ) ∈ [0, ∞) × L2ρ,+ , let S0,µ,β (ψ(0), ψ) be the projection of the solvency region S0,µ,β along (ψ(0), ψ) defined by S0,µ,β (ψ(0), ψ) ≡ {(x, ξ) ∈  × N | H0,µ,β (x, ξ, ψ(0), ψ) ≥ 0} = {(x, ξ) ∈  × N | G0,µ,β (x, ξ, ψ(0), ψ) ≥ 0}. We have the following results. Proposition 7.5.2 For each (ψ(0), ψ) ∈ [0, ∞)×L2ρ,+ , the projected solvency region S0,µ,β (ψ(0), ψ) along (ψ(0), ψ) is a convex subset of the space  × N. Furthermore, if V0,µ,β : S0,µ,β →  is the value function of the optimal consumption-trading problem when there is no fixed transaction cost, then for each (ψ(0), ψ) ∈ [0, ∞) × L2ρ,+ , V0,µ,β (·, ·, ψ(0), ψ) is a concave function on the projected solvency region S0,µ,β (ψ(0), ψ). Proof. If x ≥ 0 and n(−i) ≥ 0,

∀i = 0, 1, 2, · · · ,

then G0,µ,β (x, ξ, ψ(0), ψ) = x + ψ(0)

∞ 

(1 − µ − β)n(−k)

k=0



∞ 

n(−k)ψ(τ (−k))

k=0

≥ 0,

since 1 − µ − β > 0.

Hence, [0, ∞) × N+ × [0, ∞) × L2ρ,+ ⊂ {(x, ξ, ψ(0), ψ) | G0,µ,β (x, ξ, ψ(0), ψ) ≥ 0}. Therefore, all shares of the stock owned or owed can be liquidated because of the absence of a fixed transaction cost κ = 0.

7.5 Properties of Value Function


For a fixed (ψ(0), ψ) ∈ [0, ∞) × L2ρ,+, let (x1, ξ1), (x2, ξ2) ∈ S0,µ,β(ψ(0), ψ) and 0 ≤ c ≤ 1; then c(x1, ξ1) + (1 − c)(x2, ξ2) ∈ S0,µ,β(ψ(0), ψ). This is because

    G0,µ,β(cx1 + (1 − c)x2, cξ1 + (1 − c)ξ2, ψ(0), ψ)
      = G0,µ,β(cx1, cξ1, ψ(0), ψ) + G0,µ,β((1 − c)x2, (1 − c)ξ2, ψ(0), ψ)
      = cG0,µ,β(x1, ξ1, ψ(0), ψ) + (1 − c)G0,µ,β(x2, ξ2, ψ(0), ψ) ≥ 0.

The value function V0,µ,β : S0,µ,β → ℝ+ for the case κ = 0 and µ, β > 0 has the following concavity property: for each fixed (ψ(0), ψ) ∈ [0, ∞) × L2ρ,+, V0,µ,β(·, ·, ψ(0), ψ) : S0,µ,β(ψ(0), ψ) → ℝ+ is a concave function; that is, if (x1, ξ1), (x2, ξ2) ∈ S0,µ,β(ψ(0), ψ) and 0 ≤ c ≤ 1, then

    V0,µ,β(cx1 + (1 − c)x2, cξ1 + (1 − c)ξ2, ψ(0), ψ) ≥ cV0,µ,β(x1, ξ1, ψ(0), ψ) + (1 − c)V0,µ,β(x2, ξ2, ψ(0), ψ).

The detailed proof of this statement is omitted. □
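As a quick numerical sanity check of the convexity argument above, the following Python sketch evaluates G0,µ,β of (7.91) for a finite list of past positions; all parameter values here are hypothetical and chosen only for illustration, not taken from the text. The point it illustrates is that G is positively homogeneous and superadditive in (x, ξ), so the value at a convex combination dominates the convex combination of the values.

```python
# Hypothetical parameters (illustrative only; not values from the text).
mu, beta = 0.01, 0.02        # proportional transaction cost and tax rate
psi0 = 10.0                  # current stock price psi(0)
bases = [8.0, 12.0]          # base prices psi(tau(-k)) of past transactions

def G(x, shares):
    """Liquidating function (7.91) for a finite list of past positions."""
    total = x
    for n_k, base in zip(shares, bases):
        total += min((1 - mu) * n_k, (1 + mu) * n_k) * psi0
        total -= n_k * beta * (psi0 - base)
    return total

(x1, n1), (x2, n2), c = (5.0, [3.0, -1.0]), (2.0, [1.0, 4.0]), 0.4
g1, g2 = G(x1, n1), G(x2, n2)
g_mid = G(c * x1 + (1 - c) * x2,
          [c * a + (1 - c) * b for a, b in zip(n1, n2)])
g_combo = c * g1 + (1 - c) * g2
# Concavity of G gives g_mid >= g_combo, so solvency of the two endpoint
# states implies solvency of their convex combination.
```

Note that the inequality is strict here because the two positions disagree in sign on one coordinate, which is exactly where the min{·,·} term is nonlinear.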

Using the verification theorem (Theorem 7.4.1), we obtain some upper bounds for the value function Vκ,µ,β : Sκ,µ,β → ℝ defined by (7.13) by comparing it with other value functions such as V0,0,0, Vκ,ν,β, and so forth.

(A) When κ = µ = β = 0

We consider the scenario in which κ = µ = β = 0. In this case there is no need to keep track of the time instants and the base prices at which shares of the stock were purchased or short sold in the past, and we can lump all shares together, since there are no tax consequences. We therefore let Y(t) be the total number of shares of the stock owned (if Y(t) > 0) or owed (if Y(t) < 0) by the investor at time t ≥ 0, that is,

    Y(t) = ∑_{k=0}^{Q(t)} n(k).

The stock price dynamics {S(t), t ∈ ℝ} remains unchanged, but the investor's savings and stock accounts should then be modified as follows:

    dX(t) = (λX(t) − C(t))dt,  τ(i) ≤ t < τ(i + 1),    (7.94)
    X(τ(i + 1)) = X(τ(i + 1)−) − m(i + 1)S(τ(i + 1)),    (7.95)


and

    Y(τ(i + 1)) = Y(τ(i + 1)−) + m(i + 1),  i = 0, 1, 2, . . . ,    (7.96)

where m(i + 1) ∈ ℝ is the number of shares of the stock purchased or sold at the transaction time τ(i + 1). Let V0,0,0(x, ξ, ψ(0), ψ) be the value function of the portfolio optimization problem when κ = µ = β = 0. Assume that it takes the following form for some constant C1 > 0:

    V0,0,0(x, ξ, ψ(0), ψ) = C1 [x + ψ(0) ∑_{k=0}^∞ n(−k)]^γ.    (7.97)

We have the following proposition.

Proposition 7.5.3 Assume that the value function V0,0,0 : Sκ,µ,β ⊂ S0,0,0 → ℝ takes the form in (7.97). Then

    C1 = (1/γ) C0^(γ−1)  with  C0 = [1/(1 − γ)] (α − γλ − γ(b − λ)²/(2σ(1 − γ))),    (7.98)

provided that

    α > γ (λ + (b − λ)²/(2σ(1 − γ))).    (7.99)

Moreover,

    Vκ,µ,β(x, ξ, ψ(0), ψ) ≤ V0,0,0(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ,µ,β.    (7.100)
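The constants in (7.98)-(7.99) are easy to evaluate numerically. The sketch below does so in Python with hypothetical parameter values (none of these numbers come from the text; σ denotes the same quantity that appears in (7.98)):

```python
# Hypothetical parameters (illustrative only; not values from the text).
b, lam, sigma, gamma, alpha = 0.08, 0.03, 0.04, 0.5, 0.10

# Condition (7.99): alpha must exceed this threshold for C0, C1 to make sense.
threshold = gamma * (lam + (b - lam) ** 2 / (2 * sigma * (1 - gamma)))
assert alpha > threshold

# Constants of Proposition 7.5.3, equation (7.98).
C0 = (alpha - gamma * lam
      - gamma * (b - lam) ** 2 / (2 * sigma * (1 - gamma))) / (1 - gamma)
C1 = C0 ** (gamma - 1) / gamma   # C1 = (1/gamma) * C0**(gamma - 1)

def V000(liquidated_wealth):
    """Candidate value function (7.97), as a function of x + psi(0)*sum n(-k)."""
    return C1 * liquidated_wealth ** gamma
```

When (7.99) fails, C0 becomes nonpositive and the fractional power defining C1 is no longer real, which is one way to see why the condition is needed.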

Proof. For notational simplicity, let

    Λ0(x, ξ, ψ(0), ψ) = x + ψ(0) ∑_{k=0}^∞ n(−k).

Therefore, V0,0,0(x, ξ, ψ(0), ψ) = C1 [Λ0(x, ξ, ψ(0), ψ)]^γ. We will prove that V0,0,0 satisfies part (a) of the verification theorem (Theorem 7.4.1). Then

    V0,0,0 ≥ Vκ,µ,β  on  Uκ,µ,β ≡ Sκ,µ,β − ∪_{I⊂ℵ0} ∂_{I,1}Sκ,µ,β.

First, we must show that Mκ,µ,β (V0,0,0 (x, ξ, ψ(0), ψ)) ≤ V0,0,0 (x, ξ, ψ(0), ψ) on S◦κ,µ,β .


Observe that V0,0,0 ∈ C^{1,0,2,2}_{lip}(S◦κ,µ,β) ∩ D(S). A straightforward calculation yields

    Λ0(x̂, ξ̂, ψ̂(0), ψ̂) = x̂ + ψ̂(0) ∑_{k=0}^∞ n̂(−k)
      ≤ x + ψ(0) ∑_{k=0}^∞ [n(−k) + m(−k)(χ{n(−k)<0, 0≤m(−k)≤−n(−k)} + χ{n(−k)>0, −n(−k)≤m(−k)≤0})]
      ≤ x + ψ(0) ∑_{k=0}^∞ n(−k)(χ{n(−k)>0} + χ{n(−k)<0})
      = Λ0(x, ξ, ψ(0), ψ),

and hence Mκ,µ,β(V0,0,0)(x, ξ, ψ(0), ψ) ≤ V0,0,0(x, ξ, ψ(0), ψ) on S◦κ,µ,β. Second, a direct computation with the generator shows that A(V0,0,0)(x, ξ, ψ(0), ψ) ≤ 0 on S◦κ,µ,β if and only if

    α > γ (λ + (b − λ)²/(2σ(1 − γ))).

Third, we must also show that

    L0(V0,0,0)(x, ξ, ψ(0), ψ) ≤ 0,  ∀(x, ξ, ψ(0), ψ) ∈ ∪_{I⊂ℵ0} ∂_{−,I}Sκ,µ,β,

where L0(C1Ψ0)(x, ξ, ψ(0), ψ) = (A + λx∂x − αI)(C1Ψ0)(x, ξ, ψ(0), ψ). The proof involves the boundary conditions of the value function, and the details are left to the reader. Combining the above steps proves the proposition. □

Note that the above result is also implied by the classical work of Merton (see [Mer90]) on the optimal consumption-investment problem in a perfect market with no transaction cost and no tax, in which the stock price follows a geometric Brownian motion with f(φ) ≡ b ≥ λ and g(φ) ≡ σ. In that case, it is shown that the optimal portfolio satisfies the following Merton line:

    Y(t)/X(t) = c∗/(1 − c∗),  t ≥ 0,    (7.104)

where

    c∗ = (b − λ)/((1 − γ)σ²).    (7.105)
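In numbers, the Merton line (7.104)-(7.105) can be sketched as follows; the market parameters below are hypothetical, chosen only so that the resulting proportion lies strictly between 0 and 1:

```python
# Hypothetical parameters (illustrative only; not values from the text).
b, lam, sigma, gamma = 0.08, 0.03, 0.4, 0.5

# Merton proportion (7.105): constant fraction of wealth kept in the stock.
c_star = (b - lam) / ((1 - gamma) * sigma ** 2)

# On the Merton line (7.104), the stock/savings ratio is kept at c*/(1 - c*).
target_ratio = c_star / (1 - c_star)
```

With these numbers c∗ = 0.625, so the investor continuously rebalances toward holding 0.625/0.375 times as much in the stock account as in the savings account.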

(B) When β = 0, κ > 0, and µ > 0

In this case, we have the following result.

Proposition 7.5.4 Assume that β = 0 but κ > 0 and µ > 0. Let ν be a constant such that

    1 − µ ≤ ν ≤ 1 + µ.    (7.106)

Suppose

    α > γb.    (7.107)


Then there exists K < ∞ such that for all (x, ξ, ψ(0), ψ) ∈ Sκ,µ,β,

    Vκ,µ,β(x, ξ, ψ(0), ψ) ≤ K [x + νψ(0) ∑_{k=0}^∞ n(−k)]^γ.    (7.108)

Proof. We proceed as in Proposition 7.5.3, except that now we choose K < ∞ and define Λν : Sκ,µ,β → ℝ and Ψν : Sκ → ℝ by

    Λν(x, ξ, ψ(0), ψ) = x + νψ(0) ∑_{k=0}^∞ n(−k)    (7.109)

and

    Ψν(x, ξ, ψ(0), ψ) = [Λν(x, ξ, ψ(0), ψ)]^γ ≡ Λν^γ(x, ξ, ψ(0), ψ).    (7.110)

Then we have, from (7.36)-(7.38),

    x̂ + νψ̂(0) ∑_{k=0}^∞ n̂(−k)
      = x − κ + νψ(0) ∑_{k=0}^∞ n(−k) − mψ(0)(1 + µ − ν)  for m > 0,
      = x − κ + νψ(0) ∑_{k=0}^∞ n(−k) − mψ(0)(1 − µ − ν)  for m < 0.

Thus, in either case, we have, by (7.106),

    x̂ + νψ̂(0) ∑_{k=0}^∞ n̂(−k) ≤ x + νψ(0) ∑_{k=0}^∞ n(−k),

and this proves that

    Mκ,µ,β Ψν(x, ξ, ψ(0), ψ) ≤ Ψν(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ,µ,β.

Using the verification theorem (Theorem 7.4.1), it remains to verify that

    AΨν(x, ξ, ψ(0), ψ) ≤ 0,  ∀(x, ξ, ψ(0), ψ) ∈ S◦κ,µ,β.

A straightforward calculation yields

    A(KΨν)(x, ξ, ψ(0), ψ)
      = sup_{c≥0} [Lc(KΨν)(x, ξ, ψ(0), ψ) + c^γ/γ]
      = (A − α + λx∂x)(KΨν)(x, ξ, ψ(0), ψ) + ((1 − γ)/γ) [K∂xΨν(x, ξ, ψ(0), ψ)]^{γ/(γ−1)}
      = (1/2) Kγ(γ − 1) Λν^{γ−2}(x, ξ, ψ(0), ψ) [∑_{k=0}^∞ (1 + ν)n(−k)]² ψ²(0)g²(ψ)
        + Kγ Λν^{γ−1}(x, ξ, ψ(0), ψ) [∑_{k=0}^∞ (1 + ν)n(−k)] ψ(0)f(ψ)
        − αK Λν^γ(x, ξ, ψ(0), ψ) + Kγλx Λν^{γ−1}(x, ξ, ψ(0), ψ)
        + ((1 − γ)/γ)(Kγ)^{γ/(γ−1)} Λν^γ(x, ξ, ψ(0), ψ)
      = Λν^{γ−2}(x, ξ, ψ(0), ψ) { [((1 − γ)/γ)(Kγ)^{γ/(γ−1)} − αK] Λν²(x, ξ, ψ(0), ψ)
        + Kγ [λx + (∑_{k=0}^∞ (1 + ν)n(−k)) ψ(0)f(ψ)] Λν(x, ξ, ψ(0), ψ)
        − (1/2) Kγ(1 − γ) [∑_{k=0}^∞ (1 + ν − β)n(−k)]² ψ²(0)g²(ψ) }.

Hence, A(KΨν)(x, ξ, ψ(0), ψ) ≤ 0 for all (x, ξ, ψ(0), ψ) ∈ S◦κ,µ,β if and only if

    [((1 − γ)/γ)(Kγ)^{γ/(γ−1)} − αK + Kγb̄] Λν² ≤ (1/2) σ²Kγ(1 − γ)ν²ψ²(0) [∑_{k=0}^∞ n(−k)]²,  ∀(x, ξ, ψ(0), ψ) ∈ S◦κ,µ,β.

This holds if and only if

    α > γb + (1 − γ)(Kγ)^{1/(γ−1)}.    (7.111)

If (7.107) holds, then (7.111) holds for K large enough, since (Kγ)^{1/(γ−1)} → 0 as K → ∞ (recall 0 < γ < 1). This shows that for all (x, ξ, ψ(0), ψ) ∈ Sκ,µ,β,

    Vκ,µ,β(x, ξ, ψ(0), ψ) ≤ K [x + νψ(0) ∑_{k=0}^∞ n(−k)]^γ.

This proves the proposition. □
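The "K large enough" step in (7.111) can be made explicit by solving for the boundary value of K. The sketch below does this in Python with hypothetical parameter values (not taken from the text):

```python
# Hypothetical parameters (illustrative only; not values from the text).
alpha, b, gamma = 0.10, 0.08, 0.5
assert alpha > gamma * b   # condition (7.107)

def cond_7_111(K):
    # (7.111): alpha > gamma*b + (1 - gamma) * (K*gamma)**(1/(gamma - 1))
    return alpha > gamma * b + (1 - gamma) * (K * gamma) ** (1.0 / (gamma - 1.0))

# Since gamma < 1, the exponent 1/(gamma - 1) is negative, so the extra term
# decreases in K; equating it to alpha - gamma*b gives the smallest valid K.
K_min = ((alpha - gamma * b) / (1 - gamma)) ** (gamma - 1.0) / gamma
```

Any K above K_min satisfies (7.111), while values below it fail, matching the "for K large enough" conclusion of the proof.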

Remark 7.5.5 Proposition 7.5.4 shows that the value function Vκ,µ,β : Sκ,µ,β →  is a finite function. Moreover, it is bounded on the set of the following form: {(x, ξ, ψ(0), ψ) ∈ Sκ,µ,β | Λν (x, ξ, ψ(0), ψ) = constant} for every ν ∈ (1 − µ, 1 + µ).


7.6 The Viscosity Solution

It is clear that the value function Vκ : Sκ → ℝ+ has discontinuities on the interfaces QI,+ and QI,− and, hence, cannot be a solution of HJBQVI (*) in the classical sense. In addition, even though Vκ is continuous on the open set S◦κ, it is not necessarily in C^{1,0,2,2}(Sκ), as a classical solution of the HJBQVI (*) would have to be. The main purpose of this section is to show that Vκ is a viscosity solution of the HJBQVI (*). See Ishii [Ish93] and Øksendal and Sulem [ØS02] for the connection between viscosity solutions of second-order elliptic equations and stochastic classical control and classical-impulse control problems for diffusion processes.

To give a definition of a viscosity solution, we first define the upper and lower semicontinuity concepts as follows. Let Ξ be a metric space and let Φ : Ξ → ℝ be a Borel measurable function. Then the upper semicontinuous (USC) envelope Φ̄ : Ξ → ℝ and the lower semicontinuous (LSC) envelope Φ̲ : Ξ → ℝ of Φ are defined respectively by

    Φ̄(x) = lim sup_{y→x, y∈Ξ} Φ(y)  and  Φ̲(x) = lim inf_{y→x, y∈Ξ} Φ(y).
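These envelopes are easy to approximate on a grid. The following Python sketch is our own toy illustration (not an example from the text): for a step function, the USC envelope at the jump picks up the larger one-sided value, while the LSC envelope picks up the smaller one.

```python
def usc_envelope(f, grid, x, delta=1e-6):
    """sup of f over grid points within delta of x, approximating
    the USC envelope lim sup_{y -> x} f(y)."""
    return max(f(y) for y in grid if abs(y - x) <= delta)

def lsc_envelope(f, grid, x, delta=1e-6):
    """inf of f over grid points within delta of x (LSC envelope)."""
    return min(f(y) for y in grid if abs(y - x) <= delta)

f = lambda y: 1.0 if y >= 0 else 0.0          # step function with a jump at 0
grid = [k * 1e-7 for k in range(-100, 101)]   # grid points straddling the jump

up = usc_envelope(f, grid, 0.0)
lo = lsc_envelope(f, grid, 0.0)
```

Here up = 1.0 = f(0) (this f is already USC at the jump) while lo = 0.0, exhibiting the general sandwich Φ̲ ≤ Φ ≤ Φ̄ noted below.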

We let USC(Ξ) and LSC(Ξ) denote the sets of USC and LSC functions on Ξ, respectively. Note that, in general, one has Φ̲ ≤ Φ ≤ Φ̄, that Φ is USC if and only if Φ = Φ̄, and that Φ is LSC if and only if Φ = Φ̲. In particular, Φ is continuous if and only if Φ̄ = Φ = Φ̲.

Let (ℝ × L2ρ)∗ and (ℝ × L2ρ)† be the spaces of bounded linear and bilinear functionals equipped with the usual operator norms ‖·‖∗ and ‖·‖†, respectively. To define a viscosity solution, let us consider the following equation:

    F(A, S, ∂x, Vκ, (x, ξ, ψ(0), ψ)) = 0,  ∀(x, ξ, ψ(0), ψ) ∈ Sκ,    (7.112)

where

    F : (ℝ × L2ρ)† × L(ℝ × L2ρ) × ℝ × Sκ × Sκ → ℝ

is defined by

    F = max{Λ(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)), (MκΦ − Φ)(x, ξ, ψ(0), ψ)}  on S◦κ,
    F = Λ(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ))  on ∪_{I⊂ℵ0} ∂_{+,I,2}Sκ,
    F = Λ0(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ))  on ∪_{I⊂ℵ0} ∂_{−,I,2}Sκ,
    F = (MΦ − Φ)(x, ξ, ψ(0), ψ)  on ∪_{I⊂ℵ0} ∂_{I,1}Sκ,    (7.113)


Λ(A, S, ∂x , Φ, (x, ξ, ψ(0), ψ)) = AΦ(x, ξ, ψ(0), ψ), and

Λ0 (A, S, ∂x , Φ, (x, ξ, ψ(0), ψ)) = L0 Φ(x, ξ, ψ(0), ψ).

Note that F(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)) = 0 is the HJBQVI (*),

    F̄(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)) = max{Λ(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)), (MκΦ − Φ)(x, ξ, ψ(0), ψ)},  ∀(x, ξ, ψ(0), ψ) ∈ Sκ,    (7.114)

and F̲(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)) = F(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)).

Definition 7.6.1 (i) A function Φ ∈ USC(Sκ) is said to be a viscosity subsolution of (7.112) if for every function Ψ ∈ C^{1,0,2,2}_{lip}(Sκ) ∩ D(S) and for every (x, ξ, ψ(0), ψ) ∈ Sκ such that Ψ ≥ Φ on Sκ and Ψ(x, ξ, ψ(0), ψ) = Φ(x, ξ, ψ(0), ψ), we have

    F̄(A, S, ∂x, Ψ, (x, ξ, ψ(0), ψ)) ≥ 0.    (7.115)

(ii) A function Φ ∈ LSC(Sκ) is a viscosity supersolution of (7.112) if for every function Ψ ∈ C^{1,0,2,2}_{lip}(Sκ) ∩ D(S) and for every (x, ξ, ψ(0), ψ) ∈ Sκ such that Ψ ≤ Φ on Sκ and Ψ(x, ξ, ψ(0), ψ) = Φ(x, ξ, ψ(0), ψ), we have

    F̲(A, S, ∂x, Ψ, (x, ξ, ψ(0), ψ)) ≤ 0.    (7.116)

(iii) A locally bounded function Φ : Sκ → ℝ is a viscosity solution of (7.112) if Φ̄ is a viscosity subsolution and Φ̲ is a viscosity supersolution of (7.112).

The following properties of the intervention operator Mκ can be established.

Lemma 7.6.2 The following statements hold regarding Mκ defined by (7.35).
(i) If Φ : Sκ → ℝ is USC, then MκΦ is USC.
(ii) If Φ : Sκ → ℝ is continuous, then MκΦ is continuous.
(iii) Let Φ : Sκ → ℝ. Then MκΦ ≤ MκΦ̄.
(iv) Let Φ : Sκ → ℝ be such that Φ ≥ MκΦ. Then Φ̲ ≥ MκΦ̲.
(v) Suppose Φ : Sκ → ℝ is USC and Φ(x, ξ, ψ(0), ψ) > MκΦ(x, ξ, ψ(0), ψ) + ε for some (x, ξ, ψ(0), ψ) ∈ Sκ and ε > 0. Then Φ(x, ξ, ψ(0), ψ) > MκΦ(x, ξ, ψ(0), ψ) + ε.


Proof. We only provide a proof of part (i); parts (ii)-(v) follow as consequences of part (i). Let (x, ξ, ψ(0), ψ) ∈ Sκ with I = {i ∈ ℵ0 | n(−i) < 0} and I^c = ℵ0 − I = {i ∈ ℵ0 | n(−i) ≥ 0}. Define

    P(x, ξ, ψ(0), ψ) = {(x̂, ξ̂, ψ̂(0), ψ̂) ∈ Sκ | ζ ∈ R(ξ) − {0}} = P+(x, ξ, ψ(0), ψ) ∪ P−(x, ξ, ψ(0), ψ),    (7.117)

where

    P+(x, ξ, ψ(0), ψ) = {(x̂, ξ̂, ψ̂(0), ψ̂) ∈ Sκ | m(0) ≥ 0; 0 ≤ m(−i) ≤ −n(−i) for i ∈ I − {0}; and −n(−i) ≤ m(−i) ≤ 0 for i ∈ I^c − {0}},
    P−(x, ξ, ψ(0), ψ) = {(x̂, ξ̂, ψ̂(0), ψ̂) ∈ Sκ | m(0) < 0; 0 ≤ m(−i) ≤ −n(−i) for i ∈ I − {0}; and −n(−i) ≤ m(−i) ≤ 0 for i ∈ I^c − {0}},

x̂ and ξ̂ are as defined in (7.36) and (7.37), and (ψ̂(0), ψ̂) = (ψ(0), ψ) due to the continuity and the uncontrollability (by the investor) of the stock prices.

We claim that for each (x, ξ, ψ(0), ψ) ∈ S◦κ, both P+(x, ξ, ψ(0), ψ) and P−(x, ξ, ψ(0), ψ) are compact subsets of Sκ. To see this, we consider the following two cases.

Case (i). Gκ(x, ξ, ψ(0), ψ) ≥ 0. In this case P+(x, ξ, ψ(0), ψ) intersects the hyperplane ∂_{I−{0},1}Sκ (since m(0) > 0). By the facts that 0 ≤ m(−i) ≤ −n(−i) for i ∈ I − {0} and that n(−i) = 0 for all but finitely many i ∈ ℵ0, as required in (7.2), P+(x, ξ, ψ(0), ψ) is compact.

Case (ii). Gκ(x, ξ, ψ(0), ψ) < 0. In this case, P+(x, ξ, ψ(0), ψ) is bounded by the set {(x, ξ, ψ(0), ψ) | Gκ(x, ξ, ψ(0), ψ) = 0} and the boundary of [0, ∞) × N+ × ℝ × L2ρ,+.

From Case (i) and Case (ii), P+(x, ξ, ψ(0), ψ) is a compact subset of Sκ. The compactness of P−(x, ξ, ψ(0), ψ) can be proved in a similar manner.

Since Φ is USC on P(x, ξ, ψ(0), ψ), there exists (x∗, ξ∗, ψ∗(0), ψ∗) ∈ P(x, ξ, ψ(0), ψ) such that

    MκΦ(x, ξ, ψ(0), ψ) = sup{Φ(x̂, ξ̂, ψ̂(0), ψ̂) | ζ ∈ R(ξ) − {0}} = Φ(x∗, ξ∗, ψ∗(0), ψ∗).

Fix (x(0), ξ(0), ψ(0)(0), ψ(0)) ∈ Sκ and let {(x(k), ξ(k), ψ(k)(0), ψ(k))}_{k=1}^∞ be a sequence in Sκ such that


(x(k), ξ(k), ψ(k)(0), ψ(k)) → (x(0), ξ(0), ψ(0)(0), ψ(0)) as k → ∞.

To show that MκΦ is USC, we must show that

    MκΦ(x(0), ξ(0), ψ(0)(0), ψ(0)) ≥ lim sup_{k→∞} MκΦ(x(k), ξ(k), ψ(k)(0), ψ(k)) = lim sup_{k→∞} Φ(x(k)∗, ξ(k)∗, ψ(k)∗(0), ψ(k)∗).

Let (x̃, ξ̃, ψ̃(0), ψ̃) be a cluster point of {(x(k)∗, ξ(k)∗, ψ(k)∗(0), ψ(k)∗)}_{k=1}^∞; that is, (x̃, ξ̃, ψ̃(0), ψ̃) is the limit of some convergent subsequence {(x(kj)∗, ξ(kj)∗, ψ(kj)∗(0), ψ(kj)∗)}_{j=1}^∞ of {(x(k)∗, ξ(k)∗, ψ(k)∗(0), ψ(k)∗)}_{k=1}^∞. Since

    (x(k), ξ(k), ψ(k)(0), ψ(k)) → (x(0), ξ(0), ψ(0)(0), ψ(0)),

we see that P(x(k), ξ(k), ψ(k)(0), ψ(k)) → P(x(0), ξ(0), ψ(0)(0), ψ(0)) in the Hausdorff distance. Hence, since (x(kj)∗, ξ(kj)∗, ψ(kj)∗(0), ψ(kj)∗) ∈ P(x(kj), ξ(kj), ψ(kj)(0), ψ(kj)) for all j, we conclude that

    (x̃, ξ̃, ψ̃(0), ψ̃) = lim_{j→∞} (x(kj)∗, ξ(kj)∗, ψ(kj)∗(0), ψ(kj)∗) ∈ P(x(0), ξ(0), ψ(0)(0), ψ(0)).

Therefore,

    MκΦ(x(0), ξ(0), ψ(0)(0), ψ(0)) ≥ Φ(x̃, ξ̃, ψ̃(0), ψ̃) ≥ lim sup_{j→∞} Φ(x(kj)∗, ξ(kj)∗, ψ(kj)∗(0), ψ(kj)∗) = lim sup_{k→∞} MκΦ(x(k), ξ(k), ψ(k)(0), ψ(k)). □

Theorem 7.6.3 Suppose α > λγ. Then the value function Vκ : Sκ →  defined by (7.13) is a viscosity solution of the HJBQVI (*). The theorem can be proved by verifying the following two propositions, the first of which shows that the value function is a viscosity supersolution of the HJBQVI (*) and the second shows that the value function is a viscosity subsolution of the HJBQVI (*).


Proposition 7.6.4 The lower semicontinuous envelope V̲κ : Sκ → ℝ of the value function Vκ is a viscosity supersolution of the HJBQVI (*).

Proof. Let Φ : Sκ → ℝ be any smooth function with Φ ∈ C^{1,0,2,2}_{lip}(O) ∩ D(S) on a neighborhood O of Sκ, and let (x, ξ, ψ(0), ψ) ∈ Sκ be such that Φ ≤ V̲κ on Sκ and Φ(x, ξ, ψ(0), ψ) = V̲κ(x, ξ, ψ(0), ψ). We need to prove that

    F̲(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)) = F(A, S, ∂x, Φ, (x, ξ, ψ(0), ψ)) ≤ 0.

Note that by Lemma 7.6.2(iv),

    MκVκ ≤ Vκ ⇒ MκV̲κ = MκΦ ≤ V̲κ = Φ on Sκ.

In particular, this inequality holds on ∪_{I⊂ℵ0} ∂_{I,1}Sκ. Therefore, we only need to show that

    AΦ ≤ 0 on S◦κ ∪ (∪_{I⊂ℵ0} ∂_{+,I,2}Sκ)  and  L0Φ ≤ 0 on ∪_{I⊂ℵ0} ∂_{−,I,2}Sκ.

For ε > 0, let B(ε) = B(ε; (x, ξ, ψ(0), ψ)) be the open ball in Sκ centered at (x, ξ, ψ(0), ψ) and with radius ε > 0. Let π(ε) = (Cε, Tε) ∈ Uκ(x, ξ, ψ(0), ψ) be the admissible strategy beginning with a constant consumption rate C(t) = c ≥ 0 and no transactions up to the first time τ(ε) at which the controlled state process {(X(t), Nt, S(t), St), t ≥ 0} exits from the set B(ε). Note that τ(ε) > 0, P-a.s., since there is no transaction and the controlled state process {(X(t), Nt, S(t), St), t ≥ 0} is continuous on B(ε). Choose (x(k), ξ(k), ψ(k)(0), ψ(k)) ∈ B(ε) such that (x(k), ξ(k), ψ(k)(0), ψ(k)) → (x, ξ, ψ(0), ψ) and

    V̲κ(x(k), ξ(k), ψ(k)(0), ψ(k)) → V̲κ(x, ξ, ψ(0), ψ) as k → ∞.

Then by the Bellman DPP (see Proposition 7.3.1), we have for all k,

    V̲κ(x(k), ξ(k), ψ(k)(0), ψ(k))
      ≥ E^(k)[∫_0^{τ(ε)} e^{−αt}(c^γ/γ) dt + e^{−ατ(ε)} V̲κ(X(τ(ε)), N_{τ(ε)}, S(τ(ε)), S_{τ(ε)})]
      ≥ E^(k)[∫_0^{τ(ε)} e^{−αt}(c^γ/γ) dt + e^{−ατ(ε)} Φ(X(τ(ε)), N_{τ(ε)}, S(τ(ε)), S_{τ(ε)})],

where E^(k)[···] denotes E^{x(k),ξ(k),ψ(k)(0),ψ(k);π(ε)}[···], the conditional expectation given the initial state (x(k), ξ(k), ψ(k)(0), ψ(k)) and the strategy π(ε). In particular, for all 0 ≤ t ≤ τ(ε),


    0 = V̲κ(x, ξ, ψ(0), ψ) − Φ(x, ξ, ψ(0), ψ)
      = lim_{k→∞} [V̲κ(x(k), ξ(k), ψ(k)(0), ψ(k)) − Φ(x(k), ξ(k), ψ(k)(0), ψ(k))]
      ≥ lim_{k→∞} E^(k)[∫_0^t e^{−αs}(c^γ/γ) ds + e^{−αt} V̲κ(X(t), Nt, S(t), St) − Φ(x(k), ξ(k), ψ(k)(0), ψ(k))]
      = E[∫_0^t e^{−αs}(c^γ/γ) ds + e^{−αt} Φ(X(t), Nt, S(t), St) − Φ(x, ξ, ψ(0), ψ)].

Dividing both sides of the inequality by t and letting t → 0, we have from Dynkin's formula or Itô's formula (Theorem 7.2.6) that

    0 ≥ lim_{t→0} (1/t) E[∫_0^t e^{−αs}(c^γ/γ) ds + e^{−αt} Φ(X(t), Nt, S(t), St) − Φ(x, ξ, ψ(0), ψ)]
      = c^γ/γ + LcΦ(x, ξ, ψ(0), ψ),

where LcΦ(x, ξ, ψ(0), ψ) = (A − α)Φ(x, ξ, ψ(0), ψ) + (rx − c)∂xΦ(x, ξ, ψ(0), ψ). We conclude from the above that

    LcΦ(x, ξ, ψ(0), ψ) + c^γ/γ ≤ 0    (7.118)

for all c ≥ 0 such that π(ε) ∈ Uκ(x, ξ, ψ(0), ψ) for ε > 0 small enough. This implies that

    AΦ(x, ξ, ψ(0), ψ) ≡ sup_{c≥0} [LcΦ(x, ξ, ψ(0), ψ) + c^γ/γ] ≤ 0.

If (x, ξ, ψ(0), ψ) ∈ S◦κ ∪ (∪_{I⊂ℵ0} ∂_{+,I,2}Sκ), then this is clearly the case for all c ≥ 0, and therefore AΦ(x, ξ, ψ(0), ψ) ≤ 0 as required by HJBQVI (*). If (x, ξ, ψ(0), ψ) ∈ ∪_{I⊂ℵ0} ∂_{−,I,2}Sκ, then the only such admissible c is c = 0, and we get L0Φ(x, ξ, ψ(0), ψ) ≤ 0 as required. This proves the proposition. □

for all c ≥ 0 such that π() ∈ Uκ (x, ξ, ψ(0), ψ) for  > 0 small enough. This implies that  cγ  AΦ(x, ξ, ψ(0), ψ) ≡ sup Lc Φ(x, ξ, ψ(0), ψ) + ≤ 0. γ c≥0 # If (x, ξ, ψ(0), ψ) ∈ S◦κ ∪ I⊂ℵ0 ∂+,I,2 Sκ , then this is clearly the case for all c ≥ 0, and, therefore, HJBQVI (*) implies that AΦ(x, ξ, ψ(0), ψ) ≤ 0. If # (x, ξ, ψ(0), ψ) ∈ I⊂ℵ0 ∂−,I,2 Sκ , then the only such admissible c is c = 0. Therefore, we get L0 Φ(x, ξ, ψ(0), ψ) ≤ 0 as required. This proves the proposition. 2 Proposition 7.6.5 Suppose α > λγ. Then the upper semicontinuous envelope Vκ : Sκ →  of the value function Vκ is a viscosity subsolution of the QVHJBI (*).


Proof. Suppose π = (C, T) ∈ Uκ(x, ξ, ψ(0), ψ). Since τ(1) is a G-stopping time, the event {τ(1) = 0} is G(0)-measurable. By the well-known zero-one law (see [KS91, p.94]), one has either

    τ(1) = 0, P-a.s.,  or  τ(1) > 0, P-a.s.    (7.119)

We first assume τ(1) > 0, P-a.s. Then by the Markov property, the cost functional Jκ(x, ξ, ψ(0), ψ; π) satisfies the following relation: for P-a.s. ω,

    e^{−ατ} Jκ(X(τ), Nτ, S(τ), Sτ; π) = E[∫_τ^∞ e^{−αs} (C^γ(s)/γ) ds | G(τ)](ω).

Hence,

    Jκ(x, ξ, ψ(0), ψ; π) = E^{x,ξ,ψ(0),ψ;π}[∫_0^τ e^{−αs}(C^γ(s)/γ) ds + e^{−ατ} Jκ(X(τ), Nτ, S(τ), Sτ; π)]    (7.120)

for all G-stopping times τ ≤ τ(1). It is clear that

    Vκ(x, ξ, ψ(0), ψ) ≥ MκVκ(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ.    (7.121)

cˆγ γ

386

7 Hereditary Portfolio Optimization

1 ˜ ψ(0), ˜ ˜ ∈ G∩Sκ . Hence, AΦ is continuous with cˆ = cˆ = (∂x Φ) γ−1 for all (˜ x, ξ, ψ) ¯ of (x, ξ, ψ(0), ψ) on G∩Sκ and so there exists a (bounded) neighborhood G(K) such that

¯ = G(x, ξ, ψ(0), ψ; c) G(K)  ˜ ψ(0), ˜ ˜ | |x − x ¯ = (˜ x, ξ, ψ) ˜| < K, ˜ N < K, ˜ ˜ M 0 and for some K ˜ ψ(0), ˜ ˜ < 1 AΦ(x, ξ, ψ(0), ψ) < 0, AΦ(˜ x, ξ, ψ) 2

˜ ψ(0), ˜ ˜ ∈ G(K) ¯ ∩ Sκ . ∀(˜ x, ξ, ψ)

(7.122) Now, since Vκ (x, ξ, ψ(0), ψ) > Mκ Vκ (x, ξ, ψ(0), ψ), let η be any number such that (7.123) 0 < η < (Vκ − Mκ Vκ )(x, ξ, ψ(0), ψ). Since Vκ (x, ξ, ψ(0), ψ) > Mκ Vκ (x, ξ, ψ(0), ψ) + η, we can, by Lemma 7.6.2(v), ¯ find a sequence {(x(k) , ξ (k) , ψ (k) (0), ψ (k) }∞ k=1 ⊂ G(K) ∩ Sκ such that (x(k) , ξ (k) , ψ (k) (0), ψ (k) ) → (x, ξ, ψ(0), ψ) and

Vκ (x(k) , ξ (k) , ψ (k) (0), ψ (k) ) → Vκ (x, ξ, ψ(0), ψ) as k → ∞

and for all k ≥ 1, Mκ Vκ (x(k) , ξ (k) , ψ (k) (0), ψ (k) ) < Vκ (x(k) , ξ (k) , ψ (k) (0), ψ (k) ) − η.

(7.124)

Choose  ∈ (0, η). Since Vκ (x, ξ, ψ(0), ψ) = Φ(x, ξ, ψ(0), ψ), we can choose K > 0 (a positive integer) such that for all n ≥ K, |Vκ (x(k) , ξ (k) , ψ (k) (0), ψ (k) ) − Φ(x(k) , ξ (k) , ψ (k) (0), ψ (k) )| < .

(7.125)

In the following, we fix n ≥ K and put ˜ ψ(0), ˜ ˜ = (x(k) , ξ (k) , ψ (k) (0), ψ (k) ). (˜ x, ξ, ψ) ˜ ˜ T˜ ) with T˜ = {(˜ Let π ˜ = (C, τ (i), ζ(i)), i = 1, 2, . . .} be an -optimal control ˜ ˜ ˜ for (˜ x, ξ, ψ(0), ψ) in the sense that ˜ ψ(0), ˜ ˜ ≤ Jκ (˜ ˜ ψ(0), ˜ ˜ π Vκ (˜ x, ξ, ψ) x, ξ, ψ; ˜ ) + . We claim that τ˜(1) > 0, P -a.s.. If this were false, then τ˜(1) = 0, P -a.s. by the zero-one law (see (7.119)). Then the state process {Z π˜ (t) ≡ (X π˜ (t), Ntπ˜ , S π˜ (t), Stπ˜ ), t ≥ 0} makes ˜ ψ(0), ˜ ˜ to some point (ˆ ˆ ψ(0), ˆ ˆ ∈ Sκ an immediate jump from (˜ x, ξ, ψ) x, ξ, ψ) according to (7.36)-(7.38), and, hence, by its definition,

7.6 The Viscosity Solution

387

˜ ψ(0), ˜ ˜ π ˆ ψ(0), ˆ ˆ π Jκ (˜ x, ξ, ψ; ˜ ) = E x,ξ,ψ(0),ψ;˜π [Jκ (ˆ x, ξ, ψ; ˜ )] ˜ ψ(0), ˜ ˜ and Denoting the conditional expectation given the initial state (˜ x, ξ, ψ) ˜ the strategy π ˜ by E[· · · ], we have ˜ ψ(0), ˜ ˜ ≤ Jκ (˜ ˜ ψ(0), ˜ ˜ π x, ξ, ψ) x, ξ, ψ; ˜) +  Vκ (˜ ˆ ˆ ˆ π ˜ = E[Jκ (ˆ x, ξ, ψ(0), ψ; ˜ )] +  ˆ ψ(0), ˆ ˆ + ˜ κ (ˆ ≤ E[V x, ξ, ψ)] ˜ ψ(0), ˜ ˜ + , ≤ Mκ Vκ (˜ x, ξ, ψ)] which contradicts (7.119). We therefore conclude that τ˜(1) > 0, P -a.s.. Fix R > 0 and define the G-stopping time τ by ¯ τ = τ () = τ˜(1) ∧ R ∧ inf{t > 0 | Z π˜ (t) ∈ / G(K)}. By the Dynkin formula (see Theorem 7.2.6), we have    τ  ˜ ψ(0), ˜ ˜ +E ˜ e−ατ Φ(Z π˜ (τ )) = Φ(˜ ˜ E x, ξ, ψ) e−αt Lc˜Φ(Z π˜ (t) dt 0    −ατ π ˜ ˜ e Φ(Z (τ ) − Φ(Z π˜ (τ −)) , +E or

    Jκ(x̃, ξ̃, ψ̃(0), ψ̃; π̃) = E^{x̃,ξ̃,ψ̃(0),ψ̃;π̃}[Jκ(x̂, ξ̂, ψ̂(0), ψ̂; π̃)].

Denoting the conditional expectation given the initial state (x̃, ξ̃, ψ̃(0), ψ̃) and the strategy π̃ by Ẽ[···], we have

    Vκ(x̃, ξ̃, ψ̃(0), ψ̃) ≤ Jκ(x̃, ξ̃, ψ̃(0), ψ̃; π̃) + ε
      = Ẽ[Jκ(x̂, ξ̂, ψ̂(0), ψ̂; π̃)] + ε
      ≤ Ẽ[V̄κ(x̂, ξ̂, ψ̂(0), ψ̂)] + ε
      ≤ MκV̄κ(x̃, ξ̃, ψ̃(0), ψ̃) + ε,

which contradicts (7.124), since ε < η. We therefore conclude that τ̃(1) > 0, P-a.s. Fix R > 0 and define the G-stopping time τ by

    τ = τ(ε) = τ̃(1) ∧ R ∧ inf{t > 0 | Z^π̃(t) ∉ Ḡ(K̄)}.

By the Dynkin formula (see Theorem 7.2.6), we have

    Ẽ[e^{−ατ}Φ(Z^π̃(τ))] = Φ(x̃, ξ̃, ψ̃(0), ψ̃) + Ẽ[∫_0^τ e^{−αt} L^c̃Φ(Z^π̃(t)) dt] + Ẽ[e^{−ατ}(Φ(Z^π̃(τ)) − Φ(Z^π̃(τ−)))],

or

    Ẽ[e^{−ατ}Φ(Z^π̃(τ−))] = Φ(x̃, ξ̃, ψ̃(0), ψ̃) + Ẽ[∫_0^τ e^{−αt} L^c̃Φ(Z^π̃(t)) dt].

Since Vκ ≥ MκVκ,

    Vκ(x̃, ξ̃, ψ̃(0), ψ̃) ≤ Jκ(x̃, ξ̃, ψ̃(0), ψ̃; π̃) + ε
      = Ẽ[∫_0^τ e^{−αt}(C̃^γ(t)/γ) dt + Jκ(Z^π̃(τ); π̃)] + ε
      ≤ Ẽ[∫_0^τ e^{−αt}(C̃^γ(t)/γ) dt + e^{−ατ}Vκ(Z^π̃(τ))] + ε
      ≤ Ẽ[∫_0^τ e^{−αt}(C̃^γ(t)/γ) dt + e^{−ατ}Vκ(Z^π̃(τ−))χ{τ<τ̃(1)}] + Ẽ[e^{−ατ}(Vκ(Z^π̃(τ)) − Vκ(Z^π̃(τ−)))χ{τ̃(1)≤τ}] + ε
      ≤ Ẽ[K̄(x − (x − λ̄)e^{−λτ})^γ] + Ẽ[e^{−ατ}Vκ(Z^π̃(τ−))χ{τ̃(1)>τ}] + Ẽ[e^{−ατ}MκVκ(Z^π̃(τ−))χ{τ̃(1)≤τ}]
      ≤ Ẽ[K̄(x − (x − λ̄)e^{−λτ})^γ]
        + Ẽ[e^{−ατ}χ{τ̃(1)>τ}] × sup_{(x̃,ξ̃,ψ̃(0),ψ̃)∈Ḡ(K̄)} Vκ(x̃, ξ̃, ψ̃(0), ψ̃)
        + Ẽ[e^{−ατ}χ{τ̃(1)≤τ}] × sup_{(x̃,ξ̃,ψ̃(0),ψ̃)∈Ḡ(K̄)} MκVκ(x̃, ξ̃, ψ̃(0), ψ̃).    (7.129)

Now, if there exists a sequence εj → 0 and a subsequence {(x(kj), ξ(kj), ψ(kj)(0), ψ(kj))} of {(x(k), ξ(k), ψ(k)(0), ψ(k))} such that

    E^(kj)[e^{−ατ(εj)}] → 1 as j → ∞,

then

    Ẽ[e^{−ατ}χ{τ̃(1)>τ}] → 0 as j → ∞;

so by choosing (x, ξ, ψ(0), ψ) = (x(kj), ξ(kj), ψ(kj)(0), ψ(kj)), τ = τ(εj), and letting k → ∞, we obtain

    V̄κ(x, ξ, ψ(0), ψ) ≤ K̄λ̄^γ + sup_{(x,ξ,ψ(0),ψ)∈Ḡ(K̄)} MκVκ(x, ξ, ψ(0), ψ).

Hence, by Lemma 7.6.2,

    V̄κ(x, ξ, ψ(0), ψ) ≤ lim_{λ̄→0} (K̄λ̄^γ + sup_{(x,ξ,ψ(0),ψ)∈Ḡ(K̄)} MκVκ(x, ξ, ψ(0), ψ))
      = MκVκ(x, ξ, ψ(0), ψ) ≤ MκV̄κ(x, ξ, ψ(0), ψ) < V̄κ(x, ξ, ψ(0), ψ) − η.

This contradicts (7.123). This contradiction proves the proposition and completes the proof that V̄κ is a viscosity subsolution. □


7.7 Uniqueness

The proof that the value function Vκ : Sκ → ℝ is the unique viscosity solution of the HJBQVI (*) has not been established at this point. However, we make a conjecture on the comparison principle for viscosity super- and subsolutions, as follows.

Conjecture on Comparison Principle. (i) Suppose Φ is a viscosity subsolution and Ψ is a viscosity supersolution of the HJBQVI (*), and that Φ and Ψ satisfy the following estimates for some C < ∞:

    −CΨ0(x, ξ, ψ(0), ψ) ≤ Φ(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ,    (7.130)
    Φ(x, ξ, ψ(0), ψ) ≤ CΨ0(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ.    (7.131)

Then Φ ≤ Ψ on S◦κ.

(ii) Moreover, if, in addition,

    Ψ(x, ξ, ψ(0), ψ) = lim inf_{(x̃,ξ̃,ψ̃(0),ψ̃)→(x,ξ,ψ(0),ψ), (x̃,ξ̃,ψ̃(0),ψ̃)∈S◦κ} Ψ(x̃, ξ̃, ψ̃(0), ψ̃),  ∀(x, ξ, ψ(0), ψ) ∈ ∂Sκ,    (7.132)

then Φ ≤ Ψ on Sκ.

The following uniqueness result follows immediately from the Conjecture on Comparison Principle described above.

Theorem 7.7.1 Suppose α > γb. Then the value function Vκ : Sκ → ℝ defined in (7.13) is continuous on S◦κ and is the unique viscosity solution of HJBQVI (*) with the property that there exists C < ∞ such that

    Vκ(x, ξ, ψ(0), ψ) ≤ CΨ0(x, ξ, ψ(0), ψ),  ∀(x, ξ, ψ(0), ψ) ∈ Sκ.    (7.133)

Proof. It has been shown in Section 7.6 that the value function Vκ is continuous on S◦κ and is a viscosity solution of HJBQVI (*) provided that α > γb. Suppose Φ : Sκ → ℝ is another viscosity solution of HJBQVI (*). Then Φ̄ is a viscosity subsolution and Φ̲ is a viscosity supersolution, and similarly for Vκ. Hence, by the Conjecture on Comparison Principle (provided it can be proved), we have

    Φ̄ ≤ V̲κ ≤ V̄κ ≤ Φ̲ ≤ Φ̄ on S◦κ.

Therefore, Vκ = Φ on Sκ. This proves the theorem. □


7.8 Conclusions and Remarks

Instead of developing a general theory for optimal classical-impulse control problems, we focus on a specific problem that arises from a hereditary portfolio optimization problem in which a small investor's portfolio consists of a savings account that grows continuously with a constant interest rate and an account of one underlying stock whose prices follow a nonlinear stochastic hereditary differential equation with infinite but fading memory. The investor is allowed to consume from the savings account and to make transactions between the savings and stock accounts subject to Rules 7.1-7.6, which govern the transactions, transaction costs, and capital gains taxes. The capital gains tax is proportional to the amount of profit in selling or buying back shares of the stock. A profit or a loss is defined to be the difference between the sale price and the base price of the stock, where the base price is the price at which shares of the stock were bought or short sold in the past. Within the solvency region, the investor's objective is to maximize a certain expected discounted utility from consumption over an infinite time horizon, using an admissible consumption-trading strategy.

The results obtained in this chapter include (1) the derivation of an infinite-dimensional Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI) together with its boundary conditions; (2) the establishment of the verification theorem for the optimal consumption-trading strategy; (3) properties of and upper bounds for the value function; and (4) the introduction of a viscosity solution concept and a proof that the value function is a viscosity solution of the HJBQVI. In addition, it is conjectured that the value function is the unique viscosity solution of the HJBQVI. Issues that have not been addressed in this chapter include a proof of the conjecture and computational methods for the optimal strategy and the value function.

The material presented in this chapter is based on two pioneering papers by the author (see [Cha07a, Cha07b]), in which the term "hereditary portfolio optimization" appeared for the first time in the literature. The hereditary nature of the problem comes from the facts that the stock price dynamics is described by a nonlinear stochastic hereditary differential equation with infinite but fading memory and that the stock base prices for calculating capital gains taxes depend on the historical stock prices. Based on the stock price dynamics and Rules 7.1-7.6, there are many open problems, including the pricing of European and American options, the optimal terminal wealth problem, the optimal ergodic control problem, and so forth. Due to the presence of transaction costs (fixed and proportional) and capital gains taxation, it is anticipated that the market is not complete, and the open problems mentioned above are expected to be extremely challenging.

References

[Ada75] [AMS96]

[AST01]

[AP93] [Arr97]

[AHM07]

[BM06] [BS91]

[Bat96] [BR05] [BL81] [BL84] [BS05] [BBM03]

[Bil99]

Adams, R.A.: Sobolev Spaces, Academic Press, New Yor (1975) Akian, M., Menaldi, J.L., Sulem, A.: On an investment-consumption model with transaction costs. SIAM J. Control & Optimization, 34, 329–364 (1996) M. Akian, M., Sulem, A., M. I. Taksar, M.I.: Dynamic optimization of long term growth rate for a portfolio with transaction costs and logrithmic utility. Mathematical Finance. 11, 153-188 (2001) Ambrosetti, A., Prodi, G.: A Primer of Nonlinear Analysis. Cambridge University Press (1993) Arriojas, M.: A Stochastic Calculus for Functional Differential Equations, Doctoral Dissertation, Department of Mathematics, Southern Illinois University at Carbondale (1997) Arriojas, M., Hu, Y., S.-E. Mohammed, S.-E., Pap, G.: A delayed Balck and Scholes formula. Stochastic Analysis and Applications, 25, 471-492 (2007) Barbu, V., Marinelli, C.: Variational inequlities in Hilbert spaces with measures and optimal stopping. preprint (1996) Barles, G., Souganidis, P.E.: Convergence of approximative schemes for fully nolinear second order equations. J. Asymptotic Analysis., 4, 557– 579 (1991) Bates, D.S.: Testing option pricing models. Stochastic Models in Finance, Handbook of Statistics, 14, 567-611 (1996) Bauer, H., Rieder, U.: Stochastic control problems with delay. Math. Meth. Oper. Res., 62(3), 411-427 (2005) Bensousan, A., Lions, J.L.: Applications of Variational Inequalities., North-Holland, Amsterdam New-York (1981) Bensoussan, A., Lions, J.L.: Impulse Control and Quasi-Variational Inequalities. Gauthier-Villars, Paris (1984) Bensoussan, A., Sulem, A.: Controlled Jumped Diffusion., SpringerVerlag, New York (2005) Beta, C., Bertram, M., Mikhailov, A.S., Rotermund, H.H., Ertl, G.: Controlling turbulence in a surface chemical reaction by time-delay autosynchronization. Phys. Rev. E., 67(2), 046224.1–046224.10 (2003) Billingsley, P.: Convergence of Probability Measures. 2nd edition, Wiley, New York (1999)

394 [BS73]

References

Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Political Economy., 81, 637–659 (1973) [Bou79] Bourgain, J.: La propriete de Radon-Nikodym. Cours polycopie, 36, universite P.et M. Curie, Paris (1979) [BO98] Brekke, K.A., Oksendal, B.: A verification theorem for combined stochastic control and impulse control. In: Stochastic Analysis and Related Topics., Vol. 6, Progress in Probability 42, pp. 211–220, Birkhauser Boston, Cambridge, MA (1998) [CP99] Cadenillas, A., Pliska, S.R.: Optimal trading of a security when there are taxes and transaction costs. Finance and Stochastics., 3, 137–165 (1999) [CZ00] Cadenillas, A., Zapataro, P.: Classical and impulse stochastic control of the exchange rate using interest rates and reserves. Mathematical Finance., 10, 141-156 (2000) [CFN06] Calzolari, A., Florchinger, P., Nappo, G.: Approximation of nonlinear filtering for Markov systems with delayed observations. SIAM J. Control Optim., 45, 599–633 (2006) [CFN07] Calzolari, A., Florchinger, P., Nappo, G.: Convergence in nonlinear filtering for stochastic delay systems. Preprint (2007) [Cha84] Chang, M.H.: On Razumikhin-type stability conditions for stochastic functional differential equations. Mathematical Modelling, 5, 299–307 (1984) [Cha87] Chang, M.H.: Discrete approximation of nonlinear filtering for stochastic delay equations. Stochastic Anal. Appl., 5, 267–298 (1987) [Cha95] Chang, M.H.: Weak infinitesimal generator for a stochastic partial differential equation with time delay. J. of Applied Math. & Stochastic Analysis, 8,115–138 (1995) [Cha07a] Chang, M.H.: Hereditary Portfolio Optimization with Taxes and Fixed Plus Proportional Transaction I. J. Applied Math & Stochastic Analysis. (2007) [Cha07b] Chang, M.H.: Hereditary Portfolio Optimization with Taxes and Fixed Plus Proportional Transaction II. J. Applied Math & Stochastic Analysis. 
(2007) [CPP07a] Chang, M.H., Pang, T., Pemy, M.: Optimal stopping of stochastic functional differential equations. preprint (2007). [CPP07b] Chang, M.H., Pang, T., Pemy, M: Optimal control of stochastic functional differential equations with a bounded memory. preprint (2007) [CPP07c] Chang, M.H., Pang, T., Pemy, M.: Finite difference approximation of stochastic control systems with delay., preprint (2007) [CPP07d] Chang, M.H., Pang, T., Pemy, M.: Viscosity solution of infinite dimensional Black-Scholes equation and numerical approximations. preprint (2007) [CPP07e] Chang, M.H., Pang, T., Pemy, M.: Numerical methods for stochastic optimal stopping problems with delays. preprint (2007) [CY99] Chang, M.H., Youree, R.K.: The European option with hereditary price structures: Basic theory. Applied Mathematics and Computation., 102, 279–296 (1999) [CY07] Chang, M.H., Youree, R.K.: Infinite dimensional Black-Scholes equation with hereditary price structure. preprint (2007)

References

[CCP04] Chilarescu, C., Ciorca, O., Preda, C.: Stochastic differential delay equations applied to portfolio selection problem. preprint (2004)
[CM66] Coleman, B.D., Mizel, V.J.: Norms and semigroups in the theory of fading memory. Arch. Rational Mech. Anal., 23, 87–123 (1966)
[Con83] Constantinides, G.M.: Capital market equilibrium with personal tax. Econometrica, 51, 611–636 (1983)
[Con84] Constantinides, G.M.: Optimal stock trading with personal taxes: implications for prices and the abnormal January returns. J. Financial Economics, 13, 65–89 (1984)
[CIL92] Crandall, M.G., Ishii, H., Lions, P.L.: User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc., 27, 1–67 (1992)
[DZ92] Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
[DS96] Dammon, R., Spatt, C.: The optimal trading and pricing of securities with asymmetric capital gains taxes and transaction costs. Review of Financial Studies, 9, 921–952 (1996)
[DN90] Davis, M.H.A., Norman, A.: Portfolio selection with transaction costs. Math. Operations Research, 15, 676–713 (1990)
[DZ94] Davis, M.H.A., Zariphopoulou, T.: American options and transaction fees. In: Proceedings IMA Workshop on Mathematical Finance, Springer, New York (1994)
[DU04] Demiguel, A.V., Uppal, R.: Portfolio investment with the exact tax basis via nonlinear programming. preprint (2004)
[DU77] Diestel, J., Uhl, J.J.: Vector Measures. Mathematical Surveys Monograph Series 15, AMS, Providence (1977)
[Doo94] Doob, J.L.: Measure Theory. Springer-Verlag, New York (1994)
[DS58] Dunford, N., Schwartz, J.T.: Linear Operators, Part I: General Theory. Interscience Publishers, New York (1958)
[Dyn65] Dynkin, E.B.: Markov Processes. Volumes I & II, Springer-Verlag, Berlin-Göttingen-Heidelberg (1965)
[DY79] Dynkin, E.B., Yushkevich, A.A.: Controlled Markov Processes. Springer, Berlin-Heidelberg-New York (1979)
[EL76] Ekeland, I., Lebourg, G.: Generic differentiability and perturbed optimization in Banach spaces. Trans. Amer. Math. Soc., 224, 193–216 (1976)
[Els00] Elsanousi, I.: Stochastic Control for Systems with Memory. Dr. Scient. thesis, University of Oslo (2002)
[EL01] Elsanousi, I., Larssen, B.: Optimal consumption under partial observations for a stochastic system with delay. preprint (2001)
[EOS00] Elsanousi, I., Oksendal, B., Sulem, A.: Some solvable stochastic control problems with delay. Stochastics and Stochastics Reports, 71, 69–89 (2000)
[EK86] Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley, New York (1986)
[Fab06] Fabbri, G.: Viscosity solutions approach to economic models governed by DDEs. preprint (2006)
[FFG06] Fabbri, G., Faggian, S., Gozzi, F.: On the dynamic programming approach to economic models governed by DDE's. preprint (2006)

[FG06] Fabbri, G., Gozzi, F.: Vintage capital in the AK growth model: a dynamic programming approach. preprint (2006)
[FR05] Ferrante, M., Rovira, C.: Stochastic delay differential equations driven by fractional Brownian motion with Hurst parameter H > 1/2. preprint (2005)
[FN06] Fischer, M., Nappo, G.: Time discretisation and rate of convergence for the optimal control of continuous-time stochastic systems with delay. preprint (2006)
[FR06] Fischer, M., Reiss, M.: Discretization of stochastic control problems for continuous time dynamics with delay. J. Comput. Appl. Math., to appear (2006)
[FR75] Fleming, W.H., Rishel, R.W.: Deterministic and Stochastic Optimal Control. Springer-Verlag, New York (1975)
[FS93] Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Springer-Verlag, New York (1993)
[FPS00] Fouque, J.-P., Papanicolaou, G., Sircar, K.P.: Derivatives in Financial Markets with Stochastic Volatility. Cambridge University Press (2000)
[Fra02] Frank, T.D.: Multivariate Markov processes for stochastic systems with delays: application to the stochastic Gompertz model with delay. Phys. Rev. E, 66, 011914–011918 (2002)
[FB01] Frank, T.D., Beek, P.J.: Stationary solutions of linear stochastic delay differential equations: applications to biological systems. Phys. Rev. E, 64, 021917–021921 (2001)
[GR05a] Gapeev, P.V., Reiss, M.: An optimal stopping problem in a diffusion-type model with delay. preprint (2005)
[GR05b] Gapeev, P.V., Reiss, M.: A note on optimal stopping in models with delay. preprint (2005)
[GNS01] Garlappi, L., Naik, V., Slive, J.: Portfolio selection with multiple risky assets and capital gains taxes. preprint (2001)
[GS99] Gatarek, D., Świech, A.: Optimal stopping in Hilbert spaces and pricing of American options. Math. Methods Oper. Res., 50, 135–147 (1999)
[GMM71] Goel, N.S., Maitra, S.C., Montroll, E.W.: On the Volterra and other nonlinear models of interacting populations. Rev. Modern Phys., 43, 231–276 (1971)
[GM04] Gozzi, F., Marinelli, C.: Stochastic optimal control of delay equations arising in advertising models. preprint (2004)
[GMS06] Gozzi, F., Marinelli, C., Savin, S.: Optimal dynamic advertising under uncertainty with carryover effects. preprint (2006)
[GSZ05] Gozzi, F., Swiech, A., Zhou, X.Y.: A corrected proof of the stochastic verification theorem within the framework of viscosity solutions. SIAM J. Control & Optimization, 43, 2009–2019 (2005)
[Hal77] Hale, J.: Theory of Functional Differential Equations. Springer, New York-Heidelberg-Berlin (1977)
[HL93] Hale, J., Lunel, S.V.: Introduction to Functional Differential Equations. Applied Mathematical Sciences Series, Springer, New York-Heidelberg-Berlin (1993)
[HK79] Harrison, J.M., Kreps, D.M.: Martingales and arbitrage in multi-period security markets. J. Economic Theory, 20, 381–408 (1979)
[HP81] Harrison, J.M., Pliska, S.R.: Martingales and stochastic integrals in the theory of continuous trading. Stochastic Processes Appl., 11, 215–260 (1981)

[HK90] Horowitz, J., Karandikar, R.L.: Martingale problems associated with the Boltzmann equation. In: Cinlar, E. et al (eds), Seminar on Stochastic Processes, Birkhäuser, Boston, pp. 75–122 (1990)
[Hul00] Hull, J.C.: Options, Futures and Other Derivatives. 4th edition, Prentice Hall (2000)
[HMY04] Hu, Y., Mohammed, S.A., Yan, F.: Discrete-time approximations of stochastic delay equations: the Milstein scheme. Ann. Prob., 32, 265–314 (2004)
[IW81] Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam (Kodansha Ltd., Tokyo) (1981)
[Ish93] Ishii, K.: Viscosity solutions of nonlinear second order elliptic PDEs associated with impulse control problems. Funkcial. Ekvac., 36, 123–141 (1993)
[Itô44] Itô, K.: Stochastic integral. Proceedings of Imperial Academy, Tokyo, 20, 519–524 (1944)
[Itô84] Itô, K.: Foundations of Stochastic Differential Equations in Infinite Dimensional Spaces. SIAM, Philadelphia, Pennsylvania (1984)
[IN64] Itô, K., Nisio, M.: On stationary solutions of a stochastic differential equation. Kyoto J. Mathematics, 4, 1–75 (1964)
[IS04] Ivanov, A.F., Swishchuk, A.V.: Optimal control of stochastic differential delay equations with applications in economics. preprint (2004)
[JK95] Jouini, E., Kallal, H.: Martingale and arbitrage in securities markets with transaction costs. J. Economic Theory, 66, 178–197 (1995)
[JKT99] Jouini, E., Koehl, P.F., Touzi, N.: Optimal investment with taxes: an optimal control problem with endogenous delay. Nonlinear Analysis, Theory, Methods and Applications, 37, 31–56 (1999)
[JKT00] Jouini, E., Koehl, P.F., Touzi, N.: Optimal investment with taxes: an existence result. J. Mathematical Economics, 33, 373–388 (2000)
[Kal80] Kallianpur, G.: Stochastic Filtering Theory. Volume 13 of Applications of Mathematics, Springer-Verlag, New York (1980)
[KX95] Kallianpur, G., Xiong, J.: Stochastic Differential Equations in Infinite Dimensional Spaces. Lecture Notes-Monograph Series Volume 26, Institute of Mathematical Statistics (1995)
[KM02] Kallianpur, G., Mandal, P.K.: Nonlinear filtering with stochastic delay equations. In: Balakrishnan, N. (ed), Advances on Theoretical and Methodological Aspects of Probability and Statistics, IISA Volume 2, Gordon and Breach Science Publishers, 3–36 (2002)
[Kar96] Karatzas, I.: Lectures on Mathematical Finance. CRM Series, American Mathematical Society (1996)
[KS91] Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. 2nd edition, Springer-Verlag, New York (1991)
[KSW04a] Kazmerchuk, Y.I., Swishchuk, A.V., Wu, J.H.: The pricing of options for securities markets with delayed response. preprint (2004)
[KSW04b] Kazmerchuk, Y.I., Swishchuk, A.V., Wu, J.H.: Black-Scholes formula revisited: securities markets with delayed response. preprint (2004)
[KSW04c] Kazmerchuk, Y.I., Swishchuk, A.V., Wu, J.H.: A continuous-time GARCH model for stochastic volatility with delay. preprint (2004)
[KS03] Kelome, D., Swiech, A.: Viscosity solutions of an infinite dimensional Black-Scholes-Barenblatt equation. Applied Math. Optimization, 47, 253–278 (2003)

[KLR91] Kind, P., Liptser, R., Runggaldier, W.: Diffusion approximation in past-dependent models and applications to option pricing. Annals of Applied Probability, 1, 379–405 (1991)
[KLV04] Kleptsyna, M.L., Le Breton, A., Viot, M.: On the infinite time horizon linear-quadratic regulator problem under fractional Brownian perturbation. preprint (2004)
[Kni81] Knight, F.B.: Essentials of Brownian Motion and Diffusion. Mathematical Surveys, 18, American Mathematical Society, Providence (1981)
[KS96] Kolmanovskii, V.B., Shaikhet, L.E.: Control of Systems with Aftereffect. Translations of Mathematical Monographs Vol. 157, American Mathematical Society (1996)
[Kry80] Krylov, N.V.: Controlled Diffusion Processes. Volume 14 of Applications of Mathematics Series, Springer-Verlag, New York (1980)
[Kus77] Kushner, H.J.: Probability Methods for Approximations in Stochastic Control and for Elliptic Equations. Academic Press, New York (1977)
[Kus05] Kushner, H.J.: Numerical approximations for nonlinear stochastic systems with delays. Stochastics, 77, 211–240 (2005)
[Kus06] Kushner, H.J.: Numerical approximations for stochastic systems with delays in the state and control. preprint (2006)
[KD01] Kushner, H.J., Dupuis, P.: Numerical Methods for Stochastic Control Problems in Continuous Time. 2nd edition, Springer-Verlag, Berlin and New York (2001)
[Kusu95] Kusuoka, S.: Limit theorem on option replication with transaction costs. Annals of Applied Probability, 5, 198–221 (1995)
[Lan83] Lang, S.: Real Analysis. 2nd edition, Addison-Wesley Publishing Company, Advanced Book Program/World Science Division, Reading, Massachusetts (1983)
[Lar02] Larssen, B.: Dynamic programming in stochastic control of systems with delay. Stochastics & Stochastics Reports, 74, 651–673 (2002)
[LR03] Larssen, B., Risebro, N.H.: When are HJB-equations for control problems with stochastic delay equations finite dimensional? Stochastic Analysis and Applications, 21, 643–661 (2003)
[Lel99] Leland, H.E.: Optimal portfolio management with transaction costs and capital gains taxes. preprint (1999)
[Lio82] Lions, P.L.: Generalized Solutions of Hamilton-Jacobi Equations. Research Notes in Mathematics, No. 69, Pitman Publishing, Boston (1982)
[Lio89] Lions, P.L.: Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. III. Uniqueness of viscosity solutions for general second-order equations. J. Functional Analysis, 62, 379–396 (1989)
[Lio92] Lions, P.L.: Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part 1: The dynamic programming principle and applications; Part 2: Viscosity solutions and uniqueness. Comm. P.D.E., 8, 1101–1174, 1229–1276 (1992)
[LMB90] Longtin, A., Milton, J., Bos, J., Mackey, M.: Noise and critical behavior of the pupil light reflex at oscillation onset. Phys. Rev. A, 41, 6992–7005 (1990)
[Mal97] Malliavin, P.: Stochastic Analysis. Springer-Verlag, Berlin (1997)

[Mao97] Mao, X.: Stochastic Differential Equations and Their Applications. Horwood Publishing, Chichester (1997)
[MM88] Marcus, M., Mizel, V.J.: Stochastic functional differential equations modelling materials with selective recall. Stochastics, 25, 195–232 (1988)
[MN06] Mazliak, L., Nourdin, I.: Optimal control for rough differential equations. preprint (2006)
[MT84] Mizel, V.J., Trutzer, V.: Stochastic hereditary equations: existence and asymptotic stability. J. of Integral Equations, 7, 1–72 (1984)
[Mer73] Merton, R.C.: Theory of rational option pricing. Bell J. Economics and Management Science, 4, 141–183 (1973)
[Mer90] Merton, R.C.: Continuous-Time Finance. Basil Blackwell, Oxford (1990)
[Moh84] Mohammed, S.E.: Stochastic Functional Differential Equations. Research Notes in Mathematics 99, Pitman Advanced Publishing Program, Boston-London-Melbourne (1984)
[Moh98] Mohammed, S.E.: Stochastic differential systems with memory: theory, examples, and applications. In: Decreusefond, L., Gjerde, J., Oksendal, B., Ustunel, A.S. (eds), Stochastic Analysis and Related Topics VI, The Geilo Workshop 1996, Progress in Probability, Birkhäuser (1998)
[NSK91] Niebur, E., Schuster, H.G., Kammen, D.: Collective frequencies and metastability in networks of limit cycle oscillators with time delay. Phys. Rev. Lett., 20, 2753–2756 (1991)
[Nua95] Nualart, D.: The Malliavin Calculus and Related Topics. Springer-Verlag, Berlin-Heidelberg-New York (1995)
[NP88] Nualart, D., Pardoux, E.: Stochastic calculus with anticipating integrands. Probability Theory and Related Fields, 78, 535–581 (1988)
[Øks98] Øksendal, B.: Stochastic Differential Equations. 5th edition, Springer-Verlag, Berlin-New York (2000)
[Øks04] Øksendal, B.: Optimal stopping with delayed information. preprint (2004)
[OS00] Øksendal, B., Sulem, A.: A maximum principle for optimal control of stochastic systems with delay with applications to finance. In: Menaldi, J.M., Rofman, E., Sulem, A. (eds), Optimal Control and Partial Differential Equations - Innovations and Applications, IOS Press, Amsterdam (2000)
[OS02] Øksendal, B., Sulem, A.: Optimal consumption and portfolio with both fixed and proportional transaction costs. SIAM J. Control & Optimization, 40, 1765–1790 (2002)
[Pet00] Peterka, R.J.: Postural control model interpretation of stabilogram diffusion analysis. Biol. Cybernet., 82, 335–343 (2000)
[Pet91] Petrovsky, I.G.: Lectures on Partial Differential Equations. Dover Publications, Inc., New York (1991)
[Pra03] Prakasa-Rao, B.L.S.: Parametric estimation for linear stochastic delay differential equations driven by fractional Brownian motion. preprint (2003)
[Pro95] Protter, P.: Stochastic Integration and Differential Equations. Springer-Verlag (1995)
[RBCY06] Ramezani, V., Buche, R., Chang, M.H., Yang, Y.: Power control for a wireless queueing system with delayed state information: heavy traffic modeling and numerical analysis. In: Proceedings of MILCOM, Washington, DC (2006)
[RY99] Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. 3rd edition, Springer-Verlag, New York (1999)
[Rud71] Rudin, W.: Real and Complex Analysis. McGraw-Hill, New York (1971)
[Shi78] Shiryaev, A.N.: Optimal Stopping Rules. Springer-Verlag, Berlin (1978)
[SKKM94] Shiryaev, A.N., Kabanov, Y.M., Kramkov, D.O., Melnikov, A.V.: Toward the theory of pricing of options of both European and American types II: continuous time. Theory of Probability and Appl., 39, 61–102 (1994)
[SS94] Shreve, S.E., Soner, H.M.: Optimal investment and consumption with transaction costs. Ann. Appl. Probab., 4, 609–692 (1994)
[Sid04] Siddiqi, A.H.: Applied Functional Analysis. Marcel Dekker, New York (2004)
[SF98] Singh, P., Fogler, H.S.: Fused chemical reactions: the use of dispersion to delay reaction time in tubular reactors. Ind. Eng. Chem. Res., 37, No. 6, 2203–2207 (1998)
[Ste78] Stegall, C.: Optimization of functions on certain subsets of Banach spaces. Math. Ann., 236, 171–176 (1978)
[TT03] Tahar, I.B., Soner, H.M., Touzi, N.: Modeling continuous-time financial markets with capital gains taxes. preprint (2003)
[VB93] Vasilakos, K., Beuter, A.: Effects of noise on delayed visual feedback system. J. Theor. Bio., 165, 389–407 (1993)
[WZ77] Wheeden, R.L., Zygmund, A.: Measure and Integral: An Introduction to Real Analysis. Pure and Applied Mathematics Series, Marcel Dekker, Inc., New York-Basel (1977)
[YW71] Yamada, T., Watanabe, S.: On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ., 11, 155–167 (1971)
[Yan99] Yan, F.: Topics on Stochastic Delay Equations. Ph.D. Dissertation, Southern Illinois University at Carbondale (1999)
[YM05] Yan, F., Mohammed, S.E.: A stochastic calculus for systems with memory. Stochastic Anal. Appl., 23, 613–657 (2005)
[YBCR06] Yang, Y., Buche, R., Chang, M.H., Ramezani, V.: Power control for mobile communications with delayed state information in heavy traffic. In: Proceedings of IEEE Conference on Decision and Control, San Diego, CA (2006)
[YZ99] Yong, J., Zhou, X.Y.: Stochastic Control: Hamiltonian Systems and HJB Equations. No. 43 in Applications of Mathematics, Springer-Verlag, New York (1999)
[Zho93] Zhou, X.Y.: Verification theorems within the framework of viscosity solutions. J. Math. Anal. Appl., 177, 208–225 (1993)
[ZYL97] Zhou, X.Y., Yong, J., Li, X.: Stochastic verification theorems within the framework of viscosity solutions. SIAM J. Control & Optimization, 35, 243–253 (1997)
[Zhu91] Zhu, H.: Dynamic Programming and Variational Inequalities in Singular Stochastic Control. Ph.D. dissertation, Brown University (1991)
[ZK81] Zvonkin, A.K., Krylov, N.V.: On strong solutions of stochastic differential equations. Sel. Math. Sov., 1, 19–61 (1981)

Index

Cb(Ξ), 85
Γ̄, 90
D(S), 106
ρ, 100, 337
A, 111
B*, 346
C, 89
C ⊕ B, 90
C*, 90
C†, 90
M, 99, 346
M†, 347
account
  savings, 298, 344
  stock, 340, 344
admissible control
  discrete, 267
Aldous criterion, 137
algorithm, 271, 291, 331
approximation
  convergence, 274
  Euler-Maruyama, 250
  Markov chain, 265
arbitrage, 307
bound
  error, 257, 259, 264
  lower, 338
  upper, 338, 371
Brownian motion, 45
chain
  discrete, 267
chain rule, 179
classical control
  optimal, 134
condition
  consistency, 268, 269, 284
  Hölder, 247
  Lipschitz, 57, 206, 247, 279, 299, 338
  monotonicity, 284
  Roxin, 140, 147
  stability, 286
  terminal, 270
conditional
  expectation, 43
  probability, 44
consumption, 353, 355
consumption-trading strategy, 341
contingent claim, 302, 303
continuous
  Hölder, 249
  lower semi-, 213
  upper semi-, 213, 380
control
  admissible, 132
  Markov, 132
  relaxed, 138, 275
    admissible, 139
convergence
  weak, 86
credit
  capital-loss, 334
dense
  weakly, 88
dependence
  continuous, 62
derivative
  Fréchet, 81, 314, 348
  Gâteaux, 81
  weak, 88
discretization
  degree, 257
  semi-, 248, 249
  spatial, 247
  temporal, 247
dynamic programming principle, 161
dynamics
  portfolio, 344
  price, 299
equation
  Black-Scholes, 295, 319
    infinite-dimensional, 315
  stochastic integral, 300
existence
  optimal relaxed control, 139
expectation
  conditional, 268
extension
  linear continuous, 91
factor
  aversion, 335
filtration, 206, 339
  natural, 46, 299
finite difference approximation, 278
formula
  Black-Scholes, 294
  Dynkin's, 106, 113, 314, 349
  Itô's, 53, 113, 317, 350
  Taylor's, 82
function
  excessive, 214
  indicator, 340
  liquidating, 343
  Markov transition, 268, 269
  payoff, 312
  pricing, 314, 321, 324
  quasi-tame, 109, 113, 317
  superharmonic, 213
  supermeanvalued, 213
  tame, 108, 114
  value, 134, 208, 228, 246, 251, 258, 369, 382
functional
  bounded, 85
  bounded bilinear, 80
  bounded linear, 80
  continuous, 85
  objective, 133, 208, 246, 251, 270
  reward, 212
generator, 83
  infinitesimal, 208, 314
  weak, 87, 348
growth
  linear, 57, 206, 279, 299, 338
  polynomial, 207, 279
HARA, 345
hedge, 310, 315
HJB equation
  finite dimensional, 195
HJBQVI, 382
HJBVI, 224, 323, 353, 366
inequality
  Burkholder-Davis-Gundy, 51
  Doob's maximal, 52, 252
  Gronwall, 40, 248, 254
  Hölder, 252
  triangular, 253
integral
  Lebesgue-Stieltjes, 298
interest rate
  effective, 299
interpolation
  linear, 247
inventory
  stock, 340
Itô integral, 50
lemma
  Fatou, 219, 306
  Slominski's, 249
majorant
  superharmonic, 217
map
  contraction, 285
market
  (B, S), 297
  financial, 297
Markov chain
  controlled, 267, 268, 272
martingale, 46, 308
  characterization, 47
  continuous, 47
  local, 46
  representation, 52
  squared integrable, 47
  sub-, 46
  super-, 46, 215
measurable
  progressively, 129, 258
measure
  martingale, 307
  risk-neutral, 307
memory
  bounded, 54, 246
  fading, 337
  unbounded, 337
memory map, 54, 67
  stochastic, 55
metric
  Prohorov, 135
modulus of continuity, 48, 249
natural filtration, 46
operator
  S-, 95, 102, 108, 278, 348
  bounded linear, 82
  finite difference, 281
  identity, 88
  intervention, 355
  shift, 97
optimal stopping, 208
  existence, 208, 218
optimal control
  existence, 140
option, 303
  American, 302
  Asian, 304
  barrier, 304
  call, 294, 303
  European, 302
  pricing, 309
  put, 303
  Russian, 303
  standard, 303
position
  long, 340
  open, 340
  short, 340
price
  base, 334
  rational, 312
  stock, 346
principle
  comparison, 175, 234, 323, 390
  dynamic programming, 153, 258, 270, 352
  uniform boundedness, 86
problem
  combined classical-impulse control, 346
  martingale, 117
  optimal classical control, 134
  optimal relaxed control, 139
  optimal stopping, 204
process
  control, relaxed, 138
  limiting, 276
  Markov, 206
  controlled, 346
  segment, 289, 300, 339
  state controlled, 351
  strong Markov, 207, 301
  wealth, 315
projection
  point-mass, 247
property
  Markov, 77, 206
  strong Markov, 77, 78, 301, 339
rate
  discount, 335
  mean growth, 337
  volatility, 337
region
  continuation, 218, 324
  solvency, 342, 343
  stopping, 324
relaxed control
  admissible, 137
resolvent, 85, 88
sale
  short, 340
scheme
  consistent, 284
  finite difference, 278, 280
  stable, 286
second approximation step, 257
selection
  measurable, 356
self financing, 306
semigroup
  C0, 82
  contraction, 97
SHDE, 54, 205, 299, 309, 337
  controlled, 129, 130, 246
solution
  series, 325
  strong, 58, 205, 300, 339
  viscosity, 168, 228, 320, 325, 380, 382
  weak, 62
space
  Banach, 41, 79
  Hilbert, 41, 235
  Sobolev, 235
step
  first approximation, 250
stochastic hereditary system
  controlled, 129
stock inventory, 340
stopping time, 301, 349
  optimal, 231
strategy
  admissible, 344
  consumption, 341
  consumption-trading, 341
  optimal, 346
  trading, 304, 315, 317, 341
subdifferential, 169
superdifferential, 169
system
  autonomous, 207
tax
  capital-gain, 334
theorem
  Fubini, 252
  Hahn-Banach, 237
  Krylov, 263
  martingale characterization, 308
  Prohorov, 136
  Riesz representation, 90, 100
  Skorokhod representation, 137, 143
  verification, 226, 366
tightness, 136
time
  exit, 43, 361
  expiry, 302
  hitting, 43
  stopping, 42, 206, 361
topology
  weak compact, 138
  strong, 85
  weak, 86
transaction, 353, 355
  fixed cost, 333
  proportional cost, 333
transformation
  Girsanov, 54, 309, 362
uniqueness
  strong, 58, 205, 300, 339
  weak, 63
utility
  expected, 335
value
  boundary, 357
value function, 246
  continuity, 158

Stochastic Modelling and Applied Probability
(formerly: Applications of Mathematics)

1 Fleming/Rishel, Deterministic and Stochastic Optimal Control (1975)
2 Marchuk, Methods of Numerical Mathematics (1975, 2nd. ed. 1982)
3 Balakrishnan, Applied Functional Analysis (1976, 2nd. ed. 1981)
4 Borovkov, Stochastic Processes in Queueing Theory (1976)
5 Liptser/Shiryaev, Statistics of Random Processes I: General Theory (1977, 2nd. ed. 2001)
6 Liptser/Shiryaev, Statistics of Random Processes II: Applications (1978, 2nd. ed. 2001)
7 Vorob'ev, Game Theory: Lectures for Economists and Systems Scientists (1977)
8 Shiryaev, Optimal Stopping Rules (1978, 2008)
9 Ibragimov/Rozanov, Gaussian Random Processes (1978)
10 Wonham, Linear Multivariable Control: A Geometric Approach (1979, 2nd. ed. 1985)
11 Hida, Brownian Motion (1980)
12 Hestenes, Conjugate Direction Methods in Optimization (1980)
13 Kallianpur, Stochastic Filtering Theory (1980)
14 Krylov, Controlled Diffusion Processes (1980)
15 Prabhu, Stochastic Storage Processes: Queues, Insurance Risk, and Dams (1980)
16 Ibragimov/Has'minskii, Statistical Estimation: Asymptotic Theory (1981)
17 Cesari, Optimization: Theory and Applications (1982)
18 Elliott, Stochastic Calculus and Applications (1982)
19 Marchuk/Shaidourov, Difference Methods and Their Extrapolations (1983)
20 Hijab, Stabilization of Control Systems (1986)
21 Protter, Stochastic Integration and Differential Equations (1990)
22 Benveniste/Métivier/Priouret, Adaptive Algorithms and Stochastic Approximations (1990)
23 Kloeden/Platen, Numerical Solution of Stochastic Differential Equations (1992, corr. 3rd printing 1999)
24 Kushner/Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time (1992)
25 Fleming/Soner, Controlled Markov Processes and Viscosity Solutions (1993)
26 Baccelli/Brémaud, Elements of Queueing Theory (1994, 2nd. ed. 2003)
27 Winkler, Image Analysis, Random Fields and Dynamic Monte Carlo Methods (1995, 2nd. ed. 2003)
28 Kalpazidou, Cycle Representations of Markov Processes (1995)
29 Elliott/Aggoun/Moore, Hidden Markov Models: Estimation and Control (1995)
30 Hernández-Lerma/Lasserre, Discrete-Time Markov Control Processes (1995)
31 Devroye/Györfi/Lugosi, A Probabilistic Theory of Pattern Recognition (1996)
32 Maitra/Sudderth, Discrete Gambling and Stochastic Games (1996)
33 Embrechts/Klüppelberg/Mikosch, Modelling Extremal Events for Insurance and Finance (1997, corr. 4th printing 2003)
34 Duflo, Random Iterative Models (1997)
35 Kushner/Yin, Stochastic Approximation Algorithms and Applications (1997)
36 Musiela/Rutkowski, Martingale Methods in Financial Modelling (1997, 2nd. ed. 2005)
37 Yin, Continuous-Time Markov Chains and Applications (1998)
38 Dembo/Zeitouni, Large Deviations Techniques and Applications (1998)
39 Karatzas, Methods of Mathematical Finance (1998)
40 Fayolle/Iasnogorodski/Malyshev, Random Walks in the Quarter-Plane (1999)
41 Aven/Jensen, Stochastic Models in Reliability (1999)
42 Hernandez-Lerma/Lasserre, Further Topics on Discrete-Time Markov Control Processes (1999)
43 Yong/Zhou, Stochastic Controls. Hamiltonian Systems and HJB Equations (1999)
44 Serfozo, Introduction to Stochastic Networks (1999)
45 Steele, Stochastic Calculus and Financial Applications (2001)
46 Chen/Yao, Fundamentals of Queueing Networks: Performance, Asymptotics, and Optimization (2001)
47 Kushner, Heavy Traffic Analysis of Controlled Queueing and Communications Networks (2001)
48 Fernholz, Stochastic Portfolio Theory (2002)
49 Kabanov/Pergamenshchikov, Two-Scale Stochastic Systems (2003)
50 Han, Information-Spectrum Methods in Information Theory (2003)
51 Asmussen, Applied Probability and Queues (2nd. ed. 2003)
52 Robert, Stochastic Networks and Queues (2003)
53 Glasserman, Monte Carlo Methods in Financial Engineering (2004)
54 Sethi/Zhang/Zhang, Average-Cost Control of Stochastic Manufacturing Systems (2005)
55 Yin/Zhang, Discrete-Time Markov Chains (2005)
56 Fouque/Garnier/Papanicolaou/Sølna, Wave Propagation and Time Reversal in Random Layered Media (2007)
57 Asmussen/Glynn, Stochastic Simulation: Algorithms and Analysis (2007)
58 Kotelenez, Stochastic Ordinary and Stochastic Partial Differential Equations: Transition from Microscopic to Macroscopic Equations (2008)
59 Chang, Stochastic Control of Hereditary Systems and Applications (2008)