The Master Equation and the Convergence Problem in Mean Field Games (AMS-201). ISBN 9780691193717.


125-79542 Cardaliaguet MasterEquation Ch01 3P

June 11, 2019

6.125x9.25

Annals of Mathematics Studies Number 201


The Master Equation and the Convergence Problem in Mean Field Games

Pierre Cardaliaguet, François Delarue, Jean-Michel Lasry, Pierre-Louis Lions

PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD 2019


Copyright © 2019 by Princeton University Press

Requests for permission to reproduce material from this work should be sent to [email protected]

Published by Princeton University Press
41 William Street, Princeton, New Jersey 08540
6 Oxford Street, Woodstock, Oxfordshire OX20 1TR
press.princeton.edu

All Rights Reserved
LCCN: 2019936796
ISBN 978-0-691-19070-9
ISBN (pbk.) 978-0-691-19071-6

British Library Cataloging-in-Publication Data is available

Editorial: Vickie Kearn, Susannah Shoemaker, and Lauren Bucca
Production Editorial: Nathan Carr
Production: Erin Suydam
Publicity: Matthew Taylor and Kathryn Stevens
Copyeditor: Theresa Kornak

This book has been composed in LaTeX

Printed on acid-free paper. ∞

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


Contents

Preface

1 Introduction
  1.1 From the Nash System to the Master Equation
  1.2 Informal Derivation of the Master Equation

2 Presentation of the Main Results
  2.1 Notations
  2.2 Derivatives
  2.3 Assumptions
  2.4 Statement of the Main Results

3 A Starter: The First-Order Master Equation
  3.1 Space Regularity of U
  3.2 Lipschitz Continuity of U
  3.3 Estimates on a Linear System
  3.4 Differentiability of U with Respect to the Measure
  3.5 Proof of the Solvability of the First-Order Master Equation
  3.6 Lipschitz Continuity of δU/δm with Respect to m
  3.7 Link with the Optimal Control of the Fokker–Planck Equation

4 Mean Field Game System with a Common Noise
  4.1 Stochastic Fokker–Planck/Hamilton–Jacobi–Bellman System
  4.2 Probabilistic Setup
  4.3 Solvability of the Stochastic Fokker–Planck/Hamilton–Jacobi–Bellman System
  4.4 Linearization

5 The Second-Order Master Equation
  5.1 Construction of the Solution
  5.2 First-Order Differentiability
  5.3 Second-Order Differentiability
  5.4 Derivation of the Master Equation
  5.5 Well-Posedness of the Stochastic MFG System

6 Convergence of the Nash System
  6.1 Finite Dimensional Projections of U
  6.2 Convergence
  6.3 Propagation of Chaos

A Appendix
  A.1 Link with the Derivative on the Space of Random Variables
  A.2 Technical Remarks on Derivatives
  A.3 Various Forms of Itô's Formula

References

Index


Preface

The purpose of this short monograph is to address recent advances in the theory of mean field games (MFGs), which has met with amazing success since the simultaneous pioneering works by Lasry and Lions and by Caines, Huang, and Malhamé more than ten years ago. While earlier developments in the theory have been largely spread out over the last decade, the issues addressed in this book require a new step forward in the analysis. The text evolved with the objective of providing a self-contained study of the so-called master equation and of answering the convergence problem, which has remained mainly open so far. As the writing progressed, the manuscript became longer and longer and, in the end, it turned out to be more relevant to publish the whole as a book.

There might be several reasons to explain the growing interest in MFGs. From the technical point of view, the underpinning stakes fall within several mathematical areas, including partial differential equations, probability theory, stochastic analysis, optimal control, and optimal transportation. In particular, several issues raised by the analysis of MFGs may be tackled by analytical or probabilistic tools; sometimes, they even require a subtle implementation of mixed arguments, which is precisely the case in this book. As a matter of fact, researchers from different disciplines have developed an interest in the subject, which has grown very quickly. Another explanation for the interest in the theory is the wide range of applications that it offers. While they were originally inspired by works in economics on heterogeneous agents, MFG models now appear in various forms in several domains, which include, for instance, mathematical finance, the study of crowd phenomena, epidemiology, and cybersecurity.

Mean field games should be understood as games with a continuum of players, each of them interacting with the whole statistical distribution of the population. In this regard, they are expected to provide an asymptotic formulation for games with finitely many players in mean field interaction. In most of the related works, the connection between finite games and MFGs is addressed in the following manner: it is usually shown that solutions of the asymptotic problem generate an almost equilibrium, understood in the sense of Nash, of the corresponding finite game, the accuracy of the equilibrium getting stronger and stronger as the number of players in the finite game tends to infinity. The main purpose of this book is to focus on the converse problem, which may be formulated as follows: do the equilibria of the finite games (if they exist) converge to a solution of the corresponding MFG as the number of players becomes


very large? Surprisingly, answering this question turns out to be much more difficult than proving that any asymptotic solution generates an almost equilibrium. Though several works have addressed this problem in specific cases, including the case when the equilibria of the finite player game are taken over open-loop strategies, the general case when the agents play strategies in closed (Markovian) feedback form has remained open so far. The objective here is to exhibit a quite large class of MFGs for which the answer is positive and, to do so, to implement a method that is robust enough to accommodate other sets of assumptions.

The intrinsic difficulty in proving the convergence of finite player equilibria may be explained as follows. When taken over strategies in closed Markovian form, Nash equilibria of a stochastic differential game with N players in a state of dimension d may be described through a system of N quasilinear parabolic partial differential equations in dimension N × d, which we refer to as the Nash system throughout the monograph. As N becomes larger and larger, the system obviously becomes more and more intricate. In particular, it seems especially difficult to get any a priori estimate that could be helpful for passing to the limit by means of a compactness argument.

The strategy developed in the book is thus to bypass any detailed study of the Nash system. Instead, we use a shortcut and focus directly on the expected limiting form of the Nash system. This limiting form is precisely what we call the master equation in the title of the book. As a result of the symmetry inherent in the mean field structure, this limiting form is no longer a system of equations but reduces to one equation only, which makes it simpler than the Nash system. It describes the equilibrium cost to one representative player in a continuum of players. Actually, to account for the continuum of players underpinning the game, the master equation has to be set over the Euclidean space times the space of probability measures; the state variable is thus a pair that encodes both the private state of a single representative player and the statistical distribution of the states of all the players.

Most of the book is thus dedicated to the analysis of this master equation. One of the key results in the book is to show that, under appropriate conditions on the coefficients, the master equation is uniquely solvable in the classical sense for an appropriate notion of differential calculus on the space of probability measures. Among the assumptions we require, we assume the coefficients to be monotone in the direction of the measure; as demonstrated earlier by Lasry and Lions, this forces uniqueness of the solution to the corresponding MFG. Smoothness of the solution then plays a crucial role in our program. It is indeed the precise property we use next for proving the convergence of the finite player equilibria to the solution of the limiting MFG. The key step is indeed to expand the solution of the master equation along the "equilibrium trajectories" of the finite player games, which requires enough regularity. As indicated earlier, this methodology seems to be quite sharp and should certainly work under different sets of assumptions.

Actually, the master equation was already introduced by Lions in his lectures at the Collège de France. It provides an alternative formulation of MFGs, different


from the earlier (and most popular) one based on a coupled forward–backward system, known as the MFG system, which is made of a backward Hamilton–Jacobi equation and a forward Kolmogorov equation. Part of our program in the book is then achieved by exploiting quite systematically the connection between the MFG system and the master equation: in short, the MFG system plays the role of characteristics for the master equation. In the text, we use this correspondence heavily to establish, by means of a flow method, the smoothness of the solution to the master equation. Though the MFG system was extensively studied in earlier works, we provide in the book a detailed analysis of it in the case when players in the finite game are subject to a so-called common noise. Under the action of this common noise, both the backward and forward equations in the MFG system become stochastic, which makes the analysis more complicated; as a result, we devote a whole chapter to the solvability of the MFG system in the presence of such a common noise. Together with the study of the convergence problem, this perspective is completely new in the literature.

The book is organized in six chapters, which include a detailed introduction and are followed by an appendix. The guideline follows the aforementioned steps: part of the book is dedicated to the analysis of the master equation, including the study of the MFG system with a common noise, and the rest concerns the convergence problem. The main results obtained in the book are collected in Chapter 2. Chapter 3 is a sort of warm-up, as it contains a preliminary analysis of the master equation in the simpler case when there is no common noise. In Chapter 4, we study the MFG system in the presence of a common noise, and the corresponding analysis of the master equation is performed in Chapter 5. The convergence problem is addressed in Chapter 6.

We suggest that the reader start with the Introduction, which contains in particular a formal derivation of the master equation, and then carry on with Chapters 2 and 3. Possibly, the reader who is interested only in MFGs without common noise may skip Chapters 4 and 5 and switch directly to Chapter 6. In such a case, she/he has to set the parameter β, which stands for the intensity of the common noise throughout the book, equal to 0. The Appendix contains several results on the differential calculus on the space of probability measures, together with an Itô formula for functionals of a process taking values in the space of probability measures. We emphasize that, for simplicity, most of the analysis provided in the book is on the torus, but, as already explained, we feel that the method is robust enough to accommodate the nonperiodic setting.

To conclude, we would like to thank our colleagues from our field for all the stimulating discussions and work sessions we have shared with them. Some of them have formulated very useful comments and suggestions on the preliminary draft of the book. They include in particular Yves Achdou, Martino Bardi, Alain Bensoussan, René Carmona, Jean-François Chassagneux, Markus Fischer, Wilfrid Gangbo, Christine Grün, Daniel Lacker, and Alessio Porretta. We are also very grateful to the anonymous referees who examined the various versions of the manuscript. Their suggestions helped us greatly in improving the text.


We also thank the French National Research Agency, which supported part of this work under the grant ANR MFG (ANR-16-CE40-0015-01).

Pierre Cardaliaguet, Paris
François Delarue, Nice
Jean-Michel Lasry, Paris
Pierre-Louis Lions, Paris


The Master Equation and the Convergence Problem in Mean Field Games


Chapter One

Introduction

1.1 FROM THE NASH SYSTEM TO THE MASTER EQUATION

Game theory formalizes interactions between "rational" decision makers. Its applications are numerous and range from economics and biology to computer science. In this monograph we are interested mainly in noncooperative games, that is, in games in which there is no global planner: each player pursues his or her own interests, which are partly conflicting with those of the others. In noncooperative game theory, the key concept is that of Nash equilibria, introduced by Nash in [82]. A Nash equilibrium is a choice of strategies for the players such that no player can benefit by changing strategies while the other players keep theirs unchanged. This notion has proved to be particularly relevant and tractable in games with a small number of players and action sets. However, as soon as the number of players becomes large, it seems difficult to implement in practice, because it requires that each player know the strategies the other players will use. Besides, for some games, the set of Nash equilibria is huge and it seems difficult for the players to decide which equilibrium they are going to play: for instance, in repeated games, the Folk theorem states that the set of Nash equilibria coincides with the set of feasible and individually rational payoffs in the one-shot game, which is a large set in general (see [93]).

In view of these difficulties, one can look for configurations in which the notion of Nash equilibria simplifies. As noticed by Von Neumann and Morgenstern [96], one can expect this to be the case when the number of players becomes large and each player individually has a negligible influence on the other players: it "is a well known phenomenon in many branches of the exact and physical sciences that very great numbers are often easier to handle than those of medium size [. . .]. This is of course due to the excellent possibility of applying the laws of statistics and probabilities in the first case" (p. 14).
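The equilibrium notion above can be made concrete in the simplest finite setting. The following sketch (our own illustration, not from the monograph; the payoff matrices are a made-up prisoner's-dilemma-type example) finds pure Nash equilibria of a two-player bimatrix game by brute-force checking the no-unilateral-deviation property:

```python
import numpy as np

# Illustration only: brute-force search for pure Nash equilibria in a
# two-player bimatrix game (example payoff matrices are our own).
A = np.array([[3, 0],   # row player's payoffs
              [5, 1]])
B = np.array([[3, 5],   # column player's payoffs
              [0, 1]])

def pure_nash(A, B):
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            # (i, j) is a Nash equilibrium iff no unilateral deviation helps:
            # the row player cannot improve within column j, nor the column
            # player within row i.
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(A, B))  # prints [(1, 1)]: the unique equilibrium of this game
```

Even in this tiny example, finding the equilibrium requires each player to know the other's payoffs, which is exactly the informational burden that becomes untenable as the number of players grows.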
Such nonatomic games were analyzed in particular by Aumann [10] in the framework of cooperative games. Schmeidler [91] (see also Mas-Colell [78]) extended the notion of Nash equilibria to that setting and proved the existence of pure Nash equilibria.

In the book we are interested in games with a continuum of players, in continuous time and with a continuous state space. Continuous-time, continuous-space games are often called differential games. They appear in optimal control problems in which the system is controlled by several agents. Such problems (for a


finite number of players) were introduced at about the same time by Isaacs [59] and Pontryagin [87]. Pontryagin derived optimality conditions for these games. Isaacs, working on specific examples of two-player zero-sum differential games, computed explicitly the solutions of these games and established the formal connection with the Hamilton–Jacobi equations. The rigorous justification of Isaacs's ideas for general systems took some time. The main difficulty arose from the set of strategies (or from the dependence of the cost of the players on these strategies), which is much more complex than for classical games: indeed, the players have to observe the actions taken by the other players in continuous time and choose their instantaneous actions accordingly. For two-player, zero-sum differential games, the first general existence result for a Nash equilibrium was established by Fleming [39]: in this case the Nash equilibrium is unique and is called the value function (it is a function of time and space). The link between this value function and the Hamilton–Jacobi equations was made possible by the introduction of viscosity solutions by Crandall and Lions [32] (see also [33] for a general presentation of viscosity solutions). The applications to zero-sum differential games are due to Evans and Souganidis [35] (for deterministic problems) and Fleming and Souganidis [40] (for stochastic ones).

For non-zero-sum differential games, the situation is more complicated. One can show the existence of general Nash equilibria thanks to an adaptation of the Folk theorem: see Kononenko [64] (for first-order differential games) and Buckdahn, Cardaliaguet, and Rainer [23] (for differential games with diffusion). However, this notion of solution does not allow for dynamic programming: it lacks time consistency in general.
The existence of time-consistent Nash equilibria, based on dynamic programming, requires the solvability of a strongly coupled system of Hamilton–Jacobi equations. This system, which plays a key role in this book, is here called the Nash system. For problems without diffusions, Bressan and Shen explain in [21, 22] that the Nash system is ill-posed in general. However, for systems with diffusions, the Nash system becomes a uniformly parabolic system of partial differential equations. Typically, for a game with N players and with uncontrolled diffusions, this backward-in-time system takes the form

$$
\begin{cases}
-\partial_t v^i(t,x) - \mathrm{tr}\big(a^i(t,x)\,D^2 v^i(t,x)\big) + H^i\big(t,x,Dv^1(t,x),\dots,Dv^N(t,x)\big) = 0 \\
\qquad\qquad \text{in } [0,T]\times(\mathbb{R}^d)^N,\quad i\in\{1,\dots,N\},\\
v^i(T,x) = G^i(x) \quad \text{in } (\mathbb{R}^d)^N.
\end{cases}
\tag{1.1}
$$


The foregoing system describes the evolution in time of the value function $v^i$ of agent $i$ ($i \in \{1,\dots,N\}$). This value function depends on the positions of all the players, $x = (x_1,\dots,x_N)$, $x_i$ being the position of the state of player $i$. The second-order terms $\mathrm{tr}(a^i(t,x)D^2 v^i(t,x))$ formalize the noises affecting the dynamics of agent $i$. The Hamiltonian $H^i$ encodes the cost player $i$ has to pay


to control her state and to reach some goal. This cost depends on the positions of the other players and on their strategies. The relevance of such a system for differential games has been discussed by Starr and Ho [94] and Case [30] (for first-order systems) and by Friedman [43] (1972) (for second-order systems); see also the monograph by Başar and Olsder [11] and the references therein. The well-posedness of this system has been established under some restrictions on the regularity and the growth of the Hamiltonians: see in particular the monograph by Ladyženskaja, Solonnikov, and Ural'ceva [70] and the paper by Bensoussan and Frehse [14].

As for classical games, it is natural to investigate the limit of differential games as the number of players tends to infinity. The hope is that in this limit configuration the Nash system simplifies. This notion makes sense only for time-consistent Nash equilibria, because no simplification occurs in the framework of the Folk theorem, where the player who deviates is punished by all the other players.

Games in continuous space with infinitely many players were first introduced in the economic literature (in discrete time) under the terminology of heterogeneous models. The aim was to formalize dynamic general equilibria in macroeconomics by taking into account not only aggregate variables (GDP, employment, the general price level, for example) but also the distributions of variables, say the joint distribution of income and wealth or the size distribution of firms, and to try to understand how these variables interact. We refer in particular to the pioneering works by Aiyagari [6], Huggett [58], and Krusell and Smith [65], as well as the presentation of the continuous-time counterpart of these models in [5].
In the mathematical literature, the theory of differential games with infinitely many players, known as mean field games (MFGs), started with the works of Lasry and Lions [71, 72, 74]; Huang, Caines, and Malhamé [53–57] presented similar models under the name of the certainty equivalence principle. Since then the literature has grown very quickly, not only on the theoretical aspects, but also on numerical methods and applications: we refer to the monographs [16, 48] or the survey paper [49].

This book focuses mainly on the derivation of the MFG models from games with a finite number of players. In classical game theory, the rigorous link between nonatomic games and games with a large but finite number of agents is quite well understood: one can show (1) that the limit of Nash equilibria, as the number of agents tends to infinity, is a Nash equilibrium of the nonatomic game (Green [50]), and (2) that any optimal strategy in the nonatomic game provides an ε-Nash equilibrium in the game with finitely many players, provided the number of players is sufficiently large (Rashid [90]). For MFGs, the situation is completely different. While the equivalent of question (2) is pretty well understood, problem (1) turns out to be surprisingly difficult. Indeed, passing from the MFG equilibria to the differential game with finitely many players relies mostly on known techniques in mean field theory: this has been developed since the beginning of the theory in [54] and well studied since then (see also, for instance, [25, 62]). On the contrary, when one considers


a sequence of solutions to the Nash systems with N players and one wants to let N tend to infinity, the problem becomes extremely intricate. The main reason is that, in classical game theory, this convergence comes from compactness properties of the problem; this compactness is completely absent for differential games. This issue is related to the difficulty of building time-consistent solutions for these games. A less technical way to see this is to note that there is a change of nature between the Nash system and its conjectured limit, the MFG. In the Nash system, the players observe each other, and the deviation of a single player could a priori change the outcome of the game entirely. On the contrary, in the MFG, players react only to the evolving population density, and therefore the deviation of a single player has no impact at all on the system. The main purpose of this book is to explain why this limit holds despite this change of nature.

1.1.1 Statement of the Problem

To explain our result further, we first need to specify the Nash system we are considering. We assume that players control their own states and interact only through their cost functions. Then the Nash system (1.1) takes the more specific form:

$$
\begin{cases}
\displaystyle -\partial_t v^{N,i}(t,x) - \sum_{j=1}^N \Delta_{x_j} v^{N,i}(t,x) - \beta \sum_{j,k=1}^N \mathrm{tr}\,D^2_{x_j,x_k} v^{N,i}(t,x) \\
\displaystyle \quad + H\big(x_i, D_{x_i}v^{N,i}(t,x)\big) + \sum_{j\neq i} D_p H\big(x_j, D_{x_j}v^{N,j}(t,x)\big)\cdot D_{x_j}v^{N,i}(t,x) \\
\displaystyle \qquad\qquad = F^{N,i}(x) \quad \text{in } [0,T]\times(\mathbb{R}^d)^N, \\
v^{N,i}(T,x) = G^{N,i}(x) \quad \text{in } (\mathbb{R}^d)^N.
\end{cases}
\tag{1.2}
$$

As before, the above system is stated in $[0,T]\times(\mathbb{R}^d)^N$, where a typical element is denoted by $(t,x)$ with $x=(x_1,\dots,x_N)\in(\mathbb{R}^d)^N$. The unknowns are the $N$ maps $(v^{N,i})_{i\in\{1,\dots,N\}}$ (the value functions). The data are the Hamiltonian $H:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$, the maps $F^{N,i}, G^{N,i}:(\mathbb{R}^d)^N\to\mathbb{R}$, the nonnegative parameter $\beta$, and the horizon $T\geq 0$. In the second line, the symbol $\cdot$ denotes the inner product in $\mathbb{R}^d$.

System (1.2) describes the Nash equilibria of an $N$-player differential game (see Section 1.2 for a short description). In this game, the set of "optimal trajectories" solves a system of $N$ coupled stochastic differential equations (SDEs):

$$
dX_{i,t} = -D_p H\big(X_{i,t}, D_{x_i} v^{N,i}(t, X_t)\big)\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, \quad t\in[0,T],\ i\in\{1,\dots,N\},
\tag{1.3}
$$

where $v^{N,i}$ is the solution to (1.2) and the $((B^i_t)_{t\in[0,T]})_{i=1,\dots,N}$ and $(W_t)_{t\in[0,T]}$ are $d$-dimensional independent Brownian motions. The Brownian motions $((B^i_t)_{t\in[0,T]})_{i=1,\dots,N}$ correspond to the individual noises, while the Brownian


motion $(W_t)_{t\in[0,T]}$ is the same in all the equations and, for this reason, is called the common noise. Under such a probabilistic point of view, the collection of random processes $((X_{i,t})_{t\in[0,T]})_{i=1,\dots,N}$ forms a dynamical system of interacting particles. The aim of this book is to understand the behavior, as $N$ tends to infinity, of the value functions $v^{N,i}$. Another, but closely related, objective of the book is to study the mean field limit of the $((X_{i,t})_{t\in[0,T]})_{i=1,\dots,N}$ as $N$ tends to infinity.

1.1.2 Link with the Mean Field Theory

Of course, there is no chance to observe a mean field limit for (1.3) under a general choice of the coefficients in (1.2). Asking for a mean field limit certainly requires the system to have a specific symmetric structure, in such a way that the players in the differential game are somewhat exchangeable (when in equilibrium). For this purpose, we suppose that, for each $i\in\{1,\dots,N\}$, the maps $(\mathbb{R}^d)^N\ni x\mapsto F^{N,i}(x)$ and $(\mathbb{R}^d)^N\ni x\mapsto G^{N,i}(x)$ depend only on $x_i$ and on the empirical distribution of the variables $(x_j)_{j\neq i}$:

$$
F^{N,i}(x) = F\big(x_i, m^{N,i}_x\big) \quad\text{and}\quad G^{N,i}(x) = G\big(x_i, m^{N,i}_x\big),
\tag{1.4}
$$

where $m^{N,i}_x = \frac{1}{N-1}\sum_{j\neq i}\delta_{x_j}$ is the empirical distribution of the $(x_j)_{j\neq i}$ and where $F, G:\mathbb{R}^d\times\mathcal{P}(\mathbb{R}^d)\to\mathbb{R}$ are given functions, $\mathcal{P}(\mathbb{R}^d)$ being the set of Borel probability measures on $\mathbb{R}^d$. Under this assumption, the solution of the Nash system indeed enjoys strong symmetry properties, which imply in particular the required exchangeability property. Namely, $v^{N,i}$ can be written in a form similar to (1.4):

$$
v^{N,i}(t,x) = v^N\big(t, x_i, m^{N,i}_x\big), \quad t\in[0,T],\quad x\in(\mathbb{R}^d)^N,
\tag{1.5}
$$
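The exchangeability encoded in (1.4)–(1.5) is easy to test on toy data: a cost that sees the other players only through the empirical measure $m^{N,i}_x$ is invariant under any relabeling of those players. The following minimal sketch checks this numerically (the Gaussian interaction cost is our own hypothetical example, not a coupling taken from the book):

```python
import numpy as np

# Sketch (not from the book): a hypothetical coupling F(x_i, m) evaluated at
# the empirical measure m^{N,i}_x of the other players, as in (1.4).
rng = np.random.default_rng(0)
N, d = 6, 2
x = rng.normal(size=(N, d))           # positions x_1, ..., x_N

def F(xi, others):
    """Example mean-field cost: integral of a Gaussian interaction against
    m^{N,i}_x = (1/(N-1)) * sum_{j != i} delta_{x_j}."""
    return np.mean(np.exp(-np.linalg.norm(others - xi, axis=1) ** 2))

def F_Ni(x, i):
    others = np.delete(x, i, axis=0)  # the (x_j) for j != i
    return F(x[i], others)

# Exchangeability: permuting the other players' labels leaves F^{N,i}
# unchanged, because F sees only the empirical measure, not the labels.
perm = np.concatenate(([0], 1 + rng.permutation(N - 1)))
assert np.isclose(F_Ni(x, 0), F_Ni(x[perm], 0))
```

The subtlety discussed next is not this symmetry itself, but the fact that the function $v^N$ in (1.5) still depends on $N$.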

for a function $v^N(t,\cdot,\cdot)$ taking as arguments a state in $\mathbb{R}^d$ and an empirical distribution of size $N-1$ over $\mathbb{R}^d$. In any case, even under the foregoing symmetry assumptions, it is by no means clear whether the system (1.3) can exhibit a mean field limit. The reason is that the dynamics of the particles $(X_{1,t},\dots,X_{N,t})_{t\in[0,T]}$ are coupled through the unknown solutions $v^{N,1},\dots,v^{N,N}$ to the Nash system (1.2), whose symmetry properties (1.5) may not suffice to apply standard results from the theory of propagation of chaos. Obviously, the difficulty is that the function $v^N$ on the right-hand side of (1.5) precisely depends on $N$. Part of the challenge in the text is thus to show that the interaction terms in (1.3) get closer and closer, as $N$ tends to infinity, to interaction terms with a much more tractable and much more explicit shape.

To get a picture of the ideal case in which the mean field limit can be taken, one can choose for a while $\beta=0$ in (1.3) and then assume that the function $v^N$ on the right-hand side of (1.5) is independent of $N$. Equivalently, one can replace in (1.3) the interaction function $(\mathbb{R}^d)^N\ni x\mapsto D_pH\big(x_i, D_{x_i}v^{N,i}(t,x)\big)$


by $(\mathbb{R}^d)^N\ni x\mapsto b(x_i, m^{N,i}_x)$, for a map $b:\mathbb{R}^d\times\mathcal{P}(\mathbb{R}^d)\to\mathbb{R}^d$. In such a case, the coupled system of SDEs (1.3) turns into

$$
dX_{i,t} = b\Big(X_{i,t}, \frac{1}{N-1}\sum_{j\neq i}\delta_{X_{j,t}}\Big)\,dt + \sqrt{2}\,dB^i_t, \quad t\in[0,T],\ i\in\{1,\dots,N\},
\tag{1.6}
$$

the second argument in $b$ being nothing but the empirical measure of the particle system at time $t$. Under suitable assumptions on $b$ (e.g., if $b$ is bounded and Lipschitz continuous in both variables, the space of probability measures being equipped with the Wasserstein distance) and on the initial distribution of the $((X_{i,t})_{i=1,\dots,N})_{t\in[0,T]}$, both the marginal law of $(X_{1,t})_{t\in[0,T]}$ (or of any other player) and the empirical distribution of the whole system converge to the solution of the McKean–Vlasov equation

$$
\partial_t m - \Delta m + \mathrm{div}\big(m\,b(\cdot,m)\big) = 0
$$

(see, among many other references, McKean [77], Sznitman [92], Méléard [79]). The standard strategy for establishing the convergence consists in a coupling argument. Precisely, if one introduces the system of $N$ independent equations

$$
dY_{i,t} = b\big(Y_{i,t}, \mathcal{L}(Y_{i,t})\big)\,dt + \sqrt{2}\,dB^i_t, \quad t\in[0,T],\ i\in\{1,\dots,N\}
$$

(where $\mathcal{L}(Y_{i,t})$ is the law of $Y_{i,t}$) with the same (chaotic) initial condition as that of the processes $((X_{i,t})_{t\in[0,T]})_{i=1,\dots,N}$, then it is known that (under appropriate integrability conditions; see Fournier and Guillin [42])

$$
\sup_{t\in[0,T]} \mathbb{E}\big[|X_{1,t}-Y_{1,t}|\big] \leq C N^{-\frac{1}{\max(2,d)}}\big(\mathbf{1}_{\{d\neq 2\}} + \ln(1+N)\,\mathbf{1}_{\{d=2\}}\big).
$$
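The coupling argument can be watched at work numerically. The sketch below is our own illustration (not from the book): it uses the hypothetical drift b(x, m) = −(x − mean of m) in dimension one, chosen so that, starting from a standard normal initial law, the McKean–Vlasov limit law keeps mean 0 for all times and the independent copies Y can be simulated with a known drift. The interacting system (1.6) and the independent copies share the same Brownian motions and the same initial data:

```python
import numpy as np

# Illustration only (not from the book): propagation of chaos for (1.6) with
# the hypothetical drift b(x, m) = -(x - mean(m)) in d = 1.
rng = np.random.default_rng(1)

def coupling_error(N, T=1.0, nt=200):
    dt = T / nt
    X = rng.normal(size=N)   # interacting particles, as in (1.6)
    Y = X.copy()             # McKean-Vlasov copies, same initial data
    for _ in range(nt):
        dB = rng.normal(scale=np.sqrt(dt), size=N)  # shared Brownian increments
        mX = (X.sum() - X) / (N - 1)                # empirical mean over j != i
        X = X - dt * (X - mX) + np.sqrt(2.0) * dB
        Y = Y - dt * (Y - 0.0) + np.sqrt(2.0) * dB  # the limit law has mean 0
    return np.mean(np.abs(X - Y))

# The coupling error shrinks as N grows, consistent with the N^{-1/2} rate
# predicted by the estimate above in dimension one.
errs = [coupling_error(N) for N in (10, 100, 1000)]
```

Here the error is driven entirely by the fluctuation of the empirical mean around the law's mean, which is exactly the mechanism the coupling estimate quantifies.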


In comparison with (1.6), all the equations in (1.3) are subject to the common noise $(W_t)_{t\in[0,T]}$, at least when $\beta\neq 0$. This makes a first difference between our limit problem and the above McKean–Vlasov example of interacting diffusions, but, for the time being, it is not clear how deeply this may affect the analysis. Indeed, the presence of a common noise does not constitute a real challenge in the study of McKean–Vlasov equations, the foregoing coupling argument working in that case as well, provided that the distribution of $Y$ is replaced by its conditional distribution given the realization of the common noise. However, the key point here is precisely that our problem is not formulated as a McKean–Vlasov equation, as the drifts in (1.3) are not of the same explicit mean field structure as they are in (1.6), because of the additional dependence on $N$ in the right-hand side of (1.5): obviously this is the second main difference between (1.3) and (1.6). This makes rather difficult any attempt to guess the precise impact of the common noise on the analysis. Certainly, as we already pointed out, the major issue in analyzing (1.3) stems from the complex nature of the underlying interactions. As the equations depend on one another through the


nonlinear system (1.2), the evolution with $N$ of the coupling between all of them is indeed much more intricate than in (1.6). And, once again, on top of that, the common noise adds another layer of difficulty. For these reasons, the convergence of both (1.2) and (1.3) has been an open question since Lasry and Lions' initial papers on MFGs [71, 72].

1.1.3 The Mean Field Game System

If one tries, at least in the simpler case β = 0, to describe—in a heuristic way—the structure of a differential game with infinitely many indistinguishable players, i.e., a “nonatomic differential game,” one finds a problem in which each (infinitesimal) player optimizes his payoff, depending on the collective behavior of the others, and, meanwhile, the resulting optimal state of each of them is exactly distributed according to the state of the population. This is the “mean field game system” (MFG system):

\[
\begin{cases}
-\partial_t u - \Delta u + H(x, D_x u) = F(x, m(t)) & \text{in } [0,T] \times \mathbb{R}^d,\\
\partial_t m - \Delta m - \mathrm{div}\bigl( m\, D_p H(x, D_x u) \bigr) = 0 & \text{in } [0,T] \times \mathbb{R}^d,\\
u(T,x) = G(x, m(T)), \quad m(0,\cdot) = m_{(0)} & \text{in } \mathbb{R}^d,
\end{cases}
\tag{1.7}
\]

where m_{(0)} denotes the initial state of the population. The system consists in a coupling between a (backward) Hamilton–Jacobi equation, describing the dynamics of the value function of any of the players, and a (forward) Kolmogorov equation, describing the dynamics of the distribution of the population. In that framework, H reads as a Hamiltonian, F is understood as a running cost, and G as a terminal cost. Since its simultaneous introduction by Lasry and Lions [74] and by Huang, Caines, and Malhamé [53], this system has been thoroughly investigated: its existence, under various assumptions, can be found in [15, 25, 54–56, 62, 74, 76]. Concerning uniqueness of the solution, two regimes were identified in [74]. Uniqueness holds under Lipschitz type conditions when the time horizon T is short (or, equivalently, when H, F, and G are “small”), but, as for finite-dimensional two-point boundary value problems, it may fail when the system is set over a time interval of arbitrary length. Over long time intervals, uniqueness is guaranteed under the quite fascinating condition that F and G are monotone; i.e., if, for any measures m, m′, the following holds:

\[
\int_{\mathbb{R}^d} \bigl( F(x,m) - F(x,m') \bigr)\, d(m-m')(x) \;\ge\; 0
\tag{1.8}
\]
and
\[
\int_{\mathbb{R}^d} \bigl( G(x,m) - G(x,m') \bigr)\, d(m-m')(x) \;\ge\; 0.
\]

The interpretation of the monotonicity condition is that the players dislike congested areas and favor configurations in which they are more scattered; see Remark 2.3.1 for an example. Generally speaking, condition (1.8) plays a key
role throughout the text, as it guarantees not only uniqueness but also stability of the solutions to (1.7). As observed, a solution to the MFG system (1.7) can indeed be interpreted as a Nash equilibrium for a differential game with infinitely many players: in that framework, it plays the role of the Schmeidler noncooperative equilibrium. A standard strategy to make the connection between (1.7) and differential games consists in inserting the optimal strategies from the Hamilton–Jacobi equation in (1.7) into finitely many player games in order to construct approximate Nash equilibria: see [54], as well as [25, 55, 56, 62]. However, although it establishes the interpretation of the system (1.7) as a differential game with infinitely many players, this says nothing about the convergence of (1.2) and (1.3). When β is positive, the system describing Nash equilibria within a population of infinitely many players subject to the same common noise of intensity β cannot be described by a deterministic system of the same form as (1.7). Owing to the theory of propagation of chaos for systems of interacting particles (see the short remark earlier), the unknown m in the forward equation is then expected to represent the conditional law of the optimal state of any player given the realization of the common noise. In particular, it must be random. This turns the forward Kolmogorov equation into a forward stochastic Kolmogorov equation. As the Hamilton–Jacobi equation depends on m, it renders u random as well. At any rate, a key fact from the theory of stochastic processes is that the solution to an SDE must be adapted to the underlying observation, as its values at some time t cannot anticipate the future of the noise after t. At first sight, it seems to be very demanding, as u is also required to match, at time T , G(·, m(T )), which depends on the whole realization of the noise up until T . 
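The role of the penalizing martingale can be illustrated on a toy discrete model (our own sketch, unrelated to the book's equations; the terminal function g and the binomial discretization are chosen here for the example): on a binomial random walk, pushing a terminal condition backward by conditional expectations yields an adapted pair (Y, Z), where Z is exactly the integrand of the martingale term that absorbs the new information revealed at each step.

```python
import numpy as np
from math import comb

# Toy backward "SDE" on a binomial tree: terminal data g(W_K) is pushed
# backward by conditional expectations; the increment coefficient Z_k is the
# martingale term that keeps Y adapted (Y_k depends on W_k only, never on the
# future of the noise).
K = 12
g = lambda w: max(w, 0.0)              # terminal condition (toy choice)

# nodes at step k: w = -k, -k+2, ..., k
Y = [np.array([g(-K + 2 * j) for j in range(K + 1)])]
Z = []
for k in range(K, 0, -1):
    up, down = Y[0][1:], Y[0][:-1]     # children values at step k
    Z.insert(0, (up - down) / 2.0)     # martingale coefficient at step k-1
    Y.insert(0, (up + down) / 2.0)     # Y_{k-1} = E[Y_k | F_{k-1}]
y0 = float(Y[0][0])
exact = sum(comb(K, j) * 2.0**-K * g(-K + 2 * j) for j in range(K + 1))
print(y0, exact)  # by the tower property, Y_0 equals E[g(W_K)]
```

The repeated conditional averaging reproduces E[g(W_K)] exactly, while each Y_k is a function of the walk up to time k only.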
The correct formulation to accommodate both constraints is given by the theory of backward SDEs, which suggests penalizing the backward dynamics by a martingale in order to guarantee that the solution is indeed adapted. We refer the reader to the monograph [84] for a complete account of the finite dimensional theory and to the paper [85] for an insight into the infinite dimensional case. Denoting by W “the common noise” (here, a d-dimensional Brownian motion) and by m_{(0)} the initial distribution of the players at time t_0, the MFG system with common noise then takes the form (in which the unknowns are now (u_t, m_t, v_t))

\[
\begin{cases}
d_t u_t = \bigl[ -(1+\beta)\Delta u_t + H(x, D_x u_t) - F(x, m_t) - \sqrt{2\beta}\,\mathrm{div}(v_t) \bigr]\, dt + v_t \cdot dW_t & \text{in } [0,T] \times \mathbb{R}^d,\\
d_t m_t = \bigl[ (1+\beta)\Delta m_t + \mathrm{div}\bigl( m_t\, D_p H(x, D_x u_t) \bigr) \bigr]\, dt - \mathrm{div}\bigl( m_t \sqrt{2\beta}\, dW_t \bigr) & \text{in } [0,T] \times \mathbb{R}^d,\\
u_T(x) = G(x, m_T), \quad m_0 = m_{(0)} & \text{in } \mathbb{R}^d,
\end{cases}
\tag{1.9}
\]

where we used the standard convention from the theory of stochastic processes that consists in indicating the time parameter as an index in random functions. As suggested immediately above, the map vt is a random vector field that forces
the solution ut of the backward equation to be adapted to the filtration generated by (Wt )t∈[0,T ] . As far as we know, the system (1.9) has never been investigated and part of this book will be dedicated to its analysis (see, however, [27] for an informal discussion). Below, we call the system (1.9) the MFG system with common noise. Note that the aggregate equations (1.7) and (1.9) (see also the master equation (1.10)) are the continuous-time analogues of equations that appear in the analysis of dynamic stochastic general equilibria in heterogeneous agent models (Aiyagari [6], Bewley [19], and Huggett [58]). In this setting, the factor β describes the intensity of “aggregate shocks,” as discussed by Krusell and Smith in the seminal paper [65]. In some sense, the limit problem studied in the text is an attempt to deduce the macroeconomic models, describing the dynamics of a typical (but heterogeneous) agent in an equilibrium configuration, from the microeconomic ones (the Nash equilibria).

1.1.4 The Master Equation

Although the MFG system has been widely studied since its introduction in [74] and [53], it has become increasingly clear that this system was not sufficient to take into account the entire complexity of dynamic games with infinitely many players. A case in point is that the original system (1.7) becomes much more complex in the presence of a common noise (i.e., when β > 0); see the stochastic version (1.9). In the same spirit, we may notice that the original MFG system (1.7) does not accommodate MFGs with a major player and infinitely many small players; see [52]. And, last but not least, the main limitation is that, so far, the formulation based on the system (1.7) (or (1.9) when β > 0) has not made it possible to establish a clear connection with the Nash system (1.2). These issues led Lasry and Lions [76] to introduce an infinite dimensional equation—the so-called “master equation”—that directly describes, at least formally, the limit of the Nash system (1.2) and encompasses the foregoing complex situations. Before writing down this equation, let us explain its main features. One of the key observations has to do with the symmetry properties, to which we already alluded, that are satisfied by the solution of the Nash system (1.2). Under the standing symmetry assumptions (1.4) on the (F^{N,i})_{i=1,...,N} and (G^{N,i})_{i=1,...,N}, (1.5) says that the (v^{N,i})_{i=1,...,N} can be written in a form similar to (1.4), namely v^{N,i}(t, x) = v^N(t, x_i, m^{N,i}_x) (where the empirical measures m^{N,i}_x are defined as in (1.4)), but with the obvious but major restriction that the function v^N that appears on the right-hand side of the equality now depends on N. With such a formulation, the value function of player i reads as a function of the private state of player i and of the empirical distribution formed by the others.
Then, one may guess, at least under the additional assumption that such a structure is preserved as N → +∞, that the unknown in the limit problem takes the form U = U (t, x, m), where x is the position of the (typical) small player at time t and m is the distribution of the (infinitely many) other agents.
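This structural point can be made concrete on a toy example (ours, with a made-up function `vN` standing in for v^N): any function of the pair (x_i, m^{N,i}_x) is automatically invariant under permutations of the states of the other players, which is exactly the symmetry expressed by (1.4) and (1.5).

```python
import numpy as np

rng = np.random.default_rng(2)

def vN(t, xi, others):
    """Toy stand-in for v^N(t, x_i, m^{N,i}_x): it sees the other players only
    through their empirical measure (here, through its first two moments),
    never through their labels."""
    m1, m2 = float(np.mean(others)), float(np.mean(others ** 2))
    return np.sin(xi) + t * m1 + m2

x = rng.normal(size=6)
v0 = vN(0.5, x[0], x[1:])
perm_vals = [vN(0.5, x[0], rng.permutation(x[1:])) for _ in range(10)]
print(v0)  # unchanged under any relabeling of players 2, ..., N
```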

The question is then to write down the dynamics of U. Plugging U = U(t, x_i, m^{N,i}_x) into the Nash system (1.2), one obtains—at least formally—an equation stated in the space of measures (see Section 1.2 for a heuristic discussion). This is the so-called master equation. It takes the form

\[
\begin{cases}
\displaystyle -\partial_t U - (1+\beta)\Delta_x U + H(x, D_x U) - (1+\beta) \int_{\mathbb{R}^d} \mathrm{div}_y\bigl[ D_m U \bigr]\, dm(y)\\[4pt]
\displaystyle \quad + \int_{\mathbb{R}^d} D_m U \cdot D_p H\bigl( y, D_x U(\cdot, y, \cdot) \bigr)\, dm(y)\\[4pt]
\displaystyle \quad - 2\beta \int_{\mathbb{R}^d} \mathrm{div}_x\bigl[ D_m U \bigr]\, dm(y) - \beta \int_{\mathbb{R}^{2d}} \mathrm{Tr}\bigl[ D^2_{mm} U \bigr]\, dm^{\otimes 2}(y, y')\\[4pt]
\displaystyle \qquad = F(x, m) \quad \text{in } [0,T] \times \mathbb{R}^d \times \mathcal{P}(\mathbb{R}^d),\\
U(T, x, m) = G(x, m) \quad \text{in } \mathbb{R}^d \times \mathcal{P}(\mathbb{R}^d),
\end{cases}
\tag{1.10}
\]

where ∂_t U, D_x U, and Δ_x U are understood as ∂_t U(t, x, m), D_x U(t, x, m), and Δ_x U(t, x, m); D_x U(·, y, ·) is understood as D_x U(t, y, m); and D_m U and D²_{mm} U are understood as D_m U(t, x, m, y) and D²_{mm} U(t, x, m, y, y′). In Eq. (1.10), ∂_t U, D_x U, and Δ_x U stand for the usual time derivative, space derivatives, and Laplacian with respect to the local variables (t, x) of the unknown U, while D_m U and D²_{mm} U are the first- and second-order derivatives with respect to the measure m. The precise definition of these derivatives is postponed to Chapter 2. For the time being, let us just note that it is related to the derivatives in the space of probability measures described, for instance, by Ambrosio, Gigli, and Savaré in [7] and by Lions in [76]. It is worth mentioning that the master equation (1.10) is not the first example of an equation studied in the space of measures—by far: for instance, Otto [83] gave an interpretation of the porous medium equation as an evolution equation in the space of measures, and Jordan, Kinderlehrer, and Otto [60] showed that the heat equation was also a gradient flow in that framework; notice also that the analysis of Hamilton–Jacobi equations in metric spaces is partly motivated by the specific case in which the underlying metric space is the space of measures (see in particular [8, 36] and the references therein). The master equation is, however, the first one to be at the same time nonlocal, nonlinear, and of second order, and, moreover, without a maximum principle.
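The precise definitions are deferred to Chapter 2, but the finite-dimensional picture behind D_m U can be sketched at once (our own toy illustration; the functional U and the choice φ = sin are made up for the example): for the linear functional U(m) = ∫ φ dm one expects D_m U(m, y) = Dφ(y), and differentiating the empirical projection x ↦ U(m^N_x) in the coordinate x_i produces the factor 1/N.

```python
import numpy as np

rng = np.random.default_rng(3)

def U(points):
    """U(m) = \\int sin(y) dm(y), evaluated at the empirical measure of `points`."""
    return float(np.mean(np.sin(points)))

x = rng.normal(size=20)
i, h = 7, 1e-6
xp = x.copy(); xp[i] += h
fd = (U(xp) - U(x)) / h            # finite difference in the i-th coordinate
exact = np.cos(x[i]) / len(x)      # (1/N) * D_m U(m^N_x, x_i) with D_m U = cos
print(fd, exact)
```

The 1/N rescaling between derivatives in (T^d)^N and derivatives in the measure argument is the mechanism that lets functions of empirical measures be compared with solutions of (1.2).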
Besides the discussion in [76], the importance of the master equation (1.10) has been acknowledged by several contributions: see, for instance, the monograph [16] and the companion papers [17] and [18], in which Bensoussan, Frehse, and Yam generalize this equation to mean field type control problems and reformulate it as a partial differential equation (PDE) set on an L2 space, and [27], where Carmona and Delarue interpret this equation as the decoupling field of a forward–backward SDE in infinite dimension. Although the master equation has been discussed and manipulated thoroughly in the aforementioned references, it is mostly at a formal level: the well-posedness of the master equation has remained, to a large extent, open until now. Besides,
even if the master equation has been introduced to explain the convergence of the Nash system, the rigorous justification of the convergence has not been understood. The aim of this book is to provide an answer to both questions.

1.1.5 Well-posedness of the Master Equation

The largest part of this book is devoted to the proof of the existence and uniqueness of a classical solution to the master equation (1.10), where, by classical, we mean that all the derivatives in (1.10) exist and are continuous. To avoid issues related to boundary conditions or conditions at infinity, we work for simplicity with periodic data: the maps H, F , and G are periodic in the space variable. The state space is therefore the d-dimensional torus Td = Rd /Zd and m(0) belongs to P(Td ), the set of Borel probability measures on Td . We also assume that F, G : Td × P(Td ) → R satisfy the monotonicity conditions (1.8) and are sufficiently “differentiable” with respect to both variables and, of course, periodic with respect to the state variable. Although the periodicity condition is rather restrictive, the extension to maps defined on the full space or to Neumann boundary conditions is probably not a major issue. At any rate, it would certainly require further technicalities. So far, the existence of classical solutions to the master equation has been known in more restricted frameworks. Lions discussed in [76] a finite dimensional analogue of the master equation and derived conditions for this hyperbolic system to be well posed. These conditions correspond precisely to the monotonicity property (1.8), which we here assume to be satisfied by the coupling functions F and G. This parallel strongly indicates—but this should not come as a surprise—that the monotonicity of F and G should play a key role in the unique strong solvability of (1.10). Lions also explained in [76] how to get the well-posedness of the master equation without noise (no Laplacian in the equation) by extending the equation to a (fixed) space of random variables under a convexity assumption in space of the data. 
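Condition (1.8) can be probed numerically for concrete couplings. As a toy example of ours (not from the book): the convolution coupling F(x, m) = ∫ exp(−(x−y)²/2) dm(y) is monotone, because the left-hand side of (1.8) reduces to the quadratic form of the positive-definite Gaussian kernel applied to the signed measure m − m′. A minimal check on random discrete measures:

```python
import numpy as np

rng = np.random.default_rng(1)

def F(x, support):
    """F(x, m) = \\int exp(-(x-y)^2/2) dm(y) for m uniform on `support` (toy coupling)."""
    return float(np.mean(np.exp(-0.5 * (x - support) ** 2)))

def monotonicity_gap(a, b):
    """\\int (F(x,m) - F(x,m')) d(m-m')(x) for m, m' uniform on the atoms a, b."""
    n = len(a)
    pts = np.concatenate([a, b])
    signs = np.concatenate([np.full(n, 1.0 / n), np.full(n, -1.0 / n)])
    return float(sum(s * (F(p, a) - F(p, b)) for p, s in zip(pts, signs)))

# The gap is the Gaussian-kernel quadratic form of m - m', hence >= 0.
gaps = [monotonicity_gap(rng.normal(size=8), rng.normal(size=8)) for _ in range(200)]
print(min(gaps))
```

In the game interpretation, such a coupling penalizes concentration near the other players, consistently with the reading of (1.8) given above.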
In [24] Buckdahn, Li, Peng, and Rainer studied equation (1.10), by means of probabilistic arguments, when there is no coupling or common noise (F = G = 0, β = 0) and proved the existence of a classical solution in this setting; in a somewhat similar spirit, Kolokoltsov, Li, and Yang [62] and Kolokoltsov, Troeva, and Yang [63] investigated the tangent process to a flow of probability measures solving a McKean–Vlasov equation. Gangbo and Swiech [45] analyzed the first-order master equation in short time (no Laplacian in the equation) for a particular class of Hamiltonians and of coupling functions F and G (which are required to derive from a potential in the measure argument). Chassagneux, Crisan, and Delarue [31] obtained, by a probabilistic approach similar to that used in [24], the existence and uniqueness of a solution to (1.10) without common noise (when β = 0) under the monotonicity condition (1.8) in either the nondegenerate case (as we do here) or in the degenerate setting provided that F , H, and G satisfy additional convexity
conditions in the variables (x, p). The complete novelty of our result, regarding the specific question of solvability of the master equation, is the existence and uniqueness of a classical solution to the problem with common noise.

The technique of proof in [24, 31, 45] consists in finding a suitable representation of the solution: indeed a key remark in Lions [76] is that the master equation is a kind of transport equation in the space of measures and that its characteristics are, when β = 0, the MFG system (1.7). Using this idea, the main difficulty is then to prove that the candidate is smooth enough to perform the computation showing that it is a classical solution of (1.10). In [24, 31] this is obtained by linearizing systems of forward–backward SDEs, while [45] relies on a careful analysis of the characteristics of the associated first-order PDE. Our starting point is the same: we use a representation formula for the master equation. When β = 0, the characteristics are just the solutions to the MFG system (1.7). When β is positive, these characteristics become random under the action of the common noise and are then given by the solution of the MFG system with common noise (1.9). The construction of a solution U to the master equation then relies on the method of characteristics. Namely, we define U by letting U(t_0, x, m_{(0)}) := u_{t_0}(x), where the pair (u_t, m_t)_{t∈[t_0,T]} is the solution to (1.9) when the forward equation is initialized at m_{(0)} ∈ P(T^d) at time t_0, that is,

\[
\begin{cases}
d_t u_t = \bigl[ -(1+\beta)\Delta u_t + H(x, D_x u_t) - F(x, m_t) - \sqrt{2\beta}\,\mathrm{div}(v_t) \bigr]\, dt + v_t \cdot dW_t & \text{in } [t_0,T] \times \mathbb{T}^d,\\
d_t m_t = \bigl[ (1+\beta)\Delta m_t + \mathrm{div}\bigl( m_t\, D_p H(x, D_x u_t) \bigr) \bigr]\, dt - \mathrm{div}\bigl( m_t \sqrt{2\beta}\, dW_t \bigr) & \text{in } [t_0,T] \times \mathbb{T}^d,\\
u_T(x) = G(x, m_T), \quad m_{t_0} = m_{(0)} & \text{in } \mathbb{T}^d.
\end{cases}
\tag{1.11}
\]
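In the deterministic case β = 0, the characteristics reduce to the MFG system (1.7), and the fixed-point structure behind the construction can be mimicked numerically. The sketch below is our own minimal illustration (not a scheme from the book, and with no claim of accuracy): a damped Picard iteration between an explicit backward scheme for the HJB equation and an explicit forward scheme for the Kolmogorov equation on the one-dimensional torus, with the assumed data H(x, p) = p²/2, F(x, m) = density of m, and G = 0.

```python
import numpy as np

n, T = 60, 0.02
dx = 1.0 / n
dt = 0.4 * dx * dx            # explicit schemes: need dt <= dx^2/2
K = int(T / dt)
x = np.arange(n) * dx

def lap(f):   # periodic Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def grad(f):  # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

m0 = np.exp(-50 * (x - 0.5) ** 2)
m0 /= m0.sum() * dx           # initial density m_(0)

m_path = np.tile(m0, (K + 1, 1))           # initial guess: frozen density
errs = []
for it in range(60):
    # backward HJB: u(t - dt) = u(t) + dt * (Lap u - |Du|^2/2 + F(., m_t))
    u = np.zeros(n)                         # terminal condition G = 0
    u_path = [np.zeros(n)] * (K + 1)
    u_path[K] = u
    for k in range(K, 0, -1):
        u = u + dt * (lap(u) - 0.5 * grad(u) ** 2 + m_path[k])
        u_path[k - 1] = u
    # forward Kolmogorov: m(t + dt) = m(t) + dt * (Lap m + div(m * Du))
    m = m0.copy()
    new_path = [m]
    for k in range(K):
        m = m + dt * (lap(m) + grad(m * grad(u_path[k])))
        new_path.append(m)
    new_path = np.array(new_path)
    errs.append(float(np.abs(new_path - m_path).max()))
    m_path = 0.5 * m_path + 0.5 * new_path  # damped Picard update
print(errs[0], errs[-1])  # the fixed-point residual collapses over the iterations
```

For short horizons the backward–forward map is contracting, which is the discrete counterpart of the small-time solvability discussed below; the central-difference divergence also conserves mass exactly on the periodic grid.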

There are two main difficult steps in the analysis. The first one is to establish the smoothness of U and the second one is to show that U indeed satisfies the master equation (1.10). To proceed, the cornerstone is to make a systematic use of the monotonicity properties of the maps F and G: basically, monotonicity prevents the emergence of singularities in finite time. Our approach seems to be very powerful, although the reader might have a different feeling because of the length of the arguments. As a matter of fact, part of the technicalities in the proof are caused by the stochastic aspect of the characteristics (1.11). As a result, we spend much effort to handle the case with a common noise (for which almost nothing has been known so far), but, in the simpler case β = 0, our strategy to handle the first-order master equation provides a much shorter proof than in the earlier works [24,31,45]. For this reason, we decided to display the proof in this simple context separately (Section 3). It is worth mentioning that, although our result is the first one to address the MFG system (1.11) in the case β > 0, the existence and uniqueness of equilibria to MFGs with a common noise were already studied in the paper [29] by
Carmona, Delarue, and Lacker. Therein, the strategy is completely different, as the existence is investigated first by combining purely probabilistic arguments together with Kakutani–Fan–Glicksberg’s theorem for set-valued mappings. As a main feature, existence of equilibria is proved by means of a discretization procedure of the common noise, which consists in focusing first on the case when the common noise has a finite number of outcomes. This constraint on the noise is relaxed in a second step. However, it must be stressed that the limiting solutions that are obtained in this way (for the MFG driven by the original noise) are weak equilibria only, which means that they may not be adapted with respect to the common source of noise. This fact is completely reminiscent of the construction of weak solutions to SDEs. Remarkably, the Yamada–Watanabe principle for weak existence and strong uniqueness of SDEs extends to mean field games with a common noise: provided that a form of strong uniqueness holds for the MFG, any weak solution is in fact strong. Generally speaking, it is shown in [29] that strong uniqueness indeed holds true for MFGs with a common noise whenever the aforementioned monotonicity condition (1.8) is satisfied. In this regard, the result of [29] is completely consistent with the one we obtain here for the solvability of (1.11), as we prove that the solutions to (1.11) are indeed adapted with respect to (W_t)_{t∈[0,T]}. The main difference with [29] is that we take a shortcut to get the result, as we directly benefit from the monotone structure (1.8) to apply a fixed-point argument with uniqueness (instead of a fixed-point argument without uniqueness such as Kakutani–Fan–Glicksberg’s theorem). As a result, we obtain at the same time existence and uniqueness of a solution to (1.11).

1.1.6 The Convergence Result

Although most of the book is devoted to the construction of a solution to the master equation, our main (and primary) motivation remains to justify the mean field limit. Namely, we show that the solution of the Nash system (1.2) converges to the solution of the master equation. The main issue here is the complete lack of estimates on the solutions to this large system of Hamilton–Jacobi equations: this prevents the use of any compactness method to prove the convergence. So far, this question has been almost completely open. The convergence has been known in very few specific situations. For instance, it was proved for ergodic MFGs (see Lasry–Lions [71], revisited by Bardi–Feleqi [13]). In this case, the Nash equilibrium system reduces to a coupled system of N equations in T^d (instead of N equations in T^{Nd} as in (1.2)) and estimates of the solutions are available. Convergence is also known in the “linear-quadratic” setting, where the Nash system has explicit solutions: see Bardi [12]. Let us finally quote the nice results by Fischer [38] and Lacker [69] on the convergence of open loop Nash equilibria for the N-player game and the characterization of the possible limits. Therein, the authors overcome the lack of strong estimates on the solutions to the N-player game by using the notion of relaxed controls, for which weak compactness criteria are available. The problem addressed here—concerning closed loop Nash
equilibria—differs in a substantial way from [38, 69]: indeed, we underline the striking fact that the Nash system (1.2), which concerns equilibria in which the players observe each other, converges to an equation in which the players only need to observe the evolution of the distribution of the population. This is striking because it allows for a drastic gain of complexity: without common noise, limiting equilibria are deterministic and hence can be precomputed; in particular, the limiting strategies are distributed in the sense that players just need to update their own state to compute the equilibrium strategy; this is in contrast with the equilibrium given by (1.2), as the latter requires updating the states of all the players in the equilibrium feedback function. Our main contribution is a general convergence result, in large time, for MFGs with common noise, as well as an estimate of the rate of convergence. The convergence holds in the following sense: for any x ∈ (T^d)^N, let m^N_x := (1/N) Σ_{i=1}^N δ_{x_i}; then

\[
\sup_{i=1,\dots,N} \bigl| v^{N,i}(t_0, x) - U\bigl(t_0, x_i, m^N_x\bigr) \bigr| \;\le\; C N^{-1},
\tag{1.12}
\]

for a constant C independent of N, t_0, and x. We also prove a mean field result for the optimal solutions (1.3): if the initial conditions of the ((X_{i,·}))_{i=1,...,N} are i.i.d. and with the same law m_{(0)} ∈ P(T^d), then

\[
\mathbb{E}\Bigl[ \sup_{t \in [0,T]} |X_{i,t} - Y_{i,t}| \Bigr] \;\le\; C N^{-\frac{1}{d+8}},
\tag{1.13}
\]

where the ((Y_{i,t})_{i=1,...,N})_{t∈[0,T]} are the solutions to the McKean–Vlasov SDE

\[
dY_{i,t} = -D_p H\bigl( Y_{i,t}, D_x U\bigl( t, Y_{i,t}, \mathcal{L}(Y_{i,t}|W) \bigr) \bigr)\, dt + \sqrt{2}\, dB^i_t + \sqrt{2\beta}\, dW_t, \qquad t \in [t_0, T],
\]

with the same initial condition as the ((X_{i,t})_{i=1,...,N})_{t∈[0,T]}. Here U is the solution of the master equation and L(Y_{i,t}|W) is the conditional law of Y_{i,t} given the realization of the whole path W. Since the ((Y_{i,t})_{t∈[0,T]})_{i=1,...,N} are conditionally independent given W, (1.13) shows that (conditional) propagation of chaos holds for the N-Nash equilibria. The technique of proof consists in testing the solution U of the master equation (1.10) as nearly a solution to the N-Nash system (1.2). On the model of (1.4), a natural candidate for being an approximate solution to the N-Nash system is indeed

\[
u^{N,i}(t, x) = U\bigl( t, x_i, m^{N,i}_x \bigr), \qquad t \in [0, T],\; x \in (\mathbb{T}^d)^N.
\]

Taking advantage of the smoothness of U, we then prove that the “proxies” (u^{N,i})_{i=1,...,N} almost solve the N-Nash system (1.2) up to a remainder term that
vanishes as N tends to ∞. As a byproduct, we deduce that the (u^{N,i})_{i=1,...,N} get closer and closer to the “true solutions” (v^{N,i})_{i=1,...,N} when N tends to ∞, which yields (1.12). As the reader may notice, the convergence property (1.12) holds in supremum norm, which is a very strong fact. It is worth mentioning that the monotonicity properties (1.8) play no role in our proof of the convergence. However, surprisingly, the uniform parabolicity of the MFG system is a key ingredient of the proof. On the one hand, in the uniformly parabolic setting, the convergence holds under the sole assumption that the master equation has a classical solution (plus structural Lipschitz continuity conditions on the coefficients). On the other hand, we do not know if one can dispense with the parabolicity condition.

1.1.7 Conclusion and Further Prospects

The fact that the existence of a classical solution to the master equation suffices to prove the convergence of the Nash system demonstrates the deep interest of the master equation, when regarded as a mathematical concept in its own right. Considering the problem from a more abstract point of view, the master equation indeed captures the evolution of the time-dependent semigroup generated by the Markov process formed, on the space of probability measures, by the forward component of the MFG system (1.11). Such a semigroup is said to be lifted as the corresponding Markov process has P(Td ) as state space. In other words, the master equation is a nonlinear PDE driven by a Markov generator acting on functions defined on P(Td ). The general contribution of our book is thus to show that any classical solution to the master equation accommodates a given perturbation of the lifted semigroup and that the information enclosed in such a classical solution suffices to determine the distance between the semigroup and its perturbation. Obviously, as a perturbation of a semigroup on the space of probability measures, we are here thinking of a system of N interacting particles, exactly as that formed by the Nash equilibrium of an N -player game. Identifying the master equation with a nonlinear PDE driven by the Markov generator of a lifted semigroup is a key observation. As already pointed out, the Markov generator is precisely the operator, acting on functions from P(Td ) to R, generated by the forward component of the MFG system (1.11). Put differently, the law of the forward component of the MFG system (1.11), which resides in P(P(Td )), satisfies a forward Kolmogorov equation, also referred to as a “master equation” in physics. 
This says that “our master equation” is somehow the dual (in the sense that it is driven by the adjoint operator) of the “master equation” that would describe, according to the terminology used in physics, the law of the Nash equilibrium for a game with infinitely many players (in which case the Nash equilibrium itself is a distribution). We stress that this interpretation is very close to the point of view developed by Mischler and Mouhot [80] in order to investigate Kac’s program (except that, differently from ours, Mischler and Mouhot’s work investigates uniform propagation of chaos over an infinite time horizon; we refer to the companion paper by Mischler,
Mouhot, and Wennberg [81] for the analysis, based on the same technology, of mean field models in finite time). Therein, the authors introduce the evolution equation satisfied by the (lifted) semigroup, acting on functions from P(R^d) to R, generated by the d-dimensional Boltzmann equation. According to our terminology, such an evolution equation is a “master equation” on the space of probability measures, but it is linear and of the first order, while ours is nonlinear and of the second order (meaning second order on P(T^d)). In this perspective, we also emphasize that our strategy for proving the convergence of the N-Nash system relies on a similar idea to that used in [80] to establish the convergence of Kac’s jump process. Whereas our approach consists in inserting the solution of the master equation into the N-Nash system, Mischler and Mouhot’s point of view is to compare the semigroup generated by the N-particle Kac’s jump process, which operates on symmetric functions from (R^d)^N to R (or equivalently on empirical distributions of size N), with the limiting lifted semigroup, when acting on the same class of symmetric functions from (R^d)^N to R. Clearly, the philosophy is the same, except that, in our setting, the “limiting master equation” is nonlinear and of second order (which renders the analysis more difficult) and is set over a finite time horizon only (which does not require uniform-in-time estimates). It is worth mentioning that similar ideas have been explored by Kolokoltsov in the monograph [61] and developed, in the McKean–Vlasov framework, in the subsequent works [62] and [63] in collaboration with his coauthors.
Of course, these parallels raise interesting questions, but we refrain from comparing these different works in a more detailed way: this would require addressing more technical questions regarding, for instance, the topology used on the space of probability measures and the regularity of the various objects in hand; clearly, this would distract us from our original objective. We thus prefer to keep the discussion at an informal level and to postpone a more careful comparison to future works on the subject. We complete the introduction by pointing out possible generalizations of our results. For simplicity of notation, we work in the autonomous case, but the results remain unchanged if H or F is time dependent, provided that the coefficients F, G, and H, and their derivatives (whenever they exist), are continuous in time and that the various quantitative assumptions we put on F, G, and H hold uniformly with respect to the time variable. We can also remove the monotonicity condition (1.8) provided that the time horizon T is assumed to be small enough. The reason is that the analysis of the smoothness of U relies on the solvability and stability properties of the forward–backward system (1.11) and of its linearized version: as for finite-dimensional two-point boundary value problems, Lipschitz type conditions on the coefficients (and on their derivatives, since we are also dealing with the linearized version) are sufficient whenever T is small enough. As already mentioned, we also choose to work in the periodic framework. We expect similar results under other types of boundary conditions, such as the entire space R^d or Neumann boundary conditions.

Notice also that our results can be generalized without much difficulty to the stationary setting, corresponding to infinite horizon problems. This framework is particularly meaningful for economic applications. In this setting the Nash system takes the form

\[
\begin{cases}
\displaystyle r v^{N,i}(x) - \sum_{j=1}^N \Delta_{x_j} v^{N,i}(x) - \beta \sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} v^{N,i}(x) + H\bigl( x_i, D_{x_i} v^{N,i}(x) \bigr)\\[4pt]
\displaystyle \quad + \sum_{j \neq i} D_p H\bigl( x_j, D_{x_j} v^{N,j}(x) \bigr) \cdot D_{x_j} v^{N,i}(x) = F^{N,i}(x) \quad \text{in } (\mathbb{R}^d)^N,
\end{cases}
\]

where r > 0 is interpreted as a discount factor. The corresponding master equation is

\[
\begin{cases}
\displaystyle r U - (1+\beta)\Delta_x U + H(x, D_x U) - (1+\beta) \int_{\mathbb{R}^d} \mathrm{div}_y\bigl[ D_m U \bigr]\, dm(y)\\[4pt]
\displaystyle \quad + \int_{\mathbb{R}^d} D_m U \cdot D_p H\bigl( y, D_x U(y, \cdot) \bigr)\, dm(y)\\[4pt]
\displaystyle \quad - 2\beta \int_{\mathbb{R}^d} \mathrm{div}_x\bigl[ D_m U \bigr]\, dm(y) - \beta \int_{\mathbb{R}^{2d}} \mathrm{Tr}\bigl[ D^2_{mm} U \bigr]\, dm^{\otimes 2}(y, y') = F(x, m)\\[4pt]
\quad \text{in } \mathbb{R}^d \times \mathcal{P}(\mathbb{R}^d),
\end{cases}
\]

where the unknown is the map U = U(x, m), and with the same convention of notation as in (1.10). One can again solve this system by using the method of (infinite dimensional) characteristics, paying attention to the fact that these characteristics remain time dependent. The MFG system with common noise takes the form (in which the unknowns are now (u_t, m_t, v_t))

\[
\begin{cases}
d_t u_t = \bigl[ r u_t - (1+\beta)\Delta u_t + H(x, D_x u_t) - F(x, m_t) - \sqrt{2\beta}\,\mathrm{div}(v_t) \bigr]\, dt + v_t \cdot dW_t & \text{in } [0,+\infty) \times \mathbb{R}^d,\\
d_t m_t = \bigl[ (1+\beta)\Delta m_t + \mathrm{div}\bigl( m_t\, D_p H(x, D_x u_t) \bigr) \bigr]\, dt - \mathrm{div}\bigl( m_t \sqrt{2\beta}\, dW_t \bigr) & \text{in } [0,+\infty) \times \mathbb{R}^d,\\
m_0 = m_{(0)} \ \text{in } \mathbb{R}^d, \quad (u_t)_t \ \text{bounded a.s.}
\end{cases}
\]

Lastly, we point out that, even though we do not address this question in the book, our work could be used later for numerical purposes. Solving MFGs numerically is indeed a delicate issue and, so far, numerical methods have been considered mostly in the case without common noise: we refer to the works of Achdou and his coauthors; see, for instance, [1–3] for discretization schemes of the MFG system (1.7). Owing to obvious issues of complexity, the case with common noise seems especially challenging. A case in point is that the system (1.11) is an infinite-dimensional fully coupled forward–backward system, which could

be thought, at the discrete level, as an infinite-dimensional equation expanding along all the possible branches of the tree generated by a discrete random walk. Although our work does not provide any clue for bypassing these complexity issues, we guess that our theoretical results—both the representation of the equilibria in the form of the MFG system (1.11) and through the master equation (1.10), and their regularity—could be useful for a numerical analysis.

1.1.8  Organization of the Text and Reading Guide

We present our main results in Chapter 2, where we also explain the notation, state the assumptions, and rigorously define the notion of derivative on the space of measures. The well-posedness of the master equation is proved in Chapter 3 when β = 0. Unique solvability of the MFG system with common noise is discussed in Chapter 4. Results obtained in Chapter 4 are implemented in Chapter 5 to derive the existence of a classical solution to the master equation in the general case. The last chapter is devoted to the convergence of the Nash system. In the Appendix, we revisit the notion of derivative on the space of probability measures and discuss some useful auxiliary properties. We strongly recommend that the reader start with Section 1.2 and with Chapters 2 and 3. Section 1.2 provides heuristic arguments for the construction of a solution to the master equation; this should be very helpful for understanding the key results of the book. The complete proof of existence for the first-order case (i.e., without common noise) is the precise aim of Chapter 3; we feel it is quite accessible. The reader who is more interested in the analysis of the convergence problem than in the study of the case with common noise may skip directly to Chapter 6; to make things easier, she/he may follow the computations of Chapter 6 by letting β = 0 therein (i.e., no common noise). In fact, we suggest that the reader follow this plan even if she/he is interested in the case with common noise, especially if she/he is not keen on probability theory and stochastic calculus; she/he can then go back to Chapters 4 and 5, which are more technical. In these latter two chapters, the reader who is really interested in MFGs with common noise will find new results: The analysis of the MFG system with common noise is mostly the aim of Chapter 4; if needed, the reader may return to Section 3.1, Proposition 3.1.1, for a basic existence result in the case without common noise.
The second-order master equation (with common noise) is investigated in Chapter 5, but requires the well-posedness of the MFG system with common noise as stated in Theorem 4.3.1. The reader should be aware of some basics of stochastic calculus (mostly Itô's formula) to follow the computations of Chapter 6. Chapters 4 and 5 are partly inspired by the theory of backward stochastic differential equations; although this might not be necessary, the reader may have a look at the two monographs [84, 97] for a complete overview of the subject and at the textbook [88] for an introduction.


Of course, the manuscript borrows considerably from the PDE literature and in particular from the theory of Hamilton–Jacobi equations; the reason is that a solution to an MFG is defined as a fixed point of a mapping taking as inputs the optimal trajectories of a family of optimal stochastic control problems. As for the connection between stochastic optimal control problems and Hamilton–Jacobi equations, we refer the reader to the monographs [41, 66]. Some PDE regularity estimates are used quite often in the text, especially for linear and nonlinear second-order parabolic equations; most of them are taken from well-known books on the subject, among which are [70] and [75]. Lastly, the reader will also find in the book results that may be useful for other purposes: Derivatives in the space of measures are discussed in Section 2.2 (definition and basic results) and in Section A.1 of the Appendix (link with Lions' approach); a chain rule (Itô's formula) for functions defined on the space of measures, when taken along the solution of a stochastic Kolmogorov equation, is derived in Section A.3 of the Appendix.

1.2  INFORMAL DERIVATION OF THE MASTER EQUATION

Before stating our main results, it is worthwhile explaining the meaning of the Nash system, the heuristic derivation of the master equation from the Nash system, and its main properties. We hope that this (by no means rigorous) presentation helps the reader become acquainted with our notation and the main ideas of proof. To emphasize the informal aspect of the discussion, we state all the ideas in $\mathbb{R}^d$, without bothering about boundary issues (whereas in the rest of the text we always work with periodic boundary conditions).

1.2.1  The Differential Game

The Nash system (1.2) arises in differential game theory. Differential games are just optimal control problems with many (here $N$) players. In this game, player $i$ ($i=1,\dots,N$) controls her/his state $(X_{i,t})_{t\in[0,T]}$ through her/his control $(\alpha_{i,t})_{t\in[0,T]}$. The state $(X_{i,t})_{t\in[0,T]}$ evolves according to the SDE:
$$
dX_{i,t} = \alpha_{i,t}\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, \qquad X_{i,t_0} = x_{i,0}. \tag{1.14}
$$

Recall that the $d$-dimensional Brownian motions $((B^i_t)_{t\in[0,T]})_{i=1,\dots,N}$ and $(W_t)_{t\in[0,T]}$ are independent, $(B^i_t)_{t\in[0,T]}$ corresponding to the individual noise (or idiosyncratic noise) of player $i$ and $(W_t)_{t\in[0,T]}$ being the common noise, which affects all the players. Controls $((\alpha_{i,t})_{t\in[0,T]})_{i=1,\dots,N}$ are required to be progressively measurable with respect to the filtration generated by all the noises. Given an initial condition $\boldsymbol{x}_0 = (x_{1,0},\dots,x_{N,0})\in(\mathbb{R}^d)^N$ for the whole system at time $t_0$, each player aims at minimizing the cost functional:
$$
J^N_i\big(t_0, \boldsymbol{x}_0, (\alpha_{j,\cdot})_{j=1,\dots,N}\big) = \mathbb{E}\bigg[\int_{t_0}^T \Big(L(X_{i,s},\alpha_{i,s}) + F^{N,i}(\boldsymbol{X}_s)\Big)\,ds + G^{N,i}(\boldsymbol{X}_T)\bigg],
$$

where $\boldsymbol{X}_t = (X_{1,t},\dots,X_{N,t})$ and where $L:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$, $F^{N,i}:\mathbb{R}^{Nd}\to\mathbb{R}$, and $G^{N,i}:\mathbb{R}^{Nd}\to\mathbb{R}$ are given Borel maps. For each player $i$, to ensure that the other players are indistinguishable, we shall suppose, as in (1.4), that $F^{N,i}$ and $G^{N,i}$ are of the form
$$
F^{N,i}(\boldsymbol{x}) = F\big(x_i, m^{N,i}_{\boldsymbol{x}}\big) \qquad \text{and} \qquad G^{N,i}(\boldsymbol{x}) = G\big(x_i, m^{N,i}_{\boldsymbol{x}}\big).
$$

In the above expressions, $F, G:\mathbb{R}^d\times\mathcal{P}(\mathbb{R}^d)\to\mathbb{R}$, where $\mathcal{P}(\mathbb{R}^d)$ is the set of Borel probability measures on $\mathbb{R}^d$. The Hamiltonian of the problem is related to $L$ by the formula
$$
\forall (x,p)\in\mathbb{R}^d\times\mathbb{R}^d, \qquad H(x,p) = \sup_{\alpha\in\mathbb{R}^d}\big\{-\alpha\cdot p - L(x,\alpha)\big\}.
$$

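As a quick illustration—this worked example is ours and does not appear in the original text—consider the quadratic running cost $L(x,\alpha)=\frac12|\alpha|^2$. The supremum defining $H$ is then attained at $\alpha=-p$, so that

```latex
H(x,p) \;=\; \sup_{\alpha\in\mathbb{R}^d}\Big\{-\alpha\cdot p-\tfrac12|\alpha|^2\Big\} \;=\; \tfrac12|p|^2,
\qquad
D_pH(x,p) \;=\; p.
```

In this case the optimal feedback $-D_pH(x_i,D_{x_i}v^{N,i})$ appearing below is simply $-D_{x_i}v^{N,i}$: each player descends the gradient of her/his own value function.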
Let now $(v^{N,i})_{i=1,\dots,N}$ be the solution to (1.2). By Itô's formula, it is easy to check that $(v^{N,i})_{i=1,\dots,N}$ corresponds to an optimal solution of the problem in the sense of Nash, i.e., a Nash equilibrium of the game. Namely, the feedback strategies
$$
\Big(\alpha^*_i(t,\boldsymbol{x}) := -D_p H\big(x_i, D_{x_i} v^{N,i}(t,\boldsymbol{x})\big)\Big)_{i=1,\dots,N} \tag{1.15}
$$

provide a feedback Nash equilibrium for the game:
$$
v^{N,i}\big(t_0,\boldsymbol{x}_0\big) = J^N_i\big(t_0,\boldsymbol{x}_0,(\alpha^*_{j,\cdot})_{j=1,\dots,N}\big) \leq J^N_i\big(t_0,\boldsymbol{x}_0,\alpha_{i,\cdot},(\hat\alpha^*_{j,\cdot})_{j\neq i}\big)
$$
for any $i\in\{1,\dots,N\}$ and any control $\alpha_{i,\cdot}$, progressively measurable with respect to the filtration generated by $((B^j_t)_{j=1,\dots,N})_{t\in[0,T]}$ and $(W_t)_{t\in[0,T]}$. In the left-hand side, $\alpha^*_{j,\cdot}$ is an abuse of notation for the process $(\alpha^*_j(t,X_{j,t}))_{t\in[0,T]}$, where $(X_{1,t},\dots,X_{N,t})_{t\in[0,T]}$ solves the system of SDEs (1.14) when $\alpha_{j,t}$ is precisely given under the implicit form $\alpha_{j,t} = \alpha^*_j(t,X_{j,t})$. Similarly, in the right-hand side, $\hat\alpha^*_j$, for $j\neq i$, denotes $(\alpha^*_j(t,X_{j,t}))_{t\in[0,T]}$, where $(X_{1,t},\dots,X_{N,t})_{t\in[0,T]}$ now solves the system of SDEs (1.14) for the given $\alpha_{i,\cdot}$, the other $(\alpha_{j,t})_{j\neq i}$'s being given under the implicit form $\alpha_{j,t} = \alpha^*_j(t,X_{j,t})$. In particular, system (1.3), in which all the players play the optimal feedback (1.15), describes the dynamics of the optimal trajectories.

1.2.2  Derivatives in the Space of Measures

To describe the limit of the maps $(v^{N,i})$, let us introduce—in a completely informal manner—a notion of derivative in the space of measures $\mathcal{P}(\mathbb{R}^d)$. A rigorous description of the notion of derivative used in this book is given in Section 2.2. In the following discussion, we argue as if all the measures had a density. Let $U:\mathcal{P}(\mathbb{R}^d)\to\mathbb{R}$. Restricting the function $U$ to the elements $m$ of $\mathcal{P}(\mathbb{R}^d)$ that have a density in $L^2(\mathbb{R}^d)$ and assuming that $U$ is defined in a neighborhood $\mathcal{O}\subset L^2(\mathbb{R}^d)$ of $\mathcal{P}(\mathbb{R}^d)\cap L^2(\mathbb{R}^d)$, we can use the Hilbert structure on $L^2(\mathbb{R}^d)$. We denote by $\delta U/\delta m$ the gradient of $U$ in $L^2(\mathbb{R}^d)$, namely
$$
\frac{\delta U}{\delta m}(p)(q) = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big(U(p+\varepsilon q) - U(p)\Big), \qquad p\in\mathcal{O},\ q\in L^2(\mathbb{R}^d).
$$

Of course, we can identify $[\delta U/\delta m](p)$ with an element of $L^2(\mathbb{R}^d)$, which we denote by $\mathbb{R}^d\ni y\mapsto[\delta U/\delta m](p,y)\in\mathbb{R}$. Then, the duality product $[\delta U/\delta m](p)(q)$ reads as the inner product $\langle[\delta U/\delta m](p,\cdot), q(\cdot)\rangle_{L^2(\mathbb{R}^d)}$. Similarly, we denote by $[\delta^2U/\delta m^2](p)$ the second-order derivative of $U$ at $p\in L^2(\mathbb{R}^d)$ (which can be identified with a symmetric bilinear form on $L^2(\mathbb{R}^d)$ and hence with a symmetric function $\mathbb{R}^d\times\mathbb{R}^d\ni(y,y')\mapsto[\delta^2U/\delta m^2](p,y,y')\in\mathbb{R}$ in $L^2(\mathbb{R}^d\times\mathbb{R}^d)$):
$$
\frac{\delta^2 U}{\delta m^2}(p)(q,q') = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big(\frac{\delta U}{\delta m}(p+\varepsilon q)(q') - \frac{\delta U}{\delta m}(p)(q')\Big), \qquad p\in\mathcal{O},\ q,q'\in L^2(\mathbb{R}^d).
$$

We then set, when possible,
$$
D_m U(m,y) = D_y\frac{\delta U}{\delta m}(m,y), \qquad D^2_{mm}U(m,y,y') = D^2_{y,y'}\frac{\delta^2 U}{\delta m^2}(m,y,y'). \tag{1.16}
$$

To explain the meaning of $D_mU$, let us compute the action of $U$ onto the push-forward of a measure $m$ by the flow of an ordinary differential equation driven by a smooth vector field. For a given smooth vector field $B:\mathbb{R}^d\to\mathbb{R}^d$ and an absolutely continuous probability measure $m\in\mathcal{P}(\mathbb{R}^d)$ with a smooth density, let $(m(t))_{t\geq0} = (\mathbb{R}^d\ni x\mapsto m(t,x))_{t\geq0}$ be the solution to
$$
\frac{\partial m}{\partial t} + \operatorname{div}(Bm) = 0, \qquad m_0 = m.
$$
Provided that $[\partial m/\partial t](t,\cdot)$ lives in $L^2(\mathbb{R}^d)$, this expression directly gives
$$
\frac{d}{dh}U(m(h))\Big|_{h=0} = \Big\langle\frac{\delta U}{\delta m}, -\operatorname{div}(Bm)\Big\rangle_{L^2(\mathbb{R}^d)} = \int_{\mathbb{R}^d} D_mU(m,y)\cdot B(y)\,dm(y), \tag{1.17}
$$
where we used an integration by parts in the last equality.

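As a sanity check of (1.17)—an illustration of ours, not part of the original text—take the linear map $U(m)=\int_{\mathbb{R}^d}\varphi\,dm$ for a smooth function $\varphi$, so that $\delta U/\delta m=\varphi$ (up to an additive constant) and $D_mU(m,y)=D\varphi(y)$. Then

```latex
\frac{d}{dh}U(m(h))\Big|_{h=0}
= \Big\langle \varphi,\,-\mathrm{div}(Bm)\Big\rangle_{L^2(\mathbb{R}^d)}
= \int_{\mathbb{R}^d} D\varphi(y)\cdot B(y)\,dm(y),
```

which is nothing but the weak formulation of the continuity equation tested against $\varphi$.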
where we used an integration by parts in the last equality. Another way to understand these derivatives is to project the map U to the finite dimensional space (Rd )N via the empirical measure: if x = (x1 , . . . , xN ) ∈

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 1

22

N N N (Rd )N , let mN x := (1/N ) i=1 δxi and set u (x) = U (mx ). Then one checks the following relationships (see Proposition 6.1.1): for any j ∈ {1, . . . , N }, Dxj uN (x) = Dx2j ,xj uN (x) =

1 Dm U (mN x , xj ), N

1 1 2 Dy [Dm U ] (mN D U (mN x , xj ) + x , xj , xj ) N N 2 mm

(1.18)

(1.19)

while, if j = k, Dx2j ,xk uN (x) =

1.2.3

1 2 D U (mN x , xj , xk ). N 2 mm

(1.20)

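The relation (1.18) can be checked numerically. The sketch below is ours and not part of the original text; it takes $U(m)=(\int\varphi\,dm)^2$ with $\varphi=\sin$, for which $D_mU(m,y)=2(\int\varphi\,dm)\,\varphi'(y)$, and compares a central finite difference of $u^N=U(m^N_{\boldsymbol{x}})$ with the right-hand side of (1.18):

```python
import numpy as np

phi, dphi = np.sin, np.cos

def U(x):
    # U(m_x^N) for U(m) = (int phi dm)^2 and the empirical measure of x
    return np.mean(phi(x)) ** 2

def Dm_U(x, y):
    # D_m U(m_x^N, y) = 2 (int phi dm_x^N) phi'(y)
    return 2.0 * np.mean(phi(x)) * dphi(y)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=50)
N, j, h = len(x), 3, 1e-6

xp, xm = x.copy(), x.copy()
xp[j] += h
xm[j] -= h
fd = (U(xp) - U(xm)) / (2.0 * h)  # D_{x_j} u^N by central differences

assert abs(fd - Dm_U(x, x[j]) / N) < 1e-7  # relation (1.18)
```

Relations (1.19) and (1.20) can be checked in the same way with second-order differences.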
1.2.3  Formal Asymptotic of the $(v^{N,i})$

Provided that (1.2) has a unique solution, each $v^{N,i}$, for $i=1,\dots,N$, is symmetric with respect to permutations on $\{1,\dots,N\}\setminus\{i\}$ and, for $i\neq j$, the role played by $x_i$ in $v^{N,i}$ is the same as the role played by $x_j$ in $v^{N,j}$ (see Section 6.2). Therefore, it makes sense to expect, in the limit $N\to+\infty$,
$$
v^{N,i}(t,\boldsymbol{x}) \simeq U\big(t, x_i, m^{N,i}_{\boldsymbol{x}}\big),
$$
where $U:[0,T]\times\mathbb{R}^d\times\mathcal{P}(\mathbb{R}^d)\to\mathbb{R}$. Starting from this ansatz, our aim is now to provide heuristic arguments explaining why $U$ should satisfy (1.10). The sense in which the $(v^{N,i})_{i=1,\dots,N}$ actually converge to $U$ is stated in Theorem 2.4.8 and the proof given in Chapter 6. The informal idea is to assume that $v^{N,i}$ is already of the form $U(t,x_i,m^{N,i}_{\boldsymbol{x}})$ and to plug this expression into the equation of the Nash equilibrium (1.2): the time derivative and the derivative with respect to $x_i$ are understood in the usual sense, while the derivatives with respect to the other variables are computed by using the relations in the previous section. The terms $\partial_t v^{N,i}$ and $H(x_i, D_{x_i}v^{N,i})$ easily become $\partial U/\partial t$ and $H(x, D_xU)$. We omit for a while the second-order terms and concentrate on the expression (see the second line in (1.2)):
$$
\sum_{j\neq i} D_pH\big(x_j, D_{x_j}v^{N,j}\big)\cdot D_{x_j}v^{N,i}.
$$
Note that $D_{x_j}v^{N,j}$ is just like $D_xU(t,x_j,m^{N,j}_{\boldsymbol{x}})$. In view of (1.18),
$$
D_{x_j}v^{N,i} \simeq \frac{1}{N-1}D_mU\big(t,x_i,m^{N,i}_{\boldsymbol{x}},x_j\big),
$$


and the sum over $j$ is like an integration with respect to $m^{N,i}_{\boldsymbol{x}}$. So we find, ignoring the difference between $m^{N,i}_{\boldsymbol{x}}$ and $m^{N,j}_{\boldsymbol{x}}$,
$$
\sum_{j\neq i} D_pH\big(x_j,D_{x_j}v^{N,j}\big)\cdot D_{x_j}v^{N,i} \simeq \int_{\mathbb{R}^d} D_pH\big(y, D_xU(t,y,m^{N,i}_{\boldsymbol{x}})\big)\cdot D_mU\big(t,x_i,m^{N,i}_{\boldsymbol{x}},y\big)\,dm^{N,i}_{\boldsymbol{x}}(y).
$$

We now study the term $\sum_{j=1}^N \Delta_{x_j}v^{N,i}$ (see the first line in (1.2)). As $\Delta_{x_i}v^{N,i} \simeq \Delta_xU$, we need to analyze the quantity $\sum_{j\neq i}\Delta_{x_j}v^{N,i}$. In view of (1.19), we expect
$$
\begin{aligned}
\sum_{j\neq i}\Delta_{x_j}v^{N,i} &\simeq \frac{1}{N-1}\sum_{j\neq i}\operatorname{div}_y[D_mU]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},x_j\big) + \frac{1}{(N-1)^2}\sum_{j\neq i}\operatorname{Tr}\big[D^2_{mm}U\big]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},x_j,x_j\big)\\
&= \int_{\mathbb{R}^d}\operatorname{div}_y[D_mU]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},y\big)\,dm^{N,i}_{\boldsymbol{x}}(y) + \frac{1}{N-1}\int_{\mathbb{R}^d}\operatorname{Tr}\big[D^2_{mm}U\big]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},y,y\big)\,dm^{N,i}_{\boldsymbol{x}}(y),
\end{aligned}
$$

where we can drop the last term, as it is of order $1/N$. Let us finally discuss the limit of the term $\sum_{k,l=1}^N\operatorname{Tr}\big(D^2_{x_k,x_l}v^{N,i}\big)$ (see the first line in (1.2)), which we rewrite as
$$
\Delta_{x_i}v^{N,i} + 2\sum_{k\neq i}\operatorname{Tr}\big[D_{x_i}D_{x_k}v^{N,i}\big] + \sum_{k,l\neq i}\operatorname{Tr}\big[D^2_{x_k,x_l}v^{N,i}\big]. \tag{1.21}
$$

The first term gives $\Delta_xU$. Using (1.18), the second one becomes
$$
2\sum_{k\neq i}\operatorname{Tr}\big[D_{x_i}D_{x_k}v^{N,i}\big] \simeq \frac{2}{N-1}\sum_{k\neq i}\operatorname{Tr}\big[D_xD_mU\big]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},x_k\big) = 2\int_{\mathbb{R}^d}\operatorname{div}_x[D_mU]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},y\big)\,dm^{N,i}_{\boldsymbol{x}}(y).
$$


As for the last term in (1.21), we have by (1.20):
$$
\sum_{k,l\neq i}\operatorname{Tr}\big[D^2_{x_k,x_l}v^{N,i}\big] \simeq \frac{1}{(N-1)^2}\sum_{k,l\neq i}\operatorname{Tr}\big[D^2_{mm}U\big]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},x_k,x_l\big) = \int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\operatorname{Tr}\big[D^2_{mm}U\big]\big(t,x_i,m^{N,i}_{\boldsymbol{x}},y,y'\big)\,dm^{N,i}_{\boldsymbol{x}}(y)\,dm^{N,i}_{\boldsymbol{x}}(y').
$$

Collecting the above relations, we expect that the Nash system
$$
\begin{cases}
\displaystyle -\frac{\partial v^{N,i}}{\partial t} - \sum_{j=1}^N\Delta_{x_j}v^{N,i} - \beta\sum_{k,l=1}^N\operatorname{Tr}\big[D^2_{x_k,x_l}v^{N,i}\big] + H\big(x_i,D_{x_i}v^{N,i}\big) + \sum_{j\neq i}D_pH\big(x_j,D_{x_j}v^{N,j}\big)\cdot D_{x_j}v^{N,i} = F\big(x_i,m^{N,i}_{\boldsymbol{x}}\big),\\
v^{N,i}(T,\boldsymbol{x}) = G\big(x_i,m^{N,i}_{\boldsymbol{x}}\big),
\end{cases}
$$
has for limit
$$
\begin{cases}
\displaystyle -\frac{\partial U}{\partial t} - \Delta_xU - \int_{\mathbb{R}^d}\operatorname{div}_y[D_mU]\,dm(y) + H(x,D_xU)\\
\displaystyle\quad -\beta\bigg(\Delta_xU + 2\int_{\mathbb{R}^d}\operatorname{div}_x[D_mU]\,dm(y) + \int_{\mathbb{R}^d}\operatorname{div}_y[D_mU]\,dm(y) + \int_{\mathbb{R}^{2d}}\operatorname{Tr}\big[D^2_{mm}U\big]\,dm^{\otimes2}(y,y')\bigg)\\
\displaystyle\quad + \int_{\mathbb{R}^d}D_mU\cdot D_pH\big(y,D_xU(\cdot,y,\cdot)\big)\,dm(y) = F(x,m),\\
U(T,x,m) = G(x,m).
\end{cases}
$$
This is the master equation. Note that there are only two genuine approximations in the foregoing computation. One is where we dropped the term of order $1/N$ in the computation of the sum $\sum_{j\neq i}\Delta_{x_j}v^{N,i}$. The other one was at the very beginning, when we replaced $D_xU(t,x_j,m^{N,j}_{\boldsymbol{x}})$ by $D_xU(t,x_j,m^{N,i}_{\boldsymbol{x}})$. This is again of order $1/N$.

1.2.4  The Master Equation and the MFG System

We complete this informal discussion by explaining the relationship between the master equation and the MFG system. This relation plays a central role in the text. It is indeed the cornerstone for constructing a solution to the master equation via a method of (infinite dimensional) characteristics. We proceed as follows. Assuming that the value function of the MFG system is regular—while it is part of the challenge to prove that it is indeed smooth—we show that it solves the master equation.


We start with the first-order case, i.e., $\beta=0$, as it is substantially easier. For any $(t_0,m_{(0)})\in[0,T]\times\mathcal{P}(\mathbb{R}^d)$, let us define the value function $U(t_0,\cdot,m_{(0)})$ as
$$
U\big(t_0,x,m_{(0)}\big) := u(t_0,x) \qquad \forall x\in\mathbb{R}^d,
$$
where $(u,m)$ is a solution of the MFG system (1.7) with the initial condition $m(t_0)=m_{(0)}$ at time $t_0$. We claim that $U$ is a solution of the master equation (1.10) with $\beta=0$. As indicated, we check the claim assuming that $U$ is smooth, although the main difficulty comes from the fact that this has to be proved. We note that, by its very definition, $U$ must satisfy
$$
U(t,x,m(t)) = u(t,x) \qquad \forall(t,x)\in[t_0,T]\times\mathbb{R}^d.
$$

Using the equation satisfied by $m$ (and provided that $\partial_t m$ can be regarded as an $L^2(\mathbb{R}^d)$-valued function), the time derivative of the left-hand side at $t_0$ is given by
$$
\begin{aligned}
\partial_t u(t_0,x) &= \partial_t U + \Big\langle\frac{\delta U}{\delta m},\partial_t m\Big\rangle_{L^2(\mathbb{R}^d)}\\
&= \partial_t U + \Big\langle\frac{\delta U}{\delta m},\Delta m + \operatorname{div}\big(mD_pH(\cdot,D_xU)\big)\Big\rangle_{L^2(\mathbb{R}^d)}\\
&= \partial_t U + \int_{\mathbb{R}^d}\Big(\operatorname{div}_y[D_mU] - D_mU\cdot D_pH\big(y,D_xU(\cdot,y,\cdot)\big)\Big)\,dm_{(0)}(y),
\end{aligned} \tag{1.22}
$$

where the function $U$ and its derivatives are evaluated at time $t_0$ and at the measure argument $m_{(0)}$; with the exception of the last term in the right-hand side, they are evaluated at the point $x$ in space; the auxiliary variable in $D_mU$ is always equal to $y$. Recalling the equation satisfied by $u$, we also have
$$
\partial_t u(t_0,x) = -\Delta u(t_0,x) + H\big(x,D_xu(t_0,x)\big) - F\big(x,m_{(0)}\big) = -\Delta_xU + H(x,D_xU) - F\big(x,m_{(0)}\big).
$$
This shows that
$$
\partial_t U + \int_{\mathbb{R}^d}\Big(\operatorname{div}_y[D_mU] - D_mU\cdot D_pH\big(y,D_xU(\cdot,y,\cdot)\big)\Big)\,dm_{(0)}(y) = -\Delta_xU + H(x,D_xU) - F\big(x,m_{(0)}\big).
$$
Rearranging the terms, we deduce that $U$ satisfies the master equation (1.10) with $\beta=0$ at $(t_0,\cdot,m_{(0)})$. For the second-order master equation ($\beta>0$) the same principle applies except that, now, the MFG system becomes stochastic. Let $(t_0,m_{(0)})\in[0,T]\times$


$\mathcal{P}(\mathbb{R}^d)$ and $(u_t,m_t,v_t)$ be a solution of the MFG system with common noise (1.11). We set as before
$$
U\big(t_0,x,m_{(0)}\big) := u_{t_0}(x) \qquad \forall x\in\mathbb{R}^d,
$$
and notice that
$$
U(t,x,m_t) = u_t(x) \qquad \forall(t,x)\in[t_0,T]\times\mathbb{R}^d.
$$
Assuming that $U$ is smooth enough, we have, by Itô's formula for Banach-valued processes and by the equation satisfied by $m$:
$$
\begin{aligned}
d_t u_t(x) = \bigg[&\partial_t U + \Big\langle\frac{\delta U}{\delta m},(1+\beta)\Delta m_t + \operatorname{div}\big(m_tD_pH(\cdot,D_xU)\big)\Big\rangle_{L^2(\mathbb{R}^d)}\\
&+ \beta\sum_{i=1}^d\Big\langle\frac{\delta^2U}{\delta m^2}\,D_{x_i}m_t,\, D_{x_i}m_t\Big\rangle_{L^2(\mathbb{R}^d)}\bigg]\,dt - \sqrt{2\beta}\sum_{i=1}^d\Big\langle\frac{\delta U}{\delta m},D_{x_i}m_t\Big\rangle_{L^2(\mathbb{R}^d)}\,dW^i_t,
\end{aligned} \tag{1.23}
$$

where, as before, the function $U$ and its derivatives are evaluated at time $t$ and at the measure argument $m_t$; with the exception of $D_xU$ in the right-hand side, they are evaluated at the point $x$ in space. In comparison with the first-order formula (1.22), equation (1.23) involves two additional terms: the stochastic term on the third line derives directly from the Brownian part in the forward equation of (1.11), while the second-order term on the second line is reminiscent of the second-order term that appears in the standard Itô calculus. We provide a rigorous proof of (1.23) in Section 5. Using (1.16), we obtain

$$
\begin{aligned}
d_t u_t(x) = \bigg[&\partial_t U + \int_{\mathbb{R}^d}\Big((1+\beta)\operatorname{div}_y[D_mU] - D_mU\cdot D_pH\big(\cdot,D_xU(\cdot,y,\cdot)\big)\Big)\,dm_t(y)\\
&+ \beta\int_{\mathbb{R}^d\times\mathbb{R}^d}\operatorname{Tr}\big[D^2_{mm}U\big]\,dm_t^{\otimes2}(y,y')\bigg]\,dt + \bigg(\int_{\mathbb{R}^d}D_mU\,dm_t(y)\bigg)\cdot\sqrt{2\beta}\,dW_t.
\end{aligned} \tag{1.24}
$$


On the other hand, by the equation satisfied by $u$, we have
$$
\begin{aligned}
d_t u_t(x) &= \big(-(1+\beta)\Delta u_t + H(x,Du_t) - F(x,m_t) - \sqrt{2\beta}\,\operatorname{div}(v_t)\big)\,dt + v_t\cdot dW_t\\
&= \big(-(1+\beta)\Delta_xU + H(x,D_xU) - F(x,m_t) - \sqrt{2\beta}\,\operatorname{div}(v_t)\big)\,dt + v_t\cdot dW_t,
\end{aligned} \tag{1.25}
$$
where, on the right-hand side, $u_t$ and $v_t$ and their derivatives are evaluated at the point $x$. Identifying the absolutely continuous part and the martingale part, we find
$$
\partial_t U + \int_{\mathbb{R}^d}\Big((1+\beta)\operatorname{div}_y[D_mU] - D_mU\cdot D_pH\big(\cdot,D_xU(\cdot,y,\cdot)\big)\Big)\,dm_t(y) + \beta\int_{\mathbb{R}^d\times\mathbb{R}^d}\operatorname{Tr}\big[D^2_{mm}U\big]\,dm_t^{\otimes2}(y,y') = -(1+\beta)\Delta_xU + H(x,D_xU) - F(x,m_t) - \sqrt{2\beta}\,\operatorname{div}(v_t) \tag{1.26}
$$
and
$$
\sqrt{2\beta}\int_{\mathbb{R}^d}D_mU\,dm_t(y) = v_t.
$$

Inserting the latter identity in the former one, we derive the master equation. Note that, compared with the first-order setting (i.e., $\beta=0$), one faces here the additional issue that, so far, there has been no solvability result for (1.9) and that the regularity of the map $U$—which is defined through (1.9)—is much more involved to investigate than in the first-order case.


Chapter Two

Presentation of the Main Results

In this chapter we collect our main results. We first state the notation used in this book, specify the notion of derivatives in the space of measures, and describe the assumptions on the data.

2.1  NOTATIONS

Throughout the book, $\mathbb{R}^d$ denotes the $d$-dimensional Euclidean space, with norm $|\cdot|$, the scalar product between two vectors $a,b\in\mathbb{R}^d$ being written $a\cdot b$. We work in the $d$-dimensional torus (i.e., with periodic boundary conditions), which we denote $\mathbb{T}^d := \mathbb{R}^d/\mathbb{Z}^d$. When $N$ is a (large) integer, we use bold symbols for elements of $(\mathbb{T}^d)^N$: for instance, $\boldsymbol{x} = (x_1,\dots,x_N)\in(\mathbb{T}^d)^N$. The set $\mathcal{P}(\mathbb{T}^d)$ of Borel probability measures on $\mathbb{T}^d$ is endowed with the Monge–Kantorovich distance
$$
\mathbf{d}_1(m,m') = \sup_{\phi}\int_{\mathbb{T}^d}\phi(y)\,d(m-m')(y),
$$
where the supremum is taken over all Lipschitz continuous maps $\phi:\mathbb{T}^d\to\mathbb{R}$ with a Lipschitz constant bounded by 1; see also Section A.1 in the Appendix. The notion of convergence associated with this distance is the weak convergence of measures.¹ If $m$ belongs to $\mathcal{P}(\mathbb{T}^d)$ and $\phi:\mathbb{T}^d\to\mathbb{T}^d$ is a Borel map, then $\phi\sharp m$ denotes the push-forward of $m$ by $\phi$, i.e., the Borel probability measure such that $[\phi\sharp m](A) = m(\phi^{-1}(A))$ for any Borel set $A\subset\mathbb{T}^d$. When the probability measure $m$ is absolutely continuous with respect to the Lebesgue measure, we use the same letter $m$ to denote its density; namely, we write $m:\mathbb{T}^d\ni x\mapsto m(x)\in\mathbb{R}_+$. Besides, we often consider flows of time-dependent measures of the form $(m(t))_{t\in[0,T]}$, with $m(t)\in\mathcal{P}(\mathbb{T}^d)$ for any $t\in[0,T]$. When, at each time $t\in[0,T]$, $m(t)$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{T}^d$, we identify $m(t)$ with its density and sometimes denote by $m:[0,T]\times\mathbb{T}^d\ni(t,x)\mapsto m(t,x)\in\mathbb{R}_+$ the collection of the densities. In all

1 For a presentation of the set P(Td ) as a metric space, we refer the reader to the monographs [7, 89, 95], for instance.
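As a concrete illustration—this numerical sketch is ours and, for simplicity, works with measures on the real line rather than on $\mathbb{T}^d$—the distance $\mathbf{d}_1$ between two discrete measures can be computed from the classical one-dimensional identity $\mathbf{d}_1(m,m') = \int_{\mathbb{R}}|F_m - F_{m'}|$, where $F_m$ denotes the cumulative distribution function:

```python
import numpy as np

def d1(xs, ws, ys, vs):
    """Monge-Kantorovich distance between the discrete measures
    sum_i ws[i] delta_{xs[i]} and sum_j vs[j] delta_{ys[j]} on the real
    line, via the 1-d identity d1 = integral of |F_m - F_{m'}|."""
    F = lambda atoms, weights, t: float(np.sum(weights[atoms <= t]))
    pts = np.sort(np.concatenate([xs, ys]))
    return sum(abs(F(xs, ws, a) - F(ys, vs, a)) * (b - a)
               for a, b in zip(pts[:-1], pts[1:]))

# d1( (delta_0 + delta_1)/2 , delta_{1/2} ) = 1/2
m_atoms, m_w = np.array([0.0, 1.0]), np.array([0.5, 0.5])
n_atoms, n_w = np.array([0.5]), np.array([1.0])
assert abs(d1(m_atoms, m_w, n_atoms, n_w) - 0.5) < 1e-12
```

On the torus the Euclidean distance in the Lipschitz constraint must be replaced by the torus distance, so this CDF shortcut no longer applies verbatim.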


the examples considered in the text that follows, such an $m$ has a time–space continuous version and, implicitly, we identify $m$ with it.
If $\phi:\mathbb{T}^d\to\mathbb{R}$ is sufficiently smooth and $\ell=(\ell_1,\dots,\ell_d)\in\mathbb{N}^d$, then $D^\ell\phi$ stands for the derivative $\partial^{\ell_1}_{x_1}\dots\partial^{\ell_d}_{x_d}\phi$. The order of derivation $\ell_1+\dots+\ell_d$ is denoted by $|\ell|$. Given $e\in\mathbb{R}^d$, we also denote by $\partial_e\phi$ the directional derivative of $\phi$ in the direction $e$. For $n\in\mathbb{N}$ and $\alpha\in(0,1)$, $C^{n+\alpha}$ is the set of maps $\phi$ for which $D^\ell\phi$ is defined and $\alpha$-Hölder continuous for any $\ell\in\mathbb{N}^d$ with $|\ell|\leq n$. We set
$$
\|\phi\|_{n+\alpha} := \sum_{|\ell|\leq n}\sup_{x\in\mathbb{T}^d}|D^\ell\phi(x)| + \sum_{|\ell|=n}\sup_{x\neq x'}\frac{|D^\ell\phi(x)-D^\ell\phi(x')|}{|x-x'|^\alpha}.
$$

In the second term, $x$ and $x'$ are implicitly taken in $\mathbb{R}^d$ and $\phi$ is seen as a periodic function. Equivalently, $x$ and $x'$ may be taken in the torus, in which case the Euclidean distance between $x$ and $x'$ must be replaced by the torus distance, namely $\inf_{k\in\mathbb{Z}^d}|x-x'+k|$. The dual space of $C^{n+\alpha}$ is denoted by $(C^{n+\alpha})'$, with norm
$$
\forall\rho\in(C^{n+\alpha})', \qquad \|\rho\|_{-(n+\alpha)} := \sup_{\|\phi\|_{n+\alpha}\leq1}\langle\rho,\phi\rangle_{(C^{n+\alpha})',C^{n+\alpha}}.
$$

To simplify notation, we often abbreviate the expression $\langle\rho,\phi\rangle_{(C^{n+\alpha})',C^{n+\alpha}}$ as $\langle\rho,\phi\rangle_{n+\alpha}$. If a smooth map $\psi$ depends on two space variables, e.g., $\psi=\psi(x,y)$, and $m,n\in\mathbb{N}$ are the orders of differentiation of $\psi$ with respect to $x$ and $y$ respectively, we set
$$
\|\psi\|_{(m,n)} := \sum_{|\ell|\leq m,\,|\ell'|\leq n}\big\|D^{(\ell,\ell')}\psi\big\|_\infty,
$$
and, if moreover the derivatives are Hölder continuous,
$$
\|\psi\|_{(m+\alpha,n+\alpha)} := \|\psi\|_{(m,n)} + \sum_{|\ell|=m,\,|\ell'|=n}\sup_{(x,y)\neq(x',y')}\frac{|D^{(\ell,\ell')}\psi(x,y)-D^{(\ell,\ell')}\psi(x',y')|}{|x-x'|^\alpha+|y-y'|^\alpha},
$$

with the same convention as before for the distance in the second term in the right-hand side. The notation is generalized in an obvious way to mappings depending on three or more variables. If now the (sufficiently smooth) map $\phi$ depends on time and space, i.e., $\phi=\phi(t,x)$, we say that $\phi\in C^{l/2,l}$ (where $l=n+\alpha$, $n\in\mathbb{N}$, $\alpha\in(0,1)$) if $D^\ell_xD^j_t\phi$ exists for any $\ell\in\mathbb{N}^d$ and $j\in\mathbb{N}$ with $|\ell|+2j\leq n$ and is $\alpha$-Hölder in $x$ and $\alpha/2$-Hölder in $t$. Writing $D^\ell$ for $D^\ell_x$, we set
$$
\|\phi\|_{n/2+\alpha/2,\,n+\alpha} := \sum_{|\ell|+2j\leq n}\big\|D^\ell D^j_t\phi\big\|_\infty + \sum_{|\ell|+2j=n}\Big(\big\langle D^\ell D^j_t\phi\big\rangle_{x,\alpha} + \big\langle D^\ell D^j_t\phi\big\rangle_{t,\alpha/2}\Big)
$$


with
$$
\big\langle D^\ell D^j_t\phi\big\rangle_{x,\alpha} := \sup_{t,\,x\neq x'}\frac{|D^\ell D^j_t\phi(t,x)-D^\ell D^j_t\phi(t,x')|}{|x-x'|^\alpha}, \qquad \big\langle D^\ell D^j_t\phi\big\rangle_{t,\alpha/2} := \sup_{t\neq t',\,x}\frac{|D^\ell D^j_t\phi(t,x)-D^\ell D^j_t\phi(t',x)|}{|t-t'|^{\alpha/2}}.
$$

We also say that $\phi$ belongs to $C^{1,2}$ if $\partial_t\phi$ and $D^2\phi$ exist and are continuous in the time and space variables. If $X,Y$ are random variables on a probability space $(\Omega,\mathcal{A},\mathbb{P})$, $\mathcal{L}(X)$ is the law of $X$ and $\mathcal{L}(Y|X)$ is the conditional law of $Y$ given $X$. Recall that, whenever $X$ and $Y$ take values in Polish spaces (say $S_X$ and $S_Y$ respectively), we can always find a regular version of the conditional law $\mathcal{L}(Y|X)$, that is, a mapping $q:S_X\times\mathcal{B}(S_Y)\to[0,1]$ such that:
• for each $x\in S_X$, $q(x,\cdot)$ is a probability measure on $S_Y$ equipped with its Borel $\sigma$-field $\mathcal{B}(S_Y)$;
• for any $A\in\mathcal{B}(S_Y)$, the mapping $S_X\ni x\mapsto q(x,A)$ is Borel measurable;
• $q(X,\cdot)$ is a version of the conditional law of $Y$ given $X$, in the sense that
$$
\mathbb{E}\big[f(X,Y)\big] = \int_{S_X}\int_{S_Y}f(x,y)\,q(x,dy)\,d\big[\mathcal{L}(X)\big](x) = \mathbb{E}\Big[\int_{S_Y}f(X,y)\,q(X,dy)\Big],
$$
for any bounded Borel measurable mapping $f:S_X\times S_Y\to\mathbb{R}$.

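In the discrete setting, a regular conditional law is just row-normalization of the joint law; the following small sketch (ours, not from the original text) verifies the disintegration identity above for a toy pair $(X,Y)$ with finitely many values:

```python
import numpy as np

# Joint law of (X, Y) on {0,1} x {0,1,2}, stored as a pmf matrix p[x, y].
p = np.array([[0.10, 0.20, 0.10],
              [0.15, 0.05, 0.40]])
law_X = p.sum(axis=1)             # marginal law of X
q = p / law_X[:, None]            # q(x, {y}): conditional law of Y given X = x

f = lambda x, y: (x + 1.0) * (y + 2.0)   # any bounded test function
lhs = sum(f(x, y) * p[x, y] for x in range(2) for y in range(3))
rhs = sum(law_X[x] * sum(f(x, y) * q[x, y] for y in range(3))
          for x in range(2))
assert abs(lhs - rhs) < 1e-12     # E[f(X,Y)] = int int f(x,y) q(x,dy) dL(X)(x)
```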
2.2  DERIVATIVES

One of the striking features of the master equation is that it involves derivatives of the unknown with respect to the measure. In this book, we use two notions of derivative. The first one, denoted by $\frac{\delta U}{\delta m}$, is, roughly speaking, the $L^2$ derivative when one looks at the restriction of $\mathcal{P}(\mathbb{T}^d)$ to densities in $L^2(\mathbb{T}^d)$. It is widely used in linearization procedures. The second one, denoted by $D_mU$, is more intrinsic and is related to the so-called Wasserstein metric on $\mathcal{P}(\mathbb{T}^d)$. It can be introduced as in Ambrosio, Gigli, and Savaré [7] by defining a kind of manifold structure on $\mathcal{P}(\mathbb{T}^d)$ or, as in Lions [76], by embedding $\mathcal{P}(\mathbb{T}^d)$ into an $L^2(\Omega;\mathbb{T}^d)$ space of random variables. Here we introduce this latter notion in a slightly different way, as the derivative in space of $\frac{\delta U}{\delta m}$. In the Appendix we briefly compare the different notions.

2.2.1  First-Order Derivatives

Definition 2.2.1. We say that $U:\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$ is $C^1$ if there exists a continuous map $\frac{\delta U}{\delta m}:\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d\to\mathbb{R}$ such that, for any $m,m'\in\mathcal{P}(\mathbb{T}^d)$,
$$
\lim_{s\to0^+}\frac{U\big((1-s)m+sm'\big)-U(m)}{s} = \int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(m,y)\,d(m'-m)(y).
$$
Note that $\frac{\delta U}{\delta m}$ is defined up to an additive constant. We adopt the normalization convention
$$
\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(m,y)\,dm(y) = 0. \tag{2.1}
$$
For any $m\in\mathcal{P}(\mathbb{T}^d)$ and any signed measure $\mu$ on $\mathbb{T}^d$, we will use interchangeably the notations $\frac{\delta U}{\delta m}(m)(\mu)$ and $\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(m,y)\,d\mu(y)$. Note also that
$$
U(m') - U(m) = \int_0^1\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}\big((1-s)m+sm',y\big)\,d(m'-m)(y)\,ds, \qquad \forall m,m'\in\mathcal{P}(\mathbb{T}^d). \tag{2.2}
$$

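Two elementary examples—added here for illustration, they are not part of the original text—may help fix ideas. For the linear map $U(m)=\int_{\mathbb{T}^d}\varphi\,dm$ and the quadratic map $V(m)=\big(\int_{\mathbb{T}^d}\varphi\,dm\big)^2$, with $\varphi$ continuous, the definition together with the normalization (2.1) gives

```latex
\frac{\delta U}{\delta m}(m,y) = \varphi(y) - \int_{\mathbb{T}^d}\varphi\,dm,
\qquad
\frac{\delta V}{\delta m}(m,y)
   = 2\Big(\int_{\mathbb{T}^d}\varphi\,dm\Big)
     \Big(\varphi(y) - \int_{\mathbb{T}^d}\varphi\,dm\Big).
```

In both cases one checks directly that the integral against $d(m'-m)$ reproduces the directional derivative and that (2.1) holds.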
Let us explain the relationship between the derivative in the aforementioned sense and the Lipschitz continuity of $U$ on $\mathcal{P}(\mathbb{T}^d)$. If $\frac{\delta U}{\delta m}=\frac{\delta U}{\delta m}(m,y)$ is Lipschitz continuous with respect to the second variable, with a Lipschitz constant bounded independently of $m$, then $U$ is Lipschitz continuous: indeed, by (2.2),
$$
|U(m')-U(m)| \leq \int_0^1\Big\|D_y\frac{\delta U}{\delta m}\big((1-s)m+sm',\cdot\big)\Big\|_\infty\,ds\;\mathbf{d}_1(m,m') \leq \sup_{m''}\Big\|D_y\frac{\delta U}{\delta m}(m'',\cdot)\Big\|_\infty\,\mathbf{d}_1(m,m').
$$

This leads us to define the "intrinsic derivative" of $U$.
Definition 2.2.2. If $\frac{\delta U}{\delta m}$ is of class $C^1$ with respect to the second variable, the intrinsic derivative $D_mU:\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d\to\mathbb{R}^d$ is defined by
$$
D_mU(m,y) := D_y\frac{\delta U}{\delta m}(m,y).
$$

The expression $D_mU$ can be understood as a derivative of $U$ along vector fields:
Proposition 2.2.3. Assume that $U$ is $C^1$, with $\frac{\delta U}{\delta m}$ being $C^1$ with respect to $y$, and that $D_mU$ is continuous in both variables. Let $\phi:\mathbb{T}^d\to\mathbb{R}^d$ be a Borel measurable and bounded vector field. Then,
$$
\lim_{h\to0}\frac{U\big((\mathrm{id}+h\phi)\sharp m\big)-U(m)}{h} = \int_{\mathbb{T}^d}D_mU(m,y)\cdot\phi(y)\,dm(y).
$$


Proof. Let us set $m_{h,s} := s(\mathrm{id}+h\phi)\sharp m + (1-s)m$. Then,
$$
\begin{aligned}
U\big((\mathrm{id}+h\phi)\sharp m\big)-U(m) &= \int_0^1\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(m_{h,s},y)\,d\big((\mathrm{id}+h\phi)\sharp m-m\big)(y)\,ds\\
&= \int_0^1\int_{\mathbb{T}^d}\Big(\frac{\delta U}{\delta m}\big(m_{h,s},y+h\phi(y)\big)-\frac{\delta U}{\delta m}(m_{h,s},y)\Big)\,dm(y)\,ds\\
&= h\int_0^1\int_{\mathbb{T}^d}\int_0^1 D_mU\big(m_{h,s},y+th\phi(y)\big)\cdot\phi(y)\,dt\,dm(y)\,ds.
\end{aligned}
$$
Dividing by $h$ and letting $h\to0$ gives the result, thanks to the continuity of $D_mU$. □
Note also that, if $U:\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$ and $\frac{\delta U}{\delta m}$ is $C^2$ in $y$, then $D_yD_mU(m,y)$ is a symmetric matrix, as
$$
D_yD_mU(m,y) = D_y\Big(D_y\frac{\delta U}{\delta m}\Big)(m,y) = \mathrm{Hess}_y\,\frac{\delta U}{\delta m}(m,y).
$$

2.2.2  Second-Order Derivatives

If, for a fixed $y\in\mathbb{T}^d$, the map $m\mapsto\frac{\delta U}{\delta m}(m,y)$ is $C^1$, then we say that $U$ is $C^2$ and denote by $\frac{\delta^2U}{\delta m^2}$ its derivative. (Take note that $y$ is fixed; at this stage, nothing is said about the smoothness in the direction $y$.) By Definition 2.2.1 we have that $\frac{\delta^2U}{\delta m^2}:\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d\times\mathbb{T}^d\to\mathbb{R}$, with
$$
\frac{\delta U}{\delta m}(m',y) - \frac{\delta U}{\delta m}(m,y) = \int_0^1\int_{\mathbb{T}^d}\frac{\delta^2U}{\delta m^2}\big((1-s)m+sm',y,y'\big)\,d(m'-m)(y')\,ds.
$$
If $U$ is $C^2$ and if $\frac{\delta^2U}{\delta m^2}(m,y,y')$ is $C^2$ in the variables $(y,y')$, then we set
$$
D^2_{mm}U(m,y,y') := D^2_{y,y'}\frac{\delta^2U}{\delta m^2}(m,y,y').
$$

We note that $D^2_{mm}U$ maps $\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d\times\mathbb{T}^d$ into $\mathbb{R}^{d\times d}$. The next statement asserts that $D^2_{mm}U$ enjoys the classical symmetries of second-order derivatives.
Lemma 2.2.4. Assume that $\frac{\delta^2U}{\delta m^2}$ is jointly continuous in all the variables. Then
$$
\frac{\delta^2U}{\delta m^2}(m,y,y') = \frac{\delta^2U}{\delta m^2}(m,y',y) + \frac{\delta U}{\delta m}(m,y) - \frac{\delta U}{\delta m}(m,y'), \qquad m\in\mathcal{P}(\mathbb{T}^d),\ y,y'\in\mathbb{T}^d.
$$


In the same way, if $\frac{\delta U}{\delta m}$ is $C^1$ in the variable $y$ and $\frac{\delta^2U}{\delta m^2}$ is also $C^1$ in the variable $y$, $D_y\frac{\delta^2U}{\delta m^2}$ being jointly continuous in all the variables, then, for any fixed $y\in\mathbb{T}^d$, the map $m\mapsto D_mU(m,y)$ is $C^1$ and
$$
D_y\frac{\delta^2U}{\delta m^2}(m,y,y') = \frac{\delta}{\delta m}\big(D_mU(\cdot,y)\big)(m,y'), \qquad m\in\mathcal{P}(\mathbb{T}^d),\ y,y'\in\mathbb{T}^d,
$$
while, if $\frac{\delta^2U}{\delta m^2}$ is also $C^2$ in the variables $(y,y')$, then, for any fixed $y\in\mathbb{T}^d$, the map $\frac{\delta}{\delta m}(D_mU(\cdot,y))$ is $C^1$ in the variable $y'$ and
$$
D_m\big(D_mU(\cdot,y)\big)(m,y') = D^2_{mm}U(m,y,y').
$$
Proof. First step. We start with the proof of the first claim. By continuity, we just need to show the result when $m$ has a smooth positive density. Let $\mu,\nu\in L^\infty(\mathbb{T}^d)$, with $\int_{\mathbb{T}^d}\mu = \int_{\mathbb{T}^d}\nu = 0$ and with a small enough norm, so that $m+s\mu+t\nu$ is a probability density for any $(s,t)\in[0,1]^2$. Since $U$ is $C^2$, the mapping $\mathcal{U}:[0,1]^2\ni(s,t)\mapsto U(m+s\mu+t\nu)$ is twice differentiable and, by the standard Schwarz theorem, $D_tD_s\mathcal{U}(s,t) = D_sD_t\mathcal{U}(s,t)$ for any $(s,t)\in[0,1]^2$. Notice that
$$
D_tD_s\mathcal{U}(s,t) = \int_{[\mathbb{T}^d]^2}\frac{\delta^2U}{\delta m^2}\big(m+s\mu+t\nu,y,y'\big)\,\mu(y)\nu(y')\,dy\,dy', \qquad
D_sD_t\mathcal{U}(s,t) = \int_{[\mathbb{T}^d]^2}\frac{\delta^2U}{\delta m^2}\big(m+s\mu+t\nu,y',y\big)\,\mu(y)\nu(y')\,dy\,dy'.
$$

Choosing $s=t=0$, we find
$$
\int_{[\mathbb{T}^d]^2}\frac{\delta^2U}{\delta m^2}(m,y,y')\,\mu(y)\nu(y')\,dy\,dy' = \int_{[\mathbb{T}^d]^2}\frac{\delta^2U}{\delta m^2}(m,y',y)\,\mu(y)\nu(y')\,dy\,dy',
$$
for any $\mu,\nu\in L^\infty(\mathbb{T}^d)$ such that $\int_{\mathbb{T}^d}\mu = \int_{\mathbb{T}^d}\nu = 0$. Hence, for any smooth $m$ with a positive density, there exist maps $\varphi_1,\varphi_2$ such that
$$
\frac{\delta^2U}{\delta m^2}(m,y,y') = \frac{\delta^2U}{\delta m^2}(m,y',y) + \varphi_1(y) + \varphi_2(y'), \qquad y,y'\in\mathbb{T}^d. \tag{2.3}
$$

To identify $\varphi_1$ and $\varphi_2$, we recall the normalization condition
$$
\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(m,y)\,m(y)\,dy = 0.
$$
Taking the derivative with respect to $m$ of this equality, we find
$$
\int_{\mathbb{T}^d}\frac{\delta^2U}{\delta m^2}(m,y,y')\,m(y)\,dy + \frac{\delta U}{\delta m}(m,y') = 0, \qquad y'\in\mathbb{T}^d.
$$


So, integrating (2.3) with respect to $m(y)\,dy$ and using the normalization condition:
$$
-\frac{\delta U}{\delta m}(m,y') = \int_{\mathbb{T}^d}\frac{\delta^2U}{\delta m^2}(m,y,y')\,m(y)\,dy = 0 + \int_{\mathbb{T}^d}\varphi_1(y)\,m(y)\,dy + \varphi_2(y').
$$

Integrating (2.3) in the same way with respect to $m(y')\,dy'$:
$$
0 = -\frac{\delta U}{\delta m}(m,y) + \varphi_1(y) + \int_{\mathbb{T}^d}\varphi_2(y')\,m(y')\,dy'.
$$

Finally, integrating (2.3) with respect to $m(y)m(y')\,dy\,dy'$:
$$
0 = \int_{\mathbb{T}^d}\varphi_1(y)\,m(y)\,dy + \int_{\mathbb{T}^d}\varphi_2(y')\,m(y')\,dy'.
$$

Combining the three last equalities leads to
$$
\varphi_1(y) + \varphi_2(y') = \frac{\delta U}{\delta m}(m,y) - \frac{\delta U}{\delta m}(m,y'),
$$
which, in view of (2.3), gives the desired result.
Second step. The proof is similar for the second assertion, except that now we have to consider the mapping $\mathcal{U}:[0,1]\times\mathbb{T}^d\ni(t,y)\mapsto\frac{\delta U}{\delta m}(m+t\mu,y)$, for a general probability measure $m\in\mathcal{P}(\mathbb{T}^d)$ and a general finite signed measure $\mu$ on $\mathbb{T}^d$ such that $\mu(\mathbb{T}^d)=0$ and $m+\mu$ is a probability measure. (In particular, $m+t\mu = (1-t)m + t(m+\mu)$ is also a probability measure for any $t\in[0,1]$.) By assumption, $\mathcal{U}$ is $C^1$ in each variable $t$ and $y$, with
$$
D_t\mathcal{U}(t,y) = \int_{\mathbb{T}^d}\frac{\delta^2U}{\delta m^2}(m+t\mu,y,y')\,d\mu(y'), \qquad D_y\mathcal{U}(t,y) = D_mU(m+t\mu,y).
$$

In particular, Dt U  is C 1 in y and Dy Dt U  (t, y) =

 Td

Dy

δ2U (m + tμ, y, y  )μ(y  ) dy  . δm2

By assumption, Dy Dt U  is jointly continuous and, by the standard Schwarz theorem, the mapping Dy U  is differentiable in t, with   Dt Dy U  (t, y) = Dt Dm U (m + tμ, y) =

 Td

Dy

δ2 U (m + tμ, y, y  )μ(y  ) dy  . δm2

Integrating in t, this shows that −1 0 1

 Dm U (m + μ, y) − Dm U (m, y) =

1 0

 Td

Dy

δ2U (m + tμ, y, y  )μ(y  ) dy  dt. δm2


Choosing $\mu = m' - m$, for another probability measure $m'\in\mathcal{P}(\mathbb{T}^d)$, and noticing that
\[
\int_{\mathbb{T}^d} D_y \frac{\delta^2 U}{\delta m^2}(m, y, y')\,dm(y') = 0,
\]
we complete the proof of the second claim. For the last assertion, one just needs to take the derivative in $y$ in the second one. □
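The symmetry statement (2.3) can be checked by hand on a simple quadratic functional. The example below is illustrative and not part of the original argument; the constant $c(m)$ and the maps $\varphi_1, \varphi_2$ stand for whatever the normalization convention forces them to be.

```latex
% Illustrative example: a quadratic functional of the measure.
% For a smooth kernel \Phi on T^d x T^d, set
U(m) = \int_{\mathbb{T}^d}\int_{\mathbb{T}^d} \Phi(y, y')\, dm(y)\, dm(y').
% A direct computation of the flat derivatives gives, up to normalization,
\frac{\delta U}{\delta m}(m, y)
  = \int_{\mathbb{T}^d} \bigl[\Phi(y, y') + \Phi(y', y)\bigr]\, dm(y') + c(m),
\qquad
\frac{\delta^2 U}{\delta m^2}(m, y, y')
  = \Phi(y, y') + \Phi(y', y) + \varphi_1(y) + \varphi_2(y').
% The leading term \Phi(y,y') + \Phi(y',y) is invariant under the exchange
% of y and y', in accordance with (2.3).
```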

2.2.3

Comments on the Notions of Derivatives

As several concepts of derivatives have been used in mean field game theory, we now discuss the link between these notions. For simplicity, we argue as if our state space were $\mathbb{R}^d$ and not $\mathbb{T}^d$, as most results have been stated in this context. (We refer to the Appendix for an exposition on $\mathbb{T}^d$.)

A first idea consists in looking at the restriction of the map $U$ to the subset of measures with a density that is in $L^2(\mathbb{R}^d)$, and taking the derivative of $U$ in the $L^2(\mathbb{R}^d)$ sense. This is partially the point of view adopted by Lions in [76] and followed by Bensoussan, Frehse, and Yam [16]. In the context of smooth densities, this is closely related to our first and second derivatives $\frac{\delta U}{\delta m}$ and $\frac{\delta^2 U}{\delta m^2}$.

Many works on mean field games (as in Buckdahn, Li, Peng, and Rainer [24]; Carmona and Delarue [27]; Chassagneux, Crisan, and Delarue [31]; Gangbo and Swiech [45]) make use of an idea introduced by Lions in [76]. It consists in working in a sufficiently rich probability space $(\Omega, \mathcal{A}, \mathbb{P})$ and in looking at maps $U : \mathcal{P}(\mathbb{R}^d)\to\mathbb{R}$ through their lifting to $L^2(\Omega, \mathcal{A}, \mathbb{P}; \mathbb{R}^d)$ defined by
\[
\tilde U(X) = U(\mathcal{L}(X)) \qquad \forall X\in L^2(\Omega; \mathbb{R}^d),
\]
where $\mathcal{L}(X)$ is the law of $X$. It is clear that $\tilde U(X)$ depends only on the law of $X$ and not on the full random variable, so that the derivative of $\tilde U$ (if it exists) enjoys special properties. As explained in [76], if $\tilde U$ is continuously differentiable, then its gradient at some point $X_0\in L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$ can be written as
\[
\nabla \tilde U(X_0) = \partial_\mu U(\mathcal{L}(X_0))(X_0),
\]
where $\partial_\mu U : \mathcal{P}(\mathbb{R}^d)\times\mathbb{R}^d \ni (m,x) \mapsto \partial_\mu U(m)(x) \in \mathbb{R}^d$. We explain in the Appendix that the maps $\partial_\mu U$ and $D_m U$ introduced in Definition 2.2.2 coincide, as soon as one of the two derivatives exists. Let us also underline that this concept of derivative is closely related to the notion introduced by Ambrosio, Gigli, and Savaré [7] in a more general setting.

2.3

ASSUMPTIONS

Throughout the book, we assume that $H : \mathbb{T}^d\times\mathbb{R}^d\to\mathbb{R}$ is smooth, globally Lipschitz continuous, and satisfies the coercivity condition:
\[
0 < D^2_{pp} H(x,p) \le C\,I_d \qquad \text{for } (x,p)\in\mathbb{T}^d\times\mathbb{R}^d. \tag{2.4}
\]
We also always assume that the maps $F, G : \mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$ are globally Lipschitz continuous and monotone: for any $m, m'\in\mathcal{P}(\mathbb{T}^d)$,
\[
\int_{\mathbb{T}^d} \big(F(x,m) - F(x,m')\big)\,d(m-m')(x) \ge 0, \qquad
\int_{\mathbb{T}^d} \big(G(x,m) - G(x,m')\big)\,d(m-m')(x) \ge 0. \tag{2.5}
\]
Note that assumption (2.5) implies that $\frac{\delta F}{\delta m}$ and $\frac{\delta G}{\delta m}$ satisfy the following monotonicity property (explained for $F$):
\[
\int_{\mathbb{T}^d}\int_{\mathbb{T}^d} \frac{\delta F}{\delta m}(x, m, y)\,\mu(x)\mu(y)\,dx\,dy \ge 0
\]
for any centered measure $\mu$. Throughout the book, the conditions (2.4) and (2.5) are in force.

Next we describe assumptions that might differ according to the results. Let us fix $n\in\mathbb{N}$ and $\alpha\in(0,1)$. We set (with the notation introduced in Section 2.1)
\[
\mathrm{Lip}_n\Big(\frac{\delta F}{\delta m}\Big) := \sup_{m_1\neq m_2} \big(d_1(m_1,m_2)\big)^{-1}
\Big\| \frac{\delta F}{\delta m}(\cdot, m_1, \cdot) - \frac{\delta F}{\delta m}(\cdot, m_2, \cdot) \Big\|_{(n+\alpha,\,n+\alpha)}
\]
and use the analogous notation for $G$. We call (HF1(n)) the following regularity conditions on $F$:
\[
\textbf{(HF1(n))} \qquad \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big( \|F(\cdot,m)\|_{n+\alpha} + \Big\|\frac{\delta F}{\delta m}(\cdot,m,\cdot)\Big\|_{(n+\alpha,\,n+\alpha)} \Big) + \mathrm{Lip}_n\Big(\frac{\delta F}{\delta m}\Big) < \infty,
\]
and (HG1(n)) the analogous condition on $G$:
\[
\textbf{(HG1(n))} \qquad \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big( \|G(\cdot,m)\|_{n+\alpha} + \Big\|\frac{\delta G}{\delta m}(\cdot,m,\cdot)\Big\|_{(n+\alpha,\,n+\alpha)} \Big) + \mathrm{Lip}_n\Big(\frac{\delta G}{\delta m}\Big) < \infty.
\]


We use similar notation when dealing with second-order derivatives:
\[
\mathrm{Lip}_n\Big(\frac{\delta^2 F}{\delta m^2}\Big) := \sup_{m_1\neq m_2} \big(d_1(m_1,m_2)\big)^{-1}
\Big\| \frac{\delta^2 F}{\delta m^2}(\cdot, m_1, \cdot, \cdot) - \frac{\delta^2 F}{\delta m^2}(\cdot, m_2, \cdot, \cdot) \Big\|_{(n+\alpha,\,n+\alpha,\,n+\alpha)},
\]
and call (HF2(n)) (respectively, (HG2(n))) the second-order regularity conditions on $F$:
\[
\textbf{(HF2(n))} \quad \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big( \|F(\cdot,m)\|_{n+\alpha} + \Big\|\frac{\delta F}{\delta m}(\cdot,m,\cdot)\Big\|_{(n+\alpha,\,n+\alpha)} \Big)
+ \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big\|\frac{\delta^2 F}{\delta m^2}(\cdot,m,\cdot,\cdot)\Big\|_{(n+\alpha,\,n+\alpha,\,n+\alpha)}
+ \mathrm{Lip}_n\Big(\frac{\delta^2 F}{\delta m^2}\Big) < \infty,
\]
and on $G$:
\[
\textbf{(HG2(n))} \quad \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big( \|G(\cdot,m)\|_{n+\alpha} + \Big\|\frac{\delta G}{\delta m}(\cdot,m,\cdot)\Big\|_{(n+\alpha,\,n+\alpha)} \Big)
+ \sup_{m\in\mathcal{P}(\mathbb{T}^d)} \Big\|\frac{\delta^2 G}{\delta m^2}(\cdot,m,\cdot,\cdot)\Big\|_{(n+\alpha,\,n+\alpha,\,n+\alpha)}
+ \mathrm{Lip}_n\Big(\frac{\delta^2 G}{\delta m^2}\Big) < \infty.
\]

Example 2.3.1. Assume that $F$ is of the form
\[
F(x, m) = \int_{\mathbb{R}^d} \Phi\big(z, (\rho\star m)(z)\big)\,\rho(x-z)\,dz,
\]
where $\star$ denotes the usual convolution product (in $\mathbb{R}^d$), $\Phi : \mathbb{T}^d\times[0,+\infty)\to\mathbb{R}$ is a smooth map that is nondecreasing with respect to the second variable, and $\rho$ is a smooth, even function with compact support. Then $F$ satisfies the monotonicity condition (2.5) as well as the regularity conditions (HF1(n)) and (HF2(n)) for any $n\in\mathbb{N}$. Observe that $F$ is periodic in $x$ since $\Phi$ is periodic in the first argument and $m$ belongs to $\mathcal{P}(\mathbb{T}^d)$.

Proof. Let us first note that, for any $m, m'\in\mathcal{P}(\mathbb{T}^d)$,
\[
\int_{\mathbb{T}^d} \big(F(x,m) - F(x,m')\big)\,d(m-m')(x)
= \int_{\mathbb{T}^d} \big[\Phi(y, \rho\star m(y)) - \Phi(y, \rho\star m'(y))\big]\,\big(\rho\star m(y) - \rho\star m'(y)\big)\,dy \ge 0,
\]


since $\rho$ is even and $\Phi$ is nondecreasing with respect to the second variable. So $F$ is monotone. Writing $\Phi = \Phi(x,\theta)$, the derivatives of $F$ are given by
\[
\frac{\delta F}{\delta m}(x, m, y) = \sum_{k\in\mathbb{Z}^d} \int_{\mathbb{R}^d} \frac{\partial\Phi}{\partial\theta}\big(z, \rho\star m(z)\big)\,\rho(x-z)\,\rho(z - y - k)\,dz
\]
and
\[
\frac{\delta^2 F}{\delta m^2}(x, m, y, y') = \sum_{k,k'\in\mathbb{Z}^d} \int_{\mathbb{R}^d} \frac{\partial^2\Phi}{\partial\theta^2}\big(z, \rho\star m(z)\big)\,\rho(z - y - k)\,\rho(z - y' - k')\,\rho(x-z)\,dz.
\]
Then (HF1(n)) and (HF2(n)) hold because of the smoothness of $\rho$. □
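A particular case may help fix ideas; this instance is not in the original text. Taking $\Phi(z,\theta) = \theta$ in Example 2.3.1 reduces the coupling to a linear, smoothed interaction whose monotonicity is a perfect square.

```latex
% With \Phi(z,\theta) = \theta, the coupling is a double mollification:
F(x, m) = \int_{\mathbb{R}^d} (\rho \star m)(z)\, \rho(x - z)\, dz
        = (\rho \star \rho \star m)(x),
% using that \rho is even. The monotonicity integral then becomes a square:
\int_{\mathbb{T}^d} \bigl(F(x,m) - F(x,m')\bigr)\, d(m - m')(x)
  = \bigl\| \rho \star (m - m') \bigr\|_{L^2(\mathbb{R}^d)}^2 \;\ge\; 0.
```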

2.4



STATEMENT OF THE MAIN RESULTS

The book contains two main results: on the one hand, the well-posedness of the master equation, and, on the other hand, the convergence of the Nash system with $N$ players as $N$ tends to infinity. We start by considering the first-order master equation ($\beta = 0$), because, in this setting, the approach is relatively simple (Theorem 2.4.2). To handle the second-order master equation, we build solutions to the mean field game system with common noise, which play the role of "characteristics" for the master equation (Theorem 2.4.3). Our first main result is Theorem 2.4.5, which states that the master equation has a unique classical solution under our regularity and monotonicity assumptions on $H$, $F$, and $G$. Once we know that the master equation has a solution, we can use this solution to build approximate solutions for the Nash system with $N$ players. This yields our main convergence results, stated either in terms of the value functions (Theorem 2.4.8) or in terms of the optimal trajectories (Theorem 2.4.9).

2.4.1

First-Order Master Equation

We first consider the first-order master equation (or master equation without common noise):
\[
\left\{
\begin{aligned}
&-\partial_t U(t,x,m) - \Delta_x U(t,x,m) + H\big(x, D_x U(t,x,m)\big) \\
&\quad - \int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big](t,x,m,y)\,dm(y) \\
&\quad + \int_{\mathbb{T}^d} D_m U(t,x,m,y)\cdot D_p H\big(y, D_x U(t,y,m)\big)\,dm(y) \\
&\qquad = F(x,m) \quad \text{in } [0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d), \\
&U(T,x,m) = G(x,m) \quad \text{in } \mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d).
\end{aligned}
\right. \tag{2.6}
\]


We call it the first-order master equation because it contains only first-order derivatives with respect to the measure variable. Let us first explain the notion of solution.

Definition 2.4.1. We say that a map $U : [0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$ is a classical solution to the first-order master equation if
• $U$ is continuous in all its arguments (for the $d_1$ distance on $\mathcal{P}(\mathbb{T}^d)$), is of class $\mathcal{C}^2$ in $x$ and $\mathcal{C}^1$ in time (the derivatives of order 1 in time and space and of order 2 in space being continuous in all the arguments);
• $U$ is of class $\mathcal{C}^1$ with respect to $m$, the first-order derivative
\[
[0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d \ni (t,x,m,y) \mapsto \frac{\delta U}{\delta m}(t,x,m,y)
\]
being continuous in all the arguments, and $\delta U/\delta m$ being twice differentiable in $y$, the derivatives being continuous in all the arguments;
• $U$ satisfies the master equation (2.6).

Theorem 2.4.2. Assume that $F$, $G$, and $H$ satisfy (2.4) and (2.5) in Section 2.3, and that (HF1(n+1)) and (HG1(n+2)) hold for some $n\ge 1$ and some $\alpha\in(0,1)$. Then the first-order master equation (2.6) has a unique solution $U$. Moreover, $U$ is $\mathcal{C}^1$ (in all variables), $\frac{\delta U}{\delta m}$ is continuous in all variables, and $U(t,\cdot,m)$ and $\frac{\delta U}{\delta m}(t,\cdot,m,\cdot)$ are bounded in $\mathcal{C}^{n+2+\alpha}$ and $\mathcal{C}^{n+2+\alpha}\times\mathcal{C}^{n+1+\alpha}$, respectively, independently of $(t,m)$. Finally, $\frac{\delta U}{\delta m}$ is Lipschitz continuous with respect to the measure variable:
\[
\sup_{t\in[0,T]} \sup_{m_1\neq m_2} \big(d_1(m_1,m_2)\big)^{-1}
\Big\| \frac{\delta U}{\delta m}(t,\cdot,m_1,\cdot) - \frac{\delta U}{\delta m}(t,\cdot,m_2,\cdot) \Big\|_{(n+2+\alpha,\,n+\alpha)} < \infty.
\]

Chapter 3 is devoted to the proof of Theorem 2.4.2. We also discuss in this chapter the link between the solution $U$ and the derivative of the solution of a Hamilton–Jacobi equation in the space of measures.

The proof of Theorem 2.4.2 relies on the representation of the solution in terms of the mean field game system: for any $(t_0,m_0)\in[0,T)\times\mathcal{P}(\mathbb{T}^d)$, the mean field game (MFG) system is the system of forward–backward equations:
\[
\left\{
\begin{aligned}
&-\partial_t u - \Delta u + H(x, Du) = F(x, m(t)) && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&\partial_t m - \Delta m - \mathrm{div}\big(m D_p H(x, Du)\big) = 0 && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&u(T,x) = G(x, m(T)), \quad m(t_0,\cdot) = m_0 && \text{in } \mathbb{T}^d.
\end{aligned}
\right. \tag{2.7}
\]
As recalled in the text that follows (Proposition 3.1.1), under suitable assumptions on the data, there exists a unique solution $(u,m)$ to the above system. Our aim is to show that the map $U$ defined by
\[
U(t_0,\cdot,m_0) := u(t_0,\cdot) \tag{2.8}
\]
is a solution to (2.6). The starting point is the obvious remark that, for $U$ defined by (2.8) and for any $h\in[0,T-t_0]$,


\[
u(t_0 + h, \cdot) = U\big(t_0 + h, \cdot, m(t_0 + h)\big).
\]
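Differentiating this identity formally in $h$ at $h = 0$ is a chain-rule computation; the following sketch is purely formal, with all smoothness taken for granted, and only indicates where each term of (2.6) comes from.

```latex
% Formal chain rule in h at h = 0 for u(t_0+h, x) = U(t_0+h, x, m(t_0+h)):
\partial_t u(t_0, x)
  = \partial_t U(t_0, x, m_0)
  + \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(t_0, x, m_0, y)\,
      \partial_t m(t_0, y)\, dy.
% Substituting \partial_t m = \Delta m + div( m D_p H(y, Du) ) from (2.7)
% and integrating by parts in y (which replaces \delta U/\delta m by
% D_m U = D_y[\delta U/\delta m]) produces the two integral terms of (2.6);
% the equation for u in (2.7) supplies the remaining local terms.
```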

Taking the derivative with respect to $h$ and letting $h = 0$ shows that $U$ satisfies (2.6), provided that it is smooth (as explained at the end of Chapter 1). The main issue is to prove that the map $U$ defined by (2.8) is sufficiently smooth to perform the above computation.

To prove the differentiability of the map $U$, we use a flow method and differentiate the MFG system (2.7) with respect to the measure argument $m_0$. The derivative system then reads as a linearized system initialized with a signed measure. Fixing a solution $(u,m)$ to (2.7) and allowing for a more singular Schwartz distribution $\mu_0\in(\mathcal{C}^{n+1+\alpha}(\mathbb{T}^d))'$ as initial condition (instead of a signed measure), the linearized system, with $(v,\mu)$ as unknown, takes the form
\[
\left\{
\begin{aligned}
&-\partial_t v - \Delta v + D_p H(x, Du)\cdot Dv = \frac{\delta F}{\delta m}\big(x, m(t)\big)(\mu(t)) && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&\partial_t \mu - \Delta\mu - \mathrm{div}\big(\mu D_p H(x, Du)\big) - \mathrm{div}\big(m D^2_{pp} H(x, Du) Dv\big) = 0 && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&v(T,x) = \frac{\delta G}{\delta m}\big(x, m(T)\big)(\mu(T)), \quad \mu(t_0,\cdot) = \mu_0 && \text{in } \mathbb{T}^d,
\end{aligned}
\right.
\]
where $v$ and $u$ are evaluated at $(t,x)$. We prove that $v$ can be interpreted as the directional derivative of $U$ in the direction $\mu_0$:
\[
v(t_0, x) = \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(t_0, x, m_0, y)\,\mu_0(y)\,dy.
\]
Note that this shows at the same time the differentiability of $U$ and the regularity of its derivative. For this reason, the introduction of the directional derivative appears extremely useful in this context.

2.4.2

The Mean Field Game System with Common Noise

As explained in the previous section, the characteristics of the first-order master equation (2.6) are the solution to the mean field game system (2.7). The analogous construction for the second-order master equation (with $\beta > 0$) yields a system of stochastic partial differential equations, the mean field game system with common noise. Given an initial distribution $m_0\in\mathcal{P}(\mathbb{T}^d)$ at an initial time $t_0\in[0,T]$, this system reads²
\[
\left\{
\begin{aligned}
&d_t u_t = \Big[-(1+\beta)\Delta u_t + H(x, Du_t) - F(x, m_t) - \sqrt{2\beta}\,\mathrm{div}(v_t)\Big]\,dt + v_t\cdot dW_t, \\
&d_t m_t = \Big[(1+\beta)\Delta m_t + \mathrm{div}\big(m_t D_p H(\cdot, Du_t)\big)\Big]\,dt - \sqrt{2\beta}\,\mathrm{div}(m_t\,dW_t) && \text{in } [t_0,T]\times\mathbb{T}^d, \\
&m_{t_0} = m_0, \quad u_T(x) = G(x, m_T) && \text{in } \mathbb{T}^d.
\end{aligned}
\right. \tag{2.9}
\]

² To emphasize the random nature of the functions $u$ and $m$, the time variable is now indicated as a subscript, as is often done in the theory of stochastic processes.


Here $(W_t)_{t\in[0,T]}$ is a given $d$-dimensional Brownian motion, generating a filtration $(\mathcal{F}_t)_{t\in[0,T]}$. The solution is the process $(u_t, m_t, v_t)_{t\in[t_0,T]}$, adapted to $(\mathcal{F}_t)_{t\in[t_0,T]}$, where, for each $t\in[t_0,T]$, $v_t$ is a vector field that ensures the solution $(u_t)$ to the backward equation to be adapted to the filtration $(\mathcal{F}_t)_{t\in[t_0,T]}$. Up to now, the well-posedness of this system has never been investigated, but it is reminiscent of the theory of forward–backward stochastic differential equations in finite dimension; see, for instance, the monograph [84].

To analyze (2.9), we take advantage of the additive structure of the common noise and perform the (formal) change of variable:
\[
\tilde u_t(x) = u_t\big(x + \sqrt{2\beta}\,W_t\big), \qquad \tilde m_t(x) = m_t\big(x + \sqrt{2\beta}\,W_t\big), \qquad x\in\mathbb{T}^d,\ t\in[0,T].
\]

Setting $\tilde H_t(x,p) = H(x+\sqrt{2\beta}W_t, p)$, $\tilde F_t(x,m) = F(x+\sqrt{2\beta}W_t, m)$, and $\tilde G(x,m) = G(x+\sqrt{2\beta}W_T, m)$, and invoking the Itô–Wentzell formula (see Chapter 4 for a more precise account, together with Subsection A.3.1 in the Appendix), the pair $(\tilde u_t, \tilde m_t)_{t\in[t_0,T]}$ formally satisfies the system:
\[
\left\{
\begin{aligned}
&d_t \tilde u_t = \big[-\Delta\tilde u_t + \tilde H_t(\cdot, D\tilde u_t) - \tilde F_t(\cdot, \tilde m_t)\big]\,dt + d\tilde M_t, \\
&d_t \tilde m_t = \big[\Delta\tilde m_t + \mathrm{div}\big(\tilde m_t D_p\tilde H_t(\cdot, D\tilde u_t)\big)\big]\,dt, \\
&\tilde m_{t_0} = m_0, \quad \tilde u_T = \tilde G(\cdot, \tilde m_T),
\end{aligned}
\right. \tag{2.10}
\]
where (still formally) $d\tilde M_t = \big(\sqrt{2\beta}\,D_x\tilde u_t + v_t(x + \sqrt{2\beta}W_t)\big)\cdot dW_t$.

Let us explain how we understand the above system. The solution $(\tilde u_t)_{t\in[0,T]}$ is seen as an $(\mathcal{F}_t)_{t\in[0,T]}$-adapted process with paths in $\mathcal{C}^0([0,T], \mathcal{C}^{n+2}(\mathbb{T}^d))$, for some fixed $n\ge 0$. The process $(\tilde m_t)_{t\in[0,T]}$ reads as an $(\mathcal{F}_t)_{t\in[0,T]}$-adapted process with paths in the space $\mathcal{C}^0([0,T], \mathcal{P}(\mathbb{T}^d))$. We shall look for solutions satisfying
\[
\sup_{t\in[0,T]} \|\tilde u_t\|_{n+2+\alpha} \in L^\infty(\Omega, \mathcal{A}, \mathbb{P}) \tag{2.11}
\]
(for some fixed $\alpha\in(0,1)$). The process $(\tilde M_t)_{t\in[0,T]}$ is seen as an $(\mathcal{F}_t)_{t\in[0,T]}$-adapted process with paths in the space $\mathcal{C}^0([0,T], \mathcal{C}^n(\mathbb{T}^d))$, such that, for any $x\in\mathbb{T}^d$, $(\tilde M_t(x))_{t\in[0,T]}$ is an $(\mathcal{F}_t)_{t\in[0,T]}$ martingale. It is required to satisfy
\[
\sup_{t\in[0,T]} \|\tilde M_t\|_{n+\alpha} \in L^\infty(\Omega, \mathcal{A}, \mathbb{P}). \tag{2.12}
\]

Theorem 2.4.3. Assume that $F$, $G$, and $H$ satisfy (2.4) and (2.5) and that (HF1(n+1)) and (HG1(n+2)) hold true for some $n\ge 0$ and some $\alpha\in(0,1)$. Then, there exists a unique solution $(\tilde u_t, \tilde m_t, \tilde M_t)_{t\in[0,T]}$ to (2.10), satisfying (2.11) and (2.12).

Theorem 2.4.3 is proved in Chapter 4 (see Theorem 4.3.1 for more precise estimates). The main difference with the deterministic mean field game


system is that the solution $(\tilde u_t, \tilde m_t)_{0\le t\le T}$ is sought in a much bigger space, namely $[\mathcal{C}^0([0,T], \mathcal{C}^n(\mathbb{T}^d))\times\mathcal{C}^0([0,T], \mathcal{P}(\mathbb{T}^d))]^\Omega$, which is not well suited to the use of compactness arguments. Because of that, one can no longer invoke the Schauder theorem to prove the existence of a solution, which is the standard argument for solving the case $\beta = 0$; see Chapter 3. For this reason, the proof uses instead a continuation method, directly inspired from the literature on finite-dimensional forward–backward stochastic systems (see [86]). Notice also that, due to the presence of the noise $(W_t)_{t\in[0,T]}$, the analysis of the time regularity of the solution becomes a challenging issue and that the continuation method permits bypassing this difficulty.

2.4.3

Second-Order Master Equation

The second main result of this book concerns the analogue of Theorem 2.4.2 when the underlying MFG problem incorporates an additive common noise. Then the master equation involves additional terms, including second-order derivatives in the direction of the measure. It has the form (for some fixed level of common noise $\beta > 0$):
\[
\left\{
\begin{aligned}
&-\partial_t U(t,x,m) - (1+\beta)\Delta_x U(t,x,m) + H\big(x, D_x U(t,x,m)\big) - F(x,m) \\
&\quad - (1+\beta)\int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big](t,x,m,y)\,dm(y) \\
&\quad + \int_{\mathbb{T}^d} D_m U(t,x,m,y)\cdot D_p H\big(y, D_x U(t,y,m)\big)\,dm(y) \\
&\quad - 2\beta \int_{\mathbb{T}^d} \mathrm{div}_x\big[D_m U\big](t,x,m,y)\,dm(y) \\
&\quad - \beta \int_{\mathbb{T}^d\times\mathbb{T}^d} \mathrm{Tr}\big[D^2_{mm} U(t,x,m,y,y')\big]\,dm(y)\,dm(y') = 0, \\
&U(T,x,m) = G(x,m),
\end{aligned}
\right. \tag{2.13}
\]
for $(t,x,m)\in[0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)$.

Following Definition 2.4.1, we let

Definition 2.4.4. We say that a map $U : [0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$ is a classical solution to the second-order master equation (2.13) if
• $U$ is continuous in all its arguments (for the $d_1$ distance on $\mathcal{P}(\mathbb{T}^d)$), is of class $\mathcal{C}^2$ in $x$ and $\mathcal{C}^1$ in time (the derivatives of order 1 in time and space and of order 2 in space being continuous in all the arguments);
• $U$ is of class $\mathcal{C}^2$ with respect to $m$, the first- and second-order derivatives
\[
[0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d \ni (t,x,m,y) \mapsto \frac{\delta U}{\delta m}(t,x,m,y),
\]
\[
[0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d\times\mathbb{T}^d \ni (t,x,m,y,y') \mapsto \frac{\delta^2 U}{\delta m^2}(t,x,m,y,y')
\]


being continuous in all the arguments, the first-order derivative $\delta U/\delta m$ being twice differentiable in $y$, the derivatives being continuous in all the arguments, and the second-order derivative $\delta^2 U/\delta m^2$ being also twice differentiable in the pair $(y,y')$, the derivatives being continuous in all the arguments;
• the function $D_y(\delta U/\delta m) = D_m U$ is differentiable in $x$, the derivatives being continuous in all the arguments;
• $U$ satisfies the master equation (2.13).

On the model of Theorem 2.4.2, we claim

Theorem 2.4.5. Assume that $F$, $G$, and $H$ satisfy (2.4) and (2.5) in Section 2.3 and that (HF2(n+1)) and (HG2(n+2)) hold true for some $n\ge 2$ and for some $\alpha\in(0,1)$. Then, the second-order master equation (2.13) has a unique solution $U$. The solution $U$ enjoys the following regularity: for any $\alpha'\in[0,\alpha)$, $t\in[0,T]$, and $m\in\mathcal{P}(\mathbb{T}^d)$, $U(t,\cdot,m)$, $[\delta U/\delta m](t,\cdot,m,\cdot)$, and $[\delta^2 U/\delta m^2](t,\cdot,m,\cdot,\cdot)$ are in $\mathcal{C}^{n+2+\alpha'}$, $\mathcal{C}^{n+2+\alpha'}\times\mathcal{C}^{n+1+\alpha'}$, and $\mathcal{C}^{n+2+\alpha'}\times\mathcal{C}^{n+\alpha'}\times\mathcal{C}^{n+\alpha'}$, respectively, independently of $(t,m)$. Moreover, the mappings
\[
[0,T]\times\mathcal{P}(\mathbb{T}^d)\ni(t,m)\mapsto U(t,\cdot,m)\in\mathcal{C}^{n+2+\alpha'},
\]
\[
[0,T]\times\mathcal{P}(\mathbb{T}^d)\ni(t,m)\mapsto [\delta U/\delta m](t,\cdot,m,\cdot)\in\mathcal{C}^{n+2+\alpha'}\times\mathcal{C}^{n+1+\alpha'},
\]
\[
[0,T]\times\mathcal{P}(\mathbb{T}^d)\ni(t,m)\mapsto [\delta^2 U/\delta m^2](t,\cdot,m,\cdot,\cdot)\in\mathcal{C}^{n+2+\alpha'}\times[\mathcal{C}^{n+\alpha'}]^2
\]
are continuous. When $\alpha' = 0$, these mappings are Lipschitz continuous in $m$, uniformly in time.

Chapter 5 is devoted to the proof of Theorem 2.4.5. As for the first-order master equation, the starting point consists in letting, given $(t_0,m_0)\in[0,T]\times\mathcal{P}(\mathbb{T}^d)$,
\[
U(t_0, x, m_0) = \tilde u_{t_0}(x), \qquad x\in\mathbb{T}^d,
\]
where $(\tilde u_t, \tilde m_t, \tilde M_t)_{t\in[0,T]}$ is the solution to the MFG system with common noise (2.10), when $(W_t)_{t\in[0,T]}$ in the definition of the coefficients $\tilde F$, $\tilde G$, and $\tilde H$ is replaced by $(W_t - W_{t_0})_{t\in[t_0,T]}$. The key remark (see Lemma 5.1.1) is that, if we let $m_{t_0,t} = [\mathrm{id} + \sqrt{2\beta}(W_t - W_{t_0})]\sharp\,\tilde m_t$, then, for any $h\in[0,T-t_0]$, $\mathbb{P}$ almost surely,
\[
\tilde u_{t_0+h}(x) = U\big(t_0+h, x + \sqrt{2\beta}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h}\big), \qquad x\in\mathbb{T}^d.
\]
(Take note that the function entering the push-forward in the definition of $m_{t_0,t}$ is random.) Taking the derivative with respect to $h$ at $h = 0$ on both sides of the equality shows that the map $U$ thus defined satisfies the master equation (up to a tailor-made Itô formula; see Chapter A.3.2). Of course, the main issue is to prove that $U$ is sufficiently smooth to perform the foregoing computation: for this we need to show that $U$ has first- and second-order derivatives with respect to the measure. As for the deterministic case, this is obtained by linearizing the


MFG system (with common noise). This linearization procedure is complicated by the fact that, because of the noise, the triplet $(\tilde u_t, \tilde m_t, \tilde M_t)_{t\in[0,T]}$ solves an equation in which the coefficients have little time regularity.

As a byproduct of the construction of the master equation, we can come back to the MFG system with common noise.

Definition 2.4.6. Given $t_0\in[0,T]$, we call a solution to (2.9) a triplet $(u_t, m_t, v_t)_{t\in[t_0,T]}$ of $(\mathcal{F}_t)_{t\in[t_0,T]}$-adapted processes with trajectories in the space $\mathcal{C}^0([t_0,T], \mathcal{C}^3(\mathbb{T}^d)\times\mathcal{P}(\mathbb{T}^d)\times[\mathcal{C}^2(\mathbb{T}^d)]^d)$, such that the quantity $\sup_{t\in[t_0,T]}(\|u_t\|_3 + \sup_{i=1,\dots,d}\|v_t^i\|_2)$ belongs to $L^\infty(\Omega, \mathcal{A}, \mathbb{P})$ and (2.9) holds true with probability 1.

In (2.9), the forward equation is understood in the sense of distributions, namely, with probability 1, for all test functions $\phi\in\mathcal{C}^2(\mathbb{T}^d)$ and all $t\in[0,T]$,
\[
d\Big(\int_{\mathbb{T}^d}\phi\,dm_t\Big) = \Big[(1+\beta)\int_{\mathbb{T}^d}\Delta\phi\,dm_t - \int_{\mathbb{T}^d} D\phi\cdot D_p H\big(\cdot, Du_t(\cdot)\big)\,dm_t\Big]\,dt + \sqrt{2\beta}\int_{\mathbb{T}^d} D\phi\,dm_t\cdot dW_t.
\]
Observe that, in (2.9), the stochastic integral in the backward equation is a priori well defined up to an exceptional event depending on the spatial position $x\in\mathbb{T}^d$. Using the fact that $\sup_{t\in[t_0,T]}\sup_{i=1,\dots,d}\|v_t^i\|_2$ is in $L^\infty(\Omega, \mathcal{A}, \mathbb{P})$ and invoking Kolmogorov's continuity theorem, it is standard to find a version of the stochastic integral that is jointly continuous in time and space.

Let $U$ be the solution of the master equation (2.13).

Corollary 2.4.7. Under the assumptions of Theorem 2.4.5, for any initial data $(t_0,m_0)\in[0,T]\times\mathcal{P}(\mathbb{T}^d)$, the stochastic MFG system (2.9) has a unique solution $(u_t, m_t, v_t)_{t\in[0,T]}$. The process $(u_t, m_t, v_t)_{t\in[0,T]}$ is an $(\mathcal{F}_t)_{t\in[0,T]}$-adapted process with paths in $\mathcal{C}^0([0,T], \mathcal{C}^{n+2}(\mathbb{T}^d)\times\mathcal{P}(\mathbb{T}^d)\times[\mathcal{C}^{n+1}(\mathbb{T}^d)]^d)$, and the vector field $(v_t)_{t\in[0,T]}$ is given by
\[
v_t(x) = \int_{\mathbb{T}^d} D_m U(t, x, m_t, y)\,dm_t(y).
\]

2.4.4


The Convergence of the Nash System for N Players

We finally study the convergence of Nash equilibria of differential games with $N$ players to the limit system given by the master equation. We consider the solution $(v^{N,i})_{i\in\{1,\dots,N\}}$ of the Nash system:
\[
\left\{
\begin{aligned}
&-\partial_t v^{N,i} - \sum_{j=1}^N \Delta_{x_j} v^{N,i} - \beta\sum_{j,k=1}^N \mathrm{Tr}\,D^2_{x_j,x_k} v^{N,i} + H(x_i, D_{x_i} v^{N,i}) \\
&\quad + \sum_{j\neq i} D_p H(x_j, D_{x_j} v^{N,j})\cdot D_{x_j} v^{N,i} = F(x_i, m^{N,i}_{\boldsymbol{x}}) && \text{in } [0,T]\times\mathbb{T}^{Nd}, \\
&v^{N,i}(T, \boldsymbol{x}) = G(x_i, m^{N,i}_{\boldsymbol{x}}) && \text{in } \mathbb{T}^{Nd},
\end{aligned}
\right. \tag{2.14}
\]


where we have set, for $\boldsymbol{x} = (x_1,\dots,x_N)\in(\mathbb{T}^d)^N$,
\[
m^{N,i}_{\boldsymbol{x}} = \frac{1}{N-1}\sum_{j\neq i}\delta_{x_j}.
\]

Let us recall that, under the same assumptions on $H$, $F$, and $G$ as in the statement of Theorem 2.4.5, the above system has a unique solution (see, for instance, [70]). Our main result says that the $(v^{N,i})_{i\in\{1,\dots,N\}}$ "converge" to the solution of the master equation as $N\to+\infty$. This result, conjectured in Lasry–Lions [74], is somewhat subtle, because in the Nash system players observe each other (closed loop form), whereas in the limit system the players just need to observe the theoretical distribution of the population, and not the specific behavior of each player. We first study the convergence of the functions $(v^{N,i})_{i\in\{1,\dots,N\}}$ and then the convergence of the optimal trajectories. We have two different ways to express the convergence of the $(v^{N,i})_{i=1,\dots,N}$, described in the following result:

Theorem 2.4.8. Let the assumption of Theorem 2.4.5 be in force for some $n\ge 2$, let $(v^{N,i})$ be the solution to (2.14), and let $U$ be the classical solution to the second-order master equation. Fix $N\ge 1$ and $(t_0,m_0)\in[0,T]\times\mathcal{P}(\mathbb{T}^d)$.

(i) For any $\boldsymbol{x}\in(\mathbb{T}^d)^N$, let $m^N_{\boldsymbol{x}} := \frac{1}{N}\sum_{i=1}^N \delta_{x_i}$. Then
\[
\sup_{i=1,\dots,N} \big| v^{N,i}(t_0, \boldsymbol{x}) - U(t_0, x_i, m^N_{\boldsymbol{x}}) \big| \le C N^{-1}.
\]

(ii) For any $i\in\{1,\dots,N\}$ and $x_i\in\mathbb{T}^d$, let us set
\[
w^{N,i}(t_0, x_i, m_0) := \int_{\mathbb{T}^d}\cdots\int_{\mathbb{T}^d} v^{N,i}(t_0, \boldsymbol{x}) \prod_{j\neq i} m_0(dx_j),
\]
where $\boldsymbol{x} = (x_1,\dots,x_N)$. Then,
\[
\big\| w^{N,i}(t_0,\cdot,m_0) - U(t_0,\cdot,m_0) \big\|_{L^1(m_0)} \le
\begin{cases}
C N^{-1/d} & \text{if } d\ge 3, \\
C N^{-1/2}\log(N) & \text{if } d = 2, \\
C N^{-1/2} & \text{if } d = 1.
\end{cases}
\]

In (i) and (ii), the constant $C$ does not depend on $i$, $t_0$, $m_0$, or $N$.

Theorem 2.4.8 says, in two different ways, that the $(v^{N,i})_{i\in\{1,\dots,N\}}$ are close to $U$. The first statement explains that, for a fixed $\boldsymbol{x}\in(\mathbb{T}^d)^N$ and a fixed $i$, the quantity $|v^{N,i}(t_0, \boldsymbol{x}) - U(t_0, x_i, m^N_{\boldsymbol{x}})|$ is of order $N^{-1}$. In the second statement, one fixes a measure $m_0$ and an index $i$, and one averages $v^{N,i}(t_0,\cdot)$ in space over $m_0$ for all variables but the $i$-th one. The resulting map $w^{N,i}$ is at a distance of order $N^{-\min(1/d,1/2)}$ of $U(t_0,\cdot,m_0)$ (with a logarithmic correction if $d = 2$).

We can also describe the convergence in terms of optimal trajectories. Let $t_0\in[0,T)$, $m_0\in\mathcal{P}(\mathbb{T}^d)$, and let $(Z_i)_{i\in\{1,\dots,N\}}$ be an i.i.d. family of $N$ random


variables of law $m_0$. We set $\boldsymbol{Z} = (Z_1,\dots,Z_N)$. Let also $((B^i_t)_{t\in[0,T]})_{i\in\{1,\dots,N\}}$ be a family of $N$ independent $d$-dimensional Brownian motions, which is also independent of $(Z_i)_{i\in\{1,\dots,N\}}$, and let $(W_t)_{t\in[0,T]}$ be a $d$-dimensional Brownian motion independent of the $(B^i)_{i\in\{1,\dots,N\}}$ and $(Z_i)_{i\in\{1,\dots,N\}}$. We consider the optimal trajectories $(\boldsymbol{Y}_t = (Y_{1,t},\dots,Y_{N,t}))_{t\in[t_0,T]}$ for the $N$-player game:
\[
\left\{
\begin{aligned}
&dY_{i,t} = -D_p H\big(Y_{i,t}, D_{x_i} v^{N,i}(t, \boldsymbol{Y}_t)\big)\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, \qquad t\in[t_0,T], \\
&Y_{i,t_0} = Z_i,
\end{aligned}
\right.
\]
and the solution $(\tilde{\boldsymbol{X}}_t = (\tilde X_{1,t},\dots,\tilde X_{N,t}))_{t\in[t_0,T]}$ of the stochastic differential equation of (conditional) McKean–Vlasov type:
\[
\left\{
\begin{aligned}
&d\tilde X_{i,t} = -D_p H\Big(\tilde X_{i,t}, D_x U\big(t, \tilde X_{i,t}, \mathcal{L}(\tilde X_{i,t}|W)\big)\Big)\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, \\
&\tilde X_{i,t_0} = Z_i.
\end{aligned}
\right.
\]
Both systems of stochastic differential equations (SDEs) are set on $(\mathbb{R}^d)^N$. Since both are driven by periodic coefficients, solutions generate (canonical) flows of probability measures on $(\mathbb{T}^d)^N$: the flow of probability measures generated in $\mathcal{P}((\mathbb{T}^d)^N)$ by each solution is independent of the representatives in $\mathbb{R}^d$ of the $\mathbb{T}^d$-valued random variables $Z_1,\dots,Z_N$. The next result says that the solutions of the two systems are close:

Theorem 2.4.9. Let the assumption of Theorem 2.4.8 be in force. Then, for any $N\ge 1$ and any $i\in\{1,\dots,N\}$, we have
\[
\mathbb{E}\Big[\sup_{t\in[t_0,T]} \big| Y_{i,t} - \tilde X_{i,t} \big|\Big] \le
\begin{cases}
C N^{-1/d} & \text{if } d\ge 3, \\
C N^{-1/2}\log(N) & \text{if } d = 2, \\
C N^{-1/2} & \text{if } d = 1,
\end{cases}
\]
for some constant $C > 0$ independent of $t_0$, $m_0$, and $N$.

In particular, since the $((\tilde X_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ are conditionally independent given the realization of $W$, the above result is a (conditional) propagation of chaos.

The proofs of Theorem 2.4.8 and Theorem 2.4.9 rely on the existence of the solution $U$ of the master equation (2.13) and constitute the aim of Chapter 6. Our starting point is that, for any $N\ge 1$, the "projection" of $U$ onto the finite-dimensional space $[0,T]\times(\mathbb{T}^d)^N$ is almost a solution to the Nash system (2.14). Namely, if we set, for any $i\in\{1,\dots,N\}$ and any $\boldsymbol{x} = (x_1,\dots,x_N)\in(\mathbb{T}^d)^N$,
\[
u^{N,i}(t, \boldsymbol{x}) := U(t, x_i, m^{N,i}_{\boldsymbol{x}}),
\]


then $(u^{N,i})_{i\in\{1,\dots,N\}}$ satisfies (2.14) up to an error term of size $O(1/N)$ in each equation (Proposition 6.1.3). Note that, as the number of equations in (2.14) is $N$, this could lead to a serious issue, because the error terms could add up. One of the thrusts of our approach is that, somehow, the proofs work under the sole assumption that the master equation (2.13) admits a classical solution. Here the existence of a classical solution is guaranteed under the assumptions of Theorem 2.4.5, which include in particular the monotonicity properties of $F$ and $G$, but the analysis provided in Chapter 6 shows that monotonicity plays no role in the proofs of Theorems 2.4.8 and 2.4.9. Basically, only the global Lipschitz properties of $H$ and $D_p H$, together with the nondegeneracy of the diffusions and the various bounds obtained for the solution of the master equation and its derivatives, matter. This is a quite remarkable fact, which demonstrates the efficiency of our strategy.
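The reason the projection $u^{N,i}$ nearly solves (2.14) can be made concrete through the derivative identities below. They are a formal sketch, valid under the smoothness provided by Theorem 2.4.5; the rigorous statements are in Chapter 6.

```latex
% Space derivatives of the projection u^{N,i}(t, x) = U(t, x_i, m^{N,i}_x):
D_{x_j} u^{N,i}(t, \boldsymbol{x})
  = \frac{1}{N-1}\, D_m U\bigl(t, x_i, m^{N,i}_{\boldsymbol{x}}, x_j\bigr)
  \quad (j \neq i),
\qquad
D_{x_i} u^{N,i}(t, \boldsymbol{x})
  = D_x U\bigl(t, x_i, m^{N,i}_{\boldsymbol{x}}\bigr).
% The 1/(N-1) factor keeps the sum over j \neq i in (2.14) of order one,
% which is why each equation is satisfied up to an O(1/N) error.
```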


Chapter Three

A Starter: The First-Order Master Equation

In this chapter we prove Theorem 2.4.2; i.e., we establish the well-posedness of the master equation without common noise:
\[
\left\{
\begin{aligned}
&-\partial_t U(t,x,m) - \Delta_x U(t,x,m) + H\big(x, D_x U(t,x,m)\big) \\
&\quad - \int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big](t,x,m,y)\,dm(y) \\
&\quad + \int_{\mathbb{T}^d} D_m U(t,x,m,y)\cdot D_p H\big(y, D_x U(t,y,m)\big)\,dm(y) \\
&\qquad = F(x,m) \quad \text{in } [0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d), \\
&U(T,x,m) = G(x,m) \quad \text{in } \mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d).
\end{aligned}
\right. \tag{3.1}
\]
The idea is to represent $U$ by solutions of the mean field game (MFG) system: let us recall that, for any $(t_0,m_0)\in[0,T)\times\mathcal{P}(\mathbb{T}^d)$, the MFG system is the system of forward–backward equations:
\[
\left\{
\begin{aligned}
&-\partial_t u - \Delta u + H(x, Du) = F(x, m(t)) && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&\partial_t m - \Delta m - \mathrm{div}\big(m D_p H(x, Du)\big) = 0 && \text{in } (t_0,T)\times\mathbb{T}^d, \\
&u(T,x) = G(x, m(T)), \quad m(t_0,\cdot) = m_0 && \text{in } \mathbb{T}^d.
\end{aligned}
\right. \tag{3.2}
\]
As recalled in the text that follows, under suitable assumptions on the data, there exists a unique solution $(u,m)$ to the foregoing system. Our aim is to show that the map $U$ defined by
\[
U(t_0,\cdot,m_0) := u(t_0,\cdot) \tag{3.3}
\]
is a solution to (3.1).

Throughout this chapter, assumptions (2.4) and (2.5) are in force. Let us, however, underline that the global Lipschitz continuity of $H$ is not absolutely necessary. We just need to know that the solutions of the MFG system are uniformly Lipschitz continuous, independently of the initial conditions: sufficient conditions for this can be found in [74], for instance.


The proof of Theorem 2.4.2 requires several preliminary steps. We first recall the existence of a solution to the MFG system (3.2) (Proposition 3.1.1) and show that this solution depends in a Lipschitz continuous way on the initial measure $m_0$ (Proposition 3.2.1). Then we show, by a linearization procedure, that the map $U$ defined in (3.3) is of class $\mathcal{C}^1$ with respect to the measure (Proposition 3.4.3, Corollary 3.4.4). The proof relies on the analysis of a linearized system with a specific structure, for which well-posedness and estimates are given in Lemma 3.3.1. We are then ready to prove Theorem 2.4.2 (Section 3.5). We also show, for later use, that the first-order derivative of $U$ is Lipschitz continuous with respect to $m$ (Proposition 3.6.1). We complete the chapter by explaining how one obtains the solution $U$ as the derivative, with respect to the measure $m$, of the value function of an optimal control problem set over flows of probability measures (Theorem 3.7.1).

Some of the proofs given in this chapter are only sketched. One reason is that some of the arguments we use here to investigate the MFG system (3.2) have already been developed in the literature. Another reason is that this chapter constitutes a starter only, specifically devoted to the simpler case without common noise. Arguments will be expanded in detail in the next two chapters, when handling mean field games with a common noise, for which there are fewer available results in the literature.

3.1

SPACE REGULARITY OF U

In this part we investigate the space regularity of $U$ with respect to $x$. Recall that $U(t_0,\cdot,m_0)$ is defined by $U(t_0,x,m_0) = u(t_0,x)$, where $(u,m)$ is a classical solution to (3.2) with initial condition $m(t_0) = m_0$. By a classical solution to (3.2) we mean a pair $(u,m)\in\mathcal{C}^{1,2}\times\mathcal{C}^0([t_0,T],\mathcal{P}(\mathbb{T}^d))$ such that the equation for $u$ holds in the classical sense while the equation for $m$ holds in the sense of distributions.

Proposition 3.1.1. Assume that (HF1(n)) and (HG1(n+2)) hold for some $n\ge 1$. Then, for any initial condition $(t_0,m_0)\in[0,T]\times\mathcal{P}(\mathbb{T}^d)$, the MFG system (3.2) has a unique classical solution $(u,m)$, with $u\in\mathcal{C}^{1+\alpha/2,2+\alpha}$, and this solution satisfies
\[
\sup_{t_1\neq t_2} \frac{d_1(m(t_1), m(t_2))}{|t_2 - t_1|^{1/2}} + \sum_{|\ell|\le n} \|D^\ell u\|_{1+\alpha/2,\,2+\alpha} \le C_n, \tag{3.4}
\]
where the constant $C_n$ does not depend on $(t_0, m_0)$.

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 3


Moreover, $m$ has a continuous, positive density in $(0,T]\times\mathbb T^d$ and if, in addition, $m_0$ is absolutely continuous with a $C^{2+\alpha}$ positive density, then $m$ is of class $C^{1+\alpha/2,2+\alpha}$. Finally, the solution is stable, in the sense that, if $m_0^n$ converges to $m_0$ in $\mathcal P(\mathbb T^d)$, then the corresponding solutions $(u^n,m^n)$ converge to the solution $(u,m)$ of (3.2) in $C^{1,2}\times C^0([0,T],\mathcal P(\mathbb T^d))$.

The above result is due to Lasry and Lions [72, 74], in a more general framework. We reproduce the proof here for the sake of completeness. Note that further regularity of $F$ and $G$ improves the space regularity of $u$ but not its time regularity (as the time regularity of the coefficients depends on that of $m$; see Proposition 3.1.1). By (3.4), we have, under assumptions (HF1(n)) and (HG1(n+2)),
\[
\sup_{t\in[0,T]}\ \sup_{m\in\mathcal P(\mathbb T^d)}\ \|U(t,\cdot,m)\|_{n+2+\alpha}\le C_n.
\]

Proof. The existence of a solution relies on the Schauder fixed-point argument. Let $L$ be a bound on $D_pH$ and $X$ be the set of time-dependent measures $m\in C^0([t_0,T],\mathcal P(\mathbb T^d))$ such that
\[
d_1(m(s),m(t))\le L|t-s|+\sqrt2\,|t-s|^{1/2}\qquad\forall s,t\in[t_0,T].
\tag{3.5}
\]
Note that, by the Ascoli–Arzelà theorem, $X$ is a convex compact space for the uniform distance. Given $m\in X$, we consider the solution $u$ to the Hamilton–Jacobi equation
\[
-\partial_t u-\Delta u+H(x,Du)=F(x,m(t))\quad\text{in }(t_0,T)\times\mathbb T^d,
\qquad
u(T,x)=G(x,m(T))\quad\text{in }\mathbb T^d.
\]

Assumption (HF1(n)) (for $n\ge 1$) implies that $F$ is Lipschitz continuous in both variables. Then, by the definition of $X$, the map $(t,x)\mapsto F(x,m(t))$ is Hölder continuous in time and space and, more precisely, belongs to $C^{1/2,1}$ with a Hölder constant independent of $m$. Moreover, by assumption (HG1(n+2)), the map $x\mapsto G(x,m(T))$ is of class $C^{2+\alpha}$ with a constant independent of $m$. Thus, by the theory of Hamilton–Jacobi equations with Lipschitz continuous Hamiltonians ([70], Theorem V.6.1), there exists a unique classical solution $u$ to the above equation. Moreover, the $L^\infty$-norm of $u$ (by the maximum principle), the $L^\infty$-norm of $Du$ ([70], Theorem V.3.1), and the $C^{1+\alpha/2,2+\alpha}$-norm of $u$ ([70], Theorem V.5.4) are bounded independently of $m$. Let now $\tilde m$ be the weak solution to the Fokker–Planck equation
\[
\partial_t\tilde m-\Delta\tilde m-\mathrm{div}\big(\tilde m\,D_pH(x,Du)\big)=0\quad\text{in }(t_0,T)\times\mathbb T^d,
\qquad
\tilde m(t_0,\cdot)=m_0\quad\text{in }\mathbb T^d.
\]


A STARTER: THE FIRST-ORDER MASTER EQUATION


Following [20], the above equation has a unique solution in $C^0([t_0,T],\mathcal P(\mathbb T^d))$ in the sense of distributions. Moreover, as $D_pH$ is bounded by $L$, $\tilde m$ satisfies (3.5). In particular, $\tilde m$ belongs to $X$. This defines a map $\Phi:X\to X$ which, to any $m\in X$, associates the map $\tilde m$.

Next we claim that $\Phi$ is continuous. Indeed, let $(m_\ell)_{\ell\ge1}$ be a sequence in $X$ converging to $m\in X$. For each $\ell\ge1$, let $u_\ell$ and $\tilde m_\ell$ be the corresponding solutions to the Hamilton–Jacobi and the Fokker–Planck equations respectively. From our previous estimate, the maps $(u_\ell)_{\ell\ge1}$ are bounded in $C^{1+\alpha/2,2+\alpha}$. So, by continuity of $F$ and $G$, any cluster point of the $(u_\ell)_{\ell\ge1}$ is a solution associated with $m$. By uniqueness of the solution $u$ of this limit problem, the whole sequence $(u_\ell)_{\ell\ge1}$ converges to $u$. In the same way, $(\tilde m_\ell)_{\ell\ge1}$ converges in $X$ to the unique solution $\tilde m$ of the Fokker–Planck equation associated with $u$. This shows the continuity of $\Phi$. We conclude by the Schauder theorem that $\Phi$ has a fixed point $(u,m)$, which is a solution to the MFG system (3.2). Uniqueness of this solution is given by the Lasry–Lions monotonicity argument, developed in Lemma 3.1.2. Note, by uniform parabolicity and the strong maximum principle, that $m$ has a continuous positive density on $(0,T]\times\mathbb T^d$: see [20].

Let now $(u,m)$ be the solution to (3.2) and assume that $m_0$ has a smooth density. Then $m$ solves the linear parabolic equation
\[
\partial_t m-\Delta m-Dm\cdot D_pH(x,Du)-m\,\mathrm{div}\big(D_pH(x,Du)\big)=0\quad\text{in }(t_0,T)\times\mathbb T^d,
\qquad m(t_0,\cdot)=m_0\quad\text{in }\mathbb T^d,
\]
with $C^{\alpha/2,\alpha}$ coefficients and a $C^{2+\alpha}$ initial condition. Thus, by Schauder estimates, $m$ is of class $C^{1+\alpha/2,2+\alpha}$. If, moreover, $m_0$ is positive, then $m$ remains positive by the strong maximum principle.

Let us now show the improved space regularity for $u$ when $F$ and $G$ are smoother. Fix a direction $e\in\mathbb R^d$ and consider $v:=Du\cdot e$. Then $v$ solves, in the sense of distributions,
\[
-\partial_t v-\Delta v+D_xH(x,Du)\cdot e+D_pH(x,Du)\cdot Dv=D_xF(x,m(t))\cdot e\quad\text{in }(t_0,T)\times\mathbb T^d,
\qquad v(T,x)=D_xG(x,m(T))\cdot e\quad\text{in }\mathbb T^d.
\]
If (HF1(n)) and (HG1(n+2)) hold for $n\ge1$, then, as $u$ is bounded in $C^{1+\alpha/2,2+\alpha}$, the above equation has uniformly Hölder continuous coefficients and therefore $v$ is also bounded in $C^{1+\alpha/2,2+\alpha}$ by Schauder theory. This proves that $\|Du\|_{1+\alpha/2,2+\alpha}\le C_1$. By induction on $n$, one can show that, if (HF1(n)) and (HG1(n+2)) hold, then for any $l\in\mathbb N^d$ with $|l|=l_1+\dots+l_d=n$, the map $w:=D^l u$ is uniformly


bounded in $C^{\alpha/2,1+\alpha}$ and solves, in the sense of distributions, a linear equation of the form
\[
-\partial_t w-\Delta w+D_pH(x,Du)\cdot Dw+h_l=D_x^lF(x,m(t))\quad\text{in }(t_0,T)\times\mathbb T^d,
\qquad w(T,x)=D_x^lG(x,m(T))\quad\text{in }\mathbb T^d,
\]
where $h_l$ involves the derivatives of $H$ and $u$ up to order $n$ and is, therefore, uniformly Hölder continuous. We conclude as earlier that $w$ is bounded in $C^{1+\alpha/2,2+\alpha}$ and therefore that
\[
\sum_{|l|=n}\|D^l u\|_{1+\alpha/2,2+\alpha}\le C_n.
\]

The stability of the solution is a straightforward consequence of the foregoing estimates and of the uniqueness of the solution. □

Lemma 3.1.2 (Lasry–Lions monotonicity argument). Let $(u^i,m^i)$, $i=1,2$, be two solutions of the MFG system (3.2) with possibly different initial conditions $m^i(t_0)=m_0^i\in\mathcal P(\mathbb T^d)$ for $i=1,2$. Then
\[
\int_{t_0}^T\!\!\int_{\mathbb T^d}\big(H(x,Du^2)-H(x,Du^1)-D_pH(x,Du^1)\cdot(Du^2-Du^1)\big)m^1\,dx\,dt
\]
\[
+\int_{t_0}^T\!\!\int_{\mathbb T^d}\big(H(x,Du^1)-H(x,Du^2)-D_pH(x,Du^2)\cdot(Du^1-Du^2)\big)m^2\,dx\,dt
\]
\[
\le-\int_{\mathbb T^d}\big(u^1(t_0,x)-u^2(t_0,x)\big)\big(m_0^1(dx)-m_0^2(dx)\big),
\]
where $m^1$, $m^2$, $u^1$, and $u^2$ and their derivatives are evaluated at $(t,x)$.

Note that, if $m^1(t_0)=m^2(t_0)=m_0$, then the right-hand side of the above inequality vanishes. As $H$ is strictly convex and $m^1$ and $m^2$ are positive in $(t_0,T)\times\mathbb T^d$, this implies that $Du^1=Du^2$ in $(t_0,T)\times\mathbb T^d$. So $m^1$ and $m^2$ solve the same Fokker–Planck equation with a drift of class $C^{\alpha/2,1+\alpha}$: therefore $m^1=m^2$ and, as $u^1$ and $u^2$ satisfy the same Hamilton–Jacobi equation, they are also equal. This proves the uniqueness of the solution to (3.2). Another consequence of Lemma 3.1.2 is a stability property: as $Du^1$ and $Du^2$ are bounded by a constant $C_0$, the positivity of $D^2_{pp}H$ ensures the existence of a constant $C$ (depending on $C_0$) such that
\[
\int_{t_0}^T\!\!\int_{\mathbb T^d}|Du^2-Du^1|^2\,\big(m^1+m^2\big)\,dx\,dt
\le-C\int_{\mathbb T^d}\big(u^1(t_0,x)-u^2(t_0,x)\big)\big(m_0^1(dx)-m_0^2(dx)\big).
\]

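The quadratic lower bound behind this stability property is the standard uniform convexity estimate; the following short computation assumes (as in (2.4)) that $D^2_{pp}H(x,p)\ge C_0^{-1}I_d$ for $p$ in the range of $Du^1$, $Du^2$:

```latex
H(x,q)-H(x,p)-D_pH(x,p)\cdot(q-p)
  =\int_0^1 (1-s)\,(q-p)\cdot D^2_{pp}H\big(x,\,p+s(q-p)\big)\,(q-p)\,ds
  \ \ge\ \frac{1}{2C_0}\,|q-p|^2 .
```

Applied with $p=Du^1$, $q=Du^2$ (and symmetrically), each of the two integrands in Lemma 3.1.2 dominates $(2C_0)^{-1}|Du^2-Du^1|^2$, which gives the displayed stability inequality.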

Proof. We start the proof assuming that $m^1(t_0)$ and $m^2(t_0)$ have smooth densities, so that $u$ and $m$ are in $C^{1,2}$. The general result can then be obtained by approximation. One computes $\frac{d}{dt}\int_{\mathbb T^d}(u^1-u^2)(m^1-m^2)\,dx$ and finds, after integrating by parts and rearranging,
\[
\frac{d}{dt}\int_{\mathbb T^d}(u^1-u^2)(m^1-m^2)\,dx
=-\int_{\mathbb T^d}\big(H(x,Du^2)-H(x,Du^1)-D_pH(x,Du^1)\cdot(Du^2-Du^1)\big)m^1\,dx
\]
\[
-\int_{\mathbb T^d}\big(H(x,Du^1)-H(x,Du^2)-D_pH(x,Du^2)\cdot(Du^1-Du^2)\big)m^2\,dx
-\int_{\mathbb T^d}\big(F(x,m^1(t))-F(x,m^2(t))\big)(m^1-m^2)\,dx.
\]
One obtains the claimed inequality by integrating in time and using the monotonicity of $F$ and $G$. □
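As a numerical aside, the fixed-point map $\Phi$ from the existence proof can be imitated on a discretized problem. The sketch below uses purely illustrative assumptions not taken from the book: $d=1$, $H(x,p)=\frac12|p|^2$, a local coupling $F(x,m)=m(x)$, $G\equiv0$, and a short horizon. It alternates a backward explicit Euler sweep for the Hamilton–Jacobi equation with a forward sweep for the Fokker–Planck equation, with damping between iterates:

```python
import numpy as np

# Illustrative 1-D periodic discretization (assumed data, not from the book):
# H(x,p) = p^2/2, F(x,m) = m(x), G = 0, horizon T = 0.1.
Nx, Nt, T = 50, 1000, 0.1
dx, dt = 1.0 / Nx, 0.1 / 1000          # dt < dx^2/2: explicit schemes stable
x = np.arange(Nx) * dx

def lap(v):   # periodic discrete Laplacian
    return (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2

def grad(v):  # periodic central difference
    return (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)

def solve_hjb(m_path):
    """March -u_t - u_xx + |u_x|^2/2 = m backward from u(T) = 0."""
    u = np.zeros(Nx)
    u_path = [None] * (Nt + 1)
    u_path[Nt] = u
    for k in range(Nt - 1, -1, -1):
        u = u + dt * (lap(u) - 0.5 * grad(u)**2 + m_path[k + 1])
        u_path[k] = u
    return u_path

def solve_fp(u_path, m0):
    """March m_t - m_xx - (m u_x)_x = 0 forward from m(0) = m0."""
    m = m0.copy()
    m_path = [None] * (Nt + 1)
    m_path[0] = m
    for k in range(Nt):
        m = m + dt * (lap(m) + grad(m * grad(u_path[k])))
        m_path[k + 1] = m
    return m_path

m0 = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)
m0 /= m0.sum() * dx                     # density with unit mass
m_path = [m0] * (Nt + 1)                # initial guess: frozen measure
for it in range(60):                    # damped fixed-point iteration
    u_path = solve_hjb(m_path)
    new_path = solve_fp(u_path, m0)
    res = max(np.abs(a - b).max() for a, b in zip(new_path, m_path))
    m_path = [0.5 * a + 0.5 * b for a, b in zip(new_path, m_path)]
    if res < 1e-10:
        break

mass = m_path[Nt].sum() * dx            # central differencing conserves mass
```

On this short horizon the damped iteration settles quickly: with a monotone coupling and small $T$ the map behaves like a contraction, which is consistent with (though much weaker than) the uniqueness argument of Lemma 3.1.2.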

3.2 LIPSCHITZ CONTINUITY OF U

Proposition 3.2.1. Assume that (HF1(n+1)) and (HG1(n+2)) hold for some $n\ge1$. Let $m_0^1,m_0^2\in\mathcal P(\mathbb T^d)$, $t_0\in[0,T]$, and $(u^1,m^1)$, $(u^2,m^2)$ be the solutions of the MFG system (3.2) with initial conditions $(t_0,m_0^1)$ and $(t_0,m_0^2)$ respectively. Then
\[
\sup_{t\in[t_0,T]}\Big(d_1(m^1(t),m^2(t))+\|u^1(t,\cdot)-u^2(t,\cdot)\|_{n+2+\alpha}\Big)\le C_n\,d_1(m_0^1,m_0^2),
\]
for a constant $C_n$ independent of $t_0$, $m_0^1$, and $m_0^2$. In particular,
\[
\|U(t_0,\cdot,m_0^1)-U(t_0,\cdot,m_0^2)\|_{n+2+\alpha}\le C_n\,d_1(m_0^1,m_0^2).
\]

Proof. First step. To simplify the notation, we show the result for $t_0=0$. We use the Lasry–Lions monotonicity argument (Lemma 3.1.2): it implies that
\[
\int_0^T\!\!\int_{\mathbb T^d}|Du^1(t,y)-Du^2(t,y)|^2\,\big(m^1(t,y)+m^2(t,y)\big)\,dy\,dt
\le C\,\Big|\int_{\mathbb T^d}\big(u^1(0,y)-u^2(0,y)\big)\big(m_0^1(dy)-m_0^2(dy)\big)\Big|.
\]


By definition of $d_1$, the right-hand side can be estimated by
\[
\Big|\int_{\mathbb T^d}\big(u^1(0,y)-u^2(0,y)\big)\big(m_0^1(dy)-m_0^2(dy)\big)\Big|
\le C\,\|D(u^1-u^2)(0,\cdot)\|_\infty\,d_1(m_0^1,m_0^2).
\]
Hence
\[
\int_0^T\!\!\int_{\mathbb T^d}\big(m^1(t,y)+m^2(t,y)\big)\,|Du^1(t,y)-Du^2(t,y)|^2\,dy\,dt
\le C\,\|D(u^1-u^2)(0,\cdot)\|_\infty\,d_1(m_0^1,m_0^2).
\tag{3.6}
\]

Second step: Next we estimate $m^1-m^2$. To do so, let $(\Omega,\mathcal F,\mathbb P)$ be a standard probability space and $X_0^1$, $X_0^2$ be random variables on $\Omega$ with laws $m_0^1$ and $m_0^2$ respectively, such that $\mathbb E[|X_0^1-X_0^2|]=d_1(m_0^1,m_0^2)$. Let also $(X_t^1)_{t\in[0,T]}$, $(X_t^2)_{t\in[0,T]}$ be the solutions to
\[
dX_t^i=-D_pH\big(X_t^i,Du^i(t,X_t^i)\big)\,dt+\sqrt2\,dB_t,\qquad t\in[0,T],\ i=1,2,
\]
where $(B_t)_{t\in[0,T]}$ is a $d$-dimensional Brownian motion. Then the law of $X_t^i$ is $m^i(t)$ for any $t$. We have
\[
\mathbb E\big[|X_t^1-X_t^2|\big]\le\mathbb E\big[|X_0^1-X_0^2|\big]
+\mathbb E\Big[\int_0^t\Big|D_pH\big(X_s^1,Du^1(s,X_s^1)\big)-D_pH\big(X_s^2,Du^1(s,X_s^2)\big)\Big|
+\Big|D_pH\big(X_s^2,Du^1(s,X_s^2)\big)-D_pH\big(X_s^2,Du^2(s,X_s^2)\big)\Big|\,ds\Big].
\]
As the maps $x\mapsto D_pH(x,Du^1(s,x))$ and $p\mapsto D_pH(x,p)$ are Lipschitz continuous (see (2.4) and Proposition 3.1.1), we deduce
\[
\mathbb E\big[|X_t^1-X_t^2|\big]\le\mathbb E\big[|X_0^1-X_0^2|\big]
+C\int_0^t\mathbb E\big[|X_s^1-X_s^2|\big]\,ds
+C\int_0^t\!\!\int_{\mathbb T^d}|Du^1(s,x)-Du^2(s,x)|\,m^2(s,x)\,dx\,ds
\]
\[
\le d_1(m_0^1,m_0^2)+C\int_0^t\mathbb E\big[|X_s^1-X_s^2|\big]\,ds
+C\Big(\int_0^T\!\!\int_{\mathbb T^d}|Du^1(s,x)-Du^2(s,x)|^2\,m^2(s,x)\,dx\,ds\Big)^{1/2}.
\]
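The $1/2$ power in the last term comes from the Cauchy–Schwarz inequality with respect to the measure $m^2(s,x)\,dx\,ds$, which has total mass $T$ since $m^2(s,\cdot)$ is a probability density for every $s$:

```latex
\int_0^t\!\!\int_{\mathbb{T}^d} |Du^1-Du^2|\, m^2\,dx\,ds
\ \le\ \Big(\int_0^T\!\!\int_{\mathbb{T}^d} m^2\,dx\,ds\Big)^{1/2}
       \Big(\int_0^T\!\!\int_{\mathbb{T}^d} |Du^1-Du^2|^2\, m^2\,dx\,ds\Big)^{1/2}
\ =\ T^{1/2}\Big(\int_0^T\!\!\int_{\mathbb{T}^d} |Du^1-Du^2|^2\, m^2\,dx\,ds\Big)^{1/2},
```

and the last factor is controlled by (3.6).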


In view of (3.6) and Gronwall's lemma, we obtain, for any $t\in[0,T]$,
\[
\mathbb E\big[|X_t^1-X_t^2|\big]\le C\Big(d_1(m_0^1,m_0^2)+\|D(u^1-u^2)(0,\cdot)\|_\infty^{1/2}\,d_1(m_0^1,m_0^2)^{1/2}\Big).
\tag{3.7}
\]
As $d_1(m^1(t),m^2(t))\le\mathbb E[|X_t^1-X_t^2|]$, we then get
\[
\sup_{t\in[0,T]}d_1(m^1(t),m^2(t))\le C\Big(d_1(m_0^1,m_0^2)+\|D(u^1-u^2)(0,\cdot)\|_\infty^{1/2}\,d_1(m_0^1,m_0^2)^{1/2}\Big).
\tag{3.8}
\]

Third step: We now estimate the difference $w:=u^1-u^2$. We note that $w$ satisfies
\[
-\partial_t w(t,x)-\Delta w(t,x)+V(t,x)\cdot Dw(t,x)=R_1(t,x)\quad\text{in }[0,T]\times\mathbb T^d,
\qquad w(T,x)=R_T(x)\quad\text{in }\mathbb T^d,
\]

where, for $(t,x)\in[0,T]\times\mathbb T^d$,
\[
V(t,x)=\int_0^1 D_pH\big(x,sDu^1(t,x)+(1-s)Du^2(t,x)\big)\,ds,
\]
\[
R_1(t,x)=\int_0^1\!\!\int_{\mathbb T^d}\frac{\delta F}{\delta m}\big(x,sm^1(t)+(1-s)m^2(t),y\big)\big(m^1(t,y)-m^2(t,y)\big)\,dy\,ds,
\]
and
\[
R_T(x)=\int_0^1\!\!\int_{\mathbb T^d}\frac{\delta G}{\delta m}\big(x,sm^1(T)+(1-s)m^2(T),y\big)\big(m^1(T,y)-m^2(T,y)\big)\,dy\,ds.
\]

By assumption (HF1(n+1)) and inequality (3.8), we have, for any $t\in[0,T]$,
\[
\|R_1(t,\cdot)\|_{n+1+\alpha}
\le\int_0^1\Big\|D_y\frac{\delta F}{\delta m}\big(\cdot,sm^1(t)+(1-s)m^2(t),\cdot\big)\Big\|_{C^{n+1+\alpha}\times L^\infty}\,ds\ d_1\big(m^1(t),m^2(t)\big)
\le C\Big(d_1(m_0^1,m_0^2)+\|Dw(0,\cdot)\|_\infty^{1/2}\,d_1(m_0^1,m_0^2)^{1/2}\Big),
\]
and, in the same way (using assumption (HG1(n+2))),
\[
\|R_T\|_{n+2+\alpha}\le C\Big(d_1(m_0^1,m_0^2)+\|Dw(0,\cdot)\|_\infty^{1/2}\,d_1(m_0^1,m_0^2)^{1/2}\Big).
\]


On the other hand, $V(t,\cdot)$ is bounded in $C^{n+1+\alpha}$ in view of the regularity of $u^1$ and $u^2$ (Proposition 3.1.1). Then Lemma 3.2.2 states that
\[
\sup_{t\in[0,T]}\|w(t,\cdot)\|_{n+2+\alpha}
\le C\Big(\|R_T\|_{n+2+\alpha}+\sup_{t\in[0,T]}\|R_1(t,\cdot)\|_{n+1+\alpha}\Big)
\le C\Big(d_1(m_0^1,m_0^2)+\|Dw(0,\cdot)\|_\infty^{1/2}\,d_1(m_0^1,m_0^2)^{1/2}\Big).
\]
Absorbing the last term of the right-hand side into the left-hand side, we find
\[
\sup_{t\in[0,T]}\|w(t,\cdot)\|_{n+2+\alpha}\le C\,d_1(m_0^1,m_0^2),
\]
and, coming back to inequality (3.8), we also obtain
\[
\sup_{t\in[0,T]}d_1(m^1(t),m^2(t))\le C\,d_1(m_0^1,m_0^2). \qquad\square
\]
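The absorption step is an instance of Young's inequality. Writing $X:=\sup_{t}\|w(t,\cdot)\|_{n+2+\alpha}$ and $d:=d_1(m_0^1,m_0^2)$, and using $\|Dw(0,\cdot)\|_\infty\le X$:

```latex
X \ \le\ C d + C\,X^{1/2} d^{1/2}
  \ \le\ C d + \tfrac12 X + \tfrac{C^2}{2}\,d
\qquad\Longrightarrow\qquad
X \ \le\ \big(2C + C^2\big)\, d ,
```

since $C X^{1/2} d^{1/2}\le\frac12 X+\frac{C^2}{2}d$ by $ab\le\frac{a^2}{2}+\frac{b^2}{2}$ with $a=X^{1/2}$ and $b=C d^{1/2}$.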

In the proof we used the following estimate:

Lemma 3.2.2. Let $n\ge1$, $V$ be in $C^0([0,T],C^{n-1}(\mathbb T^d,\mathbb R^d))$ and $f$ be in $C^0([0,T],C^{n-1+\alpha})$. Then, for any $z_T\in C^{n+\alpha}$, the (backward) equation
\[
-\partial_t z-\Delta z+V(t,x)\cdot Dz=f(t,x)\quad\text{in }[0,T]\times\mathbb T^d,
\qquad z(T,x)=z_T(x)\quad\text{in }\mathbb T^d,
\tag{3.9}
\]
has a unique solution in $L^2([0,T],H^1(\mathbb T^d))$ and this solution satisfies
\[
\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+\alpha}\le C\Big(\|z_T\|_{n+\alpha}+\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1+\alpha}\Big),
\]
where $C$ depends on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n-1}$, $d$, and $\alpha$ only. If, in addition, $V$ belongs to $C^0([0,T],C^{n-1+\alpha}(\mathbb T^d,\mathbb R^d))$, then
\[
\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+\alpha}}{|t'-t|^{\alpha/2}}\le C\Big(\|z_T\|_{n+\alpha}+\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1+\alpha}\Big),
\]
where $C$ depends on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n-1+\alpha}$, $d$, and $\alpha$ only.

Proof. The existence and the uniqueness of a solution to (3.9) are well known (see Theorem II.4.2 of [70]). We now prove the estimates.


Step 1. We start the proof with the case $n=1$ and a homogeneous terminal datum $z_T=0$. Let $z$ solve (3.9) with $z_T=0$. By the maximum principle, we have $\|z\|_\infty\le C\|f\|_\infty$, where $C$ depends on $T$ only. Next we claim that there exists a constant $C$, depending on $\|V\|_\infty$, $\alpha$, and $d$ only, such that, if $z$ is the solution to (3.9) with $z_T=0$, then
\[
\|Dz\|_\infty\le C\|f\|_\infty.
\tag{3.10}
\]
Indeed, assume for a while that there exist $V_n$, $f_n$, and $z_n$, with $\|V_n\|_\infty\le M$ and $\|f_n\|_\infty\le1$, where $z_n$ solves
\[
-\partial_t z_n-\Delta z_n+V_n\cdot Dz_n=f_n\quad\text{in }[0,T]\times\mathbb T^d,
\qquad z_n(T,x)=0\quad\text{in }\mathbb T^d,
\]
with $k_n:=\|Dz_n\|_\infty\to+\infty$. We set $\tilde z_n:=z_n/k_n$ and $\tilde f_n:=f_n/k_n$. Then $\tilde z_n$ solves
\[
-\partial_t\tilde z_n-\Delta\tilde z_n=\tilde f_n-V_n\cdot D\tilde z_n\quad\text{in }[0,T]\times\mathbb T^d,
\qquad \tilde z_n(T,x)=0\quad\text{in }\mathbb T^d,
\]
with
\[
\|\tilde f_n-V_n\cdot D\tilde z_n\|_\infty\le1+M.
\]
The $L^p$ estimates on the heat potential (see (3.2) of Chapter 3 in [70]) imply that, for any $p\ge1$,
\[
\|(\partial_t\tilde z_n,D^2\tilde z_n)\|_{L^p((0,T)\times\mathbb T^d)}\le C\|\tilde f_n-V_n\cdot D\tilde z_n\|_{L^p((0,T)\times\mathbb T^d)}\le C(1+M),
\]
for some constant $C$ depending on $p$ and $d$ only. Then, by the Sobolev inequality (Lemma II.3.3 in [70]), we have, for any $\beta\in(0,1)$,
\[
\|D\tilde z_n\|_{C^{\beta/2,\beta}}\le C(1+M),
\]
for a constant $C$ depending on $\beta$ and $d$ only. On the other hand, multiplying the equation by $\tilde z_n$ and integrating in space and on the time interval $[t,T]$ (for $t\in[0,T]$) yields
\[
\int_{\mathbb T^d}|\tilde z_n(t)|^2\,dx+2\int_t^T\!\!\int_{\mathbb T^d}|D\tilde z_n|^2\,dx\,ds
=2\int_t^T\!\!\int_{\mathbb T^d}\tilde z_n\big(\tilde f_n-V_n\cdot D\tilde z_n\big)\,dx\,ds
\le\frac12\int_t^T\!\!\int_{\mathbb T^d}|\tilde f_n|^2\,dx\,ds
+\int_t^T\!\!\int_{\mathbb T^d}|D\tilde z_n|^2\,dx\,ds
+3\int_t^T\!\!\int_{\mathbb T^d}|\tilde z_n|^2\big(1+|V_n|^2\big)\,dx\,ds,
\]
so that
\[
\int_{\mathbb T^d}|\tilde z_n(t)|^2\,dx+\int_t^T\!\!\int_{\mathbb T^d}|D\tilde z_n|^2\,dx\,ds
\le C\|\tilde f_n\|_\infty^2+C\int_t^T\!\!\int_{\mathbb T^d}|\tilde z_n|^2\,dx\,ds,
\]
where the constant $C$ depends on $T$ and $M$. By the Gronwall lemma, we therefore obtain
\[
\sup_{t\in[0,T]}\int_{\mathbb T^d}|\tilde z_n(t,x)|^2\,dx+\int_0^T\!\!\int_{\mathbb T^d}|D\tilde z_n|^2\,dx\,dt\le C\|\tilde f_n\|_\infty^2.
\]
As $(\tilde f_n)$ tends to $0$ in $L^\infty$, $(D\tilde z_n)$ tends to $0$ in $L^2$. This is impossible since $(D\tilde z_n)$ is bounded in $C^{\beta/2,\beta}$ and satisfies $\|D\tilde z_n\|_\infty=1$. This proves our claim (3.10).

We now improve inequality (3.10) into a Hölder bound. Let $z$ be a solution to (3.9) with $z(T,\cdot)=0$. Then $z$ solves the heat equation with right-hand side given by $f-V\cdot Dz$. So, arguing as above, we have, thanks to (3.10) and for any $\beta\in(0,1)$,
\[
\|Dz\|_{C^{\beta/2,\beta}}\le C\|f-V\cdot Dz\|_\infty\le C\|f\|_\infty,
\tag{3.11}
\]

where the constant $C$ depends on $\|V\|_\infty$, $d$, $T$, and $\beta$ only.

Step 2. Next we prove the inequality for the derivatives of the solution $z$ of (3.9), still with a homogeneous terminal condition $z(T,\cdot)=0$. For $l\in\mathbb N^d$ with $|l|\le n-1$, the map $w:=D^l z$ solves
\[
-\partial_t w-\Delta w+V(t,x)\cdot Dw=D^l f(t,x)+g_l(t,x)\quad\text{in }[0,T]\times\mathbb T^d,
\qquad w(T,x)=0\quad\text{in }\mathbb T^d,
\]
where $g_l(t,x)$ is a linear combination of the space derivatives of $z$ up to order $|l|$ with coefficients proportional to space derivatives of $V$ up to order $|l|$. So, by (3.11), we have, for any $\beta\in(0,1)$,
\[
\|Dw\|_{C^{\beta/2,\beta}}\le C\|D^l f+g_l\|_\infty\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{|l|}+\sup_{t\in[0,T]}\|z(t,\cdot)\|_{|l|}\Big),
\]
where $C$ depends on $l$, $\beta$, $d$, and $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{|l|}$ only. By induction on $|l|$, the above inequality implies, for the choice $\beta=\alpha$, that
\[
\sup_{|l|\le n}\|D^l z\|_{C^{\alpha/2,\alpha}}\le C\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1},
\tag{3.12}
\]
where $C$ depends on $n$, $\alpha$, $d$, and $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n-1}$ only.
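The sup-norm bound from the maximum principle used in Step 1 is easy to observe numerically on a model problem. The sketch below uses assumed data (not from the book): $d=1$, $V(x)=\sin 2\pi x$, $f(x)=\cos 2\pi x$, $T=1$. It marches (3.9) backward with an explicit scheme whose step size makes it monotone, so the discrete solution obeys $\|z\|_\infty\le T\|f\|_\infty$:

```python
import numpy as np

# Backward explicit Euler for -z_t - z_xx + V z_x = f, z(T) = 0, on the
# 1-D torus, with assumed data V = sin(2 pi x), f = cos(2 pi x), T = 1.
Nx, Nt, T = 64, 20000, 1.0
dx, dt = 1.0 / Nx, 1.0 / 20000          # dt/dx^2 ~ 0.2: monotone scheme
xs = np.arange(Nx) * dx
V = np.sin(2.0 * np.pi * xs)
f = np.cos(2.0 * np.pi * xs)            # ||f||_inf = 1

z = np.zeros(Nx)                        # terminal condition z(T) = 0
for _ in range(Nt):                     # march from t = T down to t = 0
    zx = (np.roll(z, -1) - np.roll(z, 1)) / (2.0 * dx)
    zxx = (np.roll(z, -1) - 2.0 * z + np.roll(z, 1)) / dx**2
    z = z + dt * (zxx - V * zx + f)     # z(t - dt) from z(t)

sup_z = np.abs(z).max()                 # maximum principle: <= T * ||f||_inf
```

Each step writes the new value as a convex combination of neighboring values plus $dt\,f$, so the bound $\sup|z(t)|\le(T-t)\|f\|_\infty$ propagates exactly at the discrete level; the observed $\sup|z|$ is in fact much smaller, because diffusion averages the oscillating source.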

Step 3. We now remove the assumption that zT = 0. We write z as the sum z = z1 + z2 , where z1 solves the heat equation with terminal condition zT and z2


solves equation (3.9) with the right-hand side $f-V\cdot Dz_1$ and homogeneous terminal condition $z_2(T,\cdot)=0$. For any $l\in\mathbb N^d$ with $|l|\le n$, the derivative $D^l z_1$ solves the heat equation with terminal condition $D^l z_T$. So, by the classical Hölder regularity of the solution of the heat equation (inequality III.2.2 of [70]), we have
\[
\|D^l z_1\|_{C^{\alpha/2,\alpha}}\le C\|D^l z_T\|_\alpha,
\]
where $C$ depends only on $\alpha$, $d$, and $|l|$. By (3.12) and the estimate on $z_1$, we also have
\[
\sup_{|l|\le n}\|D^l z_2\|_{C^{\alpha/2,\alpha}}
\le C\sup_{t\in[0,T]}\|f(t,\cdot)-V(t,\cdot)\cdot Dz_1(t,\cdot)\|_{n-1}
\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1}+\|z_T\|_{n+\alpha}\Big),
\]
where $C$ depends on $n$, $\alpha$, $d$, and $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n-1}$ only. Putting together the estimates for $z_1$ and $z_2$ gives
\[
\sup_{|l|\le n}\|D^l z\|_{C^{\alpha/2,\alpha}}\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1}+\|z_T\|_{n+\alpha}\Big).
\tag{3.13}
\]
In particular,
\[
\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+\alpha}\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1}+\|z_T\|_{n+\alpha}\Big),
\tag{3.14}
\]

where $C$ is as above.

Step 4. We finally check the time regularity. For any fixed $y\in\mathbb R^d$, the difference $w(t,x):=z(t,x+y)-z(t,x)$ satisfies
\[
-\partial_t w-\Delta w+V(t,x)\cdot Dw=\big(f(t,x+y)-f(t,x)\big)-\big(V(t,x+y)-V(t,x)\big)\cdot Dz(t,x+y)\quad\text{in }[0,T]\times\mathbb T^d,
\]
\[
w(T,x)=z_T(x+y)-z_T(x)\quad\text{in }\mathbb T^d.
\]
From (3.13) we therefore have
\[
\sup_{|l|\le n}\|D^l w\|_{C^{\alpha/2,\alpha}}
\le C\Big(\|z_T(\cdot+y)-z_T(\cdot)\|_{n+\alpha}
+\sup_{t\in[0,T]}\|f(t,\cdot+y)-f(t,\cdot)\|_{n-1}
+\sup_{t\in[0,T]}\big\|\big(V(t,\cdot+y)-V(t,\cdot)\big)\cdot Dz(t,\cdot+y)\big\|_{n-1}\Big)
\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1+\alpha}+\|z_T\|_{n+\alpha}\Big)|y|^\alpha,
\]
where we used the Hölder regularity of $V$ in space and the bound on $Dz$ in (3.14) for the last inequality. Note that the constant $C$ now also depends on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n-1+\alpha}$. Using the time Hölder regularity of $w$, one then obtains
\[
\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+\alpha}}{|t'-t|^{\alpha/2}}
\le C\Big(\sup_{t\in[0,T]}\|f(t,\cdot)\|_{n-1+\alpha}+\|z_T\|_{n+\alpha}\Big). \qquad\square
\]

3.3 ESTIMATES ON A LINEAR SYSTEM

In the sequel we will repeatedly encounter forward–backward systems of linear equations. To minimize the computations, we collect in this section estimates on such systems, which have the generic form
\[
\begin{cases}
\text{(i)}\ \ -\partial_t z-\Delta z+V(t,x)\cdot Dz=\dfrac{\delta F}{\delta m}(x,m(t))(\rho(t))+b(t,x) & \text{in }[t_0,T]\times\mathbb T^d,\\[4pt]
\text{(ii)}\ \ \partial_t\rho-\Delta\rho-\mathrm{div}(\rho V)-\mathrm{div}(m\Gamma Dz+c)=0 & \text{in }[t_0,T]\times\mathbb T^d,\\[4pt]
\text{(iii)}\ \ z(T,x)=\dfrac{\delta G}{\delta m}(x,m(T))(\rho(T))+z_T(x),\qquad\rho(t_0)=\rho_0 & \text{in }\mathbb T^d,
\end{cases}
\tag{3.15}
\]
where $V$ is a given vector field in $C^0([0,T],C^{n+1+\alpha}(\mathbb T^d;\mathbb R^d))$ (for some $n\ge0$), $m$ is a continuous time-dependent family of probability measures, i.e., $m\in C^0([t_0,T],\mathcal P(\mathbb T^d))$, $\Gamma:[t_0,T]\times\mathbb T^d\to\mathbb R^{d\times d}$ is a continuous map with values in the family of symmetric matrices, and the maps $b:[t_0,T]\times\mathbb T^d\to\mathbb R$, $c:[t_0,T]\times\mathbb T^d\to\mathbb R^d$, and $z_T:\mathbb T^d\to\mathbb R$ are given. Above, we used the shortened notation $\frac{\delta F}{\delta m}(x,m(t))(\rho(t))$ for the duality bracket between $\frac{\delta F}{\delta m}(x,m(t),\cdot)$ and $\rho(t)$, and similarly for $\frac{\delta G}{\delta m}(x,m(T))(\rho(T))$. In (i), $z$ and its derivatives are evaluated at $(t,x)\in[t_0,T]\times\mathbb T^d$. Moreover, we always assume that there is a constant $\bar C>0$ such that
\[
\forall t,t'\in[t_0,T],\quad d_1(m(t),m(t'))\le\bar C|t-t'|^{1/2},
\qquad
\forall(t,x)\in[t_0,T]\times\mathbb T^d,\quad \bar C^{-1}I_d\le\Gamma(t,x)\le\bar C I_d.
\tag{3.16}
\]
Typically, $V(t,x)=D_pH(x,Du(t,x))$ and $\Gamma(t,x)=D^2_{pp}H(x,Du(t,x))$ for some solution $(u,m)$ of the MFG system (3.2) starting from some initial data $m(t_0)=m_0$. Recall that the derivative $Du$ is globally Lipschitz continuous with a constant independent of $(t_0,m_0)$ (Proposition 3.1.1), so that assumption (2.4) gives the existence of a constant $\bar C$ for which (3.16) holds. We also note for later use that this constant does not depend on $(t_0,m_0)$. We establish the existence of a solution and its regularity under the assumption that $b$ and $c$ belong to $C^0([0,T],C^{n+1+\alpha})$ and $C^0([0,T],(C^{n+\alpha}(\mathbb T^d;\mathbb R^d))')$ respectively, and that $z_T$ and $\rho_0$ belong respectively to $C^{n+2+\alpha}$ and to $(C^{n+1+\alpha})'$.


Lemma 3.3.1. Under assumptions (HF1(n+1)) and (HG1(n+2)), system (3.15) has a unique solution $(z,\rho)$ in $C^0([t_0,T],C^{n+2+\alpha}\times(C^{n+1+\alpha})')$, with
\[
\sup_{t\in[t_0,T]}\|z(t,\cdot)\|_{n+2+\alpha}+\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+2+\alpha}}{|t'-t|^{\alpha/2}}\le C_nM
\tag{3.17}
\]
and
\[
\sup_{t\in[t_0,T]}\|\rho(t)\|_{-(n+1+\alpha)}+\sup_{t\neq t'}\frac{\|\rho(t')-\rho(t)\|_{-(n+1+\alpha)}}{|t'-t|^{\alpha/2}}\le C_nM,
\tag{3.18}
\]
where the constant $C_n$ depends on $n$, $T$, $\alpha$, $\sup_{t\in[t_0,T]}\|V(t,\cdot)\|_{n+1+\alpha}$, the constant $\bar C$ in (3.16), $F$, and $G$, and where $M$ is given by
\[
M:=\|z_T\|_{n+2+\alpha}+\|\rho_0\|_{-(n+1+\alpha)}+\sup_{t\in[t_0,T]}\Big(\|b(t,\cdot)\|_{n+1+\alpha}+\|c(t)\|_{-(n+\alpha)}\Big),
\tag{3.19}
\]
where $\|c(t)\|_{-(n+\alpha)}$ stands for the supremum of $\|(c(t))_i\|_{-(n+\alpha)}$ over $i\in\{1,\dots,d\}$, $(c(t))_i$ denoting the $i$th coordinate of $c(t)$.

Proof. Without loss of generality we assume $t_0=0$. We prove the existence of a solution to (3.15) by a Leray–Schauder argument. The proof requires several steps, the key point being precisely to prove the estimates (3.17) and (3.18). In Step 1, we suppose that $m$, $c$, and $\rho_0$ are smooth and prove the existence of the solution under this additional condition. We remove this assumption in the very last step.

Step 1: Definition of a map $\mathrm T$ satisfying the Leray–Schauder theorem. We set $X:=C^0([0,T],(C^{n+1+\alpha})')$. For $\rho\in X$, we define $\mathrm T(\rho)$ as follows. First, we call $z$ the solution to
\[
-\partial_t z-\Delta z+V(t,x)\cdot Dz=\frac{\delta F}{\delta m}(x,m(t))(\rho(t))+b(t,x)\quad\text{in }[0,T]\times\mathbb T^d,
\qquad z(T)=\frac{\delta G}{\delta m}(x,m(T))(\rho(T))+z_T\quad\text{in }\mathbb T^d.
\tag{3.20}
\]
From Lemma 3.2.2, there exists a unique solution $z$ in $C^{\alpha/2}([0,T],C^{n+2+\alpha})$ to the above equation and it satisfies


\[
\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+2+\alpha}+\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+2+\alpha}}{|t'-t|^{\alpha/2}}
\le C\Big(\Big\|\frac{\delta G}{\delta m}(\cdot,m(T))(\rho(T))\Big\|_{n+2+\alpha}+\|z_T\|_{n+2+\alpha}
+\sup_{t\in[0,T]}\Big\|\frac{\delta F}{\delta m}(\cdot,m(t))(\rho(t))\Big\|_{n+1+\alpha}+\sup_{t\in[0,T]}\|b(t,\cdot)\|_{n+1+\alpha}\Big)
\]
\[
\le C\Big(\|z_T\|_{n+2+\alpha}+\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}+\|b\|\Big),
\tag{3.21}
\]
where the constant $C$ depends on the constants in (HF1(n+1)) and (HG1(n+2)), on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n+1+\alpha}$, $d$, and $\alpha$, and where we have set, for simplicity, $\|b\|:=\sup_{t\in[0,T]}\|b(t,\cdot)\|_{n+1+\alpha}$.

Next we define $\tilde\rho$ as the solution in the sense of distributions to
\[
\partial_t\tilde\rho-\Delta\tilde\rho-\mathrm{div}(\tilde\rho V)-\mathrm{div}(m\Gamma Dz+c)=0\quad\text{in }[0,T]\times\mathbb T^d,
\qquad\tilde\rho(0)=\rho_0\quad\text{in }\mathbb T^d.
\tag{3.22}
\]
As $\rho_0$, $m$, and $c$ are assumed to be smooth in this step and as $V$ and $\Gamma$ satisfy (3.16), there exists a unique solution $\tilde\rho$ in $L^2([0,T],H^1(\mathbb T^d))$ to the above parabolic equation (Theorem II.4.1 of [70]). Furthermore, as the coefficients of this parabolic equation are bounded, $\tilde\rho$ is bounded in $C^{\beta/2,\beta}$ for some $\beta\in(0,1)$, where the bounds depend on $\rho_0$, $c$, and $m$ (but not on $\rho$, since $\|Dz\|_\infty$ does not depend on $\rho$). We set $\mathrm T(\rho):=\tilde\rho$ and note that $\mathrm T$ maps $X$ into $X$ since $C^{\beta/2,\beta}$ is compactly embedded in $X$.

We now check that $\mathrm T$ is continuous and compact on $X$. Let $(\rho_n)$ converge to $\rho$ in $X$. We denote by $z_n$ (respectively $z$) the solution to (3.20) associated with $\rho_n$ (respectively $\rho$). We also set $\tilde\rho_n:=\mathrm T(\rho_n)$ and $\tilde\rho:=\mathrm T(\rho)$. Note that the maps $(t,x)\mapsto\frac{\delta F}{\delta m}(x,m(t))(\rho_n(t))$ and $(t,x)\mapsto\frac{\delta G}{\delta m}(x,m(T))(\rho_n(T))$ converge uniformly to the maps $(t,x)\mapsto\frac{\delta F}{\delta m}(x,m(t))(\rho(t))$ and $(t,x)\mapsto\frac{\delta G}{\delta m}(x,m(T))(\rho(T))$ respectively. Hence $(z_n)$ converges uniformly to $z$. By the bounds on the derivatives of $z_n$, $(Dz_n)$ also converges uniformly to $Dz$. The stability property of equation (3.22) then implies that the sequence $(\tilde\rho_n)$ converges to $\tilde\rho:=\mathrm T(\rho)$ in $L^2([0,T],H^1(\mathbb T^d))$, and thus in $X$ by the uniform Hölder estimate on the solution to equation (3.22). The same uniform Hölder estimate implies that the map $\mathrm T$ is compact.

In Steps 2 and 3, we show that, if $\rho=\sigma\mathrm T(\rho)$ for some $(\rho,\sigma)\in X\times[0,1]$, then $\rho$ satisfies (3.18). This estimate proves that the norm in $X$ of $\rho$ is bounded independently of $\sigma$. Then we can conclude by the Leray–Schauder theorem the existence of a fixed point for $\mathrm T$, which, by construction, yields a solution to (3.15).


From now on we fix $(\rho,\sigma)\in X\times[0,1]$ such that $\rho=\sigma\mathrm T(\rho)$ and let $z$ be the solution to (3.20). Note that the pair $(z,\rho)$ satisfies
\[
\begin{cases}
-\partial_t z-\Delta z+V(t,x)\cdot Dz=\dfrac{\delta F}{\delta m}(x,m(t))(\rho(t))+b(t,x) & \text{in }[0,T]\times\mathbb T^d,\\[4pt]
\partial_t\rho-\Delta\rho-\mathrm{div}(\rho V)-\sigma\,\mathrm{div}(m\Gamma Dz+c)=0 & \text{in }[0,T]\times\mathbb T^d,\\[4pt]
z(T)=\dfrac{\delta G}{\delta m}(x,m(T))(\rho(T))+z_T,\qquad\rho(0)=\sigma\rho_0 & \text{in }\mathbb T^d.
\end{cases}
\]
Our goal is to show that (3.17) and (3.18) hold for $z$ and $\rho$ respectively. In Steps 2 and 3, we will not use the smoothness of $m$, $c$, and $\rho_0$, so that these estimates also hold for any solution $(z,\rho)$ to the above system, and, in particular, for solutions of (3.15).

Step 2: Estimate of $\rho$. Let us recall that, for any integer $n$, we abbreviate the notation for the duality product $\langle\cdot,\cdot\rangle_{(C^{n+\alpha})'\times C^{n+\alpha}}$ as $\langle\cdot,\cdot\rangle_{n+\alpha}$. By duality, we have
\[
\langle\rho(T),z(T,\cdot)\rangle_{n+1+\alpha}-\sigma\langle\rho_0,z(0,\cdot)\rangle_{n+1+\alpha}
=-\int_0^T\Big\langle\rho(t),\frac{\delta F}{\delta m}(\cdot,m(t))\big(\rho(t)\big)+b(t,\cdot)\Big\rangle_{n+1+\alpha}\,dt
\]
\[
-\sigma\int_0^T\!\!\int_{\mathbb T^d}Dz(t,x)\cdot\Gamma(t,x)Dz(t,x)\,m(t,dx)\,dt
-\sigma\int_0^T\langle c(t),Dz(t,\cdot)\rangle_{n+\alpha}\,dt.
\]
Using the terminal condition of $z$ and the regularity and the monotonicity of $F$ and $G$, we therefore obtain
\[
\sigma\int_0^T\!\!\int_{\mathbb T^d}\Gamma(t,x)Dz(t,x)\cdot Dz(t,x)\,m(t,dx)\,dt
\le\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}\big(\|z_T\|_{n+1+\alpha}+\|b\|\big)
+\sigma\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}\big(\|\rho_0\|_{-(n+1+\alpha)}+\|c\|\big),
\tag{3.23}
\]
where we have set $\|b\|:=\sup_{t\in[0,T]}\|b(t,\cdot)\|_{n+1+\alpha}$ and $\|c\|:=\sup_{t\in[0,T]}\|c(t)\|_{-(n+\alpha)}$.

We now estimate $\rho$ by duality. Let $\tau\in(0,T]$, $\xi\in C^{n+1+\alpha}$, and $w$ be the solution to the backward equation
\[
-\partial_t w-\Delta w+V(t,x)\cdot Dw=0\quad\text{in }[0,\tau]\times\mathbb T^d,
\qquad w(\tau)=\xi\quad\text{in }\mathbb T^d.
\tag{3.24}
\]


Lemma 3.2.2 states that
\[
\sup_{t\in[0,T]}\|w(t,\cdot)\|_{n+1+\alpha}+\sup_{t\neq t'}\frac{\|w(t',\cdot)-w(t,\cdot)\|_{n+1+\alpha}}{|t'-t|^{\alpha/2}}\le C\|\xi\|_{n+1+\alpha},
\tag{3.25}
\]
where $C$ depends on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n+\alpha}$. We have
\[
\langle\rho(\tau),\xi\rangle_{n+1+\alpha}=\sigma\langle\rho_0,w(0)\rangle_{n+1+\alpha}
-\sigma\int_0^\tau\!\!\int_{\mathbb T^d}Dw(t,x)\cdot\Gamma(t,x)Dz(t,x)\,m(t,dx)\,dt
-\sigma\int_0^\tau\langle c(t),Dw(t,\cdot)\rangle_{n+\alpha}\,dt,
\tag{3.26}
\]
where the right-hand side is bounded above by
\[
\sigma\|w(0)\|_{n+1+\alpha}\|\rho_0\|_{-(n+1+\alpha)}+\sigma\sup_{t\in[0,T]}\|Dw(t,\cdot)\|_{n+\alpha}\,\|c\|
+\sigma\Big(\int_0^T\!\!\int_{\mathbb T^d}\Gamma Dw\cdot Dw\,m(t,dx)\,dt\Big)^{1/2}
\Big(\int_0^T\!\!\int_{\mathbb T^d}\Gamma Dz\cdot Dz\,m(t,dx)\,dt\Big)^{1/2}
\]
\[
\le C\Big[\sigma\|\xi\|_{n+1+\alpha}\|\rho_0\|_{-(n+1+\alpha)}+\sigma\sup_{t\in[0,T]}\|Dw(t,\cdot)\|_{n+\alpha}\,\|c\|
+\sigma\|Dw\|_\infty\Big(\int_0^T\!\!\int_{\mathbb T^d}\Gamma Dz\cdot Dz\,m(t,dx)\,dt\Big)^{1/2}\Big].
\]
Using (3.23) and (3.25), we therefore obtain
\[
\langle\rho(\tau),\xi\rangle_{n+1+\alpha}\le C\|\xi\|_{n+1+\alpha}\Big[\sigma\|\rho_0\|_{-(n+1+\alpha)}+\sigma\|c\|
+\sigma^{1/2}\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}^{1/2}\big(\|z_T\|_{n+1+\alpha}^{1/2}+\|b\|^{1/2}\big)
+\sigma\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big].
\]
Taking the supremum over $\xi$ with $\|\xi\|_{C^{n+1+\alpha}}\le1$ and over $\tau\in[0,T]$ yields
\[
\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}\le C\Big[\sigma\|\rho_0\|_{-(n+1+\alpha)}+\sigma\|c\|
+\sigma^{1/2}\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}^{1/2}\big(\|z_T\|_{n+1+\alpha}^{1/2}+\|b\|^{1/2}\big)
+\sigma\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big].
\]
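The identity (3.26) is the usual forward–backward duality; assuming smooth data, it follows from differentiating the bracket along the two equations:

```latex
\frac{d}{dt}\int_{\mathbb{T}^d}\rho(t)\,w(t)\,dx
 =\int_{\mathbb{T}^d}\big(\Delta\rho+\operatorname{div}(\rho V)
   +\sigma\operatorname{div}(m\Gamma Dz+c)\big)\,w\,dx
  +\int_{\mathbb{T}^d}\rho\,\big(-\Delta w+V\cdot Dw\big)\,dx
 =-\sigma\int_{\mathbb{T}^d}\big(m\,\Gamma Dz+c\big)\cdot Dw\,dx,
```

after integration by parts, the diffusion and transport terms cancelling. Integrating from $t=0$ (where $\rho(0)=\sigma\rho_0$) to $t=\tau$ (where $w(\tau)=\xi$) gives (3.26); the duality brackets replace the integrals when $\rho_0$ and $c$ are merely distributions.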


Rearranging and using the definition of $M$ in (3.19), we obtain
\[
\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}\le C\Big[\sigma M+\sigma\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big].
\tag{3.27}
\]
We can use the same kind of argument to obtain the regularity of $\rho$ with respect to the time variable: by (3.26), we have, for any $\tau\in[0,T]$ and $t\in[0,\tau]$, using the Hölder estimate in time in (3.25),
\[
\langle\rho(\tau)-\rho(t),\xi\rangle_{n+1+\alpha}
=\langle\rho(t),w(t)-w(\tau)\rangle_{n+1+\alpha}
-\sigma\int_t^\tau\!\!\int_{\mathbb T^d}Dw(s,x)\cdot\Gamma(s,x)Dz(s,x)\,m(s,dx)\,ds
-\sigma\int_t^\tau\langle c(s),Dw(s,\cdot)\rangle_{n+\alpha}\,ds
\]
\[
\le C(\tau-t)^{\alpha/2}\|\xi\|_{n+1+\alpha}\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}
+(\tau-t)^{1/2}\|Dw\|_\infty\Big(\int_0^T\!\!\int_{\mathbb T^d}\Gamma Dz\cdot Dz\,m(s,dx)\,ds\Big)^{1/2}
+(\tau-t)\|c\|\sup_{t\in[0,T]}\|w(t,\cdot)\|_{n+1+\alpha}.
\]
Plugging (3.27) into (3.23), we get that the square root of the left-hand side of (3.23) satisfies the same bound as the right-hand side of (3.27). Therefore, together with (3.25), we obtain
\[
\langle\rho(\tau)-\rho(t),\xi\rangle_{n+1+\alpha}
\le C(\tau-t)^{\alpha/2}\|\xi\|_{n+1+\alpha}\Big[M+\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big].
\]
Dividing by $(\tau-t)^{\alpha/2}$ and taking the supremum over $\xi$ yields
\[
\sup_{t\neq t'}\frac{\|\rho(t')-\rho(t)\|_{-(n+1+\alpha)}}{|t'-t|^{\alpha/2}}
\le C\Big[M+\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big].
\tag{3.28}
\]


Step 3: Estimate of $z$. In view of the equation satisfied by $z$, we have, by Lemma 3.2.2,
\[
\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+2+\alpha}+\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+2+\alpha}}{|t'-t|^{\alpha/2}}
\le C\Big(\sup_{t\in[0,T]}\Big\|\frac{\delta F}{\delta m}\big(\cdot,m(t)\big)(\rho(t))+b(t,\cdot)\Big\|_{n+1+\alpha}
+\Big\|\frac{\delta G}{\delta m}\big(\cdot,m(T)\big)(\rho(T))+z_T\Big\|_{n+2+\alpha}\Big),
\]
where $C$ depends on $\sup_{t\in[0,T]}\|V(t,\cdot)\|_{n+1+\alpha}$. Assumptions (HF1(n+1)) and (HG1(n+2)) on $F$ and $G$ imply that the right-hand side of the previous inequality is bounded above by
\[
C\Big(\sup_{t\in[0,T]}\|\rho(t)\|_{-(n+1+\alpha)}+\|b\|+\|\rho(T)\|_{-(n+2+\alpha)}+\|z_T\|_{n+2+\alpha}\Big).
\]
Estimate (3.27) on $\rho$ then implies (since $\|\rho(T)\|_{-(n+2+\alpha)}\le\|\rho(T)\|_{-(n+1+\alpha)}$)
\[
\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+2+\alpha}+\sup_{t\neq t'}\frac{\|z(t',\cdot)-z(t,\cdot)\|_{n+2+\alpha}}{|t'-t|^{\alpha/2}}
\le C\Big(M+\sup_{t\in[0,T]}\|z(t,\cdot)\|_{n+1+\alpha}^{1/2}\big(\|\rho_0\|_{-(n+1+\alpha)}^{1/2}+\|c\|^{1/2}\big)\Big).
\]

Rearranging, we obtain (3.17). Plugging this estimate into (3.27) and (3.28) then gives (3.18).

Step 4: The case of general data. We now prove that the results are also valid for general data. Let $(m_k)$, $(c_k)$, and $(\rho_0^k)$ be smooth and converge to $m$, $c$, and $\rho_0$ in $C^0([0,T],\mathcal P(\mathbb T^d))$, $C^0([0,T],(C^{n+\alpha})')$, and $(C^{n+1+\alpha})'$ respectively. Let $(z_k,\rho_k)$ be the solution to (3.15) associated with $m_k$, $c_k$, and $\rho_0^k$. Note that $z_k$ and $\rho_k$ satisfy inequalities of the form (3.17) and (3.18) with a right-hand side $C_nM_k$ bounded uniformly with respect to $k$. For $k\neq k'$, the difference $(z_{k,k'},\rho_{k,k'}):=(z_k-z_{k'},\rho_k-\rho_{k'})$ is a solution to (3.15) associated with
\[
b_{k,k'}(t,x):=\frac{\delta F}{\delta m}(x,m_k(t))(\rho_{k'}(t))-\frac{\delta F}{\delta m}(x,m_{k'}(t))(\rho_{k'}(t)),
\]
\[
c_{k,k'}:=(m_k-m_{k'})\Gamma(t,x)Dz_{k'}+c_k-c_{k'},
\qquad
z_T^{k,k'}(x):=\frac{\delta G}{\delta m}(x,m_k(T))(\rho_{k'}(T))-\frac{\delta G}{\delta m}(x,m_{k'}(T))(\rho_{k'}(T)),
\]
and
\[
\rho_0^{k,k'}:=\rho_0^k-\rho_0^{k'}.
\]
By (3.17), we have
\[
\sup_{t\in[0,T]}\|z_{k,k'}(t,\cdot)\|_{n+2+\alpha}+\sup_{t\neq t'}\frac{\|z_{k,k'}(t',\cdot)-z_{k,k'}(t,\cdot)\|_{n+2+\alpha}}{|t'-t|^{\alpha/2}}\le C_nM_{k,k'},
\]
where $M_{k,k'}$ is given by
\[
M_{k,k'}:=\|z_T^{k,k'}\|_{n+2+\alpha}+\|\rho_0^{k,k'}\|_{-(n+1+\alpha)}+\sup_{t\in[0,T]}\Big(\|b_{k,k'}(t,\cdot)\|_{n+1+\alpha}+\|c_{k,k'}(t)\|_{-(n+\alpha)}\Big).
\]
So, by assumptions (HF1(n+1)) and (HG1(n+2)), $M_{k,k'}$ is bounded above by
\[
\mathrm{Lip}_{n+2}\Big(\frac{\delta G}{\delta m}\Big)\,d_1\big(m_k(T),m_{k'}(T)\big)\,\|\rho_{k'}(T)\|_{-(n+2+\alpha)}+\|\rho_0^k-\rho_0^{k'}\|_{-(n+1+\alpha)}
\]
\[
+\mathrm{Lip}_{n+1}\Big(\frac{\delta F}{\delta m}\Big)\sup_{t\in[0,T]}d_1\big(m_k(t),m_{k'}(t)\big)\,\sup_{t\in[0,T]}\|\rho_{k'}(t)\|_{-(n+1+\alpha)}
+\sup_{t\in[0,T]}d_1\big(m_k(t),m_{k'}(t)\big)\,\|\Gamma\|_\infty\|Dz_{k'}\|_\infty
+\sup_{t\in[0,T]}\|c_k(t)-c_{k'}(t)\|_{-(n+\alpha)},
\]
and thus by
\[
C\Big(1+\sup_k M_k\Big)\Big(\|\rho_0^k-\rho_0^{k'}\|_{-(n+1+\alpha)}+\sup_{t\in[0,T]}\big(d_1(m_k(t),m_{k'}(t))+\|c_k(t)-c_{k'}(t)\|_{-(n+\alpha)}\big)\Big).
\]
Therefore $(z_k)$ is a Cauchy sequence in $C^{\alpha/2}([0,T],C^{n+2+\alpha})$. Using in the same way estimate (3.18), we find that $(\rho_k)$ is a Cauchy sequence in the space $C^{\alpha/2}([0,T],(C^{n+1+\alpha})')$. One then easily checks that the limit $(z,\rho)$ of $(z_k,\rho_k)$ solves system (3.15) associated with $m$, $b$, $c$, $z_T$, and $\rho_0$, which yields the existence of a solution. Uniqueness comes from the fact that system (3.15) is affine and that estimates (3.17) and (3.18) hold. □

3.4 DIFFERENTIABILITY OF U WITH RESPECT TO THE MEASURE

In this section we show that the map U has a derivative with respect to m. To do so, we linearize the MFG system (3.2). Let us fix (t0 , m0 ) ∈ [0, T ] × P(Td ) and let (m, u) be the solution to the MFG system (3.2) with initial condition m(t0 ) = m0 . Recall that, by definition, U (t0 , x, m0 ) = u(t0 , x).


For any $\mu_0$ in a suitable space, we consider the solution $(v,\mu)$ to the linearized system
\[
\begin{cases}
-\partial_t v-\Delta v+D_pH(x,Du)\cdot Dv=\dfrac{\delta F}{\delta m}(x,m(t))(\mu(t)),\\[4pt]
\partial_t\mu-\Delta\mu-\mathrm{div}\big(\mu D_pH(x,Du)\big)-\mathrm{div}\big(mD^2_{pp}H(x,Du)Dv\big)=0,\\[4pt]
v(T,x)=\dfrac{\delta G}{\delta m}(x,m(T))(\mu(T)),\qquad\mu(t_0,\cdot)=\mu_0.
\end{cases}
\tag{3.29}
\]
Our aim is to prove that $U$ is of class $C^1$ with respect to $m$, with
\[
v(t_0,x)=\int_{\mathbb T^d}\frac{\delta U}{\delta m}(t_0,x,m_0,y)\,\mu_0(y)\,dy.
\]

Let us start by showing that the linearized system (3.29) has a solution and by giving estimates on this solution.

Proposition 3.4.1. Assume that (HF1(n+1)) and (HG1(n+2)) hold for some $n\ge0$. If $m_0\in\mathcal P(\mathbb T^d)$ and $\mu_0\in(C^{n+1+\alpha})'$, there is a unique solution $(v,\mu)$ of (3.29), and this solution satisfies
\[
\sup_{t\in[t_0,T]}\Big(\|v(t,\cdot)\|_{n+2+\alpha}+\|\mu(t)\|_{-(n+1+\alpha)}\Big)\le C\|\mu_0\|_{-(n+1+\alpha)},
\]
where the constant $C$ depends on $n$, $T$, $H$, $F$, and $G$ (but not on $(t_0,m_0)$). Note that the map $\mu_0\mapsto(v,\mu)$ is linear and continuous from $(C^{n+1+\alpha})'$ into $C^0([t_0,T],C^{n+2+\alpha}\times(C^{n+1+\alpha})')$.

Proof. It is a straightforward application of Lemma 3.3.1, with the choice 2 V (t, x) = Dp H(x, Du(t, x)), Γ(t, x) = Dpp H(x, Du(t, x)) and zT = b = c = 0. 0 n+1+α ) in view of Proposition 3.1.1.  Note that V belongs to C ([0, T ], C Let us recall that we use for simplicity the abbreviated notation ·, · n+1+α for ·, · (C n+1+α ) ,C n+1+α . Corollary 3.4.2. Under the assumptions of Proposition 3.4.1, there exists, for any (t0 , m0 ), a C n+2+α × C n+1+α map (x, y) → K(t0 , x, m0 , y) such that, for any μ0 ∈ (C n+1+α ) , the v component of the solution of (3.29) is given by v(t0 , x) = μ0 , K(t0 , x, m0 , ·) n+1+α .

(3.30)

Moreover −1 0 1

K(t0 , ·, m0 , ·)(n+2+α,n+1+α)  Cn , and K and its derivatives in (x, y) are continuous on [0, T ] × Td × P(Td ) × Td .
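The proof below differentiates K in the variable y by letting difference quotients of Dirac masses act as initial conditions. The distributional rule behind this step, ⟨D^ℓδ_y, φ⟩ = (−1)^{|ℓ|}D^ℓφ(y), can be checked numerically; a minimal Python sketch (sin is a stand-in smooth test function, not from the text):

```python
import numpy as np

# Dirac masses act on smooth test functions by evaluation: <delta_y, phi> = phi(y).
# Hence <eps^{-1}(delta_{y+eps} - delta_y), phi> = eps^{-1}(phi(y+eps) - phi(y)),
# which converges to phi'(y) = <-D delta_y, phi> as eps -> 0.
phi, dphi = np.sin, np.cos   # smooth test function and its derivative
y = 0.3

def quotient_pairing(eps):
    # pairing of the Dirac difference quotient with phi
    return (phi(y + eps) - phi(y)) / eps

limit = dphi(y)              # pairing of -D delta_y with phi
```

The error decays linearly in ε, mirroring the convergence of ε⁻¹(δ_{y+εe₁} − δ_y) to −D^{e₁}δ_y in (C^{n+1+α})' used in the proof.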


Proof. For ℓ ∈ N^d with |ℓ| ≤ n+1 and y ∈ T^d, let (v^{(ℓ)}(·,·,y), μ^{(ℓ)}(·,·,y)) be the solution to (3.29) with initial condition μ0 = D^ℓδ_y (the ℓ-th derivative of the Dirac mass at y). Note that μ0 ∈ (C^{n+1+α})'. We set K(t0,x,m0,y) := v^{(0)}(t0,x,y).

Let us check that ∂_{y_1}K(t0,x,m0,y) = −v^{(e_1)}(t0,x,y), where e₁ = (1,0,…,0). Indeed, ε⁻¹(δ_{y+εe₁} − δ_y) converges to −D^{e₁}δ_y in (C^{n+1+α})', while, by linearity, the map ε⁻¹(K(·,·,m0,y+εe₁) − K(·,·,m0,y)) is the first component of the solution of (3.29) with initial condition ε⁻¹(δ_{y+εe₁} − δ_y); this map therefore converges to the first component of the solution with initial condition −D^{e₁}δ_y, which is −v^{(e₁)}(·,·,y). This proves our claim. One can then check in the same way, by induction, that, for |ℓ| ≤ n+1, D^ℓ_yK(t0,x,m0,y) = (−1)^{|ℓ|}v^{(ℓ)}(t0,x,y).

Finally, if |ℓ| ≤ n+1, Proposition 3.4.1 combined with the linearity of system (3.29) implies that

$$\|D^\ell_yK(t_0,\cdot,m_0,y)-D^\ell_yK(t_0,\cdot,m_0,y')\|_{n+2+\alpha}\le C\,\|D^\ell\delta_y-D^\ell\delta_{y'}\|_{-(n+1+\alpha)}\le C\,\|\delta_y-\delta_{y'}\|_{-\alpha}\le C\,|y-y'|^\alpha.$$

Therefore K(t0,·,m0,·) belongs to C^{n+2+α} × C^{n+1+α}. Continuity of K and its derivatives in (t0, m0) easily follows from the estimates on (u, m) and on (v, μ), which are independent of the initial measure m0. □

We now show that K is indeed the derivative of U with respect to m.

Proposition 3.4.3. Assume that (HF1(n+1)) and (HG1(n+2)) hold for some n ≥ 0. Fix t0 ∈ [0,T] and m0, m̂0 ∈ P(T^d). Let (u, m) and (û, m̂) be the solutions of the MFG system (3.2) starting from (t0, m0) and (t0, m̂0) respectively, and let (v, μ) be the solution to (3.29) with initial condition (t0, m̂0 − m0). Then

$$\sup_{t\in[t_0,T]}\Big[\|\hat u(t,\cdot)-u(t,\cdot)-v(t,\cdot)\|_{n+2+\alpha}+\|\hat m(t,\cdot)-m(t,\cdot)-\mu(t,\cdot)\|_{-(n+1+\alpha)}\Big]\le C\,d_1^2(m_0,\hat m_0).$$
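Proposition 3.4.3 is the infinite-dimensional version of a standard fact: the derivative of a flow with respect to its data solves the linearized (variational) equation, with an error quadratic in the perturbation. A finite-dimensional Python sketch of the same mechanism (the scalar ODE x' = −x + sin x is an arbitrary stand-in nonlinearity, not the MFG system):

```python
import numpy as np

def flow(x0, T=1.0, n=2000):
    # Explicit Euler for x' = f(x) with f(x) = -x + sin(x).
    x, dt = x0, T / n
    for _ in range(n):
        x = x + dt * (-x + np.sin(x))
    return x

def linearized(x0, v0, T=1.0, n=2000):
    # Variational equation v' = f'(x(t)) v along the trajectory, f'(x) = -1 + cos(x);
    # this is the exact derivative of the discrete flow map with respect to x0.
    x, v, dt = x0, v0, T / n
    for _ in range(n):
        x, v = x + dt * (-x + np.sin(x)), v + dt * (-1.0 + np.cos(x)) * v
    return v
```

For a perturbation ε of the initial condition, flow(x0+ε) − flow(x0) − linearized(x0, ε) is O(ε²): the finite-dimensional analogue of the d₁²(m0, m̂0) bound above.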

As a straightforward consequence, we obtain the differentiability of U with respect to the measure:

Corollary 3.4.4. Under the assumptions of Proposition 3.4.3, the map U is of class C¹ (in the sense of Definition 2.2.1) with

$$\frac{\delta U}{\delta m}(t_0,x,m_0,y)=K(t_0,x,m_0,y),$$


whose regularity is given by Corollary 3.4.2. Moreover,

$$\Big\|U(t_0,\cdot,\hat m_0)-U(t_0,\cdot,m_0)-\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(t_0,\cdot,m_0,y)\,d(\hat m_0-m_0)(y)\Big\|_{n+2+\alpha}\le C\,d_1^2(m_0,\hat m_0).$$

Remark 3.4.5. Let us recall that the derivative δU/δm is defined up to an additive constant and that our normalization condition is

$$\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(t_0,x,m_0,y)\,dm_0(y)=0.$$

Let us check that we have indeed

$$\int_{\mathbb{T}^d}K(t,x,m_0,y)\,dm_0(y)=0.\tag{3.31}$$

For this let us choose μ0 = m0 in (3.29). Since, by the normalization condition, (δF/δm)(·, m(t))(m(t)) = 0 for any t ∈ [0,T] and (δG/δm)(·, m(T))(m(T)) = 0, it is clear that the solution to (3.29) is just (v, μ) = (0, m). So, by (3.30), (3.31) holds.

Proof of Proposition 3.4.3. Let us set z := û − u − v and ρ := m̂ − m − μ. The proof consists in estimating the pair (z, ρ), which satisfies

$$\begin{cases}
-\partial_t z-\Delta z+D_pH(x,Du)\cdot Dz=\dfrac{\delta F}{\delta m}(x,m(t))(\rho(t))+b(t,x),\\[2pt]
\partial_t\rho-\Delta\rho-\mathrm{div}\big(\rho\,D_pH(x,Du)\big)-\mathrm{div}\big(m\,D^2_{pp}H(x,Du)Dz\big)-\mathrm{div}(c)=0,\\[2pt]
z(T,x)=\dfrac{\delta G}{\delta m}(x,m(T))(\rho(T))+z_T(x),\qquad \rho(t_0,\cdot)=0,
\end{cases}$$

where b(t,x) = A(t,x) + B(t,x), with

$$A(t,x)=-\int_0^1\big(D_pH(x,sD\hat u+(1-s)Du)-D_pH(x,Du)\big)\cdot D(\hat u-u)\,ds$$

and

$$B(t,x)=\int_0^1\int_{\mathbb{T}^d}\Big(\frac{\delta F}{\delta m}\big(x,s\hat m(t)+(1-s)m(t),y\big)-\frac{\delta F}{\delta m}\big(x,m(t),y\big)\Big)\,d\big(\hat m(t)-m(t)\big)(y)\,ds,$$

$$c(t)=(\hat m-m)(t)\,D^2_{pp}H\big(\cdot,Du(t,\cdot)\big)(D\hat u-Du)(t,\cdot)+\hat m\int_0^1\Big(D^2_{pp}H\big(\cdot,sD\hat u(t,\cdot)+(1-s)Du(t,\cdot)\big)-D^2_{pp}H\big(\cdot,Du(t,\cdot)\big)\Big)(D\hat u-Du)(t,\cdot)\,ds$$

(note that c(t) is a signed measure), and

$$z_T(x)=\int_0^1\int_{\mathbb{T}^d}\Big(\frac{\delta G}{\delta m}\big(x,s\hat m(T)+(1-s)m(T),y\big)-\frac{\delta G}{\delta m}\big(x,m(T),y\big)\Big)\,d\big(\hat m(T)-m(T)\big)(y)\,ds.$$

We apply Lemma 3.3.1 to get

$$\sup_{t\in[t_0,T]}\|(z(t),\rho(t))\|_{C^{n+2+\alpha}\times(C^{n+1+\alpha})'}\le C\Big[\|z_T\|_{n+2+\alpha}+\|\rho_0\|_{-(n+1+\alpha)}+\sup_{t\in[t_0,T]}\big(\|b(t)\|_{n+1+\alpha}+\|c(t)\|_{-(n+\alpha)}\big)\Big].$$

It remains to estimate the various quantities in the right-hand side. We have

$$\sup_{t\in[t_0,T]}\|b(t,\cdot)\|_{n+1+\alpha}\le\sup_{t\in[t_0,T]}\|A(t,\cdot)\|_{n+1+\alpha}+\sup_{t\in[t_0,T]}\|B(t,\cdot)\|_{n+1+\alpha},$$

where

$$\sup_{t\in[t_0,T]}\|A(t,\cdot)\|_{n+1+\alpha}\le C\sup_{t\in[t_0,T]}\|(\hat u-u)(t,\cdot)\|^2_{n+2+\alpha}\le C\,d_1^2(m_0,\hat m_0),$$

according to Proposition 3.2.1 (using also the bounds in Proposition 3.1.1). To estimate B and ‖z_T‖_{C^{n+2+α}}, we argue in the same way:

$$\|z_T\|_{C^{n+2+\alpha}}+\sup_{t\in[t_0,T]}\|B(t,\cdot)\|_{n+1+\alpha}\le C\,d_1^2(m_0,\hat m_0),$$

where we have used, as above, Proposition 3.2.1, now combined with assumptions (HF1(n+1)) and (HG1(n+2)), which imply (e.g., for F) that, for any


m₁, m₂ ∈ P(T^d),

$$\Big\|\int_{\mathbb{T}^d}\Big(\frac{\delta F}{\delta m}(\cdot,m_1,y)-\frac{\delta F}{\delta m}(\cdot,m_2,y)\Big)\,d(m_1-m_2)(y)\Big\|_{n+1+\alpha}\le d_1(m_1,m_2)\,\Big\|D_y\frac{\delta F}{\delta m}(\cdot,m_1,\cdot)-D_y\frac{\delta F}{\delta m}(\cdot,m_2,\cdot)\Big\|_{C^{n+1+\alpha}\times L^\infty}\le C\,d_1^2(m_1,m_2).$$

Finally,

$$\sup_{t\in[t_0,T]}\|c(t)\|_{-(n+\alpha)}\le\sup_{t\in[t_0,T]}\ \sup_{\|\xi\|_{n+\alpha}\le1}\langle c(t),\xi\rangle_{n+\alpha},$$

where, for ‖ξ‖_{n+α} ≤ 1,

$$\begin{aligned}
\langle c(t),\xi\rangle_{n+\alpha}&=\Big\langle(\hat m-m)(t),\ \xi\,D^2_{pp}H\big(\cdot,Du(t,\cdot)\big)(D\hat u-Du)(t,\cdot)\Big\rangle\\
&\quad+\Big\langle\hat m(t),\ \xi\int_0^1\Big(D^2_{pp}H\big(\cdot,[sD\hat u+(1-s)Du](t,\cdot)\big)-D^2_{pp}H\big(\cdot,Du(t,\cdot)\big)\Big)(D\hat u-Du)(t,\cdot)\,ds\Big\rangle\\
&\le C\Big(\|\xi\|_1\,\|u-\hat u\|_2\,d_1(m_0,\hat m_0)+\|\xi\|_{L^\infty}\,\|u-\hat u\|_1^2\Big).
\end{aligned}$$

So again by Proposition 3.2.1 we get

$$\sup_{t\in[t_0,T]}\|c(t)\|_{-(n+\alpha)}\le C\,d_1^2(m_0,\hat m_0).$$


This completes the proof. □

3.5 PROOF OF THE SOLVABILITY OF THE FIRST-ORDER MASTER EQUATION

Proof of Theorem 2.4.2 (existence). We check in a first step that the map U defined by (3.3) is a solution of the first-order master equation. Let us first assume that m0 ∈ C^∞(T^d), with m0 > 0. For t0 ∈ [0,T), let (u, m) be the classical solution of the MFG system (3.2) starting from m0 at time t0. Then

$$\frac{U(t_0+h,x,m_0)-U(t_0,x,m_0)}{h}=\frac{U(t_0+h,x,m_0)-U(t_0+h,x,m(t_0+h))}{h}+\frac{U(t_0+h,x,m(t_0+h))-U(t_0,x,m_0)}{h}.$$


Let us set m_s := (1−s)m(t0) + s m(t0+h). Note, from the equation satisfied by m and from the regularity of U given by Corollary 3.4.4, that

$$\begin{aligned}
&U\big(t_0+h,x,m(t_0+h)\big)-U\big(t_0+h,x,m(t_0)\big)\\
&=\int_0^1\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}(t_0+h,x,m_s,y)\big(m(t_0+h,y)-m(t_0,y)\big)\,dy\,ds\\
&=\int_0^1\int_{\mathbb{T}^d}\int_{t_0}^{t_0+h}\frac{\delta U}{\delta m}(t_0+h,x,m_s,y)\Big[\Delta m(t,y)+\mathrm{div}\big(m(t,y)D_pH(y,Du(t,y))\big)\Big]\,dt\,dy\,ds\\
&=\int_0^1\int_{\mathbb{T}^d}\int_{t_0}^{t_0+h}\Delta_y\frac{\delta U}{\delta m}(t_0+h,x,m_s,y)\,m(t,y)\,dt\,dy\,ds\\
&\quad-\int_0^1\int_{\mathbb{T}^d}\int_{t_0}^{t_0+h}D_y\frac{\delta U}{\delta m}(t_0+h,x,m_s,y)\cdot D_pH\big(y,Du(t,y)\big)\,m(t,y)\,dt\,dy\,ds.
\end{aligned}$$

Dividing by h and using the continuity of D_mU and its smoothness with respect to the space variables, we obtain

$$\lim_{h\to0}\frac{U(t_0+h,x,m(t_0+h))-U(t_0+h,x,m_0)}{h}=\int_{\mathbb{T}^d}\Big[\mathrm{div}_y\big[D_mU\big](t_0,x,m_0,y)-D_mU(t_0,x,m_0,y)\cdot D_pH\big(y,Du(t_0,y)\big)\Big]\,m_0(y)\,dy.$$

On the other hand, for h > 0,

$$U\big(t_0+h,x,m(t_0+h)\big)-U(t_0,x,m_0)=u(t_0+h,x)-u(t_0,x)=h\,\partial_tu(t_0,x)+o(h),$$

since u is smooth, so that

$$\lim_{h\to0^+}\frac{U(t_0+h,x,m(t_0+h))-U(t_0,x,m_0)}{h}=\partial_tu(t_0,x).$$

Therefore ∂_tU(t0,x,m0) exists and, using the equation satisfied by u, is equal to


$$\begin{aligned}
\partial_tU(t_0,x,m_0)&=-\int_{\mathbb{T}^d}\mathrm{div}_y\big[D_mU\big](t_0,x,m_0,y)\,m_0(y)\,dy\\
&\quad+\int_{\mathbb{T}^d}D_mU(t_0,x,m_0,y)\cdot D_pH\big(y,D_xU(t_0,y,m_0)\big)\,m_0(y)\,dy\\
&\quad-\Delta_xU(t_0,x,m_0)+H\big(x,D_xU(t_0,x,m_0)\big)-F(x,m_0).
\end{aligned}\tag{3.32}$$

This means that U has a continuous time derivative at any point (t0,x,m0) where m0 ∈ C^∞(T^d) with m0 > 0, and satisfies (2.6) at such a point (when m0 is regarded as an element of P(T^d)). By continuity of the right-hand side of (3.32), U has a time derivative everywhere and (2.6) holds at any point. □

Next we turn to the uniqueness part of the theorem:

Proof of Theorem 2.4.2 (uniqueness). To prove uniqueness of the solution of the master equation, we explicitly show that the solutions of the MFG system (3.2) coincide with the characteristics of the master equation. Let V be another solution to the master equation. The main point is that, by definition of a solution, D²_{x,y}(δV/δm) is bounded, and therefore D_xV is Lipschitz continuous with respect to the measure variable.

Let us fix (t0, m0) with m0 ∈ C^∞(T^d). In view of the Lipschitz continuity of D_xV, one can easily solve uniquely in C^0([0,T], P(T^d)) the Fokker–Planck equation

$$\partial_t\tilde m-\Delta\tilde m-\mathrm{div}\big(\tilde m\,D_pH(x,D_xV(t,x,\tilde m))\big)=0\ \ \text{in }[t_0,T]\times\mathbb{T}^d,\qquad \tilde m(t_0)=m_0\ \ \text{in }\mathbb{T}^d.$$

Then let us set ũ(t,x) = V(t,x,m̃(t)). By the regularity properties of V, ũ is at least of class C^{1,2}, with

$$\begin{aligned}
\partial_t\tilde u(t,x)&=\partial_tV(t,x,\tilde m(t))+\Big\langle\frac{\delta V}{\delta m}(t,x,\tilde m(t),\cdot),\,\partial_t\tilde m(t)\Big\rangle_{C^2,(C^2)'}\\
&=\partial_tV(t,x,\tilde m(t))+\Big\langle\frac{\delta V}{\delta m}(t,x,\tilde m(t),\cdot),\,\Delta\tilde m+\mathrm{div}\big(\tilde m\,D_pH(\cdot,D_xV(t,\cdot,\tilde m))\big)\Big\rangle_{C^2,(C^2)'}\\
&=\partial_tV(t,x,\tilde m(t))+\int_{\mathbb{T}^d}\mathrm{div}_y\big[D_mV\big](t,x,\tilde m(t),y)\,d\tilde m(t)(y)\\
&\quad-\int_{\mathbb{T}^d}D_mV(t,x,\tilde m(t),y)\cdot D_pH\big(y,D_xV(t,y,\tilde m)\big)\,d\tilde m(t)(y).
\end{aligned}$$


Recalling that V satisfies the master equation, we obtain

$$\partial_t\tilde u(t,x)=-\Delta_xV(t,x,\tilde m(t))+H\big(x,D_xV(t,x,\tilde m(t))\big)-F(x,\tilde m(t))=-\Delta\tilde u(t,x)+H(x,D\tilde u(t,x))-F(x,\tilde m(t)),$$

with terminal condition ũ(T,x) = V(T,x,m̃(T)) = G(x,m̃(T)). Therefore the pair (ũ, m̃) is a solution of the MFG system (3.2). As the solution of this system is unique, we get that V(t0,x,m0) = U(t0,x,m0) if m0 has a smooth density. The equality V = U then holds everywhere by continuity of V and U. □

3.6 LIPSCHITZ CONTINUITY OF δU/δm WITH RESPECT TO m

We later need the Lipschitz continuity of the derivative of U with respect to the measure.

Proposition 3.6.1. Let us assume that (HF1(n+1)) and (HG1(n+2)) hold for some n ≥ 2. Then

$$\sup_{t\in[0,T]}\ \sup_{m_1\neq m_2}\big(d_1(m_1,m_2)\big)^{-1}\Big\|\frac{\delta U}{\delta m}(t,\cdot,m_1,\cdot)-\frac{\delta U}{\delta m}(t,\cdot,m_2,\cdot)\Big\|_{(n+2+\alpha,\,n+\alpha)}\le C,$$

where C depends on n, F, G, H, and T.

Proof. Let m_0^1, m_0^2 ∈ P(T^d), let (u^i, m^i), i = 1, 2, denote the solutions of the MFG system (3.2) starting from (t0, m_0^i), and let (v^i, μ^i) be the corresponding solutions of the linearized system (3.29) with the same initial condition μ0. Let us set (z, μ) := (v¹ − v², μ¹ − μ²). We first write an equation for (z, μ). To avoid too heavy notation, we set H₁(t,x) = D_pH(x, Du¹(t,x)), H̃₁(t,x) = D²_{pp}H(x, Du¹(t,x)), F₁(x,μ) = ∫_{T^d}(δF/δm)(x, m¹, y)μ(y) dy, etc. Then (z, μ) satisfies

$$\begin{cases}
-\partial_tz-\Delta z+H_1\cdot Dz=F_1(\cdot,\mu)+b&\text{in }[t_0,T]\times\mathbb{T}^d,\\
\partial_t\mu-\Delta\mu-\mathrm{div}(\mu H_1)-\mathrm{div}\big(m^1\tilde H_1Dz\big)-\mathrm{div}(c)=0&\text{in }[t_0,T]\times\mathbb{T}^d,\\
z(T)=G_1(\mu(T))+z_T,\qquad \mu(t_0)=0&\text{in }\mathbb{T}^d,
\end{cases}$$

where

$$\begin{aligned}
b(t,x)&:=F_1\big(x,\mu^2(t)\big)-F_2\big(x,\mu^2(t)\big)-(H_1-H_2)\cdot Dv^2(t,x),\\
c(t,x)&:=\mu^2(t,x)(H_1-H_2)(t,x)+\big(m^1\tilde H_1-m^2\tilde H_2\big)Dv^2(t,x),\\
z_T(x)&:=G_1(\mu^2(T))-G_2(\mu^2(T)).
\end{aligned}$$

We apply Lemma 3.3.1 with V = H₁ and Γ = H̃₁: it says that, under assumptions (HF1(n+1)) and (HG1(n+2)),

$$\sup_{t\in[t_0,T]}\|z(t,\cdot)\|_{n+2+\alpha}\le C\Big[\|z_T\|_{n+2+\alpha}+\sup_{t\in[t_0,T]}\big(\|b(t,\cdot)\|_{n+1+\alpha}+\|c(t,\cdot)\|_{-(n+\alpha)}\big)\Big].$$


Let us estimate the various terms in the right-hand side:

$$\begin{aligned}
\|z_T\|_{n+2+\alpha}&\le\Big\|\int_{\mathbb{T}^d}\Big(\frac{\delta G}{\delta m}\big(\cdot,m^1(T),y\big)-\frac{\delta G}{\delta m}\big(\cdot,m^2(T),y\big)\Big)\mu^2(T,y)\,dy\Big\|_{n+2+\alpha}\\
&\le\|\mu^2(T)\|_{-(n+1+\alpha)}\Big\|\frac{\delta G}{\delta m}\big(\cdot,m^1(T),\cdot\big)-\frac{\delta G}{\delta m}\big(\cdot,m^2(T),\cdot\big)\Big\|_{(n+2+\alpha,\,n+1+\alpha)}\\
&\le C\,d_1(m_0^1,m_0^2)\,\|\mu_0\|_{-(n+1+\alpha)},
\end{aligned}$$

where we have used Propositions 3.2.1 and 3.4.1 in the last inequality. Moreover, we have

$$\|b(t,\cdot)\|_{n+1+\alpha}\le\big\|F_1\big(\cdot,\mu^2(t)\big)-F_2\big(\cdot,\mu^2(t)\big)\big\|_{n+1+\alpha}+\big\|(H_1-H_2)(t,\cdot)\,Dv^2(t,\cdot)\big\|_{n+1+\alpha},$$

where the first term can be estimated as z_T:

$$\big\|F_1\big(\cdot,\mu^2(t)\big)-F_2\big(\cdot,\mu^2(t)\big)\big\|_{n+1+\alpha}\le C\,d_1(m_0^1,m_0^2)\,\|\mu_0\|_{-(n+1+\alpha)}.$$

The second one is bounded by

$$\big\|(H_1-H_2)(t,\cdot)\,Dv^2(t,\cdot)\big\|_{n+1+\alpha}=\big\|\big(D_pH(\cdot,Du^1(t,\cdot))-D_pH(\cdot,Du^2(t,\cdot))\big)Dv^2(t,\cdot)\big\|_{n+1+\alpha}\le C\,\|(u^1-u^2)(t,\cdot)\|_{n+2+\alpha}\,\|v^2(t,\cdot)\|_{n+2+\alpha}\le C\,d_1(m_0^1,m_0^2)\,\|\mu_0\|_{-(n+1+\alpha)},$$

where the last inequality comes from Proposition 3.2.1 and Proposition 3.4.1, thanks to assumptions (HF1(n+1)) and (HG1(n+2)). Finally, by a similar argument (with H̃ᵢ(t,x) := D²_{pp}H(x, Duⁱ(t,x)) as above),

$$\begin{aligned}
\|c(t)\|_{-(n+\alpha)}&=\sup_{\|\varphi\|_{n+\alpha}\le1}\Big\langle\varphi,\ \mu^2\big(H_1-H_2\big)+\big((m^1-m^2)\tilde H_1+m^2(\tilde H_1-\tilde H_2)\big)Dv^2\Big\rangle_{n+\alpha}\\
&\le\sup_{\|\varphi\|_{n+\alpha}\le1}\big\|\varphi(H_1-H_2)(t,\cdot)\big\|_{n+\alpha}\,\|\mu^2(t,\cdot)\|_{-(n+\alpha)}\\
&\quad+d_1\big(m^1(t),m^2(t)\big)\sup_{\|\varphi\|_1\le1}\big\|\varphi\,\tilde H_1Dv^2(t,\cdot)\big\|_1+\sup_{\|\varphi\|_0\le1}\big\|\varphi(\tilde H_1-\tilde H_2)(t,\cdot)Dv^2(t,\cdot)\big\|_0\\
&\le C\,\|(u^1-u^2)(t,\cdot)\|_{n+1+\alpha}\,\|\mu_0\|_{-(n+\alpha)}+C\,d_1\big(m^1(t),m^2(t)\big)\,\|v^2(t,\cdot)\|_2+C\,\|(u^1-u^2)(t,\cdot)\|_1\,\|v^2(t,\cdot)\|_1\\
&\le C\,d_1(m_0^1,m_0^2)\,\|\mu_0\|_{-(n+\alpha)}.
\end{aligned}$$

This shows that

$$\sup_{t\in[t_0,T]}\|z(t,\cdot)\|_{n+2+\alpha}\le C\,d_1(m_0^1,m_0^2)\,\|\mu_0\|_{-(n+\alpha)}.$$

As

$$z(t_0,x)=\int_{\mathbb{T}^d}\Big(\frac{\delta U}{\delta m}(t_0,x,m_0^1,y)-\frac{\delta U}{\delta m}(t_0,x,m_0^2,y)\Big)\mu_0(y)\,dy,$$

we have proved

$$\sup_{m_1\neq m_2}\big(d_1(m_1,m_2)\big)^{-1}\Big\|\frac{\delta U}{\delta m}(t_0,\cdot,m_1,\cdot)-\frac{\delta U}{\delta m}(t_0,\cdot,m_2,\cdot)\Big\|_{(n+2+\alpha,\,n+\alpha)}\le C.\qquad\Box$$
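All of the estimates above are stated in the distance d₁, the Kantorovich–Rubinstein (1-Wasserstein) distance. For empirical measures on the real line (rather than on the torus, a simplifying assumption), the monotone coupling is optimal, giving a one-line computation; a hypothetical sketch:

```python
import numpy as np

def d1_empirical(xs, ys):
    # 1-Wasserstein distance between two empirical measures on R with equally
    # many atoms: sort both supports and pair them monotonically (optimal in 1d).
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))
```

In particular, translating a measure by c moves it by exactly |c| in d₁.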

3.7 LINK WITH THE OPTIMAL CONTROL OF THE FOKKER–PLANCK EQUATION

We now explain that, when F and G derive from potentials 𝓕 and 𝓖, the space derivative D_xU is nothing but the derivative with respect to the measure of the solution U of a Hamilton–Jacobi equation stated in the space of measures. The fact that the mean field game system can be viewed as a necessary condition for an optimal control of the Kolmogorov equation goes back to Lasry and Lions [74]. As explained by Lions [76], one can also write the value function of this optimal control problem, which turns out to satisfy a Hamilton–Jacobi equation in the space of measures. The (directional) derivative with respect to the measure of the value function is then (at least formally) the solution of the master equation. This is rigorously derived, for the short horizon and first-order (in space and measure) master equation, by Gangbo and Swiech [45]. We show here that this holds true for the master equation without common noise.

Let us assume that F and G derive from C¹ potential maps 𝓕 : P(T^d) → R and 𝓖 : P(T^d) → R:

$$F(x,m)=\frac{\delta\mathcal F}{\delta m}(x,m),\qquad G(x,m)=\frac{\delta\mathcal G}{\delta m}(x,m).\tag{3.33}$$

Note for later use that the monotonicity of F and G implies the convexity of F and G.


Theorem 3.7.1. Under the assumptions of Theorem 2.4.2, let U be the solution to the master equation (3.1) and suppose that (3.33) holds. Then the Hamilton–Jacobi equation

$$\begin{cases}
\displaystyle-\partial_tU(t,m)+\int_{\mathbb{T}^d}H\big(y,D_mU(t,m,y)\big)\,dm(y)-\int_{\mathbb{T}^d}\mathrm{div}\big[D_mU\big](t,m,y)\,dm(y)=\mathcal F(m)&\text{in }[0,T]\times\mathcal P(\mathbb{T}^d),\\
U(T,m)=\mathcal G(m)&\text{in }\mathcal P(\mathbb{T}^d),
\end{cases}\tag{3.34}$$

has a unique classical solution U and

$$D_mU(t,x,m)=D_xU(t,x,m)\qquad\forall\,(t,x,m)\in[0,T]\times\mathbb{T}^d\times\mathcal P(\mathbb{T}^d).\tag{3.35}$$

We represent the solution U of (3.34) as the value function of an optimal control problem: for an initial condition (t0, m0) ∈ [0,T] × P(T^d), let

$$U(t_0,m_0):=\inf_{(m,\alpha)}\Big[\int_{t_0}^T\int_{\mathbb{T}^d}H^*\big(x,\alpha(t,x)\big)\,m(t,dx)\,dt+\int_{t_0}^T\mathcal F\big(m(t)\big)\,dt+\mathcal G\big(m(T)\big)\Big]\tag{3.36}$$

(where H* is the convex conjugate of H with respect to the second variable), under the constraint that m ∈ C^0([0,T], P(T^d)), α is a bounded and Borel measurable function from [0,T] × T^d into R^d, and the pair (m, α) satisfies, in the sense of distributions,

$$\partial_tm-\Delta m-\mathrm{div}\big(\alpha m\big)=0\ \ \text{in }[0,T]\times\mathbb{T}^d,\qquad m(t_0)=m_0\ \ \text{in }\mathbb{T}^d.\tag{3.37}$$

Of course, (3.37) is understood as the Fokker–Planck equation describing the flow of measures generated on the torus by the stochastic differential equation (SDE)

$$dZ_t=-\alpha(t,Z_t)\,dt+\sqrt{2}\,dB_t,\qquad t\in[0,T],$$

which is known to be uniquely solvable in the weak sense. Notice that, throughout the section, we shall use, as in (3.36), the notation m(t,dx) to denote the integral on the torus with respect to the (time-dependent) measure m(t).

The following characterization of the optimal path for U is due to Lasry and Lions [74]:
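The correspondence between the SDE and the Fokker–Planck equation (3.37) can be illustrated numerically. For a constant control α(t,x) ≡ a in dimension 1 (a simplifying assumption, not from the text), Z_T can be sampled exactly, and the first Fourier mode of its law on the torus must decay at the rate 4π² predicted by ∂_t m = Δm + a∂_x m; a rough sketch:

```python
import numpy as np

# dZ_t = -a dt + sqrt(2) dB_t on the 1d torus with a constant drift a:
# Z_T is Gaussian and can be sampled in one step.
rng = np.random.default_rng(0)
a, T, z0, n = 0.3, 0.1, 0.25, 200_000
Z = z0 - a * T + np.sqrt(2.0 * T) * rng.standard_normal(n)
mc = np.mean(np.sin(2 * np.pi * Z))                 # Monte Carlo E[sin(2 pi Z_T)]
# Fokker-Planck prediction: the mode e^{2 i pi x} is damped by e^{-4 pi^2 T}
# and transported by the drift, so E[sin(2 pi Z_T)] = e^{-4 pi^2 T} sin(2 pi (z0 - a T)).
fp = np.exp(-4 * np.pi**2 * T) * np.sin(2 * np.pi * (z0 - a * T))
```

With 2·10⁵ samples the Monte Carlo average agrees with the Fokker–Planck prediction to within the sampling error.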

Proposition 3.7.2. For an initial position (t0 , m0 ) ∈ [0, T ]×P(Td ), let (u, m) be the solution of the MFG system (3.2). Then, the pair (m, α) = (m, Dp H(·, Du(·, ·))) is a minimizer for U (t0 , m0 ).


Proof. For a function m̂ ∈ C^0([0,T], P(T^d)) and a bounded and measurable function α̂ from [0,T] × T^d into R^d, we let

$$J(\hat m,\hat\alpha):=\int_{t_0}^T\int_{\mathbb{T}^d}H^*\big(x,\hat\alpha(t,x)\big)\,d\hat m(t)\,dt+\int_{t_0}^T\mathcal F\big(\hat m(t)\big)\,dt+\mathcal G\big(\hat m(T)\big),$$

where m̂ solves

$$\partial_t\hat m-\Delta\hat m-\mathrm{div}\big(\hat\alpha\,\hat m\big)=0\ \ \text{in }[0,T]\times\mathbb{T}^d,\qquad \hat m(t_0)=m_0\ \ \text{in }\mathbb{T}^d.$$

Since, for any x ∈ T^d and α' ∈ R^d,

$$H^*(x,\alpha')=\sup_{p\in\mathbb{R}^d}\big(\alpha'\cdot p-H(x,p)\big),$$

we have, by convexity of 𝓕 and 𝓖,

$$\begin{aligned}
J(\hat m,\hat\alpha)&\ge\int_{t_0}^T\int_{\mathbb{T}^d}\big(\hat\alpha(t,x)\cdot Du(t,x)-H(x,Du(t,x))\big)\,\hat m(t,dx)\,dt\\
&\quad+\int_{t_0}^T\Big[\mathcal F\big(m(t)\big)+\int_{\mathbb{T}^d}F\big(\cdot,m(t)\big)\,d\big(\hat m(t)-m(t)\big)\Big]\,dt\\
&\quad+\mathcal G\big(m(T)\big)+\int_{\mathbb{T}^d}G\big(\cdot,m(T)\big)\,d\big(\hat m(T)-m(T)\big)\\
&=J(m,\alpha)+\int_{t_0}^T\Big[\int_{\mathbb{T}^d}Du(t,x)\cdot\big(\hat\alpha(t,x)\hat m(t,dx)-\alpha(t,x)m(t,dx)\big)-\int_{\mathbb{T}^d}H\big(x,Du(t,x)\big)\big(\hat m-m\big)(t,dx)\Big]\,dt\\
&\quad+\int_{t_0}^T\int_{\mathbb{T}^d}F\big(\cdot,m(t)\big)\,d(\hat m-m)(t)\,dt+\int_{\mathbb{T}^d}G\big(\cdot,m(T)\big)\,d\big(\hat m(T)-m(T)\big),
\end{aligned}$$

because

$$\alpha(t,x)\cdot Du(t,x)-H\big(x,Du(t,x)\big)=H^*\big(x,\alpha(t,x)\big).$$
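This last identity is the Fenchel equality: α·p − H(x,p) = H*(x,α) holds exactly when α = D_pH(x,p). It can be checked by brute force for the quadratic Hamiltonian H(p) = |p|²/2, whose conjugate is H*(α) = |α|²/2 (a hypothetical example; the x-dependence is dropped):

```python
import numpy as np

def H(p):
    # quadratic Hamiltonian (illustrative choice)
    return 0.5 * p**2

def H_star(alpha, grid=np.linspace(-10.0, 10.0, 200_001)):
    # Convex conjugate H*(alpha) = sup_p (alpha * p - H(p)), approximated on a grid.
    return np.max(alpha * grid - H(grid))
```

For this H the sup is attained at p = α, so H*(α) = α²/2, and the Fenchel equality holds along α = H'(p) = p.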


Using the equations satisfied by u, m, and m̂, we have

$$\begin{aligned}
&\int_{t_0}^T\int_{\mathbb{T}^d}Du(t,x)\cdot\big(\hat\alpha(t,x)\hat m(t,dx)-\alpha(t,x)m(t,dx)\big)\,dt\\
&=-\int_{\mathbb{T}^d}u(T,x)\,\big(\hat m-m\big)(T,dx)+\int_{t_0}^T\int_{\mathbb{T}^d}(\partial_tu+\Delta u)(t,x)\,\big(\hat m-m\big)(t,dx)\,dt\\
&=-\int_{\mathbb{T}^d}G\big(x,m(T)\big)\,d\big(\hat m(T)-m(T)\big)+\int_{t_0}^T\int_{\mathbb{T}^d}\Big[H\big(x,Du(t,x)\big)-F\big(x,m(t)\big)\Big]\big(\hat m-m\big)(t,dx)\,dt,
\end{aligned}$$

where the boundary term at t0 vanishes because m̂(t0) = m(t0) = m0. This proves that J(m̂, α̂) ≥ J(m, α) and shows the optimality of (m, α). □

Proof of Theorem 3.7.1. First step. Let us first check that U, defined by (3.36), is C¹ with respect to m and satisfies

$$\frac{\delta U}{\delta m}(t,x,m)=U(t,x,m)-\int_{\mathbb{T}^d}U(t,y,m)\,dm(y),\qquad (t,x,m)\in[0,T]\times\mathbb{T}^d\times\mathcal P(\mathbb{T}^d).\tag{3.38}$$
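The right-hand side of (3.38) is exactly the representative of the derivative satisfying the normalization convention (2.1), ∫ (δU/δm)(t,x,m) dm(x) = 0. A discrete sanity check with arbitrary values (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
u_vals = rng.standard_normal(50)        # values U(t, x_i, m) at the atoms x_i of m
w = rng.random(50)
w /= w.sum()                            # weights of a discrete probability measure m
du = u_vals - np.dot(w, u_vals)         # candidate derivative from (3.38)
# Its integral against m vanishes by construction: <m, du> = <m, U> - <m, U> = 0.
```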

Note that taking the derivative with respect to x on both sides shows (3.35). We now prove (3.38). Let m0, m̂0 be two initial measures and (u, m) and (û, m̂) the solutions of the MFG system (3.2) with initial conditions (t0, m0) and (t0, m̂0) respectively. Let also (v, μ) be the solution of the linearized system (3.29) with initial condition (t0, m̂0 − m0). Let us recall that, according to Proposition 3.4.3, we have

$$\sup_{t\in[t_0,T]}\Big[\|\hat u-u-v\|_{n+2+\alpha}+\|\hat m-m-\mu\|_{-(n+1+\alpha)}\Big]\le C\,d_1^2(m_0,\hat m_0),\tag{3.39}$$

while Propositions 3.2.1 and 3.4.1 imply that

$$\sup_{t\in[t_0,T]}\Big[\|\hat u-u\|_{n+2+\alpha}+\|\mu\|_{-(n+1+\alpha)}\Big]\le C\,d_1(m_0,\hat m_0).$$

Our aim is to show that

$$U(t_0,\hat m_0)-U(t_0,m_0)-\int_{\mathbb{T}^d}U(t_0,x,m_0)\,d(\hat m_0-m_0)(x)=O\big(d_1^2(m_0,\hat m_0)\big).\tag{3.40}$$


Indeed, if (3.40) holds true, then U is a derivative of U and, by convention (2.1), this proves (3.38).

Second step. We now turn to the proof of (3.40). Since (u, m) and (û, m̂) are optimal in U(t0, m0) and U(t0, m̂0) respectively, we have

$$\begin{aligned}
U(t_0,\hat m_0)-U(t_0,m_0)&=\int_{t_0}^T\Big[\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,D\hat u(t,x))\big)\,\hat m(t,dx)-\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,Du(t,x))\big)\,m(t,dx)\Big]\,dt\\
&\quad+\int_{t_0}^T\Big[\mathcal F\big(\hat m(t)\big)-\mathcal F\big(m(t)\big)\Big]\,dt+\mathcal G\big(\hat m(T)\big)-\mathcal G\big(m(T)\big).
\end{aligned}$$

Note that, by (3.39),

$$\begin{aligned}
&\int_{t_0}^T\Big[\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,D\hat u(t,x))\big)\,\hat m(t,dx)-\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,Du(t,x))\big)\,m(t,dx)\Big]\,dt\\
&=\int_{t_0}^T\Big[\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,Du(t,x))\big)\,\mu(t,dx)\\
&\qquad+\int_{\mathbb{T}^d}D_qH^*\big(x,D_pH(x,Du(t,x))\big)\cdot D^2_{pp}H\big(x,Du(t,x)\big)Dv(t,x)\,m(t,dx)\Big]\,dt+O\big(d_1^2(m_0,\hat m_0)\big)\\
&=\int_{t_0}^T\Big[\int_{\mathbb{T}^d}\Big(Du(t,x)\cdot D_pH\big(x,Du(t,x)\big)-H\big(x,Du(t,x)\big)\Big)\,\mu(t,dx)\\
&\qquad+\int_{\mathbb{T}^d}Du(t,x)\cdot D^2_{pp}H\big(x,Du(t,x)\big)Dv(t,x)\,m(t,dx)\Big]\,dt+O\big(d_1^2(m_0,\hat m_0)\big),
\end{aligned}$$

where we have used the properties of the Fenchel conjugate in the last equality, while

$$\int_{t_0}^T\Big[\mathcal F\big(\hat m(t)\big)-\mathcal F\big(m(t)\big)\Big]\,dt+\mathcal G\big(\hat m(T)\big)-\mathcal G\big(m(T)\big)=\int_{t_0}^T\int_{\mathbb{T}^d}F\big(x,m(t)\big)\,\mu(t,dx)\,dt+\int_{\mathbb{T}^d}G\big(x,m(T)\big)\,\mu(T,dx)+O\big(d_1^2(m_0,\hat m_0)\big).$$


Recalling the equations satisfied by u and μ, we have

$$\begin{aligned}
\frac{d}{dt}\int_{\mathbb{T}^d}u(t,x)\,\mu(t,dx)&=\int_{\mathbb{T}^d}\Big[H\big(x,Du(t,x)\big)-F\big(x,m(t)\big)\Big]\,\mu(t,dx)\\
&\quad-\int_{\mathbb{T}^d}Du(t,x)\cdot D_pH\big(x,Du(t,x)\big)\,\mu(t,dx)\\
&\quad-\int_{\mathbb{T}^d}Du(t,x)\cdot D^2_{pp}H\big(x,Du(t,x)\big)Dv(t,x)\,m(t,dx).
\end{aligned}$$

Putting the last three identities together, we obtain

$$\begin{aligned}
U(t_0,\hat m_0)-U(t_0,m_0)&=-\int_{t_0}^T\Big(\frac{d}{dt}\int_{\mathbb{T}^d}u(t,x)\,\mu(t,dx)\Big)\,dt+\int_{\mathbb{T}^d}G\big(x,m(T)\big)\,\mu(T,dx)+O\big(d_1^2(m_0,\hat m_0)\big)\\
&=\int_{\mathbb{T}^d}u(t_0,x)\,\mu(t_0,dx)+O\big(d_1^2(m_0,\hat m_0)\big)\\
&=\int_{\mathbb{T}^d}U(t_0,x,m_0)\,d(\hat m_0-m_0)(x)+O\big(d_1^2(m_0,\hat m_0)\big).
\end{aligned}$$

This completes the proof of (3.38).

Third step. Next we show that U is a classical solution to the Hamilton–Jacobi equation (3.34). Let us fix (t0, m0) ∈ [0,T) × P(T^d), where m0 has a smooth, positive density. Let also (u, m) be the solution of the MFG system (3.2) with initial condition (t0, m0). Proposition 3.7.2 states that (m, D_pH(·, Du(·,·))) is a minimizer for U(t0, m0). By the standard dynamic programming principle, we have therefore, for any h ∈ (0, T−t0),

$$U(t_0,m_0)=\int_{t_0}^{t_0+h}\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,Du(t,x))\big)\,m(t,x)\,dx\,dt+\int_{t_0}^{t_0+h}\mathcal F\big(m(t)\big)\,dt+U\big(t_0+h,m(t_0+h)\big).\tag{3.41}$$

Now we note that

$$\frac{U(t_0+h,m_0)-U(t_0,m_0)}{h}=\frac{U(t_0+h,m_0)-U(t_0+h,m(t_0+h))}{h}+\frac{U(t_0+h,m(t_0+h))-U(t_0,m_0)}{h}.\tag{3.42}$$


We can handle the first term in the right-hand side of (3.42) by using the fact that U is C¹ with respect to m. Letting m_{s,h} := (1−s)m0 + s m(t0+h), we have

$$\begin{aligned}
U\big(t_0+h,m(t_0+h)\big)-U\big(t_0+h,m_0\big)&=\int_0^1\int_{\mathbb{T}^d}\frac{\delta U}{\delta m}\big(t_0+h,m_{s,h},y\big)\,d\big(m(t_0+h)-m_0\big)(y)\,ds\\
&=-\int_0^1\int_{\mathbb{T}^d}\int_{t_0}^{t_0+h}D_mU\big(t_0+h,m_{s,h},y\big)\cdot\Big[Dm(t,y)+D_pH\big(y,Du(t,y)\big)m(t,y)\Big]\,dt\,dy\,ds.
\end{aligned}$$

Dividing by h, letting h → 0⁺, and rearranging gives

$$\lim_{h\to0^+}\frac{U(t_0+h,m(t_0+h))-U(t_0+h,m_0)}{h}=\int_{\mathbb{T}^d}\mathrm{div}\big[D_mU\big](t_0,m_0,y)\,dm_0(y)-\int_{\mathbb{T}^d}D_mU(t_0,m_0,y)\cdot D_pH\big(y,Du(t_0,y)\big)\,dm_0(y).$$

To handle the second term in the right-hand side of (3.42), we use (3.41) and get

$$\lim_{h\to0^+}\frac{U(t_0+h,m(t_0+h))-U(t_0,m_0)}{h}=-\int_{\mathbb{T}^d}H^*\big(x,D_pH(x,Du(t_0,x))\big)\,dm_0(x)-\mathcal F(m_0).$$

As Du(t0,x) = D_xU(t0,x,m0) = D_mU(t0,m0,x), we have

$$\begin{aligned}
&-H^*\big(x,D_pH(x,Du(t_0,x))\big)+D_mU(t_0,m_0,x)\cdot D_pH\big(x,Du(t_0,x)\big)\\
&=-H^*\big(x,D_pH(x,D_mU(t_0,m_0,x))\big)+D_mU(t_0,m_0,x)\cdot D_pH\big(x,D_mU(t_0,m_0,x)\big)\\
&=H\big(x,D_mU(t_0,m_0,x)\big).
\end{aligned}$$

Collecting the above equalities, we therefore obtain

$$\lim_{h\to0^+}\frac{U(t_0+h,m_0)-U(t_0,m_0)}{h}=-\int_{\mathbb{T}^d}\mathrm{div}\big[D_mU\big](t_0,m_0,y)\,dm_0(y)+\int_{\mathbb{T}^d}H\big(x,D_mU(t_0,m_0,x)\big)\,dm_0(x)-\mathcal F(m_0).$$


As the right-hand side of the above equality is continuous in all variables, this shows that U is continuously differentiable with respect to t and satisfies (3.34).

Last step. We finally check that U is the unique classical solution to (3.34). For this we use the standard comparison argument. Let V be another classical solution and assume that V ≠ U. To fix the ideas, let us suppose that sup(V − U) is positive. Then, for any ε > 0 small enough,

$$\sup_{(t,m)\in(0,T]\times\mathcal P(\mathbb{T}^d)}\Big[V(t,m)-U(t,m)+\varepsilon\log\big(\tfrac{t}{T}\big)\Big]$$

is positive. Let (t̂, m̂) be a maximum point. Note that t̂ < T because V(T,·) = U(T,·). By optimality of (t̂, m̂) and regularity of V and U, we have

$$\partial_tV(\hat t,\hat m)-\partial_tU(\hat t,\hat m)+\frac{\varepsilon}{\hat t}=0\qquad\text{and}\qquad \frac{\delta V}{\delta m}(\hat t,\hat m,\cdot)=\frac{\delta U}{\delta m}(\hat t,\hat m,\cdot),$$

so that D_mV(t̂,m̂,·) = D_mU(t̂,m̂,·) and div[D_mV](t̂,m̂,·) = div[D_mU](t̂,m̂,·). Using the equations satisfied by U and V then yields ε/t̂ = 0, a contradiction. □


Chapter Four

Mean Field Game System with a Common Noise

The main purpose of this chapter and Chapter 5 is to show that the same approach as the one developed in Chapter 3 may be applied to the case when the whole system is forced by a so-called “common noise.” Such a common noise is sometimes referred to as a “systemic noise”; see, for instance, Lions' lectures at the Collège de France. In terms of a game with a finite number of players, the common noise describes some noise that affects all the players in the same way, so that the dynamics of one given particle read¹

$$dX_t=-D_pH\big(X_t,Du_t(X_t)\big)\,dt+\sqrt{2}\,dB_t+\sqrt{2\beta}\,dW_t,\qquad t\in[0,T],\tag{4.1}$$

where β is a nonnegative parameter and B and W are two independent d-dimensional Wiener processes, B standing for the same idiosyncratic noise as in the previous chapter and W now standing for the so-called common noise. Throughout the chapter, we use the standard convention from the theory of stochastic processes that consists in indicating the time parameter as an index in random functions. As we shall see next, the effect of the common noise is to randomize the mean field game (MFG) equilibria so that, with the same notations as earlier, (m_t)_{t≥0} becomes a random flow of measures. Precisely, it reads as the flow of conditional marginal measures of (X_t)_{t∈[0,T]} given the realization of W. To distinguish things properly, we shall refer to the situation discussed in the previous chapter as the “deterministic” or “first-order” case. In this way, we point out that, without common noise, equilibria are completely deterministic. Compared to the notation of the introductory Chapter 1 or of Chapter 2, we let the level of common noise β be equal to 1 throughout the chapter: this is without loss of generality and simplifies (a little) the notation.

This chapter is specifically devoted to the analysis of the MFG system in the presence of the common noise (see (1.9)). Using a continuation-like argument (instead of the classical strategy based on the Schauder fixed-point theorem), we investigate the existence and uniqueness of a solution. On the model of the first-order case, we also investigate the linearized system. The derivation of the master equation is deferred to the next chapter.

The use of the continuation method in the analysis of MFG systems has been widely investigated by Gomes and his coauthors in the absence of common noise (see, for instance, [37,46,47] together with Chapter 11 in [48]), but it seems to be a new point in the presence of common noise. The analysis provided in the text that follows is directly inspired by the study of finite-dimensional forward–backward systems: its application is here made possible thanks to the monotonicity assumption required on F and G. As already mentioned, we assume without loss of generality that β = 1 throughout this chapter.

¹Equation (4.1) is set on R^d, but the solution may be canonically mapped onto T^d since the coefficients are periodic: when the process (X_t)_{t∈[0,T]} is initialized with a probability measure on T^d, the dynamics on the torus are independent of the representative in R^d of the initial condition.

4.1 STOCHASTIC FOKKER–PLANCK/HAMILTON–JACOBI–BELLMAN SYSTEM

The major difficulty in handling MFG with a common noise is that the system made of the Fokker–Planck and Hamilton–Jacobi–Bellman equations in (3.2) becomes stochastic. Its general form has already been discussed in [27]. Both the forward and the backward equations become stochastic, as both the equilibrium (m_t)_{0≤t≤T} and the value function (u_t)_{0≤t≤T} depend on the realization of the common noise W. Unfortunately, the stochastic system does not consist of a simple randomization of the coefficients: to ensure that the value function u_t at time t depends only on the past before t in the realization of (W_s)_{0≤s≤T}, the backward equation incorporates an additional correction term that is reminiscent of the theory of finite-dimensional backward stochastic differential equations. The Fokker–Planck equation satisfied by (m_t)_{t∈[0,T]} reads

$$d_tm_t=\Big[2\Delta m_t+\mathrm{div}\big(m_t\,D_pH(\cdot,Du_t)\big)\Big]\,dt-\sqrt{2}\,\mathrm{div}\big(m_t\,dW_t\big),\qquad t\in[0,T].\tag{4.2}$$

The value function u is sought as the solution of the stochastic Hamilton–Jacobi–Bellman equation

$$d_tu_t=\Big[-2\Delta u_t+H(x,Du_t)-F(x,m_t)-\sqrt{2}\,\mathrm{div}(v_t)\Big]\,dt+v_t\cdot dW_t,\tag{4.3}$$
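The coefficient 2Δ in (4.2) aggregates the idiosyncratic and the common variances (here β = 1). This can be seen on the drift-free case H ≡ 0 (a simplifying assumption): conditionally on W, the first Fourier mode of m_t decays like e^{−4π²t} around the random center x₀ + √2 W_t, and averaging out the common noise doubles the decay rate, so E[m_t] evolves by ∂_t m = 2Δm. A Monte Carlo sketch in dimension 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x0, t, n = 0.1, 0.02, 200_000
Wt = np.sqrt(t) * rng.standard_normal(n)     # samples of the common noise at time t
# Conditional first Fourier mode of m_t given W (drift-free case):
mode = np.exp(2j * np.pi * (x0 + np.sqrt(2.0) * Wt)) * np.exp(-4 * np.pi**2 * t)
mc = mode.mean()
# E[e^{2 i pi sqrt(2) W_t}] = e^{-4 pi^2 t}, so the averaged mode decays at rate 8 pi^2,
# i.e., the mean of m_t solves the heat equation with generator 2*Laplacian.
exact = np.exp(2j * np.pi * x0) * np.exp(-8 * np.pi**2 * t)
```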

where, at any time t, v_t is a random function of x with values in R^d. Once again, we emphasize that the term v_t·dW_t = Σ_{i=1}^d v_t^i dW_t^i allows us to guarantee that (u_t)_{0≤t≤T} is adapted with respect to the filtration generated by the common noise. The extra term −√2 div(v_t) may be explained by the so-called Itô–Wentzell formula, which is the chain rule for random fields applied to random processes; see, for instance, [68]. It allows us to cancel out the bracket that arises in the application of the Itô–Wentzell formula² to (u_t(X_t))_{t∈[0,T]},

²In the application of the Itô–Wentzell formula, u_t is seen as a (random) periodic function from R^d to R.


with (X_t)_{0≤t≤T} as in (4.1); see Subsection A.3.1 in the Appendix for a complete overview. Indeed, when expanding the infinitesimal variation of (u_t(X_t))_{t∈[0,T]}, the martingale term contained in u interacts with the martingale term contained in X and generates an additional bracket term. This additional bracket term is precisely √2 div(v_t)(X_t); it thus cancels out with the term −√2 div(v_t)(X_t) that appears in the dynamics of u_t.

For the sake of completeness, we here provide a rough version of the computations that enter the definition of this additional bracket (we refer to the Appendix for a more complete account). When expanding the difference u_{t+dt}(X_{t+dt}) − u_t(X_t), for t ∈ [0,T] and an infinitesimal variation dt, the martingale structure in (4.3) brings about a term of the form v_t(X_{t+dt})·(W_{t+dt} − W_t). By the standard Itô formula, it looks like

$$v_t(X_{t+dt})\cdot\big(W_{t+dt}-W_t\big)=\sum_{i=1}^dv_t^i(X_{t+dt})\big(W^i_{t+dt}-W^i_t\big)=\sum_{i=1}^dv_t^i(X_t)\,dW^i_t+\sqrt{2}\sum_{i=1}^d\frac{\partial v_t^i}{\partial x_i}(X_t)\,dt,\tag{4.4}$$

the last term matching precisely the divergence term (up to the sign) that appears in (4.3).

As in the deterministic case, our aim is to define U by means of the same formula as in (3.3); that is, U(0,x,m0) is the value at point x of the value function taken at time 0 when the population is initialized with the distribution m0. To proceed, the idea is to reduce the equations by taking advantage of the additive structure of the common noise. The point is to make the (formal) change of variable

$$\tilde u_t(x):=u_t\big(x+\sqrt{2}\,W_t\big),\qquad \tilde m_t(x):=m_t\big(x+\sqrt{2}\,W_t\big),\qquad x\in\mathbb{T}^d,\ t\in[0,T].$$

The second definition makes sense when m_t is a density, which is the case in the analysis because of the smoothing effect of the noise. A more rigorous way to define m̃_t is to let it be the push-forward of m_t by the shift T^d ∋ x ↦ x − √2 W_t ∈ T^d. Take note that such a definition is completely licit, as m_t reads as a conditional measure given the common noise. As the conditioning consists in freezing the common noise, the shift x ↦ x − √2 W_t may be seen as a “deterministic” mapping.

As a result, m̃_t is the conditional law of the process (X_t − √2 W_t)_{t∈[0,T]} given the common noise. Since

$$d\big(X_t-\sqrt{2}\,W_t\big)=-D_pH\Big(X_t-\sqrt{2}\,W_t+\sqrt{2}\,W_t,\ Du_t\big(X_t-\sqrt{2}\,W_t+\sqrt{2}\,W_t\big)\Big)\,dt+\sqrt{2}\,dB_t$$

for t ∈ [0,T], we get that (m̃_t)_{t∈[0,T]} should satisfy

$$d_t\tilde m_t=\Big[\Delta\tilde m_t+\mathrm{div}\big(\tilde m_t\,D_pH(\cdot+\sqrt{2}\,W_t,D\tilde u_t)\big)\Big]\,dt=\Big[\Delta\tilde m_t+\mathrm{div}\big(\tilde m_t\,D_p\tilde H_t(\cdot,D\tilde u_t)\big)\Big]\,dt,\tag{4.5}$$
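The bracket computation (4.4) lends itself to a direct simulation in dimension 1: taking v = sin as a stand-in smooth field and one noise-only step of (4.1) with β = 1 (drift dropped, a simplifying assumption), E[v(X_{t+dt}) ΔW] ≈ √2 v'(x) dt, which is precisely the term canceled by the −√2 div(v_t) correction in (4.3). A rough sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x, dt, n = 0.5, 1e-3, 400_000
dB = np.sqrt(dt) * rng.standard_normal(n)    # idiosyncratic increments
dW = np.sqrt(dt) * rng.standard_normal(n)    # common-noise increments
X_next = x + np.sqrt(2.0) * (dB + dW)        # one noise-only Euler step of (4.1)
bracket_mc = np.mean(np.sin(X_next) * dW)    # Monte Carlo E[v(X_{t+dt}) dW], v = sin
bracket_th = np.sqrt(2.0) * np.cos(x) * dt   # sqrt(2) v'(x) dt, as in (4.4)
```

Only the dW part of the increment correlates with ΔW, which is why the bracket carries the factor √2 rather than 2.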


where we have denoted $\tilde H_t(x,p)=H(x+\sqrt{2}W_t,p)$. This reads as the standard Fokker–Planck equation but in a random medium. Such a computation may be recovered by applying the Itô–Wentzell formula to $(m_t(x+\sqrt{2}W_t))_{t\in[0,T]}$, provided that each $m_t$ is smooth enough in space. Quite remarkably, $(\tilde m_t)_{t\in[0,T]}$ is of absolutely continuous variation in time, which has a clear meaning when $(\tilde m_t)_{t\in[0,T]}$ is seen as a process with values in a set of smooth functions; when $(\tilde m_t)_{t\in[0,T]}$ is seen as a process with values in $\mathcal{P}(\mathbb{T}^d)$, the process $(\langle\varphi,\tilde m_t\rangle)_{t\in[0,T]}$ ($\langle\cdot,\cdot\rangle$ standing for the duality bracket) is indeed of absolutely continuous variation when $\varphi$ is a smooth function on $\mathbb{T}^d$.

Similarly, we can apply (at least formally) the Itô–Wentzell formula to $(u_t(x+\sqrt{2}W_t))_{t\in[0,T]}$ in order to express the dynamics of $(\tilde u_t)_{t\in[0,T]}$:
\[
d_t\tilde u_t=\Bigl[-\Delta\tilde u_t+H\bigl(\cdot+\sqrt{2}W_t,D\tilde u_t\bigr)-F\bigl(\cdot+\sqrt{2}W_t,m_t\bigr)\Bigr]dt+\tilde v_t\cdot dW_t
=\Bigl[-\Delta\tilde u_t+\tilde H_t(\cdot,D\tilde u_t)-\tilde F_t(\cdot,m_t)\Bigr]dt+\tilde v_t\cdot dW_t,\qquad t\in[0,T],
\tag{4.6}
\]
where $\tilde F_t(x,m)=F(x+\sqrt{2}W_t,m)$, for a new representation term $\tilde v_t(x)=\sqrt{2}D_x\tilde u_t(x)+v_t(x+\sqrt{2}W_t)$; the boundary condition can be written $\tilde u_T(\cdot)=\tilde G(\cdot,m_T)$ with $\tilde G(x,m)=G(x+\sqrt{2}W_T,m)$. In such a way, we completely avoid any discussion about the smoothness of $\tilde v$. Take note that there is no way to get rid of the stochastic integral, as it permits us to ensure that $\tilde u_t$ remains adapted with respect to the observation up until time $t$.

Below, we shall investigate the system (4.5)–(4.6) directly. It is only in the next chapter (see Section 5.5) that we make the connection with the original formulation (4.2)–(4.3) and then complete the proof of Corollary 2.4.7. The reason is that it suffices to define the solution of the master equation by letting $U(0,x,m_0)$ be the value of $\tilde u_0(x)$ with $m_0$ as initial distribution. Notice indeed that $\tilde u_0(x)$ is expected to match $u_0(x)$, since $\tilde u_0(x)=u_0(x-\sqrt{2}W_0)=u_0(x)$. Of course, the same strategy may be applied at any time $t\in[0,T]$ by investigating $(\tilde u_s(x+\sqrt{2}(W_s-W_t)))_{s\in[t,T]}$.

With these notations, the monotonicity assumption takes the form:

Lemma 4.1.1. Let $m$ and $m'$ be two elements of $\mathcal{P}(\mathbb{T}^d)$. For some $t\in[0,T]$ and for some realization of the noise, denote by $\tilde m$ and $\tilde m'$ the push-forwards of $m$ and $m'$ by the mapping $\mathbb{T}^d\ni x\mapsto x-\sqrt{2}W_t\in\mathbb{T}^d$. Then, for the given realization of $(W_s)_{s\in[0,T]}$,
\[
\int_{\mathbb{T}^d}\bigl(\tilde F_t(x,m)-\tilde F_t(x,m')\bigr)\,d(\tilde m-\tilde m')(x)\ \ge\ 0,
\qquad
\int_{\mathbb{T}^d}\bigl(\tilde G(x,m)-\tilde G(x,m')\bigr)\,d(\tilde m-\tilde m')(x)\ \ge\ 0.
\]

Proof. The proof consists of a straightforward change of variable. □

Remark 4.1.2. Below, we shall use quite systematically, without recalling it, the notation tilde ($\sim$) in order to denote the new coefficients and the new solutions after the random change of variable $x\mapsto x+\sqrt{2}W_t$.
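For completeness, the change of variable behind the proof of Lemma 4.1.1 can be written in one line. Since $\tilde m$ and $\tilde m'$ are the push-forwards of $m$ and $m'$ by $x\mapsto x-\sqrt{2}W_t$, substituting $y=x+\sqrt{2}W_t$ gives

```latex
% change of variable y = x + \sqrt{2}W_t, using \tilde F_t(x,\cdot)=F(x+\sqrt{2}W_t,\cdot)
\int_{\mathbb{T}^d}\bigl(\tilde F_t(x,m)-\tilde F_t(x,m')\bigr)\,d(\tilde m-\tilde m')(x)
=\int_{\mathbb{T}^d}\bigl(F(y,m)-F(y,m')\bigr)\,d(m-m')(y)\ \ge\ 0,
```

the inequality being precisely the monotonicity of $F$ itself; the same computation applies to $\tilde G$ and $G$.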

4.2 PROBABILISTIC SETUP

Throughout the chapter, we shall use the probability space $(\Omega,\mathcal A,\mathbb P)$ equipped with two independent $d$-dimensional Brownian motions $(B_t)_{t\ge 0}$ and $(W_t)_{t\ge 0}$. The probability space is assumed to be complete. We then denote by $(\mathcal F_t)_{t\ge 0}$ the completion of the filtration generated by $(W_t)_{t\ge 0}$. When needed, we shall also use the filtration generated by $(B_t)_{t\ge 0}$. Given an initial distribution $m_0\in\mathcal P(\mathbb T^d)$, we consider the system
\[
\begin{aligned}
d_t\tilde m_t&=\Bigl[\Delta\tilde m_t+\mathrm{div}\bigl(\tilde m_t\,D_p\tilde H_t(\cdot,D\tilde u_t)\bigr)\Bigr]dt,\\
d_t\tilde u_t&=\Bigl[-\Delta\tilde u_t+\tilde H_t(\cdot,D\tilde u_t)-\tilde F_t(\cdot,m_t)\Bigr]dt+d\tilde M_t,
\end{aligned}
\tag{4.7}
\]
with the initial condition $\tilde m_0=m_0$ and the terminal boundary condition $\tilde u_T=\tilde G(\cdot,m_T)$, with $\tilde G(x,m_T)=G(x+\sqrt{2}W_T,m_T)$.

The solution $(\tilde u_t)_{t\in[0,T]}$ is seen as an $(\mathcal F_t)_{t\in[0,T]}$-adapted process with paths in the space $C^0([0,T],C^n(\mathbb T^d))$, where $n$ is a large enough integer (see the precise statements in the text that follows). The process $(\tilde m_t)_{t\in[0,T]}$ reads as an $(\mathcal F_t)_{t\in[0,T]}$-adapted process with paths in the space $C^0([0,T],\mathcal P(\mathbb T^d))$, $\mathcal P(\mathbb T^d)$ being equipped with the Monge–Kantorovich distance $d_1$. We shall look for solutions satisfying
\[
\sup_{t\in[0,T]}\|\tilde u_t\|_{n+\alpha}\in L^\infty(\Omega,\mathcal A,\mathbb P),
\tag{4.8}
\]
for some $\alpha\in(0,1)$.

The process $(\tilde M_t)_{t\in[0,T]}$ is seen as an $(\mathcal F_t)_{t\in[0,T]}$-adapted process with paths in the space $C^0([0,T],C^{n-2}(\mathbb T^d))$, such that, for any $x\in\mathbb T^d$, $(\tilde M_t(x))_{t\in[0,T]}$ is an $(\mathcal F_t)_{t\in[0,T]}$-martingale. It is required to satisfy
\[
\sup_{t\in[0,T]}\|\tilde M_t\|_{n-2+\alpha}\in L^\infty(\Omega,\mathcal A,\mathbb P).
\tag{4.9}
\]
Notice that, for our purpose, there is no need to discuss the representation of the martingale as a stochastic integral.
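The push-forward definition of $\tilde m_t$ is easy to emulate numerically. The sketch below is illustrative only (the sample size, test function, and frozen noise value are our own choices): it represents a measure on the one-dimensional torus by equally weighted samples and checks the change-of-variable identity $\int\varphi\,d\tilde m_t=\int\varphi(x-\sqrt{2}W_t)\,dm_t(x)$ for the shift $x\mapsto x-\sqrt{2}W_t$.

```python
import math
import random

def push_forward(samples, shift):
    # push-forward of an empirical measure by x -> x + shift on the torus T^1 = [0,1)
    return [(x + shift) % 1.0 for x in samples]

def integrate(phi, samples):
    # integral of phi against the empirical measure (equal weights)
    return sum(phi(x) for x in samples) / len(samples)

random.seed(0)
samples = [random.random() for _ in range(1000)]   # stands for m_t
W_t = 0.3                                          # one frozen realization of the common noise
shift = -math.sqrt(2.0) * W_t

m_tilde = push_forward(samples, shift)             # stands for \tilde m_t

phi = lambda x: math.cos(2 * math.pi * x)
lhs = integrate(phi, m_tilde)                                # ∫ φ d(m̃_t)
rhs = integrate(lambda x: phi((x + shift) % 1.0), samples)   # ∫ φ(x - √2 W_t) dm_t(x)
print(abs(lhs - rhs))  # the change-of-variable identity: zero up to rounding
```

The point of the sketch is that, the noise being frozen, the shift really acts as a deterministic map on the support of the measure, exactly as in the text.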

4.3 SOLVABILITY OF THE STOCHASTIC FOKKER–PLANCK/HAMILTON–JACOBI–BELLMAN SYSTEM

The objective is to discuss the existence and uniqueness of a classical solution to the system (4.7) under the same assumptions as in the deterministic case. Theorem 4.3.1 covers Theorem 2.4.3 in Chapter 2:


Theorem 4.3.1. Assume that $F$, $G$, and $H$ satisfy (2.4) and (2.5) in Section 2.3. Assume moreover that, for some integer $n\ge 2$ and some³ $\alpha\in[0,1)$, (HF1(n-1)) and (HG1(n)) hold true.

Then, there exists a unique solution $(\tilde m_t,\tilde u_t,\tilde M_t)_{t\in[0,T]}$ to (4.7), with the prescribed initial condition $\tilde m_0=m_0$, satisfying (4.8) and (4.9). It satisfies $\sup_{t\in[0,T]}(\|\tilde u_t\|_{n+\alpha}+\|\tilde M_t\|_{n+\alpha-2})\in L^\infty(\Omega,\mathcal A,\mathbb P)$.

Moreover, we can find a constant $C$ such that, for any two initial conditions $m_0$ and $m_0'$ in $\mathcal P(\mathbb T^d)$, we have
\[
\sup_{t\in[0,T]}\Bigl(d_1^2(\tilde m_t,\tilde m_t')+\|\tilde u_t-\tilde u_t'\|_{n+\alpha}^2\Bigr)\le C\,d_1^2(m_0,m_0')\qquad\mathbb P\text{-a.e.},
\]
where $(\tilde m,\tilde u,\tilde M)$ and $(\tilde m',\tilde u',\tilde M')$ denote the solutions to (4.7) with $m_0$ and $m_0'$ as initial conditions.

Theorem 4.3.1 is the analogue of Propositions 3.1.1 and 3.2.1 in the deterministic setting, except that we do not discuss the time regularity of the solutions (which, as is well known in the theory of finite-dimensional backward stochastic differential equations, may be a rather difficult question).

The strategy of proof relies on the so-called continuation method. We emphasize that, differently from the standard argument used in the deterministic case, we will not make use of the Schauder theorem to establish the existence of a solution. The reason is that, in order to apply the Schauder theorem, we would need a compactness criterion on the space on which the equilibrium is defined, namely $L^\infty(\Omega,\mathcal A,\mathbb P;C^0([0,T],\mathcal P(\mathbb T^d)))$. As already noticed in an earlier paper [29], this would ask for a careful (and certainly complicated) discussion on the choice of $\Omega$ and then on the behavior of the solution to (4.7) with respect to the topology put on $\Omega$.

Here the idea is as follows. Given two parameters $(\vartheta,\varepsilon)\in[0,1]^2$, we shall first have a look at the parameterized system:
\[
\begin{aligned}
d_t\tilde m_t&=\Bigl[\Delta\tilde m_t+\mathrm{div}\bigl(\tilde m_t\bigl(\vartheta D_p\tilde H_t(\cdot,D\tilde u_t)+b_t\bigr)\bigr)\Bigr]dt,\\
d_t\tilde u_t&=\Bigl[-\Delta\tilde u_t+\vartheta\tilde H_t(\cdot,D\tilde u_t)-\varepsilon\tilde F_t(\cdot,m_t)+f_t\Bigr]dt+d\tilde M_t,
\end{aligned}
\tag{4.10}
\]
with the initial condition $\tilde m_0=m_0$ and the terminal boundary condition $\tilde u_T=\varepsilon\tilde G(\cdot,m_T)+g_T$, where $((b_t,f_t)_{t\in[0,T]},g_T)$ is some input.

In (4.10), there are two extreme regimes: when $\vartheta=\varepsilon=0$ and the input is arbitrary, the equation is known to be explicitly solvable; when $\vartheta=\varepsilon=1$ and the input is set equal to 0, (4.10) fits the original one. This is our precise purpose: to prove first, by a standard contraction argument, that the equation is solvable when $\vartheta=1$ and $\varepsilon=0$ and then to propagate existence and uniqueness from

³In most of the analysis, $\alpha$ is assumed to be (strictly) positive, except in this statement, where it may be 0. Including the case $\alpha=0$ allows for a larger range of application of the uniqueness property.


the case $(\vartheta,\varepsilon)=(1,0)$ to the case $(\vartheta,\varepsilon)=(1,1)$ by means of a continuation argument.

Throughout the analysis, the assumption of Theorem 4.3.1 is in force. Generally speaking, the inputs $(b_t)_{t\in[0,T]}$ and $(f_t)_{t\in[0,T]}$ are $(\mathcal F_t)_{t\in[0,T]}$-adapted processes with paths in the spaces $C^0([0,T],[C^1(\mathbb T^d)]^d)$ and $C^0([0,T],C^{n-1}(\mathbb T^d))$, respectively. Similarly, $g_T$ is an $\mathcal F_T$-measurable random variable with realizations in $C^{n+\alpha}(\mathbb T^d)$. We shall require that
\[
\sup_{t\in[0,T]}\|b_t\|_1,\qquad
\sup_{t\in[0,T]}\|f_t\|_{n-1+\alpha},\qquad
\|g_T\|_{n+\alpha}
\]
are bounded (in $L^\infty(\Omega,\mathcal A,\mathbb P)$).

It is worth mentioning that, whenever $\varphi:[0,T]\times\mathbb T^d\to\mathbb R$ is a continuous mapping such that $\varphi(t,\cdot)\in C^\alpha(\mathbb T^d)$ for any $t\in[0,T]$, the mapping $[0,T]\ni t\mapsto\|\varphi(t,\cdot)\|_\alpha$ is lower semicontinuous and, thus, the mapping $[0,T]\ni t\mapsto\sup_{s\in[0,t]}\|\varphi(s,\cdot)\|_\alpha$ is left-continuous. In particular, whenever $(f_t)_{t\in[0,T]}$ is a process with paths in $C^0([0,T],C^k(\mathbb T^d))$, for some $k\ge 0$, the quantity $\sup_{t\in[0,T]}\|f_t\|_{k+\alpha}$ is a random variable, equal to $\sup_{t\in[0,T]\cap\mathbb Q}\|f_t\|_{k+\alpha}$, and the nondecreasing process $(\sup_{s\in[0,t]}\|f_s\|_{k+\alpha})_{t\in[0,T]}$ has continuous paths. As a byproduct,
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|f_t\|_{k+\alpha}=\sup_{t\in[0,T]}\mathrm{essup}_{\omega\in\Omega}\|f_t\|_{k+\alpha}.
\]

4.3.1 Case ϑ = ε = 0

We start with the following simple lemma:

Lemma 4.3.2. Assume that $\vartheta=\varepsilon=0$. Then, with the same type of inputs as above, (4.10) has a unique solution $(\tilde m_t,\tilde u_t,\tilde M_t)_{t\in[0,T]}$, with the prescribed initial condition. As required, it satisfies (4.8) and (4.9). Moreover, there exists a constant $C$, depending only on $n$ and $T$, such that
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u_t\|_{n+\alpha}
\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{n+\alpha}+\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|f_t\|_{n-1+\alpha}\Bigr).
\tag{4.11}
\]

Proof of Lemma 4.3.2. When $\vartheta=\varepsilon=0$, the forward equation simply reads
\[
d_t\tilde m_t=\bigl[\Delta\tilde m_t+\mathrm{div}(\tilde m_tb_t)\bigr]dt,\qquad t\in[0,T],
\]
with initial condition $m_0$. This is a standard Kolmogorov equation (with random coefficient) that is pathwise solvable. Its solution can be interpreted as the marginal law of the solution to the stochastic differential equation (compare with (4.1)): $dX_t=-b_t(X_t)\,dt+\sqrt{2}\,dB_t$, $t\in[0,T]$, $X_0$ having $m_0$ as statistical


distribution. Recalling the Kantorovich–Rubinstein duality formula for $d_1$ (see, for instance, Subsection A.1.1), we have
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{s\ne t}\frac{d_1(\tilde m_s,\tilde m_t)}{|s-t|^{1/2}}\le C\bigl(1+\mathrm{essup}_{\omega\in\Omega}\|b\|_\infty\bigr),
\]
for a constant $C$ depending only on $d$ and $T$.

As $\vartheta=\varepsilon=0$, the backward equation in (4.10) has the form
\[
d_t\tilde u_t=\bigl[-\Delta\tilde u_t+f_t\bigr]dt+d\tilde M_t,\qquad t\in[0,T],
\]
with the terminal boundary condition $\tilde u_T=g_T$. Although the equation is infinite-dimensional, it may be solved in a quite straightforward way. Taking the conditional expectation given $\mathcal F_s$, for $s\in[0,T]$, in the above equation, we indeed get that any solution should satisfy (provided we can exchange differentiation and conditional expectation)
\[
d_t\mathbb E\bigl[\tilde u_t|\mathcal F_s\bigr]=\Bigl[-\Delta\mathbb E\bigl[\tilde u_t|\mathcal F_s\bigr]+\mathbb E\bigl[f_t|\mathcal F_s\bigr]\Bigr]dt,\qquad t\in[s,T],
\]
which suggests letting as a candidate for a solution:
\[
\tilde u_s(x)=\mathbb E\bigl[\bar u_s(x)|\mathcal F_s\bigr],\qquad
\bar u_s(x)=P_{T-s}g_T(x)-\int_s^TP_{t-s}f_t(x)\,dt,\qquad s\in[0,T],\ x\in\mathbb T^d,
\tag{4.12}
\]
where $P$ denotes the heat semigroup (but associated with the Laplace operator $\Delta$ instead of $(1/2)\Delta$). For any $s\in[0,T]$ and $x\in\mathbb T^d$, the conditional expectation is uniquely defined up to a negligible event under $\mathbb P$. We claim that, for any $s\in[0,T]$, we can find a version of the conditional expectation in such a way that the process $[0,T]\ni s\mapsto(\mathbb T^d\ni x\mapsto\tilde u_s(x))$ reads as a progressively measurable random variable with values in $C^0([0,T],C^0(\mathbb T^d))$. By the representation formula (4.12), we indeed have that, $\mathbb P$ almost surely, $\bar u$ is jointly continuous in time and space. Making use of Lemma 4.3.4, we deduce that the realizations of $[0,T]\ni s\mapsto(\mathbb T^d\ni x\mapsto\tilde u_s(x))$ belong to $C^0([0,T],C^0(\mathbb T^d))$, the mapping $[0,T]\times\Omega\ni(s,\omega)\mapsto(\mathbb T^d\ni x\mapsto(\tilde u_s(\omega))(x))$ being measurable with respect to the progressive $\sigma$-field
\[
\mathcal P=\bigl\{A\in\mathcal B([0,T])\otimes\mathcal A\ :\ \forall t\in[0,T],\ A\cap([0,t]\times\Omega)\in\mathcal B([0,t])\otimes\mathcal F_t\bigr\}.
\tag{4.13}
\]
By the maximum principle, we can find a constant $C$, depending only on $T$ and $d$, such that
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|\tilde u_s\|_0\le\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|\bar u_s\|_0
\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_0+\mathrm{essup}_{\omega\in\Omega}\sup_{0\le s\le T}\|f_s\|_0\Bigr).
\]
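For a frozen realization of the randomness, the candidate (4.12) is easy to evaluate on the torus because the heat semigroup acts diagonally on Fourier modes: $P_t\cos(2\pi x)=e^{-4\pi^2t}\cos(2\pi x)$ (the factor $4\pi^2$ rather than $2\pi^2$ because $P$ is generated by $\Delta$, not $\tfrac12\Delta$). A minimal one-mode sketch, with our own choice of data $g_T$ and $f$, compares a quadrature of (4.12) with the closed-form answer:

```python
import math

LAM = 4 * math.pi**2   # eigenvalue of -Δ on the mode cos(2πx) of T^1

def heat_semigroup_mode(a, t):
    # action of P_t on the Fourier coefficient 'a' of cos(2πx)
    return a * math.exp(-LAM * t)

def ubar_coeff(s, T, g_coeff, f_coeff, n_steps=20000):
    # Fourier coefficient of  ū_s = P_{T-s} g_T - ∫_s^T P_{t-s} f_t dt  (formula (4.12)),
    # with g_T = g_coeff·cos(2πx) and f_t ≡ f_coeff·cos(2πx); midpoint rule in time
    h = (T - s) / n_steps
    integral = sum(heat_semigroup_mode(f_coeff, (i + 0.5) * h) for i in range(n_steps)) * h
    return heat_semigroup_mode(g_coeff, T - s) - integral

T, s = 1.0, 0.25
g_coeff, f_coeff = 1.0, 0.5
num = ubar_coeff(s, T, g_coeff, f_coeff)
# closed form: e^{-λ(T-s)} g - f (1 - e^{-λ(T-s)})/λ
exact = math.exp(-LAM * (T - s)) * g_coeff - f_coeff * (1 - math.exp(-LAM * (T - s))) / LAM
print(abs(num - exact))  # small quadrature error
```

This is only the pathwise part of the construction; the conditional expectation $\tilde u_s=\mathbb E[\bar u_s|\mathcal F_s]$, handled by Lemma 4.3.4 in the text, has no finite-dimensional analogue here.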


More generally, taking the representation formula (4.12) at two different $x,x'\in\mathbb T^d$ and then taking the difference, we get
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|\tilde u_s\|_\alpha\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_\alpha+\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|f_s\|_\alpha\Bigr).
\]
We now proceed with the derivatives of higher order. Generally speaking, there are two ways to differentiate the representation formula (4.12). The first one is to say that, for any $k\in\{1,\dots,n-1\}$,
\[
D_x^k\bar u_s(x)=P_{T-s}\bigl(D^kg_T\bigr)(x)-\int_s^TP_{t-s}\bigl(D_x^kf_t\bigr)(x)\,dt,\qquad(s,x)\in[0,T]\times\mathbb T^d,
\tag{4.14}
\]
which may be established by a standard induction argument. The second way is to make use of the regularization property of the heat kernel in order to go one step further, namely, for any $k\in\{1,\dots,n\}$,
\[
\begin{aligned}
D_x^k\bar u_s(x)&=P_{T-s}\bigl(D^kg_T\bigr)(x)-\int_s^TDP_{t-s}\bigl(D_x^{k-1}f_t\bigr)(x)\,dt\\
&=P_{T-s}\bigl(D^kg_T\bigr)(x)-\int_0^{T-s}DP_t\bigl(D_x^{k-1}f_{t+s}\bigr)(x)\,dt,
\end{aligned}
\tag{4.15}
\]
for $(s,x)\in[0,T]\times\mathbb T^d$, where $DP_{t-s}$ stands for the derivative of the heat semigroup. Equation (4.15) is easily derived from (4.14). It permits handling the fact that $f$ is $(n-1)$-times differentiable only.

Recalling that $\|DP_t\varphi\|_\infty\le ct^{-1/2}\|\varphi\|_\infty$ for any bounded Borel function $\varphi:\mathbb T^d\to\mathbb R$ and for some $c\ge 1$ independent of $\varphi$ and of $t\in[0,T]$, we deduce that, for any $k\in\{1,\dots,n\}$, the mapping $[0,T]\times\mathbb T^d\ni(s,x)\mapsto D_x^k\bar u_s(x)$ is continuous. Moreover, we can find a constant $C$ such that, for any $s\in[0,T]$,
\[
\mathrm{essup}_{\omega\in\Omega}\|\bar u_s\|_{k+\alpha}\le\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}
+C\int_s^T\frac{1}{\sqrt{t-s}}\,\mathrm{essup}_{\omega\in\Omega}\|f_t\|_{k+\alpha-1}\,dt.
\tag{4.16}
\]
In particular, invoking once again Lemma 4.3.4, we can find a version of the conditional expectation in the representation formula $\tilde u_s(x)=\mathbb E[\bar u_s(x)|\mathcal F_s]$ such that $\tilde u$ has paths in $C^0([0,T],C^n(\mathbb T^d))$. For any $k\in\{1,\dots,n\}$, $D_x^k\tilde u$ is progressively measurable and, for all $(s,x)\in[0,T]\times\mathbb T^d$, it holds that $D_x^k\tilde u_s(x)=\mathbb E[D_x^k\bar u_s(x)|\mathcal F_s]$.

Using (4.16), we have, for any $k\in\{1,\dots,n\}$,
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|\tilde u_s\|_{k+\alpha}\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}+\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\|f_s\|_{k+\alpha-1}\Bigr).
\]


Now that $\tilde u$ has been constructed, it remains to reconstruct the martingale part $(\tilde M_t)_{0\le t\le T}$ in the backward equation of the system (4.10) (with $\vartheta=\varepsilon=0$ therein). Since $\tilde u$ has trajectories in $C^0([0,T],C^{n+\alpha}(\mathbb T^d))$, $n\ge 2$, we can let
\[
\tilde M_t(x)=\tilde u_t(x)-\tilde u_0(x)+\int_0^t\Delta\tilde u_s(x)\,ds-\int_0^tf_s(x)\,ds,\qquad t\in[0,T],\ x\in\mathbb T^d.
\]
It is then clear that $\tilde M$ has trajectories in $C^0([0,T],C^{n-2}(\mathbb T^d))$ and that
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde M_t\|_{n+\alpha-2}<\infty.
\]
It thus remains to prove that, for each $x\in\mathbb T^d$, the process $(\tilde M_t(x))_{0\le t\le T}$ is a martingale (starting from 0). Clearly, it has continuous and $(\mathcal F_t)_{0\le t\le T}$-adapted paths. Moreover,
\[
\tilde M_T(x)-\tilde M_t(x)=g_T(x)-\tilde u_t(x)+\int_t^T\Delta\tilde u_s(x)\,ds-\int_t^Tf_s(x)\,ds,\qquad t\in[0,T],\ x\in\mathbb T^d.
\]
Now, recalling the relationship $\mathbb E[\Delta\tilde u_s(x)|\mathcal F_t]=\mathbb E[\Delta\bar u_s(x)|\mathcal F_t]$, we get
\[
\mathbb E\Bigl[\int_t^T\Delta\tilde u_s(x)\,ds\,\Big|\,\mathcal F_t\Bigr]=\mathbb E\Bigl[\int_t^T\Delta\bar u_s(x)\,ds\,\Big|\,\mathcal F_t\Bigr].
\]
Taking the conditional expectation given $\mathcal F_t$, we deduce that
\[
\mathbb E\bigl[\tilde M_T(x)-\tilde M_t(x)\,|\,\mathcal F_t\bigr]
=\mathbb E\Bigl[g_T(x)-\bar u_t(x)-\int_t^Tf_s(x)\,ds+\int_t^T\Delta\bar u_s(x)\,ds\,\Big|\,\mathcal F_t\Bigr]=0,
\]
the second equality following from (4.12). This shows that $\tilde M_t(x)=\mathbb E[\tilde M_T(x)|\mathcal F_t]$, so that the process $(\tilde M_t(x))_{0\le t\le T}$ is a martingale, as required. □

Remark 4.3.3. Notice that, alternatively to (4.11), we also have, by Doob's inequality,
\[
\mathbb E\Bigl[\sup_{t\in[0,T]}\|\tilde u_t\|_{n+\alpha}^2\Bigr]\le C\,\mathbb E\Bigl[\|g_T\|_{n+\alpha}^2+\sup_{t\in[0,T]}\|f_t\|_{n+\alpha-1}^2\Bigr].
\tag{4.17}
\]

Lemma 4.3.4. Consider a random field $U:[0,T]\times\mathbb T^d\to\mathbb R$, with continuous paths (in the variable $(t,x)\in[0,T]\times\mathbb T^d$), such that $\mathrm{essup}_{\omega\in\Omega}\|U\|_0<\infty$.


Then, we can find a version of the random field $[0,T]\times\mathbb T^d\ni(t,x)\mapsto\mathbb E[U(t,x)|\mathcal F_t]$ such that $[0,T]\ni t\mapsto(\mathbb T^d\ni x\mapsto\mathbb E[U(t,x)|\mathcal F_t])$ is a progressively measurable random variable with values in $C^0([0,T],C^0(\mathbb T^d))$, the progressive $\sigma$-field $\mathcal P$ being defined in (4.13).

More generally, if, for some $k\ge 1$, the paths of $U$ are $k$-times differentiable in the space variable, the derivatives up to the order $k$ having jointly continuous (in $(t,x)$) paths and satisfying
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|U(t,\cdot)\|_k<\infty,
\]
then we can find a version of the random field $[0,T]\times\mathbb T^d\ni(t,x)\mapsto\mathbb E[U(t,x)|\mathcal F_t]$ that is progressively measurable and that has paths in $C^0([0,T],C^k(\mathbb T^d))$, the derivative of order $i$ being written $[0,T]\times\mathbb T^d\ni(t,x)\mapsto\mathbb E[D_x^iU(t,x)|\mathcal F_t]$.

Proof. First step. We first prove the first part of the statement (existence of a progressively measurable version with continuous paths). Existence of a differentiable version will be handled next.

A key fact in the proof is that, the filtration $(\mathcal F_t)_{t\in[0,T]}$ being generated by $(W_t)_{t\in[0,T]}$, any martingale with respect to $(\mathcal F_t)_{t\in[0,T]}$ admits a continuous version. Throughout the proof, we denote by $w$ the (pathwise) modulus of continuity of $U$ on the compact set $[0,T]\times\mathbb T^d$, namely
\[
w(\delta)=\sup_{x,y\in\mathbb T^d:|x-y|\le\delta}\ \sup_{s,t\in[0,T]:|t-s|\le\delta}|U(s,x)-U(t,y)|,\qquad\delta>0.
\]
Since $\mathrm{essup}_{\omega\in\Omega}\|U\|_0<\infty$, we have, for any $\delta>0$, $\mathrm{essup}_{\omega\in\Omega}w(\delta)<\infty$. By Doob's inequality, we have that, for any integer $p\ge 1$,
\[
\forall\varepsilon>0,\qquad
\mathbb P\Bigl[\sup_{s\in[0,T]}\mathbb E\bigl[w(1/p)\,|\,\mathcal F_s\bigr]\ge\varepsilon\Bigr]\le\varepsilon^{-1}\,\mathbb E\bigl[w(1/p)\bigr],
\]
the right-hand side converging to 0 as $p$ tends to $\infty$, thanks to Lebesgue's dominated convergence theorem. Therefore, by a standard application of the Borel–Cantelli lemma, we can find an increasing sequence of integers $(a_p)_{p\ge 1}$ such that the sequence $(\sup_{s\in[0,T]}\mathbb E[w(1/a_p)|\mathcal F_s])_{p\ge 1}$ converges to 0 with probability 1.

We now come back to the original problem. For any $(t,x)\in[0,T]\times\mathbb T^d$, we let $\mathcal V(t,x)=\mathbb E[U(t,x)|\mathcal F_t]$. The difficulty comes from the fact that each $\mathcal V(t,x)$ is uniquely defined up to a negligible set. The objective is thus to choose each of these negligible sets in a relevant way.


Denoting by $\mathcal T$ a dense countable subset of $[0,T]$ and by $\mathcal X$ a dense countable subset of $\mathbb T^d$, we can find a negligible event $N\subset\mathcal A$ such that, outside $N$, the process $[0,T]\ni s\mapsto\mathbb E[U(t,x)|\mathcal F_s]$ has a continuous version for any $t\in\mathcal T$ and $x\in\mathcal X$. Modifying the set $N$ if necessary, we have, outside $N$, for any integer $p\ge 1$, any $t,t'\in\mathcal T$ and $x,x'\in\mathcal X$, with $|t-t'|+|x-x'|\le 1/a_p$,
\[
\sup_{s\in[0,T]}\bigl|\mathbb E[U(t,x)|\mathcal F_s]-\mathbb E[U(t',x')|\mathcal F_s]\bigr|\le\sup_{s\in[0,T]}\mathbb E\bigl[w(1/a_p)\,|\,\mathcal F_s\bigr],
\]
the right-hand side converging to 0 as $p$ tends to $\infty$. Therefore, by a uniform continuity extension argument, it is thus possible to extend continuously, outside $N$, the mapping $\mathcal T\times\mathcal X\ni(t,x)\mapsto([0,T]\ni s\mapsto\mathbb E[U(t,x)|\mathcal F_s])\in C^0([0,T],\mathbb R)$ to the entire $[0,T]\times\mathbb T^d$. For any $(t,x)\in[0,T]\times\mathbb T^d$, the value of the extension is a version of the conditional expectation $\mathbb E[U(t,x)|\mathcal F_s]$. Outside $N$, the slice $(s,x)\mapsto\mathbb E[U(s,x)|\mathcal F_s]$ is obviously continuous. Moreover, it satisfies, for all $p\ge 1$,
\[
\forall x,x'\in\mathbb T^d,\quad|x-x'|\le\frac{1}{a_p}\ \Rightarrow\
\sup_{s\in[0,T]}\bigl|\mathbb E[U(s,x)|\mathcal F_s]-\mathbb E[U(s,x')|\mathcal F_s]\bigr|\le\sup_{s\in[0,T]}\mathbb E\bigl[w(1/a_p)\,|\,\mathcal F_s\bigr],
\]
which says that, for each realization outside $N$, the functions $(\mathbb T^d\ni x\mapsto\mathbb E[U(s,x)|\mathcal F_s])_{s\in[0,T]}$ are equicontinuous. Together with the continuity in $s$, we deduce that, outside $N$, the function $[0,T]\ni s\mapsto(\mathbb T^d\ni x\mapsto\mathbb E[U(s,x)|\mathcal F_s])\in C^0(\mathbb T^d)$ is continuous. On $N$, we can arbitrarily let $\mathcal V\equiv 0$, which is licit since $N$ has 0 probability. Progressive measurability is then easily checked (the fact that $\mathcal V$ is arbitrarily defined on $N$ does not matter because the filtration is complete).

Second step. We now handle the second part of the statement (existence of a $C^k$ version). By a straightforward induction argument, it suffices to treat the case $k=1$. By the first step, we already know that the random field $[0,T]\times\mathbb T^d\ni(t,x)\mapsto\mathbb E[D_xU(t,x)|\mathcal F_t]$ has a continuous version. In particular, for any unit vector $e\in\mathbb R^d$, it makes sense to consider the mapping
\[
\mathbb T^d\times\mathbb R^*\ni(x,h)\mapsto\frac{1}{h}\Bigl(\mathbb E\bigl[U(t,x+he)|\mathcal F_t\bigr]-\mathbb E\bigl[U(t,x)|\mathcal F_t\bigr]\Bigr)-\mathbb E\bigl[\langle D_xU(t,x),e\rangle|\mathcal F_t\bigr].
\]
Notice that we can find an event of probability 1, on which, for all $t\in[0,T]$, $x\in\mathbb T^d$ and $h\in\mathbb R^*$,
\[
\begin{aligned}
&\frac{1}{h}\Bigl(\mathbb E\bigl[U(t,x+he)|\mathcal F_t\bigr]-\mathbb E\bigl[U(t,x)|\mathcal F_t\bigr]\Bigr)-\mathbb E\bigl[\langle D_xU(t,x),e\rangle|\mathcal F_t\bigr]\\
&=\mathbb E\Bigl[\int_0^1\bigl\langle D_xU(t,x+\lambda he)-D_xU(t,x),e\bigr\rangle\,d\lambda\,\Big|\,\mathcal F_t\Bigr]\\
&=\int_0^1\Bigl(\mathbb E\bigl[\langle D_xU(t,x+\lambda he),e\rangle|\mathcal F_t\bigr]-\mathbb E\bigl[\langle D_xU(t,x),e\rangle|\mathcal F_t\bigr]\Bigr)d\lambda,
\end{aligned}
\tag{4.18}
\]


where we used the fact that the mapping $[0,T]\times\mathbb T^d\ni(t,x)\mapsto\mathbb E[D_xU(t,x)|\mathcal F_t]$ has continuous paths in order to guarantee the integrability of the integrand in the third line. By continuity of the paths again, the right-hand side tends to 0 with $h$ (uniformly in $t$ and $x$). □

Instead of (4.11), we will sometimes make use of the following:

Lemma 4.3.5. We can find a constant $C$ such that, whenever $\vartheta=\varepsilon=0$, any solution to (4.10) satisfies
\[
\forall k\in\{1,\dots,n\},\qquad
\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}}{\sqrt{s-t}}\,ds
\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}+\int_t^T\mathrm{essup}_{\omega\in\Omega}\|f_s\|_{k+\alpha-1}\,ds\Bigr).
\]

Proof. Assume that we have a solution to (4.10). Then, making use of (4.16) in the proof of Lemma 4.3.2, we have that, for all $k\in\{1,\dots,n\}$ and all $s\in[0,T]$,
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}+\int_s^T\frac{\mathrm{essup}_{\omega\in\Omega}\|f_r\|_{k+\alpha-1}}{\sqrt{r-s}}\,dr\Bigr).
\tag{4.19}
\]
Dividing by $\sqrt{s-t}$ for a given $t\in[0,T]$, integrating from $t$ to $T$, and modifying the value of $C$ if necessary, we deduce that
\[
\begin{aligned}
\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}}{\sqrt{s-t}}\,ds
&\le C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}+\int_t^T\int_s^T\frac{\mathrm{essup}_{\omega\in\Omega}\|f_r\|_{k+\alpha-1}}{\sqrt{s-t}\sqrt{r-s}}\,dr\,ds\Bigr)\\
&=C\Bigl(\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{k+\alpha}+\int_t^T\mathrm{essup}_{\omega\in\Omega}\|f_r\|_{k+\alpha-1}\Bigl(\int_t^r\frac{1}{\sqrt{s-t}\sqrt{r-s}}\,ds\Bigr)dr\Bigr),
\end{aligned}
\]
the last line following from the Fubini theorem. The result easily follows. □
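The inner integral produced by the Fubini step is a Beta integral: with $s=t+(r-t)u$, $\int_t^r\frac{ds}{\sqrt{s-t}\sqrt{r-s}}=\int_0^1\frac{du}{\sqrt{u(1-u)}}=\pi$, independently of $t$ and $r$, which is exactly why the double integral collapses to $C\int\mathrm{essup}\|f_r\|\,dr$ above. A quick numerical confirmation (our own sketch; the substitution $u=\sin^2\theta$ removes the endpoint singularities):

```python
import math

def beta_half_half(n=100000):
    # ∫_0^1 du / sqrt(u(1-u)) via u = sin²θ, du = 2 sinθ cosθ dθ;
    # the integrand becomes the constant 2 on (0, π/2), so the value is π
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        u = math.sin(theta) ** 2
        du = 2 * math.sin(theta) * math.cos(theta) * h
        total += du / math.sqrt(u * (1 - u))
    return total

val = beta_half_half()
print(val)  # ≈ 3.14159...
```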

Following (4.17), we shall use the following variant of Lemma 4.3.5:

Lemma 4.3.6. For $p\in\{1,2\}$, we can find a constant $C$ such that, whenever $\vartheta=\varepsilon=0$, any solution to (4.10) satisfies, for all $t\in[0,T]$:
\[
\forall k\in\{1,\dots,n\},\qquad
\mathbb E\Bigl[\int_t^T\frac{\|\tilde u_s\|_{k+\alpha}^p}{\sqrt{s-t}}\,ds\,\Big|\,\mathcal F_t\Bigr]
\le C\,\mathbb E\Bigl[\|g_T\|_{k+\alpha}^p+\int_t^T\|f_r\|_{k+\alpha-1}^p\,dr\,\Big|\,\mathcal F_t\Bigr].
\]

Proof. The proof goes along the same lines as that of Lemma 4.3.5. We start with the following variant of (4.16), that holds, for any $s\in[0,T]$,
\[
\|\tilde u_s\|_{k+\alpha}^p\le C\,\mathbb E\Bigl[\|g_T\|_{k+\alpha}^p+\int_s^T\frac{\|f_r\|_{k+\alpha-1}^p}{\sqrt{r-s}}\,dr\,\Big|\,\mathcal F_s\Bigr].
\tag{4.20}
\]
Therefore, for any $0\le t\le s\le T$, we get
\[
\mathbb E\bigl[\|\tilde u_s\|_{k+\alpha}^p\,|\,\mathcal F_t\bigr]\le C\,\mathbb E\Bigl[\|g_T\|_{k+\alpha}^p+\int_s^T\frac{\|f_r\|_{k+\alpha-1}^p}{\sqrt{r-s}}\,dr\,\Big|\,\mathcal F_t\Bigr].
\]
Dividing by $\sqrt{s-t}$ and integrating in $s$, we get
\[
\mathbb E\Bigl[\int_t^T\frac{\|\tilde u_s\|_{k+\alpha}^p}{\sqrt{s-t}}\,ds\,\Big|\,\mathcal F_t\Bigr]
\le C\,\mathbb E\Bigl[\|g_T\|_{k+\alpha}^p+\int_t^T\|f_r\|_{k+\alpha-1}^p\Bigl(\int_t^r\frac{1}{\sqrt{r-s}\sqrt{s-t}}\,ds\Bigr)dr\,\Big|\,\mathcal F_t\Bigr].
\]
Therefore,
\[
\mathbb E\Bigl[\int_t^T\frac{\|\tilde u_s\|_{k+\alpha}^p}{\sqrt{s-t}}\,ds\,\Big|\,\mathcal F_t\Bigr]
\le C\,\mathbb E\Bigl[\|g_T\|_{k+\alpha}^p+\int_t^T\|f_r\|_{k+\alpha-1}^p\,dr\,\Big|\,\mathcal F_t\Bigr],
\]

which completes the proof. □

4.3.2 A Priori Estimates

In the previous subsection, we handled the case $\vartheta=\varepsilon=0$. To handle the more general case when $(\vartheta,\varepsilon)\in[0,1]^2$, we shall use the following a priori regularity estimate:

Lemma 4.3.7. Let $(b_t^0)_{t\in[0,T]}$ and $(f_t^0)_{t\in[0,T]}$ be $(\mathcal F_t)_{t\in[0,T]}$-adapted processes with paths in the spaces $C^0([0,T],C^1(\mathbb T^d,\mathbb R^d))$ and $C^0([0,T],C^{n-1}(\mathbb T^d))$, and let $g_T^0$ be an $\mathcal F_T$-measurable random variable with values in $C^n(\mathbb T^d)$, such that
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|b_t^0\|_1,\qquad
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|f_t^0\|_{n+\alpha-1},\qquad
\mathrm{essup}_{\omega\in\Omega}\|g_T^0\|_{n+\alpha}\ \le\ C,
\]
for some constant $C\ge 0$. Then, for any $k\in\{0,\dots,n\}$, we can find two constants $\lambda_k$ and $\Lambda_k$, depending on $C$, such that, denoting by $\mathcal B$ the set
\[
\mathcal B:=\Bigl\{w\in C^0([0,T],C^n(\mathbb T^d))\ :\ \forall k\in\{1,\dots,n\},\ \forall t\in[0,T],\ \|w_t\|_{k+\alpha}\le\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\Bigr\},
\]

it holds that, for any integer $N\ge 1$, any family of adapted processes $(\tilde m^i,\tilde u^i)_{i=1,\dots,N}$ with paths in $C^0([0,T],\mathcal P(\mathbb T^d))\times\mathcal B$, any families $(a_i)_{i=1,\dots,N}\in[0,1]^N$ and $(b_i)_{i=1,\dots,N}\in[0,1]^N$ with $a_1+\dots+a_N\le 2$ and $b_1+\dots+b_N\le 2$, and any input $(f_t)_{t\in[0,T]}$ and $g_T$ of the form
\[
f_t=\sum_{i=1}^N\bigl[a_i\tilde H_t(\cdot,D\tilde u_t^i)-b_i\tilde F_t(\cdot,\tilde m_t^i)\bigr]+f_t^0,
\qquad
g_T=\sum_{i=1}^Nb_i\tilde G(\cdot,\tilde m_T^i)+g_T^0,
\]
any solution $(\tilde m,\tilde u)$ to (4.10) for some $\vartheta,\varepsilon\in[0,1]$ has paths in $C^0([0,T],\mathcal P(\mathbb T^d))\times\mathcal B$, that is,
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}\le\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr),\qquad t\in[0,T].
\]

Proof. Consider the source term in the backward equation in (4.10):
\[
\varphi_t:=\vartheta\tilde H_t(\cdot,D\tilde u_t)-\varepsilon\tilde F_t(\cdot,\tilde m_t)+\sum_{i=1}^N\bigl[a_i\tilde H_t(\cdot,D\tilde u_t^i)-b_i\tilde F_t(\cdot,\tilde m_t^i)\bigr]+f_t^0.
\]
Then, for any $k\in\{1,\dots,n\}$, we can find a constant $C_k$ and a continuous nondecreasing function $\Phi_k$, independent of $(\tilde m^i,\tilde u^i)$, $i=1,\dots,N$, and of $(\tilde m,\tilde u)$ (but depending on the inputs $(b_t^0)_{t\in[0,T]}$, $(f_t^0)_{t\in[0,T]}$ and $g_T^0$), such that
\[
\|\varphi_t\|_{k+\alpha-1}\le C_k\Bigl(1+\Phi_k\bigl(\|\tilde u_t\|_{k+\alpha-1}+\max_{i=1,\dots,N}\|\tilde u_t^i\|_{k+\alpha-1}\bigr)\Bigr)
\times\Bigl(1+\|\tilde u_t\|_{k+\alpha}+\max_{i=1,\dots,N}\|\tilde u_t^i\|_{k+\alpha}\Bigr).
\tag{4.21}
\]
When $k=1$, the above bound holds true with $\Phi_1\equiv 0$: it then follows from (HF1(n-1)) and from the fact that $H$ (or equivalently $\tilde H_t$) is globally Lipschitz in $(x,p)$ (uniformly in $t$ if dealing with $\tilde H_t$ instead of $H$). When $k\in\{2,\dots,n\}$, it follows from the standard Faà di Bruno formula for the higher-order derivatives of the composition of two functions (together with the fact that $D_pH$ is globally bounded and that the higher-order derivatives of $H$ are locally bounded). Faà di Bruno's formula says that each $\Phi_k$ may be chosen as a polynomial function.

Therefore, by (4.21) and by (4.19) in the proof of Lemma 4.3.5 (and modifying the constant $C_k$ in such a way that $\|g_T^0\|_{k+\alpha}+\sup_{m\in\mathcal P(\mathbb T^d)}\|G(\cdot,m)\|_{k+\alpha}\le C_k$), we deduce that
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}\le\tilde C_k\Bigl(1+\int_t^T\frac{1}{\sqrt{s-t}}\Bigl(\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}+\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\|\tilde u_s^i\|_{k+\alpha}\Bigr)ds\Bigr),
\]
with
\[
\tilde C_k:=C_k\Bigl(1+\mathrm{essup}_{\omega\in\Omega}\sup_{s\in[0,T]}\Phi_k\bigl(\|\tilde u_s\|_{k+\alpha-1}+\max_{i=1,\dots,N}\|\tilde u_s^i\|_{k+\alpha-1}\bigr)\Bigr).
\tag{4.22}
\]
Now (independently of the above bound), by (4.21) and Lemma 4.3.5, we can modify $C_k$ in such a way that
\[
\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}}{\sqrt{s-t}}\,ds\le\tilde C_k\Bigl(1+\int_t^T\Bigl(\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}+\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\|\tilde u_s^i\|_{k+\alpha}\Bigr)ds\Bigr),
\]
so that, collecting the two last inequalities (and allowing the constant $C_k$ to increase from line to line as long as the definition of $\tilde C_k$ in terms of $C_k$ [see (4.22)] remains valid),
\[
\begin{aligned}
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}
&\le\tilde C_k\Bigl(1+\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}+\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\|\tilde u_s^i\|_{k+\alpha}}{\sqrt{s-t}}\,ds\Bigr)\\
&\le\tilde C_k\Bigl(1+\int_t^T\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s\|_{k+\alpha}\,ds
+\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\sup_{r\in[s,T]}\|\tilde u_r^i\|_{k+\alpha}}{\sqrt{s-t}}\,ds\Bigr).
\end{aligned}
\tag{4.23}
\]
Now, notice that the last term in the above right-hand side may be rewritten
\[
\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\sup_{r\in[s,T]}\|\tilde u_r^i\|_{k+\alpha}}{\sqrt{s-t}}\,ds
=\int_0^{T-t}\frac{\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\sup_{r\in[t+s,T]}\|\tilde u_r^i\|_{k+\alpha}}{\sqrt{s}}\,ds,
\]
which is clearly nonincreasing in $t$. Returning to (4.23), this permits application of Gronwall's lemma, from which we get
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}\le\tilde C_k\Bigl(1+\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\sup_{r\in[s,T]}\|\tilde u_r^i\|_{k+\alpha}}{\sqrt{s-t}}\,ds\Bigr).
\tag{4.24}
\]

In particular, if, for any $s\in[0,T]$ and any $i\in\{1,\dots,N\}$, $\mathrm{essup}_{\omega\in\Omega}\|\tilde u_s^i\|_{k+\alpha}\le\Lambda_k\exp(\lambda_k(T-s))$, then, for all $t\in[0,T]$,
\[
\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\max_{i=1,\dots,N}\sup_{r\in[s,T]}\|\tilde u_r^i\|_{k+\alpha}}{\sqrt{s-t}}\,ds
\le\Lambda_k\int_t^T\frac{\exp(\lambda_k(T-s))}{\sqrt{s-t}}\,ds
=\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\int_0^{T-t}\frac{\exp(-\lambda_ks)}{\sqrt{s}}\,ds,
\tag{4.25}
\]
the passage from the first to the second line following from a change of variable. Write now
\[
\begin{aligned}
\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\int_0^{T-t}\frac{\exp(-\lambda_ks)}{\sqrt{s}}\,ds
&=\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\int_0^{\infty}\frac{\exp(-\lambda_ks)}{\sqrt{s}}\,ds
-\Lambda_k\int_{T-t}^{+\infty}\frac{\exp(-\lambda_k(s-(T-t)))}{\sqrt{s}}\,ds\\
&=\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\int_0^{\infty}\frac{\exp(-\lambda_ks)}{\sqrt{s}}\,ds
-\Lambda_k\int_0^{+\infty}\frac{\exp(-\lambda_ks)}{\sqrt{T-t+s}}\,ds\\
&\le\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\int_0^{\infty}\frac{\exp(-\lambda_ks)}{\sqrt{s}}\,ds
-\Lambda_k\int_0^{+\infty}\frac{\exp(-\lambda_ks)}{\sqrt{T+s}}\,ds,
\end{aligned}
\]
and deduce, from (4.24) and (4.25), that we can find two constants $\gamma_1(\lambda_k)$ and $\gamma_2(\lambda_k)$ that tend to 0 as $\lambda_k$ tends to $+\infty$ such that
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}\le\tilde C_k\Bigl(1-\Lambda_k\gamma_1(\lambda_k)+\gamma_2(\lambda_k)\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\Bigr).
\]
Choosing $\lambda_k$ first such that $\gamma_2(\lambda_k)\tilde C_k\le 1$ and then $\Lambda_k$ such that $1\le\gamma_1(\lambda_k)\Lambda_k$, we finally get that
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t\|_{k+\alpha}\le\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr).
\]
The proof is easily completed by induction (on $k$). □
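The role of $\lambda_k$ can be checked numerically: the constant $\gamma_2(\lambda)=\int_0^\infty e^{-\lambda s}/\sqrt{s}\,ds=\sqrt{\pi/\lambda}$ tends to 0 as $\lambda\to+\infty$, so $\tilde C_k\gamma_2(\lambda_k)\le 1$ indeed holds for $\lambda_k$ large. A small sketch (the cutoff and step count are our own choices), using the substitution $s=v^2$ to remove the singularity at 0:

```python
import math

def gamma2(lam, cutoff=40.0, n=200000):
    # ∫_0^∞ e^{-λ s}/√s ds via s = v², ds = 2v dv  →  ∫_0^∞ 2 e^{-λ v²} dv = sqrt(π/λ)
    upper = math.sqrt(cutoff / lam)   # e^{-λ v²} is negligible beyond this point
    h = upper / n
    return sum(2 * math.exp(-lam * ((i + 0.5) * h) ** 2) for i in range(n)) * h

for lam in (1.0, 10.0, 100.0):
    approx = gamma2(lam)
    exact = math.sqrt(math.pi / lam)
    print(lam, approx, exact)
# γ2(λ) = sqrt(π/λ) → 0: taking λ_k large enough makes C̃_k γ2(λ_k) ≤ 1
```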

4.3.3 Case (ϑ, ε) = (1, 0)

Using a standard contraction argument, we are going to prove:

Proposition 4.3.8. Given some adapted inputs $(b_t)_{t\in[0,T]}$, $(f_t)_{t\in[0,T]}$ and $g_T$ satisfying
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|b_t\|_1,\qquad
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|f_t\|_{n+\alpha-1},\qquad
\mathrm{essup}_{\omega\in\Omega}\|g_T\|_{n+\alpha}\ <\ \infty,
\]
the system (4.10), with $\vartheta=1$ and $\varepsilon=0$, admits a unique adapted solution $(\tilde m_t,\tilde u_t)_{t\in[0,T]}$, with paths in $C^0([0,T],\mathcal P(\mathbb T^d))\times C^0([0,T],C^n(\mathbb T^d))$. As required, it satisfies $\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u_t\|_{n+\alpha}<\infty$.

Proof. Actually, the only difficulty is to solve the backward equation. Once the backward equation has been solved, the forward equation may be solved by means of Lemma 4.3.2.

To solve the backward equation, we make use of the Picard fixed-point theorem. Given an $(\mathcal F_t)_{t\in[0,T]}$-adapted process $(\tilde u_t)_{t\in[0,T]}$, with paths in $C^0([0,T],C^n(\mathbb T^d))$ and satisfying $\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u_t\|_{n+\alpha}<\infty$, we denote by $(\tilde u'_t)_{t\in[0,T]}$ the solution to the backward equation in (4.10), with $\vartheta=\varepsilon=0$ and with $(f_t)_{t\in[0,T]}$ replaced by $(f_t+\tilde H_t(\cdot,D\tilde u_t))_{t\in[0,T]}$. By Lemma 4.3.2, the process $(\tilde u'_t)_{t\in[0,T]}$ belongs to $C^0([0,T],C^n(\mathbb T^d))$ and satisfies $\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u'_t\|_{n+\alpha}<\infty$. This defines a mapping (with obvious domain and codomain) $\Psi:(\tilde u_t)_{t\in[0,T]}\mapsto(\tilde u'_t)_{t\in[0,T]}$. The point is to exhibit a norm for which it is a contraction.

Given two adapted inputs $(\tilde u_t^i)_{t\in[0,T]}$, $i=1,2$, with paths in $C^0([0,T],C^n(\mathbb T^d))$ and with $\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u_t^i\|_{n+\alpha}<\infty$, $i=1,2$, we call $(\tilde u_t^{\prime,i})_{t\in[0,T]}$, $i=1,2$, the images by $\Psi$. By Lemma 4.3.7 (with $N=1$, $\vartheta=\varepsilon=0$, $a_1=1$, and $b_1=0$), we can find constants $(\lambda_k,\Lambda_k)_{k=1,\dots,n}$ such that the set
\[
\mathcal B=\Bigl\{w\in C^0([0,T],C^n(\mathbb T^d))\ :\ \forall k\in\{0,\dots,n\},\ \forall t\in[0,T],\ \|w_t\|_{k+\alpha}\le\Lambda_k\exp\bigl(\lambda_k(T-t)\bigr)\Bigr\}
\]
is stable by $\Psi$. We shall prove that $\Psi$ is a contraction on $\mathcal B$.

We let $\tilde w_t=\tilde u_t^1-\tilde u_t^2$ and $\tilde w'_t=\tilde u_t^{\prime,1}-\tilde u_t^{\prime,2}$, for $t\in[0,T]$. We notice that
\[
-d_t\tilde w'_t=\bigl[\Delta\tilde w'_t-\langle\tilde V_t,D\tilde w_t\rangle\bigr]dt-d\tilde N_t,
\]
with the terminal boundary condition $\tilde w'_T=0$. Above, $(\tilde N_t)_{t\in[0,T]}$ is a process with paths in $C^0([0,T],C^{n-2}(\mathbb T^d))$ and, for any $x\in\mathbb T^d$, $(\tilde N_t(x))_{t\in[0,T]}$ is a martingale. Moreover, $(\tilde V_t)_{t\in[0,T]}$ is given by
\[
\tilde V_t(x)=\int_0^1D_p\tilde H_t\bigl(x,rD\tilde u_t^1(x)+(1-r)D\tilde u_t^2(x)\bigr)\,dr.
\]
We can find a constant $C$ such that, for any $\tilde u^1,\tilde u^2\in\mathcal B$,
\[
\sup_{t\in[0,T]}\|\tilde V_t\|_{n+\alpha-1}\le C.
\]
Therefore, for any $\tilde u^1,\tilde u^2\in\mathcal B$ and any $k\in\{0,\dots,n-1\}$,
\[
\|\langle\tilde V_t,D\tilde w_t\rangle\|_{k+\alpha}\le C\|\tilde w_t\|_{k+1+\alpha},\qquad\forall t\in[0,T],\qquad\tilde w:=\tilde u^1-\tilde u^2.
\]
Now, following (4.19), we deduce that, for any $k\in\{1,\dots,n\}$,
\[
\mathrm{essup}_{\omega\in\Omega}\|\tilde w'_t\|_{k+\alpha}\le C\int_t^T\frac{\mathrm{essup}_{\omega\in\Omega}\|\tilde w_s\|_{k+\alpha}}{\sqrt{s-t}}\,ds,
\tag{4.26}
\]
so that, for any $\mu>0$,
\[
\begin{aligned}
\int_0^T\mathrm{essup}_{\omega\in\Omega}\|\tilde w'_t\|_{k+\alpha}\exp(\mu t)\,dt
&\le C\int_0^T\mathrm{essup}_{\omega\in\Omega}\|\tilde w_s\|_{k+\alpha}\Bigl(\int_0^s\frac{\exp(\mu t)}{\sqrt{s-t}}\,dt\Bigr)ds\\
&\le C\Bigl(\int_0^{+\infty}\frac{\exp(-\mu s)}{\sqrt{s}}\,ds\Bigr)\int_0^T\mathrm{essup}_{\omega\in\Omega}\|\tilde w_s\|_{k+\alpha}\exp(\mu s)\,ds.
\end{aligned}
\]
Choosing $\mu$ large enough, we easily deduce that $\Psi$ has at most one fixed point in $\mathcal B$. Moreover, letting $\tilde u^0\equiv 0$ and defining by induction $\tilde u^{i+1}=\Psi(\tilde u^i)$, $i\in\mathbb N$, we easily deduce that, for $\mu$ large enough, for any $i,j\in\mathbb N$,
\[
\int_0^T\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t^{i+j}-\tilde u_t^i\|_{n+\alpha}\exp(\mu t)\,dt\le\frac{C}{2^i},
\]
so that (modifying the value of $C$)
\[
\int_0^T\mathrm{essup}_{\omega\in\Omega}\|\tilde u_t^{i+j}-\tilde u_t^i\|_{n+\alpha}\,dt\le\frac{C}{2^i}.
\]
Therefore, by definition of $\mathcal B$ and by (4.26), we deduce that, for any $\varepsilon>0$,
\[
\forall i\in\mathbb N,\qquad
\sup_{j\in\mathbb N}\,\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|\tilde u_t^{i+j}-\tilde u_t^i\|_{n+\alpha}\le C\sqrt{\varepsilon}+\frac{C}{2^i\sqrt{\varepsilon}},
\]
from which we deduce that $(\tilde u^i)_{i\in\mathbb N}$ converges in $L^\infty(\Omega,C^0([0,T],C^n(\mathbb T^d)))$. The limit is in $\mathcal B$ and is a fixed point of $\Psi$.
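The Picard scheme of the proof, with its exponentially weighted norm, is easy to reproduce in a finite-dimensional toy model (our own choices of nonlinearity and data; this is not the actual infinite-dimensional map $\Psi$). We iterate the analogue of $\Psi$ for the scalar backward equation $u(t)=g+\int_t^T\sin(u(s))\,ds$ on a time grid and check that the iterates started from 0 converge to a fixed point:

```python
import math

T, N = 1.0, 2000
h = T / N          # time step of the grid t_j = j*h
g = 0.5            # terminal condition u(T) = g

def Psi(u):
    # one Picard step for the backward equation u(t) = g + ∫_t^T sin(u(s)) ds,
    # discretized with a right-endpoint Riemann sum
    v = [0.0] * (N + 1)
    v[N] = g
    for j in range(N - 1, -1, -1):
        v[j] = v[j + 1] + h * math.sin(u[j + 1])
    return v

u = [0.0] * (N + 1)            # u^0 ≡ 0
gap = float("inf")
for _ in range(60):            # u^{i+1} = Ψ(u^i)
    u_new = Psi(u)
    gap = max(abs(a - b) for a, b in zip(u, u_new))
    u = u_new

u_image = Psi(u)
residual = max(abs(a - b) for a, b in zip(u, u_image))
print(gap, residual)  # both negligible: the iterates have reached a fixed point
```

As in the proof, successive differences of the iterates decay at least geometrically in any exponentially weighted norm, which is what forces convergence here as well.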


Actually, by Lemma 4.3.7 (with $N=1$, $\vartheta=1$, $\varepsilon=0$ and $a_1=b_1=0$), any fixed point must be in $\mathcal B$, so that $\Psi$ has a unique fixed point in the whole space. □

4.3.4 Stability Estimates

Lemma 4.3.9. Consider two sets of inputs $(b,f,g)$ and $(b',f',g')$ to (4.10), when driven by two parameters $\vartheta,\varepsilon\in[0,1]$. Assume that $(\tilde m,\tilde u)$ and $(\tilde m',\tilde u')$ are associated solutions (with adapted paths that take values in $C^0([0,T],\mathcal P(\mathbb T^d))\times C^0([0,T],C^n(\mathbb T^d))$) that satisfy the conclusions of Lemma 4.3.7 with respect to some vectors of constants $\Lambda=(\Lambda_1,\dots,\Lambda_n)$ and $\lambda=(\lambda_1,\dots,\lambda_n)$. Then, we can find a constant $C\ge 1$, depending on the inputs and the outputs through $\Lambda$ and $\lambda$ only, such that, provided that
\[
\mathrm{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}\|b_t\|_1\le\frac{1}{C},
\]
it holds that
\[
\mathbb E\Bigl[\sup_{t\in[0,T]}\bigl(\|\tilde u_t-\tilde u'_t\|_{n+\alpha}^2+d_1^2(\tilde m_t,\tilde m'_t)\bigr)\Bigr]
\le C\Bigl(d_1^2(m_0,m'_0)+\mathbb E\Bigl[\sup_{t\in[0,T]}\|b_t-b'_t\|_0^2+\sup_{t\in[0,T]}\|f_t-f'_t\|_{n+\alpha-1}^2+\|g_T-g'_T\|_{n+\alpha}^2\Bigr]\Bigr).
\]

Remark 4.3.10. The precise knowledge of Λ and λ is crucial in order to make use of the convexity assumption on the Hamiltonian.

The proof relies on the following stochastic integration by parts formula:

Lemma 4.3.11. Let (m_t)_{t∈[0,T]} be an adapted process with paths in the space C⁰([0,T], P(T^d)) such that, with n as in the statement of Theorem 4.3.1, for any smooth test function ϕ ∈ C^n(T^d), P almost surely,

d_t ∫_{T^d} ϕ(x) dm_t(x) = [ ∫_{T^d} ( Δϕ(x) − ⟨β_t(x), Dϕ(x)⟩ ) dm_t(x) ] dt,  t ∈ [0, T],

for some adapted process (β_t)_{0≤t≤T} with paths in C⁰([0,T], [C⁰(T^d)]^d). (Notice, by separability of C^n(T^d), that the above holds true, P almost surely, for any smooth test function ϕ ∈ C^n(T^d).) Let (u_t)_{t∈[0,T]} be an adapted process with paths in C⁰([0,T], C^n(T^d)) such that, for any x ∈ T^d,

d_t u_t(x) = γ_t(x) dt + dM_t(x),  t ∈ [0, T],


MEAN FIELD GAME SYSTEM WITH A COMMON NOISE


where (γ_t)_{t∈[0,T]} and (M_t)_{t∈[0,T]} are adapted processes with paths in the space C⁰([0,T], C⁰(T^d)) and, for any x ∈ T^d, (M_t(x))_{t∈[0,T]} is a martingale. Assume that

esssup_{ω∈Ω} sup_{0≤t≤T} ( ‖u_t‖_n + ‖β_t‖₀ + ‖γ_t‖₀ + ‖M_t‖₀ ) < ∞.   (4.27)

Then, the process

( ∫_{T^d} u_t(x) dm_t(x) − ∫_0^t ∫_{T^d} ( γ_s(x) + Δu_s(x) − ⟨β_s(x), Du_s(x)⟩ ) dm_s(x) ds )_{t∈[0,T]}

is a continuous martingale.

Proof. Although slightly technical, the proof is quite standard. Given two reals s < t in [0, T], we consider a mesh s = r₀ < r₁ < · · · < r_N = t of the interval [s, t]. Then,

∫_{T^d} u_t(x) dm_t(x) − ∫_{T^d} u_s(x) dm_s(x)
= Σ_{i=0}^{N−1} ( ∫_{T^d} u_{r_{i+1}}(x) dm_{r_{i+1}}(x) − ∫_{T^d} u_{r_i}(x) dm_{r_i}(x) )
= Σ_{i=0}^{N−1} ( ∫_{T^d} u_{r_{i+1}}(x) dm_{r_{i+1}}(x) − ∫_{T^d} u_{r_{i+1}}(x) dm_{r_i}(x) )
 + Σ_{i=0}^{N−1} ( ∫_{T^d} u_{r_{i+1}}(x) dm_{r_i}(x) − ∫_{T^d} u_{r_i}(x) dm_{r_i}(x) )
= Σ_{i=0}^{N−1} ∫_{r_i}^{r_{i+1}} ∫_{T^d} ( Δu_{r_{i+1}}(x) − ⟨β_r(x), Du_{r_{i+1}}(x)⟩ ) dm_r(x) dr   (4.28)
 + Σ_{i=0}^{N−1} ∫_{T^d} ( ∫_{r_i}^{r_{i+1}} γ_r(x) dr + M_{r_{i+1}}(x) − M_{r_i}(x) ) dm_{r_i}(x).

By the conditional Fubini theorem and by (4.27),

E[ Σ_{i=0}^{N−1} ∫_{T^d} ( M_{r_{i+1}}(x) − M_{r_i}(x) ) dm_{r_i}(x) | F_s ]
= Σ_{i=0}^{N−1} E[ ∫_{T^d} E[ M_{r_{i+1}}(x) − M_{r_i}(x) | F_{r_i} ] dm_{r_i}(x) | F_s ] = 0,

so that

E[ S^N | F_s ] = 0,


where we have let

S^N := ∫_{T^d} u_t(x) dm_t(x) − ∫_{T^d} u_s(x) dm_s(x)
 − Σ_{i=0}^{N−1} ∫_{r_i}^{r_{i+1}} ∫_{T^d} ( Δu_{r_{i+1}}(x) − ⟨β_r(x), Du_{r_{i+1}}(x)⟩ ) dm_r(x) dr
 − Σ_{i=0}^{N−1} ∫_{T^d} ( ∫_{r_i}^{r_{i+1}} γ_r(x) dr ) dm_{r_i}(x).

Now, we notice that the sequence (S^N)_{N≥1} converges pointwise to

S^∞ := ∫_{T^d} u_t(x) dm_t(x) − ∫_{T^d} u_s(x) dm_s(x) − ∫_s^t ∫_{T^d} ( Δu_r(x) − ⟨β_r(x), Du_r(x)⟩ + γ_r(x) ) dm_r(x) dr.

As the sequence (S^N)_{N≥1} is bounded in L^∞(Ω, A, P), it is straightforward to deduce that, P almost surely,

E[ S^∞ | F_s ] = lim_{N→∞} E[ S^N | F_s ] = 0. □
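The telescoping decomposition (4.28) is a purely algebraic identity — splitting each increment of ⟨u, m⟩ into a measure increment and a function increment — and can be sanity-checked on a finite grid (all names and the random data below are our own, purely illustrative):

```python
import random

# Discrete check of the telescoping identity behind (4.28): for any
# sequences of functions (u_i) and weights (m_i) on a finite grid,
#   <u_N, m_N> - <u_0, m_0>
#     = Σ_i ( <u_{i+1}, m_{i+1}> - <u_{i+1}, m_i> )   # measure increments
#     + Σ_i ( <u_{i+1}, m_i>    - <u_i,     m_i> )    # function increments
random.seed(0)
K, N = 5, 8                      # grid size, number of mesh points
u = [[random.random() for _ in range(K)] for _ in range(N + 1)]
m = [[random.random() for _ in range(K)] for _ in range(N + 1)]

def bracket(f, mu):              # <f, mu> = Σ_x f(x) mu(x)
    return sum(a * b for a, b in zip(f, mu))

lhs = bracket(u[N], m[N]) - bracket(u[0], m[0])
rhs = sum(bracket(u[i+1], m[i+1]) - bracket(u[i+1], m[i]) for i in range(N)) \
    + sum(bracket(u[i+1], m[i]) - bracket(u[i], m[i]) for i in range(N))
assert abs(lhs - rhs) < 1e-12
```

In the proof, the measure increments produce the drift term of the forward equation, while the function increments split into the drift γ and the martingale increments of M.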



We now switch to the proof of Lemma 4.3.9.

Proof of Lemma 4.3.9. Following the deterministic case, the idea is to use the monotonicity condition. Using the same duality argument as in the deterministic case, we thus compute by means of Lemma 4.3.11:

d_t ∫_{T^d} ( ũ_t − ũ′_t ) d( m̃_t − m̃′_t )
= [ −ϑ ∫_{T^d} ⟨ Dũ_t − Dũ′_t, D_pH̃_t(·, Dũ_t) dm̃_t − D_pH̃_t(·, Dũ′_t) dm̃′_t ⟩
 − ∫_{T^d} ⟨ Dũ_t − Dũ′_t, b_t dm̃_t − b′_t dm̃′_t ⟩
 + ϑ ∫_{T^d} ( H̃_t(·, Dũ_t) − H̃_t(·, Dũ′_t) ) d( m̃_t − m̃′_t )
 − ε ∫_{T^d} ( F̃_t(·, m_t) − F̃_t(·, m′_t) ) d( m̃_t − m̃′_t )
 + ∫_{T^d} ( f_t − f′_t ) d( m̃_t − m̃′_t ) ] dt + dM_t,


where (M_t)_{t∈[0,T]} is a martingale, with the terminal boundary condition

∫_{T^d} ( ũ_T − ũ′_T ) d( m̃_T − m̃′_T )
= ε ∫_{T^d} ( G̃(·, m_T) − G̃(·, m′_T) ) d( m̃_T − m̃′_T ) + ∫_{T^d} ( g_T − g′_T ) d( m̃_T − m̃′_T ).

Making use of the convexity and monotonicity assumptions and taking the expectation, we can find a constant c > 0, depending on the inputs and the outputs through Λ and λ only, such that

ϑ c E ∫_0^T ∫_{T^d} |Dũ_t − Dũ′_t|² d( m̃_t + m̃′_t ) dt
≤ ‖u₀ − u₀′‖₁ d₁(m̃₀, m̃₀′) + E[ ‖g_T − g′_T‖₁ d₁(m̃_T, m̃′_T) ]
+ E ∫_0^T ‖b_t − b′_t‖₀ ‖ũ_t − ũ′_t‖₁ dt   (4.29)
+ E ∫_0^T ( ‖⟨b_t, Dũ_t − Dũ′_t⟩‖₁ + ‖f_t − f′_t‖₁ d₁(m̃_t, m̃′_t) ) dt.

(As for the first term on the right-hand side, recall that u₀ and u₀′ are F₀-measurable and thus almost surely deterministic, since F₀ is almost surely trivial; this is Blumenthal's zero–one law.) We now implement the same strategy as in the proof of Proposition 3.2.1 in the deterministic case. Following (3.7), we get that there exists a constant C, depending on T, the Lipschitz constant of D_pH, and the parameters Λ and λ, such that

sup_{t∈[0,T]} d₁(m̃_t, m̃′_t) ≤ C ( d₁(m̃₀, m̃₀′) + sup_{t∈[0,T]} ‖b_t − b′_t‖₀ + ϑ ∫_0^T ∫_{T^d} |Dũ_s − Dũ′_s| d( m̃_s + m̃′_s ) ds ),   (4.30)



−1

  essupω∈Ω sup bt 1 E sup ˜ ut − u ˜t 22 t∈[0,T ]

  + η −1 E sup bt − bt 20 + sup ft − ft 21 + gT − gT 21 . t∈[0,T ]

(4.31)

t∈[0,T ]

t∈[0,T ]

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 4

108

Following the deterministic case, we let w ˜t = u ˜t − u ˜t , for t ∈ [0, T ], so that    ˜t , ˜ t1 − ft − ft dt − dN ˜t − ϑV˜t , Dw ˜t  +  R − dw ˜t = Δw

(4.32)

˜ T +g  −gT . Above, (N ˜t )t∈[0,T ] is with the terminal boundary condition w ˜T =  R T 0 0 d ˜t 0 < ∞, a process with paths in C ([0, T ], C (T )), with essupω∈Ω supt∈[0,T ] N d ˜ and, for any x ∈ T , (Nt (x))t∈[0,T ] is a martingale. Moreover, the coefficients ˜ 1 )t∈[0,T ] and R ˜ T are given by (V˜t )t∈[0,T ] , (R t V˜t (x) =

1 0



  ˜ t x, rD˜ u(x) + (1 − r)D˜ u (x) dr, Dp H

1

  δ F˜t  x, rm ˜ t + (1 − r)m ˜t −m ˜ t m ˜ t dr, 0 δm 1   δG ˜ T (x) = R (x, rm ˜ T + (1 − r)m ˜ T ) m ˜T −m ˜ T dr. δm 0

˜ t1 (x) R

=

Following the deterministic case, we have ˜ 1 n+α−1 + R ˜ T n+α  C sup d1 (m sup R ˜ t, m ˜ t ). t

t∈[0,T ]

t∈[0,T ]

(4.33)

Moreover, recalling that the outputs u ˜ and u ˜ are assumed to satisfy the conclusion of Lemma 4.3.7, we deduce that sup V˜t n+α−1  C.

t∈[0,T ]

In particular, for any k ∈ {0, . . . , n − 1} ∀t ∈ [0, T ],

˜t  k+α  C w ˜t k+1+α .

V˜t , Dw

Now, following (4.20) (with p = 1) in the proof of Lemma 4.3.7 and implementing (4.33), we get, for any t ∈ [0, T ],



w ˜t k+α  CE gT − gT k+α + + sup s∈[0,T ]

t



d 1 (m ˜ s, m ˜ s )  Ft

  CE gT − gT k+α + −1 0 1

T

 T

t

w ˜

√s k+α ds + sup fs − fs k+α−1 s−t s∈[0,T ]

w ˜s k+α ds + sup fs − fs k+α−1

  + sup d1 (m ˜ s, m ˜ s )  Ft , s∈[0,T ]

s∈[0,T ]

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

MEAN FIELD GAME SYSTEM WITH A COMMON NOISE

109

the second line following from Lemma 4.3.6 (with p = 1). By Doob’s inequality, we deduce that

  2  2 E sup w ˜s k+α  E gT − gT k+α + s∈[t,T ]

T t

w ˜s 2k+α ds

 + sup fs − fs 2k+α−1 + sup d21 (m ˜ s, m ˜ s ) . s∈[0,T ]

s∈[0,T ]

By Gronwall’s lemma, we deduce that, for any k ∈ {1, . . . , n},    ˜t 2k+α  CE gT − gT 2k+α E sup w t∈[0,T ]

 + sup ft − ft 2k+α−1 + sup d21 (m ˜ t, m ˜ t ) . t∈[0,T ]

(4.34)

t∈[0,T ]

We finally go back to (4.31). Choosing η small enough and assuming that essupω∈Ω bt 1 is also small enough, we finally obtain (modifying the constant C):     E sup ˜ ut − u ˜t 2n+α + d21 (m ˜ t, m ˜ t )  C d21 (m0 , m0 ) t∈[0,T ]

  + E sup bt − bt 20 + sup ft − ft 2n+α−1 + gT − gT 2n+α , t∈[0,T ]

t∈[0,T ]

which completes the proof.

4.3.5



Proof of Theorem 4.3.1

We now finish the proof of Theorem 4.3.1. First step. We first notice that the L2 stability estimate in the statement is a direct consequence of Lemma 4.3.7 (in order to bound the solutions) and of Lemma 4.3.9 (in order to get the stability estimate itself), provided that existence and uniqueness hold true. Second step (a). We now prove that, given an initial condition m0 ∈ P(Td ), the system (4.7) is uniquely solvable. The strategy consists in increasing inductively the value of , step by step, from  = 0 to  = 1, and to prove, at each step, that existence and uniqueness hold true. At each step of the induction, the strategy relies on a fixed-point argument. It works as follows. Given some  ∈ [0, 1), we assume that, for any input (f, g) in a certain class, we can (uniquely) solve (in the same sense as in the statement of Theorem 4.3.1)

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 4

110    ˜ t (·, D˜ dt m ˜ t = Δm ˜ t + div m ˜ t Dp H ut ) dt,   ˜ t, ˜ t (·, D˜ ˜t = −Δ˜ ut + H ut ) − F˜t (·, mt ) + ft dt + dM dt u

(4.35)

˜ mT )+gT as the boundary ˜T = G(·, with m ˜ 0 = m0 as the initial condition and u condition. Then, the objective is to prove that the same holds true for  replaced by  + , for  > 0 small enough (independent of ). Freezing an input (f¯, g¯) in the admissible class, the point is to show that the mapping    ft = −F˜t (·, mt ) + f¯t t∈[0,T ] → (m ˜ t )t∈[0,T ] , Φ : (m ˜ t )t∈[0,T ] → ˜ mT ) + g¯T gT = G(·, is a contraction on the space of adapted processes (m ˜ t )t∈[0,T ] with paths in C 0 ([0, T ], P(Td )), where the last output is given as the forward component of the solution of the system (4.35). The value of  being given, we assume that the input (f¯, g¯) is of the form f¯t = −

N

bi F˜t (·, mit ),

g¯T =

i=1

N

˜ mi ), bi G(·, T

(4.36)

i=1

where N  1, b1 , . . . , bN  0, with  + b1 + · · · + bN  2, and (m ˜ i )i=1,...,N i (or equivalently (m )i=1,...,N ) is a family of N adapted processes with paths in C 0 ([0, T ], P(Td )). Such an input ((f¯t )t∈[0,T ] , g¯T ) being given, we consider two adapted pro(1) (2) cesses (m ˜ t )t∈[0,T ] and (m ˜ t )t∈[0,T ] with paths in C 0 ([0, T ], P(Td )) (or equiva(1)

(2)

lently (mt )t∈[0,T ] and (mt )t∈[0,T ] without the push-forwards by each of the √ mappings (Td  x → x − 2Wt ∈ Td )t∈[0,T ] , cf. Remark 4.1.2), and we let (l)

ft and

 (l)  := −F˜t ·, mt + f¯t , t ∈ [0, T ];

  (l) ˜ ·, m(l) + g¯T ; gT := −G T

 (l)  , m ˜ (l) := Φ m ˜

l = 1, 2.

l = 1, 2.

Second step (b). By Lemma 4.3.7, we can find positive constants (λk )k=1,...,n and (Λk )k=1,...,n such that, whenever (m ˜ t, u ˜t )t∈[0,T ] solves (4.35) with respect to an input ((f¯t )t∈[0,T ] , g¯T ) of the same type as in (4.36), then ∀k ∈ {1, . . . , n}, ∀t ∈ [0, T ],

−1 0 1

  essupω∈Ω ˜ ut k+α  Λk exp λk (T − t) .

It is worth mentioning that the values of (λk )k=0,...,n and (Λk )k=0,...,n are universal in the sense that they depend neither on  nor on the precise shape of the inputs (f¯, g¯) when taken in the class (4.36). In particular, any output (m ˜ t , ut )t∈[0,T ] obtained by solving (4.35) with the same input (f, g) as in the definition of the mapping Φ must satisfy the same bound.

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

MEAN FIELD GAME SYSTEM WITH A COMMON NOISE

111

Second step (c). We apply Lemma 4.3.9 with b = b = 0, (ft , ft )0tT = (1) (2) and (gT , gT ) = (¯ gT , g¯T ). We deduce that

(1) (2) (f¯t , f¯t )0tT ,

   (1) (2)  (1) (2) E sup d21 (m ˜t ,m ˜ t )  2 C E sup F˜t (·, mt ) − F˜t (·, mt ) 2n+α−1 t∈[0,T ]

t∈[0,T ] (1)

(2)

˜ T (·, m ) − G ˜ T (·, m ) 2n+α + G T T



,

the constant C being independent of  and of the precise shape of the input (f¯, g¯) in the class (4.36). Up to a modification of C, we deduce that   (1) (2)  (1) (2)  ˜t ,m ˜ t )  2 CE sup d21 (m ˜t ,m ˜t ) , E sup d21 (m t∈[0,T ]

t∈[0,T ]

which shows that Φ is a contraction on the space L2 (Ω, A, P; C 0 ([0, T ], P(Td ))), when  is small enough (independently of  and of (f¯, g¯) in the class (4.36)). By the Picard fixed-point theorem, we deduce that the system (4.35) is solvable when  is replaced by  + ε (and for the same input (f¯, g¯) in the class (4.36)). By Lemmata 4.3.7 and 4.3.9, the solution must be unique. Third step. We finally establish the L∞ version of the stability estimates. The trick is to derive the L∞ estimate from the L2 version of the stability estimates, which seems rather surprising at first sight but that is quite standard in the theory of backward stochastic differential equations (SDEs). The starting point is to notice that the expectation in the proof of the L2 version permits getting rid of the martingale part when applying Itˆ o’s formula in the proof of Lemma 4.3.9 (see, for instance, (4.29)). Actually, it would suffice to use the conditional expectation given F0 in order to get rid of it, which means that the L2 estimate may be written as     E sup d21 (m ˜ t, m ˜ t ) + ˜ ut − u ˜t 2n+α |F0  Cd21 (m0 , m0 ), t∈[0,T ]

which holds P almost surely. Of course, when m0 and m0 are deterministic the foregoing conditional bound does not say anything more in comparison with the original one: when m0 and m0 are deterministic, the σ-field F0 contains no information and is almost surely trivial. Actually, the inequality is especially meaningful when the initial time 0 is replaced by another time t ∈ (0, T ], in ˜ t and are thus random. The which case the initial conditions become m ˜ t and m trick is thus to say that the same inequality as earlier holds with any time t ∈ [0, T ] as initial condition instead of 0. This proves that     ˜ s, m ˜ s ) + ˜ us − u ˜s 2n+α |Ft  Cd21 (mt , mt ). E sup d21 (m s∈[t,T ]


Since ‖ũ_t − ũ′_t‖_{n+α} is F_t-measurable, we deduce that

‖ũ_t − ũ′_t‖_{n+α} ≤ C d₁(m_t, m′_t).

Plugging the above bound into (4.30), we deduce that (modifying C if necessary)

sup_{t∈[0,T]} d₁(m_t, m′_t) ≤ C d₁(m₀, m₀′).

Collecting the last two bounds, the proof is easily completed. □
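The continuation scheme of the second step — reach ε = 1 by repeatedly solving a fixed-point problem whose contraction constant is uniform in ε, so that a fixed step size ε̄ suffices — can be mimicked on a toy scalar equation (the equation, step size, and names below are entirely our own illustration):

```python
import math

# Toy continuation sketch (ours): solve x = eps*tanh(x) + 1 for eps = 1
# by raising eps in fixed steps; at each level the Picard iteration
# converges (locally |eps * tanh'(x)| < 1 near the fixed point), and the
# previous level's solution is used as a warm start.
def solve_level(eps, x0, iters=200):
    x = x0
    for _ in range(iters):               # Picard iteration at level eps
        x = eps * math.tanh(x) + 1.0
    return x

x, step, eps = 0.0, 0.1, 0.0
while eps < 1.0 - 1e-12:
    eps = min(1.0, eps + step)           # fixed increment, independent of eps
    x = solve_level(eps, x)
assert abs(x - (math.tanh(x) + 1.0)) < 1e-9   # solves the eps = 1 equation
```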

4.4 LINEARIZATION

Assumption. Throughout the section, α stands for a Hölder exponent in (0, 1). The purpose here is to follow Section 3.3 and to discuss the following linearized version of the system (4.7):

d_t z̃_t = [ −Δz̃_t + ⟨Ṽ_t(·), Dz̃_t⟩ − ( δF̃_t/δm )(·, m_t)(ρ_t) + f̃_t⁰ ] dt + dM̃_t,
∂_t ρ̃_t − Δρ̃_t − div( ρ̃_t Ṽ_t ) − div( m̃_t Γ_t Dz̃_t + b̃_t⁰ ) = 0,   (4.37)

with a boundary condition of the form

z̃_T = ( δG̃/δm )(·, m_T)(ρ_T) + g̃_T⁰,

where (M̃_t)_{t∈[0,T]} is the so-called martingale part of the backward equation; that is, (M̃_t)_{t∈[0,T]} is an (F_t)_{t∈[0,T]}-adapted process with paths in the space C⁰([0,T], C⁰(T^d)), such that, for any x ∈ T^d, (M̃_t(x))_{t∈[0,T]} is an (F_t)_{t∈[0,T]}-martingale.

Remark 4.4.1. Above, we used the same convention as in Remark 4.1.2. For (ρ̃_t)_{t∈[0,T]} with paths in C⁰([0,T], (C^k(T^d))′) for some k ≥ 0, we let (ρ_t)_{t∈[0,T]} be the Schwartz distribution-valued random function with paths in the space C⁰([0,T], (C^k(T^d))′) defined by

⟨ϕ, ρ_t⟩_{C^k(T^d),(C^k(T^d))′} = ⟨ϕ(· + √2 W_t), ρ̃_t⟩_{C^k(T^d),(C^k(T^d))′}.

Generally speaking, the framework is the same as that used in Section 3.3, namely we can find a constant C ≥ 1 such that:

1. The initial condition ρ̃₀ = ρ₀ takes values in (C^{n+α′}(T^d))′, for some α′ ∈ (0, α), and, unless it is explicitly stated, it is deterministic.


2. (Ṽ_t)_{t∈[0,T]} is an adapted process with paths in C⁰([0,T], C^n(T^d, R^d)), with

esssup_{ω∈Ω} sup_{t∈[0,T]} ‖Ṽ_t‖_{n+α} ≤ C.

3. (m̃_t)_{t∈[0,T]} is an adapted process with paths in C⁰([0,T], P(T^d)).

4. (Γ_t)_{t∈[0,T]} is an adapted process with paths in C⁰([0,T], [C¹(T^d)]^{d×d}) such that, with probability 1,

sup_{t∈[0,T]} ‖Γ_t‖₁ ≤ C,  C^{−1} I_d ≤ Γ_t(x) ≤ C I_d,  ∀(t, x) ∈ [0, T] × T^d.

5. (b̃_t⁰)_{t∈[0,T]} is an adapted process with paths in C⁰([0,T], [(C^{n+α′−1}(T^d))′]^d), and (f̃_t⁰)_{t∈[0,T]} is an adapted process with paths in C⁰([0,T], C^n(T^d)), with

esssup_{ω∈Ω} sup_{t∈[0,T]} ( ‖b̃_t⁰‖_{−(n+α′−1)} + ‖f̃_t⁰‖_{n+α} ) < ∞.

6. g̃_T⁰ is an F_T-measurable random variable with values in C^{n+1}(T^d), with esssup_{ω∈Ω} ‖g̃_T⁰‖_{n+1+α} < ∞.

Here is the analogue of Lemma 3.3.1:

Theorem 4.4.2. Under the assumptions (1)–(6) right above and (HF1(n)) and (HG1(n+1)), for n ≥ 2 and β ∈ (α′, α), the system (4.37) admits a unique solution (ρ̃, z̃, M̃), adapted with respect to the filtration (F_t)_{t∈[0,T]}, with paths in the space C⁰([0,T], (C^{n+β}(T^d))′ × C^{n+1+β}(T^d) × C^{n−1+β}(T^d)) and with esssup_ω sup_{t∈[0,T]} ( ‖ρ̃_t‖_{−(n+β)} + ‖z̃_t‖_{n+1+β} + ‖M̃_t‖_{n−1+β} ) < ∞. It satisfies

esssup_{ω∈Ω} sup_{t∈[0,T]} ( ‖ρ̃_t‖_{−(n+α′)} + ‖z̃_t‖_{n+1+α} + ‖M̃_t‖_{n+α−1} ) < ∞.

The proof imitates that of Theorem 4.3.1 and relies on a continuation argument. For a parameter ϑ ∈ [0, 1], we consider the system

d_t z̃_t = [ −Δz̃_t + ⟨Ṽ_t(·), Dz̃_t⟩ − ϑ ( δF̃_t/δm )(·, m_t)(ρ_t) + f̃_t⁰ ] dt + dM̃_t,
∂_t ρ̃_t − Δρ̃_t − div( ρ̃_t Ṽ_t ) − div( ϑ m̃_t Γ_t Dz̃_t + b̃_t⁰ ) = 0,   (4.38)

with the boundary conditions

ρ̃₀ = ρ₀,  z̃_T = ϑ ( δG̃/δm )(·, m_T)(ρ_T) + g̃_T⁰.   (4.39)


As above, the goal is to prove, by increasing step by step the value of ϑ, that the system (4.38) with the boundary condition (4.39) has a unique solution for any ϑ ∈ [0, 1]. Following the discussion after Theorem 4.3.1, notice that, whenever (b_t)_{t∈[0,T]} is a process with paths in C⁰([0,T], C^{−(n+β)}(T^d)), for some β ∈ (α′, α), the quantity sup_{t∈[0,T]} ‖b_t‖_{−(n+α′)} is a random variable, equal to sup_{t∈[0,T]∩Q} ‖b_t‖_{−(n+α′)}. Moreover,

esssup_{ω∈Ω} sup_{t∈[0,T]} ‖b_t‖_{−(n+α′)} = sup_{t∈[0,T]} esssup_{ω∈Ω} ‖b_t‖_{−(n+α′)}.

Below, we often omit the process (M̃_t)_{t∈[0,T]} when denoting a solution; namely, we often write (ρ̃_t, z̃_t)_{t∈[0,T]} instead of (ρ̃_t, z̃_t, M̃_t)_{t∈[0,T]}, so that the backward martingale component is understood implicitly. We feel that the rule is quite clear now: in a systematic way, the martingale component has two degrees of regularity less than (z̃_t)_{t∈[0,T]}. Throughout the subsection, we assume that the assumption of Theorem 4.4.2 is in force.

4.4.1 Case ϑ = 0

We start with the case ϑ = 0:

Lemma 4.4.3. Assume that ϑ = 0 in the system (4.38) with the boundary condition (4.39). Then, for any β ∈ (α′, α), there is a unique solution (ρ̃, z̃), adapted with respect to (F_t)_{t∈[0,T]}, with paths in C⁰([0,T], (C^{n+β}(T^d))′ × C^{n+1+β}(T^d)) and with esssup_ω sup_{t∈[0,T]} ( ‖ρ̃_t‖_{−(n+β)} + ‖z̃_t‖_{n+1+β} ) < ∞. Moreover, we can find a constant C′, depending only on C, T, d, and the bounds in (HF1(n)) and (HG1(n+1)), such that

esssup_{ω∈Ω} sup_{t∈[0,T]} ‖ρ̃_t‖_{−(n+α′)} ≤ C′ ( ‖ρ₀‖_{−(n+α′)} + esssup_{ω∈Ω} sup_{t∈[0,T]} ‖b̃_t⁰‖_{−(n+α′−1)} ),
esssup_{ω∈Ω} sup_{t∈[0,T]} ‖z̃_t‖_{n+1+α} ≤ C′ ( esssup_{ω∈Ω} ‖g̃_T⁰‖_{n+1+α} + esssup_{ω∈Ω} sup_{t∈[0,T]} ‖f̃_t⁰‖_{n+α} ).

Proof. When ϑ = 0, there is no coupling in the equation and it simply reads

(i) d_t z̃_t = [ −Δz̃_t + ⟨Ṽ_t(·), Dz̃_t⟩ + f̃_t⁰ ] dt + dM̃_t,
(ii) ∂_t ρ̃_t − Δρ̃_t − div( ρ̃_t Ṽ_t ) − div( b̃_t⁰ ) = 0,   (4.40)

−1 0 1

with the boundary condition ρ˜0 = ρ0 and z˜T = g˜T0 . First step. Let us first consider the forward equation (4.40-(ii)). We notice that, whenever ρ0 and (˜b0t )t∈[0,T ] are smooth in the space variable, the forward equation may be solved pathwise in the classical sense. Then, by the same duality technique as in Lemma 3.3.1 (with the restriction that the role played by n in


the statement of Lemma 3.3.1 is now played by n − 1 and that the coefficients c and b in the statement of Lemma 3.3.1 are now respectively denoted by b̃⁰ and f̃⁰), for any β ∈ [α′, α], it holds, P almost surely, that

sup_{t∈[0,T]} ‖ρ̃_t‖_{−(n+β)} ≤ C′ ( ‖ρ₀‖_{−(n+β)} + sup_{t∈[0,T]} ‖b̃_t⁰‖_{−(n−1+β)} ).   (4.41)

Whenever ρ₀ and (b̃_t⁰)_{t∈[0,T]} are not smooth but take values in (C^{n+α′}(T^d))′ and (C^{n+α′−1}(T^d))′ only, we can mollify them by a standard convolution argument. Denoting the mollified sequences by (ρ₀^N)_{N≥1} and ((b̃_t^{0,N})_{t∈[0,T]})_{N≥1}, it is standard to check that, for any β ∈ (α′, α), P almost surely,

lim_{N→+∞} ( ‖ρ₀^N − ρ₀‖_{−(n+β)} + sup_{t∈[0,T]} ‖b̃_t^{0,N} − b̃_t⁰‖_{−(n−1+β)} ) = 0,   (4.42)

from which, together with (4.41), we deduce that, P almost surely, the sequence ((ρ̃_t^N)_{t∈[0,T]})_{N≥1} is Cauchy in the space C([0,T], (C^{n+β}(T^d))′), where each (ρ̃_t^N)_{t∈[0,T]} denotes the solution of the forward equation (4.40)-(ii) with inputs (ρ₀^N, (b̃_t^{0,N})_{t∈[0,T]}). With probability 1 under P, the limit of the Cauchy sequence belongs to C([0,T], (C^{n+β}(T^d))′) and satisfies (4.41). Pathwise, it solves the forward equation. Note that the duality techniques of Lemma 3.3.1 are valid for any solution (ρ̃_t)_{t∈[0,T]} of the forward equation in (4.40)-(ii) with paths in the space C⁰([0,T], (C^{n+β}(T^d))′). This proves uniqueness for the forward equation. Finally, it is plain to see that the solution is adapted with respect to the filtration (F_t)_{t∈[0,T]}: the solutions are constructed as limits of Cauchy sequences, which may be shown to be adapted by means of a Duhamel-type formula.

Second step. For the backward component of (4.40), we can adapt Proposition 4.3.8: the solution is adapted, has paths in C⁰([0,T], C^{n+1+β}(T^d)), for any β ∈ (α′, α), and, following (4.11), it satisfies

esssup_{ω∈Ω} sup_{0≤t≤T} ‖z̃_t‖_{n+1+α} ≤ C′ ( esssup_{ω∈Ω} ‖g̃_T⁰‖_{n+1+α} + esssup_{ω∈Ω} sup_{t∈[0,T]} ‖f̃_t⁰‖_{n+α} ),

which completes the proof. □

4.4.2 Stability Argument

Stability Argument

The purpose is now to increase ϑ step by step in order to prove that (4.38)–(4.39) has a unique solution. We start with the following consequence of Lemma 4.4.3:


Lemma 4.4.4. Given some ϑ ∈ [0, 1]; an initial condition ρ˜0 in (C n+α (Td )) ; ˜ t , Γt )t∈[0,T ] as in points 2, 3, and 4 of the introduction a set of coefficients (V˜t , m of Section 4.4; and a set of inputs ((˜b0t , f˜t0 )t∈[0,T ] , g˜T0 ) as in points 5 and 6 of the introduction to Section 4.4, consider a solution (˜ ρt , z˜t )t∈[0,T ] of the system (4.38) with the boundary condition (4.39), the solution being adapted with respect to the filtration (Ft )t∈[0,T ] , having paths in the space C 0 ([0, T ], (C n+β (Td )) ) × C 0 ([0, T ], C n+1+β (Td )), for some β ∈ (α , α), and satisfying ρt −(n+β) + ˜ zt n+1+β ) < ∞. essupω∈Ω sup ( ˜ t∈[0,T ]

Then,

  essupω∈Ω sup ˜ ρt −(n+α ) + ˜ zt n+1+α < ∞. t∈[0,T ]

Proof.

Given a solution (˜ ρt , z˜t )t∈[0,T ] as in the statement, we let ˆb0 = ˜b0 + ϑm ˜ t Γt D˜ zt , t t gˆT0 = g˜T0 + ϑ

δ F˜t (·, mt )(ρt ), fˆt0 = f˜t0 − ϑ δm

t ∈ [0, T ] ;

˜ δG (·, mT )(ρT ). δm

Taking advantage of the assumption (HF1(n)), we can check that (ˆb0t )t∈[0,T ] , (fˆt0 )t∈[0,T ] and gˆT0 satisfy the same assumptions as (˜b0t )t∈[0,T ] , (f˜t0 )t∈[0,T ] and g˜T0 in the introduction to Section 4.4. The result then follows from Lemma 4.4.3.  The strategy now relies on a new stability argument, which is the analogue of Lemma 4.3.9: Proposition 4.4.5. Given some ϑ ∈ [0, 1]; two initial conditions ρ˜0 and ρ˜0 in  ˜ t , Γt )t∈[0,T ] and (V˜t , m ˜ t , Γt )t∈[0,T ] as (C n+α (Td )) ; two sets of coefficients (V˜t , m in points 2, 3, and 4 of the introduction to Section 4.4; and two sets of inputs ˜0 ˜T0 ) as in points 5 and 6 of the introduc((˜b0t , f˜t0 )t∈[0,T ] , g˜T0 ) and ((˜b0 t , ft )t∈[0,T ] , g ρt , z˜t )t∈[0,T ] of the tion to Section 4.4, consider two solutions (˜ ρt , z˜t )t∈[0,T ] and (˜ system (4.38) with the boundary condition (4.39), both being adapted with respect to the filtration (Ft )t∈[0,T ] , having paths in the space C 0 ([0, T ], (C n+β (Td )) ) × C 0 ([0, T ], C n+1+β (Td )), for some β ∈ (α , α), and satisfying essupω∈Ω sup



t∈[0,T ]

−1 0 1

Then, it holds that



˜ ρt −(n+β) + ˜ zt n+1+β + ˜ ρt −(n+β) + ˜ zt n+1+β < ∞.

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

MEAN FIELD GAME SYSTEM WITH A COMMON NOISE

117

  2  2 E sup ˜ zt − z˜t n+1+α + sup ˜ ρt − ρ˜t −(n+α ) t∈[0,T ]

t∈[0,T ]

 2  C  ˜ ρ0 − ρ˜0 2−(n+α ) + E sup ˜b0t − ˜b0 t −(n+α −1) t∈[0,T ]

+ sup t∈[0,T ]

+ sup

f˜t0

+ ˜ gT0 − g˜T0 2n+1+α



 

˜ zt 2n+1+α + ˜ ρt 2−(n+α ) × V˜t − V˜t 2n+α

t∈[0,T ]

+



f˜t0 2n+α

[d1 (mt , mt )]2

+ Γt −

Γt 20



 ,

the constant C  depending only upon C in the introduction to Section 4.4, T , d, α, and α . Proof. First step. The first step is to make use of a duality argument. We start with the case when ρ˜0 , ρ˜0 , ˜b0 and ˜b0 are smooth. Letting ˆb0t = zt + ˜b0t and ˆb0 ˜ t Γt D˜ zt + ˜b0 ρt )t∈[0,T ] and (˜ ρt )t∈[0,T ] ϑm ˜ t Γt D˜ t = ϑm t , for t ∈ [0, T ], (˜ 0 0 solve the linear equation (ii) in (4.40) with (˜bt )t∈[0,T ] and (˜bt )t∈[0,T ] replaced ˜0 by (ˆb0t )t∈[0,T ] and (ˆb0 t )t∈[0,T ] respectively. By Lemma 4.4.3 with (bt )t∈[0,T ] in 0 (4.40) equal to (ˆbt )t∈[0,T ] and with n in the statement of Lemma 4.4.3 reρt )t∈[0,T ] have bounded paths placed by n − 1, we deduce that (˜ ρt )t∈[0,T ] and (˜ in C 0 ([0, T ], (C n−1+β (Td )) ), for the same β ∈ (α , α) as in the statement of Proposition 4.4.5. With a suitable adaptation of Lemma 4.3.11 and with the same kind of notations as in Section 3.3, this allows us to expand the infinitesimal variation of the duality bracket ˜ zt − z˜t , ρ˜t − ρ˜t n+β , where lg ·, ·n+β stands for lg ·, ·C n+β (Td ),(C n+β (Td )) . We compute   dt z˜t − z˜t , ρ˜t − ρ˜t n+β          ˜ zt − z˜t ), ρ˜t V˜t − V˜t dt = − D(˜ + D˜ zt , V˜t − V˜t ρ˜t − ρ˜t n+β n+β        + dt dt − D z˜t − z˜t , ˜b0t − ˜b0 f˜t0 − f˜t0 , ρ˜t − ρ˜t t n+β

  t −ϑ (·, mt ) ρt − ρt , ρ˜t − ρ˜t δm n+β    δ F˜ ˜   δ F t t dt (·, mt ) − (·, mt ) ρt , ρ˜t − ρ˜t + δm δm n+β       − ϑ D z˜t − z˜t , m ˜ t Γt D z˜t − z˜t n+β           dt + dt Mt , ˜ t Γt − m zt + D z˜t − z˜t , m ˜ t Γt D˜  δ F˜

n−1+β



n+β

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 4

118

where (Mt )0tT is a martingale and where we applied Remark 4.4.1 to define (ρt )t∈[0,T ] and (ρt )t∈[0,T ] . An important fact in the proof is that the martingale part in (4.40) has continuous paths in C 0 ([0, T ], C n−1+β (Td )), which allows us to ρt − ρ˜t )t∈[0,T ] give a sense to the duality bracket (in x) with (˜ ρt − ρ˜t )t∈[0,T ] , since (˜ 0 n−1+β d  is here assumed to have continuous paths in C ([0, T ], (C (T )) ). Similarly, ρt − ρ˜t )t∈[0,T ] makes the duality bracket of (˜ zt − z˜t )t∈[0,T ] with the Laplacian of (˜ sense and, conversely, the duality bracket of (˜ ρt )t∈[0,T ] with the Laplacian of (˜ zt − z˜t )t∈[0,T ] makes sense as well, the two of them canceling each other. Of course, the goal is to relax the smoothness assumption made on ρ˜0 , ρ˜0 , ˜b0 , and ˜b0 . Although it was pretty straightforward to do in the deterministic case, it is more difficult here because of the additional martingale term. As already mentioned, the martingale term is defined as a duality bracket between a path with values in C 0 ([0, T ], C n−1+β (Td )) and a path with values in C 0 ([0, T ], C −(n−1+β) (Td )). Of course, the problem is that this is no longer true in the general case that (˜ ρt − ρ˜t )t∈[0,T ] has paths in C 0 ([0, T ], C −(n−1+β) (Td )). To circumvent the difficulty, one way is to take first the expectation in order to cancel the martingale part and then to relax the smoothness conditions. Taking the expectation in the above formula, we get (in the mollified setting):   d  E z˜t − z˜t , ρ˜t − ρ˜t n+β dt      ˜ zt − z˜ ), ρ˜ V˜t − V˜  = −E D(˜ t t t n+β         + E D˜ zt , V˜t − V˜t ρ˜t − ρ˜t n+β         0 0  − E D z˜t − z˜t , ˜b0t − ˜b0 + E f˜t − f˜t , ρ˜t − ρ˜t t n+β

    δ F˜   t −ϑ E (·, mt ) ρt − ρt , ρ˜t − ρ˜t δm n+β  δ F˜   ˜   δ Ft t +E (·, mt ) − (·, mt ) ρt , ρ˜t − ρ˜t δm δm n+β          ˜ t Γt D z˜t − z˜t − ϑ E D z˜t − z˜t , m n+β           . ˜ t Γt − m zt ˜ t Γt D˜ + E D z˜t − z˜t , m

 n−1+β

(4.43)

n+β

−1 0 1

Whenever ρ˜0 , ρ˜0 , ˜b0 , and ˜b0 are not smooth (and thus just satisfy the assumption in the statement of Proposition 4.4.5), we can mollify them in the same way ρ,N as in the first step of Lemma 4.4.3. We respectively call (˜ ρN 0 )N 1 , (˜ 0 )N 1 , 0,N 0,N   ˜ ˜ (bt )N 1 , and (bt )N 1 the mollifying sequences. For any β ∈ (α , α) and P almost surely, the first two sequences respectively converge to ρ˜0 and ρ˜0 in norm · −(n+β  ) and the last two ones respectively converge to ˜b0t and ˜b0 t in ρt , z˜t )t∈[0,T ] and (˜ ρt , z˜t )t∈[0,T ] norm · −(n−1+β  ) , uniformly in t ∈ [0, T ]. With (˜ the original solutions given by the statement of Proposition 4.4.5, we denote,

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

MEAN FIELD GAME SYSTEM WITH A COMMON NOISE

119

for each N  1, by (˜ ρN ˜tN )t∈[0,T ] and (˜ ρ,N ˜t,N )t∈[0,T ] the respective solutions t ,z t ,z to (4.40), but with (˜b0t , f˜t0 , g˜T0 )t∈[0,T ] respectively replaced by ⎛ ˆ0,N bt := ˜b0,N + ϑm ˜ t Γt D˜ zt t ⎜ 0 δF 0 ⎜ fˆ := f˜ − ϑ (·, mt )(ρt ) t ⎜ t δm ⎝ δG gˆT0 := g˜T0 + ϑ (·, mT )(ρT ) δm ⎛ ˆ0,N bt := ˜b0,N + ϑm ˜ t Γt D˜ zt t ⎜ 0 δF   ⎜ ˆ ˜0 and ⎜ ft := ft − ϑ δm (·, mt )(ρt ) ⎝ δG gˆT0 := g˜T0 + ϑ (·, mT )(ρT ) δm

⎞ ⎟ ⎟ ⎟ ⎠ t∈[0,T ]

⎞ ⎟ ⎟ ⎟ ⎠

. t∈[0,T ]

ρ,N By linearity of (4.40) and by Lemma 4.4.3, we have that (˜ ρN t )N 1 t )N 1 and (˜ converge to ρ˜t and ρ˜t in norm · −(n+β) , uniformly in t ∈ [0, T ], and that zt,N )N 1 converge to z˜t and z˜t in norm · n+1+β , uniformly in (˜ ztN )N 1 and (˜ t ∈ [0, T ]. Then, we may write down the analogue of (4.43) for any mollified solution (˜ ρN ˜tN )N 1 (take note that the formulation of (4.43) for the mollified solutions t ,z is slightly different since the mollified solutions satisfy only an approximate version of (4.38)). Following (4.42), we can pass to the limit under the symbol E. By Lemma 4.4.3, we can easily exchange the almost sure convergence and the symbol E, proving that the identity (4.43) holds true under the standing assumption on ρ˜0 , ρ˜0 , (˜b0t )t∈[0,T ] , and (˜b0 t )t∈[0,T ] . Using the positive definiteness of Γ and the monotonicity of F˜ , we deduce that E



T         D z˜s − z˜s 2 dm ˜ s ds z˜T − z˜T , ρ˜T − ρ˜T n+β + C −1 ϑE t Td       E z˜0 − z˜0 , ρ˜0 − ρ˜0 n+β

T    Θ ˜ ρs − ρ˜s −(n+α ) + ˜ zs − z˜s n+1+α ds , + C E t

where Θ := ˜ ρ0 − ρ˜0 −(n+α ) + ˜ gT0 − g˜T0 n+1+α  + sup f˜s0 − f˜s0 n+α + ˜b0s − ˜b0 s −(n+α −1) s∈[0,T ]

   + ˜ zs n+1+α + ˜ ρs −(n+α )   × V˜s − V˜s n+α + d1 (ms , ms ) + Γs − Γs 0 .

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 4

120 Recalling that ˜ zT − z˜T , ρ˜T − ρ˜T n+β   δG ˜ (·, mT )(ρT − ρT ), ρ˜T − ρ˜T =ϑ δm n+β   δ G ˜ ˜  δG (·, mT ) − (·, mT ) (ρT ), ρ˜T − ρ˜T +ϑ δm δm n+β 0 0  + ˜ gT − g˜T , ρ˜T − ρ˜T n+β  −C  Θ ˜ ρT − ρ˜T −(n+α ) ,

where we have used the monotonicity of G to deduce the second line, we thus get

ϑE

T 

     D z˜s − z˜s 2 dm ˜ s ds d 0

T   C  E Θ ˜ z0 − z˜0 n+1+α + ˜ ρT − ρ˜T −(n+α )

T

+ 0



˜ ρs − ρ˜s −(n+α ) + ˜ zs − z˜s n+1+α ds

(4.44)  .

Second step. As a second step, we follow the strategy used in the deterministic  'T ' case in order to estimate ( ˜ ρt − ρ˜t −(n+α ) )t∈[0,T ] in terms of 0 ( Td |D z˜s −  ˜ s ) ds on the left-hand side of (4.44). z˜s |2 dm We use again a duality argument. Given ξ ∈ C n+α (Td ) and τ ∈ [0, T ], we consider the solution (w ˜t )t∈[0,τ ] , with paths in C 0 ([0, τ ], C n+β (Td )), to the backward PDE:   ∂t w ˜t = −Δw ˜t + V˜t (·), Dw ˜t  , (4.45) with the terminal boundary condition w ˜τ = ξ. Take note that the solution is not adapted. It satisfies (see the proof in the last step below), with probability 1, ∀t ∈ [0, τ ],

w ˜t n+α  C  ξ n+α ,

∀t ∈ [0, τ ),

w ˜t n+1+α  √

C

ξ n+α . τ −t

Then following the end of the proof of Lemma 3.3.1, we have

−1 0 1

dt  w ˜t , ρ˜t − ρ˜t n+α      ˜  − V˜t )˜ = − Dw ˜t , ˜b0t − ˜b0 dt + D w ˜ , ( V ρ dt t t t t n−1+α n+α     − ϑ Dw ˜t , m ˜ t Γt D z˜t − z˜t dt n+α   − ϑ Dw ˜ t , (m ˜ t Γt − m ˜ t Γt )D˜ zt dt,  n+α

(4.46)

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

MEAN FIELD GAME SYSTEM WITH A COMMON NOISE

121

so that
\[
\bigl\langle \xi, \tilde\rho_\tau - \tilde\rho_\tau'\bigr\rangle_{n+\alpha'}
\leq C' \|\xi\|_{n+\alpha'} \Bigl( \Theta + \vartheta \int_0^\tau \Bigl( \int_{\mathbb{T}^d} \bigl|D(\tilde z_s - \tilde z_s')\bigr|^2\, d\tilde m_s \Bigr)^{1/2} ds \Bigr).
\]
Therefore,
\[
\|\tilde\rho_\tau - \tilde\rho_\tau'\|_{-(n+\alpha')}
\leq C' \Bigl( \Theta + \vartheta \int_0^T \Bigl( \int_{\mathbb{T}^d} \bigl|D(\tilde z_s - \tilde z_s')\bigr|^2\, d\tilde m_s \Bigr)^{1/2} ds \Bigr).
\tag{4.47}
\]

Plugging (4.47) into (4.44), we obtain

\[
\vartheta\, \mathbb{E}\Bigl[\int_0^T \!\!\int_{\mathbb{T}^d} \bigl|D(\tilde z_s - \tilde z_s')\bigr|^2\, d\tilde m_s\, ds\Bigr]
\leq C'\, \mathbb{E}\Bigl[\Theta \Bigl( \Theta + \sup_{t\in[0,T]} \|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'} \Bigr)\Bigr].
\tag{4.48}
\]
Therefore,
\[
\mathbb{E}\Bigl[\sup_{t\in[0,T]} \|\tilde\rho_t - \tilde\rho_t'\|_{-(n+\alpha')}^2\Bigr]
\leq C'\, \mathbb{E}\Bigl[\Theta \Bigl( \Theta + \sup_{t\in[0,T]} \|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'} \Bigr)\Bigr].
\tag{4.49}
\]
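To spell out the step from (4.47)–(4.48) to (4.49), which is a routine application of Cauchy–Schwarz made explicit here for the reader's convenience:

```latex
% By Cauchy--Schwarz in s, pathwise from (4.47),
%   \sup_{t \in [0,T]} \|\tilde\rho_t - \tilde\rho_t'\|_{-(n+\alpha')}^2
%   \le C' \Bigl( \Theta^2
%       + \vartheta^2\, T \int_0^T \!\! \int_{\mathbb{T}^d}
%         |D(\tilde z_s - \tilde z_s')|^2\, d\tilde m_s\, ds \Bigr).
% Taking expectations and bounding the last term by (4.48)
% (using \vartheta \le 1) yields (4.49).
```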

Third step. We now combine the first two steps to get an estimate of $(\|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'})_{t\in[0,T]}$. Following the proof of (4.34) on the linear equation (4.32) and using the assumptions (HF1(n)) and (HG1(n+1)), we get that
\[
\mathbb{E}\Bigl[\sup_{t\in[0,T]} \|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'}^2\Bigr]
\leq C'\, \mathbb{E}\Bigl[\Theta^2 + \|\tilde\rho_T - \tilde\rho_T'\|_{-(n+\alpha')}^2
+ \int_0^T \|\tilde\rho_s - \tilde\rho_s'\|_{-(n+\alpha')}^2\, ds\Bigr].
\tag{4.50}
\]

By (4.49), we easily complete the proof.
It just remains to prove (4.46). The first line follows from Lemma 3.2.2. The second line may be proved as follows. Following (4.26), we have, with probability 1,
\[
\forall t \in [0,\tau), \quad
\|\tilde w_t\|_{n+1+\alpha'} \leq C' \Bigl( \frac{\|\xi\|_{n+\alpha'}}{\sqrt{\tau - t}} + \int_t^\tau \frac{\|\tilde w_s\|_{n+1+\alpha'}}{\sqrt{s - t}}\, ds \Bigr).
\tag{4.51}
\]

Integrating and allowing the constant C  to increase from line to line, we have, for all t ∈ [0, τ ),


\[
\begin{aligned}
\int_t^\tau \frac{\|\tilde w_s\|_{n+1+\alpha'}}{\sqrt{s-t}}\, ds
&\leq C' \Bigl( \|\xi\|_{n+\alpha'} \int_t^\tau \frac{ds}{\sqrt{\tau - s}\sqrt{s - t}}
+ \int_t^\tau \|\tilde w_r\|_{n+1+\alpha'} \Bigl( \int_t^r \frac{ds}{\sqrt{r - s}\sqrt{s - t}} \Bigr) dr \Bigr) \\
&\leq C' \Bigl( \|\xi\|_{n+\alpha'} + \int_t^\tau \|\tilde w_r\|_{n+1+\alpha'}\, dr \Bigr).
\end{aligned}
\]

Plugging the above estimate into (4.51), we get that
\[
\forall t \in [0,\tau), \quad
\sqrt{\tau - t}\, \|\tilde w_t\|_{n+1+\alpha'} \leq C' \Bigl( \|\xi\|_{n+\alpha'} + \int_t^\tau \sqrt{\tau - r}\, \|\tilde w_r\|_{n+1+\alpha'}\, dr \Bigr),
\]

which yields, by Gronwall’s lemma,

\[
\forall t \in [0,\tau), \quad
\|\tilde w_t\|_{n+1+\alpha'} \leq \frac{C'}{\sqrt{\tau - t}}\, \|\xi\|_{n+\alpha'},
\]
which is the required bound. $\square$
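For the reader's convenience, the form of Gronwall's lemma invoked in the last step can be stated as follows (a standard backward version, not restated in the text):

```latex
% Backward Gronwall inequality, as used above.
% If \phi : [0,\tau] \to [0,\infty) is bounded and measurable and
%   \phi(t) \le a + b \int_t^\tau \phi(r)\, dr, \qquad t \in [0,\tau],
% for constants a, b \ge 0, then
%   \phi(t) \le a\, e^{b(\tau - t)}, \qquad t \in [0,\tau].
% Here it is applied to \phi(t) = \sqrt{\tau - t}\,\|\tilde w_t\|_{n+1+\alpha'},
% with a = C'\|\xi\|_{n+\alpha'} and b = C'\sqrt{T}
% (bounding \sqrt{\tau - r} \le \sqrt{T} inside the integral),
% and the stated bound follows after dividing by \sqrt{\tau - t}.
```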

4.4.3 A Priori Estimate

A typical example of application of Proposition 4.4.5 is to choose $\tilde\rho_0' = 0$, $(\tilde b^{0,\prime}, \tilde f^{0,\prime}, \tilde g^{0,\prime}) \equiv (0,0,0)$, $\tilde V \equiv \tilde V'$, $\Gamma \equiv \Gamma'$, in which case
\[
(\tilde\rho', \tilde z') \equiv (0, 0).
\]
Then, Proposition 4.4.5 provides an a priori $L^2$ estimate of the solutions to (4.37). (Take note that the constant $C$ in the statement depends on the smoothness assumptions satisfied by $\tilde V$.) The following corollary shows that the $L^2$ bound can be turned into an $L^\infty$ bound. It reads as an extension of Lemma 4.4.3 to the case in which $\vartheta$ may be nonzero:

Corollary 4.4.6. Given $\vartheta \in [0,1]$, an initial condition $\tilde\rho_0$ in $(\mathcal{C}^{n+\alpha'}(\mathbb{T}^d))'$ and a set of inputs $((\tilde b_t^0, \tilde f_t^0)_{t\in[0,T]}, \tilde g_T^0)$ as in points 1–6 in the introduction of Section 4.4, consider an adapted solution $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ of the system (4.38)–(4.39), with paths in the space $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+1+\beta}(\mathbb{T}^d))$ for some $\beta \in (\alpha', \alpha)$, such that
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t\|_{n+1+\beta} + \|\tilde\rho_t\|_{-(n+\beta)} \bigr) < \infty.
\]


Then, we can find a constant $C'$, depending only on $C$, $T$, $d$, $\alpha$, and $\alpha'$, such that
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t\|_{n+1+\alpha'} + \|\tilde\rho_t\|_{-(n+\alpha')} \bigr) \\
&\quad \leq C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')}
+ \mathrm{essup}_{\omega\in\Omega} \Bigl( \|\tilde g_T^0\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \bigl( \|\tilde f_t^0\|_{n+\alpha'} + \|\tilde b_t^0\|_{-(n+\alpha'-1)} \bigr) \Bigr) \Bigr).
\end{aligned}
\tag{4.52}
\]

For another initial condition $\tilde\rho_0'$ in the space $(\mathcal{C}^{n+\alpha'}(\mathbb{T}^d))'$, another set of coefficients $(\tilde V_t', \tilde m_t', \Gamma_t')_{t\in[0,T]}$, and another set of inputs $((\tilde b_t^{0,\prime}, \tilde f_t^{0,\prime})_{t\in[0,T]}, \tilde g_T^{0,\prime})$ as in points 1–6 in the introduction to Section 4.4, consider an adapted solution $(\tilde\rho_t', \tilde z_t')_{t\in[0,T]}$ of the system (4.38)–(4.39), with paths in $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+1+\beta}(\mathbb{T}^d))$ for the same $\beta \in (\alpha', \alpha)$ as above, such that
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t'\|_{n+1+\beta} + \|\tilde\rho_t'\|_{-(n+\beta)} \bigr) < \infty.
\]

Then, we can find a constant $C'$, depending only on $C$, $T$, $d$, $\alpha$, and $\alpha'$ and on
\[
\|\tilde\rho_0\|_{-(n+\alpha')} + \|\tilde\rho_0'\|_{-(n+\alpha')}
+ \mathrm{essup}_{\omega\in\Omega} \bigl( \|\tilde g_T^0\|_{n+1+\alpha'} + \|\tilde g_T^{0,\prime}\|_{n+1+\alpha'} \bigr)
+ \mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde f_t^0\|_{n+\alpha'} + \|\tilde f_t^{0,\prime}\|_{n+\alpha'} + \|\tilde b_t^0\|_{-(n+\alpha'-1)} + \|\tilde b_t^{0,\prime}\|_{-(n+\alpha'-1)} \bigr),
\]
such that
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'}^2 + \|\tilde\rho_t - \tilde\rho_t'\|_{-(n+\alpha')}^2 \bigr) \\
&\quad \leq C' \Bigl( \|\tilde\rho_0 - \tilde\rho_0'\|_{-(n+\alpha')}^2
+ \mathrm{essup}_{\omega\in\Omega} \Bigl( \|\tilde g_T^0 - \tilde g_T^{0,\prime}\|_{n+1+\alpha'}^2
+ \sup_{t\in[0,T]} \bigl( \|\tilde f_t^0 - \tilde f_t^{0,\prime}\|_{n+\alpha'}^2 + \|\tilde b_t^0 - \tilde b_t^{0,\prime}\|_{-(n+\alpha'-1)}^2 \bigr) \Bigr) \\
&\qquad + \mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde V_t - \tilde V_t'\|_{n+\alpha'}^2 + d_1^2(m_t, m_t') + \|\Gamma_t - \Gamma_t'\|_0^2 \bigr) \Bigr).
\end{aligned}
\tag{4.53}
\]

Proof. We start with the proof of (4.52). First step. The proof relies on the same trick as that used in the third step of the proof of Theorem 4.3.1. In the statement of Proposition 4.4.5, the initial conditions ρ˜0 and ρ˜0 are assumed to be deterministic. It can be checked that the same argument holds when both are random and the expectation is replaced by a conditional expectation given the initial condition. More generally, given some


time $t \in [0,T]$, we may see the pair $(\tilde\rho_s, \tilde z_s)_{s\in[t,T]}$ as the solution of the system (4.38) with the boundary condition (4.39), but on the interval $[t,T]$ instead of $[0,T]$. In particular, when $\tilde\rho_0' = 0$, $(\tilde b^{0,\prime}, \tilde f^{0,\prime}, \tilde g^{0,\prime}) \equiv (0,0,0)$, $\tilde V \equiv \tilde V'$, $\Gamma \equiv \Gamma'$ (in which case $(\tilde\rho', \tilde z') \equiv (0,0)$), we get
\[
\mathbb{E}\Bigl[\sup_{s\in[t,T]} \bigl( \|\tilde z_s\|_{n+1+\alpha'}^2 + \|\tilde\rho_s\|_{-(n+\alpha')}^2 \bigr) \,\Big|\, \mathcal{F}_t\Bigr]
\leq C' \Bigl( \|\tilde\rho_t\|_{-(n+\alpha')}^2 + \mathbb{E}\bigl[\Theta^2 \,|\, \mathcal{F}_t\bigr] \Bigr),
\]
where we have let
\[
\Theta = \sup_{s\in[t,T]} \|\tilde b_s^0\|_{-(n+\alpha'-1)} + \sup_{s\in[t,T]} \|\tilde f_s^0\|_{n+\alpha'} + \|\tilde g_T^0\|_{n+1+\alpha'}.
\]

Second step. We now prove the estimate on $\tilde\rho$. From the first step, we deduce that
\[
\|\tilde z_t\|_{n+1+\alpha'}^2 \leq C' \Bigl( \|\tilde\rho_t\|_{-(n+\alpha')}^2 + \mathbb{E}\bigl[\Theta^2 \,|\, \mathcal{F}_t\bigr] \Bigr)
\leq C' \Bigl( \|\tilde\rho_t\|_{-(n+\alpha')}^2 + \mathrm{essup}_{\omega\in\Omega} \Theta^2 \Bigr).
\tag{4.54}
\]
The above inequality holds true for any $t \in [0,T]$, $\mathbb{P}$ almost surely. By continuity of both sides, we can exchange the "$\mathbb{P}$ almost sure" and the "for all $t \in [0,T]$." Now we can use the same duality trick as in the proof of Proposition 4.4.5. With the same notation as in (4.45) and (4.46), we have
\[
\forall t \in [0,\tau], \quad \|\tilde w_t\|_{n+\alpha'} \leq C' \|\xi\|_{n+\alpha'}.
\]

Then, we have
\[
\begin{aligned}
\langle \tilde w_\tau, \tilde\rho_\tau \rangle_{n+\alpha'}
&\leq \bigl| \langle \tilde w_0, \tilde\rho_0 \rangle_{n+\alpha'} \bigr|
+ \int_0^\tau \|D\tilde w_s\|_{n+\alpha'-1} \bigl( \|\tilde b_s^0\|_{-(n+\alpha'-1)} + \|\tilde z_s\|_{n+\alpha'} \bigr)\, ds \\
&\leq C' \|\xi\|_{n+\alpha'} \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + \int_0^\tau \bigl( \|\tilde\rho_s\|_{-(n+\alpha')} + \mathrm{essup}_{\omega\in\Omega} \Theta \bigr)\, ds \Bigr),
\end{aligned}
\]

from which we deduce, by Gronwall's lemma, that
\[
\|\tilde\rho_\tau\|_{-(n+\alpha')} \leq C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + \mathrm{essup}_{\omega\in\Omega} \Theta \Bigr),
\]
and thus
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \|\tilde\rho_t\|_{-(n+\alpha')} \leq C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + \mathrm{essup}_{\omega\in\Omega} \Theta \Bigr).
\tag{4.55}
\]


By (4.54) and (4.55), we easily get a bound for $\tilde z$.
Last step. It then remains to prove (4.53). By means of the first step, we have bounds for
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t\|_{n+1+\alpha'} + \|\tilde z_t'\|_{n+1+\alpha'} + \|\tilde\rho_t\|_{-(n+\alpha')} + \|\tilde\rho_t'\|_{-(n+\alpha')} \bigr).
\]
Plugging the bound into the stability estimate in Proposition 4.4.5, we may proceed in the same way as in the first two steps in order to complete the proof. $\square$

4.4.4 Proof of Theorem 4.4.2

We now complete the proof of Theorem 4.4.2. It suffices to prove

Proposition 4.4.7. There is an $\varepsilon_0 > 0$ such that if, for some $\vartheta \in [0,1)$ and $\beta \in (\alpha', \alpha)$, for any initial condition $\tilde\rho_0$ in $(\mathcal{C}^{n+\alpha'}(\mathbb{T}^d))'$, any set of coefficients $(\tilde V_t, \tilde m_t, \Gamma_t)_{t\in[0,T]}$ and any input $((\tilde b_t^0, \tilde f_t^0)_{t\in[0,T]}, \tilde g_T^0)$ as in the introduction to Section 4.4, the system (4.38)–(4.39) has a unique solution $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ with paths in the space $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+1+\beta}(\mathbb{T}^d))$ such that $\mathrm{essup}_\omega \sup_{t\in[0,T]} ( \|\tilde\rho_t\|_{-(n+\beta)} + \|\tilde z_t\|_{n+1+\beta} ) < \infty$, the pair $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ also satisfying $\mathrm{essup}_\omega \sup_{t\in[0,T]} ( \|\tilde\rho_t\|_{-(n+\alpha')} + \|\tilde z_t\|_{n+1+\alpha'} ) < \infty$, then, for any $\varepsilon \in (0, \varepsilon_0]$, unique solvability also holds with $\vartheta$ replaced by $\vartheta + \varepsilon$, for the same class of initial conditions and of inputs and in the same space; moreover, solutions also lie (almost surely) in a bounded subset of the space $L^\infty([0,T], (\mathcal{C}^{n+\alpha'}(\mathbb{T}^d))') \times L^\infty([0,T], \mathcal{C}^{n+1+\alpha'}(\mathbb{T}^d))$.

we call Φε (˜ ρ, z˜) the pair (˜ ρt , z˜t )0tT solving the system (4.38) with respect to the initial condition ρ˜0 and to the input: ˜b0 = εm ˜ t Γt D˜ zt + ˜b0t , t δ F˜t (·, mt )(ρt ) + f˜t0 , f˜t0 = −ε δm ˜ δG (·, mT )(ρT ) + g˜T0 . g˜T0 = ε δm


By assumption, it satisfies
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde\rho_t'\|_{-(n+\alpha')} + \|\tilde z_t'\|_{n+1+\alpha'} \bigr) < \infty.
\]
By Corollary 4.4.6,
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t'\|_{n+1+\alpha'} + \|\tilde\rho_t'\|_{-(n+\alpha')} \bigr) \\
&\quad \leq C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + c\, \varepsilon\, \mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde\rho_t\|_{-(n+\alpha')} + \|\tilde z_t\|_{n+1+\alpha'} \bigr) \\
&\qquad + \mathrm{essup}_{\omega\in\Omega} \Bigl( \sup_{t\in[0,T]} \bigl( \|\tilde b_t^0\|_{-(n+\alpha'-1)} + \|\tilde f_t^0\|_{n+\alpha'} \bigr) + \|\tilde g_T^0\|_{n+1+\alpha'} \Bigr) \Bigr),
\end{aligned}
\]

where $c$ is a constant, which depends only on the constant $C$ appearing in points 1–6 in the introduction to Section 4.4 and on the bounds appearing in (HF1(n)) and (HG1(n+1)). In particular, if
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t\|_{n+1+\alpha'} + \|\tilde\rho_t\|_{-(n+\alpha')} \bigr) \\
&\quad \leq 2C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + \mathrm{essup}_{\omega\in\Omega} \Bigl( \|\tilde g_T^0\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \bigl( \|\tilde b_t^0\|_{-(n+\alpha'-1)} + \|\tilde f_t^0\|_{n+\alpha'} \bigr) \Bigr) \Bigr),
\end{aligned}
\tag{4.57}
\]
and $2C' c \varepsilon \leq 1$, then
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde z_t'\|_{n+1+\alpha'} + \|\tilde\rho_t'\|_{-(n+\alpha')} \bigr) \\
&\quad \leq 2C' \Bigl( \|\tilde\rho_0\|_{-(n+\alpha')} + \mathrm{essup}_{\omega\in\Omega} \Bigl( \|\tilde g_T^0\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \bigl( \|\tilde b_t^0\|_{-(n+\alpha'-1)} + \|\tilde f_t^0\|_{n+\alpha'} \bigr) \Bigr) \Bigr),
\end{aligned}
\]


so that the set of pairs $(\tilde\rho, \tilde z)$ that satisfy (4.56) and (4.57) is stable under $\Phi_\varepsilon$ for $\varepsilon$ small enough.
Now, given two pairs $(\tilde\rho_t^1, \tilde z_t^1)_{t\in[0,T]}$ and $(\tilde\rho_t^2, \tilde z_t^2)_{t\in[0,T]}$ satisfying (4.57), we let $(\tilde\rho_t^{1,\prime}, \tilde z_t^{1,\prime})_{t\in[0,T]}$ and $(\tilde\rho_t^{2,\prime}, \tilde z_t^{2,\prime})_{t\in[0,T]}$ be their respective images by $\Phi_\varepsilon$. We deduce


from Proposition 4.4.5 that
\[
\begin{aligned}
&\mathbb{E}\Bigl[ \sup_{t\in[0,T]} \|\tilde z_t^{1,\prime} - \tilde z_t^{2,\prime}\|_{n+1+\alpha'}^2 + \sup_{t\in[0,T]} \|\tilde\rho_t^{1,\prime} - \tilde\rho_t^{2,\prime}\|_{-(n+\alpha')}^2 \Bigr] \\
&\quad \leq C' \varepsilon^2\, \mathbb{E}\Bigl[ \sup_{t\in[0,T]} \|\tilde z_t^1 - \tilde z_t^2\|_{n+1+\alpha'}^2 + \sup_{t\in[0,T]} \|\tilde\rho_t^1 - \tilde\rho_t^2\|_{-(n+\alpha')}^2 \Bigr],
\end{aligned}
\]
for a possibly new value of the constant $C'$, but still independent of $\vartheta$ and $\varepsilon$. Therefore, for $C' \varepsilon^2 < 1$ and $2C' c \varepsilon \leq 1$, $\Phi_\varepsilon$ is a contraction on the set of adapted processes $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ having paths in $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+1+\beta}(\mathbb{T}^d))$ and satisfying (4.57) (and thus (4.56) as well), which is a closed set of the Banach space $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+\beta}(\mathbb{T}^d))$. By the Picard fixed-point theorem, we deduce that $\Phi_\varepsilon$ has a unique fixed point satisfying (4.57). The fixed point solves (4.38)–(4.39), with $\vartheta$ replaced by $\vartheta + \varepsilon$.
Consider now another solution to (4.38)–(4.39) with $\vartheta$ replaced by $\vartheta + \varepsilon$, with paths in a bounded subset of $\mathcal{C}^0([0,T], (\mathcal{C}^{n+\beta}(\mathbb{T}^d))') \times \mathcal{C}^0([0,T], \mathcal{C}^{n+1+\beta}(\mathbb{T}^d))$. By Proposition 4.4.5, it must coincide with the solution we just constructed. $\square$
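The continuation argument above, which extends solvability from $\vartheta$ to $\vartheta + \varepsilon$ by a contraction and then iterates in $\varepsilon$-steps until $\vartheta = 1$, can be illustrated on a toy scalar fixed-point problem. The map `g` and the step size below are illustrative, not taken from the text:

```python
# Toy illustration of the continuation (Picard) scheme: solve
#   x = theta * g(x) + source, for theta from 0 up to 1,
# by composing small perturbation steps of size eps, each of which
# is handled by a contraction (fixed-point) iteration.
import math

def g(x):
    # Illustrative nonlinearity with global Lipschitz constant 0.5 < 1.
    return 0.5 * math.sin(x)

def solve_at(theta, source, x0=0.0, tol=1e-12, max_iter=1000):
    """Picard iteration for x = theta * g(x) + source."""
    x = x0
    for _ in range(max_iter):
        x_new = theta * g(x) + source
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

def continuation(source, eps=0.1):
    """March theta from 0 to 1 in steps eps, warm-starting each step
    from the solution of the previous one (the analogue of passing
    from theta to theta + eps in Proposition 4.4.7)."""
    theta, x = 0.0, source  # at theta = 0 the solution is explicit
    while theta < 1.0 - 1e-12:
        theta = min(1.0, theta + eps)
        x = solve_at(theta, source, x0=x)
    return x

x_star = continuation(source=1.0)
# x_star solves x = 0.5*sin(x) + 1
residual = abs(x_star - (0.5 * math.sin(x_star) + 1.0))
```

Each step only requires the map to be a contraction uniformly in theta, which is the role played here by the smallness condition $C'\varepsilon^2 < 1$, $2C'c\varepsilon \leq 1$.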


Chapter Five The Second-Order Master Equation

Taking advantage of the analysis performed in the previous chapter on the unique solvability of the mean field game (MFG) system, we are now ready to define and investigate the solution of the master equation. The principle is the same as in the first-order case: the forward component of the MFG system has to be seen as the characteristics of the master equation. The regularity of the solution of the master equation is then investigated through the tangent process that solves the linearized MFG system. As in the previous chapter, the level of common noise β is set to 1 throughout this chapter. This is without loss of generality and this makes the notation a little bit simpler.

5.1 CONSTRUCTION OF THE SOLUTION

Assumption. Throughout the section, we assume that the assumption of Theorem 4.3.1 is in force, with $\alpha \in (0,1)$. Namely, we assume that $F$, $G$, and $H$ satisfy (2.4) and (2.5) in Section 2.3, and that, for some integer $n \geq 2$ and some $\alpha \in (0,1)$, (HF1(n-1)) and (HG1(n)) hold true.

For any initial distribution $m_0 \in \mathcal{P}(\mathbb{T}^d)$, the system (4.7) admits a unique solution, so that, following the analysis performed in the deterministic setting, we may let $U(0, x, m_0) = \tilde u_0(x)$, $x \in \mathbb{T}^d$. The initialization is here performed at time 0, but, of course, there is no difficulty in replacing 0 by any arbitrary time $t_0 \in [0,T]$, in which case we rewrite the system (4.7) as
\[
\begin{aligned}
d_t \tilde m_t &= \Bigl[ \Delta \tilde m_t + \mathrm{div}\bigl( \tilde m_t D_p \tilde H_{t_0,t}(\cdot, D\tilde u_t) \bigr) \Bigr]\, dt, \\
d_t \tilde u_t &= \Bigl[ -\Delta \tilde u_t + \tilde H_{t_0,t}(\cdot, D\tilde u_t) - \tilde F_{t_0,t}(\cdot, m_{t_0,t}) \Bigr]\, dt + d\tilde M_t,
\end{aligned}
\tag{5.1}
\]
with the initial condition $\tilde m_{t_0} = m_0$ and the terminal boundary condition $\tilde u_T =$


$\tilde G_{t_0}(\cdot, m_{t_0,T})$, under the prescription that
\[
\begin{aligned}
m_{t_0,t} &= \bigl[ \mathrm{id} + \sqrt{2}(W_t - W_{t_0}) \bigr] \sharp\, \tilde m_t, \\
\tilde F_{t_0,t}(x, \mu) &= F\bigl( x + \sqrt{2}(W_t - W_{t_0}), \mu \bigr), \\
\tilde G_{t_0}(x, \mu) &= G\bigl( x + \sqrt{2}(W_T - W_{t_0}), \mu \bigr), \\
\tilde H_{t_0,t}(x, p) &= H\bigl( x + \sqrt{2}(W_t - W_{t_0}), p \bigr),
\end{aligned}
\qquad x \in \mathbb{T}^d,\ p \in \mathbb{R}^d,\ \mu \in \mathcal{P}(\mathbb{T}^d).
\tag{5.2}
\]
It is then possible to let
\[
U(t_0, x, m_0) = \tilde u_{t_0}(x), \qquad x \in \mathbb{T}^d.
\]

Note that, following Theorem 4.3.1, $U : (t_0, x, m_0) \mapsto U(t_0, x, m_0)$ is Lipschitz continuous in the last two variables. We shall often use the following important fact:

Lemma 5.1.1. Given an initial condition $(t_0, m_0) \in [0,T] \times \mathcal{P}(\mathbb{T}^d)$, denote by $(\tilde m_t, \tilde u_t)_{t\in[t_0,T]}$ the solution of (5.1) with the prescription (5.2) and with $\tilde m_{t_0} = m_0$ as initial condition. Call $m_{t_0,t}$ the image of $\tilde m_t$ by the random mapping $\mathbb{T}^d \ni x \mapsto x + \sqrt{2}(W_t - W_{t_0})$, that is, $m_{t_0,t} = [\mathrm{id} + \sqrt{2}(W_t - W_{t_0})] \sharp\, \tilde m_t$. Then, for any $t_0 + h \in [t_0, T]$, $\mathbb{P}$ almost surely,
\[
\tilde u_{t_0+h}(x) = U\bigl( t_0 + h, x + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h} \bigr), \qquad x \in \mathbb{T}^d.
\]

Proof. Given $t_0$ and $h$ as above, we let
\[
\begin{aligned}
\bar m_t &= \bigl[ \mathrm{id} + \sqrt{2}\bigl( W_{t_0+h} - W_{t_0} \bigr) \bigr] \sharp\, \tilde m_t, \\
\bar u_t(x) &= \tilde u_t\bigl( x - \sqrt{2}\bigl( W_{t_0+h} - W_{t_0} \bigr) \bigr),
\end{aligned}
\qquad t \in [t_0+h, T],\ x \in \mathbb{T}^d.
\]
We claim that $(\bar m_t, \bar u_t)_{t\in[t_0+h,T]}$ is a solution of (5.1)–(5.2), with $t_0$ replaced by $t_0 + h$ and with $m_{t_0,t_0+h}$ as initial condition.
The proof is as follows. We start with a preliminary remark. For $t \in [t_0+h, T]$,
\[
\bigl[ \mathrm{id} + \sqrt{2}\bigl( W_t - W_{t_0+h} \bigr) \bigr] \sharp\, \bar m_t
= \bigl[ \mathrm{id} + \sqrt{2}\bigl( W_t - W_{t_0} \bigr) \bigr] \sharp\, \tilde m_t
= m_{t_0,t}.
\tag{5.3}
\]

We now prove that the pair $(\bar m_t, \bar u_t)_{t_0+h \leq t \leq T}$ solves the forward equation in (5.1). To this end, denote by $(X_{t_0,t})_{t\in[t_0,T]}$ the solution of the stochastic differential equation (SDE)
\[
dX_{t_0,t} = -D_p \tilde H_{t_0,t}\bigl( X_{t_0,t}, D\tilde u_t(X_{t_0,t}) \bigr)\, dt + \sqrt{2}\, dB_t, \qquad t \in [t_0, T],
\]
the initial condition $X_{t_0,t_0}$ having $m_0$ as distribution. (Notice that the equation is well-posed, as $D\tilde u$ is known to be Lipschitz in space.) Then, the process $(\tilde X_t = X_{t_0,t} + \sqrt{2}(W_{t_0+h} - W_{t_0}))_{t\in[t_0+h,T]}$ has $(\bar m_t = [\mathrm{id} + \sqrt{2}(W_{t_0+h} - W_{t_0})] \sharp\, \tilde m_t)_{t\in[t_0+h,T]}$ as marginal conditional distributions (given $(W_t)_{t\in[0,T]}$). The process satisfies the SDE
\[
\begin{aligned}
d\tilde X_t &= -D_p \tilde H_{t_0,t}\Bigl( \tilde X_t - \sqrt{2}(W_{t_0+h} - W_{t_0}),\, D\tilde u_t\bigl( \tilde X_t - \sqrt{2}(W_{t_0+h} - W_{t_0}) \bigr) \Bigr)\, dt + \sqrt{2}\, dB_t \\
&= -D_p \tilde H_{t_0+h,t}\bigl( \tilde X_t, D\bar u_t(\tilde X_t) \bigr)\, dt + \sqrt{2}\, dB_t,
\end{aligned}
\]
which is enough to check that the forward equation holds true, with $\bar m_{t_0+h} = m_{t_0,t_0+h}$ as the initial condition; see (5.3). We now have
\[
\begin{aligned}
d_t \bar u_t &= \Bigl[ -\Delta \bar u_t + \bigl( \tilde H_{t_0,t}(\cdot, D\tilde u_t) - \tilde F_{t_0,t}(\cdot, m_{t_0,t}) \bigr)\bigl( \cdot - \sqrt{2}(W_{t_0+h} - W_{t_0}) \bigr) \Bigr]\, dt
+ d\tilde M_t\bigl( \cdot - \sqrt{2}(W_{t_0+h} - W_{t_0}) \bigr) \\
&= \Bigl[ -\Delta \bar u_t + \tilde H_{t_0+h,t}(\cdot, D\bar u_t) - \tilde F_{t_0+h,t}(\cdot, m_{t_0,t}) \Bigr]\, dt
+ d\tilde M_t\bigl( \cdot - \sqrt{2}(W_{t_0+h} - W_{t_0}) \bigr).
\end{aligned}
\]
Now, (5.3) says that $m_{t_0,t}$ reads $[\mathrm{id} + \sqrt{2}(W_t - W_{t_0+h})] \sharp\, \bar m_t$, where $(\bar m_t)_{t_0+h \leq t \leq T}$ is the current forward component. This matches exactly the prescription on the backward equation in (5.1) and (5.2).
If $m_{t_0,t_0+h}$ was deterministic, we would have, by definition of $U$, $U(t_0+h, x, m_{t_0,t_0+h}) = \bar u_{t_0+h}(x)$, $x \in \mathbb{T}^d$, and thus, by definition of $\bar u_{t_0+h}$,
\[
\tilde u_{t_0+h}(x) = U\bigl( t_0 + h, x + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h} \bigr), \qquad x \in \mathbb{T}^d.
\tag{5.4}
\]

Although the result is indeed correct, the argument is false, as $m_{t_0,t_0+h}$ is random. To prove (5.4), we proceed as follows. By compactness of $\mathcal{P}(\mathbb{T}^d)$, we can find, for any $\varepsilon$, a family of $N$ disjoint Borel subsets $A_1, \ldots, A_N \subset \mathcal{P}(\mathbb{T}^d)$, each of them being of diameter less than $\varepsilon$, that covers $\mathcal{P}(\mathbb{T}^d)$. For each $i \in \{1, \ldots, N\}$, we may find $\mu_i \in A_i$. We then call $(\hat m_t^i, \hat u_t^i)_{t\in[t_0+h,T]}$ the solution of (5.1)–(5.2), with $t_0$ replaced by $t_0 + h$ and with $\mu_i$ as initial condition. We let
\[
\hat m_t := \sum_{i=1}^N \hat m_t^i\, \mathbf{1}_{A_i}\bigl( m_{t_0,t_0+h} \bigr),
\qquad
\hat u_t := \sum_{i=1}^N \hat u_t^i\, \mathbf{1}_{A_i}\bigl( m_{t_0,t_0+h} \bigr).
\]


Since the events {mt0 ,t0 +h ∈ Ai }, for each i = 1, . . . , N , are independent of ˆ t, u ˆt )t∈[t0 +h,T ] is a the Brownian motion (Wt − Wt0 +h )t∈[t0 +h,T ] , the process (m


solution of (5.1)–(5.2), with $t_0$ replaced by $t_0 + h$ and with $\hat m_{t_0,t_0+h}$ as the initial condition. With an obvious generalization of Theorem 4.3.1 to cases when the initial conditions are random, we deduce that
\[
\mathbb{E}\bigl[ \|\bar u_{t_0+h} - \hat u_{t_0+h}\|_{n+\alpha}^2 \bigr]
\leq C\, \mathbb{E}\bigl[ d_1^2( \bar m_{t_0+h}, \hat m_{t_0+h} ) \bigr]
= C \sum_{i=1}^N \mathbb{E}\bigl[ \mathbf{1}_{A_i}\bigl( m_{t_0,t_0+h} \bigr)\, d_1^2( m_{t_0,t_0+h}, \mu_i ) \bigr].
\]
Obviously, the right-hand side is less than $C\varepsilon^2$. The trick is then to say that $\hat u_{t_0+h}^i$ reads $U(t_0+h, \cdot, \mu_i)$. Therefore,
\[
\sum_{i=1}^N \mathbb{E}\Bigl[ \mathbf{1}_{A_i}\bigl( m_{t_0,t_0+h} \bigr)\, \bigl\| \bar u_{t_0+h} - U(t_0+h, \cdot, \mu_i) \bigr\|_{n+\alpha}^2 \Bigr] \leq C\varepsilon^2.
\]
Using the Lipschitz property of $U(t_0+h, \cdot, \cdot)$ in the measure argument (see Theorem 4.3.1), we deduce that
\[
\mathbb{E}\Bigl[ \bigl\| \bar u_{t_0+h} - U\bigl( t_0+h, \cdot, m_{t_0,t_0+h} \bigr) \bigr\|_{n+\alpha}^2 \Bigr] \leq C\varepsilon^2.
\]
Letting $\varepsilon$ tend to 0, we complete the proof. $\square$

Corollary 5.1.2. For any $\alpha' \in (0, \alpha)$, we can find a constant $C$ such that, for any $t_0 \in [0,T]$, $h \in [0, T - t_0]$, and $m_0 \in \mathcal{P}(\mathbb{T}^d)$,
\[
\bigl\| U(t_0 + h, \cdot, m_0) - U(t_0, \cdot, m_0) \bigr\|_{n+\alpha'} \leq C h^{(\alpha - \alpha')/2}.
\]

Proof.

Using the backward equation in (5.1), we have that

\[
\tilde u_{t_0}(\cdot) = \mathbb{E}\Bigl[ P_h \tilde u_{t_0+h}(\cdot) - \int_{t_0}^{t_0+h} P_{s-t_0} \Bigl( \tilde H_{t_0,s}(\cdot, D\tilde u_s) - \tilde F_{t_0,s}(\cdot, m_{t_0,s}) \Bigr)\, ds \Bigr].
\]
Therefore,
\[
\tilde u_{t_0}(\cdot) - \mathbb{E}\bigl[ \tilde u_{t_0+h}(\cdot) \bigr]
= \mathbb{E}\Bigl[ \bigl( P_h - \mathrm{id} \bigr) \tilde u_{t_0+h}(\cdot)
- \int_{t_0}^{t_0+h} P_{s-t_0} \Bigl( \tilde H_{t_0,s}(\cdot, D\tilde u_s) - \tilde F_{t_0,s}(\cdot, m_{t_0,s}) \Bigr)\, ds \Bigr].
\]


So that
\[
\bigl\| \tilde u_{t_0} - \mathbb{E}[\tilde u_{t_0+h}] \bigr\|_{n+\alpha'}
\leq \mathbb{E}\bigl[ \| (P_h - \mathrm{id}) \tilde u_{t_0+h} \|_{n+\alpha'} \bigr]
+ C \int_{t_0}^{t_0+h} (s - t_0)^{-1/2}\, \bigl\| \tilde H_{t_0,s}(\cdot, D\tilde u_s) - \tilde F_{t_0,s}(\cdot, m_{t_0,s}) \bigr\|_{n+\alpha'-1}\, ds.
\]
It is well checked that
\[
\mathbb{E}\bigl[ \| (P_h - \mathrm{id}) \tilde u_{t_0+h} \|_{n+\alpha'} \bigr]
\leq C h^{(\alpha-\alpha')/2}\, \mathbb{E}\bigl[ \|\tilde u_{t_0+h}\|_{n+\alpha} \bigr]
\leq C h^{(\alpha-\alpha')/2},
\]
the last line following from Lemma 4.3.7. Now, by Lemma 5.1.1,
\[
\begin{aligned}
\mathbb{E}\bigl[ \tilde u_{t_0+h} \bigr]
&= \mathbb{E}\Bigl[ U\bigl( t_0+h, \cdot + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h} \bigr) \Bigr] \\
&= \mathbb{E}\Bigl[ U\bigl( t_0+h, \cdot + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h} \bigr) - U\bigl( t_0+h, \cdot, m_0 \bigr) \Bigr]
+ U\bigl( t_0+h, \cdot, m_0 \bigr),
\end{aligned}
\]
where, by Theorem 4.3.1, it holds that
\[
\begin{aligned}
&\Bigl\| \mathbb{E}\Bigl[ U\bigl( t_0+h, \cdot + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_{t_0,t_0+h} \bigr) - U\bigl( t_0+h, \cdot, m_0 \bigr) \Bigr] \Bigr\|_{n+\alpha'} \\
&\quad \leq C\, \mathbb{E}\bigl[ |d_1( m_{t_0,t_0+h}, m_0 )| \bigr]
+ \Bigl\| \mathbb{E}\Bigl[ U\bigl( t_0+h, \cdot + \sqrt{2}(W_{t_0+h} - W_{t_0}), m_0 \bigr) - U\bigl( t_0+h, \cdot, m_0 \bigr) \Bigr] \Bigr\|_{n+\alpha'},
\end{aligned}
\]
which is less than $C h^{(\alpha - \alpha')/2}$. $\square$
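The bound on $P_h - \mathrm{id}$ used in the proof is the standard heat-kernel interpolation estimate; for the reader's convenience (not restated in the text):

```latex
% For the heat semigroup (P_h)_{h \ge 0} on \mathbb{T}^d and 0 < \alpha' < \alpha < 1,
%   \| (P_h - \mathrm{id}) u \|_{n+\alpha'} \le C\, h^{(\alpha - \alpha')/2}\, \| u \|_{n+\alpha},
% which follows by interpolation between the two elementary bounds
%   \| (P_h - \mathrm{id}) u \|_{n} \le C\, h^{\alpha/2}\, \| u \|_{n+\alpha}
% and
%   \| (P_h - \mathrm{id}) u \|_{n+\alpha} \le C\, \| u \|_{n+\alpha}.
```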

5.2 FIRST-ORDER DIFFERENTIABILITY

Assumption. Throughout the section, we assume that F , G, and H satisfy (2.4) and (2.5) in Subsection 2.3 and that, for some integer n  2 and some α ∈ (0, 1), (HF1(n)) and (HG1(n+1)) hold true.


The purpose is here to follow Section 3.4 in order to establish the differentiability of U with respect to the argument m0 . The analysis is performed at t0 fixed, so that, without any loss of generality, t0 can be chosen as t0 = 0. ˜ u ˜) the solution The initial distribution m0 ∈ P(Td ) being given, we call (m, of the system (4.7) with m0 as initial distribution. Following (3.29), the strategy


is to investigate the linearized system (of the same type as (4.37)):
\[
\begin{aligned}
d_t \tilde z_t &= \Bigl[ -\Delta \tilde z_t + \bigl\langle D_p \tilde H_t(\cdot, D\tilde u_t), D\tilde z_t \bigr\rangle - \frac{\delta \tilde F_t}{\delta m}(\cdot, m_t)(\rho_t) \Bigr]\, dt + d\tilde M_t, \\
\partial_t \tilde\rho_t &- \Delta \tilde\rho_t - \mathrm{div}\bigl( \tilde\rho_t D_p \tilde H_t(\cdot, D\tilde u_t) \bigr) - \mathrm{div}\bigl( \tilde m_t D^2_{pp} \tilde H_t(\cdot, D\tilde u_t) D\tilde z_t \bigr) = 0,
\end{aligned}
\tag{5.5}
\]
with a boundary condition of the form
\[
\tilde z_T = \frac{\delta \tilde G}{\delta m}(\cdot, m_T)(\rho_T).
\]
As explained later on, the initial condition of the forward equation will be chosen in an appropriate way.
In that framework, we shall repeatedly apply the results from Section 4.4 with
\[
\tilde V_t(\cdot) = D_p \tilde H_t(\cdot, D\tilde u_t),
\qquad
\Gamma_t = D^2_{pp} \tilde H_t(\cdot, D\tilde u_t),
\qquad t \in [0,T],
\tag{5.6}
\]

which motivates the following lemma:

Lemma 5.2.1. There exists a constant $C$ such that, for any initial condition $m_0 \in \mathcal{P}(\mathbb{T}^d)$, the processes $(\tilde V_t)_{t\in[0,T]}$ and $(\Gamma_t)_{t\in[0,T]}$ in (5.6) satisfy points 2 and 4 in the introduction to Section 4.4.

Proof. By Theorem 4.3.1 and Lemma 4.3.7, we can find a constant $C$ such that any solution $(\tilde m_t, \tilde u_t)_{t\in[0,T]}$ to (4.7) satisfies, independently of the initial condition $m_0$,
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \|\tilde u_t\|_{n+1+\alpha} \leq C.
\]
In particular, allowing the constant $C$ to increase from line to line, it must hold that
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl\| D_p \tilde H_t(\cdot, D\tilde u_t) \bigr\|_{n+\alpha} \leq C.
\]
Moreover, implementing the local coercivity condition (2.4), we deduce that (assuming $C \geq 1$), with probability 1, for all $t \in [0,T]$,
\[
\|\Gamma_t\|_1 \leq C; \qquad \forall x \in \mathbb{T}^d, \quad C^{-1} I_d \leq \Gamma_t(x) \leq C I_d,
\]
which completes the proof. $\square$

Given $y \in \mathbb{T}^d$ and a $d$-tuple $\ell \in \{0, \ldots, n\}^d$ such that $|\ell| = \sum_{i=1}^d \ell_i \leq n$, we call $\mathbb{T}^d \ni x \mapsto v^{(\ell)}(x, m_0, y) \in \mathbb{R}$ the value at time 0 of the backward component of the solution to (5.5) when the forward component is initialized with the Schwartz distribution $(-1)^{|\ell|} D^\ell \delta_y$. Clearly, $D^\ell \delta_y \in (\mathcal{C}^{n+\alpha'}(\mathbb{T}^d))'$ for any $\alpha' \in (0,1)$, so that, by Theorem 4.4.2, $v^{(\ell)}(\cdot, m_0, y)$ belongs to $\mathcal{C}^{n+\alpha'}(\mathbb{T}^d)$. (Recall that, for a test function $\varphi \in \mathcal{C}^n(\mathbb{T}^d)$, $(D^\ell \delta_y)\varphi = (-1)^{|\ell|} D^\ell_{y_1^{\ell_1} \cdots y_d^{\ell_d}} \varphi(y)$.) Similarly, we may denote by $(\tilde\rho_t^{\ell,y}, \tilde z_t^{\ell,y})_{t\in[0,T]}$ the solution of (5.5) with $\tilde\rho_0^{\ell,y} = (-1)^{|\ell|} D^\ell \delta_y$ as the initial condition. For simplicity, we omit $m_0$ in the notation. We then have
\[
\tilde z_0^{\ell,y} = v^{(\ell)}(\cdot, m_0, y).
\tag{5.7}
\]

We then claim

Lemma 5.2.2. Let $m_0 \in \mathcal{P}(\mathbb{T}^d)$. Then, with the same notation as above, we have, for any $\alpha' \in (0, \alpha)$ and any $d$-tuple $\ell \in \{0, \ldots, n\}^d$ such that $|\ell| \leq n$,
\[
\lim_{h \to 0}\; \mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \Bigl( \bigl\| \tilde\rho_t^{\ell,y+h} - \tilde\rho_t^{\ell,y} \bigr\|_{-(n+\alpha')} + \bigl\| \tilde z_t^{\ell,y+h} - \tilde z_t^{\ell,y} \bigr\|_{n+1+\alpha'} \Bigr) = 0.
\tag{5.8}
\]
Moreover, for any $\ell \in \{0, \ldots, n-1\}^d$ with $|\ell| \leq n-1$ and any $i \in \{1, \ldots, d\}$,
\[
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0}\; \mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \Bigl( \Bigl\| \frac{1}{h}\bigl( \tilde\rho_t^{\ell,y+h e_i} - \tilde\rho_t^{\ell,y} \bigr) - \tilde\rho_t^{\ell+e_i,y} \Bigr\|_{-(n+\alpha')} + \Bigl\| \frac{1}{h}\bigl( \tilde z_t^{\ell,y+h e_i} - \tilde z_t^{\ell,y} \bigr) - \tilde z_t^{\ell+e_i,y} \Bigr\|_{n+1+\alpha'} \Bigr) = 0,
\]
where $e_i$ denotes the $i$-th vector of the canonical basis and $\ell + e_i$ is understood as $(\ell + e_i)_j = \ell_j + \delta_{ij}$, for $j \in \{1, \ldots, d\}$, $\delta_{ij}$ denoting the Kronecker symbol.
In particular, the function $[\mathbb{T}^d]^2 \ni (x, y) \mapsto v^{(0)}(x, m_0, y)$ is $n$-times differentiable with respect to $y$ and, for any $\ell \in \{0, \ldots, n\}^d$ with $|\ell| \leq n$, the derivative $D_y^\ell v^{(0)}(\cdot, m_0, y) : \mathbb{T}^d \ni x \mapsto D_y^\ell v^{(0)}(x, m_0, y)$ belongs to $\mathcal{C}^{n+1+\alpha'}(\mathbb{T}^d)$ and we write $D_y^\ell v^{(0)}(x, m_0, y) = v^{(\ell)}(x, m_0, y)$, $(x, y) \in \mathbb{T}^d$. Moreover,
\[
\sup_{m_0 \in \mathcal{P}(\mathbb{T}^d)}\; \sup_{y \in \mathbb{T}^d} \bigl\| D_y^\ell v^{(0)}(\cdot, m_0, y) \bigr\|_{n+1+\alpha'} < \infty.
\]

Proof. By Corollary 4.4.6, we can find a constant $C$ such that, for all $y \in \mathbb{T}^d$, for all $m_0 \in \mathcal{P}(\mathbb{T}^d)$, and all $\ell \in \{0, \ldots, n\}^d$ with $|\ell| \leq n$,
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \Bigl( \bigl\| \tilde z_t^{\ell,y} \bigr\|_{n+1+\alpha'} + \bigl\| \tilde\rho_t^{\ell,y} \bigr\|_{-(n+\alpha')} \Bigr) \leq C.
\]

In particular, $\|v^{(\ell)}(\cdot, m_0, y)\|_{n+1+\alpha'} \leq C$.
Now, we make use of Proposition 4.4.5. We know that, for any $\alpha' \in (0,1)$,
\[
\lim_{h \to 0} \bigl\| D^\ell \delta_{y+h} - D^\ell \delta_y \bigr\|_{-(n+\alpha')} = 0.
\]
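The sign convention $(D^\ell \delta_y)\varphi = (-1)^{|\ell|} D^\ell \varphi(y)$ and the difference-quotient limit $\frac{1}{h}(D^\ell \delta_{y+he_i} - D^\ell \delta_y) \to -D^{\ell+e_i}\delta_y$ can be sanity-checked numerically in dimension one against a smooth test function. This is a hypothetical illustration, not part of the text:

```python
# 1-d sanity check of the convention (D^l delta_y)(phi) = (-1)^l phi^{(l)}(y)
# and of the difference-quotient limit
#   (1/h)(D^l delta_{y+h} - D^l delta_y) -> -D^{l+1} delta_y.
import math

def pair(l, y, phi_derivs):
    """Action of D^l delta_y on a test function, given its derivatives."""
    return (-1) ** l * phi_derivs[l](y)

# test function phi = sin, with its successive derivatives
phi = [math.sin, math.cos, lambda x: -math.sin(x), lambda x: -math.cos(x)]

y, h, l = 0.7, 1e-6, 1
lhs = (pair(l, y + h, phi) - pair(l, y, phi)) / h   # difference quotient
rhs = -pair(l + 1, y, phi)                          # expected limit
err = abs(lhs - rhs)
```

The error is of order $h$, consistent with the convergence of the quotients in the $\|\cdot\|_{-(n+\alpha')}$ norm used above.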


Therefore, for $\alpha' < \alpha$, Corollary 4.4.6 gives (5.8). This yields
\[
\lim_{h \to 0} \bigl\| v^{(\ell)}(\cdot, m_0, y+h) - v^{(\ell)}(\cdot, m_0, y) \bigr\|_{n+1+\alpha'} = 0,
\]
proving that the mapping $\mathbb{T}^d \ni y \mapsto v^{(\ell)}(\cdot, m_0, y) \in \mathcal{C}^{n+1+\alpha'}(\mathbb{T}^d)$ is continuous. Similarly, for $|\ell| \leq n-1$ and $i \in \{1, \ldots, d\}$,
\[
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0} \Bigl\| \frac{1}{h}\bigl( D^\ell \delta_{y+h e_i} - D^\ell \delta_y \bigr) + D^{\ell+e_i} \delta_y \Bigr\|_{-(n+\alpha')} = 0,
\]
or equivalently,
\[
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0} \Bigl\| \frac{1}{h}\bigl( (-1)^{|\ell|} D^\ell \delta_{y+h e_i} - (-1)^{|\ell|} D^\ell \delta_y \bigr) - (-1)^{|\ell+e_i|} D^{\ell+e_i} \delta_y \Bigr\|_{-(n+\alpha')} = 0.
\]
As a byproduct, we get
\[
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0} \Bigl\| \frac{1}{h}\bigl( v^{(\ell)}(\cdot, m_0, y + h e_i) - v^{(\ell)}(\cdot, m_0, y) \bigr) - v^{(\ell+e_i)}(\cdot, m_0, y) \Bigr\|_{n+1+\alpha'} = 0,
\]
which proves, by induction, that
\[
D_y^\ell v^{(0)}(x, m_0, y) = v^{(\ell)}(x, m_0, y), \qquad x, y \in \mathbb{T}^d.
\]
This completes the proof. $\square$

Now, we prove

Lemma 5.2.3. Given a finite signed measure $\mu$ on $\mathbb{T}^d$, the solution $\tilde z$ to (5.5) with $\mu$ as initial condition reads, when taken at time 0,
\[
\tilde z_0 : \mathbb{T}^d \ni x \mapsto \tilde z_0(x) = \int_{\mathbb{T}^d} v^{(0)}(x, m_0, y)\, d\mu(y).
\]
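Before the proof, a toy one-dimensional illustration of this superposition formula and of the atomic discretization used below. The kernel `v` is hypothetical, standing in for $v^{(0)}(\cdot, m_0, \cdot)$:

```python
# Toy illustration of Lemma 5.2.3's discretization: approximate
#   z0(x) = \int v(x, y) d mu(y)
# by the atomic measure mu_eps = sum_i mu(U_i) delta_{y_i}.
import math

def v(x, y):
    # hypothetical smooth kernel standing in for v^{(0)}(x, m_0, y)
    return math.cos(x - y)

def z0_exact(x):
    # mu = Lebesgue measure on [0, 1): z0(x) = int_0^1 cos(x - y) dy
    return math.sin(x) - math.sin(x - 1.0)

def z0_atomic(x, N):
    # partition [0, 1) into N cells U_i of mass 1/N, pick y_i at the centers
    return sum(v(x, (i + 0.5) / N) * (1.0 / N) for i in range(N))

x = 0.3
err = abs(z0_atomic(x, 1000) - z0_exact(x))
```

As the cell diameter shrinks, the atomic approximation converges at the rate $O(\|\mu\|\,\varepsilon)$ claimed in the proof (faster here, since the midpoint choice is second-order accurate).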

Proof. By compactness of the torus, we can find, for a given $\varepsilon > 0$, a covering $(U_i)_{1\leq i\leq N}$ of $\mathbb{T}^d$, made of disjoint Borel subsets, such that each $U_i$, $i = 1, \ldots, N$, has a diameter less than $\varepsilon$. Choosing, for each $i \in \{1, \ldots, N\}$, $y_i \in U_i$, we then let
\[
\mu_\varepsilon := \sum_{i=1}^N \mu(U_i)\, \delta_{y_i}.
\]

Then, for any $\varphi \in \mathcal{C}^1(\mathbb{T}^d)$, with $\|\varphi\|_1 \leq 1$, we have
\[
\Bigl| \int_{\mathbb{T}^d} \varphi(y)\, d\bigl( \mu - \mu_\varepsilon \bigr)(y) \Bigr|
= \Bigl| \sum_{i=1}^N \int_{U_i} \bigl( \varphi(y) - \varphi(y_i) \bigr)\, d\mu(y) \Bigr|
\leq C \|\mu\|\, \varepsilon,
\]


where we have denoted by $\|\mu\|$ the total mass of $\mu$. Therefore, by Proposition 4.4.5,
\[
\Bigl\| \tilde z_0 - \sum_{i=1}^N \int_{U_i} v^{(0)}(\cdot, m_0, y_i)\, d\mu(y) \Bigr\|_{n+1+\alpha'} \leq C \|\mu\|\, \varepsilon,
\]

where we have used the fact that, by linearity, the value at time 0 of the backward component of the solution to (5.5), when the forward component is initialized with $\mu_\varepsilon$, reads
\[
\sum_{i=1}^N \mu(U_i)\, v^{(0)}(\cdot, m_0, y_i)
= \sum_{i=1}^N \int_{U_i} v^{(0)}(\cdot, m_0, y_i)\, d\mu(y).
\]

By smoothness of $v^{(0)}$ in $y$, we easily deduce that
\[
\Bigl\| \tilde z_0 - \int_{\mathbb{T}^d} v^{(0)}(\cdot, m_0, y)\, d\mu(y) \Bigr\|_{n+1+\alpha'} \leq C \|\mu\|\, \varepsilon.
\]
The result follows by letting $\varepsilon$ tend to 0. $\square$

On the model of Corollary 3.4.4, we now claim

Proposition 5.2.4. Given two initial conditions $m_0, m_0' \in \mathcal{P}(\mathbb{T}^d)$, we denote by $(\tilde m_t, \tilde u_t)_{t\in[0,T]}$ and $(\tilde m_t', \tilde u_t')_{t\in[0,T]}$ the respective solutions of (4.7) with $m_0$ and $m_0'$ as initial conditions and by $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ the solution of (5.5) with $m_0' - m_0$ as the initial condition, so that we can let
\[
\delta\tilde\rho_t = \tilde m_t' - \tilde m_t - \tilde\rho_t,
\qquad
\delta\tilde z_t = \tilde u_t' - \tilde u_t - \tilde z_t,
\qquad t \in [0,T].
\]
Then, for any $\alpha' \in (0, \alpha)$, we can find a constant $C$, independent of $m_0$ and $m_0'$, such that
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{0\leq t\leq T} \bigl( \|\delta\tilde\rho_t\|_{-(n+\alpha')} + \|\delta\tilde z_t\|_{n+1+\alpha'} \bigr) \leq C d_1^2(m_0, m_0').
\]
In particular,
\[
\Bigl\| U(0, \cdot, m_0') - U(0, \cdot, m_0) - \int_{\mathbb{T}^d} v^{(0)}(x, m_0, y)\, d\bigl( m_0' - m_0 \bigr)(y) \Bigr\|_{n+1+\alpha'} \leq C d_1^2(m_0, m_0'),
\]
and, thus, for any $x \in \mathbb{T}^d$, the mapping $\mathcal{P}(\mathbb{T}^d) \ni m \mapsto U(0, x, m)$ is differentiable with respect to $m$ and the derivative reads, for any $m \in \mathcal{P}(\mathbb{T}^d)$,
\[
\frac{\delta U}{\delta m}(0, x, m, y) = v^{(0)}(x, m, y), \qquad y \in \mathbb{T}^d.
\]


The normalization condition holds:
\[
\int_{\mathbb{T}^d} v^{(0)}(x, m, y)\, dm(y) = 0.
\]
The proof is the same as in the deterministic case (see Remark 3.4.5).

Proof. We have
\[
d_t \bigl( \delta\tilde z_t \bigr) = \Bigl[ -\Delta \bigl( \delta\tilde z_t \bigr) + \bigl\langle D_p \tilde H_t(\cdot, D\tilde u_t), D\bigl( \delta\tilde z_t \bigr) \bigr\rangle - \frac{\delta \tilde F_t}{\delta m}(\cdot, m_t)\bigl( \delta\rho_t \bigr) + \tilde f_t \Bigr]\, dt + d\tilde M_t,
\]

together with
\[
\partial_t \bigl( \delta\tilde\rho_t \bigr) - \Delta \bigl( \delta\tilde\rho_t \bigr)
- \mathrm{div}\Bigl( \bigl( \delta\tilde\rho_t \bigr) D_p \tilde H_t(\cdot, D\tilde u_t) \Bigr)
- \mathrm{div}\Bigl( \tilde m_t D^2_{pp} \tilde H_t(\cdot, D\tilde u_t)\, D\bigl( \delta\tilde z_t \bigr) + \tilde b_t \Bigr) = 0,
\]
with a boundary condition of the form
\[
\delta\tilde z_T = \frac{\delta \tilde G}{\delta m}(\cdot, m_T)\bigl( \delta\rho_T \bigr) + \tilde g_T,
\]

where
\[
\begin{aligned}
\tilde b_t &= \tilde m_t' \bigl( D_p \tilde H_t(\cdot, D\tilde u_t') - D_p \tilde H_t(\cdot, D\tilde u_t) \bigr) - \tilde m_t D^2_{pp} \tilde H_t(\cdot, D\tilde u_t)\bigl( D\tilde u_t' - D\tilde u_t \bigr), \\
\tilde f_t &= \tilde H_t(\cdot, D\tilde u_t') - \tilde H_t(\cdot, D\tilde u_t) - \bigl\langle D_p \tilde H_t(\cdot, D\tilde u_t), D\tilde u_t' - D\tilde u_t \bigr\rangle \\
&\quad - \Bigl( \tilde F_t(\cdot, m_t') - \tilde F_t(\cdot, m_t) - \frac{\delta \tilde F_t}{\delta m}(\cdot, m_t)\bigl( m_t' - m_t \bigr) \Bigr), \\
\tilde g_T &= \tilde G(\cdot, m_T') - \tilde G(\cdot, m_T) - \frac{\delta \tilde G}{\delta m}(\cdot, m_T)\bigl( m_T' - m_T \bigr).
\end{aligned}
\]
Now,
\[
\begin{aligned}
\tilde b_t &= \bigl( \tilde m_t' - \tilde m_t \bigr) \bigl( D_p \tilde H_t(\cdot, D\tilde u_t') - D_p \tilde H_t(\cdot, D\tilde u_t) \bigr) \\
&\quad + \tilde m_t \int_0^1 \Bigl( D^2_{pp} \tilde H_t\bigl( \cdot, \lambda D\tilde u_t' + (1-\lambda) D\tilde u_t \bigr) - D^2_{pp} \tilde H_t(\cdot, D\tilde u_t) \Bigr) \bigl( D\tilde u_t' - D\tilde u_t \bigr)\, d\lambda \\
&= \bigl( \tilde m_t' - \tilde m_t \bigr) \int_0^1 D^2_{pp} \tilde H_t\bigl( \cdot, \lambda D\tilde u_t' + (1-\lambda) D\tilde u_t \bigr) \bigl( D\tilde u_t' - D\tilde u_t \bigr)\, d\lambda \\
&\quad + \tilde m_t \int_0^1 \!\!\int_0^1 \lambda\, D^3_{ppp} \tilde H_t\bigl( \cdot, \lambda s D\tilde u_t' + (1 - \lambda + \lambda(1-s)) D\tilde u_t \bigr)\, \bigl( D\tilde u_t' - D\tilde u_t \bigr)^{\otimes 2}\, d\lambda\, ds.
\end{aligned}
\]


Also,
\[
\begin{aligned}
\tilde f_t &= \int_0^1 \bigl\langle D_p \tilde H_t\bigl( \cdot, \lambda D\tilde u_t' + (1-\lambda) D\tilde u_t \bigr) - D_p \tilde H_t(\cdot, D\tilde u_t),\, D\tilde u_t' - D\tilde u_t \bigr\rangle\, d\lambda \\
&\quad - \int_0^1 \Bigl( \frac{\delta \tilde F_t}{\delta m}\bigl( \cdot, \lambda m_t' + (1-\lambda) m_t \bigr) - \frac{\delta \tilde F_t}{\delta m}(\cdot, m_t) \Bigr) \bigl( m_t' - m_t \bigr)\, d\lambda \\
&= \int_0^1 \!\!\int_0^1 \lambda \bigl\langle D^2_{pp} \tilde H_t\bigl( \cdot, \lambda s D\tilde u_t' + (1 - \lambda + \lambda(1-s)) D\tilde u_t \bigr) \bigl( D\tilde u_t' - D\tilde u_t \bigr),\, D\tilde u_t' - D\tilde u_t \bigr\rangle\, d\lambda\, ds \\
&\quad - \int_0^1 \Bigl( \frac{\delta \tilde F_t}{\delta m}\bigl( \cdot, \lambda m_t' + (1-\lambda) m_t \bigr) - \frac{\delta \tilde F_t}{\delta m}(\cdot, m_t) \Bigr) \bigl( m_t' - m_t \bigr)\, d\lambda.
\end{aligned}
\]
And,
\[
\tilde g_T = \int_0^1 \Bigl( \frac{\delta \tilde G}{\delta m}\bigl( \cdot, \lambda m_T' + (1-\lambda) m_T \bigr) - \frac{\delta \tilde G}{\delta m}(\cdot, m_T) \Bigr) \bigl( m_T' - m_T \bigr)\, d\lambda.
\]
By Lemma 4.3.7, we have a universal bound for
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{t\in[0,T]} \bigl( \|\tilde u_t\|_{n+1+\alpha} + \|\tilde u_t'\|_{n+1+\alpha} \bigr).
\]

We deduce that
\[
\begin{aligned}
\|\tilde b_t\|_{-1} &\leq C \Bigl( d_1\bigl( \tilde m_t, \tilde m_t' \bigr)\, \|\tilde u_t - \tilde u_t'\|_2 + \|\tilde u_t - \tilde u_t'\|_1^2 \Bigr), \\
\|\tilde f_t\|_{n+\alpha'} &\leq C \Bigl( \|\tilde u_t - \tilde u_t'\|_{n+1+\alpha'}^2 + d_1^2\bigl( \tilde m_t, \tilde m_t' \bigr) \Bigr), \\
\|\tilde g_T\|_{n+1+\alpha'} &\leq C d_1^2\bigl( \tilde m_T, \tilde m_T' \bigr).
\end{aligned}
\]
Therefore, by Theorem 4.3.1, we deduce that
\[
\mathrm{essup}_{\omega\in\Omega} \sup_{0\leq t\leq T} \|\tilde b_t\|_{-1}
+ \mathrm{essup}_{\omega\in\Omega} \sup_{0\leq t\leq T} \|\tilde f_t\|_{n+\alpha'}
+ \mathrm{essup}_{\omega\in\Omega} \|\tilde g_T\|_{n+1+\alpha'}
\leq C d_1^2\bigl( m_0, m_0' \bigr).
\]

By Corollary 4.4.6, we get the first of the two inequalities in the statement. We deduce that
\[
\bigl\| U(0, \cdot, m_0') - U(0, \cdot, m_0) - \tilde z_0 \bigr\|_{n+1+\alpha'} \leq C d_1^2(m_0, m_0').
\]
By Lemma 5.2.3, we complete the proof. $\square$

Proposition 5.2.5. For any $\alpha' \in (0, \alpha)$, we can find a constant $C$ such that, for any $m_0, m_0' \in \mathcal{P}(\mathbb{T}^d)$, any $y, y' \in \mathbb{T}^d$ and any index $\ell \in \{0, \ldots, n\}^d$ with $|\ell| \leq n$, denoting by $(\tilde m_t, \tilde u_t)_{t\in[0,T]}$ and $(\tilde m_t', \tilde u_t')_{t\in[0,T]}$ the respective solutions of (4.7), and then $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ and $(\tilde\rho_t', \tilde z_t')_{t\in[0,T]}$ the corresponding solutions of (5.5) when driven by the two initial conditions $(-1)^{|\ell|} D^\ell \delta_y$ and $(-1)^{|\ell|} D^\ell \delta_{y'}$, it holds that
\[
\mathrm{essup}_{\omega\in\Omega} \Bigl( \sup_{t\in[0,T]} \|\tilde z_t - \tilde z_t'\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \|\tilde\rho_t - \tilde\rho_t'\|_{-(n+\alpha')} \Bigr)
\leq C \bigl( d_1(m_0, m_0') + |y - y'|^{\alpha'} \bigr).
\]
In particular,
\[
\forall y, y' \in \mathbb{T}^d, \quad
\Bigl\| D_y^\ell \frac{\delta U}{\delta m}(0, \cdot, m_0, y) - D_y^\ell \frac{\delta U}{\delta m}(0, \cdot, m_0', y') \Bigr\|_{n+1+\alpha'}
\leq C \bigl( d_1(m_0, m_0') + |y - y'|^{\alpha'} \bigr).
\]

Proof. Given two initial conditions $m_0$ and $m_0'$, we call $(\tilde m_t, \tilde u_t)_{t\in[0,T]}$ and $(\tilde m_t', \tilde u_t')_{t\in[0,T]}$ the respective solutions of (4.7). With $(\tilde m_t, \tilde u_t)_{t\in[0,T]}$ and $(\tilde m_t', \tilde u_t')_{t\in[0,T]}$, we associate the solutions $(\tilde\rho_t, \tilde z_t)_{t\in[0,T]}$ and $(\tilde\rho_t', \tilde z_t')_{t\in[0,T]}$ of (5.5) when driven by the two initial conditions $(-1)^{|\ell|} D^\ell \delta_y$ and $(-1)^{|\ell|} D^\ell \delta_{y'}$. Since $|\ell| \leq n$, we have
\[
\bigl\| D^\ell \delta_y - D^\ell \delta_{y'} \bigr\|_{-(n+\alpha')} \leq |y - y'|^{\alpha'}.
\]
To prove the first estimate, we can apply Corollary 4.4.6 with
\[
\tilde V_t = D_p \tilde H(\cdot, D\tilde u_t), \quad \tilde V_t' = D_p \tilde H(\cdot, D\tilde u_t'), \quad
\Gamma_t = D^2_{pp} \tilde H_t(\cdot, D\tilde u_t), \quad \Gamma_t' = D^2_{pp} \tilde H_t(\cdot, D\tilde u_t'),
\]
and $(\tilde b_t^0)_{t\in[0,T]} \equiv 0$, $(\tilde f_t^0)_{t\in[0,T]} \equiv 0$, $\tilde g_T^0 \equiv 0$ and $\vartheta = 0$ in (4.38), so that, following the proof of Proposition 5.2.4,
\[
\|\tilde V_t - \tilde V_t'\|_{n+\alpha'} + \|\Gamma_t - \Gamma_t'\|_0 \leq C \|\tilde u_t - \tilde u_t'\|_{n+1+\alpha'}.
\]
Now, the first estimate in the statement follows from the combination of Theorem 4.3.1 and Corollary 4.4.6. The second estimate is a straightforward consequence of the first one. $\square$

sup

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

CHAPTER 5

140

Proof. Given two probability measures m, m ∈ P(Td ), we know from Proposition 5.2.4 that, for any t ∈ [0, T ],     U t, ·, m − U t, ·, m       δU  t, ·, m, y d m − m (y) + O d21 (m, m ) , = Td δm

(5.9)

the equality holding true in C n+1+α (Td ) and the Landau notation O(·) being uniform in t0 and m (the constant C in the statement of Proposition 5.2.4 being explicitly quantified by means of Proposition 4.4.5, related to the stability of solutions to the linear equation). From Proposition 5.2.5, we deduce that the set of functions ([Td ]2  (x, y) →   (δU/δm)(t, x, m, y))t∈[0,T ] is relatively compact in C n+1+α (Td ) × C n+α (Td ), for any α ∈ (0, α). Any limit Φ : [Td ]2 → R obtained by letting t tend to t0 in (5.9) must satisfy (use Corollary 5.1.2 to pass to the limit in the left-hand side):            U t0 , ·, m − U t0 , ·, m = Φ ·, y d m − m (y) + O d21 (m, m ) , Td

the equality holding true in $\mathcal{C}^{n+1+\alpha'}(\mathbb{T}^d)$ and the Landau notation $O(\cdot)$ being uniform in $t_0$ and $m$ (the constant $C$ in the statement of Proposition 5.2.4 being explicitly quantified by means of Proposition 4.4.5, related to the stability of solutions to the linear equation).
From Proposition 5.2.5, we deduce that the set of functions $([\mathbb{T}^d]^2 \ni (x,y) \mapsto (\delta U/\delta m)(t, x, m, y))_{t\in[0,T]}$ is relatively compact in $\mathcal{C}^{n+1+\alpha'}(\mathbb{T}^d) \times \mathcal{C}^{n+\alpha'}(\mathbb{T}^d)$, for any $\alpha' \in (0, \alpha)$. Any limit $\Phi : [\mathbb{T}^d]^2 \to \mathbb{R}$ obtained by letting $t$ tend to $t_0$ in (5.9) must satisfy (use Corollary 5.1.2 to pass to the limit in the left-hand side):
\[
U(t_0, \cdot, m') - U(t_0, \cdot, m) = \int_{\mathbb{T}^d} \Phi(\cdot, y)\, d\bigl( m' - m \bigr)(y) + O\bigl( d_1^2(m, m') \bigr),
\]
the equality holding true in $\mathcal{C}^0(\mathbb{T}^d)$. This proves that, for any $x \in \mathbb{T}^d$,
\[
\int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(t_0, x, m, y)\, d\bigl( m' - m \bigr)(y) = \int_{\mathbb{T}^d} \Phi(x, y)\, d\bigl( m' - m \bigr)(y).
\]
Choosing $m'$ as the solution at time $h$ of the Fokker–Planck equation
\[
\partial_t m_t = -\mathrm{div}\bigl( b\, m_t \bigr), \qquad t \geq 0,
\]

Td

When m has full support, this proves that Φ(x, y) =

 δU  t0 , x, m, y + c(x), δm

x, y ∈ Td .

Since both sides have a zero integral in $y$ with respect to $m$, $c(x)$ must be zero.
When the support of $m$ does not cover $\mathbb{T}^d$, we can approximate $m$ by a sequence $(m_n)_{n\geq 1}$ of measures with full supports. By Proposition 5.2.5, we know that, for any $\alpha' \in (0, \alpha)$,
\[
\lim_{n \to \infty}\; \sup_{t\in[0,T]} \Bigl\| \frac{\delta U}{\delta m}(t, \cdot, m_n, \cdot) - \frac{\delta U}{\delta m}(t, \cdot, m, \cdot) \Bigr\|_{n+1+\alpha', \alpha'} = 0,
\]




so that, in C^{n+1+α′}(T^d) × C^{α′}(T^d),
\[
\lim_{t\to t_0} \frac{\delta U}{\delta m}(t,\cdot,m,\cdot)
= \lim_{n\to\infty} \lim_{t\to t_0} \frac{\delta U}{\delta m}(t,\cdot,m_n,\cdot)
= \frac{\delta U}{\delta m}(t_0,\cdot,m,\cdot).
\]

We easily complete the proof when |ℓ| = 0. Since the set of functions ([T^d]^2 ∋ (x,y) ↦ (D_y^ℓ δU/δm)(t,x,m,y))_{t∈[0,T]} is relatively compact in C^{n+1+α′}(T^d) × C^{α′}(T^d), any limit as t tends to t_0 must coincide with the derivative of index ℓ in y of the limit of [T^d]^2 ∋ (x,y) ↦ [δU/δm](t,x,m,y) as t tends to t_0. □

5.3 SECOND-ORDER DIFFERENTIABILITY

Assumption. Throughout the section, we assume that F, G, and H satisfy (2.4) and (2.5) in Subsection 2.3 and that, for some integer n ≥ 2 and some α ∈ (0,1), (HF2(n)) and (HG2(n+1)) hold true.

To complete the analysis of the master equation, we need to investigate the second-order differentiability in the direction of the measure, on the same model as for the first-order derivatives. As for the first order, the idea is to write the second-order derivative of U in the direction m as the initial value of the backward component of a linearized system of the type (4.37), which is referred to next as the second-order linearized system. Basically, the second-order linearized system is obtained by differentiating one more time the first-order linearized system (5.5). Recalling that (5.5) has the form
\[
\begin{aligned}
&d_t \tilde z_t = \Bigl[ -\Delta \tilde z_t + \bigl\langle D_p \tilde H_t(\cdot,D\tilde u_t), D\tilde z_t \bigr\rangle - \frac{\delta \tilde F_t}{\delta m}(\cdot,m_t)(\rho_t) \Bigr]\,dt + d\tilde M_t, \\
&\partial_t \tilde\rho_t - \Delta\tilde\rho_t - \mathrm{div}\bigl( \tilde\rho_t\, D_p \tilde H_t(\cdot,D\tilde u_t) \bigr) - \mathrm{div}\bigl( \tilde m_t\, D^2_{pp}\tilde H_t(\cdot,D\tilde u_t)\, D\tilde z_t \bigr) = 0,
\end{aligned}
\tag{5.10}
\]
with the boundary condition
\[
\tilde z_T = \frac{\delta \tilde G}{\delta m}(\cdot,m_T)(\rho_T),
\]

the procedure is to differentiate the pair (ρ̃_t, z̃_t)_{t∈[0,T]} with respect to the initial condition m_0 of (m̃_t, ũ_t)_{t∈[0,T]}, the initial condition of (ρ̃_t, z̃_t)_{t∈[0,T]} being kept frozen. Above, (m̃_t, ũ_t)_{0≤t≤T} is indeed chosen as the solution of the system (4.7), for a given initial distribution m_0 ∈ P(T^d), and (ρ̃_t, z̃_t)_{t∈[0,T]} as the solution of the system (5.10) with an initial condition ρ_0 ∈ (C^{n+α′}(T^d))′, for some α′ < α. Implicitly, the initial condition ρ_0 is understood as some m_0′ − m_0 for another m_0′ ∈ P(T^d), in which case we know from Proposition 5.2.4 that (ρ̃_t, z̃_t)_{t∈[0,T]} reads as the derivative, at ε = 0, of the solution to (4.7) when initialized with the


measure m_0 + ε(m_0′ − m_0). However, following the strategy used in the analysis of the first-order derivatives of U, it is much more convenient, in order to investigate the second-order derivatives of U, to distinguish the initial condition of (ρ̃_t)_{t∈[0,T]} from the direction m_0′ − m_0 used to differentiate the system (4.7). This says that, in (5.10), we should allow (ρ̃_t, z̃_t)_{t∈[0,T]} to be driven by an arbitrary initial condition ρ_0 ∈ (C^{n+α′}(T^d))′. Now, when (5.10) is driven by an arbitrary initial condition ρ_0 and m_0 is perturbed in the direction m_0′ − m_0 for another m_0′ ∈ P(T^d) (that is, m_0 is changed into m_0 + ε(m_0′ − m_0) for some small ε), the system obtained by differentiating (5.10) (at ε = 0) takes the form
\[
\begin{aligned}
d_t \tilde z^{(2)}_t ={}& \Bigl[ -\Delta \tilde z^{(2)}_t + \bigl\langle D_p\tilde H_t(\cdot,D\tilde u_t), D\tilde z^{(2)}_t \bigr\rangle - \frac{\delta \tilde F_t}{\delta m}(\cdot,m_t)\bigl(\rho^{(2)}_t\bigr) \\
&\quad + \bigl\langle D^2_{pp}\tilde H_t(\cdot,D\tilde u_t), D\tilde z_t \otimes D\partial_m \tilde u_t \bigr\rangle - \frac{\delta^2 \tilde F_t}{\delta m^2}(\cdot,m_t)\bigl(\rho_t, \partial_m m_t\bigr) \Bigr]\,dt + d\tilde M^{(2)}_t, \\
\partial_t \tilde\rho^{(2)}_t &- \Delta\tilde\rho^{(2)}_t - \mathrm{div}\bigl( \tilde\rho^{(2)}_t D_p\tilde H_t(\cdot,D\tilde u_t) \bigr) - \mathrm{div}\bigl( \tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z^{(2)}_t \bigr) \\
&- \mathrm{div}\bigl( \tilde\rho_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t \bigr) - \mathrm{div}\bigl( \partial_m \tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t \bigr) \\
&- \mathrm{div}\bigl( \tilde m_t D^3_{ppp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t \otimes D\partial_m\tilde u_t \bigr) = 0,
\end{aligned}
\tag{5.11}
\]
with a terminal boundary condition of the form
\[
\tilde z^{(2)}_T = \frac{\delta \tilde G}{\delta m}(\cdot,m_T)\bigl(\rho^{(2)}_T\bigr) + \frac{\delta^2 \tilde G}{\delta m^2}(\cdot,m_T)\bigl(\rho_T, \partial_m m_T\bigr),
\]

where we have denoted by (∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} the derivative of (m̃_t, ũ_t)_{t∈[0,T]} when the initial condition is differentiated in the direction m_0′ − m_0 at point m_0, for another m_0′ ∈ P(T^d). In (5.11), the pair (ρ̃_t^{(2)}, z̃_t^{(2)})_{t∈[0,T]} is then understood as the derivative of the solution (ρ̃_t, z̃_t)_{t∈[0,T]} to (5.10). Now, using the same philosophy as in the analysis of the first-order derivatives, we can choose freely the initial condition ρ_0 of (5.10). Generally speaking, we will choose ρ_0 = (−1)^{|ℓ|} D^ℓ δ_y, for some multi-index ℓ ∈ {0,…,n−1}^d with |ℓ| ≤ n−1 and some y ∈ T^d. Since ρ_0 is expected to be insensitive to any perturbation that could apply to m_0, it then makes sense to let ρ_0^{(2)} = 0. As said earlier, the initial condition ∂_m m_0 of (∂_m m̃_t)_{0≤t≤T} is expected to have the form m_0′ − m_0 for another probability measure m_0′ ∈ P(T^d). Anyhow, by the same linearity argument as in the analysis of the first-order derivative, we can start with the case when ∂_m m_0 is the derivative of a Dirac mass, namely ∂_m m_0 = (−1)^{|k|} D^k δ_ζ, for another multi-index k ∈ {0,…,n−1}^d, and another ζ ∈ T^d, in which case (∂_m m̃_t, ∂_m ũ_t)_{0≤t≤T} is another solution to (5.10), but with


∂_m m_0 = (−1)^{|k|} D^k δ_ζ as the initial condition. Given these initial conditions, we then let
\[
v^{(\ell,k)}\bigl( \cdot, m_0, y, \zeta \bigr) = \tilde z^{(2)}_0,
\]
provided that (5.11) has a unique solution. To check that existence and uniqueness hold true, we may proceed as follows. The system (5.11) is of the type (4.37), with
\[
\begin{aligned}
\tilde V_t &= D_p \tilde H_t(\cdot,D\tilde u_t), \qquad \Gamma_t = D^2_{pp}\tilde H_t(\cdot,D\tilde u_t), \\
\tilde b^0_t &= \tilde\rho_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t + \partial_m\tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t + \tilde m_t D^3_{ppp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t \otimes D\partial_m\tilde u_t, \\
\tilde f^0_t &= \bigl\langle D^2_{pp}\tilde H_t(\cdot,D\tilde u_t), D\tilde z_t \otimes D\partial_m\tilde u_t \bigr\rangle - \frac{\delta^2 \tilde F_t}{\delta m^2}(\cdot,m_t)\bigl( \rho_t, \partial_m m_t \bigr), \\
\tilde g^0_T &= \frac{\delta^2 \tilde G}{\delta m^2}(\cdot,m_T)\bigl( \rho_T, \partial_m m_T \bigr).
\end{aligned}
\tag{5.12}
\]
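As a reminder of how the Dirac-derivative initial conditions act (a standard distribution-theory identity, recalled here only for the reader's convenience), for any smooth test function φ,

```latex
\bigl\langle (-1)^{|\ell|} D^\ell \delta_y ,\ \varphi \bigr\rangle
  = (-1)^{|\ell|}\, \bigl\langle \delta_y ,\ (-1)^{|\ell|} D^\ell \varphi \bigr\rangle
  = D^\ell \varphi(y),
\qquad \varphi \in C^\infty(\mathbb{T}^d),
```

so that moving the base points y and ζ of the initial conditions is precisely what generates the derivatives D_y^ℓ D_ζ^k of v^{(0,0)} studied below.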

Recall from Theorem 4.3.1 and Lemma 4.3.7 on the one hand and from Corollary 4.4.6 on the other hand that we can find a constant C (the value of which is allowed to increase from line to line), independent of m_0, y, ζ, ℓ, and k, such that
\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\tilde u_t\|_{n+1+\alpha} \le C, \\
&\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \Bigl[ \|\tilde z_t\|_{n+1+\alpha} + \|\partial_m \tilde u_t\|_{n+1+\alpha} + \|\tilde\rho_t\|_{-(n+\alpha')} + \|\partial_m \tilde m_t\|_{-(n+\alpha')} \Bigr] \le C.
\end{aligned}
\tag{5.13}
\]

Since |ℓ|, |k| ≤ n−1, we can apply Corollary 4.4.6 with n replaced by n−1 (notice that n−1 satisfies the assumption of Section 5.2), so that
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \Bigl[ \|\tilde\rho_t\|_{-(n+\alpha'-1)} + \|\partial_m \tilde m_t\|_{-(n+\alpha'-1)} \Bigr] \le C.
\tag{5.14}
\]

Therefore, we deduce that
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\tilde b^0_t\|_{-(n+\alpha'-1)} \le C.
\]
Similarly,
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\tilde f^0_t\|_{n+\alpha'} + \mathrm{essup}_{\omega\in\Omega}\ \|\tilde g^0_T\|_{n+1+\alpha'} \le C.
\]


From Theorem 4.4.2, we deduce that, with the prescribed initial conditions, (5.11) has a unique solution. Moreover, by Corollary 4.4.6,
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\tilde z^{(2)}_t\|_{n+1+\alpha} + \mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\tilde\rho^{(2)}_t\|_{-(n+\alpha')} \le C,
\tag{5.15}
\]

where C is independent of m_0, y, ζ, ℓ, and k. On the model of Lemma 5.2.2, we claim:

Lemma 5.3.1. The function [T^d]^3 ∋ (x,y,ζ) ↦ v^{(0,0)}(x,m_0,y,ζ) admits continuous crossed derivatives in (y,ζ), up to the order n−1 in y and to the order n−1 in ζ, the derivative D_y^ℓ D_ζ^k v^{(0,0)}(·,m_0,y,ζ) : T^d ∋ x ↦ D_y^ℓ D_ζ^k v^{(0,0)}(x,m_0,y,ζ), for |ℓ|,|k| ≤ n−1, belonging to C^{n+1+α}(T^d). Moreover, writing
\[
v^{(\ell,k)}(x,m_0,y,\zeta) = D_y^\ell D_\zeta^k v^{(0,0)}(x,m_0,y,\zeta), \qquad x,y,\zeta \in \mathbb{T}^d,
\]
there exists, for any α′ ∈ (0,α), a constant C such that, for any multi-indices ℓ, k with |ℓ|,|k| ≤ n−1; any y, y′, ζ, ζ′ ∈ T^d; and any m_0 ∈ P(T^d),
\[
\begin{aligned}
&\bigl\| v^{(\ell,k)}(\cdot,m_0,y,\zeta) \bigr\|_{n+1+\alpha} \le C, \\
&\bigl\| v^{(\ell,k)}(\cdot,m_0,y,\zeta) - v^{(\ell,k)}(\cdot,m_0,y',\zeta') \bigr\|_{n+1+\alpha'} \le C\bigl( |y-y'|^{\alpha'} + |\zeta-\zeta'|^{\alpha'} \bigr).
\end{aligned}
\]

Proof. With the same notations as in the statement of Lemma 5.2.2, we denote by (ρ̃_t^{k,ζ}, z̃_t^{k,ζ})_{t∈[0,T]} the solution to (5.5) with (−1)^{|k|} D^k δ_ζ as the initial condition and by (ρ̃_t^{ℓ,y}, z̃_t^{ℓ,y})_{t∈[0,T]} the solution to (5.5) with (−1)^{|ℓ|} D^ℓ δ_y as the initial condition. By Proposition 5.2.5 (applied with both n−1 and n), we have, for any y, y′ ∈ T^d and any ζ, ζ′ ∈ T^d,

\[
\begin{aligned}
&\mathrm{essup}_{\omega\in\Omega} \Bigl[ \sup_{t\in[0,T]} \bigl\| \tilde z_t^{k,\zeta} - \tilde z_t^{k,\zeta'} \bigr\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \bigl\| \tilde\rho_t^{k,\zeta} - \tilde\rho_t^{k,\zeta'} \bigr\|_{-(n+\alpha'-1)} \Bigr] \le C\, |\zeta-\zeta'|^{\alpha'}, \\
&\mathrm{essup}_{\omega\in\Omega} \Bigl[ \sup_{t\in[0,T]} \bigl\| \tilde z_t^{\ell,y} - \tilde z_t^{\ell,y'} \bigr\|_{n+1+\alpha'} + \sup_{t\in[0,T]} \bigl\| \tilde\rho_t^{\ell,y} - \tilde\rho_t^{\ell,y'} \bigr\|_{-(n+\alpha'-1)} \Bigr] \le C\, |y-y'|^{\alpha'}.
\end{aligned}
\tag{5.16}
\]


Denote now by (b̃_t^{ℓ,k,y,ζ})_{t∈[0,T]} the process (b̃_t^0)_{t∈[0,T]} in (5.12) when (ρ̃_t, z̃_t)_{t∈[0,T]} stands for the process (ρ̃_t^{ℓ,y}, z̃_t^{ℓ,y})_{t∈[0,T]} and (∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} is replaced by (ρ̃_t^{k,ζ}, z̃_t^{k,ζ})_{t∈[0,T]}. Define in a similar way (f̃_t^{ℓ,k,y,ζ})_{t∈[0,T]} and g̃_T^{ℓ,k,y,ζ}. Then, combining (5.16) with (5.13) and (5.14),
\[
\begin{aligned}
\mathrm{essup}_{\omega} \Bigl[ &\bigl\| \tilde g_T^{\ell,k,y',\zeta'} - \tilde g_T^{\ell,k,y,\zeta} \bigr\|_{n+1+\alpha'} \\
&+ \sup_{t\in[0,T]} \Bigl( \bigl\| \tilde b_t^{\ell,k,y',\zeta'} - \tilde b_t^{\ell,k,y,\zeta} \bigr\|_{-(n+\alpha'-1)} + \bigl\| \tilde f_t^{\ell,k,y',\zeta'} - \tilde f_t^{\ell,k,y,\zeta} \bigr\|_{n+\alpha'} \Bigr) \Bigr]
\le C\bigl( |y-y'|^{\alpha'} + |\zeta-\zeta'|^{\alpha'} \bigr).
\end{aligned}
\]

By Proposition 4.4.5, we deduce that
\[
\bigl\| v^{(\ell,k)}(\cdot,m_0,y,\zeta) - v^{(\ell,k)}(\cdot,m_0,y',\zeta') \bigr\|_{n+1+\alpha'} \le C\bigl( |y-y'|^{\alpha'} + |\zeta-\zeta'|^{\alpha'} \bigr),
\tag{5.17}
\]

which provides the last claim in the statement (the L^∞ bound following from (5.15)). Now, by Lemma 5.2.2 (applied with both n and n−1), we know that, for |k| ≤ n−2 and j ∈ {1,…,d},
\[
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0}\ \mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \Bigl[
\Bigl\| \tfrac{1}{h}\bigl( \tilde\rho_t^{\zeta+he_j,k} - \tilde\rho_t^{\zeta,k} \bigr) - \tilde\rho_t^{\zeta,k+e_j} \Bigr\|_{-(n+\alpha'-1)}
+ \Bigl\| \tfrac{1}{h}\bigl( \tilde z_t^{\zeta+he_j,k} - \tilde z_t^{\zeta,k} \bigr) - \tilde z_t^{\zeta,k+e_j} \Bigr\|_{n+1+\alpha'}
\Bigr] = 0,
\]

where e_j denotes the j-th vector of the canonical basis of R^d. Therefore, by (5.13),
\[
\begin{aligned}
\lim_{\mathbb{R}\setminus\{0\} \ni h \to 0}\ \mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \Bigl[
&\Bigl\| \tfrac{1}{h}\bigl( \tilde b_t^{\ell,k,y,\zeta+he_j} - \tilde b_t^{\ell,k,y,\zeta} \bigr) - \tilde b_t^{\ell,k+e_j,y,\zeta} \Bigr\|_{-(n+\alpha'-1)} \\
&+ \Bigl\| \tfrac{1}{h}\bigl( \tilde f_t^{\ell,k,y,\zeta+he_j} - \tilde f_t^{\ell,k,y,\zeta} \bigr) - \tilde f_t^{\ell,k+e_j,y,\zeta} \Bigr\|_{n+\alpha'} \\
&+ \Bigl\| \tfrac{1}{h}\bigl( \tilde g_T^{\ell,k,y,\zeta+he_j} - \tilde g_T^{\ell,k,y,\zeta} \bigr) - \tilde g_T^{\ell,k+e_j,y,\zeta} \Bigr\|_{n+1+\alpha'}
\Bigr] = 0.
\end{aligned}
\]

By Proposition 4.4.5,
\[
\lim_{h\to 0} \Bigl\| \tfrac{1}{h}\bigl( v^{(\ell,k)}(\cdot,m_0,y,\zeta+he_j) - v^{(\ell,k)}(\cdot,m_0,y,\zeta) \bigr) - v^{(\ell,k+e_j)}(\cdot,m_0,y,\zeta) \Bigr\|_{n+1+\alpha'} = 0,
\]
which proves, by induction, that
\[
D_\zeta^k v^{(\ell,0)}(x,m_0,y,\zeta) = v^{(\ell,k)}(x,m_0,y,\zeta), \qquad x,y,\zeta \in \mathbb{T}^d.
\]


Similarly, we can prove that
\[
D_y^\ell v^{(0,k)}(x,m_0,y,\zeta) = v^{(\ell,k)}(x,m_0,y,\zeta), \qquad x,y,\zeta \in \mathbb{T}^d.
\]
Together with the continuity property (5.17), we complete the proof. □

We claim that:

Proposition 5.3.2. We can find a constant C such that, for any m_0, m_0′ ∈ P(T^d), any y, ζ ∈ T^d, and any multi-indices ℓ, k with |ℓ|,|k| ≤ n−1,
\[
\bigl\| v^{(\ell,k)}(\cdot,m_0,y,\zeta) - v^{(\ell,k)}(\cdot,m_0',y,\zeta) \bigr\|_{n+1+\alpha'} \le C\, d_1(m_0,m_0').
\]

Proof. The proof consists of a new application of Proposition 4.4.5. Given

• the solutions (m̃_t, ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′)_{t∈[0,T]} to (4.7) with m̃_0 = m_0 and m̃_0′ = m_0′ as the respective initial conditions;
• the solutions (∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} and (∂_m m̃_t′, ∂_m ũ_t′)_{t∈[0,T]} to (5.10), with (m̃_t, ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′)_{t∈[0,T]} as the respective inputs and ∂_m m̃_0 = ∂_m m̃_0′ = (−1)^{|k|} D^k δ_ζ as the initial condition, for some multi-index k with |k| ≤ n−1 and for some ζ ∈ T^d;
• the solutions (ρ̃_t, z̃_t)_{t∈[0,T]} and (ρ̃_t′, z̃_t′)_{t∈[0,T]} to (5.10), with (m̃_t, ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′)_{t∈[0,T]} as the respective inputs and (−1)^{|ℓ|} D^ℓ δ_y as the initial condition, for some multi-index ℓ with |ℓ| ≤ n−1 and for some y ∈ T^d;
• the solutions (ρ̃_t^{(2)}, z̃_t^{(2)})_{t∈[0,T]} and (ρ̃_t^{(2)′}, z̃_t^{(2)′})_{t∈[0,T]} to the second-order linearized system (5.11) with the tuples (m̃_t, ũ_t, ρ̃_t, z̃_t, ∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′, ρ̃_t′, z̃_t′, ∂_m m̃_t′, ∂_m ũ_t′)_{t∈[0,T]} as the respective inputs and with 0 as the initial condition.

Notice from (5.7) that z̃_0 = v^{(ℓ)}(·,m_0,y) and z̃_0′ = v^{(ℓ)}(·,m_0′,y).

With each of the two aforementioned tuples (m̃_t, ũ_t, ρ̃_t, z̃_t, ∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′, ρ̃_t′, z̃_t′, ∂_m m̃_t′, ∂_m ũ_t′)_{t∈[0,T]}, we can associate the same coefficients as in (5.12), labeling with a prime the coefficients associated with the input (m̃_t′, ũ_t′, ρ̃_t′, z̃_t′, ∂_m m̃_t′, ∂_m ũ_t′)_{t∈[0,T]}. Combining (5.13) and (5.14) together with (5.12) and the version of (5.12) associated with the "primed" coefficients and the "primed" inputs, we obtain
\[
\begin{aligned}
&\| \tilde V_t - \tilde V_t' \|_{n+\alpha} + \| \Gamma_t - \Gamma_t' \|_{0} + \| b^0_t - b^{0\prime}_t \|_{-(n+\alpha'-1)} + \| f^0_t - f^{0\prime}_t \|_{n+\alpha'} + \| g^0_T - g^{0\prime}_T \|_{n+1+\alpha'} \\
&\quad \le C \Bigl[ \| \tilde u_t - \tilde u_t' \|_{n+1+\alpha} + \| \tilde z_t - \tilde z_t' \|_{n+1+\alpha} + \| \partial_m \tilde u_t - \partial_m \tilde u_t' \|_{n+1+\alpha} \\
&\qquad\quad + d_1(\tilde m_t, \tilde m_t') + \| \tilde\rho_t - \tilde\rho_t' \|_{-(n+\alpha'-1)} + \| \partial_m \tilde m_t - \partial_m \tilde m_t' \|_{-(n+\alpha'-1)} \Bigr].
\end{aligned}
\]


By Propositions 4.4.5 and 5.2.5 (applied with both n and n − 1), we complete the proof. 
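The metric d_1 appearing in these stability estimates can also be probed numerically. As a minimal sketch (not part of the book's argument: it treats empirical measures on the real line rather than on T^d, and the helper name is ours), the order-1 Wasserstein distance between two samples of equal size reduces to matching sorted atoms:

```python
import numpy as np

def wasserstein1_empirical(a, b):
    """d_1 between the empirical measures (1/n) sum_k delta_{a_k} and
    (1/n) sum_k delta_{b_k} on the real line: the optimal coupling
    matches the sorted atoms of the two samples."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape, "equal sample sizes are assumed in this sketch"
    return float(np.mean(np.abs(a - b)))

# Moving each of the two atoms by 0.25 costs (0.25 + 0.25)/2 = 0.25.
d = wasserstein1_empirical([0.0, 1.0], [0.25, 0.75])  # -> 0.25
```

On the torus one would additionally minimize over integer shifts of each matching, but the sorted-atom picture already conveys how d_1 measures the displacement of mass.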


On the model of Lemma 5.2.3, we have:

Lemma 5.3.3. Given a finite measure μ on T^d, the solution z̃^{(2)} to (5.11), with 0 as the initial condition, when (m̃_t)_{0≤t≤T} is initialized with m_0, (ρ̃_t)_{0≤t≤T} is initialized with (−1)^{|ℓ|} D^ℓ δ_y, for |ℓ| ≤ n−1 and y ∈ T^d, and (∂_m m̃_t)_{0≤t≤T} is initialized with μ, reads, when taken at time 0,
\[
\tilde z^{(2)}_0 : \mathbb{T}^d \ni x \mapsto \tilde z^{(2)}_0(x) = \int_{\mathbb{T}^d} v^{(\ell,0)}(x,m_0,y,\zeta)\, d\mu(\zeta).
\]

Now:

Proposition 5.3.4. We can find a constant C such that, for any multi-index ℓ with |ℓ| ≤ n−1, any m_0, m_0′ ∈ P(T^d), and any y ∈ T^d,
\[
\Bigl\| v^{(\ell)}(\cdot,m_0',y) - v^{(\ell)}(\cdot,m_0,y) - \int_{\mathbb{T}^d} v^{(\ell,0)}(\cdot,m_0,y,\zeta)\, d\bigl( m_0'-m_0 \bigr)(\zeta) \Bigr\|_{n+1+\alpha'}
\le C\, d_1^2(m_0,m_0').
\]

Proof. We follow the lines of the proof of Proposition 5.2.4. Given two initial conditions m_0, m_0′ ∈ P(T^d), we consider

• the solutions (m̃_t, ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′)_{t∈[0,T]} to (4.7) with m̃_0 = m_0 and m̃_0′ = m_0′ as the respective initial conditions;
• the solution (∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} to the system (5.5), when driven by the input (m̃_t, ũ_t)_{t∈[0,T]} and by the initial condition ∂_m m̃_0 = m_0′ − m_0;
• the solutions (ρ̃_t, z̃_t)_{t∈[0,T]} and (ρ̃_t′, z̃_t′)_{t∈[0,T]} to (5.10), with (m̃_t, ũ_t)_{t∈[0,T]} and (m̃_t′, ũ_t′)_{t∈[0,T]} as the respective inputs and (−1)^{|ℓ|} D^ℓ δ_y as the initial condition, for some multi-index ℓ with |ℓ| ≤ n−1 and for some y ∈ T^d;
• the solution (ρ̃_t^{(2)}, z̃_t^{(2)})_{t∈[0,T]} to (5.11) with (m̃_t, ũ_t, ρ̃_t, z̃_t, ∂_m m̃_t, ∂_m ũ_t)_{t∈[0,T]} as input and 0 as the initial condition.

Then, we let
\[
\delta\tilde\rho^{(2)}_t = \tilde\rho_t' - \tilde\rho_t - \tilde\rho^{(2)}_t, \qquad
\delta\tilde z^{(2)}_t = \tilde z_t' - \tilde z_t - \tilde z^{(2)}_t, \qquad t \in [0,T].
\]
We have
\[
\begin{aligned}
&d_t \bigl( \delta\tilde z^{(2)}_t \bigr) = \Bigl[ -\Delta \bigl( \delta\tilde z^{(2)}_t \bigr) + \bigl\langle D_p\tilde H_t(\cdot,D\tilde u_t), D\bigl( \delta\tilde z^{(2)}_t \bigr) \bigr\rangle - \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t)\bigl( \delta\rho^{(2)}_t \bigr) + \tilde f_t \Bigr]\,dt + d\tilde M_t, \\
&\partial_t \bigl( \delta\tilde\rho^{(2)}_t \bigr) - \Delta \bigl( \delta\tilde\rho^{(2)}_t \bigr) - \mathrm{div}\Bigl( \bigl( \delta\tilde\rho^{(2)}_t \bigr) D_p\tilde H_t(\cdot,D\tilde u_t) \Bigr) - \mathrm{div}\Bigl( \tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t)\, D\bigl( \delta\tilde z^{(2)}_t \bigr) + \tilde b_t \Bigr) = 0,
\end{aligned}
\]
with a boundary condition of the form


\[
\delta\tilde z^{(2)}_T = \frac{\delta\tilde G}{\delta m}(\cdot,m_T)\bigl( \delta\rho^{(2)}_T \bigr) + \tilde g_T,
\]
where
\[
\begin{aligned}
\tilde b_t = {}& \tilde\rho_t' \bigl( D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t) \bigr)
+ \bigl( \tilde m_t' D^2_{pp}\tilde H_t(\cdot,D\tilde u_t') - \tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) \bigr) D\tilde z_t' \\
& - \partial_m\tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t
- \tilde\rho_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t
- \tilde m_t D^3_{ppp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t \otimes D\partial_m\tilde u_t,
\end{aligned}
\]
and
\[
\begin{aligned}
\tilde f_t = {}& \bigl\langle D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t), D\tilde z_t' \bigr\rangle
- \bigl\langle D^2_{pp}\tilde H_t(\cdot,D\tilde u_t), D\tilde z_t \otimes D\partial_m\tilde u_t \bigr\rangle \\
& - \Bigl[ \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t')(\rho_t') - \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t)(\rho_t) - \frac{\delta^2\tilde F_t}{\delta m^2}(\cdot,m_t)(\rho_t,\partial_m m_t) \Bigr], \\
\tilde g_T = {}& \frac{\delta\tilde G}{\delta m}(\cdot,m_T')(\rho_T') - \frac{\delta\tilde G}{\delta m}(\cdot,m_T)(\rho_T) - \frac{\delta^2\tilde G}{\delta m^2}(\cdot,m_T)(\rho_T,\partial_m m_T),
\end{aligned}
\]
and where (M̃_t)_{t∈[0,T]} is a square integrable martingale as in (4.37). Therefore,
\[
\begin{aligned}
\tilde b_t = {}& \bigl( \tilde\rho_t' - \tilde\rho_t \bigr)\bigl( D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t) \bigr) \\
& + \tilde\rho_t \bigl( D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t) - D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t \bigr) \\
& + \bigl( \tilde m_t' D^2_{pp}\tilde H_t(\cdot,D\tilde u_t') - \tilde m_t D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) \bigr)\bigl( D\tilde z_t' - D\tilde z_t \bigr) \\
& + \bigl( \tilde m_t' - \tilde m_t \bigr)\bigl( D^2_{pp}\tilde H_t(\cdot,D\tilde u_t') - D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) \bigr) D\tilde z_t \\
& + \bigl( \tilde m_t' - \tilde m_t - \partial_m\tilde m_t \bigr) D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\tilde z_t \\
& + \tilde m_t \bigl( D^2_{pp}\tilde H_t(\cdot,D\tilde u_t') - D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) - D^3_{ppp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t \bigr) D\tilde z_t,
\end{aligned}
\]
and
\[
\begin{aligned}
\tilde f_t = {}& \bigl\langle D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t), D\tilde z_t' - D\tilde z_t \bigr\rangle \\
& + \bigl\langle D_p\tilde H_t(\cdot,D\tilde u_t') - D_p\tilde H_t(\cdot,D\tilde u_t) - D^2_{pp}\tilde H_t(\cdot,D\tilde u_t) D\partial_m\tilde u_t, D\tilde z_t \bigr\rangle \\
& - \Bigl[ \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t') - \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t) \Bigr]\bigl( \rho_t' - \rho_t \bigr) \\
& - \Bigl[ \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t')(\rho_t) - \frac{\delta\tilde F_t}{\delta m}(\cdot,m_t)(\rho_t) - \frac{\delta^2\tilde F_t}{\delta m^2}(\cdot,m_t)(\rho_t,\partial_m m_t) \Bigr].
\end{aligned}
\]


Similarly,
\[
\begin{aligned}
\tilde g_T = {}& \Bigl[ \frac{\delta\tilde G}{\delta m}(\cdot,m_T') - \frac{\delta\tilde G}{\delta m}(\cdot,m_T) \Bigr]\bigl( \rho_T' - \rho_T \bigr) \\
& + \Bigl[ \frac{\delta\tilde G}{\delta m}(\cdot,m_T')(\rho_T) - \frac{\delta\tilde G}{\delta m}(\cdot,m_T)(\rho_T) - \frac{\delta^2\tilde G}{\delta m^2}(\cdot,m_T)(\rho_T,\partial_m m_T) \Bigr].
\end{aligned}
\]

Applying Theorem 4.3.1, Lemma 4.3.7, Propositions 5.2.4 and 5.2.5, and (5.13) and (5.14), and using the same kind of Taylor expansion as in the proof of Proposition 5.2.4, we deduce that
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \Bigl[ \|\tilde b_t\|_{-(n+\alpha'-1)} + \|\tilde f_t\|_{n+\alpha'} + \|\tilde g_T\|_{n+1+\alpha'} \Bigr] \le C\, d_1^2(m_0,m_0').
\]
By Proposition 4.4.5, we complete the proof. □

We thus deduce:

Proposition 5.3.5. For any x ∈ T^d, the function P(T^d) ∋ m ↦ U(0,x,m) is twice differentiable in the direction m and the second-order derivatives read, for any m ∈ P(T^d),
\[
\frac{\delta^2 U}{\delta m^2}(0,x,m,y,y') = v^{(0,0)}(x,m,y,y'), \qquad y,y' \in \mathbb{T}^d.
\]
In particular, for any α′ ∈ (0,α), t ∈ [0,T] and m ∈ P(T^d), the function [δ²U/δm²](0,·,m,·,·) belongs to C^{n+1+α′}(T^d) × C^{n−1+α′}(T^d) × C^{n−1+α′}(T^d) and the mapping
\[
\mathcal{P}(\mathbb{T}^d) \ni m \mapsto \frac{\delta^2 U}{\delta m^2}(0,\cdot,m,\cdot,\cdot) \in C^{n+1+\alpha'}(\mathbb{T}^d) \times C^{n-1+\alpha'}(\mathbb{T}^d) \times C^{n-1+\alpha'}(\mathbb{T}^d)
\]
is continuous (with respect to d_1). The derivatives in y and y′ read
\[
D_y^\ell D_{y'}^k \frac{\delta^2 U}{\delta m^2}(0,x,m,y,y') = v^{(\ell,k)}(x,m,y,y'), \qquad y,y' \in \mathbb{T}^d, \quad |k|,|\ell| \le n-1.
\]

Proof. By Proposition 5.3.4, we indeed know that, for any multi-index ℓ with |ℓ| ≤ n−1 and any x, y ∈ T^d, the map P(T^d) ∋ m ↦ D_y^ℓ[δU/δm](0,x,m,y) is differentiable with respect to m, the derivative writing, for any m ∈ P(T^d),
\[
\frac{\delta}{\delta m}\Bigl[ D_y^\ell \frac{\delta U}{\delta m} \Bigr](0,x,m,y,y') = v^{(\ell,0)}(x,m,y,y'), \qquad y,y' \in \mathbb{T}^d.
\]
By Lemma 5.3.1, [δ/δm][D_y^ℓ[δU/δm]](0,x,m,y,y′) is n−1 times differentiable with respect to y′ and, together with Proposition 5.3.2, the derivatives are continuous in all the parameters. Making use of Schwarz's Lemma 2.2.4 (when |ℓ| = 1


and iterating the argument to handle the case |ℓ| ≥ 2), the proof is easily completed. □

Following Proposition 5.2.6, we finally claim:

Proposition 5.3.6. Proposition 5.3.5 easily extends to any initial time t_0 ∈ [0,T]. Then, for any α′ ∈ (0,α), any t_0 ∈ [0,T], and m_0 ∈ P(T^d),
\[
\lim_{h\to 0}\ \sup_{|k|\le n-1,\ |\ell|\le n-1} \Bigl\| D_y^\ell D_{y'}^k \frac{\delta^2 U}{\delta m^2}(t_0+h,\cdot,m_0,\cdot,\cdot) - D_y^\ell D_{y'}^k \frac{\delta^2 U}{\delta m^2}(t_0,\cdot,m_0,\cdot,\cdot) \Bigr\|_{(n+1+\alpha',\,\alpha',\,\alpha')} = 0.
\]

5.4 DERIVATION OF THE MASTER EQUATION

We now prove Theorem 2.4.5. Of course the key point is to prove that U , as constructed in the previous section, is a solution of the master equation (2.13).

5.4.1 Regularity Properties of the Solution

The regularity properties of U follow from Sections 5.1, 5.2, and 5.3, see in particular Propositions 5.3.5 and 5.3.6 (take note that, in the statements of Theorem 2.4.5 and of Proposition 5.3.5, the indices of regularity in y and y′ are not exactly the same; this comes from the fact that, in the statement of Theorem 2.4.5, we require (HF2) and (HG2) to hold at one more rank).

5.4.2 Derivation of the Master Equation

We now have all the necessary ingredients to derive the master equation satisfied by U. The first point is to recall that, whenever the forward component (m̃_t)_{t∈[t_0,T]} in (5.1) is initialized with m_0 ∈ P(T^d) at time t_0, then
\[
U(t_0,x,m_0) = \tilde u_{t_0}(x), \qquad x \in \mathbb{T}^d,
\]
(ũ_t)_{t∈[t_0,T]} denoting the backward component in (5.1). Moreover, by Lemma 5.1.1, for any h ∈ [0, T−t_0],
\[
\tilde u_{t_0+h}(x) = U\bigl( t_0+h,\ x+\sqrt{2}\,(W_{t_0+h}-W_{t_0}),\ m_{t_0,t_0+h} \bigr), \qquad x \in \mathbb{T}^d,
\]
where m_{t_0,t} is the image of m̃_t by the random mapping T^d ∋ x ↦ x + √2(W_t − W_{t_0}); that is, m_{t_0,t} = [id + √2(W_t − W_{t_0})]♯m̃_t. In particular, we can write


\[
\begin{aligned}
&\frac{U(t_0+h,x,m_0) - U(t_0,x,m_0)}{h} \\
&\quad = \frac{\mathbb{E}\bigl[ U\bigl( t_0+h, x+\sqrt{2}(W_{t_0+h}-W_{t_0}), m_{t_0,t_0+h} \bigr) \bigr] - U(t_0,x,m_0)}{h} \\
&\qquad + \frac{U(t_0+h,x,m_0) - \mathbb{E}\bigl[ U\bigl( t_0+h, x+\sqrt{2}(W_{t_0+h}-W_{t_0}), m_{t_0,t_0+h} \bigr) \bigr]}{h} \\
&\quad = \frac{\mathbb{E}[\tilde u_{t_0+h}(x)] - \tilde u_{t_0}(x)}{h}
+ \frac{U(t_0+h,x,m_0) - \mathbb{E}\bigl[ U\bigl( t_0+h, x+\sqrt{2}(W_{t_0+h}-W_{t_0}), m_{t_0,t_0+h} \bigr) \bigr]}{h}.
\end{aligned}
\tag{5.18}
\]
We start with the first term in the right-hand side of (5.18). Following (4.12), we deduce from the backward equation in (5.1) that, for any x ∈ T^d,
\[
d_t \mathbb{E}\bigl[ \tilde u_t(x) \bigr] = \mathbb{E}\Bigl[ \Bigl( -\Delta\tilde u_t + \tilde H_{t_0,t}(\cdot,D\tilde u_t) - \tilde F_{t_0,t}(\cdot,m_{t_0,t}) \Bigr)(x) \Bigr]\,dt,
\]
where the coefficients F̃_{t_0,t} and H̃_{t_0,t} are given by (5.2). In particular, thanks to the regularity property in Corollary 5.1.2, we deduce that
\[
\lim_{h\searrow 0} \frac{\mathbb{E}[\tilde u_{t_0+h}(x)] - \tilde u_{t_0}(x)}{h}
= -\Delta_x U(t_0,x,m_0) + H\bigl( x, D_x U(t_0,x,m_0) \bigr) - F\bigl( x, m_0 \bigr).
\tag{5.19}
\]

To pass to the limit in the last term in (5.18), we need a specific form of Itô's formula. The precise version is given in the Appendix; see Lemma A.3.1. Applied to the current setting, with
\[
\beta_t(\cdot) = D_p H\bigl( \cdot,\ D_x U(t,\cdot,m_{t_0,t}) \bigr),
\]
it says that
\[
\begin{aligned}
&\lim_{h\searrow 0} \frac{1}{h}\, \mathbb{E}\Bigl[ U\bigl( t_0+h, x+\sqrt{2}(W_{t_0+h}-W_{t_0}), m_{t_0,t_0+h} \bigr) - U\bigl( t_0+h, x, m_0 \bigr) \Bigr] \\
&\quad = \Delta_x U(t_0,x,m_0)
+ 2 \int_{\mathbb{T}^d} \mathrm{div}_y \bigl[ D_m U\bigl( t_0,x,m_0,y \bigr) \bigr]\, dm_0(y) \\
&\qquad - \int_{\mathbb{T}^d} D_m U\bigl( t_0,x,m_0,y \bigr)\, D_p H\bigl( y, DU(t_0,y,m_0) \bigr)\, dm_0(y) \\
&\qquad + 2 \int_{\mathbb{T}^d} \mathrm{div}_x \bigl[ D_m U\bigl( t_0,x,m_0,y \bigr) \bigr]\, dm_0(y) \\
&\qquad + \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[ D^2_{mm} U\bigl( t_0,x,m_0,y,y' \bigr) \bigr]\, dm_0(y)\, dm_0(y').
\end{aligned}
\tag{5.20}
\]


From (5.19) and (5.20), we deduce that, for any (x,m_0) ∈ T^d × P(T^d), the mapping [0,T] ∋ t ↦ U(t,x,m_0) is right-differentiable and, for any t_0 ∈ [0,T),
\[
\begin{aligned}
&\lim_{h\searrow 0} \frac{U(t_0+h,x,m_0) - U(t_0,x,m_0)}{h} \\
&\quad = -2\Delta_x U(t_0,x,m_0) + H\bigl( x, D_x U(t_0,x,m_0) \bigr) - F\bigl( x, m_0 \bigr) \\
&\qquad - 2 \int_{\mathbb{T}^d} \mathrm{div}_y \bigl[ D_m U\bigl( t_0,x,m_0,y \bigr) \bigr]\, dm_0(y)
+ \int_{\mathbb{T}^d} D_m U\bigl( t_0,x,m_0,y \bigr)\, D_p H\bigl( y, DU(t_0,y,m_0) \bigr)\, dm_0(y) \\
&\qquad - 2 \int_{\mathbb{T}^d} \mathrm{div}_x \bigl[ D_m U\bigl( t_0,x,m_0,y \bigr) \bigr]\, dm_0(y)
- \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[ D^2_{mm} U\bigl( t_0,x,m_0,y,y' \bigr) \bigr]\, dm_0(y)\, dm_0(y').
\end{aligned}
\]
Since the right-hand side is continuous in (t_0,x,m_0), we deduce that U is continuously differentiable in time and satisfies the master equation (2.13) (with β = 1).

5.4.3 Uniqueness

It now remains to prove uniqueness. Considering a solution V to the master equation (2.13) (with β = 1) along the lines of Definition 2.4.4, the strategy is to expand
\[
\tilde u_t' = V\bigl( t,\ x+\sqrt{2}W_t,\ m_t' \bigr), \qquad t \in [0,T],
\]
where, for a given initial condition m_0 ∈ P(T^d), m_t′ is the image of m̃_t′ by the mapping T^d ∋ x ↦ x + √2W_t, (m̃_t′)_{t∈[0,T]} denoting the solution of the Fokker–Planck equation
\[
d_t \tilde m_t' = \Bigl[ \Delta \tilde m_t' + \mathrm{div}\Bigl( \tilde m_t'\, D_p \tilde H_t\bigl( \cdot,\ D_x V(t, x+\sqrt{2}W_t, \tilde m_t') \bigr) \Bigr) \Bigr]\,dt,
\]
which reads, for almost every realization of (W_t)_{t∈[0,T]}, as the flow of conditional marginal distributions (given (W_t)_{t∈[0,T]}) of the McKean–Vlasov process
\[
dX_t = -D_p \tilde H_t\bigl( X_t,\ D_x V(t, X_t+\sqrt{2}W_t, \mathcal{L}(X_t|W)) \bigr)\,dt + \sqrt{2}\,dB_t, \qquad t \in [0,T],
\tag{5.21}
\]
X_0 having m_0 as distribution. Notice that the above equation is uniquely solvable since D_xV is Lipschitz continuous in the space and measure arguments (by the simple fact that D_x²V and D_mD_xV are continuous functions on a compact set). We refer to [92] for standard solvability results for McKean–Vlasov SDEs (which may be easily extended to the current setting).


Of course, the key point is to prove that the pair (m̃_t′, ũ_t′)_{t∈[0,T]} solves the same forward–backward system (4.7) as (m̃_t, ũ_t)_{t∈[0,T]}, in which case it will follow that V(0,x,m_0) = ũ_0′ = ũ_0 = U(0,x,m_0). (The same argument may be repeated for any other initial condition with another initial time.) The strategy consists of a suitable application of Lemma A.3.1 below. Given 0 ≤ t ≤ t+h ≤ T, we have to expand the difference
\[
\begin{aligned}
&\mathbb{E}\bigl[ V\bigl( t+h, x+\sqrt{2}W_{t+h}, m_{t+h}' \bigr) \big| \mathcal{F}_t \bigr] - V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) \\
&\quad = \mathbb{E}\bigl[ V\bigl( t+h, x+\sqrt{2}W_{t+h}, m_{t+h}' \bigr) \big| \mathcal{F}_t \bigr] - V\bigl( t+h, x+\sqrt{2}W_t, m_t' \bigr) \\
&\qquad + V\bigl( t+h, x+\sqrt{2}W_t, m_t' \bigr) - V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) \\
&\quad = S^1_{t,h} + S^2_{t,h}.
\end{aligned}
\tag{5.22}
\]

By Lemma A.3.1, with
\[
\beta_t(\cdot) = D_p H\bigl( \cdot,\ D_x V(t,\cdot,m_t') \bigr), \qquad t \in [0,T],
\]
it holds that
\[
\begin{aligned}
S^1_{t,h} = h \Bigl[\ &\Delta_x V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr)
+ 2 \int_{\mathbb{T}^d} \mathrm{div}_y \bigl[ D_m V\bigl( t, x+\sqrt{2}W_t, m_t', y \bigr) \bigr]\, dm_t'(y) \\
&- \int_{\mathbb{T}^d} D_m V\bigl( t, x+\sqrt{2}W_t, m_t', y \bigr) \cdot D_p H\bigl( y, D_x V(t,y,m_t') \bigr)\, dm_t'(y) \\
&+ 2 \int_{\mathbb{T}^d} \mathrm{div}_x \bigl[ D_m V\bigl( t, x+\sqrt{2}W_t, m_t', y \bigr) \bigr]\, dm_t'(y) \\
&+ \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[ D^2_{mm} V\bigl( t, x+\sqrt{2}W_t, m_t', y, y' \bigr) \bigr]\, dm_t'(y)\, dm_t'(y') \Bigr] + \varepsilon_{t,t+h},
\end{aligned}
\tag{5.23}
\]
where (ε_{s,t})_{s,t∈[0,T]: s≤t} is a family of real-valued random variables such that
\[
\lim_{h\searrow 0}\ \sup_{s,t\in[0,T]:\, |s-t|\le h} \mathbb{E}\bigl[ |\varepsilon_{s,t}| \bigr] = 0.
\]

Expand now S²_{t,h} in (5.22) to the first order in t and use the fact that ∂_tV is uniformly continuous on the compact set [0,T] × T^d × P(T^d). Combining (5.22),


(5.23), and the master PDE (2.13) satisfied by V, we deduce that
\[
\begin{aligned}
&\mathbb{E}\bigl[ V\bigl( t+h, x+\sqrt{2}W_{t+h}, m_{t+h}' \bigr) \big| \mathcal{F}_t \bigr] - V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) \\
&\quad = -h \Bigl[ \Delta_x V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) - H\bigl( x+\sqrt{2}W_t,\ D_x V(t, x+\sqrt{2}W_t, m_t') \bigr) + F\bigl( x+\sqrt{2}W_t,\ m_t' \bigr) \Bigr] + \varepsilon_{t,t+h}.
\end{aligned}
\]
Considering a partition t = t_0 < t_1 < ⋯ < t_N = T of [t,T] of step size h, the above identity yields
\[
\begin{aligned}
&\mathbb{E}\bigl[ G\bigl( x+\sqrt{2}W_T, m_T' \bigr) - V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) \big| \mathcal{F}_t \bigr] \\
&\quad = -h \sum_{i=0}^{N-1} \Bigl[ \Delta_x V\bigl( t_i, x+\sqrt{2}W_{t_i}, m_{t_i}' \bigr) - H\bigl( x+\sqrt{2}W_{t_i},\ D_x V(t_i, x+\sqrt{2}W_{t_i}, m_{t_i}') \bigr) + F\bigl( x+\sqrt{2}W_{t_i},\ m_{t_i}' \bigr) \Bigr] \\
&\qquad + h \sum_{i=0}^{N-1} \mathbb{E}\bigl[ \varepsilon_{t_i,t_{i+1}} \big| \mathcal{F}_t \bigr].
\end{aligned}
\]

Since
\[
\limsup_{h\searrow 0}\ \sup_{r,s\in[0,T]:\, |r-s|\le h} \mathbb{E}\Bigl[ \bigl| \mathbb{E}\bigl[ \varepsilon_{r,s} \big| \mathcal{F}_t \bigr] \bigr| \Bigr]
\le \limsup_{h\searrow 0}\ \sup_{r,s\in[0,T]:\, |r-s|\le h} \mathbb{E}\bigl[ |\varepsilon_{r,s}| \bigr] = 0,
\]

we can easily replace each E[ε_{t_i,t_{i+1}} | F_t] by ε_{t_i,t_{i+1}} itself, allowing for a modification of ε_{t_i,t_{i+1}}. Moreover, here and below, we use the fact that, for a random process (γ_t)_{t∈[0,T]}, with paths in C^0([0,T],R), satisfying
\[
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} |\gamma_t| < \infty,
\tag{5.24}
\]
it must hold that
\[
\lim_{h\searrow 0}\ \sup_{s,t\in[0,T]:\, |s-t|\le h} \mathbb{E}\bigl[ |\eta_{s,t}| \bigr] = 0, \qquad
\eta_{s,t} = \frac{1}{|s-t|} \int_s^t \bigl( \gamma_r - \gamma_s \bigr)\,dr.
\tag{5.25}
\]

The proof just consists in bounding |η_{s,t}| by w_γ(h), where w_γ stands for the pathwise modulus of continuity of (γ_t)_{t∈[0,T]}, which satisfies, thanks to (5.24) and Lebesgue's dominated convergence theorem,
\[
\lim_{h\searrow 0} \mathbb{E}\bigl[ w_\gamma(h) \bigr] = 0.
\]
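The pathwise bound |η_{s,t}| ≤ w_γ(|t−s|) underlying (5.25) is elementary and can be checked numerically on a discretized sample path (a sketch with our own helper names, on a uniform time grid of unit step):

```python
import numpy as np

def eta(gamma, s_idx, t_idx):
    """Discrete analogue of eta_{s,t} = (1/(t-s)) * int_s^t (gamma_r - gamma_s) dr,
    computed as the average of gamma_r - gamma_s over the grid points in [s, t)."""
    seg = gamma[s_idx:t_idx]
    return float(np.mean(seg - gamma[s_idx]))

def modulus(gamma, window):
    """Pathwise modulus of continuity w_gamma over index gaps of at most `window`."""
    n = len(gamma)
    return max(abs(gamma[j] - gamma[i])
               for i in range(n) for j in range(i, min(i + window + 1, n)))

rng = np.random.default_rng(0)
gamma = np.cumsum(rng.normal(scale=0.01, size=200))  # a bounded sample path
window = 20
bound = modulus(gamma, window)
# Each averaged increment is dominated by the modulus over the same window.
ok = all(abs(eta(gamma, i, i + window)) <= bound + 1e-12 for i in range(180))
```

Averaging increments can only shrink them, which is exactly why the dominated-convergence argument above closes the proof.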


Therefore, allowing for a new modification of the random variables ε_{t_i,t_{i+1}}, for i = 0,…,N−1, we deduce that
\[
\begin{aligned}
&\mathbb{E}\bigl[ G\bigl( x+\sqrt{2}W_T, m_T' \bigr) - V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr) \big| \mathcal{F}_t \bigr] \\
&\quad = -\int_t^T \Bigl[ \Delta_x V\bigl( s, x+\sqrt{2}W_s, m_s' \bigr) - H\bigl( x+\sqrt{2}W_s,\ D_x V(s, x+\sqrt{2}W_s, m_s') \bigr) + F\bigl( x+\sqrt{2}W_s,\ m_s' \bigr) \Bigr]\,ds \\
&\qquad + h \sum_{i=0}^{N-1} \varepsilon_{t_i,t_{i+1}}.
\end{aligned}
\]

Letting, for any x ∈ T^d,
\[
\tilde M_t'(x) = V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr)
+ \int_0^t \Bigl[ \Delta_x V\bigl( s, x+\sqrt{2}W_s, m_s' \bigr) - H\bigl( x+\sqrt{2}W_s,\ D_x V(s, x+\sqrt{2}W_s, m_s') \bigr) + F\bigl( x+\sqrt{2}W_s,\ m_s' \bigr) \Bigr]\,ds,
\]
we deduce that
\[
\mathbb{E}\bigl[ \tilde M_T'(x) - \tilde M_t'(x) \big| \mathcal{F}_t \bigr] = h \sum_{i=0}^{N-1} \varepsilon_{t_i,t_{i+1}}.
\]

Now, letting h tend to 0, we deduce that (M̃_t′(x))_{t∈[0,T]} is a martingale. Thanks to the regularity properties of V and its derivatives, it is bounded. Letting
\[
\tilde v_t(x) = V\bigl( t, x+\sqrt{2}W_t, m_t' \bigr), \qquad t \in [0,T],
\]
we finally notice that
\[
\tilde v_t(x) = \tilde G(x, m_T') + \int_t^T \Bigl[ \Delta_x \tilde v_s(x) - \tilde H_s\bigl( x, D\tilde v_s(x) \bigr) + \tilde F_s(x, m_s') \Bigr]\,ds
- \bigl( \tilde M_T'(x) - \tilde M_t'(x) \bigr), \qquad t \in [0,T],
\]
which proves that (m̃_t′, ṽ_t, M̃_t′)_{t∈[0,T]} solves (4.7).

5.5 WELL-POSEDNESS OF THE STOCHASTIC MFG SYSTEM

We are now ready to come back to the well-posedness of the stochastic MFG system
\[
\begin{cases}
d_t u_t = \bigl[ -2\Delta u_t + H(x, Du_t) - F(x, m_t) - \sqrt{2}\,\mathrm{div}(v_t) \bigr]\,dt + v_t \cdot dW_t, \\
d_t m_t = \bigl[ 2\Delta m_t + \mathrm{div}\bigl( m_t\, D_p H(x, Du_t) \bigr) \bigr]\,dt - \sqrt{2}\,\mathrm{div}\bigl( m_t\, dW_t \bigr), \qquad \text{in } [t_0,T]\times\mathbb{T}^d, \\
m_{t_0} = m_0, \qquad u_T(x) = G(x, m_T) \qquad \text{in } \mathbb{T}^d.
\end{cases}
\tag{5.26}
\]


For simplicity of notation, we prove the existence and uniqueness of the solution for t_0 = 0.

First step. Existence of a solution. We start with the solution, denoted by (ũ_t, m̃_t, M̃_t)_{t∈[0,T]}, to the system
\[
\begin{cases}
d_t \tilde m_t = \bigl[ \Delta\tilde m_t + \mathrm{div}\bigl( \tilde m_t\, D_p\tilde H_t(\cdot, D\tilde u_t) \bigr) \bigr]\,dt, \\
d_t \tilde u_t = \bigl[ -\Delta\tilde u_t + \tilde H_t(\cdot, D\tilde u_t) - \tilde F_t(\cdot, m_t) \bigr]\,dt + d\tilde M_t, \\
\tilde m_0 = m_0, \qquad \tilde u_T(x) = \tilde G(x, m_T) \qquad \text{in } \mathbb{T}^d,
\end{cases}
\tag{5.27}
\]
where H̃_t(x,p) = H(x+√2W_t, p), F̃_t(x,m) = F(x+√2W_t, m), and G̃(x,m) = G(x+√2W_T, m). The existence and uniqueness of a solution (ũ_t, m̃_t, M̃_t)_{t∈[0,T]} to (5.27) are ensured by Theorem 4.3.1. Given such a solution, we let
\[
u_t(x) = \tilde u_t\bigl( x - \sqrt{2}W_t \bigr), \quad x \in \mathbb{T}^d; \qquad
m_t = \bigl( \mathrm{id} + \sqrt{2}W_t \bigr)\sharp\tilde m_t, \quad t \in [0,T],
\]

and claim that the pair (u_t, m_t)_{t∈[0,T]} thus defined satisfies (5.26) (for a suitable (v_t)_{t∈[0,T]}).

The dynamics satisfied by (m_t)_{t∈[0,T]} are given by the so-called Itô–Wentzell formula for Schwartz distribution-valued processes (see [67, Theorem 1.1]), the proof of which works as follows: for any test function φ ∈ C^4(T^d) and any z ∈ R^d, we have ∫_{T^d} φ(x) dm_t(x) = ∫_{T^d} φ(x+√2W_t) dm̃_t(x); expanding the variation of (∫_{T^d} φ(x+z) dm̃_t(x))_{t∈[0,T]} by means of the Fokker–Planck equation satisfied by (m̃_t)_{t∈[0,T]} and then replacing z by √2W_t, we then obtain the semimartingale expansion of (∫_{T^d} φ(x+√2W_t) dm̃_t(x))_{t∈[0,T]} by applying the standard Itô–Wentzell formula, as given in Subsection A.3.1. Once again we refer to [67, Theorem 1.1] for a complete account. Applying this strategy to our framework (with the formal writing (m_t(x) = m̃_t(x−√2W_t))_{t∈[0,T]}), this shows exactly that (m_t)_{t∈[0,T]} solves
\[
\begin{aligned}
d_t m_t &= \Bigl[ 2\Delta m_t + \mathrm{div}\Bigl( m_t\, D_p\tilde H_t\bigl( x-\sqrt{2}W_t,\ D\tilde u_t(x-\sqrt{2}W_t) \bigr) \Bigr) \Bigr]\,dt - \sqrt{2}\,\mathrm{div}\bigl( m_t\, dW_t \bigr) \\
&= \Bigl[ 2\Delta m_t + \mathrm{div}\Bigl( m_t\, D_p H\bigl( x, Du_t(x) \bigr) \Bigr) \Bigr]\,dt - \sqrt{2}\,\mathrm{div}\bigl( m_t\, dW_t \bigr).
\end{aligned}
\tag{5.28}
\]
Next we consider the equation satisfied by (u_t)_{t∈[0,T]}. Generally speaking, the strategy is similar. Intuitively, it consists in applying the Itô–Wentzell formula again, but to (u_t(x) = ũ_t(x−√2W_t))_{t∈[0,T]}. In any case, to apply the Itô–Wentzell formula, we need first to identify the martingale part in (ũ_t(x))_{t∈[0,T]} (namely (M̃_t(x))_{t∈[0,T]}). Recalling from Lemma 5.1.1 the formula

\[
\tilde u_t(x) = U\bigl( t,\ x+\sqrt{2}W_t,\ m_t \bigr), \qquad t \in [0,T],
\]


we understand that the martingale part of (ũ_t(x))_{t∈[0,T]} should be given by the first-order expansion of the above right-hand side (using an appropriate version of Itô's formula for functionals defined on [0,T] × T^d × P(T^d)). For our purpose, it is simpler to express u_t(x) in terms of U directly:
\[
u_t(x) = U\bigl( t, x, m_t \bigr), \qquad t \in [0,T].
\]

The trick is then to expand the above right-hand side by taking advantage of the master equation satisfied by U and of a new Itô formula for functionals of a measure argument; see Theorem A.1 in the Appendix. As a consequence of Theorem A.1, we get that
\[
\begin{aligned}
d_t u_t(x) = \Bigl[\ &\partial_t U\bigl( t, x, m_t \bigr)
+ 2 \int_{\mathbb{T}^d} \mathrm{div}_y \bigl[ D_m U(t,x,m_t,y) \bigr]\, dm_t(y) \\
&- \int_{\mathbb{T}^d} D_m U(t,x,m_t,y) \cdot D_p H\bigl( y, Du_t(y) \bigr)\, dm_t(y) \\
&+ \int_{\mathbb{T}^d} \int_{\mathbb{T}^d} \mathrm{Tr}\bigl[ D^2_{mm} U(t,x,m_t,y,y') \bigr]\, dm_t(y)\, dm_t(y') \Bigr]\,dt \\
&+ \sqrt{2}\, \Bigl[ \int_{\mathbb{T}^d} D_m U(t,x,m_t,y)\, dm_t(y) \Bigr] \cdot dW_t.
\end{aligned}
\]

Letting
\[
v_t(x) = \sqrt{2} \int_{\mathbb{T}^d} D_m U(t,x,m_t,y)\, dm_t(y), \qquad t \in [0,T],\ x \in \mathbb{T}^d,
\]
and using the master equation satisfied by U, we therefore obtain
\[
d_t u_t(x) = \Bigl[ -2\Delta u_t(x) + H\bigl( x, Du_t(x) \bigr) - F(x, m_t) - \sqrt{2}\,\mathrm{div}\bigl( v_t(x) \bigr) \Bigr]\,dt + v_t(x) \cdot dW_t.
\]
Together with (5.28), this completes the proof of the existence of a solution to (5.26).

Second step. Uniqueness of the solution. We now prove uniqueness of the solution to (5.26). Given a solution (u_t, m_t)_{t∈[0,T]} (with some (v_t)_{t∈[0,T]}) to (5.26), we let
\[
\tilde u_t(x) = u_t\bigl( x+\sqrt{2}W_t \bigr), \quad x \in \mathbb{T}^d; \qquad
\tilde m_t = \bigl( \mathrm{id} - \sqrt{2}W_t \bigr)\sharp m_t, \quad t \in [0,T].
\]

To prove uniqueness, it suffices to show that (ũ_t, m̃_t)_{t∈[0,T]} is a solution to (5.27) (for some martingale (M̃_t)_{t∈[0,T]}).
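The push-forwards (id ± √2W_t)♯ used in these changes of variables obey the elementary change-of-variables identity ∫φ d[(id+c)♯m] = ∫φ(·+c) dm. A minimal numerical check on an empirical measure (helper names ours, not the book's):

```python
import numpy as np

def push_forward_shift(atoms, c):
    """Image of the empirical measure (1/n) sum_k delta_{x_k} under x -> x + c."""
    return np.asarray(atoms, dtype=float) + c

def integrate(atoms, phi):
    """Integral of phi against the empirical measure carried by `atoms`."""
    return float(np.mean([phi(x) for x in atoms]))

m_atoms = [0.0, 0.5]
lhs = integrate(push_forward_shift(m_atoms, 0.3), lambda x: x ** 2)
rhs = integrate(m_atoms, lambda x: (x + 0.3) ** 2)  # change-of-variables form
# lhs == rhs == (0.3**2 + 0.8**2) / 2 = 0.365
```

It is precisely this identity (with the random shift √2W_t and test functions φ ∈ C^4) that feeds into the Itô–Wentzell computations below.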


We first investigate the dynamics of (m̃_t)_{t∈[0,T]}. As in the first step (existence of a solution), we may apply the Itô–Wentzell formula to the process (∫_{T^d} φ(x−√2W_t) dm_t(x))_{t∈[0,T]} for a smooth function φ ∈ C^4(T^d); see Subsection A.3.1. Equivalently, we may apply the Itô–Wentzell formula for Schwartz distribution-valued processes as given in [67, Theorem 1.1]. We get exactly that (m̃_t)_{t∈[0,T]} satisfies the first equation in (5.27). To prove the second equation in (5.27), we apply the Itô–Wentzell formula for real-valued processes to (ũ_t(x) = u_t(x+√2W_t))_{t∈[0,T]}; see again Subsection A.3.1.


Chapter Six

Convergence of the Nash System

In this chapter, we consider, for an integer $N\geq 2$, a classical solution $(v^{N,i})_{i\in\{1,\dots,N\}}$ of the Nash system with a common noise:

\[
\begin{cases}
-\partial_t v^{N,i}(t,x) - \displaystyle\sum_{j=1}^N \Delta_{x_j} v^{N,i}(t,x) - \beta\sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} v^{N,i}(t,x)\\[1mm]
\quad + H\big(x_i, D_{x_i} v^{N,i}(t,x)\big) + \displaystyle\sum_{j\neq i} D_p H\big(x_j, D_{x_j} v^{N,j}(t,x)\big)\cdot D_{x_j} v^{N,i}(t,x)\\[1mm]
\quad = F\big(x_i, m^{N,i}_x\big) \qquad \text{in } [0,T]\times(\mathbb{T}^d)^N,\\[1mm]
v^{N,i}(T,x) = G\big(x_i, m^{N,i}_x\big) \qquad \text{in } (\mathbb{T}^d)^N,
\end{cases}
\tag{6.1}
\]

where we set, for $x=(x_1,\dots,x_N)\in(\mathbb{T}^d)^N$,

\[
m^{N,i}_x = \frac{1}{N-1}\sum_{j\neq i}\delta_{x_j}.
\]

Our aim is to prove Theorem 2.4.8, which says that $(v^{N,i})_{i\in\{1,\dots,N\}}$ converges, in a suitable sense, to the solution of the second-order master equation, and Theorem 2.4.9, which claims that the optimal trajectories also converge.

Throughout this part we assume that $H$, $F$, and $G$ satisfy the assumptions of Theorem 2.4.5 with $n\geq 2$. This allows us to define $U=U(t,x,m)$, the solution of the second-order master equation

\[
\begin{cases}
-\partial_t U - (1+\beta)\Delta_x U + H(x, D_x U) - (1+\beta)\displaystyle\int_{\mathbb{T}^d} \mathrm{div}_y\,[D_m U]\, dm(y)\\[1mm]
\quad + \displaystyle\int_{\mathbb{T}^d} D_m U\cdot D_p H\big(y, D_x U(\cdot,y,\cdot)\big)\, dm(y)\\[1mm]
\quad - 2\beta \displaystyle\int_{\mathbb{T}^d} \mathrm{div}_x\,[D_m U]\, dm(y) - \beta\displaystyle\int_{\mathbb{T}^{2d}} \mathrm{Tr}\big[D^2_{mm} U\big]\, dm^{\otimes 2}(y,y')\\[1mm]
\quad = F(x,m) \qquad \text{in } [0,T]\times\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d),\\[1mm]
U(T,x,m) = G(x,m) \qquad \text{in } \mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d),
\end{cases}
\tag{6.2}
\]

where $\partial_t U$, $D_x U$, and $\Delta_x U$ are understood as $\partial_t U(t,x,m)$, $D_x U(t,x,m)$, and $\Delta_x U(t,x,m)$, $D_x U(\cdot,y,\cdot)$ is understood as $D_x U(t,y,m)$, and $D_m U$ and $D^2_{mm} U$ are understood as $D_m U(t,x,m,y)$ and $D^2_{mm} U(t,x,m,y,y')$. Above, $\beta\geq 0$ is a parameter for the common noise. For $\alpha'\in(0,\alpha)$, we have, for any $(t,x)\in[0,T]\times\mathbb{T}^d$ and $m,m'\in\mathcal{P}(\mathbb{T}^d)$,

\[
\big\|U(t,\cdot,m)\big\|_{n+2+\alpha'} + \Big\|\frac{\delta U}{\delta m}(t,\cdot,m,\cdot)\Big\|_{(n+2+\alpha',\,n+1+\alpha')} + \Big\|\frac{\delta^2 U}{\delta m^2}(t,\cdot,m,\cdot,\cdot)\Big\|_{(n+2+\alpha',\,n+\alpha',\,n+\alpha')} \leq C_0,
\tag{6.3}
\]

and the mapping

\[
[0,T]\times\mathcal{P}(\mathbb{T}^d)\ni(t,m)\mapsto \frac{\delta^2 U}{\delta m^2}(t,\cdot,m,\cdot,\cdot)\in C^{n+2+\alpha'}(\mathbb{T}^d)\times C^{n+\alpha'}(\mathbb{T}^d)
\tag{6.4}
\]

is continuous. As previously stated, a solution of (6.2) satisfying the above properties was constructed in Theorem 2.4.5. When $\beta=0$, one just needs to replace the above assumptions by those of Theorem 2.4.2, which do not require the second-order differentiability of $F$ and $G$ with respect to $m$.

The main idea for proving the convergence of the $(v^{N,i})_{i\in\{1,\dots,N\}}$ toward $U$ is to use the fact that suitable finite-dimensional projections of $U$ are nearly solutions to the Nash equilibrium equation. Actually, as we already alluded to at the end of Chapter 2, this strategy works under weaker assumptions than those required in the statement of Theorem 2.4.5. What is really needed is that $H$ and $D_p H$ are globally Lipschitz continuous, that the diffusion coefficient driving the independent noises $((B^i_t)_{t\in[0,T]})_{i=1,\dots,N}$ in (1.3) is nondegenerate, and that the master equation has a classical solution satisfying the conclusion of Theorem 2.4.5 (or Theorem 2.4.2 if $\beta=0$). In particular, the monotonicity properties of $F$ and $G$ play no role in the proof of the convergence of the $N$-Nash system. We refer to Remarks 6.1.5 and 6.2.2 and let the interested reader reformulate the statements of Theorems 2.4.8 and 2.4.9 accordingly.

6.1 FINITE DIMENSIONAL PROJECTIONS OF U

For $N\geq 2$ and $i\in\{1,\dots,N\}$ we set

\[
u^{N,i}(t,x) = U\big(t, x_i, m^{N,i}_x\big), \qquad \text{where } x=(x_1,\dots,x_N)\in(\mathbb{T}^d)^N,\quad m^{N,i}_x = \frac{1}{N-1}\sum_{j\neq i}\delta_{x_j}.
\]

Note that the $u^{N,i}$ are at least $C^2$ with respect to the $x_i$ variable because $U$ is. Moreover, $\partial_t u^{N,i}$ exists and is continuous because of the regularity of $U$. The next statement says that $u^{N,i}$ is actually globally $C^2$ in the space variables:


Proposition 6.1.1. Assume (6.3) and (6.4). For any $N\geq 2$ and $i\in\{1,\dots,N\}$, $u^{N,i}$ is of class $C^2$ in the space variables, with

\[
\begin{aligned}
D_{x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_m U\big(t,x_i,m^{N,i}_x,x_j\big) && (j\neq i),\\
D^2_{x_i,x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_x D_m U\big(t,x_i,m^{N,i}_x,x_j\big) && (j\neq i),\\
D^2_{x_j,x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_y[D_m U]\big(t,x_i,m^{N,i}_x,x_j\big) + \frac{1}{(N-1)^2}\, D^2_{mm} U\big(t,x_i,m^{N,i}_x,x_j,x_j\big) && (j\neq i),\\
D^2_{x_j,x_k} u^{N,i}(t,x) &= \frac{1}{(N-1)^2}\, D^2_{mm} U\big(t,x_i,m^{N,i}_x,x_j,x_k\big) && (i,j,k \text{ distinct}).
\end{aligned}
\]
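The first formula of the proposition can be tested numerically on a toy functional. For the linear example $U(m)=\int f\,dm$ (our own illustration, not from the text), one has $D_m U(m,y)=f'(y)$, so the proposition predicts $D_{x_j}u^{N,i}(x)=f'(x_j)/(N-1)$ for $j\neq i$; a central finite difference confirms this:

```python
import numpy as np

# Toy functional (ours, not the book's): U(m) = ∫ f dm on the 1-d torus,
# for which D_m U(m, y) = f'(y).  Then u^{N,i}(x) = U(m^{N,i}_x) and
# Proposition 6.1.1 gives D_{x_j} u^{N,i}(x) = f'(x_j) / (N - 1), j != i.
f  = lambda y: np.sin(2 * np.pi * y)
df = lambda y: 2 * np.pi * np.cos(2 * np.pi * y)

def u_Ni(x, i):
    """u^{N,i}(x) = (1/(N-1)) * sum_{j != i} f(x_j)."""
    return np.mean(f(np.delete(x, i)))

rng = np.random.default_rng(1)
N = 20
x = rng.uniform(0.0, 1.0, size=N)
i, j, h = 0, 3, 1e-6

xp, xm = x.copy(), x.copy()
xp[j] += h
xm[j] -= h
fd = (u_Ni(xp, i) - u_Ni(xm, i)) / (2 * h)   # central finite difference in x_j
exact = df(x[j]) / (N - 1)                   # (1/(N-1)) D_m U(m^{N,i}_x, x_j)
assert abs(fd - exact) < 1e-6
```

Of course, for this linear example $D^2_{mm}U\equiv 0$, so the $1/(N-1)^2$ terms of the proposition vanish.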

Remark 6.1.2. If, instead of assumptions (6.3) and (6.4), one only requires the assumptions of Theorem 2.4.2 (no second-order differentiability for $F$ and $G$), then, for any $N\geq 2$ and $i\in\{1,\dots,N\}$, $u^{N,i}$ is of class $C^1$ in the space variables, with uniformly Lipschitz continuous space derivatives and with

\[
\begin{aligned}
D_{x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_m U\big(t,x_i,m^{N,i}_x,x_j\big) && (j\neq i),\\
D^2_{x_i,x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_x D_m U\big(t,x_i,m^{N,i}_x,x_j\big) && (j\neq i),\\
\Big|D^2_{x_j,x_j} u^{N,i}(t,x) - \frac{1}{N-1}\, D_y[D_m U]\big(t,x_i,m^{N,i}_x,x_j\big)\Big| &\leq \frac{C}{N} && (j\neq i),
\end{aligned}
\]

where the two equalities hold for any $x\in(\mathbb{T}^d)^N$ while the inequality only holds for a.e.\ $x\in(\mathbb{T}^d)^N$, the constant $C$ depending on the Lipschitz regularity of $D_m U$ in the $m$ variable. The proof is the same as for Proposition 6.1.1, except that one uses Proposition A.2.1 instead of Proposition A.2.3 in the argument that follows.

Proof. For $x=(x_j)_{j\in\{1,\dots,N\}}$ such that $x_j\neq x_k$ for any $j\neq k$, let $\varepsilon=\min_{j\neq k}|x_j-x_k|$. For $v=(v_j)\in(\mathbb{R}^d)^N$ with $v_i=0$ (the value of $i\in\{1,\dots,N\}$ being fixed), we consider a smooth vector field $\phi$ such that

\[
\phi(x) = v_j \quad \text{if } x\in B(x_j,\varepsilon/4),
\]

where $B(x_j,\varepsilon/4)$ is the ball of center $x_j$ and radius $\varepsilon/4$. Then, in view of our assumptions (6.3) and (6.4) on $U$, Propositions A.2.3 and A.2.4 in the Appendix imply that


\[
\begin{aligned}
\Big| U\big(t,x_i,(\mathrm{id}+\phi)\sharp\, m^{N,i}_x\big) &- U\big(t,x_i,m^{N,i}_x\big) - \int_{\mathbb{T}^d} D_m U\big(t,x_i,m^{N,i}_x,y\big)\cdot\phi(y)\, dm^{N,i}_x(y)\\
&- \frac12 \int_{\mathbb{T}^d} D_y D_m U\big(t,x_i,m^{N,i}_x,y\big)\phi(y)\cdot\phi(y)\, dm^{N,i}_x(y)\\
&- \frac12 \int_{\mathbb{T}^d}\int_{\mathbb{T}^d} D^2_{mm} U\big(t,x_i,m^{N,i}_x,y,y'\big)\phi(y)\cdot\phi(y')\, dm^{N,i}_x(y)\,dm^{N,i}_x(y')\Big|\\
&\leq \|\phi\|^2_{L^3(m^{N,i}_x)}\;\omega\big(\|\phi\|_{L^3(m^{N,i}_x)}\big),
\end{aligned}
\]

for some modulus $\omega$ such that $\omega(s)\to 0$ as $s\to 0^+$. Therefore (recalling that $v_i=0$),

\[
\begin{aligned}
u^{N,i}(t,x+v) - u^{N,i}(t,x) &= U\big(t,x_i,(\mathrm{id}+\phi)\sharp\, m^{N,i}_x\big) - U\big(t,x_i,m^{N,i}_x\big)\\
&= \int_{\mathbb{T}^d} D_m U\big(t,x_i,m^{N,i}_x,y\big)\cdot\phi(y)\, dm^{N,i}_x(y)\\
&\quad + \frac12\int_{\mathbb{T}^d} D_y D_m U\big(t,x_i,m^{N,i}_x,y\big)\phi(y)\cdot\phi(y)\, dm^{N,i}_x(y)\\
&\quad + \frac12\int_{\mathbb{T}^d}\int_{\mathbb{T}^d} D^2_{mm} U\big(t,x_i,m^{N,i}_x,y,z\big)\phi(y)\cdot\phi(z)\, dm^{N,i}_x(y)\,dm^{N,i}_x(z)\\
&\quad + O\Big(\|\phi\|^2_{L^3(m^{N,i}_x)}\,\omega\big(\|\phi\|_{L^3(m^{N,i}_x)}\big)\Big)\\
&= \frac{1}{N-1}\sum_{j\neq i} D_m U\big(t,x_i,m^{N,i}_x,x_j\big)\cdot v_j + \frac{1}{2(N-1)}\sum_{j\neq i} D_y D_m U\big(t,x_i,m^{N,i}_x,x_j\big) v_j\cdot v_j\\
&\quad + \frac{1}{2(N-1)^2}\sum_{j,k\neq i} D^2_{mm} U\big(t,x_i,m^{N,i}_x,x_j,x_k\big) v_j\cdot v_k + O\big(|v|^2\omega(|v|)\big),
\end{aligned}
\]

where $|O(r)|\leq C|r|$ for a constant $C$ independent of $(t,x)$, $N$, and $i$, and where $\omega$ is allowed to vary from line to line as long as $\omega(s)\to 0$ as $s\to 0^+$. This shows that $u^{N,i}$ has a second-order expansion at $x$ with respect to the variables $(x_j)_{j\neq i}$ and that

\[
\begin{aligned}
D_{x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_m U\big(t,x_i,m^{N,i}_x,x_j\big) && (j\neq i),\\
D^2_{x_j,x_j} u^{N,i}(t,x) &= \frac{1}{N-1}\, D_y D_m U\big(t,x_i,m^{N,i}_x,x_j\big) + \frac{1}{(N-1)^2}\, D^2_{mm} U\big(t,x_i,m^{N,i}_x,x_j,x_j\big) && (j\neq i),\\
D^2_{x_j,x_k} u^{N,i}(t,x) &= \frac{1}{(N-1)^2}\, D^2_{mm} U\big(t,x_i,m^{N,i}_x,x_j,x_k\big) && (i,j,k \text{ distinct}).
\end{aligned}
\]

Observing that the existence of the derivatives in the direction $x_i$ is obvious, we have thus proved the existence of first- and second-order space derivatives of $u^{N,i}$ in the open subset of $[0,T]\times(\mathbb{T}^d)^N$ consisting of the points $(t,x)=(t,x_1,\dots,x_N)$ such that $x_i\neq x_j$ for any $i\neq j$. As $D_m U$, $D_y[D_m U]$, and $D^2_{mm}U$ are continuous, these first- and second-order derivatives can be continuously extended to the whole space $[0,T]\times(\mathbb{T}^d)^N$, and therefore $u^{N,i}$ is $C^2$ with respect to the space variables in $[0,T]\times\mathbb{T}^{Nd}$. $\square$

We now show that $(u^{N,i})_{i\in\{1,\dots,N\}}$ is "almost" a solution to the Nash system (6.1):

Proposition 6.1.3. Assume (6.3) and (6.4). One has, for any $i\in\{1,\dots,N\}$,

\[
\begin{cases}
-\partial_t u^{N,i}(t,x) - \displaystyle\sum_{j=1}^N \Delta_{x_j} u^{N,i}(t,x) - \beta\sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} u^{N,i}(t,x)\\[1mm]
\quad + H\big(x_i, D_{x_i} u^{N,i}(t,x)\big) + \displaystyle\sum_{j\neq i} D_p H\big(x_j, D_{x_j} u^{N,j}(t,x)\big)\cdot D_{x_j} u^{N,i}(t,x)\\[1mm]
\quad = F\big(x_i, m^{N,i}_x\big) + r^{N,i}(t,x) \qquad \text{in } [0,T]\times(\mathbb{T}^d)^N,\\[1mm]
u^{N,i}(T,x) = G\big(x_i, m^{N,i}_x\big) \qquad \text{in } (\mathbb{T}^d)^N,
\end{cases}
\tag{6.5}
\]

where $r^{N,i}\in C^0([0,T]\times\mathbb{T}^{Nd})$ with $\|r^{N,i}\|_\infty \leq C/N$.

Remark 6.1.4. When $\beta=0$, assumptions (6.3) and (6.4) can be replaced by those of Theorem 2.4.2 (no second-order differentiability for $F$ and $G$). In this case, equation (6.5) only holds a.e., with $r^{N,i}\in L^\infty$ still satisfying $\|r^{N,i}\|_\infty\leq C/N$. The proof is the same as for Proposition 6.1.3, except that one of course ignores the terms involving $\beta$ and replaces the conclusion of Proposition 6.1.1 by that of Remark 6.1.2.

Proof. As $U$ solves (6.2), one has, at the point $(t,x_i,m^{N,i}_x)$,

\[
\begin{aligned}
&-\partial_t U - (1+\beta)\Delta_x U + H(x_i, D_x U) - (1+\beta)\int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y)\\
&\quad + \int_{\mathbb{T}^d} D_m U\big(t,x_i,m^{N,i}_x,y\big)\cdot D_p H\big(y, D_x U(t,y,m^{N,i}_x)\big)\, dm^{N,i}_x(y)\\
&\quad - 2\beta\int_{\mathbb{T}^d} \mathrm{div}_x\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y)\\
&\quad - \beta\int_{\mathbb{T}^d}\int_{\mathbb{T}^d} \mathrm{Tr}\,D^2_{mm} U\big(t,x_i,m^{N,i}_x,y,z\big)\, dm^{N,i}_x(y)\,dm^{N,i}_x(z) = F\big(x_i,m^{N,i}_x\big).
\end{aligned}
\]


So $u^{N,i}$ satisfies

\[
\begin{aligned}
&-\partial_t u^{N,i} - (1+\beta)\Delta_{x_i} u^{N,i} + H\big(x_i, D_{x_i} u^{N,i}\big) - (1+\beta)\int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y)\\
&\quad + \frac{1}{N-1}\sum_{j\neq i} D_m U\big(t,x_i,m^{N,i}_x,x_j\big)\cdot D_p H\big(x_j, D_x U(t,x_j,m^{N,i}_x)\big)\\
&\quad - 2\beta\int_{\mathbb{T}^d} \mathrm{div}_x\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y)\\
&\quad - \beta\int_{\mathbb{T}^d}\int_{\mathbb{T}^d}\mathrm{Tr}\,D^2_{mm} U\big(t,x_i,m^{N,i}_x,y,z\big)\, dm^{N,i}_x(y)\,dm^{N,i}_x(z) = F\big(x_i,m^{N,i}_x\big).
\end{aligned}
\]

Note that, by Proposition 6.1.1,

\[
\frac{1}{N-1}\, D_m U\big(t,x_i,m^{N,i}_x,x_j\big) = D_{x_j} u^{N,i}(t,x).
\]

In particular,

\[
\big\|D_{x_j} u^{N,i}\big\|_\infty \leq \frac{C}{N}.
\tag{6.6}
\]

By the Lipschitz continuity of $D_x U$ with respect to $m$, we have

\[
\big|D_x U(t,x_j,m^{N,i}_x) - D_x U(t,x_j,m^{N,j}_x)\big| \leq C\, d_1\big(m^{N,i}_x, m^{N,j}_x\big) \leq \frac{C}{N-1},
\]

so that, by the Lipschitz continuity of $D_p H$,

\[
\Big|D_p H\big(x_j, D_x U(t,x_j,m^{N,i}_x)\big) - D_p H\big(x_j, D_{x_j} u^{N,j}(t,x)\big)\Big| \leq \frac{C}{N}.
\]

Collecting the above relations, we obtain

\[
\begin{aligned}
\frac{1}{N-1}\sum_{j\neq i} D_m U\big(t,x_i,m^{N,i}_x,x_j\big)\cdot D_p H\big(x_j, D_x U(t,x_j,m^{N,i}_x)\big)
&= \sum_{j\neq i} D_{x_j} u^{N,i}(t,x)\cdot D_p H\big(x_j, D_x U(t,x_j,m^{N,i}_x)\big)\\
&= \sum_{j\neq i} D_{x_j} u^{N,i}(t,x)\cdot D_p H\big(x_j, D_{x_j} u^{N,j}(t,x)\big) + O\Big(\frac1N\Big),
\end{aligned}
\tag{6.7}
\]


where we used (6.6) in the last inequality. On the other hand,

\[
\begin{aligned}
\sum_{j=1}^N \Delta_{x_j} u^{N,i} + \beta\sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} u^{N,i}
&= (1+\beta)\,\Delta_{x_i} u^{N,i} + (1+\beta)\sum_{j\neq i} \Delta_{x_j} u^{N,i}\\
&\quad + 2\beta\sum_{j\neq i}\mathrm{Tr}\, D^2_{x_i,x_j} u^{N,i} + \beta\sum_{j,k,i\ \mathrm{distinct}} \mathrm{Tr}\, D^2_{x_j,x_k} u^{N,i},
\end{aligned}
\]

where, using Proposition 6.1.1,

\[
\begin{aligned}
\sum_{j\neq i} \Delta_{x_j} u^{N,i}(t,x) &= \int_{\mathbb{T}^d} \mathrm{div}_y\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y)\\
&\quad + \frac{1}{N-1}\int_{\mathbb{T}^d} \mathrm{Tr}\big[D^2_{mm} U\big]\big(t,x_i,m^{N,i}_x,y,y\big)\, dm^{N,i}_x(y),\\
\sum_{j\neq i}\mathrm{Tr}\, D^2_{x_i,x_j} u^{N,i}(t,x) &= \int_{\mathbb{T}^d} \mathrm{div}_x\big[D_m U\big]\big(t,x_i,m^{N,i}_x,y\big)\, dm^{N,i}_x(y),\\
\sum_{j,k,i\ \mathrm{distinct}} \mathrm{Tr}\, D^2_{x_j,x_k} u^{N,i}(t,x) &= \int_{\mathbb{T}^d}\int_{\mathbb{T}^d} \mathrm{Tr}\big[D^2_{mm} U\big]\big(t,x_i,m^{N,i}_x,y,z\big)\, dm^{N,i}_x(y)\,dm^{N,i}_x(z).
\end{aligned}
\]

Therefore,

\[
\begin{aligned}
&-\partial_t u^{N,i}(t,x) - \sum_{j=1}^N \Delta_{x_j} u^{N,i}(t,x) - \beta\sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} u^{N,i}(t,x)\\
&\quad + H\big(x_i, D_{x_i} u^{N,i}(t,x)\big) + \sum_{j\neq i} D_{x_j} u^{N,i}(t,x)\cdot D_p H\big(x_j, D_{x_j} u^{N,j}(t,x)\big)\\
&\quad + \frac{1+\beta}{N-1}\int_{\mathbb{T}^d}\mathrm{Tr}\,D^2_{mm} U\big(t,x_i,m^{N,i}_x,y,y\big)\, dm^{N,i}_x(y) = F\big(x_i,m^{N,i}_x\big) + O\Big(\frac1N\Big),
\end{aligned}
\]

which shows the result. $\square$

Remark 6.1.5. The reader may observe that, in addition to the existence of a classical solution U (to the master equation) satisfying the conclusion of Theorem 2.4.5, only the global Lipschitz property of Dp H is used in the proof; see (6.7).

6.2 CONVERGENCE

We now turn to the proof of Theorem 2.4.8. For this, we consider the solution $(v^{N,i})_{i\in\{1,\dots,N\}}$ of the Nash system (6.1). By uniqueness of the solution, the $(v^{N,i})_{i\in\{1,\dots,N\}}$ must be symmetrical. By symmetrical, we mean that, for any $x=(x_l)_{l\in\{1,\dots,N\}}\in\mathbb{T}^{Nd}$ and any indices $j\neq k$, if $\tilde x=(\tilde x_l)_{l\in\{1,\dots,N\}}$ is the $N$-tuple obtained from $x$ by permuting the $j$th and $k$th vectors (i.e., $\tilde x_l=x_l$ for $l\notin\{j,k\}$, $\tilde x_j=x_k$, $\tilde x_k=x_j$), then

\[
v^{N,i}(t,\tilde x) = v^{N,i}(t,x) \ \text{ if } i\notin\{j,k\}, \qquad v^{N,i}(t,\tilde x) = v^{N,k}(t,x) \ \text{ if } i=j,
\]

which may be reformulated as follows: there exists a function $V^N:\mathbb{T}^d\times[\mathbb{T}^d]^{N-1}\to\mathbb{R}$ such that, for any $x\in\mathbb{T}^d$, the function $[\mathbb{T}^d]^{N-1}\ni(y_1,\dots,y_{N-1})\mapsto V^N(x,(y_1,\dots,y_{N-1}))$ is invariant under permutation, and

\[
\forall i\in\{1,\dots,N\},\ x\in[\mathbb{T}^d]^N, \qquad v^{N,i}(t,x) = V^N\big(x_i,(x_1,\dots,x_{i-1},x_{i+1},\dots,x_N)\big).
\]

Note that the $(u^{N,i})_{i\in\{1,\dots,N\}}$ are also symmetrical.

The proof of Theorem 2.4.8 consists in comparing the "optimal trajectories" for $v^{N,i}$ and for $u^{N,i}$, for any $i\in\{1,\dots,N\}$. For this, let us fix $t_0\in[0,T)$, $m_0\in\mathcal{P}(\mathbb{T}^d)$, and let $(Z_i)_{i\in\{1,\dots,N\}}$ be an i.i.d.\ family of $N$ random variables of law $m_0$. We set $Z=(Z_i)_{i\in\{1,\dots,N\}}$. Let also $((B^i_t)_{t\in[0,T]})_{i\in\{1,\dots,N\}}$ be a family of $N$ independent $d$-dimensional Brownian motions, independent of $(Z_i)_{i\in\{1,\dots,N\}}$, and let $W$ be a $d$-dimensional Brownian motion independent of the family $((B^i_t)_{t\in[0,T]},Z_i)_{i\in\{1,\dots,N\}}$. We consider the systems of stochastic differential equations (SDEs) with variables $(X_t=(X_{i,t})_{i\in\{1,\dots,N\}})_{t\in[0,T]}$ and $(Y_t=(Y_{i,t})_{i\in\{1,\dots,N\}})_{t\in[0,T]}$ (the SDEs being set on $\mathbb{R}^d$ with periodic coefficients):

\[
\begin{cases}
dX_{i,t} = -D_p H\big(X_{i,t}, D_{x_i} u^{N,i}(t,X_t)\big)\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, & t\in[t_0,T],\\
X_{i,t_0} = Z_i,
\end{cases}
\tag{6.8}
\]

and

\[
\begin{cases}
dY_{i,t} = -D_p H\big(Y_{i,t}, D_{x_i} v^{N,i}(t,Y_t)\big)\,dt + \sqrt{2}\,dB^i_t + \sqrt{2\beta}\,dW_t, & t\in[t_0,T],\\
Y_{i,t_0} = Z_i.
\end{cases}
\tag{6.9}
\]

Since the $(u^{N,i})_{i\in\{1,\dots,N\}}$ are symmetrical, the processes $((X_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ are exchangeable. The same holds for the $((Y_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ and, actually, the $N$ $\mathbb{R}^{2d}$-valued processes $((X_{i,t},Y_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ are also exchangeable.
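The noise structure of (6.8)–(6.9) can be sketched by an Euler–Maruyama scheme. Since the feedbacks $D_{x_i}u^{N,i}$ and $D_{x_i}v^{N,i}$ are of course not available in closed form, the sketch below (ours, not the book's model) substitutes a hypothetical Lipschitz mean-reverting drift; only the structure is the point: each particle gets its own increment of $B^i$, while a single increment of $W$ is shared by all, with the volatilities $\sqrt{2}$ and $\sqrt{2\beta}$ of the text.

```python
import numpy as np

def simulate(N=100, T=1.0, n_steps=200, beta=0.5, seed=2):
    """Euler-Maruyama sketch of a system of the form (6.8), in dimension d = 1,
    with a hypothetical stand-in drift b(x_i, m) = -(x_i - mean(m)) replacing
    the unavailable feedback -D_p H(x_i, D_{x_i} u^{N,i})."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = rng.uniform(0.0, 1.0, size=N)                # Z_i, i.i.d. initial data
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=N)    # idiosyncratic noises B^i
        dW = rng.normal(0.0, np.sqrt(dt))            # common noise W, shared by all
        drift = -(X - X.mean())                      # hypothetical stand-in drift
        X = X + drift * dt + np.sqrt(2.0) * dB + np.sqrt(2.0 * beta) * dW
    return X

X_T = simulate()
```

Because the initial conditions are i.i.d.\ and the drift is symmetric, the simulated particles are exchangeable, mirroring the exchangeability of $((X_{i,t})_{t\in[t_0,T]})_i$ noted above.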


Theorem 6.2.1. Assume (6.3) and (6.4). Then we have, for any $i\in\{1,\dots,N\}$,

\[
\mathbb{E}\Big[\sup_{t\in[t_0,T]}|Y_{i,t}-X_{i,t}|\Big] \leq \frac{C}{N},
\tag{6.10}
\]

\[
\mathbb{E}\int_{t_0}^T \big|D_{x_i} v^{N,i}(t,Y_t) - D_{x_i} u^{N,i}(t,Y_t)\big|^2\, dt \leq \frac{C}{N^2},
\tag{6.11}
\]

and, $\mathbb{P}$-almost surely, for all $i=1,\dots,N$,

\[
\big|u^{N,i}(t_0,Z) - v^{N,i}(t_0,Z)\big| \leq \frac{C}{N},
\tag{6.12}
\]

where $C$ is a (deterministic) constant that does not depend on $t_0$, $m_0$, and $N$.

Proof of Theorem 6.2.1. First step. We start with the proof of (6.11). For simplicity, we work with $t_0=0$. Let us first introduce new notation:

\[
U^{N,i}_t = u^{N,i}(t,Y_t),\quad V^{N,i}_t = v^{N,i}(t,Y_t),\quad DU^{N,i,j}_t = D_{x_j} u^{N,i}(t,Y_t),\quad DV^{N,i,j}_t = D_{x_j} v^{N,i}(t,Y_t),\qquad t\in[0,T].
\]

Using equation (6.1) satisfied by the $(v^{N,i})_{i\in\{1,\dots,N\}}$, we deduce from Itô's formula that, for any $i\in\{1,\dots,N\}$,

\[
\begin{aligned}
dV^{N,i}_t &= \Big(\partial_t v^{N,i}(t,Y_t) - \sum_{j=1}^N D_{x_j} v^{N,i}(t,Y_t)\cdot D_p H\big(Y_{j,t}, D_{x_j} v^{N,j}(t,Y_t)\big)\\
&\qquad + \sum_{j=1}^N \Delta_{x_j} v^{N,i}(t,Y_t) + \beta\sum_{j,k=1}^N \mathrm{Tr}\, D^2_{x_j,x_k} v^{N,i}(t,Y_t)\Big)\, dt\\
&\qquad + \sqrt{2}\sum_{j=1}^N D_{x_j} v^{N,i}(t,Y_t)\cdot dB^j_t + \sqrt{2\beta}\sum_{j=1}^N D_{x_j} v^{N,i}(t,Y_t)\cdot dW_t\\
&= \Big(H\big(Y_{i,t}, D_{x_i} v^{N,i}(t,Y_t)\big) - D_{x_i} v^{N,i}(t,Y_t)\cdot D_p H\big(Y_{i,t}, D_{x_i} v^{N,i}(t,Y_t)\big) - F\big(Y_{i,t}, m^{N,i}_{Y_t}\big)\Big)\, dt\\
&\qquad + \sqrt{2}\sum_{j=1}^N D_{x_j} v^{N,i}(t,Y_t)\cdot dB^j_t + \sqrt{2\beta}\sum_{j=1}^N D_{x_j} v^{N,i}(t,Y_t)\cdot dW_t.
\end{aligned}
\tag{6.13}
\]


Similarly, as $(u^{N,i})_{i\in\{1,\dots,N\}}$ satisfies (6.5), we have by a standard computation:

\[
\begin{aligned}
dU^{N,i}_t &= \Big(H\big(Y_{i,t}, D_{x_i} u^{N,i}(t,Y_t)\big) - D_{x_i} u^{N,i}(t,Y_t)\cdot D_p H\big(Y_{i,t}, D_{x_i} u^{N,i}(t,Y_t)\big)\\
&\qquad - F\big(Y_{i,t}, m^{N,i}_{Y_t}\big) - r^{N,i}(t,Y_t)\Big)\, dt\\
&\quad - \sum_{j=1}^N D_{x_j} u^{N,i}(t,Y_t)\cdot\Big(D_p H\big(Y_{j,t}, D_{x_j} v^{N,j}(t,Y_t)\big) - D_p H\big(Y_{j,t}, D_{x_j} u^{N,j}(t,Y_t)\big)\Big)\, dt\\
&\quad + \sqrt{2}\sum_{j=1}^N D_{x_j} u^{N,i}(t,Y_t)\cdot dB^j_t + \sqrt{2\beta}\sum_{j=1}^N D_{x_j} u^{N,i}(t,Y_t)\cdot dW_t.
\end{aligned}
\tag{6.14}
\]

Compute the difference between (6.13) and (6.14), take the square, and apply Itô's formula again:

\[
\begin{aligned}
d\big(U^{N,i}_t - V^{N,i}_t\big)^2 &= \Big\{2\big(U^{N,i}_t - V^{N,i}_t\big)\Big[H\big(Y_{i,t}, DU^{N,i,i}_t\big) - H\big(Y_{i,t}, DV^{N,i,i}_t\big)\Big]\\
&\quad - 2\big(U^{N,i}_t - V^{N,i}_t\big)\, DU^{N,i,i}_t\cdot\Big[D_p H\big(Y_{i,t}, DU^{N,i,i}_t\big) - D_p H\big(Y_{i,t}, DV^{N,i,i}_t\big)\Big]\\
&\quad - 2\big(U^{N,i}_t - V^{N,i}_t\big)\big(DU^{N,i,i}_t - DV^{N,i,i}_t\big)\cdot D_p H\big(Y_{i,t}, DV^{N,i,i}_t\big)\\
&\quad - 2\big(U^{N,i}_t - V^{N,i}_t\big)\, r^{N,i}(t,Y_t)\\
&\quad - 2\big(U^{N,i}_t - V^{N,i}_t\big)\sum_{j=1}^N DU^{N,i,j}_t\cdot\Big[D_p H\big(Y_{j,t}, DV^{N,j,j}_t\big) - D_p H\big(Y_{j,t}, DU^{N,j,j}_t\big)\Big]\\
&\quad + 2\sum_{j=1}^N \big|DU^{N,i,j}_t - DV^{N,i,j}_t\big|^2 + 2\beta\,\Big|\sum_{j=1}^N \big(DU^{N,i,j}_t - DV^{N,i,j}_t\big)\Big|^2\Big\}\, dt\\
&\quad + 2\big(U^{N,i}_t - V^{N,i}_t\big)\sum_{j=1}^N\Big[\sqrt{2}\,\big(DU^{N,i,j}_t - DV^{N,i,j}_t\big)\cdot dB^j_t + \sqrt{2\beta}\,\big(DU^{N,i,j}_t - DV^{N,i,j}_t\big)\cdot dW_t\Big].
\end{aligned}
\tag{6.15}
\]


Recall now that $H$ and $D_p H$ are Lipschitz continuous in the variable $p$. Recall also that $DU^{N,i,i}_t = D_x U(t,Y_{i,t},m^{N,i}_{Y_t})$ is bounded, independently of $i$, $N$, and $t$, and that $DU^{N,i,j}_t = (1/(N-1))\,D_m U(t,Y_{i,t},m^{N,i}_{Y_t},Y_{j,t})$ is bounded by $C/N$ when $i\neq j$, for $C$ independent of $i$, $j$, $N$, and $t$. Recall finally from Proposition 6.1.3 that $r^{N,i}$ is bounded by $C/N$. Integrating from $t$ to $T$ in the above formula and taking the conditional expectation given $Z$ (with the shortened notation $\mathbb{E}^Z[\cdot]=\mathbb{E}[\cdot\,|\,Z]$), we deduce:

\[
\begin{aligned}
&\mathbb{E}^Z\big[|U^{N,i}_t - V^{N,i}_t|^2\big] + 2\sum_{j=1}^N \mathbb{E}^Z\int_t^T \big|DU^{N,i,j}_s - DV^{N,i,j}_s\big|^2\, ds + 2\beta\,\mathbb{E}^Z\int_t^T \Big|\sum_{j=1}^N\big(DU^{N,i,j}_s - DV^{N,i,j}_s\big)\Big|^2 ds\\
&\quad \leq \mathbb{E}^Z\big[|U^{N,i}_T - V^{N,i}_T|^2\big] + \frac{C}{N}\int_t^T \mathbb{E}^Z\big[|U^{N,i}_s - V^{N,i}_s|\big]\, ds\\
&\qquad + C\int_t^T \mathbb{E}^Z\big[|U^{N,i}_s - V^{N,i}_s|\cdot|DU^{N,i,i}_s - DV^{N,i,i}_s|\big]\, ds\\
&\qquad + \frac{C}{N}\sum_{j\neq i}\int_t^T \mathbb{E}^Z\big[|U^{N,i}_s - V^{N,i}_s|\cdot|DU^{N,j,j}_s - DV^{N,j,j}_s|\big]\, ds.
\end{aligned}
\tag{6.16}
\]

Note that the boundary condition $U^{N,i}_T - V^{N,i}_T$ is 0. By a standard convexity argument, we get

\[
\begin{aligned}
&\mathbb{E}^Z\big[|U^{N,i}_t - V^{N,i}_t|^2\big] + \mathbb{E}^Z\int_t^T \big|DU^{N,i,i}_s - DV^{N,i,i}_s\big|^2\, ds\\
&\quad \leq \frac{C}{N^2} + C\int_t^T \mathbb{E}^Z\big[|U^{N,i}_s - V^{N,i}_s|^2\big]\, ds + \frac{1}{2N}\sum_{j=1}^N \mathbb{E}^Z\int_t^T \big|DU^{N,j,j}_s - DV^{N,j,j}_s\big|^2\, ds.
\end{aligned}
\tag{6.17}
\]

By taking the mean over $i\in\{1,\dots,N\}$, we can get rid of the second term in the left-hand side. By Gronwall's lemma, we obtain (allowing the constant $C$ to increase from line to line):

\[
\sup_{0\leq t\leq T}\Big(\frac{1}{N}\sum_{i=1}^N \mathbb{E}^Z\big[|U^{N,i}_t - V^{N,i}_t|^2\big]\Big) \leq \frac{C}{N^2}.
\tag{6.18}
\]

Plugging (6.18) into (6.17), we deduce that

\[
\frac{1}{N}\sum_{j=1}^N \mathbb{E}^Z\int_0^T \big|DU^{N,j,j}_s - DV^{N,j,j}_s\big|^2\, ds \leq \frac{C}{N^2}.
\]


Inserting this bound into the right-hand side of (6.17) and applying Gronwall's lemma once again, we finally end up with:

\[
\sup_{t\in[0,T]} \mathbb{E}^Z\big[|U^{N,i}_t - V^{N,i}_t|^2\big] + \mathbb{E}^Z\int_0^T \big|DU^{N,i,i}_s - DV^{N,i,i}_s\big|^2\, ds \leq \frac{C}{N^2}.
\tag{6.19}
\]

This proves (6.11).

Second step. We now derive (6.10) and (6.12). We start with (6.12). Noticing that $U^{N,i}_0 - V^{N,i}_0 = u^{N,i}(0,Z) - v^{N,i}(0,Z)$, we deduce from (6.19) that, with probability 1 under $\mathbb{P}$, for all $i\in\{1,\dots,N\}$,

\[
\big|u^{N,i}(0,Z) - v^{N,i}(0,Z)\big| \leq \frac{C}{N},
\]

which is exactly (6.12).

We are now ready to estimate the difference $X_{i,t}-Y_{i,t}$, for $t\in[0,T]$ and $i\in\{1,\dots,N\}$. In view of the equations satisfied by the processes $(X_{i,t})_{t\in[0,T]}$ and $(Y_{i,t})_{t\in[0,T]}$, we have

\[
|X_{i,t} - Y_{i,t}| \leq \int_0^t \Big|D_p H\big(X_{i,s}, D_{x_i} u^{N,i}(s,X_s)\big) - D_p H\big(Y_{i,s}, D_{x_i} v^{N,i}(s,Y_s)\big)\Big|\, ds.
\]

Using the Lipschitz regularity of $D_p H$, the regularity of $U$, and Proposition 6.1.1, we obtain

\[
\begin{aligned}
|X_{i,t} - Y_{i,t}| &\leq C\int_0^t \Big(|X_{i,s} - Y_{i,s}| + \frac1N \sum_{j\neq i} |X_{j,s} - Y_{j,s}|\Big)\, ds\\
&\quad + \int_0^t \Big|D_p H\big(Y_{i,s}, D_{x_i} u^{N,i}(s,Y_s)\big) - D_p H\big(Y_{i,s}, D_{x_i} v^{N,i}(s,Y_s)\big)\Big|\, ds.
\end{aligned}
\tag{6.20}
\]

Taking the supremum over $t\in[0,\tau]$ (for $\tau\in[0,T]$) and the conditional expectation with respect to $Z$, we find

\[
\begin{aligned}
\mathbb{E}^Z\Big[\sup_{t\in[0,\tau]} |X_{i,t} - Y_{i,t}|\Big] &\leq C\int_0^\tau \Big(\mathbb{E}^Z\Big[\sup_{t\in[0,s]} |X_{i,t} - Y_{i,t}|\Big] + \frac1N\sum_{j\neq i} \mathbb{E}^Z\Big[\sup_{t\in[0,s]} |X_{j,t} - Y_{j,t}|\Big]\Big)\, ds\\
&\quad + \mathbb{E}^Z\int_0^T \big|DU^{N,i,i}_s - DV^{N,i,i}_s\big|\, ds.
\end{aligned}
\tag{6.21}
\]


Summing over $i\in\{1,\dots,N\}$ and using (6.19), we derive by Gronwall's inequality:

\[
\sum_{i=1}^N \mathbb{E}^Z\Big[\sup_{t\in[0,T]} |X_{i,t} - Y_{i,t}|\Big] \leq C.
\]

Inserting the above inequality into (6.21) and using Gronwall's lemma once again, we obtain (6.10). $\square$

Remark 6.2.2. The reader may observe that, in addition to the existence of a classical solution $U$ (to the master equation) satisfying the conclusion of Theorem 2.4.5, only the global Lipschitz properties of $H$ and $D_p H$ and the ellipticity of $\sigma$ are used in the proof; see (6.16) and (6.20).

Remark 6.2.3. If $\beta=0$, the conclusion of Theorem 6.2.1 also holds under the assumptions of Theorem 2.4.2 (no second-order differentiability for $F$ and $G$) instead of (6.3) and (6.4). The proof is the same as above because one can still apply Itô's formula to the maps $u^{N,i}$, which are $C^1$ in time and $C^{1,1}$ in space, and the diffusions driving the processes are nondegenerate.

Proof of Theorem 2.4.8. For part (i), let us choose $m_0\equiv 1$ and apply (6.12):

\[
\big|U\big(t_0, Z_i, m^{N,i}_Z\big) - v^{N,i}(t_0,Z)\big| \leq \frac{C}{N},\qquad i\in\{1,\dots,N\},\quad \text{a.e.},
\]

where $Z=(Z_1,\dots,Z_N)$ with $Z_1,\dots,Z_N$ i.i.d.\ random variables with uniform density on $\mathbb{T}^d$. The support of $Z$ being $(\mathbb{T}^d)^N$, we derive from the continuity of $U$ and of the $(v^{N,i})_{i\in\{1,\dots,N\}}$ that the above inequality holds for any $x\in(\mathbb{T}^d)^N$:

\[
\big|U\big(t_0, x_i, m^{N,i}_x\big) - v^{N,i}(t_0,x)\big| \leq \frac{C}{N},\qquad \forall x\in(\mathbb{T}^d)^N,\quad i\in\{1,\dots,N\}.
\]

Then we use the Lipschitz continuity of $U$ with respect to $m$ to replace $U(t_0,x_i,m^{N,i}_x)$ by $U(t_0,x_i,m^N_x)$ in the above inequality, the additional error term being of order $1/N$.

To prove (ii), we use the Lipschitz continuity of $U$ and a result by Fournier and Guillin [42] (see also Dereich, Scheutzow, and Schottstedt [34] when $d\geq 3$ and Ajtai, Komlós, and Tusnády [9] when $d=2$) to deduce that, for any $x_i\in\mathbb{T}^d$,

\[
\begin{aligned}
\int_{\mathbb{T}^{d(N-1)}} \big|u^{N,i}(t,x) - U(t,x_i,m_0)\big| \prod_{j\neq i} m_0(dx_j)
&= \int_{\mathbb{T}^{d(N-1)}} \big|U\big(t,x_i,m^{N,i}_x\big) - U(t,x_i,m_0)\big| \prod_{j\neq i} m_0(dx_j)\\
&\leq C \int_{\mathbb{T}^{d(N-1)}} d_1\big(m^{N,i}_x, m_0\big) \prod_{j\neq i} m_0(dx_j) \;\leq\; C\varepsilon_N,
\end{aligned}
\]


where

\[
\varepsilon_N = \begin{cases} N^{-1/d} & \text{if } d\geq 3,\\ N^{-1/2}\ln(N) & \text{if } d=2,\\ N^{-1/2} & \text{if } d=1. \end{cases}
\]
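The $d=1$ rate $\varepsilon_N=N^{-1/2}$ can be observed numerically. The following sketch (an illustration we add here, not part of the original text) estimates the expected 1-Wasserstein distance between the empirical measure of $N$ uniform samples on $[0,1]$ and the uniform law, via the one-dimensional identity $W_1(m_N,\mathrm{Unif})=\int_0^1 |F_N(t)-t|\,dt$, and checks the decay over a 16-fold increase in $N$ (for which the rate predicts a factor of about 4):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 2001)

def w1_to_uniform(samples):
    """W1 between the empirical measure of `samples` and Unif[0,1], computed
    as the integral of |F_N(t) - t| on a fine grid (d = 1 identity)."""
    F_N = np.searchsorted(np.sort(samples), t, side="right") / len(samples)
    return np.sum(np.abs(F_N - t)) * (t[1] - t[0])

def mean_err(N, reps=50):
    """Monte Carlo estimate of E[W1(m_N, Unif)]."""
    return float(np.mean([w1_to_uniform(rng.uniform(size=N)) for _ in range(reps)]))

e_small, e_big = mean_err(100), mean_err(1600)
# N grows by 16; the N^{-1/2} rate predicts the error shrinks by about 4.
assert e_big < e_small / 2          # loose, robust check of the decay
```

In higher dimensions the same experiment would exhibit the slower $N^{-1/d}$ rate of Fournier and Guillin [42], which is the reason $\varepsilon_N$ depends on $d$.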

Combining Theorem 6.2.1 with the above inequality, we therefore obtain

\[
\begin{aligned}
\big\|w^{N,i}(t_0,\cdot,m_0) - U(t_0,\cdot,m_0)\big\|_{L^1(m_0)}
&= \int_{\mathbb{T}^d} \Big|\int_{\mathbb{T}^{d(N-1)}} v^{N,i}\big(t,(x_j)\big) \prod_{j\neq i} m_0(dx_j) - U(t,x_i,m_0)\Big|\, dm_0(x_i)\\
&\leq \mathbb{E}\big[|v^{N,i}(t,Z) - u^{N,i}(t,Z)|\big] + \int_{\mathbb{T}^{dN}} \big|u^{N,i}(t,x) - U(t,x_i,m_0)\big| \prod_{j=1}^N m_0(dx_j)\\
&\leq CN^{-1} + C\varepsilon_N \leq C\varepsilon_N.
\end{aligned}
\]

This shows part (ii) of the theorem. $\square$

6.3 PROPAGATION OF CHAOS

We now prove Theorem 2.4.9. Let us recall the notation. Throughout this part, $(v^{N,i})_{i\in\{1,\dots,N\}}$ is the solution of the Nash system (6.1) and the processes $((Y_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ are "optimal trajectories" for this system, i.e., they solve (6.9) with $Y_{i,t_0}=Z_i$ as initial condition at time $t_0$. Our aim is to understand the behavior of the $((Y_{i,t})_{t\in[t_0,T]})_{i\in\{1,\dots,N\}}$ for a large number of players $N$.

For any $i\in\{1,\dots,N\}$, let $(\tilde X_{i,t})_{t\in[t_0,T]}$ be the solution of the SDE of McKean–Vlasov type:

\[
\begin{cases}
d\tilde X_{i,t} = -D_p H\big(\tilde X_{i,t}, D_x U\big(t,\tilde X_{i,t}, \mathcal{L}(\tilde X_{i,t}\,|\,W)\big)\big)\, dt + \sqrt{2}\, dB^i_t + \sqrt{2\beta}\, dW_t,\\
\tilde X_{i,t_0} = Z_i.
\end{cases}
\]

Recall that, for any $i\in\{1,\dots,N\}$, the conditional law $\mathcal{L}(\tilde X_{i,t}\,|\,W)$ is equal to $m_t$, where $(u_t,m_t)_{t\in[0,T]}$ is the solution of the mean field game (MFG) system with common noise given by (5.1)–(5.2) (see Section 5.4.3). Solvability of the McKean–Vlasov equation may be discussed on the model of (5.21). Our aim is to show that

\[
\mathbb{E}\Big[\sup_{t\in[t_0,T]} \big|Y_{i,t} - \tilde X_{i,t}\big|\Big] \leq C\varepsilon_N,
\]


for some $C>0$, where, as before,

\[
\varepsilon_N = \begin{cases} N^{-1/d} & \text{if } d\geq 3,\\ N^{-1/2}\ln(N) & \text{if } d=2,\\ N^{-1/2} & \text{if } d=1. \end{cases}
\]

Proof of Theorem 2.4.9. The proof is a direct application of Theorem 6.2.1 combined with the following estimate on the distance between $(\tilde X_{i,t})_{t\in[t_0,T]}$ and the solution $(X_{i,t})_{t\in[t_0,T]}$ of (6.8):

\[
\mathbb{E}\Big[\sup_{t\in[t_0,T]} \big|X_{i,t} - \tilde X_{i,t}\big|\Big] \leq C\varepsilon_N.
\tag{6.22}
\]

Indeed, by the triangle inequality, we have, provided that (6.22) holds true:

\[
\mathbb{E}\Big[\sup_{t\in[t_0,T]} \big|Y_{i,t} - \tilde X_{i,t}\big|\Big] \leq \mathbb{E}\Big[\sup_{t\in[t_0,T]} \big|Y_{i,t} - X_{i,t}\big|\Big] + \mathbb{E}\Big[\sup_{t\in[t_0,T]} \big|X_{i,t} - \tilde X_{i,t}\big|\Big] \leq C\big(N^{-1} + \varepsilon_N\big),
\]

where we used (6.10) to pass from the first to the second line.

It now remains to check (6.22). For this, we fix $i\in\{1,\dots,N\}$ and let

\[
\rho(t) = \mathbb{E}\Big[\sup_{s\in[t_0,t]} \big|X_{i,s} - \tilde X_{i,s}\big|\Big].
\]

Then, for any $s\in[t_0,t]$, we have

\[
\begin{aligned}
\big|X_{i,s} - \tilde X_{i,s}\big| &\leq \int_{t_0}^s \Big|D_p H\big(X_{i,r}, D_{x_i} u^{N,i}(r,X_r)\big) - D_p H\big(\tilde X_{i,r}, D_x U(r,\tilde X_{i,r},m_r)\big)\Big|\, dr\\
&\leq \int_{t_0}^s \Big|D_p H\big(X_{i,r}, D_x U\big(r,X_{i,r},m^{N,i}_{X_r}\big)\big) - D_p H\big(\tilde X_{i,r}, D_x U\big(r,\tilde X_{i,r},m^{N,i}_{\tilde X_r}\big)\big)\Big|\, dr\\
&\quad + \int_{t_0}^s \Big|D_p H\big(\tilde X_{i,r}, D_x U\big(r,\tilde X_{i,r},m^{N,i}_{\tilde X_r}\big)\big) - D_p H\big(\tilde X_{i,r}, D_x U(r,\tilde X_{i,r},m_r)\big)\Big|\, dr.
\end{aligned}
\]

As $(x,m)\mapsto D_x U(t,x,m)$ and $(x,z)\mapsto D_p H(x,z)$ are uniformly Lipschitz continuous, we get

\[
\big|X_{i,s} - \tilde X_{i,s}\big| \leq C\int_{t_0}^s \Big(|X_{i,r} - \tilde X_{i,r}| + d_1\big(m^{N,i}_{X_r}, m^{N,i}_{\tilde X_r}\big) + d_1\big(m^{N,i}_{\tilde X_r}, m_r\big)\Big)\, dr,
\]


where

\[
d_1\big(m^{N,i}_{\tilde X_s}, m^{N,i}_{X_s}\big) \leq \frac{1}{N-1}\sum_{j\neq i} \big|X_{j,s} - \tilde X_{j,s}\big|.
\tag{6.23}
\]

Hence,

\[
\big|X_{i,s} - \tilde X_{i,s}\big| \leq C\int_{t_0}^s \Big(|X_{i,r} - \tilde X_{i,r}| + \frac{1}{N-1}\sum_{j\neq i} |X_{j,r} - \tilde X_{j,r}| + d_1\big(m^{N,i}_{\tilde X_r}, m_r\big)\Big)\, dr.
\]

Taking the supremum over $s\in[t_0,t]$ and then the expectation, we have, recalling that the random variables $(X_{j,r}-\tilde X_{j,r})_{j\in\{1,\dots,N\}}$ have the same law:

\[
\begin{aligned}
\rho(t) &= \mathbb{E}\Big[\sup_{s\in[t_0,t]} \big|X_{i,s} - \tilde X_{i,s}\big|\Big]\\
&\leq C\int_{t_0}^t \Big(\mathbb{E}\big[|X_{i,s} - \tilde X_{i,s}|\big] + \frac{1}{N-1}\sum_{j\neq i} \mathbb{E}\big[|X_{j,s} - \tilde X_{j,s}|\big]\Big)\, ds + C\int_{t_0}^t \mathbb{E}\Big[d_1\big(m^{N,i}_{\tilde X_s}, m_s\big)\Big]\, ds\\
&\leq C\int_{t_0}^t \rho(s)\, ds + C\varepsilon_N,
\end{aligned}
\tag{6.24}
\]

where we used, as in the proof of Theorem 2.4.8, the result by Fournier and Guillin [42]. Namely, since the variables $(\tilde X_{i,s})_{i=1,\dots,N}$ are, conditional on $W$, i.i.d.\ random variables with law $m_s$, we have

\[
\mathbb{E}\Big[d_1\big(m^{N,i}_{\tilde X_s}, m_s\big)\,\Big|\,(W_r)_{r\in[t_0,T]}\Big] \leq c\, \mathbb{E}\Big[\big|\tilde X_{1,s}\big|^4\,\Big|\,(W_r)_{r\in[t_0,T]}\Big]^{1/4}\varepsilon_N,
\]

for a constant $c$ only depending on the dimension $d$. Taking the expectation, we easily derive the last line in (6.24). By Gronwall's inequality, the proof is easily completed. $\square$


Appendix

We now provide several basic results on the notion of differentiability on the space of probability measures used in the book, including a short comparison with the derivative on the set of random variables.

A.1 LINK WITH THE DERIVATIVE ON THE SPACE OF RANDOM VARIABLES

As a first step, we discuss the connection between the derivative $\delta U/\delta m$ in Definition 2.2.1 and the derivative introduced by Lions in [76] and used (among others) in [24, 31]. The notion introduced in [76] consists in lifting up functionals defined on the space of probability measures into functionals defined on the set of random variables. When the underlying probability measures are defined on a (finite-dimensional) vector space $E$ (so that the random variables that are distributed along these probability measures also take values in $E$), this permits us to take advantage of the standard differential calculus on the Hilbert space formed by the square-integrable random variables with values in $E$. Here the setting is slightly different, as the probability measures considered throughout the book are defined on the torus. Some care is thus needed in the definition of the linear structure underpinning the argument.

Throughout the Appendix, we call $d_{\mathbb{T}^d}$ the distance on the $d$-dimensional torus, defined as follows:

\[
d_{\mathbb{T}^d}(x,y) = \inf_{c\in\mathbb{Z}^d} |\hat x - \hat y + c|, \qquad x,y\in\mathbb{T}^d,
\]

where $\hat x$ and $\hat y$ are the unique representatives of $x$ and $y$ in $[0,1)^d$. For clarity, we call $\phi$ the mapping $\mathbb{T}^d\ni x\mapsto \phi(x):=\hat x$. We observe that, for any coordinate $i\in\{1,\dots,d\}$, the function $\phi_i$ is lower semicontinuous and thus measurable. Above, $|\cdot|$ is the Euclidean norm on $\mathbb{R}^d$.

A.1.1 First-Order Expansion with Respect to Torus-Valued Random Variables

On the torus $\mathbb{T}^d$, we may consider the group of translations $(\tau_y)_{y\in\mathbb{R}^d}$, parameterized by elements $y$ of $\mathbb{R}^d$. For any $y\in\mathbb{R}^d$, $\tau_y$ maps $\mathbb{T}^d$ into itself. The mapping


$\mathbb{R}^d\ni y\mapsto \tau_y(0)$ being obviously continuous allows us to define, for any square-integrable random variable $\tilde X\in L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$ (where $(\Omega,\mathcal{A},\mathbb{P})$ is an atomless probability space), the random variable $\tau_{\tilde X}(0)$, which takes values in $\mathbb{T}^d$. Given a mapping $U:\mathcal{P}(\mathbb{T}^d)\to\mathbb{R}$, we may define its lifted version as

\[
\tilde U: L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)\ni \tilde X\mapsto \tilde U(\tilde X) = U\big(\mathcal{L}(\tau_{\tilde X}(0))\big),
\tag{A.1}
\]

where the argument on the right-hand side denotes the law of $\tau_{\tilde X}(0)$ (seen as a $\mathbb{T}^d$-valued random variable). Quite obviously, $\mathcal{L}(\tau_{\tilde X}(0))$ only depends on the law of $\tilde X$.

Assume now that the mapping $\tilde U$ is continuously Fréchet differentiable on $L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$. What [76] says is that, for any $\tilde X\in L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$, the Fréchet derivative has the form

\[
D\tilde U(\tilde X) = \partial_\mu U\big(\mathcal{L}(\tilde X)\big)(\tilde X),\qquad \mathbb{P}\text{-almost surely},
\tag{A.2}
\]

for a mapping $\{\mathbb{R}^d\ni y\mapsto \partial_\mu U(\mathcal{L}(\tilde X))(y)\in\mathbb{R}^d\}\in L^2(\mathbb{R}^d,\mathcal{L}(\tilde X);\mathbb{R}^d)$. This relationship is fundamental. Another key observation is that, for any random variables $\tilde X$ and $\tilde Y$ with values in $\mathbb{R}^d$ and $\tilde\xi$ with values in $\mathbb{Z}^d$, it holds that

\[
\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big(\tilde U\big(\tilde X+\tilde\xi+\varepsilon\tilde Y\big)-\tilde U\big(\tilde X\big)\Big) = \mathbb{E}\big[D\tilde U\big(\tilde X+\tilde\xi\big)\cdot\tilde Y\big],
\]

which, by the simple fact that $\tau_{\tilde X+\tilde\xi}(0)=\tau_{\tilde X}(0)$, is also equal to

\[
\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big(\tilde U\big(\tilde X+\varepsilon\tilde Y\big)-\tilde U\big(\tilde X\big)\Big) = \mathbb{E}\big[D\tilde U\big(\tilde X\big)\cdot\tilde Y\big],
\]

proving that

\[
D\tilde U\big(\tilde X\big) = D\tilde U\big(\tilde X+\tilde\xi\big).
\tag{A.3}
\]

Consider now a random variable $X$ from $\Omega$ with values in $\mathbb{T}^d$. With $X$, we may associate the random variable $\hat X=\phi(X)$, with values in $[0,1)^d$, given (pointwise) as the only representative of $X$ in $[0,1)^d$. We observe that the law of $\hat X$ is uniquely determined by the law of $X$. Moreover, for any Borel function $h:\mathbb{T}^d\to\mathbb{R}$,

\[
\mathbb{E}\big[h(X)\big] = \mathbb{E}\big[\hat h(\hat X)\big],
\]

where $\hat h$ is the identification of $h$ as a function from $[0,1)^d$ to $\mathbb{R}$, namely $\hat h(x)=h(\tau_x(0))$ for $x\in[0,1)^d$. Then, we deduce from (A.2) that

\[
D\tilde U(\hat X) = \partial_\mu U\big(\mathcal{L}(\hat X)\big)(\hat X),\qquad \mathbb{P}\text{-almost surely}.
\]


Moreover, from (A.3), we also have, for any random variable $\tilde\xi$ with values in $\mathbb{Z}^d$,

\[
D\tilde U(\hat X+\tilde\xi) = \partial_\mu U\big(\mathcal{L}(\hat X)\big)(\hat X),\qquad \mathbb{P}\text{-almost surely}.
\]

Since $\partial_\mu U(\mathcal{L}(\hat X))(\cdot)$ is in $L^2(\mathbb{R}^d,\mathcal{L}(\hat X);\mathbb{R}^d)$ and $\hat X$ takes values in $[0,1)^d$, we can identify $\partial_\mu U(\mathcal{L}(\hat X))(\cdot)$ with a function in $L^2(\mathbb{T}^d,\mathcal{L}(X);\mathbb{R}^d)$. Without any ambiguity, we may denote this function (up to a choice of a version) by

\[
\mathbb{T}^d\ni y\mapsto \partial_\mu U\big(\mathcal{L}(X)\big)(y) := \partial_\mu U\big(\mathcal{L}(\phi(X))\big)\big(\phi(y)\big).
\]

As an application, we have that, for any random variables $X$ and $Y$ with values in $\mathbb{T}^d$,

\[
U\big(\mathcal{L}(Y)\big) - U\big(\mathcal{L}(X)\big) = U\big(\mathcal{L}(\tau_{\hat Y}(0))\big) - U\big(\mathcal{L}(\tau_{\hat X}(0))\big) = \tilde U\big(\hat Y\big) - \tilde U\big(\hat X\big)
= \mathbb{E}\Big[\int_0^1 D\tilde U\big(\lambda\hat Y + (1-\lambda)\hat X\big)\cdot\big(\hat Y - \hat X\big)\, d\lambda\Big].
\]

Now, we can write

\[
\lambda\hat Y + (1-\lambda)\hat X = \hat X + \lambda(\hat Y-\hat X) = \hat Z,\qquad \text{with } Z = \tau_{\lambda(\hat Y-\hat X)}(X).
\]

Noticing that $Z$ is a random variable with values in $\mathbb{T}^d$, we deduce that

\[
U\big(\mathcal{L}(Y)\big) - U\big(\mathcal{L}(X)\big) = \mathbb{E}\Big[\int_0^1 \partial_\mu U\big(\mathcal{L}(\tau_{\lambda(\hat Y-\hat X)}(X))\big)\big(\tau_{\lambda(\hat Y-\hat X)}(X)\big)\cdot\big(\hat Y-\hat X\big)\, d\lambda\Big].
\]

Similarly, for any random variable $\tilde\xi$ with values in $\mathbb{Z}^d$,

\[
U\big(\mathcal{L}(Y)\big) - U\big(\mathcal{L}(X)\big) = \tilde U\big(\hat Y+\tilde\xi\big) - \tilde U\big(\hat X\big)
= \mathbb{E}\Big[\int_0^1 D\tilde U\big(\hat X+\lambda(\hat Y+\tilde\xi-\hat X)\big)\cdot\big(\hat Y+\tilde\xi-\hat X\big)\, d\lambda\Big].
\]

Now, $\hat X+\lambda(\hat Y+\tilde\xi-\hat X)$ writes $\hat Z+\tilde\zeta$, where $\tilde\zeta$ is a random variable with values in $\mathbb{Z}^d$ and $\hat Z$ is associated with the $\mathbb{T}^d$-valued random variable $Z=\tau_{\lambda(\hat Y+\tilde\xi-\hat X)}(X)$, so that

\[
\begin{aligned}
U\big(\mathcal{L}(Y)\big) - U\big(\mathcal{L}(X)\big) &= \mathbb{E}\Big[\int_0^1 D\tilde U\big(\hat Z\big)\cdot\big(\hat Y+\tilde\xi-\hat X\big)\, d\lambda\Big]\\
&= \mathbb{E}\Big[\int_0^1 \partial_\mu U\big(\mathcal{L}(\tau_{\lambda(\hat Y+\tilde\xi-\hat X)}(X))\big)\big(\tau_{\lambda(\hat Y+\tilde\xi-\hat X)}(X)\big)\cdot\big(\hat Y+\tilde\xi-\hat X\big)\, d\lambda\Big].
\end{aligned}
\tag{A.4}
\]


The fact that $\tilde\xi$ can be chosen in a completely arbitrary way says that the choice of the representatives of $X$ and $Y$ in (A.4) does not matter. Of course, this is a consequence of the periodic structure underpinning the whole analysis. Precisely, for any representatives $\bar X$ and $\bar Y$ (with values in $\mathbb{R}^d$) of $X$ and $Y$, we can write

\[
U\big(\mathcal{L}(Y)\big) - U\big(\mathcal{L}(X)\big) = \mathbb{E}\Big[\int_0^1 \partial_\mu U\big(\mathcal{L}(\tau_{\lambda(\bar Y-\bar X)}(X))\big)\big(\tau_{\lambda(\bar Y-\bar X)}(X)\big)\cdot\big(\bar Y-\bar X\big)\, d\lambda\Big].
\tag{A.5}
\]

Formula (A.5) gives a rule for expanding, along torus-valued random variables, functionals depending on torus-supported probability measures. It is the analogue of the differentiation rule defined in [76] on the space of probability measures on $\mathbb{R}^d$ through the differential calculus in $L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$.

In particular, if $\tilde U$ is continuously differentiable, with (say) $D\tilde U$ being Lipschitz continuous on $L^2(\Omega,\mathcal{A},\mathbb{P};\mathbb{R}^d)$, then (with the same notation as in (A.4))

\[
\mathbb{E}\big[|D\tilde U(\hat Y) - D\tilde U(\hat X)|^2\big] = \mathbb{E}\big[|D\tilde U(\hat Y+\tilde\xi) - D\tilde U(\hat X)|^2\big] \leq C\,\mathbb{E}\big[|\hat Y+\tilde\xi-\hat X|^2\big].
\tag{A.6}
\]

Now, for two random variables X and Y with values in the torus, one may find ˜ with values in Zd , such that, pointwise, a random variable ξ, ˆ ξ˜ = argminc∈Zd |τc (Yˆ ) − X|, the right-hand side being the distance dTd (X, Y ) between X and Y on the torus. ˆ = dTd (X, Y ). Plugged Stated differently, we may choose ξ˜ such that |Yˆ + ξ˜ − X| ˜ (on L2 (Ω, A, P; Rd )) into (A.6), this shows that the Lipschitz property of DU reads as a Lipschitz property with respect to torus-valued random variables. Namely,     ˜ (Yˆ ) − DU ˜ (X)| ˆ 2  CE dTd (X, Y ) 2 , E |DU which may be rewritten as 



2      2  E ∂μ U L(Y ) (Y ) − ∂μ U L(X) (X)  CE dTd (X, Y ) . Taking the infimum over all the pairs of Td -valued random variables (X, Y ) with prescribed marginal laws, we also have W2 ∂μ U (m)(·) m, ∂μ U (m )(·) m  C 1/2 d2 (m, m ), −1 0 1

for all m, m ∈ P(Td ). Here as well as throughout, for any p  1, Wp is the standard p-Wasserstein distance on the space Pp (Rd ) of probability measures

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

APPENDIX

6.125x9.25

179

on Rd with a finite p-order moment and dp is the p-Wasserstein distance on the space P(Td ), whose definition is as follows:





dp (m, m ) = inf

[Td ]2

dpTd (x, y)dπ(x, y)

1/p ,
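For intuition (our illustration, not from the text): when $m$ and $m'$ are empirical measures with the same number of equally weighted atoms, an optimal coupling can be chosen among permutations of the atoms, so $d_p$ is computable exactly by brute force for small $N$. All names below are ours.

```python
import itertools

def d_torus(x, y):
    # distance on the one-dimensional torus R/Z: minimum over integer shifts
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_p(xs, ys, p=1):
    # p-Wasserstein distance between two equal-weight empirical measures on
    # R/Z; with uniform weights an optimal coupling may be chosen among
    # permutations, so exhaustive search is exact for small N
    n = len(xs)
    best = min(
        sum(d_torus(xs[i], ys[sigma[i]]) ** p for i in range(n)) / n
        for sigma in itertools.permutations(range(n))
    )
    return best ** (1.0 / p)

xs = [0.05, 0.50, 0.95]
ys = [0.10, 0.55, 0.90]
print(d_p(xs, ys, p=1))   # 0.05: each atom moves by 0.05
```

Note the wrap-around built into the metric: `d_torus(0.95, 0.10)` is 0.15, not 0.85.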

the infimum being taken over all the probability measures $\pi$ on $[\mathbb{T}^d]^2$ with $m$ and $m'$ as marginal laws. In this regard, we recall that the Kantorovich–Rubinstein duality formula (see, for instance, Chapter 6 in [95]) asserts that, when $p=1$, the above definition of $d_1$ is consistent with the definition we gave in Section 2.1, namely
$$
d_1(m,m') = \sup_{\phi} \int_{\mathbb{T}^d} \phi(y)\,d(m-m')(y),
$$
the supremum being taken over all Lipschitz-continuous functions $\phi : \mathbb{T}^d \to \mathbb{R}$ with a Lipschitz constant less than 1.

A.1.2 From Differentiability Along Random Variables to Differentiability in m

We now address the connection between the mapping $\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d \ni (m,y) \mapsto \partial_\mu U(m)(y) \in \mathbb{R}^d$ and the derivative $\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d \ni (m,y) \mapsto [\delta U/\delta m](m,y)$ described in Definition 2.2.1.

Proposition A.1.1. Assume that the function $U$ is differentiable in the sense explained in Subsection A.1.1 and thus satisfies the expansion formula (A.5). Assume moreover that there exists a continuous version of the mapping $\partial_\mu U : \mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d \ni (m,y) \mapsto \partial_\mu U(m)(y) \in \mathbb{R}^d$. Then, $U$ is differentiable in the sense of Definition 2.2.1. Moreover, $\delta U/\delta m$ is continuously differentiable with respect to the second variable and
$$
D_m U(m,y) = \partial_\mu U(m)(y), \qquad m \in \mathcal{P}(\mathbb{T}^d),\ y \in \mathbb{T}^d.
$$
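Before the proof, here is a finite-difference illustration of the identification $D_m U = \partial_\mu U$ on empirical measures (ours; the quadratic functional below is hypothetical): writing $u^N(x_1,\dots,x_N) = U(m^N_X)$, one expects $\partial_{x_i} u^N = \tfrac1N\,\partial_\mu U(m^N_X)(x_i)$.

```python
import math

# Hypothetical smooth functional U(m) = (∫ sin(2πy) dm(y))², with flat
# derivative δU/δm(m, y) = 2(∫ sin dm)·sin(2πy), so that
# ∂μU(m)(y) = D_y[δU/δm](m, y) = 4π(∫ sin dm)·cos(2πy).
phi = lambda y: math.sin(2 * math.pi * y)

def U_emp(xs):
    return (sum(phi(x) for x in xs) / len(xs)) ** 2

def dmu_U(xs, y):
    mean = sum(phi(x) for x in xs) / len(xs)
    return 4 * math.pi * mean * math.cos(2 * math.pi * y)

xs = [0.1, 0.2, 0.3, 0.7]
N, i, eps = len(xs), 2, 1e-6

# moving a single atom x_i: d/dx_i U(m^N_X) = (1/N) ∂μU(m^N_X)(x_i)
bumped = list(xs); bumped[i] += eps
fd = (U_emp(bumped) - U_emp(xs)) / eps
print(abs(fd - dmu_U(xs, xs[i]) / N) < 1e-4)   # True
```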

Proof. First step. The first step is to prove that, for any $m \in \mathcal{P}(\mathbb{T}^d)$, there exists a continuously differentiable map $V(m,\cdot) : \mathbb{T}^d \ni y \mapsto V(m,y) \in \mathbb{R}$ such that $\partial_\mu U(m)(y) = D_y V(m,y)$, $y \in \mathbb{T}^d$. The strategy is to prove that $\partial_\mu U(m) : \mathbb{T}^d \ni y \mapsto \partial_\mu U(m)(y)$ is orthogonal (in $L^2(\mathbb{T}^d,dy;\mathbb{R}^d)$) to divergence-free vector fields. It suffices to prove that, for any smooth divergence-free vector field $b : \mathbb{T}^d \to \mathbb{R}^d$,
$$
\int_{\mathbb{T}^d} \partial_\mu U(m)(y)\cdot b(y)\,dy = 0.
$$


Since $\partial_\mu U$ is jointly continuous in $(m,y)$, it is enough to prove the above identity for any $m$ with a positive smooth density. When $m$ is not smooth, we may indeed approximate it by $m\star\rho$, where $\star$ denotes the convolution and $\rho$ a smooth kernel on $\mathbb{R}^d$ with full support. With such an $m$ and such a $b$, we consider the ordinary differential equation (set on $\mathbb{R}^d$ but driven by periodic coefficients)
$$
dX_t = \frac{b(X_t)}{m(X_t)}\,dt, \qquad t \geqslant 0,
$$
the initial condition $X_0$ being $[0,1)^d$-valued and distributed according to some $m \in \mathcal{P}(\mathbb{T}^d)$ (identifying $m$ with the corresponding probability measure on $[0,1)^d$). By periodicity of $b$ and $m$, $(X_t)_{t\geqslant0}$ generates on $\mathbb{T}^d$ a flow of probability measures $(m_t)_{t\geqslant0}$ satisfying the Fokker–Planck equation
$$
\partial_t m_t = -\mathrm{div}\Bigl(\frac{b}{m}\,m_t\Bigr), \qquad t \geqslant 0, \qquad m_0 = m.
$$
Since $b$ is divergence free, we get that $m_t = m$ for all $t \geqslant 0$. Indeed, $m$ is an obvious solution of the equation and it is the unique one within the class of continuous weak solutions from $[0,\infty)$ into $\mathcal{P}(\mathbb{T}^d)$, which shows in particular that $m$ is an invariant measure of $(X_t)_{t\geqslant0}$. Then, for all $t \geqslant 0$,
$$
U(m_t) - U(m_0) = 0,
$$
so that, with the same notation as in (A.1), $\lim_{t\downarrow0}[(\tilde U(X_t)-\tilde U(X_0))/t] = 0$. Now, choosing $\bar Y = X_t$ and $\bar X = X_0$ in (A.5), dividing by $t$ and letting $t \downarrow 0$, we get
$$
\int_{\mathbb{T}^d} \partial_\mu U(m)(y)\cdot b(y)\,dy = 0.
$$
We easily deduce that $\partial_\mu U(m)$ reads as a gradient, that is, $\partial_\mu U(m)(y) = D_y V(m,y)$. It is given as a solution of the Poisson equation
$$
\Delta_y V(m,y) = \mathrm{div}_y\bigl[\partial_\mu U(m)(y)\bigr].
$$
Of course, $V(m,\cdot)$ is uniquely defined up to an additive constant. We can choose it in such a way that
$$
\int_{\mathbb{T}^d} V(m,y)\,dm(y) = 0.
$$


Using the representation of the solution of the Poisson equation by means of the Poisson kernel, we easily deduce that the function $V$ is jointly continuous. Indeed, under the above centering condition,
$$
V(m,y) = \int_0^{+\infty} \mathbb{E}\bigl[\mathrm{div}_y\,\partial_\mu U(m)(y+B_t)\bigr]\,dt,
$$
where $(B_t)_{t\geqslant0}$ is a $d$-dimensional Brownian motion. Recalling from a standard spectral gap argument that, for some $\rho > 0$,
$$
\sup_{y\in\mathbb{T}^d}\bigl|\mathbb{E}\bigl[\mathrm{div}_y\,\partial_\mu U(m)(y+B_t)\bigr]\bigr| \leqslant e^{-\rho t}\sup_{y\in\mathbb{T}^d}\bigl|\mathrm{div}_y\,\partial_\mu U(m)(y)\bigr|, \qquad t \geqslant 0,
$$
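This exponential decay is explicit in a toy case (our illustration, not part of the argument): on the one-dimensional torus, for the zero-mean function $f(y)=\cos(2\pi y)$ one has $\mathbb{E}[f(y+B_t)]=\cos(2\pi y)\,e^{-2\pi^2 t}$, i.e., the bound holds with rate $\rho=2\pi^2$. A seeded Monte Carlo check:

```python
import math, random

# E[cos(2π(y + B_t))] = cos(2πy) · exp(-2π² t) for a standard Brownian
# motion B: the exact exponential decay behind the spectral gap bound.
random.seed(1)
t, y, n = 0.05, 0.3, 200_000
mc = sum(
    math.cos(2 * math.pi * (y + random.gauss(0.0, math.sqrt(t))))
    for _ in range(n)
) / n
exact = math.cos(2 * math.pi * y) * math.exp(-2 * math.pi ** 2 * t)
print(abs(mc - exact) < 0.01)   # True: Monte Carlo matches the closed form
```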

continuity easily follows.

Second step. The second step of the proof is to check that Definition 2.2.1 holds true. Let us consider two measures of the form $m^N_X$ and $m^N_Y$, where $N \in \mathbb{N}^*$, $X = (x_1,\dots,x_N) \in (\mathbb{T}^d)^N$ is such that $x_i \neq x_j$ for $i \neq j$, and $Y = (y_1,\dots,y_N) \in (\mathbb{T}^d)^N$. Without loss of generality we assume that the indices for $Y$ are such that
$$
d_1(m^N_X,m^N_Y) = \frac1N\sum_{i=1}^N d_{\mathbb{T}^d}(x_i,y_i) = \frac1N\sum_{i=1}^N |\bar x_i - \bar y_i|, \tag{A.7}
$$
where $\bar x_1,\dots,\bar x_N$ and $\bar y_1,\dots,\bar y_N$ are well-chosen representatives, in $\mathbb{R}^d$, of the points $x_1,\dots,x_N$ and $y_1,\dots,y_N$ in $\mathbb{T}^d$ ($d_{\mathbb{T}^d}$ denoting the distance on the torus). Let $\bar X$ be a random variable such that $\mathbb{P}(\bar X = \bar x_i) = 1/N$ and $\bar Y$ be the random variable defined by $\bar Y = \bar y_i$ if $\bar X = \bar x_i$. Then, with the same notations as in (A.1), $\mathcal{L}(\tau_{\bar X}(0)) = m^N_X$ and $\mathcal{L}(\tau_{\bar Y}(0)) = m^N_Y$. Thanks to (A.5), we get
$$
U(m^N_Y) - U(m^N_X)
= \int_0^1 \mathbb{E}\Bigl[\partial_\mu U\bigl(\mathcal{L}\bigl(\tau_{\lambda\bar Y+(1-\lambda)\bar X}(0)\bigr)\bigr)\bigl(\tau_{\lambda\bar Y+(1-\lambda)\bar X}(0)\bigr)\cdot(\bar Y-\bar X)\Bigr]\,d\lambda.
$$
So, if $w$ is a modulus of continuity of the map $\partial_\mu U$ on the compact set $\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d$, we obtain by (A.7):
$$
\Bigl| U(m^N_Y) - U(m^N_X) - \int_0^1 \mathbb{E}\Bigl[\partial_\mu U\bigl(m^N_X\bigr)\bigl(\tau_{\lambda\bar Y+(1-\lambda)\bar X}(0)\bigr)\cdot(\bar Y-\bar X)\Bigr]\,d\lambda \Bigr|
\leqslant \mathbb{E}\bigl[|\bar Y-\bar X|\bigr]\,w\bigl(d_1(m^N_X,m^N_Y)\bigr)
= d_1(m^N_X,m^N_Y)\,w\bigl(d_1(m^N_X,m^N_Y)\bigr). \tag{A.8}
$$


Moreover, since $D_y V(m,y) = \partial_\mu U(m)(y)$, we have
$$
\int_0^1 \mathbb{E}\Bigl[\partial_\mu U\bigl(m^N_X\bigr)\bigl(\tau_{\lambda\bar Y+(1-\lambda)\bar X}(0)\bigr)\cdot(\bar Y-\bar X)\Bigr]\,d\lambda
= \frac1N\sum_{i=1}^N \int_0^1 D_y V\bigl(m^N_X,\ \tau_{\lambda\bar y_i+(1-\lambda)\bar x_i}(0)\bigr)\cdot(\bar y_i-\bar x_i)\,d\lambda
$$
$$
= \frac1N\sum_{i=1}^N \int_0^1 D_y V\bigl(m^N_X,\ \lambda\bar y_i+(1-\lambda)\bar x_i\bigr)\cdot(\bar y_i-\bar x_i)\,d\lambda,
$$
where we saw $D_y V(m^N_X,\cdot)$ as a periodic function defined on the whole $\mathbb{R}^d$. Then,
$$
\int_0^1 \mathbb{E}\Bigl[\partial_\mu U\bigl(m^N_X\bigr)\bigl(\tau_{\lambda\bar Y+(1-\lambda)\bar X}(0)\bigr)\cdot(\bar Y-\bar X)\Bigr]\,d\lambda
= \int_{\mathbb{T}^d} V(m^N_X,x)\,d\bigl(m^N_Y-m^N_X\bigr)(x).
$$
By density of the measures of the form $m^N_X$ and $m^N_Y$ and by continuity of $V$, we deduce from (A.8) that, for any measures $m, m' \in \mathcal{P}(\mathbb{T}^d)$,
$$
\Bigl| U(m') - U(m) - \int_{\mathbb{T}^d} V(m,x)\,d(m'-m)(x) \Bigr| \leqslant d_1(m,m')\,w\bigl(d_1(m,m')\bigr),
$$

which shows that $U$ is $C^1$ in the sense of Definition 2.2.1 with $\delta U/\delta m = V$. □

A.1.3 From Differentiability in m to Differentiability Along Random Variables

We now discuss the converse to Proposition A.1.1.

Proposition A.1.2. Assume that $U$ satisfies the assumption of Definition 2.2.2. Then, $U$ satisfies the differentiability property (A.5). Moreover, it holds that $D_m U(m,y) = \partial_\mu U(m)(y)$, for $m \in \mathcal{P}(\mathbb{T}^d)$ and $y \in \mathbb{T}^d$.

Proof. We are given two random variables $X$ and $Y$ with values in the torus $\mathbb{T}^d$. By Definition 2.2.1,
$$
U\bigl(\mathcal{L}(Y)\bigr) - U\bigl(\mathcal{L}(X)\bigr)
= \int_0^1 \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}\bigl(\lambda\mathcal{L}(Y)+(1-\lambda)\mathcal{L}(X),\ y\bigr)\,d\bigl(\mathcal{L}(Y)-\mathcal{L}(X)\bigr)(y)\,d\lambda
$$
$$
= \int_0^1 \mathbb{E}\Bigl[\frac{\delta U}{\delta m}\bigl(\lambda\mathcal{L}(Y)+(1-\lambda)\mathcal{L}(X),\ Y\bigr) - \frac{\delta U}{\delta m}\bigl(\lambda\mathcal{L}(Y)+(1-\lambda)\mathcal{L}(X),\ X\bigr)\Bigr]\,d\lambda
$$
$$
= \int_0^1\!\!\int_0^1 \mathbb{E}\Bigl[ D_y\frac{\delta U}{\delta m}\bigl(\lambda\mathcal{L}(Y)+(1-\lambda)\mathcal{L}(X),\ \lambda'\bar Y+(1-\lambda')\bar X\bigr)\cdot(\bar Y-\bar X)\Bigr]\,d\lambda\,d\lambda',
$$
where $\bar X$ and $\bar Y$ are $\mathbb{R}^d$-valued random variables that represent the $\mathbb{T}^d$-valued random variables $X$ and $Y$, while $D_y[\delta U/\delta m](m,\cdot)$ is seen as a periodic function from $\mathbb{R}^d$ into $\mathbb{R}^d$. By uniform continuity of $D_m U = D_y[\delta U/\delta m]$ on the compact set $\mathcal{P}(\mathbb{T}^d)\times\mathbb{T}^d$, we deduce that
$$
U\bigl(\mathcal{L}(Y)\bigr) - U\bigl(\mathcal{L}(X)\bigr)
= \mathbb{E}\Bigl[ D_y\frac{\delta U}{\delta m}\bigl(\mathcal{L}(X),\ \bar X\bigr)\cdot(\bar Y-\bar X)\Bigr]
+ O\Bigl( \mathbb{E}\bigl[|\bar X-\bar Y|^2\bigr]^{1/2}\, w\bigl(\mathbb{E}\bigl[|\bar X-\bar Y|^2\bigr]^{1/2}\bigr)\Bigr), \tag{A.9}
$$
for a function $w : \mathbb{R}_+ \to \mathbb{R}_+$ that tends to 0 in 0 ($w$ being independent of $X$ and $Y$) and where $|O(r)| \leqslant |r|$. Above, we used the fact that $d_1(\mathcal{L}(X),\mathcal{L}(Y)) \leqslant \mathbb{E}[|\bar X-\bar Y|^2]^{1/2}$.

Let now $Z_\lambda = \tau_{\lambda(\bar Y-\bar X)}(X)$, for $\lambda \in [0,1]$, so that $Z_{\lambda+\varepsilon} = \tau_{\varepsilon(\bar Y-\bar X)}(Z_\lambda)$, for $0 \leqslant \lambda \leqslant \lambda+\varepsilon \leqslant 1$. Then, $(\lambda+\varepsilon)\bar Y + [1-(\lambda+\varepsilon)]\bar X$ and $\lambda\bar Y+(1-\lambda)\bar X$ are representatives of $Z_{\lambda+\varepsilon}$ and $Z_\lambda$ and the distance between both reads
$$
\bigl|(\lambda+\varepsilon)\bar Y + [1-(\lambda+\varepsilon)]\bar X - \lambda\bar Y - (1-\lambda)\bar X\bigr| = \varepsilon\,|\bar Y-\bar X|.
$$
Therefore, by (A.9),
$$
\frac{d}{d\lambda}\, U\bigl(\mathcal{L}(Z_\lambda)\bigr) = \mathbb{E}\Bigl[ D_y\frac{\delta U}{\delta m}\bigl(\mathcal{L}(Z_\lambda),\ Z_\lambda\bigr)\cdot(\bar Y-\bar X)\Bigr], \qquad \lambda \in [0,1].
$$
Integrating with respect to $\lambda \in [0,1]$, we get (A.5) with the identity $\partial_\mu U(m)(y) = D_m U(m,y)$. □

A.2 TECHNICAL REMARKS ON DERIVATIVES

Here we collect several results related to the notion of derivative described in Definition 2.2.1. The first one is a quantified version of Proposition 2.2.3.

Proposition A.2.1. Assume that $U : \mathbb{T}^d \times \mathcal{P}(\mathbb{T}^d) \to \mathbb{R}$ is $C^1$, that, for some $n \in \mathbb{N}$, $U(\cdot,m)$ and $\frac{\delta U}{\delta m}(\cdot,m,\cdot)$ are in $C^{n+\alpha}$ and in $C^{n+\alpha}\times C^2$ respectively, and that there exists a constant $C_n$ such that, for any $m, m' \in \mathcal{P}(\mathbb{T}^d)$,
$$
\Bigl\| \frac{\delta U}{\delta m}(\cdot,m,\cdot) \Bigr\|_{(n+\alpha,2)} \leqslant C_n, \tag{A.10}
$$
and
$$
\Bigl\| U(\cdot,m') - U(\cdot,m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(\cdot,m,y)\,d(m'-m)(y) \Bigr\|_{n+\alpha} \leqslant C_n\, d_1^2(m,m'). \tag{A.11}
$$
Fix $m \in \mathcal{P}(\mathbb{T}^d)$ and let $\phi \in L^2(m;\mathbb{R}^d)$ be a vector field. Then,
$$
\Bigl\| U\bigl(\cdot,(\mathrm{id}+\phi)\sharp m\bigr) - U(\cdot,m) - \int_{\mathbb{T}^d} D_m U(\cdot,m,y)\cdot\phi(y)\,dm(y) \Bigr\|_{n+\alpha} \leqslant C'_n\, \|\phi\|^2_{L^2(m)}, \tag{A.12}
$$
where $C'_n$ depends only on $C_n$. Below, we give conditions that ensure that (A.11) holds true.

Proof.

Using (A.11) we obtain
$$
\Bigl\| U\bigl(\cdot,(\mathrm{id}+\phi)\sharp m\bigr) - U(\cdot,m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(\cdot,m,y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y) \Bigr\|_{n+\alpha}
\leqslant C_n\, d_1^2\bigl(m,(\mathrm{id}+\phi)\sharp m\bigr) \leqslant C_n\, \|\phi\|^2_{L^2(m)}. \tag{A.13}
$$
Using the regularity of $\frac{\delta U}{\delta m}$, we obtain, for a $\{1,\dots,d\}$-valued tuple $\ell$ of length $|\ell| \leqslant n$ and for any $x \in \mathbb{T}^d$ (omitting the dependence with respect to $m$ for simplicity):
$$
\Bigl| \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x,y)\,d\bigl((\mathrm{id}+\phi)\sharp m\bigr)(y) - \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x,y)\,dm(y) - \int_{\mathbb{T}^d} D^\ell_x\bigl[D_m U(x,y)\bigr]\cdot\phi(y)\,dm(y) \Bigr|
$$
$$
= \Bigl| \int_{\mathbb{T}^d} \Bigl[ D^\ell_x \frac{\delta U}{\delta m}\bigl(x,y+\phi(y)\bigr) - D^\ell_x \frac{\delta U}{\delta m}(x,y) - D^\ell_x\bigl[D_m U(x,y)\bigr]\cdot\phi(y) \Bigr]\,dm(y) \Bigr|
$$
$$
= \Bigl| \int_0^1\!\!\int_{\mathbb{T}^d} \Bigl[ D^\ell_x D_y \frac{\delta U}{\delta m}\bigl(x,y+s\phi(y)\bigr) - D^\ell_x D_m U(x,y) \Bigr]\cdot\phi(y)\,dm(y)\,ds \Bigr|
$$
$$
= \Bigl| \int_0^1\!\!\int_0^1\!\!\int_{\mathbb{T}^d} s\,\Bigl[ D^\ell_x D_y D_m U\bigl(x,y+st\phi(y)\bigr)\phi(y) \Bigr]\cdot\phi(y)\,dm(y)\,ds\,dt \Bigr|
\leqslant C_n\, \|\phi\|^2_{L^2(m)},
$$
where we used (A.10) in the last line. Coming back to (A.13), this shows that
$$
\Bigl\| D^\ell_x U\bigl(\cdot,(\mathrm{id}+\phi)\sharp m\bigr) - D^\ell_x U(\cdot,m) - \int_{\mathbb{T}^d} D^\ell_x D_m U(\cdot,y)\cdot\phi(y)\,dm(y) \Bigr\|_\infty \leqslant C'_n\, \|\phi\|^2_{L^2(m)},
$$
which proves (A.12) but with $\alpha = 0$. The proof of the Hölder estimate goes along the same line: if $x, x' \in \mathbb{T}^d$, then
$$
\Bigl| \Bigl[ \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x,y)\,d\bigl((\mathrm{id}+\phi)\sharp m\bigr)(y) - \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x,y)\,dm(y) - \int_{\mathbb{T}^d} D^\ell_x D_m U(x,y)\cdot\phi(y)\,dm(y) \Bigr]
$$
$$
\ - \Bigl[ \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x',y)\,d\bigl((\mathrm{id}+\phi)\sharp m\bigr)(y) - \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x',y)\,dm(y) - \int_{\mathbb{T}^d} D^\ell_x D_m U(x',y)\cdot\phi(y)\,dm(y) \Bigr] \Bigr|
$$
$$
= \Bigl| \int_0^1\!\!\int_{\mathbb{T}^d} \Bigl[ D^\ell_x D_y \frac{\delta U}{\delta m}\bigl(x,y+s\phi(y)\bigr) - D^\ell_x D_m U(x,y) \Bigr]\cdot\phi(y)\,dm(y)\,ds
- \int_0^1\!\!\int_{\mathbb{T}^d} \Bigl[ D^\ell_x D_y \frac{\delta U}{\delta m}\bigl(x',y+s\phi(y)\bigr) - D^\ell_x D_m U(x',y) \Bigr]\cdot\phi(y)\,dm(y)\,ds \Bigr|
$$
$$
= \Bigl| \int_0^1\!\!\int_0^1\!\!\int_{\mathbb{T}^d} s\,\Bigl[ D^\ell_x D_y D_m U\bigl(x,y+st\phi(y)\bigr) - D^\ell_x D_y D_m U\bigl(x',y+st\phi(y)\bigr) \Bigr]\phi(y)\cdot\phi(y)\,dm(y)\,ds\,dt \Bigr|
\leqslant C_n\, |x-x'|^\alpha\, \|\phi\|^2_{L^2(m)}.
$$
This shows that
$$
\Bigl\| \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(\cdot,m,y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y) - \int_{\mathbb{T}^d} D_m U(\cdot,m,y)\cdot\phi(y)\,dm(y) \Bigr\|_{n+\alpha} \leqslant C'_n\, \|\phi\|^2_{L^2(m)}.
$$
Plugging this inequality into (A.13) shows the result. □
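The quadratic smallness in (A.12) is easy to observe numerically (our toy example; the functional, vector field, and names are hypothetical): scaling the vector field by $\varepsilon$ makes the first-order remainder shrink like $\varepsilon^2$, so halving $\varepsilon$ should roughly quarter it.

```python
import math

# Order check for (A.12) with U(m) = (∫ sin(2πy) dm(y))² on the 1-D torus:
# the remainder U((id+εψ)♯m) − U(m) − ε ∫ D_m U(m,y)·ψ(y) dm(y) is O(ε²).
phi = lambda y: math.sin(2 * math.pi * y)
dphi = lambda y: 2 * math.pi * math.cos(2 * math.pi * y)
psi = lambda y: math.cos(2 * math.pi * y)   # a smooth "vector field" on T¹

m = [0.05, 0.3, 0.55, 0.8]                  # equal-weight empirical measure
mean = lambda pts: sum(phi(y) for y in pts) / len(pts)
U = lambda pts: mean(pts) ** 2

def remainder(eps):
    pushed = [y + eps * psi(y) for y in m]  # atoms of (id + εψ)♯m
    S = mean(m)
    lin = sum(2 * S * dphi(y) * psi(y) for y in m) / len(m)  # ∫ D_m U·ψ dm
    return abs(U(pushed) - U(m) - eps * lin)

r1, r2 = remainder(1e-2), remainder(5e-3)
print(r2 / r1)   # ≈ 0.25: halving ε divides the remainder by about 4
```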

We now give conditions under which (A.11) holds.

Proposition A.2.2. Assume that $U : \mathbb{T}^d \times \mathcal{P}(\mathbb{T}^d) \to \mathbb{R}$ is $C^1$ and that, for some $n \in \mathbb{N}^*$,
$$
\mathrm{Lip}_n\Bigl(\frac{\delta U}{\delta m}\Bigr) \leqslant C_n.
$$
Then, for any $m, m' \in \mathcal{P}(\mathbb{T}^d)$, we have
$$
\Bigl\| U(\cdot,m') - U(\cdot,m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(\cdot,m,y)\,d(m'-m)(y) \Bigr\|_{n+\alpha} \leqslant C_n\, d_1^2(m,m').
$$

We refer to Section 2.3 for the definition of $\mathrm{Lip}_n$; see, for instance, (HF1(n)).

Proof. We only show the Hölder regularity: the $L^\infty$ estimates go along the same line and are simpler. For any $\ell \in \mathbb{N}^d$ with $|\ell| \leqslant n$ and any $x, x' \in \mathbb{T}^d$, we have
$$
\Bigl| \Bigl[ D^\ell_x U(x,m') - D^\ell_x U(x,m) - \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x,m,y)\,d(m'-m)(y) \Bigr]
- \Bigl[ D^\ell_x U(x',m') - D^\ell_x U(x',m) - \int_{\mathbb{T}^d} D^\ell_x \frac{\delta U}{\delta m}(x',m,y)\,d(m'-m)(y) \Bigr] \Bigr|
$$
$$
= \Bigl| \int_0^1\!\!\int_{\mathbb{T}^d} \Bigl[ D^\ell_x \frac{\delta U}{\delta m}\bigl(x,(1-s)m+sm',y\bigr) - D^\ell_x \frac{\delta U}{\delta m}(x,m,y)
- \Bigl( D^\ell_x \frac{\delta U}{\delta m}\bigl(x',(1-s)m+sm',y\bigr) - D^\ell_x \frac{\delta U}{\delta m}(x',m,y) \Bigr) \Bigr]\,d(m'-m)(y)\,ds \Bigr|
$$
$$
\leqslant \sup_{s,y} \Bigl| D_y D^\ell_x \frac{\delta U}{\delta m}\bigl(x,(1-s)m+sm',y\bigr) - D_y D^\ell_x \frac{\delta U}{\delta m}(x,m,y)
- \Bigl( D_y D^\ell_x \frac{\delta U}{\delta m}\bigl(x',(1-s)m+sm',y\bigr) - D_y D^\ell_x \frac{\delta U}{\delta m}(x',m,y) \Bigr) \Bigr|\ d_1(m,m')
$$
$$
\leqslant \mathrm{Lip}_n\Bigl(\frac{\delta U}{\delta m}\Bigr)\, |x-x'|^\alpha\, d_1^2(m,m').
$$

This proves our claim. □

Proposition A.2.3. Let $U : \mathcal{P}(\mathbb{T}^d) \to \mathbb{R}$ be $C^2$ and satisfy, for any $m, m' \in \mathcal{P}(\mathbb{T}^d)$,
$$
\Bigl| U(m') - U(m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(m,y)\,d(m'-m)(y)
- \frac12 \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \frac{\delta^2 U}{\delta m^2}(m,y,y')\,d(m'-m)(y)\,d(m'-m)(y') \Bigr|
\leqslant d_1^2(m,m')\,w\bigl(d_1(m,m')\bigr), \tag{A.14}
$$
where $w(r) \to 0$ as $r \to 0$, together with
$$
\Bigl\| \frac{\delta U}{\delta m}(m,\cdot) \Bigr\|_3 + \Bigl\| \frac{\delta^2 U}{\delta m^2}(m,\cdot,\cdot) \Bigr\|_{(2,2)} \leqslant C_0.
$$
Then, for any $m \in \mathcal{P}(\mathbb{T}^d)$ and any vector field $\phi \in L^3(m;\mathbb{R}^d)$, we have
$$
\Bigl| U\bigl((\mathrm{id}+\phi)\sharp m\bigr) - U(m) - \int_{\mathbb{T}^d} D_m U(m,y)\cdot\phi(y)\,dm(y)
- \frac12 \int_{\mathbb{T}^d} D_y D_m U(m,y)\phi(y)\cdot\phi(y)\,dm(y)
$$
$$
\ - \frac12 \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} D^2_{mm} U(m,y,y')\phi(y)\cdot\phi(y')\,dm(y)\,dm(y') \Bigr|
\leqslant \|\phi\|^2_{L^3_m}\, \tilde w\bigl(\|\phi\|_{L^3_m}\bigr),
$$
where the modulus $\tilde w$ depends on $w$ and on $C_0$.

Proof.

We argue as in Proposition A.2.1. By assumption, we have
$$
\Bigl| U\bigl((\mathrm{id}+\phi)\sharp m\bigr) - U(m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(m,y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y)
$$
$$
\ - \frac12 \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \frac{\delta^2 U}{\delta m^2}(m,y,z)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(z) \Bigr|
$$
$$
\leqslant d_1^2\bigl(m,(\mathrm{id}+\phi)\sharp m\bigr)\, w\bigl(d_1(m,(\mathrm{id}+\phi)\sharp m)\bigr) \leqslant \|\phi\|^2_{L^3_m}\, w\bigl(\|\phi\|_{L^3_m}\bigr).
$$
Now
$$
\int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(m,y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y)
= \int_{\mathbb{T}^d} \Bigl[ \frac{\delta U}{\delta m}\bigl(m,y+\phi(y)\bigr) - \frac{\delta U}{\delta m}(m,y) \Bigr]\,dm(y)
$$
$$
= \int_{\mathbb{T}^d} \Bigl[ D_y\frac{\delta U}{\delta m}(m,y)\cdot\phi(y) + \frac12 D^2_y\frac{\delta U}{\delta m}(m,y)\phi(y)\cdot\phi(y) + O\bigl(|\phi(y)|^3\bigr) \Bigr]\,dm(y)
$$
$$
= \int_{\mathbb{T}^d} \Bigl[ D_m U(m,y)\cdot\phi(y) + \frac12 D_y D_m U(m,y)\phi(y)\cdot\phi(y) + O\bigl(|\phi(y)|^3\bigr) \Bigr]\,dm(y),
$$
where
$$
\int_{\mathbb{T}^d} O\bigl(|\phi(y)|^3\bigr)\,dm(y) \leqslant \bigl\| D^2_y D_m U \bigr\|_\infty \int_{\mathbb{T}^d} |\phi(y)|^3\,dm(y) \leqslant C_0\, \|\phi\|^3_{L^3_m}.
$$
Moreover,
$$
\int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \frac{\delta^2 U}{\delta m^2}(m,y,z)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(y)\,d\bigl((\mathrm{id}+\phi)\sharp m - m\bigr)(z)
$$
$$
= \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \Bigl[ \frac{\delta^2 U}{\delta m^2}\bigl(m,y+\phi(y),z+\phi(z)\bigr) - \frac{\delta^2 U}{\delta m^2}\bigl(m,y+\phi(y),z\bigr) - \frac{\delta^2 U}{\delta m^2}\bigl(m,y,z+\phi(z)\bigr) + \frac{\delta^2 U}{\delta m^2}(m,y,z) \Bigr]\,dm(y)\,dm(z)
$$
$$
= \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \Bigl[ D^2_{y,z}\frac{\delta^2 U}{\delta m^2}(m,y,z)\phi(y)\cdot\phi(z) + O\bigl(|\phi(y)|^2|\phi(z)| + |\phi(y)||\phi(z)|^2\bigr) \Bigr]\,dm(y)\,dm(z)
$$
$$
= \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \Bigl[ D^2_{mm} U(m,y,z)\phi(y)\cdot\phi(z) + O\bigl(|\phi(y)|^2|\phi(z)| + |\phi(y)||\phi(z)|^2\bigr) \Bigr]\,dm(y)\,dm(z),
$$
where
$$
\int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} O\bigl(|\phi(y)|^2|\phi(z)| + |\phi(y)||\phi(z)|^2\bigr)\,dm(y)\,dm(z)
\leqslant \sup_m \bigl\| D^2_{mm} U(m,\cdot,\cdot) \bigr\|_{(C^1)^2}\, \|\phi\|^3_{L^3_m} \leqslant C_0\, \|\phi\|^3_{L^3_m}.
$$
Putting the above estimates together gives the result. □


We complete the section by giving conditions under which inequality (A.14) holds:

Proposition A.2.4. Assume that the mapping $\mathcal{P}(\mathbb{T}^d) \ni m \mapsto \frac{\delta^2 U}{\delta m^2}(m,\cdot,\cdot)$ is continuous from $\mathcal{P}(\mathbb{T}^d)$ into $C^2([\mathbb{T}^d]^2)$ with a modulus $w$. Then (A.14) holds true.

Proof. We have
$$
U(m') - U(m) = \int_0^1\!\!\int_{\mathbb{T}^d} \frac{\delta U}{\delta m}\bigl((1-s)m+sm',\ y\bigr)\,d(m'-m)(y)\,ds
$$
$$
= \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(m,y)\,d(m'-m)(y)
+ \int_0^1\!\!\int_0^1\!\!\int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} s\,\frac{\delta^2 U}{\delta m^2}\bigl((1-s\tau)m+s\tau m',\ y,y'\bigr)\,d(m'-m)(y)\,d(m'-m)(y')\,d\tau\,ds.
$$
Hence
$$
\Bigl| U(m') - U(m) - \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}(m,y)\,d(m'-m)(y)
- \frac12 \int_{\mathbb{T}^d}\!\!\int_{\mathbb{T}^d} \frac{\delta^2 U}{\delta m^2}(m,y,y')\,d(m'-m)(y)\,d(m'-m)(y') \Bigr|
$$
$$
\leqslant d_1(m,m')^2 \int_0^1\!\!\int_0^1 s\,\Bigl\| D^2_{yy'}\frac{\delta^2 U}{\delta m^2}\bigl((1-s\tau)m+s\tau m',\cdot,\cdot\bigr) - D^2_{yy'}\frac{\delta^2 U}{\delta m^2}(m,\cdot,\cdot) \Bigr\|_\infty\,d\tau\,ds
\leqslant d_1(m,m')^2\, w\bigl(d_1(m,m')\bigr). \qquad\Box
$$
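As a sanity check of the expansion behind Proposition A.2.4 (our illustration): for the hypothetical quadratic functional $U(m)=(\int\varphi\,dm)^2$ one has $\frac{\delta U}{\delta m}(m,y)=2(\int\varphi\,dm)\varphi(y)$ and $\frac{\delta^2 U}{\delta m^2}(m,y,y')=2\varphi(y)\varphi(y')$, and the second-order expansion is exact, so the remainder in (A.14) vanishes identically.

```python
import math

# For U(m) = (∫ φ dm)², the second-order expansion in (A.14) has zero
# remainder: U(m') − U(m) = 2S(S'−S) + (S'−S)², with S = ∫φ dm, S' = ∫φ dm'.
phi = lambda y: math.sin(2 * math.pi * y)

m  = [0.1, 0.4, 0.7]   # two equal-weight empirical measures on the torus
mp = [0.2, 0.5, 0.9]

mean = lambda pts: sum(phi(y) for y in pts) / len(pts)
U = lambda pts: mean(pts) ** 2

S, Sp = mean(m), mean(mp)
first  = 2 * S * (Sp - S)   # ∫ δU/δm(m,y) d(m'−m)(y)
second = (Sp - S) ** 2      # ½ ∫∫ 2φ(y)φ(y') d(m'−m)(y) d(m'−m)(y')

remainder = U(mp) - U(m) - first - second
print(abs(remainder) < 1e-12)   # True: the expansion is exact here
```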

A.3 VARIOUS FORMS OF ITÔ'S FORMULA

A.3.1 Itô–Wentzell Formula

We here give a short reminder of the so-called Itô–Wentzell formula, which we alluded to several times in the text. We refer the reader to Chapter 3 in Kunita's monograph [68] for a complete account; see in particular Theorem 3.3.1 therein. We emphasize that our presentation is slightly different from [68], as it is tailor-made to the framework considered throughout the book.

The statement is as follows. Let $(\Omega,\mathcal{A},\mathbb{P})$ be a complete probability space equipped with two $d$-dimensional Brownian motions $(B_t)_{t\geqslant0}$ and $(W_t)_{t\geqslant0}$ and endowed with the completion $(\mathcal{F}_t)_{t\geqslant0}$ of the filtration generated by $(B_t)_{t\geqslant0}$ and $(W_t)_{t\geqslant0}$. Let $(\Psi_t(\cdot))_{t\geqslant0}$ be a continuous process with values in the space $C^2(\mathbb{R}^d)$ of twice continuously differentiable functions from $\mathbb{R}^d$ to $\mathbb{R}$, equipped with the collection of seminorms $\{\|\ell\|_{C^2(K)} := \sup_{x\in K}(|\ell(x)|+|D\ell(x)|+|D^2\ell(x)|),\ \ell \in C^2(\mathbb{R}^d)\}$ indexed by the compact subsets $K$ of $\mathbb{R}^d$. Assume that, for any $x \in \mathbb{R}^d$, $(\Psi_t(x))_{t\geqslant0}$ may be expanded under the following Itô form:
$$
\Psi_t(x) = \Psi_0(x) + \int_0^t f_s(x)\,ds + \sum_{k=1}^d \int_0^t g^k_s(x)\,dW^k_s, \qquad t \geqslant 0,
$$

where $(f_t(\cdot))_{t\geqslant0}$ and $((g^k_t(\cdot))_{t\geqslant0})_{k=1,\dots,d}$ are progressively measurable processes with values in $C^2(\mathbb{R}^d)$ such that, for any compact subset $K \subset \mathbb{R}^d$, with probability 1 under $\mathbb{P}$, for all $T > 0$,
$$
\int_0^T \|f_s(\cdot)\|_{C^1(K)}\,ds < \infty, \qquad \sum_{k=1}^d \int_0^T \|g^k_s(\cdot)\|^2_{C^2(K)}\,ds < \infty.
$$

Consider in addition an $m$-dimensional Itô process $(X_t)_{t\geqslant0}$ with the following expansion:
$$
dX_t = b_t\,dt + \sum_{k=1}^d \sigma^k_t\,dB^k_t + \sum_{k=1}^d \zeta^k_t\,dW^k_t, \qquad t \geqslant 0,
$$
where $(b_t)_{t\geqslant0}$, $((\sigma^k_t)_{t\geqslant0})_{k=1,\dots,d}$ and $((\zeta^k_t)_{t\geqslant0})_{k=1,\dots,d}$ are progressively measurable processes with values in $\mathbb{R}^m$ satisfying with probability 1:
$$
\forall T > 0, \qquad \int_0^T |b_s|\,ds < \infty, \qquad \sum_{k=1}^d \int_0^T \bigl(|\sigma^k_s|^2 + |\zeta^k_s|^2\bigr)\,ds < \infty.
$$

Then, $(\Psi_t(X_t))_{t\geqslant0}$ is an Itô process. With probability 1, it expands, for all $t \geqslant 0$, as:
$$
d\Psi_t(X_t) = \Bigl[ D\Psi_t(X_t)\cdot b_t + \frac12 \sum_{k=1}^d \mathrm{trace}\bigl( D^2\Psi_t(X_t)\bigl(\sigma^k_t\otimes\sigma^k_t + \zeta^k_t\otimes\zeta^k_t\bigr) \bigr) + f_t(X_t) + \sum_{k=1}^d Dg^k_t(X_t)\cdot\zeta^k_t \Bigr]\,dt
$$
$$
\ + \sum_{k=1}^d D\Psi_t(X_t)\cdot\bigl(\sigma^k_t\,dB^k_t + \zeta^k_t\,dW^k_t\bigr) + \sum_{k=1}^d g^k_t(X_t)\,dW^k_t.
$$
Above, $\sigma^k_t\otimes\sigma^k_t$ denotes the $m\times m$ matrix with entries $((\sigma^k_t)_i(\sigma^k_t)_j)_{i,j=1,\dots,m}$, and similarly for $((\zeta^k_t)_i(\zeta^k_t)_j)_{i,j=1,\dots,m}$. The proof follows from the combination of Exercise 3.1.5 and Theorem 3.3.1 in [68].
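A quick numerical sanity check of the formula in a toy case (ours, not from [68]): take $d=m=1$, $\Psi_t(x) = x\,W_t$ (so $f\equiv0$ and $g_t(x)=x$) and $X=B$ ($b=0$, $\sigma=1$, $\zeta=0$). Since $D^2\Psi_t=0$ and $\zeta=0$, the formula reduces to $B_T W_T = \int_0^T W_t\,dB_t + \int_0^T B_t\,dW_t$, which an Euler discretization reproduces up to the vanishing cross-variation $\sum \Delta B\,\Delta W$.

```python
import math, random

# Euler check of B_T·W_T = ∫ W dB + ∫ B dW for independent Brownian motions
# (a special case of the Itô–Wentzell formula with Ψ_t(x) = x·W_t, X = B).
random.seed(0)
T, n = 1.0, 100_000
sqdt = math.sqrt(T / n)
B = W = 0.0
ito_sum = 0.0                    # left-point sums of W_t ΔB_t + B_t ΔW_t
for _ in range(n):
    dB = random.gauss(0.0, sqdt)
    dW = random.gauss(0.0, sqdt)
    ito_sum += W * dB + B * dW
    B += dB
    W += dW

# the discrepancy is exactly Σ ΔB ΔW, which vanishes as n → ∞
print(abs(B * W - ito_sum) < 0.05)
```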

A.3.2 Chain Rule for Functions of a Measure Argument

Let $U$ be a function satisfying the same assumption as in Definition 2.4.4 and, for a given $t_0 \in [0,T]$, let $(\tilde m_t)_{t\in[t_0,T]}$ be an adapted process with paths in $C^0([t_0,T],\mathcal{P}(\mathbb{T}^d))$ such that, with probability 1, for any smooth test function $\varphi \in C^n(\mathbb{T}^d)$,
$$
d_t\Bigl[ \int_{\mathbb{T}^d} \varphi(x)\,d\tilde m_t(x) \Bigr]
= \int_{\mathbb{T}^d} \Bigl[ \Delta\varphi(x) - \beta_t\bigl(x+\sqrt2(W_t-W_{t_0})\bigr)\cdot D\varphi(x) \Bigr]\,d\tilde m_t(x)\,dt, \tag{A.15}
$$
for $t \in [t_0,T]$, for some adapted process $(\beta_t)_{t\in[t_0,T]}$, with paths in the space $C^0([t_0,T],[C^0(\mathbb{T}^d)]^d)$, such that
$$
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[t_0,T]} \|\beta_t\|_0 < \infty,
$$
so that, by Lebesgue's dominated convergence theorem,
$$
\lim_{h\to0}\ \mathbb{E}\Bigl[ \sup_{s,t\in[t_0,T],\,|t-s|\leqslant h} \|\beta_s-\beta_t\|_0 \Bigr] = 0.
$$
In other words, $(\tilde m_t)_{t\in[t_0,T]}$ stands for the flow of conditional marginal laws of $(\tilde X_t)_{t\in[t_0,T]}$ given $\mathcal{F}_T$, where $(\tilde X_t)_{t\in[t_0,T]}$ solves the stochastic differential equation:
$$
d\tilde X_t = -\beta_t\bigl(\tilde X_t + \sqrt2(W_t-W_{t_0})\bigr)\,dt + \sqrt2\,dB_t, \qquad t \in [t_0,T],
$$
$\tilde X_{t_0}$ being distributed according to $m_{t_0}$ conditional on $\mathcal{F}_T$. In particular, there exists a deterministic constant $C$ such that, with probability 1, for all $t_0 \leqslant t \leqslant t+h \leqslant T$,
$$
d_1(\tilde m_{t+h},\tilde m_t) \leqslant C\sqrt h.
$$
Given some $t \in [t_0,T]$, we denote by $m_t = (\cdot \mapsto \cdot + \sqrt2(W_t-W_{t_0}))\sharp\, \tilde m_t$ the push-forward of $\tilde m_t$ by the application $\mathbb{T}^d \ni x \mapsto x + \sqrt2(W_t-W_{t_0}) \in \mathbb{T}^d$ (so that $m_{t_0} = \tilde m_{t_0}$). Equivalently, $(m_t)_{t\in[t_0,T]}$ is the flow of conditional marginal laws of $(X_t)_{t\in[t_0,T]}$ given $\mathcal{F}_T$, where $(X_t)_{t\in[t_0,T]}$ solves the stochastic differential equation:
$$
dX_t = -\beta_t(X_t)\,dt + \sqrt2\,dB_t + \sqrt2\,dW_t, \qquad t \in [t_0,T],
$$
$X_{t_0}$ being distributed according to $m_{t_0}$ conditional on $\mathcal{F}_T$.

We then have the local Itô–Taylor expansion:

Lemma A.3.1. Under the above assumption, we can find a family of real-valued random variables $(\varepsilon_{s,t})_{s,t\in[t_0,T]:s\leqslant t}$ such that
$$
\lim_{h\downarrow0}\ \sup_{s,t\in[t_0,T]:|s-t|\leqslant h} \mathbb{E}\bigl[|\varepsilon_{s,t}|\bigr] = 0,
$$
and, for any $t \in [t_0,T]$,
$$
\frac1h\, \mathbb{E}\Bigl[ U\bigl(t+h,\ x+\sqrt2(W_{t+h}-W_{t_0}),\ m_{t+h}\bigr) - U\bigl(t+h,\ x+\sqrt2(W_{t+h}-W_{t_0}),\ m_t\bigr) \,\big|\, \mathcal{F}_t \Bigr]
$$
$$
= \Delta_x U\bigl(t,\ x+\sqrt2(W_t-W_{t_0}),\ m_t\bigr)
+ 2\int_{\mathbb{T}^d} \mathrm{div}_y\bigl[ D_m U\bigl(t,\ x+\sqrt2(W_t-W_{t_0}),\ m_t,\ y\bigr) \bigr]\,dm_t(y)
$$
$$
\ - \int_{\mathbb{T}^d} D_m U\bigl(t,\ x+\sqrt2(W_t-W_{t_0}),\ m_t,\ y\bigr)\cdot\beta_t(y)\,dm_t(y)
+ 2\int_{\mathbb{T}^d} \mathrm{div}_x\bigl[ D_m U\bigl(t,\ x+\sqrt2(W_t-W_{t_0}),\ m_t,\ y\bigr) \bigr]\,dm_t(y)
$$
$$
\ + \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[ D^2_{mm} U\bigl(t,\ x+\sqrt2(W_t-W_{t_0}),\ m_t,\ y,y'\bigr) \bigr]\,dm_t(y)\,dm_t(y') + \varepsilon_{t,t+h}.
$$

Proof. Without any loss of generality, we assume that $t_0 = 0$. Moreover, throughout the analysis, we shall use the following variant of (5.25): for two random processes $(\gamma_t)_{t\in[0,T]}$ and $(\gamma'_t)_{t\in[0,T]}$, with paths in $C^0([0,T],C^0(E))$ and $C^0([0,T],F)$ respectively, where $E$ is a compact metric space (the distance being denoted by $d_E$) and $F$ is a metric space (the distance being denoted by $d_F$), satisfying
$$
\mathrm{essup}_{\omega\in\Omega}\ \sup_{t\in[0,T]} \|\gamma_t\|_0 < \infty,
$$
it must hold that
$$
\lim_{h\downarrow0}\ \sup_{s,t\in[0,T]:|s-t|\leqslant h} \mathbb{E}\bigl[|\eta_{s,t}|\bigr] = 0,
\qquad \text{with} \qquad
\eta_{s,t} = \sup_{r\in[s,t]}\ \sup_{x,y\in E:\ d_E(x,y)\leqslant \sup_{u\in[s,t]} d_F(\gamma'_u,\gamma'_s)} \bigl| \gamma_r(y) - \gamma_s(x) \bigr|. \tag{A.16}
$$

The proof of (A.16) holds in two steps. The first one is to show, by a compactness argument, that, $\mathbb{P}$-almost surely, $\sup_{s,t\in[0,T]:|s-t|\leqslant h}|\eta_{s,t}|$ tends to 0; the claim then follows from the dominated convergence theorem.

Now, for given $t \in [0,T)$ and $h \in (0,T-t]$, we let $\delta_h W_t := W_{t+h}-W_t$ and $\delta_h m_t := m_{t+h}-m_t$. By Taylor–Lagrange's formula, we can find some random variable $\lambda$ with values in $[0,1]$¹ such that
$$
U\bigl(t+h,\ x+\sqrt2 W_{t+h},\ m_{t+h}\bigr) - U\bigl(t+h,\ x+\sqrt2 W_t,\ m_t\bigr)
$$
$$
= \sqrt2\, D_x U\bigl(t+h,\ x+\sqrt2 W_t,\ m_t\bigr)\cdot\delta_h W_t
+ \int_{\mathbb{T}^d} \frac{\delta U}{\delta m}\bigl(t+h,\ x+\sqrt2 W_t,\ m_t,\ y\bigr)\,d\bigl(\delta_h m_t\bigr)(y)
$$
$$
\ + D^2_x U\bigl(t+h,\ x+\sqrt2 W_t+\sqrt2\lambda\delta_h W_t,\ m_t+\lambda\delta_h m_t\bigr)\cdot(\delta_h W_t)^{\otimes2}
$$
$$
\ + \sqrt2 \int_{\mathbb{T}^d} D_x\frac{\delta U}{\delta m}\bigl(t+h,\ x+\sqrt2 W_t+\sqrt2\lambda\delta_h W_t,\ m_t+\lambda\delta_h m_t,\ y\bigr)\cdot\delta_h W_t\,d\bigl(\delta_h m_t\bigr)(y)
$$
$$
\ + \frac12 \int_{[\mathbb{T}^d]^2} \frac{\delta^2 U}{\delta m^2}\bigl(t+h,\ x+\sqrt2 W_t+\sqrt2\lambda\delta_h W_t,\ m_t+\lambda\delta_h m_t,\ y,y'\bigr)\,d\bigl(\delta_h m_t\bigr)(y)\,d\bigl(\delta_h m_t\bigr)(y')
$$
$$
=: T^1_h + T^2_h + T^3_h + T^4_h + T^5_h, \tag{A.17}
$$
where we used the dot "$\cdot$" to denote the inner product in Euclidean spaces.

Part of the analysis relies on the following decomposition. Given a bounded and Borel measurable function $\varphi : \mathbb{T}^d \to \mathbb{R}$, it holds that
$$
\int_{\mathbb{T}^d} \varphi(y)\,d\bigl(\delta_h m_t\bigr)(y)
= \int_{\mathbb{T}^d} \varphi(y)\,dm_{t+h}(y) - \int_{\mathbb{T}^d} \varphi(y)\,dm_t(y)
= \int_{\mathbb{T}^d} \varphi\bigl(y+\sqrt2 W_{t+h}\bigr)\,d\tilde m_{t+h}(y) - \int_{\mathbb{T}^d} \varphi\bigl(y+\sqrt2 W_t\bigr)\,d\tilde m_t(y)
$$
$$
= \int_{\mathbb{T}^d} \varphi\bigl(y+\sqrt2 W_{t+h}\bigr)\,d\bigl(\tilde m_{t+h}-\tilde m_t\bigr)(y)
+ \int_{\mathbb{T}^d} \bigl[ \varphi\bigl(y+\sqrt2 W_{t+h}\bigr) - \varphi\bigl(y+\sqrt2 W_t\bigr) \bigr]\,d\tilde m_t(y)
$$
$$
= \int_{\mathbb{T}^d} \varphi\bigl(y+\sqrt2 W_{t+h}\bigr)\,d\bigl(\tilde m_{t+h}-\tilde m_t\bigr)(y)
+ \int_{\mathbb{T}^d} \bigl[ \varphi\bigl(y+\sqrt2\delta_h W_t\bigr) - \varphi(y) \bigr]\,dm_t(y). \tag{A.18}
$$

¹The fact that $\lambda$ is a random variable may be justified as follows. Given a continuous mapping $\varphi$ from $\mathbb{T}^d\times\mathcal{P}(\mathbb{T}^d)$ into $\mathbb{R}$ and two random variables $(X,m)$ and $(X',m')$ with values in $(\mathbb{R}^d,\mathcal{P}(\mathbb{T}^d))$ such that the mapping $[0,1] \ni c \mapsto \varphi(cX'+(1-c)X,\ cm'+(1-c)m)$ vanishes at least once, the quantity $\lambda = \inf\{c\in[0,1] : \varphi(cX'+(1-c)X,\ cm'+(1-c)m) = 0\}$ defines a random variable since $\{\lambda > c\} = \cup_{n\in\mathbb{N}\setminus\{0\}}\ \cap_{c'\in\mathbb{Q}\cap[0,c]} \{\varphi(c'X'+(1-c')X,\ c'm'+(1-c')m)\,\varphi(X,m) > 1/n\}$.


Likewise, whenever ϕ is a bounded Borel measurable mapping from [Td ]2 into R, it holds that  [Td ]2

    ϕ(y, y  )d δh mt (y)d δh mt (y  )



= [Td ]2

√       ˜ t+h − m ϕ y + 2Wt+h , y  d m ˜ t (y)d δh mt (y  )



+ [Td ]2

 =

[Td ]2

   √    ϕ y + 2δh Wt , y  − ϕ(y, y  ) dmt (y)d δh mt (y  )

√ √   ϕ y + 2Wt+h , y  + 2Wt+h

   ˜ t+h − m d m ˜ t+h − m ˜ t (y)d m ˜ t (y  )    √ √  ϕ y + 2Wt+h , y  + 2δh Wt + 

(A.19)

[Td ]2

√     − ϕ y + 2Wt+h , y  d m ˜ t (y)dmt (y  ) ˜ t+h − m    √ √  ϕ y + 2δh Wt , y  + 2Wt+h + [Td ]2

√     − ϕ y, y  + 2Wt+h dmt (y)d m ˜ t+h − m ˜ t (y  )    √ √ √    ϕ y + 2δh Wt , y  + 2δh Wt − ϕ y + 2δh Wt , y  + [Td ]2

 √   − ϕ y, y  + 2δh Wt + ϕ(y, y  ) dmt (y)dmt (y  ). We now proceed with the analysis of (A.17). We start with Th1 . It is pretty clear that  (A.20) E Th1 |Ft = 0. Look at now the term Th2 . Following (A.18), write it Th2 =

−1 0 1



√ √    δU  t + h, x + 2Wt , mt , y + 2Wt+h d m ˜ t+h − m ˜ t (y) Td δm   √ √  δU  t + h, x + 2Wt , mt , y + 2δh Wt + Td δm √  δU  t + h, x + 2Wt , mt , y dmt (y) − δm

=: Th2,1 + Th2,2 .

(A.21)

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

195

By the PDE satisfied by (m ˜ t )t∈[t0 ,T ] , we have Th2,1 =





√ √  δU  ˜ s (y) t + h, x + 2Wt , mt , y + 2Wt+h dm δm t Td  t+h  √ √  (A.22) δU  t + h, x + 2Wt , mt , y + 2Wt+h ds Dy − δm d t √ T   ˜ s (y). · βs y + 2Ws dm t+h

ds

Δy

Therefore, taking the conditional expectation, dividing by h and using √ the fact that mt is the push-forward of m ˜ t by the mapping Td  x → x + 2Wt (take ˜ t ), we can write note that the measures below are mt and not m 1  2,1 E Th |Ft = h



√  δU  t, x + 2Wt , mt , y dmt (y) Δy δm d T  √   Dm U t, x + 2Wt , mt , y · βt (y)dmt (y) + εt,t+h , − Td

where, as in the statement, (εs,t )0stT is a generic notation for denoting a family of random variables that satisfies  lim sup E |εs,t | = 0.

h0 |t−s|h

(A.23)

Here we used the trick explained in (A.16) to prove (A.23). The application of (A.16) is performed in two steps. The first one is to choose E = [Td ]2 , F = Rd , δU δU (t, x, mt , y) and γt = Wt and then γt (x, y) = Dy δm (t, x, mt , y) · γt (x, y) = Δy δm  βt (y) and γt = Wt . Then, by a first application of (A.16), we can write Th2,1



=



√ √  δU  s, x + 2Ws , ms , y + 2Ws dm ˜ s (y) δm d t T  t+h  √ √  δU  s, x + 2Ws , ms , y + 2Ws ds Dy − δm d t √ T   ˜ s (y) + hεt,t+h . · βs y + 2Ws dm t+h

ds

Δy

Then, we can apply (A.16) once again with  γs (x) =

√ √  δU  ˜ s (y) s, x + 2Ws , ms , y + 2Ws dm Δy δm Td  √ √ √    δU  s, x + 2Ws , ms , y + 2Ws · βs y + 2Ws dm − ˜ s (y), Dy δm d T

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

196

and γ  constant. Using Itˆ o’s formula to handle the second term in (A.21), we get in a similar way 1  2 E Th |Ft = 2 h

 Td

Δy

 −

Td

√  δU  t, x + 2Wt , mt , y dmt (y) δm

√   Dm U t, x + 2Wt , mt , y · βt (y)dmt (y) + εt,t+h .

(A.24)

Turn now to Th3 in (A.17). Using again (A.16), it is quite clear that √ 1  3 E Th |Ft = Δx U (t, x + 2Wt , mt ) + εt,t+h . h

(A.25)

We now handle Th4 . Following (A.18), we write Th4

√ √   δU  √ Dx t + h, x + 2Wt + 2λδh Wt , mt = 2 δm Td   √   ˜ t+h − m + λδh mt , y + 2Wt+h · δh Wt d m ˜ t (y) √   δU  √ √ √  + 2 Dx t + h, x + 2Wt + 2λδh Wt , mt + λδh mt , y + 2δh Wt δm Td − Dx

√ √  δU  t + h, x + 2Wt + 2λδh Wt , mt + λδh mt , y δm

· δh Wt dmt (y) =: Th4,1 + Th4,2 . Making use of the forward Fokker–Planck equation for (m ˜ t )t∈[t0 ,T ] as in the proof of (A.24), we get that 1  4,1 E Th |Ft = εt,t+h . h Now, by Taylor–Lagrange’s formula, we can find another [0, 1]-valued random variable λ such that

−1 0 1

 

√ √ δU  t + h, x + 2Wt + 2λδh Wt , mt δm Td  √  + λδh mt , y + 2λ δh Wt · (δh Wt )⊗2 dmt (y).

Th4,2 = 2

Dy Dx

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

197

And, then, invoking once again the trick (A.16), but with E = Td × P(Td ) × Td , δU (t, x, m, y) and γt = (Wt , mt ), we get F = Rd × P(Td ), γt (x, m, y) = Dy Dx δm 1  4 1  4,2 E Th |Ft = E Th |Ft + εt,t+h h h √  δU   t, x + 2Wt , mt , y dmt (y) + εt,t+h (A.26) =2 divy Dx δm d T √  δU   t, x + 2Wt , mt , y dmt (y) + εt,t+h . =2 divx Dy δm Td It finally remains to handle Th5 . Thanks to (A.19), we write Th5 =



√ √ δ2 U  t + h, x + 2Wt + 2λδh Wt , mt + λδh mt , y 2 [Td ]2 δm √ √      ˜ t+h − m ˜ t+h − m ˜ t (y)d m ˜ t (y  ) + 2Wt+h , y  + 2Wt+h d m  2  √ √ δ U 1 + t + h, x + 2Wt + 2λδh Wt , mt 2 2 [Td ]2 δm √ √  + λδh mt , y + 2δh Wt , y  + 2Wt+h 1 2

√ √ δ2U  t + h, x + 2W + 2λδh Wt , mt + λδh mt , y, y  t δm2  √    + 2Wt+h dmt (y)d m ˜ t+h − m ˜ t (y  ) −





√ √ δ2U  t + h, x + 2W + 2λδh Wt , mt t 2 [Td ]2 δm √ √  + λδh mt , y + 2Wt+h , y  + 2δh Wt 1 + 2

(A.27)

√ √ δ2U  t + h, x + 2Wt + 2λδh Wt , mt + λδh mt , y δm2  √     ˜ t (y)dmt (y  ) + 2Wt+h , y d m ˜ t+h − m







√ √ δ2U  t + h, x + 2W + 2λδh Wt , mt t 2 [Td ]2 δm √ √  + λδh mt , y + 2δh Wt , y  + 2δh Wt 1 + 2

√ √ √  δ2U  t + h, x + 2Wt + 2λδh Wt , mt + λδh mt , y + 2δh Wt , y  2 δm √ √  δ2U  t + h, x + 2λδh Wt , mt + λδh mt , y, y  + 2δh Wt − 2 δm



−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

198

 √ √  δ2 U   t + h, x + dmt (y)dmt (y  ) 2W + 2λδ W , m + λδ m , y, y t h t t h t δm2  1 =: Th5,1 + Th5,2 + Th5,3 + Th5,4 . 2 +

Making use of the Fokker–Planck equation satisfied by (m ˜ t )t∈[t0 ,T ] together with the regularity assumptions of δ 2 U/δm2 in Definition 2.4.4, it is readily seen that 1  5,1 E Th + Th5,2 + Th5,3 |Ft = εt,t+h . h

(A.28)

Focus now on Th5,4 . With obvious notation, write it under the form Th5,4 =: Th5,4,1 − Th5,4,2 − Th5,4,3 + Th5,4,4 . Performing a second-order Taylor expansion, we get Th5,4,1

 = [Td ]2

√ δ2U  t + h, x + 2Wt 2 δm

√  2λδh Wt , mt + λδh mt , y, y  dmt (y)dmt (y  )  √ √ √ δ2 U  + t + h, x + 2Wt + 2λδh Wt , mt 2Dy 2 δm [Td ]2  + λδh mt , y, y  · δh Wt dmt (y)dmt (y  )  √ √ √ δ2U  + t + h, x + 2Wt + 2λδh Wt , mt 2Dy 2 δm [Td ]2  + λδh mt , y, y  · δh Wt dmt (y)dmt (y  )  √ √ δ2 U  t + h, x + 2Wt + 2λδh Wt , mt + Dy2 2 δm [Td ]2   ⊗2 + λδh mt , y, y  · δh Wt dmt (y)dmt (y  )  √ √ δ2U  + t + h, x + Dy2 + 2W 2λδh Wt , mt t δm2 [Td ]2   ⊗2 + λδh mt , y, y  · δh Wt dmt (y)dmt (y  )  √ √ δ2U  t + h, x + 2Wt + 2λδh Wt , mt + 2Dy Dy 2 δm [Td ]2   ⊗2 + λδh mt , y, y  · δh Wt dmt (y)dmt (y  ) + hεt,t+h +

−1 0 1

=: Th5,4,4 + Ih1 + Ih2 + Jh1 + Jh2 + Jh1,2 + hεt,t+h .

(A.29)

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

199

Similarly, we get Th5,4,2 = Th5,4,4 + Ih1 + Jh1 + hεt,t+h , Th5,4,3 = Th5,4,4 + Ih2 + Jh2 + hεt,t+h , from which, together with (A.29), we deduce that Th5,4 = Jh1,2 + hεt,t+h ,

(A.30)

and then, with (A.28), 1  5,4 1  5 E Th |Ft = E Th |Ft + εt,t+h h 2h   √  δ2U  t, x + 2Wt , mt , y, y  = Tr Dy Dy 2 δm [Td ]2

(A.31)

dmt (y)dmt (y  ) + εt,t+h . From (A.17), (A.20), (A.24), (A.25), (A.26), and (A.31), we deduce that √ √     1   E U t + h, x + 2Wt+h , mt − U t + h, x + 2Wt , mt |Ft h  √ √    = Δx U (t, x + 2Wt , mt ) + 2 divy Dm U t, x + 2Wt , mt , y dmt (y) Td

 −

Td

√   Dm U t, x + 2Wt , mt , y · βt (y)dmt (y)



+2 

Td

+ [Td ]2

√    divx Dm U t, x + 2Wt , mt , y dmt (y)  √   2 Tr Dmm U t, x + 2Wt , mt , y, y  dmt (y)dmt (y  ) + εt,t+h ,

which completes the proof.



We now deduce from the previous lemma an expansion for (U (t, x, mt ))t∈[0,T ] . Theorem A.1. Under the assumption stated at the beginning of Subsection A.3.2, the process (U (t, x, mt ))t∈[0,T ] expands, for any x ∈ Td , as a semi-

−1 0 1

125-79542 Cardaliaguet MasterEquation Ch01 4P

June 11, 2019

6.125x9.25

APPENDIX

200 martingale. Namely, with probability 1 under P, for all t ∈ [0, T ], dt U (t, x, mt ) =

      2divy Dm U (t, x, mt , y) ∂t U t, x, mt + Td  − Dm U (t, x, mt , y) · βt (y) dmt (y)     2   + Tr Dmm U (t, x, mt , y, y )dmt (y)dmt (y ) dt Td Td

  √ Dm U (t, x, mt , y)dmt (y) · dWt . + 2 Td

It is worth noting that the variable x in (U (t, x, mt ))t∈[0,T ] may be replaced by a stochastic process (Xt )t∈[0,T ] with a semi-martingale expansion, in which case the global expansion of (U (t, Xt , mt ))t∈[0,T ] may be easily derived by combining the above formula with Itˆ o–Wentzell’s formula. Proof. The strategy is to apply Lemma A.3.1. To do so, observe that, the variable x being frozen, we are led back to the case when U is independent of x. So, with the same notation as in Lemma A.3.1, we get  E U (t + h, x, mt+h ) − U (t + h, x, mt )|Ft    divy Dm U (t, x, mt , y)dmt (y) = 2 d T − Dm U (t, x, mt , y) · βt (y)dmt (y) d  T  2   + Tr Dmm U (t, x, mt , y, y )dmt (y)dmt (y ) + εt,t+h h.

(A.32)

[Td ]2

Of course, this gives the absolutely continuous part only in the semi-martingale expansion of (U(t, x, m_t))_{t∈[0,T]}. To compute the martingale part, one must revisit the proof of Lemma A.3.1. Going back to (A.17), we know that, in our case, T_h^1, T_h^3, and T_h^4 are 0 (as everything works as if U was independent of x). Now, denoting by (η_{s,t})_{s,t∈[0,T]: s≤t} a family of random variables satisfying
\[
\lim_{h \searrow 0} \frac{1}{h} \sup_{s,t \in [0,T]: |s-t| \le h} \mathbb{E}\bigl[|\eta_{s,t}|^2\bigr] = 0, \tag{A.33}
\]
we can write, by (A.21) and (A.22):
\[
T_h^2 = \sqrt{2}\int_{\mathbb{T}^d} D_y \frac{\delta U}{\delta m}(t,x,m_t,y)\,dm_t(y)\cdot \delta_h W_t + \eta_{t,t+h}.
\]


Moreover, by (A.27) and (A.30): T_h^5 = η_{t,t+h}, proving that
\[
U(t+h, x, m_{t+h}) - \mathbb{E}\bigl[U(t+h, x, m_{t+h}) \,\big|\, \mathcal{F}_t\bigr]
= \sqrt{2}\int_{\mathbb{T}^d} D_y \frac{\delta U}{\delta m}(t,x,m_t,y)\,dm_t(y)\cdot \delta_h W_t + \eta_{t,t+h},
\]
for some family (η_{s,t})_{s,t∈[0,T]: s≤t} that must satisfy (A.33). With such a decomposition, it holds that E[η_{t,t+h} | F_t] = 0. Therefore, for any t ∈ [0, T] and any partition 0 = r_0 < r_1 < r_2 < ... < r_N = t, we have
\[
\sum_{i=0}^{N-1} \Bigl(U(r_{i+1}, x, m_{r_{i+1}}) - \mathbb{E}\bigl[U(r_{i+1}, x, m_{r_{i+1}}) \,\big|\, \mathcal{F}_{r_i}\bigr]\Bigr)
= \sum_{i=0}^{N-1} \Bigl(\sqrt{2}\int_{\mathbb{T}^d} D_y \frac{\delta U}{\delta m}(r_i,x,m_{r_i},y)\,dm_{r_i}(y)\cdot \bigl(W_{r_{i+1}} - W_{r_i}\bigr) + \eta_{r_i, r_{i+1}}\Bigr),
\]
with the property that
\[
\mathbb{E}\bigl[\eta_{r_i,r_{i+1}} \,\big|\, \mathcal{F}_{r_i}\bigr] = 0,
\qquad
\mathbb{E}\bigl[|\eta_{r_i,r_{i+1}}|^2\bigr] \le \pi_{r_i,r_{i+1}}\,|r_{i+1} - r_i|,
\]
where \(\lim_{h\searrow 0} \sup_{(s,t)\in[0,T]^2: |s-t|\le h} \pi_{s,t} = 0\). By a standard computation of conditional expectation, we have that
\[
\lim_{\delta \to 0} \mathbb{E}\Bigl[\Bigl|\sum_{i=0}^{N-1} \eta_{r_i,r_{i+1}}\Bigr|^2\Bigr] = 0,
\]
where δ stands for the mesh of the partition r_0, r_1, ..., r_N. As a consequence, the following limit holds true in L²:
\[
\lim_{\delta \searrow 0} \sum_{i=0}^{N-1} \Bigl(U(r_{i+1}, x, m_{r_{i+1}}) - \mathbb{E}\bigl[U(r_{i+1}, x, m_{r_{i+1}}) \,\big|\, \mathcal{F}_{r_i}\bigr]\Bigr)
= \sqrt{2}\int_0^t \int_{\mathbb{T}^d} D_m U(s, x, m_s, y)\,dm_s(y)\cdot dW_s.
\]
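For the reader's convenience, the "standard computation of conditional expectation" invoked above can be spelled out; we assume here, as the construction indicates, that each η_{r_i,r_{i+1}} is F_{r_{i+1}}-measurable. For i < j, the increment η_{r_i,r_{i+1}} is F_{r_j}-measurable, so conditioning on F_{r_j} and using E[η_{r_j,r_{j+1}} | F_{r_j}] = 0 kills the cross terms:
\[
\mathbb{E}\Bigl[\Bigl|\sum_{i=0}^{N-1}\eta_{r_i,r_{i+1}}\Bigr|^2\Bigr]
= \sum_{i=0}^{N-1}\mathbb{E}\bigl[|\eta_{r_i,r_{i+1}}|^2\bigr]
\le \sum_{i=0}^{N-1}\pi_{r_i,r_{i+1}}\,|r_{i+1}-r_i|
\le t \sup_{(s,u)\in[0,T]^2: |s-u|\le\delta}\pi_{s,u},
\]
and the right-hand side tends to 0 as the mesh δ tends to 0.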


Together with (A.32), we deduce that
\[
\lim_{\delta \searrow 0} \sum_{i=0}^{N-1} \Bigl(U(r_{i+1}, x, m_{r_{i+1}}) - U(r_{i+1}, x, m_{r_i})\Bigr)
= \int_0^t \Bigl\{\int_{\mathbb{T}^d} \bigl[2\,\mathrm{div}_y\bigl(D_m U(s,x,m_s,y)\bigr) - D_m U(s,x,m_s,y)\cdot\beta_s(y)\bigr]\,dm_s(y)
+ \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[D^2_{mm} U(s,x,m_s,y,y')\bigr]\,dm_s(y)\,dm_s(y')\Bigr\}\,ds
+ \sqrt{2}\int_0^t \int_{\mathbb{T}^d} D_m U(s,x,m_s,y)\,dm_s(y)\cdot dW_s.
\]

Finally, with probability 1, for all t ∈ [0, T],
\[
U(t, x, m_t) - U(0, x, m_0)
= \int_0^t \Bigl\{\partial_t U(s,x,m_s)
+ \int_{\mathbb{T}^d} \bigl[2\,\mathrm{div}_y\bigl(D_m U(s,x,m_s,y)\bigr) - D_m U(s,x,m_s,y)\cdot\beta_s(y)\bigr]\,dm_s(y)
+ \int_{[\mathbb{T}^d]^2} \mathrm{Tr}\bigl[D^2_{mm} U(s,x,m_s,y,y')\bigr]\,dm_s(y)\,dm_s(y')\Bigr\}\,ds
+ \sqrt{2}\int_0^t \int_{\mathbb{T}^d} D_m U(s,x,m_s,y)\,dm_s(y)\cdot dW_s,
\]

which completes the proof.


References

[1] Achdou, Y., and Capuzzo-Dolcetta, I. (2010). Mean field games: numerical methods. SIAM J. Numer. Anal., 48, 1136–1162.
[2] Achdou, Y., Camilli, F., and Capuzzo-Dolcetta, I. (2013). Mean field games: convergence of a finite difference method. SIAM J. Numer. Anal., 51, 2585–2612.
[3] Achdou, Y., and Porretta, A. (2016). Convergence of a finite difference scheme to weak solutions of the system of partial differential equations arising in mean field games. SIAM J. Numer. Anal., 54, 161–186.
[4] Achdou, Y., Buera, F. J., Lasry, J. M., Lions, P. L., and Moll, B. (2014). Partial differential equation models in macroeconomics. Philos. Trans. R. Soc. A: Math. Phys. Engin. Sci., 372(2028), 20130397.
[5] Achdou, Y., Han, J., Lasry, J. M., Lions, P. L., and Moll, B. (2014). Heterogeneous agent models in continuous time. https://economics.yale.edu/sites/default/files/moll 131105 0.pdf/
[6] Aiyagari, S. R. (1994). Uninsured idiosyncratic risk and aggregate saving. Q. J. Econ., 109(3), 659–684.
[7] Ambrosio, L., Gigli, N., and Savaré, G. (2008). Gradient Flows in Metric Spaces and in the Space of Probability Measures. Second edition. Lectures in Mathematics ETH Zürich. Basel: Birkhäuser Verlag.
[8] Ambrosio, L., and Feng, J. (2014). On a class of first order Hamilton–Jacobi equations in metric spaces. J. Differ. Eq., 256(7), 2194–2245.
[9] Ajtai, M., Komlós, J., and Tusnády, G. (1984). On optimal matchings. Combinatorica, 4(4), 259–264.
[10] Aumann, R. (1964). Markets with a continuum of traders. Econometrica, 32(1/2), 39–50.
[11] Başar, T., and Olsder, G. (1998). Dynamic Noncooperative Game Theory. SIAM.
[12] Bardi, M. (2012). Explicit solutions of some linear-quadratic mean field games. Networks Heterog. Media, 7(2), 243–261.


[13] Bardi, M., and Feleqi, E. (2016). Nonlinear elliptic systems and mean field games. NoDEA Nonlinear Differ. Eq. Appl., 23, p. 44.
[14] Bensoussan, A., and Frehse, J. (2002). Smooth solutions of systems of quasilinear parabolic equations. ESAIM: Control Optim. Calc. Variat., 8, 169–193.
[15] Bensoussan, A., and Frehse, J. (2012). Control and Nash games with mean field effect. Chin. Ann. Math. B, 34(2), 161–192.
[16] Bensoussan, A., Frehse, J., and Yam, S.C.P. (2013). Mean Field Games and Mean Field Type Control Theory. SpringerBriefs in Mathematics. New York: Springer Science+Business Media.
[17] Bensoussan, A., Frehse, J., and Yam, S.C.P. (2015). The master equation in mean field theory. J. Math. Pures Appl., 103, 1441–1474.
[18] Bensoussan, A., Frehse, J., and Yam, S.C.P. (2017). On the interpretation of the master equation. Stoc. Proc. App., 127, 2093–2137.
[19] Bewley, T. (1986). Stationary monetary equilibrium with a continuum of independently fluctuating consumers. In Contributions to Mathematical Economics in Honor of Gerard Debreu, ed. Werner Hildenbrand and Andreu Mas-Colell. Amsterdam: North-Holland.
[20] Bogachev, V. I., Krylov, N. V., and Röckner, M. (2009). Elliptic and parabolic equations for measures. Russ. Math. Surv., 64(6), 973.
[21] Bressan, A., and Shen, W. (2004). Small BV solutions of hyperbolic noncooperative differential games. SIAM J. Control Optim., 43, 104–215.
[22] Bressan, A., and Shen, W. (2004). Semi-cooperative strategies for differential games. Int. J. Game Theory, 32, 561–593.
[23] Buckdahn, R., Cardaliaguet, P., and Rainer, C. (2004). Nash equilibrium payoffs for nonzero-sum stochastic differential games. SIAM J. Control Optim., 43(2), 624–642.
[24] Buckdahn, R., Li, J., Peng, S., and Rainer, C. (2017). Mean-field stochastic differential equations and associated PDEs. Ann. Probab., 45, 824–878.
[25] Carmona, R., and Delarue, F. (2013). Probabilistic analysis of mean-field games. SIAM J. Control Optim., 51(4), 2705–2734.
[26] Carmona, R., and Delarue, F. (2015). Forward–backward stochastic differential equations and controlled McKean–Vlasov dynamics. Ann. Probab., 43, 2647–2700.


[27] Carmona, R., and Delarue, F. (2014). The master equation for large population equilibriums. In Stochastic Analysis and Applications, D. Crisan, B. Hambly, T. Zariphopoulou, eds. New York: Springer Science+Business Media, 77–128.
[28] Carmona, R., Delarue, F., and Lachapelle, A. (2013). Control of McKean–Vlasov dynamics versus mean field games. Math. Finan. Econ., 7(2), 131–166.
[29] Carmona, R., Delarue, F., and Lacker, D. (2016). Probabilistic analysis of mean field games with a common noise. Ann. Probab., 44, 3740–3803.
[30] Case, J. (1969). Toward a theory of many player differential games. SIAM J. Control, 7(2), 179–197.
[31] Chassagneux, J. F., Crisan, D., and Delarue, F. (2014). Classical solutions to the master equation for large population equilibria. arXiv preprint arXiv:1411.3009.
[32] Crandall, M.G., and Lions, P.-L. (1983). Viscosity solutions of Hamilton–Jacobi equations. Trans. Am. Math. Soc., 277(1), 1–42.
[33] Crandall, M.G., Ishii, H., and Lions, P.-L. (1992). User's guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc., 27, 1–67.
[34] Dereich, S., Scheutzow, M., and Schottstedt, R. (2013). Constructive quantization: approximation by empirical measures. Annales de l'IHP, Probabilités et Statistiques, 49(4), 1183–1203.
[35] Evans, L.C., and Souganidis, P.E. (1984). Differential games and representation formulas for solutions of Hamilton–Jacobi equations. Indiana Univ. Math. J., 282, 487–502.
[36] Feng, J., and Katsoulakis, M. (2009). A comparison principle for Hamilton–Jacobi equations related to controlled gradient flows in infinite dimensions. Arch. Ration. Mech. Anal., 192(2), 275–310.
[37] Ferreira, R., and Gomes, D. (2018). Existence of weak solutions to stationary mean-field games through variational inequalities. SIAM J. Math. Anal., 50(6), 5969–6006.
[38] Fischer, M. (2017). On the connection between symmetric N-player games and mean field games. Ann. Appl. Probab., 27(2), 757–810.
[39] Fleming, W. H. (1961). The convergence problem for differential games. J. Math. Anal. Appl., 3, 102–116.


[40] Fleming, W.H., and Souganidis, P.E. (1989). On the existence of value functions of two-player, zero-sum stochastic differential games. Indiana Univ. Math. J., 38(2), 293–314.
[41] Fleming, W. H., and Soner, H. M. (2006). Controlled Markov Processes and Viscosity Solutions. Second edition. New York: Springer Science+Business Media.
[42] Fournier, N., and Guillin, A. (2015). On the rate of convergence in Wasserstein distance of the empirical measure. Probab. Theory Relat. Fields, 162(3), 707–738.
[43] Friedman, A. (1972). Stochastic differential games. J. Differ. Eq., 11(1), 79–108.
[44] Gangbo, W., and Swiech, A. (2015). Metric viscosity solutions of Hamilton–Jacobi equations depending on local slopes. Calc. Variat. Partial Differ. Eq., 54, 1183–1218.
[45] Gangbo, W., and Swiech, A. (2015). Existence of a solution to an equation arising from the theory of mean field games. J. Differ. Eq., 259, 6573–6643.
[46] Gomes, D. A., and Mitake, H. (2015). Existence for stationary mean-field games with congestion and quadratic Hamiltonians. NoDEA Nonlinear Differ. Eq. Appl., 22(6), 1897–1910.
[47] Gomes, D. A., Patrizi, S., and Voskanyan, V. (2014). On the existence of classical solutions for stationary extended mean field games. Nonlinear Anal., 99, 49–79.
[48] Gomes, D.A., Pimentel, E., and Voskanyan, V. (2016). Regularity Theory for Mean-Field Game Systems. SpringerBriefs in Mathematics. New York: Springer Science+Business Media.
[49] Gomes, D., and Saúde, J. (2014). Mean field games models – a brief survey. Dyn. Games Appl., 4(2), 110–154.
[50] Green, E. J. (1984). Continuum and finite-player noncooperative models of competition. Econometrica, 52(4), 975–993.
[51] Guéant, O., Lasry, J.-M., and Lions, P.-L. (2011). Mean field games and applications. In Paris-Princeton Lectures on Mathematical Finance 2010. Berlin: Springer, pp. 205–266.
[52] Huang, M. (2010). Large-population LQG games involving a major player: The Nash certainty equivalence principle. SIAM J. Control Optim., 48, 3318–3353.


[53] Huang, M., Malhamé, R.P., and Caines, P.E. (2006). Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst., 6, 221–252.
[54] Huang, M., Caines, P.E., and Malhamé, R.P. (2007). Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized ε-Nash equilibria. IEEE Trans. Autom. Control, 52(9), 1560–1571.
[55] Huang, M., Caines, P.E., and Malhamé, R.P. (2007). The Nash certainty equivalence principle and McKean–Vlasov systems: an invariance principle and entry adaptation. 46th IEEE Conference on Decision and Control, 121–123.
[56] Huang, M., Caines, P.E., and Malhamé, R.P. (2007). An invariance principle in large population stochastic dynamic games. J. Syst. Sci. Complex., 20(2), 162–172.
[57] Huang, M., Caines, P. E., and Malhamé, R. P. (2010). The NCE (mean field) principle with locality dependent cost interactions. IEEE Trans. Autom. Control, 55(12), 2799–2805.
[58] Huggett, M. (1993). The risk-free rate in heterogeneous-agent incomplete-insurance economies. J. Econ. Dyn. Control, 17(5-6), 953–969.
[59] Isaacs, R. (1965). Differential Games. New York: John Wiley & Sons.
[60] Jordan, R., Kinderlehrer, D., and Otto, F. (1998). The variational formulation of the Fokker–Planck equation. SIAM J. Math. Anal., 29(1), 1–17.
[61] Kolokoltsov, V.N. (2010). Nonlinear Markov Processes and Kinetic Equations. Cambridge: Cambridge University Press.
[62] Kolokoltsov, V. N., Li, J., and Yang, W. (2011). Mean field games and nonlinear Markov processes. Preprint arXiv:1112.3744.
[63] Kolokoltsov, V. N., Troeva, M., and Yang, W. (2014). On the rate of convergence for the mean-field approximation of controlled diffusions with large number of players. Dyn. Games Appl., 4, 208–230.
[64] Kononenko, A. F. (1976). Equilibrium positional strategies in nonantagonistic differential games. Dokl. Akad. Nauk SSSR, 231(2), 285–288.
[65] Krusell, P., and Smith, A. A., Jr. (1998). Income and wealth heterogeneity in the macroeconomy. J. Polit. Econ., 106(5), 867–896.
[66] Krylov, N. V. (1980). Controlled Diffusion Processes. New York: Springer-Verlag.


[67] Krylov, N. (2011). On the Itô–Wentzell formula for distribution-valued processes and related topics. Probab. Theory Relat. Fields, 150, 295–319.
[68] Kunita, H. (1990). Stochastic Flows and Stochastic Differential Equations. Cambridge: Cambridge University Press.
[69] Lacker, D. (2016). A general characterization of the mean field limit for stochastic differential games. Probab. Theory Relat. Fields, 165, 581–648.
[70] Ladyženskaja, O.A., Solonnikov, V.A., and Ural'ceva, N.N. (1967). Linear and Quasilinear Equations of Parabolic Type. Translations of Mathematical Monographs, Vol. 23. Providence, RI: American Mathematical Society.
[71] Lasry, J.-M., and Lions, P.-L. (2006). Jeux à champ moyen. I. Le cas stationnaire. C. R. Math. Acad. Sci. Paris, 343(9), 619–625.
[72] Lasry, J.-M., and Lions, P.-L. (2006). Jeux à champ moyen. II. Horizon fini et contrôle optimal. C. R. Math. Acad. Sci. Paris, 343(10), 679–684.
[73] Lasry, J.-M., and Lions, P.-L. (2007). Large investor trading impacts on volatility. Ann. Inst. H. Poincaré Anal. Non Linéaire, 24(2), 311–323.
[74] Lasry, J.-M., and Lions, P.-L. (2007). Mean field games. Jpn. J. Math., 2(1), 229–260.
[75] Lieberman, G. M. (1996). Second Order Parabolic Differential Equations. Singapore: World Scientific.
[76] Lions, P.-L. Cours au Collège de France. www.college-de-france.fr.
[77] McKean, H.P. (1967). Propagation of chaos for a class of non linear parabolic equations. In Lecture Series in Differential Equations, Vol. 7. New York: Springer-Verlag, 41–57.
[78] Mas-Colell, A. (1984). On a theorem of Schmeidler. J. Math. Econ., 3, 201–206.
[79] Méléard, S. (1996). Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In Probabilistic Models for Nonlinear Partial Differential Equations (Montecatini Terme, 1995), 42–95. Lecture Notes in Math., 1627. Berlin: Springer-Verlag.
[80] Mischler, S., and Mouhot, C. (2013). Kac's program in kinetic theory. Inventiones Mathematicae, 193, 1–147.
[81] Mischler, S., Mouhot, C., and Wennberg, B. (2015). A new approach to quantitative propagation of chaos for drift, diffusion and jump processes. Probab. Theory Relat. Fields, 161, 1–59.
[82] Nash, J. (1951). Non-cooperative games. Ann. Math., 54, 286–295.


[83] Otto, F. (2001). The geometry of dissipative evolution equations: the porous medium equation. Commun. Partial Differ. Eq., 26(1-2), 101–174.
[84] Pardoux, E., and Răşcanu, A. (2014). Stochastic Differential Equations, Backward SDEs, Partial Differential Equations. New York: Springer Science+Business Media.
[85] Peng, S. (1992). Stochastic Hamilton–Jacobi–Bellman equations. SIAM J. Control Optim., 30, 284–304.
[86] Peng, S., and Wu, Z. (1999). Fully coupled forward–backward stochastic differential equations and applications to optimal control. SIAM J. Control Optim., 37, 825–843.
[87] Pontryagin, L.S. (1968). Linear differential games I and II. Soviet Math. Doklady, 8(3-4), 769–771 & 910–912.
[88] Pham, H. (2009). Continuous-Time Stochastic Control and Optimization with Financial Applications. Berlin: Springer-Verlag.
[89] Rachev, S.T., and Rüschendorf, L. (1998). Mass Transportation Problems. Vol. I: Theory; Vol. II: Applications. New York: Springer-Verlag.
[90] Rashid, S. (1983). Equilibrium points of non-atomic games: asymptotic results. Econ. Lett., 12(1), 7–10.
[91] Schmeidler, D. (1973). Equilibrium points of nonatomic games. J. Stat. Phys., 7, 295–300.
[92] Sznitman, A.-S. (1989). Topics in propagation of chaos. Cours de l'École d'été de Saint-Flour. Lecture Notes in Mathematics, Vol. 1464. New York: Springer-Verlag.
[93] Sorin, S. (1992). Repeated games with complete information. Handbook of Game Theory with Economic Applications, 1, 71–107.
[94] Starr, A. W., and Ho, Y.-C. (1969). Nonzero-sum differential games. J. Optim. Theory Appl., 3(3), 184–206.
[95] Villani, C. (2008). Optimal Transport, Old and New. Berlin: Springer-Verlag.
[96] Von Neumann, J., and Morgenstern, O. (2007). Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
[97] Zhang, J. (2017). Backward Stochastic Differential Equations. New York: Springer Science+Business Media.


Index

assumption, 36
  (HF1(n)), 36
  (HF2(n)), 37
  (HG1(n)), 36
  (HG2(n)), 37
chain rule
  for a function of a measure argument, 153, 191
  Itô–Wentzell formula, 86, 189
characteristics, ix, 12, 24
coercivity condition, 36
common noise
  in the master equation, 42
  in the mean field game system, 8, 40
  in the particle system, 6
continuation method, 42, 90
convergence
  convergence of the Nash system, 45
  convergence of the optimal trajectories, 46
  propagation of chaos for SDEs, 6
derivative in the Wasserstein space
  chain rule, 191
  first-order, 31
  informal definition, 20
  intrinsic, 31
  link with the derivative on the space of random variables, 175
  second-order
    definition, 32
    symmetry, 32
fixed-point theorem
  with uniqueness, 13, 102, 111, 127
  without uniqueness, 13, 51, 62
Fokker–Planck equation
  control of Fokker–Planck equations, 77
  deterministic equation, 6, 50
  McKean–Vlasov equation, 6
  stochastic equation, 86
forward–backward stochastic differential equation, 10, 42, 86, 89
Hamilton–Jacobi equation
  finite dimensional equation, 19, 50
  in the space of measures, 78
  stochastic finite dimensional equation, 86
Hamiltonian, 20
Kolmogorov equation (see also Fokker–Planck equation), 7, 19
linearized system
  first-order mean field game system, 60
  second-order mean field game system, 113
master equation
  finite-dimensional projections, 160
  first-order
    definition of a classical solution, 39
    existence and uniqueness, 39
    formal derivation, 22
    formal link with the mean field game system, 24
  second-order
    definition of a classical solution, 42
    existence and uniqueness, 43
McKean–Vlasov equation
  Fokker–Planck equation, 6
  McKean–Vlasov SDE, 14
mean field game system
  first-order
    definition of a solution, 48
    existence, 49
    linearized system, 60
  with common noise
    definition of a solution, 40
    existence, 43
    linearized system, 113
Monge–Kantorovich distance (see also Wasserstein distance), 28, 179
monotone coupling, 36
monotonicity argument, 52
Nash equilibria, 1
  in closed loop form, 14
  in open loop form, 13
Nash system
  convergence, 45
  definition, 2
  formal asymptotic, 22
  interpretation, 19
optimal trajectories
  for the mean field game system, 46
  for the Nash system, 46
  convergence, 46
potential mean field games, 77
propagation of chaos, 6, 172
systemic noise (see also common noise), 85
Wasserstein distance, 28, 178