Game Theory and Partial Differential Equations 9783110621792, 9783110619256



English Pages 234 [236] Year 2019



Pablo Blanc, Julio Daniel Rossi Game Theory and Partial Differential Equations

De Gruyter Series in Nonlinear Analysis and Applications


Editor-in-Chief
Jürgen Appell, Würzburg, Germany

Editors
Catherine Bandle, Basel, Switzerland
Alain Bensoussan, Richardson, Texas, USA
Avner Friedman, Columbus, Ohio, USA
Mikio Kato, Tokyo, Japan
Wojciech Kryszewski, Torun, Poland
Umberto Mosco, Worcester, Massachusetts, USA
Louis Nirenberg, New York, USA
Simeon Reich, Haifa, Israel
Alfonso Vignoli, Rome, Italy
Vicenţiu D. Rădulescu, Krakow, Poland

Volume 31

Pablo Blanc, Julio Daniel Rossi

Game Theory and Partial Differential Equations

Mathematics Subject Classification 2010
Primary: 35D40, 91A80, 91A05; Secondary: 35J60, 35K55

Authors
Dr. Pablo Blanc
Universidad de Buenos Aires
Dept. de Matematica, FCEYN
Pabellon 1 - Ciudad Universitaria
1428 Buenos Aires, Argentina
[email protected]

Prof. Dr. Julio Daniel Rossi
Universidad de Buenos Aires
Dept. de Matematica, FCEYN
Pabellon 1 - Ciudad Universitaria
1428 Buenos Aires, Argentina
[email protected]

ISBN 978-3-11-061925-6
e-ISBN (PDF) 978-3-11-062179-2
e-ISBN (EPUB) 978-3-11-061932-4
ISSN 0941-813X
Library of Congress Control Number: 2019939561

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2019 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com


To Maggie and Cecilia

Preface

The goal of this book is to present recent results concerning nonlinear second-order partial differential equations (PDEs) using a game theoretical approach. The analysis was motivated by the study of different variants of Tug-of-War games. These games have led to a new chapter in the rich history of results connecting differential equations and probability theory. The fundamental works by Doob, Feller, Hunt, Kakutani, Kolmogorov, and many others show the deep connection between classical potential theory and probability theory. The main idea behind this relation is that harmonic functions and martingales have something in common: the mean value formulas. This relation is also quite fruitful in the nonlinear case, and the Tug-of-War games are clear evidence of this fact. At the end of the 1980s the mathematician David Ross Richman proposed a new kind of game that lies in between the classical games introduced by Von Neumann and Morgenstern and the combinatorial games studied by Zermelo, Lasker, and Conway, among others. Here two players are in contest in an arbitrary combinatorial game (Tic-tac-toe, chess, checkers, etc.), but each one of them has a certain amount of money that modifies the rules of the game: At each turn the players bid, and the one who offers more wins the right to make the next move. In the case that both players bid the same amount, the turn can be decided by a coin toss. At the end of the 1990s this kind of game, known as Richman games, was studied in [68] and [69]. These games were translated into a diffusion problem on a graph: The nodes stand for the positions of the game and the links between the nodes are the allowed moves; a token is moved by one of the players according to who wins the bidding. The game ends when the token arrives at one of the nodes of the graph labeled as terminal, and there a certain boundary datum says how much the first player gets (the amount of money that the other player pays).
Among the several variants of these games, an interesting case arises when the turn is decided at random by tossing a fair coin, getting rid in this way of the bidding mechanism. This idea gives rise to the game called Tug-of-War introduced in [97] by Peres, Schramm, Sheffield, and Wilson. In that reference this game was studied and a novel connection with PDEs was found, more concretely, with the ∞-Laplacian, an operator that appears naturally in a completely different context since it is associated with the minimal Lipschitz extension problem; see [7]. When we deal with PDEs we have to specify the concept of solution that we are considering. The theory for second-order operators in divergence form is naturally associated with the concept of weak solutions in Sobolev spaces; however, when one deals with fully nonlinear equations that are not in divergence form, the use of viscosity solutions seems more appropriate. This notion of solution was introduced by Crandall and Lions in the 1980s. Here we have to refer to the classical reference [37]. We included a brief summary of the

viscosity solutions theory in Appendix A. There you will find some general comments concerning the theory and some results that will be used along this book. Since it was one of the triggers of this book, we now briefly describe the game introduced in [97]. The Tug-of-War game is a two-person, zero-sum game, that is, two players are in contest and the total earnings of one of them are the losses of the other. Hence, one of them, say, Player I, plays trying to maximize her expected outcome, while the other, say, Player II, is trying to minimize Player I’s outcome (or, since the game is zero-sum, to maximize his own outcome). Consider a bounded domain Ω ⊂ ℝN and a fixed ε > 0. At an initial time, a token is placed at a point x0 ∈ Ω. Players I and II play as follows: They toss a fair coin (with the same probability for heads and tails) and the winner of the toss moves the game token to any point x1 of his choice at a distance less than ε from the previous position, that is, he chooses x1 ∈ Bε(x0). Then they continue playing from x1 with the same rules: at each turn the coin is tossed again, and the winner chooses a new game state xk ∈ Bε(xk−1). This procedure yields a sequence of game states x0, x1, . . . . Once the game position leaves Ω, let us say at the τ-th step, the game ends. At that time the token will be at the final position xτ ∈ ℝN \ Ω. A final payoff function g : ℝN \ Ω → ℝ is given. At the end of the game Player II pays Player I the amount given by g(xτ), that is, Player I has earned g(xτ) while Player II has earned −g(xτ). A strategy SI for Player I is a choice of the next position xk+1 ∈ Bε(xk) of the game at every location, provided she wins the coin toss. When the two players fix their strategies the movements depend only on the coin tosses, and hence we can compute the expected payoff. Now, for each x0 ∈ Ω we can consider the expected payoff uε(x0) for the game starting at x0 assuming that both players play optimally.
To do this we just compute the infimum among all possible strategies for Player II and the supremum among strategies for Player I of the expected value (here we need to mention that we have to penalize the use of strategies that produce games that never end with positive probability). This is what we call the game value and we denote it by uε. Hence, for each ε, we have a function uε : Ω → ℝ that depends only on the initial position of the game. In [97] it is proved that there exists a continuous function u : Ω → ℝ such that uε → u as ε → 0 and that u satisfies the PDE

−Δ_∞^H u = −|∇u|⁻² ⟨D²u ∇u; ∇u⟩ = 0

in Ω,

with the boundary condition u = g on 𝜕Ω. The key argument to show this convergence result is to prove that the value of the game uε verifies an equation inside Ω that is called the Dynamic Programming Principle (DPP) in the literature. In this case, for the Tug-of-War game, the DPP reads as

uε(x) = (1/2) sup_{Bε(x)} uε + (1/2) inf_{Bε(x)} uε


for x ∈ Ω, with uε(x) = g(x) for x ∉ Ω. Then, we prove that there exists a continuous function u that is a uniform limit of a subsequence of these game values uε as ε → 0, that is, we show that uε → u

uniformly in Ω.
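Although the detailed treatment comes only in the later chapters, the DPP can already be experimented with numerically. The sketch below is an illustrative discretization, not taken from the book: a one-dimensional grid where the ε-ball moves reduce to jumps to the two neighboring nodes, with boundary values 0 and 1 chosen arbitrarily. In one dimension the iteration converges to the linear interpolation of the boundary data, the one-dimensional ∞-harmonic function.

```python
def tug_of_war_values(g_left, g_right, n, iters=5000):
    # Value iteration for the Tug-of-War DPP on a 1-d grid with n interior
    # nodes: whoever wins the coin toss moves to an adjacent node, so
    # u(i) = 1/2 * max(u(i-1), u(i+1)) + 1/2 * min(u(i-1), u(i+1)).
    u = [g_left] + [0.0] * n + [g_right]
    for _ in range(iters):
        u = ([g_left] +
             [0.5 * max(u[i - 1], u[i + 1]) + 0.5 * min(u[i - 1], u[i + 1])
              for i in range(1, n + 1)] +
             [g_right])
    return u

values = tug_of_war_values(0.0, 1.0, 9)
# in one dimension max + min of the two neighbors is just their sum, so
# the iteration converges to the linear interpolation of the boundary data
```

The one-dimensional case is special: the DPP degenerates into the average of the two neighbors, so the same iteration also computes the harmonic (here: linear) function.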

The key ingredient to prove this uniform convergence is an Arzelà–Ascoli type lemma in which one shows that the family uε is uniformly bounded and asymptotically equicontinuous (here we remark that the value functions are not continuous in general). Then, to obtain the limit equation, we just have to pass to the limit (in the viscosity sense) in the DPP to obtain that any uniform limit of the value functions uε is a viscosity solution to the limit PDE. Finally, the uniqueness of solutions to the PDE will allow us to conclude the convergence of the whole sequence. After this seminal work many versions of the game were considered and many results have been obtained. It turns out that this strategy of finding a game, showing that its value functions verify a DPP, and then passing to the limit is quite flexible and allows one to deal with several different PDEs, elliptic or parabolic, including free boundary problems. In [98] a version of the game related to the p-Laplacian is studied. Nonlocal versions of the game were proposed in [21] and [22]. Different boundary conditions were considered: Neumann boundary conditions in [2] and mixed boundary conditions in [32]. A continuous time game was presented in [11]. Let us also mention [84], [85], and [86], where different versions of the game related to the p-Laplacian are studied. All these works provided a general framework that was widely exploited later. A game related to the obstacle problem was studied in [87], one related to an operator with a gradient constraint was considered in [63], the ∞-Laplacian with a transport term in [76], a game related to the p(x)-Laplacian in [8], games related to parabolic problems in [79], [40], and [12], and we can find other variants in [48] and [91]. This approach was also useful to find different proofs of regularity results (such as Harnack’s inequality and Hölder regularity); we refer to [10], [77], [78], [102], [74], and [95].
For games related to eigenvalues of the Hessian, see [29], [26], and for a parabolic version see [25]. A survey, and a predecessor of this book, is [101]. The main ideas contained in the game theoretical approximation also lead to the study of the mean value formulas given by the DPPs, which can be seen as discrete versions of the underlying PDEs and deserve attention on their own; see [49], [55]. This subject is also related to asymptotic mean value characterizations of solutions to nonlinear PDEs (like the mean value property that characterizes harmonic functions). This subject started with [84]; see also [9], [30], [46], [65], [71], [75]. In this book we will concentrate on game theoretical arguments, and hence the asymptotic mean value characterizations for PDEs are beyond the scope of the book.

For extra related references, including, for example, applications (like image processing) and equations on graphs, among other things, see [4], [5], [43], [39], [50], [53], [106]. Motivated by these results, our goal here is to present a systematic overview emphasizing the relation between games and nonlinear PDEs. Let us now summarize the contents of this book. The book contains two main parts. The first one, which corresponds to Chapters 1 to 4, deals mainly with the basic strategy. We describe here in detail the Tug-of-War game, show some simple examples, and describe with some care the steps needed to show convergence of the game values. Chapter 1 is devoted to the study of the classical linear case, and we highlight the connection between the random walk and the Laplacian. We also include here a comment on the heat equation (covering the parabolic case). In Chapters 2, 3, and 4 we study the Tug-of-War game (with and without noise) and show that we can pass to the limit and obtain solutions to nonlinear PDEs, like the p-Laplacian and the ∞-Laplacian. We prefer to deal first with the Tug-of-War game with noise, since in this case the noise kicks the position out of the domain with full probability, and hence there is no need to penalize strategies that never end with positive probability. This fact simplifies some proofs. Next, we study the case without noise and show how one can tackle the extra difficulties that appear in this case. The next chapters, Chapters 5 to 9, contain the study of extensions of these ideas to deal with more general equations. In this second part of the book we focus on the ideas needed for the generalizations and we will skip some details, referring to the first chapters or to the appropriate references when needed.
Chapter 5 deals with mixed Dirichlet/Neumann boundary conditions (here the position chosen by the players is forced to remain inside Ω) and the obstacle problem (now one of the players can stop the game at any point and obtain the payment given by the obstacle). In Chapter 6 we study maximal operators (in this case, at every turn one of the players chooses which of two games is played). In Chapter 7 we deal with games for elliptic problems involving the eigenvalues of the Hessian matrix, D²u. We also include here a game whose values approximate solutions to the classical Pucci maximal operator. In Chapter 8 we introduce games for parabolic nonlinear PDEs (including the evolution problem given by eigenvalues of the Hessian). In this chapter we highlight the fact that the game can be used to obtain results concerning the asymptotic behavior of the solutions for large times. In the last chapter (Chapter 9) we present ideas that can be used to tackle a free boundary problem. The main difficulty here is to design a game that does not involve anticipating rules and gives in the limit an equation in which the set {u > 0} appears (the free boundary is given by 𝜕{u > 0} ∩ Ω in this case).


At the end of each chapter we have added a section (named “Comments”) where we collect some remarks on related results and previous references. Finally, in Appendix A we include results from general viscosity theory; in Appendix B we collect some probability results that we use throughout the chapters. The bibliography of this monograph does not escape the usual rule of being incomplete. In general, we have listed those papers which are close to the topics discussed here. But, even for those papers, the list is far from complete and we apologize for possible omissions. Throughout this book we want to focus on game theory, and hence we will assume extra regularity conditions on the data (such as smoothness of the domain Ω, Lipschitz regularity of the boundary datum, etc.). We point out that some results are valid under weaker conditions, but we prefer to keep some extra assumptions in order to focus on the main difficulties while avoiding subtle technicalities. It is a pleasure to acknowledge here the debt we have with our coauthors: F. Charro, L. Del Pezzo, C. Esteve, J. Garcia-Azorero, I. Gomez, P. Juutinen, J. Manfredi, J. C. Navarro, M. Parviainen, J. Pinasco, and J. V. da Silva. This monograph could not have been written without their contribution. Our thanks also go to our colleagues from probability, I. Armendariz, P. Ferrari, and N. Frevenza, for several nice discussions. Finally, we would also like to thank our families for their continuous encouragement. P. Blanc and J. D. Rossi Buenos Aires, February 2019

Acknowledgment

Partially supported by MEC (Spain), UBACYT (Argentina), and CONICET (Argentina).


Contents

Preface | VII
Acknowledgment | XIII

1 Random walks and the Laplacian | 1
1.1 The probability of hitting the exit and harmonic functions | 1
1.2 Counting the number of steps needed to reach the exit | 5
1.3 Anisotropic media | 6
1.4 The heat equation | 6
1.5 Comments | 7

2 A first glimpse of the Tug-of-War games | 9
2.1 Description of the game | 9
2.2 The ∞-Laplacian and the best Lipschitz extension problem | 12
2.3 Game value convergence | 14
2.4 Comments | 16

3 Tug-of-War with noise | 19
3.1 Description of the game | 20
3.2 Dynamic Programming Principle | 22
3.3 Game value convergence | 28
3.4 Game with running payoff | 40
3.5 Comments | 42

4 Tug-of-War | 43
4.1 Dynamic Programming Principle | 43
4.2 Game value convergence | 46
4.3 Game with running payoff | 48
4.4 A game without value | 48
4.5 Comments | 49

5 Mixed boundary conditions and the obstacle problem | 51
5.1 Mixed boundary conditions and Tug-of-War games | 51
5.1.1 Description of the game | 51
5.1.2 Game value convergence | 52
5.2 The obstacle problem for Tug-of-War games | 60
5.2.1 Description of the game | 62
5.2.2 Dynamic Programming Principle | 63
5.2.3 Game value convergence | 65
5.2.4 Convergence of the contact sets | 67
5.3 Comments | 69

6 Maximal operators | 71
6.1 Unbalanced Tug-of-War games with noise | 74
6.2 Dynamic Programming Principle | 76
6.3 Game value convergence | 90
6.4 Comments | 99

7 Games for eigenvalues of the Hessian | 101
7.1 A random walk for λj | 101
7.1.1 Preliminaries on viscosity solutions | 105
7.1.2 Description of the game | 106
7.1.3 Geometric conditions on 𝜕Ω | 112
7.2 Games for Pucci’s operators | 113
7.2.1 Properties of the game values and convergence | 115
7.3 Comments | 126

8 Parabolic problems | 127
8.1 Games for the parabolic p-Laplacian | 127
8.1.1 Viscosity solutions for parabolic problems | 127
8.1.2 (p, ε)-parabolic functions and Tug-of-War games | 128
8.1.3 Game value convergence | 132
8.2 Games for parabolic problems with eigenvalues of the Hessian | 139
8.2.1 Preliminaries on viscosity solutions | 142
8.2.2 Parabolic random walk for λj | 143
8.3 Asymptotic behavior as t → ∞ | 151
8.3.1 PDE arguments | 154
8.3.2 Probabilistic arguments | 160
8.4 Comments | 167

9 Free boundary problems | 169
9.1 Gradient constraints | 169
9.1.1 Play or sell the turn Tug-of-War | 170
9.1.2 Dynamic programming principle | 172
9.1.3 Game value convergence | 175
9.2 A free boundary problem | 181
9.2.1 Viscosity solutions | 183
9.2.2 Pay or Leave Tug-of-War | 184
9.2.3 Dynamic Programming Principle | 186
9.2.4 Game value convergence | 186
9.3 Comments | 191

A Viscosity solutions | 193
A.1 Basic definitions | 193
A.1.1 Elliptic problems | 193
A.1.2 Parabolic problems | 196
A.1.3 Dirichlet boundary conditions | 200
A.2 Uniqueness | 200
A.3 Existence | 201

B Probability theory | 203
B.1 Stochastic processes | 203
B.2 Optional stopping theorem | 204

Bibliography | 207
Notations | 213
Index | 215

1 Random walks and the Laplacian

The aim of this chapter is to begin the study of the deep relation that exists between probability theory and partial differential equations (PDEs). We will look at the relation between some stochastic processes and linear operators; in particular, at the close relation that exists between the random walk and the Laplacian. This relation has its roots in mean value formulas that hold both in the probabilistic setting and in the pure PDE environment.

1.1 The probability of hitting the exit and harmonic functions

Let us begin by considering a bounded and smooth two-dimensional domain Ω ⊂ ℝ² (the assumption that the domain is two-dimensional is not really relevant here, but it helps to simplify a little the resulting formulas) and assume that the boundary 𝜕Ω is decomposed into two parts, Γ1 and Γ2 (that is, Γ1 ∪ Γ2 = 𝜕Ω with Γ1 ∩ Γ2 = ∅). We begin at a position (x, y) ∈ Ω and ask the following question: Assume that you move completely at random, beginning at (x, y) (we assume that we are in a homogeneous environment, that we do not privilege any direction, and, in addition, that at every step the particle moves independently of its past history). What is the probability u(x, y) of hitting the first part of the boundary, Γ1, the first time that the particle hits the boundary? We will call Γ1 the “open part” of the boundary; when we hit this part we can “exit” the domain. We will call Γ2 the “closed part” of the boundary; when we hit it we are destroyed. The problem of describing this random movement in a precise mathematical way is the central subject of Brownian motion. It originated in 1827, when the botanist Robert Brown observed this type of random movement in pollen particles suspended in water. A simple way to get some insight into the question runs as follows: First, we simplify the problem and approximate the movement by random increments of step ε in each of the axial directions, with ε > 0 small. From (x, y) the particle can move to one of the four points (x + ε, y),

(x − ε, y),

(x, y + ε),

or (x, y − ε),

each movement being chosen at random with probability 1/4. Starting at (x, y), let uε (x, y) be the probability of hitting the exit part Γ1 + Bε (0) the first time that 𝜕Ω + Bε (0) is hit when we move on the lattice of side ε. Observe that we need to enlarge a little the boundary to capture points on the lattice of size ε (that do not necessarily lie on 𝜕Ω).
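The ε-step walk just described is easy to simulate. The sketch below uses an illustrative setup that is not from the text: Ω is the unit square and Γ1 is its right side. By symmetry, a walk started at the center exits through each of the four sides with probability 1/4, which gives a concrete value to check the Monte Carlo estimate against.

```python
import random

def exit_probability(x0, y0, eps, n_walks, seed=0):
    # Monte Carlo estimate of u_eps(x0, y0): the probability that the
    # four-direction walk of step eps, started at (x0, y0) inside the
    # unit square, first leaves through the right side {x >= 1} (Gamma_1).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        x, y = x0, y0
        while 0 < x < 1 and 0 < y < 1:
            dx, dy = rng.choice([(eps, 0), (-eps, 0), (0, eps), (0, -eps)])
            x, y = x + dx, y + dy
        if x >= 1:
            hits += 1
    return hits / n_walks

p = exit_probability(0.5, 0.5, 0.05, 2000)
```

For this symmetric configuration the estimate should be close to 1/4, the value at the center of the harmonic function solving the limit problem (1.4) with these boundary data.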

Applying conditional expectations we obtain that uε verifies the mean value formula

uε(x, y) = (1/4) uε(x + ε, y) + (1/4) uε(x − ε, y) + (1/4) uε(x, y + ε) + (1/4) uε(x, y − ε).   (1.1)

That is,

0 = {uε(x + ε, y) − 2uε(x, y) + uε(x − ε, y)} + {uε(x, y + ε) − 2uε(x, y) + uε(x, y − ε)}.   (1.2)

Now, assume that uε converges as ε → 0 to a continuous function u locally uniformly in Ω. Note that this convergence can be proved rigorously by applying the techniques that we will use in the next chapters. We also remark here that (1.2) appears when one uses classical numerical analysis (finite differences) to discretize the Laplacian. Therefore, convergence of these approximations can also be found in numerical analysis textbooks like [90]. Next, we want to identify the limit PDE that the uniform limit of uε verifies. To this end, we cannot pass to the limit in the derivatives of uε (we remark that in general uε is discontinuous). Instead we will use viscosity techniques (see Appendix A). Let ϕ be a smooth function such that u − ϕ has a strict minimum at (x0, y0) ∈ Ω. By the uniform convergence of uε to u there are points (xε, yε) such that

(uε − ϕ)(xε, yε) ≤ (uε − ϕ)(x, y) + o(ε²),

(x, y) ∈ Ω,

and (xε , yε ) → (x0 , y0 ) as ε → 0. Note again that uε is not necessarily continuous. Hence, from (1.2) at the point (xε , yε ) and using that uε (x, y) − uε (xε , yε ) ≥ ϕ(x, y) − ϕ(xε , yε ) + o(ε2 ),

(x, y) ∈ Ω,

we get

0 ≥ {ϕ(xε + ε, yε) − 2ϕ(xε, yε) + ϕ(xε − ε, yε)} + {ϕ(xε, yε + ε) − 2ϕ(xε, yε) + ϕ(xε, yε − ε)} + o(ε²).   (1.3)

Now, we just observe that

{ϕ(xε + ε, yε) − 2ϕ(xε, yε) + ϕ(xε − ε, yε)} = ε² (𝜕²ϕ/𝜕x²)(xε, yε) + o(ε²),
{ϕ(xε, yε + ε) − 2ϕ(xε, yε) + ϕ(xε, yε − ε)} = ε² (𝜕²ϕ/𝜕y²)(xε, yε) + o(ε²).


Hence, substituting in (1.3), dividing by ε², and taking the limit as ε → 0 we get

0 ≥ (𝜕²ϕ/𝜕x²)(x0, y0) + (𝜕²ϕ/𝜕y²)(x0, y0).

Therefore, a uniform limit of the approximate values uε, u, has the following property: Each time that a smooth function ϕ touches u from below at a point (x0, y0) the derivatives of ϕ must satisfy

0 ≥ (𝜕²ϕ/𝜕x²)(x0, y0) + (𝜕²ϕ/𝜕y²)(x0, y0).

An analogous argument considering a smooth function ψ such that u − ψ has a strict maximum at (x0, y0) ∈ Ω shows a reverse inequality. Therefore, each time that a smooth function ψ touches u from above at a point (x0, y0) the derivatives of ψ must verify

0 ≤ (𝜕²ψ/𝜕x²)(x0, y0) + (𝜕²ψ/𝜕y²)(x0, y0).

But at this point we realize that this is exactly the definition of u being a viscosity solution (see Appendix A for a precise definition) to the Laplace equation

Δu = 𝜕²u/𝜕x² + 𝜕²u/𝜕y² = 0.

Hence, we obtained that the uniform limit of the sequence of solutions to the approximated problems uε, u, is the unique viscosity solution (which is also a classical solution in this case) to the following boundary value problem:

−Δu = 0   in Ω,
  u = 1   on Γ1,     (1.4)
  u = 0   on Γ2.

The boundary conditions can be easily obtained from the fact that uε ≡ 1 in a neighborhood (of width ε) of Γ1 and uε ≡ 0 in a neighborhood of Γ2. In higher dimensions, that is, for Ω ⊂ ℝN, the discretization method described above leads in the same simple way to viscosity solutions to the Laplace equation, and then to the fact that exit probabilities are harmonic functions. Note that we have only required uniform convergence to get the result, and hence no requirement is made on derivatives of the approximating sequence uε. Moreover, we do not need uε to be continuous. Now, let us consider a different discrete model in Ω ⊂ ℝN, one where at every step the movement is completely random and equidistributed in the ball Bε(x0). Then, by the same conditional expectation argument used before, we have uε(x0) =

(1/|Bε(x0)|) ∫_{Bε(x0)} uε(x) dx = ⨍_{Bε(x0)} uε(x) dx.

Again one can take the limit as ε → 0 and obtain that the uniform limit of uε, u, is harmonic, i. e., verifies Δu = 0, in the viscosity sense. Let us do the calculations needed to see that the limit verifies the equation. As before, let ϕ be a smooth function such that u − ϕ has a strict minimum at x0 ∈ Ω. Consider the second-order Taylor expansion of ϕ,

ϕ(x) = ϕ(x0) + ∇ϕ(x0) ⋅ (x − x0) + (1/2) ⟨D²ϕ(x0)(x − x0), (x − x0)⟩ + o(|x − x0|²).

Averaging both sides of the classical Taylor expansion of ϕ at x0 we get

⨍_{Bε(x0)} ϕ(x) dx = ϕ(x0) + (1/2) ∑_{i,j=1}^{N} (𝜕²ϕ/𝜕xi𝜕xj)(x0) ⨍_{Bε(0)} xi xj dx + o(ε²).

Now, we note that

⨍_{Bε(0)} (1/2) xi xj dx = 0

when i ≠ j. Using the symmetry of the ball, we compute

⨍_{Bε(0)} xi² dx = (1/N) ⨍_{Bε(0)} |x|² dx = (1/(N|Bε|)) ∫₀^ε ∫_{𝜕Bρ} ρ² dS dρ = (|𝜕B1|/(N|B1|ε^N)) ∫₀^ε ρ^{N+1} dρ = (|𝜕B1| ε²)/(N(N + 2)|B1|) = ε²/(N + 2).

As in (1.3), we have

ϕ(xε) ≥ ⨍_{Bε(xε)} ϕ(x) dx.

Hence, we end up with

0 ≥ ⨍_{Bε(xε)} ϕ(x) dx − ϕ(xε) = (ε²/(2(N + 2))) Δϕ(xε) + o(ε²).

Therefore, dividing by ε² and passing to the limit as ε → 0 we arrive at

0 ≥ (1/(2(N + 2))) Δϕ(x0).   (1.5)
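The key average ⨍_{Bε(0)} xi² dx = ε²/(N + 2) can be checked numerically. The sketch below is an independent verification by rejection sampling from the enclosing cube; the dimension N = 3 and radius ε = 0.5 are arbitrary choices for the test.

```python
import random

def average_x1_squared(N, eps, n_samples=100_000, seed=1):
    # Monte Carlo estimate of the average of x_1^2 over the ball B_eps(0)
    # in R^N, sampling uniformly via rejection from the cube [-eps, eps]^N
    rng = random.Random(seed)
    total, accepted = 0.0, 0
    while accepted < n_samples:
        x = [rng.uniform(-eps, eps) for _ in range(N)]
        if sum(t * t for t in x) <= eps * eps:
            total += x[0] * x[0]
            accepted += 1
    return total / n_samples

est = average_x1_squared(3, 0.5)
# the computation in the text predicts eps^2 / (N + 2) = 0.25 / 5 = 0.05
```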


An analogous argument considering ψ a smooth function such that u − ψ has a strict maximum at x0 ∈ Ω completes the proof. Another way to understand this strong relation between probabilities and the Laplacian is through the mean value property of harmonic functions. In the same context of the problem considered above, consider a continuous random movement (not discrete as we have done before). Its precise definition would be given by the Brownian motion (which we do not include here). Assume that a ball Br (x0 ) of radius r and centered at a point x0 is contained in Ω. Starting at x0 , the probability density of hitting first a given point on the sphere 𝜕Br (x0 ) is constant on the sphere, that is, it is uniformly distributed on the sphere. Therefore, the probability u(x0 ) of exiting through Γ1 starting at x0 is the average of the exit probabilities u on the sphere; here we are using again the formula of conditional probabilities. That is, u satisfies the mean value property on spheres, i. e., u(x0 ) =

(1/|𝜕Br(x0)|) ∫_{𝜕Br(x0)} u(x) dS = ⨍_{𝜕Br(x0)} u(x) dS

with r small enough. It is well known that this property is equivalent to u being harmonic.
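The mean value property on spheres is easy to verify numerically for an explicit harmonic function. The sketch below is illustrative: the test function u(x, y) = x² − y² and the center and radius are our own choices; the average over the circle reproduces the center value.

```python
import math

def circle_average(u, x0, y0, r, n=10_000):
    # approximate the average of u over the circle of radius r centered
    # at (x0, y0) by a uniform discretization of the angle
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        total += u(x0 + r * math.cos(theta), y0 + r * math.sin(theta))
    return total / n

u = lambda x, y: x * x - y * y    # harmonic: u_xx + u_yy = 2 - 2 = 0
avg = circle_average(u, 0.3, 0.7, 0.1)
# avg agrees with u(0.3, 0.7) = -0.40 up to discretization error
```

For a non-harmonic function, say x² + y², the same average differs from the center value by r²-order terms, which is the quantitative content of the mean value characterization.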

1.2 Counting the number of steps needed to reach the exit

Another motivating problem is the following: With the same setting as before (Ω a bounded smooth domain in ℝ²), we would like to compute the expected time, which we call T, that we have to spend starting at (x, y) before hitting the boundary 𝜕Ω for the first time. We can proceed exactly as before, computing the time of the random walk at the discrete level, that is, on the lattice of size ε. This amounts to adding a constant (the unit of time that we spend in making each movement), which depends on the step ε, to the right-hand side of (1.1). We have

Tε(x, y) =

(1/4) Tε(x + ε, y) + (1/4) Tε(x − ε, y) + (1/4) Tε(x, y + ε) + (1/4) Tε(x, y − ε) + t(ε).

That is, 0 = {T ε (x + ε, y) − 2T ε (x, y) + T ε (x − ε, y)}

+ {T ε (x, y + ε) − 2T ε (x, y) + T ε (x, y − ε)} + t(ε).

Proceeding as we did before, and since we need to divide by ε², a natural choice is to take t(ε) of order ε². Choosing

t(ε) = Kε²

and letting ε → 0, we conclude that a uniform limit of the approximate solutions Tε, T, is the unique solution to

−ΔT = 4K   in Ω,
  T = 0    on 𝜕Ω.     (1.6)

The boundary condition is natural since if we begin on the boundary the expected time needed to reach it is zero. From the previous probabilistic interpretations for the solutions of problems (1.4) and (1.6) as limits of random walks, one can imagine a probabilistic model for which the solution of the limit process is a solution to the general Poisson problem

−Δu = f   in Ω,
  u = g   on 𝜕Ω.     (1.7)

In this general model the functions f and g can be thought of as costs that one pays, respectively, along the random movement (at each step of the random walk) and at the stopping time on the boundary (g is a final payoff).
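The expected exit time can be simulated with the same walk as the exit probability. The sketch below uses an illustrative setup not from the text (unit square, t(ε) = ε², i. e., K = 1): the average of ε² times the number of steps approximates T at the starting point, the solution of (1.6). For a walk started at the center of the unit square this value is numerically around 0.29.

```python
import random

def mean_exit_time(x0, y0, eps, n_walks, seed=0):
    # Monte Carlo estimate of T_eps(x0, y0): each step of the walk costs
    # t(eps) = eps^2 units of time (so K = 1), and the walk stops on
    # leaving the unit square
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y, steps = x0, y0, 0
        while 0 < x < 1 and 0 < y < 1:
            dx, dy = rng.choice([(eps, 0), (-eps, 0), (0, eps), (0, -eps)])
            x, y, steps = x + dx, y + dy, steps + 1
        total += steps * eps * eps
    return total / n_walks

t_center = mean_exit_time(0.5, 0.5, 0.05, 2000)
```

Note that as ε shrinks, the expected number of steps grows like 1/ε² while each step costs ε², which is exactly why the limit is finite.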

1.3 Anisotropic media

Suppose now that the medium in which we perform our random movements is neither isotropic (that is, it is directionally dependent) nor homogeneous (that is, it differs from one point to another). We can imagine a random discrete movement as follows. We move from a point (x, y) to four possible points at distance ε located on two orthogonal axes forming a given angle α with the horizontal, with probabilities q/2 for the two points on the first axis and (1 − q)/2 for the two points on the second axis. Both the angle α (which measures the orientation of the axes) and the probability q depend on the point (x, y). After the same analysis as above, we encounter now the general elliptic equation

∑_{i,j=1}^{2} aij(x, y) u_{xi xj}(x, y) = 0.
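The text does not write the coefficients aij explicitly; working out the rotated second-order Taylor expansion for the step just described suggests a11 = q cos²α + (1 − q) sin²α, a22 = q sin²α + (1 − q) cos²α, and a12 = a21 = (2q − 1) cos α sin α (our computation, up to the common factor ε²/2 that is divided out in the limit). The sketch below checks this identity on a quadratic test function, for which the Taylor expansion is exact; all parameter values are arbitrary choices.

```python
import math

def mean_increment(u, x, y, eps, alpha, q):
    # average of u over the four anisotropic moves minus u(x, y):
    # steps of size eps along the rotated axes v = (cos a, sin a) and
    # w = (-sin a, cos a), with probabilities q/2 and (1 - q)/2
    c, s = math.cos(alpha), math.sin(alpha)
    return (q / 2 * (u(x + eps * c, y + eps * s) + u(x - eps * c, y - eps * s))
            + (1 - q) / 2 * (u(x - eps * s, y + eps * c) + u(x + eps * s, y - eps * c))
            - u(x, y))

# quadratic test function with Hessian [[2, 3], [3, -4]]
u = lambda x, y: x * x + 3 * x * y - 2 * y * y
eps, alpha, q = 0.01, 0.7, 0.3
c, s = math.cos(alpha), math.sin(alpha)
a11 = q * c * c + (1 - q) * s * s
a22 = q * s * s + (1 - q) * c * c
a12 = (2 * q - 1) * c * s
lhs = mean_increment(u, 0.2, -0.1, eps, alpha, q)
rhs = 0.5 * eps * eps * (a11 * 2 + 2 * a12 * 3 + a22 * (-4))
# for a quadratic u the two sides agree exactly (up to rounding)
```

Note that for q = 1/2 the cross term a12 vanishes and a11 = a22 = 1/2, recovering (half) the isotropic Laplacian regardless of α.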

1.4 The heat equation

Now assume that we are on the real line ℝ and consider that we move in a two-dimensional lattice as follows: When we are at the point (x0, t0), the time increases by δt := ε² and the spatial position moves with an increment of size δx = ε and goes to x0 − ε or to x0 + ε with the same probability. In this way the new points in the lattice that can be reached starting from (x0, t0) are (x0 − ε, t0 − ε²) and (x0 + ε, t0 − ε²), each one with probability 1/2. As we will see, the choice δt = (δx)² is made to ensure that a certain limit as ε → 0 exists. Let us start at x = 0, t = 0 and let uε(x, t) be the probability that we are at x at time t (here x = kε and t = lε² is a point in the two-dimensional lattice). As in the previous section, conditional probabilities give the identity

uε(x, t) = (1/2) uε(x − ε, t − ε²) + (1/2) uε(x + ε, t − ε²).

That is,

(uε(x, t) − uε(x, t − ε²))/ε² = (1/ε²) {(1/2) uε(x − ε, t − ε²) + (1/2) uε(x + ε, t − ε²) − uε(x, t − ε²)}.

Now, as before, we let ε → 0 and, assuming uniform convergence (which can be proved!), we arrive at the fact that the limit should be a viscosity solution to

ut(x, t) = (1/2) uxx(x, t).

It is at this point where the relation δt = (δx)² is needed. In more dimensions with the same ideas one arrives at viscosity solutions to the classical heat equation,

ut(x, t) = (1/2) Δu(x, t).
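This convergence can be observed directly on the exact law of the discrete walk, which is binomial. The sketch below (parameters are an arbitrary choice for the test) compares the probabilities after l steps with the Gaussian profile times the lattice spacing 2ε, since reachable sites at a fixed time are 2ε apart.

```python
import math

def walk_pmf(l):
    # exact law of the symmetric +-1 walk after l steps:
    # P(S_l = k) = C(l, (l + k) / 2) / 2^l for k with the parity of l
    return {k: math.comb(l, (l + k) // 2) / 2 ** l
            for k in range(-l, l + 1, 2)}

eps = 0.05
l = 400                        # number of steps, so t = l * eps^2 = 1.0
t = l * eps * eps
kernel = lambda x: math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)
# reachable sites are 2*eps apart, so P(S_l = k) ~ 2 * eps * kernel(k * eps)
worst = max(abs(p - 2 * eps * kernel(k * eps)) for k, p in walk_pmf(l).items())
```

The pointwise gap shrinks as ε → 0 with t = lε² fixed, which is the local central limit theorem behind the heat equation limit.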

1.5 Comments

We want to remark that two facts are crucial in the previous analysis. The first one is the formula for the discrete version of the problem obtained using conditional expectations, (1.1), and the second one is the use of the theory of viscosity solutions to perform the passage to the limit without asking for more than uniform convergence. These ideas are going to be used again in the next chapters. One can show the required uniform convergence from the following two facts: First, the values uε are uniformly bounded (they all lie between 0 and 1 in the case of the problem of exiting the domain through Γ1, and between the maximum and the minimum of the datum g in the general case assuming f ≡ 0); second, the family uε is equicontinuous for points that are not too close to each other (see the following chapters); this can be proved using coupling methods. In fact, one can couple a path of the process starting at x with a mimicking path starting at y, and when one of the two paths hits the boundary the other is at a position that is close (at a distance smaller than |x − y|) to the boundary. It remains to prove an estimate, uniform in ε, that says that "given η > 0 there exists δ > 0 such that if

one begins close to the boundary Γ1 (with dist(x0, Γ1) < δ), then the probability of hitting this part of the boundary is bounded below by 1 − η" (and of course an analogous statement holds for positions that start close to Γ2). For this argument to work in the general case one can impose that g is uniformly continuous (for example, as we will do in the following chapters, we can assume that g is Lipschitz).

The initial condition for the heat equation in Section 1.4 is u(x, 0) = δ0 and the solution can be explicitly obtained as the Gaussian profile

u(x, t) = (1/√(2πt)) e^{−x²/(2t)}.

For an initial distribution of particles given by u(x, 0) = u0(x) we get a simple formula for the solution,

u(x, t) = ∫_ℝ (1/√(2πt)) e^{−y²/(2t)} u0(x − y) dy = ((1/√(2πt)) e^{−x²/(2t)}) ∗ u0(x).

Note that in general it is not easy to obtain explicit solutions for nonlinear PDEs. Also note that the identities obtained using conditional expectations are the same as those that correspond to the discretization of the equations using finite differences. This gives a well-known second-order numerical scheme for the Laplacian. Hence, we remark again that the probability arguments that we present here can be used to obtain numerical schemes for PDEs.
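As a quick sanity check of the Gaussian profile (our own test; the evaluation point and the finite-difference increment are arbitrary choices), one can verify numerically that it solves ut = (1/2)uxx:

```python
import math

# Finite-difference check that the Gaussian heat kernel solves u_t = (1/2) u_xx.
def gaussian(x, t):
    return math.exp(-x**2 / (2 * t)) / math.sqrt(2 * math.pi * t)

x, t, h = 0.3, 1.0, 1e-3
u_t = (gaussian(x, t + h) - gaussian(x, t - h)) / (2 * h)            # central difference in t
u_xx = (gaussian(x + h, t) - 2 * gaussian(x, t) + gaussian(x - h, t)) / h**2
# u_t ≈ (1/2) u_xx, with an O(h^2) truncation error
```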

2 A first glimpse of the Tug-of-War games

The aim of this chapter is to introduce Tug-of-War games and their relation with nonlinear PDEs. We begin by describing the original Tug-of-War game introduced in [97]. Later, we review some results concerning the limit PDE, which is given by a nonlinear elliptic operator called the ∞-Laplacian. This game gives a way to approximate ∞-harmonic functions (solutions to the ∞-Laplacian equation) in the same way as problems involving the random walk described in the previous chapter approximate harmonic functions. We want this chapter to be a simple introduction to the general framework connecting games and nonlinear PDEs that we describe in this book. Hence, let us start with an informal presentation of the game, leaving the treatment of hard technical details for the next two chapters. At this point we want to stress that we prefer to introduce the Tug-of-War games in the original version, which is the simpler one. We warn the reader that for this version some technical difficulties arise (and, as we mentioned, we prefer to treat them later).

2.1 Description of the game

Tug-of-War is a two-person, zero-sum game, that is, two players are in contest and the total earnings of one are the losses of the other. Hence, one of them, say Player I, plays trying to maximize his expected outcome, while the other, say Player II, is trying to minimize Player I's outcome (or, since the game is zero-sum, to maximize his own outcome). Here we describe briefly this game, introduced by Peres, Schramm, Sheffield, and Wilson in [97].

Consider a bounded domain Ω ⊂ ℝN and a fixed ε > 0. At an initial time, a token is placed at a point x0 ∈ Ω. Players I and II play as follows. They toss a fair coin (with the same probability for heads and tails) and the winner of the toss moves the game token to any x1 ∈ Bε(x0) of his choice. Then they continue playing from x1. At each turn, the coin is tossed again, and the winner chooses a new game state xk ∈ Bε(xk−1). This procedure yields a sequence of game states x0, x1, . . . . Once the game position leaves Ω, let us say at the τ-th step, the game ends. At that time the token will be outside Ω, in ℝN \ Ω. A final payoff function g : ℝN \ Ω → ℝ is defined over ℝN \ Ω. At the end Player II pays Player I the amount given by g(xτ), that is, Player I has earned g(xτ) while Player II has earned −g(xτ).

A strategy SI for Player I is a collection of measurable mappings SI = {S_I^k}_{k=0}^∞ such that the next game position is

S_I^k(x0, x1, . . . , xk) = x_{k+1} ∈ Bε(xk)

if Player I wins the toss given a partial history (x0, x1, . . . , xk). Similarly, Player II plays according to a strategy SII. For each x0 ∈ Ω we can consider the expected payoff uε(x0) for the game starting at x0, assuming that both players play optimally. This is what we call the game value. We postpone its precise definition and treatment. Let us compute this game value in two simple examples.

Example 2.1. We consider Ω = (0, 1), ε = 1/2, and the final payoff given by g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0, as shown in Figure 2.1.

Figure 2.1: The board for a simple Tug-of-War game with Ω = (0, 1).

In this game the optimal strategies for the players are clear. Player I will move the token to the right, trying to reach a point with g(x) = 1, while Player II will do the opposite, moving to the left. If xn ∈ (0, 1/2) and Player II gets to move, he will move the token to a point with g(xn+1) = 0. On the other hand, if Player I gets to move, she will move the token to a point xn+1 ∈ (1/2, 1). So, when xn ∈ (0, 1/2), with probability 1/2 we will get g(xn+1) = 0 and with probability 1/2 we will get xn+1 ∈ (1/2, 1). Hence the game value in (0, 1/2) is half the value in (1/2, 1). Something similar occurs when xn ∈ (1/2, 1): with probability 1/2 we will get g(xn+1) = 1 and with probability 1/2 we will get xn+1 ∈ (0, 1/2). Hence the game value in (1/2, 1) is the average between 1 and the game value in (0, 1/2). We obtain that uε(x) = 1/3 for x ∈ (0, 1/2) and uε(x) = 2/3 for x ∈ (1/2, 1). Finally, if xn = 1/2, with probability 1/2 we will have xn+1 ∈ (0, 1/2) and also with probability 1/2 we will have xn+1 ∈ (1/2, 1). Hence, we obtain uε(1/2) = (1/2)(1/3) + (1/2)(2/3) = 1/2. The game value is the one depicted in Figure 2.2.

Example 2.2. We consider Ω = (0, 1), ε = 2/7, and the final payoff given by g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0. The game value is the one shown in Figure 2.3.

Exercise 2.3. Compute the game value for Ω = (0, 1), ε = 2/5, and the final payoff given by g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0.

Let us observe that in one dimension it is easy to know the optimal strategies for the players. This is not necessarily true in higher dimensions. For example, in general, pointing to the boundary point with the highest payoff will not be the best strategy. Although we do not know where players should move, we know that at every round they will move attempting to maximize their own earnings. That is, if Player I moves, she aims to move the token to a point that maximizes the expected final payoff. In the same way, Player II tries to move in such a way that minimizes the expected payoff. By considering these two cases and the fact that each one has probability one half,


Figure 2.2: Game value for the Tug-of-War game with Ω = (0, 1), ε = 1/2, and final payoff g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0.

Figure 2.3: Game value for the Tug-of-War game with Ω = (0, 1), ε = 2/7, and final payoff g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0.

we can conjecture (later we will prove rigorously that this is indeed the case) that the game value satisfies

uε(x) = (1/2) sup_{Bε(x)} uε + (1/2) inf_{Bε(x)} uε      (DPP)

for every x ∈ Ω. This kind of formula is called the Dynamic Programming Principle (DPP) in the literature and it provides a useful tool in many applications. This equation (the DPP), in some sense, plays the role of the mean value property for harmonic functions in the nonlinear scenario (the ∞-harmonic case, see below).

This principle turns out to be an important qualitative property of the approximations of ∞-harmonic functions. It has provided a new tool to approach problems in the analysis of PDEs, like uniqueness of viscosity solutions; see [3]. It is also an important tool to construct convergent numerical methods for this kind of problem; see [92] and [16].

2.2 The ∞-Laplacian and the best Lipschitz extension problem

The ∞-Laplacian is a nonlinear degenerate elliptic operator, usually denoted by Δ∞, given by

Δ∞u := ⟨D²u∇u; ∇u⟩ = ∑_{i,j=1}^{N} (𝜕u/𝜕x_i)(𝜕u/𝜕x_j)(𝜕²u/𝜕x_i𝜕x_j),

and arises from taking the limit as p → ∞ in the p-Laplacian operator in the viscosity sense; see [7] and [17]. In fact, let us present a formal derivation. First, expand (formally) the p-Laplacian, i.e.,

Δp u = div(|∇u|^{p−2}∇u)
     = |∇u|^{p−2}Δu + (p − 2)|∇u|^{p−4} ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j}
     = (p − 2)|∇u|^{p−4} { (1/(p − 2)) |∇u|²Δu + ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j} },

and next, using this formal expansion, pass to the limit in the equation Δp u = 0, sending p → ∞, to obtain

Δ∞u = ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j} = ⟨D²u∇u; ∇u⟩ = 0.

Note that this computation can be made rigorous in the viscosity sense. Also note that we can consider the normalized (1-homogeneous) p-Laplacian and ∞-Laplacian, given by

Δ_p^H u = (1/(p − 2)) Δu + |∇u|^{−2} ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j}

and

Δ_∞^H u = |∇u|^{−2} ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j} = ⟨D²u (∇u/|∇u|); (∇u/|∇u|)⟩,

respectively. In this way we can see the ∞-Laplacian as the second derivative in the direction of the gradient. Viscosity solutions to Δp u = 0 and to Δ_p^H u = 0 coincide. Also, viscosity solutions to Δ∞u = 0 and to Δ_∞^H u = 0 coincide. We refer to [62].


The ∞-Laplacian operator appears naturally when one considers absolutely minimizing Lipschitz extensions of a boundary function g; see [56] and also the survey [7]. A fundamental result of Jensen [56] establishes that the Dirichlet problem for Δ∞ is well posed in the viscosity sense. Solutions to −Δ∞u = 0 (which are called infinity harmonic functions) are also used in several applications, for instance, in optimal transportation and image processing (see, e.g., [44], [47], and the references therein). Also the eigenvalue problem related to the ∞-Laplacian has been exhaustively studied; see [23], [33], [57], [60], and [61].

Let us recall the definition of an absolutely minimizing Lipschitz extension. Let g : 𝜕Ω → ℝ. We denote by L(g, 𝜕Ω) the smallest Lipschitz constant of g in 𝜕Ω, i.e.,

L(g, 𝜕Ω) := sup_{x,y∈𝜕Ω} |g(x) − g(y)| / |x − y|.

If we are given a Lipschitz function g : 𝜕Ω → ℝ, i.e., L(g, 𝜕Ω) < +∞, then it is well known that there exists a minimal Lipschitz extension (MLE) of g to Ω, that is, a function h : Ω → ℝ such that h|𝜕Ω = g and L(g, 𝜕Ω) = L(h, Ω). Extremal extensions were explicitly constructed by McShane [88] and Whitney [107]. They are given by

Ψ(g)(x) := inf_{y∈𝜕Ω} (g(y) + L(g, 𝜕Ω)|x − y|),   x ∈ Ω,

and

Λ(g)(x) := sup_{y∈𝜕Ω} (g(y) − L(g, 𝜕Ω)|x − y|),   x ∈ Ω.

Both Ψ(g) and Λ(g) are MLEs of g to Ω, and if u is any other MLE of g to Ω, then Λ(g) ≤ u ≤ Ψ(g). The notion of an MLE is not completely satisfactory since it involves only the global Lipschitz constant of the extension and ignores what may happen locally. To solve this problem, in the particular case of the Euclidean space ℝN, Aronsson [6] introduced the concept of absolutely minimizing Lipschitz extension (AMLE) and proved the existence of AMLEs by means of a variant of Perron's method. An AMLE is given by the following definition. Here we consider the general case of extensions of Lipschitz functions defined on a subset A ⊂ Ω, but the reader may keep in mind the case A = 𝜕Ω.

Definition 2.4. Let A be any nonempty subset of Ω and let g : A ⊂ Ω → ℝ be a Lipschitz function. A function u : Ω → ℝ is an absolutely minimizing Lipschitz extension of g to Ω if
(i) u is an MLE of g to Ω;
(ii) whenever B ⊂ Ω and h : Ω → ℝ is Lipschitz such that h = u in Ω \ B, then L(u, B) ≤ L(h, B).
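The McShane and Whitney formulas are easy to experiment with. The following small illustration is our own (the finite data set A, the values of g, and the sample points are arbitrary choices, with the boundary replaced by a finite set of points in the plane): it checks that Ψ(g) and Λ(g) agree with g on A and that Λ(g) ≤ Ψ(g) everywhere.

```python
import math
import random

# McShane and Whitney extensions of g given on a finite set A in the plane:
#   Psi(g)(x) = inf_y (g(y) + L |x - y|),  Lambda(g)(x) = sup_y (g(y) - L |x - y|).
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
g = {(0.0, 0.0): 0.0, (1.0, 0.0): 1.0, (0.0, 1.0): 0.5, (1.0, 1.0): 0.25}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# smallest Lipschitz constant of g on A
L = max(abs(g[p] - g[q]) / dist(p, q) for p in A for q in A if p != q)

def psi(x):
    return min(g[y] + L * dist(x, y) for y in A)

def lam(x):
    return max(g[y] - L * dist(x, y) for y in A)

rng = random.Random(0)
samples = [(rng.uniform(-1, 2), rng.uniform(-1, 2)) for _ in range(200)]
# Psi and Lambda coincide with g on A, and Lambda <= Psi at every sample point;
# any other extension with the same Lipschitz constant lies between them.
ordered = all(lam(s) <= psi(s) + 1e-12 for s in samples)
```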

Remark 2.5. The definition of AMLE can be extended to any metric space (X, d), and existence of such an extension can be proved when (X, d) is a separable length space [58].

It turns out (see [7]) that the unique AMLE of g (defined on 𝜕Ω) to Ω is the unique solution to

{ −Δ∞u = 0   in Ω,
{ u = g      on 𝜕Ω.

As we mentioned earlier, our game will allow us to obtain solutions to this equation. Let us give a heuristic argument showing how the game is related to the ∞-Laplacian and, more generally, how solutions to the DPP are related to the ∞-Laplacian. Consider the classical Taylor expansion

u(y) = u(x) + ∇u(x) ⋅ (y − x) + (1/2)⟨D²u(x)(y − x), (y − x)⟩ + O(|y − x|³).

Then observe that the gradient direction is almost the maximizing direction. Thus, summing up the two Taylor expansions roughly gives us

u(x) − (1/2){ sup_{Bε(x)} u + inf_{Bε(x)} u }
  ≈ u(x) − (1/2){ u(x + ε ∇u(x)/|∇u(x)|) + u(x − ε ∇u(x)/|∇u(x)|) }      (2.1)
  = −(ε²/2) Δ∞u(x) + O(ε³).

We will formalize this heuristic argument in Chapter 4.
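The heuristic can also be tested numerically. The check below is our own (the function u(x, y) = x² + 3y², the base point, and ε are arbitrary choices): the "midrange defect" u − (sup + inf)/2 over a small ball matches −(ε²/2) times the second derivative of u in the gradient direction, i.e., −(ε²/2)Δ_∞^H u in the notation of the previous section.

```python
import math

# Midrange defect over B_eps versus the second derivative of u in the
# gradient direction, for a smooth test function with nonvanishing gradient.
def u(x, y):
    return x**2 + 3 * y**2

def midrange_defect(x, y, eps, samples=20000):
    # u is convex and has no critical point near (x, y), so the sup and inf
    # over the closed ball are attained on the bounding circle
    vals = [u(x + eps * math.cos(2 * math.pi * k / samples),
              y + eps * math.sin(2 * math.pi * k / samples))
            for k in range(samples)]
    return u(x, y) - 0.5 * (max(vals) + min(vals))

x0, y0, eps = 1.0, 0.5, 1e-2
# gradient (2, 3), Hessian diag(2, 6):
# <D^2u grad, grad> / |grad|^2 = (2*4 + 6*9) / 13 = 62/13
predicted = -(eps**2 / 2) * (62 / 13)
defect = midrange_defect(x0, y0, eps)
# defect ≈ predicted, with an error of higher order in eps
```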

2.3 Game value convergence

When considering a version of the game, after giving a precise definition of the rules, our main task will be to show that the game has a value (this is, in some sense, that the expected final payoff is well defined) and that the game value satisfies a DPP. After doing that we will study the limit of the game values as ε → 0. Note that for each ε > 0 we have a different game (we still use the same rules to play, but the set of possible movements changes as ε changes) and hence a different game value uε. We want to study the limit of uε as ε → 0. For the game that we are considering now we have the following result (we will provide a proof later).

Theorem 2.6 ([97]). There exists a continuous function u such that

uε → u   uniformly in Ω.

This limit u is the unique viscosity solution to

{ Δ∞u = 0   in Ω,
{ u = g     on 𝜕Ω.

Let us analyze in detail the one-dimensional case to gain intuition on what is going on in this convergence. As before, we consider Ω = (0, 1) and the final payoff given by g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0. For each n ∈ ℕ, we consider ε = 1/n. By repeating the analysis done for ε = 1/2, it can be shown that

u^{1/n}(x) = { k/(n + 1)            for x ∈ ((k − 1)/n, k/n), 1 ≤ k ≤ n,
             { (k + 1/2)/(n + 1)    for x = k/n, 1 ≤ k ≤ n − 1.

Observe that the game value in the interval ((k − 1)/n, k/n) is equal to the average of the values in the intervals (k/n, (k + 1)/n) and ((k − 2)/n, (k − 1)/n). And for x = k/n the game value is equal to the average of the game values in the surrounding intervals, that is, ((k − 1)/n, k/n) and (k/n, (k + 1)/n). Since it can be seen that

x − 1/(n + 1) ≤ u^{1/n}(x) ≤ x + 1/(n + 1),

if we consider u : (0, 1) → ℝ given by u(x) = x, we have

u^{1/n} → u   uniformly in Ω̄.

Note that the limit function u is a straight line that turns out to be the unique viscosity (and classical) solution to

Δ∞u(x) = uxx(x)(ux(x))² = 0,   x ∈ (0, 1),

with boundary conditions u(0) = 0, u(1) = 1.

Note that in this simple one-dimensional case the solutions to Δ∞ u(x) = uxx (x)(ux )2 (x) = 0 and to Δp u(x) = (|ux |p−2 ux )x (x) = 0 coincide with just a straight line. This does not happen in higher dimensions.

Exercise 2.7. Consider the game in the one-dimensional case with Ω = (0, 1), for a fixed ε > 0, and the final payoff given by g(x) = 1 for x ≥ 1 and g(x) = 0 for x ≤ 0. Suppose that x/ε, (1 − x)/ε ∉ ℤ. Prove that

uε(x) = (1 + ⌊x/ε⌋) / (1 + ⌊x/ε⌋ + 1 + ⌊(1 − x)/ε⌋).
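A numerical sanity check of the formula in Exercise 2.7 (our own; the values of ε and the random sample points are arbitrary choices): at points where the non-integrality condition holds, which happens almost surely for random x, the closed expression satisfies the DPP exactly.

```python
import math
import random

# The candidate game value of Exercise 2.7, extended by the payoff g.
def u_formula(x, eps):
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    k = math.floor(x / eps)            # eps-steps to the left boundary
    m = math.floor((1 - x) / eps)      # eps-steps to the right boundary
    return (1 + k) / (2 + k + m)

def dpp_residual(eps, trials=1000, delta=1e-9):
    # u_formula is nondecreasing, so the sup/inf over the open ball
    # (x - eps, x + eps) are approached at its right/left endpoints
    rng = random.Random(1)
    worst = 0.0
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0)      # a.s. x/eps and (1 - x)/eps are not integers
        sup = u_formula(x + eps - delta, eps)
        inf = u_formula(x - eps + delta, eps)
        worst = max(worst, abs(u_formula(x, eps) - 0.5 * (sup + inf)))
    return worst

# dpp_residual(eps) is zero up to floating-point rounding
```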

In the general case, to prove that the game value converges, we will employ a variant of the classical Arzelà–Ascoli compactness lemma; see Lemma 3.6 in Chapter 3. The functions uε are not continuous, in general, as shown in Figures 2.2 and 2.3. That is why the classical version of the lemma cannot be applied, as it requires the functions in the sequence to be equicontinuous. Nevertheless, for the values of this game the jumps can be controlled and shown to be small for small ε. The Arzelà–Ascoli type lemma will allow us to obtain a convergent subsequence. Then our task is to pass to the limit (making rigorous the computations in (2.1)) to obtain that this limit is a viscosity solution to the ∞-Laplacian (which in this case is the limit PDE). After proving that the limit is the solution to the PDE, the uniqueness of solutions for the PDE will allow us to conclude the convergence of the whole sequence. To prove that the sequence satisfies the hypotheses of the Arzelà–Ascoli lemma we will employ game-theoretic arguments considering different strategies for the players. In [97] the arguments employed to prove the game value convergence are different from the ones we have just described. We prefer this plan, as it follows the general path that we will use to prove this type of result for all versions of the game that we will consider along the book.

2.4 Comments

For an unbalanced situation in which the probabilities of both players to win the coin toss are not 1/2–1/2 we refer to [96]. For vector-valued Lipschitz extensions see [104].

This kind of analysis can be extended to more general spaces (for example, metric spaces) in order to obtain results concerning Lipschitz extensions in ambient spaces that are not just a smooth domain in ℝN. Here we prefer to confine ourselves to a Euclidean environment to simplify the exposition. Observe that the limit function u verifies a PDE that involves derivatives and hence, a priori, we need to require the ambient space to be Euclidean. Instead, the definition of uε only requires the use of balls (the game makes sense in any metric space). This convergence of value functions defined in a general ambient space can be reproduced for other operators. Hence, if we are able to define uε and prove that these values converge to a limit u, we have obtained a candidate u to act as a solution to a PDE even in an ambient space more general than the Euclidean one, where the operator itself does not make sense.


These results show, one more time, that there is a close connection between probability and analysis of PDEs. This connection is quite evident when we look at linear operators (see the previous chapter); however, it is also quite useful when one looks at nonlinear PDE problems like the ∞-Laplacian.

3 Tug-of-War with noise

In this chapter we will introduce the game called Tug-of-War with noise. We begin by giving a description of the rules of the game and a definition of the game value. Then, we prove that the game has a value and that the value is a solution to the DPP. We prove this result in two different ways. First, we give the proof that we believe is the simplest one. Then we give another proof that requires some extra work but is more versatile. We will use this second approach for other versions of the game, and that is why we are interested in first presenting the idea here, in a simpler case, and then using it in more general frameworks. Solutions to the DPP for the game presented in this chapter are called p-harmonious functions. As a byproduct of our arguments we will obtain that the game value is the unique solution to the DPP. This will allow us to obtain some interesting results concerning p-harmonious functions. The next step will be to study the limit of the game values as ε → 0. We will prove the following theorem.

Theorem 3.1. Let Ω be a bounded domain satisfying the exterior sphere condition and g a Lipschitz continuous function. Consider the unique viscosity solution u to

{ Δp u(x) = 0     x ∈ Ω,
{ u(x) = g(x)     x ∈ 𝜕Ω,                                    (3.1)

and let uε be the unique p-harmonious function with boundary values g. Then

uε → u   uniformly in Ω

as ε → 0.

As mentioned before, Δp u = div(|∇u|^{p−2}∇u) is the p-Laplacian operator. To motivate the relation between the game that we consider here and the PDE that arises in the limit, recall the (formal) expansion for the p-Laplacian, i.e.,

Δp u = div(|∇u|^{p−2}∇u)
     = |∇u|^{p−2}Δu + (p − 2)|∇u|^{p−4} ∑_{i,j} u_{x_i} u_{x_j} u_{x_i x_j}      (3.2)
     = |∇u|^{p−2} { Δu + (p − 2)Δ_∞^H u }.

The game values for the Tug-of-War, described in the previous chapter, approximate solutions to the ∞-Laplacian, and the classical random walk that we mentioned in Chapter 1 is related to the usual Laplacian. The game that we consider here combines both random processes to obtain an approximation to the p-Laplacian.

Let us remark that when we are dealing with equation (3.1) we are only interested in the values of g on 𝜕Ω, but when considering the game we will need g to be defined in ℝN \ Ω (or at least in a strip around the boundary outside Ω). This does not introduce any difficulties. Given a continuous real function defined on 𝜕Ω we can extend it continuously to ℝN \ Ω. Also, if g is Lipschitz, then it can be extended to a Lipschitz function. Even more, since 𝜕Ω is compact, g is bounded there, and we can take the extension bounded. So, we consider g defined in ℝN \ Ω, Lipschitz and bounded.

3.1 Description of the game

Again we have a two-person, zero-sum game. Players I and II play as follows. At an initial time, they place a token at a point x0 ∈ Ω and toss a biased coin with probabilities α and β such that 1 > α, β > 0 and α + β = 1. If they get heads (probability α), they toss a fair coin and the winner of the toss moves the game position to any x1 ∈ Bε(x0) of his/her choice. On the other hand, if they get tails (probability β), the game state moves according to the uniform probability density to a random point x1 ∈ Bε(x0). Then they continue playing from x1. This procedure yields a sequence of game states x0, x1, . . . . Once the game position leaves Ω, the game ends and Player II pays Player I the amount given by a payoff function g : ℝN \ Ω → ℝ. Player I earns g(xτ) while Player II earns −g(xτ).

A strategy SI for Player I is a collection of measurable mappings SI = {S_I^k}_{k=0}^∞ such that the next game position is S_I^k(x0, x1, . . . , xk) = x_{k+1} ∈ Bε(xk) if Player I wins the toss given a partial history (x0, x1, . . . , xk). Similarly, Player II plays according to a strategy SII. The next game position x_{k+1} ∈ Bε(xk), given a partial history (x0, x1, . . . , xk), is distributed according to the probability

π_{SI,SII}(x0, x1, . . . , xk, A) = β |A ∩ Bε(xk)| / |Bε(xk)| + (α/2) δ_{S_I^k(x0, x1, . . . , xk)}(A) + (α/2) δ_{S_II^k(x0, x1, . . . , xk)}(A),

where A is any measurable set. From now on, we shall omit k and simply denote the strategies by SI and SII.

Let us fix strategies SI, SII. Let ℝN be equipped with the natural topology and the σ-algebra ℬ of the Lebesgue measurable sets. The space of all game sequences H∞ = {x0} × ℝN × ℝN × ⋅⋅⋅ is a product space endowed with the product topology. Let {ℱk}_{k=0}^∞ denote the filtration of σ-algebras ℱ0 ⊂ ℱ1 ⊂ ⋅⋅⋅ defined as follows: ℱk is the product σ-algebra generated by cylinder sets of the form {x0} × A1 × ⋅⋅⋅ × Ak × ℝN × ℝN × ⋅⋅⋅ with Ai ∈ ℬ. For ω = (x0, ω1, . . .) ∈ H∞,


we define the coordinate processes

Xk(ω) = ωk,   Xk : H∞ → ℝN, k = 0, 1, . . . ,

so that Xk is an ℱk-measurable random variable. Moreover, ℱ∞ = σ(⋃ ℱk) is the smallest σ-algebra so that all Xk are ℱ∞-measurable. To denote the time when the game state reaches the boundary, we define a random variable

τ(ω) = inf{k : Xk(ω) ∉ Ω, k = 0, 1, . . .},

which is a stopping time relative to the filtration {ℱk}_{k=0}^∞.

A starting point x0 and the strategies SI and SII determine a unique probability measure ℙ^{x0}_{SI,SII} in H∞ relative to the σ-algebra ℱ∞. This measure is built by Kolmogorov's extension theorem, applied to the family of transition probabilities

π_{SI,SII}(x0, X1(ω), . . . , Xk(ω), A) = β |A ∩ Bε(ωk)| / |Bε(ωk)| + (α/2) δ_{SI(x0, ω1, . . . , ωk)}(A) + (α/2) δ_{SII(x0, ω1, . . . , ωk)}(A).

The expected payoff is then

𝔼^{x0}_{SI,SII}[g(Xτ)] = ∫_{H∞} g(Xτ(ω)) ℙ^{x0}_{SI,SII}(dω).

Note that, due to the fact that β > 0, the game ends almost surely,

ℙ^{x0}_{SI,SII}({ω ∈ H∞ : τ(ω) < ∞}) = 1,

for any choice of strategies, because the game sequences contain arbitrarily long sequences of random steps with probability 1.

The value of the game for Player I is given by

uεI(x0) = sup_{SI} inf_{SII} 𝔼^{x0}_{SI,SII}[g(Xτ)],

while the value of the game for Player II is given by

uεII(x0) = inf_{SII} sup_{SI} 𝔼^{x0}_{SI,SII}[g(Xτ)].

The values uεI (x0 ) and uεII (x0 ) are in a sense the best expected outcomes each player can almost guarantee when the game starts at x0 . When uεI = uεII we say the game has a value and let uε := uεI = uεII . For the measurability of the value functions we refer to [80] and [81].

As we have mentioned, our game is related to the p-Laplacian. Given a value of p, the corresponding game is the one with

α = (p − 2)/(p + N)   and   β = (2 + N)/(p + N).

Observe that as p approaches 2, β approaches 1. When β = 1 and α = 0, the game token moves at random and the limit operator is the usual Laplacian (see Chapter 1). On the other hand, when p goes to infinity, α approaches 1, and when β = 0 and α = 1 we only play the Tug-of-War game that has the ∞-Laplacian as limit PDE (see Chapter 2). Anyway, recall that here we assume that 0 < α, β < 1, hence we have 2 < p < +∞.

3.2 Dynamic Programming Principle

In this section we will prove the following theorem and later some results concerning p-harmonious functions.

Theorem 3.2. The Tug-of-War with noise has a value, and the game value satisfies

uε(x) = (α/2){ sup_{Bε(x)} uε + inf_{Bε(x)} uε } + β ⨍_{Bε(x)} uε(y) dy      (DPP)

for x ∈ Ω. Following [86] we will call a solution to (DPP) a p-harmonious function. Intuitively, the expected payoff at a given point can be computed by summing up all three cases: Player I moves, Player II moves, or a random point is chosen, with their corresponding probabilities. Player I, who tries to maximize the payoff, will choose a point maximizing the expected payoff, and Player II a point minimizing the expected payoff. In what follows we prove that the DPP holds for the value of the game in two different ways. In the first proof we first establish the existence of a function that satisfies the DPP and then show that this function must coincide with the game value. In the second proof we construct a sequence of functions that converges to a solution of the DPP and we prove that the game value coincides with the limit using the sequence. For a third proof we refer to [85], where the result is established by a pure probabilistic method. The key point in that proof is that a strategy can be decomposed into the first step and the rest. Then, one can show that each player gets no advantage by choosing his first step before the other player. Showing this for all the steps allows to interchange the sup / inf in the definition of uεI and uεII , proving that both functions coincide. We proceed with the first proof. The plan here is to establish the existence of a function that satisfies the DPP. To this end we will apply Perron’s method; see [73].
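The DPP itself suggests a numerical scheme: iterate the right-hand side until a fixed point is reached (this is, in discrete form, the construction used in the second proof below). The sketch that follows is our own one-dimensional illustration; the grid, the choice p = 4 (so α = 2/5 and β = 3/5 with N = 1), and the boundary data g = 0 on the left, g = 1 on the right are arbitrary choices, and the ball average is replaced by a grid average.

```python
# Iterate u(x) = (alpha/2)(sup + inf over B_eps(x)) + beta * (average over B_eps(x))
# on the grid x_j = j/n, with eps = r/n.
def p_harmonious_1d(p=4.0, n=40, r=12, tol=1e-8, max_iter=20000):
    N = 1
    alpha = (p - 2) / (p + N)
    beta = (2 + N) / (p + N)
    size = n + 2 * r + 1                      # grid indices j = -r, ..., n + r
    u = [1.0 if k - r >= n else 0.0 for k in range(size)]
    for _ in range(max_iter):
        v = u[:]
        for k in range(size):
            if 0 < k - r < n:                 # interior point 0 < x < 1
                w = u[k - r:k + r + 1]        # values on the eps-ball
                v[k] = (alpha / 2) * (max(w) + min(w)) + beta * sum(w) / len(w)
        diff = max(abs(a - b) for a, b in zip(u, v))
        u = v
        if diff < tol:
            break
    return u

u = p_harmonious_1d()
# the iterates increase to the discrete p-harmonious function; by the
# symmetry of the data, the value at x = 1/2 (index r + n//2 = 32) is 1/2
```

The iteration starting from the boundary data (and the infimum of g inside Ω) increases monotonically, mirroring the construction of the increasing sequence un in the second proof.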

A subsolution to the DPP is a function that satisfies

u(x) ≤ (α/2){ sup_{Bε(x)} u + inf_{Bε(x)} u } + β ⨍_{Bε(x)} u(y) dy

for x ∈ Ω, and u ≤ g on ℝN \ Ω. In the same way, we call the functions that satisfy the reverse inequality supersolutions. We consider 𝒮, the set of subsolutions to the DPP that are bounded by sup g. It can be seen that any subsolution is bounded by sup g, but we do not need this result here. We will obtain this fact as a byproduct of our arguments. We observe that the set 𝒮 is nonempty. For example, the function u given by u = inf g in Ω, and u = g on ℝN \ Ω, is a subsolution to the DPP and is bounded by sup g. We define ū : ℝN → ℝ as

ū(x) = sup_{u∈𝒮} u(x).

It turns out that this function is a solution to the DPP.

Theorem 3.3. The function

ū(x) = sup_{u∈𝒮} u(x)

is a solution to the DPP.

Proof. Since the function u given by u = inf g in Ω, and u = g on ℝN \ Ω, belongs to 𝒮, and every function in 𝒮 satisfies u ≤ g on ℝN \ Ω, we have ū = g in ℝN \ Ω. For every u ∈ 𝒮 we have u ≤ ū and hence

u(x) ≤ (α/2){ sup_{Bε(x)} ū + inf_{Bε(x)} ū } + β ⨍_{Bε(x)} ū(y) dy.

If we take the supremum over all u ∈ 𝒮 on the left-hand side, we obtain that ū is a subsolution. We consider v : ℝN → ℝ given by

v(x) = (α/2){ sup_{Bε(x)} ū + inf_{Bε(x)} ū } + β ⨍_{Bε(x)} ū(y) dy

for x ∈ Ω, and v = g on ℝN \ Ω. Since ū is a subsolution, we have ū ≤ v. Then,

v(x) = (α/2){ sup_{Bε(x)} ū + inf_{Bε(x)} ū } + β ⨍_{Bε(x)} ū(y) dy
     ≤ (α/2){ sup_{Bε(x)} v + inf_{Bε(x)} v } + β ⨍_{Bε(x)} v(y) dy,

and hence v is a subsolution, which proves that v ≤ ū, and hence v = ū. Thus ū is a solution to the DPP.

Now that we have stated the existence of a solution to the DPP, we can prove that the game has a value and that the value coincides with the solution to the DPP.

First proof of Theorem 3.2. Let u be a solution to the DPP. We will prove that uεII ≤ u, and with a similar argument one can show that u ≤ uεI. Since uεI ≤ uεII always holds, it follows that u = uεI = uεII, and we have the desired result. To prove that uεII ≤ u, we will construct a strategy for Player II based on the solution to the DPP. For a fixed η > 0, Player II follows a strategy S_II^η such that at xk−1 ∈ Ω he always chooses to step to a point that almost minimizes u, that is, to a point xk such that

u(xk) ≤ inf_{Bε(xk−1)} u + η2^{−k}.

We start from the point x0. It follows from the choice of strategies and the DPP for u that

𝔼^{x0}_{SI,S_II^η}[u(xk) + η2^{−k} | x0, . . . , xk−1]
  ≤ (α/2){ sup_{Bε(xk−1)} u + inf_{Bε(xk−1)} u + η2^{−k} } + β ⨍_{Bε(xk−1)} u dy + η2^{−k}
  ≤ u(xk−1) + η2^{−(k−1)}.

Thus Mk = u(xk) + η2^{−k} is a supermartingale. We obtain by Fatou's lemma and the optional stopping theorem that

uεII(x0) = inf_{SII} sup_{SI} 𝔼^{x0}_{SI,SII}[g(xτ)]
  ≤ sup_{SI} 𝔼^{x0}_{SI,S_II^η}[g(xτ) + η2^{−τ}]
  ≤ sup_{SI} lim inf_{k→∞} 𝔼^{x0}_{SI,S_II^η}[u(x_{τ∧k}) + η2^{−(τ∧k)}]
  ≤ sup_{SI} 𝔼^{x0}_{SI,S_II^η}[u(x0) + η] = u(x0) + η.

Since η > 0 is arbitrary, this completes the proof.

Let us observe that Player II can play according to the strategy described in the above proof thanks to the fact that the game ends almost surely. If this were not the case, when considering strategies for the players we would have to make sure that they end the game almost surely.

Now, we proceed with the details of the second proof. The plan here is to obtain the solution to the DPP as the limit of a sequence of functions. Here we follow [28] and [102].


We define a sequence of functions un : ℝN → ℝ. We define u0 as u0 = inf g in Ω, and u0 = g on ℝN \ Ω, and inductively

un(x) = (α/2){ sup_{Bε(x)} un−1 + inf_{Bε(x)} un−1 } + β ⨍_{Bε(x)} un−1(y) dy

for x ∈ Ω, and un = g on ℝN \ Ω for all n ∈ ℕ. We observe that u1 ≥ u0 and that if un ≥ un−1, by the recursive definition, we have un+1 ≥ un. Then, by induction, we obtain that the sequence of functions is an increasing sequence. From u0 ≤ sup g and the recursive definition we can see that un is bounded by sup g for all n. This gives us a uniform bound for un (independent of n). Hence, un converges pointwise to a bounded Borel function u.

Now we want to prove that the convergence is uniform. Assume, for the sake of contradiction, that it is not. Observe that if ‖un+1 − un‖∞ → 0, we can extract a uniformly Cauchy subsequence, and thus this subsequence converges uniformly to a limit u. This implies that uk converges uniformly to u, because of the monotonicity. By the recursive definition we have

‖un − un−1‖∞ ≥ ‖un+1 − un‖∞ ≥ 0.

Then, as we are assuming the convergence is not uniform, we have ‖un+1 − un‖∞ → M for some M > 0. Since

sup_{Bε(x)} un + inf_{Bε(x)} un − sup_{Bε(x)} un−1 − inf_{Bε(x)} un−1 ≤ 2‖un − un−1‖∞,

by the recursive definition

un+1(x) − un(x) ≤ α‖un − un−1‖∞ + β ⨍_{Bε(x)} un(y) − un−1(y) dy.

Since un−1, un ≤ u on Ω, by taking the supremum over all x ∈ Ω on the right-hand side we obtain

‖un+1 − un‖∞ ≤ α‖un − un−1‖∞ + (β/|Bε|) ∫_Ω u(y) − un−1(y) dy.

Given that the sequence un is bounded and converges pointwise to u, we have

∫_Ω u(y) − un−1(y) dy → 0

as n → ∞. Hence, if we take the limit in the above equation, we obtain M ≤ αM, which is a contradiction since α < 1. We have shown that un → u uniformly. By passing to the

26 | 3 Tug-of-War with noise limit in the recursive definition of un we obtain that the limit u satisfies the DPP. This follows since we can pass to the limit as n → ∞ in all terms of the DPP formula. In the same way we can construct a decreasing sequence vk of functions starting with v0 = sup g in Ω, and v0 = g on ℝN \ Ω. In this case we can show that the sequence is bounded by below and converges uniformly to a function v that is a solution of the DPP. Let us show that both limits coincide, that is, we aim to show that u = v. Since u0 ≤ v0 , inductively we obtain that uk ≤ vk for every k and hence u ≤ v. Suppose that M = ‖v − u‖∞ > 0. Using the fact that both functions are a solution to the DPP, we obtain v(x) − u(x) =

α {sup v + inf v − sup u − inf u} + β ∫ v(y) − u(y) dy Bε (x) Bε (x) 2 Bε (x) Bε (x) Bε (x)

≤ αM + β ∫ v(y) − u(y) dy. Bε (x)

By taking supremum over x ∈ Ω we conclude that sup ∫ v(y) − u(y) dy = M. x∈Ω

Bε (x)

Because the integral is absolutely continuous we obtain that the set E = {x ∈ Ω̄ : v(x) − u(x) = M} is nonempty. If x ∈ E, then v − u = M almost everywhere in a ball Bε (x); this is a contradiction since E ⊂ Ω̄ is bounded. Hence, we conclude that u = v as desired. Now we are ready to give a second proof of the main result of this section. Second proof of Theorem 3.2. We will show that u ≤ uεI . The analogous result can be proved for uεII , considering the definition of v, completing the proof. η η Given η > 0, let n > 0 be such that un (x0 ) > u(x0 ) − 2 . We build a strategy SI for Player I, in the first n moves; given xk−1 she will choose to move to a point that almost maximizes un−k , that is, she chooses xk ∈ Bε (xk−1 ) such that η un−k (xk ) > sup un−k − . 2n Bε (xk−1 ) After the first n moves she will follow any strategy. We have x

kη |x , . . . , xk−1 ] 2n 0 η α + sup u } + βi ≥ { inf un−k − 2 Bε (xk−1 ) 2n Bε (xk−1 ) n−k

𝔼S0η I

,SII

[un−k (xk ) +

(k − 1)η , ≥ un−k+1 (xk−1 ) + 2n

∫ Bε (xk−1 )

un−k dy +

kη 2n


where we have estimated the strategy of Player II by the infimum and used the recursive construction of the $u_k$. Thus
$$
M_k = \begin{cases} u_{n-k}(x_k) + \dfrac{k\eta}{2n} - \dfrac{\eta}{2} & \text{for } 0\le k\le n,\\[4pt] \inf_\Omega g & \text{for } k > n \end{cases}
$$
is a submartingale. Now we have
$$
\begin{aligned}
u^\varepsilon_I(x_0) &= \sup_{S_I}\inf_{S_{II}} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(x_\tau)] \\
&\ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S^\eta_I,S_{II}}[g(x_\tau)] \\
&\ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S^\eta_I,S_{II}}[M_\tau] \\
&\ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S^\eta_I,S_{II}}[M_0] \\
&= u_n(x_0) - \frac{\eta}{2} > u(x_0) - \eta,
\end{aligned}
$$
where we used the optional stopping theorem for $M_k$. Since $\eta$ is arbitrary, this proves the claim.

Let us observe that the above proof does not depend on the game ending almost surely. In that case we should choose a strategy for Player I such that, after the first $n$ moves, she plays in a way that ends the game almost surely. This occurs, for example, if she pulls in a fixed direction or towards a fixed point outside $\Omega$. We will use this kind of strategy in Chapter 6.

Let us establish some results concerning solutions to the DPP. Recall that such functions are called p-harmonious. We have shown that these functions coincide with the value of the game, and hence uniqueness follows.

Theorem 3.4. For a given function $g : \mathbb{R}^N\setminus\Omega \to \mathbb{R}$, there exists a unique p-harmonious function.

The game having a value and the uniqueness of solutions to the DPP are two sides of the same coin. In general $u^\varepsilon_I \le u^\varepsilon_{II}$, and it can be seen that $u^\varepsilon_I$ is the smallest solution to the DPP and $u^\varepsilon_{II}$ is the largest one. We refer to Section 4.3 for an example of nonuniqueness.

In addition, a comparison principle holds.

Theorem 3.5. Let $u$ and $v$ be p-harmonious functions for boundary data $g \le h$. Then $u \le v$ in $\Omega$.

Proof. Since $u$ satisfies the DPP and $u = g \le h$ in $\mathbb{R}^N\setminus\Omega$, it is a subsolution of the DPP for the datum $h$. Hence, as $v$ is the supremum over all subsolutions to the DPP, we have $u \le v$.

A strong maximum principle and a strong comparison principle for p-harmonious functions (solutions to the DPP) can be obtained, although we do not include those results here; see [86] and [101].
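The monotone iteration used in the second proof above (start from $u_0 = \inf g$ and apply the DPP operator repeatedly) is easy to try numerically. The sketch below is not from the book: it discretizes $\Omega = (0,1)$ with boundary datum $g(y) = y$ on the $\varepsilon$-strip, for which the p-harmonious function is the identity for every $p > 2$ (the window maximum, minimum, and average of a linear function are symmetric around its value); $p$, the grid, and the iteration count are illustrative choices.

```python
import numpy as np

# Value iteration for the DPP
#   u(x) = (alpha/2) * (max_{B_eps(x)} u + min_{B_eps(x)} u) + beta * avg_{B_eps(x)} u
# on Omega = (0,1), boundary datum g(y) = y on the eps-strip outside.
# For this datum the p-harmonious function is u(x) = x for every p > 2.
p, N = 4.0, 1
alpha = (p - 2) / (p + N)
beta = (2 + N) / (p + N)
h, m = 0.01, 5                 # grid spacing and window radius in grid points
eps = m * h
x = np.arange(-m, int(round(1 / h)) + m + 1) * h   # grid covering the strip
inside = (x > 0) & (x < 1)

u = np.where(inside, -eps, x)  # u_0 = inf g inside Omega, u_0 = g outside
for _ in range(20000):
    win = np.lib.stride_tricks.sliding_window_view(u, 2 * m + 1)
    vals = alpha / 2 * (win.max(1) + win.min(1)) + beta * win.mean(1)
    nxt = u.copy()
    core = slice(m, len(u) - m)        # rows of `win` correspond to x[core]
    nxt[core] = np.where(inside[core], vals, u[core])
    u = nxt

err = float(np.max(np.abs(u - x)[inside]))  # distance to the p-harmonious limit
```

The iterates increase monotonically from $u_0 = \inf g$, exactly as in the construction above, and stay between $\inf g$ and $\sup g$ at every step.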

3.3 Game value convergence

In the previous section we proved the existence of the game value $u^\varepsilon$ and that it satisfies the DPP. The aim of this section is to prove Theorem 3.1.

Given a version of the game, if we modify the value of $\varepsilon$ we end up with a different version. In the previous section we considered the value of the parameter $\varepsilon$ fixed, along with all the other rules of the game. Here we consider multiple games at the same time, one for each value of $\varepsilon$, while the other rules remain fixed. As we have one game for each value of $\varepsilon$, let us emphasize the dependence of the game value $u^\varepsilon$ on $\varepsilon$.

The first step in the proof of the theorem is to show that such a limit exists along subsequences. To do that we employ a variant of the classical Arzelà–Ascoli compactness lemma, Lemma 3.6. The functions $u^\varepsilon$ are not continuous in general, as shown in Figures 2.2 and 2.3. That is why the classical version of the lemma cannot be applied, as it requires the functions to be continuous. Nonetheless, the jumps can be controlled and can be shown to be small for small values of $\varepsilon$. We state and prove the lemma to which we refer throughout this book as an Arzelà–Ascoli type lemma.

Lemma 3.6 ([86], Lemma 4.2). Let $\{u^\varepsilon : \Omega\to\mathbb{R},\ \varepsilon>0\}$ be a set of functions such that
(1) there exists $C>0$ such that $|u^\varepsilon(x)| < C$ for every $\varepsilon>0$ and every $x\in\Omega$;
(2) given $\eta>0$, there are constants $r_0$ and $\varepsilon_0$ such that for every $\varepsilon<\varepsilon_0$ and any $x,y\in\Omega$ with $|x-y|<r_0$ we have
$$|u^\varepsilon(x) - u^\varepsilon(y)| < \eta.$$
Then there exist a uniformly continuous function $u:\Omega\to\mathbb{R}$ and a subsequence, still denoted by $\{u^\varepsilon\}$, such that
$$u^\varepsilon \to u \quad \text{uniformly in } \Omega,$$
as $\varepsilon\to 0$.

Proof. First, we find a candidate to be the uniform limit $u$. Let $X\subset\Omega$ be a dense countable set. Since the involved functions are uniformly bounded, a diagonal procedure provides a subsequence, still denoted by $\{u^\varepsilon\}$, that converges for all $x\in X$. Let $u(x)$ denote this limit. Note that at this point $u$ is defined only for $x\in X$. By assumption, given $\eta>0$, there exists $r_0$ such that for any $x,y\in X$ with $|x-y|<r_0$ we have
$$|u(x) - u(y)| < \eta.$$


Hence, we can extend $u$ continuously to the whole $\Omega$ by setting
$$u(z) = \lim_{X\ni x\to z} u(x).$$
Our next step is to prove that $\{u^\varepsilon\}$ converges to $u$ uniformly. We choose a finite covering
$$\Omega \subset \bigcup_{i=1}^{N} B_r(x_i)$$
and $\varepsilon_0 > 0$ such that
$$|u^\varepsilon(x) - u^\varepsilon(x_i)| < \eta/3 \quad\text{and}\quad |u(x) - u(x_i)| < \eta/3$$
for every $x\in B_r(x_i)$ and $\varepsilon<\varepsilon_0$, as well as
$$|u^\varepsilon(x_i) - u(x_i)| < \eta/3,$$
for every $x_i$ and $\varepsilon<\varepsilon_0$. To obtain the last inequality, we used the fact that $N<\infty$. Thus, for any $x\in\Omega$ we can find $x_i$ such that $x\in B_r(x_i)$ and
$$|u^\varepsilon(x) - u(x)| \le |u^\varepsilon(x) - u^\varepsilon(x_i)| + |u^\varepsilon(x_i) - u(x_i)| + |u(x_i) - u(x)| < \eta,$$
for every $\varepsilon<\varepsilon_0$, where $\varepsilon_0$ is independent of $x$.

Hence, to obtain the convergence along subsequences, we have to prove that the game values satisfy condition (1), that the family is uniformly bounded, and condition (2), that the family is asymptotically uniformly continuous.

Observe that, since at every step the game token moves less than $\varepsilon$, the token must end in the compact boundary strip of width $\varepsilon$ given by
$$\Gamma_\varepsilon = \{x \in \mathbb{R}^N\setminus\Omega : \operatorname{dist}(x,\partial\Omega) \le \varepsilon\}. \tag{3.3}$$
Since we are only interested in small values of $\varepsilon$, we can restrict our analysis to $\varepsilon < \varepsilon_0$ for a certain $\varepsilon_0$. In this case we are only interested in the values of $g$ at points in $\Gamma_{\varepsilon_0}$. Since we are assuming that $g$ is continuous and defined in the whole $\mathbb{R}^N\setminus\Omega$, the restriction of $g$ to $\Gamma_{\varepsilon_0}$ is uniformly continuous (we can assume this without loss of generality).

For the first condition we have to show that the game value is bounded independently of $\varepsilon$. In the last section, when we proved that the game value coincides with the solution to the DPP, we obtained as a byproduct
$$\inf g \le u^\varepsilon \le \sup g. \tag{3.4}$$
So the first condition is satisfied.
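In game terms, bound (3.4) simply says that the final payoff is always some value of $g$. A quick Monte Carlo illustration (not from the book; the 1D domain, the 0/1 datum, and the fixed pulling strategies are all illustrative choices) simulates the coin toss of the Tug-of-War with noise and checks that the sampled payoffs stay between $\inf g$ and $\sup g$:

```python
import random

# One play of Tug-of-War with noise on Omega = (0,1) in 1D:
# with prob. alpha a fair coin decides who moves (here each player simply pulls
# towards "their" boundary), with prob. beta the token jumps uniformly in B_eps(x).
# Payoff: g(y) = 1 if y >= 1, g(y) = 0 if y <= 0 (illustrative datum).
p, N = 4.0, 1
alpha = (p - 2) / (p + N)
beta = (2 + N) / (p + N)
eps = 0.1

def play(x0, rng, max_steps=100_000):
    x = x0
    for _ in range(max_steps):
        if not 0.0 < x < 1.0:               # token left Omega: game over
            return 1.0 if x >= 1.0 else 0.0
        r = rng.random()
        if r < alpha / 2:                   # Player I wins the toss, pulls right
            x += eps * 0.99                 # a point inside B_eps(x)
        elif r < alpha:                     # Player II wins the toss, pulls left
            x -= eps * 0.99
        else:                               # noise: uniform point in B_eps(x)
            x += rng.uniform(-eps, eps)
    return None                             # did not finish within the cap

rng = random.Random(0)
payoffs = [play(0.5, rng) for _ in range(2000)]
done = [v for v in payoffs if v is not None]
value_estimate = sum(done) / len(done)
```

Since $\beta > 0$, the noise component makes the game end almost surely, so essentially every run terminates long before the step cap. The empirical mean estimates the expected payoff under these fixed (not necessarily optimal) strategies and, consistently with (3.4), it lies between $\inf g = 0$ and $\sup g = 1$; by the symmetry of this setup it is close to $1/2$.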

For the second condition we have to show that if we consider the game starting at $x_0$ and the game starting at $y_0$, with these two points close to each other, then the expected payoffs of both games are similar. To do that, we will assume that the domain satisfies the exterior sphere condition. This assumption can be relaxed; see [84]. It is enough to assume that there exist $\delta' > 0$ and $\mu\in(0,1)$ such that for every $\delta\in(0,\delta']$ and $y\in\partial\Omega$ there exists a ball $B_{\mu\delta}(z) \subset B_\delta(y)\setminus\Omega$. For example, when $\Omega$ satisfies the exterior cone condition it satisfies this requirement; this is indeed the case when $\Omega$ is a Lipschitz domain.

In Lemma 3.8 we prove a slightly stronger estimate that implies asymptotic uniform continuity. At each step we make a small correction in order to show that the process is a supermartingale. To show that the effect of the correction is small also in the long run, we need to estimate the expectation of the stopping time $\tau$. We bound $\tau$ by the exit time $\tau^*$ of a random walk in a larger annular domain with a reflecting condition on the outer boundary.

Lemma 3.7. Consider an annular domain $B_R(y)\setminus B_\delta(y)$ and a random walk such that, when at $x_{k-1}$, the next point $x_k$ is chosen according to a uniform distribution on $B_\varepsilon(x_{k-1})\cap B_R(y)$. Let $\tau^* = \inf\{k : x_k \in B_\delta(y)\}$. Then
$$\mathbb{E}^{x_0}(\tau^*) \le \frac{C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(y), x_0) + o(1)\big)}{\varepsilon^2}, \tag{3.5}$$
for $x_0 \in B_R(y)\setminus B_\delta(y)$. Here $o(1)\to 0$ as $\varepsilon\to 0$.

Proof. We will use a solution to a corresponding Poisson problem to prove the result. To motivate this idea, let us denote $h^\varepsilon(x) = \mathbb{E}^x(\tau^*)$. Intuitively, the function $h^\varepsilon$ satisfies the DPP
$$h^\varepsilon(x) = \fint_{B_\varepsilon(x)\cap B_R(y)} h^\varepsilon\,dz + 1,$$
because the number of steps always increases by one when making a step to one of the neighboring points at random. Further, we denote $v^\varepsilon(x) = \varepsilon^2 h^\varepsilon(x)$ and obtain
$$v^\varepsilon(x) = \fint_{B_\varepsilon(x)\cap B_R(y)} v^\varepsilon\,dz + \varepsilon^2.$$
This formula suggests a connection to the problem
$$\begin{cases} \Delta v(x) = -2(N+2) & x \in B_{R+\varepsilon}(y)\setminus B_\delta(y),\\ v(x) = 0 & x\in\partial B_\delta(y),\\ \frac{\partial v}{\partial \nu} = 0 & x\in\partial B_{R+\varepsilon}(y), \end{cases} \tag{3.6}$$
where $\frac{\partial v}{\partial\nu}$ refers to the normal derivative. Indeed, when $B_\varepsilon(x)\subset B_{R+\varepsilon}(y)\setminus B_\delta(y)$, a classical calculation shows that the solution of this problem satisfies the mean value property
$$v(x) = \fint_{B_\varepsilon(x)} v\,dz + \varepsilon^2. \tag{3.7}$$
The solution of problem (3.6) is positive, radially symmetric, and strictly increasing in $r = |x-y|$. It takes the form $v(r) = -ar^2 - br^{2-N} + c$ if $N>2$, and $v(r) = -ar^2 - b\log(r) + c$ if $N=2$. We extend this function, as a solution to the same equation, to $B_\delta(y)\setminus B_{\delta-\varepsilon}(y)$ and use the same notation for the extension. Thus, $v$ satisfies (3.7) for each $B_\varepsilon(x)\subset B_{R+\varepsilon}(y)\setminus B_{\delta-\varepsilon}(y)$. In addition, because $v$ is increasing in $r$, we have for each $x\in B_R(y)\setminus B_\delta(y)$
$$\fint_{B_\varepsilon(x)\cap B_R(y)} v\,dz \le \fint_{B_\varepsilon(x)} v\,dz = v(x) - \varepsilon^2.$$
It follows that
$$\mathbb{E}\big[v(x_k) + k\varepsilon^2 \mid x_0,\dots,x_{k-1}\big] = \fint_{B_\varepsilon(x_{k-1})} v\,dz + k\varepsilon^2 = v(x_{k-1}) + (k-1)\varepsilon^2$$
if $B_\varepsilon(x_{k-1}) \subset B_R(y)\setminus B_{\delta-\varepsilon}(y)$, and if $B_\varepsilon(x_{k-1})\setminus B_R(y) \ne \emptyset$, then
$$\mathbb{E}\big[v(x_k) + k\varepsilon^2 \mid x_0,\dots,x_{k-1}\big] = \fint_{B_\varepsilon(x_{k-1})\cap B_R(y)} v\,dz + k\varepsilon^2 \le \fint_{B_\varepsilon(x_{k-1})} v\,dz + k\varepsilon^2 = v(x_{k-1}) + (k-1)\varepsilon^2.$$
Thus $v(x_k) + k\varepsilon^2$ is a supermartingale, and the optional stopping theorem yields
$$\mathbb{E}^{x_0}\big[v(x_{\tau^*\wedge k}) + (\tau^*\wedge k)\varepsilon^2\big] \le v(x_0). \tag{3.8}$$
Because $x_{\tau^*} \in B_\delta(y)\setminus B_{\delta-\varepsilon}(y)$, we have
$$0 \le -\mathbb{E}^{x_0}\big[v(x_{\tau^*})\big] \le o(1).$$

Furthermore, the estimate
$$0 \le v(x_0) \le C(R/\delta)\operatorname{dist}(\partial B_\delta(y), x_0)$$
holds for the solutions of (3.6). Thus, by passing to the limit with $k$ in (3.8), we obtain
$$\varepsilon^2\,\mathbb{E}^{x_0}[\tau^*] \le v(x_0) - \mathbb{E}^{x_0}\big[v(x_{\tau^*})\big] \le C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(y), x_0) + o(1)\big).$$
This completes the proof.

By estimating the dominating terms $br^{2-N}$ or $b\log(r)$ of the explicit solutions to (3.6) close to $r = \delta$, we see that
$$\big|\mathbb{E}^{x_0}\big[v(x_{\tau^*})\big]\big| \le C\log(1+\varepsilon).$$
Thus the error term $o(1)$ could be improved to $C\log(1+\varepsilon)$.

Next we derive an estimate for the asymptotic uniform continuity of the family $u^\varepsilon$. Recall that we are assuming that the domain satisfies the exterior sphere condition: for each $y\in\partial\Omega$, there exists $B_\delta(z)\subset\mathbb{R}^N\setminus\Omega$ such that $y\in\partial B_\delta(z)$. To simplify the notation and to obtain an explicit estimate, we also assume that $g$ is Lipschitz continuous.

Lemma 3.8. The p-harmonious function $u^\varepsilon$ with boundary data $g$ satisfies
$$\big|u^\varepsilon(x) - u^\varepsilon(y)\big| \le \operatorname{Lip}(g)\delta + C(R/\delta)\big(|x-y| + o(1)\big), \tag{3.9}$$

for every small enough $\delta>0$ and for every two points $x, y \in \Omega\cup\Gamma_\varepsilon$.

Proof. The case $x, y\notin\Omega$ is clear (it follows immediately from the Lipschitz continuity of $g$). Thus, we can concentrate on the cases $x\in\Omega$ and $y\in\Gamma_\varepsilon$, as well as $x, y\in\Omega$. We utilize the connection to games.

Suppose first that $x\in\Omega$ and $y\in\Gamma_\varepsilon$. By the exterior sphere condition, there exists $B_\delta(z)\subset\mathbb{R}^N\setminus\Omega$ such that $y\in\partial B_\delta(z)$. Player I chooses a strategy of pulling towards $z$, denoted by $S^z_I$. Then
$$M_k = |x_k - z| - C\varepsilon^2 k$$
is a supermartingale for a constant $C$ large enough and independent of $\varepsilon$. Indeed,
$$
\begin{aligned}
\mathbb{E}^{x_0}_{S^z_I,S_{II}}\big[\,|x_k - z|\ \big|\ x_0,\dots,x_{k-1}\big]
&\le \frac{\alpha}{2}\big\{|x_{k-1}-z| + \varepsilon + |x_{k-1}-z| - \varepsilon\big\} + \beta \fint_{B_\varepsilon(x_{k-1})} |x-z|\,dx \\
&\le |x_{k-1}-z| + C\varepsilon^2.
\end{aligned}
$$
The first inequality follows from the choice of the strategy, and the second from the estimate
$$\fint_{B_\varepsilon(x_{k-1})} |x-z|\,dx \le |x_{k-1}-z| + C\varepsilon^2.$$


By the optional stopping theorem, this implies that
$$\mathbb{E}^{x_0}_{S^z_I,S_{II}}\big[\,|x_\tau - z|\,\big] \le |x_0 - z| + C\varepsilon^2\, \mathbb{E}^{x_0}_{S^z_I,S_{II}}[\tau]. \tag{3.10}$$
Next we estimate $\mathbb{E}^{x_0}_{S^z_I,S_{II}}[\tau]$ by the stopping time of Lemma 3.7. Player I pulls towards $z$ and Player II uses any strategy. The expectation of $|x_k - z|$ when at $x_{k-1}$ is at most $|x_{k-1}-z|$ when we know that the Tug-of-War occurs. On the other hand, if the random walk occurs, then we know that the expectation of $|x_k - z|$ is greater than or equal to $|x_{k-1}-z|$. Therefore we can bound the expectation of the original process by considering a suitable random walk in $B_R(z)\setminus B_\delta(z)$, for $B_R(z)$ such that $\Omega\subset B_{R/2}(z)$. When $x_k\in B_R(z)\setminus B_\delta(z)$, the successor $x_{k+1}$ is chosen according to a uniform probability in $B_\varepsilon(x_k)\cap B_R(z)$. The process stops when it hits $B_\delta(z)$. Thus, by (3.5),
$$\varepsilon^2\,\mathbb{E}^{x_0}_{S^z_I,S_{II}}[\tau] \le \varepsilon^2\,\mathbb{E}^{x_0}_{S^z_I,S_{II}}[\tau^*] \le C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(z), x_0) + o(1)\big).$$
Since $y\in\partial B_\delta(z)$, we have $\operatorname{dist}(\partial B_\delta(z), x_0) \le |y - x_0|$, and thus (3.10) implies
$$\mathbb{E}^{x_0}_{S^z_I,S_{II}}\big[\,|x_\tau - z|\,\big] \le C(R/\delta)\big(|x_0 - y| + o(1)\big).$$
We get
$$g(z) - C(R/\delta)\big(|x-y| + o(1)\big) \le \mathbb{E}^{x_0}_{S^z_I,S_{II}}[g(x_\tau)] \le g(z) + C(R/\delta)\big(|x-y| + o(1)\big).$$
Thus, we obtain
$$
\begin{aligned}
\sup_{S_I}\inf_{S_{II}} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(x_\tau)] &\ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S^z_I,S_{II}}[g(x_\tau)] \\
&\ge g(z) - C(R/\delta)\big(|x_0 - y| + o(1)\big) \\
&\ge g(y) - \operatorname{Lip}(g)\delta - C(R/\delta)\big(|x_0 - y| + o(1)\big).
\end{aligned}
$$

The upper bound can be obtained by choosing for Player II a strategy where he pulls towards $z$, and thus (3.9) follows.

Finally, let $x, y\in\Omega$ and fix the strategies $S_I, S_{II}$ for the game starting at $x$. We define a virtual game starting at $y$: we use the same coin tosses and random steps as in the usual game starting at $x$. Furthermore, the players adopt their strategies $S^v_I, S^v_{II}$ from the game starting at $x$; that is, when the game position is $y_{k-1}$, a player chooses the step that would be taken at $x_{k-1} = y_{k-1} - y + x$ in the game starting at $x$. We proceed in this way until for the first time $x_k\in\Gamma_\varepsilon$ or $y_k\in\Gamma_\varepsilon$. At that point we have $|x_k - y_k| = |x-y|$, and we may apply the previous steps, which work for $x_k\in\Omega,\ y_k\in\Gamma_\varepsilon$, or for $x_k, y_k\in\Gamma_\varepsilon$. In particular, if we have $x_k\in\Omega$ and $y_k\in\Gamma_\varepsilon$, one of the two players starts to pull towards the point at which the other game ended. By taking expectations, we thus get
$$
\mathbb{E}^{x}_{S_I,S_{II}}[g(x_\tau)] - \operatorname{Lip}(g)\delta - C(R/\delta)\big(|x-y| + o(1)\big)
\le \mathbb{E}^{y}_{S^v_I,S^v_{II}}[g(y_\tau)]
\le \mathbb{E}^{x}_{S_I,S_{II}}[g(x_\tau)] + \operatorname{Lip}(g)\delta + C(R/\delta)\big(|x-y| + o(1)\big).
$$
Since every pair of strategies $S_I, S_{II}$ defines a unique pair of strategies $S^v_I, S^v_{II}$, and vice versa, we can finish the proof by taking infima and suprema.

Note that, thanks to (3.4) and Lemma 3.8, the family $u^\varepsilon$ satisfies the hypotheses of the compactness Lemma 3.6.

Corollary 3.9. Let $\{u^\varepsilon\}$ be a family of p-harmonious functions with a fixed continuous boundary datum $g$. Then there exist a uniformly continuous $u$ and a subsequence, still denoted by $\{u^\varepsilon\}$, such that
$$u^\varepsilon \to u \quad \text{uniformly in } \Omega.$$

Next we prove that the limit $u$ in Corollary 3.9 is a solution to (3.1). The idea is to work in the viscosity setting and to show that the limit is a viscosity sub- and supersolution. To accomplish this, we utilize some ideas from [84], where p-harmonic functions were characterized in terms of asymptotic expansions. The key point here will be the expansion for the p-Laplacian from equation (3.2), that is,
$$\Delta_p u = |\nabla u|^{p-2}\Big\{\Delta u + (p-2)\Big\langle D^2 u\, \frac{\nabla u}{|\nabla u|}\, ;\, \frac{\nabla u}{|\nabla u|}\Big\rangle\Big\}. \tag{3.11}$$
The p-Laplacian, written in this form, is not well defined when the gradient vanishes. Hence, in order to give the definition of viscosity solution, we have to recall the definition of the semicontinuous envelopes of the operator; see Appendix A. Observing that $\Delta u = \operatorname{tr}(D^2 u)$, we can write (3.1) as $F(\nabla u, D^2 u) = 0$, where
$$F(v, X) = -\operatorname{tr}(X) - (p-2)\Big\langle X\, \frac{v}{|v|}\, ;\, \frac{v}{|v|}\Big\rangle.$$
As we just mentioned, this function $F : \mathbb{R}^N\times\mathbb{S}^N \to \mathbb{R}$ is not well defined at $v=0$ (here $\mathbb{S}^N$ denotes the set of real symmetric $N\times N$ matrices). Therefore, we need to consider the lower semicontinuous envelope $F_*$ and the upper semicontinuous envelope $F^*$ of $F$. These functions coincide with $F$ for $v\ne 0$, and for $v=0$ they are given by
$$F^*(0, X) = -\operatorname{tr}(X) - (p-2)\lambda_{\min}(X)$$


and
$$F_*(0, X) = -\operatorname{tr}(X) - (p-2)\lambda_{\max}(X),$$
where $\lambda_{\min}(X) = \min\{\lambda : \lambda \text{ is an eigenvalue of } X\}$ and $\lambda_{\max}(X) = \max\{\lambda : \lambda \text{ is an eigenvalue of } X\}$. Now we are ready to give the definition of a viscosity solution to our equation.

Definition 3.10. For $1 < p < \infty$ consider the equation $-\Delta_p u = -\operatorname{div}(|\nabla u|^{p-2}\nabla u) = 0$.
(1) A lower semicontinuous function $u$ is a viscosity supersolution if for every $\phi\in C^2$ such that $\phi$ touches $u$ at $x\in\Omega$ strictly from below, we have $F^*(\nabla\phi(x), D^2\phi(x)) \ge 0$.
(2) An upper semicontinuous function $u$ is a subsolution if for every $\phi\in C^2$ such that $\phi$ touches $u$ at $x\in\Omega$ strictly from above, we have $F_*(\nabla\phi(x), D^2\phi(x)) \le 0$.
(3) Finally, $u$ is a viscosity solution if it is both a sub- and a supersolution.

Recall that viscosity solutions and weak solutions are equivalent in the case of p-harmonic functions; see [62].

Theorem 3.11. Let $g$ and $\Omega$ be as in Theorem 3.1. Then the uniform limit $u$ of the p-harmonious functions $u^\varepsilon$ is a viscosity solution to (3.1).

Proof. First let us observe that $u = g$ on $\partial\Omega$, so we can focus our attention on showing that $u$ is p-harmonic in $\Omega$ in the viscosity sense. To this end, we recall from [84] an estimate that involves the regular Laplacian ($p=2$) and an approximation for the $\infty$-Laplacian ($p=\infty$).

Choose a point $x\in\Omega$ and a $C^2$-function $\phi$ defined in a neighborhood of $x$. Note that, since $\phi$ is continuous, we have
$$\min_{y\in B_\varepsilon(x)} \phi(y) = \inf_{y\in B_\varepsilon(x)} \phi(y)$$
for all $x\in\Omega$. Let $z_\varepsilon$ be the point at which $\phi$ attains its minimum in $B_\varepsilon(x)$,
$$\phi(z_\varepsilon) = \min_{y\in B_\varepsilon(x)} \phi(y).$$
Now, consider the second-order Taylor expansion of $\phi$,
$$\phi(y) = \phi(x) + \nabla\phi(x)\cdot(y-x) + \tfrac12\big\langle D^2\phi(x)(y-x), (y-x)\big\rangle + o(|y-x|^2)$$

as $|y-x|\to 0$. Evaluating this Taylor expansion at the point $z_\varepsilon$, we get
$$\phi(z_\varepsilon) = \phi(x) + \nabla\phi(x)\cdot(z_\varepsilon - x) + \tfrac12\big\langle D^2\phi(x)(z_\varepsilon - x), (z_\varepsilon - x)\big\rangle + o(\varepsilon^2),$$
as $\varepsilon\to 0$. Evaluating the Taylor expansion at the antipodal point of $z_\varepsilon$ with respect to $x$, given by $\tilde z_\varepsilon = 2x - z_\varepsilon$, we have
$$\phi(\tilde z_\varepsilon) = \phi(x) - \nabla\phi(x)\cdot(z_\varepsilon - x) + \tfrac12\big\langle D^2\phi(x)(z_\varepsilon - x), (z_\varepsilon - x)\big\rangle + o(\varepsilon^2).$$
Adding these expressions, we obtain
$$\phi(\tilde z_\varepsilon) + \phi(z_\varepsilon) - 2\phi(x) = \big\langle D^2\phi(x)(z_\varepsilon - x), (z_\varepsilon - x)\big\rangle + o(\varepsilon^2).$$
Since $z_\varepsilon$ is the point where the minimum of $\phi$ is attained, it follows that
$$\phi(\tilde z_\varepsilon) + \phi(z_\varepsilon) - 2\phi(x) \le \max_{y\in B_\varepsilon(x)} \phi(y) + \min_{y\in B_\varepsilon(x)} \phi(y) - 2\phi(x),$$
and then
$$\frac12\Big\{\max_{y\in B_\varepsilon(x)} \phi(y) + \min_{y\in B_\varepsilon(x)} \phi(y)\Big\} - \phi(x) \ge \frac12\big\langle D^2\phi(x)(z_\varepsilon - x), (z_\varepsilon - x)\big\rangle + o(\varepsilon^2). \tag{3.12}$$

Recall the expansion obtained for the usual Laplacian in (1.5), that is,
$$\fint_{B_\varepsilon(x)} \phi(y)\,dy - \phi(x) = \frac{\varepsilon^2}{2(N+2)}\Delta\phi(x) + o(\varepsilon^2). \tag{3.13}$$
Recall that the values of $\alpha$ and $\beta$ are given by
$$\alpha = \frac{p-2}{p+N} \quad\text{and}\quad \beta = \frac{2+N}{p+N}.$$
Multiplying (3.12) by $\alpha$ and (3.13) by $\beta$ and adding, we arrive at the expansion, valid for any smooth function $\phi$,
$$
\begin{aligned}
\frac{\alpha}{2}&\Big\{\max_{y\in B_\varepsilon(x)}\phi(y) + \min_{y\in B_\varepsilon(x)}\phi(y)\Big\} + \beta \fint_{B_\varepsilon(x)} \phi(y)\,dy - \phi(x) \\
&\ge \frac{\beta\varepsilon^2}{2(N+2)}\Big((p-2)\Big\langle D^2\phi(x)\Big(\frac{z_\varepsilon-x}{\varepsilon}\Big), \Big(\frac{z_\varepsilon-x}{\varepsilon}\Big)\Big\rangle + \Delta\phi(x)\Big) + o(\varepsilon^2).
\end{aligned} \tag{3.14}
$$
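When the gradient does not vanish, expansion (3.14) holds with equality up to $o(\varepsilon^2)$, and this can be checked numerically. The sketch below is not from the book: it uses an illustrative quadratic $\phi$ in dimension $N=2$, evaluates the maximum, minimum, and ball average by polar sampling, and compares with the right-hand side computed from the exact derivatives of $\phi$.

```python
import numpy as np

# Check expansion (3.14):
#   (alpha/2)(max_B phi + min_B phi) + beta * avg_B phi - phi(x)
#     ~ beta*eps^2/(2(N+2)) * ((p-2) <D^2phi ghat, ghat> + Delta phi)
# for phi(x,y) = x^2 + 2y^2 + 3x at (0.4, 0.2), where grad phi != 0.
p, N = 4.0, 2
alpha = (p - 2) / (p + N)
beta = (2 + N) / (p + N)
eps = 0.01

def phi(x, y):
    return x**2 + 2.0 * y**2 + 3.0 * x

x0, y0 = 0.4, 0.2
grad = np.array([2 * x0 + 3.0, 4 * y0])   # exact gradient at (x0, y0)
H = np.diag([2.0, 4.0])                   # exact (constant) Hessian

# max/min over the ball: since grad phi != 0 there, extrema lie on the circle
theta = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
ring = phi(x0 + eps * np.cos(theta), y0 + eps * np.sin(theta))

# ball average by polar quadrature (midpoint in r, trapezoid in theta)
tq = np.linspace(0.0, 2 * np.pi, 1024, endpoint=False)
rq = (np.arange(256) + 0.5) * (eps / 256)
Rg, Tg = np.meshgrid(rq, tq)
cells = phi(x0 + Rg * np.cos(Tg), y0 + Rg * np.sin(Tg)) * Rg
avg = cells.sum() * (eps / 256) * (2 * np.pi / tq.size) / (np.pi * eps**2)

lhs = alpha / 2 * (ring.max() + ring.min()) + beta * avg - phi(x0, y0)
ghat = grad / np.linalg.norm(grad)
rhs = beta * eps**2 / (2 * (N + 2)) * ((p - 2) * (ghat @ H @ ghat) + np.trace(H))
```

For a quadratic $\phi$ the Taylor remainder vanishes, so the two sides agree to high accuracy already at moderate $\varepsilon$.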


Suppose that $\phi$ touches $u$ at $x$ strictly from below. By the uniform convergence, there exists a sequence $\{x_\varepsilon\}$ converging to $x$ such that $u^\varepsilon - \phi$ has an approximate minimum at $x_\varepsilon$; that is, given $\eta_\varepsilon > 0$, there exists $x_\varepsilon$ such that
$$u^\varepsilon(x) - \phi(x) \ge u^\varepsilon(x_\varepsilon) - \phi(x_\varepsilon) - \eta_\varepsilon.$$
Thus, recalling that $u^\varepsilon$ is p-harmonious, we obtain
$$\eta_\varepsilon \ge -\phi(x_\varepsilon) + \frac{\alpha}{2}\Big\{\max_{B_\varepsilon(x_\varepsilon)}\phi + \min_{B_\varepsilon(x_\varepsilon)}\phi\Big\} + \beta \fint_{B_\varepsilon(x_\varepsilon)} \phi(y)\,dy,$$
and thus, by (3.14) and choosing $\eta_\varepsilon = o(\varepsilon^2)$, we have
$$0 \ge \frac{\beta\varepsilon^2}{2(N+2)}\Big((p-2)\Big\langle D^2\phi(x_\varepsilon)\Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big), \Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big)\Big\rangle + \Delta\phi(x_\varepsilon)\Big) + o(\varepsilon^2). \tag{3.15}$$
We remark that $z_\varepsilon \in \partial B_\varepsilon(x_\varepsilon)$ for $\varepsilon>0$ small enough whenever $\nabla\phi(x) \ne 0$. In fact, suppose, on the contrary, that there exists a subsequence $z_{\varepsilon_j} \in B_{\varepsilon_j}(x_{\varepsilon_j})$ of interior minimum points of $\phi$. Then $\nabla\phi(z_{\varepsilon_j}) = 0$ and, since $z_{\varepsilon_j}\to x$ as $\varepsilon_j\to 0$, we have by continuity $\nabla\phi(x) = 0$. A simple argument based on Lagrange multipliers then shows that
$$\lim_{\varepsilon\to 0} \frac{z_\varepsilon - x_\varepsilon}{\varepsilon} = -\frac{\nabla\phi}{|\nabla\phi|}(x).$$
Next we need to observe that
$$\Big\langle D^2\phi(x_\varepsilon)\Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big), \Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big)\Big\rangle$$
converges to $\Delta^H_\infty\phi(x)$ when $\nabla\phi(x)\ne 0$, and is always bounded between $\lambda_{\min}(D^2\phi(x))$ and $\lambda_{\max}(D^2\phi(x))$. Dividing by $\varepsilon^2$ and letting $\varepsilon\to 0$, we get
$$F^*(\nabla\phi(x), D^2\phi(x)) \ge 0.$$
Therefore $u$ is a viscosity supersolution.

To prove that $u$ is a viscosity subsolution, we use an analogous argument, considering a function $\psi$ that touches $u$ from above and the maximum point of the test function.

Now we are ready to complete the proof of the main theorem of this chapter.

Proof of Theorem 3.1. We just have to observe that, since the viscosity solution of (3.1) is unique, we have convergence for the whole family $u^\varepsilon$ as $\varepsilon\to 0$.

We have proved that the game values converge to the unique p-harmonic function under the assumption that the domain satisfies the exterior sphere condition. Now we

will give a different proof assuming that $\nabla u \ne 0$. To be more precise, we will assume that there exists $\varepsilon_0 > 0$ such that $u$ is a p-harmonic function with $\nabla u \ne 0$ in $\Omega_{\varepsilon_0} = \Omega\cup\Gamma_{\varepsilon_0}$ (recall the definition of $\Gamma_\varepsilon$ in (3.3)). This assumption guarantees that $u$ is real analytic according to a classical theorem of Hopf [54], and thus equation (3.16) holds with a uniform error term in $\Omega$. This result is Theorem 4.1 from [86]. For a proof using a similar idea, see Theorem 2.4 in [98].

Theorem 3.12. Let $u$ be p-harmonic with nonvanishing gradient $\nabla u\ne 0$ in $\Omega_{\varepsilon_0}$ as above, and let $u^\varepsilon$ be the p-harmonious function in $\Omega$ with exterior values $u$ in $\Gamma_\varepsilon$. Then
$$u^\varepsilon \to u \quad\text{uniformly in } \Omega$$
as $\varepsilon\to 0$.

Proof. As noted previously, the p-harmonious function with given boundary values coincides with the value of the game, and thus we can use a game-theoretic approach. Remark that $u$ is continuous. It is known from [84] that $u$ satisfies
$$u(x) = \frac{\alpha}{2}\Big\{\sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u\Big\} + \beta \fint_{B_\varepsilon(x)} u\,dy + O(\varepsilon^3) \tag{3.16}$$
with a uniform error term for $x\in\Omega$ as $\varepsilon\to 0$. The error term is uniform due to our assumptions on $u$.

Player II follows a strategy $S^0_{II}$ such that at a point $x_{k-1}$ he chooses to step to a point that almost minimizes $u$, that is, to a point $x_k\in B_\varepsilon(x_{k-1})$ such that
$$u(x_k) < \inf_{B_\varepsilon(x_{k-1})} u(y) + \varepsilon^3.$$

Choose $C>0$ such that $|O(\varepsilon^3)| \le C\varepsilon^3$. Under the strategy $S^0_{II}$,
$$M_k = u(x_k) - C_1 k\varepsilon^3$$
is a supermartingale, where $C_1 = C+1$. Indeed,
$$
\begin{aligned}
\mathbb{E}_{S_I,S^0_{II}}\big(u(x_k) - C_1 k\varepsilon^3 \mid x_0,\dots,x_{k-1}\big)
&\le \frac{\alpha}{2}\Big\{\sup_{B_\varepsilon(x_{k-1})} u + \inf_{B_\varepsilon(x_{k-1})} u + \varepsilon^3\Big\} + \beta \fint_{B_\varepsilon(x_{k-1})} u\,dy - C_1 k\varepsilon^3 \\
&\le u(x_{k-1}) - C_1(k-1)\varepsilon^3.
\end{aligned} \tag{3.17}
$$
The first inequality follows from the choice of the strategy, and the second from (3.16). Now we can estimate $u^\varepsilon_{II}(x_0)$ by using Fatou's lemma and the optional stopping theorem for supermartingales. We have

$$
\begin{aligned}
u^\varepsilon_{II}(x_0) &= \inf_{S_{II}}\sup_{S_I} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(x_\tau)] \\
&\le \sup_{S_I} \mathbb{E}^{x_0}_{S_I,S^0_{II}}[u(x_\tau)] \\
&\le \sup_{S_I} \mathbb{E}^{x_0}_{S_I,S^0_{II}}\big[u(x_\tau) + C_1\tau\varepsilon^3 - C_1\tau\varepsilon^3\big] \\
&\le \sup_{S_I} \Big(\liminf_{k\to\infty} \mathbb{E}^{x_0}_{S_I,S^0_{II}}\big[u(x_{\tau\wedge k}) - C_1(\tau\wedge k)\varepsilon^3\big] + C_1\varepsilon^3\, \mathbb{E}^{x_0}_{S_I,S^0_{II}}[\tau]\Big) \\
&\le u(x_0) + C_1\varepsilon^3 \sup_{S_I} \mathbb{E}^{x_0}_{S_I,S^0_{II}}[\tau].
\end{aligned}
$$
This inequality and the analogous argument for Player I imply, for $u^\varepsilon = u^\varepsilon_{II} = u^\varepsilon_I$, that
$$
u(x_0) - C_1\varepsilon^3 \inf_{S_{II}} \mathbb{E}^{x_0}_{S^0_I,S_{II}}[\tau] \le u^\varepsilon(x_0) \le u(x_0) + C_1\varepsilon^3 \sup_{S_I} \mathbb{E}^{x_0}_{S_I,S^0_{II}}[\tau].
$$
Letting $\varepsilon\to 0$, the proof is completed if we prove that there exists $C$ such that
$$\mathbb{E}^{x_0}_{S_I,S^0_{II}}[\tau] \le C\varepsilon^{-2}.$$
To establish this bound, we show that
$$\tilde M_k = -u(x_k)^2 + u(x_0)^2 + C_2\varepsilon^2 k$$
is a supermartingale for small enough $\varepsilon>0$. If Player II wins the toss, we have $u(x_k) - u(x_{k-1}) \le -C_3\varepsilon$ because $\nabla u\ne 0$; we can choose, for example, $C_3 = \inf_{x\in\Omega}|\nabla u|$. It follows that
$$\mathbb{E}_{S_I,S^0_{II}}\big[\big(u(x_k)-u(x_{k-1})\big)^2 \mid x_0,\dots,x_{k-1}\big] \ge \frac{\alpha}{2}\big((-C_3\varepsilon)^2 + 0\big) + \beta\cdot 0 = \frac{\alpha C_3^2}{2}\varepsilon^2. \tag{3.18}$$
We have
$$
\begin{aligned}
\mathbb{E}_{S_I,S^0_{II}}\big[\tilde M_k - \tilde M_{k-1} \mid x_0,\dots,x_{k-1}\big]
&= \mathbb{E}_{S_I,S^0_{II}}\big[-u(x_k)^2 + u(x_{k-1})^2 + C_2\varepsilon^2 \mid x_0,\dots,x_{k-1}\big] \\
&= \mathbb{E}_{S_I,S^0_{II}}\big[-\big(u(x_k)-u(x_{k-1})\big)^2 \mid x_0,\dots,x_{k-1}\big] \\
&\quad - \mathbb{E}_{S_I,S^0_{II}}\big[2\big(u(x_k)-u(x_{k-1})\big)u(x_{k-1}) \mid x_0,\dots,x_{k-1}\big] + C_2\varepsilon^2.
\end{aligned} \tag{3.19}
$$
By subtracting a constant if necessary, we may assume that $u<0$. Moreover, $u(x_{k-1})$ is determined by the point $x_{k-1}$, and thus we can estimate the second term on the right-hand side as
$$
\begin{aligned}
-\mathbb{E}_{S_I,S^0_{II}}\big[2\big(u(x_k)-u(x_{k-1})\big)u(x_{k-1}) \mid x_0,\dots,x_{k-1}\big]
&= -2u(x_{k-1})\big(\mathbb{E}_{S_I,S^0_{II}}[u(x_k) \mid x_0,\dots,x_{k-1}] - u(x_{k-1})\big) \\
&\le 2\|u\|_\infty C_1\varepsilon^3.
\end{aligned}
$$
The last inequality follows from (3.16), similarly to estimate (3.17). Together with (3.18) and (3.19), this implies
$$\mathbb{E}_{S_I,S^0_{II}}\big[\tilde M_k - \tilde M_{k-1} \mid x_0,\dots,x_{k-1}\big] \le 0,$$
when $-\varepsilon^2\alpha C_3^2/2 + 2\|u\|_\infty C_1\varepsilon^3 + C_2\varepsilon^2 \le 0$. This holds if we choose, for example, $C_2$ such that $C_3 \ge 2\sqrt{C_2/\alpha}$ and take $\varepsilon < C_2/(2\|u\|_\infty C_1)$. Thus, $\tilde M_k$ is a supermartingale. Recall that we assumed $p>2$, implying $\alpha>0$. According to the optional stopping theorem for supermartingales,
$$\mathbb{E}^{x_0}_{S_I,S^0_{II}}\big[\tilde M_{\tau\wedge k}\big] \le \tilde M_0 = 0,$$
and thus
$$C_2\varepsilon^2\, \mathbb{E}^{x_0}_{S_I,S^0_{II}}[\tau\wedge k] \le \mathbb{E}^{x_0}_{S_I,S^0_{II}}\big[u(x_{\tau\wedge k})^2 - u(x_0)^2\big].$$
The result follows by passing to the limit with $k$, since $u$ is bounded in $\Omega$.
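The key quantitative input in both convergence proofs is that exit times scale like $\varepsilon^{-2}$ (Lemma 3.7 and the bound $\mathbb{E}[\tau] \le C\varepsilon^{-2}$ above). The following Monte Carlo sketch is not from the book: it simulates the random walk of Lemma 3.7 in one dimension and checks that $\varepsilon^2\,\mathbb{E}[\tau^*]$ is roughly independent of $\varepsilon$; the domain, parameter values, sample sizes, and step cap are illustrative choices.

```python
import numpy as np

# Random walk of Lemma 3.7 in 1D: annulus B_R(0) \ B_delta(0), start at x0 > 0,
# steps uniform in B_eps(x) ∩ B_R(0) (a reflecting outer boundary),
# stopped on hitting B_delta(0).
R, delta, x0 = 1.0, 0.1, 0.5

def mean_exit_steps(eps, n_walks, rng, cap=400_000):
    x = np.full(n_walks, x0)
    steps = np.zeros(n_walks)
    active = np.ones(n_walks, dtype=bool)
    for _ in range(cap):
        if not active.any():
            break
        idx = np.flatnonzero(active)
        prop = x[idx] + rng.uniform(-eps, eps, idx.size)
        out = np.abs(prop) >= R            # outside B_R(0): resample (rejection)
        while out.any():
            prop[out] = x[idx[out]] + rng.uniform(-eps, eps, int(out.sum()))
            out = np.abs(prop) >= R
        x[idx] = prop
        steps[idx] += 1
        active[idx] = np.abs(prop) > delta  # stop on hitting B_delta(0)
    return steps.mean()

rng = np.random.default_rng(0)
m1 = mean_exit_steps(0.08, 1000, rng)
m2 = mean_exit_steps(0.04, 1000, rng)
ratio = (0.08**2 * m1) / (0.04**2 * m2)    # should be of order 1
```

Halving $\varepsilon$ roughly quadruples the expected number of steps, so the rescaled quantity $\varepsilon^2\,\mathbb{E}[\tau^*]$ stays essentially constant, in line with (3.5).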

3.4 Game with running payoff

As a modification of the Tug-of-War with noise we are treating in this chapter, we can introduce the version with running payoff. This version of the game was treated in [102]. We will not provide proofs for the results concerning this game, since the arguments are similar to the ones for the game without running payoff. In addition, the game with running payoff can be seen as a particular version of the game that we treat in Chapter 6; hence, the results can be seen as particular cases of the ones obtained there.

The only modification to the game rules is in the payoff: along with the final payoff function $g : \mathbb{R}^N\setminus\Omega \to \mathbb{R}$, a running payoff function $f : \Omega\to\mathbb{R}$ is given. Considering these two functions, the total payoff is
$$g(x_\tau) + \varepsilon^2 \sum_{k=0}^{\tau-1} f(x_k). \tag{3.20}$$


We can think that, whenever the token leaves a point $x_k$, Player I pays Player II the amount $\varepsilon^2 f(x_k)$ and, as usual, when the game ends, Player I pays Player II the amount $g(x_\tau)$.

Since $\beta > 0$, it can be seen that $\mathbb{E}^{x_0}_{S_I,S_{II}}[\tau]$ is finite, and hence, assuming that $f$ is bounded, the expectation of (3.20) is well defined. Hence, the game values for each player can be defined as before, considering that the payoff is given by (3.20). The DPP for the game is given by the following theorem.

Theorem 3.13. The Tug-of-War with noise and running payoff has a value, and the game value satisfies
$$u^\varepsilon(x) = \varepsilon^2 f(x) + \frac{\alpha}{2}\Big\{\sup_{B_\varepsilon(x)} u^\varepsilon + \inf_{B_\varepsilon(x)} u^\varepsilon\Big\} + \beta \fint_{B_\varepsilon(x)} u^\varepsilon(y)\,dy$$
for $x\in\Omega$.

The term $\varepsilon^2$ multiplying $f$ in the running payoff can be motivated by the estimate (3.5). Suppose that $f$ is constant, say $f\equiv 1$. Then, if Player II cannot force the game to end in an expected number of steps bounded by $\frac{1}{\varepsilon^2}$, the payoff diverges as $\varepsilon\to 0$. Also observe that this scale is the right one when passing to the limit in the equation for a test function. The equivalent of equation (3.15) is
$$0 \ge \frac{\beta\varepsilon^2}{2(N+2)}\Big((p-2)\Big\langle D^2\phi(x_\varepsilon)\Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big), \Big(\frac{z_\varepsilon - x_\varepsilon}{\varepsilon}\Big)\Big\rangle + \Delta\phi(x_\varepsilon)\Big) + \varepsilon^2 f(x_\varepsilon) + o(\varepsilon^2). \tag{3.21}$$

At this step we divide by $\varepsilon^2$. The scaling $\varepsilon^2$ multiplying $f$ allows us to pass to the limit (note that we obtain the previous operators $+f(x)$ in such a limit).

Theorem 3.14. Let $\Omega$ be a bounded domain satisfying the exterior sphere condition and let $g$ be a continuous function. Consider the unique viscosity solution $u$ to
$$\begin{cases} -\Delta^H_p u(x) = \bar f(x) & x\in\Omega,\\ u(x) = g(x) & x\in\partial\Omega, \end{cases} \tag{3.22}$$
and let $u^\varepsilon$ be the game values. Then
$$u^\varepsilon \to u \quad\text{uniformly in } \Omega$$
as $\varepsilon\to 0$. Here
$$\Delta^H_p u = \frac{\operatorname{div}(|\nabla u|^{p-2}\nabla u)}{(N+p)|\nabla u|^{p-2}}$$

is the 1-homogeneous p-Laplacian. As in (3.11), we can show that
$$\Delta^H_p u = \frac{p-2}{N+p}\Delta^H_\infty u + \frac{1}{N+p}\Delta u,$$
where $\Delta u$ is the usual Laplacian and $\Delta^H_\infty u$ is the normalized $\infty$-Laplacian, that is,
$$\Delta u = \sum_{i=1}^N u_{x_i x_i} \quad\text{and}\quad \Delta^H_\infty u = \frac{1}{|\nabla u|^2}\sum_{i,j=1}^N u_{x_i} u_{x_j} u_{x_i x_j}.$$
Hence, dividing by $\varepsilon^2$ and letting $\varepsilon\to 0$ in (3.21), we get
$$0 \ge \frac{\beta}{2(N+2)}\big((p-2)\Delta^H_\infty\phi(x) + \Delta\phi(x)\big) + f(x).$$
Recalling that $\beta = \frac{2+N}{p+N}$, we have obtained that $0 \ge \Delta^H_p u(x) + 2f$. Hence, in (3.22), $\bar f = 2f$.
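As a sanity check on the scaling and on the factor $\bar f = 2f$, one can solve the running-payoff DPP of Theorem 3.13 numerically in one dimension. The sketch below is not from the book: on $\Omega = (0,1)$ with $f\equiv 1$, the limit problem $-\Delta^H_p u = 2f$ reduces to $-\frac{p-1}{p+1}u'' = 2$, whose solution with zero boundary values is $u(x) = \frac{p+1}{p-1}x(1-x)$; as in Theorem 3.12, we feed the exterior values of this exact solution as datum, and the grid parameters are illustrative.

```python
import numpy as np

# Fixed-point iteration for the running-payoff DPP (Theorem 3.13) on (0,1):
#   u(x) = eps^2 f(x) + (alpha/2)(max_B u + min_B u) + beta * avg_B u,
# f = 1, exterior datum = exact limit solution u(x) = (p+1)/(p-1) x(1-x).
p, N = 4.0, 1
alpha = (p - 2) / (p + N)
beta = (2 + N) / (p + N)
h, m = 0.01, 5
eps = m * h
x = np.arange(-m, int(round(1 / h)) + m + 1) * h
inside = (x > 0) & (x < 1)
exact = (p + 1) / (p - 1) * x * (1 - x)

w = np.ones(2 * m + 1)          # trapezoid weights for the window average
w[0] = w[-1] = 0.5
w /= 2 * m

u = np.where(inside, 0.0, exact)
for _ in range(30000):
    win = np.lib.stride_tricks.sliding_window_view(u, 2 * m + 1)
    vals = eps**2 + alpha / 2 * (win.max(1) + win.min(1)) + beta * (win @ w)
    nxt = u.copy()
    core = slice(m, len(u) - m)
    nxt[core] = np.where(inside[core], vals, u[core])
    u = nxt

err = float(np.max(np.abs(u - exact)[inside]))
```

The computed $u^\varepsilon$ matches the parabola $\frac{p+1}{p-1}x(1-x)$ up to a small discretization error, consistent with $\bar f = 2f$; dropping the $\varepsilon^2$ factor in front of $f$ would instead make the values blow up as $\varepsilon\to 0$.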

3.5 Comments

The Tug-of-War with noise that we presented in this chapter was introduced in [85]. For a related version of this game see [98]. We can add a random walk as noise, getting
$$\frac{1}{2N}\sum_{j=1}^{N} \big(u^\varepsilon(x+\varepsilon e_j) + u^\varepsilon(x-\varepsilon e_j)\big)$$
instead of
$$\fint_{B_\varepsilon(x)} u^\varepsilon(y)\,dy$$
in the DPP, and obtain similar convergence results for the modified game. The equivalence between viscosity and strong solutions also holds for equations with spatial dependence; see [105]. In [102], a proof of the Harnack inequality for the p-Laplacian was obtained employing game-theoretic arguments related to the Tug-of-War with noise introduced in this chapter.
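Regarding the random-walk variant just mentioned: both averaging operators are second-order consistent with the Laplacian, only the constant changes ($\varepsilon^2/(2N)$ for the coordinate average versus $\varepsilon^2/(2(N+2))$ for the ball average), which is why the modified game enjoys similar convergence results. A small numerical check (not from the book; the test function and evaluation point are illustrative):

```python
import numpy as np

# Compare the two averaging operators on phi(x,y) = x^4 + y^2 at (0.3, 0.2):
#   coordinate average: (1/2N) sum_j (phi(x+eps e_j) + phi(x-eps e_j))
#   ball average:       mean of phi over B_eps(x)
# Both differ from phi(x) by (const * eps^2 * Delta phi) + O(eps^4).
N, eps = 2, 0.05

def phi(x, y):
    return x**4 + y**2

x0, y0 = 0.3, 0.2
lap = 12 * x0**2 + 2.0                     # exact Laplacian of phi

coord = (phi(x0 + eps, y0) + phi(x0 - eps, y0)
         + phi(x0, y0 + eps) + phi(x0, y0 - eps)) / (2 * N)

theta = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
r = (np.arange(400) + 0.5) * (eps / 400)
Rg, Tg = np.meshgrid(r, theta)
cells = phi(x0 + Rg * np.cos(Tg), y0 + Rg * np.sin(Tg)) * Rg
ball = cells.sum() * (eps / 400) * (2 * np.pi / theta.size) / (np.pi * eps**2)

c_coord = (coord - phi(x0, y0)) / (eps**2 * lap)   # ~ 1/(2N)
c_ball = (ball - phi(x0, y0)) / (eps**2 * lap)     # ~ 1/(2(N+2))
```

The two recovered constants are close to $1/4$ and $1/8$ respectively (here $N=2$), confirming that only the normalizing constant in front of the Laplacian changes between the two versions of the noise.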

4 Tug-of-War

In Chapter 2 we introduced the Tug-of-War game in its original and simplest form, but we did not treat it in detail. Our main goal now is to fill in the details. As we have mentioned, when we play Tug-of-War without noise it is possible for the players to choose strategies under which games that never end (that is, games that remain inside $\Omega$ for infinitely many turns) have positive probability. This fact creates an extra difficulty, since we need to show that both players have strategies that force the game to end no matter what the other player does (and that, at the same time, allow them to maximize their profit).

4.1 Dynamic Programming Principle

For the Tug-of-War with noise we defined the expected payoff as
$$\mathbb{E}^{x_0}_{S_I,S_{II}}[g(X_\tau)] = \int_{H^\infty} g(X_\tau(\omega))\, \mathbb{P}^{x_0}_{S_I,S_{II}}(d\omega).$$
For the Tug-of-War we do not know that the game ends almost surely, and hence the expression giving the expected payoff for the previous case (the Tug-of-War game with noise) is not well defined. We have to define the payoff in the case in which the game does not end almost surely. We define
$$V^{I,x_0}_{S_I,S_{II}} = \begin{cases} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(X_\tau)] & \text{if } \mathbb{P}^{x_0}_{S_I,S_{II}}\big(\{\omega\in H^\infty : \tau(\omega) < \infty\}\big) = 1,\\ -\infty & \text{otherwise} \end{cases}$$
and
$$V^{II,x_0}_{S_I,S_{II}} = \begin{cases} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(X_\tau)] & \text{if } \mathbb{P}^{x_0}_{S_I,S_{II}}\big(\{\omega\in H^\infty : \tau(\omega) < \infty\}\big) = 1,\\ +\infty & \text{otherwise.} \end{cases}$$
Then, the value of the game for Player I is given by
$$u^\varepsilon_I(x_0) = \sup_{S_I}\inf_{S_{II}} V^{I,x_0}_{S_I,S_{II}},$$
while the value of the game for Player II is given by
$$u^\varepsilon_{II}(x_0) = \inf_{S_{II}}\sup_{S_I} V^{II,x_0}_{S_I,S_{II}}.$$
We penalize severely any player choosing a strategy that does not end the game almost surely. As before, when $u^\varepsilon_I = u^\varepsilon_{II}$ we say that the game has a value $u^\varepsilon := u^\varepsilon_I = u^\varepsilon_{II}$, and the following theorem holds.

https://doi.org/10.1515/9783110621792-004

Theorem 4.1. The Tug-of-War has a value, and the game value satisfies

u^ε(x) = ½ sup_{Bε(x)} u^ε + ½ inf_{Bε(x)} u^ε    (DPP)

for x ∈ Ω.

Proof. Let u be a solution to the DPP (which can be obtained by Perron's method). We will show that u ≤ u^ε_I. The inequality u ≥ u^ε_II can be obtained in an analogous way, completing the proof. We want to find a strategy S_I⁰ for Player I that ensures a payoff close to u. She has to maximize the expected payoff and, at the same time, make sure that the game ends almost surely. To do that we will employ the backtracking strategy from the original argument utilized in [97]. To that end, we define

δ(x) = sup_{Bε(x)} u − u(x).

Fix η > 0 and a starting point x₀ ∈ Ω, and set δ₀ = min{δ(x₀), ε}/2. We suppose for now that δ₀ > 0, and we define X₀ = {x ∈ Ω : δ(x) > δ₀}. We consider a strategy S_I⁰ for Player I that distinguishes between the cases x_k ∈ X₀ and x_k ∉ X₀. To that end, we let

m_k = { u(x_k) − η2^{−k}             if x_k ∈ X₀,
      { u(y_k) − δ₀ d_k − η2^{−k}    if x_k ∉ X₀,

where y_k denotes the last game position in X₀ up to time k, and d_k is the distance, measured in number of steps, from x_k to y_k along the graph spanned by the previous points y_k = x_{k−j}, x_{k−j+1}, …, x_k that were used to get from y_k to x_k. In what follows we define a strategy for Player I and prove that m_k is a submartingale. First, if x_k ∈ X₀, then Player I chooses to step to a point x_{k+1} satisfying

u(x_{k+1}) ≥ sup_{Bε(x_k)} u − η_{k+1} 2^{−(k+1)},

where η_{k+1} ∈ (0, η] is small enough to guarantee that x_{k+1} ∈ X₀. Let us remark that

u(x) − inf_{Bε(x)} u ≤ sup_{Bε(x)} u − u(x) = δ(x)    (4.1)

and hence

δ(x_k) − η_{k+1} 2^{−(k+1)} ≤ sup_{Bε(x_k)} u − u(x_k) − η_{k+1} 2^{−(k+1)}
                            ≤ u(x_{k+1}) − u(x_k)
                            ≤ u(x_{k+1}) − inf_{Bε(x_{k+1})} u
                            ≤ δ(x_{k+1}).

Therefore, we can guarantee that x_{k+1} ∈ X₀ by choosing η_{k+1} such that δ₀ < δ(x_k) − η_{k+1} 2^{−(k+1)}. Thus if x_k ∈ X₀ and Player I gets to choose the next game position, we have

m_{k+1} ≥ u(x_k) + δ(x_k) − η_{k+1} 2^{−(k+1)} − η2^{−(k+1)} ≥ u(x_k) + δ(x_k) − η2^{−k} = m_k + δ(x_k).

If Player II wins the toss and moves from x_k ∈ X₀ to x_{k+1} ∈ X₀, we have, in view of (4.1),

m_{k+1} ≥ u(x_k) − δ(x_k) − η2^{−(k+1)} > m_k − δ(x_k).

If Player II wins the toss and he moves to a point x_{k+1} ∉ X₀ (whether x_k ∈ X₀ or not), we have

m_{k+1} = u(y_k) − d_{k+1} δ₀ − η2^{−(k+1)} ≥ u(y_k) − d_k δ₀ − δ₀ − η2^{−k} = m_k − δ₀.    (4.2)

In the case x_k ∉ X₀, the strategy for Player I is to backtrack to y_k; that is, if she wins the coin toss, she moves the token to one of the points x_{k−j}, x_{k−j+1}, …, x_{k−1} closer to y_k, so that d_{k+1} = d_k − 1. Thus if Player I wins and x_k ∉ X₀ (whether x_{k+1} ∈ X₀ or not),

m_{k+1} ≥ δ₀ + m_k.

If Player II wins the coin toss and moves from x_k ∉ X₀ to x_{k+1} ∈ X₀, then

m_{k+1} = u(x_{k+1}) − η2^{−(k+1)} ≥ −δ(x_k) + u(x_k) − η2^{−k} ≥ −δ₀ + m_k,

where the first inequality is due to (4.1), and the second follows from the fact that m_k = u(y_k) − d_k δ₀ − η2^{−k} ≤ u(x_k) − η2^{−k}. The same bound was obtained in (4.2) when x_{k+1} ∉ X₀.

Taking into account all the different cases, we see that m_k is a submartingale, and it is bounded. Since Player I can assure that m_{k+1} ≥ m_k + δ₀ if she gets to move the token, the game must terminate almost surely. This is because there are arbitrarily long sequences of moves made by Player I (if she does not end the game immediately). By Fatou's lemma and the optional stopping theorem we get

u^ε_I(x₀) = sup_{S_I} inf_{S_II} 𝔼^{x₀}_{S_I,S_II}[g(x_τ)]
          ≥ inf_{S_II} 𝔼^{x₀}_{S_I⁰,S_II}[g(x_τ) − η2^{−τ}]
          ≥ inf_{S_II} lim inf_{k→∞} 𝔼^{x₀}_{S_I⁰,S_II}[u(x_{τ∧k}) − η2^{−(τ∧k)}]
          ≥ inf_{S_II} 𝔼^{x₀}_{S_I⁰,S_II}[m₀] = u(x₀) − η.

Since η > 0 is arbitrary, we have proved that u ≤ u^ε_I. Finally, let us remove the assumption that δ(x₀) > 0. If δ(x₀) = 0 for x₀ ∈ Ω, when Tug-of-War is played, Player I adopts a strategy of pulling towards a boundary point until the game token reaches a point x₀′ such that δ(x₀′) > 0 or x₀′ is outside Ω. Observe that this strategy ends the game almost surely, since Player I gets arbitrarily long sequences of consecutive moves with full probability. And, since (4.1) holds, we have u(x₀) = u(x₀′).
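The DPP above can also be solved numerically by fixed-point iteration. The following sketch is our illustration, not from the book; the one-dimensional domain Ω = (0, 1), the grid, and the boundary datum g(x) = x are assumptions chosen for the example. In 1-D the limit of the values is the AMLE of g, here the linear function, and the discrete values reproduce it.

```python
import numpy as np

# Solve the DPP  u(x) = 1/2 sup_{B_eps(x)} u + 1/2 inf_{B_eps(x)} u
# on a 1-D grid for Omega = (0, 1), with eps equal to the grid step
# and Dirichlet datum g(x) = x at the endpoints.

n = 11
xs = np.linspace(0.0, 1.0, n)
u = np.zeros(n)
u[0], u[-1] = 0.0, 1.0          # boundary values g(0) = 0, g(1) = 1

for _ in range(5000):           # iterate the DPP until it stabilizes
    new = u.copy()
    for i in range(1, n - 1):
        w = u[i - 1:i + 2]      # values on the eps-ball around x_i
        new[i] = 0.5 * w.max() + 0.5 * w.min()
    if np.max(np.abs(new - u)) < 1e-14:
        break
    u = new

print(np.max(np.abs(u - xs)))   # distance to the linear (AMLE) profile
```

Starting from zero inside the domain, the iterates increase monotonically to the unique solution of the discrete DPP, which here is the linear interpolation of the boundary data.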

4.2 Game value convergence

Here we prove Theorem 2.6 rigorously. For simplicity, we assume here that g is Lipschitz.

Proof of Theorem 2.6. The proof follows the usual steps. Since inf g ≤ u^ε ≤ sup g, in order to apply the Arzelà–Ascoli compactness lemma, Lemma 3.6, it remains to prove that the sequence is asymptotically uniformly continuous. We will show that

u^ε(x) − u^ε(y) ≤ Lip(g) ε

for every x, y such that |x − y| < ε. Note that this implies that

u^ε(x) − u^ε(y) ≤ Lip(g) dε(x, y),


where dε(⋅, ⋅) refers to the discrete distance given by

dε(x, y) = { 0     if x = y,
           { ε     if 0 < |x − y| ≤ ε,
           { 2ε    if ε < |x − y| ≤ 2ε,
           { ⋮

If g is constant, g ≡ c, there is nothing to prove, since the whole sequence is constant, u^ε ≡ c for every ε > 0. If not, then Lip(g) > 0. Also, when both points x and y are outside Ω the result holds. Hence, it remains to prove the bound when at least one of the points x, y is in Ω. Suppose, arguing by contradiction, that there exists x₀ ∈ Ω such that for some x₁ ∈ Bε(x₀)

u^ε(x₁) − u^ε(x₀) > Lip(g) ε.

Then, if x₁ ∈ Ω, using the DPP at x₁, we obtain

sup_{Bε(x₁)} u^ε − u^ε(x₁) = u^ε(x₁) − inf_{Bε(x₁)} u^ε ≥ u^ε(x₁) − u^ε(x₀) > Lip(g) ε.

Hence, by considering a point where the supremum is almost achieved, we get x₂ ∈ Bε(x₁) such that u^ε(x₂) − u^ε(x₁) > Lip(g)ε. Inductively we can construct x_k such that u^ε(x_k) − u^ε(x_{k−1}) > Lip(g)ε. Since u^ε is bounded, there exists n such that x_n ∈ ℝN \ Ω. In a similar way we can construct x_{−1} such that u^ε(x₀) − u^ε(x_{−1}) > Lip(g)ε and, inductively, x_{−k} such that u^ε(x_{−k}) − u^ε(x_{−k−1}) > Lip(g)ε. As before, since u^ε is bounded, there exists m such that x_{−m} ∈ ℝN \ Ω. We have

(g(x_n) − g(x_{−m})) / |x_n − x_{−m}| ≥ ( ∑_{k=−m+1}^{n} (u^ε(x_k) − u^ε(x_{k−1})) ) / (ε(n + m)) > Lip(g),

which is a contradiction. We have proved that the sequence converges up to subsequences. Computations similar to the ones done in the proof of Theorem 3.11 show that the limit is an ∞-harmonic function. Hence, since uniqueness holds for the Dirichlet problem for the ∞-Laplacian equation, we obtain that the whole sequence converges. Let us remark that in the proof we have obtained that the limit u is Lipschitz with the same Lipschitz constant as the boundary datum. This is consistent with the aforementioned results in Section 2.2; recall that u is the AMLE of g.

4.3 Game with running payoff

As in the Tug-of-War with noise, we can add a running payoff to the game. Here, however, in order to prove that the game has a value, we have to require that f does not change sign. Under this hypothesis one can prove that the game has a value.

Theorem 4.2. The Tug-of-War with running payoff f such that

f ≡ 0,    inf_Ω f > 0,    or    sup_Ω f < 0

has a value, and the game value satisfies

u^ε(x) = ε² f(x) + ½ sup_{Bε(x)} u^ε + ½ inf_{Bε(x)} u^ε    (DPP)

for x ∈ Ω.
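The modified DPP can be iterated numerically as well. The following sketch is ours, not from the book; the one-dimensional setup and the normalization are assumptions. Formally expanding the DPP suggests that the limit solves −Δ^H_∞ u = 2f, so for f ≡ 1 on Ω = (0, 1) with g = 0 the limit should be x(1 − x), and the discrete values match it up to an error of order ε near the maximum.

```python
import numpy as np

# DPP with running payoff f = 1 on Omega = (0, 1), g = 0, eps = grid step h:
#   u(x) = eps^2 f(x) + 1/2 max_{B_eps(x)} u + 1/2 min_{B_eps(x)} u.

n = 21
h = 1.0 / (n - 1)
xs = np.linspace(0.0, 1.0, n)
u = np.zeros(n)                     # u = g = 0 at the endpoints

for _ in range(5000):
    new = u.copy()
    for i in range(1, n - 1):
        w = u[i - 1:i + 2]
        new[i] = h * h + 0.5 * w.max() + 0.5 * w.min()
    if np.max(np.abs(new - u)) < 1e-14:
        break
    u = new

print(np.max(np.abs(u - xs * (1.0 - xs))))  # small, of order eps
```

Since f > 0, the iterates increase monotonically from zero and stay below the discrete solution, so the scheme converges.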

4.4 A game without value

Let us show a counterexample to the existence of a value for this game in the case that f changes sign. For other counterexamples we refer to Section 5 in [97]. Let us suppose that two players play Tug-of-War and the token can be moved between only three different positions, as illustrated in Figure 4.1. In one of them, z in the figure, the game ends with 0 as final payoff. At x the running payoff is 1 (Player II pays Player I one unit), and at y the running payoff is −1 (Player I pays Player II one unit).

Figure 4.1: Game with no value.

It can be seen that for any value −1 ≤ a ≤ 1, the function given by

u(z) = 0,    u(x) = a + 1,    u(y) = a − 1

is a solution to the DPP. Even more, u_I is the smallest of these functions and u_II is the largest one, that is,

u_I(z) = 0,    u_I(x) = 0,    u_I(y) = −2

and

u_II(z) = 0,    u_II(x) = 2,    u_II(y) = 0.

Hence, the game does not have a value, since we have uI ≠ uII . This instance of the Tug-of-War can appear in our general setting when playing in a domain in ℝN . Let us consider a bounded domain Ω, a final payoff function g ≡ 0, and a running payoff function f such that −1 ≤ f ≤ 1 and assume that there exist x, y ∈ Ω with f (x) = 1 and f (y) = −1. Suppose that ε = 1 is larger than the diameter of Ω. Hence at every step a player can move the token to any point in Ω and also can finish the game by moving outside Ω. Since g ≡ 0, the options for each player are the same at every point, to move to another point in Ω or to end the game with final payoff 0. If Player I gets to move the token, she can end the game or move to a point in Ω; in this case she will move to x where f attains its maximum, 1. Recall that she wants to maximize the final payoff. In the same way Player II will move to y, where the minimum is attained. We end up with a game equivalent to the one described above that does not have a value.
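The one-parameter family of DPP solutions can be checked directly. In the following sketch (ours, not from the book) we use the convention, consistent with the description above, that from x or y the winner of the toss may move the token to any of the three positions, at z the game stops with payoff 0, and the DPP at p ∈ {x, y} reads u(p) = f(p) + ½ max u + ½ min u; the endpoints a = −1 and a = 1 recover u_I and u_II.

```python
# Check that u(z) = 0, u(x) = a + 1, u(y) = a - 1 solves the DPP of the
# three-position game for every -1 <= a <= 1 (running payoff f(x) = 1,
# f(y) = -1; at z the game ends with payoff 0).

def dpp_residuals(a):
    u = {"z": 0.0, "x": a + 1.0, "y": a - 1.0}
    f = {"x": 1.0, "y": -1.0}
    vals = list(u.values())
    # DPP at the two non-terminal positions: u(p) = f(p) + 1/2 max + 1/2 min
    return [u[p] - (f[p] + 0.5 * max(vals) + 0.5 * min(vals))
            for p in ("x", "y")]

for a in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    assert all(abs(r) < 1e-12 for r in dpp_residuals(a))
print("every a in [-1, 1] gives a DPP solution")
```

Here a = −1 produces (0, 0, −2), the value u_I for Player I, and a = 1 produces (0, 2, 0), the value u_II for Player II, so the two values indeed differ.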

4.5 Comments

This Tug-of-War game can be played in more general sets. For example, in a graph one selects some terminal nodes and a final payoff function g on these nodes. Then the possible movements of the token are given by the nodes that are connected with the current position of the game. The rules of the game are exactly the ones described in this book; after a coin toss the winner chooses the new position of the game among nodes connected with the current position, and the players continue until the token arrives at a terminal node. This kind of game can also be played in a directed tree (that is, an infinite graph in which each node has exactly m successors). The study of this game was performed in [82], [106].

5 Mixed boundary conditions and the obstacle problem

In this chapter we first look at mixed boundary conditions and analyze a game whose values approximate this kind of problem. The main idea is that near points where we want a homogeneous Neumann boundary condition we simply restrict the possible movements to stay inside Ω; therefore, the token cannot exit the domain across this part of the boundary. Next, we consider the obstacle problem. In this case the obstacle enters the game as an option available to one of the players at every point inside Ω: to stop the game and collect the amount given by a prescribed function (the obstacle).

5.1 Mixed boundary conditions and Tug-of-War games

Here our aim is to introduce a game related to elliptic problems in which we combine two different kinds of boundary conditions: on some part of the boundary we impose a Dirichlet boundary condition, u = g, while on another part of the boundary we impose a homogeneous Neumann boundary condition, ∂ν u = 0.

5.1.1 Description of the game

Now, our goal is to see how we can obtain mixed (Dirichlet/Neumann) boundary conditions in the limit PDE problem. Consider a bounded domain Ω ⊂ ℝN, and take ΓD ⊂ ∂Ω and ΓN ≡ ∂Ω \ ΓD. Then we have ∂Ω = ΓD ∪ ΓN. Let g : ΓD → ℝ be a Lipschitz continuous function. At an initial time, a token is placed at a point x₀ ∈ Ω \ ΓD. Then the players play the usual Tug-of-War game, but now they can choose positions of the game only in Ω. A (fair) coin is tossed and the winner of the toss is allowed to move the game position to any x₁ ∈ Bε(x₀) ∩ Ω. At each turn, the coin is tossed again, and the winner chooses a new game state x_k ∈ Bε(x_{k−1}) ∩ Ω. Once the token has reached some x_τ ∈ ΓD, the game ends and Player I earns g(x_τ) (while Player II earns −g(x_τ)). Note that when the position of the token is close to ΓN = ∂Ω \ ΓD, the possible choices of the next position lie in a set that is not symmetric with respect to the previous position. As before (see the previous chapter), we have the value of the game, given by

u^ε(x₀) = sup_{S_I} inf_{S_II} V^{I,x₀}_{S_I,S_II} = inf_{S_II} sup_{S_I} V^{II,x₀}_{S_I,S_II}.

Again, here we penalize severely the games that never end; see Section 4.1.

It can be proved, as we did before, that all these ε-values are Lipschitz functions with respect to the discrete distance dε given by

dε(x, y) = { 0     if x = y,
           { ε     if 0 < |x − y| ≤ ε,
           { 2ε    if ε < |x − y| ≤ 2ε,
           { ⋮

See [97] (although in general the values are not continuous). Thanks to this Lipschitz property and the fact that the values are uniformly bounded, we can use the Arzelà–Ascoli type lemma to show that there is a subsequence εj → 0 and a continuous function u such that u^{εj} → u uniformly in Ω. Our next goal is to find the PDE problem that this limit satisfies in the viscosity sense.

5.1.2 Game value convergence

When ΓD ≡ ∂Ω, that is, ΓN = ∅, we proved in Chapter 4 that u is a viscosity solution to the problem

{ −Δ^H_∞ u(x) = 0    x ∈ Ω,
{ u(x) = g(x)        x ∈ ΓD ≡ ∂Ω.        (5.1)

However, when ΓD ≠ ∂Ω the PDE problem (5.1) is incomplete, since there is a missing boundary condition on ΓN = ∂Ω \ ΓD. Our concern now is to find the boundary condition that completes the problem. We assume that ΓN is regular, in the sense that the normal vector field ν(x) is well defined and continuous for all x ∈ ΓN. In fact, we will see that the boundary condition that appears when computing the limit of the game values is the homogeneous Neumann boundary condition

∂u/∂ν (x) = 0,    x ∈ ΓN.

The key point in the proof is, as before, the Dynamic Programming Principle, which in our case reads as follows: the value of the game u^ε verifies

u^ε(x) = ½ sup_{Bε(x)∩Ω} u^ε + ½ inf_{Bε(x)∩Ω} u^ε    for all x ∈ Ω \ ΓD.

We have the following result.


Theorem 5.1. Let u be a uniform limit of the values of the Tug-of-War game described above. Then u is a viscosity solution to the mixed boundary value problem

{ −Δ^H_∞ u(x) = 0    x ∈ Ω,
{ ∂u/∂ν (x) = 0      x ∈ ΓN,
{ u(x) = g(x)        x ∈ ΓD.

Here, as before,

Δ^H_∞ u(x) = ⟨D²u(x) ∇u(x)/|∇u(x)|, ∇u(x)/|∇u(x)|⟩.

We understand solutions to Δ^H_∞ u = 0 in the viscosity sense, and hence we refer to Chapter 3 and Appendix A for the definition of being a super- and a subsolution to this equation inside Ω. Now, we give the precise definition of viscosity solution including the Neumann boundary condition, following [14]. Recall that the homogeneous infinity Laplacian is not well defined when the gradient vanishes. Hence, in order to give the definition we need to employ the semicontinuous envelopes of the operator; see Chapter 3 and Appendix A. We define

F(v, X) = −⟨X v/|v|, v/|v|⟩.

The lower semicontinuous envelope F_* and the upper semicontinuous envelope F^* of F coincide with F for v ≠ 0, and for v = 0 they are given by

F^*(0, X) = −λmin(X)    and    F_*(0, X) = −λmax(X),

where λmin(X) = min{λ : λ is an eigenvalue of X} and λmax(X) = max{λ : λ is an eigenvalue of X}.

Definition 5.2. (1) A lower semicontinuous function u is a viscosity supersolution if for every smooth ϕ such that u − ϕ has a strict minimum at the point x₀ ∈ Ω̄ with u(x₀) = ϕ(x₀) we have the following. If x₀ ∈ ΓD, then g(x₀) ≤ ϕ(x₀); if x₀ ∈ ΓN, then the inequality

max{⟨ν(x₀), ∇ϕ(x₀)⟩, F^*(∇ϕ(x₀), D²ϕ(x₀))} ≥ 0

holds; and if x₀ ∈ Ω, then we require F^*(∇ϕ(x₀), D²ϕ(x₀)) ≥ 0.

(2) An upper semicontinuous function u is a subsolution if for every smooth ψ such that u − ψ has a strict maximum at the point x₀ ∈ Ω̄ with u(x₀) = ψ(x₀) we have the following. If x₀ ∈ ΓD, then g(x₀) ≥ ψ(x₀); if x₀ ∈ ΓN, then the inequality

min{⟨ν(x₀), ∇ψ(x₀)⟩, F_*(∇ψ(x₀), D²ψ(x₀))} ≤ 0

holds; and if x₀ ∈ Ω, then we require F_*(∇ψ(x₀), D²ψ(x₀)) ≤ 0.

(3) Finally, u is a viscosity solution if it is both a super- and a subsolution.

Note that the notion of viscosity solution at points that are on ΓN asks that only one of the required inequalities hold. For example, touching at a point x₀ ∈ ΓN from below with a smooth test function ϕ with ∇ϕ(x₀) ≠ 0, we need to check that

⟨ν(x₀), ∇ϕ(x₀)⟩ ≥ 0    or    −|∇ϕ(x₀)|⁻² ⟨D²ϕ(x₀)∇ϕ(x₀), ∇ϕ(x₀)⟩ ≥ 0,

and if ∇ϕ(x₀) = 0 we need

⟨ν(x₀), ∇ϕ(x₀)⟩ ≥ 0    or    −λmin(D²ϕ(x₀)) ≥ 0,

where the first alternative trivially holds, since ∇ϕ(x₀) = 0. Therefore, let us also observe that in the definition we have written

max{⟨ν(x₀), ∇ϕ(x₀)⟩, F^*(∇ϕ(x₀), D²ϕ(x₀))} ≥ 0,

where in the left-hand side the upper semicontinuous envelope of the operator −Δ^H_∞ u appears. Although we have written it following the usual definition, in this case the envelopes play no role, since in the case that the gradient vanishes the inequality is trivially fulfilled. Hence, we can write

max{⟨ν(x₀), ∇ϕ(x₀)⟩, −Δ^H_∞ ϕ(x₀)} ≥ 0

or only consider test functions with nonvanishing gradient.

Proof of Theorem 5.1. The starting point is the following DPP:

2u^ε(x) = sup_{Bε(x)∩Ω} u^ε + inf_{Bε(x)∩Ω} u^ε    for all x ∈ Ω \ ΓD.    (5.2)


Let us check that u (a uniform limit of u^ε) is a viscosity supersolution. To this end, consider a smooth function ϕ such that u − ϕ has a strict local minimum at x₀, i.e.,

u(x) − ϕ(x) > u(x₀) − ϕ(x₀),    x ≠ x₀.

Without loss of generality, we can suppose that ϕ(x₀) = u(x₀). Let us see the inequality that these test functions satisfy as a consequence of the DPP. From the uniform convergence of u^ε to u, there exists a sequence xε → x₀ such that

u^ε(x) − ϕ(x) ≥ u^ε(xε) − ϕ(xε) − o(ε²)    (5.3)

for every x in a fixed neighborhood of x₀. From (5.3), we deduce

sup_{Bε(xε)∩Ω} u^ε ≥ max_{Bε(xε)∩Ω} ϕ + u^ε(xε) − ϕ(xε) − o(ε²)

and

inf_{Bε(xε)∩Ω} u^ε ≥ min_{Bε(xε)∩Ω} ϕ + u^ε(xε) − ϕ(xε) − o(ε²).

Then we have from (5.2)

2ϕ(xε) ≥ max_{Bε(xε)∩Ω} ϕ + min_{Bε(xε)∩Ω} ϕ − 2o(ε²).    (5.4)

It is clear that the uniform limit of u^ε, u, verifies

u(x) = g(x),    x ∈ ΓD.

In Ω \ ΓD there are two possibilities: x₀ ∈ Ω and x₀ ∈ ΓN. In the former case we have to check that F^*(∇ϕ(x₀), D²ϕ(x₀)) ≥ 0, while in the latter, what we have to prove is

max{∂ϕ/∂ν (x₀), F^*(∇ϕ(x₀), D²ϕ(x₀))} ≥ 0.

Case A. First, assume that x₀ ∈ Ω. In this case we have to prove that F^*(∇ϕ(x₀), D²ϕ(x₀)) ≥ 0. This can be done as in the proof of Theorem 3.11. Let us include the details, as we will use them in Case B. If ∇ϕ(x₀) ≠ 0 we proceed as follows. Since ∇ϕ(x₀) ≠ 0, we also have ∇ϕ(xε) ≠ 0 for ε small enough.

In the sequel, x¹ε, x²ε ∈ Ω̄ will be the points such that

ϕ(x¹ε) = max_{y∈Bε(xε)∩Ω} ϕ(y)    and    ϕ(x²ε) = min_{y∈Bε(xε)∩Ω} ϕ(y).

We remark that x¹ε, x²ε ∈ ∂Bε(xε). Hence, since Bε(xε) ∩ ∂Ω = ∅, we have

x¹ε = xε + ε[∇ϕ(xε)/|∇ϕ(xε)| + o(1)]    and    x²ε = xε − ε[∇ϕ(xε)/|∇ϕ(xε)| + o(1)]    (5.5)

as ε → 0. This can be deduced from the fact that, for ε small enough, ϕ is approximately the same as its tangent plane. In fact, if we write x¹ε = xε + εvε with |vε| = 1 and we fix any direction w, then the Taylor expansion of ϕ gives

ϕ(xε) + ⟨∇ϕ(xε), εvε⟩ + o(ε) = ϕ(x¹ε) ≥ ϕ(xε + εw),

and hence

⟨∇ϕ(xε), vε⟩ + o(1) ≥ (ϕ(xε + εw) − ϕ(xε))/ε = ⟨∇ϕ(xε), w⟩ + o(1)

for any direction w. This implies

vε = ∇ϕ(xε)/|∇ϕ(xε)| + o(1).

Now, consider the second-order Taylor expansion of ϕ,

ϕ(y) = ϕ(xε) + ∇ϕ(xε)·(y − xε) + ½⟨D²ϕ(xε)(y − xε), (y − xε)⟩ + o(|y − xε|²)

as |y − xε| → 0. Evaluating the above expansion at the point at which ϕ attains its minimum in Bε(xε), x²ε, we get

ϕ(x²ε) = ϕ(xε) + ∇ϕ(xε)·(x²ε − xε) + ½⟨D²ϕ(xε)(x²ε − xε), (x²ε − xε)⟩ + o(ε²)

as ε → 0. Evaluating at its symmetric point in the ball Bε(xε), given by x̃²ε = 2xε − x²ε, we get

ϕ(x̃²ε) = ϕ(xε) − ∇ϕ(xε)·(x²ε − xε) + ½⟨D²ϕ(xε)(x²ε − xε), (x²ε − xε)⟩ + o(ε²).

Adding both expressions, we obtain

ϕ(x̃²ε) + ϕ(x²ε) − 2ϕ(xε) = ⟨D²ϕ(xε)(x²ε − xε), (x²ε − xε)⟩ + o(ε²).    (5.6)


We observe that, by our choice of x²ε as the point where the minimum is attained,

ϕ(x̃²ε) + ϕ(x²ε) − 2ϕ(xε) ≤ max_{y∈Bε(xε)∩Ω} ϕ(y) + min_{y∈Bε(xε)∩Ω} ϕ(y) − 2ϕ(xε) ≤ o(ε²).

Therefore

0 ≥ ⟨D²ϕ(xε)(x²ε − xε), (x²ε − xε)⟩ + o(ε²).

Note that from (5.5) we get

lim_{ε→0} (x²ε − xε)/ε = −(∇ϕ/|∇ϕ|)(x₀).

Then, dividing by ε² and passing to the limit, we get 0 ≤ −Δ^H_∞ ϕ(x₀). Now, if ∇ϕ(x₀) = 0, we can argue exactly as above and, moreover, we can suppose (considering a subsequence) that

(x²ε − xε)/ε → v₂    as ε → 0

for some v₂ ∈ ℝN. Thus, we obtain

0 ≤ −⟨D²ϕ(x₀)v₂, v₂⟩ ≤ −λmin(D²ϕ(x₀)).

Observe that in this case, as in the proof of Theorem 3.11, when proving that u is a supersolution we only need to consider x²ε, the point where the minimum of ϕ is attained, but in the former case both points, x¹ε and x²ε, will play an important role.

Case B. Suppose that x₀ ∈ ΓN. There are four subcases to be considered, depending on the direction of the gradient ∇ϕ(x₀) and the distance of the points xε to the boundary.

Case 1: If either ∇ϕ(x₀) = 0, or ∇ϕ(x₀) ≠ 0 and ∇ϕ(x₀) ⊥ ν(x₀), then ∂ϕ/∂ν (x₀) = 0 and hence

max{∂ϕ/∂ν (x₀), F^*(∇ϕ(x₀), D²ϕ(x₀))} ≥ 0.

Case 2: We have lim inf_{ε→0} dist(xε, ∂Ω)/ε > 1 and ∇ϕ(x₀) ≠ 0.

∇ϕ(xε ) + o(1)] |∇ϕ(xε )|

and x2ε = xε − ε[

∇ϕ(xε ) + o(1)] |∇ϕ(xε )|

58 | 5 Mixed boundary conditions and the obstacle problem as ε → 0. Note that both x1ε , x2ε → 𝜕Bε (xε ). This can be deduced from the fact that, for ε small enough, ϕ is approximately the same as its tangent plane. Then we can argue exactly as before (when x0 ∈ Ω) to obtain that 0 ≤ −ΔH ∞ ϕ(x0 ). Case 3: We have lim supε→0

dist(xε ,𝜕Ω) ε

≤ 1, and ∇ϕ(x0 ) ≠ 0 points inwards Ω.

In this case, for ε small enough, ∇ϕ(xε ) ≠ 0 points inwards as well. Thus, x1ε = xε + ε[

∇ϕ(xε ) + o(1)] ∈ Ω, |∇ϕ(xε )|

while x2ε ∈ Ω ∩ Bε (xε ). Indeed, |x2ε − xε | = δε ≤ 1. ε We have the following first-order Taylor expansions: 󵄨 󵄨 ϕ(x1ε ) = ϕ(xε ) + ε󵄨󵄨󵄨∇ϕ(xε )󵄨󵄨󵄨 + o(ε) and ϕ(x2ε ) = ϕ(xε ) + ∇ϕ(xε ) ⋅ (x2ε − xε ) + o(ε), as ε → 0. Adding both expressions, we arrive at 󵄨 󵄨 ϕ(x1ε ) + ϕ(x2ε ) − 2ϕ(xε ) = ε󵄨󵄨󵄨∇ϕ(xε )󵄨󵄨󵄨 + ∇ϕ(xε ) ⋅ (x2ε − xε ) + o(ε). Using (5.4) and dividing by ε > 0, (xε − xε ) 󵄨 󵄨 0 ≥ 󵄨󵄨󵄨∇ϕ(xε )󵄨󵄨󵄨 + ∇ϕ(xε ) ⋅ 2 + o(1) ε as ε → 0. We can write 󵄨 󵄨 0 ≥ 󵄨󵄨󵄨∇ϕ(xε )󵄨󵄨󵄨(1 + δε cos θε ) + o(1), where θε = angle(∇ϕ(xε ),

(x2ε − xε ) ). ε

Letting ε → 0 we get 󵄨 󵄨 0 ≥ 󵄨󵄨󵄨∇ϕ(x0 )󵄨󵄨󵄨(1 + δ0 cos θ0 ),

5.1 Mixed boundary conditions and Tug-of-War games | 59

where δ0 ≤ 1, and θ0 = lim θε = angle(∇ϕ(x0 ), v(x0 )), ε→0

with x2ε − xε . ε→0 ε

v(x0 ) = lim

Since |∇ϕ(x0 )| ≠ 0, we find (1 + δ0 cos θ0 ) ≤ 0, and then θ0 = π and δ0 = 1. Hence x2ε − xε ∇ϕ =− (x ), ε→0 ε |∇ϕ| 0 lim

(5.7)

or, equivalently, x2ε = xε − ε[

∇ϕ(xε ) + o(1)]. |∇ϕ(xε )|

Now, consider x̃2ε = 2xε − x2ε the symmetric point of x2ε with respect to xε . We go back to (5.4) and use the Taylor expansions of second order, 1 ϕ(x2ε ) = ϕ(xε ) + ∇ϕ(xε )(x2ε − xε ) + ⟨D2 ϕ(xε )(x2ε − xε ), (x2ε − xε )⟩ + o(ε2 ) 2 and 1 ϕ(x̃2ε ) = ϕ(xε ) + ∇ϕ(xε )(x̃2ε − xε ) + ⟨D2 ϕ(xε )(x̃2ε − xε ), (x̃2ε − xε )⟩ + o(ε2 ), 2 to get 0≥

min

y∈Bε (xε )∩Ω

ϕ(y) +

max

y∈Bε (xε )∩Ω

ϕ(y) − 2ϕ(xε )

≥ ϕ(x2ε ) + ϕ(x̃2ε ) − 2ϕ(xε ) 1 = ∇ϕ(xε )(x2ε − xε ) + ∇ϕ(xε )(x̃2ε − xε ) + ⟨D2 ϕ(xε )(x2ε − xε ), (x2ε − xε )⟩ 2 1 2 + ⟨D ϕ(xε )(x̃2ε − xε ), (x̃2ε − xε )⟩ + o(ε2 ) 2 = ⟨D2 ϕ(xε )(x2ε − xε ), (x2ε − xε )⟩ + o(ε2 ), by the definition of x̃2ε . Then, we can divide by ε2 and use (5.7) to obtain −ΔH ∞ ϕ(x0 ) ≥ 0. Case 4: We have lim supε→0

dist(xε ,𝜕Ω) ε

≤ 1, and ∇ϕ(x0 ) ≠ 0 points outwards Ω.

60 | 5 Mixed boundary conditions and the obstacle problem In this case we have 𝜕ϕ (x ) = ⟨∇ϕ(x0 ); ν(x0 )⟩ ≥ 0, 𝜕ν 0 since n(x0 ) is the exterior normal at x0 and ∇ϕ(x0 ) points outwards Ω. Thus max{

𝜕ϕ (x ), −ΔH ∞ ϕ(x0 )} ≥ 0, 𝜕ν 0

and we conclude that u is a viscosity supersolution. It remains to check that u is a viscosity subsolution. This fact can be proved in an analogous way, taking some care in the choice of the points where we perform Taylor expansions. In fact, instead of taking (5.6) we have to choose x̃1ε = 2xε − x1ε , that is, the reflection of the point where the maximum in the ball Bε (xε ) of the test function is attained. This ends the proof.

5.2 The obstacle problem for Tug-of-War games

Our second main goal in this chapter is to study the obstacle problem in the context of games. That is, we propose a game that involves a function Ψ (the obstacle) and that is such that the value function of the game is above it in the whole domain. With this definition at hand, we prove existence, uniqueness, and some properties of the value functions of this game, and we find that a uniform limit of these functions exists and is a viscosity solution of the obstacle problem for the ∞-Laplacian.

The obstacle problem for elliptic operators has been extensively studied. In the classical approach one seeks to minimize the energy E(u) = ∫_Ω |Du|² among the functions that coincide with a given function g at the boundary of Ω ⊂ ℝN and remain above a prescribed obstacle Ψ. Such a problem is motivated by the description of the equilibrium position of a membrane (the graph of the solution) that is attached at level g along the boundary of Ω and that is forced to remain above the obstacle in the interior of Ω. Many of the results obtained for the Laplacian were generalized to the p-Laplacian, whose energy functional is given by E(u) = ∫_Ω |Du|^p. However, the ∞-Laplacian is not variational (hence no energy methods are directly available). One may rely on methods from potential theory (Perron method), on limit procedures like the ones described here, or one can take the limit as p → ∞ in the obstacle problem for the p-Laplacian and obtain a solution to the obstacle problem for the ∞-Laplacian; see [100] for example.

Let us describe briefly the game in which we are interested. We play the Tug-of-War game described in detail in the previous chapters, but now we also have an obstacle


Ψ : ℝN → ℝ such that Ψ ≤ g in ℝN \ Ω. As in ordinary Tug-of-War, if the boundary is reached at x_τ, then Player I receives g(x_τ). However, in our case, Player I has the additional choice to stop the game at any position x_n ∈ Ω and receive the payoff given by the obstacle, Ψ(x_n). This is much like the case of American options, where investors can exercise the option at any time up to expiry and accept a payoff equal to the intrinsic value (which in our case is the obstacle), or they may wait (continue to play) if the expected benefit of waiting is greater than the intrinsic value. Our game is similar to investing in American options in that optimal strategies are to stop where the value function agrees with the intrinsic value. Underlying is the idea that Player I will choose to stop where her value function is equal to the obstacle. However, that she does so as a part of her strategy is not a requirement for any of the proofs. We have the following results concerning properties of u^ε, the value function of this game, where ε indicates the maximum size of each move.

Theorem 5.3. The game has a value. This value is the solution to the discrete obstacle problem; that is, it satisfies

u^ε(x) ≥ ½ sup_{Bε(x)} u^ε + ½ inf_{Bε(x)} u^ε,

lies above the obstacle in Ω, u^ε ≥ Ψ, coincides with g outside Ω, and is such that the above inequality is an equality wherever u^ε lies strictly above the obstacle.

In addition, a comparison principle holds. Let u^ε₁, u^ε₂ be values of the ε games with boundary functions g₁, g₂ and obstacles Ψ₁, Ψ₂, respectively. If g₁ ≥ g₂ and Ψ₁ ≥ Ψ₂, then u^ε₁ ≥ u^ε₂.

Moreover, the value function of the game also satisfies the following Lewy–Stampacchia type inequalities.

Lemma 5.4. We have

0 ≤ u^ε(x) − ½( sup_{Bε(x)} u^ε + inf_{Bε(x)} u^ε ) ≤ [ Ψ(x) − ½( sup_{Bε(x)} Ψ + inf_{Bε(x)} Ψ ) ]₊ .

Here we used the notation [A(x)]₊ = max{A(x), 0}.

Finally, if the obstacle Ψ is Lipschitz, then the value function is Lipschitz with respect to the discrete distance dε(x, y) = ε⌈|x − y|/ε⌉.

Lemma 5.5. If the obstacle Ψ is Lipschitz, then there exists a constant C, independent of ε, such that the value function u^ε satisfies

|u^ε(x) − u^ε(y)| ≤ C dε(x, y).

Concerning the limit as ε → 0 of these value functions, we have the following result.

Theorem 5.6. As ε → 0, u^ε → u uniformly in Ω. The limit u is the unique viscosity solution to the obstacle problem for the ∞-Laplacian; that is, it is the unique super-infinity-harmonic function (i.e., a function that satisfies −Δ^H_∞ u ≥ 0 in the viscosity sense) that is above the obstacle Ψ in Ω, takes the boundary value g on ∂Ω, and is ∞-harmonic where it lies strictly above the obstacle.

5.2.1 Description of the game

The game that we describe below is called a leavable game. Some leavable games are described in [80], Chapter 7. In this leavable Tug-of-War game Player I can decide to end the game before the boundary is reached, i.e., her strategy includes a stopping rule. Now, let us describe the game more precisely. Let Ω ⊂ ℝN be a bounded smooth domain. Let g : ℝN \ Ω → ℝ be a Lipschitz continuous function (the final payoff). In addition, we have a function Ψ : ℝN → ℝ (the obstacle) such that

Ψ ≤ g    in ℝN \ Ω.

The rules of the game are as follows. At an initial time a token is placed at a point x₀ ∈ Ω and we fix ε > 0. Then, a (fair) coin is tossed and the players play Tug-of-War. The winner of the toss is allowed to move the game position to any x₁ ∈ Bε(x₀). As before, once the token has reached some x_τ outside Ω, the game ends and Player I earns g(x_τ) (while Player II earns −g(x_τ)). In addition, at every position x_n, Player I is allowed to choose to end the game, earning Ψ(x_n) (while Player II earns −Ψ(x_n)).

A strategy S_I for Player I is a pair S_I = (τ̃, {S_I^k}_{k=0}^∞), consisting of a stopping time τ̃ (see Appendix B) and a collection of measurable mappings S_I^k. If τ̃(x₀, x₁, …, x_k) = k, the game ends immediately; else the coin is tossed and, if Player I gets to move the token, the next game position is S_I^k(x₀, x₁, …, x_k) = x_{k+1} ∈ Bε(x_k). Similarly, Player II plays according to a strategy S_II = {S_II^k}_{k=0}^∞. Note that there is no stopping time choice for Player II.

A starting point x₀ and the strategies S_I and S_II define, by Kolmogorov's extension theorem, a unique probability measure ℙ^{x₀}_{S_I,S_II}. We denote by 𝔼^{x₀}_{S_I,S_II} the corresponding expectation. Observe that the final payoff is given by

g̃(x) = { g(x)    x ∈ ℝN \ Ω,
        { Ψ(x)    x ∈ Ω


and that the time when the game ends is given by

τ = min{ τ̃, inf{k : x_k ∉ Ω, k = 0, 1, …} },

which is a stopping time. Then, if S_I and S_II denote the strategies adopted by Players I and II, respectively, we define the expected payoff for Player I as

V^{I,x₀}_{S_I,S_II} = { 𝔼^{x₀}_{S_I,S_II}[g̃(x_τ)]   if the game terminates a.s.,
                      { −∞                            otherwise.

Analogously, we define the expected payoff for Player II as

V^{II,x₀}_{S_I,S_II} = { 𝔼^{x₀}_{S_I,S_II}[g̃(x_τ)]   if the game terminates a.s.,
                       { +∞                            otherwise.

SII

I

II

while the value of the game for Player II is defined as II,x

uεII (x0 ) = inf sup VS ,S0 . SII

SI

I

II

Note that, as in the previous chapters, we penalize severely the games that never end. 5.2.2 Dynamic Programming Principle Here we prove the existence of a value function for the game, Theorem 5.3, and the validity of a comparison principle for values of the game. To prove these results we need some previous lemmas. We will show that uεI is the smallest supersolution that satisfies our conditions and uεII is, in some sense, the largest subsolution. Remark 5.7. Note that uεI and uεII are at least as large as the corresponding ordinary Tug-of-War game with “boundary” Y = (ℝN \ Ω) ∪ Au , where Au is the corresponding contact set, i. e., Au = AuI or Au = AuII . More precisely, let Au be the contact set of u and let Y = (ℝN \ Ω) ∪ Au . Let F̂ : Y → ℝ be the Lipschitz function ̂ g(x) ={

g(x) Ψ(x)

x ∈ ℝN \ Ω, x ∈ Au .

Note that ĝ is well defined because if Γ ∩ Au is nonempty, then g = Ψ there. Then we have the inequality wε ≤ u, where wε is the value function for the ordinary Tug-of-War game in this setting.

We have wε ≤ u because Player I could always play as if she were in this ordinary Tug-of-War situation, so she can do at least as well in the obstacle game.

Lemma 5.8. Let v be a supersolution to the DPP, that is, a function that satisfies

    v(x) ≥ g(x)                                            in ℝᴺ \ Ω,
    v(x) ≥ Ψ(x)                                            in Ω,
    v(x) ≥ (1/2) (sup_{Bε(x)} v(y) + inf_{Bε(x)} v(y))     in Ω.

Then we have u_I^ε(x) ≤ v(x).

Proof. If x0 ∈ A^{u_I^ε}, then u_I^ε(x0) = Ψ(x0) ≤ v(x0). So we assume x0 ∉ A^{u_I^ε}. In Ω \ A^{u_I^ε} we have

    u_I^ε(x) = (1/2) sup_{Bε(x)} u_I^ε(y) + (1/2) inf_{Bε(x)} u_I^ε(y) > Ψ(x).

Let wε be the value of the Tug-of-War game, without obstacle, described in Remark 5.7 (with A^u = A^{u_I^ε}). Thus, since u_I^ε is discrete ε-∞-harmonic in Ω \ A^{u_I^ε} and wε is the unique such function by [97], we have u_I^ε = wε ≤ v.

Lemma 5.9. Let v be a subsolution away from the obstacle which also lies above the obstacle, that is, a function that satisfies

    v(x) ≤ g(x)                                            in ℝᴺ \ Ω,
    v(x) ≥ Ψ(x)                                            in Ω,
    v(x) ≤ (1/2) (sup_{Bε(x)} v(y) + inf_{Bε(x)} v(y))     in Ω \ A^v.

Then we have v(x) ≤ u_II^ε(x).

Proof. For x0 ∈ A^v we have v(x0) = Ψ(x0) ≤ u_II^ε(x0). Assume x0 ∈ Ω \ A^v. Let wε be the value of the discrete ε Tug-of-War (without obstacle) on Ω as described in Remark 5.7, with A = A^v and ĝ a Lipschitz extension of g to A^v such that v ≤ ĝ ≤ u_II^ε. Since v ≤ ĝ and v is a subsolution in Ω \ A^v, by [97] we have v(x0) ≤ wε(x0). We have wε ≤ u_II^ε in the whole of Ω by the remark. Therefore v ≤ u_II^ε on Ω \ A^v as well.

Now we are ready to prove the existence of a unique value of the game.

Proof of Theorem 5.3. We always have u_I^ε ≤ u_II^ε. For x0 ∈ A^{u_II^ε} we have u_II^ε(x0) = Ψ(x0) ≤ u_I^ε(x0). Assume x0 ∈ Ω \ A^{u_II^ε}. Let wε be as in the remark with A = A^{u_II^ε}; u_II^ε is game ε-harmonic on Ω \ A^{u_II^ε} and therefore, by [97], u_II^ε = wε. By the remark, we have wε(x0) ≤ u_I^ε(x0).

From now on we drop the subscripts and let uε := u_I^ε = u_II^ε.

We now prove a result that will be needed for the proof of Lemma 5.4, the Lewy–Stampacchia lemma.

Lemma 5.10. The function Ψ is a supersolution to the DPP of the Tug-of-War game on the coincidence set, i.e., we have

    Ψ(x) − (1/2) sup_{Bε(x)} Ψ(y) − (1/2) inf_{Bε(x)} Ψ(y) ≥ 0,    x ∈ A^{uε}.

Proof. In the set A^{uε} we have

    Ψ(x) = uε(x) = max{ Ψ(x), (1/2) sup_{Bε(x)} uε(y) + (1/2) inf_{Bε(x)} uε(y) }
                 ≥ (1/2) sup_{Bε(x)} uε(y) + (1/2) inf_{Bε(x)} uε(y)
                 ≥ (1/2) sup_{Bε(x)} Ψ(y) + (1/2) inf_{Bε(x)} Ψ(y).

The last inequality holds because uε ≥ Ψ on Ω.

Proof of Lemma 5.4. The first inequality is immediate from the DPP. If x ∉ A^{uε}, then the second inequality is clear since uε is discrete ε-∞-harmonic there. If x ∈ A^{uε}, then, from the fact that uε ≥ Ψ on Ω and uε(x) = Ψ(x) for x ∈ A^{uε}, we get the last inequality.

5.2.3 Game value convergence

In this section we prove the main result of this chapter, Theorem 5.6, regarding the limit of the game value functions. Recall that the discrete distance is given by dε(x, y) = ε ⌈|x − y|/ε⌉. The proof of Lemma 5.5, that the game value function is Lipschitz with respect to the dε metric, is now immediate from [97] and our results in the previous chapters. The proof of the uniform Lipschitz lemma, Lemma 3.5 from [97], shows that the Lipschitz constant depends only on the Lipschitz constants of g and Ψ.
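The DPP characterizing the value, uε(x) = max{Ψ(x), (1/2)(sup_{Bε(x)} uε + inf_{Bε(x)} uε)} in Ω with uε = g outside, also lends itself to a direct fixed-point computation. Here is a one-dimensional sketch of ours (a hypothetical simplification: the ball Bε(x) is reduced to the two neighboring grid points, and the data g and Ψ are our own choices):

```python
import numpy as np

def obstacle_dpp(g_left, g_right, psi, n=101, tol=1e-10, iters=100_000):
    """Fixed-point iteration for the obstacle DPP
        u(x) = max( psi(x), (sup_B u + inf_B u) / 2 )
    on a uniform grid of [0, 1], where the ball B_eps(x) is reduced to the
    two neighboring grid points (a 1-D caricature of the game)."""
    x = np.linspace(0.0, 1.0, n)
    u = np.maximum(psi(x), min(g_left, g_right))   # start from a subsolution
    u[0], u[-1] = g_left, g_right
    for _ in range(iters):
        # (max + min)/2 over the two neighbors of each interior point
        mid = 0.5 * (np.maximum(u[:-2], u[2:]) + np.minimum(u[:-2], u[2:]))
        new = u.copy()
        new[1:-1] = np.maximum(psi(x[1:-1]), mid)
        if np.max(np.abs(new - u)) < tol:
            u = new
            break
        u = new
    return x, u

# tent obstacle peaking at 1 in the middle of the interval
x, u = obstacle_dpp(0.0, 0.0, lambda s: 1.0 - 3.0 * np.abs(s - 0.5))
```

For this obstacle the computed value is the tent u(x) = 2x on [0, 1/2] (symmetrically on [1/2, 1]), touching Ψ only at x = 1/2, which also anticipates the contact-set discussion of Section 5.2.4.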

We are now in a position to apply one more time the variant of the Arzelà–Ascoli lemma to obtain uniform convergence.

Theorem 5.11. If g and Ψ are Lipschitz continuous functions, then there exists a subsequence of the values of the game u^{εj} that converges uniformly to a continuous function u in Ω,

    lim_{εj→0} u^{εj} = u.

Proof. The Arzelà–Ascoli type lemma can be applied since the uniform bound holds with C = max{‖g‖∞, ‖Ψ‖∞}, and for the asymptotic equicontinuity we can take, for instance, ε0 = δ and r0 = η/L, where L is the Lipschitz constant of u^δ with respect to d_δ, which does not depend on δ.

Remark 5.12. If we assume that Ψ is C², the Lewy–Stampacchia estimate gives that there exists K > 0 such that

    0 ≤ uε(x) − (1/2) (sup_{Bε(x)} uε(y) + inf_{Bε(x)} uε(y)) ≤ Kε².

Next we prove that this uniform limit of the values of the game is the viscosity solution of the obstacle problem for the ∞-Laplacian. We are now ready to prove our main result, Theorem 5.6. We prove this theorem by comparing the u^{εj} to appropriately defined discrete harmonic functions v^{εj} with fixed boundary conditions, which we know converge to an ∞-harmonic function by the results presented in the previous chapters. We obtain uniqueness of the limit by proving that our limit is the least ∞-superharmonic function that lies above the obstacle.

Proof of Theorem 5.6. We first prove that our limit is ∞-harmonic where it lies strictly above the obstacle. Passing to a subsequence if necessary, we let u = lim u^{εj}. Fix x0 ∈ Ω \ A^u, and choose r such that Br(x0) is included in the set Ω \ A^u (which is open). Given δ, for εj small enough, we have |u − u^{εj}| < δ on Br(x0). Define v^{εj} to be the discrete εj-harmonic function that agrees with u on 𝜕Br(x0). Then we have v^{εj} − δ < u^{εj} < v^{εj} + δ on 𝜕Br(x0). Since v^{εj} − δ and v^{εj} + δ are also discrete harmonic on Br(x0), with lower and higher boundary values, respectively, than u^{εj}, with the help of the comparison principle for discrete harmonic functions we get the inequalities on all of Br(x0). We know that v^{εj} converges uniformly to an ∞-harmonic function v on Br(x0). Therefore, by the sandwich lemma and sending δ → 0, we have u = v; thus u is ∞-harmonic on Br(x0) and therefore u is ∞-harmonic on Ω \ A^u.

Now let x0 ∈ A^u. When Ψ is C² and ε is small enough, Ψ satisfies

    Ψ(x0) ≥ (1/2) sup_{Bε(x0)} Ψ + (1/2) inf_{Bε(x0)} Ψ;

then in A^u we have

    −Δ^H_∞ Ψ = −⟨D²Ψ (DΨ/|DΨ|), DΨ/|DΨ|⟩ ≥ 0.

Note that this inequality holds in the sense of the semicontinuous envelope, involving λmax(D²Ψ) when ∇Ψ = 0 (see Chapters 3 and 4 and Appendix A). From this and the proof of Lemma 5.4 we find that u also satisfies this inequality on A^u. Therefore the uniform limit of a subsequence of the values of the game, u, satisfies

    −Δ^H_∞ u = 0  in Ω \ A^u,   and   −Δ^H_∞ u ≥ 0  in Ω,

in the viscosity sense.

We now prove uniqueness. We define u∞ to be the least ∞-superharmonic function that is above the obstacle and the boundary function. Since u is ∞-superharmonic and above the obstacle and the boundary function, we get the inequalities u ≥ u∞ ≥ Ψ, from which we have A^u ⊂ A^{u∞}, so on A^u we have u = u∞. Now, in the set Ω \ A^u, u is a solution to −Δ^H_∞ u = 0 and u∞ is a supersolution with the same boundary values (u∞ = u = g on 𝜕Ω and u∞ = u = Ψ on A^u). Therefore, the comparison principle for Δ∞ implies that u∞ ≥ u in Ω \ A^u. Then we conclude that u∞ = u. Since we have uniqueness of the limit, the whole sequence uε converges uniformly.

5.2.4 Convergence of the contact sets

We now simplify notation slightly and let A_{εj} := A^{u^{εj}}, and we discuss the convergence of the contact sets of the u^{εj} to the contact set of the limit function u. We define

    lim sup_{j→∞} A_{εj} = ⋂_{p=1}^∞ ⋃_{j=p}^∞ A_{εj}

and

    lim inf_{j→∞} A_{εj} = ⋃_{p=1}^∞ ⋂_{j=p}^∞ A_{εj}.

Now, let us define lim sup_{ε→0} A_ε and lim inf_{ε→0} A_ε as

    lim sup_{ε→0} A_ε = ⋃_{εj→0} lim sup_{j→∞} A_{εj},

that is, the smallest set that contains all possible limits along subsequences, and

    lim inf_{ε→0} A_ε = ⋂_{εj→0} lim inf_{j→∞} A_{εj},

that is, the largest set that is included in every possible sequential limit.

First, we show an upper bound for lim sup_{ε→0} A_ε.

Lemma 5.13. We have

    lim sup_{ε→0} A_ε ⊂ A^u.

Proof. Let K ⊂⊂ Ω \ A^u, so that V = Ω \ K is a neighborhood of A^u. There exists an η such that u − Ψ > η in K. By the uniform convergence there exists an ε0, depending on K, such that uε − Ψ > η/2 for ε < ε0. Then we have A_ε ⊂ V for every ε < ε0. Thus

    lim sup_{j→∞} A_{εj} ⊂ V

for any sequence εj → 0 and for any neighborhood V of A^u. Therefore lim sup_{ε→0} A_ε ⊂ A^u.

To obtain a lower bound for lim inf_{ε→0} A_ε we need to assume an extra condition on the obstacle.

Lemma 5.14. Assume that Ψ satisfies −Δ^H_∞ Ψ(x0) > 0 in the viscosity sense in (A^u)°. Then we have

    (A^u)° ⊂ lim inf_{ε→0} A_ε.

Proof. Fix x0 ∈ (A^u)° and choose any δ such that B_δ(x0) ⊂ (A^u)°. If u^{εj}(x) > Ψ(x) for all x ∈ B_δ(x0) and some sequence εj → 0, then

    −Δ^{εj}_∞ u^{εj}(x) = 0    for all x ∈ B_δ(x0) and every εj.

Therefore, by the argument in the proof of Theorem 5.6,

    −Δ∞ u(x) = 0    for x ∈ B_{δ/2}(x0)

in the viscosity sense. As B_{δ/2}(x0) ⊂ (A^u)° we have u = Ψ there, and hence −Δ∞ Ψ(x0) = 0, a contradiction with our hypothesis. Therefore, for any sequence εj → 0 there exists xj ∈ A_{εj} such that xj → x0. Hence (A^u)° ⊂ lim inf_{j→∞} A_{εj}, and since lim inf_{j→∞} A_{εj} is a closed set we get that the closure of (A^u)° is contained in lim inf_{j→∞} A_{εj} for every sequence εj → 0. Therefore (A^u)° ⊂ lim inf_{ε→0} A_ε.

An immediate consequence of the previous two lemmas is the following result.

Theorem 5.15. Assume that Ψ satisfies −Δ^H_∞ Ψ(x0) > 0 in the viscosity sense in (A^u)° and also assume that the contact set satisfies that the closure of (A^u)° is the whole of A^u. Then we have

    lim_{ε→0} A_ε = lim inf_{ε→0} A_ε = lim sup_{ε→0} A_ε = A^u.

Remark 5.16. For a Lipschitz obstacle it may happen that (A^u)° = ∅. In fact, take Ω = B1(0), boundary function g(x) = 0, and obstacle Ψ(x) = −3|x| + 1. The solution to the obstacle problem for the ∞-Laplacian is given by the cone u(x) = −|x| + 1, and the contact set A^u is just a single point, A^u = {0}.
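The example in Remark 5.16 can be checked along a ray, using that both functions are radial (the grid of radii below is a hypothetical discretization of ours):

```python
import numpy as np

r = np.linspace(0.0, 1.0, 1001)    # radii |x| in the unit ball
u = -r + 1.0                       # the cone solving the obstacle problem
psi = -3.0 * r + 1.0               # the Lipschitz obstacle
assert np.all(u >= psi)            # the solution lies above the obstacle
contact = r[np.isclose(u, psi)]    # radii where the two functions touch
```

Here `contact` contains only the origin, matching A^u = {0}.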

5.3 Comments

Concerning mixed boundary conditions, we follow [97] and [32], but we restrict ourselves to the case of a game in a bounded smooth domain Ω ⊂ ℝᴺ (the results presented in [97] are valid in general length spaces). The results of this chapter concerning the obstacle problem are taken from [87]. These results can be extended to the p-Laplacian case by adding noise when playing the Tug-of-War game while keeping the possibility for Player I to stop the game and obtain the amount given by Ψ. This extension can be found in [71]. In [13] a numerical method was developed using these ideas. For a double obstacle problem, see [35]. The fact that the game presented here has a value can also be proved as in [73]. A fractional version of the PDE obstacle problem can be found in [89].

6 Maximal operators

Our main goal in this chapter is to extend the game-theoretical approach to cover the case in which one of the players is allowed to choose the parameters that regulate the probability of moving at random or the probability that a round of Tug-of-War occurs. In this way one finds as a limit equation the maximum or the minimum, according to which player makes the choice, of two different elliptic operators. As the limit of the game values we will obtain a solution to the Dirichlet problem for the PDE

    max_{p1 ≤ p ≤ p2} {−Δ^H_p u(x)} = f(x)                                    (6.1)

in a bounded smooth domain Ω ⊂ ℝᴺ for 2 ≤ p1, p2 ≤ ∞. Here we have normalized the p-Laplacian and considered the operator

    Δ^H_p u = div(|∇u|^{p−2} ∇u) / ((N + p) |∇u|^{p−2}),

which is called the 1-homogeneous p-Laplacian. We will assume that f ≡ 0 or that f is strictly positive or negative in Ω.

Note that the 1-homogeneous p-Laplacian already appeared in Chapter 3. As pointed out in that chapter, formally, the 1-homogeneous p-Laplacian can be written as

    Δ^H_p u = (1/(N + p)) Δu + ((p − 2)/(N + p)) Δ^H_∞ u,

where Δu is the usual Laplacian and Δ^H_∞ u is the normalized ∞-Laplacian, that is,

    Δu = Σ_{i=1}^N u_{x_i x_i}   and   Δ^H_∞ u = (1/|∇u|²) Σ_{i,j=1}^N u_{x_i} u_{x_i x_j} u_{x_j}.

Therefore, we can think of the 1-homogeneous p-Laplacian as a convex combination of the Laplacian divided by N + 2 and the ∞-Laplacian, i.e.,

    Δ^H_p u = ((p − 2)/(N + p)) Δ^H_∞ u + ((N + 2)/(N + p)) (Δu/(N + 2)) = α Δ^H_∞ u + θ Δu,

with α = (p − 2)/(N + p) and θ = 1/(N + p) (in this chapter we reserve β for a different constant) for 2 ≤ p < ∞, and α = 1 and θ = 0 for p = ∞. Since we are dealing with convex combinations, equation (6.1) becomes

    max_{p1 ≤ p ≤ p2} {−Δ^H_p u(x)} = max{−Δ^H_{p1} u(x), −Δ^H_{p2} u(x)} = f(x)   (6.2)

with 2 ≤ p1, p2 ≤ ∞. In [67] and [28] the following result is proved.
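The correspondence between p and the pair (α, θ) is elementary but used repeatedly in what follows; here is a minimal sketch (the function names are ours):

```python
def coefficients(p, N):
    """Coefficients in Delta_p^H u = alpha * Delta_inf^H u + theta * Delta u
    for 2 <= p < infinity (p = infinity corresponds to alpha = 1, theta = 0)."""
    alpha = (p - 2) / (N + p)
    theta = 1.0 / (N + p)
    return alpha, theta

def p_from_alpha(alpha, N):
    """Invert alpha = (p - 2) / (N + p); valid for 0 <= alpha < 1."""
    return (2 + N * alpha) / (1 - alpha)

alpha, theta = coefficients(4.0, 2)    # p = 4 in dimension N = 2
```

Note that α + (N + 2)θ = 1, which is exactly the convex-combination structure above.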

Theorem 6.1. Assume that inf_Ω f > 0, sup_Ω f < 0, or f ≡ 0. Then, given a continuous function g defined on 𝜕Ω, there exists a unique viscosity solution u ∈ C(Ω̄) of (6.2) with u = g on 𝜕Ω. Moreover, a comparison principle holds: if u, v ∈ C(Ω̄) are such that

    max{−Δ^H_{p1} u, −Δ^H_{p2} u} ≤ f   and   max{−Δ^H_{p1} v, −Δ^H_{p2} v} ≥ f

in Ω and v ≥ u on 𝜕Ω, then v ≥ u in Ω.

Remark 6.2. An analogous result holds for the equation

    min_{p1 ≤ p ≤ p2} {−Δ^H_p u} = f.

Remark 6.3. For the homogeneous case, f ≡ 0, viscosity sub- and supersolutions to the 1-homogeneous p-Laplacian, −((p − 2)/(N + p)) Δ^H_∞ u − (1/(N + p)) Δu = 0, coincide with viscosity sub- and supersolutions to the usual ((p − 1)-homogeneous) p-Laplacian, −div(|∇u|^{p−2} ∇u) = 0; see [86]. Therefore, for f ≡ 0 we are providing existence and uniqueness of viscosity solutions to max_{p1 ≤ p ≤ p2} {−Δ_p u(x)} = 0, with Δ_p u = div(|∇u|^{p−2} ∇u) being the usual p-Laplacian that comes from the calculus of variations.

Remark 6.4. When one considers the family of uniformly elliptic second-order operators of the form −tr(A D²u) and looks for maximal operators, one finds the so-called Pucci maximal operator, P⁺_{λ,Λ}(D²u) = max_{A∈𝒜} −tr(A D²u), where 𝒜 is the set of uniformly elliptic matrices with ellipticity constants between λ and Λ. When we try to produce maximal operators for the family of p-Laplacians we are led to consider the operator max_{p1 ≤ p ≤ p2} −Δ^H_p u(x). This operator has properties similar to those of Pucci's maximal operator (a game for that operator will be presented in the next chapter), but with respect to the p-Laplacian family.

The game that we treat in this chapter is called unbalanced Tug-of-War with noise. The setting is the same as in the Tug-of-War with noise treated in Chapter 3. At every round Player I chooses a coin between two possible ones. As in the Tug-of-War with noise, they toss the chosen coin, which is biased with probabilities αi and βi, where αi + βi = 1 and 1 ≥ αi, βi ≥ 0, i = 1, 2. If they get heads (probability αi), they play a round of Tug-of-War, and if they get tails (probability βi), the game state moves according to the uniform probability density to a random point x1 ∈ Bε(x0). Then, at every round Player I is allowed to choose the coin again. Once the game position leaves Ω, let us say at the τ-th step, the game ends. The payoff is given by a running payoff function f : Ω → ℝ and a final payoff function g : ℝᴺ \ Ω → ℝ.
At the end Player II pays to Player I the amount given by the formula g(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n). The game value is defined as usual.


Theorem 6.5. Assume that f is a Lipschitz function with sup_Ω f < 0, inf_Ω f > 0, or f ≡ 0. The unbalanced Tug-of-War game with noise (with {α1, α2} ≠ {0, 1} when f ≡ 0) has a value that satisfies the DPP, given by

    uε(x) = ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} uε + inf_{Bε(x)} uε} + βi ⨍_{Bε(x)} uε(y) dy )

for x ∈ Ω, with uε(x) = g(x) for x ∉ Ω. Moreover, if g is Lipschitz, then there exists a uniformly continuous function u such that

    uε → u   uniformly in Ω.

This limit u is a viscosity solution to

    { max{−Δ^H_{p1} u, −Δ^H_{p2} u} = f̄   in Ω,
    { u = g                                on 𝜕Ω,

where f̄ = 2f and p1, p2 are given by

    αi = (pi − 2)/(pi + N),   βi = (N + 2)/(pi + N),   i = 1, 2.
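The DPP in Theorem 6.5 can be approximated by value iteration. The following one-dimensional sketch is our own simplification, not the method of the text: the grid, the window-shaped "ball", and the choice of data are all hypothetical.

```python
import numpy as np

def unbalanced_dpp(f, g, alphas, n=121, m=6, iters=50_000, tol=1e-9):
    """Value iteration (a 1-D sketch) for the unbalanced Tug-of-War DPP
        u(x) = eps^2 f(x) + max_i( a_i/2 (sup_B u + inf_B u) + (1 - a_i) avg_B u )
    on a grid of [0, 1]; B_eps(x) is the window of m grid steps on each side,
    and the outer m points on each end play the role of the boundary strip."""
    h = 1.0 / (n - 1)
    eps = m * h
    x = np.linspace(0.0, 1.0, n)
    u = np.asarray(g(x), dtype=float).copy()
    for _ in range(iters):
        new = u.copy()
        for j in range(m, n - m):
            w = u[j - m : j + m + 1]          # values on the ball B_eps(x_j)
            new[j] = eps ** 2 * f(x[j]) + max(
                a / 2 * (w.max() + w.min()) + (1 - a) * w.mean() for a in alphas)
        if np.max(np.abs(new - u)) < tol:
            u = new
            break
        u = new
    return x, u

# with f = 0, a linear datum is already a fixed point for any coin choices
x, u = unbalanced_dpp(lambda s: 0.0, lambda s: s, alphas=(0.2, 0.7))
```

Linear functions are fixed points here because both (sup + inf)/2 and the average over a symmetric window reproduce the center value, for every choice of the coin.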

The proof of Theorem 6.5 follows from the results in the following sections. In Section 6.2 we establish that the game has a value and that the value is the unique function that satisfies the DPP. In Section 6.3 we prove the convergence part of the theorem. In Lemma 6.12 we establish the existence of a function satisfying the DPP. In Lemma 6.13 we prove that the function satisfying the DPP is unique and coincides with the game value, in the case β1, β2 > 0, sup f < 0, or inf f > 0. The same result is obtained in the remaining cases in Theorems 6.16 and 6.17; here is where we had to assume that {α1, α2} ≠ {0, 1}. Finally, the convergence is established in Theorem 6.25. Now, we include a comment on the main technical novelties contained in this chapter. When the game is played with some noise at every turn, that is, when the two βi are positive, the game ends almost surely (regardless of the strategies adopted by the players). When f is strictly positive or negative, one of the players is strongly motivated to end the game quickly. This fact simplifies the arguments used in the proofs. When f ≡ 0 and one of the αi is one (and therefore the corresponding βi is zero), we do not know a priori that the game terminates almost surely, and this fact introduces some extra difficulties. The argument that shows that there is a unique solution to the DPP in this case is delicate; see Theorem 6.16. The proof of convergence of the values of the game as the size of the steps goes to zero is also different from previous results in this book, since here one has to take care of the strategy of the player who chooses the parameters of the game. In particular, the proof of the fact that when any of the two players pulls in a fixed direction the expectation of the exit time is bounded above by C/ε² is more involved; see Lemma 6.19.

Remark 6.6. We also prove uniqueness of solutions to the DPP. That is, there exists a unique function v verifying

    v(x) = ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} v + inf_{Bε(x)} v} + βi ⨍_{Bε(x)} v(y) dy )

for x ∈ Ω, with v(x) = g(x) for x ∉ Ω.

Remark 6.7. When Player II (recall that this player wants to minimize the expected outcome) has the choice of the probabilities α and β, we end up with a solution to

    { min{−Δ^H_{p1} u, −Δ^H_{p2} u} = f   in Ω,
    { u = g                               on 𝜕Ω.

Remark 6.8. It seems natural to consider a more general protocol to determine α in a prescribed closed set. It is clear that there are only two possible scenarios: At each turn Player I wants to maximize the value of α and Player II wants to minimize it, or the converse. An expected value for α is obtained in each case assuming each player plays optimally. Depending on the value of α in each case, we are considering a game equivalent to the one that we described previously or another one where Player II gets the choice of the first coin, for certain values of αi .

6.1 Unbalanced Tug-of-War games with noise

In this section we introduce the game that we call unbalanced Tug-of-War with noise. It is a two-player zero-sum stochastic game. The game is played over a bounded open set Ω ⊂ ℝᴺ. An ε > 0 is given. Players I and II play as follows. At an initial time, they place a token at a point x0 ∈ Ω and Player I chooses a coin between two possible ones (with different probabilities of getting heads for each coin); we think she chooses i ∈ {1, 2}. Now they play the Tug-of-War with noise described in Chapter 3 with the chosen coin. They toss the chosen coin, which is biased with probabilities αi and βi, αi + βi = 1 and 1 ≥ αi, βi ≥ 0. If they get heads (probability αi), they toss a fair coin (with the same probability for heads and tails) and the winner of the toss moves the game position to any x1 ∈ Bε(x0) of his choice. On the other hand, if they get tails (probability βi), the game state moves according to the uniform probability density to a random point x1 ∈ Bε(x0). Then they continue playing from x1. At each turn Player I may change the choice of the coin. This procedure yields a sequence of game states x0, x1, .... Once the game position leaves Ω, let us say at the τ-th step, the game ends. At that time the token will be in the compact boundary strip around Ω of width ε that we denote

    Γε = {x ∈ ℝᴺ \ Ω : dist(x, 𝜕Ω) ≤ ε}.


The payoff is given by a running payoff function f : Ω → ℝ and a final payoff function g : ℝᴺ \ Ω → ℝ. At the end Player II pays Player I the amount given by g(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n); that is, Player I has earned g(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n) while Player II has earned −g(x_τ) − ε² Σ_{n=0}^{τ−1} f(x_n). We can think that when the token leaves x_i Player II pays Player I ε² f(x_i), and g(x_τ) when the game ends.

A strategy S_I for Player I is a pair of collections of measurable mappings S_I = ({γ^k}_{k=0}^∞, {S_I^k}_{k=0}^∞) such that, given a partial history (x0, x1, ..., xk), Player I chooses coin 1 with probability γ^k(x0, x1, ..., xk) = γ ∈ [0, 1], and the next game position is S_I^k(x0, x1, ..., xk) = x_{k+1} ∈ Bε(xk) if Player I wins the toss. Similarly, Player II plays according to a strategy S_II = {S_II^k}_{k=0}^∞.

A starting point x0 and the strategies S_I and S_II define, by Kolmogorov's extension theorem, a unique probability measure ℙ^{x0}_{S_I,S_II}. We denote by 𝔼^{x0}_{S_I,S_II} the corresponding expectation. Then, if S_I and S_II denote the strategies adopted by Players I and II, respectively, we define the expected payoff for Player I as

    V^{I,x0}_{S_I,S_II} = { 𝔼^{x0}_{S_I,S_II}[g(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n)],  if the game ends a.s.,
                          { −∞,                                                 otherwise,

and the expected payoff for Player II as

    V^{II,x0}_{S_I,S_II} = { 𝔼^{x0}_{S_I,S_II}[g(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n)],  if the game ends a.s.,
                           { +∞,                                                 otherwise.

Note that we penalize both players when the game does not end almost surely. The value of the game for Player I is given by

    u_I^ε(x0) = sup_{S_I} inf_{S_II} V^{I,x0}_{S_I,S_II},

while the value of the game for Player II is given by

    u_II^ε(x0) = inf_{S_II} sup_{S_I} V^{II,x0}_{S_I,S_II}.

As usual, when u_I^ε = u_II^ε we say the game has a value uε := u_I^ε = u_II^ε.
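A single run of this game can be sketched as follows (our own one-dimensional simplification: the fixed "always pick the larger α" rule is a heuristic stand-in for the optimal coin choice, and the pulling strategies, interval, and data are hypothetical):

```python
import random

def play_unbalanced(x0, eps, alphas, f, g, max_steps=100_000):
    """One run of a simplified unbalanced Tug-of-War with noise on
    Omega = (0, 1): Player I always picks the largest alpha, Player I pulls
    right, Player II pulls left, and the running payoff eps^2 f accumulates."""
    x, total = x0, 0.0
    for _ in range(max_steps):
        if not (0.0 < x < 1.0):               # the token reached Gamma_eps
            return g(x) + total
        total += eps ** 2 * f(x)
        alpha = max(alphas)                   # Player I's coin choice
        r = random.random()
        if r < alpha / 2:                     # Tug-of-War round, Player I wins
            x += eps
        elif r < alpha:                       # Tug-of-War round, Player II wins
            x -= eps
        else:                                 # noise: uniform move in B_eps(x)
            x += random.uniform(-eps, eps)
    return g(x) + total                       # safeguard; rarely reached

random.seed(1)
g = lambda s: 1.0 if s >= 1.0 else 0.0
runs = [play_unbalanced(0.5, 0.05, (0.2, 0.6), lambda s: 0.0, g)
        for _ in range(300)]
```

With f ≡ 0 and these symmetric strategies the empirical mean of `runs` is close to 1/2, the probability of exiting on the right.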


6.2 Dynamic Programming Principle

In this section we prove that the game has a value, that is, u_I^ε = u_II^ε, and that this value function satisfies the DPP given by

    uε(x) = ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} uε + inf_{Bε(x)} uε} + βi ⨍_{Bε(x)} uε(y) dy ),   x ∈ Ω,
    uε(x) = g(x),                                                                                           x ∈ ℝᴺ \ Ω.

Let us see intuitively why this holds. At each step Player I chooses i ∈ {1, 2} and then we have three possibilities:
– With probability αi/2, Player I moves the token; she will try to maximize the expected outcome.
– With probability αi/2, Player II moves the token; he will try to minimize the expected outcome.
– With probability βi, the token moves at random.

Since Player I chooses i trying to maximize the expected outcome, we obtain a max_{i∈{1,2}} in the DPP. Finally, the expected payoff at x is given by ε² f(x) plus the expected payoff for the rest of the game.

Here we use ideas from [2], [72], [78], [85], [97], and [102], where similar results are proved. Note that when α1 = α2 (and hence β1 = β2) Player I has no choice to make, and we recover known results for Tug-of-War games (with or without noise) that were treated in the previous chapters.

In the proof for our actual problem we will have to face two different cases: one where the noise ensures that the game ends almost surely independently of the strategies adopted by the players, or where the strict positivity (or negativity) of f helps us in this direction, and another one where we have to handle the problem of finding strategies for the players to play almost optimally while making sure that the game ends almost surely. In the first case we follow a plan similar to the one in the first proof of Theorem 3.2, although here the game has a running payoff. In the second case we follow the plan introduced in the second proof of Theorem 3.2.

As before, Ω ⊂ ℝᴺ is a bounded open set, ε > 0, and g : ℝᴺ \ Ω → ℝ and f : Ω → ℝ are bounded Borel functions. We assume that f is such that f ≡ 0, inf_Ω f > 0, or sup_Ω f < 0.

Definition 6.9. A function u is sub-p1-p2-harmonious if

    u(x) ≤ ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} u + inf_{Bε(x)} u} + βi ⨍_{Bε(x)} u(y) dy ),   x ∈ Ω,
    u(x) ≤ g(x),                                                                                       x ∉ Ω.

Analogously, a function u is super-p1-p2-harmonious if

    u(x) ≥ ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} u + inf_{Bε(x)} u} + βi ⨍_{Bε(x)} u(y) dy ),   x ∈ Ω,
    u(x) ≥ g(x),                                                                                       x ∉ Ω.

Finally, u is p1-p2-harmonious if it is both sub- and super-p1-p2-harmonious (i.e., the equality holds). Here αi and βi are given by

    αi = (pi − 2)/(pi + N)   and   βi = (N + 2)/(pi + N),   i = 1, 2.

Our next task is to prove uniform bounds for these functions.

Lemma 6.10. Sub-p1-p2-harmonious functions are uniformly bounded from above.

Proof. We will consider the space partitioned along the x_N axis in strips of width ε/2. To this end we define

    D = |{y ∈ Bε : y_N < −ε/2}| / |Bε| = |{y ∈ B1 : y_N < −1/2}| / |B1|   and   C = 1 − D.

The constant D gives the fraction of the ball Bε(x) covered by the shadowed section in Figure 6.1, {y ∈ Bε(x) : y_N < x_N − ε/2}, and C the fraction occupied by its complement.

Figure 6.1: The partition considered in the proof of Lemma 6.10.

Given x ∈ Ω, let us consider t ∈ ℝ such that x_N < t ε/2 + ε/2. We get

    {y ∈ Bε(x) : y_N < x_N − ε/2} ⊂ {z ∈ ℝᴺ : z_N < t ε/2}.

Now, given a sub-p1-p2-harmonious function u, we have

    u(x) ≤ ε² f(x) + max_{i∈{1,2}} ( (αi/2) {sup_{Bε(x)} u + inf_{Bε(x)} u} + βi ⨍_{Bε(x)} u(y) dy ).

Now we can bound the terms on the right-hand side considering the partition given above; see Figure 6.1. We have

    sup_{Bε(x)} u ≤ sup_{Ωε} u   and   inf_{Bε(x)} u ≤ sup_{{y ∈ Bε(x) : y_N < x_N − ε/2}} u.
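The constant D from the proof of Lemma 6.10 is explicit enough to approximate numerically; here is a minimal Monte Carlo sketch (our own illustration, with function names of our choosing):

```python
import random

def estimate_D(dim, samples=200_000, seed=0):
    """Monte Carlo estimate of the constant
        D = |{y in B_1 : y_N < -1/2}| / |B_1|
    from the proof of Lemma 6.10, via rejection sampling in the unit ball."""
    rng = random.Random(seed)
    inside = below = 0
    while inside < samples:
        y = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        if sum(c * c for c in y) <= 1.0:       # accepted: y lies in the ball
            inside += 1
            if y[-1] < -0.5:                   # y is in the shadowed section
                below += 1
    return below / inside

D2 = estimate_D(2)    # the exact value in dimension 2 is about 0.1955
```

In particular D < 1/2 in every dimension, so C = 1 − D > 1/2, which is what the strip argument exploits.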

Lemma 6.13. Assume that β1, β2 > 0, sup f < 0, or inf f > 0. Then, if v is a p1-p2-harmonious function for data g_v and f_v such that g_v ≤ g_{u_I^ε} and f_v ≤ f_{u_I^ε}, then v ≤ u_I^ε.

Proof. By choosing a strategy according to the points where the maximal values of v are attained, we show that Player I can guarantee that a certain process is a submartingale. The optional stopping theorem then implies that the expectation of the process under this strategy is bounded by v. Moreover, this process provides a lower bound for u_I^ε.

Player II follows any strategy and Player I follows a strategy S_I^0 such that at x_{k−1} ∈ Ω she chooses γ = 1 if

    (α1/2) {sup_{y∈Bε(x)} v(y) + inf_{y∈Bε(x)} v(y)} + β1 ⨍_{Bε(x)} v(y) dy
        > (α2/2) {sup_{y∈Bε(x)} v(y) + inf_{y∈Bε(x)} v(y)} + β2 ⨍_{Bε(x)} v(y) dy,

and γ = 0 otherwise, and she steps to a point that almost maximizes v, that is, to a point xk ∈ Bε(x_{k−1}) such that

    v(xk) ≥ sup_{Bε(x_{k−1})} v − η 2^{−k}

for some fixed η > 0. We start from the point x0. It follows that

    𝔼^{x0}_{S_I^0,S_II}[ v(xk) + ε² Σ_{n=0}^{k−1} f(x_n) − η 2^{−k} | x0, ..., x_{k−1} ]
        ≥ max_{i∈{1,2}} ( (αi/2) { inf_{Bε(x_{k−1})} v − η 2^{−k} + sup_{Bε(x_{k−1})} v } + βi ⨍_{Bε(x_{k−1})} v dy )
          + ε² Σ_{n=0}^{k−1} f(x_n) − η 2^{−k}
        ≥ v(x_{k−1}) − ε² f(x_{k−1}) − η 2^{−k} + ε² Σ_{n=0}^{k−1} f(x_n) − η 2^{−k}
        = v(x_{k−1}) + ε² Σ_{n=0}^{k−2} f(x_n) − η 2^{−k+1},

where we have estimated the strategy of Player II by the inf and used the fact that v is p1-p2-harmonious. Thus

    M_k = v(xk) + ε² Σ_{n=0}^{k−1} f(x_n) − η 2^{−k}

is a submartingale.

Now we observe the following: if β1, β2 > 0, then the game ends almost surely and we can continue (see below). If sup f < 0, the fact that M_k is a submartingale implies that the game ends in a finite number of moves (which can be estimated). In the case inf f > 0, if the game does not end in a finite number of moves, then we have to play until the accumulated payoff (recall that f gives the running payoff) is greater than v and then choose a strategy that ends the game almost surely (for example, pointing to some prescribed point x0 outside Ω).

Since g_v ≤ g_{u_I} and f_v ≤ f_{u_I}, we deduce

    u_I(x0) = sup_{S_I} inf_{S_II} 𝔼^{x0}_{S_I,S_II}[ g_{u_I^ε}(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n) ]
        ≥ inf_{S_II} 𝔼^{x0}_{S_I^0,S_II}[ g_v(x_τ) + ε² Σ_{n=0}^{τ−1} f(x_n) − η 2^{−τ} ]
        ≥ inf_{S_II} lim inf_{k→∞} 𝔼^{x0}_{S_I^0,S_II}[ v(x_{τ∧k}) + ε² Σ_{n=0}^{(τ−1)∧k} f(x_n) − η 2^{−(τ∧k)} ]
        ≥ inf_{S_II} 𝔼^{x0}_{S_I^0,S_II}[M0] = v(x0) − η,

where (τ − 1) ∧ k = min(τ − 1, k), and we used Fatou's lemma as well as the optional stopping theorem for M_k. Since η is arbitrary, this proves the claim.

A symmetric result can be proved for u_II^ε, hence we obtain the following result.

Theorem 6.14. Assume that β1, β2 > 0, sup f < 0, or inf f > 0. Then there exists a unique p1-p2-harmonious function. In addition, the game has a value, u_I^ε = u_II^ε, which coincides with the unique p1-p2-harmonious function.

Proof. Let u be a p1-p2-harmonious function, which we know exists by Lemma 6.12. From the definition of the game values we know that u_I^ε ≤ u_II^ε. Then by Lemma 6.13 we have u_I^ε ≤ u_II^ε ≤ u ≤ u_I^ε. That is, u_I^ε = u_II^ε = u. Since we can repeat the argument for any p1-p2-harmonious function, uniqueness follows.

Remark 6.15. Note that if we have a sub-p1-p2-harmonious function u, then v given by v = u − C in Ω and v = u in Γε is sub-p1-p2-harmonious for every constant C > 0. In this way we can obtain a sub-p1-p2-harmonious function smaller than any super-p1-p2-harmonious function, and then, if we start the above construction with this function, we get the smallest p1-p2-harmonious function. That is, there exists a minimal p1-p2-harmonious function. We can analogously construct the largest p1-p2-harmonious function (the maximal p1-p2-harmonious function).

We now tackle the remaining case in which f ≡ 0 and one of the βi is zero (which is the same as saying that one of the αi is equal to one).

Theorem 6.16. There exists a unique p1-p2-harmonious function when α1 = 1, α2 > 0, and f ≡ 0.

Proof. Suppose not; that is, we have u, v (here α = α2 and β = β2, since α1 = 1) such that

    v(x) = max{ (1/2) (sup_{Bε(x)} v + inf_{Bε(x)} v), (α/2) (sup_{Bε(x)} v + inf_{Bε(x)} v) + β ⨍_{Bε(x)} v },
    u(x) = max{ (1/2) (sup_{Bε(x)} u + inf_{Bε(x)} u), (α/2) (sup_{Bε(x)} u + inf_{Bε(x)} u) + β ⨍_{Bε(x)} u }

in Ω, and u = v = g in ℝᴺ \ Ω, with ‖u − v‖∞ = M > 0. As we observed in Remark 6.15, we can assume u ≥ v (just take v to be the minimal solution to the DPP). Now we want to build a point where the difference between u and v is

86 | 6 Maximal operators almost attained and v has a large variation in the ball of radius ε around this point (all this has to be carefully quantified). First, we apply a compactness argument. We know that Ω ε ⊂ ⋃ B ε (x). 4

2

x∈Ω

As Ω ε is compact there exists yi such that 4

k

Ω ε ⊂ ⋃ B ε (yi ). 4

2

i=1

Let A = {i ∈ {1, . . . , k} : u or v are not constant on B ε (yi )} and let λ > 0 such that for 2 every i ∈ A sup u − inf u > (4 +

Bε (yi )

Bε (yi )

4β )λ α

or

sup v − inf v > 2λ.

Bε (yi )

Bε (yi )

We fix this λ. Now, for every δ > 0 such that λ > δ and M > δ, let z ∈ Ω such that M − δ < u(z) − v(z). Let O = {x ∈ Ω : u(x) = u(z) and v(x) = v(z)} ⊂ Ω. Take z̄ ∈ 𝜕O ⊂ Ω. Let i0 be such that z̄ ∈ B ε (yi0 ), and we have 2

and B ε (yi0 ) ∩ Oc ≠ 0,

B ε (yi0 ) ∩ O ≠ 0 2

2

hence i0 ∈ A. Let x0 ∈ B ε (yi0 ) ∩ O. In this way we have obtained x0 such that u(x0 ) − 2 v(x0 ) > M − δ and one of the following holds: (1) sup u − inf u > (4 + Bε (x0 )

Bε (x0 )

4β )λ, α

(2) sup v − inf v > 2λ.

Bε (x0 )

Bε (x0 )

Let us show that in fact the second statement must hold. Suppose it does not. Then the first holds and we have

    sup_{Bε(x0)} v − inf_{Bε(x0)} v ≤ 2λ.

Given that

    v(x0) ≥ (1/2) ( sup_{Bε(x0)} v + inf_{Bε(x0)} v ),

we get

    v(x0) + λ ≥ sup_{Bε(x0)} v.

Hence

    v(x0) + λ + M ≥ sup_{Bε(x0)} v + M ≥ sup_{Bε(x0)} u.

But we have more: since u(x0) − v(x0) > M − δ > M − λ, we get

    u(x0) + 2λ > sup_{Bε(x0)} u,

and

    sup_{Bε(x0)} u > inf_{Bε(x0)} u + (4 + 4β/α) λ.

Hence

    u(x0) − (2 + 4β/α) λ > inf_{Bε(x0)} u.

If we bound the integral by the value of the supremum, we can control all the terms in the DPP in terms of u(x0). We have

    u(x0) = max{ (1/2) ( sup_{Bε(x0)} u + inf_{Bε(x0)} u ), (α/2) ( sup_{Bε(x0)} u + inf_{Bε(x0)} u ) + β ⨍_{Bε(x0)} u }
          < max{ (1/2) (u(x0) + 2λ + u(x0) − (2 + 4β/α)λ),
                 (α/2) (u(x0) + 2λ + u(x0) − (2 + 4β/α)λ) + β (u(x0) + 2λ) }
          = max{ u(x0) − (2β/α)λ, u(x0) } = u(x0),

which is a contradiction. Hence we obtain that the second condition must hold, that is, we have

    sup_{Bε(x0)} v − inf_{Bε(x0)} v > 2λ.

Applying the DPP we get
$$v(x_0) \ge \frac{1}{2}\Big(\sup_{B_\varepsilon(x_0)} v + \inf_{B_\varepsilon(x_0)} v\Big),$$
together with the fact that
$$\sup_{B_\varepsilon(x_0)} v - \inf_{B_\varepsilon(x_0)} v > 2\lambda.$$
We conclude that
$$v(x_0) > \inf_{B_\varepsilon(x_0)} v + \lambda.$$
We have proved that there exists $x_0$ such that
$$v(x_0) > \inf_{B_\varepsilon(x_0)} v + \lambda \quad\text{and}\quad u(x_0) - v(x_0) > M - \delta.$$

Now we are going to build a sequence of points where the difference between $u$ and $v$ is almost maximal and where the value of $v$ decreases by at least $\lambda$ at every step. Applying the DPP to $M - \delta < u(x_0) - v(x_0)$ and bounding the difference of the suprema by $M$, we get
$$M - \frac{2}{\alpha}\delta + \inf_{B_\varepsilon(x_0)} v < \inf_{B_\varepsilon(x_0)} u.$$
Let $x_1$ be such that $v(x_0) > v(x_1) + \lambda$ and $\inf_{B_\varepsilon(x_0)} v + \delta > v(x_1)$. We get
$$M - \Big(1 + \frac{2}{\alpha}\Big)\delta + v(x_1) < u(x_1).$$
To repeat this construction we need two things:
– In the last inequality, if $\delta$ is small enough we have $u(x_1) \neq v(x_1)$, hence $x_1 \in \Omega$.
– We know that $2v(x_1) \ge \inf_{B_\varepsilon(x_1)} v + \sup_{B_\varepsilon(x_1)} v > v(x_0) + \inf_{B_\varepsilon(x_1)} v$. Hence, since $v(x_0) > v(x_1) + \lambda$, we get $v(x_1) > \inf_{B_\varepsilon(x_1)} v + \lambda$.
Then we get $v(x_{n-1}) > v(x_n) + \lambda$ and
$$M - \Big(\sum_{k=0}^{n} \Big(\frac{2}{\alpha}\Big)^k\Big)\delta + v(x_n) < u(x_n).$$
We can repeat this argument as long as
$$M - \Big(\sum_{k=0}^{n} \Big(\frac{2}{\alpha}\Big)^k\Big)\delta > 0.$$
Since $\delta > 0$ is arbitrary, this allows us to iterate as many steps as we want, while $v$ decreases by at least $\lambda$ at every step, which is a contradiction with the fact that $v$ is bounded.


Now we want to show that this unique function that satisfies the DPP is the game value. The key point of the proof is to construct a strategy based on the approximating sequence that we used to construct the solution.

Theorem 6.17. Given $f \equiv 0$, the game has a value, $u_I^\varepsilon = u_{II}^\varepsilon$, which coincides with the unique $p_1$-$p_2$-harmonious function.

Proof. Let $u$ be the unique $p_1$-$p_2$-harmonious function. We will show that $u \le u_I^\varepsilon$. The analogous result can be proved for $u_{II}^\varepsilon$, completing the proof.

Let us consider a sub-$p_1$-$p_2$-harmonious function $u_0$, smaller than $\inf_{\mathbb{R}^N\setminus\Omega} g$ at every point in $\Omega$. Starting with this $u_0$ we build the corresponding $u_k$ as in Lemma 6.12. We have $u_k \to u$ as $k \to \infty$. Now, given $\delta > 0$, let $n > 0$ be such that $u_n(x_0) > u(x_0) - \frac{\delta}{2}$. We build a strategy $S_I^\delta$ for Player I. In the first $n$ moves, given $x_{k-1}$, she will choose to move to a point that almost maximizes $u_{n-k}$, that is, she chooses $x_k \in B_\varepsilon(x_{k-1})$ such that
$$u_{n-k}(x_k) > \sup_{B_\varepsilon(x_{k-1})} u_{n-k} - \frac{\delta}{2n},$$
and chooses the index $i \in \{1, 2\}$ in order to maximize
$$\frac{\alpha_i}{2}\Big\{\inf_{B_\varepsilon(x_{k-1})} u_{n-k} - \frac{\delta}{2n} + \sup_{B_\varepsilon(x_{k-1})} u_{n-k}\Big\} + \beta_i \fint_{B_\varepsilon(x_{k-1})} u_{n-k}\, dy.$$

After the first $n$ moves she will follow a strategy that ends the game almost surely (for example, pulling in a fixed direction). We have
$$\begin{aligned}
\mathbb{E}^{x_0}_{S_I^\delta,S_{II}}\Big[u_{n-k}(x_k) + \frac{k\delta}{2n} \,\Big|\, x_0,\dots,x_{k-1}\Big] &\ge \max_{i\in\{1,2\}}\Big(\frac{\alpha_i}{2}\Big\{\inf_{B_\varepsilon(x_{k-1})} u_{n-k} - \frac{\delta}{2n} + \sup_{B_\varepsilon(x_{k-1})} u_{n-k}\Big\} + \beta_i \fint_{B_\varepsilon(x_{k-1})} u_{n-k}\, dy\Big) + \frac{k\delta}{2n}\\
&\ge u_{n-k+1}(x_{k-1}) + \frac{(k-1)\delta}{2n},
\end{aligned}$$
where we have estimated the strategy of Player II by $\inf$ and used the construction of the $u_k$. Thus
$$M_k = \begin{cases} u_{n-k}(x_k) + \dfrac{k\delta}{2n} & \text{for } 0 \le k \le n,\\[2mm] \inf_{\mathbb{R}^N\setminus\Omega} g + \dfrac{\delta}{2} & \text{for } k > n \end{cases}$$
is a submartingale.

Now we have
$$u_I^\varepsilon(x_0) = \sup_{S_I}\inf_{S_{II}} \mathbb{E}^{x_0}_{S_I,S_{II}}[g(x_\tau)] \ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S_I^\delta,S_{II}}[g(x_\tau)] \ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S_I^\delta,S_{II}}[M_\tau] - \frac{\delta}{2} \ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S_I^\delta,S_{II}}[M_0] - \frac{\delta}{2} = u_n(x_0) - \frac{\delta}{2} > u(x_0) - \delta,$$
where we used the optional stopping theorem for $M_k$. Since $\delta$ is arbitrary, this proves the claim.

As an immediate corollary of our results in this section we obtain a comparison result for solutions to the DPP.

Corollary 6.18. If $v$ and $u$ are $p_1$-$p_2$-harmonious functions for $g_v, f_v$ and $g_u, f_u$, respectively, such that $g_v \ge g_u$ and $f_v \ge f_u$, then $v \ge u$.
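The DPP studied in this section can also be explored numerically by fixed-point iteration. The following is a minimal one-dimensional sketch, not taken from the book: the grid, the datum $g$, and the pairs $(\alpha_i, \beta_i)$ are illustrative choices; it iterates the operator $u \mapsto \max_i\{(\alpha_i/2)(\sup + \inf) + \beta_i\,\text{avg}\}$ over $\varepsilon$-balls and checks that the resulting function stays between the data bounds.

```python
import numpy as np

# Illustrative 1-D sketch of the p1-p2-harmonious DPP on Omega = (0, 1):
# u(x) = max_i { (a_i/2)(sup_{B_eps(x)} u + inf_{B_eps(x)} u) + b_i * avg_{B_eps(x)} u }
h = 0.01                         # grid step
eps = 0.05                       # ball radius (an integer multiple of h)
x = np.arange(-eps, 1 + eps + h / 2, h)
inside = (x > 0) & (x < 1)       # grid points in Omega
g = x**2                         # boundary datum, extended to the strip outside
u = g.copy()                     # initial guess; frozen at g outside Omega
pairs = [(1.0, 0.0), (0.5, 0.5)] # illustrative (alpha_i, beta_i) with alpha_i + beta_i = 1
r = int(round(eps / h))          # ball radius measured in grid points

for _ in range(500):
    new = u.copy()
    for k in np.where(inside)[0]:
        ball = u[k - r:k + r + 1]
        new[k] = max(a / 2 * (ball.max() + ball.min()) + b * ball.mean()
                     for a, b in pairs)
    if np.max(np.abs(new - u)) < 1e-8:
        u = new
        break
    u = new

# Each update is a convex-type combination of ball values, so the iterates
# stay between the data bounds (compare Lemma 6.20 with f = 0).
assert g.min() - 1e-8 <= u.min() and u.max() <= g.max() + 1e-8
```

This is only a discretization sketch; the book's construction uses the increasing sequence $u_k$ started from a sub-$p_1$-$p_2$-harmonious function rather than a grid.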

6.3 Game value convergence

First, we show some properties of $p_1$-$p_2$-harmonious functions that we need in order to prove convergence as $\varepsilon \to 0$. We want to apply the Arzelà–Ascoli type lemma, so we need to check that:
(1) there exists $C > 0$ such that $|u^\varepsilon(x)| < C$ for every $\varepsilon > 0$ and every $x \in \Omega$;
(2) given $\eta > 0$, there are constants $r_0$ and $\varepsilon_0$ such that for every $\varepsilon < \varepsilon_0$ and any $x, y \in \Omega$ with $|x - y| < r_0$ we have $|u^\varepsilon(x) - u^\varepsilon(y)| < \eta$.
In this way we will obtain that there exist a uniformly continuous function $u : \Omega \to \mathbb{R}$ and a subsequence, still denoted by $\{u^\varepsilon\}$, such that
$$u^\varepsilon \to u \quad \text{uniformly in } \Omega, \text{ as } \varepsilon \to 0.$$
So our task now is to show that the family $u^\varepsilon$ satisfies the hypotheses of the previous lemma. To this end we need some bounds on the expected exit time when a player chooses a certain strategy. Let us start by showing that the $u^\varepsilon$ are uniformly bounded. In Lemma 6.10 we obtained a bound for the value of the game for a fixed $\varepsilon$; here we need a bound independent of $\varepsilon$. To this end, let us define what we understand by pulling in one direction: we fix a direction, that is, a unitary vector $v$, and at each turn of the game the player's strategy is given by
$$S(x_{k-1}) = x_{k-1} + (\varepsilon - \varepsilon^3/2^k)\, v.$$


Lemma 6.19. In a game where a player pulls in a fixed direction, the expectation of the exit time is bounded above by $\mathbb{E}[\tau] \le C\varepsilon^{-2}$ for some $C > 0$ independent of $\varepsilon$.

Proof. First, let us assume without loss of generality that $\Omega \subset \{x \in \mathbb{R}^N : 0 < x_N < R\}$ and that the direction the player is pulling towards is $-e_N$. Then
$$M_k = (x_k)_N + \frac{\varepsilon^3}{2^k}$$
is a supermartingale. Indeed, if the random move occurs, then we know that the expectation of $(x_{k+1})_N$ is equal to $(x_k)_N$. If the Tug-of-War game is played, we know that with probability one half $(x_{k+1})_N = (x_k)_N - \varepsilon + \varepsilon^3/2^k$, and if the other player moves, $(x_{k+1})_N \le (x_k)_N + \varepsilon$, so the expectation is less than or equal to $(x_k)_N + \frac{\varepsilon^3}{2^{k+1}}$.

Let us consider the expectation of $(M_{k+1} - M_k)^2$. If the random walk occurs, then the expectation is $\frac{\varepsilon^2}{N+2} + o(\varepsilon^2)$. Indeed,
$$\fint_{B_\varepsilon} x_N^2 = \frac{1}{N} \fint_{B_\varepsilon} |x|^2 = \frac{1}{N|B_1|\varepsilon^N} \int_0^\varepsilon r^2\, |\partial B_r|\, dr = \frac{|\partial B_1|}{N|B_1|\varepsilon^N} \int_0^\varepsilon r^{N+1}\, dr = \frac{\varepsilon^2}{N+2}.$$
If the Tug-of-War occurs, we know that with probability one half $(x_{k+1})_N = (x_k)_N - \varepsilon + \varepsilon^3/2^k$, so the expectation is greater than or equal to $\frac{\varepsilon^2}{2}$.

Let us consider the expectation of $M_k^2 - M_{k+1}^2$. We have
$$\mathbb{E}[M_k^2 - M_{k+1}^2] = \mathbb{E}[(M_{k+1} - M_k)^2] + 2\,\mathbb{E}[(M_k - M_{k+1})M_{k+1}].$$
As $(x_k)_N$ is positive, we have $2\,\mathbb{E}[(M_k - M_{k+1})M_{k+1}] \ge 0$. Then
$$\mathbb{E}[M_k^2 - M_{k+1}^2] \ge \frac{\varepsilon^2}{N+2},$$
so
$$M_k^2 + \frac{k\varepsilon^2}{N+2}$$
is a supermartingale. According to the optional stopping theorem for supermartingales,
$$\mathbb{E}\Big[M_{\tau\wedge k}^2 + \frac{(\tau\wedge k)\varepsilon^2}{N+2}\Big] \le M_0^2.$$
We have
$$\mathbb{E}[\tau\wedge k]\,\frac{\varepsilon^2}{N+2} \le M_0^2 - \mathbb{E}[M_{\tau\wedge k}^2] \le M_0^2.$$
Taking the limit in $k$, we get a bound for the expected exit time, $\mathbb{E}[\tau] \le (N+2)M_0^2\,\varepsilon^{-2}$, so the statement holds for $C = (N+2)R^2$.
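The bound of Lemma 6.19 is easy to check by simulation. The rough Monte Carlo sketch below is not from the book: the one-dimensional setting, the values $\alpha = \beta = 1/2$, and the adversary's worst-case reply $+\varepsilon$ are illustrative assumptions; it estimates $\mathbb{E}[\tau]$ for the pulling strategy and compares it with $(N+2)R^2\varepsilon^{-2}$.

```python
import random

# Monte Carlo sketch of Lemma 6.19 with N = 1 and Omega = (0, R):
# with probability beta the token moves uniformly in (x - eps, x + eps);
# otherwise a fair coin decides between the puller's move -(eps - eps^3/2^k)
# and the opponent's worst-case move +eps.
def exit_time(eps, R=1.0, x0=0.5, beta=0.5, rng=random):
    x, k = x0, 0
    while 0.0 < x < R:
        k += 1
        if rng.random() < beta:
            x += rng.uniform(-eps, eps)       # random move
        elif rng.random() < 0.5:
            x -= eps - eps**3 / 2**k          # puller's move towards 0
        else:
            x += eps                          # opponent pushes away
    return k

random.seed(0)
eps, R = 0.1, 1.0
trials = 2000
mean_tau = sum(exit_time(eps, R) for _ in range(trials)) / trials
bound = (1 + 2) * R**2 / eps**2               # (N + 2) R^2 eps^{-2} with N = 1
assert 0 < mean_tau <= bound
```

The observed mean is well below the bound, which is of order $\varepsilon^{-2}$ as the lemma predicts; the exact constant in the simulation is of course discretization-dependent.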

Lemma 6.20. An $f$-$p_1$-$p_2$-harmonious function $u^\varepsilon$ with boundary values $g$ satisfies
$$\inf_{y\notin\Omega} g(y) + C \inf_{y\in\Omega} f(y) \le u^\varepsilon(x) \le \sup_{y\notin\Omega} g(y) + C \sup_{y\in\Omega} f(y).$$
Proof. We use the connection to games. Let one of the players choose a strategy of pulling in a fixed direction. Then $\mathbb{E}[\tau] \le C\varepsilon^{-2}$, and this gives the upper bound
$$\mathbb{E}\Big[g(x_\tau) + \varepsilon^2 \sum_{n=0}^{\tau-1} f(x_n)\Big] \le \sup_{y\notin\Omega} g(y) + \mathbb{E}[\tau]\,\varepsilon^2 \sup_{y\in\Omega} f(y) \le \sup_{y\notin\Omega} g(y) + C \sup_{y\in\Omega} f(y).$$

The lower bound follows analogously.

Let us now show that the $u^\varepsilon$ are asymptotically uniformly continuous. First we need a lemma that bounds the expectation of the exit time when one player is pulling towards a fixed point. Let us consider an annular domain $B_R(y)\setminus B_\delta(y)$ and a game played inside this domain. In each round the token starts at a certain point $x$, and an $\varepsilon$-step Tug-of-War is played inside $B_R(y)$ or the token moves at random with uniform probability in $B_R(y)\cap B_\varepsilon(x)$. If an $\varepsilon$-step Tug-of-War is played, with probability $1/2$ each player moves the token to a point of his choice in $B_R(y)\cap B_\varepsilon(x)$. We can think that there is a third player choosing whether the $\varepsilon$-step Tug-of-War or the random move occurs. The game ends when the position reaches $B_\delta(y)$, that is, when $x_{\tau^*} \in B_\delta(y)$.

Lemma 6.21. Assume that one of the players pulls towards $y$ in the game described above. Then, no matter how many times the Tug-of-War is played or a random move is made, the exit time verifies
$$\mathbb{E}^{x_0}[\tau^*] \le \frac{C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(y), x_0) + o(1)\big)}{\varepsilon^2}, \tag{6.3}$$
for $x_0 \in B_R(y)\setminus B_\delta(y)$. Here $\tau^*$ is the exit time in the previously described game, and $o(1) \to 0$ as $\varepsilon \to 0$ can be taken depending only on $\delta$ and $R$.

Proof. Let us denote $h^\varepsilon(x) = \mathbb{E}^x[\tau^*]$. By symmetry we know that $h^\varepsilon$ is radial, and it is easy to see that it is increasing in $r = |x - y|$. If we assume that the other player wants to maximize the expectation of the exit time and that the random move or Tug-of-War is chosen in the same way, the function $h^\varepsilon$ satisfies the DPP
$$h^\varepsilon(x) = \max\Big\{\frac{1}{2}\Big(\max_{B_\varepsilon(x)\cap B_R(y)} h^\varepsilon + \min_{B_\varepsilon(x)\cap B_R(y)} h^\varepsilon\Big), \fint_{B_\varepsilon(x)\cap B_R(y)} h^\varepsilon\, dz\Big\} + 1,$$


by the above assumptions, and because the number of steps always increases by one when making a step. Further, we denote $v^\varepsilon(x) = \varepsilon^2 h^\varepsilon(x)$ and obtain
$$v^\varepsilon(x) = \max\Big\{\frac{1}{2}\Big(\sup_{B_\varepsilon(x)\cap B_R(y)} v^\varepsilon + \inf_{B_\varepsilon(x)\cap B_R(y)} v^\varepsilon\Big), \fint_{B_\varepsilon(x)\cap B_R(y)} v^\varepsilon\, dz\Big\} + \varepsilon^2.$$
This induces us to look for a function $v$ such that
$$v(x) \ge \fint_{B_\varepsilon(x)} v\, dz + \varepsilon^2 \quad\text{and}\quad v(x) \ge \frac{1}{2}\Big(\sup_{B_\varepsilon(x)} v + \inf_{B_\varepsilon(x)} v\Big) + \varepsilon^2. \tag{6.4}$$
Note that for small $\varepsilon$ this is a sort of discrete version of the following inequalities:
$$\begin{cases} \Delta v(x) \le -2(N+2), & x \in B_{R+\varepsilon}(y)\setminus B_{\delta-\varepsilon}(y),\\ \Delta_\infty^H v(x) \le -2, & x \in B_{R+\varepsilon}(y)\setminus B_{\delta-\varepsilon}(y). \end{cases} \tag{6.5}$$
This leads us to consider the problem
$$\begin{cases} \Delta v(x) = -2(N+2), & x \in B_{R+\varepsilon}(y)\setminus B_\delta(y),\\ v(x) = 0, & x \in \partial B_\delta(y),\\ \frac{\partial v}{\partial \nu} = 0, & x \in \partial B_{R+\varepsilon}(y), \end{cases} \tag{6.6}$$
where $\frac{\partial v}{\partial \nu}$ refers to the normal derivative. The solution to this problem is radially symmetric and strictly increasing in $r = |x - y|$. It takes the form $v(r) = -ar^2 - br^{2-N} + c$ if $N > 2$ and $v(r) = -ar^2 - b\log(r) + c$ if $N = 2$. If we extend this $v$ to $B_\delta(y)\setminus B_{\delta-\varepsilon}(y)$, it satisfies $\Delta v(x) = -2(N+2)$ in $B_{R+\varepsilon}(y)\setminus B_{\delta-\varepsilon}(y)$. We know that
$$\Delta_\infty^H v = v_{rr} \le v_{rr} + \frac{N-1}{r}\, v_r = \Delta v.$$
Thus $v$ satisfies the inequalities (6.5). Then a classical calculation shows that $v$ satisfies (6.4) for each $B_\varepsilon(x) \subset B_{R+\varepsilon}(y)\setminus B_{\delta-\varepsilon}(y)$.

In addition, as $v$ is increasing in $r$, for each $x \in B_R(y)\setminus B_\delta(y)$ we have
$$\fint_{B_\varepsilon(x)\cap B_R(y)} v\, dz \le \fint_{B_\varepsilon(x)} v\, dz \le v(x) - \varepsilon^2$$
and
$$\frac{1}{2}\Big(\sup_{B_\varepsilon(x)\cap B_R(y)} v + \inf_{B_\varepsilon(x)\cap B_R(y)} v\Big) \le \frac{1}{2}\Big(\sup_{B_\varepsilon(x)} v + \inf_{B_\varepsilon(x)} v\Big) \le v(x) - \varepsilon^2.$$
It follows that
$$\mathbb{E}[v(x_k) + k\varepsilon^2 \mid x_0,\dots,x_{k-1}] \le \max\Big\{\frac{1}{2}\Big(\sup_{B_\varepsilon(x_{k-1})\cap B_R(y)} v + \inf_{B_\varepsilon(x_{k-1})\cap B_R(y)} v\Big), \fint_{B_\varepsilon(x_{k-1})\cap B_R(y)} v\, dz\Big\} + k\varepsilon^2 \le v(x_{k-1}) + (k-1)\varepsilon^2,$$

if $x_{k-1} \in B_R(y)\setminus B_\delta(y)$. Thus $v(x_k) + k\varepsilon^2$ is a supermartingale, and the optional stopping theorem yields
$$\mathbb{E}^{x_0}[v(x_{\tau^*\wedge k}) + (\tau^*\wedge k)\varepsilon^2] \le v(x_0).$$
Because $x_{\tau^*} \in B_\delta(y)\setminus B_{\delta-\varepsilon}(y)$, we have
$$0 \le -\mathbb{E}^{x_0}[v(x_{\tau^*})] \le o(1).$$
Furthermore, the estimate
$$0 \le v(x_0) \le C(R/\delta)\operatorname{dist}(\partial B_\delta(y), x_0)$$
holds for the solutions of (6.6). Thus, passing to the limit as $k \to \infty$, we obtain
$$\varepsilon^2\, \mathbb{E}^{x_0}[\tau^*] \le v(x_0) - \mathbb{E}[v(x_{\tau^*})] \le C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(y), x_0) + o(1)\big).$$
This completes the proof.

Next we derive a uniform bound and an estimate for the asymptotic continuity of the family of $p_1$-$p_2$-harmonious functions. We assume here that $\Omega$ satisfies an exterior sphere condition: for each $y \in \partial\Omega$, there exists $B_\delta(z) \subset \mathbb{R}^N\setminus\Omega$ such that $y \in \partial B_\delta(z)$.

Lemma 6.22. Let $g$ be Lipschitz continuous in $\mathbb{R}^N\setminus\Omega$ and $f$ Lipschitz continuous in $\Omega$ such that $f \equiv 0$, $\inf f > 0$, or $\sup f < 0$. The $p_1$-$p_2$-harmonious function $u^\varepsilon$ with data $g$ and $f$ satisfies
$$|u^\varepsilon(x) - u^\varepsilon(y)| \le \operatorname{Lip}(g)(|x - y| + \delta) + C(R/\delta)(|x - y| + o(1))(1 + \|f\|_\infty) + \tilde C\, \operatorname{Lip}(f)\, |x - y|, \tag{6.7}$$


for every small enough $\delta > 0$ and for every two points $x, y \in \mathbb{R}^N$. Here $o(1)$ can be taken depending only on $\delta$ and $R$.

Proof. The case $x, y \notin \Omega$ is clear. Thus, we can concentrate on the cases $x \in \Omega$, $y \notin \Omega$, as well as $x, y \in \Omega$. We use the connection to games. Suppose first that $x \in \Omega$ and $y \notin \Omega$. By the exterior sphere condition, there exists $B_\delta(z) \subset \mathbb{R}^N\setminus\Omega$ such that $y \in \partial B_\delta(z)$. Now Player I chooses a strategy of pulling towards $z$, denoted by $S_I^z$. Then
$$M_k = |x_k - z| - C\varepsilon^2 k$$
is a supermartingale for a constant $C$ large enough, independent of $\varepsilon$. Indeed,
$$\mathbb{E}^{x_0}_{S_I^z,S_{II}}[|x_k - z| \mid x_0,\dots,x_{k-1}] \le \max_{i\in\{1,2\}}\Big(\frac{\alpha_i}{2}\big\{|x_{k-1} - z| + \varepsilon + |x_{k-1} - z| - \varepsilon + \varepsilon^3\big\} + \beta_i \fint_{B_\varepsilon(x_{k-1})} |x - z|\, dx\Big) \le |x_{k-1} - z| + C\varepsilon^2.$$
The first inequality follows from the choice of the strategy, and the second follows from the estimate
$$\fint_{B_\varepsilon(x_{k-1})} |x - z|\, dx \le |x_{k-1} - z| + C\varepsilon^2.$$

By the optional stopping theorem, this implies that
$$\mathbb{E}^{x_0}_{S_I^z,S_{II}}[|x_\tau - z|] \le |x_0 - z| + C\varepsilon^2\, \mathbb{E}^{x_0}_{S_I^z,S_{II}}[\tau]. \tag{6.8}$$
Next we can estimate $\mathbb{E}^{x_0}_{S_I^z,S_{II}}[\tau]$ by the stopping time of Lemma 6.21. Let $R > 0$ be such that $\Omega \subset B_R(z)$. Thus, by (6.3),
$$\varepsilon^2\, \mathbb{E}^{x_0}_{S_I^z,S_{II}}[\tau] \le \varepsilon^2\, \mathbb{E}^{x_0}_{S_I^z,S_{II}}[\tau^*] \le C(R/\delta)\big(\operatorname{dist}(\partial B_\delta(z), x_0) + o(1)\big).$$
Since $y \in \partial B_\delta(z)$, $\operatorname{dist}(\partial B_\delta(z), x_0) \le |y - x_0|$, and thus (6.8) implies
$$\mathbb{E}^{x_0}_{S_I^z,S_{II}}[|x_\tau - z|] \le C(R/\delta)\big(|x_0 - y| + o(1)\big).$$
We get
$$g(z) - C(R/\delta)\big(|x - y| + o(1)\big) \le \mathbb{E}^{x_0}_{S_I^z,S_{II}}[g(x_\tau)].$$

Thus, we obtain
$$\begin{aligned}
\sup_{S_I}\inf_{S_{II}} \mathbb{E}^{x_0}_{S_I,S_{II}}\Big[g(x_\tau) + \varepsilon^2\sum_{n=0}^{\tau-1} f(x_n)\Big] &\ge \inf_{S_{II}} \mathbb{E}^{x_0}_{S_I^z,S_{II}}\Big[g(x_\tau) + \varepsilon^2\sum_{n=0}^{\tau-1} f(x_n)\Big]\\
&\ge g(z) - C(R/\delta)(|x_0 - y| + o(1)) - \varepsilon^2 \sup_{S_{II}} \mathbb{E}^{x_0}_{S_I^z,S_{II}}[\tau]\, \|f\|_\infty\\
&\ge g(y) - \operatorname{Lip}(g)\delta - C(R/\delta)(|x_0 - y| + o(1))(1 + \|f\|_\infty).
\end{aligned}$$
The upper bound can be obtained by choosing for Player II a strategy where he points to $z$, and thus (6.7) follows.

Finally, let $x, y \in \Omega$ and fix the strategies $S_I, S_{II}$ for the game starting at $x$. We define a virtual game starting at $y$: we use the same coin tosses and random steps as in the usual game starting at $x$. Furthermore, the players adopt their strategies $S_I^v, S_{II}^v$ from the game starting at $x$, that is, when the game position is $y_{k-1}$, a player chooses the step that would be taken at $x_{k-1}$ in the game starting at $x$. We proceed in this way until the first time $x_k \notin \Omega$ or $y_k \notin \Omega$. At that point we have $|x_k - y_k| = |x - y|$, and we may apply the previous steps, which work for $x_k \in \Omega$, $y_k \notin \Omega$ or for $x_k, y_k \notin \Omega$.

In the case $f \equiv 0$ we are done. In the case $\inf_{y\in\Omega} |f(y)| > 0$, as we know that the $u^\varepsilon$ are uniformly bounded according to Lemma 6.20, the expected exit time is bounded by
$$\tilde C = \frac{\max_{y\notin\Omega} |g(y)| + C \max_{y\in\Omega} |f(y)|}{\inf_{y\in\Omega} |f(y)|}.$$

Corollary 6.23. Let {uε } be a p1 -p2 -harmonious family. Then there exists a uniformly continuous u and a subsequence still denoted by {uε } such that uε → u

uniformly in Ω.

Proof. Using Lemmas 6.20 and 6.22 we obtain that the family uε satisfies the hypothesis of the compactness lemma, Lemma 3.6. Now, before proceeding to prove that the limit is a solution to the PDE, let us state the definition of a viscosity solution. We have to handle some technical difficulties as the 1-homogeneous ∞-Laplacian is not well defined when the gradient vanishes. Observing that Δu = tr(D2 u)

and

2 ΔH ∞ u = ⟨D u

we can write (6.2) as F(∇u, D2 u) = f , where

∇u ∇u ; ⟩, |∇u| |∇u|

6.3 Game value convergence

F(v, X) = max {−αi i∈{1,2}

| 97

v v X − θi tr(X)}. |v| |v|

Note that F is degenerate elliptic, that is, F(v, X) ≤ F(v, Y) for v ∈ ℝN \ {0} and X, Y ∈ SN provided X ≥ Y, as it is generally requested to work in the context of viscosity solutions. As we have already seen in Chapter 3, this function F : ℝN × SN 󳨃→ ℝ is not well defined at v = 0 (here SN denotes the set of real symmetric N × N matrices). Therefore, we need to consider the lower semicontinuous F∗ and upper semicontinuous F ∗ envelopes of F. These functions coincide with F for v ≠ 0 and for v = 0 they are given by F ∗ (0, X) = max {−αi λmin (X) − θi tr(X)} i∈{1,2}

and F∗ (0, X) = max {−αi λmax (X) − θi tr(X)}, i∈{1,2}

where λmin (X) = min{λ : λ is an eigenvalue of X} and λmax (X) = max{λ : λ is an eigenvalue of X}. Now we are ready to give the definition for a viscosity solution to our equation. Definition 6.24. For 2 ≤ p1 , p2 ≤ ∞ consider the equation H max{−ΔH p1 u, −Δp2 u} = f

in Ω. (1) A lower semicontinuous function u is a viscosity supersolution if, for every ϕ ∈ C 2 such that ϕ touches u at x ∈ Ω strictly from below, we have F ∗ (∇ϕ(x), D2 ϕ(x)) ≥ f (x). (2) An upper semicontinuous function u is a subsolution if, for every ψ ∈ C 2 such that ψ touches u at x ∈ Ω strictly from above, we have F∗ (∇ψ(x), D2 ψ(x)) ≤ f (x). (3) Finally, u is a viscosity solution if it is both a sub- and a supersolution. Theorem 6.25. The function u obtained as a limit in Corollary 6.23 is a viscosity solution to (6.2) when we consider the game with f /2 as the running payoff function. Proof. First, we observe that u = g on 𝜕Ω due to uε = g on 𝜕Ω for all ε > 0. Hence, we can focus our attention on showing that u is p1 -p2 -harmonic inside Ω in the viscosity

98 | 6 Maximal operators sense. To this end, we recall from [84] an estimate that involves the regular Laplacian (p = 2) and an approximation for the ∞-Laplacian (p = ∞). Choose a point x ∈ Ω and a C 2 -function ϕ defined in a neighborhood of x. Note that since ϕ is continuous we have min ϕ(y) = inf ϕ(y)

y∈Bε (x)

y∈Bε (x)

for all x ∈ Ω. Let x1ε be the point at which ϕ attains its minimum in Bε (x), i. e., ϕ(x1ε ) = min ϕ(y). y∈Bε (x)

It follows from the Taylor expansions in [84] that α ( max ϕ(y) + min ϕ(y)) + β ∫ ϕ(y) dy − ϕ(x) 2 y∈Bε (x) y∈Bε (x) Bε (x)

xε − x xε − x ε {(p − 2)⟨D2 ϕ(x)( 1 ), ( 1 )⟩ + Δϕ(x)} 2(N + p) ε ε 2



+ o(ε2 ).

(6.9)

Suppose that ϕ touches u at x strictly from below. We want to prove that F ∗ (∇ϕ(x), D ϕ(x)) ≥ f (x). By the uniform convergence, there exists a sequence {xε } converging to x such that uε − ϕ has an approximate minimum at xε , that is, for ηε > 0, there exists xε such that 2

uε (x) − ϕ(x) ≥ uε (xε ) − ϕ(xε ) − ηε . Moreover, considering ϕ̃ = ϕ − uε (xε ) − ϕ(xε ), we can assume that ϕ(xε ) = uε (xε ). Thus, by recalling the fact that uε is p1 -p2 -harmonious, we obtain ηε ≥ ε2

f (xε ) α − ϕ(xε ) + max { i (max ϕ + min ϕ) + βi ∫ ϕ(y) dy}, i∈{1,2} 2 2 Bε (xε ) Bε (xε ) Bε (xε )

and thus, by (6.9), and choosing ηε = o(ε2 ), we have 0≥

xε − xε xε − xε ε2 max {αi ⟨D2 ϕ(xε )( 1 ), ( 1 )⟩ + θi Δϕ(xε )} 2 i∈{1,2} ε ε f (x ) + ε2 ε + o(ε2 ). 2

Next we need to observe that ⟨D2 ϕ(xε )(

xε − xε x1ε − xε ), ( 1 )⟩ ε ε

converges to Δ∞ ϕ(x) when ∇ϕ(x) ≠ 0 and always is bounded in the limit by λmin (D2 ϕ(x)) and λmax (D2 ϕ(x)). Dividing by ε2 and letting ε → 0, we get

6.4 Comments | 99

F ∗ (∇ϕ(x), D2 ϕ(x)) ≥ f (x). Therefore u is a viscosity supersolution. To prove that u is a viscosity subsolution, we use a reverse inequality to (6.9) by considering the maximum point of the test function and choose a smooth test function that touches u from above. Now, we just observe that this probabilistic approach provides an alternative existence proof of viscosity solutions to our PDE problem. Corollary 6.26. Any limit function obtained as in Corollary 6.23 is a viscosity solution to the problem H max{−ΔH p1 u, −Δp2 u} = f { u=g

on Ω, on 𝜕Ω.

In particular, the problem has a solution. One can show that the problem has a unique solution using PDE methods, therefore we conclude that we have convergence as ε → 0 of uε (not only along subsequences).

6.4 Comments The results of this chapter are taken from [28]. Here we have a connection with the obstacle problem that we briefly comment. First, play the usual Tug-of-War with noise with probabilities α1 and β1 and final payoff g (assume that the running payoff is f ≡ 0 for simplicity). We call uϵ1 the value of this game. Now use this value function as an obstacle (see Chapter 5) for the Tug-of-War game with noise with probabilities α2 and β2 to obtain a function uϵ2 . Now we repeat this procedure (we use the previous function as an obstacle for the game alternating the use of α1 , β1 and α2 , β2 . In this way we obtain an increasing sequence of functions uϵn that converge to a solution to the DPP for the maximum operator (the one that we studied in this chapter). We refer to [27] for a similar argument in the PDE setting. We can look at this procedure in the following way: First uϵ1 is the value of the game with α1 and β1 . Next, we start playing with probabilities α2 and β2 but we allow Player I to change only one time to play the game with probabilities α1 and β1 , and continue playing this game until the end. Then the value function uϵ2 is just the value of this game (with the possibility of only one change). In general, uϵn is the value function of a game in which the player can change at most n times between the two sets of probabilities. In this way we look at the value function for our game in this chapter as the value function of a game in which Player I can change from one game to another infinitely many times.

7 Games for eigenvalues of the Hessian In this chapter, we first study a game whose values converge to a solution of the Dirichlet problem λj (D2 u) = 0 u=g

{

in Ω, on 𝜕Ω.

(λj , g)

Here Ω is a domain in ℝN and for the Hessian matrix of a function u : Ω 󳨃→ ℝ, D2 u, we denote by λ1 (D2 u) ≤ ⋅ ⋅ ⋅ ≤ λN (D2 u) the ordered eigenvalues. Thus our equation says that the j-th smaller eigenvalue of the Hessian is equal to zero inside Ω. Our next main goal in this chapter is to describe a game whose values approximate viscosity solutions to Pucci’s maximal operator {

+ Pλ,Λ (D2 u) := Λ ∑λj >0 λj + λ ∑λj 0

and κN−j+1 (x) > 0,

∀x ∈ 𝜕Ω.

(H)

Now, let us give a geometric interpretation of solutions to (λj , g). Let Hj be the set of functions v such that v ≤ g,

on 𝜕Ω

and have the following property: For every S affine of dimension j and every j-dimensional domain D ⊂ S ∩ Ω we have v ≤ z,

in D,

where z is the concave envelope of v|𝜕D in D. Then we have the following result. https://doi.org/10.1515/9783110621792-007

102 | 7 Games for eigenvalues of the Hessian Theorem 7.1 ([29]). An upper semicontinuous function v belongs to Hj if and only if it is a viscosity subsolution to (λj , g). Moreover, the function u(x) = sup v(x) v∈Hj

is the largest viscosity solution to λj (D2 u) = 0, in Ω, with u ≤ g on 𝜕Ω. Remark 7.2. Note that we can look at the equation λj = 0 from a dual perspective. Now, we consider VN−j+1 the set of functions w that are greater than or equal to g on 𝜕Ω and verify the following property: For every T affine of dimension N − j + 1 and any domain D ⊂ T, w is larger than or equal to z for every z a convex function in D that is less than or equal to w on 𝜕D. Let u(x) =

inf

w∈VN−j+1

w(x).

Arguing as before, it turns out that u is the smallest viscosity solution to λj (D2 u) = 0,

in Ω

with u ≥ g on 𝜕Ω. Note that, for j = N, the equation for the concave envelope of u|𝜕Ω in Ω is just λN = 0, while the equation for the convex envelope is λ1 = 0. See [94] for the convex envelope of a boundary datum and [93] for the convex envelope of a function inside Ω, f : Ω 󳨃→ ℝ. Note that the concave/convex envelope of g inside Ω is well defined for every domain (just take the infimum/supremum of concave/convex functions that are above/below g on 𝜕Ω). Remark that Theorem 7.1 says that the equation λj (D2 u) = 0 for 1 < j < N is also related to concave/convex envelopes of g, but in this case we consider concave/convex functions restricted to affine subspaces. Now, let us describe the game that we propose to approximate solutions to the equation. It is a two-player zero-sum game. Fix a domain Ω ⊂ ℝN , ε > 0 and a final payoff function g : ℝN \Ω 󳨃→ ℝ. The rules of the game are the following: The game starts with a token at an initial position x0 ∈ Ω, one player (the one who wants to minimize the expected payoff) chooses a subspace S of dimension j and then the second player (who wants to maximize the expected payoff) chooses one unitary vector, v, in the subspace S. Then the position of the token is moved to x ± ϵv with equal probabilities. The game continues until the position of the token leaves the domain and at this point xτ the first player gets −g(xτ ) and the second player g(xτ ). When the two players fix their strategies SI (the first player chooses a j-dimensional subspace S at every step of

7.1 A random walk for λj

|

103

the game) and SII (the second player chooses a unitary vector v ∈ S at every step of the game) we can compute the expected outcome as x

𝔼S0,S [g(xτ )]. I

II

Then the values of the game for any x0 ∈ Ω for the two players are defined as x

uεI (x0 ) = inf sup 𝔼S0,S [g(xτ )], SI

SII

I

II

x

uεII (x0 ) = sup inf 𝔼S0,S [g(xτ )]. SII

SI

I

II

When the two values coincide we say that the game has a value. Next, we state that this game has a value and the value verifies an equation (called the DPP in the literature). Theorem 7.3. The game has the value uε = uεI = uεII , which verifies {

uε (x) = infdim(S)=j supv∈S,|v|=1 { 21 uε (x + εv) + 21 uε (x − εv)} ε

u (x) = g(x)

x ∈ Ω, x ∈ ̸ Ω.

Our next goal is to look for the limit as ε → 0. To this end we need a geometric assumption on 𝜕Ω. Given y ∈ 𝜕Ω we assume that there exists r > 0 such that for every δ > 0 there exist T ⊂ ℝN , a subspace of dimension j, v ∈ ℝN of norm 1, λ > 0, and θ > 0 such that {x ∈ Ω ∩ Br (y) ∩ Tλ : ⟨v, x − y⟩ < θ} ⊂ Bδ (y), where Tλ = {x ∈ ℝN : d(x, T) < λ}. For our game with a given j we will assume that Ω satisfies both (Fj ) and (FN−j+1 ); in this case we will say that Ω satisfies condition (F). For example, if we consider the equation λ2 = 0 in ℝ3 , we will require that the domain satisfies (F2 ), as illustrated in Figure 7.1. Theorem 7.4. Assume that Ω satisfies (F) and let uε be the values of the game. Then there exists a continuous function u such that uε → u,

as ϵ → 0,

uniformly in Ω. Moreover, the limit u is characterized as the unique viscosity solution to {

λj (D2 u) = 0 u=g

in Ω, on 𝜕Ω.

104 | 7 Games for eigenvalues of the Hessian

Figure 7.1: Condition (F2 ) in ℝ3 . We have 𝜕Ω curved, Bδ (y), and Tλ flat.

We regard condition (F) as a geometric way to state (H) without assuming that the boundary is smooth. In Section 7.1.3, we discuss the relation within the different conditions on the boundary in detail, and we prove that (H) ⇒ (F). These results can be easily extended to cover equations of the form k

∑ α i λ ji = 0

(7.2)

i=1

with α1 + ⋅ ⋅ ⋅ + αk = 1, αi > 0, and λj1 ≤ ⋅ ⋅ ⋅ ≤ λjk any choice of k eigenvalues of D2 u (not necessarily consecutive ones). In fact, once we fix indices j1 , . . . , jk , we can just choose at random (with probabilities α1 , . . . , αk ) which game we play at each step (between the previously described games that give λji in the limit). In this case the DPP reads k

1 1 sup { uε (x + εv) + uε (x − εv)}). 2 2

uε (x) = ∑ αi ( inf

dim(S)=ji v∈S,|v|=1

i=1

Passing to the limit as ε → 0 we obtain a solution to (7.2). In particular, we can handle equations of the form Pk+ (D2 u) :=

N

∑ i=N−k+1

λi (D2 u) = 0

k

and Pk− (D2 u) := ∑ λi (D2 u) = 0, i=1

or a convex combination of the previous two, ± Pk,l,α (D2 u) := α

N

∑ i=N−k+1

l

λi (D2 u) + (1 − α) ∑ λi (D2 u) = 0. i=1

7.1 A random walk for λj

|

105

Remark 7.5. We can interchange the roles of Player I and Player II. In fact, consider a version of the game where the player who chooses the subspace S of dimension j is the one seeking to maximize the expected payoff while the one who chooses the unitary vector wants to minimize the expected payoff. In this case the game values will converge to a solution of the equation λN−j+1 (D2 u) = 0. Note that the geometric condition on Ω, (Fj ) and (FN−j+1 ), is also well suited to deal with this case.

7.1.1 Preliminaries on viscosity solutions We begin by stating the usual definition of a viscosity solution to (λj , g). Here and in what follows Ω is a bounded domain in ℝN . We refer to [37] and Appendix A for general results on viscosity solutions. Definition 7.6. A function u : Ω 󳨃→ ℝ verifies λj (D2 u) = 0 in the viscosity sense if: (1) For every ϕ ∈ C 2 such that u − ϕ has a strict minimum at the point x ∈ Ω with u(x) = ϕ(x), we have λj (D2 ϕ(x)) ≤ 0. (2) For every ψ ∈ C 2 such that u − ψ has a strict maximum at the point x ∈ Ω with u(x) = ψ(x), we have λj (D2 ψ(x)) ≥ 0. Now, we refer to [51] for the following existence and uniqueness result for viscosity solutions to (λj , g). Theorem 7.7 ([51]). Let Ω be a smooth bounded domain in ℝN . Assume that condition (H) holds at every point on 𝜕Ω. Then, for every g ∈ C(𝜕Ω), the problem {

λj (D2 u) = 0 u=g

has a unique viscosity solution u ∈ C(Ω).

in Ω, on 𝜕Ω

106 | 7 Games for eigenvalues of the Hessian We remark that for the equation λj (D2 u) = 0 there is a comparison principle. A viscosity supersolution u (a lower semicontinuous function that verifies (1) in Definition 7.6) and viscosity subsolution u (an upper semicontinuous function that verifies (2) in Definition 7.6) that are ordered as u ≤ u on 𝜕Ω are also ordered as u ≤ u inside Ω. This comparison principle holds without assuming condition (H). Condition (H) is needed to obtain the continuity up to the boundary of the solutions; see [51]. See also [29], where explicit barriers are constructed employing this condition. 7.1.2 Description of the game In this section, we describe in detail the two-player zero-sum game that we call a random walk for λj . Let Ω ⊂ ℝN be a bounded open set and fix ε > 0. A token is placed at x0 ∈ Ω. Player I, the player seeking to minimize the final payoff, chooses a subspace S of dimension j and then Player II (who wants to maximize the expected payoff) chooses one unitary vector, v, in the subspace S. Then the position of the token is moved to x ± εv with equal probabilities. After the first round, the game continues from x1 according to the same rules. This procedure yields a sequence of game states x0 , x1 , . . ., where every xk is a random variable. The game ends when the token leaves Ω. We denote by xτ the first point in the sequence of game states that is outside Ω, so that τ refers to the first time we leave the domain. At this time the game ends with the final payoff given by g(xτ ), where g is a given continuous function that we call payoff function. Player I earns −g(xτ ) while Player II earns g(xτ ). A strategy SI for Player I is a function defined on the partial histories that gives a j-dimensional subspace S at every step of the game, SI (x0 , x1 , . . . , xk ) = S ∈ Gr(j, ℝN ). 
A strategy $S_{II}$ for Player II is a function defined on the partial histories that gives a unitary vector in a prescribed $j$-dimensional subspace $S$ at every step of the game, $S_{II}(x_0, x_1, \dots, x_k, S) = v \in S$. When the two players fix their strategies $S_I$ and $S_{II}$, we can compute the expected outcome as follows: given the sequence $x_0, \dots, x_k$ with $x_k \in \Omega$, the next game position is distributed according to the probability
$$\pi_{S_I,S_{II}}(x_0,\dots,x_k, A) = \frac{1}{2}\,\delta_{x_k + \varepsilon S_{II}(x_0,\dots,x_k, S_I(x_0,\dots,x_k))}(A) + \frac{1}{2}\,\delta_{x_k - \varepsilon S_{II}(x_0,\dots,x_k, S_I(x_0,\dots,x_k))}(A).$$

7.1 A random walk for λj

|

107

By using Kolmogorov’s extension theorem and the one-step transition probabilities, x x we can build a probability measure ℙS0,S on the game sequences. We denote by 𝔼S0,S I II I II the corresponding expectation. The value of the game for Player I is given by x

uεI (x0 ) = inf sup 𝔼S0,S [g(xτ )] SI

I

SII

II

while the value of the game for Player II is given by x

uεII (x0 ) = sup inf 𝔼S0,S [g(xτ )]. SII

SI

I

II

As usual, when uεI = uεII we say that the game has a value. Let us observe that the game ends almost surely. Then the expectations in the previous definitions are well defined. If we consider the square of the distance to a fixed point, at every step, this value increases by at least ε2 with probability 21 . As the distance to that point is bounded with a positive probability the game ends after a finite number of steps. This implies that the game ends almost surely. To see that the game has a value, we can consider u, a function that satisfies the DPP 1 1 { inf sup { u(x + εv) + u(x − εv)} x ∈ Ω, { u(x) = dim(S)=j 2 2 v∈S,|v|=1 { { x ∈ ̸ Ω. { u(x) = g(x) The existence of such a function can be seen by Perron’s method. The operator given by the right-hand side of the DPP is in the hypotheses of the main result of [99]. Now, we want to prove that u = uεI = uεII . We know that uεI ≥ uεII , and to obtain the desired result, we will show that u ≥ uεI and uεII ≥ u. Given η > 0 we can consider the strategy SII0 for Player II that at every step almost maximizes u(xk + εv) + u(xk − εv), that is, SII0 (x0 , x1 , . . . , xk , S) = w ∈ S, such that

We have

1 1 { u(xk + εw) + u(xk − εw)} 2 2 1 1 ≥ sup { u(xk + εv) + u(xk − εv)} − η2−(k+1) . 2 2 v∈S,|v|=1 x

𝔼S0,S0 [u(xk+1 ) − η2−(k+1) | x0 , . . . , xk ] I

II



1 1 sup { u(xk + εv) + u(xk − εv)} S,dim(S)=j v∈S,|v|=1 2 2 inf

− η2−(k+1) − η2−(k+1) ≥ u(xk ) − η2−k ,

where we have estimated the strategy of Player I by the infimum and used the DPP. Thus M_k = u(x_k) − η2^{−k} is a submartingale. Now, we have

u_II^ε(x_0) = sup_{S_II} inf_{S_I} 𝔼^{x_0}_{S_I,S_II}[g(x_τ)]
 ≥ inf_{S_I} 𝔼^{x_0}_{S_I,S_II^0}[g(x_τ)]
 ≥ inf_{S_I} liminf_{k→∞} 𝔼^{x_0}_{S_I,S_II^0}[M_{τ∧k}]
 ≥ inf_{S_I} 𝔼^{x_0}_{S_I,S_II^0}[M_0] = u(x_0) − η,

where τ∧k = min(τ, k), and we used the optional stopping theorem for M_k. Since η is arbitrary, this proves that u_II^ε ≥ u. An analogous strategy can be considered for Player I to prove that u ≥ u_I^ε.

Now our aim is to pass to the limit in the values of the game,

u^ε → u, as ε → 0,

and obtain in this limit process a viscosity solution to (7.1). To obtain a convergent subsequence u^ε → u we will use one more time the Arzelà–Ascoli type lemma. We want to prove that:
(1) there exists C > 0 such that |u^ε(x)| < C for every ε > 0 and every x ∈ Ω;
(2) given η > 0, there are constants r_0 and ε_0 such that for every ε < ε_0 and any x, y ∈ Ω with |x−y| < r_0 we have |u^ε(x) − u^ε(y)| < η.

Concerning (1), we observe that

min g ≤ u^ε ≤ max g.  (7.3)
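To make the DPP concrete, the sketch below iterates it to a fixed point in the case j = 1 and N = 2, where the sup over unit vectors in a line S reduces to the symmetric two-point average and only the minimizing choice of the line survives. All concrete choices here are illustrative assumptions, not from the text: Ω is the unit square, the set of admissible directions is a coarse finite sample, and g(x, y) = x² − y² is the boundary datum.

```python
# Value iteration for the DPP of the j = 1 game on a 2-D grid:
#   u(x) = min over directions w of ( u(x + eps*w) + u(x - eps*w) ) / 2,
# with u = g outside Omega.  Hedged sketch: the direction set "dirs" is a
# coarse sample of all unit directions (the diagonal steps have length
# sqrt(2)*eps), and the domain and datum are illustrative.
n = 21
eps = 1.0 / (n - 1)                        # grid spacing = step size
dirs = [(1, 0), (0, 1), (1, 1), (1, -1)]   # finite sample of directions

def g(i, j):
    x, y = i * eps, j * eps
    return x * x - y * y                   # boundary datum (a saddle)

u = [[g(i, j) for j in range(n)] for i in range(n)]

for _ in range(2000):                      # Gauss-Seidel sweeps
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i][j] = min(0.5 * (u[i + a][j + b] + u[i - a][j - b])
                          for (a, b) in dirs)
```

Each update is an average of previously computed values, so the iterates stay between min g and max g, which is the discrete counterpart of (7.3); the limit equation λ_1(D²u) = 0 is solved by the convex envelope of the boundary values, so at the center the value drops well below g = 0.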

Hence it remains to prove that u^ε satisfies the second hypothesis. To prove this we will have to make some geometric assumptions on the domain. Recall that for our game with a given j we will assume that Ω satisfies both (F_j) and (F_{N−j+1}). Let us observe that for j = 1 we assume (F_N); this condition can be read as follows: given y ∈ 𝜕Ω, there exists r > 0 such that for every δ > 0 there exist v ∈ ℝ^N of norm 1 and θ > 0 such that

{x ∈ Ω ∩ B_r(y) : ⟨v, x−y⟩ < θ} ⊂ B_δ(y).  (7.4)

Lemma 7.8. Given η > 0 there are constants r_0 and ε_0 such that for every ε < ε_0 and any x, y ∈ Ω with |x−y| < r_0 we have

|u^ε(x) − u^ε(y)| < η.


Proof. The case x, y ∈ ℝ^N \ Ω follows from the uniform continuity of g in ℝ^N \ Ω. For the case x, y ∈ Ω we argue as follows. We fix the strategies S_I, S_II for the game starting at x and define a virtual game starting at y that uses the same random steps as the game starting at x. Furthermore, the players adopt their strategies S_I^v, S_II^v from the game starting at x, that is, when the game position is y_k a player makes the choices that he would have made at x_k in the game starting at x. We proceed in this way until for the first time x_k ∈ ℝ^N \ Ω or y_k ∈ ℝ^N \ Ω. At that point we have |x_k − y_k| = |x−y|, and the desired estimate follows from the one for x_k ∈ Ω, y_k ∈ ℝ^N \ Ω or for x_k, y_k ∈ ℝ^N \ Ω. Thus, we can concentrate on the case x ∈ Ω and y ∈ ℝ^N \ Ω. In addition, we can assume that y ∈ 𝜕Ω: if we have the bound for those points, we can obtain a bound for a point y ∈ ℝ^N \ Ω just by considering z ∈ 𝜕Ω on the line segment between x and y. In this case we have u^ε(y) = g(y), and we need to obtain a bound for u^ε(x).

First, we deal with j = 1. To this end we just observe that, for any possible strategy of the players (that is, for any possible choice of the direction v at every point), the projection of x_n in the direction of a fixed vector w of norm 1, ⟨x_n − y, w⟩, is a martingale. We fix r > 0 and consider x_τ, the first time x leaves Ω or B_r(y). Hence

𝔼⟨x_τ − y, w⟩ ≤ ⟨x − y, w⟩ ≤ d(x, y) < r_0.

From the geometric assumption on Ω we have ⟨x_n − y, w⟩ ≥ −ε. Therefore

ℙ(⟨x_τ − y, w⟩ > r_0^{1/2}) r_0^{1/2} − (1 − ℙ(⟨x_τ − y, w⟩ > r_0^{1/2})) ε < r_0.

Then we have, for every ε small enough,

ℙ(⟨x_τ − y, w⟩ > r_0^{1/2}) < 2 r_0^{1/2}.

Then (7.4) implies that, given δ > 0, we can conclude that ℙ(d(x_τ, y) > δ) < 2 r_0^{1/2} by taking r_0 small enough and an appropriate w. When d(x_τ, y) ≤ δ, the point x_τ is actually the point where the process has left Ω. Hence,

|u^ε(x) − g(y)| ≤ ℙ(d(x_τ, y) ≤ δ) |g(x_τ) − g(y)| + ℙ(d(x_τ, y) > δ) 2 max g
 ≤ sup_{x_τ ∈ B_δ(y)} |g(x_τ) − g(y)| + 4 r_0^{1/2} max g < η

if r_0 and δ are small enough.

For a general j we can proceed in the same way. We have to do some extra work to argue that the points x_n that appear along the argument belong to T_λ. If r_0 < λ we have x ∈ T_λ, so if we make sure that at every move v ∈ T, the game sequence will be contained in x + T ⊂ T_λ. Recall that here we are assuming that both (F_j) and (F_{N−j+1}) are satisfied. We can separate the argument into two parts: we will prove on the one hand that u^ε(x) − g(y) < η and on the other that g(y) − u^ε(x) < η. For the first inequality we can make extra assumptions on the strategy of Player I, and for the second one we can do the same with Player II. Since Ω satisfies (F_j), Player I can make sure that at every move v belongs to T by selecting S = T. This proves the upper bound u^ε(x) − g(y) < η. On the other hand, since Ω satisfies (F_{N−j+1}), Player II is able to select v in a space S of dimension j and hence he can always choose v ∈ S ∩ T, since dim(T) + dim(S) = N − j + 1 + j = N + 1 > N. This shows the lower bound g(y) − u^ε(x) < η.

From (7.3) and Lemma 7.8 the hypotheses of the Arzelà–Ascoli type lemma are satisfied. Hence we have obtained uniform convergence of u^ε along a subsequence.

Corollary 7.9. Let u^ε be the values of the game. Then, along a subsequence,

u^ε → u, as ε → 0,

uniformly in Ω.

Now, let us prove that any possible limit of u^ε is a viscosity solution to the limit PDE problem.

Theorem 7.10. Any uniform limit u of the values of the game u^ε is a viscosity solution to

λ_j(D²u) = 0  in Ω,
u = g  on 𝜕Ω.  (7.5)

Proof. First, we observe that since u^ε = g on 𝜕Ω we obtain, from the uniform convergence, that u = g on 𝜕Ω. Also, note that a uniform limit of the u^ε is a continuous function.

To check that u is a viscosity solution to λ_j(D²u) = 0 in Ω, in the sense of Definition 7.6, let ϕ ∈ C² be such that u − ϕ has a strict minimum at the point x ∈ Ω with u(x) = ϕ(x). We need to check that λ_j(D²ϕ(x)) ≤ 0. As u^ε → u uniformly in Ω, there exists a sequence x_ε such that x_ε → x as ε → 0 and

u^ε(z) − ϕ(z) ≥ u^ε(x_ε) − ϕ(x_ε) − ε³

(we remark that u^ε is not continuous in general). As u^ε is a solution to

u^ε(x) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ u^ε(x+εv) + ½ u^ε(x−εv) },

we obtain that ϕ verifies the inequality

0 ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ ϕ(x_ε+εv) + ½ ϕ(x_ε−εv) − ϕ(x_ε) } − ε³.

Now, consider the second-order Taylor expansion of ϕ,

ϕ(y) = ϕ(x) + ∇ϕ(x)·(y−x) + ½ ⟨D²ϕ(x)(y−x), (y−x)⟩ + o(|y−x|²)

as |y−x| → 0. Hence, we have

ϕ(x+εv) = ϕ(x) + ε ∇ϕ(x)·v + (ε²/2) ⟨D²ϕ(x)v, v⟩ + o(ε²)

and

ϕ(x−εv) = ϕ(x) − ε ∇ϕ(x)·v + (ε²/2) ⟨D²ϕ(x)v, v⟩ + o(ε²).

Using these expansions we get

½ ϕ(x_ε+εv) + ½ ϕ(x_ε−εv) − ϕ(x_ε) = (ε²/2) ⟨D²ϕ(x_ε)v, v⟩ + o(ε²),

and then we conclude that

0 ≥ ε² inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ ⟨D²ϕ(x_ε)v, v⟩ } + o(ε²).

Dividing by ε² and passing to the limit as ε → 0 we get

0 ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨D²ϕ(x)v, v⟩,

which is equivalent to 0 ≥ λ_j(D²ϕ(x)), as we wanted to show. The reverse inequality when a smooth function ψ touches u from above can be obtained in a similar way.

Remark 7.11. Since there is uniqueness of viscosity solutions to the limit problem (7.5) (uniqueness holds for every domain without any geometric restriction once we have existence of a continuous solution), we obtain that the uniform limit

lim_{ε→0} u^ε = u

exists (not only along a subsequence).
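The Taylor-expansion step in the proof above can be checked numerically. In the sketch below (the quadratic ϕ, the point x, and the directions v are all illustrative choices) the symmetric second difference reproduces ⟨D²ϕ(x)v, v⟩ exactly, because for a quadratic polynomial the o(ε²) remainder vanishes and the odd first-order terms cancel.

```python
# Check:  ( phi(x + eps*v) + phi(x - eps*v) ) / 2 - phi(x)
#       = (eps^2 / 2) * <D2phi(x) v, v>   (exact for quadratic phi).
import math

A = [[2.0, 0.5], [0.5, -1.0]]      # symmetric Hessian (illustrative)
b = [1.0, -3.0]                    # gradient part (illustrative)

def phi(x):
    quad = sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad + b[0] * x[0] + b[1] * x[1]

x = [0.7, -0.2]
eps = 1e-3
for v in ([1.0, 0.0], [0.0, 1.0], [math.sqrt(0.5), math.sqrt(0.5)]):
    sym = (phi([x[0] + eps * v[0], x[1] + eps * v[1]])
           + phi([x[0] - eps * v[0], x[1] - eps * v[1]])) / 2 - phi(x)
    Av_v = sum(A[i][j] * v[i] * v[j] for i in range(2) for j in range(2))
    assert abs(sym - eps ** 2 / 2 * Av_v) < 1e-12
```

For a general C² function the same computation holds up to the o(ε²) error, which is exactly what is used after dividing by ε² in the proof.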

7.1.3 Geometric conditions on 𝜕Ω

Now, our goal is to analyze the relation between the two different conditions on 𝜕Ω. We have:
(H), which involves the curvatures of 𝜕Ω and hence requires smoothness; this condition was used in [51] to obtain existence of a continuous viscosity solution to (7.1);
(F), which is given by (F_j) and (F_{N−j+1}); this condition was used to obtain convergence of the values of the game and, hence, also allows us to obtain existence of a continuous viscosity solution to (7.1).
We have (H) ⇒ (F). Observe that in Figure 7.2 we have a domain that is not smooth and hence does not satisfy condition (H), but it can be shown that it satisfies condition (F).

Figure 7.2: A domain that satisfies condition (F) but not condition (H).

Let us show that the condition κ_{N−j+1} > 0 in (H) implies (F_j). We consider T = ⟨x_{N−j+1}, …, x_N⟩ (note that this is a subspace of dimension j), v = x_N, and r as above. We want to show that for every δ > 0 there exist λ > 0 and θ > 0 such that

{x ∈ Ω ∩ B_r(y) ∩ T_λ : ⟨v, x−y⟩ < θ} ⊂ B_δ(y).

We have to choose λ and θ such that for x with ‖x‖ > δ,

‖(x_1, …, x_{N−j})‖ < λ,

and

x_N − ½ ∑_{i=1}^{N−1} κ_i x_i² > o(∑_{i=1}^{N−1} x_i²),

we have x_N > θ. Let us prove this fact. We have

x_N > ½ ∑_{i=1}^{N−1} κ_i x_i² + o(∑_{i=1}^{N−1} x_i²)
 = ½ ∑_{i=1}^{N−j} κ_i x_i² + ½ ∑_{i=N−j+1}^{N−1} κ_i x_i² + o(∑_{i=1}^{N−1} x_i²)
 ≥ −C_1 ∑_{i=1}^{N−j} x_i² + C_2 ∑_{i=N−j+1}^{N−1} x_i² + o(∑_{i=1}^{N−1} x_i²)
 ≥ −C_1 λ² + C_2 δ² + o(∑_{i=1}^{N−1} x_i²) > θ

for r, λ, and θ small enough (for a given δ).

7.2 Games for Pucci's operators

Our main goal in this section is to describe a game whose values approximate viscosity solutions to the maximal Pucci problem

P^+_{λ,Λ}(D²u) := Λ ∑_{λ_j>0} λ_j + λ ∑_{λ_j<0} λ_j = f  in Ω,
u = g  on 𝜕Ω.  (7.6)

Here D²u is the Hessian matrix and λ_j(D²u) denote its eigenvalues. The function f is assumed to be uniformly continuous. We assume that the ellipticity constants verify Λ > λ > 0.

Let us describe the game that we propose to approximate solutions to (7.6). It is a single-player game (the player tries to maximize the expected outcome); it can also be viewed as an optimal control problem. Fix a bounded domain Ω ⊂ ℝ^N that satisfies a uniform exterior sphere condition, a running payoff function f : Ω → ℝ, and a final payoff function g : ℝ^N \ Ω → ℝ. The rules of the game are as follows: a token is placed at an initial position x_0 ∈ Ω; the player chooses an orthonormal basis v_1, …, v_N of ℝ^N and then, for each v_i, he chooses either μ_i = √λ or μ_i = √Λ. Then the position of the token is moved to x ± εμ_i v_i, each of these 2N positions with equal probability 1/(2N). The game continues until the position of the token leaves the domain, and at this point x_τ the payoff is given by g(x_τ) − (1/2N) ε² ∑_{k=0}^{τ−1} f(x_k). For a given strategy S_I (the player chooses an orthonormal basis and the set of corresponding μ_i at every step of the game) we compute the expected outcome as

𝔼^{x_0}_{S_I}[ g(x_τ) − (1/2N) ε² ∑_{k=0}^{τ−1} f(x_k) ].

Then the game value for any x_0 ∈ Ω is defined as

u^ε(x_0) := sup_{S_I} 𝔼^{x_0}_{S_I}[ g(x_τ) − (1/2N) ε² ∑_{k=0}^{τ−1} f(x_k) ].

x ∈ Ω, x ∈ ̸ Ω.

Our next goal is to look for the limit as ε → 0. Theorem 7.13. Let uε be the values of the game. Then uε → u,

as ε → 0,

uniformly in Ω. The limit u is the unique viscosity solution to P + (D2 u) := Λ ∑ λj + λ ∑ λj = f { { λ,Λ λj 0 { { { u=g

in Ω, on 𝜕Ω.

(7.7)

Remark 7.14. When f is assumed to be positive, one can play the game described before, but with the following variant: The player chooses v1 , . . . , vk orthonormal vectors (note that the number of vectors k is part of the player’s choice). Then, the new position of the game goes to x ± ε√Λvi with equal probabilities 1/2N or remains fixed at x with probability 1 − k/N (note that there is no choice of μi among λ, Λ). In this case the value of the game uε verifies uε (x) = −

k N −k ε ε2 1 f (x) + sup ∑ uε (x + ε√Λvi ) + uε (x − ε√Λvi ) + u (x) 2N 2N vi i=1 N

for x ∈ Ω, with uε (x) = g(x) for x ∈ ̸ Ω. Now, with the same ideas used to deal with Pucci’s maximal operator, one can pass to the limit as ε → 0 and find a viscosity solution to the degenerate problem

7.2 Games for Pucci’s operators | 115

P + (D2 u) := Λ ∑ λj = f { { 0,Λ λj >0 { { { u=g

in Ω, on 𝜕Ω.

Note that the requirement on f to be positive is necessary in order to have a solution to the limit equation. When one adapts the proofs in the following sections to this case the key fact is that for f positive the player wants to end the game instead of continue playing for a long time (since at each move he is paying a running payoff that is strictly positive). 7.2.1 Properties of the game values and convergence We begin by stating the usual definition of a viscosity solution to (7.6). Definition 7.15. A continuous function u verifies + (D2 u) := Λ ∑ λj + λ ∑ λj = f Pλ,Λ λj >0

λj 0

λj 0

λj 0 a fixed real number. The game starts with a token placed at an initial position x0 ∈ Ω. At every step, the only player, Player I, chooses an orthonormal basis of ℝN , v1 , . . . , vN and then for each vi he chooses either μi = √λ or μi = √Λ. Then the position of the token is moved to x ± εμi vi with 1 . After the first equal probabilities. That is, each position is selected with probability 2N round, the game continues from x1 according to the same rules. This procedure yields a sequence of game states x0 , x1 , . . ., where every xk is a random variable. The game ends when the token leaves Ω and we denote by xτ this point and τ refers to the first time we hit ℝN \ Ω. The payoff is determined by two given functions: g : ℝN \ Ω → ℝ, the final payoff function, and f : Ω → ℝ, the running payoff function. We require g to be

116 | 7 Games for eigenvalues of the Hessian continuous and bounded and f to be uniformly continuous and also bounded. When the game ends, the total payoff is given by g(xτ ) −

1 2 τ−1 ε ∑ f (x ). 2N k=0 k

1 2 We can think that when the token leaves a point xk , Player I must pay 2N ε f (xk ) to move to the next position and at the end he receives g(xτ ). A strategy SI for Player I is a Borel function defined on the partial histories that gives a orthonormal base and values μi at every step of the game,

SI (x0 , x1 , . . . , xn ) = (v1 , . . . , vN , μ1 , . . . , μN ). x

A starting point x0 and a strategy define a unique probability measure ℙS0 . We denote

by

x 𝔼S0 I

I

the corresponding expectation.

The value of the game is given by

x

uε (x0 ) = sup 𝔼S0 [g(xτ ) − I

SI

1 2 τ−1 ε ∑ f (x )]. 2N k=0 k

Lemma 7.16. The sequence of random variables {|xk − x0 |2 − kλϵ2 }k≥1 x

is a supermartingale with respect to the natural filtration {ℱk 0 }n≥1 . Proof. Let us compute x

x

0 ](xk−1 ) 𝔼S0 [|xk − x0 |2 | ℱk−1 I

= =

1 N ∑ |x − x0 + εμi vi |2 + |xk−1 − x0 − εμi vi |2 2N i=1 k−1 1 N ∑ |x − x0 |2 + ε2 μ2i N i=1 k−1

= |xk−1 − x0 |2 + ε2

1 N 2 ∑μ N i=1 i

≥ |xk−1 − x0 |2 + ε2 λ. Therefore, we have

x

x

0 ](xk−1 ) 𝔼S0 [|xk − x0 |2 − kλϵ2 | ℱk−1 I

≥ |xk−1 − x0 |2 + ε2 λ − kλϵ2

= |xk−1 − x0 |2 − (k − 1)λε2 , as we wanted to show.

7.2 Games for Pucci’s operators | 117

Applying Doob’s optional stopping to the finite stopping times τ ∧ n and letting n → ∞, we obtain x

𝔼S0 [|xτ − x0 |2 − τλϵ2 ] ≥ 0 I

and x

x

λϵ2 𝔼S0 [τ] ≤ 𝔼S0 [|xτ − x0 |2 ] ≤ C(Ω) < ∞.

(7.8)

I

I

In addition, the process ends almost surely, i. e., x

ℙS0 ({τ < ∞}) = 1. I

We conclude that the expectation is well defined. To see that the game value satisfies the DPP, we can consider u, a function that satisfies the DPP N 1 2 1 { { u(x) = − u(x + εμi vi ) + u(x − εμi vi ) ε f (x) + sup ∑ { 2N 2N vi ,μi i=1 { { { { u(x) = g(x)

x ∈ Ω, x ∈ ̸ Ω.

The existence of such a function can be seen by Perron’s method. In fact, the operator given by the right-hand side of the DPP, that is, u 󳨃→ −

N 1 1 2 ̃ − εμi vi ), ε f (x) + sup ∑ u(x + εμi vi ) + u(x 2N 2N vi ,μi i=1

is in the hypotheses of the main result of [99]. Recall that we want to prove that u = uε . Given η > 0 we can consider the strategy 0 SI for Player I that at every step almost maximizes N 1 sup ∑[u(x + εμi vi ) + u(x − εμi vi )], 2N vi ,μi i=1

that is, we take SI0 (x0 , x1 , . . . , xn ) = (w1 , . . . , wN , ν1 , . . . , νN ) such that 1 N ∑ u(xn + ενi wi ) + u(xn − ενi wi ) 2N i=1 ≥

N 1 sup ∑ u(xn + εμi vi ) + u(xn − εμi vi ) − η2−(n+1) . 2N vi ,μi i=1

118 | 7 Games for eigenvalues of the Hessian With this choice of the strategy we have x

𝔼S00 [u(xn+1 ) − I



1 2 n ε ∑ f (x ) − η2−(n+1) | x0 , . . . , xk ] 2N k=0 k

N 1 sup ∑ u(xn + εμi vi ) + u(xn − εμi vi ) 2N vi ,μi i=1



1 2 n ε ∑ f (x ) − η2−(n+1) − η2−(n+1) 2N k=0 k 1 2 n−1 ε ∑ f (x ), 2N k=0 k

≥ u(xn ) − η2−n −

where we have used that the DPP holds at xn for u. That is, we have proved that Mn = u(xn ) − η2−n −

1 2 n−1 ε ∑ f (x ) 2N k=0 k

is a submartingale with respect to the natural filtration. Next, we compute x

uε (x0 ) = sup 𝔼S0 [g(xτ ) − I

SI

x

≥ 𝔼S00 [g(xτ ) − I

x

≥ 𝔼S00 [g(xτ ) − I

x

1 2 τ−1 ε ∑ f (x )] 2N k=0 k

1 2 τ−1 ε ∑ f (x )] 2N k=0 k 1 2 τ−1 ε ∑ f (x ) − η2−τ ] 2N k=0 k x

≥ lim inf 𝔼S00 [Mτ∧n ] = 𝔼S00 [M0 ] = u(x0 ) − η, n→∞

I

I

where τ ∧ n = min(τ, n), where we have used the optional stopping theorem for Mn . Since η is arbitrary this proves that uε ≥ u. Analogously, we have x

𝔼S00 [u(xn+1 ) − I



1 2 n ε ∑ f (x ) | x0 , . . . , xk ] 2N k=0 k

N 1 2 n 1 sup ∑ u(xn + εμi vi ) + u(xn − εμi vi ) − ε ∑ f (x ) 2N vi ,μi i=1 2N k=0 k

≤ u(xn ) −

1 2 n−1 ε ∑ f (x ), 2N k=0 k

where we have estimated the strategy for Player I by the supremum. Hence, Mn = u(xn ) −

1 2 n−1 ε ∑ f (x ) 2N k=0 k

7.2 Games for Pucci’s operators | 119

is a supermartingale. Now, we have x

uε (x0 ) = sup 𝔼S0 [g(xτ ) − I

SI

1 2 τ−1 ε ∑ f (x )] 2N k=0 k x

x

≤ lim sup 𝔼S00 [Mτ∧n ] = 𝔼S00 [M0 ] = u(x0 ). n→∞

I

I

ε

ε

This proves that u ≤ u. We have proved that u = u, and hence Theorem 7.7 follows. Now our aim is to pass to the limit in the values of the game uε → u,

as ε → 0

and obtain in this limit process a viscosity solution to (7.6). To obtain a convergent subsequence uε → u we will use the Arzela–Ascoli type lemma. We need to prove that: (1) There exists C > 0 such that |uε (x)| < C for every ε > 0 and every x ∈ Ω. (2) Given η > 0 there are constants r0 and ε0 such that for every ε < ε0 and any x, y ∈ Ω with |x − y| < r0 we have 󵄨 󵄨󵄨 ε ε 󵄨󵄨u (x) − u (y)󵄨󵄨󵄨 < η. Let us start by showing that the family is uniformly bounded. Lemma 7.17. There exists C > 0 such that |uε (x)| < C for every ε > 0 and every x ∈ Ω. Proof. We consider R > 0 such that Ω ⊂ BR (0) and set Mn = |xn |2 . From the bound (7.8) we get x

𝔼S0 [τ] ≤ I

Next, we claim that min g −

4R2 1 . λ ε2

2R2 max |f | 2R2 max |f | ≤ uε (x) ≤ max g + Nλ Nλ

for every x ∈ Ω. In fact, x

uε (x0 ) = sup 𝔼S0 [g(xτ ) − SI

I

1 2 τ−1 ε ∑ f (x )] 2N k=0 k

x

≤ max g + sup 𝔼S0 [− SI

I

1 2 τ−1 ε ∑ f (x )] 2N k=0 k

1 2 x ≤ max g + ε max |f | sup 𝔼S0 [τ] I 2N SI

4R2 1 1 2 ε max |f | 2N λ ε2 2R2 max |f | . ≤ max g + Nλ ≤ max g +

The lower bound can be obtained analogously.

Next, we prove that the family satisfies condition (2) in Lemma 3.6. To this end, we follow ideas from [86]. First, we prove the following lemma.

Lemma 7.18. Let us consider the game played in an annular domain B_R(y) \ B_δ(y). Then the exit time τ* of the game starting at x_0 verifies

𝔼^{x_0}(τ*) ≤ ( C(R/δ) dist(𝜕B_δ(y), x_0) + o(1) ) / ε²  (7.9)

for x_0 ∈ B_R(y) \ B_δ(y). Here o(1) → 0 as ε → 0 can be taken depending only on δ and R.

Proof. Let us denote a^ε(x) = 𝔼^x(τ*). Since a^ε is invariant under rotations, we know that a^ε is radial. If we assume that the player wants to maximize the expectation of the exit time, the function a^ε satisfies the DPP

a^ε(x) = 1 + (1/2N) sup_{v_i,μ_i} ∑_{i=1}^{N} [ a^ε(x+εμ_i v_i) + a^ε(x−εμ_i v_i) ],

since the number of steps always increases by one when making a step. Further, we let h^ε(x) = ε² a^ε(x) and obtain

h^ε(x) = ε² + (1/2N) sup_{v_i,μ_i} ∑_{i=1}^{N} [ h^ε(x+εμ_i v_i) + h^ε(x−εμ_i v_i) ].

If we rewrite the equation as

−N = sup_{v_i,μ_i} ∑_{i=1}^{N} μ_i² [ ( h^ε(x+εμ_i v_i) + h^ε(x−εμ_i v_i) − 2h^ε(x) ) / (2ε²μ_i²) ],

we obtain a discrete version of the PDE

−N = sup_{v_i,μ_i} ∑_{i=1}^{N} μ_i² 𝜕²h/𝜕v_i².

We denote r = |x−y|. Since h is radial, the eigenvalues of its Hessian are h_rr with multiplicity 1 and h_r/r with multiplicity N−1. We look for a solution u(r), concave and radially increasing; it will verify

−N = λ u_rr + (N−1) Λ u_r / r.

Its solution takes the form

u(r) = − N r² / (2(Λ(N−1)+λ)) + a r^{1−Λ(N−1)/λ} / (1 − Λ(N−1)/λ) + b

if λ ≠ Λ(N−1), and

u(r) = − N r² / (2(Λ(N−1)+λ)) + a log(r) + b

if λ = Λ(N−1). Here a and b are two constants. We consider u defined for x ∈ B_{R+δ}(y) \ B_δ(y). We can choose a and b such that u′(R+δ) = 0, u(δ) = 0, and such that the function is positive. The resulting u is concave and radially increasing. In fact, we have

u′(r) = − N r / (Λ(N−1)+λ) + a r^{−Λ(N−1)/λ}.

From u′(R+δ) = 0 we conclude that

u′(r) = ( N(R+δ) / (Λ(N−1)+λ) ) ( − r/(R+δ) + ( r/(R+δ) )^{−Λ(N−1)/λ} ),

which is positive when r < R+δ. This shows that u is increasing with respect to r. Moreover, u is concave as a function of r, since

λ u_rr = −N − (N−1) Λ u_r / r < 0.

We extend this function as a solution to the same equation to B_δ(y) \ B_{δ−√Λε}(y). If ε√Λ < δ, for each x ∈ B_R(y) \ B_δ(y) we have x ± εμ_i v_i ∈ B_{R+δ}(y) \ B_{δ−√Λε}(y). Since

−N = sup_{v_i,μ_i} ∑_{i=1}^{N} μ_i² 𝜕²u/𝜕v_i²

and u is smooth, we get

−N = sup_{v_i,μ_i} ∑_{i=1}^{N} μ_i² [ ( u(x+εμ_i v_i) + u(x−εμ_i v_i) − 2u(x) ) / (2ε²μ_i²) ] + o(1),

and hence

u(x) = ε² + (1/2N) sup_{v_i,μ_i} ∑_{i=1}^{N} [ u(x+εμ_i v_i) + u(x−εμ_i v_i) ] + o(ε²).

Then, for ε small enough,

u(x) ≥ ε²/2 + (1/2N) sup_{v_i,μ_i} ∑_{i=1}^{N} [ u(x+εμ_i v_i) + u(x−εμ_i v_i) ].

Now, we consider w defined as u in B_R(y) and zero outside. Observe that we have

w(x) ≥ ε²/2 + (1/2N) sup_{v_i,μ_i} ∑_{i=1}^{N} [ w(x+εμ_i v_i) + w(x−εμ_i v_i) ]

for every x ∈ B_R(y) \ B_δ(y).

k 2 ε | x0 , . . . , xk−1 ] 2

N k 1 sup ∑[w(xk−1 + εμi vi ) + w(xk−1 − εμi vi )] + ε2 2N vi ,μi i=1 2

≤ w(xk−1 ) +

k−1 2 ε, 2

if xk−1 ∈ BR (y)\Bδ (y). Thus w(xk )+ k2 ε2 is a supermartingale, and the optional stopping theorem yields 1 𝔼x0 [w(xτ∗ ∧k ) + (τ∗ ∧ k)ε2 ] ≤ w(x0 ). 2 For xτ∗ outside BR (y) we have w(xτ∗ ) = 0 and for xτ∗ ∈ Bδ (y) \ Bδ−√Λϵ (y) we have −w(xτ∗ ) ≤ o(1). Furthermore, the estimate 0 ≤ w(x0 ) ≤ C(R/δ) dist(𝜕Bδ (y), x0 ) holds for the solutions due to the fact that they are concave and verify w = 0 on 𝜕Bδ (y). Thus, passing to the limit as k → ∞, we obtain 1 2 x0 ∗ ε 𝔼 [τ ] ≤ w(x0 ) − 𝔼[w(xτ∗ )] ≤ C(R/δ) dist(𝜕Bδ (y), x0 ) + o(1). 2 This completes the proof. We are ready to prove that the family uε is asymptotically equicontinuous. Lemma 7.19. Given η > 0 there are constants r0 and ε0 such that for every ε < ε0 and any x, y ∈ Ω with |x − y| < r0 we have 󵄨 󵄨󵄨 ε ε 󵄨󵄨u (x) − u (y)󵄨󵄨󵄨 < η. Proof. The case x, y ∈ ̸ Ω follows from the uniformity continuity of g. For the case x, y ∈ Ω we argue as follows. We fix the strategy SI0 for the game starting at x. We consider a virtual game starting at y. We use the same random steps as the game starting at x. Furthermore, the player adopts the strategy SI0 from the game starting at x, that is, when the game position is at yk the player makes the choices that he would have taken at xk = yk −y +x for the game starting at x. At every time we have |xk −yk | = |x −y|. We proceed in this way until for the first time xn ∈ ̸ Ω or yn ∈ ̸ Ω. We can separate the payoff in the amount paid by the player up to this moment and the payoff for the rest

7.2 Games for Pucci’s operators | 123

of the game. The difference in the payoff for the rest of the game can be bounded with the desired estimate in the case xn ∈ Ω, yn ∈ ̸ Ω or for xn , yn ∈ ̸ Ω. The difference for the amount paid by the player before xn or yn reaches ℝN \ Ω, that is, 1 2 n−1 ε ∑ (f (xk ) − f (yk )) 2N k=0 can be bounded considering the bound for the expected exit time obtained in the proof of Lemma 7.17 and the fact that f is uniformly continuous. In fact, we have 󵄨󵄨 󵄨󵄨 n−1 󵄨󵄨 1 󵄨󵄨 R2 𝔼(󵄨󵄨󵄨 ε2 ∑ (f (xk ) − f (yk ))󵄨󵄨󵄨) ≤ ω (|x − y|), 󵄨󵄨 2N 󵄨󵄨 Nλ f k=0 󵄨 󵄨 where ωf stands for the uniform modulus of continuity of f . Thus, we can concentrate on the case x ∈ Ω and y ∈ ̸ Ω. By the exterior sphere condition, there exists Bδ (z) ⊂ ℝN \ Ω such that y ∈ 𝜕Bδ (z). For a small δ we know that the difference |g(y) − g(z)| is small; it remains to prove that the difference |uε (x) − g(z)| is small. We take ε0 such that α0 < δ2 . Then Mk = |xk − z| −

Λ 2 εk δ

is a supermartingale. Indeed, x

𝔼S0 [|xk − z| | x0 , . . . , xk−1 ] I



≤ ≤

max

‖v‖=1,α∈{√λ,√Λ}

|xk−1 − z + εvα| + |xk−1 − z − εvλ| 2

2 2 √ |xk−1 − z + εvα| + |xk−1 − z − εvλ| 2 ‖v‖=1,α∈{√λ,√Λ}

max

max

‖v‖=1,α∈{√λ,√Λ}

≤ |xk−1 − z| + ≤ |xk−1 − z| +

√|xk−1 − z|2 + |εvα|2

ε2 Λ 2|xk−1 − z| Λε2 . δ

The last inequality holds because |xk−1 − z| > δ2 . Then, the optional stopping theorem implies that 𝔼xSI [|xτ − z|] ≤ |x − z| +

Λ 2 x ε 𝔼SI [τ]. δ

124 | 7 Games for eigenvalues of the Hessian Next we estimate 𝔼xSI [τ] by the stopping time in Lemma 7.18, for R such that Ω ⊂

BR \ Bδ (z). In fact, if we play our game in BR \ Bδ (z) we can reproduce the same movements until we exit Ω (this happens before the game in BR \ Bδ (z) ends since Ω ⊂ BR \ Bδ (z)) and hence we obtain, for any strategy SI , 𝔼xSI [τ] ≤ 𝔼xSI [τ∗ ]. Note that any strategy in the domain Ω can be extended to a strategy to the larger ring domain. Thus, it follows from (7.9) that 𝔼xSI [|xτ − z|] ≤ |x − z| +

Λ (C(R/δ) dist(𝜕Bδ (z), x0 ) + o(1)). δ

Since dist(𝜕Bδ (z), x0 ) ≤ |x − y| and |x − z| ≤ |x − y| + δ, we have 𝔼xSI [|xτ − z|] ≤ δ + (1 +

Λ C(R/δ))|x − y| + o(1). δ

Thus, we have obtained bounds for 𝔼xSI [|xτ − z|] and ε2 𝔼xSI [τ]. Using these bounds and the fact that g is uniformly continuous, we have 󵄨󵄨 󵄨󵄨 τ−1 󵄨󵄨 x0 󵄨 󵄨󵄨𝔼 [g(xτ ) − 1 ε2 ∑ f (xk )] − g(z)󵄨󵄨󵄨 󵄨󵄨 SI 󵄨󵄨 2N k=0 󵄨󵄨 󵄨󵄨 1 2 x x ε 𝔼SI [τ]‖f ‖∞ ≤ 𝔼S0 [|g(xτ ) − g(z)|] + I 2N 1 2 x x ε 𝔼SI [τ]‖f ‖∞ ≤ 𝔼S0 [|g(xτ ) − g(z)|] + I 2N ≤ ℙxSI (|xτ − z| ≥ θ)2‖g‖∞ + sup ‖g(x) − g(y)‖ + ‖x−y‖0

Let us point out the identity that links the operator in the DPP with Pucci's maximal operator: for every ϕ ∈ C²,

sup_{v_i,μ_i} ∑_{i=1}^{N} μ_i² ⟨D²ϕ(x)v_i, v_i⟩ = Λ ∑_{λ_j>0} λ_j(D²ϕ(x)) + λ ∑_{λ_j<0} λ_j(D²ϕ(x)) = P^+_{λ,Λ}(D²ϕ(x)).

8 Parabolic problems

8.1 Games for the parabolic p-Laplacian

Definition 8.1. A continuous function u : Ω_T → ℝ is a viscosity solution to (N+p) u_t = (p−2) Δ_∞ u + Δu in Ω_T if, whenever (x_0, t_0) ∈ Ω_T and ϕ ∈ C²(Ω_T) are such that u(x_0, t_0) = ϕ(x_0, t_0) and u(x, t) > ϕ(x, t) for (x, t) ∈ Ω_T, (x, t) ≠ (x_0, t_0), then we have at the point (x_0, t_0)

(N+p) ϕ_t ≥ (p−2) Δ_∞ ϕ + Δϕ  if ∇ϕ(x_0, t_0) ≠ 0,
(N+p) ϕ_t ≥ λ_min((p−2) D²ϕ) + Δϕ  if ∇ϕ(x_0, t_0) = 0.

Moreover, we require that when touching u with a test function from above all inequalities are reversed and λ_min((p−2)D²ϕ) is replaced by λ_max((p−2)D²ϕ).

8.1.2 (p, ε)-parabolic functions and Tug-of-War games

Recall that Ω_T = Ω × (0, T) ⊂ ℝ^{N+1} is an open set. Below g : ℝ^N × (−∞, T) \ Ω_T → ℝ denotes a bounded Lipschitz function.


Definition 8.2. The function u^ε is (p, ε)-parabolic, 2 ≤ p ≤ ∞, in Ω_T with boundary values g if

u^ε(x, t) = (α/2) { sup_{y∈B_ε(x)} u^ε(y, t − ε²/2) + inf_{y∈B_ε(x)} u^ε(y, t − ε²/2) } + β ⨍_{B_ε(x)} u^ε(y, t − ε²/2) dy  for every (x, t) ∈ Ω_T,
u^ε(x, t) = g(x, t)  for every (x, t) ∈ ℝ^N × (−∞, T) \ Ω_T,  (8.2)

where

α = (p−2)/(p+N),  β = (N+2)/(p+N).
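Definition 8.2 can be iterated directly, one time level t → t + ε²/2 at a time. The sketch below is a hedged one-dimensional example: Ω = (0, 1), p = 4, the exterior datum g(x) = x² frozen in time, and B_ε(x) sampled as two grid cells on each side are all illustrative assumptions.

```python
# One-dimensional iteration of the (p, eps)-parabolic scheme (8.2):
#   u(x, t) = (alpha/2)(sup + inf over B_eps(x)) + beta * mean over B_eps(x),
# evaluated at the previous time level, with u = g outside Omega.
N, p = 1, 4.0
alpha = (p - 2.0) / (p + N)
beta = (N + 2.0) / (p + N)

n, pad = 51, 2
h = 1.0 / (n - 1)
eps = pad * h                          # ball radius = 2 grid cells
xs = [(i - pad) * h for i in range(n + 2 * pad)]

def g(x):
    return x * x                       # exterior/boundary datum

u = [g(x) for x in xs]                 # values at the current time level
for _ in range(200):                   # 200 time steps of size eps^2/2
    new = list(u)
    for i in range(pad + 1, pad + n - 1):   # strictly interior points
        ball = u[i - pad:i + pad + 1]
        new[i] = (alpha / 2.0) * (max(ball) + min(ball)) \
                 + beta * sum(ball) / len(ball)
    u = new
```

Since α/2 + α/2 + β = 1, every update is a convex combination of nearby values, so the scheme obeys the comparison principle of Theorem 8.5 at the discrete level: the iterates never leave the range of the data.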

Next we study the parabolic version of the Tug-of-War game with noise studied in Chapter 3. Again, it is a zero-sum game between two players, Player I and Player II. In the parabolic case there are two key differences: the game has a preset maximum number of rounds, and the boundary values may change with time.

To be more precise, at the beginning we fix the maximum number of rounds to be K and place a token at a point x_0 ∈ Ω. The players toss a biased coin with probabilities α and β, α + β = 1. If they get heads (probability α), they play a Tug-of-War round, that is, a fair coin is tossed and the winner of the toss decides a new game position x_1 ∈ B_ε(x_0). On the other hand, if they get tails (probability β), the game state moves according to the uniform probability density to a random point in the ball B_ε(x_0). They continue playing until either the token leaves Ω or the number of rounds reaches K.

Recall the definitions Γ_ε = {x ∈ ℝ^N \ Ω : dist(x, 𝜕Ω) ≤ ε} and Ω_ε = Γ_ε ∪ Ω. We denote by τ ∈ {0, 1, …, K} the hitting time of Γ_ε or K, whichever comes first, and by x_τ ∈ Ω_ε the end point of the game. At the end of the game Player I earns h(x_τ, τ) while Player II earns −h(x_τ, τ), where h : Ω_ε × ℕ → ℝ is the payoff function.

A strategy S_I for Player I is a function which gives the next game position, S_I(x_0, x_1, …, x_k) = x_{k+1} ∈ B_ε(x_k), if Player I wins the coin toss. Similarly, Player II plays according to a strategy S_II. The value of the game for Player I when starting at x_0 with maximum number of rounds K is given by

u_I^{ε,K}(x_0, 0) = sup_{S_I} inf_{S_II} 𝔼^{x_0}_{S_I,S_II}[ h(x_τ, τ) ],

while the value of the game for Player II is given by

u_II^{ε,K}(x_0, 0) = inf_{S_II} sup_{S_I} 𝔼^{x_0}_{S_I,S_II}[ h(x_τ, τ) ].

More generally, we define the value of the game when starting at x and playing for K − k rounds to be

u_I^{ε,K}(x, k) = sup_{S_I} inf_{S_II} 𝔼^{x}_{S_I,S_II}[ h(x_τ, τ) ],

and, for Player II,

u_II^{ε,K}(x, k) = inf_{S_II} sup_{S_I} 𝔼^{x}_{S_I,S_II}[ h(x_τ, τ) ].

Observe that, in order to accommodate time-dependent boundary values, we need to keep track of the number k of rounds played. Also observe that, since τ is finite, the expectations involved in the definition of the game values are well defined. As usual, when u_I^{ε,K} = u_II^{ε,K} we say that the game has a value u^{ε,K} := u_I^{ε,K} = u_II^{ε,K}.

Now, we state the DPP for the Tug-of-War with noise and a maximum number of rounds:

α { sup uε,K (y, k + 1) + inf uε,K (y, k + 1)} y∈Bε (x) 2 y∈Bε (x) + β ∫ uε,K (y, k + 1) dy,

if x ∈ Ω and k < K,

Bε (x)

uε,K (x, k) = h(x, k),

if x ∈ ̸ Ω or k = K.

(8.3)

The DPP is obtained by summing up the expectations of the three possible outcomes for the next step with the corresponding probabilities: Player I chooses the next position (probability α/2), Player II chooses it (probability α/2), or the next position is random (probability β). Observe that uε,K(x, k) is related to the game with K − k rounds and uε,K(x, k + 1) is related to the game with one less round, K − (k + 1) = K − k − 1.

Next we describe the change of time scale that relates values of the Tug-of-War games with noise with a maximum number of rounds and (p, ε)-parabolic functions. The definition of the (p, ε)-parabolic function uε, Definition 8.2, refers to a forward-in-time parabolic equation: the values uε(⋅, t) at time t are determined by the values uε(⋅, t − ε²/2). In contrast, in the DPP (8.3) above, the values at step k are determined by the values at step k + 1. Given 0 < t < T, let K(t) be the integer defined by

2t/ε² ≤ K(t) < 2t/ε² + 1,

that is, K(t) = ⌈2t/ε²⌉. Set t0 = t and tk+1 = tk − ε²/2 for k = 0, 1, . . . , K(t) − 1, that is,

tk = ε² (K(t) − k)/2 + tK(t).

Observe that tK(t) ∈ (−ε²/2, 0]. The game begins at k = 0, corresponding to t0 = t in the time scale. When we play one round, k → k + 1, the clock steps ε²/2 backwards, tk+1 = tk − ε²/2, and we play until we get outside the cylinder when k = τ, corresponding to tτ in the time scale.

In the game we have described above, we have kept track of the position of the token in Ω and the number of steps separately. Considering the values tk we can think that the token moves inside ΩT = Ω × (0, T), and at the k-th step the token is placed at the position (xk, tk). Here we can consider the final payoff given by g(xτ, tτ) instead of h(xτ, τ). The game value uε defined this way is a function on ΩT and satisfies uε(x, t) = uε,K(t)(x, 0) with respect to the value of the game we described earlier. For these functions, the DPP takes the form

uε(x, t) = α/2 { sup_{y∈Bε(x)} uε(y, t − ε²/2) + inf_{y∈Bε(x)} uε(y, t − ε²/2) } + β ⨍_{Bε(x)} uε(y, t − ε²/2) dy,  for every (x, t) ∈ ΩT,
uε(x, t) = g(x, t),  for every (x, t) ∈ (ℝN × (−∞, T)) \ ΩT,   (8.4)

where ⨍ denotes the average (mean value) integral. This agrees with Definition 8.2. From now on we consider the game in this version, with the token moving from (xk, tk) → (xk+1, tk − ε²/2).

Let us prove that this game has a value. First, let us observe that it is easy to prove the existence of a (p, ε)-parabolic function. In fact, given g(x, t) we can compute u(x, t) for 0 < t ≤ ε²/2 using the definition given by (8.2), then continue with u for ε²/2 < t ≤ 2ε²/2, etc.
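The layer-by-layer existence argument just described is easy to carry out numerically. The following Python sketch is our own illustration (it is not from the book): it iterates the DPP (8.4) on a one-dimensional grid with Ω = (0, 1), approximating Bε(x) by grid points and using the usual Tug-of-War-with-noise weights α = (p − 2)/(p + N), β = (N + 2)/(p + N) with N = 1; the grid parameters are arbitrary choices.

```python
import numpy as np

def p_parabolic_layers(g, p=4.0, eps=0.1, dx=0.01, T=0.1, N=1):
    """Illustrative sketch: build a discrete (p, eps)-parabolic function on
    (0, 1) x (0, T] layer by layer from the DPP (8.4).  g(x, t) supplies the
    data on the lateral/initial boundary strips."""
    alpha = (p - 2.0) / (p + N)
    beta = (N + 2.0) / (p + N)
    xs = np.arange(-eps, 1.0 + eps + dx / 2, dx)
    inside = np.where((xs > 0.0) & (xs < 1.0))[0]
    r = int(round(eps / dx))                 # radius of B_eps(x) in grid cells
    u = np.array([g(x, 0.0) for x in xs])    # initial strip t in (-eps^2/2, 0]
    t, dt = 0.0, eps * eps / 2.0
    while t < T - 1e-12:
        t += dt
        new = np.array([g(x, t) for x in xs])   # boundary strip values
        for i in inside:
            ball = u[i - r: i + r + 1]           # values on B_eps(x_i) at t - dt
            new[i] = alpha / 2 * (ball.max() + ball.min()) + beta * ball.mean()
        u = new
    return xs, u
```

Since α/2 + α/2 + β = 1, each layer is a convex combination of values of the previous layer, so the construction preserves the bounds of the data; affine data is invariant, mirroring the fact that affine functions are (p, ε)-parabolic.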

Theorem 8.3. The game has a value and it satisfies the DPP given by (8.4).

Proof. Let u be a (p, ε)-parabolic function. We always have uεI ≤ uεII, and we will show that uεII ≤ u. Analogously one can show that uεI ≥ u, completing the proof. We argue as follows: Player II follows a strategy SII0 such that at xk−1 ∈ Ω he always chooses to step to a point that almost minimizes u, that is, to a point xk such that

u(xk, tk) ≤ inf_{y∈Bε(xk−1)} u(y, tk) + η2^{−k},

for a fixed η > 0. We start from the point (x0, t0), so that K = ⌈2t0/ε²⌉. It follows from the choice of strategies and from the fact that u is (p, ε)-parabolic that

𝔼^{x0}_{SI,SII0}[u(xk, tk) + η2^{−k} | x0, . . . , xk−1]
≤ α/2 { inf_{y∈Bε(xk−1)} u(y, tk) + η2^{−k} + sup_{y∈Bε(xk−1)} u(y, tk) } + β ⨍_{Bε(xk−1)} u(y, tk) dy + η2^{−k}
≤ u(xk−1, tk−1) + η2^{−(k−1)}.

Thus Mk = u(xk, tk) + η2^{−k} is a supermartingale. According to the optional stopping theorem,

uεII(x0, t0) = inf_{SII} sup_{SI} 𝔼^{x0}_{SI,SII}[g(xτ, tτ)] ≤ sup_{SI} 𝔼^{x0}_{SI,SII0}[g(xτ, tτ) + η2^{−τ}]
= sup_{SI} 𝔼^{x0}_{SI,SII0}[u(xτ, tτ) + η2^{−τ}]
≤ sup_{SI} 𝔼^{x0}_{SI,SII0}[u(x0, t0) + η] = u(x0, t0) + η.

As this holds for every η > 0, we have shown that uεII ≤ u, as desired.

Uniqueness and a comparison principle for (p, ε)-parabolic functions follow.

Theorem 8.4. There exists a unique (p, ε)-parabolic function with given boundary values g, and it coincides with the value of the game.

Theorem 8.5. If vε and uε are (p, ε)-parabolic functions with boundary values gvε ≥ guε, then vε ≥ uε in ΩT.

8.1.3 Game value convergence

Here we show that (p, ε)-parabolic functions approximate solutions to ut(x, t) = Δ_p^H u(x, t). To prove the convergence, we use the Arzela–Ascoli type compactness lemma. We restate it using a notation more appropriate for the parabolic setting.


Lemma 8.6. Let {uε : ΩT → ℝ, ε > 0} be a set of functions such that:
(1) there exists C > 0 so that |uε(x, t)| < C for every ε > 0 and every (x, t) ∈ ΩT;
(2) given η > 0 there are constants r0 and ε0 such that for every ε < ε0 and any (x, t), (y, s) ∈ ΩT with |x − y| + |t − s| < r0 we have

|uε(x, t) − uε(y, s)| < η.

Then there exists a uniformly continuous function u : ΩT → ℝ and a subsequence still denoted by {uε} such that

uε → u uniformly in ΩT,

as ε → 0.

First we recall the estimate for the stopping time of a random walk that was obtained in Lemma 3.7. In this lemma, there is no bound for the maximum number of rounds.

Lemma 8.7. Let us consider an annular domain BR(z) \ Bδ(z) and a random walk such that, when at xk−1, the next point xk is chosen according to a uniform probability distribution on Bε(xk−1) ∩ BR(z). Let τ* = inf{k : xk ∈ Bδ(z)}. Then

𝔼^{x0}(τ*) ≤ C(R/δ) (dist(∂Bδ(z), x0) + o(1)) / ε²,

for x0 ∈ BR(z) \ Bδ(z).

Above we used the notation o(1) to denote a quantity such that o(1) → 0 as ε → 0. Next we derive an estimate for the asymptotic uniform continuity of a family {uε} of (p, ε)-parabolic functions with fixed boundary values. We assume that Ω satisfies an exterior sphere condition: for each y ∈ ∂Ω, there exists Bδ(z) ⊂ ℝN \ Ω with δ > 0 such that y ∈ ∂Bδ(z). Below δ is always chosen small enough according to this condition. We also assume that g satisfies

|g(x, tx) − g(y, ty)| ≤ L(|x − y| + |tx − ty|^{1/2}).   (8.5)
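The 1/ε² scaling of the expected stopping time in Lemma 8.7 can be observed in a quick Monte Carlo experiment. The sketch below is our own illustration (annulus radii, starting point, and sample size are arbitrary): it runs the walk with uniform steps in Bε(x) ∩ BR(z) in the plane and compares the mean hitting time of Bδ(z) for two values of ε.

```python
import math, random

def hitting_time(rng, x0=(0.6, 0.0), delta=0.25, R=1.0, eps=0.1):
    """One run of the walk in B_R(0) \\ B_delta(0): from x, the next point is
    uniform in B_eps(x) ∩ B_R(0) (sampled by rejection); tau* is the first k
    with x_k in B_delta(0)."""
    x, y, k = x0[0], x0[1], 0
    while math.hypot(x, y) > delta:
        while True:
            r = eps * math.sqrt(rng.random())        # uniform point in a disc
            a = 2 * math.pi * rng.random()
            nx, ny = x + r * math.cos(a), y + r * math.sin(a)
            if math.hypot(nx, ny) < R:               # stay inside B_R(0)
                break
        x, y, k = nx, ny, k + 1
    return k

def mean_tau(eps, n=150, seed=0):
    rng = random.Random(seed)
    return sum(hitting_time(rng, eps=eps) for _ in range(n)) / n
```

Halving ε should roughly quadruple the empirical mean of τ*, consistent with the C(R/δ)(dist + o(1))/ε² bound.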

First, we consider the case where (y, ty) is a point at the lateral boundary strip.

Lemma 8.8. Let g and Ω be as above. The (p, ε)-parabolic function uε with the boundary data g satisfies

|uε(x, tx) − uε(y, ty)| ≤ C min{|x − y|^{1/2} + o(1), tx^{1/2} + ε} + L|tx − ty|^{1/2} + 2Lδ   (8.6)

for every (x, tx) ∈ ΩT and y ∈ Γε. The constant C depends on δ, N, L, and the diameter of Ω. In (8.6), o(1) is taken relative to ε.

Proof. Suppose for the moment that tx = ty, denote t0 = tx = ty, and set x0 = x as well as K = ⌈2tx/ε²⌉. By the exterior sphere condition, there exists Bδ(z) ⊂ ℝN \ Ω such that y ∈ ∂Bδ(z). Player I chooses a strategy of pulling towards z, denoted by SIz. Then

𝔼^{x0}_{SIz,SII}[|xk − z| | x0, . . . , xk−1]
≤ α/2 { |xk−1 − z| + ε + |xk−1 − z| − ε } + β ⨍_{Bε(xk−1)} |x − z| dx
≤ |xk−1 − z| + Cε²   (8.7)

implies that Mk = |xk − z| − Cε²k is a supermartingale for some C independent of ε. The first inequality follows from the choice of the strategy, and the second follows from the estimate

⨍_{Bε(xk−1)} |x − z| dx ≤ |xk−1 − z| + Cε².

The optional stopping theorem and Jensen's inequality then give

𝔼^{x0}_{SIz,SII}[|xτ − z| + |tτ − t0|^{1/2}] = 𝔼^{x0}_{SIz,SII}[|xτ − z| + ε(τ/2)^{1/2}]
≤ |x0 − z| + Cε(𝔼^{x0}_{SIz,SII}[τ])^{1/2}.   (8.8)

In formula (8.7), the expected distance of the pure Tug-of-War is bounded by |xk−1 − z|, whereas the expected distance of the pure random walk is slightly larger. Therefore, we can bound from above the stopping time of our process by a stopping time of the random walk in the setting of Lemma 8.7 by choosing R > 0 such that Ω ⊂ BR(z). Thus, we obtain

𝔼^{x0}_{SIz,SII}[τ] ≤ min{𝔼^{x0}_{SIz,SII}[τ*], K} ≤ min{C(R/δ)(dist(∂Bδ(z), x0) + o(1))/ε², K}.

Since y ∈ ∂Bδ(z), we have dist(∂Bδ(z), x0) ≤ |y − x0|. Together with (8.8) this gives

𝔼^{x0}_{SIz,SII}[|xτ − z| + |tτ − t0|^{1/2}] ≤ min{C(R/δ)(|x0 − y| + o(1)), Cε²K}^{1/2} + |x0 − z|.

Thus, we end up with

g(z, t0) − L(min{C(R/δ)(|x0 − y| + o(1)), Cε²K}^{1/2} + |x0 − z|)
≤ 𝔼^{x0}_{SIz,SII}[g(xτ, tτ)]
≤ g(z, t0) + L(min{C(R/δ)(|x0 − y| + o(1)), Cε²K}^{1/2} + |x0 − z|),

which implies

sup_{SI} inf_{SII} 𝔼^{x0}_{SI,SII}[g(xτ, tτ)] ≥ inf_{SII} 𝔼^{x0}_{SIz,SII}[g(xτ, tτ)]
≥ g(z, t0) − L(min{C(R/δ)(|x0 − y| + o(1)), Cε²K}^{1/2} + |x0 − z|)
≥ g(y, t0) − 2Lδ − L min{C(R/δ)(|x0 − y| + o(1)), Cε²K}^{1/2}.

The upper bound can be obtained by choosing for Player II a strategy where he points to z, and thus (8.6) follows.

Finally, if tx ≠ ty, then we just use the above estimate and obtain

|uε(x, tx) − uε(y, ty)| ≤ |uε(x, tx) − uε(y, tx)| + |uε(y, tx) − uε(y, ty)|
≤ 2Lδ + min{C(R/δ)(|x − y| + o(1)), Cε²K}^{1/2} + L|tx − ty|^{1/2},

and the proof ends recalling that K = ⌈2tx/ε²⌉.

Next we consider the case when the boundary point (y, ty) lies at the initial boundary strip.

Lemma 8.9. Let g and Ω be as in Lemma 8.8. The (p, ε)-parabolic function uε satisfies

|uε(x, tx) − uε(y, ty)| ≤ C(|x − y| + tx^{1/2} + ε),

for every (x, tx) ∈ ΩT and (y, ty) ∈ Ω × (−ε²/2, 0].

Proof. Set x0 = x and K = ⌈2tx/ε²⌉. Player I pulls to y. Then Mk = |xk − y|² − Ckε² is a supermartingale. Indeed,

𝔼^{x0}_{SIy,SII}[|xk − y|² | x0, . . . , xk−1]
≤ α/2 {(|xk−1 − y| + ε)² + (|xk−1 − y| − ε)²} + β ⨍_{Bε(xk−1)} |x − y|² dx
≤ α{|xk−1 − y|² + ε²} + β(|xk−1 − y|² + Cε²)
≤ |xk−1 − y|² + Cε².   (8.9)

According to the optional stopping theorem,

𝔼^{x0}_{SIy,SII}[|xτ − y|²] ≤ |x0 − y|² + Cε² 𝔼^{x0}_{SIy,SII}[τ],

and since the stopping time is bounded by ⌈2tx/ε²⌉, this implies

𝔼^{x0}_{SIy,SII}[|xτ − y|²] ≤ |x0 − y|² + C(tx + ε²).

Finally, Jensen's inequality gives

𝔼^{x0}_{SIy,SII}[|xτ − y|] ≤ (|x0 − y|² + C(tx + ε²))^{1/2} ≤ |x0 − y| + C(tx^{1/2} + ε).

The rest of the argument is similar to the one used in the previous proof. In particular, we obtain the upper bound by choosing for Player II a strategy where he points to y. We end up with

|uε(x, tx) − uε(y, ty)| ≤ C(|x − y| + tx^{1/2} + ε).

Next we show that (p, ε)-parabolic functions are asymptotically uniformly continuous.

Lemma 8.10. Let g and Ω be as in Lemma 8.8. Let {uε} be a family of (p, ε)-parabolic functions. Then this family satisfies the conditions in Lemma 8.6.

Proof. It follows from the definition of a (p, ε)-parabolic function that

|uε| ≤ sup |g|,

and we can thus concentrate on the second condition of Lemma 8.6. Observe that the case x, y ∈ Γε readily follows from the uniform continuity of the data, and thus we can concentrate on the cases x ∈ Ω, y ∈ Γε, and x, y ∈ Ω.

Choose any η > 0. By (8.6) and (8.9), there exist ε0 > 0, δ > 0, and r0 > 0 so that

|uε(x, tx) − uε(y, ty)| < η

for all ε < ε0 and for any (x, tx) ∈ ΩT and boundary point (y, ty) with y ∈ Γε such that |x − y|^{1/2} + |tx − ty|^{1/2} ≤ r0. Next we consider a slightly smaller domain

Ω̃T = {(z, t) ∈ ΩT : d((z, t), ∂p ΩT) > r0/3},

with d((z, t), ∂p ΩT) = inf{|z − y|^{1/2} + |t − s|^{1/2} : (y, s) ∈ ∂p ΩT}. Suppose then that (x, tx), (y, ty) ∈ ΩT with |x − y|^{1/2} + |tx − ty|^{1/2} < r0/3. First, if (x, tx), (y, ty) ∈ Γ̃, the boundary strip of Ω̃T, then we can estimate

|uε(x, tx) − uε(y, ty)| ≤ 3η

for ε < ε0 by comparing the values at x and y to the nearby boundary values and using the previous step.

Finally, a translation argument finishes the proof. Let (x, tx), (y, ty) ∈ Ω̃T. Without loss of generality we may assume that tx > ty. Define

g̃(z, tz) = uε(z − x + y, tz + ty − tx) + 3η.

We have g̃(z, tz) ≥ uε(z, tz) on Γ̃ by the reasoning above. Solve the (p, ε)-parabolic function ũε in Ω̃T with the boundary values g̃ in Γ̃. By the comparison principle, Theorem 8.5, and the uniqueness theorem, Theorem 8.4, we deduce that uε ≤ ũε in Ω̃T, and thus

uε(x, tx) ≤ ũε(x, tx) = uε(x − x + y, tx − tx + ty) + 3η = uε(y, ty) + 3η.

The lower bound follows by a similar argument.

Corollary 8.11. Let g satisfy the continuity condition (8.5) and let Ω satisfy the exterior sphere condition. Let {uε} be a family of (p, ε)-parabolic functions with boundary values g. Then there exist a uniformly continuous u and a subsequence still denoted by {uε} such that

uε → u uniformly in ΩT,

as ε → 0.

Now we are ready to show that the limit is a viscosity solution to the associated parabolic PDE.

Theorem 8.12. Let g satisfy the continuity condition (8.5) and Ω satisfy the exterior sphere condition. Then the uniform limit

u = lim_{ε→0} uε

of (p, ε)-parabolic functions obtained in Corollary 8.11 is a viscosity solution to the equation ut(x, t) = Δ_p^H u(x, t) with boundary values g.

Proof. First, clearly u = g on ∂Ω, and we can focus our attention on showing that u is a viscosity solution. Now we have, for any ϕ ∈ C², an estimate of the form

α/2 { max_{y∈Bε(x)} ϕ(y, t − ε²/2) + min_{y∈Bε(x)} ϕ(y, t − ε²/2) } + β ⨍_{Bε(x)} ϕ(y, t − ε²/2) dy − ϕ(x, t)
≥ βε²/(2(N + 2)) ( (p − 2) ⟨D²ϕ(x, t) (x1^{ε,t−ε²/2} − x)/ε, (x1^{ε,t−ε²/2} − x)/ε⟩ + Δϕ(x, t) − (N + p) ϕt(x, t) ) + o(ε²),   (8.10)

where x1^{ε,t−ε²/2} is a point at which ϕ attains its minimum in the ball, that is,

ϕ(x1^{ε,t−ε²/2}, t − ε²/2) = min_{y∈Bε(x)} ϕ(y, t − ε²/2).

Suppose then that ϕ touches u at (x, t) from below. By the uniform convergence, there exists a sequence {(xε, tε)} converging to (x, t) such that uε − ϕ has an approximate minimum at (xε, tε), that is,

uε(y, s) − ϕ(y, s) ≥ uε(xε, tε) − ϕ(xε, tε) − o(ε²)

in a neighborhood of (xε, tε). Further, set ϕ̃ = ϕ + uε(xε, tε) − ϕ(xε, tε), so that

uε(xε, tε) = ϕ̃(xε, tε)  and  uε(y, s) ≥ ϕ̃(y, s) − o(ε²).

Thus, by recalling the fact that uε is (p, ε)-parabolic, we obtain

o(ε²) ≥ −ϕ̃(xε, tε) + β ⨍_{Bε(xε)} ϕ̃(y, tε − ε²/2) dy + α/2 { max_{y∈Bε(xε)} ϕ̃(y, tε − ε²/2) + min_{y∈Bε(xε)} ϕ̃(y, tε − ε²/2) }.

According to (8.10), and observing that ∇ϕ = ∇ϕ̃ and D²ϕ̃ = D²ϕ, we get

0 ≥ βε²/(2(N + 2)) ( (p − 2) ⟨D²ϕ(xε, tε) (x1^{ε,tε−ε²/2} − xε)/ε, (x1^{ε,tε−ε²/2} − xε)/ε⟩ + Δϕ(xε, tε) − (N + p) ϕt(xε, tε) ) + o(ε²).   (8.11)

Suppose that ∇ϕ(x, t) ≠ 0. Dividing by ε² and letting ε → 0, we get

0 ≥ (p − 2) Δ_∞^H ϕ(x, t) + Δϕ(x, t) − (N + p) ϕt(x, t).


Now, if ∇ϕ(x, t) = 0, from (8.11) (again dividing by ε² and letting ε → 0, arguing as in Chapter 3) we obtain

0 ≥ (p − 2) λmin(D²ϕ(x, t)) + Δϕ(x, t) − (N + p) ϕt(x, t).

To verify the other half of the definition of a viscosity solution, we derive a reverse inequality to (8.10) by considering the maximum point of the test function and choose a function ϕ which touches u from above. The rest of the argument is analogous.

Finally, we conclude that also the original sequence converges to a unique viscosity solution. To this end, observe that by the above any sequence {uε} contains a subsequence that converges uniformly to some viscosity solution u. By [34] (see also [45]), viscosity solutions to (8.1) are uniquely determined by their boundary values. Hence we conclude that the whole original sequence converges.

Observe that the above theorem also gives a proof of the existence of viscosity solutions to (8.1) using probabilistic arguments.

8.2 Games for parabolic problems with eigenvalues of the Hessian

Now we turn our attention to an evolution problem in which the involved elliptic operator is given by an eigenvalue of the Hessian (these operators already appeared in Chapter 7). Consider the problem

ut(x, t) − λj(D²u(x, t)) = 0   in Ω × (0, +∞),
u(x, t) = g(x, t)   on ∂Ω × (0, +∞),
u(x, 0) = u0(x)   in Ω.   (8.12)

Here Ω is a bounded domain in ℝN, with N ≥ 1, and λj(D²u) stands for the j-th eigenvalue of D²u = (∂²_{xi xj} u)_{ij}, the Hessian matrix of u. We assume from now on that u0 and g are continuous functions satisfying the compatibility condition u0(x) = g(x, 0) for x ∈ ∂Ω. Problem (8.12) is the evolution version of the elliptic problem

λj(D²z(x)) = 0   in Ω,
z(x) = g(x)   on ∂Ω,   (8.13)

which was studied in the previous chapter; see also [1, 20, 18, 19, 29, 31, 51, 52, 94, 93]. In particular, for j = 1 and j = N, problem (8.13) is the equation for the convex and the concave envelope of g in Ω, respectively, i.e., the solution z is the biggest convex (smallest concave) function u satisfying u ≤ g (u ≥ g) on ∂Ω; see [94, 93]. In our parabolic setting, using classical ideas from [37] one can show that there is also a comparison principle. Hence, uniqueness of a viscosity solution follows.
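Concretely, λj(D²u) is just the j-th smallest eigenvalue of the (symmetric) Hessian matrix, which is straightforward to evaluate numerically. A small illustrative sketch (the example function below is ours, not from the text):

```python
import numpy as np

def lambda_j(hessian, j):
    """j-th smallest eigenvalue (1-indexed, matching the text's lambda_j) of a
    symmetric Hessian; np.linalg.eigvalsh returns eigenvalues in ascending order."""
    return float(np.linalg.eigvalsh(np.asarray(hessian))[j - 1])

# For u(x1, x2) = x1^2 - x2^2 the Hessian is diag(2, -2) at every point, so
# lambda_1(D^2 u) = -2 and lambda_2(D^2 u) = 2: u is a saddle and solves
# neither lambda_1 = 0 nor lambda_2 = 0.
H_saddle = [[2.0, 0.0], [0.0, -2.0]]
```

For j = 1 the equation λ1(D²z) = 0 forces the smallest Hessian eigenvalue to vanish, which is the degenerate-convexity statement behind the convex-envelope interpretation of (8.13).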

In order to show existence of a continuous viscosity solution it seems natural to try to use Perron's method, relying on the comparison principle. However, we prefer to take a different approach: we provide an existence proof using an approximation based on game theory (this approach will be very useful since it allows us to gain some intuition that will be used when dealing with the asymptotic behavior of the solutions).

Here we propose a parabolic version of the game introduced in the previous chapter (see also [29]) in order to show existence of a viscosity solution to (8.12). As for the elliptic problem, it is a two-player zero-sum game. The initial position of the game is determined by a token placed at some point x0 ∈ Ω and at some time t0 > 0. Player I, who wants to minimize the final payoff, chooses a subspace S of dimension j in ℝN, and then Player II, who wants to maximize the final payoff, chooses a unitary vector v ∈ S. Then, for a fixed ε > 0, the position of the token is moved to (x0 + εv, t0 − ε²/2) or to (x0 − εv, t0 − ε²/2) with equal probabilities. After the first round, the game continues from the new position (x1, t1) according to the same rules. Note that we take t1 = t0 − ε²/2, but x1 = x0 ± εv depends on a coin toss. The game ends when the token leaves Ω × (0, T].

A function h is defined outside the domain. For our purposes we choose h to be such that h(x, t) = g(x, t) for x ∈ ∂Ω and t > 0, and h(x, 0) = u0(x) for x ∈ Ω; that is, h is a continuous extension of the boundary data. We denote by (xτ, tτ) the point where the token leaves the domain, that is, either xτ ∉ Ω with tτ > 0, or tτ ≤ 0. At this point the game ends and the final payoff is given by h(xτ, tτ); that is, Player I pays Player II the amount h(xτ, tτ).

For Player I, we denote by SI a strategy, which is a collection of measurable mappings SI = {Sk}_{k=0}^∞, where each mapping has the form

Sk : Ω^{k+1} × (kε²/2, +∞) → Gr(j, ℝN),  (x0, . . . , xk, t0) ↦ S,

where S is a subspace of dimension j. For Player II, a strategy SII is a collection of measurable mappings SII = {Sk}_{k=0}^∞, where each mapping has the form

Sk : Ω^{k+1} × Gr(j, ℝN) × (kε²/2, +∞) → ℝN,  (x0, . . . , xk, S, t0) ↦ v,

where v is a unitary vector in S. Once both players have chosen their strategies, we can define the value of the game for each player as follows:

uεI(x0, t0) = inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)],
uεII(x0, t0) = sup_{SII} inf_{SI} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)].

Observe that the expectations above are well defined since the number of steps of the game is at most ⌈2t0/ε²⌉, and therefore the game ends in a finite number of steps with


probability 1. As usual, we say that the game has a value when uεI(x0, t0) = uεII(x0, t0), and we define the game value as

uε(x0, t0) := uεI(x0, t0) = uεII(x0, t0).

In Section 8.2.2 we prove that the game has a value uε(x, t) that verifies a DPP, and that uε(x, t) converges uniformly in Ω × [0, T] for every T > 0 to a function u(x, t), which is continuous and is the unique viscosity solution of the problem (8.12). This is the content of our next result; see Theorem 8.13.

For the convergence of uε(x, t) we need to assume a condition on the domain that we impose from now on and that reads as follows: for every y ∈ ∂Ω, we assume that there exists r > 0 such that for every δ > 0 there exist T ⊂ ℝN, a subspace of dimension j, w ∈ ℝN of norm 1, λ > 0, and θ > 0 such that

{x ∈ Ω ∩ Br(y) ∩ Tλ : ⟨w, x − y⟩ < θ} ⊂ Bδ(y),

where Tλ = {x ∈ ℝN : d(x − y, T) < λ}. As in Chapter 7, for our game with a given j we will assume that Ω satisfies both (Fj) and (FN−j+1). Note that a uniformly convex domain verifies this condition for every j ∈ {1, . . . , N}, but more general domains also satisfy this hypothesis; see [29].

Theorem 8.13. There is a value function for the game described before, uε. This value function can be characterized as the unique solution to the DPP

uε(x, t) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ uε(x + εv, t − ε²/2) + ½ uε(x − εv, t − ε²/2) }

for x ∈ Ω, t > 0, together with uε(x, t) = h(x, t) for x ∉ Ω or t ≤ 0.

Moreover, if Ω satisfies conditions (Fj) and (FN−j+1), there exists a function u ∈ C(Ω × [0, +∞)) such that

uε → u uniformly in Ω × [0, T],

as ε → 0, for every T > 0. This limit u is the unique viscosity solution to

ut(x, t) − λj(D²u(x, t)) = 0   in Ω × (0, +∞),
u(x, t) = g(x, t)   on ∂Ω × (0, +∞),
u(x, 0) = u0(x)   in Ω.

8.2.1 Preliminaries on viscosity solutions

We begin by stating the usual definition of a viscosity solution to (8.12).

Definition 8.14. A function u : ΩT := Ω × (0, T) → ℝ verifies ut − λj(D²u) = 0 in the viscosity sense if it satisfies:
(1) For every ϕ ∈ C²(ΩT) such that u − ϕ has a strict minimum at the point (x, t) ∈ ΩT with u(x, t) = ϕ(x, t), we have

ϕt(x, t) − λj(D²ϕ(x, t)) ≥ 0.

(2) For every ψ ∈ C²(ΩT) such that u − ψ has a strict maximum at the point (x, t) ∈ ΩT with u(x, t) = ψ(x, t), we have

ψt(x, t) − λj(D²ψ(x, t)) ≤ 0.

Comparison holds for our equation; see Theorem 8.2 from [37]. Let ū be a supersolution, that is, let it verify

ūt(x, t) − λj(D²x ū(x, t)) ≥ 0   in Ω × (0, +∞),
ū(x, t) ≥ g(x, t)   on ∂Ω × (0, +∞),
ū(x, 0) ≥ u0(x)   in Ω,   (8.14)

and let u̲ be a subsolution, that is,

u̲t(x, t) − λj(D²x u̲(x, t)) ≤ 0   in Ω × (0, +∞),
u̲(x, t) ≤ g(x, t)   on ∂Ω × (0, +∞),
u̲(x, 0) ≤ u0(x)   in Ω.   (8.15)

Note that the inequalities ūt − λj(D²x ū) ≥ 0 and u̲t − λj(D²x u̲) ≤ 0 are understood in the viscosity sense (see Definition 8.14), while the other inequalities (that involve boundary/initial data) are understood in a pointwise sense.

Lemma 8.15. Let ū and u̲ verify (8.14) and (8.15), respectively. Then ū(x, t) ≥ u̲(x, t) for every (x, t) ∈ Ω × (0, +∞).

As an immediate consequence of this result, uniqueness of continuous viscosity solutions to our problem (8.12) follows.


8.2.2 Parabolic random walk for λj

Let Ω ⊂ ℝN be a bounded open set and T > 0. As before, we let ΩT = Ω × (0, T). Two values, ε > 0 and j ∈ {1, . . . , N}, are given. The game under consideration is a two-player zero-sum game that is played in the domain ΩT. Initially, a token is placed at some point (x0, t0) ∈ ΩT. Player I chooses a subspace S of dimension j and then Player II chooses one unitary vector, v, in the subspace S. Then the position of the token is moved to (x0 ± εv, t0 − ε²/2) with equal probabilities. After the first round, the game continues from (x1, t1) according to the same rules. This procedure yields a sequence of game states (x0, t0), (x1, t1), . . . , where every xk is a random variable.

The game ends when the token leaves ΩT; at this point the token will be in the parabolic boundary strip of width ε given by

ΓεT = (Γε × [−ε²/2, T]) ∪ (Ω × [−ε²/2, 0]),

where Γε = {x ∈ ℝN \ Ω : dist(x, ∂Ω) ≤ ε}. We denote by (xτ, tτ) ∈ ΓεT the first point in the sequence of game states that lies in ΓεT, so that τ refers to the first time we hit ΓεT. At this time the game ends with the final payoff given by h(xτ, tτ), where h : ΓεT → ℝ is a given continuous function that we call the payoff function. Player I earns −h(xτ, tτ) while Player II earns h(xτ, tτ) (recall that this game is a zero-sum game). For our purposes we choose

h(x, t) = g(x, t) for x ∈ ∂Ω, t > 0;  h(x, t) = u0(x) for x ∈ Ω, t = 0.   (8.16)

A strategy SI for Player I, the player seeking to minimize the final payoff, is a function defined on the partial histories that at every step of the game gives a j-dimensional subspace S,

SI(t0, x0, x1, . . . , xk) = S ∈ Gr(j, ℝN).

A strategy SII for Player II, who seeks to maximize the final payoff, is a function defined on the partial histories that at every step of the game gives a unitary vector in a prescribed j-dimensional subspace S,

SII(t0, x0, x1, . . . , xk, S) = v ∈ S.

When the two players fix their strategies SI and SII we can compute the expected outcome as follows: given the sequence (x0, t0), (x1, t1), . . . , (xk, tk) in ΩT, the next game position is distributed according to the probability

πSI,SII((x0, t0), (x1, t1), . . . , (xk, tk), A) = ½ δ_{(xk + εv, tk − ε²/2)}(A) + ½ δ_{(xk − εv, tk − ε²/2)}(A),

for all A ⊂ ΩT ∪ ΓεT, where v = SII(t0, x0, x1, . . . , xk, SI(t0, x0, x1, . . . , xk)). By using the one-step transition probabilities and Kolmogorov's extension theorem, we can build a probability measure ℙ^{x0,t0}_{SI,SII} on the game sequences. We denote by 𝔼^{x0,t0}_{SI,SII} the corresponding expectation.

The value of the game for Player I is defined as

uεI(x0, t0) = inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)],

while the value of the game for Player II is defined as

uεII(x0, t0) = sup_{SII} inf_{SI} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)].

Intuitively, the values uεI(x0, t0) and uεII(x0, t0) are the best expected outcomes each player can expect when the game starts at (x0, t0). As usual, if these two values coincide, uεI = uεII, we say that the game has a value uε := uεI = uεII.

Let us observe that the game ends after at most a finite number of steps; in fact, we have

τ ≤ ⌈2T/ε²⌉.

Hence, the expected value is well defined. To see that the game has a value, we can consider u, a function that satisfies the DPP associated with this game, given by

u(x, t) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ u(x + εv, t − ε²/2) + ½ u(x − εv, t − ε²/2) }

for (x, t) ∈ ΩT, with u(x, t) = h(x, t) for (x, t) ∉ ΩT. The existence of such a function can be seen by defining the function backwards in time: given h(x, t) we can compute u(x, t) for 0 < t ≤ ε²/2 using the DPP, then continue with u for ε²/2 < t ≤ 2ε²/2, etc.

Now we want to prove that a function that verifies the DPP, u, is in fact the value of the game, that is, we have u = uεI = uεII. We know that uεII ≤ uεI, and to obtain the equality, we will show that u ≤ uεII and uεI ≤ u.
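The backwards-in-time construction described above is easy to carry out in the simplest case N = j = 1 (the sketch below is our own illustration with arbitrary grid choices). There the only 1-dimensional subspace is ℝ itself, and ±v give the same coin-flip average, so the inf and sup disappear and the DPP reduces to a plain two-point average:

```python
import numpy as np

def dpp_value(h, eps=0.1, dx=0.01, T=0.05):
    """Sketch of the existence argument for N = j = 1 on Omega = (0, 1):
    starting from the data h on the strip t <= 0, each new time layer is
    computed from the previous one via
        u(x, t) = (u(x + eps, t - eps^2/2) + u(x - eps, t - eps^2/2)) / 2,
    with u = h outside Omega_T."""
    xs = np.arange(-eps, 1.0 + eps + dx / 2, dx)
    idx = np.where((xs > 0.0) & (xs < 1.0))[0]
    k = int(round(eps / dx))
    u = np.array([h(x, 0.0) for x in xs])
    t, dt = 0.0, eps * eps / 2.0
    while t < T - 1e-12:
        t += dt
        layer = np.array([h(x, t) for x in xs])   # boundary data outside Omega
        layer[idx] = 0.5 * (u[idx + k] + u[idx - k])
        u = layer
    return xs, u
```

Affine data is invariant under the two-point average, so the constructed value reproduces it exactly; this is the discrete counterpart of affine functions solving ut = λ1(D²u) when N = 1.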


Given a function u that verifies the DPP and η > 0, we can consider the strategy SII0 for Player II that at every step almost maximizes

½ u(xk + εv, tk − ε²/2) + ½ u(xk − εv, tk − ε²/2),

that is, SII0(t0, x0, x1, . . . , xk, S) = w ∈ S such that

½ u(xk + εw, tk − ε²/2) + ½ u(xk − εw, tk − ε²/2)
≥ sup_{v∈S, |v|=1} { ½ u(xk + εv, tk − ε²/2) + ½ u(xk − εv, tk − ε²/2) } − η2^{−(k+1)}.

We have

𝔼^{x0,t0}_{SI,SII0}[u(xk+1, tk+1) − η2^{−(k+1)} | x0, . . . , xk]
≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} { ½ u(xk + εv, tk − ε²/2) + ½ u(xk − εv, tk − ε²/2) } − η2^{−(k+1)} − η2^{−(k+1)}
≥ u(xk, tk) − η2^{−k},

where we have estimated the strategy of Player I by the inf and used the fact that u satisfies the DPP. Thus Mk = u(xk, tk) − η2^{−k} is a submartingale. Now we have

uεII(x0, t0) = sup_{SII} inf_{SI} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)]
≥ inf_{SI} 𝔼^{x0,t0}_{SI,SII0}[h(xτ, tτ)]
≥ inf_{SI} 𝔼^{x0,t0}_{SI,SII0}[Mτ]
≥ inf_{SI} 𝔼^{x0,t0}_{SI,SII0}[M0] = u(x0, t0) − η,

where we used the optional stopping theorem for Mk. Since η is arbitrarily small, this proves that uεII ≥ u. Analogously, we can consider a strategy SI0 for Player I to prove that u ≥ uεI. This shows that the game has a value that can be characterized as the solution to the DPP.

Our next aim is to pass to the limit in the values of the game, uε → u, as ε → 0, and to obtain in this limit process a viscosity solution to (8.12). We will use the Arzela–Ascoli type lemma to obtain a convergent subsequence uε → u. We want to prove that:
(1) There exists C > 0 such that |uε(x, t)| < C for every ε > 0 and every (x, t) ∈ Ω × [0, T].
(2) Given η > 0 there are constants r0 and ε0 such that for every ε < ε0 and any x, y ∈ Ω with |x − y| < r0 and for every t, s ∈ [0, T] with |t − s| < r0 we have

|uε(x, t) − uε(y, s)| < η.

Then there exists a uniformly continuous function u : Ω × [0, T] → ℝ and a subsequence still denoted by {uε} such that

uε → u uniformly in Ω × [0, T],

as ε → 0.

So our goal now is to show that the family uε satisfies the hypotheses of the previous lemma. First, let us observe that

min h ≤ uε(x, t) ≤ max h

for every (x, t) ∈ Ω × [0, T]. To prove that uε satisfies the second condition we will have to make some geometric assumptions on the domain. As in Chapter 7 (see also [29]), given y ∈ ∂Ω, we assume that there exists r > 0 such that for every δ > 0 there exist T ⊂ ℝN, a subspace of dimension j, w ∈ ℝN of norm 1, λ > 0, and θ > 0 such that

{x ∈ Ω ∩ Br(y) ∩ Tλ : ⟨w, x − y⟩ < θ} ⊂ Bδ(y),

where Tλ = {x ∈ ℝN : d(x − y, T) < λ}. For our game with a fixed j we will assume that Ω satisfies both (Fj) and (FN−j+1). As we mentioned in the introduction, observe that every strictly convex domain verifies (Fj) for any 1 ≤ j ≤ N.

The key point to obtain the asymptotic equicontinuity required in the second condition in Lemma 3.6 is to obtain the bound for (x, t) ∈ ΩT and (y, s) ∈ ΓεT. For the case (x, t), (y, s) ∈ ΓεT the bound follows from the uniform continuity of h in ΓεT. For the case (x, t), (y, s) ∈ ΩT we argue as follows. We fix the strategies SI, SII for the game starting at (x, t). We define a virtual game starting at (y, s) using the same random steps as the game starting at (x, t). Furthermore, the players adopt their strategies SI, SII from


the game starting at (x, t); that is, when the game position is (yk, sk) a player makes the choices that he would have taken at (xk, tk) while playing the game starting at (x, t). We proceed in this way until, for the first time, one of the positions leaves the parabolic domain, that is, until (xk, tk) ∈ ΓεT or (yk, sk) ∈ ΓεT. At that point we have |(xk, tk) − (yk, sk)| = |(x, t) − (y, s)|, and the desired estimate follows from the one for xk, yk ∈ Γε (in the case that both positions leave the domain at the same turn k) or xk ∈ Ω, yk ∈ Γε (if only one has left the domain).

Thus, we can concentrate on the case (x, t) ∈ ΩT and (y, s) ∈ ΓεT. We can assume that (y, s) ∈ ∂P ΩT. If we have the bound for those points, we can obtain a bound for a point (y, s) ∈ ΓεT just by considering (z, u) ∈ ∂P ΩT close to (x, t) and (y, s): if s < 0, we can consider the point (x, 0), and for y ∉ Ω we can consider (z, t), with z ∈ ∂Ω a point in the line segment that joins x and y.

Hence, we have to handle two cases. In the first one we have to prove that |uε(x, t) − uε(x, 0)| < η for x ∈ Ω and 0 < t < r0. In the second one we have to prove that |uε(x, t) − uε(y, t)| < η for x ∈ Ω, y ∈ ∂Ω such that |x − y| < r0 and 0 < t ≤ T.

In the first case we have uε(x, 0) = u0(x), and we have to show that the game starting at (x, t) will not end too far away from (x, 0). We have −ε²/2 < tτ < t, so we have to obtain a bound for |x − xτ|. To this end we consider Mk = |xk − x|² − ε²k. We have

𝔼^{x,t}_{SI,SII}[|xk+1 − x|² − ε²(k + 1) | x, x1, . . . , xk]
= ( |xk + εvk − x|² + |xk − εvk − x|² )/2 − ε²(k + 1)
= |xk − x|² + ε²|vk|² − ε²(k + 1)
= Mk.   (8.17)

Hence, Mk is a martingale. By applying the optional stopping theorem, we obtain

𝔼^{x,t}_{SI,SII}[|xτ − x|²] = ε² 𝔼^{x,t}_{SI,SII}[τ] ≤ ε² ⌈2t/ε²⌉ ≤ ε² ⌈2r0/ε²⌉ ≤ ε² + 2r0 ≤ ε0² + 2r0.

Hence, using

𝔼^{x,t}_{SI,SII}[|xτ − x|²] ≥ ℙ(|(xτ, tτ) − (x, 0)| ≥ δ) δ²,

we obtain

ℙ(|(xτ, tτ) − (x, 0)| ≥ δ) ≤ (ε0² + 2r0)/δ².   (8.18)
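The identity 𝔼|xτ − x|² = ε²𝔼[τ] behind (8.17)–(8.18) holds for any choice of directions, so it can be checked by simulating the game with, say, random direction choices. An illustrative sketch (our own: j = 1, N = 2, token started at the center of the unit disk, all parameters arbitrary):

```python
import math, random

def run_game(rng, eps=0.1, t0=0.5):
    """One play started at the origin of the unit disk: some rule picks the
    direction each round (here: uniformly at random, standing in for any pair
    of strategies); a fair coin picks the sign; the clock decreases by
    eps^2/2 per round."""
    x = y = 0.0
    t, steps = t0, 0
    while t > 0.0 and x * x + y * y < 1.0:
        a = 2 * math.pi * rng.random()
        s = eps if rng.random() < 0.5 else -eps
        x += s * math.cos(a)
        y += s * math.sin(a)
        t -= eps * eps / 2
        steps += 1
    return x * x + y * y, steps

rng = random.Random(3)
samples = [run_game(rng) for _ in range(2000)]
mean_sq = sum(d for d, _ in samples) / len(samples)   # estimates E|x_tau - x|^2
mean_tau = sum(s for _, s in samples) / len(samples)  # estimates E[tau]
```

Up to Monte Carlo error, mean_sq agrees with ε² · mean_tau, and no run exceeds the deterministic bound τ ≤ ⌈2t0/ε²⌉ = 100.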

With this bound, we can obtain the desired result as follows:

|uε(x, t) − h(x, 0)| ≤ ℙ(|(xτ, tτ) − (x, 0)| < δ) sup_{(xτ,tτ)∈Bδ(x,0)} |h(xτ, tτ) − h(x, 0)| + ℙ(|(xτ, tτ) − (x, 0)| ≥ δ) 2 max |h|
≤ sup_{(xτ,tτ)∈Bδ(x,0)} |h(xτ, tτ) − h(x, 0)| + ((ε0² + 2r0)/δ²) 2 max |h|,   (8.19)

which is smaller than η if we first choose δ and then r0 and ε0 small enough.

In the second case we use the geometric assumption on the domain, beginning with a simplified version in which the restriction to Tλ plays no role: for y ∈ ∂Ω there exists r > 0 such that for every δ > 0 there exist w ∈ ℝN of norm 1 and θ > 0 such that

{x ∈ Ω ∩ Br(y) : ⟨w, x − y⟩ < θ} ⊂ Bδ(y).   (8.20)

Let us observe that, for any possible choice of the direction v at every step, the projection of the position of the game, xn, in the direction of a fixed unitary vector w, that is, ⟨xn − y, w⟩, is a martingale. We fix r > 0 and consider τ̃, the first time x leaves Ω or Br(y). Hence

𝔼⟨xτ̃ − y, w⟩ ≤ ⟨x − y, w⟩ ≤ d(x, y) < r0.   (8.21)

We consider the vector w given by the geometric assumption on Ω; we have ⟨xn − y, w⟩ ≥ −ε. Therefore, (8.21) implies

ℙ(⟨xτ̃ − y, w⟩ > r0^{1/2}) r0^{1/2} − (1 − ℙ(⟨xτ̃ − y, w⟩ > r0^{1/2})) ε < r0.

Hence, we have (for every ε small enough)

ℙ(⟨xτ̃ − y, w⟩ > r0^{1/2}) < 2 r0^{1/2}.

Then (8.20) implies that, given δ > 0, we can conclude that ℙ(d(xτ̃, y) > δ) < 2r0^{1/2} by taking r0 small enough and choosing an appropriate w.


Hence, d(xτ̃, y) ≤ δ with probability close to one, and in this case the point xτ̃ is actually the point where the process has left Ω, that is, τ̃ = τ. When d(xτ, y) ≤ δ, by the same martingale argument used in (8.18), we obtain

𝔼[t − tτ] = 𝔼[ε²τ/2] = 𝔼[|xτ − x|²]/2 ≤ δ²/2.

Hence,

ℙ(t − tτ > δ) ≤ δ/2,

and the bound follows as in (8.19).

In the general case, for any value of j, we can proceed in the same way. In order to be able to use condition (Fj), we have to argue that the points xn involved in our argument belong to Tλ. For r0 < λ we have x ∈ Tλ, so if we ensure that at every move v ∈ T, the game sequence will be contained in x + T ⊂ Tλ. Recall that here we are assuming that both (Fj) and (FN−j+1) are satisfied. We can separate the argument into two parts: we will prove on the one hand that uε(x, t) − g(y, s) < η and on the other that g(y, s) − uε(x, t) < η. For the first inequality we can make choices for the strategy of Player I, and for the second one we can do the same for strategies of Player II. Since Ω satisfies condition (Fj), Player I can make sure that at every move the vector v belongs to T by selecting S = T. This proves the upper bound uε(x, t) − g(y, s) < η. On the other hand, since Ω satisfies (FN−j+1), Player II will be able to select v in a space S of dimension j, and hence he can always choose v ∈ S ∩ T since dim(T) + dim(S) = N − j + 1 + j = N + 1 > N. This shows the lower bound g(y, s) − uε(x, t) < η.

We have shown that the hypotheses of the Arzela–Ascoli type lemma, Lemma 3.6, are satisfied. Hence we have obtained uniform convergence of a subsequence of uε.

Lemma 8.16. Let Ω be a bounded domain in ℝN satisfying conditions (Fj) and (FN−j+1). Then there exists a subsequence of uε that converges uniformly; that is,

uεj → u, as εj → 0,

uniformly in Ω × [0, T], where u is a uniformly continuous function.

Now let us prove that any possible uniform limit of uε is a viscosity solution to the limit PDE problem. This result shows existence of a viscosity solution, continuous up to the boundary, defined in Ω × [0, T] for every T > 0. Uniqueness of this viscosity solution follows from the comparison principle stated in Lemma 8.15.

Theorem 8.17. Let u be a uniform limit of the values of the game uε. Then u is a viscosity solution to (8.12).

Proof. First, we observe that since uε = g on ∂Ω × (0, T) and uε(x, 0) = u0(x) for x ∈ Ω, we obtain, from the uniform convergence, that u = g on ∂Ω × (0, T) and u(x, 0) = u0(x) for x ∈ Ω. Also, note that the limit function is continuous.

To check that u is a viscosity supersolution to ut − λj(D²u) = 0 in ΩT, let ϕ ∈ C²(ΩT) be such that u − ϕ has a strict minimum at the point (x, t) ∈ ΩT with u(x, t) = ϕ(x, t). We need to check that

ϕt(x, t) − λj(D²ϕ(x, t)) ≥ 0.

As uε → u uniformly in Ω × [0, T], there exist two sequences xε, tε such that xε → x, tε → t as ε → 0 and

uε(z, s) − ϕ(z, s) ≥ uε(xε, tε) − ϕ(xε, tε) − ε³

(we remark that uε is not continuous in general). Since uε is a solution to

uε(x, t) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { (1/2) uε(x + εv, t − ε²/2) + (1/2) uε(x − εv, t − ε²/2) },

we obtain that ϕ verifies

ϕ(xε, tε) − ϕ(xε, tε − ε²/2) ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} { (1/2) ϕ(xε + εv, tε − ε²/2) + (1/2) ϕ(xε − εv, tε − ε²/2) − ϕ(xε, tε − ε²/2) } − ε³.    (8.22)

Now, consider the second-order Taylor expansion of ϕ (to simplify the notation we omit the dependence on t here),

ϕ(y) = ϕ(x) + ∇ϕ(x) ⋅ (y − x) + (1/2) ⟨D²ϕ(x)(y − x), (y − x)⟩ + o(|y − x|²)   as |y − x| → 0.

Hence, we have

ϕ(x + εv) = ϕ(x) + ε ∇ϕ(x) ⋅ v + (ε²/2) ⟨D²ϕ(x)v, v⟩ + o(ε²)

and

ϕ(x − εv) = ϕ(x) − ε ∇ϕ(x) ⋅ v + (ε²/2) ⟨D²ϕ(x)v, v⟩ + o(ε²).

Using these expansions we get

(1/2) ϕ(xε + εv) + (1/2) ϕ(xε − εv) − ϕ(xε) = (ε²/2) ⟨D²ϕ(xε)v, v⟩ + o(ε²).
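The cancellation of the first-order terms in these expansions is what makes the averaging in the DPP consistent with the second-order operator. A quick numerical check (not from the book; the test function, the point, and the direction are illustrative choices):

```python
import numpy as np

# A smooth test function and its exact Hessian.
def phi(x):
    return np.sin(x[0]) + x[0] * x[1] + x[1] ** 3

def hessian(x):
    return np.array([[-np.sin(x[0]), 1.0],
                     [1.0, 6.0 * x[1]]])

x = np.array([0.3, -0.7])
v = np.array([3.0, 4.0]) / 5.0   # a unit vector

for eps in [1e-1, 1e-2, 1e-3]:
    # left-hand side: the averaged second difference from the DPP
    avg = 0.5 * phi(x + eps * v) + 0.5 * phi(x - eps * v) - phi(x)
    # right-hand side: (eps^2 / 2) <D^2 phi(x) v, v>
    approx = 0.5 * eps ** 2 * v @ hessian(x) @ v
    # the discrepancy is o(eps^2); odd-order terms cancel, so it is O(eps^4)
    print(eps, abs(avg - approx))
```

The printed residuals shrink much faster than ε², confirming that the first-order (gradient) contributions cancel between the two symmetric evaluations.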


Plugging this into (8.22) and dividing by ε²/2, we obtain

[ϕ(xε, tε) − ϕ(xε, tε − ε²/2)] / (ε²/2) ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} { ⟨D²ϕ(xε, tε − ε²/2)v, v⟩ + 2 o(ε²)/ε² } − 2ε.

Therefore, passing to the limit as ε → 0 in (8.22), we conclude that

ϕt(x, t) ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨D²ϕ(x, t)v, v⟩,

which is equivalent to ϕt(x, t) ≥ λj(D²ϕ(x, t)), as we wanted to prove. When we consider a smooth function ψ that touches u from above, we can obtain the reverse inequality in a similar way.

Remark 8.18. From the uniqueness of viscosity solutions to the limit problem (recall that a comparison principle holds), we obtain the convergence of the whole family uε, that is, uε → u uniformly as ε → 0 (not only along subsequences). Hence, we have completed the proof of Theorem 8.13.

8.3 Asymptotic behavior as t → ∞

In this section we restrict our attention to the case where the boundary condition does not depend on time, that is,

{ ut(x, t) − λj(D²u(x, t)) = 0   in Ω × (0, +∞),
{ u(x, t) = g(x)                 on ∂Ω × (0, +∞),    (8.23)
{ u(x, 0) = u0(x)                in Ω,

where u0 is a continuous function defined on Ω and g = u0|∂Ω. We want to study the asymptotic behavior as t → ∞ of the solution to this parabolic equation. We deal with the problem with two different techniques: on the one hand we use pure PDE methods (comparison arguments), and on the other hand we use our game-theoretical approach. In this way the reader may appreciate the close interplay between analysis and probability.

First, using PDE techniques (a comparison argument with super- and subsolutions constructed using an associated eigenvalue problem), we show that u(x, t) converges exponentially fast to the stationary solution. In the special case of j = 1 (or j = N) this result provides us with an approximation of the convex envelope (or the concave envelope) of a boundary datum by solutions to a parabolic problem.

Theorem 8.19. Let Ω ⊂ ℝ^N be an open bounded domain, and let u0 be a continuous function defined on Ω and g = u0|∂Ω. Then there exist two positive constants, μ > 0 (which depends on Ω) and C (depending on the initial condition u0), such that the unique viscosity solution u of (8.23) verifies

‖u(⋅, t) − z(⋅)‖∞ ≤ C e^{−μt},

where z is the unique viscosity solution of (8.13).

In addition, we also describe an interesting behavior of the solutions. Let us present our ideas in the simplest case and consider the special case j = 1 with g ≡ 0, that is, we deal with the problem

{ ut(x, t) − λ1(D²u(x, t)) = 0   in Ω × (0, +∞),
{ u(x, t) = 0                    on ∂Ω × (0, +∞),
{ u(x, 0) = u0(x)                in Ω.

Note that in this case z ≡ 0 and from Theorem 8.19 we have u(x, t) → 0 exponentially fast,

−C e^{−μt} ≤ u(x, t) ≤ C e^{−μt}.

In this scenario we can improve the upper bound. We show that there exists a finite time T > 0, depending only on Ω, such that the solution satisfies u(x, t) ≤ 0 for any t > T. This is a consequence of the fact that the eigenvalue problem

−λ1(D²φ(x)) = μφ(x)   in Ω    (8.24)

admits a positive solution for any μ > 0 whenever Ω is bounded. In particular, this result says that, for g ≡ 0 and j = 1, there exists T > 0 such that the solution of (8.12) is below the convex envelope of g in Ω for any time beyond T. In fact, the same situation occurs when g is an affine function (we just apply the same argument to ũ = u − g). When we consider j = N and an affine function g, we have the analogous behavior, i.e., there exists T > 0 such that the solution of (8.12) is above the concave envelope of g in Ω for any time beyond T. However, the situation is different when one considers 1 < j < N. In this case, equation (8.24) (with λj in place of λ1) admits a positive and a negative solution for any μ > 0, and therefore it is possible to prove the existence of T > 0, depending only on Ω, such that u(x, t) = z(x) for any t > T, where z is the solution of (8.13). We sum up these results in the following theorem.

Theorem 8.20. Let Ω ⊂ ℝ^N be an open bounded domain. Let g be the restriction of an affine function to ∂Ω and u0 a continuous function in Ω. If u(x, t) is the viscosity solution of (8.23) and z(x) is the affine function (which turns out to be the viscosity solution of (8.13)), then there exists T > 0, depending only on Ω, such that:
(1) If j = 1, then u(x, t) ≤ z(x) for any t > T.
(2) If j = N, then u(x, t) ≥ z(x) for any t > T.
(3) If 1 < j < N, then u(x, t) = z(x) for any t > T.

Note that (3) says that for 1 < j < N and an affine boundary datum we have convergence to the stationary solution in finite time. Although this result implies that in some situations the exponential decay given in Theorem 8.19 is not sharp, we will also describe some other situations (with boundary data that are not affine functions) for which the solution u(x, t) does not fall below or above the convex or concave envelope in finite time.

In this final part of the section we also look at the asymptotic behavior of the values of the game described above and show that there exist μ > 0, a constant depending only on Ω, and C independent of ε such that

‖uε(⋅, t) − z^ε(⋅)‖∞ ≤ C e^{−μt},

with uε being the value function for the game and z^ε being a stationary solution to the DPP. Note that from here we can provide a different proof (using games) of Theorem 8.19. We also provide a new proof of Theorem 8.20 using game-theoretical arguments. With these techniques we can obtain a similar result in the case that g coincides with an affine function in a half-space.

Moreover, thanks to the game approach, we can show a more bizarre behavior in a simple configuration of the data. Consider the equation ut = λj(D²u). Let Ω be a ball centered at the origin, Ω = BR ⊂ ℝ^N, and write (x′, x″) ∈ ℝ^j × ℝ^{N−j}.
Assume that the boundary datum is given by two affine functions (for example, take g(x′, x″) = |x″| for (x′, x″) ∈ ℝ^N \ Ω) and that the initial condition is strictly positive inside Ω, u0 > 0. For this choice of g, the stationary solution satisfies z(x′, x″) = 0 in Ω ∩ {x″ = 0}. In this configuration of the data, for every point x0 in the segment {x″ = 0} ∩ Ω, we have u(x0, t) > 0 = z(x0) for every t > 0. However, for any point x0 outside the segment {x″ = 0} ∩ Ω, there exists a finite time t0 (depending on x0) such that u(x0, t) = z(x0) for every t > t0. That is, the solution to the evolution problem eventually coincides with the stationary solution outside the segment {x″ = 0} ∩ Ω, but this does not happen on the segment.

8.3.1 PDE arguments

We will use the eigenvalue problem associated with −λN(D²u). For every strictly convex domain there is a positive eigenvalue μ1, with an eigenfunction ψ1 that is positive inside Ω and continuous up to the boundary with ψ1|∂Ω = 0, such that

{ −λN(D²ψ1) = μ1 ψ1   in Ω,
{ ψ1 = 0               on ∂Ω.    (8.25)

This eigenvalue problem was studied in [19]. Note that φ1 = −ψ1 is a negative solution to

{ −λ1(D²φ1) = μ1 φ1   in Ω,
{ φ1 = 0               on ∂Ω.    (8.26)

We will use the following lemma.

Lemma 8.21. For any two symmetric matrices A, B, we have

λ1(A) + λj(B) ≤ λj(A + B) ≤ λN(A) + λj(B).

Proof. Given a subspace S of dimension j, we have

sup_{v∈S, |v|=1} ⟨Bv, v⟩ + inf_{|v|=1} ⟨Av, v⟩ ≤ sup_{v∈S, |v|=1} ⟨(A + B)v, v⟩ ≤ sup_{v∈S, |v|=1} ⟨Bv, v⟩ + sup_{|v|=1} ⟨Av, v⟩.

Hence, the second inequality follows from

λj(A + B) = inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨(A + B)v, v⟩ ≤ inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨Bv, v⟩ + sup_{|v|=1} ⟨Av, v⟩ = λN(A) + λj(B),

and the first one from

λj(A + B) = inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨(A + B)v, v⟩ ≥ inf_{dim(S)=j} sup_{v∈S, |v|=1} ⟨Bv, v⟩ + inf_{|v|=1} ⟨Av, v⟩ = λ1(A) + λj(B).

This ends the proof.
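By the Courant–Fischer min–max theorem, λj(A) as defined here is the j-th smallest eigenvalue of the symmetric matrix A, so Lemma 8.21 is a Weyl-type inequality that can be verified numerically. A short sketch (not from the book), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

for _ in range(100):
    # random symmetric matrices
    A = rng.standard_normal((N, N)); A = (A + A.T) / 2
    B = rng.standard_normal((N, N)); B = (B + B.T) / 2
    # eigvalsh returns eigenvalues in ascending order:
    # la[0] = lambda_1(A), la[-1] = lambda_N(A)
    la = np.linalg.eigvalsh(A)
    lb = np.linalg.eigvalsh(B)
    lab = np.linalg.eigvalsh(A + B)
    for j in range(N):  # 0-based index j corresponds to lambda_{j+1}
        assert la[0] + lb[j] <= lab[j] + 1e-10   # lambda_1(A) + lambda_j(B) <= lambda_j(A+B)
        assert lab[j] <= la[-1] + lb[j] + 1e-10  # lambda_j(A+B) <= lambda_N(A) + lambda_j(B)

print("Weyl-type inequalities verified on 100 random matrix pairs")
```

The small tolerance only absorbs floating-point error; in exact arithmetic both inequalities hold with no slack needed.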

Theorem 8.22. Let u0 be continuous with u0|∂Ω = g. Let ψR and φR be the eigenfunctions associated with μR, the first eigenvalue for (8.25) and (8.26) in a large strictly convex domain ΩR such that Ω ⊂⊂ ΩR. Then there exist two positive constants C1, C2, depending on the initial condition u0, such that

z(x) + C1 e^{−μR t} φR(x) ≤ u(x, t) ≤ z(x) + C2 e^{−μR t} ψR(x).    (8.27)

Proof. We just observe that u̲(x, t) = z(x) + C1 e^{−μR t} φR(x) with C1 large enough is a subsolution to our evolution problem in Ω. In fact, we have

u̲t(x, t) = −μR C1 e^{−μR t} φR(x)

and, by Lemma 8.21,

λ1(D²u̲(x, t)) = λ1(D²z(x) + C1 e^{−μR t} D²φR(x)) ≥ λ1(D²z(x)) + C1 e^{−μR t} λ1(D²φR(x)) = −μR C1 e^{−μR t} φR(x).

An analogous computation shows that u̅(x, t) = z(x) + C2 e^{−μR t} ψR(x) is a supersolution. In addition, we have

u̲(x, t) ≤ g(x) ≤ u̅(x, t),   x ∈ ∂Ω, t > 0,

and for C1, C2 large enough (depending on u0)

u̲(x, 0) = z(x) + C1 φR(x) ≤ u0(x) ≤ u̅(x, 0) = z(x) + C2 ψR(x),   x ∈ Ω.

Note that here we are using the fact that φR and ψR are strictly negative and strictly positive, respectively, inside ΩR. Finally, we apply the comparison principle in Ω to obtain the desired conclusion,

z(x) + C1 e^{−μR t} φR(x) ≤ u(x, t) ≤ z(x) + C2 e^{−μR t} ψR(x).

As an immediate consequence of this result we obtain that solutions to our evolution problem converge uniformly to the convex envelope of the boundary condition. This proves Theorem 8.19.

Note that in the previous result μ is the first eigenvalue for −λN(D²u) in the larger domain ΩR. Now, our aim is to obtain a sharper bound (involving μ1, the first eigenvalue in Ω). To this aim we have to assume that u0 is C¹(Ω) with u0|∂Ω = g and that the solution z of (8.13) is C¹(Ω). This regularity of the solution of (8.13) up to the boundary is not included in [94] (there, only interior regularity for the convex envelope is shown). Under these hypotheses on u0 and z, the difference u0 − z is C¹(Ω) and vanishes on ∂Ω. Note that we do not know if there is a regularizing effect for our evolution problem. That is, we do not know if for a smooth boundary datum and a continuous initial condition the solution is smooth in Ω for any positive time t (as happens with solutions to the heat equation).

As a previous step in our arguments, we need to show that the eigenfunctions have a "negative normal derivative." Note that the existence of such eigenfunctions is proved in [18] for strictly convex domains. Although this hypothesis is sufficient but not necessary (see [19] for the construction of eigenfunctions in rectangles), we shall assume it here, since the optimal hypotheses for the existence of eigenfunctions are unknown (as far as we know). In the next two results we need to assume that the domain Ω has some extra regularity (it has an interior tangent ball at every boundary point).

Lemma 8.23. Assume that Ω is strictly convex and has an interior tangent ball at every point of its boundary. Let φ1 and ψ1 be the eigenfunctions associated with μ1, the first eigenvalue for (8.25) and (8.26) in Ω. Assume that they are normalized with ‖ψ1‖∞ = ‖φ1‖∞ = 1. Then there exists C > 0 such that

ψ1(x) ≥ C dist(x, ∂Ω)   and   φ1(x) ≤ −C dist(x, ∂Ω),

for x ∈ Ω.

Proof. Take x0 ∈ ∂Ω. Let Br(y) be a ball inside Ω, tangent to ∂Ω at x0. In B_{r/2}(y) the eigenfunction ψ1 is strictly positive, and then we obtain that there exists a constant c such that

μ1 ψ1(x) ≥ c,   x ∈ B_{r/2}(y).

Now we take a(x), the solution to

{ −λN(D²a(x)) = c χ_{B_{r/2}(y)}(x)   in Br(y),
{ a(x) = 0                            on ∂Br(y).    (8.28)

This function a is radial, a(x) = a(|x − y|), and can be explicitly computed. In fact,

a(x) = { c1 (r − |x − y|)       in Br(y) \ B_{r/2}(y),
       { c2 − (c/2) |x − y|²    in B_{r/2}(y),

with c1, c2 such that c1 = cr/2 (continuity of the derivative at r/2) and c1 r/2 = c2 − (c/2)(r/2)² (continuity of the function at r/2). To conclude, we use the comparison argument for (8.28) to obtain

a(x) ≤ ψ1(x),   x ∈ Br(y).

This implies that ψ1(x) ≥ C dist(x, ∂Ω). A similar argument shows that φ1(x) ≤ −C dist(x, ∂Ω).
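The matching conditions for c1 and c2 can be sanity-checked numerically. A small sketch (not from the book; the values c = 2 and r = 1 are illustrative):

```python
import numpy as np

c, r = 2.0, 1.0
c1 = c * r / 2                            # matches a' at rho = r/2
c2 = c1 * r / 2 + (c / 2) * (r / 2) ** 2  # matches a itself at rho = r/2

def a(rho):
    # radial profile of the barrier from the proof of Lemma 8.23
    return c1 * (r - rho) if rho >= r / 2 else c2 - (c / 2) * rho ** 2

h, mid = 1e-6, r / 2
# continuity at rho = r/2
assert abs(a(mid + h) - a(mid - h)) < 1e-5
# C^1 matching at rho = r/2 (one-sided difference quotients agree)
outer_slope = (a(mid + 2 * h) - a(mid + h)) / h
inner_slope = (a(mid - h) - a(mid - 2 * h)) / h
assert abs(outer_slope - inner_slope) < 1e-4
# boundary condition on the outer sphere and strict positivity inside
assert abs(a(r)) < 1e-12
assert all(a(rho) > 0 for rho in np.linspace(0.01, r - 0.01, 50))
print("barrier a is continuous, C^1 at r/2, vanishes at r, and is positive inside")
```

This confirms that the piecewise formula glues into a C¹ barrier, which is what the comparison argument needs.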

Theorem 8.24. Assume that Ω is strictly convex and has an interior tangent ball at every point of its boundary. Let g be such that the solution z of (8.13) is C¹(Ω), let u0 be C¹(Ω) with u0|∂Ω = g, and let μ1 be the first eigenvalue for (8.25) and (8.26) in Ω. Then there exist two positive constants (depending on the initial condition u0) such that

z(x) + C1 e^{−μ1 t} φ1(x) ≤ u(x, t) ≤ z(x) + C2 e^{−μ1 t} ψ1(x).

Proof. We just observe that the arguments used in the proof of Theorem 8.22 also work here, since we can find two constants C1 and C2 such that

z(x) + C1 φ1(x) ≤ u0(x) ≤ z(x) + C2 ψ1(x),   x ∈ Ω.    (8.29)

Here we are using that u0 − z is C¹(Ω) with (u0 − z)|∂Ω = 0 to obtain that there is a constant C such that

−C dist(x, ∂Ω) ≤ (u0 − z)(x) ≤ C dist(x, ∂Ω),

and we observe that (8.29) then follows from Lemma 8.23.

We next give the proof of Theorem 8.20, which is a refined description of the asymptotic behavior of the solution to (8.12) when the boundary datum g comes from the restriction of an affine function to ∂Ω. For instance, in the case j = 1 it shows that there exists a finite time T > 0 beyond which the upper estimate in (8.27) can be reduced to z(x), the λj-envelope of g inside Ω.

Proof of Theorem 8.20. We assume that there is an affine function (a plane if we are in the case N = 2) π such that g = π|∂Ω. In this case the λj-envelope z of g inside Ω is given by z(x) = π(x). Hence, let us consider

û(x, t) = u(x, t) − z(x) = u(x, t) − π(x).

This function û is the viscosity solution to

{ ût(x, t) − λj(D²û)(x, t) = 0   in Ω × (0, +∞),
{ û(x, t) = 0                    on ∂Ω × (0, +∞),    (8.30)
{ û(x, 0) = u0(x) − z(x)         in Ω.

For 1 ≤ j ≤ N − 1, we consider a large ball BR with Ω ⊂ BR. Inside this ball we take

w(x, t) = e^{μR²/2} e^{−μt} e^{−μr²/2},   r = |x|.

For large μ, this function w verifies

{ wt − λ1(D²w) = −μ e^{μR²/2} e^{−μt} e^{−μr²/2} + μ e^{μR²/2} e^{−μt} e^{−μr²/2} = 0   in Ω × (0, +∞),
{ w > 0                                                                                 on ∂Ω × (0, +∞),
{ w(x, 0) = e^{μ(R² − r²)/2} ≥ u0(x) − z(x)                                             in Ω.

Hence, w is a supersolution to (8.30), and then, by the comparison principle, we get

û(x, t) ≤ w(x, t) = e^{μR²/2} e^{−μt} e^{−μr²/2}

for every μ large enough. Then for every t > T = R²/2 we get

û(x, t) ≤ lim_{μ→∞} e^{μR²/2} e^{−μt} e^{−μr²/2} = 0.
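For the reader's convenience, here is the short computation (not spelled out in the book) behind the first equation of the system above. Up to the constant factor, write w(x, t) = e^{−μt} v(x) with v(x) = e^{−μ|x|²/2}. Then

```latex
\[
Dv(x) = -\mu x\, v(x), \qquad
D^2 v(x) = \left(\mu^2 x x^{T} - \mu I\right) v(x).
\]
% The eigenvalues of D^2 v(x) are (\mu^2 |x|^2 - \mu)\, v(x) in the radial
% direction x/|x|, and -\mu\, v(x) in the N-1 tangential directions.
% Since \mu^2 |x|^2 \ge 0, the smallest eigenvalue is -\mu\, v(x), so
\[
\lambda_1\!\left(D^2 w(x,t)\right) = -\mu\, w(x,t), \qquad
w_t - \lambda_1\!\left(D^2 w\right) = -\mu w + \mu w = 0 .
\]
```

Note that this uses N ≥ 2 (so that tangential directions exist), which is consistent with the restriction 1 ≤ j ≤ N − 1.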

Hence, we have shown that, when the boundary condition is the restriction of an affine function to the boundary, there exists a finite time T such that the solution to the evolution problem lies below the stationary solution z, regardless of the initial condition u0; that is, we have u(x, t) ≤ z(x) for every x ∈ Ω and every t > T, for 1 ≤ j ≤ N − 1. For 2 ≤ j ≤ N the same argument proves that there exists a finite time T such that the solution to the evolution problem lies above the stationary solution. Hence, for 2 ≤ j ≤ N − 1, there exists a finite time T such that the solution to the evolution problem coincides with the stationary solution. This proves Theorem 8.20.

Observe that for j = 1, u(x, t) = e^{−μ1 t} φ1(x) is a solution to the problem that does not become zero in finite time. The same holds for u(x, t) = e^{−μ1 t} ψ1(x) for j = N. Our next result shows that, in general, we cannot expect all solutions to lie below z in the whole of Ω in finite time.

Theorem 8.25. Let Ω be an open bounded domain in ℝ^N, and let 1 ≤ j ≤ N. For any x0 ∈ Ω, there exist g and u0, continuous in ∂Ω and Ω, respectively, with u0|∂Ω = g, such that the solution of problem (8.12) satisfies

u(x0, t) ≥ z(x0) + k e^{−μ1 t}   for all t > 0,

where μ1, k > 0 are two constants and z is the solution of (8.13). We can obtain the analogous result for the inequality u(x0, t) ≤ z(x0) − k e^{−μ1 t}.


Proof. Consider, without loss of generality, that x0 ∈ Ω is the origin. Take r > 0 small enough such that the ball Br of radius r centered at the origin satisfies Br ⊂⊂ Ω. In the rest of the proof we will write ℝ^N = ℝ^j × ℝ^{N−j} and denote any point in ℝ^N as x = (x′, x″) ∈ ℝ^j × ℝ^{N−j}. Consider B^j_r = Br ∩ {x″ = 0}. We observe that B^j_r is a j-dimensional ball. Therefore, as proved in [19], there exists a positive eigenvalue μ1, with an eigenfunction ψ1 which is continuous up to the boundary, such that

{ −λj(D²ψ1) = μ1 ψ1   in B^j_r,
{ ψ1 = 0               on ∂B^j_r,
{ ψ1 > 0               in B^j_r.

Consider g a nonnegative continuous function defined on ∂Ω such that

g(x′, x″) ≥ ψ1(x′)   for all (x′, x″) ∈ ∂Ω with x′ ∈ B^j_r,    (8.31)

and

g(x′, 0) = 0   for all (x′, 0) ∈ ∂Ω ∩ {x″ = 0}.    (8.32)

We note that this choice of g is always possible since, if x′ ∈ B^j_r, then (x′, 0) ∈ Br, and since we have considered Br ⊂⊂ Ω, we deduce (x′, 0) ∉ ∂Ω. For this choice of g, we claim that the solution of problem (8.13) satisfies

z(x′, x″) = 0   in Ω ∩ {x″ = 0}.

In order to prove this claim, we use the geometric interpretation of solutions to problem (8.13) given in [29]. Consider the j-dimensional subspace {x″ = 0} and the j-dimensional domain D := Ω ∩ {x″ = 0}. Following the ideas of [29], the solution z of (8.13) must satisfy

z ≤ zD   in D,

where zD is the concave envelope of g in D = Ω ∩ {x″ = 0}. By the choice of g, using (8.32), it follows that zD ≡ 0. The claim then follows from the maximum principle, since g ≥ 0 on ∂Ω. In particular, we have z(0) = 0. Now take u0, a nonnegative continuous function in Ω, satisfying u0|∂Ω = g and

u0(x′, x″) ≥ ψ1(x′)   for all (x′, x″) ∈ Ω with x′ ∈ B^j_r.    (8.33)

Consider the following function u̲ defined in the subdomain

𝒬 := (Ω ∩ {x′ ∈ B^j_r}) × [0, +∞)

that is given by u̲(x′, x″, t) := ψ1(x′) e^{−μ1 t}. We have

u̲t(x′, x″, t) = −μ1 ψ1(x′) e^{−μ1 t},
λj(D²u̲(x′, x″, t)) = −μ1 ψ1(x′) e^{−μ1 t},
u̲(x′, x″, t) ≤ ψ1(x′),

in 𝒬. By (8.31) and (8.33), together with the comparison principle, we get

u̲(x′, x″, t) ≤ u(x′, x″, t)   in 𝒬,

and since z(0) = 0, we have

u(0, t) ≥ z(0) + ψ1(0) e^{−μ1 t}   for all t > 0.

8.3.2 Probabilistic arguments

Here we will argue by relating the value of our game to the value of the game (a random walk) for λj that we studied in Chapter 7; see also [29]. We call z^ε(x0) the value of the game for the elliptic case (see Chapter 7), considering the initial position x0 and a step length ε. This game is the same as the one described in Section 8.2.2, but now we do not take the time into account; that is, we do not stop when tk < 0 (and therefore the number of plays is not a priori bounded by ⌈2T/ε²⌉). We will call xτ ∉ Ω the final position of the token. In what follows we will refer to the game described in Section 8.2.2 as the parabolic game, while when we disregard time we refer to the elliptic game. Note that the elliptic DPP is given by

{ v^ε(x) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { (1/2) v^ε(x + εv) + (1/2) v^ε(x − εv) }   x ∈ Ω,
{ v^ε(x) = g(x)                                                                       x ∉ Ω.

Solutions to this DPP are stationary solutions (solutions independent of time) for the DPP that corresponds to the parabolic game. Let us recall it here. We have

uε(x, t) = inf_{dim(S)=j} sup_{v∈S, |v|=1} { (1/2) uε(x + εv, t − ε²/2) + (1/2) uε(x − εv, t − ε²/2) }

in ΩT, with

uε(x, t) = h(x, t)

for (x, t) ∉ ΩT. Here we choose h in such a way that it does not depend on t (we can do this since we are assuming that g does not depend on t). Our goal will be to show that there exist two positive constants μ, depending only on Ω, and C, depending on u0, but both independent of ε, such that

‖uε(⋅, t) − v^ε(⋅)‖∞ ≤ C e^{−μt}.

For the elliptic game, the strategies are denoted by S̃I and S̃II. Given two strategies for the elliptic game, we can play the parabolic game according to those strategies by considering, for all t0 > 0,

SI(t0, x0, x1, . . . , xk) = S̃I(x0, x1, . . . , xk),
SII(t0, x0, x1, . . . , xk, S) = S̃II(x0, x1, . . . , xk, S).    (8.34)
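To make the parabolic DPP concrete, here is a small numerical sketch (not from the book) in the simplest case N = j = 1, where the DPP reduces to the value iteration of a symmetric random walk; the grid, step size, and data are illustrative choices. The iterates approach the stationary (elliptic) value exponentially fast, in line with the estimate we are about to prove:

```python
import numpy as np

# In 1-D the parabolic DPP is
#   u(x, t) = 1/2 u(x + eps, t - eps^2/2) + 1/2 u(x - eps, t - eps^2/2).
# With g(0) = 0, g(1) = 1, the stationary (elliptic) value is v(x) = x,
# since 1/2 (x + eps) + 1/2 (x - eps) = x solves the elliptic DPP exactly.
eps = 0.02
x = np.linspace(0.0, 1.0, int(1.0 / eps) + 1)
v = x.copy()                      # stationary solution of the elliptic DPP
u = x + np.sin(np.pi * x)         # initial condition u0, with u0 = g on the boundary

errors = []
for k in range(5000):
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[2:] + u[:-2])   # one DPP step (time shrinks by eps^2/2)
    u = u_new
    errors.append(np.max(np.abs(u - v)))

assert errors[-1] < 1e-3                    # exponential decay to the stationary value
assert errors[-1] < errors[1000] < errors[0]
print(f"sup-norm distance to the stationary value after 5000 steps: {errors[-1]:.2e}")
```

Here the error contracts by a factor cos(πε) per step, which is the discrete analogue of the exponential rate e^{−μt}.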

When we attempt to do the analogous construction, building a strategy for the elliptic game from one for the parabolic game, we require that the game sequences are not too long, since the strategies for the parabolic game are only defined for tk > 0 (when tk ≤ 0 the parabolic game ends). However, for any t > 0, if we suppose that the game ends in fewer than ⌈2t/ε²⌉ steps, i.e., τ < ⌈2t/ε²⌉, then we have a bijection between strategies for the two games that induces the same probability distribution on the game histories (x0, x1, . . . , xτ).

The next lemma ensures that, in the parabolic game, the probability of the final payoff being given by the initial datum goes to 0 exponentially fast as t → +∞. In addition, we also prove that in the elliptic game, trajectories that take too long to exit the domain have exponentially small probability.

Lemma 8.26. Let Ω be a bounded domain, let SI, SII be two strategies for the parabolic game, and let S̃I, S̃II be two strategies for the elliptic game. We have, for any t > 0,

ℙ^{x0,t}_{SI,SII}[tτ ≤ 0] ≤ C e^{−μt}   and   ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ t] ≤ C e^{−μt},

where μ > 0 is a constant depending only on Ω and C is another constant, both independent of the size of the steps ε. We recall that τ denotes the number of steps until the game ends.

Proof. Take BR(x) such that Ω ⊂ BR(x). We start by proving the estimate for the elliptic game. Let S̃I, S̃II be two strategies for this game. As computed in (8.17), Mk = |xk − x|² − ε²k is a martingale. By applying the optional stopping theorem, we obtain

ε² 𝔼^{x0}_{S̃I,S̃II}[τ] = 𝔼^{x0}_{S̃I,S̃II}[|xτ − x|²] ≤ R².

Hence, we get

𝔼^{x0}_{S̃I,S̃II}[ε²τ/2] ≤ R²/2,

and we can show the bound

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ t] ≤ R²/(2t).

For n ∈ ℕ, by considering the martingale started after n steps, we can obtain

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2)n + t | ε²τ/2 ≥ (ε²/2)n] ≤ R²/(2t).

Hence, for n, k ∈ ℕ, applying this bound multiple times we obtain

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2)nk]
   = ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2)nk | ε²τ/2 ≥ (ε²/2)n(k − 1)]
   × ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2)n(k − 1) | ε²τ/2 ≥ (ε²/2)n(k − 2)]
   × ⋅⋅⋅ × ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2)n]
   ≤ (R²/(ε²n))^k.

For ε < ε0 = 1 we consider

δ = R²/(2e^{−1}) + 1/2.

We have

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ t] ≤ ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ (ε²/2) ⌊2δ/ε²⌋ ⌊t/δ⌋].

By the above argument we obtain

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ t] ≤ (R²/(ε² ⌊2δ/ε²⌋))^{⌊t/δ⌋} ≤ (R²/(2(δ − ε0²/2)))^{t/δ − 1} = e^{−t/δ + 1}.

We have shown

ℙ^{x0}_{S̃I,S̃II}[ε²τ/2 ≥ t] ≤ C e^{−μt}

for C = e and μ = 1/δ. The same bound holds for the parabolic game, using the relation between the strategies given in (8.34). That is,

ℙ^{x0,t}_{SI,SII}[tτ ≤ 0] = ℙ^{x0,t}_{SI,SII}[ε²τ/2 ≥ t] = 1 − ℙ^{x0,t}_{SI,SII}[τ < 2t/ε²] = 1 − ℙ^{x0}_{S̃I,S̃II}[τ < 2t/ε²] ≤ C e^{−μt}.

The use of the equivalence (8.34) between strategies of the two games is justified because we are computing the probability of the number of steps being less than ⌈2t/ε²⌉.

Using Lemma 8.26, we are able to prove that, as happens for the evolution PDE (see the previous subsection), also in the game formulation the asymptotic behavior of the value function as t goes to infinity is given by the value of the elliptic game (that is, by the stationary solution of the game). Note that in the probabilistic approach we obtain a bound for ‖u(⋅, t) − z(⋅)‖∞ of the form C‖u0‖∞ e^{−μt}. However, μ does not come from an eigenvalue problem but from the exponential bounds obtained in Lemma 8.26.

Proposition 8.27. There exist μ > 0, a constant depending only on Ω, and C > 0, depending on u0, such that

‖uε(⋅, t) − v^ε(⋅)‖∞ ≤ C e^{−μt},

where uε and v^ε are the value functions for the parabolic and the elliptic game, respectively. Moreover, as a consequence of this exponential decay, we obtain that the solution u of problem (8.12) and the convex envelope z(x) of g in Ω satisfy

‖u(⋅, t) − z(⋅)‖∞ ≤ C e^{−μt}.

Proof. Recall the payoff function h defined in (8.16), which does not depend on t. For any (x0, t0) ∈ Ω × (0, +∞) fixed, we have

uε(x0, t0) = inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)]
   = inf_{SI} sup_{SII} { 𝔼^{x0,t0}_{SI,SII}[g(xτ) | tτ > 0] ℙ^{x0,t0}_{SI,SII}(tτ > 0) + 𝔼^{x0,t0}_{SI,SII}[u0(xτ) | tτ ≤ 0] ℙ^{x0,t0}_{SI,SII}(tτ ≤ 0) }
   ≤ inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[g(xτ) | tτ > 0] + (‖g‖∞ + ‖u0‖∞) sup_{SI,SII} ℙ^{x0,t0}_{SI,SII}(tτ ≤ 0)    (8.35)

and

uε(x0, t0) ≥ inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[g(xτ) | tτ > 0] − (‖g‖∞ + ‖u0‖∞) sup_{SI,SII} ℙ^{x0,t0}_{SI,SII}(tτ ≤ 0).    (8.36)

Now, let z^ε(x0) be the value of the elliptic game considering as payoff function the same function g as before. We have

z^ε(x0) = inf_{S̃I} sup_{S̃II} { 𝔼^{x0}_{S̃I,S̃II}[g(xτ) | τ < 2t0/ε²] ℙ^{x0}_{S̃I,S̃II}(τ < 2t0/ε²) + 𝔼^{x0}_{S̃I,S̃II}[g(xτ) | τ ≥ 2t0/ε²] ℙ^{x0}_{S̃I,S̃II}(τ ≥ 2t0/ε²) }
   ≤ inf_{S̃I} sup_{S̃II} 𝔼^{x0}_{S̃I,S̃II}[g(xτ) | τ < 2t0/ε²] + ‖g‖∞ sup_{S̃I,S̃II} ℙ^{x0}_{S̃I,S̃II}(τ ≥ 2t0/ε²)    (8.37)

and

z^ε(x0) ≥ inf_{S̃I} sup_{S̃II} 𝔼^{x0}_{S̃I,S̃II}[g(xτ) | τ < 2t0/ε²] − ‖g‖∞ sup_{S̃I,S̃II} ℙ^{x0}_{S̃I,S̃II}(τ ≥ 2t0/ε²).    (8.38)

Given t0 > 0 in the parabolic game, if we suppose that τ < 2t0/ε² in both games, we have an equivalence between the strategies of both games, regardless of what happens after step ⌊2t0/ε²⌋. That is,

inf_{S̃I} sup_{S̃II} 𝔼^{x0}_{S̃I,S̃II}[g(xτ) | τ < 2t0/ε²] = inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[g(xτ) | tτ > 0].

Now, combining (8.35), (8.36), (8.37), and (8.38), we obtain

|uε(x0, t0) − z^ε(x0)| ≤ 2‖u0‖∞ ( sup_{S̃I,S̃II} ℙ^{x0}_{S̃I,S̃II}(τ ≥ 2t0/ε²) + sup_{SI,SII} ℙ^{x0,t0}_{SI,SII}(tτ ≤ 0) ).

Applying Lemma 8.26, for ε < ε0 = 1, we have

|uε(x0, t0) − z^ε(x0)| ≤ 4‖u0‖∞ C e^{−μt0},

for some μ depending only on Ω. Letting ε → 0 and using the uniform convergence of uε(x0, t0) and z^ε(x0) to u(x0, t0) and z(x0), respectively, we obtain

|u(x0, t0) − z(x0)| ≤ 4‖u0‖∞ C e^{−μt0}.

This completes the proof.
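The exponential tails behind these estimates can be observed numerically. A minimal Monte Carlo sketch (not from the book; the one-dimensional domain, the step size, and the sample size are illustrative) estimating ℙ(ε²τ/2 ≥ t) for a random walk on Ω = (−1, 1) started at the origin:

```python
import numpy as np

rng = np.random.default_rng(42)
eps, n_walks = 0.1, 20000
pos = np.zeros(n_walks)
alive = np.ones(n_walks, dtype=bool)   # walks that have not left Omega yet
survival = {}

for step in range(1, 601):
    # one random-walk step of size eps for the walks still inside Omega
    pos[alive] += eps * rng.choice([-1.0, 1.0], size=alive.sum())
    alive &= np.abs(pos) < 1.0
    if step in (200, 400, 600):        # eps^2 * step / 2 = 1, 2, 3
        survival[step] = alive.mean()

p1, p2, p3 = survival[200], survival[400], survival[600]
# each extra unit of "game time" cuts the tail probability by a large factor,
# consistent with the bound P(eps^2 tau / 2 >= t) <= C e^{-mu t}
assert p1 > p2 > p3 >= 0
assert p2 < 0.5 * p1 and p3 < 0.5 * p2
print(f"P(tail >= 1, 2, 3) approx {p1:.3f}, {p2:.4f}, {p3:.5f}")
```

The roughly constant ratio between successive tail probabilities is the empirical signature of the rate μ in Lemma 8.26.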


Now, assume that there is an affine function π such that g = π for x ∉ Ω. In this case, π(xk) is a martingale. Hence, under a strategy that forces the game to end outside Ω, we obtain 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)] = π(x0).

Suppose 1 ≤ j ≤ N − 1, Ω ⊂ BR(x), and g ≡ π. Player I can choose S at every step in such a way that it is normal to x − xk; hence v ∈ S is normal to x − xk, and we have

|x − x_{k+1}|² = |x − xk − vε|² = |vε|² + |x − xk|² = ε² + |x − xk|².

If Player I plays with this strategy, we obtain |x − xk|² = kε² + |x − x0|². Since Ω ⊂ BR(x), we have |x − xk|² ≤ R² for every xk ∈ Ω, and hence the game ends after at most

(R² − |x − x0|²)/ε²

turns. Hence, we have u(x, t) ≤ π(x) for every x ∈ Ω and every t > T = 2R². Analogously, if 2 ≤ j ≤ N, Player II can choose v ∈ S such that v is normal to x − xk (because the intersection of S and the (N − 1)-dimensional normal space to x − xk is not empty). By the same arguments used before, we can show that u(x, t) ≥ π(x) for every x ∈ Ω and every t > T = 2R². Hence, we have shown that, for 2 ≤ j ≤ N − 1, u(x, t) = π(x) for every x ∈ Ω and every t > T = 2R². Note that this argument can be considered as a proof of Theorem 8.20 based on the game strategies.

We can obtain a similar result when g = π in a half-space. Suppose that h = π for every x ∈ {x ∈ Ω^c : x ⋅ w > θ} for a given w ∈ ℝ^N of norm 1 and θ ∈ ℝ. Given y ∈ {x ∈ Ω : x ⋅ w > θ}, we can choose ξ ∈ ℝ^N and r > 0 such that {x ∈ Ω : x ⋅ w ≤ θ} ⊂ Br(ξ) and y ∉ Br(ξ), as depicted in Figure 8.1. Now, arguing in the same way as before, we can consider the strategies that give a vector v normal to xk − ξ. Hence, in the case 1 ≤ j ≤ N − 1, we can prove that u(y, t) ≤ π(y) for every y ∈ {x ∈ Ω : x ⋅ w > θ} and every t large enough (for instance we can take t > 2r², where r is the radius of the ball described before, which depends on y). Note that the closer y is to the hyperplane x ⋅ w = θ, the longer we will have to wait before having the above inequality.


Figure 8.1: Here 𝜕Ω is round and 𝜕Br (ξ ) is curved.
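The "orthogonal" strategy used in the argument above can be simulated. A small sketch (not from the book; N = 2, j = 1, with illustrative values of ε, R, and the starting point) confirming that the distance to the center grows deterministically, whatever coin tosses occur, so the game is forced to end:

```python
import numpy as np

# Player I always picks the direction v normal to x - x_k (here x = 0 and
# Omega = B_R(0)). Then |x_k|^2 = k eps^2 + |x_0|^2 exactly, so the token
# must leave B_R after at most (R^2 - |x_0|^2) / eps^2 steps.
rng = np.random.default_rng(7)
eps, R = 0.05, 1.0
xk = np.array([0.1, 0.0])
x0_sq = xk @ xk

k = 0
while xk @ xk < R ** 2:
    d = xk / np.linalg.norm(xk)
    v = np.array([-d[1], d[0]])                     # unit vector normal to x_k
    xk = xk + rng.choice([-1.0, 1.0]) * eps * v     # the coin toss is irrelevant
    k += 1
    # the quadratic growth holds regardless of the random signs
    assert abs(xk @ xk - (k * eps ** 2 + x0_sq)) < 1e-9

assert k <= (R ** 2 - x0_sq) / eps ** 2 + 1
print(f"forced exit after {k} steps")
```

This is the deterministic mechanism behind the finite-time bound t > T = 2R²: the exit time is controlled by the geometry alone, not by chance.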

In the case 2 ≤ j ≤ N, with analogous arguments, we can also show that we have the reverse inequality, that is, u(x, t) ≥ π(x) for every x ∈ {x ∈ Ω : x ⋅ w > θ} and every t large enough.

Next, we present an example that illustrates the result of Theorem 8.25. Although it is possible to give a more general argument, giving rise to an alternative proof of this theorem based only on probabilistic arguments, we restrict ourselves to this example to clarify the exposition.

Example 8.28. Consider the parabolic game for λj in a ball BR centered at the origin, and take as initial and boundary data two functions u0(x′, x″) and g(x′, x″), with (x′, x″) ∈ ℝ^j × ℝ^{N−j}, such that

u0 > 0   in Ω,

and

g(x′, x″) = |x″|   for all (x′, x″) ∈ ℝ^N \ Ω.

For this choice of g, we claim that the solution of problem (8.13) satisfies

z(x′, x″) = 0   in Ω ∩ {x″ = 0}.

In order to prove this claim, we use the geometric interpretation of solutions to problem (8.13) given in [29]. Consider the j-dimensional subspace {x″ = 0} and the j-dimensional domain D := Ω ∩ {x″ = 0}. Following the ideas of [29], the solution z of (8.13) must satisfy

z ≤ zD   in D,

where zD is the concave envelope of g in D = Ω ∩ {x″ = 0}. By the choice of g, it follows that zD ≡ 0. The claim then follows from the maximum principle, since g ≥ 0 on ∂Ω. Now, let us prove that for any x0 ∈ Ω ∩ {x″ = 0} and t0 > 0, we have

uε(x0, t0) = inf_{SI} sup_{SII} 𝔼^{x0,t0}_{SI,SII}[h(xτ, tτ)] > 0.


Let x0 ∈ Ω ∩ {x″ = 0}. Since u0 ≥ 0, if uε(x0, t0) = 0, Player I should have a strategy such that, whatever Player II does, the final payoff is 0 with probability 1. Since u0 vanishes only on ∂Ω ∩ {x″ = 0}, Player I needs to make sure that xk reaches this set before the game comes to an end. We claim that the only strategy Player I can follow is to choose the j-dimensional subspace {x″ = 0} at every step. Indeed, if at some step xk leaves this subspace, the probability of never coming back, and hence of the final payoff being nonzero, is positive. Once Player I has fixed this only possible strategy to obtain zero as final payoff, Player II can choose any unitary vector in the subspace {x″ = 0} and always play with the same vector. Playing with these strategies, the game is reduced to a random walk in a segment, and it is well known that for this process the probability of not reaching the extremes of the segment in fewer than ⌈2t0/ε²⌉ steps is strictly positive for any t0 > 0 (in fact, it is uniformly bounded below). Since the initial condition verifies u0 > 0 in Ω, we conclude that the value of the game is also strictly positive at (x0, t0), and moreover, it is bounded below, uε(x0, t0) > c > 0, for any x0 ∈ Ω ∩ {x″ = 0} and t0 > 0, independently of ε. Then uε(x0, t0), and hence its limit as ε → 0, u(x0, t0), does not come down to the stationary solution z in finite time. Finally, note that from our previous arguments, for any point x0 ∈ Ω \ {x″ = 0} there is a finite time t0 (which depends on x0) such that uε(x0, t) = z(x0) for every t ≥ t0.

8.4 Comments

The results contained in this chapter are taken from [83] and [25]. In [40], spatial and time dependence of the possible movements is introduced. Note that the same general strategy can be used to obtain different parabolic problems. In fact, as we did in this chapter, one can play an elliptic game and take time into account by counting the number of plays, obtaining in the limit a PDE of the form ut = F(D²u, ∇u).

9 Free boundary problems

In this chapter our aim is to introduce a free boundary problem in connection with Tug-of-War games. To this end, we first present results for a game where one of the players may sell the turn to the other in a fixed (given) subdomain of Ω. An operator with a gradient constraint arises as the limit of the game values in this case. Next, we introduce our free boundary problem, in which the set {u > 0} plays a crucial role. A rule that depends on the expected payoff, such as "Player II is allowed to end the game when the expected value is positive," is not admissible, and we need to circumvent this delicate issue when we describe the rules of the game. We find a valid set of rules for a game in such a way that the set where the expected value is positive arises naturally in the corresponding DPP.

9.1 Gradient constraints

We first deal with the analysis of a game that approximates solutions to a PDE with a gradient constraint. We deal with

  min{Δ∞u(x), |Du(x)| − χD(x)} = 0   in Ω,
  u(x) = g(x)   on 𝜕Ω.    (9.1)

Here χD : Ω → ℝ denotes the characteristic function of the set D, that is,

  χD(x) = 1 if x ∈ D,  χD(x) = 0 if x ∈ Ω \ D.

The problem (9.1) can be obtained by taking the limit as p → ∞ in a variational problem involving the p-Laplacian. In fact, in [63] the limit as p → ∞ is studied in the following variational problem:

  Δp up(x) = f(x)   in Ω,
  up(x) = g(x)   on 𝜕Ω,

with f ≥ 0 and a continuous boundary datum. In this context, (up)p≥2 converges, up to a subsequence, to a limiting function u∞, which is a solution to the following problem in the viscosity sense:

  min{Δ∞u∞(x), |∇u∞(x)| − χ{f>0}(x)} = 0   in Ω,
  u∞(x) = g(x)   on 𝜕Ω.    (9.2)

Note that, taking D = {f > 0}, (9.2) can be written as (9.1).

Problems of this kind are known as problems with gradient constraints, since the equation forces

  |Du(x)| ≥ 1   in D.

Gradient constraint problems like

  min{Δ∞u(x), |∇u(x)| − h(x)} = 0,    (9.3)

where h ≥ 0, appeared in [57]. By considering solutions to Fδ[u] ≡ min{Δ∞u, |∇u| − δ} = 0, resp. its counterpart F^δ[u] ≡ max{Δ∞u, δ − |∇u|} = 0, Jensen provided a mechanism to obtain solutions of the ∞-Laplace equation −Δ∞u = 0 via an approximation procedure. In this context, he proved uniqueness for the ∞-Laplace equation by first showing that it holds for the approximating equations and then sending δ → 0. A similar strategy was used in the anisotropic counterpart in [70], and a variant of (9.3) appears in the so-called ∞-eigenvalue problem; see, e. g., [61].

We highlight that, in general, uniqueness of solutions to (9.3) is an easy task if h is a continuous function that is strictly positive everywhere. Moreover, uniqueness is known to hold if h ≡ 0; see [57]. Nevertheless, the case h ≥ 0 yields significant obstacles. This situation resembles the one for the ∞-Poisson equation −Δ∞u = h, where uniqueness is known to hold if h > 0 or h ≡ 0, and the case h ≥ 0 is an open problem. In this direction, [63, Theorem 4.1] proved uniqueness for (9.3) in the special case h = χD under the mild topological condition D = D∘ on the set D ⊂ ℝN. Furthermore, they showed counterexamples where uniqueness fails if this topological condition is not satisfied; see [63, Section 4.1]. Moreover, a minimal solution (a solution that is less than or equal to any other) is also proved to exist (see again [63]). Finally, from a regularity viewpoint, [63] also established that viscosity solutions to (9.3) are Lipschitz continuous.

9.1.1 Play or sell the turn Tug-of-War

Now let us introduce a variant of the Tug-of-War game and show that the value functions of this game converge, as the step size tends to zero, to the minimal solution of (9.2). Let Ω be a bounded open set and D ⊂ Ω. For a fixed ε > 0, consider the following two-player zero-sum game. If x0 ∈ Ω \ D, then the players play a Tug-of-War game, that


is, a fair coin is tossed and the winner of the toss is allowed to move the game token to any x1 ∈ Bε(x0). On the other hand, if x0 ∈ D ∩ Ω, then Player II, the player seeking to minimize the final payoff, can either sell the turn to Player I, getting a payment of −ε, or decide that they toss a fair coin and play Tug-of-War. If Player II sells the turn, then Player I can move the game token to any x1 ∈ Bε(x0) without tossing a coin. After the first round, the game continues from x1 according to the same rules.

This procedure yields a possibly infinite sequence of game states x0, x1, . . ., where every xk is a random variable. The game ends when the game token is in ℝN \ Ω. We denote by xτ ∈ ℝN \ Ω the first point in the sequence of game states that lies outside Ω, so that τ refers to the first time we hit ℝN \ Ω. At this time the game ends with the terminal payoff given by g(xτ), where g : ℝN \ Ω → ℝ is a given Lipschitz continuous payoff function. Player I earns g(xτ), while Player II earns −g(xτ).

As before, a strategy SI for Player I is a function defined on the partial histories that gives the next game position SI(x0, x1, . . . , xk) = xk+1 ∈ Bε(xk) if Player I wins the toss. Similarly, Player II plays according to a strategy SII. In addition, we need a decision variable, which tells us when Player II decides to sell a turn. We have

  θ(x0, . . . , xk) = 1 if xk ∈ D and Player II sells the turn,
  θ(x0, . . . , xk) = 0 otherwise.

In this case, the one-step transition probabilities will be

  πSI,SII,θ(x0, . . . , xk, A) = (1 − θ(x0, . . . , xk)) · ½ (δSI(x0,...,xk)(A) + δSII(x0,...,xk)(A)) + θ(x0, . . . , xk) δSI(x0,...,xk)(A).

By using Kolmogorov's extension theorem and the one-step transition probabilities, we can build a probability measure ℙ^{x0}_{SI,SII,θ} on the game sequences. We denote by 𝔼^{x0}_{SI,SII,θ} the corresponding expectation.

The value of the game for Player I is given by

  uεI(x0) = sup_{SI} inf_{SII,θ} 𝔼^{x0}_{SI,SII,θ} [ g(xτ) − ε Σ_{i=0}^{τ−1} θ(x0, . . . , xi) ],

while the value of the game for Player II is given by

  uεII(x0) = inf_{SII,θ} sup_{SI} 𝔼^{x0}_{SI,SII,θ} [ g(xτ) − ε Σ_{i=0}^{τ−1} θ(x0, . . . , xi) ].

Observe that if the game does not end almost surely, then the expectation is not well defined. In this case, as we did previously, we penalize and define 𝔼^{x0}_{SI,SII,θ} to take the value −∞ when evaluating uεI(x0) and +∞ when evaluating uεII(x0). If uεI = uεII, we say that the game has a value. We denote the game value as uε := uεI = uεII.
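The rules above are easy to simulate. In the sketch below, all concrete choices are illustrative assumptions, not taken from the book: Ω = (0, 1), D = (0.4, 0.6), a payoff g(x) = max{x, 0} outside Ω, naive pull-right/pull-left strategies, and a decision variable θ that always sells the turn in D. Each episode returns g(xτ) minus ε per sold turn, matching the payoff inside the expectation.

```python
import random

def play_episode(x0, eps, g, S_I, S_II, theta, rng, max_steps=10**6):
    """One run of the play-or-sell-the-turn game on Omega = (0, 1).
    Returns g(x_tau) - eps * (number of turns Player II sold)."""
    x, sold = x0, 0
    for _ in range(max_steps):
        if not 0.0 < x < 1.0:            # token left Omega: game over
            return g(x) - eps * sold
        if theta(x):                     # Player II sells the turn
            sold += 1
            x = S_I(x, eps)              # Player I moves without a coin toss
        elif rng.random() < 0.5:         # fair coin: Player I moves
            x = S_I(x, eps)
        else:                            # Player II moves
            x = S_II(x, eps)
    raise RuntimeError("episode did not terminate")

rng = random.Random(0)
g = lambda x: max(x, 0.0)                # Lipschitz payoff outside Omega
S_I = lambda x, eps: x + eps             # Player I pulls right
S_II = lambda x, eps: x - eps            # Player II pulls left
theta = lambda x: 0.4 < x < 0.6          # II sells whenever the token is in D
vals = [play_episode(0.5, 0.05, g, S_I, S_II, theta, rng) for _ in range(2000)]
print(sum(vals) / len(vals))             # empirical payoff for these strategies
```

These fixed strategies are of course not optimal; optimizing over SI, SII, and θ (the sup–inf above) is what defines uεI and uεII.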

9.1.2 Dynamic programming principle

We begin the analysis of our game with the statement of the Dynamic Programming Principle.

Lemma 9.1 (DPP). The game has a value, and it satisfies

  uε(x) = min{ ½ sup_{Bε(x)} uε + ½ inf_{Bε(x)} uε ; sup_{Bε(x)} uε − εχD(x) }

for x ∈ Ω and uε(x) = g(x) in ℝN \ Ω.

One version of this game that will be used in our arguments is the one with D = Ω. The difference from the previous game is that Player II can sell turns in the whole of Ω, and not just when the token is in D. We refer to our original game as the D-game and to the modified game as the Ω-game.

Lemma 9.2 (DPP, Ω-game). The game has a value, and it satisfies

  uε(x) = min{ ½ sup_{Bε(x)} uε + ½ inf_{Bε(x)} uε ; sup_{Bε(x)} uε − ε }

for x ∈ Ω and uε(x) = g(x) in ℝN \ Ω.

Now let us show that there is a value for the Ω- and D-games by modifying the methods presented in previous chapters.

Proof of Lemma 9.2. Let u be a solution to the DPP in the lemma. It can be seen that such a solution is bounded. By definition, uεI ≤ uεII, and thus if we show that uεII ≤ u ≤ uεI, the result follows. We will prove u ≤ uεI; the analogous inequality for uεII can be obtained in a similar way. Let

  δ(x) := sup_{Bε(x)} u − u(x).

Suppose that Player I uses a strategy SI⁰ in which she always chooses to step to a point that almost maximizes u, that is, to a point xk such that

  u(xk) ≥ sup_{Bε(xk−1)} u − η2^{−k},

for a fixed η > 0. We claim that mk = u(xk) − η2^{−k} is a submartingale. Indeed, it follows from the DPP that

  u(xk) − inf_{Bε(xk)} u(y) ≤ sup_{Bε(xk)} u(y) − u(xk) = δ(xk),    (9.4)


and thus

  𝔼^{x0}_{SI⁰,SII,θ} [ u(xk) − η2^{−k} | x0, . . . , xk−1 ] ≥ u(xk−1) − η2^{−(k−1)}.

From the submartingale property it follows that the limit limk→∞ mτ∧k exists by the martingale convergence theorem. Furthermore, at every point x ∈ Ω either

  u(x) = ½ { inf_{Bε(x)} u + sup_{Bε(x)} u(y) } < sup_{Bε(x)} u(y) − ε,

implying

  ε < sup_{Bε(x)} u − u(x),    (9.5)

or

  u(x) = sup_{Bε(x)} u(y) − ε,

and hence

  sup_{Bε(x)} u(y) − u(x) = ε.    (9.6)

Thus δ(x) ≥ ε always. On the other hand, there are arbitrarily long sequences of moves made by Player I. Indeed, if Player II sells a turn, then Player I gets to move, and otherwise this is a consequence of the zero–one law. Since mk is a bounded submartingale, these two facts imply that the game must end almost surely. By a similar argument, using the fact that δ(x) ≥ ε, we see that

  u(xk) − ε Σ_{i=0}^{k−1} θ(x0, . . . , xi) − η2^{−k}

is a submartingale as well. It then follows from Fatou's lemma and the optional stopping theorem that

  uεI(x0) = sup_{SI} inf_{SII,θ} 𝔼^{x0}_{SI,SII,θ} [ g(xτ) − ε Σ_{i=0}^{τ−1} θ(x0, . . . , xi) ]
          ≥ inf_{SII,θ} 𝔼^{x0}_{SI⁰,SII,θ} [ g(xτ) − ε Σ_{i=0}^{τ−1} θ(x0, . . . , xi) − η2^{−τ} ]
          ≥ inf_{SII,θ} lim sup_{k→∞} 𝔼^{x0}_{SI⁰,SII,θ} [ u(xτ∧k) − ε Σ_{i=0}^{τ∧k−1} θ(x0, . . . , xi) − η2^{−(τ∧k)} ]
          ≥ inf_{SII,θ} 𝔼^{x0}_{SI⁰,SII,θ} [ u(x0) − η ] = u(x0) − η.

This implies that uεI ≥ u.

An analogous argument also works for the D-game.

Proof of Lemma 9.1. The proof is quite similar to the proof of Lemma 9.2, but when Tug-of-War is played outside D, we have to make sure that

  δ(x) := sup_{Bε(x)} u − u(x)

is large enough. This is done by using the backtracking strategy; cf. Theorem 2.2 of [97] or the proof of Theorem 4.1. Fix η > 0 and a starting point x0 ∈ Ω, and set δ0 = min{δ(x0), ε}/2. We suppose for now that δ0 > 0, and we define

  X0 = {x ∈ Ω : δ(x) > δ0}.

Observe that D ⊂ X0 by estimates similar to (9.5) and (9.6). We consider a strategy SI⁰ for Player I that distinguishes between the cases xk ∈ X0 and xk ∉ X0. First, if xk ∈ X0, then she always chooses to step to a point xk+1 satisfying

  u(xk+1) ≥ sup_{Bε(xk)} u − ηk+1 2^{−(k+1)},

where ηk+1 ∈ (0, η] is small enough to guarantee that xk+1 ∈ X0. Thus if xk ∈ X0 and Player I gets to choose the next position (by winning the coin toss or through the selling of the turn by the other player), for mk = u(xk) − η2^{−k} we have

  mk+1 ≥ u(xk) + δ(xk) − ηk+1 2^{−(k+1)} − η2^{−(k+1)} ≥ u(xk) + δ(xk) − η2^{−k} = mk + δ(xk).

On the other hand, if Player II wins the toss and moves from xk ∈ X0 to xk+1 ∈ X0, we have, in view of (9.4),

  mk+1 ≥ u(xk) − δ(xk) − η2^{−(k+1)} > mk − δ(xk).

In the case xk ∉ X0, we set

  mk = u(yk) − δ0 dk − η2^{−k},

where yk denotes the last game position in X0 up to time k, and dk is the distance, measured in number of steps, from xk to yk along the graph spanned by the previous points yk = xk−j, xk−j+1, . . . , xk that were used to get from yk to xk. The strategy for Player I in this case is to backtrack to yk, that is, if she wins the coin toss, she moves the token to one of the points xk−j, xk−j+1, . . . , xk−1 closer to yk, so that dk+1 = dk − 1. Thus if Player I wins and xk ∉ X0 (whether xk+1 ∈ X0 or not),

  mk+1 ≥ δ0 + mk.

To prove the desired submartingale property for mk, there are three more cases to be checked. If Player II wins the toss and moves to a point xk+1 ∉ X0 (whether xk ∈ X0 or not), we have

  mk+1 = u(yk) − dk+1 δ0 − η2^{−(k+1)} ≥ u(yk) − dk δ0 − δ0 − η2^{−k} = mk − δ0.

If Player II wins the coin toss and moves from xk ∉ X0 to xk+1 ∈ X0, then

  mk+1 = u(xk+1) − η2^{−(k+1)} ≥ −δ(xk) + u(xk) − η2^{−k} ≥ −δ0 + mk,

where the first inequality is due to (9.4), and the second follows from the fact that

  mk = u(yk) − dk δ0 − η2^{−k} ≤ u(xk) − η2^{−k}.

Taking into account all the different cases, we see that mk is a submartingale bounded from above, and since Player I can ensure that mk+1 ≥ mk + δ0 if she wins a coin toss, the game must again terminate almost surely. We can now conclude the proof using the fact that δ(xk) ≥ ε whenever xk ∈ D.

Finally, let us remove the assumption that δ(x0) > 0. If δ(x0) = 0 for x0 ∈ Ω, then Player I adopts a strategy of pulling towards a boundary point until the game token reaches a point x0′ such that δ(x0′) > 0 or x0′ is outside Ω. We have u(x0) = u(x0′), because by (9.4) it cannot happen that δ(x) = sup_{Bε(x)} u − u(x) = 0 and u(x) − inf_{Bε(x)} u > 0 simultaneously. Thus we can repeat the proof also in this case.
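The DPP of Lemma 9.1 can also be solved numerically by fixed-point iteration, which mirrors how solutions of such DPPs are constructed in the existence proofs. A minimal one-dimensional sketch, where Ω, D, the boundary datum, and the grid are all illustrative assumptions:

```python
import numpy as np

def solve_dpp(eps=0.1, a=0.4, b=0.6, max_iter=200_000, tol=1e-9):
    """Iterate u <- min{ (sup u + inf u)/2 ; sup u - eps*chi_D } on a 1-D grid
    for Omega = (0, 1), D = (a, b), boundary datum g(0) = 0, g(1) = 1.
    On the grid, sup/inf over the eps-ball reduce to a node and its neighbors."""
    x = np.linspace(0.0, 1.0, round(1 / eps) + 1)
    u = np.ones_like(x)                       # start from a supersolution
    u[0], u[-1] = 0.0, 1.0                    # boundary datum g
    chiD = ((x > a) & (x < b))[1:-1] * 1.0    # characteristic function of D
    for _ in range(max_iter):
        sup = np.maximum.reduce([u[:-2], u[1:-1], u[2:]])
        inf = np.minimum.reduce([u[:-2], u[1:-1], u[2:]])
        new = np.minimum(0.5 * (sup + inf), sup - eps * chiD)
        done = np.max(np.abs(new - u[1:-1])) < tol
        u[1:-1] = new
        if done:
            break
    return x, u

x, u = solve_dpp()
print(np.round(u, 4))
```

With these choices the iteration appears to settle near the linear function u(x) = x, which satisfies both branches of the DPP: in D the second branch gives sup u − ε = u because the slope is exactly 1.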

9.1.3 Game value convergence

Our main goal now is to prove that the value uε of the game converges to the minimal solution of (9.1).

Theorem 9.3. Let uε be the family of game values for a Lipschitz continuous boundary datum g, and let u be the minimal solution to (9.1). Then

  uε → u   uniformly in Ω

as ε → 0.

As a first step, we prove that, up to selecting a subsequence, uε → u as ε → 0 for some Lipschitz function u.

Theorem 9.4. Let uε be a family of game values for a Lipschitz continuous boundary datum g. Then there exists a Lipschitz continuous function u such that, up to selecting a subsequence,

  uε → u   uniformly in Ω

as ε → 0.

Proof. The boundedness of uε follows by a rather standard combination of pulling-to-a-boundary-point strategies with a martingale argument. Thus it suffices to prove asymptotic Lipschitz continuity for the family uε and then use the asymptotic version of the Arzelà–Ascoli lemma. We prove the required oscillation estimate by arguing by contradiction: if there exists a point where the oscillation

  A(x) := sup_{y∈Bε(x)} uε(y) − inf_{y∈Bε(x)} uε(y)

is large compared to the oscillation of the boundary data, then the DPP takes the same form as for the standard Tug-of-War game. Intuitively, the Tug-of-War never reduces the oscillation when playing in the sup or inf directions. Thus we can iterate this idea up to the boundary to show that the oscillation of the boundary data must be larger than it actually is, which is the desired contradiction.

To be more precise, we claim that

  A(x) ≤ 4 max{Lip(g); 1}ε

for all x ∈ Ω. Aiming for a contradiction, suppose that there exists x0 ∈ Ω such that A(x0) > 4 max{Lip(g); 1}ε. In this case, we have

  uε(x0) = min{ ½ sup_{Bε(x0)} uε(y) + ½ inf_{Bε(x0)} uε(y) ; sup_{Bε(x0)} uε(y) − εχD }
         = ½ sup_{Bε(x0)} uε(y) + ½ inf_{Bε(x0)} uε(y).    (9.7)

The reason is that the alternative

  ½ sup_{y∈Bε(x0)} uε(y) + ½ inf_{y∈Bε(x0)} uε(y) > sup_{y∈Bε(x0)} uε(y) − εχD

would imply

  A(x0) = sup_{y∈Bε(x0)} uε(y) − inf_{y∈Bε(x0)} uε(y) < 2εχD ≤ 2ε,    (9.8)


which is a contradiction with A(x0) > 4 max{Lip(g); 1}ε. It follows from (9.7) that

  sup_{y∈Bε(x0)} uε(y) − uε(x0) = uε(x0) − inf_{y∈Bε(x0)} uε(y) = ½ A(x0).

Let η > 0 and take x1 ∈ Bε(x0) such that

  uε(x1) ≥ sup_{y∈Bε(x0)} uε(y) − η/2.

We obtain

  uε(x1) − uε(x0) ≥ ½ A(x0) − η/2 ≥ 2 max{Lip(g); 1}ε − η/2,

and, since x0 ∈ Bε(x1), also

  sup_{y∈Bε(x1)} uε(y) − inf_{y∈Bε(x1)} uε(y) ≥ 2 max{Lip(g); 1}ε − η/2.

Arguing as before, (9.7) also holds at x1, since otherwise the above inequality would lead to a contradiction similar to (9.8) for small enough η. Thus

  sup_{y∈Bε(x1)} uε(y) − uε(x1) = uε(x1) − inf_{y∈Bε(x1)} uε(y) ≥ 2 max{Lip(g); 1}ε − η/2,

so that

  A(x1) = sup_{y∈Bε(x1)} uε(y) − uε(x1) + uε(x1) − inf_{y∈Bε(x1)} uε(y) ≥ 4 max{Lip(g); 1}ε − η.

Iterating this procedure, we obtain xi ∈ Bε(xi−1) such that

  uε(xi) − uε(xi−1) ≥ 2 max{Lip(g); 1}ε − η/2^i    (9.9)

and

  A(xi) ≥ 4 max{Lip(g); 1}ε − Σ_{j=0}^{i−1} η/2^j.    (9.10)

We can proceed with an analogous argument, considering points where the infimum is nearly attained, to obtain x−1, x−2, . . . such that x−i ∈ Bε(x−(i−1)), and (9.9) and (9.10) hold. Since uε is bounded, there must exist k and l such that xk, x−l ∈ Γε, and we have

  |g(xk) − g(x−l)| / |xk − x−l| ≥ ( Σ_{j=−l+1}^{k} uε(xj) − uε(xj−1) ) / (ε(k + l)) ≥ 2 max{Lip(g); 1} − 2η/ε,

a contradiction. Therefore A(x) ≤ 4 max{Lip(g); 1}ε for every x ∈ Ω.

As we have mentioned, in order to prove Theorem 9.3, we use a modified game, the Ω-game, in which we just take D = Ω. We proved that this game has a value uε := uεI = uεII. Now we show that the value of the game converges to the unique solution of Jensen's equation min{Δ∞u, |Du| − 1} = 0.

Theorem 9.5. Let uε be the family of values of the Ω-game for a Lipschitz continuous boundary datum g, and let u be the unique solution to

  min{Δ∞u(x), |Du(x)| − 1} = 0   in Ω,
  u(x) = g(x)   on 𝜕Ω.    (9.11)

Then

  uε → u   uniformly in Ω.

Proof. The convergence of a subsequence to a Lipschitz continuous function u follows by the same argument as in Theorem 9.4. Since uε = g on 𝜕Ω, we see that u = g on 𝜕Ω, and we can focus our attention on showing that u satisfies Jensen's equation in the viscosity sense. To establish this, we consider an asymptotic expansion related to our operator. Fix a point x ∈ Ω and ϕ ∈ C²(Ω). Let x1ε and x2ε be a minimum point and a maximum point, respectively, for ϕ in Bε(x). As we did previously (see also [84]), we obtain the asymptotic expansion

  min{ ½ max_{y∈Bε(x)} ϕ(y) + ½ min_{y∈Bε(x)} ϕ(y) ; max_{y∈Bε(x)} ϕ(y) − ε } − ϕ(x)
    ≥ min{ (ε²/2) ⟨D²ϕ(x) ((x1ε − x)/ε), ((x1ε − x)/ε)⟩ + o(ε²) ;
           (Dϕ(x) ⋅ (x2ε − x)/ε − 1)ε + (ε²/2) ⟨D²ϕ(x) ((x2ε − x)/ε), ((x2ε − x)/ε)⟩ + o(ε²) }.    (9.12)
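The expansion (9.12) can be sanity-checked numerically in one dimension: the gap between the two sides, divided by ε², should become negligible as ε → 0. The test function, the point, and the discretization below are illustrative choices.

```python
import numpy as np

def expansion_gap(phi, dphi, d2phi, x, eps, n=20001):
    """Gap (per eps^2) between min{(max+min)/2; max - eps} - phi(x) over
    B_eps(x) and the leading terms of expansion (9.12) in one dimension."""
    y = np.linspace(x - eps, x + eps, n)      # discretized ball B_eps(x)
    v = phi(y)
    lhs = min(0.5 * (v.max() + v.min()), v.max() - eps) - phi(x)
    # In 1-D the min/max points sit at x -+ eps, so both quadratic terms
    # equal (eps^2/2)*phi''(x) and the gradient term is (|phi'(x)| - 1)*eps.
    rhs = min(0.5 * eps**2 * d2phi(x),
              (abs(dphi(x)) - 1.0) * eps + 0.5 * eps**2 * d2phi(x))
    return (lhs - rhs) / eps**2

phi = lambda y: np.sin(2.0 * y)               # smooth, with phi'(0.3) > 1
dphi = lambda y: 2.0 * np.cos(2.0 * y)
d2phi = lambda y: -4.0 * np.sin(2.0 * y)
for eps in (0.1, 0.05, 0.01):
    print(eps, expansion_gap(phi, dphi, d2phi, 0.3, eps))
```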

Suppose that u − ϕ has a strict local minimum at x and that Dϕ(x) ≠ 0. By the uniform convergence, for any ηε > 0 there exists a sequence (xε) converging to x such that

  uε(x) − ϕ(x) ≥ uε(xε) − ϕ(xε) − ηε,


that is, uε − ϕ has an approximate minimum at xε. Moreover, considering ϕ̃ = ϕ − ϕ(xε) + uε(xε), we may assume that ϕ(xε) = uε(xε). Thus, recalling that uε satisfies the DPP and that max_{y∈B̄ε(xε)} ϕ(y) = sup_{y∈Bε(xε)} ϕ(y), we obtain

  ηε ≥ −ϕ(xε) + min{ ½ max_{y∈Bε(xε)} ϕ(y) + ½ min_{y∈Bε(xε)} ϕ(y) ; max_{y∈Bε(xε)} ϕ(y) − ε }.

By choosing ηε = o(ε²), using (9.12), and dividing by ε², we have

  0 ≥ min{ ½ ⟨D²ϕ(x) ((x1ε − x)/ε), ((x1ε − x)/ε)⟩ + o(ε²)/ε² ;
           (Dϕ(x) ⋅ (x2ε − x)/ε − 1)(1/ε) + ½ ⟨D²ϕ(x) ((x2ε − x)/ε), ((x2ε − x)/ε)⟩ + o(ε²)/ε² }.

Since Dϕ(x) ≠ 0, by letting ε → 0 we conclude that

  Δ∞ϕ(x) ≤ 0  or  |Dϕ(x)| − 1 ≤ 0.

This shows that u is a viscosity supersolution to Jensen's equation (9.11), provided that Dϕ(x) ≠ 0. On the other hand, if Dϕ(x) = 0, then Δ∞ϕ(x) = 0 and the same conclusion follows. To prove that u is a viscosity subsolution, we consider a function φ that touches u from above at x ∈ Ω and observe that a reverse inequality to (9.12) holds at the point of touching. Arguing as above, one can deduce that

  Δ∞φ(x) ≥ 0  and  |Dφ(x)| − 1 ≥ 0.

To finish the proof, we observe that since the viscosity solution of (9.11) is unique, all the subsequential limits of uε are equal.

Next we prove that the value uε of the D-game converges to the minimal solution of min{Δ∞u, |Du| − χD} = 0.

Proof of Theorem 9.3. Let hε, uε, and zε denote the values of the standard Tug-of-War, the D-game, and the Ω-game, respectively. Since Player II has more options in the D-game and again more in the Ω-game, we have

  zε(x) ≤ uε(x) ≤ hε(x)   for all x ∈ Ω.    (9.13)

We denote by h the unique solution to the ∞-Laplace equation, set

  𝒜 = {x ∈ Ω : |Dh(x)| < 1},  ℬ = 𝒜 ∩ D,

and denote by z the unique solution to the equation min{Δ∞z, |∇z| − 1} = 0. We claim that

  uε → z   in ℬ.

Striving for a contradiction, suppose that there is x0 ∈ ℬ such that

  uε(x0) − z(x0) > C    (9.14)

for all ε > 0. We recall from Theorem 9.5 and [97] that

  zε → z  and  hε → h  uniformly in Ω.

Moreover, z(x) = h(x) in Ω \ 𝒜, and this together with (9.13) yields

  h = z ≤ uε + o(1) ≤ z + o(1) = h + o(1)    (9.15)

in Ω \ 𝒜 with a uniform error term. We will next show that

  δ(x0) := sup_{B(x0)} uε − uε(x0) ≥ ε.    (9.16)

Indeed, looking at the DPP in Lemma 9.1, we have two alternatives. The first alternative is

  uε(x0) = ½ { inf_{B(x0)} uε + sup_{B(x0)} uε } < sup_{B(x0)} uε − εχD(x0).

Since x0 ∈ D, this implies that

  2ε < sup_{B(x0)} uε − inf_{B(x0)} uε

and

  sup_{B(x0)} uε − uε(x0) = uε(x0) − inf_{B(x0)} uε,    (9.17)

from which we deduce δ(x0) > ε. The second alternative is

  uε(x0) = sup_{B(x0)} uε − ε,

which implies δ(x0) = ε, and the claim (9.16) follows. Let η > 0 and choose a point x1 so that

  uε(x1) ≥ sup_{B(x0)} uε − η2^{−1}.    (9.18)


It follows from (9.16) that uε(x1) − inf_{y∈B(x1)} uε(y) ≥ ε − η2^{−1}. Moreover, in the case of the first alternative at x1, we have δ(x1) ≥ ε − η2^{−1} by the analogue of (9.17), and in the case of the second alternative, the analogue of (9.18) implies that δ(x1) = ε. We iterate the argument and obtain a sequence of points (xk) such that

  uε(xk) ≥ uε(x0) + kε − η Σ_{i=1}^{∞} 2^{−i}.    (9.19)

The sequence exits 𝒜 in a finite number of steps, i. e., there exists a first point xk0 in the sequence such that xk0 ∈ Ω \ 𝒜. This follows from (9.19) and the boundedness of uε. On the other hand, since |z(x) − z(y)| ≤ |x − y| whenever the line segment [x, y] is contained in 𝒜 (see [36]), we have

  z(xk0) ≤ z(x0) + k0 ε + Cε,

where the term Cε is due to the last step being partly outside 𝒜. By this estimate, (9.15), and (9.19), we obtain

  o(1) ≥ uε(xk0) − z(xk0) ≥ uε(x0) − z(x0) − η − Cε.

This gives a contradiction with (9.14) provided that we choose η and ε small enough. We have

  uε → z  in 𝒜 ∩ D  and  uε → h  in Ω \ 𝒜.

But in 𝒜 \ D, the D-game is just a Tug-of-War game, and then uε converges to the unique solution to

  Δ∞u = 0   in 𝒜 \ D,
  u = h   on 𝜕𝒜 \ D,
  u = z   on 𝜕D ∩ 𝒜.

This ends the proof.

9.2 A free boundary problem

Now we deal with a free boundary problem in which the set where we impose the gradient constraint is given by the positivity set of the solution itself. We consider

  max{−Δ∞u, −|∇u| + χ{u>0}} = 0   in Ω ∩ {u ≥ 0},
  u = g   on 𝜕Ω.

Here g ≥ 0 is a Lipschitz continuous boundary datum, and Ω ⊂ ℝN is a bounded and smooth domain. In this context, 𝜕{u > 0} ∩ Ω is the free boundary of the problem. This problem appears as the limit as p → ∞ in

  −Δp u(x) + χ{u>0}(x) = 0   in Ω,
  u(x) = g(x)   on 𝜕Ω,    (9.20)

where Δp u = div(|∇u|^{p−2}∇u) stands for the p-Laplace operator. We refer to [24]. It is worth mentioning that the unique weak solution (cf. [41, Theorem 1.1]) to (9.20) appears when we minimize the functional

  Jp[v] = ∫_Ω ( (1/p)|∇v(x)|^p + λ0(x) v χ{v>0}(x) ) dx

over the admissible set 𝕂 = {v ∈ W^{1,p}(Ω) : v = g on 𝜕Ω}.

Theorem 9.6 ([24]). Let (up)p≥2 be the family of weak solutions to (9.20). Then, up to a subsequence, up → u∞ uniformly in Ω. Furthermore, such a limit fulfills in the viscosity sense

  max{−Δ∞u∞, −|∇u∞| + χ{u∞>0}} = 0   in Ω ∩ {u∞ ≥ 0},
  u∞ = g   on 𝜕Ω.    (9.21)

Finally, u∞ is a Lipschitz continuous function with [u∞]Lip(Ω) ≤ C(N) max{1, [g]Lip(𝜕Ω)}.

Note that (9.21) can be written in terms of a fully nonlinear second-order operator as follows:

  F∞ : ℝ × ℝN × 𝕊N → ℝ,  F∞(s, ξ, X) = max{−ξ^T X ξ, −|ξ| + χ{s>0}},

which is nondecreasing in s. Moreover, F∞ is a degenerate elliptic operator in the sense that F∞(s, ξ, X) ≤ F∞(s, ξ, Y) whenever Y ≤ X. Nevertheless, F∞ is not in the framework of [37, Theorem 3.3]. Thus, proving uniqueness of limit solutions becomes a nontrivial task, which was tackled in [24] using ideas from [63, Section 4].

Theorem 9.7 (Uniqueness [24]). There is a unique viscosity solution to (9.21). Moreover, a comparison principle holds, i. e., if g1 ≤ g2 on 𝜕Ω, then the corresponding solutions u1∞ and u2∞ verify u1∞ ≤ u2∞ in Ω.


Note that, since we have uniqueness for the limit problem, we have convergence of the whole family (up)p≥2 as p → ∞ in Theorem 9.6 (and not only convergence along a subsequence).

Our main motivation for considering (9.21) here comes from its connection to modern game theory. Now we define a variant of the Tug-of-War game, which we call Pay or Leave Tug-of-War, inspired by the one that we used in the previous section to approximate problems with a gradient constraint; see also [63]. In this game, one of the players decides either to play the usual Tug-of-War or to pass the turn to the other, who then decides to end the game immediately (and gets 0 as final payoff) or to move and pay ε (which is the step size). The value functions of this new game, namely uε, fulfill a Dynamic Programming Principle given by

  uε(x) = min{ ½ (sup_{Bε(x)} uε + inf_{Bε(x)} uε) ; max{0; sup_{Bε(x)} uε − ε} }.

Moreover, we show that the sequence uε converges and that the corresponding limit is a viscosity solution to (9.21). Therefore, besides its own interest, the game theoretic scheme provides an alternative mechanism to prove the existence of a viscosity solution to (9.21).

Theorem 9.8. Let uε be the value functions of the game previously described. Then we have

  uε → u   uniformly in Ω,

u being the unique viscosity solution to equation (9.21).

It is important to mention that we have been able to obtain a game approximation for a free boundary problem that involves the set where the solution is positive, {u > 0}. This task involves the following difficulty: if one tries to play with a rule of the form "one player sells the turn when the expected payoff is positive," then the value of the game will not be well defined, since this rule anticipates the result of the game. We overcome this difficulty by giving the other player the chance to stop the game (and obtain 0 as final payoff in this case) or buy the turn (when the first player gives this option). In this way we obtain a set of rules that are nonanticipating and give a DPP that can be seen as a discretization of the limit PDE.
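The DPP displayed above can be iterated to a fixed point on a grid, and the free boundary 𝜕{u > 0} then appears directly in the discrete solution. In the one-dimensional sketch below, Ω, the boundary payoffs, and the grid are illustrative assumptions:

```python
import numpy as np

def pay_or_leave_dpp(eps=0.1, gl=0.0, gr=0.3, max_iter=10_000, tol=1e-12):
    """Iterate u <- min{ (sup u + inf u)/2 ; max{0; sup u - eps} } on a 1-D
    grid for Omega = (0, 1) with boundary payoffs g(0) = gl, g(1) = gr.
    On the grid, sup/inf over the eps-ball reduce to a node and its neighbors."""
    x = np.linspace(0.0, 1.0, round(1 / eps) + 1)
    u = np.full_like(x, max(gl, gr))          # start from a supersolution
    u[0], u[-1] = gl, gr
    for _ in range(max_iter):
        sup = np.maximum.reduce([u[:-2], u[1:-1], u[2:]])
        inf = np.minimum.reduce([u[:-2], u[1:-1], u[2:]])
        new = np.minimum(0.5 * (sup + inf), np.maximum(0.0, sup - eps))
        done = np.max(np.abs(new - u[1:-1])) < tol
        u[1:-1] = new
        if done:
            break
    return x, u

x, u = pay_or_leave_dpp()
print(np.round(u, 3))   # zero up to the free boundary, then slope ~1
```

With these choices the discrete solution vanishes on roughly (0, 0.7) and then grows with slope about 1, so the free boundary sits where the solution starts to be positive, consistently with (9.21).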

9.2.1 Viscosity solutions

Here we make some comments regarding solutions to (9.21). Let us begin by stating the definition of a viscosity solution.

Definition 9.9. An upper semicontinuous (resp. lower semicontinuous) function u : Ω → ℝ is a viscosity subsolution (resp. supersolution) to (9.21) in Ω if, whenever x0 ∈ Ω and φ ∈ C²(Ω) are such that u − φ has a strict local maximum (resp. minimum) at x0, we have

  max{−Δ∞φ(x0), χ{u>0}(x0) − |∇φ(x0)|} ≤ 0,    (9.22)

resp.

  max{−Δ∞φ(x0), χ{u≥0}(x0) − |∇φ(x0)|} ≥ 0.    (9.23)

Finally, a continuous function u : Ω → ℝ is a viscosity solution to (9.21) in Ω if it is both a viscosity subsolution and a viscosity supersolution.

We remark that, since (9.22) does not depend on φ(x0), we can assume that φ satisfies u(x0) = φ(x0) and u(x) < φ(x) when x ≠ x0. Analogously, in (9.23) we can assume that u(x0) = φ(x0) and u(x) > φ(x) when x ≠ x0. Also we remark that (9.22) is equivalent to

  −Δ∞φ(x0) ≤ 0  and  −|∇φ(x0)| + χ{u>0}(x0) ≤ 0,

and that (9.23) is equivalent to

  −Δ∞φ(x0) ≥ 0  or  −|∇φ(x0)| + χ{u≥0}(x0) ≥ 0.

The strict and nonstrict inequalities in the terms {u > 0} and {u ≥ 0}, respectively, are given by the upper and lower envelopes of the operators; see Appendix A. Note that here we are using the sets {u ≥ 0} and {u > 0} instead of the sets that correspond to the test function, {φ ≥ 0} and {φ > 0}. The function obtained as a limit of the game values satisfies the definition as stated here. We refer to [24] for extra details regarding the definition of a viscosity solution. This definition is also the one used when proving the convergence of the p-Laplacian approximations up, and, moreover, uniqueness holds for the equation with this definition.

9.2.2 Pay or Leave Tug-of-War

In this section let us describe the two-player zero-sum game that we call Pay or Leave Tug-of-War (a variant of the Tug-of-War games considered in previous chapters). Let Ω be a bounded open set, and let ε > 0. A token is placed at x0 ∈ Ω. Player II, the player seeking to minimize the final payoff, can either pass the turn to Player I or decide to toss a fair coin and play Tug-of-War. In this case, the winner of the coin toss gets to move the token to any x1 ∈ Bε(x0). If Player II passes the turn to Player I, then she can either move the game token to any x1 ∈ Bε(x0) at the price −ε or decide to end


the game immediately with no payoff for either of the players. After the first round, the game continues from x1 according to the same rules.

This procedure yields a possibly infinite sequence of game states x0, x1, . . ., where every xk is a random variable. If the game is not ended by the rules described above, the game ends when the token leaves Ω. We denote by xτ ∈ ℝN the final position of the token. At this time the game ends with the terminal payoff given by g(xτ), where g : ℝN \ Ω → ℝ is a given Lipschitz continuous payoff function. Player I earns g(xτ), while Player II earns −g(xτ).

A strategy SI for Player I is a function defined on the partial histories that gives the next game position SI(x0, x1, . . . , xk) = xk+1 ∈ Bε(xk) if Player I gets to move the token. Similarly, Player II plays according to a strategy SII. In addition, we define a decision variable for Player II, which tells us when Player II decides to pass a turn, i. e.,

  θII(x0, . . . , xk) = 1 if Player II passes the turn,
  θII(x0, . . . , xk) = 0 otherwise,

and one for Player I, which tells us when Player I decides to end the game immediately, i. e.,

  θI(x0, . . . , xk) = 1 if Player I ends the game,
  θI(x0, . . . , xk) = 0 otherwise.

Given the sequence x0, . . . , xk with xk ∈ Ω, the game ends immediately when θI(x0, . . . , xk) = θII(x0, . . . , xk) = 1. Otherwise, the one-step transition probabilities will be

  πSI,SII,θI,θII(x0, . . . , xk, A) = (1 − θII(x0, . . . , xk)) · ½ (δSI(x0,...,xk)(A) + δSII(x0,...,xk)(A)) + θII(x0, . . . , xk)(1 − θI(x0, . . . , xk)) δSI(x0,...,xk)(A).

By using Kolmogorov's extension theorem and the one-step transition probabilities, we can build a probability measure ℙ^{x0}_{SI,SII,θI,θII} on the game sequences. We denote by 𝔼^{x0}_{SI,SII,θI,θII} the corresponding expectation.

The value of the game for Player I is given by

  uεI(x0) = sup_{SI,θI} inf_{SII,θII} 𝔼^{x0}_{SI,SII,θI,θII} [ g(xτ) − ε Σ_{i=0}^{τ−1} θII(x0, . . . , xi)(1 − θI(x0, . . . , xi)) ],

while the value of the game for Player II is given by

  uεII(x0) = inf_{SII,θII} sup_{SI,θI} 𝔼^{x0}_{SI,SII,θI,θII} [ g(xτ) − ε Σ_{i=0}^{τ−1} θII(x0, . . . , xi)(1 − θI(x0, . . . , xi)) ],

where g is extended as g ≡ 0 in Ω.

Observe that if the game does not end almost surely, then the expectation is not well defined. In this case, we define 𝔼^{x0}_{SI,SII,θI,θII} to take the value −∞ when evaluating uεI(x0) and +∞ when evaluating uεII(x0). As usual, if uεI = uεII, we say that the game has a value. We denote the game value as uε := uεI = uεII.

9.2.3 Dynamic Programming Principle

Here we establish the DPP for the game we are considering.

Theorem 9.10. The game has a value uε := uεI = uεII, and it satisfies

  u(x) = min{ ½ (sup_{Bε(x)} u + inf_{Bε(x)} u) ; max{0; sup_{Bε(x)} u − ε} }

for x ∈ Ω and u(x) = g(x) in ℝN \ Ω.

Let us see intuitively why this holds. At each step, with the token at a given x ∈ Ω, Player II chooses whether to play Tug-of-War or to pass the turn to Player I. In the first case, with probability ½, Player I gets to move and will try to maximize the expected outcome, and with probability ½, Player II gets to move and will try to minimize the expected outcome. In this case the expected payoff will be

  ½ sup_{Bε(x)} u + ½ inf_{Bε(x)} u.

On the other hand, if Player II passes the turn to Player I, she will have two options: to end the game immediately, obtaining 0, or to move, paying ε, trying to maximize the expected outcome. Player I will prefer the option that gives the greater payoff, that is, the expected payoff is given by

  max{0; sup_{Bε(x)} u − ε}.

Finally, Player II will decide between the two possible payoffs mentioned here, preferring the one with the minimum payoff.

We refer to [24] for the proof of the theorem. The existence of a solution to the DPP is obtained there following the ideas of the second proof of Theorem 3.2, by constructing a sequence of functions that converges uniformly to a solution of the DPP. Then the result is obtained following the same ideas employed in the last section. Also, similar ideas were used in the proof of Theorem 4.1.

9.2.4 Game value convergence

In this subsection we study the behavior of the game values as ε → 0. We want to prove that uε → u

9.2 A free boundary problem | 187

uniformly on Ω as ε → 0 and that u is a viscosity solution to

  max{−Δ_∞ u, χ_{u>0} − |∇u|} = 0   in Ω,
  u(x) = g(x)                       on ∂Ω.      (9.24)

To this end, we would like to apply one more time the Arzelà–Ascoli type lemma. So our task now is to show that the family uε satisfies the hypotheses of the previous lemma. In the next lemma, we prove that the family is asymptotically uniformly continuous, that is, it satisfies condition 2 of Lemma 3.6. To do that we follow the same plan employed in the last section to obtain the corresponding result; see also [63].

Lemma 9.11. The family uε is asymptotically uniformly continuous.

Proof. We prove the required oscillation estimate by arguing by contradiction. We define

  A(x) ≡ sup_{Bε(x)} uε − inf_{Bε(x)} uε.

We claim that A(x) ≤ 4 max{Lip(g); 1}ε for all x ∈ Ω. Aiming for a contradiction, suppose that there exists x0 ∈ Ω such that A(x0) > 4 max{Lip(g); 1}ε. In this case, we have

  uε(x0) = min{ ½ (sup_{Bε(x0)} uε + inf_{Bε(x0)} uε) ; max{0; sup_{Bε(x0)} uε − ε} }
         = ½ (sup_{Bε(x0)} uε + inf_{Bε(x0)} uε).      (9.25)

The reason is that the alternative

  ½ (sup_{Bε(x0)} uε + inf_{Bε(x0)} uε) > max{0; sup_{Bε(x0)} uε − ε} ≥ sup_{Bε(x0)} uε − ε

would imply

  A(x0) = sup_{Bε(x0)} uε − inf_{Bε(x0)} uε < 2ε,      (9.26)

which is a contradiction with A(x0) > 4 max{Lip(g); 1}ε. It follows from (9.25) that

  sup_{Bε(x0)} uε − uε(x0) = uε(x0) − inf_{Bε(x0)} uε = ½ A(x0).

Let η > 0 and take x1 ∈ Bε(x0) such that

  uε(x1) ≥ sup_{Bε(x0)} uε − η/2.

We obtain

  uε(x1) − uε(x0) ≥ ½ A(x0) − η/2 ≥ 2 max{Lip(g); 1}ε − η/2,      (9.27)

and, since x0 ∈ Bε(x1), also

  sup_{Bε(x1)} uε − inf_{Bε(x1)} uε ≥ 2 max{Lip(g); 1}ε − η/2.

Arguing as before, (9.25) also holds at x1, since otherwise the above inequality would lead to a contradiction similarly to (9.26) for small enough η. Thus, (9.27) and (9.25) imply

  sup_{Bε(x1)} uε − uε(x1) = uε(x1) − inf_{Bε(x1)} uε ≥ 2 max{Lip(g); 1}ε − η/2,

so that

  A(x1) = sup_{Bε(x1)} uε − uε(x1) + uε(x1) − inf_{Bε(x1)} uε ≥ 4 max{Lip(g); 1}ε − η.

Iterating this procedure, we obtain x_i ∈ Bε(x_{i−1}) such that

  uε(x_i) − uε(x_{i−1}) ≥ 2 max{Lip(g); 1}ε − η/2^i      (9.28)

and

  A(x_i) ≥ 4 max{Lip(g); 1}ε − ∑_{j=0}^{i−1} η/2^j.      (9.29)

We can proceed with an analogous argument, considering points where the infimum is nearly attained, to obtain x_{−1}, x_{−2}, … such that x_{−i} ∈ Bε(x_{−(i−1)}) and (9.28) and (9.29) hold. Since uε is bounded, there must exist k and l such that x_k, x_{−l} ∈ Γε, and we have

  |g(x_k) − g(x_{−l})| / |x_k − x_{−l}| ≥ [ ∑_{j=−l+1}^{k} (uε(x_j) − uε(x_{j−1})) ] / (ε(k + l)) ≥ 2 max{Lip(g); 1} − 2η/ε,

a contradiction. Therefore A(x) ≤ 4 max{Lip(g); 1}ε for every x ∈ Ω.


Lemma 9.12. Let uε be a family of game values for a Lipschitz continuous boundary datum g. Then there exists a Lipschitz continuous function u such that, up to selecting a subsequence,

  uε → u   uniformly in Ω

as ε → 0.

Proof. By choosing always to play Tug-of-War and moving with any strategy that ends the game almost surely (such as pulling in a fixed direction), Player II can ensure that the final payoff is at most max_{Γε} g. Similarly, by ending the game immediately if given the option, and moving with any strategy that ends the game almost surely when playing Tug-of-War, Player I can ensure that the final payoff is at least min{0, min_{Γε} g}. We have

  min{0, min_{Γε} g} ≤ uε ≤ max_{Γε} g.

This, together with Lemma 9.11, shows that the family uε satisfies the hypothesis of Lemma 3.6.

Theorem 9.13. The function u obtained as a limit in Lemma 9.12 is a viscosity solution to (9.24).

Proof. First, we observe that u = g on ∂Ω due to uε = g on ∂Ω for all ε > 0. Hence, we can focus our attention on showing that u satisfies the equation inside Ω in the viscosity sense. To this end, we obtain the following asymptotic expansions, as in [84]. Choose a point x ∈ Ω and a C² function ψ defined in a neighborhood of x. Note that since ψ is continuous, we have

  min_{Bε(x)} ψ = inf_{Bε(x)} ψ   and   max_{Bε(x)} ψ = sup_{Bε(x)} ψ

for all x ∈ Ω. Let x_1^ε and x_2^ε be a minimum point and a maximum point, respectively, for ψ in Bε(x). It follows from the Taylor expansions in [84] that

  ½ ( max_{y∈Bε(x)} ψ(y) + min_{y∈Bε(x)} ψ(y) ) − ψ(x) ≥ (ε²/2) ⟨D²ψ(x)((x_1^ε − x)/ε), ((x_1^ε − x)/ε)⟩ + o(ε²)      (9.30)

and

  max_{y∈Bε(x)} ψ(y) − ε − ψ(x) ≥ (Dψ(x) · ((x_2^ε − x)/ε) − 1)ε + (ε²/2) ⟨D²ψ(x)((x_2^ε − x)/ε), ((x_2^ε − x)/ε)⟩ + o(ε²).      (9.31)

Suppose that u − ψ has a strict local minimum at x. We want to prove that

  max{−Δ_∞ ψ(x), χ_{u≥0}(x) − |∇ψ(x)|} ≥ 0.

If ∇ψ(x) = 0, we have −Δ_∞ ψ(x) = 0 and hence the inequality holds. We can therefore assume ∇ψ(x) ≠ 0. By the uniform convergence, there exists a sequence xε converging to x such that uε − ψ has an approximate minimum at xε, that is, given ηε > 0,

  uε(x) − ψ(x) ≥ uε(xε) − ψ(xε) − ηε.

Moreover, considering ψ̃ = ψ + uε(xε) − ψ(xε), we can assume that ψ(xε) = uε(xε).

If u(x) < 0, we have to show that −Δ_∞ ψ(x) ≥ 0. Since u is continuous and uε converges uniformly, we can assume that uε(xε) < 0. Thus, by recalling the fact that uε satisfies the DPP (Theorem 9.10) and observing that

  max{0; sup_{Bε(xε)} uε − ε} ≥ 0,

we conclude that

  uε(xε) = ½ (sup_{Bε(xε)} uε + inf_{Bε(xε)} uε).

We obtain

  ηε ≥ −ψ(xε) + ½ (max_{Bε(xε)} ψ + min_{Bε(xε)} ψ),

and thus, by (9.30) and choosing ηε = o(ε²), we have

  0 ≥ (ε²/2) ⟨D²ψ(xε)((x_1^ε − xε)/ε), ((x_1^ε − xε)/ε)⟩ + o(ε²).

Next, we observe that, as in Chapter 4, we have

  ⟨D²ψ(xε)((x_1^ε − xε)/ε), ((x_1^ε − xε)/ε)⟩ → Δ_∞ ψ(x)

since ∇ψ(x) ≠ 0. If u(x) ≥ 0, we have to show that

  max{−Δ_∞ ψ(x), 1 − |∇ψ(x)|} ≥ 0.


As above, by (9.30) and (9.31), we obtain

  0 ≥ min{ (ε²/2) ⟨D²ψ(x)((x_1^ε − x)/ε), ((x_1^ε − x)/ε)⟩ + o(ε²) ;
           max{ o(ε²) − ψ(x) ;
                (Dψ(x) · ((x_2^ε − x)/ε) − 1)ε + (ε²/2) ⟨D²ψ(x)((x_2^ε − x)/ε), ((x_2^ε − x)/ε)⟩ + o(ε²) } },

and hence we conclude

  Δ_∞ ψ(x) ≤ 0   or   |∇ψ(x)| − 1 ≤ 0,

as desired. We have shown that u is a supersolution to our equation. Similarly we obtain the subsolution counterpart. Let us remark, as part of those computations, that when uε(x) > 0 the DPP implies

  max{0; sup_{y∈Bε(x)} uε(y) − ε} > 0

and hence

  sup_{y∈Bε(x)} uε(y) − ε > 0.

Then in this case we have

  uε(x) = min{ ½ (sup_{Bε(x)} uε + inf_{Bε(x)} uε) ; sup_{y∈Bε(x)} uε(y) − ε }.

Finally, since uniqueness holds for the limit equation, we conclude that the convergence of uε as ε → 0 holds not only along subsequences. This ends the proof of Theorem 9.8.
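As a complement, the DPP of Theorem 9.10 can be approximated numerically by iterating the min/max operator on a grid until it stabilizes. The following one-dimensional sketch is only illustrative (the grid, the datum g, and the iteration count are our own choices, not taken from the text); the iterates stay between the barriers min{0, min g} and max g used in Lemma 9.12.

```python
import numpy as np

def dpp_step(u, interior, r, eps):
    """One application of the DPP operator
       u(x) = min{ (sup_B u + inf_B u)/2 ; max{0, sup_B u - eps} }
       over discrete balls of radius r grid points."""
    out = u.copy()
    for i in np.where(interior)[0]:
        w = u[max(0, i - r): i + r + 1]
        sup, inf = w.max(), w.min()
        out[i] = min(0.5 * (sup + inf), max(0.0, sup - eps))
    return out

# Illustrative 1D setting: Omega = (-1, 1) plus a boundary strip of width eps.
eps, h = 0.25, 0.05
x = np.arange(-1.0 - eps, 1.0 + eps + h / 2, h)
interior = (x > -1.0) & (x < 1.0)
g = np.maximum(x, 0.0)             # a Lipschitz datum with g >= 0
u = np.where(interior, 0.0, g)     # start from min{0, min g} = 0 inside Omega
r = int(round(eps / h))            # radius of B_eps in grid points

for _ in range(2000):              # iterate towards a fixed point of the DPP
    u = dpp_step(u, interior, r, eps)
```

Since the operator is monotone and maps this starting function above itself, the iterates form a nondecreasing bounded sequence and converge pointwise on the grid, mirroring the constructive existence argument for the DPP referred to above.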

9.3 Comments

Here we presented results obtained in [63] and [24], which are the main references for this section. Game theoretical arguments were used in [74] to obtain qualitative properties of solutions to geometric free boundary evolution problems. We also refer to [66] for a game for the mean curvature flow.

A Viscosity solutions

A.1 Basic definitions

In this appendix we give a brief introduction to the theory of viscosity solutions. We base the presentation on the introductory text [64] and the classical reference [37]. Viscosity solutions were first introduced in the 1980s by Crandall and Lions [38]. The term "viscosity solution" originates from the "vanishing viscosity method" used to deal with first-order Hamilton–Jacobi equations, but the notion is not confined to this kind of equation: it is a very powerful tool for second-order, fully nonlinear elliptic PDEs. Viscosity solutions constitute a general theory of "weak" (i. e., nondifferentiable) solutions which applies to certain fully nonlinear PDEs of first and second order.

A.1.1 Elliptic problems

Consider the PDE

  F(⋅, u, Du, D²u) = 0,

where F : Ω × ℝ × ℝ^N × 𝕊^N → ℝ and 𝕊^N denotes the set of symmetric N × N matrices. The idea behind viscosity solutions is to use the maximum principle in order to "pass derivatives to smooth test functions." This idea allows us to consider operators in nondivergence form. We will assume that F is degenerate elliptic, that is, F satisfies a monotonicity property with respect to the matrix variable:

  X ≤ Y in 𝕊^N ⟹ F(x, r, p, X) ≥ F(x, r, p, Y)

for all (x, r, p) ∈ Ω × ℝ × ℝ^N.

Now, let us motivate the definition of viscosity solution. Suppose that u ∈ C²(Ω) is a classical solution to the PDE

  F(x, u(x), Du(x), D²u(x)) = 0,   x ∈ Ω.

Assume further that at some x0 ∈ Ω, u can be "touched from above" (see Figure A.1) by some smooth function ψ ∈ C²(ℝ^N) at x0. That is,

  (ψ − u)(x) ≥ 0 = (ψ − u)(x0)


Figure A.1: ψ touches u from above.

on a ball B_r(x0). Since ψ − u is smooth and attains a minimum at x0, we have

  D(ψ − u)(x0) = 0   and   D²(ψ − u)(x0) ≥ 0,

that is, Dψ(x0) = Du(x0) and D²u(x0) ≤ D²ψ(x0). By using that u is a solution and the ellipticity of F, we obtain

  0 = F(x0, u(x0), Du(x0), D²u(x0)) ≥ F(x0, ψ(x0), Dψ(x0), D²ψ(x0)).

We have proved that if u is a solution to the equation and ψ "touches u from above," then

  0 ≥ F(x0, ψ(x0), Dψ(x0), D²ψ(x0)).

Analogously, it can be seen that if ϕ "touches u from below," then

  0 ≤ F(x0, ϕ(x0), Dϕ(x0), D²ϕ(x0)).

Now, with this result in mind, we are ready to give the definition of viscosity solution to the equation

  F(⋅, u, Du, D²u) = 0.      (A.1)

Definition A.1. A lower semicontinuous function u is a viscosity supersolution of (A.1) if, for every ϕ ∈ C² such that ϕ touches u at x ∈ Ω strictly from below (that is, u − ϕ has a strict minimum at x with u(x) = ϕ(x)), we have

  F(x, ϕ(x), Dϕ(x), D²ϕ(x)) ≥ 0.


An upper semicontinuous function u is a subsolution of (A.1) if, for every ψ ∈ C² such that ψ touches u at x ∈ Ω strictly from above (that is, u − ψ has a strict maximum at x with u(x) = ψ(x)), we have

  F(x, ψ(x), Dψ(x), D²ψ(x)) ≤ 0.

Finally, u is a viscosity solution of (A.1) if it is both a sub- and a supersolution.

Observe that we have required u − ϕ to have a strict minimum. We have done this since in general this is the definition that we use along this book. If we only require the difference to have a minimum, we obtain an equivalent definition.

In general we assume that F is continuous, that is, for sequences x_k → x in Ω, r_k → r in ℝ, p_k → p in ℝ^N, and X_k → X in 𝕊^N, we have F(x_k, r_k, p_k, X_k) → F(x, r, p, X) as k → ∞. However, we are interested in operators, such as the homogeneous p-Laplacian and the ∞-Laplacian, which are not defined where the gradient vanishes. In order to handle these cases, we need to consider the lower semicontinuous, F_*, and upper semicontinuous, F^*, envelopes of F. These functions are given by

  F^*(x, r, p, X) = lim sup_{(y,s,w,Y)→(x,r,p,X)} F(y, s, w, Y),
  F_*(x, r, p, X) = lim inf_{(y,s,w,Y)→(x,r,p,X)} F(y, s, w, Y).

These functions coincide with F at every point of continuity of F; F^* is upper semicontinuous and F_* is lower semicontinuous.

Definition A.2. A lower semicontinuous function u is a viscosity supersolution of (A.1) if, for every ϕ ∈ C² such that ϕ touches u at x ∈ Ω strictly from below (that is, u − ϕ has a strict minimum at x with u(x) = ϕ(x)), we have

  F^*(x, ϕ(x), Dϕ(x), D²ϕ(x)) ≥ 0.

An upper semicontinuous function u is a subsolution of (A.1) if, for every ψ ∈ C² such that ψ touches u at x ∈ Ω strictly from above (that is, u − ψ has a strict maximum at x with u(x) = ψ(x)), we have

  F_*(x, ψ(x), Dψ(x), D²ψ(x)) ≤ 0.

Finally, u is a viscosity solution of (A.1) if it is both a sub- and a supersolution.

Here we have required supersolutions to be lower semicontinuous and subsolutions to be upper semicontinuous.
To extend this concept we consider the lower semicontinuous envelope, u∗ , and the upper semicontinuous envelope, u∗ , of u, that is, u∗ (x) = sup inf u(y) and u∗ (x) = inf sup u(y). r>0 y∈Br (x)

r>0 y∈B (x) r

196 | A Viscosity solutions As stated before for F, these functions coincide with u at every point of continuity of u and are lower and upper semicontinuous, respectively. Now we give a more general definition of viscosity solution involving these functions. Definition A.3. A function u is a viscosity supersolution of (A.1) if, for every ϕ ∈ C 2 such that ϕ touches u∗ at x ∈ Ω strictly from below (that is, u∗ −ϕ has a strict minimum at x with u∗ (x) = ϕ(x)), we have F ∗ (x, ϕ(x), Dϕ(x), D2 ϕ(x)) ≥ 0. A function u is a subsolution of (A.1) if, for every ψ ∈ C 2 such that ψ touches u∗ at x ∈ Ω strictly from above (that is, u∗ − ψ has a strict maximum at x with u∗ (x) = ψ(x)), we have F∗ (x, ϕ(x), Dϕ(x), D2 ϕ(x)) ≤ 0. Finally, u is a viscosity solution of (A.1) if it is both a sub- and a supersolution. The definitions given above will be considered depending on the context, i. e., depending on whether we are considering a continuous F or not, if u is continuous or not, etc. Another possible way to state the definition of viscosity solution is to define the superjets and subjets, which play the role of the derivatives of u, and later give the definition of viscosity solution referring to them.

A.1.2 Parabolic problems Let T > 0, let Ω ⊂ ℝN be an open set, and let ΩT = Ω × (0, T) be a space-time cylinder with the parabolic boundary 𝜕p ΩT = {𝜕Ω × [0, T]} ∪ {Ω × {0}}. Now we deal with ut (x, t) = F(x, t, u(x, t), Du(x, t), D2 u(x, t)) in ΩT . Here F is an elliptic operator as the ones considered before in this appendix. We recall now the definition of viscosity solution, and we refer the reader to the references [34], [45] and the monograph [42] for extra details. Definition A.4. A function u : ΩT → ℝ is a viscosity solution if u is continuous and whenever (x0 , t0 ) ∈ ΩT and ϕ ∈ C 2 (ΩT ) is such that

A.1 Basic definitions |

197

(i) u(x0 , t0 ) = ϕ(x0 , t0 ), (ii) u(x, t) > ϕ(x, t) for (x, t) ∈ ΩT , (x, t) ≠ (x0 , t0 ), then we have at the point (x0 , t0 ) ϕt (x0 , t0 ) ≥ F∗ (x0 , t0 , ϕ(x0 , t0 ), Dϕ(x0 , t0 ), D2 ϕ(x0 , t0 )). Moreover, we require that when touching u with a test function from above all inequalities are reversed and F∗ is replaced by F ∗ . For example, in Chapter 8 we considered the PDE (N + p)ut (x, t) = (p − 2)ΔH ∞ u(x, t) + Δu(x, t).

(A.2)

In this case, the previous definition imposes if ∇ϕ(x0 , t0 ) ≠ 0, (N + p)ϕt (x0 , t0 ) ≥ (p − 2)ΔH ∞ ϕ(x0 , t0 ) + Δϕ(x0 , t0 ) { (N + p)ϕt (x0 , t0 ) ≥ λmin ((p − 2)D2 ϕ(x0 , t0 )) + Δϕ(x0 , t0 ) if ∇ϕ(x0 , t0 ) = 0. When touching u with a test function from above, all inequalities are reversed and λmin ((p − 2)D2 ϕ(x0 , t0 )) is replaced by λmax ((p − 2)D2 ϕ(x0 , t0 )). It is useful to observe that we can further reduce the number of test functions in the definition of a viscosity solution. Indeed, if the gradient of a test function vanishes we may assume that D2 ϕ = 0, and thus λmax = λmin = 0. Nothing is required if ∇ϕ = 0 and D2 ϕ ≠ 0. The proof follows the ideas in [45]; see also [34] and Lemma 3.2 in [59] for p = ∞. For the convenience of the reader we provide the details. Lemma A.5. A function u : ΩT → ℝ is a viscosity solution to (A.2) if u is continuous and whenever (x0 , t0 ) ∈ ΩT and ϕ ∈ C 2 (ΩT ) is such that (i) u(x0 , t0 ) = ϕ(x0 , t0 ), (ii) u(x, t) > ϕ(x, t) for (x, t) ∈ ΩT , (x, t) ≠ (x0 , t0 ), then at the point (x0 , t0 ) we have {

(N + p)ϕt ≥ (p − 2)ΔH ∞ ϕ + Δϕ ϕt (x0 , t0 ) ≥ 0

if ∇ϕ(x0 , t0 ) ≠ 0,

if ∇ϕ(x0 , t0 ) = 0 and D2 ϕ(x0 , t0 ) = 0.

Moreover, we require that when testing from above all inequalities are reversed. Proof. The proof is by contradiction: We assume that u satisfies the conditions in the statement but still fails to be a viscosity solution in the sense of Definition A.4. If this is the case, we must have ϕ ∈ C 2 (ΩT ) and (x0 , t0 ) ∈ ΩT such that (i) u(x0 , t0 ) = ϕ(x0 , t0 ), (ii) u(x, t) > ϕ(x, t) for (x, t) ∈ ΩT , (x, t) ≠ (x0 , t0 ),

198 | A Viscosity solutions for which ∇ϕ(x0 , t0 ) = 0, D2 ϕ(x0 , t0 ) ≠ 0, and (N + p)ϕt (x0 , t0 ) < λmin ((p − 2)D2 ϕ(x0 , t0 )) + Δϕ(x0 , t0 ),

(A.3)

or the analogous inequality when testing from above (in this case the argument is symmetric and we omit it). Let wj (x, t, y, s) = u(x, t) − (ϕ(y, s) −

j j |x − y|4 − |t − s|2 ) 4 2

and denote by (xj , tj , yj , sj ) the minimum point of wj in ΩT × ΩT . Since (x0 , t0 ) is a local minimum for u − ϕ, we may assume that (xj , tj , yj , sj ) → (x0 , t0 , x0 , t0 ),

as j → ∞

and (xj , tj ) , (yj , sj ) ∈ ΩT for all large j, similarly to [59]. We consider two cases: either xj = yj infinitely often or xj ≠ yj for all j large enough. First, let xj = yj , and denote φ(y, s) =

j j |x − y|4 + (tj − s)2 . 4 j 2

Then ϕ(y, s) − φ(y, s) has a local maximum at (yj , sj ). By (A.3) and continuity of (x, t) 󳨃→ λmin ((p − 2)D2 ϕ(x, t)) + Δϕ(x, t), we have (N + p)ϕt (yj , sj ) < λmin ((p − 2)D2 ϕ(yj , sj )) + Δϕ(yj , sj ) for j large enough. As ϕt (yj , sj ) = φt (yj , sj ) and D2 ϕ(yj , sj ) ≤ D2 φ(yj , sj ), we have by the previous inequality 0 < −(N + p)φt (yj , sj ) + λmin ((p − 2)D2 φ(yj , sj )) + Δφ(yj , sj ) = −(N + p)j(tj − sj ),

where we also used the fact that yj = xj and thus D2 φ(yj , sj ) = 0. Next denote j j ψ(x, t) = − |x − yj |4 − (t − sj )2 . 4 2

(A.4)

A.1 Basic definitions |

199

Similarly, u(x, t) − ψ(x, t) has a local minimum at (xj , tj ), and thus since D2 ψ(xj , tj ) = 0, our assumptions imply 0 ≤ (N + p)ψt (xj , tj ) = (N + p)j(tj − sj ),

(A.5)

for j large enough. Summing up (A.4) and (A.5), we get 0 < −(N + p)j(tj − sj ) + (N + p)j(tj − sj ) = 0, a contradiction. Next we consider the case yj ≠ xj . For the following notation, we refer to [37], [94], and [62]. We also use the parabolic theorem of sums for wj , which implies that there exist symmetric matrices Xj , Yj such that Xj − Yj is positive semidefinite and (j(tj − sj ), j|xj − yj |2 (xj − yj ), Yj ) ∈ 𝒫 2

(j(tj − sj ), j|xj − yj | (xj − yj ), Xj ) ∈ 𝒫

2,+ 2,−

ϕ(yj , sj ), u(xj , tj ).

Using (A.3) and the assumptions on u, we get 0 = (N + p)j(tj − sj ) − (N + p)j(tj − sj ) < (p − 2)⟨Yj

(xj − yj ) (xj − yj ) , ⟩ + tr(Yj ) |xj − yj | |xj − yj |

− (p − 2)⟨Xj

(xj − yj ) (xj − yj ) , ⟩ − tr(Xj ) |xj − yj | |xj − yj |

= (p − 2)⟨(Yj − Xj )

(xj − yj ) (xj − yj ) , ⟩ + tr(Yj − Xj ) |xj − yj | |xj − yj |

≤ 0, because Yj − Xj is negative semidefinite. If 1 < p < 2, the last inequality follows from the calculation (p − 2)⟨(Yj − Xj )

(xj − yj ) (xj − yj ) , ⟩ + tr(Yj − Xj ) |xj − yj | |xj − yj | n

≤ (p − 2)λmin + ∑ λi i=1

= (p − 1)λmin + ∑ λi λi =λ̸ min

≤ 0, where λi , λmin , and λmax denote the eigenvalues of Yj − Xj . This provides the desired contradiction.

200 | A Viscosity solutions A.1.3 Dirichlet boundary conditions For a bounded domain Ω ⊂ ℝN , we consider the Dirichlet problem F(⋅, u, Du, D2 u) = 0 in Ω, { u=g on 𝜕Ω, where g is a continuous boundary condition. Here we impose, on top of the previous inequalities for a test function that touches u from above or below, that u ≥ g pointwise on 𝜕Ω for supersolutions and u ≤ g for subsolutions (assuming that u and g are continuous). An analogous definition can be given for parabolic problems. Here we take a boundary datum g on 𝜕Ω × (0, T) and an initial condition in Ω × {t = 0}. In this case we add to the previous inequalities that u(x, 0) ≥ u0 (x) for supersolutions and u(x, 0) ≤ u0 (x) for subsolutions (we are assuming here that u, g, and u0 are continuous). In what remains of this appendix we will comment on the question of existence and uniqueness of solutions for the Dirichlet problem.

A.2 Uniqueness In this section we address the question of uniqueness of solutions of the Dirichlet problem. Uniqueness can be obtained as an immediate consequence of the comparison principle for solutions to the equation. Let us start by giving a proof of comparison for smooth viscosity solutions. We will assume that F is degenerate elliptic and proper, that is, a strict monotonicity in the u variable is assumed, r < s in ℝ 󳨐⇒ F(x, r, p, X) < F(x, s, p, X) for all (x, p, X) ∈ Ω × ℝN × 𝕊N . Our goal is to show that if u ∈ C 2 (Ω) ∩ C(Ω)̄ is a subsolution and v ∈ C 2 (Ω) ∩ C(Ω)̄ is a supersolution such that u ≤ v on 𝜕Ω, then u ≤ v in Ω. Observe that if v and u are smooth, then we can use them in the definition of super- and subsolution as tests functions. We obtain F(x, u, Du, D2 u) ≤ 0 ≤ F(x, v, Dv, D2 v). Suppose, arguing by contradiction, that u > v somewhere in Ω. Then, since u ≤ v on 𝜕Ω, there exists x0 ∈ Ω such that (u − v)(x0 ) ≥ (u − v)(x), in Ω.

A.3 Existence | 201

Hence D(u − v)(x0 ) = 0 and D2 (u − v)(x0 ) ≤ 0. We have u(x0 ) > v(x0 ), Du(x0 ) = Du(x0 ), and D2 u(x0 ) ≤ D2 v(x0 ). By our assumptions on F, we have F(x0 , u(x0 ), Du(x0 ), D2 u(x0 )) ≥ F(x0 , u(x0 ), Du(x0 ), D2 v(x0 )) ≥ F(x0 , u(x0 ), Dv(x0 ), D2 v(x0 ))

> F(x0 , v(x0 ), Dv(x0 ), D2 v(x0 )), which is a contradiction since u is a subsolution and v is a supersolution. We cannot apply this idea to only continuous solutions since we may not be able to touch the functions at the points of maxima of u − v. The idea to overcome this difficulty is to double the number of variables and, in the place of u − v, to consider instead the maximization of the function of two variables (x, y) → u(x) − v(y). Then we penalize the doubling of variables, in order to push the maxima to the diagonal {x = y}. The idea is to maximize the function Wα (x, y) = u(x) − v(y) −

α |x − y|2 2

and let α → +∞. We have used this idea above in the proof of Lemma A.5. In [64] a comparison principle for the equation F(u, ∇u, D2 u) = f is proved under the assumptions of F and f being continuous, F being degenerate elliptic, and F(r, p, X) ≥ F(s, p, X) + γ(s − r) for some γ > 0. Of course the result holds in greater generality. For example, let us mention the classical reference [15].

A.3 Existence In this section we include a proof of existence via Perron’s method. We assume that F is continuous and weakly proper, that is, F is degenerate elliptic and satisfies r ≥ s in ℝ 󳨐⇒ F(x, r, p, X) ≥ F(x, s, p, X) for all (x, p, X) ∈ Ω×ℝN ×𝕊N . We also assume that the equation satisfies the comparison principle. Theorem A.6. If there exist a subsolution u and a supersolution u of the Dirichlet problem such that u = u = g on 𝜕Ω, then u(x) = inf{v(x) : v is a supersolution and u ≤ v ≤ u} is a solution of the Dirichlet problem.

202 | A Viscosity solutions Proof. Being the infimum of supersolutions, the function u is a supersolution. We already know that u is upper semicontinuous, as it is the infimum of upper semicontinuous functions. Let us see it is indeed a solution. Suppose it is not. Then there exists ϕ ∈ C 2 such that ϕ touches u at x0 ∈ Ω strictly from above but F(x0 , u(x0 ), Du(x0 )u, D2 u(x0 )) > 0. Let us write 1 ϕ(x) = ϕ(x0 ) + ⟨Dϕ(x0 ); (x − x0 )⟩ + ⟨D2 ϕ(x0 )(x − x0 ), x − x0 ⟩ + o(|x − x0 |2 ). 2 ̂ We define ϕ(x) = ϕ(x) − δ for a small positive number δ. Then ϕ̂ < u in a small neighborhood of x0 , contained in the set {x : F(x, u, Du, D2 u) > 0}, but ϕ̂ ≥ u outside this neighborhood, if we take δ small enough. Now we can consider v = min{ϕ,̂ u}. Since u is a viscosity supersolution in Ω and ̂ ϕ also is a viscosity supersolution in the small neighborhood of x0 , it follows that v is a viscosity supersolution. Moreover, on 𝜕Ω, v = u ≥ g. This implies v ∈ {v(x) : v is a supersolution and u ≤ v ≤ u}, but v = ϕ̂ < u near x0 , which is a contradiction with the definition of u as the infimum of that set. Let us remark that in the same way we can prove that u(x) = max{v(x) : v is a subsolution and u ≤ v ≤ u} is a solution to the Dirichlet problem.

B Probability theory B.1 Stochastic processes In this appendix we include some definitions and the statements of some results from probability theory that are used along the book. As we will not refer to the games explicitly we use a notation slightly different to the one that we used in the game context, although the general setting will be very similar. Let Ω ⊂ ℝN be equipped with the natural topology and the σ-algebra ℬ of the Lebesgue measurable sets. Let x0 ∈ Ω be a given point. We want to define a stochastic process in Ω starting in x0 . To that end, we consider the space of all sequences H ∞ = {x0 } × Ω × Ω × ⋅ ⋅ ⋅ , which is a product space endowed with the product topology. Let {ℱk }∞ k=0 denote the filtration of σ-algebras, ℱ0 ⊂ ℱ1 ⊂ ⋅ ⋅ ⋅ , defined as follows: ℱk is the product σ-algebra generated by cylinder sets of the form {x0 } × A1 × ⋅ ⋅ ⋅ × Ak × Ω × Ω ⋅ ⋅ ⋅ with Ai ∈ ℬ(Ω). For ω = (x0 , ω1 , . . .) ∈ H ∞ , we define the coordinate processes Xk (ω) = ωk ,

Xk : H ∞ → ℝn , k = 0, 1, . . . ,

so that Xk is an ℱk -measurable random variable. Moreover, ℱ∞ = σ(⋃ ℱk ) is the smallest σ-algebra so that all Xk are ℱ∞ -measurable. Given the sequence x0 , . . . , xk with xk ∈ Ω, the next position is distributed according to the probability π(x0 , . . . , xk , A) for A ∈ ℬ(Ω). By using Kolmogorov’s extension theorem and the one-step transition probabilities, we can build a probability measure ℙ in H ∞ relative to the σ-algebra ℱ ∞ . We denote by 𝔼 the corresponding expectation with respect to ℙ. Definition B.1. We say that M = (Mk )k≥0 is a stochastic process if it is a collection of random variables such that Mk is ℱk -measurable for every k ≥ 0. The coordinate process defined above is a stochastic process. To define a process we have to specify the probability π(x0 , . . . , xk , A). In other words, given the history (x0 , . . . , xk ), we have to specify how xk+1 is chosen. Let us give two examples to which we are going to refer to illustrate the definitions and results that we are going to introduce in the next section. https://doi.org/10.1515/9783110621792-011

204 | B Probability theory Example B.2. Let us consider Ω = ℝN . Suppose that at every time a random unitary vector v is selected and then xk+1 = xk + v with probability 21 or xk+1 = xk − v with probability 21 . Then, given a fixed y ∈ ℝN , we can consider Mk = ‖xk − y‖2 . Observe that Mk depends only on (x0 , x1 , . . . , xk ) and hence it is ℱk -measurable, and it is a stochastic process. Example B.3. Suppose that you are playing roulette (without the zero) in a casino, starting with x0 = 0 dollar (you are allow to get credit to play). At every round you bet a certain amount of money (which may depend on the result of the previous rounds). If you start the round with Xk dollar and you bet vk , then Xk+1 = Xk +vk with probability 1 and Xk+1 = Xk − vk with probability 21 . In our setting, we can consider Ω = ℤ to model 2 this situation.

B.2 Optional stopping theorem Definition B.4. A stopping time with respect to the filtration {ℱk }∞ k=0 is a random variable τ : Ω → ℕ ∪ {+∞} such that {τ ≤ k} ∈ ℱk for all k ∈ ℕ. In particular we will be interested in hitting times. Suppose Γ ⊂ Ω is a given set. To denote the time when the process state reaches Γ, we define a random variable τ(ω) = inf{k ≥ 0 : Xk (ω) ∈ Γ}. This random variable is a stopping time relative to the filtration {ℱk }∞ k=0 . In Example B.2, for a given R > 0, we can consider Γ = BR (0)c . Then τ refers to the first time the process leaves BR (0). In Example B.3, we can consider that the player leaves the casino the first time he finds himself with a profit. If Γ = ℕ, the hitting time τ is by definition that moment. Definition B.5. Let M = (Mk )k≥0 be a stochastic process such that 𝔼[|Mk |] < ∞. – – –

We say that M is a submartingale if 𝔼[Mk |ℱk−1 ] ≥ Mk−1 for every k ∈ ℕ. We say that M is a supermartingale if 𝔼[Mk |ℱk−1 ] ≤ Mk−1 for every k ∈ ℕ. We say that M is a martingale if 𝔼[Mk |ℱk−1 ] = Mk−1 for every k ∈ ℕ. In Example B.2, we have 0 ≤ Mk ≤ k 2 and hence 𝔼[Mk ] < ∞. Since ‖xk + v − y‖2 + ‖xk − v − y‖2 2 = ‖xk − x0 ‖2 + ‖v‖2

𝔼[Mk+1 |ℱk ] =

= ‖xk − x0 ‖2 + 1

≥ ‖xk − x0 ‖2 = Mk ,

Mk is a supermartingale.

B.2 Optional stopping theorem | 205

In Example B.3, 𝔼[Xk+1 |ℱk ] =

Xk + vk Xk − vk + = Xk . 2 2

If the strategy used by the player guarantees 𝔼[|Xk |] < ∞, Xk is a martingale. Theorem B.6 (Optional stopping theorem). Let M = (Mk )k≥0 be a supermartingale and τ a stopping time. Suppose there exists a constant c such that |Mτ∧k | ≤ c almost surely for every k ≥ 0, where ∧ denotes the minimum operator. Then 𝔼[Mτ ] ≤ 𝔼[M0 ]. Analogously, if M is a submartingale we have 𝔼[Mτ ] ≥ 𝔼[M0 ]. And hence the equality 𝔼[Mτ ] = 𝔼[M0 ] holds for M being a martingale. There are different versions of the theorem. The hypothesis of the uniform bound for the variables |Xτ∧k | ≤ c can be substituted by τ ≤ c almost surely, or by 𝔼[τ] < ∞ and 𝔼[Mk+1 − Mk |ℱk ] ≤ c. In Example B.3, suppose that the player uses the martingale betting system. That is, he bets 1 dollar the first round, 2 dollars the second round, 4 dollars the third round, etc. In the n-th round he bets 2n−1 dollars, until he wins for the first time. Observe that at that moment he will have −1 − 2 − ⋅ ⋅ ⋅ − 2n−2 + 2n−1 = 1 dollar. We have X0 = 0 and Xτ = 1, hence the optional stopping theorem does not hold in this case. Observe that xk is not bounded (from below). At every round with probability one half the player will win and hence stop. Then, ℙ(τ = k) = 2−k , and so τ is not bounded almost surely. On the other hand, we have ∞

𝔼[τ] = ∑ k2−k = 2. k=1

We have 𝔼[τ] < ∞ but 𝔼[Xk+1 − Xk |ℱk ] is not bounded. As we can see none of the possible sets of hypotheses for the validity of the optional stopping theorem is fulfilled in this example. In Example B.2, suppose that x0 = 0. We consider Nk = ‖xk ‖2 − k. With a similar computation as the one done before we can show that Nk is a martingale. We consider Γ = BR (0)c and the corresponding hitting time τ. If we apply the optional stopping theorem we obtain that 𝔼[Nτ ] = N0 = ‖x0 ‖2 − 0 = 0. Since at every step the process makes a jump of distance 1 and before reaching xτ−1 ∈ BR (0), we have ‖xτ ‖ ≤ R + 1. Hence 𝔼[‖xτ ‖2 ] ≤ (R + 1)2 . Since 𝔼[Nτ ] = 𝔼[‖xk ‖2 − k] = 0,

206 | B Probability theory we obtain 𝔼[τ] = 𝔼[‖xτ ‖2 ] ≤ (R + 1)2 . That is, we have proved that the expected time for the process to exit the ball of radius R is bounded by (R + 1)2 .

Bibliography [1]

[2]

[3]

[4] [5]

[6] [7] [8]

[9] [10]

[11] [12] [13]

[14] [15]

[16] [17]

[18]

A. D. Alexandroff. Almost everywhere existence of the second differential of a convex function and some properties of convex surfaces connected with it. Leningrad State Univ. Annals [Uchenye Zapiski] Math. Ser., 6:3–35, 1939. Tonći Antunović, Yuval Peres, Scott Sheffield, and Stephanie Somersille. Tug-of-war and infinity Laplace equation with vanishing Neumann boundary condition. Comm. Partial Differential Equations, 37(10):1839–69, 2012. Scott N. Armstrong and Charles K. Smart. An easy proof of Jensen’s theorem on the uniqueness of infinity harmonic functions. Calc. Var. Partial Differential Equations, 37(3–4):381–4, 2010. Scott N. Armstrong and Charles K. Smart. A finite difference approach to the infinity Laplace equation and tug-of-war games. Trans. Amer. Math. Soc., 364(2):595–636, 2012. Scott N. Armstrong, Charles K. Smart, and Stephanie J. Somersille. An infinity Laplace equation with gradient term and mixed boundary conditions. Proc. Amer. Math. Soc., 139(5):1763–76, 2011. Gunnar Aronsson. Extension of functions satisfying Lipschitz conditions. Ark. Mat., 6(1967):551–61, 1967. Gunnar Aronsson, Michael G. Crandall, and Petri Juutinen. A tour of the theory of absolutely minimizing functions. Bull. Amer. Math. Soc. (N.S.), 41(4):439–505, 2004. Ángel Arroyo, Joonas Heino, and Mikko Parviainen. Tug-of-war games with varying probabilities and the normalized p(x)-Laplacian. Commun. Pure Appl. Anal., 16(3):915–44, 2017. Ángel Arroyo and José G. Llorente. On the asymptotic mean value property for planar p-harmonic functions. Proc. Amer. Math. Soc., 144(9):3859–68, 2016. Ángel Arroyo, Hannes Luiro, Mikko Parviainen, and Eero Ruosteenoja. Asymptotic Lipschitz regularity for tug-of-war games with varying probabilities. arXiv preprint arXiv:1806.10838, 2018. Rami Atar and Amarjit Budhiraja. A stochastic differential game for the inhomogeneous ∞-Laplace equation. Ann. Probab., 38(2):498–531, 2010. Amal Attouchi and Mikko Parviainen. 
Hölder regularity for the gradient of the inhomogeneous parabolic normalized p-Laplacian. Commun. Contemp. Math., 20(4):1750035, 2018. Lothar Banz, Bishnu P. Lamichhane, and Ernst P. Stephan. Higher order FEM for the obstacle problem of the p-Laplacian – a variational inequality approach. Comput. Math. Appl., 76(7):1639–60, 2018. G. Barles. Fully nonlinear Neumann type boundary conditions for second-order elliptic and parabolic equations. J. Differential Equations, 106(1):90–106, 1993. G. Barles and Jérôme Busca. Existence and comparison results for fully nonlinear degenerate elliptic equations without zeroth-order term. Comm. Partial Differential Equations, 26(11–12):2323–37, 2001. G. Barles and P. E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations. Asymptotic Anal., 4(3):271–83, 1991. T. Bhattacharya, E. DiBenedetto, and J. Manfredi. Limits as p → ∞ of Δp up = f and related extremal problems. Rend. Sem. Mat. Univ. Politec. Torino, (Special Issue):15–68 (1991). Some topics in nonlinear PDEs (Turin, 1989). Isabeau Birindelli, Giulio Galise, and Hitoshi Ishii. A family of degenerate elliptic operators: maximum principle and its consequences. Ann. Inst. H. Poincaré Anal. Non Linéaire, 35(2):417–41, 2018.

https://doi.org/10.1515/9783110621792-012

208 | Bibliography

[19] [20] [21] [22] [23] [24]

[25] [26] [27] [28] [29]

[30] [31] [32] [33]

[34]

[35]

[36]

[37] [38] [39] [40]

Isabeau Birindelli, Giulio Galise, and Hitoshi Ishii. Towards a reversed Faber–Krahn inequality for the truncated Laplacian. Preprint. arXiv:1803.07362, 2018. Isabeau Birindelli, Giulio Galise, and Fabiana Leoni. Liouville theorems for a family of very degenerate elliptic nonlinear operators. Nonlinear Anal., 161:198–211, 2017. C. Bjorland, L. Caffarelli, and A. Figalli. Non-local gradient dependent operators. Adv. Math., 230(4–6):1859–94, 2012. C. Bjorland, L. Caffarelli, and A. Figalli. Nonlocal tug-of-war and the infinity fractional Laplacian. Comm. Pure Appl. Math., 65(3):337–80, 2012. Pablo Blanc. A lower bound for the principal eigenvalue of fully nonlinear elliptic operators. Preprint arXiv:1709.02455, 2017. Pablo Blanc, João Vítor da Silva, and Julio D Rossi. A limiting free boundary problem with gradient constraint and tug-of-war games. Ann. Mat. Pura Appl., 2018, to appear. DOI https://doi.org/10.1007/s10231-019-00825-0. Pablo Blanc, Carlos Esteve, and Julio D Rossi. The evolution problem associated with eigenvalues of the Hessian. Preprint. arXiv:1901.01052, 2019. Pablo Blanc, Juan J Manfredi, and Julio D Rossi. Games for Pucci’s maximal operators. Preprint. arXiv:1808.07763, 2018. Pablo Blanc, Juan P. Pinasco, and Julio D. Rossi. Obstacle problems and maximal operators. Adv. Nonlinear Stud., 16(2):355–62, 2016. Pablo Blanc, Juan P. Pinasco, and Julio D. Rossi. Maximal operators for the p-Laplacian family. Pacific J. Math., 287(2):257–95, 2017. Pablo Blanc and Julio D. Rossi. Games for eigenvalues of the Hessian and concave/convex envelopes. Jour. Math. Pures Appl., 2018, to appear. DOI https://doi.org/10.1016/j.matpur. 2018.08.007. Claudia Bucur and Marco Squassina. Asymptotic mean value properties for fractional anisotropic operators. J. Math. Anal. Appl., 466(1):107–26, 2018. Luis A. Caffarelli. The regularity of free boundaries in higher dimensions. Acta Math., 139(3-4):155–84, 1977. Fernando Charro, Jesus García Azorero, and Julio D. Rossi. 
A mixed problem for the infinity Laplacian via tug-of-war games. Calc. Var. Partial Differential Equations, 34(3):307–20, 2009. Fernando Charro and Ireneo Peral. Limit branch of solutions as p → ∞ for a family of sub-diffusive problems related to the p-Laplacian. Comm. Partial Differential Equations, 32(10–12):1965–81, 2007. Yun Gang Chen, Yoshikazu Giga, and Shun’ichi Goto. Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations. J. Differential Geom., 33(3):749–86, 1991. Luca Codenotti, Marta Lewicka, and Juan Manfredi. Discrete approximations to the double-obstacle problem and optimal stopping of tug-of-war games. Trans. Amer. Math. Soc., 369(10):7387–403, 2017. Michael G. Crandall, Gunnar Gunnarsson, and Peiyong Wang. Uniqueness of ∞-harmonic functions and the Eikonal equation. Comm. Partial Differential Equations, 32(10–12):1587–615, 2007. Michael G. Crandall, Hitoshi Ishii, and Pierre-Louis Lions. User’s guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.), 27(1):1–67, 1992. Michael G. Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton–Jacobi equations. Trans. Amer. Math. Soc., 277(1):1–42, 1983. Dante DeBlassie and Robert G. Smits. The tug-of-war without noise and the infinity Laplacian in a wedge. Stochastic Process. Appl., 123(12):4219–55, 2013. Leandro M. Del Pezzo and Julio D. Rossi. Tug-of-war games and parabolic problems with

Bibliography | 209

[41]

[42] [43]

[44] [45] [46] [47]

[48] [49] [50] [51] [52] [53] [54]

[55]

[56] [57]

[58] [59] [60] [61]

spatial and time dependence. Differential Integral Equations, 27(3–4):269–88, 2014. J. I. Díaz. Elliptic equations. In Nonlinear partial differential equations and free boundaries. Vol. I, volume 106 of Research notes in mathematics. Pitman (Advanced Publishing Program), Boston, MA, 1985. J. I. Díaz. Surface evolution equations: a level set approach. Springer, Birkhäuser, Basel, 2006. A. Elmoataz, X. Desquesnes, and M. Toutain. On the game p-Laplacian on weighted graphs with applications in image processing and data clustering. European J. Appl. Math., 28(6):922–48, 2017. L. C. Evans and W. Gangbo. Differential equations methods for the Monge–Kantorovich mass transfer problem. Mem. Amer. Math. Soc., 137(653):viii+66, 1999. L. C. Evans and J. Spruck. Motion of level sets by mean curvature. I. J. Differential Geom., 33(3):635–81, 1991. Fausto Ferrari. Mean value properties of fractional second order operators. Commun. Pure Appl. Anal., 14(1):83–106, 2015. J. García-Azorero, J. J. Manfredi, I. Peral, and J. D. Rossi. The Neumann problem for the ∞-Laplacian and the Monge–Kantorovich mass transfer problem. Nonlinear Anal., 66(2):349–66, 2007. Ivana Gómez and Julio D. Rossi. Tug-of-war games and the infinity Laplacian with spatial dependence. Commun. Pure Appl. Anal., 12(5):1959–83, 2013. David Hartenstine and Matthew Rudd. Perron’s method for p-harmonious functions. Electron. J. Differential Equations, 123:12, 2016. Hans Hartikainen. A dynamic programming principle with continuous solutions related to the p-Laplacian, 1 < p < ∞. Differential Integral Equations, 29(5–6):583–600, 2016. F. Reese Harvey and H. Blaine Lawson, Jr. Dirichlet duality and the nonlinear Dirichlet problem. Comm. Pure Appl. Math., 62(3):396–443, 2009. F. Reese Harvey and H. Blaine Lawson, Jr. p-Convexity, p-plurisubharmonicity and the Levi problem. Indiana Univ. Math. J., 62(1):149–69, 2013. Joonas Heino. Uniform measure density condition and game regularity for tug-of-war games. 
Bernoulli, 24(1):408–32, 2018. Eberhard Hopf. Über den funktionalen, insbesondere den analytischen Charakter der Lösungen elliptischer Differentialgleichungen zweiter Ordnung. Math. Z., 34(1):194–233, 1932. Michinori Ishiwata, Rolando Magnanini, and Hidemitsu Wadade. A natural approach to the asymptotic mean value property for the p-Laplacian. Calc. Var. Partial Differential Equations, 56(4):97, 2017. Robert Jensen. Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient. Arch. Rational Mech. Anal., 123(1):51–74, 1993. Petri Juutinen. Minimization problems for Lipschitz functions via viscosity solutions. Ann. Acad. Sci. Fenn. Math. Diss., (115):53, 1998. Dissertation, University of Jyväskulä, Jyväskulä, 1998. Petri Juutinen. Absolutely minimizing Lipschitz extensions on a metric space. Ann. Acad. Sci. Fenn. Math., 27(1):57–67, 2002. Petri Juutinen and Bernd Kawohl. On the evolution governed by the infinity Laplacian. Math. Ann., 335(4):819–51, 2006. Petri Juutinen and Peter Lindqvist. On the higher eigenvalues for the ∞-eigenvalue problem. Calc. Var. Partial Differential Equations, 23(2):169–92, 2005. Petri Juutinen, Peter Lindqvist, and Juan J. Manfredi. The ∞-eigenvalue problem. Arch. Ration. Mech. Anal., 148(2):89–105, 1999.

210 | Bibliography

[62] [63] [64]

[65] [66] [67] [68] [69]

[70] [71] [72] [73] [74] [75] [76] [77] [78]

[79] [80] [81] [82]

[83]

Petri Juutinen, Peter Lindqvist, and Juan J. Manfredi. On the equivalence of viscosity solutions and weak solutions for a quasi-linear equation. SIAM J. Math. Anal., 33(3):699–717, 2001. Petri Juutinen, Mikko Parviainen, and Julio D. Rossi. Discontinuous gradient constraints and the infinity Laplacian. Int. Math. Res. Not. IMRN, (8):2451–92, 2016. Nikos Katzourakis. An introduction to viscosity solutions for fully nonlinear PDE with applications to calculus of variations in L∞ . SpringerBriefs in mathematics. Springer, Cham, 2015. Bernd Kawohl, Juan Manfredi, and Mikko Parviainen. Solutions of nonlinear PDEs in the sense of averages. J. Math. Pures Appl. (9), 97(2):173–88, 2012. Robert V. Kohn and Sylvia Serfaty. A deterministic-control-based approach to fully nonlinear parabolic and elliptic equations. Comm. Pure Appl. Math., 63(10):1298–350, 2010. Shigeaki Koike and Takahiro Kosugi. Remarks on the comparison principle for quasilinear PDE with no zeroth order terms. Commun. Pure Appl. Anal., 14(1):133–42, 2015. Andrew J. Lazarus, Daniel E. Loeb, James G. Propp, Walter R. Stromquist, and Daniel H. Ullman. Combinatorial games under auction play. Games Econom. Behav., 27(2):229–64, 1999. Andrew J. Lazarus, Daniel E. Loeb, James G. Propp, and Daniel H. Ullman. Richman games. In Games of no chance (Berkeley, CA, 1994), volume 29 of Math. Sci. Res. Inst. Publ., pages 439–49. Cambridge Univ. Press, Cambridge, 1996. Peter Lindqvist and Teemu Lukkari. A curious equation involving the ∞-Laplacian. Adv. Calc. Var., 3(4):409–21, 2010. Peter Lindqvist and Juan Manfredi. On the mean value property for the p-Laplace equation in the plane. Proc. Amer. Math. Soc., 144(1):143–9, 2016. Qing Liu and Armin Schikorra. A game-tree approach to discrete infinity laplacian with running costs. arXiv preprint arXiv:1305.7372, 2013. Qing Liu and Armin Schikorra. General existence of solutions to dynamic programming equations. Commun. Pure Appl. Anal., 14(1):167–84, 2015. 
Qing Liu, Armin Schikorra, and Xiaodan Zhou. A game-theoretic proof of convexity-preserving properties for motion by curvature. Indiana Univ. Math. J., 65(1):171–97, 2016. José G. Llorente. Mean value properties and unique continuation. Commun. Pure Appl. Anal., 14(1):185–99, 2015. Rafael López-Soriano, José C. Navarro-Climent, and Julio D. Rossi. The infinity Laplacian with a transport term. J. Math. Anal. Appl., 398(2):752–65, 2013. Hannes Luiro and Mikko Parviainen. Regularity for nonlinear stochastic games. Annales de l’Institut Henri Poincaré C, Analyse non linéaire, 35(6):1435–56, 2018. Hannes Luiro, Mikko Parviainen, and Eero Saksman. Harnack’s inequality for p-harmonic functions via stochastic games. Comm. Partial Differential Equations, 38(11):1985–2003, 2013. Hannes Luiro, Mikko Parviainen, and Eero Saksman. On the existence and uniqueness of p-harmonious functions. Differential Integral Equations, 27(3–4):201–16, 2014. Ashok P. Maitra and William D. Sudderth. Borel stochastic games with lim sup payoff. Ann. Probab., 21(2):861–85, 1993. Ashok P. Maitra and William D. Sudderth. Discrete gambling and stochastic games, volume 32 of Applications of mathematics (New York). Springer-Verlag, New York, 1996. Juan J. Manfredi, Adam M. Oberman, and Alexander P. Sviridov. Nonlinear elliptic partial differential equations and p-harmonic functions on graphs. Differential Integral Equations, 28(1–2):79–102, 2015. Juan J. Manfredi, Mikko Parviainen, and Julio D. Rossi. An asymptotic mean value characterization for a class of nonlinear parabolic equations related to tug-of-war games. SIAM J. Math. Anal., 42(5):2058–81, 2010.

Bibliography | 211

[84] [85] [86] [87] [88] [89] [90] [91] [92] [93] [94] [95] [96]

[97] [98] [99] [100] [101] [102] [103] [104] [105] [106] [107] [108]

Juan J. Manfredi, Mikko Parviainen, and Julio D. Rossi. An asymptotic mean value characterization for p-harmonic functions. Proc. Amer. Math. Soc., 138(3):881–9, 2010. Juan J. Manfredi, Mikko Parviainen, and Julio D. Rossi. Dynamic programming principle for tug-of-war games with noise. ESAIM Control Optim. Calc. Var., 18(1):81–90, 2012. Juan J. Manfredi, Mikko Parviainen, and Julio D. Rossi. On the definition and properties of p-harmonious functions. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 11(2):215–41, 2012. Juan J. Manfredi, Julio D. Rossi, and Stephanie J. Somersille. An obstacle problem for tug-of-war games. Commun. Pure Appl. Anal., 14(1):217–28, 2015. E. J. McShane. Extension of range of functions. Bull. Amer. Math. Soc., 40(12):837–42, 1934. Lourdes Moreno Mérida and Raúl Emilio Vidal. The obstacle problem for the infinity fractional Laplacian. Rend. Circ. Mat. Palermo (2), 67(1):7–15, 2018. K. W. Morton and D. F. Mayers. An introduction. In Numerical solution of partial differential equations, second edition. Cambridge University Press, Cambridge, 2005. Kaj Nyström and Mikko Parviainen. Tug-of-war, market manipulation, and option pricing. Math. Finance, 27(2):279–312, 2017. Adam M. Oberman. A convergent difference scheme for the infinity Laplacian: construction of absolutely minimizing Lipschitz extensions. Math. Comp., 74(251):1217–30, 2005. Adam M. Oberman. The convex envelope is the solution of a nonlinear obstacle problem. Proc. Amer. Math. Soc., 135(6):1689–94, 2007. Adam M. Oberman and Luis Silvestre. The Dirichlet problem for the convex envelope. Trans. Amer. Math. Soc., 363(11):5871–86, 2011. Mikko Parviainen and Eero Ruosteenoja. Local regularity for time-dependent tug-of-war games with varying probabilities. J. Differential Equations, 261(2):1357–98, 2016. Yuval Peres, Gábor Pete, and Stephanie Somersille. Biased tug-of-war, the biased infinity Laplacian, and comparison with exponential cones. Calc. Var. 
Partial Differential Equations, 38(3-4):541–64, 2010. Yuval Peres, Oded Schramm, Scott Sheffield, and David B. Wilson. Tug-of-war and the infinity Laplacian. J. Amer. Math. Soc., 22(1):167–210, 2009. Yuval Peres and Scott Sheffield. Tug-of-war with noise: a game-theoretic view of the p-Laplacian. Duke Math. J., 145(1):91–120, 2008. Alexander Quaas and Boyan Sirakov. Existence results for nonproper elliptic equations involving the Pucci operator. Comm. Partial Differential Equations, 31(7-9):987–1003, 2006. J. D. Rossi, E. V. Teixeira, and J. M. Urbano. Optimal regularity at the free boundary for the infinity obstacle problem. Interfaces Free Bound., 17(3):381–98, 2015. Julio D. Rossi. Tug-of-war games and PDEs. Proc. Roy. Soc. Edinburgh Sect. A, 141(2):319–69, 2011. Eero Ruosteenoja. Local regularity results for value functions of tug-of-war with noise and running payoff. Adv. Calc. Var., 9(1):1–17, 2016. Ji-Ping Sha. Handlebodies and p-convexity. J. Differential Geom., 25(3):353–61, 1987. Scott Sheffield and Charles K. Smart. Vector-valued optimal Lipschitz extensions. Comm. Pure Appl. Math., 65(1):128–54, 2012. Jarkko Siltakoski. Equivalence of viscosity and weak solutions for the normalized p(x)-Laplacian. Calc. Var. Partial Differential Equations, 57(4):95, 2018. Alexander P. Sviridov. p-Harmonious functions with drift on graphs via games. Electron. J. Differential Equations, 114:11, 2011. Peiyong Wang. A formula for smooth ∞-harmonic functions. PanAmer. Math. J., 16(1):57–65, 2006. H. Wu. Manifolds of partially positive curvature. Indiana Univ. Math. J., 36(3):525–48, 1987.

Notations

We use the following notations (some of them are standard). As ambient space we use the Euclidean space ℝ^N.

Ω: bounded smooth domain in ℝ^N.

B_ε(x): ball of radius ε centered at x.

u, v: solution to a PDE or DPP.

∇u or Du = (∂_{x_i} u)_i: gradient of u.

D²u = (∂_{x_i x_j} u)_{i,j}: Hessian matrix of u.

λ_min := min{λ : λ is an eigenvalue of D²u}.

λ_max := max{λ : λ is an eigenvalue of D²u}.

𝕊^N: the set of real symmetric N × N matrices.

g: boundary datum/final payoff.

f: source term/running payoff.

u^ε: game value.

u^ε_I: value of the game for Player I.

u^ε_II: value of the game for Player II.

ℙ: probability.

𝔼: expected value.

τ ∧ k := min{τ, k}.

|A|: Lebesgue measure of the set A.

⨍_A u(x) dx = (1/|A|) ∫_A u(x) dx: mean value of the function u over the set A.
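As a worked illustration of the averaged-integral notation (an example added here, not part of the book's notation list): harmonic functions are exactly those satisfying the mean value property, and the game-theoretic characterizations in the text rest on asymptotic versions of this identity. In LaTeX the slashed integral `\fint` is provided, e.g., by the `esint` package.

```latex
% Illustration (assumed example): the averaged integral and the classical
% mean value property for a harmonic function, \Delta u = 0 in \Omega.
\fint_{B_\varepsilon(x)} u(y)\,dy
  := \frac{1}{|B_\varepsilon(x)|} \int_{B_\varepsilon(x)} u(y)\,dy,
\qquad
u(x) = \fint_{B_\varepsilon(x)} u(y)\,dy
\quad \text{for every ball } B_\varepsilon(x) \subset \Omega.
```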


Index

1-homogeneous ∞-Laplacian 12
1-homogeneous p-Laplacian 12, 42, 71
Absolutely minimizing Lipschitz extension 13
Arzelà–Ascoli’s lemma 16, 28
Asymptotic behavior 151
Brownian motion 1
Comparison principle 200
Conditional expectation 3
Dirichlet boundary condition 51, 200
Discrete distance 47, 65
Dynamic Programming Principle 11, 22, 43, 52, 63, 76, 103, 114, 131, 144, 172, 186
Eigenvalues of the Hessian 101
Elliptic problem 193
Fatou’s lemma 24, 38
Filtration 21, 203
Final payoff function 9
Free boundary problem 169
Game value 10, 14, 21, 43, 51, 63, 75, 103, 107, 114, 131, 141, 171, 186
Gradient constraint 169
Harmonic function 3, 5
Heat equation 6, 127
∞-Laplacian 9, 12, 47, 52
Kolmogorov’s extension theorem 203
Laplacian 1, 3, 8
Leavable game 62
Lewy–Stampacchia lemma 61, 65
Lipschitz extension problem 12
Martingale 204
Maximal operators 71
Mean value 5
Minimal Lipschitz extension 13
Mixed boundary conditions 51
Neumann boundary condition 51
Obstacle problem 51, 60
Optional stopping theorem 33, 40, 204
p-Laplacian 12, 19
Parabolic problem 127, 196
Pay or Leave Tug-of-War 181
Play or sell the turn Tug-of-War 170
Poisson problem 6
Pucci’s maximal operator 72, 101, 113
Random walk 6
Random walk for λ_j 106
Running payoff 40, 48
Semicontinuous envelope 34, 195
Stochastic processes 203
Stopping rule 62
Stopping time 204
Strategies 9, 20
Submartingale 204
Supermartingale 24, 204
Taylor expansion 14, 36
Transition probabilities 21, 203
Tug-of-War game 9, 43
Tug-of-War with noise 19, 40
Unbalanced Tug-of-War with noise 72
Viscosity solution 3, 12, 35, 53, 97, 115, 183, 193

De Gruyter Series in Nonlinear Analysis and Applications

Volume 30
Lucio Damascelli, Filomena Pacella
Morse Index of Solutions of Nonlinear Elliptic Equations, 2019
ISBN 978-3-11-053732-1, e-ISBN (PDF) 978-3-11-053824-3, e-ISBN (EPUB) 978-3-11-053824-3

Volume 29
Rafael Ortega
Periodic Differential Equations in the Plane. A Topological Perspective, 2019
ISBN 978-3-11-055040-5, e-ISBN (PDF) 978-3-11-055116-7, e-ISBN (EPUB) 978-3-11-055042-9

Volume 28
Dung Le
Strongly Coupled Parabolic and Elliptic Systems. Existence and Regularity of Strong and Weak Solutions, 2018
ISBN 978-3-11-060715-4, e-ISBN (PDF) 978-3-11-060876-2, e-ISBN (EPUB) 978-3-11-060717-8

Volume 27
Maxim Olegovich Korpusov, Alexey Vital’evich Ovchinnikov, Alexey Georgievich Sveshnikov, Egor Vladislavovich Yushkov
Blow-Up in Nonlinear Equations of Mathematical Physics. Theory and Methods, 2018
ISBN 978-3-11-060108-4, e-ISBN (PDF) 978-3-11-060207-4, e-ISBN (EPUB) 978-3-11-059900-8

Volume 26
Saïd Abbas, Mouffak Benchohra, John R. Graef, Johnny Henderson
Implicit Fractional Differential and Integral Equations. Existence and Stability, 2018
ISBN 978-3-11-055313-0, e-ISBN (PDF) 978-3-11-055381-9, e-ISBN (EPUB) 978-3-11-055318-5

Volume 25
Luboš Pick, Alois Kufner, Oldřich John, Svatopluk Fučík
Function Spaces. Volume 2, 2018
ISBN 978-3-11-027373-1, e-ISBN (PDF) 978-3-11-032747-2, e-ISBN (EPUB) 978-3-11-038221-1

www.degruyter.com