A Course on Tug-of-War Games with Random Noise: Introduction and Basic Constructions [1st ed.] 9783030462086, 9783030462093

This graduate textbook provides a detailed introduction to the probabilistic interpretation of nonlinear potential theory.


English Pages IX, 254 [258] Year 2020



Universitext

Marta Lewicka

A Course on Tug-of-War Games with Random Noise Introduction and Basic Constructions


Universitext

Series Editors
Sheldon Axler, San Francisco State University
Carles Casacuberta, Universitat de Barcelona
John Greenlees, University of Warwick
Angus MacIntyre, Queen Mary University of London
Kenneth Ribet, University of California, Berkeley
Claude Sabbah, École Polytechnique, CNRS, Université Paris-Saclay, Palaiseau
Endre Süli, University of Oxford
Wojbor A. Woyczyński, Case Western Reserve University

Universitext is a series of textbooks that presents material from a wide variety of mathematical disciplines at master's level and beyond. The books, often well class-tested by their author, may have an informal, personal, even experimental approach to their subject matter. Some of the most successful and established books in the series have evolved through several editions, always following the evolution of teaching curricula, into very polished texts. Thus, as research topics trickle down into graduate-level teaching, first textbooks written for new, cutting-edge courses may make their way into Universitext.

More information about this series at http://www.springer.com/series/223

Marta Lewicka

A Course on Tug-of-War Games with Random Noise Introduction and Basic Constructions

Marta Lewicka Department of Mathematics University of Pittsburgh Pittsburgh, PA, USA

ISSN 0172-5939    ISSN 2191-6675 (electronic)
Universitext
ISBN 978-3-030-46208-6    ISBN 978-3-030-46209-3 (eBook)
https://doi.org/10.1007/978-3-030-46209-3
Mathematics Subject Classification (2020): 91A15, 91A24, 31C45, 35B30, 35G30, 35J70

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The goal of these Course Notes is to present a systematic overview of the basic constructions and results pertaining to the recently emerged field of Tug-of-War games, as seen from an analyst's perspective. To a large extent, this book represents the author's own study itinerary, aiming at the precision and completeness of a classroom text for an upper undergraduate- to graduate-level course.

This book was originally planned as a joint project between Marta Lewicka (University of Pittsburgh) and Yuval Peres (then Microsoft Research). Due to an unforeseen turn of events, neither the collaboration nor the execution of the project in their previously conceived forms could be pursued.

The author wishes to dedicate this book to all women in mathematics, with admiration and encouragement. The publishing profit will be donated to the Association for Women in Mathematics.

Pittsburgh, PA, USA
October 2019

Marta Lewicka


Contents

1  Introduction  1

2  The Linear Case: Random Walk and Harmonic Functions  9
   2.1  The Laplace Equation and Harmonic Functions  10
   2.2  The Ball Walk  11
   2.3  The Ball Walk and Harmonic Functions  17
   2.4  Convergence at the Boundary and Walk-Regularity  19
   2.5  A Sufficient Condition for Walk-Regularity  23
   2.6* The Ball Walk Values and Perron Solutions  27
   2.7* The Ball Walk and Brownian Trajectories  29
   2.8  Bibliographical Notes  34

3  Tug-of-War with Noise: Case p ∈ [2, ∞)  37
   3.1  The p-Harmonic Functions and the p-Laplacian  38
   3.2  The Averaging Principles  40
   3.3  The First Averaging Principle  47
   3.4  Tug-of-War with Noise: A Basic Construction  51
   3.5  The First Averaging as the Dynamic Programming Principle  57
   3.6* Case p = 2 and Brownian Trajectories  60
   3.7* Equivalence of Regularity Conditions  65
   3.8  Bibliographical Notes  68

4  Boundary Aware Tug-of-War with Noise: Case p ∈ (2, ∞)  71
   4.1  The Second Averaging Principle  72
   4.2  The Basic Convergence Theorem  77
   4.3  Playing Boundary Aware Tug-of-War with Noise  82
   4.4  A Probabilistic Proof of the Basic Convergence Theorem  88
   4.5* The Boundary Aware Process at p = 2 and Brownian Trajectories  93
   4.6  Bibliographical Notes  99

5  Game-Regularity and Convergence: Case p ∈ (2, ∞)  101
   5.1  Convergence to p-Harmonic Functions  102
   5.2  Game-Regularity and Convergence  109
   5.3  Concatenating Strategies  113
   5.4  The Annulus Walk Estimate  117
   5.5  Sufficient Conditions for Game-Regularity: Exterior Cone Property  120
   5.6  Sufficient Conditions for Game-Regularity: p > N  122
   5.7  Bibliographical Notes  134

6  Mixed Tug-of-War with Noise: Case p ∈ (1, ∞)  135
   6.1  The Third Averaging Principle  136
   6.2  The Dynamic Programming Principle and the Basic Convergence Theorem  142
   6.3  Mixed Tug-of-War with Noise  146
   6.4  Sufficient Conditions for Game-Regularity: Exterior Corkscrew Condition  152
   6.5  Sufficient Conditions for Game-Regularity: Simply Connectedness in Dimension N = 2  156
   6.6  Bibliographical Notes  161

A  Background in Probability  163
   A.1  Probability and Measurable Spaces  163
   A.2  Random Variables and Expectation  165
   A.3  Product Measures  168
   A.4  Conditional Expectation  172
   A.5  Independence  174
   A.6  Martingales and Stopping Times  176
   A.7  Convergence of Martingales  180

B  Background in Brownian Motion  185
   B.1  Definition and Construction of Brownian Motion  185
   B.2  The Wiener Measure and Uniqueness of Brownian Motion  192
   B.3  The Markov Properties  194
   B.4  Brownian Motion and Harmonic Extensions  199

C  Background in PDEs  203
   C.1  Lebesgue Lp Spaces and Sobolev W1,p Spaces  203
   C.2  Semicontinuous Functions  209
   C.3  Harmonic Functions  210
   C.4  The p-Laplacian and Its Variational Formulation  214
   C.5  Weak Solutions to the p-Laplacian  217
   C.6  Potential Theory and p-Harmonic Functions  221
   C.7  Boundary Continuity of Weak Solutions to Δp u = 0  224
   C.8  Viscosity Solutions to Δp u = 0  229

D  Solutions to Selected Exercises  231

References  249

Index  253

Chapter 1

Introduction

The goal of these Course Notes is to present a systematic overview of the basic constructions pertaining to the recently emerged field of Tug-of-War games with random noise, as seen from an analyst's perspective.

The Linear Motivation. The prototypical elliptic equation, arising ubiquitously in Analysis, Function Theory and Mathematical Physics, is the Laplace equation:

    Δu = div(∇u) = 0.

Laplace's equation is linear and its solutions u : D → R, defined on a domain D ⊂ R^N, are called harmonic functions. They are precisely the critical points (i.e., solutions to the Euler–Lagrange equation) of the quadratic potential energy:

    I₂(u) = ∫_D |∇u(x)|² dx.

The idea behind the classical interplay of the linear Potential Theory and Probability is that harmonic functions and random walks share a common averaging property. We now briefly recall this relation. Indeed, on the one hand, for any u ∈ C²(R^N) there holds the expansion:

    ⨍_{B_ε(x)} u(y) dy = u(x) + ε²/(2(N+2)) · Δu(x) + o(ε²)    as ε → 0⁺.

Recall that ⨍_B u = (1/|B|) ∫_B u denotes the average of u on a set B. The displayed formula follows by writing the quadratic Taylor expansion of u at x:


    u(y) = u(x) + ⟨∇u(x), y − x⟩ + ½ ⟨∇²u(x) : (y − x) ⊗ (y − x)⟩ + o(|y − x|²),

and averaging each term separately on the ball B_ε(x). In the right hand side, the zeroth order term u(x) averages to itself, the first order term averages to zero due to the symmetry of the ball, while the second order average consists of the scalar product of the two matrices ∇²u(x) and ⨍_{B_ε(x)} (y − x) ⊗ (y − x) dy = ε²/(N+2) · Id_N, yielding: ⟨∇²u(x) : Id_N⟩ = Trace ∇²u(x) = Δu(x). We note that the obtained mean value expansion is consistent with the fact that an equivalent condition for Δu = 0 is the mean value property:

    ⨍_{B_ε(x)} u(y) dy = u(x).
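As a quick numerical illustration (a sketch added here, not part of the original text), one can test the expansion with the non-harmonic function u(y) = y₁² in dimension N = 2: here Δu ≡ 2 and, since u is a quadratic polynomial, the o(ε²) term vanishes identically, so a Monte Carlo average over B_ε(x) should match u(x) + ε²/(2(N+2)) · Δu(x) up to sampling error.

```python
import random

random.seed(0)
N, eps = 2, 0.2
x = (0.5, -0.3)                      # centre of the ball B_eps(x)

def u(y1, y2):                       # u(y) = y1^2, so that Laplacian(u) = 2 everywhere
    return y1 * y1

# Monte Carlo average of u over B_eps(x), via rejection sampling in the unit disc
total, n = 0.0, 200_000
for _ in range(n):
    while True:
        a, b = random.uniform(-1, 1), random.uniform(-1, 1)
        if a * a + b * b <= 1.0:
            break
    total += u(x[0] + eps * a, x[1] + eps * b)
avg = total / n

# mean value expansion: average = u(x) + eps^2/(2(N+2)) * Laplacian(u)(x) + o(eps^2)
predicted = u(*x) + eps**2 / (2 * (N + 2)) * 2.0
print(abs(avg - predicted))          # small: Monte Carlo error only
```

For a harmonic u the correction term vanishes and the average reproduces u(x) exactly, which is the mean value property below.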

Hence, the coefficient Δu(x) in the mean value expansion measures the second order (in the averaging radius ε) error from the satisfaction of the mean value property, and thus from harmonicity of u.

On the other hand, consider the discrete stochastic process where, at each step, the particle is randomly shifted from its current position x to a new position y, uniformly distributed in some ball B_{δ(x)}(x). Assume that the particle is initially located at x₀ ∈ D and that the radius of the random shift δ(x) at x equals, rather than ε as above, the minimum of the parameter ε and the particle's current distance from ∂D. We see that this process never reaches the boundary and the position of the token is shifted indefinitely in D; such an infinite horizon process is called the ball walk. It is also clear that the current position of the particle equals the mean of its immediate future positions. The same property holds for the process' values, defined as follows. It can be checked that, with probability one, the particle's positions accumulate at some limiting point x_∞ ∈ ∂D. We then set u_ε(x₀) to equal the expected value of the given continuous boundary data F, evaluated at x_∞. It turns out that the functions u_ε are all actually one and the same harmonic function u (i.e., they are independent of ε), and that they coincide with the so-called Perron solution of the Laplace–Dirichlet problem on D:

    Δu = 0 in D,    u = F on ∂D.

The Perron solution may actually fail to achieve its defining boundary condition F when D is irregular. According to what is by now one of the fundamental results in Potential Theory, u(y) coincides with the correct boundary value F(y) precisely at those points y ∈ ∂D which satisfy the so-called Wiener regularity condition. This


condition is independent of the choice of the continuous data F, and it is worth mentioning that it can be derived directly through the ball walk, without appealing to its advanced independent analytical formulation. We remark that the consecutive positions of the particle in the described process may be characterized as a discrete realization of continuous paths in the Brownian motion on R^N started at x₀, whereas the limiting position x_∞ is precisely the first exit point from D. The study of these and other relations between linear elliptic PDEs and Stochastic Processes has been instrumental for the interrelated developments of Analysis, PDEs and Probability in the past century.

The Nonlinear Motivation. If we replace the quadratic exponent in I₂ by the p-th power, p ∈ (1, ∞), the resulting p-potential energy functional:

    I_p(u) = ∫_D |∇u(x)|^p dx,

has the p-Laplacian as its Euler–Lagrange equation for critical points:

    Δ_p u = div(|∇u|^{p−2} ∇u) = 0.

The p-Laplace equation occupies a similar central position for nonlinear phenomena as the Laplacian for the theory of linear PDEs. At points x ∈ D where ∇u(x) = 0, this equation is degenerate when p > 2, and singular when p < 2. Its solutions are called p-harmonic functions; they arise in various contexts of Mathematical Physics and have many applications. We also mention that passing to the limit with p → ∞ leads to (the normalized version of) the celebrated ∞-Laplacian:

    Δ_∞ u = (1/|∇u|²) ⟨(∇²u)∇u, ∇u⟩,

which returns the second derivative of u in the direction of its normalized gradient ∇u/|∇u|, whenever defined. The ∞-Laplacian is a fully nonlinear operator arising, for example, in the study of optimal Lipschitz extensions and image processing. It is the most difficult and the least understood among the Laplace-like operators, for which many questions (for instance, related to the regularity or uniqueness of eigenfunctions) remain still open. The p-Laplacian, the Laplacian and the ∞-Laplacian are related by the following interpolation identity:

    |∇u|^{2−p} Δ_p u = Δu + (p − 2) Δ_∞ u.

We further remark that the operator in the left hand side above, namely:

    Δ^G_p u = |∇u|^{2−p} Δ_p u,

is known as the "game-theoretical" p-Laplacian. Although Δ^G_p is still nonlinear, it is one-homogeneous and less singular than the full p-Laplacian Δ_p. It is somewhat unexpected that the Probability approach as displayed in the linear case can be implemented, with appropriate modifications, also in the setting of the nonlinear Potential Theory. One way of arriving at this observation is as follows. Averaging a function u ∈ C²(R^N) on an ellipsoid, rather than on the ball B_ε(x) which worked well in the linear setting, leads to:

    ⨍_{E(x,ε;α,ν)} u(y) dy = u(x) + ε²/(2(N+2)) · ( Δu(x) + (α² − 1) ⟨∇²u(x) : ν^{⊗2}⟩ ) + o(ε²).

Here, E(x, ε; α, ν) = x + { y ∈ R^N ; ⟨y, ν⟩² + α² |y − ⟨y, ν⟩ν|² < α² ε² }, where ε is the radius, α is the aspect ratio and ν is the unit-length orientation vector. We recall that the scalar product of the two square matrices ∇²u(x) and ν^{⊗2} = ν ⊗ ν in: ⟨∇²u(x) : ν^{⊗2}⟩ = ⟨∇²u(x)ν, ν⟩ returns the second derivative of u at x, in the direction of ν. Thus, for the choice α = √(p − 1) and ν = ∇u(x)/|∇u(x)|, the above displayed mean value expansion becomes:

    ⨍_{E(x,ε;√(p−1),∇u(x)/|∇u(x)|)} u(y) dy = u(x) + ε²/(2(N+2)) · Δ^G_p u(x) + o(ε²),

with the familiar quantity Δ^G_p u(x) at the second order in the averaging "radius" ε. To obtain an expansion where the left hand side averaging does not require the knowledge of ∇u(x) and allows for the identification of a p-harmonic function which is a priori only continuous, one should additionally take the mean over all orientations ν. This idea can indeed be carried out by superposing the deterministic average "½(inf + sup)" with the stochastic average "⨍", leading to the expansion:

    ½ ( inf_{z∈B_ε(x)} + sup_{z∈B_ε(x)} ) ⨍_{E(z, γ_p ε; α_p(|z−x|/ε), (z−x)/|z−x|)} u(y) dy = u(x) + (γ_p² ε²)/(2(N+2)) · Δ^G_p u(x) + o(ε²).

Above, the constant scaling factor γ_p and the quadratic aspect ratio function α_p satisfy the appropriate compatibility identity depending on N and p. As in the linear case, one can then prove that an equivalent condition for p-harmonicity Δ_p u = 0 is the asymptotic satisfaction of the mean value equation:

    ½ ( inf_{z∈B_ε(x)} + sup_{z∈B_ε(x)} ) ⨍_{E(z, γ_p ε; α_p(|z−x|/ε), (z−x)/|z−x|)} u(y) dy = u(x) + o(ε²).
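The interpolation identity |∇u|^{2−p} Δ_p u = Δu + (p − 2) Δ_∞ u displayed earlier can be sanity-checked numerically. The sketch below (an illustration added here, with an arbitrarily chosen sample function, exponent, and point) computes Δ_p u by central finite differences of the flux |∇u|^{p−2}∇u in N = 2, and compares against Δu + (p − 2) Δ_∞ u evaluated from the closed-form derivatives.

```python
p = 3.5                              # any exponent in (1, infinity)

def grad(x, y):                      # u(x,y) = x^3 + y^3 - 3*x*y
    return (3*x*x - 3*y, 3*y*y - 3*x)

def hess(x, y):                      # Hessian of the same u
    return ((6*x, -3.0), (-3.0, 6*y))

def flux(x, y):                      # |grad u|^{p-2} * grad u
    gx, gy = grad(x, y)
    m = (gx*gx + gy*gy) ** ((p - 2) / 2)
    return (m * gx, m * gy)

def p_laplacian(x, y, h=1e-5):       # divergence of the flux, by central differences
    return ((flux(x + h, y)[0] - flux(x - h, y)[0]) / (2*h)
            + (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2*h))

def rhs(x, y):                       # Laplacian + (p-2) * normalized infinity-Laplacian
    gx, gy = grad(x, y)
    (uxx, uxy), (_, uyy) = hess(x, y)
    lap = uxx + uyy
    inf_lap = (uxx*gx*gx + 2*uxy*gx*gy + uyy*gy*gy) / (gx*gx + gy*gy)
    return lap + (p - 2) * inf_lap

x0, y0 = 0.7, 0.4                    # a point with non-vanishing gradient
gx, gy = grad(x0, y0)
lhs = (gx*gx + gy*gy) ** ((2 - p) / 2) * p_laplacian(x0, y0)
print(abs(lhs - rhs(x0, y0)))        # tiny: finite-difference error only
```

The check only makes sense where ∇u ≠ 0, mirroring the restriction on the normalized ∞-Laplacian noted above.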

In order to draw a further parallel, we now describe the discrete stochastic process modelled on the above equation, which is the two-player, zero-sum game, called


the Tug-of-War with random noise. In this process, the token is initially placed at x₀ ∈ D, and at each step it is advanced according to the following rule. First, either of the two players (each acting with probability ½) updates the current position x_n by a chosen vector y of length at most ε; second, the token is further randomly shifted to a new position x_{n+1}, uniformly distributed within the ellipsoid E(z, γ_p ε; α_p, y/|y|) that is centred at z = x_n + y. The game is stopped whenever the token reaches the ε-neighbourhood of ∂D. The game value u_ε(x₀) is then defined as the expectation of the value of the given continuous boundary function F (extended continuously on R^N) at the stopping position x_τ, subject to both players playing optimally. The applied optimality criterion is based on the rule that Player II pays to Player I the value F(x_τ). This rule gives Player I the incentive to maximize the gain by pulling the token towards portions of ∂D with high values of F, whereas Player II will likely try to minimize the loss by pulling towards lower values. Due to the min-max property, the notion of optimality turns out to be well defined, i.e., the order of supremizing the outcome over strategies of the first player and infimizing over strategies of the opponent is immaterial.

We observe that since the random sampling is performed within a radius of order ε, regardless of the token's position, it is no longer true that the functions {u_ε}_{ε∈(0,1)} are one and the same function, even in the linear case p = 2. However, it is expected that the family {u_ε}_{ε→0} still converges pointwise in D to the unique Perron solution u of the associated p-Laplace–Dirichlet problem:

    Δ_p u = 0 in D,    u = F on ∂D,

subject to the continuous boundary data F. In agreement with the linear case, it is also natural to expect that this convergence is uniform for regular boundary, and that in that case u = F on ∂D. While the former result is not yet available at the time when these Course Notes are written (for p ≠ 2), the latter two assertions hold true and will be precisely the subject of our studies.

The Whys and Wherefores. Similarly to the linear case, where the notions and results of the linear Potential Theory find their classical counterparts via Brownian motion, the Game-Theoretical interpretation of the nonlinear Potential Theory brings up a fruitful point of view, allowing one to replace involved analytical techniques by relying instead on suitable choices of strategies for the competing players. Applications include: a new proof of Harnack's inequality, a new proof of Hölder regularity of p-harmonic functions, results on obstacle problems, and the study of Tug-of-War games in the Heisenberg group. Other aspects, which are beyond the introductory scope of this textbook, concern the limiting exponent cases p = ∞ and p = 1. The case p = ∞ is related to the absolutely minimizing Lipschitz extension property of the ∞-harmonic functions (which may be defined on arbitrary length spaces), in connection to the pure Tug-of-War games. The case p = 1 concerns a variant of Spencer's "pusher-chooser" (deterministic) game and the level-set formulation of motion by mean curvature. Of interest are procedures of passing to the limit with p → ∞ and p → 1.


Parallel problems can be formulated and studied on graphs, and have recently found applications in graph-based semi-supervised learning problems, yielding new algorithms with theoretical guarantees. More generally, the approach of:
1. finding an asymptotic expansion in which the second order coefficient matches the prescribed partial differential operator of second order;
2. introducing a related mean value equation by removing higher order error terms in the expansion;
3. interpreting the mean value equation as the dynamic programming principle of a "game" incorporating deterministic and stochastic components;
4. passing to the limit in the radius of sampling in order to recover the continuum solutions from the values of the game process;
is quite flexible and allows one to deal with several elliptic and parabolic nonlinear PDEs, including free boundary problems.

The Content of This Book. We start by analysing the basic linear case in detail in Chap. 2, where we link the ball walk, the harmonic functions and the mean value property. This chapter presents more classical material, but it crucially serves as a stepping stone towards gaining familiarity with the more complex nonlinear constructions. In Chap. 3 we are concerned with the case p ≥ 2, which is somewhat closer to the linear case, by means of the first nonlinear extension of the mean value property (called here the averaging principle) and a resulting Tug-of-War game. Another game process and another asymptotic expansion, valid for p > 2, are studied in Chap. 4. Its advantage is that the game values u_ε inherit regularity properties of the boundary function F (continuity, Hölder and Lipschitz continuity), thanks to interpolating to the boundary in the deterministic and stochastic sampling rules.

The aim of Chap. 5 is to introduce the notion of game-regularity of boundary points, which is essentially equivalent to the local equicontinuity of the family {u_ε}_{ε→0}, and ultimately to its uniform convergence to the unique viscosity solution of the studied Dirichlet problem. This notion extends the walk-regularity introduced in Chap. 2, and both notions may be seen as natural extensions of Doob's regularity for Brownian motion. It is expected that game-regularity is equivalent to the Wiener p-regularity criterion. In the absence, so far, of such a result, we show two sufficient conditions for it: the exterior cone condition and the exponent range p > N. In Chap. 6 we derive the ultimate ellipsoid-based averaging principle that serves as the dynamic programming principle for the Tug-of-War game with noise, which is viable for the whole exponent range p ∈ (1, ∞). We also show that the exterior corkscrew condition is sufficient for game-regularity, and that in dimension N = 2, every simply connected domain is game-regular, for any p ∈ (1, ∞).

The final three appendices gather background material, which is to be used at the instructor's and students' discretion. Appendix A contains basic definitions and facts in Probability: probability and measurable spaces, conditional expectation, martingales in discrete time, Doob's and Dubins' theorems. Appendix B serves


as an introduction to Brownian Motion, where we discuss the Lévy construction, stopping times, Markov properties, the Wiener measure and the Brownian motion harmonic extensions. This material is only used in the sections marked with "∗", towards comparing the values of different variants of Tug-of-War at p = 2, along with their regularity conditions, with the Brownian motion harmonic extension and Doob's regularity. Both Appendix B and the starred sections are independent of the main material, and may be skipped at first reading. In Appendix C, we recall preliminary facts in PDEs: Lebesgue and Sobolev spaces, harmonic functions, p-harmonic functions, viscosity and weak solutions, and we also present some aspects of the nonlinear Potential Theory. Finally, Appendix D contains solutions of selected exercises.

Prerequisites. The presentation aims to be self-contained, at the level of a classroom text in an upper undergraduate to graduate course. Familiarity with differential and integral calculus in R^N, and with some basic measure theory, is assumed. Familiarity with the concepts of Probability and PDEs at the level of a rigorous core course for Mathematics majors is advised. At the same time, these Course Notes are equipped with extensive background and auxiliary material in the three appendix sections, where a range of definitions, facts, proofs and guided exercises is gathered. Students who have passed first courses on Probability and PDEs will be able to go through the main material, starting from Chap. 2, with no problems. Students who lack such training should begin by approaching the adequate material from Appendix A and Sects. C.1–C.5 of Appendix C. The usage of the appendices is expected to be gauged to the students' preparation level, as determined by the Course instructor.
Notation. When having to decide between an at times somewhat heavier notation, and the risk of a student wondering about the dependence of quantities on each other, the domains of definition of functions, or the structure of the involved spaces, the former has been intentionally chosen. At the same time, the author has strived to make the notation as friendly, balanced and intuitive as possible.

Acknowledgments. In preparing these Course Notes, the author has greatly benefited from discussions with Y. Peres, whose seminal work from a decade ago uncovered the deep connections between Nonlinear Potential Theory and Stochastic Processes. The author wishes to thank Y. Peres for advising her studies of Game Theory and Probability, in the oftentimes limiting context of the author's analysis-trained and oriented point of view. The gratitude extends to J. Manfredi for introducing the author to the topic of this book, for discussions on the p-Laplacian and viscosity solutions, and for coauthoring joint papers. The author is further grateful to P. Lindqvist for discussions and continuous kind encouragement. An acknowledgement is due to Microsoft Research, whose financial support allowed for the author's visits to MSR Redmond in the early stages of this work. As a final word, the author would like to bring the readers' attention to the recent book by Blanc and Rossi (2019), which is concerned with the same topic as these Course Notes, albeit written with a different scope and style.

Chapter 2

The Linear Case: Random Walk and Harmonic Functions

In this chapter we present the basic relation between the linear potential theory and random walks. This fundamental connection, developed by Itô, Doob, Lévy and others, relies on the observation that harmonic functions and martingales share a common cancellation property, expressed via mean value properties. It turns out that, with appropriate modifications, a similar observation and approach can be applied also in the nonlinear case, which is of main interest in these Course Notes. Thus, the present chapter serves as a stepping stone towards gaining familiarity with the more complex constructions of Chaps. 3–6.

After recalling the equivalent defining properties of harmonic functions in Sect. 2.1, in Sect. 2.2 we introduce the ball walk. This is an infinite horizon discrete process, in which at each step the particle, initially placed at some point x₀ in the open, bounded domain D ⊂ R^N, is randomly advanced to a new position, uniformly distributed within the following open ball: centred at the current placement, and with radius equal to the minimum of the parameter ε and the distance from the boundary ∂D. With probability one, such a process accumulates on ∂D, and u_ε(x₀) is then defined as the expected value of the given boundary data F at the process' limiting position. Each function u_ε is harmonic, and we show in Sects. 2.3 and 2.4 that if ∂D is regular, then each u_ε coincides with the unique harmonic extension of F in D. One sufficient condition for regularity is the exterior cone condition, as proved in Sect. 2.5. Our discussion and proofs are elementary, requiring only a basic knowledge of probabilistic concepts, such as: probability spaces, martingales and Doob's theorem. For the convenience of the reader, these are gathered in Appendix A. The slightly more advanced material, which may be skipped at first reading, is based on the Potential Theoretic and the Brownian motion arguments from, respectively, Appendix C and Appendix B.
Both approaches allow us to deduce that the functions in the family {u_ε}_{ε∈(0,1)} are one and the same function, regardless of the regularity of ∂D. This fact is obtained first in Sect. 2.6*, by proving that each u_ε coincides with the Perron solution of the Dirichlet problem for the boundary data F. The same follows in Sect. 2.7*, by checking that the ball walk consists of discrete realizations along the Brownian motion trajectories, to the effect that u_ε equals the Brownian motion harmonic extension of F. Thus, the three classical approaches to finding the harmonic extension by:
1. evaluating the expectation of the values of the (discrete) ball walk at its limiting infinite horizon boundary position;
2. taking infima/suprema of super- and sub-harmonic functions obeying comparison with the boundary data;
3. evaluating the expectation of the values of the (continuous) Brownian motion at exiting the domain;
are shown to naturally coincide when F is continuous.

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020. M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3_2

2.1 The Laplace Equation and Harmonic Functions

Among the most important of all PDEs is the Laplace equation. In this section we briefly recall the relevant definitions and notation; for the proofs and a review of basic properties we refer to Sect. C.3 in Appendix C. Let D ⊂ RN be an open, bounded, connected set. The Euler–Lagrange equation for critical points of the following quadratic energy functional:

  I₂(u) = ∫_D |∇u(x)|² dx

is expressed by the second order partial differential equation:

  Δu = Σ_{i=1}^N ∂²u/∂x_i² = 0   in D,

whose solutions are called harmonic functions. The operator Δ is defined in the classical sense only for C² functions u; however, a remarkable property of harmonicity is that it can be equivalently characterized via mean value properties that do not require u to be even continuous. At the same time, harmonic functions are automatically smooth. More precisely, the following conditions are equivalent:

(i) A locally bounded, Borel function u : D → R satisfies the mean value property on balls:

  u(x) = ⨍_{B_r(x)} u(y) dy   for all B̄_r(x) ⊂ D.


(ii) A locally bounded, Borel function u : D → R satisfies, for each x ∈ D and almost every r ∈ (0, dist(x, ∂D)), the mean value property on spheres:

  u(x) = ∫_{∂B_r(x)} u(y) dσ^{N−1}(y).

(iii) The function u is smooth: u ∈ C∞(D), and there holds Δu = 0 in D.

The proof of the equivalence will be recalled in Sect. C.3. We further remark at this point that Taylor expanding any function u ∈ C²(D) and averaging term by term on B̄_ε(x) ⊂ D leads to the mean value expansion, in what follows also called the averaging principle:

  ⨍_{B_ε(x)} u(y) dy = u(x) + (ε² / (2(N+2))) Δu(x) + o(ε²)   as ε → 0+,   (2.1)

which in fact is consistent with interpreting Δu as the (second order) error from harmonicity. This point of view is central to developing the probabilistic interpretation of the general p-Laplace equations, which is the goal of these Course Notes. While we will not need (2.1) in order to construct the random walk and derive its connection to the Laplace equation in the linear setting p = 2 studied in this chapter, it is beneficial to keep in mind that the mean value property in (i) may actually be "guessed" from the expansion (2.1). Throughout the next chapters, more general averaging principles will be proved (in Sects. 3.2, 4.1 and 6.1), informing the mean value properties that characterize, in the asymptotic sense, zeroes of the nonlinear operators Δ_p at any p ∈ (1, ∞), and ultimately leading to the Tug-of-War games with random noise.
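As an aside (an illustration, not part of the text): for a quadratic polynomial the expansion (2.1) is exact, which makes it easy to test numerically. The sketch below checks it by Monte Carlo for u(x) = |x|² in dimension N = 3, where Δu = 2N; the sampling scheme and tolerances are our own choices.

```python
import math
import random

def sample_ball(N, eps, rng):
    # Uniform point in the ball B_eps(0): Gaussian direction, radius eps * U^(1/N).
    g = [rng.gauss(0.0, 1.0) for _ in range(N)]
    norm = math.sqrt(sum(t * t for t in g))
    r = eps * rng.random() ** (1.0 / N)
    return [r * t / norm for t in g]

def average_on_ball(u, x, eps, N, samples, rng):
    # Monte Carlo approximation of the average of u over B_eps(x).
    total = 0.0
    for _ in range(samples):
        z = sample_ball(N, eps, rng)
        total += u([xi + zi for xi, zi in zip(x, z)])
    return total / samples

if __name__ == "__main__":
    rng = random.Random(0)
    N, eps = 3, 0.5
    u = lambda y: sum(t * t for t in y)              # u(x) = |x|^2, so Laplacian = 2N
    x = [0.2, -0.1, 0.3]
    lhs = average_on_ball(u, x, eps, N, 200_000, rng)
    rhs = u(x) + eps ** 2 / (2 * (N + 2)) * (2 * N)  # right hand side of (2.1), no o(eps^2) term
    print(lhs, rhs)  # the two values agree up to Monte Carlo error
```

Since u is a quadratic polynomial, the o(ε²) remainder in (2.1) vanishes, so the only discrepancy is sampling noise.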

2.2 The Ball Walk

In this section we construct the discrete stochastic process whose value will be shown to equal the harmonic function with prescribed boundary values. The probability space of the ball walk process is defined as follows. Consider (Ω₁, F₁, P₁), where Ω₁ is the unit ball B₁(0) ⊂ RN, the σ-algebra F₁ consists of the Borel subsets of Ω₁, and P₁ is the normalized Lebesgue measure:

  P₁(D) = |D| / |B₁(0)|   for all D ∈ F₁.

For any n ∈ ℕ, we denote by Ω_n = (Ω₁)ⁿ the Cartesian product of n copies of Ω₁, and by (Ω_n, F_n, P_n) the corresponding product probability space. Further, the


Fig. 2.1 The ball walk and the process {X_n^{ε,x₀}}_{n=0}^∞ in (2.2)

countable product (Ω, F, P) is defined as in Theorem A.12 on:

  Ω = (Ω₁)^ℕ = Π_{i=1}^∞ Ω₁ = { ω = {wᵢ}_{i=1}^∞ ; wᵢ ∈ B₁(0) for all i ∈ ℕ }.

We identify each σ-algebra F_n with the sub-σ-algebra of F consisting of sets of the form F × Π_{i=n+1}^∞ Ω₁ for all F ∈ F_n. Note that {F_n}_{n=0}^∞, where F₀ = {∅, Ω}, is a filtration of F and that F is the smallest σ-algebra containing ⋃_{n=0}^∞ F_n.

Definition 2.1 Let D ⊂ RN be an open, bounded, connected set. The ball walk for ε ∈ (0, 1) and x₀ ∈ D is recursively defined (see Fig. 2.1) as the following sequence of random variables {X_n^{ε,x₀} : Ω → D}_{n=0}^∞:

  X₀^{ε,x₀} ≡ x₀,
  X_n^{ε,x₀}(w₁, …, w_n) = X_{n−1}^{ε,x₀}(w₁, …, w_{n−1}) + (ε ∧ dist(X_{n−1}^{ε,x₀}, ∂D)) w_n   (2.2)

for all n ≥ 1 and all (w₁, …, w_n) ∈ Ω_n. We will often write: x_n = X_n^{ε,x₀}(w₁, …, w_n).

Intuitively, {x_n}_{n=0}^∞ describe the consecutive positions of a particle initially placed at x₀ ∈ D, along a discrete path consisting of a succession of random steps of magnitude at most ε. The size of the steps decreases as the particle approaches the boundary ∂D. The position x_n ∈ D is obtained from x_{n−1} by sampling uniformly on the open ball B_{ε∧dist(x_{n−1},∂D)}(x_{n−1}). It is clear that each random variable X_n^{ε,x₀} : Ω → RN is F_n-measurable and that it depends only on the previous position x_{n−1}, its distance from ∂D and the current random outcome w_n ∈ Ω₁.
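For illustration (not part of the text), the recursion (2.2) is straightforward to simulate. A minimal sketch, assuming D is the unit disk in R² so that dist(x, ∂D) = 1 − |x|, and stopping once the token is within a small tolerance of ∂D:

```python
import math
import random

def ball_walk(x0, eps, tol=1e-2, rng=None, max_steps=100_000):
    """Run the eps-ball walk (2.2) in the unit disk D = B_1(0) of R^2,
    until the token is within distance tol of the boundary."""
    rng = rng or random.Random()
    x, y = x0
    for _ in range(max_steps):
        d = 1.0 - math.hypot(x, y)            # dist(x, boundary) for the unit disk
        if d < tol:
            return (x, y)
        # uniform sample w in the unit ball B_1(0), by rejection sampling
        while True:
            wx, wy = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if wx * wx + wy * wy < 1.0:
                break
        r = min(eps, d)                        # step radius: eps ∧ dist(x, boundary)
        x, y = x + r * wx, y + r * wy
    raise RuntimeError("no convergence within max_steps")

if __name__ == "__main__":
    xe, ye = ball_walk((0.1, 0.2), eps=0.3, rng=random.Random(1))
    print(math.hypot(xe, ye))  # close to 1: the walk accumulates on the boundary circle
```

Because each step has length strictly smaller than the distance to ∂D, the token stays inside D forever, consistent with Lemma 2.2's accumulation statement (2.3).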


Lemma 2.2 In the above context, the sequence {X_n^{ε,x₀}}_{n=0}^∞ is a martingale relative to the filtration {F_n}_{n=0}^∞, namely:

  E(X_n^{ε,x₀} | F_{n−1}) = X_{n−1}^{ε,x₀}   P-a.s., for all n ≥ 1.

Moreover, there exists a random variable X^{ε,x₀} : Ω → ∂D such that:

  lim_{n→∞} X_n^{ε,x₀} = X^{ε,x₀}   P-a.s.   (2.3)

Proof 1. Since the sequence {X_n^{ε,x₀}}_{n=0}^∞ is bounded, in view of the boundedness of D, Theorem A.38 will yield the convergence in (2.3) provided we check the martingale property. Indeed, it follows that (see Lemma A.17):

  E(X_n^{ε,x₀} | F_{n−1})(w₁, …, w_{n−1}) = ∫_{Ω₁} X_n^{ε,x₀}(w₁, …, w_n) dP₁(w_n)
    = x_{n−1} + (ε ∧ dist(x_{n−1}, ∂D)) ∫_{Ω₁} w_n dP₁(w_n)
    = X_{n−1}^{ε,x₀}(w₁, …, w_{n−1})

for P_{n−1}-a.e. (w₁, …, w_{n−1}) ∈ Ω_{n−1}.

2. It remains to prove that the limiting random variable X^{ε,x₀} : Ω → D̄ satisfies P-a.s. the boundary accumulation property: X^{ε,x₀} ∈ ∂D. Observe that:

  { lim_{n→∞} X_n^{ε,x₀} = X^{ε,x₀} } ∩ { X^{ε,x₀} ∈ D } ⊂ ⋃_{n∈ℕ, δ∈(0,ε)∩ℚ} A(n, δ),   (2.4)

where A(n, δ) = { dist(X_i^{ε,x₀}, ∂D) ≥ δ and |X_{i+1}^{ε,x₀} − X_i^{ε,x₀}| ≤ δ/2 for all i ≥ n }. Then:

  A(n, δ) ⊂ { ω ∈ Ω ; |wᵢ| ≤ 1/2 for all i > n }.

Indeed, if ω = {wᵢ}_{i=1}^∞ ∈ A(n, δ) with δ < ε, it follows that:

  δ/2 ≥ |X_{i+1}^{ε,x₀}(ω) − X_i^{ε,x₀}(ω)| = (ε ∧ dist(X_i^{ε,x₀}(ω), ∂D)) |w_{i+1}| ≥ (ε ∧ δ) |w_{i+1}| = δ |w_{i+1}|,

which implies that |w_{i+1}| ≤ 1/2 for all i ≥ n. Concluding:

  P(A(n, δ)) ≤ lim_{i→∞} P₁(B_{1/2}(0))^{i−n} = 0   for all n ∈ ℕ and all δ ∈ (0, ε).

Hence, the event in the left hand side of (2.4) has probability 0. □


Given now a continuous function F : ∂D → R, define:

  u_ε(x₀) = E[ F ∘ X^{ε,x₀} ] = ∫_Ω F ∘ X^{ε,x₀} dP.   (2.5)

Note that the above construction obeys the comparison principle. Namely, if F, F̄ : ∂D → R are two continuous functions such that F ≤ F̄ on ∂D, then the corresponding u_ε and ū_ε satisfy: u_ε ≤ ū_ε in D.

Remark 2.3 It is useful to view the boundary function F as the restriction to ∂D of some continuous F : D̄ → R, see Exercise 2.7 (i). Then we may write:

  u_ε(x₀) = lim_{n→∞} ∫_Ω F ∘ X_n^{ε,x₀} dP.   (2.6)

Since for each n ≥ 0 the function F ∘ X_n^{ε,x₀} is jointly Borel-regular in the variables x₀ ∈ D and ω ∈ Ω_n, it follows by Theorem A.11 that x₀ ↦ E[F ∘ X_n^{ε,x₀}] is Borel-regular. Consequently, u_ε : D → R is also Borel.

In what follows, we will denote the average A_δ u of an integrable function u : D → R on a ball B_δ(x) ⊂ D by:

  A_δ u(x) = ⨍_{B_δ(x)} u(y) dy.

Directly from Definition 2.1 and (2.5), we conclude that each u_ε satisfies the mean value property on the sampling balls from (2.2):

Theorem 2.4 Let D ⊂ RN be open, bounded, connected, and let F : ∂D → R be continuous. Then the function u_ε : D → R, defined in (2.5) and equivalently in (2.6), is continuous and satisfies:

  u_ε(x) = A_{ε∧dist(x,∂D)} u_ε(x)   for all x ∈ D.

Proof Fix ε ∈ (0, 1) and x₀ ∈ D. For each n ≥ 2 we view (Ω_n, F_n, P_n) as the product of the probability spaces (Ω₁, F₁, P₁) and (Ω_{n−1}, F_{n−1}, P_{n−1}). Applying Fubini's theorem (Theorem A.11), we get:

  E[ F ∘ X_n^{ε,x₀} ] = ∫_{Ω₁} ( ∫_{Ω_{n−1}} F ∘ X_n^{ε,x₀}(w₁, …, w_n) dP_{n−1}(w₂, …, w_n) ) dP₁(w₁)
    = ∫_{Ω₁} E[ F ∘ X_{n−1}^{ε, X₁^{ε,x₀}(w₁)} ] dP₁(w₁),

where F : D̄ → R is some continuous extension of its given values on ∂D, as in (2.6). Passing to the limit with n → ∞ and changing variables, we obtain:

  u_ε(x₀) = ∫_{Ω₁} u_ε( X₁^{ε,x₀}(w₁) ) dP₁(w₁)
    = ∫_{Ω₁} u_ε( x₀ + (ε ∧ dist(x₀, ∂D)) w₁ ) dP₁(w₁)
    = ⨍_{B_{ε∧dist(x₀,∂D)}(x₀)} u_ε(y) dy.

Continuity of u_ε follows directly from the averaging formula and we leave it as an exercise (see Exercise 2.7 (ii)). □

The next two statements imply the uniqueness of classical solutions to the boundary value problem for the Laplacian. The same property, in the basic analytical setting that we review in Sect. C.3, follows via the maximum principle.

Corollary 2.5 Given ε ∈ (0, 1) and x₀ ∈ D, let {X_n^{ε,x₀}}_{n=0}^∞ be as defined in (2.2). In the setting of Theorem 2.4, the sequence {u_ε ∘ X_n^{ε,x₀}}_{n=0}^∞ is then a martingale relative to the filtration {F_n}_{n=0}^∞.

Proof Indeed, Lemma A.17 yields, for all n ≥ 1:

  E(u_ε ∘ X_n^{ε,x₀} | F_{n−1})(w₁, …, w_{n−1})
    = ∫_{Ω₁} (u_ε ∘ X_n^{ε,x₀})(w₁, …, w_n) dP₁(w_n)
    = ∫_{Ω₁} u_ε( X_{n−1}^{ε,x₀}(w₁, …, w_{n−1}) + (ε ∧ dist(x_{n−1}, ∂D)) w_n ) dP₁(w_n)
    = ⨍_{B_{ε∧dist(x_{n−1},∂D)}(x_{n−1})} u_ε(y) dy = (u_ε ∘ X_{n−1}^{ε,x₀})(w₁, …, w_{n−1}),   (2.7)

valid for P_{n−1}-a.e. (w₁, …, w_{n−1}) ∈ Ω_{n−1}. □

Lemma 2.6 In the setting of Theorem 2.4, assume that u ∈ C(D̄) solves:

  Δu = 0 in D,   u = F on ∂D.   (2.8)

Then u = u_ε for all ε ∈ (0, 1). In particular, (2.8) has at most one solution.

Proof We first claim that, given x₀ ∈ D and ε ∈ (0, 1), the sequence {u ∘ X_n^{ε,x₀}}_{n=0}^∞ is a martingale relative to {F_n}_{n=0}^∞. This property follows exactly as in (2.7), where u_ε is now replaced by u and where the mean value property for harmonic functions (C.8) is used instead of the single-radius averaging formula of Theorem 2.4. Consequently, we get:

  u(x₀) = E[ u ∘ X₀^{ε,x₀} ] = E[ u ∘ X_n^{ε,x₀} ]   for all n ≥ 0.

Since the right hand side above converges to u_ε(x₀) as n → ∞, it follows that u(x₀) = u_ε(x₀). To prove the second claim, recall that u_ε(x₀) depends only on the boundary values u|_{∂D} = F and not on their extension u on D̄. This yields the uniqueness of the harmonic extension in (2.8). □

We finally remark that the mean value property stated in Theorem 2.4 suffices to conclude that each u_ε is harmonic (see Sect. C.3). One can also show that all functions in the family {u_ε}_{ε∈(0,1)} are the same, even in the absence of a classical harmonic extension u satisfying (2.8). This general result will be given two independent proofs in Sects. 2.6* and 2.7*. In the next section, we provide an elementary proof in domains that are sufficiently regular. An entirely similar strategy, based on showing the uniform convergence of {u_ε}_{ε→0} in D̄ and analysing its limit, will be adopted in Chaps. 3–6 for the p-harmonic case, p ∈ (1, ∞), in the context of Tug-of-War with noise.

Exercise 2.7
(i) Let F : A → R be a continuous function on a compact set A ⊂ RN. Verify that setting:

  F(x) = min_{y∈A} ( F(y) + |x − y| / dist(x, A) − 1 )   for all x ∈ RN \ A,

defines a continuous extension of F on RN. This construction is due to Hausdorff and it provides a proof of the Tietze extension theorem.
(ii) Let u : RN → R be a bounded, Borel function and let δ : RN → (0, ∞) be continuous. Show that the function: x ↦ A_{δ(x)} u(x) is continuous on RN.

Exercise 2.8 Modify the construction of the ball walk to the sphere walk, using the outline below.
(i) Let Ω₁ = ∂B₁(0) ⊂ RN and let P₁ = σ^{N−1} be the normalized spherical measure on the Borel σ-algebra F₁ of subsets of Ω₁ (see Example A.9). Define the induced probability spaces (Ω, F, P) and {(Ω_n, F_n, P_n)}_{n=0}^∞ as in the case of the ball walk. For every ε ∈ (0, 1) and x₀ ∈ D, let {X_n^{ε,x₀} : Ω → RN}_{n=0}^∞ be the sequence of random variables in:

  X₀^{ε,x₀} ≡ x₀, and for all n ≥ 1 and all (w₁, …, w_n) ∈ Ω_n:
  X_n^{ε,x₀}(w₁, …, w_n) = x_{n−1} + (ε ∧ ½ dist(x_{n−1}, ∂D)) w_n, where x_{n−1} = X_{n−1}^{ε,x₀}(w₁, …, w_{n−1}).

Prove that {X_n^{ε,x₀}}_{n=0}^∞ is a martingale relative to the filtration {F_n}_{n=0}^∞ and that (2.3) holds for some random variable X^{ε,x₀} : Ω → ∂D.
(ii) For a continuous function F : ∂D → R, define u_ε : D → R according to (2.5). Show that u_ε is Borel-regular and that it satisfies:

  u_ε(x) = ∫_{∂B_{ε ∧ ½dist(x,∂D)}(x)} u_ε(y) dσ^{N−1}(y)   for all x ∈ D.

(iii) Deduce that if F has a harmonic extension u on D̄ as in (2.8), then u = u_ε for all ε ∈ (0, 1).
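As an aside (an illustration, not part of the text), the Hausdorff extension of Exercise 2.7 (i) is direct to implement when A is a finite set of points, assuming the formula reads F(y) + |x − y|/dist(x, A) − 1; the choice of A and F below is arbitrary.

```python
import math

def hausdorff_extension(F, A):
    """Extend F : A -> R, with A a finite set of points of R^N, to all of R^N
    via the formula of Exercise 2.7 (i):
        F(x) = min over y in A of ( F(y) + |x - y| / dist(x, A) - 1 )."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def Fext(x):
        d = min(dist(x, y) for y in A)      # dist(x, A)
        if d == 0.0:                         # x in A: keep the prescribed value
            return F[tuple(x)]
        return min(F[tuple(y)] + dist(x, y) / d - 1.0 for y in A)

    return Fext

if __name__ == "__main__":
    # Two points on the real line (N = 1), with values 0 and 5.
    A = [(0.0,), (1.0,)]
    F = {(0.0,): 0.0, (1.0,): 5.0}
    Fext = hausdorff_extension(F, A)
    print(Fext((0.1,)), Fext((0.9,)))  # near each point of A, the local value wins
```

Near a point a ∈ A the ratio |x − a|/dist(x, A) tends to 1, so the term for a tends to F(a), while the terms for far points blow up; this is the mechanism behind the continuity of the extension.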

2.3 The Ball Walk and Harmonic Functions

The main result of this section states that the uniform limits of the values {u_ε}_{ε→0} of the ball walk introduced in Sect. 2.2 are automatically harmonic. The proof relies on checking that each limiting function u satisfies the mean value property on spheres. This is achieved by applying Doob's theorem to u_ε evaluated along its own walk process {X_n^{ε,x₀}}_{n=0}^∞, choosing to stop on exiting the ball whose boundary coincides with the given sphere.

Theorem 2.9 Let J ⊂ (0, 1) be a sequence decreasing to 0. Assume that {u_ε}_{ε∈J}, defined in (2.5), converges locally uniformly in D, as ε → 0, ε ∈ J, to some u ∈ C(D). Then u must be harmonic.

Proof 1. In virtue of Theorem C.19, it suffices to prove that:

  u(x₀) = ∫_{∂B_r(x₀)} u(y) dσ^{N−1}(y)   for all B̄_{2r}(x₀) ⊂ D.   (2.9)

Fix x₀ ∈ D and r ≤ ½ dist(x₀, ∂D), and for each ε ∈ J consider the following random variable τ^ε : Ω → ℕ ∪ {+∞}:

  τ^ε = inf{ n ≥ 1 ; X_n^{ε,x₀} ∉ B_r(x₀) },

where {X_n^{ε,x₀}}_{n=0}^∞ is the usual sequence of the token positions (2.2) in the ball walk started at x₀. Clearly, τ^ε is a.s. finite in view of the convergence to the boundary in (2.3), and it is a stopping time relative to the filtration {F_n}_{n=0}^∞. The


corresponding stopping position x_{τ^ε} is indicated in Fig. 2.2. By Corollary 2.5, Doob's theorem (Theorem A.31 (ii)) yields:

  u_ε(x₀) = E[ u_ε ∘ X₀^{ε,x₀} ] = E[ u_ε ∘ X_{τ^ε}^{ε,x₀} ],

while by passing to the limit with ε → 0 we obtain, by the uniform convergence:

  u(x₀) = lim_{ε→0, ε∈J} E[ u ∘ X_{τ^ε}^{ε,x₀} ] = lim_{ε→0, ε∈J} ∫_{B̄_{r+ε}(x₀) \ B_r(x₀)} u(y) dσ^ε(y).   (2.10)

The Borel probability measures {σ^ε}_{ε∈(0,r)} are here defined on B̄_{2r}(x₀) \ B_r(x₀) by the push-forward procedure, as in Exercise A.8:

  σ^ε(A) = P( X_{τ^ε}^{ε,x₀} ∈ A ).

2. We now identify the limit in the right hand side of (2.10). Observe that, by construction, the measures σ^ε are rotationally invariant. Further, by Prohorov's theorem (Theorem A.10), each subsequence of {σ^ε}_{ε→0, ε∈J} has a further subsequence that converges (weakly-∗) to a Borel probability measure μ on B̄_{2r}(x₀) \ B_r(x₀). Since each σ^ε is supported in B̄_{r+ε}(x₀) \ B_r(x₀), the limit μ must be supported on ∂B_r(x₀). Also, μ is rotationally invariant in view of the same property of each σ^ε. Consequently, μ = σ^{N−1} must be the uniquely defined, normalized spherical measure on ∂B_r(x₀) (see Exercises 2.10 and 2.11). As the limit does not depend on the chosen subsequence of J, we conclude:

  lim_{ε→0, ε∈J} ∫_{B̄_{2r}(x₀) \ B_r(x₀)} u(y) dσ^ε(y) = ∫_{∂B_r(x₀)} u(y) dσ^{N−1}(y).

Together with (2.10), this establishes (2.9) as claimed. □

Fig. 2.2 The stopping position x_{τ^ε} in the proof of Theorem 2.9


Exercise 2.10 Show that every (weak-∗) limit point of the family of probability measures {σ^ε}_{ε∈(0,r)} defined in (2.10) must be rotationally invariant and supported on ∂B_r(x₀).

Exercise 2.11 Using the following outline, prove that the only rotationally invariant, Borel probability measure μ on ∂B₁(0) ⊂ RN is the normalized spherical measure σ^{N−1}.
(i) Fix an open set U ⊂ ∂B₁(0) and consider the sequence of Borel functions { x ↦ μ(U ∩ B(x, 1/n)) / μ(B(x, 1/n)) }_{n=1}^∞, where B(x, r) denotes the (N−1)-dimensional curvilinear ball in ∂B₁(0) centred at x and with radius r ∈ (0, 1). Apply Fatou's lemma (Theorem A.6) and Fubini's theorem (Theorem A.11) to the indicated sequence and deduce that:

  σ^{N−1}(U) ≤ lim inf_{n→∞} ( σ^{N−1}(B(x, 1/n)) / μ(B(x, 1/n)) ) · μ(U),   (2.11)

where both quantities σ^{N−1}(B(x, 1/n)) and μ(B(x, 1/n)) are independent of x ∈ ∂B₁(0) because of the rotational invariance.
(ii) Exchange the roles of μ and σ^{N−1} in the above argument and conclude:

  lim_{n→∞} σ^{N−1}(B(x, 1/n)) / μ(B(x, 1/n)) = 1.

Thus, μ(U) = σ^{N−1}(U) for all open sets U, so there must be μ = σ^{N−1}. This proof is due to Christensen (1970) and the result is a particular case of Haar's theorem on the uniqueness of invariant measures on compact topological groups.

2.4 Convergence at the Boundary and Walk-Regularity

We now investigate conditions assuring the validity of the uniform convergence assumption of Theorem 2.9. It turns out that such a condition may be formulated independently of the boundary data F, only in terms of the behaviour of the ball walk (2.2) close to ∂D, which is in turn guaranteed by a geometric sufficient condition in the next section. In Theorem 2.14 we will show how the boundary regularity of the process can be translated (via walk coupling) into the interior regularity, resulting in the existence of a harmonic extension u of F on D̄, and ultimately yielding u = u_ε for all ε ∈ (0, 1), in virtue of Lemma 2.6.


Fig. 2.3 Walk-regularity of a boundary point y₀ ∈ ∂D

Definition 2.12 Consider the ball walk (2.2) on a domain D ⊂ RN.
(a) We say that a boundary point y₀ ∈ ∂D is walk-regular if for every η, δ > 0 there exist δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1) such that:

  P( X^{ε,x₀} ∈ B_δ(y₀) ) ≥ 1 − η   for all ε ∈ (0, ε̂) and all x₀ ∈ B_δ̂(y₀) ∩ D,

where X^{ε,x₀} is the limit in (2.3) of the ε-ball walk started at x₀.
(b) We say that D is walk-regular if every y₀ ∈ ∂D is walk-regular (Fig. 2.3).

Lemma 2.13 Assume that the boundary point y₀ ∈ ∂D of a given open, bounded, connected domain D is walk-regular. Then for every continuous F : ∂D → R, the family {u_ε}_{ε→0} defined in (2.5) satisfies the following. For every η > 0 there are δ̂ > 0 and ε̂ ∈ (0, 1) such that:

  |u_ε(x₀) − F(y₀)| ≤ η   for all ε ∈ (0, ε̂) and all x₀ ∈ B_δ̂(y₀) ∩ D.   (2.12)

Proof Given η > 0, let δ > 0 satisfy:

  |F(y) − F(y₀)| ≤ η/2   for all y ∈ ∂D such that |y − y₀| < δ.

By Definition 2.12, we choose ε̂ and δ̂ corresponding to η/(4‖F‖_∞ + 1) and δ. Then:

  |u_ε(x₀) − F(y₀)| ≤ ∫_Ω |F ∘ X^{ε,x₀} − F(y₀)| dP
    ≤ P( X^{ε,x₀} ∉ B_δ(y₀) ) · 2‖F‖_∞ + ∫_{{X^{ε,x₀} ∈ B_δ(y₀)}} |F ∘ X^{ε,x₀} − F(y₀)| dP
    ≤ ( η / (4‖F‖_∞ + 1) ) · 2‖F‖_∞ + η/2 ≤ η,

for all x₀ ∈ B_δ̂(y₀) ∩ D and all ε ∈ (0, ε̂). This completes the proof. □


By Lemma 2.13 and Theorem 2.9 we achieve the main result of this chapter:

Theorem 2.14 Let D be walk-regular. Then, for every continuous F : ∂D → R, the family {u_ε}_{ε∈(0,1)} in (2.5) satisfies u_ε = u, where u ∈ C(D̄) is the unique solution of the boundary value problem:

  Δu = 0 in D,   u = F on ∂D.

Proof 1. Let F : ∂D → R be a given continuous function. We will show that {u_ε}_{ε→0} is "asymptotically equicontinuous in D", i.e.: for every η > 0 there exist δ > 0 and ε̂ ∈ (0, 1) such that:

  |u_ε(x₀) − u_ε(y₀)| ≤ η   for all ε ∈ (0, ε̂) and all x₀, y₀ ∈ D with |x₀ − y₀| ≤ δ.   (2.13)

Since {u_ε}_{ε→0} is equibounded (by ‖F‖_∞), condition (2.13) implies that for every sequence J ⊂ (0, 1) converging to 0, one can extract a further subsequence of {u_ε}_{ε∈J} that converges locally uniformly in D̄. Further, in view of (2.12) it follows that u ∈ C(D̄) and u = F on ∂D (see Exercise 2.17). By Theorem 2.9, we get that u is harmonic in D and the result follows in virtue of Lemma 2.6.

2. To show (2.13), fix η > 0 and choose δ̄ > 0 such that |F(y) − F(ȳ)| ≤ η/3 for all y, ȳ ∈ ∂D with |y − ȳ| ≤ 3δ̄. By (2.12), for each y₀ ∈ ∂D there exist δ̂(y₀) ∈ (0, δ̄) and ε̂(y₀) ∈ (0, 1) satisfying:

  |u_ε(x₀) − F(y₀)| ≤ η/3   for all ε ∈ (0, ε̂(y₀)) and all x₀ ∈ B_{δ̂(y₀)}(y₀) ∩ D.

The family of balls {B_{δ̂(y)}(y)}_{y∈∂D} is then a covering of the compact set ∂D; let {B_{δ̂(yᵢ)}(yᵢ)}_{i=1}^n be its finite sub-cover and set ε̂ = min_{i=1,…,n} ε̂(yᵢ). Clearly:

  ∂D + B_{2δ}(0) ⊂ ⋃_{i=1}^n B_{δ̂(yᵢ)}(yᵢ)

for some δ > 0, where we additionally request that δ < δ̄. This implies:

  |u_ε(x₀) − u_ε(y₀)| ≤ η   for all ε ∈ (0, ε̂) and all x₀, y₀ ∈ (∂D + B_{2δ}(0)) ∩ D with |x₀ − y₀| ≤ δ.   (2.14)

3. To conclude the proof of (2.13), fix ε ∈ (0, ε̂ ∧ δ) and let x₀, y₀ ∈ D satisfy dist(x₀, ∂D) ≥ δ, dist(y₀, ∂D) ≥ δ and |x₀ − y₀| < δ. Define τ_δ : Ω → ℕ ∪ {+∞}:

  τ_δ = min{ n ≥ 1 ; dist(x_n, ∂D) < δ or dist(y_n, ∂D) < δ },

where {x_n = X_n^{ε,x₀}}_{n=0}^∞ and {y_n = X_n^{ε,y₀}}_{n=0}^∞ denote the consecutive positions in the process (2.2) started at x₀ and y₀, respectively, and driven by the same random outcomes {w_n}. It is clear that τ_δ is finite P-a.s. in view of the convergence to the boundary in (2.3), and it is a stopping time relative to the filtration {F_n}_{n=0}^∞. By Corollary 2.5 and Doob's theorem (Theorem A.31 (ii)) it follows that:

  u_ε(x₀) = E[ u_ε ∘ X_{τ_δ}^{ε,x₀} ]   and   u_ε(y₀) = E[ u_ε ∘ X_{τ_δ}^{ε,y₀} ].

Since |X_{τ_δ}^{ε,x₀} − X_{τ_δ}^{ε,y₀}| = |x₀ − y₀| < δ and X_{τ_δ}^{ε,x₀}, X_{τ_δ}^{ε,y₀} ∈ (∂D + B_{2δ}(0)) ∩ D for a.e. ω ∈ Ω, we conclude by (2.14) that:

  |u_ε(x₀) − u_ε(y₀)| ≤ ∫_Ω |u_ε ∘ X_{τ_δ}^{ε,x₀} − u_ε ∘ X_{τ_δ}^{ε,y₀}| dP ≤ η.

This ends the proof of (2.13) and of the Theorem. □

Walk-regularity is, in fact, equivalent to the convergence of u_ε to the right boundary values. We have the following observation, converse to Lemma 2.13:

Lemma 2.15 If y₀ ∈ ∂D is not walk-regular, then there exists a continuous function F : ∂D → R such that for u_ε in (2.5) there holds:

  lim sup_{x→y₀, ε→0} u_ε(x) ≠ F(y₀).

Proof Define F(y) = |y − y₀| for all y ∈ ∂D. By assumption, there exist η, δ > 0 and sequences {εⱼ}_{j=1}^∞, {xⱼ ∈ D}_{j=1}^∞ such that:

  lim_{j→∞} εⱼ = 0,   lim_{j→∞} xⱼ = y₀   and   P( X^{εⱼ,xⱼ} ∉ B_δ(y₀) ) > η for all j ≥ 1,

where each X^{εⱼ,xⱼ} above stands for the limiting random variable in (2.3) corresponding to the εⱼ-ball walk. By the nonnegativity of F, it follows that:

  u_{εⱼ}(xⱼ) − F(y₀) = ∫_Ω F ∘ X^{εⱼ,xⱼ} dP ≥ ∫_{{X^{εⱼ,xⱼ} ∉ B_δ(y₀)}} F ∘ X^{εⱼ,xⱼ} dP > ηδ > 0,

proving the claim. □

Exercise 2.16 Show that if D is walk-regular then δ̂ and ε̂ in Definition 2.12 (a) can be chosen independently of y₀ (i.e., δ̂ and ε̂ depend only on the parameters η and δ).


Exercise 2.17 Let {u_ε}_{ε∈J} be an equibounded family of functions u_ε : D → R defined on an open, bounded set D ⊂ RN, satisfying (2.12) and (2.13) with some continuous F : ∂D → R. Prove that {u_ε}_{ε∈J} must have a subsequence that converges uniformly, as ε → 0, ε ∈ J, to a continuous function u : D̄ → R.
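The walk coupling in step 3 of the proof of Theorem 2.14 can be seen concretely (an illustration, not part of the text): running the two walks with the same outcomes w_n, each takes the identical increment ε·w_n as long as both tokens stay at distance at least δ > ε from ∂D, so their difference is frozen. A minimal sketch in the unit disk, with all parameter values chosen arbitrarily:

```python
import math
import random

def coupled_steps(x0, y0, eps, delta, rng, max_steps=10_000):
    """Advance two eps-ball walks in the unit disk with the SAME outcomes w_n,
    until one of them gets within delta of the boundary (or max_steps elapse)."""
    x, y = list(x0), list(y0)
    for _ in range(max_steps):
        dx = 1.0 - math.hypot(*x)
        dy = 1.0 - math.hypot(*y)
        if dx < delta or dy < delta:
            break
        while True:  # one shared uniform sample w in the unit ball B_1(0)
            w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
            if w[0] ** 2 + w[1] ** 2 < 1.0:
                break
        # since eps < delta <= dist for both walks, both step radii equal eps
        x = [xi + eps * wi for xi, wi in zip(x, w)]
        y = [yi + eps * wi for yi, wi in zip(y, w)]
    return x, y

if __name__ == "__main__":
    rng = random.Random(2)
    x, y = coupled_steps((0.05, 0.0), (0.0, 0.05), eps=0.02, delta=0.1, rng=rng)
    gap = math.hypot(x[0] - y[0], x[1] - y[1])
    print(gap)  # equals the initial gap |x0 - y0|, up to floating point rounding
```

The preserved gap is exactly the identity |X_{τ_δ}^{ε,x₀} − X_{τ_δ}^{ε,y₀}| = |x₀ − y₀| used in the proof.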

2.5 A Sufficient Condition for Walk-Regularity

In this section we state a geometric condition (the exterior cone condition) implying the validity of the walk-regularity condition introduced in Definition 2.12. We remark that the exterior cone condition in Theorem 2.19 may be weakened to the so-called exterior corkscrew condition, and that the analysis below is valid not only in the presently studied linear case p = 2, but also in the nonlinear setting of an arbitrary exponent p ∈ (1, ∞). This will be explained in Chap. 6, with proofs conceptually based on what follows.

We begin by observing a useful technical reformulation of the regularity condition in Definition 2.12. Namely, at a walk-regular boundary point y₀, not only may the limiting position of the ball walk be guaranteed to stay close to y₀ with high probability, but the same local property may, in fact, be requested of the whole walk trajectory, with uniformly positive probability.

Lemma 2.18 Let D ⊂ RN be an open, bounded, connected domain. For a given boundary point y₀ ∈ ∂D, assume that there exists θ₀ < 1 such that for every δ > 0 there are δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1) with the following property. For all ε ∈ (0, ε̂) and all x₀ ∈ B_δ̂(y₀) ∩ D there holds:

  P( ∃ n ≥ 0  X_n^{ε,x₀} ∉ B_δ(y₀) ) ≤ θ₀,   (2.15)

where {X_n^{ε,x₀}}_{n=0}^∞ is the ε-ball walk defined in (2.2). Then y₀ is walk-regular.

Proof 1. Fix η, δ > 0 and let m ∈ ℕ be such that: θ₀^m ≤ η. Define the tuples {ε_k}_{k=0}^m, {δ_k}_{k=0}^m and {δ̂_k}_{k=0}^{m−1} inductively, in:

  δ_m = δ,   ε_m = 1,
  δ̂_{k−1} ∈ (0, δ_k) and ε_{k−1} ∈ (0, ε_k) for all k = 1, …, m, so that:
    P( ∃ n ≥ 0  X_n^{ε,x₀} ∉ B_{δ_k}(y₀) ) ≤ θ₀   for all x₀ ∈ B_{δ̂_{k−1}}(y₀) ∩ D and all ε ∈ (0, ε_{k−1}),   (2.16)
  δ_{k−1} ∈ (0, δ̂_{k−1}) for all k = 2, …, m.


We finally set: . ˆ = 0 ∧

min

k=1,...,m−1

|δˆk − δk |

and

. δˆ = δˆ0 .

ˆ We will show that: Fix x0 ∈ Bδˆ (y0 ) ∩ D and  ∈ (0, ).   P ∃n ≥ 0 Xn,x0 ∈ Bδk (y0 )   ≤ θ0 · P ∃n ≥ 0 Xn,x0 ∈ Bδk−1 (y0 )

for all k = 2, . . . , m.

(2.17)

Together with the inequality in (2.16) for k = 1, the above bounds will yield:     P X,x0 ∈ B2δ (y0 ) ≤ P ∃n ≥ 0 Xn,x0 ∈ Bδ (y0 ) ≤ θ0m ≤ η. Since η and δ were arbitrary, the validity of the condition in Definition 2.12 will thus be justified, proving the walk-regularity of y0 . 2. Towards showing (2.17), we denote: ˜ = {∃n ≥ 0 Xn,x0 ∈ Bδk−1 (y0 )} ⊂ . ˜ Without loss of generality, we otherwise   may  assume that P( ) > 0, because  P ∃n ≥ 0 Xn,x0 ∈ Bδk (y0 ) ≤ P ∃n ≥ 0 Xn,x0 ∈ Bδk−1 (y0 ) = 0 and (2.17) ˜ P) ˜ F, ˜ defined by: holds then trivially. Consider the probability space ( , P(A) ˜ = {A ∩ ; ˜ A ∈ F} and P(A) ˜ F = for all A ∈ F˜ . ˜ P( )  f in = ∞ Also, let the measurable space ( f in , Ff in ) be given by:  n=1 n and by taking Ff in to be the smallest σ -algebra containing ∞ F . Then the n n=1 ˜ → N: following random variable τk :   . τk = min n ≥ 1; Xn,x0 ∈ Bδk−1 (y0 ) ˜ n = {A ∩ ; ˜ A∈ ˜ with respect to the induced filtration {F is a stopping time on ∞ Fn }}n=0 . We consider two further random variables below: ˜ → f in Y1 : ˜ → Y2 :

  . τk Y1 {wi }∞ i=1 = {wi }i=1   . ∞ Y2 {wi }∞ i=1 = {wi }i=τk +1

and observe that they are independent, namely:     P˜ Y1 ∈ A1 ) · P˜ Y2 ∈ A2 ) = P˜ {Y1 ∈ A1 } ∩ {Y2 ∈ A2 } for all A1 ∈ Ff in , A2 ∈ F.

2.5 A Sufficient Condition for Walk-Regularity

25

We now apply Lemma A.21 to Y1 , Y2 and to the indicator function:  .   Z {wi }si=1 , {wi }∞ i=s+1 = 1 ∃n≥0

,x0 ({wi }∞ i=1 ) ∈Bδk (y0 )

Xn



that is a random variable on the measurable space f in × , equipped with the product σ -algebra of Ff in and F. It follows that:   P ∃n ≥ 0 Xn,x0 ∈ Bδk (y0 ) =

ˆ ˜

Z ◦ (Y1 , Y2 ) dP˜ =

ˆ ˜

˜ 1 ), f (ω1 ) dP(ω

˜ where for each ω1 = {wi }∞ i=1 ∈ we have:     τk ,x0 ∞ f (ω1 ) = P {w¯ i }∞ {w ∈ B ∈ ; ∃n ≥ 0 X } , { w ¯ } (y ) i i δ 0 n k i=1 i=τk +1 i=1   ,xτk ˜ · θ0 , ˜ · P ∃n ≥ 0 Xn ∈ Bδk (y0 ) ≤ P( ) = P( ) in view of xτk ∈ Bδˆk (y0 ) and the construction assumption (2.16). This ends the proof of (2.17) and of the lemma.   The main result of this section is a geometric sufficient condition for walkregularity. When combined with Theorem 2.14, it implies that every continuous boundary data F admits the unique harmonic extension to any Lipschitz domain D. This extension automatically coincides with all process values u , regardless of the choice of the upper bound sampling radius  ∈ (0, 1).

Theorem 2.19 Let D ⊂ RN be open, bounded, connected and assume that y0 ∈ ∂D satisfies the exterior cone condition, i.e., there exists a finite cone C ⊂ RN \ D with the tip at y0 . Then y0 is walk-regular.

Proof The exterior cone condition assures the existence of a constant R > 0 such that for all sufficiently small ρ > 0 there exists z₀ ∈ C satisfying:

  |z₀ − y₀| = ρ(1 + R)   and   B_{Rρ}(z₀) ⊂ C ⊂ RN \ D.   (2.18)

Let δ > 0 be, without loss of generality, sufficiently small and define z₀ ∈ RN as in condition (2.18) with ρ = δ̂, where we set δ̂ = δ/(4 + 2R) (Fig. 2.4). We will show that (2.15) holds for all ε ∈ (0, 1). Fix x₀ ∈ B_δ̂(y₀) ∩ D and consider the profile function v : (0, ∞) → R in:

  v(t) = { sgn(N − 2) t^{2−N}   for N ≠ 2,
         { − log t              for N = 2.
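As an aside (an illustration, not part of the text): that the radial function x ↦ v(|x|) built from the profile above is harmonic away from the origin can be double-checked by finite differences. A minimal sketch for N = 3, where v(t) = t^{−1}; the step size and tolerance are our own choices:

```python
import math

def laplacian_fd(f, x, h=1e-3):
    """Second-order central-difference approximation of the Laplacian of f at x."""
    n = len(x)
    fx = f(x)
    total = 0.0
    for i in range(n):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(xp) - 2.0 * fx + f(xm)) / (h * h)
    return total

if __name__ == "__main__":
    # N = 3 profile: v(t) = sgn(N-2) t^{2-N} = 1/t, hence the radial function x -> 1/|x|.
    v_radial = lambda x: 1.0 / math.sqrt(sum(t * t for t in x))
    print(laplacian_fd(v_radial, [1.0, 1.0, 1.0]))  # near 0: harmonic away from the origin
```

The same check with v(t) = −log t and points in R² gives a value near 0 as well, matching the N = 2 branch of the definition.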


Fig. 2.4 The concentric balls in the proof of Theorem 2.19

By Exercise C.27, the radial function x ↦ v(|x − z₀|) is harmonic in RN \ {z₀}, so in view of Lemma 2.6 the sequence of random variables {v ∘ |X_n^{ε,x₀} − z₀|}_{n=0}^∞ is a martingale with respect to the filtration {F_n}_{n=0}^∞. Further, define the random variable τ : Ω → ℕ ∪ {+∞} by:

  τ = inf{ n ≥ 0 ; X_n^{ε,x₀} ∉ B_δ(y₀) },

where we suppress the dependence on ε in the above notation. Applying Doob's theorem (Theorem A.31 (ii)) we obtain:

  v(|x₀ − z₀|) = E[ v ∘ |X₀^{ε,x₀} − z₀| ] = E[ v ∘ |X_{τ∧n}^{ε,x₀} − z₀| ]   for all n ≥ 0,

because for every n ≥ 0, the a.s. finite random variable τ ∧ n is a stopping time. Passing to the limit with n → ∞ and recalling the definition (2.3), now yields:

  ∫_{{τ < ∞}} …

Fix t > 0 and observe that:

  {τ̄_{k+1} ≤ t} = {w_{k+1} = 0} ∪ ⋃_{m>0} ⋃_{q∈[0,t]∩ℚ} ( {τ̄_k ≤ q} ∩ { |B^N_q − B^N_{τ̄_k}| ≥ (ε ∧ dist(B^N_{τ̄_k}, ∂D − x₀)) |w_{k+1}| − 1/m } ).

Now {w_{k+1} = 0} ∈ F̄_t, whereas each of the countably many sets in the right hand side above may be written as:

  {τ̄_k ≤ q} ∩ { |B^N_q − B^N_{τ̄_k} · 1_{τ̄_k≤q}| ≥ (ε ∧ dist(B^N_{τ̄_k} · 1_{τ̄_k≤q}, ∂D − x₀)) |w_{k+1}| − 1/m }.

Since B^N_{τ̄_k} · 1_{τ̄_k≤q} is F̄_q-measurable, by the inductive assumption it follows that the set above belongs to F̄_q ⊂ F̄_t, as claimed. Since τ̄_k ≤ τ̄, we further deduce that each τ̄_k is a.s. finite, hence a stopping time.

2. Let τ̄_∞ be the pointwise limit of the nondecreasing sequence {τ̄_k}_{k=0}^∞. Clearly τ̄_∞ is a stopping time and τ̄_∞ ≤ τ̄. To show that τ̄_∞ = τ̄ a.s., consider the event:

  A = {τ̄_∞ < +∞} ∩ { ω = {w_k}_{k=1}^∞ ∈ Ω ; |w_k| ≥ 1/2 for infinitely many k }.

2.7* The Ball Walk and Brownian Trajectories


Obviously, P̄(A) = 1, because:

  P( ω ∈ Ω ; |w_k| ≥ 1/2 for finitely many k ) ≤ Σ_{n=1}^∞ P( |w_k| < 1/2 for all k ≥ n ) = 0.

Further, for every (ω_B, ω) ∈ A we get:

  lim_{k→∞} B^N_{τ̄_k} = B^N_{τ̄_∞},

and thus: lim_{k→∞} |B^N_{τ̄_{k+1}} − B^N_{τ̄_k}| = 0, while: |B^N_{τ̄_{k+1}} − B^N_{τ̄_k}| ≥ ½ (ε ∧ dist(x₀ + B^N_{τ̄_k}, ∂D)) whenever |w_k| ≥ ½. Consequently:

  dist( x₀ + B^N_{τ̄_∞}, ∂D ) = lim_{k→∞} dist( x₀ + B^N_{τ̄_k}, ∂D ) = 0,

so x₀ + B^N_{τ̄_∞} ∈ ∂D and hence τ̄_∞ = τ̄ on A. □

Given a continuous boundary function F : ∂D → R, recall that:

  u(x₀) = ∫_Ω̄ F ∘ ( x₀ + B^N_τ̄ ) dP̄   (2.23)

defines a harmonic function u : D → R, in virtue of Corollary B.29, which builds on the classical construction and discussion of the Brownian motion presented in Appendix B. As in Remark 2.3, we view F as a restriction of some F ∈ C(D̄). Then, by Lemma 2.24 we also have:

  u(x₀) = lim_{k→∞} ∫_Ω̄ F ∘ ( x₀ + B^N_{τ̄_k} ) dP̄.

On the other hand, we recall that in (2.6) we defined:

  u_ε(x₀) = ∫_Ω F ∘ X^{ε,x₀} dP = lim_{k→∞} ∫_Ω F ∘ X_k^{ε,x₀} dP.

We now observe:

Theorem 2.25 For all ε ∈ (0, 1) and all x₀ ∈ D there holds: u_ε(x₀) = u(x₀). In fact, we have:

  P̄( x₀ + B^N_τ̄ ∈ A ) = P( X^{ε,x₀} ∈ A )   for all Borel A ⊂ RN.   (2.24)
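The identity (2.24) can be probed numerically (an illustration, not part of the text): the limiting position of the ε-ball walk and the Brownian exit position should induce the same boundary law. The sketch below compares the two empirical means of the coordinate function y ↦ y₁ on the unit disk; the Euler–Maruyama discretization of the Brownian path, and all tolerances, are added assumptions.

```python
import math
import random

def walk_exit(x0, eps, rng, tol=1e-2):
    # limiting position of the eps-ball walk (2.2) in the unit disk, up to tolerance tol
    x, y = x0
    while True:
        d = 1.0 - math.hypot(x, y)
        if d < tol:
            return x, y
        while True:
            wx, wy = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if wx * wx + wy * wy < 1.0:
                break
        r = min(eps, d)
        x, y = x + r * wx, y + r * wy

def brownian_exit(x0, rng, dt=1e-3):
    # Euler-Maruyama sample of the Brownian exit position from the unit disk
    x, y = x0
    s = math.sqrt(dt)
    while math.hypot(x, y) < 1.0:
        x += s * rng.gauss(0.0, 1.0)
        y += s * rng.gauss(0.0, 1.0)
    return x, y

if __name__ == "__main__":
    rng, x0, n = random.Random(3), (0.25, 0.0), 2000
    m_walk = sum(walk_exit(x0, 0.2, rng)[0] for _ in range(n)) / n
    m_bm = sum(brownian_exit(x0, rng)[0] for _ in range(n)) / n
    print(m_walk, m_bm)  # both near 0.25 = harmonic extension of F(y) = y_1 at x0
```

Since the coordinate function is itself a martingale along both processes, both empirical means estimate the same value u(x₀) = 0.25, in line with Theorem 2.25.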


Proof 1. Fix  ∈ (0, 1) and x0 ∈ D. We will prove that for all f ∈ Cc (RN ):    ,x0  E ¯ f ◦ (x0 + BN τ¯k ) = E f ◦ Xk

for all k ≥ 0.

(2.25)

We proceed by induction, defining μk to be the push-forward of P¯ via the random ,x0 ¯ variable x0 + BN τ¯k : → D, and νk to be the push-forward of P via Xk :   μk (A) = P¯ x0 +BN τ¯k ∈ A ,

  νk (A) = P Xk,x0 ∈ A

for all Borel A ⊂ D.

For k = 0 we have: μ_0 = ν_0 = δ_{x_0}. To show that μ_{k+1} = ν_{k+1} if μ_k = ν_k, observe that for f ∈ C_c(R^N), there holds:

  ∫_D f dν_{k+1} = ∫_Ω f ∘ X_{k+1}^{ε,x_0} dP
    = ∫_{Ω_k} ( ∫_{Ω_1} f( X_k^{ε,x_0} + ( ε ∧ dist(X_k^{ε,x_0}, ∂D) ) w_{k+1} ) dP_1(w_{k+1}) ) dP_k
    = ∫_{Ω_k} ⨍_{B_{ε ∧ dist(X_k^{ε,x_0}, ∂D)}(X_k^{ε,x_0})} f(y) dy dP_k
    = ∫_D ⨍_{B_{ε ∧ dist(z, ∂D)}(z)} f(y) dy dν_k(z),   (2.26)

where we used Fubini's theorem and the definition of ν_k, in view of the function z ↦ ⨍_{B_{ε ∧ dist(z,∂D)}(z)} f(y) dy being continuous and bounded.

2. On the other hand, we write:

  ∫_D f dμ_{k+1} = ∫_Ω̄ f ∘ ψ ∘ ( Z_1, Z_2, Z_3 ) dP̄,   (2.27)

where Z_1 = x_0 + B^N_{τ̄_k} is a D̄-valued random variable on (Ω̄, F̄). Further:

  Z_2(ω) = ( [0, ∞) ∋ t ↦ B^N_{τ̄_k + t} − B^N_{τ̄_k} ∈ R^N )

is the E-valued random variable on (Ω̄, F̄), in the notation of Exercise B.27:

  E = { f ∈ C([0, ∞), R^N); f(0) = 0 and |f(t)| > diam D for some t > 0 }.

Here, E is a measurable space when equipped with the Borel σ-algebra induced by the topology of uniform convergence on compact intervals [0, T], for all T > 0.


This σ-algebra is generated by sets of the type:

  A_{g,T,δ} = { h ∈ E; ‖h − g‖_{L∞([0,T])} ≤ δ }

for polynomials g with rational coefficients and rational numbers T, δ > 0. Finally, we set Z_3 = |w_{k+1}| to be the R-valued random variable on (Ω̄, F̄), whereas ψ : D̄ × E × [0, diam D] → R^N in (2.27) is given by:

  ψ(z, h, r) = z + h( min{ t ≥ 0; |h(t)| = ( ε ∧ dist(z, ∂D) ) r } ).

As in Exercise B.27, ψ is measurable with respect to the product σ-algebra of: Borel subsets of D̄, the indicated σ-algebra in E and the Borel σ-algebra in R. We now observe that Z_1, Z_2, Z_3 are independent, by the strong Markov property in Theorem B.24. Indeed, Z_1 is F̄_{τ̄_k}-measurable, whereas each preimage of a basis set:

  Z_2^{-1}(A_{g,T,δ}) = ⋂_{q ∈ [0,T] ∩ Q} { B^N_{τ̄_k + q} − B^N_{τ̄_k} ∈ B̄_δ(g(q)) }

belongs to the σ-algebra generated by the Brownian motion {B^N_{τ̄_k + t} − B^N_{τ̄_k}}_{t ≥ 0}, which is independent of F̄_{τ̄_k}. Also, the latter σ-algebra is contained in F_B × F, as is F̄_{τ̄_k}.

3. Let μ^{(i)}_{k+1} denote the push-forward of P̄ via the corresponding random variable Z_i, i = 1 . . . 3. By the very definition, we have: μ^{(1)}_{k+1} = μ_k and μ^{(3)}_{k+1} is the law of |w_{k+1}| on [0, 1], whereas μ^{(2)}_{k+1} coincides with the Wiener measure μ_W as in Exercise B.17. For each fixed r̄ ∈ [0, diam D], consider the stopping time τ^{r̄} = min{ t ≥ 0; |B^N_t| = r̄ } and observe that the push-forward of μ^{(2)}_{k+1} on ∂B_{r̄}(0) via the measurable mapping B^N_{τ^{r̄}} is rotationally invariant, as in Corollary B.29. Hence, the aforementioned push-forward coincides with the normalized spherical measure σ^{N−1}, by Exercise 2.11. Fubini's theorem, combined with Exercise A.20, now yields:

  ∫_D f dμ_{k+1} = ∫_{D̄ × E × [0, diam D]} f ∘ ψ d( μ^{(1)}_{k+1} × μ^{(2)}_{k+1} × μ^{(3)}_{k+1} )
    = ∫_D ∫_{[0,1]} ∫_E f ∘ ψ(z, h, r) dμ^{(2)}_{k+1}(h) dμ^{(3)}_{k+1}(r) dμ^{(1)}_{k+1}(z)
    = ∫_D ∫_0^1 ∫_Ω̄ f( z + B^N_{τ^{(ε ∧ dist(z,∂D)) r}} ) dP dμ^{(3)}_{k+1}(r) dμ_k(z).

Consequently:

  ∫_D f dμ_{k+1} = ∫_D ∫_0^1 ⨍_{∂B_{(ε ∧ dist(z,∂D)) r}(z)} f(y) dσ^{N−1}(y) dμ^{(3)}_{k+1}(r) dμ_k(z)
    = ∫_D ⨍_{B_{ε ∧ dist(z,∂D)}(z)} f(y) dy dμ_k(z),   (2.28)


and we see that, since μ_k = ν_k, (2.26) and (2.28) imply: μ_{k+1} = ν_{k+1}, together with (2.25). Passing to the limit k → ∞ achieves the proof of (2.24) and implies u_ε = u in D. □

Exercise 2.26 Modify the arguments in this section to the setting of the sphere walk introduced in Exercise 2.8. Follow the outline below:

(i) Given x_0 ∈ D and ε ∈ (0, 1), show that the following are stopping times on (Ω_B, F_B, P_B):

  τ_0 = 0,  τ_{k+1} = min{ t ≥ τ_k; |B^N_t − B^N_{τ_k}| = (1/2)( ε ∧ dist(x_0 + B^N_{τ_k}, ∂D) ) },

that converge a.s., as k → ∞, to the exit time τ = min{ t ≥ 0; B^N_t ∈ ∂D − x_0 }.

(ii) Let (Ω, F, P) and {X_n^{ε,x_0}}_{n=0}^∞ be as in Exercise 2.8 (i), and define u_ε : D → R according to (2.5) and (2.6). Prove that the push-forward of P on D̄ via X_k^{ε,x_0} coincides with the push-forward of P_B via x_0 + B^N_{τ_k}, for every k ≥ 0. Consequently, u_ε(x_0) = ∫ F ∘ ( x_0 + B^N_τ ) dP, which is the harmonic extension of a given F ∈ C(∂D), independent of ε ∈ (0, 1).
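The sphere walk of Exercise 2.26 is easy to simulate. The sketch below is our own illustration (the unit-disk domain, the boundary data F(x, y) = x and the tolerances are assumptions, not data from the text): following the stopping times in (i), the token jumps to a uniformly random point of the circle of radius (1/2)(ε ∧ dist(x, ∂D)) around its current position, and the empirical mean of F at the exit point estimates the harmonic extension u(x, y) = x.

```python
import math, random

def sphere_walk(x0, eps, tol=1e-3, rng=random):
    """One trajectory of the sphere walk on the unit disk in R^2:
    jump uniformly on the circle of radius (eps ∧ dist(x, ∂D))/2,
    stopping once dist(x, ∂D) < tol."""
    x, y = x0
    while True:
        d = 1.0 - math.hypot(x, y)          # distance to the boundary circle
        if d < tol:
            n = math.hypot(x, y)
            return (x / n, y / n)           # nearest boundary point
        r = 0.5 * min(eps, d)
        t = rng.uniform(0.0, 2.0 * math.pi)
        x, y = x + r * math.cos(t), y + r * math.sin(t)

rng = random.Random(7)
n_walks = 2000
est = sum(sphere_walk((0.3, 0.2), eps=0.4, rng=rng)[0] for _ in range(n_walks)) / n_walks
# est approximates the harmonic extension u(x, y) = x at (0.3, 0.2)
```

Since each jump has mean zero, the first coordinate of the token is a martingale, so the estimate concentrates around x_0's first coordinate, here 0.3.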

2.8 Bibliographical Notes

All constructions, statements of results and proofs in this chapter have their continuous random process counterparts through Brownian motion, see Mörters and Peres (2010). The ball walk can be seen as a modification of the sphere walk in Exercise 2.8, which in turn is one of the most commonly used methods for sampling from harmonic measure, proposed in Muller (1956). The definition of the walk-regularity of a boundary point y_0, which in the context of Sect. 2.7* can be rephrased as:

  ∀η, δ > 0  ∃δ̂ ∈ (0, δ)  ∀x_0 ∈ B_δ̂(y_0) ∩ D   P( x_0 + B^N_τ ∈ B_δ(y_0) ) ≥ 1 − η,

is equivalent to the classical definition given in Doob (1984):

  P_B( inf{ t > 0; y_0 + B^N_t ∈ R^N \ D } = 0 ) = 1;

this equivalence will be shown in Sect. 3.7*. The above property is further equivalent to the classical potential theory 2-regularity of y_0 in Definition C.46. Its equivalence with the Wiener regularity criterion, stating that R^N \ D is 2-thick at y_0 (compare Definition C.47 (ii)), can be proved directly; see Mörters and Peres (2010) for a modern exposition. In working out the proofs of this chapter and the analysis in Sect. 2.7*, the author has largely benefited from the aforementioned book and from personal communications with Y. Peres.

Various averaging principles and related random walks in the Heisenberg group were discussed in Lewicka et al. (2019). In the papers by Lewicka and Peres (2019b,a), Laplace's equation augmented by the Robin boundary conditions has been studied from the viewpoint of the related averaging principles in C^{1,1}-regular domains. There, the asymptotic Hölder regularity of the values of the ε-walk has been proved, for any Hölder exponent α ∈ (0, 1) and up to the boundary of D, together with the interior asymptotic Lipschitz equicontinuity. The "ellipsoid walk" linked to the elliptic problem Trace( A(x)∇²u(x) ) = 0 has been analysed in Arroyo and Parviainen (2019). For a bounded, measurable coefficient matrix A satisfying det A = 1, uniformly elliptic with an elliptic distortion ratio close to 1 in D, this leads to a proof of the local asymptotic uniform Hölder continuity of the associated process values u_ε.

Chapter 3

Tug-of-War with Noise: Case p ∈ [2, ∞)

Many properties of the ordinary Laplacian Δ = Δ_2 studied in Chap. 2 also hold for the p-Laplacian Δ_p. While the classical potential theory deals with harmonic functions, the case discussed here, of p-harmonic functions and solutions to more general nonlinear equations of divergence type, requires more refined techniques. These were developed in the framework of the nonlinear potential theory, combining ideas and notions from partial differential equations, calculus of variations and mathematical physics. The purpose of this and the following chapters is to present one such remarkable connection, namely the connection to probability. The nonlinear counterpart of the arguments in Chap. 2, relying on the so-called Tug-of-War games, has been discovered and studied only recently by Peres et al. (2009); Peres and Sheffield (2008); Manfredi et al. (2012b). We presently start by treating the case p ≥ 2, which is somewhat closer to the linear case. The remaining singular case p ∈ (1, 2) will be discussed in Chap. 6, in a manner unified for all p > 1.

In Sect. 3.1 below, we briefly recall the definition and notation for Δ_p. Section 3.2 develops the mean value expansions with Δ_p as the second order term, in the spirit of the familiar expansion ⨍_{B_ε(x)} u(y) dy = u(x) + (ε²/(2(N+2))) Δu(x) + o(ε²), and consistent with the mean value property ⨍_{B_ε(x)} u(y) dy = u(x) of harmonic functions. In Sect. 3.3 we focus on the first of the introduced expansions and show that the boundary value problem for the mean value equation, resulting from neglecting the o(ε²) error term, has a unique solution u_ε on D ⊂ R^N. In Sects. 3.4 and 3.5, we prove that each u_ε coincides with the value of the Tug-of-War game with noise, which can be seen as a modification of the finite-horizon counterpart to the ball walk in Chap. 2. In this process, a token that is initially placed at x_0 ∈ D is, at each step of the game, advanced either by one of the players (each acting with probability β/2) or by a random shift (activated with probability α), within the ball centred at the current position and with radius ε. The probabilities in α + β = 1 depend on p and the dimension N. The process is stopped when the token reaches the

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020. M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3_3


ε-neighbourhood of the boundary ∂D, and u_ε(x_0) is then defined as the expected value of the boundary data F (extended continuously on a neighbourhood of ∂D) at the stopping position x_τ, subject to both players playing optimally. The optimality criterion is based on the rule that Player II pays to Player I the value F(x_τ), thus giving Player I the incentive to maximize the gain by pulling towards the portions of ∂D with high values of F, whereas Player II will likely try to minimize the loss by pulling, locally, towards the low values of F. In Sect. 3.6*, we identify the Tug-of-War game process corresponding to p = 2 with the discrete realization of the Brownian motion along its continuous trajectories, similarly as in Sect. 2.7*. Since the sampling takes place on balls of radius ε regardless of the position of the particle, it is no longer true that the functions {u_ε}_{ε∈(0,1)} are all one and the same function. However, we show that {u_ε}_{ε→0} converge pointwise to a harmonic function u that is the Brownian motion harmonic extension of F introduced in Appendix B. When ∂D is regular, then u|_{∂D} = F, so that u is the classical harmonic extension of F. Various regularity conditions are compared in Sect. 3.7*; these last two sections require basic familiarity with Brownian motion and may be skipped at first reading.
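The game dynamics just described can be sketched in code. The simulation below is our own illustration (the unit-disk board, the boundary data F(x, y) = x, the naive "pull along ±e_1" strategies and all parameters are assumptions, not prescriptions from the text). Because F is linear, the first coordinate of the token is a martingale under these symmetric strategies, so the expected payoff equals the first coordinate of the starting point, which is also the value of the p-harmonic extension u(x, y) = x.

```python
import math, random

def play_tug_of_war(x0, eps, alpha, rng):
    """One round of Tug-of-War with noise on the unit disk in R^2.

    With probability alpha the token jumps uniformly in B_eps(x); otherwise
    a fair coin decides whether Player I shifts it by (almost) eps along +e1
    or Player II along -e1.  The game stops on the eps-neighbourhood of the
    boundary; the payoff is F(x, y) = x."""
    x, y = x0
    while 1.0 - math.hypot(x, y) > eps:
        if rng.random() < alpha:                    # random move
            while True:                             # uniform sample in B_eps(0)
                u, v = rng.uniform(-eps, eps), rng.uniform(-eps, eps)
                if u * u + v * v <= eps * eps:
                    break
            x, y = x + u, y + v
        elif rng.random() < 0.5:                    # Player I tugs right
            x += 0.99 * eps
        else:                                       # Player II tugs left
            x -= 0.99 * eps
    return x                                        # payoff F(x, y) = x

rng = random.Random(0)
N, p = 2, 4.0
alpha = (N + 2) / (N + p)                           # interpolation weight, cf. (3.12)
value = sum(play_tug_of_war((0.3, 0.2), 0.1, alpha, rng) for _ in range(1500)) / 1500
```

The Monte Carlo average `value` concentrates around 0.3, illustrating (for this very special linear data) the claim that the game value approximates the p-harmonic extension.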

3.1 The p-Harmonic Functions and the p-Laplacian

We briefly recall the definition and notation for the p-Laplacian. An expanded overview of the nonlinear potential theory can be found in Appendix C. Let D ⊂ R^N be an open, bounded, connected set. Consider the integral:

  I_p(u) = ∫_D |∇u(x)|^p dx  for all u ∈ W^{1,p}(D),

where, in this chapter, the exponent p belongs to the range p ∈ [2, ∞), replacing the harmonic exponent p = 2 from Chap. 2. We want to minimize the energy I_p among all functions u subject to given boundary data. The condition for vanishing of the first variation of I_p (see Lemma C.28) takes the form:

  ∫_D ⟨ |∇u|^{p−2} ∇u, ∇η ⟩ dx = 0  for all η ∈ C_c^∞(D).

Assuming sufficient regularity of u, the divergence theorem then yields:

  ∫_D η div( |∇u|^{p−2} ∇u ) dx = 0  for all η ∈ C_c^∞(D),


which, by the fundamental theorem of Calculus of Variations, becomes:

  Δ_p u := div( |∇u|^{p−2} ∇u ) = 0  in D.   (3.1)

Definition 3.1 The second order differential operator in (3.1) is called the p-Laplacian, the partial differential equation (3.1) is called the p-harmonic equation and its solution u is a p-harmonic function.

To examine the differential expression in (3.1) more closely, compute:

  ∇( |∇u|^{p−2} ) = ∇( ( Σ_{i=1}^N |∂_i u|² )^{(p−2)/2} ) = (p − 2) ( Σ_{i=1}^N |∂_i u|² )^{(p−4)/2} (∇²u)∇u,

which implies:

  ⟨ ∇( |∇u|^{p−2} ), ∇u ⟩ = (p − 2) |∇u|^{p−2} ⟨ (∇²u)∇u/|∇u|, ∇u/|∇u| ⟩
    = (p − 2) |∇u|^{p−2} ∇²u : ( ∇u/|∇u| ⊗ ∇u/|∇u| ).

Consequently:

  Δ_p u = |∇u|^{p−2} Δu + ⟨ ∇( |∇u|^{p−2} ), ∇u ⟩
    = |∇u|^{p−2} ( Δu + (p − 2) ∇²u : ( ∇u/|∇u| ⊗ ∇u/|∇u| ) ),   (3.2)

and we see that when p → ∞, the second term in parentheses above prevails, which motivates the following definition of the ∞-Laplacian:

  Δ_∞ u = ∇²u : ( ∇u/|∇u| ⊗ ∇u/|∇u| ).   (3.3)

The equation Δ_∞ u = 0 is called the ∞-harmonic equation and its solution is an ∞-harmonic function. Some authors, e.g., Lindqvist (2019), prefer to write Δ_∞ u = ∇²u : ∇u ⊗ ∇u = (1/2) ⟨ ∇|∇u|², ∇u ⟩, which results in the interpolation Δ_p u = |∇u|^{p−4}( |∇u|² Δu + (p − 2) Δ_∞ u ), and call Δ_∞ in (3.3) the "game-theoretic" ∞-Laplacian. Since this book features connections of Δ_p to game theory, we indeed use this definition.

We conclude by pointing out the following useful decomposition, which will be at the centre of our attention in the next section:

Lemma 3.2 Let D ⊂ R^N be an open set, and let u ∈ C²(D). If ∇u(x) ≠ 0, then:

  Δ_p u(x) = |∇u|^{p−2} ( Δu + (p − 2) Δ_∞ u )(x),

(3.4)

  Δ_p u(x) = |∇u|^{p−2} ( |∇u| Δ_1 u + (p − 1) Δ_∞ u )(x).   (3.5)

Proof The formula (3.4) follows directly from (3.2). In particular, from the same expression it also follows that Δu = |∇u| Δ_1 u + Δ_∞ u, which together with (3.4) results in (3.5). □

Exercise 3.3 Let 1 < q < p < r < ∞. Prove the identity:

  (r − q) |∇u|^{2−p} Δ_p u = (r − p) |∇u|^{2−q} Δ_q u + (p − q) |∇u|^{2−r} Δ_r u.
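The decomposition (3.4) and the identity of Exercise 3.3 can be sanity-checked numerically. The snippet below is our own illustration (the sample function u(x, y) = x² + 3xy and the evaluation point are arbitrary choices): it compares the divergence form div(|∇u|^{p−2}∇u), computed by nested central differences, against the expanded form |∇u|^{p−2}(Δu + (p − 2)Δ_∞u), and then verifies the interpolation identity of Exercise 3.3 using the normalized quantity L(s) = |∇u|^{2−s} Δ_s u = Δu + (s − 2) Δ_∞ u.

```python
def u(x, y):                       # sample C^2 function with nonvanishing gradient
    return x * x + 3.0 * x * y

def grad_u(x, y, h=1e-4):
    return ((u(x + h, y) - u(x - h, y)) / (2 * h),
            (u(x, y + h) - u(x, y - h)) / (2 * h))

def p_laplacian_divergence(p, x, y, h=1e-4):
    """Delta_p u = div(|grad u|^(p-2) grad u), by central differences."""
    def flux(x, y):
        gx, gy = grad_u(x, y)
        s = (gx * gx + gy * gy) ** ((p - 2.0) / 2.0)
        return s * gx, s * gy
    return ((flux(x + h, y)[0] - flux(x - h, y)[0]) / (2 * h)
            + (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2 * h))

def expansion_terms(x, y):
    """Exact |grad u|^2, Laplacian and infinity-Laplacian of u at (x, y)."""
    gx, gy = 2 * x + 3 * y, 3 * x
    uxx, uxy, uyy = 2.0, 3.0, 0.0
    g2 = gx * gx + gy * gy
    lap = uxx + uyy
    inf_lap = (gx * gx * uxx + 2 * gx * gy * uxy + gy * gy * uyy) / g2
    return g2, lap, inf_lap

x0, y0, p = 0.7, 0.4, 4.0
g2, lap, inf_lap = expansion_terms(x0, y0)
lhs = p_laplacian_divergence(p, x0, y0)
rhs = g2 ** ((p - 2.0) / 2.0) * (lap + (p - 2.0) * inf_lap)   # formula (3.4)

# Exercise 3.3: (r-q) L(p) = (r-p) L(q) + (p-q) L(r), with L(s) = lap + (s-2) inf_lap
L = lambda s: lap + (s - 2.0) * inf_lap
q, r = 1.5, 7.0
identity_gap = abs((r - q) * L(p) - ((r - p) * L(q) + (p - q) * L(r)))
```

Since u is a quadratic polynomial, the inner central differences for ∇u are exact, and `lhs` agrees with `rhs` up to the outer finite-difference error; `identity_gap` vanishes up to rounding, as L is affine in s.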

3.2 The Averaging Principles

The purpose of this section is to derive the averaging principles for the operator Δ_p. The ultimate goal is to find the approximations of solutions to (3.1), analogous to the construction in Chap. 2, where the mean value property of harmonic functions was precisely the averaging principle responsible for the correspondence between the Laplace operator Δ = Δ_2 and the ball walk. The expansion formulas (3.13) and (3.14) below will, in the same spirit, serve as the dynamic programming principles for Tug-of-War games with noise whose values yield p-harmonic functions.

Recall that given a function u : D → R, its average on B_ε(x) ⊂ D is denoted:

  A_ε u(x) = ⨍_{B_ε(x)} u(y) dy.

We start with proving the basic averaging expansions for: the linear operator Δ corresponding to p = 2, and the fully nonlinear Δ_∞ corresponding to p = ∞:

Theorem 3.4 Let D ⊂ R^N be an open set and assume that u ∈ C²(D). Then, for every x ∈ D and ε > 0 such that B̄_ε(x) ⊂ D, we have:

(i)  A_ε u(x) = u(x) + ( ε²/(2(N + 2)) ) Δu(x) + o(ε²).   (3.6)

(ii) If ∇u(x) ≠ 0, then:

  (1/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) = u(x) + (ε²/2) Δ_∞ u(x) + o(ε²).   (3.7)

A precise bound on the o(ε²) error terms above is given in Exercise 3.7.
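For a quadratic function the expansion (3.6) is exact (the o(ε²) term vanishes), which makes it easy to test by Monte Carlo integration. Below is our own sketch, not part of the text, for u(y) = |y|² in R³, where Δu = 2N = 6 and (3.6) predicts A_ε u(x) = |x|² + 6ε²/(2·5) = |x|² + 3ε²/5; the base point, radius and sample size are arbitrary choices.

```python
import random

def mc_ball_average(x, eps, n, seed):
    """Monte Carlo average of u(y) = |y|^2 over the ball B_eps(x) in R^3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        while True:                         # uniform point in the unit ball
            z = [rng.uniform(-1, 1) for _ in range(3)]
            if sum(c * c for c in z) <= 1.0:
                break
        y = [xi + eps * zi for xi, zi in zip(x, z)]
        total += sum(c * c for c in y)
    return total / n

x = (0.5, -0.2, 0.1)
eps = 0.2
predicted = sum(c * c for c in x) + eps * eps * 6.0 / (2.0 * 5.0)  # formula (3.6)
estimate = mc_ball_average(x, eps, n=200_000, seed=42)
```

With 200 000 samples the statistical error is well below the ε²-sized correction term, so the agreement between `estimate` and `predicted` is clearly visible.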


Proof 1. Fix B̄_ε(x) ⊂ D and observe that, by subtracting constants from both sides of (3.6) and (3.7), we may without loss of generality assume that u(x) = 0. Consider an approximation v of u, given by its quadratic Taylor polynomial:

  v(y) = ⟨a, y − x⟩ + (1/2) A : (y − x) ⊗ (y − x)  for all y ∈ B̄_ε(x),   (3.8)

where we set:

  a = ∇u(x) = ∇v(x),  A = ∇²u(x) = ∇²v(x),

so that, in particular:

  Δu(x) = Δv(x) = trace A  and  Δ_∞ u(x) = Δ_∞ v(x) = A : ( a/|a| ⊗ a/|a| ).

Since ‖u − v‖_{C⁰(B̄_ε(x))} = o(ε²), it also follows that:

  |A_ε u(x) − A_ε v(x)| = o(ε²),
  | inf_{B_ε(x)} u − inf_{B_ε(x)} v | + | sup_{B_ε(x)} u − sup_{B_ε(x)} v | = o(ε²).

Thus, proving (3.6) and (3.7) for v will automatically imply the validity of the same asymptotic expressions for u.

2. In order to show (3.6), we integrate (3.8) over the ball B_ε(x). Observe that ⨍_{B_ε(x)} (y − x) dy = ⨍_{B_ε(0)} y dy = 0 by the symmetry of the ball, which results in: ⨍_{B_ε(x)} ⟨a, y − x⟩ dy = 0. Further, the entries of the matrix ⨍_{B_ε(x)} (y − x) ⊗ (y − x) dy are given by ⨍_{B_ε(0)} y_i y_j dy, equalling 0 for i ≠ j, while for any i we get:

  A_ε |y_i|²(0) = A_ε |y_1|²(0) = ⨍_{B_ε(0)} |y_1|² dy = ε²/(N + 2).   (3.9)

The above simple calculation is left as Exercise 3.7 (i). Consequently:

  A_ε v(x) = ⨍_{B_ε(x)} (1/2) A : (y − x) ⊗ (y − x) dy = (1/2) A : ( A_ε|y_1|²(0) Id_N )
    = ( ε²/(2(N + 2)) ) trace A = ( ε²/(2(N + 2)) ) Δv(x),   (3.10)

proving (3.6) for v and hence for u.


3. Assume that a = ∇u(x) ≠ 0. In order to show (3.7) for v, we write:

  v(y) = ε|a| ψ( (y − x)/ε ),

where the function ψ : B̄_1(0) → R is the according rescaling of v, of the form:

  ψ(z) = ⟨b, z⟩ + ε B : z ⊗ z  with  b = a/|a|,  B = (1/(2|a|)) A.

We will prove that:

  (1/2)( inf_{z∈B_1(0)} ψ(z) + sup_{z∈B_1(0)} ψ(z) ) = ε B : b ⊗ b + o(ε),   (3.11)

which, after substituting the definition of ψ and changing variables, directly translates into the claimed bound (3.7) for the function v:

  (1/2)( inf_{y∈B_ε(x)} v(y) + sup_{y∈B_ε(x)} v(y) ) = (ε²/2) Δ_∞ v(x) + o(ε²).

To prove (3.11), let z_max ∈ B̄_1(0) be some maximizer of ψ, so that ψ(z_max) = max_{z∈B̄_1(0)} ψ(z). Recalling that |b| = 1 and writing ψ(z_max) ≥ ψ(b), we obtain:

  ⟨z_max, b⟩ ≥ ⟨b, b⟩ + ε B : ( b ⊗ b − z_max ⊗ z_max )
    ≥ 1 − ε|B| | b ⊗ b − z_max ⊗ z_max | ≥ 1 − 2ε|B| |z_max − b|.

Observe now that ∇ψ(z) ≠ 0 in B_1(0), for every ε satisfying 2ε|B| < 1. In that case, z_max must belong to ∂B_1(0), i.e. |z_max| = 1. Consequently:

  |z_max − b|² = |z_max|² + |b|² − 2⟨z_max, b⟩ ≤ 2 − 2( 1 − 2ε|B| |z_max − b| ) = 4ε|B| |z_max − b|,

which results in: |z_max − b| ≤ 4ε|B|. Since ⟨b, z_max⟩ ≤ 1 = ⟨b, b⟩, we conclude that:

  0 ≤ ψ(z_max) − ψ(b) = ⟨b, z_max − b⟩ + ε B : ( z_max ⊗ z_max − b ⊗ b )
    ≤ ε B : ( z_max ⊗ z_max − b ⊗ b ) ≤ 2ε|B| |z_max − b| ≤ 8ε²|B|².


Likewise, denoting a minimizer of ψ by z_min, so that ψ(z_min) = min_{z∈B̄_1(0)} ψ(z), we obtain that |z_min| = 1 and:

  0 ≥ ψ(z_min) − ψ(−b) ≥ −8ε²|B|².

Combining the last two displayed inequalities, we arrive at:

  | (1/2)( ψ(z_max) + ψ(z_min) ) − ε B : b ⊗ b |
    = (1/2) | ( ψ(z_max) + ψ(z_min) ) − ( ψ(b) + ψ(−b) ) |
    ≤ (1/2)( | ψ(z_max) − ψ(b) | + | ψ(z_min) − ψ(−b) | ) ≤ 8ε²|B|².

This ends the proof of (3.11) and of (ii). □

In the next result we interpolate the two averaging expansions in Theorem 3.4, motivated by the formula (3.4) in which Δ_p is shown to be an interpolation between Δ = Δ_2 and Δ_∞. Given p ∈ [1, ∞), we define the interpolation coefficients:

  α_{N,p} = (N + 2)/(N + p),  β_{N,p} = 1 − α_{N,p} = (p − 2)/(N + p).   (3.12)
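The bookkeeping behind (3.12), and behind the way these weights recombine the expansions (3.6) and (3.7), reduces to two algebraic identities: α_{N,p}/(2(N+2)) = 1/(2(N+p)) and β_{N,p}/2 = (p−2)/(2(N+p)). A quick check (our own sketch, with arbitrarily chosen sample values of N and p):

```python
def coefficients(N, p):
    """Interpolation coefficients alpha_{N,p}, beta_{N,p} from (3.12)."""
    alpha = (N + 2) / (N + p)
    beta = (p - 2) / (N + p)
    return alpha, beta

checks = []
for N in (1, 2, 3, 10):
    for p in (2.0, 3.5, 7.0):
        a, b = coefficients(N, p)
        checks.append(abs(a + b - 1.0))                        # convex weights
        # the weights recombine the Delta and Delta_infinity expansions:
        checks.append(abs(a / (2 * (N + 2)) - 1.0 / (2 * (N + p))))
        checks.append(abs(b / 2 - (p - 2) / (2 * (N + p))))
worst = max(checks)
```

All three identities hold exactly, which is precisely the computation carried out in the first step of the proof of Theorem 3.5.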

Theorem 3.5 Let D ⊂ R^N be an open set, u ∈ C²(D) and let x ∈ D be such that ∇u(x) ≠ 0 and Δ_p u(x) = 0. Then:

(i) For every ε > 0 such that B̄_ε(x) ⊂ D, we have:

  u(x) = α_{N,p} A_ε u(x) + (β_{N,p}/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) + o(ε²).   (3.13)

(ii) Let p > 2. Fix α ∈ [0, 1) and β = 1 − α, and define:

  r = ( (1/β) · (β_{N,p}/α_{N,p}) )^{1/2} = ( (1/β) · ((p − 2)/(N + 2)) )^{1/2}.

Then for every ε > 0 small enough, we have:

  u(x) = α A_ε u(x) + (β/2)( inf_{y∈B_{rε}(x)} A_ε u(y) + sup_{y∈B_{rε}(x)} A_ε u(y) ) + o(ε²).   (3.14)

A precise bound on the o(ε²) error terms above is given in Exercise 3.7.


Proof 1. Summing the expansions in Theorem 3.4, weighted with the coefficients α_{N,p} and β_{N,p} in (3.12), we arrive at:

  α_{N,p} A_ε u(x) + (β_{N,p}/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) )
    = u(x) + (ε²/2) · (1/(N + p)) · ( Δu(x) + (p − 2) Δ_∞ u(x) ) + o(ε²)
    = u(x) + ( ε²/(2(N + p)) ) · ( Δ_p u(x)/|∇u(x)|^{p−2} ) + o(ε²).   (3.15)

This proves (3.13) under the indicated assumptions.

2. To show (3.14), consider the function v = A_ε u. Clearly, for ε small enough we have B̄_{(1+r)ε}(x) ⊂ D, and also ∇v(x) ≠ 0 in view of ∇u(x) ≠ 0. We may thus apply the expansion (3.7) to v on the ball B_{rε}(x), and obtain:

  (1/2)( inf_{y∈B_{rε}(x)} A_ε u(y) + sup_{y∈B_{rε}(x)} A_ε u(y) ) = A_ε u(x) + (r²ε²/2) Δ_∞ v(x) + o(ε²).

Further, a straightforward error analysis, in which we replace ∇²v and ∇v in the expression for Δ_∞ v by A_ε∇²u and A_ε∇u, leads to:

  Δ_∞ v(x) = A_ε∇²u(x) : ( A_ε∇u(x)/|A_ε∇u(x)| ⊗ A_ε∇u(x)/|A_ε∇u(x)| ) = Δ_∞ u(x) + o(1).

Thus, by virtue of (3.6) and (3.4), we obtain:

  α A_ε u(x) + (β/2)( inf_{y∈B_{rε}(x)} A_ε u(y) + sup_{y∈B_{rε}(x)} A_ε u(y) )
    = A_ε u(x) + (βr²ε²/2) Δ_∞ u(x) + o(ε²)
    = u(x) + ( ε²/(2(N + 2)) )( Δu(x) + βr²(N + 2) Δ_∞ u(x) ) + o(ε²)
    = u(x) + ( ε²/(2(N + 2)) )( Δu(x) + (p − 2) Δ_∞ u(x) ) + o(ε²)
    = u(x) + ( ε²/(2(N + 2)) ) · ( Δ_p u(x)/|∇u(x)|^{p−2} ) + o(ε²).   (3.16)

This completes the proof of (3.14). □

 


Remark 3.6
(i) The formula in (3.13), although true for any p ∈ [1, ∞), is most useful for p ≥ 2, when α_{N,p}, β_{N,p} ≥ 0. Since α_{N,p} + β_{N,p} = 1, these coefficients can then be interpreted as proportions of the linear part (integral average) and the fully nonlinear (arithmetic mean) part in the operator Δ_p.
(ii) When p = 2, then (3.14) can be interpreted as: u(x) = A_ε u(x) + o(ε²), which also coincides with (3.13), since there we have β_{N,2} = 0. Indeed, from (3.6) we see that the average of u on a small ball B_ε(x) differs from the value u(x) at the centre of the ball by order of the square of its radius, and the leading coefficient in this expansion is given precisely by Δu(x). These observations are in agreement with our discussion in Chap. 2, and reflect the mean value property of harmonic functions, stating that u(x) = A_r u(x) for every B_r(x) contained in the domain where Δu = 0.
(iii) Taking p → ∞, the formula in (3.13) asymptotically becomes: u(x) = (1/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ). The same is implied by (3.14), where we accordingly set α = 0 and replace rε with ε, and A_ε u(y) with u(y). On the other hand, we see by (3.7) that the arithmetic mean of the extreme values of u on a small ball B_ε(x) differs from the value u(x) at the centre of the ball by, as before, order of the square of the radius, whereas the leading order coefficient is specified by Δ_∞ u(x). This observation is in agreement with the Absolutely Minimizing Lipschitz Extension (AMLE) property of ∞-harmonic functions, which states that for every open subset V ⊂ D, the restriction u|_V has the smallest Lipschitz constant among all the extensions of u|_{∂V}, see Jensen (1993).

Exercise 3.7
(i) Compute the integral in the formula (3.9).
(ii) Work out another proof of (3.7) using the following outline. Let y_min be a minimizer of u on B̄_ε(x). Summing the Taylor expansion (3.8) at y = y_min with the estimate:

  sup_{y∈B_ε(x)} u(y) ≥ u(x) − ⟨∇u(x), y_min − x⟩ + (1/2) ∇²u(x) : (y_min − x) ⊗ (y_min − x) + o(ε²),

we get:

  (1/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) ≥ u(x) + (1/2) ∇²u(x) : (y_min − x) ⊗ (y_min − x) + o(ε²).

Note that:

  lim_{ε→0} (y_min − x)/ε = −∇u(x)/|∇u(x)|.

This follows by a simple blow-up argument, observing that the maps u_ε(z) = (1/ε)( u(x + εz) − u(x) ) converge uniformly on B̄_1(0) to the linear function z ↦ ⟨∇u(x), z⟩, so that the limit of any converging subsequence of their minimizers must be a minimizer of ⟨∇u(x), z⟩. Conclude that:

  (1/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) ≥ u(x) + (ε²/2) Δ_∞ u(x) + o(ε²).

The same reasoning applied to the maximizers y_max instead of the minimizers gives the reversed inequality in the above formula.
(iii) Revisit the proof of Theorem 3.4 to quantify the bounds in (3.6) and (3.7). Namely, let D ⊂ R^N be an open, bounded set and denote D♦ = D̄ + B̄_{ε_0}(0), for some ε_0 > 0. Let u ∈ C²(int D♦). Prove that for all ε ∈ (0, ε_0) one has:

  | A_ε u(x) − ( u(x) + ( ε²/(2(N + 2)) ) Δu(x) ) | ≤ C ε² ω_{∇²u}(B_ε(x))  for all x ∈ D,

where ω_{∇²u} stands for the modulus of continuity of the function ∇²u, namely: ω_{∇²u}(B_ε(x)) = sup_{z,w∈B_ε(x)} |∇²u(z) − ∇²u(w)|, and C is a universal constant. If additionally ∇u(x) ≠ 0 for all x ∈ D♦, then prove the following estimate, valid uniformly for all x ∈ D and all ε ∈ (0, ε̂), with ε̂ depending on u:

  | (1/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) − ( u(x) + (ε²/2) Δ_∞ u(x) ) |
    ≤ C ε² ( ω_{∇²u}(B_ε(x)) + ε |∇²u(x)|²/|∇u(x)| ).

Conclude then that:

  | α_{N,p} A_ε u(x) + (β_{N,p}/2)( inf_{y∈B_ε(x)} u(y) + sup_{y∈B_ε(x)} u(y) ) − ( u(x) + ( ε²/(2(N + p)) ) · ( Δ_p u(x)/|∇u(x)|^{p−2} ) ) |
    ≤ C ε² ( ω_{∇²u}(B_ε(x)) + β_{N,p} ε |∇²u(x)|²/|∇u(x)| ).

(iv) Revisit the proof of Theorem 3.5 to deduce the following bound. Let D, D♦ be as in Exercise (iii) above and let p > 2, α ∈ [0, 1), β = 1 − α and r be as in Theorem 3.5 (ii). Assume that u ∈ C²(int D♦) satisfies ∇u(x) ≠ 0 for every


x ∈ D̄. Then, for all ε ∈ (0, ε̂(u)) and all x ∈ D̄, there holds:

  | α A_ε u(x) + (β/2)( inf_{y∈B_{rε}(x)} A_ε u(y) + sup_{y∈B_{rε}(x)} A_ε u(y) ) − ( u(x) + ( ε²/(2(N + 2)) ) · ( Δ_p u(x)/|∇u(x)|^{p−2} ) ) |
    ≤ C ε² ( ω_{∇²u}(B_{(1+r)ε}(x)) + ε ( ‖∇²u‖²_{C⁰(B_ε(x))} + ω_{∇²u}(B_ε(x))² ) / |∇u(x)| ),

where the constant C depends on N and p but not on u.

3.3 The First Averaging Principle

The asymptotic mean value expansions in Theorem 3.4 suggest to seek p-harmonic functions as limits, when ε → 0, of solutions to the exact ε-averaging formulas. This will be implemented in the following chapters, where we prove the uniform convergence of such solutions, interpreted as values of a Tug-of-War game, corresponding to (3.14) with, in particular, α = 1/3, β = 2/3. Below, we first discuss the variant (3.13), which is quite simple and leads to the Tug-of-War game described in Sect. 3.4. The more involved variant (3.14), modified in order to ensure continuity (Lipschitz continuity) of the approximate solutions for continuous (Lipschitz) boundary data, will be implemented in Sect. 4.1, together with the game interpretation in Sect. 4.3. The reader interested in the main convergence result of Theorem 5.2 may directly move to Sect. 4.1.

Definition 3.8 Let D ⊂ R^N be open, bounded and connected. For a parameter ε ∈ (0, 1), define the thickened inner boundary Γ_ε of D, the outer boundary Γ_out and the closed domain D♦ (Fig. 3.1):

  Γ_ε = { x ∈ D; dist(x, ∂D) ≤ ε },  Γ_out = { x ∈ R^N \ D; dist(x, ∂D) ≤ 1 },  D♦ = D ∪ Γ_out.   (3.17)

Theorem 3.9 Fix ε ∈ (0, 1) and let D, Γ_ε be as in Definition 3.8. Let α ∈ (0, 1], β = 1 − α. Given a bounded, Borel function F : Γ_ε → R, there exists a unique bounded, Borel function u_ε : D → R, such that:

  u_ε(x) = { α A_ε u_ε(x) + (β/2)( inf_{y∈B_ε(x)} u_ε(y) + sup_{y∈B_ε(x)} u_ε(y) )  if x ∈ D \ Γ_ε,
             F(x)  if x ∈ Γ_ε.   (3.18)


Fig. 3.1 The thickened boundaries Γ_ε, Γ_out and the closed domain D♦ in Definition 3.8

Proof 1. To ease the notation, we drop the subscript ε and write u, Γ instead of u_ε, Γ_ε. Towards proving existence of solutions, we define the operator T, which to a bounded, Borel v : D → R associates T v : D → R given by:

  (T v)(x) = { α A_ε v(x) + (β/2)( inf_{y∈B_ε(x)} v(y) + sup_{y∈B_ε(x)} v(y) )  if x ∈ D \ Γ,
               F(x)  if x ∈ Γ.   (3.19)

The first easy observation is that the function T v is Borel, as a sum of: a continuous function x ↦ A_ε v(x), and the lower-semicontinuous and upper-semicontinuous functions x ↦ sup_{B_ε(x)} v and x ↦ inf_{B_ε(x)} v (see Exercise 3.12 (i)). It is also easy to observe that T is monotone: T v ≤ T v̄ if v ≤ v̄. Further, for any two bounded, Borel functions v, v̄ : D → R and any x ∈ D \ Γ there holds:

  |(T v)(x) − (T v̄)(x)| ≤ α |A_ε(v − v̄)(x)| + (β/2)( | inf_{y∈B_ε(x)} v(y) − inf_{y∈B_ε(x)} v̄(y) | + | sup_{y∈B_ε(x)} v(y) − sup_{y∈B_ε(x)} v̄(y) | )
    ≤ α A_ε|v − v̄|(x) + β sup_{y∈B_ε(x)} |v(y) − v̄(y)|.   (3.20)

2. The solution u of (3.18) is obtained as the limit of iterations un+1 = T un , where we set u0 to be a constant function, satisfying: u0 ≡ const ≤ inf F.


An easy direct calculation shows that u_1 = T u_0 ≥ u_0 in D. By monotonicity of T, the sequence {u_n}_{n=1}^∞ is nondecreasing and also it is bounded, because:

  u_0 ≤ u_n(x) ≤ sup F  for all x ∈ D, n ≥ 1.

Hence, {u_n}_{n=1}^∞ converges pointwise to a bounded, Borel function u : D → R. We now prove that the above convergence is actually uniform. For every m, n ≥ 1 the estimate (3.20) yields the following bound:

  sup_D |T u_m − T u_n| ≤ ( α/|B_ε(0)| ) ‖u_m − u_n‖_{L¹(D)} + β sup_D |u_m − u_n|.

Passing to the limit with m → ∞ implies, in view of the boundedness of {u_n}_{n=1}^∞ and its pointwise convergence to u, that:

  sup_D |u − T u_n| ≤ ( α/|B_ε(0)| ) ‖u − u_n‖_{L¹(D)} + β sup_D |u − u_n|  for all n ≥ 1.

Passing now to the limit with n → ∞ gives:

  lim_{n→∞} sup_D |u − u_n| ≤ β lim_{n→∞} sup_D |u − u_n|,

so there must be:

  lim_{n→∞} sup_D |u − u_n| = 0,

because β < 1. The uniform convergence follows. Consequently, the limit function u is a fixed point of T and thus a solution to (3.18):

  u = lim_{n→∞} T u_n = T( lim_{n→∞} u_n ) = T u.

3. To show uniqueness, assume that u and ū satisfy (3.18) and call:

  M = sup_{x∈D} |u(x) − ū(x)|.

By (3.20) we get: M ≤ α sup_{x∈D\Γ} A_ε|u − ū|(x) + βM. Subtracting βM from both sides and dividing by 1 − β = α > 0, we obtain the first inequality in the following bound, whereas the second one is obvious:

  M ≤ sup_{x∈D\Γ} A_ε|u − ū|(x) ≤ M.


Since the function x ↦ A_ε|u − ū|(x) is well defined and continuous in D \ Γ, we now conclude that the following set must be nonempty:

  D_M = { x ∈ D \ Γ; A_ε|u − ū|(x) = M }.

Clearly, D_M is relatively closed in D \ Γ, as a level set of a continuous function. To prove that it is relatively open, take x ∈ D_M and note that almost every point y ∈ B_ε(x) ∩ (D \ Γ) has the property that |u(y) − ū(y)| = M. The same argument as before, based on (3.20), yields that y ∈ D_M. Consequently: B_ε(x) ∩ (D \ Γ) ⊂ D_M, proving the openness of D_M in D \ Γ and that D_M = D \ Γ. Let now x be a maximizer of the function x ↦ |x| on the closure of D \ Γ. Then |B_ε(x) ∩ Γ| > 0 and, since u = ū on Γ and x ∈ D_M, we get that M = 0. □

Directly from the proof of Theorem 3.9 and the monotonicity of the operator T, we obtain that: inf F ≤ u_ε(x) ≤ sup F for every x ∈ D. More generally:

Corollary 3.10 In the setting of Theorem 3.9, let u_ε and ū_ε be the unique solutions to (3.18) with the respective Borel, bounded boundary data F and F̄. If F ≤ F̄ in Γ_ε, then u_ε ≤ ū_ε in D.

Corollary 3.11 In the setting of Theorem 3.9, let u_0 be any bounded, Borel function on D. Then the sequence {u_n}_{n=1}^∞, defined recursively by u_{n+1} = T u_n, where T is as in (3.19), converges uniformly to the unique solution u_ε of (3.18).

Proof We first observe that, by the same argument as in the proof of Theorem 3.9, the new sequence:

  ū_n = T^n ū_0  where ū_0 ≡ const ≥ sup F,

converges uniformly to the same unique solution of (3.18) on D. Given u_0 as in the statement, call u_n = T^n u_0, u̲_n = T^n u̲_0 and ū_n = T^n ū_0, where u̲_0 = inf u_0 and ū_0 = sup u_0. By the monotonicity of the operator T, it follows that:

  u̲_n ≤ u_n ≤ ū_n  in D,

which yields the uniform convergence of {u_n}_{n=1}^∞ to u_ε. □

 

Exercise 3.12
(i) Let v be a bounded, Borel function on R^N. Show that x ↦ inf_{B_ε(x)} v is upper-semicontinuous and x ↦ sup_{B_ε(x)} v is lower-semicontinuous. In particular, both functions are Borel and bounded.
(ii) Let p ≥ 2 and let F : Γ_out ∪ Γ_ε → R be a given bounded, Borel function. For each ε ∈ (0, 1) denote d_ε(x) = (1/ε) min{ ε, dist(x, R^N \ D) }. Modify the proof of Theorem 3.9 to construct an iterated sequence {u_n}_{n=1}^∞ of approximate solutions to the following problem:

  u_ε(x) = d_ε(x)( α_{N,p} A_ε u_ε(x) + (β_{N,p}/2)( inf_{y∈B_ε(x)} u_ε(y) + sup_{y∈B_ε(x)} u_ε(y) ) ) + ( 1 − d_ε(x) ) F(x)  for all x ∈ D♦,   (3.21)

and prove their uniform convergence to the unique solution of (3.21).
(iii) Modify the proof of Corollary 3.11 to the setting of (3.21) and show that every initial (bounded, Borel) function u_0 results in an iterated sequence {u_n}_{n=1}^∞ that converges uniformly to the unique solution of (3.21).
(iv) Deduce that if F is continuous, then u_ε solving (3.21) is also continuous.
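The existence proof of Theorem 3.9 is constructive: one iterates T from any starting function (Corollary 3.11). The one-dimensional discrete sketch below is our own illustration (the grid, the domain D = (0, 1), the data F(x) = x, and all tolerances are assumed): on a grid, A_ε becomes the average over the grid neighbours within distance ε, and the iteration u_{n+1} = T u_n is run until the sup-norm increment is small. For α = 1 (the p = 2 case) the fixed point reproduces, up to discretization, the harmonic — here linear — interpolation of the boundary data.

```python
def solve_averaging(alpha, eps=0.1, h=0.01, F=lambda x: x, tol=1e-10, max_iter=100_000):
    """Fixed-point iteration u_{n+1} = T u_n for a grid analogue of (3.18)
    on D = (0, 1), with boundary strip Gamma_eps = {dist(x, {0, 1}) <= eps}."""
    beta = 1.0 - alpha
    xs = [i * h for i in range(int(round(1.0 / h)) + 1)]
    interior = [i for i, x in enumerate(xs) if eps < x < 1.0 - eps]
    u = [0.0 if eps < x < 1.0 - eps else F(x) for x in xs]   # u_0: data on the strip
    k = int(round(eps / h))                 # grid neighbours within distance eps
    for _ in range(max_iter):
        diff, new = 0.0, u[:]
        for i in interior:
            window = u[i - k : i + k + 1]
            avg = sum(window) / len(window)
            new[i] = alpha * avg + 0.5 * beta * (min(window) + max(window))
            diff = max(diff, abs(new[i] - u[i]))
        u = new
        if diff < tol:
            break
    return xs, u

xs, u = solve_averaging(alpha=1.0)              # alpha = 1: pure averaging (p = 2)
err = max(abs(ui - x) for x, ui in zip(xs, u))  # fixed point vs. linear data F(x) = x

xs2, u2 = solve_averaging(alpha=2.0 / 3.0)      # a genuinely mixed case
bounds_ok = all(-1e-8 <= v <= 1.0 + 1e-8 for v in u2)   # inf F <= u_eps <= sup F
```

The second run also confirms the comparison bound inf F ≤ u_ε ≤ sup F obtained after Theorem 3.9; here F takes values in [0, 1].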

3.4 Tug-of-War with Noise: A Basic Construction

In this and the next sections we relate solutions of the ε-averaging formulas to the probabilistic setting and interpret them as the values of an appropriate Tug-of-War game with noise. We start with the averaging principle (3.18). The game play is as follows: a token is initially placed at a position x_0 on the game board D. At the n-th step of the game, a biased coin is tossed: if the outcome is heads (with probability α > 0), then the token is moved to a uniformly random position x_n in the ball B_ε(x_{n−1}) around the current position. If the outcome is tails (with probability β = 1 − α), then another, fair coin is tossed and the player who wins the toss may move the token to any x_n with |x_n − x_{n−1}| < ε, according to their strategy (Fig. 3.2). The game ends the first time that x_n ∈ Γ_ε, and Player I's payoff (equal to Player II's loss) is the value of the boundary data F(x_n).

Fig. 3.2 Player I and Player II compete in a Tug-of-War with random noise


Note that since D is bounded, the game ends almost surely for any choice of strategies (Lemma 3.13). This is due to the fact that, for a large n ≥ 1 with nε/2 > 2 diam D, almost surely there will eventually be a block of n consecutive random moves, each advancing the token by a distance of at least ε/2 in the x_1 direction. Since Player I seeks to maximize the payoff, and since the game is zero-sum so that Player II seeks to minimize the loss, Player I will naturally try to "tug" the token towards the portions of the boundary where F is maximized, while Player II will tug away from his opponent's target, towards the set where F is minimized. Below we define this setting in detail.

1. Fix parameters α ∈ (0, 1], β = 1 − α and let Ω_1 = B_1(0) × {1, 2, 3}. We consider the product probability space (Ω_1, F_1, P_1), where F_1 is the smallest σ-algebra of subsets of Ω_1 containing all the sets of the form D × A with D ⊂ B_1(0) ⊂ R^N Borel and A ⊂ {1, 2, 3}. The probability measure P_1 is uniquely defined by requiring that P_1(D × A) is given as the product:

  P_1(D × A) = ( |D|/|B_1(0)| ) · |A|,

of the normalized Lebesgue measure of D and the normalized discrete measure of A, where (see Chap. A and Example A.2):

  |{1}| = |{2}| = β/2,  |{3}| = α.

For any number n ∈ N, let Ω_n = (Ω_1)^n be the Cartesian product of n copies of Ω_1, and let (Ω_n, F_n, P_n) be the corresponding product probability space. Finally:

  (Ω, F, P) = (Ω_1, F_1, P_1)^N

will denote the probability space on the countable product:

  Ω = (Ω_1)^N = ∏_{i=1}^∞ Ω_1 = ( B_1(0) × {1, 2, 3} )^N,

defined by means of Theorem A.12. For each n ∈ N, we identify the σ-algebra F_n with the sub-σ-algebra of F consisting of sets of the form F × ∏_{i=n+1}^∞ Ω_1, for all F ∈ F_n. For completeness, we also define F_0 = {∅, Ω}. Note that the sequence {F_n}_{n=0}^∞ is a filtration of F. The elements of Ω_n are n-tuples {(w_i, a_i)}_{i=1}^n, while the elements of Ω are sequences {(w_i, a_i)}_{i=1}^∞ with w_i ∈ B_1(0) and a_i ∈ {1, 2, 3} for all i ∈ N. As customary in probability, we tend to suppress the notation of such outcomes and instead refer only to the random variables, as defined below.

of the normalized Lebesgue measure of D and the normalized discrete measure of A, where (see Chap. A and Example A.2): |{1}| = |{2}| =

β , 2

|{3}| = α

. For any number n ∈ N, let n = ( 1 )n be the Cartesian product of n copies of 1 , and ( n , Fn , Pn ) be the corresponding product probability space. Finally: ( , F, P) = ( 1 , F1 , P1 )N will denote the probability space on the countable product: = ( 1 )N =



 N 1 = B1 (0) × {1, 2, 3} ,

i=1

defined by means of Theorem A.12. For each n ∈ N, we identify the σ -algebra Fn with the sub-σ -algebra of F consisting of sets of the form: F × ∞ i=n+1 1 for all F ∈ Fn . For completeness, we also define F0 = {∅, }. Note that the sequence {Fn }∞ n=0 is a filtration of F. The elements n are n-tuples {(wi , ai )}ni=1 , while the elements of are sequences {(wi , ai )}∞ i=1 with wi ∈ B (0) and ai ∈ {1, 2, 3} for all i ∈ N. As customary in probability, we tend to suppress the notion of such outcomes and instead refer only to the random variables, as defined below.


2. We now introduce the strategies σ_I = {σ_I^n}_{n=0}^∞ and σ_II = {σ_II^n}_{n=0}^∞ of Players I and II, respectively. For every n ≥ 0, these are:

σ_I^n, σ_II^n : H_n → B_1(0) ⊂ R^N,

the vector-valued random variables on the spaces of "finite histories": H_n = R^N × (R^N × Ω_1)^n, endowed with the product σ-algebra, where the σ-algebra of subsets of R^N is, as usual, taken to be Borel.

3. Let ε ∈ (0, 1) and let D, Γ_ε be as in Definition 3.8. Fix an initial point (the position of the token) x_0 ∈ D♦, and the strategies σ_I and σ_II as above. We now recursively define a sequence of vector-valued random variables:

X_n^{x_0,σ_I,σ_II} : Ω → D♦   for n = 0, 1, . . .

For simplicity of notation, we momentarily suppress the superscripts x_0, σ_I, σ_II and write X_n instead of X_n^{x_0,σ_I,σ_II}. We begin by setting X_0 to be constant:

X_0 ≡ x_0   in Ω.

The sequence {X_n}_{n=0}^∞ will be adapted to the filtration {F_n}_{n=0}^∞ of F, and thus each X_n for n ≥ 1 is effectively defined on Ω_n. We set, for all (w_1, a_1) ∈ Ω_1:

X_1(w_1, a_1) = x_0 + ε σ_I^0(x_0)    for a_1 = 1 and x_0 ∈ D \ Γ_ε,
X_1(w_1, a_1) = x_0 + ε σ_II^0(x_0)   for a_1 = 2 and x_0 ∈ D \ Γ_ε,
X_1(w_1, a_1) = x_0 + ε w_1           for a_1 = 3 and x_0 ∈ D \ Γ_ε,
X_1(w_1, a_1) = x_0                   for x_0 ∈ Γ_ε.

Note that, calling x_1 = X_1(w_1, a_1), we have: h_1 = (x_0, (x_1, w_1, a_1)) ∈ H_1, and that h_1 is a Borel function of the argument (w_1, a_1) ∈ Ω_1. We now proceed by setting, for n ≥ 2:

X_n((w_1, a_1), . . . , (w_n, a_n)) = x_{n−1} + ε σ_I^{n−1}(h_{n−1})    for a_n = 1 and x_{n−1} ∈ D \ Γ_ε,
X_n((w_1, a_1), . . . , (w_n, a_n)) = x_{n−1} + ε σ_II^{n−1}(h_{n−1})   for a_n = 2 and x_{n−1} ∈ D \ Γ_ε,
X_n((w_1, a_1), . . . , (w_n, a_n)) = x_{n−1} + ε w_n                   for a_n = 3 and x_{n−1} ∈ D \ Γ_ε,
X_n((w_1, a_1), . . . , (w_n, a_n)) = x_{n−1}                           for x_{n−1} ∈ Γ_ε,     (3.22)

together with the n-th position of the token: x_n = X_n((w_1, a_1), . . . , (w_n, a_n)), and the n-th augmented history h_n that, as before, can be seen as a measurable function of the argument ((w_1, a_1), . . . , (w_n, a_n)):

h_n = (x_0, (x_1, w_1, a_1), . . . , (x_n, w_n, a_n)) ∈ H_n.


Fig. 3.3 Player I, Player II and random noise with their probabilities

It is clear that each X_n is an F_n-measurable random variable on Ω, taking values in D♦. It represents the token position x_n, which was initially placed at x_0 and has then been advanced (as long as in D \ Γ_ε) by: (i) the random shifts ε w_n of length at most ε, and (ii) the ε-scaled outputs of the deterministic strategies σ_I and σ_II. These are activated according to the results of the biased "3-sided die" tosses a_n, where a_n = 1 corresponds to activating σ_I and a_n = 2 to σ_II, whereas a_n = 3 results in not activating either of them (Fig. 3.3). The strategies depend on the partial histories h_n of the game, which record: the positions x_i of the token, the random shifts w_i, and the toss outcomes a_i, for all i ≤ n. When the token reaches some (enlarged) boundary position x_n ∈ Γ_ε, then it is stopped, i.e., x_k = x_n for every k ≥ n.

4. We now define the random variable τ^{x_0,σ_I,σ_II} : Ω → {0, 1, 2, . . . , +∞}:

τ^{x_0,σ_I,σ_II}((w_1, a_1), (w_2, a_2), . . .) = min{n ≥ 0; x_n ∈ Γ_ε},

where as before x_n = X_n((w_1, a_1), . . . , (w_n, a_n)). We drop the superscript x_0, σ_I, σ_II and write τ instead of τ^{x_0,σ_I,σ_II} if no ambiguity arises. Clearly, τ is F-measurable and, in fact, it is a stopping time relative to the filtration {F_n}_{n=0}^∞, as:

Lemma 3.13 In the above setting, P(τ < +∞) = 1.

Proof Consider the following set D_adv of "advancing" random outcomes (Fig. 3.4):

D_adv = {w ∈ B_1(0); ⟨w, e_1⟩ > 1/2}.


Fig. 3.4 The set Dadv of “advancing” random noise outcomes in the proof of (3.23)

By the boundedness of D, it follows that there exists n ≥ 1 such that:

x + ε ∑_{i=1}^n w_i ∉ D \ Γ_ε   for all x ∈ D \ Γ_ε and w_i ∈ D_adv, i = 1 . . . n.

Defining δ = (|D_adv|/|B_1(0)| · α)^n > 0, we thus obtain the first easy bound:

P(τ ≤ n) ≥ P((D_adv × {3})^n × ∏_{i=n+1}^∞ Ω_1) = (P_1(D_adv × {3}))^n = δ.   (3.23)

We now prove by induction that:

P(τ > kn) ≤ (1 − δ)^k   for all k ∈ N.   (3.24)

For k = 1, the claim follows directly from (3.23). On the other hand:

P(τ > (k+1)n) = E[1_{τ>(k+1)n}] = E[E(1_{τ>(k+1)n} | F_{kn})] ≤ E[(1 − δ) 1_{τ>kn}] = (1 − δ) E[1_{τ>kn}] ≤ (1 − δ)^{k+1},

where we used the inductive assumption in addition to Lemma A.17 (i) and (3.23), to deduce that:

E(1_{τ>(k+1)n} | F_{kn}) = ∫_{∏_{i=kn+1}^∞ Ω_1} 1_{τ>(k+1)n} d(∏_{i=kn+1}^∞ P_1)(w_i, a_i) ≤ (1 − δ) 1_{τ>kn}   a.s.


and conclude (3.24). Finally, since {τ > kn}_{k=1}^∞ is a sequence of decreasing subsets of Ω, the bound in (3.24) yields:

P(τ = +∞) = P(∩_{k=1}^∞ {τ > kn}) = lim_{k→∞} P(τ > kn) ≤ lim_{k→∞} (1 − δ)^k = 0.

This completes the proof of Lemma 3.13. □
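The geometric tail bound (3.24) is easy to probe numerically. The sketch below is a hypothetical 1-D setup (not from the text): it estimates the survival probabilities P(τ > k n₀) of a pure-noise ε-ball walk, which by the lemma are dominated by a geometric sequence.

```python
import random

def exit_time(x0, eps, rng, lo=0.0, hi=1.0):
    # Pure-noise game (alpha = 1) on D = (lo, hi): a uniform step in (-eps, eps),
    # stopped once the token is within eps of the boundary.
    x, n = x0, 0
    while lo + eps < x < hi - eps:
        x += eps * rng.uniform(-1.0, 1.0)
        n += 1
    return n

rng = random.Random(1)
eps, n0 = 0.1, 20                # the block length n0 plays the role of n in (3.24)
taus = [exit_time(0.5, eps, rng) for _ in range(3000)]
s1 = sum(t > n0 for t in taus) / len(taus)        # estimate of P(tau > n0)
s2 = sum(t > 2 * n0 for t in taus) / len(taus)    # estimate of P(tau > 2 n0)
```

One should observe s2 noticeably smaller than s1, in accordance with P(τ > 2n₀) ≤ (1 − δ)².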

5. Consequently, given a starting point x_0 ∈ D and two strategies σ_I and σ_II, one can define the vector-valued random variable X^{x_0,σ_I,σ_II}(τ^{x_0,σ_I,σ_II}) : Ω → D♦:

X^{x_0,σ_I,σ_II}(τ^{x_0,σ_I,σ_II})(ω) = X^{x_0,σ_I,σ_II}_{τ^{x_0,σ_I,σ_II}(ω)}(ω)   P-a.s. in Ω.

For a bounded, Borel function F : Γ_ε → R, we now set:

u_I(x_0) = sup_{σ_I} inf_{σ_II} E[F ∘ X^{x_0,σ_I,σ_II}(τ^{x_0,σ_I,σ_II})],
u_II(x_0) = inf_{σ_II} sup_{σ_I} E[F ∘ X^{x_0,σ_I,σ_II}(τ^{x_0,σ_I,σ_II})],   (3.25)

where sup and inf are taken over all strategies as above, and the expectation E is with respect to the probability measure P on (Ω, F, P). These can be understood as the values of the game for the respective Players I and II: the minimum gain for Player I if he plays optimally, choosing the strategy σ_I that maximizes the expectation E[F ∘ X_τ] under the assumption that Player II always responds with his best strategy; and the minimum loss for Player II if he plays optimally (and assumes that Player I is a perfect player as well). It turns out that the game values u_I and u_II coincide (i.e., the Tug-of-War game described at the beginning of this section has a value) and that they equal the solution of the averaging principle (3.18). We now state the main theorem in this setting, to be proved in the next section:

Theorem 3.14 In the setting of Theorem 3.9, let F : Γ_ε → R be a bounded, Borel function. Let u be the solution to (3.18) and let u_I, u_II be the two game values as in (3.25). Then:

u_I = u = u_II   in D.   (3.26)

We remark that, in the context of game theory, the equality u_I = u_II in (3.26) is not automatic. Consider the matrix A = [a_ij]_{i,j=1,2} = diag(1, 2) ∈ R^{2×2} and the corresponding game where Player I selects a row i, Player II selects a column j, and then Player II pays Player I the value a_ij. Then:

u_I = max_i min_j a_ij = 0 < 1 = min_j max_i a_ij = u_II.
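The common value in (3.26) can be approximated by iterating a discretization of the averaging principle (3.18) to a fixed point. The sketch below is a 1-D illustration with hypothetical parameters (the grid, the weight α = 1/2 and the interval domain are arbitrary choices, not the book's setting):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def solve_dpp(F, eps=0.1, h=0.01, alpha=0.5, tol=1e-9, it_max=20000):
    """Fixed-point iteration for a 1-D discretization of (3.18):
    u(x) = alpha * (mean of u over B_eps(x)) + (beta/2) * (sup + inf over B_eps(x)),
    with u = F on the eps-strip near the boundary of (0, 1)."""
    x = np.arange(0.0, 1.0 + h / 2, h)
    u = F(x).astype(float)
    k = int(round(eps / h))                  # eps-ball radius, in grid cells
    beta = 1.0 - alpha
    inner = slice(k + 1, len(x) - k - 1)     # points with dist(x, boundary) > eps
    for _ in range(it_max):
        w = sliding_window_view(u, 2 * k + 1)           # all eps-balls at once
        t = alpha * w.mean(axis=1) + beta / 2 * (w.max(axis=1) + w.min(axis=1))
        u_new = u.copy()
        u_new[inner] = t[1:-1]               # window j is centered at grid point j + k
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, u
```

For linear boundary data F(x) = x, the linear function is an exact fixed point of the discrete operator, and for general data the iterates stay between min F and max F by the comparison property of the averaging operator.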


3.5 The First Averaging as the Dynamic Programming Principle

We will use the notation of Definition 3.8 and Theorem 3.14. An important role in the proof of (3.26) will be played by the almost-optimal selection lemma:

Lemma 3.15 Let u : D → R be a bounded, Borel function. Fix ε ∈ (0, 1) and δ > 0. There exist Borel functions σ_sup, σ_inf : D \ Γ_ε → D such that:

σ_sup(x), σ_inf(x) ∈ B_ε(x)   for all x ∈ D \ Γ_ε   (3.27)

and:

u(σ_sup(x)) ≥ sup_{y∈B_ε(x)} u(y) − δ,   u(σ_inf(x)) ≤ inf_{y∈B_ε(x)} u(y) + δ   for all x ∈ D \ Γ_ε.   (3.28)

Proof 1. We will prove the existence of σ_sup, while the existence of σ_inf follows in a similar manner. By adding a constant, we may assume that u is nonnegative. Firstly, let u = 1_A for some Borel set A ⊂ D. We write A + B_ε(0) = ∪_{i=1}^∞ B_ε(x_i) as the union of countably many open balls centred at points x_i ∈ A. Define, for all x ∈ D \ Γ_ε:

σ_sup(x) = x     if x ∉ A + B_ε(0),
σ_sup(x) = x_i   if x ∈ B_ε(x_i) \ ∪_{j=1}^{i−1} B_ε(x_j).

Clearly, σ_sup above is Borel, it satisfies (3.27), and also we have:

u(σ_sup(x)) = sup_{B_ε(x)} u   for all x ∈ D \ Γ_ε.   (3.29)

2. Let u = ∑_{k=1}^n α_k 1_{A_k} be a simple function, given by n disjoint Borel sets {A_k ⊂ D}_{k=1}^n satisfying ∪_{k=1}^n A_k = D, and scalars α_1 > . . . > α_n. We now write, as before: A_k + B_ε(0) = ∪_{i=1}^∞ B_ε(x_i^k), with x_i^k ∈ A_k for all i and k. Similarly as before, (3.27) and (3.29) hold by setting, for all x ∈ D \ Γ_ε:

σ_sup(x) = x_i^k   if x ∈ B_ε(x_i^k) \ (∪_{j=1}^{i−1} B_ε(x_j^k) ∪ ∪_{l=1}^{k−1} (A_l + B_ε(0))),
σ_sup(x) = x       otherwise.

For a general bounded, Borel function u, the claim (3.28) follows by approximating u with simple functions of the above form. □

Proof (of Theorem 3.14) 1. We first prove that u_II ≤ u in D. Fix x_0 ∈ D and η > 0. By Lemma 3.15, there exists a strategy σ_0,II for Player II (as defined in Sect. 3.4), such that σ_0,II^n(h_n) = σ_0,II^n(x_n) and that, for every h_n ∈ H_n there holds:

u(x_n + ε σ_0,II^n(x_n)) ≤ inf_{y∈B_ε(x_n)} u(y) + η/2^{n+1}   if x_n ∈ D \ Γ_ε,   (3.30)
x_n + ε σ_0,II^n(x_n) = x_n   if x_n ∈ Γ_ε.

Clearly then, we have:

u_II(x_0) = inf_{σ_II} sup_{σ_I} E[F ∘ X^{x_0,σ_I,σ_II}(τ^{x_0,σ_I,σ_II})] ≤ sup_{σ_I} E[F ∘ X^{x_0,σ_I,σ_0,II}(τ^{x_0,σ_I,σ_0,II})].

We will prove that, for every σ_I:

E[F ∘ X^{x_0,σ_I,σ_0,II}(τ^{x_0,σ_I,σ_0,II})] ≤ u(x_0) + η,   (3.31)

which, in view of the arbitrariness of η > 0, will yield u_II(x_0) ≤ u(x_0), as claimed.

2. Let σ_I be any strategy of Player I, and consider the sequence of random variables {M_n}_{n=0}^∞ on Ω, given by:

M_n = u ∘ X_n^{x_0,σ_I,σ_0,II} + η/2^n.

As usual, we drop the superscripts in X_n and simply write: M_n = u ∘ X_n + η/2^n. We claim that {M_n}_{n=0}^∞ is a supermartingale with respect to the filtration {F_n}_{n=0}^∞:

E(M_n | F_{n−1}) = E(u ∘ X_n | F_{n−1}) + η/2^n ≤ u ∘ X_{n−1} + η/2^n + η/2^n = M_{n−1}   a.s.   (3.32)


To prove (3.32), fix n ≥ 1 and compute:

E(u ∘ X_n | F_{n−1})((w_1, a_1), . . . , (w_{n−1}, a_{n−1}))
= ∫_{Ω_1} u ∘ X_n((w_1, a_1), . . . , (w_{n−1}, a_{n−1}), (w_n, a_n)) dP_1(w_n, a_n)
= α ⨍_{B_1(0)} u(x_{n−1} + ε w_n) dw_n + (β/2) u(x_{n−1} + ε σ_I^{n−1}(h_{n−1})) + (β/2) u(x_{n−1} + ε σ_0,II^{n−1}(h_{n−1}))   if x_{n−1} ∈ D \ Γ_ε,
  and = u(x_{n−1})   if x_{n−1} ∈ Γ_ε,
≤ α A_ε u(x_{n−1}) + (β/2) (sup_{y∈B_ε(x_{n−1})} u(y) + inf_{y∈B_ε(x_{n−1})} u(y)) + η/2^n   if x_{n−1} ∈ D \ Γ_ε,
  and ≤ u(x_{n−1})   if x_{n−1} ∈ Γ_ε,
≤ u(x_{n−1}) + η/2^n   a.s.,

where we used Lemma A.17 (i) in the first equality, the Fubini–Tonelli theorem (Theorem A.11) in the second equality, the bound (3.30) in the successive inequality, and the averaging principle (3.18) on D in the final bound. The supermartingale property (3.32) thus follows. Applying Doob's theorem (Theorem A.34 (ii)), in view of the uniform boundedness of {M_n}_{n=0}^∞, yields:

E[M_τ] ≤ E[M_0] = u(x_0) + η.

Since F ∘ X_τ = u ∘ X_τ, we easily conclude (3.31):

E[F ∘ X_τ] ≤ E[M_τ] ≤ u(x_0) + η.

3. To prove that u ≤ u_I in D, we argue exactly as above, choosing an almost-optimal strategy σ_0,I for Player I, so that σ_0,I^n(h_n) = σ_0,I^n(x_n) and:

u(x_n + ε σ_0,I^n(x_n)) ≥ sup_{y∈B_ε(x_n)} u(y) − η/2^{n+1}   if x_n ∈ D \ Γ_ε,
x_n + ε σ_0,I^n(x_n) = x_n   if x_n ∈ Γ_ε,

for all h_n ∈ H_n. It follows that for every strategy σ_II of Player II, the sequence of random variables {M_n}_{n=0}^∞, defined by:

M_n = u ∘ X_n^{x_0,σ_0,I,σ_II} − η/2^n,   (3.33)


is a submartingale adapted to the filtration {F_n}_{n=0}^∞. This allows us to conclude:

u_I(x_0) ≥ inf_{σ_II} E[F ∘ X^{x_0,σ_0,I,σ_II}(τ^{x_0,σ_0,I,σ_II})] ≥ E[M_τ] ≥ E[M_0] = u(x_0) − η,

proving u ≤ u_I, since η > 0 was arbitrary. We remark that one can alternatively deduce u ≤ u_I from the already established inequality u_II ≤ u, by replacing u with −u and switching the roles of the players. □

Exercise 3.16 (i) In the setting of Theorem 3.14, show that u_I ≤ u_II. (ii) Prove that the sequence defined in (3.33) is a submartingale with respect to the filtration {F_n}_{n=0}^∞. Deduce that u(x_0) ≤ u_I(x_0).

Exercise 3.17 If we replace the open balls B_ε(x) in the requirement (3.27) by the closed ones, then the Borel selection in Lemma 3.15 may not exist. Let ε = 1, δ = 1/3 and let u = 1_A, where A ⊂ R^3 is a bounded Borel set with the property that A + B̄_1(0) is not Borel. Show that (3.28) does not hold. The existence of A is nontrivial (see Luiro et al. 2014) and relies on constructing a Borel set in R^2 whose projection on the x_1 axis is not Borel. This extends the famous example in Erdős and Stone (1970) of a compact (Cantor) set A and a G_δ set B such that A + B is not Borel.

3.6* Case p = 2 and Brownian Trajectories

In this section we compare the Tug-of-War game process corresponding to p = 2 with the appropriate discrete realization of the Brownian motion trajectories. Similarly as in the discussion of Sect. 2.7* for the ball walk studied in Chap. 2, familiarity with the material in Appendix B will be assumed.

We will use the notation of Sect. 2.7*, for (Ω̄, F̄, P̄) = (Ω_B, F_B, P_B) × (Ω, F, P) being the product probability space, where (Ω, F, P) denotes the space in Sect. 3.4, and for the filtration {F̄_t}_{t≥0}. We also refer to the standard Brownian motion {B^N_t}_{t≥0} discussed in Appendix B. For D ⊂ R^N open, bounded, connected, and given a starting position x_0 ∈ D, recall the definition of the exit time:

τ̄(ω_B) = min{t ≥ 0; x_0 + B^N_t(ω_B) ∈ ∂D}.   (3.34)

Fix ε ∈ (0, 1). Since the discrete process (3.22) utilizes random sampling on balls of constant radii ε, in distinction from the shrinking-radii construction in the ball walk (2.2), we modify definition (2.21) to:

τ̄_0 = 0,
τ̄_{n+1}(ω_B, {w_i}_{i=1}^∞) = min{t ≥ τ̄_n(ω_B, ω); |B^N_t(ω_B) − B^N_{τ̄_n}(ω_B)| = ε |w_{n+1}|}.   (3.35)

As in Lemma 2.24 and Theorem 2.25, one can deduce the following properties of the τ̄_n and their relation to the discrete random walk:

Exercise 3.18 In the above context, prove that:
(i) Each τ̄_k : Ω̄ → [0, ∞] in (3.35) is a stopping time on (Ω̄, F̄, P̄).
(ii) Consider the random variables X_n^{ε,x_0} : Ω → R^N in:

X_0^{ε,x_0} = x_0,   X_{n+1}^{ε,x_0}(w_1, . . . , w_{n+1}) = X_n^{ε,x_0} + ε w_{n+1}.

Then for all n ≥ 0 and all Borel A ⊂ R^N we have:

P̄(x_0 + B^N_{τ̄_n} ∈ A) = P(X_n^{ε,x_0} ∈ A).

In what follows, we will consider the same stopping time as in Sect. 3.4:

τ_1^{ε,x_0} = min{n ≥ 0; dist(X_n^{ε,x_0}, ∂D) ≤ ε}.

For a given F ∈ C_c(R^N), the formulas (3.25) applied with p = 2 reduce to:

u_ε(x_0) = ∫_Ω F ∘ X^{ε,x_0}_{τ_1^{ε,x_0}} dP.   (3.36)

We now prove the main statement of this section, which is:

Theorem 3.19 In the above context, {u_ε}_{ε→0} in (3.36) converge pointwise on D to the harmonic extension of F|_∂D in:

u(x_0) = ∫_{Ω_B} F ∘ (x_0 + B^N_{τ̄}) dP_B.   (3.37)

When D is regular, in the sense that for every y_0 ∈ ∂D there holds:

∀η, δ > 0   ∃δ̂ ∈ (0, δ), ε̂ ∈ (0, 1)   ∀ε ∈ (0, ε̂), x_0 ∈ B_δ̂(y_0) ∩ D:
P(X^{ε,x_0}_{τ_1^{ε,x_0}} ∈ B_δ(y_0)) ≥ 1 − η,   (3.38)

then the convergence of {u_ε}_{ε→0} to u is uniform.
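Formula (3.36) can be probed by direct Monte Carlo. In the sketch below (an illustration with hypothetical parameters), D is the unit disk and F(x, y) = x; since this F is harmonic, its harmonic extension is F itself, so the ball-walk value at x_0 should be close to the first coordinate of x_0.

```python
import math
import random

def ball_walk_value(x0, y0, eps, n_rounds, rng):
    """eps-ball walk in the unit disk, stopped once within eps of the boundary;
    returns the Monte Carlo average of F(x, y) = x at the stopping position."""
    total = 0.0
    for _ in range(n_rounds):
        x, y = x0, y0
        while math.hypot(x, y) < 1.0 - eps:
            while True:                       # uniform point of the eps-ball, by rejection
                dx = rng.uniform(-eps, eps)
                dy = rng.uniform(-eps, eps)
                if dx * dx + dy * dy <= eps * eps:
                    break
            x, y = x + dx, y + dy
        total += x          # F(x, y) = x is harmonic, so the limit value is x0
    return total / n_rounds

value = ball_walk_value(0.3, 0.0, eps=0.1, n_rounds=3000, rng=random.Random(3))
```

Since the coordinate process is a martingale and the walk is stopped at a bounded time, the expectation equals x₀ exactly; only the Monte Carlo error remains.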


Proof 1. Fix x_0 ∈ D, ε ∈ (0, 1) and, in addition to τ_1^{ε,x_0}, define the corresponding stopping time for {B^N_t}_{t≥0} on Ω̄:

T_1^{ε,x_0} = min{τ̄_n; dist(x_0 + B^N_{τ̄_n}, ∂D) ≤ ε}.

The fact that T_1^{ε,x_0} is a.s. well defined follows as in the proof of Lemma 2.24, where we note that on the full measure set:

A = {τ̄ < +∞} ∩ {ω = {w_i}_{i=1}^∞; |w_i| ≥ 1/2 for infinitely many i} ⊂ Ω̄,

it is not possible to have: dist(x_0 + B^N_{τ̄_n}, ∂D) > ε for all n ≥ 0. Indeed, otherwise, the bounded sequence {x_0 + B^N_{τ̄_n}(ω̄)}_{n=0}^∞ along some ω̄ = (ω_B, ω) ∈ A would have a converging subsequence, and since lim_{n→∞} τ̄_n(ω̄) < +∞ in view of τ̄(ω_B) < +∞, then: lim_{n→∞} B^N_{τ̄_n}(ω̄) = B^N_{lim_{n→∞} τ̄_n(ω̄)}(ω_B). However, the sequence {B^N_{τ̄_n}(ω̄)}_{n=0}^∞ being Cauchy stands in contradiction with having infinitely many |w_i| ≥ 1/2 in ω. It is also straightforward to check (using Lemma B.21) that T_1^{ε,x_0} is a stopping time. We now argue that for every f ∈ C_c(R^N) there holds:

∫_Ω̄ f ∘ (x_0 + B^N_{T_1^{ε,x_0}}) dP̄ = ∫_Ω f ∘ X^{ε,x_0}_{τ_1^{ε,x_0}} dP.   (3.39)

The argument follows along the lines of the proof of Theorem 2.25 and Exercise 3.18 (ii), by inductively checking that for all Borel A ⊂ R^N and all n ≥ 0:

P̄({x_0 + B^N_{τ̄_n} ∈ A} ∩ ∩_{k=0}^{n−1} {dist(x_0 + B^N_{τ̄_k}, ∂D) > ε}) = P({X_n^{ε,x_0} ∈ A} ∩ ∩_{k=0}^{n−1} {dist(X_k^{ε,x_0}, ∂D) > ε}).

Applying the above assertion to the Borel subsets A ⊂ {x ∈ D; dist(x, ∂D) ≤ ε}, we obtain, for all n ≥ 0:

P̄({T_1^{ε,x_0} = τ̄_n} ∩ {x_0 + B^N_{τ̄_n} ∈ A}) = P({τ_1^{ε,x_0} = n} ∩ {X_n^{ε,x_0} ∈ A}).

Consequently, the claimed identity (3.39) follows through:

∑_{n=0}^∞ ∫_{Ω̄ ∩ {T_1^{ε,x_0} = τ̄_n}} f ∘ (x_0 + B^N_{τ̄_n}) dP̄ = ∑_{n=0}^∞ ∫_{Ω ∩ {τ_1^{ε,x_0} = n}} f ∘ X_n^{ε,x_0} dP.


2. We now observe that: lim_{ε→0} T_1^{ε,x_0} = τ̄ a.s. in Ω̄. Indeed, we have: T_1^{ε,x_0} < τ̄, so on the event {τ̄ < +∞}, any sequence J ⊂ (0, 1) decreasing to 0 must have a subsequence along which {T_1^{ε,x_0}}_{ε→0} converges to some T ≤ τ̄, whereas {x_0 + B^N_{T_1^{ε,x_0}}}_{ε→0} converges to x_0 + B^N_T ∈ ∂D, implying T = τ̄. We now apply (3.39) with F, to the effect of:

lim_{ε→0} u_ε(x_0) = lim_{ε→0} ∫_Ω̄ F ∘ (x_0 + B^N_{T_1^{ε,x_0}}) dP̄ = ∫_Ω̄ F ∘ (x_0 + B^N_{τ̄}) dP̄ = u(x_0),

which proves the first part of the Theorem. Towards the second part, we begin with the following useful implication (which in fact is an equivalence, see Theorem 3.21):

Lemma 3.20 Given y_0 ∈ ∂D, condition (3.38) implies the following regularity:

∀η, δ > 0   ∃δ̂ ∈ (0, δ)   ∀x_0 ∈ B_δ̂(y_0) ∩ D:
P_B(x_0 + B^N_{τ̄} ∈ B_δ(y_0)) ≥ 1 − η.   (3.40)

Proof Clearly, (3.39) allows for replacing X^{ε,x_0}_{τ_1^{ε,x_0}} by x_0 + B^N_{T_1^{ε,x_0}} in (3.38), namely:

∀η, δ > 0   ∃δ̂ ∈ (0, δ), ε̂ ∈ (0, 1)   ∀ε ∈ (0, ε̂), x_0 ∈ B_δ̂(y_0) ∩ D:
P̄(x_0 + B^N_{T_1^{ε,x_0}} ∈ B_δ(y_0)) ≥ 1 − η.   (3.41)

Fix η, δ > 0 and find δ̂, ε̂ > 0 such that: P̄(x_0 + B^N_{T_1^{1/n,x_0}} ∈ B_{δ/2}(y_0)) ≥ 1 − η for all x_0 ∈ B_δ̂(y_0) ∩ D and n > 1/ε̂. Since:

{x_0 + B^N_{τ̄} ∈ B_δ(y_0)} ⊃ ∪_{k≥1} ∩_{n>k} {x_0 + B^N_{T_1^{1/n,x_0}} ∈ B_{δ/2}(y_0)}

holds on the full measure event {lim_{ε→0} T_1^{ε,x_0} = τ̄}, it follows that:

P(x_0 + B^N_{τ̄} ∈ B_δ(y_0)) ≥ 1 − η   for all x_0 ∈ B_δ̂(y_0) ∩ D,

as claimed in (3.40). □

3. We are now ready to complete the proof of Theorem 3.19. By Lemma 3.20, condition (3.40) holds for all y_0 ∈ ∂D. Similarly as in Lemma 5.6, observe that δ̂ may be chosen uniformly in y_0, due to the compactness of ∂D:

∀η, δ > 0   ∃δ̂ ∈ (0, δ)   ∀y_0 ∈ ∂D, x_0 ∈ B_δ̂(y_0) ∩ D:
P_B(x_0 + B^N_{τ̄} ∈ B_δ(y_0)) ≥ 1 − η.   (3.42)


Fix δ_0 ∈ (0, 1). By the continuity of F on D̄, there exists δ > 0 such that:

∀x, y ∈ D̄:   |x − y| < δ ⇒ |F(x) − F(y)| ≤ δ_0/2.

We take ε ∈ (0, δ̂), where δ̂ < δ/2 is obtained by applying (3.42) to δ/2 in place of δ, and to η = δ_0/(4‖F‖_∞ + 1). Firstly, there holds:

∫_Ω̄ |F ∘ (x_0 + B^N_{T_1^{ε,x_0}}) − F ∘ (x_0 + B^N_{τ̄})| dP̄
= ∫_Ω̄ E[|F ∘ (x_0 + B^N_{T_1^{ε,x_0}}) − F ∘ (x_0 + B^N_{τ̄})| | F̄_{T_1^{ε,x_0}}] dP̄
= ∫_Ω̄ ∫_{Ω_B} |F(x) − F(x + B^N_{τ̄_x}(ω))| dP_B(ω) dP̄(ω̄),   (3.43)

where we denoted x = x_0 + B^N_{T_1^{ε,x_0}}(ω̄), with τ̄_x standing for the exit time (3.34) of the Brownian motion started at x, and where we used the strong Markov property, reasoning as in the proof of Theorem B.28. To estimate the internal integral above, for a given x ∈ D with dist(x, ∂D) < ε, let y_0 ∈ ∂D be such that |x − y_0| < δ̂. Then, (3.42) yields:

∫_{Ω_B} |F(x) − F ∘ (x + B^N_{τ̄_x})| dP_B
= ∫_{{x + B^N_{τ̄_x} ∈ B_δ(y_0)}} |F(x) − F ∘ (x + B^N_{τ̄_x})| dP_B + ∫_{{x + B^N_{τ̄_x} ∉ B_δ(y_0)}} |F(x) − F ∘ (x + B^N_{τ̄_x})| dP_B
≤ δ_0/2 + 2‖F‖_∞ η ≤ δ_0.

Consequently, (3.43) becomes:

∫_Ω̄ |F ∘ (x_0 + B^N_{T_1^{ε,x_0}}) − F ∘ (x_0 + B^N_{τ̄})| dP̄ ≤ δ_0   for all ε < δ̂,

achieving the proof of Theorem 3.19. □


3.7* Equivalence of Regularity Conditions

In this section, we note the equivalence of all the boundary regularity conditions used so far with the classical Doob regularity condition (3.44) below.

Theorem 3.21 Given y_0 ∈ ∂D, conditions (3.38) and (3.40) are equivalent. They are further equivalent to:

P_B(inf{t > 0; y_0 + B^N_t ∉ D} = 0) = 1,   (3.44)

and also to the strengthened version of (3.40), namely:

∀η, δ > 0   ∃δ̂ ∈ (0, δ)   ∀x_0 ∈ B_δ̂(y_0) ∩ D:
P_B(τ̄ < min{t ≥ 0; x_0 + B^N_t ∉ B_δ(y_0)}) ≥ 1 − η.   (3.45)

Proof 1. In Lemma 3.20 we already showed that (3.38)⇒(3.40). Also, the implication (3.45)⇒(3.38) is elementary, as T_1^{ε,x_0} < τ̄ and hence:

{τ̄ < min{t ≥ 0; x_0 + B^N_t ∉ B_δ(y_0)}} ⊂ {x_0 + B^N_{T_1^{ε,x_0}} ∈ B_δ(y_0)}.

We now prove (3.40)⇒(3.44). Denote T = inf{t > 0; y_0 + B^N_t ∉ D} and, for each r > 0, define the stopping time τ_r = min{t > 0; |B^N_t| ≥ r}. It follows that:

P(inf_{r>0} τ_r > 0) = P(∪_{n≥1} {inf_{r>0} τ_r > 1/n}) = lim_{n→∞} P(inf_{r>0} τ_r > 1/n) ≤ lim_{n→∞} P(B^N_{1/n} = 0) = 0.

Thus, to show (3.44) it suffices to check that:

P_B(∃t ∈ (0, τ_r): y_0 + B^N_t ∉ D) = P_B(T < τ_r) = 1   for all r > 0.   (3.46)

Fix η > 0 and for each δ ∈ (0, r) consider the events:

A_r(δ) = {T > τ_r} ∩ {|B^N_T| < δ}.

Since the decreasing intersection ∩_{n=1}^∞ A_r(1/n) is contained in {T > τ_r} ∩ {B^N_T = 0}, whose probability is 0, there exists δ > 0 satisfying:

P_B(A_r(δ)) < η.

For the chosen δ, η > 0 we assign δ̂ ∈ (0, δ) according to (3.40) and write:

P_B(∃t ∈ (0, τ_r): y_0 + B^N_t ∉ D)
≥ P_B(∃t ∈ (0, τ_r): y_0 + B^N_t ∈ B_δ(y_0) \ D)
= P_B(∃t ∈ (0, τ_{δ̂/2}]: y_0 + B^N_t ∉ D)
  + P_B({∀t ∈ (0, τ_{δ̂/2}]: y_0 + B^N_t ∈ D} ∩ {∃t > τ_{δ̂/2}: y_0 + B^N_t ∈ B_δ(y_0) \ D} ∩ (Ω_B \ A_r(δ))),   (3.47)

and estimate the second probability in the right hand side above by:

P_B(∃t > τ_{δ̂/2}: y_0 + B^N_t ∈ B_δ(y_0) \ D | ∀t ∈ (0, τ_{δ̂/2}]: y_0 + B^N_t ∈ D) · P_B(∀t ∈ (0, τ_{δ̂/2}]: y_0 + B^N_t ∈ D) − P_B(A_r(δ))
≥ P_B(∀t ∈ (0, τ_{δ̂/2}]: y_0 + B^N_t ∈ D) · (1 − η) − η,

since the indicated conditional probability is a.s. bounded from below by 1 − η, in view of the assumed (3.40) and the strong Markov property. In conclusion, (3.47) yields:

P_B(∃t ∈ (0, τ_r): y_0 + B^N_t ∉ D) ≥ 1 − 2η,

implying (3.46) and (3.44), in view of η > 0 being arbitrarily small.

2. In this step, we check the remaining implication (3.44)⇒(3.45). For fixed δ, η > 0, by (3.44) it follows that P(∃t ∈ (0, τ_{δ/2}): y_0 + B^N_t ∉ D) = 1, hence for some large n > 0 we get:

P(∃t ∈ (1/n, τ_{δ/2}): y_0 + B^N_t ∉ D) ≥ 1 − η/2.   (3.48)

We will show that taking δ̂ ≤ η/(2√n) results in:

P(∃t ∈ (1/n, τ_{δ/2}): x_0 + B^N_t ∉ D) ≥ 1 − η   for all x_0 ∈ B_δ̂(y_0) ∩ D,   (3.49)

as the probabilities in the left-hand sides of (3.48) and (3.49) differ at most by η/2. The argument uses the "reflection coupling" as follows.

Fix x_0 ∈ B_δ̂(y_0) ∩ D and let H denote the hyperplane bisecting the segment [x_0, y_0]. By the usual application of Exercise B.10, the random variable T_H = min{t > 0; y_0 + B^N_t ∈ H} is a stopping time for {B^N_t}_{t≥0}. We will employ the fact that (see Exercise 3.22):

P(T_H > 1/n) ≤ |x_0 − y_0| √n.   (3.50)

We denote by {B^{N,H}_t}_{t≥0} the Brownian motion reflected across the (N − 1)-dimensional subspace H − (x_0 + y_0)/2 ⊂ R^N. Then the newly defined process:

B̄_t = B^{N,H}_t   for t ≤ T_H,
B̄_t = B^N_t + B^{N,H}_{T_H} − B^N_{T_H}   for t > T_H,

is again a Brownian motion (see Exercise B.25). The uniqueness of the Wiener measure in Theorem B.15 and Exercise B.17 imply that:

P(∃t ∈ (1/n, τ_{δ/2}): x_0 + B^N_t ∉ D) = P(∃t ∈ (1/n, τ_{δ/2}): x_0 + B̄_t ∉ D).

At the same time, for t ≥ T_H there holds: x_0 + B̄_t = y_0 + B^N_t, because: B^{N,H}_{T_H} − B^N_{T_H} = y_0 − x_0.

Consequently:

{∃t ∈ (1/n, τ_{δ/2}): x_0 + B̄_t ∉ D} ∩ {T_H ≤ 1/n} = {∃t ∈ (1/n, τ_{δ/2}): y_0 + B^N_t ∉ D} ∩ {T_H ≤ 1/n},

which, in view of (3.50), results in:

|P(∃t ∈ (1/n, τ_{δ/2}): x_0 + B̄_t ∉ D) − P(∃t ∈ (1/n, τ_{δ/2}): y_0 + B^N_t ∉ D)| ≤ P(T_H > 1/n) ≤ δ̂ √n ≤ η/2,

and in (3.49). Requesting further that |x_0 − y_0| < δ/2, we obtain (3.45). □

 


Exercise 3.22 Use the following outline to show that, for every θ, ζ > 0, the following identity holds for the standard 1-dimensional Brownian motion:

P(min{t ≥ 0; B^1_t = θ} > ζ) = P(|B^1_1| < θ/√ζ).

(i) Consider the stopping time: τ_θ = min{t ≥ 0; B^1_t = θ} and define a reflected process by:

B̄_t = B^1_t   for t ≤ τ_θ,
B̄_t = −B^1_t + 2θ   for t > τ_θ.

Deduce that {B̄_t}_{t≥0} is a standard 1-dimensional Brownian motion.

(ii) We write: {τ_θ > ζ} = {B^1_ζ < θ} \ ({τ_θ < ζ} ∩ {B^1_ζ < θ}). Observe that:

P({τ_θ < ζ} ∩ {B^1_ζ < θ}) = P({τ_θ < ζ} ∩ {B̄_ζ < θ}) = P(B^1_ζ > θ),

and conclude, towards completing the proof:

P(τ_θ > ζ) = P(B^1_ζ < θ) − P(B^1_ζ > θ) = P(B^1_ζ < θ) − P(B^1_ζ < −θ) = P(|B^1_ζ| < θ) = P(|B^1_1| < θ/√ζ).

(iii) Deduce further that: P(min{t ≥ 0; B^1_t = θ} > ζ) ≤ √(2/π) · θ/√ζ.
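The identity of Exercise 3.22 can be verified numerically. The sketch below (an illustrative Euler discretization; the step and path counts are arbitrary choices) estimates the left-hand side on a discretized Brownian path and compares it with the closed form P(|B¹₁| < θ/√ζ) = erf(θ/√(2ζ)). Note that discrete monitoring misses some level crossings, so the estimate slightly overestimates the survival probability.

```python
import math
import random

def survival_prob(theta, zeta, n_steps, n_paths, rng):
    # Estimate P( min{t >= 0; B_t = theta} > zeta ) on a discretized path.
    dt = zeta / n_steps
    s = math.sqrt(dt)
    survive = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += s * rng.gauss(0.0, 1.0)
            if b >= theta:
                break                 # the level theta was hit before time zeta
        else:
            survive += 1
    return survive / n_paths

rng = random.Random(2)
estimate = survival_prob(theta=1.0, zeta=1.0, n_steps=1000, n_paths=3000, rng=rng)
exact = math.erf(1.0 / math.sqrt(2.0))    # P(|B_1| < theta / sqrt(zeta))
```

For θ = ζ = 1 the exact value is erf(1/√2) ≈ 0.683, and the Monte Carlo estimate should agree up to sampling and discretization error.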

3.8 Bibliographical Notes

The asymptotic mean value property (3.13) (called here the "averaging principle") for p-harmonic functions with exponent p ∈ (1, ∞] has been first put forward in Juutinen et al. (2010), where the authors proved that u ∈ C(D) is a viscosity solution to Δ_p u = 0 in D if and only if (3.13) holds in the viscosity sense as ε → 0, for all x ∈ D. The paper by Kawohl et al. (2012) concerns further properties of solutions to nonlinear PDEs in the sense of averages, including a discussion of the case p = 1. It is also worth mentioning that the interpolation (3.5) has been used in Kawohl (2008) in the context of the applications of Δ_p to image recognition, and for the evolutionary problem in Does (2011). In Le Gruyer (1998, 2007) the asymptotic mean value expansion for the case p = ∞ has been developed, and the uniform convergence of solutions of: u(x) = ½ (inf_{B_ε(x)} u + sup_{B_ε(x)} u), to the viscosity solution of Δ_∞ u = 0 has been shown.


The connection of the Dirichlet problem for Δ_∞ to the Tug-of-War games has been first analysed in the fundamental paper by Peres et al. (2009), followed by the extension to games with no terminal state and the Neumann boundary condition in Antunovic et al. (2012), and to the biased games in the presence of the drift term: β|∇u| + Δ_∞ u = 0, in Peres et al. (2010). Armstrong and Smart (2012) further used a boundary-biased modification of the original construction and obtained estimates for the rate of convergence of the finite difference solutions to the infinity harmonic functions under the Dirichlet boundary condition. The second seminal paper by Peres and Sheffield (2008) treated the case of p ∈ (1, ∞), albeit via a dynamic programming principle and the implicated Tug-of-War game with noise distinct from the expansions (3.13), (3.14) presented in this section. Papers by Manfredi et al. (2012b,a) introduced a version of the Tug-of-War game modelled on (3.13) for p ≥ 2 and proved the uniform convergence of game values to the unique viscosity solution of the associated Dirichlet problem for smooth domains. Convergence rates of solutions u_ε to the limiting u, in the case of ∇u vanishing in finitely many points, have been developed in Luiro and Parviainen (2017). In Luiro et al. (2014) the existence and uniqueness of solutions to (3.18) has been shown for p ≥ 2, while the well-posedness of a more general class of dynamic programming principles has been proved in Liu and Schikorra (2015). The asymptotic mean value formula similar to that in (3.13), applicable in the context of the parabolic problem: u_t = |∇u|^{2−p} Δ_p u, has been studied in Manfredi et al. (2010), where the equivalence of the viscosity solutions and the viscous asymptotic solutions of the aforementioned version of (3.13) has been proved, via the Tug-of-War game for p ≥ 2. The parabolic case with varying exponent p(x) was considered in Parviainen and Ruosteenoja (2016).
In a similar context, a paper by Nyström and Parviainen (2017) developed an option-pricing model based on a Tug-of-War game for the financial market. In Charro et al. (2009), a variation of the Tug-of-War game from Peres et al. (2009) has been adapted to treat the problem Δ_∞ u = 0 subject to the mixed boundary conditions: Dirichlet with Lipschitz data on a portion of the boundary, and Neumann on the remaining C¹ portion of the boundary. Another variant may be found in Gomez and Rossi (2013), leading to the dynamic programming principle: u_ε(x) = ½ (inf_{y∈A(x)} + sup_{y∈A(x)}) u_ε(x + y), whose solutions are shown to converge uniformly to the viscosity solutions of: ∇²u : argmin_{z∈A(x)} ⟨∇v(x), z⟩ = 0. Towards extending this result to the parabolic problem, some work had been done in Gomez and Rossi (2013). Numerical schemes based on the averaging principles in Theorems 3.4 and 3.5 were derived in: Oberman (2005) for Δ_∞, and in Casas and Torres (1996); Falcone et al. (2013); Codenotti et al. (2017) for Δ_p. We also mention that the limiting case p = 1 (not studied in these Course Notes) is related to the level-set formulation of the motion by mean curvature and a variant of Spencer's "pusher-chooser" game, discussed in Kohn and Serfaty (2006) (see also Buckdahn et al. (2001) for the corresponding representation formula). More generally, an approach of realizing a PDE via the Hamilton–Jacobi equation of a deterministic two-persons game has been implemented for a broad class of fully nonlinear second order parabolic or elliptic problems in Kohn and Serfaty (2010). In preparing the material in Sects. 3.6* and 3.7*, the author has benefited from the book by Mörters and Peres (2010) and from discussions with Y. Peres.

Chapter 4

Boundary Aware Tug-of-War with Noise: Case p ∈ (2, ∞)

The aim of this chapter is to improve the regularity of the mean-value-characterized approximate solutions to p-harmonic functions, by implementing the dynamic programming principle (3.14) instead of (3.13) used in Chap. 3, and by further interpolating between the averaging operator and the boundary condition. Our discussion will be presented in the exponent range p ∈ (2, ∞); however, the linear case p = 2 is still formally valid and can be easily covered by approximation or by obvious modifications of the arguments below. In Sect. 4.1 we show that the mean value equation modelled on (3.14) has a unique solution u_ε on D ⊂ R^N, which is continuous (resp. Lipschitz) for continuous (resp. Lipschitz) boundary data F. In Sect. 4.2 we prove that if F is already a restriction of a smooth p-harmonic function with nonvanishing gradient, then the resulting family {u_ε}_{ε→0} converges to F uniformly, at a linear rate in ε. The same result is reproved in Sect. 4.4, based on the interpretation of u_ε as the value of a two-players boundary aware Tug-of-War with noise, introduced in Sect. 4.3. In this game, the token which is initially placed at x_0 ∈ D is then advanced to further consecutive positions according to the following rule: at each step of the game, either of the two players (each acting with probability 1/3) shifts the token by the distance at most rε away from its current position, which is followed by a random shift in the ball of radius ε. The scaling radius factor r depends on p and N. The game terminates, with probability proportional to 1 minus the distance of the token's previous location from R^N \ D, in the ε-neighbourhood of ∂D. Then, u_ε(x_0) is defined as the expected value of F (which is first continuously extended on the said neighbourhood) at the stopping position x_τ, subject to both players playing optimally. As in Chap. 3, the optimality criterion is based on the game rule that Player II pays to Player I the value F(x_τ). In the last Sect. 4.5* we show that, at the linear exponent p = 2, the above-described process is a discrete realization along Brownian motion paths, whereas the stopping positions x_τ remain close to those paths' exiting positions, with high probability. We conclude that the family {u_ε}_{ε→0}, as in Chap. 3, converges pointwise to the Brownian motion harmonic extension u of F on D, while for regular domains this convergence is uniform, guaranteeing that u|_∂D = F. This section is more involved, as it uses material from Appendix B, and may be skipped at first reading.

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020. M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3_4

4.1 The Second Averaging Principle

In this section we study the averaging principle (3.14) under the particular choice of weights α = 1/3, β = 2/3. As we shall see, solutions to the related averaging equation (4.1) below inherit the continuity/Lipschitz continuity properties of the boundary data. This is essentially due to the presence of the integral average in every term of (4.1), and to its further interpolation with the boundary data.

Theorem 4.1 Let D, Γ_out and D♦ be as in Definition 3.8. Fix p > 2 and r = √(3(p−2)/(2(N+2))), and let ε > 0 be such that ε(1 + r) < 1. Then, given a bounded, Borel function F : Γ_out ∪ Γ → R, there exists a unique bounded, Borel function u_ε : D♦ → R which is the solution to:

u_ε(x) = d_ε(x) [ (1/3) A_ε u_ε(x) + (1/3) inf_{y∈B_{εr}(x)} A_ε u_ε(y) + (1/3) sup_{y∈B_{εr}(x)} A_ε u_ε(y) ] + (1 − d_ε(x)) F(x)   for all x ∈ D♦,    (4.1)

where we denote:

d_ε(x) = (1/ε) min{ε, dist(x, Γ_out)}.    (4.2)

Proof 1. We remark that, by continuity of the averaging functions y ↦ A_ε u_ε(y) in (4.1), we could be writing: min_{y∈B̄_{εr}(x)} A_ε u_ε(y) and max_{y∈B̄_{εr}(x)} A_ε u_ε(y), instead of: inf_{y∈B_{εr}(x)} A_ε u_ε(y) and sup_{y∈B_{εr}(x)} A_ε u_ε(y). We prefer to keep the latter notation for consistency (Fig. 4.1).

Fig. 4.1 Sampling sets in the averaging term inf_{y∈B_{εr}(x)} A_ε u_ε(y)

The proof follows the same steps as the proof of Theorem 3.9. To ease the notation, we drop the subscript ε in the solution to (4.1) and write u instead of u_ε. Define the operators T and S, which to any bounded, Borel function v : D♦ → R associate the continuous function Sv : D̄ → R and the Borel function T v : D♦ → R in:

(Sv)(x) = (1/3) ( A_ε v(x) + inf_{y∈B_{εr}(x)} A_ε v(y) + sup_{y∈B_{εr}(x)} A_ε v(y) ),
T v = d_ε Sv + (1 − d_ε) F.    (4.3)

Clearly, S and T are monotone, that is: Sv ≤ Sv̄ and T v ≤ T v̄ if v ≤ v̄. Observe that, for any two bounded, Borel functions v, v̄ : D♦ → R and any x ∈ D̄, we have:

|(Sv)(x) − (Sv̄)(x)| ≤ (1/3) ( |A_ε(v − v̄)(x)| + | inf_{y∈B_{εr}(x)} A_ε v(y) − inf_{y∈B_{εr}(x)} A_ε v̄(y) | + | sup_{y∈B_{εr}(x)} A_ε v(y) − sup_{y∈B_{εr}(x)} A_ε v̄(y) | )
≤ (1/3) A_ε|v − v̄|(x) + (2/3) sup_{y∈B_{εr}(x)} A_ε|v − v̄|(y)
≤ sup_{y∈B̄_{εr}(x)} A_ε|v − v̄|(y).    (4.4)

The solution u of (4.1) is obtained as the limit of the iterations u_{n+1} = T u_n, where we set u₀ ≡ const ≤ inf F and observe that u₁ = T u₀ ≥ u₀ in D♦. By the monotonicity of T, the sequence of Borel functions {u_n}_{n=1}^∞ is bounded, nondecreasing and it converges pointwise. Its limit u : D♦ → R is a bounded, Borel function that must be a fixed point of T and thus a solution to (4.1).

2. For uniqueness, assume by contradiction that u ≠ ū satisfy (4.1) and set:

M = sup_{x∈D♦} |u(x) − ū(x)| = sup_{x∈D} |u(x) − ū(x)| > 0.


Let {x_n}_{n=1}^∞ be a sequence of points in D, such that lim_{n→∞} |u(x_n) − ū(x_n)| = M. By (4.4) we obtain:

|u(x_n) − ū(x_n)| = d_ε(x_n) |(Su)(x_n) − (Sū)(x_n)|
≤ (d_ε(x_n)/3) A_ε|u − ū|(x_n) + (2d_ε(x_n)/3) sup_{y∈B_{εr}(x_n)} A_ε|u − ū|(y)
≤ d_ε(x_n) M ≤ M.

Now, passing to the limit with n → ∞, where lim_{n→∞} x_n = x ∈ D̄, we get:

M = (d_ε(x)/3) A_ε|u − ū|(x) + (2d_ε(x)/3) M,

which yields:

d_ε(x) = 1   and   A_ε|u − ū|(x) = M,

so that in particular |u − ū| = M almost everywhere in B_ε(x) ⊂ D♦. The set D_M = {x ∈ D; A_ε|u − ū|(x) = M} is thus nonempty. Clearly, D_M is relatively closed in D. But it is also open, because for every x ∈ D_M, each point in B_ε(x) ∩ D is a limit of some sequence {x_n}_{n=1}^∞ with the property that |u(x_n) − ū(x_n)| = M, and the same argument as before yields: B_ε(x) ∩ D ⊂ D_M. It now follows that D_M = D, which contradicts u = ū on Γ_out, as in the proof of Theorem 3.9. □

We further have the following easy observations:

Corollary 4.2 In the setting of Theorem 4.1, let u_ε and ū_ε be the unique solutions to (4.1) with the respective Borel, bounded boundary data F and F̄. If F ≤ F̄, then u_ε ≤ ū_ε in D♦.

Corollary 4.3 In the setting of Theorem 4.1, let u₀ be any bounded, Borel function on D♦. Then the sequence {u_n}_{n=1}^∞, defined recursively by u_{n+1} = T u_n, where T is as in (4.3), converges uniformly to the unique solution u_ε of (4.1).

Proof The proof is the same as that of Corollary 3.11. To show uniform convergence, we observe that by (4.4):

|u_{n+1}(x) − u_ε(x)| = |(T u_n)(x) − (T u_ε)(x)| ≤ |(Su_n)(x) − (Su_ε)(x)|
≤ sup_{y∈B̄_{εr}(x)} A_ε|u_n − u_ε|(y) ≤ (1/|B_ε(0)|) ∫_{D♦} |u_n(z) − u_ε(z)| dz,

for all x ∈ D̄. This yields the result by the dominated convergence theorem, in view of the pointwise convergence of {u_n}_{n=1}^∞ to u_ε. □
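The iteration u_{n+1} = T u_n of Corollary 4.3 is easy to run numerically. The sketch below is a one-dimensional toy discretization (D = (0, 1), with A_ε replaced by a discrete interval average on a uniform grid); all parameter choices here are illustrative and not taken from the text. With the linear boundary data F(x) = x, which is p-harmonic in one dimension for every p, the iterates approach the linear function.

```python
import math

# Toy 1D discretization of (4.1)-(4.3): D = (0,1), boundary data F(x) = x.
# A_eps v is the average over (x - eps, x + eps); the two players optimize
# the average over |y - x| <= eps*r.  Illustrative parameters only.
p, N = 2.5, 1
r = math.sqrt(3 * (p - 2) / (2 * (N + 2)))   # = 0.5 for these p, N
K = 5                                        # eps = K grid cells
h = 0.02
eps = K * h                                  # = 0.1
M = int(K * r)                               # player step eps*r, in cells
ncell = round(1 / h)
padc = 2 * K + M + 1                         # collar so every average stays on-grid
xs = [i * h for i in range(-padc, ncell + padc + 1)]

def d_eps(x):                                # scaled distance (4.2) from R \ (0,1)
    return min(eps, max(0.0, min(x, 1 - x))) / eps

def F(x):
    return x

def T(v):                                    # one step of the operator (4.3)
    a = [sum(v[max(0, i - K): i + K + 1]) / len(v[max(0, i - K): i + K + 1])
         for i in range(len(xs))]            # discrete A_eps v
    out = []
    for i, x in enumerate(xs):
        lo, hi = max(0, i - M), min(len(xs) - 1, i + M)
        s = (a[i] + min(a[lo:hi + 1]) + max(a[lo:hi + 1])) / 3.0
        out.append(d_eps(x) * s + (1 - d_eps(x)) * F(x))
    return out

u = [0.0] * len(xs)                          # any bounded u_0 works (Corollary 4.3)
for _ in range(800):
    u = T(u)
```

Where d_ε = 0 (the collar outside D) the iterate equals F exactly after one step; in the interior, u_n settles on the fixed point of T, the discrete analogue of u_ε.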


We now prove the claimed regularity property:

Theorem 4.4 In the setting of Theorem 4.1, if F is continuous (resp. Lipschitz continuous) on Γ_out ∪ Γ, then u_ε is also continuous (resp. Lipschitz continuous) on its domain D♦.

Proof 1. Since the function Sv is continuous in D̄ for any Borel, bounded v : D♦ → R, it follows that each u_n is also continuous, as a linear combination of Su_{n−1} and the continuous F, by means of the continuous weight function d_ε. Since a uniform limit of continuous functions is continuous, the claim follows.

2. We now argue that the function Sv is always Lipschitz continuous in D̄, with the Lipschitz constant depending on ε, N and ‖v‖_{L∞}. Note first that for every y, ȳ ∈ D + B_{εr}(0) there holds:

|A_ε v(y) − A_ε v(ȳ)| ≤ ( |B_ε(y) △ B_ε(ȳ)| / |B_ε(0)| ) ‖v‖_{L∞} ≤ 2 (V_{N−1}/V_N) (|y − ȳ|/ε) ‖v‖_{L∞},    (4.5)

where we denote V_N = |B₁(0)| and V_{N−1} is, likewise, the volume of the unit ball in R^{N−1}. Indeed, for any y₀ ∈ R^N we have:

|B₁(0) △ B₁(y₀)| = 2 |B₁(0) \ B₁(y₀)| ≤ 2 |{ z + t y₀ ∈ R^N ; |z| = 1, ⟨z, y₀⟩ ≤ 0, t ∈ [0, 1] }| = 2 |y₀| V_{N−1},

and then (4.5) follows through a simple rescaling argument:

|B_ε(y) △ B_ε(ȳ)| = |B_ε(0) △ B_ε(ȳ − y)| = ε^N |B₁(0) △ B₁((ȳ − y)/ε)| ≤ |B_ε(0)| · 2 (|y − ȳ|/ε) (V_{N−1}/V_N).

Since (4.5) establishes the Lipschitz continuity of y ↦ A_ε v(y), it follows that both the infimum and the supremum of this function over B_{εr}(x) are also Lipschitz (in x), with the same Lipschitz constant. Consequently, for every x, x̄ ∈ D̄:

|(Sv)(x) − (Sv)(x̄)| ≤ 6 (V_{N−1}/V_N) (‖v‖_{L∞}/ε) |x − x̄|.    (4.6)


3. Let now F be Lipschitz continuous on Γ, with Lipschitz constant Lip F. We claim that the functions in the sequence:

u_n = T^n u₀,   u₀ ≡ inf F,

must also be Lipschitz continuous, with uniformly controlled Lipschitz constants. More precisely, we will show that:

Lip u_n ≤ L = max{ 2 Lip F, 4 (V_{N−1}/V_N + 1) ‖F‖_{L∞}/ε }   for all n ≥ 0.    (4.7)

Since u_ε is a uniform limit of {u_n}_{n=1}^∞, it will thus follow that u_ε itself is Lipschitz, with Lipschitz constant obeying the same bound as in (4.7). We proceed by induction. For n = 0 the claim is trivial, as u₀ is constant. Assume that (4.7) holds for u_n. Observe that ‖u_n‖_{L∞} ≤ ‖F‖_{L∞} and ‖Su_n‖_{L∞} ≤ 3‖F‖_{L∞}, and also that Lip d_ε = 1/ε. Then, for any x, x̄ ∈ D♦ we have:

|u_{n+1}(x) − u_{n+1}(x̄)| ≤ d_ε(x) |Su_n(x) − Su_n(x̄)| + |d_ε(x) − d_ε(x̄)| · ‖Su_n‖_{L∞} + (1 − d_ε(x)) |F(x) − F(x̄)| + |d_ε(x) − d_ε(x̄)| · ‖F‖_{L∞}
≤ ( L/2 + 2 (V_{N−1}/V_N + 1) ‖F‖_{L∞}/ε ) |x − x̄| ≤ L |x − x̄|,

where in the first bound we have used (4.6). This completes the induction step in the proof of (4.7). The proof of Theorem 4.4 is done. □

Exercise 4.5
(i) Modify the proof of Theorem 4.4 and show that if F is Lipschitz continuous, then u_ε solving (3.21) in Exercise 3.12 must be Lipschitz as well.
(ii) Let F : ∂D → R be a Lipschitz function. Show that the formula:

F(x) = inf_{y∈∂D} ( F(y) + (Lip F) |y − x| )   for x ∈ R^N,

defines a Lipschitz continuous extension of F on R^N, with the same Lipschitz constant Lip F. Modify the above definition to obtain another Lipschitz extension F̃ that still preserves the Lipschitz constant, and also obeys the bound: inf_{∂D} F ≤ F̃(x) ≤ sup_{∂D} F. We note that this is a particular case of the classical Kirszbraun theorem, which ensures the existence of a Lipschitz continuous extension F̃ : H₁ → H₂ of any


F : D → H2 defined on some subset D of a Hilbert space H1 and taking values in another Hilbert space H2 , such that Lip F˜ = Lip F .
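The infimal-convolution formula of Exercise 4.5 (ii) (often called the McShane extension) is easy to test numerically on a finite sample of boundary points. The sketch below uses an arbitrary toy data set on the unit circle; all names and values are hypothetical, and the sample-based `lip` is only the Lipschitz constant of F restricted to the sampled points.

```python
import math, itertools, random

def mcshane_extend(samples, lip):
    """Extension F~(x) = min_y ( F(y) + lip*|y - x| ) over a finite sample.
    `samples` is a dict {point (tuple): value}."""
    def F_ext(x):
        return min(v + lip * math.dist(y, x) for y, v in samples.items())
    return F_ext

# toy boundary data on a few points of the unit circle (hypothetical example)
pts = [(math.cos(t), math.sin(t)) for t in [0.0, 1.3, 2.1, 3.7, 5.0]]
F = {q: q[0] ** 2 - q[1] for q in pts}
lip = max(abs(F[a] - F[b]) / math.dist(a, b)
          for a, b in itertools.combinations(pts, 2))

Fe = mcshane_extend(F, lip)
# the extension agrees with F on the sample ...
assert all(abs(Fe(q) - F[q]) < 1e-12 for q in pts)
# ... and is Lipschitz with the same constant (a minimum of lip-Lipschitz maps)
rng = random.Random(0)
sample = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(30)]
assert all(abs(Fe(a) - Fe(b)) <= lip * math.dist(a, b) + 1e-12
           for a, b in itertools.combinations(sample, 2))
```

The two assertions mirror exactly the two properties the exercise asks for: agreement with F and preservation of the Lipschitz constant.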

4.2 The Basic Convergence Theorem

Ultimately, our goal is to show that solutions u_ε of (4.1) converge uniformly to the unique viscosity solution u of the p-Laplace equation (5.1) with f = 0 and the prescribed continuous boundary data F. This will be accomplished in Sects. 5.2, 5.5 and 5.6, for a quite large family of admissible domains D (the so-called game-regular domains), using the game-theoretical approach. As we shall see below, when F is already the restriction of a given p-harmonic function u on D♦, the same result follows directly from the averaging expansions of u.

Theorem 4.6 Let D, D♦ be as in Definition 3.8. Given p > 2, assume that a (necessarily smooth) bounded function u : int D♦ → R satisfies:

Δ_p u = 0   and   ∇u ≠ 0   in int D♦.    (4.8)

Then, solutions u_ε of (4.1) with the boundary data F = u|_{Γ_out ∪ Γ} and with the parameter r = √(3(p−2)/(2(N+2))), converge to u uniformly as ε → 0. More precisely:

‖u_ε − u‖_{C(D̄)} ≤ Cε,   for all ε < ε̂,    (4.9)

with constants C, ε̂ > 0 independent of ε. The proof uses the following two preliminary lemmas, valid for p > 2:

Lemma 4.7 Let D and D♦ be as in Definition 3.8. Assume that B₁(0) ∩ D̄ = ∅ and let u : int D♦ → R satisfy (4.8). Then, there exist q ≥ 2 and ε̂ ∈ (0, 1) such that the smooth functions v_ε : int D♦ → R, defined for ε < ε̂ by:

v_ε(x) = u(x) + ε|x|^q,


satisfy:

Δ_p v_ε(x) ≥ qε |∇v_ε(x)|^{p−2}   and   ∇v_ε(x) ≠ 0   for all x ∈ D̄.    (4.10)

Proof Observe first the following easy formulas:

∇|x|^q = q|x|^{q−2} x,   ∇²|x|^q = q(q − 2)|x|^{q−4} x ⊗ x + q|x|^{q−2} Id,   Δ|x|^q = q(q − 2 + N)|x|^{q−2}.

Fix x ∈ D̄ and denote a = ∇v_ε(x) and b = ∇u(x). Then, by (4.8) we have:

Δ_p v_ε(x) = |∇v_ε(x)|^{p−2} [ ε Δ|x|^q + (p − 2) ε ⟨ ∇²|x|^q : (a/|a|) ⊗ (a/|a|) ⟩ + (p − 2) ⟨ ∇²u(x) : (a/|a|) ⊗ (a/|a|) − (b/|b|) ⊗ (b/|b|) ⟩ ]
≥ |∇v_ε(x)|^{p−2} qε|x|^{q−2} [ q − 2 + N + (p − 2) ( 1 − 4 (|∇²u(x)|/|∇u(x)|) |x| ) ].    (4.11)

Above, we have also used the bound:

⟨ ∇²|x|^q : (a/|a|) ⊗ (a/|a|) ⟩ = q(q − 2)|x|^{q−2} ⟨ x/|x|, a/|a| ⟩² + q|x|^{q−2} ≥ q|x|^{q−2},

together with the following straightforward estimate:

| (a/|a|) ⊗ (a/|a|) − (b/|b|) ⊗ (b/|b|) | ≤ 4 |a − b| / |b|.    (4.12)
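For completeness (this short computation is not in the text, and it doubles as a hint for Exercise 4.9 (i)): writing â = a/|a| and b̂ = b/|b|, the estimate (4.12) follows from

```latex
\hat a\otimes\hat a-\hat b\otimes\hat b
  =\hat a\otimes(\hat a-\hat b)+(\hat a-\hat b)\otimes\hat b,
\qquad
|\hat a-\hat b|\le\frac{2\,|a-b|}{\max(|a|,|b|)}\le\frac{2\,|a-b|}{|b|},
```

since each of the two rank-one terms in the first identity has norm at most |â − b̂|, which yields the constant 4.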

The result now follows by fixing an exponent q that satisfies:

q ≥ 3 − N + (p − 2) max_{x∈D̄} ( 4 |x| |∇²u(x)| / |∇u(x)| − 1 ),

so that the quantity in the square brackets in the last line of (4.11) is greater than 1, and further taking ε̂ > 0 small enough to have: min_{x∈D̄} |∇v_ε(x)| > 0 for all ε < ε̂. □

The second preliminary result is the comparison principle for the corrected p-harmonic functions v_ε:

Lemma 4.8 In the setting of Lemma 4.7, let p > 2, r = √(3(p−2)/(2(N+2))) and denote:

S_ε v_ε(x) = (1/3) A_ε v_ε(x) + (1/3) inf_{y∈B_{εr}(x)} A_ε v_ε(y) + (1/3) sup_{y∈B_{εr}(x)} A_ε v_ε(y).


Then there exist q ≥ 2 and ε̂ ∈ (0, 1) such that for all ε < ε̂ there holds (4.10), together with:

v_ε(x) ≤ S_ε v_ε(x)   for all x ∈ D̄.    (4.13)

Proof By translation of D we may, without loss of generality, assume that B₁(0) ∩ D♦ = ∅. We now apply Exercise 3.7 (iv) to each v_ε, with the parameters α = 1/3, β = 2/3. Since the averaging operator on the right-hand side of (4.1) equals S_ε, the estimate in (4.10) yields:

v_ε(x) − S_ε v_ε(x) ≤ − (3qε³)/(2(N+2)) + Cε² [ ε ( ‖∇²v_ε‖²_{C⁰(B_ε(x))} + ω_{∇²v_ε}(B_ε(x))² ) / |∇v_ε(x)| + ω_{∇²v_ε}(B_{ε(1+r)}(x)) + ε ],

where C depends only on N and p. Clearly, the right-hand side above is negative for q sufficiently large, provided that ε < ε̂ is sufficiently small. This proves the claim. □

An Analytical Proof of Theorem 4.6
1. By translation of D we may, without loss of generality, assume that B₁(0) ∩ D♦ = ∅. Consider the differences:

φ_ε(x) = v_ε(x) − u_ε(x) = u(x) − u_ε(x) + ε|x|^q,

where the functions v_ε are as in Lemma 4.7, defined for q and ε < ε̂ as in Lemma 4.8. By (4.1) and (4.13), we obtain:

φ_ε(x) ≤ d_ε(x) ( v_ε(x) − S_ε u_ε(x) ) + (1 − d_ε(x)) ( v_ε(x) − u(x) )
≤ d_ε(x) ( S_ε v_ε(x) − S_ε u_ε(x) ) + (1 − d_ε(x)) ( v_ε(x) − u(x) )
≤ d_ε(x) ( (1/3) A_ε φ_ε(x) + (2/3) sup_{y∈B_{ε(1+r)}(x)} φ_ε(y) ) + (1 − d_ε(x)) ( v_ε(x) − u(x) )   for all x ∈ D̄.    (4.14)

Note that φ_ε ∈ C(int D♦), whereas on the open neighbourhood (∂D♦ + B_{1/2}(0)) ∩ D♦ ⊂ Γ_out of ∂D♦ in D♦, we have φ_ε(x) = ε|x|^q. Thus φ_ε ∈ C(D♦) and, consequently, it is possible to define:

M_ε = max_{x∈D♦} φ_ε(x).


2. We first claim that:

∃ x ∈ Γ_out ∪ Γ :   φ_ε(x) = M_ε.    (4.15)

To this end, we will prove that if φ_ε(y) = M_ε for some point y ∈ D with the property that dist(y, ∂D) ≥ ε, then in fact we also have:

∃ x ∈ D :   dist(x, ∂D) < ε   and   φ_ε(x) = M_ε.    (4.16)

Let D₀ be the connected component of the set {x ∈ D; dist(x, ∂D) ≥ ε} containing y. Consider its subset: D_M = {z ∈ D₀; φ_ε(z) = M_ε}. Clearly, D_M is nonempty and closed in D₀. To prove that D_M is open in D₀, let z ∈ D_M. Since d_ε(z) = 1, we get by (4.14):

M_ε = φ_ε(z) ≤ (1/3) A_ε φ_ε(z) + (2/3) sup_{w∈B_{ε(1+r)}(z)} φ_ε(w) ≤ (1/3) A_ε φ_ε(z) + (2/3) M_ε,

which results in: M_ε ≤ A_ε φ_ε(z) ≤ M_ε, implying that φ_ε ≡ M_ε on B_ε(z) ∩ D₀ and proving the openness property of D_M. In conclusion, there must be D_M = D₀, so that (φ_ε)|_{D₀} ≡ M_ε and in particular φ_ε(z) = M_ε for all z ∈ ∂D₀ ⊂ ∂{x ∈ D; dist(x, ∂D) ≥ ε}. By the same argument as above, we get φ_ε ≡ M_ε on the ball B_ε(z), which must therefore contain a point x as in (4.16), proving (4.15).

3. Having established (4.15), we now deduce a bound on M_ε. We distinguish two cases. In the first case, the maximum M_ε = φ_ε(x) is attained at some x ∈ D with dist(x, ∂D) < ε, so that d_ε(x) < 1. Then (4.14) yields:

M_ε = φ_ε(x) ≤ d_ε(x) M_ε + (1 − d_ε(x)) ( v_ε(x) − u(x) ),

immediately implying that: M_ε ≤ v_ε(x) − u(x) = ε|x|^q. In the second case, M_ε = φ_ε(x) for some x ∈ Γ_out, so by u_ε = u on Γ_out, likewise:

M_ε = φ_ε(x) = ε|x|^q.


In either case, there follows the one-sided inequality:

max_{x∈D̄} ( u(x) − u_ε(x) ) ≤ max_{x∈D̄} φ_ε(x) + Cε = M_ε + Cε ≤ 2Cε,

where C = max_{x∈D̄} |x|^q depends on D, p, N and u but not on ε. Applying the same argument to the p-harmonic function (−u) and noting that (−u)_ε = −u_ε, we arrive at:

min_{x∈D̄} ( u(x) − u_ε(x) ) = − max_{x∈D̄} ( u_ε(x) − u(x) ) ≥ −2Cε.

This concludes the proof of the bound (4.9). □

Exercise 4.9

(i) Show that inequality (4.12) is valid for any a, b ∈ R^N \ {0}.
(ii) Modify the proof of Theorem 4.6 for the case of the averaging operator (3.18). Namely, let D and Γ be as in Definition 3.8. Fix p ≥ 2 and consider the solutions u_ε to (3.18), with coefficients α = α_{N,p}, β = β_{N,p} as in (3.12), and where the boundary data F = u|_Γ is given by a smooth function u : D♦ → R satisfying (4.8). Prove that:

‖u_ε − u‖_{C(D̄)} ≤ Cε,

for all small ε > 0 and a constant C depending on N, p, u and D♦ but not on ε.
(iii) Give an example of an open, bounded, connected set D ⊂ R^N, such that for every small ε > 0 the set {x ∈ D; dist(x, ∂D) ≥ ε} is disconnected.

Exercise 4.10 Given a p-harmonic function u : int D♦ → R with nonvanishing gradient (i.e. (4.8) is satisfied), Lemma 4.7 shows that adding the variation x ↦ ε|x|^q shifts u into the region of sub-p-harmonicity. Carry out the calculations below for an alternative construction.

(i) For every A > 0, let g_A : (0, ∞) → (0, ∞) be a continuous, increasing function given by:

g_A(t) = log( A(e^t − 1) + 1 ).

Note that g₁(t) = t. For A < 1, show that: g_A(t) − t ∈ (log A, 0) and g′_A(t) − 1 ∈ (A − 1, 0) for all t > 0. For A > 1, show that: g_A(t) − t ∈ (0, log A) and g′_A(t) − 1 ∈ (0, A − 1) for all t > 0. Moreover, when A < 1, we have:

g″_A(t) = (1 − A) A e^t / ( A(e^t − 1) + 1 )² > 0.

The approximation function gA has been studied in Juutinen et al. (2001) and also in Lindqvist and Lukkari (2010) as a tool for proving the comparison principle for a variation of the ∞-Laplace equation. (ii) Prove that (gA )−1 = g1/A .


(iii) Let u : int D♦ → R be a smooth, positive function with nonvanishing gradient. For every  ∈ (0, 1), define v = g1− ◦ u. It then follows that: log(1 − ) < v (x) − u(x) < 0 for all x ∈ int D♦ and we have:   p−2   !  ! !! (u(x))|∇u(x)|p . g1− p v (x) = g1− (u(x)) (u(x))p u(x) + (p − 1)g1−

Thus, if p u = 0 in int D♦ , we obtain p v (x) > 0. More precisely:  p−2  !  !! p v (x) = (p − 1)g1− (u(x)) g1− (u(x))|∇u(x)|p ≥ (p − 1) |∇v (x)| ≥ valid for all 
x ∈ int D♦.

4.3 Playing Boundary Aware Tug-of-War with Noise

Fix p > 2, r = √(3(p−2)/(2(N+2))) and ε > 0 as in Theorem 4.1.

1. Consider the product probability space (Ω₁, F₁, P₁) on:

Ω₁ = B₁(0) × {1, 2, 3} × (0, 1),

equipped with the σ-algebra F₁, which is the smallest σ-algebra containing all the products D × A × B ⊂ Ω₁, where D ⊂ B₁(0) ⊂ R^N and B ⊂ (0, 1) are Borel, and A ⊂ {1, 2, 3}. The probability measure P₁ is given as the product of: the normalized Lebesgue measure on B₁(0), the uniform counting measure on {1, 2, 3}, and the Lebesgue measure on (0, 1) (see Example A.2):

P₁(D × A × B) = (|D| / |B₁(0)|) · (|A| / 3) · |B|.

For each n ∈ N, we denote by Ω_n = (Ω₁)^n the Cartesian product of n copies of Ω₁, and we let (Ω_n, F_n, P_n) be the product probability space. Further:

(Ω, F, P) = (Ω₁, F₁, P₁)^N


denotes the probability space on the countable product:

Ω = (Ω₁)^N = ∏_{i=1}^∞ Ω₁ = ( B₁(0) × {1, 2, 3} × (0, 1) )^N,

defined by means of Theorem A.12 (compare Example A.13 (ii)). For each n ∈ N, the σ-algebra F_n is identified with the sub-σ-algebra of F consisting of sets of the form F × ∏_{i=n+1}^∞ Ω₁ for all F ∈ F_n. We also define F₀ = {∅, Ω} and observe that the sequence {F_n}_{n=0}^∞ is a filtration of F. The elements of Ω_n are the n-tuples {(w_i, a_i, b_i)}_{i=1}^n, while the elements of Ω are sequences {(w_i, a_i, b_i)}_{i=1}^∞, where w_i ∈ B₁(0), a_i ∈ {1, 2, 3} and b_i ∈ (0, 1) for all i ∈ N.

2. We now describe the strategies σ_I = {σ_I^n}_{n=0}^∞ and σ_II = {σ_II^n}_{n=0}^∞ of Players I and II. For every n ≥ 0, these are the functions:

σ_I^n, σ_II^n : H_n → B_{εr}(0) ⊂ R^N,

defined on the spaces of "finite histories" H_n = R^N × (R^N × Ω₁)^n, assumed to be measurable with respect to the (target) Borel σ-algebra B(B_{εr}(0)) and the (domain) product σ-algebra in H_n.

3. Given an initial point x₀ ∈ D♦ and the strategies σ_I and σ_II, we inductively define a sequence of vector-valued random variables:

X_n^{x₀,σ_I,σ_II} : Ω → D♦   for n = 0, 1, . . . .

For simplicity of notation, we suppress the superscripts x₀, σ_I, σ_II and write X_n instead of X_n^{x₀,σ_I,σ_II}. Firstly, we set X₀ ≡ x₀ and we write h₀ ≡ x₀ ∈ H₀. The sequence {X_n}_{n=0}^∞ will be adapted to the filtration {F_n}_{n=0}^∞ of F, and thus each X_n for n ≥ 1 is effectively defined on Ω_n. Recall that the scaled distance from the complement of D in D♦ is:

d_ε(x) = (1/ε) min{ ε, dist(x, Γ_out) }.

We now set:

X_n((w₁, a₁, b₁), . . . , (w_n, a_n, b_n)) =
  x_{n−1} + εw_n + σ_I^{n−1}(h_{n−1})     for a_n = 1 and d_ε(x_{n−1}) ≥ b_n,
  x_{n−1} + εw_n + σ_II^{n−1}(h_{n−1})    for a_n = 2 and d_ε(x_{n−1}) ≥ b_n,
  x_{n−1} + εw_n                           for a_n = 3 and d_ε(x_{n−1}) ≥ b_n,
  x_{n−1}                                  for d_ε(x_{n−1}) < b_n.    (4.17)


Fig. 4.2 Player I, Player II and random noise in the boundary aware Tug-of-War

The n-th position x_n and the n-th augmented history h_n:

x_n = X_n((w₁, a₁, b₁), . . . , (w_n, a_n, b_n)),
h_n = ( x₀, (x₁, w₁, a₁, b₁), . . . , (x_n, w_n, a_n, b_n) ) ∈ H_n,

are then measurable functions of the argument ((w₁, a₁, b₁), . . . , (w_n, a_n, b_n)) ∈ Ω_n. It is clear that X_n is F_n-measurable and that it takes values in D♦. In this game, the token position is advanced by random shifts εw_n of length at most ε, preceded by the scaled deterministic shifts σ_I^{n−1}(h_{n−1}) and σ_II^{n−1}(h_{n−1}), which are activated according to the value of the equally probable outcomes a_n ∈ {1, 2, 3}. Namely, a_n = 1 corresponds to activating σ_I and a_n = 2 to activating σ_II, whereas a_n = 3 results in not activating either of the deterministic strategies (Fig. 4.2). The token at x_n ∈ Γ_out ∪ Γ with d_ε(x_n) < b_{n+1} is not moved.

4. The variables b_n ∈ (0, 1) serve to set a threshold for reading the value of the game from the prescribed boundary data F. We now define the random variable τ^{x₀,σ_I,σ_II} : Ω → N ∪ {+∞}:

τ^{x₀,σ_I,σ_II}((w₁, a₁, b₁), (w₂, a₂, b₂), . . .) = min{ n ≥ 1; b_n > d_ε(x_{n−1}) },    (4.18)

where, as before, x_n = X_n((w₁, a₁, b₁), . . . , (w_n, a_n, b_n)). We drop the superscript x₀, σ_I, σ_II and write τ instead of τ^{x₀,σ_I,σ_II} if no ambiguity arises. Clearly, τ is F-measurable and, in fact, it is a stopping time relative to the filtration {F_n}_{n=0}^∞, similarly as in Lemma 3.13 (the proof is left as an exercise):
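The dynamics (4.17) and the stopping rule (4.18) can be simulated directly. The sketch below is a hypothetical two-dimensional setup, not taken from the text: D is the unit disk, the boundary data is F(x) = x₁, and naive constant-direction pulls stand in for the strategies σ_I, σ_II. It estimates the expected outcome E[F(X_τ)] by Monte Carlo; by symmetry of this particular setup, the estimate from the center should be near 0.

```python
import math, random

# Monte Carlo sketch of the boundary aware Tug-of-War with noise, following
# (4.17)-(4.18).  Toy example: D = unit disk in R^2, F(x) = x_1, and
# illustrative constant-direction strategies (not optimal play).
p, N = 4.0, 2
r = math.sqrt(3 * (p - 2) / (2 * (N + 2)))
eps = 0.1
rng = random.Random(1)

def d_eps(x):                         # scaled distance (4.2); D = unit disk
    dist_out = max(0.0, 1.0 - math.hypot(*x))
    return min(eps, dist_out) / eps

def noise():                          # uniform point of B_1(0), by rejection
    while True:
        w = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        if w[0] ** 2 + w[1] ** 2 <= 1:
            return w

def play(x0, F):
    """One game run: returns F at the stopping position x_tau."""
    x = x0
    while True:
        b = rng.random()
        if d_eps(x) < b:              # rule (4.18): stop, read the data
            return F(x)
        a = rng.randrange(3)          # equally probable outcomes a_n
        w = noise()
        if a == 0:                    # "Player I" pulls right (toy strategy)
            shift = (eps * r, 0.0)
        elif a == 1:                  # "Player II" pulls left (toy strategy)
            shift = (-eps * r, 0.0)
        else:                         # pure noise move
            shift = (0.0, 0.0)
        x = (x[0] + eps * w[0] + shift[0], x[1] + eps * w[1] + shift[1])

F = lambda x: x[0]
est = sum(play((0.0, 0.0), F) for _ in range(2000)) / 2000
```

Replacing the constant pulls with (near-)optimal strategies would turn `est` into an approximation of the game values u_I, u_II of (4.19).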


Lemma 4.11 In the above setting, we have: P(τ < +∞) = 1.

5. Given now a starting position x₀ ∈ D♦ and the two strategies σ_I and σ_II, the F-measurable vector-valued random variable (X^{x₀,σ_I,σ_II})_{τ^{x₀,σ_I,σ_II}} : Ω → D♦ is:

(X^{x₀,σ_I,σ_II})_{τ^{x₀,σ_I,σ_II}}(ω) = X^{x₀,σ_I,σ_II}_{τ^{x₀,σ_I,σ_II}(ω)}(ω)   P-a.s. in Ω.

Let F : Γ_out ∪ Γ → R be a bounded, Borel function. Define:

u_I(x₀) = sup_{σ_I} inf_{σ_II} E[ F ∘ (X^{x₀,σ_I,σ_II})_{τ^{x₀,σ_I,σ_II}} ],
u_II(x₀) = inf_{σ_II} sup_{σ_I} E[ F ∘ (X^{x₀,σ_I,σ_II})_{τ^{x₀,σ_I,σ_II}} ],    (4.19)

where the sup and inf are taken over all strategies as above. The main result in this context is the following version of Theorem 3.14:

Theorem 4.12 Given a bounded, Borel function F : Γ_out ∪ Γ → R and letting p, r, ε, u_ε be as in Theorem 4.1, we have:

u_I = u_ε = u_II   in D♦.

Proof 1. We drop the sub/superscript ε to ease the notation in the proof. To show that u_II ≤ u in D♦, fix x₀ ∈ D♦ and let η > 0. By Lemma 3.15, there exists a strategy σ_{0,II} for Player II, such that σ^n_{0,II}(h_n) = σ^n_{0,II}(x_n) and that, for every h_n ∈ H_n, we have:

A_ε u( x_n + σ^n_{0,II}(x_n) ) ≤ inf_{y∈B_{εr}(x_n)} A_ε u(y) + η/2^{n+1}   if x_n ∈ D,
σ^n_{0,II}(x_n) = 0   if x_n ∉ D.    (4.20)

Fix a strategy σ_I of Player I and consider the following sequence of random variables M_n : Ω → R:

M_n = (u ∘ X_n) 1_{τ>n} + (F ∘ X_τ) 1_{τ≤n} + η/2^n.


x ,σ ,σ

As usual, we have dropped the superscripts in Xn0 I 0,I I and τ x0 ,σI ,σI I .1 Note that each Mn is well defined; the range of Xn is contained in D♦ that is the domain of u, and on the other hand when τ ≤ n then for some 1 ≤ k ≤ n we have: bk > d (xk−1 ). In this case, xk = xk−1 ∈  and so F (xk ) is well defined. Clearly, each Mn is also Fn -measurable. We now show that {Mn }∞ n=0 is a supermartingale with respect to the filtration {Fn }∞ . We write: n=0     E(Mn | Fn−1 ) = E (u ◦ Xn )1τ >n | Fn−1 + E (F ◦ Xn )1τ =n | Fn−1   η + E (F ◦ Xτ )1τ n = 1τ ≥n 1bn ≤d (xn−1 ) , so that:     E (u◦Xn )1τ >n | Fn−1 = E (u◦Xn )1bn ≤d (xn−1 ) | Fn−1 1τ ≥n

1 More

a.s.

(4.24)

precisely, we write:     η x ,σ ,σ Mn = u ◦ Xn0 I 0,I I 1τ x0 ,σI ,σI I >n + F ◦ Xx0 ,σI ,σ0,I I τ x0 ,σI ,σI I 1τ x0 ,σI ,σI I ≤n + n . 2


Observe that (u ∘ X_n) 1_{b_n ≤ d_ε(x_{n−1})} is F_n-measurable, so by Lemma A.17 (ii) and the defining property (4.20) of σ_{0,II}, we get:

E( (u ∘ X_n) 1_{b_n ≤ d_ε(x_{n−1})} | F_{n−1} ) = ∫_{Ω₁} (u ∘ X_n) 1_{b_n ≤ d_ε(x_{n−1})} dP₁
= d_ε(x_{n−1}) · (1/3) ( A_ε u(x_{n−1}) + A_ε u(x_{n−1} + σ_I^{n−1}) + A_ε u(x_{n−1} + σ_{0,II}^{n−1}) )
≤ d_ε(x_{n−1}) (S_ε u) ∘ X_{n−1} + η/2^n   a.s.    (4.25)

Consequently, (4.24) and (4.25) result in:

E( (u ∘ X_n) 1_{τ>n} | F_{n−1} ) ≤ d_ε(x_{n−1}) ( (S_ε u) ∘ X_{n−1} ) 1_{τ≥n} + η/2^n   a.s.

Together with (4.22) and (4.23), this implies:

E(M_n | F_{n−1}) ≤ ( d_ε(x_{n−1}) ((S_ε u) ∘ X_{n−1}) + (1 − d_ε(x_{n−1})) (F ∘ X_{n−1}) ) 1_{τ≥n} + (F ∘ X_τ) 1_{τ≤n−1} + η/2^{n−1}
= (u ∘ X_{n−1}) 1_{τ>n−1} + (F ∘ X_τ) 1_{τ≤n−1} + η/2^{n−1} = M_{n−1}   a.s.,

where we have used (4.1) for u = u_ε.

3. The supermartingale property being established, we now conclude that: E[M_τ] ≤ E[M₀] = u(x₀) + η, in view of Doob's theorem (Theorem A.34 (ii)) and the boundedness of F and u, implying the uniform boundedness of {M_n}_{n=0}^∞. Finally, since:

F ∘ X_τ = M_τ − η/2^τ,

we get:

u_II(x₀) ≤ sup_{σ_I} E[ F ∘ X_τ ] ≤ sup_{σ_I} E[ M_τ ] ≤ u(x₀) + η.

As η > 0 was arbitrary, we obtain the claimed comparison: u_II(x₀) ≤ u(x₀).


To show the remaining inequality u(x₀) ≤ u_I(x₀), we argue exactly as above. Take σ_II to be an arbitrary strategy, while choosing σ_{0,I} so that σ^n_{0,I}(h_n) = σ^n_{0,I}(x_n) and that, for every h_n ∈ H_n, we have:

A_ε u( x_n + σ^n_{0,I}(x_n) ) ≥ sup_{y∈B_{εr}(x_n)} A_ε u(y) − η/2^{n+1}   if x_n ∈ D,
σ^n_{0,I}(x_n) = 0   if x_n ∉ D.

Then, the following sequence of random variables {M̄_n}_{n=0}^∞ is a submartingale with respect to the filtration {F_n}_{n=0}^∞:

M̄_n = (u ∘ X_n) 1_{τ>n} + (F ∘ X_τ) 1_{τ≤n} − η/2^n,    (4.26)

and the Doob theorem implies u(x₀) ≤ u_I(x₀). Since u_I(x₀) ≤ u_II(x₀) (see Exercise 3.16), we conclude that u_I = u_ε = u_II in D♦. We leave these points as an exercise below. The proof of Theorem 4.12 is done. □

Exercise 4.13
(i) Prove Lemma 4.11.
(ii) Show that the sequence defined in (4.26) is a submartingale with respect to the filtration {F_n}_{n=0}^∞. Deduce that u_ε(x₀) ≤ u_I(x₀).

4.4 A Probabilistic Proof of the Basic Convergence Theorem

In this section, we make use of the game-theoretical interpretation of solutions to (4.1), developed in Sect. 4.3, to give an alternative proof of Theorem 4.6. The previous proof in Sect. 4.2 used solely analytic methods.

A Game-Theoretical Proof of Theorem 4.6
1. Consider the following "negative gradient strategy" σ_{0,II} for Player II, which depends only on the last position x_n of the token in D:

σ^n_{0,II}(h_n) = σ^n_{0,II}(x_n) = −εr(1 − ε²) ∇u(x_n)/|∇u(x_n)|   if x_n ∈ D,   and 0 if x_n ∉ D.

By the analysis in Theorem 3.4 we obtain:

inf_{y∈B_{εr}(x)} u(y) ≥ u( x − εr ∇u(x)/|∇u(x)| ) − Cε³ ≥ u( x + σ^n_{0,II}(x) ) − Cε³   for all x ∈ D̄,    (4.27)


where C above is a universal constant, depending on u, p and D♦, but independent of x ∈ D̄ and of (sufficiently small) ε > 0. In view of Exercise 3.7, we also deduce:

inf_{y∈B_{εr}(x)} A_ε u(y) ≥ inf_{y∈B_{εr}(x)} u(y) + (ε²/(2(N+2))) inf_{y∈B_{εr}(x)} Δu(y) − Cε³,

together with the following estimate:

A_ε u( x + σ^n_{0,II}(x) ) ≤ u( x + σ^n_{0,II}(x) ) + (ε²/(2(N+2))) Δu( x + σ^n_{0,II}(x) ) + Cε³,

which, combined with (4.27), result in:

inf_{y∈B_{εr}(x)} A_ε u(y) ≥ A_ε u( x + σ^n_{0,II}(x) ) + (ε²/(2(N+2))) ( inf_{y∈B_{εr}(x)} Δu(y) − Δu( x + σ^n_{0,II}(x) ) ) − Cε³
≥ A_ε u( x + σ^n_{0,II}(x) ) − Cε³   for all x ∈ D̄.    (4.28)

2. Fix now an initial position x0 ∈ D♦ and fix astrategy σI for Player I. Using the x ,σ ,σ ∞ definition of the process Xn = Xn0 I 0,I I n=0 in (4.17) and of the stopping x ,σ ,σ time τ = τ 0 I 0,I I in (4.18), similarly as in the proof of Theorem 4.12 we compute:   E u ◦ Xτ ∧n | Fn−1     = E (u ◦ Xn )1bn ≤d (xn−1 ) 1τ ≥n | Fn−1 + E (u ◦ Xn )1bn >d (xn−1 ) 1τ ≥n | Fn−1   + E (u ◦ Xτ )1τ δ , D 6

˜ + B¯ δ/8 (0). ˜ = ∂ D

Note that for any two points x0 , y0 and  > 0 which satisfy: ˜ with |x0 − y0 | ≤ x0 , y0 ∈ D

δ 48

and 
 for every z ∈ D Consequently d (z − (x0 − y0 )) = 1 and thus (4.1) yields:  1 A u˜  (z) + inf A u˜  (y) + sup A u˜  (y) , y∈Br (z) 3 y∈Br (z)

u˜  (z) =

where we define a translated copy of each u by:   u˜  (z) = u z − (x0 − y0 ) + η. Calling d˜ (z) =

1 

˜ min{, dist(z, ˜ \ D)}, we thus observe:

1  1 1 inf A u˜  (y) + sup A u˜  (y) u˜  (z) = d˜ (z) A u˜  (z) + 3 3 y∈Br (z) 3 y∈Br (z)   + 1 − d˜ (z) u˜  (z)



˜ . for all z ∈ D (5.10)

˜ ♦ , subject to On the other hand, u similarly solves the same problem (5.10) in D ˜ Note now that u˜  ≥ u in , ˜ because: its own boundary data u on . u˜  (z) − u (z) = u (z − (x0 − y0 )) − u (z) + η ≥ −η + η = 0, where we used (5.8) in view of: z, z−(x0 −y0 ) ∈ δ/3 and |(z−(x0 −y0 ))−z| ≤ δ 3. Consequently, by the comparison result in Corollary 4.2, it follows that u˜  ≥ ˜ ♦ . In particular, we get: u in D u (x0 ) − u (y0 ) = u (x0 ) − u˜  (x0 ) + η ≤ η.

5 Game-Regularity and Convergence: Case p ∈ (2, ∞)

Fig. 5.1 The "mirror strategies" shift the token in the x₀ and y₀ games in parallel

3. Exchanging x₀ with y₀, the same argument as above yields: |u_ε(x₀) − u_ε(y₀)| ≤ η for all x₀, y₀ and ε ∈ J satisfying (5.9). Recalling (5.8), we conclude that for all ε ∈ J with ε < δ/(48(1+r)), there holds:

|u_ε(x₀) − u_ε(y₀)| ≤ η   for all x₀, y₀ ∈ D♦ with |x₀ − y₀| ≤ δ/48.

δ . 48 

We now give an alternative proof of the fact that equicontinuity at the boundary extends to equicontinuity in the interior of the domain D, by a translation-type argument. Given two nearby interior initial positions of the token x₀ and y₀, and given two strategies σ_I, σ_II of the two players for the game starting at x₀, one considers a new "translated" game, starting at y₀ and utilizing the outcomes {(w_i, a_i, b_i)}_{i=1}^∞ of the previous game, with "mirror strategies" σ̄_I, σ̄_II (Fig. 5.1). These mirror strategies are designed so that they shift the token from y_{n−1} to y_n by the vector x_n − x_{n−1}. We proceed playing the two games in parallel, until one of the positions x_n or y_n reaches the neighbourhood of ∂D, where the assumed boundary equicontinuity may be applied. Since |X_n^{x₀,σ_I,σ_II} − X_n^{y₀,σ̄_I,σ̄_II}| = |x₀ − y₀| < δ and consequently |u_ε(x_n) − u_ε(y_n)| ≤ η, the same bound on |u_ε(x₀) − u_ε(y₀)| follows by choosing the strategies σ_I, σ_II optimally.

η 3

  for all x0 , y0 ∈ ∂D + Bδ (0) ∪ out δ with |x0 − y0 | < . 3

(5.11)

Assume that: (1 + r)
n + (F ◦ Xτ x0 )1τ x0 ≤n +

η , 3 · 2n

whereas {Mn 0 }∞ n=0 below is a submartingale: y

y

Mn 0 = (u ◦ Yn )1τ y0 >n + (F ◦ Yτ y0 )1τ y0 ≤n −

η . 3 · 2n

3. Let τδ : → N ∪ {+∞} be a new stopping time, namely:   τδ (w1 , a1 ,b1 ), (w2 , a2 , b2 ), . . . 2δ 2δ , or dist(yn , ∂D) < = min n ≥ 1; dist(xn , ∂D) < 3 3


where xn and yn denote the consecutive token  positions in the games starting,  respectively, at x (w1 , a1 , b1 ), . . . , (wn , an , bn ) and and y , that is: x = X 0 0 n n   yn = Yn (w1 , a1 , b1 ), . . . , (wn , an , bn ) . In view of (5.12), it follows that: τδ < min{τ x0 , τ y0 }, so in particular P(τδ < +∞) = 1. Also, directly from the definition of τδ and the choice of the mirror strategies, we observe: dist(Xτδ , ∂D) < δ,

dist(Yτδ , ∂D) < δ

Xτδ − Yτδ = x0 − y0

and

a.s.

Consequently, by (5.11) we obtain: |u ◦ Xτδ − u ◦ Yτδ | ≤

η 3

a.s.

(5.13)

and further, in view of Theorem A.34 and using (5.12), we get: " η η η # η ≤ E[Mτyδ0 ] + = E u ◦ Yτδ − τ + 3 3 2δ 3 " " # 2η η η η # 2η = E Mτxδ0 − τ + ≤ E[u ◦ Yτδ ] + ≤ E u ◦ Xτδ − τ + 3 2δ 3 2δ 3 2η 2η ≤ E[M0x0 ] + = u (x0 ) + η. ≤ E[Mτxδ0 ] + 3 3 y

u (y0 ) = E[M0 0 ] +

By a symmetric argument, it follows that u (x0 ) ≤ u (y0 ) + η. Finally, for all  ∈ J satisfying (5.12) there holds: |u (x0 ) − u (y0 )| ≤ η

for all x0 , y0 ∈ D♦ with |x0 − y0 |
δ consists of finitely many continuous functions, it follows that {u }∈J is equicontinuous, as claimed.  Exercise 5.4 (i) In the setting of Theorem 5.2, prove that for any x0 ∈ D and any test function ¯ satisfying (5.3), there holds p φ(x0 ) ≥ 0. φ ∈ C2 (D) (ii) Let D, 0 and out be as in Definition 3.8 and let F : out → R be a given continuous function. Fix α ∈ (0, 1] and β = 1 − α. In the setting of Theorem 3.9, assume that a sequence {u }∈J of solutions to (3.18) converges uniformly in D♦ to a continuous limit function u : D♦ → R. Prove that u is a ¯ with p = N +2 − N viscosity solution to the homogeneous problem (5.1) in D, α (and f = 0). (iii) In the same setting of Theorem 3.9, prove that the asymptotic equicontinuity near the boundary implies asymptotic equicontinuity throughout D♦ for


solutions of (3.18). Namely, let F : out → R be a continuous function and assume that the following holds for some sequence {u }∈J that satisfies (3.18). For every η > 0 there exists ˆ ∈ (0, 0 ) and δ > 0 such that for all  ∈ (0, ) ˆ ∩ J: |u (x0 ) − u (y0 )| ≤ η

¯ y0 ∈ ∂D with |x0 − y0 | ≤ δ. for all x0 ∈ D,

Prove that a stronger property of the sequence {u_ε}_{ε∈J} is automatically valid:

For every η > 0, there exist ε̂ ∈ (0, ε₀) and δ > 0 such that: |u_ε(x₀) − u_ε(y₀)| ≤ η for all ε ∈ (0, ε̂) ∩ J and all x₀, y₀ ∈ D♦ with |x₀ − y₀| ≤ δ.    (5.14)

(iv) Prove the following version of the Ascoli–Arzelà theorem for discontinuous functions. Let {u }∈J be an equibounded sequence of functions u : D♦ → R on a compact set D♦ ⊂ RN , and assume that the asymptotic equicontinuity condition (5.14) holds. Then there exists a subsequence of {u }∈J , converging uniformly, as  → 0, to a continuous function u : D♦ → R.

5.2 Game-Regularity and Convergence

In this section, we prove that the coinciding game values u_I = u_II converge, as ε → 0, to the unique p-harmonic function that solves the Dirichlet problem (5.1) with a given continuous boundary data F. We now introduce the regularity condition on ∂D assuring such a result, in the framework of the Tug-of-War game corresponding to the averaging principle (4.1). A point y₀ ∈ ∂D will be called game-regular if, whenever the game starts near y₀, Player I has a strategy for making the game terminate near the same y₀, with high probability. More precisely:

Definition 5.5 Let D, ε₀ and D♦ be as in Definition 3.8. Fix p > 2, r = √(3(p−2)/(2(N+2))), and for each ε ∈ (0, 1/(1+r)) consider the boundary aware Tug-of-War game with noise according to (4.1), as defined in Sect. 4.3.

(i) We say that a point y₀ ∈ ∂D is game-regular if for every η, δ > 0 there exist δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1/(1+r)) such that the following holds. Fix ε ∈ (0, ε̂) and choose an initial token position x₀ ∈ B_δ̂(y₀); there exists then a strategy σ₀,I of Player I with the property that for every strategy σ_II of Player II we have:

P( X_τ ∈ B_δ(y₀) ) ≥ 1 − η,    (5.15)


5 Game-Regularity and Convergence: Case p ∈ (2, ∞)

Fig. 5.2 Game-regularity of a boundary point y0 ∈ ∂D

where X_τ is the shorthand for the random variable (X^{x₀,σ₀,I,σ_II})_{τ^{x₀,σ₀,I,σ_II}}, the sequence of random variables {X_n^{x₀,σ₀,I,σ_II}}_{n=0}^∞ records the positions of the token in the ε-game in (4.17), and τ = τ^{x₀,σ₀,I,σ_II} is the stopping time in (4.18) (Fig. 5.2).

(ii) We say that the domain D is game-regular if every boundary point y₀ ∈ ∂D is game-regular.

Observe that game-regularity is symmetric with respect to the strategies σ_I and σ_II. Namely, game-regularity of y₀ ∈ ∂D is equivalent to the existence of δ̂ ∈ (0, δ) and ε̂ > 0, depending on the prescribed η, δ > 0, with the following property. For every ε ∈ (0, ε̂) and every x₀ ∈ B_δ̂(y₀) there exists a strategy σ₀,II of Player II such that for every strategy σ_I of Player I there holds:

P( (X^{x₀,σ_I,σ₀,II})_{τ^{x₀,σ_I,σ₀,II}} ∈ B_δ(y₀) ) ≥ 1 − η.    (5.16)

Lemma 5.6 If D is game-regular, then δ̂ and ε̂ in Definition 5.5 (i) can be chosen independently of y₀ (i.e., they depend only on the prescribed thresholds η, δ).

Proof For a fixed η, δ > 0 and each y₀ ∈ ∂D, choose δ̂(y₀) < δ/2 and ε̂(y₀) > 0 sufficiently small and such that Definition 5.5 (i) holds for η and δ/2. By compactness, the boundary ∂D is covered by finitely many balls B_δ̂(y₀)(y₀), namely:

∂D ⊂

⋃_{i=1}^{n} B_{δ̂(y_{0,i})}(y_{0,i}),

with y_{0,i} ∈ ∂D, i = 1 … n. Let ε̂ = min_{i=1…n} ε̂(y_{0,i}) and let δ̂ be such that for every y₀ ∈ ∂D, the ball B_δ̂(y₀) ⊂ B_{δ̂(y_{0,i})}(y_{0,i}) for some i = 1 … n. Let ε ∈ (0, ε̂) and choose an initial token position x₀ ∈ B_δ̂(y₀). Then, according to (5.15), there exists σ₀,I such that for every σ_II there holds:

P( (X^{x₀,σ₀,I,σ_II})_{τ^{x₀,σ₀,I,σ_II}} ∈ B_δ(y₀) ) ≥ P( (X^{x₀,σ₀,I,σ_II})_{τ^{x₀,σ₀,I,σ_II}} ∈ B_{δ/2}(y_{0,i}) ) ≥ 1 − η,

proving the claim in view of B_{δ/2}(y_{0,i}) ⊂ B_δ(y₀). □

 

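The quantitative content of Definition 5.5 can be probed numerically. The sketch below is a drastic one-dimensional caricature (interval domain, a "pull toward y₀" strategy, a fair coin, uniform noise: all illustrative choices of ours, not the boundary-aware N-dimensional game itself). It estimates the probability that the token terminates near y₀ = 0, and illustrates how starting closer to y₀ pushes that probability toward 1.

```python
import random

# 1-D caricature of game-regularity: domain (0, 1), boundary point y0 = 0.
# One player always shifts the token by -step (toward y0), the adversary by
# +step; a fair coin decides whose shift applies, and a small symmetric uniform
# noise is added. We estimate P(token exits (0, delta) on the side of y0).
def exit_near_y0(x0, delta, step=0.005, r=0.5, max_steps=100000):
    x = x0
    for _ in range(max_steps):  # cap only as a safety net; exits occur long before
        if not (0.0 < x < delta):
            break
        pull = -step if random.random() < 0.5 else step   # coin flip between players
        x += pull + random.uniform(-r * step, r * step)   # extra symmetric noise
    return x <= 0.0

random.seed(7)
trials = 4000
est = sum(exit_near_y0(x0=0.05, delta=0.2) for _ in range(trials)) / trials
# Martingale heuristic: P(exit near y0) ~ 1 - x0/delta = 0.75 here; choosing the
# starting radius delta_hat <= eta*delta makes this probability >= 1 - eta.
```

The estimate clusters near 1 − x₀/δ, so shrinking δ̂ realizes any prescribed threshold 1 − η, which is the mechanism behind (5.15).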

Theorem 5.7 Assume that D is game-regular. Then, for every continuous function F : Γ ∪ Γ_out → R, the solutions u_ε : D♦ → R of (4.1) converge uniformly, as ε → 0, to a continuous u : D♦ → R that is the unique viscosity solution of (5.1) in D̄.

Proof 1. We will show that every sequence {u_ε}_{ε∈J} of solutions to (4.1) with the prescribed continuous boundary data F : Γ ∪ Γ_out → R, where J ⊂ (0, 1) is a sequence decreasing to 0, has a subsequence that converges uniformly. By Theorem 5.2 and Theorem C.63, it follows that the limit of such a subsequence is the unique viscosity solution u of (5.1), with the boundary condition F_{|∂D}. Thus the entire family {u_ε}_{ε→0} must converge uniformly to u. Since, according to Theorem 4.4 and Corollary 4.2, solutions to (4.1) are continuous and equibounded, it suffices to check their equicontinuity in D♦. Equivalently, in virtue of Theorem 5.3, we will prove the equicontinuity of {u_ε}_{ε∈J} at the boundary. To this end, fix η > 0 and let δ > 0 be such that:

|F(x) − F(y)| ≤ η/3   for all x, y ∈ Γ with |x − y| < δ.    (5.17)

By Lemma 5.6 and the observation after Definition 5.5, it follows that we may choose δ̂ < δ and ε̂ > 0 such that for every ε ∈ (0, ε̂), y₀ ∈ ∂D and x₀ ∈ B_δ̂(y₀), there exists a strategy σ₀,II with the property that for all σ_I we have:

P( X_τ ∈ B_δ(y₀) ) ≥ 1 − η / (6‖F‖_{C⁰(Γ)} + 1),    (5.18)

where above we denoted X_τ = (X^{x₀,σ_I,σ₀,II})_{τ^{x₀,σ_I,σ₀,II}}. Let y₀ ∈ ∂D and x₀ ∈ D̄ satisfy |x₀ − y₀| ≤ δ̂. Then, in virtue of Theorem 4.12, we observe that:

u_ε(x₀) − u_ε(y₀) = u_ε^{II}(x₀) − F(y₀) ≤ sup_{σ_I} E[ F ∘ X_τ − F(y₀) ] ≤ E[ F ∘ X_τ − F(y₀) ] + η/3,

for some fixed strategy σ_I and with the notation X_τ = (X^{x₀,σ_I,σ₀,II})_{τ^{x₀,σ_I,σ₀,II}}. Thus, by (5.18) and (5.17), we further get:

u_ε(x₀) − u_ε(y₀) ≤ ∫_{X_τ ∈ B_δ(y₀)} |F(x_τ) − F(y₀)| dP + ∫_{X_τ ∉ B_δ(y₀)} |F(x_τ) − F(y₀)| dP + η/3
≤ η/3 + 2‖F‖_{C⁰(Γ)} · P( X_τ ∉ B_δ(y₀) ) + η/3 ≤ η.


2. The reverse inequality u_ε(x₀) − u_ε(y₀) ≥ −η is obtained by taking the strategy σ₀,I as in Definition 5.5 (i) for the same thresholds η/(6‖F‖_{C⁰(Γ)} + 1) and δ as in (5.17), and using the fact that:

u_ε(x₀) = u_ε^I(x₀) ≥ inf_{σ_II} E[ F ∘ X_τ ],

where X_τ = (X^{x₀,σ₀,I,σ_II})_{τ^{x₀,σ₀,I,σ_II}}. The proof is done. □

 

We now show that if some boundary point y0 ∈ ∂D is not game-regular, then there exists a continuous F : RN → R, such that the solutions u : D♦ → R of (4.1) do not converge uniformly. Namely, we have the following:

Theorem 5.8 Assume that for every continuous F : R^N → R, the solutions u_ε : D♦ → R of (4.1) converge uniformly on D♦ as ε → 0. Then D is game-regular.

Proof Choose y₀ ∈ ∂D and let η, δ > 0. Consider the data function F : R^N → R, F(y) = −|y − y₀|. By assumption, the uniform limit u of {u_ε} is continuous and satisfies u(y₀) = F(y₀) = 0; hence there exist δ̂ ∈ (0, δ) and ε̂ > 0 such that for every ε ∈ (0, ε̂) and every x₀ ∈ B_δ̂(y₀) there holds:

sup_{σ_I} inf_{σ_II} E[ F ∘ X_τ ] = u_ε(x₀) > −ηδ,

where X_τ = (X^{x₀,σ_I,σ_II})_{τ^{x₀,σ_I,σ_II}}. Consequently, there exists a strategy σ₀,I with the property that for every σ_II there holds: E[ F ∘ (X^{x₀,σ₀,I,σ_II})_{τ^{x₀,σ₀,I,σ_II}} ] > −ηδ. Then, in view of the nonpositivity of F we get:

P( X_τ ∉ B_δ(y₀) ) ≤ −(1/δ) ∫_{X_τ ∉ B_δ(y₀)} F(x_τ) dP < η,

and we obtain:

P( X_τ ∈ B_δ(y₀) ) = 1 − P( X_τ ∉ B_δ(y₀) ) ≥ 1 − η,

as requested in (5.16). This ends the proof of y₀ being game-regular. □


5.3 Concatenating Strategies

In this section we prove that game-regularity of a boundary point y₀ ∈ ∂D is implied by (5.19) below. This condition requires the validity of (5.15) and (5.16) for one fixed η₀ ∈ (0, 1), rather than for all small η > 0.

Theorem 5.9 Let D, ε₀ and D♦ be as in Definition 3.8. Fix p > 2, r = √(3(p−2)/(2(N+2))), and for each ε ∈ (0, ε₀/(1+r)) consider the boundary-aware Tug-of-War game with noise according to (4.1), as defined in Sect. 4.3. For a given boundary point y₀ ∈ ∂D, assume that there exists θ₀ < 1 such that for every δ > 0 there exist δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1/(1+r)) with the following property. Fix ε ∈ (0, ε̂) and choose an initial token position x₀ ∈ B_δ̂(y₀); there exists then a strategy σ₀,II of Player II in the ε-game on D♦ corresponding to (4.1) such that for every strategy σ_I of Player I we have:

P( ∃ n ≤ τ   X_n ∉ B_δ(y₀) ) ≤ θ₀.    (5.19)

Then y0 is game-regular. Under condition (5.19), construction of an optimal strategy realizing the (arbitrarily small) threshold η in (5.16) is carried out by concatenating the m optimal strategies corresponding to the achievable threshold η0 , on m concentric balls centred at y0 , where (1 − η0 )m ≥ 1 − η. Proof of Theorem 5.9 1. Fix η, δ > 0. We want to find ˆ and δˆ such that (5.16) holds. We first observe that for η ≤ 1 − θ0 the claim follows directly from (5.19). In the general case, let m ∈ {2, 3, . . .} be such that: θ0m ≤ η.

(5.20)
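The choice of m in (5.20) is elementary: concatenating m single-scale strategies, each of which lets the token escape its ball with probability at most θ₀, keeps the total escape probability below θ₀^m, so it suffices to take the smallest admissible m with θ₀^m ≤ η. A minimal sketch (the sample values θ₀ = 0.8, η = 0.01 are ours):

```python
import math

# Smallest m in {2, 3, ...} with theta0**m <= eta, as required by (5.20):
# each concatenated stage fails with probability at most theta0, so m stages
# fail simultaneously with probability at most theta0**m.
def stages_needed(theta0, eta):
    return max(2, math.ceil(math.log(eta) / math.log(theta0)))

m = stages_needed(theta0=0.8, eta=0.01)
```

For θ₀ = 0.8 and η = 0.01 this gives m = 21 concentric balls.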

We now define the radii {δ_k}_{k=1}^m and the maximal token shifts {ε_k}_{k=1}^m, and assign the corresponding {δ̂(δ_k)}_{k=1}^m, {ε̂(δ_k)}_{k=1}^m from the assumed condition (5.19). Namely, for every initial token position in B_{δ̂(δ_k)}(y₀) in the Tug-of-War game on D♦ with step less than ε̂(δ_k), there exists a strategy σ₀,II,k guaranteeing that the token exits B_{δ_k}(y₀) (before the game is stopped) with probability at most θ₀. This construction will be achieved through the repeated application of (5.19). We set δ_m = δ and find the quantities δ̂(δ_m) and ε̂(δ_m), with the indicated choice of the strategy σ₀,II,m. Decreasing the value of ε̂(δ_m) if necessary, we then set:

δ_{m−1} = δ̂(δ_m) − (1 + r) ε̂(δ_m) > 0,    ε_{m−1} = min( ε̂(δ_m), δ̂(δ_m) / (2(1 + r)) ) > 0.


In the same manner, having constructed δ_k > 0 and ε_k > 0, we find δ̂(δ_k) and ε̂(δ_k) and define:

δ_{k−1} = δ̂(δ_k) − (1 + r) ε̂(δ_k) > 0,    ε_{k−1} = min( ε_k, ε̂(δ_k), δ̂(δ_k) / (2(1 + r)) ) > 0.

Eventually, we call:

δ̂ = δ̂(δ₁),    ε̂ = min( ε₁, ε̂(δ₁) ).

To show that the condition of game-regularity at y₀ is satisfied, we will concatenate the strategies {σ₀,II,k}_{k=1}^m by switching to σ₀,II,k+1 immediately after the token exits B_{δ_k}(y₀) ⊂ B_{δ̂(δ_{k+1})}(y₀) (Fig. 5.3). This is carried out in the next step.

2. Fix x₀ ∈ B_δ̂(y₀) and let ε ∈ (0, ε̂). Define the strategy σ₀,II of Player II:

σ₀,II^n = σ₀,II^n( x₀, (x₁, w₁, a₁, b₁), …, (x_n, w_n, a_n, b_n) )   for all n ≥ 0,

separately in the following two cases.

Case 1. If x_k ∈ B_{δ₁}(y₀) for all k ≤ n, then we set:

σ₀,II^n = σ₀,II,1^n( x₀, (x₁, w₁, a₁, b₁), …, (x_n, w_n, a_n, b_n) ).

Case 2. Otherwise, define:

k = k(x₀, x₁, …, x_n) = max{ 1 ≤ k ≤ m − 1 ;  ∃ 0 ≤ i ≤ n   x_i ∉ B_{δ_k}(y₀) },
i = min{ 0 ≤ i ≤ n ;  x_i ∉ B_{δ_k}(y₀) },

and set:

σ₀,II^n = σ₀,II,k+1^{n−i}( x_i, (x_{i+1}, w_{i+1}, a_{i+1}, b_{i+1}), …, (x_n, w_n, a_n, b_n) ).

It is not hard to check that each σ₀,II^n : H_n → B̄_{rε}(0) ⊂ R^N is (Borel) measurable. Let σ_I now be any strategy of Player I. We will show that:

P( ∃ n ≤ τ   X_n ∉ B_{δ_k}(y₀) ) ≤ θ₀ · P( ∃ n ≤ τ   X_n ∉ B_{δ_{k−1}}(y₀) )   for all k = 2, …, m.    (5.21)


Fig. 5.3 The concatenated strategy σ0,I I in the proof of Theorem 5.9

where the token position random variables X_n = X_n^{x₀,σ_I,σ₀,II} are defined in (4.17), whereas τ = τ^{x₀,σ_I,σ₀,II} is the stopping time in (4.18). Observe that (5.21) implies:

P( X_τ ∉ B_δ(y₀) ) ≤ P( ∃ n ≤ τ   X_n ∉ B_{δ_m}(y₀) ) ≤ θ₀^{m−1} · P( ∃ n ≤ τ   X_n ∉ B_{δ₁}(y₀) ) ≤ θ₀^m,

for δ̂ = δ̂(δ₁). This will end the proof of the result by (5.20).

3. In order to show (5.21), we denote:

Ω̃ = { ∃ n ≤ τ   X_n ∉ B_{δ_{k−1}}(y₀) } ⊂ Ω.

Without loss of generality, one may assume that P(Ω̃) > 0. Indeed, since:

P( ∃ n ≤ τ   X_n ∉ B_{δ_k}(y₀) ) ≤ P( ∃ n ≤ τ   X_n ∉ B_{δ_{k−1}}(y₀) ),

it follows that if P(Ω̃) = 0 then both sides of (5.21) equal 0, and thus the inequality holds. For P(Ω̃) > 0, we define the probability space (Ω̃, F̃, P̃) by:

F̃ = { A ∩ Ω̃ ;  A ∈ F }   and   P̃(A) = P(A) / P(Ω̃)   for all A ∈ F̃.


Define also the measurable space (Ω_fin, F_fin) by setting Ω_fin = ⋃_{n=1}^∞ Ω_n and by taking F_fin to be the smallest σ-algebra containing ⋃_{n=1}^∞ F_n (see Sect. 4.3 for the definition of the probability spaces (Ω_n, F_n, P_n)). We now consider the random variables:

Y₁ : Ω̃ → Ω_fin,   Y₁( {(w_n, a_n, b_n)}_{n=1}^∞ ) = {(w_n, a_n, b_n)}_{n=1}^{τ_k},
Y₂ : Ω̃ → Ω,      Y₂( {(w_n, a_n, b_n)}_{n=1}^∞ ) = {(w_n, a_n, b_n)}_{n=τ_k+1}^∞,

where τ_k is the following stopping time on Ω̃:

τ_k = min{ n = 1, 2, … ;  X_n ∉ B_{δ_{k−1}}(y₀) }.

We claim that Y₁ and Y₂ are independent, that is:

P̃( {Y₁ ∈ A₁} ∩ {Y₂ ∈ A₂} ) = P̃( Y₁ ∈ A₁ ) · P̃( Y₂ ∈ A₂ )   for all A₁ ∈ F_fin, A₂ ∈ F.

To this end, by the definition of the σ-algebras F_fin and F, it suffices to check (see Exercise 5.10 (i)) that for every s, t ∈ N we have:

P(Ω̃) · P( {Y₁ ∈ A₁} ∩ {Y₂ ∈ A₂} ) = P( Y₁ ∈ A₁ ) · P( Y₂ ∈ A₂ )   for all A₁ ∈ F_s, A₂ ∈ F_t.    (5.22)

Indeed, computing the probabilities:      bi ≤ d (xi−1 ) , P Y1 ∈ A1 = Ps A1 ∩ {τk = s} ∩ 



i 0, depending only on N, p, u and D, such that: u˜  − uC0 (D ˜ ) ≤ C, for all small  > 0.

5.4 The Annulus Walk Estimate


Fix x₀ ∈ D̃ ∩ B_{R₂}(0) and a sufficiently small ε < ε̃₀. Using Theorem 4.12 for the game value ũ_ε^{II} = ũ_ε, we see that there exists a strategy σ̃₀,II so that for every σ̃_I:

E[ u ∘ X̃_τ̃ ] − u(x₀) ≤ 2Cε.    (5.28)

We now estimate:

E[ u ∘ X̃_τ̃ ] − u(x₀) = ∫_{X̃_τ̃ ∈ B̄_{R₃−ε}(0)} u(X̃_τ̃) dP + ∫_{X̃_τ̃ ∈ B_{R₁+ε}(0)} u(X̃_τ̃) dP − u(x₀)
≥ P( X̃_τ̃ ∈ B̄_{R₃−ε}(0) ) · v(R₃ − ε) + ( 1 − P( X̃_τ̃ ∈ B̄_{R₃−ε}(0) ) ) · v(R₁ − (1+r)ε) − v(R₂),

where we have used the fact that v in (5.26) is always an increasing function. Recalling (5.28), this implies:

P( X̃_τ̃ ∈ B̄_{R₃−ε}(0) ) ≤ ( v(R₂) − v(R₁ − (1+r)ε) + 2Cε ) / ( v(R₃ − ε) − v(R₁ − (1+r)ε) ).    (5.29)

The proof of (5.25) is now complete, in view of the continuity of the right-hand side function above with respect to ε. □

By inspecting the quotient in the right-hand side of (5.25) in Exercise 5.14, we immediately obtain:

Corollary 5.12 The function v in (5.26) has the following properties, for any fixed 0 < R₁ < R₂:

(i) lim_{R₃→∞} ( v(R₂) − v(R₁) ) / ( v(R₃) − v(R₁) ) = 1 − (R₂/R₁)^{(p−N)/(p−1)} for 2 < p < N, and = 0 for p ≥ N,

(ii) lim_{M→∞} ( v(MR₁) − v(R₁) ) / ( v(M²R₁) − v(R₁) ) = 1/2 for p = N, and = 0 for p > N.

Consequently, the estimate (5.25) can be replaced by:

P( X̃_τ̃ ∈ B̄_{R₃−ε}(0) ) ≤ θ₀    (5.30)


valid for any θ₀ > 1 − (R₂/R₁)^{(p−N)/(p−1)} if p ∈ (2, N), and any θ₀ > 0 if p ≥ N, upon choosing R₃ sufficiently large with respect to R₁ and R₂. When p > N, the same bound is valid when setting R₂ = MR₁, R₃ = M²R₁ with the ratio M large enough.

The results of Theorem 5.11 and Corollary 5.12 are invariant under scaling. More precisely, we have:

Corollary 5.13 The bounds (5.25) and (5.30) remain true if we replace R₁, R₂, R₃ by ρR₁, ρR₂, ρR₃ and ε̂ by ρε̂, for any ρ > 0.

Proof For ρ > 0, consider the annulus ρD̃ = B_{ρR₃}(0) \ B̄_{ρR₁}(0) and the boundary-aware Tug-of-War game starting from the position ρx₀ ∈ ρD̃ ∩ B_{ρR₂}(0). Let ρε < ρε̂ and let σ̃₀,II be the strategy of Player II in the statement of Theorem 5.11. Then the strategy σ̃₀,II,ρ defined by:

σ̃₀,II,ρ^n( ρx₀, (ρx₁, ρw₁, a₁, b₁), …, (ρx_n, ρw_n, a_n, b_n) ) = σ̃₀,II^n( x₀, (x₁, w₁, a₁, b₁), …, (x_n, w_n, a_n, b_n) )

gives:

P( X̃^ρ_{τ̃^ρ} ∈ B̄_{ρR₃−ρε}(0) ) = P( X̃_τ̃ ∈ B̄_{R₃−ε}(0) )

for any σ̃_I that naturally induces σ̃_{I,ρ} of Player I in the ρε-step game on ρD̃. The estimate (5.25) is then equivalent to the claimed bound on ρD̃. □

Exercise 5.14 Verify the statements (i) and (ii) in Corollary 5.12.
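The two limits of Corollary 5.12 can be checked numerically. The explicit form of v from (5.26) is not reproduced in this excerpt; the sketch below assumes the standard radial p-harmonic profile v(t) = t^{(p−N)/(p−1)} for p ≠ N and v(t) = log t for p = N (any increasing affine reparametrization yields the same quotients), which is consistent with the limits stated in the corollary.

```python
import math

# Quotient (v(b) - v(a)) / (v(c) - v(a)) appearing in (5.25) and Corollary 5.12.
def ratio(v, a, b, c):
    return (v(b) - v(a)) / (v(c) - v(a))

# (i) Case 2 < p < N: the limit as R3 -> infinity is 1 - (R2/R1)**((p-N)/(p-1)).
p, N = 3.0, 5.0
ex = (p - N) / (p - 1.0)          # = -1 here, a negative exponent
v = lambda t: -(t ** ex)          # sign chosen so that v is increasing
r1 = ratio(v, 1.0, 2.0, 1e8)      # R1 = 1, R2 = 2, R3 very large
lim1 = 1.0 - 2.0 ** ex            # = 0.5

# (ii) Case p = N: v = log, and the quotient over R1, M*R1, M**2*R1 equals 1/2.
r2 = ratio(math.log, 1.0, 1e6, 1e12)
```

Numerically r1 ≈ 0.5 = lim1 and r2 = 1/2, matching statements (i) and (ii).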

5.5 Sufficient Conditions for Game-Regularity: Exterior Cone Property

The purpose of this section is to show the following regularity result.

Theorem 5.15 Assume that D, D♦ and p, r are as in Definition 5.5. Let y₀ ∈ ∂D have the exterior cone property, that is: there exists a finite cone C ⊂ Γ_out with the tip at y₀. Then y₀ is game-regular.


Proof With the help of Theorem 5.11, we will show that the assumption of Theorem 5.9 is satisfied, with probability θ₀ < 1 depending only on p, N and the angle of the external cone at y₀. The exterior cone condition implies that there exists a constant R₁ > 0 such that for all ρ > 0 there is z₀ ∈ C satisfying:

|z₀ − y₀| = ρ(1 + R₁)   and   B_{2ρR₁}(z₀) ⊂ C ⊂ R^N \ D.

Define R₂ = 2 + R₁ and let R₃ > R₂ be such that:

( v(R₂) − v(R₁) ) / ( v(R₃) − v(R₁) ) ≤ θ₀(R₁, R₂, p, N) < 1,

as in Corollary 5.12 (i). We set:

δ̂ = δ / (1 + R₁ + R₃),    (5.31)

and let ε̂ >
0 be as in Theorem 5.11, applied to the annuli with radii ρR1 , ρR2 , ρR3 , centred at z0 (Fig. 5.5) and, for a given x0 ∈ Bδˆ (y0 ) and  < ˆ , let σ0,I I = σ˜ 0,I I be the strategy of Player II ensuring validity of the bound (5.30). For a given strategy σI of Player I, we claim that: x ,σ ,σ ω ∈ ; ∃n ≤ τ x0 ,σI ,σ0,I I (ω) Xn0 I 0,I I (ω) ∈ Bδ (y0 ) x ,σ ,σ ⊂ ω ∈ ; X˜ τ˜ 0 I 0,I I (ω) ∈ BρR3 − (z0 ) ,

(5.32)

˜ ∞ where {Xn }∞ n=0 and τ are the random variables in (4.17), (4.18), and {Xn }n=0 and τ˜ are as in the proof of Theorem 5.11, corresponding to the token positions in the ˜ annulus D. Indeed, if ω belongs to the event set in the left-hand side of (5.32), then there must be τ˜ (ω) ≤ n and Xi (ω) = X˜ i (ω) ∈ B¯ ρR1 + (z0 ) for all i ≤ τ˜ (ω). Thus ω must belong to the right hand side set, proving thus (5.32). The final claim follows by (5.30) and by applying Theorem 5.9.   Note that the constant θ0 depends only on the cone C at y0 , on p and N . In particular, when p ≥ N , the assertions in the proof of Theorem 5.15 are true with any θ0 > 0, whereas for p ∈ (2, N) they hold with any θ0 > θ0 (C, p, N) ∈ (0, 1).

122

5 Game-Regularity and Convergence: Case p ∈ (2, ∞)

Fig. 5.5 The concentric balls and the annuli in the proof of Theorem 5.15, unscaled to δˆ = 1

5.6 Sufficient Conditions for Game-Regularity: p > N

In this section, we show that the geometrical notions of regularity of ∂D become irrelevant for large p. Namely, every boundary point y₀ ∈ ∂D is always game-regular, according to Definition 5.5, when p > N. This finding is consistent with the well-known result that points in R^N have positive p-capacity when p > N (see Sect. C.7 in Appendix C for details).

Theorem 5.16 Let D, D♦ and p, r be as in Definition 5.5. Assume that the harmonicity exponent p is greater than the dimension N of D:

p > N.    (5.33)

Then every boundary point y0 ∈ ∂D is game-regular.

We split the proof of Theorem 5.16 into a sequence of lemmas regarding the following random variables, where the condition of the boundary awareness has been suppressed. Namely, for n = 0, 1, …, the random variables Z_n : Ω → R^N on the probability space (Ω, F, P) are defined recursively, for a given step size ε and given strategies σ_I, σ_II:

Z_n( (w₁, a₁, b₁), …, (w_n, a_n, b_n) ) =
  z_{n−1} + w_n + σ_I^{n−1}(h_{n−1})    for a_n = 1,
  z_{n−1} + w_n + σ_II^{n−1}(h_{n−1})   for a_n = 2,
  z_{n−1} + w_n                         for a_n = 3.



We set a constant Z₀ ≡ z₀ and let the n-th history h_n ∈ H_n = R^N × (R^N × Ω₁)^n be given by:

h_n = ( z₀, (z₁, w₁, a₁, b₁), …, (z_n, w_n, a_n, b_n) ),

denoting by z_k the k-th position Z_k((w₁, a₁, b₁), …, (w_k, a_k, b_k)).

Lemma 5.17 Let D, D♦ and p, r be as in Definition 5.5. Assume that, for a boundary point y₀ ∈ ∂D, the following holds. There exists η₁ > 0 such that for every δ ≪ 1 there is δ̂ ∈ (0, 1) and ε̂ ∈ (0, δ/(2+r)) with the property that for any ε ∈ (0, ε̂) and z₀ ∈ B_δ̂(y₀), Player II can choose a strategy σ₀,II satisfying:

P( min{ n ≥ 0 ;  Z_n ∈ B_{ε/2}(y₀) } < min{ n ≥ 0 ;  Z_n ∉ B_δ(y₀) } ) ≥ η₁,    (5.34)

for any strategy σ_I chosen by Player I. Then y₀ ∈ ∂D is game-regular.

Proof In virtue of Theorem 5.9, it is enough to show the existence of a positive lower bound η₀ > 0 on the probability that the token remains within a prescribed distance from the boundary point y₀ until the game is stopped. More precisely, we will show that for every δ > 0 there exist δ̂ ∈ (0, 1) and ε̂ ∈ (0, 1/(1+r)) satisfying the following condition. Fix ε ∈ (0, ε̂) and choose an initial token position x₀ ∈ B_δ̂(y₀); there exists a strategy σ₀,II of Player II in the boundary-aware Tug-of-War game with step size ε, such that for every strategy σ_I there holds:

P( ∀ n ≤ τ^{x₀,σ_I,σ₀,II}   X_n^{x₀,σ_I,σ₀,II} ∈ B_δ(y₀) ) ≥ η₀.    (5.35)

Without loss of generality, we may assume that δ ≪ 1. Consider:

τ̂ = min{ n ≥ 1 ;  Z_{n−1} ∈ B_{ε/2}(y₀) and b_n ≥ 1/2,  or  Z_{n−1} ∉ B_δ(y₀) },

which is an F-measurable stopping time relative to the filtration {F_n}_{n=0}^∞. The fact that P(τ̂ < +∞) = 1 can be proved exactly as in Lemma 3.13. Observe that if we take z₀ = x₀, then:

{ ∀ n ≤ τ   X_n ∈ B_δ(y₀) } ⊃ { Z_{τ̂−1} ∈ B_{ε/2}(y₀) and b_τ̂ ≥ 1/2 }.

Indeed, if ω ∈ Ω belongs to the set in the right-hand side, then Z_n(ω) ∈ B_δ(y₀) for all n ≤ τ̂(ω) and d_ε(Z_{τ̂−1}) < 1/2 ≤ b_τ̂. Then, there must be τ(ω) ≤ τ̂(ω), so ω belongs to the set in the left-hand side as well.



Further, we have:

{ Z_{τ̂−1} ∈ B_{ε/2}(y₀) and b_τ̂ ≥ 1/2 } ⊃ { τ̂ = 1 + min{ n ≥ 0 ;  Z_n ∈ B_{ε/2}(y₀) } },

and then, by (5.34):

P( τ̂ = 1 + min{ n ≥ 0 ;  Z_n ∈ B_{ε/2}(y₀) } ) = (1/2) · P( min{ n ≥ 0 ;  Z_n ∈ B_{ε/2}(y₀) } < min{ n ≥ 0 ;  Z_n ∉ B_δ(y₀) } ) ≥ η₁ / 2,

which completes the proof of (5.35). □

The building block of the proof of (5.34) will be the annulus walk estimate in Theorem 5.11. Lemma 5.18 Let p, N, r be as in Definition 5.5. There exists a constant p¯ < a sequence {ki }∞ i=1 , depending only on p and N , satisfying:

1 2

and

k1 > 2(1 + r), k2 > k1 + 1 + r,

(5.36)

k2 (k2 + 1) −1 k2 + 1 + r < k3 ≤ k1 and: ki+1 > ki + 1 + r, and

ki ki+2 − r = k1 k3 − r

ki ki+1 + 1 , ≥ k1 k2 + 1

(5.37)

for all i ≥ 1.

Moreover, for any i ≥ 1, any  > 0 and any z0 satisfying ki  < |z0 − y0 | < ki k1 (k2 + 1), there exists a strategy σ0,I I,i+1 of Player II such that for every σI there holds:  P min{n ≥ 0; Zn ∈ B¯ (ki+2 −r) (y0 )} (5.38)  < min{n ≥ 0; Zn ∈ B(ki +1) (y0 )} ≤ p, ¯ in the -walk {Zn }∞ n=0 .



Proof 1. Define first the three positive constants κ1 , κ2 , κ3 , depending only on p and N , with the following properties: κ1 > 2(1 + r), κ2 > κ1 + 1 + r 2

p−1 p−N

p−1

2 p−N (κ2 + 1) + 1 + r
2(1 + r) and then κ2 sufficiently large, the first three p−1

conditions above are easily achieved. The last condition follows from: 2 p−N (κ2 + 1) + r < κ2 (κκ21+1) − 1, valid in view of the third inequality in (5.39). Consider the annulus walk as in Theorem 5.11, with the radii R1 = κ1 , R2 = κ2 + 1, R3 = κ3 − r and the boundary thickness ˜0 = κ31 < R21 . Observe that: v(R2 ) − v(R1 ) 1 < , v(R3 ) − v(R1 ) 2 because of the last condition in (5.39): 

p−N

p−N

2(κ2 + 1) p−1 − κ1p−1

 p−1

p−N

p−1

< 2 p−N (κ2 + 1) < κ3 − r.

It now follows from (5.25), that there exists ζˆ < 1 with the property that for every z0 ∈ BR2 (y0 ) \ B¯ R1 (y0 ) and every ζ ≤ ζˆ , Player II has a strategy so that regardless of the strategy of Player I, there holds:   1 P Z˜ τ˜ ∈ B¯ R3 −ζ (y0 ) ≤ p¯ < . 2

(5.40)

Here, p¯ depends again only on p and N , whereas {Z˜ n }∞ n=0 and τ˜ correspond to the token positions and the stopping time in the boundary aware Tug-of-War ˜ = BR3 (y0 ) \ B¯ R1 (y0 ), thickened by the boundary game played on the annulus D of size ˜0 . 2. Let ρ = 1ˆ > 1 and define k1 , k2 , k3 (again depending only on p and N ) by: ζ

k1 = ρκ1 ,

k2 + 1 = ρ(κ2 + 1),

k3 − r = ρ(κ3 − r).

(5.41)



By Corollary 5.13, used with the rescaling ρ, it follows that for every ζ ≤ 1 and every starting position z0 ∈ Bk2 +1 (y0 ) \ B¯ k1 (y0 ) in the ζ -walk {Zn }∞ n=0 , Player II has a strategy so that for every strategy of Player I there holds:   ¯ P min{n ≥ 0; Zn ∈ B¯ k3 −r (y0 )} < min{n ≥ 0; Zn ∈ Bk1 +ζ (y0 )} ≤ p. (5.42) ρ Indeed, the event in the left-hand side above is a subset of the event {Z˜ τ˜ ρ ∈ ρ ∞ ρ B¯ k3 −r (y0 ) = B¯ ρR3 (y0 )} where now {Z˜ n }n=0 and τ˜ correspond, as in the proof ˜ ρ = BρR3 (y0 ) \ B¯ ρR1 (y0 ) with the step size of Corollary 5.13, to the game on D ρ ¯ according ζ = ρ. The probability of {Z˜ τ˜ ρ ∈ B¯ ρR3 (y0 )} is clearly bounded by p, to (5.40). Observe that k1 , k2 , k3 in (5.41) satisfy (5.36) (we propose to check this statement in Exercise 5.21). For every i ≥ 4, we define:

⎧ ⎪ α m−1 − 1 ⎪ ⎨ α m−1 k2 + r for i = 2m . α−1 ki = m−1 −1 α ⎪ ⎪ ⎩ α m−1 k3 + r for i = 2m + 1 α−1 where: α=

k3 − r > 1. k1

(5.43)

We, likewise, leave it as an exercise to prove (5.37). For any fixed i ≥ 1 and  > 0 we presently use the scaling property in Corollary 5.13, with the scaling factor ρ = kk1i . The claimed property (5.38) is now established in view of (5.42) and of the third condition in (5.37). Indeed, it suffices to write  = ρζ and apply (5.42) to ζ = kk1i ≤ 1 (Fig. 5.6).  

Fig. 5.6 Application of Theorem 5.11 in the proof of (5.38)



Fix δ " 1 and, recalling the definition of α in (5.43), set: ˆ =

δ δ < k3 − r 2+r

and

δˆ =

δ . α

(5.44)

For a given  ∈ (0, ), ˆ z0 ∈ Bδˆ (y0 ), we now define the strategy σ0,I I of Player II, and prove that (5.34) holds with some η1 > 0 (depending only on p and N ). Firstly, using the fact that the sequence {ki − r}∞ i=1 increases to +∞, we set:   . m = max i ≥ 2; δ ≥ (ki+1 − r) . Note that in view of the third equality in (5.37) there holds: |z0 − y0 |
1 (Case 3). Upon entering A1 , the strategy switches to the “do nothing” strategy (Case 2). Since: |z0 − y0 | < km  < (km + 1) ≤

km−1 (k2 + 1), k1

by the second inequality in (5.37), and since: Ai ⊂ B ki−1 k1

(k2 +1)

(y0 ) \ B¯ (ki−1 +1) (y0 )

for all i ≥ 2,

(5.45)

Cases 1 through 3 cover all possible scenarios. n N It is also not hard to check that each σ0,I I : Hn → Br (0) ⊂ R is (Borel) measurable, as required. Before completing the proof of Theorem 5.16, we make another simple observation, whose proof is similar to that of Lemma 3.13, to the effect that the “do nothing” strategy of Case 2 above advances the token, once in the ring A1 , to a position in the final ball B/2 (y0 ), with positive probability that is bounded below by a constant depending only on p and N . In the application below, we will take R = k1 + 1. ¯ 0 < 1, depending only Lemma 5.19 Let R > 1. There exists a probability bound Q on p, N, R, such that for every z0 ∈ BR (0) and any strategies of Player I and Player II, there holds:   ¯ 0. P min{n ≥ 0; Zn ∈ B/2 (0)} > min{n ≥ 0; Zn ∈ BR (0)} ≤ Q



Proof Without loss of generality, we may assume that z0 = |z0 |e1 . Consider the following set Dadv of “advancing” random outcomes (Fig. 5.8): . Dadv =



  − ,− × B N−1 (0) ⊂ B (0), 16R 2 4

 where B N−1 (0) is the (N − 1)-dimensional ball with centre 0 and radius 16R . 16R Note that since |z0 | < R, it takes n ≤ #4R$ moves by any vectors {wi }ni=1 in Dadv , to achieve |Zn , e1 | < 4 . On the other hand, the sum of lengths of these  advancements in the direction orthogonal to e1 is smaller than #4R$ 16R ≤ 4 . Hence, for all w1 , . . . , w4R ∈ Dadv we have (Fig. 5.9):

z0 + 

∃n ≤ #4R$

n i=1

wi ∈



  −1 × BN − , (0) ⊂ B 2 (0).  4 4 4

Consequently:   P min{n ≥ 0; Zn ∈ B/2 (0)} < min{n ≥ 0; Zn ∈ BR (0)}  #4R$ ≥ P Dadv × {3} × (0, 1) ×



i=#4R$+1

Fig. 5.8 The set of “advancing” outcomes in the proof of Lemma 5.19

Fig. 5.9 The path of consecutive token positions advancing to B/2 (0)

 1



   #4R$ = P1 Dadv × {3} × (0, 1) =

¯0 = 1 − proving the claim with: Q



VN−1 12(16R)N−1 VN

#4R$

VN−1 12(16R)N−1 VN

#4R$ ,

 

< 1.

Applying Lemma 5.19 to R = k1 + 1, we obtain the following bound: Corollary 5.20 For any z0 ∈ B(k1 +1) (y0 ) we have:   P min{n ≥ 0; Zn ∈B/2 (y0 )} < min{n ≥ 0; Zn ∈ B(k1 +1) (y0 )} ¯0 = ≥1−Q



VN −1 12(16(k1 + 1))N −1 VN

4(k1 +1)

(5.46) ,

where the expression in the right-hand side above depends only on p and N . Proof of Theorem 5.16 1. In virtue of Lemma 5.17, it suffices to prove the estimate (5.34), where we fixed δ < 0 together with  ∈ (0, ) ˆ and z0 ∈ Bδˆ (y0 ) (according to ˆ and δˆ chosen as in (5.44) and for the strategy σ0,I I defined above). To this end, we first remark that (5.34) trivially holds when z0 ∈ B(k1 −r) (y0 ), because by (5.46) we get:   P min{n ≥ 0; Zn ∈ B 2 (y0 )} < min{n ≥ 0; Zn ∈ Bδ (y0 )}   ¯ 0. ≥ P min{n ≥ 0; Zn ∈ B 2 (y0 )} < min{n ≥ 0; Zn ∈ B(k1 +1) (y0 )} ≥ 1 − Q

Otherwise, let 1 ≤ i ≤ m − 2 be such that |z0 − y0 | ∈ ((ki − r), (ki+1 − r)], or set i = m − 1 in case of |z0 − y0 | > (km−1 − r). In this step we will show that:   P min{n ≥ 0; Zn ∈ B 2 (y0 )} < min{n ≥ 0; Zn ∈ Bδ (y0 )}  . ≥ P j = min{n ≥ 0; Zn ∈ Ai } < min{n ≥ 0; Zn ∈ Ai+2 }  and min{n ≥ j ; Zn ∈ B 2 (y0 )} < min{n ≥ j ; Zn ∈ Ai+1 }    ¯i , ≥ 1 − p¯ 1 − Q (5.47) where we define, for all i ≥ 1:   . ¯i = sup sup P min{n ≥ 0; Zn ∈ B 2 (y0 )} > min{n ≥ 0; Zn ∈ Ai+1 Q z0 ∈Ai σI

(5.48) The first inequality in (5.47) is straightforward from the inclusion between the events in the left and right-hand sides.



To prove the second inequality, we proceed as in the proof of Theorem 5.9. Firstly, observe that if j = 0 then z0 ∈ Ai and (5.47) is simply a consequence of the definition in (5.48). To treat the case j > 0, denote: ˜ = ∩ min{n ≥ 0; Zn ∈ Ai } < min{n ≥ 0; Zn ∈ Ai+2 } . ˜ P) ˜ F, ˜ in: and consider the probability space ( , ˜ = {C ∩ ; ˜ C ∈ F} F

and

˜ P(C) =

P(C) ˜ for all C ∈ F, ˜ P( )

where by (5.38) we assure that: ˜ ≥ 1 − p¯ > 0. P( )

(5.49)

 Define also the measurable space ( f in , Ff in ) by setting fin = ∞ n=1 n and by taking Ff in to be the smallest σ -algebra containing ∞ F . n=1 n There, ˜ with j = min{n ≥ 1; Zn ∈ Ai } defined as in (5.47) is a stopping time on ∞ ˜ respect to the filtration {C ∩ ; C ∈ Fn }n=0 . We further consider the random variables: ˜ → f in Y1 : ˜ → Y2 :

  . j Y1 {(wn , an , bn )}∞ n=1 = {(wn , an , bn )}n=1 ,   . ∞ Y2 {(wn , an , bn )}∞ n=1 = {(wn , an , bn )}n=j +1 ,

(5.50)

and note that they are independent (see Exercise 5.21 (iii)). Applying Lemma A.21 to Y1 , Y2 and to the measurable indicator function F : f in × → R (see Exercise 5.21 (iv)):    F {(wn , an , bn )}sn=1 , {(wn , an , bn )}∞ n=s+1 = 1 min{n>s; Z

n ∈B  2

,

(y0 )}s; Zn ∈Ai+1 }

we obtain:  . P j = min{n ≥ 0; Zn ∈ Ai } < min{n ≥ 0; Zn ∈ Ai+2 } and min{n ≥ j ; Zn ∈ B 2 (y0 )} < min{n ≥ j ; Zn ∈ Ai+1 } ˆ ˆ   ˜ ˜ · ˜ 1 ), ˜ · F Y1 (ω), Y2 (ω) dP(ω) = P( ) f (ω1 ) dP(ω = P( ) ˜



˜

(5.51)



˜ where for a given ω1 = {(wn , an , bn )}∞ n=1 ∈ the integrand function f returns:  ˜ f (ω1 ) = P˜ ω2 = {(w¯ n , a¯ n , b¯n )}∞ n=1 ∈ ;   j (ω )  (y0 ) ∈ B min n>j (ω1 ); Zn {(ws , as , bs )}s=11 , {(w¯ s , a¯ s , b¯s )}∞ s=j (ω2 )+1 2  < min n>j (ω1 ); Zn ∈ Ai+1 .

˜ as above, we have: Consequently, for every ω1 ∈  ˜ · f (ω1 ) = P ω2 = {(w¯ n , a¯ n , b¯n )}∞ ˜ P( ) n=1 ∈ ;  Zj (ω ) ,σI ,σ0,I I   {(w¯ s , a¯ s , b¯s )}∞ min n > 0; Zn 1 s=j (ω2 )+1 ∈ B 2 (y0 )  < min n > 0; Zn ∈ Ai+1      ¯ i ≥ 1 − p¯ 1 − Q ¯i , ˜ · 1−Q ≥ P( )

in view of the definition in (5.48) and the estimate (5.49). Thus, (5.51) implies (5.47), as requested. 2. Define now, for all i ≥ 1, all z0 ∈ Ai and all strategies σI of Player I:  z ,σ ,σ Qi (z0 , σI ) = P min{n ≥ 0; Zn0 I 0,I I ∈ A1 }  > min{n ≥ 0; Zn ∈ Ai+1 } , (5.52) Qi = sup sup Qi (z0 , σI ). z0 ∈Ai σI

We then have, for all i, z0 and σI as specified above:   z ,σ ,σ P min{n ≥ 0; Zn0 I 0,I I ∈ Ai+1 } < min{n ≥ 0; Zn ∈ B 2 (y0 )}   = P min{n ≥ 0; Zn ∈ Ai+1 } < min{n ≥ 0; Zn ∈ A1 }  . + P j = min{n ≥ 0; Zn ∈ A1 } < min{n ≥ 0; Zn ∈ Ai+1 } and min{n ≥ j ; Zn ∈ Ai+1 } < min{n ≥ j ; Zn ∈ B 2 (y0 )}



≤ Qi (z0 , σI )   + 1 − Qi (z0 , σI ) ·   · sup sup P min{n ≥ 0; Zn ∈ Ai+1 } < min{n ≥ 0; Zn ∈ B 2 (y0 )} z0 ∈A1 σI

  ¯0 ≤ Qi (z0 , σI ) + 1 − Qi (z0 , σI ) Q     ¯0 + 1 − Q ¯ 0 Qi (z0 , σI ) ≤ Q ¯0 + 1 − Q ¯ 0 Qi . =Q (5.53)



The bound of the second term in the left-hand side of first inequality above by the product of probabilities:   1 − Qi (z0 , σI ) ·   · sup sup P min{n ≥ 0; Zn ∈ Ai+1 } < min{n ≥ 0; Zn ∈ B 2 (y0 )} z0 ∈A1 σI

is obtained in the same manner as in the proof of the bound in (5.47), so we suppress the details. Further, the second inequality in (5.53) follows from (5.46) and the last inequality is a consequence of the definition of Qi . Therefore, we arrive at:   ¯0 + 1 − Q ¯ 0 Qi ¯i ≤ Q for all i ≥ 1. (5.54) Q 3. We now estimate the probability bound Qi in (5.52). For i = 1, we have: Q1 = 0. Fix i ≥ 2 and z0 ∈ Ai , together with a strategy σI of Player I. Then:  . 1 − Qi (z0 , σI ) = P j = min{n ≥ 0; Zn ∈ Ai−1 } < min{n ≥ 0; Zn ∈ Ai+1 }  and min{n ≥ j ; Zn ∈ A1 } < min{n ≥ j ; Zn ∈ Ai+1 }     ≥ 1 − p¯ · inf inf P min{n ≥ 0; Zn ∈ A1 } < min{n ≥ 0; Zn ∈ Ai+1 } , z0 ∈Ai−1 σI

in view of (5.38) and reasoning again as in the proof of (5.47). On the other hand, for every z0 ∈ Ai−1 and every σI we have:   P min{n ≥ 0; Zn ∈ A1 } < min{n ≥ 0; Zn ∈ Ai+1 }   = P min{n ≥ 0; Zn ∈ A1 } < min{n ≥ 0; Zn ∈ Ai }  . + P j = min{n ≥ 0; Zn ∈ Ai } < min{n ≥ 0; Zn ∈ A1 } and min{n ≥ j ; Zn ∈ A1 } < min{n ≥ j ; Zn ∈ Ai+1 }     = 1 − Qi−1 (z0 , σI ) + 1 − Qi Qi−1 (z0 , σI )



= 1 − Q_i Q_{i−1}(z₀, σ_I) ≥ 1 − Q_i Q_{i−1},

by the definition (5.52) and reasoning as in the proof of (5.47). Concluding, we obtain:

1 − Q_i(z₀, σ_I) ≥ (1 − p̄)(1 − Q_i Q_{i−1})   for all i ≥ 2, z₀ ∈ A_i, σ_I,

which results in: Q_i ≤ 1 − (1 − p̄)(1 − Q_i Q_{i−1}), or equivalently:

0 ≤ Q_i ≤ p̄ / ( 1 − (1 − p̄) Q_{i−1} )   for all i ≥ 2,
Q₁ = 0.    (5.55)
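The recursion q₁ = 0, q_i = p̄ / (1 − (1 − p̄) q_{i−1}) from (5.55) can be iterated directly: for p̄ < 1/2 the iterates increase toward the fixed point p̄/(1 − p̄) < 1 and never exceed it, which is exactly the uniform bound used in the next step of the proof. A minimal sketch (the value p̄ = 0.4 is a sample of ours):

```python
# Iterate (5.55): q_1 = 0, q_i = pbar / (1 - (1 - pbar) * q_{i-1}).
pbar = 0.4
bound = pbar / (1.0 - pbar)  # attracting fixed point of the recursion, here 2/3

qs = [0.0]
for _ in range(60):
    qs.append(pbar / (1.0 - (1.0 - pbar) * qs[-1]))
# qs increases monotonically toward `bound` and stays below it throughout.
```

This numerically confirms the claim 0 ≤ Q_i ≤ p̄/(1 − p̄) proved by induction below.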



It is now an immediate observation that:

0 ≤ Q_i ≤ p̄ / (1 − p̄)   for all i ≥ 1.

Indeed, the above inequality holds for i = 1, and if it is valid for Q_{i−1} then 0 ≤ (1 − p̄)Q_{i−1} ≤ p̄, so (5.55) implies the same bound for Q_i. By (5.54) and the above, we get:

1 − Q̄_i ≥ (1 − Q̄₀) · (1 − Q_i) ≥ (1 − Q̄₀) · (1 − 2p̄)/(1 − p̄).

Finally, (5.47) yields:

P( min{ n ≥ 0 ;  Z_n ∈ B_{ε/2}(y₀) } < min{ n ≥ 0 ;  Z_n ∉ B_δ(y₀) } ) ≥ (1 − Q̄₀)(1 − 2p̄).

The constant in the right-hand side above is clearly positive, as p̄ < 1/2 by (5.40), and it depends only on the exponent p and the dimension N. This ends the proof of (5.34) and achieves Theorem 5.16. □

Exercise 5.21

(i) Verify the conditions in (5.36).
(ii) Verify the properties (5.37).
(iii) Prove that the random variables Y₁ and Y₂ in (5.50) are independent.
(iv) Show that the function F : Ω_fin × Ω → R used in (5.51) is measurable.

5.7 Bibliographical Notes

The game-theoretical constructions in this chapter are based on the seminal analysis in Peres and Sheffield (2008), here adapted to the dynamic programming principle (4.1), and expanded to contain all the probability-related details. Definition 5.5 is a natural extension of the simpler notion of walk-regularity in Definition 2.12 that was valid for the random walk process (the ball walk discussed in Chap. 2) and related to the linear case p = 2. We recall that this notion has been, in turn, derived from the classical Doob's regularity condition of boundary points for the Brownian motion, discussed in Sect. 3.7*.

Chapter 6

Mixed Tug-of-War with Noise: Case p ∈ (1, ∞)

Our discussion was started in Chap. 2 by analysing the linear case p = 2. There, we derived the probabilistic interpretation of harmonic functions via their mean value property, used as the dynamic programming principle of the ball walk. In Chaps. 3 and 4 we developed a parallel description for p-harmonic functions, via the asymptotic mean value expansions (3.13), (3.14) viewed as the game-theoretical dynamic programming principles, valid for exponent ranges p ≥ 2 and p > 2, respectively. In the present chapter, we follow the same program, based on a family of asymptotic expansions that are viable in the entire range 1 < p < ∞.

In Sect. 6.1 below, we introduce a new finite difference approximation to the Dirichlet problem for the homogeneous p-Laplace equation posed on a domain D ⊂ RN, under the boundary data F on ∂D. The approximation is based on superposing the deterministic averages ½(inf + sup), taken over balls, with the stochastic (integral) averages, taken over N-dimensional ellipsoids whose aspect ratio depends on N, p and whose orientations span all directions while determining inf / sup. In Sect. 6.2 we prove the well-posedness of the induced mean value equations, utilizing the interpolation to the boundary as in Chap. 4. The probabilistic interpretation uε of their solutions, at each averaging scale ε > 0, is put forward in Sect. 6.3, where we describe the mixed Tug-of-War game with noise. In this game, the token is initially placed at x0 ∈ D, and at each step it is further advanced according to the following rule. First, either of the two players (each acting with probability ½) shifts the token by a chosen vector y of length at most ε; second, the token is further shifted within an ellipsoid centred at the current position, with radius γpε, oriented along y, and with aspect ratio that varies quadratically between 1 and another parameter ρp as the magnitude |y| of the shift increases from 0 to ε. The scaling factors γp and ρp depend on p and N.
Whenever the token reaches the 2ε-neighbourhood of ∂D, the game is stopped with probability proportional to: 1 minus the scaled distance of the token's location from the set RN \ D enlarged by B̄ε(0). Then, uε(x0) is set to be the expectation of F at the stopping position x_{τ−1}, subject to both

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020 M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3_6


players playing optimally. As in Chap. 3, the optimality criterion is based on Player II disbursing the payoff F(x_{τ−1}) to Player I at the termination of the game; due to the min-max property, the order of supremizing the outcomes over strategies of one player and infimizing over strategies of the opponent is immaterial. In Sect. 6.4, we define the related game-regularity of the boundary points. As in Chap. 5, this notion is essentially equivalent to the local equicontinuity of the family {uε}ε→0, and ultimately to its uniform convergence to the unique viscosity solution of the studied Dirichlet problem. It is expected that game-regularity is equivalent to the Wiener p-regularity condition in Definition C.47 and Theorem C.48, as discussed in Appendix C. In the absence of such a result at the time of preparing these Course Notes, we prove in Sect. 6.4 that a sufficient condition for game-regularity is provided by the exterior corkscrew condition. Another sufficient condition, p > N, can be achieved along the same lines as in Chap. 5, and is thus left to the reader. In Sect. 6.5 we show that in dimension N = 2, every simply connected domain is game-regular.

6.1 The Third Averaging Principle

For γ, ρ > 0 and a unit vector ν ∈ RN, we denote by E(0, γ; ρ, ν) ⊂ RN the ellipsoid centred at 0, with radius γ, and with aspect ratio ρ oriented along ν (Fig. 6.1):

E(0, γ; ρ, ν) = { y ∈ RN;  ⟨y, ν⟩²/ρ² + |y − ⟨y, ν⟩ν|² < γ² }.

For x ∈ RN, we have the translated ellipsoid: E(x, γ; ρ, ν) = x + E(0, γ; ρ, ν). Note that, when ν = 0, this formula also makes sense and returns the ball E(x, γ; ρ, 0) = Bγ(x). Given a function u: RN → R, its average will be denoted:

A( u; γ, ρ, ν )(x) = ⨍_{E(x,γ;ρ,ν)} u(y) dy = ⨍_{B1(0)} u( x + γy + γ(ρ − 1)⟨y, ν⟩ν ) dy.

In particular, A(u; γ, 1, ν)(x) = A(u; γ, ρ, 0)(x) = Aγu(x) is consistent with the notation Aγu(x) = ⨍_{Bγ(x)} u(y) dy from Chap. 2. We also observe the following linear change of variables:

B1(0) ∋ y ↦ γρ⟨y, ν⟩ν + γ( y − ⟨y, ν⟩ν ) ∈ E(0, γ; ρ, ν),

that will be often used in the sequel.
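The change of variables above is also convenient for sampling: pushing the uniform measure on B1(0) through this linear map gives the uniform measure on the ellipsoid. The following Monte Carlo sketch (all function names are ours, not from the text) approximates A(u; γ, ρ, ν)(x) this way:

```python
import math
import random

def sample_unit_ball(N, rng):
    """Uniform sample from B1(0): gaussian direction, radius ~ U^(1/N)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(N)]
    r = math.sqrt(sum(c * c for c in v))
    s = rng.random() ** (1.0 / N) / r
    return [c * s for c in v]

def ellipsoid_average(u, x, gamma, rho, nu, n_samples=20000, seed=0):
    """Monte Carlo estimate of A(u; gamma, rho, nu)(x), via the change of
    variables B1(0) ∋ y ↦ γρ⟨y,ν⟩ν + γ(y − ⟨y,ν⟩ν) = γy + γ(ρ−1)⟨y,ν⟩ν,
    which maps B1(0) onto the ellipsoid E(0, γ; ρ, ν)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = sample_unit_ball(len(x), rng)
        t = sum(a * b for a, b in zip(y, nu))  # ⟨y, ν⟩
        z = [xi + gamma * (yi + (rho - 1.0) * t * ni)
             for xi, yi, ni in zip(x, y, nu)]
        total += u(z)
    return total / n_samples
```

For an affine u the average over the (symmetric) ellipsoid equals u(x), which gives a quick sanity check of the sampler.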


Fig. 6.1 The averaging ellipsoid in A(u; γ , ρ, ν)(x)

Theorem 6.1 Given p ∈ (1, ∞), fix any scaling factors γp, ρp > 0 with:

(N + 2)/γp² + ρp² = p − 1.    (6.1)

Let D ⊂ RN be an open set and assume that u ∈ C²(D). Then, for every x0 ∈ D such that ∇u(x0) ≠ 0, and every ε > 0 satisfying B̄_{ε(1+γp(1∨ρp))}(x0) ⊂ D, we have (Fig. 6.2):

½( inf_{x∈Bε(x0)} + sup_{x∈Bε(x0)} ) A( u; γpε, 1 + (ρp − 1)|x − x0|²/ε², (x − x0)/|x − x0| )(x)
= u(x0) + γp²ε²/(2(N + 2)) · |∇u(x0)|^{2−p} Δp u(x0) + o(ε²).    (6.2)

The rate of convergence o(ε²) depends on: p, N, γp and (in increasing manner) on |∇u(x0)|, |∇²u(x0)| and the modulus of continuity of ∇²u at x0.

It is convenient to write the expression in the left-hand side of the formula (6.2) as the average ½(inf + sup), on the ball Bε(x0), of the function x ↦ fu(x; x0, ε) in:

fu(x; x0, ε) = A( u; γε, 1 + (ρ − 1)|x − x0|²/ε², (x − x0)/|x − x0| )(x)
= ⨍_{B1(0)} u( x + γεy + (γ(ρ − 1)/ε) ⟨y, x − x0⟩(x − x0) ) dy,    (6.3)

where γ = γp and ρ = ρp. We will frequently use the notation:

Sεu(x0) = ½( inf_{x∈Bε(x0)} + sup_{x∈Bε(x0)} ) fu(x; x0, ε).    (6.4)


Fig. 6.2 The sampling ellipsoids in the expansion (6.2), when ε = 1

For each x ∈ Bε(x0), the integral quantity in (6.3) returns the average of u on the N-dimensional ellipsoid centred at x, with radius γε, and with aspect ratio 1 + (ρ − 1)|x − x0|²/ε² along the orientation vector (x − x0)/|x − x0|. Equivalently, writing x = x0 + εy, the value fu(x; x0, ε) is the average of u on the scaled ellipsoid:

x0 + εE( y, γ; 1 + (ρ − 1)|y|², y/|y| ).

Since the aspect ratio changes smoothly from 1 to ρ as |x − x0| increases from 0 to ε, the said ellipsoid coincides with the ball B(x0, γε) at x = x0 and it interpolates, as |x − x0| → ε−, to E( x, γε; ρ, (x − x0)/|x − x0| ).

Proof of Theorem 6.1

1. We fix γ, ρ > 0 and consider the Taylor expansion of u at a given x ∈ Bε(x0) under the integral in (6.3). Observe that the first order increments are linear in y, hence they integrate to 0 on B1(0). These increments are of order ε, so:

fu(x; x0, ε) = u(x) + ½ ⨍_{B1(0)} ∇²u(x) : ( γεy + (γ(ρ − 1)/ε)⟨y, x − x0⟩(x − x0) )⊗² dy + o(ε²)
= u(x) + (γ²/2) ∇²u(x) : [ ε² ⨍_{B1(0)} y⊗² dy + 2(ρ − 1) ⨍_{B1(0)} ⟨y, x − x0⟩ y dy ⊗ (x − x0)
  + ((ρ − 1)²/ε²) ⨍_{B1(0)} ⟨y, x − x0⟩² dy · (x − x0)⊗² ] + o(ε²).    (6.5)

Recall that: A(y⊗²)(0) = (1/(N + 2)) IdN by Exercise 3.7 (i). Hence (6.5) becomes:

fu(x; x0, ε) = u(x) + γ²ε²/(2(N + 2)) Δu(x)
  + (γ²(ρ − 1)/(N + 2)) ( 1 + (ρ − 1)|x − x0|²/(2ε²) ) ∇²u(x) : (x − x0)⊗² + o(ε²)
= f̄u(x; x0, ε) + o(ε²),

where a further Taylor expansion of u at x0 gives:

f̄u(x; x0, ε) = u(x0) + ⟨∇u(x0), x − x0⟩ + γ²ε²/(2(N + 2)) Δu(x0)
  + ( ½ + (γ²(ρ − 1)/(N + 2)) ( 1 + (ρ − 1)|x − x0|²/(2ε²) ) ) ∇²u(x0) : (x − x0)⊗².

The left-hand side of (6.2) thus satisfies:

½( inf_{x∈Bε(x0)} fu(x; x0, ε) + sup_{x∈Bε(x0)} fu(x; x0, ε) )
= ½( inf_{x∈Bε(x0)} f̄u(x; x0, ε) + sup_{x∈Bε(x0)} f̄u(x; x0, ε) ) + o(ε²).    (6.6)

Since on Bε(x0) we have: f̄u(x; x0, ε) = u(x0) + ⟨∇u(x0), x − x0⟩ + O(ε²), the assumption ∇u(x0) ≠ 0 implies that the continuous function f̄u(·; x0, ε) attains its extrema on the boundary ∂Bε(x0), provided that ε is sufficiently small. This reasoning justifies that f̄u in (6.6) may be replaced by the quadratic polynomial:

f̄̄u(x; x0, ε) = u(x0) + γ²ε²/(2(N + 2)) Δu(x0) + ⟨∇u(x0), x − x0⟩
  + ( ½ + γ²(ρ² − 1)/(2(N + 2)) ) ∇²u(x0) : (x − x0)⊗².

2. We now recall that f̄̄u attains its extrema on B̄ε(x0), up to an error O(ε³), whenever ε is sufficiently small, precisely at the opposite boundary points x0 + ε∇u(x0)/|∇u(x0)| and x0 − ε∇u(x0)/|∇u(x0)|, as shown in Step 3 of the proof of Theorem 3.4. Consequently,


for γ = γp, ρ = ρp satisfying (6.1), there holds:

½( inf_{x∈Bε(x0)} f̄̄u(x; x0, ε) + sup_{x∈Bε(x0)} f̄̄u(x; x0, ε) )
= u(x0) + γ²ε²/(2(N + 2)) Δu(x0) + ( ½ + γ²(ρ² − 1)/(2(N + 2)) ) ε² Δ∞u(x0) + O(ε³)
= u(x0) + γ²ε²/(2(N + 2)) ( Δu(x0) + ( (N + 2)/γ² + ρ² − 1 ) Δ∞u(x0) ) + O(ε³),    (6.7)

further equal to:

u(x0) + γp²ε²/(2(N + 2)) ( Δu(x0) + (p − 2) Δ∞u(x0) ) + O(ε³)
= u(x0) + γp²ε²/(2(N + 2)) |∇u(x0)|^{2−p} Δp u(x0) + O(ε³).

This completes the proof of (6.2) in view of (6.6). □

Remark 6.2
(i) When p → ∞, one can take ρp = 1 and γp ∼ 0 in (6.1), whereas for Δ∞u(x0) = 0 the asymptotic formula (6.2) formally becomes: u(x0) = ½( inf_{x∈Bε(x0)} u(x) + sup_{x∈Bε(x0)} u(x) ), consistently with the AMLE property of the ∞-harmonic functions (see Remark 3.6 (iii)).
(ii) When p = 2, then choosing ρp = 1 and γp ∼ ∞ corresponds to taking both types of averages on balls whose radii have ratio ∼ ∞. Equivalently, one may take the integral average on Bε(x0) and the external one on B0(x0) ∼ {x0}, consistently with the mean value property of harmonic functions: u(x0) = Aεu(x0).
(iii) On the other hand, when p → 1+, then there must be ρp → 0+ and the critical choice ρp = 0 is the only one valid for every p ∈ (1, ∞). It corresponds to varying the aspect ratio along the radius of Bε(x0) from 1 to 0 rather than to ρp > 0, and taking the sampling domains to be the ellipsoids E( x, γε; 1 − |x − x0|²/ε², (x − x0)/|x − x0| ), with the radius γε scaled by the factor γ = √((N + 2)/(p − 1)). At x = x0, the aforementioned ellipsoid coincides with the ball Bγε(x0), whereas as |x − x0| → ε− it degenerates to the (N − 1)-dimensional ball:

E( x, γε; 0, (x − x0)/|x − x0| ) = { x + y ∈ RN;  ⟨y, x − x0⟩ = 0 and |y| < ε√((N + 2)/(p − 1)) }.

The resulting mean value expansion is then:

½( inf_{x∈Bε(x0)} + sup_{x∈Bε(x0)} ) A( u; γε, 1 − |x − x0|²/ε², (x − x0)/|x − x0| )(x)
= u(x0) + ε²/(2(p − 1)) |∇u(x0)|^{2−p} Δp u(x0) + o(ε²).    (6.8)

Exercise 6.3 In Peres and Sheffield (2008), instead of averaging on an N-dimensional ellipsoid, the average is taken on the (N − 2)-dimensional sphere centred at x, with some radius γ|x − x0|, and contained within the hyperplane perpendicular to x − x0. The radius thus increases linearly from 0 to γε with a factor γ > 0, as |x − x0| varies from 0 to ε. This corresponds to evaluating on Bε(x0) the averages ½(sup + inf) of:

fuγ(x; x0, ε) = ⨍_{∂B^{N−1}} u( x + γ|x − x0| R(x)y ) dy.

Here, R(x) ∈ SO(N) is such that R(x)eN = (x − x0)/|x − x0|, and ∂B^{N−1} stands for the (N − 2)-dimensional sphere of unit radius, viewed as a subset of RN contained in the subspace R^{N−1} orthogonal to eN (note that x ↦ R(x) can be only locally defined as a C² function). Apply the argument as in the proof of Theorem 6.1 to show the following expansion of any u ∈ C²(D) at x0 ∈ D satisfying ∇u(x0) ≠ 0:

½( inf_{x∈Bε(x0)} + sup_{x∈Bε(x0)} ) fu^{√((N−1)/(p−1))}(x; x0, ε)
= u(x0) + ε²/(2(p − 1)) |∇u(x0)|^{2−p} Δp u(x0) + o(ε²).    (6.9)

Exercise 6.4 Prove the following expansion that has been put forward in Arroyo et al. (2017). Let u ∈ C²(D) satisfy Δp u = 0 in D and let x0 ∈ D be such that ∇u(x0) ≠ 0. Then we have:

u(x0) = ½( inf_{|ν|=ε} + sup_{|ν|=ε} ) [ (p − 1)/(p + N) · u(x0 + ν) + (N + 1)/(p + N) ⨍_{Bν} u(x0 + y) dσ(y) ] + o(ε²),    (6.10)

where Bν denotes the (N − 1)-dimensional ball, centred at 0, having radius ε, and orthogonal to the given vector ν.


6.2 The Dynamic Programming Principle and the Basic Convergence Theorem

As in Chap. 3, we begin by proving the well-posedness of the dynamic programming principle obtained by truncating error terms in (6.2) and continuously interpolating to the boundary data on ∂D. In order to avoid the stopping of the induced Tug-of-War game outside of D (which was the case in the boundary aware game discussed in Chap. 4), the interpolation occurs in the (ε, 2ε) neighbourhood of the boundary, rather than in the (0, ε) neighbourhood (Fig. 6.3). Also, instead of dealing with the thickened and outer boundaries as in Definition 3.8, we will assume that the boundary data F is defined on the whole RN. In the limit ε → 0 and when F ∈ C(RN), the statements below will depend only on the trace F|∂D and not on its particular extension to RN.

Theorem 6.5 Let D ⊂ RN be open, bounded, and connected. Given γp, ρp > 0 as in (6.1), for every ε ∈ (0, 1) and every bounded, Borel F: RN → R, there exists a unique bounded, Borel uε: RN → R, satisfying:

uε(x) = dε(x) Sεuε(x) + ( 1 − dε(x) ) F(x)    for all x ∈ RN.    (6.11)

Here, Sε is defined in (6.4) and we set:

dε(x) = (1/ε) min{ ε, dist( x, (RN \ D) + B̄ε(0) ) }.    (6.12)

The solution operator to (6.11) is monotone, i.e. if F ≤ F̄, then the corresponding solutions satisfy: uε ≤ ūε. Moreover: ‖uε‖_{L∞(RN)} ≤ ‖F‖_{L∞(RN)}.

Fig. 6.3 The domain D and the scaled distance function dε in (6.12)
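For a concrete domain, the scaled distance (6.12) is easy to evaluate, since dist(x, (RN \ D) + B̄ε(0)) = max(0, dist(x, RN \ D) − ε). A minimal sketch (names ours; it assumes a known distance function for D):

```python
def d_eps(x, eps, dist_to_complement):
    """Scaled distance (6.12): d_eps(x) = (1/eps) * min(eps, dist(x, (R^N \\ D) + B_eps)).
    dist_to_complement(x) must return dist(x, R^N \\ D); enlarging the complement
    by the closed ball B_eps(0) subtracts eps from that distance (floored at 0)."""
    return min(eps, max(0.0, dist_to_complement(x) - eps)) / eps
```

For D the unit ball, dist(x, RN \ D) = max(0, 1 − |x|), so dε equals 1 deep inside D, decreases linearly to 0 across the (ε, 2ε) collar of ∂D, and vanishes outside it, as in Fig. 6.3.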


Proof

1. The proof follows the steps of the proof of Theorem 4.1. To ease the notation, we drop the subscript ε and write u instead of uε. Clearly, the solution u of (6.11) is a fixed point of Tεv = dε Sεv + (1 − dε)F. Recall that:

(Sεv)(x) = ½( inf_{z∈B1(0)} + sup_{z∈B1(0)} ) fv(x + εz; x, ε),    (6.13)

where: fv(x + εz; x, ε) = ⨍_{E( x+εz, γpε; 1+(ρp−1)|z|², z/|z| )} v(w) dw.

For a fixed ε and x, and given a bounded, Borel function v: RN → R, the average fv is continuous in z ∈ B̄1(0). In view of the continuity of dε, we conclude that both Tε, Sε return a bounded, Borel function. We further note that Sε and Tε are monotone: Sεv ≤ Sεv̄ and Tεv ≤ Tεv̄ if v ≤ v̄. The solution u of (6.11) is obtained as the limit of the iterations un+1 = Tεun, where we set u0 ≡ const ≤ inf F. Since u1 = Tεu0 ≥ u0 on RN, by monotonicity of Tε, the sequence {un}∞n=0 is nondecreasing. It is also bounded (by ‖F‖_{L∞(RN)}) and thus it converges pointwise to a (bounded, Borel) limit u: RN → R. Observe now that:

|Tεun(x) − Tεu(x)| ≤ |Sεun(x) − Sεu(x)| ≤ sup_{z∈B1(0)} ⨍_{E( x+εz, γpε; 1+(ρp−1)|z|², z/|z| )} |un − u|(w) dw ≤ C ∫_D |un − u|(w) dw,    (6.14)

where the constant C is the reciprocal of a lower bound on the volumes of the sampling ellipsoids. By the monotone convergence theorem, it follows that the right-hand side in (6.14) converges to 0 as n → ∞. Consequently, u = Tεu, proving the existence of solutions to (6.11).

2. We now show uniqueness. If u, ū both solve (6.11), then define:

M = sup_{x∈RN} |u(x) − ū(x)| = sup_{x∈D} |u(x) − ū(x)|.

The proof is the same as in Step 2 of the proof of Theorem 4.1; we now indicate its simplified content for the case u, ū ∈ C(RN) (which holds automatically for continuous F). Consider any maximizer x0 ∈ D, where |u(x0) − ū(x0)| = M.


By (6.14) we get:

M = |u(x0) − ū(x0)| = dε(x0) |Sεu(x0) − Sεū(x0)| ≤ sup_{z∈B1(0)} f_{|u−ū|}(x0 + εz; x0, ε) ≤ M,

yielding in particular: ⨍_{B(x0, γpε)} |u − ū|(w) dw = M. Consequently, B(x0, γpε) ⊂ DM = {|u − ū| = M}, and hence the set DM is open in RN. Since DM is obviously closed and nonempty, there must be DM = RN, and since u − ū = 0 on RN \ D, it follows that M = 0. Thus u = ū, proving the claim. Finally, monotonicity of Sε yields the monotonicity of the solution operator to (6.11). □

Remark 6.6 It follows from (6.14) that the sequence {un}∞n=1 in the proof of Theorem 6.5 converges to u = uε uniformly. In fact, the iteration procedure un+1 = Tεun started from any bounded, Borel u0 converges uniformly to the uniquely given uε. We further remark that if F is continuous, then uε is likewise continuous, and if F is Lipschitz, then uε is Lipschitz, with Lipschitz constant depending (in nondecreasing manner) on: 1/ε, ‖F‖_{C(∂D)} and the Lipschitz constant of F|∂D.

We conclude this section by showing the basic convergence result, which is parallel to Theorem 4.6 stated for the averaging principle (3.14).
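The monotone iteration un+1 = Tεun can be seen in action on the simplest linear analogue from Chap. 2: the discrete mean value equation of a 1-D random walk on a grid. The sketch below (ours; it is a toy for the p = 2 case on a grid, not the operator Tε itself) iterates from a constant below the data until the fixed point is reached:

```python
def solve_dpp_1d(F0, F1, m, tol=1e-12, max_iter=200000):
    """Monotone fixed-point iteration u_{n+1} = T u_n for the 1-D linear
    analogue of (6.11): interior nodes i = 1..m-1 carry the mean value
    equation u(i) = (u(i-1) + u(i+1)) / 2, boundary nodes carry the data.
    Started from a constant <= min(F0, F1), the iterates are nondecreasing
    and converge to the unique fixed point, mirroring the proof above."""
    u = [min(F0, F1)] * (m + 1)
    u[0], u[m] = F0, F1
    for _ in range(max_iter):
        new = u[:]
        for i in range(1, m):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
        if max(abs(a - b) for a, b in zip(new, u)) < tol:
            return new
        u = new
    return u
```

Here the fixed point is the discrete harmonic (i.e. linear) interpolation of the boundary values, the 1-D counterpart of the mean value property.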

Theorem 6.7 Let F ∈ C²(RN) be a bounded data function that satisfies, on some open set U compactly containing D:

Δp F = 0    and    ∇F ≠ 0    in U.    (6.15)

Then the solutions uε of (6.11) converge to F uniformly in RN, namely:

‖uε − F‖_{C(D̄)} ≤ Cε    as ε → 0,    (6.16)

with a constant C depending on F, U, D and p, but not on ε.

Proof

1. Since uε = F on RN \ D by construction, (6.16) indeed implies the uniform convergence of uε in RN. Also, by translating D if necessary, we may assume that B1(0) ∩ U = ∅. Let vε be the function resulting from Lemma 4.8, namely vε(x) = F(x) + ε|x|^s for some s ≥ 2, satisfying for every ε ∈ (0, ε̂):

∇vε ≠ 0    and    Δp vε ≥ sε · |∇vε|^{p−2}    in D̄.    (6.17)


Parameters s and ε̂ can be further chosen in a way that for all ε ∈ (0, ε̂):

vε ≤ Sεvε    in D̄.    (6.18)

Indeed, analysis of the remainder terms in the expansion (6.2) reveals that:

vε(x) − Sεvε(x) = − γp²ε²/(2(N + 2)) |∇vε(x)|^{2−p} Δp vε(x) + R2(ε, s),    (6.19)

where: |R2(ε, s)| ≤ Cp ε² osc_{B(x,(1+γp)ε)}(|∇²vε|) + Cε³. We denoted by Cp a constant depending only on p, whereas C is a constant depending only on |∇vε| and |∇²vε|, that remain uniformly bounded for small ε. Since vε is the sum of the function x ↦ ε|x|^s, smooth on U, and the p-harmonic function F that is also smooth in virtue of its nonvanishing gradient, we obtain that (6.19) and (6.17) imply (6.18) for s sufficiently large and ε appropriately small.

2. Let A be a compact set with D̄ ⊂ A ⊂ U. For each x ∈ A and ε ∈ (0, ε̂), let: φε(x) = vε(x) − uε(x) = F(x) − uε(x) + ε|x|^s. By (6.18) and (6.11) we get:

φε(x) = dε(x)( vε(x) − Sεuε(x) ) + (1 − dε(x))( vε(x) − F(x) )
≤ dε(x)( Sεvε(x) − Sεuε(x) ) + (1 − dε(x))( vε(x) − F(x) )
≤ dε(x) sup_{y∈B1(0)} fφε( x + εy; x, ε ) + (1 − dε(x))( vε(x) − F(x) ).

(6.20)

Define: Mε = max_A φε. We claim that there exists x0 ∈ A with dε(x0) < 1 and such that φε(x0) = Mε. To prove the claim, let Dε = { x ∈ D; dist(x, ∂D) ≥ 2ε }. We can assume that the closed set Dε ∩ {φε = Mε} is nonempty; otherwise the claim would be obvious. Let D0 be a nonempty connected component of Dε and denote DM = D0 ∩ {φε = Mε}. Clearly, DM is closed in D0; we now show that it is also open. Let x ∈ DM. Since dε(x) = 1, from (6.20) it follows that:

Mε = φε(x) ≤ sup_{y∈B(x,ε)} A( φε; γpε, 1 + (ρp − 1)|y − x|²/ε², (y − x)/|y − x| )(y) ≤ Mε.

Consequently, φε ≡ Mε in B(x, γpε) and thus we obtain the openness of DM in D0. In particular, DM contains a point x̄ ∈ ∂Dε. Repeating the previous argument for x̄ results in φε ≡ Mε in B(x̄, γpε), proving the claim.

3. We now complete the proof of Theorem 6.7 by deducing a bound on Mε. If Mε = φε(x0) for some x0 ∈ D̄ with dε(x0) < 1, then (6.20) yields:


φε(x0) ≤ dε(x0) Mε + (1 − dε(x0))( vε(x0) − F(x0) ),

which implies: Mε ≤ vε(x0) − F(x0) = ε|x0|^s. On the other hand, if Mε = φε(x0) for some x0 ∈ A \ D, then dε(x0) = 0, hence likewise: Mε = φε(x0) = vε(x0) − F(x0) = ε|x0|^s. In either case:

max_{D̄}(F − uε) ≤ max_{D̄} φε + Cε ≤ 2Cε,

where C = max_{x∈A} |x|^s is independent of ε. A symmetric argument applied to the data −F, whose solution is −uε, gives: min_{D̄}(F − uε) ≥ −2Cε. The proof is done. □

6.3 Mixed Tug-of-War with Noise

Below, we develop the probability setting similar to that of Sects. 3.4 and 4.3, but related to the expansion (6.2). We fix p ∈ (1, ∞), the scaling factors γp, ρp as in (6.1), and the sampling radius parameter ε ∈ (0, 1).

1. Let Ω1 = B1(0) × {1, 2} × (0, 1) and define:

Ω = (Ω1)^N = { ω = {(wi, ai, bi)}∞i=1;  wi ∈ B1(0), ai ∈ {1, 2}, bi ∈ (0, 1) for all i ∈ N }.

The probability space (Ω, F, P) is given as the countable product of (Ω1, F1, P1). Here, F1 is the smallest σ-algebra containing all products D × A × B, where D ⊂ B1(0) ⊂ RN and B ⊂ (0, 1) are Borel, and A ⊂ {1, 2}. The measure P1 is the product of: the normalized Lebesgue measure on B1(0), the uniform counting measure on {1, 2} and the Lebesgue measure on (0, 1):

P1(D × A × B) = |D|/|B1(0)| · |A|/2 · |B|.

For each n ∈ N, the probability space (Ωn, Fn, Pn) is the product of n copies of (Ω1, F1, P1). Each Fn is identified with the sub-σ-algebra of F consisting of the sets F × ∏∞i=n+1 Ω1 for all F ∈ Fn. The sequence {Fn}∞n=0, where F0 = {∅, Ω}, is a filtration of F.

2. Given are two families of functions σI = {σI^n}∞n=0 and σII = {σII^n}∞n=0, defined on the corresponding spaces of "finite histories" Hn = RN × (RN × Ω1)^n:

σI^n, σII^n : Hn → B1(0) ⊂ RN,


assumed to be measurable with respect to the (target) Borel σ-algebra in B1(0) and the (domain) product σ-algebra on Hn. For every x0 ∈ RN we now recursively define the sequence of random variables {Xn^{x0,σI,σII}}∞n=0. As usual, we often suppress some of the superscripts x0, σI, σII and write Xn (or Xn^{x0}, or Xn^{σI,σII}, etc.) instead of Xn^{x0,σI,σII}, if no ambiguity arises. Let:

X0 ≡ x0,
Xn( (w1, a1, b1), . . . , (wn, an, bn) ) = xn−1 + ε( σI^{n−1}(hn−1) + γp wn + γp(ρp − 1)⟨wn, σI^{n−1}(hn−1)⟩ σI^{n−1}(hn−1) )    for an = 1,
Xn( (w1, a1, b1), . . . , (wn, an, bn) ) = xn−1 + ε( σII^{n−1}(hn−1) + γp wn + γp(ρp − 1)⟨wn, σII^{n−1}(hn−1)⟩ σII^{n−1}(hn−1) )    for an = 2,

where xn−1 = Xn−1( (w1, a1, b1), . . . , (wn−1, an−1, bn−1) ) and hn−1 = ( x0, (x1, w1, a1, b1), . . . , (xn−1, wn−1, an−1, bn−1) ) ∈ Hn−1.    (6.21)

In this "game", the position xn−1 is first advanced (deterministically) according to the two players' "strategies" σI and σII by a shift y ∈ Bε(0), and then (randomly) uniformly by a further shift in the ellipsoid E( 0, γpε; 1 + (ρp − 1)|y|²/ε², y/|y| ). The deterministic shifts are activated by the value of the equally probable outcomes: an = 1 activates σI and an = 2 activates σII (Fig. 6.4).

3. The auxiliary variables bn ∈ (0, 1) serve as thresholds for reading the eventual value from the prescribed boundary data. Let D ⊂ RN be an open, bounded and

Fig. 6.4 Player I, Player II and random noise in the mixed Tug-of-War
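The token dynamics (6.21)-(6.22) can be simulated directly. The sketch below (illustrative only: both players move at random rather than optimally, and D is taken to be the unit ball) also illustrates the almost sure termination asserted in Lemma 6.8 below:

```python
import math
import random

def play_once(p=5.0, N=2, eps=0.1, max_steps=100000, seed=1):
    """One run of the token dynamics (6.21)-(6.22) on D = B1(0) in R^N.
    Scaling factors obey (6.1): (N+2)/gamma^2 + rho^2 = p - 1; here gamma = 2,
    so for p = 5, N = 2 one gets rho = sqrt(3), and (6.23) holds (rho >= 1,
    gamma > 1). Both players use random shifts, so the coin a_n is irrelevant."""
    rng = random.Random(seed)
    gamma = 2.0
    rho = math.sqrt(p - 1.0 - (N + 2.0) / gamma ** 2)

    def unit_ball():
        v = [rng.gauss(0.0, 1.0) for _ in range(N)]
        r = math.sqrt(sum(c * c for c in v))
        s = rng.random() ** (1.0 / N) / r
        return [c * s for c in v]

    def d_eps(x):  # the scaled distance (6.12), for D the unit ball
        return min(eps, max(0.0, (1.0 - math.sqrt(sum(c * c for c in x))) - eps)) / eps

    x = [0.0] * N
    for n in range(1, max_steps + 1):
        if rng.random() > d_eps(x):   # b_n > d_eps(x_{n-1}): stop; payoff read at x_{tau-1}
            return x, n
        sigma = unit_ball()           # active player's shift (random strategy here)
        w = unit_ball()               # uniform noise in B1(0)
        t = sum(a * b for a, b in zip(w, sigma))
        x = [xi + eps * (si + gamma * wi + gamma * (rho - 1.0) * t * si)
             for xi, si, wi in zip(x, sigma, w)]
    return x, None
```

Since the stopping probability 1 − dε(x) is positive only when dist(x, ∂D) < 2ε, every stopping position satisfies |x_{τ−1}| > 1 − 2ε, which the test below checks over several runs.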


connected set. Define τ^{ε,x0,σI,σII}: Ω → N ∪ {∞} in:

τ^{x0,σI,σII}( (w1, a1, b1), (w2, a2, b2), . . . ) = min{ n ≥ 1;  bn > dε(xn−1) }.    (6.22)

As before, we drop the superscripts and write τ instead of τ^{x0,σI,σII} if there is no ambiguity. Our game is thus terminated, with probability 1 − dε(xn−1), whenever the position xn−1 reaches the 2ε-neighbourhood of ∂D.

Lemma 6.8 If the scaling factors ρp, γp > 0 in (6.1) satisfy:

ρp ≤ 1 and γpρp > 1,    or    ρp ≥ 1 and γp > 1,    (6.23)

then τ is a stopping time relative to the filtration {Fn}∞n=0, namely: P(τ < ∞) = 1. Further, for any p ∈ (1, ∞) there exist positive ρp, γp with (6.1) and (6.23).

Proof Let ρp ≤ 1 and γpρp > 1. Then, for some β > 0, there also holds: γp(ρp − β) > 1. Define an open set of "advancing random shifts":

Dadv = { w ∈ B1(0);  ⟨w, e1⟩ > 1 − β }.

For every σ ∈ B1(0) and every w ∈ Dadv we have:

⟨ γp( w + (ρp − 1)⟨w, σ⟩σ ), e1 ⟩ ≥ γp( ⟨w, e1⟩ + ρp − 1 ) > γp(ρp − β).

Since D is bounded, the above estimate implies the existence of n ≥ 1 (depending on ε) such that for all x0 ∈ D and all deterministic shifts {σ^i ∈ B1(0)}ni=1 there holds:

x0 + ε Σni=1 ( σ^i + γp wi + γp(ρp − 1)⟨wi, σ^i⟩σ^i ) ∉ D    for all {wi ∈ Dadv}ni=1.

In conclusion:

P(τ ≤ n) ≥ Pn( (Dadv × {1, 2} × (0, 1))^n ) = ( |Dadv| / |B1(0)| )^n = η > 0,

and so: P(τ > kn) ≤ (1 − η)^k for all k ∈ N, yielding: P(τ = ∞) = lim_{k→∞} P(τ > kn) = 0.


The proof proceeds similarly when ρp ≥ 1 and γp > 1. Fix β̄ > 0 such that γp(1 − β̄) > 1 and define Dadv as before, for a small 0 < β ≪ β̄, ensuring that:

⟨ γp( w + (ρp − 1)⟨w, σ⟩σ ), e1 ⟩ ≥ γp( ⟨w, e1⟩ − (ρp − 1) · 2β ) > γp(1 − β̄),

for every σ ∈ B1(0) and every w ∈ Dadv. Again, after at most diam(D) / ( ε(γp(1 − β̄) − 1) ) advancing shifts, the token will leave D (unless it is stopped earlier) and the game will be terminated.

It remains to prove the existence of γp, ρp > 0 satisfying (6.1) and (6.23). We observe that the viability of ρp ≤ 1, γpρp > 1 is equivalent to:

1/γp² < p − 1 − (N + 2)/γp² ≤ 1,

and further to:

(p − 2)/(N + 2) ≤ 1/γp² < (p − 1)/(N + 3),

which admits a solution γp (with the induced ρp) in particular for every p ∈ (1, 2], since then (p − 2)/(N + 2) ≤ 0 < (p − 1)/(N + 3). Similarly, the viability of ρp ≥ 1, γp > 1 is equivalent to: γp² > 1 and p − 1 − (N + 2)/γp² ≥ 1, that is:

1/γp² < min{ 1, (p − 2)/(N + 2) },

implying the existence of γp, ρp for p > 2. □
 

1. From now on, we will work under the additional requirement (6.23). In our "game", the first "player" collects from his opponent the payoff given by the data F at the stopping position. The incentive of the collecting "player" to maximize the outcome and of the disbursing "player" to minimize it leads to the definition of the two game values below. Let F: RN → R be a bounded, Borel function. Then we have:

uεI(x0) = sup_{σI} inf_{σII} E[ F ∘ (X^{x0,σI,σII})_{τ^{x0,σI,σII} − 1} ],
uεII(x0) = inf_{σII} sup_{σI} E[ F ∘ (X^{x0,σI,σII})_{τ^{x0,σI,σII} − 1} ].    (6.24)

The main result in Theorem 6.9 below will show that uεI = uεII coincide with the unique solution to the dynamic programming principle in Sect. 6.2, modelled on the expansion (6.2). It is also clear that uεI,II depend only on the values of F in an O(ε)-neighbourhood of ∂D. In Sect. 6.4 we will prove that, as ε → 0, the uniform limit of uεI,II, which depends only on the continuous F|∂D, is p-harmonic in D and attains F on ∂D, provided that ∂D is regular.
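The existence part of Lemma 6.8 is constructive: admissible scaling factors can be computed explicitly. A small sketch (ours) picks γp, ρp following the case split in that proof:

```python
import math

def scaling_factors(p, N):
    """Returns (gamma_p, rho_p) satisfying (6.1): (N+2)/gamma^2 + rho^2 = p - 1,
    together with (6.23), following the case split in the proof of Lemma 6.8:
    for p <= 2 use the first regime (rho <= 1, gamma*rho > 1), taking 1/gamma^2
    in the open interval (max(0,(p-2)/(N+2)), (p-1)/(N+3)); for p > 2 use the
    second regime (rho >= 1, gamma > 1), taking 1/gamma^2 below min(1,(p-2)/(N+2))."""
    if p <= 2:
        lo, hi = max(0.0, (p - 2.0) / (N + 2.0)), (p - 1.0) / (N + 3.0)
    else:
        lo, hi = 0.0, min(1.0, (p - 2.0) / (N + 2.0))
    inv_g2 = 0.5 * (lo + hi)          # any interior point of the interval works
    gamma = 1.0 / math.sqrt(inv_g2)
    rho = math.sqrt(p - 1.0 - (N + 2.0) * inv_g2)
    return gamma, rho
```

For instance, p = 2 and N = 2 give γ = √10 with ρ = √0.6 (first regime), while p = 5 and γ = 2 fall into the second regime.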


Theorem 6.9 For every ε ∈ (0, 1), let uεI, uεII be as in (6.24) and uε as in Theorem 6.5. Then: uεI = uε = uεII.

Proof

1. The proof is as that of Theorem 4.12, with only technical modifications. We drop the sub/superscript ε to ease the notation. To show that uII ≤ u, fix x0 ∈ RN and η > 0. We first observe that there exists a strategy σ0,II, where σ0,II^n(hn) = σ0,II^n(xn) satisfies for every n ≥ 0 and hn ∈ Hn:

fu( xn + εσ0,II^n(xn); xn, ε ) ≤ inf_{z∈B1(0)} fu( xn + εz; xn, ε ) + η/2^{n+1}.    (6.25)

Indeed, using the continuity of (6.3), we note that there exists δ > 0 such that:

| inf_{z∈B1(0)} fu( y + εz; y, ε ) − inf_{z∈B1(0)} fu( ȳ + εz; ȳ, ε ) | < η/2^{n+2}    for all |y − ȳ| < δ.

Let {Bδ(yi)}∞i=1 be a locally finite covering of RN. For each i = 1 . . . ∞, choose zi ∈ B1(0) satisfying:

| inf_{z∈B1(0)} fu( yi + εz; yi, ε ) − fu( yi + εzi; yi, ε ) | < η/2^{n+2}.

Finally, set:

σ0,II^n(y) = zi    for y ∈ Bδ(yi) \ ∪^{i−1}_{j=1} Bδ(yj).

The piecewise constant σ0,II^n is obviously Borel and it satisfies (6.25). An alternative argument to show (6.25) is to invoke Lemma 3.15.

2. Fix a strategy σI and consider the random variables Mn: Ω → R:

η . 2n

∞ Then {Mn }∞ n=0 is a supermartingale with respect to the filtration {Fn }n=0 . Clearly:

      E Mn | Fn−1 = E (u ◦ Xn )1τ >n | Fn−1 + E (F ◦ Xn−1 )1τ =n | Fn−1   η + E (F ◦ Xτ −1 )1τ n = 1τ ≥n 1bn ≤d (xn−1 ) , we get in view of (6.25):     E (u ◦ Xn )1τ >n | Fn−1 = E u ◦ Xn | Fn−1 · d (xn−1 )1τ ≥n ˆ = (u ◦ Xn )1bn ≤d (xn−1 ) dP1 · 1τ ≥n a.s. 1

=

  σ n−1  1 A u; γp , 1 + (ρp − 1)|σIn−1 |2 , In−1 (xn−1 + σIn−1 ) 2 |σ | I



n−1 2 + A u; γp , 1 + (ρp − 1)|σ0,I I| ,

n−1  σ0,I I n−1 |σ0,I I|

 n−1 (xn−1 + σ0,I ) · d (xn−1 )1τ ≥n I

so that:    η E (u ◦ Xn )1τ >n | Fn−1 ≤ S u ◦ Xn−1 + n d (xn−1 )1τ ≥n 2

a.s.

Concluding, by (6.11) the decomposition (6.26) yields:        E Mn | Fn−1 ≤ d (xn−1 ) S u ◦ Xn−1 + (1 − d (xn−1 )) F ◦ Xn−1 1τ ≥n + (F ◦ Xτ −1 )1τ ≤n−1 +

η = Mn−1 2n−1

a.s.

3. The supermartingale property of {Mn }∞ n=0 being established, we get:       η u(x0 ) + η = E M0 ≥ E Mτ = E F ◦ Xτ −1 + τ . 2   Thus: uI I (x0 ) ≤ supσI E F ◦ (XσI ,σI I,0 )τ −1 ≤ u(x0 ) + η. As η > 0 was arbitrary, we obtain the claimed comparison uI I (x0 ) ≤ u(x0 ). For the reverse inequality u(x0 ) ≤ uI (x0 ) we use a symmetric argument, with an almostmaximizing strategy σ0,I and the submartingale M¯ n = (u ◦ Xn )1τ >n + (F ◦ Xτ −1 )1τ ≤n − 2ηn along a given yet arbitrary strategy σI I . The obvious estimate uI (x0 ) ≤ uI I (x0 ) ends the proof.  


6.4 Sufficient Conditions for Game-Regularity: Exterior Corkscrew Condition

In this section we carry out the program described in Chap. 5. Most of the results and proofs are the same as in the context of the averaging principle (3.14) with p > 2, hence we only indicate a few necessary modifications. The reader familiar with our previous discussions should have no problem in filling in the details. Towards checking the convergence of the family {uε}ε→0, we first observe that its equicontinuity is implied by the equicontinuity "at ∂D". This last property will be, in turn, implied by the "game-regularity" condition (6.28) below.

Lemma 6.10 Let D ⊂ RN be an open, bounded, connected domain and let F ∈ C(RN) be a bounded, continuous data function. Assume that for every η > 0 there exist δ > 0 and ε̂ ∈ (0, 1) such that for all ε ∈ (0, ε̂) there holds:

|uε(y0) − uε(x0)| ≤ η    for all y0 ∈ D, x0 ∈ ∂D satisfying |x0 − y0| ≤ δ.    (6.27)

Then the family {uε}ε→0 of solutions to (6.11) is equicontinuous in D̄.

The proof is left as an exercise (Exercise 6.19), since it essentially follows the analytical proof of Theorem 5.3. As in Chap. 5, we say that a point y0 ∈ ∂D is game-regular if, whenever the game starts near y0, one of the players has a strategy for making the game terminate still near y0, with high probability.

Definition 6.11 Fix p ∈ (1, ∞), ε ∈ (0, 1) and γp, ρp as in (6.1), (6.23). Consider the Tug-of-War game with noise (6.21), as defined in Sect. 6.3. Let D ⊂ RN be open, bounded and connected.

(a) We say that a point y0 ∈ ∂D is game-regular if for every η, δ > 0 there exist δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1) such that the following holds. Fix ε ∈ (0, ε̂) and x0 ∈ Bδ̂(y0); there exists then a strategy σ0,I with the property that for every strategy σII we have:

P( (X^{x0,σ0,I,σII})_{τ−1} ∈ Bδ(y0) ) ≥ 1 − η,    (6.28)

where τ = τ^{x0,σ0,I,σII} is the stopping time in (6.22).

(b) We say that D is game-regular if every boundary point y0 ∈ ∂D is game-regular.

Observe that if condition (b) above holds, then δ̂ and ε̂ in part (a) can be chosen independently of y0. Also, game-regularity is symmetric in σI and σII.


Theorem 6.12 Let D, p, γp, ρp be as in Definition 6.11.
(i) Assume that for every bounded data F ∈ C(RN), the family of solutions {uε}ε→0 to (6.11) is equicontinuous in D̄. Then D is game-regular.
(ii) Conversely, if D is game-regular, then {uε}ε→0 satisfies (6.27), and hence it is equicontinuous in virtue of Lemma 6.10, for every bounded and continuous data F ∈ C(RN).

The proof is again verbatim the same as the proof of Theorems 5.7 and 5.8. As a consequence, we easily get:

Theorem 6.13 Let F ∈ C(RN) be a bounded data function and let D be open, bounded and game-regular with respect to p, γp, ρp as in Definition 6.11. Then the family {uε}ε→0 of solutions to (6.11) converges uniformly in D̄ to the unique viscosity solution of (6.29).

Proof By Theorem 6.12 (ii) and the Ascoli–Arzelà theorem, every subsequence of {u_ε}_{ε→0} contains a further subsequence {u_ε}_{ε∈J} that converges uniformly as ε → 0, ε ∈ J, to some u ∈ C(D̄). As in the proof of Theorem 5.2, it follows that u is a viscosity solution to:

Δ_p u = 0 in D,     u = F on ∂D,     (6.29)

according to Definition 5.1. Indeed, fix x0 ∈ D and let φ be a test function as in (5.2). As in the previous case, there exists a sequence {x_ε}_{ε∈J} ⊂ D such that:

lim_{ε→0, ε∈J} x_ε = x0   and   u_ε(x_ε) − φ(x_ε) = min_{D̄} (u_ε − φ).

Consequently: φ(x) ≤ u_ε(x) + ( φ(x_ε) − u_ε(x_ε) ) for all x ∈ D̄, and further:

S_ε φ(x_ε) − φ(x_ε) ≤ S_ε u_ε(x_ε) + φ(x_ε) − u_ε(x_ε) − φ(x_ε) = 0,     (6.30)

for all ε sufficiently small. On the other hand, (6.2) yields:

S_ε φ(x_ε) − φ(x_ε) = (ε²/(p−1)) |∇φ(x_ε)|^{2−p} Δ_p φ(x_ε) + o(ε²),

for ε small enough to guarantee ∇φ(x_ε) ≠ 0. Combining the above with (6.30) gives: Δ_p φ(x_ε) ≤ o(1).

6 Mixed Tug-of-War with Noise: Case p ∈ (1, ∞)


Passing to the limit with ε → 0, ε ∈ J, establishes the desired inequality Δ_p φ(x0) ≤ 0 and proves part (i) of Definition 5.1. The verification of part (ii) is done along the same lines. The proof is now complete, in virtue of the uniqueness of viscosity solutions in Corollary C.63. □

Definition 6.14 We say that a given boundary point y0 ∈ ∂D satisfies the exterior corkscrew condition provided that there exists μ ∈ (0, 1) such that for all sufficiently small r > 0 there exists a ball B_{μr}(x) with:

B_{μr}(x) ⊂ B_r(y0) \ D̄.

The main result of this section is an improvement of the sufficiency, stated in Theorem 5.15, of the exterior cone condition for game-regularity:

Theorem 6.15 Assume that D, p, γp, ρp are as in Definition 6.11. If y0 ∈ ∂D satisfies the exterior corkscrew condition, then y0 is game-regular.

The proof is based on the technique of concatenating strategies, the analysis of the annulus walk, and the equivalent notion of game-regularity stated below. The indicated results are the counterparts of Theorems 5.9, 5.11 and Corollary 5.12, in the present setting of the mixed averaging principle (6.2).

Theorem 6.16 In the context of Definition 6.11, for a given y0 ∈ ∂D, assume that there exists θ0 ∈ (0, 1) such that for every δ > 0 there exist δ̂ ∈ (0, δ) and ε̂ ∈ (0, 1) with the following property. Fix ε ∈ (0, ε̂) and choose an initial position x0 ∈ B_δ̂(y0) ∩ D; there is a strategy σ0,II such that for every σI we have:

P( ∃n < τ^{x0,σI,σ0,II} : X_n^{x0,σI,σ0,II} ∉ B_δ(y0) ) ≤ θ0.     (6.31)

Then y0 is game-regular.

Theorem 6.17 For given radii 0 < R1 < R2 < R3, consider the annulus D̃ = B_{R3}(0) \ B̄_{R1}(0) ⊂ RN. For every ξ > 0, there exists ε̂ ∈ (0, 1), depending on R1, R2, R3 and ξ, p, N, such that for every x0 ∈ D̃ ∩ B_{R2}(0) and every ε ∈ (0, ε̂), there exists a strategy σ̃0,II with the property that for every σ̃I there holds:

P( X̃_{τ̃−1} ∉ B̄_{R3−2ε}(0) ) ≤ ( v(R2) − v(R1) ) / ( v(R3) − v(R1) ) + ξ.     (6.32)

Here, v : (0, ∞) → R is given by:

v(t) = sgn(p − N) · t^{(p−N)/(p−1)}   for p ≠ N,
v(t) = log t                          for p = N.     (6.33)


Fig. 6.5 Positions of the concentric balls B(y0 ) and B(x0 ) in the proof of Theorem 6.15

Here {X̃_n = X̃_n^{x0,σ̃I,σ̃0,II}}_{n=0}^∞ and τ̃ = τ̃^{x0,σ̃I,σ̃0,II} denote, as before, the random variables corresponding to positions and stopping time in the random Tug-of-War game on D̃. The estimate (6.32) can be replaced by:

P( X̃_{τ̃−1} ∉ B̄_{R3−2ε}(0) ) ≤ θ0,     (6.34)

valid for any θ0 > 1 − (R2/R1)^{(p−N)/(p−1)} if p ∈ (1, N), and for any θ0 > 0 if p ≥ N, upon choosing R3 sufficiently large with respect to R1 and R2. The bounds (6.32) and (6.34) remain true if we replace R1, R2, R3 by rR1, rR2, rR3, the domain D̃ by rD̃ and ε̂ by rε̂, for any r > 0.

Proof of Theorem 6.15 With the help of Theorem 6.17, we will show that the assumption of Theorem 6.16 is satisfied, with probability θ0 < 1 depending only on p, N and μ ∈ (0, 1) in Definition 6.14. Namely, set R1 = 1, R2 = 2/μ and R3 > R2 according to (6.34), in order to have θ0 = θ0(p, N, R1, R2) < 1. Further, set r = δ/(2R3), so that rR2 = δ/(μR3). Using the corkscrew condition, we obtain:

B_{2rR1}(x) ⊂ B_{δ/(μR3)}(y0) \ D̄,

for some x ∈ RN (Fig. 6.5). In particular: |x − y0| < rR2, so y0 ∈ B_{rR2}(x) \ B̄_{2rR1}(x). It now easily follows that there exists δ̂ ∈ (0, δ) with the property that:

B_δ̂(y0) ⊂ B_{rR2}(x) \ B̄_{2rR1}(x).


Finally, observe that B_{rR3}(x) ⊂ B_δ(y0), as rR3 + |x − y0| < rR3 + rR2 < 2rR3 = δ. Let ε̂/r > 0 be as in Theorem 6.17, applied to the annuli with radii R1, R2, R3. For a given x0 ∈ B_δ̂(y0) and ε ∈ (0, ε̂), let σ̃0,II be the strategy ensuring validity of the bound (6.34) in the annulus walk on x + rD̃. For a given σI there holds:

{ ω ∈ Ω; ∃n < τ^{x0,σI,σ0,II}(ω) : X_n^{x0,σI,σ0,II}(ω) ∉ B_δ(y0) } ⊂ { ω ∈ Ω; X̃_{τ̃−1}^{x0,σ̃I,σ̃0,II}(ω) ∉ B̄_{rR3−2ε}(x) }.

The final claim follows by (6.34) and by applying Theorem 6.16. □

Remark 6.18 With a bit more analysis, one can show that every open, bounded domain D ⊂ RN is game-regular for p > N. The proof mimics the argument of Sect. 5.6 for the process based on the mean value expansion (6.9), so we omit it.

Exercise 6.19
(i) Work out the details of the proof of Lemma 6.10, based on the proof of Theorem 5.3.
(ii) Work out the proofs of Theorems 6.12, 6.16, 6.17, adjusting the appropriate arguments in Chap. 5.
(iii) Give an example of an open, bounded set in RN that satisfies the exterior corkscrew condition but does not satisfy the exterior cone condition.

Exercise 6.20 * Follow the outline of the proof of Theorem 3.21 to show that at p = 2, the condition of game-regularity in Definition 6.11 (a) is equivalent to Doob's regularity (3.44).
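Although the text carries no numerics, the explicit profile v in (6.33) makes the annulus estimate of Theorem 6.17 easy to explore by direct computation. The sketch below is our own illustration (the values of p, N and of the radii are arbitrary choices, not taken from the text): it evaluates the right-hand side of (6.32) without the ξ-term, and checks the claim accompanying (6.34), namely that for p ∈ (1, N) the bound decreases, as R3 grows, towards the threshold 1 − (R2/R1)^{(p−N)/(p−1)}.

```python
import math

def v(t, p, N):
    # The radial profile (6.33): sgn(p - N) * t^((p-N)/(p-1)) for p != N, log t for p = N.
    if p == N:
        return math.log(t)
    return math.copysign(1.0, p - N) * t ** ((p - N) / (p - 1))

def annulus_bound(R1, R2, R3, p, N):
    # The ratio (v(R2) - v(R1)) / (v(R3) - v(R1)) from (6.32), without the xi-term.
    return (v(R2, p, N) - v(R1, p, N)) / (v(R3, p, N) - v(R1, p, N))

p, N = 1.5, 2                # a sample exponent p in (1, N), in dimension N = 2
R1, R2 = 1.0, 3.0
limit = 1 - (R2 / R1) ** ((p - N) / (p - 1))    # the threshold appearing in (6.34)

bounds = [annulus_bound(R1, R2, R3, p, N) for R3 in (10.0, 100.0, 1000.0)]
# As R3 grows, the bound decreases towards the limiting threshold from above.
assert bounds[0] > bounds[1] > bounds[2] > limit
assert abs(bounds[2] - limit) < 0.05
print(bounds, limit)
```

For p ≥ N the same ratio tends to 0 as R3 → ∞, matching the statement that any θ0 > 0 is then admissible.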

6.5 Sufficient Conditions for Game-Regularity: Simply Connectedness in Dimension N = 2

In this section we derive a new sufficient condition for game-regularity, namely: all simply connected domains are game-regular when N = 2. The idea of the proof below is based on the observation that each player has a strategy to keep the token trajectory close to a given (polygonal) curve in D, with positive probability, and regardless of the opponent's strategy. Given a boundary point y0, we then consider a surrounding path as in Fig. 6.6, where the origin plays the role of y0 and where the whole diagram gets scaled by a chosen δ > 0. We use a topological argument: since such a line must intersect ∂D at a point different from y0, it follows that the token remains in the ball Bδ(y0) until the stopping time, as requested in condition (6.31), which ensures game-regularity of y0.


Fig. 6.6 The directed polygonal line 0ABCDE and the area of location of the consecutive game positions Xn in the proof of Theorem 6.21

Theorem 6.21 Let D ⊂ R2 be open, bounded, connected and simply connected. Assume that p ∈ (1, ∞) and γp , ρp satisfy (6.1) and (6.23) with respect to N = 2. Then D is game-regular.

Proof 1. By applying Theorem 6.17 to R1 = 1/2 and R2 = 2, we conclude that there are R3 > 2, η > 0 and ε̄ > 0 such that for every x0 ∈ B1(−e1) and every ε ∈ (0, ε̄) there exists σ̃0,II with the property that for every σ̃I there holds:

P( min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∈ B_{1/2+2ε}(0)} < min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∉ B̄_{R3−2ε}(0)} ) ≥ η.

Recall that the planar process {X_n}_{n=0}^∞ above is defined in (6.21). Given r > 0, this implies that for every x0 ∈ B_r(−re1) and every ε ∈ (0, rε̄):

P( min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∈ B_r(0)} < min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∉ B̄_{R3r}(0)} ) ≥ η.     (6.35)

2. Consider the polygonal line 0ABCDE with vertices: 0 = (0, 0), A = (0, −1), B = (1/2, √2 − 1), C = (−1/2, √2 − 1), D = (−1/2, √2 − 2), E = (1/2, √2 − 2), depicted in Fig. 6.6. Let n be such that:

R3 / 2^n < (1/2) · min_{z ∈ ABCDE} |z|.


Call r = 1/2^n and denote by {z_i}_{i=0}^{11·2^{n−1}} the consecutive points along the oriented polygonal line 0ABCDE, starting at z0 = 0 and distanced r apart from each other. Since the segments in 0ABCDE have the respective lengths 1, 3/2, 1, 1, 1, it follows that:

z_{2^n} = A,   z_{5·2^{n−1}} = B,   z_{7·2^{n−1}} = C,   z_{9·2^{n−1}} = D,   z_{11·2^{n−1}} = E.

Fix x0 ∈ 0A and ε ∈ (0, rε̄). Clearly x0 ∈ B_r(z_i) for some i = 0 . . . 2^n. We now define the strategy σ0,II such that for every σI there holds:

p̄ := P( ∃n ≥ 0 : X_n^{x0,σI,σ0,II} ∈ B_r(E) and the oriented polygonal line x0X1X2 . . . Xn passes through all the balls {B_r(z_j)}_{j=i}^{11·2^{n−1}}, while staying in the R3r-neighbourhood of 0ABCDE ) ≥ η^{11·2^{n−1}}.     (6.36)

The condition on the oriented polygonal x0X1X2 . . . Xn in (6.36) implies that it surrounds 0 and intersects itself, while remaining within B_{3/2}(0). Define the stopping times:

τ0 = 0,   τj = min{k ≥ τ_{j−1}; X_k ∈ B_r(z_j)}   for all j = 1 . . . 11·2^{n−1}.

Also, for all j as above, let fj : R² → R² denote the rigid motions fj(x) = rRj x + zj, where Rj ∈ SO(2) satisfies Rj e1 = (1/r)(zj − z_{j−1}). Then we put:

σ0,II^k( x0, (x1, w1, a1, b1), . . . , (xk, wk, ak, bk) )
  = fj ∘ σ̃0,II^{k−τ_{j−1}}( fj^{−1}(x_{τ_{j−1}}), (fj^{−1}(x_{τ_{j−1}+1}), Rj^{−1}w1, a1, b1), . . . , (fj^{−1}(xk), Rj^{−1}wk, ak, bk) )

for all k ∈ [τ_{j−1}, τj) and all j = i+1, . . . , 11·2^{n−1}, where the strategy σ̃0,II satisfies (6.35). Consequently:

P( min{n ≥ 0; X_n^{x0,σI,σ0,II} ∈ B_r(z_{i+1})} < min{n ≥ 0; X_n^{x0,σI,σ0,II} ∉ B̄_{R3r}(z_{i+1})} ) ≥ η,



and further, for all j = i+1, . . . , 11·2^{n−1} − 1:

P( ∃n ≥ τj : X_n^{x0,σI,σ0,II} ∈ B_r(z_{j+1}) and ∀k < n ∃i ≤ s ≤ j+1 : X_k^{x0,σI,σ0,II} ∈ B_{R3r}(z_s) )
  ≥ η · P( ∃n ≥ τ_{j−1} : X_n^{x0,σI,σ0,II} ∈ B_r(z_j) and ∀k < n ∃i ≤ s ≤ j : X_k^{x0,σI,σ0,II} ∈ B_{R3r}(z_s) ),     (6.37)

which follows by an application of Lemma A.21, as in the proof of Theorem 5.9 (see Exercise 6.24). By induction, we hence derive (6.36):

p̄ ≥ P( ∀ 0 ≤ k ≤ τ_{11·2^{n−1}} ∃i ≤ s ≤ 11·2^{n−1} : X_k^{x0,σI,σ0,II} ∈ B_{R3r}(z_s) ) ≥ η^{11·2^{n−1}}.

3. Call θ̄0 = 1 − η^{11·2^{n−1}}. Let y0 ∈ ∂D and let δ > 0. We claim that (6.31) is valid with some universal threshold θ0 < 1 and:

δ̂ = δ/2   and   ε̂ = (δε̄)/2 ∧ δ/( 8(1 + γp(1∨ρp)) ).

Indeed, fix ε ∈ (0, ε̂) and choose an initial position x0 ∈ B_δ̂(y0) ∩ D. By performing an orthogonal change of variables, we may without loss of generality assume that y0 = 0 and x0 ∈ (δ/2)·0A. Let σ0,II be as constructed in Step 1. Then:

P( ∃n ≥ 0 : X_n^{x0,σI,σ0,II} ∈ B_{(δ/2)r}((δ/2)E) and the oriented polygonal line x0X1X2 . . . Xn surrounds 0, intersects itself, and stays in B_{3δ/4}(0) ) ≥ 1 − θ̄0,

in virtue of (6.36). Since D is simply connected, the aforementioned path must then cross ∂D, which yields:

P( min{n ≥ 0; X_n^{x0,σI,σ0,II} ∈ ∂D + B_{2(1+γp(1∨ρp))ε}(0)} < min{n ≥ 0; X_n^{x0,σI,σ0,II} ∉ B̄_{3δ/4}(0)} ) ≥ 1 − θ̄0.     (6.38)

We now note the following simple counterpart of Lemma 5.19:



Lemma 6.22 Let R > 1. There exists p̄0 > 0, depending on p, γp, ρp, R, such that for every x0 ∈ B_R(0) there exists σ̃0,II with the property that for every σ̃I we have:

P( min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∈ B_{ε/2}(0)} < min{n ≥ 0; X_n^{x0,σ̃I,σ̃0,II} ∉ B̄_R(0)} ) ≥ p̄0.

Proof Without loss of generality, we may assume that x0 = |x0|e1. Define the constant strategy σ̃0,II ≡ σ = −(ε/3)e1, together with the set of "neutral" random outcomes:

D_neu = B_s(0) ⊂ B1(0),   s ≤ 1/( 12⌈4R⌉ γp(ρp + 1) ).

Observe that for all w ∈ D_neu there holds:

⟨σ + εγp( w + (ρp − 1)⟨w, σ⟩σ ), e1⟩ ∈ (−ε/2, −ε/4),
|⟨σ + εγp( w + (ρp − 1)⟨w, σ⟩σ ), e2⟩| < ε/(12⌈4R⌉).

Thus it takes some n ≤ ⌈4R⌉ moves by any vectors as above to achieve |⟨X_n, e1⟩| < ε/4, while also n · ε/(12⌈4R⌉) < ε/4. Consequently:

∃n ≤ ⌈4R⌉ :   x0 + Σ_{i=1}^n ( σ + εγp( wi + (ρp − 1)⟨wi, σ⟩σ ) ) ∈ B_{ε/2}(0),

for any w1, . . . , wn ∈ D_neu. Hence we arrive at:

P( min{n ≥ 0; X_n ∈ B_{ε/2}(0)} < min{n ≥ 0; X_n ∉ B̄_R(0)} ) ≥ ( P( D_neu × {2} × (0, 1) ) )^{⌈4R⌉} =: p̄0,

as claimed. □

We now complete the proof of Theorem 6.21. Applying Lemma 6.22 to R = 1 + γp max{1, ρp}, we see that for all x0 ∈ D with dist(x0, ∂D) < Rε there exists σ̃0,II such that for every σ̃I there holds:

P( ∀n ≤ τ^{x0,σ̃I,σ̃0,II} : X_n^{x0,σ̃I,σ̃0,II} ∈ B_{4(1+γp max{1,ρp})ε}(x0) ) ≥ p̄0/2,


with some p̄0 > 0 that depends only on p, γp, ρp. Recalling (6.38), we obtain:

P( ∀n ≤ τ^{x0,σI,σ0,II} : X_n^{x0,σI,σ0,II} ∈ B_δ(0) ) ≥ (1 − θ̄0) · p̄0/2.

This yields (6.31) in Theorem 6.16 with θ0 = 1 − (1/2)(1 − θ̄0)p̄0. □


Remark 6.23 By the same argument as in the proof above, one can see the following. Let D ⊂ R² be open, bounded, connected and let y0 ∈ ∂D satisfy the exterior curve condition, i.e. there exists a continuous curve contained in R² \ D, of positive length and with one endpoint at y0. Then y0 is walk-regular.

Exercise 6.24 Provide all details of the proof of the inductive formula (6.37).

Exercise 6.25 An open, bounded, simply connected set D ⊂ R² does not have to satisfy the exterior curve condition of Remark 6.23. Consider K0 ⊂ R² that is the union of the vertical segment {1} × [−1, 1] with the graph of the function sin( π/(x−1) ) over the interval (1, 2]. Define:

K = {(0, 0)} ∪ ∪_{n=2}^∞ (1/2^n) K0,   D = (−1, 1)² \ K.

Show that the open set D ⊂ R² is bounded and simply connected, but (0, 0) is a boundary point of D without an exterior curve.

6.6 Bibliographical Notes

The construction in this chapter is mostly taken from Lewicka (2018). The fundamental paper by Peres and Sheffield (2008) developed the dynamic programming principle and the Tug-of-War game valid in the whole exponent range p ∈ (1, ∞), based on an averaging expansion different from our (6.2). In particular, since it involved averaging on sets of codimension 2, the regularity of the game values, as well as the coincidence of the upper and lower values, were not clear. In Arroyo et al. (2017) and Hartikainen (2016), yet another dynamic programming principle (6.10), still sampling on sets of measure zero, has been analysed; convergence of its solutions in domains satisfying the exterior corkscrew condition has been shown in Arroyo et al. (2018). The proof in Sect. 6.5 is an adaptation of the argument in Peres and Sheffield (2008) to the mixed averaging principle and the resulting Tug-of-War game of Sect. 6.3. The present set-up has the advantage of utilizing the full N-dimensional sampling on ellipses, rather than on spheres. This implies that the solutions of the dynamic programming principle at each scale ε > 0 are unique, continuous and coincide with the well-defined game values; much like in the linear p = 2 case, where the N-dimensional averaging guarantees smoothness of harmonic functions.

Appendix A

Background in Probability

In this chapter we recall definitions and statements on the following chosen topics in probability: probability and measurable spaces, random variables, product spaces, conditional expectation, independence, martingales in discrete time, stopping times, Doob's optional stopping theorem and convergence of martingales. We limit ourselves to the material that is necessary to carry out the constructions of the ball walk and of the Tug-of-War discrete processes. This material may be found in any textbook on probability, for example in: Williams (1991); Durrett (2010); Kallenberg (2002); Dudley (2004).

A.1 Probability and Measurable Spaces

Definition A.1 Let Ω be a given set, endowed with a σ-algebra F of its subsets and a probability measure P on F. We call (Ω, F) a measurable space and (Ω, F, P) a probability space. Recall that the following are required for F ⊂ 2^Ω to be a σ-algebra:
(i) ∅ ∈ F.
(ii) If A ∈ F, then Ω \ A ∈ F.
(iii) If {Ai}_{i=1}^∞ is a sequence of sets Ai ∈ F, then ∪_{i=1}^∞ Ai ∈ F.

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020 M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3



A measure on a measurable space (Ω, F) is, by definition, a nonnegative and countably additive set function μ : F → R̄+ = [0, +∞], that is:
(i) 0 = μ(∅) ≤ μ(A), for all A ∈ F.
(ii) If the sets in {Ai ∈ F}_{i=1}^∞ are pairwise disjoint, then: μ( ∪_{i=1}^∞ Ai ) = Σ_{i=1}^∞ μ(Ai).
A probability measure μ (usually denoted by P) is a measure such that:
(iii) μ(Ω) = 1.
More generally, we shall deal with σ-finite measures, which means that:
(iii)' Ω = ∪_{i=1}^∞ Ai, with Ai ∈ F and μ(Ai) < +∞ for each i ∈ N.

One primary example of a measurable space consists of a Borel set Ω ⊂ RN and the σ-algebra F = B(Ω) of all Borel subsets of Ω. Recall that the Borel σ-algebra B(RN) is, by definition, the smallest σ-algebra containing all the open N-dimensional sets, whereas B(Ω) = {A ∩ Ω; A ∈ B(RN)}. We further have the following examples of a probability space (Ω, F, P):

Example A.2
(i) (Uniform probability measure on a finite set). We take: Ω = {1, 2, . . . , n}, F = 2^Ω and P(A) = |A|/n, where |A| denotes the number of elements in the finite set A ⊂ Ω.
(ii) (Discrete probability measure). Let Ω = N, F = 2^Ω and let Σ_{i=1}^∞ pi be a nonnegative series, summable to 1. Then we define: P(A) = Σ_{i∈A} pi.
(iii) (Normalized Lebesgue measure on a ball). We set Ω = B_ε(0) ⊂ RN, F = B(Ω) and P(A) = |A| / (ε^N V_N), where |A| denotes the N-dimensional Lebesgue measure (i.e. the volume) of A ∈ B(Ω). We write V_N = |B1(0)| for the volume of the unit ball B1(0) in RN.
(iv) (Probability measure with a density). Here, Ω ⊂ RN is a given Lebesgue (or Borel) measurable set and F consists of all its Lebesgue (or Borel) measurable subsets. We define:

P(A) = ∫_A f(x) dx,     (A.1)

where f : Ω → R is a given nonnegative Lebesgue (or Borel) measurable function with ∫_Ω f(x) dx = 1. We call f the density of P; observe that f with property (A.1) is unique up to modifications on sets of P-measure 0.

An element ω ∈ Ω is called an outcome and a set A ∈ F is called an event. Throughout the book, we use the probability theory convention and suppress the notion of outcomes when no ambiguity arises. Accordingly, for a function X : Ω → R and r ∈ R, we write {X ≤ r} instead of {ω ∈ Ω; X(ω) ≤ r}, or A ∩ {X = 0} instead of {ω ∈ A; X(ω) = 0}, etc.


We also adopt another convention: when a certain property, for example X(ω) ≤ r, is satisfied on a set of full measure: P({X ≤ r}) = 1, we write: X ≤ r a.s. (to be read: almost surely), or X ≤ r P-a.s. in case there is a need to specify the probability measure P. Only in case of possible ambiguity do we write: Y(ω) ≤ r for P-a.e. ω (to be read: P-almost every ω). To avoid repeated parentheses, we often replace P({X ≤ r}) by P(X ≤ r), unless further clarification is needed.

One can construct a probability measure on a σ-algebra as an extension of a given premeasure on an algebra. This is an important technique, used to introduce the Lebesgue measure on Ω = RN, as well as the countable product of probability spaces, explained in Sect. A.3. Recall that A ⊂ 2^Ω is an algebra if:
(i) ∅ ∈ A.
(ii) If A ∈ A, then Ω \ A ∈ A.
(iii) If A1, A2 ∈ A, then A1 ∪ A2 ∈ A.

A premeasure on an algebra A is a set function μ0 : A → R̄+, such that:
(i) 0 = μ0(∅) ≤ μ0(A) for all A ∈ A.
(ii) If {Ai}_{i=1}^∞ is a sequence of pairwise disjoint sets Ai ∈ A such that ∪_{i=1}^∞ Ai ∈ A, then μ0( ∪_{i=1}^∞ Ai ) = Σ_{i=1}^∞ μ0(Ai).

The fact that every premeasure generates a measure is known as the Carathéodory extension theorem:

Theorem A.3 Let μ0 be a premeasure on an algebra A ⊂ 2^Ω. Let F be the smallest σ-algebra of subsets of Ω that contains A. Then there exists a measure μ on F, satisfying:

μ(A) = μ0(A)   for all A ∈ A.

If μ0(Ω) < +∞, then such a measure μ is unique.

A.2 Random Variables and Expectation

Definition A.4 A random variable on a measurable space (Ω, F) is an F-measurable function X : Ω → R̄ := R ∪ {−∞, +∞}. Namely, we have:

{X ≤ r} ∈ F   for all r ∈ R.

The simplest example of a random variable is the indicator of a set A ∈ F:

1_A(ω) = 1 if ω ∈ A,   1_A(ω) = 0 if ω ∈ Ω \ A.


More generally, we also consider random variables X : Ω1 → Ω2 between two measurable spaces (Ω1, F1) and (Ω2, F2). The measurability condition on X is then:

{X ∈ A2} ∈ F1   for all A2 ∈ F2.

Clearly, taking (Ω1, F1) = (Ω, F) and (Ω2, F2) = (R̄, B(R̄)) yields the scalar-valued case of Definition A.4 where, with a slight abuse of notation, B(R̄) stands for the smallest σ-algebra of subsets of R̄ containing all the sets [r, ∞) ∪ {+∞} for r ∈ R. Unless stated otherwise, by "random variable" we always mean an R̄-valued measurable function as in Definition A.4 that is a.s. finite.

Given a random variable X : Ω → R̄ on a probability space (Ω, F, P), there is a standard construction of the integral (also called the expectation in this setting), which proceeds in three steps:

(i) If X = Σ_{i=1}^n αi 1_{Ai} with Ai ∈ F and αi ∈ R for i = 1 . . . n and some n ∈ N, then we call X a simple function and define:

∫_Ω X dP := Σ_{i=1}^n αi P(Ai) ∈ R.

(ii) Every nonnegative random variable X is a nondecreasing pointwise limit of simple functions {Xi}_{i=1}^∞. For such X we define:

∫_Ω X dP := lim_{i→∞} ∫_Ω Xi dP ∈ R̄+.

(iii) When ∫_Ω |X| dP < ∞, we say that X is P-integrable. We then define X+, X− to be the positive and negative parts of X, namely: X = X+ − X− with X+, X− ≥ 0 a.s., and we set:

∫_Ω X dP := ∫_Ω X+ dP − ∫_Ω X− dP ∈ R.

Definition A.5 Given a probability space (Ω, F, P), we denote by L1(Ω, F, P) the linear space of all P-integrable random variables X on Ω. The expectation of X ∈ L1(Ω, F, P) is then:

E[X] := ∫_Ω X dP.
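The three-step construction above can be imitated numerically. In the following sketch (our own illustration of step (ii); the choice Ω = (0, 1) with the Lebesgue measure and X(ω) = ω² is arbitrary), the dyadic simple functions X_i = Σ_k (k/2^i) · 1_{ {k/2^i ≤ X < (k+1)/2^i} } increase pointwise to X, and their integrals increase towards E[X] = 1/3:

```python
def simple_integral(f, i, samples=200_000):
    """Integral of the i-th dyadic simple approximation of a nonnegative
    function f on (0, 1): f is rounded down to the grid of mesh 2**-i,
    and the resulting simple function is integrated (midpoint evaluation)."""
    mesh = 2 ** i
    total = 0.0
    for j in range(samples):
        omega = (j + 0.5) / samples
        total += int(f(omega) * mesh) / mesh
    return total / samples

X = lambda omega: omega ** 2
approx = [simple_integral(X, i) for i in (1, 2, 4, 8)]
# Monotone convergence, as in step (ii) and Theorem A.7 (i): the integrals
# of the simple approximations increase towards E[X] = 1/3.
assert approx[0] <= approx[1] <= approx[2] <= approx[3] <= 1/3 + 1e-9
assert abs(approx[3] - 1/3) < 1e-2
print(approx)
```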

The following fundamental results allow for passing to the limit under the integral sign. Theorem A.6 is usually referred to as Fatou’s lemma, part (i) of Theorem A.7 as the monotone convergence theorem and part (ii) as the Lebesgue dominated convergence theorem.


Theorem A.6 Let {Xi}_{i=1}^∞ be a sequence of a.s. nonnegative random variables on a probability space (Ω, F, P). Then we have:

∫_Ω lim inf_{i→∞} Xi dP ≤ lim inf_{i→∞} ∫_Ω Xi dP.

Theorem A.7 Let {Xi}_{i=1}^∞ be a sequence of random variables on a probability space (Ω, F, P), converging pointwise a.s. to a random variable X. Assume that one of the following properties holds:
(i) For each i ∈ N we have: Xi ≥ 0 and Xi ≤ X_{i+1} a.s.
(ii) There exists an integrable random variable Z ∈ L1(Ω, F, P) such that |Xi| ≤ Z a.s. for every i ∈ N.
Then one can pass to the limit under the integral: lim_{i→∞} ∫_Ω Xi dP = ∫_Ω X dP.

The useful construction of push-forward measures yields the following change of variables formula:

Exercise A.8 Let (Ω1, F1, P1) and (Ω2, F2) be a probability space and a measurable space, respectively. Given a random variable X : Ω1 → Ω2, define:

P2(A) := P1(X ∈ A)   for all A ∈ F2.

Show that (Ω2, F2, P2) is a probability space and that:

∫_{Ω1} Z ∘ X dP1 = ∫_{Ω2} Z dP2,

for every nonnegative or integrable random variable Z : Ω2 → R̄.

Example A.9 Let (Ω1, F1, P1) be the normalized Lebesgue measure on the Borel σ-algebra F1 of subsets of Ω1 = Br(0) ⊂ RN. Let Ω2 = ∂Br(0) and let X : Ω1 → Ω2 be the projection given by X(ω) = r ω/|ω| for all ω ∈ Ω1 \ {0}. The push-forward construction of Exercise A.8 then results in the probability measure P2 on the σ-algebra F2 of Borel subsets of Ω2, namely: F2 = {A ⊂ Ω2; X^{−1}(A) ∈ F1}. We call P2 the normalized spherical measure, whereas the following measure will be called the spherical measure on ∂Br(0):

σ^{N−1} := ( 2π^{N/2} / Γ(N/2) ) r^{N−1} P2,

so that σ^{N−1}(∂Br(0)) = |∂Br(0)| equals the surface area of ∂Br(0). Since |∂Br(0)| = (N/r) |Br(0)|, we obtain the "polar coordinates" integration formula, valid for all u ∈ C(B̄r(0)):

∫_{Br(0)} u(y) dy = ∫_0^r ( ∫_{∂Bs(0)} u(y) dσ^{N−1}(y) ) ds.     (A.2)
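The push-forward identity of Exercise A.8 and the normalized spherical measure of Example A.9 can be checked by simulation. The sketch below is our own (the dimension N = 3, the radius r = 1 and the test function Z(y) = y1² are arbitrary choices): sampling ω uniformly in B1(0) and projecting by X(ω) = ω/|ω| must reproduce integrals against P2, and by symmetry ∫ y1² dP2 = 1/N.

```python
import math, random

random.seed(0)

def uniform_in_ball(N=3):
    # Rejection sampling of the normalized Lebesgue measure on B_1(0) in R^N.
    while True:
        w = [random.uniform(-1.0, 1.0) for _ in range(N)]
        if sum(c * c for c in w) <= 1.0:
            return w

def project(w):
    # The map X(omega) = omega / |omega| of Example A.9, with r = 1.
    norm = math.sqrt(sum(c * c for c in w))
    return [c / norm for c in w]

# Monte Carlo integral of Z(y) = y_1^2 against the push-forward measure P2:
samples = [project(uniform_in_ball()) for _ in range(100_000)]
mc = sum(y[0] ** 2 for y in samples) / len(samples)
assert abs(mc - 1/3) < 0.02      # exact value 1/N = 1/3, by symmetry
print(mc)
```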


Finally, we state a result on the (weak-∗) convergence of probability measures, which is a version of the fundamental Prohorov theorem:

Theorem A.10 Let (Ω, d) be a compact metric space and let {Pi}_{i=1}^∞ be a sequence of probability measures on Ω, equipped with the Borel σ-algebra F. Then there exists a subsequence {P_{ik}}_{k=1}^∞ converging weakly-∗ to some probability measure P on (Ω, F), which means that:

lim_{k→∞} ∫_Ω X dP_{ik} = ∫_Ω X dP   for all X ∈ C(Ω).

A.3 Product Measures

We now recall the notion and construction of a product measurable space and a product measure. Given a finite number of probability spaces {(Ωi, Fi, Pi)}_{i=1}^n, we form the Cartesian product Ω = Π_{i=1}^n Ωi and endow it with the product σ-algebra F, defined as the smallest σ-algebra containing all the product sets A = Π_{i=1}^n Ai, where Ai ∈ Fi for each i = 1, . . . , n. The product measure P on F is then the unique measure such that:

P( Π_{i=1}^n Ai ) = Π_{i=1}^n Pi(Ai).

A fundamental result in this context is the Fubini–Tonelli theorem. For its statement below, and also in the sequel, we adopt the following convention. Let A ∈ F be a set of full measure in the probability space (Ω, F, P), i.e. P(A) = 1. Let Z : A → R̄ be a random variable on the induced probability space (A, F_A, P|_{F_A}) with F_A = {B ∩ A; B ∈ F}. Then we write:

∫_Ω Z dP := ∫_A Z dP|_{F_A},

whenever the integral on the right hand side is well defined.

Theorem A.11 Let (Ω, F, P) be the product of two probability spaces (Ω1, F1, P1) and (Ω2, F2, P2). Let X : Ω → R̄ be an F-measurable random variable which is either P-integrable or nonnegative. Then the function Y : Ω1 → R̄ given by Y(ω1) = ∫_{Ω2} X(ω1, ω2) dP2(ω2) is well defined P1-a.s., it is F1-measurable, and:

∫_Ω X dP = ∫_{Ω1} Y dP1.


In the same manner, when (Ω, F, P) is the product of n ≥ 2 probability spaces, the integral of any P-integrable or nonnegative F-measurable random variable X : Ω → R̄ can be expressed by means of iterated integrals:

∫_Ω X(ω1, . . . , ωn) dP = ∫_{Ω1} · · · ∫_{Ωn} X(ω1, . . . , ωn) dPn(ωn) . . . dP1(ω1).     (A.3)

Given now a sequence of probability spaces {(Ωi, Fi, Pi)}_{i=1}^∞, the parallel construction of the product (Ω, F, P) requires a bit more care. For the measurable space (Ω, F), we let Ω = Π_{i=1}^∞ Ωi consist of all sequences {ωi}_{i=1}^∞ with ωi ∈ Ωi for all i ∈ N, while the σ-algebra F is the smallest σ-algebra of subsets of Ω containing all sets of the form Π_{i=1}^∞ Ai, where An ∈ Fn for some n ∈ N and Ai = Ωi for all other indices i ≠ n. We then have:

Theorem A.12 Let {(Ωi, Fi, Pi)}_{i=1}^∞ be a sequence of probability spaces. There exists a unique probability measure P on the product (Ω, F), such that:

P( Π_{i=1}^∞ Ai ) = Π_{i=1}^∞ Pi(Ai),     (A.4)

for every sequence of sets {Ai}_{i=1}^∞ such that Ai ∈ Fi for each i ∈ N and Ai = Ωi for all but finitely many indices i.

Proof 1. Consider the family generated by the finite "cylinders" of the form:

A := { ∪_{k=1}^m ( Π_{i=1}^n A_{ik} × Π_{i=n+1}^∞ Ωi );  A_{ik} ∈ Fi for all i = 1, . . . , n, k = 1, . . . , m,  n, m ∈ N }.     (A.5)

It is easy to observe that A is an algebra of subsets of Ω and that the smallest σ-algebra containing A coincides with F. Define the set function μ0 on A by:

μ0( F × Π_{i=n+1}^∞ Ωi ) = P̄n(F),     (A.6)

for all sets F of the form F = ∪_{k=1}^m Π_{i=1}^n A_{ik} as in (A.5), where P̄n is the product measure on the product of the n probability spaces {(Ωi, Fi, Pi)}_{i=1}^n. We will now show that μ0 is a premeasure on A; the existence of its unique extension P on F will then follow from Theorem A.3.


To this end, consider a sequence {Ai}_{i=1}^∞ of sets Ai ∈ A that are pairwise disjoint and with ∪_{i=1}^∞ Ai ∈ A. Clearly, for every n ≥ 1 we have:

Σ_{i=1}^n μ0(Ai) = μ0( ∪_{i=1}^n Ai ) ≤ μ0( ∪_{i=1}^∞ Ai ),

where we used (A.6) and the resulting monotonicity of μ0. To show that Σ_{i=1}^∞ μ0(Ai) = lim_{n→∞} Σ_{i=1}^n μ0(Ai) = μ0( ∪_{i=1}^∞ Ai ), it suffices to check:

lim_{n→∞} μ0( ∪_{i=n+1}^∞ Ai ) = 0.

This property will follow from the general fact below, applied to the decreasing family of sets Bn = ∪_{i=n+1}^∞ Ai with empty intersection.

2. Let {Bn}_{n=1}^∞ be a decreasing family of sets Bn ∈ A, such that:

∩_{n=1}^∞ Bn = ∅.     (A.7)

We will show that lim_{n→∞} μ0(Bn) = 0. Assume, by contradiction, that:

μ0(Bn) ≥ ε > 0   for every n ≥ 1.     (A.8)

Without loss of generality, we may take each Bn to be of the form in (A.5), i.e. Bn = Fn × ( Π_{i=n+1}^∞ Ωi ), where Fn is a finite union of Cartesian products of measurable subsets of Ω1, . . . , Ωn. Given a k-tuple (x1, . . . , xk) ∈ Π_{i=1}^k Ωi, we denote: Bn(x1, . . . , xk) = { (x_{k+1}, . . .) ∈ Π_{i=k+1}^∞ Ωi; (x1, x2, . . .) ∈ Bn }. Clearly then: ( Π_{i=1}^k Ωi ) × Bn(x1, . . . , xk) ∈ A. In virtue of Theorem A.11 we observe that:

μ0(Bn) = ∫_{Ω1} μ0( Ω1 × Bn(x1) ) dP1(x1) ≤ P1( { x1 ∈ Ω1; μ0( Ω1 × Bn(x1) ) ≥ ε/2 } ) + ε/2.

Together with (A.8), this yields:

P1( { x1 ∈ Ω1; μ0( Ω1 × Bn(x1) ) ≥ ε/2 } ) ≥ ε/2   for all n ≥ 1.


Since the above subsets of Ω1 decrease as n → ∞, it follows that their intersection must be nonempty, namely:

∃x̃1 ∈ Ω1   ∀n ≥ 1 :   μ0( Ω1 × Bn(x̃1) ) ≥ ε/2.     (A.9)

In a similar manner, we obtain:

μ0( Ω1 × Bn(x̃1) ) = ∫_{Ω2} μ0( Ω1 × Ω2 × Bn(x̃1, x2) ) dP2(x2) ≤ P2( { x2 ∈ Ω2; μ0( Ω1 × Ω2 × Bn(x̃1, x2) ) ≥ ε/4 } ) + ε/4,

which combined with (A.9) yields:

P2( { x2 ∈ Ω2; μ0( Ω1 × Ω2 × Bn(x̃1, x2) ) ≥ ε/4 } ) ≥ ε/4   for all n ≥ 1,

and thus:

∃x̃2 ∈ Ω2   ∀n ≥ 1 :   μ0( Ω1 × Ω2 × Bn(x̃1, x̃2) ) ≥ ε/4.

Repeating this procedure, we inductively obtain a sequence {x̃i ∈ Ωi}_{i=1}^∞ so that:

μ0( Π_{i=1}^k Ωi × Bn(x̃1, . . . , x̃k) ) ≥ ε/2^k   for all n, k ≥ 1.

In particular, taking k = n we observe that each set Bn(x̃1, . . . , x̃n) is nonempty, and so (x̃1, . . . , x̃n) ∈ Fn for all n ≥ 1. Thus, there must be: (x̃1, x̃2, . . .) ∈ ∩_{n=1}^∞ Bn, contradicting (A.7). The proof is done. □

Example A.13
(i) (The countable product of the probability spaces in Example A.2 (iii)). Namely, let B_ε(0) ⊂ RN be endowed with the Borel σ-algebra B(B_ε(0)) and the normalized Lebesgue measure |·| / |B_ε(0)|. Theorem A.12 defines the probability space on Ω = (B_ε(0))^ℕ = Π_{i=1}^∞ B_ε(0).
(ii) In Chap. 3 we will be working with the probability space (Ω, F, P), where Ω = ( B1(0) × {1, 2, 3} × (0, 1) )^ℕ, constructed as the infinite product of countably many copies of the product B1(0) × {1, 2, 3} × (0, 1). This last probability space is the product of: the normalized Lebesgue measure space as in Example A.2 (iii), the uniform measure space as in Example A.2 (i) with n = 3, and the Lebesgue measure space on (0, 1).

In fact, Theorem A.12 holds for arbitrary (not necessarily countable) products of probability spaces. The same proof works in the general case as well, since every set


A belonging to the product algebra A depends only on finitely many coordinates and each set in the product σ -algebra F depends only on countably many coordinates.
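The product space of Example A.13 (ii) is precisely what one samples when simulating the game of Chap. 3: each step of the process consumes one independent coordinate ωi ∈ B1(0) × {1, 2, 3} × (0, 1). A minimal sketch of our own (the function name and the event tested are illustrative assumptions, not from the text):

```python
import random

def sample_coordinate(rng, N=2):
    """One coordinate of the product space of Example A.13 (ii): a triple
    (w, a, b) with w uniform in B_1(0) of R^N, a uniform in {1, 2, 3},
    and b uniform in (0, 1)."""
    while True:   # rejection sampling of the normalized Lebesgue measure on the ball
        w = [rng.uniform(-1.0, 1.0) for _ in range(N)]
        if sum(c * c for c in w) < 1.0:
            break
    return w, rng.choice([1, 2, 3]), rng.random()

rng = random.Random(0)
# By (A.4), the measure of a product event is the product of the factors'
# measures; e.g. for N = 2: P(|w| < 1/2 and a = 2) = (1/2)^2 * (1/3) = 1/12.
n, hits = 120_000, 0
for _ in range(n):
    w, a, b = sample_coordinate(rng)
    if sum(c * c for c in w) < 0.25 and a == 2:
        hits += 1
assert abs(hits / n - 1/12) < 0.01
print(hits / n)
```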

A.4 Conditional Expectation

Definition A.14 Let X ∈ L1(Ω, F, P) be an integrable random variable on a probability space (Ω, F, P) and let G ⊂ F be a sub-σ-algebra. The conditional expectation of X relative to G, denoted by E(X | G), is a random variable Y = E(X | G) such that Y ∈ L1(Ω, G, P|G) and:

∫_A Y dP = ∫_A X dP   for all A ∈ G.

Observe that, automatically, there holds: E[E(X | G)] = E[X]. Existence of the conditional expectation follows from the fundamental Radon–Nikodym theorem:

Theorem A.15 Let μ, ν be two σ-finite measures on a measurable space (Ω, F). If ν is absolutely continuous with respect to μ (we write ν ≪ μ), i.e.:

ν(A) = 0  for all A ∈ F such that μ(A) = 0,

then there exists an F-measurable random variable Y on Ω, such that:

ν(A) = ∫_A Y dμ  for all A ∈ F.

Any Y as above is nonnegative μ-a.s., and if Y1, Y2 are two such variables, then Y1 = Y2 μ-a.s. We call Y the Radon–Nikodym derivative of ν with respect to μ.

Write now X = X₊ − X₋ as the difference of the positive and negative parts of an integrable random variable X on a probability space (Ω, F, P). Given a sub-σ-algebra G ⊂ F, define μ = P|G and:

ν(A) = ∫_A X₊ dP  for all A ∈ G.

Clearly, μ and ν are two σ-finite measures on the measurable space (Ω, G) and ν ≪ μ, so ν has its Radon–Nikodym derivative Y₊ with respect to μ. The same construction applied to X₋ results in the G-measurable random variable Y₋. We now set Y = Y₊ − Y₋. We leave it to the reader to check that the conditions of Definition A.14 are satisfied, to the effect that Y = E(X | G).
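When Ω is finite and G is generated by a partition of Ω, the Radon–Nikodym construction above reduces to averaging X over the partition atoms. The following sketch (ours, not the book's; the space, probabilities and variable are illustrative) verifies the defining property of Definition A.14 in that discrete setting:

```python
# Discrete conditional expectation: for G generated by a partition of a finite Ω,
# E(X | G) is constant on each atom A and equals (1/P(A)) ∫_A X dP there.

def cond_exp(prob, X, partition):
    """Return E(X | G) as a dict ω ↦ Y(ω), with G generated by `partition`."""
    Y = {}
    for atom in partition:
        p_atom = sum(prob[w] for w in atom)                   # P(A)
        avg = sum(X[w] * prob[w] for w in atom) / p_atom      # ∫_A X dP / P(A)
        for w in atom:
            Y[w] = avg
    return Y

# Ω = {0,...,5} with uniform P; G generated by the partition {0,1,2} ∪ {3,4,5}.
omega = range(6)
prob = {w: 1 / 6 for w in omega}
X = {w: float(w ** 2) for w in omega}
partition = [{0, 1, 2}, {3, 4, 5}]
Y = cond_exp(prob, X, partition)

# Defining property: ∫_A Y dP = ∫_A X dP for every A ∈ G (here: each atom),
# and consequently E[E(X | G)] = E[X].
for atom in partition:
    assert abs(sum(Y[w] * prob[w] for w in atom) - sum(X[w] * prob[w] for w in atom)) < 1e-12
assert abs(sum(Y[w] * prob[w] for w in omega) - sum(X[w] * prob[w] for w in omega)) < 1e-12
```

On the atom {0, 1, 2} the value is (0 + 1 + 4)/3 = 5/3, and on {3, 4, 5} it is (9 + 16 + 25)/3 = 50/3, so Y is indeed G-measurable (constant on atoms) while matching the integrals of X.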


Exercise A.16
(i) If Y1, Y2 are two random variables satisfying the conditions of Definition A.14, then Y1 = Y2 a.s.

Let X, X1, X2 be integrable random variables on a probability space (Ω, F, P) and let G, G1, G2 be sub-σ-algebras of F. Prove that:
(ii) (Linearity). E(a1 X1 + a2 X2 | G) = a1 E(X1 | G) + a2 E(X2 | G) a.s.
(iii) (Monotonicity). If X1 ≤ X2 a.s., then E(X1 | G) ≤ E(X2 | G) a.s.
(iv) (The tower property). If G1 ⊂ G2, then E(X | G1) = E(E(X | G2) | G1) a.s.
(v) For every bounded, G-measurable random variable Z on Ω there holds: E(ZX | G) = Z·E(X | G) a.s. Consequently: E[ZX] = E[Z·E(X | G)].
(vi) (Jensen's inequality). If φ : R → R is a convex function and φ ∘ X is integrable, then:

φ ∘ E(X | G) ≤ E(φ ∘ X | G)  a.s.

The following simple observation is a direct consequence of Theorem A.11:

Lemma A.17 Let X be an integrable random variable on the probability space (Ω, F, P) that is the product of two probability spaces (Ω1, F1, P1) and (Ω2, F2, P2). Define G = {A × Ω2}_{A ∈ F1}, which is a sub-σ-algebra of F (viewed as a copy of F1 in F).
(i) For P1-a.e. ω1 ∈ Ω1, we have:

E(X | G)(ω1) = ∫_{Ω2} X(ω1, ω2) dP2(ω2)  a.s.

(ii) When X = 1_{X2 > X1} is given by a random variable X1 on (Ω1, F1, P1) and a random variable X2 on (Ω2, F2, P2), then for P1-a.e. ω1 ∈ Ω1 there holds:

E(X | G)(ω1) = ∫_{Ω2} 1_{X2 > X1(ω1)} dP2 = P2( X2 > X1(ω1) )  a.s.

We may think of the conditional expectation as a projection. Indeed, for a G-measurable X we have E(X | G) = X, and one can prove that E(· | G) is actually an orthogonal projection from L²(Ω, F, P) onto L²(Ω, G, P|G). The following conditional convergence theorems are the counterparts of the Fatou, Lebesgue and monotone convergence Theorems A.6, A.7:

Exercise A.18 Let {Xi}, i ≥ 1, be a sequence of integrable random variables on a probability space (Ω, F, P). Let G be a sub-σ-algebra of F. Prove that:
(i) If each Xi is a.s. nonnegative, then: E( lim inf_{i→∞} Xi | G ) ≤ lim inf_{i→∞} E( Xi | G ).


Assume that {Xi} converge pointwise a.s. to a random variable X and that one of the following properties holds:
(ii) For each i ∈ N we have: Xi ≥ 0 and Xi ≤ Xi+1 a.s.
(iii) There exists Z ∈ L¹(Ω, F, P) such that |Xi| ≤ Z a.s. for every i ∈ N.
Then one can pass to the limit under the conditional expectation:

lim_{i→∞} E( Xi | G ) = E( X | G )  a.s.

A.5 Independence

Definition A.19 Let (Ω1, F1) and (Ω2, F2) be two measurable spaces. The random variables X1 : Ω → Ω1 and X2 : Ω → Ω2 on the probability space (Ω, F, P) are independent if:

P( {X1 ∈ A1} ∩ {X2 ∈ A2} ) = P(X1 ∈ A1) · P(X2 ∈ A2)  for all A1 ∈ F1, A2 ∈ F2.

The following observation, which we leave as an exercise, expresses independence in terms of equality of the induced product measures.

Exercise A.20 Let X1, X2 be two random variables on the probability space (Ω, F, P), with values in the measurable spaces (Ω1, F1) and (Ω2, F2), respectively. For i = 1, 2 and every Ai ∈ Fi, define: Pi(Ai) = P(Xi ∈ Ai).
(i) Both (Ωi, Fi, Pi), where i = 1, 2, are probability spaces.
(ii) Let (Ω1 × Ω2, F̄) be the product of (Ω1, F1) and (Ω2, F2). Then X1 × X2 : (Ω, F) → (Ω1 × Ω2, F̄) is a random variable. Further, (Ω1 × Ω2, F̄, P̄) is a probability space, where for all A ∈ F̄ we define: P̄(A) = P( (X1 × X2) ∈ A ).
(iii) Let P̄̄ be the product measure of P1 and P2 on (Ω1 × Ω2, F̄). The random variables X1 and X2 are independent if and only if P̄ = P̄̄.

The next observation is a direct consequence of the Fubini–Tonelli theorem:

Lemma A.21 Let X1, X2 be two independent random variables with values in Ω1, Ω2, as in Definition A.19. Define the product probability space (Ω1 × Ω2, F̄, P̄) as in Exercise A.20 and assume that Z : Ω1 × Ω2 → R̄ is an F̄-measurable random variable that is either P̄-integrable or nonnegative.


(i) The integral on the right hand side below is well defined, and we have:

∫_Ω Z( X1(ω), X2(ω) ) dP(ω) = ∫_Ω ∫_Ω Z( X1(ω1), X2(ω2) ) dP(ω2) dP(ω1).

(ii) Consider the collection of sets F̃1 = { {X1 ∈ A1}; A1 ∈ F1 }, which is a sub-σ-algebra of F in Ω. Then, for P-a.e. ω1 ∈ Ω:

E( Z ∘ (X1 × X2) | F̃1 )(ω1) = ∫_Ω Z( X1(ω1), X2(ω2) ) dP(ω2).

For more than two random variables, independence is understood in a similar manner as in Definition A.19, namely:

Definition A.22 The given n ≥ 2 random variables {Xi : Ω → Ωi}, i = 1…n, defined on the probability space (Ω, F, P) and valued in the respective measurable spaces (Ωi, Fi), are called independent provided that:

P( ⋂_{i=1}^n {Xi ∈ Ai} ) = ∏_{i=1}^n P(Xi ∈ Ai)  for all Ai ∈ Fi, i = 1…n.

Clearly, if {Xi}, i = 1…n, are independent, then they are also pairwise independent, that is, every pair (Xi, Xj) is independent for i ≠ j. The converse is not true:

Exercise A.23 Give an example of three pairwise independent random variables on B1(0) ⊂ R², for which the condition in Definition A.22 does not hold.

Similarly as in Exercise A.20, it is easy to observe that {Xi}, i = 1…n, are independent if and only if the probability measure defined on ∏_{i=1}^n (Ωi, Fi) as the push-forward of P through the multi-dimensional random variable (X1, …, Xn) coincides with the product measure of the individual push-forwards P(Xi ∈ Ai) on each (Ωi, Fi). We further have the following useful result:

Exercise A.24 Assume that {Xi : Ω → Ωi}, i = 1…n, are independent random variables as in Definition A.22. Given are measurable functions:

fs : ∏_{i=k_s+1}^{k_{s+1}} Ωi → Ω0s  for s = 1…m,

valued in some measurable spaces (Ω0s, F0s), s = 1…m, and defined on the indicated products of the consecutive spaces (Ωi, Fi), i = 1…n, where 0 = k1 < k2 < … < k_{m+1} ≤ n. Prove that the random variables {fs ∘ (X_{k_s+1}, …, X_{k_{s+1}})}, s = 1…m, are independent.
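The pairwise-versus-joint distinction noted before Exercise A.23 can be checked exhaustively in a discrete model. The sketch below (our illustration; it is not the example on B1(0) ⊂ R² that the exercise asks for) takes X1, X2 independent fair signs and X3 = X1·X2, and verifies that the triple is pairwise independent yet fails the condition of Definition A.22:

```python
from itertools import product

# Sample space: the four equally likely outcomes of (X1, X2, X1*X2).
outcomes = [(x1, x2, x1 * x2) for x1, x2 in product([-1, 1], repeat=2)]

def P(event):
    """Probability of an event, as a predicate on an outcome (ω1, ω2, ω3)."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Pairwise independence: P(Xi = a, Xj = b) = P(Xi = a)·P(Xj = b) for every pair i < j.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([-1, 1], repeat=2):
        assert P(lambda o: o[i] == a and o[j] == b) == P(lambda o: o[i] == a) * P(lambda o: o[j] == b)

# But the triple fails Definition A.22: P(X1=1, X2=1, X3=1) = 1/4, not 1/8.
assert P(lambda o: o == (1, 1, 1)) == 0.25
assert P(lambda o: o[0] == 1) * P(lambda o: o[1] == 1) * P(lambda o: o[2] == 1) == 0.125
```

Knowing any one of the three variables gives no information about any other, yet knowing two of them determines the third — exactly the phenomenon Exercise A.23 asks to reproduce on B1(0).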


A.6 Martingales and Stopping Times

One of the central concepts in probability is that of a martingale.

Definition A.25 Let (Ω, F, P) be a probability space.
(i) A filtration {Fi}, i ≥ 0, of the σ-algebra F is an increasing sequence of sub-σ-algebras of F, namely: F0 ⊂ F1 ⊂ … ⊂ Fi ⊂ Fi+1 ⊂ … ⊂ F.
(ii) A martingale relative to a filtration {Fi} (we also say adapted to a filtration) is a sequence of integrable random variables {Xi}, i ≥ 0, such that each Xi : Ω → R̄ is Fi-measurable and there holds:

Xi = E(Xi+1 | Fi)  a.s. for all i ≥ 0.  (A.10)

(iii) A submartingale/supermartingale {Xi} relative to a filtration {Fi} is defined as above, with the equality (A.10) replaced by an inequality: Xi ≤ E(Xi+1 | Fi) for a submartingale, and Xi ≥ E(Xi+1 | Fi) for a supermartingale, in each case valid a.s. with respect to P.

It is easy to observe that expectation is constant along a martingale, namely:

E[Xi] = E[X0]  for all i ≥ 0,  (A.11)

whereas it is increasing/decreasing along a sub/supermartingale. Also, Exercise A.16 (iv) then yields, in view of (A.10):

Xi = E(Xj | Fi)  a.s. for all 0 ≤ i ≤ j,  (A.12)

with the equality replaced by "≤" for a submartingale and by "≥" for a supermartingale. We further have:

Exercise A.26
(i) If {Fi} is a filtration of F and X ∈ L¹(Ω, F, P), then Xi = E(X | Fi) defines a martingale {Xi} adapted to the given filtration.
(ii) If {Xi} is a martingale adapted to a filtration {Fi} and φ : R → R is convex/concave, then {φ ∘ Xi} is a sub/supermartingale, provided that φ ∘ Xi is integrable for all i.

Many important concepts involving sequences of random variables are based on how the future state depends on the past. This necessitates treating "time" itself as a random variable, and leads to the notion of a stopping time:


Definition A.27 A random variable τ : Ω → {0, 1, …, +∞} is called a stopping time relative to a filtration {Fi}, i ≥ 0, of F if the following two conditions hold:
(i) {τ ≤ i} = {ω ∈ Ω; τ(ω) ≤ i} ∈ Fi for all i ≥ 0,
(ii) P(τ = +∞) = 0.

Roughly speaking, the defining property of a stopping time is that the decision to stop (and to possibly read the value of a specific random variable Xi) at a "time" i is based only on information available up to this time. Clearly, a constant random variable taking values in N ∪ {0} is a stopping time:

Example A.28
(i) Any two stopping times τ1, τ2 generate a stopping time: τ1 ∧ τ2 = min{τ1, τ2}.
(ii) Let {Fi}, i ≥ 0, be a filtration of F in the probability space (Ω, F, P) and let {Xi}, i ≥ 0, be a sequence of random variables such that each Xi : Ω → R̄ is Fi-measurable. Given r ∈ R, define:

τr(ω) = min{ n ≥ 0; ∑_{i=0}^n Xi(ω) ≥ r }.

If P(τr = +∞) = 0, then the random variable τr is a stopping time.
(iii) If τ, satisfying τ ≥ 1 a.s., is a stopping time, then τ − 1 might fail to be a stopping time, whereas τ + 1 is always a stopping time.

We further have the following important properties:

Exercise A.29 Let τ be a stopping time relative to a filtration {Fi}, i ≥ 0, of F.
(i) Define:

Fτ = { A ∈ F; A ∩ {τ ≤ i} ∈ Fi for all i }.

Then Fτ is a sub-σ-algebra of F and τ is Fτ-measurable. Moreover, if τ1, τ2 are two stopping times satisfying τ1 ≤ τ2, then Fτ1 ⊂ Fτ2. Intuitively, Fτ represents the information available at the random time τ.
(ii) Let {Xi}, i ≥ 0, be a sequence of random variables where each Xi is Fi-measurable. Then the random variable Xτ, given by Xτ(ω) = X_{τ(ω)}(ω), is defined P-a.s. in Ω and is Fτ-measurable. If each Xi is integrable and τ is bounded, then Xτ is integrable. Intuitively, Xτ represents the state of the process {Xi} at the random time τ.
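The definitions above are easy to probe numerically. The following sketch (ours, not from the book; all parameters are illustrative) simulates the symmetric random walk, which is a martingale relative to its natural filtration, and checks that its mean stays at E[X0] = 0 as in (A.11), while the convex image |Xi| is a submartingale with nondecreasing means, as in Exercise A.26 (ii):

```python
import random

# Symmetric random walk X_i = ξ_1 + ... + ξ_i with P(ξ_j = ±1) = 1/2:
# a martingale; composing with the convex φ(x) = |x| gives a submartingale.

random.seed(0)
n_paths, n_steps = 20000, 30
mean_X = [0.0] * (n_steps + 1)      # Monte Carlo estimates of E[X_i]
mean_absX = [0.0] * (n_steps + 1)   # Monte Carlo estimates of E[|X_i|]

for _ in range(n_paths):
    x = 0
    for i in range(1, n_steps + 1):
        x += random.choice((-1, 1))
        mean_X[i] += x / n_paths
        mean_absX[i] += abs(x) / n_paths

# E[X_i] stays (up to sampling noise) at E[X_0] = 0, cf. (A.11).
assert all(abs(m) < 0.15 for m in mean_X)
# E[|X_i|] is nondecreasing along the submartingale, up to sampling noise.
assert all(mean_absX[i] <= mean_absX[i + 5] + 0.05 for i in range(n_steps - 5))
```

The tolerances only absorb Monte Carlo noise; the exact statements are (A.11) and the submartingale property of Exercise A.26 (ii).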


Lemma A.30 Let {Xi}, i ≥ 0, be a martingale/sub/supermartingale and τ be a stopping time relative to a filtration {Fi}, i ≥ 0. Then {Xτ∧i}, i ≥ 0, is also a martingale/sub/supermartingale, respectively, relative to {Fi}.

Proof It is enough to prove the result in the submartingale case. According to Exercise A.29, each Xτ∧i is Fτ∧i- and thus Fi-measurable, and P-integrable. To prove the desired inequality between the random variables E(Xτ∧(i+1) | Fi) and Xτ∧i, take any A ∈ Fi and observe that:

∫_A E(Xτ∧(i+1) | Fi) dP = ∫_A Xτ∧(i+1) dP
  = ∫_{A ∩ {τ≤i}} Xτ∧(i+1) dP + ∫_{A ∩ {τ>i}} Xτ∧(i+1) dP
  = ∫_{A ∩ {τ≤i}} Xτ∧i dP + ∫_{A ∩ {τ>i}} Xi+1 dP.

Since A ∩ {τ > i} ∈ Fi, the last integral on the right hand side above is bounded from below by ∫_{A ∩ {τ>i}} Xi dP = ∫_{A ∩ {τ>i}} Xτ∧i dP. Consequently, we get:

∫_A E(Xτ∧(i+1) | Fi) dP ≥ ∫_A Xτ∧i dP,

which achieves the result. □

The next result is the celebrated Doob optional stopping theorem. It says that the expectation of a martingale at a stopping time is equal to the expectation at the initial time (under suitable assumptions), very much as in the case of a constant stopping time τ = i in (A.11). In other words, the possibility of stopping at an opportune moment gives no advantage, as long as one cannot foresee the future.

Theorem A.31 Let {Xi}, i ≥ 0, be a martingale and τ a stopping time, relative to a filtration {Fi}, i ≥ 0, on the probability space (Ω, F, P). Assume that one of the following properties holds:
(i) The stopping time τ is bounded.
(ii) There exists an integrable random variable Z ∈ L¹(Ω, F, P) such that |Xi| ≤ Z a.s. for every i ≥ 0.
Then Xτ ∈ L¹(Ω, F, P) and:

E[Xτ] = E[X0].  (A.13)

Proof By Lemma A.30, the sequence {Xτ∧i}, i ≥ 0, is a martingale relative to {Fi}. Thus: E[Xτ∧i] = E[Xτ∧0] = E[X0]. On the other hand, it is straightforward that:

lim_{i→∞} E[Xτ∧i] = E[Xτ],  (A.14)


since in case (i) we have Xτ∧i = Xτ for sufficiently large i, while in case (ii) one uses the Lebesgue dominated convergence theorem to pass to the limit. The proof of (A.13) is done. □

We now observe that the same result as in Theorem A.31 is valid when conditions (i) and (ii), which were only used to justify the convergence in (A.14), are replaced by the uniform integrability of {Xτ∧i}, i ≥ 0. Recall that a sequence of integrable random variables {Xi}, i ≥ 0, is uniformly integrable if it is bounded in L¹(Ω, F, P) and equiintegrable, i.e.:

∀ε > 0 ∃δ > 0 ∀A ∈ F ∀i ≥ 0  P(A) < δ ⇒ ∫_A |Xi| dP < ε.

An equivalent definition of uniform integrability, more adequate in probability theory, is given by:

Exercise A.32 A sequence of random variables {Xi}, i ≥ 0, on a probability space (Ω, F, P) is uniformly integrable if and only if:

∀ε > 0 ∃M ∀i ≥ 0  ∫_{|Xi|>M} |Xi| dP < ε.

Note that assumptions (i) and (ii) in Theorem A.31 imply uniform integrability, which is thus a more general condition. We also have:

Exercise A.33 Let {Xi}, i ≥ 0, be a sequence of integrable random variables such that: sup_{i≥0} ‖Xi+1 − Xi‖_{L∞(Ω)} < +∞. Let τ be an integrable stopping time. Then {Xτ∧i}, i ≥ 0, is uniformly integrable.

The following is a more general version of Theorem A.31, extended to sub/supermartingales:

Theorem A.34 Let {Xi}, i ≥ 0, be a sequence of random variables and τ a stopping time, relative to a filtration {Fi}, i ≥ 0, on the probability space (Ω, F, P). Assume that {Xτ∧i}, i ≥ 0, is uniformly integrable. Then Xτ ∈ L¹(Ω, F, P) and:
(i) If {Xi} is a submartingale, then E[Xτ] ≥ E[X0].
(ii) If {Xi} is a supermartingale, then E[Xτ] ≤ E[X0].
(iii) If {Xi} is a martingale, then E[Xτ] = E[X0].

Proof Since the sequence {Xτ∧i}, i ≥ 0, converges pointwise P-a.s. to Xτ and is bounded in L¹(Ω, F, P), Fatou's lemma implies the integrability of Xτ. By Lemma A.30 we obtain that E[Xτ∧i] ≥ E[X0] for all i ≥ 0 in the submartingale case, with the inequality "≥" replaced by "≤" and "=" in the supermartingale and martingale cases, respectively. To conclude the proof, it suffices to check (A.14). Fix ε > 0 and consider the decreasing sequence of subsets of Ω:

Ai = ⋃_{j=i}^∞ { |Xτ∧j − Xτ| > ε }.


Since P( ⋂_{i=0}^∞ Ai ) = 0 in view of the pointwise convergence, we obtain:

lim_{i→∞} P(Ai) = 0.  (A.15)

Consequently, for sufficiently large i:

∫_Ω |Xτ∧i − Xτ| dP ≤ ∫_{Ω\Ai} |Xτ∧i − Xτ| dP + ∫_{Ai} ( |Xτ∧i| + |Xτ| ) dP < 3ε,

where we used the equiintegrability assumption and (A.15) to bound the second integral term above. Thus (A.14) follows and the proof is complete. □

By integrating on A ∈ Fτ rather than on Ω, we similarly obtain:

Exercise A.35 Let {Xi}, i ≥ 0, be a martingale and let τ1, τ2 be two stopping times, relative to a filtration {Fi}, i ≥ 0. Assume that τ1 ≤ τ2 and that the sequence {Xτ2∧i}, i ≥ 0, is uniformly integrable. Then {Xτ1∧i}, i ≥ 0, is uniformly integrable as well, and:

E(Xτ2 | Fτ1) = Xτ1  a.s.

We remark that by the Dunford–Pettis theorem, the uniform integrability of a sequence of random variables {Xi }∞ i=0 signifies, precisely, its relative weak sequential compactness in L1 . Since the weak limit and the pointwise limit must coincide, it follows that, given a stopping time τ relative to the same filtration as ∞ {Xi }∞ i=0 , the new sequence {Xτ ∧i }i=0 weakly converges to Xτ provided that it is uniformly integrable (in fact, it also converges strongly). This suffices to pass to the limit (A.14) in the proof of Doob’s optional stopping and deduce the result in Theorem A.34.
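The content of Theorem A.31 (i) can be probed by simulation. The sketch below (ours, not the book's; the barriers a, b and the cap K are illustrative) estimates E[X_{τ∧K}] for the symmetric random walk martingale, with τ the first exit time of the interval (−a, b) and the cap K making the stopping time bounded:

```python
import random

# Monte Carlo check of optional stopping: for the symmetric random walk and the
# bounded stopping time τ ∧ K (τ = first exit of (−a, b)), E[X_{τ∧K}] = E[X_0] = 0.

random.seed(1)
a, b, K = 3, 5, 500
n_paths = 40000
total = 0.0

for _ in range(n_paths):
    x, i = 0, 0
    while -a < x < b and i < K:      # stop on exiting (−a, b), or at time K at the latest
        x += random.choice((-1, 1))
        i += 1
    total += x                        # accumulate X_{τ∧K}

estimate = total / n_paths            # Monte Carlo estimate of E[X_{τ∧K}]
assert abs(estimate) < 0.1            # equals 0 exactly; the slack absorbs sampling noise
```

As a byproduct, solving E[Xτ] = 0 together with P(hit b) + P(hit −a) = 1 recovers the classical gambler's-ruin probability P(hit b before −a) = a/(a + b).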

A.7 Convergence of Martingales

Let {Xi}, i ≥ 0, be a sequence of random variables on a probability space (Ω, F, P). For any two numbers a < b, we define the nonnegative random variable Na,b, which counts the number of upcrossings of the interval [a, b] by {Xi}, namely:

Na,b(ω) = sup{ n; there exist integers 0 ≤ s1 < t1 < … < sn < tn such that Xsi(ω) ≤ a and Xti(ω) ≥ b for all i = 1…n }  for all ω ∈ Ω.

It is easy to observe that Na,b is indeed F-measurable.


The following Dubins upcrossing inequality controls the number of upcrossings of a nonnegative supermartingale:

Lemma A.36 Let {Xi}, i ≥ 0, be a nonnegative supermartingale relative to a filtration {Fi}, i ≥ 0, on the probability space (Ω, F, P). Then:

P( Na,b ≥ n ) ≤ (a/b)^n  for all 0 ≤ a < b and all n ≥ 0.

Proof Define the auxiliary random variables:

τ0 = 0,  σ0 = inf{ i ≥ 0; Xi ≤ a },  τ1 = inf{ i ≥ σ0; Xi ≥ b },
σn = inf{ i ≥ τn; Xi ≤ a },  τn+1 = inf{ i ≥ σn; Xi ≥ b }  for all n ∈ N,

with the convention that inf ∅ = +∞ if the infimized set is empty. There holds:

{Na,b ≥ n} = {τn < +∞}  for all n ∈ N.  (A.16)

Note that each σn and τn satisfies (i) in Definition A.27 of a stopping time. In particular, although the event {τn = +∞} may have positive probability, Lemma A.30 yields that the sequence {Xτn∧i}, i ≥ 0, is a (nonnegative) supermartingale. Hence:

E( Xτn∧i | Fj ) ≤ Xτn∧j  a.s. for all 0 ≤ j ≤ i.  (A.17)

We now estimate:

b · P(τn ≤ i) ≤ ∫_{τn≤i} Xτn∧i dP = ∑_{j=0}^i ∫_{{τn≤i} ∩ {σn−1=j}} Xτn∧i dP
  ≤ ∑_{j=0}^i ∫_{{σn−1=j}} Xτn∧i dP.

To continue, observe that since {σn−1 = j} ∈ Fj, we may use (A.17) in:

∑_{j=0}^i ∫_{{σn−1=j}} Xτn∧i dP = ∑_{j=0}^i ∫_{{σn−1=j}} E( Xτn∧i | Fj ) dP
  ≤ ∑_{j=0}^i ∫_{{σn−1=j}} Xτn∧j dP
  = ∫_{{σn−1≤i}} Xσn−1 dP ≤ a · P(σn−1 ≤ i) ≤ a · P(τn−1 ≤ i).


Consequently:

P(τn ≤ i) ≤ (a/b) · P(τn−1 ≤ i),

and after passing to the limit i → ∞ in both the increasing sequences of probabilities {P(τn ≤ i)} and {P(τn−1 ≤ i)}, we obtain:

P(τn < +∞) ≤ (a/b) · P(τn−1 < +∞)  for all n ∈ N.

It follows by (A.16) that:

P( Na,b ≥ n ) ≤ (a/b)^n · P(τ0 < +∞) = (a/b)^n,

as claimed. □
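The Dubins bound can be watched empirically. The sketch below (our example, not from the book) uses the nonnegative martingale X_i = exp(W_i − i/2), where W_i is a sum of i.i.d. N(0, 1) steps; it starts at X_0 = 1 and tends to 0 a.s., so its upcrossing counts are finite, and their tail should lie below (a/b)^n:

```python
import random, math

# Count upcrossings of [a, b] by X_i = exp(W_i − i/2), a nonnegative (super)martingale,
# and compare the empirical tail of N_{a,b} with the Dubins bound (a/b)^n.

random.seed(2)
a, b = 0.5, 1.5
n_paths, horizon = 5000, 150
counts = []

for _ in range(n_paths):
    w, n_up, below = 0.0, 0, False
    for i in range(1, horizon + 1):
        w += random.gauss(0.0, 1.0)
        x = math.exp(w - i / 2)
        if x <= a:
            below = True              # the path has dipped to level a ...
        elif below and x >= b:
            n_up += 1                 # ... and has now completed an upcrossing to b
            below = False
    counts.append(n_up)

for n in (1, 2):
    freq = sum(1 for c in counts if c >= n) / n_paths
    assert freq <= (a / b) ** n       # empirical P(N_{a,b} ≥ n) ≤ (a/b)^n
```

The empirical frequencies come out strictly below the bound, partly because a discrete-time path overshoots the level a before turning upwards.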

Corollary A.37 Let {Xi}, i ≥ 0, be a supermartingale, bounded from below a.s. by some constant c ∈ R. Then there exists a random variable X such that:

lim_{i→∞} Xi = X  a.s.  (A.18)

Proof Without loss of generality we may assume that c = 0, so that all the random variables Xi are nonnegative. By Lemma A.36 it follows that:

P( Na,b = +∞ ) = 0  for all 0 ≤ a < b.  (A.19)

On the other hand, there holds:

{ lim inf_{i→∞} Xi < lim sup_{i→∞} Xi } ⊂ ⋃_{0≤a<b, a,b∈Q} { Na,b = +∞ },

[…]

with mean μ ∈ R and variance σ² > 0, provided that the push-forward of the measure P by X is absolutely continuous with respect to the Lebesgue measure and its Radon–Nikodym derivative equals (2πσ²)^{−1/2} e^{−(x−μ)²/(2σ²)}. More precisely:

P( X ∈ A ) = ∫_A (2πσ²)^{−1/2} e^{−(x−μ)²/(2σ²)} dx  for all Borel A ⊂ R.

We then write: X ∼ N(μ, σ²).

Exercise B.3 In the above context, prove the following statements:
(i) If X ∼ N(μ, σ²) and γ ≠ 0, then γX ∼ N(γμ, γ²σ²).
(ii) If X1 ∼ N(μ1, σ1²) and X2 ∼ N(μ2, σ2²) are two independent random variables, then X1 + X2 ∼ N(μ1 + μ2, σ1² + σ2²).
(iii) If both independent random variables X1, X2 have the standard normal distribution N(0, 1), then (X1 + X2)/√2 and (X1 − X2)/√2 are also independent, with distributions N(0, 1).

We start by constructing the N = 1 dimensional standard Brownian motion {Bt}, t ∈ [0, 1], on a particular probability space as in Definition B.1, which up to sets of measure 0 is defined as follows. Namely, we set (Ω, F, P) to be the countable product of R, equipped with the Borel σ-algebra and the probability measure (2π)^{−1/2} e^{−x²/2} dx. We index the coordinates of ω = {ω_q} ∈ Ω by the ordered binary rationals q = a/2^k, where a > 0 is odd, k ≥ 0 and a < 2^k, which results in having:

ω = ( ω_1, ω_{1/2}, ω_{1/4}, ω_{3/4}, ω_{1/8}, ω_{3/8}, ω_{5/8}, ω_{7/8}, ω_{1/16}, … ).

Then we inductively define:

B_0 ≡ 0,  B_1(ω) = ω_1,
B_{a/2^{k+1}}(ω) = (1/2)( B_{(a−1)/2^{k+1}}(ω) + B_{(a+1)/2^{k+1}}(ω) ) + 2^{−(k+2)/2} ω_{a/2^{k+1}}  for all k ≥ 0 and odd 0 < a < 2^{k+1}.  (B.1)
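The midpoint recursion (B.1) is straightforward to implement. The following sketch (ours, not the book's; it assumes the refinement coefficient 2^{−(k+2)/2} = 1/(2·2^{k/2}) and illustrative grid/sample sizes) samples the process on a dyadic grid and checks the increment variances of Theorem B.4 (i):

```python
import random

def sample_dyadic_bm(K, rng):
    """Sample B_{j/2^K}, j = 0..2^K, by the midpoint refinement (B.1)."""
    B = {0.0: 0.0, 1.0: rng.gauss(0.0, 1.0)}            # B_0 = 0, B_1 = ω_1
    for k in range(K):                                   # refine level by level
        step = 1.0 / 2 ** (k + 1)
        coeff = 2 ** (-(k + 2) / 2)                      # = 1/(2·2^{k/2})
        for a in range(1, 2 ** (k + 1), 2):              # odd a: new midpoints a/2^{k+1}
            t = a * step
            B[t] = 0.5 * (B[t - step] + B[t + step]) + coeff * rng.gauss(0.0, 1.0)
    return [B[j / 2 ** K] for j in range(2 ** K + 1)]

# Sanity check: Var(B_t − B_s) ≈ t − s on dyadic intervals (Theorem B.4 (i)).
rng = random.Random(3)
n_paths, K = 4000, 4
paths = [sample_dyadic_bm(K, rng) for _ in range(n_paths)]
for (j1, j2) in [(0, 8), (4, 12), (0, 16)]:              # intervals of length 1/2, 1/2, 1
    var = sum((p[j2] - p[j1]) ** 2 for p in paths) / n_paths
    assert abs(var - (j2 - j1) / 2 ** K) < 0.1
```

The grid times a·2^{−(k+1)} are exact binary floats, so the dictionary lookups of the two endpoints of each parent interval are reliable.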

B

Background in Brownian Motion

187

Theorem B.4
(i) For every two binary rationals 0 ≤ s < t ≤ 1, the random variable (Bt − Bs)/√(t − s) has the standard normal distribution. Equivalently, there holds: Bt − Bs ∼ N(0, t − s).
(ii) If {(si, ti)}, i = 1…n, are disjoint subintervals of [0, 1] with binary endpoints, then {Bti − Bsi}, i = 1…n, are independent random variables.

Proof 1. To show (i), we first observe that B_1 − B_0 = ω_1 ∼ N(0, 1). Assume that the statement holds whenever t − s = 1/2^k with k ≥ 0. Let now s = a/2^{k+1}, t = (a+1)/2^{k+1} ∈ [0, 1] for some odd a > 0. From (B.1) we obtain:

Bt − Bs = B_{(a+1)/2^{k+1}} − ( (1/2)( B_{(a−1)/2^{k+1}} + B_{(a+1)/2^{k+1}} ) + 2^{−(k+2)/2} ω_{a/2^{k+1}} )
  = (1/2)( B_{(a+1)/2^{k+1}} − B_{(a−1)/2^{k+1}} ) − 2^{−(k+2)/2} ω_{a/2^{k+1}}.  (B.2)

Observe that, by the inductive assumption and by Exercise B.3 (i), we have: (1/2)( B_{(a+1)/2^{k+1}} − B_{(a−1)/2^{k+1}} ) ∼ N(0, 1/2^{k+2}) and −2^{−(k+2)/2} ω_{a/2^{k+1}} ∼ N(0, 1/2^{k+2}). Also, these two random variables are independent in view of Exercise A.24, so Exercise B.3 (ii) yields: Bt − Bs ∼ N(0, 1/2^{k+1}), as claimed.

Similarly, when s = (a−1)/2^{k+1}, t = a/2^{k+1} ∈ [0, 1] for some odd a > 0, then:

Bt − Bs = (1/2)( B_{(a−1)/2^{k+1}} + B_{(a+1)/2^{k+1}} ) + 2^{−(k+2)/2} ω_{a/2^{k+1}} − B_{(a−1)/2^{k+1}}
  = (1/2)( B_{(a+1)/2^{k+1}} − B_{(a−1)/2^{k+1}} ) + 2^{−(k+2)/2} ω_{a/2^{k+1}} ∼ N(0, 1/2^{k+1}),  (B.3)

and the statement in (i) is hence validated on all binary intervals of the two indicated types. The general statement will result from Exercise A.24, provided we show (ii).

2. We start by noting that √2( B_{1/2} − B_0 ) = (ω_1 + ω_{1/2})/√2 and √2( B_1 − B_{1/2} ) = (ω_1 − ω_{1/2})/√2 are independent by Exercise B.3 (iii), so B_{1/2} − B_0 and B_1 − B_{1/2} are independent. We now again proceed by induction, with the induction step argument similar to that in the proof of Exercise B.3 (iii). Assume that the statement in (ii) is true for the family of intervals {(j/2^k, (j+1)/2^k)}, j = 0…2^k − 1, at some k ≥ 1. To show the same statement at k + 1, consider the 2^{k+1} random variables {B_{(j+1)/2^{k+1}} − B_{j/2^{k+1}}}, j = 0…2^{k+1} − 1, which we view as pairs of components of the following R^{2^{k+1}}-valued random variable:

X = { ( B_{(2i+2)/2^{k+1}} − B_{(2i+1)/2^{k+1}} , B_{(2i+1)/2^{k+1}} − B_{2i/2^{k+1}} ) }_{i=0}^{2^k−1}.


By (B.2) and (B.3), each pair of components of X can be written as:

( B_{(2i+2)/2^{k+1}} − B_{(2i+1)/2^{k+1}} , B_{(2i+1)/2^{k+1}} − B_{2i/2^{k+1}} )
  = (1/√(2^{k+1})) ( (Y_{2i} − Y_{2i+1})/√2 , (Y_{2i} + Y_{2i+1})/√2 ),

where Y = { ( √(2^k)( B_{(2i+2)/2^{k+1}} − B_{2i/2^{k+1}} ), ω_{(2i+1)/2^{k+1}} ) }_{i=0}^{2^k−1} is an R^{2^{k+1}}-valued random variable. Hence X = (1/√(2^{k+1})) R Y, where:

R = diag( [ 1/√2  −1/√2 ; 1/√2  1/√2 ], …, [ 1/√2  −1/√2 ; 1/√2  1/√2 ] ) ∈ R^{2^{k+1} × 2^{k+1}}

is a rotation of R^{2^{k+1}}, equal to the composition of 2^k independent two-dimensional rotations, each by the angle π/4. The induction assumption implies that the components of Y are independent and N(0, 1)-distributed. To deduce that the components of X are independent, we consider Borel sets {Aj} in R and compute:

P( X ∈ ∏_{j=0}^{2^{k+1}−1} Aj ) = P( Y ∈ √(2^{k+1}) R^{−1}( ∏_{j=0}^{2^{k+1}−1} Aj ) )
  = ∫_{√(2^{k+1}) R^{−1}(∏_j Aj)} (2π)^{−2^{k+1}/2} e^{−|x|²/2} dx = ∫_{√(2^{k+1}) ∏_j Aj} (2π)^{−2^{k+1}/2} e^{−|x|²/2} dx
  = ∏_{j=0}^{2^{k+1}−1} ∫_{√(2^{k+1}) Aj} (2π)^{−1/2} e^{−x²/2} dx = ∏_{j=0}^{2^{k+1}−1} P( ⟨X, ej⟩ ∈ Aj ),

where again we invoked the rotational invariance of the density function and the fact that the components of X are N(0, 1/2^{k+1})-distributed in view of (i). This implies that for each k ≥ 1 the random variables {Btj − Bsj}, j = 1…2^k, are independent along the basic partition intervals {(sj, tj)}, j = 1…2^k, of length 1/2^k. We now conclude the statement (ii) in the general case as well, by Exercise A.24. This ends the proof of Theorem B.4. □

So far, we have verified the validity of conditions (i) and (ii) in Definition B.1 for the binary rationals in [0, 1]. We presently complete the construction in (B.1) to obtain the full process {Bt}, t ∈ [0, 1], via condition (iii).


Lemma B.5 Define an increasing sequence of events {A_{k0}}, k0 ≥ 1, in Ω by setting:

A_{k0} = { ω ∈ Ω; |B_{(j+1)/2^k}(ω) − B_{j/2^k}(ω)| ≤ 2√(k/2^k)  for all k ≥ k0, j = 0, …, 2^k − 1 }.

Then: P( ⋃_{k0=1}^∞ A_{k0} ) = 1.

Proof Since √(2^k)( B_{(j+1)/2^k} − B_{j/2^k} ) ∼ N(0, 1), it follows that for every k ≥ 1 and every j = 0, …, 2^k − 1 there holds:

P( |B_{(j+1)/2^k}(ω) − B_{j/2^k}(ω)| > 2√(k/2^k) ) = ∫_{|x|>2√k} (2π)^{−1/2} e^{−x²/2} dx
  = 2 ∫_{2√k}^∞ (2π)^{−1/2} e^{−x²/2} dx ≤ ∫_{2√k}^∞ (1/(2√k)) x e^{−x²/2} dx = (1/(2√k)) e^{−2k}.

Consequently:

P( Ω \ A_{k0} ) ≤ ∑_{k≥k0} ∑_{j=0}^{2^k−1} P( |B_{(j+1)/2^k}(ω) − B_{j/2^k}(ω)| > 2√(k/2^k) ) ≤ ∑_{k≥k0} 2^k e^{−2k}.

The right hand side above converges to 0 as k0 → ∞, because the geometric series ∑ (2e^{−2})^k converges. Hence P( ⋂_{k0=1}^∞ (Ω \ A_{k0}) ) = 0, implying the claim. □

Corollary B.6 For P-a.e. ω ∈ Ω, the path t ↦ Bt(ω) may be uniquely extended to a Hölder continuous function on [0, 1], such that:
(i) For every 0 ≤ s < t ≤ 1 we have: Bt − Bs ∼ N(0, t − s).
(ii) If {(si, ti)}, i = 1…n, are pairwise disjoint subintervals of [0, 1], then the random variables {Bti − Bsi}, i = 1…n, are independent.

Proof 1. Fix k0 ≥ 1 and ω ∈ A_{k0}, and let 0 ≤ s < t ≤ 1 be two binary rationals such that |t − s| < 1/2^{k0}. Then there exists k1 ≥ k0 satisfying:

1/2^{k1+1} ≤ |t − s| < 1/2^{k1},  (B.4)

and we may express the interval [s, t] as a union of intervals of the type [(j−1)/2^k, j/2^k], with only generations k > k1 present and with each such generation k represented by at most two intervals (possibly one or none). Consequently,


|Bt(ω) − Bs(ω)| is bounded by the sum of the increments along the indicated subdivision intervals. Recalling the defining property of A_{k0}, we get:

|Bt(ω) − Bs(ω)| ≤ 2 ∑_{k>k1} 2√(k/2^k) ≤ C √(k1/2^{k1}),

where C > 0 is a universal constant as in Exercise B.7. Fix α > 0. Clearly, k1 ≤ Cα (2^{k1})^α for any k1 ≥ 1, with a constant Cα > 0 depending only on α. Consequently:

|Bt(ω) − Bs(ω)| ≤ Cα (1/2^{k1})^{(1−α)/2} ≤ Cα |t − s|^{(1−α)/2},

in virtue of (B.4). It follows that t ↦ Bt(ω) is Hölder continuous (with any prescribed exponent β < 1/2) and as such it may be uniquely extended to a β-Hölder path [0, 1] ∋ t ↦ Bt(ω), whose norm is bounded in function of β and k0 only. By Lemma B.5, this extension can be performed for P-a.e. ω ∈ Ω, and:

Bt = lim_{n→∞} Btn  a.s. in Ω, when binary rationals tn → t.  (B.5)

2. It remains to check properties (i) and (ii). Given 0 ≤ s < t ≤ 1, let tn → t and sn → s be approximating binary rational sequences. Then:

Zn = ( Btn − Bsn ) / √(tn − sn) ∼ N(0, 1)

by Theorem B.4 (i). Since Zn converge P-a.s. to Z = ( Bt − Bs ) / √(t − s), we obtain the following convergence, valid for every f ∈ Cc(R):

∫_R f(x) (2π)^{−1/2} e^{−x²/2} dx = ∫_Ω f ∘ Zn dP → ∫_Ω f ∘ Z dP,  as n → ∞,

and thus Z ∼ N(0, 1), as claimed in (i). Condition (ii) can be shown similarly, in view of Theorem B.4 (ii). □

Exercise B.7 Show that there exists C ≥ 1 with the property that:

∑_{k>k1} √(k/2^k) ≤ C √(k1/2^{k1})  for all k1 ≥ 1.


We finally state:

Theorem B.8 There exists a standard N-dimensional Brownian motion, satisfying all the properties in Definition B.1.

Proof We first extend the one-dimensional process {Bt}, t ∈ [0, 1], defined in (B.1) and Corollary B.6, to the positive reals t ≥ 0. Let (Ω0, F0, P0) be the new probability space, where Ω0 = ⋃_{k0≥1} A_{k0} and F0 = {A ∩ Ω0; A ∈ F}, with P0 = P|F0. Define:

(Ω_N, F_N, P_N) = (Ω0, F0, P0)^N = { ω = (ω^{(1)}, ω^{(2)}, …); ω^{(i)} ∈ Ω0 for all i ≥ 1 }

and let {B^{(i)}_t}, t ∈ [0, 1], be the one-dimensional Brownian motion of Corollary B.6 on the i-th coordinate space of Ω_N, namely: B^{(i)}_t(ω) = Bt(ω^{(i)}). Then we set:

Bt = ∑_{j=1}^i B^{(j)}_1 + B^{(i+1)}_{t−i}  for t ∈ [i, i + 1], i ≥ 0.

Second, we define {B^N_t}, t ≥ 0, on the probability space (Ω_N, F_N, P_N)^N as the vector-valued process B^N_t = (B^{(1)}_t, …, B^{(N)}_t), which on each coordinate i = 1…N coincides with the above one-dimensional construction: B^{(i)}_t = Bt ∘ πi on the i-th coordinate probability space. Clearly, {B^N_t}, t ≥ 0, satisfies all the required properties. □

We right away deduce a simple useful fact:

Lemma B.9 A Brownian motion exits any ball Br(0) almost surely, i.e.:

P( ∃t ≥ 0; |B^N_t| ≥ r ) = 1  for all r > 0.

Proof The claim follows, since for any fixed r, T > 0 we have:

P( ∃t ≥ 0; |B^N_t| ≥ r ) ≥ P( |B^N_T| ≥ r ) = ∫_{R^N \ Br(0)} (2πT)^{−N/2} e^{−|x|²/(2T)} dx
  = ∫_{R^N \ B_{r/√T}(0)} (2π)^{−N/2} e^{−|y|²/2} dy,

and the integral on the right hand side goes to 1 as T → ∞. □

Exercise B.10 Show that the 1-dimensional Brownian motion hits points a.s.:

P( ∃t ≥ 0; B¹_t = θ ) = 1  for all θ ∈ R.


Remark B.11 According to the observation in Lemma B.9, Brownian motion will a.s. enter the open set R^N \ B̄r(0). One may ask if the same is true for bounded sets, say for a ball Br(x0) ∌ 0. The answer is affirmative in dimensions N = 1, 2, namely: P( ∃t > 0; B^N_t ∈ Br(x0) ) = 1, whereas for N ≥ 3 we have: P( ∃t > 0; B^N_t ∈ Br(x0) ) = r^{N−2}/|x0|^{N−2} < 1 when |x0| > r.

This fact is related to the recurrence and transience properties. For N = 1, Brownian motion is point-recurrent: P( ∃tn → ∞; B¹_{tn} = x ) = 1 for every x ∈ R. For N = 2, it is neighbourhood-recurrent: P( ∃tn → ∞; B²_{tn} ∈ Br(x0) ) = 1 for every Br(x0) ⊂ R², but it is not point-recurrent. For N ≥ 3, Brownian motion is transient: P( lim_{t→∞} |B^N_t| = +∞ ) = 1.
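The exit property of Lemma B.9 can also be watched numerically. The following discretized simulation (our sketch; the step size dt, horizon T and path count are illustrative parameters, and a discrete walk only approximates the continuous path) estimates the probability of leaving (−r, r) by time T for N = 1:

```python
import random, math

# Fraction of simulated 1-d Brownian paths that leave the ball B_r(0) = (−r, r)
# by time T; by Lemma B.9 this fraction should approach 1 as T grows.

random.seed(4)
r, dt = 1.0, 0.02
n_paths = 2000

def fraction_exited(T):
    exited = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(int(T / dt)):
            b += math.sqrt(dt) * random.gauss(0.0, 1.0)  # B_{t+dt} − B_t ~ N(0, dt)
            if abs(b) >= r:
                exited += 1
                break
    return exited / n_paths

f1, f10 = fraction_exited(1.0), fraction_exited(10.0)
assert f1 < f10 <= 1.0   # longer horizon, larger exit fraction
assert f10 > 0.95        # by T = 10 essentially every path has exited (−1, 1)
```

For r = 1 the mean exit time of a Brownian path is r² = 1, and the probability of surviving inside (−1, 1) decays exponentially in T, which is why the T = 10 fraction is already indistinguishable from 1 at this sample size.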

B.2 The Wiener Measure and Uniqueness of Brownian Motion

We now prove the uniqueness of Brownian motion. In order not to be restricted by the choice of the probability space (Ω, F, P) in Definition B.1, this is done by proving uniqueness of the associated Wiener measure.

Definition B.12 Let {B^N_t}, t ≥ 0, be a standard N-dimensional Brownian motion. The Wiener measure μW is the probability measure on the Banach space E = C([0, 1], R^N), equipped with the σ-algebra of its Borel subsets, such that:

μW(A) = P( { ω; ( [0, 1] ∋ t ↦ B^N_t(ω) ∈ R^N ) ∈ A } )  for all Borel A ⊂ E.  (B.6)

Thus, μW is the push-forward of P by the measurable map from Ω to E indicated in (B.6). Indeed, we observe that:

Exercise B.13
(i) The space E = C([0, 1], R^N) is separable. A countable dense subset of E consists, for example, of all polynomials on [0, 1] with rational coefficients.
(ii) The σ-algebra of Borel subsets of E is generated by the closed balls.
(iii) The mapping Ω ∋ ω ↦ ( [0, 1] ∋ t ↦ B^N_t(ω) ∈ R^N ) ∈ E is measurable.

Automatically, for all ψ ∈ Cb(E) we have:

∫_E ψ(f) dμW(f) = ∫_Ω ψ( [0, 1] ∋ t ↦ B^N_t(ω) ) dP(ω).

Denoting the right hand side above by F(ψ), it follows that F ∈ Cb(E)*, namely F is a linear, continuous and positive (i.e. F(ψ) ≥ 0 for all ψ ≥ 0) functional on the Banach space Cb(E). When restricted to the unit ball, F is also continuous


with respect to the topology of uniform convergence on compact sets. Hence, μW is naturally the measure obtained by invoking the Riesz representation theorem (on the nonlocally compact space E, see Parthasarathy 1967).

Exercise B.14 Let μ1 ≠ μ2 be two distinct probability measures on E. Show that there exist 0 ≤ t1 < t2 < … < tn ≤ 1 and ψ̃ ∈ Cc(R^{Nn}) such that, defining:

ψ(f) = ψ̃( f(t1), …, f(tn) )  for all f ∈ E,  (B.7)

the function ψ ∈ Cb(E) satisfies: ∫_E ψ dμ1 ≠ ∫_E ψ dμ2.

Theorem B.15 The Wiener measure μW is independent of the process {B^N_t}, t ∈ [0, 1], as long as B^N_0 = 0 and conditions (i)–(iii) (restricted to the interval [0, 1]) in Definition B.1 hold.

Proof Let ψ be as in (B.7), for some 0 ≤ t1 < … < tn ≤ 1 and ψ̃ ∈ Cc(R^{Nn}). Then:

F(ψ) = ∫_Ω ψ( t ↦ B^N_t(ω) ) dP(ω) = ∫_Ω ψ̃( B^N_{t1}(ω), …, B^N_{tn}(ω) ) dP(ω)
  = ∫_Ω φ̃( B^N_{t1}(ω) − B^N_0(ω), B^N_{t2}(ω) − B^N_{t1}(ω), …, B^N_{tn}(ω) − B^N_{tn−1}(ω) ) dP(ω),

where φ̃ ∈ Cc(R^{Nn}) is given by:

φ̃( x^{(1)}, …, x^{(n)} ) = ψ̃( x^{(1)}, x^{(1)} + x^{(2)}, …, x^{(1)} + … + x^{(n)} ).

As the increments {B^N_{ti+1} − B^N_{ti}}, i = 0…n−1, are independent and normally distributed:

F(ψ) = ∫_{R^{Nn}} φ̃( x^{(1)}, …, x^{(n)} ) ∏_{i=0}^{n−1} (2π(ti+1 − ti))^{−N/2} e^{−|x^{(i+1)}|²/(2(ti+1−ti))} d(x^{(1)}, …, x^{(n)}).

The above quantity depends only on ψ and is independent of the particular choice of the process {B^N_t}, t ∈ [0, 1]. Thus μW is uniquely defined, by Exercise B.14. □

Exercise B.16 Show that for every selection of points 0 = t0 < t1 < t2 < … < tn ≤ 1 and Borel sets Ai ⊂ R^N, i = 1…n, there holds:

μW( { f ∈ E; f(ti) ∈ Ai for all i = 1…n } )
  = ∫_{∏_{i=1}^n Ai} ∏_{i=1}^n (2π(ti − ti−1))^{−N/2} e^{−|xi − xi−1|²/(2(ti−ti−1))} d(x1, …, xn).

The above formula is sometimes taken as the definition of the Wiener measure.
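The finite-dimensional formula above encodes independent Gaussian increments; one of its consequences, worth checking numerically, is the covariance structure Cov(Bs, Bt) = min(s, t) of the 1-dimensional motion. The sketch below (ours, not the book's computation; s, t and the sample size are illustrative) samples (Bs, Bt) exactly from the two-point distribution and estimates that covariance:

```python
import random, math

# Sample (B_s, B_t) for s < t via independent Gaussian increments and estimate
# Cov(B_s, B_t), which equals min(s, t) for Brownian motion.

random.seed(5)
s, t = 0.3, 0.7
n_paths = 50000
acc = 0.0

for _ in range(n_paths):
    b_s = math.sqrt(s) * random.gauss(0.0, 1.0)              # B_s ~ N(0, s)
    b_t = b_s + math.sqrt(t - s) * random.gauss(0.0, 1.0)    # + independent increment
    acc += b_s * b_t                                          # both means are 0

cov = acc / n_paths
assert abs(cov - min(s, t)) < 0.02    # Cov(B_s, B_t) = s ∧ t = 0.3
```

The identity follows directly from the formula: E[Bs Bt] = E[Bs²] + E[Bs (Bt − Bs)] = s + 0 = min(s, t).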


Exercise B.17
(i) Consider the space E0 = C([0, ∞), R^N), with the topology of uniform convergence on compact intervals [0, T], for all T > 0. As in Exercise B.13, show that the Borel σ-algebra F0 in E0 is generated by the sets of the type: A_{g,T,ε} = { f ∈ E0; ‖f − g‖_{L∞([0,T])} ≤ ε }, where g are polynomials with rational coefficients and T, ε > 0 are rational numbers.
(ii) Show that μW(A) = P( { ω; (t ↦ B^N_t(ω)) ∈ A } ) defines a probability measure on (E0, F0).
(iii) Prove that each standard N-dimensional Brownian motion {B^N_t}, t ≥ 0, induces the same Wiener measure μW as in (ii).

B.3 The Markov Properties

The following statement is the Markov property of Brownian motion:

Theorem B.18 If {B^N_t}_{t≥0} is a standard Brownian motion as in Definition B.1, then for every s > 0 the process {B^N_{t+s} − B^N_s}_{t≥0} is also a standard Brownian motion, which is independent of {B^N_t}_{t∈[0,s]}. More precisely, the latter property states that if 0 ≤ s1 < ... < sk ≤ s and 0 < t1 < ... < tn for some k, n ≥ 1, then the following two vector-valued random variables:

    (B^N_{s1}, ..., B^N_{sk})    and    (B^N_{t1+s} − B^N_s, ..., B^N_{tn+s} − B^N_s)

are independent.

Proof The fact that {B̄t = B^N_{t+s} − B^N_s}_{t≥0} satisfies conditions (i)–(iii) of Definition B.1 is self-evident. To show that {B^N_t}_{t∈[0,s]} and {B̄t}_{t≥0} are independent, fix two Borel sets A ⊂ R^{Nk} and C ⊂ R^{Nn}. Then:

    {(B^N_{s1}, ..., B^N_{sk}) ∈ A} = {(B^N_{s1} − B^N_0, B^N_{s2} − B^N_{s1}, ..., B^N_{sk} − B^N_{s_{k−1}}) ∈ Mk(A)}

and:

    {(B̄_{t1}, ..., B̄_{tn}) ∈ C} = {(B^N_{t1+s} − B^N_s, B^N_{t2+s} − B^N_{t1+s}, ..., B^N_{tn+s} − B^N_{t_{n−1}+s}) ∈ Mn(C)},

where for each i ≥ 1 we define the invertible matrix:

    Mi = [  Id_N                      ]
         [ −Id_N   Id_N              ]
         [          ⋱      ⋱         ]
         [              −Id_N   Id_N ]   ∈ R^{Ni×Ni}.


Using the fact that the intervals (0,s1), (s1,s2), ..., (s_{k−1},sk), (s,t1+s), (t1+s,t2+s), ..., (t_{n−1}+s,tn+s) are disjoint, we invoke the independence of increments property of {B^N_t}_{t≥0}, which in view of Exercise A.24 yields:

    P({(B^N_{s1}, ..., B^N_{sk}) ∈ A} ∩ {(B̄_{t1}, ..., B̄_{tn}) ∈ C})
      = P((B^N_{s1} − B^N_0, B^N_{s2} − B^N_{s1}, ..., B^N_{sk} − B^N_{s_{k−1}}) ∈ Mk(A))
        · P((B^N_{t1+s} − B^N_s, B^N_{t2+s} − B^N_{t1+s}, ..., B^N_{tn+s} − B^N_{t_{n−1}+s}) ∈ Mn(C))
      = P((B^N_{s1}, ..., B^N_{sk}) ∈ A) · P((B̄_{t1}, ..., B̄_{tn}) ∈ C),

as needed.
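A quick simulation illustrating Theorem B.18, with illustrative values of s and t (N = 1): paths are built from a fine random walk, and the shifted increment B_{t+s} − B_s is checked to have variance t and to be uncorrelated with B_s.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build M discrete Brownian paths on the grid {dt, 2*dt, ...} as cumulative
# sums of independent N(0, dt) steps.
M, dt = 20_000, 0.01
s, t = 0.5, 0.7
n = int(round((s + t) / dt))                 # 120 grid times up to s + t
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (M, n)), axis=1)

Bs = paths[:, int(round(s / dt)) - 1]        # B_s
shifted = paths[:, -1] - Bs                  # B_{t+s} - B_s

var_shifted = shifted.var()                  # should be close to t = 0.7
corr = np.corrcoef(Bs, shifted)[0, 1]        # should be close to 0
```

Full independence is of course stronger than zero correlation; for jointly Gaussian vectors the two coincide.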

Definition B.19 Let {B^N_t}_{t≥0} be a standard N-dimensional Brownian motion as in Definition B.1.
(i) For every t ≥ 0 we define Ft ⊂ F to be the smallest sub-σ-algebra of F such that for each s ∈ [0,t] the random variable B^N_s : Ω → R^N is Ft-measurable. Clearly, Fs ⊂ Ft when s ≤ t.
(ii) We say that τ : Ω → [0,∞] is a stopping time for {B^N_t}_{t≥0}, provided that {τ ≤ t} ∈ Ft for all t ≥ 0 and P(τ = +∞) = 0. We then further define:

    Fτ = {A ∈ F; A ∩ {τ ≤ t} ∈ Ft for all t ≥ 0}.

Exercise B.20 In the above context, prove the following assertions:
(i) Fτ is indeed a σ-algebra and τ is Fτ-measurable. For a constant stopping time τ ≡ s ≥ 0 we have: Fτ = Fs.
(ii) If two stopping times satisfy τ1 ≤ τ2, then F_{τ1} ⊂ F_{τ2}.
(iii) For every k ≥ 0, the random variable τk = 2^{−k} ⌈2^k τ⌉ is another stopping time, and B^N_{τk} defined as B^N_{τk}(ω) = B^N_{τk(ω)}(ω) is a random variable. Since the τk converge to τ pointwise a.s. in Ω, it follows immediately that B^N_τ is a random variable, where B^N_τ(ω) = B^N_{τ(ω)}(ω).

Lemma B.21 If τ is a stopping time for the Brownian motion {B^N_t}_{t≥0}, then the random variable B^N_τ is Fτ-measurable.

Proof For every k ≥ 0, define the random variable ηk = 2^{−k} ⌊2^k τ⌋. We remark that since {ηk ≤ t} = {τ < 2^{−k} ⌈2^k t⌉}, then ηk is indeed measurable, but it is not, in general, a stopping time. For Borel A ⊂ R^N and t ≥ 0, consider the event:

    {B^N_{ηk} ∈ A} ∩ {τ ≤ t} = ⋃_{n≥0} ( {B^N_{n/2^k} ∈ A} ∩ {τ ∈ [n/2^k, (n+1)/2^k)} ∩ {τ ≤ t} )
                              = ⋃_{0≤n≤2^k t} ( {B^N_{n/2^k} ∈ A} ∩ {τ ∈ [n/2^k, (n+1)/2^k)} ∩ {τ ≤ t} ).


Now, for each n/2^k ≤ t we have: {B^N_{n/2^k} ∈ A} ∈ F_{n/2^k} ⊂ Ft, and also: {τ ∈ [n/2^k, (n+1)/2^k)} ∩ {τ ≤ t} ∈ Ft, which follows by observing that {τ < s} = ⋃_{m≥1} {τ ≤ s − 1/m} ∈ Fs. We conclude that the event in the left hand side above belongs to Ft, which readily implies that each random variable B^N_{ηk} is Fτ-measurable. Since ηk converge to τ as k → ∞ and τ is a.s. finite, we see that B^N_{ηk} converge to B^N_τ pointwise a.s. in Ω, proving the claim. □
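The dyadic approximations τk = 2^{−k} ⌈2^k τ⌉ from Exercise B.20 (iii) are easy to visualize numerically; the sketch below (with illustrative stand-in values for τ) confirms that they decrease to τ from above at rate 2^{−k}:

```python
import numpy as np

rng = np.random.default_rng(2)
tau = rng.uniform(0.0, 5.0, 1000)   # stand-in sampled values of a finite stopping time

def dyadic_up(tau, k):
    # tau_k = 2^{-k} * ceil(2^k * tau): round tau UP to the grid {n / 2^k}.
    return np.ceil((2.0 ** k) * tau) / (2.0 ** k)

taus = [dyadic_up(tau, k) for k in range(12)]
above = all(np.all(tk >= tau) for tk in taus)                       # tau_k >= tau
monotone = all(np.all(taus[k + 1] <= taus[k]) for k in range(11))   # nonincreasing in k
max_gap = float(np.max(taus[11] - tau))                             # at most 2^{-11}
```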

Exercise B.22 Show that the independence of {B^N_t}_{t∈[0,s]} and {B^N_{t+s} − B^N_s}_{t≥0} stated in Theorem B.18 is equivalent to {B^N_{t+s} − B^N_s}_{t≥0} being independent of Fs, in the sense that for all 0 ≤ t1 < ... < tn, all Borel subsets {Ai}_{i=1}^n of R^N and all C ∈ Fs:

    P(⋂_{i=1}^n {B^N_{ti+s} − B^N_s ∈ Ai} ∩ C) = P(⋂_{i=1}^n {B^N_{ti+s} − B^N_s ∈ Ai}) · P(C).

Lemma B.23 Let K be a closed subset of R^N. Then:

    τK(ω) = min{t ≥ 0; B^N_t(ω) ∈ K}

is a stopping time, provided that P(τK = +∞) = 0. In particular, if 0 ∈ D ⊂ R^N is open and bounded, then τ_∂D is a stopping time.

Proof Given t ≥ 0, observe that:

    {τK ≤ t} = ⋂_{m≥1} ⋃_{q∈[0,t]∩Q} {ω; dist(B^N_q(ω), K) ≤ 1/m} ∈ Ft.

This implies that τK is a stopping time, in view of the assumed a.s. finiteness of τK. The second claim follows from Lemma B.9 and the continuity of the Brownian paths t ↦ B^N_t(ω). □

The following statement is an extension of Theorem B.18 to non-constant stopping times, known as the strong Markov property of Brownian motion.

Theorem B.24 Let {B^N_t}_{t≥0} be a standard N-dimensional Brownian motion as in Definition B.1, and let τ be a stopping time. Then the process:

    {B^N_{t+τ} − B^N_τ}_{t≥0}

is also a standard Brownian motion (upon a possible modification of Ω by a set of P-measure 0 in order to achieve (iii)). This process is independent of Fτ, namely: for all 0 ≤ t1 < ... < tn, all Borel subsets {Ai}_{i=1}^n of R^N and all C ∈ Fτ:

    P(⋂_{i=1}^n {B^N_{ti+τ} − B^N_τ ∈ Ai} ∩ C) = P(⋂_{i=1}^n {B^N_{ti+τ} − B^N_τ ∈ Ai}) · P(C).


Proof
1. Since for t ≥ 0 the random variable t + τ is also a stopping time, Exercise B.20 (iii) implies that all B̄t = B^N_{t+τ} − B^N_τ are measurable. It is also clear that for ω ∈ {τ < ∞} the path [0,∞) ∋ t ↦ B̄t(ω) ∈ R^N is continuous.
Recall the definition of the approximating stopping times: {τk = 2^{−k} ⌈2^k τ⌉}_{k≥0}. For every 0 ≤ s < t and a Borel set A ⊂ R^N we get:

    P(B^N_{t+τk} − B^N_{s+τk} ∈ A)
      = Σ_{i≥0} P({B^N_{t+i/2^k} − B^N_{s+i/2^k} ∈ A} ∩ {τk = i/2^k})
      = Σ_{i≥0} P(B^N_{t+i/2^k} − B^N_{s+i/2^k} ∈ A) · P(τk = i/2^k)
      = Σ_{i≥0} ∫_A (2π(t−s))^{−N/2} e^{−|x|²/(2(t−s))} dx · P(τk = i/2^k),

since {B^N_{t+i/2^k} − B^N_{s+i/2^k} ∈ A} is independent of {τk = i/2^k} ∈ F_{i/2^k} ⊂ F_{s+i/2^k} in

view of Exercise B.22. Above, we also used the property (i) in Definition B.1. Thus:

    P(B^N_{t+τk} − B^N_{s+τk} ∈ A) = ∫_A (2π(t−s))^{−N/2} e^{−|x|²/(2(t−s))} dx,

to the effect that: B^N_{t+τk} − B^N_{s+τk} ∼ N(0, t−s). As B^N_{t+τk} − B^N_{s+τk} converges to B̄t − B̄s as k → ∞, pointwise P-a.s. in Ω, it follows that B̄t − B̄s ∼ N(0, t−s) as well, proving property (i) in Definition B.1 for the process {B̄t}_{t≥0}. Property (ii) is shown in a similar manner: first by calculations as above for each τk, in view of (i) and (ii) and Markov's property, and then by approximating τ.
2. It remains to show the independence of {B̄t}_{t≥0} from Fτ. Firstly, for a fixed n-tuple 0 ≤ t1 < t2 < ... < tn, Borel subsets {Aj}_{j=1}^n of R^N, and C ∈ F_{τk} with some k ≥ 0, we observe that:

    P(⋂_{j=1}^n {B^N_{tj+τk} − B^N_{τk} ∈ Aj} ∩ C)
      = Σ_{i≥0} P(⋂_{j=1}^n {B^N_{tj+i/2^k} − B^N_{i/2^k} ∈ Aj} ∩ {τk = i/2^k} ∩ C)
      = Σ_{i≥0} P(⋂_{j=1}^n {B^N_{tj+i/2^k} − B^N_{i/2^k} ∈ Aj}) · P({τk = i/2^k} ∩ C)
      = P(⋂_{j=1}^n {B^N_{tj+τk} − B^N_{τk} ∈ Aj}) · P(C),        (B.8)


because {τk = i/2^k} ∩ C ∈ F_{i/2^k}, and the events ⋂_{j=1}^n {B^N_{tj+i/2^k} − B^N_{i/2^k} ∈ Aj} are independent of F_{i/2^k} by the Markov property.

We have also used the fact that P(⋂_{j=1}^n {B^N_{tj+i/2^k} − B^N_{i/2^k} ∈ Aj}) does not depend on i ≥ 0 and equals P(⋂_{j=1}^n {B^N_{tj} ∈ Aj}). To see this last assertion, we argue as in the proof of Theorem B.15. We write, for any s ≥ 0:

    ⋂_{j=1}^n {B^N_{tj+s} − B^N_s ∈ Aj}
      = {(B^N_{t1+s} − B^N_s, B^N_{t2+s} − B^N_{t1+s}, ..., B^N_{tn+s} − B^N_{t_{n−1}+s}) ∈ Mn^{−1}(∏_{j=1}^n Aj)},

where Mn is the following invertible matrix:

    Mn = [ Id_N                      ]
         [ Id_N   Id_N              ]
         [  ⋮            ⋱          ]
         [ Id_N   Id_N   ...   Id_N ]   ∈ R^{Nn×Nn}.

Consequently:

    P(⋂_{j=1}^n {B^N_{tj+s} − B^N_s ∈ Aj})
      = ∫_{Mn^{−1}(∏_{j=1}^n Aj)} ∏_{i=0}^{n−1} (2π(t_{i+1} − t_i))^{−N/2} e^{−|x^{(i+1)}|²/(2(t_{i+1}−t_i))} d(x^(1), ..., x^(n))

(with the convention t0 = 0) is constant in s and hence equal to P(⋂_{j=1}^n {B^N_{tj} ∈ Aj}), as claimed.
3. We now apply (B.8) to C = Ω to obtain that: P(⋂_{j=1}^n {B^N_{tj+τk} − B^N_{τk} ∈ Aj}) = P(⋂_{j=1}^n {B^N_{tj} ∈ Aj}). In conclusion:

    P(⋂_{j=1}^n {B^N_{tj+τk} − B^N_{τk} ∈ Aj} ∩ C) = P(⋂_{j=1}^n {B^N_{tj+τk} − B^N_{τk} ∈ Aj}) · P(C),        (B.9)

establishing the desired independence for any stopping time τk. For the case of τ, we proceed by approximation, noting that Fτ ⊂ ⋂_{k≥0} F_{τk}. Fix C ∈ Fτ; then by (B.9) we observe that:

    ∫_C φ((B^N_{tj+τk} − B^N_{τk})_{j=1}^n) dP = ∫_Ω φ((B^N_{tj+τk} − B^N_{τk})_{j=1}^n) dP · P(C),




for all k ≥ 0 and all φ ∈ Cc(R^{Nn}). Since B^N_{tj+τk} − B^N_{τk} converges as k → ∞ to B^N_{tj+τ} − B^N_τ pointwise P-a.s., it follows that:

    ∫_C φ((B^N_{tj+τ} − B^N_τ)_{j=1}^n) dP = ∫_Ω φ((B^N_{tj+τ} − B^N_τ)_{j=1}^n) dP · P(C),

which establishes the desired independence from Fτ. □

Exercise B.25 Deduce the following “reverse strong Markov property”. Let {B^N_t}_{t≥0}, {B̄^N_t}_{t≥0} be two standard Brownian motions on some probability space (Ω, F, P). Let τ be a stopping time as in Definition B.19 (ii). Define:

    B̄̄t = B^N_t                       for t ≤ τ,
    B̄̄t = B^N_τ + B̄^N_t − B̄^N_τ       for t > τ.

Then {B̄̄t}_{t≥0} is an N-dimensional Brownian motion on (Ω, F, P).
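Before turning to harmonic extensions, here is a numerical illustration of the exit time τ_∂D from Lemma B.23, for D = B_1(0) ⊂ R², via crude Euler steps. This is a sketch: the discrete walk slightly overshoots ∂D, and the comparison value E[τ_∂D] = R²/N = 1/2, obtained by optional stopping applied to |B_t|² − Nt, is a standard fact not derived in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate M planar Brownian paths from the origin until they leave the
# unit disk, advancing all still-alive paths by sqrt(dt) * N(0, I) steps.
dt, M = 2.5e-4, 4000
x = np.zeros((M, 2))
t = np.zeros(M)
alive = np.ones(M, dtype=bool)
while alive.any():
    k = int(alive.sum())
    x[alive] += np.sqrt(dt) * rng.standard_normal((k, 2))
    t[alive] += dt
    alive[alive] = np.sum(x[alive] ** 2, axis=1) < 1.0

mean_exit = t.mean()                      # approximately E[tau] = 1/2
exit_norms = np.linalg.norm(x, axis=1)    # all paths end on or past the circle
```

The upward bias of order sqrt(dt) comes from boundary crossings between grid times.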

B.4 Brownian Motion and Harmonic Extensions

Let D ⊂ R^N be open, bounded and connected. Let F : ∂D → R be a continuous function, which without loss of generality we view as the restriction to ∂D of some F ∈ Cc(R^N). Given x ∈ D, we consider the stopping time: τx = min{t ≥ 0; B^N_t ∈ ∂D − x} as in Lemma B.9 and define:

    u(x) = ∫_Ω F(x + B^N_{τx}) dP.        (B.10)

The main observation is that u is harmonic in D. Classically (see Mörters and Peres 2010; Doob 1984; Parthasarathy 1967; Durrett 2010; Kallenberg 2002), it is proved by taking B̄r(x) ⊂ D and writing:

    u(x) = E[F ∘ (x + B^N_{τx})] = E[ E[F ∘ (x + B^N_{τx}) | F_{τx^r}] ]
         = E[u ∘ (x + B^N_{τx^r})] = ⨍_{∂Br(x)} u(y) dσ^{N−1}(y),

where the first equality is just the definition in (B.10), the second follows by the tower property of conditional expectation with respect to the stopping time:

    τx^r = min{t ≥ 0; B^N_t ∈ ∂Br(x) − x},        (B.11)


the third one results from the strong Markov property (Theorem B.24), and the fourth from the rotational invariance of the standard Brownian motion. Thus, u satisfies the spherical mean value property, so it is harmonic. We now give details of proofs of the above statements.

Exercise B.26 Show that u in (B.10) is Borel, using the outline below.
(i) Consider a new probability space (Ω̃, F̃, P̃), obtained by taking the product of (Ω, F, P) and the set D equipped with its Borel σ-algebra and the normalized Lebesgue measure (1/|D|) dx. Check that the process {B̃t = x + B^N_t(ω)}_{t≥0} is an N-dimensional Brownian motion on (Ω̃, F̃, P̃).
(ii) Check that the associated filtration for {B̃t}_{t≥0} is {F̃t}_{t≥0}, where each F̃t is the product of Ft with the σ-algebra of Borel subsets of D. Show that τ̃ = min{t ≥ 0; B̃t ∈ ∂D} is a stopping time. Thus, F ∘ B̃_τ̃ is a bounded random variable on (Ω̃, F̃).
(iii) Use Fubini's theorem to conclude that: x ↦ ∫_Ω (F ∘ B̃_τ̃)(ω, x) dP(ω) is Borel.

Exercise B.27
(i) Consider the measurable space (E0, F0) introduced in Exercise B.17. Then:

    E = {f ∈ E0; f(0) = 0 and |f(t)| > diam D for some t > 0} ∈ F0.

(ii) Let ψ : D × E → R^N be given by:

    ψ(x, f) = x + f(min{t ≥ 0; x + f(t) ∈ ∂D}).

Prove that ψ is measurable with respect to the product σ-algebra on D × E of: the σ-algebra of Borel subsets of D and the restriction of F0 to E.

Theorem B.28 For u as in (B.10) and any B̄r(x) ⊂ D there holds:

    u(x) = ∫_Ω u(x + B^N_{τx^r}) dP.

Proof Given B̄r(x) ⊂ D, let F_{τx^r} be the sub-σ-algebra of F generated by the stopping time τx^r in (B.11). Define the D-valued random variable on (Ω, F) by: X1 = x + B^N_{τx^r}, and note that X1 is F_{τx^r}-measurable. Further, define the E-valued random variable on (Ω, F) by:

    X2(ω) = ([0,∞) ∋ t ↦ B^N_{τx^r+t}(ω) − B^N_{τx^r}(ω) ∈ R^N).

As in Exercise B.27, it follows that X2 is indeed measurable with respect to the Borel σ-algebra in E, whereas by Theorem B.24 we see that X1 and X2 are independent.


To show the latter, it is enough to observe that each preimage basis set:

    X2^{−1}(A_{g,T,ε}) = ⋂_{q∈[0,T]∩Q} {B^N_{τx^r+q} − B^N_{τx^r} ∈ B̄_ε(g(q))}

belongs to the σ-algebra generated by the Brownian motion {B^N_{τx^r+t} − B^N_{τx^r}}_{t≥0}, which is independent of F_{τx^r}. Let now μ1, μ2 be the push-forwards of P via Xi for i = 1, 2, respectively:

    μ1(A) = P(X1 ∈ A)    for all Borel A ⊂ D,
    μ2(C) = P(X2 ∈ C)    for all Borel C ⊂ E.

Recall that with ψ given in Exercise B.27, we have:

    x + B^N_{τx} = x + B^N_{τx^r} + (B^N_{τx} − B^N_{τx^r}) = ψ ∘ (X1, X2).

Thus there holds:

    u(x) = ∫_Ω F(x + B^N_{τx}) dP = ∫_Ω F ∘ ψ ∘ (X1, X2) dP
         = ∫_{D×E} F ∘ ψ d(μ1 × μ2) = ∫_D ∫_E (F ∘ ψ)(z, f) dμ2(f) dμ1(z)
         = ∫_D ∫_Ω (F ∘ ψ)(z, X2(ω)) dP(ω) dμ1(z),

where we used the independence of X1 and X2 in view of Exercise A.20 and Fubini's theorem. Finally:

    u(x) = ∫_D u(z) dμ1(z) = ∫_Ω u ∘ X1 dP = ∫_Ω u(x + B^N_{τx^r}) dP,

which completes the proof. □

Corollary B.29 Let u be as in (B.10). Then, for any B̄r(x) ⊂ D there holds:

    u(x) = ⨍_{∂Br(x)} u(y) dσ^{N−1}(y).

Consequently, u is a harmonic function in D.

Proof Let τx^r be the stopping time in (B.11) and define μ to be the push-forward of P on ∂Br(0) via the measurable mapping B^N_{τx^r}, so that:

    μ(A) = P(B^N_{τx^r} ∈ A)    for all Borel A ⊂ ∂Br(0).


We now argue that μ is rotationally invariant. Recall the measurable space (E, F0|E) defined in Exercise B.27, where the function ψ : E → R^N given by:

    ψ(f) = f(min{t ≥ 0; f(t) ∈ ∂Br(0)})

is measurable. For any Borel A ⊂ ∂Br(0), the composition 1_A ∘ ψ is thus a bounded random variable on the probability space (E, F0|E, μW), with the Wiener measure μW on C([0,∞), R^N) given in Exercise B.17. Consequently:

    ∫_E 1_A ∘ ψ dμW = ∫_Ω (1_A ∘ ψ)(t ↦ B^N_t(ω)) dP(ω) = ∫_Ω 1_A(B^N_{τx^r}(ω)) dP(ω) = μ(A).

On the other hand, for any rotation R ∈ SO(N), the process {R^T ∘ B^N_t}_{t≥0} is also a standard Brownian motion, and hence it generates the same Wiener measure:

    ∫_E 1_A ∘ ψ dμW = ∫_Ω (1_A ∘ ψ)(t ↦ R^T B^N_t(ω)) dP(ω) = ∫_Ω 1_A ∘ R^T B^N_{τx^r} dP = μ(RA).

It follows that μ(A) = μ(RA), so μ is indeed a rotationally invariant probability measure on ∂Br(0) that must coincide with the normalized spherical measure σ^{N−1} by Exercise 2.11. Finally, Theorem B.28 yields:

    u(x) = ∫_{∂Br(0)} u(x + y) dμ(y) = ⨍_{∂Br(0)} u(x + y) dσ^{N−1}(y) = ⨍_{∂Br(x)} u dσ^{N−1}.

Recalling that u is Borel as derived in Exercise B.26, harmonicity of u results from Remark C.20. □
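Corollary B.29 is also the engine behind the classical “walk on spheres” method: since the exit point of Brownian motion from a ball centered at the current position is uniformly distributed on that sphere, the extension u in (B.10) can be sampled without simulating whole paths. A sketch for the unit disk follows; the ε-shell stopping rule and all parameters are illustrative conventions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(4)

def harmonic_u(x0, F, M=10_000, eps=1e-3):
    """Walk-on-spheres estimate of u(x0) = E[F(x0 + B_tau)] on D = B_1(0)."""
    vals = np.empty(M)
    for m in range(M):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)        # distance to the boundary circle
            if r < eps:                        # close enough to the boundary: stop
                break
            th = rng.uniform(0.0, 2.0 * np.pi) # uniform exit point on the largest
            x = x + r * np.array([np.cos(th), np.sin(th)])  # inscribed circle
        vals[m] = F(x / np.linalg.norm(x))     # evaluate F on the projected point
    return float(vals.mean())

# Boundary data F(y) = y1^2 - y2^2 is the trace of the harmonic function
# h(x) = x1^2 - x2^2, so u should reproduce h inside the disk.
F = lambda y: y[0] ** 2 - y[1] ** 2
u_est = harmonic_u([0.3, 0.1], F)
u_exact = 0.3 ** 2 - 0.1 ** 2
```

Each walk takes on average only O(log(1/ε)) jumps, which is why this scheme is far cheaper than step-by-step path simulation.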

Appendix C

Background in PDEs

In this chapter we recall definitions and both background and auxiliary material on the chosen topics in PDEs: Lebesgue and Sobolev spaces, semicontinuous functions, harmonic and p-harmonic functions, regularity theory, and the relation of various notions of solutions to the p-Laplacian: weak, p-harmonic, viscosity. The presented material may be found in graduate textbooks on Analysis and PDEs, such as: Brezis (2011), Evans (2010), Adams and Fournier (2003), and further in classical monographs about the nonlinear potential theory: Lindqvist (2019), Heinonen et al. (2006).

C.1 Lebesgue Lp Spaces and Sobolev W1,p Spaces

Let U ⊂ R^N be an open (possibly unbounded) set. In this section we consider the measurable space (U, L^N(U), |·|), where L^N(U) is the σ-algebra of Lebesgue-measurable subsets of U, and |·| is the N-dimensional Lebesgue measure. As in Definition A.5, the space L1(U) = L1(U, L^N(U), |·|) consists of (Lebesgue) measurable functions f : U → R̄ such that:

    ‖f‖_{L1(U)} = ∫_U |f| dx < +∞,

where the (Lebesgue) integral is defined as in Appendix A.2. We identify with a given f all functions g coinciding with f in U \ A, for some zero measure set A ⊂ U; we say then that g is a representative of f. We continue with the convention that when a certain property is satisfied outside of a set of zero measure, we say that it holds almost everywhere and write: for a.e. x ∈ U.

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020
M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3


Clearly, the basic results in previous sections regarding the monotone convergence and the Lebesgue dominated convergence (Theorem A.7), as well as the iterated integrals (formula (A.3) and the Fubini–Tonelli theorem), still hold.

Definition C.1
(i) Let p ∈ [1,∞). We define the space:

    Lp(U) = {f : U → R̄ measurable; |f|^p ∈ L1(U)}
          = {f : U → R̄ measurable; ‖f‖_{Lp(U)} = (∫_U |f|^p dx)^{1/p} < +∞}.

(ii) For p = +∞ we define the space:

    L∞(U) = {f : U → R̄ measurable; ‖f‖_{L∞(U)} = ess sup_{x∈U} |f(x)| < +∞},

where for a given set A ⊂ U one has:

    ess sup_{x∈A} f(x) = inf{ sup_{x∈A\B} |f(x)|; B ⊂ A and |B| = 0 }.

Theorem C.2 For every p ∈ [1,∞], the Lebesgue space Lp(U) is a Banach space with norm ‖·‖_{Lp(U)}. When p ∈ (1,∞) then Lp(U) is reflexive. When p ∈ [1,∞) then Lp(U) is separable.

We write: f ∈ Lp_loc(U), if f ∈ Lp(V) for every open bounded set V such that V̄ ⊂ U. When f ∈ L1_loc(U), we say that f is locally integrable (clearly, f ∈ Lp_loc(U) for any p implies the local integrability of f). The Lebesgue differentiation theorem states that a locally integrable function coincides almost everywhere with its (existing a.e.) limit of averages on shrinking neighbourhoods:

Theorem C.3 Let f ∈ L1_loc(U). Then:

    f(x) = lim_{r→0} ⨍_{Br(x)} f(y) dy    for a.e. x ∈ U.
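For a concrete one-dimensional instance (an illustration with my own choice of f, not part of the text): for f(y) = y², the average of f over [x − r, x + r] equals x² + r²/3 exactly, so the shrinking averages converge to f(x) at rate r².

```python
import numpy as np

# Shrinking averages of f(y) = y^2 around a fixed point x.
x = 0.7
f = lambda y: y ** 2
radii = [0.5, 0.1, 0.01]

avgs = []
for r in radii:
    grid = np.linspace(x - r, x + r, 100_001)   # fine uniform grid on the ball
    avgs.append(float(np.mean(f(grid))))        # approximates the average integral

errors = [abs(a - f(x)) for a in avgs]          # behaves like r^2 / 3
```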

Recall that the linear space Cc∞(U) consists of smooth test functions φ : U → R whose support: supp φ = closure{φ ≠ 0} is compact and contained in U. The following density result is valid for all p ∈ [1,∞):

    Lp(U) = closure_{Lp(U)}(Cc∞(U)),        (C.1)


and we have the very useful fundamental theorem of calculus of variations:

Theorem C.4 If f ∈ L1_loc(U) and

    ∫_U f φ dx = 0    for all φ ∈ Cc∞(U),

then f = 0 a.e. in U.

For every p ∈ [1,∞] define the conjugate exponent p′ by requesting that: 1/p + 1/p′ = 1. Then Hölder's inequality asserts:

    ∫_U |f g| dx ≤ ‖f‖_{Lp(U)} · ‖g‖_{Lp′(U)}    for all f ∈ Lp(U), g ∈ Lp′(U).        (C.2)

The main feature of conjugate exponents is that for p ∈ [1,∞) the space Lp′(U) can be identified with the dual space [Lp(U)]* to Lp(U), in the sense that all linear continuous functionals on Lp(U) have the form f ↦ ∫_U f g dx for some g ∈ Lp′(U). In this context, we say that a sequence {fn ∈ Lp(U)}_{n=1}^∞ converges weakly to the limit f ∈ Lp(U) (we write: fn ⇀ f in Lp(U)) provided that:

    lim_{n→∞} ∫_U fn g dx = ∫_U f g dx    for all g ∈ Lp′(U).
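A discrete sanity check of Hölder's inequality (C.2), with the normalized counting measure on grid points playing the role of |·| (illustrative parameters of my own): for p = 3 the conjugate exponent is p′ = 3/2, and the inequality holds for arbitrary samples.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hölder's inequality for the normalized counting measure on n points:
# mean(|f*g|) <= mean(|f|^p)^(1/p) * mean(|g|^q)^(1/q), with 1/p + 1/q = 1.
n = 10_000
f = rng.normal(size=n)
g = rng.normal(size=n)
p, q = 3.0, 1.5
assert abs(1.0 / p + 1.0 / q - 1.0) < 1e-12   # conjugate exponents

lhs = np.mean(np.abs(f * g))
rhs = np.mean(np.abs(f) ** p) ** (1.0 / p) * np.mean(np.abs(g) ** q) ** (1.0 / q)
```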

U

Clearly, strong convergence implies weak convergence by (C.2), but the converse is, in general, not true. The following compactness property is related to the reflexivity statement in Theorem C.2: Theorem C.5 Assume that p ∈ (1, ∞). Then every bounded sequence {fn ∈ p Lp (U )}∞ n=1 has a subsequence that converges weakly to some limit f ∈ L (U ). One of the central notions in analysis and PDEs is the notion of Sobolev spaces. Intuitively, Sobolev space consists of functions that can be assigned a “weak gradient” through integration by parts (against gradients of test functions), and that have a prescribed summability exponent. Definition C.6 Let p ∈ [1, ∞]. Define the space: . W 1,p (U ) = f ∈ Lp (U ); there exists g ∈ Lp (U, RN ) such that: ˆ ˆ f ∇φ dx = − φg dx for all φ ∈ C∞ c (U ) . U

U

The function g is called the distributional (weak) gradient of f and we write: . ∇f = g.


By Theorem C.4 it is easy to observe that the distributional gradient is defined uniquely (in the a.e. sense). When f ∈ C1(U) ∩ Lp(U) and ∇f ∈ Lp(U, R^N), then f ∈ W1,p(U) and the distributional gradient coincides with the classical one. Conversely, when f ∈ C(U) ∩ W1,p(U) (i.e. f has a continuous representative) and the distributional gradient satisfies ∇f ∈ C(U), then f ∈ C1(U). As in Theorem C.2 we have:

Theorem C.7 For every p ∈ [1,∞], the Sobolev space W1,p(U) is a Banach space with norm ‖f‖_{W1,p(U)} = ‖f‖_{Lp(U)} + ‖∇f‖_{Lp(U,R^N)}. When p ∈ (1,∞) then W1,p(U) is reflexive. When p ∈ [1,∞) then W1,p(U) is separable.

We write: f ∈ W1,p_loc(U), if f ∈ W1,p(V) for every open set V such that V̄ ⊂ U. The following easy properties of Sobolev functions are collected as an exercise:

Exercise C.8
(i) Let {fn ∈ W1,p(U)}_{n=1}^∞ be a sequence of Sobolev functions converging in Lp(U) to some f and such that {∇fn}_{n=1}^∞ converges in Lp(U, R^N) to some g. Then f ∈ W1,p(U) and g = ∇f.
(ii) If ρ ∈ C1(Ū) and f ∈ W1,p(U), then ρf ∈ W1,p(U) and the distributional gradient equals: ∇(ρf) = ρ∇f + f∇ρ.
(iii) If ρ ∈ L1(R^N) and f ∈ W1,p_loc(R^N), then ρ ∗ f ∈ W1,p_loc(R^N) and ∇(ρ ∗ f) = ρ ∗ ∇f. Recall that the convolution of two functions ρ and f is given by the formula: (ρ ∗ f)(x) = ∫_{R^N} ρ(y) f(x − y) dy.
(iv) Let ρ ∈ C1(R) be such that ρ(0) = 0 and that ρ′ is bounded. Then for every f ∈ W1,p(U) we have: ρ ∘ f ∈ W1,p(U) and ∇(ρ ∘ f) = (ρ′ ∘ f)∇f.
(v) Given a diffeomorphism h : U1 → U2 of class C1 between two open sets U1, U2 ⊂ R^N, such that ∇h and ∇h^{−1} are bounded, and given f ∈ W1,p(U2), we have f ∘ h ∈ W1,p(U1) and ∇(f ∘ h) = (∇f ∘ h)∇h.

We also have two observations regarding truncations of Sobolev functions:

Exercise C.9 Denote the positive and negative parts of f : U → R̄ by:

    f⁺(x) = max{f(x), 0},

    f⁻(x) = (−f)⁺(x).

(i) If f ∈ W1,p(U), then f⁺ ∈ W1,p(U) and:

    ∇f⁺ = ∇f a.e. in {f > 0},    ∇f⁺ = 0 a.e. in {f ≤ 0}.

(ii) If {fn ∈ W1,p(U)}_{n=1}^∞ converges to f in W1,p(U), then the truncated sequence {fn⁺}_{n=1}^∞ converges in W1,p(U) to f⁺.


Sobolev functions can be approximated by smooth functions, as stated in the celebrated Meyers–Serrin theorem (known as the “H = W ” theorem): Theorem C.10 Let p ∈ [1, ∞). Then:   W 1,p (U ) = closureW 1,p (U ) C∞ (U ) ∩ W 1,p (U ) . When ∂U has Lipschitz regularity (in fact, a less restrictive “segment condition” in Lemma C.58 (i), suffices) and p < ∞, then every W 1,p (U ) function can be N approximated by a sequence of C∞ c (R ) test functions. Approximation by test functions supported in U is achieved only in the following important subspace of W 1,p (U ), defined for p ∈ [1, ∞):   . 1,p W0 (U ) = closureW 1,p (U ) C∞ c (U ) . 1,p

1,p

Clearly, W0 (RN ) is a Banach space and W0 (RN ) = W 1,p (RN ). Intuitively, 1,p functions in W0 (U ) are the Sobolev functions with zero boundary values. In this context, the Poincaré inequality is valid: Theorem C.11 If p ∈ [1, ∞) and U has finite width (i.e. its projection on some line in RN is bounded), then there exists a constant C depending only on p and U , with: f Lp (U ) ≤ C ∇f Lp (U,RN )

1,p

for all f ∈ W0 (U ).

We define the weak convergence in W 1,p (U ) for p ∈ [1, ∞) as the weak convergence of the given sequence and the corresponding sequence of distributional 1,p (U ) to gradients. More precisely, {fn ∈ W 1,p (U )}∞ n=1 converges weakly in W N p p f provided that fn  f in L (U ) and ∇fn  ∇f in L (U, R ). Similarly to Theorem C.5, we now have: Theorem C.12 Assume that p ∈ (1, ∞). Then every bounded sequence {fn ∈ 1,p (U ). W 1,p (U )}∞ n=1 has a subsequence that converges weakly to some f ∈ W The next result gathers the embedding theorems. Namely, Sobolev W 1,p functions are actually of higher regularity than merely Lp and the increase in regularity is more pronounced for lower dimensions N . Theorem C.13 (i) (Sobolev embedding theorem). Let p ∈ [1, N) and define the Sobolev conjugate exponent p∗ by: p1∗ = p1 − N1 . Then the following embedding is continuous: ∗

i : W 1,p (RN ) → Lp (RN ).

208

C Background in PDEs

(ii) (Sobolev embedding theorem, critical case). For every q ∈ [N, ∞) we have the continuous embedding: i : W 1,N (RN ) → Lq (RN ). (iii) (Morrey embedding theorem). If p ∈ (N, ∞), then we have the continuous embedding: i : W 1,p (RN ) → L∞ (RN ). Moreover, there exists a constant C > 0 depending only on p and N , such that for all f ∈ W 1,p (RN ) there holds: |f (x) − f (y)| ≤ C|x − y|

1− N p

∇f Lp (RN ,RN )

for a.e. x, y ∈ RN . (C.3)

In particular, f has a Hölder continuous representative for which the above inequality holds for all x, y ∈ RN . The assertions of Theorem C.13 remain true when the domain RN is replaced by U satisfying the cone condition i.e. such that each x ∈ U is the vertex of some cone Cx ⊂ U which is a rigid motion of one finite cone. The Hölder continuity bound in (C.3) additionally requires Lipschitz continuity of ∂U and the constant C depends then on p and U . When U is bounded and regular then all the subcritical Sobolev embeddings are compact, in view of the Rellich–Kondrachov theorem below: Theorem C.14 Let U be bounded and satisfy the cone condition. Then, the following embeddings are compact: i : W 1,p (U ) → Lq (U )

for all q ∈ [1, p∗ ), when p ∈ [1, N),

i : W 1,N (U ) → Lq (U )

for all q ∈ [1, ∞),

i : W 1,p (U ) → Cbdd (U )

when p ∈ (N, ∞),

where by Cbdd (U ) we denoted the space of continuous bounded functions on U . In particular, i : W 1,p (U ) → Lp (U ) is compact for all p and N . When ∂U is additionally Lipschitz continuous, then for p ∈ (N, ∞) the embedding: i : W 1,p (U ) → C0,α (U¯ ) is compact for every Hölder exponent α ∈ (0, 1 −

N p ).

C

Background in PDEs

209

C.2 Semicontinuous Functions In this section we recall the definition and first properties of semicontinuous functions. These are functions whose values at points close to a given domain point remain approximately below or approximately above (as opposed to “approximately close”, requested for continuous functions) the indicated value. Definition C.15 A function f : U → (−∞, +∞] defined on an open subset U of RN is called lower-semicontinuous if: f (x) ≤ lim inf f (y) y→x

for all x ∈ D,

(C.4)

where we denote:   . lim inf f (y) = lim inf f (y); y ∈ Br (x) ∩ (D \ {x}) . y→x

r→0

If (−f ) is lower-semicontinuous, then f is called upper-semicontinuous. Clearly, f : U → R is continuous if an only if it is both lower- and uppersemicontinuous. We summarize other basic properties below: Exercise C.16 (i) A function f is lower-semicontinuous if and only if the set {f > λ} is open for every λ ∈ R. In particular, lower-semicontinuous functions are Borel-regular. (ii) A lower-semicontinuous function f : U → (−∞, +∞] attains its infimum on any compact set K ⊂ U (but it does not have to attain its supremum). In particular, every lower-semicontinuous function is locally bounded from below. (iii) Supremum of an arbitrary family of lower-semicontinuous functions is lowersemicontinuous. Minimum of finitely many lower-semicontinuous functions is lower-semicontinuous. (iv) For every lower-semicontinuous f on U there exists a nondecreasing sequence {fn ∈ C∞ (U )}∞ n=1 converging pointwise to f as n → ∞. (v) If f : U → (−∞, +∞] is locally bounded from below, then the function g(x) = lim infy→x f (y) is lower-semicontinuous in U . For functions that are defined only up to zero measure sets, Definition C.15 and the construction in Exercise C.16 (v) has the following natural counterpart: Exercise C.17 (i) Assume that a measurable function f : U → R¯ is locally essentially bounded from below, i.e. for every compact set K ⊂ U there exists a measure zero

210

C Background in PDEs

set A such that infK\A f > −∞. Prove that the following function is lowersemicontinuous in U : g(x) = ess lim inf f (y), y→x

. where ess lim infy→x f (y) = limr→0 ess infBr (x) f . 1 (ii) Prove that f ∈ Lloc (U ) has a continuous representative if and only if: ess lim inf f (y) = ess lim sup f (y) y→x

y→x

for all x ∈ U.

(C.5)

Finally, we recall the Ascoli–Arzelá theorem, valid for sequences of continuous functions on compacts: Theorem C.18 Let K ⊂ RN be a compact set and assume that the sequence {fn ∈ C(K)}∞ n=1 satisfies: (i) (Equiboundedness). There exists C ≥ 0 such that fn L∞ (K) ≤ C for all n ≥ 1. (ii) (Equicontinuity). For every  > 0 there exists δ > 0 such that if x, y ∈ K satisfy |x − y| < δ, then |fn (x) − fn (y)| <  for all n ≥ 1. Then {fn }∞ n=1 has a subsequence, converging uniformly in K to some f ∈ C(K).

C.3 Harmonic Functions In this section we review the basic properties of one of the most important equations of mathematical physics, that is the Laplace equation. The same material and a further discussion can be found, among others, in the graduate textbooks by Evans (2010) or Gilbarg and Trudinger (2001). Let D ⊂ RN be an open, bounded, connected set. We say that a function u ∈ 2 C (D) is harmonic in D, provided that: u =

N ∂ 2u =0 (∂xi )2

in D.

(C.6)

i=1

Other notions of harmonicity and ´ the derivation of (C.6) as the Euler–Lagrange equation of the energy I2 (u) = D |∇u|2 are discussed in the general case of the p-Laplacian, p ∈ (1, ∞), in Sect. C.4.

C

Background in PDEs

211

Theorem C.19 Let u ∈ C(D). Then the following conditions are equivalent: (i) u satisfies the mean value property on spheres: u(y) dσ N −1 (y)

u(x) =

for all B¯ r (x) ⊂ D.

(C.7)

∂Br (x)

Here, σ N −1 is the spherical measure as in Example A.9. (ii) u satisfies the mean value property on balls: u(x) =

u(y) dy

for all B¯ r (x) ⊂ D.

(C.8)

Br (x)

(iii) u ∈ C2 (D) and u is harmonic in D. Moreover, if the above conditions are satisfied, then u ∈ C∞ (D). Proof 1. For a fixed x ∈ D and r ∈ (0, dist(x, ∂D)), denote: . φ x (r) =

u(y) dσ N −1 (y) = ∂Br (x)

u(x + r(y − x)) dσ N −1 (y), ∂B1 (x)

so that integrating in “polar coordinates” as in (A.2), we obtain: ˆ

ˆ u(y) dy =

r

|∂Bs (x)| · φx (s) ds

for all B¯ r (x) ⊂ D.

0

Br (x)

x Now, if u satisfies (C.7), ´ ´ rthen φ (r) = u(x) for all r ∈ (0, dist(x, ∂D)) and so Br (x) u(y) dy = u(x) 0 |∂Bs (x)| ds = u(x) · |Br (x)|, which gives (C.8). On the other hand, assuming (C.8) and noting that ddr |Br (x)| = N r N −1 |B1 (0)| = r N −1 |∂B1 (0)| = |∂Br (x)|, implies:

0=

d dr

u(y) dy Br (x)

ˆ   d  1 x |∂B (x)| · φ (r) · |B (x)| − (x)| u(y) dy |B r r r 2 dr |Br (x)| Br (x)   |∂Br (x)| x φ (r) − u(y) dy for all r ∈ (0, dist(x, ∂D)). = |Br (x)| Br (x)

=

ffl It follows then that φr (x) = Br (x) u(y) dy = u(x), which is (C.8). We have thus proved the equivalence of conditions (i) and (ii).

212

C Background in PDEs

2. To show that (iii) implies (i), use the divergence theorem: 

 ∇u(x + r(y − x)), y − x dσ N −1 (y)

d x φ (r) = dr

∂B1 (x)



∇u(y),

= ∂Br (x)

=

1 |Br (x)|

ˆ

y − x  N −1 dσ (y) = r

∂Br (x)

∂u dσ N −1 (y) ∂n

u(y) dy = 0, Br (x)

(C.9) where we denote the outward unit normal vector by n. Hence, the function r → φ x (r) is constant, coinciding with its limit value lim

r→0 ∂Br (x)

u(y) dσ N −1 (y) that

equals u(x) in view of continuity of u. We thus get (C.7). 3. To prove that (i) implies u ∈ C∞ (D) and (iii), let {φ }>0 be a family of smooth, . radially symmetric mollifiers. Namely: φ (x) = 1N φ( x ) for some function φ ∈ ´ C∞ c (B1 (0)) that is nonnegative, radial and satisfies B1 (0) φ(x) dx = 1. For every B (x) ⊂ D we then have: ˆ (u ∗ φ )(x) = =

B (0)

ˆ  0

φ (y)u(x − y) dy =

ˆ ˆ 0

∂Br (0)

φ (y)u(x − y) dσ N −1 (y) dr

(φ )|∂Br (0) · |∂Br (0)| · u(x) dr = u(x)

ˆ B (0)

φ (y) dy = u(x).

Consequently, u ∈ C∞ (D). By the identities in (C.9) it follows that: 0 = ´ d x 1 dr φ (r) = |∂Br (x)| Br (x) u(y) dy, for all Br (x) ⊂ D. Thus u = 0 in D.   We thus see that every harmonic function is smooth. In fact, it is also analytic i.e. represented, locally in its domain, by a convergent power series. Another remarkable result is that for a continuous and bounded function u : D → R, the strong converse mean value property holds. That is, iffflfor every x ∈ D there exists some radius r(x) ∈ (0, dist(x, ∂D)) such that u(x) = Br(x) (x) u(y) dy, ¯ this then u must be harmonic in D. Under the additional assumption that u ∈ C(D), result is due to Volterra (1909) and Kellogg (1934). In the general case, it has been proved in Hansen and Nadirashvili (1993, 1994), where it is also shown that the same result is in general false when taking spherical (instead of ball) mean values. Remark C.20 Equivalence of (i)–(iii) in Theorem C.19 remains valid under the assumption that u : D → R is bounded and Borel, where in (i) it is sufficient for the mean value property on spheres to hold for every x ∈ D and almost every r ∈ (0, dist(x, ∂D)). Indeed, for bounded Borel u, the integral in the right hand side of (C.7) is well defined for a.e. indicated r, in view of Fubini’s theorem, and

C

Background in PDEs

213

the weakened condition (i) implies (ii) by the same proof. The other implications remain verbatim the same.

Exercise C.21 Let D = Br(0) ⊂ R^N and let F ∈ C(∂D). Then the function:

u(x) = (r² − |x|²) r^{N−2} ⨍_{∂Br(0)} F(y) / |x − y|^N dσ^{N−1}(y)

is harmonic in D and it satisfies lim_{x→x0, x∈D} u(x) = F(x0) for all x0 ∈ ∂D. The function (x, y) → (r² − |x|²) r^{N−2} / |x − y|^N is called the Poisson kernel.
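The mean value property (C.7) behind these results is easy to probe numerically. A minimal sketch (the harmonic polynomial u(x, y) = x² − y², the center and the sampling resolution are our choices, not from the text): averaging u over circles around a point recovers the value at the point.

```python
import math

# Harmonic test function on R^2 (our choice): u(x, y) = x^2 - y^2.
def u(x, y):
    return x * x - y * y

def sphere_average(cx, cy, r, n=10_000):
    """Average of u over the circle of radius r centered at (cx, cy),
    computed by an equispaced angular sum (spectrally accurate here)."""
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        total += u(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / n

center = (0.3, -0.7)
for r in (0.1, 0.5, 1.0):
    avg = sphere_average(*center, r)
    assert abs(avg - u(*center)) < 1e-9, (r, avg)
print("mean value property verified at", center)
```

The same average computed for a non-harmonic function (say x² + y²) would drift away from the center value as r grows, which is exactly the dichotomy the converse mean value theorems exploit.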

Exercise C.22 Using the outline below, work out an alternative proof of the implication (ii)⇒(iii) of Theorem C.19.

(i) Let u ∈ C(D) satisfy (C.8). For a given B̄r(x) ⊂ D, let v ∈ C(B̄r(x)) ∩ C^∞(Br(x)) be given by the formula in Exercise C.21, to satisfy: Δv = 0 in Br(x) and v = u on ∂Br(x). Applying the assumed mean value property of u together with the same property of the harmonic v, deduce a contradiction from: max_{B̄r(x)} (v − u) = v(x0) − u(x0) > 0 at some x0 ∈ Br(x).
(ii) Conclude that v = u in B̄r(x) and further, that u must be harmonic in D.

The same outline above may be followed to prove the maximum principle for the Laplace equation:

Theorem C.23 If u ∈ C(D̄) ∩ C^∞(D) is harmonic in D, then max_{D̄} u = max_{∂D} u. Moreover, if max_D u = u(x) at some x ∈ D, then u is constant in D.

Proof Define M := max_{D̄} u and assume that M = u(x) for some x ∈ D. Theorem C.19 then yields: M ≤ ⨍_{Br(x)} u(y) dy ≤ M for every r ≤ dist(x, ∂D). Consequently, u ≡ M in Br(x) and thus the set u⁻¹(M) is both open and closed in D. Therefore: u ≡ M in D, proving both claims of the theorem. □

Corollary C.24 For every F ∈ C(∂D), the boundary value problem:

Δu = 0 in D,   u = F on ∂D,

has at most one solution u ∈ C(D̄) ∩ C^∞(D).

Our final statement resulting from the mean value property is the Harnack inequality for harmonic functions:

Theorem C.25 Let V be an open, bounded, connected set such that V ⊂ V̄ ⊂ D. There exists a constant C > 0 depending only on V and D, such that:

sup_V u ≤ C inf_V u,

for all nonnegative functions u that are harmonic in D.


Proof Let r := (1/2) dist(V, ∂D). For any x, y ∈ V with |x − y| ≤ r we have:

u(x) = ⨍_{B2r(x)} u(z) dz ≥ ( |Br(y)| / |B2r(x)| ) ⨍_{Br(y)} u(z) dz = (1/2^N) u(y).

By switching the roles of x and y it further follows that:

(1/2^N) u(y) ≤ u(x) ≤ 2^N u(y).

By compactness, the set V̄ may be covered by a finite number n ≥ 1 of open balls with radius r. Thus we have:

(1/2^{nN}) u(y) ≤ u(x) ≤ 2^{nN} u(y)   for all x, y ∈ V,

proving the stated inequality with C = 2^{nN}. □
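The two-point estimate 2^{−N} u(y) ≤ u(x) ≤ 2^N u(y) at the heart of this proof can be sampled numerically. A minimal sketch (the positive harmonic function u(x, y) = x + 2 on the unit disc, the radius and the sampling are our choices, not from the text):

```python
import math
import random

# A positive harmonic function on B_1(0) in R^2 (our choice): u(x, y) = x + 2.
def u(x, y):
    return x + 2.0

N = 2        # dimension
r = 0.1      # pair distance; B_{2r}(x) stays inside B_1(0) for |x| <= 0.5
random.seed(0)

# Sample pairs x, y with |x - y| <= r and check the two-point Harnack bound.
for _ in range(1000):
    while True:
        x = (random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
        if math.hypot(*x) <= 0.5:
            break
    y = (x[0] + random.uniform(-r, r) / 2, x[1] + random.uniform(-r, r) / 2)
    assert 2.0 ** (-N) * u(*y) <= u(*x) <= 2.0 ** N * u(*y)
print("two-point Harnack bound holds on all sampled pairs")
```

Chaining this bound across a finite cover of V̄, as in the proof, is what produces the global constant C = 2^{nN}.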

C.4 The p-Laplacian and Its Variational Formulation

In this and the following sections we present the preliminaries of the classical theory of the p-Laplace equation (we restrict our attention to bounded domains D ⊂ R^N), whose probabilistic (game-theoretical) treatment is the content of this book. We refer to the monographs by Heinonen et al. (2006) and Lindqvist (2019) for a thorough discussion; here we only introduce the basic definitions, prove facts needed in the future study and state the main properties.

Let D ⊂ R^N be an open, bounded, connected set and let p ∈ (1, ∞). Consider the following Dirichlet integral:

Ip(u) = ∫_D |∇u(x)|^p dx   for all u ∈ W^{1,p}(D).

Minimizing the energy Ip among all functions u subject to some given boundary data, the condition for the vanishing of the first variation of Ip (see Lemma C.28 below) takes the form:

∫_D ⟨|∇u|^{p−2} ∇u, ∇η⟩ dx = 0   for all η ∈ C^∞_c(D).

Assuming sufficient regularity of u, the classical divergence theorem yields:

∫_D η div( |∇u|^{p−2} ∇u ) dx = 0   for all η ∈ C^∞_c(D),


which, by the fundamental theorem of calculus of variations (Theorem C.4), becomes:

Δp u := div( |∇u|^{p−2} ∇u ) = 0 in D.    (C.10)

Definition C.26 The second order differential operator Δp is called the p-Laplacian and the partial differential equation (C.10) is called the p-harmonic equation.

An example of a p-harmonic function in the punctured space is provided by the following radially symmetric construction:

Exercise C.27 For a fixed x0 ∈ R^N, prove that the smooth radial function u : R^N \ {x0} → R given by:

u(x) = |x − x0|^{(p−N)/(p−1)}  if p ≠ N,     u(x) = log |x − x0|  if p = N,

satisfies: Δp u = 0 and ∇u ≠ 0 in R^N \ {x0}.

We now observe:

Lemma C.28 Let w ∈ W^{1,p}(D) for some p ∈ (1, ∞). Then the problem:

minimize { Ip(u); u − w ∈ W^{1,p}_0(D) }    (C.11)

has a unique solution u ∈ W^{1,p}(D). Equivalently, u solves (C.11) if and only if u − w ∈ W^{1,p}_0(D) and:

∫_D ⟨|∇u|^{p−2} ∇u, ∇η⟩ dx = 0   for all η ∈ C^∞_c(D),    (C.12)

where we set |∇u(x)|^{p−2} ∇u(x) := |0|^{p−2} 0 = 0 whenever ∇u(x) = 0 and p < 2.

Proof 1. We will frequently use the estimate:

|b|^p ≥ |a|^p + p ⟨|a|^{p−2} a, b − a⟩   for all a, b ∈ R^N,    (C.13)

following from convexity of the function x → |x|^p and from ∇|x|^p = p|x|^{p−2} x. To prove existence of a solution to (C.11), define:

Imin := inf { Ip(u); u − w ∈ W^{1,p}_0(D) } ∈ [0, Ip(w)].


Consider a sequence {un}_{n=1}^∞ satisfying un − w ∈ W^{1,p}_0(D) and lim_{n→∞} Ip(un) = Imin. It is easy to note that {un}_{n=1}^∞ is bounded in W^{1,p}(D), because:

‖un‖_{Lp(D)} + ‖∇un‖_{Lp(D)} ≤ ‖un − w‖_{Lp(D)} + ‖w‖_{Lp(D)} + ‖∇un‖_{Lp(D)}
  ≤ C ‖∇un − ∇w‖_{Lp(D)} + ‖w‖_{Lp(D)} + ‖∇un‖_{Lp(D)}
  ≤ C ( ‖w‖_{W^{1,p}(D)} + ‖∇un‖_{Lp(D)} ) ≤ C ( ‖w‖_{W^{1,p}(D)} + Ip(un)^{1/p} ) ≤ C,

where we used the Poincaré inequality in Theorem C.11 to estimate ‖un − w‖_{Lp(D)}. Consequently, by Theorem C.12, {un}_{n=1}^∞ has a subsequence (that we do not relabel) converging weakly in W^{1,p}(D) to some limit u that must satisfy: un − w ⇀ u − w ∈ W^{1,p}_0(D). Moreover, by (C.13) we have:

Ip(un) ≥ Ip(u) + p ∫_D ⟨|∇u|^{p−2} ∇u, ∇un − ∇u⟩ dx.    (C.14)

Since |∇u|^{p−2} ∇u ∈ L^{p′}(D) with p′ = p/(p−1), and ∇(un − u) converges weakly to 0 in L^p(D), it follows that the second term in the right hand side of (C.14) converges to 0. Therefore we obtain:

Ip(u) ≤ lim_{n→∞} Ip(un) = Imin.

This proves that u is a minimizer in (C.11).

2. For uniqueness, we use the strict convexity of the function x → |x|^p. If u1, u2 both solve (C.11), then u = (1/2)(u1 + u2) satisfies u − w ∈ W^{1,p}_0(D) and:

Imin ≤ Ip(u) = ∫_{{u1=u2}} |∇u|^p dx + ∫_{{u1≠u2}} |∇u|^p dx
  ≤ (1/2) ( ∫_{{u1=u2}} |∇u1|^p dx + ∫_{{u1=u2}} |∇u2|^p dx ) + (1/2) ( ∫_{{u1≠u2}} |∇u1|^p dx + ∫_{{u1≠u2}} |∇u2|^p dx )
  = (1/2) ( Ip(u1) + Ip(u2) ) = Imin,

with the strict inequality on the set {u1 ≠ u2}. Consequently, there must be: |{u1 ≠ u2}| = 0 and so the two functions u1, u2 coincide almost everywhere in D.

3. To show that the minimizer u in (C.11) satisfies (C.12), consider the functions uε = u + εη, for a given η ∈ C^∞_c(D) and each ε ∈ R. Clearly, uε − w ∈ W^{1,p}_0(D) and thus by (C.13) we get:

0 ≥ Ip(u) − Ip(uε) ≥ −pε ∫_D ⟨|∇uε|^{p−2} ∇uε, ∇η⟩ dx.


Consequently:

(sgn ε) · ∫_D ⟨|∇uε|^{p−2} ∇uε, ∇η⟩ dx ≥ 0.

Since |∇uε|^{p−2} ∇uε converges in L^{p′}(D) to |∇u|^{p−2} ∇u as ε → 0, we obtain ∫_D ⟨|∇u|^{p−2} ∇u, ∇η⟩ dx = 0.

On the other hand, if for some u ∈ W^{1,p}(D) the identity (C.12) holds with all η ∈ C^∞_c(D), then the same is true with all η ∈ W^{1,p}_0(D). Let v ∈ W^{1,p}(D) satisfy v − w ∈ W^{1,p}_0(D). Then v − u ∈ W^{1,p}_0(D) and thus:

0 = ∫_D ⟨|∇u|^{p−2} ∇u, ∇v − ∇u⟩ dx ≤ (1/p) ( Ip(v) − Ip(u) ),

where we used (C.13). This proves Ip(u) ≤ Ip(v), establishing (C.11). □
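The radial p-harmonic function of Exercise C.27 gives a concrete check that Δp u = div(|∇u|^{p−2} ∇u) really vanishes away from the singularity. A minimal numerical sketch (the dimension N = 2, exponent p = 3, the sample points and the finite-difference step are our choices, not from the text): the gradient is coded from its exact radial formula, and only the divergence of the flux is differenced numerically.

```python
import math

# Exercise C.27 in N = 2, p = 3 (our choice): u(x) = |x|^((p-N)/(p-1)) = |x|^(1/2).
N, p = 2, 3
alpha = (p - N) / (p - 1)

def flux(x, y):
    """The flux |∇u|^(p-2) ∇u, from the exact formulas
    ∇u = alpha * r^(alpha-2) * (x, y) and |∇u| = alpha * r^(alpha-1)."""
    r = math.hypot(x, y)
    gx, gy = alpha * r ** (alpha - 2) * x, alpha * r ** (alpha - 2) * y
    s = (alpha * r ** (alpha - 1)) ** (p - 2)
    return s * gx, s * gy

def p_laplacian(x, y, h=1e-4):
    """Central-difference divergence of the flux = numerical Δ_p u."""
    dfx = (flux(x + h, y)[0] - flux(x - h, y)[0]) / (2 * h)
    dfy = (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2 * h)
    return dfx + dfy

for pt in [(1.0, 0.7), (-0.4, 1.3), (2.0, -0.1)]:
    assert abs(p_laplacian(*pt)) < 1e-5, pt
print("Δ_p u ≈ 0 away from the origin")
```

In this case the flux reduces to a constant multiple of (x, y)/|x|², which is divergence-free off the origin; the numerical test confirms this up to the O(h²) differencing error.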

C.5 Weak Solutions to the p-Laplacian

When it comes to the notion of a solution to the p-Laplace equation (C.10), several definitions are being used. In this and the next sections, we will introduce the following ones: (i) weak solutions, defined via test functions under the integral; (ii) p-harmonic (potential-theoretic) solutions, via the comparison principle; (iii) viscosity solutions, via test functions evaluated at points of contact.

Definition C.29 Let D ⊂ R^N be an open, bounded, connected set and let p ∈ (1, ∞). We say that u ∈ W^{1,p}_loc(D) is a weak solution to Δp u = 0 in D if (C.12) holds. We say that u is a weak supersolution to Δp u = 0 in D if:

∫_D ⟨|∇u|^{p−2} ∇u, ∇η⟩ dx ≥ 0   for all η ∈ C^∞_c(D) with η ≥ 0.

When (−u) is a weak supersolution, then we call u a weak subsolution to Δp u = 0 in D. If no ambiguity arises, we will often simply write "weak solution" or "weak supersolution" instead of "weak solution to Δp u = 0 in D" or "weak supersolution to Δp u = 0 in D", etc.

Exercise C.30 Show that u is a weak solution if and only if it is both a weak supersolution and a weak subsolution.

It turns out that weak solutions to Δp u = 0 in D are automatically Hölder regular: C^{1,α}_loc(D), with the Hölder exponent α depending only on N and p. The proof of this fact is rather complicated; it has been completed in Uralceva (1968); Uhlenbeck


(1977); Evans (1982) for p ≥ 2, and in Lewis (1983) for the degenerate case of p ∈ (1, 2), whereas the complete arguments covering all exponents p ∈ (1, ∞) in a unified manner (in fact, as a special case of a more general class of quasilinear elliptic PDEs) can be found in DiBenedetto (1983) and Tolksdorf (1984). They are based on a sequence of fundamental properties of the weak super-/subsolutions, which are of independent interest and importance. We present some of them below.

Firstly, note that Hölder continuity of u follows easily in case p > N, by Morrey's embedding in Theorem C.13. For p ∈ (1, N] the reasoning is more involved and it is achieved as follows.

Theorem C.31 Every weak solution u to Δp u = 0 in D is locally essentially bounded, i.e. u ∈ L^∞_loc(D).

The next result is the celebrated Harnack's inequality that, as we shall see, implies Hölder continuity of u for any p ∈ (1, ∞).

Theorem C.32 Let u be a weak solution that is nonnegative in some ball B2r(x) ⊂ D. Then we have:

ess sup_{Br(x)} u ≤ C ess inf_{Br(x)} u,    (C.15)

where the constant C ≥ 1 depends on N, p, but it is independent of u, r, x ∈ D.

The proofs of Theorems C.31 and C.32 rely on the application of the Moser iteration technique in Moser (1961); Serrin (1963, 1964); see also the classical text by Ladyzhenskaya and Ural'tseva (1968). The following standard corollary is in order:

Corollary C.33 Let u be a weak solution. Then:

lim_{r→0} ⨍_{Br(x)} u = ess lim inf_{y→x} u(y) = ess lim sup_{y→x} u(y)   for all x ∈ D.

In particular, u has a continuous representative.

Proof Fix x ∈ D. For any r < (1/3) dist(x, ∂D), Theorem C.31 implies that u ∈ L^∞(B2r(x)). Applying Harnack's inequality (C.15) to the function u − ess inf_{B2r(x)} u, that is a weak solution, nonnegative in B2r(x), we get:

0 ≤ ess sup_{Br(x)} u − ess inf_{B2r(x)} u ≤ C ( ess inf_{Br(x)} u − ess inf_{B2r(x)} u ).    (C.16)


Observe that, since the constant C is independent of r, the right hand side in (C.16) converges to 0 as r → 0, because the function r → ess inf_{Br(x)} u is bounded and nonincreasing. Consequently:

ess lim inf_{y→x} u(y) = lim_{r→0} ⨍_{Br(x)} u.

By the same argument applied to the function (−u) we get: ess lim sup_{y→x} u(y) = lim_{r→0} ⨍_{Br(x)} u. Continuity of the representative v(x) := lim_{r→0} ⨍_{Br(x)} u follows now by Exercise C.17 (ii). □

From now on, we will identify every weak solution u to Δp u = 0 on D with its continuous representative. We now show that Harnack's inequality in fact yields Hölder continuity:

Corollary C.34 Let u be a (continuous) weak solution to Δp u = 0 in D. Then, for every compact set K ⊂ D we have:

|u(x) − u(y)| ≤ C |x − y|^α   for all x, y ∈ K,    (C.17)

where α depends only on N and p, while C depends on N, p, K and u.

Proof 1. Given a compact set K, let ε > 0 be such that K + B3ε(0) ⊂ D. Fix a radius r ∈ (0, ε] and let x ∈ K. We now apply Theorem C.32 to the following two nonnegative, continuous weak solutions on B2r(x): u − (inf_{B2r(x)} u) and (sup_{B2r(x)} u) − u. It follows that:

sup_{Br(x)} u − inf_{B2r(x)} u ≤ C ( inf_{Br(x)} u − inf_{B2r(x)} u ),
sup_{B2r(x)} u − inf_{Br(x)} u ≤ C ( sup_{B2r(x)} u − sup_{Br(x)} u ).

Adding the two inequalities and denoting by ω(x, r) := sup_{Br(x)} u − inf_{Br(x)} u the oscillation of u on Br(x), we obtain: ω(x, r) + ω(x, 2r) ≤ C ( ω(x, 2r) − ω(x, r) ), which yields:

ω(x, r) ≤ λ ω(x, 2r),   with a constant λ = (C − 1)/(C + 1) ∈ [0, 1).    (C.18)

2. Let now x, y ∈ K. If |x − y| ≥ ε, then obviously:

|u(x) − u(y)| ≤ 2 ‖u‖_{L∞(K)} ≤ ( 2 ‖u‖_{L∞(K)} / ε^α ) |x − y|^α.


On the other hand, if |x − y| ∈ (0, ε), then denoting by k ≥ 0 the integer such that ε/2^{k+1} ≤ |x − y| < ε/2^k, and iterating (k + 1) times the bound (C.18), we get:

|u(x) − u(y)| ≤ ω(x, ε/2^k) ≤ λ^{k+1} ω(x, 2ε) ≤ (1/2^{k+1})^α · 2 ‖u‖_{L∞(K+B2ε(0))}
             ≤ ( 2 ‖u‖_{L∞(K+B2ε(0))} / ε^α ) |x − y|^α,

if only α satisfies λ ≤ (1/2)^α. Existence of such an exponent α, depending only on N and p, is guaranteed from λ < 1. □
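The final step of this proof — choosing α with λ ≤ (1/2)^α — is a one-line computation. A small sketch (the Harnack constant C = 10 is hypothetical; in the theorem it depends only on N and p):

```python
import math

C = 10.0                      # hypothetical Harnack constant (depends on N, p)
lam = (C - 1.0) / (C + 1.0)   # oscillation decay factor λ = (C-1)/(C+1) < 1

# Largest α with λ <= (1/2)^α, namely α = log(1/λ)/log 2:
alpha = math.log(1.0 / lam) / math.log(2.0)
assert lam <= 0.5 ** alpha + 1e-12

# Iterating ω(r) <= λ ω(2r) k times turns the geometric decay λ^k into the
# power-law factor (2^-k)^α that drives the Hölder estimate (C.17):
for k in range(1, 20):
    assert lam ** k <= (0.5 ** k) ** alpha + 1e-12
print("Hölder exponent alpha =", alpha)
```

Note how α degrades (tends to 0) as the Harnack constant C grows, matching the statement that α depends only on N and p through C.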

Note that, from the proof above, the dependence of the constant C in (C.17) on u is only through the norm ‖u‖_{L∞(V)}, where V is an open set such that K ⊂ V ⊂ V̄ ⊂ D.

The next corollary to Harnack's inequality (C.15) is known as the strong maximum principle:

Corollary C.35 Let u be a (continuous) weak solution to Δp u = 0 in D. If u attains its maximum (or minimum) in D, then u must be constant.

Proof Assume that u(x0) = max_D u for some x0 ∈ D and apply Theorem C.32 to the nonnegative weak solution u(x0) − u on a ball B2r(x0) ⊂ D. It follows that sup_{Br(x0)} ( u(x0) − u ) ≤ 0, so there must be u ≡ u(x0) in Br(x0). Consequently, the set Dmax = {x ∈ D; u(x) = u(x0)} is open and nonempty. Since Dmax is also closed in D, we deduce that Dmax = D. The case of u attaining its minimum in D follows in the same manner. □

We remark that although our presentation is carried out in bounded domains D, most of the properties remain true also in the unbounded case. In particular, the following Liouville theorem is a direct consequence of Corollary C.35 valid on R^N: if a weak solution u ∈ C(R^N) to Δp u = 0 is bounded, then it must be constant. In view of Harnack's inequality, it is in fact enough to have u bounded from below in order to conclude its constancy.

From the above, we deduce the first version of the comparison principle:

Corollary C.36 Let u1 and u2 be two (continuous) weak solutions to Δp u = 0 in D. Assume that:

lim sup_{y→x} u1(y) ≤ lim inf_{y→x} u2(y)   for all x ∈ ∂D,

where both limits are never simultaneously −∞ or +∞. Then u1 ≤ u2 in D.

Proof Note that u1 − u2 is a weak solution and that for all x ∈ ∂D we have:

lim sup_{y→x} (u1 − u2)(y) ≤ lim sup_{y→x} u1(y) − lim inf_{y→x} u2(y) ≤ 0.


Consequently, if (u1 − u2)(x0) > 0 at some x0 ∈ D, then it easily follows that u1 − u2 must attain its (positive) maximum in D. By Corollary C.35 we get: u1 − u2 ≡ C > 0, contradicting the assumption. □

It is possible to prove the comparison principle in the a.e. sense, valid for weak super- and subsolutions, without referring to Harnack's inequality:

Exercise C.37
(i) Prove the following identity:

⟨|b|^{p−2} b − |a|^{p−2} a, b − a⟩ = (1/2) ( |b|^{p−2} + |a|^{p−2} ) |b − a|²
    + (1/2) ( |b|^{p−2} − |a|^{p−2} ) ( |b|² − |a|² )   for all a, b ∈ R^N.

Deduce that the expression on the left hand side above is strictly positive for all a ≠ b.
(ii) Prove the following comparison principle, using only Definition C.29. Let u1 be a weak subsolution and u2 a weak supersolution to Δp u = 0 in D, satisfying (u1 − u2)⁺ ∈ W^{1,p}_0(D). Then u1 ≤ u2 a.e. in D.

Finally, we state without proof the mentioned ultimate regularity result, as a special case of the results in DiBenedetto (1983); Tolksdorf (1984).

Theorem C.38 Let u be the weak (continuous) solution of Δp u = 0 in D. Then ∇u ∈ C^{0,α}_loc(D), namely on every compact set K ⊂ D, the inequality:

|∇u(x) − ∇u(y)| ≤ C |x − y|^α   for all x, y ∈ K

is valid with exponent α depending only on N and p, and the constant C depending on N, p, dist(K, ∂D) and ‖u‖_{L∞(V)}, where V is an open set such that K ⊂ V ⊂ V̄ ⊂ D. Further, u is analytic (i.e. it is represented, locally, by a convergent power series) in the open set where ∇u ≠ 0.

We remark that the analyticity is due to Lewis (1980) and that the C^{1,α}_loc regularity is, in general, the best possible (see Lewis 1980; Tolksdorf 1984).

C.6 Potential Theory and p-Harmonic Functions

Motivated by the comparison principle in Corollary C.36, we introduce:

Definition C.39 A lower-semicontinuous function u : D → (−∞, +∞] that is not identically equal to +∞ in D, is called p-superharmonic provided that on each open connected domain U satisfying Ū ⊂ D, the comparison with weak solutions


holds. Namely: if v ∈ C(Ū) ∩ W^{1,p}_loc(U) is a weak solution to Δp v = 0 in U and v ≤ u on ∂U, then v ≤ u in U.

If the function (−u) is p-superharmonic in D, then we say that u is p-subharmonic in D. A continuous u : D → R is called p-harmonic when it is both p-superharmonic and p-subharmonic.

It turns out that p-harmonic functions are exactly the continuous representatives of weak solutions. We start comparing the notions in Definitions C.29 and C.39 by stating the counterpart of Theorem C.31 and Corollary C.33:

Theorem C.40 Every weak supersolution u to Δp u = 0 in D is locally essentially bounded from below and u(x) = ess lim inf_{y→x} u(y) for a.e. x ∈ D. Consequently, u has a lower-semicontinuous representative which satisfies:

u(x) = ess lim inf_{y→x} u(y)   for all x ∈ D.    (C.19)

From now on, we will identify every weak supersolution u with its lower-semicontinuous representative satisfying (C.19) (this limiting property is due to Heinonen and Kilpeläinen 1988b), similarly as we chose to identify weak solutions with their continuous representatives. It is also immediate that u ≢ +∞, and moreover we have:

Theorem C.41 Let u be a (lower-semicontinuous and satisfying (C.19)) weak supersolution to Δp u = 0 in D. Then u is p-superharmonic.

Proof It suffices to show the comparison property. Let v ∈ C(Ū) be a weak solution in some open connected set U with Ū ⊂ D. Assume that v ≤ u on ∂U. Since for every ε > 0 and every x ∈ ∂U we have: v(y) < v(x) + ε/2 in some open neighbourhood of x in Ū, and since: u(y) > u(x) − ε/2 for all y sufficiently close to x in D in view of Exercise C.16 (i), it follows that v < u + ε on some set of the form: B_{δ(x)}(x) ∩ Ū. Consequently, for every ε > 0 we have: v < u + ε in a neighbourhood of ∂U in Ū, provided that the neighbourhood is sufficiently small. This yields (v − u − ε)⁺ ∈ W^{1,p}_0(U), and we may apply Exercise C.37 (ii) to v and u + ε, concluding that v ≤ u + ε a.e. in U. Passing to the limit with ε → 0, it follows that v ≤ u a.e. in U. Thus, v ≤ u in U by (C.19). □

It is useful to record the following properties of p-superharmonic functions:

Lemma C.42
(i) Let u : D → R̄ and assume that for every x ∈ D there exists its open neighbourhood Vx ⊂ D such that u|_{Vx} is p-superharmonic in Vx. Then u is p-superharmonic in D (thus p-superharmonicity is a local property).
(ii) The minimum of finitely many p-superharmonic functions is p-superharmonic.


The following version of the comparison principle is quite straightforward, but it relies on the existence of a p-harmonic function taking the prescribed continuous boundary values, as in Theorem C.49.

Exercise C.43 Let u1 be p-subharmonic and u2 be p-superharmonic in D. Assume:

lim sup_{y→x} u1(y) ≤ lim inf_{y→x} u2(y)   for all x ∈ ∂D,    (C.20)

and that both quantities above are never simultaneously −∞ or +∞. Using the following outline, prove that u1 ≤ u2 in D.

(i) Show that for every ε > 0 there exists δ > 0 such that u1 − u2 < ε in the open set Vδ := {x ∈ D; dist(x, ∂D) < δ}.
(ii) Fix ε > 0 and let U ⊂ D be any open, connected subset satisfying ∂U ⊂ Vδ. Since u2 is lower-semicontinuous, Exercise C.16 (iv) implies that u2 is a pointwise limit of a nondecreasing sequence {φn ∈ C^∞(D)}_{n=1}^∞ as n → ∞. Clearly, φn ≤ u2 on ∂U for all n ≥ 1. Show that u1 ≤ φn + ε on ∂U, for a sufficiently large n.
(iii) Let vn ∈ C(Ū) ∩ W^{1,p}(U) denote the weak solution to Δp u = 0 in a p-regular set U, such that vn = φn on ∂U. Existence of such a solution follows from Theorem C.49. Prove that for n large enough there holds: u1 ≤ vn + ε ≤ u2 + ε in U. Conclude the result by exhausting D with p-regular sets.

To prove a statement converse to Theorem C.41, we first observe:

Lemma C.44 For every p-superharmonic function u in D, the set {u ≠ +∞} where it attains finite values is dense in D.

Proof We first show that u ≡ +∞ on Br(x0) ⊂ D, where r < (1/4) dist(x0, ∂D), implies that u ≡ +∞ on B2r(x0). By Exercise C.27, the function:

v(x) = |x − x0|^{(p−N)/(p−1)}  if p ≠ N,     v(x) = −log |x − x0|  if p = N    (C.21)

is a weak solution in the annulus U = B3r(x0) \ B̄r(x0) and v ∈ C(Ū). If u ≡ +∞ in B̄3r(x0), then there is nothing to prove. Otherwise, we apply the comparison property in Definition C.39 to the well-defined p-superharmonic function: u − inf_{B̄3r(x0)} u and to each of the weak solutions in the sequence:

vk := k ( v − v(x0 + 3r e1) ),   k ≥ 1.

Observing that vk ≤ u − inf_{B̄3r(x0)} u on ∂U, it follows that the same inequality holds in U for every k ≥ 1. Since lim_{k→∞} vk(x) = +∞ for all x ∈ B2r(x0) \ B̄r(x0), we obtain that u ≡ +∞ on B2r(x0), as claimed.


Consider now the set D∞ := {u = +∞}. By the above reasoning, if Bδ(x) ⊂ D∞ for some δ > 0 and x ∈ D, then B_{(1/2) dist(x, ∂D)}(x) ⊂ D∞. Since the case D∞ = D is excluded by Definition C.39, the proof is done. □

Note that the fundamental solution v in (C.21) is p-superharmonic in D := Br(x0), but it is not a weak supersolution, merely because it fails to belong to W^{1,p}_loc(D). However, one has (see Heinonen and Kilpeläinen 1988b):

Theorem C.45 If u is p-superharmonic and locally bounded in D, then u is a weak supersolution to Δp u = 0 in D. In particular, u ∈ W^{1,p}_loc(D).

Consequently, we conclude that (continuous) weak solutions coincide with p-harmonic functions.

C.7 Boundary Continuity of Weak Solutions to Δp u = 0

Definition C.46 Let D ⊂ R^N be open, bounded, connected and let p ∈ (1, ∞). A boundary point x ∈ ∂D is called p-regular if for every v ∈ C(D̄) ∩ W^{1,p}(D) the following holds. Let u ∈ C(D) ∩ W^{1,p}(D) be the unique solution to the problem:

Δp u = 0 in D   and   u − v ∈ W^{1,p}_0(D);    (C.22)

then there must be lim_{y→x} u(y) = v(x).

To find a geometric description of regular points, we introduce:

Definition C.47
(i) Let A be a subset of an open, bounded set U ⊂ R^N and let p ∈ (1, ∞). The p-capacity of A in U is:

Cp(A, U) := inf { ∫_U |∇u|^p dx; u ∈ W^{1,p}_0(U) and u = 1 a.e. in an open neighbourhood of A }.

If the set of admissible functions u above is empty, then Cp(A, U) = +∞.
(ii) Let A ⊂ R^N. For every x0 ∈ R^N, we define the Wiener function (0, 1) ∋ r → δp(r; x0, A):

δp(r; x0, A) := Cp( A ∩ Br(x0), B2r(x0) ) / Cp( Br(x0), B2r(x0) ).


The Wiener function is left-continuous, and we define the Wiener integral:

Wp(x0, A) := ∫_0^1 δp(r; x0, A)^{1/(p−1)} dr/r.

We say that the set A is p-thick at x0 if Wp(x0, A) = +∞. If Wp(x0, A) < +∞, then we say that A is p-thin at x0.

We remark that one also has:

Cp(A, U) = inf_{V open, A ⊂ V ⊂ U}  sup_{K compact, K ⊂ V}  inf { ∫_U |∇φ|^p dx; φ ∈ C^∞_c(U) and φ ≥ 1 in K },

and that Cp(·, ·) is a Choquet capacity, i.e. it obeys the following conditions:

(i) (Monotonicity) If A1 ⊂ A2 ⊂ U, then Cp(A1, U) ≤ Cp(A2, U).
(ii) (Continuity along increasing sequences) If {An ⊂ U}_{n=1}^∞ is an increasing sequence, then lim_{n→∞} Cp(An, U) = Cp( ∪_{n=1}^∞ An, U ).
(iii) (Continuity along decreasing sequences of compacts) If {An ⊂ U}_{n=1}^∞ is a decreasing sequence of compact sets, then lim_{n→∞} Cp(An, U) = Cp( ∩_{n=1}^∞ An, U ).

One can prove that when A is a C¹ manifold embedded in U, then Cp(A, U) > 0 if and only if: dim A ∈ (N − p, N]. In particular, sets containing an open subset or a manifold of co-dimension 1 always have positive capacity, whereas single points have positive capacity if and only if p > N.

We now state the following classical result:

Theorem C.48 A boundary point x0 ∈ ∂D is p-regular if and only if the set R^N \ D is p-thick at x0. We then have:

sup_{x ∈ Br(x0) ∩ D} |u(x) − v(x0)| ≤ sup_{x, y ∈ Br(x0) ∩ ∂D} |v(x) − v(y)|
    + sup_{x, y ∈ ∂D} |v(x) − v(y)| · exp( −C ∫_r^1 δp(ρ; x0, R^N \ D)^{1/(p−1)} dρ/ρ )

for all r ∈ (0, 1) and all v ∈ C(D̄) ∩ W^{1,p}(D), where u ∈ C(D) ∩ W^{1,p}(D) is defined through (C.22). The constant C depends only on p and N.

Sufficiency of p-thickness for regularity was shown in Wiener (1924), see also Littman et al. (1963); Maz'ya (1976), while the converse was proved in Lindqvist and Martio (1985) for p > N − 1 and in Kilpeläinen and Malý (1994) for p ∈ (1, N]. We remark that when p > N, then all boundary points are regular, since


the corresponding Wiener integrals diverge, in view of singletons having positive p-capacity. As a result, we get:

Theorem C.49 If every x ∈ ∂D is p-regular (we say then that D satisfies the Wiener p-condition), then for all v ∈ C(D̄) ∩ W^{1,p}(D) there exists a unique solution u ∈ C(D̄) ∩ W^{1,p}(D) to:

Δp u = 0 in D   and   u = v on ∂D.

According to the Kellogg property, the set of boundary points that are not p-regular must have p-capacity 0 in any of its open bounded supersets. The following lemma from Heinonen et al. (2006) gathers examples of regular points:

Lemma C.50
(i) Assume that A ⊂ R^N satisfies the corkscrew condition at x0 ∈ A: there exist c ∈ (0, 1] and R > 0 such that:

B_{cr}(x_r) ⊂ A ∩ Br(x0)   for some x_r ∈ A and all r ∈ (0, R).

Then Wp(x0, A) = +∞ for all p ∈ (1, ∞).
(ii) If A contains an open cone with vertex at x0 ∈ A (i.e. A satisfies the cone condition at x0), then Wp(x0, A) = +∞ for all p ∈ (1, ∞).

It follows that all polyhedra, all balls and all sets with Lipschitz boundary satisfy the Wiener p-condition in Theorem C.49, i.e. each of their boundary points is p-regular. In particular, every open set D ⊂ R^N can be exhausted by such regular open sets.

We now recall another fundamental notion related to boundary regularity.

Definition C.51 For every F ∈ C(∂D), the following function hF is called the Perron solution to Δp u = 0 in D with boundary values F:

hF := inf { v; p-superharmonic in D, bounded from below and such that lim inf_{y→x} v(y) ≥ F(x) for all x ∈ ∂D }
   = sup { v; p-subharmonic in D, bounded from above and such that lim sup_{y→x} v(y) ≤ F(x) for all x ∈ ∂D }.

The (well-defined) above hF is a (continuous) weak solution to Δp u = 0 in D. The fact that the upper and the lower Perron solutions in Definition C.51 coincide (for any continuous function g given on the boundary of a bounded domain D) is the famous Wiener resolutivity theorem, established for various cases in: Wiener (1925); Lindqvist and Martio (1985); Kilpeläinen (1989). An important observation


is that if g extends to a continuous W^{1,p} function in D, then hg automatically coincides with the variational solution:

Lemma C.52 If v ∈ C(D̄) ∩ W^{1,p}(D), then h_{v|∂D} − v ∈ W^{1,p}_0(D), so the corresponding Perron solution h_{v|∂D} ∈ W^{1,p}(D) ∩ C(D) equals the unique variational solution u of (C.22).

Corollary C.53 A boundary point x ∈ ∂D is p-regular if and only if:

lim_{y→x} hF(y) = F(x)   for every F ∈ C(∂D).

Proof Regularity as in the statement of the Corollary implies p-regularity in Definition C.46, in view of Lemma C.52. Conversely, let x ∈ ∂D be p-regular. Given F ∈ C(∂D), one proceeds by approximating it uniformly with functions {vn ∈ C^∞(R^N)}_{n=1}^∞, so that h_{(vn)|∂D} − vn ∈ W^{1,p}_0(D). The p-regularity of x yields:

lim_{y→x} h_{(vn)|∂D}(y) = vn(x).    (C.23)

Observe that the sequence {h_{(vn)|∂D}}_{n=1}^∞ converges uniformly to hF in D, because of the bound: ‖h_{(vn)|∂D} − hF‖_{L∞(D)} ≤ ‖vn − F‖_{L∞(∂D)}, that is a direct consequence of Definition C.51. Thus, the same boundary convergence as in (C.23) is valid for hF, as claimed. □

In view of the comparison principle in Corollary C.36 we obtain:

Corollary C.54 If D satisfies the Wiener p-condition, then for all F ∈ C(D̄) there exists a unique u ∈ C(D̄) ∩ W^{1,p}_loc(D) which is a weak solution to Δp u = 0 in D and which satisfies: u|∂D = F.

To compare Theorem C.49 and Corollary C.54, we point out that in general, Perron's solution u fails to be W^{1,p}(D)-regular, even when D is p-regular (unless the given boundary function extends to a continuous function on D̄ that belongs to W^{1,p}(D)). In this situation, u cannot be obtained from the minimization of the Ip energy, as shown in the following Hadamard's example:

Exercise C.55 Let D = B1(0) ⊂ R² and define u : D → R in polar coordinates:

u(r, θ) = Σ_{n=1}^∞ r^{n!} sin(n!θ) / n².

Show that u ∈ C(D̄) and that it is a weak solution to Δ₂u = 0 in D. Further, show that u ∈ W^{1,2}_loc(D) \ W^{1,2}(D).
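The two halves of Hadamard's example can be probed numerically. Assuming the per-term Dirichlet energy ∫_{B1} |∇(r^k sin kθ)|² dx = πk (our computation from |∇(r^k sin kθ)|² = k² r^{2k−2} in polar coordinates; not stated in the text) and the L²-orthogonality of the terms, the boundary series has uniformly small tails (Σ 1/n² converges), while the energies of the partial sums, Σ π n!/n⁴, blow up:

```python
import math

# Energy of the M-th partial sum of Hadamard's series, assuming the per-term
# energy pi * k for r^k sin(k*theta) with k = n! and coefficient 1/n^2:
# (pi * n!) / n^4 per term, summed by orthogonality.
def energy(M):
    return sum(math.pi * math.factorial(n) / n ** 4 for n in range(1, M + 1))

# Tail of the boundary sup-norm bound sum 1/n^2 (truncated numerically):
def boundary_tail(M, terms=10_000):
    return sum(1.0 / n ** 2 for n in range(M + 1, M + terms))

assert boundary_tail(12) < 0.1          # uniform convergence up to the boundary
assert energy(12) > 1e3 * energy(4)     # Dirichlet energy blows up: u ∉ W^{1,2}
print("energy of S_12 =", energy(12))
```

This is exactly the dichotomy of Exercise C.55: u ∈ C(D̄) (uniform limit of harmonic polynomials) yet u ∉ W^{1,2}(D).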


The following equivalent description of boundary regularity, extending the classical results valid for p = 2, is due to Kilpeläinen and Malý (1994):

Theorem C.56 A boundary point x ∈ ∂D is p-regular if and only if one of the following equivalent conditions holds:

(i) There exists a weak solution u ∈ C(D̄) ∩ W^{1,p}(D) to Δp u = 0 in D, such that u(x) = 0 and u > 0 on D̄ \ {x}.
(ii) There exists a barrier function relative to D, namely a p-superharmonic function u : D → (−∞, +∞] such that: lim inf_{z→y} u(z) > 0 for all y ∈ ∂D \ {x}, while: lim_{z→x} u(z) = 0.

It turns out that the method of barriers is a useful device in linear and nonlinear well-posedness theories. For example, we have:

Exercise C.57 Let D ⊂ R² be open, bounded, connected and assume that x ∈ ∂D satisfies the line condition, i.e. there exists a simple continuous arc S ⊂ R² \ D such that x ∈ S. Let Br(x) have a radius r < 1 so small that S intersects ∂Br(x). Let B̃ be Br(x) less the part of S from x to the first hit of ∂Br(x). Then the function z → log(z − x) has a well-defined single-valued analytic branch in B̃. Show that:

v(z) := −Re ( 1 / log(z − x) )

is 2-harmonic in B̃ and that it satisfies: lim inf_{z→y} v(z) > 0 for all y ∈ (∂D \ {x}) ∩ B̃, while: lim_{z→x} v(z) = 0. Conclude the 2-regularity of x by modifying the function v to construct a 2-barrier at x.

In fact, in dimension N = 2 the line condition implies p-regularity for any p ∈ (1, +∞). Even more generally, we have:

Lemma C.58 Let D ⊂ R² be open, bounded and connected. Let x ∈ ∂D have the property that the single point set {x} is not a connected component of R² \ D. Then x is p-regular for any p ∈ (1, ∞). In particular, if no connected component of R² \ D is a single point, then D satisfies the Wiener p-condition for all p ∈ (1, ∞).

The above result follows by a direct estimate of the Wiener integral:

Wp(x, R² \ D) ≥ c ∫_0^{r0} (1/r) dr = +∞,

which is due to the bound on the Wiener function: δp(r; x, R² \ D) ≥ c > 0, valid for all r > 0 that are sufficiently small. Indeed, if δp(rn; x, R² \ D) → 0 for some rn → 0 as n → ∞, then a result in Heinonen and Kilpeläinen (1988a) implies the existence of shrinking spheres ∂B_{ρn}(x0) ⊂ D with ρn ∈ (0, rn/2). This means that {x} is a connected component of R² \ D, contradicting the assumption.


C.8 Viscosity Solutions to Δp u = 0

We have yet another definition of solutions to (C.10):

Definition C.59 Let u : D → (−∞, +∞] be lower-semicontinuous and not identically equal to +∞ in D. We say that u is a viscosity p-supersolution when for every x0 ∈ D and every φ ∈ C^∞(D) satisfying:

φ(x) ≤ u(x) for all x ∈ D   and   φ(x0) = u(x0) with ∇φ(x0) ≠ 0

(we say that φ touches u from below at x0), we have: Δp φ(x0) ≤ 0.

If (−u) is a viscosity p-supersolution, then we say that u is a viscosity p-subsolution. A continuous function u : D → R is called a viscosity solution to Δp u = 0 in D if it is both a viscosity p-supersolution and a viscosity p-subsolution.

For an account of the modern theory of viscosity solutions, we refer to Crandall et al. (1992); Koike (2004). To motivate Definition C.59, recall that u ∈ C²(I) defined on an interval I = (a, b) is convex if and only if u″ ≥ 0 on I. For merely continuous u we have the equivalence:

Exercise C.60 A continuous function u : I → R is convex if and only if φ″(x0) ≥ 0 for every x0 ∈ I and every φ ∈ C^∞(I) such that:

φ(x0) = u(x0)   and   φ > u in I \ {x0}.    (C.24)
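Exercise C.60 can be illustrated with the convex function u(x) = |x|: a smooth φ with φ(0) = u(0) and φ″(0) < 0 cannot stay above u near 0, so it never qualifies as a test function in (C.24). A minimal sketch (the candidate φ below is our hypothetical choice, not from the text):

```python
# Convex but non-smooth u (our choice):
def u(x):
    return abs(x)

# Hypothetical test-function candidate with phi(0) = 0 and phi''(0) = -2 < 0:
def phi(x):
    return 0.5 * x - x * x

# Scan a fine grid around 0 for points where phi fails to dominate u,
# i.e. where the admissibility condition phi > u in (C.24) breaks down.
violations = [x / 1000.0 for x in range(-1000, 1001)
              if x != 0 and phi(x / 1000.0) <= u(x / 1000.0)]
assert violations, "phi > u should fail somewhere near 0"
print("phi dips below |x| at e.g. x =", violations[0])
```

Since every admissible φ in (C.24) has φ″(x0) ≥ 0, the pointwise test recovers convexity without ever differentiating u itself — the same device that Definition C.59 uses for the p-Laplacian.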

It can be directly observed that:

Lemma C.61 Every p-superharmonic function is a viscosity p-supersolution.

Proof Let u be p-superharmonic in D. Assume that for some test function φ ∈ C^∞(D) that touches u from below at a given x0 ∈ D where ∇φ(x0) ≠ 0, we have Δp φ(x0) > 0. By possibly modifying φ through: φ − |x − x0|⁴, we may also, without loss of generality, assume that:

φ(x) < u(x)   for all x ∈ D \ {x0}.    (C.25)

Since Δp φ ≥ 0 on some ball Br(x0) ⊂ D, in virtue of Theorem C.41, φ is p-subharmonic in Br(x0). By the lower-semicontinuity of u, the ordering in (C.25) implies:

m := min_{∂Br(x0)} (u − φ) > 0.


We now apply the comparison principle in Exercise C.43 to the p-superharmonic function u and the p-subharmonic function φ + m, in view of φ + m ≤ u on ∂B_r(x₀). It follows that φ + m ≤ u in B_r(x₀), contradicting φ(x₀) = u(x₀). □

A more complicated argument in Juutinen et al. (2001) (whereas an alternative, simpler proof, based on approximation with the so-called infimal convolutions, is due to Julin and Juutinen 2012) yields:

Theorem C.62 If u is a viscosity p-supersolution, then u is p-superharmonic.

Concluding, Theorems C.62, C.45 and C.41 imply that locally bounded viscosity p-supersolutions coincide with weak supersolutions, and also that the three notions of: viscosity solutions, p-harmonic functions, and weak solutions to Δ_p u = 0 in D are actually the same. In view of the comparison principles in Corollary C.36 and Corollary C.54 we obtain:

Corollary C.63 Viscosity solutions u ∈ C(D̄) to:

  Δ_p u = 0 in D  and  u = F on ∂D,

with given boundary values F ∈ C(∂D) are unique and satisfy u ∈ W^{1,p}_{loc}(D). If all boundary points x ∈ ∂D are p-regular, then such a (unique) solution exists.

Appendix D

Solutions to Selected Exercises

This chapter contains solutions to selected problems in the book.

Exercise 2.7 (i) Given x₁, x₂ ∈ R^N \ A, let y₁, y₂ ∈ A be the respective minimizing points for the introduced definition of the extension. It then follows that:

  F(x₁) − F(x₂) ≤ ( F(y₂) + |x₁ − y₂|/dist(x₁, A) − 1 ) − ( F(y₂) + |x₂ − y₂|/dist(x₂, A) − 1 )
    = |x₁ − y₂|/dist(x₁, A) − |x₂ − y₂|/dist(x₂, A)
    ≤ |x₁ − x₂|/dist(x₂, A) + |x₁ − y₂| · |x₁ − x₂| / ( dist(x₁, A) · dist(x₂, A) ),

resulting in the continuity of F on R^N \ A. To check continuity at x₀ ∈ ∂A, let x ∉ A and observe that for every r > 0 there holds:

  F(x) − F(x₀) = inf_{y∈A} ( F(y) − F(x₀) + |x − y|/dist(x, A) − 1 )
    ≥ min{ inf_{y∈A∩B_r(x₀)} ( F(y) − F(x₀) ) , inf_A F − sup_A F + (r − |x − x₀|)/|x − x₀| − 1 }.

Since the second term in the above minimization converges to ∞ as x → x₀ regardless of r > 0, we see that taking r small achieves F(x) − F(x₀) ≥ −ε whenever |x − x₀| is sufficiently small. For the opposite bound, set y ∈ A so that |x − y| = dist(x, A). Then:

  F(x) − F(x₀) ≤ F(y) − F(x₀) → 0  as x → x₀,

since |y − x₀| ≤ 2|x − x₀|.
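The behaviour of this extension can be sketched numerically. The snippet below is an illustrative, standalone check (the set A = [0, 1], the data F(y) = y², the grids and tolerances are all assumptions made for the example), using the formula F̃(x) = inf_{y∈A} [ F(y) + |x − y|/dist(x, A) − 1 ] discussed above:

```python
# Numerical sketch of the extension from Exercise 2.7.
# Illustrative choices: A = [0, 1] on the real line, F(y) = y^2.
import numpy as np

A = np.linspace(0.0, 1.0, 2001)   # grid approximation of the closed set A
F = lambda y: y ** 2              # boundary data on A

def extension(x):
    """Evaluate F~(x) = inf_{y in A} [ F(y) + |x - y| / dist(x, A) - 1 ] for x outside A."""
    dist = np.min(np.abs(x - A))
    return np.min(F(A) + np.abs(x - A) / dist - 1.0)

# Continuity at the boundary point x0 = 1: as x -> 1+, F~(x) -> F(1) = 1,
# because the penalty term blows up for y far from x.
for h in [1e-1, 1e-2, 1e-3]:
    print(h, extension(1.0 + h))

assert abs(extension(1.0 + 1e-3) - F(1.0)) < 1e-6
```

The penalty |x − y|/dist(x, A) − 1 vanishes at the nearest point of A and diverges elsewhere as x approaches ∂A, which is exactly the mechanism used in the continuity proof above.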

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020 M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3


Exercise 2.11 (i) We first prove that each function f_n : ∂B₁(0) → R defined by:

  f_n(x) = μ(U ∩ B(x, 1/n)) / μ(B(x, 1/n))

is Borel-regular. Indeed, given r ∈ (0, 1) and a sequence x_n → x₀ in ∂B₁(0), it is easily seen that 1_{B(x₀,r)} ≤ lim inf_{n→∞} 1_{B(x_n,r)}, so by Fatou's lemma:

  ∫_U 1_{B(x₀,r)} dμ ≤ lim inf_{n→∞} ∫_U 1_{B(x_n,r)} dμ,

proving that the function x ↦ ∫_U 1_{B(x,r)} dμ = μ(U ∩ B(x, r)) is lower-semicontinuous (see Definition C.15) and hence Borel. Recalling that x ↦ μ(B(x, r)) is constant by the rotational invariance, f_n must be Borel as well.

Observe that lim_{n→∞} f_n = 1 in U. Applying Fatou's lemma again, we get:

  σ^{N−1}(U) ≤ lim inf_{n→∞} ∫_U f_n(x) dσ^{N−1}(x) = lim inf_{n→∞} ∫_U ( 1/μ(B(x, 1/n)) ) ∫_U 1_{B(x,1/n)}(y) dμ(y) dσ^{N−1}(x),

whereas Fubini's theorem yields:

  ∫_U ∫_U 1_{B(x,1/n)}(y) dμ(y) dσ^{N−1}(x) = ∫_U ∫_U 1_{B(x,1/n)}(y) dσ^{N−1}(x) dμ(y)
    = ∫_U σ^{N−1}( U ∩ B(y, 1/n) ) dμ(y) ≤ σ^{N−1}(B(x, 1/n)) · μ(U),

achieving (2.11).

(ii) A symmetric argument as above gives:

  μ(U) ≤ ( lim inf_{n→∞} μ(B(x, 1/n)) / σ^{N−1}(B(x, 1/n)) ) · σ^{N−1}(U), for all open sets U.

In particular, for U = ∂B₁(0) we obtain:

  1 ≤ lim inf_{n→∞} μ(B(x, 1/n)) / σ^{N−1}(B(x, 1/n)) = ( lim sup_{n→∞} σ^{N−1}(B(x, 1/n)) / μ(B(x, 1/n)) )^{−1}
    ≤ ( lim inf_{n→∞} σ^{N−1}(B(x, 1/n)) / μ(B(x, 1/n)) )^{−1} ≤ 1.

Thus there exists the limit: lim_{n→∞} σ^{N−1}(B(x, 1/n)) / μ(B(x, 1/n)) = 1. By (2.11) this results in: σ^{N−1}(U) ≤ μ(U) and, as before, μ(U) ≤ σ^{N−1}(U) for all open sets U ⊂ ∂B₁(0), proving that μ = σ^{N−1}.

Exercise 2.16 For fixed η, δ > 0 and each boundary point y₀ ∈ ∂D, choose δ̂(y₀) ∈ (0, δ/2) and ε̂(y₀) ∈ (0, 1) so that the bound in Definition 2.12 (a) holds for the parameters η and δ/2. By compactness of ∂D we may choose its finite covering:

  ∂D ⊂ ⋃_{i=1}^{n} B_{δ̂(y_{0,i})}(y_{0,i}),

corresponding to the boundary points {y_{0,i}}_{i=1}^{n}. Let δ̂ ∈ (0, δ) be such that for every y₀ ∈ ∂D there holds: B_{δ̂}(y₀) ⊂ B_{δ̂(y_{0,i})}(y_{0,i}) for some i = 1...n. Let ε̂ = min_{i=1...n} ε̂(y_{0,i}). It now follows that:

  P( X^{x₀}_ε ∈ B_δ(y₀) ) ≥ P( X^{x₀}_ε ∈ B_{δ/2}(y_{0,i}) ) ≥ 1 − η,

for all ε ∈ (0, ε̂) and all x₀ ∈ B_{δ̂}(y₀) ∩ D, as needed.

Exercise 3.3 We compute:

  (r − q)|∇u|^{2−p} Δ_p u = (r − q)Δu + (r − q)(p − 2)Δ_∞ u
    = (r − q)Δu + [ (r − p)(q − 2) + (p − q)(r − 2) ] Δ_∞ u
    = (r − p)[ Δu + (q − 2)Δ_∞ u ] + (p − q)[ Δu + (r − 2)Δ_∞ u ]
    = (r − p)|∇u|^{2−q} Δ_q u + (p − q)|∇u|^{2−r} Δ_r u.
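Since each Δ_s u expands as |∇u|^{s−2}(Δu + (s − 2)Δ_∞ u) wherever ∇u ≠ 0, the interpolation identity of Exercise 3.3 reduces to an algebraic relation between the coefficients, which can be sanity-checked numerically. The snippet below is an independent sketch: random numbers stand in for the values of Δu, Δ_∞ u and the exponents p, q, r.

```python
# Check of the interpolation identity from Exercise 3.3, using the expansion
# |∇u|^{2-s} Δ_s u = Δu + (s - 2) Δ_∞ u  (valid wherever ∇u ≠ 0).
import random

random.seed(0)
for _ in range(1000):
    p, q, r = (random.uniform(1.0, 10.0) for _ in range(3))
    lap = random.uniform(-5.0, 5.0)      # stand-in for Δu
    lap_inf = random.uniform(-5.0, 5.0)  # stand-in for Δ_∞ u

    def scaled(s):
        # |∇u|^{2-s} Δ_s u in the expansion above
        return lap + (s - 2.0) * lap_inf

    lhs = (r - q) * scaled(p)
    rhs = (r - p) * scaled(q) + (p - q) * scaled(r)
    assert abs(lhs - rhs) < 1e-9 * (1.0 + abs(lhs))
```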

Exercise 3.7 (i) By symmetry and change of variable it follows that:

  ⨍_{B_ε(0)} |y₁|² dy = (1/N) ⨍_{B_ε(0)} |y|² dy = (ε²/N) ⨍_{B₁(0)} |y|² dy.

For the last integral, we employ the hyperspherical coordinates (r, φ₁, ..., φ_{N−1}). Calling S = (sin^{N−2} φ₁)(sin^{N−3} φ₂) ... (sin φ_{N−2}), the volume element is given by: r^{N−1} S dr dφ₁ dφ₂ ... dφ_{N−1}. We thus get:

  ⨍_{B₁(0)} |y|² dy = [ ∫₀^{2π} ∫₀^{π} ... ∫₀^{π} ∫₀^1 r^{N+1} S dr dφ₁ ... dφ_{N−1} ] / [ ∫₀^{2π} ∫₀^{π} ... ∫₀^{π} ∫₀^1 r^{N−1} S dr dφ₁ ... dφ_{N−1} ]
    = ( ∫₀^1 r^{N+1} dr ) / ( ∫₀^1 r^{N−1} dr ) = N/(N + 2),

which implies that:

  ⨍_{B_ε(0)} |y₁|² dy = ε²/(N + 2).
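The value ⨍_{B_ε(0)} |y₁|² dy = ε²/(N + 2) is also easy to confirm by Monte Carlo sampling. The following is an independent sketch (the dimension, sample size, seed and tolerance are arbitrary choices):

```python
# Monte Carlo check of the second moment from Exercise 3.7:
# the average of y_1^2 over the unit ball B_1(0) in R^N equals 1/(N+2).
import numpy as np

rng = np.random.default_rng(0)
N, samples = 3, 400_000

# Uniform points in B_1(0): direction from normalized Gaussians, radius ~ U^{1/N}.
g = rng.standard_normal((samples, N))
directions = g / np.linalg.norm(g, axis=1, keepdims=True)
radii = rng.random(samples) ** (1.0 / N)
points = directions * radii[:, None]

estimate = np.mean(points[:, 0] ** 2)
print(estimate, 1.0 / (N + 2))   # both close to 1/(N+2) = 0.2 for N = 3
assert abs(estimate - 1.0 / (N + 2)) < 5e-3
```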

Exercise 3.12 (i) By Definition C.15, the upper-semicontinuity of x ↦ inf_{B_ε(x)} v reads:

  inf_{B_ε(x)} v ≥ lim_{r→0} sup_{y ∈ B_r(x)\{x}} ( inf_{B_ε(y)} v ),

and the function under the limit in the right hand side is nonincreasing in r. Fix δ > 0 and let y_δ ∈ B_ε(x) be such that v(y_δ) ≤ δ + inf_{B_ε(x)} v. We observe that for every r₀ < ε − |y_δ − x| and every y ∈ B_{r₀}(x), there holds: y_δ ∈ B_ε(y). Consequently:

  lim_{r→0} sup_{y ∈ B_r(x)\{x}} ( inf_{B_ε(y)} v ) ≤ sup_{y ∈ B_{r₀}(x)} ( inf_{B_ε(y)} v ) ≤ v(y_δ) ≤ δ + inf_{B_ε(x)} v,

which ends the proof upon taking δ → 0.

Exercise 4.10 (i) Since g_A'(t) = A e^t / ( A(e^t − 1) + 1 ) > 0 for t ∈ (0, ∞), the function g_A is strictly increasing from lim_{t→0+} g_A(t) = 0 to lim_{t→∞} g_A(t) = ∞. We observe that (g_A(t) − t)' = g_A'(t) − 1 = (A − 1) / ( A(e^t − 1) + 1 ), and that the function t ↦ 1/( A(e^t − 1) + 1 ) is strictly decreasing from 1 to 0 on (0, ∞). Thus we directly obtain the claimed bounds on g_A'(t) − 1. As a consequence, g_A(t) − t is strictly increasing/decreasing according to the sign of (A − 1). Since lim_{t→0+} (g_A(t) − t) = 0 and lim_{t→∞} (g_A(t) − t) = lim_{t→∞} log( A + (1 − A)/e^t ) = log A, we conclude the remaining bounds.
(ii) It suffices to check: g_A(g_{1/A}(t)) = log( A( exp( log( (1/A)(e^t − 1) + 1 ) ) − 1 ) + 1 ) = t.
(iii) Applying (i) to A = 1 − ε < 1, we get: v_ε(x) − u(x) = g_{1−ε}(u(x)) − u(x) ∈ ( log(1 − ε), 0 ), in view of the assumption u(x) > 0. Further:

  ∇v_ε(x) = g'_{1−ε}(u(x)) ∇u(x),
  ∇²v_ε(x) = g''_{1−ε}(u(x)) ∇u(x) ⊗ ∇u(x) + g'_{1−ε}(u(x)) ∇²u(x),
  Δv_ε(x) = g''_{1−ε}(u(x)) |∇u(x)|² + g'_{1−ε}(u(x)) Δu(x),


so that, noting the fact that ∇v_ε/|∇v_ε| = ∇u/|∇u|, we directly arrive at the indicated formula for Δ_p v_ε and at its simplified form in case of Δ_p u = 0. The final estimates follow from ε < 1/2 in view of:

  g'_{1−ε}(t) = 1 − ε / ( (1 − ε)(e^t − 1) + 1 ) > 1 − ε > 1/2,
  g''_{1−ε}(t) = ε(1 − ε)e^t / ( (1 − ε)e^t + ε )² ≥ ε(1 − ε)e^{−t} > ε/(2e^t).

Exercise 5.10 (i) For a given s ∈ N and A₁ ∈ F_s, consider the family A of sets A₂ ∈ F satisfying:

  P̃( {Y₁ ∈ A₁} ∩ {Y₂ ∈ A₂} ) = P̃( Y₁ ∈ A₁ ) · P̃( Y₂ ∈ A₂ ).

It is easy to observe that A is a σ-algebra. Since by (5.22) A contains all subsets A₂ ∈ ⋃_{t∈N} F_t of Ω, we deduce that A = F. Similarly, fix now A₂ ∈ F and consider the family of sets A₁ ∈ F_fin for which the equality displayed above holds true. Again, this family is closed with respect to countable unions and complements, and it contains all subsets A₁ ∈ ⋃_{s∈N} F_s ⊂ F_fin of Ω_fin, so it equals F_fin.
(ii) Since the subset {∃n ≤ τ : X_n ∈ B_{δ_k}(y₀)} ⊂ Ω is F-measurable, it suffices to prove that the function g : Ω_fin × Ω → Ω given below is measurable:

  g( {(w_n, a_n, b_n)}_{n=1}^{s}, {(w_n, a_n, b_n)}_{n=s+1}^{∞} ) := {(w_n, a_n, b_n)}_{n=1}^{∞}.

To this end, we will show that g^{−1}(A) belongs to the product σ-algebra of F_fin and F, for the sets A ∈ F of the form: A = ∏_{i=1}^{∞} A_i, where A_i ∈ F₁ for all i ∈ N and all but finitely many indices i satisfy A_i = Ω₁. Such sets generate the σ-algebra F. Indeed, we have:

  g^{−1}(A) = ⋃_{n=1}^{∞} ( ∏_{i=1}^{n} A_i ) × ( ∏_{i=n+1}^{∞} A_i ),

together with ∏_{i=1}^{n} A_i ∈ F_fin and ∏_{i=n+1}^{∞} A_i ∈ F.

Exercise 5.21 (i) The first bound in (5.36) is obvious. The second bound follows from: k2 > ρκ2 > ρκ1 + ρ(1 + r) > k1 + 1 + r, in view of ρ > 1 and (5.39). For the third bound, observe that: k3 = ρκ3 − (ρ − 1)r > ρ(κ2 + 1) + ρr − (ρ − 1)r = k2 + 1 + r.


For the last bound, we have:

  k₂(k₂ + 1)/k₁ − 1 = ( ρ(κ₂ + 1) − 1 )(κ₂ + 1)/κ₁ − 1 = ρ κ₂(κ₂ + 1)/κ₁ + (ρ − 1)(κ₂ + 1)/κ₁ − 1
    ≥ ρκ₃ + ρ − 1 + (ρ − 1)(κ₂ + 1)/κ₁ > ρκ₃ > ρκ₃ − (ρ − 1)r = k₃.

(ii) For i = 1, 2, the first inequality in (5.37) follows directly from (5.36). Otherwise, observe that for m ≥ 2 we get: k2m+1 − k2m = α m−1 (k3 − k2 ) > k3 − k2 > 1 + r k2m − k2m−1 = α m−2 (αk2 − k3 + r) ≥ αk2 − k3 + r = >

k3 (k2 − k1 ) − rk2 +r k1

k3 (1 + r) − rk2 k3 + r(k3 − k2 ) +r = + r > 1 + r. k1 k1

To prove the second bound in (5.37), check first that it holds for i = 2 by the last inequality in (5.36), whereas for i = 3 we have: k3 (k2 + 1) − (k4 + 1)k1 = k3 + k2 r − k1 r − k1 > (k2 − k1 )(1 + r) + 1 + r > 0, by (5.36). Let now m ≥ 2 and we compute: k2m (k2 + 1) = α m−1 k2 (k2 + 1) + > α m−1 (k3 +1)k1 +

α m−1 − 1 r + k2 + 1 α−1

α m−1 − 1 rk1 = k2m−1 k1 + α m−1 k1 ≥ (k2m+1 + 1)k1 α−1

k2m−1 (k2 + 1) = α m−2 k3 (k2 + 1) +

α m−2 − 1 r(k2 + 1) α−1

≥ α m−1 k1 k2 + α m−2 k1 r + α m−2 k1 +

α m−2 − 1 r(k2 + 1) α−1

  α m−1 − 1 r + α m−2 = (k2m + 1)k1 . > k1 α m−1 k2 + α−1

Finally, the last expression in (5.37) is equivalent to the definition of the sequence {ki }∞ i=1 via: ki+2 = αki + r. (iii) We proceed as in Exercise 5.10 (i) and (5.22). Let C1 ∈ Fs and C2 ∈ Ft for some s, t ∈ N. Then:    P(Y1 ∈ C1 ) = Ps C1 ∩ {j = s} ∩ {Zn ∈ Ai+2 } , n δˆ

and

ˆ

dδ (q) =

1 ˆ min{, dist(q, RN \Dδ )}. 

Fix η > 0. In view of (6.27) and since without loss of generality the data F is constant outside of some large bounded superset of D in RN , there exists δˆ > 0 with: ˆ ˆ  ∈ (0, ). for all x ∈ RN \ Dδ , |w| ≤ δ, ˆ

|u (x + z) − u (x)| ≤ η

¯ with |x0 − y0 | ≤ Fix x0 , y0 ∈ D function u˜  ∈ C(RN ):

δˆ 2

(D.1)

ˆ

and let  ∈ (0, 2δ ). Consider the following

u˜  (x) = u (x − (x0 − y0 )) + η. Then, by (6.11) and recalling the definition of the averaging operator S , we get: (S u˜  )(x) = (S u )(x − (x0 − y0 )) + η = u (x − (x0 − y0 )) + η = u˜  (x)

ˆ

for all x ∈ Dδ .

(D.2)

ˆ

because in Dδ there holds: dist(x − (x0 − y0 ), RN \ D) ≥ dist(x, RN \ D) − |x0 − y0 | ≥ δˆ −

δˆ δˆ = > . 2 2

D

Solutions to Selected Exercises

239

It follows now from (D.2) that:  ˆ ˆ u˜  = dδ (S u˜  ) + 1 − dδ u˜ 

in RN .

On the other hand, u itself similarly solves the same problem above, subject to its ˆ ˆ own data u on RN \ Dδ . Since for every x ∈ RN \ Dδ we have: u˜  (x) − u (x) = u (x − (x0 − y0 )) − u (x) + η ≥ 0 in view of (D.1), the monotonicity property in Theorem 6.5 yields: u ≤ u˜ 

in RN .

Thus, in particular: u (x0 ) − u (y0 ) ≤ η. Exchanging x0 with y0 we get the opposite inequality. In conclusion, |u (x0 ) − u (y0 )| ≤ η, which yields the claimed ¯ equicontinuity of {u }→0 in D. Exercise A.16 (i) Since {Y1 < Y2 } ∈ G, the defining condition of E(X | G) = Yi yields: ˆ

∫_{{Y₁ < Y₂}}

Fix ε > 0 and let δ > 0 be as in the equiintegrability condition. Then:

  P( |X_i| > C/δ ) = ∫_{{|X_i| > C/δ}} dP ≤ (δ/C) ∫_{{|X_i| > C/δ}} |X_i| dP ≤ (δ/C) · C = δ,


so ∫_{{|X_i| > M}} |X_i| dP < ε, as claimed. For the opposite direction, assume the validity of the condition in Exercise A.32. Let M be chosen for ε = 1; then:

  ∫_Ω |X_i| dP = ∫_{{|X_i| > M}} |X_i| dP + ∫_{{|X_i| ≤ M}} |X_i| dP ≤ 1 + M,

proving equiboundedness. To show equiintegrability, fix ε > 0. Then:

  ∫_A |X_i| dP = ∫_{A ∩ {|X_i| > M}} |X_i| dP + ∫_{A ∩ {|X_i| ≤ M}} |X_i| dP < ε + M · P(A) < 2ε,

if only P(A) < ε/M.
Exercise A.33 By assumption, for some C > 0 there holds: |X_i| ≤ Ci + |X₀| a.s. for all i ≥ 0. Hence, for every A ∈ F we get:

  ∫_A |X_{τ∧i}| dP ≤ ∫_A ( C(τ ∧ i) + |X₀| ) dP ≤ C ∫_A τ dP + ∫_A |X₀| dP.

Taking A = Ω we see that the sequence {X_{τ∧i}}_{i=0}^{∞} is bounded in L¹(Ω, F, P). The equiintegrability likewise follows, from integrability of X₀ and τ.

Exercise B.3    (i) We have, by the change of variable formula: P γ X ∈ A = P X ∈ 2 2 ´ ´ − (y−γ2 μ)2 − (x−μ) 2σ γ 2σ 2 √ 1 e √ 1 dx = e dy. 1 A A 2 2 γ



1 γA

=

2π σ |γ |

2π σ

(ii) Let A ⊂ R be a Borel set. Applying Lemma A.21 to the nonnegative random variable Z(x, y) = 1{x+y∈A} on R2 , we obtain:   P X1 + X2 ∈ A =

ˆ

  P X2 ∈ A \ X1 (ω1 ) dP(ω1 )



ˆ =

R

ˆ = A

&

1

e



(x−μ1 )2 2σ12

2π σ12

1 & & 2 2π σ1 · 2π σ22

1 ·& 2π σ22 ˆ e



ˆ e



(y−μ2 )2 2σ22

dy dx

A−x

(x−μ1 )2 2σ12

·e



(z−x−μ2 )2 2σ22

ˆ dx dz =

R

where f is given as follows: ˆ f (z) =

R

&

1 2π σ12 ·

& e 2π σ22

f (z) dz, A



σ22 (x−μ1 )2 +σ12 (z−x−μ2 )2 2σ12 σ22

dx.


We now write σ 2 = σ12 + σ22 and compute the convolution:  ˆ f (z) =

R

1

0

σ12 σ22 σ2



1 ·√ e 2πσ 2 

ˆ =

R

1

0



σ12 σ22 σ2

·e





σ 2 μ1 +σ12 (z−μ2 ) x− 2 σ2

σ 2 μ1 +σ12 (z−μ2 ) x− 2 σ2 2σ12 σ22 /σ 2

1 ·√ 2πσ 2

2 

R

e



2

σ 2 (z−μ2 )2 +σ22 μ2 1 + 1 σ2

dx

2 dx · 

ˆ

σ22 μ1 +σ12 (z−μ2 ) σ2 2σ12 σ22 /σ 2





2 − σ22 μ1 +σ12 (z−μ2 ) +σ 2 σ12 (z−μ2 )2 +σ 2 σ22 μ2 1 2σ 2 σ12 σ22

.

The integral above, after a change of variable, equals: ˆ R

0

1

e



x2 2(σ1 σ2 /σ )2

σ 2σ 2 2π 1σ 22

dx = 1,

and thus we obtain the claimed property: f (z) = √

1 2π σ 2

e



2 2 2 2 2 2 σ1 σ22 μ2 1 +σ1 σ2 (z−μ2 ) −2σ1 σ2 (z−μ2 ) σ 2 σ12 σ22

(iii) By part (i) it immediately follows that: have:

√1 (X1 2

+ X2 ),

the rotation of

R2

√1 (X1 2

(z−(μ1 +μ2 ))2 1 − σ2 =√ e . 2π σ 2

√1 X1 ± √1 X2 2 2

∼ N (0, 1), so by (ii) we

− X2 ) ∼ N(0, 1). To prove independence, denote

by π/4: . Rπ/4 =

√1 2 √1 2

− √1 √1 2

/

2

    and observe that: √1 (X1 − X2 ), √1 (X1 + X2 ) = Rπ/4 X1 , X2 . Let A1 , A2 2 2 be two Borel subsets of R. Then, using notation of Exercise A.20: P

 1 1    √ (X1 − X2 ) ∈ A1 ∩ √ (X1 + X2 ) ∈ A2 = P Rπ/4 (X1 , X2 ) ∈ A1 × A2 2 2     = P (X1 , X2 ) ∈ R−π/4 (A1 × A2 ) = P¯ R−π/4 (A1 × A2 )  ˆ  1 − x2 − x2 e 2 · e 2 dx dy = P¯¯ R−π/4 (A1 × A2 ) = R−π/4 (A1 ×A2 ) 2π


ˆ

1 − x 2 +y 2 2 d(x, y) e A1 ×A2 2π  1 ˆ   1 ˆ  x2 x2 = √ e− 2 dx · √ e− 2 dx 2π A1 2π A2  1   1  = P √ (X1 − X2 ) ∈ A1 · P √ (X1 + X2 ) ∈ A2 , 2 2

=

where we used the rotational invariance of the probability density: R2  z → and the already noted fact that

1 − |z|2 e 2 2π

± X2 ) are normally distributed.
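The two facts established in Exercise B.3, the convolution rule N(μ₁, σ₁²) * N(μ₂, σ₂²) = N(μ₁ + μ₂, σ₁² + σ₂²) and the independence of (X₁ − X₂)/√2 and (X₁ + X₂)/√2 for standard normals, can be sanity-checked by simulation. This is a standalone sketch; the parameter values, sample sizes, seed and tolerances are arbitrary choices:

```python
# Simulation check of Exercise B.3: sums of independent normals add their
# means and variances, and (X1 - X2)/sqrt(2), (X1 + X2)/sqrt(2) are
# uncorrelated standard normals when X1, X2 ~ N(0, 1) are independent.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
mu1, s1, mu2, s2 = 0.5, 1.2, -1.0, 0.7

x1 = rng.normal(mu1, s1, n)
x2 = rng.normal(mu2, s2, n)
z = x1 + x2
assert abs(z.mean() - (mu1 + mu2)) < 1e-2
assert abs(z.var() - (s1**2 + s2**2)) < 2e-2

y1, y2 = rng.standard_normal(n), rng.standard_normal(n)
u, v = (y1 - y2) / np.sqrt(2), (y1 + y2) / np.sqrt(2)
assert abs(u.var() - 1.0) < 1e-2 and abs(v.var() - 1.0) < 1e-2
assert abs(np.mean(u * v)) < 1e-2   # uncorrelated (and, being jointly Gaussian, independent)
```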

√1 (X1 2

Exercise C.9 (i) Consider the functions φn ∈ C1 (R) given, with their derivatives, by: φn (x) =

⎧ ⎨0

n 2 x ⎩2 x−

1 2n

for x ≤ 0 for x ∈ (0, n1 ) for x ≥ n1

⎧ ⎨ 0 for x ≤ 0 so that φn! (x) = nx for x ∈ (0, n1 ) ⎩ 1 for x ≥ n1 .

Since φn (0) = 0 and each φn! is bounded, Exercise C.8 (iv) implies that φn ◦f ∈ W 1,p (U ) for all n ≥ 1. We now note that the sequence {φn ◦ f }∞ n=1 converges to f + in Lp (U ) by the monotone convergence theorem. Further, the sequence N p of weak gradients {∇(φn ◦ f ) = (φn! ◦ f )∇f }∞ n=1 converges in L (U, R ) to:  g(x) =

∇f (x) a.e. in {f > 0} 0 a.e. in {f ≤ 0}

by the dominated convergence theorem. Exercise C.8 (i) completes the proof. + p + + (ii) The sequence {fn+ }∞ n=1 converges to f in L (U ) because: |fn −f | ≤ |fn − f | a.e. in U . Further, we have: ˆ U

|∇fn+ − ∇f + |p dx = ≤C



ˆ

  (1(0,∞) ◦ fn )∇fn − (1(0,∞) ◦ f )∇f p dx

U

ˆ

|∇fn − ∇f |p dx + U

   1(0,∞) ◦ fn − 1(0,∞) ◦ f  · |∇f |p dx .

U

The first term in the right hand side above converges to 0 as n → ∞, by assumption. The second term converges to 0 by the dominated convergence theorem. + 1,p (U ). This achieves convergence of {fn+ }∞ n=1 to f in W
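The smoothings φ_n used above approximate the positive part t ↦ max(t, 0) uniformly, with error at most 1/(2n) (attained at t = 1/n). A quick numerical sketch of this, with an arbitrary grid and tolerances:

```python
# Check that phi_n from Exercise C.9 approximates t -> max(t, 0) with
# uniform error at most 1/(2n).
import numpy as np

def phi(t, n):
    # phi_n(t) = 0 for t <= 0, (n/2) t^2 for 0 < t < 1/n, t - 1/(2n) for t >= 1/n
    return np.where(t <= 0, 0.0, np.where(t < 1.0 / n, 0.5 * n * t**2, t - 0.5 / n))

t = np.linspace(-2.0, 2.0, 100_001)
for n in (1, 5, 50):
    err = np.max(np.abs(phi(t, n) - np.maximum(t, 0.0)))
    assert err <= 0.5 / n + 1e-12
    # the maximal gap 1/(2n) is attained at t = 1/n
    assert abs(float(phi(np.array(1.0 / n), n)) - (1.0 / n - 0.5 / n)) < 1e-12
```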


Exercise C.16 (i) Fix x ∈ U . If f satisfies (C.4) and f (x) > λ, then for sufficiently small r > 0 we have: infBr (x)∩U f > λ, proving that the set {f > λ} contains an open neighbourhood of x. Conversely, if the sets {f > λ} are open for all λ ∈ R, then taking λ = f (x) − , we obtain: u(y) > λ for all y ∈ Br (x) ∩ U , where r > 0 is an appropriate radius. Consequently: u(x) −  ≤

inf

Br (x)∩(U \{x})

f ≤ lim inf f (y) y→x

and passing to the limit with  → 0 there follows (C.4). (ii) Let {xn ∈ K}∞ n=1 be a sequence converging to some x ∈ K and such ¯ Then, either {xn }∞ has a constant that limn→∞ f (xn ) = infK f ∈ R. n=1 subsequence so that f (x) = infK f , or we may assume that xn = x for all n, in which case: inf f ≤ f (x) ≤ inf f (xn ) = inf f n→∞

K

K

by (C.4) so that, again: f (x) = infK f . This proves the main claim of the exercise. Finally, the following function f : R → (−∞, +∞] is lowersemicontinuous but it does not attain its supremum on K = [−1, 1]:  f (x) =

x 2 if x ∈ (−1, 1) 0 if |x| ≥ 1.

(iii) If all functions in the family {fi : U → (−∞, +∞]}i∈I are lower-semicontinuous, then for every λ ∈ R the following set is open: 

  (sup fi ) > λ = {fi > λ}, i∈I

i∈I

proving lower-semicontinuity of (supi∈I fi ) : U → (−∞,  +∞] by (i). Ifthe index set I has finitely many elements, then each set (mini∈I fi ) > λ =  {f > λ} is also open, hence follows the second assertion of the result. i∈I i ∞ be an increasing sequence of compact subsets of U , such that U = (iv) Let {K } ∞ n n=1 n=1 Kn . Since f is bounded from below on each Kn in view of (ii), one can construct a continuous g : U → R satisfying g ≤ f in U (by modifying the constant lower bounds on each Kn close to the boundary). A similar reasoning yields:   f (x) = sup g(x); g ∈ C(U ) with g ≤ f in U . We now refine this construction. Let a sequence {Bn }∞ n=1 be an enumeration of the countable family of all open balls with rational centres and rational radii, compactly contained in U . For each i, n ∈ N choose a function gi,n ∈ C(U )

D

Solutions to Selected Exercises

245

such that gi,n ≤ f in U and that on the ball 12 Bn with the same centre as Bn but half its radius:   infBn f − 1i if infBn f < +∞ gi,n = i if infBn f = +∞. . Since f = supi,n fi,n , then taking fn = max1≤i,k≤n gi,k we obtain a ∞ nondecreasing sequence {fn ∈ C(U )}n=1 that converges pointwise to f . It is easy to modify {fn }∞ n=1 to the final sequence of smooth functions, strictly increasing to f . (v) By the local boundedness property we get g(x) = −∞ for all x ∈ U . It further directly follows:   lim inf g(y) = lim inf lim inf f (z) = lim inf f (y) = g(x). y→x

y→x

z→y

y→x

Exercise C.17 (i) By the essential local boundedness property, we get that g : U → (−∞, +∞]. In order to show that g is lower-semicontinuous, we observe that:   lim inf g(y) = lim inf ess lim inf f (z) = ess lim inf f (y) = g(x). y→x

y→x

z→y

y→x

ffl (ii) Recall that f (x) = limr→0 Br (x) f for almost all x ∈ U , by Theorem C.3. On ffl the other hand: ess infBr (x) ≤ Br (x) f ≤ ess supBr (x) f , so passing to the limit with r → 0 and using (C.5) we obtain that the function: . g(x) = lim

r→0 Br (x)

f = ess lim inf f (y) = ess lim sup f (y) y→x

y→x

is well defined and that it coincides with f almost everywhere in U . By (i) it follows that g must be continuous. Exercise C.30 1,p It suffices to show that if u ∈ Wloc (D) satisfies: ˆ D

  |∇u|p−2 ∇u, ∇η dx = 0

for all η ∈ C∞ c (D) with η ≥ 0,

(D.3)

then the above equality holds in fact for all test functions η ∈ C∞ c (D). Writing η = η+ − η− where η+ , η− ∈ Cc (D) we approximate: η± = limn→∞ ηn± in 1,p W (D) by smooth compactly supported functions {ηn± }∞ n=1 . By (D.3) we have: ´0 p−2 ∇u, ∇η  dx ≥ 0 for all η = η+ − η− ∈ C∞ (D). Since {η }∞ |∇u| n n n n=1 n n c D p!

converges to η in W 1,p (D) and |∇u|p−2 ∇u ∈ Lloc (D), the result follows by passing to the limit with n → ∞ in view of (C.2).


Exercise C.37 (i) The claimed formula follows by direct inspection. We also note that both terms in its right hand side are nonnegative. Thus: 

 |b|p−2 b − |a|p−2 a, b − a ≤ 0

if and only if both terms equal zero: |b − a|2 = 0 or equivalently: a = b. 1,p (ii) Since every nonnegative function in W0 (D) may be approximated by a sequence of nonnegative test functions in C∞ c (D), we may apply Defini1,p + tion C.29 with η = (u1 − u2 ) ∈ W0 (D) in view of Exercise C.9. This yields: ˆ 0≥ =

D ˆ

  |∇u1 |p−2 ∇u1 − |∇u2 |p−2 ∇u2 , ∇(u1 − u2 )+ dx

{u1 >u2 }

  |∇u1 |p−2 ∇u1 − |∇u2 |p−2 ∇u2 , ∇u1 − ∇u2 dx.

Since by (i) the last integrand above is nonnegative and it equals zero if and only if ∇u1 = ∇u2 , there must be ∇u1 = ∇u2 a.e. in {u1 > u2 }. Consequently, 1,p ∇(u1 − u2 )+ = 0 so that (u1 − u2 )+ = 0, in view of (u1 − u2 )+ ∈ W0 (D). This yields the claimed inequality:u1 ≤ u2 a.e. in D. Exercise C.43 (i) We argue by contradiction. If u2 (xn ) +  ≤ u1 (xn ) along some sequence {xn ∈ D}∞ n=1 converging to x ∈ ∂D, then: lim inf u2 (y) +  ≤ lim inf u1 (xn ) +  y→x

n→∞

≤ lim sup u1 (xn ) ≤ lim sup u1 (y). n→∞

(D.4)

y→x

In particular, lim infy→x u2 (y) = +∞ as lim supy→x u1 (y) = +∞ by assumption. Likewise, lim infy→x u2 (x) = −∞ by assumption that both quantities in (C.20) cannot equal −∞. From (D.4) and (C.20) we now get: lim infy→x u2 (y) +  ≤ lim infy→x u2 (y) which is a contradiction. (ii) Assume that there is a sequence {xn ∈ ∂D}∞ n=1 converging to some x ∈ ∂D, such that φn (xn ) +  < u1 (xn ) for all n ≥ 1. Let m ≥ 1 be such that φm (x) +  > u1 (x). Since the function u1 − φm is upper-semicontinuous, it follows that: φn > u1 −  holds in some open neighbourhood of x. Thus, for all n > m sufficiently large: φm (xn ) +  ≥ φm (xn ) +  > u1 (xn ), which is a contradiction.

D

Solutions to Selected Exercises

247

(iii) The inequality u1 ≤ vn +  ≤ u2 +  in U follows by Definition C.39. Since one may exhaust D by admissible domains U such that ∂U ⊂ Vδ , it follows that u1 ≤ u2 +  in D. Passing to the limit with  → 0, we finally obtain: u1 ≤ u2 in D. Exercise C.55 Since each term of the series is harmonic and the series converges absolutely ¯ On the other hand: uniformly in D, hence u is harmonic in D and continuous in D. ˆ

ˆ

D

|∇u(x)|2 dx ≥ 0



ˆ

ρ

|∂r u(r, θ )|2 r drdθ =

0

∞ π n! n=1

ρ 2n! ≥ 2n4

m π n! n=1

2n4

ρ 2n! ,

for any ρ < 1 and m ≥ 1. This implies that Ip (u) = +∞. Exercise C.57 1 ˜ its real part v is 2-harmonic. Writing: log(z − Since z → − log(z−x) is analytic in B, x) = log |z − x| + iArg(z − x), we get: v(z) = −

log |z − x| (log |z − x|)2 + (Arg(z − x))2

  . which  implies the limit properties of v. Let c = inf v(z); z ∈ Br (x) \ Br/2 (x) ∪ S > 0. Then the following function u : D → R is a barrier at x:  u(z) =

v(z) if z ∈ D ∩ Br/2 (x) c if z ∈ D \ Br/2 (x).

Indeed, u still satisfies the required limit properties and it is p-superharmonic in D, in view of Lemma C.42. Exercise C.60 Assume that u is convex, fix x0 ∈ (a, b) and let φ satisfy (C.24). For every small parameter h > 0, we Taylor expand φ at (x0 + h) and (x0 − h) to get:  1 1 φ(x0 + h) + φ(x0 − h) = φ(x0 ) + h2 φ !! (x0 ) + O(h3 ). 2 2 On the other hand, convexity of u implies: 1 1 (φ(x0 + h) + φ(x0 − h)) > (u(x0 + h) + u(x0 − h)) ≥ u(x0 ) = φ(x0 ). 2 2 This yields: 12 h2 φ !! (x0 ) + O(h3 ) > 0. Dividing now by h2 and passing to the limit with h → 0 we conclude that φ !! (x0 ) ≥ 0, as claimed.  For the converse  implication,  x1 +x2  we argue by contradiction. If u is not convex, then 1 u(x for some x1 , x2 ∈ (a, b) with x1 < x2 . Without ) + u(x ) < u 1 2 2 2

248

D Solutions to Selected Exercises

loss of generality, by adding to u a linear function if necessary, we may assume that u(x1 ) = u(x2 ) = 0. Also, by possibly shrinking the interval (x1 , x2 ), we may assume that u(x) > 0 for all x ∈ (x1 , x2 ). For simplicity, let x2 = −x1 > 0 and fix h > 0 such that x2 + h, x1 − h ∈ (a, b). For each c > 0 consider now the concave function: . φc (x) = c(x2 + h)2 − cx 2 . Since the family {φc }c>0 converges uniformly to 0 as c → 0 on [a, b], and since it strictly increases to +∞ as c → ∞, we may define:   . c0 = min c > 0; φc ≥ u on [a, b] > 0. Clearly, there must be φc0 (x0 ) = u(x0 ) for some x0 ∈ (a, b). Finally, define: 1 . φ(x) = φc0 (x) + c0 (x − x0 )2 . 2 We easily see that φ(x0 ) = u(x0 ) and φ(x) > φc0 (x) ≥ u(x) for all x ∈ [a, b]. By modifying φ outside of a neighbourhood of x0 we may actually assume that φ satisfies (C.24). However: φ !! (x0 ) = φc!!0 (x0 ) + c0 = −c0 < 0, contradicting φ !! (x0 ) ≥ 0.

References

J. Adams and J.F. Fournier. Sobolev spaces. volume 140 of Pure and Applied Mathematics. Academic Press, 2003. T. Antunovic, Y. Peres, S. Sheffield, and S. Somersille. Tug-of-war and infinity Laplace equation with vanishing Neumann boundary condition. Communications in Partial Differential Equations, 37(10):1839–1869, 2012. S. Armstrong and Ch. Smart. A finite difference approach to the infinity Laplace equation and tug-of-war games. Trans AMS, 364:595–636, 2012. A. Arroyo and M. Parviainen. Asymptotic holder regularity for ellipsoid process. 2019. A. Arroyo, J. Heino, and M. Parviainen. Tug-of-war games with varying probabilities and the normalized p(x)-Laplacian. Commun. Pure Appl. Anal., 16(3):915–944, 2017. A. Arroyo, H. Luiro, M. Parviainen, and Ruosteenoja. Asymptotic lipschitz regularity for tug-ofwar games with varying probabilities. 2018. A. Attouchi, H. Luiro, and M. Parviainen. Gradient and lipschitz estimates for tug-of-war type games. 2019. Ch. Bishop and Y. Peres. Fractals in probability and analysis. Cambridge University Press, 2017. C. Bjorland, L. Caffarelli, and A. Figalli. Nonlocal tug-of-war and the infinity fractional Laplacian. Comm. Pure Appl. Math., 65(3):337–380, 2012. P. Blanc and J.D. Rossi. Games for eigenvalues of the Hessian and concave/convex envelopes. 2018. P. Blanc and J.D. Rossi. Game theory and partial differential equations. volume 31 of Nonlinear Analysis and Applications. De Gruyter Series, 2019. P. Blanc, J. Manfredi, and J.D. Rossi. Games for Pucci’s maximal operators. 2018. H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. volume 2011 Edition of Universitext. Springer, 2011. R. Buckdahn, P. Cardaliaguet, and M. Quincampoix. A representation formula for the mean curvature motion. SIAM Journal on Mathematical Analysis, 33(4):827–846, 2001. J.R. Casas and L. Torres. Strong edge features for image coding. pages 443–450, 1996. F. Charro, J. Garcia Azorero, and J.D. Rossi. 
A mixed problem for the infinity Laplacian via tug-of-war games. Calculus of Variations and Partial Differential Equations, 34(3):307–320, 2009. J. Christensen. On some measures analogous to Haar measure. Mathematica Scandinavica, 26: 103–103, 1970.

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020 M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3

249

250

References

L. Codenotti, M. Lewicka, and J. Manfredi. Discrete approximations to the double-obstacle problem, and optimal stopping of tug-of-war games. Trans. Amer. Math. Soc., 369:7387–7403, 2017. M. Crandall, H. Ishii, and P.-L. Lions. User’s guide to viscosity solutions of second order partial differential equations. Bull. AMS, 27:1–67, 1992. F. del Teso, Manfredi J., and M. Parviainen. Convergence of dynamic programming principles for the p-Laplacian. 2018. E. DiBenedetto. C1+α local regularity of weak solutions of degenerate elliptic equations. Nonlinear Anal., 7:827–850, 1983. K. Does. An evolution equation involving the normalized p-Laplacian. Comm. Pure Appl. Anal., 10:361–369, 2011. J.L. Doob. Classical potential theory and its probabilistic counterpart. Springer-Verlag New York, 1984. R.M. Dudley. Real analysis and probability. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2004. R. Durrett. Probability: theory and examples. volume 4th edition of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2010. P. Erdos and A.H. Stone. On the sum of two Borel sets. Proc. Amer. Math. Soc., 25:304–306, 1970. L. Evans. A new proof of local C1,α regularity for solutions of certain degenerate elliptic P.D.E. JDE, 45:365–373, 1982. L. Evans. Partial differential equations. volume 2nd edition of Graduate Studies in Mathematics. American Mathematical Society, 2010. M. Falcone, S. Finzi Vita, T. Giorgi, and R.G. Smits. A semi-Lagrangian scheme for the game p-Laplacian via p-averaging. Applied Numerical Mathematics, 73:63–80, 2013. D Gilbarg and N. Trudinger. Elliptic partial differential equations of second order. volume 3rd edition of Classics in Mathematics. Springer, 2001. I Gomez and J.D. Rossi. Tug-of-war games and the infinity Laplacian with spatial dependence. Communications on Pure and Applied Analysis, 12(5):1959–1983, 2013. W. Hansen and N. Nadirashvili. 
A converse to the mean value theorem for harmonic functions. Acta Math., 171 (2):136–163, 1993. W. Hansen and N. Nadirashvili. Littlewood’s one circle problem. J. London Math. Soc., 50 (2): 349–360, 1994. H. Hartikainen. A dynamic programming principle with continuous solutions related to the pLaplacian, 1 < p < ∞. Differential Integral Equations, 29(5–6):583–600, 2016. J. Heinonen and T. Kilpeläinen. On the Wiener criterion and quasilinear obstacle problems. Trans. Amer. Math. Soc., 310 (1):239–255, 1988a. J. Heinonen and T. Kilpeläinen. A-superharmonic functions and supersolutions of degenerate elliptic equations. Ark. Mat, 26:87–105, 1988b. J. Heinonen, T. Kilpelainen, and O. Martio. Nonlinear potential theory of degenerate elliptic equations. Dover Publications, Inc., Mineola, NY, 2006. L. Helms. Potential theory. Universitext. Springer, 2014. R. Jensen. Uniqueness of Lipschitz extensions: Minimizing the sup norm of the gradient. Archive for Rational Mechanics and Analysis, 123(1):51–74, 1993. V. Julin and P. Juutinen. A new proof of the equivalence of weak and viscosity solutions for the p-Laplace equation. Commun. Part. Diff. Eq, 37:934–946, 2012. P. Juutinen, P. Lindqvist, and J.J. Manfredi. On the equivalence of viscosity solutions and weak solutions for a quasi-linear elliptic equation. SIAM J. Math. Anal., 33:699–717, 2001. P. Juutinen, T. Lukkari, and Parviainen M. Equivalence of viscosity and weak solutions for the p(x)-Laplacian. Ann. Inst. H. Poincarè Anal. Non Linèaire, 27(6):1471–1487, 2010. O. Kallenberg. Foundations of modern probability. volume 2nd edition of Probability and Its Applications. Springer, 2002. I. Karatzas and S. Shreve. Brownian motion and stochastic calculus. Graduate Texts in Mathematics. Springer, 1991.

References

251

B. Kawohl. Variational versus PDE-based approaches in mathematical image processing. CRM Proceedings and Lecture Notes, 44:113–126, 2008.
B. Kawohl, J. Manfredi, and M. Parviainen. Solutions of nonlinear PDEs in the sense of averages. J. Math. Pures Appl., 97(2):173–188, 2012.
O.D. Kellogg. Converses of Gauss' theorem on the arithmetic mean. Trans. Amer. Math. Soc., 36:227–242, 1934.
T. Kilpeläinen. Potential theory for supersolutions of degenerate elliptic equations. Indiana Univ. Math. J., 38:253–275, 1989.
T. Kilpeläinen and J. Malý. The Wiener test and potential estimates for quasilinear elliptic equations. Acta Math., 172:137–161, 1994.
R.V. Kohn and S. Serfaty. A deterministic-control-based approach to motion by curvature. Comm. Pure Appl. Math., 59:344–407, 2006.
R.V. Kohn and S. Serfaty. A deterministic-control-based approach to fully nonlinear parabolic and elliptic equations. Comm. Pure Appl. Math., 63:1298–1350, 2010.
S. Koike. A beginner's guide to the theory of viscosity solutions. Volume 13 of MSJ Memoirs. Mathematical Society of Japan, 2004.
O.A. Ladyzhenskaya and N.N. Ural'tseva. Linear and quasilinear elliptic equations. Volume 46 of Mathematics in Science and Engineering. Academic Press, 1968.
E. Le Gruyer. On absolutely minimizing Lipschitz extensions and PDE Δ∞u = 0. NoDEA, 14:29–55, 2007.
E. Le Gruyer and J.C. Archer. Harmonious extensions. SIAM J. Math. Anal., 29(1):279–292, 1998.
M. Lewicka. Random tug of war games for the p-Laplacian, 1 < p < ∞. 2018.
M. Lewicka and J. Manfredi. Game theoretical methods in PDEs. Bollettino dell'Unione Matematica Italiana, 7(3):211–216, 2014.
M. Lewicka and Y. Peres. The Robin mean value equation II: Asymptotic Hölder regularity. 2019a.
M. Lewicka and Y. Peres. The Robin mean value equation I: A random walk approach to the third boundary value problem. 2019b.
M. Lewicka, J. Manfredi, and D. Ricciotti. Random walks and random tug of war in the Heisenberg group. Mathematische Annalen, 2019.
J. Lewis. Smoothness of certain degenerate elliptic equations. Proc. Amer. Math. Soc., 80:259–265, 1980.
J. Lewis. Regularity of the derivatives of solutions to certain degenerate elliptic equations. Indiana Univ. Math. J., 32:849–858, 1983.
P. Lindqvist. Notes on the stationary p-Laplace equation. SpringerBriefs in Mathematics. Springer, 2019.
P. Lindqvist and T. Lukkari. A curious equation involving the ∞-Laplacian. Adv. Calc. Var., 3(4):409–421, 2010.
P. Lindqvist and O. Martio. Two theorems of N. Wiener for solutions of quasilinear elliptic equations. Acta Math., 155:153–171, 1985.
W. Littman, G. Stampacchia, and H.F. Weinberger. Regular points for elliptic equations with discontinuous coefficients. Ann. Scuola Norm. Sup. Pisa Cl. Sci., 17(3):43–77, 1963.
Q. Liu and A. Schikorra. General existence of solutions to dynamic programming equations. Communications on Pure and Applied Analysis, 14(1):167–184, 2015.
H. Luiro and M. Parviainen. Gradient walk and p-harmonic functions. Proc. Amer. Math. Soc., 145:4313–4324, 2017.
H. Luiro and M. Parviainen. Regularity for nonlinear stochastic games. Ann. Inst. H. Poincaré Anal. Non Linéaire, 35(6):1435–1456, 2018.
H. Luiro, M. Parviainen, and E. Saksman. Harnack's inequality for p-harmonic functions via stochastic games. Differential and Integral Equations, 38(12):1985–2003, 2013.
H. Luiro, M. Parviainen, and E. Saksman. On the existence and uniqueness of p-harmonious functions. Differential and Integral Equations, 27(3/4):201–216, 2014.
J. Manfredi, M. Parviainen, and J. Rossi. An asymptotic mean value characterization for p-harmonic functions. Proc. Amer. Math. Soc., 138(3):881–889, 2010.


J. Manfredi, M. Parviainen, and J. Rossi. Dynamic programming principle for tug-of-war games with noise. ESAIM Control Optim. Calc. Var., 18:81–90, 2012a.
J. Manfredi, J.D. Rossi, and S. Sommersille. An obstacle problem for tug-of-war games. Communications on Pure and Applied Analysis, 14(1):217–228, 2015.
J.J. Manfredi, M. Parviainen, and J.D. Rossi. On the definition and properties of p-harmonious functions. Ann. Sc. Norm. Super. Pisa Cl. Sci., 11(2):215–241, 2012b.
V.G. Maz'ya. On the continuity at a boundary point of solutions of quasi-linear elliptic equations. Vestnik Leningrad Univ. Math., 3:225–242, 1976.
P. Mörters and Y. Peres. Brownian motion. Cambridge University Press, 2010.
J. Moser. On Harnack's theorem for elliptic differential equations. Comm. Pure Appl. Math., 14:577–591, 1961.
M.E. Muller. Some continuous Monte Carlo methods for the Dirichlet problem. Ann. Math. Statist., 27:569–589, 1956.
K. Nyström and M. Parviainen. Tug-of-war, market manipulation and option pricing. Math. Finance, 27(2):279–312, 2017.
A.M. Oberman. A convergent difference scheme for the infinity Laplacian: construction of absolutely minimizing Lipschitz extensions. Math. Comp., 74:1217–1230, 2005.
K.R. Parthasarathy. Probability measures on metric spaces. Academic Press, 1967.
M. Parviainen and E. Ruosteenoja. Local regularity for time-dependent tug-of-war games with varying probabilities. J. Differential Equations, 261(2):1357–1398, 2016.
Y. Peres and S. Sheffield. Tug-of-war with noise: a game-theoretic view of the p-Laplacian. Duke Math. J., 145:91–120, 2008.
Y. Peres, O. Schramm, S. Sheffield, and D.B. Wilson. Tug-of-war and the infinity Laplacian. J. Amer. Math. Soc., 22:167–210, 2009.
Y. Peres, G. Pete, and S. Somersille. Biased tug-of-war, the biased infinity Laplacian, and comparison with exponential cones. Calculus of Variations and Partial Differential Equations, 38(3–4):541–564, 2010.
E. Ruosteenoja. Local regularity results for value functions of tug-of-war with noise and running payoff. Advances in Calculus of Variations, 9(1):1–17, 2014.
J. Serrin. A Harnack's inequality for nonlinear equations. Bull. Amer. Math. Soc., 69:481–486, 1963.
J. Serrin. Local behaviour of solutions of quasi-linear equations. Acta Math., 111:247–302, 1964.
P. Tolksdorf. Regularity for a more general class of quasilinear elliptic equations. J. Differential Equations, 51:126–150, 1984.
K. Uhlenbeck. Regularity for a class of nonlinear elliptic systems. Acta Math., 138:219–240, 1977.
N. Uralceva. Degenerate quasilinear elliptic systems. Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov, 7:184–192, 1968.
V. Volterra. Alcune osservazioni sopra proprietà atte ad individuare una funzione. Rend. Accad. d. Lincei Roma, 5:263–266, 1909.
N. Wiener. Certain notions in potential theory. J. Math. Phys., 3:24–51, 1924.
N. Wiener. Note on a paper of O. Perron. J. Math. Phys., 4:21–32, 1925.
D. Williams. Probability with martingales. Cambridge Mathematical Textbooks. Cambridge University Press, 1991.

Index

A
Almost surely, 165
Annulus walk, 117
Ascoli–Arzelà's theorem, 210
Average, 14

B
Ball walk, 11
Basic convergence theorem, 77, 88, 144
Binary rationals, 186
Brownian motion, 185

C
Capacity, 224
Carathéodory's extension theorem, 165
Comparison principle, 220
Concatenating strategies, 113
Conditional expectation, 172

D
Doob's regularity condition, 65
Doob's theorem, 178, 179
Dubins' theorem, 181
Dynamic programming principle, 47, 72, 142

E
Expectation, 166
Exterior cone condition, 25, 120
Exterior corkscrew condition, 154
Exterior curve condition, 161

F
Filtration, 176
Fubini–Tonelli theorem, 168
Fundamental theorem of calculus of variations, 205

G
Game-regularity, 109, 152

H
Hadamard's example, 227
Harmonic extension, 199
Harmonic function, 210
Harnack's inequality, 213, 218
Hölder's inequality, 205

I
Independent random variables, 174, 175
Infinity-Laplacian, 39

J
Jensen's inequality, 173

K
Kellogg's property, 226

L
Lebesgue spaces, 204
Lower-semicontinuity, 209

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020 M. Lewicka, A Course on Tug-of-War Games with Random Noise, Universitext, https://doi.org/10.1007/978-3-030-46209-3


M
Markov's property, 194
Martingale, 176
Maximum principle, 213, 220
Mean value expansion, 40, 43, 137, 140, 141
Mean value property, 211
Measure, 164
Meyers–Serrin's theorem, 207

N
Normal distribution, 186

P
Perron's solution, 28, 226
p-harmonic function, 221
p-Laplacian, 39, 215
Poincaré's inequality, 207
Poisson's kernel, 213
p-regularity, 224
Probability space, 163
Product measure, 168
Prohorov's theorem, 168

R
Radon–Nikodym's theorem, 172
Random variable, 165
Reflection coupling, 67, 68
Rellich–Kondrachov's theorem, 208
Resolutivity, 28, 226

S
Sigma-algebra, 163
Sobolev's embedding theorem, 207
Sobolev spaces, 205
Sphere walk, 16
Spherical measure, 167
Stopping time
  continuous, 195
  discrete, 176
Strong Markov's property, 196
Submartingale, 176
Supermartingale, 176

T
Test functions, 204
Tug-of-War with noise
  basic, 51
  boundary aware, 82
  mixed, 146

U
Uniform integrability, 179
Upcrossing, 180
Upper-semicontinuity, 209

V
Viscosity solution, 102, 229

W
Walk-regularity, 20
Weak solution, 217
Wiener measure, 192
Wiener's regularity criterion, 225